# Chat Message Moderation with Ollama

PG-13 content filter for chat messages using a local LLM.

## Current Status

**Model:** gemma2:2b (1.6GB, Google's safety-focused model) ✅ **RECOMMENDED**
**Performance:** 100% accuracy on explicit content, ~50% false-positive rate on edge cases
**Output:** Simple t/f decision
**Speed:** ~1-2s per message (slower but reliable)

### Why gemma2:2b?
- ✅ **NEVER misses explicit content** (0% false negatives)
- ✅ Correctly blocks: profanity, sexual content, violence, drugs, hate speech
- ✅ Allows: normal chat, URLs, questions
- ⚠️ May block some borderline content (false positives on foreign-language text and slang)
- 🎯 **Better to be safe than sorry** for content moderation

### Tested Alternatives

| Model | Size | Speed | Explicit Content | False Positives |
|-------|------|-------|------------------|-----------------|
| **gemma2:2b** ✅ | 1.6GB | ~1.5s | 100% blocked | ~50% on edge cases |
| qwen2.5:0.5b ❌ | 397MB | ~300ms | **0% blocked** | Low |
| qwen2.5:1.5b ❌ | 986MB | ~2.5s | Blocks everything | 100% |

## Setup

1. **Start Ollama daemon:**
   ```fish
   ac-llama start
   ```

2. **Check status:**
   ```fish
   ac-llama status
   ```

3. **View logs:**
   ```fish
   ac-llama logs
   ```

## Usage

### Test MongoDB Messages

Test chat messages from the `aesthetic.chat-system` collection:

```fish
# Test 20 messages
node test-mongodb-messages.mjs 20

# Test 50 messages
node test-mongodb-messages.mjs 50

# Continuous testing (all messages)
node test-mongodb-messages.mjs 100 --continuous
```
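
The count argument and `--continuous` flag above suggest argument handling along these lines. This is a sketch only; `parseArgs` and the default batch size are assumptions, not the script's actual internals:

```javascript
// Hypothetical sketch of the CLI handling test-mongodb-messages.mjs implies:
// a message count followed by an optional --continuous flag.
function parseArgs(argv) {
  const count = parseInt(argv[0], 10) || 20; // default batch size (assumed)
  const continuous = argv.includes("--continuous");
  return { count, continuous };
}

// e.g. parseArgs(process.argv.slice(2))
```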

### Direct HTTP Testing

Test single messages via HTTP (requires the Caddy server to be running):

```fish
./start-server.fish
curl -X POST http://localhost:8080/censor \
  -H "Content-Type: application/json" \
  -d '{"message": "hello world"}'
```
71
## How It Works

(Note: the figures below describe the earlier qwen2.5:0.5b configuration; the currently recommended model is gemma2:2b, per Current Status above.)

- **Model:** qwen2.5:0.5b (397MB, fast inference)
- **Prompt:** Simple t/f rating for PG-13 appropriateness
- **Blocks:** Sexual content, bodily functions, profanity, violence, drugs, hate speech
- **Allows:** URLs, links, and normal conversation (explicitly allowed in the prompt)
- **Response:** Single letter: `t` (allow) or `f` (block)
- **Speed:** ~300-400ms per message
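
A minimal sketch of this flow against Ollama's HTTP generate endpoint (default port 11434). The prompt wording and the `parseDecision` helper are illustrative assumptions, not the project's exact code:

```javascript
// Normalize the model's raw text to a single decision; fail closed ("f")
// when neither letter appears in the output.
function parseDecision(raw) {
  const match = raw.toLowerCase().match(/[tf]/);
  return match ? match[0] : "f";
}

// Ask the local model for a t/f rating of one chat message.
async function moderate(message, model = "qwen2.5:0.5b") {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt:
        "Rate this chat message for PG-13 appropriateness. " +
        "URLs, links, and normal conversation are allowed. " +
        "Reply with a single letter, t (allow) or f (block):\n" +
        message,
      stream: false, // get one JSON object instead of a token stream
    }),
  });
  const { response } = await res.json();
  return parseDecision(response);
}
```

Failing closed on malformed output keeps the filter on the safe side, matching the "better safe than sorry" stance above.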

## Performance

From a 20-message test (qwen2.5:0.5b):
- ✅ 95% pass rate (19/20)
- ⚠️ Known issue: occasionally blocks URLs despite the prompt instruction
- ⏱️ Average latency: 350ms
- 🚀 Fast enough for real-time moderation
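
The pass rate and average latency above can be derived from a batch of results like so. A sketch only; the `decision` and `latencyMs` field names are assumptions:

```javascript
// Summarize a batch of moderation results into the stats reported above.
function summarize(results) {
  const passed = results.filter((r) => r.decision === "t").length;
  const totalMs = results.reduce((sum, r) => sum + r.latencyMs, 0);
  return {
    passRate: passed / results.length,
    avgLatencyMs: totalMs / results.length,
  };
}
```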

## 🌐 Web Dashboard

**Live at: http://localhost:8080**

Interactive web interface for testing messages in real time, with:
- ✅/🚫 Visual pass/fail indicators
- 📊 Live statistics (total tests, pass/fail counts, avg response time)
- 📝 Example messages to try
- 🤖 AI reasoning display
- ⚡ Real-time response times

### Starting the Dashboard

```fish
# Make sure Ollama is running
ac-llama start

# Start the API server (in one terminal)
cd /workspaces/aesthetic-computer/censor
node api-server.mjs &

# Start Caddy web server (in another terminal or background)
caddy run --config Caddyfile &
```

Then open http://localhost:8080 in your browser!

## 🔌 API Endpoint

**POST http://localhost:3000/api/filter**

```json
{
  "message": "Hello, how are you?"
}
```

Response:
```json
{
  "decision": "t",
  "sentiment": "t",
  "responseTime": 1.32
}
```

- `decision`: `"t"` (allow) or `"f"` (block)
- `sentiment`: the AI's raw response, including any reasoning
- `responseTime`: processing time in seconds
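
A client call against this endpoint might look like the following sketch; `isAllowed` and `checkMessage` are hypothetical helpers, not part of the API:

```javascript
// Map the API's response body to a boolean the caller can branch on.
function isAllowed(body) {
  return body.decision === "t";
}

// Example client call (assumes api-server.mjs is listening on port 3000).
async function checkMessage(message) {
  const res = await fetch("http://localhost:3000/api/filter", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  const body = await res.json();
  console.log(`${isAllowed(body) ? "allowed" : "blocked"} in ${body.responseTime}s`);
  return isAllowed(body);
}
```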

## Model Comparison

Tested models (note: the Current Status section above now recommends gemma2:2b):
- **qwen2.5:0.5b** (current): Fast, 95% accurate, no reasoning ✅
- **qwen2.5:1.5b**: Slower (2-3s), blocks everything ❌
- **qwen3:0.6b**: Unreliable output format ❌

For sentiment analysis: a 3b+ model is needed (slower, not yet tested).

## ac-llama Commands

Manage the Ollama daemon from anywhere:

```fish
ac-llama start    # Start daemon
ac-llama stop     # Stop daemon
ac-llama restart  # Restart daemon
ac-llama status   # Check status & list models
ac-llama logs     # View recent logs
```

Added to `.devcontainer/config.fish` for automatic availability in the dev environment.

## Files

- `test-mongodb-messages.mjs` - MongoDB integration test with color output
- `Caddyfile` - HTTP server config (optional)
- `start-server.fish` - Caddy startup script
- `test-filter.fish` - Manual testing helper

## MongoDB Connection

Database: `aesthetic.chat-system`
Connection string: read from the environment, with a default fallback in the script.
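
Resolution of the connection string might look like this sketch; the env var name and local default here are assumptions, so check the script for the actual values:

```javascript
// Hypothetical sketch: prefer an env var, fall back to a local default.
function resolveMongoUri(env = process.env) {
  return env.MONGODB_CONNECTION_STRING ?? "mongodb://localhost:27017";
}
```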