# ADHD Support Agent
A Telegram bot that helps with ADHD task management, brain dumps, and gentle accountability. Built with Bun, Letta, and Claude.
## Architecture

```
Telegram Bot (Bun)
  │
  ├──► Haiku 4.5 Detection ──► LiteLLM ──► auth-adapter ──► anthropic-proxy ──► Anthropic API
  │    (overwhelm, brain dump, self-bullying)
  │
  ▼
Letta (port 8283)            - AI agent framework
  ↓ OpenAI-compatible API
LiteLLM (port 4000)          - API translation layer
  ↓
auth-adapter (port 4002)     - Bearer → x-api-key header translation
  ↓
anthropic-proxy (port 4001)  - OAuth session management
  ↓
Anthropic API (Claude Opus 4.5)
```
- Bun: Runtime and HTTP server for Telegram bot
- Haiku 4.5 Detection: Fast classification of user messages for overwhelm, brain dumps, and self-bullying
- Letta: AI agent framework with persistent memory (uses Opus 4.5)
- LiteLLM: Translates OpenAI-compatible requests to Anthropic format
- auth-adapter: Translates Bearer tokens to x-api-key headers for anthropic-proxy
- anthropic-proxy: OAuth proxy for Anthropic API access
- SQLite: Local storage for items, wins, and context
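As a rough illustration of the flow above, the fast Haiku classification step can be thought of as producing a label that gates how the message is handled before it reaches the Letta agent. The labels come from the description above, but the function name and routing actions below are illustrative assumptions, not the repo's actual code:

```typescript
// Sketch only: routing a message based on the Haiku detection label.
// The Detection labels mirror the architecture description; the action
// strings are hypothetical stand-ins for the real handlers.
type Detection = "overwhelm" | "brain_dump" | "self_bullying" | "none";

function routeMessage(detection: Detection): string {
  switch (detection) {
    case "overwhelm":
      return "offer_breakdown"; // gentle, one-step-at-a-time reply
    case "brain_dump":
      return "parse_brain_dump"; // capture items into SQLite
    case "self_bullying":
      return "reframe"; // compassionate reframe before continuing
    default:
      return "forward_to_agent"; // normal Letta conversation turn
  }
}

console.log(routeMessage("brain_dump")); // → "parse_brain_dump"
```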
## Prerequisites
- Bun v1.1+
- Docker and Docker Compose
- Telegram account (for bot setup)
- OpenAI API key (for embeddings)
- Anthropic account (for Claude access via OAuth)
## Development Setup

### 1. Install dependencies

```bash
bun install
```
### 2. Configure environment

```bash
cp .env.example .env
cp prompts/SYSTEM_PROMPT.md.example prompts/SYSTEM_PROMPT.md
```
Edit `.env` with your values (see the sections below for how to obtain each one).

Edit `prompts/SYSTEM_PROMPT.md` to customize the agent's personality and behavior. The `{{TOOLS}}` placeholder is automatically replaced with the list of available tools at runtime.
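The placeholder substitution can be pictured as a simple template replace. This is a hedged sketch of the idea, not the repo's actual `prompts.ts`; the function name and template text are illustrative:

```typescript
// Sketch: replace {{TOOLS}} in the system prompt template with a bullet
// list of registered tool names (names here are examples only).
function renderSystemPrompt(template: string, toolNames: string[]): string {
  const toolList = toolNames.map((name) => `- ${name}`).join("\n");
  return template.replace("{{TOOLS}}", toolList);
}

const template = "You are a helpful agent.\nAvailable tools:\n{{TOOLS}}";
const rendered = renderSystemPrompt(template, ["save_item", "record_tiny_win"]);
console.log(rendered);
```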
### 3. Start Docker services

Start the anthropic-proxy and Letta services:

```bash
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
```

This starts:

- `anthropic-proxy` on port 4001 - OAuth proxy for the Anthropic API
- `litellm` on port 4000 - OpenAI-compatible API proxy
- `letta` on port 8283 - AI agent framework

The dev override (`docker-compose.dev.yml`) excludes the app service so you can run it locally.
### 4. Complete Anthropic OAuth

The anthropic-proxy requires OAuth setup for Anthropic API access:

1. Open http://localhost:4001/auth/device in your browser
2. Click "Start Authorization" to generate an auth URL
3. Open the URL and authorize in Claude
4. Paste the authorization code back into the form
5. Copy the session ID shown after success
6. Add it to your `.env`:

```bash
ANTHROPIC_PROXY_SESSION_ID=your_session_id_here
```
### 5. Verify Letta setup

Verify that Letta can access Claude models via LiteLLM:

```bash
bun run setup:letta
```

This checks that the proxy chain is working and Claude models are available.
### 6. Run the app

```bash
bun run dev
```

Or without hot reload:

```bash
bun run start
```
## Environment Variables

### Required

| Variable | Description | How to get |
|---|---|---|
| `TELEGRAM_BOT_TOKEN` | Bot token | Create a bot via @BotFather |
| `LETTA_BASE_URL` | Letta API URL | `http://localhost:8283` for dev |
| `ANTHROPIC_PROXY_URL` | Proxy URL | `http://localhost:4001/v1` for dev |
| `ANTHROPIC_PROXY_SESSION_SECRET` | Proxy secret | Generate with `openssl rand -hex 32` |
| `OPENAI_API_KEY` | OpenAI key (for embeddings) | platform.openai.com |
### Optional

| Variable | Default | Description |
|---|---|---|
| `PORT` | `3000` | Server port |
| `TELEGRAM_WEBHOOK_URL` | (empty) | Webhook URL for production |
| `TELEGRAM_WEBHOOK_SECRET_TOKEN` | (empty) | Webhook verification secret |
| `ANTHROPIC_PROXY_SESSION_ID` | (empty) | Filled in after the OAuth flow |
| `DB_PATH` | `./data/assistant.db` | SQLite database path |
| `HAIKU_MODEL` | `claude-haiku-4-5-20251001` | Model for fast detection/classification |
## Telegram Bot Setup

1. Message @BotFather on Telegram
2. Send `/newbot` and follow the prompts
3. Copy the bot token to `TELEGRAM_BOT_TOKEN` in `.env`
4. (Optional) Set bot commands via `/setcommands`:

```
start - Start the bot
help - Show help
reset - Reset the agent (clear memory)
dump - Brain dump mode
focus - Set current focus
wins - Show recent wins
```
### Webhook vs Polling
**Development (polling):** Leave `TELEGRAM_WEBHOOK_URL` empty. The bot will poll for updates.
**Production (webhook):** Set both:

```bash
TELEGRAM_WEBHOOK_URL=https://your-domain.com/webhook
TELEGRAM_WEBHOOK_SECRET_TOKEN=your_secret_here
```
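The secret token matters because Telegram echoes it back in the `X-Telegram-Bot-Api-Secret-Token` header on every webhook call, so the server can reject forged requests. Here's a minimal sketch of that check; the function name and header handling are illustrative, not the repo's actual handler:

```typescript
// Sketch: verify the Telegram webhook secret header before processing
// an update. Header names are case-insensitive; this sketch assumes they
// arrive lowercased (as most HTTP frameworks normalize them).
function isAuthenticWebhook(
  headers: Record<string, string | undefined>,
  expectedSecret: string,
): boolean {
  return headers["x-telegram-bot-api-secret-token"] === expectedSecret;
}

console.log(isAuthenticWebhook({ "x-telegram-bot-api-secret-token": "s3cret" }, "s3cret")); // true
console.log(isAuthenticWebhook({}, "s3cret")); // false
```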
## Docker Services
### View logs

```bash
# All services
docker compose logs -f

# Specific service
docker compose logs -f letta
docker compose logs -f anthropic-proxy
```

### Restart services

```bash
docker compose restart
```

### Stop services

```bash
docker compose down
```

### Rebuild (after Dockerfile changes)

```bash
docker compose build --no-cache anthropic-proxy
docker compose up -d
```
## Deployment

Deploy to Hetzner Cloud with a single command:

```bash
cd infra
cp secrets.env.example secrets.env
nano secrets.env  # Fill in your values
./deploy.sh
```

See DEPLOY.md for the full deployment guide (automated and manual options).
## Testing

```bash
# Run all tests
bun test

# Run a specific test file
bun test src/config.test.ts

# Watch mode
bun test --watch
```
## Project Structure

```
├── src/
│   ├── index.ts                  # Main server entry point
│   ├── bot.ts                    # Telegram bot handlers
│   ├── config.ts                 # Environment configuration
│   ├── detect.ts                 # Haiku-based overwhelm/brain dump detection
│   ├── health.ts                 # Health check endpoints
│   ├── letta.ts                  # Letta client bootstrap
│   ├── prompts.ts                # System prompt loader with tool injection
│   ├── db/
│   │   ├── index.ts              # Database initialization
│   │   ├── schema.ts             # Drizzle ORM schema
│   │   └── migrations/           # SQL migrations
│   └── tools/
│       ├── index.ts              # Barrel exports
│       ├── dispatcher.ts         # Tool registry and Letta integration
│       ├── capture.ts            # parse_brain_dump tool
│       ├── breakdown.ts          # break_down_task tool
│       ├── items.ts              # save_item, update_item tools
│       ├── context.ts            # get_open_items tool
│       └── wins.ts               # Tiny wins tools (record, delete, query)
├── prompts/
│   ├── SYSTEM_PROMPT.md          # Your customized system prompt (gitignored)
│   └── SYSTEM_PROMPT.md.example  # Template to copy
├── scripts/
│   ├── setup-letta-provider.ts   # Setup verification
│   └── cleanup-agents.ts         # Delete stale Letta agents
├── drizzle.config.ts             # Drizzle Kit configuration
├── litellm-config.yaml           # LiteLLM model configuration
├── docker-compose.yml
├── docker-compose.dev.yml
├── Dockerfile.anthropic-proxy
└── .env.example
```
## Features

### Tiny Wins
The bot tracks small accomplishments to build momentum and combat ADHD-related feelings of underachievement. Every win counts!
Available tools:

| Tool | Description |
|---|---|
| `record_tiny_win` | Record an accomplishment with category and magnitude |
| `delete_tiny_win` | Remove a win recorded by mistake |
| `get_wins_by_day` | Get wins for today, yesterday, or a specific date |
| `get_wins_summary` | Get a summary with streaks and category breakdown |
Categories: `task`, `habit`, `self_care`, `social`, `work`, `creative`, `other`

Magnitudes: `tiny` (just did it), `small` (took effort), `medium` (meaningful), `big` (milestone)
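The categories and magnitudes above form a small closed vocabulary, which might be modeled like this. The type and field names are an illustrative sketch, not the repo's actual Drizzle schema:

```typescript
// Sketch: a possible shape for a tiny-win record. Only the category and
// magnitude values are taken from the README; everything else is assumed.
type WinCategory =
  | "task" | "habit" | "self_care" | "social" | "work" | "creative" | "other";
type WinMagnitude = "tiny" | "small" | "medium" | "big";

interface TinyWin {
  id: number;
  description: string;
  category: WinCategory;
  magnitude: WinMagnitude;
  recordedAt: string; // ISO timestamp
}

const win: TinyWin = {
  id: 1,
  description: "Drank water",
  category: "self_care",
  magnitude: "tiny",
  recordedAt: new Date().toISOString(),
};
console.log(win.category); // → "self_care"
```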
Example interactions:
- "I drank water today" → Records a tiny self_care win
- "What did I accomplish yesterday?" → Shows yesterday's wins with timestamps
- "Delete that last win, I made a mistake" → Removes the incorrect entry
## Milestones
- M0: Infrastructure (Docker, config, health, Letta client)
- M1: E2E Chat (Telegram bot, basic message flow)
- M2: Tools + Items (database, capture, breakdown)
- M3: Tone + Detection (Haiku 4.5 for overwhelm, brain dump, self-bullying)
- M4: Tiny Wins (win tracking, daily breakdown, delete)
- M5: Threading (focus, deviations)
- M6: Hardening (idempotency, retries, tests)
## Troubleshooting

### "Missing required environment variable"

Make sure you've copied `.env.example` to `.env` and filled in all required values.
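This error pattern typically comes from a fail-fast check at startup. A minimal sketch of what such a check might look like, assuming a `requireEnv` helper that is illustrative rather than the repo's actual `config.ts`:

```typescript
// Sketch: fail fast on a missing required variable so misconfiguration
// surfaces at startup instead of mid-conversation.
function requireEnv(
  name: string,
  env: Record<string, string | undefined>,
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage sketch with a fake environment:
const fakeEnv = { TELEGRAM_BOT_TOKEN: "123:abc" };
console.log(requireEnv("TELEGRAM_BOT_TOKEN", fakeEnv)); // → "123:abc"
```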
### Letta health check failing

Check whether Letta is running:

```bash
curl http://localhost:8283/v1/health
```

If it doesn't respond, check the logs:

```bash
docker compose logs letta
```
### Anthropic proxy not working

1. Verify the proxy is running:

   ```bash
   curl http://localhost:4001/health
   ```

2. Check that OAuth is complete (the session ID should be set in `.env`)
3. Check the logs:

   ```bash
   docker compose logs anthropic-proxy
   ```
### LiteLLM not routing to Claude

1. Verify LiteLLM is running:

   ```bash
   curl http://localhost:4000/health
   ```

2. Check available models:

   ```bash
   curl http://localhost:4000/models
   ```

3. Test a direct call:

   ```bash
   curl -X POST http://localhost:4000/chat/completions \
     -H "Content-Type: application/json" \
     -d '{"model":"claude-opus-4-5-20251101","messages":[{"role":"user","content":"Hi"}],"max_tokens":10}'
   ```

4. Check the logs:

   ```bash
   docker compose logs litellm
   ```
### Can't connect to Telegram

- Verify the bot token is correct
- For webhooks, ensure the URL is publicly accessible with valid HTTPS
- If you see a 409 "terminated by other getUpdates request" error, another bot instance may be running - kill it and restart

Note: Hot reload (`bun run dev`) handles this automatically by stopping the bot before restarting.
### Tools not working / "missing required parameter"

This usually means Letta doesn't know the tool's parameter schema. Check:

1. Verify the tool schema in Letta:

   ```bash
   curl http://localhost:8283/v1/tools/<tool-id> | jq '.json_schema'
   ```

   If `properties` is an empty `{}`, the schema wasn't registered correctly.

2. Restart the app to re-register tools:

   ```bash
   # Kill and restart
   bun run dev
   ```

   Watch for "Updated tool 'xxx'" messages in the logs.

3. Check that the tool webhook is receiving calls: look for `🔧 TOOL WEBHOOK RECEIVED:` in the console output.

4. Delete and recreate the agent if tools were registered after agent creation:

   ```bash
   # List agents
   curl http://localhost:8283/v1/agents | jq '.[].id'

   # Delete the problematic agent
   curl -X DELETE http://localhost:8283/v1/agents/<agent-id>
   ```

   The bot will create a new agent on the next message.

See AGENTS.md for detailed Letta tool registration requirements.