# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
`localcode` is a single CLI for managing a fully offline local AI coding environment on macOS (Apple Silicon). It uses Ollama to serve Qwen 2.5 Coder models via an OpenAI-compatible API, with switchable terminal coding agents (Aider, OpenCode, Pi).
## Commands
- `npm run dev -- <args>` — Run via tsx (development)
- `npm run build` — Compile TypeScript to `dist/`
- `npx tsc --noEmit` — Type-check without emitting
After `localcode setup`, the `localcode` binary is available in `~/.local/bin/`.
## CLI
```
localcode                         Launch active TUI in current directory
localcode status                  Show current config + server health
localcode start                   Start Ollama + pull models
localcode stop                    Stop Ollama
localcode model                   List available models
localcode set model <id>          Switch the chat model
localcode set autocomplete <id>   Switch the autocomplete model
localcode tui                     List available TUIs
localcode set tui <id>            Switch the active TUI
localcode bench                   Benchmark the running chat model
localcode bench history           Show past benchmark results
localcode pipe "prompt"           Pipe stdin through the model
localcode setup                   Full install
```
## Architecture
```
src/
  main.ts            — CLI dispatcher (switch on process.argv[2])
  config.ts          — Ollama URL/port constants, TUI config paths
  log.ts             — log/warn/err with ANSI colors
  util.ts            — Shell exec helpers, file writers
  runtime-config.ts  — Read/write ~/.config/localcode/config.json
  registry/
    models.ts        — ModelDef interface + MODELS array (Ollama tags)
    tuis.ts          — TuiDef interface + TUIS array
  commands/
    run.ts           — Default action: ensure Ollama, init git, exec TUI
    status.ts        — Show config + Ollama health
    server.ts        — Start/stop Ollama, pull models
    setup.ts         — Full install pipeline
    models.ts        — List/switch models, auto-pull + regen configs
    tuis.ts          — List/switch TUIs, auto-install + regen configs
    bench.ts         — Benchmark against running Ollama
    pipe.ts          — Pipe stdin through the model
  steps/             — Individual setup phases (preflight, homebrew, ollama, etc.)
  templates/
    scripts.ts       — localcode wrapper script
    aider.ts         — Aider config template
    opencode.ts      — OpenCode config template
    pi.ts            — Pi models.json + settings.json templates
```
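The dispatch pattern in `main.ts` (switching on `process.argv[2]`) can be pictured with a minimal sketch. The return values and subcommand routing below are illustrative, not the repo's real handler modules:

```typescript
// Sketch of a process.argv[2] dispatcher. Commands/returns are stand-ins
// for the real command modules in src/commands/.
function dispatch(argv: string[]): string {
  const cmd = argv[2];
  const sub = argv[3];
  switch (cmd) {
    case undefined:
      return "run"; // no args: launch the active TUI in the current directory
    case "status":
      return "status";
    case "set":
      // `set model`, `set autocomplete`, and `set tui` share one entry point
      return `set:${sub}`;
    case "bench":
      return sub === "history" ? "bench:history" : "bench";
    default:
      return `unknown:${cmd}`;
  }
}
```

The appeal of this pattern is that the whole CLI surface is visible in one `switch`, with per-command logic pushed into the modules under `commands/`.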
## Key patterns
**Ollama backend:** A single Ollama server on port 11434 serves all models. Models are identified by Ollama tags (e.g., `qwen2.5-coder:32b`). There are no separate chat/autocomplete server processes; Ollama loads and unloads models on demand.
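Because every TUI talks to the same OpenAI-compatible endpoint, a request to Ollama is just a standard chat completion. A sketch of the request shape, assuming the default port from this doc (`buildChatRequest` is a hypothetical helper, not part of the repo):

```typescript
// Builds the URL + fetch init for Ollama's OpenAI-compatible endpoint.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    url: "http://localhost:11434/v1/chat/completions",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages, stream: false }),
    },
  };
}

// Usage (requires a running Ollama server):
// const { url, init } = buildChatRequest("qwen2.5-coder:32b", [
//   { role: "user", content: "Write a TypeScript hello world." },
// ]);
// const res = await fetch(url, init);
```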
**Runtime config** (`~/.config/localcode/config.json`): Stores the active `chatModel`, `autocompleteModel`, and `tui` IDs. Read by `runtime-config.ts`, which falls back to defaults for any missing fields.
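The defaults fallback can be sketched as a merge of the on-disk config over a defaults object. The field names come from this doc; the default values below are assumptions for illustration:

```typescript
// Sketch of the defaults-fallback read in runtime-config.ts.
interface RuntimeConfig {
  chatModel: string;
  autocompleteModel: string;
  tui: string;
}

// Assumed defaults — the real values live in the repo.
const DEFAULTS: RuntimeConfig = {
  chatModel: "qwen2.5-coder:32b",
  autocompleteModel: "qwen2.5-coder:1.5b",
  tui: "aider",
};

// Whatever subset exists on disk wins; missing fields fall back to defaults.
function withDefaults(onDisk: Partial<RuntimeConfig>): RuntimeConfig {
  return { ...DEFAULTS, ...onDisk };
}
```

Spreading a `Partial<RuntimeConfig>` over the defaults means a hand-edited or older config file never breaks the CLI; unknown-but-missing fields simply resolve to defaults.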
**Registries:** `registry/models.ts` and `registry/tuis.ts` define the available options as typed arrays. Add new models or TUIs by appending to these arrays. Each model has an `ollamaTag` field holding its Ollama model identifier.
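The registry pattern is an interface plus a plain array. `ModelDef`, `MODELS`, and `ollamaTag` are named in this doc; the other fields and the concrete entries below are illustrative:

```typescript
// Sketch of the registry pattern from registry/models.ts.
interface ModelDef {
  id: string;        // localcode-facing id (assumed field)
  ollamaTag: string; // Ollama model identifier
  name: string;      // human-readable label (assumed field)
}

const MODELS: ModelDef[] = [
  { id: "qwen-32b", ollamaTag: "qwen2.5-coder:32b", name: "Qwen 2.5 Coder 32B" },
  // Adding a model is just appending another entry:
  { id: "qwen-7b", ollamaTag: "qwen2.5-coder:7b", name: "Qwen 2.5 Coder 7B" },
];

function findModel(id: string): ModelDef | undefined {
  return MODELS.find((m) => m.id === id);
}
```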
**Config regeneration:** When the model or TUI is switched, the TUI configs are regenerated automatically.
**Generated scripts:** Only one bash script is generated in `~/.local/bin/`: `localcode`, a thin wrapper that calls `node dist/main.js`. All other functionality lives in the TypeScript commands.
**Benchmark:** Hits Ollama's `/v1/chat/completions` with 3 hardcoded prompts and measures wall-clock time plus token counts. Results are saved to `~/.config/localcode/benchmarks.json`.
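The benchmark arithmetic reduces to wall-clocking each request and dividing tokens by elapsed seconds. A sketch under that assumption (`runPrompt` stands in for the real HTTP call):

```typescript
// Derive tokens/sec from a token count and elapsed milliseconds.
function tokensPerSecond(tokens: number, elapsedMs: number): number {
  return tokens / (elapsedMs / 1000);
}

// Wall-clock one prompt; runPrompt is a stand-in for the completion request.
async function benchOne(
  runPrompt: () => Promise<{ tokens: number }>
): Promise<{ elapsedMs: number; tps: number }> {
  const start = performance.now();
  const { tokens } = await runPrompt();
  const elapsedMs = performance.now() - start;
  return { elapsedMs, tps: tokensPerSecond(tokens, elapsedMs) };
}
```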
## Key paths on the user's system
- `~/.local/bin/localcode` — CLI wrapper script
- `~/.config/localcode/config.json` — Active model/TUI selection
- `~/.config/localcode/benchmarks.json` — Benchmark history
- `~/.aider/` — Aider config
- `~/.config/opencode/opencode.json` — OpenCode config
- `~/.pi/agent/models.json` — Pi config
- `~/.pi/agent/settings.json` — Pi settings (packages)
- Ollama port 11434
## Important: after changing TypeScript
The `localcode` wrapper in `~/.local/bin/` calls `node dist/main.js`. After modifying TypeScript source, run `npm run build` to recompile; otherwise the wrapper will run stale code.
## Dead files to clean up
- `src/commands/proxy.ts` — Was the llama.cpp tool-call rewriting proxy, now unused (Ollama handles tool calling natively)
- `templates/qwen-tool-call.jinja` — Was the Qwen tool-use Jinja template for llama.cpp, now unused