commits
Rewrite CLI to launch immediately: localcode goose, localcode qwen3-coder,
localcode claude gpt-oss. Parser auto-detects TUI and model IDs from
registries. Add qwen3-coder, glm-flash, gpt-oss models. Add cline,
droid, openclaw TUIs via ollama launch. Use ollamaLaunch field to
delegate to ollama launch for supported TUIs. Auto-pull models before
launch. Set GOOSE_MODEL env var for goose. Default model now qwen3-coder.
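
A minimal sketch of the auto-detection described above, assuming the registries are simple lookup sets and that the defaults are a `claude` TUI and the `qwen3-coder` model (the real registry contents and default TUI are assumptions): each positional argument is matched against the TUI registry, then the model registry, so `localcode goose`, `localcode qwen3-coder`, and `localcode claude gpt-oss` all resolve without flags.

```python
# Hypothetical registries; the real project's entries are not shown here.
TUIS = {"goose", "claude", "cline", "droid", "openclaw"}
MODELS = {"qwen3-coder", "glm-flash", "gpt-oss"}
DEFAULT_TUI = "claude"        # assumption
DEFAULT_MODEL = "qwen3-coder"  # per the commit message

def parse(args):
    """Resolve positional args to a (tui, model) pair via the registries."""
    tui, model = DEFAULT_TUI, DEFAULT_MODEL
    for arg in args:
        if arg in TUIS:
            tui = arg
        elif arg in MODELS:
            model = arg
        else:
            raise SystemExit(f"unknown TUI or model: {arg}")
    return tui, model
```

With this shape, argument order never matters and either half can be omitted.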

Ollama supports many coding agents. Add all viable ones with per-TUI
env vars, config templates, and launch args. Auto-install Ollama if
missing when running ensureOllama.
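
A sketch of the ensureOllama behavior, assuming it shells out to Ollama's official install one-liner when the binary is absent (the injection parameters exist only to make the sketch testable; the real function's shape is not shown):

```python
import shutil
import subprocess

def ensure_ollama(which=shutil.which, run=subprocess.run):
    """Install Ollama only when the binary is missing from PATH."""
    if which("ollama") is None:
        # Assumption: the real code uses the official install script.
        run("curl -fsSL https://ollama.com/install.sh | sh",
            shell=True, check=True)
        return "installed"
    return "present"
```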

Single Ollama server on port 11434 replaces two llama-server processes,
the tool-call rewriting proxy, manual GGUF downloads, and bash server
launcher scripts. Models managed via ollama pull with tags like
qwen2.5-coder:32b. Eliminates grammar-constrained decoding crashes.
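
The auto-pull step above can be sketched as a check against `ollama list` followed by `ollama pull` for a missing tag; `ollama list` and `ollama pull` are real CLI commands, but the output parsing here (tag in the first column, header line skipped) is an assumption about a format that could change:

```python
import subprocess

def needs_pull(list_output: str, tag: str) -> bool:
    """True if `tag` is absent from `ollama list` output (header skipped)."""
    names = [line.split()[0]
             for line in list_output.splitlines()[1:] if line.strip()]
    return tag not in names

def ensure_model(tag: str) -> None:
    """Pull a model like qwen2.5-coder:32b only if not already local."""
    out = subprocess.run(["ollama", "list"], capture_output=True,
                         text=True, check=True).stdout
    if needs_pull(out, tag):
        subprocess.run(["ollama", "pull", tag], check=True)
```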