solstone-think#
Post-processing utilities for clustering and summarising captured data. The tools leverage the Gemini API to analyse transcriptions and screenshots. All commands work with a journal directory that holds daily folders in YYYYMMDD format.
Installation#
make install
All dependencies are listed in pyproject.toml.
Usage#
The package exposes several commands:
- `sol call transcripts read` groups audio and screen transcripts into report sections. Use `--start` and `--length` to limit the report to a specific time range. See `sol call transcripts --help` for additional commands.
- `sol dream` runs generators and agents for a single day via Cortex.
- `sol agents` is the unified CLI for tool agents and generators (spawned by Cortex, NDJSON protocol).
- `sol supervisor` monitors observation heartbeats. Use `--no-observers` to disable local capture (sense still runs for remote uploads and imports).
- `sol cortex` starts a Callosum-based service for managing AI agent instances and generators.
- `sol talent` lists available agents and generators with their configuration. Use `sol talent show <name>` to see details, and `sol talent show <name> --prompt` to see the fully composed prompt that would be sent to the LLM.
```
sol call transcripts read YYYYMMDD [--start HHMMSS --length MINUTES]
sol dream [--day YYYYMMDD] [--segment HHMMSS_LEN] [--stream NAME] [--refresh] [--flush]
sol supervisor [--no-observers]
sol cortex [--host HOST] [--port PORT] [--path PATH]
sol talent list [--schedule daily|segment] [--json]
sol talent show <name> [--prompt] [--day YYYYMMDD] [--segment HHMMSS_LEN] [--full]
```

Use `--refresh` to overwrite existing files, and `-v` for verbose logs.
Set `GOOGLE_API_KEY` before running any command that contacts Gemini. It can also be provided in a `.env` file, which is loaded automatically by most commands.
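For a standalone script outside the CLI, the same `.env` behaviour can be reproduced with a minimal stdlib parser. This is a sketch only; the loader the commands actually use may differ in details such as quoting rules:

```python
import os
from pathlib import Path

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments
    are ignored. Variables already set in the environment win."""
    env = Path(path)
    if not env.exists():
        return
    for line in env.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

load_env_file()
print("GOOGLE_API_KEY set:", "GOOGLE_API_KEY" in os.environ)
```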
Service Discovery#
Agents invoke tools through `sol call` shell commands:

```
sol call <module> <command> [args...]
```

All tool access is command-based via the `sol call` CLI framework.
Automating daily processing#
The sol dream command can be triggered by a systemd timer. Below is a
minimal service and timer that process yesterday's folder every morning at
06:00:
`sol-dream.service`:

```ini
[Unit]
Description=Process solstone journal

[Service]
Type=oneshot
ExecStart=/usr/local/bin/sol dream

[Install]
WantedBy=multi-user.target
```

`sol-dream.timer`:

```ini
[Unit]
Description=Run sol dream daily

[Timer]
OnCalendar=*-*-* 06:00:00
Persistent=true
Unit=sol-dream.service

[Install]
WantedBy=timers.target
```
Agent System#
Unified Priority Execution#
All scheduled prompts (both generators and tool-using agents) share a unified priority system. The sol dream command executes prompts ordered by priority, from lowest (runs first) to highest (runs last).
Priority is required for all scheduled prompts. Prompts without a priority field will fail validation. Suggested priority bands:
| Band | Range | Use Case |
|---|---|---|
| Generators | 10-30 | Content-producing prompts that create .md files |
| Analysis Agents | 40-60 | Agents that analyze generated content |
| Late-stage | 90+ | Agents that run after most others complete |
| Fun/Optional | 99 | Low-priority or experimental prompts |
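The ordering rule above can be illustrated with a small sketch. The prompt dicts and function name here are hypothetical, not the actual `sol dream` implementation:

```python
def order_prompts(prompts: list[dict]) -> list[dict]:
    """Validate and order scheduled prompts: a priority field is
    required, and lower priorities run first."""
    for p in prompts:
        if "priority" not in p:
            raise ValueError(f"prompt {p.get('name', '?')!r} has no priority")
    return sorted(prompts, key=lambda p: p["priority"])

prompts = [
    {"name": "fun-experiment", "priority": 99},
    {"name": "activity", "priority": 20},      # generator band
    {"name": "daily-review", "priority": 50},  # analysis band
]
print([p["name"] for p in order_prompts(prompts)])
# → ['activity', 'daily-review', 'fun-experiment']
```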
After each generator completes and creates output, the indexer runs `--rescan-file` for incremental indexing. A full `--rescan` runs in the post phase.
Cortex: Central Agent Manager#
The Cortex service (sol cortex) is the central system for managing AI agent instances and generators. It monitors the journal's agents/ directory for new requests and manages execution. All agent spawning should go through Cortex for proper event tracking and management.
Cortex routes requests based on configuration:
- Requests with a `tools` field → tool-using agents (`sol agents`)
- Requests with an `output` field (no `tools`) → generators (`sol agents`)
Both types are handled by the unified sol agents CLI which routes internally.
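The routing rule can be sketched as a simple predicate on the request payload. This is illustrative only; Cortex's actual dispatch logic lives in cortex.py:

```python
def classify_request(request: dict) -> str:
    """Route a Cortex request: a 'tools' field means a tool-using agent;
    an 'output' field without 'tools' means a generator."""
    if "tools" in request:
        return "agent"
    if "output" in request:
        return "generator"
    raise ValueError("request has neither 'tools' nor 'output'")

print(classify_request({"prompt": "summarise", "tools": ["transcripts"]}))  # → agent
print(classify_request({"name": "activity", "output": "md"}))               # → generator
```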
To spawn agents programmatically, use the cortex_client functions:
```python
from think.cortex_client import cortex_request
from think.callosum import CallosumConnection

# Create a request
agent_id = cortex_request(
    prompt="Your task here",
    name="default",
    provider="openai"  # or "google", "anthropic", "claude"
)

# Watch for agent events via Callosum
def on_event(message):
    # Filter for cortex tract events
    if message.get('tract') != 'cortex':
        return
    print(f"Event: {message['event']}")
    if message.get('event') == 'finish':
        print(f"Result: {message.get('result')}")

watcher = CallosumConnection()
watcher.start(callback=on_event)
# ... later, when done:
watcher.stop()
```
Spawning Generators via Cortex#
Generators can also be spawned via `cortex_request` by including an `output` field:
```python
from think.cortex_client import cortex_request, wait_for_agents

# Spawn a generator
agent_id = cortex_request(
    prompt="",  # Generators don't use prompts
    name="activity",
    config={
        "day": "20250109",
        "output": "md",
        "refresh": True,  # Regenerate even if output exists
    }
)

# Wait for completion
completed, timed_out = wait_for_agents([agent_id], timeout=300)
```
Direct CLI Usage (Testing Only)#
The sol agents command is primarily used internally by Cortex. For testing purposes, it can be invoked directly:
```
sol agents [TASK_FILE] [--provider PROVIDER] [--model MODEL] [--max-tokens N] [-o OUT_FILE]
```
The provider can be `openai` (default), `google`, or `anthropic`. Configure the corresponding API key in the `env` section of `journal/config/journal.json` (e.g., `OPENAI_API_KEY`, `GOOGLE_API_KEY`, or `ANTHROPIC_API_KEY`). Keys are loaded into `os.environ` by `setup_cli()` at process startup.
Provider modules#
Each provider lives in think/providers/ and exposes a common interface:
- `run_generate()` - Sync text generation, returns `GenerateResult`
- `run_agenerate()` - Async text generation, returns `GenerateResult`
- `run_cogitate()` - Tool-calling execution via `sol call` commands and event streaming
For direct LLM calls, use `think.models.generate()` or `think.models.agenerate()`, which route automatically to the configured provider based on context.
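The provider-routing idea can be illustrated with a minimal registry. The names and signatures below are hypothetical; the real `think.models` module resolves the provider from context rather than a parameter:

```python
from typing import Callable

# Hypothetical registry mapping provider names to generate functions.
PROVIDERS: dict[str, Callable[[str], str]] = {
    "openai": lambda prompt: f"[openai] {prompt}",
    "google": lambda prompt: f"[google] {prompt}",
    "anthropic": lambda prompt: f"[anthropic] {prompt}",
}

def generate(prompt: str, provider: str = "openai") -> str:
    """Dispatch a generation request to the configured provider."""
    try:
        return PROVIDERS[provider](prompt)
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}") from None

print(generate("hello", provider="google"))  # → [google] hello
```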
Generator map keys#
`think.talent.get_talent_configs(has_tools=False)` reads the `.md` prompt files under `talent/` and returns a dictionary keyed by generator name. Each entry contains:

- `path` - the prompt file path
- `color` - UI color hex string
- `mtime` - modification time of the `.md` file
- Additional keys from JSON frontmatter such as `title`, `description`, `hook`, or `load`
The `hook` field enables event extraction by invoking named hooks like `"occurrence"` or `"anticipation"`.
The `load` key controls transcript/percept/agent source filtering for generators.
See APPS.md for the full schema.
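The prompt-file format (markdown with JSON frontmatter) can be parsed with a short sketch. This illustrates the format only; it is not the actual `get_talent_configs` code, and the real frontmatter convention may differ:

```python
import json
from pathlib import Path

def parse_talent_file(path: Path) -> dict:
    """Split a talent .md prompt file into JSON frontmatter keys plus
    path/mtime metadata, roughly mirroring a generator-map entry."""
    text = path.read_text()
    meta, body = {}, text
    stripped = text.lstrip()
    if stripped.startswith("{"):
        # raw_decode handles nested objects and reports where JSON ends.
        meta, end = json.JSONDecoder().raw_decode(stripped)
        body = stripped[end:].lstrip()
    return {
        "path": str(path),
        "mtime": path.stat().st_mtime,
        "prompt": body,
        **meta,  # e.g. title, description, hook, load
    }
```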
Cortex API#
Cortex is the central agent management system that all agent spawning should go through. See CORTEX.md for complete documentation of the Cortex API and agent event structures.
Using cortex_client#
The think.cortex_client module provides functions for interacting with Cortex:
from think.cortex_client import cortex_request, cortex_agents
# Create an agent request
request_file = cortex_request(
prompt="Your prompt",
name="default",
provider="openai"
)
# List running and completed agents
agents_info = cortex_agents(limit=10, agent_type="live")
print(f"Found {agents_info['live_count']} running agents")
Talent Module#
AI agent system and tool-calling support for solstone.
Commands#
| Command | Purpose |
|---|---|
| `sol cortex` | Agent orchestration service |
| `sol agents` | Direct agent invocation (testing only) |
Architecture#
```
Cortex (orchestrator)
├── Callosum connection (events)
├── Tool execution via `sol call`
└── Agent subprocess management
        ↓
Providers (openai, google, anthropic)
```
Providers#
| Provider | Module | Features |
|---|---|---|
| OpenAI | `think/providers/openai.py` | GPT models via Agents SDK |
| Google | `think/providers/google.py` | Gemini models |
| Anthropic | `think/providers/anthropic.py` | Claude via Anthropic SDK |
Providers implement `run_generate()`, `run_agenerate()`, and `run_cogitate()` functions. See PROVIDERS.md for implementation details.
Key Components#
- cortex.py - Central agent manager, file watcher, event distribution, spawns agents.py
- cortex_client.py - Client functions: `cortex_request()`, `cortex_agents()`, `wait_for_agents()`
- agents.py - Unified CLI entry point for both tool-using agents and generators (NDJSON protocol)
- models.py - Unified `generate()`/`agenerate()` API, provider routing, token logging
- batch.py - `Batch` class for concurrent LLM requests with dynamic queuing
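The concurrent-request pattern can be approximated with the stdlib. This is an illustration of the idea, not the actual `Batch` API; the `run_batch` name and the stand-in generate function are invented for the sketch:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_batch(requests: list[str], generate, max_workers: int = 4) -> dict[str, str]:
    """Run LLM-style requests concurrently, collecting each result as it
    finishes so new work can be queued while others are in flight."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(generate, p): p for p in requests}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results

# Stand-in for a real provider call:
results = run_batch(["a", "b", "c"], generate=lambda p: p.upper())
print(results)  # e.g. {'a': 'A', 'b': 'B', 'c': 'C'} (completion order varies)
```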
Agent Personas#
System prompts live in `talent/*.md` (markdown with JSON frontmatter). Apps can add custom agents in `apps/{app}/talent/`.
JSON metadata supports `title`, `provider`, `model`, `tools`, `schedule`, `priority`, `multi_facet`, and `load` keys.
Important: The priority field is required for all prompts with a schedule. Prompts without explicit priority will fail validation. See the Unified Priority Execution section for priority bands.
See APPS.md for the load schema and inline template variables that control source filtering and prompt context.
Documentation#
- PROVIDERS.md - Provider implementation guide
- CORTEX.md - Full API, event schemas, request format
- CALLOSUM.md - Message bus protocol
- THINK.md - Cortex usage examples