personal memory agent

refactor: rename muse → talent project-wide

Three-case rename (muse/Muse/MUSE → talent/Talent/TALENT) across all
directories, filenames, Python code, CLI flags, config, docs, and tests.
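A mechanical three-case rename like this can be sketched with a small shell filter (illustrative only; the exact commands used for this commit are not shown here):

```shell
# Hypothetical sketch of the three-case substitution pass.
# Applies lowercase, Capitalized, and UPPERCASE replacements in one sed call.
rename_cases() {
  sed -e 's/muse/talent/g' -e 's/Muse/Talent/g' -e 's/MUSE/TALENT/g'
}

echo "muse Muse MUSE muse.system.default" | rename_cases
# prints: talent Talent TALENT talent.system.default
```

File and directory renames (e.g. `think/muse.py` to `think/talent.py`) would still need `git mv` on top of the content substitution.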

Backward-compat shims for legacy journal data:
- Token logs: normalize "muse.*" context strings to "talent.*" at read time
- Exchanges: accept both "talent" and "muse" field names when reading

+732 -714
+4 -4
Makefile
````diff
 # Directories where AI coding agents look for skills
 SKILL_DIRS := .agents/skills .claude/skills

-# Discover SKILL.md files in muse/ and apps/*/muse/, symlink into agent skill dirs
+# Discover SKILL.md files in talent/ and apps/*/talent/, symlink into agent skill dirs
 skills:
 	@# Collect all skill directories (containing SKILL.md)
 	@SKILLS=""; \
-	for skill_md in muse/*/SKILL.md apps/*/muse/*/SKILL.md; do \
+	for skill_md in talent/*/SKILL.md apps/*/talent/*/SKILL.md; do \
 		[ -f "$$skill_md" ] || continue; \
 		skill_dir=$$(dirname "$$skill_md"); \
 		skill_name=$$(basename "$$skill_dir"); \
 		if echo "$$SKILLS" | grep -qw "$$skill_name"; then \
 			echo "Error: duplicate skill name '$$skill_name' found in $$skill_dir" >&2; \
-			echo "Each skill directory name must be unique across muse/ and apps/*/muse/." >&2; \
+			echo "Each skill directory name must be unique across talent/ and apps/*/talent/." >&2; \
 			exit 1; \
 		fi; \
 		SKILLS="$$SKILLS $$skill_name"; \
···
 		done; \
 	done; \
 	count=0; \
-	for skill_md in muse/*/SKILL.md apps/*/muse/*/SKILL.md; do \
+	for skill_md in talent/*/SKILL.md apps/*/talent/*/SKILL.md; do \
 		[ -f "$$skill_md" ] || continue; \
 		skill_dir=$$(dirname "$$skill_md"); \
 		skill_name=$$(basename "$$skill_dir"); \
````
+1 -1
README.md
````diff
 ```

 - **observe** — receives captured audio and screen activity from standalone observers (solstone-linux, solstone-tmux, solstone-macos) via remote ingest. processes FLAC audio, WebM screen recordings, and timestamped metadata.
-- **think** — transcribes audio (faster-whisper), analyzes screen captures, extracts entities, detects meetings, and indexes everything into SQLite. runs 30 configurable agent/generator templates from `muse/`.
+- **think** — transcribes audio (faster-whisper), analyzes screen captures, extracts entities, detects meetings, and indexes everything into SQLite. runs 30 configurable agent/generator templates from `talent/`.
 - **cortex** — orchestrates agent execution. receives events, dispatches agents, writes results back to the journal.
 - **callosum** — async message bus connecting all services. enables event-driven coordination between observe, think, cortex, and convey.
 - **convey** — Flask-based web interface with 17 pluggable apps for navigating journal data.
````
+5 -5
apps/agents/routes.py
````diff
 from convey.utils import DATE_RE, format_date
 from think.facets import get_facets
 from think.models import calc_agent_cost
-from think.muse import get_muse_configs, get_output_path
+from think.talent import get_talent_configs, get_output_path
 from think.utils import updated_days

 agents_bp = Blueprint(
···

 @lru_cache(maxsize=1)
 def _build_agents_meta() -> dict[str, dict[str, Any]]:
-    """Build agent metadata dict from all muse configs.
+    """Build agent metadata dict from all talent configs.

     Returns dict mapping agent name to metadata with capability fields
-    for frontend display. Cached for process lifetime since muse configs
+    for frontend display. Cached for process lifetime since talent configs
     are static.
     """
-    configs = get_muse_configs(include_disabled=True)
+    configs = get_talent_configs(include_disabled=True)
     agents: dict[str, dict[str, Any]] = {}

     for name, config in configs.items():
···
     }
     """
     try:
-        from think.muse import get_agent
+        from think.talent import get_agent

         config = get_agent(name)
````
apps/calendar/muse/calendar/SKILL.md → apps/calendar/talent/calendar/SKILL.md
+2 -2
apps/calendar/routes.py
````diff
         return "", 404

     from think.indexer.journal import get_events
-    from think.muse import get_muse_configs
+    from think.talent import get_talent_configs

-    generators = get_muse_configs(type="generate")
+    generators = get_talent_configs(type="generate")

     # Get full event objects from source files
     raw_events = get_events(day)
````
apps/entities/muse/entities.md → apps/entities/talent/entities.md
apps/entities/muse/entities/SKILL.md → apps/entities/talent/entities/SKILL.md
apps/entities/muse/entities_review.md → apps/entities/talent/entities_review.md
apps/entities/muse/entity_assist.md → apps/entities/talent/entity_assist.md
apps/entities/muse/entity_describe.md → apps/entities/talent/entity_describe.md
apps/entities/muse/entity_observer.md → apps/entities/talent/entity_observer.md
+16 -16
apps/health/muse/health/SKILL.md → apps/health/talent/health/SKILL.md
````diff

 Use these commands to check service health, view logs, and inspect agent runs.

-**Typical workflow**: `sol health` to check service status → `sol health logs` to inspect recent log output → `sol muse logs` to review agent runs → `sol muse log <ID>` for run details.
+**Typical workflow**: `sol health` to check service status → `sol health logs` to inspect recent log output → `sol talent logs` to review agent runs → `sol talent log <ID>` for run details.

 ## status

···
 ## agent runs

 ```bash
-sol muse logs [AGENT] [-c COUNT] [--day YYYYMMDD] [--daily] [--errors] [--summary]
+sol talent logs [AGENT] [-c COUNT] [--day YYYYMMDD] [--daily] [--errors] [--summary]
 ```

 List recent agent runs.
···
 Examples:

 ```bash
-sol muse logs
-sol muse logs activity -c 10
-sol muse logs --daily
-sol muse logs --daily --summary
-sol muse logs --day 20260228
-sol muse logs --daily --errors
+sol talent logs
+sol talent logs activity -c 10
+sol talent logs --daily
+sol talent logs --daily --summary
+sol talent logs --day 20260228
+sol talent logs --daily --errors
 ```

 ## agent run detail

 ```bash
-sol muse log <ID> [--json] [--full]
+sol talent log <ID> [--json] [--full]
 ```

 Show events for a single agent run.

-- `ID`: agent run ID (from `sol muse logs` output).
+- `ID`: agent run ID (from `sol talent logs` output).
 - `--json`: raw JSONL events.
 - `--full`: expanded event detail (no truncation).

···
 Examples:

 ```bash
-sol muse log 1700000000001
-sol muse log 1700000000001 --json
-sol muse log 1700000000001 --full
+sol talent log 1700000000001
+sol talent log 1700000000001 --json
+sol talent log 1700000000001 --full
 ```

 ## journal layout
···
 ### `sol health` returns "Connection refused" or times out
 The supervisor is not running. Check if `sol supervisor` is active. The owner may need to start solstone with `sol start` or `make dev`.

-### Agent run shows "error" status in `sol muse logs`
-Run `sol muse log <ID> --full` to see the complete event timeline including the error. Common causes:
+### Agent run shows "error" status in `sol talent logs`
+Run `sol talent log <ID> --full` to see the complete event timeline including the error. Common causes:
 - API key issues (rate limits, expired keys)
 - Prompt too large (context overflow)
 - Network connectivity
···
 3. Check if the stream is active: `sol streams`

 ### High agent costs
-Run `sol muse logs --summary` for aggregated cost view. Filter by agent: `sol muse logs <agent-name> --summary`.
+Run `sol talent logs --summary` for aggregated cost view. Filter by agent: `sol talent logs <agent-name> --summary`.
````
+1 -1
apps/home/events.py
````diff
             path=path,
             user_message=user_message,
             agent_response=result,
-            muse=name,
+            talent=name,
             agent_id=agent_id,
         )
     except Exception:
````
+15 -15
apps/settings/routes.py
````diff
     - cogitate: Current cogitate provider, tier, and backup
     - contexts: Configured context overrides from journal.json
     - context_defaults: Context registry with labels/groups for UI
-      (includes muse configs with type, schedule, disabled, extract)
+      (includes talent configs with type, schedule, disabled, extract)
     - api_keys: Boolean status for each provider's API key
     - auth: Per-provider auth mode for cogitate ("platform" or "api_key")
     """
···
         TYPE_DEFAULTS,
         get_context_registry,
     )
-    from think.muse import get_muse_configs
+    from think.talent import get_talent_configs
     from think.providers import get_provider_list

     config = get_journal_config()
···
                 "label": ctx_config["label"],
                 "group": ctx_config["group"],
             }
-            # Include type for muse contexts
+            # Include type for talent contexts
             if "type" in ctx_config:
                 context_defaults[pattern]["type"] = ctx_config["type"]

-    # Enhance muse contexts with additional metadata from get_muse_configs
-    from think.muse import key_to_context
+    # Enhance talent contexts with additional metadata from get_talent_configs
+    from think.talent import key_to_context

-    muse_configs = get_muse_configs(include_disabled=True)
-    for key, info in muse_configs.items():
+    talent_configs = get_talent_configs(include_disabled=True)
+    for key, info in talent_configs.items():
         context_key = key_to_context(key)

         if context_key in context_defaults:
-            # Add muse-specific fields
+            # Add talent-specific fields
             if "schedule" in info:
                 context_defaults[context_key]["schedule"] = info["schedule"]
             context_defaults[context_key]["disabled"] = info.get("disabled", False)
···
     Set or clear context overrides

     Setting a context to null removes the override.
-    For muse contexts, disabled and extract can also be set.
+    For talent contexts, disabled and extract can also be set.
     """
     try:
         from think.providers import PROVIDER_REGISTRY
···


 def _build_generator_info(key: str, info: dict) -> dict:
-    """Build generator info dict from muse config for Settings UI.
+    """Build generator info dict from talent config for Settings UI.

-    Transforms muse config metadata into the format expected by the
+    Transforms talent config metadata into the format expected by the
     Settings UI Insights section.
     """
     # Determine if extraction is supported (occurrence/anticipation hooks)
···
 def get_generators() -> Any:
     """Return generators grouped by schedule for Settings UI.

-    This is a compatibility layer that transforms the unified muse config
+    This is a compatibility layer that transforms the unified talent config
     into the format expected by the Settings UI Insights section.

     Returns:
···
         - daily: List of daily-schedule generators
     """
     try:
-        from think.muse import get_muse_configs
+        from think.talent import get_talent_configs

         # Get all generate prompts
-        all_generators = get_muse_configs(type="generate", include_disabled=True)
+        all_generators = get_talent_configs(type="generate", include_disabled=True)

         segment = []
         daily = []
···
     old_contexts = old_providers.get("contexts", {})
     changed_fields = {}

-    from think.muse import key_to_context
+    from think.talent import key_to_context

     for key, updates in request_data.items():
         if not isinstance(updates, dict):
````
+1 -1
apps/speakers/attribution.py
````diff
              meetings.md) — no LLM
     Layer 3: Acoustic matching (voiceprint cosine similarity, same-stream
              preference) — no LLM
-    Layer 4: Contextual identification (LLM) — handled externally via muse hook
+    Layer 4: Contextual identification (LLM) — handled externally via talent hook

     High-confidence attributions from Layers 2-3 automatically accumulate
     into entity voiceprints, creating a learning flywheel.
````
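Layer 3's voiceprint comparison comes down to cosine similarity between embedding vectors; a generic sketch (not the repository's implementation, which is not shown in this diff):

```python
import math

# Generic cosine similarity between two voiceprint embeddings (illustrative).
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A similarity threshold would gate "high-confidence" attribution.
print(cosine_similarity([3.0, 4.0], [3.0, 4.0]))  # -> 1.0
```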
apps/speakers/muse/speakers/SKILL.md → apps/speakers/talent/speakers/SKILL.md
+2 -2
apps/stats/routes.py
````diff
 from flask import Blueprint, jsonify

 from convey import state
-from think.muse import get_muse_configs
+from think.talent import get_talent_configs

 stats_bp = Blueprint(
     "app:stats",
···
     except Exception:
         pass

-    response["generators"] = get_muse_configs(type="generate")
+    response["generators"] = get_talent_configs(type="generate")

     return jsonify(response)
````
apps/support/muse/support.md → apps/support/talent/support.md
apps/support/muse/support/SKILL.md → apps/support/talent/support/SKILL.md
+1 -1
apps/support/tools.py
````diff

 """Support tool functions for agent workflows.

-Each function provides a discrete capability that both the muse agent
+Each function provides a discrete capability that both the talent agent
 (via ``sol call support``) and the convey routes can use. All outbound
 operations are **consent-gated** — they return a draft for review rather
 than submitting directly.
````
apps/todos/muse/daily.md → apps/todos/talent/daily.md
apps/todos/muse/todo.md → apps/todos/talent/todo.md
apps/todos/muse/todos/SKILL.md → apps/todos/talent/todos/SKILL.md
apps/todos/muse/weekly.md → apps/todos/talent/weekly.md
apps/transcripts/muse/transcripts/SKILL.md → apps/transcripts/talent/transcripts/SKILL.md
+1 -1
convey/root.py
````diff

 @bp.route("/")
 def index() -> Any:
-    """Root redirect — always to home, onboarding muse handles new journals."""
+    """Root redirect — always to home, onboarding talent handles new journals."""
     return redirect(url_for("app:home.index"))
````
+3 -3
convey/triage.py
````diff
     WebSocket (cortex/finish event). For reload recovery, use GET /result/<agent_id>.

     Routes based on onboarding status: new/in-progress journals go to
-    the onboarding muse, completed/skipped journals go to unified.
+    the onboarding talent, completed/skipped journals go to unified.
     """
     payload = request.get_json(force=True)
     message = payload.get("message", "").strip()
···
         # Path A active — use triage with observation context
         agent_name = "triage"
     elif onboarding_status not in ("complete", "skipped"):
-        # Onboarding not yet completed — route to onboarding muse
+        # Onboarding not yet completed — route to onboarding talent
         agent_name = "onboarding"
     elif has_conversation:
-        # Conversation context present — use unified muse
+        # Conversation context present — use unified talent
         agent_name = "unified"
     else:
         agent_name = "unified"
````
+27 -27
docs/APPS.md
````diff
 ├── app.json            # Optional: Metadata (icon, label, facet support)
 ├── app_bar.html        # Optional: Bottom bar controls (forms, buttons)
 ├── background.html     # Optional: Background JavaScript service
-├── muse/               # Optional: Custom agents, generators, and skills (auto-discovered)
+├── talent/             # Optional: Custom agents, generators, and skills (auto-discovered)
 │   └── my-skill/       # Optional: Agent Skill directories (SKILL.md + resources)
 ├── maint/              # Optional: One-time maintenance tasks (auto-discovered)
 └── tests/              # Optional: App-specific tests (run via make test-apps)
···
 | `app.json` | No | Icon, label, facet support overrides |
 | `app_bar.html` | No | Bottom fixed bar for app controls |
 | `background.html` | No | Background service (WebSocket listeners) |
-| `muse/` | No | Custom agents, generators, and skills (`.md` files + skill subdirectories) |
+| `talent/` | No | Custom agents, generators, and skills (`.md` files + skill subdirectories) |
 | `maint/` | No | One-time maintenance tasks (run on Convey startup) |
 | `tests/` | No | App-specific tests with self-contained fixtures |
···

 ---

-### 8. `muse/` - App Generators
+### 8. `talent/` - App Generators

 Define custom generator prompts that integrate with solstone's output generation system.

 **Key Points:**
-- Create `muse/` directory with `.md` files containing JSON frontmatter
+- Create `talent/` directory with `.md` files containing JSON frontmatter
 - App generators are automatically discovered alongside system generators
 - Keys are namespaced as `{app}:{agent}` (e.g., `my_app:weekly_summary`)
 - Outputs go to `JOURNAL/YYYYMMDD/agents/_<app>_<agent>.md` (or `.json` if `output: "json"`)

-**Metadata format:** Same schema as system generators in `muse/*.md` - JSON frontmatter includes `title`, `description`, `color`, `schedule` (required), `priority` (required for scheduled prompts), `hook`, `output`, `max_output_tokens`, and `thinking_budget` fields. The `schedule` field must be `"segment"` or `"daily"`. The `priority` field is required for all scheduled prompts - prompts without explicit priority will fail validation. Set `output: "json"` for structured JSON output instead of markdown. Optional `max_output_tokens` sets the maximum response length; `thinking_budget` sets the model's thinking token budget (provider-specific defaults apply if omitted).
+**Metadata format:** Same schema as system generators in `talent/*.md` - JSON frontmatter includes `title`, `description`, `color`, `schedule` (required), `priority` (required for scheduled prompts), `hook`, `output`, `max_output_tokens`, and `thinking_budget` fields. The `schedule` field must be `"segment"` or `"daily"`. The `priority` field is required for all scheduled prompts - prompts without explicit priority will fail validation. Set `output: "json"` for structured JSON output instead of markdown. Optional `max_output_tokens` sets the maximum response length; `thinking_budget` sets the model's thinking token budget (provider-specific defaults apply if omitted).

 **Priority bands:** Prompts run in priority order (lowest first). Recommended bands:
 - 10-30: Generators (content-producing prompts)
···
 - Use `"hook": {"post": "my_hook"}` for post-processing hooks
 - Use both together: `"hook": {"pre": "prep", "post": "process"}`
 - Use `"hook": {"flush": true}` to opt into segment flush (see below)
-- Resolution: `"name"` → `muse/{name}.py`, `"app:name"` → `apps/{app}/muse/{name}.py`, or explicit path
+- Resolution: `"name"` → `talent/{name}.py`, `"app:name"` → `apps/{app}/talent/{name}.py`, or explicit path

 **Pre-hooks** (`pre_process`): Modify inputs before the LLM call
 - `context` is the full config dict with: `name`, `agent_id`, `provider`, `model`, `prompt`, `system_instruction` (if set), `user_instruction`, `output`, `meta`, and for generators: `day`, `segment`, `span`, `span_mode`, `transcript`, `output_path`
···
 Hook errors are logged but don't crash the pipeline (graceful degradation).

 ```python
-# muse/my_hook.py
+# talent/my_hook.py
 def pre_process(context: dict) -> dict | None:
     # Modify inputs before LLM call
     return {"prompt": context["prompt"] + "\n\nBe concise."}
···
 ```

 **Reference implementations:**
-- System generator templates: `muse/*.md` (files with `schedule` field but no `tools` field)
-- Extraction hooks: `muse/occurrence.py`, `muse/anticipation.py`
-- Discovery logic: `think/muse.py` - `get_muse_configs(has_tools=False)`, `get_output_name()`
-- Hook loading: `think/muse.py` - `load_pre_hook()`, `load_post_hook()`
+- System generator templates: `talent/*.md` (files with `schedule` field but no `tools` field)
+- Extraction hooks: `talent/occurrence.py`, `talent/anticipation.py`
+- Discovery logic: `think/talent.py` - `get_talent_configs(has_tools=False)`, `get_output_name()`
+- Hook loading: `think/talent.py` - `load_pre_hook()`, `load_post_hook()`

 ---

-### 9. `muse/` - App Agents and Generators
+### 9. `talent/` - App Agents and Generators

 Define custom agents and generator templates that integrate with solstone's Cortex agent system.

 **Key Points:**
-- Create `muse/` directory with `.md` files containing JSON frontmatter
+- Create `talent/` directory with `.md` files containing JSON frontmatter
 - Both agents and generators live in the same directory - distinguished by frontmatter fields
 - Agents have a `tools` field, generators have `schedule` but no `tools`
 - App agents/generators are automatically discovered alongside system ones
 - Keys are namespaced as `{app}:{name}` (e.g., `my_app:helper`)
 - Agents inherit all system agent capabilities (tools, scheduling, multi-facet)

-**Metadata format:** Same schema as system agents in `muse/*.md` - JSON frontmatter includes `title`, `provider`, `model`, `tools`, `schedule`, `priority`, `multi_facet`, `max_output_tokens`, and `thinking_budget` fields. The `priority` field is **required** for all scheduled prompts - prompts without explicit priority will fail validation. See the priority bands documentation in [THINK.md](THINK.md#unified-priority-execution). Optional `max_output_tokens` sets the maximum response length; `thinking_budget` sets the model's thinking token budget (provider-specific defaults apply if omitted; OpenAI uses fixed reasoning and ignores this field). See [CORTEX.md](CORTEX.md) for agent configuration details.
+**Metadata format:** Same schema as system agents in `talent/*.md` - JSON frontmatter includes `title`, `provider`, `model`, `tools`, `schedule`, `priority`, `multi_facet`, `max_output_tokens`, and `thinking_budget` fields. The `priority` field is **required** for all scheduled prompts - prompts without explicit priority will fail validation. See the priority bands documentation in [THINK.md](THINK.md#unified-priority-execution). Optional `max_output_tokens` sets the maximum response length; `thinking_budget` sets the model's thinking token budget (provider-specific defaults apply if omitted; OpenAI uses fixed reasoning and ignores this field). See [CORTEX.md](CORTEX.md) for agent configuration details.

 **Template variables:** Agent prompts can use template variables like `$name`, `$preferred`, and pronoun variables. See [PROMPT_TEMPLATES.md](PROMPT_TEMPLATES.md) for the complete template system documentation.

 **Reference implementations:**
-- System agent examples: `muse/*.md` (files with `tools` field)
-- Discovery logic: `think/muse.py` - `get_muse_configs(has_tools=True)`, `get_agent()`
+- System agent examples: `talent/*.md` (files with `tools` field)
+- Discovery logic: `think/talent.py` - `get_talent_configs(has_tools=True)`, `get_agent()`

 #### Prompt Context Configuration
···
 - `$facets` - focused facet context or all available facets
 - `$activity_context` - activity metadata, segment state, and analysis focus sections

-**Authoritative source:** `think/muse.py` - `_DEFAULT_LOAD`, `source_is_enabled()`, `source_is_required()`, `get_agent_filter()`
+**Authoritative source:** `think/talent.py` - `_DEFAULT_LOAD`, `source_is_enabled()`, `source_is_required()`, `get_agent_filter()`

 ---

-### 10. `muse/` - Agent Skills
+### 10. `talent/` - Agent Skills

-Define [Agent Skills](https://agentskills.io/specification) as subdirectories within `muse/`. Skills package procedural knowledge, workflows, and resources that AI coding agents (Claude Code, GitHub Copilot, Gemini CLI, etc.) can discover and use on demand.
+Define [Agent Skills](https://agentskills.io/specification) as subdirectories within `talent/`. Skills package procedural knowledge, workflows, and resources that AI coding agents (Claude Code, GitHub Copilot, Gemini CLI, etc.) can discover and use on demand.

 **Key Points:**
-- Create a subdirectory in `muse/` with a `SKILL.md` file (YAML frontmatter + markdown body)
+- Create a subdirectory in `talent/` with a `SKILL.md` file (YAML frontmatter + markdown body)
 - The directory name must match the `name` field in the YAML frontmatter
-- Skill names must be unique across system `muse/` and all `apps/*/muse/` directories
+- Skill names must be unique across system `talent/` and all `apps/*/talent/` directories
 - `make skills` discovers all skills and symlinks them into `.agents/skills/` and `.claude/skills/`
-- Skills are standalone — they don't interact with the muse agent/generator system
-- The muse loader ignores subdirectories, so skills won't interfere with agent discovery
+- Skills are standalone — they don't interact with the talent agent/generator system
+- The talent loader ignores subdirectories, so skills won't interfere with agent discovery

 **Directory structure:**
 ```
-muse/my-skill/
+talent/my-skill/
 ├── SKILL.md          # Required: YAML frontmatter + instructions
 ├── scripts/          # Optional: Executable code (Python, Bash, etc.)
 ├── references/       # Optional: Additional documentation loaded on demand
···
 - `metadata` — Arbitrary key-value string map
 - `allowed-tools` — Space-delimited list of pre-approved tools (experimental)

-**App skills** work the same way — place a skill directory inside `apps/my_app/muse/`:
+**App skills** work the same way — place a skill directory inside `apps/my_app/talent/`:
 ```
-apps/my_app/muse/my-skill/
+apps/my_app/talent/my-skill/
 ├── SKILL.md
 └── references/
 ```

-**Running `make skills`:** Discovers all `SKILL.md` files under `muse/*/` and `apps/*/muse/*/`, then creates symlinks so that all supported coding agents see the same skills. Errors if two skills share the same directory name.
+**Running `make skills`:** Discovers all `SKILL.md` files under `talent/*/` and `apps/*/talent/*/`, then creates symlinks so that all supported coding agents see the same skills. Errors if two skills share the same directory name.

 ---

````
+1 -1
docs/BACKLOG.md
````diff

 - [ ] Update supervisor/dream interaction to use dynamic daily schedule from daily schedule agent output
 - [ ] Create segment agent for voiceprint detection and updating via hooks
-- [ ] Surface named hook outputs in agents app and sol muse CLI
+- [ ] Surface named hook outputs in agents app and sol talent CLI
 - [ ] Make daily schedule agents idempotent with state tracking (show existing vs new segments)
 - [ ] Add activities attach/update CLI tools for facet curation (like entity tools)
````
+1 -1
docs/CALLOSUM.md
````diff
 **`status`** - Periodic progress (every ~5s). Fields: `mode`, `day`, `segment`, `stream`, `agents_completed`, `agents_total`, `current_group_priority`, `current_agents` (list of running agent names). In `--segments` batch mode, also includes `segments_completed`, `segments_total`. In activity mode, includes `activity`, `facet`.

 ### `activity` - Activity lifecycle events
-**Sources:** `muse/activity_state.py` (post-hook), `muse/activities.py` (post-hook)
+**Sources:** `talent/activity_state.py` (post-hook), `talent/activities.py` (post-hook)
 **Events:** `live`, `recorded`
 **Event Log:** Logged to `<day>/<segment>/events.jsonl` by supervisor
````
+9 -9
docs/CORTEX.md
````diff
   "event": "request",
   "ts": 1234567890123,  // Required: millisecond timestamp (must match agent_id in filename)
   "prompt": "Analyze this code for security issues",  // Required for agents (not generators)
-  "name": "default",  // Optional: agent name from muse/*.md
+  "name": "default",  // Optional: agent name from talent/*.md
   "provider": "openai",  // Optional: override provider (openai, google, anthropic)
   "max_output_tokens": 8192,  // Optional: maximum response tokens
   "thinking_budget": 10000,  // Optional: thinking token budget (ignored by OpenAI)
···
 }
 ```

-The model is automatically resolved based on the muse context (`muse.{source}.{name}`)
+The model is automatically resolved based on the talent context (`talent.{source}.{name}`)
 and the configured tier in `journal.json`. Provider can optionally be overridden at
 request time, which will resolve the appropriate model for that provider at the same tier.
···
 {
   "event": "request",
   "ts": 1234567890123,  // Required: millisecond timestamp
-  "name": "activity",  // Required: generator name from muse/*.md
+  "name": "activity",  // Required: generator name from talent/*.md
   "day": "20250109",  // Required: day in YYYYMMDD format
   "output": "md",  // Required: output format ("md" or "json")
   "segment": "120000_300",  // Optional: single segment key (HHMMSS_duration)
···

 ## Agent Configuration

-Agents use configurations stored in the `muse/` directory. Each agent is a `.md` file containing:
+Agents use configurations stored in the `talent/` directory. Each agent is a `.md` file containing:
 - JSON frontmatter with metadata and configuration
 - The agent-specific prompt and instructions in the content

 When spawning an agent:
 1. Cortex passes the raw request to `sol agents` via stdin (NDJSON format)
 2. The agent process (`think/agents.py`) handles all config loading via `prepare_config()`:
-   - Loads agent configuration using `get_agent()` from `think/muse.py`
+   - Loads agent configuration using `get_agent()` from `think/talent.py`
    - Merges request parameters with agent defaults
    - Resolves provider and model based on context
 3. The agent validates the config via `validate_config()` before execution
···
 - `extra_context`: Runtime context (facets, generators list, datetime)
 - `user_instruction`: The agent's `.md` file content

-Agents define specialized behaviors and facet expertise. Available agents can be discovered using `get_muse_configs(type="cogitate")` or by listing files in the `muse/` directory.
+Agents define specialized behaviors and facet expertise. Available agents can be discovered using `get_talent_configs(type="cogitate")` or by listing files in the `talent/` directory.

 ### Agent Configuration Options
···
 ### Model Resolution

 Models are resolved automatically based on context and tier:
-1. Each muse config has a context pattern: `muse.{source}.{name}` (e.g., `muse.system.default`)
+1. Each talent config has a context pattern: `talent.{source}.{name}` (e.g., `talent.system.default`)
 2. The context determines the tier (pro/flash/lite) from `journal.json` or system defaults
 3. The tier + provider determines the actual model to use
···
 {
   "providers": {
     "contexts": {
-      "muse.system.default": {"tier": 1},
-      "muse.*": {"tier": 2}
+      "talent.system.default": {"tier": 1},
+      "talent.*": {"tier": 2}
     }
   }
 }
````
+12 -12
docs/JOURNAL.md
··· 322 322 }, 323 323 "contexts": { 324 324 "observe.*": {"provider": "google", "tier": 3}, 325 - "muse.system.*": {"tier": 1}, 326 - "muse.system.meetings": {"provider": "anthropic", "disabled": true}, 327 - "muse.entities.observer": {"tier": 2, "extract": false} 325 + "talent.system.*": {"tier": 1}, 326 + "talent.system.meetings": {"provider": "anthropic", "disabled": true}, 327 + "talent.entities.observer": {"tier": 2, "extract": false} 328 328 }, 329 329 "models": { 330 330 "google": { ··· 354 354 #### Context matching 355 355 356 356 Contexts are matched in order of specificity: 357 - 1. **Exact match** – `"muse.system.meetings"` matches only that exact context 357 + 1. **Exact match** – `"talent.system.meetings"` matches only that exact context 358 358 2. **Glob pattern** – `"observe.*"` matches any context starting with `observe.` 359 359 3. **Default** – Falls back to the `default` configuration 360 360 361 361 #### Context naming convention 362 362 363 - Muse configs (agents and generators) use the pattern `muse.{source}.{name}`: 364 - - System configs: `muse.system.{name}` (e.g., `muse.system.meetings`, `muse.system.default`) 365 - - App configs: `muse.{app}.{name}` (e.g., `muse.entities.observer`, `muse.support.support`) 363 + Talent configs (agents and generators) use the pattern `talent.{source}.{name}`: 364 + - System configs: `talent.system.{name}` (e.g., `talent.system.meetings`, `talent.system.default`) 365 + - App configs: `talent.{app}.{name}` (e.g., `talent.entities.observer`, `talent.support.support`) 366 366 367 367 Other contexts follow the pattern `{module}.{feature}[.{operation}]`: 368 368 - Observe pipeline: `observe.describe.frame`, `observe.enrich`, `observe.transcribe.gemini` ··· 378 378 - `provider` (string) – Override provider (optional, inherits from default). 379 379 - `tier` (integer) – Tier number (optional). 380 380 - `model` (string) – Explicit model name (optional, overrides tier). 
381 - - `disabled` (boolean) – Disable this muse config (optional, muse contexts only). 381 + - `disabled` (boolean) – Disable this talent config (optional, talent contexts only). 382 382 - `extract` (boolean) – Enable/disable event extraction for generators with occurrence/anticipation hooks (optional). 383 383 384 384 **models** – Per-provider tier overrides. Maps provider name to tier-model mappings: ··· 1058 1058 1059 1059 #### Segment outputs 1060 1060 1061 - After captures are processed, segment-level outputs are generated within each segment folder as `HHMMSS_LEN/*.md` files. Available segment output types are defined by templates in `muse/` with `"schedule": "segment"` in their metadata JSON. 1061 + After captures are processed, segment-level outputs are generated within each segment folder as `HHMMSS_LEN/*.md` files. Available segment output types are defined by templates in `talent/` with `"schedule": "segment"` in their metadata JSON. 1062 1062 1063 1063 #### Daily outputs 1064 1064 1065 1065 Post-processing generates day-level outputs in the `agents/` directory that synthesize all segments. 1066 1066 1067 1067 **Generator discovery:** Available generator types are discovered at runtime from: 1068 - - `muse/*.md` – system generator templates (files with `schedule` field but no `tools` field) 1069 - - `apps/{app}/muse/*.md` – app-specific generator templates 1068 + - `talent/*.md` – system generator templates (files with `schedule` field but no `tools` field) 1069 + - `apps/{app}/talent/*.md` – app-specific generator templates 1070 1070 1071 - Each template is a `.md` file with JSON frontmatter containing metadata (title, description, schedule, output format). The `schedule` field is required and must be `"segment"` or `"daily"` - generators with missing or invalid schedule are skipped. 
Use `get_muse_configs(has_tools=False)` from `think/muse.py` to retrieve all available generators, or `get_muse_configs(has_tools=False, schedule="daily")` to get generators filtered by schedule. 1071 + Each template is a `.md` file with JSON frontmatter containing metadata (title, description, schedule, output format). The `schedule` field is required and must be `"segment"` or `"daily"` - generators with missing or invalid schedule are skipped. Use `get_muse_configs(has_tools=False)` from `think/talent.py` to retrieve all available generators, or `get_muse_configs(has_tools=False, schedule="daily")` to get generators filtered by schedule. 1072 1072 1073 1073 **Output naming:** 1074 1074 - System outputs: `agents/{agent}.md` (e.g., `agents/flow.md`, `agents/meetings.md`)
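The discovery rule above (JSON frontmatter, required `schedule` of `"segment"` or `"daily"`, no `tools` field) can be sketched as a minimal loader. The real implementation is `get_muse_configs()` in `think/talent.py`; this simplified frontmatter parser is an assumption about the file layout, not the actual code.

```python
import json
from pathlib import Path

def discover_generators(talent_dir: Path) -> dict[str, dict]:
    """Map generator name -> metadata for each qualifying talent/*.md file."""
    generators = {}
    for md in sorted(talent_dir.glob("*.md")):
        text = md.read_text(encoding="utf-8")
        if not text.lstrip().startswith("{"):
            continue  # no JSON frontmatter
        try:
            # raw_decode parses the leading JSON object and ignores the prompt body
            meta, _ = json.JSONDecoder().raw_decode(text.lstrip())
        except json.JSONDecodeError:
            continue
        # schedule is required and must be "segment" or "daily";
        # files with a "tools" field are agents, not generators.
        if meta.get("schedule") not in ("segment", "daily") or "tools" in meta:
            continue
        generators[md.stem] = {"path": str(md), **meta}
    return generators
```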
+4 -4
docs/PROMPT_TEMPLATES.md
··· 130 130 131 131 **Optional model configuration:** Add `max_output_tokens` (response length limit) and `thinking_budget` (model thinking token budget) to override provider defaults. 132 132 133 - **Reference:** `muse/*.md` for examples (files with `schedule` field but no `tools` field) 133 + **Reference:** `talent/*.md` for examples (files with `schedule` field but no `tools` field) 134 134 135 135 ### For Agents 136 136 ··· 152 152 153 153 **Optional model configuration:** Add `max_output_tokens` (response length limit) and `thinking_budget` (model thinking token budget) to override provider defaults. Note: OpenAI uses fixed reasoning and ignores `thinking_budget`. 154 154 155 - **Reference:** `think/muse.py` → `get_agent()` for agent configuration loading 155 + **Reference:** `think/talent.py` → `get_agent()` for agent configuration loading 156 156 157 157 ### The load_prompt() Function 158 158 ··· 196 196 | Core load function | `think/prompts.py` (`load_prompt`) | 197 197 | Template files | `think/templates/*.md` | 198 198 | Test coverage | `tests/test_template_substitution.py` | 199 - | Generator prompts | `muse/*.md` (files with `schedule` field but no `tools`) | 200 - | Agent prompts | `muse/*.md` (files with `tools` field) | 199 + | Generator prompts | `talent/*.md` (files with `schedule` field but no `tools`) | 200 + | Agent prompts | `talent/*.md` (files with `tools` field) |
+6 -6
docs/PROVIDERS.md
··· 217 217 Context strings determine provider and model selection. Providers receive already-resolved models, but understanding the system helps: 218 218 219 219 **Context naming convention:** 220 - - Muse configs (agents/generators): `muse.{source}.{name}` where source is `system` or app name 221 - - System: `muse.system.meetings`, `muse.system.default` 222 - - App: `muse.entities.observer`, `muse.chat.helper` 220 + - Talent configs (agents/generators): `talent.{source}.{name}` where source is `system` or app name 221 + - System: `talent.system.meetings`, `talent.system.default` 222 + - App: `talent.entities.observer`, `talent.chat.helper` 223 223 - Other contexts: `{module}.{feature}[.{operation}]` 224 224 - Examples: `observe.describe.frame`, `app.chat.title` 225 225 226 226 **Dynamic discovery:** All context metadata (tier/label/group) is defined in prompt .md files via YAML frontmatter: 227 227 - Prompt files: Listed in `PROMPT_PATHS` in `think/models.py` - add `context`, `tier`, `label`, `group` fields 228 228 - Categories: `observe/categories/*.md` - add `tier`, `label`, `group` fields 229 - - System muse: `muse/*.md` - add `tier`, `label`, `group` fields in frontmatter 230 - - App muse: `apps/*/muse/*.md` - add `tier`, `label`, `group` fields in frontmatter 229 + - System talent: `talent/*.md` - add `tier`, `label`, `group` fields in frontmatter 230 + - App talent: `apps/*/talent/*.md` - add `tier`, `label`, `group` fields in frontmatter 231 231 232 232 All contexts are discovered at runtime. Use `get_context_registry()` to get the complete context map. 233 233 234 234 **Resolution** (handled by `think/models.py` `resolve_provider(context, agent_type)`): 235 235 1. Exact match in journal.json `providers.contexts` 236 236 2. Glob pattern match (fnmatch) with specificity ranking 237 - 3. Dynamic context registry (discovered prompts, categories, muse configs) 237 + 3. Dynamic context registry (discovered prompts, categories, talent configs) 238 238 4. 
Type-specific default (from `providers.generate` or `providers.cogitate`) 239 239 5. System defaults from `TYPE_DEFAULTS` 240 240
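The five-step resolution order above can be modeled as a simple fallback chain. This is a simplified model: the function signature and dictionary shapes here are illustrative, not the actual `think/models.py` API.

```python
from fnmatch import fnmatch

def pick_provider(
    context: str,
    agent_type: str,
    journal_contexts: dict,  # journal.json providers.contexts
    registry: dict,          # dynamic context registry
    type_defaults: dict,     # per-type defaults (generate / cogitate)
    system_default: str = "google",
) -> str:
    # 1. Exact match in journal.json providers.contexts
    if "provider" in journal_contexts.get(context, {}):
        return journal_contexts[context]["provider"]
    # 2. Glob pattern match, ranked by specificity (longest pattern wins)
    globs = [p for p in journal_contexts if "*" in p and fnmatch(context, p)]
    for pattern in sorted(globs, key=len, reverse=True):
        if "provider" in journal_contexts[pattern]:
            return journal_contexts[pattern]["provider"]
    # 3. Dynamic context registry (discovered prompts, categories, talent configs)
    if "provider" in registry.get(context, {}):
        return registry[context]["provider"]
    # 4. Type-specific default, then 5. system default
    return type_defaults.get(agent_type, system_default)
```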
+1 -1
docs/ROADMAP.md
··· 42 42 43 43 ## Agent Customization 44 44 45 - - Journal-level muse directory (`<journal>/muse/`) for custom agents and frontmatter overrides 45 + - Journal-level talent directory (`<journal>/talent/`) for custom agents and frontmatter overrides 46 46 47 47 ## Convey UI 48 48
+10 -10
docs/SOLCLI.md
··· 13 13 14 14 ### The boundary 15 15 16 - **If an AI agent should tool-call it → `sol call`.** These commands appear in SKILL.md files and are invoked by muse agents during conversations. 16 + **If an AI agent should tool-call it → `sol call`.** These commands appear in SKILL.md files and are invoked by talent agents during conversations. 17 17 18 18 **If it's system plumbing → `sol <cmd>`.** Processing pipelines, supervisor, services, capture — things that cron or systemd runs. 19 19 ··· 134 134 3. **Create the agent skill** (if agents should use these commands): 135 135 136 136 ```markdown 137 - # apps/myapp/muse/myapp/SKILL.md 137 + # apps/myapp/talent/myapp/SKILL.md 138 138 --- 139 139 name: myapp 140 140 description: > ··· 175 175 from think.tools.mytools import app as mytools_app 176 176 call_app.add_typer(mytools_app, name="mytools") 177 177 ``` 178 - 3. **Optionally create a skill** in `muse/<name>/SKILL.md`. 178 + 3. **Optionally create a skill** in `talent/<name>/SKILL.md`. 179 179 180 180 ### Files to maintain for a new call command 181 181 182 182 | File | What to do | Required? 
| 183 183 |------|-----------|-----------| 184 184 | `apps/<name>/call.py` | Typer app with commands | Yes | 185 - | `apps/<name>/muse/<name>/SKILL.md` | Skill doc for agents | If agents should use it | 185 + | `apps/<name>/talent/<name>/SKILL.md` | Skill doc for agents | If agents should use it | 186 186 | `.agents/skills/<name>` | Symlink (via `make skills`) | Auto-generated | 187 187 | `AGENTS.md` Skills table | Add trigger description | If skill exists | 188 188 | `tests/test_<name>_call.py` | CLI tests | Yes | ··· 264 264 │ ├── todos/ 265 265 │ │ ├── call.py # sol call todos (auto-discovered) 266 266 │ │ ├── todo.py # Data models 267 - │ │ └── muse/todos/SKILL.md # Agent skill doc 267 + │ │ └── talent/todos/SKILL.md # Agent skill doc 268 268 │ ├── calendar/ 269 269 │ │ ├── call.py # sol call calendar (auto-discovered) 270 - │ │ └── muse/calendar/SKILL.md 270 + │ │ └── talent/calendar/SKILL.md 271 271 │ ├── entities/call.py 272 272 │ ├── speakers/call.py 273 273 │ ├── support/call.py ··· 275 275 │ ├── agent/call.py 276 276 │ ├── awareness/call.py 277 277 │ └── ... 
(web-only apps without call.py) 278 - ├── muse/ 278 + ├── talent/ 279 279 │ ├── journal/SKILL.md # Skills not tied to an app 280 280 │ ├── coding/SKILL.md 281 281 │ └── *.md # Agent prompt files ··· 296 296 | Think (processing) | `import`, `dream`, `planner`, `indexer`, `supervisor`, `schedule`, `top`, `health`, `callosum`, `notify`, `heartbeat` | 297 297 | Service | `service` (+ aliases `up`, `down`, `start`) | 298 298 | Observe (capture) | `transcribe`, `describe`, `sense`, `sync`, `transfer`, `remote` | 299 - | Muse (AI agents) | `agents`, `cortex`, `muse`, `call`, `engage` | 299 + | Talent (AI agents) | `agents`, `cortex`, `talent`, `call`, `engage` | 300 300 | Convey (web UI) | `convey`, `restart-convey`, `screenshot`, `maint` | 301 301 | Specialized | `config`, `streams`, `journal-stats`, `formatter`, `detect-created` | 302 302 | Help | `help`, `chat` | ··· 323 323 Skills are documented in `SKILL.md` files and symlinked into `.agents/skills/` by `make skills`. 324 324 325 325 **Skill locations:** 326 - - App skills: `apps/<name>/muse/<name>/SKILL.md` 327 - - Core skills: `muse/<name>/SKILL.md` 326 + - App skills: `apps/<name>/talent/<name>/SKILL.md` 327 + - Core skills: `talent/<name>/SKILL.md` 328 328 329 329 **Skill ≠ call command.** Not every skill has a corresponding `call.py`, and not every `call.py` has a skill: 330 330 - `health`, `coding`, `vit`, `onboarding` have skills but no `call.py`
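The auto-discovery noted in the tree above ("call.py (auto-discovered)") could work roughly as sketched below. The scanning function and its return shape are assumptions for illustration, not the real `sol call` wiring.

```python
from pathlib import Path

def discover_call_apps(apps_dir: Path) -> dict[str, str]:
    """Map app name -> dotted module path for every apps/<name>/call.py."""
    found = {}
    for call_py in sorted(apps_dir.glob("*/call.py")):
        app_name = call_py.parent.name
        # Each discovered module would be mounted as `sol call <app_name>`;
        # web-only apps without a call.py are simply skipped.
        found[app_name] = f"apps.{app_name}.call"
    return found
```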
+6 -6
docs/THINK.md
··· 20 20 - `sol agents` is the unified CLI for tool agents and generators (spawned by Cortex, NDJSON protocol). 21 21 - `sol supervisor` monitors observation heartbeats. Use `--no-observers` to disable local capture (sense still runs for remote uploads and imports). 22 22 - `sol cortex` starts a Callosum-based service for managing AI agent instances and generators. 23 - - `sol muse` lists available agents and generators with their configuration. Use `sol muse show <name>` to see details, and `sol muse show <name> --prompt` to see the fully composed prompt that would be sent to the LLM. 23 + - `sol talent` lists available agents and generators with their configuration. Use `sol talent show <name>` to see details, and `sol talent show <name> --prompt` to see the fully composed prompt that would be sent to the LLM. 24 24 25 25 ```bash 26 26 sol call transcripts read YYYYMMDD [--start HHMMSS --length MINUTES] 27 27 sol dream [--day YYYYMMDD] [--segment HHMMSS_LEN] [--stream NAME] [--refresh] [--flush] 28 28 sol supervisor [--no-observers] 29 29 sol cortex [--host HOST] [--port PORT] [--path PATH] 30 - sol muse list [--schedule daily|segment] [--json] 31 - sol muse show <name> [--prompt] [--day YYYYMMDD] [--segment HHMMSS_LEN] [--full] 30 + sol talent list [--schedule daily|segment] [--json] 31 + sol talent show <name> [--prompt] [--day YYYYMMDD] [--segment HHMMSS_LEN] [--full] 32 32 ``` 33 33 34 34 Use `--refresh` to overwrite existing files, and `-v` for verbose logs. ··· 175 175 176 176 ## Generator map keys 177 177 178 - `think.muse.get_muse_configs(has_tools=False)` reads the `.md` prompt files under `muse/` and 178 + `think.talent.get_muse_configs(has_tools=False)` reads the `.md` prompt files under `talent/` and 179 179 returns a dictionary keyed by generator name. 
Each entry contains: 180 180 181 181 - `path` – the prompt file path ··· 209 209 agents_info = cortex_agents(limit=10, agent_type="live") 210 210 print(f"Found {agents_info['live_count']} running agents") 211 211 ``` 212 - # Muse Module 212 + # Talent Module 213 213 214 214 AI agent system and tool-calling support for solstone. 215 215 ··· 251 251 252 252 ## Agent Personas 253 253 254 - System prompts in `muse/*.md` (markdown with JSON frontmatter). Apps can add custom agents in `apps/{app}/muse/`. 254 + System prompts in `talent/*.md` (markdown with JSON frontmatter). Apps can add custom agents in `apps/{app}/talent/`. 255 255 256 256 JSON metadata supports `title`, `provider`, `model`, `tools`, `schedule`, `priority`, `multi_facet`, and `load` keys. 257 257
-4
muse/__init__.py
··· 1 - # SPDX-License-Identifier: AGPL-3.0-only 2 - # Copyright (c) 2026 sol pbc 3 - 4 - """Muse - unified agentic prompts for agents and insights."""
+2 -2
muse/activities.py talent/activities.py
··· 26 26 import logging 27 27 import os 28 28 29 - from muse.activity_state import ( 29 + from talent.activity_state import ( 30 30 check_timeout, 31 31 find_previous_segment, 32 32 ) ··· 381 381 if not timed_out: 382 382 # Look back for facets missing from the adjacent previous segment 383 383 # but present in earlier segments (covers gaps from agent failures). 384 - from muse.activity_state import _get_preceding_segments 384 + from talent.activity_state import _get_preceding_segments 385 385 386 386 for earlier_seg in _get_preceding_segments(day, prev_segment, stream=stream): 387 387 if check_timeout(segment, earlier_seg):
muse/activity_state.py talent/activity_state.py
muse/anticipation.md talent/anticipation.md
+2 -2
muse/anticipation.py talent/anticipation.py
··· 20 20 write_events_jsonl, 21 21 ) 22 22 from think.models import generate 23 - from think.muse import get_output_name 23 + from think.talent import get_output_name 24 24 from think.prompts import load_prompt 25 25 26 26 ··· 57 57 try: 58 58 response_text = generate( 59 59 contents=contents, 60 - context=f"muse.system.{name}", 60 + context=f"talent.system.{name}", 61 61 temperature=0.3, 62 62 max_output_tokens=24576, 63 63 thinking_budget=0,
+1 -1
muse/chat.md talent/chat.md
··· 2 2 "type": "cogitate", 3 3 "title": "Sol", 4 4 "description": "Sol — the journal itself, as a conversational partner", 5 - "hook": {"pre": "muse/chat_context.py"} 5 + "hook": {"pre": "talent/chat_context.py"} 6 6 } 7 7 8 8 $sol_identity
+2 -2
muse/chat_context.py talent/chat_context.py
··· 3 3 4 4 """Pre-hook: inject chat-bar context into sol's user instruction. 5 5 6 - Replaces conversation_memory as the unified muse's pre-hook. 6 + Replaces conversation_memory as the unified talent's pre-hook. 7 7 Appends conversation memory, location/health context instructions, 8 8 awareness-conditional guidance, and behavioral defaults to the 9 9 identity-first prompt. ··· 261 261 262 262 263 263 def pre_process(context: dict) -> dict | None: 264 - """Append chat-context instructions to the unified muse prompt.""" 264 + """Append chat-context instructions to the unified talent prompt.""" 265 265 from think.awareness import get_imports, get_onboarding 266 266 from think.conversation import build_memory_context 267 267 from think.utils import get_config
muse/coder.md talent/coder.md
+2 -2
muse/coding/SKILL.md talent/coding/SKILL.md
··· 46 46 47 47 ```bash 48 48 make install # Install package (includes all deps) 49 - make skills # Discover and symlink Agent Skills from muse/ dirs 49 + make skills # Discover and symlink Agent Skills from talent/ dirs 50 50 make format # Auto-fix formatting, then report remaining issues 51 51 make test # Run unit tests 52 52 make ci # Full CI check (format check + lint + test) ··· 72 72 - `sol indexer --reset` — destructive index rebuild (read-only queries via `sol indexer` are fine) 73 73 74 74 Agents should use `sol call` commands for journal interaction and `sol health` / 75 - `sol muse logs` for diagnostics. 75 + `sol talent logs` for diagnostics. 76 76 77 77 ## Reference 78 78
muse/coding/reference/coding-standards.md talent/coding/reference/coding-standards.md
muse/coding/reference/environment.md talent/coding/reference/environment.md
+5 -5
muse/coding/reference/project-structure.md talent/coding/reference/project-structure.md
··· 9 9 ├── think/ # Data post-processing, AI agents & orchestration 10 10 ├── convey/ # Web app frontend & backend 11 11 ├── apps/ # Convey app extensions (see docs/APPS.md) 12 - ├── muse/ # Agent/generator configs + Agent Skills (muse/*/SKILL.md) 12 + ├── talent/ # Agent/generator configs + Agent Skills (talent/*/SKILL.md) 13 13 ├── tests/ # Pytest test suites + test fixtures under tests/fixtures/ 14 14 ├── docs/ # All documentation (*.md files) 15 15 ├── AGENTS.md # Development guidelines (this file) ··· 34 34 35 35 ## Agent & Skill Organization 36 36 37 - `muse/*.md` stores agent personas and generator templates. Apps can add their own in `apps/*/muse/*.md`. Skills live at `muse/*/SKILL.md` and are symlinked to `.agents/skills/` and `.claude/skills/` via `make skills`. 37 + `talent/*.md` stores agent personas and generator templates. Apps can add their own in `apps/*/talent/*.md`. Skills live at `talent/*/SKILL.md` and are symlinked to `.agents/skills/` and `.claude/skills/` via `make skills`. 
38 38 39 39 ## File Locations 40 40 41 41 - **Entry Points**: `sol.py` `COMMANDS` dict 42 42 - **Test Fixtures**: `tests/fixtures/journal/` - complete mock journal 43 43 - **Live Logs**: `journal/health/<service>.log` 44 - - **Agent Personas**: `muse/*.md` (apps can add their own in `muse/`, see [docs/APPS.md](docs/APPS.md)) 45 - - **Generator Templates**: `muse/*.md` (apps can add their own in `muse/`, see [docs/APPS.md](docs/APPS.md)) 46 - - **Agent Skills**: `muse/*/SKILL.md` - symlinked to `.agents/skills/` and `.claude/skills/` via `make skills`, read https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices to create the best skills 44 + - **Agent Personas**: `talent/*.md` (apps can add their own in `talent/`, see [docs/APPS.md](docs/APPS.md)) 45 + - **Generator Templates**: `talent/*.md` (apps can add their own in `talent/`, see [docs/APPS.md](docs/APPS.md)) 46 + - **Agent Skills**: `talent/*/SKILL.md` - symlinked to `.agents/skills/` and `.claude/skills/` via `make skills`, read https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices to create the best skills 47 47 - **Scratch Space**: `scratch/` - git-ignored local workspace
muse/coding/reference/testing.md talent/coding/reference/testing.md
+3 -3
muse/conversation_memory.py talent/conversation_memory.py
··· 1 1 # SPDX-License-Identifier: AGPL-3.0-only 2 2 # Copyright (c) 2026 sol pbc 3 3 4 - """Pre-hook: inject conversation memory into unified muse context. 4 + """Pre-hook: inject conversation memory into unified talent context. 5 5 6 6 Loaded via hook config: {"hook": {"pre": "conversation_memory"}} 7 7 8 - Replaces CONVERSATION_MEMORY_INJECTION_POINT in the unified muse's 8 + Replaces CONVERSATION_MEMORY_INJECTION_POINT in the unified talent's 9 9 user instruction with recent conversation exchanges and today's summary. 10 10 This gives the agent awareness of past conversations without needing 11 11 to search — recent interactions are always in context. ··· 17 17 18 18 19 19 def pre_process(context: dict) -> dict | None: 20 - """Inject conversation memory into the unified muse's user instruction. 20 + """Inject conversation memory into the unified talent's user instruction. 21 21 22 22 Args: 23 23 context: Full agent config dict.
muse/daily_schedule.md talent/daily_schedule.md
muse/daily_schedule.py talent/daily_schedule.py
muse/decisionalizer.md talent/decisionalizer.md
muse/decisions.md talent/decisions.md
muse/entities.md talent/entities.md
muse/entities.py talent/entities.py
muse/facet_newsletter.md talent/facet_newsletter.md
muse/firstday_checkin.md talent/firstday_checkin.md
muse/firstday_checkin.py talent/firstday_checkin.py
muse/flow.md talent/flow.md
muse/followups.md talent/followups.md
+2 -2
muse/heartbeat.md talent/heartbeat.md
··· 30 30 31 31 ## Step 2: Check journal quality 32 32 33 - Run `sol muse logs --daily -c 10` to review recent agent runs and 34 - `sol muse logs --errors -c 10` for recent errors. Look for: 33 + Run `sol talent logs --daily -c 10` to review recent agent runs and 34 + `sol talent logs --errors -c 10` for recent errors. Look for: 35 35 - Broken segments (transcription failures, missing agent output) 36 36 - Processing gaps (capture with no dream processing) 37 37 - Orphaned entities (zero observations after 7+ days)
muse/joke_bot.md talent/joke_bot.md
+1 -1
muse/journal/SKILL.md talent/journal/SKILL.md
··· 105 105 - `--emoji`: optional icon emoji (default: `📦`). 106 106 - `--color`: optional hex color (default: `#667eea`). 107 107 - `--description`: optional description text. 108 - - `--consent`: asserts that the agent has received a direct owner request or explicit owner approval before calling this command. Pass when acting proactively (cogitate, suggestion flows) rather than in direct response to an owner instruction. Omit for the onboarding muse — onboarding is owner-driven by definition. Adds `"consent": true` to the audit log entry. 108 + - `--consent`: asserts that the agent has received a direct owner request or explicit owner approval before calling this command. Pass when acting proactively (cogitate, suggestion flows) rather than in direct response to an owner instruction. Omit for the onboarding talent — onboarding is owner-driven by definition. Adds `"consent": true` to the audit log entry. 109 109 110 110 Examples: 111 111
muse/knowledge_graph.md talent/knowledge_graph.md
muse/meetings.md talent/meetings.md
muse/messaging.md talent/messaging.md
muse/morning_briefing.md talent/morning_briefing.md
+1 -1
muse/naming.md talent/naming.md
··· 8 8 9 9 ## Pre-hooks 10 10 11 - Before this muse runs, two checks must pass silently (no output on failure): 11 + Before this talent runs, two checks must pass silently (no output on failure): 12 12 13 13 1. **Thickness gate** — Run `sol call agent thickness`. If `ready` is `false`, exit silently. 14 14 2. **Name gate** — Run `sol call agent name`. If `name_status` is not `"default"`, exit silently.
muse/observation.md talent/observation.md
muse/observation.py talent/observation.py
muse/observation_review.md talent/observation_review.md
muse/occurrence.md talent/occurrence.md
+2 -2
muse/occurrence.py talent/occurrence.py
··· 20 20 write_events_jsonl, 21 21 ) 22 22 from think.models import generate 23 - from think.muse import get_output_name 23 + from think.talent import get_output_name 24 24 from think.prompts import load_prompt 25 25 26 26 ··· 62 62 try: 63 63 response_text = generate( 64 64 contents=contents, 65 - context=f"muse.system.{name}", 65 + context=f"talent.system.{name}", 66 66 temperature=0.3, 67 67 max_output_tokens=24576, 68 68 thinking_budget=0,
muse/onboarding.md talent/onboarding.md
muse/onboarding/SKILL.md talent/onboarding/SKILL.md
muse/partner.md talent/partner.md
+2 -2
muse/patterns/provenance.md talent/patterns/provenance.md
··· 2 2 3 3 How cogitate agents communicate the basis and reliability of their claims. This pattern ensures briefings and reports distinguish between well-sourced facts and inferences. 4 4 5 - Canonical implementation: `muse/morning_briefing.md`. 5 + Canonical implementation: `talent/morning_briefing.md`. 6 6 7 7 ## Four Mechanisms 8 8 ··· 50 50 51 51 **Bidirectional rule:** Strong evidence must NOT get hedging language. Weak evidence must NOT get assertive language. Both directions must be enforced. 52 52 53 - **Upstream confidence scores:** Some agents (e.g., `muse/followups.md`, `muse/decisions.md`) emit a `Confidence: 0.0–1.0` field per item. When consuming this output, use the score to inform language grading. The briefing expresses confidence through language, not by forwarding the numeric score. 53 + **Upstream confidence scores:** Some agents (e.g., `talent/followups.md`, `talent/decisions.md`) emit a `Confidence: 0.0–1.0` field per item. When consuming this output, use the score to inform language grading. The briefing expresses confidence through language, not by forwarding the numeric score. 54 54 55 55 ### 4. Tool Error Guard 56 56
muse/pulse.md talent/pulse.md
muse/routine.md talent/routine.md
muse/routines/SKILL.md talent/routines/SKILL.md
muse/schedule.md talent/schedule.md
muse/screen.md talent/screen.md
muse/sense.md talent/sense.md
muse/speaker_attribution.md talent/speaker_attribution.md
+1 -1
muse/speaker_attribution.py talent/speaker_attribution.py
··· 1 1 # SPDX-License-Identifier: AGPL-3.0-only 2 2 # Copyright (c) 2026 sol pbc 3 3 4 - """Speaker attribution muse hook — orchestrates the 4-layer pipeline. 4 + """Speaker attribution talent hook — orchestrates the 4-layer pipeline. 5 5 6 6 pre_process: Runs Layers 1-3 (computational). If all sentences are 7 7 resolved, writes speaker_labels.json and skips the LLM.
muse/timeline.md talent/timeline.md
+1 -1
muse/triage.md talent/triage.md
··· 107 107 108 108 1. Run `sol call agent name` to check status. 109 109 2. If `name_status` is `"default"`, run `sol call agent thickness` to check readiness. 110 - 3. If `ready` is `true`, mention that you've been getting to know the owner and offer to suggest a name — or let the naming muse handle it. 110 + 3. If `ready` is `true`, mention that you've been getting to know the owner and offer to suggest a name — or let the naming talent handle it. 111 111 4. Only do this once per session. If you've already checked or offered, don't repeat. 112 112 5. If `name_status` is `"chosen"` or `"self-named"`, do nothing. 113 113
muse/vit/SKILL.md talent/vit/SKILL.md
+3 -3
pyproject.toml
··· 92 92 "Bug Tracker" = "https://github.com/solpbc/solstone/issues" 93 93 94 94 [tool.setuptools.packages.find] 95 - include = ["apps*", "think*", "convey*", "observe*", "muse*"] 95 + include = ["apps*", "think*", "convey*", "observe*", "talent*"] 96 96 97 97 [tool.setuptools] 98 98 py-modules = ["media", "sol"] 99 99 100 100 [tool.setuptools.package-data] 101 - apps = ["*/templates/*.html", "*/muse/*.md"] 101 + apps = ["*/templates/*.html", "*/talent/*.md"] 102 102 think = ["*.md", "*.json", "templates/*.md"] 103 - muse = ["*.md", "*.py"] 103 + talent = ["*.md", "*.py"] 104 104 observe = ["*.md", "categories/*.md", "transcribe/*.md"] 105 105 convey = [ 106 106 "templates/*.html",
+4 -4
sol.py
··· 60 60 "sync": "observe.sync", 61 61 "transfer": "observe.transfer", 62 62 "remote": "observe.remote_cli", 63 - # AI agents (formerly muse package) 63 + # AI agents (talent package) 64 64 "agents": "think.agents", 65 65 "cortex": "think.cortex", 66 - "muse": "think.muse_cli", 66 + "talent": "think.talent_cli", 67 67 "call": "think.call", 68 68 "engage": "think.engage", 69 69 "help": "think.help_cli", ··· 117 117 "transfer", 118 118 "remote", 119 119 ], 120 - "Muse (AI agents)": [ 120 + "Talent (AI agents)": [ 121 121 "agents", 122 122 "cortex", 123 - "muse", 123 + "talent", 124 124 "call", 125 125 "engage", 126 126 ],
+4
talent/__init__.py
··· 1 + # SPDX-License-Identifier: AGPL-3.0-only 2 + # Copyright (c) 2026 sol pbc 3 + 4 + """Talent - unified agentic prompts for agents and insights."""
+40 -40
tests/baselines/api/settings/providers.json
··· 20 20 "label": "Date Detection", 21 21 "tier": 3 22 22 }, 23 - "muse.entities.entities": { 23 + "talent.entities.entities": { 24 24 "disabled": false, 25 25 "group": "Entities", 26 26 "label": "Entity Detector", ··· 28 28 "tier": 3, 29 29 "type": "cogitate" 30 30 }, 31 - "muse.entities.entities_review": { 31 + "talent.entities.entities_review": { 32 32 "disabled": false, 33 33 "group": "Entities", 34 34 "label": "Entity Reviewer", ··· 36 36 "tier": 2, 37 37 "type": "cogitate" 38 38 }, 39 - "muse.entities.entity_assist": { 39 + "talent.entities.entity_assist": { 40 40 "disabled": false, 41 41 "group": "Entities", 42 42 "label": "Entity Assistant", 43 43 "tier": 2, 44 44 "type": "cogitate" 45 45 }, 46 - "muse.entities.entity_describe": { 46 + "talent.entities.entity_describe": { 47 47 "disabled": false, 48 48 "group": "Entities", 49 49 "label": "Entity Description", 50 50 "tier": 2, 51 51 "type": "cogitate" 52 52 }, 53 - "muse.entities.entity_observer": { 53 + "talent.entities.entity_observer": { 54 54 "disabled": false, 55 55 "group": "Entities", 56 56 "label": "Entity Observer", ··· 58 58 "tier": 3, 59 59 "type": "cogitate" 60 60 }, 61 - "muse.support.support": { 61 + "talent.support.support": { 62 62 "disabled": false, 63 63 "group": "Think", 64 64 "label": "Support", 65 65 "tier": 2, 66 66 "type": "cogitate" 67 67 }, 68 - "muse.system.anticipation": { 68 + "talent.system.anticipation": { 69 69 "disabled": false, 70 70 "group": "Think", 71 71 "label": "Anticipation Extraction", 72 72 "tier": 2, 73 73 "type": null 74 74 }, 75 - "muse.system.chat": { 75 + "talent.system.chat": { 76 76 "disabled": false, 77 77 "group": "Think", 78 78 "label": "Sol", 79 79 "tier": 2, 80 80 "type": "cogitate" 81 81 }, 82 - "muse.system.coder": { 82 + "talent.system.coder": { 83 83 "disabled": false, 84 84 "group": "Think", 85 85 "label": "Coder", 86 86 "tier": 2, 87 87 "type": "cogitate" 88 88 }, 89 - "muse.system.daily_schedule": { 89 + "talent.system.daily_schedule": { 90 90 
"disabled": false, 91 91 "group": "Think", 92 92 "label": "Maintenance Window", ··· 94 94 "tier": 2, 95 95 "type": "generate" 96 96 }, 97 - "muse.system.decisionalizer": { 97 + "talent.system.decisionalizer": { 98 98 "disabled": false, 99 99 "group": "Think", 100 100 "label": "Decision Dossier Generator", ··· 102 102 "tier": 2, 103 103 "type": "cogitate" 104 104 }, 105 - "muse.system.decisions": { 105 + "talent.system.decisions": { 106 106 "disabled": false, 107 107 "extract": true, 108 108 "group": "Think", ··· 111 111 "tier": 2, 112 112 "type": "generate" 113 113 }, 114 - "muse.system.entities": { 114 + "talent.system.entities": { 115 115 "disabled": false, 116 116 "group": "Think", 117 117 "label": "Entity Extraction", ··· 119 119 "tier": 2, 120 120 "type": "generate" 121 121 }, 122 - "muse.system.facet_newsletter": { 122 + "talent.system.facet_newsletter": { 123 123 "disabled": false, 124 124 "group": "Think", 125 125 "label": "Facet Newsletter Generator", ··· 127 127 "tier": 3, 128 128 "type": "cogitate" 129 129 }, 130 - "muse.system.firstday_checkin": { 130 + "talent.system.firstday_checkin": { 131 131 "disabled": false, 132 132 "group": "Think", 133 133 "label": "First-Day Check-In", ··· 135 135 "tier": 3, 136 136 "type": "generate" 137 137 }, 138 - "muse.system.flow": { 138 + "talent.system.flow": { 139 139 "disabled": false, 140 140 "extract": true, 141 141 "group": "Think", ··· 144 144 "tier": 2, 145 145 "type": "generate" 146 146 }, 147 - "muse.system.followups": { 147 + "talent.system.followups": { 148 148 "disabled": false, 149 149 "extract": true, 150 150 "group": "Think", ··· 153 153 "tier": 2, 154 154 "type": "generate" 155 155 }, 156 - "muse.system.heartbeat": { 156 + "talent.system.heartbeat": { 157 157 "disabled": false, 158 158 "group": "Think", 159 159 "label": "Heartbeat", ··· 161 161 "tier": 2, 162 162 "type": "cogitate" 163 163 }, 164 - "muse.system.joke_bot": { 164 + "talent.system.joke_bot": { 165 165 "disabled": false, 166 166 "group": 
"Think", 167 167 "label": "Joke Bot", ··· 169 169 "tier": 2, 170 170 "type": "cogitate" 171 171 }, 172 - "muse.system.knowledge_graph": { 172 + "talent.system.knowledge_graph": { 173 173 "disabled": false, 174 174 "extract": true, 175 175 "group": "Think", ··· 178 178 "tier": 2, 179 179 "type": "generate" 180 180 }, 181 - "muse.system.meetings": { 181 + "talent.system.meetings": { 182 182 "disabled": false, 183 183 "extract": true, 184 184 "group": "Think", ··· 187 187 "tier": 2, 188 188 "type": "generate" 189 189 }, 190 - "muse.system.messaging": { 190 + "talent.system.messaging": { 191 191 "disabled": false, 192 192 "extract": true, 193 193 "group": "Think", ··· 196 196 "tier": 2, 197 197 "type": "generate" 198 198 }, 199 - "muse.system.morning_briefing": { 199 + "talent.system.morning_briefing": { 200 200 "disabled": false, 201 201 "group": "Think", 202 202 "label": "Morning Briefing", ··· 204 204 "tier": 2, 205 205 "type": "cogitate" 206 206 }, 207 - "muse.system.naming": { 207 + "talent.system.naming": { 208 208 "disabled": false, 209 209 "group": "Think", 210 210 "label": "Naming", 211 211 "tier": 2, 212 212 "type": "cogitate" 213 213 }, 214 - "muse.system.observation": { 214 + "talent.system.observation": { 215 215 "disabled": false, 216 216 "group": "Think", 217 217 "label": "Observation", ··· 219 219 "tier": 3, 220 220 "type": "generate" 221 221 }, 222 - "muse.system.observation_review": { 222 + "talent.system.observation_review": { 223 223 "disabled": false, 224 224 "group": "Think", 225 225 "label": "Observation Review", 226 226 "tier": 2, 227 227 "type": "cogitate" 228 228 }, 229 - "muse.system.occurrence": { 229 + "talent.system.occurrence": { 230 230 "disabled": false, 231 231 "group": "Think", 232 232 "label": "Occurrence Extraction", 233 233 "tier": 2, 234 234 "type": null 235 235 }, 236 - "muse.system.onboarding": { 236 + "talent.system.onboarding": { 237 237 "disabled": false, 238 238 "group": "Think", 239 239 "label": "Onboarding", 240 240 
"tier": 2, 241 241 "type": "cogitate" 242 242 }, 243 - "muse.system.partner": { 243 + "talent.system.partner": { 244 244 "disabled": false, 245 245 "group": "Think", 246 246 "label": "Partner Profile", ··· 248 248 "tier": 2, 249 249 "type": "cogitate" 250 250 }, 251 - "muse.system.pulse": { 251 + "talent.system.pulse": { 252 252 "disabled": false, 253 253 "group": "Think", 254 254 "label": "Pulse", ··· 256 256 "tier": 3, 257 257 "type": "cogitate" 258 258 }, 259 - "muse.system.routine": { 259 + "talent.system.routine": { 260 260 "disabled": false, 261 261 "group": "Think", 262 262 "label": "Routine", ··· 264 264 "tier": 2, 265 265 "type": "cogitate" 266 266 }, 267 - "muse.system.schedule": { 267 + "talent.system.schedule": { 268 268 "disabled": false, 269 269 "extract": true, 270 270 "group": "Think", ··· 273 273 "tier": 2, 274 274 "type": "generate" 275 275 }, 276 - "muse.system.screen": { 276 + "talent.system.screen": { 277 277 "disabled": false, 278 278 "group": "Think", 279 279 "label": "Screen Record", ··· 281 281 "tier": 2, 282 282 "type": "generate" 283 283 }, 284 - "muse.system.sense": { 284 + "talent.system.sense": { 285 285 "disabled": false, 286 286 "group": "Think", 287 287 "label": "Segment Sense", ··· 289 289 "tier": 3, 290 290 "type": "generate" 291 291 }, 292 - "muse.system.speaker_attribution": { 292 + "talent.system.speaker_attribution": { 293 293 "disabled": false, 294 294 "group": "Think", 295 295 "label": "Speaker Attribution", ··· 297 297 "tier": 2, 298 298 "type": "generate" 299 299 }, 300 - "muse.system.timeline": { 300 + "talent.system.timeline": { 301 301 "disabled": false, 302 302 "extract": true, 303 303 "group": "Think", ··· 306 306 "tier": 2, 307 307 "type": "generate" 308 308 }, 309 - "muse.system.triage": { 309 + "talent.system.triage": { 310 310 "disabled": false, 311 311 "group": "Think", 312 312 "label": "Triage", 313 313 "tier": 2, 314 314 "type": "cogitate" 315 315 }, 316 - "muse.todos.daily": { 316 + "talent.todos.daily": { 317 
317 "disabled": false, 318 318 "group": "Todos", 319 319 "label": "Daily TODO Curator", ··· 321 321 "tier": 3, 322 322 "type": "cogitate" 323 323 }, 324 - "muse.todos.todo": { 324 + "talent.todos.todo": { 325 325 "disabled": false, 326 326 "group": "Todos", 327 327 "label": "TODO Detector", ··· 329 329 "tier": 3, 330 330 "type": "cogitate" 331 331 }, 332 - "muse.todos.weekly": { 332 + "talent.todos.weekly": { 333 333 "disabled": false, 334 334 "group": "Todos", 335 335 "label": "TODO Weekly Scout",
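The renamed registry keys above are dotted module paths (`talent.system.flow`, `talent.todos.daily`). For legacy journal data that still records `muse.*` context strings, the commit message describes a read-time normalization shim. A minimal sketch of that idea, assuming a standalone helper (the function name is illustrative, not the repository's actual API):

```python
# Hypothetical sketch: normalize legacy "muse.*" context strings to
# "talent.*" at read time, per the backward-compat shim described in
# the commit message. The function name is an assumption.

def normalize_context(context):
    """Map a legacy dotted context like 'muse.system.flow' to the
    renamed 'talent.system.flow'; other contexts pass through."""
    if context == "muse" or context.startswith("muse."):
        return "talent" + context[len("muse"):]
    return context


assert normalize_context("muse.system.flow") == "talent.system.flow"
assert normalize_context("talent.todos.daily") == "talent.todos.daily"
assert normalize_context("amused.module") == "amused.module"  # no false prefix match
```

Matching on the exact `muse` token (bare or followed by a dot) avoids rewriting unrelated strings that merely contain the substring.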
+15 -15
tests/baselines/api/stats/stats.json
··· 17 17 "max_output_tokens": 512, 18 18 "mtime": 0, 19 19 "output": "json", 20 - "path": "<PROJECT>/muse/daily_schedule.md", 20 + "path": "<PROJECT>/talent/daily_schedule.md", 21 21 "priority": 10, 22 22 "schedule": "daily", 23 23 "source": "system", ··· 47 47 "mtime": 0, 48 48 "occurrences": "Create an occurrence for every decision-action observed. Include the time span, decision type, actors involved, entities affected, and impact assessment. Each occurrence should capture both the intent and enactment of the decision.", 49 49 "output": "md", 50 - "path": "<PROJECT>/muse/decisions.md", 50 + "path": "<PROJECT>/talent/decisions.md", 51 51 "priority": 10, 52 52 "schedule": "activity", 53 53 "source": "system", ··· 68 68 "max_output_tokens": 1024, 69 69 "mtime": 0, 70 70 "output": "md", 71 - "path": "<PROJECT>/muse/entities.md", 71 + "path": "<PROJECT>/talent/entities.md", 72 72 "priority": 10, 73 73 "schedule": "segment", 74 74 "source": "system", ··· 89 89 "max_output_tokens": 256, 90 90 "mtime": 0, 91 91 "output": "text", 92 - "path": "<PROJECT>/muse/firstday_checkin.md", 92 + "path": "<PROJECT>/talent/firstday_checkin.md", 93 93 "priority": 98, 94 94 "schedule": "segment", 95 95 "source": "system", ··· 114 114 "mtime": 0, 115 115 "occurrences": "Create an occurrence for noteworthy shifts in work rhythms or focus. Include timestamps when deep work starts or ends, or when energy levels noticeably change. Classify each as work or personal based on the surrounding context.", 116 116 "output": "md", 117 - "path": "<PROJECT>/muse/flow.md", 117 + "path": "<PROJECT>/talent/flow.md", 118 118 "priority": 10, 119 119 "schedule": "daily", 120 120 "source": "system", ··· 143 143 "mtime": 0, 144 144 "occurrences": "Whenever a future task or commitment is mentioned, create an occurrence with the expected action and deadline if known. 
Note who requested it and whether it is work or personal.", 145 145 "output": "md", 146 - "path": "<PROJECT>/muse/followups.md", 146 + "path": "<PROJECT>/talent/followups.md", 147 147 "priority": 10, 148 148 "schedule": "activity", 149 149 "source": "system", ··· 166 166 "mtime": 0, 167 167 "occurrences": "For each entity interaction or relationship mentioned, create an occurrence describing the connection. Include start and end times when the relationship is visible, and capture the type of link such as works-on or discusses-with.", 168 168 "output": "md", 169 - "path": "<PROJECT>/muse/knowledge_graph.md", 169 + "path": "<PROJECT>/talent/knowledge_graph.md", 170 170 "priority": 10, 171 171 "schedule": "daily", 172 172 "source": "system", ··· 192 192 "mtime": 0, 193 193 "occurrences": "Each meeting should generate an occurrence with start and end times, list of participants and a concise summary. If slides are present, mention them in the details field.", 194 194 "output": "md", 195 - "path": "<PROJECT>/muse/meetings.md", 195 + "path": "<PROJECT>/talent/meetings.md", 196 196 "priority": 10, 197 197 "schedule": "activity", 198 198 "source": "system", ··· 219 219 "mtime": 0, 220 220 "occurrences": "Create an occurrence for every distinct message interaction. 
Include the time block, app name, contacts or channels involved, whether $preferred was reading or replying, and a summary of visible content.", 221 221 "output": "md", 222 - "path": "<PROJECT>/muse/messaging.md", 222 + "path": "<PROJECT>/talent/messaging.md", 223 223 "priority": 10, 224 224 "schedule": "activity", 225 225 "source": "system", ··· 244 244 "max_output_tokens": 2048, 245 245 "mtime": 0, 246 246 "output": "json", 247 - "path": "<PROJECT>/muse/observation.md", 247 + "path": "<PROJECT>/talent/observation.md", 248 248 "priority": 97, 249 249 "schedule": "segment", 250 250 "source": "system", ··· 268 268 }, 269 269 "mtime": 0, 270 270 "output": "md", 271 - "path": "<PROJECT>/muse/schedule.md", 271 + "path": "<PROJECT>/talent/schedule.md", 272 272 "priority": 10, 273 273 "schedule": "daily", 274 274 "source": "system", ··· 285 285 }, 286 286 "mtime": 0, 287 287 "output": "md", 288 - "path": "<PROJECT>/muse/screen.md", 288 + "path": "<PROJECT>/talent/screen.md", 289 289 "priority": 10, 290 290 "schedule": "segment", 291 291 "source": "system", ··· 303 303 "max_output_tokens": 4096, 304 304 "mtime": 0, 305 305 "output": "json", 306 - "path": "<PROJECT>/muse/sense.md", 306 + "path": "<PROJECT>/talent/sense.md", 307 307 "priority": 5, 308 308 "schedule": "segment", 309 309 "source": "system", ··· 328 328 }, 329 329 "mtime": 0, 330 330 "output": "json", 331 - "path": "<PROJECT>/muse/speaker_attribution.md", 331 + "path": "<PROJECT>/talent/speaker_attribution.md", 332 332 "priority": 40, 333 333 "schedule": "segment", 334 334 "source": "system", ··· 351 351 "mtime": 0, 352 352 "occurrences": "Create an occurrence for each hour segment, don't break down hours into any smaller segments the goal for timeline occurrences is for them to capture whatever happened within each hour of the day where there was activity.", 353 353 "output": "md", 354 - "path": "<PROJECT>/muse/timeline.md", 354 + "path": "<PROJECT>/talent/timeline.md", 355 355 "priority": 10, 356 356 
"schedule": "daily", 357 357 "source": "system",
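Alongside the path updates above, the commit message notes that exchange records are read with a field-name shim: both the new `talent` key and the legacy `muse` key are accepted. A sketch of that reader, assuming a flat dict-shaped record (the helper name and record shape are assumptions for illustration):

```python
# Hypothetical sketch of the exchange-reading shim from the commit
# message: accept both the renamed "talent" field and the legacy
# "muse" field, preferring the new name when both are present.

def read_exchange_talent(record):
    """Return the talent name from an exchange record, falling back
    to the legacy 'muse' key written by pre-rename versions."""
    return record.get("talent", record.get("muse"))


assert read_exchange_talent({"talent": "chat"}) == "chat"
assert read_exchange_talent({"muse": "chat"}) == "chat"  # legacy record
assert read_exchange_talent({"muse": "old", "talent": "new"}) == "new"
```

Reading both keys keeps old journal data loadable without a migration pass.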
+40 -40
tests/test_activities.py
··· 449 449 ], 450 450 ) 451 451 452 - from muse.activity_state import format_activities_context 452 + from talent.activity_state import format_activities_context 453 453 454 454 output = format_activities_context("test_facet") 455 455 ··· 632 632 633 633 634 634 # --------------------------------------------------------------------------- 635 - # Activities Agent Hooks (muse/activities.py) 635 + # Activities Agent Hooks (talent/activities.py) 636 636 # --------------------------------------------------------------------------- 637 637 638 638 ··· 661 661 662 662 class TestListFacetsWithActivityState: 663 663 def test_finds_facets(self, monkeypatch): 664 - from muse.activities import _list_facets_with_activity_state 664 + from talent.activities import _list_facets_with_activity_state 665 665 666 666 with tempfile.TemporaryDirectory() as tmpdir: 667 667 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 675 675 assert facets == ["personal", "work"] 676 676 677 677 def test_returns_empty_for_nonexistent(self, monkeypatch): 678 - from muse.activities import _list_facets_with_activity_state 678 + from talent.activities import _list_facets_with_activity_state 679 679 680 680 with tempfile.TemporaryDirectory() as tmpdir: 681 681 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 689 689 690 690 class TestDetectEndedActivities: 691 691 def test_explicit_ended(self): 692 - from muse.activities import _detect_ended_activities 692 + from talent.activities import _detect_ended_activities 693 693 694 694 prev = [ 695 695 {"activity": "coding", "state": "active", "since": "100000_300"}, ··· 702 702 assert ended[0]["activity"] == "coding" 703 703 704 704 def test_implicit_ended(self): 705 - from muse.activities import _detect_ended_activities 705 + from talent.activities import _detect_ended_activities 706 706 707 707 prev = [ 708 708 {"activity": "coding", "state": "active", "since": "100000_300"}, ··· 716 716 assert ended[0]["activity"] == "coding" 717 717 
718 718 def test_timeout_ends_all(self): 719 - from muse.activities import _detect_ended_activities 719 + from talent.activities import _detect_ended_activities 720 720 721 721 prev = [ 722 722 {"activity": "coding", "state": "active", "since": "100000_300"}, ··· 726 726 assert len(ended) == 2 727 727 728 728 def test_continuing_not_ended(self): 729 - from muse.activities import _detect_ended_activities 729 + from talent.activities import _detect_ended_activities 730 730 731 731 prev = [ 732 732 {"activity": "coding", "state": "active", "since": "100000_300"}, ··· 738 738 assert len(ended) == 0 739 739 740 740 def test_ignores_previously_ended(self): 741 - from muse.activities import _detect_ended_activities 741 + from talent.activities import _detect_ended_activities 742 742 743 743 prev = [ 744 744 {"activity": "coding", "state": "ended", "since": "090000_300"}, ··· 751 751 752 752 def test_new_activity_same_type(self): 753 753 """A new activity of same type with different since is not the same.""" 754 - from muse.activities import _detect_ended_activities 754 + from talent.activities import _detect_ended_activities 755 755 756 756 prev = [ 757 757 {"activity": "coding", "state": "active", "since": "100000_300"}, ··· 766 766 767 767 class TestWalkActivitySegments: 768 768 def test_walks_segments(self, monkeypatch): 769 - from muse.activities import _walk_activity_segments 769 + from talent.activities import _walk_activity_segments 770 770 771 771 with tempfile.TemporaryDirectory() as tmpdir: 772 772 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 814 814 assert result["active_entities"] == ["VS Code", "Claude Code"] 815 815 816 816 def test_deduplicates_entities(self, monkeypatch): 817 - from muse.activities import _walk_activity_segments 817 + from talent.activities import _walk_activity_segments 818 818 819 819 with tempfile.TemporaryDirectory() as tmpdir: 820 820 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 857 857 assert 
result["active_entities"] == ["VS Code", "Git", "Claude Code"] 858 858 859 859 def test_empty_when_no_match(self, monkeypatch): 860 - from muse.activities import _walk_activity_segments 860 + from talent.activities import _walk_activity_segments 861 861 862 862 with tempfile.TemporaryDirectory() as tmpdir: 863 863 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 873 873 """Tests for the activities pre_process hook.""" 874 874 875 875 def test_skips_when_no_previous_segment(self, monkeypatch): 876 - from muse.activities import pre_process 876 + from talent.activities import pre_process 877 877 878 878 with tempfile.TemporaryDirectory() as tmpdir: 879 879 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 889 889 assert "skip_reason" in result 890 890 891 891 def test_skips_when_no_ended_activities(self, monkeypatch): 892 - from muse.activities import pre_process 892 + from talent.activities import pre_process 893 893 894 894 with tempfile.TemporaryDirectory() as tmpdir: 895 895 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 930 930 assert result.get("skip_reason") == "no_ended_activities" 931 931 932 932 def test_detects_ended_and_writes_record(self, monkeypatch): 933 - from muse.activities import pre_process 933 + from talent.activities import pre_process 934 934 from think.activities import load_activity_records 935 935 936 936 with tempfile.TemporaryDirectory() as tmpdir: ··· 968 968 assert records[0]["segments"] == ["100000_300"] 969 969 970 970 def test_idempotent_on_rerun(self, monkeypatch): 971 - from muse.activities import pre_process 971 + from talent.activities import pre_process 972 972 from think.activities import load_activity_records 973 973 974 974 with tempfile.TemporaryDirectory() as tmpdir: ··· 999 999 assert len(records) == 1 1000 1000 1001 1001 def test_multi_facet_detection(self, monkeypatch): 1002 - from muse.activities import pre_process 1002 + from talent.activities import pre_process 1003 1003 from 
think.activities import load_activity_records 1004 1004 1005 1005 with tempfile.TemporaryDirectory() as tmpdir: ··· 1054 1054 1055 1055 def test_multi_segment_span(self, monkeypatch): 1056 1056 """Activity spanning multiple segments should collect all segments.""" 1057 - from muse.activities import pre_process 1057 + from talent.activities import pre_process 1058 1058 from think.activities import load_activity_records 1059 1059 1060 1060 with tempfile.TemporaryDirectory() as tmpdir: ··· 1127 1127 """Tests for the activities post_process hook.""" 1128 1128 1129 1129 def test_updates_descriptions(self, monkeypatch): 1130 - from muse.activities import post_process 1130 + from talent.activities import post_process 1131 1131 from think.activities import append_activity_record, load_activity_records 1132 1132 1133 1133 with tempfile.TemporaryDirectory() as tmpdir: ··· 1162 1162 ) 1163 1163 1164 1164 def test_handles_invalid_json(self): 1165 - from muse.activities import post_process 1165 + from talent.activities import post_process 1166 1166 1167 1167 result = post_process("not json", {"day": "20260209"}) 1168 1168 assert result is None 1169 1169 1170 1170 def test_handles_non_object(self): 1171 - from muse.activities import post_process 1171 + from talent.activities import post_process 1172 1172 1173 1173 result = post_process("[]", {"day": "20260209"}) 1174 1174 assert result is None 1175 1175 1176 1176 def test_returns_none(self, monkeypatch): 1177 - from muse.activities import post_process 1177 + from talent.activities import post_process 1178 1178 1179 1179 with tempfile.TemporaryDirectory() as tmpdir: 1180 1180 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 1204 1204 """Tests for pre-hook stashing record data in meta.""" 1205 1205 1206 1206 def test_meta_contains_activity_records(self, monkeypatch): 1207 - from muse.activities import pre_process 1207 + from talent.activities import pre_process 1208 1208 1209 1209 with tempfile.TemporaryDirectory() as 
tmpdir: 1210 1210 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 1248 1248 assert rec["description"] == "Writing code" 1249 1249 1250 1250 def test_meta_multiple_facets(self, monkeypatch): 1251 - from muse.activities import pre_process 1251 + from talent.activities import pre_process 1252 1252 1253 1253 with tempfile.TemporaryDirectory() as tmpdir: 1254 1254 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 1299 1299 def test_emits_events_with_llm_description(self, monkeypatch): 1300 1300 from unittest.mock import patch 1301 1301 1302 - from muse.activities import post_process 1302 + from talent.activities import post_process 1303 1303 from think.activities import append_activity_record 1304 1304 1305 1305 with tempfile.TemporaryDirectory() as tmpdir: ··· 1339 1339 } 1340 1340 } 1341 1341 1342 - with patch("muse.activities.callosum_send") as mock_send: 1342 + with patch("talent.activities.callosum_send") as mock_send: 1343 1343 mock_send.return_value = True 1344 1344 post_process( 1345 1345 llm_result, ··· 1363 1363 def test_falls_back_to_prehook_description_with_warning(self, monkeypatch, caplog): 1364 1364 from unittest.mock import patch 1365 1365 1366 - from muse.activities import post_process 1366 + from talent.activities import post_process 1367 1367 1368 1368 with tempfile.TemporaryDirectory() as tmpdir: 1369 1369 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 1383 1383 } 1384 1384 1385 1385 # LLM returns empty — no descriptions to update 1386 - with patch("muse.activities.callosum_send") as mock_send: 1386 + with patch("talent.activities.callosum_send") as mock_send: 1387 1387 mock_send.return_value = True 1388 1388 import logging 1389 1389 1390 - with caplog.at_level(logging.WARNING, logger="muse.activities"): 1390 + with caplog.at_level(logging.WARNING, logger="talent.activities"): 1391 1391 post_process( 1392 1392 "{}", 1393 1393 {"day": "20260209", "segment": "100500_300", "meta": meta}, ··· 1401 1401 def 
test_no_events_without_meta(self, monkeypatch): 1402 1402 from unittest.mock import patch 1403 1403 1404 - from muse.activities import post_process 1404 + from talent.activities import post_process 1405 1405 1406 1406 with tempfile.TemporaryDirectory() as tmpdir: 1407 1407 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) 1408 1408 1409 - with patch("muse.activities.callosum_send") as mock_send: 1409 + with patch("talent.activities.callosum_send") as mock_send: 1410 1410 post_process("{}", {"day": "20260209", "segment": "100500_300"}) 1411 1411 mock_send.assert_not_called() 1412 1412 1413 1413 def test_event_emission_failure_does_not_raise(self, monkeypatch): 1414 1414 from unittest.mock import patch 1415 1415 1416 - from muse.activities import post_process 1416 + from talent.activities import post_process 1417 1417 1418 1418 with tempfile.TemporaryDirectory() as tmpdir: 1419 1419 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 1432 1432 } 1433 1433 } 1434 1434 1435 - with patch("muse.activities.callosum_send") as mock_send: 1435 + with patch("talent.activities.callosum_send") as mock_send: 1436 1436 mock_send.side_effect = OSError("socket error") 1437 1437 # Should not raise 1438 1438 result = post_process( ··· 1556 1556 """Tests for the activities pre_process hook in flush mode.""" 1557 1557 1558 1558 def test_flush_ends_all_active_activities(self, monkeypatch): 1559 - from muse.activities import pre_process 1559 + from talent.activities import pre_process 1560 1560 from think.activities import load_activity_records 1561 1561 1562 1562 with tempfile.TemporaryDirectory() as tmpdir: ··· 1601 1601 assert records[0]["segments"] == ["100000_300"] 1602 1602 1603 1603 def test_flush_skips_when_no_active_activities(self, monkeypatch): 1604 - from muse.activities import pre_process 1604 + from talent.activities import pre_process 1605 1605 1606 1606 with tempfile.TemporaryDirectory() as tmpdir: 1607 1607 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", 
tmpdir) ··· 1632 1632 assert result["skip_reason"] == "no_active_activities" 1633 1633 1634 1634 def test_flush_skips_when_no_activity_state(self, monkeypatch): 1635 - from muse.activities import pre_process 1635 + from talent.activities import pre_process 1636 1636 1637 1637 with tempfile.TemporaryDirectory() as tmpdir: 1638 1638 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir) ··· 1652 1652 assert result["skip_reason"] == "no_activity_state" 1653 1653 1654 1654 def test_flush_handles_multiple_facets(self, monkeypatch): 1655 - from muse.activities import pre_process 1655 + from talent.activities import pre_process 1656 1656 from think.activities import load_activity_records 1657 1657 1658 1658 with tempfile.TemporaryDirectory() as tmpdir: ··· 1707 1707 assert len(personal_records) == 1 1708 1708 1709 1709 def test_flush_is_idempotent(self, monkeypatch): 1710 - from muse.activities import pre_process 1710 + from talent.activities import pre_process 1711 1711 from think.activities import load_activity_records 1712 1712 1713 1713 with tempfile.TemporaryDirectory() as tmpdir: ··· 1742 1742 assert len(records) == 1 1743 1743 1744 1744 def test_flush_stashes_meta_for_post_hook(self, monkeypatch): 1745 - from muse.activities import pre_process 1745 + from talent.activities import pre_process 1746 1746 1747 1747 with tempfile.TemporaryDirectory() as tmpdir: 1748 1748 monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", tmpdir)
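The `_detect_ended_activities` tests above pin down the detection contract: an activity from the previous segment ends when the current segment explicitly marks it `ended`, omits it entirely, or a timeout occurred; continuing activities (same type and same `since`) and previously ended ones are left alone. A self-contained reimplementation of that contract as a sketch, not the actual `talent.activities` code (the signature is an assumption):

```python
# Illustrative reimplementation of the ended-activity detection the
# tests above exercise; the real talent.activities module may differ.

def detect_ended(prev, curr, timed_out=False):
    """Return previously active activities that are no longer active.

    An activity is matched across segments by (activity, since), so a
    new activity of the same type with a different 'since' does not
    keep the old one alive.
    """
    prev_active = [a for a in prev if a.get("state") == "active"]
    if timed_out:
        return list(prev_active)  # a timeout ends every active activity
    still_active = {(a["activity"], a.get("since"))
                    for a in curr if a.get("state") != "ended"}
    return [a for a in prev_active
            if (a["activity"], a.get("since")) not in still_active]


prev = [{"activity": "coding", "state": "active", "since": "100000_300"}]
assert detect_ended(prev, [])[0]["activity"] == "coding"       # implicit end
assert detect_ended(prev, prev) == []                          # continuing
assert len(detect_ended(prev, prev, timed_out=True)) == 1      # timeout ends all
```

Keying on `(activity, since)` rather than activity type alone is what makes the `test_new_activity_same_type` case come out right.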
+65 -65
tests/test_activity_state.py
··· 13 13 """Tests for _extract_facet_from_output_path.""" 14 14 15 15 def test_extracts_facet_from_valid_path(self): 16 - from muse.activity_state import _extract_facet_from_output_path 16 + from talent.activity_state import _extract_facet_from_output_path 17 17 18 18 path = "/journal/20260130/143000_300/agents/work/activity_state.json" 19 19 assert _extract_facet_from_output_path(path) == "work" 20 20 21 21 def test_extracts_facet_with_hyphen(self): 22 - from muse.activity_state import _extract_facet_from_output_path 22 + from talent.activity_state import _extract_facet_from_output_path 23 23 24 24 path = "/journal/20260130/143000_300/agents/my-project/activity_state.json" 25 25 assert _extract_facet_from_output_path(path) == "my-project" 26 26 27 27 def test_returns_none_for_empty_path(self): 28 - from muse.activity_state import _extract_facet_from_output_path 28 + from talent.activity_state import _extract_facet_from_output_path 29 29 30 30 assert _extract_facet_from_output_path("") is None 31 31 assert _extract_facet_from_output_path(None) is None 32 32 33 33 def test_returns_none_for_non_matching_path(self): 34 - from muse.activity_state import _extract_facet_from_output_path 34 + from talent.activity_state import _extract_facet_from_output_path 35 35 36 36 # Different generator name 37 37 assert _extract_facet_from_output_path("/path/to/facets.json") is None ··· 46 46 """Tests for find_previous_segment.""" 47 47 48 48 def test_finds_previous_segment(self): 49 - from muse.activity_state import find_previous_segment 49 + from talent.activity_state import find_previous_segment 50 50 51 51 with tempfile.TemporaryDirectory() as tmpdir: 52 52 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 70 70 os.environ["_SOLSTONE_JOURNAL_OVERRIDE"] = original_path 71 71 72 72 def test_returns_none_for_nonexistent_day(self): 73 - from muse.activity_state import find_previous_segment 73 + from talent.activity_state import find_previous_segment 74 74 75 75 with 
tempfile.TemporaryDirectory() as tmpdir: 76 76 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 83 83 os.environ["_SOLSTONE_JOURNAL_OVERRIDE"] = original_path 84 84 85 85 def test_handles_segments_with_suffix(self): 86 - from muse.activity_state import find_previous_segment 86 + from talent.activity_state import find_previous_segment 87 87 88 88 with tempfile.TemporaryDirectory() as tmpdir: 89 89 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 110 110 """Tests for check_timeout.""" 111 111 112 112 def test_no_timeout_within_threshold(self): 113 - from muse.activity_state import check_timeout 113 + from talent.activity_state import check_timeout 114 114 115 115 # 5 minute gap (300 seconds) 116 116 assert check_timeout("100500_300", "100000_300", timeout_seconds=3600) is False 117 117 118 118 def test_timeout_exceeds_threshold(self): 119 - from muse.activity_state import check_timeout 119 + from talent.activity_state import check_timeout 120 120 121 121 # 2 hour gap 122 122 assert check_timeout("120000_300", "100000_300", timeout_seconds=3600) is True 123 123 124 124 def test_uses_segment_end_time(self): 125 - from muse.activity_state import check_timeout 125 + from talent.activity_state import check_timeout 126 126 127 127 # Previous segment: 10:00:00 - 10:05:00 (300 seconds) 128 128 # Current segment: 10:10:00 ··· 134 134 """Tests for load_previous_state.""" 135 135 136 136 def test_loads_valid_state(self): 137 - from muse.activity_state import load_previous_state 137 + from talent.activity_state import load_previous_state 138 138 139 139 with tempfile.TemporaryDirectory() as tmpdir: 140 140 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 171 171 os.environ["_SOLSTONE_JOURNAL_OVERRIDE"] = original_path 172 172 173 173 def test_returns_none_for_missing_file(self): 174 - from muse.activity_state import load_previous_state 174 + from talent.activity_state import load_previous_state 175 175 176 176 with 
tempfile.TemporaryDirectory() as tmpdir: 177 177 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 193 193 os.environ["_SOLSTONE_JOURNAL_OVERRIDE"] = original_path 194 194 195 195 def test_rejects_non_array(self): 196 - from muse.activity_state import load_previous_state 196 + from talent.activity_state import load_previous_state 197 197 198 198 with tempfile.TemporaryDirectory() as tmpdir: 199 199 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 224 224 """Tests for format_activities_context.""" 225 225 226 226 def test_formats_activities_list(self): 227 - from muse.activity_state import format_activities_context 227 + from talent.activity_state import format_activities_context 228 228 229 229 with tempfile.TemporaryDirectory() as tmpdir: 230 230 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 255 255 256 256 def test_handles_empty_activities(self): 257 257 """Facet with no activities.jsonl still gets always-on defaults.""" 258 - from muse.activity_state import format_activities_context 258 + from talent.activity_state import format_activities_context 259 259 260 260 with tempfile.TemporaryDirectory() as tmpdir: 261 261 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 281 281 """Tests for format_previous_state.""" 282 282 283 283 def test_formats_active_activities(self): 284 - from muse.activity_state import format_previous_state 284 + from talent.activity_state import format_previous_state 285 285 286 286 state = [ 287 287 { ··· 303 303 assert "since" not in result 304 304 305 305 def test_formats_ended_activities(self): 306 - from muse.activity_state import format_previous_state 306 + from talent.activity_state import format_previous_state 307 307 308 308 state = [ 309 309 { ··· 321 321 assert "email" in result 322 322 323 323 def test_handles_timeout(self): 324 - from muse.activity_state import format_previous_state 324 + from talent.activity_state import format_previous_state 325 325 326 326 state 
= [{"activity": "meeting", "state": "active"}] 327 327 result = format_previous_state( ··· 331 331 assert "meeting" not in result 332 332 333 333 def test_handles_no_previous_state(self): 334 - from muse.activity_state import format_previous_state 334 + from talent.activity_state import format_previous_state 335 335 336 336 result = format_previous_state(None, None, "100000_300", timed_out=False) 337 337 assert "No previous segment state" in result 338 338 339 339 def test_handles_empty_list(self): 340 - from muse.activity_state import format_previous_state 340 + from talent.activity_state import format_previous_state 341 341 342 342 result = format_previous_state([], "100000_300", "100500_300", timed_out=False) 343 343 assert "No activities were detected" in result ··· 347 347 """Tests for the pre_process hook function.""" 348 348 349 349 def test_builds_enriched_context(self): 350 - from muse.activity_state import pre_process 350 + from talent.activity_state import pre_process 351 351 352 352 with tempfile.TemporaryDirectory() as tmpdir: 353 353 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 414 414 os.environ["_SOLSTONE_JOURNAL_OVERRIDE"] = original_path 415 415 416 416 def test_returns_none_without_day(self): 417 - from muse.activity_state import pre_process 417 + from talent.activity_state import pre_process 418 418 419 419 context = { 420 420 "segment": "100000_300", ··· 423 423 assert pre_process(context) is None 424 424 425 425 def test_returns_none_without_segment(self): 426 - from muse.activity_state import pre_process 426 + from talent.activity_state import pre_process 427 427 428 428 context = { 429 429 "day": "20260130", ··· 432 432 assert pre_process(context) is None 433 433 434 434 def test_returns_none_without_facet_in_path(self): 435 - from muse.activity_state import pre_process 435 + from talent.activity_state import pre_process 436 436 437 437 context = { 438 438 "day": "20260130", ··· 446 446 """Tests for the post_process hook 
function.""" 447 447 448 448 def test_new_activity_gets_current_segment(self): 449 - from muse.activity_state import post_process 449 + from talent.activity_state import post_process 450 450 451 451 llm_output = json.dumps( 452 452 [ ··· 468 468 assert items[0]["level"] == "high" 469 469 470 470 def test_continuing_activity_copies_since(self): 471 - from muse.activity_state import post_process 471 + from talent.activity_state import post_process 472 472 473 473 with tempfile.TemporaryDirectory() as tmpdir: 474 474 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 526 526 os.environ["_SOLSTONE_JOURNAL_OVERRIDE"] = original_path 527 527 528 528 def test_ended_activity_copies_since(self): 529 - from muse.activity_state import post_process 529 + from talent.activity_state import post_process 530 530 531 531 with tempfile.TemporaryDirectory() as tmpdir: 532 532 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 583 583 os.environ["_SOLSTONE_JOURNAL_OVERRIDE"] = original_path 584 584 585 585 def test_no_previous_state_continuing_becomes_new(self): 586 - from muse.activity_state import post_process 586 + from talent.activity_state import post_process 587 587 588 588 llm_output = json.dumps( 589 589 [ ··· 607 607 def test_unmatched_ended_with_novel_description_becomes_active(self): 608 608 """Ended activity with no previous active match but novel description 609 609 is treated as a new active activity (LLM mis-tagged).""" 610 - from muse.activity_state import post_process 610 + from talent.activity_state import post_process 611 611 612 612 llm_output = json.dumps( 613 613 [ ··· 632 632 def test_unmatched_ended_with_empty_description_dropped(self): 633 633 """Ended activity with no previous active match and no description 634 634 is dropped as redundant.""" 635 - from muse.activity_state import post_process 635 + from talent.activity_state import post_process 636 636 637 637 llm_output = json.dumps( 638 638 [ ··· 650 650 651 651 def 
test_unmatched_ended_matching_prev_ended_dropped(self): 652 652 """Ended activity that matches a previously ended activity is dropped.""" 653 - from muse.activity_state import post_process 653 + from talent.activity_state import post_process 654 654 655 655 with tempfile.TemporaryDirectory() as tmpdir: 656 656 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 706 706 def test_unmatched_ended_novel_desc_with_prev_ended_becomes_active(self): 707 707 """Ended activity with novel description (different from prev ended) 708 708 is promoted to active.""" 709 - from muse.activity_state import post_process 709 + from talent.activity_state import post_process 710 710 711 711 with tempfile.TemporaryDirectory() as tmpdir: 712 712 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 762 762 os.environ["_SOLSTONE_JOURNAL_OVERRIDE"] = original_path 763 763 764 764 def test_empty_array_passthrough(self): 765 - from muse.activity_state import post_process 765 + from talent.activity_state import post_process 766 766 767 767 result = post_process("[]", {"segment": "143000_300"}) 768 768 assert result is not None 769 769 assert json.loads(result) == [] 770 770 771 771 def test_malformed_json_returns_none(self): 772 - from muse.activity_state import post_process 772 + from talent.activity_state import post_process 773 773 774 774 result = post_process("not json", {"segment": "143000_300"}) 775 775 assert result is None 776 776 777 777 def test_non_array_returns_none(self): 778 - from muse.activity_state import post_process 778 + from talent.activity_state import post_process 779 779 780 780 result = post_process('{"active": []}', {"segment": "143000_300"}) 781 781 assert result is None 782 782 783 783 def test_missing_segment_returns_none(self): 784 - from muse.activity_state import post_process 784 + from talent.activity_state import post_process 785 785 786 786 result = post_process("[]", {}) 787 787 assert result is None 788 788 789 789 def 
test_same_type_transition_end_and_new(self): 790 790 """One meeting ends, another starts — both get correct since.""" 791 - from muse.activity_state import post_process 791 + from talent.activity_state import post_process 792 792 793 793 with tempfile.TemporaryDirectory() as tmpdir: 794 794 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 857 857 858 858 def test_default_level_for_new(self): 859 859 """New activity without level gets default 'medium'.""" 860 - from muse.activity_state import post_process 860 + from talent.activity_state import post_process 861 861 862 862 llm_output = json.dumps( 863 863 [{"activity": "coding", "state": "new", "description": "Writing code"}] ··· 869 869 870 870 def test_active_entities_passthrough_on_new(self): 871 871 """active_entities array is passed through on new activities.""" 872 - from muse.activity_state import post_process 872 + from talent.activity_state import post_process 873 873 874 874 llm_output = json.dumps( 875 875 [ ··· 889 889 890 890 def test_active_entities_omitted_when_empty(self): 891 891 """active_entities is omitted from output when not provided or empty.""" 892 - from muse.activity_state import post_process 892 + from talent.activity_state import post_process 893 893 894 894 llm_output = json.dumps( 895 895 [ ··· 909 909 910 910 def test_active_entities_omitted_on_ended(self): 911 911 """active_entities is not included on ended activities.""" 912 - from muse.activity_state import post_process 912 + from talent.activity_state import post_process 913 913 914 914 with tempfile.TemporaryDirectory() as tmpdir: 915 915 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 966 966 967 967 def test_fuzzy_match_disambiguates_same_type(self): 968 968 """Multiple same-type previous activities matched by description.""" 969 - from muse.activity_state import post_process 969 + from talent.activity_state import post_process 970 970 971 971 with tempfile.TemporaryDirectory() as tmpdir: 972 972 
original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 1033 1033 """Tests for the id field added to resolved activity entries.""" 1034 1034 1035 1035 def test_new_activity_gets_id(self): 1036 - from muse.activity_state import post_process 1036 + from talent.activity_state import post_process 1037 1037 1038 1038 llm_output = json.dumps( 1039 1039 [ ··· 1051 1051 assert items[0]["id"] == "coding_143000_300" 1052 1052 1053 1053 def test_continuing_activity_preserves_since_in_id(self): 1054 - from muse.activity_state import post_process 1054 + from talent.activity_state import post_process 1055 1055 1056 1056 with tempfile.TemporaryDirectory() as tmpdir: 1057 1057 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 1106 1106 os.environ["_SOLSTONE_JOURNAL_OVERRIDE"] = original_path 1107 1107 1108 1108 def test_ended_activity_gets_id(self): 1109 - from muse.activity_state import post_process 1109 + from talent.activity_state import post_process 1110 1110 1111 1111 with tempfile.TemporaryDirectory() as tmpdir: 1112 1112 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 1161 1161 1162 1162 def test_promoted_ended_gets_new_id(self): 1163 1163 """Ended activity promoted to active gets id with current segment.""" 1164 - from muse.activity_state import post_process 1164 + from talent.activity_state import post_process 1165 1165 1166 1166 llm_output = json.dumps( 1167 1167 [ ··· 1185 1185 def test_emits_live_for_new_activity(self): 1186 1186 from unittest.mock import patch 1187 1187 1188 - from muse.activity_state import post_process 1188 + from talent.activity_state import post_process 1189 1189 1190 1190 llm_output = json.dumps( 1191 1191 [ ··· 1205 1205 "output_path": "/j/20260130/143000_300/agents/work/activity_state.json", 1206 1206 } 1207 1207 1208 - with patch("muse.activity_state.callosum_send") as mock_send: 1208 + with patch("talent.activity_state.callosum_send") as mock_send: 1209 1209 mock_send.return_value = True 1210 1210 
post_process(llm_output, context) 1211 1211 ··· 1226 1226 def test_emits_live_for_continuing_activity(self): 1227 1227 from unittest.mock import patch 1228 1228 1229 - from muse.activity_state import post_process 1229 + from talent.activity_state import post_process 1230 1230 1231 1231 with tempfile.TemporaryDirectory() as tmpdir: 1232 1232 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 1272 1272 "output_path": f"{tmpdir}/20260130/100500_300/agents/work/activity_state.json", 1273 1273 } 1274 1274 1275 - with patch("muse.activity_state.callosum_send") as mock_send: 1275 + with patch("talent.activity_state.callosum_send") as mock_send: 1276 1276 mock_send.return_value = True 1277 1277 post_process(llm_output, context) 1278 1278 ··· 1288 1288 def test_no_live_event_for_ended_activity(self): 1289 1289 from unittest.mock import patch 1290 1290 1291 - from muse.activity_state import post_process 1291 + from talent.activity_state import post_process 1292 1292 1293 1293 with tempfile.TemporaryDirectory() as tmpdir: 1294 1294 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 1333 1333 "output_path": f"{tmpdir}/20260130/100500_300/agents/work/activity_state.json", 1334 1334 } 1335 1335 1336 - with patch("muse.activity_state.callosum_send") as mock_send: 1336 + with patch("talent.activity_state.callosum_send") as mock_send: 1337 1337 post_process(llm_output, context) 1338 1338 mock_send.assert_not_called() 1339 1339 ··· 1344 1344 def test_no_live_events_without_day_or_facet(self): 1345 1345 from unittest.mock import patch 1346 1346 1347 - from muse.activity_state import post_process 1347 + from talent.activity_state import post_process 1348 1348 1349 1349 llm_output = json.dumps( 1350 1350 [ ··· 1358 1358 ) 1359 1359 1360 1360 # No day — events should not fire 1361 - with patch("muse.activity_state.callosum_send") as mock_send: 1361 + with patch("talent.activity_state.callosum_send") as mock_send: 1362 1362 post_process(llm_output, {"segment": 
"143000_300"}) 1363 1363 mock_send.assert_not_called() 1364 1364 1365 1365 def test_live_event_failure_does_not_break_posthook(self): 1366 1366 from unittest.mock import patch 1367 1367 1368 - from muse.activity_state import post_process 1368 + from talent.activity_state import post_process 1369 1369 1370 1370 llm_output = json.dumps( 1371 1371 [ ··· 1384 1384 "output_path": "/j/20260130/143000_300/agents/work/activity_state.json", 1385 1385 } 1386 1386 1387 - with patch("muse.activity_state.callosum_send") as mock_send: 1387 + with patch("talent.activity_state.callosum_send") as mock_send: 1388 1388 mock_send.side_effect = OSError("socket error") 1389 1389 result = post_process(llm_output, context) 1390 1390 # Should still return valid resolved output ··· 1401 1401 """Post-hook drops LLM output entries with activity IDs not in config.""" 1402 1402 from unittest.mock import patch 1403 1403 1404 - from muse.activity_state import post_process 1404 + from talent.activity_state import post_process 1405 1405 1406 1406 with tempfile.TemporaryDirectory() as tmpdir: 1407 1407 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 1437 1437 "output_path": f"{tmpdir}/20260130/143000_300/agents/work/activity_state.json", 1438 1438 } 1439 1439 1440 - with patch("muse.activity_state.callosum_send"): 1440 + with patch("talent.activity_state.callosum_send"): 1441 1441 result = post_process(llm_output, context) 1442 1442 1443 1443 items = json.loads(result) ··· 1453 1453 import logging 1454 1454 from unittest.mock import patch 1455 1455 1456 - from muse.activity_state import post_process 1456 + from talent.activity_state import post_process 1457 1457 1458 1458 with tempfile.TemporaryDirectory() as tmpdir: 1459 1459 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 1480 1480 "output_path": f"{tmpdir}/20260130/143000_300/agents/work/activity_state.json", 1481 1481 } 1482 1482 1483 - with caplog.at_level(logging.WARNING, logger="muse.activity_state"): 1484 - 
with patch("muse.activity_state.callosum_send"): 1483 + with caplog.at_level(logging.WARNING, logger="talent.activity_state"): 1484 + with patch("talent.activity_state.callosum_send"): 1485 1485 post_process(llm_output, context) 1486 1486 1487 1487 assert "Dropped 1 activity entries" in caplog.text ··· 1495 1495 """Post-hook preserves entries with valid activity IDs.""" 1496 1496 from unittest.mock import patch 1497 1497 1498 - from muse.activity_state import post_process 1498 + from talent.activity_state import post_process 1499 1499 1500 1500 with tempfile.TemporaryDirectory() as tmpdir: 1501 1501 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 1536 1536 "output_path": f"{tmpdir}/20260130/143000_300/agents/work/activity_state.json", 1537 1537 } 1538 1538 1539 - with patch("muse.activity_state.callosum_send"): 1539 + with patch("talent.activity_state.callosum_send"): 1540 1540 result = post_process(llm_output, context) 1541 1541 1542 1542 items = json.loads(result) ··· 1553 1553 """Post-hook allows all default activity IDs for unconfigured facets.""" 1554 1554 from unittest.mock import patch 1555 1555 1556 - from muse.activity_state import post_process 1556 + from talent.activity_state import post_process 1557 1557 1558 1558 with tempfile.TemporaryDirectory() as tmpdir: 1559 1559 original_path = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE") ··· 1586 1586 "output_path": f"{tmpdir}/20260130/143000_300/agents/new_facet/activity_state.json", 1587 1587 } 1588 1588 1589 - with patch("muse.activity_state.callosum_send"): 1589 + with patch("talent.activity_state.callosum_send"): 1590 1590 result = post_process(llm_output, context) 1591 1591 1592 1592 items = json.loads(result)
+9 -9
tests/test_agent_fallback.py
··· 100 100 101 101 def _patch_prepare_config_dependencies(monkeypatch): 102 102 monkeypatch.setattr( 103 - "think.muse.get_agent", lambda *args, **kwargs: _mock_base_agent_config() 103 + "think.talent.get_agent", lambda *args, **kwargs: _mock_base_agent_config() 104 104 ) 105 105 monkeypatch.setattr( 106 - "think.muse.key_to_context", lambda _name: "muse.system.default" 106 + "think.talent.key_to_context", lambda _name: "talent.system.default" 107 107 ) 108 108 monkeypatch.setattr( 109 109 "think.models.resolve_provider", ··· 205 205 "provider": "google", 206 206 "model": "gemini-3-flash-preview", 207 207 "health_stale": False, 208 - "context": "muse.system.default", 208 + "context": "talent.system.default", 209 209 } 210 210 211 211 asyncio.run(_execute_with_tools(config, events.append)) ··· 247 247 ), 248 248 ) 249 249 monkeypatch.setattr( 250 - "think.muse.key_to_context", 251 - lambda _name: "muse.system.default", 250 + "think.talent.key_to_context", 251 + lambda _name: "talent.system.default", 252 252 ) 253 253 monkeypatch.setattr("think.models.get_backup_provider", lambda _type: "anthropic") 254 254 monkeypatch.setattr("think.models.resolve_model_for_provider", resolve_model) ··· 263 263 264 264 asyncio.run(_execute_with_tools(config, events.append)) 265 265 266 - assert seen["context"] == "muse.system.default" 266 + assert seen["context"] == "talent.system.default" 267 267 268 268 269 269 def test_on_failure_retry_generate(monkeypatch): ··· 281 281 return {"text": "backup text", "usage": {"input_tokens": 1, "output_tokens": 1}} 282 282 283 283 monkeypatch.setattr( 284 - "think.muse.key_to_context", lambda _name: "muse.system.default" 284 + "think.talent.key_to_context", lambda _name: "talent.system.default" 285 285 ) 286 286 monkeypatch.setattr("think.models.generate_with_result", mock_generate_with_result) 287 287 monkeypatch.setattr("think.models.get_backup_provider", lambda _type: "anthropic") ··· 319 319 raise ValueError("bad input") 320 320 321 321 
monkeypatch.setattr( 322 - "think.muse.key_to_context", lambda _name: "muse.system.default" 322 + "think.talent.key_to_context", lambda _name: "talent.system.default" 323 323 ) 324 324 monkeypatch.setattr("think.models.generate_with_result", bad_generate) 325 325 ··· 350 350 raise RuntimeError("primary failed") 351 351 352 352 monkeypatch.setattr( 353 - "think.muse.key_to_context", lambda _name: "muse.system.default" 353 + "think.talent.key_to_context", lambda _name: "talent.system.default" 354 354 ) 355 355 monkeypatch.setattr("think.models.generate_with_result", always_fail) 356 356 monkeypatch.setattr("think.models.get_backup_provider", lambda _type: "anthropic")
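Review note: these fixtures pin the context string to `talent.system.default`. Per the PR summary, legacy token logs that still carry `muse.*` context strings are normalized to `talent.*` at read time; a minimal sketch of that shim (the function name is assumed, not taken from the diff):

```python
def normalize_context(context: str) -> str:
    # Backward-compat sketch: rewrite legacy "muse.*" context strings from
    # pre-rename token logs into the new "talent.*" namespace at read time.
    if context == "muse" or context.startswith("muse."):
        return "talent" + context[len("muse"):]
    return context
```

Note the exact-prefix check: an unrelated context like `musette.foo` must pass through untouched.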
+19 -19
tests/test_app_agents.py
··· 10 10 import pytest 11 11 12 12 from apps.agents.routes import _resolve_output_path 13 - from think.muse import _resolve_agent_path, get_agent, get_muse_configs 13 + from think.talent import _resolve_agent_path, get_agent, get_talent_configs 14 14 15 15 16 16 @pytest.fixture ··· 24 24 def app_with_agent(tmp_path, monkeypatch): 25 25 """Create a temporary app with an agent for testing. 26 26 27 - Creates apps/testapp/muse/myhelper.md with frontmatter in a temp directory, 27 + Creates apps/testapp/talent/myhelper.md with frontmatter in a temp directory, 28 28 then monkeypatches the apps directory path. 29 29 """ 30 30 # Create app structure 31 31 app_dir = tmp_path / "apps" / "testapp" 32 - muse_dir = app_dir / "muse" 33 - muse_dir.mkdir(parents=True) 32 + talent_dir = app_dir / "talent" 33 + talent_dir.mkdir(parents=True) 34 34 35 35 # Create workspace.html (required for app discovery, though not used here) 36 36 (app_dir / "workspace.html").write_text("<h1>Test App</h1>") ··· 45 45 "priority": 42, 46 46 } 47 47 json_str = json.dumps(metadata, indent=2) 48 - (muse_dir / "myhelper.md").write_text( 48 + (talent_dir / "myhelper.md").write_text( 49 49 f"{{\n{json_str[1:-1]}\n}}\n\nYou are a test helper agent.\n\n## Purpose\nHelp with testing." 
50 50 ) 51 51 52 52 # Create another agent without metadata (defaults only) 53 - (muse_dir / "simple.md").write_text("A simple test agent with no metadata.") 53 + (talent_dir / "simple.md").write_text("A simple test agent with no metadata.") 54 54 55 55 # Monkeypatch the parent directory so apps discovery finds our temp apps 56 56 monkeypatch.setattr( ··· 64 64 yield { 65 65 "tmp_path": tmp_path, 66 66 "app_dir": app_dir, 67 - "muse_dir": muse_dir, 67 + "talent_dir": talent_dir, 68 68 } 69 69 70 70 ··· 73 73 agent_dir, agent_name = _resolve_agent_path("unified") 74 74 75 75 assert agent_name == "chat" 76 - assert agent_dir.name == "muse" 76 + assert agent_dir.name == "talent" 77 77 78 78 79 79 def test_resolve_agent_path_app_agent(): ··· 81 81 agent_dir, agent_name = _resolve_agent_path("support:support") 82 82 83 83 assert agent_name == "support" 84 - assert agent_dir.name == "muse" 84 + assert agent_dir.name == "talent" 85 85 assert agent_dir.parent.name == "support" 86 86 assert "apps" in str(agent_dir) 87 87 ··· 119 119 assert "fakeapp:fakeagent" in str(exc_info.value) 120 120 121 121 122 - def test_get_muse_configs_includes_system_agents(fixture_journal): 123 - """Test get_muse_configs returns system agents with metadata.""" 124 - agents = get_muse_configs(type="cogitate") 122 + def test_get_talent_configs_includes_system_agents(fixture_journal): 123 + """Test get_talent_configs returns system agents with metadata.""" 124 + agents = get_talent_configs(type="cogitate") 125 125 126 126 # Should include known system agents with frontmatter metadata 127 127 assert "chat" in agents ··· 130 130 assert "path" in agents["chat"] 131 131 132 132 133 - def test_get_muse_configs_system_agents_have_metadata(fixture_journal): 133 + def test_get_talent_configs_system_agents_have_metadata(fixture_journal): 134 134 """Test system agents have proper metadata fields.""" 135 - agents = get_muse_configs(type="cogitate") 135 + agents = get_talent_configs(type="cogitate") 136 136 
137 137 # Check a known system agent 138 138 chat = agents.get("chat") ··· 142 142 assert "color" in chat 143 143 144 144 145 - def test_get_muse_configs_excludes_private_apps(fixture_journal, tmp_path, monkeypatch): 146 - """Test get_muse_configs skips apps starting with underscore.""" 145 + def test_get_talent_configs_excludes_private_apps(fixture_journal, tmp_path, monkeypatch): 146 + """Test get_talent_configs skips apps starting with underscore.""" 147 147 # Create a private app with an agent 148 148 private_app = tmp_path / "_private_app" / "agents" 149 149 private_app.mkdir(parents=True) ··· 151 151 152 152 # This is tricky to test without modifying the actual apps directory 153 153 # The current implementation filters by app_path.name.startswith("_") 154 - # We verify this by checking the code behavior with get_muse_configs() 154 + # We verify this by checking the code behavior with get_talent_configs() 155 155 156 - agents = get_muse_configs(type="cogitate") 156 + agents = get_talent_configs(type="cogitate") 157 157 158 158 # No agents should have keys starting with "_" 159 159 for key in agents: ··· 162 162 163 163 def test_app_agent_namespace_format(fixture_journal): 164 164 """Test app agent keys follow {app}:{agent} format.""" 165 - agents = get_muse_configs(type="cogitate") 165 + agents = get_talent_configs(type="cogitate") 166 166 167 167 for key, config in agents.items(): 168 168 if config.get("source") == "app":
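Review note: the assertions in this file encode the lookup convention — bare agent names resolve to the top-level `talent/` directory, while namespaced `{app}:{agent}` keys resolve to `apps/{app}/talent/`. A hypothetical sketch of that resolution (the real `_resolve_agent_path` in `think.talent` also handles aliases such as `unified` → `chat`, which is omitted here):

```python
from pathlib import Path

def resolve_agent_dir(key: str, root: Path = Path(".")) -> tuple[Path, str]:
    # Sketch only: "{app}:{agent}" keys live under apps/{app}/talent/;
    # bare keys live under the top-level talent/ directory.
    if ":" in key:
        app, agent = key.split(":", 1)
        return root / "apps" / app / "talent", agent
    return root / "talent", key
```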
+7 -7
tests/test_awareness.py
··· 367 367 ] 368 368 exchanges = [ 369 369 { 370 - "muse": "triage", 370 + "talent": "triage", 371 371 "agent_response": f"talked about entity_{i}", 372 372 "user_message": "hi", 373 373 } ··· 402 402 ] 403 403 exchanges = [ 404 404 { 405 - "muse": "triage", 405 + "talent": "triage", 406 406 "agent_response": f"entity_{i} is great", 407 407 "user_message": "yo", 408 408 } ··· 443 443 {"entity_name": f"entity_{i}", "observation_depth": 3} for i in range(15) 444 444 ] 445 445 exchanges = [ 446 - {"muse": "triage", "agent_response": "hello there", "user_message": "hi"} 446 + {"talent": "triage", "agent_response": "hello there", "user_message": "hi"} 447 447 for _ in range(10) 448 448 ] 449 449 facets = {"work": {}, "personal": {}, "hobby": {}} ··· 466 466 assert result["ready"] is False 467 467 468 468 def test_onboarding_exchanges_excluded(self): 469 - """Exchanges with muse='onboarding' are excluded from conversation_count.""" 469 + """Exchanges with talent='onboarding' are excluded from conversation_count.""" 470 470 from think.awareness import compute_thickness 471 471 472 472 entities = [{"entity_name": "foo", "observation_depth": 3}] * 10 473 473 exchanges = [ 474 - {"muse": "onboarding", "agent_response": "foo stuff", "user_message": "hi"}, 474 + {"talent": "onboarding", "agent_response": "foo stuff", "user_message": "hi"}, 475 475 { 476 - "muse": "onboarding", 476 + "talent": "onboarding", 477 477 "agent_response": "foo bar", 478 478 "user_message": "hello", 479 479 }, 480 - {"muse": "triage", "agent_response": "foo is great", "user_message": "hey"}, 480 + {"talent": "triage", "agent_response": "foo is great", "user_message": "hey"}, 481 481 ] 482 482 483 483 with unittest.mock.patch(
+2 -2
tests/test_chat_context.py
··· 1 1 # SPDX-License-Identifier: AGPL-3.0-only 2 2 # Copyright (c) 2026 sol pbc 3 3 4 - from muse.chat_context import pre_process 4 + from talent.chat_context import pre_process 5 5 6 6 7 7 def test_chat_context_appends_conversation_memory(monkeypatch, tmp_path): ··· 16 16 facet="work", 17 17 user_message="hello", 18 18 agent_response="hi there!", 19 - muse="unified", 19 + talent="unified", 20 20 ) 21 21 22 22 result = pre_process({"user_instruction": "Base instruction.", "facet": "work"})
+10 -10
tests/test_cogitate_coder.py
··· 179 179 180 180 181 181 # --------------------------------------------------------------------------- 182 - # muse/coder.md existence and frontmatter 182 + # talent/coder.md existence and frontmatter 183 183 # --------------------------------------------------------------------------- 184 184 185 185 186 186 class TestCoderAgent: 187 - """Verify muse/coder.md exists with correct frontmatter.""" 187 + """Verify talent/coder.md exists with correct frontmatter.""" 188 188 189 189 def test_coder_md_exists(self): 190 - """muse/coder.md must exist in the repo.""" 190 + """talent/coder.md must exist in the repo.""" 191 191 from pathlib import Path 192 192 193 - coder_path = Path(__file__).parent.parent / "muse" / "coder.md" 194 - assert coder_path.exists(), "muse/coder.md not found" 193 + coder_path = Path(__file__).parent.parent / "talent" / "coder.md" 194 + assert coder_path.exists(), "talent/coder.md not found" 195 195 196 196 def test_coder_frontmatter(self): 197 197 """coder.md must have write: true and type: cogitate.""" ··· 199 199 200 200 import frontmatter 201 201 202 - coder_path = Path(__file__).parent.parent / "muse" / "coder.md" 202 + coder_path = Path(__file__).parent.parent / "talent" / "coder.md" 203 203 post = frontmatter.load(coder_path) 204 204 205 205 assert post.metadata.get("type") == "cogitate" ··· 211 211 """coder.md must reference the coding skill instead of inlining guidelines.""" 212 212 from pathlib import Path 213 213 214 - coder_path = Path(__file__).parent.parent / "muse" / "coder.md" 214 + coder_path = Path(__file__).parent.parent / "talent" / "coder.md" 215 215 content = coder_path.read_text(encoding="utf-8") 216 216 217 217 # Should reference the coding skill, not inline dev guidelines ··· 219 219 assert "single source of truth" in content 220 220 221 221 # The coding skill must exist with reference files 222 - coding_skill = Path(__file__).parent.parent / "muse" / "coding" / "SKILL.md" 223 - assert coding_skill.exists(), 
"muse/coding/SKILL.md not found" 222 + coding_skill = Path(__file__).parent.parent / "talent" / "coding" / "SKILL.md" 223 + assert coding_skill.exists(), "talent/coding/SKILL.md not found" 224 224 225 - coding_refs = Path(__file__).parent.parent / "muse" / "coding" / "reference" 225 + coding_refs = Path(__file__).parent.parent / "talent" / "coding" / "reference" 226 226 assert (coding_refs / "coding-standards.md").exists() 227 227 assert (coding_refs / "project-structure.md").exists() 228 228 assert (coding_refs / "testing.md").exists()
+18 -18
tests/test_conversation.py
··· 35 35 path="/app/entities/adrian", 36 36 user_message="what's our history with adrian?", 37 37 agent_response="You met Adrian at betaworks.", 38 - muse="unified", 38 + talent="unified", 39 39 agent_id="12345", 40 40 ) 41 41 ··· 52 52 assert ex["app"] == "entities" 53 53 assert ex["user_message"] == "what's our history with adrian?" 54 54 assert ex["agent_response"] == "You met Adrian at betaworks." 55 - assert ex["muse"] == "unified" 55 + assert ex["talent"] == "unified" 56 56 assert ex["agent_id"] == "12345" 57 57 58 58 ··· 70 70 path="/app/calendar", 71 71 user_message="move my 3pm to 4pm", 72 72 agent_response="Done — moved 'DVD sync' to 4pm.", 73 - muse="unified", 73 + talent="unified", 74 74 agent_id="67890", 75 75 ) 76 76 ··· 102 102 ts=1710000001000, 103 103 user_message="hello", 104 104 agent_response="hi there", 105 - muse="triage", 105 + talent="triage", 106 106 ) 107 107 record_exchange( 108 108 ts=1710000002000, 109 109 user_message="what time is it?", 110 110 agent_response="It's 2pm.", 111 - muse="triage", 111 + talent="triage", 112 112 ) 113 113 114 114 jsonl_path = journal_dir / "conversation" / "exchanges.jsonl" ··· 124 124 """Empty user_message or agent_response is silently skipped.""" 125 125 from think.conversation import record_exchange 126 126 127 - record_exchange(user_message="", agent_response="response", muse="triage") 128 - record_exchange(user_message="hello", agent_response="", muse="triage") 127 + record_exchange(user_message="", agent_response="response", talent="triage") 128 + record_exchange(user_message="hello", agent_response="", talent="triage") 129 129 130 130 jsonl_path = journal_dir / "conversation" / "exchanges.jsonl" 131 131 assert not jsonl_path.exists() ··· 152 152 ts=1710000000000 + i * 1000, 153 153 user_message=f"msg {i}", 154 154 agent_response=f"resp {i}", 155 - muse="triage", 155 + talent="triage", 156 156 ) 157 157 158 158 recent = get_recent_exchanges(limit=5) ··· 170 170 facet="work", 171 171 
user_message="work question", 172 172 agent_response="work answer", 173 - muse="triage", 173 + talent="triage", 174 174 ) 175 175 record_exchange( 176 176 ts=1710000002000, 177 177 facet="personal", 178 178 user_message="personal question", 179 179 agent_response="personal answer", 180 - muse="triage", 180 + talent="triage", 181 181 ) 182 182 183 183 work = get_recent_exchanges(facet="work") ··· 204 204 ts=now_ms(), 205 205 user_message="today question", 206 206 agent_response="today answer", 207 - muse="triage", 207 + talent="triage", 208 208 ) 209 209 210 210 # Record an exchange with old timestamp (not today) ··· 212 212 ts=1000000000000, # 2001-09-08 213 213 user_message="old question", 214 214 agent_response="old answer", 215 - muse="triage", 215 + talent="triage", 216 216 ) 217 217 218 218 today = get_today_exchanges() ··· 244 244 app="entities", 245 245 user_message="who is adrian?", 246 246 agent_response="Adrian is the CTO of Own Company.", 247 - muse="unified", 247 + talent="unified", 248 248 ) 249 249 250 250 context = build_memory_context() ··· 266 266 ts=now_ms(), 267 267 user_message="tell me a story", 268 268 agent_response=long_response, 269 - muse="unified", 269 + talent="unified", 270 270 ) 271 271 272 272 context = build_memory_context() ··· 287 287 ts=ts + i * 1000, 288 288 user_message=f"question {i}", 289 289 agent_response=f"answer {i}", 290 - muse="unified", 290 + talent="unified", 291 291 ) 292 292 293 293 context = build_memory_context(recent_limit=10) ··· 389 389 390 390 def test_conversation_memory_pre_hook(journal_dir): 391 391 """Pre-hook injects memory into user instruction.""" 392 - from muse.conversation_memory import pre_process 392 + from talent.conversation_memory import pre_process 393 393 from think.conversation import record_exchange 394 394 from think.utils import now_ms 395 395 ··· 399 399 facet="work", 400 400 user_message="hello", 401 401 agent_response="hi there!", 402 - muse="unified", 402 + talent="unified", 403 403 ) 
404 404 405 405 context = { ··· 425 425 426 426 def test_conversation_memory_pre_hook_no_marker(): 427 427 """Pre-hook returns None when no injection marker present.""" 428 - from muse.conversation_memory import pre_process 428 + from talent.conversation_memory import pre_process 429 429 430 430 context = {"user_instruction": "No marker here."} 431 431 result = pre_process(context)
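Review note: these tests write exchanges with the new `talent` field. Per the PR summary, the read path also accepts the legacy `muse` key from pre-rename journal data; a minimal sketch of that compat normalization (helper name assumed, not taken from the diff):

```python
def normalize_exchange(raw: dict) -> dict:
    # Backward-compat sketch: accept both "talent" (new) and "muse" (legacy)
    # field names when reading exchanges; prefer "talent" when both exist.
    ex = dict(raw)
    if "talent" not in ex and "muse" in ex:
        ex["talent"] = ex["muse"]
    ex.pop("muse", None)
    return ex
```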
+28 -28
tests/test_dream_activity.py
··· 58 58 ) 59 59 60 60 # No activity-scheduled agents 61 - monkeypatch.setattr("think.dream.get_muse_configs", lambda schedule: {}) 61 + monkeypatch.setattr("think.dream.get_talent_configs", lambda schedule: {}) 62 62 63 63 result = run_activity_prompts( 64 64 day="20260209", ··· 103 103 } 104 104 105 105 monkeypatch.setattr( 106 - "think.dream.get_muse_configs", lambda schedule: configs 106 + "think.dream.get_talent_configs", lambda schedule: configs 107 107 ) 108 108 109 109 spawned_requests = [] ··· 159 159 } 160 160 161 161 monkeypatch.setattr( 162 - "think.dream.get_muse_configs", lambda schedule: configs 162 + "think.dream.get_talent_configs", lambda schedule: configs 163 163 ) 164 164 165 165 spawned = [] ··· 209 209 } 210 210 211 211 monkeypatch.setattr( 212 - "think.dream.get_muse_configs", lambda schedule: configs 212 + "think.dream.get_talent_configs", lambda schedule: configs 213 213 ) 214 214 215 215 captured_config = {} ··· 275 275 } 276 276 277 277 monkeypatch.setattr( 278 - "think.dream.get_muse_configs", lambda schedule: configs 278 + "think.dream.get_talent_configs", lambda schedule: configs 279 279 ) 280 280 monkeypatch.setattr( 281 281 "think.dream.cortex_request", ··· 351 351 } 352 352 353 353 monkeypatch.setattr( 354 - "think.dream.get_muse_configs", lambda schedule: configs 354 + "think.dream.get_talent_configs", lambda schedule: configs 355 355 ) 356 356 monkeypatch.setattr( 357 357 "think.dream.cortex_request", ··· 466 466 467 467 468 468 # --------------------------------------------------------------------------- 469 - # Muse config validation for activity schedule 469 + # Talent config validation for activity schedule 470 470 # --------------------------------------------------------------------------- 471 471 472 472 473 - class TestMuseActivityValidation: 474 - """Tests for activity schedule validation in get_muse_configs.""" 473 + class TestTalentActivityValidation: 474 + """Tests for activity schedule validation in 
get_talent_configs.""" 475 475 476 - def _isolate_muse(self, monkeypatch, tmp_path): 477 - """Point muse discovery at tmp_path only (no real muse/ or apps/).""" 478 - muse_dir = tmp_path / "muse" 479 - muse_dir.mkdir(exist_ok=True) 480 - monkeypatch.setattr("think.muse.MUSE_DIR", muse_dir) 481 - monkeypatch.setattr("think.muse.APPS_DIR", tmp_path / "no_apps") 482 - return muse_dir 476 + def _isolate_talent(self, monkeypatch, tmp_path): 477 + """Point talent discovery at tmp_path only (no real talent/ or apps/).""" 478 + talent_dir = tmp_path / "talent" 479 + talent_dir.mkdir(exist_ok=True) 480 + monkeypatch.setattr("think.talent.TALENT_DIR", talent_dir) 481 + monkeypatch.setattr("think.talent.APPS_DIR", tmp_path / "no_apps") 482 + return talent_dir 483 483 484 484 def test_missing_activities_field_raises(self, monkeypatch, tmp_path): 485 485 import frontmatter 486 486 487 - from think.muse import get_muse_configs 487 + from think.talent import get_talent_configs 488 488 489 - muse_dir = self._isolate_muse(monkeypatch, tmp_path) 489 + talent_dir = self._isolate_talent(monkeypatch, tmp_path) 490 490 491 491 post = frontmatter.Post( 492 492 "Test prompt", ··· 496 496 output="md", 497 497 # Missing 'activities' field 498 498 ) 499 - (muse_dir / "test_agent.md").write_text(frontmatter.dumps(post)) 499 + (talent_dir / "test_agent.md").write_text(frontmatter.dumps(post)) 500 500 501 501 with pytest.raises(ValueError, match="non-empty 'activities' list"): 502 - get_muse_configs(schedule="activity") 502 + get_talent_configs(schedule="activity") 503 503 504 504 def test_valid_activities_field_passes(self, monkeypatch, tmp_path): 505 505 import frontmatter 506 506 507 - from think.muse import get_muse_configs 507 + from think.talent import get_talent_configs 508 508 509 - muse_dir = self._isolate_muse(monkeypatch, tmp_path) 509 + talent_dir = self._isolate_talent(monkeypatch, tmp_path) 510 510 511 511 post = frontmatter.Post( 512 512 "Test prompt", ··· 516 516 output="md", 
517 517 activities=["coding", "meeting"], 518 518 ) 519 - (muse_dir / "test_agent.md").write_text(frontmatter.dumps(post)) 519 + (talent_dir / "test_agent.md").write_text(frontmatter.dumps(post)) 520 520 521 - configs = get_muse_configs(schedule="activity") 521 + configs = get_talent_configs(schedule="activity") 522 522 assert "test_agent" in configs 523 523 assert configs["test_agent"]["activities"] == ["coding", "meeting"] 524 524 525 525 def test_wildcard_activities_passes(self, monkeypatch, tmp_path): 526 526 import frontmatter 527 527 528 - from think.muse import get_muse_configs 528 + from think.talent import get_talent_configs 529 529 530 - muse_dir = self._isolate_muse(monkeypatch, tmp_path) 530 + talent_dir = self._isolate_talent(monkeypatch, tmp_path) 531 531 532 532 post = frontmatter.Post( 533 533 "Test prompt", ··· 537 537 output="md", 538 538 activities=["*"], 539 539 ) 540 - (muse_dir / "test_agent.md").write_text(frontmatter.dumps(post)) 540 + (talent_dir / "test_agent.md").write_text(frontmatter.dumps(post)) 541 541 542 - configs = get_muse_configs(schedule="activity") 542 + configs = get_talent_configs(schedule="activity") 543 543 assert "test_agent" in configs 544 544 assert configs["test_agent"]["activities"] == ["*"] 545 545
+3 -3
tests/test_dream_full.py
··· 101 101 102 102 103 103 def test_priority_validation_required(tmp_path, monkeypatch): 104 - """Test that get_muse_configs raises error for scheduled prompts without priority.""" 105 - from think.muse import get_muse_configs 104 + """Test that get_talent_configs raises error for scheduled prompts without priority.""" 105 + from think.talent import get_talent_configs 106 106 107 107 # This test verifies the validation exists - actual validation tested in test_utils.py 108 108 # Here we just confirm all existing scheduled prompts have priority 109 - configs = get_muse_configs(schedule="daily") 109 + configs = get_talent_configs(schedule="daily") 110 110 for name, config in configs.items(): 111 111 assert "priority" in config, f"Scheduled prompt '{name}' missing priority"
+11 -11
tests/test_dream_segment.py
··· 143 143 144 144 monkeypatch.setattr( 145 145 dream, 146 - "get_muse_configs", 146 + "get_talent_configs", 147 147 lambda schedule=None, **kwargs: _segment_configs("sense", "entities"), 148 148 ) 149 149 monkeypatch.setattr( ··· 189 189 190 190 monkeypatch.setattr( 191 191 dream, 192 - "get_muse_configs", 192 + "get_talent_configs", 193 193 lambda schedule=None, **kwargs: _segment_configs( 194 194 "sense", "entities", "screen" 195 195 ), ··· 239 239 240 240 monkeypatch.setattr( 241 241 dream, 242 - "get_muse_configs", 242 + "get_talent_configs", 243 243 lambda schedule=None, **kwargs: _segment_configs( 244 244 "sense", "entities", "screen" 245 245 ), ··· 297 297 298 298 monkeypatch.setattr( 299 299 dream, 300 - "get_muse_configs", 300 + "get_talent_configs", 301 301 lambda schedule=None, **kwargs: _segment_configs( 302 302 "sense", 303 303 "entities", ··· 337 337 338 338 monkeypatch.setattr( 339 339 dream, 340 - "get_muse_configs", 340 + "get_talent_configs", 341 341 lambda schedule=None, **kwargs: _segment_configs("sense", "entities"), 342 342 ) 343 343 monkeypatch.setattr( ··· 376 376 377 377 monkeypatch.setattr( 378 378 dream, 379 - "get_muse_configs", 379 + "get_talent_configs", 380 380 lambda schedule=None, **kwargs: _segment_configs( 381 381 "sense", "entities", "screen" 382 382 ), ··· 415 415 416 416 monkeypatch.setattr( 417 417 dream, 418 - "get_muse_configs", 418 + "get_talent_configs", 419 419 lambda schedule=None, **kwargs: _segment_configs( 420 420 "sense", "entities", "pulse" 421 421 ), ··· 453 453 454 454 monkeypatch.setattr( 455 455 dream, 456 - "get_muse_configs", 456 + "get_talent_configs", 457 457 lambda schedule=None, **kwargs: _segment_configs("sense", "entities"), 458 458 ) 459 459 monkeypatch.setattr( ··· 499 499 500 500 monkeypatch.setattr( 501 501 dream, 502 - "get_muse_configs", 502 + "get_talent_configs", 503 503 lambda schedule=None, **kwargs: _segment_configs("sense", "entities"), 504 504 ) 505 505 monkeypatch.setattr( ··· 560 560 561 
561 monkeypatch.setattr( 562 562 dream, 563 - "get_muse_configs", 563 + "get_talent_configs", 564 564 lambda schedule=None, **kwargs: { 565 565 **_segment_configs("sense"), 566 566 "entities": { ··· 617 617 618 618 monkeypatch.setattr( 619 619 dream, 620 - "get_muse_configs", 620 + "get_talent_configs", 621 621 lambda schedule=None, **kwargs: _segment_configs("sense", "entities"), 622 622 ) 623 623 monkeypatch.setattr(dream, "cortex_request", mock_cortex_request)
+5 -5
tests/test_entities_hook.py
··· 9 9 10 10 11 11 def test_entities_post_process_writes_without_segment(tmp_path): 12 - from muse.entities import post_process 12 + from talent.entities import post_process 13 13 14 14 output_path = ( 15 15 tmp_path ··· 40 40 41 41 42 42 def test_entities_post_process_requires_output_path(caplog): 43 - from muse.entities import post_process 43 + from talent.entities import post_process 44 44 45 45 post_process("* Person: Alice Smith - Mentioned in the meeting\n", {}) 46 46 47 47 assert "missing output_path" in caplog.text 48 48 49 49 50 - def test_entities_muse_is_segment_scheduled(): 51 - from think.muse import get_muse_configs 50 + def test_entities_talent_is_segment_scheduled(): 51 + from think.talent import get_talent_configs 52 52 53 - segment_prompts = get_muse_configs(schedule="segment") 53 + segment_prompts = get_talent_configs(schedule="segment") 54 54 55 55 assert "entities" in segment_prompts
+3 -3
tests/test_entity_agents.py
··· 7 7 8 8 import pytest 9 9 10 - from think.muse import get_agent 10 + from think.talent import get_agent 11 11 12 12 13 13 @pytest.fixture ··· 20 20 21 21 def test_entities_agent_config(fixture_journal): 22 22 """Test detection agent configuration loads correctly.""" 23 - # Entity agents are in apps/entities/muse/ so use app-qualified name 23 + # Entity agents are in apps/entities/talent/ so use app-qualified name 24 24 config = get_agent("entities:entities") 25 25 26 26 # Verify required fields ··· 37 37 38 38 def test_entities_review_agent_config(fixture_journal): 39 39 """Test review agent configuration loads correctly.""" 40 - # Entity agents are in apps/entities/muse/ so use app-qualified name 40 + # Entity agents are in apps/entities/talent/ so use app-qualified name 41 41 config = get_agent("entities:entities_review") 42 42 43 43 # Verify required fields
+12 -12
tests/test_generate_full.py
··· 73 73 mod = importlib.import_module("think.agents") 74 74 copy_day(tmp_path) 75 75 76 - import think.muse 76 + import think.talent 77 77 78 - monkeypatch.setattr(think.muse, "MUSE_DIR", tmp_path) 78 + monkeypatch.setattr(think.talent, "TALENT_DIR", tmp_path) 79 79 80 80 test_generator = tmp_path / "test_gen.md" 81 81 test_generator.write_text( ··· 119 119 mod = importlib.import_module("think.agents") 120 120 copy_day(tmp_path) 121 121 122 - import think.muse 122 + import think.talent 123 123 124 - monkeypatch.setattr(think.muse, "MUSE_DIR", tmp_path) 124 + monkeypatch.setattr(think.talent, "TALENT_DIR", tmp_path) 125 125 126 126 hook_file = tmp_path / "test_hook.py" 127 127 hook_file.write_text(""" ··· 192 192 mod = importlib.import_module("think.agents") 193 193 copy_day(tmp_path) 194 194 195 - import think.muse 195 + import think.talent 196 196 197 - monkeypatch.setattr(think.muse, "MUSE_DIR", tmp_path) 197 + monkeypatch.setattr(think.talent, "TALENT_DIR", tmp_path) 198 198 199 199 test_generator = tmp_path / "nohook_gen.md" 200 200 test_generator.write_text( ··· 259 259 day_dir = day_path("20240101") 260 260 day_dir.mkdir(parents=True, exist_ok=True) 261 261 262 - import think.muse 262 + import think.talent 263 263 264 - monkeypatch.setattr(think.muse, "MUSE_DIR", tmp_path) 264 + monkeypatch.setattr(think.talent, "TALENT_DIR", tmp_path) 265 265 266 266 test_generator = tmp_path / "empty_gen.md" 267 267 test_generator.write_text( ··· 296 296 day_dir = day_path("20240101") 297 297 day_dir.mkdir(parents=True, exist_ok=True) 298 298 299 - import think.muse 299 + import think.talent 300 300 301 - monkeypatch.setattr(think.muse, "MUSE_DIR", tmp_path) 301 + monkeypatch.setattr(think.talent, "TALENT_DIR", tmp_path) 302 302 303 303 test_agent = tmp_path / "test_cogitate.md" 304 304 test_agent.write_text( ··· 320 320 321 321 def test_named_hook_resolution(tmp_path, monkeypatch): 322 322 """Test that named hooks are resolved via load_post_hook.""" 323 - from think.muse 
import load_post_hook 323 + from think.talent import load_post_hook 324 324 325 325 # Config with named hook (new format) 326 326 config = {"hook": {"post": "occurrence"}} 327 327 hook_fn = load_post_hook(config) 328 328 329 - # Should resolve to muse/occurrence.py and be callable 329 + # Should resolve to talent/occurrence.py and be callable 330 330 assert callable(hook_fn)
+52 -52
tests/test_generators.py
··· 9 9 import pytest 10 10 11 11 12 - def test_get_muse_configs_generators(): 12 + def test_get_talent_configs_generators(): 13 13 """Test that system generators are discovered with source field.""" 14 - muse = importlib.import_module("think.muse") 15 - generators = muse.get_muse_configs(type="generate") 14 + talent = importlib.import_module("think.talent") 15 + generators = talent.get_talent_configs(type="generate") 16 16 assert "flow" in generators 17 17 info = generators["flow"] 18 18 assert os.path.basename(info["path"]) == "flow.md" ··· 26 26 27 27 def test_get_output_name(): 28 28 """Test generator key to filename conversion.""" 29 - muse = importlib.import_module("think.muse") 29 + talent = importlib.import_module("think.talent") 30 30 31 31 # System generators: key unchanged 32 - assert muse.get_output_name("activity") == "activity" 33 - assert muse.get_output_name("flow") == "flow" 32 + assert talent.get_output_name("activity") == "activity" 33 + assert talent.get_output_name("flow") == "flow" 34 34 35 35 # App generators: _app_name format 36 - assert muse.get_output_name("chat:sentiment") == "_chat_sentiment" 37 - assert muse.get_output_name("my_app:weekly_summary") == "_my_app_weekly_summary" 36 + assert talent.get_output_name("chat:sentiment") == "_chat_sentiment" 37 + assert talent.get_output_name("my_app:weekly_summary") == "_my_app_weekly_summary" 38 38 39 39 40 - def test_get_muse_configs_app_discovery(tmp_path, monkeypatch): 41 - """Test that app generators are discovered from apps/*/muse/.""" 42 - muse = importlib.import_module("think.muse") 40 + def test_get_talent_configs_app_discovery(tmp_path, monkeypatch): 41 + """Test that app generators are discovered from apps/*/talent/.""" 42 + talent = importlib.import_module("think.talent") 43 43 44 44 # Create a fake app with a generator 45 - app_dir = tmp_path / "apps" / "test_app" / "muse" 45 + app_dir = tmp_path / "apps" / "test_app" / "talent" 46 46 app_dir.mkdir(parents=True) 47 47 48 48 # Create 
generator files with frontmatter ··· 54 54 (tmp_path / "apps" / "test_app" / "workspace.html").write_text("<h1>Test</h1>") 55 55 56 56 # For now, just verify system generators have correct source 57 - generators = muse.get_muse_configs(type="generate") 57 + generators = talent.get_talent_configs(type="generate") 58 58 for key, info in generators.items(): 59 59 if ":" not in key: 60 60 assert info.get("source") == "system", f"{key} should have source=system" 61 61 62 62 63 - def test_get_muse_configs_by_schedule(): 63 + def test_get_talent_configs_by_schedule(): 64 64 """Test filtering generators by schedule.""" 65 - muse = importlib.import_module("think.muse") 65 + talent = importlib.import_module("think.talent") 66 66 67 67 # Get daily generators 68 - daily = muse.get_muse_configs(type="generate", schedule="daily") 68 + daily = talent.get_talent_configs(type="generate", schedule="daily") 69 69 assert len(daily) > 0 70 70 for key, meta in daily.items(): 71 71 assert meta.get("schedule") == "daily", f"{key} should have schedule=daily" 72 72 73 73 # Get segment generators 74 - segment = muse.get_muse_configs(type="generate", schedule="segment") 74 + segment = talent.get_talent_configs(type="generate", schedule="segment") 75 75 assert len(segment) > 0 76 76 for key, meta in segment.items(): 77 77 assert meta.get("schedule") == "segment", f"{key} should have schedule=segment" ··· 82 82 ) 83 83 84 84 # Unknown schedule returns empty dict 85 - assert muse.get_muse_configs(type="generate", schedule="hourly") == {} 86 - assert muse.get_muse_configs(type="generate", schedule="") == {} 85 + assert talent.get_talent_configs(type="generate", schedule="hourly") == {} 86 + assert talent.get_talent_configs(type="generate", schedule="") == {} 87 87 88 88 89 - def test_get_muse_configs_include_disabled(monkeypatch): 89 + def test_get_talent_configs_include_disabled(monkeypatch): 90 90 """Test include_disabled parameter.""" 91 - muse = importlib.import_module("think.muse") 91 + 
talent = importlib.import_module("think.talent") 92 92 93 93 # Get generators without disabled (default) 94 - without_disabled = muse.get_muse_configs(type="generate", schedule="daily") 94 + without_disabled = talent.get_talent_configs(type="generate", schedule="daily") 95 95 96 96 # Get generators with disabled included 97 - with_disabled = muse.get_muse_configs( 97 + with_disabled = talent.get_talent_configs( 98 98 type="generate", schedule="daily", include_disabled=True 99 99 ) 100 100 ··· 109 109 ('segment', 'daily', or 'activity'). Some generators (like importer) have 110 110 output but no schedule - they're used for ad-hoc processing, not scheduled runs. 111 111 """ 112 - muse = importlib.import_module("think.muse") 112 + talent = importlib.import_module("think.talent") 113 113 114 - generators = muse.get_muse_configs(type="generate") 114 + generators = talent.get_talent_configs(type="generate") 115 115 valid_schedules = ("segment", "daily", "activity", "weekly") 116 116 117 117 for key, meta in generators.items(): ··· 124 124 125 125 def test_sense_in_segment_schedule(): 126 126 """Test that sense generator exists in segment schedule at priority 5.""" 127 - muse = importlib.import_module("think.muse") 127 + talent = importlib.import_module("think.talent") 128 128 129 - generators = muse.get_muse_configs(type="generate", schedule="segment") 129 + generators = talent.get_talent_configs(type="generate", schedule="segment") 130 130 assert "sense" in generators 131 131 132 132 sense = generators["sense"] ··· 138 138 assert sources.get("percepts") is True, "sense should include percepts" 139 139 140 140 141 - def _write_temp_muse_prompt(stem: str, frontmatter: str) -> Path: 142 - muse_dir = Path(__file__).resolve().parent.parent / "muse" 143 - prompt_path = muse_dir / f"{stem}.md" 141 + def _write_temp_talent_prompt(stem: str, frontmatter: str) -> Path: 142 + talent_dir = Path(__file__).resolve().parent.parent / "talent" 143 + prompt_path = talent_dir / 
f"{stem}.md" 144 144 prompt_path.write_text( 145 145 f"{frontmatter}\n\nTemporary test prompt\n", encoding="utf-8" 146 146 ) 147 147 return prompt_path 148 148 149 149 150 - def test_get_muse_configs_raises_on_missing_type_with_output(): 151 - muse = importlib.import_module("think.muse") 150 + def test_get_talent_configs_raises_on_missing_type_with_output(): 151 + talent = importlib.import_module("think.talent") 152 152 stem = f"test_missing_type_output_{uuid.uuid4().hex}" 153 - prompt_path = _write_temp_muse_prompt( 153 + prompt_path = _write_temp_talent_prompt( 154 154 stem, 155 155 '{\n "schedule": "daily",\n "priority": 10,\n "output": "md"\n}', 156 156 ) ··· 158 158 with pytest.raises( 159 159 ValueError, match=rf"Prompt '{stem}'.*missing required 'type'" 160 160 ): 161 - muse.get_muse_configs(include_disabled=True) 161 + talent.get_talent_configs(include_disabled=True) 162 162 finally: 163 163 prompt_path.unlink(missing_ok=True) 164 164 165 165 166 - def test_get_muse_configs_allows_missing_type_with_tools(): 167 - muse = importlib.import_module("think.muse") 166 + def test_get_talent_configs_allows_missing_type_with_tools(): 167 + talent = importlib.import_module("think.talent") 168 168 stem = f"test_missing_type_tools_{uuid.uuid4().hex}" 169 - prompt_path = _write_temp_muse_prompt( 169 + prompt_path = _write_temp_talent_prompt( 170 170 stem, 171 171 '{\n "schedule": "daily",\n "priority": 10,\n "tools": "journal"\n}', 172 172 ) 173 173 try: 174 - configs = muse.get_muse_configs(include_disabled=True) 174 + configs = talent.get_talent_configs(include_disabled=True) 175 175 assert stem in configs 176 176 assert configs[stem].get("type") is None 177 177 finally: 178 178 prompt_path.unlink(missing_ok=True) 179 179 180 180 181 - def test_get_muse_configs_raises_when_generate_missing_output(): 182 - muse = importlib.import_module("think.muse") 181 + def test_get_talent_configs_raises_when_generate_missing_output(): 182 + talent = 
importlib.import_module("think.talent") 183 183 stem = f"test_generate_missing_output_{uuid.uuid4().hex}" 184 - prompt_path = _write_temp_muse_prompt( 184 + prompt_path = _write_temp_talent_prompt( 185 185 stem, 186 186 '{\n "type": "generate",\n "schedule": "daily",\n "priority": 10\n}', 187 187 ) ··· 190 190 ValueError, 191 191 match=rf"Prompt '{stem}'.*type='generate'.*missing required 'output'", 192 192 ): 193 - muse.get_muse_configs(include_disabled=True) 193 + talent.get_talent_configs(include_disabled=True) 194 194 finally: 195 195 prompt_path.unlink(missing_ok=True) 196 196 197 197 198 - def test_get_muse_configs_allows_cogitate_without_tools(): 199 - muse = importlib.import_module("think.muse") 198 + def test_get_talent_configs_allows_cogitate_without_tools(): 199 + talent = importlib.import_module("think.talent") 200 200 stem = f"test_cogitate_missing_tools_{uuid.uuid4().hex}" 201 - prompt_path = _write_temp_muse_prompt( 201 + prompt_path = _write_temp_talent_prompt( 202 202 stem, 203 203 '{\n "type": "cogitate",\n "schedule": "daily",\n "priority": 10\n}', 204 204 ) 205 205 try: 206 - configs = muse.get_muse_configs(include_disabled=True) 206 + configs = talent.get_talent_configs(include_disabled=True) 207 207 assert stem in configs 208 208 assert configs[stem]["type"] == "cogitate" 209 209 finally: 210 210 prompt_path.unlink(missing_ok=True) 211 211 212 212 213 - def test_get_muse_configs_type_generate_returns_only_generate(): 214 - muse = importlib.import_module("think.muse") 215 - generators = muse.get_muse_configs(type="generate") 213 + def test_get_talent_configs_type_generate_returns_only_generate(): 214 + talent = importlib.import_module("think.talent") 215 + generators = talent.get_talent_configs(type="generate") 216 216 assert generators, "Expected at least one generate prompt" 217 217 assert all(meta.get("type") == "generate" for meta in generators.values()) 218 218 219 219 220 - def test_get_muse_configs_type_cogitate_returns_only_cogitate(): 
221 - muse = importlib.import_module("think.muse") 222 - cogitate_prompts = muse.get_muse_configs(type="cogitate") 220 + def test_get_talent_configs_type_cogitate_returns_only_cogitate(): 221 + talent = importlib.import_module("think.talent") 222 + cogitate_prompts = talent.get_talent_configs(type="cogitate") 223 223 assert cogitate_prompts, "Expected at least one cogitate prompt" 224 224 assert all(meta.get("type") == "cogitate" for meta in cogitate_prompts.values())
+2 -2
tests/test_home_events.py
··· 92 92 path="/home", 93 93 user_message="hello world", 94 94 agent_response="hi there", 95 - muse=agent_name, 95 + talent=agent_name, 96 96 agent_id="abc123", 97 97 ) 98 98 ··· 119 119 path="", 120 120 user_message="", 121 121 agent_response="done", 122 - muse="unified", 122 + talent="unified", 123 123 agent_id="abc123", 124 124 ) 125 125
+1 -1
tests/test_journal_stats.py
··· 283 283 token_entry_with_duration = { 284 284 "timestamp": 1704067200.0, 285 285 "model": "gemini-2.5-flash", 286 - "context": "muse.system.meetings", 286 + "context": "talent.system.meetings", 287 287 "type": "cogitate", 288 288 "usage": { 289 289 "input_tokens": 100,
+16 -16
tests/test_models.py
··· 440 440 assert registry[context]["tier"] in (TIER_PRO, TIER_FLASH, TIER_LITE) 441 441 442 442 443 - def test_context_registry_includes_muse_configs(): 444 - """Test that registry includes discovered muse contexts (agents + generators).""" 443 + def test_context_registry_includes_talent_configs(): 444 + """Test that registry includes discovered talent contexts (agents + generators).""" 445 445 registry = get_context_registry() 446 446 447 - # Should have muse entries (from muse/*.md and apps/*/muse/*.md) 448 - muse_contexts = [k for k in registry if k.startswith("muse.")] 447 + # Should have talent entries (from talent/*.md and apps/*/talent/*.md) 448 + talent_contexts = [k for k in registry if k.startswith("talent.")] 449 449 450 - # Should have multiple muse contexts (agents + generators) 451 - assert len(muse_contexts) > 1, "Should discover muse contexts" 450 + # Should have multiple talent contexts (agents + generators) 451 + assert len(talent_contexts) > 1, "Should discover talent contexts" 452 452 453 - # Should have system muse configs 454 - system_muse = [k for k in muse_contexts if k.startswith("muse.system.")] 455 - assert len(system_muse) > 0, "Should discover system muse configs" 453 + # Should have system talent configs 454 + system_talent = [k for k in talent_contexts if k.startswith("talent.system.")] 455 + assert len(system_talent) > 0, "Should discover system talent configs" 456 456 457 - # Should have app muse configs 458 - app_muse = [ 457 + # Should have app talent configs 458 + app_talent = [ 459 459 k 460 - for k in muse_contexts 461 - if k.startswith("muse.") and not k.startswith("muse.system.") 460 + for k in talent_contexts 461 + if k.startswith("talent.") and not k.startswith("talent.system.") 462 462 ] 463 - assert len(app_muse) > 0, "Should discover app muse configs" 463 + assert len(app_talent) > 0, "Should discover app talent configs" 464 464 465 - # Should include type field for muse contexts 466 - for context in muse_contexts: 465 
+ # Should include type field for talent contexts 466 + for context in talent_contexts: 467 467 assert "type" in registry[context], f"{context} missing type field" 468 468 469 469
+2 -2
tests/test_muse.py → tests/test_talent.py
··· 1 1 # SPDX-License-Identifier: AGPL-3.0-only 2 2 # Copyright (c) 2026 sol pbc 3 3 4 - """Tests for think.muse module.""" 4 + """Tests for think.talent module.""" 5 5 6 - from think.muse import get_agent_filter, source_is_enabled, source_is_required 6 + from think.talent import get_agent_filter, source_is_enabled, source_is_required 7 7 8 8 9 9 def test_source_is_enabled_bool():
+11 -11
tests/test_muse_cli.py → tests/test_talent_cli.py
··· 1 1 # SPDX-License-Identifier: AGPL-3.0-only 2 2 # Copyright (c) 2026 sol pbc 3 3 4 - """Tests for the sol muse CLI.""" 4 + """Tests for the sol talent CLI.""" 5 5 6 6 import json 7 7 8 8 import pytest 9 9 10 - from think.muse_cli import ( 10 + from think.talent_cli import ( 11 11 _collect_configs, 12 12 _format_bytes, 13 13 _format_cost, ··· 159 159 show_prompt("flow") 160 160 output = capsys.readouterr().out 161 161 162 - assert "muse/flow.md" in output 162 + assert "talent/flow.md" in output 163 163 assert "title:" in output 164 164 assert "schedule:" in output 165 165 assert "daily" in output ··· 238 238 239 239 def test_truncate_content(): 240 240 """Content truncation works correctly.""" 241 - from think.muse_cli import _truncate_content 241 + from think.talent_cli import _truncate_content 242 242 243 243 # Short content not truncated 244 244 short = "line1\nline2\nline3" ··· 257 257 258 258 def test_yesterday(): 259 259 """Yesterday helper returns correct format.""" 260 - from think.muse_cli import _yesterday 260 + from think.talent_cli import _yesterday 261 261 262 262 result = _yesterday() 263 263 assert len(result) == 8 ··· 266 266 267 267 def test_show_prompt_context_segment_validation(capsys): 268 268 """Segment-scheduled prompts require --segment.""" 269 - from think.muse_cli import show_prompt_context 269 + from think.talent_cli import show_prompt_context 270 270 271 271 with pytest.raises(SystemExit): 272 272 show_prompt_context("screen", day="20260101") ··· 277 277 278 278 def test_show_prompt_context_multi_facet_validation(capsys): 279 279 """Multi-facet prompts require --facet.""" 280 - from think.muse_cli import show_prompt_context 280 + from think.talent_cli import show_prompt_context 281 281 282 282 with pytest.raises(SystemExit): 283 283 show_prompt_context("entities:entities") ··· 288 288 289 289 def test_show_prompt_context_day_format_validation(capsys): 290 290 """Day argument must be YYYYMMDD format.""" 291 - from think.muse_cli import 
show_prompt_context 291 + from think.talent_cli import show_prompt_context 292 292 293 293 # Too short 294 294 with pytest.raises(SystemExit): ··· 601 601 602 602 def test_show_prompt_context_activity_requires_facet(capsys): 603 603 """Activity-scheduled prompts require --facet.""" 604 - from think.muse_cli import show_prompt_context 604 + from think.talent_cli import show_prompt_context 605 605 606 606 with pytest.raises(SystemExit): 607 607 show_prompt_context("decisions", day="20260214") ··· 613 613 614 614 def test_show_prompt_context_activity_requires_activity_id(capsys): 615 615 """Activity-scheduled prompts require --activity and list available IDs.""" 616 - from think.muse_cli import show_prompt_context 616 + from think.talent_cli import show_prompt_context 617 617 618 618 with pytest.raises(SystemExit): 619 619 show_prompt_context("decisions", day="20260214", facet="full-featured") ··· 626 626 627 627 def test_show_prompt_context_activity_not_found(capsys): 628 628 """Activity-scheduled prompt with unknown activity ID errors.""" 629 - from think.muse_cli import show_prompt_context 629 + from think.talent_cli import show_prompt_context 630 630 631 631 with pytest.raises(SystemExit): 632 632 show_prompt_context(
+22 -22
tests/test_observation.py
··· 16 16 17 17 class TestPreHook: 18 18 def test_skips_when_not_observing(self): 19 - from muse.observation import pre_process 19 + from talent.observation import pre_process 20 20 21 21 result = pre_process({"day": "20260306", "segment": "120000_300"}) 22 22 assert result == {"skip_reason": "not_observing"} 23 23 24 24 def test_skips_when_status_is_ready(self): 25 - from muse.observation import pre_process 25 + from talent.observation import pre_process 26 26 from think.awareness import update_state 27 27 28 28 update_state("onboarding", {"status": "ready"}) ··· 30 30 assert result == {"skip_reason": "not_observing"} 31 31 32 32 def test_skips_when_status_is_complete(self): 33 - from muse.observation import pre_process 33 + from talent.observation import pre_process 34 34 from think.awareness import update_state 35 35 36 36 update_state("onboarding", {"status": "complete"}) ··· 38 38 assert result == {"skip_reason": "not_observing"} 39 39 40 40 def test_skips_when_status_is_skipped(self): 41 - from muse.observation import pre_process 41 + from talent.observation import pre_process 42 42 from think.awareness import update_state 43 43 44 44 update_state("onboarding", {"status": "skipped"}) ··· 46 46 assert result == {"skip_reason": "not_observing"} 47 47 48 48 def test_proceeds_when_observing(self): 49 - from muse.observation import pre_process 49 + from talent.observation import pre_process 50 50 from think.awareness import start_onboarding 51 51 52 52 start_onboarding("a") ··· 62 62 start_onboarding("a") 63 63 64 64 def test_writes_observation_to_log(self): 65 - from muse.observation import post_process 65 + from talent.observation import post_process 66 66 from think.awareness import read_log 67 67 68 68 findings = json.dumps( ··· 89 89 assert obs[0]["message"] == "Solo coding session" 90 90 91 91 def test_increments_observation_count(self): 92 - from muse.observation import post_process 92 + from talent.observation import post_process 93 93 from think.awareness 
import get_onboarding 94 94 95 95 findings = json.dumps({"summary": "test", "has_meeting": False}) ··· 101 101 assert get_onboarding()["observation_count"] == 2 102 102 103 103 def test_handles_invalid_json(self): 104 - from muse.observation import post_process 104 + from talent.observation import post_process 105 105 106 106 result = post_process("not json", {"day": "20260306", "segment": "120000_300"}) 107 107 assert result == "not json" # Returns result unchanged 108 108 109 109 def test_handles_non_dict_json(self): 110 - from muse.observation import post_process 110 + from talent.observation import post_process 111 111 112 112 result = post_process("[1,2,3]", {"day": "20260306", "segment": "120000_300"}) 113 113 assert result == "[1,2,3]" # Returns result unchanged 114 114 115 115 def test_returns_result_unchanged(self): 116 - from muse.observation import post_process 116 + from talent.observation import post_process 117 117 118 118 findings = json.dumps({"summary": "test", "has_meeting": False}) 119 119 result = post_process(findings, {"day": "20260306", "segment": "120000_300"}) ··· 122 122 123 123 class TestNudgeLogic: 124 124 def test_first_meeting_triggers_nudge(self): 125 - from muse.observation import _check_nudge 125 + from talent.observation import _check_nudge 126 126 127 127 findings = { 128 128 "has_meeting": True, ··· 135 135 assert "3 people" in nudge["message"] 136 136 137 137 def test_no_meeting_no_first_nudge(self): 138 - from muse.observation import _check_nudge 138 + from talent.observation import _check_nudge 139 139 140 140 findings = {"has_meeting": False} 141 141 nudge = _check_nudge(findings, 1, 0, {}) 142 142 assert nudge is None 143 143 144 144 def test_entity_cluster_triggers_nudge(self): 145 - from muse.observation import _check_nudge 145 + from talent.observation import _check_nudge 146 146 147 147 findings = { 148 148 "has_meeting": False, ··· 153 153 assert "network" in nudge["title"].lower() 154 154 155 155 def 
test_progress_update_at_5_segments(self): 156 - from muse.observation import _check_nudge 156 + from talent.observation import _check_nudge 157 157 158 158 findings = {"has_meeting": False, "people": []} 159 159 nudge = _check_nudge(findings, 5, 2, {}) ··· 161 161 assert "Still learning" in nudge["title"] 162 162 163 163 def test_no_nudge_when_max_reached(self): 164 - from muse.observation import MAX_NUDGES, _check_nudge 164 + from talent.observation import MAX_NUDGES, _check_nudge 165 165 166 166 findings = {"has_meeting": True, "speaker_count": 5} 167 167 # nudges_sent == MAX_NUDGES means all nudges used ··· 173 173 174 174 class TestThreshold: 175 175 def test_not_met_with_few_segments(self): 176 - from muse.observation import _threshold_met 176 + from talent.observation import _threshold_met 177 177 178 178 onboarding = {"started": "20260306T08:00:00"} 179 179 assert _threshold_met(onboarding, 5) is False ··· 182 182 # Just started — not enough time elapsed 183 183 from datetime import datetime 184 184 185 - from muse.observation import MIN_SEGMENTS, _threshold_met 185 + from talent.observation import MIN_SEGMENTS, _threshold_met 186 186 187 187 now = datetime.now().strftime("%Y%m%dT%H:%M:%S") 188 188 onboarding = {"started": now} 189 189 assert _threshold_met(onboarding, MIN_SEGMENTS) is False 190 190 191 191 def test_met_with_enough_segments_and_time(self): 192 - from muse.observation import MIN_SEGMENTS, _threshold_met 192 + from talent.observation import MIN_SEGMENTS, _threshold_met 193 193 194 194 # Started 5 hours ago 195 195 onboarding = {"started": "20260101T03:00:00"} 196 196 assert _threshold_met(onboarding, MIN_SEGMENTS) is True 197 197 198 198 def test_not_met_with_no_started(self): 199 - from muse.observation import MIN_SEGMENTS, _threshold_met 199 + from talent.observation import MIN_SEGMENTS, _threshold_met 200 200 201 201 onboarding = {} 202 202 assert _threshold_met(onboarding, MIN_SEGMENTS) is False ··· 204 204 205 205 class TestElapsedHours: 
206 206 def test_valid_iso(self): 207 - from muse.observation import _elapsed_hours 207 + from talent.observation import _elapsed_hours 208 208 209 209 # A date far in the past should give many hours 210 210 hours = _elapsed_hours("20200101T00:00:00") 211 211 assert hours > 24 212 212 213 213 def test_empty_string(self): 214 - from muse.observation import _elapsed_hours 214 + from talent.observation import _elapsed_hours 215 215 216 216 assert _elapsed_hours("") == 0.0 217 217 218 218 def test_invalid_format(self): 219 - from muse.observation import _elapsed_hours 219 + from talent.observation import _elapsed_hours 220 220 221 221 assert _elapsed_hours("not-a-date") == 0.0 222 222
+9 -9
tests/test_onboarding.py
··· 104 104 105 105 106 106 def test_triage_skipped_gets_unified(): 107 - """Onboarding skipped, no facets → unified (single muse, no two-mode split).""" 107 + """Onboarding skipped, no facets → unified (single talent, no two-mode split).""" 108 108 mock = _run_triage(onboarding={"status": "skipped"}) 109 109 assert mock.call_args.kwargs["name"] == "unified" 110 110 111 111 112 112 def test_triage_complete_gets_unified(): 113 - """Onboarding complete, no facets → unified (single muse, no two-mode split).""" 113 + """Onboarding complete, no facets → unified (single talent, no two-mode split).""" 114 114 mock = _run_triage(onboarding={"status": "complete"}) 115 115 assert mock.call_args.kwargs["name"] == "unified" 116 116 ··· 121 121 def test_chat_cli_routes_to_onboarding_when_unified_and_no_facets(): 122 122 args = argparse.Namespace( 123 123 message=["Hi there"], 124 - muse="unified", 124 + talent="unified", 125 125 facet=None, 126 126 provider=None, 127 127 verbose=False, ··· 130 130 assert mock_request.call_args.kwargs["name"] == "onboarding" 131 131 132 132 133 - def test_chat_cli_keeps_explicit_muse_when_no_facets(): 133 + def test_chat_cli_keeps_explicit_talent_when_no_facets(): 134 134 args = argparse.Namespace( 135 135 message=["Hi there"], 136 - muse="entities", 136 + talent="entities", 137 137 facet=None, 138 138 provider=None, 139 139 verbose=False, ··· 143 143 144 144 145 145 def test_chat_cli_path_a_observing_stays_unified(): 146 - """During Path A observation, chat CLI uses unified muse, not onboarding.""" 146 + """During Path A observation, chat CLI uses unified talent, not onboarding.""" 147 147 args = argparse.Namespace( 148 148 message=["What have you noticed?"], 149 - muse="unified", 149 + talent="unified", 150 150 facet=None, 151 151 provider=None, 152 152 verbose=False, ··· 158 158 159 159 160 160 def test_chat_cli_skipped_stays_unified(): 161 - """After skipping onboarding, chat CLI uses unified muse.""" 161 + """After skipping onboarding, chat CLI uses unified talent.""" 162 162 args = argparse.Namespace( 163 163 message=["Hello"], 164 - muse="unified", 164 + talent="unified", 165 165 facet=None, 166 166 provider=None, 167 167 verbose=False,
+16 -16
tests/test_output_hooks.py
··· 16 16 import shutil 17 17 from pathlib import Path 18 18 19 - from think.muse import load_post_hook, load_pre_hook 19 + from think.talent import load_post_hook, load_pre_hook 20 20 from think.utils import day_path 21 21 22 22 FIXTURES = Path("tests/fixtures") ··· 120 120 121 121 122 122 def test_load_post_hook_named_resolution(): 123 - """Test that named hooks resolve to muse/{name}.py.""" 124 - # occurrence.py exists in muse/ 123 + """Test that named hooks resolve to talent/{name}.py.""" 124 + # occurrence.py exists in talent/ 125 125 config = {"hook": {"post": "occurrence"}} 126 126 hook_fn = load_post_hook(config) 127 127 assert callable(hook_fn) ··· 139 139 140 140 def test_prompt_metadata_no_hook_path(tmp_path): 141 141 """Test that _load_prompt_metadata no longer sets hook_path.""" 142 - muse = importlib.import_module("think.muse") 142 + talent = importlib.import_module("think.talent") 143 143 144 144 md_file = tmp_path / "test_generator.md" 145 145 md_file.write_text( ··· 150 150 hook_file = tmp_path / "test_generator.py" 151 151 hook_file.write_text("def post_process(r, c): return r") 152 152 153 - meta = muse._load_prompt_metadata(md_file) 153 + meta = talent._load_prompt_metadata(md_file) 154 154 155 155 # hook_path should no longer be set (hooks are loaded via load_post_hook) 156 156 assert "hook_path" not in meta ··· 163 163 mod = importlib.import_module("think.agents") 164 164 copy_day(tmp_path) 165 165 166 - # Use tmp_path as muse directory to avoid polluting real muse/ 167 - import think.muse 166 + # Use tmp_path as talent directory to avoid polluting real talent/ 167 + import think.talent 168 168 169 - monkeypatch.setattr(think.muse, "MUSE_DIR", tmp_path) 169 + monkeypatch.setattr(think.talent, "TALENT_DIR", tmp_path) 170 170 171 171 prompt_file = tmp_path / "hooked_test.md" 172 172 prompt_file.write_text( ··· 218 218 mod = importlib.import_module("think.agents") 219 219 copy_day(tmp_path) 220 220 221 - import think.muse 221 + import think.talent 222 222 223 - monkeypatch.setattr(think.muse, "MUSE_DIR", tmp_path) 223 + monkeypatch.setattr(think.talent, "TALENT_DIR", tmp_path) 224 224 225 225 prompt_file = tmp_path / "noop_test.md" 226 226 prompt_file.write_text( ··· 264 264 mod = importlib.import_module("think.agents") 265 265 copy_day(tmp_path) 266 266 267 - import think.muse 267 + import think.talent 268 268 269 - monkeypatch.setattr(think.muse, "MUSE_DIR", tmp_path) 269 + monkeypatch.setattr(think.talent, "TALENT_DIR", tmp_path) 270 270 271 271 prompt_file = tmp_path / "broken_test.md" 272 272 prompt_file.write_text( ··· 381 381 mod = importlib.import_module("think.agents") 382 382 copy_day(tmp_path) 383 383 384 - import think.muse 384 + import think.talent 385 385 386 - monkeypatch.setattr(think.muse, "MUSE_DIR", tmp_path) 386 + monkeypatch.setattr(think.talent, "TALENT_DIR", tmp_path) 387 387 388 388 prompt_file = tmp_path / "prehooked_test.md" 389 389 prompt_file.write_text( ··· 441 441 mod = importlib.import_module("think.agents") 442 442 copy_day(tmp_path) 443 443 444 - import think.muse 444 + import think.talent 445 445 446 - monkeypatch.setattr(think.muse, "MUSE_DIR", tmp_path) 446 + monkeypatch.setattr(think.talent, "TALENT_DIR", tmp_path) 447 447 448 448 prompt_file = tmp_path / "both_hooks_test.md" 449 449 prompt_file.write_text(
+1 -1
tests/test_output_path.py
··· 6 6 import os 7 7 from pathlib import Path 8 8 9 - from think.muse import get_output_name, get_output_path 9 + from think.talent import get_output_name, get_output_path 10 10 11 11 os.environ.setdefault("_SOLSTONE_JOURNAL_OVERRIDE", "tests/fixtures/journal") 12 12
+2 -2
tests/test_routines.py
··· 21 21 22 22 23 23 def _load_chat_context_module(): 24 - """Load muse.chat_context from this worktree explicitly for tests.""" 25 - path = Path(__file__).resolve().parents[1] / "muse" / "chat_context.py" 24 + """Load talent.chat_context from this worktree explicitly for tests.""" 25 + path = Path(__file__).resolve().parents[1] / "talent" / "chat_context.py" 26 26 spec = importlib.util.spec_from_file_location("test_chat_context", path) 27 27 assert spec is not None 28 28 assert spec.loader is not None
+3 -3
tests/test_speaker_attribution_hook.py
··· 1 1 # SPDX-License-Identifier: AGPL-3.0-only 2 2 # Copyright (c) 2026 sol pbc 3 3 4 - """Unit tests for muse.speaker_attribution pre_process stub-writing behavior.""" 4 + """Unit tests for talent.speaker_attribution pre_process stub-writing behavior.""" 5 5 6 6 import json 7 7 from unittest.mock import patch ··· 19 19 return_value=seg_dir, 20 20 ), 21 21 ): 22 - from muse.speaker_attribution import pre_process 22 + from talent.speaker_attribution import pre_process 23 23 24 24 return pre_process(context) 25 25 ··· 82 82 """Missing day/segment -> returns early before any stub logic.""" 83 83 (tmp_path / "audio.npz").write_bytes(b"x") 84 84 with patch("think.utils.segment_path", return_value=tmp_path): 85 - from muse.speaker_attribution import pre_process 85 + from talent.speaker_attribution import pre_process 86 86 87 87 result = pre_process({"stream": "default"}) 88 88 stub_path = tmp_path / "agents" / "speaker_labels.json"
+1 -1
think/activities.py
··· 705 705 Returns: 706 706 Absolute path for the output file 707 707 """ 708 - from think.muse import get_output_name 708 + from think.talent import get_output_name 709 709 710 710 output_name = get_output_name(key) 711 711 ext = "json" if output_format == "json" else "md"
+6 -6
think/agents.py
··· 26 26 from typing import Any, Callable, Optional 27 27 28 28 from think.cluster import cluster, cluster_period, cluster_span 29 - from think.muse import ( 29 + from think.talent import ( 30 30 get_agent_filter, 31 - get_muse_configs, 31 + get_talent_configs, 32 32 get_output_path, 33 33 load_post_hook, 34 34 load_pre_hook, ··· 434 434 Fully prepared config dict 435 435 """ 436 436 from think.models import resolve_model_for_provider, resolve_provider 437 - from think.muse import get_agent, key_to_context 437 + from think.talent import get_agent, key_to_context 438 438 439 439 name = request.get("name", "unified") 440 440 facet = request.get("facet") ··· 806 806 807 807 context = config.get("context") 808 808 if not context: 809 - from think.muse import key_to_context 809 + from think.talent import key_to_context 810 810 811 811 context = key_to_context(config.get("name", "unified")) 812 812 backup_model = resolve_model_for_provider(context, backup, "cogitate") ··· 862 862 emit_event: Event emission callback 863 863 """ 864 864 from think.models import generate_with_result 865 - from think.muse import key_to_context 865 + from think.talent import key_to_context 866 866 867 867 name = config.get("name", "unified") 868 868 transcript = config.get("transcript", "") ··· 1127 1127 stored within segment directories and are not included here. 1128 1128 """ 1129 1129 day_dir = day_path(day) 1130 - daily_generators = get_muse_configs( 1130 + daily_generators = get_talent_configs( 1131 1131 type="generate", schedule="daily", include_disabled=True 1132 1132 ) 1133 1133 processed: list[str] = []
+6 -1
think/awareness.py
··· 25 25 from typing import Any 26 26 27 27 logger = logging.getLogger(__name__) 28 + _LEGACY_AGENT_FIELD = "mu" "se" 28 29 29 30 30 31 def _awareness_dir() -> Path: ··· 565 566 exchanges = get_recent_exchanges(limit=10000) 566 567 except Exception: 567 568 exchanges = [] 568 - non_onboarding = [ex for ex in exchanges if ex.get("muse") != "onboarding"] 569 + non_onboarding = [ 570 + ex 571 + for ex in exchanges 572 + if (ex.get("talent") or ex.get(_LEGACY_AGENT_FIELD, "")) != "onboarding" 573 + ] 569 574 conversation_count = len(non_onboarding) 570 575 571 576 entity_names = [e["entity_name"].lower() for e in entities if e.get("entity_name")]
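The `think/awareness.py` hunk above reads the agent name from both the new and the legacy exchange field. A minimal standalone sketch of that fallback (the `agent_name` helper is illustrative, not a function in the diff):

```python
LEGACY_FIELD = "muse"  # pre-rename exchanges stored the agent name under this key

def agent_name(exchange: dict) -> str:
    """Return the agent name from an exchange record, old or new schema."""
    # New records use "talent"; legacy journal data still carries the old key.
    return exchange.get("talent") or exchange.get(LEGACY_FIELD, "")

# Both schemas resolve identically, so onboarding exchanges stay excluded
# from the conversation count no matter when they were recorded.
old = {"muse": "onboarding", "user_message": "hi"}
new = {"talent": "unified", "user_message": "hi"}
assert agent_name(old) == "onboarding"
assert agent_name(new) == "unified"
```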
+5 -5
think/chat_cli.py
··· 24 24 parser.add_argument("--facet", help="Facet context") 25 25 parser.add_argument("--provider", help="AI provider override") 26 26 parser.add_argument( 27 - "--muse", default="unified", help="Muse agent name (default: unified)" 27 + "--talent", default="unified", help="Talent agent name (default: unified)" 28 28 ) 29 29 args = setup_cli(parser) 30 30 31 - if args.muse == "unified": 31 + if args.talent == "unified": 32 32 from think.awareness import get_onboarding 33 33 from think.facets import get_enabled_facets 34 34 ··· 36 36 onboarding_status = onboarding.get("status", "") 37 37 38 38 if onboarding_status in ("observing", "ready", "complete", "skipped"): 39 - pass # Stay with unified muse — onboarding path already chosen 39 + pass # Stay with unified talent — onboarding path already chosen 40 40 elif not get_enabled_facets(): 41 - args.muse = "onboarding" 41 + args.talent = "onboarding" 42 42 43 43 if not args.message: 44 44 parser.print_help() ··· 52 52 53 53 agent_id = cortex_request( 54 54 prompt=message, 55 - name=args.muse, 55 + name=args.talent, 56 56 provider=args.provider, 57 57 config=config if config else None, 58 58 )
+6 -6
think/conversation.py
··· 4 4 """Conversation memory service for solstone. 5 5 6 6 Manages conversation exchange storage, retrieval, and context injection 7 - for the unified muse agent. Three layers of recall: 7 + for the unified talent agent. Three layers of recall: 8 8 9 9 - Layer 1: Recent exchanges (last ~10 turns), loaded directly into context 10 10 - Layer 2: Today's earlier exchanges, summarized compactly ··· 30 30 # Journal stream name for conversation segments 31 31 CONVERSATION_STREAM = "conversation" 32 32 33 - # Marker in unified muse for memory injection 33 + # Marker in unified talent for memory injection 34 34 INJECTION_MARKER = "CONVERSATION_MEMORY_INJECTION_POINT" 35 35 36 36 # Context budget: max characters for agent response in recent exchanges ··· 56 56 path: str = "", 57 57 user_message: str = "", 58 58 agent_response: str = "", 59 - muse: str = "", 59 + talent: str = "", 60 60 agent_id: str = "", 61 61 ) -> None: 62 62 """Record a conversation exchange to journal storage. ··· 83 83 "path": path, 84 84 "user_message": user_message, 85 85 "agent_response": agent_response, 86 - "muse": muse, 86 + "talent": talent, 87 87 "agent_id": agent_id, 88 88 } 89 89 ··· 321 321 """Build the full conversation memory context block. 322 322 323 323 Assembles layer 1 (recent exchanges) and layer 2 (today's summary) 324 - into a formatted block for injection into the unified muse prompt. 324 + into a formatted block for injection into the unified talent prompt. 325 325 326 326 Args: 327 327 facet: Active facet for filtering. ··· 361 361 """Replace the CONVERSATION_MEMORY_INJECTION_POINT with memory context. 362 362 363 363 Args: 364 - user_instruction: The unified muse's user instruction text. 364 + user_instruction: The unified talent's user instruction text. 365 365 memory_context: Formatted conversation memory to inject. 366 366 367 367 Returns:
+1 -1
think/cortex.py
··· 428 428 if usage_data and original_request: 429 429 try: 430 430 from think.models import log_token_usage 431 - from think.muse import key_to_context 431 + from think.talent import key_to_context 432 432 433 433 model = original_request.get("model", "unknown") 434 434 name = original_request.get("name", "unknown")
+13 -13
think/dream.py
··· 30 30 get_enabled_facets, 31 31 load_segment_facets, 32 32 ) 33 - from think.muse import get_muse_configs, get_output_path 33 + from think.talent import get_talent_configs, get_output_path 34 34 from think.runner import run_task 35 35 from think.sense_splitter import write_idle_stubs, write_sense_outputs 36 36 from think.utils import ( ··· 404 404 agents based on Sense output. 405 405 """ 406 406 target_schedule = "segment" 407 - all_prompts = get_muse_configs(schedule="segment") 407 + all_prompts = get_talent_configs(schedule="segment") 408 408 if not all_prompts: 409 409 logging.info("No prompts found for schedule: segment") 410 410 return (0, 0, []) ··· 764 764 target_schedule = "daily" 765 765 766 766 # Load ALL scheduled prompts (both generators and agents) 767 - all_prompts = get_muse_configs(schedule=target_schedule) 767 + all_prompts = get_talent_configs(schedule=target_schedule) 768 768 769 769 if not all_prompts: 770 770 logging.info(f"No prompts found for schedule: {target_schedule}") ··· 773 773 # Group prompts by priority 774 774 priority_groups: dict[int, list[tuple[str, dict]]] = {} 775 775 for name, config in all_prompts.items(): 776 - priority = config["priority"] # Required field, validated by get_muse_configs 776 + priority = config["priority"] # Required field, validated by get_talent_configs 777 777 priority_groups.setdefault(priority, []).append((name, config)) 778 778 779 779 # Pre-compute shared data for multi-facet prompts ··· 1060 1060 target_schedule = "weekly" 1061 1061 1062 1062 # Load ALL scheduled prompts (both generators and agents) 1063 - all_prompts = get_muse_configs(schedule=target_schedule) 1063 + all_prompts = get_talent_configs(schedule=target_schedule) 1064 1064 1065 1065 if not all_prompts: 1066 1066 logging.info(f"No prompts found for schedule: {target_schedule}") ··· 1069 1069 # Group prompts by priority 1070 1070 priority_groups: dict[int, list[tuple[str, dict]]] = {} 1071 1071 for name, config in all_prompts.items(): 
1072 - priority = config["priority"] # Required field, validated by get_muse_configs 1072 + priority = config["priority"] # Required field, validated by get_talent_configs 1073 1073 1074 1074 # Pre-compute shared data for multi-facet prompts ··· 1380 1380 return False 1381 1381 1382 1382 # Load activity-scheduled agents 1383 - all_prompts = get_muse_configs(schedule="activity") 1383 + all_prompts = get_talent_configs(schedule="activity") 1384 1384 1385 1385 if not all_prompts: 1386 1386 logging.info("No activity-scheduled agents found") ··· 1680 1680 Returns: 1681 1681 True if all flush agents succeeded, False if any failed 1682 1682 """ 1683 - all_prompts = get_muse_configs(schedule="segment") 1683 + all_prompts = get_talent_configs(schedule="segment") 1684 1684 1685 1685 # Filter to only agents with flush hooks 1686 1686 flush_prompts = { ··· 1895 1895 return 1896 1896 1897 1897 if weekly: 1898 - all_prompts = get_muse_configs(schedule="weekly") 1898 + all_prompts = get_talent_configs(schedule="weekly") 1899 1899 print(f"Day {day_formatted} — weekly agents\n") 1900 1900 if not all_prompts: 1901 1901 print("No prompts for schedule: weekly") ··· 1917 1917 label += f" stream={seg_stream}" 1918 1918 print(label) 1919 1919 print() 1920 - all_prompts = get_muse_configs(schedule="segment") 1920 + all_prompts = get_talent_configs(schedule="segment") 1921 1921 if all_prompts: 1922 1922 _print_segment_orchestrator(all_prompts, "<each>") 1923 1923 return 1924 1924 1925 1925 # Default: full daily or segment run 1926 1926 target_schedule = "segment" if segment else "daily" 1927 - all_prompts = get_muse_configs(schedule=target_schedule) 1927 + all_prompts = get_talent_configs(schedule=target_schedule) 1928 1928 1929 1929 header = f"Day {day_formatted}" 1930 1930 if segment: ··· 2063 2063 print(f" type: {activity_type}") 2064 2064 print(f" segments: {len(segments)}") 2065 2065 2066 - all_prompts = get_muse_configs(schedule="activity") 2066 + all_prompts = get_talent_configs(schedule="activity") 2067 2067 matching = { 2068 2068 n: c 2069 2069 for n, c in all_prompts.items() ··· 2102 2102 2103 2103 def _dry_run_flush(day: str, segment: str) -> None: 2104 2104 """Dry-run for --flush mode.""" 2105 - all_prompts = get_muse_configs(schedule="segment") 2105 + all_prompts = get_talent_configs(schedule="segment") 2106 2106 flush_prompts = { 2107 2107 n: c 2108 2108 for n, c in all_prompts.items()
+2 -2
think/hooks.py
··· 4 4 """Shared utilities for output extraction hooks. 5 5 6 6 This module provides common functions used by extraction hooks like 7 - occurrence.py and anticipation.py in the muse/ directory. 7 + occurrence.py and anticipation.py in the talent/ directory. 8 8 """ 9 9 10 10 import json ··· 182 182 Returns: 183 183 Relative path like "20240101/agents/meetings.md". 184 184 """ 185 - from think.muse import get_output_name 185 + from think.talent import get_output_name 186 186 from think.utils import get_journal 187 187 188 188 day = context.get("day", "")
+37 -24
think/models.py
··· 117 117 # Examples: 118 118 # - observe.describe.frame -> observe module, describe feature, frame operation 119 119 # - observe.enrich -> observe module, enrich feature (no sub-operation) 120 - # - muse.system.meetings -> muse module, system source, meetings config 121 - # - muse.entities.observer -> muse module, entities app, observer config 120 + # - talent.system.meetings -> talent module, system source, meetings config 121 + # - talent.entities.observer -> talent module, entities app, observer config 122 122 # - app.chat.title -> apps module, chat app, title operation 123 123 # 124 124 # DISCOVERY SOURCES: 125 125 # 1. Prompt files listed in PROMPT_PATHS (with context in frontmatter) 126 126 # 2. Categories from observe/categories/*.md (tier/label/group in frontmatter) 127 - # 3. Muse configs from muse/*.md and apps/*/muse/*.md 127 + # 3. Talent configs from talent/*.md and apps/*/talent/*.md 128 128 # 129 129 # When adding new contexts: 130 130 # 1. Create a .md prompt file with YAML frontmatter containing: ··· 153 153 154 154 # Cached context registry (built lazily on first use) 155 155 _context_registry: Optional[Dict[str, Dict[str, Any]]] = None 156 + _LEGACY_CONTEXT_PREFIX = "mu" "se." 157 + _TALENT_CONTEXT_PREFIX = "talent." 156 158 157 159 158 160 def _discover_prompt_contexts() -> Dict[str, Dict[str, Any]]: ··· 198 200 return contexts 199 201 200 202 201 - def _discover_muse_contexts() -> Dict[str, Dict[str, Any]]: 202 - """Discover muse context defaults from muse/*.md config files. 203 + def _discover_talent_contexts() -> Dict[str, Dict[str, Any]]: 204 + """Discover talent context defaults from talent/*.md config files. 203 205 204 - Uses get_muse_configs() from think.muse to load all muse configurations 206 + Uses get_talent_configs() from think.talent to load all talent configurations 205 207 and converts them to context patterns with tier/label/group metadata. 
206 208 207 209 Returns 208 210 ------- 209 211 Dict[str, Dict[str, Any]] 210 212 Mapping of context patterns to {tier, label, group, type} dicts. 211 - Context patterns are: muse.system.{name} or muse.{app}.{name} 213 + Context patterns are: talent.system.{name} or talent.{app}.{name} 212 214 """ 213 - from think.muse import get_muse_configs, key_to_context 215 + from think.talent import get_talent_configs, key_to_context 214 216 215 217 contexts = {} 216 218 217 - # Load all muse configs (including disabled for completeness) 218 - all_configs = get_muse_configs(include_disabled=True) 219 + # Load all talent configs (including disabled for completeness) 220 + all_configs = get_talent_configs(include_disabled=True) 219 221 220 222 for key, config in all_configs.items(): 221 223 context = key_to_context(key) ··· 235 237 Merges: 236 238 1. Prompt contexts from _discover_prompt_contexts() 237 239 2. Category contexts from observe/describe.py CATEGORIES 238 - 3. Muse contexts from _discover_muse_contexts() 240 + 3. Talent contexts from _discover_talent_contexts() 239 241 240 242 Returns 241 243 ------- ··· 259 261 except ImportError: 260 262 pass # observe module not available 261 263 262 - # Merge muse contexts (agents + generators) 263 - muse_contexts = _discover_muse_contexts() 264 - registry.update(muse_contexts) 264 + # Merge talent contexts (agents + generators) 265 + talent_contexts = _discover_talent_contexts() 266 + registry.update(talent_contexts) 265 267 266 268 return registry 267 269 ··· 288 290 Parameters 289 291 ---------- 290 292 context 291 - Context string (e.g., "muse.system.default", "observe.describe.frame"). 293 + Context string (e.g., "talent.system.default", "observe.describe.frame"). 292 294 agent_type 293 295 Agent type ("generate" or "cogitate"). 
294 296 ··· 305 307 providers_config = journal_config.get("providers", {}) 306 308 contexts = providers_config.get("contexts", {}) 307 309 308 - # Get dynamic context registry (discovered prompts, categories, muse configs) 310 + # Get dynamic context registry (discovered prompts, categories, talent configs) 309 311 registry = get_context_registry() 310 312 311 313 # Check journal config contexts first (exact match) ··· 383 385 Parameters 384 386 ---------- 385 387 context 386 - Context string (e.g., "muse.system.default"). 388 + Context string (e.g., "talent.system.default"). 387 389 provider 388 390 Provider name ("google", "openai", "anthropic"). 389 391 agent_type ··· 421 423 Parameters 422 424 ---------- 423 425 context 424 - Context string (e.g., "observe.describe.frame", "muse.system.meetings"). 426 + Context string (e.g., "observe.describe.frame", "talent.system.meetings"). 425 427 agent_type 426 428 Agent type ("generate" or "cogitate"). 427 429 ··· 470 472 471 473 # No context match - check dynamic context registry for this context 472 474 if match_config is None: 473 - # Get dynamic context registry (discovered prompts, categories, muse configs) 475 + # Get dynamic context registry (discovered prompts, categories, talent configs) 474 476 registry = get_context_registry() 475 477 476 478 # Check for matching context default (exact match first, then glob) ··· 538 540 usage : dict 539 541 Normalized usage dict with keys from USAGE_KEYS. 540 542 context : str, optional 541 - Context string (e.g., "module.function:123" or "muse.system.default"). 543 + Context string (e.g., "module.function:123" or "talent.system.default"). 542 544 If None, auto-detects from call stack. 543 545 segment : str, optional 544 546 Segment key (e.g., "143022_300") for attribution. 
··· 761 763 return None 762 764 763 765 766 + def _normalize_legacy_context(ctx: str) -> str: 767 + """Normalize legacy token-log context strings to the talent namespace.""" 768 + if ctx.startswith(_LEGACY_CONTEXT_PREFIX): 769 + return _TALENT_CONTEXT_PREFIX + ctx[len(_LEGACY_CONTEXT_PREFIX) :] 770 + return ctx 771 + 772 + 764 773 def iter_token_log(day: str) -> Any: 765 774 """Iterate over token log entries for a given day. 766 775 ··· 790 799 if not line: 791 800 continue 792 801 try: 793 - yield json.loads(line) 802 + entry = json.loads(line) 803 + ctx = entry.get("context") 804 + if isinstance(ctx, str): 805 + entry["context"] = _normalize_legacy_context(ctx) 806 + yield entry 794 807 except json.JSONDecodeError: 795 808 continue 796 809 ··· 813 826 Filter to entries with this exact segment key. 814 827 context : str, optional 815 828 Filter to entries where context starts with this prefix. 816 - For example, "muse.system" matches "muse.system.default". 829 + For example, "talent.system" matches "talent.system.default". 817 830 818 831 Returns 819 832 ------- ··· 896 909 contents : str or List 897 910 The content to send to the model. 898 911 context : str 899 - Context string for routing and token logging (e.g., "muse.system.meetings"). 912 + Context string for routing and token logging (e.g., "talent.system.meetings"). 900 913 This is required and determines which provider/model to use. 901 914 temperature : float 902 915 Temperature for generation (default: 0.3). ··· 1145 1158 contents : str or List 1146 1159 The content to send to the model. 1147 1160 context : str 1148 - Context string for routing and token logging (e.g., "muse.system.meetings"). 1161 + Context string for routing and token logging (e.g., "talent.system.meetings"). 1149 1162 This is required and determines which provider/model to use. 1150 1163 temperature : float 1151 1164 Temperature for generation (default: 0.3).
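The `iter_token_log` change above rewrites legacy context strings as entries are read, so historical logs aggregate under the new namespace. A minimal sketch of the same normalization (constant names spelled out here; note the diff writes the legacy prefix as adjacent string literals so the project-wide rename script skips it):

```python
LEGACY_PREFIX = "mu" "se."   # adjacent literals, so a blanket rename won't rewrite it
TALENT_PREFIX = "talent."

def normalize_context(ctx: str) -> str:
    """Map legacy token-log contexts into the talent namespace at read time."""
    if ctx.startswith(LEGACY_PREFIX):
        return TALENT_PREFIX + ctx[len(LEGACY_PREFIX):]
    return ctx

# Legacy and current entries resolve to the same context after normalization;
# unrelated namespaces pass through untouched.
assert normalize_context("muse.system.meetings") == "talent.system.meetings"
assert normalize_context("talent.system.meetings") == "talent.system.meetings"
assert normalize_context("observe.describe.frame") == "observe.describe.frame"
```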
+34 -34
think/muse.py think/talent.py
··· 1 1 # SPDX-License-Identifier: AGPL-3.0-only 2 2 # Copyright (c) 2026 sol pbc 3 3 4 - """Muse agent and generator orchestration utilities. 4 + """Talent agent and generator orchestration utilities. 5 5 6 - This module provides functionality for configuring and orchestrating muse agents 7 - and generators from muse/*.md and apps/*/muse/*.md. 6 + This module provides functionality for configuring and orchestrating talent agents 7 + and generators from talent/*.md and apps/*/talent/*.md. 8 8 9 9 Key functions: 10 - - get_muse_configs(): Discover all muse configs with filtering 10 + - get_talent_configs(): Discover all talent configs with filtering 11 11 - get_agent(): Load complete agent configuration by name 12 12 - Hook loading: load_pre_hook(), load_post_hook() 13 13 ··· 31 31 # Constants 32 32 # --------------------------------------------------------------------------- 33 33 34 - MUSE_DIR = Path(__file__).parent.parent / "muse" 34 + TALENT_DIR = Path(__file__).parent.parent / "talent" 35 35 APPS_DIR = Path(__file__).parent.parent / "apps" 36 36 37 37 38 38 # --------------------------------------------------------------------------- 39 - # Muse Config Discovery 39 + # Talent Config Discovery 40 40 # --------------------------------------------------------------------------- 41 41 42 42 43 43 def key_to_context(key: str) -> str: 44 - """Convert muse config key to context pattern. 44 + """Convert talent config key to context pattern. 45 45 46 46 Parameters 47 47 ---------- 48 48 key: 49 - Muse config key in format "name" (system) or "app:name" (app). 49 + Talent config key in format "name" (system) or "app:name" (app). 50 50 51 51 Returns 52 52 ------- 53 53 str 54 - Context pattern: "muse.system.{name}" or "muse.{app}.{name}". 54 + Context pattern: "talent.system.{name}" or "talent.{app}.{name}". 
55 55 56 56 Examples 57 57 -------- 58 58 >>> key_to_context("meetings") 59 - 'muse.system.meetings' 59 + 'talent.system.meetings' 60 60 >>> key_to_context("entities:observer") 61 - 'muse.entities.observer' 61 + 'talent.entities.observer' 62 62 """ 63 63 if ":" in key: 64 64 app, name = key.split(":", 1) 65 - return f"muse.{app}.{name}" 66 - return f"muse.system.{key}" 65 + return f"talent.{app}.{name}" 66 + return f"talent.system.{key}" 67 67 68 68 69 69 def get_output_name(key: str) -> str: ··· 151 151 return day / "agents" / filename 152 152 153 153 154 - def get_muse_configs( 154 + def get_talent_configs( 155 155 *, 156 156 type: str | None = None, 157 157 schedule: str | None = None, 158 158 include_disabled: bool = False, 159 159 ) -> dict[str, dict[str, Any]]: 160 - """Load muse configs from system and app directories. 160 + """Load talent configs from system and app directories. 161 161 162 162 Unified function for loading both cogitate agents and generate prompts from 163 - muse/*.md and apps/*/muse/*.md files. Filters based on explicit type field. 163 + talent/*.md and apps/*/talent/*.md files. Filters based on explicit type field. 
164 164 165 165 Args: 166 166 type: If provided, only configs with matching type value ··· 197 197 198 198 return True 199 199 200 - # System configs from muse/ 201 - if MUSE_DIR.is_dir(): 202 - for md_path in sorted(MUSE_DIR.glob("*.md")): 200 + # System configs from talent/ 201 + if TALENT_DIR.is_dir(): 202 + for md_path in sorted(TALENT_DIR.glob("*.md")): 203 203 name = md_path.stem 204 204 info = _load_prompt_metadata(md_path) 205 205 206 206 info["source"] = "system" 207 207 configs[name] = info 208 208 209 - # App configs from apps/*/muse/ 209 + # App configs from apps/*/talent/ 210 210 apps_dir = APPS_DIR 211 211 if apps_dir.is_dir(): 212 212 for app_path in sorted(apps_dir.iterdir()): 213 213 if not app_path.is_dir() or app_path.name.startswith("_"): 214 214 continue 215 - app_muse_dir = app_path / "muse" 216 - if not app_muse_dir.is_dir(): 215 + app_talent_dir = app_path / "talent" 216 + if not app_talent_dir.is_dir(): 217 217 continue 218 218 app_name = app_path.name 219 - for md_path in sorted(app_muse_dir.glob("*.md")): 219 + for md_path in sorted(app_talent_dir.glob("*.md")): 220 220 item_name = md_path.stem 221 221 info = _load_prompt_metadata(md_path) 222 222 ··· 311 311 (agent_directory, agent_name) tuple. 
312 312 """ 313 313 if ":" in name: 314 - # App agent: "support:support" -> apps/support/muse/support 314 + # App agent: "support:support" -> apps/support/talent/support 315 315 app, agent_name = name.split(":", 1) 316 - agent_dir = Path(__file__).parent.parent / "apps" / app / "muse" 316 + agent_dir = Path(__file__).parent.parent / "apps" / app / "talent" 317 317 elif name == "unified": 318 - # Chat agent: "unified" -> muse/chat 319 - agent_dir = MUSE_DIR 318 + # Chat agent: "unified" -> talent/chat 319 + agent_dir = TALENT_DIR 320 320 agent_name = "chat" 321 321 else: 322 - # System agent: bare name -> muse/{name} 323 - agent_dir = MUSE_DIR 322 + # System agent: bare name -> talent/{name} 323 + agent_dir = TALENT_DIR 324 324 agent_name = name 325 325 return agent_dir, agent_name 326 326 ··· 426 426 ---------- 427 427 name: 428 428 Agent name to load. Can be a system agent (e.g., "unified") 429 - or an app-namespaced agent (e.g., "support:support" for apps/support/muse/support). 429 + or an app-namespaced agent (e.g., "support:support" for apps/support/talent/support). 430 430 facet: 431 431 Optional facet name to focus on. Controls $facets template variable. 432 432 analysis_day: ··· 485 485 """Resolve hook name to file path. 
486 486 487 487 Resolution: 488 - - Named: "name" -> muse/{name}.py 489 - - App-qualified: "app:name" -> apps/{app}/muse/{name}.py 488 + - Named: "name" -> talent/{name}.py 489 + - App-qualified: "app:name" -> apps/{app}/talent/{name}.py 490 490 - Explicit path: "path/to/hook.py" -> direct path 491 491 """ 492 492 if "/" in hook_name or hook_name.endswith(".py"): ··· 495 495 return project_root / hook_name 496 496 elif ":" in hook_name: 497 497 app, name = hook_name.split(":", 1) 498 - return Path(__file__).parent.parent / "apps" / app / "muse" / f"{name}.py" 498 + return Path(__file__).parent.parent / "apps" / app / "talent" / f"{name}.py" 499 499 else: 500 - return MUSE_DIR / f"{hook_name}.py" 500 + return TALENT_DIR / f"{hook_name}.py" 501 501 502 502 503 503 def _load_hook_function(config: dict, key: str, func_name: str) -> Callable | None:
+25 -25
think/muse_cli.py → think/talent_cli.py
···
1 1     # SPDX-License-Identifier: AGPL-3.0-only
2 2     # Copyright (c) 2026 sol pbc
3 3
4   -   """CLI for inspecting muse prompt configurations.
4   +   """CLI for inspecting talent prompt configurations.
5 5
6 6     Lists all system and app prompts with their frontmatter metadata,
7 7     supports filtering by schedule and source, and provides detail views.
8 8
9 9     Usage:
10  -       sol muse                          List all prompts grouped by schedule
11  -       sol muse list --schedule daily    Filter by schedule type
12  -       sol muse list --json              Output all configs as JSONL
13  -       sol muse show <name>              Show details for a specific prompt
14  -       sol muse show <name> --json       Output a single prompt as JSONL
15  -       sol muse show <name> --prompt     Show full prompt context (dry-run)
16  -       sol muse logs                     Show recent agent runs
17  -       sol muse logs <agent> -c 5        Show last 5 runs for an agent
18  -       sol muse log <id>                 Show events for an agent run
19  -       sol muse log <id> --json          Output raw JSONL events
20  -       sol muse log <id> --full          Show expanded event details
10  +       sol talent                        List all prompts grouped by schedule
11  +       sol talent list --schedule daily  Filter by schedule type
12  +       sol talent list --json            Output all configs as JSONL
13  +       sol talent show <name>            Show details for a specific prompt
14  +       sol talent show <name> --json     Output a single prompt as JSONL
15  +       sol talent show <name> --prompt   Show full prompt context (dry-run)
16  +       sol talent logs                   Show recent agent runs
17  +       sol talent logs <agent> -c 5      Show last 5 runs for an agent
18  +       sol talent log <id>               Show events for an agent run
19  +       sol talent log <id> --json        Output raw JSONL events
20  +       sol talent log <id> --full        Show expanded event details
21 21     """
22 22
23 23    from __future__ import annotations
···
35 35
36 36    import frontmatter
37 37
38  -   from think.muse import (
39  -       MUSE_DIR,
38  +   from think.talent import (
39  +       TALENT_DIR,
40 40        _load_prompt_metadata,
41  -       get_muse_configs,
41  +       get_talent_configs,
42 42    )
43 43    from think.utils import setup_cli
44 44
···
61 61        """Resolve a prompt name to its .md file path."""
62 62        if ":" in name:
63 63            app, agent_name = name.split(":", 1)
64  -           return _PROJECT_ROOT / "apps" / app / "muse" / f"{agent_name}.md"
65  -       return MUSE_DIR / f"{name}.md"
64  +           return _PROJECT_ROOT / "apps" / app / "talent" / f"{agent_name}.md"
65  +       return TALENT_DIR / f"{name}.md"
66 66
67 67
68 68    def _scan_variables(body: str) -> list[str]:
···
162 162        source: str | None = None,
163 163        include_disabled: bool = False,
164 164    ) -> dict[str, dict[str, Any]]:
165   -        """Collect all muse configs with optional filters applied."""
166   -        configs = get_muse_configs(schedule=schedule, include_disabled=True)
165   +        """Collect all talent configs with optional filters applied."""
166   +        configs = get_talent_configs(schedule=schedule, include_disabled=True)
167 167
168 168        filtered: dict[str, dict[str, Any]] = {}
169 169        for key, info in configs.items():
···
424 424        what would be sent to the LLM provider.
425 425        """
426 426        # Load prompt metadata
427   -        configs = get_muse_configs(include_disabled=True)
427   +        configs = get_talent_configs(include_disabled=True)
428 428        if name not in configs:
429 429            print(f"Prompt not found: {name}", file=sys.stderr)
430 430            sys.exit(1)
···
571 571                config["facet"] = facet
572 572            else:
573 573                # Cogitate prompt - use get_agent() to build full config with instructions
574   -                from think.muse import get_agent
574   +                from think.talent import get_agent
575 575
576 576                try:
577 577                    agent_config = get_agent(name, facet=facet)
···
789 789
790 790    def _get_output_size(request_event: dict[str, Any], journal_root: str) -> int | None:
791 791        """Get output file size in bytes from a request event, or None."""
792   -        from think.muse import get_output_path
792   +        from think.talent import get_output_path
793 793
794 794        req_output = request_event.get("output")
795 795        if not req_output:
···
920 920        rec_schedule = record.get("schedule")
921 921        if rec_schedule is None:
922 922            if _schedule_lookup is None:
923   -                all_configs = get_muse_configs(include_disabled=True)
923   +                all_configs = get_talent_configs(include_disabled=True)
924 924                _schedule_lookup = {
925 925                    key: info.get("schedule")
926 926                    for key, info in all_configs.items()
···
1118 1118
1119 1119
1120 1120    def main() -> None:
1121    -        """Entry point for sol muse."""
1122    -        parser = argparse.ArgumentParser(description="Inspect muse prompt configurations")
1121    +        """Entry point for sol talent."""
1122    +        parser = argparse.ArgumentParser(description="Inspect talent prompt configurations")
1123 1123        subparsers = parser.add_subparsers(dest="subcommand")
1124 1124
1125 1125        # --- list subcommand ---
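The PR summary notes that legacy token logs keep their old "muse.*" context strings on disk and are normalized to "talent.*" at read time. That shim is not part of this hunk; a plausible sketch, where the function name and the exact matching rule are assumptions:

```python
def normalize_context(context: str) -> str:
    # Legacy journal data predates the rename: map "muse"/"muse." prefixes
    # to "talent"/"talent." when reading, and leave everything else untouched.
    if context == "muse" or context.startswith("muse."):
        return "talent" + context[len("muse"):]
    return context
```

Anchoring the check on the trailing dot keeps unrelated strings such as `"museum"` from being rewritten.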
+2 -2
think/prompts.py
···
4 4    """Core prompt loading utilities.
5 5
6 6    This module provides the foundational prompt loading functionality used by both
7   -  standalone prompts (observe/, think/*.md) and the full muse agent orchestration.
7   +  standalone prompts (observe/, think/*.md) and the full talent agent orchestration.
8 8
9 9    Key functions:
10 10   - load_prompt(): Load and parse .md prompt files with template substitution
11 11   - PromptContent: Named tuple for prompt text, path, and metadata
12 12
13 13   For full agent/generator orchestration (scheduling, hooks, instruction composition),
14  -  use think.muse instead.
14  +  use think.talent instead.
15 15
16 16
17 17   from __future__ import annotations
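The "template substitution" that load_prompt() performs works on `$variable` placeholders (the CLI above scans prompt bodies for names like `$facets`). The stdlib mechanism for that style can be sketched as follows; the prompt body is made up, and whether the module uses `string.Template` exactly is an assumption:

```python
from string import Template

# Hypothetical prompt body using the $variable placeholders the configs rely on.
body = "Focus on $facets for $analysis_day."

# safe_substitute leaves unknown $variables in place instead of raising KeyError,
# which suits prompts rendered with only a partial set of context values.
rendered = Template(body).safe_substitute(facets="health, work", analysis_day="2026-01-15")
print(rendered)  # Focus on health, work for 2026-01-15.
```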
+3 -3
think/utils.py
···
4 4    """General utilities for solstone.
5 5
6 6    This module provides core utilities for journal access, date/segment handling,
7   -  configuration loading, and CLI setup. Muse-related utilities (prompt loading,
8   -  agent configs, etc.) have been moved to think/muse.py.
7   +  configuration loading, and CLI setup. Talent-related utilities (prompt loading,
8   +  agent configs, etc.) have been moved to think/talent.py.
9 9
10 10
11 11   from __future__ import annotations
···
104 104       Trust this function — never bypass it, cache its result, or set
105 105       _SOLSTONE_JOURNAL_OVERRIDE from application code. The env var
106 106       exists for external use only (tests, Makefile sandboxes). See
107   -       ``muse/coding/reference/environment.md``.
107   +       ``talent/coding/reference/environment.md``.
108 108
109 109       override = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE")
110 110       if override:
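The override contract spelled out in the docstring above, where the env var is honored in exactly one place and only for external callers, boils down to a simple precedence check. A sketch with a made-up default path:

```python
import os
from pathlib import Path

def journal_root(default: str = "/var/solstone/journal") -> Path:
    # _SOLSTONE_JOURNAL_OVERRIDE wins when set (external use: tests, Makefile
    # sandboxes); application code never sets it and never bypasses this function.
    override = os.environ.get("_SOLSTONE_JOURNAL_OVERRIDE")
    if override:
        return Path(override)
    return Path(default)
```

Because application code always calls through this function and never caches the result, a test can point an entire run at a sandbox journal just by exporting the variable.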