# Phoenix VCS — Data Model & Taxonomy

**Version:** 1.0
**Status:** Reference document for research review
**Audience:** Research team, systems architects, PL/SE researchers

---

## 1. What Phoenix Is

Phoenix is a **causal compiler for intent**. It transforms human-written specification documents into generated code through a deterministic, content-addressed pipeline where every transformation is traceable.

The core thesis: version control should operate on **intent and causality**, not file diffs. Changing one sentence in a spec should invalidate only the dependent subtree of generated code — not the entire repository.

Phoenix is not "AI that writes code." It is a system that maintains a **provenance graph** from English sentences to TypeScript files, with formal policies governing trust, drift, and evidence at every stage.

---

## 2. The Pipeline (Five Stages)

Every project flows through five transformation stages. Each stage produces content-addressed nodes linked by provenance edges to the stages before and after it.

```
┌──────────┐     ┌──────────┐     ┌──────────────┐     ┌────────────────┐     ┌──────────────┐
│   Spec   │     │          │     │  Canonical   │     │ Implementation │     │  Generated   │
│  Files   │────▶│ Clauses  │────▶│    Nodes     │────▶│     Units      │────▶│    Files     │
│  (.md)   │     │          │     │              │     │                │     │    (.ts)     │
└──────────┘     └──────────┘     └──────────────┘     └────────────────┘     └──────────────┘
                                       ▲   │
                                       └───┘
                                 cross-references

     ingest        canonicalize           plan                regen
```

| Stage | Count (TaskFlow example) | Description |
|-------|--------------------------|-------------|
| Spec Files | 3 | Markdown documents written by humans |
| Clauses | 14 | Atomic text blocks extracted from specs |
| Canonical Nodes | 54 | Structured requirements, constraints, definitions |
| Implementation Units | 11 | Compilation boundaries mapping requirements → code |
| Generated Files | 11 | TypeScript source files |

The pipeline produces **283 provenance edges** for the TaskFlow example — every connection from spec sentence to generated file is recorded and queryable.

---

## 3. Stage 1: Spec Files

**What:** Markdown documents written by humans. These are the source of truth.

**Example:** `spec/tasks.md`, `spec/analytics.md`, `spec/web-dashboard.md`

Spec files are not parsed by Phoenix beyond clause extraction. They are the raw input. Phoenix never modifies spec files.

---

## 4. Stage 2: Clauses

**What:** The atomic unit of specification. Every spec document is decomposed into an ordered list of clauses — contiguous blocks of text that express one or more requirements.

```typescript
interface Clause {
  clause_id: string;            // SHA-256(doc_id + section_path + normalized_text)
  source_doc_id: string;        // e.g. "spec/tasks.md"
  source_line_range: [number, number]; // [3, 10] — 1-indexed, inclusive
  raw_text: string;             // Original text as written
  normalized_text: string;      // Whitespace-normalized for stable hashing
  section_path: string[];       // Heading hierarchy: ["Task Lifecycle", "Status Transitions"]
  clause_semhash: string;       // SHA-256(normalized_text) — pure content identity
  context_semhash_cold: string; // SHA-256(text + section + neighbor hashes)
}
```

### Identity Model

Clauses are **content-addressed**: the ID is derived from the content itself. If you change the text, the clause gets a new ID. If you revert the text, it gets the original ID back. This is the foundation of Phoenix's selective invalidation — identity tracks meaning, not location.

### Two Hash Layers

| Hash | What it captures | Use |
|------|-----------------|-----|
| `clause_semhash` | Pure content (the words) | Detect textual changes |
| `context_semhash_cold` | Content + structural position + neighbors | Detect contextual shifts (same words, different meaning due to surrounding changes) |

The "cold" suffix indicates this hash is computed without canonical graph context. A "warm" pass (after canonicalization) can incorporate graph-level context for higher fidelity, but the cold hash is always available as a baseline.

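As a concrete sketch of the two layers (the helper names and exact hash inputs here are illustrative, not Phoenix's actual code), the cold hashes could be computed like this:

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string): string =>
  createHash("sha256").update(s).digest("hex");

// Whitespace normalization keeps the content hash stable across reflows.
const normalize = (text: string): string => text.trim().replace(/\s+/g, " ");

// Layer 1: pure content identity. Only the words matter.
function clauseSemhash(rawText: string): string {
  return sha256(normalize(rawText));
}

// Layer 2: content plus structural position plus neighbors. The same
// sentence under a different heading, or between different neighbors,
// yields a different hash even though clauseSemhash is unchanged.
function contextSemhashCold(
  rawText: string,
  sectionPath: string[],
  prevSemhash: string,
  nextSemhash: string,
): string {
  return sha256(
    [normalize(rawText), sectionPath.join("/"), prevSemhash, nextSemhash].join("\u0000"),
  );
}
```

Reformatting whitespace changes neither hash; moving the clause under a new heading changes only the context hash, which is exactly the "same words, different meaning" signal the change classifier consumes.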
### Clause Diffing

When a spec file changes, Phoenix computes a clause-level diff:

| Diff Type | Meaning |
|-----------|---------|
| `ADDED` | New clause appeared |
| `REMOVED` | Clause no longer present |
| `MODIFIED` | Same position, different content |
| `MOVED` | Same content, different section path |
| `UNCHANGED` | Identical |

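The table implies a small decision procedure once old and new clauses have been paired up. A sketch (the types are hypothetical, and the real differ also has to solve the pairing problem, which is not shown):

```typescript
type DiffType = "ADDED" | "REMOVED" | "MODIFIED" | "MOVED" | "UNCHANGED";

interface ClauseRef {
  semhash: string;     // content identity
  sectionPath: string; // joined heading hierarchy
}

// Classify one old/new pairing; null on either side means the clause
// has no counterpart in the other version.
function diffType(before: ClauseRef | null, after: ClauseRef | null): DiffType {
  if (!before) return "ADDED";
  if (!after) return "REMOVED";
  if (before.semhash === after.semhash) {
    return before.sectionPath === after.sectionPath ? "UNCHANGED" : "MOVED";
  }
  // Same slot, different content.
  return "MODIFIED";
}
```
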
---

## 5. Stage 3: Canonical Nodes

**What:** Structured, typed requirements extracted from clauses. This is where raw English becomes a formal graph.

```typescript
interface CanonicalNode {
  canon_id: string;            // Content-addressed
  type: CanonicalType;         // REQUIREMENT | CONSTRAINT | INVARIANT | DEFINITION
  statement: string;           // Normalized canonical statement
  source_clause_ids: string[]; // Provenance: which clauses produced this node
  linked_canon_ids: string[];  // Cross-references to related nodes
  tags: string[];              // Extracted keywords for linking and search
}
```

### Node Types

| Type | Meaning | Example |
|------|---------|---------|
| **REQUIREMENT** | Something the system must do | "Tasks must support status transitions: open → in_progress → review → done" |
| **CONSTRAINT** | A limitation or boundary | "Task titles must not exceed 200 characters" |
| **INVARIANT** | A property that must always hold | "Every task must have exactly one assignee at all times" |
| **DEFINITION** | A term or concept definition | "A 'task' is a unit of work with a title, description, status, and assignee" |

### Canonicalization Methods

Phoenix supports two canonicalization paths:

1. **Rule-based** (default): Pattern matching, keyword extraction, section-aware heuristics. Deterministic, fast, zero external dependencies.

2. **LLM-enhanced** (optional): Sends clause text to an LLM (Anthropic Claude or OpenAI) for structured JSON extraction. Falls back to rule-based if the LLM is unavailable or returns invalid results. The LLM path typically extracts finer-grained nodes and classifies types more accurately.

The canonicalization pipeline is **versioned** — the model, prompt pack, and extraction rules all have explicit version identifiers:

```typescript
interface PipelineConfig {
  pipeline_id: string;
  model_id: string;
  promptpack_version: string;
  extraction_rules_version: string;
  diff_policy_version: string;
}
```

### The Canonical Graph

Canonical nodes form a graph through `linked_canon_ids`. These cross-references capture semantic relationships: a CONSTRAINT may reference the REQUIREMENT it constrains, a DEFINITION may be linked to every REQUIREMENT that uses the defined term.

This graph is the **core data structure** of Phoenix — it is what enables selective invalidation. When a clause changes, only the canonical nodes derived from that clause are invalidated, and only the implementation units that depend on those canonical nodes need regeneration.

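To make the invalidation mechanics concrete, here is a minimal sketch of the forward walk. The edge maps and names are invented stand-ins for Phoenix's stored graphs:

```typescript
interface ProvenanceIndex {
  clauseToCanon: Map<string, string[]>; // clause_id → canon_ids
  canonToCanon: Map<string, string[]>;  // linked_canon_ids
  canonToIU: Map<string, string[]>;     // canon_id → iu_ids
}

// Forward invalidation: from a changed clause, collect the affected canon
// nodes (optionally including linked neighbors, as a contextual change
// would) and the implementation units that must be regenerated.
function invalidate(idx: ProvenanceIndex, changedClause: string, includeNeighbors: boolean) {
  const canon = new Set(idx.clauseToCanon.get(changedClause) ?? []);
  if (includeNeighbors) {
    for (const c of [...canon]) {
      for (const n of idx.canonToCanon.get(c) ?? []) canon.add(n);
    }
  }
  const ius = new Set<string>();
  for (const c of canon) for (const iu of idx.canonToIU.get(c) ?? []) ius.add(iu);
  return { canon: [...canon], ius: [...ius] };
}
```

Everything outside the returned sets is untouched, which is the whole point: one clause edit regenerates a subtree, not the repository.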
---

## 6. Stage 4: Implementation Units (IUs)

**What:** Stable compilation boundaries that map groups of canonical requirements to generated code modules. This is where the "what" (requirements) meets the "how" (code structure).

```typescript
interface ImplementationUnit {
  iu_id: string;                   // Content-addressed
  kind: 'module' | 'function';     // Granularity level
  name: string;                    // Human-readable: "Task Lifecycle"
  risk_tier: RiskTier;             // low | medium | high | critical
  contract: IUContract;            // What this unit does
  source_canon_ids: string[];      // Which requirements this implements
  dependencies: string[];          // Other IU IDs this depends on
  boundary_policy: BoundaryPolicy; // What this unit is allowed to touch
  enforcement: EnforcementConfig;  // How violations are treated
  evidence_policy: EvidencePolicy; // What proof is required
  output_files: string[];          // Generated file paths
}
```

### Contracts

Every IU has an explicit contract describing its purpose, inputs, outputs, and invariants:

```typescript
interface IUContract {
  description: string;  // "Manages task status transitions and lifecycle events"
  inputs: string[];     // ["taskId: string", "newStatus: TaskStatus"]
  outputs: string[];    // ["TaskTransitionResult"]
  invariants: string[]; // ["Status transitions must follow the allowed graph"]
}
```

### Risk Tiers

Risk tiers determine how much evidence is required before Phoenix considers an IU trustworthy:

| Tier | Evidence Required | Typical Use |
|------|------------------|-------------|
| **low** | typecheck, lint, boundary validation | Simple data types, utilities |
| **medium** | + unit tests | Business logic, CRUD |
| **high** | + property tests, threat note, static analysis | Auth, payments, data integrity |
| **critical** | + human signoff or formal verification | Security boundaries, compliance |

### Boundary Policies

Each IU declares what it is and isn't allowed to depend on:

```typescript
interface BoundaryPolicy {
  code: {
    allowed_ius: string[];      // IUs this can import from
    allowed_packages: string[]; // npm packages allowed
    forbidden_ius: string[];    // Explicit denials
    forbidden_packages: string[];
    forbidden_paths: string[];  // File system paths forbidden
  };
  side_channels: {
    databases: string[];     // DB connections allowed
    queues: string[];        // Message queues
    caches: string[];        // Cache systems
    config: string[];        // Config sources
    external_apis: string[]; // External HTTP APIs
    files: string[];         // File system access
  };
}
```

This is **architectural enforcement as data**. After code generation, Phoenix validates that the generated code respects its declared boundaries. Violations become diagnostics in `phoenix status`.

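As an illustration of what that validation involves, a minimal allow/deny check over package imports might look like this. This is a sketch only; real enforcement also covers IU-to-IU imports, forbidden paths, and side channels:

```typescript
interface CodeBoundary {
  allowed_packages: string[];
  forbidden_packages: string[];
}

// Given the import specifiers found in an IU's generated files, report
// package-level violations. Scoped packages ("@scope/name") keep their
// first two path segments; "zod/locales" resolves to the package "zod".
function checkImports(boundary: CodeBoundary, imports: string[]): string[] {
  const violations: string[] = [];
  for (const spec of imports) {
    const pkg = spec.startsWith("@")
      ? spec.split("/").slice(0, 2).join("/")
      : spec.split("/")[0];
    if (boundary.forbidden_packages.includes(pkg)) {
      violations.push(`forbidden package: ${pkg}`);
    } else if (!boundary.allowed_packages.includes(pkg)) {
      violations.push(`package not in allow-list: ${pkg}`);
    }
  }
  return violations;
}
```

Note the deny-by-default stance: a package that is neither allowed nor forbidden is still a violation, which matches "enforcement as data" rather than best-effort linting.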
---

## 7. Stage 5: Generated Files & Manifest

**What:** The actual TypeScript files produced by the regeneration engine, tracked by a manifest for drift detection.

```typescript
interface GeneratedManifest {
  iu_manifests: Record<string, IUManifest>;
  generated_at: string;
}

interface IUManifest {
  iu_id: string;
  iu_name: string;
  files: Record<string, FileManifestEntry>; // path → {content_hash, size}
  regen_metadata: RegenMetadata;
}

interface RegenMetadata {
  model_id: string;          // Which LLM generated the code
  promptpack_hash: string;   // Hash of the prompt template used
  toolchain_version: string; // Phoenix version
  generated_at: string;      // Timestamp
}
```

The manifest records the **content hash** of every generated file at generation time. This is the basis for drift detection.

---

## 8. Cross-Cutting Systems

These systems operate across the pipeline rather than belonging to a single stage.

### 8.1 Change Classification (A/B/C/D)

When a spec changes, Phoenix classifies every clause-level change:

| Class | Meaning | Action |
|-------|---------|--------|
| **A** | Trivial (whitespace, formatting) | No invalidation |
| **B** | Local semantic change | Invalidate dependent canon nodes |
| **C** | Contextual semantic shift (same words, different meaning due to surrounding changes) | Invalidate dependent canon nodes + neighbors |
| **D** | Uncertain — classifier can't determine impact | Escalate to LLM or human |

Classification uses multiple signals, not a single threshold:

```typescript
interface ClassificationSignals {
  norm_diff: number;                // 0–1 edit distance on normalized text
  semhash_delta: boolean;           // Did the content hash change?
  context_cold_delta: boolean;      // Did the context hash change?
  term_ref_delta: number;           // 0–1 Jaccard distance on extracted terms
  section_structure_delta: boolean; // Did the heading hierarchy change?
  canon_impact: number;             // How many canon nodes are affected?
}
```

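To show how such signals might combine, here is an illustrative decision procedure. The thresholds are invented for this sketch and are not Phoenix's actual rule set; the property worth noticing is the final fall-through to D when no rule fires:

```typescript
type ChangeClass = "A" | "B" | "C" | "D";

// Re-declared here so the sketch is self-contained.
interface ClassificationSignals {
  norm_diff: number;
  semhash_delta: boolean;
  context_cold_delta: boolean;
  term_ref_delta: number;
  section_structure_delta: boolean;
  canon_impact: number;
}

function classify(s: ClassificationSignals): ChangeClass {
  // Content untouched but context shifted: the "same words, different
  // meaning" case.
  if (!s.semhash_delta && (s.context_cold_delta || s.section_structure_delta)) return "C";
  // Content untouched and context untouched: trivial.
  if (!s.semhash_delta) return "A";
  // Tiny textual edit with no term changes: formatting-level.
  if (s.norm_diff < 0.05 && s.term_ref_delta === 0) return "A";
  // Clear local edit with a bounded blast radius.
  if (s.norm_diff < 0.5 && s.canon_impact <= 3) return "B";
  // Anything else: the classifier refuses to guess.
  return "D";
}
```
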
### D-Rate: The Trust Metric

The **D-rate** is the percentage of changes classified as D (uncertain) in a rolling window. It is a first-class system health metric:

| Level | D-Rate | Meaning |
|-------|--------|---------|
| TARGET | ≤ 5% | System understands your specs well |
| ACCEPTABLE | ≤ 10% | Normal operation |
| WARNING | ≤ 15% | Classifier needs tuning |
| ALARM | > 15% | System cannot reliably interpret changes — trust degrades |

**This is the key insight:** if Phoenix can't classify changes, it can't selectively invalidate. D-rate measures whether the system's understanding of your specs is keeping up with reality.

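The level thresholds translate directly into code, for example:

```typescript
type DRateLevel = "TARGET" | "ACCEPTABLE" | "WARNING" | "ALARM";

// Map a rolling-window D-rate (fraction of changes classified D)
// onto the health levels from the table above.
function dRateLevel(dCount: number, totalCount: number): DRateLevel {
  const rate = totalCount === 0 ? 0 : dCount / totalCount;
  if (rate <= 0.05) return "TARGET";
  if (rate <= 0.10) return "ACCEPTABLE";
  if (rate <= 0.15) return "WARNING";
  return "ALARM";
}
```
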
### LLM Escalation for D-Class

When a change is classified as D, Phoenix can optionally escalate to an LLM:

1. Send the before/after clause text and classification signals to Claude or GPT-4
2. LLM returns a reclassification (A, B, or C) with reasoning
3. If the LLM is confident, the D is resolved; if not, it remains D

This reduces D-rate without sacrificing correctness — the LLM is a second opinion, not an override.

### 8.2 Drift Detection

After code generation, the manifest records content hashes. On every `phoenix status`, Phoenix compares the actual files on disk to the manifest:

| Status | Meaning |
|--------|---------|
| **CLEAN** | File matches manifest hash exactly |
| **DRIFTED** | File has been modified since generation (no waiver) |
| **WAIVED** | File has been modified, but a waiver exists |
| **MISSING** | Manifest entry exists, but file is gone from disk |
| **UNTRACKED** | File exists on disk but isn't in the manifest |

**Drifted files are errors.** If someone hand-edits a generated file without labeling the change, `phoenix status` blocks further operations until the drift is resolved.

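The comparison itself is a small pure function. A sketch, with waiver lookup simplified to a set of waived paths:

```typescript
import { createHash } from "node:crypto";

type DriftStatus = "CLEAN" | "DRIFTED" | "WAIVED" | "MISSING" | "UNTRACKED";

// Compare one path's on-disk content against the manifest. `diskContent`
// is null when the file is gone; `manifestHash` is undefined when the
// path was never generated.
function driftStatus(
  path: string,
  diskContent: string | null,
  manifestHash: string | undefined,
  waivedPaths: Set<string>,
): DriftStatus {
  if (manifestHash === undefined) return "UNTRACKED";
  if (diskContent === null) return "MISSING";
  const actual = createHash("sha256").update(diskContent).digest("hex");
  if (actual === manifestHash) return "CLEAN";
  return waivedPaths.has(path) ? "WAIVED" : "DRIFTED";
}
```
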
### Drift Waivers

Manual edits to generated code must be labeled:

| Waiver Kind | Meaning |
|-------------|---------|
| `promote_to_requirement` | This edit should become a spec requirement (feeds back into the pipeline) |
| `waiver` | Acknowledged deviation, signed by a responsible party |
| `temporary_patch` | Hotfix with an expiration date |

### 8.3 Evidence & Policy

Evidence records prove that an IU meets its risk-tier requirements:

```typescript
interface EvidenceRecord {
  evidence_id: string;
  kind: EvidenceKind;     // typecheck | lint | boundary_validation | unit_tests |
                          // property_tests | static_analysis | threat_note | human_signoff
  status: EvidenceStatus; // PASS | FAIL | PENDING | SKIPPED
  iu_id: string;
  canon_ids: string[];    // Which requirements this evidence covers
  artifact_hash?: string; // Hash of the code version this was run against
  timestamp: string;
}
```

Evidence binds to **both** the IU and the specific canon nodes it covers, and to the artifact hash of the generated code it was run against. This means evidence is invalidated if the code changes — you can't pass tests on version N and claim they apply to version N+1.

### Policy Evaluation

```typescript
interface PolicyEvaluation {
  iu_id: string;
  risk_tier: string;
  required: string[];  // What evidence kinds are needed
  satisfied: string[]; // What's been provided and passed
  missing: string[];   // What hasn't been provided yet
  failed: string[];    // What was provided but failed
  verdict: 'PASS' | 'FAIL' | 'INCOMPLETE';
}
```

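A minimal evaluation is a set comparison between the tier's required evidence kinds and what has passed or failed. In this sketch the tier requirements are hard-coded from the Risk Tiers table, treating critical as requiring human signoff and ignoring the formal-verification alternative:

```typescript
// Cumulative requirements per tier, per the Risk Tiers table.
const TIER_REQUIREMENTS: Record<string, string[]> = {
  low: ["typecheck", "lint", "boundary_validation"],
  medium: ["typecheck", "lint", "boundary_validation", "unit_tests"],
  high: ["typecheck", "lint", "boundary_validation", "unit_tests",
         "property_tests", "threat_note", "static_analysis"],
  critical: ["typecheck", "lint", "boundary_validation", "unit_tests",
             "property_tests", "threat_note", "static_analysis", "human_signoff"],
};

function evaluatePolicy(
  riskTier: string,
  evidence: { kind: string; status: "PASS" | "FAIL" | "PENDING" | "SKIPPED" }[],
) {
  const required = TIER_REQUIREMENTS[riskTier] ?? [];
  const passed = new Set(evidence.filter((e) => e.status === "PASS").map((e) => e.kind));
  const failedSet = new Set(evidence.filter((e) => e.status === "FAIL").map((e) => e.kind));
  const satisfied = required.filter((k) => passed.has(k));
  const failed = required.filter((k) => failedSet.has(k));
  const missing = required.filter((k) => !passed.has(k) && !failedSet.has(k));
  // Any failure dominates; otherwise gaps mean the evaluation isn't done.
  const verdict = failed.length > 0 ? "FAIL" : missing.length > 0 ? "INCOMPLETE" : "PASS";
  return { required, satisfied, missing, failed, verdict };
}
```
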
### 8.4 Cascading Failures

If evidence fails for one IU, Phoenix propagates the failure through the dependency graph:

```typescript
interface CascadeEvent {
  source_iu_id: string;      // The IU that failed
  failure_kind: string;      // What failed (e.g. "unit_tests")
  affected_iu_ids: string[]; // All downstream IUs
  actions: CascadeAction[];  // What Phoenix will do about it
}

interface CascadeAction {
  iu_id: string;
  action: string; // "re-run typecheck", "re-run boundary checks", etc.
  reason: string; // "Depends on AuthIU which failed unit tests"
}
```

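Computing `affected_iu_ids` is a breadth-first walk over the reverse dependency edges, sketched below. The map of dependents is assumed to be derivable by inverting each IU's `dependencies` list:

```typescript
// Walk the reverse dependency graph: given the failing IU and a map of
// iu_id → ids of IUs that depend on it, collect every transitively
// affected IU, nearest first.
function cascade(sourceIU: string, dependents: Map<string, string[]>): string[] {
  const affected: string[] = [];
  const seen = new Set<string>([sourceIU]);
  const queue = [sourceIU];
  while (queue.length > 0) {
    const iu = queue.shift()!;
    for (const d of dependents.get(iu) ?? []) {
      if (!seen.has(d)) {
        seen.add(d);
        affected.push(d);
        queue.push(d);
      }
    }
  }
  return affected;
}
```
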
### 8.5 Bootstrap State Machine

Phoenix tracks its own confidence level:

```
BOOTSTRAP_COLD ──▶ BOOTSTRAP_WARMING ──▶ STEADY_STATE
```

| State | Meaning | D-Rate Handling |
|-------|---------|-----------------|
| `BOOTSTRAP_COLD` | First ingestion, no canonical graph yet | D-rate alarms suppressed |
| `BOOTSTRAP_WARMING` | Canonical graph exists, running warm hashing pass | D-rate severity downgraded |
| `STEADY_STATE` | System is calibrated and operational | Full enforcement |

This is explicit: cold start exists, and Phoenix names it rather than hiding it.

### 8.6 Diagnostics

Every issue Phoenix reports follows a uniform schema:

```typescript
interface Diagnostic {
  severity: 'error' | 'warning' | 'info';
  category: 'dependency_violation' | 'side_channel_violation' | 'drift' |
            'boundary' | 'd-rate' | 'canon' | 'evidence' | 'regen';
  subject: string; // What has the problem
  message: string; // Human-readable explanation
  iu_id?: string;
  recommended_actions: string[];
}
```

`phoenix status` groups diagnostics by severity and presents them as a trust dashboard. This is the **primary UX surface** — if `phoenix status` is trusted, Phoenix works. If it's noisy or wrong, the system dies.

---

## 9. The Provenance Graph

All five pipeline stages are connected by typed, directed edges:

| Edge Type | From | To | Cardinality |
|-----------|------|-----|-------------|
| `spec→clause` | Spec File | Clause | 1:N |
| `clause→canon` | Clause | Canonical Node | N:M |
| `canon→canon` | Canonical Node | Canonical Node | N:M |
| `canon→iu` | Canonical Node | Implementation Unit | N:M |
| `iu→file` | Implementation Unit | Generated File | 1:N |

The provenance graph enables two critical queries:

1. **Forward:** "If I change this spec sentence, what generated files are affected?" (selective invalidation)
2. **Backward:** "Why does this generated file exist? What spec sentences caused it?" (explainability)

Every edge is stored explicitly. There is no inference — if a connection exists, it was recorded at the transformation step that created it.

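The backward query can be sketched as a reverse walk over those edge types. The in-memory maps here are hypothetical stand-ins for the stored graphs:

```typescript
// Backward provenance: why does this generated file exist? Walk edges in
// reverse: file → IU → canon nodes → clauses → spec documents.
interface ReverseEdges {
  fileToIU: Map<string, string>;
  iuToCanon: Map<string, string[]>;
  canonToClauses: Map<string, string[]>;
  clauseToDoc: Map<string, string>;
}

function explain(file: string, e: ReverseEdges) {
  const iu = e.fileToIU.get(file);
  if (!iu) return null; // untracked file: no provenance to explain
  const canon = e.iuToCanon.get(iu) ?? [];
  const clauses = [...new Set(canon.flatMap((c) => e.canonToClauses.get(c) ?? []))];
  const docs = [...new Set(
    clauses.map((cl) => e.clauseToDoc.get(cl)).filter((d): d is string => d !== undefined),
  )];
  return { iu, canon, clauses, docs };
}
```
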
---

## 10. Content Addressing & Identity

All primary entities use content-addressed IDs:

| Entity | ID Formula |
|--------|-----------|
| Clause | `SHA-256(source_doc_id + section_path + normalized_text)` |
| Canonical Node | `SHA-256(statement + type + source_clause_ids)` |
| Implementation Unit | `SHA-256(kind + contract + boundary_policy)` |
| Generated File | `SHA-256(file_content)` |

This means:

- **Same content = same ID**, always, across time and machines
- **Changed content = new ID**, which propagates invalidation through the graph
- **Reverting content = original ID restored**, which is a no-op for the pipeline
- **No mutable state** — you can't "update" a node, you replace it with a new content-addressed node

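A sketch of a helper that could implement these ID formulas (the name and serialization are illustrative). One detail worth making explicit: the parts must be joined with a separator, because plain concatenation would let `("ab", "c")` and `("a", "bc")` collide:

```typescript
import { createHash } from "node:crypto";

// An entity's ID is the hash of its canonical serialization, so
// "updating" a node really means replacing it with a new node.
function contentId(parts: string[]): string {
  return createHash("sha256").update(parts.join("\u0000")).digest("hex");
}
```

Determinism is what makes the properties above hold: hashing the same parts on any machine, at any time, yields the same ID, and reverting an edit literally reconstructs the original node.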
---

## 11. Storage & Compaction

### Storage Tiers

| Tier | Contents | Retention |
|------|----------|-----------|
| **Hot** | Full graph (last 30 days default) | Active working set |
| **Ancestry** | Node headers + provenance edges + approvals | Forever |
| **Cold** | Heavy blobs (full node bodies, old generations) | Archival |

### Compaction Rules

Compaction **never deletes**:
- Node headers (identity + type + provenance pointers)
- Provenance edges
- Approvals and signatures

Compaction is triggered by:
- Size threshold exceeded
- Pipeline upgrade accepted
- Time-based fallback

---

## 12. Shadow Pipelines (Upgrade Safety)

When upgrading the canonicalization model (e.g., new LLM, new prompt pack), Phoenix runs old and new pipelines in parallel and computes a diff:

```typescript
interface ShadowDiffMetrics {
  node_change_pct: number;     // How many canon nodes changed
  edge_change_pct: number;     // How many edges changed
  risk_escalations: number;    // How many IUs got riskier
  orphan_nodes: number;        // Canon nodes with no clause provenance
  out_of_scope_growth: number; // New nodes that don't map to existing specs
  semantic_stmt_drift: number; // How much statement text changed
}
```

Classification:

| Result | Criteria | Action |
|--------|----------|--------|
| **SAFE** | ≤3% node change, no orphans, no risk escalations | Auto-accept |
| **COMPACTION_EVENT** | ≤25% node change, no orphans, limited escalations | Accept with compaction record |
| **REJECT** | Orphans exist, excessive churn, or large semantic drift | Block upgrade |

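The classification table reads as a guard chain. In this sketch the "limited escalations" bound is an assumption (the table doesn't pin a number), and the churn/drift checks are reduced to the node-change percentage:

```typescript
interface ShadowDiffMetrics {
  node_change_pct: number;
  risk_escalations: number;
  orphan_nodes: number;
}

type ShadowVerdict = "SAFE" | "COMPACTION_EVENT" | "REJECT";

function classifyShadowDiff(m: ShadowDiffMetrics): ShadowVerdict {
  // Orphans mean the new pipeline invented nodes with no clause
  // provenance: always a rejection.
  if (m.orphan_nodes > 0) return "REJECT";
  if (m.node_change_pct <= 3 && m.risk_escalations === 0) return "SAFE";
  if (m.node_change_pct <= 25 && m.risk_escalations <= 2) return "COMPACTION_EVENT";
  return "REJECT";
}
```
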
---

## 13. Bot Interface (Freeq)

Phoenix exposes three bots for programmatic and conversational interaction:

| Bot | Role |
|-----|------|
| **SpecBot** | Ingest and manage spec documents |
| **ImplBot** | Regenerate code, manage IUs |
| **PolicyBot** | Query status, evidence, policy evaluations |

Mutating commands require confirmation:

```
SpecBot: ingest spec/auth.md
→ "Will extract clauses from spec/auth.md. Confirm? [ok / phx confirm abc123]"
ok
→ "Ingested 5 clauses from spec/auth.md"
```

Read-only commands execute immediately. No fuzzy NLP — command grammar is explicit and documented.

---

## 14. The Full Entity-Relationship Diagram

```
┌─────────────┐
│  Spec File  │ ── path, clause_count
└──────┬──────┘
       │ 1:N
       ▼
┌─────────────┐      ┌────────────────────┐
│   Clause    │─────▶│     ClauseDiff     │
│             │      │  (ADDED/REMOVED/   │
│  clause_id  │      │   MODIFIED/MOVED/  │
│  semhash    │      │    UNCHANGED)      │
│ context_hash│      └─────────┬──────────┘
│ section_path│                │
│ line_range  │                ▼
└──────┬──────┘      ┌─────────────────────┐
       │ N:M         │ChangeClassification │
       ▼             │      (A/B/C/D)      │
┌─────────────┐      │ signals, confidence │
│  Canonical  │      │   llm_resolved?     │
│    Node     │      └─────────┬───────────┘
│             │                │
│  canon_id   │                ▼
│ type (RCID) │      ┌────────────────────┐
│  statement  │      │    DRateStatus     │
│  tags       │◀────▶│ rate, level, window│
│  linked_ids │      └────────────────────┘
└──────┬──────┘
       │ N:M
       ▼
┌─────────────┐      ┌─────────────────────┐
│     IU      │─────▶│   BoundaryPolicy    │
│             │      │  allowed/forbidden  │
│    iu_id    │      │ code + side_channels│
│    name     │      └─────────────────────┘
│    kind     │
│  risk_tier  │      ┌────────────────────┐
│  contract   │─────▶│   EvidenceRecord   │
│dependencies │      │    kind, status    │
│output_files │      │   artifact_hash    │
└──────┬──────┘      └─────────┬──────────┘
       │ 1:N                   │
       ▼                       ▼
┌─────────────┐      ┌────────────────────┐
│  Generated  │      │  PolicyEvaluation  │
│    File     │      │ required/satisfied │
│             │      │   missing/failed   │
│    path     │      │      verdict       │
│ content_hash│      └────────────────────┘
│    size     │
│drift_status │      ┌────────────────────┐
│  waiver?    │      │     Diagnostic     │
└─────────────┘      │ severity, category │
                     │  subject, message  │
                     │recommended_actions │
                     └────────────────────┘
```

---

## 15. Design Principles

| Principle | Implementation |
|-----------|---------------|
| **Trust > cleverness** | `phoenix status` must be explainable and correct — conservative by default |
| **Content-addressed identity** | Same content = same ID, always. Identity tracks meaning, not location |
| **Provenance is never lost** | Every edge is explicit and stored. Compaction preserves provenance |
| **Risk-proportional enforcement** | Low-risk IUs need a typecheck; critical IUs need human signoff |
| **Cold start is named, not hidden** | Bootstrap state machine makes system confidence explicit |
| **Selective invalidation** | One spec change → only the dependent subtree is invalidated |
| **Drift is an error** | Unlabeled manual edits to generated code block the pipeline |
| **D-rate is health** | If the system can't classify changes, it can't selectively invalidate |
| **Boundaries are data** | Architectural constraints are declared, enforced, and versioned |
| **Determinism where possible, LLM where needed** | Rule-based by default, LLM for canonicalization and D-class resolution |

---

## 16. Open Research Questions

1. **Semantic hash fidelity:** How well do two-pass hashes (cold + warm) capture meaning stability vs. structural changes? What's the false positive/negative rate for contextual shifts?

2. **D-rate dynamics:** Does D-rate converge naturally as a project matures, or does it require active classifier tuning? What's the relationship between spec writing style and D-rate?

3. **Canonicalization stability:** When using LLM-enhanced canonicalization, how stable are the extracted nodes across model versions? What shadow pipeline rejection rates should we expect?

4. **Boundary policy expressiveness:** Is the current boundary schema sufficient for real-world microservice architectures? What patterns require extension?

5. **Evidence binding granularity:** Evidence binds to IU + canon_ids + artifact_hash. Is this the right granularity, or do we need finer-grained binding (e.g., function-level)?

6. **Compaction safety:** Can we prove that compaction preserves all queries that matter? What's the formal definition of "lossless" for provenance graphs?

7. **Scale characteristics:** How does the provenance graph grow relative to spec size? At what point does the canonical graph need partitioning?

---

## Appendix A: CLI Commands

| Command | Effect |
|---------|--------|
| `phoenix init` | Initialize `.phoenix/` directory |
| `phoenix bootstrap` | Run full cold → warm → steady pipeline |
| `phoenix ingest <file>` | Spec → Clauses |
| `phoenix canonicalize` | Clauses → Canonical Nodes |
| `phoenix plan` | Canonical Nodes → Implementation Units |
| `phoenix regen --iu=<Name>` | IU → Generated Files |
| `phoenix status` | Drift detection + diagnostics dashboard |
| `phoenix diff <file>` | Clause-level diff with A/B/C/D classification |
| `phoenix inspect` | Interactive provenance visualization (web UI) |
| `phoenix graph` | Provenance graph summary |

---

## Appendix B: Directory Layout

```
project/
├── spec/                        # Human-written specifications
│   ├── tasks.md
│   ├── analytics.md
│   └── web-dashboard.md
├── src/generated/               # Phoenix-generated code (do not hand-edit)
│   ├── tasks/
│   │   ├── task-lifecycle.ts
│   │   └── assignment.ts
│   ├── analytics/
│   │   └── metrics.ts
│   └── web-dashboard/
│       ├── dashboard-page.ts
│       └── server.ts
└── .phoenix/                    # Phoenix metadata (content-addressed store)
    ├── store/objects/           # All graph nodes as JSON
    ├── graphs/
    │   ├── spec.json            # Clause index
    │   ├── canonical.json       # Canon graph
    │   ├── implementation.json  # IU graph
    │   └── evidence.json        # Evidence records
    ├── manifests/
    │   └── generated_manifest.json
    └── state.json               # Bootstrap state + pipeline config
```

---

*Document generated from Phoenix VCS v0.1.0 codebase. See PRD.md for the full product requirements and ARCHITECTURE.md for system layer details.*