Rust and WASM did-method-plc tools and structures

feature: full chain validation with support for forks

+6 -1
Cargo.toml
···
 [package]
 name = "atproto-plc"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["Nick Gerakines <nick.gerakines@gmail.com>"]
 edition = "2024"
 rust-version = "1.90.0"
···
 [[bin]]
 name = "plc-audit"
 path = "src/bin/plc-audit.rs"
+required-features = ["cli"]
+
+[[bin]]
+name = "plc-fork-viz"
+path = "src/bin/plc-fork-viz.rs"
 required-features = ["cli"]

 [profile.release]
+225 -125
src/bin/README.md
··· 1 - # plc-audit: DID Audit Log Validator 1 + # AT Protocol PLC Command-Line Tools 2 + 3 + Command-line tools for working with AT Protocol PLC (Public Ledger of Credentials) DIDs. 4 + 5 + ## Tools 2 6 3 - A command-line tool for fetching and validating DID audit logs from plc.directory. 7 + ### plc-audit: DID Audit Log Validator 4 8 5 - ## Features 9 + Validates the cryptographic integrity of a DID's audit log from plc.directory. 10 + 11 + #### Features 6 12 7 13 - 🔍 **Fetch Audit Logs**: Retrieves complete operation history from plc.directory 8 14 - 🔐 **Cryptographic Validation**: Verifies all signatures using rotation keys ··· 10 16 - 📊 **Detailed Output**: Shows operation history and final DID state 11 17 - ⚡ **Fast & Reliable**: Built with Rust for performance and safety 12 18 13 - ## Installation 19 + #### Usage 14 20 15 21 ```bash 16 - cargo build --release --features cli --bin plc-audit 22 + # Basic validation 23 + cargo run --bin plc-audit --features cli -- did:plc:z72i7hdynmk6r22z27h6tvur 24 + 25 + # Verbose mode (show all operations) 26 + cargo run --bin plc-audit --features cli -- did:plc:z72i7hdynmk6r22z27h6tvur --verbose 27 + 28 + # Quiet mode (VALID/INVALID only) 29 + cargo run --bin plc-audit --features cli -- did:plc:z72i7hdynmk6r22z27h6tvur --quiet 30 + 31 + # Custom PLC directory 32 + cargo run --bin plc-audit --features cli -- did:plc:example --plc-url https://custom.plc.directory 17 33 ``` 18 34 19 - The binary will be available at `./target/release/plc-audit`. 35 + --- 20 36 21 - ## Usage 37 + ### plc-fork-viz: Fork Visualizer 22 38 23 - ### Basic Validation 39 + Visualizes forks in a DID's operation chain, showing which operations won/lost based on rotation key priority and the 72-hour recovery window. 24 40 25 - Validate a DID and show detailed output: 41 + #### Features 42 + 43 + - 🔀 **Fork Detection**: Identifies competing operations in the chain 44 + - 🔐 **Priority Analysis**: Determines which rotation key signed each operation 45 + - ⏱️ **Recovery Window**: Applies 72-hour recovery window rules 46 + - 📊 **Multiple Formats**: Tree, JSON, and Markdown visualization 47 + - 🎨 **Color Coding**: Green for canonical operations, red for rejected 48 + 49 + #### Usage 26 50 27 51 ```bash 28 - plc-audit did:plc:z72i7hdynmk6r22z27h6tvur 52 + # Basic fork visualization 53 + cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz 54 + 55 + # Verbose mode (detailed operation info) 56 + cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz --verbose 57 + 58 + # Show timestamps and recovery window calculations 59 + cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz --timing 60 + 61 + # Show full DIDs/CIDs 62 + cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz --full-ids 63 + 64 + # Output as JSON 65 + cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz --format json 66 + 67 + # Output as Markdown 68 + cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz --format markdown 29 69 ``` 30 70 31 - ### Verbose Mode 71 + #### Output Formats 32 72 33 - Show all operations in the audit log: 73 + **Tree Format** (default): 74 + ``` 75 + Fork at operation referencing bafyre...abc123 76 + ├─ 🔴 ✗ CID: bafyre...def456 77 + │ Signed by: rotation_key[1] 78 + │ Reason: Invalidated by higher-priority key[0] within recovery window 79 + └─ 🟢 ✓ CID: bafyre...ghi789 80 + Signed by: rotation_key[0] 81 + Status: CANONICAL (winner) 82 + ``` 
34 83 35 - ```bash 36 - plc-audit did:plc:z72i7hdynmk6r22z27h6tvur --verbose 84 + **JSON Format**: 85 + ```json 86 + [ 87 + { 88 + "prev_cid": "bafyre...", 89 + "winner_cid": "bafyre...", 90 + "operations": [...] 91 + } 92 + ] 37 93 ``` 38 94 39 - Output includes: 40 - - Operation index and CID 41 - - Creation timestamp 42 - - Operation type (Genesis/Update) 43 - - Previous operation reference 95 + **Markdown Format**: 96 + | Status | CID | Key Index | Timestamp | Reason | 97 + |--------|-----|-----------|-----------|--------| 98 + | ✅ Winner | bafyre...ghi789 | 0 | 2025-01-15 14:30:00 | Canonical operation | 99 + | ❌ Rejected | bafyre...def456 | 1 | 2025-01-15 10:00:00 | Invalidated by higher-priority key[0] | 44 100 45 - ### Quiet Mode 101 + #### Fork Resolution Rules 46 102 47 - Only show validation result (useful for scripts): 103 + 1. **Rotation Key Priority**: Keys are ordered by array index (0 = highest priority) 104 + 2. **Recovery Window**: 72 hours from the first operation's timestamp 105 + 3. **First-Received Default**: The operation received first wins unless invalidated 106 + 4. **Higher Priority Override**: A higher-priority key can invalidate if: 107 + - It arrives within 72 hours 108 + - Its key index is lower (e.g., key[0] beats key[1]) 109 + 110 + #### Example: No Forks 48 111 49 112 ```bash 50 - plc-audit did:plc:z72i7hdynmk6r22z27h6tvur --quiet 51 - ``` 113 + $ cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz 52 114 53 - Output: `✅ VALID` or error message 115 + 🔍 Analyzing forks in: did:plc:ewvi7nxzyoun6zhxrhs64oiz 116 + Source: https://plc.directory 54 117 55 - ### Custom PLC Directory 118 + 📊 Audit log contains 5 operations 119 + 120 + ✅ No forks detected - this is a linear operation chain 121 + All operations form a single canonical path from genesis to tip. 122 + ``` 56 123 57 - Use a custom PLC directory server: 124 + #### Example: Fork Detected 58 125 59 126 ```bash 60 - plc-audit did:plc:example --plc-url https://custom.plc.directory 61 - ``` 127 + $ cargo run --bin plc-fork-viz --features cli -- did:plc:z7x2k3j4m5n6 --timing 62 128 63 - ## What is Validated? 129 + 🔍 Analyzing forks in: did:plc:z7x2k3j4m5n6 130 + Source: https://plc.directory 64 131 65 - The tool performs comprehensive validation: 132 + 📊 Audit log contains 8 operations 133 + ⚠️ Detected 1 fork point(s) 66 134 67 - 1. **DID Format Validation** 68 - - Checks prefix is `did:plc:` 69 - - Verifies identifier is exactly 24 characters 70 - - Ensures only valid base32 characters (a-z, 2-7) 135 + 📊 Fork Visualization (Tree Format) 136 + ═══════════════════════════════════════════════════════════════ 71 137 72 - 2. **Chain Linkage Verification** 73 - - First operation must be genesis (prev = null) 74 - - Each subsequent operation's `prev` field must match previous operation's CID 75 - - No breaks in the chain 138 + Fork at operation referencing bafyre...abc123 139 + ├─ 🔴 ✗ CID: bafyre...def456 140 + │ Signed by: rotation_key[1] 141 + │ Timestamp: 2025-01-15 10:00:00 UTC 142 + │ Reason: Invalidated by higher-priority key[0] within recovery window 143 + 144 + └─ 🟢 ✓ CID: bafyre...ghi789 145 + │ Signed by: rotation_key[0] 146 + │ Timestamp: 2025-01-15 14:00:00 UTC 147 + │ Status: CANONICAL (winner) 76 148 77 - 3. 
**Cryptographic Signature Verification** 78 - - Each operation's signature is verified using rotation keys 79 - - Genesis operation establishes initial rotation keys 80 - - Later operations can rotate keys 149 + ═══════════════════════════════════════════════════════════════ 150 + 📈 Summary: 151 + Total operations: 8 152 + Fork points: 1 153 + Rejected operations: 1 81 154 82 - 4. **State Consistency** 83 - - Final state is extracted and displayed 84 - - Shows rotation keys, verification methods, services, and aliases 155 + 🔐 Fork Resolution Details: 156 + Fork 1: Winner is bafyre...ghi789 (signed by key[0]) 157 + ``` 85 158 86 - ## Exit Codes 159 + --- 87 160 88 - - `0`: Validation successful 89 - - `1`: Validation failed or error occurred 161 + ## Building & Installation 90 162 91 - ## Example Output 163 + ### Build Both Tools 92 164 93 - ### Standard Mode 165 + ```bash 166 + cargo build --bins --features cli --release 167 + ``` 94 168 169 + ### Install to ~/.cargo/bin 170 + 171 + ```bash 172 + cargo install --path . --features cli 95 173 ``` 96 - 🔍 Fetching audit log for: did:plc:z72i7hdynmk6r22z27h6tvur 97 - Source: https://plc.directory 98 174 99 - 📊 Audit Log Summary: 100 - Total operations: 4 101 - Genesis operation: bafyreigp6shzy6dlcxuowwoxz7u5nemdrkad2my5zwzpwilcnhih7bw6zm 102 - Latest operation: bafyreifn4pkect7nymne3sxkdg7tn7534msyxcjkshmzqtijmn3enyxm3q 175 + Then use directly: 176 + ```bash 177 + plc-audit did:plc:ewvi7nxzyoun6zhxrhs64oiz 178 + plc-fork-viz did:plc:ewvi7nxzyoun6zhxrhs64oiz 179 + ``` 103 180 104 - 🔐 Validating operation chain... 181 + --- 182 + 183 + ## Common Workflows 184 + 185 + ### Workflow 1: Validate and Check for Forks 186 + 187 + ```bash 188 + # First validate the audit log 189 + $ plc-audit did:plc:ewvi7nxzyoun6zhxrhs64oiz 105 190 ✅ Validation successful! 106 191 107 - 📄 Final DID State: 108 - Rotation keys: 2 109 - [0] did:key:zQ3shhCGUqDKjStzuDxPkTxN6ujddP4RkEKJJouJGRRkaLGbg 110 - [1] did:key:zQ3shpKnbdPx3g3CmPf5cRVTPe1HtSwVn5ish3wSnDPQCbLJK 192 + # Then check for forks 193 + $ plc-fork-viz did:plc:ewvi7nxzyoun6zhxrhs64oiz 194 + ``` 111 195 112 - Verification methods: 1 113 - atproto: did:key:zQ3shQo6TF2moaqMTrUZEM1jeuYRQXeHEx4evX9751y2qPqRA 196 + ### Workflow 2: Export Fork Data 114 197 115 - Also known as: 1 116 - - at://bsky.app 198 + ```bash 199 + # Export as JSON for analysis 200 + $ plc-fork-viz did:plc:ewvi7nxzyoun6zhxrhs64oiz --format json > forks.json 117 201 118 - Services: 1 119 - atproto_pds: https://puffball.us-east.host.bsky.network (AtprotoPersonalDataServer) 202 + # Generate Markdown report 203 + $ plc-fork-viz did:plc:ewvi7nxzyoun6zhxrhs64oiz --format markdown > FORK_REPORT.md 120 204 ``` 121 205 122 - ### Verbose Mode 206 + ### Workflow 3: Monitor DIDs 207 + 208 + ```bash 209 + #!/bin/bash 210 + # Monitor a DID for changes and forks 211 + DID="did:plc:ewvi7nxzyoun6zhxrhs64oiz" 123 212 124 - Shows detailed operation information: 213 + # Validate 214 + if plc-audit $DID --quiet; then 215 + echo "✅ DID is valid" 125 216 217 + # Check for forks 218 + plc-fork-viz $DID --format json > /tmp/forks.json 219 + 220 + if [ -s /tmp/forks.json ]; then 221 + echo "⚠️ Forks detected!" 222 + plc-fork-viz $DID 223 + fi 224 + else 225 + echo "❌ DID validation failed!" 
226 + exit 1 227 + fi 126 228 ``` 127 - 📋 Operations: 128 - [0] ✅ bafyreigp6shzy6dlcxuowwoxz7u5nemdrkad2my5zwzpwilcnhih7bw6zm - 2023-04-12T04:53:57.057Z 129 - Type: Genesis (creates the DID) 130 - [1] ✅ bafyreihmuvr3frdvd6vmdhucih277prdcfcezf67lasg5oekxoimnunjoq - 2023-04-12T17:26:46.468Z 131 - Type: Update 132 - Previous: bafyreigp6shzy6dlcxuowwoxz7u5nemdrkad2my5zwzpwilcnhih7bw6zm 133 - ... 134 - ``` 229 + 230 + --- 231 + 232 + ## Understanding Fork Visualization 233 + 234 + ### Symbols 235 + 236 + - 🌱 Genesis operation (creates the DID) 237 + - 🟢 ✓ Canonical operation (winner) 238 + - 🔴 ✗ Rejected operation (lost the fork) 239 + - ├─ Fork branch (more operations follow) 240 + - └─ Final fork branch 241 + 242 + ### Rejection Reasons 243 + 244 + 1. **"Invalidated by higher-priority key[N] within recovery window"** 245 + - A higher-priority rotation key signed a competing operation within 72 hours 135 246 136 - ## Error Handling 247 + 2. **"Higher-priority key[N] but outside 72-hour recovery window (X hours late)"** 248 + - A higher-priority key tried to invalidate but arrived too late 137 249 138 - The tool provides clear error messages: 250 + 3. **"Lower-priority key[N] (current winner has key[M])"** 251 + - This operation was signed by a lower-priority key and can't override 139 252 140 - ### Invalid DID Format 141 - ``` 142 - ❌ Error: Invalid DID format: DID must be exactly 32 characters, got 18 143 - Expected format: did:plc:<24 lowercase base32 characters> 144 - ``` 253 + --- 145 254 146 - ### Network Errors 147 - ``` 148 - ❌ Error: Failed to fetch audit log: HTTP error: 404 - Not Found 149 - ``` 255 + ## Troubleshooting 150 256 151 - ### Invalid Signature 152 - ``` 153 - ❌ Validation failed: Invalid signature at operation 2 154 - Error: Signature verification failed 155 - CID: bafyrei... 156 - ``` 257 + ### Error: "Invalid DID format" 258 + DID must follow format: `did:plc:<24 lowercase base32 characters>` 157 259 158 - ### Broken Chain 159 - ``` 160 - ❌ Validation failed: Chain linkage broken at operation 2 161 - Expected prev: bafyreiabc... 162 - Actual prev: bafyreixyz... 163 - ``` 260 + ### Error: "Failed to fetch audit log" 261 + - Check internet connection 262 + - Verify DID exists on plc.directory 263 + - Try `--plc-url` for custom PLC directories 164 264 165 - ## Use Cases 265 + ### No forks detected but expected 266 + - The DID may have a linear operation chain 267 + - All operations were submitted sequentially without conflicts 166 268 167 - - **Audit DID History**: Review all changes made to a DID over time 168 - - **Verify DID Integrity**: Ensure a DID hasn't been tampered with 169 - - **Debug Issues**: Identify problems in operation chains 170 - - **Monitor DIDs**: Automate validation in scripts or monitoring systems 171 - - **Security Analysis**: Investigate suspicious DID activity 269 + --- 172 270 173 271 ## Technical Details 174 272 175 - The validator: 176 - - Uses `reqwest` for HTTP requests to plc.directory 177 - - Implements cryptographic verification with P-256 and secp256k1 178 - - Validates ECDSA signatures using the `atproto-plc` library 179 - - Supports both standard and legacy operation formats 273 + **plc-audit** validates: 274 + 1. DID format (prefix, length, base32 encoding) 275 + 2. Chain linkage (prev references) 276 + 3. Cryptographic signatures (ECDSA with P-256/secp256k1) 277 + 4. State consistency 180 278 181 - ## JavaScript/WASM Version 279 + **plc-fork-viz** implements: 280 + 1. Fork detection (multiple operations with same prev CID) 281 + 2. 
Rotation key index resolution 282 + 3. Priority-based fork resolution 283 + 4. 72-hour recovery window enforcement 182 284 183 - A JavaScript implementation using WebAssembly is also available. See [`wasm/README.md`](../../wasm/README.md) for details. 285 + Both tools use: 286 + - `reqwest` for HTTP requests 287 + - `atproto-plc` library for cryptographic operations 288 + - `clap` for command-line parsing 184 289 185 - ### Quick Start 290 + --- 186 291 187 - ```bash 188 - # Build WASM module 189 - cd wasm && ./build.sh 292 + ## See Also 190 293 191 - # Run the tool 192 - node plc-audit.js did:plc:z72i7hdynmk6r22z27h6tvur --verbose 193 - ``` 294 + - [AT Protocol PLC Specification](https://atproto.com/specs/did-plc) 295 + - [Fork Resolution Implementation Report](../../FORK_RESOLUTION_REPORT.md) 296 + - [Implementation Summary](../../IMPLEMENTATION_SUMMARY.md) 297 + - [Library Documentation](../../README.md) 194 298 195 - The JavaScript version provides the same functionality and output format as the Rust binary, making it suitable for: 196 - - Cross-platform deployment (runs anywhere with Node.js) 197 - - Web applications and browser extensions 198 - - Integration with JavaScript/TypeScript projects 199 - - Smaller binary size (~200KB WASM vs 1.5MB native) 299 + --- 200 300 201 301 ## License 202 302
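The resolution rules documented in the README above reduce to one comparison per competing operation: a later arrival only displaces the first-received one when it is signed by a strictly higher-priority rotation key (lower index) and lands within the 72-hour recovery window measured against the current winner. A minimal, self-contained sketch of that rule, assuming candidates are given in the order the directory received them; the names (`CandidateOp`, `pick_winner`) are illustrative and not part of the `atproto-plc` API:

```rust
// Sketch of the fork-resolution rule described above. `key_index` is the
// signer's position in the rotation_keys array (0 = highest priority);
// `received_at_hours` is the server-assigned receive time, expressed as
// hours from an arbitrary reference point.
const RECOVERY_WINDOW_HOURS: i64 = 72;

struct CandidateOp {
    key_index: usize,
    received_at_hours: i64,
}

/// Pick the winner among operations that share the same `prev` CID.
/// Assumes `candidates` is sorted by received time (first-received first).
fn pick_winner(candidates: &[CandidateOp]) -> usize {
    let mut winner = 0; // first-received wins by default
    for i in 1..candidates.len() {
        let higher_priority = candidates[i].key_index < candidates[winner].key_index;
        let within_window = candidates[i].received_at_hours
            - candidates[winner].received_at_hours
            <= RECOVERY_WINDOW_HOURS;
        if higher_priority && within_window {
            winner = i; // recovery: a higher-priority key overrides in time
        }
    }
    winner
}

fn main() {
    // key[1] publishes first; key[0] responds 24 hours later and wins.
    let ops = [
        CandidateOp { key_index: 1, received_at_hours: 0 },
        CandidateOp { key_index: 0, received_at_hours: 24 },
    ];
    assert_eq!(pick_winner(&ops), 1);

    // The same response 100 hours later is outside the window and loses.
    let late = [
        CandidateOp { key_index: 1, received_at_hours: 0 },
        CandidateOp { key_index: 0, received_at_hours: 100 },
    ];
    assert_eq!(pick_winner(&late), 0);
}
```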
+189 -46
src/bin/plc-audit.rs
··· 3 3 //! This binary fetches DID audit logs from plc.directory and validates 4 4 //! each operation cryptographically to ensure the chain is valid. 5 5 6 - use atproto_plc::{Did, Operation}; 6 + use atproto_plc::{Did, Operation, OperationChainValidator, PlcState}; 7 7 use clap::Parser; 8 8 use reqwest::blocking::Client; 9 9 use serde::Deserialize; ··· 121 121 println!(); 122 122 } 123 123 124 - // Validate the operation chain 124 + // Detect forks and build canonical chain 125 125 if !args.quiet { 126 - println!("🔐 Validating operation chain..."); 126 + println!("🔐 Analyzing operation chain..."); 127 127 println!(); 128 128 } 129 129 130 - // Step 1: Validate chain linkage (prev references) 130 + // Detect fork points and nullified operations 131 + let has_forks = detect_forks(&audit_log); 132 + let has_nullified = audit_log.iter().any(|e| e.nullified); 133 + 134 + if has_forks || has_nullified { 135 + if !args.quiet { 136 + if has_forks { 137 + println!("⚠️ Fork detected - multiple operations reference the same prev CID"); 138 + } 139 + if has_nullified { 140 + println!("⚠️ Nullified operations detected - will validate canonical chain only"); 141 + } 142 + println!(); 143 + } 144 + 145 + // Use fork resolution to build canonical chain 146 + if args.verbose { 147 + println!("Step 1: Fork Resolution & Canonical Chain Building"); 148 + println!("==================================================="); 149 + } 150 + 151 + // Build operations and timestamps for fork resolution 152 + let operations: Vec<_> = audit_log.iter().map(|e| e.operation.clone()).collect(); 153 + let timestamps: Vec<_> = audit_log 154 + .iter() 155 + .map(|e| { 156 + e.created_at 157 + .parse::<chrono::DateTime<chrono::Utc>>() 158 + .unwrap_or_else(|_| chrono::Utc::now()) 159 + }) 160 + .collect(); 161 + 162 + // Resolve forks and get canonical chain 163 + match OperationChainValidator::validate_chain_with_forks(&operations, &timestamps) { 164 + Ok(final_state) => { 165 + if args.verbose { 166 + println!(" ✅ Fork resolution complete"); 167 + println!(" ✅ Canonical chain validated successfully"); 168 + println!(); 169 + 170 + // Show which operations are in the canonical chain 171 + println!("Canonical Chain Operations:"); 172 + println!("==========================="); 173 + 174 + // Build the canonical chain by following non-nullified operations 175 + let canonical_indices = build_canonical_chain_indices(&audit_log); 176 + 177 + for idx in &canonical_indices { 178 + let entry = &audit_log[*idx]; 179 + println!(" [{}] ✅ {} - {}", idx, entry.cid, entry.created_at); 180 + } 181 + println!(); 182 + 183 + if has_nullified { 184 + println!("Nullified/Rejected Operations:"); 185 + println!("=============================="); 186 + for (i, entry) in audit_log.iter().enumerate() { 187 + if entry.nullified && !canonical_indices.contains(&i) { 188 + println!(" [{}] ❌ {} - {} (nullified)", i, entry.cid, entry.created_at); 189 + if let Some(prev) = entry.operation.prev() { 190 + println!(" Referenced: {}", prev); 191 + } 192 + } 193 + } 194 + println!(); 195 + } 196 + } 197 + 198 + // Display final state 199 + display_final_state(&final_state, args.quiet); 200 + return; 201 + } 202 + Err(e) => { 203 + eprintln!(); 204 + eprintln!("❌ Validation failed: {}", e); 205 + process::exit(1); 206 + } 207 + } 208 + } 209 + 210 + // Simple linear chain validation (no forks or nullified operations) 131 211 if args.verbose { 132 - println!("Step 1: Chain Linkage Validation"); 212 + println!("Step 1: Linear Chain Validation"); 133 213 
println!("================================"); 134 214 } 135 215 136 216 for i in 1..audit_log.len() { 137 - if audit_log[i].nullified { 138 - if args.verbose { 139 - println!(" [{}] ⊘ Skipped (nullified)", i); 140 - } 141 - continue; 142 - } 143 - 144 217 let prev_cid = audit_log[i - 1].cid.clone(); 145 218 let expected_prev = audit_log[i].operation.prev(); 146 219 ··· 298 371 } 299 372 300 373 // Build final state 301 - let final_entry = audit_log.iter().filter(|e| !e.nullified).last().unwrap(); 374 + let final_entry = audit_log.last().unwrap(); 302 375 if let Some(_rotation_keys) = final_entry.operation.rotation_keys() { 303 376 let final_state = match &final_entry.operation { 304 377 Operation::PlcOperation { ··· 308 381 services, 309 382 .. 310 383 } => { 311 - use atproto_plc::PlcState; 312 384 PlcState { 313 385 rotation_keys: rotation_keys.clone(), 314 386 verification_methods: verification_methods.clone(), ··· 317 389 } 318 390 } 319 391 _ => { 320 - use atproto_plc::PlcState; 321 392 PlcState::new() 322 393 } 323 394 }; 324 395 325 - { 326 - if args.quiet { 327 - println!("✅ VALID"); 396 + display_final_state(&final_state, args.quiet); 397 + } else { 398 + eprintln!("❌ Error: Could not extract final state"); 399 + process::exit(1); 400 + } 401 + } 402 + 403 + /// Detect if there are fork points in the audit log 404 + fn detect_forks(audit_log: &[AuditLogEntry]) -> bool { 405 + use std::collections::HashMap; 406 + 407 + let mut prev_counts: HashMap<String, usize> = HashMap::new(); 408 + 409 + for entry in audit_log { 410 + if let Some(prev) = entry.operation.prev() { 411 + *prev_counts.entry(prev.to_string()).or_insert(0) += 1; 412 + } 413 + } 414 + 415 + // If any prev CID is referenced by more than one operation, there's a fork 416 + prev_counts.values().any(|&count| count > 1) 417 + } 418 + 419 + /// Build a list of indices that form the canonical chain 420 + fn build_canonical_chain_indices(audit_log: &[AuditLogEntry]) -> Vec<usize> { 421 + use std::collections::HashMap; 422 + 423 + // Build a map of prev CID to operations 424 + let mut prev_to_indices: HashMap<String, Vec<usize>> = HashMap::new(); 425 + 426 + for (i, entry) in audit_log.iter().enumerate() { 427 + if let Some(prev) = entry.operation.prev() { 428 + prev_to_indices 429 + .entry(prev.to_string()) 430 + .or_default() 431 + .push(i); 432 + } 433 + } 434 + 435 + // Start from genesis and follow the canonical chain 436 + let mut canonical = Vec::new(); 437 + 438 + // Find genesis (first operation) 439 + let genesis = match audit_log.first() { 440 + Some(g) => g, 441 + None => return canonical, 442 + }; 443 + 444 + canonical.push(0); 445 + let mut current_cid = genesis.cid.clone(); 446 + 447 + // Follow the chain, preferring non-nullified operations 448 + loop { 449 + if let Some(indices) = prev_to_indices.get(&current_cid) { 450 + // Find the first non-nullified operation 451 + if let Some(&next_idx) = indices.iter().find(|&&idx| !audit_log[idx].nullified) { 452 + canonical.push(next_idx); 453 + current_cid = audit_log[next_idx].cid.clone(); 328 454 } else { 329 - println!("✅ Validation successful!"); 330 - println!(); 331 - println!("📄 Final DID State:"); 332 - println!(" Rotation keys: {}", final_state.rotation_keys.len()); 333 - for (i, key) in final_state.rotation_keys.iter().enumerate() { 334 - println!(" [{}] {}", i, key); 335 - } 336 - println!(); 337 - println!(" Verification methods: {}", final_state.verification_methods.len()); 338 - for (name, key) in &final_state.verification_methods { 339 - println!(" {}: 
{}", name, key); 340 - } 341 - println!(); 342 - if !final_state.also_known_as.is_empty() { 343 - println!(" Also known as: {}", final_state.also_known_as.len()); 344 - for uri in &final_state.also_known_as { 345 - println!(" - {}", uri); 346 - } 347 - println!(); 348 - } 349 - if !final_state.services.is_empty() { 350 - println!(" Services: {}", final_state.services.len()); 351 - for (name, service) in &final_state.services { 352 - println!(" {}: {} ({})", name, service.endpoint, service.service_type); 353 - } 455 + // All operations at this point are nullified - try to find any operation 456 + if let Some(&next_idx) = indices.first() { 457 + canonical.push(next_idx); 458 + current_cid = audit_log[next_idx].cid.clone(); 459 + } else { 460 + break; 354 461 } 355 462 } 463 + } else { 464 + // No more operations 465 + break; 356 466 } 467 + } 468 + 469 + canonical 470 + } 471 + 472 + /// Display the final state after validation 473 + fn display_final_state(final_state: &PlcState, quiet: bool) { 474 + if quiet { 475 + println!("✅ VALID"); 357 476 } else { 358 - eprintln!("❌ Error: Could not extract final state"); 359 - process::exit(1); 477 + println!("✅ Validation successful!"); 478 + println!(); 479 + println!("📄 Final DID State:"); 480 + println!(" Rotation keys: {}", final_state.rotation_keys.len()); 481 + for (i, key) in final_state.rotation_keys.iter().enumerate() { 482 + println!(" [{}] {}", i, key); 483 + } 484 + println!(); 485 + println!(" Verification methods: {}", final_state.verification_methods.len()); 486 + for (name, key) in &final_state.verification_methods { 487 + println!(" {}: {}", name, key); 488 + } 489 + println!(); 490 + if !final_state.also_known_as.is_empty() { 491 + println!(" Also known as: {}", final_state.also_known_as.len()); 492 + for uri in &final_state.also_known_as { 493 + println!(" - {}", uri); 494 + } 495 + println!(); 496 + } 497 + if !final_state.services.is_empty() { 498 + println!(" Services: {}", final_state.services.len()); 499 + for (name, service) in &final_state.services { 500 + println!(" {}: {} ({})", name, service.endpoint, service.service_type); 501 + } 502 + } 360 503 } 361 504 } 362 505 ··· 365 508 let url = format!("{}/{}/log/audit", plc_url, did); 366 509 367 510 let client = Client::builder() 368 - .user_agent("atproto-plc-audit/0.1.0") 511 + .user_agent("atproto-plc-audit/0.2.0") 369 512 .timeout(std::time::Duration::from_secs(30)) 370 513 .build()?; 371 514
+613
src/bin/plc-fork-viz.rs
··· 1 + //! PLC Fork Visualizer 2 + //! 3 + //! This binary fetches DID audit logs and visualizes any forks in the operation chain, 4 + //! showing which operations won/lost and why based on rotation key priority and the 5 + //! 72-hour recovery window. 6 + 7 + use atproto_plc::{Did, Operation, VerifyingKey}; 8 + use chrono::{DateTime, Utc}; 9 + use clap::Parser; 10 + use reqwest::blocking::Client; 11 + use serde::Deserialize; 12 + use std::collections::HashMap; 13 + use std::process; 14 + 15 + /// Command-line arguments 16 + #[derive(Parser, Debug)] 17 + #[command( 18 + name = "plc-fork-viz", 19 + about = "Visualize forks in PLC audit logs", 20 + long_about = "Fetches and visualizes fork points in DID operation chains from plc.directory,\nshowing which operations won/lost based on rotation key priority and recovery window rules" 21 + )] 22 + struct Args { 23 + /// The DID to analyze (e.g., did:plc:ewvi7nxzyoun6zhxrhs64oiz) 24 + #[arg(value_name = "DID")] 25 + did: String, 26 + 27 + /// Custom PLC directory URL (default: https://plc.directory) 28 + #[arg(long, default_value = "https://plc.directory")] 29 + plc_url: String, 30 + 31 + /// Show detailed information for each operation 32 + #[arg(short, long)] 33 + verbose: bool, 34 + 35 + /// Show timing information (timestamps and recovery window calculations) 36 + #[arg(short, long)] 37 + timing: bool, 38 + 39 + /// Show full DIDs and CIDs instead of truncated versions 40 + #[arg(long)] 41 + full_ids: bool, 42 + 43 + /// Output format: tree (default), json, or markdown 44 + #[arg(short, long, default_value = "tree")] 45 + format: OutputFormat, 46 + } 47 + 48 + #[derive(Debug, Clone, clap::ValueEnum)] 49 + enum OutputFormat { 50 + Tree, 51 + Json, 52 + Markdown, 53 + } 54 + 55 + /// Audit log response from plc.directory 56 + #[derive(Debug, Deserialize, Clone)] 57 + struct AuditLogEntry { 58 + /// The DID this operation is for 59 + #[allow(dead_code)] 60 + did: String, 61 + 62 + /// The operation itself 63 + operation: Operation, 64 + 65 + /// CID of this operation 66 + cid: String, 67 + 68 + /// Timestamp when this operation was created 69 + #[serde(rename = "createdAt")] 70 + created_at: String, 71 + 72 + /// Nullified flag (if this operation was invalidated by fork resolution) 73 + #[allow(dead_code)] 74 + #[serde(default)] 75 + nullified: bool, 76 + } 77 + 78 + /// Represents a fork point in the operation chain 79 + #[derive(Debug, Clone)] 80 + struct ForkPoint { 81 + /// The prev CID that all operations in this fork reference 82 + prev_cid: String, 83 + 84 + /// Competing operations at this fork 85 + operations: Vec<ForkOperation>, 86 + 87 + /// The winning operation (after fork resolution) 88 + winner_cid: String, 89 + } 90 + 91 + /// An operation that's part of a fork 92 + #[derive(Debug, Clone)] 93 + struct ForkOperation { 94 + cid: String, 95 + operation: Operation, 96 + timestamp: DateTime<Utc>, 97 + signing_key_index: Option<usize>, 98 + signing_key: Option<String>, 99 + is_winner: bool, 100 + rejection_reason: Option<String>, 101 + } 102 + 103 + fn main() { 104 + let args = Args::parse(); 105 + 106 + // Parse and validate the DID 107 + let did = match Did::parse(&args.did) { 108 + Ok(did) => did, 109 + Err(e) => { 110 + eprintln!("❌ Error: Invalid DID format: {}", e); 111 + eprintln!(" Expected format: did:plc:<24 lowercase base32 characters>"); 112 + process::exit(1); 113 + } 114 + }; 115 + 116 + println!("🔍 Analyzing forks in: {}", did); 117 + println!(" Source: {}", args.plc_url); 118 + println!(); 119 + 120 + // Fetch the 
audit log 121 + let audit_log = match fetch_audit_log(&args.plc_url, &did) { 122 + Ok(log) => log, 123 + Err(e) => { 124 + eprintln!("❌ Error: Failed to fetch audit log: {}", e); 125 + process::exit(1); 126 + } 127 + }; 128 + 129 + if audit_log.is_empty() { 130 + eprintln!("❌ Error: No operations found in audit log"); 131 + process::exit(1); 132 + } 133 + 134 + println!("📊 Audit log contains {} operations", audit_log.len()); 135 + 136 + // Detect forks 137 + let forks = detect_forks(&audit_log, &args); 138 + 139 + if forks.is_empty() { 140 + println!("\n✅ No forks detected - this is a linear operation chain"); 141 + println!(" All operations form a single canonical path from genesis to tip."); 142 + 143 + if args.verbose { 144 + println!("\n📋 Linear chain visualization:"); 145 + visualize_linear_chain(&audit_log, &args); 146 + } 147 + 148 + return; 149 + } 150 + 151 + println!("⚠️ Detected {} fork point(s)", forks.len()); 152 + println!(); 153 + 154 + // Visualize based on format 155 + match args.format { 156 + OutputFormat::Tree => visualize_tree(&audit_log, &forks, &args), 157 + OutputFormat::Json => visualize_json(&forks), 158 + OutputFormat::Markdown => visualize_markdown(&audit_log, &forks, &args), 159 + } 160 + } 161 + 162 + /// Detect fork points in the audit log 163 + fn detect_forks(audit_log: &[AuditLogEntry], args: &Args) -> Vec<ForkPoint> { 164 + let mut prev_to_operations: HashMap<String, Vec<AuditLogEntry>> = HashMap::new(); 165 + 166 + // Group operations by their prev CID 167 + for entry in audit_log { 168 + if let Some(prev) = entry.operation.prev() { 169 + prev_to_operations 170 + .entry(prev.to_string()) 171 + .or_default() 172 + .push(entry.clone()); 173 + } 174 + } 175 + 176 + // Build operation map for state reconstruction 177 + let operation_map: HashMap<String, AuditLogEntry> = audit_log 178 + .iter() 179 + .map(|e| (e.cid.clone(), e.clone())) 180 + .collect(); 181 + 182 + let mut forks = Vec::new(); 183 + 184 + // Find fork points (where multiple operations reference the same prev) 185 + for (prev_cid, operations) in prev_to_operations { 186 + if operations.len() > 1 { 187 + if args.verbose { 188 + println!("🔀 Fork detected at {}", truncate(&prev_cid, args)); 189 + println!(" {} competing operations", operations.len()); 190 + } 191 + 192 + // Get the state at the prev operation to determine rotation keys 193 + let state = if let Some(prev_entry) = operation_map.get(&prev_cid) { 194 + get_state_from_operation(&prev_entry.operation) 195 + } else { 196 + // This shouldn't happen in a valid chain 197 + continue; 198 + }; 199 + 200 + // Analyze each operation in the fork 201 + let mut fork_ops = Vec::new(); 202 + for entry in &operations { 203 + let timestamp = parse_timestamp(&entry.created_at); 204 + 205 + // Determine which rotation key signed this operation 206 + let (signing_key_index, signing_key) = if !state.rotation_keys.is_empty() { 207 + find_signing_key(&entry.operation, &state.rotation_keys) 208 + } else { 209 + (None, None) 210 + }; 211 + 212 + fork_ops.push(ForkOperation { 213 + cid: entry.cid.clone(), 214 + operation: entry.operation.clone(), 215 + timestamp, 216 + signing_key_index, 217 + signing_key, 218 + is_winner: false, 219 + rejection_reason: None, 220 + }); 221 + } 222 + 223 + // Resolve the fork to determine winner 224 + let winner_cid = resolve_fork(&mut fork_ops); 225 + 226 + forks.push(ForkPoint { 227 + prev_cid, 228 + operations: fork_ops, 229 + winner_cid, 230 + }); 231 + } 232 + } 233 + 234 + // Sort forks chronologically 235 + 
forks.sort_by_key(|f| { 236 + f.operations 237 + .iter() 238 + .map(|op| op.timestamp) 239 + .min() 240 + .unwrap_or_else(Utc::now) 241 + }); 242 + 243 + forks 244 + } 245 + 246 + /// Resolve a fork point and mark the winner 247 + fn resolve_fork(fork_ops: &mut [ForkOperation]) -> String { 248 + // Sort by timestamp (chronological order) 249 + fork_ops.sort_by_key(|op| op.timestamp); 250 + 251 + // First-received is the default winner 252 + let mut winner_idx = 0; 253 + fork_ops[0].is_winner = true; 254 + 255 + // Check if any later operation can invalidate based on priority 256 + for i in 1..fork_ops.len() { 257 + let competing_key_idx = fork_ops[i].signing_key_index; 258 + let winner_key_idx = fork_ops[winner_idx].signing_key_index; 259 + 260 + match (competing_key_idx, winner_key_idx) { 261 + (Some(competing_idx), Some(winner_idx_val)) => { 262 + if competing_idx < winner_idx_val { 263 + // Higher priority (lower index) 264 + let time_diff = fork_ops[i].timestamp - fork_ops[winner_idx].timestamp; 265 + 266 + if time_diff <= chrono::Duration::hours(72) { 267 + // Within recovery window - this operation wins 268 + fork_ops[winner_idx].is_winner = false; 269 + fork_ops[winner_idx].rejection_reason = Some(format!( 270 + "Invalidated by higher-priority key[{}] within recovery window", 271 + competing_idx 272 + )); 273 + 274 + fork_ops[i].is_winner = true; 275 + winner_idx = i; 276 + } else { 277 + // Outside recovery window 278 + fork_ops[i].rejection_reason = Some(format!( 279 + "Higher-priority key[{}] but outside 72-hour recovery window ({:.1}h late)", 280 + competing_idx, 281 + time_diff.num_hours() as f64 282 + )); 283 + } 284 + } else { 285 + // Lower priority 286 + fork_ops[i].rejection_reason = Some(format!( 287 + "Lower-priority key[{}] (current winner has key[{}])", 288 + competing_idx, 289 + winner_idx_val 290 + )); 291 + } 292 + } 293 + _ => { 294 + fork_ops[i].rejection_reason = Some("Could not determine signing key".to_string()); 295 + } 296 + } 297 + } 298 + 299 + fork_ops[winner_idx].cid.clone() 300 + } 301 + 302 + /// Find which rotation key signed an operation 303 + fn find_signing_key(operation: &Operation, rotation_keys: &[String]) -> (Option<usize>, Option<String>) { 304 + for (index, key_did) in rotation_keys.iter().enumerate() { 305 + if let Ok(verifying_key) = VerifyingKey::from_did_key(key_did) { 306 + if operation.verify(&[verifying_key]).is_ok() { 307 + return (Some(index), Some(key_did.clone())); 308 + } 309 + } 310 + } 311 + (None, None) 312 + } 313 + 314 + /// Get state from an operation 315 + fn get_state_from_operation(operation: &Operation) -> atproto_plc::PlcState { 316 + match operation { 317 + Operation::PlcOperation { 318 + rotation_keys, 319 + verification_methods, 320 + also_known_as, 321 + services, 322 + .. 
323 + } => atproto_plc::PlcState { 324 + rotation_keys: rotation_keys.clone(), 325 + verification_methods: verification_methods.clone(), 326 + also_known_as: also_known_as.clone(), 327 + services: services.clone(), 328 + }, 329 + _ => atproto_plc::PlcState::new(), 330 + } 331 + } 332 + 333 + /// Parse ISO 8601 timestamp 334 + fn parse_timestamp(timestamp: &str) -> DateTime<Utc> { 335 + timestamp 336 + .parse::<DateTime<Utc>>() 337 + .unwrap_or_else(|_| Utc::now()) 338 + } 339 + 340 + /// Visualize forks as a tree 341 + fn visualize_tree(audit_log: &[AuditLogEntry], forks: &[ForkPoint], args: &Args) { 342 + println!("📊 Fork Visualization (Tree Format)"); 343 + println!("═══════════════════════════════════════════════════════════════"); 344 + println!(); 345 + 346 + // Build a map of which operations are part of forks 347 + let mut fork_map: HashMap<String, &ForkPoint> = HashMap::new(); 348 + for fork in forks { 349 + for op in &fork.operations { 350 + fork_map.insert(op.cid.clone(), fork); 351 + } 352 + } 353 + 354 + // Track which prev CIDs have been processed 355 + let mut processed_forks: std::collections::HashSet<String> = std::collections::HashSet::new(); 356 + 357 + for entry in audit_log.iter() { 358 + let is_genesis = entry.operation.is_genesis(); 359 + let prev = entry.operation.prev(); 360 + 361 + // Check if this operation is part of a fork 362 + if let Some(_prev_cid) = prev { 363 + if let Some(fork) = fork_map.get(&entry.cid) { 364 + // This is a fork point 365 + if !processed_forks.contains(&fork.prev_cid) { 366 + processed_forks.insert(fork.prev_cid.clone()); 367 + 368 + println!("Fork at operation referencing {}", truncate(&fork.prev_cid, args)); 369 + 370 + for (j, fork_op) in fork.operations.iter().enumerate() { 371 + let symbol = if fork_op.is_winner { "✓" } else { "✗" }; 372 + let color = if fork_op.is_winner { "🟢" } else { "🔴" }; 373 + let prefix = if j == fork.operations.len() - 1 { "└─" } else { "├─" }; 374 + 375 + println!( 376 + " {} {} {} CID: {}", 377 + prefix, 378 + color, 379 + symbol, 380 + truncate(&fork_op.cid, args) 381 + ); 382 + 383 + if let Some(key_idx) = fork_op.signing_key_index { 384 + println!(" │ Signed by: rotation_key[{}]", key_idx); 385 + if args.verbose { 386 + if let Some(key) = &fork_op.signing_key { 387 + println!(" │ Key: {}", truncate(key, args)); 388 + } 389 + } 390 + } 391 + 392 + if args.timing { 393 + println!( 394 + " │ Timestamp: {}", 395 + fork_op.timestamp.format("%Y-%m-%d %H:%M:%S UTC") 396 + ); 397 + } 398 + 399 + if !fork_op.is_winner { 400 + if let Some(reason) = &fork_op.rejection_reason { 401 + println!(" │ Reason: {}", reason); 402 + } 403 + } else { 404 + println!(" │ Status: CANONICAL (winner)"); 405 + } 406 + 407 + if args.verbose { 408 + if let Some(Operation::PlcOperation { services, .. }) = Some(&fork_op.operation) { 409 + if !services.is_empty() { 410 + println!(" │ Services: {} configured", services.len()); 411 + } 412 + } 413 + } 414 + 415 + if j < fork.operations.len() - 1 { 416 + println!(" │"); 417 + } 418 + } 419 + println!(); 420 + } 421 + continue; 422 + } 423 + } 424 + 425 + // Regular operation (not part of a fork) 426 + if is_genesis { 427 + println!("🌱 Genesis"); 428 + println!(" CID: {}", truncate(&entry.cid, args)); 429 + if args.timing { 430 + println!(" Timestamp: {}", entry.created_at); 431 + } 432 + if args.verbose { 433 + if let Operation::PlcOperation { rotation_keys, .. 
} = &entry.operation { 434 + println!(" Rotation keys: {}", rotation_keys.len()); 435 + } 436 + } 437 + println!(); 438 + } 439 + } 440 + 441 + // Summary 442 + println!("═══════════════════════════════════════════════════════════════"); 443 + println!("📈 Summary:"); 444 + println!(" Total operations: {}", audit_log.len()); 445 + println!(" Fork points: {}", forks.len()); 446 + 447 + let total_competing_ops: usize = forks.iter().map(|f| f.operations.len()).sum(); 448 + let rejected_ops = total_competing_ops - forks.len(); 449 + println!(" Rejected operations: {}", rejected_ops); 450 + 451 + if !forks.is_empty() { 452 + println!("\n🔐 Fork Resolution Details:"); 453 + for (i, fork) in forks.iter().enumerate() { 454 + let winner = fork.operations.iter().find(|op| op.is_winner).unwrap(); 455 + println!( 456 + " Fork {}: Winner is {} (signed by key[{}])", 457 + i + 1, 458 + truncate(&winner.cid, args), 459 + winner.signing_key_index.unwrap_or(999) 460 + ); 461 + } 462 + } 463 + } 464 + 465 + /// Visualize linear chain (no forks) 466 + fn visualize_linear_chain(audit_log: &[AuditLogEntry], args: &Args) { 467 + for (i, entry) in audit_log.iter().enumerate() { 468 + let symbol = if i == 0 { "🌱" } else { " ↓" }; 469 + println!("{} Operation {}: {}", symbol, i, truncate(&entry.cid, args)); 470 + 471 + if args.timing { 472 + println!(" Timestamp: {}", entry.created_at); 473 + } 474 + 475 + if args.verbose { 476 + if let Some(prev) = entry.operation.prev() { 477 + println!(" Previous: {}", truncate(prev, args)); 478 + } 479 + } 480 + } 481 + } 482 + 483 + /// Visualize forks as JSON 484 + fn visualize_json(forks: &[ForkPoint]) { 485 + #[derive(serde::Serialize)] 486 + struct JsonFork { 487 + prev_cid: String, 488 + winner_cid: String, 489 + operations: Vec<JsonForkOp>, 490 + } 491 + 492 + #[derive(serde::Serialize)] 493 + struct JsonForkOp { 494 + cid: String, 495 + timestamp: String, 496 + signing_key_index: Option<usize>, 497 + is_winner: bool, 498 + rejection_reason: Option<String>, 499 + } 500 + 501 + let json_forks: Vec<JsonFork> = forks 502 + .iter() 503 + .map(|f| JsonFork { 504 + prev_cid: f.prev_cid.clone(), 505 + winner_cid: f.winner_cid.clone(), 506 + operations: f 507 + .operations 508 + .iter() 509 + .map(|op| JsonForkOp { 510 + cid: op.cid.clone(), 511 + timestamp: op.timestamp.to_rfc3339(), 512 + signing_key_index: op.signing_key_index, 513 + is_winner: op.is_winner, 514 + rejection_reason: op.rejection_reason.clone(), 515 + }) 516 + .collect(), 517 + }) 518 + .collect(); 519 + 520 + println!( 521 + "{}", 522 + serde_json::to_string_pretty(&json_forks).unwrap() 523 + ); 524 + } 525 + 526 + /// Visualize forks as Markdown 527 + fn visualize_markdown(audit_log: &[AuditLogEntry], forks: &[ForkPoint], args: &Args) { 528 + println!("# PLC Fork Analysis Report\n"); 529 + println!("**DID**: {}\n", args.did); 530 + println!("**Total Operations**: {}", audit_log.len()); 531 + println!("**Fork Points**: {}\n", forks.len()); 532 + 533 + if forks.is_empty() { 534 + println!("✅ No forks detected - linear operation chain\n"); 535 + return; 536 + } 537 + 538 + println!("## Fork Details\n"); 539 + 540 + for (i, fork) in forks.iter().enumerate() { 541 + println!("### Fork {} (at {})\n", i + 1, truncate(&fork.prev_cid, args)); 542 + println!("| Status | CID | Key Index | Timestamp | Reason |"); 543 + println!("|--------|-----|-----------|-----------|--------|"); 544 + 545 + for op in &fork.operations { 546 + let status = if op.is_winner { "✅ Winner" } else { "❌ Rejected" }; 547 + let key_idx = op 548 + 
.signing_key_index 549 + .map(|i| i.to_string()) 550 + .unwrap_or_else(|| "?".to_string()); 551 + let timestamp = op.timestamp.format("%Y-%m-%d %H:%M:%S"); 552 + let reason = op 553 + .rejection_reason 554 + .as_deref() 555 + .unwrap_or("Canonical operation"); 556 + 557 + println!( 558 + "| {} | {} | {} | {} | {} |", 559 + status, 560 + truncate(&op.cid, args), 561 + key_idx, 562 + timestamp, 563 + reason 564 + ); 565 + } 566 + 567 + println!(); 568 + } 569 + 570 + println!("## Summary\n"); 571 + println!("- Total competing operations: {}", forks.iter().map(|f| f.operations.len()).sum::<usize>()); 572 + println!("- Rejected operations: {}", forks.iter().map(|f| f.operations.len() - 1).sum::<usize>()); 573 + } 574 + 575 + /// Truncate a string for display 576 + fn truncate(s: &str, args: &Args) -> String { 577 + if args.full_ids { 578 + s.to_string() 579 + } else { 580 + if s.len() > 20 { 581 + format!("{}...{}", &s[..8], &s[s.len() - 8..]) 582 + } else { 583 + s.to_string() 584 + } 585 + } 586 + } 587 + 588 + /// Fetch the audit log for a DID from plc.directory 589 + fn fetch_audit_log( 590 + plc_url: &str, 591 + did: &Did, 592 + ) -> Result<Vec<AuditLogEntry>, Box<dyn std::error::Error>> { 593 + let url = format!("{}/{}/log/audit", plc_url, did); 594 + 595 + let client = Client::builder() 596 + .user_agent("atproto-plc-fork-viz/0.2.0") 597 + .timeout(std::time::Duration::from_secs(30)) 598 + .build()?; 599 + 600 + let response = client.get(&url).send()?; 601 + 602 + if !response.status().is_success() { 603 + return Err(format!( 604 + "HTTP error: {} - {}", 605 + response.status(), 606 + response.text().unwrap_or_default() 607 + ) 608 + .into()); 609 + } 610 + 611 + let audit_log: Vec<AuditLogEntry> = response.json()?; 612 + Ok(audit_log) 613 + }
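plc-fork-viz labels each branch with the priority of its signer by trial verification: it walks `rotation_keys` in order and reports the index of the first key whose signature check passes (the diff does this with `VerifyingKey::from_did_key` and `Operation::verify`). A small sketch of that lookup with the ECDSA check abstracted into a caller-supplied closure; the helper name is illustrative, not the crate's API:

```rust
/// Return the index of the highest-priority rotation key that verifies the
/// operation's signature, or None if no key matches. `verifies` stands in for
/// the real check (parse the did:key, then verify the ECDSA signature).
fn signing_key_index<K>(rotation_keys: &[K], verifies: impl Fn(&K) -> bool) -> Option<usize> {
    rotation_keys.iter().position(verifies)
}

fn main() {
    let rotation_keys = ["did:key:primary-example", "did:key:backup-example"];
    // Pretend only the backup key verifies this particular signature.
    let index = signing_key_index(&rotation_keys, |key| *key == "did:key:backup-example");
    assert_eq!(index, Some(1));
    println!("signed by rotation_key[{}]", index.unwrap());
}
```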
+1 -1
src/lib.rs
···
 fn test_library_info() {
     let info = library_info();
     assert!(info.contains("atproto-plc"));
-    assert!(info.contains("0.1.0"));
+    assert!(info.contains("0.2.0"));
 }

 #[test]
+943 -4
src/validation.rs
··· 5 5 use crate::error::{PlcError, Result}; 6 6 use crate::operations::Operation; 7 7 use chrono::{DateTime, Duration, Utc}; 8 + use std::collections::HashMap; 8 9 9 10 /// Recovery window duration (72 hours) 10 11 const RECOVERY_WINDOW_HOURS: i64 = 72; 11 12 13 + /// Represents an operation with its server-assigned timestamp 14 + #[derive(Debug, Clone)] 15 + pub struct OperationWithTimestamp { 16 + /// The operation itself 17 + pub operation: Operation, 18 + /// Server-assigned timestamp when the operation was received 19 + pub timestamp: DateTime<Utc>, 20 + } 21 + 22 + /// Represents a fork point where multiple operations reference the same prev CID 23 + #[derive(Debug)] 24 + struct ForkPoint { 25 + /// The prev CID that multiple operations reference 26 + #[allow(dead_code)] 27 + prev_cid: String, 28 + /// Competing operations at this fork point, with their timestamps and signing key indices 29 + operations: Vec<(Operation, DateTime<Utc>, usize)>, 30 + } 31 + 32 + impl ForkPoint { 33 + /// Create a new fork point 34 + fn new(prev_cid: String) -> Self { 35 + Self { 36 + prev_cid, 37 + operations: Vec::new(), 38 + } 39 + } 40 + 41 + /// Add an operation to this fork point 42 + fn add_operation(&mut self, operation: Operation, timestamp: DateTime<Utc>, key_index: usize) { 43 + self.operations.push((operation, timestamp, key_index)); 44 + } 45 + 46 + /// Resolve this fork point and return the canonical operation 47 + /// 48 + /// Resolution algorithm: 49 + /// 1. Sort operations by timestamp (first-received wins by default) 50 + /// 2. Check if any later-received operation with higher priority can invalidate within 72 hours 51 + /// 3. Return the winning operation 52 + fn resolve(&self) -> Result<Operation> { 53 + if self.operations.is_empty() { 54 + return Err(PlcError::ForkResolutionError( 55 + "Fork point has no operations".to_string(), 56 + )); 57 + } 58 + 59 + // If only one operation, it wins by default 60 + if self.operations.len() == 1 { 61 + return Ok(self.operations[0].0.clone()); 62 + } 63 + 64 + // Sort by timestamp - first-received wins by default 65 + let mut sorted = self.operations.clone(); 66 + sorted.sort_by_key(|(_, timestamp, _)| *timestamp); 67 + 68 + // The first operation in chronological order is the default winner 69 + let (mut canonical_op, mut canonical_ts, mut canonical_key_idx) = sorted[0].clone(); 70 + 71 + // Check if any later-received operation can invalidate the current canonical 72 + for (competing_op, competing_ts, competing_key_idx) in &sorted[1..] 
{ 73 + // Can only invalidate if the competing operation has higher priority (lower index) 74 + if *competing_key_idx < canonical_key_idx { 75 + // Check if within recovery window from the canonical operation 76 + let time_diff = *competing_ts - canonical_ts; 77 + if time_diff <= Duration::hours(RECOVERY_WINDOW_HOURS) 78 + && time_diff >= Duration::zero() 79 + { 80 + // This higher-priority operation invalidates the canonical one 81 + // Update the canonical to this operation 82 + canonical_op = competing_op.clone(); 83 + canonical_ts = *competing_ts; 84 + canonical_key_idx = *competing_key_idx; 85 + } 86 + } 87 + } 88 + 89 + Ok(canonical_op) 90 + } 91 + } 92 + 93 + /// Builder for constructing the canonical chain from a set of operations with timestamps 94 + struct CanonicalChainBuilder { 95 + /// All operations with timestamps, in the order they were received 96 + operations: Vec<OperationWithTimestamp>, 97 + } 98 + 99 + impl CanonicalChainBuilder { 100 + /// Create a new builder 101 + fn new(operations: Vec<OperationWithTimestamp>) -> Self { 102 + Self { operations } 103 + } 104 + 105 + /// Build the canonical chain by detecting and resolving forks 106 + /// 107 + /// Algorithm: 108 + /// 1. Build a graph of operations by CID 109 + /// 2. Detect fork points (multiple operations with same prev CID) 110 + /// 3. Resolve each fork using rotation key priority and recovery window 111 + /// 4. Build the canonical chain from genesis to tip 112 + fn build(&self) -> Result<Vec<Operation>> { 113 + if self.operations.is_empty() { 114 + return Err(PlcError::EmptyChain); 115 + } 116 + 117 + // Build a map of CID -> (operation, timestamp) for quick lookup 118 + let mut operation_map: HashMap<String, (Operation, DateTime<Utc>)> = HashMap::new(); 119 + for op_with_ts in &self.operations { 120 + let cid = op_with_ts.operation.cid()?; 121 + operation_map.insert(cid, (op_with_ts.operation.clone(), op_with_ts.timestamp)); 122 + } 123 + 124 + // Find the genesis operation (prev = None) 125 + let genesis = self 126 + .operations 127 + .iter() 128 + .find(|op| op.operation.is_genesis()) 129 + .ok_or_else(|| PlcError::FirstOperationNotGenesis)?; 130 + 131 + // Detect fork points - group operations by their prev CID 132 + let mut prev_to_operations: HashMap<String, Vec<(Operation, DateTime<Utc>)>> = 133 + HashMap::new(); 134 + 135 + for op_with_ts in &self.operations { 136 + if let Some(prev_cid) = op_with_ts.operation.prev() { 137 + prev_to_operations 138 + .entry(prev_cid.to_string()) 139 + .or_default() 140 + .push((op_with_ts.operation.clone(), op_with_ts.timestamp)); 141 + } 142 + } 143 + 144 + // Build fork points for any prev CID with multiple operations 145 + let mut fork_points: HashMap<String, ForkPoint> = HashMap::new(); 146 + for (prev_cid, operations) in &prev_to_operations { 147 + if operations.len() > 1 { 148 + // This is a fork point - multiple operations reference the same prev 149 + let mut fork_point = ForkPoint::new(prev_cid.clone()); 150 + 151 + // For each operation, determine which rotation key signed it 152 + // We need to look at the state at the prev operation 153 + for (operation, timestamp) in operations { 154 + // Get the state at the prev operation to find rotation keys 155 + let rotation_keys = if let Some((prev_op, _)) = operation_map.get(prev_cid) { 156 + // Build state up to prev operation to get its rotation keys 157 + self.get_state_at_operation(prev_op)? 
158 + } else { 159 + PlcState::new() 160 + }; 161 + 162 + // Find which rotation key signed this operation 163 + let key_index = self.find_signing_key_index(operation, &rotation_keys)?; 164 + fork_point.add_operation(operation.clone(), *timestamp, key_index); 165 + } 166 + 167 + fork_points.insert(prev_cid.clone(), fork_point); 168 + } 169 + } 170 + 171 + // Build the canonical chain by starting from genesis and following the canonical path 172 + let mut canonical_chain = vec![genesis.operation.clone()]; 173 + let mut current_cid = genesis.operation.cid()?; 174 + 175 + // Follow the chain until we reach the tip 176 + loop { 177 + // Check if there's a fork at this point 178 + if let Some(fork_point) = fork_points.get(&current_cid) { 179 + // Resolve the fork and get the canonical operation 180 + let canonical_op = fork_point.resolve()?; 181 + let next_cid = canonical_op.cid()?; 182 + canonical_chain.push(canonical_op); 183 + current_cid = next_cid; 184 + } else if let Some(operations) = prev_to_operations.get(&current_cid) { 185 + // No fork, just a single operation 186 + if operations.len() == 1 { 187 + let (operation, _) = &operations[0]; 188 + let next_cid = operation.cid()?; 189 + canonical_chain.push(operation.clone()); 190 + current_cid = next_cid; 191 + } else { 192 + // This shouldn't happen - we should have detected this as a fork 193 + return Err(PlcError::ForkResolutionError( 194 + "Unexpected multiple operations without fork point".to_string(), 195 + )); 196 + } 197 + } else { 198 + // No more operations - we've reached the tip 199 + break; 200 + } 201 + } 202 + 203 + Ok(canonical_chain) 204 + } 205 + 206 + /// Find the index of the rotation key that signed this operation 207 + fn find_signing_key_index( 208 + &self, 209 + operation: &Operation, 210 + state: &PlcState, 211 + ) -> Result<usize> { 212 + if state.rotation_keys.is_empty() { 213 + return Err(PlcError::InvalidRotationKeys( 214 + "No rotation keys available for verification".to_string(), 215 + )); 216 + } 217 + 218 + // Try each rotation key and return the index of the first one that verifies 219 + for (index, key_did) in state.rotation_keys.iter().enumerate() { 220 + let verifying_key = VerifyingKey::from_did_key(key_did)?; 221 + if operation.verify(&[verifying_key]).is_ok() { 222 + return Ok(index); 223 + } 224 + } 225 + 226 + // No key verified the signature 227 + Err(PlcError::SignatureVerificationFailed) 228 + } 229 + 230 + /// Get the state at a specific operation 231 + /// 232 + /// This reconstructs the state by applying all operations from genesis up to 233 + /// and including the specified operation. 234 + fn get_state_at_operation(&self, target_operation: &Operation) -> Result<PlcState> { 235 + let mut state = PlcState::new(); 236 + let target_cid = target_operation.cid()?; 237 + 238 + // Build the chain up to the target operation 239 + // We need to traverse from genesis to the target 240 + for op_with_ts in &self.operations { 241 + let op = &op_with_ts.operation; 242 + 243 + // Apply this operation to the state 244 + match op { 245 + Operation::PlcOperation { 246 + rotation_keys, 247 + verification_methods, 248 + also_known_as, 249 + services, 250 + .. 251 + } => { 252 + state.rotation_keys = rotation_keys.clone(); 253 + state.verification_methods = verification_methods.clone(); 254 + state.also_known_as = also_known_as.clone(); 255 + state.services = services.clone(); 256 + } 257 + Operation::PlcTombstone { .. } => { 258 + state = PlcState::new(); 259 + } 260 + Operation::LegacyCreate { .. 
} => { 261 + return Err(PlcError::InvalidOperationType( 262 + "Legacy create operations not fully supported".to_string(), 263 + )); 264 + } 265 + } 266 + 267 + // Check if this is the target operation 268 + if op.cid()? == target_cid { 269 + break; 270 + } 271 + } 272 + 273 + Ok(state) 274 + } 275 + } 276 + 12 277 /// Operation chain validator 13 278 pub struct OperationChainValidator; 14 279 ··· 122 387 /// 123 388 /// This handles the recovery mechanism where operations signed by higher-priority 124 389 /// rotation keys can invalidate later operations if submitted within 72 hours. 390 + /// 391 + /// # Arguments 392 + /// 393 + /// * `operations` - All operations in the audit log (may contain forks) 394 + /// * `timestamps` - Server-assigned timestamps for each operation 395 + /// 396 + /// # Returns 397 + /// 398 + /// The final state after applying the canonical chain (after fork resolution) 399 + /// 400 + /// # Errors 401 + /// 402 + /// Returns errors if: 403 + /// - Operations and timestamps arrays have different lengths 404 + /// - Chain is empty 405 + /// - First operation is not genesis 406 + /// - Fork resolution fails 407 + /// - Any signature is invalid 408 + /// - Any operation violates constraints 125 409 pub fn validate_chain_with_forks( 126 410 operations: &[Operation], 127 411 timestamps: &[DateTime<Utc>], ··· 132 416 )); 133 417 } 134 418 135 - // For now, we do basic validation without fork resolution 136 - // Full fork resolution would require tracking all possible forks 137 - // and selecting the canonical chain based on rotation key priority 138 - Self::validate_chain(operations) 419 + if operations.is_empty() { 420 + return Err(PlcError::EmptyChain); 421 + } 422 + 423 + // Build operations with timestamps 424 + let operations_with_timestamps: Vec<OperationWithTimestamp> = operations 425 + .iter() 426 + .zip(timestamps.iter()) 427 + .map(|(op, ts)| OperationWithTimestamp { 428 + operation: op.clone(), 429 + timestamp: *ts, 430 + }) 431 + .collect(); 432 + 433 + // Use the canonical chain builder to resolve forks 434 + let builder = CanonicalChainBuilder::new(operations_with_timestamps); 435 + let canonical_chain = builder.build()?; 436 + 437 + // Validate the canonical chain 438 + Self::validate_chain(&canonical_chain) 139 439 } 140 440 141 441 /// Check if an operation is within the recovery window relative to another operation ··· 370 670 #[test] 371 671 fn test_validate_chain_empty() { 372 672 assert!(OperationChainValidator::validate_chain(&[]).is_err()); 673 + } 674 + 675 + // ======================================================================== 676 + // Fork Resolution Tests 677 + // ======================================================================== 678 + 679 + /// Test simple fork resolution where higher priority key wins within recovery window 680 + #[test] 681 + fn test_fork_resolution_priority_within_window() { 682 + // Create two rotation keys with different priorities 683 + let primary_key = SigningKey::generate_p256(); // Index 0 - highest priority 684 + let backup_key = SigningKey::generate_k256(); // Index 1 - lower priority 685 + 686 + let rotation_keys = vec![primary_key.to_did_key(), backup_key.to_did_key()]; 687 + 688 + // Create genesis operation 689 + let genesis = Operation::new_genesis( 690 + rotation_keys.clone(), 691 + HashMap::new(), 692 + vec![], 693 + HashMap::new(), 694 + ) 695 + .sign(&primary_key) 696 + .unwrap(); 697 + 698 + let genesis_cid = genesis.cid().unwrap(); 699 + let genesis_time = Utc::now(); 700 + 701 + // Create 
two competing operations that both reference genesis 702 + // Operation A: signed by backup key (lower priority) 703 + let mut services_a = HashMap::new(); 704 + services_a.insert( 705 + "pds".to_string(), 706 + ServiceEndpoint { 707 + service_type: "AtprotoPersonalDataServer".to_string(), 708 + endpoint: "https://pds-a.example.com".to_string(), 709 + }, 710 + ); 711 + 712 + let op_a = Operation::new_update( 713 + rotation_keys.clone(), 714 + HashMap::new(), 715 + vec![], 716 + services_a, 717 + genesis_cid.clone(), 718 + ) 719 + .sign(&backup_key) 720 + .unwrap(); 721 + 722 + let op_a_time = genesis_time + Duration::hours(1); 723 + 724 + // Operation B: signed by primary key (higher priority), arrives 24 hours after A 725 + let mut services_b = HashMap::new(); 726 + services_b.insert( 727 + "pds".to_string(), 728 + ServiceEndpoint { 729 + service_type: "AtprotoPersonalDataServer".to_string(), 730 + endpoint: "https://pds-b.example.com".to_string(), 731 + }, 732 + ); 733 + 734 + let op_b = Operation::new_update( 735 + rotation_keys.clone(), 736 + HashMap::new(), 737 + vec![], 738 + services_b, 739 + genesis_cid, 740 + ) 741 + .sign(&primary_key) 742 + .unwrap(); 743 + 744 + let op_b_time = op_a_time + Duration::hours(24); // Within 72-hour window 745 + 746 + // Build operations and timestamps arrays 747 + let operations = vec![genesis.clone(), op_a.clone(), op_b.clone()]; 748 + let timestamps = vec![genesis_time, op_a_time, op_b_time]; 749 + 750 + // Validate with fork resolution 751 + let result = OperationChainValidator::validate_chain_with_forks(&operations, &timestamps); 752 + assert!(result.is_ok()); 753 + 754 + let state = result.unwrap(); 755 + 756 + // Operation B should win because it has higher priority (lower index) 757 + // and was received within the 72-hour recovery window 758 + assert_eq!( 759 + state.services.get("pds").unwrap().endpoint, 760 + "https://pds-b.example.com" 761 + ); 762 + } 763 + 764 + /// Test fork resolution where lower priority operation wins after recovery window expires 765 + #[test] 766 + fn test_fork_resolution_priority_outside_window() { 767 + let primary_key = SigningKey::generate_p256(); // Index 0 768 + let backup_key = SigningKey::generate_k256(); // Index 1 769 + 770 + let rotation_keys = vec![primary_key.to_did_key(), backup_key.to_did_key()]; 771 + 772 + let genesis = Operation::new_genesis( 773 + rotation_keys.clone(), 774 + HashMap::new(), 775 + vec![], 776 + HashMap::new(), 777 + ) 778 + .sign(&primary_key) 779 + .unwrap(); 780 + 781 + let genesis_cid = genesis.cid().unwrap(); 782 + let genesis_time = Utc::now(); 783 + 784 + // Operation A: signed by backup key (lower priority) 785 + let mut services_a = HashMap::new(); 786 + services_a.insert( 787 + "pds".to_string(), 788 + ServiceEndpoint { 789 + service_type: "AtprotoPersonalDataServer".to_string(), 790 + endpoint: "https://pds-a.example.com".to_string(), 791 + }, 792 + ); 793 + 794 + let op_a = Operation::new_update( 795 + rotation_keys.clone(), 796 + HashMap::new(), 797 + vec![], 798 + services_a, 799 + genesis_cid.clone(), 800 + ) 801 + .sign(&backup_key) 802 + .unwrap(); 803 + 804 + let op_a_time = genesis_time + Duration::hours(1); 805 + 806 + // Operation B: signed by primary key, arrives 100 hours after A (outside 72-hour window) 807 + let mut services_b = HashMap::new(); 808 + services_b.insert( 809 + "pds".to_string(), 810 + ServiceEndpoint { 811 + service_type: "AtprotoPersonalDataServer".to_string(), 812 + endpoint: "https://pds-b.example.com".to_string(), 813 + }, 814 + ); 815 
+ 816 + let op_b = Operation::new_update( 817 + rotation_keys.clone(), 818 + HashMap::new(), 819 + vec![], 820 + services_b, 821 + genesis_cid, 822 + ) 823 + .sign(&primary_key) 824 + .unwrap(); 825 + 826 + let op_b_time = op_a_time + Duration::hours(100); // Outside 72-hour window 827 + 828 + let operations = vec![genesis.clone(), op_a.clone(), op_b.clone()]; 829 + let timestamps = vec![genesis_time, op_a_time, op_b_time]; 830 + 831 + let result = OperationChainValidator::validate_chain_with_forks(&operations, &timestamps); 832 + assert!(result.is_ok()); 833 + 834 + let state = result.unwrap(); 835 + 836 + // Operation A should win because even though B has higher priority, 837 + // it arrived outside the 72-hour recovery window 838 + assert_eq!( 839 + state.services.get("pds").unwrap().endpoint, 840 + "https://pds-a.example.com" 841 + ); 842 + } 843 + 844 + /// Test multiple forks at different points in the chain 845 + #[test] 846 + fn test_fork_resolution_multiple_forks() { 847 + let key1 = SigningKey::generate_p256(); 848 + let key2 = SigningKey::generate_k256(); 849 + let key3 = SigningKey::generate_p256(); 850 + 851 + let rotation_keys = vec![key1.to_did_key(), key2.to_did_key(), key3.to_did_key()]; 852 + 853 + // Genesis 854 + let genesis = Operation::new_genesis( 855 + rotation_keys.clone(), 856 + HashMap::new(), 857 + vec![], 858 + HashMap::new(), 859 + ) 860 + .sign(&key1) 861 + .unwrap(); 862 + 863 + let genesis_cid = genesis.cid().unwrap(); 864 + let genesis_time = Utc::now(); 865 + 866 + // First fork at genesis 867 + let mut services_1a = HashMap::new(); 868 + services_1a.insert( 869 + "pds".to_string(), 870 + ServiceEndpoint { 871 + service_type: "AtprotoPersonalDataServer".to_string(), 872 + endpoint: "https://fork1a.example.com".to_string(), 873 + }, 874 + ); 875 + 876 + let op_1a = Operation::new_update( 877 + rotation_keys.clone(), 878 + HashMap::new(), 879 + vec![], 880 + services_1a, 881 + genesis_cid.clone(), 882 + ) 883 + .sign(&key2) 884 + .unwrap(); 885 + 886 + let op_1a_time = genesis_time + Duration::hours(1); 887 + 888 + // Second operation at same fork (higher priority, within window) 889 + let mut services_1b = HashMap::new(); 890 + services_1b.insert( 891 + "pds".to_string(), 892 + ServiceEndpoint { 893 + service_type: "AtprotoPersonalDataServer".to_string(), 894 + endpoint: "https://fork1b.example.com".to_string(), 895 + }, 896 + ); 897 + 898 + let op_1b = Operation::new_update( 899 + rotation_keys.clone(), 900 + HashMap::new(), 901 + vec![], 902 + services_1b, 903 + genesis_cid, 904 + ) 905 + .sign(&key1) 906 + .unwrap(); 907 + 908 + let op_1b_time = op_1a_time + Duration::hours(2); 909 + 910 + // Next operation continues from op_1b (the winner) 911 + let op_1b_cid = op_1b.cid().unwrap(); 912 + 913 + let mut services_2 = HashMap::new(); 914 + services_2.insert( 915 + "pds".to_string(), 916 + ServiceEndpoint { 917 + service_type: "AtprotoPersonalDataServer".to_string(), 918 + endpoint: "https://continuation.example.com".to_string(), 919 + }, 920 + ); 921 + 922 + let op_2 = Operation::new_update( 923 + rotation_keys.clone(), 924 + HashMap::new(), 925 + vec![], 926 + services_2, 927 + op_1b_cid, 928 + ) 929 + .sign(&key1) 930 + .unwrap(); 931 + 932 + let op_2_time = op_1b_time + Duration::hours(1); 933 + 934 + let operations = vec![ 935 + genesis.clone(), 936 + op_1a.clone(), 937 + op_1b.clone(), 938 + op_2.clone(), 939 + ]; 940 + let timestamps = vec![genesis_time, op_1a_time, op_1b_time, op_2_time]; 941 + 942 + let result = 
OperationChainValidator::validate_chain_with_forks(&operations, &timestamps); 943 + assert!(result.is_ok()); 944 + 945 + let state = result.unwrap(); 946 + 947 + // The final state should be from op_2, which continues from op_1b 948 + assert_eq!( 949 + state.services.get("pds").unwrap().endpoint, 950 + "https://continuation.example.com" 951 + ); 952 + } 953 + 954 + /// Test fork with three competing operations 955 + #[test] 956 + fn test_fork_resolution_three_way_fork() { 957 + let key1 = SigningKey::generate_p256(); // Highest priority 958 + let key2 = SigningKey::generate_k256(); // Medium priority 959 + let key3 = SigningKey::generate_p256(); // Lowest priority 960 + 961 + let rotation_keys = vec![key1.to_did_key(), key2.to_did_key(), key3.to_did_key()]; 962 + 963 + let genesis = Operation::new_genesis( 964 + rotation_keys.clone(), 965 + HashMap::new(), 966 + vec![], 967 + HashMap::new(), 968 + ) 969 + .sign(&key1) 970 + .unwrap(); 971 + 972 + let genesis_cid = genesis.cid().unwrap(); 973 + let genesis_time = Utc::now(); 974 + 975 + // Three competing operations 976 + let mut services_a = HashMap::new(); 977 + services_a.insert( 978 + "pds".to_string(), 979 + ServiceEndpoint { 980 + service_type: "AtprotoPersonalDataServer".to_string(), 981 + endpoint: "https://op-a.example.com".to_string(), 982 + }, 983 + ); 984 + 985 + let op_a = Operation::new_update( 986 + rotation_keys.clone(), 987 + HashMap::new(), 988 + vec![], 989 + services_a, 990 + genesis_cid.clone(), 991 + ) 992 + .sign(&key3) 993 + .unwrap(); // Lowest priority, first to arrive 994 + 995 + let op_a_time = genesis_time + Duration::hours(1); 996 + 997 + let mut services_b = HashMap::new(); 998 + services_b.insert( 999 + "pds".to_string(), 1000 + ServiceEndpoint { 1001 + service_type: "AtprotoPersonalDataServer".to_string(), 1002 + endpoint: "https://op-b.example.com".to_string(), 1003 + }, 1004 + ); 1005 + 1006 + let op_b = Operation::new_update( 1007 + rotation_keys.clone(), 1008 + HashMap::new(), 1009 + vec![], 1010 + services_b, 1011 + genesis_cid.clone(), 1012 + ) 1013 + .sign(&key2) 1014 + .unwrap(); // Medium priority 1015 + 1016 + let op_b_time = op_a_time + Duration::hours(2); 1017 + 1018 + let mut services_c = HashMap::new(); 1019 + services_c.insert( 1020 + "pds".to_string(), 1021 + ServiceEndpoint { 1022 + service_type: "AtprotoPersonalDataServer".to_string(), 1023 + endpoint: "https://op-c.example.com".to_string(), 1024 + }, 1025 + ); 1026 + 1027 + let op_c = Operation::new_update( 1028 + rotation_keys.clone(), 1029 + HashMap::new(), 1030 + vec![], 1031 + services_c, 1032 + genesis_cid, 1033 + ) 1034 + .sign(&key1) 1035 + .unwrap(); // Highest priority, within window 1036 + 1037 + let op_c_time = op_a_time + Duration::hours(10); 1038 + 1039 + let operations = vec![ 1040 + genesis.clone(), 1041 + op_a.clone(), 1042 + op_b.clone(), 1043 + op_c.clone(), 1044 + ]; 1045 + let timestamps = vec![genesis_time, op_a_time, op_b_time, op_c_time]; 1046 + 1047 + let result = OperationChainValidator::validate_chain_with_forks(&operations, &timestamps); 1048 + assert!(result.is_ok()); 1049 + 1050 + let state = result.unwrap(); 1051 + 1052 + // Operation C should win (highest priority, within window) 1053 + assert_eq!( 1054 + state.services.get("pds").unwrap().endpoint, 1055 + "https://op-c.example.com" 1056 + ); 1057 + } 1058 + 1059 + /// Test no fork - linear chain should work as before 1060 + #[test] 1061 + fn test_fork_resolution_no_fork() { 1062 + let key = SigningKey::generate_p256(); 1063 + let rotation_keys = 
vec![key.to_did_key()]; 1064 + 1065 + let genesis = Operation::new_genesis( 1066 + rotation_keys.clone(), 1067 + HashMap::new(), 1068 + vec![], 1069 + HashMap::new(), 1070 + ) 1071 + .sign(&key) 1072 + .unwrap(); 1073 + 1074 + let genesis_cid = genesis.cid().unwrap(); 1075 + let genesis_time = Utc::now(); 1076 + 1077 + let mut services = HashMap::new(); 1078 + services.insert( 1079 + "pds".to_string(), 1080 + ServiceEndpoint { 1081 + service_type: "AtprotoPersonalDataServer".to_string(), 1082 + endpoint: "https://pds.example.com".to_string(), 1083 + }, 1084 + ); 1085 + 1086 + let op1 = Operation::new_update( 1087 + rotation_keys.clone(), 1088 + HashMap::new(), 1089 + vec![], 1090 + services.clone(), 1091 + genesis_cid, 1092 + ) 1093 + .sign(&key) 1094 + .unwrap(); 1095 + 1096 + let op1_time = genesis_time + Duration::hours(1); 1097 + 1098 + let operations = vec![genesis.clone(), op1.clone()]; 1099 + let timestamps = vec![genesis_time, op1_time]; 1100 + 1101 + let result = OperationChainValidator::validate_chain_with_forks(&operations, &timestamps); 1102 + assert!(result.is_ok()); 1103 + 1104 + let state = result.unwrap(); 1105 + assert_eq!( 1106 + state.services.get("pds").unwrap().endpoint, 1107 + "https://pds.example.com" 1108 + ); 1109 + } 1110 + 1111 + /// Test fork resolution with rotation key changes 1112 + #[test] 1113 + fn test_fork_resolution_with_key_rotation() { 1114 + let key1 = SigningKey::generate_p256(); 1115 + let key2 = SigningKey::generate_k256(); 1116 + let key3 = SigningKey::generate_p256(); 1117 + 1118 + // Initial rotation keys 1119 + let rotation_keys_v1 = vec![key1.to_did_key(), key2.to_did_key()]; 1120 + 1121 + let genesis = Operation::new_genesis( 1122 + rotation_keys_v1.clone(), 1123 + HashMap::new(), 1124 + vec![], 1125 + HashMap::new(), 1126 + ) 1127 + .sign(&key1) 1128 + .unwrap(); 1129 + 1130 + let genesis_cid = genesis.cid().unwrap(); 1131 + let genesis_time = Utc::now(); 1132 + 1133 + // Update rotation keys in first operation 1134 + let rotation_keys_v2 = vec![key1.to_did_key(), key3.to_did_key()]; 1135 + 1136 + let op1 = Operation::new_update( 1137 + rotation_keys_v2.clone(), 1138 + HashMap::new(), 1139 + vec![], 1140 + HashMap::new(), 1141 + genesis_cid, 1142 + ) 1143 + .sign(&key1) 1144 + .unwrap(); 1145 + 1146 + let op1_cid = op1.cid().unwrap(); 1147 + let op1_time = genesis_time + Duration::hours(1); 1148 + 1149 + // Create a fork at op1 - both operations use the new rotation keys 1150 + let mut services_a = HashMap::new(); 1151 + services_a.insert( 1152 + "pds".to_string(), 1153 + ServiceEndpoint { 1154 + service_type: "AtprotoPersonalDataServer".to_string(), 1155 + endpoint: "https://op-a.example.com".to_string(), 1156 + }, 1157 + ); 1158 + 1159 + let op_a = Operation::new_update( 1160 + rotation_keys_v2.clone(), 1161 + HashMap::new(), 1162 + vec![], 1163 + services_a, 1164 + op1_cid.clone(), 1165 + ) 1166 + .sign(&key3) 1167 + .unwrap(); // Index 1 in new keys 1168 + 1169 + let op_a_time = op1_time + Duration::hours(1); 1170 + 1171 + let mut services_b = HashMap::new(); 1172 + services_b.insert( 1173 + "pds".to_string(), 1174 + ServiceEndpoint { 1175 + service_type: "AtprotoPersonalDataServer".to_string(), 1176 + endpoint: "https://op-b.example.com".to_string(), 1177 + }, 1178 + ); 1179 + 1180 + let op_b = Operation::new_update( 1181 + rotation_keys_v2.clone(), 1182 + HashMap::new(), 1183 + vec![], 1184 + services_b, 1185 + op1_cid, 1186 + ) 1187 + .sign(&key1) 1188 + .unwrap(); // Index 0 in new keys (higher priority) 1189 + 1190 + let op_b_time = 
op_a_time + Duration::hours(2); 1191 + 1192 + let operations = vec![ 1193 + genesis.clone(), 1194 + op1.clone(), 1195 + op_a.clone(), 1196 + op_b.clone(), 1197 + ]; 1198 + let timestamps = vec![genesis_time, op1_time, op_a_time, op_b_time]; 1199 + 1200 + let result = OperationChainValidator::validate_chain_with_forks(&operations, &timestamps); 1201 + assert!(result.is_ok()); 1202 + 1203 + let state = result.unwrap(); 1204 + 1205 + // Operation B should win (signed by key1 which is index 0 in the new rotation keys) 1206 + assert_eq!( 1207 + state.services.get("pds").unwrap().endpoint, 1208 + "https://op-b.example.com" 1209 + ); 1210 + } 1211 + 1212 + /// Test that operations with mismatched timestamps and operations fail 1213 + #[test] 1214 + fn test_fork_resolution_mismatched_lengths() { 1215 + let key = SigningKey::generate_p256(); 1216 + let rotation_keys = vec![key.to_did_key()]; 1217 + 1218 + let genesis = Operation::new_genesis( 1219 + rotation_keys, 1220 + HashMap::new(), 1221 + vec![], 1222 + HashMap::new(), 1223 + ) 1224 + .sign(&key) 1225 + .unwrap(); 1226 + 1227 + let operations = vec![genesis]; 1228 + let timestamps = vec![Utc::now(), Utc::now()]; // Different length 1229 + 1230 + let result = OperationChainValidator::validate_chain_with_forks(&operations, &timestamps); 1231 + assert!(result.is_err()); 1232 + } 1233 + 1234 + /// Test recovery window boundary (exactly 72 hours) 1235 + #[test] 1236 + fn test_fork_resolution_recovery_window_boundary() { 1237 + let primary_key = SigningKey::generate_p256(); 1238 + let backup_key = SigningKey::generate_k256(); 1239 + 1240 + let rotation_keys = vec![primary_key.to_did_key(), backup_key.to_did_key()]; 1241 + 1242 + let genesis = Operation::new_genesis( 1243 + rotation_keys.clone(), 1244 + HashMap::new(), 1245 + vec![], 1246 + HashMap::new(), 1247 + ) 1248 + .sign(&primary_key) 1249 + .unwrap(); 1250 + 1251 + let genesis_cid = genesis.cid().unwrap(); 1252 + let genesis_time = Utc::now(); 1253 + 1254 + // Operation A: signed by backup key 1255 + let mut services_a = HashMap::new(); 1256 + services_a.insert( 1257 + "pds".to_string(), 1258 + ServiceEndpoint { 1259 + service_type: "AtprotoPersonalDataServer".to_string(), 1260 + endpoint: "https://op-a.example.com".to_string(), 1261 + }, 1262 + ); 1263 + 1264 + let op_a = Operation::new_update( 1265 + rotation_keys.clone(), 1266 + HashMap::new(), 1267 + vec![], 1268 + services_a, 1269 + genesis_cid.clone(), 1270 + ) 1271 + .sign(&backup_key) 1272 + .unwrap(); 1273 + 1274 + let op_a_time = genesis_time + Duration::hours(1); 1275 + 1276 + // Operation B: exactly at 72-hour boundary (should still be within window) 1277 + let mut services_b = HashMap::new(); 1278 + services_b.insert( 1279 + "pds".to_string(), 1280 + ServiceEndpoint { 1281 + service_type: "AtprotoPersonalDataServer".to_string(), 1282 + endpoint: "https://op-b.example.com".to_string(), 1283 + }, 1284 + ); 1285 + 1286 + let op_b = Operation::new_update( 1287 + rotation_keys.clone(), 1288 + HashMap::new(), 1289 + vec![], 1290 + services_b, 1291 + genesis_cid, 1292 + ) 1293 + .sign(&primary_key) 1294 + .unwrap(); 1295 + 1296 + // Exactly 72 hours after op_a 1297 + let op_b_time = op_a_time + Duration::hours(72); 1298 + 1299 + let operations = vec![genesis.clone(), op_a.clone(), op_b.clone()]; 1300 + let timestamps = vec![genesis_time, op_a_time, op_b_time]; 1301 + 1302 + let result = OperationChainValidator::validate_chain_with_forks(&operations, &timestamps); 1303 + assert!(result.is_ok()); 1304 + 1305 + let state = result.unwrap(); 
1306 + 1307 + // At exactly 72 hours, the higher priority operation should still win 1308 + assert_eq!( 1309 + state.services.get("pds").unwrap().endpoint, 1310 + "https://op-b.example.com" 1311 + ); 373 1312 } 374 1313 }
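
The new `validate_chain_with_forks` entry point zips the audit log's operations with their server-assigned timestamps, lets `CanonicalChainBuilder` pick the winning branch by rotation-key priority and the 72-hour recovery window, and then runs the ordinary `validate_chain` over only that branch. A caller that also wants the resolved branch itself (as `plc-fork-viz` does for display) could compose the same pieces directly. The sketch below is illustrative and not part of the diff: `resolve_and_validate` and `DocumentState` are hypothetical names, the crate-internal items (`Operation`, `PlcError`, `OperationWithTimestamp`, `CanonicalChainBuilder`, `OperationChainValidator`) are assumed to be in scope, and `build()` is assumed to yield a `Vec<Operation>`, matching how the diff passes its result to `validate_chain`.

```rust
use chrono::{DateTime, Utc};

// Illustrative helper, not part of the diff. `DocumentState` stands in for
// whatever state type `OperationChainValidator::validate_chain` returns.
fn resolve_and_validate(
    audit_log: &[(Operation, DateTime<Utc>)],
) -> Result<(Vec<Operation>, DocumentState), PlcError> {
    // Pair each operation with its server-assigned timestamp, as
    // `validate_chain_with_forks` does internally.
    let with_timestamps: Vec<OperationWithTimestamp> = audit_log
        .iter()
        .map(|(op, ts)| OperationWithTimestamp {
            operation: op.clone(),
            timestamp: *ts,
        })
        .collect();

    // Fork resolution: keep only the branch favored by rotation-key
    // priority within the 72-hour recovery window.
    let canonical: Vec<Operation> = CanonicalChainBuilder::new(with_timestamps).build()?;

    // Signature and constraint checks run on the canonical branch only.
    let state = OperationChainValidator::validate_chain(&canonical)?;
    Ok((canonical, state))
}
```

From there, inspecting the winning state looks the same as in the tests above, e.g. `state.services.get("pds")` for the surviving PDS endpoint.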
+2 -2
wasm/package-lock.json
··· 1 1 { 2 2 "name": "atproto-plc", 3 - "version": "0.1.0", 3 + "version": "0.2.0", 4 4 "lockfileVersion": 3, 5 5 "requires": true, 6 6 "packages": { 7 7 "": { 8 8 "name": "atproto-plc", 9 - "version": "0.1.0", 9 + "version": "0.2.0", 10 10 "license": "MIT OR Apache-2.0", 11 11 "devDependencies": { 12 12 "@types/node": "^20.0.0"
+1 -1
wasm/package.json
··· 1 1 { 2 2 "name": "atproto-plc", 3 - "version": "0.1.0", 3 + "version": "0.2.0", 4 4 "description": "did-method-plc implementation for ATProto with WASM support", 5 5 "type": "module", 6 6 "main": "index.js",
+300 -31
wasm/plc-audit.js
··· 62 62 try { 63 63 const response = await fetch(url, { 64 64 headers: { 65 - 'User-Agent': 'atproto-plc-audit-wasm/0.1.0', 65 + 'User-Agent': 'atproto-plc-audit-wasm/0.2.0', 66 66 }, 67 67 }); 68 68 ··· 78 78 } 79 79 80 80 /** 81 + * Detect if there are fork points in the audit log 82 + */ 83 + function detectForks(operations) { 84 + const prevCounts = new Map(); 85 + 86 + for (const entry of operations) { 87 + const prev = entry.operation.prev(); 88 + if (prev) { 89 + prevCounts.set(prev, (prevCounts.get(prev) || 0) + 1); 90 + } 91 + } 92 + 93 + // If any prev CID is referenced by more than one operation, there's a fork 94 + return Array.from(prevCounts.values()).some(count => count > 1); 95 + } 96 + 97 + /** 98 + * Build a list of indices that form the canonical chain 99 + */ 100 + function buildCanonicalChainIndices(operations) { 101 + // Build a map of prev CID to operations 102 + const prevToIndices = new Map(); 103 + 104 + for (let i = 0; i < operations.length; i++) { 105 + const prev = operations[i].operation.prev(); 106 + if (prev) { 107 + if (!prevToIndices.has(prev)) { 108 + prevToIndices.set(prev, []); 109 + } 110 + prevToIndices.get(prev).push(i); 111 + } 112 + } 113 + 114 + // Start from genesis and follow the canonical chain 115 + const canonical = []; 116 + 117 + // Find genesis (first operation) 118 + if (operations.length === 0) { 119 + return canonical; 120 + } 121 + 122 + canonical.push(0); 123 + let currentCid = operations[0].cid; 124 + 125 + // Follow the chain, preferring non-nullified operations 126 + while (true) { 127 + const indices = prevToIndices.get(currentCid); 128 + if (!indices || indices.length === 0) { 129 + break; 130 + } 131 + 132 + // Find the first non-nullified operation 133 + const nextIdx = indices.find(idx => !operations[idx].nullified); 134 + if (nextIdx !== undefined) { 135 + canonical.push(nextIdx); 136 + currentCid = operations[nextIdx].cid; 137 + } else { 138 + // All operations at this point are nullified - try to find any operation 139 + if (indices.length > 0) { 140 + canonical.push(indices[0]); 141 + currentCid = operations[indices[0]].cid; 142 + } else { 143 + break; 144 + } 145 + } 146 + } 147 + 148 + return canonical; 149 + } 150 + 151 + /** 152 + * Display the final state after validation 153 + */ 154 + function displayFinalState(finalEntry, rawEntry) { 155 + const rotationKeys = finalEntry.operation.rotationKeys(); 156 + 157 + if (!rotationKeys) { 158 + console.error('❌ Error: Could not extract final state'); 159 + process.exit(1); 160 + } 161 + 162 + if (args.quiet) { 163 + console.log('✅ VALID'); 164 + } else { 165 + console.log('✅ Validation successful!'); 166 + console.log(); 167 + console.log('📄 Final DID State:'); 168 + console.log(' Rotation keys:', rotationKeys.length); 169 + for (let i = 0; i < rotationKeys.length; i++) { 170 + console.log(` [${i}] ${rotationKeys[i]}`); 171 + } 172 + console.log(); 173 + 174 + // Extract additional state from the raw operation 175 + const op = rawEntry.operation; 176 + if (op.verificationMethods) { 177 + const vmKeys = Object.keys(op.verificationMethods); 178 + console.log(' Verification methods:', vmKeys.length); 179 + for (const name of vmKeys) { 180 + console.log(` ${name}: ${op.verificationMethods[name]}`); 181 + } 182 + console.log(); 183 + } 184 + 185 + if (op.alsoKnownAs && op.alsoKnownAs.length > 0) { 186 + console.log(' Also known as:', op.alsoKnownAs.length); 187 + for (const uri of op.alsoKnownAs) { 188 + console.log(` - ${uri}`); 189 + } 190 + console.log(); 191 + } 192 + 193 + 
if (op.services) { 194 + const serviceNames = Object.keys(op.services); 195 + if (serviceNames.length > 0) { 196 + console.log(' Services:', serviceNames.length); 197 + for (const name of serviceNames) { 198 + const service = op.services[name]; 199 + console.log(` ${name}: ${service.endpoint} (${service.type})`); 200 + } 201 + } 202 + } 203 + } 204 + } 205 + 206 + /** 81 207 * Main validation logic 82 208 */ 83 209 async function main() { ··· 154 280 console.log(); 155 281 } 156 282 157 - // Validate the operation chain 283 + // Detect forks and build canonical chain 158 284 if (!args.quiet) { 159 - console.log('🔐 Validating operation chain...'); 285 + console.log('🔐 Analyzing operation chain...'); 160 286 console.log(); 161 287 } 162 288 163 - // Step 1: Validate chain linkage (prev references) 289 + // Detect fork points and nullified operations 290 + const hasForks = detectForks(operations); 291 + const hasNullified = operations.some(e => e.nullified); 292 + 293 + if (hasForks || hasNullified) { 294 + if (!args.quiet) { 295 + if (hasForks) { 296 + console.log('⚠️ Fork detected - multiple operations reference the same prev CID'); 297 + } 298 + if (hasNullified) { 299 + console.log('⚠️ Nullified operations detected - will validate canonical chain only'); 300 + } 301 + console.log(); 302 + } 303 + 304 + // Build canonical chain 305 + if (args.verbose) { 306 + console.log('Step 1: Fork Resolution & Canonical Chain Building'); 307 + console.log('==================================================='); 308 + } 309 + 310 + const canonicalIndices = buildCanonicalChainIndices(operations); 311 + 312 + if (args.verbose) { 313 + console.log(' ✅ Fork resolution complete'); 314 + console.log(' ✅ Canonical chain identified'); 315 + console.log(); 316 + 317 + console.log('Canonical Chain Operations:'); 318 + console.log('==========================='); 319 + 320 + for (const idx of canonicalIndices) { 321 + const entry = operations[idx]; 322 + console.log(` [${idx}] ✅ ${entry.cid} - ${entry.createdAt}`); 323 + } 324 + console.log(); 325 + 326 + if (hasNullified) { 327 + console.log('Nullified/Rejected Operations:'); 328 + console.log('=============================='); 329 + for (let i = 0; i < operations.length; i++) { 330 + const entry = operations[i]; 331 + if (entry.nullified && !canonicalIndices.includes(i)) { 332 + console.log(` [${i}] ❌ ${entry.cid} - ${entry.createdAt} (nullified)`); 333 + const prev = entry.operation.prev(); 334 + if (prev) { 335 + console.log(' Referenced:', prev); 336 + } 337 + } 338 + } 339 + console.log(); 340 + } 341 + } 342 + 343 + // Validate signatures along canonical chain 344 + if (args.verbose) { 345 + console.log('Step 2: Cryptographic Signature Validation'); 346 + console.log('=========================================='); 347 + } 348 + 349 + let currentRotationKeys = []; 350 + 351 + for (const idx of canonicalIndices) { 352 + const entry = operations[idx]; 353 + 354 + // For genesis operation, extract rotation keys 355 + if (idx === 0) { 356 + if (args.verbose) { 357 + console.log(` [${idx}] Genesis operation - extracting rotation keys`); 358 + } 359 + 360 + const rotationKeys = entry.operation.rotationKeys(); 361 + if (rotationKeys) { 362 + currentRotationKeys = rotationKeys; 363 + 364 + if (args.verbose) { 365 + console.log(' Rotation keys:', rotationKeys.length); 366 + for (let j = 0; j < rotationKeys.length; j++) { 367 + console.log(` [${j}] ${rotationKeys[j]}`); 368 + } 369 + console.log(' ⚠️ Genesis signature cannot be verified (bootstrapping trust)'); 370 + } 
371 + } 372 + continue; 373 + } 374 + 375 + if (args.verbose) { 376 + console.log(` [${idx}] Validating signature...`); 377 + console.log(' CID:', entry.cid); 378 + console.log(' Signature:', entry.operation.signature()); 379 + } 380 + 381 + // Validate signature using current rotation keys 382 + if (currentRotationKeys.length > 0) { 383 + if (args.verbose) { 384 + console.log(' Available rotation keys:', currentRotationKeys.length); 385 + for (let j = 0; j < currentRotationKeys.length; j++) { 386 + console.log(` [${j}] ${currentRotationKeys[j]}`); 387 + } 388 + } 389 + 390 + // Parse verifying keys 391 + const verifyingKeys = []; 392 + for (const keyStr of currentRotationKeys) { 393 + try { 394 + verifyingKeys.push(WasmVerifyingKey.fromDidKey(keyStr)); 395 + } catch (error) { 396 + console.error(`Warning: Failed to parse rotation key: ${keyStr}`); 397 + } 398 + } 399 + 400 + if (args.verbose) { 401 + console.log(` Parsed verifying keys: ${verifyingKeys.length}/${currentRotationKeys.length}`); 402 + } 403 + 404 + // Try to verify with each key and track which one worked 405 + try { 406 + const keyIndex = entry.operation.verifyWithKeyIndex(verifyingKeys); 407 + 408 + if (args.verbose) { 409 + console.log(` ✅ Signature verified with rotation key [${keyIndex}]`); 410 + console.log(` ${currentRotationKeys[keyIndex]}`); 411 + } 412 + } catch (error) { 413 + console.error(); 414 + console.error(`❌ Validation failed: Invalid signature at operation ${idx}`); 415 + console.error(' Error:', error.message); 416 + console.error(' CID:', entry.cid); 417 + console.error(` Tried ${verifyingKeys.length} rotation keys, none verified the signature`); 418 + process.exit(1); 419 + } 420 + } 421 + 422 + // Update rotation keys if this operation changes them 423 + const newRotationKeys = entry.operation.rotationKeys(); 424 + if (newRotationKeys) { 425 + const keysChanged = JSON.stringify(newRotationKeys) !== JSON.stringify(currentRotationKeys); 426 + 427 + if (keysChanged) { 428 + if (args.verbose) { 429 + console.log(' 🔄 Rotation keys updated by this operation'); 430 + console.log(' Old keys:', currentRotationKeys.length); 431 + console.log(' New keys:', newRotationKeys.length); 432 + for (let j = 0; j < newRotationKeys.length; j++) { 433 + console.log(` [${j}] ${newRotationKeys[j]}`); 434 + } 435 + } 436 + currentRotationKeys = newRotationKeys; 437 + } 438 + } 439 + } 440 + 441 + if (args.verbose) { 442 + console.log(); 443 + console.log('✅ Cryptographic signature validation complete'); 444 + console.log(); 445 + } 446 + 447 + // Build final state 448 + const finalIdx = canonicalIndices[canonicalIndices.length - 1]; 449 + const finalEntry = operations[finalIdx]; 450 + const finalRawEntry = auditLog[finalIdx]; 451 + displayFinalState(finalEntry, finalRawEntry); 452 + return; 453 + } 454 + 455 + // Simple linear chain validation (no forks or nullified operations) 164 456 if (args.verbose) { 165 - console.log('Step 1: Chain Linkage Validation'); 457 + console.log('Step 1: Linear Chain Validation'); 166 458 console.log('================================'); 167 459 } 168 460 169 461 for (let i = 1; i < operations.length; i++) { 170 - if (operations[i].nullified) { 171 - if (args.verbose) { 172 - console.log(` [${i}] ⊘ Skipped (nullified)`); 173 - } 174 - continue; 175 - } 176 - 177 462 const prevCid = operations[i - 1].cid; 178 463 const expectedPrev = operations[i].operation.prev(); 179 464 ··· 323 608 } 324 609 325 610 // Build final state 326 - const finalEntry = operations.filter(e => !e.nullified).pop(); 327 - 
const finalRotationKeys = finalEntry.operation.rotationKeys(); 328 - 329 - if (finalRotationKeys) { 330 - if (args.quiet) { 331 - console.log('✅ VALID'); 332 - } else { 333 - console.log('✅ Validation successful!'); 334 - console.log(); 335 - console.log('📄 Final DID State:'); 336 - console.log(' Rotation keys:', finalRotationKeys.length); 337 - for (let i = 0; i < finalRotationKeys.length; i++) { 338 - console.log(` [${i}] ${finalRotationKeys[i]}`); 339 - } 340 - } 341 - } else { 342 - console.error('❌ Error: Could not extract final state'); 343 - process.exit(1); 344 - } 611 + const finalEntry = operations[operations.length - 1]; 612 + const finalRawEntry = auditLog[auditLog.length - 1]; 613 + displayFinalState(finalEntry, finalRawEntry); 345 614 346 615 } catch (error) { 347 616 console.error('❌ Fatal error:', error.message);
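
On the WASM/JS side, `detectForks` and `buildCanonicalChainIndices` in `wasm/plc-audit.js` approximate the same resolution using only the data plc.directory already exposes: count how many operations share a `prev` CID to spot forks, then walk forward from genesis preferring entries the directory has not marked `nullified`. A minimal Rust sketch of that logic, with a simplified `LogEntry` stand-in for the audit-log entry type (field names here are illustrative), could look like:

```rust
use std::collections::HashMap;

/// Simplified stand-in for a plc.directory audit-log entry; field names are illustrative.
struct LogEntry {
    cid: String,
    prev: Option<String>,
    nullified: bool,
}

/// True if more than one entry references the same prev CID (a fork),
/// mirroring detectForks() in wasm/plc-audit.js.
fn has_forks(entries: &[LogEntry]) -> bool {
    let mut prev_counts: HashMap<&str, usize> = HashMap::new();
    for entry in entries {
        if let Some(prev) = entry.prev.as_deref() {
            *prev_counts.entry(prev).or_insert(0) += 1;
        }
    }
    prev_counts.values().any(|&count| count > 1)
}

/// Walk forward from genesis (index 0), at each step preferring a
/// non-nullified successor, mirroring buildCanonicalChainIndices().
/// Terminates because hash-linked prev references cannot form cycles.
fn canonical_indices(entries: &[LogEntry]) -> Vec<usize> {
    let mut canonical = Vec::new();
    if entries.is_empty() {
        return canonical;
    }
    canonical.push(0);
    let mut current_cid = entries[0].cid.clone();
    loop {
        // Collect every entry that references the current tip.
        let successors: Vec<usize> = entries
            .iter()
            .enumerate()
            .filter(|(_, e)| e.prev.as_deref() == Some(current_cid.as_str()))
            .map(|(i, _)| i)
            .collect();
        let Some(&first) = successors.first() else {
            break;
        };
        // Prefer the first non-nullified successor; otherwise fall back to any.
        let next = successors
            .iter()
            .copied()
            .find(|&i| !entries[i].nullified)
            .unwrap_or(first);
        canonical.push(next);
        current_cid = entries[next].cid.clone();
    }
    canonical
}
```

Unlike the Rust validator, this walk trusts the directory's `nullified` flags rather than re-deriving key priority and the recovery window itself, which is why the CLI still runs the full signature check along the canonical chain afterwards.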