CLAUDE.md (+19 -9)
···
 - **Sign data**: `cargo run --features clap --bin atproto-identity-sign -- <did_key> <json_file>`
 - **Validate signatures**: `cargo run --features clap --bin atproto-identity-validate -- <did_key> <json_file> <signature>`
 
+#### Attestation Operations
+- **Sign records (inline)**: `cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-sign -- inline <source_record> <signing_key> <metadata_record>`
+- **Sign records (remote)**: `cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-sign -- remote <source_record> <repository_did> <metadata_record>`
+- **Verify records**: `cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-verify -- <record>` (verifies all signatures)
+- **Verify attestation**: `cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-verify -- <record> <attestation>` (verifies specific attestation)
+
 #### Record Operations
-- **Sign records**: `cargo run --features clap --bin atproto-record-sign -- <issuer_did> <signing_key> <record_input> repository=<repo> collection=<collection>`
-- **Verify records**: `cargo run --features clap --bin atproto-record-verify -- <issuer_did> <key> <record_input> repository=<repo> collection=<collection>`
+- **Generate CID**: `cat record.json | cargo run --features clap --bin atproto-record-cid` (reads JSON from stdin, outputs CID)
 
 #### Client Tools
 - **App password auth**: `cargo run --features clap --bin atproto-client-app-password -- <subject> <access_token> <xrpc_path>`
···
 ## Architecture
 
 A comprehensive Rust workspace with multiple crates:
-- **atproto-identity**: Core identity management with 10 modules (resolve, plc, web, model, validation, config, errors, key, storage, storage_lru)
-- **atproto-record**: Record signature operations and validation
+- **atproto-identity**: Core identity management with 11 modules (resolve, plc, web, model, validation, config, errors, key, storage_lru, traits, url)
+- **atproto-attestation**: CID-first attestation utilities for creating and verifying record signatures
+- **atproto-record**: Record utilities including TID generation, AT-URI parsing, and CID generation
 - **atproto-client**: HTTP client with OAuth and identity integration
 - **atproto-jetstream**: WebSocket event streaming with compression
 - **atproto-oauth**: OAuth workflow implementation with DPoP, PKCE, JWT, and storage abstractions
···
 - **atproto-xrpcs-helloworld**: Complete example XRPC service
 
 Features:
-- **12 CLI tools** with consistent clap-based command-line interfaces (optional via `clap` feature)
+- **13 CLI tools** with consistent clap-based command-line interfaces (optional via `clap` feature)
 - **Rust edition 2024** with modern async/await patterns
 - **Comprehensive error handling** with structured error types
 - **Full test coverage** with unit tests across all modules
···
 ### Core Library Modules (atproto-identity)
 - **`src/lib.rs`**: Main library exports
 - **`src/resolve.rs`**: Core resolution logic for handles and DIDs, DNS/HTTP resolution
-- **`src/plc.rs`**: PLC directory client for did:plc resolution
+- **`src/plc.rs`**: PLC directory client for did:plc resolution
 - **`src/web.rs`**: Web DID client for did:web resolution and URL conversion
 - **`src/model.rs`**: Data structures for DID documents and AT Protocol entities
 - **`src/validation.rs`**: Input validation for handles and DIDs
 - **`src/config.rs`**: Configuration management and environment variable handling
 - **`src/errors.rs`**: Structured error types following project conventions
 - **`src/key.rs`**: Cryptographic key operations including signature validation and key identification for P-256, P-384, and K-256 curves
-- **`src/storage.rs`**: Storage abstraction interface for DID document caching
 - **`src/storage_lru.rs`**: LRU-based storage implementation (requires `lru` feature)
+- **`src/traits.rs`**: Core trait definitions for identity resolution and key resolution
+- **`src/url.rs`**: URL utilities for AT Protocol services
 
 ### CLI Tools (require --features clap)
 
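The `src/key.rs` module described above covers signature validation across P-256, P-384, and K-256. As a rough illustration of the underlying primitive only (not the module's actual API), a minimal ECDSA P-256 sign/verify round trip with the RustCrypto `p256` crate pinned in this workspace's Cargo.lock might look like:

```rust
// Sketch only: generic ECDSA P-256 signing and verification with the
// RustCrypto `p256` crate; the key.rs API itself may differ.
use p256::ecdsa::{
    signature::{Signer, Verifier},
    Signature, SigningKey, VerifyingKey,
};

fn demo_sign_verify(message: &[u8]) -> bool {
    // Ephemeral key for illustration; assumes an OS RNG via `rand_core::OsRng`.
    let signing_key = SigningKey::random(&mut rand_core::OsRng);
    let verifying_key = VerifyingKey::from(&signing_key);

    // Sign the raw message bytes and check the signature round-trips.
    let signature: Signature = signing_key.sign(message);
    verifying_key.verify(message, &signature).is_ok()
}
```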
···
 - **`src/bin/atproto-identity-sign.rs`**: Create cryptographic signatures of JSON data
 - **`src/bin/atproto-identity-validate.rs`**: Validate cryptographic signatures
 
+#### Attestation Operations (atproto-attestation)
+- **`src/bin/atproto-attestation-sign.rs`**: Sign AT Protocol records with inline or remote attestations using CID-first specification
+- **`src/bin/atproto-attestation-verify.rs`**: Verify cryptographic signatures on AT Protocol records with attestation validation
+
 #### Record Operations (atproto-record)
-- **`src/bin/atproto-record-sign.rs`**: Sign AT Protocol records with cryptographic signatures
-- **`src/bin/atproto-record-verify.rs`**: Verify AT Protocol record signatures
+- **`src/bin/atproto-record-cid.rs`**: Generate CID (Content Identifier) for AT Protocol records using DAG-CBOR serialization
 
 #### Client Tools (atproto-client)
 - **`src/bin/atproto-client-app-password.rs`**: Make XRPC calls using app password authentication
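The new `atproto-record-cid` binary described above derives a CID from DAG-CBOR-serialized record data. A minimal sketch of that general technique using the `serde_ipld_dagcbor`, `sha2`, `multihash`, and `cid` crates from the lockfile below; the codec and multihash codes (0x71 dag-cbor, 0x12 sha2-256) are assumptions about conventional AT Protocol usage, not taken from the tool's source:

```rust
// Sketch: derive a CIDv1 for a JSON record via DAG-CBOR + SHA-256.
// Assumes the dag-cbor codec (0x71) and the sha2-256 multihash code (0x12);
// the real atproto-record-cid binary may differ in details.
use cid::Cid;
use multihash::Multihash;
use sha2::{Digest, Sha256};

fn record_cid(record: &serde_json::Value) -> anyhow::Result<Cid> {
    // Canonically encode the record as DAG-CBOR bytes.
    let bytes = serde_ipld_dagcbor::to_vec(record)?;
    // Hash the encoding with SHA-256 and wrap the digest as a multihash.
    let digest = Sha256::digest(&bytes);
    let mh = Multihash::<64>::wrap(0x12, &digest)?;
    // Build a CIDv1 tagged with the dag-cbor codec.
    Ok(Cid::new_v1(0x71, mh))
}
```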
Cargo.lock (+576 -626)
···
 version = 4
 
 [[package]]
-name = "addr2line"
-version = "0.24.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "dfbe277e56a376000877090da837660b4427aad530e3028d44e0bffe4f89a1c1"
-dependencies = [
- "gimli",
-]
-
-[[package]]
-name = "adler2"
-version = "2.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "512761e0bb2578dd7380c6baaa0f4ce03e84f95e960231d1dec8bf4d7d6e2627"
-
-[[package]]
 name = "aho-corasick"
-version = "1.1.3"
+version = "1.1.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8e60d3430d3a69478ad0993f19238d2df97c507009a52b3c10addcd7f6bcb916"
+checksum = "ddd31a130427c27518df266943a5308ed92d4b226cc639f5a8f1002816174301"
 dependencies = [
  "memchr",
 ]
···
 
 [[package]]
 name = "anstream"
-version = "0.6.19"
+version = "0.6.21"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "301af1932e46185686725e0fad2f8f2aa7da69dd70bf6ecc44d6b703844a3933"
+checksum = "43d5b281e737544384e969a5ccad3f1cdd24b48086a0fc1b2a5262a26b8f4f4a"
 dependencies = [
  "anstyle",
  "anstyle-parse",
···
 
 [[package]]
 name = "anstyle"
-version = "1.0.11"
+version = "1.0.13"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "862ed96ca487e809f1c8e5a8447f6ee2cf102f846893800b20cebdf541fc6bbd"
+checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78"
 
 [[package]]
 name = "anstyle-parse"
···
 
 [[package]]
 name = "anstyle-query"
-version = "1.1.3"
+version = "1.1.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6c8bdeb6047d8983be085bab0ba1472e6dc604e7041dbf6fcd5e71523014fae9"
+checksum = "9e231f6134f61b71076a3eab506c379d4f36122f2af15a9ff04415ea4c3339e2"
 dependencies = [
- "windows-sys 0.59.0",
+ "windows-sys 0.60.2",
 ]
 
 [[package]]
 name = "anstyle-wincon"
-version = "3.0.9"
+version = "3.0.10"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "403f75924867bb1033c59fbf0797484329750cfbe3c4325cd33127941fabc882"
+checksum = "3e0633414522a32ffaac8ac6cc8f748e090c5717661fddeea04219e2344f5f2a"
 dependencies = [
  "anstyle",
  "once_cell_polyfill",
- "windows-sys 0.59.0",
+ "windows-sys 0.60.2",
 ]
 
 [[package]]
 name = "anyhow"
-version = "1.0.98"
+version = "1.0.100"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e16d2d3311acee920a9eb8d33b8cbc1787ce4a264e85f964c2404b969bdcd487"
+checksum = "a23eb6b1614318a8071c9b2521f36b424b2c83db5eb3a0fead4a6c0809af6e61"
 
 [[package]]
 name = "async-trait"
-version = "0.1.88"
+version = "0.1.89"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e539d3fca749fcee5236ab05e93a52867dd549cc157c8cb7f99595f3cedffdb5"
+checksum = "9035ad2d096bed7955a320ee7e2230574d28fd3c3a0f186cbea1ff3c7eed5dbb"
 dependencies = [
  "proc-macro2",
  "quote",
- "syn",
+ "syn 2.0.109",
 ]
 
 [[package]]
···
 checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0"
 
 [[package]]
+name = "atproto-attestation"
+version = "0.13.0"
+dependencies = [
+ "anyhow",
+ "async-trait",
+ "atproto-client",
+ "atproto-identity",
+ "atproto-record",
+ "base64",
+ "chrono",
+ "cid",
+ "clap",
+ "elliptic-curve",
+ "k256",
+ "multihash",
+ "p256",
+ "reqwest",
+ "serde",
+ "serde_ipld_dagcbor",
+ "serde_json",
+ "sha2",
+ "thiserror 2.0.17",
+ "tokio",
+]
+
+[[package]]
 name = "atproto-client"
 version = "0.13.0"
 dependencies = [
  "anyhow",
+ "async-trait",
  "atproto-identity",
  "atproto-oauth",
  "atproto-record",
···
  "secrecy",
  "serde",
  "serde_json",
- "thiserror 2.0.12",
+ "thiserror 2.0.17",
  "tokio",
  "tracing",
  "urlencoding",
+]
+
+[[package]]
+name = "atproto-extras"
+version = "0.13.0"
+dependencies = [
+ "anyhow",
+ "async-trait",
+ "atproto-identity",
+ "atproto-record",
+ "clap",
+ "regex",
+ "reqwest",
+ "serde_json",
+ "tokio",
 ]
 
 [[package]]
···
  "serde",
  "serde_ipld_dagcbor",
  "serde_json",
- "thiserror 2.0.12",
+ "thiserror 2.0.17",
  "tokio",
  "tracing",
+ "url",
  "urlencoding",
  "zeroize",
 ]
···
  "http",
  "serde",
  "serde_json",
- "thiserror 2.0.12",
+ "thiserror 2.0.17",
  "tokio",
  "tokio-util",
  "tokio-websockets",
···
  "reqwest",
  "serde",
  "serde_json",
- "thiserror 2.0.12",
+ "thiserror 2.0.17",
  "tokio",
  "tracing",
  "zeroize",
···
  "serde_ipld_dagcbor",
  "serde_json",
  "sha2",
- "thiserror 2.0.12",
+ "thiserror 2.0.17",
  "tokio",
  "tracing",
  "ulid",
···
  "reqwest",
  "serde",
  "serde_json",
- "thiserror 2.0.12",
+ "thiserror 2.0.17",
  "zeroize",
 ]
 
···
  "secrecy",
  "serde",
  "serde_json",
- "thiserror 2.0.12",
+ "thiserror 2.0.17",
  "tokio",
  "tracing",
  "zeroize",
···
 version = "0.13.0"
 dependencies = [
  "anyhow",
+ "async-trait",
  "atproto-identity",
  "base64",
  "chrono",
+ "cid",
  "clap",
+ "multihash",
+ "rand 0.8.5",
  "serde",
  "serde_ipld_dagcbor",
  "serde_json",
- "thiserror 2.0.12",
+ "sha2",
+ "thiserror 2.0.17",
+ "tokio",
+]
+
+[[package]]
+name = "atproto-tap"
+version = "0.13.0"
+dependencies = [
+ "atproto-client",
+ "atproto-identity",
+ "base64",
+ "clap",
+ "compact_str",
+ "futures",
+ "http",
+ "itoa",
+ "reqwest",
+ "serde",
+ "serde_json",
+ "thiserror 2.0.17",
  "tokio",
+ "tokio-stream",
+ "tokio-websockets",
+ "tracing",
+ "tracing-subscriber",
 ]
 
 [[package]]
···
  "reqwest-middleware",
  "serde",
  "serde_json",
- "thiserror 2.0.12",
+ "thiserror 2.0.17",
  "tokio",
  "tracing",
 ]
···
336
392
"reqwest-middleware",
337
393
"serde",
338
394
"serde_json",
339
-
"thiserror 2.0.12",
395
+
"thiserror 2.0.17",
340
396
"tokio",
341
397
"tracing",
342
398
]
343
399
344
400
[[package]]
345
401
name = "autocfg"
346
-
version = "1.4.0"
402
+
version = "1.5.0"
347
403
source = "registry+https://github.com/rust-lang/crates.io-index"
348
-
checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"
404
+
checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8"
349
405
350
406
[[package]]
351
407
name = "axum"
352
-
version = "0.8.4"
408
+
version = "0.8.6"
353
409
source = "registry+https://github.com/rust-lang/crates.io-index"
354
-
checksum = "021e862c184ae977658b36c4500f7feac3221ca5da43e3f25bd04ab6c79a29b5"
410
+
checksum = "8a18ed336352031311f4e0b4dd2ff392d4fbb370777c9d18d7fc9d7359f73871"
355
411
dependencies = [
356
412
"axum-core",
357
413
"axum-macros",
···
369
425
"mime",
370
426
"percent-encoding",
371
427
"pin-project-lite",
372
-
"rustversion",
373
-
"serde",
428
+
"serde_core",
374
429
"serde_json",
375
430
"serde_path_to_error",
376
431
"serde_urlencoded",
···
384
439
385
440
[[package]]
386
441
name = "axum-core"
387
-
version = "0.5.2"
442
+
version = "0.5.5"
388
443
source = "registry+https://github.com/rust-lang/crates.io-index"
389
-
checksum = "68464cd0412f486726fb3373129ef5d2993f90c34bc2bc1c1e9943b2f4fc7ca6"
444
+
checksum = "59446ce19cd142f8833f856eb31f3eb097812d1479ab224f54d72428ca21ea22"
390
445
dependencies = [
391
446
"bytes",
392
447
"futures-core",
···
395
450
"http-body-util",
396
451
"mime",
397
452
"pin-project-lite",
398
-
"rustversion",
399
453
"sync_wrapper",
400
454
"tower-layer",
401
455
"tower-service",
···
410
464
dependencies = [
411
465
"proc-macro2",
412
466
"quote",
413
-
"syn",
414
-
]
415
-
416
-
[[package]]
417
-
name = "backtrace"
418
-
version = "0.3.75"
419
-
source = "registry+https://github.com/rust-lang/crates.io-index"
420
-
checksum = "6806a6321ec58106fea15becdad98371e28d92ccbc7c8f1b3b6dd724fe8f1002"
421
-
dependencies = [
422
-
"addr2line",
423
-
"cfg-if",
424
-
"libc",
425
-
"miniz_oxide",
426
-
"object",
427
-
"rustc-demangle",
428
-
"windows-targets 0.52.6",
467
+
"syn 2.0.109",
429
468
]
430
469
431
470
[[package]]
···
441
480
checksum = "4c7f02d4ea65f2c1853089ffd8d2787bdbc63de2f0d29dedbcf8ccdfa0ccd4cf"
442
481
443
482
[[package]]
483
+
name = "base256emoji"
484
+
version = "1.0.2"
485
+
source = "registry+https://github.com/rust-lang/crates.io-index"
486
+
checksum = "b5e9430d9a245a77c92176e649af6e275f20839a48389859d1661e9a128d077c"
487
+
dependencies = [
488
+
"const-str",
489
+
"match-lookup",
490
+
]
491
+
492
+
[[package]]
444
493
name = "base64"
445
494
version = "0.22.1"
446
495
source = "registry+https://github.com/rust-lang/crates.io-index"
···
448
497
449
498
[[package]]
450
499
name = "base64ct"
451
-
version = "1.7.3"
500
+
version = "1.8.0"
452
501
source = "registry+https://github.com/rust-lang/crates.io-index"
453
-
checksum = "89e25b6adfb930f02d1981565a6e5d9c547ac15a96606256d3b59040e5cd4ca3"
502
+
checksum = "55248b47b0caf0546f7988906588779981c43bb1bc9d0c44087278f80cdb44ba"
454
503
455
504
[[package]]
456
505
name = "bitflags"
457
-
version = "2.9.1"
506
+
version = "2.10.0"
458
507
source = "registry+https://github.com/rust-lang/crates.io-index"
459
-
checksum = "1b8e56985ec62d17e9c1001dc89c88ecd7dc08e47eba5ec7c29c7b5eeecde967"
508
+
checksum = "812e12b5285cc515a9c72a5c1d3b6d46a19dac5acfef5265968c166106e31dd3"
460
509
461
510
[[package]]
462
511
name = "block-buffer"
···
469
518
470
519
[[package]]
471
520
name = "bumpalo"
472
-
version = "3.17.0"
521
+
version = "3.19.0"
473
522
source = "registry+https://github.com/rust-lang/crates.io-index"
474
-
checksum = "1628fb46dfa0b37568d12e5edd512553eccf6a22a78e8bde00bb4aed84d5bdbf"
523
+
checksum = "46c5e41b57b8bba42a04676d81cb89e9ee8e859a1a66f80a5a72e1cb76b34d43"
475
524
476
525
[[package]]
477
526
name = "bytes"
···
480
529
checksum = "d71b6127be86fdcfddb610f7182ac57211d4b18a3e9c82eb2d17662f2227ad6a"
481
530
482
531
[[package]]
532
+
name = "castaway"
533
+
version = "0.2.4"
534
+
source = "registry+https://github.com/rust-lang/crates.io-index"
535
+
checksum = "dec551ab6e7578819132c713a93c022a05d60159dc86e7a7050223577484c55a"
536
+
dependencies = [
537
+
"rustversion",
538
+
]
539
+
540
+
[[package]]
483
541
name = "cbor4ii"
484
542
version = "0.2.14"
485
543
source = "registry+https://github.com/rust-lang/crates.io-index"
···
490
548
491
549
[[package]]
492
550
name = "cc"
493
-
version = "1.2.24"
551
+
version = "1.2.44"
494
552
source = "registry+https://github.com/rust-lang/crates.io-index"
495
-
checksum = "16595d3be041c03b09d08d0858631facccee9221e579704070e6e9e4915d3bc7"
553
+
checksum = "37521ac7aabe3d13122dc382493e20c9416f299d2ccd5b3a5340a2570cdeb0f3"
496
554
dependencies = [
555
+
"find-msvc-tools",
497
556
"jobserver",
498
557
"libc",
499
558
"shlex",
···
501
560
502
561
[[package]]
503
562
name = "cfg-if"
504
-
version = "1.0.0"
563
+
version = "1.0.4"
505
564
source = "registry+https://github.com/rust-lang/crates.io-index"
506
-
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
565
+
checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801"
507
566
508
567
[[package]]
509
568
name = "cfg_aliases"
···
513
572
514
573
[[package]]
515
574
name = "chrono"
516
-
version = "0.4.41"
575
+
version = "0.4.42"
517
576
source = "registry+https://github.com/rust-lang/crates.io-index"
518
-
checksum = "c469d952047f47f91b68d1cba3f10d63c11d73e4636f24f08daf0278abf01c4d"
577
+
checksum = "145052bdd345b87320e369255277e3fb5152762ad123a901ef5c262dd38fe8d2"
519
578
dependencies = [
520
579
"num-traits",
521
580
"serde",
···
537
596
538
597
[[package]]
539
598
name = "clap"
540
-
version = "4.5.40"
599
+
version = "4.5.51"
541
600
source = "registry+https://github.com/rust-lang/crates.io-index"
542
-
checksum = "40b6887a1d8685cebccf115538db5c0efe625ccac9696ad45c409d96566e910f"
601
+
checksum = "4c26d721170e0295f191a69bd9a1f93efcdb0aff38684b61ab5750468972e5f5"
543
602
dependencies = [
544
603
"clap_builder",
545
604
"clap_derive",
···
547
606
548
607
[[package]]
549
608
name = "clap_builder"
550
-
version = "4.5.40"
609
+
version = "4.5.51"
551
610
source = "registry+https://github.com/rust-lang/crates.io-index"
552
-
checksum = "e0c66c08ce9f0c698cbce5c0279d0bb6ac936d8674174fe48f736533b964f59e"
611
+
checksum = "75835f0c7bf681bfd05abe44e965760fea999a5286c6eb2d59883634fd02011a"
553
612
dependencies = [
554
613
"anstream",
555
614
"anstyle",
···
559
618
560
619
[[package]]
561
620
name = "clap_derive"
562
-
version = "4.5.40"
621
+
version = "4.5.49"
563
622
source = "registry+https://github.com/rust-lang/crates.io-index"
564
-
checksum = "d2c7947ae4cc3d851207c1adb5b5e260ff0cca11446b1d6d1423788e442257ce"
623
+
checksum = "2a0b5487afeab2deb2ff4e03a807ad1a03ac532ff5a2cee5d86884440c7f7671"
565
624
dependencies = [
566
625
"heck",
567
626
"proc-macro2",
568
627
"quote",
569
-
"syn",
628
+
"syn 2.0.109",
570
629
]
571
630
572
631
[[package]]
573
632
name = "clap_lex"
574
-
version = "0.7.5"
633
+
version = "0.7.6"
575
634
source = "registry+https://github.com/rust-lang/crates.io-index"
576
-
checksum = "b94f61472cee1439c0b966b47e3aca9ae07e45d070759512cd390ea2bebc6675"
635
+
checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"
577
636
578
637
[[package]]
579
638
name = "colorchoice"
···
582
641
checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75"
583
642
584
643
[[package]]
644
+
name = "compact_str"
645
+
version = "0.8.1"
646
+
source = "registry+https://github.com/rust-lang/crates.io-index"
647
+
checksum = "3b79c4069c6cad78e2e0cdfcbd26275770669fb39fd308a752dc110e83b9af32"
648
+
dependencies = [
649
+
"castaway",
650
+
"cfg-if",
651
+
"itoa",
652
+
"rustversion",
653
+
"ryu",
654
+
"serde",
655
+
"static_assertions",
656
+
]
657
+
658
+
[[package]]
585
659
name = "const-oid"
586
660
version = "0.9.6"
587
661
source = "registry+https://github.com/rust-lang/crates.io-index"
588
662
checksum = "c2459377285ad874054d797f3ccebf984978aa39129f6eafde5cdc8315b612f8"
589
663
590
664
[[package]]
665
+
name = "const-str"
666
+
version = "0.4.3"
667
+
source = "registry+https://github.com/rust-lang/crates.io-index"
668
+
checksum = "2f421161cb492475f1661ddc9815a745a1c894592070661180fdec3d4872e9c3"
669
+
670
+
[[package]]
591
671
name = "core-foundation"
592
672
version = "0.9.4"
593
673
source = "registry+https://github.com/rust-lang/crates.io-index"
···
706
786
checksum = "8d162beedaa69905488a8da94f5ac3edb4dd4788b732fadb7bd120b2625c1976"
707
787
dependencies = [
708
788
"data-encoding",
709
-
"syn",
789
+
"syn 2.0.109",
710
790
]
711
791
712
792
[[package]]
···
740
820
dependencies = [
741
821
"proc-macro2",
742
822
"quote",
743
-
"syn",
823
+
"syn 2.0.109",
744
824
]
745
825
746
826
[[package]]
···
800
880
"heck",
801
881
"proc-macro2",
802
882
"quote",
803
-
"syn",
883
+
"syn 2.0.109",
804
884
]
805
885
806
886
[[package]]
···
820
900
]
821
901
822
902
[[package]]
903
+
name = "find-msvc-tools"
904
+
version = "0.1.4"
905
+
source = "registry+https://github.com/rust-lang/crates.io-index"
906
+
checksum = "52051878f80a721bb68ebfbc930e07b65ba72f2da88968ea5c06fd6ca3d3a127"
907
+
908
+
[[package]]
823
909
name = "fnv"
824
910
version = "1.0.7"
825
911
source = "registry+https://github.com/rust-lang/crates.io-index"
···
833
919
834
920
[[package]]
835
921
name = "form_urlencoded"
836
-
version = "1.2.1"
922
+
version = "1.2.2"
837
923
source = "registry+https://github.com/rust-lang/crates.io-index"
838
-
checksum = "e13624c2627564efccf4934284bdd98cbaa14e79b0b5a141218e507b3a823456"
924
+
checksum = "cb4cb245038516f5f85277875cdaa4f7d2c9a0fa0468de06ed190163b1581fcf"
839
925
dependencies = [
840
926
"percent-encoding",
841
927
]
···
896
982
dependencies = [
897
983
"proc-macro2",
898
984
"quote",
899
-
"syn",
985
+
"syn 2.0.109",
900
986
]
901
987
902
988
[[package]]
···
930
1016
]
931
1017
932
1018
[[package]]
933
-
name = "generator"
934
-
version = "0.8.5"
935
-
source = "registry+https://github.com/rust-lang/crates.io-index"
936
-
checksum = "d18470a76cb7f8ff746cf1f7470914f900252ec36bbc40b569d74b1258446827"
937
-
dependencies = [
938
-
"cc",
939
-
"cfg-if",
940
-
"libc",
941
-
"log",
942
-
"rustversion",
943
-
"windows",
944
-
]
945
-
946
-
[[package]]
947
1019
name = "generic-array"
948
-
version = "0.14.7"
1020
+
version = "0.14.9"
949
1021
source = "registry+https://github.com/rust-lang/crates.io-index"
950
-
checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a"
1022
+
checksum = "4bb6743198531e02858aeaea5398fcc883e71851fcbcb5a2f773e2fb6cb1edf2"
951
1023
dependencies = [
952
1024
"typenum",
953
1025
"version_check",
···
963
1035
"cfg-if",
964
1036
"js-sys",
965
1037
"libc",
966
-
"wasi 0.11.0+wasi-snapshot-preview1",
1038
+
"wasi",
967
1039
"wasm-bindgen",
968
1040
]
969
1041
970
1042
[[package]]
971
1043
name = "getrandom"
972
-
version = "0.3.3"
1044
+
version = "0.3.4"
973
1045
source = "registry+https://github.com/rust-lang/crates.io-index"
974
-
checksum = "26145e563e54f2cadc477553f1ec5ee650b00862f0a58bcd12cbdc5f0ea2d2f4"
1046
+
checksum = "899def5c37c4fd7b2664648c28120ecec138e4d395b459e5ca34f9cce2dd77fd"
975
1047
dependencies = [
976
1048
"cfg-if",
977
1049
"js-sys",
978
1050
"libc",
979
1051
"r-efi",
980
-
"wasi 0.14.2+wasi-0.2.4",
1052
+
"wasip2",
981
1053
"wasm-bindgen",
982
1054
]
983
1055
984
1056
[[package]]
985
-
name = "gimli"
986
-
version = "0.31.1"
987
-
source = "registry+https://github.com/rust-lang/crates.io-index"
988
-
checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f"
989
-
990
-
[[package]]
991
1057
name = "group"
992
1058
version = "0.13.0"
993
1059
source = "registry+https://github.com/rust-lang/crates.io-index"
···
1000
1066
1001
1067
[[package]]
1002
1068
name = "h2"
1003
-
version = "0.4.10"
1069
+
version = "0.4.12"
1004
1070
source = "registry+https://github.com/rust-lang/crates.io-index"
1005
-
checksum = "a9421a676d1b147b16b82c9225157dc629087ef8ec4d5e2960f9437a90dac0a5"
1071
+
checksum = "f3c0b69cfcb4e1b9f1bf2f53f95f766e4661169728ec61cd3fe5a0166f2d1386"
1006
1072
dependencies = [
1007
1073
"atomic-waker",
1008
1074
"bytes",
···
1019
1085
1020
1086
[[package]]
1021
1087
name = "hashbrown"
1022
-
version = "0.15.3"
1088
+
version = "0.15.5"
1023
1089
source = "registry+https://github.com/rust-lang/crates.io-index"
1024
-
checksum = "84b26c544d002229e640969970a2e74021aadf6e2f96372b9c58eff97de08eb3"
1090
+
checksum = "9229cfe53dfd69f0609a49f65461bd93001ea1ef889cd5529dd176593f5338a1"
1025
1091
dependencies = [
1026
1092
"allocator-api2",
1027
1093
"equivalent",
1028
1094
"foldhash",
1029
1095
]
1096
+
1097
+
[[package]]
1098
+
name = "hashbrown"
1099
+
version = "0.16.0"
1100
+
source = "registry+https://github.com/rust-lang/crates.io-index"
1101
+
checksum = "5419bdc4f6a9207fbeba6d11b604d481addf78ecd10c11ad51e76c2f6482748d"
1030
1102
1031
1103
[[package]]
1032
1104
name = "heck"
···
1050
1122
"idna",
1051
1123
"ipnet",
1052
1124
"once_cell",
1053
-
"rand 0.9.1",
1125
+
"rand 0.9.2",
1054
1126
"ring",
1055
-
"thiserror 2.0.12",
1127
+
"thiserror 2.0.17",
1056
1128
"tinyvec",
1057
1129
"tokio",
1058
1130
"tracing",
···
1072
1144
"moka",
1073
1145
"once_cell",
1074
1146
"parking_lot",
1075
-
"rand 0.9.1",
1147
+
"rand 0.9.2",
1076
1148
"resolv-conf",
1077
1149
"smallvec",
1078
-
"thiserror 2.0.12",
1150
+
"thiserror 2.0.17",
1079
1151
"tokio",
1080
1152
"tracing",
1081
1153
]
···
1146
1218
1147
1219
[[package]]
1148
1220
name = "hyper"
1149
-
version = "1.6.0"
1221
+
version = "1.7.0"
1150
1222
source = "registry+https://github.com/rust-lang/crates.io-index"
1151
-
checksum = "cc2b571658e38e0c01b1fdca3bbbe93c00d3d71693ff2770043f8c29bc7d6f80"
1223
+
checksum = "eb3aa54a13a0dfe7fbe3a59e0c76093041720fdc77b110cc0fc260fafb4dc51e"
1152
1224
dependencies = [
1225
+
"atomic-waker",
1153
1226
"bytes",
1154
1227
"futures-channel",
1155
-
"futures-util",
1228
+
"futures-core",
1156
1229
"h2",
1157
1230
"http",
1158
1231
"http-body",
···
1160
1233
"httpdate",
1161
1234
"itoa",
1162
1235
"pin-project-lite",
1236
+
"pin-utils",
1163
1237
"smallvec",
1164
1238
"tokio",
1165
1239
"want",
···
1167
1241
1168
1242
[[package]]
1169
1243
name = "hyper-rustls"
1170
-
version = "0.27.6"
1244
+
version = "0.27.7"
1171
1245
source = "registry+https://github.com/rust-lang/crates.io-index"
1172
-
checksum = "03a01595e11bdcec50946522c32dde3fc6914743000a68b93000965f2f02406d"
1246
+
checksum = "e3c93eb611681b207e1fe55d5a71ecf91572ec8a6705cdb6857f7d8d5242cf58"
1173
1247
dependencies = [
1174
1248
"http",
1175
1249
"hyper",
···
1184
1258
1185
1259
[[package]]
1186
1260
name = "hyper-util"
1187
-
version = "0.1.13"
1261
+
version = "0.1.17"
1188
1262
source = "registry+https://github.com/rust-lang/crates.io-index"
1189
-
checksum = "b1c293b6b3d21eca78250dc7dbebd6b9210ec5530e038cbfe0661b5c47ab06e8"
1263
+
checksum = "3c6995591a8f1380fcb4ba966a252a4b29188d51d2b89e3a252f5305be65aea8"
1190
1264
dependencies = [
1191
1265
"base64",
1192
1266
"bytes",
···
1200
1274
"libc",
1201
1275
"percent-encoding",
1202
1276
"pin-project-lite",
1203
-
"socket2",
1277
+
"socket2 0.6.1",
1204
1278
"system-configuration",
1205
1279
"tokio",
1206
1280
"tower-service",
···
1210
1284
1211
1285
[[package]]
1212
1286
name = "icu_collections"
1213
-
version = "2.0.0"
1287
+
version = "2.1.1"
1214
1288
source = "registry+https://github.com/rust-lang/crates.io-index"
1215
-
checksum = "200072f5d0e3614556f94a9930d5dc3e0662a652823904c3a75dc3b0af7fee47"
1289
+
checksum = "4c6b649701667bbe825c3b7e6388cb521c23d88644678e83c0c4d0a621a34b43"
1216
1290
dependencies = [
1217
1291
"displaydoc",
1218
1292
"potential_utf",
···
1223
1297
1224
1298
[[package]]
1225
1299
name = "icu_locale_core"
1226
-
version = "2.0.0"
1300
+
version = "2.1.1"
1227
1301
source = "registry+https://github.com/rust-lang/crates.io-index"
1228
-
checksum = "0cde2700ccaed3872079a65fb1a78f6c0a36c91570f28755dda67bc8f7d9f00a"
1302
+
checksum = "edba7861004dd3714265b4db54a3c390e880ab658fec5f7db895fae2046b5bb6"
1229
1303
dependencies = [
1230
1304
"displaydoc",
1231
1305
"litemap",
···
1236
1310
1237
1311
[[package]]
1238
1312
name = "icu_normalizer"
1239
-
version = "2.0.0"
1313
+
version = "2.1.1"
1240
1314
source = "registry+https://github.com/rust-lang/crates.io-index"
1241
-
checksum = "436880e8e18df4d7bbc06d58432329d6458cc84531f7ac5f024e93deadb37979"
1315
+
checksum = "5f6c8828b67bf8908d82127b2054ea1b4427ff0230ee9141c54251934ab1b599"
1242
1316
dependencies = [
1243
-
"displaydoc",
1244
1317
"icu_collections",
1245
1318
"icu_normalizer_data",
1246
1319
"icu_properties",
···
1251
1324
1252
1325
[[package]]
1253
1326
name = "icu_normalizer_data"
1254
-
version = "2.0.0"
1327
+
version = "2.1.1"
1255
1328
source = "registry+https://github.com/rust-lang/crates.io-index"
1256
-
checksum = "00210d6893afc98edb752b664b8890f0ef174c8adbb8d0be9710fa66fbbf72d3"
1329
+
checksum = "7aedcccd01fc5fe81e6b489c15b247b8b0690feb23304303a9e560f37efc560a"
1257
1330
1258
1331
[[package]]
1259
1332
name = "icu_properties"
1260
-
version = "2.0.1"
1333
+
version = "2.1.1"
1261
1334
source = "registry+https://github.com/rust-lang/crates.io-index"
1262
-
checksum = "016c619c1eeb94efb86809b015c58f479963de65bdb6253345c1a1276f22e32b"
1335
+
checksum = "e93fcd3157766c0c8da2f8cff6ce651a31f0810eaa1c51ec363ef790bbb5fb99"
1263
1336
dependencies = [
1264
-
"displaydoc",
1265
1337
"icu_collections",
1266
1338
"icu_locale_core",
1267
1339
"icu_properties_data",
1268
1340
"icu_provider",
1269
-
"potential_utf",
1270
1341
"zerotrie",
1271
1342
"zerovec",
1272
1343
]
1273
1344
1274
1345
[[package]]
1275
1346
name = "icu_properties_data"
1276
-
version = "2.0.1"
1347
+
version = "2.1.1"
1277
1348
source = "registry+https://github.com/rust-lang/crates.io-index"
1278
-
checksum = "298459143998310acd25ffe6810ed544932242d3f07083eee1084d83a71bd632"
1349
+
checksum = "02845b3647bb045f1100ecd6480ff52f34c35f82d9880e029d329c21d1054899"
1279
1350
1280
1351
[[package]]
1281
1352
name = "icu_provider"
1282
-
version = "2.0.0"
1353
+
version = "2.1.1"
1283
1354
source = "registry+https://github.com/rust-lang/crates.io-index"
1284
-
checksum = "03c80da27b5f4187909049ee2d72f276f0d9f99a42c306bd0131ecfe04d8e5af"
1355
+
checksum = "85962cf0ce02e1e0a629cc34e7ca3e373ce20dda4c4d7294bbd0bf1fdb59e614"
1285
1356
dependencies = [
1286
1357
"displaydoc",
1287
1358
"icu_locale_core",
1288
-
"stable_deref_trait",
1289
-
"tinystr",
1290
1359
"writeable",
1291
1360
"yoke",
1292
1361
"zerofrom",
···
1296
1365
1297
1366
[[package]]
1298
1367
name = "idna"
1299
-
version = "1.0.3"
1368
+
version = "1.1.0"
1300
1369
source = "registry+https://github.com/rust-lang/crates.io-index"
1301
-
checksum = "686f825264d630750a544639377bae737628043f20d38bbc029e8f29ea968a7e"
1370
+
checksum = "3b0875f23caa03898994f6ddc501886a45c7d3d62d04d2d90788d47be1b1e4de"
1302
1371
dependencies = [
1303
1372
"idna_adapter",
1304
1373
"smallvec",
···
1317
1386
1318
1387
[[package]]
1319
1388
name = "indexmap"
1320
-
version = "2.9.0"
1389
+
version = "2.12.0"
1321
1390
source = "registry+https://github.com/rust-lang/crates.io-index"
1322
-
checksum = "cea70ddb795996207ad57735b50c5982d8844f38ba9ee5f1aedcfb708a2aa11e"
1391
+
checksum = "6717a8d2a5a929a1a2eb43a12812498ed141a0bcfb7e8f7844fbdbe4303bba9f"
1323
1392
dependencies = [
1324
1393
"equivalent",
1325
-
"hashbrown",
1394
+
"hashbrown 0.16.0",
1326
1395
]
1327
1396
1328
1397
[[package]]
···
1331
1400
source = "registry+https://github.com/rust-lang/crates.io-index"
1332
1401
checksum = "b58db92f96b720de98181bbbe63c831e87005ab460c1bf306eb2622b4707997f"
1333
1402
dependencies = [
1334
-
"socket2",
1403
+
"socket2 0.5.10",
1335
1404
"widestring",
1336
1405
"windows-sys 0.48.0",
1337
1406
"winreg",
···
1356
1425
1357
1426
[[package]]
1358
1427
name = "iri-string"
1359
-
version = "0.7.8"
1428
+
version = "0.7.9"
1360
1429
source = "registry+https://github.com/rust-lang/crates.io-index"
1361
-
checksum = "dbc5ebe9c3a1a7a5127f920a418f7585e9e758e911d0466ed004f393b0e380b2"
1430
+
checksum = "4f867b9d1d896b67beb18518eda36fdb77a32ea590de864f1325b294a6d14397"
1362
1431
dependencies = [
1363
1432
"memchr",
1364
1433
"serde",
···
1366
1435
1367
1436
[[package]]
1368
1437
name = "is_terminal_polyfill"
1369
-
version = "1.70.1"
1438
+
version = "1.70.2"
1370
1439
source = "registry+https://github.com/rust-lang/crates.io-index"
1371
-
checksum = "7943c866cc5cd64cbc25b2e01621d07fa8eb2a1a23160ee81ce38704e97b8ecf"
1440
+
checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695"
1372
1441
1373
1442
[[package]]
1374
1443
name = "itoa"
···
1378
1447
1379
1448
[[package]]
1380
1449
name = "jobserver"
1381
-
version = "0.1.33"
1450
+
version = "0.1.34"
1382
1451
source = "registry+https://github.com/rust-lang/crates.io-index"
1383
-
checksum = "38f262f097c174adebe41eb73d66ae9c06b2844fb0da69969647bbddd9b0538a"
1452
+
checksum = "9afb3de4395d6b3e67a780b6de64b51c978ecf11cb9a462c66be7d4ca9039d33"
1384
1453
dependencies = [
1385
-
"getrandom 0.3.3",
1454
+
"getrandom 0.3.4",
1386
1455
"libc",
1387
1456
]
1388
1457
1389
1458
[[package]]
1390
1459
name = "js-sys"
1391
-
version = "0.3.77"
1460
+
version = "0.3.82"
1392
1461
source = "registry+https://github.com/rust-lang/crates.io-index"
1393
-
checksum = "1cfaf33c695fc6e08064efbc1f72ec937429614f25eef83af942d0e227c3a28f"
1462
+
checksum = "b011eec8cc36da2aab2d5cff675ec18454fad408585853910a202391cf9f8e65"
1394
1463
dependencies = [
1395
1464
"once_cell",
1396
1465
"wasm-bindgen",
···
1418
1487
1419
1488
[[package]]
1420
1489
name = "libc"
1421
-
version = "0.2.172"
1490
+
version = "0.2.177"
1422
1491
source = "registry+https://github.com/rust-lang/crates.io-index"
1423
-
checksum = "d750af042f7ef4f724306de029d18836c26c1765a54a6a3f094cbd23a7267ffa"
1492
+
checksum = "2874a2af47a2325c2001a6e6fad9b16a53b802102b528163885171cf92b15976"
1424
1493
1425
1494
[[package]]
1426
1495
name = "litemap"
1427
-
version = "0.8.0"
1496
+
version = "0.8.1"
1428
1497
source = "registry+https://github.com/rust-lang/crates.io-index"
1429
-
checksum = "241eaef5fd12c88705a01fc1066c48c4b36e0dd4377dcdc7ec3942cea7a69956"
1498
+
checksum = "6373607a59f0be73a39b6fe456b8192fcc3585f602af20751600e974dd455e77"
1430
1499
1431
1500
[[package]]
1432
1501
name = "lock_api"
1433
-
version = "0.4.13"
1502
+
version = "0.4.14"
1434
1503
source = "registry+https://github.com/rust-lang/crates.io-index"
1435
-
checksum = "96936507f153605bddfcda068dd804796c84324ed2510809e5b2a624c81da765"
1504
+
checksum = "224399e74b87b5f3557511d98dff8b14089b3dadafcab6bb93eab67d3aace965"
1436
1505
dependencies = [
1437
-
"autocfg",
1438
1506
"scopeguard",
1439
1507
]
1440
1508
1441
1509
[[package]]
1442
1510
name = "log"
1443
-
version = "0.4.27"
1511
+
version = "0.4.28"
1444
1512
source = "registry+https://github.com/rust-lang/crates.io-index"
1445
-
checksum = "13dc2df351e3202783a1fe0d44375f7295ffb4049267b0f3018346dc122a1d94"
1446
-
1447
-
[[package]]
1448
-
name = "loom"
1449
-
version = "0.7.2"
1450
-
source = "registry+https://github.com/rust-lang/crates.io-index"
1451
-
checksum = "419e0dc8046cb947daa77eb95ae174acfbddb7673b4151f56d1eed8e93fbfaca"
1452
-
dependencies = [
1453
-
"cfg-if",
1454
-
"generator",
1455
-
"scoped-tls",
1456
-
"tracing",
1457
-
"tracing-subscriber",
1458
-
]
1513
+
checksum = "34080505efa8e45a4b816c349525ebe327ceaa8559756f0356cba97ef3bf7432"
1459
1514
1460
1515
[[package]]
1461
1516
name = "lru"
···
1463
1518
source = "registry+https://github.com/rust-lang/crates.io-index"
1464
1519
checksum = "234cf4f4a04dc1f57e24b96cc0cd600cf2af460d4161ac5ecdd0af8e1f3b2a38"
1465
1520
dependencies = [
1466
-
"hashbrown",
1521
+
"hashbrown 0.15.5",
1467
1522
]
1468
1523
1469
1524
[[package]]
···
1473
1528
checksum = "112b39cec0b298b6c1999fee3e31427f74f676e4cb9879ed1a121b43661a4154"
1474
1529
1475
1530
[[package]]
1531
+
name = "match-lookup"
1532
+
version = "0.1.1"
1533
+
source = "registry+https://github.com/rust-lang/crates.io-index"
1534
+
checksum = "1265724d8cb29dbbc2b0f06fffb8bf1a8c0cf73a78eede9ba73a4a66c52a981e"
1535
+
dependencies = [
1536
+
"proc-macro2",
1537
+
"quote",
1538
+
"syn 1.0.109",
1539
+
]
1540
+
1541
+
[[package]]
1476
1542
name = "matchers"
1477
-
version = "0.1.0"
1543
+
version = "0.2.0"
1478
1544
source = "registry+https://github.com/rust-lang/crates.io-index"
1479
-
checksum = "8263075bb86c5a1b1427b5ae862e8889656f126e9f77c484496e8b47cf5c5558"
1545
+
checksum = "d1525a2a28c7f4fa0fc98bb91ae755d1e2d1505079e05539e35bc876b5d65ae9"
1480
1546
dependencies = [
1481
-
"regex-automata 0.1.10",
1547
+
"regex-automata",
1482
1548
]
1483
1549
1484
1550
[[package]]
···
1489
1555
1490
1556
[[package]]
1491
1557
name = "memchr"
1492
-
version = "2.7.4"
1558
+
version = "2.7.6"
1493
1559
source = "registry+https://github.com/rust-lang/crates.io-index"
1494
-
checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3"
1560
+
checksum = "f52b00d39961fc5b2736ea853c9cc86238e165017a493d1d5c8eac6bdc4cc273"
1495
1561
1496
1562
[[package]]
1497
1563
name = "mime"
···
1510
1576
]
1511
1577
1512
1578
[[package]]
1513
-
name = "miniz_oxide"
1514
-
version = "0.8.8"
1515
-
source = "registry+https://github.com/rust-lang/crates.io-index"
1516
-
checksum = "3be647b768db090acb35d5ec5db2b0e1f1de11133ca123b9eacf5137868f892a"
1517
-
dependencies = [
1518
-
"adler2",
1519
-
]
1520
-
1521
-
[[package]]
1522
1579
name = "mio"
1523
-
version = "1.0.4"
1580
+
version = "1.1.0"
1524
1581
source = "registry+https://github.com/rust-lang/crates.io-index"
1525
-
checksum = "78bed444cc8a2160f01cbcf811ef18cac863ad68ae8ca62092e8db51d51c761c"
1582
+
checksum = "69d83b0086dc8ecf3ce9ae2874b2d1290252e2a30720bea58a5c6639b0092873"
1526
1583
dependencies = [
1527
1584
"libc",
1528
-
"wasi 0.11.0+wasi-snapshot-preview1",
1529
-
"windows-sys 0.59.0",
1585
+
"wasi",
1586
+
"windows-sys 0.61.2",
1530
1587
]
1531
1588
1532
1589
[[package]]
1533
1590
name = "moka"
1534
-
version = "0.12.10"
1591
+
version = "0.12.11"
1535
1592
source = "registry+https://github.com/rust-lang/crates.io-index"
1536
-
checksum = "a9321642ca94a4282428e6ea4af8cc2ca4eac48ac7a6a4ea8f33f76d0ce70926"
1593
+
checksum = "8261cd88c312e0004c1d51baad2980c66528dfdb2bee62003e643a4d8f86b077"
1537
1594
dependencies = [
1538
1595
"crossbeam-channel",
1539
1596
"crossbeam-epoch",
1540
1597
"crossbeam-utils",
1541
-
"loom",
1598
+
"equivalent",
1542
1599
"parking_lot",
1543
1600
"portable-atomic",
1544
1601
"rustc_version",
1545
1602
"smallvec",
1546
1603
"tagptr",
1547
-
"thiserror 1.0.69",
1548
1604
"uuid",
1549
1605
]
1550
1606
1551
1607
[[package]]
1552
1608
name = "multibase"
1553
-
version = "0.9.1"
1609
+
version = "0.9.2"
1554
1610
source = "registry+https://github.com/rust-lang/crates.io-index"
1555
-
checksum = "9b3539ec3c1f04ac9748a260728e855f261b4977f5c3406612c884564f329404"
1611
+
checksum = "8694bb4835f452b0e3bb06dbebb1d6fc5385b6ca1caf2e55fd165c042390ec77"
1556
1612
dependencies = [
1557
1613
"base-x",
1614
+
"base256emoji",
1558
1615
"data-encoding",
1559
1616
"data-encoding-macro",
1560
1617
]
···
1572
1629
1573
1630
[[package]]
1574
1631
name = "nu-ansi-term"
1575
-
version = "0.46.0"
1632
+
version = "0.50.3"
1576
1633
source = "registry+https://github.com/rust-lang/crates.io-index"
1577
-
checksum = "77a8165726e8236064dbb45459242600304b42a5ea24ee2948e18e023bf7ba84"
1634
+
checksum = "7957b9740744892f114936ab4a57b3f487491bbeafaf8083688b16841a4240e5"
1578
1635
dependencies = [
1579
-
"overload",
1580
-
"winapi",
1636
+
"windows-sys 0.61.2",
1581
1637
]
1582
1638
1583
1639
[[package]]
···
1590
1646
]
1591
1647
1592
1648
[[package]]
1593
-
name = "object"
1594
-
version = "0.36.7"
1595
-
source = "registry+https://github.com/rust-lang/crates.io-index"
1596
-
checksum = "62948e14d923ea95ea2c7c86c71013138b66525b86bdc08d2dcc262bdb497b87"
1597
-
dependencies = [
1598
-
"memchr",
1599
-
]
1600
-
1601
-
[[package]]
1602
1649
name = "once_cell"
1603
1650
version = "1.21.3"
1604
1651
source = "registry+https://github.com/rust-lang/crates.io-index"
···
1610
1657
1611
1658
[[package]]
1612
1659
name = "once_cell_polyfill"
1613
-
version = "1.70.1"
1660
+
version = "1.70.2"
1614
1661
source = "registry+https://github.com/rust-lang/crates.io-index"
1615
-
checksum = "a4895175b425cb1f87721b59f0f286c2092bd4af812243672510e1ac53e2e0ad"
1662
+
checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe"
1616
1663
1617
1664
[[package]]
1618
1665
name = "openssl-probe"
1619
1666
version = "0.1.6"
1620
1667
source = "registry+https://github.com/rust-lang/crates.io-index"
1621
1668
checksum = "d05e27ee213611ffe7d6348b942e8f942b37114c00cc03cec254295a4a17852e"
1622
-
1623
-
[[package]]
1624
-
name = "overload"
1625
-
version = "0.1.1"
1626
-
source = "registry+https://github.com/rust-lang/crates.io-index"
1627
-
checksum = "b15813163c1d831bf4a13c3610c05c0d03b39feb07f7e09fa234dac9b15aaf39"
1628
1669
1629
1670
[[package]]
1630
1671
name = "p256"
···
1654
1695
1655
1696
[[package]]
1656
1697
name = "parking_lot"
1657
-
version = "0.12.4"
1698
+
version = "0.12.5"
1658
1699
source = "registry+https://github.com/rust-lang/crates.io-index"
1659
-
checksum = "70d58bf43669b5795d1576d0641cfb6fbb2057bf629506267a92807158584a13"
1700
+
checksum = "93857453250e3077bd71ff98b6a65ea6621a19bb0f559a85248955ac12c45a1a"
1660
1701
dependencies = [
1661
1702
"lock_api",
1662
1703
"parking_lot_core",
···
1664
1705
1665
1706
[[package]]
1666
1707
name = "parking_lot_core"
1667
-
version = "0.9.11"
1708
+
version = "0.9.12"
1668
1709
source = "registry+https://github.com/rust-lang/crates.io-index"
1669
-
checksum = "bc838d2a56b5b1a6c25f55575dfc605fabb63bb2365f6c2353ef9159aa69e4a5"
1710
+
checksum = "2621685985a2ebf1c516881c026032ac7deafcda1a2c9b7850dc81e3dfcb64c1"
1670
1711
dependencies = [
1671
1712
"cfg-if",
1672
1713
"libc",
1673
1714
"redox_syscall",
1674
1715
"smallvec",
1675
-
"windows-targets 0.52.6",
1716
+
"windows-link 0.2.1",
1676
1717
]
1677
1718
1678
1719
[[package]]
···
1686
1727
1687
1728
[[package]]
1688
1729
name = "percent-encoding"
1689
-
version = "2.3.1"
1730
+
version = "2.3.2"
1690
1731
source = "registry+https://github.com/rust-lang/crates.io-index"
1691
-
checksum = "e3148f5046208a5d56bcfc03053e3ca6334e51da8dfb19b6cdc8b306fae3283e"
1732
+
checksum = "9b4f627cb1b25917193a259e49bdad08f671f8d9708acfd5fe0a8c1455d87220"
1692
1733
1693
1734
[[package]]
1694
1735
name = "pin-project-lite"
···
1720
1761
1721
1762
[[package]]
1722
1763
name = "portable-atomic"
1723
-
version = "1.11.0"
1764
+
version = "1.11.1"
1724
1765
source = "registry+https://github.com/rust-lang/crates.io-index"
1725
-
checksum = "350e9b48cbc6b0e028b0473b114454c6316e57336ee184ceab6e53f72c178b3e"
1766
+
checksum = "f84267b20a16ea918e43c6a88433c2d54fa145c92a811b5b047ccbe153674483"
1726
1767
1727
1768
[[package]]
1728
1769
name = "potential_utf"
1729
-
version = "0.1.2"
1770
+
version = "0.1.4"
1730
1771
source = "registry+https://github.com/rust-lang/crates.io-index"
1731
-
checksum = "e5a7c30837279ca13e7c867e9e40053bc68740f988cb07f7ca6df43cc734b585"
1772
+
checksum = "b73949432f5e2a09657003c25bca5e19a0e9c84f8058ca374f49e0ebe605af77"
1732
1773
dependencies = [
1733
1774
"zerovec",
1734
1775
]
···
1754
1795
1755
1796
[[package]]
1756
1797
name = "proc-macro2"
1757
-
version = "1.0.95"
1798
+
version = "1.0.103"
1758
1799
source = "registry+https://github.com/rust-lang/crates.io-index"
1759
-
checksum = "02b3e5e68a3a1a02aad3ec490a98007cbc13c37cbe84a3cd7b8e406d76e7f778"
1800
+
checksum = "5ee95bc4ef87b8d5ba32e8b7714ccc834865276eab0aed5c9958d00ec45f49e8"
1760
1801
dependencies = [
1761
1802
"unicode-ident",
1762
1803
]
1763
1804
1764
1805
[[package]]
1765
1806
name = "quinn"
1766
-
version = "0.11.8"
1807
+
version = "0.11.9"
1767
1808
source = "registry+https://github.com/rust-lang/crates.io-index"
1768
-
checksum = "626214629cda6781b6dc1d316ba307189c85ba657213ce642d9c77670f8202c8"
1809
+
checksum = "b9e20a958963c291dc322d98411f541009df2ced7b5a4f2bd52337638cfccf20"
1769
1810
dependencies = [
1770
1811
"bytes",
1771
1812
"cfg_aliases",
···
1774
1815
"quinn-udp",
1775
1816
"rustc-hash",
1776
1817
"rustls",
1777
-
"socket2",
1778
-
"thiserror 2.0.12",
1818
+
"socket2 0.6.1",
1819
+
"thiserror 2.0.17",
1779
1820
"tokio",
1780
1821
"tracing",
1781
1822
"web-time",
···
1783
1824
1784
1825
[[package]]
1785
1826
name = "quinn-proto"
1786
-
version = "0.11.12"
1827
+
version = "0.11.13"
1787
1828
source = "registry+https://github.com/rust-lang/crates.io-index"
1788
-
checksum = "49df843a9161c85bb8aae55f101bc0bac8bcafd637a620d9122fd7e0b2f7422e"
1829
+
checksum = "f1906b49b0c3bc04b5fe5d86a77925ae6524a19b816ae38ce1e426255f1d8a31"
1789
1830
dependencies = [
1790
1831
"bytes",
1791
-
"getrandom 0.3.3",
1832
+
"getrandom 0.3.4",
1792
1833
"lru-slab",
1793
-
"rand 0.9.1",
1834
+
"rand 0.9.2",
1794
1835
"ring",
1795
1836
"rustc-hash",
1796
1837
"rustls",
1797
1838
"rustls-pki-types",
1798
1839
"slab",
1799
-
"thiserror 2.0.12",
1840
+
"thiserror 2.0.17",
1800
1841
"tinyvec",
1801
1842
"tracing",
1802
1843
"web-time",
···
1804
1845
1805
1846
[[package]]
1806
1847
name = "quinn-udp"
1807
-
version = "0.5.12"
1848
+
version = "0.5.14"
1808
1849
source = "registry+https://github.com/rust-lang/crates.io-index"
1809
-
checksum = "ee4e529991f949c5e25755532370b8af5d114acae52326361d68d47af64aa842"
1850
+
checksum = "addec6a0dcad8a8d96a771f815f0eaf55f9d1805756410b39f5fa81332574cbd"
1810
1851
dependencies = [
1811
1852
"cfg_aliases",
1812
1853
"libc",
1813
1854
"once_cell",
1814
-
"socket2",
1855
+
"socket2 0.6.1",
1815
1856
"tracing",
1816
-
"windows-sys 0.59.0",
1857
+
"windows-sys 0.60.2",
1817
1858
]
1818
1859
1819
1860
[[package]]
1820
1861
name = "quote"
1821
-
version = "1.0.40"
1862
+
version = "1.0.41"
1822
1863
source = "registry+https://github.com/rust-lang/crates.io-index"
1823
-
checksum = "1885c039570dc00dcb4ff087a89e185fd56bae234ddc7f056a945bf36467248d"
1864
+
checksum = "ce25767e7b499d1b604768e7cde645d14cc8584231ea6b295e9c9eb22c02e1d1"
1824
1865
dependencies = [
1825
1866
"proc-macro2",
1826
1867
]
1827
1868
1828
1869
[[package]]
1829
1870
name = "r-efi"
1830
-
version = "5.2.0"
1871
+
version = "5.3.0"
1831
1872
source = "registry+https://github.com/rust-lang/crates.io-index"
1832
-
checksum = "74765f6d916ee2faa39bc8e68e4f3ed8949b48cccdac59983d287a7cb71ce9c5"
1873
+
checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f"
1833
1874
1834
1875
[[package]]
1835
1876
name = "rand"
···
1844
1885
1845
1886
[[package]]
1846
1887
name = "rand"
1847
-
version = "0.9.1"
1888
+
version = "0.9.2"
1848
1889
source = "registry+https://github.com/rust-lang/crates.io-index"
1849
-
checksum = "9fbfd9d094a40bf3ae768db9361049ace4c0e04a4fd6b359518bd7b73a73dd97"
1890
+
checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
1850
1891
dependencies = [
1851
1892
"rand_chacha 0.9.0",
1852
1893
"rand_core 0.9.3",
···
1887
1928
source = "registry+https://github.com/rust-lang/crates.io-index"
1888
1929
checksum = "99d9a13982dcf210057a8a78572b2217b667c3beacbf3a0d8b454f6f82837d38"
1889
1930
dependencies = [
1890
-
"getrandom 0.3.3",
1931
+
"getrandom 0.3.4",
1891
1932
]
1892
1933
1893
1934
[[package]]
1894
1935
name = "redox_syscall"
1895
-
version = "0.5.12"
1936
+
version = "0.5.18"
1896
1937
source = "registry+https://github.com/rust-lang/crates.io-index"
1897
-
checksum = "928fca9cf2aa042393a8325b9ead81d2f0df4cb12e1e24cef072922ccd99c5af"
1938
+
checksum = "ed2bf2547551a7053d6fdfafda3f938979645c44812fbfcda098faae3f1a362d"
1898
1939
dependencies = [
1899
1940
"bitflags",
1900
1941
]
1901
1942
1902
1943
[[package]]
1903
1944
name = "regex"
1904
-
version = "1.11.1"
1945
+
version = "1.12.2"
1905
1946
source = "registry+https://github.com/rust-lang/crates.io-index"
1906
-
checksum = "b544ef1b4eac5dc2db33ea63606ae9ffcfac26c1416a2806ae0bf5f56b201191"
1947
+
checksum = "843bc0191f75f3e22651ae5f1e72939ab2f72a4bc30fa80a066bd66edefc24d4"
1907
1948
dependencies = [
1908
1949
"aho-corasick",
1909
1950
"memchr",
1910
-
"regex-automata 0.4.9",
1911
-
"regex-syntax 0.8.5",
1912
-
]
1913
-
1914
-
[[package]]
1915
-
name = "regex-automata"
1916
-
version = "0.1.10"
1917
-
source = "registry+https://github.com/rust-lang/crates.io-index"
1918
-
checksum = "6c230d73fb8d8c1b9c0b3135c5142a8acee3a0558fb8db5cf1cb65f8d7862132"
1919
-
dependencies = [
1920
-
"regex-syntax 0.6.29",
1951
+
"regex-automata",
1952
+
"regex-syntax",
1921
1953
]
1922
1954
1923
1955
[[package]]
1924
1956
name = "regex-automata"
1925
-
version = "0.4.9"
1957
+
version = "0.4.13"
1926
1958
source = "registry+https://github.com/rust-lang/crates.io-index"
1927
-
checksum = "809e8dc61f6de73b46c85f4c96486310fe304c434cfa43669d7b40f711150908"
1959
+
checksum = "5276caf25ac86c8d810222b3dbb938e512c55c6831a10f3e6ed1c93b84041f1c"
1928
1960
dependencies = [
1929
1961
"aho-corasick",
1930
1962
"memchr",
1931
-
"regex-syntax 0.8.5",
1963
+
"regex-syntax",
1932
1964
]
1933
1965
1934
1966
[[package]]
1935
1967
name = "regex-syntax"
1936
-
version = "0.6.29"
1968
+
version = "0.8.8"
1937
1969
source = "registry+https://github.com/rust-lang/crates.io-index"
1938
-
checksum = "f162c6dd7b008981e4d40210aca20b4bd0f9b60ca9271061b07f78537722f2e1"
1939
-
1940
-
[[package]]
1941
-
name = "regex-syntax"
1942
-
version = "0.8.5"
1943
-
source = "registry+https://github.com/rust-lang/crates.io-index"
1944
-
checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"
1970
+
checksum = "7a2d987857b319362043e95f5353c0535c1f58eec5336fdfcf626430af7def58"
1945
1971
1946
1972
[[package]]
1947
1973
name = "reqwest"
1948
-
version = "0.12.18"
1974
+
version = "0.12.24"
1949
1975
source = "registry+https://github.com/rust-lang/crates.io-index"
1950
-
checksum = "e98ff6b0dbbe4d5a37318f433d4fc82babd21631f194d370409ceb2e40b2f0b5"
1976
+
checksum = "9d0946410b9f7b082a427e4ef5c8ff541a88b357bc6c637c40db3a68ac70a36f"
1951
1977
dependencies = [
1952
1978
"base64",
1953
1979
"bytes",
···
1961
1987
"hyper",
1962
1988
"hyper-rustls",
1963
1989
"hyper-util",
1964
-
"ipnet",
1965
1990
"js-sys",
1966
1991
"log",
1967
1992
"mime",
1968
1993
"mime_guess",
1969
-
"once_cell",
1970
1994
"percent-encoding",
1971
1995
"pin-project-lite",
1972
1996
"quinn",
···
2017
2041
2018
2042
[[package]]
2019
2043
name = "resolv-conf"
2020
-
version = "0.7.4"
2044
+
version = "0.7.5"
2021
2045
source = "registry+https://github.com/rust-lang/crates.io-index"
2022
-
checksum = "95325155c684b1c89f7765e30bc1c42e4a6da51ca513615660cb8a62ef9a88e3"
2046
+
checksum = "6b3789b30bd25ba102de4beabd95d21ac45b69b1be7d14522bab988c526d6799"
2023
2047
2024
2048
[[package]]
2025
2049
name = "rfc6979"
···
2067
2091
]
2068
2092
2069
2093
[[package]]
2070
-
name = "rustc-demangle"
2071
-
version = "0.1.24"
2072
-
source = "registry+https://github.com/rust-lang/crates.io-index"
2073
-
checksum = "719b953e2095829ee67db738b3bfa9fa368c94900df327b3f07fe6e794d2fe1f"
2074
-
2075
-
[[package]]
2076
2094
name = "rustc-hash"
2077
2095
version = "2.1.1"
2078
2096
source = "registry+https://github.com/rust-lang/crates.io-index"
···
2089
2107
2090
2108
[[package]]
2091
2109
name = "rustls"
2092
-
version = "0.23.27"
2110
+
version = "0.23.35"
2093
2111
source = "registry+https://github.com/rust-lang/crates.io-index"
2094
-
checksum = "730944ca083c1c233a75c09f199e973ca499344a2b7ba9e755c457e86fb4a321"
2112
+
checksum = "533f54bc6a7d4f647e46ad909549eda97bf5afc1585190ef692b4286b198bd8f"
2095
2113
dependencies = [
2096
2114
"once_cell",
2097
2115
"ring",
···
2103
2121
2104
2122
[[package]]
2105
2123
name = "rustls-native-certs"
2106
-
version = "0.8.1"
2124
+
version = "0.8.2"
2107
2125
source = "registry+https://github.com/rust-lang/crates.io-index"
2108
-
checksum = "7fcff2dd52b58a8d98a70243663a0d234c4e2b79235637849d15913394a247d3"
2126
+
checksum = "9980d917ebb0c0536119ba501e90834767bffc3d60641457fd84a1f3fd337923"
2109
2127
dependencies = [
2110
2128
"openssl-probe",
2111
2129
"rustls-pki-types",
···
2115
2133
2116
2134
[[package]]
2117
2135
name = "rustls-pki-types"
2118
-
version = "1.12.0"
2136
+
version = "1.13.0"
2119
2137
source = "registry+https://github.com/rust-lang/crates.io-index"
2120
-
checksum = "229a4a4c221013e7e1f1a043678c5cc39fe5171437c88fb47151a21e6f5b5c79"
2138
+
checksum = "94182ad936a0c91c324cd46c6511b9510ed16af436d7b5bab34beab0afd55f7a"
2121
2139
dependencies = [
2122
2140
"web-time",
2123
2141
"zeroize",
···
2125
2143
2126
2144
[[package]]
2127
2145
name = "rustls-webpki"
2128
-
version = "0.103.3"
2146
+
version = "0.103.8"
2129
2147
source = "registry+https://github.com/rust-lang/crates.io-index"
2130
-
checksum = "e4a72fe2bcf7a6ac6fd7d0b9e5cb68aeb7d4c0a0271730218b3e92d43b4eb435"
2148
+
checksum = "2ffdfa2f5286e2247234e03f680868ac2815974dc39e00ea15adc445d0aafe52"
2131
2149
dependencies = [
2132
2150
"ring",
2133
2151
"rustls-pki-types",
···
2136
2154
2137
2155
[[package]]
2138
2156
name = "rustversion"
2139
-
version = "1.0.21"
2157
+
version = "1.0.22"
2140
2158
source = "registry+https://github.com/rust-lang/crates.io-index"
2141
-
checksum = "8a0d197bd2c9dc6e53b84da9556a69ba4cdfab8619eb41a8bd1cc2027a0f6b1d"
2159
+
checksum = "b39cdef0fa800fc44525c84ccb54a029961a8215f9619753635a9c0d2538d46d"
2142
2160
2143
2161
[[package]]
2144
2162
name = "ryu"
···
2148
2166
2149
2167
[[package]]
2150
2168
name = "schannel"
2151
-
version = "0.1.27"
2169
+
version = "0.1.28"
2152
2170
source = "registry+https://github.com/rust-lang/crates.io-index"
2153
-
checksum = "1f29ebaa345f945cec9fbbc532eb307f0fdad8161f281b6369539c8d84876b3d"
2171
+
checksum = "891d81b926048e76efe18581bf793546b4c0eaf8448d72be8de2bbee5fd166e1"
2154
2172
dependencies = [
2155
-
"windows-sys 0.59.0",
2173
+
"windows-sys 0.61.2",
2156
2174
]
2157
-
2158
-
[[package]]
2159
-
name = "scoped-tls"
2160
-
version = "1.0.1"
2161
-
source = "registry+https://github.com/rust-lang/crates.io-index"
2162
-
checksum = "e1cf6437eb19a8f4a6cc0f7dca544973b0b78843adbfeb3683d1a94a0024a294"
2163
2175
2164
2176
[[package]]
2165
2177
name = "scopeguard"
···
2194
2206
2195
2207
[[package]]
2196
2208
name = "security-framework"
2197
-
version = "3.2.0"
2209
+
version = "3.5.1"
2198
2210
source = "registry+https://github.com/rust-lang/crates.io-index"
2199
-
checksum = "271720403f46ca04f7ba6f55d438f8bd878d6b8ca0a1046e8228c4145bcbb316"
2211
+
checksum = "b3297343eaf830f66ede390ea39da1d462b6b0c1b000f420d0a83f898bbbe6ef"
2200
2212
dependencies = [
2201
2213
"bitflags",
2202
2214
"core-foundation 0.10.1",
···
2207
2219
2208
2220
[[package]]
2209
2221
name = "security-framework-sys"
2210
-
version = "2.14.0"
2222
+
version = "2.15.0"
2211
2223
source = "registry+https://github.com/rust-lang/crates.io-index"
2212
-
checksum = "49db231d56a190491cb4aeda9527f1ad45345af50b0851622a7adb8c03b01c32"
2224
+
checksum = "cc1f0cbffaac4852523ce30d8bd3c5cdc873501d96ff467ca09b6767bb8cd5c0"
2213
2225
dependencies = [
2214
2226
"core-foundation-sys",
2215
2227
"libc",
···
2217
2229
2218
2230
[[package]]
2219
2231
name = "semver"
2220
-
version = "1.0.26"
2232
+
version = "1.0.27"
2221
2233
source = "registry+https://github.com/rust-lang/crates.io-index"
2222
-
checksum = "56e6fa9c48d24d85fb3de5ad847117517440f6beceb7798af16b4a87d616b8d0"
2234
+
checksum = "d767eb0aabc880b29956c35734170f26ed551a859dbd361d140cdbeca61ab1e2"
2223
2235
2224
2236
[[package]]
2225
2237
name = "serde"
2226
-
version = "1.0.219"
2238
+
version = "1.0.228"
2227
2239
source = "registry+https://github.com/rust-lang/crates.io-index"
2228
-
checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
2240
+
checksum = "9a8e94ea7f378bd32cbbd37198a4a91436180c5bb472411e48b5ec2e2124ae9e"
2229
2241
dependencies = [
2242
+
"serde_core",
2230
2243
"serde_derive",
2231
2244
]
2232
2245
2233
2246
[[package]]
2234
2247
name = "serde_bytes"
2235
-
version = "0.11.17"
2248
+
version = "0.11.19"
2236
2249
source = "registry+https://github.com/rust-lang/crates.io-index"
2237
-
checksum = "8437fd221bde2d4ca316d61b90e337e9e702b3820b87d63caa9ba6c02bd06d96"
2250
+
checksum = "a5d440709e79d88e51ac01c4b72fc6cb7314017bb7da9eeff678aa94c10e3ea8"
2238
2251
dependencies = [
2239
2252
"serde",
2253
+
"serde_core",
2254
+
]
2255
+
2256
+
[[package]]
2257
+
name = "serde_core"
2258
+
version = "1.0.228"
2259
+
source = "registry+https://github.com/rust-lang/crates.io-index"
2260
+
checksum = "41d385c7d4ca58e59fc732af25c3983b67ac852c1a25000afe1175de458b67ad"
2261
+
dependencies = [
2262
+
"serde_derive",
2240
2263
]
2241
2264
2242
2265
[[package]]
2243
2266
name = "serde_derive"
2244
-
version = "1.0.219"
2267
+
version = "1.0.228"
2245
2268
source = "registry+https://github.com/rust-lang/crates.io-index"
2246
-
checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
2269
+
checksum = "d540f220d3187173da220f885ab66608367b6574e925011a9353e4badda91d79"
2247
2270
dependencies = [
2248
2271
"proc-macro2",
2249
2272
"quote",
2250
-
"syn",
2273
+
"syn 2.0.109",
2251
2274
]
2252
2275
2253
2276
[[package]]
2254
2277
name = "serde_ipld_dagcbor"
2255
-
version = "0.6.3"
2278
+
version = "0.6.4"
2256
2279
source = "registry+https://github.com/rust-lang/crates.io-index"
2257
-
checksum = "99600723cf53fb000a66175555098db7e75217c415bdd9a16a65d52a19dcc4fc"
2280
+
checksum = "46182f4f08349a02b45c998ba3215d3f9de826246ba02bb9dddfe9a2a2100778"
2258
2281
dependencies = [
2259
2282
"cbor4ii",
2260
2283
"ipld-core",
···
2264
2287
2265
2288
[[package]]
2266
2289
name = "serde_json"
2267
-
version = "1.0.140"
2290
+
version = "1.0.145"
2268
2291
source = "registry+https://github.com/rust-lang/crates.io-index"
2269
-
checksum = "20068b6e96dc6c9bd23e01df8827e6c7e1f2fddd43c21810382803c136b99373"
2292
+
checksum = "402a6f66d8c709116cf22f558eab210f5a50187f702eb4d7e5ef38d9a7f1c79c"
2270
2293
dependencies = [
2294
+
"indexmap",
2271
2295
"itoa",
2272
2296
"memchr",
2273
2297
"ryu",
2274
2298
"serde",
2299
+
"serde_core",
2275
2300
]
2276
2301
2277
2302
[[package]]
2278
2303
name = "serde_path_to_error"
2279
-
version = "0.1.17"
2304
+
version = "0.1.20"
2280
2305
source = "registry+https://github.com/rust-lang/crates.io-index"
2281
-
checksum = "59fab13f937fa393d08645bf3a84bdfe86e296747b506ada67bb15f10f218b2a"
2306
+
checksum = "10a9ff822e371bb5403e391ecd83e182e0e77ba7f6fe0160b795797109d1b457"
2282
2307
dependencies = [
2283
2308
"itoa",
2284
2309
"serde",
2310
+
"serde_core",
2285
2311
]
2286
2312
2287
2313
[[package]]
···
2334
2360
2335
2361
[[package]]
2336
2362
name = "signal-hook-registry"
2337
-
version = "1.4.5"
2363
+
version = "1.4.6"
2338
2364
source = "registry+https://github.com/rust-lang/crates.io-index"
2339
-
checksum = "9203b8055f63a2a00e2f593bb0510367fe707d7ff1e5c872de2f537b339e5410"
2365
+
checksum = "b2a4719bff48cee6b39d12c020eeb490953ad2443b7055bd0b21fca26bd8c28b"
2340
2366
dependencies = [
2341
2367
"libc",
2342
2368
]
···
2359
2385
2360
2386
[[package]]
2361
2387
name = "slab"
2362
-
version = "0.4.9"
2388
+
version = "0.4.11"
2363
2389
source = "registry+https://github.com/rust-lang/crates.io-index"
2364
-
checksum = "8f92a496fb766b417c996b9c5e57daf2f7ad3b0bebe1ccfca4856390e3d3bb67"
2365
-
dependencies = [
2366
-
"autocfg",
2367
-
]
2390
+
checksum = "7a2ae44ef20feb57a68b23d846850f861394c2e02dc425a50098ae8c90267589"
2368
2391
2369
2392
[[package]]
2370
2393
name = "smallvec"
2371
-
version = "1.15.0"
2394
+
version = "1.15.1"
2372
2395
source = "registry+https://github.com/rust-lang/crates.io-index"
2373
-
checksum = "8917285742e9f3e1683f0a9c4e6b57960b7314d0b08d30d1ecd426713ee2eee9"
2396
+
checksum = "67b1b7a3b5fe4f1376887184045fcf45c69e92af734b7aaddc05fb777b6fbd03"
2374
2397
2375
2398
[[package]]
2376
2399
name = "socket2"
···
2383
2406
]
2384
2407
2385
2408
[[package]]
2409
+
name = "socket2"
2410
+
version = "0.6.1"
2411
+
source = "registry+https://github.com/rust-lang/crates.io-index"
2412
+
checksum = "17129e116933cf371d018bb80ae557e889637989d8638274fb25622827b03881"
2413
+
dependencies = [
2414
+
"libc",
2415
+
"windows-sys 0.60.2",
2416
+
]
2417
+
2418
+
[[package]]
2386
2419
name = "spki"
2387
2420
version = "0.7.3"
2388
2421
source = "registry+https://github.com/rust-lang/crates.io-index"
···
2394
2427
2395
2428
[[package]]
2396
2429
name = "stable_deref_trait"
2397
-
version = "1.2.0"
2430
+
version = "1.2.1"
2431
+
source = "registry+https://github.com/rust-lang/crates.io-index"
2432
+
checksum = "6ce2be8dc25455e1f91df71bfa12ad37d7af1092ae736f3a6cd0e37bc7810596"
2433
+
2434
+
[[package]]
2435
+
name = "static_assertions"
2436
+
version = "1.1.0"
2398
2437
source = "registry+https://github.com/rust-lang/crates.io-index"
2399
-
checksum = "a8f112729512f8e442d81f95a8a7ddf2b7c6b8a1a6f509a95864142b30cab2d3"
2438
+
checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
2400
2439
2401
2440
[[package]]
2402
2441
name = "strsim"
···
2412
2451
2413
2452
[[package]]
2414
2453
name = "syn"
2415
-
version = "2.0.101"
2454
+
version = "1.0.109"
2455
+
source = "registry+https://github.com/rust-lang/crates.io-index"
2456
+
checksum = "72b64191b275b66ffe2469e8af2c1cfe3bafa67b529ead792a6d0160888b4237"
2457
+
dependencies = [
2458
+
"proc-macro2",
2459
+
"quote",
2460
+
"unicode-ident",
2461
+
]
2462
+
2463
+
[[package]]
2464
+
name = "syn"
2465
+
version = "2.0.109"
2416
2466
source = "registry+https://github.com/rust-lang/crates.io-index"
2417
-
checksum = "8ce2b7fc941b3a24138a0a7cf8e858bfc6a992e7978a068a5c760deb0ed43caf"
2467
+
checksum = "2f17c7e013e88258aa9543dcbe81aca68a667a9ac37cd69c9fbc07858bfe0e2f"
2418
2468
dependencies = [
2419
2469
"proc-macro2",
2420
2470
"quote",
···
2438
2488
dependencies = [
2439
2489
"proc-macro2",
2440
2490
"quote",
2441
-
"syn",
2491
+
"syn 2.0.109",
2442
2492
]
2443
2493
2444
2494
[[package]]
···
2479
2529
2480
2530
[[package]]
2481
2531
name = "thiserror"
2482
-
version = "2.0.12"
2532
+
version = "2.0.17"
2483
2533
source = "registry+https://github.com/rust-lang/crates.io-index"
2484
-
checksum = "567b8a2dae586314f7be2a752ec7474332959c6460e02bde30d702a66d488708"
2534
+
checksum = "f63587ca0f12b72a0600bcba1d40081f830876000bb46dd2337a3051618f4fc8"
2485
2535
dependencies = [
2486
-
"thiserror-impl 2.0.12",
2536
+
"thiserror-impl 2.0.17",
2487
2537
]
2488
2538
2489
2539
[[package]]
···
2494
2544
dependencies = [
2495
2545
"proc-macro2",
2496
2546
"quote",
2497
-
"syn",
2547
+
"syn 2.0.109",
2498
2548
]
2499
2549
2500
2550
[[package]]
2501
2551
name = "thiserror-impl"
2502
-
version = "2.0.12"
2552
+
version = "2.0.17"
2503
2553
source = "registry+https://github.com/rust-lang/crates.io-index"
2504
-
checksum = "7f7cf42b4507d8ea322120659672cf1b9dbb93f8f2d4ecfd6e51350ff5b17a1d"
2554
+
checksum = "3ff15c8ecd7de3849db632e14d18d2571fa09dfc5ed93479bc4485c7a517c913"
2505
2555
dependencies = [
2506
2556
"proc-macro2",
2507
2557
"quote",
2508
-
"syn",
2558
+
"syn 2.0.109",
2509
2559
]
2510
2560
2511
2561
[[package]]
2512
2562
name = "thread_local"
2513
-
version = "1.1.8"
2563
+
version = "1.1.9"
2514
2564
source = "registry+https://github.com/rust-lang/crates.io-index"
2515
-
checksum = "8b9ef9bad013ada3808854ceac7b46812a6465ba368859a37e2100283d2d719c"
2565
+
checksum = "f60246a4944f24f6e018aa17cdeffb7818b76356965d03b07d6a9886e8962185"
2516
2566
dependencies = [
2517
2567
"cfg-if",
2518
-
"once_cell",
2519
2568
]
2520
2569
2521
2570
[[package]]
2522
2571
name = "tinystr"
2523
-
version = "0.8.1"
2572
+
version = "0.8.2"
2524
2573
source = "registry+https://github.com/rust-lang/crates.io-index"
2525
-
checksum = "5d4f6d1145dcb577acf783d4e601bc1d76a13337bb54e6233add580b07344c8b"
2574
+
checksum = "42d3e9c45c09de15d06dd8acf5f4e0e399e85927b7f00711024eb7ae10fa4869"
2526
2575
dependencies = [
2527
2576
"displaydoc",
2528
2577
"zerovec",
···
2530
2579
2531
2580
[[package]]
2532
2581
name = "tinyvec"
2533
-
version = "1.9.0"
2582
+
version = "1.10.0"
2534
2583
source = "registry+https://github.com/rust-lang/crates.io-index"
2535
-
checksum = "09b3661f17e86524eccd4371ab0429194e0d7c008abb45f7a7495b1719463c71"
2584
+
checksum = "bfa5fdc3bce6191a1dbc8c02d5c8bffcf557bafa17c124c5264a458f1b0613fa"
2536
2585
dependencies = [
2537
2586
"tinyvec_macros",
2538
2587
]
···
2545
2594
2546
2595
[[package]]
2547
2596
name = "tokio"
2548
-
version = "1.45.1"
2597
+
version = "1.48.0"
2549
2598
source = "registry+https://github.com/rust-lang/crates.io-index"
2550
-
checksum = "75ef51a33ef1da925cea3e4eb122833cb377c61439ca401b770f54902b806779"
2599
+
checksum = "ff360e02eab121e0bc37a2d3b4d4dc622e6eda3a8e5253d5435ecf5bd4c68408"
2551
2600
dependencies = [
2552
-
"backtrace",
2553
2601
"bytes",
2554
2602
"libc",
2555
2603
"mio",
2556
2604
"parking_lot",
2557
2605
"pin-project-lite",
2558
2606
"signal-hook-registry",
2559
-
"socket2",
2607
+
"socket2 0.6.1",
2560
2608
"tokio-macros",
2561
-
"windows-sys 0.52.0",
2609
+
"windows-sys 0.61.2",
2562
2610
]
2563
2611
2564
2612
[[package]]
2565
2613
name = "tokio-macros"
2566
-
version = "2.5.0"
2614
+
version = "2.6.0"
2567
2615
source = "registry+https://github.com/rust-lang/crates.io-index"
2568
-
checksum = "6e06d43f1345a3bcd39f6a56dbb7dcab2ba47e68e8ac134855e7e2bdbaf8cab8"
2616
+
checksum = "af407857209536a95c8e56f8231ef2c2e2aff839b22e07a1ffcbc617e9db9fa5"
2569
2617
dependencies = [
2570
2618
"proc-macro2",
2571
2619
"quote",
2572
-
"syn",
2620
+
"syn 2.0.109",
2573
2621
]
2574
2622
2575
2623
[[package]]
2576
2624
name = "tokio-rustls"
2577
-
version = "0.26.2"
2625
+
version = "0.26.4"
2578
2626
source = "registry+https://github.com/rust-lang/crates.io-index"
2579
-
checksum = "8e727b36a1a0e8b74c376ac2211e40c2c8af09fb4013c60d910495810f008e9b"
2627
+
checksum = "1729aa945f29d91ba541258c8df89027d5792d85a8841fb65e8bf0f4ede4ef61"
2580
2628
dependencies = [
2581
2629
"rustls",
2582
2630
"tokio",
2583
2631
]
2584
2632
2585
2633
[[package]]
2634
+
name = "tokio-stream"
2635
+
version = "0.1.17"
2636
+
source = "registry+https://github.com/rust-lang/crates.io-index"
2637
+
checksum = "eca58d7bba4a75707817a2c44174253f9236b2d5fbd055602e9d5c07c139a047"
2638
+
dependencies = [
2639
+
"futures-core",
2640
+
"pin-project-lite",
2641
+
"tokio",
2642
+
]
2643
+
2644
+
[[package]]
2586
2645
name = "tokio-util"
2587
-
version = "0.7.15"
2646
+
version = "0.7.17"
2588
2647
source = "registry+https://github.com/rust-lang/crates.io-index"
2589
-
checksum = "66a539a9ad6d5d281510d5bd368c973d636c02dbf8a67300bfb6b950696ad7df"
2648
+
checksum = "2efa149fe76073d6e8fd97ef4f4eca7b67f599660115591483572e406e165594"
2590
2649
dependencies = [
2591
2650
"bytes",
2592
2651
"futures-core",
···
2607
2666
"futures-sink",
2608
2667
"http",
2609
2668
"httparse",
2610
-
"rand 0.9.1",
2669
+
"rand 0.9.2",
2611
2670
"ring",
2612
2671
"rustls-native-certs",
2613
2672
"rustls-pki-types",
···
2635
2694
2636
2695
[[package]]
2637
2696
name = "tower-http"
2638
-
version = "0.6.4"
2697
+
version = "0.6.6"
2639
2698
source = "registry+https://github.com/rust-lang/crates.io-index"
2640
-
checksum = "0fdb0c213ca27a9f57ab69ddb290fd80d970922355b83ae380b395d3986b8a2e"
2699
+
checksum = "adc82fd73de2a9722ac5da747f12383d2bfdb93591ee6c58486e0097890f05f2"
2641
2700
dependencies = [
2642
2701
"bitflags",
2643
2702
"bytes",
···
2677
2736
2678
2737
[[package]]
2679
2738
name = "tracing-attributes"
2680
-
version = "0.1.28"
2739
+
version = "0.1.30"
2681
2740
source = "registry+https://github.com/rust-lang/crates.io-index"
2682
-
checksum = "395ae124c09f9e6918a2310af6038fba074bcf474ac352496d5910dd59a2226d"
2741
+
checksum = "81383ab64e72a7a8b8e13130c49e3dab29def6d0c7d76a03087b3cf71c5c6903"
2683
2742
dependencies = [
2684
2743
"proc-macro2",
2685
2744
"quote",
2686
-
"syn",
2745
+
"syn 2.0.109",
2687
2746
]
2688
2747
2689
2748
[[package]]
2690
2749
name = "tracing-core"
2691
-
version = "0.1.33"
2750
+
version = "0.1.34"
2692
2751
source = "registry+https://github.com/rust-lang/crates.io-index"
2693
-
checksum = "e672c95779cf947c5311f83787af4fa8fffd12fb27e4993211a84bdfd9610f9c"
2752
+
checksum = "b9d12581f227e93f094d3af2ae690a574abb8a2b9b7a96e7cfe9647b2b617678"
2694
2753
dependencies = [
2695
2754
"once_cell",
2696
2755
"valuable",
···
2709
2768
2710
2769
[[package]]
2711
2770
name = "tracing-subscriber"
2712
-
version = "0.3.19"
2771
+
version = "0.3.20"
2713
2772
source = "registry+https://github.com/rust-lang/crates.io-index"
2714
-
checksum = "e8189decb5ac0fa7bc8b96b7cb9b2701d60d48805aca84a238004d665fcc4008"
2773
+
checksum = "2054a14f5307d601f88daf0553e1cbf472acc4f2c51afab632431cdcd72124d5"
2715
2774
dependencies = [
2716
2775
"matchers",
2717
2776
"nu-ansi-term",
2718
2777
"once_cell",
2719
-
"regex",
2778
+
"regex-automata",
2720
2779
"sharded-slab",
2721
2780
"smallvec",
2722
2781
"thread_local",
···
2733
2792
2734
2793
[[package]]
2735
2794
name = "typenum"
2736
-
version = "1.18.0"
2795
+
version = "1.19.0"
2737
2796
source = "registry+https://github.com/rust-lang/crates.io-index"
2738
-
checksum = "1dccffe3ce07af9386bfd29e80c0ab1a8205a2fc34e4bcd40364df902cfa8f3f"
2797
+
checksum = "562d481066bde0658276a35467c4af00bdc6ee726305698a55b86e61d7ad82bb"
2739
2798
2740
2799
[[package]]
2741
2800
name = "ulid"
···
2743
2802
source = "registry+https://github.com/rust-lang/crates.io-index"
2744
2803
checksum = "470dbf6591da1b39d43c14523b2b469c86879a53e8b758c8e090a470fe7b1fbe"
2745
2804
dependencies = [
2746
-
"rand 0.9.1",
2805
+
"rand 0.9.2",
2747
2806
"web-time",
2748
2807
]
2749
2808
···
2755
2814
2756
2815
[[package]]
2757
2816
name = "unicode-ident"
2758
-
version = "1.0.18"
2817
+
version = "1.0.22"
2759
2818
source = "registry+https://github.com/rust-lang/crates.io-index"
2760
-
checksum = "5a5f39404a5da50712a4c1eecf25e90dd62b613502b7e925fd4e4d19b5c96512"
2819
+
checksum = "9312f7c4f6ff9069b165498234ce8be658059c6728633667c526e27dc2cf1df5"
2761
2820
2762
2821
[[package]]
2763
2822
name = "unsigned-varint"
···
2773
2832
2774
2833
[[package]]
2775
2834
name = "url"
2776
-
version = "2.5.4"
2835
+
version = "2.5.7"
2777
2836
source = "registry+https://github.com/rust-lang/crates.io-index"
2778
-
checksum = "32f8b686cadd1473f4bd0117a5d28d36b1ade384ea9b5069a1c40aefed7fda60"
2837
+
checksum = "08bc136a29a3d1758e07a9cca267be308aeebf5cfd5a10f3f67ab2097683ef5b"
2779
2838
dependencies = [
2780
2839
"form_urlencoded",
2781
2840
"idna",
2782
2841
"percent-encoding",
2842
+
"serde",
2783
2843
]
2784
2844
2785
2845
[[package]]
···
2802
2862
2803
2863
[[package]]
2804
2864
name = "uuid"
2805
-
version = "1.17.0"
2865
+
version = "1.18.1"
2806
2866
source = "registry+https://github.com/rust-lang/crates.io-index"
2807
-
checksum = "3cf4199d1e5d15ddd86a694e4d0dffa9c323ce759fea589f00fef9d81cc1931d"
2867
+
checksum = "2f87b8aa10b915a06587d0dec516c282ff295b475d94abf425d62b57710070a2"
2808
2868
dependencies = [
2809
-
"getrandom 0.3.3",
2869
+
"getrandom 0.3.4",
2810
2870
"js-sys",
2811
2871
"wasm-bindgen",
2812
2872
]
···
2834
2894
2835
2895
[[package]]
2836
2896
name = "wasi"
2837
-
version = "0.11.0+wasi-snapshot-preview1"
2897
+
version = "0.11.1+wasi-snapshot-preview1"
2838
2898
source = "registry+https://github.com/rust-lang/crates.io-index"
2839
-
checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423"
2899
+
checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b"
2840
2900
2841
2901
[[package]]
2842
-
name = "wasi"
2843
-
version = "0.14.2+wasi-0.2.4"
2902
+
name = "wasip2"
2903
+
version = "1.0.1+wasi-0.2.4"
2844
2904
source = "registry+https://github.com/rust-lang/crates.io-index"
2845
-
checksum = "9683f9a5a998d873c0d21fcbe3c083009670149a8fab228644b8bd36b2c48cb3"
2905
+
checksum = "0562428422c63773dad2c345a1882263bbf4d65cf3f42e90921f787ef5ad58e7"
2846
2906
dependencies = [
2847
-
"wit-bindgen-rt",
2907
+
"wit-bindgen",
2848
2908
]
2849
2909
2850
2910
[[package]]
2851
2911
name = "wasm-bindgen"
2852
-
version = "0.2.100"
2912
+
version = "0.2.105"
2853
2913
source = "registry+https://github.com/rust-lang/crates.io-index"
2854
-
checksum = "1edc8929d7499fc4e8f0be2262a241556cfc54a0bea223790e71446f2aab1ef5"
2914
+
checksum = "da95793dfc411fbbd93f5be7715b0578ec61fe87cb1a42b12eb625caa5c5ea60"
2855
2915
dependencies = [
2856
2916
"cfg-if",
2857
2917
"once_cell",
2858
2918
"rustversion",
2859
2919
"wasm-bindgen-macro",
2860
-
]
2861
-
2862
-
[[package]]
2863
-
name = "wasm-bindgen-backend"
2864
-
version = "0.2.100"
2865
-
source = "registry+https://github.com/rust-lang/crates.io-index"
2866
-
checksum = "2f0a0651a5c2bc21487bde11ee802ccaf4c51935d0d3d42a6101f98161700bc6"
2867
-
dependencies = [
2868
-
"bumpalo",
2869
-
"log",
2870
-
"proc-macro2",
2871
-
"quote",
2872
-
"syn",
2873
2920
"wasm-bindgen-shared",
2874
2921
]
2875
2922
2876
2923
[[package]]
2877
2924
name = "wasm-bindgen-futures"
2878
-
version = "0.4.50"
2925
+
version = "0.4.55"
2879
2926
source = "registry+https://github.com/rust-lang/crates.io-index"
2880
-
checksum = "555d470ec0bc3bb57890405e5d4322cc9ea83cebb085523ced7be4144dac1e61"
2927
+
checksum = "551f88106c6d5e7ccc7cd9a16f312dd3b5d36ea8b4954304657d5dfba115d4a0"
2881
2928
dependencies = [
2882
2929
"cfg-if",
2883
2930
"js-sys",
···
2888
2935
2889
2936
[[package]]
2890
2937
name = "wasm-bindgen-macro"
2891
-
version = "0.2.100"
2938
+
version = "0.2.105"
2892
2939
source = "registry+https://github.com/rust-lang/crates.io-index"
2893
-
checksum = "7fe63fc6d09ed3792bd0897b314f53de8e16568c2b3f7982f468c0bf9bd0b407"
2940
+
checksum = "04264334509e04a7bf8690f2384ef5265f05143a4bff3889ab7a3269adab59c2"
2894
2941
dependencies = [
2895
2942
"quote",
2896
2943
"wasm-bindgen-macro-support",
···
2898
2945
2899
2946
[[package]]
2900
2947
name = "wasm-bindgen-macro-support"
2901
-
version = "0.2.100"
2948
+
version = "0.2.105"
2902
2949
source = "registry+https://github.com/rust-lang/crates.io-index"
2903
-
checksum = "8ae87ea40c9f689fc23f209965b6fb8a99ad69aeeb0231408be24920604395de"
2950
+
checksum = "420bc339d9f322e562942d52e115d57e950d12d88983a14c79b86859ee6c7ebc"
2904
2951
dependencies = [
2952
+
"bumpalo",
2905
2953
"proc-macro2",
2906
2954
"quote",
2907
-
"syn",
2908
-
"wasm-bindgen-backend",
2955
+
"syn 2.0.109",
2909
2956
"wasm-bindgen-shared",
2910
2957
]
2911
2958
2912
2959
[[package]]
2913
2960
name = "wasm-bindgen-shared"
2914
-
version = "0.2.100"
2961
+
version = "0.2.105"
2915
2962
source = "registry+https://github.com/rust-lang/crates.io-index"
2916
-
checksum = "1a05d73b933a847d6cccdda8f838a22ff101ad9bf93e33684f39c1f5f0eece3d"
2963
+
checksum = "76f218a38c84bcb33c25ec7059b07847d465ce0e0a76b995e134a45adcb6af76"
2917
2964
dependencies = [
2918
2965
"unicode-ident",
2919
2966
]
2920
2967
2921
2968
[[package]]
2922
2969
name = "web-sys"
2923
-
version = "0.3.77"
2970
+
version = "0.3.82"
2924
2971
source = "registry+https://github.com/rust-lang/crates.io-index"
2925
-
checksum = "33b6dd2ef9186f1f2072e409e99cd22a975331a6b3591b12c764e0e55c60d5d2"
2972
+
checksum = "3a1f95c0d03a47f4ae1f7a64643a6bb97465d9b740f0fa8f90ea33915c99a9a1"
2926
2973
dependencies = [
2927
2974
"js-sys",
2928
2975
"wasm-bindgen",
···
2940
2987
2941
2988
[[package]]
2942
2989
name = "webpki-roots"
2943
-
version = "1.0.0"
2990
+
version = "1.0.4"
2944
2991
source = "registry+https://github.com/rust-lang/crates.io-index"
2945
-
checksum = "2853738d1cc4f2da3a225c18ec6c3721abb31961096e9dbf5ab35fa88b19cfdb"
2992
+
checksum = "b2878ef029c47c6e8cf779119f20fcf52bde7ad42a731b2a304bc221df17571e"
2946
2993
dependencies = [
2947
2994
"rustls-pki-types",
2948
2995
]
2949
2996
2950
2997
[[package]]
2951
2998
name = "widestring"
2952
-
version = "1.2.0"
2953
-
source = "registry+https://github.com/rust-lang/crates.io-index"
2954
-
checksum = "dd7cf3379ca1aac9eea11fba24fd7e315d621f8dfe35c8d7d2be8b793726e07d"
2955
-
2956
-
[[package]]
2957
-
name = "winapi"
2958
-
version = "0.3.9"
2959
-
source = "registry+https://github.com/rust-lang/crates.io-index"
2960
-
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
2961
-
dependencies = [
2962
-
"winapi-i686-pc-windows-gnu",
2963
-
"winapi-x86_64-pc-windows-gnu",
2964
-
]
2965
-
2966
-
[[package]]
2967
-
name = "winapi-i686-pc-windows-gnu"
2968
-
version = "0.4.0"
2969
-
source = "registry+https://github.com/rust-lang/crates.io-index"
2970
-
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
2971
-
2972
-
[[package]]
2973
-
name = "winapi-x86_64-pc-windows-gnu"
2974
-
version = "0.4.0"
2975
-
source = "registry+https://github.com/rust-lang/crates.io-index"
2976
-
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
2977
-
2978
-
[[package]]
2979
-
name = "windows"
2980
-
version = "0.61.1"
2981
-
source = "registry+https://github.com/rust-lang/crates.io-index"
2982
-
checksum = "c5ee8f3d025738cb02bad7868bbb5f8a6327501e870bf51f1b455b0a2454a419"
2983
-
dependencies = [
2984
-
"windows-collections",
2985
-
"windows-core",
2986
-
"windows-future",
2987
-
"windows-link",
2988
-
"windows-numerics",
2989
-
]
2990
-
2991
-
[[package]]
2992
-
name = "windows-collections"
2993
-
version = "0.2.0"
2994
-
source = "registry+https://github.com/rust-lang/crates.io-index"
2995
-
checksum = "3beeceb5e5cfd9eb1d76b381630e82c4241ccd0d27f1a39ed41b2760b255c5e8"
2996
-
dependencies = [
2997
-
"windows-core",
2998
-
]
2999
-
3000
-
[[package]]
3001
-
name = "windows-core"
3002
-
version = "0.61.2"
2999
+
version = "1.2.1"
3003
3000
source = "registry+https://github.com/rust-lang/crates.io-index"
3004
-
checksum = "c0fdd3ddb90610c7638aa2b3a3ab2904fb9e5cdbecc643ddb3647212781c4ae3"
3005
-
dependencies = [
3006
-
"windows-implement",
3007
-
"windows-interface",
3008
-
"windows-link",
3009
-
"windows-result",
3010
-
"windows-strings 0.4.2",
3011
-
]
3001
+
checksum = "72069c3113ab32ab29e5584db3c6ec55d416895e60715417b5b883a357c3e471"
3012
3002
3013
3003
[[package]]
3014
-
name = "windows-future"
3015
-
version = "0.2.1"
3004
+
name = "windows-link"
3005
+
version = "0.1.3"
3016
3006
source = "registry+https://github.com/rust-lang/crates.io-index"
3017
-
checksum = "fc6a41e98427b19fe4b73c550f060b59fa592d7d686537eebf9385621bfbad8e"
3018
-
dependencies = [
3019
-
"windows-core",
3020
-
"windows-link",
3021
-
"windows-threading",
3022
-
]
3023
-
3024
-
[[package]]
3025
-
name = "windows-implement"
3026
-
version = "0.60.0"
3027
-
source = "registry+https://github.com/rust-lang/crates.io-index"
3028
-
checksum = "a47fddd13af08290e67f4acabf4b459f647552718f683a7b415d290ac744a836"
3029
-
dependencies = [
3030
-
"proc-macro2",
3031
-
"quote",
3032
-
"syn",
3033
-
]
3034
-
3035
-
[[package]]
3036
-
name = "windows-interface"
3037
-
version = "0.59.1"
3038
-
source = "registry+https://github.com/rust-lang/crates.io-index"
3039
-
checksum = "bd9211b69f8dcdfa817bfd14bf1c97c9188afa36f4750130fcdf3f400eca9fa8"
3040
-
dependencies = [
3041
-
"proc-macro2",
3042
-
"quote",
3043
-
"syn",
3044
-
]
3007
+
checksum = "5e6ad25900d524eaabdbbb96d20b4311e1e7ae1699af4fb28c17ae66c80d798a"
3045
3008
3046
3009
[[package]]
3047
3010
name = "windows-link"
3048
-
version = "0.1.1"
3011
+
version = "0.2.1"
3049
3012
source = "registry+https://github.com/rust-lang/crates.io-index"
3050
-
checksum = "76840935b766e1b0a05c0066835fb9ec80071d4c09a16f6bd5f7e655e3c14c38"
3051
-
3052
-
[[package]]
3053
-
name = "windows-numerics"
3054
-
version = "0.2.0"
3055
-
source = "registry+https://github.com/rust-lang/crates.io-index"
3056
-
checksum = "9150af68066c4c5c07ddc0ce30421554771e528bde427614c61038bc2c92c2b1"
3057
-
dependencies = [
3058
-
"windows-core",
3059
-
"windows-link",
3060
-
]
3013
+
checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5"
3061
3014
3062
3015
[[package]]
3063
3016
name = "windows-registry"
3064
-
version = "0.4.0"
3017
+
version = "0.5.3"
3065
3018
source = "registry+https://github.com/rust-lang/crates.io-index"
3066
-
checksum = "4286ad90ddb45071efd1a66dfa43eb02dd0dfbae1545ad6cc3c51cf34d7e8ba3"
3019
+
checksum = "5b8a9ed28765efc97bbc954883f4e6796c33a06546ebafacbabee9696967499e"
3067
3020
dependencies = [
3021
+
"windows-link 0.1.3",
3068
3022
"windows-result",
3069
-
"windows-strings 0.3.1",
3070
-
"windows-targets 0.53.0",
3023
+
"windows-strings",
3071
3024
]
3072
3025
3073
3026
[[package]]
···
3076
3029
source = "registry+https://github.com/rust-lang/crates.io-index"
3077
3030
checksum = "56f42bd332cc6c8eac5af113fc0c1fd6a8fd2aa08a0119358686e5160d0586c6"
3078
3031
dependencies = [
3079
-
"windows-link",
3080
-
]
3081
-
3082
-
[[package]]
3083
-
name = "windows-strings"
3084
-
version = "0.3.1"
3085
-
source = "registry+https://github.com/rust-lang/crates.io-index"
3086
-
checksum = "87fa48cc5d406560701792be122a10132491cff9d0aeb23583cc2dcafc847319"
3087
-
dependencies = [
3088
-
"windows-link",
3032
+
"windows-link 0.1.3",
3089
3033
]
3090
3034
3091
3035
[[package]]
···
3094
3038
source = "registry+https://github.com/rust-lang/crates.io-index"
3095
3039
checksum = "56e6c93f3a0c3b36176cb1327a4958a0353d5d166c2a35cb268ace15e91d3b57"
3096
3040
dependencies = [
3097
-
"windows-link",
3041
+
"windows-link 0.1.3",
3098
3042
]
3099
3043
3100
3044
[[package]]
···
3125
3069
]
3126
3070
3127
3071
[[package]]
3072
+
name = "windows-sys"
3073
+
version = "0.60.2"
3074
+
source = "registry+https://github.com/rust-lang/crates.io-index"
3075
+
checksum = "f2f500e4d28234f72040990ec9d39e3a6b950f9f22d3dba18416c35882612bcb"
3076
+
dependencies = [
3077
+
"windows-targets 0.53.5",
3078
+
]
3079
+
3080
+
[[package]]
3081
+
name = "windows-sys"
3082
+
version = "0.61.2"
3083
+
source = "registry+https://github.com/rust-lang/crates.io-index"
3084
+
checksum = "ae137229bcbd6cdf0f7b80a31df61766145077ddf49416a728b02cb3921ff3fc"
3085
+
dependencies = [
3086
+
"windows-link 0.2.1",
3087
+
]
3088
+
3089
+
[[package]]
3128
3090
name = "windows-targets"
3129
3091
version = "0.48.5"
3130
3092
source = "registry+https://github.com/rust-lang/crates.io-index"
···
3157
3119
3158
3120
[[package]]
3159
3121
name = "windows-targets"
3160
-
version = "0.53.0"
3122
+
version = "0.53.5"
3161
3123
source = "registry+https://github.com/rust-lang/crates.io-index"
3162
-
checksum = "b1e4c7e8ceaaf9cb7d7507c974735728ab453b67ef8f18febdd7c11fe59dca8b"
3124
+
checksum = "4945f9f551b88e0d65f3db0bc25c33b8acea4d9e41163edf90dcd0b19f9069f3"
3163
3125
dependencies = [
3164
-
"windows_aarch64_gnullvm 0.53.0",
3165
-
"windows_aarch64_msvc 0.53.0",
3166
-
"windows_i686_gnu 0.53.0",
3167
-
"windows_i686_gnullvm 0.53.0",
3168
-
"windows_i686_msvc 0.53.0",
3169
-
"windows_x86_64_gnu 0.53.0",
3170
-
"windows_x86_64_gnullvm 0.53.0",
3171
-
"windows_x86_64_msvc 0.53.0",
3172
-
]
3173
-
3174
-
[[package]]
3175
-
name = "windows-threading"
3176
-
version = "0.1.0"
3177
-
source = "registry+https://github.com/rust-lang/crates.io-index"
3178
-
checksum = "b66463ad2e0ea3bbf808b7f1d371311c80e115c0b71d60efc142cafbcfb057a6"
3179
-
dependencies = [
3180
-
"windows-link",
3126
+
"windows-link 0.2.1",
3127
+
"windows_aarch64_gnullvm 0.53.1",
3128
+
"windows_aarch64_msvc 0.53.1",
3129
+
"windows_i686_gnu 0.53.1",
3130
+
"windows_i686_gnullvm 0.53.1",
3131
+
"windows_i686_msvc 0.53.1",
3132
+
"windows_x86_64_gnu 0.53.1",
3133
+
"windows_x86_64_gnullvm 0.53.1",
3134
+
"windows_x86_64_msvc 0.53.1",
3181
3135
]
3182
3136
3183
3137
[[package]]
···
3194
3148
3195
3149
[[package]]
3196
3150
name = "windows_aarch64_gnullvm"
3197
-
version = "0.53.0"
3151
+
version = "0.53.1"
3198
3152
source = "registry+https://github.com/rust-lang/crates.io-index"
3199
-
checksum = "86b8d5f90ddd19cb4a147a5fa63ca848db3df085e25fee3cc10b39b6eebae764"
3153
+
checksum = "a9d8416fa8b42f5c947f8482c43e7d89e73a173cead56d044f6a56104a6d1b53"
3200
3154
3201
3155
[[package]]
3202
3156
name = "windows_aarch64_msvc"
···
3212
3166
3213
3167
[[package]]
3214
3168
name = "windows_aarch64_msvc"
3215
-
version = "0.53.0"
3169
+
version = "0.53.1"
3216
3170
source = "registry+https://github.com/rust-lang/crates.io-index"
3217
-
checksum = "c7651a1f62a11b8cbd5e0d42526e55f2c99886c77e007179efff86c2b137e66c"
3171
+
checksum = "b9d782e804c2f632e395708e99a94275910eb9100b2114651e04744e9b125006"
3218
3172
3219
3173
[[package]]
3220
3174
name = "windows_i686_gnu"
···
3230
3184
3231
3185
[[package]]
3232
3186
name = "windows_i686_gnu"
3233
-
version = "0.53.0"
3187
+
version = "0.53.1"
3234
3188
source = "registry+https://github.com/rust-lang/crates.io-index"
3235
-
checksum = "c1dc67659d35f387f5f6c479dc4e28f1d4bb90ddd1a5d3da2e5d97b42d6272c3"
3189
+
checksum = "960e6da069d81e09becb0ca57a65220ddff016ff2d6af6a223cf372a506593a3"
3236
3190
3237
3191
[[package]]
3238
3192
name = "windows_i686_gnullvm"
···
3242
3196
3243
3197
[[package]]
3244
3198
name = "windows_i686_gnullvm"
3245
-
version = "0.53.0"
3199
+
version = "0.53.1"
3246
3200
source = "registry+https://github.com/rust-lang/crates.io-index"
3247
-
checksum = "9ce6ccbdedbf6d6354471319e781c0dfef054c81fbc7cf83f338a4296c0cae11"
3201
+
checksum = "fa7359d10048f68ab8b09fa71c3daccfb0e9b559aed648a8f95469c27057180c"
3248
3202
3249
3203
[[package]]
3250
3204
name = "windows_i686_msvc"
···
3260
3214
3261
3215
[[package]]
3262
3216
name = "windows_i686_msvc"
3263
-
version = "0.53.0"
3217
+
version = "0.53.1"
3264
3218
source = "registry+https://github.com/rust-lang/crates.io-index"
3265
-
checksum = "581fee95406bb13382d2f65cd4a908ca7b1e4c2f1917f143ba16efe98a589b5d"
3219
+
checksum = "1e7ac75179f18232fe9c285163565a57ef8d3c89254a30685b57d83a38d326c2"
3266
3220
3267
3221
[[package]]
3268
3222
name = "windows_x86_64_gnu"
···
3278
3232
3279
3233
[[package]]
3280
3234
name = "windows_x86_64_gnu"
3281
-
version = "0.53.0"
3235
+
version = "0.53.1"
3282
3236
source = "registry+https://github.com/rust-lang/crates.io-index"
3283
-
checksum = "2e55b5ac9ea33f2fc1716d1742db15574fd6fc8dadc51caab1c16a3d3b4190ba"
3237
+
checksum = "9c3842cdd74a865a8066ab39c8a7a473c0778a3f29370b5fd6b4b9aa7df4a499"
3284
3238
3285
3239
[[package]]
3286
3240
name = "windows_x86_64_gnullvm"
···
3296
3250
3297
3251
[[package]]
3298
3252
name = "windows_x86_64_gnullvm"
3299
-
version = "0.53.0"
3253
+
version = "0.53.1"
3300
3254
source = "registry+https://github.com/rust-lang/crates.io-index"
3301
-
checksum = "0a6e035dd0599267ce1ee132e51c27dd29437f63325753051e71dd9e42406c57"
3255
+
checksum = "0ffa179e2d07eee8ad8f57493436566c7cc30ac536a3379fdf008f47f6bb7ae1"
3302
3256
3303
3257
[[package]]
3304
3258
name = "windows_x86_64_msvc"
···
3314
3268
3315
3269
[[package]]
3316
3270
name = "windows_x86_64_msvc"
3317
-
version = "0.53.0"
3271
+
version = "0.53.1"
3318
3272
source = "registry+https://github.com/rust-lang/crates.io-index"
3319
-
checksum = "271414315aff87387382ec3d271b52d7ae78726f5d44ac98b4f4030c91880486"
3273
+
checksum = "d6bbff5f0aada427a1e5a6da5f1f98158182f26556f345ac9e04d36d0ebed650"
3320
3274
3321
3275
[[package]]
3322
3276
name = "winreg"
···
3329
3283
]
3330
3284
3331
3285
[[package]]
3332
-
name = "wit-bindgen-rt"
3333
-
version = "0.39.0"
3286
+
name = "wit-bindgen"
3287
+
version = "0.46.0"
3334
3288
source = "registry+https://github.com/rust-lang/crates.io-index"
3335
-
checksum = "6f42320e61fe2cfd34354ecb597f86f413484a798ba44a8ca1165c58d42da6c1"
3336
-
dependencies = [
3337
-
"bitflags",
3338
-
]
3289
+
checksum = "f17a85883d4e6d00e8a97c586de764dabcc06133f7f1d55dce5cdc070ad7fe59"
3339
3290
3340
3291
[[package]]
3341
3292
name = "writeable"
3342
-
version = "0.6.1"
3293
+
version = "0.6.2"
3343
3294
source = "registry+https://github.com/rust-lang/crates.io-index"
3344
-
checksum = "ea2f10b9bb0928dfb1b42b65e1f9e36f7f54dbdf08457afefb38afcdec4fa2bb"
3295
+
checksum = "9edde0db4769d2dc68579893f2306b26c6ecfbe0ef499b013d731b7b9247e0b9"
3345
3296
3346
3297
[[package]]
3347
3298
name = "yoke"
3348
-
version = "0.8.0"
3299
+
version = "0.8.1"
3349
3300
source = "registry+https://github.com/rust-lang/crates.io-index"
3350
-
checksum = "5f41bb01b8226ef4bfd589436a297c53d118f65921786300e427be8d487695cc"
3301
+
checksum = "72d6e5c6afb84d73944e5cedb052c4680d5657337201555f9f2a16b7406d4954"
3351
3302
dependencies = [
3352
-
"serde",
3353
3303
"stable_deref_trait",
3354
3304
"yoke-derive",
3355
3305
"zerofrom",
···
3357
3307
3358
3308
[[package]]
3359
3309
name = "yoke-derive"
3360
-
version = "0.8.0"
3310
+
version = "0.8.1"
3361
3311
source = "registry+https://github.com/rust-lang/crates.io-index"
3362
-
checksum = "38da3c9736e16c5d3c8c597a9aaa5d1fa565d0532ae05e27c24aa62fb32c0ab6"
3312
+
checksum = "b659052874eb698efe5b9e8cf382204678a0086ebf46982b79d6ca3182927e5d"
3363
3313
dependencies = [
3364
3314
"proc-macro2",
3365
3315
"quote",
3366
-
"syn",
3316
+
"syn 2.0.109",
3367
3317
"synstructure",
3368
3318
]
3369
3319
3370
3320
[[package]]
3371
3321
name = "zerocopy"
3372
-
version = "0.8.25"
3322
+
version = "0.8.27"
3373
3323
source = "registry+https://github.com/rust-lang/crates.io-index"
3374
-
checksum = "a1702d9583232ddb9174e01bb7c15a2ab8fb1bc6f227aa1233858c351a3ba0cb"
3324
+
checksum = "0894878a5fa3edfd6da3f88c4805f4c8558e2b996227a3d864f47fe11e38282c"
3375
3325
dependencies = [
3376
3326
"zerocopy-derive",
3377
3327
]
3378
3328
3379
3329
[[package]]
3380
3330
name = "zerocopy-derive"
3381
-
version = "0.8.25"
3331
+
version = "0.8.27"
3382
3332
source = "registry+https://github.com/rust-lang/crates.io-index"
3383
-
checksum = "28a6e20d751156648aa063f3800b706ee209a32c0b4d9f24be3d980b01be55ef"
3333
+
checksum = "88d2b8d9c68ad2b9e4340d7832716a4d21a22a1154777ad56ea55c51a9cf3831"
3384
3334
dependencies = [
3385
3335
"proc-macro2",
3386
3336
"quote",
3387
-
"syn",
3337
+
"syn 2.0.109",
3388
3338
]
3389
3339
3390
3340
[[package]]
···
3404
3354
dependencies = [
3405
3355
"proc-macro2",
3406
3356
"quote",
3407
-
"syn",
3357
+
"syn 2.0.109",
3408
3358
"synstructure",
3409
3359
]
3410
3360
3411
3361
[[package]]
3412
3362
name = "zeroize"
3413
-
version = "1.8.1"
3363
+
version = "1.8.2"
3414
3364
source = "registry+https://github.com/rust-lang/crates.io-index"
3415
-
checksum = "ced3678a2879b30306d323f4542626697a464a97c0a07c9aebf7ebca65cd4dde"
3365
+
checksum = "b97154e67e32c85465826e8bcc1c59429aaaf107c1e4a9e53c8d8ccd5eff88d0"
3416
3366
dependencies = [
3417
3367
"zeroize_derive",
3418
3368
]
···
3425
3375
dependencies = [
3426
3376
"proc-macro2",
3427
3377
"quote",
3428
-
"syn",
3378
+
"syn 2.0.109",
3429
3379
]
3430
3380
3431
3381
[[package]]
3432
3382
name = "zerotrie"
3433
-
version = "0.2.2"
3383
+
version = "0.2.3"
3434
3384
source = "registry+https://github.com/rust-lang/crates.io-index"
3435
-
checksum = "36f0bbd478583f79edad978b407914f61b2972f5af6fa089686016be8f9af595"
3385
+
checksum = "2a59c17a5562d507e4b54960e8569ebee33bee890c70aa3fe7b97e85a9fd7851"
3436
3386
dependencies = [
3437
3387
"displaydoc",
3438
3388
"yoke",
···
3441
3391
3442
3392
[[package]]
3443
3393
name = "zerovec"
3444
-
version = "0.11.2"
3394
+
version = "0.11.5"
3445
3395
source = "registry+https://github.com/rust-lang/crates.io-index"
3446
-
checksum = "4a05eb080e015ba39cc9e23bbe5e7fb04d5fb040350f99f34e338d5fdd294428"
3396
+
checksum = "6c28719294829477f525be0186d13efa9a3c602f7ec202ca9e353d310fb9a002"
3447
3397
dependencies = [
3448
3398
"yoke",
3449
3399
"zerofrom",
···
3452
3402
3453
3403
[[package]]
3454
3404
name = "zerovec-derive"
3455
-
version = "0.11.1"
3405
+
version = "0.11.2"
3456
3406
source = "registry+https://github.com/rust-lang/crates.io-index"
3457
-
checksum = "5b96237efa0c878c64bd89c436f661be4e46b2f3eff1ebb976f7ef2321d2f58f"
3407
+
checksum = "eadce39539ca5cb3985590102671f2567e659fca9666581ad3411d59207951f3"
3458
3408
dependencies = [
3459
3409
"proc-macro2",
3460
3410
"quote",
3461
-
"syn",
3411
+
"syn 2.0.109",
3462
3412
]
3463
3413
3464
3414
[[package]]
···
3481
3431
3482
3432
[[package]]
3483
3433
name = "zstd-sys"
3484
-
version = "2.0.15+zstd.1.5.7"
3434
+
version = "2.0.16+zstd.1.5.7"
3485
3435
source = "registry+https://github.com/rust-lang/crates.io-index"
3486
-
checksum = "eb81183ddd97d0c74cedf1d50d85c8d08c1b8b68ee863bdee9e706eedba1a237"
3436
+
checksum = "91e19ebc2adc8f83e43039e79776e3fda8ca919132d68a1fed6a5faca2683748"
3487
3437
dependencies = [
3488
3438
"cc",
3489
3439
"pkg-config",
+29 -20 Cargo.toml
···
 [workspace]
 members = [
     "crates/atproto-client",
+    "crates/atproto-extras",
     "crates/atproto-identity",
     "crates/atproto-jetstream",
     "crates/atproto-oauth-aip",
     "crates/atproto-oauth-axum",
     "crates/atproto-oauth",
     "crates/atproto-record",
+    "crates/atproto-tap",
     "crates/atproto-xrpcs-helloworld",
     "crates/atproto-xrpcs",
     "crates/atproto-lexicon",
+    "crates/atproto-attestation",
 ]
 resolver = "3"
···
 categories = ["command-line-utilities", "web-programming"]

 [workspace.dependencies]
+atproto-attestation = { version = "0.13.0", path = "crates/atproto-attestation" }
 atproto-client = { version = "0.13.0", path = "crates/atproto-client" }
+atproto-extras = { version = "0.13.0", path = "crates/atproto-extras" }
 atproto-identity = { version = "0.13.0", path = "crates/atproto-identity" }
+atproto-jetstream = { version = "0.13.0", path = "crates/atproto-jetstream" }
 atproto-oauth = { version = "0.13.0", path = "crates/atproto-oauth" }
+atproto-oauth-aip = { version = "0.13.0", path = "crates/atproto-oauth-aip" }
 atproto-oauth-axum = { version = "0.13.0", path = "crates/atproto-oauth-axum" }
-atproto-oauth-aip = { version = "0.13.0", path = "crates/atproto-oauth-aip" }
 atproto-record = { version = "0.13.0", path = "crates/atproto-record" }
+atproto-tap = { version = "0.13.0", path = "crates/atproto-tap" }
 atproto-xrpcs = { version = "0.13.0", path = "crates/atproto-xrpcs" }
-atproto-jetstream = { version = "0.13.0", path = "crates/atproto-jetstream" }

+ammonia = "4.0"
 anyhow = "1.0"
-async-trait = "0.1.88"
-base64 = "0.22.1"
-chrono = {version = "0.4.41", default-features = false, features = ["std", "now"]}
+async-trait = "0.1"
+base64 = "0.22"
+chrono = {version = "0.4", default-features = false, features = ["std", "now"]}
 clap = { version = "4.5", features = ["derive", "env"] }
-ecdsa = { version = "0.16.9", features = ["std"] }
-elliptic-curve = { version = "0.13.8", features = ["jwk", "serde"] }
+ecdsa = { version = "0.16", features = ["std"] }
+elliptic-curve = { version = "0.13", features = ["jwk", "serde"] }
 futures = "0.3"
 hickory-resolver = { version = "0.25" }
-http = "1.3.1"
-k256 = "0.13.4"
+http = "1.3"
+k256 = "0.13"
 lru = "0.12"
-multibase = "0.9.1"
-p256 = "0.13.2"
-p384 = "0.13.0"
+multibase = "0.9"
+p256 = "0.13"
+p384 = "0.13"
 rand = "0.8"
+regex = "1.11"
 reqwest = { version = "0.12", default-features = false, features = ["charset", "http2", "system-proxy", "json", "rustls-tls"] }
-reqwest-chain = "1.0.0"
-reqwest-middleware = { version = "0.4.2", features = ["json", "multipart"]}
+reqwest-chain = "1.0"
+reqwest-middleware = { version = "0.4", features = ["json", "multipart"]}
 rpassword = "7.3"
 secrecy = { version = "0.10", features = ["serde"] }
 serde = { version = "1.0", features = ["derive"] }
-serde_ipld_dagcbor = "0.6.3"
-serde_json = "1.0"
-sha2 = "0.10.9"
+serde_ipld_dagcbor = "0.6"
+serde_json = { version = "1.0", features = ["unbounded_depth"] }
+sha2 = "0.10"
 thiserror = "2.0"
 tokio = { version = "1.41", features = ["macros", "rt", "rt-multi-thread"] }
 tokio-websockets = { version = "0.11.4", features = ["client", "rustls-native-roots", "rand", "ring"] }
 tokio-util = "0.7"
 tracing = { version = "0.1", features = ["async-await"] }
-ulid = "1.2.1"
-urlencoding = "2.1"
+ulid = "1.2"
 zstd = "0.13"
+url = "2.5"
+urlencoding = "2.1"

-zeroize = { version = "1.8.1", features = ["zeroize_derive"] }
+zeroize = { version = "1.8", features = ["zeroize_derive"] }

 [workspace.lints.rust]
 unsafe_code = "forbid"
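With these entries in `[workspace.dependencies]`, member crates inherit versions and paths instead of repeating them. A minimal sketch of a consuming member manifest (the crate name below is hypothetical; the pattern mirrors `crates/atproto-attestation/Cargo.toml` later in this change):

```toml
# Hypothetical member crate manifest; versions come from [workspace.dependencies].
[package]
name = "atproto-example-consumer"   # hypothetical crate name, for illustration only
version = "0.13.0"
edition.workspace = true

[dependencies]
atproto-attestation.workspace = true
atproto-identity.workspace = true
serde_json.workspace = true
```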
+12 -8 Dockerfile
···
 # Multi-stage build for atproto-identity-rs workspace
-# Builds and installs all 13 binaries from the workspace
+# Builds and installs all 15 binaries from the workspace

-# Build stage - use 1.89 to support resolver = "3" and edition = "2024"
+# Build stage - use 1.90 to support resolver = "3" and edition = "2024"
 FROM rust:1.90-slim-bookworm AS builder

 # Install system dependencies needed for building
···
 # Build all binaries in release mode
 # This will build all binaries defined in the workspace:
 # - atproto-identity: 4 binaries (resolve, key, sign, validate)
-# - atproto-record: 2 binaries (sign, verify)
+# - atproto-attestation: 2 binaries (attestation-sign, attestation-verify)
+# - atproto-record: 1 binary (record-cid)
 # - atproto-client: 3 binaries (auth, app-password, dpop)
 # - atproto-oauth: 1 binary (service-token)
 # - atproto-oauth-axum: 1 binary (oauth-tool)
···
 COPY --from=builder /usr/src/app/target/release/atproto-identity-key .
 COPY --from=builder /usr/src/app/target/release/atproto-identity-sign .
 COPY --from=builder /usr/src/app/target/release/atproto-identity-validate .
-COPY --from=builder /usr/src/app/target/release/atproto-record-sign .
-COPY --from=builder /usr/src/app/target/release/atproto-record-verify .
+COPY --from=builder /usr/src/app/target/release/atproto-attestation-sign .
+COPY --from=builder /usr/src/app/target/release/atproto-attestation-verify .
+COPY --from=builder /usr/src/app/target/release/atproto-record-cid .
 COPY --from=builder /usr/src/app/target/release/atproto-client-auth .
 COPY --from=builder /usr/src/app/target/release/atproto-client-app-password .
 COPY --from=builder /usr/src/app/target/release/atproto-client-dpop .
···

 # Default to the main resolution tool
 # Users can override with specific binary: docker run <image> atproto-identity-resolve --help
-# Or run other tools:
+# Or run other tools:
 # docker run <image> atproto-identity-key --help
-# docker run <image> atproto-record-sign --help
+# docker run <image> atproto-attestation-sign --help
+# docker run <image> atproto-attestation-verify --help
+# docker run <image> atproto-record-cid --help
 # docker run <image> atproto-client-auth --help
 # docker run <image> atproto-oauth-service-token --help
 # docker run <image> atproto-oauth-tool --help
···
 LABEL org.opencontainers.image.licenses="MIT"

 # Document available binaries
-LABEL binaries="atproto-identity-resolve,atproto-identity-key,atproto-identity-sign,atproto-identity-validate,atproto-record-sign,atproto-record-verify,atproto-client-auth,atproto-client-app-password,atproto-client-dpop,atproto-oauth-service-token,atproto-oauth-tool,atproto-jetstream-consumer,atproto-xrpcs-helloworld,atproto-lexicon-resolve"
+LABEL binaries="atproto-identity-resolve,atproto-identity-key,atproto-identity-sign,atproto-identity-validate,atproto-attestation-sign,atproto-attestation-verify,atproto-record-cid,atproto-client-auth,atproto-client-app-password,atproto-client-dpop,atproto-oauth-service-token,atproto-oauth-tool,atproto-jetstream-consumer,atproto-xrpcs-helloworld,atproto-lexicon-resolve"
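For reference, a quick way to exercise the image described above; the `atproto-identity-rs` tag is an arbitrary local name chosen for this example, not something the Dockerfile defines:

```bash
# Build the multi-stage image from the repository root
docker build -t atproto-identity-rs .

# The default command is the resolution tool; pass a binary name to reach the new tools
docker run --rm atproto-identity-rs atproto-attestation-sign --help
docker run --rm atproto-identity-rs atproto-record-cid --help
```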
+34 -19 README.md
···
 ### Identity & Cryptography

 - **[`atproto-identity`](crates/atproto-identity/)** - Core identity management with multi-method DID resolution (plc, web, key), DNS/HTTP handle resolution, and P-256/P-384/K-256 cryptographic operations. *Includes 4 CLI tools.*
-- **[`atproto-record`](crates/atproto-record/)** - Cryptographic signature operations for AT Protocol records using IPLD DAG-CBOR serialization with AT-URI parsing support. *Includes 2 CLI tools.*
+- **[`atproto-attestation`](crates/atproto-attestation/)** - CID-first attestation utilities for creating and verifying cryptographic signatures on AT Protocol records, supporting both inline and remote attestation workflows. *Includes 2 CLI tools.*
+- **[`atproto-record`](crates/atproto-record/)** - Record utilities including TID generation, AT-URI parsing, datetime formatting, and CID generation using IPLD DAG-CBOR serialization. *Includes 1 CLI tool.*
 - **[`atproto-lexicon`](crates/atproto-lexicon/)** - Lexicon schema resolution and validation for AT Protocol, supporting recursive resolution, NSID validation, and DNS-based lexicon discovery. *Includes 1 CLI tool.*

 ### Authentication & Authorization
···
 ```toml
 [dependencies]
 atproto-identity = "0.13.0"
+atproto-attestation = "0.13.0"
 atproto-record = "0.13.0"
 atproto-lexicon = "0.13.0"
 atproto-oauth = "0.13.0"
···
 ### Record Signing

 ```rust
-use atproto_identity::key::identify_key;
-use atproto_record::signature;
+use atproto_identity::key::{identify_key, to_public};
+use atproto_attestation::{
+    create_inline_attestation, verify_all_signatures, VerificationStatus,
+    input::{AnyInput, PhantomSignature}
+};
 use serde_json::json;

 #[tokio::main]
 async fn main() -> anyhow::Result<()> {
-    let signing_key = identify_key("did:key:zQ3shNzMp4oaaQ1gQRzCxMGXFrSW3NEM1M9T6KCY9eA7HhyEA")?;
+    let private_key = identify_key("did:key:zQ3shNzMp4oaaQ1gQRzCxMGXFrSW3NEM1M9T6KCY9eA7HhyEA")?;
+    let public_key = to_public(&private_key)?;
+    let key_reference = format!("{}", &public_key);
+    let repository_did = "did:plc:repo123";

     let record = json!({
         "$type": "app.bsky.feed.post",
···
         "createdAt": "2024-01-01T00:00:00.000Z"
     });

-    let signature_object = json!({
+    let sig_metadata = json!({
+        "$type": "com.example.inlineSignature",
+        "key": &key_reference,
         "issuer": "did:plc:issuer123",
         "issuedAt": "2024-01-01T00:00:00.000Z"
     });

-    let signed_record = signature::create(
-        &signing_key,
-        &record,
-        "did:plc:user123",
-        "app.bsky.feed.post",
-        signature_object,
-    ).await?;
+    let signed_record = create_inline_attestation::<PhantomSignature, PhantomSignature>(
+        AnyInput::Json(record),
+        AnyInput::Json(sig_metadata),
+        repository_did,
+        &private_key
+    )?;
+
+    let reports = verify_all_signatures(&signed_record, repository_did, None).await?;
+    assert!(reports.iter().all(|report| matches!(report.status, VerificationStatus::Valid { .. })));

     Ok(())
 }
···
 ### XRPC Service

 ```rust
-use atproto_xrpcs::authorization::ResolvingAuthorization;
+use atproto_xrpcs::authorization::Authorization;
 use axum::{Json, Router, extract::Query, routing::get};
 use serde::Deserialize;
 use serde_json::json;
···

 async fn handle_hello(
     params: Query<HelloParams>,
-    authorization: Option<ResolvingAuthorization>,
+    authorization: Option<Authorization>,
 ) -> Json<serde_json::Value> {
     let subject = params.subject.as_deref().unwrap_or("World");

     let message = if let Some(auth) = authorization {
         format!("Hello, authenticated {}! (caller: {})", subject, auth.subject())
     } else {
         format!("Hello, {}!", subject)
     };

     Json(json!({ "message": message }))
 }
···
 cargo run --features clap --bin atproto-identity-sign -- did:key:... data.json
 cargo run --features clap --bin atproto-identity-validate -- did:key:... data.json signature

+# Attestation operations (atproto-attestation crate)
+cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-sign -- inline record.json did:key:... metadata.json
+cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-verify -- signed_record.json
+
 # Record operations (atproto-record crate)
-cargo run --features clap --bin atproto-record-sign -- did:key:... did:plc:issuer record.json repository=did:plc:user collection=app.bsky.feed.post
-cargo run --features clap --bin atproto-record-verify -- did:plc:issuer did:key:... signed_record.json repository=did:plc:user collection=app.bsky.feed.post
+cat record.json | cargo run --features clap --bin atproto-record-cid

 # Lexicon operations (atproto-lexicon crate)
 cargo run --features clap,hickory-dns --bin atproto-lexicon-resolve -- app.bsky.feed.post
···

 ## Acknowledgments

-Parts of this project were extracted from the [smokesignal.events](https://tangled.sh/@smokesignal.events/smokesignal) project, an open-source AT Protocol event and RSVP management application. This extraction enables broader community use and contribution to AT Protocol tooling in Rust.
+Parts of this project were extracted from the [smokesignal.events](https://tangled.sh/@smokesignal.events/smokesignal) project, an open-source AT Protocol event and RSVP management application. This extraction enables broader community use and contribution to AT Protocol tooling in Rust.
+64 crates/atproto-attestation/Cargo.toml (new file)

[package]
name = "atproto-attestation"
version = "0.13.0"
description = "AT Protocol attestation utilities for creating and verifying record signatures"
readme = "README.md"
homepage = "https://tangled.sh/@smokesignal.events/atproto-identity-rs"
documentation = "https://docs.rs/atproto-attestation"
edition.workspace = true
rust-version.workspace = true
repository.workspace = true
authors.workspace = true
license.workspace = true
keywords.workspace = true
categories.workspace = true

[[bin]]
name = "atproto-attestation-sign"
test = false
bench = false
doc = true
required-features = ["clap", "tokio"]

[[bin]]
name = "atproto-attestation-verify"
test = false
bench = false
doc = true
required-features = ["clap", "tokio"]

[dependencies]
atproto-client.workspace = true
atproto-identity.workspace = true
atproto-record.workspace = true
anyhow.workspace = true
base64.workspace = true
serde.workspace = true
serde_json = {workspace = true, features = ["preserve_order"]}
serde_ipld_dagcbor.workspace = true
sha2.workspace = true
thiserror.workspace = true

cid = "0.11"
elliptic-curve = { version = "0.13", default-features = false, features = ["std"] }
k256 = { version = "0.13", default-features = false, features = ["ecdsa", "std"] }
multihash = "0.19"
p256 = { version = "0.13", default-features = false, features = ["ecdsa", "std"] }

async-trait = { workspace = true, optional = true }
clap = { workspace = true, optional = true }
reqwest = { workspace = true, optional = true }
tokio = { workspace = true, optional = true }

[dev-dependencies]
async-trait = "0.1"
chrono = { workspace = true }
tokio = { workspace = true, features = ["macros", "rt"] }

[features]
default = []
clap = ["dep:clap"]
tokio = ["dep:tokio", "dep:reqwest", "dep:async-trait"]

[lints]
workspace = true
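Because both `[[bin]]` targets declare `required-features = ["clap", "tokio"]`, they are skipped by a default build; enabling those features builds them, for example:

```bash
# Build the attestation binaries with their required features
cargo build --package atproto-attestation --features clap,tokio --release

# Or, as one possible local workflow, install them onto PATH from a checkout
cargo install --path crates/atproto-attestation --features clap,tokio
```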
+398 crates/atproto-attestation/README.md (new file)

# atproto-attestation

Utilities for creating and verifying AT Protocol record attestations using the CID-first workflow.

## Overview

A Rust library implementing the CID-first attestation specification for AT Protocol records. This crate provides cryptographic signature creation and verification for records, supporting both inline attestations (signatures embedded directly in records) and remote attestations (separate proof records with strongRef references).

The attestation workflow ensures deterministic signing payloads and prevents replay attacks by:
1. Automatically preparing records with `$sig` metadata containing `$type` and `repository` fields
2. Generating content identifiers (CIDs) using DAG-CBOR serialization
3. Signing CID bytes with elliptic curve cryptography (for inline attestations)
4. Normalizing signatures to low-S form to prevent malleability attacks
5. Embedding signatures or creating proof records with strongRef references

**Critical Security Feature**: The `repository` field in `$sig` metadata binds attestations to specific repositories, preventing replay attacks where an attacker might attempt to clone records from one repository into their own.
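To make the numbered workflow concrete, here is a minimal, self-contained sketch of building and signing a CID-first payload with the crates this package depends on (`serde_ipld_dagcbor`, `sha2`, `cid`, `multihash`, `k256`). The `$sig` field layout, lexicon type, and codec constants are illustrative assumptions, not the crate's internal implementation; in practice `create_inline_attestation` performs these steps for you.

```rust
use cid::Cid;
use k256::ecdsa::{signature::Signer, Signature, SigningKey};
use multihash::Multihash;
use serde_json::json;
use sha2::{Digest, Sha256};

fn main() -> anyhow::Result<()> {
    // 1. Prepare the record with `$sig` metadata that binds it to a repository.
    //    (Field names here are assumptions for illustration.)
    let mut record = json!({
        "$type": "app.bsky.feed.post",
        "text": "Hello world!",
    });
    record["$sig"] = json!({
        "$type": "com.example.inlineSignature",
        "repository": "did:plc:repo123",
    });

    // 2. Deterministically encode with DAG-CBOR, hash the bytes, and wrap them
    //    as a CIDv1 (dag-cbor codec 0x71 over a sha2-256 multihash, code 0x12).
    let encoded = serde_ipld_dagcbor::to_vec(&record)?;
    let digest = Sha256::digest(&encoded);
    let multihash = Multihash::<64>::wrap(0x12, &digest)?;
    let cid = Cid::new_v1(0x71, multihash);

    // 3. Sign the CID bytes (not the raw record). A throwaway example key is used
    //    here; a real caller would load or generate a proper signing key.
    let signing_key = SigningKey::from_slice(&[0x01u8; 32])?;
    let signature: Signature = signing_key.sign(&cid.to_bytes());

    // 4. Normalize to low-S form so the signature is not malleable.
    let signature = signature.normalize_s().unwrap_or(signature);

    println!("cid: {cid}");
    println!("signature bytes: {}", signature.to_bytes().len());
    Ok(())
}
```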
## Features
19
+
20
+
- **Inline attestations**: Embed cryptographic signatures directly in record structures
21
+
- **Remote attestations**: Create separate proof records with CID-based strongRef references
22
+
- **CID-first workflow**: Deterministic signing based on content identifiers
23
+
- **Multi-curve support**: Full support for P-256, P-384, and K-256 elliptic curves
24
+
- **Signature normalization**: Automatic low-S normalization for ECDSA signatures to prevent malleability
25
+
- **Flexible input types**: Accept records as JSON strings, JSON values, or typed lexicons
26
+
- **Repository binding**: Automatic prevention of replay attacks
27
+
28
+
## CLI Tools
29
+
30
+
The following command-line tools are available when built with the `clap` and `tokio` features:
31
+
32
+
- **`atproto-attestation-sign`**: Sign AT Protocol records with inline or remote attestations
33
+
- **`atproto-attestation-verify`**: Verify cryptographic signatures on AT Protocol records
34
+
35
+
## Library Usage
36
+
37
+
### Creating Inline Attestations
38
+
39
+
Inline attestations embed the signature bytes directly in the record:
40
+
41
+
```rust
42
+
use atproto_identity::key::{generate_key, to_public, KeyType};
43
+
use atproto_attestation::{create_inline_attestation, input::{AnyInput, PhantomSignature}};
44
+
use serde_json::json;
45
+
46
+
fn main() -> anyhow::Result<()> {
47
+
// Generate a signing key
48
+
let private_key = generate_key(KeyType::K256Private)?;
49
+
let public_key = to_public(&private_key)?;
50
+
let key_reference = format!("{}", &public_key);
51
+
52
+
// The record to sign
53
+
let record = json!({
54
+
"$type": "app.bsky.feed.post",
55
+
"text": "Hello world!",
56
+
"createdAt": "2024-01-01T00:00:00.000Z"
57
+
});
58
+
59
+
// Repository housing this record (for replay attack prevention)
60
+
let repository_did = "did:plc:repo123";
61
+
62
+
// Attestation metadata (required: $type and key for inline attestations)
63
+
// Note: repository field is automatically added during CID generation
64
+
let sig_metadata = json!({
65
+
"$type": "com.example.inlineSignature",
66
+
"key": &key_reference,
67
+
"issuer": "did:plc:issuer123",
68
+
"issuedAt": "2024-01-01T00:00:00.000Z"
69
+
});
70
+
71
+
// Create inline attestation (repository_did is bound into the CID)
72
+
// Signature is automatically normalized to low-S form
73
+
let signed_record = create_inline_attestation::<PhantomSignature, PhantomSignature>(
74
+
AnyInput::Json(record),
75
+
AnyInput::Json(sig_metadata),
76
+
repository_did,
77
+
&private_key
78
+
)?;
79
+
80
+
println!("{}", serde_json::to_string_pretty(&signed_record)?);
81
+
82
+
Ok(())
83
+
}
84
+
```
85
+
86
+
The resulting record will have a `signatures` array:
87
+
88
+
```json
89
+
{
90
+
"$type": "app.bsky.feed.post",
91
+
"text": "Hello world!",
92
+
"createdAt": "2024-01-01T00:00:00.000Z",
93
+
"signatures": [
94
+
{
95
+
"$type": "com.example.inlineSignature",
96
+
"key": "did:key:zQ3sh...",
97
+
"issuer": "did:plc:issuer123",
98
+
"issuedAt": "2024-01-01T00:00:00.000Z",
99
+
"cid": "bafyrei...",
100
+
"signature": {
101
+
"$bytes": "base64-encoded-normalized-signature-bytes"
102
+
}
103
+
}
104
+
]
105
+
}
106
+
```
107
+
108
+
### Creating Remote Attestations
109
+
110
+
Remote attestations create a separate proof record that must be stored in a repository:
111
+
112
+
```rust
113
+
use atproto_attestation::{create_remote_attestation, input::{AnyInput, PhantomSignature}};
114
+
use serde_json::json;
115
+
116
+
fn main() -> anyhow::Result<()> {
117
+
let record = json!({
118
+
"$type": "app.bsky.feed.post",
119
+
"text": "Hello world!"
120
+
});
121
+
122
+
// Repository housing the original record (for replay attack prevention)
123
+
let repository_did = "did:plc:repo123";
124
+
125
+
// DID of the entity creating the attestation (will store the proof record)
126
+
let attestor_did = "did:plc:attestor456";
127
+
128
+
let metadata = json!({
129
+
"$type": "com.example.attestation",
130
+
"issuer": "did:plc:issuer123",
131
+
"purpose": "verification"
132
+
});
133
+
134
+
// Create both the attested record and proof record in one call
135
+
// Returns: (attested_record_with_strongRef, proof_record)
136
+
let (attested_record, proof_record) = create_remote_attestation::<PhantomSignature, PhantomSignature>(
137
+
AnyInput::Json(record),
138
+
AnyInput::Json(metadata),
139
+
repository_did, // Repository housing the original record
140
+
attestor_did // Repository that will store the proof record
141
+
)?;
142
+
143
+
// The proof_record should be stored in the attestor's repository
144
+
// The attested_record contains the strongRef reference
145
+
println!("Proof record:\n{}", serde_json::to_string_pretty(&proof_record)?);
146
+
println!("Attested record:\n{}", serde_json::to_string_pretty(&attested_record)?);
147
+
148
+
Ok(())
149
+
}
150
+
```
151
+
152
+
### Verifying Signatures
153
+
154
+
Verify all signatures in a record:
155
+
156
+
```rust
157
+
use atproto_attestation::{verify_record, input::AnyInput};
158
+
use atproto_identity::key::IdentityDocumentKeyResolver;
159
+
use atproto_client::record_resolver::HttpRecordResolver;
160
+
161
+
#[tokio::main]
162
+
async fn main() -> anyhow::Result<()> {
163
+
// Signed record with signatures array
164
+
let signed_record = /* ... */;
165
+
166
+
// The repository DID where this record is stored
167
+
// CRITICAL: This must match the repository used during signing to prevent replay attacks
168
+
let repository_did = "did:plc:repo123";
169
+
170
+
// Create resolvers for key and record fetching
171
+
let key_resolver = /* ... */; // IdentityDocumentKeyResolver
172
+
let record_resolver = HttpRecordResolver::new(/* ... */);
173
+
174
+
// Verify all signatures with repository validation
175
+
verify_record(
176
+
AnyInput::Json(signed_record),
177
+
repository_did,
178
+
key_resolver,
179
+
record_resolver
180
+
).await?;
181
+
182
+
println!("โ All signatures verified successfully");
183
+
184
+
Ok(())
185
+
}
186
+
```
187
+
188
+
## Command Line Usage
189
+
190
+
### Signing Records
191
+
192
+
#### Inline Attestation
193
+
194
+
```bash
# Sign with inline attestation (signature embedded in record)
cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-sign -- \
  inline \
  record.json \
  did:plc:repo123 \
  did:key:zQ3shNzMp4oaaQ1gQRzCxMGXFrSW3NEM1M9T6KCY9eA7HhyEA \
  metadata.json

# Using JSON strings instead of files
cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-sign -- \
  inline \
  '{"$type":"app.bsky.feed.post","text":"Hello!"}' \
  did:plc:repo123 \
  did:key:zQ3sh... \
  '{"$type":"com.example.sig","key":"did:key:zQ3sh..."}'

# Read record from stdin
cat record.json | cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-sign -- \
  inline \
  - \
  did:plc:repo123 \
  did:key:zQ3sh... \
  metadata.json
```
216
+
217
+
#### Remote Attestation
218
+
219
+
```bash
# Create remote attestation (generates proof record + strongRef)
cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-sign -- \
  remote \
  did:plc:repo123 \
  record.json \
  did:plc:attestor456 \
  metadata.json

# This outputs TWO JSON objects:
# 1. Proof record (store this in the attestor's repository)
# 2. Source record with strongRef attestation
```
231
+
232
+
### Verifying Signatures
233
+
234
+
```bash
235
+
# Verify all signatures in a record from file
236
+
cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-verify -- \
237
+
./signed_record.json \
238
+
did:plc:repo123
239
+
240
+
# Verify from stdin
241
+
cat signed_record.json | cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-verify -- \
242
+
- \
243
+
did:plc:repo123
244
+
245
+
# Verify from inline JSON
246
+
cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-verify -- \
247
+
'{"$type":"app.bsky.feed.post","text":"Hello","signatures":[...]}' \
248
+
did:plc:repo123
249
+
250
+
# Verify specific attestation against record
251
+
cargo run --package atproto-attestation --features clap,tokio --bin atproto-attestation-verify -- \
252
+
./record.json \
253
+
did:plc:repo123 \
254
+
./attestation.json
255
+
```
256
+
257
+
## Public API
258
+
259
+
The crate exposes the following public functions:
260
+
261
+
### Attestation Creation
262
+
263
+
- **`create_inline_attestation`**: Create a signed record with embedded signature
264
+
- Automatically normalizes signatures to low-S form
265
+
- Binds attestation to repository DID
266
+
- Returns signed record with `signatures` array
267
+
268
+
- **`create_remote_attestation`**: Create separate proof record and strongRef
269
+
- Returns tuple of (attested_record, proof_record)
270
+
- Proof record must be stored in attestor's repository
271
+
272
+
### CID Generation
273
+
274
+
- **`create_cid`**: Generate CID for a record with `$sig` metadata
275
+
- **`create_dagbor_cid`**: Generate CID for any serializable data
276
+
- **`create_attestation_cid`**: High-level CID generation with automatic `$sig` preparation
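
A minimal usage sketch of these helpers, assuming both are re-exported at the crate root (adjust the imports to the `cid` module path if they are not) and using `AnyInput::Serialize` as in the crate's other examples:

```rust
use atproto_attestation::{create_attestation_cid, create_dagbor_cid, input::AnyInput}; // root re-exports assumed
use serde_json::json;

fn main() -> anyhow::Result<()> {
    // CID over any serializable value (plain DAG-CBOR encoding)
    let value = json!({"$type": "com.example.thing", "label": "demo"});
    let plain_cid = create_dagbor_cid(&value)?;
    println!("dag-cbor cid: {}", plain_cid.to_string());

    // CID that also folds in attestation metadata and the repository DID (`$sig` preparation)
    let record = json!({"$type": "app.bsky.feed.post", "text": "Hello!"});
    let metadata = json!({"$type": "com.example.signature"});
    let attestation_cid = create_attestation_cid(
        AnyInput::Serialize(record),
        AnyInput::Serialize(metadata),
        "did:plc:repo123",
    )?;
    println!("attestation cid: {}", attestation_cid.to_string());

    Ok(())
}
```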
277
+
278
+
### Signature Operations
279
+
280
+
- **`normalize_signature`**: Normalize raw signature bytes to low-S form
281
+
- Prevents signature malleability attacks
282
+
- Supports P-256, P-384, and K-256 curves
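
A short sketch of normalizing a freshly created signature (the crate-root import is an assumption; the function may live in a `signature` module instead):

```rust
use atproto_attestation::normalize_signature; // assumed re-export; may be `signature::normalize_signature`
use atproto_identity::key::{KeyType, generate_key, sign};

fn main() -> anyhow::Result<()> {
    let private_key = generate_key(KeyType::K256Private)?;

    // In practice the signed bytes are CID bytes (see the attestation examples above)
    let raw_signature = sign(&private_key, b"example-cid-bytes")?;

    // Re-encode the ECDSA signature into low-S form; verification rejects anything else
    let normalized = normalize_signature(raw_signature, private_key.key_type())?;
    assert!(!normalized.is_empty());

    Ok(())
}
```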
283
+
284
+
### Verification
285
+
286
+
- **`verify_record`**: Verify all signatures in a record
287
+
- Validates repository binding
288
+
- Supports both inline and remote attestations
289
+
- Requires key and record resolvers
290
+
291
+
### Input Types
292
+
293
+
- **`AnyInput`**: Flexible input enum supporting:
294
+
- `String`: JSON string to parse
295
+
- `Json`: serde_json::Value
296
+
- `TypedLexicon`: Strongly-typed lexicon records
297
+
298
+
## Attestation Specification
299
+
300
+
This crate implements the CID-first attestation specification, which ensures:
301
+
302
+
1. **Deterministic signing**: Records are serialized to DAG-CBOR with `$sig` metadata, producing consistent CIDs
303
+
2. **Content addressing**: Signatures are over CID bytes, not the full record
304
+
3. **Repository binding**: Every attestation is bound to a specific repository DID to prevent replay attacks
305
+
4. **Signature normalization**: ECDSA signatures are normalized to low-S form to prevent malleability
306
+
5. **Flexible metadata**: Custom fields in `$sig` are preserved and included in the CID calculation
307
+
6. **Multiple attestations**: Records can have multiple signatures in the `signatures` array
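
The sketch below exercises properties 1-3 directly, mirroring the crate's own unit tests: the signature is produced over the content CID, and any verifier can recompute that CID from the record, metadata, and repository DID (crate-root imports assumed):

```rust
use atproto_attestation::{create_attestation_cid, create_signature, input::AnyInput};
use atproto_identity::key::{KeyType, generate_key, to_public, validate};
use serde_json::json;

fn main() -> anyhow::Result<()> {
    let private_key = generate_key(KeyType::K256Private)?;
    let public_key = to_public(&private_key)?;

    let record = json!({"$type": "app.bsky.feed.post", "text": "Hello!"});
    let metadata = json!({"$type": "com.example.signature"});
    let repository = "did:plc:repo123";

    // The signature covers the content CID, not the raw record bytes
    let signature = create_signature(
        AnyInput::Serialize(record.clone()),
        AnyInput::Serialize(metadata.clone()),
        repository,
        &private_key,
    )?;

    // Recomputing the same deterministic CID lets a verifier check the signature independently
    let cid = create_attestation_cid(
        AnyInput::Serialize(record),
        AnyInput::Serialize(metadata),
        repository,
    )?;
    validate(&public_key, &signature, &cid.to_bytes())?;

    Ok(())
}
```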
308
+
309
+
### Signature Structure
310
+
311
+
Inline attestation entry:
312
+
```json
313
+
{
314
+
"$type": "com.example.signature",
315
+
"key": "did:key:z...",
316
+
"issuer": "did:plc:...",
317
+
"cid": "bafyrei...",
318
+
"signature": {
319
+
"$bytes": "base64-normalized-signature"
320
+
}
321
+
}
322
+
```
323
+
324
+
Remote attestation entry (strongRef):
325
+
```json
326
+
{
327
+
"$type": "com.atproto.repo.strongRef",
328
+
"uri": "at://did:plc:repo/com.example.attestation/tid",
329
+
"cid": "bafyrei..."
330
+
}
331
+
```
332
+
333
+
## Error Handling
334
+
335
+
The crate provides structured error types via `AttestationError`:
336
+
337
+
- `RecordMustBeObject`: Input must be a JSON object
338
+
- `MetadataMustBeObject`: Attestation metadata must be a JSON object
339
+
- `SigMetadataMissing`: No `$sig` field found in prepared record
340
+
- `SignatureCreationFailed`: Key signing operation failed
341
+
- `SignatureValidationFailed`: Signature verification failed
342
+
- `SignatureNotNormalized`: ECDSA signature not in low-S form
343
+
- `SignatureLengthInvalid`: Signature bytes have incorrect length
344
+
- `KeyResolutionFailed`: Could not resolve verification key
345
+
- `UnsupportedKeyType`: Key type not supported for signing/verification
346
+
- `RemoteAttestationFetchFailed`: Failed to fetch remote proof record
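
A hedged sketch of reacting to these errors, assuming the `errors` module is public and the error type implements `Display`:

```rust
use atproto_attestation::{create_inline_attestation, errors::AttestationError, input::AnyInput}; // `errors` path assumed
use atproto_identity::key::{KeyType, generate_key};
use serde_json::json;

fn main() -> anyhow::Result<()> {
    let private_key = generate_key(KeyType::K256Private)?;

    // A record that is not a JSON object is rejected with a structured error
    let result = create_inline_attestation(
        AnyInput::Serialize(json!("not an object")),
        AnyInput::Serialize(json!({"$type": "com.example.signature"})),
        "did:plc:repo123",
        &private_key,
    );

    match result {
        Err(AttestationError::RecordMustBeObject) => eprintln!("record must be a JSON object"),
        Err(other) => eprintln!("attestation failed: {other}"),
        Ok(_) => println!("signed"),
    }

    Ok(())
}
```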
347
+
348
+
## Security Considerations
349
+
350
+
### Repository Binding and Replay Attack Prevention
351
+
352
+
The most critical security feature of this attestation framework is **repository binding**. Every attestation includes the repository DID in the `$sig` metadata during CID generation, which:
353
+
354
+
- **Prevents replay attacks**: An attacker cannot copy a signed record from one repository to another because the signature is bound to the original repository DID
355
+
- **Ensures context integrity**: Attestations are only valid within their intended repository context
356
+
- **Automatic enforcement**: The library automatically adds the repository field during CID generation
357
+
358
+
**Important**: Always verify signatures with the correct repository DID. Verifying with a different repository DID will (correctly) fail validation, as this would indicate a potential replay attack.
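
As a quick illustration (a sketch assuming `create_attestation_cid` is exported at the crate root), binding the same record and metadata to two different repository DIDs yields two different content CIDs, which is why a signature replayed into another repository cannot validate:

```rust
use atproto_attestation::{create_attestation_cid, input::AnyInput};
use serde_json::json;

fn main() -> anyhow::Result<()> {
    let record = json!({"$type": "app.bsky.feed.post", "text": "Hello!"});
    let metadata = json!({"$type": "com.example.signature"});

    // The repository DID is folded into the signed content CID during `$sig` preparation
    let cid_a = create_attestation_cid(
        AnyInput::Serialize(record.clone()),
        AnyInput::Serialize(metadata.clone()),
        "did:plc:repo-a",
    )?;
    let cid_b = create_attestation_cid(
        AnyInput::Serialize(record),
        AnyInput::Serialize(metadata),
        "did:plc:repo-b",
    )?;

    // Different repositories, different CIDs, therefore different (non-transferable) signatures
    assert_ne!(cid_a.to_string(), cid_b.to_string());

    Ok(())
}
```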
359
+
360
+
### Signature Normalization
361
+
362
+
All ECDSA signatures are automatically normalized to low-S form to prevent signature malleability attacks:
363
+
364
+
- The library enforces low-S normalization during signature creation
365
+
- Verification accepts only normalized signatures
366
+
- This prevents attackers from creating alternate valid signatures for the same content
367
+
368
+
### Key Management Best Practices
369
+
370
+
- **Private keys**: Never log, transmit, or store private keys in plaintext
371
+
- **Key rotation**: Plan for key rotation by using verification method references that can be updated in DID documents
372
+
- **Key types**: The library supports P-256, P-384, and K-256 elliptic curves
373
+
- **did:key**: For testing and simple use cases, did:key identifiers provide self-contained key references
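
A minimal sketch of generating a keypair and deriving the did:key reference that goes into attestation metadata as the `key` field:

```rust
use atproto_identity::key::{KeyType, generate_key, to_public};

fn main() -> anyhow::Result<()> {
    // Generate a fresh secp256k1 signing key; the other supported curves follow the same pattern
    let private_key = generate_key(KeyType::K256Private)?;

    // Only the public half should ever leave the process; its did:key form is the
    // self-contained reference placed in the attestation's `key` field
    let public_key = to_public(&private_key)?;
    println!("key reference: {}", public_key);

    Ok(())
}
```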
374
+
375
+
### CID Verification
376
+
377
+
- **Always verify against CIDs**: Signatures are over CID bytes, not the original record content
378
+
- **Deterministic generation**: The same record with the same `$sig` metadata always produces the same CID
379
+
- **Content integrity**: Any modification to the record will produce a different CID and invalidate signatures
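
The earlier CID sketch extends naturally to determinism and tamper detection (same crate-root path assumption):

```rust
use atproto_attestation::{create_attestation_cid, input::AnyInput};
use serde_json::json;

fn main() -> anyhow::Result<()> {
    let record = json!({"$type": "app.bsky.feed.post", "text": "Hello!"});
    let metadata = json!({"$type": "com.example.signature"});
    let repository = "did:plc:repo123";

    // Same record, metadata, and repository always hash to the same CID
    let first = create_attestation_cid(
        AnyInput::Serialize(record.clone()),
        AnyInput::Serialize(metadata.clone()),
        repository,
    )?;
    let second = create_attestation_cid(
        AnyInput::Serialize(record.clone()),
        AnyInput::Serialize(metadata.clone()),
        repository,
    )?;
    assert_eq!(first.to_string(), second.to_string());

    // Any edit to the record changes the CID and therefore invalidates existing signatures
    let tampered = json!({"$type": "app.bsky.feed.post", "text": "Hello?"});
    let third = create_attestation_cid(
        AnyInput::Serialize(tampered),
        AnyInput::Serialize(metadata),
        repository,
    )?;
    assert_ne!(first.to_string(), third.to_string());

    Ok(())
}
```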
380
+
381
+
### Metadata Validation
382
+
383
+
When creating attestations:
384
+
385
+
- The `$type` field is always required in metadata to scope the attestation
386
+
- The `repository` field is automatically added and must not be manually set
387
+
- Custom metadata fields are preserved and included in CID calculation
388
+
- The `cid` field is automatically added to inline attestation metadata
389
+
390
+
### Remote Attestation Considerations
391
+
392
+
- **Proof record storage**: Store proof records in the attestor's repository with appropriate access controls
393
+
- **CID matching**: Verify that the CID in the proof record matches the computed CID of the attested content
394
+
- **Record resolution**: Use trusted record resolvers when fetching remote proof records
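
A sketch of the CID-matching check, assuming `append_remote_attestation` and the `errors` module are exposed as above: a proof record whose `cid` does not match the recomputed content CID is rejected before any strongRef is attached (the DIDs, AT-URI, and bogus CID below are placeholders):

```rust
use atproto_attestation::{append_remote_attestation, errors::AttestationError, input::AnyInput};
use serde_json::json;

fn main() {
    let record = json!({"$type": "app.bsky.feed.post", "text": "Hello world!"});

    // Proof record carrying a CID that does not match the recomputed content CID
    let stale_proof = json!({
        "$type": "com.example.attestation",
        "issuer": "did:plc:issuer123",
        "cid": "bafyreibogusbogusbogusbogusbogusbogusbogusbogus" // placeholder
    });

    let result = append_remote_attestation(
        AnyInput::Serialize(record),
        AnyInput::Serialize(stale_proof),
        "did:plc:repo123",
        "at://did:plc:attestor456/com.example.attestation/3k2abc", // placeholder AT-URI
    );

    match result {
        Err(AttestationError::RemoteAttestationCidMismatch { expected, actual }) => {
            eprintln!("rejected: proof cid {expected} does not match computed cid {actual}");
        }
        Err(other) => eprintln!("rejected: {other}"),
        Ok(_) => println!("strongRef attached"),
    }
}
```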
395
+
396
+
## License
397
+
398
+
MIT License
+787
crates/atproto-attestation/src/attestation.rs
···
1
+
//! Core attestation creation functions.
2
+
//!
3
+
//! This module provides functions for creating inline and remote attestations
4
+
//! and attaching attestation references.
5
+
6
+
use crate::cid::{create_attestation_cid, create_dagbor_cid};
7
+
use crate::errors::AttestationError;
8
+
pub use crate::input::AnyInput;
9
+
use crate::signature::normalize_signature;
10
+
use crate::utils::BASE64;
11
+
use atproto_identity::key::{KeyData, KeyResolver, sign, validate};
12
+
use atproto_record::lexicon::com::atproto::repo::STRONG_REF_NSID;
13
+
use atproto_record::tid::Tid;
14
+
use base64::Engine;
15
+
use serde::Serialize;
16
+
use serde_json::{Value, json, Map};
17
+
use std::convert::TryInto;
18
+
19
+
/// Helper function to extract and validate signatures array from a record
20
+
fn extract_signatures(record_obj: &Map<String, Value>) -> Result<Vec<Value>, AttestationError> {
21
+
match record_obj.get("signatures") {
22
+
Some(value) => value
23
+
.as_array()
24
+
.ok_or(AttestationError::SignaturesFieldInvalid)
25
+
.cloned(),
26
+
None => Ok(vec![]),
27
+
}
28
+
}
29
+
30
+
/// Helper function to append a signature to a record and return the modified record
31
+
fn append_signature_to_record(
32
+
mut record_obj: Map<String, Value>,
33
+
signature: Value,
34
+
) -> Result<Value, AttestationError> {
35
+
let mut signatures = extract_signatures(&record_obj)?;
36
+
signatures.push(signature);
37
+
38
+
record_obj.insert(
39
+
"signatures".to_string(),
40
+
Value::Array(signatures),
41
+
);
42
+
43
+
Ok(Value::Object(record_obj))
44
+
}
45
+
46
+
/// Creates a cryptographic signature for a record with attestation metadata.
47
+
///
48
+
/// This is a low-level function that generates just the signature bytes without
49
+
/// embedding them in a record structure. It's useful when you need to create
50
+
/// signatures independently or for custom attestation workflows.
51
+
///
52
+
/// The signature is created over a content CID that binds together:
53
+
/// - The record content
54
+
/// - The attestation metadata
55
+
/// - The repository DID (to prevent replay attacks)
56
+
///
57
+
/// # Arguments
58
+
///
59
+
/// * `record_input` - The record to sign (as AnyInput: String, Json, or TypedLexicon)
60
+
/// * `attestation_input` - The attestation metadata (as AnyInput)
61
+
/// * `repository` - The repository DID where this record will be stored
62
+
/// * `private_key_data` - The private key to use for signing
63
+
///
64
+
/// # Returns
65
+
///
66
+
/// A byte vector containing the normalized ECDSA signature that can be verified
67
+
/// against the same content CID.
68
+
///
69
+
/// # Errors
70
+
///
71
+
/// Returns an error if:
72
+
/// - CID generation fails
73
+
/// - Signature creation fails
74
+
/// - Signature normalization fails
75
+
///
76
+
/// # Example
77
+
///
78
+
/// ```rust
79
+
/// use atproto_attestation::{create_signature, input::AnyInput};
80
+
/// use atproto_identity::key::{KeyType, generate_key};
81
+
/// use serde_json::json;
82
+
///
83
+
/// # fn example() -> Result<(), Box<dyn std::error::Error>> {
84
+
/// let private_key = generate_key(KeyType::K256Private)?;
85
+
///
86
+
/// let record = json!({"$type": "app.bsky.feed.post", "text": "Hello!"});
87
+
/// let metadata = json!({"$type": "com.example.signature"});
88
+
///
89
+
/// let signature_bytes = create_signature(
90
+
/// AnyInput::Serialize(record),
91
+
/// AnyInput::Serialize(metadata),
92
+
/// "did:plc:repo123",
93
+
/// &private_key
94
+
/// )?;
95
+
///
96
+
/// // signature_bytes can now be base64-encoded or used as needed
97
+
/// # Ok(())
98
+
/// # }
99
+
/// ```
100
+
pub fn create_signature<R, M>(
101
+
record_input: AnyInput<R>,
102
+
attestation_input: AnyInput<M>,
103
+
repository: &str,
104
+
private_key_data: &KeyData,
105
+
) -> Result<Vec<u8>, AttestationError>
106
+
where
107
+
R: Serialize + Clone,
108
+
M: Serialize + Clone,
109
+
{
110
+
// Step 1: Create a content CID from record + attestation + repository
111
+
let content_cid = create_attestation_cid(record_input, attestation_input, repository)?;
112
+
113
+
// Step 2: Sign the CID bytes
114
+
let raw_signature = sign(private_key_data, &content_cid.to_bytes())
115
+
.map_err(|error| AttestationError::SignatureCreationFailed { error })?;
116
+
117
+
// Step 3: Normalize the signature to ensure consistent format
118
+
normalize_signature(raw_signature, private_key_data.key_type())
119
+
}
120
+
121
+
/// Creates a remote attestation with both the attested record and proof record.
122
+
///
123
+
/// This is the recommended way to create remote attestations. It generates both:
124
+
/// 1. The attested record with a strongRef in the signatures array
125
+
/// 2. The proof record containing the CID to be stored in the attestation repository
126
+
///
127
+
/// The CID is generated with the repository DID included in the `$sig` metadata
128
+
/// to bind the attestation to a specific repository and prevent replay attacks.
129
+
///
130
+
/// # Arguments
131
+
///
132
+
/// * `record_input` - The record to attest (as AnyInput: String, Json, or TypedLexicon)
133
+
/// * `metadata_input` - The attestation metadata (must include `$type`)
134
+
/// * `repository` - The DID of the repository housing the original record
135
+
/// * `attestation_repository` - The DID of the repository that will store the proof record
136
+
///
137
+
/// # Returns
138
+
///
139
+
/// A tuple containing:
140
+
/// * `(attested_record, proof_record)` - Both records needed for remote attestation
141
+
///
142
+
/// # Errors
143
+
///
144
+
/// Returns an error if:
145
+
/// - The record or metadata are not valid JSON objects
146
+
/// - The metadata is missing the required `$type` field
147
+
/// - CID generation fails
148
+
///
149
+
/// # Example
150
+
///
151
+
/// ```rust
152
+
/// use atproto_attestation::{create_remote_attestation, input::AnyInput};
153
+
/// use serde_json::json;
154
+
///
155
+
/// # fn example() -> Result<(), Box<dyn std::error::Error>> {
156
+
/// let record = json!({"$type": "app.bsky.feed.post", "text": "Hello!"});
157
+
/// let metadata = json!({"$type": "com.example.attestation"});
158
+
///
159
+
/// let (attested_record, proof_record) = create_remote_attestation(
160
+
/// AnyInput::Serialize(record),
161
+
/// AnyInput::Serialize(metadata),
162
+
/// "did:plc:repo123", // Source repository
163
+
/// "did:plc:attestor456" // Attestation repository
164
+
/// )?;
165
+
/// # Ok(())
166
+
/// # }
167
+
/// ```
168
+
pub fn create_remote_attestation<
169
+
R: Serialize + Clone,
170
+
M: Serialize + Clone,
171
+
>(
172
+
record_input: AnyInput<R>,
173
+
metadata_input: AnyInput<M>,
174
+
repository: &str,
175
+
attestation_repository: &str,
176
+
) -> Result<(Value, Value), AttestationError> {
177
+
// Step 1: Create a content CID
178
+
let content_cid =
179
+
create_attestation_cid(record_input.clone(), metadata_input.clone(), repository)?;
180
+
181
+
let record_obj: Map<String, Value> = record_input
182
+
.try_into()
183
+
.map_err(|_| AttestationError::RecordMustBeObject)?;
184
+
185
+
// Step 2: Create the remote attestation record
186
+
let (remote_attestation_record, remote_attestation_type) = {
187
+
let mut metadata_obj: Map<String, Value> = metadata_input
188
+
.try_into()
189
+
.map_err(|_| AttestationError::MetadataMustBeObject)?;
190
+
191
+
// Extract the type from metadata before modifying it
192
+
let remote_type = metadata_obj
193
+
.get("$type")
194
+
.and_then(Value::as_str)
195
+
.ok_or(AttestationError::MetadataMissingType)?
196
+
.to_string();
197
+
198
+
metadata_obj.insert("cid".to_string(), Value::String(content_cid.to_string()));
199
+
(serde_json::Value::Object(metadata_obj), remote_type)
200
+
};
201
+
202
+
// Step 3: Create the remote attestation reference (type, AT-URI, and CID)
203
+
let remote_attestation_record_key = Tid::new();
204
+
let remote_attestation_cid = create_dagbor_cid(&remote_attestation_record)?;
205
+
206
+
let attestation_reference = json!({
207
+
"$type": STRONG_REF_NSID,
208
+
"uri": format!("at://{attestation_repository}/{remote_attestation_type}/{remote_attestation_record_key}"),
209
+
"cid": remote_attestation_cid.to_string()
210
+
});
211
+
212
+
// Step 4: Append the attestation reference to the record "signatures" array
213
+
let attested_record = append_signature_to_record(record_obj, attestation_reference)?;
214
+
215
+
Ok((attested_record, remote_attestation_record))
216
+
}
217
+
218
+
/// Creates an inline attestation with signature embedded in the record.
219
+
///
220
+
/// This is the v2 API that supports flexible input types (String, Json, TypedLexicon)
221
+
/// and provides a more streamlined interface for creating inline attestations.
222
+
///
223
+
/// The CID is generated with the repository DID included in the `$sig` metadata
224
+
/// to bind the attestation to a specific repository and prevent replay attacks.
225
+
///
226
+
/// # Arguments
227
+
///
228
+
/// * `record_input` - The record to sign (as AnyInput: String, Json, or TypedLexicon)
229
+
/// * `metadata_input` - The attestation metadata (must include `$type` and `key`)
230
+
/// * `repository` - The DID of the repository that will house this record
231
+
/// * `private_key_data` - The private key to use for signing
232
+
///
233
+
/// # Returns
234
+
///
235
+
/// The record with an inline attestation embedded in the signatures array
236
+
///
237
+
/// # Errors
238
+
///
239
+
/// Returns an error if:
240
+
/// - The record or metadata are not valid JSON objects
241
+
/// - The metadata is missing required fields
242
+
/// - Signature creation fails
243
+
/// - CID generation fails
244
+
///
245
+
/// # Example
246
+
///
247
+
/// ```rust
248
+
/// use atproto_attestation::{create_inline_attestation, input::AnyInput};
249
+
/// use atproto_identity::key::{KeyType, generate_key, to_public};
250
+
/// use serde_json::json;
251
+
///
252
+
/// # fn example() -> Result<(), Box<dyn std::error::Error>> {
253
+
/// let private_key = generate_key(KeyType::K256Private)?;
254
+
/// let public_key = to_public(&private_key)?;
255
+
///
256
+
/// let record = json!({"$type": "app.bsky.feed.post", "text": "Hello!"});
257
+
/// let metadata = json!({
258
+
/// "$type": "com.example.signature",
259
+
/// "key": format!("{}", public_key)
260
+
/// });
261
+
///
262
+
/// let signed_record = create_inline_attestation(
263
+
/// AnyInput::Serialize(record),
264
+
/// AnyInput::Serialize(metadata),
265
+
/// "did:plc:repo123",
266
+
/// &private_key
267
+
/// )?;
268
+
/// # Ok(())
269
+
/// # }
270
+
/// ```
271
+
pub fn create_inline_attestation<
272
+
R: Serialize + Clone,
273
+
M: Serialize + Clone,
274
+
>(
275
+
record_input: AnyInput<R>,
276
+
metadata_input: AnyInput<M>,
277
+
repository: &str,
278
+
private_key_data: &KeyData,
279
+
) -> Result<Value, AttestationError> {
280
+
// Step 1: Create a content CID
281
+
let content_cid =
282
+
create_attestation_cid(record_input.clone(), metadata_input.clone(), repository)?;
283
+
284
+
let record_obj: Map<String, Value> = record_input
285
+
.try_into()
286
+
.map_err(|_| AttestationError::RecordMustBeObject)?;
287
+
288
+
// Step 2: Create the inline attestation record
289
+
let inline_attestation_record = {
290
+
let mut metadata_obj: Map<String, Value> = metadata_input
291
+
.try_into()
292
+
.map_err(|_| AttestationError::MetadataMustBeObject)?;
293
+
294
+
metadata_obj.insert("cid".to_string(), Value::String(content_cid.to_string()));
295
+
296
+
let raw_signature = sign(private_key_data, &content_cid.to_bytes())
297
+
.map_err(|error| AttestationError::SignatureCreationFailed { error })?;
298
+
let signature_bytes = normalize_signature(raw_signature, private_key_data.key_type())?;
299
+
300
+
metadata_obj.insert(
301
+
"signature".to_string(),
302
+
json!({"$bytes": BASE64.encode(signature_bytes)}),
303
+
);
304
+
305
+
serde_json::Value::Object(metadata_obj)
306
+
};
307
+
308
+
// Step 3: Append the inline attestation to the record "signatures" array
309
+
append_signature_to_record(record_obj, inline_attestation_record)
310
+
}
311
+
312
+
/// Validates an existing proof record and appends a strongRef to it in the record's signatures array.
313
+
///
314
+
/// This function validates that an existing proof record (attestation metadata with CID)
315
+
/// is valid for the given record and repository, then creates and appends a strongRef to it.
316
+
///
317
+
/// Unlike `create_remote_attestation` which creates a new proof record, this function validates
318
+
/// an existing proof record that was already created and stored in an attestor's repository.
319
+
///
320
+
/// # Security
321
+
///
322
+
/// - **Repository binding validation**: Ensures the attestation was created for the specified repository DID
323
+
/// - **CID verification**: Validates the proof record's CID matches the computed CID
324
+
/// - **Content validation**: Ensures the proof record content matches what should be attested
325
+
///
326
+
/// # Workflow
327
+
///
328
+
/// 1. Compute the content CID from record + metadata + repository (same as attestation creation)
329
+
/// 2. Extract the claimed CID from the proof record metadata
330
+
/// 3. Verify the claimed CID matches the computed CID
331
+
/// 4. Extract the proof record's storage CID (DAG-CBOR CID of the full proof record)
332
+
/// 5. Create a strongRef with the AT-URI and proof record CID
333
+
/// 6. Append the strongRef to the record's signatures array
334
+
///
335
+
/// # Arguments
336
+
///
337
+
/// * `record_input` - The record to append the attestation to (as AnyInput)
338
+
/// * `metadata_input` - The proof record metadata (must include `$type`, `cid`, and attestation fields)
339
+
/// * `repository` - The repository DID where the source record is stored (for replay attack prevention)
340
+
/// * `attestation_uri` - The AT-URI where the proof record is stored (e.g., "at://did:plc:attestor/com.example.attestation/abc123")
341
+
///
342
+
/// # Returns
343
+
///
344
+
/// The modified record with the strongRef appended to its `signatures` array
345
+
///
346
+
/// # Errors
347
+
///
348
+
/// Returns an error if:
349
+
/// - The record or metadata are not valid JSON objects
350
+
/// - The metadata is missing the `cid` field
351
+
/// - The computed CID doesn't match the claimed CID in the metadata
352
+
/// - The metadata is missing required attestation fields
353
+
///
354
+
/// # Type Parameters
355
+
///
356
+
/// * `R` - The record type (must implement Serialize + LexiconType + PartialEq + Clone)
357
+
/// * `A` - The attestation type (must implement Serialize + LexiconType + PartialEq + Clone)
358
+
///
359
+
/// # Example
360
+
///
361
+
/// ```ignore
362
+
/// use atproto_attestation::{append_remote_attestation, input::AnyInput};
363
+
/// use serde_json::json;
364
+
///
365
+
/// let record = json!({
366
+
/// "$type": "app.bsky.feed.post",
367
+
/// "text": "Hello world!"
368
+
/// });
369
+
///
370
+
/// // This is the proof record that was previously created and stored
371
+
/// let proof_metadata = json!({
372
+
/// "$type": "com.example.attestation",
373
+
/// "issuer": "did:plc:issuer",
374
+
/// "cid": "bafyrei...", // Content CID computed from record+metadata+repository
375
+
/// // ... other attestation fields
376
+
/// });
377
+
///
378
+
/// let repository_did = "did:plc:repo123";
379
+
/// let attestation_uri = "at://did:plc:attestor456/com.example.attestation/abc123";
380
+
///
381
+
/// let signed_record = append_remote_attestation(
382
+
/// AnyInput::Serialize(record),
383
+
/// AnyInput::Serialize(proof_metadata),
384
+
/// repository_did,
385
+
/// attestation_uri
386
+
/// )?;
387
+
/// ```
388
+
pub fn append_remote_attestation<R, A>(
389
+
record_input: AnyInput<R>,
390
+
metadata_input: AnyInput<A>,
391
+
repository: &str,
392
+
attestation_uri: &str,
393
+
) -> Result<Value, AttestationError>
394
+
where
395
+
R: Serialize + Clone,
396
+
A: Serialize + Clone,
397
+
{
398
+
// Step 1: Compute the content CID (same as create_remote_attestation)
399
+
let content_cid =
400
+
create_attestation_cid(record_input.clone(), metadata_input.clone(), repository)?;
401
+
402
+
// Step 2: Convert metadata to JSON and extract the claimed CID
403
+
let metadata_obj: Map<String, Value> = metadata_input
404
+
.try_into()
405
+
.map_err(|_| AttestationError::MetadataMustBeObject)?;
406
+
407
+
let claimed_cid = metadata_obj
408
+
.get("cid")
409
+
.and_then(Value::as_str)
410
+
.filter(|value| !value.is_empty())
411
+
.ok_or(AttestationError::SignatureMissingField {
412
+
field: "cid".to_string(),
413
+
})?;
414
+
415
+
// Step 3: Verify the claimed CID matches the computed content CID
416
+
if content_cid.to_string() != claimed_cid {
417
+
return Err(AttestationError::RemoteAttestationCidMismatch {
418
+
expected: claimed_cid.to_string(),
419
+
actual: content_cid.to_string(),
420
+
});
421
+
}
422
+
423
+
// Step 4: Compute the proof record's DAG-CBOR CID
424
+
let proof_record_cid = create_dagbor_cid(&metadata_obj)?;
425
+
426
+
// Step 5: Create the strongRef
427
+
let strongref = json!({
428
+
"$type": STRONG_REF_NSID,
429
+
"uri": attestation_uri,
430
+
"cid": proof_record_cid.to_string()
431
+
});
432
+
433
+
// Step 6: Convert record to JSON object and append the strongRef
434
+
let record_obj: Map<String, Value> = record_input
435
+
.try_into()
436
+
.map_err(|_| AttestationError::RecordMustBeObject)?;
437
+
438
+
append_signature_to_record(record_obj, strongref)
439
+
}
440
+
441
+
/// Validates an inline attestation and appends it to a record's signatures array.
442
+
///
443
+
/// Inline attestations contain cryptographic signatures embedded directly in the record.
444
+
/// This function validates the attestation signature against the record and repository,
445
+
/// then appends it if validation succeeds.
446
+
///
447
+
/// # Security
448
+
///
449
+
/// - **Repository binding validation**: Ensures the attestation was created for the specified repository DID
450
+
/// - **CID verification**: Validates the CID in the attestation matches the computed CID
451
+
/// - **Signature verification**: Cryptographically verifies the ECDSA signature
452
+
/// - **Key resolution**: Resolves and validates the verification key
453
+
///
454
+
/// # Arguments
455
+
///
456
+
/// * `record_input` - The record to append the attestation to (as AnyInput)
457
+
/// * `attestation_input` - The inline attestation to validate and append (as AnyInput)
458
+
/// * `repository` - The repository DID where this record is stored (for replay attack prevention)
459
+
/// * `key_resolver` - Resolver for looking up verification keys from DIDs
460
+
///
461
+
/// # Returns
462
+
///
463
+
/// The modified record with the validated attestation appended to its `signatures` array
464
+
///
465
+
/// # Errors
466
+
///
467
+
/// Returns an error if:
468
+
/// - The record or attestation are not valid JSON objects
469
+
/// - The attestation is missing required fields (`$type`, `key`, `cid`, `signature`)
470
+
/// - The attestation CID doesn't match the computed CID for the record
471
+
/// - The signature bytes are invalid or not base64-encoded
472
+
/// - Signature verification fails
473
+
/// - Key resolution fails
474
+
///
475
+
/// # Type Parameters
476
+
///
477
+
/// * `R` - The record type (must implement Serialize + LexiconType + PartialEq + Clone)
478
+
/// * `A` - The attestation type (must implement Serialize + LexiconType + PartialEq + Clone)
479
+
/// * `KR` - The key resolver type (must implement KeyResolver)
480
+
///
481
+
/// # Example
482
+
///
483
+
/// ```ignore
484
+
/// use atproto_attestation::{append_inline_attestation, input::AnyInput};
485
+
/// use serde_json::json;
486
+
///
487
+
/// let record = json!({
488
+
/// "$type": "app.bsky.feed.post",
489
+
/// "text": "Hello world!"
490
+
/// });
491
+
///
492
+
/// let attestation = json!({
493
+
/// "$type": "com.example.inlineSignature",
494
+
/// "key": "did:key:zQ3sh...",
495
+
/// "cid": "bafyrei...",
496
+
/// "signature": {"$bytes": "base64-signature-bytes"}
497
+
/// });
498
+
///
499
+
/// let repository_did = "did:plc:repo123";
500
+
/// let key_resolver = /* your KeyResolver implementation */;
501
+
///
502
+
/// let signed_record = append_inline_attestation(
503
+
/// AnyInput::Serialize(record),
504
+
/// AnyInput::Serialize(attestation),
505
+
/// repository_did,
506
+
/// key_resolver
507
+
/// ).await?;
508
+
/// ```
509
+
pub async fn append_inline_attestation<R, A, KR>(
510
+
record_input: AnyInput<R>,
511
+
attestation_input: AnyInput<A>,
512
+
repository: &str,
513
+
key_resolver: KR,
514
+
) -> Result<Value, AttestationError>
515
+
where
516
+
R: Serialize + Clone,
517
+
A: Serialize + Clone,
518
+
KR: KeyResolver,
519
+
{
520
+
// Step 1: Create a content CID
521
+
let content_cid =
522
+
create_attestation_cid(record_input.clone(), attestation_input.clone(), repository)?;
523
+
524
+
let record_obj: Map<String, Value> = record_input
525
+
.try_into()
526
+
.map_err(|_| AttestationError::RecordMustBeObject)?;
527
+
528
+
let attestation_obj: Map<String, Value> = attestation_input
529
+
.try_into()
530
+
.map_err(|_| AttestationError::MetadataMustBeObject)?;
531
+
532
+
let key = attestation_obj
533
+
.get("key")
534
+
.and_then(Value::as_str)
535
+
.filter(|value| !value.is_empty())
536
+
.ok_or(AttestationError::SignatureMissingField {
537
+
field: "key".to_string(),
538
+
})?;
539
+
let key_data =
540
+
key_resolver
541
+
.resolve(key)
542
+
.await
543
+
.map_err(|error| AttestationError::KeyResolutionFailed {
544
+
key: key.to_string(),
545
+
error,
546
+
})?;
547
+
548
+
let signature_bytes = attestation_obj
549
+
.get("signature")
550
+
.and_then(Value::as_object)
551
+
.and_then(|object| object.get("$bytes"))
552
+
.and_then(Value::as_str)
553
+
.ok_or(AttestationError::SignatureBytesFormatInvalid)?;
554
+
555
+
let signature_bytes = BASE64
556
+
.decode(signature_bytes)
557
+
.map_err(|error| AttestationError::SignatureDecodingFailed { error })?;
558
+
559
+
let computed_cid_bytes = content_cid.to_bytes();
560
+
561
+
validate(&key_data, &signature_bytes, &computed_cid_bytes)
562
+
.map_err(|error| AttestationError::SignatureValidationFailed { error })?;
563
+
564
+
// Final step: Append the validated attestation to the signatures array
565
+
append_signature_to_record(record_obj, json!(attestation_obj))
566
+
}
567
+
568
+
#[cfg(test)]
569
+
mod tests {
570
+
use super::*;
571
+
use atproto_identity::key::{KeyType, generate_key, to_public};
572
+
use serde_json::json;
573
+
574
+
#[test]
575
+
fn create_remote_attestation_produces_both_records() -> Result<(), Box<dyn std::error::Error>> {
576
+
577
+
let record = json!({
578
+
"$type": "app.example.record",
579
+
"body": "remote attestation"
580
+
});
581
+
582
+
let metadata = json!({
583
+
"$type": "com.example.attestation"
584
+
});
585
+
586
+
let source_repository = "did:plc:test";
587
+
let attestation_repository = "did:plc:attestor";
588
+
589
+
let (attested_record, proof_record) =
590
+
create_remote_attestation(
591
+
AnyInput::Serialize(record.clone()),
592
+
AnyInput::Serialize(metadata),
593
+
source_repository,
594
+
attestation_repository,
595
+
)?;
596
+
597
+
// Verify proof record structure
598
+
let proof_object = proof_record.as_object().expect("proof should be an object");
599
+
assert_eq!(
600
+
proof_object.get("$type").and_then(Value::as_str),
601
+
Some("com.example.attestation")
602
+
);
603
+
assert!(
604
+
proof_object.get("cid").and_then(Value::as_str).is_some(),
605
+
"proof must contain a cid"
606
+
);
607
+
assert!(
608
+
proof_object.get("repository").is_none(),
609
+
"repository should not be in final proof record"
610
+
);
611
+
612
+
// Verify attested record has strongRef
613
+
let attested_object = attested_record
614
+
.as_object()
615
+
.expect("attested record should be an object");
616
+
let signatures = attested_object
617
+
.get("signatures")
618
+
.and_then(Value::as_array)
619
+
.expect("attested record should have signatures array");
620
+
assert_eq!(signatures.len(), 1, "should have one signature");
621
+
622
+
let signature = &signatures[0];
623
+
assert_eq!(
624
+
signature.get("$type").and_then(Value::as_str),
625
+
Some("com.atproto.repo.strongRef"),
626
+
"signature should be a strongRef"
627
+
);
628
+
assert!(
629
+
signature.get("uri").and_then(Value::as_str).is_some(),
630
+
"strongRef must contain a uri"
631
+
);
632
+
assert!(
633
+
signature.get("cid").and_then(Value::as_str).is_some(),
634
+
"strongRef must contain a cid"
635
+
);
636
+
637
+
Ok(())
638
+
}
639
+
640
+
#[tokio::test]
641
+
async fn create_inline_attestation_full_workflow() -> Result<(), Box<dyn std::error::Error>> {
642
+
let private_key = generate_key(KeyType::K256Private)?;
643
+
let public_key = to_public(&private_key)?;
644
+
let key_reference = format!("{}", &public_key);
645
+
let repository_did = "did:plc:testrepository123";
646
+
647
+
let base_record = json!({
648
+
"$type": "app.example.record",
649
+
"body": "Sign me"
650
+
});
651
+
652
+
let sig_metadata = json!({
653
+
"$type": "com.example.inlineSignature",
654
+
"key": key_reference,
655
+
"purpose": "unit-test"
656
+
});
657
+
658
+
let signed = create_inline_attestation(
659
+
AnyInput::Serialize(base_record),
660
+
AnyInput::Serialize(sig_metadata),
661
+
repository_did,
662
+
&private_key,
663
+
)?;
664
+
665
+
// Verify structure
666
+
let signatures = signed
667
+
.get("signatures")
668
+
.and_then(Value::as_array)
669
+
.expect("should have signatures array");
670
+
assert_eq!(signatures.len(), 1);
671
+
672
+
let sig = &signatures[0];
673
+
assert_eq!(
674
+
sig.get("$type").and_then(Value::as_str),
675
+
Some("com.example.inlineSignature")
676
+
);
677
+
assert!(sig.get("signature").is_some());
678
+
assert!(sig.get("key").is_some());
679
+
assert!(sig.get("repository").is_none()); // Should not be in final signature
680
+
681
+
Ok(())
682
+
}
683
+
684
+
#[test]
685
+
fn create_signature_returns_valid_bytes() -> Result<(), Box<dyn std::error::Error>> {
686
+
let private_key = generate_key(KeyType::K256Private)?;
687
+
let public_key = to_public(&private_key)?;
688
+
689
+
let record = json!({
690
+
"$type": "app.example.record",
691
+
"body": "Test signature creation"
692
+
});
693
+
694
+
let metadata = json!({
695
+
"$type": "com.example.signature",
696
+
"key": format!("{}", public_key)
697
+
});
698
+
699
+
let repository = "did:plc:test123";
700
+
701
+
// Create signature
702
+
let signature_bytes = create_signature(
703
+
AnyInput::Serialize(record.clone()),
704
+
AnyInput::Serialize(metadata.clone()),
705
+
repository,
706
+
&private_key,
707
+
)?;
708
+
709
+
// Verify signature is not empty
710
+
assert!(!signature_bytes.is_empty(), "Signature bytes should not be empty");
711
+
712
+
// Verify signature length is reasonable for ECDSA (typically 64-72 bytes for secp256k1)
713
+
assert!(
714
+
signature_bytes.len() >= 64 && signature_bytes.len() <= 73,
715
+
"Signature length should be between 64 and 73 bytes, got {}",
716
+
signature_bytes.len()
717
+
);
718
+
719
+
// Verify we can validate the signature
720
+
let content_cid = create_attestation_cid(
721
+
AnyInput::Serialize(record),
722
+
AnyInput::Serialize(metadata),
723
+
repository,
724
+
)?;
725
+
726
+
validate(&public_key, &signature_bytes, &content_cid.to_bytes())?;
727
+
728
+
Ok(())
729
+
}
730
+
731
+
#[test]
732
+
fn create_signature_different_inputs_produce_different_signatures() -> Result<(), Box<dyn std::error::Error>> {
733
+
let private_key = generate_key(KeyType::K256Private)?;
734
+
735
+
let record1 = json!({"$type": "app.example.record", "body": "First message"});
736
+
let record2 = json!({"$type": "app.example.record", "body": "Second message"});
737
+
let metadata = json!({"$type": "com.example.signature"});
738
+
let repository = "did:plc:test123";
739
+
740
+
let sig1 = create_signature(
741
+
AnyInput::Serialize(record1),
742
+
AnyInput::Serialize(metadata.clone()),
743
+
repository,
744
+
&private_key,
745
+
)?;
746
+
747
+
let sig2 = create_signature(
748
+
AnyInput::Serialize(record2),
749
+
AnyInput::Serialize(metadata),
750
+
repository,
751
+
&private_key,
752
+
)?;
753
+
754
+
assert_ne!(sig1, sig2, "Different records should produce different signatures");
755
+
756
+
Ok(())
757
+
}
758
+
759
+
#[test]
760
+
fn create_signature_different_repositories_produce_different_signatures() -> Result<(), Box<dyn std::error::Error>> {
761
+
let private_key = generate_key(KeyType::K256Private)?;
762
+
763
+
let record = json!({"$type": "app.example.record", "body": "Message"});
764
+
let metadata = json!({"$type": "com.example.signature"});
765
+
766
+
let sig1 = create_signature(
767
+
AnyInput::Serialize(record.clone()),
768
+
AnyInput::Serialize(metadata.clone()),
769
+
"did:plc:repo1",
770
+
&private_key,
771
+
)?;
772
+
773
+
let sig2 = create_signature(
774
+
AnyInput::Serialize(record),
775
+
AnyInput::Serialize(metadata),
776
+
"did:plc:repo2",
777
+
&private_key,
778
+
)?;
779
+
780
+
assert_ne!(
781
+
sig1, sig2,
782
+
"Different repository DIDs should produce different signatures"
783
+
);
784
+
785
+
Ok(())
786
+
}
787
+
}
+354
crates/atproto-attestation/src/bin/atproto-attestation-sign.rs
···
1
+
//! Command-line tool for signing AT Protocol records with inline or remote attestations.
2
+
//!
3
+
//! This tool creates cryptographic signatures for AT Protocol records using the CID-first
4
+
//! attestation specification. It supports both inline attestations (embedding signatures
5
+
//! directly in records) and remote attestations (creating separate proof records).
6
+
//!
7
+
//! ## Usage Patterns
8
+
//!
9
+
//! ### Remote Attestation
10
+
//! ```bash
11
+
//! atproto-attestation-sign remote <source_repository_did> <source_record> <attestation_repository_did> <metadata_record>
12
+
//! ```
13
+
//!
14
+
//! ### Inline Attestation
15
+
//! ```bash
16
+
//! atproto-attestation-sign inline <source_record> <repository_did> <signing_key> <metadata_record>
17
+
//! ```
18
+
//!
19
+
//! ## Arguments
20
+
//!
21
+
//! - `source_repository_did`: (Remote mode) DID of the repository housing the source record (prevents replay attacks)
22
+
//! - `source_record`: JSON string or path to JSON file containing the record being attested
23
+
//! - `attestation_repository_did`: (Remote mode) DID of the repository where the attestation proof will be stored
24
+
//! - `repository_did`: (Inline mode) DID of the repository that will house the record (prevents replay attacks)
25
+
//! - `signing_key`: (Inline mode) Private key string (did:key format) used to sign the attestation
26
+
//! - `metadata_record`: JSON string or path to JSON file with attestation metadata used during CID creation
27
+
//!
28
+
//! ## Examples
29
+
//!
30
+
//! ```bash
31
+
//! # Remote attestation - creates proof record and strongRef
32
+
//! atproto-attestation-sign remote \
33
+
//! did:plc:sourceRepo... \
34
+
//! record.json \
35
+
//! did:plc:attestationRepo... \
36
+
//! metadata.json
37
+
//!
38
+
//! # Inline attestation - embeds signature in record
39
+
//! atproto-attestation-sign inline \
40
+
//! record.json \
41
+
//! did:plc:xyz123... \
42
+
//! did:key:z42tv1pb3... \
43
+
//! '{"$type":"com.example.attestation","purpose":"demo"}'
44
+
//!
45
+
//! # Read from stdin
46
+
//! cat record.json | atproto-attestation-sign remote \
47
+
//! did:plc:sourceRepo... \
48
+
//! - \
49
+
//! did:plc:attestationRepo... \
50
+
//! metadata.json
51
+
//! ```
52
+
53
+
use anyhow::{Context, Result, anyhow};
54
+
use atproto_attestation::{
55
+
create_inline_attestation, create_remote_attestation,
56
+
input::AnyInput,
57
+
};
58
+
use atproto_identity::key::identify_key;
59
+
use clap::{Parser, Subcommand};
60
+
use serde_json::Value;
61
+
use std::{
62
+
fs,
63
+
io::{self, Read},
64
+
path::Path,
65
+
};
66
+
67
+
/// Command-line tool for signing AT Protocol records with cryptographic attestations.
68
+
///
69
+
/// Creates inline or remote attestations following the CID-first specification.
70
+
/// Inline attestations embed signatures directly in records, while remote attestations
71
+
/// generate separate proof records with strongRef references.
72
+
#[derive(Parser)]
73
+
#[command(
74
+
name = "atproto-attestation-sign",
75
+
version,
76
+
about = "Sign AT Protocol records with cryptographic attestations",
77
+
long_about = "
78
+
A command-line tool for signing AT Protocol records using the CID-first attestation
79
+
specification. Supports both inline attestations (signatures embedded in the record)
80
+
and remote attestations (separate proof records with CID references).
81
+
82
+
MODES:
83
+
remote Creates a separate proof record with strongRef reference
84
+
Syntax: remote <source_repository_did> <source_record> <attestation_repository_did> <metadata_record>
85
+
86
+
inline Embeds signature bytes directly in the record
87
+
Syntax: inline <source_record> <repository_did> <signing_key> <metadata_record>
88
+
89
+
ARGUMENTS:
90
+
source_repository_did (Remote) DID of repository housing the source record (for replay prevention)
91
+
source_record JSON string or file path to the record being attested
92
+
attestation_repository_did (Remote) DID of repository where attestation proof will be stored
93
+
repository_did (Inline) DID of repository that will house the record (for replay prevention)
94
+
signing_key (Inline) Private key in did:key format for signing
95
+
metadata_record JSON string or file path with attestation metadata
96
+
97
+
EXAMPLES:
98
+
# Remote attestation (creates proof record + strongRef):
99
+
atproto-attestation-sign remote \\
100
+
did:plc:sourceRepo... \\
101
+
record.json \\
102
+
did:plc:attestationRepo... \\
103
+
metadata.json
104
+
105
+
# Inline attestation (embeds signature):
106
+
atproto-attestation-sign inline \\
107
+
record.json \\
108
+
did:plc:xyz123abc... \\
109
+
did:key:z42tv1pb3Dzog28Q1udyieg1YJP3x1Un5vraE1bttXeCDSpW \\
110
+
'{\"$type\":\"com.example.attestation\",\"purpose\":\"demo\"}'
111
+
112
+
# Read source record from stdin:
113
+
cat record.json | atproto-attestation-sign remote \\
114
+
did:plc:sourceRepo... \\
115
+
- \\
116
+
did:plc:attestationRepo... \\
117
+
metadata.json
118
+
119
+
OUTPUT:
120
+
Remote mode outputs TWO JSON objects:
121
+
1. The proof record (to be stored in the repository)
122
+
2. The source record with strongRef attestation appended
123
+
124
+
Inline mode outputs ONE JSON object:
125
+
- The source record with inline attestation embedded
126
+
"
127
+
)]
128
+
struct Args {
129
+
#[command(subcommand)]
130
+
command: Commands,
131
+
}
132
+
133
+
#[derive(Subcommand)]
134
+
enum Commands {
135
+
/// Create a remote attestation with separate proof record
136
+
///
137
+
/// Generates a proof record containing the CID and returns both the proof
138
+
/// record (to be stored in the attestation repository) and the source record
139
+
/// with a strongRef attestation reference.
140
+
#[command(visible_alias = "r")]
141
+
Remote {
142
+
/// DID of the repository housing the source record (for replay attack prevention)
143
+
source_repository_did: String,
144
+
145
+
/// Source record JSON string or file path (use '-' for stdin)
146
+
source_record: String,
147
+
148
+
/// DID of the repository where the attestation proof will be stored
149
+
attestation_repository_did: String,
150
+
151
+
/// Attestation metadata JSON string or file path
152
+
metadata_record: String,
153
+
},
154
+
155
+
/// Create an inline attestation with embedded signature
156
+
///
157
+
/// Signs the record with the provided private key and embeds the signature
158
+
/// directly in the record's attestation structure.
159
+
#[command(visible_alias = "i")]
160
+
Inline {
161
+
/// Source record JSON string or file path (use '-' for stdin)
162
+
source_record: String,
163
+
164
+
/// Repository DID that will house the record (for replay attack prevention)
165
+
repository_did: String,
166
+
167
+
/// Private signing key in did:key format (e.g., did:key:z...)
168
+
signing_key: String,
169
+
170
+
/// Attestation metadata JSON string or file path
171
+
metadata_record: String,
172
+
},
173
+
}
174
+
175
+
#[tokio::main]
176
+
async fn main() -> Result<()> {
177
+
let args = Args::parse();
178
+
179
+
match args.command {
180
+
Commands::Remote {
181
+
source_repository_did,
182
+
source_record,
183
+
attestation_repository_did,
184
+
metadata_record,
185
+
} => handle_remote_attestation(
186
+
&source_record,
187
+
&source_repository_did,
188
+
&metadata_record,
189
+
&attestation_repository_did,
190
+
)?,
191
+
192
+
Commands::Inline {
193
+
source_record,
194
+
repository_did,
195
+
signing_key,
196
+
metadata_record,
197
+
} => handle_inline_attestation(
198
+
&source_record,
199
+
&repository_did,
200
+
&signing_key,
201
+
&metadata_record,
202
+
)?,
203
+
}
204
+
205
+
Ok(())
206
+
}
207
+
208
+
/// Handle remote attestation mode.
209
+
///
210
+
/// Creates a proof record and appends a strongRef to the source record.
211
+
/// Outputs both the proof record and the updated source record.
212
+
///
213
+
/// - `source_repository_did`: Used for signature binding (prevents replay attacks)
214
+
/// - `attestation_repository_did`: Where the attestation proof record will be stored
215
+
fn handle_remote_attestation(
216
+
source_record: &str,
217
+
source_repository_did: &str,
218
+
metadata_record: &str,
219
+
attestation_repository_did: &str,
220
+
) -> Result<()> {
221
+
// Load source record and metadata
222
+
let record_json = load_json_input(source_record)?;
223
+
let metadata_json = load_json_input(metadata_record)?;
224
+
225
+
// Validate inputs
226
+
if !record_json.is_object() {
227
+
return Err(anyhow!("Source record must be a JSON object"));
228
+
}
229
+
230
+
if !metadata_json.is_object() {
231
+
return Err(anyhow!("Metadata record must be a JSON object"));
232
+
}
233
+
234
+
// Validate repository DIDs
235
+
if !source_repository_did.starts_with("did:") {
236
+
return Err(anyhow!(
237
+
"Source repository DID must start with 'did:' prefix, got: {}",
238
+
source_repository_did
239
+
));
240
+
}
241
+
242
+
if !attestation_repository_did.starts_with("did:") {
243
+
return Err(anyhow!(
244
+
"Attestation repository DID must start with 'did:' prefix, got: {}",
245
+
attestation_repository_did
246
+
));
247
+
}
248
+
249
+
// Create the remote attestation using v2 API
250
+
// This creates both the attested record with strongRef and the proof record in one call
251
+
let (attested_record, proof_record) =
252
+
create_remote_attestation(
253
+
AnyInput::Serialize(record_json),
254
+
AnyInput::Serialize(metadata_json),
255
+
source_repository_did,
256
+
attestation_repository_did,
257
+
)
258
+
.context("Failed to create remote attestation")?;
259
+
260
+
// Output both records
261
+
println!("=== Proof Record (store in repository) ===");
262
+
println!("{}", serde_json::to_string_pretty(&proof_record)?);
263
+
println!();
264
+
println!("=== Attested Record (with strongRef) ===");
265
+
println!("{}", serde_json::to_string_pretty(&attested_record)?);
266
+
267
+
Ok(())
268
+
}
269
+
270
+
/// Handle inline attestation mode.
271
+
///
272
+
/// Signs the record with the provided key and embeds the signature.
273
+
/// Outputs the record with inline attestation.
274
+
fn handle_inline_attestation(
275
+
source_record: &str,
276
+
repository_did: &str,
277
+
signing_key: &str,
278
+
metadata_record: &str,
279
+
) -> Result<()> {
280
+
// Load source record and metadata
281
+
let record_json = load_json_input(source_record)?;
282
+
let metadata_json = load_json_input(metadata_record)?;
283
+
284
+
// Validate inputs
285
+
if !record_json.is_object() {
286
+
return Err(anyhow!("Source record must be a JSON object"));
287
+
}
288
+
289
+
if !metadata_json.is_object() {
290
+
return Err(anyhow!("Metadata record must be a JSON object"));
291
+
}
292
+
293
+
// Validate repository DID
294
+
if !repository_did.starts_with("did:") {
295
+
return Err(anyhow!(
296
+
"Repository DID must start with 'did:' prefix, got: {}",
297
+
repository_did
298
+
));
299
+
}
300
+
301
+
// Parse the signing key
302
+
let key_data = identify_key(signing_key)
303
+
.with_context(|| format!("Failed to parse signing key: {}", signing_key))?;
304
+
305
+
// Create inline attestation with repository binding using v2 API
306
+
let signed_record = create_inline_attestation(
307
+
AnyInput::Serialize(record_json),
308
+
AnyInput::Serialize(metadata_json),
309
+
repository_did,
310
+
&key_data,
311
+
)
312
+
.context("Failed to create inline attestation")?;
313
+
314
+
// Output the signed record
315
+
println!("{}", serde_json::to_string_pretty(&signed_record)?);
316
+
317
+
Ok(())
318
+
}
319
+
320
+
/// Load JSON input from various sources.
321
+
///
322
+
/// Accepts:
323
+
/// - "-" for stdin
324
+
/// - File paths (if the file exists)
325
+
/// - Direct JSON strings
326
+
///
327
+
/// Returns the parsed JSON value or an error.
328
+
fn load_json_input(argument: &str) -> Result<Value> {
329
+
// Handle stdin input
330
+
if argument == "-" {
331
+
let mut input = String::new();
332
+
io::stdin()
333
+
.read_to_string(&mut input)
334
+
.context("Failed to read from stdin")?;
335
+
return serde_json::from_str(&input).context("Failed to parse JSON from stdin");
336
+
}
337
+
338
+
// Try as file path first
339
+
let path = Path::new(argument);
340
+
if path.is_file() {
341
+
let file_content = fs::read_to_string(path)
342
+
.with_context(|| format!("Failed to read file: {}", argument))?;
343
+
return serde_json::from_str(&file_content)
344
+
.with_context(|| format!("Failed to parse JSON from file: {}", argument));
345
+
}
346
+
347
+
// Try as direct JSON string
348
+
serde_json::from_str(argument).with_context(|| {
349
+
format!(
350
+
"Argument is neither valid JSON nor a readable file: {}",
351
+
argument
352
+
)
353
+
})
354
+
}
+363
crates/atproto-attestation/src/bin/atproto-attestation-verify.rs
···
1
+
//! Command-line tool for verifying cryptographic signatures on AT Protocol records.
2
+
//!
3
+
//! This tool validates attestation signatures on AT Protocol records by reconstructing
4
+
//! the signed content and verifying ECDSA signatures against public keys embedded in the
5
+
//! attestation metadata.
6
+
//!
7
+
//! ## Usage Patterns
8
+
//!
9
+
//! ### Verify all signatures in a record
10
+
//! ```bash
11
+
//! atproto-attestation-verify <record> <repository_did>
12
+
//! ```
13
+
//!
14
+
//! ### Verify a specific attestation against a record
15
+
//! ```bash
16
+
//! atproto-attestation-verify <record> <repository_did> <attestation>
17
+
//! ```
18
+
//!
19
+
//! ## Parameter Formats
20
+
//!
21
+
//! Both `record` and `attestation` parameters accept:
22
+
//! - **JSON string**: Direct JSON payload (e.g., `'{"$type":"...","text":"..."}'`)
23
+
//! - **File path**: Path to a JSON file (e.g., `./record.json`)
24
+
//! - **AT-URI**: AT Protocol URI to fetch the record (e.g., `at://did:plc:abc/app.bsky.feed.post/123`)
25
+
//!
26
+
//! ## Examples
27
+
//!
28
+
//! ```bash
29
+
//! # Verify all signatures in a record from file
30
+
//! atproto-attestation-verify ./signed_post.json did:plc:repo123
31
+
//!
32
+
//! # Verify all signatures in a record from AT-URI
33
+
//! atproto-attestation-verify at://did:plc:abc123/app.bsky.feed.post/3k2k4j5h6g did:plc:abc123
34
+
//!
35
+
//! # Verify specific attestation against a record (both from files)
36
+
//! atproto-attestation-verify ./record.json did:plc:repo123 ./attestation.json
37
+
//!
38
+
//! # Verify specific attestation (from AT-URI) against record (from file)
39
+
//! atproto-attestation-verify ./record.json did:plc:repo123 at://did:plc:xyz/com.example.attestation/abc123
40
+
//!
41
+
//! # Read record from stdin, verify all signatures
42
+
//! cat signed.json | atproto-attestation-verify - did:plc:repo123
43
+
//!
44
+
//! # Verify inline JSON
45
+
//! atproto-attestation-verify '{"$type":"app.bsky.feed.post","text":"Hello","signatures":[...]}' did:plc:repo123
46
+
//! ```
47
+
48
+
use anyhow::{Context, Result, anyhow};
49
+
use atproto_attestation::AnyInput;
50
+
use atproto_identity::key::{KeyData, KeyResolver, identify_key};
51
+
use clap::Parser;
52
+
use serde_json::Value;
53
+
use std::{
54
+
fs,
55
+
io::{self, Read},
56
+
path::Path,
57
+
};
58
+
59
+
/// Command-line tool for verifying cryptographic signatures on AT Protocol records.
60
+
///
61
+
/// Validates attestation signatures by reconstructing signed content and checking
62
+
/// ECDSA signatures against embedded public keys. Supports verifying all signatures
63
+
/// in a record or validating a specific attestation record.
64
+
///
65
+
/// The repository DID parameter is now REQUIRED to prevent replay attacks where
66
+
/// attestations might be copied to different repositories.
67
+
#[derive(Parser)]
68
+
#[command(
69
+
name = "atproto-attestation-verify",
70
+
version,
71
+
about = "Verify cryptographic signatures of AT Protocol records",
72
+
long_about = "
73
+
A command-line tool for verifying cryptographic signatures of AT Protocol records.
74
+
75
+
USAGE:
76
+
atproto-attestation-verify <record> <repository_did> Verify all signatures
77
+
78
+
PARAMETER FORMATS:
79
+
Each parameter accepts JSON strings, file paths, or AT-URIs:
80
+
- JSON string: '{\"$type\":\"...\",\"text\":\"...\"}'
81
+
- File path: ./record.json
82
+
- AT-URI: at://did:plc:abc/app.bsky.feed.post/123
83
+
- Stdin: - (for record parameter only)
84
+
85
+
EXAMPLES:
86
+
# Verify all signatures in a record:
87
+
atproto-attestation-verify ./signed_post.json did:plc:repo123
88
+
atproto-attestation-verify at://did:plc:abc/app.bsky.feed.post/123 did:plc:abc
89
+
90
+
# Verify specific attestation:
91
+
atproto-attestation-verify ./record.json did:plc:repo123 ./attestation.json
92
+
atproto-attestation-verify ./record.json did:plc:repo123 at://did:plc:xyz/com.example.attestation/abc
93
+
94
+
# Read from stdin:
95
+
cat signed.json | atproto-attestation-verify - did:plc:repo123
96
+
97
+
OUTPUT:
98
+
Single record mode: Reports each signature with ✓ (valid), ✗ (invalid), or ? (unverified)
99
+
Attestation mode: Outputs 'OK' on success, error message on failure
100
+
101
+
VERIFICATION:
102
+
- Inline signatures are verified by reconstructing $sig and validating against embedded keys
103
+
- Remote attestations (strongRef) are reported as unverified (require fetching proof record)
104
+
- Keys are resolved from did:key identifiers or require a key resolver for DID document keys
105
+
"
106
+
)]
107
+
struct Args {
108
+
/// Record to verify - JSON string, file path, AT-URI, or '-' for stdin
109
+
record: String,
110
+
111
+
/// Repository DID that houses the record (required for replay attack prevention)
112
+
repository_did: String,
113
+
114
+
/// Optional attestation record to verify against the record - JSON string, file path, or AT-URI
115
+
attestation: Option<String>,
116
+
}
117
+
118
+
/// A key resolver that supports `did:key:` identifiers directly.
119
+
///
120
+
/// This resolver handles key references that are encoded as `did:key:` strings,
121
+
/// parsing them to extract the cryptographic key data. For other DID methods,
122
+
/// it returns an error since those would require fetching DID documents.
123
+
struct DidKeyResolver {}
124
+
125
+
#[async_trait::async_trait]
126
+
impl KeyResolver for DidKeyResolver {
127
+
async fn resolve(&self, subject: &str) -> Result<KeyData> {
128
+
if subject.starts_with("did:key:") {
129
+
identify_key(subject)
130
+
.map_err(|e| anyhow!("Failed to parse did:key '{}': {}", subject, e))
131
+
} else {
132
+
Err(anyhow!(
133
+
"Subject '{}' is not a did:key: identifier. Only did:key: subjects are supported by this resolver.",
134
+
subject
135
+
))
136
+
}
137
+
}
138
+
}
139
+
140
+
#[tokio::main]
141
+
async fn main() -> Result<()> {
142
+
let args = Args::parse();
143
+
144
+
// Load the record
145
+
let record = load_input(&args.record, true)
146
+
.await
147
+
.context("Failed to load record")?;
148
+
149
+
if !record.is_object() {
150
+
return Err(anyhow!("Record must be a JSON object"));
151
+
}
152
+
153
+
// Validate repository DID
154
+
if !args.repository_did.starts_with("did:") {
155
+
return Err(anyhow!(
156
+
"Repository DID must start with 'did:' prefix, got: {}",
157
+
args.repository_did
158
+
));
159
+
}
160
+
161
+
// Determine verification mode
162
+
verify_all_mode(&record, &args.repository_did).await
163
+
}
164
+
165
+
/// Mode 1: Verify all signatures contained in the record.
166
+
///
167
+
/// Reports each signature with status indicators:
168
+
/// - ✓ Valid signature
169
+
/// - ✗ Invalid signature
170
+
/// - ? Unverified (e.g., remote attestations requiring proof record fetch)
171
+
async fn verify_all_mode(record: &Value, repository_did: &str) -> Result<()> {
172
+
// Create an identity resolver for fetching remote attestations
173
+
use atproto_identity::resolve::{HickoryDnsResolver, InnerIdentityResolver};
174
+
use std::sync::Arc;
175
+
176
+
let http_client = reqwest::Client::new();
177
+
let dns_resolver = HickoryDnsResolver::create_resolver(&[]);
178
+
179
+
let identity_resolver = InnerIdentityResolver {
180
+
http_client: http_client.clone(),
181
+
dns_resolver: Arc::new(dns_resolver),
182
+
plc_hostname: "plc.directory".to_string(),
183
+
};
184
+
185
+
// Create record resolver that can fetch remote attestation proof records
186
+
let record_resolver = RemoteAttestationResolver {
187
+
http_client,
188
+
identity_resolver,
189
+
};
190
+
191
+
let key_resolver = DidKeyResolver {};
192
+
193
+
atproto_attestation::verify_record(
194
+
AnyInput::Serialize(record.clone()),
195
+
repository_did,
196
+
key_resolver,
197
+
record_resolver,
198
+
)
199
+
.await
200
+
.context("Failed to verify signatures")
201
+
}
202
+
203
+
/// Load input from various sources: JSON string, file path, AT-URI, or stdin.
204
+
///
205
+
/// The `allow_stdin` parameter controls whether "-" is interpreted as stdin.
206
+
async fn load_input(input: &str, allow_stdin: bool) -> Result<Value> {
207
+
// Handle stdin
208
+
if input == "-" {
209
+
if !allow_stdin {
210
+
return Err(anyhow!(
211
+
"Stdin ('-') is only supported for the record parameter"
212
+
));
213
+
}
214
+
215
+
let mut buffer = String::new();
216
+
io::stdin()
217
+
.read_to_string(&mut buffer)
218
+
.context("Failed to read from stdin")?;
219
+
220
+
return serde_json::from_str(&buffer).context("Failed to parse JSON from stdin");
221
+
}
222
+
223
+
// Check if it's an AT-URI
224
+
if input.starts_with("at://") {
225
+
return load_from_aturi(input)
226
+
.await
227
+
.with_context(|| format!("Failed to fetch record from AT-URI: {}", input));
228
+
}
229
+
230
+
// Try as file path
231
+
let path = Path::new(input);
232
+
if path.exists() && path.is_file() {
233
+
let content =
234
+
fs::read_to_string(path).with_context(|| format!("Failed to read file: {}", input))?;
235
+
236
+
return serde_json::from_str(&content)
237
+
.with_context(|| format!("Failed to parse JSON from file: {}", input));
238
+
}
239
+
240
+
// Try as direct JSON string
241
+
serde_json::from_str(input).with_context(|| {
242
+
format!(
243
+
"Input is not valid JSON, an existing file, or an AT-URI: {}",
244
+
input
245
+
)
246
+
})
247
+
}
248
+
249
+
/// Load a record from an AT-URI by fetching it from a PDS.
250
+
///
251
+
/// This requires resolving the DID to find the PDS endpoint, then fetching the record.
252
+
async fn load_from_aturi(aturi: &str) -> Result<Value> {
253
+
use atproto_identity::resolve::{HickoryDnsResolver, InnerIdentityResolver};
254
+
use atproto_record::aturi::ATURI;
255
+
use std::str::FromStr;
256
+
use std::sync::Arc;
257
+
258
+
// Parse the AT-URI
259
+
let parsed = ATURI::from_str(aturi).map_err(|e| anyhow!("Invalid AT-URI: {}", e))?;
260
+
261
+
// Create resolver components
262
+
let http_client = reqwest::Client::new();
263
+
let dns_resolver = HickoryDnsResolver::create_resolver(&[]);
264
+
265
+
// Create identity resolver
266
+
let identity_resolver = InnerIdentityResolver {
267
+
http_client: http_client.clone(),
268
+
dns_resolver: Arc::new(dns_resolver),
269
+
plc_hostname: "plc.directory".to_string(),
270
+
};
271
+
272
+
// Resolve the DID to get the PDS endpoint
273
+
let document = identity_resolver
274
+
.resolve(&parsed.authority)
275
+
.await
276
+
.with_context(|| format!("Failed to resolve DID: {}", parsed.authority))?;
277
+
278
+
// Find the PDS endpoint
279
+
let pds_endpoint = document
280
+
.service
281
+
.iter()
282
+
.find(|s| s.r#type == "AtprotoPersonalDataServer")
283
+
.map(|s| s.service_endpoint.as_str())
284
+
.ok_or_else(|| anyhow!("No PDS endpoint found for DID: {}", parsed.authority))?;
285
+
286
+
// Fetch the record using the XRPC client
287
+
let response = atproto_client::com::atproto::repo::get_record(
288
+
&http_client,
289
+
&atproto_client::client::Auth::None,
290
+
pds_endpoint,
291
+
&parsed.authority,
292
+
&parsed.collection,
293
+
&parsed.record_key,
294
+
None,
295
+
)
296
+
.await
297
+
.with_context(|| format!("Failed to fetch record from PDS: {}", pds_endpoint))?;
298
+
299
+
match response {
300
+
atproto_client::com::atproto::repo::GetRecordResponse::Record { value, .. } => Ok(value),
301
+
atproto_client::com::atproto::repo::GetRecordResponse::Error(error) => {
302
+
Err(anyhow!("Failed to fetch record: {}", error.error_message()))
303
+
}
304
+
}
305
+
}
306
+
307
+
/// Record resolver for remote attestations that resolves DIDs to find PDS endpoints.
308
+
struct RemoteAttestationResolver {
309
+
http_client: reqwest::Client,
310
+
identity_resolver: atproto_identity::resolve::InnerIdentityResolver,
311
+
}
312
+
313
+
#[async_trait::async_trait]
314
+
impl atproto_client::record_resolver::RecordResolver for RemoteAttestationResolver {
315
+
async fn resolve<T>(&self, aturi: &str) -> anyhow::Result<T>
316
+
where
317
+
T: serde::de::DeserializeOwned + Send,
318
+
{
319
+
use atproto_record::aturi::ATURI;
320
+
use std::str::FromStr;
321
+
322
+
// Parse the AT-URI
323
+
let parsed = ATURI::from_str(aturi).map_err(|e| anyhow!("Invalid AT-URI: {}", e))?;
324
+
325
+
// Resolve the DID to get the PDS endpoint
326
+
let document = self
327
+
.identity_resolver
328
+
.resolve(&parsed.authority)
329
+
.await
330
+
.with_context(|| format!("Failed to resolve DID: {}", parsed.authority))?;
331
+
332
+
// Find the PDS endpoint
333
+
let pds_endpoint = document
334
+
.service
335
+
.iter()
336
+
.find(|s| s.r#type == "AtprotoPersonalDataServer")
337
+
.map(|s| s.service_endpoint.as_str())
338
+
.ok_or_else(|| anyhow!("No PDS endpoint found for DID: {}", parsed.authority))?;
339
+
340
+
// Fetch the record using the XRPC client
341
+
let response = atproto_client::com::atproto::repo::get_record(
342
+
&self.http_client,
343
+
&atproto_client::client::Auth::None,
344
+
pds_endpoint,
345
+
&parsed.authority,
346
+
&parsed.collection,
347
+
&parsed.record_key,
348
+
None,
349
+
)
350
+
.await
351
+
.with_context(|| format!("Failed to fetch record from PDS: {}", pds_endpoint))?;
352
+
353
+
match response {
354
+
atproto_client::com::atproto::repo::GetRecordResponse::Record { value, .. } => {
355
+
serde_json::from_value(value)
356
+
.map_err(|e| anyhow!("Failed to deserialize record: {}", e))
357
+
}
358
+
atproto_client::com::atproto::repo::GetRecordResponse::Error(error) => {
359
+
Err(anyhow!("Failed to fetch record: {}", error.error_message()))
360
+
}
361
+
}
362
+
}
363
+
}
+532
crates/atproto-attestation/src/cid.rs
···
1
+
//! CID (Content Identifier) generation for AT Protocol records.
2
+
//!
3
+
//! This module implements the CID-first attestation workflow, generating
4
+
//! deterministic content identifiers using DAG-CBOR serialization and SHA-256 hashing.
5
+
6
+
use crate::{errors::AttestationError, input::AnyInput};
7
+
#[cfg(test)]
8
+
use atproto_record::typed::LexiconType;
9
+
use cid::Cid;
10
+
use multihash::Multihash;
11
+
use serde::Serialize;
12
+
use serde_json::{Value, Map};
13
+
use sha2::{Digest, Sha256};
14
+
use std::convert::TryInto;
15
+
16
+
/// DAG-CBOR codec identifier used in AT Protocol CIDs.
17
+
///
18
+
/// This codec (0x71) indicates that the data is encoded using DAG-CBOR,
19
+
/// a deterministic subset of CBOR designed for content-addressable systems.
20
+
pub const DAG_CBOR_CODEC: u64 = 0x71;
21
+
22
+
/// SHA-256 multihash code used in AT Protocol CIDs.
23
+
///
24
+
/// This code (0x12) identifies SHA-256 as the hash function used to generate
25
+
/// the content identifier. SHA-256 provides 256-bit cryptographic security.
26
+
pub const MULTIHASH_SHA256: u64 = 0x12;
27
+
28
+
/// Create a CID from any serializable data using DAG-CBOR encoding.
29
+
///
30
+
/// This function generates a content identifier (CID) for arbitrary data by:
31
+
/// 1. Serializing the input to DAG-CBOR format
32
+
/// 2. Computing a SHA-256 hash of the serialized bytes
33
+
/// 3. Creating a CIDv1 with dag-cbor codec (0x71)
34
+
///
35
+
/// # Arguments
36
+
///
37
+
/// * `record` - The data to generate a CID for (must implement `Serialize`)
38
+
///
39
+
/// # Returns
40
+
///
41
+
/// The generated CID for the data using CIDv1 with dag-cbor codec (0x71) and sha2-256 hash
42
+
///
43
+
/// # Type Parameters
44
+
///
45
+
/// * `T` - Any type that implements `Serialize` and is compatible with DAG-CBOR encoding
46
+
///
47
+
/// # Errors
48
+
///
49
+
/// Returns an error if:
50
+
/// - DAG-CBOR serialization fails
51
+
/// - Multihash wrapping fails
52
+
///
53
+
/// # Example
54
+
///
55
+
/// ```rust
56
+
/// use atproto_attestation::cid::create_dagbor_cid;
57
+
/// use serde_json::json;
58
+
///
59
+
/// # fn example() -> Result<(), Box<dyn std::error::Error>> {
60
+
/// let data = json!({"text": "Hello, world!"});
61
+
/// let cid = create_dagbor_cid(&data)?;
62
+
/// assert_eq!(cid.codec(), 0x71); // dag-cbor codec
63
+
/// # Ok(())
64
+
/// # }
65
+
/// ```
66
+
pub fn create_dagbor_cid<T: Serialize>(record: &T) -> Result<Cid, AttestationError> {
67
+
let dag_cbor_bytes = serde_ipld_dagcbor::to_vec(record)?;
68
+
let digest = Sha256::digest(&dag_cbor_bytes);
69
+
let multihash = Multihash::wrap(MULTIHASH_SHA256, &digest)
70
+
.map_err(|error| AttestationError::MultihashWrapFailed { error })?;
71
+
72
+
Ok(Cid::new_v1(DAG_CBOR_CODEC, multihash))
73
+
}
74
+
75
+
/// Create a CID for an attestation with automatic `$sig` metadata preparation.
76
+
///
77
+
/// This is the high-level function used internally by attestation creation functions.
78
+
/// It handles the full workflow of preparing a signing record with `$sig` metadata
79
+
/// and generating the CID.
80
+
///
81
+
/// # Arguments
82
+
///
83
+
/// * `record_input` - The record to attest (as `AnyInput::String` or `AnyInput::Serialize`)
84
+
/// * `metadata_input` - The attestation metadata (must include `$type`)
85
+
/// * `repository` - The repository DID to bind the attestation to (prevents replay attacks)
86
+
///
87
+
/// # Returns
88
+
///
89
+
/// The generated CID for the prepared attestation record
90
+
///
91
+
/// # Errors
92
+
///
93
+
/// Returns an error if:
94
+
/// - The record or metadata are not valid JSON objects
95
+
/// - The record is missing the required `$type` field
96
+
/// - The metadata is missing the required `$type` field
97
+
/// - DAG-CBOR serialization fails
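///
/// # Example
///
/// A minimal sketch; the lexicon names, field values, and the `did:key:...`
/// placeholder below are illustrative only and not part of any real lexicon.
///
/// ```rust
/// use atproto_attestation::cid::create_attestation_cid;
/// use atproto_attestation::AnyInput;
/// use serde_json::json;
///
/// # fn example() -> Result<(), atproto_attestation::AttestationError> {
/// let record = json!({"$type": "app.example.post", "text": "Hello"});
/// let metadata = json!({"$type": "com.example.sig", "key": "did:key:..."});
///
/// // The repository DID is folded into the `$sig` metadata, so the same record
/// // attested under a different repository yields a different CID.
/// let cid = create_attestation_cid(
///     AnyInput::Serialize(record),
///     AnyInput::Serialize(metadata),
///     "did:plc:example",
/// )?;
/// assert_eq!(cid.codec(), 0x71); // dag-cbor
/// # Ok(())
/// # }
/// ```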
98
+
pub fn create_attestation_cid<
99
+
R: Serialize + Clone,
100
+
M: Serialize + Clone,
101
+
>(
102
+
record_input: AnyInput<R>,
103
+
metadata_input: AnyInput<M>,
104
+
repository: &str,
105
+
) -> Result<Cid, AttestationError> {
106
+
let mut record_obj: Map<String, Value> = record_input
107
+
.try_into()
108
+
.map_err(|_| AttestationError::RecordMustBeObject)?;
109
+
110
+
if record_obj
111
+
.get("$type")
112
+
.and_then(Value::as_str)
113
+
.filter(|value| !value.is_empty())
114
+
.is_none()
115
+
{
116
+
return Err(AttestationError::RecordMissingType);
117
+
}
118
+
119
+
let mut metadata_obj: Map<String, Value> = metadata_input
120
+
.try_into()
121
+
.map_err(|_| AttestationError::MetadataMustBeObject)?;
122
+
123
+
if metadata_obj
124
+
.get("$type")
125
+
.and_then(Value::as_str)
126
+
.filter(|value| !value.is_empty())
127
+
.is_none()
128
+
{
129
+
return Err(AttestationError::MetadataMissingSigType);
130
+
}
131
+
132
+
record_obj.remove("signatures");
133
+
134
+
metadata_obj.remove("cid");
135
+
metadata_obj.remove("signature");
136
+
metadata_obj.insert(
137
+
"repository".to_string(),
138
+
Value::String(repository.to_string()),
139
+
);
140
+
141
+
record_obj.insert("$sig".to_string(), Value::Object(metadata_obj.clone()));
142
+
143
+
// Directly pass the Map<String, Value> - no need to wrap in Value::Object
144
+
create_dagbor_cid(&record_obj)
145
+
}
146
+
147
+
/// Validates that a CID string is a valid DAG-CBOR CID for AT Protocol attestations.
148
+
///
149
+
/// This function performs strict validation to ensure the CID meets the exact
150
+
/// specifications required for AT Protocol attestations:
151
+
///
152
+
/// 1. **Valid format**: The string must be a parseable CID
153
+
/// 2. **Version**: Must be CIDv1 (not CIDv0)
154
+
/// 3. **Codec**: Must use DAG-CBOR codec (0x71)
155
+
/// 4. **Hash algorithm**: Must use SHA-256 (multihash code 0x12)
156
+
/// 5. **Hash length**: Must have exactly 32 bytes (SHA-256 standard)
157
+
///
158
+
/// These requirements ensure consistency and security across the AT Protocol
159
+
/// ecosystem, particularly for content addressing and attestation verification.
160
+
///
161
+
/// # Arguments
162
+
///
163
+
/// * `cid` - A string slice containing the CID to validate
164
+
///
165
+
/// # Returns
166
+
///
167
+
/// * `true` if the CID is a valid DAG-CBOR CID with SHA-256 hash
168
+
/// * `false` if the CID is invalid or doesn't meet any requirement
169
+
///
170
+
/// # Examples
171
+
///
172
+
/// ```rust
173
+
/// use atproto_attestation::cid::validate_dagcbor_cid;
174
+
///
175
+
/// // Valid AT Protocol CID (CIDv1, DAG-CBOR, SHA-256)
176
+
/// let valid_cid = "bafyreigw5bqvbz6m3c3zjpqhxwl4njlnbbnw5xvptbx6dzfxjqcde6lt3y";
177
+
/// assert!(validate_dagcbor_cid(valid_cid));
178
+
///
179
+
/// // Invalid: Empty string
180
+
/// assert!(!validate_dagcbor_cid(""));
181
+
///
182
+
/// // Invalid: Not a CID
183
+
/// assert!(!validate_dagcbor_cid("not-a-cid"));
184
+
///
185
+
/// // Invalid: CIDv0 (starts with Qm)
186
+
/// let cid_v0 = "QmYwAPJzv5CZsnA625ub3XtLxT3Tz5Lno5Wqv9eKewWKjE";
187
+
/// assert!(!validate_dagcbor_cid(cid_v0));
188
+
/// ```
189
+
pub fn validate_dagcbor_cid(cid: &str) -> bool {
190
+
if cid.is_empty() {
191
+
return false
192
+
}
193
+
194
+
// Parse the CID using the cid crate for proper validation
195
+
let parsed_cid = match Cid::try_from(cid) {
196
+
Ok(value) => value,
197
+
Err(_) => return false,
198
+
};
199
+
200
+
// Verify it's CIDv1 (version 1)
201
+
if parsed_cid.version() != cid::Version::V1 {
202
+
return false;
203
+
}
204
+
205
+
// Verify it uses DAG-CBOR codec (0x71)
206
+
if parsed_cid.codec() != DAG_CBOR_CODEC {
207
+
return false;
208
+
}
209
+
210
+
// Get the multihash and verify it uses SHA-256
211
+
let multihash = parsed_cid.hash();
212
+
213
+
// SHA-256 code is 0x12
214
+
if multihash.code() != MULTIHASH_SHA256 {
215
+
return false;
216
+
}
217
+
218
+
// Verify the hash digest is 32 bytes (SHA-256 standard)
219
+
if multihash.digest().len() != 32 {
220
+
return false;
221
+
}
222
+
223
+
true
224
+
}
225
+
226
+
#[cfg(test)]
227
+
mod tests {
228
+
use super::*;
229
+
use atproto_record::typed::TypedLexicon;
230
+
use serde::Deserialize;
231
+
232
+
233
+
#[tokio::test]
234
+
async fn test_create_attestation_cid() -> Result<(), AttestationError> {
235
+
use atproto_record::datetime::format as datetime_format;
236
+
use chrono::{DateTime, Utc};
237
+
238
+
// Define test record type with createdAt and text fields
239
+
#[derive(Serialize, Deserialize, PartialEq, Clone)]
240
+
#[cfg_attr(debug_assertions, derive(Debug))]
241
+
struct TestRecord {
242
+
#[serde(rename = "createdAt", with = "datetime_format")]
243
+
created_at: DateTime<Utc>,
244
+
text: String,
245
+
}
246
+
247
+
impl LexiconType for TestRecord {
248
+
fn lexicon_type() -> &'static str {
249
+
"com.example.testrecord"
250
+
}
251
+
}
252
+
253
+
// Define test metadata type
254
+
#[derive(Serialize, Deserialize, PartialEq, Clone)]
255
+
#[cfg_attr(debug_assertions, derive(Debug))]
256
+
struct TestMetadata {
257
+
#[serde(rename = "createdAt", with = "datetime_format")]
258
+
created_at: DateTime<Utc>,
259
+
purpose: String,
260
+
}
261
+
262
+
impl LexiconType for TestMetadata {
263
+
fn lexicon_type() -> &'static str {
264
+
"com.example.testmetadata"
265
+
}
266
+
}
267
+
268
+
// Create test data
269
+
let created_at = DateTime::parse_from_rfc3339("2025-01-15T14:00:00.000Z")
270
+
.unwrap()
271
+
.with_timezone(&Utc);
272
+
273
+
let record = TestRecord {
274
+
created_at,
275
+
text: "Hello, AT Protocol!".to_string(),
276
+
};
277
+
278
+
let metadata_created_at = DateTime::parse_from_rfc3339("2025-01-15T14:05:00.000Z")
279
+
.unwrap()
280
+
.with_timezone(&Utc);
281
+
282
+
let metadata = TestMetadata {
283
+
created_at: metadata_created_at,
284
+
purpose: "attestation".to_string(),
285
+
};
286
+
287
+
let repository = "did:plc:test123";
288
+
289
+
// Create typed lexicons
290
+
let typed_record = TypedLexicon::new(record);
291
+
let typed_metadata = TypedLexicon::new(metadata);
292
+
293
+
// Call the function
294
+
let cid = create_attestation_cid(
295
+
AnyInput::Serialize(typed_record),
296
+
AnyInput::Serialize(typed_metadata),
297
+
repository,
298
+
)?;
299
+
300
+
// Verify CID properties
301
+
assert_eq!(cid.codec(), 0x71, "CID should use dag-cbor codec");
302
+
assert_eq!(cid.hash().code(), 0x12, "CID should use sha2-256 hash");
303
+
assert_eq!(
304
+
cid.hash().digest().len(),
305
+
32,
306
+
"Hash digest should be 32 bytes"
307
+
);
308
+
assert_eq!(cid.to_bytes().len(), 36, "CID should be 36 bytes total");
309
+
310
+
Ok(())
311
+
}
312
+
313
+
#[tokio::test]
314
+
async fn test_create_attestation_cid_deterministic() -> Result<(), AttestationError> {
315
+
use atproto_record::datetime::format as datetime_format;
316
+
use chrono::{DateTime, Utc};
317
+
318
+
// Define simple test types
319
+
#[derive(Serialize, Deserialize, PartialEq, Clone)]
320
+
struct SimpleRecord {
321
+
#[serde(rename = "createdAt", with = "datetime_format")]
322
+
created_at: DateTime<Utc>,
323
+
text: String,
324
+
}
325
+
326
+
impl LexiconType for SimpleRecord {
327
+
fn lexicon_type() -> &'static str {
328
+
"com.example.simple"
329
+
}
330
+
}
331
+
332
+
#[derive(Serialize, Deserialize, PartialEq, Clone)]
333
+
struct SimpleMetadata {
334
+
#[serde(rename = "createdAt", with = "datetime_format")]
335
+
created_at: DateTime<Utc>,
336
+
}
337
+
338
+
impl LexiconType for SimpleMetadata {
339
+
fn lexicon_type() -> &'static str {
340
+
"com.example.meta"
341
+
}
342
+
}
343
+
344
+
let created_at = DateTime::parse_from_rfc3339("2025-01-01T00:00:00.000Z")
345
+
.unwrap()
346
+
.with_timezone(&Utc);
347
+
348
+
let record1 = SimpleRecord {
349
+
created_at,
350
+
text: "test".to_string(),
351
+
};
352
+
let record2 = SimpleRecord {
353
+
created_at,
354
+
text: "test".to_string(),
355
+
};
356
+
357
+
let metadata1 = SimpleMetadata { created_at };
358
+
let metadata2 = SimpleMetadata { created_at };
359
+
360
+
let repository = "did:plc:same";
361
+
362
+
// Create CIDs for identical records
363
+
let cid1 = create_attestation_cid(
364
+
AnyInput::Serialize(TypedLexicon::new(record1)),
365
+
AnyInput::Serialize(TypedLexicon::new(metadata1)),
366
+
repository,
367
+
)?;
368
+
369
+
let cid2 = create_attestation_cid(
370
+
AnyInput::Serialize(TypedLexicon::new(record2)),
371
+
AnyInput::Serialize(TypedLexicon::new(metadata2)),
372
+
repository,
373
+
)?;
374
+
375
+
// Verify determinism: identical inputs produce identical CIDs
376
+
assert_eq!(
377
+
cid1, cid2,
378
+
"Identical records should produce identical CIDs"
379
+
);
380
+
381
+
Ok(())
382
+
}
383
+
384
+
#[tokio::test]
385
+
async fn test_create_attestation_cid_different_repositories() -> Result<(), AttestationError> {
386
+
use atproto_record::datetime::format as datetime_format;
387
+
use chrono::{DateTime, Utc};
388
+
389
+
#[derive(Serialize, Deserialize, PartialEq, Clone)]
390
+
struct RepoRecord {
391
+
#[serde(rename = "createdAt", with = "datetime_format")]
392
+
created_at: DateTime<Utc>,
393
+
text: String,
394
+
}
395
+
396
+
impl LexiconType for RepoRecord {
397
+
fn lexicon_type() -> &'static str {
398
+
"com.example.repo"
399
+
}
400
+
}
401
+
402
+
#[derive(Serialize, Deserialize, PartialEq, Clone)]
403
+
struct RepoMetadata {
404
+
#[serde(rename = "createdAt", with = "datetime_format")]
405
+
created_at: DateTime<Utc>,
406
+
}
407
+
408
+
impl LexiconType for RepoMetadata {
409
+
fn lexicon_type() -> &'static str {
410
+
"com.example.repometa"
411
+
}
412
+
}
413
+
414
+
let created_at = DateTime::parse_from_rfc3339("2025-01-01T12:00:00.000Z")
415
+
.unwrap()
416
+
.with_timezone(&Utc);
417
+
418
+
let record = RepoRecord {
419
+
created_at,
420
+
text: "content".to_string(),
421
+
};
422
+
let metadata = RepoMetadata { created_at };
423
+
424
+
// Same record and metadata, different repositories
425
+
let cid1 = create_attestation_cid(
426
+
AnyInput::Serialize(TypedLexicon::new(record.clone())),
427
+
AnyInput::Serialize(TypedLexicon::new(metadata.clone())),
428
+
"did:plc:repo1",
429
+
)?;
430
+
431
+
let cid2 = create_attestation_cid(
432
+
AnyInput::Serialize(TypedLexicon::new(record)),
433
+
AnyInput::Serialize(TypedLexicon::new(metadata)),
434
+
"did:plc:repo2",
435
+
)?;
436
+
437
+
// Different repositories should produce different CIDs (prevents replay attacks)
438
+
assert_ne!(
439
+
cid1, cid2,
440
+
"Different repository DIDs should produce different CIDs"
441
+
);
442
+
443
+
Ok(())
444
+
}
445
+
446
+
#[test]
447
+
fn test_validate_dagcbor_cid() {
448
+
// Test valid CID (generated from our own create_dagbor_cid function)
449
+
let valid_data = serde_json::json!({"test": "data"});
450
+
let valid_cid = create_dagbor_cid(&valid_data).unwrap();
451
+
let valid_cid_str = valid_cid.to_string();
452
+
assert!(validate_dagcbor_cid(&valid_cid_str), "Valid CID should pass validation");
453
+
454
+
// Test empty string
455
+
assert!(!validate_dagcbor_cid(""), "Empty string should fail validation");
456
+
457
+
// Test invalid CID string
458
+
assert!(!validate_dagcbor_cid("not-a-cid"), "Invalid string should fail validation");
459
+
assert!(!validate_dagcbor_cid("abc123"), "Invalid string should fail validation");
460
+
461
+
// Test CIDv0 (starts with Qm, uses different format)
462
+
let cid_v0 = "QmYwAPJzv5CZsnA625ub3XtLxT3Tz5Lno5Wqv9eKewWKjE";
463
+
assert!(!validate_dagcbor_cid(cid_v0), "CIDv0 should fail validation");
464
+
465
+
// Test valid CID base32 format but wrong codec (not DAG-CBOR)
466
+
// This is a valid CID but uses raw codec (0x55) instead of DAG-CBOR (0x71)
467
+
let wrong_codec = "bafkreigw5bqvbz6m3c3zjpqhxwl4njlnbbnw5xvptbx6dzfxjqcde6lt3y";
468
+
assert!(!validate_dagcbor_cid(wrong_codec), "CID with wrong codec should fail");
469
+
470
+
// Test that our constants match what we're checking
471
+
assert_eq!(DAG_CBOR_CODEC, 0x71, "DAG-CBOR codec constant should be 0x71");
472
+
assert_eq!(MULTIHASH_SHA256, 0x12, "SHA-256 multihash code should be 0x12");
473
+
}
474
+
475
+
#[tokio::test]
476
+
async fn phantom_data_test() -> Result<(), AttestationError> {
477
+
let repository = "did:web:example.com";
478
+
479
+
#[derive(Serialize, Deserialize, PartialEq, Clone)]
480
+
struct FooRecord {
481
+
text: String,
482
+
}
483
+
484
+
impl LexiconType for FooRecord {
485
+
fn lexicon_type() -> &'static str {
486
+
"com.example.foo"
487
+
}
488
+
}
489
+
490
+
#[derive(Serialize, Deserialize, PartialEq, Clone)]
491
+
struct BarRecord {
492
+
text: String,
493
+
}
494
+
495
+
impl LexiconType for BarRecord {
496
+
fn lexicon_type() -> &'static str {
497
+
"com.example.bar"
498
+
}
499
+
}
500
+
501
+
let foo = FooRecord {
502
+
text: "foo".to_string(),
503
+
};
504
+
let typed_foo = TypedLexicon::new(foo);
505
+
506
+
let bar = BarRecord {
507
+
text: "bar".to_string(),
508
+
};
509
+
let typed_bar = TypedLexicon::new(bar);
510
+
511
+
let cid1 = create_attestation_cid(
512
+
AnyInput::Serialize(typed_foo.clone()),
513
+
AnyInput::Serialize(typed_bar.clone()),
514
+
repository,
515
+
)?;
516
+
517
+
let value_bar = serde_json::to_value(typed_bar).expect("bar serde_json::Value conversion");
518
+
519
+
let cid2 = create_attestation_cid(
520
+
AnyInput::Serialize(typed_foo),
521
+
AnyInput::Serialize(value_bar),
522
+
repository,
523
+
)?;
524
+
525
+
assert_eq!(
526
+
cid1, cid2,
527
+
"Different repository DIDs should produce different CIDs"
528
+
);
529
+
530
+
Ok(())
531
+
}
532
+
}
+202
crates/atproto-attestation/src/errors.rs
···
1
+
//! Errors that can occur during attestation preparation and verification.
2
+
//!
3
+
//! Covers CID construction, `$sig` metadata validation, inline attestation
4
+
//! structure checks, and identity/key resolution failures.
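//!
//! Callers typically match only on the variants they care about; a brief sketch
//! (the match arms below are illustrative, not an exhaustive mapping):
//!
//! ```
//! use atproto_attestation::AttestationError;
//!
//! fn describe(error: &AttestationError) -> &'static str {
//!     match error {
//!         AttestationError::RecordMustBeObject => "record was not a JSON object",
//!         AttestationError::SignatureNotNormalized => "signature was not in low-S form",
//!         _ => "other attestation failure",
//!     }
//! }
//! ```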
5
+
6
+
use thiserror::Error;
7
+
8
+
/// Errors that can occur during attestation preparation and verification.
9
+
#[derive(Debug, Error)]
10
+
pub enum AttestationError {
11
+
/// Error when the record value is not a JSON object.
12
+
#[error("error-atproto-attestation-1 Record must be a JSON object")]
13
+
RecordMustBeObject,
14
+
15
+
/// Error when the record omits the `$type` discriminator.
16
+
#[error("error-atproto-attestation-1 Record must include a string `$type` field")]
17
+
RecordMissingType,
18
+
19
+
/// Error when attestation metadata is not a JSON object.
20
+
#[error("error-atproto-attestation-2 Attestation metadata must be a JSON object")]
21
+
MetadataMustBeObject,
22
+
23
+
/// Error when attestation metadata is missing a required field.
24
+
#[error("error-atproto-attestation-3 Attestation metadata missing required field: {field}")]
25
+
MetadataMissingField {
26
+
/// Name of the missing field.
27
+
field: String,
28
+
},
29
+
30
+
/// Error when attestation metadata omits the `$type` discriminator.
31
+
#[error("error-atproto-attestation-4 Attestation metadata must include a string `$type` field")]
32
+
MetadataMissingSigType,
33
+
34
+
/// Error when the record does not contain a signatures array.
35
+
#[error("error-atproto-attestation-5 Signatures array not found on record")]
36
+
SignaturesArrayMissing,
37
+
38
+
/// Error when the signatures field exists but is not an array.
39
+
#[error("error-atproto-attestation-6 Signatures field must be an array")]
40
+
SignaturesFieldInvalid,
41
+
42
+
/// Error when attempting to verify a signature at an invalid index.
43
+
#[error("error-atproto-attestation-7 Signature index {index} out of bounds")]
44
+
SignatureIndexOutOfBounds {
45
+
/// Index that was requested.
46
+
index: usize,
47
+
},
48
+
49
+
/// Error when a signature object is missing a required field.
50
+
#[error("error-atproto-attestation-8 Signature object missing required field: {field}")]
51
+
SignatureMissingField {
52
+
/// Field name that was expected.
53
+
field: String,
54
+
},
55
+
56
+
/// Error when a signature object uses an invalid `$type` for inline attestations.
57
+
#[error(
58
+
"error-atproto-attestation-9 Inline attestation `$type` cannot be `com.atproto.repo.strongRef`"
59
+
)]
60
+
InlineAttestationTypeInvalid,
61
+
62
+
/// Error when a remote attestation entry does not use the strongRef type.
63
+
#[error(
64
+
"error-atproto-attestation-10 Remote attestation entries must use `com.atproto.repo.strongRef`"
65
+
)]
66
+
RemoteAttestationTypeInvalid,
67
+
68
+
/// Error when a remote attestation entry is missing a CID.
69
+
#[error(
70
+
"error-atproto-attestation-11 Remote attestation entries must include a string `cid` field"
71
+
)]
72
+
RemoteAttestationMissingCid,
73
+
74
+
/// Error when signature bytes are not provided using the `$bytes` wrapper.
75
+
#[error(
76
+
"error-atproto-attestation-12 Signature bytes must be encoded as `{{\"$bytes\": \"...\"}}`"
77
+
)]
78
+
SignatureBytesFormatInvalid,
79
+
80
+
/// Error when record serialization to DAG-CBOR fails.
81
+
#[error("error-atproto-attestation-13 Record serialization failed: {error}")]
82
+
RecordSerializationFailed {
83
+
/// Underlying serialization error.
84
+
#[from]
85
+
error: serde_ipld_dagcbor::EncodeError<std::collections::TryReserveError>,
86
+
},
87
+
88
+
/// Error when `$sig` metadata is missing from the record before CID creation.
89
+
#[error("error-atproto-attestation-14 `$sig` metadata must be present before generating a CID")]
90
+
SigMetadataMissing,
91
+
92
+
/// Error when `$sig` metadata is not an object.
93
+
#[error("error-atproto-attestation-15 `$sig` metadata must be a JSON object")]
94
+
SigMetadataNotObject,
95
+
96
+
/// Error when `$sig` metadata omits the `$type` discriminator.
97
+
#[error("error-atproto-attestation-16 `$sig` metadata must include a string `$type` field")]
98
+
SigMetadataMissingType,
99
+
100
+
/// Error when metadata omits the `$type` discriminator.
101
+
#[error("error-atproto-attestation-18 Metadata must include a string `$type` field")]
102
+
MetadataMissingType,
103
+
104
+
/// Error when a key resolver is required but not provided.
105
+
#[error("error-atproto-attestation-17 Key resolver required to resolve key reference: {key}")]
106
+
KeyResolverRequired {
107
+
/// Key reference that required resolution.
108
+
key: String,
109
+
},
110
+
111
+
/// Error when key resolution using the provided resolver fails.
112
+
#[error("error-atproto-attestation-18 Failed to resolve key reference {key}: {error}")]
113
+
KeyResolutionFailed {
114
+
/// Key reference that was being resolved.
115
+
key: String,
116
+
/// Underlying resolution error.
117
+
#[source]
118
+
error: anyhow::Error,
119
+
},
120
+
121
+
/// Error when the key type is unsupported for inline attestations.
122
+
#[error("error-atproto-attestation-21 Unsupported key type for attestation: {key_type}")]
123
+
UnsupportedKeyType {
124
+
/// Unsupported key type.
125
+
key_type: atproto_identity::key::KeyType,
126
+
},
127
+
128
+
/// Error when signature decoding fails.
129
+
#[error("error-atproto-attestation-22 Signature decoding failed: {error}")]
130
+
SignatureDecodingFailed {
131
+
/// Underlying base64 decoding error.
132
+
#[from]
133
+
error: base64::DecodeError,
134
+
},
135
+
136
+
/// Error when signature length does not match the expected size.
137
+
#[error(
138
+
"error-atproto-attestation-23 Signature length invalid: expected {expected} bytes, found {actual}"
139
+
)]
140
+
SignatureLengthInvalid {
141
+
/// Expected signature length.
142
+
expected: usize,
143
+
/// Actual signature length.
144
+
actual: usize,
145
+
},
146
+
147
+
/// Error when signature is not normalized to low-S form.
148
+
#[error("error-atproto-attestation-24 Signature must be normalized to low-S form")]
149
+
SignatureNotNormalized,
150
+
151
+
/// Error when cryptographic verification fails.
152
+
#[error("error-atproto-attestation-25 Signature verification failed: {error}")]
153
+
SignatureValidationFailed {
154
+
/// Underlying key validation error.
155
+
#[source]
156
+
error: atproto_identity::errors::KeyError,
157
+
},
158
+
159
+
/// Error when multihash construction for CID generation fails.
160
+
#[error("error-atproto-attestation-26 Failed to construct CID multihash: {error}")]
161
+
MultihashWrapFailed {
162
+
/// Underlying multihash error.
163
+
#[source]
164
+
error: multihash::Error,
165
+
},
166
+
167
+
/// Error when signature creation fails during inline attestation.
168
+
#[error("error-atproto-attestation-27 Signature creation failed: {error}")]
169
+
SignatureCreationFailed {
170
+
/// Underlying signing error.
171
+
#[source]
172
+
error: atproto_identity::errors::KeyError,
173
+
},
174
+
175
+
/// Error when fetching a remote attestation proof record fails.
176
+
#[error("error-atproto-attestation-28 Failed to fetch remote attestation from {uri}: {error}")]
177
+
RemoteAttestationFetchFailed {
178
+
/// AT-URI that failed to resolve.
179
+
uri: String,
180
+
/// Underlying fetch error.
181
+
#[source]
182
+
error: anyhow::Error,
183
+
},
184
+
185
+
/// Error when the CID of a remote attestation proof record doesn't match expected.
186
+
#[error(
187
+
"error-atproto-attestation-29 Remote attestation CID mismatch: expected {expected}, got {actual}"
188
+
)]
189
+
RemoteAttestationCidMismatch {
190
+
/// Expected CID.
191
+
expected: String,
192
+
/// Actual CID.
193
+
actual: String,
194
+
},
195
+
196
+
/// Error when parsing a CID string fails.
197
+
#[error("error-atproto-attestation-30 Invalid CID format: {cid}")]
198
+
InvalidCid {
199
+
/// Invalid CID string.
200
+
cid: String,
201
+
},
202
+
}
+384
crates/atproto-attestation/src/input.rs
···
1
+
//! Input types for attestation functions supporting multiple input formats.
2
+
3
+
use serde::Serialize;
4
+
use serde_json::{Map, Value};
5
+
use std::convert::TryFrom;
6
+
use std::str::FromStr;
7
+
use thiserror::Error;
8
+
9
+
/// Flexible input type for attestation functions.
10
+
///
11
+
/// Allows passing records and metadata as JSON strings or any serde serializable types.
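///
/// # Example
///
/// A brief sketch of constructing both variants; the field values are illustrative only.
///
/// ```
/// use atproto_attestation::input::AnyInput;
/// use serde_json::json;
///
/// // A raw JSON string; it is parsed when the input is converted into a JSON object map
/// let _from_string = AnyInput::<serde_json::Value>::String(r#"{"text":"hi"}"#.to_string());
/// // Any serde-serializable value
/// let _from_value = AnyInput::Serialize(json!({"text":"hi"}));
/// ```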
12
+
#[derive(Clone)]
13
+
pub enum AnyInput<S: Serialize + Clone> {
14
+
/// JSON string representation
15
+
String(String),
16
+
/// Serializable types
17
+
Serialize(S),
18
+
}
19
+
20
+
/// Error types for AnyInput parsing and transformation operations.
21
+
///
22
+
/// This enum provides specific error types for various failure modes when working
23
+
/// with `AnyInput`, including JSON parsing errors, type conversion errors, and
24
+
/// serialization failures.
25
+
#[derive(Debug, Error)]
26
+
pub enum AnyInputError {
27
+
/// Error when parsing JSON from a string fails.
28
+
#[error("Failed to parse JSON from string: {0}")]
29
+
JsonParseError(#[from] serde_json::Error),
30
+
31
+
/// Error when the value is not a JSON object.
32
+
#[error("Expected JSON object, but got {value_type}")]
33
+
NotAnObject {
34
+
/// The actual type of the value.
35
+
value_type: String,
36
+
},
37
+
38
+
/// Error when the string contains invalid JSON.
39
+
#[error("Invalid JSON string: {message}")]
40
+
InvalidJson {
41
+
/// Error message describing what's wrong with the JSON.
42
+
message: String,
43
+
},
44
+
}
45
+
46
+
impl AnyInputError {
47
+
/// Creates a new `NotAnObject` error with the actual type information.
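///
/// For example (illustrative value):
///
/// ```
/// use atproto_attestation::input::AnyInputError;
/// use serde_json::json;
///
/// let err = AnyInputError::not_an_object(&json!([1, 2, 3]));
/// assert_eq!(err.to_string(), "Expected JSON object, but got array");
/// ```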
48
+
pub fn not_an_object(value: &Value) -> Self {
49
+
let value_type = match value {
50
+
Value::Null => "null".to_string(),
51
+
Value::Bool(_) => "boolean".to_string(),
52
+
Value::Number(_) => "number".to_string(),
53
+
Value::String(_) => "string".to_string(),
54
+
Value::Array(_) => "array".to_string(),
55
+
Value::Object(_) => "object".to_string(), // Should not happen
56
+
};
57
+
58
+
AnyInputError::NotAnObject { value_type }
59
+
}
60
+
}
61
+
62
+
/// Implementation of `FromStr` for `AnyInput` that deserializes JSON strings.
63
+
///
64
+
/// This allows parsing JSON strings directly into `AnyInput<serde_json::Value>` using
65
+
/// the standard `FromStr` trait. The string is deserialized using `serde_json::from_str`
66
+
/// and wrapped in `AnyInput::Serialize`.
67
+
///
68
+
/// # Errors
69
+
///
70
+
/// Returns `AnyInputError::JsonParseError` if the string contains invalid JSON.
71
+
///
72
+
/// # Example
73
+
///
74
+
/// ```
75
+
/// use atproto_attestation::input::AnyInput;
76
+
/// use std::str::FromStr;
77
+
///
78
+
/// let input: AnyInput<serde_json::Value> = r#"{"type": "post", "text": "Hello"}"#.parse().unwrap();
79
+
/// ```
80
+
impl FromStr for AnyInput<serde_json::Value> {
81
+
type Err = AnyInputError;
82
+
83
+
fn from_str(s: &str) -> Result<Self, Self::Err> {
84
+
let value = serde_json::from_str(s)?;
85
+
Ok(AnyInput::Serialize(value))
86
+
}
87
+
}
88
+
89
+
impl<S: Serialize + Clone> From<S> for AnyInput<S> {
90
+
fn from(value: S) -> Self {
91
+
AnyInput::Serialize(value)
92
+
}
93
+
}
94
+
95
+
/// Implementation of `TryFrom` for converting `AnyInput` into a JSON object map.
96
+
///
97
+
/// This allows converting any `AnyInput` into a `serde_json::Map<String, Value>`, which
98
+
/// represents a JSON object. Both string and serializable inputs are converted to JSON
99
+
/// objects, with appropriate error handling for non-object values.
100
+
///
101
+
/// # Example
102
+
///
103
+
/// ```
104
+
/// use atproto_attestation::input::AnyInput;
105
+
/// use serde_json::{json, Map, Value};
106
+
/// use std::convert::TryInto;
107
+
///
108
+
/// let input = AnyInput::Serialize(json!({"type": "post", "text": "Hello"}));
109
+
/// let map: Map<String, Value> = input.try_into().unwrap();
110
+
/// assert_eq!(map.get("type").unwrap(), "post");
111
+
/// ```
112
+
impl<S: Serialize + Clone> TryFrom<AnyInput<S>> for Map<String, Value> {
113
+
type Error = AnyInputError;
114
+
115
+
fn try_from(input: AnyInput<S>) -> Result<Self, Self::Error> {
116
+
match input {
117
+
AnyInput::String(value) => {
118
+
// Parse string as JSON
119
+
let json_value = serde_json::from_str::<Value>(&value)?;
120
+
121
+
// Extract as object
122
+
json_value
123
+
.as_object()
124
+
.cloned()
125
+
.ok_or_else(|| AnyInputError::not_an_object(&json_value))
126
+
}
127
+
AnyInput::Serialize(value) => {
128
+
// Convert to JSON value
129
+
let json_value = serde_json::to_value(value)?;
130
+
131
+
// Extract as object
132
+
json_value
133
+
.as_object()
134
+
.cloned()
135
+
.ok_or_else(|| AnyInputError::not_an_object(&json_value))
136
+
}
137
+
}
138
+
}
139
+
}
140
+
141
+
/// Placeholder phantom type for `AnyInput` when no specific lexicon type is needed.
142
+
///
143
+
/// This type can be used as the generic parameter for `AnyInput`, allowing
144
+
/// for simpler usage when working with untyped JSON values.
145
+
#[derive(Serialize, PartialEq, Clone)]
146
+
pub struct PhantomSignature {}
147
+
148
+
#[cfg(test)]
149
+
mod tests {
150
+
use super::*;
151
+
152
+
#[test]
153
+
fn test_from_str_valid_json() {
154
+
let json_str = r#"{"type": "post", "text": "Hello", "count": 42}"#;
155
+
let result: Result<AnyInput<serde_json::Value>, _> = json_str.parse();
156
+
157
+
assert!(result.is_ok());
158
+
159
+
let input = result.unwrap();
160
+
match input {
161
+
AnyInput::Serialize(value) => {
162
+
assert_eq!(value["type"], "post");
163
+
assert_eq!(value["text"], "Hello");
164
+
assert_eq!(value["count"], 42);
165
+
}
166
+
_ => panic!("Expected AnyInput::Serialize variant"),
167
+
}
168
+
}
169
+
170
+
#[test]
171
+
fn test_from_str_invalid_json() {
172
+
let invalid_json = r#"{"type": "post", "text": "Hello" invalid json"#;
173
+
let result: Result<AnyInput<serde_json::Value>, _> = invalid_json.parse();
174
+
175
+
assert!(result.is_err());
176
+
}
177
+
178
+
#[test]
179
+
fn test_from_str_array() {
180
+
let json_array = r#"[1, 2, 3, "four"]"#;
181
+
let result: Result<AnyInput<serde_json::Value>, _> = json_array.parse();
182
+
183
+
assert!(result.is_ok());
184
+
185
+
let input = result.unwrap();
186
+
match input {
187
+
AnyInput::Serialize(value) => {
188
+
assert!(value.is_array());
189
+
let array = value.as_array().unwrap();
190
+
assert_eq!(array.len(), 4);
191
+
assert_eq!(array[0], 1);
192
+
assert_eq!(array[3], "four");
193
+
}
194
+
_ => panic!("Expected AnyInput::Serialize variant"),
195
+
}
196
+
}
197
+
198
+
#[test]
199
+
fn test_from_str_null() {
200
+
let null_str = "null";
201
+
let result: Result<AnyInput<serde_json::Value>, _> = null_str.parse();
202
+
203
+
assert!(result.is_ok());
204
+
205
+
let input = result.unwrap();
206
+
match input {
207
+
AnyInput::Serialize(value) => {
208
+
assert!(value.is_null());
209
+
}
210
+
_ => panic!("Expected AnyInput::Serialize variant"),
211
+
}
212
+
}
213
+
214
+
#[test]
215
+
fn test_from_str_with_use() {
216
+
// Test using the parse method directly with type inference
217
+
let input: AnyInput<serde_json::Value> = r#"{"$type": "app.bsky.feed.post"}"#
218
+
.parse()
219
+
.expect("Failed to parse JSON");
220
+
221
+
match input {
222
+
AnyInput::Serialize(value) => {
223
+
assert_eq!(value["$type"], "app.bsky.feed.post");
224
+
}
225
+
_ => panic!("Expected AnyInput::Serialize variant"),
226
+
}
227
+
}
228
+
229
+
#[test]
230
+
fn test_try_into_from_string() {
231
+
use std::convert::TryInto;
232
+
233
+
let input = AnyInput::<Value>::String(r#"{"type": "post", "text": "Hello"}"#.to_string());
234
+
let result: Result<Map<String, Value>, _> = input.try_into();
235
+
236
+
assert!(result.is_ok());
237
+
let map = result.unwrap();
238
+
assert_eq!(map.get("type").unwrap(), "post");
239
+
assert_eq!(map.get("text").unwrap(), "Hello");
240
+
}
241
+
242
+
#[test]
243
+
fn test_try_into_from_serialize() {
244
+
use serde_json::json;
245
+
use std::convert::TryInto;
246
+
247
+
let input = AnyInput::Serialize(json!({"$type": "app.bsky.feed.post", "count": 42}));
248
+
let result: Result<Map<String, Value>, _> = input.try_into();
249
+
250
+
assert!(result.is_ok());
251
+
let map = result.unwrap();
252
+
assert_eq!(map.get("$type").unwrap(), "app.bsky.feed.post");
253
+
assert_eq!(map.get("count").unwrap(), 42);
254
+
}
255
+
256
+
#[test]
257
+
fn test_try_into_string_not_object() {
258
+
use std::convert::TryInto;
259
+
260
+
let input = AnyInput::<Value>::String(r#"["array", "not", "object"]"#.to_string());
261
+
let result: Result<Map<String, Value>, AnyInputError> = input.try_into();
262
+
263
+
assert!(result.is_err());
264
+
match result.unwrap_err() {
265
+
AnyInputError::NotAnObject { value_type } => {
266
+
assert_eq!(value_type, "array");
267
+
}
268
+
_ => panic!("Expected NotAnObject error"),
269
+
}
270
+
}
271
+
272
+
#[test]
273
+
fn test_try_into_serialize_not_object() {
274
+
use serde_json::json;
275
+
use std::convert::TryInto;
276
+
277
+
let input = AnyInput::Serialize(json!([1, 2, 3]));
278
+
let result: Result<Map<String, Value>, AnyInputError> = input.try_into();
279
+
280
+
assert!(result.is_err());
281
+
match result.unwrap_err() {
282
+
AnyInputError::NotAnObject { value_type } => {
283
+
assert_eq!(value_type, "array");
284
+
}
285
+
_ => panic!("Expected NotAnObject error"),
286
+
}
287
+
}
288
+
289
+
#[test]
290
+
fn test_try_into_invalid_json_string() {
291
+
use std::convert::TryInto;
292
+
293
+
let input = AnyInput::<Value>::String("not valid json".to_string());
294
+
let result: Result<Map<String, Value>, AnyInputError> = input.try_into();
295
+
296
+
assert!(result.is_err());
297
+
match result.unwrap_err() {
298
+
AnyInputError::JsonParseError(_) => {}
299
+
_ => panic!("Expected JsonParseError"),
300
+
}
301
+
}
302
+
303
+
#[test]
304
+
fn test_try_into_null() {
305
+
use serde_json::json;
306
+
use std::convert::TryInto;
307
+
308
+
let input = AnyInput::Serialize(json!(null));
309
+
let result: Result<Map<String, Value>, AnyInputError> = input.try_into();
310
+
311
+
assert!(result.is_err());
312
+
match result.unwrap_err() {
313
+
AnyInputError::NotAnObject { value_type } => {
314
+
assert_eq!(value_type, "null");
315
+
}
316
+
_ => panic!("Expected NotAnObject error"),
317
+
}
318
+
}
319
+
320
+
#[test]
321
+
fn test_any_input_error_not_an_object() {
322
+
use serde_json::json;
323
+
324
+
// Test null
325
+
let err = AnyInputError::not_an_object(&json!(null));
326
+
match err {
327
+
AnyInputError::NotAnObject { value_type } => {
328
+
assert_eq!(value_type, "null");
329
+
}
330
+
_ => panic!("Expected NotAnObject error"),
331
+
}
332
+
333
+
// Test boolean
334
+
let err = AnyInputError::not_an_object(&json!(true));
335
+
match err {
336
+
AnyInputError::NotAnObject { value_type } => {
337
+
assert_eq!(value_type, "boolean");
338
+
}
339
+
_ => panic!("Expected NotAnObject error"),
340
+
}
341
+
342
+
// Test number
343
+
let err = AnyInputError::not_an_object(&json!(42));
344
+
match err {
345
+
AnyInputError::NotAnObject { value_type } => {
346
+
assert_eq!(value_type, "number");
347
+
}
348
+
_ => panic!("Expected NotAnObject error"),
349
+
}
350
+
351
+
// Test string
352
+
let err = AnyInputError::not_an_object(&json!("hello"));
353
+
match err {
354
+
AnyInputError::NotAnObject { value_type } => {
355
+
assert_eq!(value_type, "string");
356
+
}
357
+
_ => panic!("Expected NotAnObject error"),
358
+
}
359
+
360
+
// Test array
361
+
let err = AnyInputError::not_an_object(&json!([1, 2, 3]));
362
+
match err {
363
+
AnyInputError::NotAnObject { value_type } => {
364
+
assert_eq!(value_type, "array");
365
+
}
366
+
_ => panic!("Expected NotAnObject error"),
367
+
}
368
+
}
369
+
370
+
#[test]
371
+
fn test_error_display() {
372
+
use serde_json::json;
373
+
374
+
// Test NotAnObject error display
375
+
let err = AnyInputError::not_an_object(&json!(42));
376
+
assert_eq!(err.to_string(), "Expected JSON object, but got number");
377
+
378
+
// Test InvalidJson display
379
+
let err = AnyInputError::InvalidJson {
380
+
message: "unexpected token".to_string()
381
+
};
382
+
assert_eq!(err.to_string(), "Invalid JSON string: unexpected token");
383
+
}
384
+
}
+74
crates/atproto-attestation/src/lib.rs
···
1
+
//! AT Protocol record attestation utilities based on the CID-first specification.
2
+
//!
3
+
//! This crate implements helpers for creating inline and remote attestations
4
+
//! and verifying signatures against DID verification methods. It follows the
5
+
//! requirements documented in `bluesky-attestation-tee/documentation/spec/attestation.md`.
6
+
//!
7
+
//! ## Inline Attestations
8
+
//!
9
+
//! Use `create_inline_attestation` to create a signed record with an embedded signature:
10
+
//!
11
+
//! ```no_run
12
+
//! use atproto_attestation::{create_inline_attestation, AnyInput};
13
+
//! use atproto_identity::key::{generate_key, KeyType};
14
+
//! use serde_json::json;
15
+
//!
16
+
//! # fn main() -> Result<(), Box<dyn std::error::Error>> {
17
+
//! let key = generate_key(KeyType::P256Private)?;
18
+
//! let record = json!({"$type": "app.example.post", "text": "Hello!"});
19
+
//! let metadata = json!({"$type": "com.example.sig", "key": "did:key:..."});
20
+
//!
21
+
//! let signed = create_inline_attestation(
22
+
//! AnyInput::Serialize(record),
23
+
//! AnyInput::Serialize(metadata),
24
+
//! "did:plc:repository",
25
+
//! &key
26
+
//! )?;
27
+
//! # Ok(())
28
+
//! # }
29
+
//! ```
30
+
//!
31
+
//! ## Remote Attestations
32
+
//!
33
+
//! Use `create_remote_attestation` to generate both the proof record and the
34
+
//! attested record with strongRef in a single call.
35
+
36
+
#![forbid(unsafe_code)]
37
+
#![warn(missing_docs)]
38
+
39
+
// Public modules
40
+
pub mod cid;
41
+
pub mod errors;
42
+
pub mod input;
43
+
44
+
// Internal modules
45
+
mod attestation;
46
+
mod signature;
47
+
mod utils;
48
+
mod verification;
49
+
50
+
// Re-export error type
51
+
pub use errors::AttestationError;
52
+
53
+
// Re-export CID generation functions
54
+
pub use cid::create_dagbor_cid;
55
+
56
+
// Re-export signature normalization
57
+
pub use signature::normalize_signature;
58
+
59
+
// Re-export attestation functions
60
+
pub use attestation::{
61
+
append_inline_attestation, append_remote_attestation, create_inline_attestation,
62
+
create_remote_attestation, create_signature,
63
+
};
64
+
65
+
// Re-export input types
66
+
pub use input::{AnyInput, AnyInputError};
67
+
68
+
// Re-export verification functions
69
+
pub use verification::verify_record;
70
+
71
+
/// Resolver trait for retrieving remote attestation records by AT URI.
72
+
///
73
+
/// This trait is re-exported from atproto_client for convenience.
74
+
pub use atproto_client::record_resolver::RecordResolver;
+98
crates/atproto-attestation/src/signature.rs
···
1
+
//! ECDSA signature normalization.
2
+
//!
3
+
//! This module handles signature normalization to the low-S form required by
4
+
//! the AT Protocol attestation specification, preventing signature malleability attacks.
5
+
6
+
use crate::errors::AttestationError;
7
+
use atproto_identity::key::KeyType;
8
+
use k256::ecdsa::Signature as K256Signature;
9
+
use p256::ecdsa::Signature as P256Signature;
10
+
11
+
/// Normalize raw signature bytes to the required low-S form.
12
+
///
13
+
/// This helper ensures signatures produced by signing APIs comply with the
14
+
/// specification requirements before embedding them in attestation objects.
15
+
///
16
+
/// # Arguments
17
+
///
18
+
/// * `signature` - The raw signature bytes to normalize
19
+
/// * `key_type` - The type of key used to create the signature
20
+
///
21
+
/// # Returns
22
+
///
23
+
/// The normalized signature bytes in low-S form
24
+
///
25
+
/// # Errors
26
+
///
27
+
/// Returns an error if:
28
+
/// - The signature length is invalid for the key type
29
+
/// - The key type is not supported
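///
/// # Example
///
/// A minimal sketch exercising only the length check; a 32-byte input is
/// rejected before any curve-specific parsing takes place.
///
/// ```rust
/// use atproto_attestation::normalize_signature;
/// use atproto_identity::key::KeyType;
///
/// let too_short = vec![0u8; 32];
/// assert!(normalize_signature(too_short, &KeyType::P256Private).is_err());
/// ```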
30
+
pub fn normalize_signature(
31
+
signature: Vec<u8>,
32
+
key_type: &KeyType,
33
+
) -> Result<Vec<u8>, AttestationError> {
34
+
match key_type {
35
+
KeyType::P256Private | KeyType::P256Public => normalize_p256(signature),
36
+
KeyType::K256Private | KeyType::K256Public => normalize_k256(signature),
37
+
other => Err(AttestationError::UnsupportedKeyType {
38
+
key_type: (*other).clone(),
39
+
}),
40
+
}
41
+
}
42
+
43
+
/// Normalize a P-256 signature to low-S form.
44
+
fn normalize_p256(signature: Vec<u8>) -> Result<Vec<u8>, AttestationError> {
45
+
if signature.len() != 64 {
46
+
return Err(AttestationError::SignatureLengthInvalid {
47
+
expected: 64,
48
+
actual: signature.len(),
49
+
});
50
+
}
51
+
52
+
let parsed = P256Signature::from_slice(&signature).map_err(|_| {
53
+
AttestationError::SignatureLengthInvalid {
54
+
expected: 64,
55
+
actual: signature.len(),
56
+
}
57
+
})?;
58
+
59
+
let normalized = parsed.normalize_s().unwrap_or(parsed);
60
+
61
+
Ok(normalized.to_vec())
62
+
}
63
+
64
+
/// Normalize a K-256 signature to low-S form.
65
+
fn normalize_k256(signature: Vec<u8>) -> Result<Vec<u8>, AttestationError> {
66
+
if signature.len() != 64 {
67
+
return Err(AttestationError::SignatureLengthInvalid {
68
+
expected: 64,
69
+
actual: signature.len(),
70
+
});
71
+
}
72
+
73
+
let parsed = K256Signature::from_slice(&signature).map_err(|_| {
74
+
AttestationError::SignatureLengthInvalid {
75
+
expected: 64,
76
+
actual: signature.len(),
77
+
}
78
+
})?;
79
+
80
+
let normalized = parsed.normalize_s().unwrap_or(parsed);
81
+
82
+
Ok(normalized.to_vec())
83
+
}
84
+
85
+
#[cfg(test)]
86
+
mod tests {
87
+
use super::*;
88
+
89
+
#[test]
90
+
fn reject_invalid_signature_length() {
91
+
let short_signature = vec![0u8; 32];
92
+
let result = normalize_p256(short_signature);
93
+
assert!(matches!(
94
+
result,
95
+
Err(AttestationError::SignatureLengthInvalid { expected: 64, .. })
96
+
));
97
+
}
98
+
}
+22
crates/atproto-attestation/src/utils.rs
···
1
+
//! Utility functions and constants for attestation operations.
2
+
//!
3
+
//! This module provides common utilities used throughout the attestation framework,
4
+
//! including base64 encoding/decoding with flexible padding support.
5
+
6
+
use base64::{
7
+
alphabet::STANDARD as STANDARD_ALPHABET,
8
+
engine::{
9
+
DecodePaddingMode,
10
+
general_purpose::{GeneralPurpose, GeneralPurposeConfig},
11
+
},
12
+
};
13
+
14
+
/// Base64 engine that accepts both padded and unpadded input for maximum compatibility
15
+
/// with various AT Protocol implementations. Uses standard encoding with padding for output,
16
+
/// but accepts any padding format for decoding.
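///
/// For example (illustrative values), `BASE64.decode("aGVsbG8=")` and
/// `BASE64.decode("aGVsbG8")` both succeed and yield the same bytes, while
/// `BASE64.encode(b"hello")` always emits the padded form `"aGVsbG8="`
/// (assuming `base64::Engine` is in scope for the `decode`/`encode` calls).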
17
+
pub(crate) const BASE64: GeneralPurpose = GeneralPurpose::new(
18
+
&STANDARD_ALPHABET,
19
+
GeneralPurposeConfig::new()
20
+
.with_encode_padding(true)
21
+
.with_decode_padding_mode(DecodePaddingMode::Indifferent),
22
+
);
+160
crates/atproto-attestation/src/verification.rs
···
1
+
//! Signature verification for AT Protocol attestations.
2
+
//!
3
+
//! This module provides verification functions for AT Protocol record attestations.
4
+
5
+
use crate::cid::create_attestation_cid;
6
+
use crate::errors::AttestationError;
7
+
use crate::input::AnyInput;
8
+
use crate::utils::BASE64;
9
+
use atproto_identity::key::{KeyResolver, validate};
10
+
use atproto_record::lexicon::com::atproto::repo::STRONG_REF_NSID;
11
+
use base64::Engine;
12
+
use serde::Serialize;
13
+
use serde_json::{Value, Map};
14
+
use std::convert::TryInto;
15
+
16
+
/// Helper function to extract and validate signatures array from a record
17
+
fn extract_signatures(record_object: &Map<String, Value>) -> Result<Vec<Value>, AttestationError> {
18
+
match record_object.get("signatures") {
19
+
Some(value) => value
20
+
.as_array()
21
+
.ok_or(AttestationError::SignaturesFieldInvalid)
22
+
.cloned(),
23
+
None => Ok(vec![]),
24
+
}
25
+
}
26
+
27
+
/// Verify all signatures in a record with flexible input types.
28
+
///
29
+
/// This is a high-level verification function that accepts records in multiple formats
30
+
/// (`AnyInput::String` or `AnyInput::Serialize`) and verifies all signatures with custom resolvers.
31
+
///
32
+
/// # Arguments
33
+
///
34
+
/// * `verify_input` - The record to verify (as `AnyInput::String` or `AnyInput::Serialize`)
35
+
/// * `repository` - The repository DID to validate against (prevents replay attacks)
36
+
/// * `key_resolver` - Resolver for looking up verification keys from DIDs
37
+
/// * `record_resolver` - Resolver for fetching remote attestation proof records
38
+
///
39
+
/// # Returns
40
+
///
41
+
/// Returns `Ok(())` if all signatures are valid, or an error if any verification fails.
42
+
///
43
+
/// # Errors
44
+
///
45
+
/// Returns an error if:
46
+
/// - The input is not a valid record object
47
+
/// - Any signature verification fails
48
+
/// - Key or record resolution fails
49
+
///
50
+
/// # Type Parameters
51
+
///
52
+
/// * `R` - The record type (must implement Serialize + Clone)
53
+
/// * `RR` - The record resolver type (must implement RecordResolver)
54
+
/// * `KR` - The key resolver type (must implement KeyResolver)
55
+
pub async fn verify_record<R, RR, KR>(
56
+
verify_input: AnyInput<R>,
57
+
repository: &str,
58
+
key_resolver: KR,
59
+
record_resolver: RR,
60
+
) -> Result<(), AttestationError>
61
+
where
62
+
R: Serialize + Clone,
63
+
RR: atproto_client::record_resolver::RecordResolver,
64
+
KR: KeyResolver,
65
+
{
66
+
let record_object: Map<String, Value> = verify_input
67
+
.clone()
68
+
.try_into()
69
+
.map_err(|_| AttestationError::RecordMustBeObject)?;
70
+
71
+
let signatures = extract_signatures(&record_object)?;
72
+
73
+
if signatures.is_empty() {
74
+
return Ok(());
75
+
}
76
+
77
+
for signature in signatures {
78
+
let signature_reference_type = signature
79
+
.get("$type")
80
+
.and_then(Value::as_str)
81
+
.filter(|value| !value.is_empty())
82
+
.ok_or(AttestationError::SigMetadataMissingType)?;
83
+
84
+
let metadata = if signature_reference_type == STRONG_REF_NSID {
85
+
let aturi = signature
86
+
.get("uri")
87
+
.and_then(Value::as_str)
88
+
.filter(|value| !value.is_empty())
89
+
.ok_or(AttestationError::SignatureMissingField {
90
+
field: "uri".to_string(),
91
+
})?;
92
+
93
+
record_resolver
94
+
.resolve::<serde_json::Value>(aturi)
95
+
.await
96
+
.map_err(|error| AttestationError::RemoteAttestationFetchFailed {
97
+
uri: aturi.to_string(),
98
+
error,
99
+
})?
100
+
} else {
101
+
signature.clone()
102
+
};
103
+
104
+
let computed_cid = create_attestation_cid(
105
+
verify_input.clone(),
106
+
AnyInput::Serialize(metadata.clone()),
107
+
repository,
108
+
)?;
109
+
110
+
if signature_reference_type == STRONG_REF_NSID {
111
+
let attestation_cid = metadata
112
+
.get("cid")
113
+
.and_then(Value::as_str)
114
+
.filter(|value| !value.is_empty())
115
+
.ok_or(AttestationError::SignatureMissingField {
116
+
field: "cid".to_string(),
117
+
})?;
118
+
119
+
if computed_cid.to_string() != attestation_cid {
120
+
return Err(AttestationError::RemoteAttestationCidMismatch {
121
+
expected: attestation_cid.to_string(),
122
+
actual: computed_cid.to_string(),
123
+
});
124
+
}
125
+
continue;
126
+
}
127
+
128
+
let key = metadata
129
+
.get("key")
130
+
.and_then(Value::as_str)
131
+
.filter(|value| !value.is_empty())
132
+
.ok_or(AttestationError::SignatureMissingField {
133
+
field: "key".to_string(),
134
+
})?;
135
+
let key_data = key_resolver.resolve(key).await.map_err(|error| {
136
+
AttestationError::KeyResolutionFailed {
137
+
key: key.to_string(),
138
+
error,
139
+
}
140
+
})?;
141
+
142
+
let signature_bytes = metadata
143
+
.get("signature")
144
+
.and_then(Value::as_object)
145
+
.and_then(|object| object.get("$bytes"))
146
+
.and_then(Value::as_str)
147
+
.ok_or(AttestationError::SignatureBytesFormatInvalid)?;
148
+
149
+
let signature_bytes = BASE64
150
+
.decode(signature_bytes)
151
+
.map_err(|error| AttestationError::SignatureDecodingFailed { error })?;
152
+
153
+
let computed_cid_bytes = computed_cid.to_bytes();
154
+
155
+
validate(&key_data, &signature_bytes, &computed_cid_bytes)
156
+
.map_err(|error| AttestationError::SignatureValidationFailed { error })?;
157
+
}
158
+
159
+
Ok(())
160
+
}
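For orientation, a sketch of the two signature shapes the loop above accepts. The values are illustrative placeholders and the inline `$type` NSID is hypothetical; only `com.atproto.repo.strongRef` is special-cased by the code.

```rust
use serde_json::json;

fn example_signatures() -> serde_json::Value {
    json!([
        // Any entry whose $type is not a strongRef is treated as inline
        // metadata: `key` is resolved through the KeyResolver and
        // `signature.$bytes` is base64-decoded, then validated against the
        // recomputed attestation CID.
        {
            "$type": "example.attestation.inline", // hypothetical NSID
            "key": "did:key:zQ3sh...",
            "signature": { "$bytes": "<base64 signature>" }
        },
        // A strongRef entry points at a proof record; the fetched record's
        // `cid` field must equal the recomputed attestation CID.
        {
            "$type": "com.atproto.repo.strongRef",
            "uri": "at://did:plc:example/app.example.attestation/3kexample",
            "cid": "bafyrei..."
        }
    ])
}
```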
+8
-1
crates/atproto-client/Cargo.toml
···
35
35
doc = true
36
36
required-features = ["clap"]
37
37
38
+
[[bin]]
39
+
name = "atproto-client-put-record"
40
+
test = false
41
+
bench = false
42
+
doc = true
43
+
38
44
[dependencies]
39
45
atproto-identity.workspace = true
40
-
atproto-record.workspace = true
41
46
atproto-oauth.workspace = true
47
+
atproto-record.workspace = true
42
48
43
49
anyhow.workspace = true
44
50
reqwest-chain.workspace = true
···
50
56
tokio.workspace = true
51
57
tracing.workspace = true
52
58
urlencoding = "2.1.3"
59
+
async-trait.workspace = true
53
60
bytes = "1.10.1"
54
61
clap = { workspace = true, optional = true }
55
62
rpassword = { workspace = true, optional = true }
+165
crates/atproto-client/src/bin/atproto-client-put-record.rs
···
1
+
//! AT Protocol client tool for writing records to a repository.
2
+
//!
3
+
//! This binary tool creates or updates records in an AT Protocol repository
4
+
//! using app password authentication. It resolves the subject to a DID,
5
+
//! creates a session, and writes the record using the putRecord XRPC method.
6
+
//!
7
+
//! # Usage
8
+
//!
9
+
//! ```text
10
+
//! ATPROTO_PASSWORD=<password> atproto-client-put-record <subject> <record_key> <record_json>
11
+
//! ```
12
+
//!
13
+
//! # Environment Variables
14
+
//!
15
+
//! - `ATPROTO_PASSWORD` - Required. App password for authentication.
16
+
//! - `CERTIFICATE_BUNDLES` - Custom CA certificate bundles.
17
+
//! - `USER_AGENT` - Custom user agent string.
18
+
//! - `DNS_NAMESERVERS` - Custom DNS nameservers.
19
+
//! - `PLC_HOSTNAME` - Override PLC hostname (default: plc.directory).
20
+
21
+
use anyhow::Result;
22
+
use atproto_client::{
23
+
client::{AppPasswordAuth, Auth},
24
+
com::atproto::{
25
+
repo::{put_record, PutRecordRequest, PutRecordResponse},
26
+
server::create_session,
27
+
},
28
+
errors::CliError,
29
+
};
30
+
use atproto_identity::{
31
+
config::{CertificateBundles, DnsNameservers, default_env, optional_env, version},
32
+
plc,
33
+
resolve::{HickoryDnsResolver, resolve_subject},
34
+
web,
35
+
};
36
+
use std::env;
37
+
38
+
fn print_usage() {
39
+
eprintln!("Usage: atproto-client-put-record <subject> <record_key> <record_json>");
40
+
eprintln!();
41
+
eprintln!("Arguments:");
42
+
eprintln!(" <subject> Handle or DID of the repository owner");
43
+
eprintln!(" <record_key> Record key (rkey) for the record");
44
+
eprintln!(" <record_json> JSON record data (must include $type field)");
45
+
eprintln!();
46
+
eprintln!("Environment Variables:");
47
+
eprintln!(" ATPROTO_PASSWORD Required. App password for authentication.");
48
+
eprintln!(" CERTIFICATE_BUNDLES Custom CA certificate bundles.");
49
+
eprintln!(" USER_AGENT Custom user agent string.");
50
+
eprintln!(" DNS_NAMESERVERS Custom DNS nameservers.");
51
+
eprintln!(" PLC_HOSTNAME Override PLC hostname (default: plc.directory).");
52
+
}
53
+
54
+
#[tokio::main]
55
+
async fn main() -> Result<()> {
56
+
let args: Vec<String> = env::args().collect();
57
+
58
+
if args.len() != 4 {
59
+
print_usage();
60
+
std::process::exit(1);
61
+
}
62
+
63
+
let subject = &args[1];
64
+
let record_key = &args[2];
65
+
let record_json = &args[3];
66
+
67
+
// Get password from environment variable
68
+
let password = env::var("ATPROTO_PASSWORD").map_err(|_| {
69
+
anyhow::anyhow!("ATPROTO_PASSWORD environment variable is required")
70
+
})?;
71
+
72
+
// Set up HTTP client configuration
73
+
let certificate_bundles: CertificateBundles = optional_env("CERTIFICATE_BUNDLES").try_into()?;
74
+
let default_user_agent = format!(
75
+
"atproto-identity-rs ({}; +https://tangled.sh/@smokesignal.events/atproto-identity-rs)",
76
+
version()?
77
+
);
78
+
let user_agent = default_env("USER_AGENT", &default_user_agent);
79
+
let dns_nameservers: DnsNameservers = optional_env("DNS_NAMESERVERS").try_into()?;
80
+
let plc_hostname = default_env("PLC_HOSTNAME", "plc.directory");
81
+
82
+
let mut client_builder = reqwest::Client::builder();
83
+
for ca_certificate in certificate_bundles.as_ref() {
84
+
let cert = std::fs::read(ca_certificate)?;
85
+
let cert = reqwest::Certificate::from_pem(&cert)?;
86
+
client_builder = client_builder.add_root_certificate(cert);
87
+
}
88
+
89
+
client_builder = client_builder.user_agent(user_agent);
90
+
let http_client = client_builder.build()?;
91
+
92
+
let dns_resolver = HickoryDnsResolver::create_resolver(dns_nameservers.as_ref());
93
+
94
+
// Parse the record JSON
95
+
let record: serde_json::Value = serde_json::from_str(record_json).map_err(|err| {
96
+
tracing::error!(error = ?err, "Failed to parse record JSON");
97
+
anyhow::anyhow!("Failed to parse record JSON: {}", err)
98
+
})?;
99
+
100
+
// Extract collection from $type field
101
+
let collection = record
102
+
.get("$type")
103
+
.and_then(|v| v.as_str())
104
+
.ok_or_else(|| anyhow::anyhow!("Record must contain a $type field for the collection"))?
105
+
.to_string();
106
+
107
+
// Resolve subject to DID
108
+
let did = resolve_subject(&http_client, &dns_resolver, subject).await?;
109
+
110
+
// Get DID document to find PDS endpoint
111
+
let document = if did.starts_with("did:plc:") {
112
+
plc::query(&http_client, &plc_hostname, &did).await?
113
+
} else if did.starts_with("did:web:") {
114
+
web::query(&http_client, &did).await?
115
+
} else {
116
+
anyhow::bail!("Unsupported DID method: {}", did);
117
+
};
118
+
119
+
// Get PDS endpoint from the DID document
120
+
let pds_endpoints = document.pds_endpoints();
121
+
let pds_endpoint = pds_endpoints
122
+
.first()
123
+
.ok_or_else(|| CliError::NoPdsEndpointFound { did: did.clone() })?;
124
+
125
+
// Create session
126
+
let session = create_session(&http_client, pds_endpoint, &did, &password, None).await?;
127
+
128
+
// Set up app password authentication
129
+
let auth = Auth::AppPassword(AppPasswordAuth {
130
+
access_token: session.access_jwt.clone(),
131
+
});
132
+
133
+
// Create put record request
134
+
let put_request = PutRecordRequest {
135
+
repo: session.did.clone(),
136
+
collection,
137
+
record_key: record_key.clone(),
138
+
validate: true,
139
+
record,
140
+
swap_commit: None,
141
+
swap_record: None,
142
+
};
143
+
144
+
// Execute put record
145
+
let response = put_record(&http_client, &auth, pds_endpoint, put_request).await?;
146
+
147
+
match response {
148
+
PutRecordResponse::StrongRef { uri, cid, .. } => {
149
+
println!(
150
+
"{}",
151
+
serde_json::to_string_pretty(&serde_json::json!({
152
+
"uri": uri,
153
+
"cid": cid
154
+
}))?
155
+
);
156
+
}
157
+
PutRecordResponse::Error(err) => {
158
+
let error_message = err.error_message();
159
+
tracing::error!(error = %error_message, "putRecord failed");
160
+
anyhow::bail!("putRecord failed: {}", error_message);
161
+
}
162
+
}
163
+
164
+
Ok(())
165
+
}
+112
-3
crates/atproto-client/src/client.rs
···
17
17
///
18
18
/// Contains the private key for DPoP proof generation and OAuth access token
19
19
/// for Authorization header.
20
+
#[derive(Clone)]
20
21
pub struct DPoPAuth {
21
22
/// Private key data for generating DPoP proof tokens
22
23
pub dpop_private_key_data: KeyData,
···
27
28
/// App password authentication credentials for authenticated HTTP requests.
28
29
///
29
30
/// Contains the JWT access token for Bearer token authentication.
31
+
#[derive(Clone)]
30
32
pub struct AppPasswordAuth {
31
33
/// JWT access token for the Authorization header
32
34
pub access_token: String,
···
36
38
///
37
39
/// Supports multiple authentication schemes including unauthenticated requests,
38
40
/// DPoP (Demonstration of Proof-of-Possession) tokens, and app password bearer tokens.
41
+
#[derive(Clone)]
39
42
pub enum Auth {
40
43
/// No authentication - for public endpoints that don't require authentication
41
44
None,
···
394
397
Ok(value)
395
398
}
396
399
400
+
/// Performs a DPoP-authenticated HTTP POST request with raw bytes body and additional headers, and parses the response as JSON.
401
+
///
402
+
/// This function is similar to `post_dpop_json_with_headers` but accepts a raw bytes payload
403
+
/// instead of JSON. Useful for sending pre-serialized data or binary payloads while maintaining
404
+
/// DPoP authentication and custom headers.
405
+
///
406
+
/// # Arguments
407
+
///
408
+
/// * `http_client` - The HTTP client to use for the request
409
+
/// * `dpop_auth` - DPoP authentication credentials
410
+
/// * `url` - The URL to request
411
+
/// * `payload` - The raw bytes to send in the request body
412
+
/// * `additional_headers` - Additional HTTP headers to include in the request
413
+
///
414
+
/// # Returns
415
+
///
416
+
/// The parsed JSON response as a `serde_json::Value`
417
+
///
418
+
/// # Errors
419
+
///
420
+
/// Returns `DPoPError::ProofGenerationFailed` if DPoP proof generation fails,
421
+
/// `DPoPError::HttpRequestFailed` if the HTTP request fails,
422
+
/// or `DPoPError::JsonParseFailed` if JSON parsing fails.
423
+
///
424
+
/// # Example
425
+
///
426
+
/// ```no_run
427
+
/// use atproto_client::client::{DPoPAuth, post_dpop_bytes_with_headers};
428
+
/// use atproto_identity::key::identify_key;
429
+
/// use reqwest::{Client, header::{HeaderMap, CONTENT_TYPE}};
430
+
/// use bytes::Bytes;
431
+
///
432
+
/// # async fn example() -> anyhow::Result<()> {
433
+
/// let client = Client::new();
434
+
/// let dpop_auth = DPoPAuth {
435
+
/// dpop_private_key_data: identify_key("did:key:zQ3sh...")?,
436
+
/// oauth_access_token: "access_token".to_string(),
437
+
/// };
438
+
///
439
+
/// let mut headers = HeaderMap::new();
440
+
/// headers.insert(CONTENT_TYPE, "application/json".parse()?);
441
+
///
442
+
/// let payload = Bytes::from(r#"{"text": "Hello!"}"#);
443
+
/// let response = post_dpop_bytes_with_headers(
444
+
/// &client,
445
+
/// &dpop_auth,
446
+
/// "https://pds.example.com/xrpc/com.atproto.repo.createRecord",
447
+
/// payload,
448
+
/// &headers
449
+
/// ).await?;
450
+
/// # Ok(())
451
+
/// # }
452
+
/// ```
453
+
pub async fn post_dpop_bytes_with_headers(
454
+
http_client: &reqwest::Client,
455
+
dpop_auth: &DPoPAuth,
456
+
url: &str,
457
+
payload: Bytes,
458
+
additional_headers: &HeaderMap,
459
+
) -> Result<serde_json::Value> {
460
+
let (dpop_proof_token, dpop_proof_header, dpop_proof_claim) = request_dpop(
461
+
&dpop_auth.dpop_private_key_data,
462
+
"POST",
463
+
url,
464
+
&dpop_auth.oauth_access_token,
465
+
)
466
+
.map_err(|error| DPoPError::ProofGenerationFailed { error })?;
467
+
468
+
let dpop_retry = DpopRetry::new(
469
+
dpop_proof_header.clone(),
470
+
dpop_proof_claim.clone(),
471
+
dpop_auth.dpop_private_key_data.clone(),
472
+
true,
473
+
);
474
+
475
+
let dpop_retry_client = ClientBuilder::new(http_client.clone())
476
+
.with(ChainMiddleware::new(dpop_retry.clone()))
477
+
.build();
478
+
479
+
let http_response = dpop_retry_client
480
+
.post(url)
481
+
.headers(additional_headers.clone())
482
+
.header(
483
+
"Authorization",
484
+
&format!("DPoP {}", dpop_auth.oauth_access_token),
485
+
)
486
+
.header("DPoP", &dpop_proof_token)
487
+
.body(payload)
488
+
.send()
489
+
.await
490
+
.map_err(|error| DPoPError::HttpRequestFailed {
491
+
url: url.to_string(),
492
+
error,
493
+
})?;
494
+
495
+
let value = http_response
496
+
.json::<serde_json::Value>()
497
+
.await
498
+
.map_err(|error| DPoPError::JsonParseFailed {
499
+
url: url.to_string(),
500
+
error,
501
+
})?;
502
+
503
+
Ok(value)
504
+
}
505
+
397
506
/// Performs an unauthenticated HTTP POST request with JSON body and parses the response as JSON.
398
507
///
399
508
/// # Arguments
···
690
799
http_client: &reqwest::Client,
691
800
app_auth: &AppPasswordAuth,
692
801
url: &str,
693
-
record: serde_json::Value,
802
+
payload: Bytes,
694
803
additional_headers: &HeaderMap,
695
804
) -> Result<Bytes> {
696
805
let mut headers = additional_headers.clone();
···
701
810
let http_response = http_client
702
811
.post(url)
703
812
.headers(headers)
704
-
.json(&record)
813
+
.body(payload)
705
814
.send()
706
-
.instrument(tracing::info_span!("get_apppassword_bytes_with_headers", url = %url))
815
+
.instrument(tracing::info_span!("post_apppassword_bytes_with_headers", url = %url))
707
816
.await
708
817
.map_err(|error| ClientError::HttpRequestFailed {
709
818
url: url.to_string(),
+7
-7
crates/atproto-client/src/com_atproto_identity.rs
···
6
6
use std::collections::HashMap;
7
7
8
8
use anyhow::Result;
9
-
use atproto_identity::url::URLBuilder;
9
+
use atproto_identity::url::build_url;
10
10
use serde::{Deserialize, de::DeserializeOwned};
11
11
12
12
use crate::{
···
58
58
base_url: &str,
59
59
handle: String,
60
60
) -> Result<ResolveHandleResponse> {
61
-
let mut url_builder = URLBuilder::new(base_url);
62
-
url_builder.path("/xrpc/com.atproto.identity.resolveHandle");
63
-
64
-
url_builder.param("handle", &handle);
65
-
66
-
let url = url_builder.build();
61
+
let url = build_url(
62
+
base_url,
63
+
"/xrpc/com.atproto.identity.resolveHandle",
64
+
[("handle", handle.as_str())],
65
+
)?
66
+
.to_string();
67
67
68
68
match auth {
69
69
Auth::None => get_json(http_client, &url)
+48
-41
crates/atproto-client/src/com_atproto_repo.rs
···
23
23
//! OAuth access tokens and private keys for proof generation.
24
24
25
25
use std::collections::HashMap;
26
+
use std::iter;
26
27
27
28
use anyhow::Result;
28
-
use atproto_identity::url::URLBuilder;
29
+
use atproto_identity::url::build_url;
29
30
use bytes::Bytes;
30
31
use serde::{Deserialize, Serialize, de::DeserializeOwned};
31
32
···
77
78
did: &str,
78
79
cid: &str,
79
80
) -> Result<Bytes> {
80
-
let mut url_builder = URLBuilder::new(base_url);
81
-
url_builder.path("/xrpc/com.atproto.sync.getBlob");
82
-
83
-
url_builder.param("did", did);
84
-
url_builder.param("cid", cid);
85
-
86
-
let url = url_builder.build();
81
+
let url = build_url(
82
+
base_url,
83
+
"/xrpc/com.atproto.sync.getBlob",
84
+
[("did", did), ("cid", cid)],
85
+
)?
86
+
.to_string();
87
87
88
88
get_bytes(http_client, &url).await
89
89
}
···
112
112
rkey: &str,
113
113
cid: Option<&str>,
114
114
) -> Result<GetRecordResponse> {
115
-
let mut url_builder = URLBuilder::new(base_url);
116
-
url_builder.path("/xrpc/com.atproto.repo.getRecord");
117
-
118
-
url_builder.param("repo", repo);
119
-
url_builder.param("collection", collection);
120
-
url_builder.param("rkey", rkey);
121
-
115
+
let mut params = vec![("repo", repo), ("collection", collection), ("rkey", rkey)];
122
116
if let Some(cid) = cid {
123
-
url_builder.param("cid", cid);
117
+
params.push(("cid", cid));
124
118
}
125
119
126
-
let url = url_builder.build();
120
+
let url = build_url(base_url, "/xrpc/com.atproto.repo.getRecord", params)?.to_string();
127
121
128
122
match auth {
129
123
Auth::None => get_json(http_client, &url)
···
218
212
collection: String,
219
213
params: ListRecordsParams,
220
214
) -> Result<ListRecordsResponse<T>> {
221
-
let mut url_builder = URLBuilder::new(base_url);
222
-
url_builder.path("/xrpc/com.atproto.repo.listRecords");
215
+
let mut url = build_url(
216
+
base_url,
217
+
"/xrpc/com.atproto.repo.listRecords",
218
+
iter::empty::<(&str, &str)>(),
219
+
)?;
220
+
{
221
+
let mut pairs = url.query_pairs_mut();
222
+
pairs.append_pair("repo", &repo);
223
+
pairs.append_pair("collection", &collection);
223
224
224
-
// Add query parameters
225
-
url_builder.param("repo", &repo);
226
-
url_builder.param("collection", &collection);
225
+
if let Some(limit) = params.limit {
226
+
pairs.append_pair("limit", &limit.to_string());
227
+
}
227
228
228
-
if let Some(limit) = params.limit {
229
-
url_builder.param("limit", &limit.to_string());
230
-
}
229
+
if let Some(cursor) = params.cursor {
230
+
pairs.append_pair("cursor", &cursor);
231
+
}
231
232
232
-
if let Some(cursor) = params.cursor {
233
-
url_builder.param("cursor", &cursor);
233
+
if let Some(reverse) = params.reverse {
234
+
pairs.append_pair("reverse", &reverse.to_string());
235
+
}
234
236
}
235
237
236
-
if let Some(reverse) = params.reverse {
237
-
url_builder.param("reverse", &reverse.to_string());
238
-
}
239
-
240
-
let url = url_builder.build();
238
+
let url = url.to_string();
241
239
242
240
match auth {
243
241
Auth::None => get_json(http_client, &url)
···
319
317
base_url: &str,
320
318
record: CreateRecordRequest<T>,
321
319
) -> Result<CreateRecordResponse> {
322
-
let mut url_builder = URLBuilder::new(base_url);
323
-
url_builder.path("/xrpc/com.atproto.repo.createRecord");
324
-
let url = url_builder.build();
320
+
let url = build_url(
321
+
base_url,
322
+
"/xrpc/com.atproto.repo.createRecord",
323
+
iter::empty::<(&str, &str)>(),
324
+
)?
325
+
.to_string();
325
326
326
327
let value = serde_json::to_value(record)?;
327
328
···
413
414
base_url: &str,
414
415
record: PutRecordRequest<T>,
415
416
) -> Result<PutRecordResponse> {
416
-
let mut url_builder = URLBuilder::new(base_url);
417
-
url_builder.path("/xrpc/com.atproto.repo.putRecord");
418
-
let url = url_builder.build();
417
+
let url = build_url(
418
+
base_url,
419
+
"/xrpc/com.atproto.repo.putRecord",
420
+
iter::empty::<(&str, &str)>(),
421
+
)?
422
+
.to_string();
419
423
420
424
let value = serde_json::to_value(record)?;
421
425
···
496
500
base_url: &str,
497
501
record: DeleteRecordRequest,
498
502
) -> Result<DeleteRecordResponse> {
499
-
let mut url_builder = URLBuilder::new(base_url);
500
-
url_builder.path("/xrpc/com.atproto.repo.deleteRecord");
501
-
let url = url_builder.build();
503
+
let url = build_url(
504
+
base_url,
505
+
"/xrpc/com.atproto.repo.deleteRecord",
506
+
iter::empty::<(&str, &str)>(),
507
+
)?
508
+
.to_string();
502
509
503
510
let value = serde_json::to_value(record)?;
504
511
+26
-13
crates/atproto-client/src/com_atproto_server.rs
···
19
19
//! an access JWT token from an authenticated session.
20
20
21
21
use anyhow::Result;
22
-
use atproto_identity::url::URLBuilder;
22
+
use atproto_identity::url::build_url;
23
23
use serde::{Deserialize, Serialize};
24
+
use std::iter;
24
25
25
26
use crate::{
26
27
client::{Auth, post_json},
···
118
119
password: &str,
119
120
auth_factor_token: Option<&str>,
120
121
) -> Result<AppPasswordSession> {
121
-
let mut url_builder = URLBuilder::new(base_url);
122
-
url_builder.path("/xrpc/com.atproto.server.createSession");
123
-
let url = url_builder.build();
122
+
let url = build_url(
123
+
base_url,
124
+
"/xrpc/com.atproto.server.createSession",
125
+
iter::empty::<(&str, &str)>(),
126
+
)?
127
+
.to_string();
124
128
125
129
let request = CreateSessionRequest {
126
130
identifier: identifier.to_string(),
···
156
160
base_url: &str,
157
161
refresh_token: &str,
158
162
) -> Result<RefreshSessionResponse> {
159
-
let mut url_builder = URLBuilder::new(base_url);
160
-
url_builder.path("/xrpc/com.atproto.server.refreshSession");
161
-
let url = url_builder.build();
163
+
let url = build_url(
164
+
base_url,
165
+
"/xrpc/com.atproto.server.refreshSession",
166
+
iter::empty::<(&str, &str)>(),
167
+
)?
168
+
.to_string();
162
169
163
170
// Create a new client with the refresh token in Authorization header
164
171
let mut headers = reqwest::header::HeaderMap::new();
···
197
204
access_token: &str,
198
205
name: &str,
199
206
) -> Result<AppPasswordResponse> {
200
-
let mut url_builder = URLBuilder::new(base_url);
201
-
url_builder.path("/xrpc/com.atproto.server.createAppPassword");
202
-
let url = url_builder.build();
207
+
let url = build_url(
208
+
base_url,
209
+
"/xrpc/com.atproto.server.createAppPassword",
210
+
iter::empty::<(&str, &str)>(),
211
+
)?
212
+
.to_string();
203
213
204
214
let request_body = serde_json::json!({
205
215
"name": name
···
260
270
}
261
271
};
262
272
263
-
let mut url_builder = URLBuilder::new(base_url);
264
-
url_builder.path("/xrpc/com.atproto.server.deleteSession");
265
-
let url = url_builder.build();
273
+
let url = build_url(
274
+
base_url,
275
+
"/xrpc/com.atproto.server.deleteSession",
276
+
iter::empty::<(&str, &str)>(),
277
+
)?
278
+
.to_string();
266
279
267
280
// Create headers with the Bearer token
268
281
let mut headers = reqwest::header::HeaderMap::new();
+3
crates/atproto-client/src/lib.rs
+103
crates/atproto-client/src/record_resolver.rs
···
1
+
//! Helpers for resolving AT Protocol records referenced by URI.
2
+
3
+
use std::str::FromStr;
4
+
use std::sync::Arc;
5
+
6
+
use anyhow::{Result, anyhow, bail};
7
+
use async_trait::async_trait;
8
+
use atproto_identity::traits::IdentityResolver;
9
+
use atproto_record::aturi::ATURI;
10
+
11
+
use crate::{
12
+
client::Auth,
13
+
com::atproto::repo::{GetRecordResponse, get_record},
14
+
};
15
+
16
+
/// Trait for resolving AT Protocol records by `at://` URI.
17
+
///
18
+
/// Implementations perform the network lookup and deserialize the response into
19
+
/// the requested type.
20
+
#[async_trait]
21
+
pub trait RecordResolver: Send + Sync {
22
+
/// Resolve an AT URI to a typed record.
23
+
async fn resolve<T>(&self, aturi: &str) -> Result<T>
24
+
where
25
+
T: serde::de::DeserializeOwned + Send;
26
+
}
27
+
28
+
/// Resolver that fetches records using public XRPC endpoints.
29
+
///
30
+
/// Uses an identity resolver to dynamically determine the PDS endpoint for each record.
31
+
#[derive(Clone)]
32
+
pub struct HttpRecordResolver {
33
+
http_client: reqwest::Client,
34
+
identity_resolver: Arc<dyn IdentityResolver>,
35
+
}
36
+
37
+
impl HttpRecordResolver {
38
+
/// Create a new resolver using the provided HTTP client and identity resolver.
39
+
///
40
+
/// The identity resolver is used to dynamically determine the PDS endpoint for each record
41
+
/// based on the authority (DID or handle) in the AT URI.
42
+
pub fn new(
43
+
http_client: reqwest::Client,
44
+
identity_resolver: Arc<dyn IdentityResolver>,
45
+
) -> Self {
46
+
Self {
47
+
http_client,
48
+
identity_resolver,
49
+
}
50
+
}
51
+
}
52
+
53
+
#[async_trait]
54
+
impl RecordResolver for HttpRecordResolver {
55
+
async fn resolve<T>(&self, aturi: &str) -> Result<T>
56
+
where
57
+
T: serde::de::DeserializeOwned + Send,
58
+
{
59
+
let parsed = ATURI::from_str(aturi).map_err(|error| anyhow!(error))?;
60
+
61
+
// Resolve the authority (DID or handle) to get the DID document
62
+
let document = self
63
+
.identity_resolver
64
+
.resolve(&parsed.authority)
65
+
.await
66
+
.map_err(|error| {
67
+
anyhow!("Failed to resolve identity for {}: {}", parsed.authority, error)
68
+
})?;
69
+
70
+
// Extract PDS endpoint from the DID document
71
+
let pds_endpoints = document.pds_endpoints();
72
+
let base_url = pds_endpoints
73
+
.first()
74
+
.ok_or_else(|| anyhow!("No PDS endpoint found for {}", parsed.authority))?;
75
+
76
+
let auth = Auth::None;
77
+
78
+
let response = get_record(
79
+
&self.http_client,
80
+
&auth,
81
+
base_url,
82
+
&parsed.authority,
83
+
&parsed.collection,
84
+
&parsed.record_key,
85
+
None,
86
+
)
87
+
.await?;
88
+
89
+
match response {
90
+
GetRecordResponse::Record { value, .. } => {
91
+
serde_json::from_value(value).map_err(|error| anyhow!(error))
92
+
}
93
+
GetRecordResponse::Error(error) => {
94
+
let message = error.error_message();
95
+
if message.is_empty() {
96
+
bail!("Record resolution failed without additional error details");
97
+
}
98
+
99
+
bail!(message);
100
+
}
101
+
}
102
+
}
103
+
}
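A minimal usage sketch, assuming an identity resolver is already available as an `Arc<dyn IdentityResolver>`; the `at://` URI and target type are placeholders:

```rust
use std::sync::Arc;

use atproto_client::record_resolver::{HttpRecordResolver, RecordResolver};
use atproto_identity::traits::IdentityResolver;

async fn fetch_record(
    identity_resolver: Arc<dyn IdentityResolver>,
    aturi: &str,
) -> anyhow::Result<serde_json::Value> {
    let resolver = HttpRecordResolver::new(reqwest::Client::new(), identity_resolver);
    // Resolves the URI's authority to its PDS, calls com.atproto.repo.getRecord,
    // and deserializes the record value into the requested type.
    resolver.resolve::<serde_json::Value>(aturi).await
}
```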
+43
crates/atproto-extras/Cargo.toml
···
1
+
[package]
2
+
name = "atproto-extras"
3
+
version = "0.13.0"
4
+
description = "AT Protocol extras - facet parsing and rich text utilities"
5
+
readme = "README.md"
6
+
homepage = "https://tangled.sh/@smokesignal.events/atproto-identity-rs"
7
+
documentation = "https://docs.rs/atproto-extras"
8
+
9
+
edition.workspace = true
10
+
rust-version.workspace = true
11
+
authors.workspace = true
12
+
repository.workspace = true
13
+
license.workspace = true
14
+
keywords.workspace = true
15
+
categories.workspace = true
16
+
17
+
[dependencies]
18
+
atproto-identity.workspace = true
19
+
atproto-record.workspace = true
20
+
21
+
anyhow.workspace = true
22
+
async-trait.workspace = true
23
+
clap = { workspace = true, optional = true }
24
+
regex.workspace = true
25
+
reqwest = { workspace = true, optional = true }
26
+
serde_json = { workspace = true, optional = true }
27
+
tokio = { workspace = true, optional = true }
28
+
29
+
[dev-dependencies]
30
+
tokio = { workspace = true, features = ["macros", "rt"] }
31
+
32
+
[features]
33
+
default = ["hickory-dns"]
34
+
hickory-dns = ["atproto-identity/hickory-dns"]
35
+
clap = ["dep:clap"]
36
+
cli = ["dep:clap", "dep:serde_json", "dep:tokio", "dep:reqwest"]
37
+
38
+
[[bin]]
39
+
name = "atproto-extras-parse-facets"
40
+
required-features = ["clap", "cli", "hickory-dns"]
41
+
42
+
[lints]
43
+
workspace = true
+128
crates/atproto-extras/README.md
···
1
+
# atproto-extras
2
+
3
+
Extra utilities for AT Protocol applications, including rich text facet parsing.
4
+
5
+
## Features
6
+
7
+
- **Facet Parsing**: Extract mentions (`@handle`), URLs, and hashtags (`#tag`) from plain text with correct UTF-8 byte offset calculation
8
+
- **Identity Integration**: Resolve mention handles to DIDs during parsing
9
+
10
+
## Installation
11
+
12
+
Add to your `Cargo.toml`:
13
+
14
+
```toml
15
+
[dependencies]
16
+
atproto-extras = "0.13"
17
+
```
18
+
19
+
## Usage
20
+
21
+
### Parsing Text for Facets
22
+
23
+
```rust
24
+
use atproto_extras::{parse_urls, parse_tags};
25
+
use atproto_record::lexicon::app::bsky::richtext::facet::FacetFeature;
26
+
27
+
let text = "Check out https://example.com #rust";
28
+
29
+
// Parse URLs and tags - returns Vec<Facet> directly
30
+
let url_facets = parse_urls(text);
31
+
let tag_facets = parse_tags(text);
32
+
33
+
// Each facet includes byte positions and typed features
34
+
for facet in url_facets {
35
+
if let Some(FacetFeature::Link(link)) = facet.features.first() {
36
+
println!("URL at bytes {}..{}: {}",
37
+
facet.index.byte_start, facet.index.byte_end, link.uri);
38
+
}
39
+
}
40
+
41
+
for facet in tag_facets {
42
+
if let Some(FacetFeature::Tag(tag)) = facet.features.first() {
43
+
println!("Tag at bytes {}..{}: #{}",
44
+
facet.index.byte_start, facet.index.byte_end, tag.tag);
45
+
}
46
+
}
47
+
```
48
+
49
+
### Parsing Mentions
50
+
51
+
Mention parsing requires an `IdentityResolver` to convert handles to DIDs:
52
+
53
+
```rust
54
+
use atproto_extras::{parse_mentions, FacetLimits};
55
+
use atproto_record::lexicon::app::bsky::richtext::facet::FacetFeature;
56
+
57
+
let text = "Hello @alice.bsky.social!";
58
+
let limits = FacetLimits::default();
59
+
60
+
// Requires an async context and IdentityResolver
61
+
let facets = parse_mentions(text, &resolver, &limits).await;
62
+
63
+
for facet in facets {
64
+
if let Some(FacetFeature::Mention(mention)) = facet.features.first() {
65
+
println!("Mention at bytes {}..{} resolved to {}",
66
+
facet.index.byte_start, facet.index.byte_end, mention.did);
67
+
}
68
+
}
69
+
```
70
+
71
+
Mentions that cannot be resolved to a valid DID are automatically skipped. Mentions appearing within URLs are also excluded.
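The `resolver` argument can be any `IdentityResolver`. A minimal sketch using the DNS/HTTP resolver from `atproto-identity` (mirroring the bundled CLI tool, and assuming the default `hickory-dns` feature):

```rust
use std::sync::Arc;

use atproto_extras::{parse_mentions, FacetLimits};
use atproto_identity::resolve::{HickoryDnsResolver, InnerIdentityResolver};

#[tokio::main]
async fn main() {
    let resolver = InnerIdentityResolver {
        http_client: reqwest::Client::new(),
        dns_resolver: Arc::new(HickoryDnsResolver::create_resolver(&[])),
        plc_hostname: "plc.directory".to_string(),
    };

    let limits = FacetLimits::default();
    let facets = parse_mentions("Hello @alice.bsky.social!", &resolver, &limits).await;
    println!("resolved {} mention(s)", facets.len());
}
```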
72
+
73
+
### Creating AT Protocol Facets
74
+
75
+
```rust
76
+
use atproto_extras::{parse_facets_from_text, FacetLimits};
77
+
78
+
let text = "Hello @alice.bsky.social! Check https://rust-lang.org #rust";
79
+
let limits = FacetLimits::default();
80
+
81
+
// Requires an async context and IdentityResolver
82
+
let facets = parse_facets_from_text(text, &resolver, &limits).await;
83
+
84
+
if let Some(facets) = facets {
85
+
for facet in &facets {
86
+
println!("Facet at {}..{}", facet.index.byte_start, facet.index.byte_end);
87
+
}
88
+
}
89
+
```
90
+
91
+
## Byte Offset Handling
92
+
93
+
AT Protocol facets use UTF-8 byte offsets, not character indices. This is critical for correct handling of multi-byte characters like emojis or non-ASCII text.
94
+
95
+
```rust
96
+
use atproto_extras::parse_urls;
97
+
98
+
// Text with emojis (multi-byte UTF-8 characters)
99
+
let text = "✨ Check https://example.com ✨";
100
+
101
+
let facets = parse_urls(text);
102
+
// Byte positions correctly account for the 3-byte emoji
103
+
assert_eq!(facets[0].index.byte_start, 10); // After "✨ Check " (3 + 1 + 6 = 10 bytes)
104
+
```
105
+
106
+
## Facet Limits
107
+
108
+
Use `FacetLimits` to control the maximum number of facets processed:
109
+
110
+
```rust
111
+
use atproto_extras::FacetLimits;
112
+
113
+
// Default limits
114
+
let limits = FacetLimits::default();
115
+
// mentions_max: 5, tags_max: 5, links_max: 5, max: 10
116
+
117
+
// Custom limits
118
+
let custom = FacetLimits {
119
+
mentions_max: 10,
120
+
tags_max: 10,
121
+
links_max: 10,
122
+
max: 20,
123
+
};
124
+
```
125
+
126
+
## License
127
+
128
+
MIT
+176
crates/atproto-extras/src/bin/atproto-extras-parse-facets.rs
···
1
+
//! Command-line tool for generating AT Protocol facet arrays from text.
2
+
//!
3
+
//! This tool parses a string and outputs the facet array in JSON format.
4
+
//! Facets include mentions (@handle), URLs (https://...), and hashtags (#tag).
5
+
//!
6
+
//! By default, mentions are detected but output with placeholder DIDs. Use
7
+
//! `--resolve-mentions` to resolve handles to actual DIDs (requires network access).
8
+
//!
9
+
//! # Usage
10
+
//!
11
+
//! ```bash
12
+
//! # Parse facets without resolving mentions
13
+
//! cargo run --features clap,cli,hickory-dns --bin atproto-extras-parse-facets -- "Check out https://example.com and #rust"
14
+
//!
15
+
//! # Resolve mentions to DIDs
16
+
//! cargo run --features clap,cli,hickory-dns --bin atproto-extras-parse-facets -- --resolve-mentions "Hello @bsky.app!"
17
+
//! ```
18
+
19
+
use atproto_extras::{FacetLimits, parse_mentions, parse_tags, parse_urls};
20
+
use atproto_identity::resolve::{HickoryDnsResolver, InnerIdentityResolver};
21
+
use atproto_record::lexicon::app::bsky::richtext::facet::{
22
+
ByteSlice, Facet, FacetFeature, Mention,
23
+
};
24
+
use clap::Parser;
25
+
use regex::bytes::Regex;
26
+
use std::sync::Arc;
27
+
28
+
/// Parse text and output AT Protocol facets as JSON.
29
+
#[derive(Parser)]
30
+
#[command(
31
+
name = "atproto-extras-parse-facets",
32
+
version,
33
+
about = "Parse text and output AT Protocol facets as JSON",
34
+
long_about = "This tool parses a string for mentions, URLs, and hashtags,\n\
35
+
then outputs the corresponding AT Protocol facet array in JSON format.\n\n\
36
+
By default, mentions are detected but output with placeholder DIDs.\n\
37
+
Use --resolve-mentions to resolve handles to actual DIDs (requires network)."
38
+
)]
39
+
struct Args {
40
+
/// The text to parse for facets
41
+
text: String,
42
+
43
+
/// Resolve mention handles to DIDs (requires network access)
44
+
#[arg(long)]
45
+
resolve_mentions: bool,
46
+
47
+
/// Show debug information on stderr
48
+
#[arg(long, short = 'd')]
49
+
debug: bool,
50
+
}
51
+
52
+
/// Parse mention spans from text without resolution (returns placeholder DIDs).
53
+
fn parse_mention_spans(text: &str) -> Vec<Facet> {
54
+
let mut facets = Vec::new();
55
+
56
+
// Get URL ranges to exclude mentions within URLs
57
+
let url_facets = parse_urls(text);
58
+
59
+
// Same regex pattern as parse_mentions
60
+
let mention_regex = Regex::new(
61
+
r"(?:^|[^\w])(@([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)",
62
+
)
63
+
.expect("Invalid mention regex");
64
+
65
+
let text_bytes = text.as_bytes();
66
+
67
+
for capture in mention_regex.captures_iter(text_bytes) {
68
+
if let Some(mention_match) = capture.get(1) {
69
+
let start = mention_match.start();
70
+
let end = mention_match.end();
71
+
72
+
// Check if this mention overlaps with any URL
73
+
let overlaps_url = url_facets.iter().any(|facet| {
74
+
(start >= facet.index.byte_start && start < facet.index.byte_end)
75
+
|| (end > facet.index.byte_start && end <= facet.index.byte_end)
76
+
});
77
+
78
+
if !overlaps_url {
79
+
let handle = std::str::from_utf8(&mention_match.as_bytes()[1..])
80
+
.unwrap_or_default()
81
+
.to_string();
82
+
83
+
facets.push(Facet {
84
+
index: ByteSlice {
85
+
byte_start: start,
86
+
byte_end: end,
87
+
},
88
+
features: vec![FacetFeature::Mention(Mention {
89
+
did: format!("did:plc:<unresolved:{}>", handle),
90
+
})],
91
+
});
92
+
}
93
+
}
94
+
}
95
+
96
+
facets
97
+
}
98
+
99
+
#[tokio::main]
100
+
async fn main() {
101
+
let args = Args::parse();
102
+
let text = &args.text;
103
+
let mut facets: Vec<Facet> = Vec::new();
104
+
let limits = FacetLimits::default();
105
+
106
+
// Parse mentions (either resolved or with placeholders)
107
+
if args.resolve_mentions {
108
+
let http_client = reqwest::Client::new();
109
+
let dns_resolver = HickoryDnsResolver::create_resolver(&[]);
110
+
let resolver = InnerIdentityResolver {
111
+
http_client,
112
+
dns_resolver: Arc::new(dns_resolver),
113
+
plc_hostname: "plc.directory".to_string(),
114
+
};
115
+
let mention_facets = parse_mentions(text, &resolver, &limits).await;
116
+
facets.extend(mention_facets);
117
+
} else {
118
+
let mention_facets = parse_mention_spans(text);
119
+
facets.extend(mention_facets);
120
+
}
121
+
122
+
// Parse URLs
123
+
let url_facets = parse_urls(text);
124
+
facets.extend(url_facets);
125
+
126
+
// Parse hashtags
127
+
let tag_facets = parse_tags(text);
128
+
facets.extend(tag_facets);
129
+
130
+
// Sort facets by byte_start for consistent output
131
+
facets.sort_by_key(|f| f.index.byte_start);
132
+
133
+
// Output as JSON
134
+
if facets.is_empty() {
135
+
println!("null");
136
+
} else {
137
+
match serde_json::to_string_pretty(&facets) {
138
+
Ok(json) => println!("{}", json),
139
+
Err(e) => {
140
+
eprintln!(
141
+
"error-atproto-extras-parse-facets-1 Error serializing facets: {}",
142
+
e
143
+
);
144
+
std::process::exit(1);
145
+
}
146
+
}
147
+
}
148
+
149
+
// Show debug info if requested
150
+
if args.debug {
151
+
eprintln!();
152
+
eprintln!("--- Debug Info ---");
153
+
eprintln!("Input text: {:?}", text);
154
+
eprintln!("Text length: {} bytes", text.len());
155
+
eprintln!("Facets found: {}", facets.len());
156
+
eprintln!("Mentions resolved: {}", args.resolve_mentions);
157
+
158
+
// Show byte slice verification
159
+
let text_bytes = text.as_bytes();
160
+
for (i, facet) in facets.iter().enumerate() {
161
+
let start = facet.index.byte_start;
162
+
let end = facet.index.byte_end;
163
+
let slice_text =
164
+
std::str::from_utf8(&text_bytes[start..end]).unwrap_or("<invalid utf8>");
165
+
let feature_type = match &facet.features[0] {
166
+
FacetFeature::Mention(_) => "mention",
167
+
FacetFeature::Link(_) => "link",
168
+
FacetFeature::Tag(_) => "tag",
169
+
};
170
+
eprintln!(
171
+
" [{}] {} @ bytes {}..{}: {:?}",
172
+
i, feature_type, start, end, slice_text
173
+
);
174
+
}
175
+
}
176
+
}
+942
crates/atproto-extras/src/facets.rs
···
1
+
//! Rich text facet parsing for AT Protocol.
2
+
//!
3
+
//! This module provides functionality for extracting semantic annotations (facets)
4
+
//! from plain text. Facets include mentions, links (URLs), and hashtags.
5
+
//!
6
+
//! # Overview
7
+
//!
8
+
//! AT Protocol rich text uses "facets" to annotate specific byte ranges within text with
9
+
//! semantic meaning. This module handles:
10
+
//!
11
+
//! - **Parsing**: Extract mentions, URLs, and hashtags from plain text
12
+
//! - **Facet Creation**: Build proper AT Protocol facet structures with resolved DIDs
13
+
//!
14
+
//! # Byte Offset Calculation
15
+
//!
16
+
//! This implementation correctly uses UTF-8 byte offsets as required by AT Protocol.
17
+
//! The facets use "inclusive start and exclusive end" byte ranges. All parsing is done
18
+
//! using `regex::bytes::Regex` which operates on byte slices and returns byte positions,
19
+
//! ensuring correct handling of multi-byte UTF-8 characters (emojis, CJK, accented chars).
20
+
//!
21
+
//! # Example
22
+
//!
23
+
//! ```ignore
24
+
//! use atproto_extras::facets::{parse_urls, parse_tags, FacetLimits};
25
+
//! use atproto_record::lexicon::app::bsky::richtext::facet::FacetFeature;
26
+
//!
27
+
//! let text = "Check out https://example.com #rust";
28
+
//!
29
+
//! // Parse URLs and tags as Facet objects
30
+
//! let url_facets = parse_urls(text);
31
+
//! let tag_facets = parse_tags(text);
32
+
//!
33
+
//! // Access facet data directly
34
+
//! for facet in url_facets {
35
+
//! if let Some(FacetFeature::Link(link)) = facet.features.first() {
36
+
//! println!("URL at bytes {}..{}: {}",
37
+
//! facet.index.byte_start, facet.index.byte_end, link.uri);
38
+
//! }
39
+
//! }
40
+
//! ```
41
+
42
+
use atproto_identity::resolve::IdentityResolver;
43
+
use atproto_record::lexicon::app::bsky::richtext::facet::{
44
+
ByteSlice, Facet, FacetFeature, Link, Mention, Tag,
45
+
};
46
+
use regex::bytes::Regex;
47
+
48
+
/// Configuration for facet parsing limits.
49
+
///
50
+
/// These limits protect against abuse by capping the number of facets
51
+
/// that will be processed. This is important for both performance and
52
+
/// security when handling user-generated content.
53
+
///
54
+
/// # Example
55
+
///
56
+
/// ```
57
+
/// use atproto_extras::FacetLimits;
58
+
///
59
+
/// // Use defaults
60
+
/// let limits = FacetLimits::default();
61
+
///
62
+
/// // Or customize
63
+
/// let custom = FacetLimits {
64
+
/// mentions_max: 10,
65
+
/// tags_max: 10,
66
+
/// links_max: 10,
67
+
/// max: 20,
68
+
/// };
69
+
/// ```
70
+
#[derive(Debug, Clone, Copy)]
71
+
pub struct FacetLimits {
72
+
/// Maximum number of mention facets to process (default: 5)
73
+
pub mentions_max: usize,
74
+
/// Maximum number of tag facets to process (default: 5)
75
+
pub tags_max: usize,
76
+
/// Maximum number of link facets to process (default: 5)
77
+
pub links_max: usize,
78
+
/// Maximum total number of facets to process (default: 10)
79
+
pub max: usize,
80
+
}
81
+
82
+
impl Default for FacetLimits {
83
+
fn default() -> Self {
84
+
Self {
85
+
mentions_max: 5,
86
+
tags_max: 5,
87
+
links_max: 5,
88
+
max: 10,
89
+
}
90
+
}
91
+
}
92
+
93
+
/// Parse mentions from text and return them as Facet objects with resolved DIDs.
94
+
///
95
+
/// This function extracts AT Protocol handle mentions (e.g., `@alice.bsky.social`)
96
+
/// from text, resolves each handle to a DID using the provided identity resolver,
97
+
/// and returns AT Protocol Facet objects with Mention features.
98
+
///
99
+
/// Mentions that cannot be resolved to a valid DID are skipped. Mentions that
100
+
/// appear within URLs are also excluded to avoid false positives.
101
+
///
102
+
/// # Arguments
103
+
///
104
+
/// * `text` - The text to parse for mentions
105
+
/// * `identity_resolver` - Resolver for converting handles to DIDs
106
+
/// * `limits` - Configuration for maximum mentions to process
107
+
///
108
+
/// # Returns
109
+
///
110
+
/// A vector of Facet objects for successfully resolved mentions.
111
+
///
112
+
/// # Example
113
+
///
114
+
/// ```ignore
115
+
/// use atproto_extras::{parse_mentions, FacetLimits};
116
+
/// use atproto_record::lexicon::app::bsky::richtext::facet::FacetFeature;
117
+
///
118
+
/// let text = "Hello @alice.bsky.social!";
119
+
/// let limits = FacetLimits::default();
120
+
///
121
+
/// // Requires an async context and identity resolver
122
+
/// let facets = parse_mentions(text, &resolver, &limits).await;
123
+
///
124
+
/// for facet in facets {
125
+
/// if let Some(FacetFeature::Mention(mention)) = facet.features.first() {
126
+
/// println!("Mention {} resolved to {}",
127
+
/// &text[facet.index.byte_start..facet.index.byte_end],
128
+
/// mention.did);
129
+
/// }
130
+
/// }
131
+
/// ```
132
+
pub async fn parse_mentions(
133
+
text: &str,
134
+
identity_resolver: &dyn IdentityResolver,
135
+
limits: &FacetLimits,
136
+
) -> Vec<Facet> {
137
+
let mut facets = Vec::new();
138
+
139
+
// First, parse all URLs to exclude mention matches within them
140
+
let url_facets = parse_urls(text);
141
+
142
+
// Regex based on: https://atproto.com/specs/handle#handle-identifier-syntax
143
+
// Pattern: [$|\W](@([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)
144
+
let mention_regex = Regex::new(
145
+
r"(?:^|[^\w])(@([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)",
146
+
)
147
+
.unwrap();
148
+
149
+
let text_bytes = text.as_bytes();
150
+
let mut mention_count = 0;
151
+
152
+
for capture in mention_regex.captures_iter(text_bytes) {
153
+
if mention_count >= limits.mentions_max {
154
+
break;
155
+
}
156
+
157
+
if let Some(mention_match) = capture.get(1) {
158
+
let start = mention_match.start();
159
+
let end = mention_match.end();
160
+
161
+
// Check if this mention overlaps with any URL
162
+
let overlaps_url = url_facets.iter().any(|facet| {
163
+
// Check if mention is within or overlaps the URL span
164
+
(start >= facet.index.byte_start && start < facet.index.byte_end)
165
+
|| (end > facet.index.byte_start && end <= facet.index.byte_end)
166
+
});
167
+
168
+
// Only process the mention if it doesn't overlap with a URL
169
+
if !overlaps_url {
170
+
let handle = std::str::from_utf8(&mention_match.as_bytes()[1..])
171
+
.unwrap_or_default()
172
+
.to_string();
173
+
174
+
// Try to resolve the handle to a DID
175
+
// First try with at:// prefix, then without
176
+
let at_uri = format!("at://{}", handle);
177
+
let did_result = match identity_resolver.resolve(&at_uri).await {
178
+
Ok(doc) => Ok(doc),
179
+
Err(_) => identity_resolver.resolve(&handle).await,
180
+
};
181
+
182
+
// Only add the mention facet if we successfully resolved the DID
183
+
if let Ok(did_doc) = did_result {
184
+
facets.push(Facet {
185
+
index: ByteSlice {
186
+
byte_start: start,
187
+
byte_end: end,
188
+
},
189
+
features: vec![FacetFeature::Mention(Mention {
190
+
did: did_doc.id.to_string(),
191
+
})],
192
+
});
193
+
mention_count += 1;
194
+
}
195
+
}
196
+
}
197
+
}
198
+
199
+
facets
200
+
}
201
+
202
+
/// Parse URLs from text and return them as Facet objects.
203
+
///
204
+
/// This function extracts HTTP and HTTPS URLs from text with correct
205
+
/// byte position tracking for UTF-8 text, returning AT Protocol Facet objects
206
+
/// with Link features.
207
+
///
208
+
/// # Supported URL Patterns
209
+
///
210
+
/// - HTTP URLs: `http://example.com`
211
+
/// - HTTPS URLs: `https://example.com`
212
+
/// - URLs with paths, query strings, and fragments
213
+
/// - URLs with subdomains: `https://www.example.com`
214
+
///
215
+
/// # Example
216
+
///
217
+
/// ```
218
+
/// use atproto_extras::parse_urls;
219
+
/// use atproto_record::lexicon::app::bsky::richtext::facet::FacetFeature;
220
+
///
221
+
/// let text = "Visit https://example.com/path?query=1 for more info";
222
+
/// let facets = parse_urls(text);
223
+
///
224
+
/// assert_eq!(facets.len(), 1);
225
+
/// assert_eq!(facets[0].index.byte_start, 6);
226
+
/// assert_eq!(facets[0].index.byte_end, 38);
227
+
/// if let Some(FacetFeature::Link(link)) = facets[0].features.first() {
228
+
/// assert_eq!(link.uri, "https://example.com/path?query=1");
229
+
/// }
230
+
/// ```
231
+
///
232
+
/// # Multi-byte Character Handling
233
+
///
234
+
/// Byte positions are correctly calculated even with emojis and other
235
+
/// multi-byte UTF-8 characters:
236
+
///
237
+
/// ```
238
+
/// use atproto_extras::parse_urls;
239
+
/// use atproto_record::lexicon::app::bsky::richtext::facet::FacetFeature;
240
+
///
241
+
/// let text = "Check out https://example.com now!";
242
+
/// let facets = parse_urls(text);
243
+
/// let text_bytes = text.as_bytes();
244
+
///
245
+
/// // The byte slice matches the URL
246
+
/// let url_bytes = &text_bytes[facets[0].index.byte_start..facets[0].index.byte_end];
247
+
/// assert_eq!(std::str::from_utf8(url_bytes).unwrap(), "https://example.com");
248
+
/// ```
249
+
pub fn parse_urls(text: &str) -> Vec<Facet> {
250
+
let mut facets = Vec::new();
251
+
252
+
// Partial/naive URL regex based on: https://stackoverflow.com/a/3809435
253
+
// Pattern: [$|\W](https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]+\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*[-a-zA-Z0-9@%_\+~#//=])?)
254
+
// Modified to use + instead of {1,6} to support longer TLDs and multi-level subdomains
255
+
let url_regex = Regex::new(
256
+
r"(?:^|[^\w])(https?://(?:www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]+\b(?:[-a-zA-Z0-9()@:%_\+.~#?&//=]*[-a-zA-Z0-9@%_\+~#//=])?)"
257
+
).unwrap();
258
+
259
+
let text_bytes = text.as_bytes();
260
+
for capture in url_regex.captures_iter(text_bytes) {
261
+
if let Some(url_match) = capture.get(1) {
262
+
let url = std::str::from_utf8(url_match.as_bytes())
263
+
.unwrap_or_default()
264
+
.to_string();
265
+
266
+
facets.push(Facet {
267
+
index: ByteSlice {
268
+
byte_start: url_match.start(),
269
+
byte_end: url_match.end(),
270
+
},
271
+
features: vec![FacetFeature::Link(Link { uri: url })],
272
+
});
273
+
}
274
+
}
275
+
276
+
facets
277
+
}
278
+
279
+
/// Parse hashtags from text and return them as Facet objects.
280
+
///
281
+
/// This function extracts hashtags (e.g., `#rust`, `#ATProto`) from text,
282
+
/// returning AT Protocol Facet objects with Tag features.
283
+
/// It supports both standard `#` and full-width `＃` (U+FF03) hash symbols.
284
+
///
285
+
/// # Tag Syntax
286
+
///
287
+
/// - Tags must start with `#` or `＃` (full-width)
288
+
/// - Tag content follows word character rules (`\w`)
289
+
/// - Purely numeric tags (e.g., `#123`) are excluded
290
+
///
291
+
/// # Example
292
+
///
293
+
/// ```
294
+
/// use atproto_extras::parse_tags;
295
+
/// use atproto_record::lexicon::app::bsky::richtext::facet::FacetFeature;
296
+
///
297
+
/// let text = "Learning #rust and #golang today! #100DaysOfCode";
298
+
/// let facets = parse_tags(text);
299
+
///
300
+
/// assert_eq!(facets.len(), 3);
301
+
/// if let Some(FacetFeature::Tag(tag)) = facets[0].features.first() {
302
+
/// assert_eq!(tag.tag, "rust");
303
+
/// }
304
+
/// if let Some(FacetFeature::Tag(tag)) = facets[1].features.first() {
305
+
/// assert_eq!(tag.tag, "golang");
306
+
/// }
307
+
/// if let Some(FacetFeature::Tag(tag)) = facets[2].features.first() {
308
+
/// assert_eq!(tag.tag, "100DaysOfCode");
309
+
/// }
310
+
/// ```
311
+
///
312
+
/// # Numeric Tags
313
+
///
314
+
/// Purely numeric tags are excluded:
315
+
///
316
+
/// ```
317
+
/// use atproto_extras::parse_tags;
318
+
///
319
+
/// let text = "Item #42 is special";
320
+
/// let facets = parse_tags(text);
321
+
///
322
+
/// // #42 is not extracted because it's purely numeric
323
+
/// assert_eq!(facets.len(), 0);
324
+
/// ```
325
+
pub fn parse_tags(text: &str) -> Vec<Facet> {
326
+
let mut facets = Vec::new();
327
+
328
+
// Regex based on: https://github.com/bluesky-social/atproto/blob/d91988fe79030b61b556dd6f16a46f0c3b9d0b44/packages/api/src/rich-text/util.ts
329
+
// Simplified for Rust - matches hashtags at word boundaries
330
+
// Pattern matches: start of string or non-word char, then # or ＃, then tag content
331
+
let tag_regex = Regex::new(r"(?:^|[^\w])([#\xEF\xBC\x83])([\w]+(?:[\w]*)*)").unwrap();
332
+
333
+
let text_bytes = text.as_bytes();
334
+
335
+
// Work with bytes for proper position tracking
336
+
for capture in tag_regex.captures_iter(text_bytes) {
337
+
if let (Some(full_match), Some(hash_match), Some(tag_match)) =
338
+
(capture.get(0), capture.get(1), capture.get(2))
339
+
{
340
+
// Calculate the absolute byte position of the hash symbol
341
+
// The full match includes the preceding character (if any)
342
+
// so we need to adjust for that
343
+
let match_start = full_match.start();
344
+
let hash_offset = hash_match.start() - full_match.start();
345
+
let start = match_start + hash_offset;
346
+
let end = match_start + hash_offset + hash_match.len() + tag_match.len();
347
+
348
+
// Extract just the tag text (without the hash symbol)
349
+
let tag = std::str::from_utf8(tag_match.as_bytes()).unwrap_or_default();
350
+
351
+
// Only include tags that are not purely numeric
352
+
if !tag.chars().all(|c| c.is_ascii_digit()) {
353
+
facets.push(Facet {
354
+
index: ByteSlice {
355
+
byte_start: start,
356
+
byte_end: end,
357
+
},
358
+
features: vec![FacetFeature::Tag(Tag {
359
+
tag: tag.to_string(),
360
+
})],
361
+
});
362
+
}
363
+
}
364
+
}
365
+
366
+
facets
367
+
}
368
+
369
+
/// Parse facets from text and return a vector of Facet objects.
370
+
///
371
+
/// This function extracts mentions, URLs, and hashtags from the provided text
372
+
/// and creates AT Protocol facets with proper byte indices.
373
+
///
374
+
/// Mentions are resolved to actual DIDs using the provided identity resolver.
375
+
/// If a handle cannot be resolved to a DID, the mention facet is skipped.
376
+
///
377
+
/// # Arguments
378
+
///
379
+
/// * `text` - The text to extract facets from
380
+
/// * `identity_resolver` - Resolver for converting handles to DIDs
381
+
/// * `limits` - Configuration for maximum facets per type and total
382
+
///
383
+
/// # Returns
384
+
///
385
+
/// Optional vector of facets. Returns `None` if no facets were found.
386
+
///
387
+
/// # Example
388
+
///
389
+
/// ```ignore
390
+
/// use atproto_extras::{parse_facets_from_text, FacetLimits};
391
+
///
392
+
/// let text = "Hello @alice.bsky.social! Check #rust at https://rust-lang.org";
393
+
/// let limits = FacetLimits::default();
394
+
///
395
+
/// // Requires an async context and identity resolver
396
+
/// let facets = parse_facets_from_text(text, &resolver, &limits).await;
397
+
///
398
+
/// if let Some(facets) = facets {
399
+
/// for facet in &facets {
400
+
/// println!("Facet at {}..{}", facet.index.byte_start, facet.index.byte_end);
401
+
/// }
402
+
/// }
403
+
/// ```
404
+
///
405
+
/// # Mention Resolution
406
+
///
407
+
/// Mentions are only included if the handle resolves to a valid DID:
408
+
///
409
+
/// ```ignore
410
+
/// let text = "@valid.handle.com and @invalid.handle.xyz";
411
+
/// let facets = parse_facets_from_text(text, &resolver, &limits).await;
412
+
///
413
+
/// // Only @valid.handle.com appears as a facet if @invalid.handle.xyz
414
+
/// // cannot be resolved to a DID
415
+
/// ```
416
+
pub async fn parse_facets_from_text(
417
+
text: &str,
418
+
identity_resolver: &dyn IdentityResolver,
419
+
limits: &FacetLimits,
420
+
) -> Option<Vec<Facet>> {
421
+
let mut facets = Vec::new();
422
+
423
+
// Parse mentions (already limited by mentions_max in parse_mentions)
424
+
let mention_facets = parse_mentions(text, identity_resolver, limits).await;
425
+
facets.extend(mention_facets);
426
+
427
+
// Parse URLs (limited by links_max)
428
+
let url_facets = parse_urls(text);
429
+
for (idx, facet) in url_facets.into_iter().enumerate() {
430
+
if idx >= limits.links_max {
431
+
break;
432
+
}
433
+
facets.push(facet);
434
+
}
435
+
436
+
// Parse hashtags (limited by tags_max)
437
+
let tag_facets = parse_tags(text);
438
+
for (idx, facet) in tag_facets.into_iter().enumerate() {
439
+
if idx >= limits.tags_max {
440
+
break;
441
+
}
442
+
facets.push(facet);
443
+
}
444
+
445
+
// Apply global facet limit (truncate if exceeds max)
446
+
if facets.len() > limits.max {
447
+
facets.truncate(limits.max);
448
+
}
449
+
450
+
// Only return facets if we found any
451
+
if !facets.is_empty() {
452
+
Some(facets)
453
+
} else {
454
+
None
455
+
}
456
+
}
457
+
458
+
#[cfg(test)]
459
+
mod tests {
460
+
use async_trait::async_trait;
461
+
use atproto_identity::model::Document;
462
+
use std::collections::HashMap;
463
+
464
+
use super::*;
465
+
466
+
/// Mock identity resolver for testing
467
+
struct MockIdentityResolver {
468
+
handles_to_dids: HashMap<String, String>,
469
+
}
470
+
471
+
impl MockIdentityResolver {
472
+
fn new() -> Self {
473
+
let mut handles_to_dids = HashMap::new();
474
+
handles_to_dids.insert(
475
+
"alice.bsky.social".to_string(),
476
+
"did:plc:alice123".to_string(),
477
+
);
478
+
handles_to_dids.insert(
479
+
"at://alice.bsky.social".to_string(),
480
+
"did:plc:alice123".to_string(),
481
+
);
482
+
Self { handles_to_dids }
483
+
}
484
+
485
+
fn add_identity(&mut self, handle: &str, did: &str) {
486
+
self.handles_to_dids
487
+
.insert(handle.to_string(), did.to_string());
488
+
self.handles_to_dids
489
+
.insert(format!("at://{}", handle), did.to_string());
490
+
}
491
+
}
492
+
493
+
#[async_trait]
494
+
impl IdentityResolver for MockIdentityResolver {
495
+
async fn resolve(&self, handle: &str) -> anyhow::Result<Document> {
496
+
let handle_key = handle.to_string();
497
+
498
+
if let Some(did) = self.handles_to_dids.get(&handle_key) {
499
+
Ok(Document {
500
+
context: vec![],
501
+
id: did.clone(),
502
+
also_known_as: vec![format!("at://{}", handle_key.trim_start_matches("at://"))],
503
+
verification_method: vec![],
504
+
service: vec![],
505
+
extra: HashMap::new(),
506
+
})
507
+
} else {
508
+
Err(anyhow::anyhow!("Handle not found"))
509
+
}
510
+
}
511
+
}
512
+
513
+
#[tokio::test]
514
+
async fn test_parse_facets_from_text_comprehensive() {
515
+
let mut resolver = MockIdentityResolver::new();
516
+
resolver.add_identity("bob.test.com", "did:plc:bob456");
517
+
518
+
let limits = FacetLimits::default();
519
+
let text = "Join @alice.bsky.social and @bob.test.com at https://example.com #rust #golang";
520
+
let facets = parse_facets_from_text(text, &resolver, &limits).await;
521
+
522
+
assert!(facets.is_some());
523
+
let facets = facets.unwrap();
524
+
assert_eq!(facets.len(), 5); // 2 mentions, 1 URL, 2 hashtags
525
+
526
+
// Check first mention
527
+
assert_eq!(facets[0].index.byte_start, 5);
528
+
assert_eq!(facets[0].index.byte_end, 23);
529
+
if let FacetFeature::Mention(ref mention) = facets[0].features[0] {
530
+
assert_eq!(mention.did, "did:plc:alice123");
531
+
} else {
532
+
panic!("Expected Mention feature");
533
+
}
534
+
535
+
// Check second mention
536
+
assert_eq!(facets[1].index.byte_start, 28);
537
+
assert_eq!(facets[1].index.byte_end, 41);
538
+
if let FacetFeature::Mention(mention) = &facets[1].features[0] {
539
+
assert_eq!(mention.did, "did:plc:bob456");
540
+
} else {
541
+
panic!("Expected Mention feature");
542
+
}
543
+
544
+
// Check URL
545
+
assert_eq!(facets[2].index.byte_start, 45);
546
+
assert_eq!(facets[2].index.byte_end, 64);
547
+
if let FacetFeature::Link(link) = &facets[2].features[0] {
548
+
assert_eq!(link.uri, "https://example.com");
549
+
} else {
550
+
panic!("Expected Link feature");
551
+
}
552
+
553
+
// Check first hashtag
554
+
assert_eq!(facets[3].index.byte_start, 65);
555
+
assert_eq!(facets[3].index.byte_end, 70);
556
+
if let FacetFeature::Tag(tag) = &facets[3].features[0] {
557
+
assert_eq!(tag.tag, "rust");
558
+
} else {
559
+
panic!("Expected Tag feature");
560
+
}
561
+
562
+
// Check second hashtag
563
+
assert_eq!(facets[4].index.byte_start, 71);
564
+
assert_eq!(facets[4].index.byte_end, 78);
565
+
if let FacetFeature::Tag(tag) = &facets[4].features[0] {
566
+
assert_eq!(tag.tag, "golang");
567
+
} else {
568
+
panic!("Expected Tag feature");
569
+
}
570
+
}
571
+
572
+
#[tokio::test]
573
+
async fn test_parse_facets_from_text_with_unresolvable_mention() {
574
+
let resolver = MockIdentityResolver::new();
575
+
let limits = FacetLimits::default();
576
+
577
+
// Only alice.bsky.social is in the resolver, not unknown.handle.com
578
+
let text = "Contact @unknown.handle.com for details #rust";
579
+
let facets = parse_facets_from_text(text, &resolver, &limits).await;
580
+
581
+
assert!(facets.is_some());
582
+
let facets = facets.unwrap();
583
+
// Should only have 1 facet (the hashtag) since the mention couldn't be resolved
584
+
assert_eq!(facets.len(), 1);
585
+
586
+
// Check that it's the hashtag facet
587
+
if let FacetFeature::Tag(tag) = &facets[0].features[0] {
588
+
assert_eq!(tag.tag, "rust");
589
+
} else {
590
+
panic!("Expected Tag feature");
591
+
}
592
+
}
593
+
594
+
#[tokio::test]
595
+
async fn test_parse_facets_from_text_empty() {
596
+
let resolver = MockIdentityResolver::new();
597
+
let limits = FacetLimits::default();
598
+
let text = "No mentions, URLs, or hashtags here";
599
+
let facets = parse_facets_from_text(text, &resolver, &limits).await;
600
+
assert!(facets.is_none());
601
+
}
602
+
603
+
#[tokio::test]
604
+
async fn test_parse_facets_from_text_url_with_at_mention() {
605
+
let resolver = MockIdentityResolver::new();
606
+
let limits = FacetLimits::default();
607
+
608
+
// URLs with @ should not create mention facets
609
+
let text = "Tangled https://tangled.org/@smokesignal.events";
610
+
let facets = parse_facets_from_text(text, &resolver, &limits).await;
611
+
612
+
assert!(facets.is_some());
613
+
let facets = facets.unwrap();
614
+
615
+
// Should have exactly 1 facet (the URL), not 2 (URL + mention)
616
+
assert_eq!(
617
+
facets.len(),
618
+
1,
619
+
"Expected 1 facet (URL only), got {}",
620
+
facets.len()
621
+
);
622
+
623
+
// Verify it's a link facet, not a mention
624
+
if let FacetFeature::Link(link) = &facets[0].features[0] {
625
+
assert_eq!(link.uri, "https://tangled.org/@smokesignal.events");
626
+
} else {
627
+
panic!("Expected Link feature, got Mention or Tag instead");
628
+
}
629
+
}
630
+
631
+
#[tokio::test]
632
+
async fn test_parse_facets_with_mention_limit() {
633
+
let mut resolver = MockIdentityResolver::new();
634
+
resolver.add_identity("bob.test.com", "did:plc:bob456");
635
+
resolver.add_identity("charlie.test.com", "did:plc:charlie789");
636
+
637
+
// Limit to 2 mentions
638
+
let limits = FacetLimits {
639
+
mentions_max: 2,
640
+
tags_max: 5,
641
+
links_max: 5,
642
+
max: 10,
643
+
};
644
+
645
+
let text = "Join @alice.bsky.social @bob.test.com @charlie.test.com";
646
+
let facets = parse_facets_from_text(text, &resolver, &limits).await;
647
+
648
+
assert!(facets.is_some());
649
+
let facets = facets.unwrap();
650
+
// Should only have 2 mentions (alice and bob), charlie should be skipped
651
+
assert_eq!(facets.len(), 2);
652
+
653
+
// Verify they're both mentions
654
+
for facet in &facets {
655
+
assert!(matches!(facet.features[0], FacetFeature::Mention(_)));
656
+
}
657
+
}
658
+
659
+
#[tokio::test]
660
+
async fn test_parse_facets_with_global_limit() {
661
+
let mut resolver = MockIdentityResolver::new();
662
+
resolver.add_identity("bob.test.com", "did:plc:bob456");
663
+
664
+
// Very restrictive global limit
665
+
let limits = FacetLimits {
666
+
mentions_max: 5,
667
+
tags_max: 5,
668
+
links_max: 5,
669
+
max: 3, // Only allow 3 total facets
670
+
};
671
+
672
+
let text =
673
+
"Join @alice.bsky.social @bob.test.com at https://example.com #rust #golang #python";
674
+
let facets = parse_facets_from_text(text, &resolver, &limits).await;
675
+
676
+
assert!(facets.is_some());
677
+
let facets = facets.unwrap();
678
+
// Should be truncated to 3 facets total
679
+
assert_eq!(facets.len(), 3);
680
+
}
681
+
682
+
#[test]
683
+
fn test_parse_urls_multiple_links() {
684
+
let text = "IETF124 is happening in Montreal, Nov 1st to 7th https://www.ietf.org/meeting/124/\n\nWe're confirmed for two days of ATProto community sessions on Monday, Nov 3rd & Tuesday, Mov 4th at ECTO Co-Op. Many of us will also be participating in the free-to-attend IETF hackathon on Sunday, Nov 2nd.\n\nLatest updates and attendees in the forum https://discourse.atprotocol.community/t/update-on-timing-and-plan-for-montreal/164";
685
+
686
+
let facets = parse_urls(text);
687
+
688
+
// Should find both URLs
689
+
assert_eq!(
690
+
facets.len(),
691
+
2,
692
+
"Expected 2 URLs but found {}",
693
+
facets.len()
694
+
);
695
+
696
+
// Check first URL
697
+
if let Some(FacetFeature::Link(link)) = facets[0].features.first() {
698
+
assert_eq!(link.uri, "https://www.ietf.org/meeting/124/");
699
+
} else {
700
+
panic!("Expected Link feature");
701
+
}
702
+
703
+
// Check second URL
704
+
if let Some(FacetFeature::Link(link)) = facets[1].features.first() {
705
+
assert_eq!(
706
+
link.uri,
707
+
"https://discourse.atprotocol.community/t/update-on-timing-and-plan-for-montreal/164"
708
+
);
709
+
} else {
710
+
panic!("Expected Link feature");
711
+
}
712
+
}
713
+
714
+
#[test]
715
+
fn test_parse_urls_with_html_entity() {
716
+
// Test with the HTML entity &amp; in the text
717
+
let text = "IETF124 is happening in Montreal, Nov 1st to 7th https://www.ietf.org/meeting/124/\n\nWe're confirmed for two days of ATProto community sessions on Monday, Nov 3rd &amp; Tuesday, Mov 4th at ECTO Co-Op. Many of us will also be participating in the free-to-attend IETF hackathon on Sunday, Nov 2nd.\n\nLatest updates and attendees in the forum https://discourse.atprotocol.community/t/update-on-timing-and-plan-for-montreal/164";
718
+
719
+
let facets = parse_urls(text);
720
+
721
+
// Should find both URLs
722
+
assert_eq!(
723
+
facets.len(),
724
+
2,
725
+
"Expected 2 URLs but found {}",
726
+
facets.len()
727
+
);
728
+
729
+
// Check first URL
730
+
if let Some(FacetFeature::Link(link)) = facets[0].features.first() {
731
+
assert_eq!(link.uri, "https://www.ietf.org/meeting/124/");
732
+
} else {
733
+
panic!("Expected Link feature");
734
+
}
735
+
736
+
// Check second URL
737
+
if let Some(FacetFeature::Link(link)) = facets[1].features.first() {
738
+
assert_eq!(
739
+
link.uri,
740
+
"https://discourse.atprotocol.community/t/update-on-timing-and-plan-for-montreal/164"
741
+
);
742
+
} else {
743
+
panic!("Expected Link feature");
744
+
}
745
+
}
746
+
747
+
#[test]
748
+
fn test_byte_offset_with_html_entities() {
749
+
// This test demonstrates that HTML entity escaping shifts byte positions.
750
+
// The byte positions shift:
751
+
// In original: '&' is at byte 8 (1 byte)
752
+
// In escaped: '&amp;' starts at byte 8 (5 bytes)
753
+
// This causes facet byte offsets to be misaligned if text is escaped before rendering.
754
+
755
+
// If we have a URL after the ampersand in the original:
756
+
let original_with_url = "Nov 3rd & Tuesday https://example.com";
757
+
let escaped_with_url = "Nov 3rd &amp; Tuesday https://example.com";
758
+
759
+
// Parse URLs from both versions
760
+
let original_facets = parse_urls(original_with_url);
761
+
let escaped_facets = parse_urls(escaped_with_url);
762
+
763
+
// Both should find the URL, but at different byte positions
764
+
assert_eq!(original_facets.len(), 1);
765
+
assert_eq!(escaped_facets.len(), 1);
766
+
767
+
// The byte positions will be different
768
+
assert_eq!(original_facets[0].index.byte_start, 18); // After "Nov 3rd & Tuesday "
769
+
assert_eq!(escaped_facets[0].index.byte_start, 22); // After "Nov 3rd &amp; Tuesday " (4 extra bytes for &amp;)
770
+
}
771
+
772
+
#[test]
773
+
fn test_parse_urls_from_atproto_record_text() {
774
+
// Test parsing URLs from real AT Protocol record description text.
775
+
// This demonstrates the correct byte positions that should be used for facets.
776
+
let text = "Dev, Power Users, and Generally inquisitive folks get a completely unprofessionally amateur interview. Just a yap sesh where chat is part of the call!\n\nโจthe danielโจ & I will be on a Zoom call and I will stream out to https://stream.place/psingletary.com\n\nSubscribe to the publications! https://atprotocalls.leaflet.pub/";
777
+
778
+
let facets = parse_urls(text);
779
+
780
+
assert_eq!(facets.len(), 2, "Should find 2 URLs");
781
+
782
+
// First URL: https://stream.place/psingletary.com
783
+
assert_eq!(facets[0].index.byte_start, 221);
784
+
assert_eq!(facets[0].index.byte_end, 257);
785
+
if let Some(FacetFeature::Link(link)) = facets[0].features.first() {
786
+
assert_eq!(link.uri, "https://stream.place/psingletary.com");
787
+
}
788
+
789
+
// Second URL: https://atprotocalls.leaflet.pub/
790
+
assert_eq!(facets[1].index.byte_start, 290);
791
+
assert_eq!(facets[1].index.byte_end, 323);
792
+
if let Some(FacetFeature::Link(link)) = facets[1].features.first() {
793
+
assert_eq!(link.uri, "https://atprotocalls.leaflet.pub/");
794
+
}
795
+
796
+
// Verify the byte slices match the expected text
797
+
let text_bytes = text.as_bytes();
798
+
assert_eq!(
799
+
std::str::from_utf8(&text_bytes[221..257]).unwrap(),
800
+
"https://stream.place/psingletary.com"
801
+
);
802
+
assert_eq!(
803
+
std::str::from_utf8(&text_bytes[290..323]).unwrap(),
804
+
"https://atprotocalls.leaflet.pub/"
805
+
);
806
+
}
807
+
808
+
#[tokio::test]
809
+
async fn test_parse_mentions_basic() {
810
+
let resolver = MockIdentityResolver::new();
811
+
let limits = FacetLimits::default();
812
+
let text = "Hello @alice.bsky.social!";
813
+
let facets = parse_mentions(text, &resolver, &limits).await;
814
+
815
+
assert_eq!(facets.len(), 1);
816
+
assert_eq!(facets[0].index.byte_start, 6);
817
+
assert_eq!(facets[0].index.byte_end, 24);
818
+
if let Some(FacetFeature::Mention(mention)) = facets[0].features.first() {
819
+
assert_eq!(mention.did, "did:plc:alice123");
820
+
} else {
821
+
panic!("Expected Mention feature");
822
+
}
823
+
}
824
+
825
+
#[tokio::test]
826
+
async fn test_parse_mentions_multiple() {
827
+
let mut resolver = MockIdentityResolver::new();
828
+
resolver.add_identity("bob.example.com", "did:plc:bob456");
829
+
let limits = FacetLimits::default();
830
+
let text = "CC @alice.bsky.social and @bob.example.com";
831
+
let facets = parse_mentions(text, &resolver, &limits).await;
832
+
833
+
assert_eq!(facets.len(), 2);
834
+
if let Some(FacetFeature::Mention(mention)) = facets[0].features.first() {
835
+
assert_eq!(mention.did, "did:plc:alice123");
836
+
}
837
+
if let Some(FacetFeature::Mention(mention)) = facets[1].features.first() {
838
+
assert_eq!(mention.did, "did:plc:bob456");
839
+
}
840
+
}
841
+
842
+
#[tokio::test]
843
+
async fn test_parse_mentions_unresolvable() {
844
+
let resolver = MockIdentityResolver::new();
845
+
let limits = FacetLimits::default();
846
+
// unknown.handle.com is not in the resolver
847
+
let text = "Hello @unknown.handle.com!";
848
+
let facets = parse_mentions(text, &resolver, &limits).await;
849
+
850
+
// Should be empty since the handle can't be resolved
851
+
assert_eq!(facets.len(), 0);
852
+
}
853
+
854
+
#[tokio::test]
855
+
async fn test_parse_mentions_in_url_excluded() {
856
+
let resolver = MockIdentityResolver::new();
857
+
let limits = FacetLimits::default();
858
+
// The @smokesignal.events is inside a URL and should not be parsed as a mention
859
+
let text = "Check https://tangled.org/@smokesignal.events";
860
+
let facets = parse_mentions(text, &resolver, &limits).await;
861
+
862
+
// Should be empty since the mention is inside a URL
863
+
assert_eq!(facets.len(), 0);
864
+
}
865
+
866
+
#[test]
867
+
fn test_parse_tags_basic() {
868
+
let text = "Learning #rust today!";
869
+
let facets = parse_tags(text);
870
+
871
+
assert_eq!(facets.len(), 1);
872
+
assert_eq!(facets[0].index.byte_start, 9);
873
+
assert_eq!(facets[0].index.byte_end, 14);
874
+
if let Some(FacetFeature::Tag(tag)) = facets[0].features.first() {
875
+
assert_eq!(tag.tag, "rust");
876
+
} else {
877
+
panic!("Expected Tag feature");
878
+
}
879
+
}
880
+
881
+
#[test]
882
+
fn test_parse_tags_multiple() {
883
+
let text = "#rust #golang #python are great!";
884
+
let facets = parse_tags(text);
885
+
886
+
assert_eq!(facets.len(), 3);
887
+
if let Some(FacetFeature::Tag(tag)) = facets[0].features.first() {
888
+
assert_eq!(tag.tag, "rust");
889
+
}
890
+
if let Some(FacetFeature::Tag(tag)) = facets[1].features.first() {
891
+
assert_eq!(tag.tag, "golang");
892
+
}
893
+
if let Some(FacetFeature::Tag(tag)) = facets[2].features.first() {
894
+
assert_eq!(tag.tag, "python");
895
+
}
896
+
}
897
+
898
+
#[test]
899
+
fn test_parse_tags_excludes_numeric() {
900
+
let text = "Item #42 is special #test123";
901
+
let facets = parse_tags(text);
902
+
903
+
// #42 should be excluded (purely numeric), #test123 should be included
904
+
assert_eq!(facets.len(), 1);
905
+
if let Some(FacetFeature::Tag(tag)) = facets[0].features.first() {
906
+
assert_eq!(tag.tag, "test123");
907
+
}
908
+
}
909
+
910
+
#[test]
911
+
fn test_parse_urls_basic() {
912
+
let text = "Visit https://example.com today!";
913
+
let facets = parse_urls(text);
914
+
915
+
assert_eq!(facets.len(), 1);
916
+
assert_eq!(facets[0].index.byte_start, 6);
917
+
assert_eq!(facets[0].index.byte_end, 25);
918
+
if let Some(FacetFeature::Link(link)) = facets[0].features.first() {
919
+
assert_eq!(link.uri, "https://example.com");
920
+
}
921
+
}
922
+
923
+
#[test]
924
+
fn test_parse_urls_with_path() {
925
+
let text = "Check https://example.com/path/to/page?query=1#section";
926
+
let facets = parse_urls(text);
927
+
928
+
assert_eq!(facets.len(), 1);
929
+
if let Some(FacetFeature::Link(link)) = facets[0].features.first() {
930
+
assert_eq!(link.uri, "https://example.com/path/to/page?query=1#section");
931
+
}
932
+
}
933
+
934
+
#[test]
935
+
fn test_facet_limits_default() {
936
+
let limits = FacetLimits::default();
937
+
assert_eq!(limits.mentions_max, 5);
938
+
assert_eq!(limits.tags_max, 5);
939
+
assert_eq!(limits.links_max, 5);
940
+
assert_eq!(limits.max, 10);
941
+
}
942
+
}
+50
crates/atproto-extras/src/lib.rs
···
1
+
//! Extra utilities for AT Protocol applications.
2
+
//!
3
+
//! This crate provides additional utilities that complement the core AT Protocol
4
+
//! identity and record crates. Currently, it focuses on rich text facet parsing.
5
+
//!
6
+
//! ## Features
7
+
//!
8
+
//! - **Facet Parsing**: Extract mentions, URLs, and hashtags from plain text
9
+
//! with correct UTF-8 byte offset calculation
10
+
//! - **Identity Integration**: Resolve mention handles to DIDs during parsing
11
+
//!
12
+
//! ## Example
13
+
//!
14
+
//! ```ignore
15
+
//! use atproto_extras::{parse_facets_from_text, FacetLimits};
16
+
//!
17
+
//! // Parse facets from text (requires an IdentityResolver)
18
+
//! let text = "Hello @alice.bsky.social! Check out https://example.com #rust";
19
+
//! let limits = FacetLimits::default();
20
+
//! let facets = parse_facets_from_text(text, &resolver, &limits).await;
21
+
//! ```
22
+
//!
23
+
//! ## Byte Offset Calculation
24
+
//!
25
+
//! This implementation correctly uses UTF-8 byte offsets as required by AT Protocol.
26
+
//! The facets use "inclusive start and exclusive end" byte ranges. All parsing is done
27
+
//! using `regex::bytes::Regex` which operates on byte slices and returns byte positions,
28
+
//! ensuring correct handling of multi-byte UTF-8 characters (emojis, CJK, accented chars).
29
+
30
+
#![forbid(unsafe_code)]
31
+
#![warn(missing_docs)]
32
+
33
+
/// Rich text facet parsing for AT Protocol.
34
+
///
35
+
/// This module provides functionality for extracting semantic annotations (facets)
36
+
/// from plain text. Facets include:
37
+
///
38
+
/// - **Mentions**: User handles prefixed with `@` (e.g., `@alice.bsky.social`)
39
+
/// - **Links**: HTTP/HTTPS URLs
40
+
/// - **Tags**: Hashtags prefixed with `#` or `＃` (e.g., `#rust`)
41
+
///
42
+
/// ## Byte Offsets
43
+
///
44
+
/// All facet indices use UTF-8 byte offsets, not character indices. This is
45
+
/// critical for correct handling of multi-byte characters like emojis or
46
+
/// non-ASCII text.
47
+
pub mod facets;
48
+
49
+
/// Re-export commonly used types for convenience.
50
+
pub use facets::{FacetLimits, parse_facets_from_text, parse_mentions, parse_tags, parse_urls};
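The byte-offset rules called out in the module docs above are the part most easily broken by multi-byte text. A standard-library-only sketch (no atproto-extras types assumed) of why facet indices must be UTF-8 byte offsets with an inclusive start and exclusive end, rather than char counts:

```rust
fn main() {
    // "✨" is one char but three UTF-8 bytes, so byte and char offsets diverge.
    let text = "✨ see https://example.com";
    let url = "https://example.com";

    // `str::find` returns a byte offset -- the form facet indices use.
    let byte_start = text.find(url).expect("url present");
    let byte_end = byte_start + url.len();
    assert_eq!((byte_start, byte_end), (8, 27));
    assert_eq!(&text[byte_start..byte_end], url);

    // The char-based position is 6, not 8; slicing with it would point at
    // the wrong range (or panic on a non-boundary index).
    let char_position = text.char_indices().position(|(i, _)| i == byte_start);
    assert_eq!(char_position, Some(6));
}
```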
+1
crates/atproto-identity/Cargo.toml
+70
-21
crates/atproto-identity/src/key.rs
···
47
47
//! }
48
48
//! ```
49
49
50
-
use anyhow::Result;
50
+
use anyhow::{Context, Result, anyhow};
51
51
use ecdsa::signature::Signer;
52
52
use elliptic_curve::JwkEcKey;
53
53
use elliptic_curve::sec1::ToEncodedPoint;
54
54
55
+
use crate::model::VerificationMethod;
56
+
use crate::traits::IdentityResolver;
57
+
58
+
pub use crate::traits::KeyResolver;
59
+
use std::sync::Arc;
60
+
55
61
use crate::errors::KeyError;
56
62
57
63
#[cfg(feature = "zeroize")]
58
64
use zeroize::{Zeroize, ZeroizeOnDrop};
59
65
60
66
/// Cryptographic key types supported for AT Protocol identity.
61
-
#[derive(Clone, PartialEq)]
62
-
#[cfg_attr(debug_assertions, derive(Debug))]
67
+
#[derive(Clone, PartialEq, Debug)]
63
68
#[cfg_attr(feature = "zeroize", derive(Zeroize, ZeroizeOnDrop))]
64
69
pub enum KeyType {
65
70
/// A p256 (P-256 / secp256r1 / ES256) public key.
···
160
165
// Add DID key prefix
161
166
write!(f, "did:key:{}", multibase_encoded)
162
167
}
163
-
}
164
-
165
-
/// Trait for providing cryptographic keys by identifier.
166
-
///
167
-
/// This trait defines the interface for key providers that can retrieve private keys
168
-
/// by their identifier. Implementations must be thread-safe to support concurrent access.
169
-
#[async_trait::async_trait]
170
-
pub trait KeyProvider: Send + Sync {
171
-
/// Retrieves a private key by its identifier.
172
-
///
173
-
/// # Arguments
174
-
/// * `key_id` - The identifier of the key to retrieve
175
-
///
176
-
/// # Returns
177
-
/// * `Ok(Some(KeyData))` - If the key was found and successfully retrieved
178
-
/// * `Ok(None)` - If no key exists for the given identifier
179
-
/// * `Err(anyhow::Error)` - If an error occurred during key retrieval
180
-
async fn get_private_key_by_id(&self, key_id: &str) -> Result<Option<KeyData>>;
181
168
}
182
169
183
170
/// DID key method prefix.
···
362
349
.map_err(|error| KeyError::ECDSAError { error })?;
363
350
Ok(signature.to_vec())
364
351
}
352
+
}
353
+
}
354
+
355
+
/// Key resolver implementation that fetches DID documents using an [`IdentityResolver`].
356
+
#[derive(Clone)]
357
+
pub struct IdentityDocumentKeyResolver {
358
+
identity_resolver: Arc<dyn IdentityResolver>,
359
+
}
360
+
361
+
impl IdentityDocumentKeyResolver {
362
+
/// Creates a new key resolver backed by an [`IdentityResolver`].
363
+
pub fn new(identity_resolver: Arc<dyn IdentityResolver>) -> Self {
364
+
Self { identity_resolver }
365
+
}
366
+
}
367
+
368
+
#[async_trait::async_trait]
369
+
impl KeyResolver for IdentityDocumentKeyResolver {
370
+
async fn resolve(&self, key: &str) -> Result<KeyData> {
371
+
if let Some(did_key) = key.split('#').next() {
372
+
if let Ok(key_data) = identify_key(did_key) {
373
+
return Ok(key_data);
374
+
}
375
+
} else if let Ok(key_data) = identify_key(key) {
376
+
return Ok(key_data);
377
+
}
378
+
379
+
let (did, fragment) = key
380
+
.split_once('#')
381
+
.context("Key reference must contain a DID fragment (e.g., did:example#key)")?;
382
+
383
+
if did.is_empty() || fragment.is_empty() {
384
+
return Err(anyhow!(
385
+
"Key reference must include both DID and fragment (received `{key}`)"
386
+
));
387
+
}
388
+
389
+
let document = self.identity_resolver.resolve(did).await?;
390
+
let fragment_with_hash = format!("#{fragment}");
391
+
392
+
let public_key_multibase = document
393
+
.verification_method
394
+
.iter()
395
+
.find_map(|method| match method {
396
+
VerificationMethod::Multikey {
397
+
id,
398
+
public_key_multibase,
399
+
..
400
+
} if id == key || *id == fragment_with_hash => Some(public_key_multibase.clone()),
401
+
_ => None,
402
+
})
403
+
.context(format!(
404
+
"Verification method `{key}` not found in DID document `{did}`"
405
+
))?;
406
+
407
+
let full_key = if public_key_multibase.starts_with("did:key:") {
408
+
public_key_multibase
409
+
} else {
410
+
format!("did:key:{}", public_key_multibase)
411
+
};
412
+
413
+
identify_key(&full_key).context("Failed to parse key data from verification method")
365
414
}
366
415
}
367
416
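For orientation on the new `IdentityDocumentKeyResolver` above: a literal `did:key` reference is decoded directly, while a `did:…#fragment` reference triggers a DID document fetch and a search of its `Multikey` verification methods. A hedged sketch of a call site; the `resolver` argument stands in for whatever `IdentityResolver` implementation an application already has (the `resolve.rs` tests further down use a stub for the same purpose):

```rust
use std::sync::Arc;

use atproto_identity::key::{IdentityDocumentKeyResolver, KeyResolver};
use atproto_identity::traits::IdentityResolver;

// Sketch only: the resolver implementation and the did:web reference are
// assumptions of this example, not something key.rs prescribes.
async fn lookup_keys(resolver: Arc<dyn IdentityResolver>) -> anyhow::Result<()> {
    let keys = IdentityDocumentKeyResolver::new(resolver);

    // Literal did:key references decode without any document fetch.
    let direct = keys
        .resolve("did:key:zDnaezRmyM3NKx9NCphGiDFNBEMyR2sTZhhMGTseXCU2iXn53")
        .await?;

    // DID-plus-fragment references resolve the DID document and look up the
    // matching Multikey verification method.
    let from_document = keys.resolve("did:web:example.com#atproto").await?;

    let _ = (direct, from_document);
    Ok(())
}
```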
+1
-1
crates/atproto-identity/src/lib.rs
+19
-1
crates/atproto-identity/src/model.rs
···
70
70
/// The DID identifier (e.g., "did:plc:abc123").
71
71
pub id: String,
72
72
/// Alternative identifiers like handles and domains.
73
+
#[serde(default)]
73
74
pub also_known_as: Vec<String>,
74
75
/// Available services for this identity.
76
+
#[serde(default)]
75
77
pub service: Vec<Service>,
76
78
77
79
/// Cryptographic verification methods.
78
-
#[serde(alias = "verificationMethod")]
80
+
#[serde(alias = "verificationMethod", default)]
79
81
pub verification_method: Vec<VerificationMethod>,
80
82
81
83
/// Additional document properties not explicitly defined.
···
402
404
let document = document.unwrap();
403
405
assert_eq!(document.id, "did:plc:cbkjy5n7bk3ax2wplmtjofq2");
404
406
}
407
+
}
408
+
409
+
#[test]
410
+
fn test_deserialize_service_did_document() {
411
+
// DID document from api.bsky.app - a service DID without alsoKnownAs
412
+
let document = serde_json::from_str::<Document>(
413
+
r##"{"@context":["https://www.w3.org/ns/did/v1","https://w3id.org/security/multikey/v1"],"id":"did:web:api.bsky.app","verificationMethod":[{"id":"did:web:api.bsky.app#atproto","type":"Multikey","controller":"did:web:api.bsky.app","publicKeyMultibase":"zQ3shpRzb2NDriwCSSsce6EqGxG23kVktHZc57C3NEcuNy1jg"}],"service":[{"id":"#bsky_notif","type":"BskyNotificationService","serviceEndpoint":"https://api.bsky.app"},{"id":"#bsky_appview","type":"BskyAppView","serviceEndpoint":"https://api.bsky.app"}]}"##,
414
+
);
415
+
assert!(document.is_ok(), "Failed to parse: {:?}", document.err());
416
+
417
+
let document = document.unwrap();
418
+
assert_eq!(document.id, "did:web:api.bsky.app");
419
+
assert!(document.also_known_as.is_empty());
420
+
assert_eq!(document.service.len(), 2);
421
+
assert_eq!(document.service[0].id, "#bsky_notif");
422
+
assert_eq!(document.service[1].id, "#bsky_appview");
405
423
}
406
424
}
+95
-29
crates/atproto-identity/src/resolve.rs
···
32
32
use crate::validation::{is_valid_did_method_plc, is_valid_handle};
33
33
use crate::web::query as web_query;
34
34
35
-
/// Trait for AT Protocol identity resolution.
36
-
///
37
-
/// Implementations must be thread-safe (Send + Sync) and usable in async environments.
38
-
/// This trait provides the core functionality for resolving AT Protocol subjects
39
-
/// (handles or DIDs) to their corresponding DID documents.
40
-
#[async_trait::async_trait]
41
-
pub trait IdentityResolver: Send + Sync {
42
-
/// Resolves an AT Protocol subject to its DID document.
43
-
///
44
-
/// Takes a handle or DID, resolves it to a canonical DID, then retrieves
45
-
/// the corresponding DID document from the appropriate source (PLC directory or web).
46
-
///
47
-
/// # Arguments
48
-
/// * `subject` - The AT Protocol handle or DID to resolve
49
-
///
50
-
/// # Returns
51
-
/// * `Ok(Document)` - The resolved DID document
52
-
/// * `Err(anyhow::Error)` - Resolution error with detailed context
53
-
async fn resolve(&self, subject: &str) -> Result<Document>;
54
-
}
55
-
56
-
/// Trait for DNS resolution operations.
57
-
/// Provides async DNS TXT record lookups for handle resolution.
58
-
#[async_trait::async_trait]
59
-
pub trait DnsResolver: Send + Sync {
60
-
/// Resolves TXT records for a given domain name.
61
-
/// Returns a vector of strings representing the TXT record values.
62
-
async fn resolve_txt(&self, domain: &str) -> Result<Vec<String>, ResolveError>;
63
-
}
35
+
pub use crate::traits::{DnsResolver, IdentityResolver};
64
36
65
37
/// Hickory DNS implementation of the DnsResolver trait.
66
38
/// Wraps hickory_resolver::TokioResolver for TXT record resolution.
···
196
168
is_valid_handle(trimmed)
197
169
.map(InputType::Handle)
198
170
.ok_or(ResolveError::InvalidInput)
171
+
}
172
+
}
173
+
174
+
#[cfg(test)]
175
+
mod tests {
176
+
use super::*;
177
+
use crate::key::{
178
+
IdentityDocumentKeyResolver, KeyResolver, KeyType, generate_key, identify_key, to_public,
179
+
};
180
+
use crate::model::{DocumentBuilder, VerificationMethod};
181
+
use std::collections::HashMap;
182
+
183
+
struct StubIdentityResolver {
184
+
expected: String,
185
+
document: Document,
186
+
}
187
+
188
+
#[async_trait::async_trait]
189
+
impl IdentityResolver for StubIdentityResolver {
190
+
async fn resolve(&self, subject: &str) -> Result<Document> {
191
+
if !self.expected.is_empty() {
192
+
assert_eq!(self.expected, subject);
193
+
}
194
+
Ok(self.document.clone())
195
+
}
196
+
}
197
+
198
+
#[tokio::test]
199
+
async fn resolves_direct_did_key() -> Result<()> {
200
+
let private_key = generate_key(KeyType::K256Private)?;
201
+
let public_key = to_public(&private_key)?;
202
+
let key_reference = format!("{}", &public_key);
203
+
204
+
let resolver = IdentityDocumentKeyResolver::new(Arc::new(StubIdentityResolver {
205
+
expected: String::new(),
206
+
document: Document::builder()
207
+
.id("did:plc:placeholder")
208
+
.build()
209
+
.unwrap(),
210
+
}));
211
+
212
+
let key_data = resolver.resolve(&key_reference).await?;
213
+
assert_eq!(key_data.bytes(), public_key.bytes());
214
+
Ok(())
215
+
}
216
+
217
+
#[tokio::test]
218
+
async fn resolves_literal_did_key_reference() -> Result<()> {
219
+
let resolver = IdentityDocumentKeyResolver::new(Arc::new(StubIdentityResolver {
220
+
expected: String::new(),
221
+
document: Document::builder()
222
+
.id("did:example:unused".to_string())
223
+
.build()
224
+
.unwrap(),
225
+
}));
226
+
227
+
let sample = "did:key:zDnaezRmyM3NKx9NCphGiDFNBEMyR2sTZhhMGTseXCU2iXn53";
228
+
let expected = identify_key(sample)?;
229
+
let resolved = resolver.resolve(sample).await?;
230
+
assert_eq!(resolved.bytes(), expected.bytes());
231
+
Ok(())
232
+
}
233
+
234
+
#[tokio::test]
235
+
async fn resolves_via_identity_document() -> Result<()> {
236
+
let private_key = generate_key(KeyType::P256Private)?;
237
+
let public_key = to_public(&private_key)?;
238
+
let public_key_multibase = format!("{}", &public_key)
239
+
.strip_prefix("did:key:")
240
+
.unwrap()
241
+
.to_string();
242
+
243
+
let did = "did:web:example.com";
244
+
let method_id = format!("{did}#atproto");
245
+
246
+
let document = DocumentBuilder::new()
247
+
.id(did.to_string())
248
+
.add_verification_method(VerificationMethod::Multikey {
249
+
id: method_id.clone(),
250
+
controller: did.to_string(),
251
+
public_key_multibase,
252
+
extra: HashMap::new(),
253
+
})
254
+
.build()
255
+
.unwrap();
256
+
257
+
let resolver = IdentityDocumentKeyResolver::new(Arc::new(StubIdentityResolver {
258
+
expected: did.to_string(),
259
+
document,
260
+
}));
261
+
262
+
let key_data = resolver.resolve(&method_id).await?;
263
+
assert_eq!(key_data.bytes(), public_key.bytes());
264
+
Ok(())
199
265
}
200
266
}
201
267
-212
crates/atproto-identity/src/storage.rs
···
1
-
//! DID document storage abstraction.
2
-
//!
3
-
//! Storage trait for DID document CRUD operations supporting multiple
4
-
//! backends (database, file system, memory) with consistent interface.
5
-
6
-
use anyhow::Result;
7
-
8
-
use crate::model::Document;
9
-
10
-
/// Trait for implementing DID document CRUD operations across different storage backends.
11
-
///
12
-
/// This trait provides an abstraction layer for storing and retrieving DID documents,
13
-
/// allowing different implementations for various storage systems such as databases, file systems,
14
-
/// in-memory stores, or cloud storage services.
15
-
///
16
-
/// All methods return `anyhow::Result` to allow implementations to use their own error types
17
-
/// while providing a consistent interface for callers. Implementations should handle their
18
-
/// specific error conditions and convert them to appropriate error messages.
19
-
///
20
-
/// ## Thread Safety
21
-
///
22
-
/// This trait requires implementations to be thread-safe (`Send + Sync`), meaning:
23
-
/// - `Send`: The storage implementation can be moved between threads
24
-
/// - `Sync`: The storage implementation can be safely accessed from multiple threads simultaneously
25
-
///
26
-
/// This is essential for async applications where the storage might be accessed from different
27
-
/// async tasks running on different threads. Implementations should use appropriate
28
-
/// synchronization primitives (like `Arc<Mutex<>>`, `RwLock`, or database connection pools)
29
-
/// to ensure thread safety.
30
-
///
31
-
/// ## Usage
32
-
///
33
-
/// Implementors of this trait can provide storage for AT Protocol DID documents in any backend:
34
-
///
35
-
/// ```rust,ignore
36
-
/// use atproto_identity::storage::DidDocumentStorage;
37
-
/// use atproto_identity::model::Document;
38
-
/// use anyhow::Result;
39
-
/// use std::sync::Arc;
40
-
/// use tokio::sync::RwLock;
41
-
/// use std::collections::HashMap;
42
-
///
43
-
/// // Thread-safe in-memory storage using Arc<RwLock<>>
44
-
/// #[derive(Clone)]
45
-
/// struct InMemoryStorage {
46
-
/// data: Arc<RwLock<HashMap<String, Document>>>, // DID -> Document mapping
47
-
/// }
48
-
///
49
-
/// #[async_trait::async_trait]
50
-
/// impl DidDocumentStorage for InMemoryStorage {
51
-
/// async fn get_document_by_did(&self, did: &str) -> Result<Option<Document>> {
52
-
/// let data = self.data.read().await;
53
-
/// Ok(data.get(did).cloned())
54
-
/// }
55
-
///
56
-
/// async fn store_document(&self, document: Document) -> Result<()> {
57
-
/// let mut data = self.data.write().await;
58
-
/// data.insert(document.id.clone(), document);
59
-
/// Ok(())
60
-
/// }
61
-
///
62
-
/// async fn delete_document_by_did(&self, did: &str) -> Result<()> {
63
-
/// let mut data = self.data.write().await;
64
-
/// data.remove(did);
65
-
/// Ok(())
66
-
/// }
67
-
/// }
68
-
///
69
-
/// // Database storage with thread-safe connection pool
70
-
/// struct DatabaseStorage {
71
-
/// pool: sqlx::Pool<sqlx::Postgres>, // Thread-safe connection pool
72
-
/// }
73
-
///
74
-
/// #[async_trait::async_trait]
75
-
/// impl DidDocumentStorage for DatabaseStorage {
76
-
/// async fn get_document_by_did(&self, did: &str) -> Result<Option<Document>> {
77
-
/// // Database connection pools are thread-safe
78
-
/// let row: Option<(serde_json::Value,)> = sqlx::query_as(
79
-
/// "SELECT document FROM did_documents WHERE did = $1"
80
-
/// )
81
-
/// .bind(did)
82
-
/// .fetch_optional(&self.pool)
83
-
/// .await?;
84
-
///
85
-
/// if let Some((doc_json,)) = row {
86
-
/// let document: Document = serde_json::from_value(doc_json)?;
87
-
/// Ok(Some(document))
88
-
/// } else {
89
-
/// Ok(None)
90
-
/// }
91
-
/// }
92
-
///
93
-
/// async fn store_document(&self, document: Document) -> Result<()> {
94
-
/// let doc_json = serde_json::to_value(&document)?;
95
-
/// sqlx::query("INSERT INTO did_documents (did, document) VALUES ($1, $2) ON CONFLICT (did) DO UPDATE SET document = $2")
96
-
/// .bind(&document.id)
97
-
/// .bind(doc_json)
98
-
/// .execute(&self.pool)
99
-
/// .await?;
100
-
/// Ok(())
101
-
/// }
102
-
///
103
-
/// async fn delete_document_by_did(&self, did: &str) -> Result<()> {
104
-
/// sqlx::query("DELETE FROM did_documents WHERE did = $1")
105
-
/// .bind(did)
106
-
/// .execute(&self.pool)
107
-
/// .await?;
108
-
/// Ok(())
109
-
/// }
110
-
/// }
111
-
/// ```
112
-
#[async_trait::async_trait]
113
-
pub trait DidDocumentStorage: Send + Sync {
114
-
/// Retrieves a DID document associated with the given DID.
115
-
///
116
-
/// This method looks up the complete DID document that is currently stored for the provided
117
-
/// DID (Decentralized Identifier). The document contains services, verification methods,
118
-
/// and other identity information for the DID.
119
-
///
120
-
/// # Arguments
121
-
/// * `did` - The DID (Decentralized Identifier) to look up. Should be in the format
122
-
/// `did:method:identifier` (e.g., "did:plc:bv6ggog3tya2z3vxsub7hnal")
123
-
///
124
-
/// # Returns
125
-
/// * `Ok(Some(document))` - If a document is found for the given DID
126
-
/// * `Ok(None)` - If no document is currently stored for the DID
127
-
/// * `Err(error)` - If an error occurs during retrieval (storage failure, invalid DID format, etc.)
128
-
///
129
-
/// # Examples
130
-
///
131
-
/// ```rust,ignore
132
-
/// let storage = MyStorage::new();
133
-
/// let document = storage.get_document_by_did("did:plc:bv6ggog3tya2z3vxsub7hnal").await?;
134
-
/// match document {
135
-
/// Some(doc) => {
136
-
/// println!("Found document for DID: {}", doc.id);
137
-
/// if let Some(handle) = doc.handles() {
138
-
/// println!("Primary handle: {}", handle);
139
-
/// }
140
-
/// },
141
-
/// None => println!("No document found for this DID"),
142
-
/// }
143
-
/// ```
144
-
async fn get_document_by_did(&self, did: &str) -> Result<Option<Document>>;
145
-
146
-
/// Stores or updates a DID document.
147
-
///
148
-
/// This method creates a new DID document entry or updates an existing one.
149
-
/// In the AT Protocol ecosystem, this operation typically occurs when a DID document
150
-
/// is resolved from the network, updated by the identity owner, or cached for performance.
151
-
///
152
-
/// Implementations should ensure that:
153
-
/// - The document's DID (`document.id`) is used as the key for storage
154
-
/// - The operation is atomic (either fully succeeds or fully fails)
155
-
/// - Any existing document for the same DID is properly replaced
156
-
/// - The complete document structure is preserved
157
-
///
158
-
/// # Arguments
159
-
/// * `document` - The complete DID document to store. The document's `id` field
160
-
/// will be used as the storage key.
161
-
///
162
-
/// # Returns
163
-
/// * `Ok(())` - If the document was successfully stored or updated
164
-
/// * `Err(error)` - If an error occurs during the operation (storage failure,
165
-
/// serialization failure, constraint violation, etc.)
166
-
///
167
-
/// # Examples
168
-
///
169
-
/// ```rust,ignore
170
-
/// let storage = MyStorage::new();
171
-
/// let document = Document {
172
-
/// id: "did:plc:bv6ggog3tya2z3vxsub7hnal".to_string(),
173
-
/// also_known_as: vec!["at://alice.bsky.social".to_string()],
174
-
/// service: vec![/* services */],
175
-
/// verification_method: vec![/* verification methods */],
176
-
/// extra: HashMap::new(),
177
-
/// };
178
-
/// storage.store_document(document).await?;
179
-
/// println!("Document successfully stored");
180
-
/// ```
181
-
async fn store_document(&self, document: Document) -> Result<()>;
182
-
183
-
/// Deletes a DID document by its DID.
184
-
///
185
-
/// This method removes a DID document from storage using the DID as the identifier.
186
-
/// This operation is typically used when cleaning up expired cache entries, removing
187
-
/// invalid documents, or when an identity is deactivated.
188
-
///
189
-
/// Implementations should:
190
-
/// - Handle the case where the DID doesn't exist gracefully (return Ok(()))
191
-
/// - Ensure the deletion is atomic
192
-
/// - Clean up any related data or indexes
193
-
/// - Preserve referential integrity if applicable
194
-
///
195
-
/// # Arguments
196
-
/// * `did` - The DID identifying the document to delete.
197
-
/// Should be in the format `did:method:identifier`
198
-
/// (e.g., "did:plc:bv6ggog3tya2z3vxsub7hnal")
199
-
///
200
-
/// # Returns
201
-
/// * `Ok(())` - If the document was successfully deleted or didn't exist
202
-
/// * `Err(error)` - If an error occurs during deletion (storage failure, etc.)
203
-
///
204
-
/// # Examples
205
-
///
206
-
/// ```rust,ignore
207
-
/// let storage = MyStorage::new();
208
-
/// storage.delete_document_by_did("did:plc:bv6ggog3tya2z3vxsub7hnal").await?;
209
-
/// println!("Document deleted");
210
-
/// ```
211
-
async fn delete_document_by_did(&self, did: &str) -> Result<()>;
212
-
}
+8
-7
crates/atproto-identity/src/storage_lru.rs
···
11
11
12
12
use crate::errors::StorageError;
13
13
use crate::model::Document;
14
-
use crate::storage::DidDocumentStorage;
14
+
use crate::traits::DidDocumentStorage;
15
15
16
16
/// An LRU-based implementation of `DidDocumentStorage` that maintains a fixed-size cache of DID documents.
17
17
///
···
54
54
///
55
55
/// ```rust
56
56
/// use atproto_identity::storage_lru::LruDidDocumentStorage;
57
-
/// use atproto_identity::storage::DidDocumentStorage;
57
+
/// use atproto_identity::traits::DidDocumentStorage;
58
58
/// use atproto_identity::model::Document;
59
59
/// use std::num::NonZeroUsize;
60
60
/// use std::collections::HashMap;
···
164
164
///
165
165
/// ```rust
166
166
/// use atproto_identity::storage_lru::LruDidDocumentStorage;
167
-
/// use atproto_identity::storage::DidDocumentStorage;
167
+
/// use atproto_identity::traits::DidDocumentStorage;
168
168
/// use atproto_identity::model::Document;
169
169
/// use std::num::NonZeroUsize;
170
170
/// use std::collections::HashMap;
···
251
251
///
252
252
/// ```rust
253
253
/// use atproto_identity::storage_lru::LruDidDocumentStorage;
254
-
/// use atproto_identity::storage::DidDocumentStorage;
254
+
/// use atproto_identity::traits::DidDocumentStorage;
255
255
/// use atproto_identity::model::Document;
256
256
/// use std::num::NonZeroUsize;
257
257
/// use std::collections::HashMap;
···
305
305
///
306
306
/// ```rust
307
307
/// use atproto_identity::storage_lru::LruDidDocumentStorage;
308
-
/// use atproto_identity::storage::DidDocumentStorage;
308
+
/// use atproto_identity::traits::DidDocumentStorage;
309
309
/// use atproto_identity::model::Document;
310
310
/// use std::num::NonZeroUsize;
311
311
/// use std::collections::HashMap;
···
370
370
///
371
371
/// ```rust
372
372
/// use atproto_identity::storage_lru::LruDidDocumentStorage;
373
-
/// use atproto_identity::storage::DidDocumentStorage;
373
+
/// use atproto_identity::traits::DidDocumentStorage;
374
374
/// use atproto_identity::model::Document;
375
375
/// use std::num::NonZeroUsize;
376
376
/// use std::collections::HashMap;
···
460
460
///
461
461
/// ```rust
462
462
/// use atproto_identity::storage_lru::LruDidDocumentStorage;
463
-
/// use atproto_identity::storage::DidDocumentStorage;
463
+
/// use atproto_identity::traits::DidDocumentStorage;
464
464
/// use atproto_identity::model::Document;
465
465
/// use std::num::NonZeroUsize;
466
466
/// use std::collections::HashMap;
···
507
507
#[cfg(test)]
508
508
mod tests {
509
509
use super::*;
510
+
use crate::traits::DidDocumentStorage;
510
511
use std::collections::HashMap;
511
512
use std::num::NonZeroUsize;
512
513
+49
crates/atproto-identity/src/traits.rs
···
1
+
//! Shared trait definitions for AT Protocol identity operations.
2
+
//!
3
+
//! This module centralizes async traits used across the identity crate so they can
4
+
//! be implemented without introducing circular module dependencies.
5
+
6
+
use anyhow::Result;
7
+
use async_trait::async_trait;
8
+
9
+
use crate::errors::ResolveError;
10
+
use crate::key::KeyData;
11
+
use crate::model::Document;
12
+
13
+
/// Trait for AT Protocol identity resolution.
14
+
///
15
+
/// Implementations must resolve handles or DIDs to canonical DID documents.
16
+
#[async_trait]
17
+
pub trait IdentityResolver: Send + Sync {
18
+
/// Resolves an AT Protocol subject to its DID document.
19
+
async fn resolve(&self, subject: &str) -> Result<Document>;
20
+
}
21
+
22
+
/// Trait for DNS resolution operations used during handle lookups.
23
+
#[async_trait]
24
+
pub trait DnsResolver: Send + Sync {
25
+
/// Resolves TXT records for a given domain name.
26
+
async fn resolve_txt(&self, domain: &str) -> Result<Vec<String>, ResolveError>;
27
+
}
28
+
29
+
/// Trait for resolving key references (e.g., DID verification methods) to [`KeyData`].
30
+
///
31
+
/// A reference may be a bare `did:key` string or a `did:...#fragment` verification method id.
32
+
#[async_trait]
33
+
pub trait KeyResolver: Send + Sync {
34
+
/// Resolves a key reference string into key material.
35
+
async fn resolve(&self, key: &str) -> Result<KeyData>;
36
+
}
37
+
38
+
/// Trait for DID document storage backends.
39
+
#[async_trait]
40
+
pub trait DidDocumentStorage: Send + Sync {
41
+
/// Retrieves a DID document if present.
42
+
async fn get_document_by_did(&self, did: &str) -> Result<Option<Document>>;
43
+
44
+
/// Stores or updates a DID document.
45
+
async fn store_document(&self, document: Document) -> Result<()>;
46
+
47
+
/// Deletes a DID document by DID.
48
+
async fn delete_document_by_did(&self, did: &str) -> Result<()>;
49
+
}
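The trait definitions above are intentionally terse compared to the storage.rs docs they replace; the in-memory example from those removed docs still carries over almost verbatim, and a minimal sketch of it is worth keeping nearby (tokio `RwLock`, `HashMap` keyed by DID):

```rust
use std::collections::HashMap;
use std::sync::Arc;

use anyhow::Result;
use atproto_identity::model::Document;
use atproto_identity::traits::DidDocumentStorage;
use tokio::sync::RwLock;

/// Thread-safe in-memory DID document store, mirroring the example that
/// previously lived in storage.rs.
#[derive(Clone, Default)]
struct InMemoryStorage {
    data: Arc<RwLock<HashMap<String, Document>>>,
}

#[async_trait::async_trait]
impl DidDocumentStorage for InMemoryStorage {
    async fn get_document_by_did(&self, did: &str) -> Result<Option<Document>> {
        Ok(self.data.read().await.get(did).cloned())
    }

    async fn store_document(&self, document: Document) -> Result<()> {
        self.data.write().await.insert(document.id.clone(), document);
        Ok(())
    }

    async fn delete_document_by_did(&self, did: &str) -> Result<()> {
        self.data.write().await.remove(did);
        Ok(())
    }
}
```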
+48
-119
crates/atproto-identity/src/url.rs
···
1
-
//! URL construction utilities for HTTP endpoints.
1
+
//! URL construction utilities leveraging the `url` crate.
2
2
//!
3
-
//! Build well-formed HTTP request URLs with parameter encoding
4
-
//! and query string generation.
5
-
6
-
/// A single query parameter as a key-value pair.
7
-
pub type QueryParam<'a> = (&'a str, &'a str);
8
-
/// A collection of query parameters.
9
-
pub type QueryParams<'a> = Vec<QueryParam<'a>>;
10
-
11
-
/// Builds a query string from a collection of query parameters.
12
-
///
13
-
/// # Arguments
14
-
///
15
-
/// * `query` - Collection of key-value pairs to build into a query string
16
-
///
17
-
/// # Returns
18
-
///
19
-
/// A formatted query string with URL-encoded parameters
20
-
pub fn build_querystring(query: QueryParams) -> String {
21
-
query.iter().fold(String::new(), |acc, &tuple| {
22
-
acc + tuple.0 + "=" + tuple.1 + "&"
23
-
})
24
-
}
3
+
//! Provides helpers for building URLs and appending query parameters
4
+
//! without manual string concatenation.
25
5
26
-
/// Builder for constructing URLs with host, path, and query parameters.
27
-
pub struct URLBuilder {
28
-
host: String,
29
-
path: String,
30
-
params: Vec<(String, String)>,
31
-
}
6
+
use url::{ParseError, Url};
32
7
33
-
/// Convenience function to build a URL with optional parameters.
34
-
///
35
-
/// # Arguments
36
-
///
37
-
/// * `host` - The hostname (will be prefixed with https:// if needed)
38
-
/// * `path` - The URL path
39
-
/// * `params` - Vector of optional key-value pairs for query parameters
40
-
///
41
-
/// # Returns
42
-
///
43
-
/// A fully constructed URL string
44
-
pub fn build_url(host: &str, path: &str, params: Vec<Option<(&str, &str)>>) -> String {
45
-
let mut url_builder = URLBuilder::new(host);
46
-
url_builder.path(path);
8
+
/// Builds a URL from the provided components.
9
+
/// Returns `Result<Url, ParseError>` to surface parsing errors.
10
+
pub fn build_url<K, V, I>(host: &str, path: &str, params: I) -> Result<Url, ParseError>
11
+
where
12
+
I: IntoIterator<Item = (K, V)>,
13
+
K: AsRef<str>,
14
+
V: AsRef<str>,
15
+
{
16
+
let mut base = if host.starts_with("http://") || host.starts_with("https://") {
17
+
Url::parse(host)?
18
+
} else {
19
+
Url::parse(&format!("https://{}", host))?
20
+
};
47
21
48
-
for (key, value) in params.iter().filter_map(|x| *x) {
49
-
url_builder.param(key, value);
22
+
if !base.path().ends_with('/') {
23
+
let mut new_path = base.path().to_string();
24
+
if !new_path.ends_with('/') {
25
+
new_path.push('/');
26
+
}
27
+
if new_path.is_empty() {
28
+
new_path.push('/');
29
+
}
30
+
base.set_path(&new_path);
50
31
}
51
32
52
-
url_builder.build()
53
-
}
54
-
55
-
impl URLBuilder {
56
-
/// Creates a new URLBuilder with the specified host.
57
-
///
58
-
/// # Arguments
59
-
///
60
-
/// * `host` - The hostname (will be prefixed with https:// if needed and trailing slash removed)
61
-
///
62
-
/// # Returns
63
-
///
64
-
/// A new URLBuilder instance
65
-
pub fn new(host: &str) -> URLBuilder {
66
-
let host = if host.starts_with("https://") {
67
-
host.to_string()
68
-
} else {
69
-
format!("https://{}", host)
70
-
};
71
-
72
-
let host = if let Some(trimmed) = host.strip_suffix('/') {
73
-
trimmed.to_string()
74
-
} else {
75
-
host
76
-
};
77
-
78
-
URLBuilder {
79
-
host: host.to_string(),
80
-
params: vec![],
81
-
path: "/".to_string(),
33
+
let mut url = base.join(path.trim_start_matches('/'))?;
34
+
{
35
+
let mut pairs = url.query_pairs_mut();
36
+
for (key, value) in params {
37
+
pairs.append_pair(key.as_ref(), value.as_ref());
82
38
}
83
39
}
40
+
Ok(url)
41
+
}
84
42
85
-
/// Adds a query parameter to the URL.
86
-
///
87
-
/// # Arguments
88
-
///
89
-
/// * `key` - The parameter key
90
-
/// * `value` - The parameter value (will be URL-encoded)
91
-
///
92
-
/// # Returns
93
-
///
94
-
/// A mutable reference to self for method chaining
95
-
pub fn param(&mut self, key: &str, value: &str) -> &mut Self {
96
-
self.params
97
-
.push((key.to_owned(), urlencoding::encode(value).to_string()));
98
-
self
99
-
}
43
+
#[cfg(test)]
44
+
mod tests {
45
+
use super::*;
100
46
101
-
/// Sets the URL path.
102
-
///
103
-
/// # Arguments
104
-
///
105
-
/// * `path` - The URL path
106
-
///
107
-
/// # Returns
108
-
///
109
-
/// A mutable reference to self for method chaining
110
-
pub fn path(&mut self, path: &str) -> &mut Self {
111
-
path.clone_into(&mut self.path);
112
-
self
113
-
}
47
+
#[test]
48
+
fn builds_url_with_params() {
49
+
let url = build_url(
50
+
"example.com/api",
51
+
"resource",
52
+
[("id", "123"), ("status", "active")],
53
+
)
54
+
.expect("url build failed");
114
55
115
-
/// Constructs the final URL string.
116
-
///
117
-
/// # Returns
118
-
///
119
-
/// The complete URL with host, path, and query parameters
120
-
pub fn build(self) -> String {
121
-
let mut url_params = String::new();
122
-
123
-
if !self.params.is_empty() {
124
-
url_params.push('?');
125
-
126
-
let qs_args = self.params.iter().map(|(k, v)| (&**k, &**v)).collect();
127
-
url_params.push_str(build_querystring(qs_args).as_str());
128
-
}
129
-
130
-
format!("{}{}{}", self.host, self.path, url_params)
56
+
assert_eq!(
57
+
url.as_str(),
58
+
"https://example.com/api/resource?id=123&status=active"
59
+
);
131
60
}
132
61
}
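Because `build_url` now returns `Result<Url, ParseError>` instead of a concatenated `String`, call sites pick up a `?` and can pass any `(key, value)` iterator. A small, hypothetical call site (the helper name and values are illustrative only):

```rust
use atproto_identity::url::build_url;
use url::{ParseError, Url};

// Hypothetical helper: a bare host gets an https:// scheme prepended, and
// malformed input surfaces as url::ParseError instead of a bad string.
fn profile_url(handle: &str) -> Result<Url, ParseError> {
    build_url("example.com", "profiles", [("handle", handle)])
}
```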
+75
-24
crates/atproto-jetstream/src/consumer.rs
···
2
2
//!
3
3
//! WebSocket event consumption with background processing and
4
4
//! customizable event handler dispatch.
5
+
//!
6
+
//! ## Memory Efficiency
7
+
//!
8
+
//! This module is optimized for high-throughput event processing with minimal allocations:
9
+
//!
10
+
//! - **Arc-based event sharing**: Events are wrapped in `Arc` and shared across all handlers,
11
+
//! avoiding expensive clones of event data structures.
12
+
//! - **Zero-copy handler IDs**: Handler identifiers use string slices to avoid allocations
13
+
//! during registration and dispatch.
14
+
//! - **Optimized query building**: WebSocket query strings are built with pre-allocated
15
+
//! capacity to minimize reallocations.
16
+
//!
17
+
//! ## Usage
18
+
//!
19
+
//! Implement the `EventHandler` trait to process events:
20
+
//!
21
+
//! ```rust
22
+
//! use atproto_jetstream::{EventHandler, JetstreamEvent};
23
+
//! use async_trait::async_trait;
24
+
//! use std::sync::Arc;
25
+
//! use anyhow::Result;
26
+
//!
27
+
//! struct MyHandler;
28
+
//!
29
+
//! #[async_trait]
30
+
//! impl EventHandler for MyHandler {
31
+
//! async fn handle_event(&self, event: Arc<JetstreamEvent>) -> Result<()> {
32
+
//! // Process event without cloning
33
+
//! Ok(())
34
+
//! }
35
+
//!
36
+
//! fn handler_id(&self) -> &str {
37
+
//! "my-handler"
38
+
//! }
39
+
//! }
40
+
//! ```
5
41
6
42
use crate::errors::ConsumerError;
7
43
use anyhow::Result;
···
133
169
#[async_trait]
134
170
pub trait EventHandler: Send + Sync {
135
171
/// Handle a received event
136
-
async fn handle_event(&self, event: JetstreamEvent) -> Result<()>;
172
+
///
173
+
/// Events are wrapped in Arc to enable efficient sharing across multiple handlers
174
+
/// without cloning the entire event data structure.
175
+
async fn handle_event(&self, event: Arc<JetstreamEvent>) -> Result<()>;
137
176
138
177
/// Get the handler's identifier
139
-
fn handler_id(&self) -> String;
178
+
///
179
+
/// Returns a string slice to avoid unnecessary allocations.
180
+
fn handler_id(&self) -> &str;
140
181
}
141
182
142
183
#[cfg_attr(debug_assertions, derive(Debug))]
···
167
208
pub struct Consumer {
168
209
config: ConsumerTaskConfig,
169
210
handlers: Arc<RwLock<HashMap<String, Arc<dyn EventHandler>>>>,
170
-
event_sender: Arc<RwLock<Option<broadcast::Sender<JetstreamEvent>>>>,
211
+
event_sender: Arc<RwLock<Option<broadcast::Sender<Arc<JetstreamEvent>>>>>,
171
212
}
172
213
173
214
impl Consumer {
···
185
226
let handler_id = handler.handler_id();
186
227
let mut handlers = self.handlers.write().await;
187
228
188
-
if handlers.contains_key(&handler_id) {
229
+
if handlers.contains_key(handler_id) {
189
230
return Err(ConsumerError::HandlerRegistrationFailed(format!(
190
231
"Handler with ID '{}' already registered",
191
232
handler_id
···
193
234
.into());
194
235
}
195
236
196
-
handlers.insert(handler_id.clone(), handler);
237
+
handlers.insert(handler_id.to_string(), handler);
197
238
Ok(())
198
239
}
199
240
···
205
246
}
206
247
207
248
/// Get a broadcast receiver for events
208
-
pub async fn get_event_receiver(&self) -> Result<broadcast::Receiver<JetstreamEvent>> {
249
+
///
250
+
/// Events are wrapped in Arc to enable efficient sharing without cloning.
251
+
pub async fn get_event_receiver(&self) -> Result<broadcast::Receiver<Arc<JetstreamEvent>>> {
209
252
let sender_guard = self.event_sender.read().await;
210
253
match sender_guard.as_ref() {
211
254
Some(sender) => Ok(sender.subscribe()),
···
249
292
tracing::info!("Starting Jetstream consumer");
250
293
251
294
// Build WebSocket URL with query parameters
252
-
let mut query_params = vec![];
295
+
// Pre-allocate capacity to avoid reallocations during string building
296
+
let capacity = 50 // Base parameters
297
+
+ self.config.collections.len() * 30 // Estimate per collection
298
+
+ self.config.dids.len() * 60; // Estimate per DID
299
+
let mut query_string = String::with_capacity(capacity);
253
300
254
301
// Add compression parameter
255
-
query_params.push(format!("compress={}", self.config.compression));
302
+
query_string.push_str("compress=");
303
+
query_string.push_str(if self.config.compression { "true" } else { "false" });
256
304
257
305
// Add requireHello parameter
258
-
query_params.push(format!("requireHello={}", self.config.require_hello));
306
+
query_string.push_str("&requireHello=");
307
+
query_string.push_str(if self.config.require_hello { "true" } else { "false" });
259
308
260
309
// Add wantedCollections if specified (each collection as a separate query parameter)
261
310
if !self.config.collections.is_empty() && !self.config.require_hello {
262
311
for collection in &self.config.collections {
263
-
query_params.push(format!(
264
-
"wantedCollections={}",
265
-
urlencoding::encode(collection)
266
-
));
312
+
query_string.push_str("&wantedCollections=");
313
+
query_string.push_str(&urlencoding::encode(collection));
267
314
}
268
315
}
269
316
270
317
// Add wantedDids if specified (each DID as a separate query parameter)
271
318
if !self.config.dids.is_empty() && !self.config.require_hello {
272
319
for did in &self.config.dids {
273
-
query_params.push(format!("wantedDids={}", urlencoding::encode(did)));
320
+
query_string.push_str("&wantedDids=");
321
+
query_string.push_str(&urlencoding::encode(did));
274
322
}
275
323
}
276
324
277
325
// Add maxMessageSizeBytes if specified
278
326
if let Some(max_size) = self.config.max_message_size_bytes {
279
-
query_params.push(format!("maxMessageSizeBytes={}", max_size));
327
+
use std::fmt::Write;
328
+
write!(&mut query_string, "&maxMessageSizeBytes={}", max_size).unwrap();
280
329
}
281
330
282
331
// Add cursor if specified
283
332
if let Some(cursor) = self.config.cursor {
284
-
query_params.push(format!("cursor={}", cursor));
333
+
use std::fmt::Write;
334
+
write!(&mut query_string, "&cursor={}", cursor).unwrap();
285
335
}
286
-
287
-
let query_string = query_params.join("&");
288
336
let ws_url = Uri::from_str(&format!(
289
337
"wss://{}/subscribe?{}",
290
338
self.config.jetstream_hostname, query_string
···
335
383
break;
336
384
},
337
385
() = &mut sleeper => {
338
-
// consumer_control_insert(&self.pool, &self.config.jetstream_hostname, time_usec).await?;
339
-
340
386
sleeper.as_mut().reset(Instant::now() + interval);
341
387
},
342
388
item = client.next() => {
···
404
450
}
405
451
406
452
/// Dispatch event to all registered handlers
453
+
///
454
+
/// Wraps the event in Arc once and shares it across all handlers,
455
+
/// avoiding expensive clones of the event data structure.
407
456
async fn dispatch_to_handlers(&self, event: JetstreamEvent) -> Result<()> {
408
457
let handlers = self.handlers.read().await;
458
+
let event = Arc::new(event);
409
459
410
460
for (handler_id, handler) in handlers.iter() {
411
461
let handler_span = tracing::debug_span!("handler_dispatch", handler_id = %handler_id);
462
+
let event_ref = Arc::clone(&event);
412
463
async {
413
-
if let Err(err) = handler.handle_event(event.clone()).await {
464
+
if let Err(err) = handler.handle_event(event_ref).await {
414
465
tracing::error!(
415
466
error = ?err,
416
467
handler_id = %handler_id,
···
440
491
441
492
#[async_trait]
442
493
impl EventHandler for LoggingHandler {
443
-
async fn handle_event(&self, _event: JetstreamEvent) -> Result<()> {
494
+
async fn handle_event(&self, _event: Arc<JetstreamEvent>) -> Result<()> {
444
495
Ok(())
445
496
}
446
497
447
-
fn handler_id(&self) -> String {
448
-
self.id.clone()
498
+
fn handler_id(&self) -> &str {
499
+
&self.id
449
500
}
450
501
}
451
502
+2
-2
crates/atproto-oauth/src/dpop.rs
+2
-2
crates/atproto-oauth/src/dpop.rs
···
183
183
/// * `false` if no DPoP error is found or the header format is invalid
184
184
///
185
185
/// # Examples
186
-
/// ```
186
+
/// ```no_run
187
187
/// use atproto_oauth::dpop::is_dpop_error;
188
188
///
189
189
/// // Valid DPoP error: invalid_dpop_proof
···
516
516
/// - HTTP method or URI don't match expected values
517
517
///
518
518
/// # Examples
519
-
/// ```
519
+
/// ```no_run
520
520
/// use atproto_oauth::dpop::{validate_dpop_jwt, DpopValidationConfig};
521
521
///
522
522
/// let dpop_jwt = "eyJhbGciOiJFUzI1NiIsImp3ayI6eyJhbGciOiJFUzI1NiIsImNydiI6IlAtMjU2Iiwia2lkIjoiZGlkOmtleTp6RG5hZVpVeEFhZDJUbkRYTjFaZWprcFV4TWVvMW9mNzF6NGVackxLRFRtaEQzOEQ3Iiwia3R5IjoiRUMiLCJ1c2UiOiJzaWciLCJ4IjoiaG56dDlSSGppUDBvMFJJTEZacEdjX0phenJUb1pHUzF1d0d5R3JleUNNbyIsInkiOiJzaXJhU2FGU09md3FrYTZRdnR3aUJhM0FKUi14eEhQaWVWZkFhZEhQQ0JRIn0sInR5cCI6ImRwb3Arand0In0.eyJqdGkiOiI2NDM0ZmFlNC00ZTYxLTQ1NDEtOTNlZC1kMzQ5ZjRiMTQ1NjEiLCJodG0iOiJQT1NUIiwiaHR1IjoiaHR0cHM6Ly9haXBkZXYudHVubi5kZXYvb2F1dGgvdG9rZW4iLCJpYXQiOjE3NDk3NjQ1MTl9.GkoB00Y-68djRHLhO5-PayNV8PWcQI1pwZaAUL3Hzppj-ga6SKMyGpPwY4kcGdHM7lvvisNkzvd7RjEmdDtnjQ";
+374
-5
crates/atproto-oauth/src/scopes.rs
···
38
38
Atproto,
39
39
/// Transition scope for migration operations
40
40
Transition(TransitionScope),
41
+
/// Include scope for referencing permission sets by NSID
42
+
Include(IncludeScope),
41
43
/// OpenID Connect scope - required for OpenID Connect authentication
42
44
OpenId,
43
45
/// Profile scope - access to user profile information
···
91
93
Generic,
92
94
/// Email transition operations
93
95
Email,
96
+
}
97
+
98
+
/// Include scope for referencing permission sets by NSID
99
+
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
100
+
pub struct IncludeScope {
101
+
/// The permission set NSID (e.g., "app.example.authFull")
102
+
pub nsid: String,
103
+
/// Optional audience DID for inherited RPC permissions
104
+
pub aud: Option<String>,
94
105
}
95
106
96
107
/// Blob scope with mime type constraints
···
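The two accepted spellings of the new `include` scope and the values they carry, shown by constructing the struct directly (a sketch; the audience DID is illustrative and no public parse entry point is assumed here):

```rust
use atproto_oauth::scopes::{IncludeScope, Scope};

fn include_examples() -> (Scope, Scope) {
    // "include:app.example.authFull" -- reference a permission set by NSID.
    let bare = Scope::Include(IncludeScope {
        nsid: "app.example.authFull".to_string(),
        aud: None,
    });

    // "include:app.example.authFull?aud=did:web:example.com" -- the same set,
    // limited to one audience DID for inherited RPC permissions.
    let with_aud = Scope::Include(IncludeScope {
        nsid: "app.example.authFull".to_string(),
        aud: Some("did:web:example.com".to_string()),
    });

    (bare, with_aud)
}
```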
310
321
"rpc",
311
322
"atproto",
312
323
"transition",
324
+
"include",
313
325
"openid",
314
326
"profile",
315
327
"email",
···
349
361
"rpc" => Self::parse_rpc(suffix),
350
362
"atproto" => Self::parse_atproto(suffix),
351
363
"transition" => Self::parse_transition(suffix),
364
+
"include" => Self::parse_include(suffix),
352
365
"openid" => Self::parse_openid(suffix),
353
366
"profile" => Self::parse_profile(suffix),
354
367
"email" => Self::parse_email(suffix),
···
573
586
Ok(Scope::Transition(scope))
574
587
}
575
588
589
+
fn parse_include(suffix: Option<&str>) -> Result<Self, ParseError> {
590
+
let (nsid, params) = match suffix {
591
+
Some(s) => {
592
+
if let Some(pos) = s.find('?') {
593
+
(&s[..pos], Some(&s[pos + 1..]))
594
+
} else {
595
+
(s, None)
596
+
}
597
+
}
598
+
None => return Err(ParseError::MissingResource),
599
+
};
600
+
601
+
if nsid.is_empty() {
602
+
return Err(ParseError::MissingResource);
603
+
}
604
+
605
+
let aud = if let Some(params) = params {
606
+
let parsed_params = parse_query_string(params);
607
+
parsed_params
608
+
.get("aud")
609
+
.and_then(|v| v.first())
610
+
.map(|s| url_decode(s))
611
+
} else {
612
+
None
613
+
};
614
+
615
+
Ok(Scope::Include(IncludeScope {
616
+
nsid: nsid.to_string(),
617
+
aud,
618
+
}))
619
+
}
620
+
576
621
fn parse_openid(suffix: Option<&str>) -> Result<Self, ParseError> {
577
622
if suffix.is_some() {
578
623
return Err(ParseError::InvalidResource(
···
677
722
if let Some(lxm) = scope.lxm.iter().next() {
678
723
match lxm {
679
724
RpcLexicon::All => "rpc:*".to_string(),
680
-
RpcLexicon::Nsid(nsid) => format!("rpc:{}", nsid),
725
+
RpcLexicon::Nsid(nsid) => format!("rpc:{}?aud=*", nsid),
726
+
}
727
+
} else {
728
+
"rpc:*".to_string()
729
+
}
730
+
} else if scope.lxm.len() == 1 && scope.aud.len() == 1 {
731
+
// Single lxm and single aud (aud is not All, handled above)
732
+
if let (Some(lxm), Some(aud)) =
733
+
(scope.lxm.iter().next(), scope.aud.iter().next())
734
+
{
735
+
match (lxm, aud) {
736
+
(RpcLexicon::Nsid(nsid), RpcAudience::Did(did)) => {
737
+
format!("rpc:{}?aud={}", nsid, did)
738
+
}
739
+
(RpcLexicon::All, RpcAudience::Did(did)) => {
740
+
format!("rpc:*?aud={}", did)
741
+
}
742
+
_ => "rpc:*".to_string(),
681
743
}
682
744
} else {
683
745
"rpc:*".to_string()
···
713
775
TransitionScope::Generic => "transition:generic".to_string(),
714
776
TransitionScope::Email => "transition:email".to_string(),
715
777
},
778
+
Scope::Include(scope) => {
779
+
if let Some(ref aud) = scope.aud {
780
+
format!("include:{}?aud={}", scope.nsid, url_encode(aud))
781
+
} else {
782
+
format!("include:{}", scope.nsid)
783
+
}
784
+
}
716
785
Scope::OpenId => "openid".to_string(),
717
786
Scope::Profile => "profile".to_string(),
718
787
Scope::Email => "email".to_string(),
···
732
801
// Other scopes don't grant transition scopes
733
802
(_, Scope::Transition(_)) => false,
734
803
(Scope::Transition(_), _) => false,
804
+
// Include scopes only grant themselves (exact match including aud)
805
+
(Scope::Include(a), Scope::Include(b)) => a == b,
806
+
// Other scopes don't grant include scopes
807
+
(_, Scope::Include(_)) => false,
808
+
(Scope::Include(_), _) => false,
735
809
// OpenID Connect scopes only grant themselves
736
810
(Scope::OpenId, Scope::OpenId) => true,
737
811
(Scope::OpenId, _) => false,
···
873
947
params
874
948
}
875
949
950
+
/// Decode a percent-encoded string
951
+
fn url_decode(s: &str) -> String {
952
+
let mut result = String::with_capacity(s.len());
953
+
let mut chars = s.chars().peekable();
954
+
955
+
while let Some(c) = chars.next() {
956
+
if c == '%' {
957
+
let hex: String = chars.by_ref().take(2).collect();
958
+
if hex.len() == 2 {
959
+
if let Ok(byte) = u8::from_str_radix(&hex, 16) {
960
+
result.push(byte as char);
961
+
continue;
962
+
}
963
+
}
964
+
result.push('%');
965
+
result.push_str(&hex);
966
+
} else {
967
+
result.push(c);
968
+
}
969
+
}
970
+
971
+
result
972
+
}
973
+
974
+
/// Encode a string for use in a URL query parameter
975
+
fn url_encode(s: &str) -> String {
976
+
let mut result = String::with_capacity(s.len() * 3);
977
+
978
+
for c in s.chars() {
979
+
match c {
980
+
'A'..='Z' | 'a'..='z' | '0'..='9' | '-' | '_' | '.' | '~' | ':' => {
981
+
result.push(c);
982
+
}
983
+
_ => {
984
+
for byte in c.to_string().as_bytes() {
985
+
result.push_str(&format!("%{:02X}", byte));
986
+
}
987
+
}
988
+
}
989
+
}
990
+
991
+
result
992
+
}
993
+
876
994
/// Error type for scope parsing
877
995
#[derive(Debug, Clone, PartialEq, Eq)]
878
996
pub enum ParseError {
···
1056
1174
("repo:foo.bar", "repo:foo.bar"),
1057
1175
("repo:foo.bar?action=create", "repo:foo.bar?action=create"),
1058
1176
("rpc:*", "rpc:*"),
1177
+
("rpc:com.example.service", "rpc:com.example.service?aud=*"),
1178
+
(
1179
+
"rpc:com.example.service?aud=did:example:123",
1180
+
"rpc:com.example.service?aud=did:example:123",
1181
+
),
1059
1182
];
1060
1183
1061
1184
for (input, expected) in tests {
···
1677
1800
1678
1801
// Test with complex scopes including query parameters
1679
1802
let scopes = vec![
1680
-
Scope::parse("rpc:com.example.service?aud=did:example:123&lxm=com.example.method")
1681
-
.unwrap(),
1803
+
Scope::parse("rpc:com.example.service?aud=did:example:123").unwrap(),
1682
1804
Scope::parse("repo:foo.bar?action=create&action=update").unwrap(),
1683
1805
Scope::parse("blob:image/*?accept=image/png&accept=image/jpeg").unwrap(),
1684
1806
];
1685
1807
let result = Scope::serialize_multiple(&scopes);
1686
1808
// The result should be sorted alphabetically
1687
-
// Note: RPC scope with query params is serialized as "rpc?aud=...&lxm=..."
1809
+
// Single lxm + single aud is serialized as "rpc:[lxm]?aud=[aud]"
1688
1810
assert!(result.starts_with("blob:"));
1689
1811
assert!(result.contains(" repo:"));
1690
-
assert!(result.contains("rpc?aud=did:example:123&lxm=com.example.service"));
1812
+
assert!(result.contains("rpc:com.example.service?aud=did:example:123"));
1691
1813
1692
1814
// Test with transition scopes
1693
1815
let scopes = vec![
···
1835
1957
assert!(!result.contains(&Scope::parse("account:email").unwrap()));
1836
1958
assert!(result.contains(&Scope::parse("account:email?action=manage").unwrap()));
1837
1959
assert!(result.contains(&Scope::parse("account:repo").unwrap()));
1960
+
}
1961
+
1962
+
#[test]
1963
+
fn test_repo_nsid_with_wildcard_suffix() {
1964
+
// Test parsing "repo:app.bsky.feed.*" - the asterisk is treated as a literal part of the NSID,
1965
+
// not as a wildcard pattern. Only "repo:*" has special wildcard behavior for ALL collections.
1966
+
let scope = Scope::parse("repo:app.bsky.feed.*").unwrap();
1967
+
1968
+
// Verify it parses as a specific NSID, not as a wildcard
1969
+
assert_eq!(
1970
+
scope,
1971
+
Scope::Repo(RepoScope {
1972
+
collection: RepoCollection::Nsid("app.bsky.feed.*".to_string()),
1973
+
actions: {
1974
+
let mut actions = BTreeSet::new();
1975
+
actions.insert(RepoAction::Create);
1976
+
actions.insert(RepoAction::Update);
1977
+
actions.insert(RepoAction::Delete);
1978
+
actions
1979
+
}
1980
+
})
1981
+
);
1982
+
1983
+
// Verify normalization preserves the literal NSID
1984
+
assert_eq!(scope.to_string_normalized(), "repo:app.bsky.feed.*");
1985
+
1986
+
// Test that it does NOT grant access to "app.bsky.feed.post"
1987
+
// (because "app.bsky.feed.*" is a literal NSID, not a pattern)
1988
+
let specific_feed = Scope::parse("repo:app.bsky.feed.post").unwrap();
1989
+
assert!(!scope.grants(&specific_feed));
1990
+
1991
+
// Test that only "repo:*" grants access to "app.bsky.feed.*"
1992
+
let repo_all = Scope::parse("repo:*").unwrap();
1993
+
assert!(repo_all.grants(&scope));
1994
+
1995
+
// Test that "repo:app.bsky.feed.*" only grants itself
1996
+
assert!(scope.grants(&scope));
1997
+
1998
+
// Test with actions
1999
+
let scope_with_create = Scope::parse("repo:app.bsky.feed.*?action=create").unwrap();
2000
+
assert_eq!(
2001
+
scope_with_create,
2002
+
Scope::Repo(RepoScope {
2003
+
collection: RepoCollection::Nsid("app.bsky.feed.*".to_string()),
2004
+
actions: {
2005
+
let mut actions = BTreeSet::new();
2006
+
actions.insert(RepoAction::Create);
2007
+
actions
2008
+
}
2009
+
})
2010
+
);
2011
+
2012
+
// The full scope (with all actions) grants the create-only scope
2013
+
assert!(scope.grants(&scope_with_create));
2014
+
// But the create-only scope does NOT grant the full scope
2015
+
assert!(!scope_with_create.grants(&scope));
2016
+
2017
+
// Test parsing multiple scopes with NSID wildcards
2018
+
let scopes = Scope::parse_multiple("repo:app.bsky.feed.* repo:app.bsky.graph.* repo:*").unwrap();
2019
+
assert_eq!(scopes.len(), 3);
2020
+
2021
+
// Test that parse_multiple_reduced properly reduces when "repo:*" is present
2022
+
let reduced = Scope::parse_multiple_reduced("repo:app.bsky.feed.* repo:app.bsky.graph.* repo:*").unwrap();
2023
+
assert_eq!(reduced.len(), 1);
2024
+
assert_eq!(reduced[0], repo_all);
2025
+
}
2026
+
2027
+
#[test]
2028
+
fn test_include_scope_parsing() {
2029
+
// Test basic include scope
2030
+
let scope = Scope::parse("include:app.example.authFull").unwrap();
2031
+
assert_eq!(
2032
+
scope,
2033
+
Scope::Include(IncludeScope {
2034
+
nsid: "app.example.authFull".to_string(),
2035
+
aud: None,
2036
+
})
2037
+
);
2038
+
2039
+
// Test include scope with audience
2040
+
let scope = Scope::parse("include:app.example.authFull?aud=did:web:api.example.com").unwrap();
2041
+
assert_eq!(
2042
+
scope,
2043
+
Scope::Include(IncludeScope {
2044
+
nsid: "app.example.authFull".to_string(),
2045
+
aud: Some("did:web:api.example.com".to_string()),
2046
+
})
2047
+
);
2048
+
2049
+
// Test include scope with URL-encoded audience (with fragment)
2050
+
let scope = Scope::parse("include:app.example.authFull?aud=did:web:api.example.com%23svc_chat").unwrap();
2051
+
assert_eq!(
2052
+
scope,
2053
+
Scope::Include(IncludeScope {
2054
+
nsid: "app.example.authFull".to_string(),
2055
+
aud: Some("did:web:api.example.com#svc_chat".to_string()),
2056
+
})
2057
+
);
2058
+
2059
+
// Test missing NSID
2060
+
assert!(matches!(
2061
+
Scope::parse("include"),
2062
+
Err(ParseError::MissingResource)
2063
+
));
2064
+
2065
+
// Test empty NSID with query params
2066
+
assert!(matches!(
2067
+
Scope::parse("include:?aud=did:example:123"),
2068
+
Err(ParseError::MissingResource)
2069
+
));
2070
+
}
2071
+
2072
+
#[test]
2073
+
fn test_include_scope_normalization() {
2074
+
// Test normalization without audience
2075
+
let scope = Scope::parse("include:com.example.authBasic").unwrap();
2076
+
assert_eq!(scope.to_string_normalized(), "include:com.example.authBasic");
2077
+
2078
+
// Test normalization with audience (no special chars)
2079
+
let scope = Scope::parse("include:com.example.authBasic?aud=did:plc:xyz123").unwrap();
2080
+
assert_eq!(
2081
+
scope.to_string_normalized(),
2082
+
"include:com.example.authBasic?aud=did:plc:xyz123"
2083
+
);
2084
+
2085
+
// Test normalization with URL encoding (fragment needs encoding)
2086
+
let scope = Scope::parse("include:app.example.authFull?aud=did:web:api.example.com%23svc_chat").unwrap();
2087
+
let normalized = scope.to_string_normalized();
2088
+
assert_eq!(
2089
+
normalized,
2090
+
"include:app.example.authFull?aud=did:web:api.example.com%23svc_chat"
2091
+
);
2092
+
}
2093
+
2094
+
#[test]
2095
+
fn test_include_scope_grants() {
2096
+
let include1 = Scope::parse("include:app.example.authFull").unwrap();
2097
+
let include2 = Scope::parse("include:app.example.authBasic").unwrap();
2098
+
let include1_with_aud = Scope::parse("include:app.example.authFull?aud=did:plc:xyz").unwrap();
2099
+
let account = Scope::parse("account:email").unwrap();
2100
+
2101
+
// Include scopes only grant themselves (exact match)
2102
+
assert!(include1.grants(&include1));
2103
+
assert!(!include1.grants(&include2));
2104
+
assert!(!include1.grants(&include1_with_aud)); // Different because aud differs
2105
+
assert!(include1_with_aud.grants(&include1_with_aud));
2106
+
2107
+
// Include scopes don't grant other scope types
2108
+
assert!(!include1.grants(&account));
2109
+
assert!(!account.grants(&include1));
2110
+
2111
+
// Include scopes don't grant atproto or transition
2112
+
let atproto = Scope::parse("atproto").unwrap();
2113
+
let transition = Scope::parse("transition:generic").unwrap();
2114
+
assert!(!include1.grants(&atproto));
2115
+
assert!(!include1.grants(&transition));
2116
+
assert!(!atproto.grants(&include1));
2117
+
assert!(!transition.grants(&include1));
2118
+
}
2119
+
2120
+
#[test]
2121
+
fn test_parse_multiple_with_include() {
2122
+
let scopes = Scope::parse_multiple("atproto include:app.example.auth repo:*").unwrap();
2123
+
assert_eq!(scopes.len(), 3);
2124
+
assert_eq!(scopes[0], Scope::Atproto);
2125
+
assert!(matches!(scopes[1], Scope::Include(_)));
2126
+
assert!(matches!(scopes[2], Scope::Repo(_)));
2127
+
2128
+
// Test with URL-encoded audience
2129
+
let scopes = Scope::parse_multiple(
2130
+
"include:app.example.auth?aud=did:web:api.example.com%23svc account:email"
2131
+
).unwrap();
2132
+
assert_eq!(scopes.len(), 2);
2133
+
if let Scope::Include(inc) = &scopes[0] {
2134
+
assert_eq!(inc.nsid, "app.example.auth");
2135
+
assert_eq!(inc.aud, Some("did:web:api.example.com#svc".to_string()));
2136
+
} else {
2137
+
panic!("Expected Include scope");
2138
+
}
2139
+
}
2140
+
2141
+
#[test]
2142
+
fn test_parse_multiple_reduced_with_include() {
2143
+
// Include scopes don't reduce each other (each is distinct)
2144
+
let scopes = Scope::parse_multiple_reduced(
2145
+
"include:app.example.auth include:app.example.other include:app.example.auth"
2146
+
).unwrap();
2147
+
assert_eq!(scopes.len(), 2); // Duplicates are removed
2148
+
assert!(scopes.contains(&Scope::Include(IncludeScope {
2149
+
nsid: "app.example.auth".to_string(),
2150
+
aud: None,
2151
+
})));
2152
+
assert!(scopes.contains(&Scope::Include(IncludeScope {
2153
+
nsid: "app.example.other".to_string(),
2154
+
aud: None,
2155
+
})));
2156
+
2157
+
// Include scopes with different audiences are not duplicates
2158
+
let scopes = Scope::parse_multiple_reduced(
2159
+
"include:app.example.auth include:app.example.auth?aud=did:plc:xyz"
2160
+
).unwrap();
2161
+
assert_eq!(scopes.len(), 2);
2162
+
}
2163
+
2164
+
#[test]
2165
+
fn test_serialize_multiple_with_include() {
2166
+
let scopes = vec![
2167
+
Scope::parse("repo:*").unwrap(),
2168
+
Scope::parse("include:app.example.authFull").unwrap(),
2169
+
Scope::Atproto,
2170
+
];
2171
+
let result = Scope::serialize_multiple(&scopes);
2172
+
assert_eq!(result, "atproto include:app.example.authFull repo:*");
2173
+
2174
+
// Test with URL-encoded audience
2175
+
let scopes = vec![
2176
+
Scope::Include(IncludeScope {
2177
+
nsid: "app.example.auth".to_string(),
2178
+
aud: Some("did:web:api.example.com#svc".to_string()),
2179
+
}),
2180
+
];
2181
+
let result = Scope::serialize_multiple(&scopes);
2182
+
assert_eq!(result, "include:app.example.auth?aud=did:web:api.example.com%23svc");
2183
+
}
2184
+
2185
+
#[test]
2186
+
fn test_remove_scope_with_include() {
2187
+
let scopes = vec![
2188
+
Scope::Atproto,
2189
+
Scope::parse("include:app.example.auth").unwrap(),
2190
+
Scope::parse("account:email").unwrap(),
2191
+
];
2192
+
let to_remove = Scope::parse("include:app.example.auth").unwrap();
2193
+
let result = Scope::remove_scope(&scopes, &to_remove);
2194
+
assert_eq!(result.len(), 2);
2195
+
assert!(!result.contains(&to_remove));
2196
+
assert!(result.contains(&Scope::Atproto));
2197
+
}
2198
+
2199
+
#[test]
2200
+
fn test_include_scope_roundtrip() {
2201
+
// Test that parse and serialize are inverses
2202
+
let original = "include:com.example.authBasicFeatures?aud=did:web:api.example.com%23svc_appview";
2203
+
let scope = Scope::parse(original).unwrap();
2204
+
let serialized = scope.to_string_normalized();
2205
+
let reparsed = Scope::parse(&serialized).unwrap();
2206
+
assert_eq!(scope, reparsed);
1838
2207
}
1839
2208
}
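Taken together, the additions above define a new `include:` scope with exact-match grant semantics and percent-encoded audience values. A short usage sketch of the round trip, using the `parse`, `to_string_normalized`, and `grants` methods shown in this diff; the `atproto_oauth::scopes` module path is assumed from the file location:

```rust
use atproto_oauth::scopes::Scope;

fn main() {
    // Audience DIDs with a fragment are percent-encoded inside the scope string.
    let scope =
        Scope::parse("include:app.example.authFull?aud=did:web:api.example.com%23svc_chat")
            .unwrap();

    // Normalization re-encodes the '#' fragment when serializing back out.
    assert_eq!(
        scope.to_string_normalized(),
        "include:app.example.authFull?aud=did:web:api.example.com%23svc_chat"
    );

    // Include scopes only grant themselves; a different (or missing) audience
    // is a different scope.
    let without_aud = Scope::parse("include:app.example.authFull").unwrap();
    assert!(scope.grants(&scope));
    assert!(!scope.grants(&without_aud));
}
```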
+17
-11
crates/atproto-oauth-aip/src/workflow.rs
···
112
112
//! and protocol violations.
113
113
114
114
use anyhow::Result;
115
-
use atproto_identity::url::URLBuilder;
115
+
use atproto_identity::url::build_url;
116
116
use atproto_oauth::{
117
117
jwk::WrappedJsonWebKey,
118
118
workflow::{OAuthRequest, OAuthRequestState, ParResponse, TokenResponse},
119
119
};
120
120
use serde::Deserialize;
121
+
use std::iter;
121
122
122
123
use crate::errors::OAuthWorkflowError;
123
124
···
522
523
access_token_type: &Option<&str>,
523
524
subject: &Option<&str>,
524
525
) -> Result<ATProtocolSession> {
525
-
let mut url_builder = URLBuilder::new(protected_resource_base);
526
-
url_builder.path("/api/atprotocol/session");
526
+
let mut url = build_url(
527
+
protected_resource_base,
528
+
"/api/atprotocol/session",
529
+
iter::empty::<(&str, &str)>(),
530
+
)?;
531
+
{
532
+
let mut pairs = url.query_pairs_mut();
533
+
if let Some(value) = access_token_type {
534
+
pairs.append_pair("access_token_type", value);
535
+
}
527
536
528
-
if let Some(value) = access_token_type {
529
-
url_builder.param("access_token_type", value);
530
-
}
531
-
532
-
if let Some(value) = subject {
533
-
url_builder.param("sub", value);
537
+
if let Some(value) = subject {
538
+
pairs.append_pair("sub", value);
539
+
}
534
540
}
535
541
536
-
let url = url_builder.build();
542
+
let url: String = url.into();
537
543
538
544
let response = http_client
539
-
.get(url)
545
+
.get(&url)
540
546
.bearer_auth(access_token)
541
547
.send()
542
548
.await
+15
-12
crates/atproto-oauth-axum/src/bin/atproto-oauth-tool.rs
···
30
30
use async_trait::async_trait;
31
31
use atproto_identity::{
32
32
config::{CertificateBundles, DnsNameservers, default_env, optional_env, require_env, version},
33
-
key::{KeyData, KeyProvider, KeyType, generate_key, identify_key, to_public},
34
-
storage::DidDocumentStorage,
33
+
key::{KeyData, KeyResolver, KeyType, generate_key, identify_key, to_public},
35
34
storage_lru::LruDidDocumentStorage,
35
+
traits::DidDocumentStorage,
36
36
};
37
37
38
38
#[cfg(feature = "hickory-dns")]
···
66
66
};
67
67
68
68
#[derive(Clone)]
69
-
pub struct SimpleKeyProvider {
69
+
pub struct SimpleKeyResolver {
70
70
keys: HashMap<String, KeyData>,
71
71
}
72
72
73
-
impl Default for SimpleKeyProvider {
73
+
impl Default for SimpleKeyResolver {
74
74
fn default() -> Self {
75
75
Self::new()
76
76
}
77
77
}
78
78
79
-
impl SimpleKeyProvider {
79
+
impl SimpleKeyResolver {
80
80
pub fn new() -> Self {
81
81
Self {
82
82
keys: HashMap::new(),
···
85
85
}
86
86
87
87
#[async_trait]
88
-
impl KeyProvider for SimpleKeyProvider {
89
-
async fn get_private_key_by_id(&self, key_id: &str) -> anyhow::Result<Option<KeyData>> {
90
-
Ok(self.keys.get(key_id).cloned())
88
+
impl KeyResolver for SimpleKeyResolver {
89
+
async fn resolve(&self, key_id: &str) -> anyhow::Result<KeyData> {
90
+
self.keys
91
+
.get(key_id)
92
+
.cloned()
93
+
.ok_or_else(|| anyhow::anyhow!("Key not found: {}", key_id))
91
94
}
92
95
}
93
96
···
97
100
pub oauth_client_config: OAuthClientConfig,
98
101
pub oauth_storage: Arc<dyn OAuthRequestStorage + Send + Sync>,
99
102
pub document_storage: Arc<dyn DidDocumentStorage + Send + Sync>,
100
-
pub key_provider: Arc<dyn KeyProvider + Send + Sync>,
103
+
pub key_resolver: Arc<dyn KeyResolver + Send + Sync>,
101
104
}
102
105
103
106
#[derive(Clone, FromRef)]
···
135
138
}
136
139
}
137
140
138
-
impl FromRef<WebContext> for Arc<dyn KeyProvider> {
141
+
impl FromRef<WebContext> for Arc<dyn KeyResolver> {
139
142
fn from_ref(context: &WebContext) -> Self {
140
-
context.0.key_provider.clone()
143
+
context.0.key_resolver.clone()
141
144
}
142
145
}
143
146
···
305
308
oauth_client_config: oauth_client_config.clone(),
306
309
oauth_storage: Arc::new(LruOAuthRequestStorage::new(NonZeroUsize::new(256).unwrap())),
307
310
document_storage: Arc::new(LruDidDocumentStorage::new(NonZeroUsize::new(255).unwrap())),
308
-
key_provider: Arc::new(SimpleKeyProvider {
311
+
key_resolver: Arc::new(SimpleKeyResolver {
309
312
keys: signing_key_storage,
310
313
}),
311
314
}));
+7
-9
crates/atproto-oauth-axum/src/handle_complete.rs
···
7
7
8
8
use anyhow::Result;
9
9
use atproto_identity::{
10
-
key::{KeyProvider, identify_key},
11
-
storage::DidDocumentStorage,
10
+
key::{KeyResolver, identify_key},
11
+
traits::DidDocumentStorage,
12
12
};
13
13
use atproto_oauth::{
14
14
resources::pds_resources,
···
61
61
client: HttpClient,
62
62
oauth_request_storage: State<Arc<dyn OAuthRequestStorage>>,
63
63
did_document_storage: State<Arc<dyn DidDocumentStorage>>,
64
-
key_provider: State<Arc<dyn KeyProvider>>,
64
+
key_resolver: State<Arc<dyn KeyResolver>>,
65
65
Form(callback_form): Form<OAuthCallbackForm>,
66
66
) -> Result<impl IntoResponse, OAuthCallbackError> {
67
67
let oauth_request = oauth_request_storage
···
77
77
});
78
78
}
79
79
80
-
let private_signing_key_data = key_provider
81
-
.get_private_key_by_id(&oauth_request.signing_public_key)
82
-
.await?;
83
-
84
-
let private_signing_key_data =
85
-
private_signing_key_data.ok_or(OAuthCallbackError::NoSigningKeyFound)?;
80
+
let private_signing_key_data = key_resolver
81
+
.resolve(&oauth_request.signing_public_key)
82
+
.await
83
+
.map_err(|_| OAuthCallbackError::NoSigningKeyFound)?;
86
84
87
85
let private_dpop_key_data = identify_key(&oauth_request.dpop_private_key)?;
88
86
+11
-10
crates/atproto-record/Cargo.toml
···
14
14
keywords.workspace = true
15
15
categories.workspace = true
16
16
17
-
[[bin]]
18
-
name = "atproto-record-sign"
17
+
[[bin]]
18
+
name = "atproto-record-cid"
19
19
test = false
20
20
bench = false
21
21
doc = true
22
-
required-features = ["clap", "tokio"]
23
-
24
-
[[bin]]
25
-
name = "atproto-record-verify"
26
-
test = false
27
-
bench = false
28
-
doc = true
29
-
required-features = ["clap", "tokio"]
22
+
required-features = ["clap"]
30
23
31
24
[dependencies]
32
25
atproto-identity.workspace = true
33
26
34
27
anyhow.workspace = true
35
28
base64.workspace = true
29
+
rand.workspace = true
36
30
serde_ipld_dagcbor.workspace = true
37
31
serde_json.workspace = true
38
32
serde.workspace = true
···
41
35
tokio = { workspace = true, optional = true }
42
36
chrono = {version = "0.4.41", default-features = false, features = ["std", "now", "serde"]}
43
37
clap = { workspace = true, optional = true }
38
+
cid = "0.11"
39
+
multihash = "0.19"
40
+
sha2 = { workspace = true }
41
+
42
+
[dev-dependencies]
43
+
async-trait = "0.1"
44
+
tokio = { workspace = true, features = ["macros", "rt"] }
44
45
45
46
[features]
46
47
default = ["hickory-dns"]
+51
-68
crates/atproto-record/README.md
···
1
1
# atproto-record
2
2
3
-
Cryptographic signature operations and utilities for AT Protocol records.
3
+
Utilities for working with AT Protocol records.
4
4
5
5
## Overview
6
6
7
-
A comprehensive Rust library for working with AT Protocol records, providing cryptographic signature creation and verification, AT-URI parsing, and datetime utilities. Built on IPLD DAG-CBOR serialization with support for P-256, P-384, and K-256 elliptic curve cryptography.
7
+
A Rust library for working with AT Protocol records, providing AT-URI parsing, TID generation, datetime formatting, and CID generation. Built on IPLD DAG-CBOR serialization for deterministic content addressing.
8
8
9
9
## Features
10
10
11
-
- **Record signing**: Create cryptographic signatures on AT Protocol records following community.lexicon.attestation.signature specification
12
-
- **Signature verification**: Verify record signatures against public keys with issuer validation
13
11
- **AT-URI parsing**: Parse and validate AT Protocol URIs (at://authority/collection/record_key) with robust error handling
14
-
- **IPLD serialization**: DAG-CBOR serialization ensuring deterministic and verifiable record encoding
15
-
- **Multi-curve support**: Full support for P-256, P-384, and K-256 elliptic curve signatures
12
+
- **TID generation**: Timestamp-based identifiers for AT Protocol records with microsecond precision
13
+
- **CID generation**: Content Identifier generation using DAG-CBOR serialization and SHA-256 hashing
16
14
- **DateTime utilities**: RFC 3339 datetime serialization with millisecond precision for consistent timestamp handling
15
+
- **Typed records**: Type-safe record handling with lexicon type validation
16
+
- **Bytes handling**: Base64 encoding/decoding for binary data in AT Protocol records
17
17
- **Structured errors**: Type-safe error handling following project conventions with detailed error messages
18
18
19
19
## CLI Tools
20
20
21
-
The following command-line tools are available when built with the `clap` feature:
21
+
The following command-line tool is available when built with the `clap` feature:
22
22
23
-
- **`atproto-record-sign`**: Sign AT Protocol records with private keys, supporting flexible argument ordering
24
-
- **`atproto-record-verify`**: Verify AT Protocol record signatures by validating cryptographic signatures against issuer DIDs and public keys
23
+
- **`atproto-record-cid`**: Generate CID (Content Identifier) for AT Protocol records from JSON input
25
24
26
25
## Library Usage
27
26
28
-
### Creating Signatures
27
+
### Generating CIDs
29
28
30
29
```rust
31
-
use atproto_record::signature;
32
-
use atproto_identity::key::identify_key;
33
30
use serde_json::json;
34
-
35
-
// Parse the signing key from a did:key
36
-
let key_data = identify_key("did:key:zQ3sh...")?;
37
-
38
-
// The record to sign
39
-
let record = json!({"$type": "app.bsky.feed.post", "text": "Hello world!"});
31
+
use cid::Cid;
32
+
use sha2::{Digest, Sha256};
33
+
use multihash::Multihash;
40
34
41
-
// Signature metadata (issuer is required, other fields are optional)
42
-
let signature_object = json!({
43
-
"issuer": "did:plc:issuer"
44
-
// Optional: "issuedAt", "purpose", "expiry", etc.
35
+
// Serialize a record to DAG-CBOR and generate its CID
36
+
let record = json!({
37
+
"$type": "app.bsky.feed.post",
38
+
"text": "Hello world!",
39
+
"createdAt": "2024-01-01T00:00:00.000Z"
45
40
});
46
41
47
-
// Create the signed record with embedded signatures array
48
-
let signed_record = signature::create(
49
-
&key_data,
50
-
&record,
51
-
"did:plc:repository",
52
-
"app.bsky.feed.post",
53
-
signature_object
54
-
).await?;
42
+
let dag_cbor_bytes = serde_ipld_dagcbor::to_vec(&record)?;
43
+
let hash = Sha256::digest(&dag_cbor_bytes);
44
+
let multihash = Multihash::wrap(0x12, &hash)?;
45
+
let cid = Cid::new_v1(0x71, multihash);
46
+
47
+
println!("Record CID: {}", cid);
55
48
```
56
49
57
-
### Verifying Signatures
50
+
### Generating TIDs
58
51
59
52
```rust
60
-
use atproto_record::signature;
61
-
use atproto_identity::key::identify_key;
53
+
use atproto_record::tid::Tid;
62
54
63
-
// Parse the public key for verification
64
-
let issuer_key = identify_key("did:key:zQ3sh...")?;
55
+
// Generate a new timestamp-based identifier
56
+
let tid = Tid::new();
57
+
println!("TID: {}", tid); // e.g., "3l2k4j5h6g7f8d9s"
65
58
66
-
// Verify the signature (throws error if invalid)
67
-
signature::verify(
68
-
"did:plc:issuer", // Expected issuer DID
69
-
&issuer_key, // Public key for verification
70
-
signed_record, // The signed record
71
-
"did:plc:repository", // Repository context
72
-
"app.bsky.feed.post" // Collection context
73
-
).await?;
59
+
// TIDs are sortable by creation time
60
+
let tid1 = Tid::new();
61
+
std::thread::sleep(std::time::Duration::from_millis(1));
62
+
let tid2 = Tid::new();
63
+
assert!(tid1 < tid2);
74
64
```
75
65
76
66
### AT-URI Parsing
···
110
100
111
101
## Command Line Usage
112
102
113
-
All CLI tools require the `clap` feature:
103
+
The CLI tool requires the `clap` feature:
114
104
115
105
```bash
116
106
# Build with CLI support
117
107
cargo build --features clap --bins
118
108
119
-
# Sign a record
120
-
cargo run --features clap --bin atproto-record-sign -- \
121
-
did:key:zQ3sh... # Signing key (did:key format)
122
-
did:plc:issuer # Issuer DID
123
-
record.json # Record file (or use -- for stdin)
124
-
repository=did:plc:repo # Repository context
125
-
collection=app.bsky.feed.post # Collection type
109
+
# Generate CID from JSON file
110
+
cat record.json | cargo run --features clap --bin atproto-record-cid
126
111
127
-
# Sign with custom fields (e.g., issuedAt, purpose, expiry)
128
-
cargo run --features clap --bin atproto-record-sign -- \
129
-
did:key:zQ3sh... did:plc:issuer record.json \
130
-
repository=did:plc:repo collection=app.bsky.feed.post \
131
-
issuedAt="2024-01-01T00:00:00.000Z" purpose="attestation"
112
+
# Generate CID from inline JSON
113
+
echo '{"$type":"app.bsky.feed.post","text":"Hello!"}' | cargo run --features clap --bin atproto-record-cid
132
114
133
-
# Verify a signature
134
-
cargo run --features clap --bin atproto-record-verify -- \
135
-
did:plc:issuer # Expected issuer DID
136
-
did:key:zQ3sh... # Verification key
137
-
signed.json # Signed record file
138
-
repository=did:plc:repo # Repository context (must match signing)
139
-
collection=app.bsky.feed.post # Collection type (must match signing)
115
+
# Example with a complete AT Protocol record
116
+
cat <<'EOF' | cargo run --features clap --bin atproto-record-cid  # quote the delimiter so "$type" stays literal
117
+
{
118
+
"$type": "app.bsky.feed.post",
119
+
"text": "Hello AT Protocol!",
120
+
"createdAt": "2024-01-01T00:00:00.000Z"
121
+
}
122
+
EOF
123
+
```
140
124
141
-
# Read from stdin
142
-
echo '{"text":"Hello"}' | cargo run --features clap --bin atproto-record-sign -- \
143
-
did:key:zQ3sh... did:plc:issuer -- \
144
-
repository=did:plc:repo collection=app.bsky.feed.post
125
+
The tool outputs the CID in base32 format:
126
+
```
127
+
bafyreibjzlvhtyxnhbvvzl3gj4qmg2ufl2jbhh5qr3gvvxlm7ksf3qwxqq
145
128
```
146
129
147
130
## License
148
131
149
-
MIT License
132
+
MIT License
+150
crates/atproto-record/src/bin/atproto-record-cid.rs
···
1
+
//! Command-line tool for generating CIDs from JSON records.
2
+
//!
3
+
//! This tool reads JSON from stdin, serializes it using IPLD DAG-CBOR format,
4
+
//! and outputs the corresponding CID (Content Identifier) using CIDv1 with
5
+
//! SHA-256 hashing. This matches the AT Protocol specification for content
6
+
//! addressing of records.
7
+
//!
8
+
//! # AT Protocol CID Format
9
+
//!
10
+
//! The tool generates CIDs that follow the AT Protocol specification:
11
+
//! - **CID Version**: CIDv1
12
+
//! - **Codec**: DAG-CBOR (0x71)
13
+
//! - **Hash Function**: SHA-256 (0x12)
14
+
//! - **Encoding**: Base32 (default for CIDv1)
15
+
//!
16
+
//! # Example Usage
17
+
//!
18
+
//! ```bash
19
+
//! # Generate CID from a simple JSON object
20
+
//! echo '{"text":"Hello, AT Protocol!"}' | cargo run --features clap --bin atproto-record-cid
21
+
//!
22
+
//! # Generate CID from a file
23
+
//! cat post.json | cargo run --features clap --bin atproto-record-cid
24
+
//!
25
+
//! # Generate CID from a complex record
26
+
//! echo '{
27
+
//! "$type": "app.bsky.feed.post",
28
+
//! "text": "Hello world",
29
+
//! "createdAt": "2025-01-19T10:00:00.000Z"
30
+
//! }' | cargo run --features clap --bin atproto-record-cid
31
+
//! ```
32
+
//!
33
+
//! # Output Format
34
+
//!
35
+
//! The tool outputs the CID as a single line string in the format:
36
+
//! ```text
37
+
//! bafyreibjzlvhtyxnhbvvzl3gj4qmg2ufl2jbhh5qr3gvvxlm7ksf3qwxqq
38
+
//! ```
39
+
//!
40
+
//! # Error Handling
41
+
//!
42
+
//! The tool will return an error if:
43
+
//! - Input is not valid JSON
44
+
//! - JSON cannot be serialized to DAG-CBOR
45
+
//! - CID generation fails
46
+
//!
47
+
//! # Technical Details
48
+
//!
49
+
//! The CID generation process:
50
+
//! 1. Read JSON from stdin
51
+
//! 2. Parse JSON into serde_json::Value
52
+
//! 3. Serialize to DAG-CBOR bytes using serde_ipld_dagcbor
53
+
//! 4. Hash the bytes using SHA-256
54
+
//! 5. Create CIDv1 with DAG-CBOR codec
55
+
//! 6. Output the CID string
56
+
57
+
use anyhow::Result;
58
+
use atproto_record::errors::CliError;
59
+
use cid::Cid;
60
+
use clap::Parser;
61
+
use multihash::Multihash;
62
+
use sha2::{Digest, Sha256};
63
+
use std::io::{self, Read};
64
+
65
+
/// AT Protocol Record CID Generator
66
+
#[derive(Parser)]
67
+
#[command(
68
+
name = "atproto-record-cid",
69
+
version,
70
+
about = "Generate CID for AT Protocol DAG-CBOR records from JSON",
71
+
long_about = "
72
+
A command-line tool for generating Content Identifiers (CIDs) from JSON records
73
+
using the AT Protocol DAG-CBOR serialization format.
74
+
75
+
The tool reads JSON from stdin, serializes it using IPLD DAG-CBOR format, and
76
+
outputs the corresponding CID using CIDv1 with SHA-256 hashing. This matches
77
+
the AT Protocol specification for content addressing of records.
78
+
79
+
CID FORMAT:
80
+
Version: CIDv1
81
+
Codec: DAG-CBOR (0x71)
82
+
Hash: SHA-256 (0x12)
83
+
Encoding: Base32 (default for CIDv1)
84
+
85
+
EXAMPLES:
86
+
# Generate CID from stdin:
87
+
echo '{\"text\":\"Hello!\"}' | atproto-record-cid
88
+
89
+
# Generate CID from a file:
90
+
cat post.json | atproto-record-cid
91
+
92
+
# Complex record with AT Protocol fields:
93
+
echo '{
94
+
\"$type\": \"app.bsky.feed.post\",
95
+
\"text\": \"Hello world\",
96
+
\"createdAt\": \"2025-01-19T10:00:00.000Z\"
97
+
}' | atproto-record-cid
98
+
99
+
OUTPUT:
100
+
The tool outputs a single line containing the CID:
101
+
bafyreibjzlvhtyxnhbvvzl3gj4qmg2ufl2jbhh5qr3gvvxlm7ksf3qwxqq
102
+
103
+
NOTES:
104
+
- Input must be valid JSON
105
+
- The same JSON input will always produce the same CID
106
+
- Field order in JSON objects may affect the CID due to DAG-CBOR serialization
107
+
- Special AT Protocol fields like $type, $sig, and $link are preserved
108
+
"
109
+
)]
110
+
struct Args {}
111
+
112
+
fn main() -> Result<()> {
113
+
let _args = Args::parse();
114
+
115
+
// Read JSON from stdin
116
+
let mut stdin_content = String::new();
117
+
io::stdin()
118
+
.read_to_string(&mut stdin_content)
119
+
.map_err(|_| CliError::StdinReadFailed)?;
120
+
121
+
// Parse JSON
122
+
let json_value: serde_json::Value =
123
+
serde_json::from_str(&stdin_content).map_err(|_| CliError::StdinJsonParseFailed)?;
124
+
125
+
// Serialize to DAG-CBOR
126
+
let dag_cbor_bytes = serde_ipld_dagcbor::to_vec(&json_value).map_err(|error| {
127
+
CliError::RecordSerializationFailed {
128
+
error: error.to_string(),
129
+
}
130
+
})?;
131
+
132
+
// Hash the bytes using SHA-256
133
+
// Code 0x12 is SHA-256, size 32 bytes
134
+
let mut hasher = Sha256::new();
135
+
hasher.update(&dag_cbor_bytes);
136
+
let hash_result = hasher.finalize();
137
+
138
+
let multihash =
139
+
Multihash::wrap(0x12, &hash_result).map_err(|error| CliError::CidGenerationFailed {
140
+
error: error.to_string(),
141
+
})?;
142
+
143
+
// Create CIDv1 with DAG-CBOR codec (0x71)
144
+
let cid = Cid::new_v1(0x71, multihash);
145
+
146
+
// Output the CID
147
+
println!("{}", cid);
148
+
149
+
Ok(())
150
+
}
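One way to sanity-check the determinism note in the help text above (identical JSON input should always yield an identical CID), using only the tool as documented:

```bash
# Identical JSON input must produce identical CIDs.
a=$(echo '{"text":"Hello, AT Protocol!"}' | cargo run --features clap --bin atproto-record-cid)
b=$(echo '{"text":"Hello, AT Protocol!"}' | cargo run --features clap --bin atproto-record-cid)
[ "$a" = "$b" ] && echo "deterministic: $a"
```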
-192
crates/atproto-record/src/bin/atproto-record-sign.rs
···
1
-
//! Command-line tool for signing AT Protocol records with cryptographic signatures.
2
-
//!
3
-
//! This tool creates cryptographic signatures on AT Protocol records using ECDSA
4
-
//! signatures with IPLD DAG-CBOR serialization. It supports flexible argument
5
-
//! ordering and customizable signature metadata.
6
-
7
-
use anyhow::Result;
8
-
use atproto_identity::{
9
-
key::{KeyData, identify_key},
10
-
resolve::{InputType, parse_input},
11
-
};
12
-
use atproto_record::errors::CliError;
13
-
use atproto_record::signature::create;
14
-
use clap::Parser;
15
-
use serde_json::json;
16
-
use std::{
17
-
collections::HashMap,
18
-
fs,
19
-
io::{self, Read},
20
-
};
21
-
22
-
/// AT Protocol Record Signing CLI
23
-
#[derive(Parser)]
24
-
#[command(
25
-
name = "atproto-record-sign",
26
-
version,
27
-
about = "Sign AT Protocol records with cryptographic signatures",
28
-
long_about = "
29
-
A command-line tool for signing AT Protocol records using DID keys. Reads a JSON
30
-
record from a file or stdin, applies a cryptographic signature, and outputs the
31
-
signed record with embedded signature metadata.
32
-
33
-
The tool accepts flexible argument ordering with DID keys, issuer DIDs, record
34
-
inputs, and key=value parameters for repository, collection, and custom metadata.
35
-
36
-
REQUIRED PARAMETERS:
37
-
repository=<DID> Repository context for the signature
38
-
collection=<name> Collection type context for the signature
39
-
40
-
OPTIONAL PARAMETERS:
41
-
Any additional key=value pairs are included in the signature metadata
42
-
(e.g., issuedAt=<timestamp>, purpose=<string>, expiry=<timestamp>)
43
-
44
-
EXAMPLES:
45
-
# Basic usage:
46
-
atproto-record-sign \\
47
-
did:key:z42tv1pb3Dzog28Q1udyieg1YJP3x1Un5vraE1bttXeCDSpW \\
48
-
./post.json \\
49
-
did:plc:tgudj2fjm77pzkuawquqhsxm \\
50
-
repository=did:plc:4zutorghlchjxzgceklue4la \\
51
-
collection=app.bsky.feed.post
52
-
53
-
# With custom metadata:
54
-
atproto-record-sign \\
55
-
did:key:z42tv1pb3... ./post.json did:plc:issuer... \\
56
-
repository=did:plc:repo... collection=app.bsky.feed.post \\
57
-
issuedAt=\"2024-01-01T00:00:00.000Z\" purpose=\"attestation\"
58
-
59
-
# Reading from stdin:
60
-
echo '{\"text\":\"Hello!\"}' | atproto-record-sign \\
61
-
did:key:z42tv1pb3... -- did:plc:issuer... \\
62
-
repository=did:plc:repo... collection=app.bsky.feed.post
63
-
64
-
SIGNATURE PROCESS:
65
-
- Creates $sig object with repository, collection, and custom metadata
66
-
- Serializes record using IPLD DAG-CBOR format
67
-
- Generates ECDSA signatures using P-256, P-384, or K-256 curves
68
-
- Embeds signatures with issuer and any provided metadata
69
-
"
70
-
)]
71
-
struct Args {
72
-
/// All arguments - flexible parsing handles DID keys, issuer DIDs, files, and key=value pairs
73
-
args: Vec<String>,
74
-
}
75
-
#[tokio::main]
76
-
async fn main() -> Result<()> {
77
-
let args = Args::parse();
78
-
79
-
let arguments = args.args.into_iter();
80
-
81
-
let mut collection: Option<String> = None;
82
-
let mut repository: Option<String> = None;
83
-
let mut record: Option<serde_json::Value> = None;
84
-
let mut issuer: Option<String> = None;
85
-
let mut key_data: Option<KeyData> = None;
86
-
let mut signature_extras: HashMap<String, String> = HashMap::default();
87
-
88
-
for argument in arguments {
89
-
if let Some((key, value)) = argument.split_once("=") {
90
-
match key {
91
-
"collection" => {
92
-
collection = Some(value.to_string());
93
-
}
94
-
"repository" => {
95
-
repository = Some(value.to_string());
96
-
}
97
-
_ => {
98
-
signature_extras.insert(key.to_string(), value.to_string());
99
-
}
100
-
}
101
-
} else if argument.starts_with("did:key:") {
102
-
// Parse the did:key to extract key data for signing
103
-
key_data = Some(identify_key(&argument)?);
104
-
} else if argument.starts_with("did:") {
105
-
match parse_input(&argument) {
106
-
Ok(InputType::Plc(did)) | Ok(InputType::Web(did)) => {
107
-
issuer = Some(did);
108
-
}
109
-
Ok(_) => {
110
-
return Err(CliError::UnsupportedDidMethod {
111
-
method: argument.clone(),
112
-
}
113
-
.into());
114
-
}
115
-
Err(_) => {
116
-
return Err(CliError::DidParseFailed {
117
-
did: argument.clone(),
118
-
}
119
-
.into());
120
-
}
121
-
}
122
-
} else if argument == "--" {
123
-
// Read record from stdin
124
-
if record.is_none() {
125
-
let mut stdin_content = String::new();
126
-
io::stdin()
127
-
.read_to_string(&mut stdin_content)
128
-
.map_err(|_| CliError::StdinReadFailed)?;
129
-
record = Some(
130
-
serde_json::from_str(&stdin_content)
131
-
.map_err(|_| CliError::StdinJsonParseFailed)?,
132
-
);
133
-
} else {
134
-
return Err(CliError::UnexpectedArgument {
135
-
argument: argument.clone(),
136
-
}
137
-
.into());
138
-
}
139
-
} else {
140
-
// Assume it's a file path to read the record from
141
-
if record.is_none() {
142
-
let file_content =
143
-
fs::read_to_string(&argument).map_err(|_| CliError::FileReadFailed {
144
-
path: argument.clone(),
145
-
})?;
146
-
record = Some(serde_json::from_str(&file_content).map_err(|_| {
147
-
CliError::FileJsonParseFailed {
148
-
path: argument.clone(),
149
-
}
150
-
})?);
151
-
} else {
152
-
return Err(CliError::UnexpectedArgument {
153
-
argument: argument.clone(),
154
-
}
155
-
.into());
156
-
}
157
-
}
158
-
}
159
-
160
-
let collection = collection.ok_or(CliError::MissingRequiredValue {
161
-
name: "collection".to_string(),
162
-
})?;
163
-
let repository = repository.ok_or(CliError::MissingRequiredValue {
164
-
name: "repository".to_string(),
165
-
})?;
166
-
let record = record.ok_or(CliError::MissingRequiredValue {
167
-
name: "record".to_string(),
168
-
})?;
169
-
let issuer = issuer.ok_or(CliError::MissingRequiredValue {
170
-
name: "issuer".to_string(),
171
-
})?;
172
-
let key_data = key_data.ok_or(CliError::MissingRequiredValue {
173
-
name: "signing_key".to_string(),
174
-
})?;
175
-
176
-
// Write "issuer" key to signature_extras
177
-
signature_extras.insert("issuer".to_string(), issuer);
178
-
179
-
let signature_object = json!(signature_extras);
180
-
let signed_record = create(
181
-
&key_data,
182
-
&record,
183
-
&repository,
184
-
&collection,
185
-
signature_object,
186
-
)?;
187
-
188
-
let pretty_signed_record = serde_json::to_string_pretty(&signed_record);
189
-
println!("{}", pretty_signed_record.unwrap());
190
-
191
-
Ok(())
192
-
}
-166
crates/atproto-record/src/bin/atproto-record-verify.rs
···
1
-
//! Command-line tool for verifying cryptographic signatures on AT Protocol records.
2
-
//!
3
-
//! This tool validates signatures on AT Protocol records by reconstructing the
4
-
//! signed content and verifying ECDSA signatures against public keys. It ensures
5
-
//! that records have valid signatures from specified issuers.
6
-
7
-
use anyhow::Result;
8
-
use atproto_identity::{
9
-
key::{KeyData, identify_key},
10
-
resolve::{InputType, parse_input},
11
-
};
12
-
use atproto_record::errors::CliError;
13
-
use atproto_record::signature::verify;
14
-
use clap::Parser;
15
-
use std::{
16
-
fs,
17
-
io::{self, Read},
18
-
};
19
-
20
-
/// AT Protocol Record Verification CLI
21
-
#[derive(Parser)]
22
-
#[command(
23
-
name = "atproto-record-verify",
24
-
version,
25
-
about = "Verify cryptographic signatures of AT Protocol records",
26
-
long_about = "
27
-
A command-line tool for verifying cryptographic signatures of AT Protocol records.
28
-
Reads a signed JSON record from a file or stdin, validates the embedded signatures
29
-
using a public key, and reports verification success or failure.
30
-
31
-
The tool accepts flexible argument ordering with issuer DIDs, verification keys,
32
-
record inputs, and key=value parameters for repository and collection context.
33
-
34
-
REQUIRED PARAMETERS:
35
-
repository=<DID> Repository context used during signing
36
-
collection=<name> Collection type context used during signing
37
-
38
-
EXAMPLES:
39
-
# Basic verification:
40
-
atproto-record-verify \\
41
-
did:plc:tgudj2fjm77pzkuawquqhsxm \\
42
-
did:key:z42tv1pb3Dzog28Q1udyieg1YJP3x1Un5vraE1bttXeCDSpW \\
43
-
./signed_post.json \\
44
-
repository=did:plc:4zutorghlchjxzgceklue4la \\
45
-
collection=app.bsky.feed.post
46
-
47
-
# Verify from stdin:
48
-
echo '{\"signatures\":[...]}' | atproto-record-verify \\
49
-
did:plc:issuer... did:key:z42tv1pb3... -- \\
50
-
repository=did:plc:repo... collection=app.bsky.feed.post
51
-
52
-
VERIFICATION PROCESS:
53
-
- Extracts signatures from the signatures array
54
-
- Finds signatures matching the specified issuer DID
55
-
- Reconstructs $sig object with repository and collection context
56
-
- Validates ECDSA signatures using P-256 or K-256 curves
57
-
"
58
-
)]
59
-
struct Args {
60
-
/// All arguments - flexible parsing handles issuer DIDs, verification keys, files, and key=value pairs
61
-
args: Vec<String>,
62
-
}
63
-
#[tokio::main]
64
-
async fn main() -> Result<()> {
65
-
let args = Args::parse();
66
-
67
-
let arguments = args.args.into_iter();
68
-
69
-
let mut collection: Option<String> = None;
70
-
let mut repository: Option<String> = None;
71
-
let mut record: Option<serde_json::Value> = None;
72
-
let mut issuer: Option<String> = None;
73
-
let mut key_data: Option<KeyData> = None;
74
-
75
-
for argument in arguments {
76
-
if let Some((key, value)) = argument.split_once("=") {
77
-
match key {
78
-
"collection" => {
79
-
collection = Some(value.to_string());
80
-
}
81
-
"repository" => {
82
-
repository = Some(value.to_string());
83
-
}
84
-
_ => {}
85
-
}
86
-
} else if argument.starts_with("did:key:") {
87
-
// Parse the did:key to extract key data for verification
88
-
key_data = Some(identify_key(&argument)?);
89
-
} else if argument.starts_with("did:") {
90
-
match parse_input(&argument) {
91
-
Ok(InputType::Plc(did)) | Ok(InputType::Web(did)) => {
92
-
issuer = Some(did);
93
-
}
94
-
Ok(_) => {
95
-
return Err(CliError::UnsupportedDidMethod {
96
-
method: argument.clone(),
97
-
}
98
-
.into());
99
-
}
100
-
Err(_) => {
101
-
return Err(CliError::DidParseFailed {
102
-
did: argument.clone(),
103
-
}
104
-
.into());
105
-
}
106
-
}
107
-
} else if argument == "--" {
108
-
// Read record from stdin
109
-
if record.is_none() {
110
-
let mut stdin_content = String::new();
111
-
io::stdin()
112
-
.read_to_string(&mut stdin_content)
113
-
.map_err(|_| CliError::StdinReadFailed)?;
114
-
record = Some(
115
-
serde_json::from_str(&stdin_content)
116
-
.map_err(|_| CliError::StdinJsonParseFailed)?,
117
-
);
118
-
} else {
119
-
return Err(CliError::UnexpectedArgument {
120
-
argument: argument.clone(),
121
-
}
122
-
.into());
123
-
}
124
-
} else {
125
-
// Assume it's a file path to read the record from
126
-
if record.is_none() {
127
-
let file_content =
128
-
fs::read_to_string(&argument).map_err(|_| CliError::FileReadFailed {
129
-
path: argument.clone(),
130
-
})?;
131
-
record = Some(serde_json::from_str(&file_content).map_err(|_| {
132
-
CliError::FileJsonParseFailed {
133
-
path: argument.clone(),
134
-
}
135
-
})?);
136
-
} else {
137
-
return Err(CliError::UnexpectedArgument {
138
-
argument: argument.clone(),
139
-
}
140
-
.into());
141
-
}
142
-
}
143
-
}
144
-
145
-
let collection = collection.ok_or(CliError::MissingRequiredValue {
146
-
name: "collection".to_string(),
147
-
})?;
148
-
let repository = repository.ok_or(CliError::MissingRequiredValue {
149
-
name: "repository".to_string(),
150
-
})?;
151
-
let record = record.ok_or(CliError::MissingRequiredValue {
152
-
name: "record".to_string(),
153
-
})?;
154
-
let issuer = issuer.ok_or(CliError::MissingRequiredValue {
155
-
name: "issuer".to_string(),
156
-
})?;
157
-
let key_data = key_data.ok_or(CliError::MissingRequiredValue {
158
-
name: "key".to_string(),
159
-
})?;
160
-
161
-
verify(&issuer, &key_data, record, &repository, &collection)?;
162
-
163
-
println!("OK");
164
-
165
-
Ok(())
166
-
}
+60
-1
crates/atproto-record/src/errors.rs
···
14
14
//! Errors occurring during AT-URI parsing and validation.
15
15
//! Error codes: aturi-1 through aturi-9
16
16
//!
17
+
//! ### `TidError` (Domain: tid)
18
+
//! Errors occurring during TID (Timestamp Identifier) parsing and decoding.
19
+
//! Error codes: tid-1 through tid-3
20
+
//!
17
21
//! ### `CliError` (Domain: cli)
18
22
//! Command-line interface specific errors for file I/O, argument parsing, and DID validation.
19
-
//! Error codes: cli-1 through cli-8
23
+
//! Error codes: cli-1 through cli-10
20
24
//!
21
25
//! ## Error Format
22
26
//!
···
222
226
EmptyRecordKey,
223
227
}
224
228
229
+
/// Errors that can occur during TID (Timestamp Identifier) operations.
230
+
///
231
+
/// This enum covers all validation failures when parsing and decoding TIDs,
232
+
/// including format violations, invalid characters, and encoding errors.
233
+
#[derive(Debug, Error)]
234
+
pub enum TidError {
235
+
/// Error when TID string length is invalid.
236
+
///
237
+
/// This error occurs when a TID string is not exactly 13 characters long,
238
+
/// which is required by the TID specification.
239
+
#[error("error-atproto-record-tid-1 Invalid TID length: expected {expected}, got {actual}")]
240
+
InvalidLength {
241
+
/// Expected length (always 13)
242
+
expected: usize,
243
+
/// Actual length of the provided string
244
+
actual: usize,
245
+
},
246
+
247
+
/// Error when TID contains an invalid character.
248
+
///
249
+
/// This error occurs when a TID string contains a character outside the
250
+
/// base32-sortable character set (234567abcdefghijklmnopqrstuvwxyz).
251
+
#[error("error-atproto-record-tid-2 Invalid character '{character}' at position {position}")]
252
+
InvalidCharacter {
253
+
/// The invalid character
254
+
character: char,
255
+
/// Position in the string (0-indexed)
256
+
position: usize,
257
+
},
258
+
259
+
/// Error when TID format is invalid.
260
+
///
261
+
/// This error occurs when the TID violates structural requirements,
262
+
/// such as having the top bit set (which must always be 0).
263
+
#[error("error-atproto-record-tid-3 Invalid TID format: {reason}")]
264
+
InvalidFormat {
265
+
/// Reason for the format violation
266
+
reason: String,
267
+
},
268
+
}
269
+
225
270
/// Errors specific to command-line interface operations.
226
271
///
227
272
/// This enum covers failures in CLI argument parsing, file I/O operations,
···
276
321
MissingRequiredValue {
277
322
/// The name of the missing value
278
323
name: String,
324
+
},
325
+
326
+
/// Occurs when record serialization to DAG-CBOR fails
327
+
#[error("error-atproto-record-cli-9 Failed to serialize record to DAG-CBOR: {error}")]
328
+
RecordSerializationFailed {
329
+
/// The underlying serialization error
330
+
error: String,
331
+
},
332
+
333
+
/// Occurs when CID generation fails
334
+
#[error("error-atproto-record-cli-10 Failed to generate CID: {error}")]
335
+
CidGenerationFailed {
336
+
/// The underlying CID generation error
337
+
error: String,
279
338
},
280
339
}
+205
crates/atproto-record/src/lexicon/app_bsky_richtext_facet.rs
···
1
+
//! AT Protocol rich text facet types.
2
+
//!
3
+
//! This module provides types for annotating rich text content with semantic
4
+
//! meaning, based on the `app.bsky.richtext.facet` lexicon. Facets enable
5
+
//! mentions, links, hashtags, and other structured metadata to be attached
6
+
//! to specific byte ranges within text content.
7
+
//!
8
+
//! # Overview
9
+
//!
10
+
//! Facets consist of:
11
+
//! - A byte range (start/end indices in UTF-8 encoded text)
12
+
//! - One or more features (mention, link, tag) that apply to that range
13
+
//!
14
+
//! # Example
15
+
//!
16
+
//! ```ignore
17
+
//! use atproto_record::lexicon::app::bsky::richtext::facet::{Facet, ByteSlice, FacetFeature, Mention};
18
+
//!
19
+
//! // Create a mention facet for "@alice.bsky.social"
20
+
//! let facet = Facet {
21
+
//! index: ByteSlice { byte_start: 0, byte_end: 19 },
22
+
//! features: vec![
23
+
//! FacetFeature::Mention(Mention {
24
+
//! did: "did:plc:alice123".to_string(),
25
+
//! })
26
+
//! ],
27
+
//! };
28
+
//! ```
29
+
30
+
use serde::{Deserialize, Serialize};
31
+
32
+
/// Byte range specification for facet features.
33
+
///
34
+
/// Specifies the sub-string range a facet feature applies to using
35
+
/// zero-indexed byte offsets in UTF-8 encoded text. Start index is
36
+
/// inclusive, end index is exclusive.
37
+
///
38
+
/// # Example
39
+
///
40
+
/// ```ignore
41
+
/// use atproto_record::lexicon::app::bsky::richtext::facet::ByteSlice;
42
+
///
43
+
/// // Represents bytes 0-5 of the text
44
+
/// let slice = ByteSlice {
45
+
/// byte_start: 0,
46
+
/// byte_end: 5,
47
+
/// };
48
+
/// ```
49
+
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
50
+
#[serde(rename_all = "camelCase")]
51
+
pub struct ByteSlice {
52
+
/// Starting byte index (inclusive)
53
+
pub byte_start: usize,
54
+
55
+
/// Ending byte index (exclusive)
56
+
pub byte_end: usize,
57
+
}
58
+
59
+
/// Mention facet feature for referencing another account.
60
+
///
61
+
/// The text content typically displays a handle with '@' prefix (e.g., "@alice.bsky.social"),
62
+
/// but the facet reference must use the account's DID for stable identification.
63
+
///
64
+
/// # Example
65
+
///
66
+
/// ```ignore
67
+
/// use atproto_record::lexicon::app::bsky::richtext::facet::Mention;
68
+
///
69
+
/// let mention = Mention {
70
+
/// did: "did:plc:alice123".to_string(),
71
+
/// };
72
+
/// ```
73
+
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
74
+
pub struct Mention {
75
+
/// DID of the mentioned account
76
+
pub did: String,
77
+
}
78
+
79
+
/// Link facet feature for URL references.
80
+
///
81
+
/// The text content may be simplified or truncated for display purposes,
82
+
/// but the facet reference should contain the complete, valid URL.
83
+
///
84
+
/// # Example
85
+
///
86
+
/// ```ignore
87
+
/// use atproto_record::lexicon::app::bsky::richtext::facet::Link;
88
+
///
89
+
/// let link = Link {
90
+
/// uri: "https://example.com/full/path".to_string(),
91
+
/// };
92
+
/// ```
93
+
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
94
+
pub struct Link {
95
+
/// Complete URI/URL for the link
96
+
pub uri: String,
97
+
}
98
+
99
+
/// Tag facet feature for hashtags.
100
+
///
101
+
/// The text content typically includes a '#' prefix for display,
102
+
/// but the facet reference should contain only the tag text without the prefix.
103
+
///
104
+
/// # Example
105
+
///
106
+
/// ```ignore
107
+
/// use atproto_record::lexicon::app::bsky::richtext::facet::Tag;
108
+
///
109
+
/// // For text "#atproto", store just "atproto"
110
+
/// let tag = Tag {
111
+
/// tag: "atproto".to_string(),
112
+
/// };
113
+
/// ```
114
+
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
115
+
pub struct Tag {
116
+
/// Tag text without '#' prefix
117
+
pub tag: String,
118
+
}
119
+
120
+
/// Discriminated union of facet feature types.
121
+
///
122
+
/// Represents the different types of semantic annotations that can be
123
+
/// applied to text ranges. Each variant corresponds to a specific lexicon
124
+
/// type in the `app.bsky.richtext.facet` namespace.
125
+
///
126
+
/// # Example
127
+
///
128
+
/// ```ignore
129
+
/// use atproto_record::lexicon::app::bsky::richtext::facet::{FacetFeature, Mention, Link, Tag};
130
+
///
131
+
/// // Create different feature types
132
+
/// let mention = FacetFeature::Mention(Mention {
133
+
/// did: "did:plc:alice123".to_string(),
134
+
/// });
135
+
///
136
+
/// let link = FacetFeature::Link(Link {
137
+
/// uri: "https://example.com".to_string(),
138
+
/// });
139
+
///
140
+
/// let tag = FacetFeature::Tag(Tag {
141
+
/// tag: "rust".to_string(),
142
+
/// });
143
+
/// ```
144
+
#[derive(Serialize, Deserialize, Clone, PartialEq)]
145
+
#[cfg_attr(debug_assertions, derive(Debug))]
146
+
#[serde(tag = "$type")]
147
+
pub enum FacetFeature {
148
+
/// Account mention feature
149
+
#[serde(rename = "app.bsky.richtext.facet#mention")]
150
+
Mention(Mention),
151
+
152
+
/// URL link feature
153
+
#[serde(rename = "app.bsky.richtext.facet#link")]
154
+
Link(Link),
155
+
156
+
/// Hashtag feature
157
+
#[serde(rename = "app.bsky.richtext.facet#tag")]
158
+
Tag(Tag),
159
+
}
160
+
161
+
/// Rich text facet annotation.
162
+
///
163
+
/// Associates one or more semantic features with a specific byte range
164
+
/// within text content. Multiple features can apply to the same range
165
+
/// (e.g., a URL that is also a hashtag).
166
+
///
167
+
/// # Example
168
+
///
169
+
/// ```ignore
170
+
/// use atproto_record::lexicon::app::bsky::richtext::facet::{
171
+
/// Facet, ByteSlice, FacetFeature, Mention, Link
172
+
/// };
173
+
///
174
+
/// // Annotate "@alice.bsky.social" at bytes 0-19
175
+
/// let facet = Facet {
176
+
/// index: ByteSlice { byte_start: 0, byte_end: 19 },
177
+
/// features: vec![
178
+
/// FacetFeature::Mention(Mention {
179
+
/// did: "did:plc:alice123".to_string(),
180
+
/// }),
181
+
/// ],
182
+
/// };
183
+
///
184
+
/// // Multiple features for the same range
185
+
/// let multi_facet = Facet {
186
+
/// index: ByteSlice { byte_start: 20, byte_end: 35 },
187
+
/// features: vec![
188
+
/// FacetFeature::Link(Link {
189
+
/// uri: "https://example.com".to_string(),
190
+
/// }),
191
+
/// FacetFeature::Tag(Tag {
192
+
/// tag: "example".to_string(),
193
+
/// }),
194
+
/// ],
195
+
/// };
196
+
/// ```
197
+
#[derive(Serialize, Deserialize, Clone, PartialEq)]
198
+
#[cfg_attr(debug_assertions, derive(Debug))]
199
+
pub struct Facet {
200
+
/// Byte range this facet applies to
201
+
pub index: ByteSlice,
202
+
203
+
/// Semantic features applied to this range
204
+
pub features: Vec<FacetFeature>,
205
+
}
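Byte offsets in a `ByteSlice` are UTF-8 byte positions, not character counts. A rough helper like the following illustrates deriving a range from the text itself (a sketch; the `u64` field widths are an assumption):

```rust
// Hypothetical helper: locate a handle in post text and annotate it as a mention.
// `str::find` returns a byte offset, which is what the index range expects.
fn mention_facet(text: &str, handle: &str, did: &str) -> Option<Facet> {
    let start = text.find(handle)?;
    let end = start + handle.len();
    Some(Facet {
        index: ByteSlice {
            byte_start: start as u64, // field width assumed to be u64
            byte_end: end as u64,
        },
        features: vec![FacetFeature::Mention(Mention {
            did: did.to_string(),
        })],
    })
}
```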
+19
-68
crates/atproto-record/src/lexicon/community_lexicon_attestation.rs
···
30
30
///
31
31
/// // Inline signature
32
32
/// let inline = SignatureOrRef::Inline(create_typed_signature(
33
-
/// "did:plc:issuer".to_string(),
34
33
/// Bytes { bytes: b"signature".to_vec() },
35
34
/// ));
36
35
///
···
55
54
56
55
/// Cryptographic signature structure.
57
56
///
58
-
/// Represents a signature created by an issuer (identified by DID) over
59
-
/// some data. The signature can be used to verify authenticity, authorization,
60
-
/// or other properties of the signed content.
57
+
/// Represents a cryptographic signature over some data. The signature can be
58
+
/// used to verify authenticity, authorization, or other properties of the
59
+
/// signed content.
61
60
///
62
61
/// # Fields
63
62
///
64
-
/// - `issuer`: DID of the entity that created the signature
65
63
/// - `signature`: The actual signature bytes
66
64
/// - `extra`: Additional fields that may be present in the signature
67
65
///
···
73
71
/// use std::collections::HashMap;
74
72
///
75
73
/// let sig = Signature {
76
-
/// issuer: "did:plc:example".to_string(),
77
74
/// signature: Bytes { bytes: b"signature_bytes".to_vec() },
78
75
/// extra: HashMap::new(),
79
76
/// };
···
81
78
#[derive(Deserialize, Serialize, Clone, PartialEq)]
82
79
#[cfg_attr(debug_assertions, derive(Debug))]
83
80
pub struct Signature {
84
-
/// DID of the entity that created this signature
85
-
pub issuer: String,
86
-
87
81
/// The cryptographic signature bytes
88
82
pub signature: Bytes,
89
83
···
116
110
///
117
111
/// # Arguments
118
112
///
119
-
/// * `issuer` - DID of the signature issuer
120
113
/// * `signature` - The signature bytes
121
114
///
122
115
/// # Example
···
126
119
/// use atproto_record::lexicon::Bytes;
127
120
///
128
121
/// let sig = create_typed_signature(
129
-
/// "did:plc:issuer".to_string(),
130
122
/// Bytes { bytes: b"sig_data".to_vec() },
131
123
/// );
132
124
/// ```
133
-
pub fn create_typed_signature(issuer: String, signature: Bytes) -> TypedSignature {
125
+
pub fn create_typed_signature(signature: Bytes) -> TypedSignature {
134
126
TypedLexicon::new(Signature {
135
-
issuer,
136
127
signature,
137
128
extra: HashMap::new(),
138
129
})
···
150
141
let json_str = r#"{
151
142
"$type": "community.lexicon.attestation.signature",
152
143
"issuedAt": "2025-08-19T20:17:17.133Z",
153
-
"issuer": "did:web:acudo-dev.smokesignal.tools",
154
144
"signature": {
155
145
"$bytes": "mr9c0MCu3g6SXNQ25JFhzfX1ecYgK9k1Kf6OZI2p2AlQRoQu09dOE7J5uaeilIx/UFCjJErO89C/uBBb9ANmUA"
156
146
}
···
160
150
let typed_sig_result: Result<TypedSignature, _> = serde_json::from_str(json_str);
161
151
match &typed_sig_result {
162
152
Ok(sig) => {
163
-
println!("TypedSignature OK: issuer={}", sig.inner.issuer);
164
-
assert_eq!(sig.inner.issuer, "did:web:acudo-dev.smokesignal.tools");
153
+
println!("TypedSignature OK: signature bytes len={}", sig.inner.signature.bytes.len());
154
+
assert_eq!(sig.inner.signature.bytes.len(), 64);
165
155
}
166
156
Err(e) => {
167
157
eprintln!("TypedSignature deserialization error: {}", e);
···
172
162
let sig_or_ref_result: Result<SignatureOrRef, _> = serde_json::from_str(json_str);
173
163
match &sig_or_ref_result {
174
164
Ok(SignatureOrRef::Inline(sig)) => {
175
-
println!("SignatureOrRef OK (Inline): issuer={}", sig.inner.issuer);
176
-
assert_eq!(sig.inner.issuer, "did:web:acudo-dev.smokesignal.tools");
165
+
println!("SignatureOrRef OK (Inline): signature bytes len={}", sig.inner.signature.bytes.len());
166
+
assert_eq!(sig.inner.signature.bytes.len(), 64);
177
167
}
178
168
Ok(SignatureOrRef::Reference(_)) => {
179
169
panic!("Expected Inline signature, got Reference");
···
186
176
// Try without $type field
187
177
let json_no_type = r#"{
188
178
"issuedAt": "2025-08-19T20:17:17.133Z",
189
-
"issuer": "did:web:acudo-dev.smokesignal.tools",
190
179
"signature": {
191
180
"$bytes": "mr9c0MCu3g6SXNQ25JFhzfX1ecYgK9k1Kf6OZI2p2AlQRoQu09dOE7J5uaeilIx/UFCjJErO89C/uBBb9ANmUA"
192
181
}
···
195
184
let no_type_result: Result<Signature, _> = serde_json::from_str(json_no_type);
196
185
match &no_type_result {
197
186
Ok(sig) => {
198
-
println!("Signature (no type) OK: issuer={}", sig.issuer);
199
-
assert_eq!(sig.issuer, "did:web:acudo-dev.smokesignal.tools");
187
+
println!("Signature (no type) OK: signature bytes len={}", sig.signature.bytes.len());
200
188
assert_eq!(sig.signature.bytes.len(), 64);
201
189
202
190
// Now wrap it in TypedLexicon and try as SignatureOrRef
···
220
208
fn test_signature_deserialization() {
221
209
let json_str = r#"{
222
210
"$type": "community.lexicon.attestation.signature",
223
-
"issuer": "did:plc:test123",
224
211
"signature": {"$bytes": "dGVzdCBzaWduYXR1cmU="}
225
212
}"#;
226
213
227
214
let signature: Signature = serde_json::from_str(json_str).unwrap();
228
215
229
-
assert_eq!(signature.issuer, "did:plc:test123");
230
216
assert_eq!(signature.signature.bytes, b"test signature");
231
217
// The $type field will be captured in extra due to #[serde(flatten)]
232
218
assert_eq!(signature.extra.len(), 1);
···
237
223
fn test_signature_deserialization_with_extra_fields() {
238
224
let json_str = r#"{
239
225
"$type": "community.lexicon.attestation.signature",
240
-
"issuer": "did:plc:test123",
241
226
"signature": {"$bytes": "dGVzdCBzaWduYXR1cmU="},
242
227
"issuedAt": "2024-01-01T00:00:00.000Z",
243
228
"purpose": "verification"
···
245
230
246
231
let signature: Signature = serde_json::from_str(json_str).unwrap();
247
232
248
-
assert_eq!(signature.issuer, "did:plc:test123");
249
233
assert_eq!(signature.signature.bytes, b"test signature");
250
234
// 3 extra fields: $type, issuedAt, purpose
251
235
assert_eq!(signature.extra.len(), 3);
···
263
247
extra.insert("custom_field".to_string(), json!("custom_value"));
264
248
265
249
let signature = Signature {
266
-
issuer: "did:plc:serializer".to_string(),
267
250
signature: Bytes {
268
251
bytes: b"hello world".to_vec(),
269
252
},
···
274
257
275
258
// Without custom Serialize impl, $type is not automatically added
276
259
assert!(!json.as_object().unwrap().contains_key("$type"));
277
-
assert_eq!(json["issuer"], "did:plc:serializer");
278
260
// "hello world" base64 encoded is "aGVsbG8gd29ybGQ="
279
261
assert_eq!(json["signature"]["$bytes"], "aGVsbG8gd29ybGQ=");
280
262
assert_eq!(json["custom_field"], "custom_value");
···
283
265
#[test]
284
266
fn test_signature_round_trip() {
285
267
let original = Signature {
286
-
issuer: "did:plc:roundtrip".to_string(),
287
268
signature: Bytes {
288
269
bytes: b"round trip test".to_vec(),
289
270
},
···
296
277
// Deserialize back
297
278
let deserialized: Signature = serde_json::from_str(&json).unwrap();
298
279
299
-
assert_eq!(original.issuer, deserialized.issuer);
300
280
assert_eq!(original.signature.bytes, deserialized.signature.bytes);
301
281
// Without the custom Serialize impl, no $type is added
302
282
// so the round-trip preserves the empty extra map
···
317
297
extra.insert("tags".to_string(), json!(["tag1", "tag2", "tag3"]));
318
298
319
299
let signature = Signature {
320
-
issuer: "did:plc:complex".to_string(),
321
300
signature: Bytes {
322
301
bytes: vec![0xFF, 0xEE, 0xDD, 0xCC, 0xBB, 0xAA],
323
302
},
···
328
307
329
308
// Without custom Serialize impl, $type is not automatically added
330
309
assert!(!json.as_object().unwrap().contains_key("$type"));
331
-
assert_eq!(json["issuer"], "did:plc:complex");
332
310
assert_eq!(json["timestamp"], 1234567890);
333
311
assert_eq!(json["metadata"]["version"], "1.0");
334
312
assert_eq!(json["metadata"]["algorithm"], "ES256");
···
338
316
#[test]
339
317
fn test_empty_signature() {
340
318
let signature = Signature {
341
-
issuer: String::new(),
342
319
signature: Bytes { bytes: Vec::new() },
343
320
extra: HashMap::new(),
344
321
};
···
347
324
348
325
// Without custom Serialize impl, $type is not automatically added
349
326
assert!(!json.as_object().unwrap().contains_key("$type"));
350
-
assert_eq!(json["issuer"], "");
351
327
assert_eq!(json["signature"]["$bytes"], ""); // Empty bytes encode to empty string
352
328
}
353
329
···
356
332
// Test with plain Vec<Signature> for basic signature serialization
357
333
let signatures: Vec<Signature> = vec![
358
334
Signature {
359
-
issuer: "did:plc:first".to_string(),
360
335
signature: Bytes {
361
336
bytes: b"first".to_vec(),
362
337
},
363
338
extra: HashMap::new(),
364
339
},
365
340
Signature {
366
-
issuer: "did:plc:second".to_string(),
367
341
signature: Bytes {
368
342
bytes: b"second".to_vec(),
369
343
},
···
375
349
376
350
assert!(json.is_array());
377
351
assert_eq!(json.as_array().unwrap().len(), 2);
378
-
assert_eq!(json[0]["issuer"], "did:plc:first");
379
-
assert_eq!(json[1]["issuer"], "did:plc:second");
352
+
assert_eq!(json[0]["signature"]["$bytes"], "Zmlyc3Q="); // "first" in base64
353
+
assert_eq!(json[1]["signature"]["$bytes"], "c2Vjb25k"); // "second" in base64
380
354
}
381
355
382
356
#[test]
···
384
358
// Test the new Signatures type with inline signatures
385
359
let signatures: Signatures = vec![
386
360
SignatureOrRef::Inline(create_typed_signature(
387
-
"did:plc:first".to_string(),
388
361
Bytes {
389
362
bytes: b"first".to_vec(),
390
363
},
391
364
)),
392
365
SignatureOrRef::Inline(create_typed_signature(
393
-
"did:plc:second".to_string(),
394
366
Bytes {
395
367
bytes: b"second".to_vec(),
396
368
},
···
402
374
assert!(json.is_array());
403
375
assert_eq!(json.as_array().unwrap().len(), 2);
404
376
assert_eq!(json[0]["$type"], "community.lexicon.attestation.signature");
405
-
assert_eq!(json[0]["issuer"], "did:plc:first");
377
+
assert_eq!(json[0]["signature"]["$bytes"], "Zmlyc3Q="); // "first" in base64
406
378
assert_eq!(json[1]["$type"], "community.lexicon.attestation.signature");
407
-
assert_eq!(json[1]["issuer"], "did:plc:second");
379
+
assert_eq!(json[1]["signature"]["$bytes"], "c2Vjb25k"); // "second" in base64
408
380
}
409
381
410
382
#[test]
411
383
fn test_typed_signature_serialization() {
412
384
let typed_sig = create_typed_signature(
413
-
"did:plc:typed".to_string(),
414
385
Bytes {
415
386
bytes: b"typed signature".to_vec(),
416
387
},
···
419
390
let json = serde_json::to_value(&typed_sig).unwrap();
420
391
421
392
assert_eq!(json["$type"], "community.lexicon.attestation.signature");
422
-
assert_eq!(json["issuer"], "did:plc:typed");
423
393
// "typed signature" base64 encoded
424
394
assert_eq!(json["signature"]["$bytes"], "dHlwZWQgc2lnbmF0dXJl");
425
395
}
···
428
398
fn test_typed_signature_deserialization() {
429
399
let json = json!({
430
400
"$type": "community.lexicon.attestation.signature",
431
-
"issuer": "did:plc:typed",
432
401
"signature": {"$bytes": "dHlwZWQgc2lnbmF0dXJl"}
433
402
});
434
403
435
404
let typed_sig: TypedSignature = serde_json::from_value(json).unwrap();
436
405
437
-
assert_eq!(typed_sig.inner.issuer, "did:plc:typed");
438
406
assert_eq!(typed_sig.inner.signature.bytes, b"typed signature");
439
407
assert!(typed_sig.has_type_field());
440
408
assert!(typed_sig.validate().is_ok());
···
443
411
#[test]
444
412
fn test_typed_signature_without_type_field() {
445
413
let json = json!({
446
-
"issuer": "did:plc:notype",
447
414
"signature": {"$bytes": "bm8gdHlwZQ=="} // "no type" in base64
448
415
});
449
416
450
417
let typed_sig: TypedSignature = serde_json::from_value(json).unwrap();
451
418
452
-
assert_eq!(typed_sig.inner.issuer, "did:plc:notype");
453
419
assert_eq!(typed_sig.inner.signature.bytes, b"no type");
454
420
assert!(!typed_sig.has_type_field());
455
421
// Validation should still pass because type_required() returns false for Signature
···
459
425
#[test]
460
426
fn test_typed_signature_with_extra_fields() {
461
427
let mut sig = Signature {
462
-
issuer: "did:plc:extra".to_string(),
463
428
signature: Bytes {
464
429
bytes: b"extra test".to_vec(),
465
430
},
···
474
439
let json = serde_json::to_value(&typed_sig).unwrap();
475
440
476
441
assert_eq!(json["$type"], "community.lexicon.attestation.signature");
477
-
assert_eq!(json["issuer"], "did:plc:extra");
478
442
assert_eq!(json["customField"], "customValue");
479
443
assert_eq!(json["timestamp"], 1234567890);
480
444
}
···
482
446
#[test]
483
447
fn test_typed_signature_round_trip() {
484
448
let original = Signature {
485
-
issuer: "did:plc:roundtrip2".to_string(),
486
449
signature: Bytes {
487
450
bytes: b"round trip typed".to_vec(),
488
451
},
···
494
457
let json = serde_json::to_string(&typed).unwrap();
495
458
let deserialized: TypedSignature = serde_json::from_str(&json).unwrap();
496
459
497
-
assert_eq!(deserialized.inner.issuer, original.issuer);
498
460
assert_eq!(deserialized.inner.signature.bytes, original.signature.bytes);
499
461
assert!(deserialized.has_type_field());
500
462
}
···
503
465
fn test_typed_signatures_vec() {
504
466
let typed_sigs: Vec<TypedSignature> = vec![
505
467
create_typed_signature(
506
-
"did:plc:first".to_string(),
507
468
Bytes {
508
469
bytes: b"first".to_vec(),
509
470
},
510
471
),
511
472
create_typed_signature(
512
-
"did:plc:second".to_string(),
513
473
Bytes {
514
474
bytes: b"second".to_vec(),
515
475
},
···
520
480
521
481
assert!(json.is_array());
522
482
assert_eq!(json[0]["$type"], "community.lexicon.attestation.signature");
523
-
assert_eq!(json[0]["issuer"], "did:plc:first");
483
+
assert_eq!(json[0]["signature"]["$bytes"], "Zmlyc3Q="); // "first" in base64
524
484
assert_eq!(json[1]["$type"], "community.lexicon.attestation.signature");
525
-
assert_eq!(json[1]["issuer"], "did:plc:second");
485
+
assert_eq!(json[1]["signature"]["$bytes"], "c2Vjb25k"); // "second" in base64
526
486
}
527
487
528
488
#[test]
529
489
fn test_plain_vs_typed_signature() {
530
490
// Plain Signature doesn't include $type field
531
491
let plain_sig = Signature {
532
-
issuer: "did:plc:plain".to_string(),
533
492
signature: Bytes {
534
493
bytes: b"plain sig".to_vec(),
535
494
},
···
548
507
);
549
508
550
509
// Both have the same core data
551
-
assert_eq!(plain_json["issuer"], typed_json["issuer"]);
552
510
assert_eq!(plain_json["signature"], typed_json["signature"]);
553
511
}
554
512
···
556
514
fn test_signature_or_ref_inline() {
557
515
// Test inline signature
558
516
let inline_sig = create_typed_signature(
559
-
"did:plc:inline".to_string(),
560
517
Bytes {
561
518
bytes: b"inline signature".to_vec(),
562
519
},
···
567
524
// Serialize
568
525
let json = serde_json::to_value(&sig_or_ref).unwrap();
569
526
assert_eq!(json["$type"], "community.lexicon.attestation.signature");
570
-
assert_eq!(json["issuer"], "did:plc:inline");
571
527
assert_eq!(json["signature"]["$bytes"], "aW5saW5lIHNpZ25hdHVyZQ=="); // "inline signature" in base64
572
528
573
529
// Deserialize
574
530
let deserialized: SignatureOrRef = serde_json::from_value(json.clone()).unwrap();
575
531
match deserialized {
576
532
SignatureOrRef::Inline(sig) => {
577
-
assert_eq!(sig.inner.issuer, "did:plc:inline");
578
533
assert_eq!(sig.inner.signature.bytes, b"inline signature");
579
534
}
580
535
_ => panic!("Expected inline signature"),
···
621
576
let signatures: Signatures = vec![
622
577
// Inline signature
623
578
SignatureOrRef::Inline(create_typed_signature(
624
-
"did:plc:signer1".to_string(),
625
579
Bytes {
626
580
bytes: b"sig1".to_vec(),
627
581
},
···
633
587
})),
634
588
// Another inline signature
635
589
SignatureOrRef::Inline(create_typed_signature(
636
-
"did:plc:signer3".to_string(),
637
590
Bytes {
638
591
bytes: b"sig3".to_vec(),
639
592
},
···
648
601
649
602
// First element should be inline signature
650
603
assert_eq!(array[0]["$type"], "community.lexicon.attestation.signature");
651
-
assert_eq!(array[0]["issuer"], "did:plc:signer1");
604
+
assert_eq!(array[0]["signature"]["$bytes"], "c2lnMQ=="); // "sig1" in base64
652
605
653
606
// Second element should be reference
654
607
assert_eq!(array[1]["$type"], "com.atproto.repo.strongRef");
···
659
612
660
613
// Third element should be inline signature
661
614
assert_eq!(array[2]["$type"], "community.lexicon.attestation.signature");
662
-
assert_eq!(array[2]["issuer"], "did:plc:signer3");
615
+
assert_eq!(array[2]["signature"]["$bytes"], "c2lnMw=="); // "sig3" in base64
663
616
664
617
// Deserialize back
665
618
let deserialized: Signatures = serde_json::from_value(json).unwrap();
···
667
620
668
621
// Verify each element
669
622
match &deserialized[0] {
670
-
SignatureOrRef::Inline(sig) => assert_eq!(sig.inner.issuer, "did:plc:signer1"),
623
+
SignatureOrRef::Inline(sig) => assert_eq!(sig.inner.signature.bytes, b"sig1"),
671
624
_ => panic!("Expected inline signature at index 0"),
672
625
}
673
626
···
682
635
}
683
636
684
637
match &deserialized[2] {
685
-
SignatureOrRef::Inline(sig) => assert_eq!(sig.inner.issuer, "did:plc:signer3"),
638
+
SignatureOrRef::Inline(sig) => assert_eq!(sig.inner.signature.bytes, b"sig3"),
686
639
_ => panic!("Expected inline signature at index 2"),
687
640
}
688
641
}
···
694
647
// Inline signature JSON
695
648
let inline_json = r#"{
696
649
"$type": "community.lexicon.attestation.signature",
697
-
"issuer": "did:plc:testinline",
698
650
"signature": {"$bytes": "aGVsbG8="}
699
651
}"#;
700
652
701
653
let inline_deser: SignatureOrRef = serde_json::from_str(inline_json).unwrap();
702
654
match inline_deser {
703
655
SignatureOrRef::Inline(sig) => {
704
-
assert_eq!(sig.inner.issuer, "did:plc:testinline");
705
656
assert_eq!(sig.inner.signature.bytes, b"hello");
706
657
}
707
658
_ => panic!("Expected inline signature"),
+1
-2
crates/atproto-record/src/lexicon/community_lexicon_badge.rs
···
311
311
// The signature should be inline in this test
312
312
match sig_or_ref {
313
313
crate::lexicon::community_lexicon_attestation::SignatureOrRef::Inline(sig) => {
314
-
assert_eq!(sig.issuer, "did:plc:issuer");
315
314
// The bytes should match the decoded base64 value
316
315
// "dGVzdCBzaWduYXR1cmU=" decodes to "test signature"
317
-
assert_eq!(sig.signature.bytes, b"test signature".to_vec());
316
+
assert_eq!(sig.inner.signature.bytes, b"test signature".to_vec());
318
317
}
319
318
_ => panic!("Expected inline signature"),
320
319
}
+43
-9
crates/atproto-record/src/lexicon/community_lexicon_calendar_event.rs
···
10
10
11
11
use crate::datetime::format as datetime_format;
12
12
use crate::datetime::optional_format as optional_datetime_format;
13
+
use crate::lexicon::app::bsky::richtext::facet::Facet;
13
14
use crate::lexicon::TypedBlob;
14
15
use crate::lexicon::community::lexicon::location::Locations;
15
16
use crate::typed::{LexiconType, TypedLexicon};
16
17
17
-
/// The namespace identifier for events
18
+
/// Lexicon namespace identifier for calendar events.
19
+
///
20
+
/// Used as the `$type` field value for event records in the AT Protocol.
18
21
pub const NSID: &str = "community.lexicon.calendar.event";
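As a quick illustration of how the constant is used (a sketch assuming `serde_json`; the `name` field is a placeholder, not part of the lexicon shown here):

```rust
use serde_json::json;

// Sketch: NSID is the value carried in the record's "$type" field.
let record = json!({
    "$type": NSID,
    "name": "Example community meetup" // illustrative field only
});
assert_eq!(record["$type"], "community.lexicon.calendar.event");
```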
19
22
20
23
/// Event status enumeration.
···
65
68
Hybrid,
66
69
}
67
70
68
-
/// The namespace identifier for named URIs
71
+
/// Lexicon namespace identifier for named URIs in calendar events.
72
+
///
73
+
/// Used as the `$type` field value for URI references associated with events.
69
74
pub const NAMED_URI_NSID: &str = "community.lexicon.calendar.event#uri";
70
75
71
76
/// Named URI structure.
···
89
94
}
90
95
}
91
96
92
-
/// Type alias for NamedUri with automatic $type field handling
97
+
/// Type alias for NamedUri with automatic $type field handling.
98
+
///
99
+
/// Wraps `NamedUri` in `TypedLexicon` to ensure proper serialization
100
+
/// and deserialization of the `$type` field.
93
101
pub type TypedNamedUri = TypedLexicon<NamedUri>;
94
102
95
-
/// The namespace identifier for event links
103
+
/// Lexicon namespace identifier for event links.
104
+
///
105
+
/// Used as the `$type` field value for event link references.
106
+
/// Note: This shares the same NSID as `NAMED_URI_NSID` for compatibility.
96
107
pub const EVENT_LINK_NSID: &str = "community.lexicon.calendar.event#uri";
97
108
98
109
/// Event link structure.
···
116
127
}
117
128
}
118
129
119
-
/// Type alias for EventLink with automatic $type field handling
130
+
/// Type alias for EventLink with automatic $type field handling.
131
+
///
132
+
/// Wraps `EventLink` in `TypedLexicon` to ensure proper serialization
133
+
/// and deserialization of the `$type` field.
120
134
pub type TypedEventLink = TypedLexicon<EventLink>;
121
135
122
-
/// A vector of typed event links
136
+
/// Collection of typed event links.
137
+
///
138
+
/// Represents multiple URI references associated with an event,
139
+
/// such as registration pages, live streams, or related content.
123
140
pub type EventLinks = Vec<TypedEventLink>;
124
141
125
142
/// Aspect ratio for media content.
···
134
151
pub height: u64,
135
152
}
136
153
137
-
/// The namespace identifier for media
154
+
/// Lexicon namespace identifier for event media.
155
+
///
156
+
/// Used as the `$type` field value for media attachments associated with events.
138
157
pub const MEDIA_NSID: &str = "community.lexicon.calendar.event#media";
139
158
140
159
/// Media structure for event-related visual content.
···
163
182
}
164
183
}
165
184
166
-
/// Type alias for Media with automatic $type field handling
185
+
/// Type alias for Media with automatic $type field handling.
186
+
///
187
+
/// Wraps `Media` in `TypedLexicon` to ensure proper serialization
188
+
/// and deserialization of the `$type` field.
167
189
pub type TypedMedia = TypedLexicon<Media>;
168
190
169
-
/// A vector of typed media items
191
+
/// Collection of typed media items.
192
+
///
193
+
/// Represents multiple media attachments for an event, such as banners,
194
+
/// posters, thumbnails, or promotional images.
170
195
pub type MediaList = Vec<TypedMedia>;
171
196
172
197
/// Calendar event structure.
···
248
273
#[serde(skip_serializing_if = "Vec::is_empty", default)]
249
274
pub media: MediaList,
250
275
276
+
/// Rich text facets for semantic annotations in the description field.
277
+
///
278
+
/// Enables mentions, links, and hashtags to be embedded in the event
279
+
/// description text with proper semantic metadata.
280
+
#[serde(skip_serializing_if = "Option::is_none")]
281
+
pub facets: Option<Vec<Facet>>,
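A minimal sketch of populating the new field, assuming the facet types above are in scope; the byte range targets "#atproto" in an illustrative description string, and the remaining `Event` fields are unchanged:

```rust
// Hypothetical description: "Join us #atproto", where "#atproto" spans bytes 8..16.
let facets = Some(vec![Facet {
    index: ByteSlice { byte_start: 8, byte_end: 16 },
    features: vec![FacetFeature::Tag(Tag {
        tag: "atproto".to_string(), // stored without the '#' prefix
    })],
}]);
```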
282
+
251
283
/// Extension fields for forward compatibility.
252
284
/// This catch-all allows unknown fields to be preserved and indexed
253
285
/// for potential future use without requiring re-indexing.
···
312
344
locations: vec![],
313
345
uris: vec![],
314
346
media: vec![],
347
+
facets: None,
315
348
extra: HashMap::new(),
316
349
};
317
350
···
466
499
locations: vec![],
467
500
uris: vec![TypedLexicon::new(event_link)],
468
501
media: vec![TypedLexicon::new(media)],
502
+
facets: None,
469
503
extra: HashMap::new(),
470
504
};
471
505
-3
crates/atproto-record/src/lexicon/community_lexicon_calendar_rsvp.rs
···
294
294
assert_eq!(typed_rsvp.inner.signatures.len(), 1);
295
295
match &typed_rsvp.inner.signatures[0] {
296
296
SignatureOrRef::Inline(sig) => {
297
-
assert_eq!(sig.inner.issuer, "did:plc:issuer");
298
297
assert_eq!(sig.inner.signature.bytes, b"test signature");
299
298
}
300
299
_ => panic!("Expected inline signature"),
···
364
363
assert_eq!(typed_rsvp.inner.signatures.len(), 1);
365
364
match &typed_rsvp.inner.signatures[0] {
366
365
SignatureOrRef::Inline(sig) => {
367
-
assert_eq!(sig.inner.issuer, "did:web:acudo-dev.smokesignal.tools");
368
-
369
366
// Verify the issuedAt field if present
370
367
if let Some(issued_at_value) = sig.inner.extra.get("issuedAt") {
371
368
assert_eq!(issued_at_value, "2025-08-19T20:17:17.133Z");
+22
crates/atproto-record/src/lexicon/mod.rs
···
37
37
mod community_lexicon_calendar_event;
38
38
mod community_lexicon_calendar_rsvp;
39
39
mod community_lexicon_location;
40
+
mod app_bsky_richtext_facet;
40
41
mod primatives;
41
42
43
+
// Re-export primitive types for convenience
42
44
pub use primatives::*;
45
+
46
+
/// Bluesky application namespace.
47
+
///
48
+
/// Contains lexicon types specific to the Bluesky application,
49
+
/// including rich text formatting and social features.
50
+
pub mod app {
51
+
/// Bluesky namespace.
52
+
pub mod bsky {
53
+
/// Rich text formatting types.
54
+
pub mod richtext {
55
+
/// Facet types for semantic text annotations.
56
+
///
57
+
/// Provides types for mentions, links, hashtags, and other
58
+
/// structured metadata that can be attached to text content.
59
+
pub mod facet {
60
+
pub use crate::lexicon::app_bsky_richtext_facet::*;
61
+
}
62
+
}
63
+
}
64
+
}
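Because `app_bsky_richtext_facet` itself stays private, downstream code reaches the facet types only through this nested path; a one-line usage sketch:

```rust
// Sketch: external crates import the facet types via the nested namespace modules.
use atproto_record::lexicon::app::bsky::richtext::facet::{Facet, FacetFeature, Tag};
```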
43
65
44
66
/// AT Protocol core types namespace
45
67
pub mod com {
+40
-19
crates/atproto-record/src/lib.rs
···
16
16
//! ## Example Usage
17
17
//!
18
18
//! ```ignore
19
-
//! use atproto_record::signature;
20
-
//! use atproto_identity::key::identify_key;
19
+
//! use atproto_record::attestation;
20
+
//! use atproto_identity::key::{identify_key, sign, to_public};
21
+
//! use base64::engine::general_purpose::STANDARD;
21
22
//! use serde_json::json;
22
23
//!
23
-
//! // Sign a record
24
-
//! let key_data = identify_key("did:key:...")?;
25
-
//! let record = json!({"$type": "app.bsky.feed.post", "text": "Hello!"});
26
-
//! let sig_obj = json!({"issuer": "did:plc:..."});
24
+
//! let private_key = identify_key("did:key:zPrivate...")?;
25
+
//! let public_key = to_public(&private_key)?;
26
+
//! let key_reference = format!("{}", &public_key);
27
27
//!
28
-
//! let signed = signature::create(&key_data, &record, "did:plc:repo",
29
-
//! "app.bsky.feed.post", sig_obj).await?;
28
+
//! let record = json!({
29
+
//! "$type": "app.example.record",
30
+
//! "text": "Hello from attestation helpers!"
31
+
//! });
32
+
//!
33
+
//! let sig_metadata = json!({
34
+
//! "$type": "com.example.inlineSignature",
35
+
//! "key": &key_reference,
36
+
//! "purpose": "demo"
37
+
//! });
38
+
//!
39
+
//! let signing_record = attestation::prepare_signing_record(&record, &sig_metadata)?;
40
+
//! let cid = attestation::create_cid(&signing_record)?;
41
+
//! let signature_bytes = sign(&private_key, &cid.to_bytes())?;
42
+
//!
43
+
//! let inline_attestation = json!({
44
+
//! "$type": "com.example.inlineSignature",
45
+
//! "key": key_reference,
46
+
//! "purpose": "demo",
47
+
//! "signature": {"$bytes": STANDARD.encode(signature_bytes)}
48
+
//! });
30
49
//!
31
-
//! // Verify a signature
32
-
//! signature::verify("did:plc:issuer", &key_data, signed,
33
-
//! "did:plc:repo", "app.bsky.feed.post").await?;
50
+
//! let signed = attestation::create_inline_attestation_reference(&record, &inline_attestation)?;
51
+
//! let reports = tokio_test::block_on(async {
52
+
//! attestation::verify_all_signatures(&signed, None).await
53
+
//! })?;
54
+
//! assert!(matches!(reports[0].status, attestation::VerificationStatus::Valid { .. }));
34
55
//! ```
35
56
36
57
#![forbid(unsafe_code)]
···
42
63
/// and CLI operations. All errors follow the project's standardized format:
43
64
/// `error-atproto-record-{domain}-{number} {message}: {details}`
44
65
pub mod errors;
45
-
46
-
/// Core signature creation and verification.
47
-
///
48
-
/// Provides functions for creating and verifying cryptographic signatures on
49
-
/// AT Protocol records using IPLD DAG-CBOR serialization. Supports the
50
-
/// community.lexicon.attestation.signature specification with proper $sig
51
-
/// object handling and multiple signature support.
52
-
pub mod signature;
53
66
54
67
/// AT-URI parsing and validation.
55
68
///
···
84
97
/// in many AT Protocol lexicon structures. The wrapper can automatically add type
85
98
/// fields during serialization and validate them during deserialization.
86
99
pub mod typed;
100
+
101
+
/// Timestamp Identifier (TID) generation and parsing.
102
+
///
103
+
/// TIDs are sortable, distributed identifiers combining microsecond timestamps
104
+
/// with random clock identifiers. They provide a collision-resistant, monotonically
105
+
/// increasing identifier scheme for AT Protocol records encoded as 13-character
106
+
/// base32-sortable strings.
107
+
pub mod tid;
-672
crates/atproto-record/src/signature.rs
···
1
-
//! AT Protocol record signature creation and verification.
2
-
//!
3
-
//! This module provides comprehensive functionality for creating and verifying
4
-
//! cryptographic signatures on AT Protocol records following the
5
-
//! community.lexicon.attestation.signature specification.
6
-
//!
7
-
//! ## Signature Process
8
-
//!
9
-
//! 1. **Signing**: Records are augmented with a `$sig` object containing issuer,
10
-
//! timestamp, and context information, then serialized using IPLD DAG-CBOR
11
-
//! for deterministic encoding before signing with ECDSA.
12
-
//!
13
-
//! 2. **Storage**: Signatures are stored in a `signatures` array within the record,
14
-
//! allowing multiple signatures from different issuers.
15
-
//!
16
-
//! 3. **Verification**: The original signed content is reconstructed by replacing
17
-
//! the signatures array with the appropriate `$sig` object, then verified
18
-
//! using the issuer's public key.
19
-
//!
20
-
//! ## Supported Curves
21
-
//!
22
-
//! - P-256 (NIST P-256 / secp256r1)
23
-
//! - P-384 (NIST P-384 / secp384r1)
24
-
//! - K-256 (secp256k1)
25
-
//!
26
-
//! ## Example
27
-
//!
28
-
//! ```ignore
29
-
//! use atproto_record::signature::{create, verify};
30
-
//! use atproto_identity::key::identify_key;
31
-
//! use serde_json::json;
32
-
//!
33
-
//! // Create a signature
34
-
//! let key = identify_key("did:key:...")?;
35
-
//! let record = json!({"text": "Hello!"});
36
-
//! let sig_obj = json!({
37
-
//! "issuer": "did:plc:issuer"
38
-
//! // Optional: any additional fields like "issuedAt", "purpose", etc.
39
-
//! });
40
-
//!
41
-
//! let signed = create(&key, &record, "did:plc:repo",
42
-
//! "app.bsky.feed.post", sig_obj)?;
43
-
//!
44
-
//! // Verify the signature
45
-
//! verify("did:plc:issuer", &key, signed,
46
-
//! "did:plc:repo", "app.bsky.feed.post")?;
47
-
//! ```
48
-
49
-
use atproto_identity::key::{KeyData, sign, validate};
50
-
use base64::{Engine, engine::general_purpose::STANDARD};
51
-
use serde_json::json;
52
-
53
-
use crate::errors::VerificationError;
54
-
55
-
/// Creates a cryptographic signature for an AT Protocol record.
56
-
///
57
-
/// This function generates a signature following the community.lexicon.attestation.signature
58
-
/// specification. The record is augmented with a `$sig` object containing context information,
59
-
/// serialized using IPLD DAG-CBOR, signed with the provided key, and the signature is added
60
-
/// to a `signatures` array in the returned record.
61
-
///
62
-
/// # Parameters
63
-
///
64
-
/// * `key_data` - The signing key (private key) wrapped in KeyData
65
-
/// * `record` - The JSON record to be signed (will not be modified)
66
-
/// * `repository` - The repository DID where this record will be stored
67
-
/// * `collection` - The collection type (NSID) for this record
68
-
/// * `signature_object` - Metadata for the signature, must include:
69
-
/// - `issuer`: The DID of the entity creating the signature (required)
70
-
/// - Additional custom fields are preserved in the signature (optional)
71
-
///
72
-
/// # Returns
73
-
///
74
-
/// Returns a new record containing:
75
-
/// - All original record fields
76
-
/// - A `signatures` array with the new signature appended
77
-
/// - No `$sig` field (only used during signing)
78
-
///
79
-
/// # Errors
80
-
///
81
-
/// Returns [`VerificationError`] if:
82
-
/// - Required field `issuer` is missing from signature_object
83
-
/// - IPLD DAG-CBOR serialization fails
84
-
/// - Cryptographic signing operation fails
85
-
/// - JSON structure manipulation fails
86
-
pub fn create(
87
-
key_data: &KeyData,
88
-
record: &serde_json::Value,
89
-
repository: &str,
90
-
collection: &str,
91
-
signature_object: serde_json::Value,
92
-
) -> Result<serde_json::Value, VerificationError> {
93
-
if let Some(record_map) = signature_object.as_object() {
94
-
if !record_map.contains_key("issuer") {
95
-
return Err(VerificationError::SignatureObjectMissingField {
96
-
field: "issuer".to_string(),
97
-
});
98
-
}
99
-
} else {
100
-
return Err(VerificationError::InvalidSignatureObjectType);
101
-
};
102
-
103
-
// Prepare the $sig object.
104
-
let mut sig = signature_object.clone();
105
-
if let Some(record_map) = sig.as_object_mut() {
106
-
record_map.insert("repository".to_string(), json!(repository));
107
-
record_map.insert("collection".to_string(), json!(collection));
108
-
record_map.insert(
109
-
"$type".to_string(),
110
-
json!("community.lexicon.attestation.signature"),
111
-
);
112
-
}
113
-
114
-
// Create a copy of the record with the $sig object for signing.
115
-
let mut signing_record = record.clone();
116
-
if let Some(record_map) = signing_record.as_object_mut() {
117
-
record_map.remove("signatures");
118
-
record_map.remove("$sig");
119
-
record_map.insert("$sig".to_string(), sig);
120
-
}
121
-
122
-
// Create a signature.
123
-
let serialized_signing_record = serde_ipld_dagcbor::to_vec(&signing_record)?;
124
-
125
-
let signature: Vec<u8> = sign(key_data, &serialized_signing_record)?;
126
-
let encoded_signature = STANDARD.encode(&signature);
127
-
128
-
// Compose the proof object
129
-
let mut proof = signature_object.clone();
130
-
if let Some(record_map) = proof.as_object_mut() {
131
-
record_map.remove("repository");
132
-
record_map.remove("collection");
133
-
record_map.insert(
134
-
"signature".to_string(),
135
-
json!({"$bytes": json!(encoded_signature)}),
136
-
);
137
-
record_map.insert(
138
-
"$type".to_string(),
139
-
json!("community.lexicon.attestation.signature"),
140
-
);
141
-
}
142
-
143
-
// Add the signature to the original record
144
-
let mut signed_record = record.clone();
145
-
146
-
if let Some(record_map) = signed_record.as_object_mut() {
147
-
let mut signatures: Vec<serde_json::Value> = record
148
-
.get("signatures")
149
-
.and_then(|v| v.as_array().cloned())
150
-
.unwrap_or_default();
151
-
152
-
signatures.push(proof);
153
-
154
-
record_map.remove("$sig");
155
-
record_map.remove("signatures");
156
-
157
-
// Add the $sig field
158
-
record_map.insert("signatures".to_string(), json!(signatures));
159
-
}
160
-
161
-
Ok(signed_record)
162
-
}
163
-
164
-
/// Verifies a cryptographic signature on an AT Protocol record.
165
-
///
166
-
/// This function validates signatures by reconstructing the original signed content
167
-
/// (record with `$sig` object) and verifying the ECDSA signature against it.
168
-
/// It searches through all signatures in the record to find one matching the
169
-
/// specified issuer, then verifies it with the provided public key.
170
-
///
171
-
/// # Parameters
172
-
///
173
-
/// * `issuer` - The DID of the expected signature issuer to verify
174
-
/// * `key_data` - The public key for signature verification
175
-
/// * `record` - The signed record containing a `signatures` or `sigs` array
176
-
/// * `repository` - The repository DID used during signing (must match)
177
-
/// * `collection` - The collection type used during signing (must match)
178
-
///
179
-
/// # Returns
180
-
///
181
-
/// Returns `Ok(())` if a valid signature from the specified issuer is found
182
-
/// and successfully verified against the reconstructed signed content.
183
-
///
184
-
/// # Errors
185
-
///
186
-
/// Returns [`VerificationError`] if:
187
-
/// - No `signatures` or `sigs` field exists in the record
188
-
/// - No signature from the specified issuer is found
189
-
/// - The issuer's signature is malformed or missing required fields
190
-
/// - The signature is not in the expected `{"$bytes": "..."}` format
191
-
/// - Base64 decoding of the signature fails
192
-
/// - IPLD DAG-CBOR serialization of reconstructed content fails
193
-
/// - Cryptographic verification fails (invalid signature)
194
-
///
195
-
/// # Note
196
-
///
197
-
/// This function supports both `signatures` and `sigs` field names for
198
-
/// backward compatibility with different AT Protocol implementations.
199
-
pub fn verify(
200
-
issuer: &str,
201
-
key_data: &KeyData,
202
-
record: serde_json::Value,
203
-
repository: &str,
204
-
collection: &str,
205
-
) -> Result<(), VerificationError> {
206
-
let signatures = record
207
-
.get("sigs")
208
-
.or_else(|| record.get("signatures"))
209
-
.and_then(|v| v.as_array())
210
-
.ok_or(VerificationError::NoSignaturesField)?;
211
-
212
-
for sig_obj in signatures {
213
-
// Extract the issuer from the signature object
214
-
let signature_issuer = sig_obj
215
-
.get("issuer")
216
-
.and_then(|v| v.as_str())
217
-
.ok_or(VerificationError::MissingIssuerField)?;
218
-
219
-
let signature_value = sig_obj
220
-
.get("signature")
221
-
.and_then(|v| v.as_object())
222
-
.and_then(|obj| obj.get("$bytes"))
223
-
.and_then(|b| b.as_str())
224
-
.ok_or(VerificationError::MissingSignatureField)?;
225
-
226
-
if issuer != signature_issuer {
227
-
continue;
228
-
}
229
-
230
-
let mut sig_variable = sig_obj.clone();
231
-
232
-
if let Some(sig_map) = sig_variable.as_object_mut() {
233
-
sig_map.remove("signature");
234
-
sig_map.insert("repository".to_string(), json!(repository));
235
-
sig_map.insert("collection".to_string(), json!(collection));
236
-
}
237
-
238
-
let mut signed_record = record.clone();
239
-
if let Some(record_map) = signed_record.as_object_mut() {
240
-
record_map.remove("signatures");
241
-
record_map.remove("sigs");
242
-
record_map.insert("$sig".to_string(), sig_variable);
243
-
}
244
-
245
-
let serialized_record = serde_ipld_dagcbor::to_vec(&signed_record)
246
-
.map_err(|error| VerificationError::RecordSerializationFailed { error })?;
247
-
248
-
let signature_bytes = STANDARD
249
-
.decode(signature_value)
250
-
.map_err(|error| VerificationError::SignatureDecodingFailed { error })?;
251
-
252
-
validate(key_data, &signature_bytes, &serialized_record)
253
-
.map_err(|error| VerificationError::CryptographicValidationFailed { error })?;
254
-
255
-
return Ok(());
256
-
}
257
-
258
-
Err(VerificationError::NoValidSignatureForIssuer {
259
-
issuer: issuer.to_string(),
260
-
})
261
-
}
262
-
263
-
#[cfg(test)]
264
-
mod tests {
265
-
use super::*;
266
-
use atproto_identity::key::{KeyType, generate_key, to_public};
267
-
use serde_json::json;
268
-
269
-
#[test]
270
-
fn test_create_sign_and_verify_record_p256() -> Result<(), Box<dyn std::error::Error>> {
271
-
// Step 1: Generate a P-256 key pair
272
-
let private_key = generate_key(KeyType::P256Private)?;
273
-
let public_key = to_public(&private_key)?;
274
-
275
-
// Step 2: Create a sample record
276
-
let record = json!({
277
-
"text": "Hello AT Protocol!",
278
-
"createdAt": "2025-01-19T10:00:00Z",
279
-
"langs": ["en"]
280
-
});
281
-
282
-
// Step 3: Define signature metadata
283
-
let issuer_did = "did:plc:test123";
284
-
let repository = "did:plc:repo456";
285
-
let collection = "app.bsky.feed.post";
286
-
287
-
let signature_object = json!({
288
-
"issuer": issuer_did,
289
-
"issuedAt": "2025-01-19T10:00:00Z",
290
-
"purpose": "attestation"
291
-
});
292
-
293
-
// Step 4: Sign the record
294
-
let signed_record = create(
295
-
&private_key,
296
-
&record,
297
-
repository,
298
-
collection,
299
-
signature_object.clone(),
300
-
)?;
301
-
302
-
// Verify that the signed record contains signatures array
303
-
assert!(signed_record.get("signatures").is_some());
304
-
let signatures = signed_record
305
-
.get("signatures")
306
-
.and_then(|v| v.as_array())
307
-
.expect("signatures should be an array");
308
-
assert_eq!(signatures.len(), 1);
309
-
310
-
// Verify signature object structure
311
-
let sig = &signatures[0];
312
-
assert_eq!(sig.get("issuer").and_then(|v| v.as_str()), Some(issuer_did));
313
-
assert!(sig.get("signature").is_some());
314
-
assert_eq!(
315
-
sig.get("$type").and_then(|v| v.as_str()),
316
-
Some("community.lexicon.attestation.signature")
317
-
);
318
-
319
-
// Step 5: Verify the signature
320
-
verify(
321
-
issuer_did,
322
-
&public_key,
323
-
signed_record.clone(),
324
-
repository,
325
-
collection,
326
-
)?;
327
-
328
-
Ok(())
329
-
}
330
-
331
-
#[test]
332
-
fn test_create_sign_and_verify_record_k256() -> Result<(), Box<dyn std::error::Error>> {
333
-
// Test with K-256 curve
334
-
let private_key = generate_key(KeyType::K256Private)?;
335
-
let public_key = to_public(&private_key)?;
336
-
337
-
let record = json!({
338
-
"subject": "at://did:plc:example/app.bsky.feed.post/123",
339
-
"likedAt": "2025-01-19T10:00:00Z"
340
-
});
341
-
342
-
let issuer_did = "did:plc:issuer789";
343
-
let repository = "did:plc:repo789";
344
-
let collection = "app.bsky.feed.like";
345
-
346
-
let signature_object = json!({
347
-
"issuer": issuer_did,
348
-
"issuedAt": "2025-01-19T10:00:00Z"
349
-
});
350
-
351
-
let signed_record = create(
352
-
&private_key,
353
-
&record,
354
-
repository,
355
-
collection,
356
-
signature_object,
357
-
)?;
358
-
359
-
verify(
360
-
issuer_did,
361
-
&public_key,
362
-
signed_record,
363
-
repository,
364
-
collection,
365
-
)?;
366
-
367
-
Ok(())
368
-
}
369
-
370
-
#[test]
371
-
fn test_create_sign_and_verify_record_p384() -> Result<(), Box<dyn std::error::Error>> {
372
-
// Test with P-384 curve
373
-
let private_key = generate_key(KeyType::P384Private)?;
374
-
let public_key = to_public(&private_key)?;
375
-
376
-
let record = json!({
377
-
"displayName": "Test User",
378
-
"description": "Testing P-384 signatures"
379
-
});
380
-
381
-
let issuer_did = "did:web:example.com";
382
-
let repository = "did:plc:profile123";
383
-
let collection = "app.bsky.actor.profile";
384
-
385
-
let signature_object = json!({
386
-
"issuer": issuer_did,
387
-
"issuedAt": "2025-01-19T10:00:00Z",
388
-
"expiresAt": "2025-01-20T10:00:00Z",
389
-
"customField": "custom value"
390
-
});
391
-
392
-
let signed_record = create(
393
-
&private_key,
394
-
&record,
395
-
repository,
396
-
collection,
397
-
signature_object.clone(),
398
-
)?;
399
-
400
-
// Verify custom fields are preserved in signature
401
-
let signatures = signed_record
402
-
.get("signatures")
403
-
.and_then(|v| v.as_array())
404
-
.expect("signatures should exist");
405
-
let sig = &signatures[0];
406
-
assert_eq!(
407
-
sig.get("customField").and_then(|v| v.as_str()),
408
-
Some("custom value")
409
-
);
410
-
411
-
verify(
412
-
issuer_did,
413
-
&public_key,
414
-
signed_record,
415
-
repository,
416
-
collection,
417
-
)?;
418
-
419
-
Ok(())
420
-
}
421
-
422
-
#[test]
423
-
fn test_multiple_signatures() -> Result<(), Box<dyn std::error::Error>> {
424
-
// Create a record with multiple signatures from different issuers
425
-
let private_key1 = generate_key(KeyType::P256Private)?;
426
-
let public_key1 = to_public(&private_key1)?;
427
-
428
-
let private_key2 = generate_key(KeyType::K256Private)?;
429
-
let public_key2 = to_public(&private_key2)?;
430
-
431
-
let record = json!({
432
-
"text": "Multi-signed content",
433
-
"important": true
434
-
});
435
-
436
-
let repository = "did:plc:repo_multi";
437
-
let collection = "app.example.document";
438
-
439
-
// First signature
440
-
let issuer1 = "did:plc:issuer1";
441
-
let sig_obj1 = json!({
442
-
"issuer": issuer1,
443
-
"issuedAt": "2025-01-19T09:00:00Z",
444
-
"role": "author"
445
-
});
446
-
447
-
let signed_once = create(&private_key1, &record, repository, collection, sig_obj1)?;
448
-
449
-
// Second signature on already signed record
450
-
let issuer2 = "did:plc:issuer2";
451
-
let sig_obj2 = json!({
452
-
"issuer": issuer2,
453
-
"issuedAt": "2025-01-19T10:00:00Z",
454
-
"role": "reviewer"
455
-
});
456
-
457
-
let signed_twice = create(
458
-
&private_key2,
459
-
&signed_once,
460
-
repository,
461
-
collection,
462
-
sig_obj2,
463
-
)?;
464
-
465
-
// Verify we have two signatures
466
-
let signatures = signed_twice
467
-
.get("signatures")
468
-
.and_then(|v| v.as_array())
469
-
.expect("signatures should exist");
470
-
assert_eq!(signatures.len(), 2);
471
-
472
-
// Verify both signatures independently
473
-
verify(
474
-
issuer1,
475
-
&public_key1,
476
-
signed_twice.clone(),
477
-
repository,
478
-
collection,
479
-
)?;
480
-
verify(
481
-
issuer2,
482
-
&public_key2,
483
-
signed_twice.clone(),
484
-
repository,
485
-
collection,
486
-
)?;
487
-
488
-
Ok(())
489
-
}
490
-
491
-
#[test]
492
-
fn test_verify_wrong_issuer_fails() -> Result<(), Box<dyn std::error::Error>> {
493
-
let private_key = generate_key(KeyType::P256Private)?;
494
-
let public_key = to_public(&private_key)?;
495
-
496
-
let record = json!({"test": "data"});
497
-
let repository = "did:plc:repo";
498
-
let collection = "app.test";
499
-
500
-
let sig_obj = json!({
501
-
"issuer": "did:plc:correct_issuer"
502
-
});
503
-
504
-
let signed = create(&private_key, &record, repository, collection, sig_obj)?;
505
-
506
-
// Try to verify with wrong issuer
507
-
let result = verify(
508
-
"did:plc:wrong_issuer",
509
-
&public_key,
510
-
signed,
511
-
repository,
512
-
collection,
513
-
);
514
-
515
-
assert!(result.is_err());
516
-
assert!(matches!(
517
-
result.unwrap_err(),
518
-
VerificationError::NoValidSignatureForIssuer { .. }
519
-
));
520
-
521
-
Ok(())
522
-
}
523
-
524
-
#[test]
525
-
fn test_verify_wrong_key_fails() -> Result<(), Box<dyn std::error::Error>> {
526
-
let private_key = generate_key(KeyType::P256Private)?;
527
-
let wrong_private_key = generate_key(KeyType::P256Private)?;
528
-
let wrong_public_key = to_public(&wrong_private_key)?;
529
-
530
-
let record = json!({"test": "data"});
531
-
let repository = "did:plc:repo";
532
-
let collection = "app.test";
533
-
let issuer = "did:plc:issuer";
534
-
535
-
let sig_obj = json!({ "issuer": issuer });
536
-
537
-
let signed = create(&private_key, &record, repository, collection, sig_obj)?;
538
-
539
-
// Try to verify with wrong key
540
-
let result = verify(issuer, &wrong_public_key, signed, repository, collection);
541
-
542
-
assert!(result.is_err());
543
-
assert!(matches!(
544
-
result.unwrap_err(),
545
-
VerificationError::CryptographicValidationFailed { .. }
546
-
));
547
-
548
-
Ok(())
549
-
}
550
-
551
-
#[test]
552
-
fn test_verify_tampered_record_fails() -> Result<(), Box<dyn std::error::Error>> {
553
-
let private_key = generate_key(KeyType::P256Private)?;
554
-
let public_key = to_public(&private_key)?;
555
-
556
-
let record = json!({"text": "original"});
557
-
let repository = "did:plc:repo";
558
-
let collection = "app.test";
559
-
let issuer = "did:plc:issuer";
560
-
561
-
let sig_obj = json!({ "issuer": issuer });
562
-
563
-
let mut signed = create(&private_key, &record, repository, collection, sig_obj)?;
564
-
565
-
// Tamper with the record content
566
-
if let Some(obj) = signed.as_object_mut() {
567
-
obj.insert("text".to_string(), json!("tampered"));
568
-
}
569
-
570
-
// Verification should fail
571
-
let result = verify(issuer, &public_key, signed, repository, collection);
572
-
573
-
assert!(result.is_err());
574
-
assert!(matches!(
575
-
result.unwrap_err(),
576
-
VerificationError::CryptographicValidationFailed { .. }
577
-
));
578
-
579
-
Ok(())
580
-
}
581
-
582
-
#[test]
583
-
fn test_create_missing_issuer_fails() -> Result<(), Box<dyn std::error::Error>> {
584
-
let private_key = generate_key(KeyType::P256Private)?;
585
-
586
-
let record = json!({"test": "data"});
587
-
let repository = "did:plc:repo";
588
-
let collection = "app.test";
589
-
590
-
// Signature object without issuer field
591
-
let sig_obj = json!({
592
-
"issuedAt": "2025-01-19T10:00:00Z"
593
-
});
594
-
595
-
let result = create(&private_key, &record, repository, collection, sig_obj);
596
-
597
-
assert!(result.is_err());
598
-
assert!(matches!(
599
-
result.unwrap_err(),
600
-
VerificationError::SignatureObjectMissingField { field } if field == "issuer"
601
-
));
602
-
603
-
Ok(())
604
-
}
605
-
606
-
#[test]
607
-
fn test_verify_supports_sigs_field() -> Result<(), Box<dyn std::error::Error>> {
608
-
// Test backward compatibility with "sigs" field name
609
-
let private_key = generate_key(KeyType::P256Private)?;
610
-
let public_key = to_public(&private_key)?;
611
-
612
-
let record = json!({"test": "data"});
613
-
let repository = "did:plc:repo";
614
-
let collection = "app.test";
615
-
let issuer = "did:plc:issuer";
616
-
617
-
let sig_obj = json!({ "issuer": issuer });
618
-
619
-
let mut signed = create(&private_key, &record, repository, collection, sig_obj)?;
620
-
621
-
// Rename "signatures" to "sigs"
622
-
if let Some(obj) = signed.as_object_mut()
623
-
&& let Some(signatures) = obj.remove("signatures")
624
-
{
625
-
obj.insert("sigs".to_string(), signatures);
626
-
}
627
-
628
-
// Should still verify successfully
629
-
verify(issuer, &public_key, signed, repository, collection)?;
630
-
631
-
Ok(())
632
-
}
633
-
634
-
#[test]
635
-
fn test_signature_preserves_original_record() -> Result<(), Box<dyn std::error::Error>> {
636
-
let private_key = generate_key(KeyType::P256Private)?;
637
-
638
-
let original_record = json!({
639
-
"text": "Original content",
640
-
"metadata": {
641
-
"author": "Test",
642
-
"version": 1
643
-
},
644
-
"tags": ["test", "sample"]
645
-
});
646
-
647
-
let repository = "did:plc:repo";
648
-
let collection = "app.test";
649
-
650
-
let sig_obj = json!({
651
-
"issuer": "did:plc:issuer"
652
-
});
653
-
654
-
let signed = create(
655
-
&private_key,
656
-
&original_record,
657
-
repository,
658
-
collection,
659
-
sig_obj,
660
-
)?;
661
-
662
-
// All original fields should be preserved
663
-
assert_eq!(signed.get("text"), original_record.get("text"));
664
-
assert_eq!(signed.get("metadata"), original_record.get("metadata"));
665
-
assert_eq!(signed.get("tags"), original_record.get("tags"));
666
-
667
-
// Plus the new signatures field
668
-
assert!(signed.get("signatures").is_some());
669
-
670
-
Ok(())
671
-
}
672
-
}
+492
crates/atproto-record/src/tid.rs
···
1
+
//! Timestamp Identifier (TID) generation and parsing.
2
+
//!
3
+
//! TIDs are 64-bit integers encoded as 13-character base32-sortable strings, combining
4
+
//! a microsecond timestamp with a random clock identifier for collision resistance.
5
+
//! They provide a sortable, distributed identifier scheme for AT Protocol records.
6
+
//!
7
+
//! ## Format
8
+
//!
9
+
//! - **Length**: Always 13 ASCII characters
10
+
//! - **Encoding**: Base32-sortable character set (`234567abcdefghijklmnopqrstuvwxyz`)
11
+
//! - **Structure**: 64-bit big-endian integer with:
12
+
//! - Bit 0 (top): Always 0
13
+
//! - Bits 1-53: Microseconds since UNIX epoch
14
+
//! - Bits 54-63: Random 10-bit clock identifier
15
+
//!
16
+
//! ## Example
17
+
//!
18
+
//! ```
19
+
//! use atproto_record::tid::Tid;
20
+
//!
21
+
//! // Generate a new TID
22
+
//! let tid = Tid::new();
23
+
//! let tid_str = tid.to_string();
24
+
//! assert_eq!(tid_str.len(), 13);
25
+
//!
26
+
//! // Parse a TID string
27
+
//! let parsed = tid_str.parse::<Tid>().unwrap();
28
+
//! assert_eq!(tid, parsed);
29
+
//!
30
+
//! // TIDs are sortable by timestamp
31
+
//! let tid1 = Tid::new();
32
+
//! std::thread::sleep(std::time::Duration::from_micros(10));
33
+
//! let tid2 = Tid::new();
34
+
//! assert!(tid1 < tid2);
35
+
//! ```
36
+
37
+
use std::fmt;
38
+
use std::str::FromStr;
39
+
use std::sync::Mutex;
40
+
use std::time::{SystemTime, UNIX_EPOCH};
41
+
42
+
use crate::errors::TidError;
43
+
44
+
/// Base32-sortable character set for TID encoding.
45
+
///
46
+
/// This character set maintains lexicographic sort order when encoded TIDs
47
+
/// are compared as strings, ensuring timestamp ordering is preserved.
48
+
const BASE32_SORTABLE: &[u8; 32] = b"234567abcdefghijklmnopqrstuvwxyz";
49
+
50
+
/// Reverse lookup table for base32-sortable decoding.
51
+
///
52
+
/// Maps ASCII character values to their corresponding 5-bit values.
53
+
/// Invalid characters are marked with 0xFF.
54
+
const BASE32_DECODE: [u8; 256] = {
55
+
let mut table = [0xFF; 256];
56
+
table[b'2' as usize] = 0;
57
+
table[b'3' as usize] = 1;
58
+
table[b'4' as usize] = 2;
59
+
table[b'5' as usize] = 3;
60
+
table[b'6' as usize] = 4;
61
+
table[b'7' as usize] = 5;
62
+
table[b'a' as usize] = 6;
63
+
table[b'b' as usize] = 7;
64
+
table[b'c' as usize] = 8;
65
+
table[b'd' as usize] = 9;
66
+
table[b'e' as usize] = 10;
67
+
table[b'f' as usize] = 11;
68
+
table[b'g' as usize] = 12;
69
+
table[b'h' as usize] = 13;
70
+
table[b'i' as usize] = 14;
71
+
table[b'j' as usize] = 15;
72
+
table[b'k' as usize] = 16;
73
+
table[b'l' as usize] = 17;
74
+
table[b'm' as usize] = 18;
75
+
table[b'n' as usize] = 19;
76
+
table[b'o' as usize] = 20;
77
+
table[b'p' as usize] = 21;
78
+
table[b'q' as usize] = 22;
79
+
table[b'r' as usize] = 23;
80
+
table[b's' as usize] = 24;
81
+
table[b't' as usize] = 25;
82
+
table[b'u' as usize] = 26;
83
+
table[b'v' as usize] = 27;
84
+
table[b'w' as usize] = 28;
85
+
table[b'x' as usize] = 29;
86
+
table[b'y' as usize] = 30;
87
+
table[b'z' as usize] = 31;
88
+
table
89
+
};
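A quick property sketch of the relationship between the two tables (the round trip that decoding relies on, assuming both constants are in scope):

```rust
// Sketch: every symbol in the encode table maps back to its index via the decode table.
for (i, &ch) in BASE32_SORTABLE.iter().enumerate() {
    assert_eq!(BASE32_DECODE[ch as usize] as usize, i);
}
```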
90
+
91
+
/// Timestamp Identifier (TID) for AT Protocol records.
92
+
///
93
+
/// A TID combines a microsecond-precision timestamp with a random clock identifier
94
+
/// to create a sortable, collision-resistant identifier. TIDs are represented as
95
+
/// 13-character base32-sortable strings.
96
+
///
97
+
/// ## Monotonicity
98
+
///
99
+
/// The TID generator ensures monotonically increasing values even when the system
100
+
/// clock moves backwards or multiple TIDs are generated within the same microsecond.
101
+
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
102
+
pub struct Tid(u64);
103
+
104
+
/// Global state for monotonic TID generation.
105
+
///
106
+
/// Tracks the last generated timestamp and clock identifier to ensure
107
+
/// monotonically increasing TID values.
108
+
static LAST_TID: Mutex<Option<(u64, u16)>> = Mutex::new(None);
109
+
110
+
impl Tid {
111
+
/// The length of a TID string in characters.
112
+
pub const LENGTH: usize = 13;
113
+
114
+
/// Maximum valid timestamp value (53 bits).
115
+
const MAX_TIMESTAMP: u64 = (1u64 << 53) - 1;
116
+
117
+
/// Bitmask for extracting the 10-bit clock identifier.
118
+
const CLOCK_ID_MASK: u64 = 0x3FF;
119
+
120
+
/// Creates a new TID with the current timestamp and a random clock identifier.
121
+
///
122
+
/// This function ensures monotonically increasing TID values by tracking the
123
+
/// last generated TID and incrementing the clock identifier when necessary.
124
+
///
125
+
/// # Example
126
+
///
127
+
/// ```
128
+
/// use atproto_record::tid::Tid;
129
+
///
130
+
/// let tid = Tid::new();
131
+
/// println!("Generated TID: {}", tid);
132
+
/// ```
133
+
pub fn new() -> Self {
134
+
Self::new_with_time(Self::current_timestamp_micros())
135
+
}
136
+
137
+
/// Creates a new TID with a specific timestamp (for testing).
138
+
///
139
+
/// # Arguments
140
+
///
141
+
/// * `timestamp_micros` - Microseconds since UNIX epoch
142
+
///
143
+
/// # Panics
144
+
///
145
+
/// Panics if the timestamp exceeds 53 bits (year 2255+).
146
+
pub fn new_with_time(timestamp_micros: u64) -> Self {
147
+
assert!(
148
+
timestamp_micros <= Self::MAX_TIMESTAMP,
149
+
"Timestamp exceeds 53-bit maximum"
150
+
);
151
+
152
+
let mut last = LAST_TID.lock().unwrap();
153
+
154
+
let clock_id = if let Some((last_timestamp, last_clock)) = *last {
155
+
if timestamp_micros > last_timestamp {
156
+
// New timestamp, generate random clock ID
157
+
Self::random_clock_id()
158
+
} else if timestamp_micros == last_timestamp {
159
+
// Same timestamp, increment clock ID
160
+
if last_clock == Self::CLOCK_ID_MASK as u16 {
161
+
// Clock ID overflow, use random
162
+
Self::random_clock_id()
163
+
} else {
164
+
last_clock + 1
165
+
}
166
+
} else {
167
+
// Clock moved backwards, use last timestamp + 1
168
+
let adjusted_timestamp = last_timestamp + 1;
169
+
let adjusted_clock = Self::random_clock_id();
170
+
*last = Some((adjusted_timestamp, adjusted_clock));
171
+
return Self::from_parts(adjusted_timestamp, adjusted_clock);
172
+
}
173
+
} else {
174
+
// First TID, generate random clock ID
175
+
Self::random_clock_id()
176
+
};
177
+
178
+
*last = Some((timestamp_micros, clock_id));
179
+
Self::from_parts(timestamp_micros, clock_id)
180
+
}
181
+
182
+
/// Creates a TID from timestamp and clock identifier components.
183
+
///
184
+
/// # Arguments
185
+
///
186
+
/// * `timestamp_micros` - Microseconds since UNIX epoch (53 bits max)
187
+
/// * `clock_id` - Random clock identifier (10 bits max)
188
+
///
189
+
/// # Panics
190
+
///
191
+
/// Panics if timestamp exceeds 53 bits or clock_id exceeds 10 bits.
192
+
pub fn from_parts(timestamp_micros: u64, clock_id: u16) -> Self {
193
+
assert!(
194
+
timestamp_micros <= Self::MAX_TIMESTAMP,
195
+
"Timestamp exceeds 53-bit maximum"
196
+
);
197
+
assert!(
198
+
clock_id <= Self::CLOCK_ID_MASK as u16,
199
+
"Clock ID exceeds 10-bit maximum"
200
+
);
201
+
202
+
// Combine: top bit 0, 53 bits timestamp, 10 bits clock ID
203
+
let value = (timestamp_micros << 10) | (clock_id as u64);
204
+
Tid(value)
205
+
}
206
+
207
+
/// Returns the timestamp component in microseconds since UNIX epoch.
208
+
///
209
+
/// # Example
210
+
///
211
+
/// ```
212
+
/// use atproto_record::tid::Tid;
213
+
///
214
+
/// let tid = Tid::new();
215
+
/// let timestamp = tid.timestamp_micros();
216
+
/// println!("Timestamp: {} ฮผs", timestamp);
217
+
/// ```
218
+
pub fn timestamp_micros(&self) -> u64 {
219
+
self.0 >> 10
220
+
}
221
+
222
+
/// Returns the clock identifier component (10 bits).
223
+
///
224
+
/// # Example
225
+
///
226
+
/// ```
227
+
/// use atproto_record::tid::Tid;
228
+
///
229
+
/// let tid = Tid::new();
230
+
/// let clock_id = tid.clock_id();
231
+
/// println!("Clock ID: {}", clock_id);
232
+
/// ```
233
+
pub fn clock_id(&self) -> u16 {
234
+
(self.0 & Self::CLOCK_ID_MASK) as u16
235
+
}
236
+
237
+
/// Returns the raw 64-bit integer value.
238
+
pub fn as_u64(&self) -> u64 {
239
+
self.0
240
+
}
241
+
242
+
/// Encodes the TID as a 13-character base32-sortable string.
243
+
///
244
+
/// # Example
245
+
///
246
+
/// ```
247
+
/// use atproto_record::tid::Tid;
248
+
///
249
+
/// let tid = Tid::new();
250
+
/// let encoded = tid.encode();
251
+
/// assert_eq!(encoded.len(), 13);
252
+
/// ```
253
+
pub fn encode(&self) -> String {
254
+
let mut chars = [0u8; Self::LENGTH];
255
+
let mut value = self.0;
256
+
257
+
// Encode from right to left (least significant to most significant)
258
+
for i in (0..Self::LENGTH).rev() {
259
+
chars[i] = BASE32_SORTABLE[(value & 0x1F) as usize];
260
+
value >>= 5;
261
+
}
262
+
263
+
// BASE32_SORTABLE only contains valid UTF-8 ASCII characters
264
+
String::from_utf8(chars.to_vec()).expect("base32-sortable encoding is always valid UTF-8")
265
+
}
266
+
267
+
/// Decodes a base32-sortable string into a TID.
268
+
///
269
+
/// # Errors
270
+
///
271
+
/// Returns [`TidError::InvalidLength`] if the string is not exactly 13 characters.
272
+
/// Returns [`TidError::InvalidCharacter`] if the string contains invalid characters.
273
+
/// Returns [`TidError::InvalidFormat`] if the decoded value has the top bit set.
274
+
///
275
+
/// # Example
276
+
///
277
+
/// ```
278
+
/// use atproto_record::tid::Tid;
279
+
///
280
+
/// let tid_str = "3jzfcijpj2z2a";
281
+
/// let tid = Tid::decode(tid_str).unwrap();
282
+
/// assert_eq!(tid.to_string(), tid_str);
283
+
/// ```
284
+
pub fn decode(s: &str) -> Result<Self, TidError> {
285
+
if s.len() != Self::LENGTH {
286
+
return Err(TidError::InvalidLength {
287
+
expected: Self::LENGTH,
288
+
actual: s.len(),
289
+
});
290
+
}
291
+
292
+
let bytes = s.as_bytes();
293
+
let mut value: u64 = 0;
294
+
295
+
for (i, &byte) in bytes.iter().enumerate() {
296
+
let decoded = BASE32_DECODE[byte as usize];
297
+
if decoded == 0xFF {
298
+
return Err(TidError::InvalidCharacter {
299
+
character: byte as char,
300
+
position: i,
301
+
});
302
+
}
303
+
value = (value << 5) | (decoded as u64);
304
+
}
305
+
306
+
// Verify top bit is 0
307
+
if value & (1u64 << 63) != 0 {
308
+
return Err(TidError::InvalidFormat {
309
+
reason: "Top bit must be 0".to_string(),
310
+
});
311
+
}
312
+
313
+
Ok(Tid(value))
314
+
}
315
+
316
+
/// Gets the current timestamp in microseconds since UNIX epoch.
317
+
fn current_timestamp_micros() -> u64 {
318
+
SystemTime::now()
319
+
.duration_since(UNIX_EPOCH)
320
+
.expect("System time before UNIX epoch")
321
+
.as_micros() as u64
322
+
}
323
+
324
+
/// Generates a random 10-bit clock identifier.
325
+
fn random_clock_id() -> u16 {
326
+
use rand::RngCore;
327
+
let mut rng = rand::thread_rng();
328
+
(rng.next_u32() as u16) & (Self::CLOCK_ID_MASK as u16)
329
+
}
330
+
}
331
+
332
+
impl Default for Tid {
333
+
fn default() -> Self {
334
+
Self::new()
335
+
}
336
+
}
337
+
338
+
impl fmt::Display for Tid {
339
+
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
340
+
write!(f, "{}", self.encode())
341
+
}
342
+
}
343
+
344
+
impl FromStr for Tid {
345
+
type Err = TidError;
346
+
347
+
fn from_str(s: &str) -> Result<Self, Self::Err> {
348
+
Self::decode(s)
349
+
}
350
+
}
351
+
352
+
impl serde::Serialize for Tid {
353
+
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
354
+
where
355
+
S: serde::Serializer,
356
+
{
357
+
serializer.serialize_str(&self.encode())
358
+
}
359
+
}
360
+
361
+
impl<'de> serde::Deserialize<'de> for Tid {
362
+
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
363
+
where
364
+
D: serde::Deserializer<'de>,
365
+
{
366
+
let s = String::deserialize(deserializer)?;
367
+
Self::decode(&s).map_err(serde::de::Error::custom)
368
+
}
369
+
}
370
+
371
+
#[cfg(test)]
372
+
mod tests {
373
+
use super::*;
374
+
375
+
#[test]
376
+
fn test_tid_encode_decode() {
377
+
let tid = Tid::new();
378
+
let encoded = tid.encode();
379
+
assert_eq!(encoded.len(), Tid::LENGTH);
380
+
381
+
let decoded = Tid::decode(&encoded).unwrap();
382
+
assert_eq!(tid, decoded);
383
+
}
384
+
385
+
#[test]
386
+
fn test_tid_from_parts() {
387
+
let timestamp = 1234567890123456u64;
388
+
let clock_id = 42u16;
389
+
let tid = Tid::from_parts(timestamp, clock_id);
390
+
391
+
assert_eq!(tid.timestamp_micros(), timestamp);
392
+
assert_eq!(tid.clock_id(), clock_id);
393
+
}
394
+
395
+
#[test]
396
+
fn test_tid_monotonic() {
397
+
let tid1 = Tid::new();
398
+
std::thread::sleep(std::time::Duration::from_micros(10));
399
+
let tid2 = Tid::new();
400
+
401
+
assert!(tid1 < tid2);
402
+
}
403
+
404
+
#[test]
405
+
fn test_tid_same_timestamp() {
406
+
let timestamp = 1234567890123456u64;
407
+
let tid1 = Tid::new_with_time(timestamp);
408
+
let tid2 = Tid::new_with_time(timestamp);
409
+
410
+
// Should have different clock IDs or incremented clock ID
411
+
assert!(tid1 < tid2 || tid1.clock_id() + 1 == tid2.clock_id());
412
+
}
413
+
414
+
#[test]
415
+
fn test_tid_string_roundtrip() {
416
+
let tid = Tid::new();
417
+
let s = tid.to_string();
418
+
let parsed: Tid = s.parse().unwrap();
419
+
assert_eq!(tid, parsed);
420
+
}
421
+
422
+
#[test]
423
+
fn test_tid_serde() {
424
+
let tid = Tid::new();
425
+
let json = serde_json::to_string(&tid).unwrap();
426
+
let parsed: Tid = serde_json::from_str(&json).unwrap();
427
+
assert_eq!(tid, parsed);
428
+
}
429
+
430
+
#[test]
431
+
fn test_tid_valid_examples() {
432
+
// Examples from the specification
433
+
let examples = ["3jzfcijpj2z2a", "7777777777777", "2222222222222"];
434
+
435
+
for example in &examples {
436
+
let tid = Tid::decode(example).unwrap();
437
+
assert_eq!(&tid.encode(), example);
438
+
}
439
+
}
440
+
441
+
#[test]
442
+
fn test_tid_invalid_length() {
443
+
let result = Tid::decode("123");
444
+
assert!(matches!(result, Err(TidError::InvalidLength { .. })));
445
+
}
446
+
447
+
#[test]
448
+
fn test_tid_invalid_character() {
449
+
let result = Tid::decode("123456789012!");
450
+
assert!(matches!(result, Err(TidError::InvalidCharacter { .. })));
451
+
}
452
+
453
+
#[test]
454
+
fn test_tid_first_char_range() {
455
+
// First character must be in valid range per spec
456
+
let tid = Tid::new();
457
+
let encoded = tid.encode();
458
+
let first_char = encoded.chars().next().unwrap();
459
+
460
+
// First char must be 234567abcdefghij (values 0-15 in base32-sortable)
461
+
assert!("234567abcdefghij".contains(first_char));
462
+
}
463
+
464
+
#[test]
465
+
fn test_tid_sortability() {
466
+
// TIDs with increasing timestamps should sort correctly as strings
467
+
let tid1 = Tid::from_parts(1000000, 0);
468
+
let tid2 = Tid::from_parts(2000000, 0);
469
+
let tid3 = Tid::from_parts(3000000, 0);
470
+
471
+
let s1 = tid1.to_string();
472
+
let s2 = tid2.to_string();
473
+
let s3 = tid3.to_string();
474
+
475
+
assert!(s1 < s2);
476
+
assert!(s2 < s3);
477
+
assert!(s1 < s3);
478
+
}
479
+
480
+
#[test]
481
+
fn test_tid_clock_backward() {
482
+
// Simulate clock moving backwards
483
+
let timestamp1 = 2000000u64;
484
+
let tid1 = Tid::new_with_time(timestamp1);
485
+
486
+
let timestamp2 = 1000000u64; // Earlier timestamp
487
+
let tid2 = Tid::new_with_time(timestamp2);
488
+
489
+
// TID should still be monotonically increasing
490
+
assert!(tid2 > tid1);
491
+
}
492
+
}
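
A minimal usage sketch tying together the `Tid` API above — generation, encoding, parsing, and component access. It assumes the `atproto_record::tid::Tid` path used by the doc examples in this file; everything else mirrors calls shown above.

```rust
use atproto_record::tid::Tid;

fn tid_roundtrip_example() {
    // Generate a TID for the current time; the generator guarantees
    // monotonically increasing values across calls.
    let tid = Tid::new();

    // Encode to the 13-character base32-sortable form used for record keys.
    let encoded = tid.encode();
    assert_eq!(encoded.len(), Tid::LENGTH);

    // Parse it back; `FromStr` delegates to `Tid::decode`.
    let parsed: Tid = encoded.parse().expect("freshly encoded TID is valid");
    assert_eq!(parsed, tid);

    // The timestamp and clock identifier components remain accessible.
    println!("ts={}μs clock={}", parsed.timestamp_micros(), parsed.clock_id());
}
```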
+53
crates/atproto-tap/Cargo.toml
···
1
+
[package]
2
+
name = "atproto-tap"
3
+
version = "0.13.0"
4
+
description = "AT Protocol TAP (Trusted Attestation Protocol) service consumer"
5
+
readme = "README.md"
6
+
homepage = "https://tangled.sh/@smokesignal.events/atproto-identity-rs"
7
+
documentation = "https://docs.rs/atproto-tap"
8
+
9
+
edition.workspace = true
10
+
rust-version.workspace = true
11
+
authors.workspace = true
12
+
repository.workspace = true
13
+
license.workspace = true
14
+
keywords.workspace = true
15
+
categories.workspace = true
16
+
17
+
[dependencies]
18
+
tokio = { workspace = true, features = ["sync", "time"] }
19
+
tokio-stream = "0.1"
20
+
tokio-websockets = { workspace = true }
21
+
futures = { workspace = true }
22
+
reqwest = { workspace = true }
23
+
serde = { workspace = true }
24
+
serde_json = { workspace = true }
25
+
thiserror = { workspace = true }
26
+
tracing = { workspace = true }
27
+
http = { workspace = true }
28
+
base64 = { workspace = true }
29
+
atproto-identity.workspace = true
30
+
atproto-client = { workspace = true, optional = true }
31
+
32
+
# Memory efficiency
33
+
compact_str = { version = "0.8", features = ["serde"] }
34
+
itoa = "1.0"
35
+
36
+
# Optional for CLI
37
+
clap = { workspace = true, optional = true }
38
+
tracing-subscriber = { version = "0.3", features = ["env-filter"], optional = true }
39
+
40
+
[features]
41
+
default = []
42
+
clap = ["dep:clap", "dep:tracing-subscriber", "dep:atproto-client", "tokio/rt-multi-thread", "tokio/macros", "tokio/signal"]
43
+
44
+
[[bin]]
45
+
name = "atproto-tap-client"
46
+
required-features = ["clap"]
47
+
48
+
[[bin]]
49
+
name = "atproto-tap-extras"
50
+
required-features = ["clap"]
51
+
52
+
[lints]
53
+
workspace = true
+351
crates/atproto-tap/src/bin/atproto-tap-client.rs
···
1
+
//! Command-line client for TAP services.
2
+
//!
3
+
//! This tool provides commands for consuming TAP events and managing tracked repositories.
4
+
//!
5
+
//! # Usage
6
+
//!
7
+
//! ```bash
8
+
//! # Stream events from a TAP service
9
+
//! cargo run --features clap --bin atproto-tap-client -- localhost:2480 read
10
+
//!
11
+
//! # Stream with authentication and filters
12
+
//! cargo run --features clap --bin atproto-tap-client -- localhost:2480 -p secret read --live-only
13
+
//!
14
+
//! # Add repositories to track
15
+
//! cargo run --features clap --bin atproto-tap-client -- localhost:2480 -p secret repos add did:plc:xyz did:plc:abc
16
+
//!
17
+
//! # Remove repositories from tracking
18
+
//! cargo run --features clap --bin atproto-tap-client -- localhost:2480 -p secret repos remove did:plc:xyz
19
+
//!
20
+
//! # Resolve a DID to its DID document
21
+
//! cargo run --features clap --bin atproto-tap-client -- localhost:2480 resolve did:plc:xyz
22
+
//!
23
+
//! # Resolve a DID and only output the handle
24
+
//! cargo run --features clap --bin atproto-tap-client -- localhost:2480 resolve did:plc:xyz --handle-only
25
+
//!
26
+
//! # Get repository tracking info
27
+
//! cargo run --features clap --bin atproto-tap-client -- localhost:2480 info did:plc:xyz
28
+
//! ```
29
+
30
+
use atproto_tap::{TapClient, TapConfig, TapEvent, connect};
31
+
use clap::{Parser, Subcommand};
32
+
use std::time::Duration;
33
+
use tokio_stream::StreamExt;
34
+
35
+
/// TAP service client for consuming events and managing repositories.
36
+
#[derive(Parser)]
37
+
#[command(
38
+
name = "atproto-tap-client",
39
+
version,
40
+
about = "TAP service client for AT Protocol",
41
+
long_about = "Connect to a TAP service to stream repository/identity events or manage tracked repositories.\n\n\
42
+
Events are printed to stdout as JSON, one per line.\n\
43
+
Use Ctrl+C to gracefully stop the consumer."
44
+
)]
45
+
struct Args {
46
+
/// TAP service hostname (e.g., localhost:2480)
47
+
hostname: String,
48
+
49
+
/// Admin password for authentication
50
+
#[arg(short, long, global = true)]
51
+
password: Option<String>,
52
+
53
+
#[command(subcommand)]
54
+
command: Command,
55
+
}
56
+
57
+
#[derive(Subcommand)]
58
+
enum Command {
59
+
/// Connect to TAP and stream events as JSON
60
+
Read {
61
+
/// Disable acknowledgments
62
+
#[arg(long)]
63
+
no_acks: bool,
64
+
65
+
/// Maximum reconnection attempts (0 = unlimited)
66
+
#[arg(long, default_value = "0")]
67
+
max_reconnects: u32,
68
+
69
+
/// Print debug information to stderr
70
+
#[arg(short, long)]
71
+
debug: bool,
72
+
73
+
/// Filter to specific collections (comma-separated)
74
+
#[arg(long)]
75
+
collections: Option<String>,
76
+
77
+
/// Only show live events (skip backfill)
78
+
#[arg(long)]
79
+
live_only: bool,
80
+
},
81
+
82
+
/// Manage tracked repositories
83
+
Repos {
84
+
#[command(subcommand)]
85
+
action: ReposAction,
86
+
},
87
+
88
+
/// Resolve a DID to its DID document
89
+
Resolve {
90
+
/// DID to resolve (e.g., did:plc:xyz123)
91
+
did: String,
92
+
93
+
/// Only output the handle (instead of full DID document)
94
+
#[arg(long)]
95
+
handle_only: bool,
96
+
},
97
+
98
+
/// Get tracking info for a repository
99
+
Info {
100
+
/// DID to get info for (e.g., did:plc:xyz123)
101
+
did: String,
102
+
},
103
+
}
104
+
105
+
#[derive(Subcommand)]
106
+
enum ReposAction {
107
+
/// Add repositories to track
108
+
Add {
109
+
/// DIDs to add (e.g., did:plc:xyz123)
110
+
#[arg(required = true)]
111
+
dids: Vec<String>,
112
+
},
113
+
114
+
/// Remove repositories from tracking
115
+
Remove {
116
+
/// DIDs to remove
117
+
#[arg(required = true)]
118
+
dids: Vec<String>,
119
+
},
120
+
}
121
+
122
+
#[tokio::main]
123
+
async fn main() {
124
+
let args = Args::parse();
125
+
126
+
match args.command {
127
+
Command::Read {
128
+
no_acks,
129
+
max_reconnects,
130
+
debug,
131
+
collections,
132
+
live_only,
133
+
} => {
134
+
run_read(
135
+
&args.hostname,
136
+
args.password,
137
+
no_acks,
138
+
max_reconnects,
139
+
debug,
140
+
collections,
141
+
live_only,
142
+
)
143
+
.await;
144
+
}
145
+
Command::Repos { action } => {
146
+
run_repos(&args.hostname, args.password, action).await;
147
+
}
148
+
Command::Resolve { did, handle_only } => {
149
+
run_resolve(&args.hostname, args.password, &did, handle_only).await;
150
+
}
151
+
Command::Info { did } => {
152
+
run_info(&args.hostname, args.password, &did).await;
153
+
}
154
+
}
155
+
}
156
+
157
+
async fn run_read(
158
+
hostname: &str,
159
+
password: Option<String>,
160
+
no_acks: bool,
161
+
max_reconnects: u32,
162
+
debug: bool,
163
+
collections: Option<String>,
164
+
live_only: bool,
165
+
) {
166
+
// Initialize tracing if debug mode
167
+
if debug {
168
+
tracing_subscriber::fmt()
169
+
.with_env_filter("atproto_tap=debug")
170
+
.with_writer(std::io::stderr)
171
+
.init();
172
+
}
173
+
174
+
// Build configuration
175
+
let mut config_builder = TapConfig::builder()
176
+
.hostname(hostname)
177
+
.send_acks(!no_acks);
178
+
179
+
if let Some(password) = password {
180
+
config_builder = config_builder.admin_password(password);
181
+
}
182
+
183
+
if max_reconnects > 0 {
184
+
config_builder = config_builder.max_reconnect_attempts(Some(max_reconnects));
185
+
}
186
+
187
+
// Set reasonable defaults for CLI usage
188
+
config_builder = config_builder
189
+
.initial_reconnect_delay(Duration::from_secs(1))
190
+
.max_reconnect_delay(Duration::from_secs(30));
191
+
192
+
let config = config_builder.build();
193
+
194
+
eprintln!("Connecting to TAP service at {}...", hostname);
195
+
196
+
let mut stream = connect(config);
197
+
198
+
// Parse collection filters
199
+
let collection_filters: Vec<String> = collections
200
+
.map(|c| c.split(',').map(|s| s.trim().to_string()).collect())
201
+
.unwrap_or_default();
202
+
203
+
// Handle Ctrl+C
204
+
let ctrl_c = tokio::signal::ctrl_c();
205
+
tokio::pin!(ctrl_c);
206
+
207
+
loop {
208
+
tokio::select! {
209
+
Some(result) = stream.next() => {
210
+
match result {
211
+
Ok(event) => {
212
+
// Apply filters
213
+
let should_print = match event.as_ref() {
214
+
TapEvent::Record { record, .. } => {
215
+
// Filter by live flag
216
+
if live_only && !record.live {
217
+
false
218
+
}
219
+
// Filter by collection
220
+
else if !collection_filters.is_empty() {
221
+
collection_filters.iter().any(|c| record.collection.as_ref() == c)
222
+
} else {
223
+
true
224
+
}
225
+
}
226
+
TapEvent::Identity { .. } => !live_only, // Always show identity unless live_only
227
+
};
228
+
229
+
if should_print {
230
+
// Print as JSON to stdout
231
+
match serde_json::to_string(event.as_ref()) {
232
+
Ok(json) => println!("{}", json),
233
+
Err(e) => {
234
+
eprintln!("Failed to serialize event: {}", e);
235
+
}
236
+
}
237
+
}
238
+
}
239
+
Err(e) => {
240
+
eprintln!("Error: {}", e);
241
+
242
+
// Exit on fatal errors
243
+
if e.is_fatal() {
244
+
eprintln!("Fatal error, exiting");
245
+
std::process::exit(1);
246
+
}
247
+
}
248
+
}
249
+
}
250
+
_ = &mut ctrl_c => {
251
+
eprintln!("\nReceived Ctrl+C, shutting down...");
252
+
stream.close().await;
253
+
break;
254
+
}
255
+
}
256
+
}
257
+
258
+
eprintln!("Client stopped");
259
+
}
260
+
261
+
async fn run_repos(hostname: &str, password: Option<String>, action: ReposAction) {
262
+
let client = TapClient::new(hostname, password);
263
+
264
+
match action {
265
+
ReposAction::Add { dids } => {
266
+
let did_refs: Vec<&str> = dids.iter().map(|s| s.as_str()).collect();
267
+
268
+
match client.add_repos(&did_refs).await {
269
+
Ok(()) => {
270
+
eprintln!("Added {} repository(ies) to tracking", dids.len());
271
+
for did in &dids {
272
+
println!("{}", did);
273
+
}
274
+
}
275
+
Err(e) => {
276
+
eprintln!("Failed to add repositories: {}", e);
277
+
std::process::exit(1);
278
+
}
279
+
}
280
+
}
281
+
ReposAction::Remove { dids } => {
282
+
let did_refs: Vec<&str> = dids.iter().map(|s| s.as_str()).collect();
283
+
284
+
match client.remove_repos(&did_refs).await {
285
+
Ok(()) => {
286
+
eprintln!("Removed {} repository(ies) from tracking", dids.len());
287
+
for did in &dids {
288
+
println!("{}", did);
289
+
}
290
+
}
291
+
Err(e) => {
292
+
eprintln!("Failed to remove repositories: {}", e);
293
+
std::process::exit(1);
294
+
}
295
+
}
296
+
}
297
+
}
298
+
}
299
+
300
+
async fn run_resolve(hostname: &str, password: Option<String>, did: &str, handle_only: bool) {
301
+
let client = TapClient::new(hostname, password);
302
+
303
+
match client.resolve(did).await {
304
+
Ok(doc) => {
305
+
if handle_only {
306
+
// Use the handles() method from atproto_identity::model::Document
307
+
match doc.handles() {
308
+
Some(handle) => println!("{}", handle),
309
+
None => {
310
+
eprintln!("No handle found in DID document");
311
+
std::process::exit(1);
312
+
}
313
+
}
314
+
} else {
315
+
// Print full DID document as JSON
316
+
match serde_json::to_string_pretty(&doc) {
317
+
Ok(json) => println!("{}", json),
318
+
Err(e) => {
319
+
eprintln!("Failed to serialize DID document: {}", e);
320
+
std::process::exit(1);
321
+
}
322
+
}
323
+
}
324
+
}
325
+
Err(e) => {
326
+
eprintln!("Failed to resolve DID: {}", e);
327
+
std::process::exit(1);
328
+
}
329
+
}
330
+
}
331
+
332
+
async fn run_info(hostname: &str, password: Option<String>, did: &str) {
333
+
let client = TapClient::new(hostname, password);
334
+
335
+
match client.info(did).await {
336
+
Ok(info) => {
337
+
// Print as JSON for easy parsing
338
+
match serde_json::to_string_pretty(&info) {
339
+
Ok(json) => println!("{}", json),
340
+
Err(e) => {
341
+
eprintln!("Failed to serialize info: {}", e);
342
+
std::process::exit(1);
343
+
}
344
+
}
345
+
}
346
+
Err(e) => {
347
+
eprintln!("Failed to get repository info: {}", e);
348
+
std::process::exit(1);
349
+
}
350
+
}
351
+
}
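
A stripped-down version of the `read` subcommand's core loop, showing the minimum a library consumer needs: build a `TapConfig`, call `connect`, and poll the stream. Filters, ack configuration, and Ctrl+C handling from the binary above are omitted, and the stream item type is inferred from how the binary uses it (`event.as_ref()`), so treat this as a sketch.

```rust
use atproto_tap::{TapConfig, connect};
use tokio_stream::StreamExt;

// Minimal consumer mirroring the `read` subcommand above, without filters.
async fn stream_events() {
    let config = TapConfig::new("localhost:2480");
    let mut stream = connect(config);

    while let Some(result) = stream.next().await {
        match result {
            // Each event is printed as one JSON line, as the binary does.
            Ok(event) => match serde_json::to_string(event.as_ref()) {
                Ok(json) => println!("{}", json),
                Err(e) => eprintln!("Failed to serialize event: {}", e),
            },
            // Fatal errors (auth failure, reconnects exhausted) end the loop.
            Err(e) if e.is_fatal() => {
                eprintln!("Fatal error: {}", e);
                break;
            }
            Err(e) => eprintln!("Error: {}", e),
        }
    }

    stream.close().await;
}
```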
+214
crates/atproto-tap/src/bin/atproto-tap-extras.rs
···
1
+
//! Additional TAP client utilities for AT Protocol.
2
+
//!
3
+
//! This tool provides extra commands for managing TAP tracked repositories
4
+
//! based on social graph data.
5
+
//!
6
+
//! # Usage
7
+
//!
8
+
//! ```bash
9
+
//! # Add all accounts followed by a DID to TAP tracking
10
+
//! cargo run --features clap --bin atproto-tap-extras -- localhost:2480 repos-add-followers did:plc:xyz
11
+
//!
12
+
//! # With authentication
13
+
//! cargo run --features clap --bin atproto-tap-extras -- localhost:2480 -p secret repos-add-followers did:plc:xyz
14
+
//! ```
15
+
16
+
use atproto_client::client::Auth;
17
+
use atproto_client::com::atproto::repo::{ListRecordsParams, list_records};
18
+
use atproto_identity::plc::query as plc_query;
19
+
use atproto_tap::TapClient;
20
+
use clap::{Parser, Subcommand};
21
+
use serde::Deserialize;
22
+
23
+
/// TAP extras utility for managing tracked repositories.
24
+
#[derive(Parser)]
25
+
#[command(
26
+
name = "atproto-tap-extras",
27
+
version,
28
+
about = "TAP extras utility for AT Protocol",
29
+
long_about = "Additional utilities for managing TAP tracked repositories based on social graph data."
30
+
)]
31
+
struct Args {
32
+
/// TAP service hostname (e.g., localhost:2480)
33
+
hostname: String,
34
+
35
+
/// Admin password for TAP authentication
36
+
#[arg(short, long, global = true)]
37
+
password: Option<String>,
38
+
39
+
/// PLC directory hostname for DID resolution
40
+
#[arg(long, default_value = "plc.directory", global = true)]
41
+
plc_hostname: String,
42
+
43
+
#[command(subcommand)]
44
+
command: Command,
45
+
}
46
+
47
+
#[derive(Subcommand)]
48
+
enum Command {
49
+
/// Add accounts followed by a DID to TAP tracking.
50
+
///
51
+
/// Fetches all app.bsky.graph.follow records from the specified DID's repository
52
+
/// and adds the followed DIDs to TAP for tracking.
53
+
ReposAddFollowers {
54
+
/// DID whose follow records are read (e.g., did:plc:xyz123)
55
+
did: String,
56
+
57
+
/// Batch size for adding repos to TAP
58
+
#[arg(long, default_value = "100")]
59
+
batch_size: usize,
60
+
61
+
/// Dry run - print DIDs without adding to TAP
62
+
#[arg(long)]
63
+
dry_run: bool,
64
+
},
65
+
}
66
+
67
+
/// Follow record structure from app.bsky.graph.follow.
68
+
#[derive(Debug, Deserialize)]
69
+
struct FollowRecord {
70
+
/// The DID of the account being followed.
71
+
subject: String,
72
+
}
73
+
74
+
#[tokio::main]
75
+
async fn main() {
76
+
let args = Args::parse();
77
+
78
+
match args.command {
79
+
Command::ReposAddFollowers {
80
+
did,
81
+
batch_size,
82
+
dry_run,
83
+
} => {
84
+
run_repos_add_followers(
85
+
&args.hostname,
86
+
args.password,
87
+
&args.plc_hostname,
88
+
&did,
89
+
batch_size,
90
+
dry_run,
91
+
)
92
+
.await;
93
+
}
94
+
}
95
+
}
96
+
97
+
async fn run_repos_add_followers(
98
+
tap_hostname: &str,
99
+
tap_password: Option<String>,
100
+
plc_hostname: &str,
101
+
did: &str,
102
+
batch_size: usize,
103
+
dry_run: bool,
104
+
) {
105
+
let http_client = reqwest::Client::new();
106
+
107
+
// Resolve the DID to get the PDS endpoint
108
+
eprintln!("Resolving DID: {}", did);
109
+
let document = match plc_query(&http_client, plc_hostname, did).await {
110
+
Ok(doc) => doc,
111
+
Err(e) => {
112
+
eprintln!("Failed to resolve DID: {}", e);
113
+
std::process::exit(1);
114
+
}
115
+
};
116
+
117
+
let pds_endpoints = document.pds_endpoints();
118
+
if pds_endpoints.is_empty() {
119
+
eprintln!("No PDS endpoint found in DID document");
120
+
std::process::exit(1);
121
+
}
122
+
let pds_url = pds_endpoints[0];
123
+
eprintln!("Using PDS: {}", pds_url);
124
+
125
+
// Collect all followed DIDs
126
+
let mut followed_dids: Vec<String> = Vec::new();
127
+
let mut cursor: Option<String> = None;
128
+
let collection = "app.bsky.graph.follow".to_string();
129
+
130
+
eprintln!("Fetching follow records...");
131
+
132
+
loop {
133
+
let params = if let Some(c) = cursor.take() {
134
+
ListRecordsParams::new().limit(100).cursor(c)
135
+
} else {
136
+
ListRecordsParams::new().limit(100)
137
+
};
138
+
139
+
let response = match list_records::<FollowRecord>(
140
+
&http_client,
141
+
&Auth::None,
142
+
pds_url,
143
+
did.to_string(),
144
+
collection.clone(),
145
+
params,
146
+
)
147
+
.await
148
+
{
149
+
Ok(resp) => resp,
150
+
Err(e) => {
151
+
eprintln!("Failed to list records: {}", e);
152
+
std::process::exit(1);
153
+
}
154
+
};
155
+
156
+
for record in &response.records {
157
+
followed_dids.push(record.value.subject.clone());
158
+
}
159
+
160
+
eprintln!(
161
+
" Fetched {} records (total: {})",
162
+
response.records.len(),
163
+
followed_dids.len()
164
+
);
165
+
166
+
match response.cursor {
167
+
Some(c) if !response.records.is_empty() => {
168
+
cursor = Some(c);
169
+
}
170
+
_ => break,
171
+
}
172
+
}
173
+
174
+
if followed_dids.is_empty() {
175
+
eprintln!("No follow records found");
176
+
return;
177
+
}
178
+
179
+
eprintln!("Found {} followed accounts", followed_dids.len());
180
+
181
+
if dry_run {
182
+
eprintln!("\nDry run - would add these DIDs to TAP:");
183
+
for did in &followed_dids {
184
+
println!("{}", did);
185
+
}
186
+
return;
187
+
}
188
+
189
+
// Add to TAP in batches
190
+
let tap_client = TapClient::new(tap_hostname, tap_password);
191
+
let mut added = 0;
192
+
193
+
for chunk in followed_dids.chunks(batch_size) {
194
+
let did_refs: Vec<&str> = chunk.iter().map(|s| s.as_str()).collect();
195
+
196
+
match tap_client.add_repos(&did_refs).await {
197
+
Ok(()) => {
198
+
added += chunk.len();
199
+
eprintln!("Added {} DIDs to TAP (total: {})", chunk.len(), added);
200
+
}
201
+
Err(e) => {
202
+
eprintln!("Failed to add repos to TAP: {}", e);
203
+
std::process::exit(1);
204
+
}
205
+
}
206
+
}
207
+
208
+
eprintln!("Successfully added {} DIDs to TAP", added);
209
+
210
+
// Print all added DIDs
211
+
for did in &followed_dids {
212
+
println!("{}", did);
213
+
}
214
+
}
+371
crates/atproto-tap/src/client.rs
···
1
+
//! HTTP client for TAP management API.
2
+
//!
3
+
//! This module provides [`TapClient`] for interacting with the TAP service's
4
+
//! HTTP management endpoints, including adding/removing tracked repositories.
5
+
6
+
use crate::errors::TapError;
7
+
use atproto_identity::model::Document;
8
+
use base64::Engine;
9
+
use base64::engine::general_purpose::STANDARD as BASE64;
10
+
use reqwest::header::{AUTHORIZATION, CONTENT_TYPE, HeaderMap, HeaderValue};
11
+
use serde::{Deserialize, Serialize};
12
+
13
+
/// HTTP client for TAP management API.
14
+
///
15
+
/// Provides methods for managing which repositories the TAP service tracks,
16
+
/// checking service health, and querying repository status.
17
+
///
18
+
/// # Example
19
+
///
20
+
/// ```ignore
21
+
/// use atproto_tap::TapClient;
22
+
///
23
+
/// let client = TapClient::new("localhost:2480", Some("admin_password".to_string()));
24
+
///
25
+
/// // Add repositories to track
26
+
/// client.add_repos(&["did:plc:xyz123", "did:plc:abc456"]).await?;
27
+
///
28
+
/// // Check health
29
+
/// if client.health().await? {
30
+
/// println!("TAP service is healthy");
31
+
/// }
32
+
/// ```
33
+
#[derive(Debug, Clone)]
34
+
pub struct TapClient {
35
+
http_client: reqwest::Client,
36
+
base_url: String,
37
+
auth_header: Option<HeaderValue>,
38
+
}
39
+
40
+
impl TapClient {
41
+
/// Create a new TAP management client.
42
+
///
43
+
/// # Arguments
44
+
///
45
+
/// * `hostname` - TAP service hostname (e.g., "localhost:2480")
46
+
/// * `admin_password` - Optional admin password for authentication
47
+
pub fn new(hostname: &str, admin_password: Option<String>) -> Self {
48
+
let auth_header = admin_password.map(|password| {
49
+
let credentials = format!("admin:{}", password);
50
+
let encoded = BASE64.encode(credentials.as_bytes());
51
+
HeaderValue::from_str(&format!("Basic {}", encoded))
52
+
.expect("Invalid auth header value")
53
+
});
54
+
55
+
Self {
56
+
http_client: reqwest::Client::new(),
57
+
base_url: format!("http://{}", hostname),
58
+
auth_header,
59
+
}
60
+
}
61
+
62
+
/// Create default headers for requests.
63
+
fn default_headers(&self) -> HeaderMap {
64
+
let mut headers = HeaderMap::new();
65
+
headers.insert(CONTENT_TYPE, HeaderValue::from_static("application/json"));
66
+
if let Some(auth) = &self.auth_header {
67
+
headers.insert(AUTHORIZATION, auth.clone());
68
+
}
69
+
headers
70
+
}
71
+
72
+
/// Add repositories to track.
73
+
///
74
+
/// Sends a POST request to `/repos/add` with the list of DIDs.
75
+
///
76
+
/// # Arguments
77
+
///
78
+
/// * `dids` - Slice of DID strings to track
79
+
///
80
+
/// # Example
81
+
///
82
+
/// ```ignore
83
+
/// client.add_repos(&[
84
+
/// "did:plc:z72i7hdynmk6r22z27h6tvur",
85
+
/// "did:plc:ewvi7nxzyoun6zhxrhs64oiz",
86
+
/// ]).await?;
87
+
/// ```
88
+
pub async fn add_repos(&self, dids: &[&str]) -> Result<(), TapError> {
89
+
let url = format!("{}/repos/add", self.base_url);
90
+
let body = AddReposRequest {
91
+
dids: dids.iter().map(|s| s.to_string()).collect(),
92
+
};
93
+
94
+
let response = self
95
+
.http_client
96
+
.post(&url)
97
+
.headers(self.default_headers())
98
+
.json(&body)
99
+
.send()
100
+
.await?;
101
+
102
+
if response.status().is_success() {
103
+
tracing::debug!(count = dids.len(), "Added repositories to TAP");
104
+
Ok(())
105
+
} else {
106
+
let status = response.status().as_u16();
107
+
let message = response.text().await.unwrap_or_default();
108
+
Err(TapError::HttpResponseError { status, message })
109
+
}
110
+
}
111
+
112
+
/// Remove repositories from tracking.
113
+
///
114
+
/// Sends a POST request to `/repos/remove` with the list of DIDs.
115
+
///
116
+
/// # Arguments
117
+
///
118
+
/// * `dids` - Slice of DID strings to stop tracking
119
+
pub async fn remove_repos(&self, dids: &[&str]) -> Result<(), TapError> {
120
+
let url = format!("{}/repos/remove", self.base_url);
121
+
let body = AddReposRequest {
122
+
dids: dids.iter().map(|s| s.to_string()).collect(),
123
+
};
124
+
125
+
let response = self
126
+
.http_client
127
+
.post(&url)
128
+
.headers(self.default_headers())
129
+
.json(&body)
130
+
.send()
131
+
.await?;
132
+
133
+
if response.status().is_success() {
134
+
tracing::debug!(count = dids.len(), "Removed repositories from TAP");
135
+
Ok(())
136
+
} else {
137
+
let status = response.status().as_u16();
138
+
let message = response.text().await.unwrap_or_default();
139
+
Err(TapError::HttpResponseError { status, message })
140
+
}
141
+
}
142
+
143
+
/// Check service health.
144
+
///
145
+
/// Sends a GET request to `/health`.
146
+
///
147
+
/// # Returns
148
+
///
149
+
/// `true` if the service is healthy, `false` otherwise.
150
+
pub async fn health(&self) -> Result<bool, TapError> {
151
+
let url = format!("{}/health", self.base_url);
152
+
153
+
let response = self
154
+
.http_client
155
+
.get(&url)
156
+
.headers(self.default_headers())
157
+
.send()
158
+
.await?;
159
+
160
+
Ok(response.status().is_success())
161
+
}
162
+
163
+
/// Resolve a DID to its DID document.
164
+
///
165
+
/// Sends a GET request to `/resolve/:did`.
166
+
///
167
+
/// # Arguments
168
+
///
169
+
/// * `did` - The DID to resolve
170
+
///
171
+
/// # Returns
172
+
///
173
+
/// The DID document for the identity.
174
+
pub async fn resolve(&self, did: &str) -> Result<Document, TapError> {
175
+
let url = format!("{}/resolve/{}", self.base_url, did);
176
+
177
+
let response = self
178
+
.http_client
179
+
.get(&url)
180
+
.headers(self.default_headers())
181
+
.send()
182
+
.await?;
183
+
184
+
if response.status().is_success() {
185
+
let doc: Document = response.json().await?;
186
+
Ok(doc)
187
+
} else {
188
+
let status = response.status().as_u16();
189
+
let message = response.text().await.unwrap_or_default();
190
+
Err(TapError::HttpResponseError { status, message })
191
+
}
192
+
}
193
+
194
+
/// Get info about a tracked repository.
195
+
///
196
+
/// Sends a GET request to `/info/:did`.
197
+
///
198
+
/// # Arguments
199
+
///
200
+
/// * `did` - The DID to get info for
201
+
///
202
+
/// # Returns
203
+
///
204
+
/// Repository tracking information.
205
+
pub async fn info(&self, did: &str) -> Result<RepoInfo, TapError> {
206
+
let url = format!("{}/info/{}", self.base_url, did);
207
+
208
+
let response = self
209
+
.http_client
210
+
.get(&url)
211
+
.headers(self.default_headers())
212
+
.send()
213
+
.await?;
214
+
215
+
if response.status().is_success() {
216
+
let info: RepoInfo = response.json().await?;
217
+
Ok(info)
218
+
} else {
219
+
let status = response.status().as_u16();
220
+
let message = response.text().await.unwrap_or_default();
221
+
Err(TapError::HttpResponseError { status, message })
222
+
}
223
+
}
224
+
}
225
+
226
+
/// Request body for adding/removing repositories.
227
+
#[derive(Debug, Serialize)]
228
+
struct AddReposRequest {
229
+
dids: Vec<String>,
230
+
}
231
+
232
+
/// Repository tracking information.
233
+
#[derive(Debug, Clone, Serialize, Deserialize)]
234
+
pub struct RepoInfo {
235
+
/// The repository DID.
236
+
pub did: Box<str>,
237
+
/// Current sync state.
238
+
pub state: RepoState,
239
+
/// The handle for the repository.
240
+
#[serde(default)]
241
+
pub handle: Option<Box<str>>,
242
+
/// Number of records in the repository.
243
+
#[serde(default)]
244
+
pub records: u64,
245
+
/// Current repository revision.
246
+
#[serde(default)]
247
+
pub rev: Option<Box<str>>,
248
+
/// Number of retries for syncing.
249
+
#[serde(default)]
250
+
pub retries: u32,
251
+
/// Error message if any.
252
+
#[serde(default)]
253
+
pub error: Option<Box<str>>,
254
+
/// Additional fields may be present depending on TAP version.
255
+
#[serde(flatten)]
256
+
pub extra: serde_json::Value,
257
+
}
258
+
259
+
/// Repository sync state.
260
+
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
261
+
#[serde(rename_all = "lowercase")]
262
+
pub enum RepoState {
263
+
/// Repository is active and synced.
264
+
Active,
265
+
/// Repository is currently syncing.
266
+
Syncing,
267
+
/// Repository is fully synced.
268
+
Synced,
269
+
/// Sync failed for this repository.
270
+
Failed,
271
+
/// Repository is queued for sync.
272
+
Queued,
273
+
/// Unknown state.
274
+
#[serde(other)]
275
+
Unknown,
276
+
}
277
+
278
+
/// Deprecated alias for RepoState.
279
+
#[deprecated(since = "0.13.0", note = "Use RepoState instead")]
280
+
pub type RepoStatus = RepoState;
281
+
282
+
impl std::fmt::Display for RepoState {
283
+
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
284
+
match self {
285
+
RepoState::Active => write!(f, "active"),
286
+
RepoState::Syncing => write!(f, "syncing"),
287
+
RepoState::Synced => write!(f, "synced"),
288
+
RepoState::Failed => write!(f, "failed"),
289
+
RepoState::Queued => write!(f, "queued"),
290
+
RepoState::Unknown => write!(f, "unknown"),
291
+
}
292
+
}
293
+
}
294
+
295
+
#[cfg(test)]
296
+
mod tests {
297
+
use super::*;
298
+
299
+
#[test]
300
+
fn test_client_creation() {
301
+
let client = TapClient::new("localhost:2480", None);
302
+
assert_eq!(client.base_url, "http://localhost:2480");
303
+
assert!(client.auth_header.is_none());
304
+
305
+
let client = TapClient::new("localhost:2480", Some("secret".to_string()));
306
+
assert!(client.auth_header.is_some());
307
+
}
308
+
309
+
#[test]
310
+
fn test_repo_state_display() {
311
+
assert_eq!(RepoState::Active.to_string(), "active");
312
+
assert_eq!(RepoState::Syncing.to_string(), "syncing");
313
+
assert_eq!(RepoState::Synced.to_string(), "synced");
314
+
assert_eq!(RepoState::Failed.to_string(), "failed");
315
+
assert_eq!(RepoState::Queued.to_string(), "queued");
316
+
assert_eq!(RepoState::Unknown.to_string(), "unknown");
317
+
}
318
+
319
+
#[test]
320
+
fn test_repo_state_deserialize() {
321
+
let json = r#""active""#;
322
+
let state: RepoState = serde_json::from_str(json).unwrap();
323
+
assert_eq!(state, RepoState::Active);
324
+
325
+
let json = r#""syncing""#;
326
+
let state: RepoState = serde_json::from_str(json).unwrap();
327
+
assert_eq!(state, RepoState::Syncing);
328
+
329
+
let json = r#""some_new_state""#;
330
+
let state: RepoState = serde_json::from_str(json).unwrap();
331
+
assert_eq!(state, RepoState::Unknown);
332
+
}
333
+
334
+
#[test]
335
+
fn test_repo_info_deserialize() {
336
+
let json = r#"{"did":"did:plc:cbkjy5n7bk3ax2wplmtjofq2","error":"","handle":"ngerakines.me","records":21382,"retries":0,"rev":"3mam4aazabs2m","state":"active"}"#;
337
+
let info: RepoInfo = serde_json::from_str(json).unwrap();
338
+
assert_eq!(&*info.did, "did:plc:cbkjy5n7bk3ax2wplmtjofq2");
339
+
assert_eq!(info.state, RepoState::Active);
340
+
assert_eq!(info.handle.as_deref(), Some("ngerakines.me"));
341
+
assert_eq!(info.records, 21382);
342
+
assert_eq!(info.retries, 0);
343
+
assert_eq!(info.rev.as_deref(), Some("3mam4aazabs2m"));
344
+
// Empty string deserializes as Some("")
345
+
assert_eq!(info.error.as_deref(), Some(""));
346
+
}
347
+
348
+
#[test]
349
+
fn test_repo_info_deserialize_minimal() {
350
+
// Test with only required fields
351
+
let json = r#"{"did":"did:plc:test","state":"syncing"}"#;
352
+
let info: RepoInfo = serde_json::from_str(json).unwrap();
353
+
assert_eq!(&*info.did, "did:plc:test");
354
+
assert_eq!(info.state, RepoState::Syncing);
355
+
assert_eq!(info.handle, None);
356
+
assert_eq!(info.records, 0);
357
+
assert_eq!(info.retries, 0);
358
+
assert_eq!(info.rev, None);
359
+
assert_eq!(info.error, None);
360
+
}
361
+
362
+
#[test]
363
+
fn test_add_repos_request_serialize() {
364
+
let req = AddReposRequest {
365
+
dids: vec!["did:plc:xyz".to_string(), "did:plc:abc".to_string()],
366
+
};
367
+
let json = serde_json::to_string(&req).unwrap();
368
+
assert!(json.contains("dids"));
369
+
assert!(json.contains("did:plc:xyz"));
370
+
}
371
+
}
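
A compact sketch of driving `TapClient` end to end: health check, tracking a repository, inspecting its state, and resolving a DID. The hostname, password, and DID are placeholders, and the error type is boxed to avoid assuming how the crate re-exports `TapError`.

```rust
use atproto_tap::TapClient;

async fn manage_example() -> Result<(), Box<dyn std::error::Error>> {
    let client = TapClient::new("localhost:2480", Some("secret".to_string()));

    // Verify the service is reachable before mutating its state.
    if !client.health().await? {
        eprintln!("TAP service reported unhealthy");
        return Ok(());
    }

    // Start tracking a repository, then inspect its sync state.
    let did = "did:plc:z72i7hdynmk6r22z27h6tvur";
    client.add_repos(&[did]).await?;
    let info = client.info(did).await?;
    println!("state={} records={}", info.state, info.records);

    // Resolve the DID document through the same service.
    let doc = client.resolve(did).await?;
    println!("{}", serde_json::to_string_pretty(&doc)?);

    Ok(())
}
```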
+220
crates/atproto-tap/src/config.rs
···
1
+
//! Configuration for TAP stream connections.
2
+
//!
3
+
//! This module provides the [`TapConfig`] struct for configuring TAP stream
4
+
//! connections, including hostname, authentication, and reconnection behavior.
5
+
6
+
use std::time::Duration;
7
+
8
+
/// Configuration for a TAP stream connection.
9
+
///
10
+
/// Use [`TapConfig::builder()`] for ergonomic construction with defaults.
11
+
///
12
+
/// # Example
13
+
///
14
+
/// ```
15
+
/// use atproto_tap::TapConfig;
16
+
/// use std::time::Duration;
17
+
///
18
+
/// let config = TapConfig::builder()
19
+
/// .hostname("localhost:2480")
20
+
/// .admin_password("secret")
21
+
/// .send_acks(true)
22
+
/// .max_reconnect_attempts(Some(10))
23
+
/// .build();
24
+
/// ```
25
+
#[derive(Debug, Clone)]
26
+
pub struct TapConfig {
27
+
/// TAP service hostname (e.g., "localhost:2480").
28
+
///
29
+
/// The WebSocket URL is constructed as `ws://{hostname}/channel`.
30
+
pub hostname: String,
31
+
32
+
/// Optional admin password for authentication.
33
+
///
34
+
/// If set, HTTP Basic Auth is used with username "admin".
35
+
pub admin_password: Option<String>,
36
+
37
+
/// Whether to send acknowledgments for received messages.
38
+
///
39
+
/// Default: `true`. Set to `false` if the TAP service has acks disabled.
40
+
pub send_acks: bool,
41
+
42
+
/// User-Agent header value for WebSocket connections.
43
+
pub user_agent: String,
44
+
45
+
/// Maximum reconnection attempts before giving up.
46
+
///
47
+
/// `None` means unlimited reconnection attempts (default).
48
+
pub max_reconnect_attempts: Option<u32>,
49
+
50
+
/// Initial delay before first reconnection attempt.
51
+
///
52
+
/// Default: 1 second.
53
+
pub initial_reconnect_delay: Duration,
54
+
55
+
/// Maximum delay between reconnection attempts.
56
+
///
57
+
/// Default: 60 seconds.
58
+
pub max_reconnect_delay: Duration,
59
+
60
+
/// Multiplier for exponential backoff between reconnections.
61
+
///
62
+
/// Default: 2.0 (doubles the delay each attempt).
63
+
pub reconnect_backoff_multiplier: f64,
64
+
}
65
+
66
+
impl Default for TapConfig {
67
+
fn default() -> Self {
68
+
Self {
69
+
hostname: "localhost:2480".to_string(),
70
+
admin_password: None,
71
+
send_acks: true,
72
+
user_agent: format!("atproto-tap/{}", env!("CARGO_PKG_VERSION")),
73
+
max_reconnect_attempts: None,
74
+
initial_reconnect_delay: Duration::from_secs(1),
75
+
max_reconnect_delay: Duration::from_secs(60),
76
+
reconnect_backoff_multiplier: 2.0,
77
+
}
78
+
}
79
+
}
80
+
81
+
impl TapConfig {
82
+
/// Create a new configuration builder with defaults.
83
+
pub fn builder() -> TapConfigBuilder {
84
+
TapConfigBuilder::default()
85
+
}
86
+
87
+
/// Create a minimal configuration for the given hostname.
88
+
pub fn new(hostname: impl Into<String>) -> Self {
89
+
Self {
90
+
hostname: hostname.into(),
91
+
..Default::default()
92
+
}
93
+
}
94
+
95
+
/// Returns the WebSocket URL for the TAP channel.
96
+
pub fn ws_url(&self) -> String {
97
+
format!("ws://{}/channel", self.hostname)
98
+
}
99
+
100
+
/// Returns the HTTP base URL for the TAP management API.
101
+
pub fn http_base_url(&self) -> String {
102
+
format!("http://{}", self.hostname)
103
+
}
104
+
}
105
+
106
+
/// Builder for [`TapConfig`].
107
+
#[derive(Debug, Clone, Default)]
108
+
pub struct TapConfigBuilder {
109
+
config: TapConfig,
110
+
}
111
+
112
+
impl TapConfigBuilder {
113
+
/// Set the TAP service hostname.
114
+
pub fn hostname(mut self, hostname: impl Into<String>) -> Self {
115
+
self.config.hostname = hostname.into();
116
+
self
117
+
}
118
+
119
+
/// Set the admin password for authentication.
120
+
pub fn admin_password(mut self, password: impl Into<String>) -> Self {
121
+
self.config.admin_password = Some(password.into());
122
+
self
123
+
}
124
+
125
+
/// Set whether to send acknowledgments.
126
+
pub fn send_acks(mut self, send_acks: bool) -> Self {
127
+
self.config.send_acks = send_acks;
128
+
self
129
+
}
130
+
131
+
/// Set the User-Agent header value.
132
+
pub fn user_agent(mut self, user_agent: impl Into<String>) -> Self {
133
+
self.config.user_agent = user_agent.into();
134
+
self
135
+
}
136
+
137
+
/// Set the maximum reconnection attempts.
138
+
///
139
+
/// `None` means unlimited attempts.
140
+
pub fn max_reconnect_attempts(mut self, max: Option<u32>) -> Self {
141
+
self.config.max_reconnect_attempts = max;
142
+
self
143
+
}
144
+
145
+
/// Set the initial reconnection delay.
146
+
pub fn initial_reconnect_delay(mut self, delay: Duration) -> Self {
147
+
self.config.initial_reconnect_delay = delay;
148
+
self
149
+
}
150
+
151
+
/// Set the maximum reconnection delay.
152
+
pub fn max_reconnect_delay(mut self, delay: Duration) -> Self {
153
+
self.config.max_reconnect_delay = delay;
154
+
self
155
+
}
156
+
157
+
/// Set the reconnection backoff multiplier.
158
+
pub fn reconnect_backoff_multiplier(mut self, multiplier: f64) -> Self {
159
+
self.config.reconnect_backoff_multiplier = multiplier;
160
+
self
161
+
}
162
+
163
+
/// Build the configuration.
164
+
pub fn build(self) -> TapConfig {
165
+
self.config
166
+
}
167
+
}
168
+
169
+
#[cfg(test)]
170
+
mod tests {
171
+
use super::*;
172
+
173
+
#[test]
174
+
fn test_default_config() {
175
+
let config = TapConfig::default();
176
+
assert_eq!(config.hostname, "localhost:2480");
177
+
assert!(config.admin_password.is_none());
178
+
assert!(config.send_acks);
179
+
assert!(config.max_reconnect_attempts.is_none());
180
+
assert_eq!(config.initial_reconnect_delay, Duration::from_secs(1));
181
+
assert_eq!(config.max_reconnect_delay, Duration::from_secs(60));
182
+
assert!((config.reconnect_backoff_multiplier - 2.0).abs() < f64::EPSILON);
183
+
}
184
+
185
+
#[test]
186
+
fn test_builder() {
187
+
let config = TapConfig::builder()
188
+
.hostname("tap.example.com:2480")
189
+
.admin_password("secret123")
190
+
.send_acks(false)
191
+
.max_reconnect_attempts(Some(5))
192
+
.initial_reconnect_delay(Duration::from_millis(500))
193
+
.max_reconnect_delay(Duration::from_secs(30))
194
+
.reconnect_backoff_multiplier(1.5)
195
+
.build();
196
+
197
+
assert_eq!(config.hostname, "tap.example.com:2480");
198
+
assert_eq!(config.admin_password, Some("secret123".to_string()));
199
+
assert!(!config.send_acks);
200
+
assert_eq!(config.max_reconnect_attempts, Some(5));
201
+
assert_eq!(config.initial_reconnect_delay, Duration::from_millis(500));
202
+
assert_eq!(config.max_reconnect_delay, Duration::from_secs(30));
203
+
assert!((config.reconnect_backoff_multiplier - 1.5).abs() < f64::EPSILON);
204
+
}
205
+
206
+
#[test]
207
+
fn test_ws_url() {
208
+
let config = TapConfig::new("localhost:2480");
209
+
assert_eq!(config.ws_url(), "ws://localhost:2480/channel");
210
+
211
+
let config = TapConfig::new("tap.example.com:8080");
212
+
assert_eq!(config.ws_url(), "ws://tap.example.com:8080/channel");
213
+
}
214
+
215
+
#[test]
216
+
fn test_http_base_url() {
217
+
let config = TapConfig::new("localhost:2480");
218
+
assert_eq!(config.http_base_url(), "http://localhost:2480");
219
+
}
220
+
}
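
The three reconnection fields above compose into an exponential backoff. The stream implementation is not part of this hunk, so the helper below is only an illustrative sketch of how `initial_reconnect_delay`, `reconnect_backoff_multiplier`, and `max_reconnect_delay` typically combine; `backoff_delay` is a hypothetical function, not a crate API.

```rust
use atproto_tap::TapConfig;
use std::time::Duration;

// Hypothetical helper: the delay before reconnect attempt `attempt`,
// growing geometrically from the initial delay and capped at the maximum.
fn backoff_delay(config: &TapConfig, attempt: u32) -> Duration {
    let factor = config.reconnect_backoff_multiplier.powi(attempt as i32);
    config
        .initial_reconnect_delay
        .mul_f64(factor)
        .min(config.max_reconnect_delay)
}

fn backoff_example() {
    let config = TapConfig::new("localhost:2480");
    // With the defaults (1s initial, 2.0 multiplier, 60s cap) the schedule is
    // 1s, 2s, 4s, 8s, 16s, 32s, 60s, 60s, ...
    for attempt in 0..8 {
        println!("attempt {}: wait {:?}", attempt, backoff_delay(&config, attempt));
    }
}
```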
+168
crates/atproto-tap/src/connection.rs
···
1
+
//! WebSocket connection management for TAP streams.
2
+
//!
3
+
//! This module handles the low-level WebSocket connection to a TAP service,
4
+
//! including authentication and message sending/receiving.
5
+
6
+
use crate::config::TapConfig;
7
+
use crate::errors::TapError;
8
+
use base64::Engine;
9
+
use base64::engine::general_purpose::STANDARD as BASE64;
10
+
use futures::{SinkExt, StreamExt};
11
+
use http::Uri;
12
+
use std::str::FromStr;
13
+
use tokio_websockets::{ClientBuilder, Message, WebSocketStream};
14
+
use tokio_websockets::MaybeTlsStream;
15
+
use tokio::net::TcpStream;
16
+
17
+
/// WebSocket connection to a TAP service.
18
+
pub(crate) struct TapConnection {
19
+
/// The underlying WebSocket stream.
20
+
ws: WebSocketStream<MaybeTlsStream<TcpStream>>,
21
+
/// Pre-allocated buffer for acknowledgment messages.
22
+
ack_buffer: Vec<u8>,
23
+
}
24
+
25
+
impl TapConnection {
26
+
/// Establish a new WebSocket connection to the TAP service.
27
+
pub async fn connect(config: &TapConfig) -> Result<Self, TapError> {
28
+
let uri = Uri::from_str(&config.ws_url())
29
+
.map_err(|e| TapError::InvalidUrl(e.to_string()))?;
30
+
31
+
let mut builder = ClientBuilder::from_uri(uri);
32
+
33
+
// Add User-Agent header
34
+
builder = builder
35
+
.add_header(
36
+
http::header::USER_AGENT,
37
+
http::HeaderValue::from_str(&config.user_agent)
38
+
.map_err(|e| TapError::ConnectionFailed(format!("Invalid user agent: {}", e)))?,
39
+
)
40
+
.map_err(|e| TapError::ConnectionFailed(format!("Failed to add header: {}", e)))?;
41
+
42
+
// Add Basic Auth header if password is configured
43
+
if let Some(password) = &config.admin_password {
44
+
let credentials = format!("admin:{}", password);
45
+
let encoded = BASE64.encode(credentials.as_bytes());
46
+
let auth_value = format!("Basic {}", encoded);
47
+
48
+
builder = builder
49
+
.add_header(
50
+
http::header::AUTHORIZATION,
51
+
http::HeaderValue::from_str(&auth_value)
52
+
.map_err(|e| TapError::ConnectionFailed(format!("Invalid auth header: {}", e)))?,
53
+
)
54
+
.map_err(|e| TapError::ConnectionFailed(format!("Failed to add auth header: {}", e)))?;
55
+
}
56
+
57
+
// Connect
58
+
let (ws, _response) = builder
59
+
.connect()
60
+
.await
61
+
.map_err(|e| TapError::ConnectionFailed(e.to_string()))?;
62
+
63
+
tracing::debug!(hostname = %config.hostname, "Connected to TAP service");
64
+
65
+
Ok(Self {
66
+
ws,
67
+
ack_buffer: Vec::with_capacity(48), // {"type":"ack","id":18446744073709551615} is 40 bytes max
68
+
})
69
+
}
70
+
71
+
/// Receive the next message from the WebSocket.
72
+
///
73
+
/// Returns `None` if the connection was closed cleanly.
74
+
pub async fn recv(&mut self) -> Result<Option<String>, TapError> {
75
+
match self.ws.next().await {
76
+
Some(Ok(msg)) => {
77
+
if msg.is_text() {
78
+
msg.as_text()
79
+
.map(|s| Some(s.to_string()))
80
+
.ok_or_else(|| TapError::ParseError("Failed to get text from message".into()))
81
+
} else if msg.is_close() {
82
+
tracing::debug!("Received close frame from TAP service");
83
+
Ok(None)
84
+
} else {
85
+
// Ignore ping/pong and binary messages
86
+
tracing::trace!("Received non-text message, ignoring");
87
+
// Recurse to get the next text message
88
+
Box::pin(self.recv()).await
89
+
}
90
+
}
91
+
Some(Err(e)) => Err(TapError::ConnectionFailed(e.to_string())),
92
+
None => {
93
+
tracing::debug!("WebSocket stream ended");
94
+
Ok(None)
95
+
}
96
+
}
97
+
}
98
+
99
+
/// Send an acknowledgment for the given event ID.
100
+
///
101
+
/// Uses a pre-allocated buffer and itoa for allocation-free formatting.
102
+
/// Format: `{"type":"ack","id":12345}`
103
+
pub async fn send_ack(&mut self, id: u64) -> Result<(), TapError> {
104
+
self.ack_buffer.clear();
105
+
self.ack_buffer.extend_from_slice(b"{\"type\":\"ack\",\"id\":");
106
+
let mut itoa_buf = itoa::Buffer::new();
107
+
self.ack_buffer.extend_from_slice(itoa_buf.format(id).as_bytes());
108
+
self.ack_buffer.push(b'}');
109
+
110
+
// All bytes are ASCII so this is always valid UTF-8
111
+
let msg = std::str::from_utf8(&self.ack_buffer)
112
+
.expect("ack buffer contains only ASCII");
113
+
114
+
self.ws
115
+
.send(Message::text(msg.to_string()))
116
+
.await
117
+
.map_err(|e| TapError::AckFailed(e.to_string()))?;
118
+
119
+
// Flush to ensure the ack is sent immediately
120
+
self.ws
121
+
.flush()
122
+
.await
123
+
.map_err(|e| TapError::AckFailed(format!("Failed to flush ack: {}", e)))?;
124
+
125
+
tracing::trace!(id, "Sent ack");
126
+
Ok(())
127
+
}
128
+
129
+
/// Close the WebSocket connection gracefully.
130
+
pub async fn close(&mut self) -> Result<(), TapError> {
131
+
self.ws
132
+
.close()
133
+
.await
134
+
.map_err(|e| TapError::ConnectionFailed(format!("Failed to close: {}", e)))?;
135
+
Ok(())
136
+
}
137
+
}
138
+
139
+
#[cfg(test)]
140
+
mod tests {
141
+
#[test]
142
+
fn test_ack_buffer_format() {
143
+
// Test that our manual JSON formatting is correct
144
+
// Format: {"type":"ack","id":12345}
145
+
let mut buffer = Vec::with_capacity(64);
146
+
147
+
let id: u64 = 12345;
148
+
buffer.clear();
149
+
buffer.extend_from_slice(b"{\"type\":\"ack\",\"id\":");
150
+
let mut itoa_buf = itoa::Buffer::new();
151
+
buffer.extend_from_slice(itoa_buf.format(id).as_bytes());
152
+
buffer.push(b'}');
153
+
154
+
let result = std::str::from_utf8(&buffer).unwrap();
155
+
assert_eq!(result, r#"{"type":"ack","id":12345}"#);
156
+
157
+
// Test max u64
158
+
let id: u64 = u64::MAX;
159
+
buffer.clear();
160
+
buffer.extend_from_slice(b"{\"type\":\"ack\",\"id\":");
161
+
buffer.extend_from_slice(itoa_buf.format(id).as_bytes());
162
+
buffer.push(b'}');
163
+
164
+
let result = std::str::from_utf8(&buffer).unwrap();
165
+
assert_eq!(result, r#"{"type":"ack","id":18446744073709551615}"#);
166
+
assert!(buffer.len() <= 64); // Fits in our pre-allocated buffer
167
+
}
168
+
}
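
For reference, the same acknowledgment payload expressed through `serde_json`. The hand-rolled `itoa` path in `send_ack` exists to avoid allocating and serializing a struct for every ack on a hot path, but the wire format is identical; this derive-based version is illustrative only.

```rust
use serde::Serialize;

// Derive-based equivalent of the manually formatted ack message above.
#[derive(Serialize)]
struct Ack {
    r#type: &'static str,
    id: u64,
}

fn ack_json(id: u64) -> String {
    serde_json::to_string(&Ack { r#type: "ack", id })
        .expect("ack serialization cannot fail")
}

#[test]
fn ack_json_matches_manual_format() {
    assert_eq!(ack_json(12345), r#"{"type":"ack","id":12345}"#);
}
```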
+143
crates/atproto-tap/src/errors.rs
···
1
+
//! Error types for TAP operations.
2
+
//!
3
+
//! This module defines the error types returned by TAP stream and client operations.
4
+
5
+
use thiserror::Error;
6
+
7
+
/// Errors that can occur during TAP operations.
8
+
#[derive(Debug, Error)]
9
+
pub enum TapError {
10
+
/// WebSocket connection failed.
11
+
#[error("error-atproto-tap-connection-1 WebSocket connection failed: {0}")]
12
+
ConnectionFailed(String),
13
+
14
+
/// Connection was closed unexpectedly.
15
+
#[error("error-atproto-tap-connection-2 Connection closed unexpectedly")]
16
+
ConnectionClosed,
17
+
18
+
/// Maximum reconnection attempts exceeded.
19
+
#[error("error-atproto-tap-connection-3 Maximum reconnection attempts exceeded after {0} attempts")]
20
+
MaxReconnectAttemptsExceeded(u32),
21
+
22
+
/// Authentication failed.
23
+
#[error("error-atproto-tap-auth-1 Authentication failed: {0}")]
24
+
AuthenticationFailed(String),
25
+
26
+
/// Failed to parse a message from the server.
27
+
#[error("error-atproto-tap-parse-1 Failed to parse message: {0}")]
28
+
ParseError(String),
29
+
30
+
/// Failed to send an acknowledgment.
31
+
#[error("error-atproto-tap-ack-1 Failed to send acknowledgment: {0}")]
32
+
AckFailed(String),
33
+
34
+
/// HTTP request failed.
35
+
#[error("error-atproto-tap-http-1 HTTP request failed: {0}")]
36
+
HttpError(String),
37
+
38
+
/// HTTP response indicated an error.
39
+
#[error("error-atproto-tap-http-2 HTTP error response: {status} - {message}")]
40
+
HttpResponseError {
41
+
/// HTTP status code.
42
+
status: u16,
43
+
/// Error message from response.
44
+
message: String,
45
+
},
46
+
47
+
/// Invalid URL.
48
+
#[error("error-atproto-tap-url-1 Invalid URL: {0}")]
49
+
InvalidUrl(String),
50
+
51
+
/// I/O error.
52
+
#[error("error-atproto-tap-io-1 I/O error: {0}")]
53
+
IoError(#[from] std::io::Error),
54
+
55
+
/// JSON serialization/deserialization error.
56
+
#[error("error-atproto-tap-json-1 JSON error: {0}")]
57
+
JsonError(#[from] serde_json::Error),
58
+
59
+
/// Stream has been closed and cannot be used.
60
+
#[error("error-atproto-tap-stream-1 Stream is closed")]
61
+
StreamClosed,
62
+
63
+
/// Operation timed out.
64
+
#[error("error-atproto-tap-timeout-1 Operation timed out")]
65
+
Timeout,
66
+
}
67
+
68
+
impl TapError {
69
+
/// Returns true if this error indicates a connection issue that may be recoverable.
70
+
pub fn is_connection_error(&self) -> bool {
71
+
matches!(
72
+
self,
73
+
TapError::ConnectionFailed(_)
74
+
| TapError::ConnectionClosed
75
+
| TapError::IoError(_)
76
+
| TapError::Timeout
77
+
)
78
+
}
79
+
80
+
/// Returns true if this error is a parse error that doesn't affect connection state.
81
+
pub fn is_parse_error(&self) -> bool {
82
+
matches!(self, TapError::ParseError(_) | TapError::JsonError(_))
83
+
}
84
+
85
+
/// Returns true if this error is fatal and the stream should not attempt recovery.
86
+
pub fn is_fatal(&self) -> bool {
87
+
matches!(
88
+
self,
89
+
TapError::MaxReconnectAttemptsExceeded(_)
90
+
| TapError::AuthenticationFailed(_)
91
+
| TapError::StreamClosed
92
+
)
93
+
}
94
+
}
95
+
96
+
impl From<reqwest::Error> for TapError {
97
+
fn from(err: reqwest::Error) -> Self {
98
+
if err.is_timeout() {
99
+
TapError::Timeout
100
+
} else if err.is_connect() {
101
+
TapError::ConnectionFailed(err.to_string())
102
+
} else {
103
+
TapError::HttpError(err.to_string())
104
+
}
105
+
}
106
+
}
107
+
108
+
#[cfg(test)]
109
+
mod tests {
110
+
use super::*;
111
+
112
+
#[test]
113
+
fn test_error_classification() {
114
+
assert!(TapError::ConnectionFailed("test".into()).is_connection_error());
115
+
assert!(TapError::ConnectionClosed.is_connection_error());
116
+
assert!(TapError::Timeout.is_connection_error());
117
+
118
+
assert!(TapError::ParseError("test".into()).is_parse_error());
119
+
assert!(TapError::JsonError(serde_json::from_str::<()>("invalid").unwrap_err()).is_parse_error());
120
+
121
+
assert!(TapError::MaxReconnectAttemptsExceeded(5).is_fatal());
122
+
assert!(TapError::AuthenticationFailed("test".into()).is_fatal());
123
+
assert!(TapError::StreamClosed.is_fatal());
124
+
125
+
// Non-fatal errors
126
+
assert!(!TapError::ConnectionFailed("test".into()).is_fatal());
127
+
assert!(!TapError::ParseError("test".into()).is_fatal());
128
+
}
129
+
130
+
#[test]
131
+
fn test_error_display() {
132
+
let err = TapError::ConnectionFailed("refused".to_string());
133
+
assert!(err.to_string().contains("error-atproto-tap-connection-1"));
134
+
assert!(err.to_string().contains("refused"));
135
+
136
+
let err = TapError::HttpResponseError {
137
+
status: 404,
138
+
message: "Not Found".to_string(),
139
+
};
140
+
assert!(err.to_string().contains("404"));
141
+
assert!(err.to_string().contains("Not Found"));
142
+
}
143
+
}
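
A sketch of how a caller is expected to combine the classification helpers above when deciding whether to retry, skip, or give up. The crate-root path for `TapError` and the surrounding reconnect policy are assumptions; only the three `is_*` helpers come from this file.

```rust
use atproto_tap::TapError; // path assumed; the type is defined in errors.rs
use std::time::Duration;

// Hypothetical caller-side policy built on the helpers above.
async fn handle_stream_error(err: TapError) -> Result<(), TapError> {
    if err.is_fatal() {
        // Auth failures, exhausted reconnects, closed streams: stop for good.
        return Err(err);
    }
    if err.is_parse_error() {
        // A malformed message does not invalidate the connection; skip it.
        tracing::warn!(error = %err, "skipping unparseable TAP message");
        return Ok(());
    }
    if err.is_connection_error() {
        // Transient network trouble: wait briefly before reconnecting.
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
    Ok(())
}
```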
+488
crates/atproto-tap/src/events.rs
···
1
+
//! TAP event types for AT Protocol record and identity events.
2
+
//!
3
+
//! This module defines the event structures received from a TAP service.
4
+
//! Events are optimized for memory efficiency using:
5
+
//! - `CompactString` for small strings (SSO for ≤24 bytes)
6
+
//! - `Box<str>` for immutable strings (no capacity overhead)
7
+
//! - `serde_json::Value` for record payloads (allows lazy access)
8
+
9
+
use compact_str::CompactString;
10
+
use serde::de::{self, Deserializer, IgnoredAny, MapAccess, Visitor};
11
+
use serde::{Deserialize, Serialize, de::DeserializeOwned};
12
+
use std::fmt;
13
+
14
+
/// A TAP event received from the stream.
15
+
///
16
+
/// TAP delivers two types of events:
17
+
/// - `Record`: Repository record changes (create, update, delete)
18
+
/// - `Identity`: Identity/handle changes for accounts
19
+
#[derive(Debug, Clone, Serialize, Deserialize)]
20
+
#[serde(tag = "type", rename_all = "lowercase")]
21
+
pub enum TapEvent {
22
+
/// A repository record event (create, update, or delete).
23
+
Record {
24
+
/// Sequential event identifier.
25
+
id: u64,
26
+
/// The record event data.
27
+
record: RecordEvent,
28
+
},
29
+
/// An identity change event.
30
+
Identity {
31
+
/// Sequential event identifier.
32
+
id: u64,
33
+
/// The identity event data.
34
+
identity: IdentityEvent,
35
+
},
36
+
}
37
+
38
+
impl TapEvent {
39
+
/// Returns the event ID.
40
+
pub fn id(&self) -> u64 {
41
+
match self {
42
+
TapEvent::Record { id, .. } => *id,
43
+
TapEvent::Identity { id, .. } => *id,
44
+
}
45
+
}
46
+
}
47
+
48
+
/// Extract only the event ID from a JSON string without fully parsing it.
49
+
///
50
+
/// This is a fallback parser used when full `TapEvent` parsing fails (e.g., due to
51
+
/// deeply nested records hitting serde_json's recursion limit). It uses `IgnoredAny`
52
+
/// to efficiently skip over nested content without building data structures, allowing
53
+
/// us to extract the ID for acknowledgment even when full parsing fails.
54
+
///
55
+
/// # Example
56
+
///
57
+
/// ```
58
+
/// use atproto_tap::extract_event_id;
59
+
///
60
+
/// let json = r#"{"type":"record","id":12345,"record":{"deeply":"nested"}}"#;
61
+
/// assert_eq!(extract_event_id(json), Some(12345));
62
+
/// ```
63
+
pub fn extract_event_id(json: &str) -> Option<u64> {
64
+
let mut deserializer = serde_json::Deserializer::from_str(json);
65
+
deserializer.disable_recursion_limit();
66
+
EventIdOnly::deserialize(&mut deserializer).ok().map(|e| e.id)
67
+
}
68
+
69
+
/// Internal struct for extracting only the "id" field from a TAP event.
70
+
#[derive(Debug)]
71
+
struct EventIdOnly {
72
+
id: u64,
73
+
}
74
+
75
+
impl<'de> Deserialize<'de> for EventIdOnly {
76
+
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
77
+
where
78
+
D: Deserializer<'de>,
79
+
{
80
+
deserializer.deserialize_map(EventIdOnlyVisitor)
81
+
}
82
+
}
83
+
84
+
struct EventIdOnlyVisitor;
85
+
86
+
impl<'de> Visitor<'de> for EventIdOnlyVisitor {
87
+
type Value = EventIdOnly;
88
+
89
+
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
90
+
formatter.write_str("a map with an 'id' field")
91
+
}
92
+
93
+
fn visit_map<M>(self, mut map: M) -> Result<Self::Value, M::Error>
94
+
where
95
+
M: MapAccess<'de>,
96
+
{
97
+
let mut id: Option<u64> = None;
98
+
99
+
while let Some(key) = map.next_key::<&str>()? {
100
+
if key == "id" {
101
+
id = Some(map.next_value()?);
102
+
// Found what we need - skip the rest efficiently using IgnoredAny
103
+
// which handles deeply nested structures without recursion issues
104
+
while map.next_entry::<IgnoredAny, IgnoredAny>()?.is_some() {}
105
+
break;
106
+
} else {
107
+
// Skip this value without fully parsing it
108
+
map.next_value::<IgnoredAny>()?;
109
+
}
110
+
}
111
+
112
+
id.map(|id| EventIdOnly { id })
113
+
.ok_or_else(|| de::Error::missing_field("id"))
114
+
}
115
+
}
116
+
117
+
/// A repository record event from TAP.
118
+
///
119
+
/// Contains information about a record change in a user's repository,
120
+
/// including the action taken and the record data (for creates/updates).
121
+
#[derive(Debug, Clone, Serialize, Deserialize)]
122
+
pub struct RecordEvent {
123
+
/// True if from live firehose, false if from backfill/resync.
124
+
///
125
+
/// During initial sync or recovery, TAP delivers historical events
126
+
/// with `live: false`. Once caught up, live events have `live: true`.
127
+
pub live: bool,
128
+
129
+
/// Repository revision identifier.
130
+
///
131
+
/// Typically 13 characters, stored inline via CompactString SSO.
132
+
pub rev: CompactString,
133
+
134
+
/// Actor DID (e.g., "did:plc:xyz123").
135
+
pub did: Box<str>,
136
+
137
+
/// Collection NSID (e.g., "app.bsky.feed.post").
138
+
pub collection: Box<str>,
139
+
140
+
/// Record key within the collection.
141
+
///
142
+
/// Typically a TID (13 characters), stored inline via CompactString SSO.
143
+
pub rkey: CompactString,
144
+
145
+
/// The action performed on the record.
146
+
pub action: RecordAction,
147
+
148
+
/// Content identifier (CID) of the record.
149
+
///
150
+
/// Present for create and update actions, absent for delete.
151
+
#[serde(skip_serializing_if = "Option::is_none")]
152
+
pub cid: Option<CompactString>,
153
+
154
+
/// Record data as JSON value.
155
+
///
156
+
/// Present for create and update actions, absent for delete.
157
+
/// Use [`parse_record`](Self::parse_record) to deserialize on demand.
158
+
#[serde(skip_serializing_if = "Option::is_none")]
159
+
pub record: Option<serde_json::Value>,
160
+
}
161
+
162
+
impl RecordEvent {
163
+
/// Parse the record payload into a typed structure.
164
+
///
165
+
/// This method deserializes the raw JSON on demand, avoiding
166
+
/// unnecessary allocations when the record data isn't needed.
167
+
///
168
+
/// # Errors
169
+
///
170
+
/// Returns an error if the record is absent (delete events) or
171
+
/// if deserialization fails.
172
+
///
173
+
/// # Example
174
+
///
175
+
/// ```ignore
176
+
/// use serde::Deserialize;
177
+
///
178
+
/// #[derive(Deserialize)]
179
+
/// struct Post {
180
+
/// text: String,
181
+
/// #[serde(rename = "createdAt")]
182
+
/// created_at: String,
183
+
/// }
184
+
///
185
+
/// let post: Post = record_event.parse_record()?;
186
+
/// println!("Post text: {}", post.text);
187
+
/// ```
188
+
pub fn parse_record<T: DeserializeOwned>(&self) -> Result<T, serde_json::Error> {
189
+
match &self.record {
190
+
Some(value) => serde_json::from_value(value.clone()),
191
+
None => Err(serde::de::Error::custom("no record data (delete event)")),
192
+
}
193
+
}
194
+
195
+
/// Returns the record as a JSON Value reference, if present.
196
+
pub fn record_value(&self) -> Option<&serde_json::Value> {
197
+
self.record.as_ref()
198
+
}
199
+
200
+
/// Returns true if this is a delete event.
201
+
pub fn is_delete(&self) -> bool {
202
+
self.action == RecordAction::Delete
203
+
}
204
+
205
+
/// Returns the AT-URI for this record.
206
+
///
207
+
/// Format: `at://{did}/{collection}/{rkey}`
208
+
pub fn at_uri(&self) -> String {
209
+
format!("at://{}/{}/{}", self.did, self.collection, self.rkey)
210
+
}
211
+
}
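// --- Illustrative usage sketch (not part of this diff) ---
// Shows how the helpers above combine when handling a record event.
// `Post` is a hypothetical record shape used only for illustration.
fn describe_record(event: &RecordEvent) {
    if event.is_delete() {
        println!("deleted {}", event.at_uri());
        return;
    }

    #[derive(serde::Deserialize)]
    struct Post {
        text: String,
    }

    match event.parse_record::<Post>() {
        Ok(post) => println!("{}: {}", event.at_uri(), post.text),
        Err(err) => eprintln!("could not decode {}: {err}", event.at_uri()),
    }
}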
212
+
213
+
/// The action performed on a record.
214
+
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
215
+
#[serde(rename_all = "lowercase")]
216
+
pub enum RecordAction {
217
+
/// A new record was created.
218
+
Create,
219
+
/// An existing record was updated.
220
+
Update,
221
+
/// A record was deleted.
222
+
Delete,
223
+
}
224
+
225
+
impl std::fmt::Display for RecordAction {
226
+
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
227
+
match self {
228
+
RecordAction::Create => write!(f, "create"),
229
+
RecordAction::Update => write!(f, "update"),
230
+
RecordAction::Delete => write!(f, "delete"),
231
+
}
232
+
}
233
+
}
234
+
235
+
/// An identity change event from TAP.
236
+
///
237
+
/// Contains information about handle or account status changes.
238
+
#[derive(Debug, Clone, Serialize, Deserialize)]
239
+
pub struct IdentityEvent {
240
+
/// Actor DID.
241
+
pub did: Box<str>,
242
+
243
+
/// Current handle for the account.
244
+
pub handle: Box<str>,
245
+
246
+
/// Whether the account is currently active.
247
+
#[serde(default)]
248
+
pub is_active: bool,
249
+
250
+
/// Account status.
251
+
#[serde(default)]
252
+
pub status: IdentityStatus,
253
+
}
254
+
255
+
/// Account status in an identity event.
256
+
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Default, Serialize, Deserialize)]
257
+
#[serde(rename_all = "lowercase")]
258
+
pub enum IdentityStatus {
259
+
/// Account is active and in good standing.
260
+
#[default]
261
+
Active,
262
+
/// Account has been deactivated by the user.
263
+
Deactivated,
264
+
/// Account has been suspended.
265
+
Suspended,
266
+
/// Account has been deleted.
267
+
Deleted,
268
+
/// Account has been taken down.
269
+
Takendown,
270
+
}
271
+
272
+
impl std::fmt::Display for IdentityStatus {
273
+
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
274
+
match self {
275
+
IdentityStatus::Active => write!(f, "active"),
276
+
IdentityStatus::Deactivated => write!(f, "deactivated"),
277
+
IdentityStatus::Suspended => write!(f, "suspended"),
278
+
IdentityStatus::Deleted => write!(f, "deleted"),
279
+
IdentityStatus::Takendown => write!(f, "takendown"),
280
+
}
281
+
}
282
+
}
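// --- Illustrative usage sketch (not part of this diff) ---
// Sketch of reacting to identity changes; the arms mirror the IdentityStatus
// variants defined above. `on_identity` is a hypothetical function name.
fn on_identity(event: &IdentityEvent) {
    match event.status {
        IdentityStatus::Active => println!("{} is active as {}", event.did, event.handle),
        other => println!("{} is {other}", event.did),
    }
}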
283
+
284
+
#[cfg(test)]
285
+
mod tests {
286
+
use super::*;
287
+
288
+
#[test]
289
+
fn test_parse_record_event() {
290
+
let json = r#"{
291
+
"id": 12345,
292
+
"type": "record",
293
+
"record": {
294
+
"live": true,
295
+
"rev": "3lyileto4q52k",
296
+
"did": "did:plc:z72i7hdynmk6r22z27h6tvur",
297
+
"collection": "app.bsky.feed.post",
298
+
"rkey": "3lyiletddxt2c",
299
+
"action": "create",
300
+
"cid": "bafyreigroo6vhxt62ufcndhaxzas6btq4jmniuz4egszbwuqgiyisqwqoy",
301
+
"record": {"$type": "app.bsky.feed.post", "text": "Hello world!", "createdAt": "2025-01-01T00:00:00Z"}
302
+
}
303
+
}"#;
304
+
305
+
let event: TapEvent = serde_json::from_str(json).expect("Failed to parse");
306
+
307
+
match event {
308
+
TapEvent::Record { id, record } => {
309
+
assert_eq!(id, 12345);
310
+
assert!(record.live);
311
+
assert_eq!(record.rev.as_str(), "3lyileto4q52k");
312
+
assert_eq!(&*record.did, "did:plc:z72i7hdynmk6r22z27h6tvur");
313
+
assert_eq!(&*record.collection, "app.bsky.feed.post");
314
+
assert_eq!(record.rkey.as_str(), "3lyiletddxt2c");
315
+
assert_eq!(record.action, RecordAction::Create);
316
+
assert!(record.cid.is_some());
317
+
assert!(record.record.is_some());
318
+
319
+
// Test lazy parsing
320
+
#[derive(Deserialize)]
321
+
struct Post {
322
+
text: String,
323
+
}
324
+
let post: Post = record.parse_record().expect("Failed to parse record");
325
+
assert_eq!(post.text, "Hello world!");
326
+
}
327
+
_ => panic!("Expected Record event"),
328
+
}
329
+
}
330
+
331
+
#[test]
332
+
fn test_parse_delete_event() {
333
+
let json = r#"{
334
+
"id": 12346,
335
+
"type": "record",
336
+
"record": {
337
+
"live": true,
338
+
"rev": "3lyileto4q52k",
339
+
"did": "did:plc:z72i7hdynmk6r22z27h6tvur",
340
+
"collection": "app.bsky.feed.post",
341
+
"rkey": "3lyiletddxt2c",
342
+
"action": "delete"
343
+
}
344
+
}"#;
345
+
346
+
let event: TapEvent = serde_json::from_str(json).expect("Failed to parse");
347
+
348
+
match event {
349
+
TapEvent::Record { id, record } => {
350
+
assert_eq!(id, 12346);
351
+
assert_eq!(record.action, RecordAction::Delete);
352
+
assert!(record.is_delete());
353
+
assert!(record.cid.is_none());
354
+
assert!(record.record.is_none());
355
+
}
356
+
_ => panic!("Expected Record event"),
357
+
}
358
+
}
359
+
360
+
#[test]
361
+
fn test_parse_identity_event() {
362
+
let json = r#"{
363
+
"id": 12347,
364
+
"type": "identity",
365
+
"identity": {
366
+
"did": "did:plc:z72i7hdynmk6r22z27h6tvur",
367
+
"handle": "user.bsky.social",
368
+
"is_active": true,
369
+
"status": "active"
370
+
}
371
+
}"#;
372
+
373
+
let event: TapEvent = serde_json::from_str(json).expect("Failed to parse");
374
+
375
+
match event {
376
+
TapEvent::Identity { id, identity } => {
377
+
assert_eq!(id, 12347);
378
+
assert_eq!(&*identity.did, "did:plc:z72i7hdynmk6r22z27h6tvur");
379
+
assert_eq!(&*identity.handle, "user.bsky.social");
380
+
assert!(identity.is_active);
381
+
assert_eq!(identity.status, IdentityStatus::Active);
382
+
}
383
+
_ => panic!("Expected Identity event"),
384
+
}
385
+
}
386
+
387
+
#[test]
388
+
fn test_record_action_display() {
389
+
assert_eq!(RecordAction::Create.to_string(), "create");
390
+
assert_eq!(RecordAction::Update.to_string(), "update");
391
+
assert_eq!(RecordAction::Delete.to_string(), "delete");
392
+
}
393
+
394
+
#[test]
395
+
fn test_identity_status_display() {
396
+
assert_eq!(IdentityStatus::Active.to_string(), "active");
397
+
assert_eq!(IdentityStatus::Deactivated.to_string(), "deactivated");
398
+
assert_eq!(IdentityStatus::Suspended.to_string(), "suspended");
399
+
assert_eq!(IdentityStatus::Deleted.to_string(), "deleted");
400
+
assert_eq!(IdentityStatus::Takendown.to_string(), "takendown");
401
+
}
402
+
403
+
#[test]
404
+
fn test_at_uri() {
405
+
let record = RecordEvent {
406
+
live: true,
407
+
rev: "3lyileto4q52k".into(),
408
+
did: "did:plc:xyz".into(),
409
+
collection: "app.bsky.feed.post".into(),
410
+
rkey: "abc123".into(),
411
+
action: RecordAction::Create,
412
+
cid: None,
413
+
record: None,
414
+
};
415
+
416
+
assert_eq!(record.at_uri(), "at://did:plc:xyz/app.bsky.feed.post/abc123");
417
+
}
418
+
419
+
#[test]
420
+
fn test_event_id() {
421
+
let record_event = TapEvent::Record {
422
+
id: 100,
423
+
record: RecordEvent {
424
+
live: true,
425
+
rev: "rev".into(),
426
+
did: "did".into(),
427
+
collection: "col".into(),
428
+
rkey: "rkey".into(),
429
+
action: RecordAction::Create,
430
+
cid: None,
431
+
record: None,
432
+
},
433
+
};
434
+
assert_eq!(record_event.id(), 100);
435
+
436
+
let identity_event = TapEvent::Identity {
437
+
id: 200,
438
+
identity: IdentityEvent {
439
+
did: "did".into(),
440
+
handle: "handle".into(),
441
+
is_active: true,
442
+
status: IdentityStatus::Active,
443
+
},
444
+
};
445
+
assert_eq!(identity_event.id(), 200);
446
+
}
447
+
448
+
#[test]
449
+
fn test_extract_event_id_simple() {
450
+
let json = r#"{"type":"record","id":12345,"record":{"deeply":"nested"}}"#;
451
+
assert_eq!(extract_event_id(json), Some(12345));
452
+
}
453
+
454
+
#[test]
455
+
fn test_extract_event_id_at_end() {
456
+
let json = r#"{"type":"record","record":{"deeply":"nested"},"id":99999}"#;
457
+
assert_eq!(extract_event_id(json), Some(99999));
458
+
}
459
+
460
+
#[test]
461
+
fn test_extract_event_id_missing() {
462
+
let json = r#"{"type":"record","record":{"deeply":"nested"}}"#;
463
+
assert_eq!(extract_event_id(json), None);
464
+
}
465
+
466
+
#[test]
467
+
fn test_extract_event_id_invalid_json() {
468
+
let json = r#"{"type":"record","id":123"#; // Truncated JSON
469
+
assert_eq!(extract_event_id(json), None);
470
+
}
471
+
472
+
#[test]
473
+
fn test_extract_event_id_deeply_nested() {
474
+
// Create a deeply nested JSON that would exceed serde_json's default recursion limit
475
+
let mut json = String::from(r#"{"id":42,"record":{"nested":"#);
476
+
for _ in 0..200 {
477
+
json.push_str("[");
478
+
}
479
+
json.push_str("1");
480
+
for _ in 0..200 {
481
+
json.push_str("]");
482
+
}
483
+
json.push_str("}}");
484
+
485
+
// extract_event_id should still work because it uses IgnoredAny with disabled recursion limit
486
+
assert_eq!(extract_event_id(&json), Some(42));
487
+
}
488
+
}
+119
crates/atproto-tap/src/lib.rs
···
1
+
//! TAP (Trusted Attestation Protocol) service consumer for AT Protocol.
2
+
//!
3
+
//! This crate provides a client for consuming events from a TAP service,
4
+
//! which delivers filtered, verified AT Protocol repository events.
5
+
//!
6
+
//! # Overview
7
+
//!
8
+
//! TAP is a single-tenant service that subscribes to an AT Protocol Relay and
9
+
//! outputs filtered, verified events. Key features include:
10
+
//!
11
+
//! - **Verified Events**: MST integrity checks and signature verification
12
+
//! - **Automatic Backfill**: Historical events delivered with `live: false`
13
+
//! - **Repository Filtering**: Track specific DIDs or collections
14
+
//! - **Acknowledgment Protocol**: At-least-once delivery semantics
15
+
//!
16
+
//! # Quick Start
17
+
//!
18
+
//! ```ignore
19
+
//! use atproto_tap::{connect_to, TapEvent};
20
+
//! use tokio_stream::StreamExt;
21
+
//!
22
+
//! #[tokio::main]
23
+
//! async fn main() {
24
+
//! let mut stream = connect_to("localhost:2480");
25
+
//!
26
+
//! while let Some(result) = stream.next().await {
27
+
//! match result {
28
+
//! Ok(event) => match event.as_ref() {
29
+
//! TapEvent::Record { record, .. } => {
30
+
//! println!("{} {} {}", record.action, record.collection, record.did);
31
+
//! }
32
+
//! TapEvent::Identity { identity, .. } => {
33
+
//! println!("Identity: {} = {}", identity.did, identity.handle);
34
+
//! }
35
+
//! },
36
+
//! Err(e) => eprintln!("Error: {}", e),
37
+
//! }
38
+
//! }
39
+
//! }
40
+
//! ```
41
+
//!
42
+
//! # Using with `tokio::select!`
43
+
//!
44
+
//! The stream integrates naturally with Tokio's select macro:
45
+
//!
46
+
//! ```ignore
47
+
//! use atproto_tap::{connect, TapConfig};
48
+
//! use tokio_stream::StreamExt;
49
+
//! use tokio::signal;
50
+
//!
51
+
//! #[tokio::main]
52
+
//! async fn main() {
53
+
//! let config = TapConfig::builder()
54
+
//! .hostname("localhost:2480")
55
+
//! .admin_password("secret")
56
+
//! .build();
57
+
//!
58
+
//! let mut stream = connect(config);
59
+
//!
60
+
//! loop {
61
+
//! tokio::select! {
62
+
//! Some(result) = stream.next() => {
63
+
//! // Process event
64
+
//! }
65
+
//! _ = signal::ctrl_c() => {
66
+
//! break;
67
+
//! }
68
+
//! }
69
+
//! }
70
+
//! }
71
+
//! ```
72
+
//!
73
+
//! # Management API
74
+
//!
75
+
//! Use [`TapClient`] to manage tracked repositories:
76
+
//!
77
+
//! ```ignore
78
+
//! use atproto_tap::TapClient;
79
+
//!
80
+
//! let client = TapClient::new("localhost:2480", Some("password".to_string()));
81
+
//!
82
+
//! // Add repositories to track
83
+
//! client.add_repos(&["did:plc:xyz123"]).await?;
84
+
//!
85
+
//! // Check service health
86
+
//! if client.health().await? {
87
+
//! println!("TAP service is healthy");
88
+
//! }
89
+
//! ```
90
+
//!
91
+
//! # Memory Efficiency
92
+
//!
93
+
//! This crate is optimized for high-throughput event processing:
94
+
//!
95
+
//! - **Arc-wrapped events**: Events are shared via `Arc` for zero-cost sharing
96
+
//! - **CompactString**: Small strings use inline storage (no heap allocation)
97
+
//! - **Box<str>**: Immutable strings without capacity overhead
98
+
//! - **serde_json::Value**: Record payloads are lazily parsed on demand
99
+
//! - **Pre-allocated buffers**: Ack messages avoid per-message allocations
100
+
101
+
#![forbid(unsafe_code)]
102
+
#![warn(missing_docs)]
103
+
104
+
mod client;
105
+
mod config;
106
+
mod connection;
107
+
mod errors;
108
+
mod events;
109
+
mod stream;
110
+
111
+
// Re-export public types
112
+
pub use atproto_identity::model::{Document, Service, VerificationMethod};
113
+
pub use client::{RepoInfo, RepoState, TapClient};
114
+
#[allow(deprecated)]
115
+
pub use client::RepoStatus;
116
+
pub use config::{TapConfig, TapConfigBuilder};
117
+
pub use errors::TapError;
118
+
pub use events::{IdentityEvent, IdentityStatus, RecordAction, RecordEvent, TapEvent, extract_event_id};
119
+
pub use stream::{TapStream, connect, connect_to};
+330
crates/atproto-tap/src/stream.rs
···
1
+
//! TAP event stream implementation.
2
+
//!
3
+
//! This module provides [`TapStream`], an async stream that yields TAP events
4
+
//! with automatic connection management and reconnection handling.
5
+
//!
6
+
//! # Design
7
+
//!
8
+
//! The stream encapsulates all connection logic, allowing consumers to simply
9
+
//! iterate over events using standard stream combinators or `tokio::select!`.
10
+
//!
11
+
//! Reconnection is handled automatically with exponential backoff. Parse errors
12
+
//! are yielded as `Err` items but don't affect connection state - only connection
13
+
//! errors trigger reconnection attempts.
14
+
15
+
use crate::config::TapConfig;
16
+
use crate::connection::TapConnection;
17
+
use crate::errors::TapError;
18
+
use crate::events::{TapEvent, extract_event_id};
19
+
use futures::Stream;
20
+
use std::pin::Pin;
21
+
use std::sync::Arc;
22
+
use std::task::{Context, Poll};
23
+
use std::time::Duration;
24
+
use tokio::sync::mpsc;
25
+
26
+
/// An async stream of TAP events with automatic reconnection.
27
+
///
28
+
/// `TapStream` implements [`Stream`] and yields `Result<Arc<TapEvent>, TapError>`.
29
+
/// Events are wrapped in `Arc` for efficient zero-cost sharing across consumers.
30
+
///
31
+
/// # Connection Management
32
+
///
33
+
/// The stream automatically:
34
+
/// - Connects on first poll
35
+
/// - Reconnects with exponential backoff on connection errors
36
+
/// - Sends acknowledgments after parsing each message (if enabled)
37
+
/// - Yields parse errors without affecting connection state
38
+
///
39
+
/// # Example
40
+
///
41
+
/// ```ignore
42
+
/// use atproto_tap::{TapConfig, TapStream};
43
+
/// use tokio_stream::StreamExt;
44
+
///
45
+
/// let config = TapConfig::builder()
46
+
/// .hostname("localhost:2480")
47
+
/// .build();
48
+
///
49
+
/// let mut stream = TapStream::new(config);
50
+
///
51
+
/// while let Some(result) = stream.next().await {
52
+
/// match result {
53
+
/// Ok(event) => println!("Event: {:?}", event),
54
+
/// Err(e) => eprintln!("Error: {}", e),
55
+
/// }
56
+
/// }
57
+
/// ```
58
+
pub struct TapStream {
59
+
/// Receiver for events from the background task.
60
+
receiver: mpsc::Receiver<Result<Arc<TapEvent>, TapError>>,
61
+
/// Handle to request stream closure.
62
+
close_sender: Option<mpsc::Sender<()>>,
63
+
/// Whether the stream has been closed.
64
+
closed: bool,
65
+
}
66
+
67
+
impl TapStream {
68
+
/// Create a new TAP stream with the given configuration.
69
+
///
70
+
/// The stream will start connecting immediately in a background task.
71
+
pub fn new(config: TapConfig) -> Self {
72
+
// Channel for events - buffer a few to handle bursts
73
+
let (event_tx, event_rx) = mpsc::channel(32);
74
+
// Channel for close signal
75
+
let (close_tx, close_rx) = mpsc::channel(1);
76
+
77
+
// Spawn background task to manage connection
78
+
tokio::spawn(connection_task(config, event_tx, close_rx));
79
+
80
+
Self {
81
+
receiver: event_rx,
82
+
close_sender: Some(close_tx),
83
+
closed: false,
84
+
}
85
+
}
86
+
87
+
/// Close the stream and release resources.
88
+
///
89
+
/// After calling this, the stream will yield `None` on the next poll.
90
+
pub async fn close(&mut self) {
91
+
if let Some(sender) = self.close_sender.take() {
92
+
// Signal the background task to close
93
+
let _ = sender.send(()).await;
94
+
}
95
+
self.closed = true;
96
+
}
97
+
98
+
/// Returns true if the stream is closed.
99
+
pub fn is_closed(&self) -> bool {
100
+
self.closed
101
+
}
102
+
}
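// --- Illustrative usage sketch (not part of this diff) ---
// Graceful shutdown with `close()`: after closing, the next poll yields `None`.
// Assumes a Tokio runtime and `tokio_stream::StreamExt` for `next()`.
async fn run(mut stream: TapStream) {
    use tokio_stream::StreamExt;

    loop {
        tokio::select! {
            maybe_event = stream.next() => match maybe_event {
                Some(Ok(event)) => println!("event {}", event.id()),
                Some(Err(err)) => eprintln!("stream error: {err}"),
                None => break, // stream is closed
            },
            _ = tokio::signal::ctrl_c() => {
                stream.close().await;
            }
        }
    }
}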
103
+
104
+
impl Stream for TapStream {
105
+
type Item = Result<Arc<TapEvent>, TapError>;
106
+
107
+
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
108
+
if self.closed {
109
+
return Poll::Ready(None);
110
+
}
111
+
112
+
self.receiver.poll_recv(cx)
113
+
}
114
+
}
115
+
116
+
impl Drop for TapStream {
117
+
fn drop(&mut self) {
118
+
// Drop the close_sender to signal the background task
119
+
self.close_sender.take();
120
+
tracing::debug!("TapStream dropped");
121
+
}
122
+
}
123
+
124
+
/// Background task that manages the WebSocket connection.
125
+
async fn connection_task(
126
+
config: TapConfig,
127
+
event_tx: mpsc::Sender<Result<Arc<TapEvent>, TapError>>,
128
+
mut close_rx: mpsc::Receiver<()>,
129
+
) {
130
+
let mut current_reconnect_delay = config.initial_reconnect_delay;
131
+
let mut attempt: u32 = 0;
132
+
133
+
loop {
134
+
// Check for close signal
135
+
if close_rx.try_recv().is_ok() {
136
+
tracing::debug!("Connection task received close signal");
137
+
break;
138
+
}
139
+
140
+
// Try to connect
141
+
tracing::debug!(attempt, hostname = %config.hostname, "Connecting to TAP service");
142
+
let conn_result = TapConnection::connect(&config).await;
143
+
144
+
match conn_result {
145
+
Ok(mut conn) => {
146
+
tracing::info!(hostname = %config.hostname, "TAP stream connected");
147
+
// Reset reconnection state on successful connect
148
+
current_reconnect_delay = config.initial_reconnect_delay;
149
+
attempt = 0;
150
+
151
+
// Event loop for this connection
152
+
loop {
153
+
tokio::select! {
154
+
biased;
155
+
156
+
_ = close_rx.recv() => {
157
+
tracing::debug!("Connection task received close signal during receive");
158
+
let _ = conn.close().await;
159
+
return;
160
+
}
161
+
162
+
recv_result = conn.recv() => {
163
+
match recv_result {
164
+
Ok(Some(msg)) => {
165
+
// Parse the message
166
+
match serde_json::from_str::<TapEvent>(&msg) {
167
+
Ok(event) => {
168
+
let event_id = event.id();
169
+
170
+
// Send ack if enabled (before sending event to channel)
171
+
if config.send_acks
172
+
&& let Err(err) = conn.send_ack(event_id).await
173
+
{
174
+
tracing::warn!(error = %err, "Failed to send ack");
175
+
// Don't break connection for ack errors
176
+
}
177
+
178
+
// Send event to channel
179
+
let event = Arc::new(event);
180
+
if event_tx.send(Ok(event)).await.is_err() {
181
+
// Receiver dropped, exit task
182
+
tracing::debug!("Event receiver dropped, closing connection");
183
+
let _ = conn.close().await;
184
+
return;
185
+
}
186
+
}
187
+
Err(err) => {
188
+
// Parse errors don't affect connection
189
+
tracing::warn!(error = %err, "Failed to parse TAP message");
190
+
191
+
// Try to extract just the ID using fallback parser
192
+
// so we can still ack the message even if full parsing fails
193
+
if config.send_acks {
194
+
if let Some(event_id) = extract_event_id(&msg) {
195
+
tracing::debug!(event_id, "Extracted event ID via fallback parser");
196
+
if let Err(ack_err) = conn.send_ack(event_id).await {
197
+
tracing::warn!(error = %ack_err, "Failed to send ack for unparseable message");
198
+
}
199
+
} else {
200
+
tracing::warn!("Could not extract event ID from unparseable message");
201
+
}
202
+
}
203
+
204
+
if event_tx.send(Err(TapError::ParseError(err.to_string()))).await.is_err() {
205
+
tracing::debug!("Event receiver dropped, closing connection");
206
+
let _ = conn.close().await;
207
+
return;
208
+
}
209
+
}
210
+
}
211
+
}
212
+
Ok(None) => {
213
+
// Connection closed by server
214
+
tracing::debug!("TAP connection closed by server");
215
+
break;
216
+
}
217
+
Err(err) => {
218
+
// Connection error
219
+
tracing::warn!(error = %err, "TAP connection error");
220
+
break;
221
+
}
222
+
}
223
+
}
224
+
}
225
+
}
226
+
}
227
+
Err(err) => {
228
+
tracing::warn!(error = %err, attempt, "Failed to connect to TAP service");
229
+
}
230
+
}
231
+
232
+
// Increment attempt counter
233
+
attempt += 1;
234
+
235
+
// Check if we've exceeded max attempts
236
+
if let Some(max) = config.max_reconnect_attempts
237
+
&& attempt >= max
238
+
{
239
+
tracing::error!(attempts = attempt, "Max reconnection attempts exceeded");
240
+
let _ = event_tx
241
+
.send(Err(TapError::MaxReconnectAttemptsExceeded(attempt)))
242
+
.await;
243
+
break;
244
+
}
245
+
246
+
// Wait before reconnecting with exponential backoff
247
+
tracing::debug!(
248
+
delay_ms = current_reconnect_delay.as_millis(),
249
+
attempt,
250
+
"Waiting before reconnection"
251
+
);
252
+
253
+
tokio::select! {
254
+
_ = close_rx.recv() => {
255
+
tracing::debug!("Connection task received close signal during backoff");
256
+
return;
257
+
}
258
+
_ = tokio::time::sleep(current_reconnect_delay) => {
259
+
// Update delay for next attempt
260
+
current_reconnect_delay = Duration::from_secs_f64(
261
+
(current_reconnect_delay.as_secs_f64() * config.reconnect_backoff_multiplier)
262
+
.min(config.max_reconnect_delay.as_secs_f64()),
263
+
);
264
+
}
265
+
}
266
+
}
267
+
268
+
tracing::debug!("Connection task exiting");
269
+
}
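// --- Illustrative sketch (not part of this diff) ---
// The backoff used above multiplies the current delay by
// `reconnect_backoff_multiplier` after each failed attempt, capped at
// `max_reconnect_delay`. With a 1s initial delay, a 2.0 multiplier, and a
// 10s cap (example values, not necessarily the crate defaults) the sequence
// is 1s, 2s, 4s, 8s, 10s, 10s, ...
fn next_backoff(current: Duration, multiplier: f64, max: Duration) -> Duration {
    Duration::from_secs_f64((current.as_secs_f64() * multiplier).min(max.as_secs_f64()))
}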
270
+
271
+
/// Create a new TAP stream with the given configuration.
272
+
pub fn connect(config: TapConfig) -> TapStream {
273
+
TapStream::new(config)
274
+
}
275
+
276
+
/// Create a new TAP stream connected to the given hostname.
277
+
///
278
+
/// Uses default configuration values.
279
+
pub fn connect_to(hostname: &str) -> TapStream {
280
+
TapStream::new(TapConfig::new(hostname))
281
+
}
282
+
283
+
#[cfg(test)]
284
+
mod tests {
285
+
use super::*;
286
+
287
+
#[test]
288
+
fn test_stream_initial_state() {
289
+
// Note: This test doesn't actually poll the stream, just checks initial state
290
+
// Creating a TapStream requires a tokio runtime for the spawn
291
+
}
292
+
293
+
#[tokio::test]
294
+
async fn test_stream_close() {
295
+
let mut stream = TapStream::new(TapConfig::new("localhost:9999"));
296
+
assert!(!stream.is_closed());
297
+
stream.close().await;
298
+
assert!(stream.is_closed());
299
+
}
300
+
301
+
#[test]
302
+
fn test_connect_functions() {
303
+
// These just create configs, actual connection happens in background task
304
+
// We can't test without a runtime, so just verify the types compile
305
+
let _ = TapConfig::new("localhost:2480");
306
+
}
307
+
308
+
#[test]
309
+
fn test_reconnect_delay_calculation() {
310
+
// Test the delay calculation logic
311
+
let initial = Duration::from_secs(1);
312
+
let max = Duration::from_secs(10);
313
+
let multiplier = 2.0;
314
+
315
+
let mut delay = initial;
316
+
assert_eq!(delay, Duration::from_secs(1));
317
+
318
+
delay = Duration::from_secs_f64((delay.as_secs_f64() * multiplier).min(max.as_secs_f64()));
319
+
assert_eq!(delay, Duration::from_secs(2));
320
+
321
+
delay = Duration::from_secs_f64((delay.as_secs_f64() * multiplier).min(max.as_secs_f64()));
322
+
assert_eq!(delay, Duration::from_secs(4));
323
+
324
+
delay = Duration::from_secs_f64((delay.as_secs_f64() * multiplier).min(max.as_secs_f64()));
325
+
assert_eq!(delay, Duration::from_secs(8));
326
+
327
+
delay = Duration::from_secs_f64((delay.as_secs_f64() * multiplier).min(max.as_secs_f64()));
328
+
assert_eq!(delay, Duration::from_secs(10)); // Capped at max
329
+
}
330
+
}
+13
-13
crates/atproto-xrpcs/README.md
···
23
23
### Basic XRPC Service
24
24
25
25
```rust
26
-
use atproto_xrpcs::authorization::ResolvingAuthorization;
26
+
use atproto_xrpcs::authorization::Authorization;
27
27
use axum::{Json, Router, extract::Query, routing::get};
28
28
use serde::Deserialize;
29
29
use serde_json::json;
···
35
35
36
36
async fn handle_hello(
37
37
params: Query<HelloParams>,
38
-
authorization: Option<ResolvingAuthorization>,
38
+
authorization: Option<Authorization>,
39
39
) -> Json<serde_json::Value> {
40
40
let name = params.name.as_deref().unwrap_or("World");
41
-
41
+
42
42
let message = if authorization.is_some() {
43
43
format!("Hello, authenticated {}!", name)
44
44
} else {
45
45
format!("Hello, {}!", name)
46
46
};
47
-
47
+
48
48
Json(json!({ "message": message }))
49
49
}
50
50
···
56
56
### JWT Authorization
57
57
58
58
```rust
59
-
use atproto_xrpcs::authorization::ResolvingAuthorization;
59
+
use atproto_xrpcs::authorization::Authorization;
60
60
61
61
async fn handle_secure_endpoint(
62
-
authorization: ResolvingAuthorization, // Required authorization
62
+
authorization: Authorization, // Required authorization
63
63
) -> Json<serde_json::Value> {
64
-
// The ResolvingAuthorization extractor automatically:
64
+
// The Authorization extractor automatically:
65
65
// 1. Validates the JWT token
66
-
// 2. Resolves the caller's DID document
66
+
// 2. Resolves the caller's DID document
67
67
// 3. Verifies the signature against the DID document
68
68
// 4. Provides access to caller identity information
69
-
69
+
70
70
let caller_did = authorization.subject();
71
71
Json(json!({"caller": caller_did, "status": "authenticated"}))
72
72
}
···
79
79
use axum::{response::IntoResponse, http::StatusCode};
80
80
81
81
async fn protected_handler(
82
-
authorization: Result<ResolvingAuthorization, AuthorizationError>,
82
+
authorization: Result<Authorization, AuthorizationError>,
83
83
) -> impl IntoResponse {
84
84
match authorization {
85
85
Ok(auth) => (StatusCode::OK, "Access granted").into_response(),
86
-
Err(AuthorizationError::InvalidJWTToken { .. }) => {
86
+
Err(AuthorizationError::InvalidJWTFormat) => {
87
87
(StatusCode::UNAUTHORIZED, "Invalid token").into_response()
88
88
}
89
-
Err(AuthorizationError::DIDDocumentResolutionFailed { .. }) => {
89
+
Err(AuthorizationError::SubjectResolutionFailed { .. }) => {
90
90
(StatusCode::FORBIDDEN, "Identity verification failed").into_response()
91
91
}
92
92
Err(_) => {
···
98
98
99
99
## Authorization Flow
100
100
101
-
The `ResolvingAuthorization` extractor implements:
101
+
The `Authorization` extractor implements:
102
102
103
103
1. JWT extraction from HTTP Authorization headers
104
104
2. Token validation (signature and claims structure)
+5
-49
crates/atproto-xrpcs/src/errors.rs
···
42
42
#[error("error-atproto-xrpcs-authorization-4 No issuer found in JWT claims")]
43
43
NoIssuerInClaims,
44
44
45
-
/// Occurs when DID document is not found for the issuer
46
-
#[error("error-atproto-xrpcs-authorization-5 DID document not found for issuer: {issuer}")]
47
-
DIDDocumentNotFound {
48
-
/// The issuer DID that was not found
49
-
issuer: String,
50
-
},
51
-
52
45
/// Occurs when no verification keys are found in DID document
53
-
#[error("error-atproto-xrpcs-authorization-6 No verification keys found in DID document")]
46
+
#[error("error-atproto-xrpcs-authorization-5 No verification keys found in DID document")]
54
47
NoVerificationKeys,
55
48
56
49
/// Occurs when JWT header cannot be base64 decoded
57
-
#[error("error-atproto-xrpcs-authorization-7 Failed to decode JWT header: {error}")]
50
+
#[error("error-atproto-xrpcs-authorization-6 Failed to decode JWT header: {error}")]
58
51
HeaderDecodeError {
59
52
/// The underlying base64 decode error
60
53
error: base64::DecodeError,
61
54
},
62
55
63
56
/// Occurs when JWT header cannot be parsed as JSON
64
-
#[error("error-atproto-xrpcs-authorization-8 Failed to parse JWT header: {error}")]
57
+
#[error("error-atproto-xrpcs-authorization-7 Failed to parse JWT header: {error}")]
65
58
HeaderParseError {
66
59
/// The underlying JSON parse error
67
60
error: serde_json::Error,
68
61
},
69
62
70
63
/// Occurs when JWT validation fails with all available keys
71
-
#[error("error-atproto-xrpcs-authorization-9 JWT validation failed with all available keys")]
64
+
#[error("error-atproto-xrpcs-authorization-8 JWT validation failed with all available keys")]
72
65
ValidationFailedAllKeys,
73
66
74
67
/// Occurs when subject resolution fails during DID document lookup
75
-
#[error("error-atproto-xrpcs-authorization-10 Subject resolution failed: {issuer} {error}")]
68
+
#[error("error-atproto-xrpcs-authorization-9 Subject resolution failed: {issuer} {error}")]
76
69
SubjectResolutionFailed {
77
70
/// The issuer that failed to resolve
78
71
issuer: String,
79
72
/// The underlying resolution error
80
-
error: anyhow::Error,
81
-
},
82
-
83
-
/// Occurs when DID document lookup fails after successful resolution
84
-
#[error(
85
-
"error-atproto-xrpcs-authorization-11 DID document not found for resolved issuer: {resolved_did}"
86
-
)]
87
-
ResolvedDIDDocumentNotFound {
88
-
/// The resolved DID that was not found in storage
89
-
resolved_did: String,
90
-
},
91
-
92
-
/// Occurs when PLC directory query fails
93
-
#[error("error-atproto-xrpcs-authorization-12 PLC directory query failed: {error}")]
94
-
PLCQueryFailed {
95
-
/// The underlying PLC query error
96
-
error: anyhow::Error,
97
-
},
98
-
99
-
/// Occurs when web DID query fails
100
-
#[error("error-atproto-xrpcs-authorization-13 Web DID query failed: {error}")]
101
-
WebDIDQueryFailed {
102
-
/// The underlying web DID query error
103
-
error: anyhow::Error,
104
-
},
105
-
106
-
/// Occurs when DID document storage operation fails
107
-
#[error("error-atproto-xrpcs-authorization-14 DID document storage failed: {error}")]
108
-
DocumentStorageFailed {
109
-
/// The underlying storage error
110
-
error: anyhow::Error,
111
-
},
112
-
113
-
/// Occurs when input parsing fails for resolved DID
114
-
#[error("error-atproto-xrpcs-authorization-15 Input parsing failed for resolved DID: {error}")]
115
-
InputParsingFailed {
116
-
/// The underlying parsing error
117
73
error: anyhow::Error,
118
74
},
119
75
}
+18
-24
crates/atproto-xrpcs-helloworld/src/main.rs
···
5
5
use atproto_identity::resolve::SharedIdentityResolver;
6
6
use atproto_identity::{
7
7
config::{CertificateBundles, DnsNameservers, default_env, optional_env, require_env, version},
8
-
key::{KeyData, KeyProvider, identify_key, to_public},
8
+
key::{KeyData, KeyResolver, identify_key, to_public},
9
9
resolve::{HickoryDnsResolver, IdentityResolver, InnerIdentityResolver},
10
-
storage::DidDocumentStorage,
11
-
storage_lru::LruDidDocumentStorage,
12
10
};
13
-
use atproto_xrpcs::authorization::ResolvingAuthorization;
11
+
use atproto_xrpcs::authorization::Authorization;
14
12
use axum::{
15
13
Json, Router,
16
14
extract::{FromRef, Query, State},
···
21
19
use http::{HeaderMap, StatusCode};
22
20
use serde::Deserialize;
23
21
use serde_json::json;
24
-
use std::{collections::HashMap, num::NonZeroUsize, ops::Deref, sync::Arc};
22
+
use std::{collections::HashMap, ops::Deref, sync::Arc};
25
23
26
24
#[derive(Clone)]
27
-
pub struct SimpleKeyProvider {
25
+
pub struct SimpleKeyResolver {
28
26
keys: HashMap<String, KeyData>,
29
27
}
30
28
31
-
impl Default for SimpleKeyProvider {
29
+
impl Default for SimpleKeyResolver {
32
30
fn default() -> Self {
33
31
Self::new()
34
32
}
35
33
}
36
34
37
-
impl SimpleKeyProvider {
35
+
impl SimpleKeyResolver {
38
36
pub fn new() -> Self {
39
37
Self {
40
38
keys: HashMap::new(),
···
43
41
}
44
42
45
43
#[async_trait]
46
-
impl KeyProvider for SimpleKeyProvider {
47
-
async fn get_private_key_by_id(&self, key_id: &str) -> anyhow::Result<Option<KeyData>> {
48
-
Ok(self.keys.get(key_id).cloned())
44
+
impl KeyResolver for SimpleKeyResolver {
45
+
async fn resolve(&self, key: &str) -> anyhow::Result<KeyData> {
46
+
if let Some(key_data) = self.keys.get(key) {
47
+
Ok(key_data.clone())
48
+
} else {
49
+
identify_key(key).map_err(Into::into)
50
+
}
49
51
}
50
52
}
51
53
···
57
59
58
60
pub struct InnerWebContext {
59
61
pub http_client: reqwest::Client,
60
-
pub document_storage: Arc<dyn DidDocumentStorage>,
61
-
pub key_provider: Arc<dyn KeyProvider>,
62
+
pub key_resolver: Arc<dyn KeyResolver>,
62
63
pub service_document: ServiceDocument,
63
64
pub service_did: ServiceDID,
64
65
pub identity_resolver: Arc<dyn IdentityResolver>,
···
93
94
}
94
95
}
95
96
96
-
impl FromRef<WebContext> for Arc<dyn DidDocumentStorage> {
97
+
impl FromRef<WebContext> for Arc<dyn KeyResolver> {
97
98
fn from_ref(context: &WebContext) -> Self {
98
-
context.0.document_storage.clone()
99
-
}
100
-
}
101
-
102
-
impl FromRef<WebContext> for Arc<dyn KeyProvider> {
103
-
fn from_ref(context: &WebContext) -> Self {
104
-
context.0.key_provider.clone()
99
+
context.0.key_resolver.clone()
105
100
}
106
101
}
107
102
···
212
207
213
208
let web_context = WebContext(Arc::new(InnerWebContext {
214
209
http_client: http_client.clone(),
215
-
document_storage: Arc::new(LruDidDocumentStorage::new(NonZeroUsize::new(255).unwrap())),
216
-
key_provider: Arc::new(SimpleKeyProvider {
210
+
key_resolver: Arc::new(SimpleKeyResolver {
217
211
keys: signing_key_storage,
218
212
}),
219
213
service_document,
···
280
274
async fn handle_xrpc_hello_world(
281
275
parameters: Query<HelloParameters>,
282
276
headers: HeaderMap,
283
-
authorization: Option<ResolvingAuthorization>,
277
+
authorization: Option<Authorization>,
284
278
) -> Json<serde_json::Value> {
285
279
println!("headers {headers:?}");
286
280
let subject = parameters.subject.as_deref().unwrap_or("World");