commits
Aggregate lexicon NSIDs from replicated record paths into a
searchable index. Three public endpoints (search, list, stats),
incremental updates on sync/firehose, full rebuild on startup,
and a Lit UI component with prefix search.
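The aggregation and prefix search can be sketched in-memory (a Map stands in for the SQLite-backed index; function names are illustrative, not the project's):

```typescript
// In-memory sketch of the lexicon index. Record paths are assumed to be
// "<collection-nsid>/<rkey>", e.g. "app.bsky.feed.post/3kabc".
type LexiconIndex = Map<string, number>; // NSID -> record count

function indexPaths(paths: string[]): LexiconIndex {
  const index: LexiconIndex = new Map();
  for (const path of paths) {
    const nsid = path.split("/")[0];
    if (nsid) index.set(nsid, (index.get(nsid) ?? 0) + 1);
  }
  return index;
}

// Prefix search over indexed NSIDs, as used by the search endpoint.
function searchNsids(index: LexiconIndex, prefix: string): string[] {
  return [...index.keys()].filter((n) => n.startsWith(prefix)).sort();
}
```

Incremental updates then reduce to bumping/decrementing counts as paths arrive on sync/firehose, with the full rebuild re-running indexPaths over all stored paths.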
README updated with lexicon index, PLC log archiving, rotation
key management, and kadDHT client mode.
- Show PLC log status (archived/validated/tombstoned) in DID detail view
- Enable kadDHT client mode for content routing and PLC log discovery
- Add provideForDid/findProvidersForDid to NetworkService for DHT announcements
- Add computeDiscoveryCid for deterministic DID→CID mapping
- Add public /xrpc/org.p2pds.plc.getLog endpoint for cross-node PLC exchange
- Fall back to DHT peer discovery when plc.directory is unavailable
- Re-announce mirrored DIDs to DHT on periodic refresh
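The deterministic DID→CID mapping can be sketched with stdlib hashing alone. This is an assumption-laden simplification: the real computeDiscoveryCid presumably wraps the digest in a proper CIDv1 via multiformats, and the prefix string here is invented.

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: every node hashes the same DID (with a fixed prefix)
// to the same key, so any node can derive the DHT rendezvous point for a DID
// without coordination. The real implementation emits a CID, not a hex string.
function computeDiscoveryKey(did: string): string {
  return createHash("sha256").update(`p2pds-discovery:${did}`).digest("hex");
}
```

Determinism is the whole point: provideForDid announces under this key, and findProvidersForDid on any other node recomputes it from the DID alone.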
- Recovery key dialog UI with BIP-39 mnemonic generation
- PLC rotation key management (request token, add key, get status)
- CAR export endpoint for disaster recovery downloads
- Rewritten README with clearer framing and architecture overview
- PLC mirror design doc in docs/research/
Phase 1 of distributed PLC mirror: fetches, cryptographically validates,
and stores PLC audit logs for every tracked did:plc DID. Validates full
operation chain (genesis DID derivation, secp256k1/P-256 ECDSA signatures,
prev-CID integrity). Refreshes on identity events, firehose account
changes, and a 6-hour periodic timer. Adds API endpoints and UI indicator.
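The genesis check rests on how did:plc identifiers are minted: the DID is the sha256 of the signed genesis operation, base32-encoded (lowercase, unpadded) and truncated to 24 characters. A sketch over already-encoded operation bytes (DAG-CBOR encoding of the op itself is assumed done upstream):

```typescript
import { createHash } from "node:crypto";

const BASE32 = "abcdefghijklmnopqrstuvwxyz234567";

// RFC 4648 base32 (lowercase, no padding), as used by did:plc.
function base32Encode(bytes: Uint8Array): string {
  let bits = 0, value = 0, out = "";
  for (const byte of bytes) {
    value = (value << 8) | byte;
    bits += 8;
    while (bits >= 5) {
      out += BASE32[(value >>> (bits - 5)) & 31];
      bits -= 5;
    }
  }
  if (bits > 0) out += BASE32[(value << (5 - bits)) & 31];
  return out;
}

// A log's claimed DID must equal the hash of its genesis operation bytes.
function didFromGenesisBytes(genesisCbor: Uint8Array): string {
  const digest = createHash("sha256").update(genesisCbor).digest();
  return `did:plc:${base32Encode(digest).slice(0, 24)}`;
}
```

A validator compares this derived DID against the DID the log claims to describe, which is what makes a mirrored audit log self-certifying.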
Rewrites README to reflect the decoupled architecture (no node identity,
lazy OAuth identity, any-PDS compatibility), documents the three lexicon
record types and where they're used in the offer→replication flow, and
describes the three replication modes (reciprocal, consensual,
non-consensual archive). Updates libp2p description in CLAUDE.md.
- Lit web component UI (esbuild-bundled): account, system, replications,
sync history, network, policies, verification, incoming offers cards
- SSE endpoint (org.p2pds.app.syncProgress) streams real-time sync events
from ReplicationManager to the browser via EventSource
- Progress events emitted at key sync milestones: start, car-received,
blocks-stored, verified, blob-progress, complete, error, cycle boundaries
- UI merges live progress into replication rows (block/blob counters tick
up during sync, full refresh on completion)
- Layout: Account card top-left, System card (with network info) top-right,
removed separate Network section
- Improved add-account form: same-height row, inline x clear button,
renamed policy options (reciprocal/consensual/non-consensual archive)
- Reduced libp2p footprint: removed kadDHT, 2 bootstrap peers, max 10
connections (kept autoNAT for dialability checks)
- Policy lifecycle: StoredPolicy with state machine (proposed/active/
suspended/terminated/purged), consent status, timestamps, persistence
via PolicyStorage. New configArchive() and archive() presets.
- Consent records: CONSENT_NSID + ConsentRecord type, consent.json lexicon
- OAuth default-on: OAUTH_ENABLED defaults true (opt-out via =false)
- Session safety: purge leftover data on new OAuth login
- Scripts: --clean flag, fixed ports (6700/6701), check-api/health/etc
- Terminology: "dashboard" -> "app", "admin" -> "user" throughout
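The StoredPolicy state machine can be sketched as a transition table. The commit names the five states but not the edges, so the transitions below are assumptions:

```typescript
type PolicyState = "proposed" | "active" | "suspended" | "terminated" | "purged";

// Hypothetical edge set: proposed policies activate or die, active ones can
// suspend/terminate, and purged is terminal. The real PolicyStorage may differ.
const TRANSITIONS: Record<PolicyState, PolicyState[]> = {
  proposed: ["active", "terminated"],
  active: ["suspended", "terminated"],
  suspended: ["active", "terminated"],
  terminated: ["purged"],
  purged: [],
};

function canTransition(from: PolicyState, to: PolicyState): boolean {
  return TRANSITIONS[from].includes(to);
}
```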
Three replication modes: bidirectional (mutual offers), archive with
consent (requires target opt-in record), and archive without consent.
Consent check fires async on account selection. Local user can toggle
their own consent via checkbox in Account section. Three new endpoints:
checkConsent, getMyConsent, setMyConsent.
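The mode semantics reduce to a small gate (names hypothetical; the reciprocal case is additionally gated by mutual-offer detection elsewhere):

```typescript
type ReplicationMode = "reciprocal" | "consensual-archive" | "nonconsensual-archive";

// Only the consensual-archive mode requires the target's opt-in record;
// reciprocal replication waits for a mutual offer instead.
function consentSatisfied(mode: ReplicationMode, targetHasConsentRecord: boolean): boolean {
  return mode !== "consensual-archive" || targetHasConsentRecord;
}
```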
Replace the disconnect form POST with a confirm() dialog warning users
that all data will be deleted. On confirm, the server now purges all
replicated data: sync state, blocks, blobs, challenges, policies, IPFS
blockstore, auth token, and identity — leaving a clean slate instead of
tombstoned entries.
Replace filesystem-based Helia storage with SQLite:
- SqliteBlockstore: implements Blockstore interface backed by ipfs_blocks table,
eliminating thousands of tiny files from FsBlockstore
- SqliteDatastore: implements Datastore interface backed by ipfs_datastore table,
replacing FsDatastore for libp2p peer/routing state
- All persistent state now lives in a single pds.db file
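The adapter's shape, sketched synchronously with a Map standing in for the ipfs_blocks table (the real interface-blockstore API is Promise-based and keyed by CID objects, not strings; the inline SQL shows what each method maps to):

```typescript
// Sketch only: each method corresponds to one prepared statement against
// ipfs_blocks (cid, data). A Map stands in for the SQLite table here.
class SqliteBlockstoreSketch {
  private rows = new Map<string, Uint8Array>();

  put(cid: string, data: Uint8Array): void {
    // INSERT OR REPLACE INTO ipfs_blocks (cid, data) VALUES (?, ?)
    this.rows.set(cid, data);
  }
  get(cid: string): Uint8Array {
    // SELECT data FROM ipfs_blocks WHERE cid = ?
    const row = this.rows.get(cid);
    if (!row) throw new Error(`block not found: ${cid}`);
    return row;
  }
  has(cid: string): boolean {
    return this.rows.has(cid);
  }
  delete(cid: string): void {
    // DELETE FROM ipfs_blocks WHERE cid = ?
    this.rows.delete(cid);
  }
}
```

One row per block replaces one file per block, which is what eliminates the FsBlockstore file explosion.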
Strip libp2p down to minimal config (TCP + noise + yamux + identify only):
- Remove DHT, gossipsub, relay, autoNAT, UPnP, dcutr, WebRTC, ping
- These services pegged CPU connecting to random peers on the public network
- P2PDS dials known peers directly using multiaddrs from peer records
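The resulting configuration, sketched against current libp2p module names. Treat this as illustrative rather than the project's exact code — option names follow recent libp2p majors (older versions used connectionEncryption instead of connectionEncrypters):

```typescript
// Minimal-footprint libp2p: one transport, one encrypter, one muxer, identify.
// No DHT, gossipsub, relay, autoNAT, UPnP, dcutr, WebRTC, or ping services.
import { createLibp2p } from "libp2p";
import { tcp } from "@libp2p/tcp";
import { noise } from "@chainsafe/libp2p-noise";
import { yamux } from "@chainsafe/libp2p-yamux";
import { identify } from "@libp2p/identify";

const node = await createLibp2p({
  addresses: { listen: ["/ip4/0.0.0.0/tcp/0"] },
  transports: [tcp()],
  connectionEncrypters: [noise()],
  streamMuxers: [yamux()],
  services: { identify: identify() },
});
```

With no discovery services running, the node only makes outbound connections when explicitly dialing a multiaddr from a peer record.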
Harden auth and server lifecycle for OAuth mode:
- Guard legacy JWT endpoints (createSession, refreshSession, getSession)
against missing config when OAUTH_ENABLED=true — return 501 instead of crash
- Guard auth middleware against undefined PDS_HOSTNAME/JWT_SECRET
- Register SIGINT/SIGTERM handlers before awaiting startServer() so shutdown
signals during startup are handled cleanly
Add two-node manual testing scripts:
- scripts/start-node.sh: build + start a single node on a random port
- scripts/start-both.sh: start two nodes with separate data dirs
- scripts/stop-both.sh: stop both nodes
- scripts/clean.sh: wipe data for both nodes (preserves .env files)
- scripts/logs.sh: show recent logs
- scripts/test-add-did.sh: offer a DID on a running node
- npm scripts: start:node1, start:node2, start:both, stop, clean, logs, test:add-did
Update README with current architecture, configuration, and project structure.
Update .gitignore for test artifacts (data-node2/, plans/, .claude/settings.local.json).
When Node A offers to replicate Node B's data, Node A now resolves
Node B's org.p2pds.peer record to find their p2pds endpoint URL and
POSTs a notification. Node B verifies the offer exists in Node A's
repo (anti-spoofing), stores it in an incoming_offers table, and
shows it in the dashboard with Accept/Reject buttons. Accepting
creates a reciprocal offer which triggers mutual agreement detection.
- Add PUBLIC_URL config + endpoint field to peer record
- Add endpoint to PeerInfo type in peer-discovery
- Add incoming_offers table + CRUD methods in sync-storage
- Add push notification in offerDid(), acceptOffer/rejectOffer in
ReplicationManager
- Add notifyOffer (unauthenticated), acceptOffer, rejectOffer XRPC
endpoints with rate limiting
- Add incoming offers UI section in dashboard
Dashboard "Add" now calls offerDid(), which publishes an offer record and stores it in the
offered_dids table without triggering a sync. Replication begins only when mutual consent
is detected during periodic offer discovery. Offered DIDs display with purple "offered"
status in the dashboard and can be revoked.
IPFS+libp2p no longer starts unconditionally at boot. Instead, it waits
for a DID — either loaded from stored identity on restart, or established
via first OAuth login. This avoids wasting CPU connecting to peers when
there's nothing to replicate.
Exercises the complete on-protocol loop with real networking: two
startServer() instances, self-sync, peer discovery via mock PDS
records, cross-sync via CAR-over-libp2p, XRPC serving verification,
incremental re-sync, and mutual offers generating auto-policies.
When a peer or HTTP client provides a `since` parameter, compute the MST
diff between current and since states, serving only new/changed blocks
instead of the full repo CAR. Falls back to full CAR if the since rev is
unknown or old blocks have been GC'd.
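The selection logic amounts to a diff over the two states. This leaf-level sketch (hypothetical shapes) captures the idea; the real implementation diffs MST nodes, so changed interior tree blocks ship alongside the records:

```typescript
// Given record-path -> block-CID maps at `since` and at the current rev,
// serve only blocks that are new or changed since the client's last sync.
function diffBlocks(
  since: Map<string, string>,
  current: Map<string, string>,
): string[] {
  const changed: string[] = [];
  for (const [path, cid] of current) {
    if (since.get(path) !== cid) changed.push(cid);
  }
  return changed;
}
```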
Track what initiated each sync event (firehose, gossipsub, periodic,
manual, gc, tombstone-recovery, firehose-resync) separately from the
transport source type. Adds trigger column to sync_history with schema
migration, threads trigger through all syncDid() call sites, and
displays colored trigger badges in dashboard tables with per-DID
breakdown summaries.
- Real bidirectional replication test (scripts/real-bidir-test.ts):
event-driven OAuth+sync flow, IPFS_NETWORKING=true, libp2p cross-sync
assertions, session reuse for fast re-runs, data-aware self-sync detection
- Dashboard: gate add-DID during self-sync with spinner, activity spinner,
disabled input/button styling, hide self-DID remove button
- Rename xrpc/admin → xrpc/app (routes, tests, e2e tests)
- Gossipsub shutdown error handling in replication-manager
- Tauri desktop sidecar process management
- Memory: NEXT-STEPS.md with reactive sync roadmap
rmSync can fail with ENOTEMPTY when blockstore files haven't been fully
released after ipfsService.stop(). Add maxRetries + try/catch to match
the pattern used in other test files.
Introduces /p2pds/repo-sync/1.0.0 libp2p protocol for peer-to-peer repo
transfer without centralized PDS servers. syncDid() now tries libp2p first
when peer info is available, falling back to HTTP PDS on failure.
- New libp2p-sync.ts: protocol handler (server) + fetchRepoFromPeer (client)
- Self-replication: OAuth login triggers addDid(own DID) to seed blockstore
- IpfsService: add dial() method for direct peer connections
- sync.ts: extract generateCarForDid() reusable by both HTTP and libp2p handlers
- start.ts: register repo sync protocol after IPFS + replication init
- admin.ts: rename "Replicated DIDs" → "Replicating Accounts", allow self-DID
Two tests validating the full bidirectional loop:
1. Two nodes sync each other's data and serve it via all sync/repo endpoints
(getRepo, getRepoStatus, listRepos, getRecord, listRecords, describeRepo)
2. Mutual offers create P2P replication policies with correct parameter merging
(max minCopies, min intervalSec, max priority)
Uses enhanced mock PDS with configurable records per DID/collection.
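The merge rule under test, as a sketch (field names assumed from the commit text): each parameter resolves to the more demanding side.

```typescript
type OfferParams = { minCopies: number; intervalSec: number; priority: number };

// Mutual offers merge by taking max minCopies (more redundancy),
// min intervalSec (more frequent syncs), and max priority.
function mergeOffers(a: OfferParams, b: OfferParams): OfferParams {
  return {
    minCopies: Math.max(a.minCopies, b.minCopies),
    intervalSec: Math.min(a.intervalSec, b.intervalSec),
    priority: Math.max(a.priority, b.priority),
  };
}
```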
- ReplicationManager.setPdsClient() lazily creates OfferManager after OAuth login
- All 7 sync endpoints accept optional repoManager, registered unconditionally
- Repo read endpoints (describeRepo, getRecord, listRecords) serve replicated data without local repo
- listBlobs and sync.getRecord fall back to SyncStorage/ReplicatedRepoReader
- Logout with ?disconnect=true revokes offers/peer record, clears node_identity, unbinds DID
- OAuth callback wires PdsClient into ReplicationManager automatically
- Added typecheck npm script
Two tests: (1) two clean servers establish identity, each replicates a
different external account, verifies sync state; (2) identity and
replication state persist across server restart.
Fix: upsertState no longer overwrites status on conflict — existing
status is preserved, only pds_endpoint and peer_id are updated.
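In SQLite terms the fix is an upsert whose DO UPDATE clause simply omits status (schema abbreviated; exact column set assumed). The same semantics as a pure function, for clarity:

```typescript
// The ON CONFLICT update lists only the columns allowed to change;
// leaving `status` out is what preserves the existing value.
const UPSERT_STATE = `
  INSERT INTO replication_state (did, status, pds_endpoint, peer_id)
  VALUES (?, ?, ?, ?)
  ON CONFLICT(did) DO UPDATE SET
    pds_endpoint = excluded.pds_endpoint,
    peer_id = excluded.peer_id
`;

type RepState = { did: string; status: string; pdsEndpoint: string; peerId: string };

// Pure-function equivalent of the conflict branch above.
function mergeState(existing: RepState, incoming: RepState): RepState {
  return { ...existing, pdsEndpoint: incoming.pdsEndpoint, peerId: incoming.peerId };
}
```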
9 tests verify: server starts without DID, dashboard/health work,
replication manager exists, can add/sync DIDs, node_identity table
persists identity across restarts, env DID overrides stored identity.
Fixes: SyncStorage/ChallengeStorage schemas now init eagerly in
ReplicationManager constructor (not deferred to async init()), and
backfillIpfs skips when no RepoManager exists.
Server now starts with just infrastructure config (PORT, AUTH_TOKEN, etc.)
and gets its identity from OAuth login. node_identity table persists the
DID across restarts. RepoManager is optional throughout — Firehose,
ReplicationManager, and OAuth routes all handle its absence gracefully.
Extract startup logic from server.ts into src/start.ts with an exported
startServer(config, opts?) function that returns a ServerHandle for
programmatic control. This enables testing the full startup sequence
(DB, IPFS, replication, HTTP) in vitest and supports the Tauri sidecar
use case. Add server-startup.test.ts with 5 integration tests, a
scripts/smoke-test.sh for manual/CI smoke testing, and an npm
smoke-test script.
- Mock PDS test helper (test-helpers.ts): createTestRepo(), startMockPds(),
createMockDidResolver() for fast integration testing with tiny repos
- E2E sync tests (e2e-sync.test.ts): 7 tests covering syncDid() pipeline
against mock PDS — empty/record/blob accounts, admin API, persistence
- Sync progress logging in syncDid(): CAR size, block count, blob stats,
total duration at each checkpoint
- Fetch timeouts in repo-fetcher.ts: 60s for fetchRepo, 30s for fetchBlob
via AbortController to prevent hanging on slow PDSes
- Remove hasReplicateDids gate so ReplicationManager initializes with
empty DID list (allows adding DIDs via dashboard without pre-config)
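The timeout bullet boils down to the standard AbortController pattern (helper name hypothetical; 60s/30s are the values from the commit):

```typescript
// Abort a hung fetch after `ms` milliseconds instead of hanging forever
// on a slow or stalled PDS.
async function fetchWithTimeout(url: string, ms: number) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer); // don't leave the timer pending on success
  }
}
```

Recent Node versions also offer AbortSignal.timeout(ms) as a shorthand when no manual cleanup is needed.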
- Enrich /oauth/status with profile info (avatar, displayName, handle)
from public Bluesky API
- Show account profile in System Overview and Account Connection cards
- Replace plain DID input with account search typeahead in Replicated
DIDs section (uses public app.bsky.actor.searchActorsTypeahead)
- Show avatar + display name + handle for tracked DIDs (async resolved)
- Rename header from "P2PDS Admin" to "P2PDS"
Enable p2pds to authenticate as a user via AT Protocol OAuth and publish
records (peer info, replication offers) to their real PDS instead of only
the local SQLite repo. This unblocks real-world peer discovery.
- Add @atproto/oauth-client-node and @atproto/api dependencies
- SQLite-backed OAuth state/session stores (src/oauth/stores.ts)
- Loopback OAuth client setup per AT Protocol spec (src/oauth/client.ts)
- PdsClient implementing RecordWriter for remote XRPC calls (src/oauth/pds-client.ts)
- Browser login flow routes: /oauth/login, /oauth/callback, /oauth/status
- Extract RecordWriter interface from OfferManager (both RepoManager and
PdsClient satisfy it, no changes to method bodies)
- Wire OAuth through server.ts → ReplicationManager → OfferManager
- Add Account Connection card to admin dashboard
- Opt-in via OAUTH_ENABLED=true (existing deployments unchanged)
Rewrite README to reflect the new user-DID model, DASL compliance
requirement, lexicon definitions, verification layers, policy engine,
desktop app, and current project status. Update CLAUDE.md to match.
Remove the did:web node identity layer — p2pds is now infrastructure
that acts on behalf of authenticated atproto users, not an entity with
its own identity. Records (peer, offer) publish to the user's own repo.
- Delete src/node-identity.ts and its tests
- Remove NODE_DID, NODE_MANAGERS, NODE_NAME from Config
- Simplify server.ts (single repo), index.ts (no NodeIdentityOpts),
auth.ts (config.DID only), replication-manager.ts (no peer publishing)
- Update all xrpc handlers and 13 test files
- Add lexicon JSON schemas for org.p2pds.peer and org.p2pds.replication.offer
- Add src/lexicons.ts loader/validator, wire into RecordValidator
- Add Tauri v2 desktop app skeleton at apps/desktop/ with sidecar pattern
- Add npm workspaces config
Defense-in-depth rate limiting with zero-config defaults:
- Sliding window rate limiter (src/rate-limiter.ts) with per-pool isolation
- HTTP middleware: per-route rate limits (meta/sync/session/read/write/challenge/admin)
- Body size limits: 1MB JSON, 64KB challenge, 60MB blob, 100MB CAR
- Gossipsub: 8KB message size cap, per-topic rate limiting (60/min commits, 10/min identity)
- libp2p: stream size caps (64KB inbound challenges, 1MB responses)
- libp2p: connection manager limits (100 max, 10 pending, 5/s inbound threshold)
- WebSocket firehose: per-IP connection limits (default 3)
- Challenge validation: targetDid check, path/CID count caps, expiration rejection
- All configurable via env vars, disabled by default in tests
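A sliding-window limiter in its minimal form (the real src/rate-limiter.ts adds per-pool isolation and env configuration; names here are illustrative):

```typescript
// Keep per-key hit timestamps; a request is allowed while fewer than
// `limit` hits fall inside the trailing window.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent); // still prune expired entries
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

Keys can be IPs, DIDs, or route pools, which is how one class serves HTTP routes, gossipsub topics, and stream handlers alike.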
Nodes now have their own did:web:{hostname} identity for coordination
(consent, discovery, policy), independent of any social account they
may host. Social account fields (DID, HANDLE, SIGNING_KEY) are now
optional — nodes can run as replication-only without a social account.
- New node-identity module: keypair lifecycle, did:web derivation, DID
document generation
- Config: social fields optional, added NODE_DID, NODE_MANAGERS
- Server: auto-generates node keypair, separate node-repo.db, optional
social account repo
- Auth: accepts node DID + social DID + manager DIDs
- ReplicationManager: uses nodeDid for gossipsub, offers, challenges
- XRPC handlers: fall back to NODE_DID when social DID not configured
- All 384 tests pass, zero TypeScript errors
Two-node demo: creates records on Node A, replicates to Node B,
leaves Node B running with dashboard visible at localhost:3000.
- sync_history table tracks every sync event with source type, block/blob
counts, byte sizes, duration, and status
- size_bytes column on replication_blocks and replication_blobs for
accurate storage accounting
- Aggregate metrics API: total blocks/blobs/records/bytes held, syncs,
24h transfer volume
- Per-DID metrics: record count, bytes held, recent sync history
- New getSyncHistory endpoint for global sync event log
- Instrumented syncDid(), applyFirehoseBlocks(), syncBlobs() to record
events with full metrics
- Redesigned dashboard: metrics summary grid, enriched DID table with
expandable per-DID details, sync history card, source type badges,
formatBytes/timeAgo helpers
Self-contained HTML page at /xrpc/org.p2pds.admin.dashboard that
fetches existing admin JSON APIs and renders system overview,
replication table, network, policies, and verification sections
with 30s auto-refresh. No auth required on the dashboard route;
token from config is embedded for client-side API calls.
Four new authenticated endpoints expose existing internal state:
- getOverview: aggregated system status (network, replication, firehose, policy, verification)
- getDidStatus: per-DID detail (sync state, block/blob counts, peer endpoints, effective policy)
- getNetworkStatus: P2P connectivity (peerId, multiaddrs, connections)
- getPolicies: policy engine configuration and explicit DID lists
Detect and log PeerID changes during sync, trigger immediate re-discovery
on connection failure, republish identity when multiaddrs change, cache
observed libp2p addrs, and broadcast identity changes via gossipsub
(/p2pds/identity/1/{did} topics).
Proves gossipsub pub/sub and challenge-response streams coexist on the
same libp2p instance pair, and that FailoverChallengeTransport correctly
resolves HTTP endpoints to multiaddrs via SyncStorage.
Peer discovery now persists multiaddrs alongside peerId in the
replication_state table. The failover challenge transport's
resolveEndpoint hook queries SyncStorage to map PDS HTTP endpoints
to libp2p multiaddrs, enabling direct P2P challenges before falling
back to HTTP.
- Add peer_multiaddrs column to replication_state (with migration)
- Update updatePeerInfo/clearPeerInfo to handle multiaddrs
- Add getMultiaddrForPdsEndpoint() lookup (prefers /p2p/ addrs)
- Pass multiaddrs through PeerDiscovery → ReplicationManager → storage
- Wire resolveEndpoint closure in server.ts challenge transport setup
Nodes publish lightweight CBOR notifications over gossipsub when commits
occur (local or replicated). Subscribed peers compare rev and trigger
syncDid() if newer, enabling low-latency P2P sync without polling.
- Add @libp2p/gossipsub v15 (compatible with libp2p v3/multiaddr v13)
- Extend NetworkService with publish/subscribe/handler for commit topics
- Publish notifications in RepoManager.sequenceAndBroadcast() and
ReplicationManager after sync
- Subscribe to per-DID topics in ReplicationManager.init() with dedup
- Update libp2p-transport.ts to v3 Stream API (send+close vs sink)
- Add E2E gossipsub test, encoding tests, and integration tests
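The receiving side's rev comparison can be a plain string compare, because repo revs are TIDs, which are designed to sort lexicographically by creation time (function name illustrative):

```typescript
// Sync when we have nothing yet, or when the announced rev is newer
// than what we hold. TID ordering makes string comparison sufficient.
function shouldSync(localRev: string | undefined, announcedRev: string): boolean {
  return localRev === undefined || announcedRev > localRev;
}
```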
Extend getRepo and getBlocks XRPC endpoints to serve replicated DID
data from the BlockStore, enabling other P2PDS nodes to sync repos
from peers instead of only from the source PDS.
Add peer_endpoints table to track which peers have which DIDs,
populated during manifest discovery. When syncDid() fails to reach
the source PDS, it now falls back to fetching from known peer
endpoints sorted by freshest data.
FailoverChallengeTransport wraps a primary and fallback ChallengeTransport.
On sendChallenge, tries the primary; on failure, invokes onFallback callback
and retries via fallback. Optional resolveEndpoint hook maps HTTP URLs to
multiaddrs for the libp2p transport. server.ts now uses failover when
libp2p is available.
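The wrapper is a generic failover pattern; a sketch matching the described behavior (interface shape assumed, resolveEndpoint omitted for brevity):

```typescript
interface ChallengeTransport {
  sendChallenge(endpoint: string, payload: unknown): Promise<unknown>;
}

// Try the primary transport; on any failure, notify via onFallback and
// retry the same challenge through the fallback transport.
class FailoverTransport implements ChallengeTransport {
  constructor(
    private primary: ChallengeTransport,
    private fallback: ChallengeTransport,
    private onFallback?: (err: unknown) => void,
  ) {}

  async sendChallenge(endpoint: string, payload: unknown): Promise<unknown> {
    try {
      return await this.primary.sendChallenge(endpoint, payload);
    } catch (err) {
      this.onFallback?.(err);
      return this.fallback.sendChallenge(endpoint, payload);
    }
  }
}
```

With libp2p as primary and HTTP as fallback, a peer behind NAT still gets challenged over a direct stream when possible, and over HTTP otherwise.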
After syncing repo blocks, walk records to extract blob CIDs, fetch from
source PDS via com.atproto.sync.getBlob, CID-verify, store in BlockStore
(IPFS), and track in replication_blobs table. Firehose path also triggers
blob fetching for create/update ops. getBlob endpoint extended to serve
replicated blobs from BlockStore.
Peers publish org.p2pds.replication.offer records declaring willingness
to replicate specific DIDs. Mutual offers are detected during sync and
automatically converted into PolicyEngine policies, driving the existing
replication and challenge machinery. Revoking an offer removes the
derived policy on the next discovery cycle.
Enables challenge-response verification over direct P2P connections
without requiring public HTTP endpoints. Adds /p2pds/challenge/1.0.0
protocol using half-close request-response streams, with fallback to
HTTP transport. New getMstProof XRPC endpoint lets light clients
request and verify MST proofs without downloading full repos.
Extract all record paths via full MST walk after syncDid() and track
paths incrementally from firehose ops, enabling the challenge scheduler
to generate MST proof challenges against replicated repos.
Implements a transport-agnostic challenge-response system for proving
peers still hold specific records. Three challenge types (mst-proof,
block-sample, combined) with deterministic generation, SQLite-backed
history/reliability tracking, and policy-driven scheduling.
Replaces L2/L3 verification stubs with real challenge-based verification
when a ChallengeTransport is available. Adds HTTP transport adapter,
three new XRPC routes, and record path tracking for challenge generation.
Firehose incremental pipeline: handleFirehoseCommit() now applies blocks
directly from the firehose event via applyFirehoseBlocks(), skipping the
HTTP round-trip to the source PDS. Falls back to full syncDid() for edge
cases (tooBig, rebase, sequence gaps, CAR parse failures). 14 new tests.
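The fast-path gate can be sketched as a predicate (event shape and sequencing rule assumed; the real check also covers rebase events and CAR parse failures discovered later in the pipeline):

```typescript
type CommitEvt = { tooBig: boolean; rebase: boolean; seq: number };

// Apply firehose blocks directly only when the event is self-contained
// and contiguous; otherwise fall back to a full syncDid().
function canApplyIncrementally(evt: CommitEvt, lastSeq: number | undefined): boolean {
  if (evt.tooBig || evt.rebase) return false;            // event lacks full blocks
  if (lastSeq !== undefined && evt.seq !== lastSeq + 1) return false; // sequence gap
  return true;
}
```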
Policy engine integration: ReplicationManager optionally accepts a
PolicyEngine to drive which DIDs get replicated, at what frequency, and
in what priority order. Per-DID sync intervals replace the fixed 5-min
global timer. Server startup loads policies from POLICY_FILE and wires
them through. Fully backward compatible when no engine is provided.
20 new tests.
Also fix flaky temp dir cleanup in replication tests (ENOTEMPTY race).
Four major features implemented in parallel:
- Real-time firehose sync: FirehoseSubscription subscribes to
com.atproto.sync.subscribeRepos with CBOR frame parsing, DID filtering,
cursor persistence in SQLite, and exponential backoff reconnection.
Integrated with ReplicationManager and configurable via FIREHOSE_URL
and FIREHOSE_ENABLED env vars.
- L3 MST path proof verification: generateMstProof() extracts minimal
Merkle Search Tree node path from root to leaf; verifyMstProof()
validates purely from proof bytes + trusted commit CID. Supports both
existence and non-existence proofs.
- Policy engine MVP: Declarative, deterministic, transport-agnostic
policy system with PolicyEngine class, three presets (mutualAid, saas,
groupGovernance), and config integration via POLICY_FILE env var.
- E2E networking integration tests: Two real Helia nodes with TCP on
localhost verify bitswap block exchange, bidirectional transfer, and
peer discovery.
Replicated repos' blocks were already in IPFS but weren't queryable via standard
atproto endpoints. This adds an IpfsReadableBlockstore adapter and
ReplicatedRepoReader service that loads ReadableRepo instances on demand,
making getRecord, listRecords, and describeRepo work for replicated DIDs.
Also fixes rev extraction: syncDid() now decodes the commit block CBOR to
get the actual TID rev (instead of storing the root CID as rev), and stores
both root_cid and rev separately in replication_state.
Splits IpfsService into two narrow interfaces (BlockStore for storage, NetworkService
for P2P networking) so consumers depend only on what they need. Eases future transport
migrations (e.g. Iroh, Hyperswarm) without touching storage or verification code.
Implements content-addressed verification via RASL endpoints to prove
remote peers actually host the blocks they claim to serve. Tracks
replicated block CIDs per-DID and runs verification on a separate timer.
Implement the core replication loop: announce, discover, replicate. Nodes
publish AT Protocol records declaring their IPFS PeerID (org.p2pds.peer)
and which DIDs they replicate (org.p2pds.manifest). Other nodes discover
this info, fetch repos via CAR export, store blocks in IPFS, and verify
availability.
New modules: types, sync-storage, repo-fetcher, peer-discovery,
verification, and replication-manager orchestrator. Adds REPLICATE_DIDS
config, getMultiaddrs() to IpfsService, replication status/syncNow XRPC
endpoints, and 27 tests covering all components plus integration CAR
roundtrip.
22 tests covering IpfsService unit tests (lifecycle, block storage
roundtrips, BlockMap, graceful no-ops, provideBlocks), RASL endpoint
integration tests (IPFS/SQLite/blob fallback chain, 404, headers),
and config flag defaults.
Blocks and blobs are now stored in a Helia-managed FsBlockstore and
announced on the DHT. A new /.well-known/rasl/:cid endpoint serves
content-addressed blocks over HTTP with immutable caching. IPFS
operations are fire-and-forget and never block the commit path.
Existing blocks are backfilled on startup. Controlled via
IPFS_ENABLED and IPFS_NETWORKING env vars.
Port Cirrus (Cloudflare Workers PDS) to a standalone Node.js server:
- Hono HTTP framework via @hono/node-server
- better-sqlite3 replacing Cloudflare SqlStorage
- Filesystem blob storage replacing R2
- ws library replacing hibernatable WebSockets
- In-memory DID cache replacing Workers Cache API
- Bearer token + JWT session auth (OAuth/passkeys deferred)
Verified: health check, DID document, session management, record
CRUD, CAR export, and WebSocket firehose all functional.
Aggregate lexicon NSIDs from replicated record paths into a
searchable index. Three public endpoints (search, list, stats),
incremental updates on sync/firehose, full rebuild on startup,
and a Lit UI component with prefix search.
README updated with lexicon index, PLC log archiving, rotation
key management, and kadDHT client mode.
- Show PLC log status (archived/validated/tombstoned) in DID detail view
- Enable kadDHT client mode for content routing and PLC log discovery
- Add provideForDid/findProvidersForDid to NetworkService for DHT announcements
- Add computeDiscoveryCid for deterministic DID→CID mapping
- Add public /xrpc/org.p2pds.plc.getLog endpoint for cross-node PLC exchange
- Fall back to DHT peer discovery when plc.directory is unavailable
- Re-announce mirrored DIDs to DHT on periodic refresh
Phase 1 of distributed PLC mirror: fetches, cryptographically validates,
and stores PLC audit logs for every tracked did:plc DID. Validates full
operation chain (genesis DID derivation, secp256k1/P-256 ECDSA signatures,
prev-CID integrity). Refreshes on identity events, firehose account
changes, and a 6-hour periodic timer. Adds API endpoints and UI indicator.
Rewrites README to reflect the decoupled architecture (no node identity,
lazy OAuth identity, any-PDS compatibility), documents the three lexicon
record types and where they're used in the offer→replication flow, and
describes the three replication modes (reciprocal, consensual,
non-consensual archive). Updates libp2p description in CLAUDE.md.
- Lit web component UI (esbuild-bundled): account, system, replications,
sync history, network, policies, verification, incoming offers cards
- SSE endpoint (org.p2pds.app.syncProgress) streams real-time sync events
from ReplicationManager to the browser via EventSource
- Progress events emitted at key sync milestones: start, car-received,
blocks-stored, verified, blob-progress, complete, error, cycle boundaries
- UI merges live progress into replication rows (block/blob counters tick
up during sync, full refresh on completion)
- Layout: Account card top-left, System card (with network info) top-right,
removed separate Network section
- Improved add-account form: same-height row, inline x clear button,
renamed policy options (reciprocal/consensual/non-consensual archive)
- Reduced libp2p footprint: removed kadDHT, 2 bootstrap peers, max 10
connections (kept autoNAT for dialability checks)
- Policy lifecycle: StoredPolicy with state machine (proposed/active/
suspended/terminated/purged), consent status, timestamps, persistence
via PolicyStorage. New configArchive() and archive() presets.
- Consent records: CONSENT_NSID + ConsentRecord type, consent.json lexicon
- OAuth default-on: OAUTH_ENABLED defaults true (opt-out via =false)
- Session safety: purge leftover data on new OAuth login
- Scripts: --clean flag, fixed ports (6700/6701), check-api/health/etc
- Terminology: "dashboard" -> "app", "admin" -> "user" throughout
Three replication modes: bidirectional (mutual offers), archive with
consent (requires target opt-in record), and archive without consent.
Consent check fires async on account selection. Local user can toggle
their own consent via checkbox in Account section. Three new endpoints:
checkConsent, getMyConsent, setMyConsent.
Replace filesystem-based Helia storage with SQLite:
- SqliteBlockstore: implements Blockstore interface backed by ipfs_blocks table,
eliminating thousands of tiny files from FsBlockstore
- SqliteDatastore: implements Datastore interface backed by ipfs_datastore table,
replacing FsDatastore for libp2p peer/routing state
- All persistent state now lives in a single pds.db file
Strip libp2p down to minimal config (TCP + noise + yamux + identify only):
- Remove DHT, gossipsub, relay, autoNAT, UPnP, dcutr, WebRTC, ping
- These services pegged CPU connecting to random peers on the public network
- P2PDS dials known peers directly using multiaddrs from peer records
Harden auth and server lifecycle for OAuth mode:
- Guard legacy JWT endpoints (createSession, refreshSession, getSession)
against missing config when OAUTH_ENABLED=true — return 501 instead of crash
- Guard auth middleware against undefined PDS_HOSTNAME/JWT_SECRET
- Register SIGINT/SIGTERM handlers before awaiting startServer() so shutdown
signals during startup are handled cleanly
Add two-node manual testing scripts:
- scripts/start-node.sh: build + start a single node on a random port
- scripts/start-both.sh: start two nodes with separate data dirs
- scripts/stop-both.sh: stop both nodes
- scripts/clean.sh: wipe data for both nodes (preserves .env files)
- scripts/logs.sh: show recent logs
- scripts/test-add-did.sh: offer a DID on a running node
- npm scripts: start:node1, start:node2, start:both, stop, clean, logs, test:add-did
Update README with current architecture, configuration, and project structure.
Update .gitignore for test artifacts (data-node2/, plans/, .claude/settings.local.json).
When Node A offers to replicate Node B's data, Node A now resolves
Node B's org.p2pds.peer record to find their p2pds endpoint URL and
POSTs a notification. Node B verifies the offer exists in Node A's
repo (anti-spoofing), stores it in an incoming_offers table, and
shows it in the dashboard with Accept/Reject buttons. Accepting
creates a reciprocal offer which triggers mutual agreement detection.
- Add PUBLIC_URL config + endpoint field to peer record
- Add endpoint to PeerInfo type in peer-discovery
- Add incoming_offers table + CRUD methods in sync-storage
- Add push notification in offerDid(), acceptOffer/rejectOffer in
ReplicationManager
- Add notifyOffer (unauthenticated), acceptOffer, rejectOffer XRPC
endpoints with rate limiting
- Add incoming offers UI section in dashboard
Track what initiated each sync event (firehose, gossipsub, periodic,
manual, gc, tombstone-recovery, firehose-resync) separately from the
transport source type. Adds trigger column to sync_history with schema
migration, threads trigger through all syncDid() call sites, and
displays colored trigger badges in dashboard tables with per-DID
breakdown summaries.
- Real bidirectional replication test (scripts/real-bidir-test.ts):
event-driven OAuth+sync flow, IPFS_NETWORKING=true, libp2p cross-sync
assertions, session reuse for fast re-runs, data-aware self-sync detection
- Dashboard: gate add-DID during self-sync with spinner, activity spinner,
disabled input/button styling, hide self-DID remove button
- Rename xrpc/admin → xrpc/app (routes, tests, e2e tests)
- Gossipsub shutdown error handling in replication-manager
- Tauri desktop sidecar process management
- Memory: NEXT-STEPS.md with reactive sync roadmap
Introduces /p2pds/repo-sync/1.0.0 libp2p protocol for peer-to-peer repo
transfer without centralized PDS servers. syncDid() now tries libp2p first
when peer info is available, falling back to HTTP PDS on failure.
- New libp2p-sync.ts: protocol handler (server) + fetchRepoFromPeer (client)
- Self-replication: OAuth login triggers addDid(own DID) to seed blockstore
- IpfsService: add dial() method for direct peer connections
- sync.ts: extract generateCarForDid() reusable by both HTTP and libp2p handlers
- start.ts: register repo sync protocol after IPFS + replication init
- admin.ts: rename "Replicated DIDs" → "Replicating Accounts", allow self-DID
Two tests validating the full bidirectional loop:
1. Two nodes sync each other's data and serve it via all sync/repo endpoints
(getRepo, getRepoStatus, listRepos, getRecord, listRecords, describeRepo)
2. Mutual offers create P2P replication policies with correct parameter merging
(max minCopies, min intervalSec, max priority)
Uses enhanced mock PDS with configurable records per DID/collection.
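The parameter-merging rule (max minCopies, min intervalSec, max priority) amounts to taking the stricter value of each field. A minimal sketch, with an illustrative OfferParams shape rather than the project's actual types:

```typescript
// Illustrative shape; the real offer record carries more fields.
interface OfferParams {
  minCopies: number;    // desired redundancy
  intervalSec: number;  // sync interval
  priority: number;     // scheduling priority
}

// When two peers' mutual offers disagree, take the stricter value of each
// parameter: highest minCopies, shortest interval, highest priority.
function mergeOfferParams(a: OfferParams, b: OfferParams): OfferParams {
  return {
    minCopies: Math.max(a.minCopies, b.minCopies),
    intervalSec: Math.min(a.intervalSec, b.intervalSec),
    priority: Math.max(a.priority, b.priority),
  };
}
```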
- ReplicationManager.setPdsClient() lazily creates OfferManager after OAuth login
- All 7 sync endpoints accept optional repoManager, registered unconditionally
- Repo read endpoints (describeRepo, getRecord, listRecords) serve replicated data without local repo
- listBlobs and sync.getRecord fall back to SyncStorage/ReplicatedRepoReader
- Logout with ?disconnect=true revokes offers/peer record, clears node_identity, unbinds DID
- OAuth callback wires PdsClient into ReplicationManager automatically
- Added typecheck npm script
Two tests: (1) two clean servers establish identity, each replicates a
different external account, verifies sync state; (2) identity and
replication state persist across server restart.
Fix: upsertState no longer overwrites status on conflict — existing
status is preserved, only pds_endpoint and peer_id are updated.
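The fixed upsert semantics can be modeled in memory as follows; the real code is a SQLite ON CONFLICT DO UPDATE clause, and the field names here are illustrative.

```typescript
interface ReplicationState {
  status: string;
  pdsEndpoint: string;
  peerId: string | null;
}

// On conflict, update only pdsEndpoint and peerId; never clobber an
// existing status (the bug was overwriting it on every upsert).
function upsertState(
  table: Map<string, ReplicationState>,
  did: string,
  incoming: ReplicationState,
): void {
  const existing = table.get(did);
  if (existing === undefined) {
    table.set(did, { ...incoming });
  } else {
    existing.pdsEndpoint = incoming.pdsEndpoint;
    existing.peerId = incoming.peerId;
    // existing.status intentionally preserved
  }
}
```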
9 tests verify: server starts without DID, dashboard/health work,
replication manager exists, can add/sync DIDs, node_identity table
persists identity across restarts, env DID overrides stored identity.
Fixes: SyncStorage/ChallengeStorage schemas now init eagerly in
ReplicationManager constructor (not deferred to async init()), and
backfillIpfs skips when no RepoManager exists.
Extract startup logic from server.ts into src/start.ts with an exported
startServer(config, opts?) function that returns a ServerHandle for
programmatic control. This enables testing the full startup sequence
(DB, IPFS, replication, HTTP) in vitest and supports the Tauri sidecar
use case. Add server-startup.test.ts with 5 integration tests, a
scripts/smoke-test.sh for manual/CI smoke testing, and an npm
smoke-test script.
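The ServerHandle pattern can be sketched as below. This is a minimal stand-in, not the real startServer(): the actual handle in src/start.ts presumably exposes more (DB, IPFS, replication), and the names here are assumptions.

```typescript
import { createServer, type Server } from "node:http";

// Assumed minimal shape of the handle returned for programmatic control.
interface ServerHandle {
  port: number;
  stop(): Promise<void>;
}

// Start everything, hand back a handle that tests (or the Tauri sidecar)
// can use to shut down cleanly.
async function startServerSketch(port = 0): Promise<ServerHandle> {
  const server: Server = createServer((_req, res) => res.end("ok"));
  await new Promise<void>((resolve) => server.listen(port, resolve));
  const address = server.address();
  const boundPort =
    typeof address === "object" && address !== null ? address.port : port;
  return {
    port: boundPort,
    stop: () =>
      new Promise<void>((resolve, reject) =>
        server.close((err) => (err ? reject(err) : resolve())),
      ),
  };
}
```

A test then becomes: start, probe, stop, with no process-global state left behind.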
- Mock PDS test helper (test-helpers.ts): createTestRepo(), startMockPds(),
createMockDidResolver() for fast integration testing with tiny repos
- E2E sync tests (e2e-sync.test.ts): 7 tests covering syncDid() pipeline
against a mock PDS — empty/record/blob accounts, admin API, persistence
- Sync progress logging in syncDid(): CAR size, block count, blob stats,
total duration at each checkpoint
- Fetch timeouts in repo-fetcher.ts: 60s for fetchRepo, 30s for fetchBlob
via AbortController to prevent hanging on slow PDSes
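The AbortController timeout pattern generalizes to any async operation that accepts a signal. A sketch (the real repo-fetcher.ts wiring may differ):

```typescript
// Run an abortable operation with a deadline; the operation receives an
// AbortSignal and must reject once it fires.
async function withTimeout<T>(
  ms: number,
  run: (signal: AbortSignal) => Promise<T>,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(new Error("timed out")), ms);
  try {
    return await run(controller.signal);
  } finally {
    clearTimeout(timer);
  }
}
```

fetchRepo would then be roughly `withTimeout(60_000, (signal) => fetch(url, { signal }))`, since fetch rejects when its signal aborts.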
- Remove hasReplicateDids gate so ReplicationManager initializes with an
empty DID list (allows adding DIDs via the dashboard without pre-config)
- Enrich /oauth/status with profile info (avatar, displayName, handle)
from public Bluesky API
- Show account profile in System Overview and Account Connection cards
- Replace plain DID input with account search typeahead in Replicated
DIDs section (uses public app.bsky.actor.searchActorsTypeahead)
- Show avatar + display name + handle for tracked DIDs (async resolved)
- Rename header from "P2PDS Admin" to "P2PDS"
Enable p2pds to authenticate as a user via AT Protocol OAuth and publish
records (peer info, replication offers) to their real PDS instead of only
the local SQLite repo. This unblocks real-world peer discovery.
- Add @atproto/oauth-client-node and @atproto/api dependencies
- SQLite-backed OAuth state/session stores (src/oauth/stores.ts)
- Loopback OAuth client setup per AT Protocol spec (src/oauth/client.ts)
- PdsClient implementing RecordWriter for remote XRPC calls (src/oauth/pds-client.ts)
- Browser login flow routes: /oauth/login, /oauth/callback, /oauth/status
- Extract RecordWriter interface from OfferManager (both RepoManager and
PdsClient satisfy it, no changes to method bodies)
- Wire OAuth through server.ts → ReplicationManager → OfferManager
- Add Account Connection card to admin dashboard
- Opt-in via OAUTH_ENABLED=true (existing deployments unchanged)
Remove the did:web node identity layer — p2pds is now infrastructure
that acts on behalf of authenticated atproto users, not an entity with
its own identity. Records (peer, offer) publish to the user's own repo.
- Delete src/node-identity.ts and its tests
- Remove NODE_DID, NODE_MANAGERS, NODE_NAME from Config
- Simplify server.ts (single repo), index.ts (no NodeIdentityOpts),
auth.ts (config.DID only), replication-manager.ts (no peer publishing)
- Update all xrpc handlers and 13 test files
- Add lexicon JSON schemas for org.p2pds.peer and org.p2pds.replication.offer
- Add src/lexicons.ts loader/validator, wire into RecordValidator
- Add Tauri v2 desktop app skeleton at apps/desktop/ with sidecar pattern
- Add npm workspaces config
Defense-in-depth rate limiting with zero-config defaults:
- Sliding window rate limiter (src/rate-limiter.ts) with per-pool isolation
- HTTP middleware: per-route rate limits (meta/sync/session/read/write/challenge/admin)
- Body size limits: 1MB JSON, 64KB challenge, 60MB blob, 100MB CAR
- Gossipsub: 8KB message size cap, per-topic rate limiting (60/min commits, 10/min identity)
- libp2p: stream size caps (64KB inbound challenges, 1MB responses)
- libp2p: connection manager limits (100 max, 10 pending, 5/s inbound threshold)
- WebSocket firehose: per-IP connection limits (default 3)
- Challenge validation: targetDid check, path/CID count caps, expiration rejection
- All configurable via env vars, disabled by default in tests
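A sliding-window limiter of the kind described can be sketched in a few lines; this is a simplified model of src/rate-limiter.ts, with per-pool isolation reduced to one hit list per key.

```typescript
// Minimal sliding-window rate limiter. Names are illustrative.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(
    private readonly limit: number,
    private readonly windowMs: number,
  ) {}

  // Returns true if the request is allowed, false if rate-limited.
  allow(key: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

Per-route limits then become one limiter instance per pool (meta, sync, session, and so on), keyed by client IP.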
Nodes now have their own did:web:{hostname} identity for coordination
(consent, discovery, policy), independent of any social account they
may host. Social account fields (DID, HANDLE, SIGNING_KEY) are now
optional — nodes can run as replication-only without a social account.
- New node-identity module: keypair lifecycle, did:web derivation, DID
document generation
- Config: social fields optional, added NODE_DID, NODE_MANAGERS
- Server: auto-generates node keypair, separate node-repo.db, optional
social account repo
- Auth: accepts node DID + social DID + manager DIDs
- ReplicationManager: uses nodeDid for gossipsub, offers, challenges
- XRPC handlers: fall back to NODE_DID when social DID not configured
- All 384 tests pass, zero TypeScript errors
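The did:web derivation is mechanical: per the did:web method spec, the hostname is the method-specific identifier, with a port's ":" percent-encoded as %3A. A sketch (function names are illustrative):

```typescript
// Derive a did:web identifier from a hostname, e.g. for NODE_DID.
function deriveDidWeb(hostname: string): string {
  return `did:web:${hostname.replace(":", "%3A")}`;
}

// The DID document URL for a bare-domain did:web.
function didWebDocumentUrl(hostname: string): string {
  return `https://${hostname}/.well-known/did.json`;
}
```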
- sync_history table tracks every sync event with source type, block/blob
counts, byte sizes, duration, and status
- size_bytes column on replication_blocks and replication_blobs for
accurate storage accounting
- Aggregate metrics API: total blocks/blobs/records/bytes held, syncs,
24h transfer volume
- Per-DID metrics: record count, bytes held, recent sync history
- New getSyncHistory endpoint for global sync event log
- Instrumented syncDid(), applyFirehoseBlocks(), syncBlobs() to record
events with full metrics
- Redesigned dashboard: metrics summary grid, enriched DID table with
expandable per-DID details, sync history card, source type badges,
formatBytes/timeAgo helpers
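One plausible shape for the formatBytes helper mentioned above (the dashboard's actual rounding rules may differ):

```typescript
// Human-readable byte counts for the metrics grid and DID table.
function formatBytes(n: number): string {
  const units = ["B", "KB", "MB", "GB", "TB"];
  let value = n;
  let i = 0;
  while (value >= 1024 && i < units.length - 1) {
    value /= 1024;
    i++;
  }
  return i === 0 ? `${value} B` : `${value.toFixed(1)} ${units[i]}`;
}
```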
A self-contained HTML page at /xrpc/org.p2pds.admin.dashboard fetches
the existing admin JSON APIs and renders system overview,
replication table, network, policies, and verification sections
with 30s auto-refresh. No auth is required on the dashboard route;
the token from config is embedded for client-side API calls.
Four new authenticated endpoints expose existing internal state:
- getOverview: aggregated system status (network, replication, firehose, policy, verification)
- getDidStatus: per-DID detail (sync state, block/blob counts, peer endpoints, effective policy)
- getNetworkStatus: P2P connectivity (peerId, multiaddrs, connections)
- getPolicies: policy engine configuration and explicit DID lists
Peer discovery now persists multiaddrs alongside peerId in the
replication_state table. The failover challenge transport's
resolveEndpoint hook queries SyncStorage to map PDS HTTP endpoints
to libp2p multiaddrs, enabling direct P2P challenges before falling
back to HTTP.
- Add peer_multiaddrs column to replication_state (with migration)
- Update updatePeerInfo/clearPeerInfo to handle multiaddrs
- Add getMultiaddrForPdsEndpoint() lookup (prefers /p2p/ addrs)
- Pass multiaddrs through PeerDiscovery → ReplicationManager → storage
- Wire resolveEndpoint closure in server.ts challenge transport setup
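The "prefers /p2p/ addrs" selection reduces to a small helper; a sketch of the idea behind getMultiaddrForPdsEndpoint() (the real lookup also queries SyncStorage):

```typescript
// Pick the best multiaddr for dialing: prefer addresses that already
// embed the peer ID (/p2p/...), since they can be dialed directly.
function pickDialAddr(multiaddrs: string[]): string | undefined {
  return multiaddrs.find((a) => a.includes("/p2p/")) ?? multiaddrs[0];
}
```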
Nodes publish lightweight CBOR notifications over gossipsub when commits
occur (local or replicated). Subscribed peers compare rev and trigger
syncDid() if newer, enabling low-latency P2P sync without polling.
- Add @libp2p/gossipsub v15 (compatible with libp2p v3/multiaddr v13)
- Extend NetworkService with publish/subscribe/handler for commit topics
- Publish notifications in RepoManager.sequenceAndBroadcast() and
ReplicationManager after sync
- Subscribe to per-DID topics in ReplicationManager.init() with dedup
- Update libp2p-transport.ts to v3 Stream API (send+close vs sink)
- Add E2E gossipsub test, encoding tests, and integration tests
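The "compare rev and trigger syncDid() if newer" check is a one-liner because AT Protocol revs are TIDs, which sort lexicographically by design. A sketch:

```typescript
// Decide whether a gossip commit notification warrants a sync.
// TIDs are base32-sortable timestamps, so plain string comparison
// orders them correctly.
function shouldSync(localRev: string | null, remoteRev: string): boolean {
  if (localRev === null) return true; // nothing synced yet
  return remoteRev > localRev;        // only sync when the peer is ahead
}
```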
Extend getRepo and getBlocks XRPC endpoints to serve replicated DID
data from the BlockStore, enabling other P2PDS nodes to sync repos
from peers instead of only from the source PDS.
Add peer_endpoints table to track which peers have which DIDs,
populated during manifest discovery. When syncDid() fails to reach
the source PDS, it now falls back to fetching from known peer
endpoints sorted by freshest data.
FailoverChallengeTransport wraps a primary and fallback ChallengeTransport.
On sendChallenge, tries the primary; on failure, invokes onFallback callback
and retries via fallback. Optional resolveEndpoint hook maps HTTP URLs to
multiaddrs for the libp2p transport. server.ts now uses failover when
libp2p is available.
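The failover pattern can be sketched as below; the interface is an assumed minimal subset of the real ChallengeTransport, and resolveEndpoint only applies to the primary (libp2p) leg, as described above.

```typescript
// Assumed minimal transport interface.
interface ChallengeTransport {
  sendChallenge(endpoint: string, payload: unknown): Promise<unknown>;
}

// Try the primary; on any failure, report via onFallback and retry on
// the fallback. resolveEndpoint maps HTTP URLs to multiaddrs for the
// primary (libp2p) transport only.
class FailoverChallengeTransportSketch implements ChallengeTransport {
  constructor(
    private readonly primary: ChallengeTransport,
    private readonly fallback: ChallengeTransport,
    private readonly opts: {
      onFallback?: (err: unknown) => void;
      resolveEndpoint?: (endpoint: string) => string | undefined;
    } = {},
  ) {}

  async sendChallenge(endpoint: string, payload: unknown): Promise<unknown> {
    const resolved = this.opts.resolveEndpoint?.(endpoint) ?? endpoint;
    try {
      return await this.primary.sendChallenge(resolved, payload);
    } catch (err) {
      this.opts.onFallback?.(err);
      return this.fallback.sendChallenge(endpoint, payload);
    }
  }
}
```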
After syncing repo blocks, walk records to extract blob CIDs, fetch from
source PDS via com.atproto.sync.getBlob, CID-verify, store in BlockStore
(IPFS), and track in replication_blobs table. Firehose path also triggers
blob fetching for create/update ops. getBlob endpoint extended to serve
replicated blobs from BlockStore.
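The CID-verify step boils down to hashing the fetched bytes and comparing against the expected digest. A simplified sketch: the real code compares a multihash against the blob's CID via a CID library; here it is reduced to a raw sha-256 hex comparison.

```typescript
import { createHash } from "node:crypto";

// Integrity check for a fetched blob: reject bytes whose sha-256 digest
// does not match the expected value derived from the blob's CID.
function blobMatchesDigest(bytes: Uint8Array, expectedSha256Hex: string): boolean {
  const actual = createHash("sha256").update(bytes).digest("hex");
  return actual === expectedSha256Hex;
}
```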
Peers publish org.p2pds.replication.offer records declaring willingness
to replicate specific DIDs. Mutual offers are detected during sync and
automatically converted into PolicyEngine policies, driving the existing
replication and challenge machinery. Revoking an offer removes the
derived policy on the next discovery cycle.
Enables challenge-response verification over direct P2P connections
without requiring public HTTP endpoints. Adds /p2pds/challenge/1.0.0
protocol using half-close request-response streams, with fallback to
HTTP transport. New getMstProof XRPC endpoint lets light clients
request and verify MST proofs without downloading full repos.
Implements a transport-agnostic challenge-response system for proving
peers still hold specific records. Three challenge types (mst-proof,
block-sample, combined) with deterministic generation, SQLite-backed
history/reliability tracking, and policy-driven scheduling.
Replaces L2/L3 verification stubs with real challenge-based verification
when a ChallengeTransport is available. Adds HTTP transport adapter,
three new XRPC routes, and record path tracking for challenge generation.
Firehose incremental pipeline: handleFirehoseCommit() now applies blocks
directly from the firehose event via applyFirehoseBlocks(), skipping the
HTTP round-trip to the source PDS. Falls back to full syncDid() for edge
cases (tooBig, rebase, sequence gaps, CAR parse failures). 14 new tests.
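The fall-back decision can be sketched as a predicate over the event; the event shape is an assumed subset of the real firehose commit frame (CAR parse failures are caught separately, at decode time).

```typescript
// Assumed subset of a firehose commit event.
interface CommitEvt {
  tooBig: boolean;
  rebase: boolean;
  seq: number;
}

// Apply blocks directly from the event unless it is one of the edge
// cases that require a full syncDid() round-trip to the PDS.
function needsFullResync(evt: CommitEvt, lastSeq: number | null): boolean {
  if (evt.tooBig || evt.rebase) return true;
  if (lastSeq !== null && evt.seq !== lastSeq + 1) return true; // sequence gap
  return false;
}
```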
Policy engine integration: ReplicationManager optionally accepts a
PolicyEngine to drive which DIDs get replicated, at what frequency, and
in what priority order. Per-DID sync intervals replace the fixed 5-min
global timer. Server startup loads policies from POLICY_FILE and wires
them through. Fully backward compatible when no engine is provided.
20 new tests.
Also fix flaky temp dir cleanup in replication tests (ENOTEMPTY race).
Four major features implemented in parallel:
- Real-time firehose sync: FirehoseSubscription subscribes to
com.atproto.sync.subscribeRepos with CBOR frame parsing, DID filtering,
cursor persistence in SQLite, and exponential backoff reconnection.
Integrated with ReplicationManager and configurable via FIREHOSE_URL
and FIREHOSE_ENABLED env vars.
- L3 MST path proof verification: generateMstProof() extracts minimal
Merkle Search Tree node path from root to leaf; verifyMstProof()
validates purely from proof bytes + trusted commit CID. Supports both
existence and non-existence proofs.
- Policy engine MVP: Declarative, deterministic, transport-agnostic
policy system with PolicyEngine class, three presets (mutualAid, saas,
groupGovernance), and config integration via POLICY_FILE env var.
- E2E networking integration tests: Two real Helia nodes with TCP on
localhost verify bitswap block exchange, bidirectional transfer, and
peer discovery.
Replicated repos' blocks were stored in IPFS but weren't queryable via
standard atproto endpoints. This adds an IpfsReadableBlockstore adapter and
ReplicatedRepoReader service that loads ReadableRepo instances on demand,
making getRecord, listRecords, and describeRepo work for replicated DIDs.
Also fixes rev extraction: syncDid() now decodes the commit block CBOR to
get the actual TID rev (instead of storing the root CID as rev), and stores
both root_cid and rev separately in replication_state.
Implement the core replication loop: announce, discover, replicate. Nodes
publish AT Protocol records declaring their IPFS PeerID (org.p2pds.peer)
and which DIDs they replicate (org.p2pds.manifest). Other nodes discover
this info, fetch repos via CAR export, store blocks in IPFS, and verify
availability.
New modules: types, sync-storage, repo-fetcher, peer-discovery,
verification, and replication-manager orchestrator. Adds REPLICATE_DIDS
config, getMultiaddrs() to IpfsService, replication status/syncNow XRPC
endpoints, and 27 tests covering all components plus integration CAR
roundtrip.
Blocks and blobs are now stored in a Helia-managed FsBlockstore and
announced on the DHT. A new /.well-known/rasl/:cid endpoint serves
content-addressed blocks over HTTP with immutable caching. IPFS
operations are fire-and-forget and never block the commit path.
Existing blocks are backfilled on startup. Controlled via
IPFS_ENABLED and IPFS_NETWORKING env vars.
Port Cirrus (Cloudflare Workers PDS) to a standalone Node.js server:
- Hono HTTP framework via @hono/node-server
- better-sqlite3 replacing Cloudflare SqlStorage
- Filesystem blob storage replacing R2
- ws library replacing hibernatable WebSockets
- In-memory DID cache replacing Workers Cache API
- Bearer token + JWT session auth (OAuth/passkeys deferred)
Verified: health check, DID document, session management, record
CRUD, CAR export, and WebSocket firehose all functional.