# didplcbft
A very experimental PLC implementation, for did:plc credentials, which uses BFT consensus to decentralize the hosting/maintenance of the directory. ⚠️ This is not a cryptocurrency system, nor is it intended to become one!
The release of this to the public is being rushed: in light of recent developments, I want to bring it into the open ASAP to showcase why sequence IDs make my life a bit more difficult, should this implementation ever aim for 1:1 response parity with the official plc.directory (I don't know if I want that yet). A very unorganized readme follows!
This is definitely not ready for any type of production use, even if it already implements the entirety of the PLC HTTP API (with some caveats related to the export endpoint).
The general theme behind this project is,
> Man, plc.directory seems so centralized, particularly considering most other atproto components are, at least in theory, decentralizable! There's this idea of forming an independent organization to maintain it, but what if we just skipped that entirely and made the ownership and operation of the entire system even more murky, while giving those nerds that run their own PDS/relay/appview/etc. another thing to run on their servers/homelabs? All while giving such a nerd - gbl08ma - an excuse to learn more about the tech that powers all the many blockchains that use Tendermint consensus!
Hopefully the tone of that communicates that this initiative isn't meant to be taken too seriously, at least yet.
The idea for how this would work, on an initial phase, would be to continue using plc.directory as an authoritative source while mirroring the operations in the blockchain. Validator nodes would gradually import operations from plc.directory ("authoritative import"). Eventually they'd catch up to the present moment, at which point this import would keep going as a way to bring in all the operations not submitted to didplcbft - because I recognize that this would definitely not become the official PLC implementation any time soon, and new operations would continue to be submitted to plc.directory by the general population. Nodes would also submit any operations observed on the blockchain (i.e. operations submitted directly to didplcbft) to plc.directory. Conflict/fork resolution within each DID history would happen by deferring to whatever plc.directory says the truth is.
Hypothetically, at some point in the future, and should a hypothetical network of didplcbft nodes reach a consensus on that, such a network could act independently (for this, it would just need to cease the ongoing import of operations and stop publishing operations to plc.directory; in fact, the current code is already capable of acting like this - it's the syncing with the official centralized plc.directory that's trickier to implement).
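The conflict-resolution rule described above (defer to whatever plc.directory says, while keeping operations that were only submitted to didplcbft) can be sketched as a pure function. This is a hypothetical simplification, not the project's actual reconciliation code: the `op` type and its fields are illustrative stand-ins for real PLC log entries.

```go
package main

import "fmt"

// op is a hypothetical, simplified view of a PLC operation log entry.
type op struct {
	CID       string
	CreatedAt string // ISO-8601 timestamp, as in plc.directory's export
}

// reconcile returns the merged history for one DID, preferring the order
// reported by plc.directory whenever the two histories diverge (the
// conflict-resolution rule described above). Locally-known operations that
// the authority has not seen yet are kept, appended after the authoritative
// prefix, since nodes would also submit them upstream.
func reconcile(authoritative, local []op) []op {
	seen := make(map[string]bool, len(authoritative))
	out := make([]op, 0, len(authoritative)+len(local))
	for _, o := range authoritative {
		seen[o.CID] = true
		out = append(out, o)
	}
	for _, o := range local {
		if !seen[o.CID] { // submitted directly to didplcbft, not yet mirrored
			out = append(out, o)
		}
	}
	return out
}

func main() {
	auth := []op{{CID: "a1"}, {CID: "a2"}}
	local := []op{{CID: "a1"}, {CID: "b1"}} // "b1" only exists locally
	for _, o := range reconcile(auth, local) {
		fmt.Println(o.CID)
	}
}
```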
## What's implemented
- The entirety of the PLC HTTP API,
- Both read (GET) and write (POST) operations
- ...but export only works with timestamps (sequence-based export and websocket-based export are not implemented)
- Validation of operations (hopefully it's foolproof enough - needs revising, strengthening... and nullification might not be as simple as I made it out to be?)
- Snapshot-based sync for quickly bringing replicas up to date
- An initial, not very well tested, version of the "authoritative import" that gradually brings in operations from the official plc.directory
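The timestamp-based export mentioned above works the way plc.directory's does: the caller passes the `createdAt` of the last entry it received as the `after` query parameter on the next request, until a short page signals it has caught up. A minimal sketch of that cursor logic (field names are illustrative; real entries are JSON log lines):

```go
package main

import "fmt"

// advanceCursor computes the value to pass as ?after= on the next export
// request, and whether iteration should continue. A page shorter than the
// requested limit means we've reached the end of the log for now.
func advanceCursor(createdAts []string, pageLimit int) (after string, more bool) {
	if len(createdAts) == 0 {
		return "", false
	}
	return createdAts[len(createdAts)-1], len(createdAts) == pageLimit
}

func main() {
	after, more := advanceCursor([]string{"2023-01-01T00:00:00Z", "2023-01-02T00:00:00Z"}, 2)
	fmt.Println(after, more)
}
```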
## What's yet to be implemented
- Actually syncing with the authoritative source, plc.directory (in progress - fetching from the official directory is mostly implemented as indicated above; submitting blockchain happenings to the official directory is yet to be implemented)
- Spam/abuse prevention (typical blockchains do this with transaction fees and some lesser known ones - e.g. Nano - use proof of work, but this is not a cryptocurrency and PoW is a bit ewww)
- A process for nodes to become validators. Unless everyone agreed that it's best if the set of validators is centralized. I mean, it's not worse than the current state of plc.directory while still allowing people to easily have their own local synced mirror through the magic of CometBFT...
- Testing, testing, testing, validation, validation, validation...
- A live testnet on the internet, so this can go from something that runs just within the computers of the developers, and actually becomes distributed
- This might be a matter of someone just figuring out the right CometBFT configs and bringing it up?
- I am planning to take care of this after the "authoritative import" mechanism is done (its first version, anyway)
## What's not to be implemented by the original authors
- Features that would allow for didplcbft to act as some form of currency or store of value.
## How it works
- Ugh, I'll explain later. The gist of it is that all of the data is stored in an AVL+ tree, which is basically the perfect match for CometBFT. The core logic of the PLC is implemented as an ABCI 2.0 application, and the HTTP API communicates with it (turning POST requests into blockchain transactions; for GET requests it ends up just reading the tree directly - why even bother going through the ABCI Query logic for that, am I right?). The tree is "synced" across replicas by having CometBFT replay transactions, or whatever it is that it does; what matters is that in the end the root node of the tree has the same hash on all replicas, which means everything was replicated properly (CometBFT takes care of ensuring this is the case, as long as our application is deterministic and stores all relevant state in the tree).
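The "same root hash on all replicas" property boils down to hashing the state deterministically, regardless of the order in which entries happened to be inserted. Here's a toy stand-in for the AVL+ tree's root hash (the real tree gives the same guarantee plus Merkle proofs; this flat sorted-keys version is just for illustration):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// rootHash computes a deterministic digest over the whole key/value state.
// Keys are sorted first so that Go's randomized map iteration order cannot
// leak into the hash; length-delimiting via a zero byte keeps distinct
// states from colliding trivially.
func rootHash(state map[string]string) string {
	keys := make([]string, 0, len(state))
	for k := range state {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(state[k]))
		h.Write([]byte{0})
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	a := map[string]string{"did:plc:aaa": "op1", "did:plc:bbb": "op2"}
	b := map[string]string{"did:plc:bbb": "op2", "did:plc:aaa": "op1"}
	fmt.Println(rootHash(a) == rootHash(b)) // same state => same root
}
```

If two replicas applied the same transactions, their root hashes match; a mismatch means a replica diverged (e.g. because the application wasn't deterministic).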
## Some random notes and comments
- Yes, this project is bound to elicit knee-jerk reactions, understandably. I was planning to make it public only once it was more complete, but I figured the public discussion could benefit from the awareness that this exists, even if it isn't necessarily going somewhere. And with the official implementation introducing global sequential IDs, I figured time is of the essence before reimplementing things in a distributed fashion becomes even more difficult.
- To be clear, the sequence ID only poses a challenge if we wanted the different blockchain nodes to present the same sequence IDs. And honestly, even if we wanted that, it's still perfectly doable, but it'd require more complexity in the "authoritative import" reconciliation mechanism (we'd potentially need to renumber operations that already exist within the didplcbft tree).
- Yes, this is a blockchain, but only because it was relatively easy for me to bring in CometBFT as a means of making things happen in sync across different machines (even if this is my first time using it, which is fantastic, I'm sure there are 300 pitfalls I'm not aware of).
- If I had come across a different P2P framework of sorts with equally nice sync primitives and resistance to byzantine faults that was not a blockchain and which supported the languages I like to use, I'd have used it.
- Because each DID history is its own mini-blockchain, I'm convinced that all nodes could actively throw away older blockchain history (CometBFT supports this) and from then on, The World could rely exclusively on snapshot sync (which is already implemented). Of course, there are trust implications with this approach (but are they really worse than trusting plc.directory as it is right now?).
- For "a blockchain approach," the better defined operation validation is, the better (in the past, the plc.directory was not very good at this, and they had to go and retroactively mess with existing data...). But the fun thing about blockchain is that it works as long as some network of nodes agrees on what the truth is. So of course history can be re-written on a blockchain too, it just takes slightly more effort.
- I am convinced the current storage implementation can fit the entire state of the plc.directory in under 100 GB. That isn't great, but it isn't terrible either, considering it's a blockchain and all. I don't know this for sure, though; it's an extrapolation from tests with roughly 10 million operations stored (operations taken from plc.directory, so real-world data).
- There are plenty of unstructured debug prints throughout the code, and in general the code quality is meh... like I said, this all is very experimental.
- I have some thoughts about using proof-of-storage and some kind of "leaderboard system" (not a currency) in order to elect/demote validators
- Another option would be to tie it to some type of social metrics (did:plc is mostly used by atproto stuff after all right?) but I can't think of anything that can't be manipulated
- CometBFT coupled with the AVL+ tree implementation I'm using has some nice properties for proofs of storage (and proofs that one doesn't have something in storage) which could make it easier to prove that nodes are not omitting newer operations of a DID's history on purpose (which is really the main attack I can see a malicious PLC implementation performing, as far as the contract with its consumers is concerned). But it's very early in the life of this project, and I haven't thought sufficiently about this.
- Spam prevention will be a difficult problem (it already is in the official plc.directory; of course it'd be even more difficult to solve in a decentralized system)
- But see also the above notes on exclusively using snapshot sync, and on consensus... the truth is whatever the nodes agree it is, so spam could conceivably be removed even after it is inserted, etc. Just because it uses a blockchain it doesn't mean that it must store everything forever, history can be forgotten if everyone agrees on a snapshot, etc. A distributed system just means that rewriting history takes more effort compared to simply running some Postgres migrations... this would be the case even without "blockchain" in the mix.
- This project would kind of reduce the need for something like plcbundle (even if it could learn something from it, when it comes to importing operations from the authoritative plc.directory)
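The "each DID history is its own mini-blockchain" point from the notes above comes down to every operation carrying a pointer to its predecessor. A simplified sketch of verifying such a chain (real PLC operations are signed DAG-CBOR documents whose `prev` field holds a CID; a bare SHA-256 hex digest stands in for that here, and signatures are ignored entirely):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// chainOp is a drastically simplified PLC operation: Prev holds the digest
// of the previous operation's bytes, or "" for the genesis operation.
type chainOp struct {
	Body []byte
	Prev string
}

func digest(b []byte) string {
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

// verifyChain checks that each operation points at its predecessor, which
// is what makes a DID history tamper-evident: changing any past operation
// breaks every link after it.
func verifyChain(ops []chainOp) bool {
	prev := ""
	for _, o := range ops {
		if o.Prev != prev {
			return false
		}
		prev = digest(o.Body)
	}
	return true
}

func main() {
	genesis := chainOp{Body: []byte("create")}
	update := chainOp{Body: []byte("rotate"), Prev: digest(genesis.Body)}
	fmt.Println(verifyChain([]chainOp{genesis, update})) // well-formed chain
	fmt.Println(verifyChain([]chainOp{update, genesis})) // broken links
}
```

This self-verifiability is why older global blockchain history could be thrown away: a snapshot plus each DID's internal chain is enough to check integrity going forward.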
## How to run this
I would rather not reveal it right now, but if you insist...
Note that this definitely doesn't currently do enough things to be a viable PLC mirror, let alone do them well. This is at a stage where it's really only useful to those wanting to work on it.
Let me just point out that there are some scripts in this repo (mostly LLM generated because I hate shell scripting).
You need to install Go, in case the go.mod and multiple *.go files didn't make it obvious.
To run a single node, you want to use startfresh.sh. To run multiple nodes, there's startfresh-testnet.sh. Note that with the generated config files, only the first node will serve the PLC API.
The PLC API server listens on 127.0.0.1:28080 by default; this and other fun facts can be learned from reading config.go. The server listens on other ports too: one exposes the CometBFT RPC API, and there are also the P2P ports for communication between the nodes.
To easily import operations you can play around with the mess that's within importer/importer_test.go - note that this imports operations by creating blockchain transactions directly, it doesn't use the PLC API. And despite being called "importer" this is not how I was meaning to do the "authoritative import" mechanism I mentioned earlier - this is really just for bringing in data for testing.
## Disclaimer
If you happen to know who my employer is - this is not endorsed or condoned by them; this wasn't developed on company time, nor using company resources; no, I don't think this constitutes a conflict of interest; no, it doesn't make use of any trade secrets, proprietary, or confidential information, nor does it make use of any skills specific to my role (as much as it may look like it, from an uninformed point of view). If you don't know, no, I won't tell you who my employer is.