feature: dns, pds, and plc configuration and documentation

+2
.gitignore
# PLC
/plc/did-method-plc
+24 -1
README.md
# localdev

Code and configuration to create a network-local development environment. It uses Tailscale to provide shared, isolated infrastructure for individuals and teams.

## Configuration

## Operation

1. Configure and start the PLC service. See plc/README.md

2. Configure and start the PDS service. See pds/README.md

3. Configure and start the DNS service. See dns/README.md

4. Configure split-DNS in Tailscale.

   1. Visit https://tailscale.com/
   2. Go to the Machines tab and get the internal IP address of `didadmin`
   3. Go to the DNS configuration page
   4. Add a nameserver and select "Custom"
   5. Enter the IP address of `didadmin`, select "Restrict to domain (Split DNS)", and set the domain to "pyroclastic.cloud"

## Maintenance

Tailscale SSL certificates need to be periodically regenerated. Run the respective `docker compose exec tailscale /bin/sh -c "tailscale cert ..."` command to generate a new certificate, then stop and start the nginx proxy so it picks up the new certificate.
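The maintenance note above is the main recurring task. As an optional helper sketch (not part of this change), certificate expiry for each service can be checked from any machine on the tailnet; the `*.internal.ts.net` hostnames are the placeholder names used throughout these docs and should be replaced with your own.

```go
// certcheck: print the TLS certificate expiry for each local service so you know
// when to re-run `tailscale cert` and restart the nginx proxies.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Placeholder hostnames; substitute your tailnet name.
	hosts := []string{
		"plc.internal.ts.net:443",
		"pds.internal.ts.net:443",
		"didadmin.internal.ts.net:443",
	}
	for _, host := range hosts {
		conn, err := tls.Dial("tcp", host, &tls.Config{})
		if err != nil {
			fmt.Printf("%s: %v\n", host, err)
			continue
		}
		cert := conn.ConnectionState().PeerCertificates[0]
		fmt.Printf("%s: certificate expires %s\n", host, cert.NotAfter.Format("2006-01-02"))
		conn.Close()
	}
}
```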
+14
dns/Corefile.example
. {
    log
    errors

    reload 10s

    records pyroclastic.cloud {
        @ 60 IN TXT "TEST"
        _atproto.test1734305850 60 IN TXT "did=did:plc:p75ngbyvabgetgoy52aswele"
        _atproto.test1734440080 60 IN TXT "did=did:plc:k5d6h7nlhbh5tuxrlxczgal3"
        _atproto.test1734440644 60 IN TXT "did=did:plc:x45wmz7vktj2aqcqwj7yakxs"
    }

}
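The `_atproto` TXT records above are exactly what atproto handle resolution queries. As a quick, optional check (not part of this change), a Go one-off like the following confirms that a handle from the example zone resolves through the split-DNS setup; the handle and DID are taken from `Corefile.example` above.

```go
// Resolve an example handle and print its TXT record. With Tailscale split-DNS
// configured, queries for pyroclastic.cloud names are answered by this CoreDNS instance.
package main

import (
	"fmt"
	"net"
)

func main() {
	// One of the example handles from Corefile.example.
	records, err := net.LookupTXT("_atproto.test1734305850.pyroclastic.cloud")
	if err != nil {
		panic(err)
	}
	for _, txt := range records {
		fmt.Println(txt) // expect: did=did:plc:p75ngbyvabgetgoy52aswele
	}
}
```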
+50
dns/README.md
# DNS

The DNS component does two things:

1. It uses CoreDNS as a split-DNS nameserver for resolving local handles.
2. It provides a small HTTP application for generating new handles for testing purposes.

## Configuration

This service makes API calls to the local PDS and also lives on a Tailscale network. Make note of any `PLACEHOLDER` and `OPTIONAL` strings in the following files:

In `./docker-compose.yml`:

* Set the `PDS_ADMIN_PASSWORD` environment variable to your PDS admin password.
* Set `PDS_HOSTNAME` to the internal hostname of your PDS (e.g. `pds.sneaky-fox.ts.net`).
* Optionally, change the `DOMAIN` value if you are not using the `pyroclastic.cloud` domain (it's fine to leave this as-is).

## Operation

1. First, build the `didadmin` tool.

   `docker build -f ./didadmin/Dockerfile -t didadmin ./didadmin/`

2. Bring networking up.

   `docker compose up tailscale -d`

   If you are using dynamic node registration, you'll need to view the logs and click on the link.

   `docker compose logs tailscale`

3. Generate an SSL certificate for the node. Be sure to change `internal.ts.net` to whatever your Tailnet name is (e.g. `sneaky-fox.ts.net`).

   `docker compose exec tailscale /bin/sh -c "tailscale cert --cert-file /mnt/tls/cert.pem --key-file /mnt/tls/cert.key didadmin.internal.ts.net"`

4. Bring didadmin up.

   `docker compose up app -d`

   When this first starts, it'll create the `/etc/coredns/database.db` and `/etc/coredns/Corefile` files inside the container.

5. Bring coredns and the proxy up.

   `docker compose up -d`

6. Ensure the PLC and PDS services are running and split-DNS is configured before using.

## Usage

In a browser, visit https://didadmin.sneaky-fox.ts.net/ and use the form to create accounts on the local PDS.
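The form described under Usage simply POSTs a `handle` field to `/`, which is the field the didadmin index handler reads. As an optional sketch (not part of this change), the same account creation can be scripted; `didadmin.sneaky-fox.ts.net` is the example tailnet hostname used above and `my-test-handle` is a made-up handle.

```go
// Create a test account without the browser by posting the same form field the
// didadmin index handler reads ("handle").
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	resp, err := http.PostForm("https://didadmin.sneaky-fox.ts.net/", url.Values{
		"handle": {"my-test-handle"},
	})
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// On success the response names the new DID and full handle.
	fmt.Println(string(body))
}
```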
+15
dns/didadmin/Dockerfile
FROM golang:alpine3.21 AS build
ENV CGO_ENABLED=1
RUN apk add --no-cache gcc musl-dev
WORKDIR /workspace
COPY go.mod /workspace/
COPY go.sum /workspace/
RUN go mod download
COPY main.go /workspace/
ENV GOCACHE=/root/.cache/go-build
RUN --mount=type=cache,target="/root/.cache/go-build" go install -ldflags='-s -w -extldflags "-static"' ./main.go

FROM scratch
COPY --from=build /go/bin/main /usr/local/bin/didadmin
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
ENTRYPOINT [ "/usr/local/bin/didadmin" ]
+27
dns/didadmin/go.mod
module github.com/astrenoxcoop/magicdev-admin

go 1.23.5

require (
	github.com/coreos/go-semver v0.3.0 // indirect
	github.com/coreos/go-systemd/v22 v22.3.2 // indirect
	github.com/dustinkirkland/golang-petname v0.0.0-20240428194347-eebcea082ee0 // indirect
	github.com/gogo/protobuf v1.3.2 // indirect
	github.com/golang/protobuf v1.5.4 // indirect
	github.com/mattn/go-sqlite3 v1.14.24 // indirect
	github.com/sethvargo/go-envconfig v1.1.0 // indirect
	go.etcd.io/etcd/api/v3 v3.5.17 // indirect
	go.etcd.io/etcd/client/pkg/v3 v3.5.17 // indirect
	go.etcd.io/etcd/client/v3 v3.5.17 // indirect
	go.uber.org/atomic v1.7.0 // indirect
	go.uber.org/multierr v1.6.0 // indirect
	go.uber.org/zap v1.17.0 // indirect
	golang.org/x/net v0.23.0 // indirect
	golang.org/x/sys v0.18.0 // indirect
	golang.org/x/text v0.14.0 // indirect
	google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d // indirect
	google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d // indirect
	google.golang.org/grpc v1.59.0 // indirect
	google.golang.org/protobuf v1.33.0 // indirect
)
+83
dns/didadmin/go.sum
··· 1 + github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM= 2 + github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= 3 + github.com/coreos/go-systemd/v22 v22.3.2 h1:D9/bQk5vlXQFZ6Kwuu6zaiXJ9oTPe68++AzAJc1DzSI= 4 + github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc= 5 + github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 6 + github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 7 + github.com/dustinkirkland/golang-petname v0.0.0-20240428194347-eebcea082ee0 h1:aYo8nnk3ojoQkP5iErif5Xxv0Mo0Ga/FR5+ffl/7+Nk= 8 + github.com/dustinkirkland/golang-petname v0.0.0-20240428194347-eebcea082ee0/go.mod h1:8AuBTZBRSFqEYBPYULd+NN474/zZBLP+6WeT5S9xlAc= 9 + github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= 10 + github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= 11 + github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= 12 + github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= 13 + github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= 14 + github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= 15 + github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= 16 + github.com/mattn/go-sqlite3 v1.14.24 h1:tpSp2G2KyMnnQu99ngJ47EIkWVmliIizyZBfPrBWDRM= 17 + github.com/mattn/go-sqlite3 v1.14.24/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y= 18 + github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= 19 + github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= 20 + github.com/sethvargo/go-envconfig v1.1.0 h1:cWZiJxeTm7AlCvzGXrEXaSTCNgip5oJepekh/BOQuog= 21 + github.com/sethvargo/go-envconfig v1.1.0/go.mod h1:JLd0KFWQYzyENqnEPWWZ49i4vzZo/6nRidxI8YvGiHw= 22 + github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= 23 + github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= 24 + github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= 25 + github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= 26 + github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= 27 + go.etcd.io/etcd/api/v3 v3.5.17 h1:cQB8eb8bxwuxOilBpMJAEo8fAONyrdXTHUNcMd8yT1w= 28 + go.etcd.io/etcd/api/v3 v3.5.17/go.mod h1:d1hvkRuXkts6PmaYk2Vrgqbv7H4ADfAKhyJqHNLJCB4= 29 + go.etcd.io/etcd/client/pkg/v3 v3.5.17 h1:XxnDXAWq2pnxqx76ljWwiQ9jylbpC4rvkAeRVOUKKVw= 30 + go.etcd.io/etcd/client/pkg/v3 v3.5.17/go.mod h1:4DqK1TKacp/86nJk4FLQqo6Mn2vvQFBmruW3pP14H/w= 31 + go.etcd.io/etcd/client/v3 v3.5.17 h1:o48sINNeWz5+pjy/Z0+HKpj/xSnBkuVhVvXkjEXbqZY= 32 + go.etcd.io/etcd/client/v3 v3.5.17/go.mod h1:j2d4eXTHWkT2ClBgnnEPm/Wuu7jsqku41v9DZ3OtjQo= 33 + go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw= 34 + go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= 35 + go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4= 36 + go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU= 37 + go.uber.org/zap v1.17.0 h1:MTjgFu6ZLKvY6Pvaqk97GlxNBuMpV4Hy/3P6tRGlI2U= 38 + go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo= 39 + 
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 40 + golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= 41 + golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= 42 + golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= 43 + golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= 44 + golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 45 + golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 46 + golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 47 + golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= 48 + golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs= 49 + golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg= 50 + golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 51 + golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 52 + golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 53 + golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 54 + golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 55 + golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 56 + golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4= 57 + golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= 58 + golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 59 + golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= 60 + golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= 61 + golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= 62 + golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 63 + golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= 64 + golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= 65 + golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= 66 + golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 67 + golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 68 + golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 69 + golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 70 + google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d h1:VBu5YqKPv6XiJ199exd8Br+Aetz+o08F+PLMnwJQHAY= 71 + google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d/go.mod h1:yZTlhN0tQnXo3h00fuXNCxJdLdIdnVFVBaRJ5LWBbw4= 72 + google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d 
h1:DoPTO70H+bcDXcd39vOqb2viZxgqeBeSGtZ55yZU4/Q= 73 + google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d/go.mod h1:KjSP20unUpOx5kyQUFa7k4OJg0qeJ7DEZflGDu2p6Bk= 74 + google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d h1:uvYuEyMHKNt+lT4K3bN6fGswmK8qSvcreM3BwjDh+y4= 75 + google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d/go.mod h1:+Bk1OCOj40wS2hwAMA+aCW9ypzm63QTBBHp6lQ3p+9M= 76 + google.golang.org/grpc v1.59.0 h1:Z5Iec2pjwb+LEOqzpB2MR12/eKFhDPhuqW91O+4bwUk= 77 + google.golang.org/grpc v1.59.0/go.mod h1:aUPDwccQo6OTjy7Hct4AfBPD1GptF4fyUjIkQ9YtF98= 78 + google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI= 79 + google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= 80 + gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 81 + gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 82 + gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 83 + gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+265
dns/didadmin/main.go
package main

import (
    "bytes"
    "context"
    "database/sql"
    "encoding/base64"
    "encoding/json"
    "errors"
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
    "text/template"

    petname "github.com/dustinkirkland/golang-petname"
    _ "github.com/mattn/go-sqlite3"
    "github.com/sethvargo/go-envconfig"
)

type ServerConfig struct {
    Database         string `env:"DATABASE, default=./database.db"`
    PDSHostName      string `env:"PDS_HOSTNAME, default=pds.internal.ts.net"`
    PDSAdminPassword string `env:"PDS_ADMIN_PASSWORD, required"`
    Domain           string `env:"DOMAIN, default=pyroclastic.cloud"`
    Corefile         string `env:"COREFILE, default=./Corefile"`
}

type createInviteResponse struct {
    Code string `json:"code"`
}

type createAccountRequest struct {
    Email      string `json:"email"`
    Handle     string `json:"handle"`
    Password   string `json:"password"`
    InviteCode string `json:"inviteCode"`
}

type createAccountResponse struct {
    DID string `json:"did"`
}

// createInvite calls com.atproto.server.createInviteCode on the PDS using admin basic auth.
func createInvite(server, password string) (string, error) {
    url := fmt.Sprintf("https://%s/xrpc/com.atproto.server.createInviteCode", server)

    requestBody := []byte(`{"useCount":1}`)
    req, err := http.NewRequest("POST", url, bytes.NewBuffer(requestBody))
    if err != nil {
        return "", err
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Add("Authorization", "Basic "+base64.StdEncoding.EncodeToString([]byte("admin:"+password)))

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    decoder := json.NewDecoder(resp.Body)
    var createInvite createInviteResponse
    err = decoder.Decode(&createInvite)
    if err != nil {
        return "", err
    }

    return createInvite.Code, nil
}

// createAccount calls com.atproto.server.createAccount on the PDS. Test accounts are
// created with the fixed password "password".
func createAccount(server, password, inviteCode, handle, email string) (string, error) {
    url := fmt.Sprintf("https://%s/xrpc/com.atproto.server.createAccount", server)

    createAccountRequestBody := createAccountRequest{
        Email:      email,
        Handle:     handle,
        Password:   "password",
        InviteCode: inviteCode,
    }
    requestBody, err := json.Marshal(createAccountRequestBody)
    if err != nil {
        return "", err
    }

    req, err := http.NewRequest("POST", url, bytes.NewBuffer(requestBody))
    if err != nil {
        return "", err
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Add("Authorization", "Basic "+base64.StdEncoding.EncodeToString([]byte("admin:"+password)))

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    decoder := json.NewDecoder(resp.Body)
    var createdAccount createAccountResponse
    err = decoder.Decode(&createdAccount)
    if err != nil {
        return "", err
    }

    return createdAccount.DID, nil
}

type handlers struct {
    config *ServerConfig
    db     *sql.DB
}

// indexHandler serves the handle-creation form and, on submit, creates an invite code
// and account on the PDS, records the handle, and regenerates the Corefile.
func (h *handlers) indexHandler(w http.ResponseWriter, r *http.Request) {
    handle := r.PostFormValue("handle")

    if handle == "" {
        handle = petname.Generate(2, "-")
        body := fmt.Sprintf(`<html><body><form method="post" action="/"><input type="text" name="handle" value="%s" /><input type="submit" /></form></body></html>`, handle)
        io.WriteString(w, body)
        return
    }

    inviteCode, err := createInvite(h.config.PDSHostName, h.config.PDSAdminPassword)
    if err != nil {
        io.WriteString(w, fmt.Sprintf("Error: %s", err))
        return
    }

    email := fmt.Sprintf("%s@%s", handle, h.config.Domain)
    full_handle := fmt.Sprintf("%s.%s", handle, h.config.Domain)

    did, err := createAccount(h.config.PDSHostName, h.config.PDSAdminPassword, inviteCode, full_handle, email)
    if err != nil {
        io.WriteString(w, fmt.Sprintf("Error: %s", err))
        return
    }

    _, err = h.db.Exec(`INSERT INTO handles (did, handle) VALUES (?, ?)`, &did, &handle)
    if err != nil {
        io.WriteString(w, fmt.Sprintf("Error: %s", err))
        return
    }

    if err = generateCorefile(h.config, h.db); err != nil {
        io.WriteString(w, fmt.Sprintf("Error: %s", err))
        return
    }

    body := fmt.Sprintf(`<html><body><p>Created <span>%s</span> with handle <span>%s</span></p><p><a href="/">Back</a></body></html>`, did, full_handle)

    io.WriteString(w, body)
}

func (h *handlers) newHandler(w http.ResponseWriter, r *http.Request) {
    handle := r.PostFormValue("handle")
    if handle == "" {
        handle = "testy"
    }

    io.WriteString(w, fmt.Sprintf("Hello, %s!\n", handle))
}

// generateCorefile renders the CoreDNS Corefile from the handles table, emitting one
// _atproto TXT record per handle.
func generateCorefile(config *ServerConfig, db *sql.DB) error {
    corefileTemplate := `
. {
    log
    errors
    reload 10s
    records {{ .Domain }} {
        @ 60 IN TXT "TEST"
        {{ range .Records }}
        _atproto.{{ .Handle }} 60 IN TXT "did={{ .DID }}"{{ end }}
    }
}`

    corefile, err := template.New("corefile").Parse(corefileTemplate)
    if err != nil {
        return err
    }

    type corefileValueRecord struct {
        DID    string
        Handle string
    }
    type corefileValues struct {
        Domain  string
        Records []corefileValueRecord
    }

    records := make([]corefileValueRecord, 0)

    rows, err := db.Query("SELECT handle, did FROM handles")
    if err != nil {
        return err
    }
    defer rows.Close()

    for rows.Next() {
        var handle string
        var did string
        err = rows.Scan(&handle, &did)
        if err != nil {
            return err
        }
        records = append(records, corefileValueRecord{did, handle})
    }
    err = rows.Err()
    if err != nil {
        return err
    }
    data := corefileValues{
        Domain:  config.Domain,
        Records: records,
    }

    output, err := os.Create(config.Corefile)
    if err != nil {
        return err
    }
    defer output.Close()

    err = corefile.Execute(output, data)
    if err != nil {
        return err
    }
    return nil
}

func main() {
    ctx := context.Background()

    var config ServerConfig
    if err := envconfig.Process(ctx, &config); err != nil {
        log.Fatal(err)
    }

    db, err := sql.Open("sqlite3", config.Database)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    _, err = db.Exec(`CREATE TABLE IF NOT EXISTS handles (did TEXT NOT NULL PRIMARY KEY, handle TEXT NOT NULL UNIQUE)`)
    if err != nil {
        log.Fatal(err)
    }

    h := handlers{
        config: &config,
        db:     db,
    }

    if err = generateCorefile(h.config, h.db); err != nil {
        log.Fatal(err)
    }

    mux := http.NewServeMux()
    mux.HandleFunc("/", h.indexHandler)
    mux.HandleFunc("/new", h.newHandler)
    if err = http.ListenAndServe(":3333", mux); !errors.Is(err, http.ErrServerClosed) {
        log.Fatal(err)
    }

}
+43
dns/docker-compose.yml
version: '3.8'
volumes:
  dns_db:
  dns_ts:
  dns_tls:
  dns_coredns:
services:
  coredns:
    image: coredns
    network_mode: service:tailscale
    restart: on-failure
    volumes:
      - dns_coredns:/etc/coredns/
    entrypoint: /coredns
    command: -conf /etc/coredns/Corefile
  app:
    image: "didadmin"
    restart: unless-stopped
    environment:
      - PDS_ADMIN_PASSWORD=PLACEHOLDER
      - DATABASE=/etc/coredns/database.db
      - PDS_HOSTNAME=PLACEHOLDER pds.internal.ts.net
      - DOMAIN=pyroclastic.cloud
      - COREFILE=/etc/coredns/Corefile
    volumes:
      - dns_coredns:/etc/coredns/
  tailscale:
    image: tailscale/tailscale:latest
    restart: unless-stopped
    environment:
      # OPTIONAL - TS_AUTHKEY=YOUR-TS-KEY-GOES-HERE
      - TS_STATE_DIR=/var/run/tailscale
      - TS_HOSTNAME=didadmin
    volumes:
      - dns_tls:/mnt/tls
      - dns_ts:/var/run/tailscale
  nginx:
    image: nginx
    restart: unless-stopped
    network_mode: service:tailscale
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - dns_tls:/mnt/tls:ro
+17
dns/nginx.conf
events {}
http {
  server {
    resolver 127.0.0.11 [::1]:5353 valid=15s;
    set $backend "http://app:3333";
    listen 443 ssl;
    ssl_certificate /mnt/tls/cert.pem;
    ssl_certificate_key /mnt/tls/cert.key;
    location / {
      proxy_pass $backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      client_max_body_size 64M;
    }
  }
}
+41
pds/README.md
# PDS

## Configuration

This is a fully operational PDS and needs appropriate configuration. If you decide to run multiple PDS instances for testing, be sure to configure each one individually.

Copy the `env.example` file to `env` and update the following entries, replacing the "PLACEHOLDER" values:

* `PDS_JWT_SECRET` value set with `openssl rand --hex 16`
* `PDS_ADMIN_PASSWORD` value set with `openssl rand --hex 16`
* `PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX` value set with `openssl ecparam --name secp256k1 --genkey --noout --outform DER | tail --bytes=+8 | head --bytes=32 | xxd --plain --cols 32`
* `PDS_HOSTNAME` value updated to reflect your internal tailnet
* `PDS_ADMIN_EMAIL` value updated to reflect your internal tailnet
* `PDS_DID_PLC_URL` value updated to reflect your internal tailnet
* Optionally, change the `pyroclastic.cloud` domain if you are not using it (it's fine to leave this as-is).

## Operation

1. Create the configuration file and update it accordingly.

2. Bring networking up.

   `docker compose up tailscale -d`

   If you are using dynamic node registration, you'll need to view the logs and click on the link.

   `docker compose logs tailscale`

3. Generate an SSL certificate for the node. Be sure to change `internal.ts.net` to whatever your Tailnet name is (e.g. `sneaky-fox.ts.net`).

   `docker compose exec tailscale /bin/sh -c "tailscale cert --cert-file /mnt/tls/cert.pem --key-file /mnt/tls/cert.key pds.internal.ts.net"`

4. Bring the app and proxy up.

   `docker compose up -d`

## Usage

The PDS will be available at https://pds.internal.ts.net/.

The maildev service will be available at http://pds.internal.ts.net:1080/.
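Once the PDS is up behind the proxy, a quick reachability check is to call the public `com.atproto.server.describeServer` endpoint. This optional sketch is not part of the change and assumes the example `pds.internal.ts.net` hostname used in the docs above.

```go
// Sanity-check the PDS through the nginx/Tailscale proxy by calling the public
// com.atproto.server.describeServer endpoint.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type describeServerResponse struct {
	DID                  string   `json:"did"`
	AvailableUserDomains []string `json:"availableUserDomains"`
}

func main() {
	resp, err := http.Get("https://pds.internal.ts.net/xrpc/com.atproto.server.describeServer")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out describeServerResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	// Expect the domain configured in PDS_SERVICE_HANDLE_DOMAINS, e.g. ".pyroclastic.cloud".
	fmt.Println(out.DID, out.AvailableUserDomains)
}
```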
+32
pds/docker-compose.yml
version: '3.8'
volumes:
  pds_data:
  pds_ts:
  pds_tls:
services:
  maildev:
    image: maildev/maildev
    restart: unless-stopped
  app:
    image: ghcr.io/bluesky-social/pds:0.4
    restart: unless-stopped
    env_file: "env"
    volumes:
      - pds_data:/pds
  tailscale:
    image: tailscale/tailscale:latest
    restart: unless-stopped
    environment:
      # OPTIONAL - TS_AUTHKEY=YOUR-TS-KEY-GOES-HERE
      - TS_STATE_DIR=/var/run/tailscale
      - TS_HOSTNAME=pds
    volumes:
      - pds_tls:/mnt/tls
      - pds_ts:/var/run/tailscale
  nginx:
    image: nginx
    restart: unless-stopped
    network_mode: service:tailscale
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - pds_tls:/mnt/tls:ro
+20
pds/env.example
PDS_SERVICE_HANDLE_DOMAINS=.pyroclastic.cloud
PDS_HOSTNAME=pds.internal.ts.net
PDS_JWT_SECRET=PLACEHOLDER
PDS_ADMIN_PASSWORD=PLACEHOLDER
PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX=PLACEHOLDER
PDS_ADMIN_EMAIL=admin@plc.internal.ts.net
PDS_DATA_DIRECTORY=/pds
PDS_BLOBSTORE_DISK_LOCATION=/pds/blobs
LOG_ENABLED=true
PDS_DID_PLC_URL=https://plc.internal.ts.net
PDS_BSKY_APP_VIEW_DID=did:web:api.bsky.app
PDS_REPORT_SERVICE_DID=did:plc:ar7c4by46qjdydhdevvrndac
PDS_EMAIL_FROM_ADDRESS=postmaster@localhost
PDS_EMAIL_SMTP_URL=smtp://maildev:1025
PDS_ACCEPTING_REPO_IMPORTS=true
PDS_DEV_MODE=TRUE
DEBUG_MODE=TRUE
LOG_LEVEL=trace
PDS_PORT=3001
PDS_BLOB_UPLOAD_LIMIT=52428800
+17
pds/nginx.conf
events {}
http {
  server {
    resolver 127.0.0.11 [::1]:5353 valid=15s;
    set $backend "http://app:3001";
    listen 443 ssl;
    ssl_certificate /mnt/tls/cert.pem;
    ssl_certificate_key /mnt/tls/cert.key;
    location / {
      proxy_pass $backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      client_max_body_size 64M;
    }
  }
}
+31
plc/README.md
# PLC

To start a PLC server, you must build a container from the PLC repository.

1. First, clone https://github.com/did-method-plc/did-method-plc

   `git clone https://github.com/did-method-plc/did-method-plc`

2. Build the container.

   `docker build -f ./did-method-plc/packages/server/Dockerfile -t plcjs ./did-method-plc/`

3. Bring networking up.

   `docker compose up tailscale -d`

   If you are using dynamic node registration, you'll need to view the logs and click on the link.

   `docker compose logs tailscale`

4. Generate an SSL certificate for the node. Be sure to change `internal.ts.net` to whatever your Tailnet name is (e.g. `sneaky-fox.ts.net`).

   `docker compose exec tailscale /bin/sh -c "tailscale cert --cert-file /mnt/tls/cert.pem --key-file /mnt/tls/cert.key plc.internal.ts.net"`

5. Bring the database up.

   `docker compose up db -d`

6. Bring the app and proxy up.

   `docker compose up -d`
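Once the PLC, PDS, and DNS pieces are all running, a DID created through didadmin should be queryable from this PLC at `https://plc.internal.ts.net/<did>`. The following optional sketch (not part of this change) fetches a DID document to confirm that; the hostname and DID are the example values used elsewhere in these docs, so substitute your own.

```go
// Fetch a DID document from the local PLC to confirm the PDS registered it there.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	did := "did:plc:p75ngbyvabgetgoy52aswele" // example DID from dns/Corefile.example
	resp, err := http.Get("https://plc.internal.ts.net/" + did)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	doc, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The DID document's alsoKnownAs entry should contain the at:// handle,
	// and its service entry should point at the local PDS.
	fmt.Println(string(doc))
}
```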
+55
plc/docker-compose.yml
version: '3.8'
volumes:
  plc_db:
  plc_ts:
  plc_tls:
services:
  db:
    image: postgres:14.4-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=pg
      - POSTGRES_PASSWORD=password
    healthcheck:
      test: 'pg_isready -U pg'
      interval: 500ms
      timeout: 10s
      retries: 20
    volumes:
      - plc_db:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
  app:
    depends_on:
      db:
        condition: service_healthy
        restart: true
    image: plcjs
    restart: unless-stopped
    environment:
      - DATABASE_URL=postgres://pg:password@db/plc
      - DEBUG_MODE=1
      - LOG_ENABLED=true
      - LOG_LEVEL=debug
      - DB_CREDS_JSON={"username":"pg","password":"password","host":"db","port":"5432","database":"plc"}
      - DB_MIGRATE_CREDS_JSON={"username":"pg","password":"password","host":"db","port":"5432","database":"plc"}
      - ENABLE_MIGRATIONS=true
      - LOG_DESTINATION=1
    ports:
      - '3000:3000'
  tailscale:
    image: tailscale/tailscale:latest
    restart: unless-stopped
    environment:
      # OPTIONAL - TS_AUTHKEY=YOUR-TS-KEY-GOES-HERE
      - TS_STATE_DIR=/var/run/tailscale
      - TS_HOSTNAME=plc
    volumes:
      - plc_tls:/mnt/tls
      - plc_ts:/var/run/tailscale
  nginx:
    image: nginx
    restart: unless-stopped
    network_mode: service:tailscale
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - plc_tls:/mnt/tls:ro
+3
plc/init.sql
-- plc
CREATE DATABASE plc;
GRANT ALL PRIVILEGES ON DATABASE plc TO pg;
+17
plc/nginx.conf
events {}
http {
  server {
    resolver 127.0.0.11 [::1]:5353 valid=15s;
    set $backend "http://app:3000";
    listen 443 ssl;
    ssl_certificate /mnt/tls/cert.pem;
    ssl_certificate_key /mnt/tls/cert.key;
    location / {
      proxy_pass $backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      client_max_body_size 64M;
    }
  }
}