cmd/bigsky/README.md
···
`bigsky`: atproto Relay Service
===============================

*NOTE: "Relays" used to be called "Big Graph Servers", or "BGS", which inspired the name "bigsky". Many variables and packages still reference "bgs"*

This is the implementation of an atproto Relay which is running in the production network, written and operated by Bluesky.

In atproto, a Relay subscribes to multiple PDS hosts and outputs a combined "firehose" event stream. Downstream services can subscribe to this single firehose and get all relevant events for the entire network, or a specific sub-graph of the network. The Relay maintains a mirror of repo data from all accounts on the upstream PDS instances, and verifies repo data structure integrity and identity signatures. It is agnostic to applications, and does not validate data against atproto Lexicon schemas.
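
As a quick illustration of that single-endpoint design, the firehose can be tailed directly over a websocket. This is a hedged sketch: the local host/port and the third-party `websocat` client are assumptions (any websocket client works); `com.atproto.sync.subscribeRepos` is the standard atproto firehose endpoint, and the same endpoint exists on each PDS.

```shell
# Hypothetical local relay on the default HTTP API port.
RELAY=localhost:2470
FIREHOSE_URL="ws://${RELAY}/xrpc/com.atproto.sync.subscribeRepos?cursor=0"
echo "subscribing to ${FIREHOSE_URL}"
# Requires a running relay and the websocat tool:
# websocat "${FIREHOSE_URL}"
```

Passing a `cursor` requests replay from that sequence number (here, the earliest the relay retains); omit it to tail only live events.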

This Relay implementation is designed to subscribe to the entire global network. The current state of the codebase is informally expected to scale to around 20 million accounts in the network, and thousands of repo events per second (peak).

Features and design decisions:
···
- periodic repo compaction
- admin web interface: configure limits, add upstream PDS instances, etc

This software is not as packaged, documented, and supported for self-hosting as our PDS distribution or Ozone service. But it is relatively simple and inexpensive to get running.

A note and reminder about Relays in general: they are more of a convenience in the protocol than a hard requirement. The "firehose" API is exactly the same on the PDS and on a Relay. Any service which subscribes to the Relay could instead connect to one or more PDS instances directly.

## Development Tips

The README and Makefile at the top level of this git repo have some generic helpers for testing, linting, formatting code, etc.

To re-build and run the bigsky Relay locally:

    make run-dev-relay

You can re-build and run the command directly to get a list of configuration flags and env vars; env vars will be loaded from `.env` if that file exists:

    RELAY_ADMIN_KEY=localdev go run ./cmd/bigsky/ --help

By default, the daemon will use sqlite for databases (in the directory `./data/bigsky/`), CAR data will be stored as individual shard files in `./data/bigsky/carstore/`, and the HTTP API will be bound to localhost port 2470.

When the daemon isn't running, sqlite database files can be inspected with:

    sqlite3 data/bigsky/bgs.sqlite
    [...]
    sqlite> .schema

Wipe all local data:

    # careful! double-check this destructive command
    rm -rf ./data/bigsky/*

There is a basic web dashboard, though it will not be included unless built and copied to a local directory `./public/`. Run `make build-relay-ui`, and then when running the daemon the dashboard will be available at: <http://localhost:2470/admin/>. Paste in the admin key, eg `localdev`.

The local admin routes can also be accessed by passing the admin key as a bearer token, for example:

    http get :2470/admin/pds/list Authorization:"Bearer localdev"

Request crawl of an individual PDS instance like:

    http post :2470/admin/pds/requestCrawl Authorization:"Bearer localdev" hostname=pds.example.com

## Docker Containers

One way to deploy is running a docker image. You can pull and/or run a specific version of bigsky, referenced by git commit, from the Bluesky GitHub container registry. For example:

    docker pull ghcr.io/bluesky-social/indigo:bigsky-fd66f93ce1412a3678a1dd3e6d53320b725978a6
    docker run ghcr.io/bluesky-social/indigo:bigsky-fd66f93ce1412a3678a1dd3e6d53320b725978a6

There is a Dockerfile in this directory, which can be used to build customized/patched versions of the Relay as a container, republish them, run locally, deploy to servers, deploy to an orchestrated cluster, etc. See docs and guides for docker and cluster management systems for details.

## Database Setup

PostgreSQL and Sqlite are both supported. When using Sqlite, separate files are used for Relay metadata and CarStore metadata. With PostgreSQL a single database server, user, and logical database can all be reused: table names will not conflict.

Database configuration is passed via the `DATABASE_URL` and `CARSTORE_DATABASE_URL` environment variables, or the corresponding CLI args.
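
For illustration, plausible values look like the following. The exact URL schemes and paths here are assumptions; confirm against the `--help` output of the daemon.

```shell
# sqlite (illustrative paths, matching the default data directory layout)
export DATABASE_URL="sqlite://data/bigsky/bgs.sqlite"
export CARSTORE_DATABASE_URL="sqlite://data/bigsky/carstore.sqlite"

# PostgreSQL (one server and user; two logical databases)
export DATABASE_URL="postgres://bigsky:password@localhost:5432/bgs"
export CARSTORE_DATABASE_URL="postgres://bigsky:password@localhost:5432/carstore"
```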

For PostgreSQL, the user and database must already be configured. Some example SQL commands are:

    CREATE DATABASE bgs;
    CREATE DATABASE carstore;
···
    GRANT ALL PRIVILEGES ON DATABASE bgs TO ${username};
    GRANT ALL PRIVILEGES ON DATABASE carstore TO ${username};

This service currently uses `gorm` to automatically run database migrations as the regular user. There is no concept of running a separate set of migrations under a more privileged database user.

## Deployment

*NOTE: this is not a complete guide to operating a Relay. There are decisions to be made and communicated about policies, bandwidth use, PDS crawling and rate-limits, financial sustainability, etc, which are not covered here. This is just a quick overview of how to technically get a relay up and running.*

In a real-world system, you will probably want to use PostgreSQL for both the relay database and the carstore database. CAR shards will still be stored on-disk, resulting in many millions of files. Choose your storage hardware and filesystem carefully: we recommend XFS on local NVMe, not network-backed block storage (eg, not EBS volumes on AWS).

Some notable configuration env vars to set:

- `ENVIRONMENT`: eg, `production`
- `DATABASE_URL`: see the Database Setup section
- `CARSTORE_DATABASE_URL`: see the Database Setup section
- `DATA_DIR`: CAR shards will be stored in a subdirectory
- `GOLOG_LOG_LEVEL`: log verbosity
- `RESOLVE_ADDRESS`: DNS server to use
- `FORCE_DNS_UDP`: recommend "true"
- `BGS_COMPACT_INTERVAL`: to control CAR compaction scheduling; for example, "8h" (every 8 hours). Set to "0" to disable automatic compaction.
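
Pulled together, a starting-point `.env` for a production-style deployment might look like the sketch below. All values are illustrative assumptions, not recommendations; in particular the database URLs, the `RESOLVE_ADDRESS` host:port format, and the admin key are placeholders to replace.

```shell
ENVIRONMENT=production
DATABASE_URL=postgres://bigsky:password@localhost:5432/bgs
CARSTORE_DATABASE_URL=postgres://bigsky:password@localhost:5432/carstore
DATA_DIR=/data/bigsky
GOLOG_LOG_LEVEL=info
RESOLVE_ADDRESS=1.1.1.1:53
FORCE_DNS_UDP=true
BGS_COMPACT_INTERVAL=8h
RELAY_ADMIN_KEY=change-me
```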

There is a health check endpoint at `/xrpc/_health`. Prometheus metrics are exposed by default on port 2471, path `/metrics`. The service logs fairly verbosely to stderr; use `GOLOG_LOG_LEVEL` to control log volume.

As a rough guideline for the compute resources needed to run a full-network Relay, in June 2024 an example Relay for over 5 million repositories used:

- around 30 million inodes (files)
- roughly 1 TByte of disk for PostgreSQL
- roughly 1 TByte of disk for CAR shard storage
- roughly 5k disk I/O operations per second (all combined)
- roughly 100% of one CPU core (quite low CPU utilization)
- roughly 5GB of RAM for bigsky, and as much RAM as available for PostgreSQL and page cache
- on the order of 1 megabit inbound bandwidth (crawling PDS instances) and 1 megabit outbound per connected client. 1 mbit continuous is approximately 350 GByte/month
126
+
127
+
Be sure to double-check bandwidth usage and pricing if running a public relay! Bandwidth prices can vary widely between providers, and popular cloud services (AWS, Google Cloud, Azure) are very expensive compared to alternatives like OVH or Hetzner.
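
The monthly-transfer figure above is easy to sanity-check with shell arithmetic: exactly 1 megabit/s sustained over a 30-day month comes to about 324 GByte, consistent with the rounded ~350 GByte/month estimate.

```shell
# 1 megabit/s sustained, converted to bytes transferred in a 30-day month
bits_per_sec=1000000
bytes_per_sec=$((bits_per_sec / 8))        # 125000 bytes/s
seconds_per_month=$((60 * 60 * 24 * 30))   # 2592000 s
bytes_per_month=$((bytes_per_sec * seconds_per_month))
echo "$((bytes_per_month / 1000000000)) GByte/month"   # prints "324 GByte/month"
```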
128
+
129
+
130
+
## Bootstrapping the Network
131
+
132
+
To bootstrap the entire network, you'll want to start with a list of large PDS instances to backfill from. You could pull from a public dashboard of instances (like [mackuba's](https://blue.mackuba.eu/directory/pdses)), or scrape the full DID PLC directory, parse out all PDS service declarations, and sort by count.
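
As a sketch of the directory-scraping approach: PLC export records are JSONL, and operations can carry an `atproto_pds` service entry with an endpoint URL. The JSON field layout shown here is an assumption to verify against real export output, so the pipeline below runs against an inline sample rather than the live directory.

```shell
# Inline sample standing in for paged directory export output
sample='{"operation":{"services":{"atproto_pds":{"endpoint":"https://pds.example.com"}}}}
{"operation":{"services":{"atproto_pds":{"endpoint":"https://pds.example.com"}}}}
{"operation":{"services":{"atproto_pds":{"endpoint":"https://other.example.net"}}}}'

# Pull out PDS hostnames and rank by how many records reference each
echo "$sample" \
    | sed -n 's|.*"endpoint":"https://\([^"]*\)".*|\1|p' \
    | sort | uniq -c | sort -rn
```

The same pipeline, fed from the real export instead of `$sample`, yields a count-ranked hostname list that can seed `hosts.txt`.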

Once you have a set of PDS hosts, you can put the bare hostnames (not URLs: no `https://` prefix, port, or path suffix) in a `hosts.txt` file, and then use the `crawl_pds.sh` script to backfill and configure limits for all of them:

    export RELAY_HOST=your.relay.hostname.tld
    export RELAY_ADMIN_KEY=your-secret-key

    # both request crawl, and set generous crawl limits for each
    cat hosts.txt | parallel -j1 ./crawl_pds.sh {}

Just consuming from the firehose for a few hours will only backfill accounts with activity during that period. This is fine to get the backfill process started, but eventually you'll want to do a full "resync" of all the repositories on the PDS host to the most recent repo rev version. To enqueue that for all the PDS instances:

    # start sync/backfill of all accounts
    cat hosts.txt | parallel -j1 ./sync_pds.sh {}

cmd/bigsky/crawl_pds.sh
#!/usr/bin/env bash

set -e          # fail on error
set -u          # fail if variable not set in substitution
set -o pipefail # fail if part of a '|' command fails

# use ${VAR:-} so the friendly error prints instead of tripping 'set -u'
if test -z "${RELAY_ADMIN_KEY:-}"; then
    echo "RELAY_ADMIN_KEY secret is not defined"
    exit 1
fi

if test -z "${RELAY_HOST:-}"; then
    echo "RELAY_HOST config not defined"
    exit 1
fi

if test -z "${1:-}"; then
    echo "expected PDS hostname as an argument"
    exit 1
fi

echo "requestCrawl $1"
http --quiet --ignore-stdin post "https://${RELAY_HOST}/admin/pds/requestCrawl" Authorization:"Bearer ${RELAY_ADMIN_KEY}" \
    hostname="$1"

echo "changeLimits $1"
http --quiet --ignore-stdin post "https://${RELAY_HOST}/admin/pds/changeLimits" Authorization:"Bearer ${RELAY_ADMIN_KEY}" \
    per_second:=100 \
    per_hour:=1000000 \
    per_day:=1000000 \
    crawl_rate:=10 \
    repo_limit:=1000000 \
    host="$1"

cmd/bigsky/sync_pds.sh
#!/usr/bin/env bash

set -e          # fail on error
set -u          # fail if variable not set in substitution
set -o pipefail # fail if part of a '|' command fails

# use ${VAR:-} so the friendly error prints instead of tripping 'set -u'
if test -z "${RELAY_ADMIN_KEY:-}"; then
    echo "RELAY_ADMIN_KEY secret is not defined"
    exit 1
fi

if test -z "${RELAY_HOST:-}"; then
    echo "RELAY_HOST config not defined"
    exit 1
fi

if test -z "${1:-}"; then
    echo "expected PDS hostname as an argument"
    exit 1
fi

echo "resync $1"
http --quiet --ignore-stdin post "https://${RELAY_HOST}/admin/pds/resync" Authorization:"Bearer ${RELAY_ADMIN_KEY}" \
    host="$1"