Airglow#
Automations for the AT Protocol — listen to events, filter them, and trigger actions like webhook deliveries or PDS record creation.
Airglow connects to Jetstream (AT Protocol's event streaming service), matches incoming records against user-defined automations, and executes actions automatically. Think IFTTT or Zapier, but the trigger side is always "something happened on the AT Protocol."
Website: airglow.run
How it works#
For end users#
- Sign in to Airglow using AT Protocol OAuth.
- Create an automation by choosing a lexicon to listen to (e.g. `sh.tangled.feed.star`, `site.standard.document`) and which operations to watch (`create`, `update`, `delete`).
- Add conditions to filter events — match record fields using operators like `eq`, `startsWith`, `endsWith`, or `contains`. The `{{self}}` placeholder resolves to the automation owner's DID. The schema is known from the lexicon, so Airglow can present the available fields.
- Optionally add fetch steps to retrieve records from a PDS at event time. Fetched data can then be referenced in action templates using `{{fetchName.record.field}}` placeholders.
- Add actions — deliver a webhook to a callback URL, or create a record on your PDS using a template.
Automations can be created in dry-run mode — all logic (condition matching, fetches, template rendering) runs, but no side effects occur. Results are logged so you can verify behavior before going live.
Airglow verifies that webhook callback URLs actually support the selected lexicon before activating the automation (see Callback endpoints below).
Each user has a public profile at /u/<handle> showing their automations and maintained lexicons. Individual automations can be viewed and duplicated by other users.
Data ownership#
Automations are stored on the user's PDS as run.airglow.automation records. The user's PDS is the source of truth — Airglow instances maintain a local index for fast event matching, but the data belongs to the user. This means automations are portable across Airglow instances and visible to any AT Protocol client.
For developers#
Developers build HTTP endpoints that receive webhook payloads from Airglow.
Callback endpoints#
A callback server can optionally expose a metadata route so Airglow can discover its endpoints and verify which lexicons each one accepts:
```
GET <server-base-url>/.well-known/airglow
```
This returns a JSON manifest mapping callback paths to the lexicons they handle:
```json
{
  "callbacks": [
    { "path": "/hooks/stars", "lexicons": ["sh.tangled.feed.star"] },
    { "path": "/hooks/posts", "lexicons": ["app.bsky.feed.post"] }
  ]
}
```
When a user registers a callback URL (e.g. https://example.com/hooks/stars), Airglow fetches the manifest from https://example.com/.well-known/airglow. If the manifest is present and the path is listed with the requested lexicon, the webhook is marked as verified. If the manifest is missing or doesn't match, the webhook is still created but shown as unverified. Verification is re-checked when an automation is reactivated.
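The verification decision described above can be sketched as a small predicate. The manifest shape mirrors the JSON example; the function name is an assumption.

```typescript
// Manifest shape, matching the /.well-known/airglow example above.
interface Manifest {
  callbacks: { path: string; lexicons: string[] }[];
}

// A webhook is marked verified only if the manifest exists and lists the
// callback path with the requested lexicon; otherwise it stays unverified
// (but is still created).
function isVerified(
  manifest: Manifest | null,
  callbackPath: string,
  lexicon: string,
): boolean {
  if (!manifest) return false; // manifest missing → unverified
  return manifest.callbacks.some(
    (cb) => cb.path === callbackPath && cb.lexicons.includes(lexicon),
  );
}
```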
Webhook payload#
When a matching event occurs, Airglow sends a POST request to the callback URL. The payload contains the Jetstream event (commit operation, record data, repo DID, timestamp) wrapped in an Airglow envelope with metadata such as the automation ID and matched condition.
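A hypothetical TypeScript shape for that payload, inferred from the description above, might look like the following. The envelope field names (`automationId`, `matchedCondition`) are assumptions; the inner event fields follow Jetstream's commit event format.

```typescript
// Hypothetical delivery payload shape — field names for the Airglow envelope
// are assumptions; consult the actual payload for the real schema.
interface AirglowDelivery {
  automationId: string;       // envelope metadata (name assumed)
  matchedCondition?: unknown; // envelope metadata (name assumed)
  event: {
    did: string;              // repo DID
    time_us: number;          // Jetstream event timestamp (microseconds)
    commit: {
      operation: "create" | "update" | "delete";
      collection: string;     // lexicon NSID, e.g. sh.tangled.feed.star
      rkey: string;
      record?: unknown;       // record data (absent on delete)
    };
  };
}
```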
Request signing#
Airglow signs every outgoing request so that callback endpoints can verify it actually came from a legitimate Airglow instance (similar to how Stripe or GitHub sign webhook deliveries).
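A callback endpoint verifying such a signature typically recomputes an HMAC over the raw body and compares it in constant time, as Stripe and GitHub receivers do. The header name, digest scheme, and secret distribution below are assumptions for the sketch — check Airglow's signing documentation for the actual scheme.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of HMAC-SHA256 webhook verification in the Stripe/GitHub mold.
// The scheme (hex digest over the raw body) is an assumption.
function sign(secret: string, rawBody: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}

function verifySignature(
  secret: string,
  rawBody: string,
  signatureHeader: string,
): boolean {
  const expected = Buffer.from(sign(secret, rawBody), "hex");
  const received = Buffer.from(signatureHeader, "hex");
  // Constant-time comparison; length check first since timingSafeEqual
  // throws on mismatched lengths.
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```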
Response handling#
- 2xx — Success. The event was delivered.
- 4xx — Logged as a delivery failure. Users can review these errors in Airglow.
- 5xx — Airglow retries delivery (up to 2 retries with backoff).
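The delivery policy above can be sketched as a small loop: 2xx succeeds, 4xx fails immediately, 5xx is retried up to 2 more times with backoff. The function shape and backoff schedule are illustrative, not Airglow's actual code.

```typescript
// Sketch of the retry policy: initial attempt plus up to 2 retries on 5xx.
// send() returns the HTTP status; sleep is injectable for testing.
async function deliver(
  send: () => Promise<number>,
  sleep: (ms: number) => Promise<void> = (ms) =>
    new Promise((resolve) => setTimeout(resolve, ms)),
): Promise<"delivered" | "failed"> {
  const backoffMs = [1000, 5000]; // assumed backoff schedule
  for (let attempt = 0; ; attempt++) {
    const status = await send();
    if (status >= 200 && status < 300) return "delivered"; // 2xx: success
    if (status < 500) return "failed";                     // 4xx: no retry, logged
    if (attempt >= 2) return "failed";                     // 5xx: retries exhausted
    await sleep(backoffMs[attempt]);
  }
}
```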
Future: protocol-native discovery#
Today, users provide callback URLs manually. In the future, developers will be able to publish a run.airglow.callback record on their PDS, declaring their endpoint URL and supported lexicons. Airglow instances could then subscribe to this collection and index available callbacks, letting users browse and pick from discovered endpoints instead of entering URLs by hand.
Development#
Prerequisites#
Getting started#
```sh
# Install dependencies
vp install

# Set up the database
cp .env.example .env
bun run db:migrate

# Start the dev server
vp dev
```
The app will be available at http://localhost:5173.
Useful commands#
```sh
vp check      # lint, format, type-check
vp test       # run tests
vp build      # build client assets for production
bun run start # run the production server
```
Lexicons#
Lexicon schemas live in lexicons/ and are managed with goat:
```sh
goat lex lint lexicons/                 # validate schemas
goat lex new record run.airglow.<name>  # create a new lexicon
```
Self-hosting#
Airglow is designed to be easy to self-host. Configuration is done via environment variables (see .env.example):
| Variable | Purpose |
|---|---|
| `PUBLIC_URL` | Public-facing base URL of the instance |
| `DATABASE_PATH` | Path to the SQLite database file |
| `JETSTREAM_URL` | Jetstream WebSocket endpoint |
| `COOKIE_SECRET` | Secret for session cookies (min 32 chars) |
| `NSID_ALLOWLIST` | Comma-separated NSIDs to allow (empty = allow all) |
| `NSID_BLOCKLIST` | Comma-separated NSIDs to block (empty = block none) |
Instance operators can configure `NSID_ALLOWLIST` and `NSID_BLOCKLIST` to control which lexicons their instance handles. For example, a typical instance may want to block `app.bsky.*` or `app.bsky.feed.*` since those collections are very active and could overwhelm a small instance.
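One plausible way to apply such lists, sketched below. The trailing-wildcard pattern (`app.bsky.*`) follows the example above; the precedence rules (blocklist wins, empty allowlist allows all) are assumptions drawn from the table descriptions.

```typescript
// Sketch of NSID allow/block filtering with trailing-wildcard patterns.
// Matching semantics here are assumptions, not Airglow's exact rules.
function matchesPattern(nsid: string, pattern: string): boolean {
  if (pattern.endsWith(".*")) {
    // "app.bsky.*" matches any NSID under the "app.bsky." prefix.
    return nsid.startsWith(pattern.slice(0, -1));
  }
  return nsid === pattern;
}

function isHandled(nsid: string, allowlist: string[], blocklist: string[]): boolean {
  if (blocklist.some((p) => matchesPattern(nsid, p))) return false; // block wins
  return allowlist.length === 0 || allowlist.some((p) => matchesPattern(nsid, p));
}
```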