a minimal, working foundation for building atproto/bluesky apps.
oauth works out of the box. sessions are stored in sqlite. the frontend is vanilla html/js on purpose, so you can swap in whatever framework you want.
## quick start

```sh
git clone https://github.com/yourusername/atproto-starter
cd atproto-starter
cp .env.example .env
bun install
bun run dev
```
open http://127.0.0.1:3000 and sign in with your bluesky handle. that's it.
note: use `127.0.0.1`, not `localhost`. atproto oauth is picky about this.
## what's in here

```
src/server/
├── index.ts            # elysia server + your routes go here
├── lib/
│   ├── oauth-client.ts # oauth configuration
│   ├── storage.ts      # session/state stores (sqlite)
│   ├── db.ts           # database setup
│   └── constants.ts    # app name, cookie config
├── middleware/
│   └── auth.ts         # requireAuth, optionalAuth
└── routes/
    └── oauth.ts        # login, callback, logout
src/types/
└── lexicons.ts         # your custom record types
public/
├── index.html          # the login page
├── styles.css          # styles (replace with your own)
└── app.js              # client logic (vanilla js)
```
## how oauth works
atproto doesn't use centralized auth like "sign in with google." each user's data lives on their own PDS (personal data server), and you authenticate directly with that server.
the flow:

1. user enters their handle (like `keithlaugh.love`)
2. your app figures out which PDS hosts their data
3. user is redirected to their PDS to authorize your app
4. the PDS redirects back with an auth code
5. your app exchanges that code for tokens and stores the session
this starter handles all of that. you just need to build your app.
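step 2 boils down to resolving the handle to a DID, fetching the DID document, and reading the `#atproto_pds` service entry. the starter does this for you, but here's a minimal sketch of the last step so the mechanism isn't a black box (`pdsEndpoint` is our illustrative helper, not a starter export):

```typescript
// a DID document's service list, trimmed to the fields we need
interface DidDocument {
  id: string;
  service?: { id: string; type: string; serviceEndpoint: string }[];
}

// find the PDS endpoint in a resolved DID document. the PDS is listed
// as a service with id "#atproto_pds" and type "AtprotoPersonalDataServer"
function pdsEndpoint(doc: DidDocument): string | undefined {
  return doc.service?.find(
    (s) => s.id === "#atproto_pds" && s.type === "AtprotoPersonalDataServer"
  )?.serviceEndpoint;
}
```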
## local development
for local dev, atproto has a "loopback client" system that lets you skip registering your app anywhere. it just works on http://127.0.0.1:PORT.
## production

in production, you serve oauth client metadata at `/oauth-client-metadata.json`. this file describes your app to PDSes. the starter already generates it; just set `PUBLIC_URL` to your real domain.
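for reference, client metadata looks roughly like this (values are illustrative; the exact client name, redirect uri, and scopes come from your config):

```json
{
  "client_id": "https://myapp.example.com/oauth-client-metadata.json",
  "client_name": "my atproto app",
  "client_uri": "https://myapp.example.com",
  "redirect_uris": ["https://myapp.example.com/oauth/callback"],
  "scope": "atproto transition:generic",
  "grant_types": ["authorization_code", "refresh_token"],
  "response_types": ["code"],
  "application_type": "web",
  "token_endpoint_auth_method": "none",
  "dpop_bound_access_tokens": true
}
```

note that `client_id` is the url of the metadata file itself — that's how atproto oauth identifies apps without a central registry.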
## using the authenticated session
after login, your routes can access the user's DID and an authenticated api client:
```typescript
import { requireAuth } from "./middleware/auth";

app
  .use(requireAuth)
  .get("/api/my-stuff", async ({ did, agent }) => {
    // did = the user's DID (did:plc:abc123...)
    // agent = authenticated atproto client
    const result = await agent.com.atproto.repo.listRecords({
      repo: did,
      collection: "com.example.settings",
    });
    return result.data.records;
  });
```
the agent is a fully authenticated `@atproto/api` client. you can read from the user's repo, write records, the whole deal.
## adding your own frontend
the server doesn't care what frontend you use. it serves static files from /public/* and exposes api routes. pick your stack:
### react

react recommends starting with a framework like Next.js or React Router, or using Vite for a lighter setup:

```sh
npx create-vite . --template react-ts
```
build to public/, or set up a separate dev server and proxy api calls.
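if you go the separate-dev-server route, a proxy config along these lines forwards api and oauth routes to the bun server during dev (a sketch; adjust ports to match your setup):

```typescript
// vite.config.ts
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      // forward api + oauth routes to the elysia server
      "/api": "http://127.0.0.1:3000",
      "/oauth": "http://127.0.0.1:3000",
    },
  },
});
```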
### vue / svelte / whatever

same deal. create your project, point it at the api routes:

- `GET /api/me` — check if logged in, get user's DID
- `GET /oauth/login?handle=user.bsky.social` — start login
- `GET /oauth/logout` — log out
the cookie-based session means auth "just works" — no token management needed on the frontend.
### no framework
the included vanilla js is fine for simple apps. it's ~50 lines. sometimes that's all you need.
## custom lexicons
lexicons are atproto's schema system. they define what kinds of records your app stores in user repos.
### the basics

lexicon IDs follow reverse-DNS format: `tld.domain.collection`

examples:

- `com.myapp.settings` — user settings
- `com.myapp.post` — posts
- `com.myapp.graph.follow` — follow relationships
add yours in `src/types/lexicons.ts`:

```typescript
export const LEXICON_IDS = {
  SETTINGS: "com.example.settings",
  POST: "com.example.post",
} as const;
```
oauth scopes are built automatically from this. when you add a new lexicon, users will be prompted to authorize access to that collection.
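the mapping from lexicon IDs to scopes is mechanical. hypothetically (the starter's actual helper may differ), it looks like:

```typescript
const LEXICON_IDS = {
  SETTINGS: "com.example.settings",
  POST: "com.example.post",
} as const;

// every app needs the base "atproto" scope; each lexicon collection
// then gets a "repo:<nsid>" scope granting access to that collection
const OAUTH_SCOPES: string[] = [
  "atproto",
  ...Object.values(LEXICON_IDS).map((id) => `repo:${id}`),
];
// OAUTH_SCOPES → ["atproto", "repo:com.example.settings", "repo:com.example.post"]
```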
### storing records
once a user's logged in, write to their repo:
```typescript
await agent.com.atproto.repo.createRecord({
  repo: did,
  collection: "com.example.post",
  record: {
    $type: "com.example.post", // records carry their lexicon type
    text: "hello world",
    createdAt: new Date().toISOString(),
  },
});
```
the record lives in their PDS, not your server. they own it. they can export it, delete it, move PDSes — their data travels with them.
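createRecord returns the new record's `at://` URI and CID. the URI encodes exactly where the record lives; a small parser (our own illustrative helper, not part of the starter) makes the structure concrete:

```typescript
// a record at:// URI has the shape at://<repo>/<collection>/<rkey>,
// e.g. at://did:plc:abc123/com.example.post/3k2akqmwl5b2x
function parseAtUri(uri: string): { repo: string; collection: string; rkey: string } {
  const m = uri.match(/^at:\/\/([^/]+)\/([^/]+)\/([^/]+)$/);
  if (!m) throw new Error(`not a record at:// uri: ${uri}`);
  const [, repo, collection, rkey] = m;
  return { repo, collection, rkey };
}
```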
### reading records

```typescript
const result = await agent.com.atproto.repo.listRecords({
  repo: did,
  collection: "com.example.post",
  limit: 50,
});

for (const record of result.data.records) {
  console.log(record.value.text);
}
```
### type safety

define typescript interfaces alongside your lexicon IDs:

```typescript
export interface Post {
  text: string;
  tags?: string[];
  createdAt: string;
}
```
then cast when reading:
```typescript
const posts = result.data.records.map((r) => r.value as Post);
```
for full schema validation, you can write proper lexicon JSON schemas and use `@atproto/lexicon` to generate types. but for getting started, this works.
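a blind cast trusts whatever happens to be in the repo. if you want a runtime check without pulling in a schema library, a hand-written type guard (our own sketch) is enough for simple shapes:

```typescript
interface Post {
  text: string;
  tags?: string[];
  createdAt: string;
}

// narrow an unknown record value to Post, checking each field's shape
function isPost(value: unknown): value is Post {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.text === "string" &&
    typeof v.createdAt === "string" &&
    (v.tags === undefined ||
      (Array.isArray(v.tags) && v.tags.every((t) => typeof t === "string")))
  );
}
```

then `result.data.records.filter((r) => isPost(r.value))` gives you properly typed records and silently drops malformed ones.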
## using bluesky's lexicons

you don't have to build everything yourself. bluesky has lexicons for common social features:

| lexicon | what it is |
|---|---|
| `app.bsky.feed.post` | posts (the tweets) |
| `app.bsky.feed.like` | likes |
| `app.bsky.feed.repost` | reposts |
| `app.bsky.graph.follow` | follows |
| `app.bsky.graph.block` | blocks |
| `app.bsky.actor.profile` | profile info |
to use these, request the appropriate scopes. in dev mode, `transition:generic` gives you broad access. in production, be specific:

```typescript
const OAUTH_SCOPES = [
  "atproto",
  "repo:app.bsky.feed.post",
  "repo:app.bsky.feed.like",
];
```
### the bluesky API client

the `@atproto/api` package has typed helpers for bluesky lexicons:

```typescript
// post to bluesky
await agent.post({
  text: "hello from my app",
});

// get user's feed
const feed = await agent.getAuthorFeed({
  actor: did,
  limit: 20,
});

// like a post
await agent.like(postUri, postCid);
```
see the `@atproto/api` docs for the full list.
### mixing custom + bluesky lexicons
most apps will use some of each. maybe you have custom `com.myapp.settings` but use bluesky's posts and likes. that's fine — just add all the lexicon IDs you need.
## environment variables

```sh
# your app's url (no trailing slash!)
PUBLIC_URL=http://127.0.0.1:3000

# database
TURSO_DATABASE_URL=file:./data.db        # local sqlite
# TURSO_DATABASE_URL=libsql://x.turso.io # production
# TURSO_AUTH_TOKEN=your-token

# optional
PORT=3000
NODE_ENV=development
```
## production

- set `PUBLIC_URL` to your real domain
- use turso or hosted sqlite for the database
- oauth client metadata will be served at `/oauth-client-metadata.json`
- set `NODE_ENV=production` to use specific oauth scopes instead of `transition:generic`
deploy anywhere that runs bun or node.
## what's next?
once you've got the basics working, you'll probably want more. here are tools the community has built that you can reach for:
### identity
- slingshot — fast DID/handle resolution cache. useful when you're displaying posts from lots of different users and don't want to hammer plc.directory
- pdsls.dev — browse any atproto repo. great for debugging and understanding what's actually stored
### engagement
- constellation — backlink index. answers "who liked this post?" across the whole network
- spacedust — real-time websocket for likes/reposts as they happen
## links
- atproto docs — the protocol spec
- bluesky api docs — bluesky-specific stuff
- lexicon reference — how schemas work
- oauth spec — auth details
## license
MIT — do whatever you want with it.