Plan for a dedicated chronological feed of everyone you follow, with capacity for 1000 users per deployed instance

# Plan: Personal AT Proto AppView — Next.js Frontend + Aurora Prism Backend on Digital Ocean


## Overview

We are building a two-service architecture on a single Digital Ocean Droplet:

  1. Aurora Prism (unchanged backend) — XRPC AppView API, Jetstream firehose consumer, PostgreSQL storage, Redis cache, AT Proto OAuth
  2. Next.js App (new frontend) — App Router frontend consuming Aurora Prism's XRPC API via @atproto/api, SSR'd pages, proper authentication, clean UX

Nginx sits in front of both, terminating TLS and routing traffic:

User → Nginx (443) → /          → Next.js (3000)
                   → /xrpc/*   → Aurora Prism (5000)
                   → /health   → Aurora Prism (5000)

## Phase 1: Infrastructure — Digital Ocean Setup

### 1.1 Droplet Selection

Recommended: General Purpose 8 GB / 4 vCPU — $48/month

| Spec | Value | Reasoning |
|------|-------|-----------|
| RAM | 8 GB | Postgres (2 GB) + Aurora Prism (1.5 GB) + Next.js (512 MB) + Redis (512 MB) + OS headroom |
| vCPU | 4 | Firehose consumer + API serving + Next.js SSR are all CPU-bound under load |
| Disk | 50 GB SSD (base) | OS + applications; all DB data lives on Block Storage |
| Region | Closest to you | NYC3, SFO3, or AMS3 recommended |
| OS | Ubuntu 22.04 LTS | Stable, excellent Docker support |

Add Block Storage: 100 GB — $10/month Mount at /mnt/appdata. The PostgreSQL data directory lives here so you can resize storage without recreating the Droplet.

Total: ~$58/month. At 1000 users that is about $0.06/user/month.

Minimum viable (tight): 4 GB / 2 vCPU at $24/month. Works, but PostgreSQL alone consumes ~2GB, leaving little headroom. Not recommended for production.

### 1.2 Digital Ocean Firewall

Configure in the DO Cloud Console (survives Droplet recreation, unlike ufw):

| Direction | Port | Protocol | Source |
|-----------|------|----------|--------|
| Inbound | 22 | TCP | Your IP only |
| Inbound | 80 | TCP | All IPv4/IPv6 |
| Inbound | 443 | TCP | All IPv4/IPv6 |
| Outbound | All | All | All |

All inter-service traffic (Postgres, Redis, internal Next.js/Aurora Prism calls) stays on Docker's internal bridge network — no external port exposure needed.

### 1.3 Droplet Bootstrap Script

#!/bin/bash
# Run as root on fresh Ubuntu 22.04 Droplet

apt-get update && apt-get upgrade -y

# Docker
curl -fsSL https://get.docker.com | sh
apt-get install -y docker-compose-plugin

# Nginx + Certbot
apt-get install -y nginx certbot python3-certbot-nginx

# App user (do not run everything as root)
useradd -m -s /bin/bash appuser
usermod -aG docker appuser

# Mount Block Storage (confirm device name in DO console — usually /dev/sda or /dev/disk/by-id/...)
VOLUME_DEVICE=/dev/sda
mkfs.ext4 $VOLUME_DEVICE
mkdir -p /mnt/appdata
echo "$VOLUME_DEVICE /mnt/appdata ext4 defaults,nofail 0 2" >> /etc/fstab
mount -a

# Create data directories
mkdir -p /mnt/appdata/postgres /mnt/appdata/redis
chown -R 999:999 /mnt/appdata/postgres   # matches postgres UID inside Docker image

## Critical Requirement: Complete, Chronological, Gap-Free Timelines

This is the hardest constraint in the system. It breaks down into four distinct sub-problems:

### Problem 1 — The Jetstream is a Live Stream, Not an Archive

Jetstream starts delivering events from "now." If Aurora Prism is not yet subscribed to a DID when that person posts, the post is missed forever from the stream. This happens in two scenarios:

  • A user follows someone after Aurora Prism started (the new follow's past posts are absent)
  • Aurora Prism restarts and the cursor is lost or stale (events during downtime are missed)

Solution A — wantedDids with dynamic updates: Jetstream supports subscribing to a specific set of DIDs via the wantedDids query parameter. Aurora Prism must dynamically update this subscription as users follow/unfollow. When a new follow happens, the subscription must expand to include the new DID and a backfill of that DID's history must be triggered from their PDS directly.

wss://jetstream2.us-east.bsky.network/subscribe
  ?wantedCollections=app.bsky.feed.post
  &wantedCollections=app.bsky.feed.repost
  &wantedCollections=app.bsky.feed.like
  &wantedDids=did:plc:abc123
  &wantedDids=did:plc:def456
  ...

Verify: Check whether Aurora Prism's data-plane supports wantedDids filtering and dynamic reconnection when the follow set changes. If not, this requires a code contribution to Aurora Prism.

Solution B — Full firehose with local filtering (simpler but expensive): Subscribe to the full Jetstream (all posts, ~850 MB/day) and filter locally by follow graph. No missed events, but storage grows with the whole network, not just followed DIDs. At 1000 users this is manageable; at 10,000 it becomes unsustainable.

Recommendation: Start with Solution B (full Jetstream, local filter) for reliability. Optimize to wantedDids later when the scale justifies the added complexity.
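Solution B's local filter reduces to a pure predicate over raw Jetstream frames. A sketch in TypeScript (the event fields follow Jetstream's published commit envelope; the follow set would be loaded from Postgres and refreshed as follows change):

```typescript
// Sketch of Solution B: consume the unfiltered Jetstream and keep only events
// authored by DIDs someone on this instance follows.

interface JetstreamEvent {
  did: string                 // author of the event
  time_us: number             // cursor (microsecond Unix timestamp)
  kind: 'commit' | 'identity' | 'account'
  commit?: { collection: string; operation: string }
}

const KEPT_COLLECTIONS = new Set([
  'app.bsky.feed.post',
  'app.bsky.feed.repost',
  'app.bsky.graph.follow',
])

export function shouldIndex(raw: string, followedDids: Set<string>): boolean {
  const ev: JetstreamEvent = JSON.parse(raw)
  if (ev.kind !== 'commit' || !ev.commit) return false
  if (!KEPT_COLLECTIONS.has(ev.commit.collection)) return false
  // The whole-network stream arrives here; the follow graph is the filter.
  return followedDids.has(ev.did)
}
```

The consumer calls this on every frame; everything rejected is dropped before it ever touches PostgreSQL, which is what keeps Solution B's storage proportional to the follow graph rather than the full network.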

### Problem 2 — Cursor Persistence for Crash Recovery

Jetstream assigns each event a time_us (microsecond Unix timestamp) that doubles as its cursor. Aurora Prism must persist this cursor to PostgreSQL (or Redis) after each processed event or small batch of events. On restart, the consumer reconnects with ?cursor=<last_time_us> to replay any events missed during downtime.

wss://jetstream2.us-east.bsky.network/subscribe?cursor=1708123456789000

Jetstream retains a rolling 72-hour window of events. This means:

  • Downtime up to 72 hours → full recovery on restart with cursor
  • Downtime over 72 hours → gap in timeline, requires PDS backfill to fill it

Verify: Confirm Aurora Prism's cursor service persists time_us durably (not just in-memory). Check the migrations/ and server/ directories for cursor table schema.
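A minimal sketch of that cursor discipline, with illustrative names rather than Aurora Prism's actual schema. Rewinding a few seconds on reconnect is safe because ingestion is idempotent (see Problem 4):

```typescript
// Illustrative cursor handling: persist the highest time_us seen, and resume
// from it minus a small overlap so no event on the boundary is lost.

const JETSTREAM = 'wss://jetstream2.us-east.bsky.network/subscribe'
const OVERLAP_US = 5_000_000 // rewind 5s on reconnect; upserts absorb the replay

export function resumeUrl(lastTimeUs: number | null): string {
  if (lastTimeUs === null) return JETSTREAM   // first boot: start from "now"
  return `${JETSTREAM}?cursor=${lastTimeUs - OVERLAP_US}`
}

export function advanceCursor(current: number, eventTimeUs: number): number {
  // Only move forward; replayed (older) events must not rewind the cursor.
  return Math.max(current, eventTimeUs)
}
```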

### Problem 3 — Backfill on New Follow

When user A follows user B, posts B made before A started following them need to be fetched. This cannot come from Jetstream (it only goes forward). It must come from a direct com.atproto.repo.listRecords call to B's PDS.

GET https://bsky.social/xrpc/com.atproto.repo.listRecords
  ?repo=<did>
  &collection=app.bsky.feed.post
  &limit=100

Aurora Prism has BACKFILL_DAYS for startup backfill, but we need on-demand backfill per follow event. The flow must be:

User follows DID B
  → Write follow to PDS (via Aurora Prism write proxy)
  → Aurora Prism detects new follow in firehose
  → Triggers backfill job for DID B: fetch all posts from B's PDS
  → Stores in PostgreSQL with correct timestamps
  → Timeline is now complete for B retroactively

Verify: Check if Aurora Prism's data-plane handles app.bsky.graph.follow events and triggers per-DID backfill. If not, this is a required code contribution.
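The flow's backfill step can be sketched as a paging loop over listRecords. The PDS host and the store callback are placeholders: in production the host comes from resolving B's DID document, and store is the idempotent Postgres upsert:

```typescript
// Hedged sketch of on-demand per-follow backfill: page through
// com.atproto.repo.listRecords on the followee's PDS until exhausted.

type FetchLike = (url: string) => Promise<{ json(): Promise<any> }>

export async function backfillAuthor(
  did: string,
  store: (record: { uri: string; value: unknown }) => Promise<void>,
  fetchFn: FetchLike = fetch,
  pdsHost = 'https://bsky.social',   // placeholder; resolve from the DID doc
): Promise<number> {
  let cursor: string | undefined
  let count = 0
  do {
    const params = new URLSearchParams({
      repo: did,
      collection: 'app.bsky.feed.post',
      limit: '100',
    })
    if (cursor) params.set('cursor', cursor)
    const res = await fetchFn(`${pdsHost}/xrpc/com.atproto.repo.listRecords?${params}`)
    const page = await res.json()
    for (const rec of page.records ?? []) {
      await store(rec)        // idempotent upsert keyed on rec.uri
      count++
    }
    cursor = page.cursor      // undefined on the last page, which ends the loop
  } while (cursor)
  return count
}
```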

### Problem 4 — Deduplication

Because posts can arrive from both Jetstream (live) and PDS backfill (historical), the same post (identified by uri = at://did/app.bsky.feed.post/rkey) must be idempotently upserted, not inserted twice.

Aurora Prism's PostgreSQL schema should have a unique constraint on uri in the posts table. Verify this constraint exists in migrations/ before relying on it.
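A sketch of that contract in code. The column names are illustrative and must be checked against the real schema; the parser shows the components an AT URI carries:

```typescript
// The dedup contract as a parameterized query (illustrative column names).
// The unique index on uri is what makes live + backfill ingestion converge.
export const UPSERT_POST_SQL = `
  INSERT INTO posts (uri, cid, author_did, text, created_at, indexed_at)
  VALUES ($1, $2, $3, $4, $5, now())
  ON CONFLICT (uri) DO NOTHING
`

// at://<did>/<collection>/<rkey> split into its parts, useful for validation
// before the upsert runs.
export function parseAtUri(uri: string): { did: string; collection: string; rkey: string } {
  const m = uri.match(/^at:\/\/([^/]+)\/([^/]+)\/([^/]+)$/)
  if (!m) throw new Error(`not an AT URI: ${uri}`)
  return { did: m[1], collection: m[2], rkey: m[3] }
}
```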

### Summary: What Must Be True for Complete Timelines

| Requirement | How to satisfy it |
|-------------|-------------------|
| No missed live posts | Persistent cursor resume on restart |
| No missed historical posts | Per-follow PDS backfill triggered at follow time |
| No duplicate posts | Unique constraint on uri + upsert logic |
| Correct chronological order | Order by indexedAt or post createdAt timestamp |
| Recovery from long downtime | PDS re-backfill if cursor window (72h) is exceeded |

These must be verified against Aurora Prism's actual implementation before launch. If any are missing, they represent required additions to the codebase — not optional optimizations.
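One way to make the "correct chronological order" row concrete is keyset pagination on (sort_at, cid) (assumed column names), so pages stay stable while new posts stream in:

```typescript
// Sketch of gap-free chronological paging: order by (sort_at, cid) and use the
// last row of each page as an opaque cursor. Column names are illustrative.

export interface FeedCursor { sortAt: string; cid: string }

export function encodeCursor(c: FeedCursor): string {
  return Buffer.from(`${c.sortAt}::${c.cid}`).toString('base64url')
}

export function decodeCursor(s: string): FeedCursor {
  const [sortAt, cid] = Buffer.from(s, 'base64url').toString().split('::')
  return { sortAt, cid }
}

export const TIMELINE_PAGE_SQL = `
  SELECT uri, cid, author_did, sort_at
  FROM posts
  WHERE author_did = ANY($1)          -- the viewer's follow set
    AND (sort_at, cid) < ($2, $3)     -- strictly before the cursor row
  ORDER BY sort_at DESC, cid DESC
  LIMIT $4
`
```

Breaking ties on cid means two posts with identical timestamps cannot straddle a page boundary, which offset-based pagination does not guarantee.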


## Phase 2: Aurora Prism Configuration

### 2.1 Clone and Generate Keys

Aurora Prism is deployed unmodified and treated as a managed backend, configured only via environment variables. (Any gaps found in the verification steps above would require upstream code contributions rather than local patches.)

git clone https://github.com/dollspace-gay/Aurora-Prism.git /opt/aurora-prism
cd /opt/aurora-prism
./oauth-keyset-json.sh       # generates OAuth JWKS keypair → oauth-keys.json
./setup-did-and-keys.sh      # generates did:web document

### 2.2 Aurora Prism .env

# /opt/aurora-prism/.env

DATABASE_URL=postgresql://aurora:STRONG_POSTGRES_PASSWORD@postgres:5432/aurora_prism
REDIS_URL=redis://redis:6379
SESSION_SECRET=<openssl rand -base64 32>
PORT=5000

# AT Proto identity — must match your domain exactly
APPVIEW_DID=did:web:your-domain.com
APPVIEW_HOSTNAME=your-domain.com

# Full Jetstream — all posts (~850 MB/day). No wantedDids filter until per-follow
# PDS backfill + dynamic DID reconnection are confirmed in Aurora Prism. Completeness first.
RELAY_URL=wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post&wantedCollections=app.bsky.feed.repost&wantedCollections=app.bsky.graph.follow

# Data management
DATA_RETENTION_DAYS=30     # prune unprotected (non-followed) content after 30 days
BACKFILL_DAYS=7            # backfill 7 days of history per new user login

### 2.3 What Aurora Prism Handles

Aurora Prism is the source of truth for all AT Proto concerns:

  • XRPC API surface (52 Bluesky-compatible endpoints)
  • AT Proto OAuth 2.0 (initiation, callback, token refresh)
  • Write proxying to users' own PDSes (likes, posts, follows etc. go to bsky.social or wherever the user's PDS is)
  • Jetstream consumption and PostgreSQL indexing
  • Health endpoints at /health and /ready
  • did:web DID document at /.well-known/did.json

## Phase 3: Next.js Application

### 3.1 Project Init

cd /opt
npx create-next-app@latest nextjs-app \
  --typescript --tailwind --app --no-src-dir --import-alias "@/*"

cd nextjs-app
npm install @atproto/api iron-session swr @tailwindcss/typography

### 3.2 Key Dependencies

| Package | Purpose |
|---------|---------|
| @atproto/api | Full typed XRPC client — point at Aurora Prism instead of bsky.social |
| iron-session | Encrypted, signed HTTP-only session cookies for server-side auth state |
| swr | Client-side data fetching, caching, revalidation |
| @tailwindcss/typography | Rich text rendering for post bodies |

### 3.3 Environment Variables

# /opt/nextjs-app/.env.local

# Internal Docker network URL (server-to-server, never touches the internet)
AURORA_PRISM_INTERNAL_URL=http://aurora-prism:5000

# Public URL (used in OAuth redirects and client-side JS)
NEXT_PUBLIC_AURORA_PRISM_URL=https://your-domain.com

# Session cookie encryption — must be different from Aurora Prism's SESSION_SECRET
SESSION_SECRET=<openssl rand -base64 32>
SESSION_COOKIE_NAME=myappview-session

NEXT_PUBLIC_APP_NAME="My AT Proto AppView"

### 3.4 AT Proto Agent Factory

// lib/atproto.ts
import { AtpAgent } from '@atproto/api'

// Server-side: uses internal Docker network URL (fast, no TLS overhead)
export function createServerAgent(accessJwt?: string): AtpAgent {
  const agent = new AtpAgent({
    service: process.env.AURORA_PRISM_INTERNAL_URL!,
  })
  if (accessJwt) {
    // Attach the user's session so Aurora Prism returns personalized data.
    // NOTE: verify this against the installed @atproto/api version. Newer
    // releases manage sessions internally, and resumeSession() (or setting
    // agent.sessionManager.session) may be required instead of direct assignment.
    agent.session = {
      accessJwt,
      refreshJwt: '',
      handle: '',
      did: '',
      active: true,
    }
  }
  return agent
}

### 3.5 Session Handling

// lib/session.ts
import { getIronSession } from 'iron-session'
import { cookies } from 'next/headers'

export interface SessionData {
  did: string
  handle: string
  accessJwt: string
  refreshJwt: string
}

const sessionOptions = {
  cookieName: process.env.SESSION_COOKIE_NAME!,
  password: process.env.SESSION_SECRET!,
  cookieOptions: {
    secure: process.env.NODE_ENV === 'production',
    httpOnly: true,
    sameSite: 'lax' as const,
  },
}

export async function getSession() {
  return getIronSession<SessionData>(await cookies(), sessionOptions)
}

### 3.6 Authentication API Routes

Login (app/api/auth/login/route.ts):

import { NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/session'

export async function POST(req: NextRequest) {
  const { identifier, password } = await req.json()

  // Forward credentials to Aurora Prism's createSession endpoint
  const res = await fetch(
    `${process.env.AURORA_PRISM_INTERNAL_URL}/xrpc/com.atproto.server.createSession`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ identifier, password }),
    }
  )

  if (!res.ok) {
    const err = await res.json()
    return NextResponse.json({ error: err.message ?? 'Login failed' }, { status: 401 })
  }

  const data = await res.json()
  const session = await getSession()
  session.did = data.did
  session.handle = data.handle
  session.accessJwt = data.accessJwt
  session.refreshJwt = data.refreshJwt
  await session.save()

  return NextResponse.json({ success: true, handle: data.handle })
}

Logout (app/api/auth/logout/route.ts):

import { NextResponse } from 'next/server'
import { getSession } from '@/lib/session'

export async function POST() {
  const session = await getSession()
  session.destroy()
  return NextResponse.json({ success: true })
}
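The login route stores refreshJwt, but nothing in the plan exercises it, and access tokens expire. A framework-free sketch of the exchange against com.atproto.server.refreshSession; a Next.js route would call this and rewrite the iron-session cookie:

```typescript
// Hedged sketch of token refresh. The fetchFn parameter exists so the logic
// can be tested without a live Aurora Prism instance.

type FetchLike = (url: string, init?: any) => Promise<{ ok: boolean; json(): Promise<any> }>

export async function refreshTokens(
  baseUrl: string,
  refreshJwt: string,
  fetchFn: FetchLike = fetch,
): Promise<{ accessJwt: string; refreshJwt: string } | null> {
  const res = await fetchFn(`${baseUrl}/xrpc/com.atproto.server.refreshSession`, {
    method: 'POST',
    // refreshSession authenticates with the *refresh* token, not the access token
    headers: { Authorization: `Bearer ${refreshJwt}` },
  })
  if (!res.ok) return null   // caller should destroy the session cookie
  const data = await res.json()
  return { accessJwt: data.accessJwt, refreshJwt: data.refreshJwt }
}
```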

### 3.7 Core Pages

File structure:

app/
├── layout.tsx                  Root layout with nav, session provider
├── page.tsx                    Home / timeline (SSR)
├── login/
│   └── page.tsx                Login form (client component)
├── profile/
│   └── [handle]/
│       └── page.tsx            Profile + author feed (SSR + OG tags)
├── post/
│   └── [...uri]/
│       └── page.tsx            Post thread view (SSR)
├── notifications/
│   └── page.tsx                Notifications list
└── api/
    ├── auth/
    │   ├── login/route.ts
    │   └── logout/route.ts
    ├── timeline/route.ts       Paginated timeline (used by client for infinite scroll)
    └── actions/
        ├── like/route.ts
        ├── repost/route.ts
        └── post/route.ts
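The tree lists app/api/timeline/route.ts without showing it. The part worth pinning down is input validation: a sketch that clamps limit to the getTimeline lexicon's 1..100 range before the handler forwards to agent.getTimeline:

```typescript
// Framework-free core of the timeline route's query handling. The Next.js
// handler would parse req.nextUrl.searchParams with this and then call
// agent.getTimeline({ limit, cursor }) with the viewer's session.

export interface TimelineQuery { limit: number; cursor?: string }

export function parseTimelineQuery(searchParams: URLSearchParams): TimelineQuery {
  const rawLimit = Number(searchParams.get('limit') ?? '30')
  // Clamp to the lexicon's 1..100 range so bad input cannot hammer the backend
  const limit = Math.min(100, Math.max(1, Number.isFinite(rawLimit) ? rawLimit : 30))
  const cursor = searchParams.get('cursor') ?? undefined
  return { limit, cursor }
}
```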

Timeline page (app/page.tsx):

import { redirect } from 'next/navigation'
import { getSession } from '@/lib/session'
import { createServerAgent } from '@/lib/atproto'
import { Timeline } from '@/components/Timeline'

export default async function HomePage() {
  const session = await getSession()
  if (!session.did) redirect('/login')

  const agent = createServerAgent(session.accessJwt)
  const timeline = await agent.getTimeline({ limit: 50 })

  // Pass only non-secret fields to the client component — never serialize
  // accessJwt/refreshJwt into client-side props
  return <Timeline initialData={timeline.data} viewer={{ did: session.did, handle: session.handle }} />
}

Profile page (app/profile/[handle]/page.tsx):

import { createServerAgent } from '@/lib/atproto'
import { ProfileView } from '@/components/ProfileView'  // component path assumed, mirroring components/Timeline

export async function generateMetadata({ params }: { params: { handle: string } }) {
  const agent = createServerAgent()
  const profile = await agent.getProfile({ actor: params.handle })
  return {
    title: profile.data.displayName ?? `@${params.handle}`,
    description: profile.data.description,
    openGraph: { images: profile.data.avatar ? [profile.data.avatar] : [] },
  }
}

export default async function ProfilePage({ params }: { params: { handle: string } }) {
  const agent = createServerAgent()
  const [profile, feed] = await Promise.all([
    agent.getProfile({ actor: params.handle }),
    agent.getAuthorFeed({ actor: params.handle, limit: 50 }),
  ])
  return <ProfileView profile={profile.data} feed={feed.data} />
}

### 3.8 Write Actions (Server-Side Proxy)

All mutations go through Next.js API routes, keeping AT Proto credentials server-side:

// app/api/actions/like/route.ts
import { NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/session'
import { createServerAgent } from '@/lib/atproto'

export async function POST(req: NextRequest) {
  const session = await getSession()
  if (!session.accessJwt) return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })

  const { uri, cid } = await req.json()
  const agent = createServerAgent(session.accessJwt)
  await agent.like(uri, cid)
  return NextResponse.json({ success: true })
}

export async function DELETE(req: NextRequest) {
  const session = await getSession()
  if (!session.accessJwt) return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })

  const { likeUri } = await req.json()
  const agent = createServerAgent(session.accessJwt)
  await agent.deleteLike(likeUri)
  return NextResponse.json({ success: true })
}
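On the client side, a hedged sketch of calling this proxy with optimistic state. Note the POST route as written returns only success, so the like record URI needed for a later unlike would have to come from a response field or SWR revalidation:

```typescript
// Optimistic like toggle against the /api/actions/like proxy route.
// fetchFn is injectable so the rollback logic can be tested offline.

type FetchLike = (url: string, init?: any) => Promise<{ ok: boolean }>

export interface LikeState { liked: boolean; count: number; likeUri?: string }

export async function toggleLike(
  post: { uri: string; cid: string },
  state: LikeState,
  fetchFn: FetchLike = fetch,
): Promise<LikeState> {
  const res = await fetchFn('/api/actions/like', state.liked
    ? { method: 'DELETE', headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ likeUri: state.likeUri }) }
    : { method: 'POST', headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ uri: post.uri, cid: post.cid }) })
  if (!res.ok) return state   // request failed: keep the prior state (rollback)
  return state.liked
    ? { liked: false, count: Math.max(0, state.count - 1) }
    : { liked: true, count: state.count + 1 }  // likeUri filled in on revalidation
}
```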

### 3.9 next.config.ts

import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  output: 'standalone',    // Required for the optimized Docker production build
  images: {
    remotePatterns: [
      { protocol: 'https', hostname: 'cdn.bsky.app' },
      { protocol: 'https', hostname: '*.bsky.network' },
      { protocol: 'https', hostname: 'your-domain.com' },
    ],
  },
}

export default nextConfig

## Phase 4: Docker Compose

### 4.1 Master docker-compose.yml

# /opt/docker-compose.yml
# (no top-level "version:" key — it is obsolete in Compose v2)

services:

  postgres:
    image: postgres:14-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: aurora_prism
      POSTGRES_USER: aurora
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - /mnt/appdata/postgres:/var/lib/postgresql/data
    networks:
      - appnet
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U aurora"]
      interval: 10s
      timeout: 5s
      retries: 5
    command: >
      postgres
        -c shared_buffers=2GB
        -c effective_cache_size=6GB
        -c work_mem=16MB
        -c maintenance_work_mem=256MB
        -c max_connections=200
        -c checkpoint_completion_target=0.9
        -c wal_buffers=64MB

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - /mnt/appdata/redis:/data
    networks:
      - appnet
    # 512MB is plenty for 1000 users — Aurora Prism defaults to 8GB which is excessive
    command: redis-server --maxmemory 512mb --maxmemory-policy allkeys-lru
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      retries: 5

  aurora-prism:
    image: ghcr.io/dollspace-gay/aurora-prism:latest
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    env_file: /opt/aurora-prism/.env
    environment:
      DATABASE_URL: postgresql://aurora:${POSTGRES_PASSWORD}@postgres:5432/aurora_prism
      REDIS_URL: redis://redis:6379
    volumes:
      - /opt/aurora-prism/oauth-keys.json:/app/oauth-keys.json:ro
    networks:
      - appnet
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 10s
      timeout: 5s
      retries: 10

  nextjs-app:
    build:
      context: /opt/nextjs-app
      dockerfile: Dockerfile
    restart: unless-stopped
    depends_on:
      aurora-prism:
        condition: service_healthy
    env_file: /opt/nextjs-app/.env.local
    environment:
      NODE_ENV: production
      AURORA_PRISM_INTERNAL_URL: http://aurora-prism:5000
    networks:
      - appnet

  nginx:
    image: nginx:alpine
    restart: unless-stopped
    depends_on:
      - aurora-prism
      - nextjs-app
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /opt/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
    networks:
      - appnet

networks:
  appnet:
    driver: bridge

.env at /opt/.env (shared secrets):

POSTGRES_PASSWORD=generate_a_very_strong_password_here

### 4.2 Next.js Dockerfile

# /opt/nextjs-app/Dockerfile
FROM node:20-alpine AS base

FROM base AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production

RUN addgroup -S nodejs && adduser -S nextjs -G nodejs

COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs
EXPOSE 3000
ENV PORT=3000 HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]

## Phase 5: Nginx Configuration

# /opt/nginx/nginx.conf
events { worker_connections 1024; }

http {
  limit_req_zone $binary_remote_addr zone=api:10m rate=30r/m;
  limit_req_zone $binary_remote_addr zone=general:10m rate=120r/m;

  upstream aurora_prism { server aurora-prism:5000; keepalive 32; }
  upstream nextjs        { server nextjs-app:3000;  keepalive 32; }

  # HTTP → HTTPS
  server {
    listen 80;
    server_name your-domain.com;
    return 301 https://$host$request_uri;
  }

  server {
    listen 443 ssl;
    server_name your-domain.com;

    ssl_certificate     /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    # AT Proto did:web document (served by Aurora Prism)
    location /.well-known/ {
      proxy_pass http://aurora_prism;
      proxy_set_header Host $host;
    }

    # Aurora Prism XRPC API
    location /xrpc/ {
      limit_req zone=api burst=20 nodelay;
      proxy_pass http://aurora_prism;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_read_timeout 60s;
    }

    # Aurora Prism health + OAuth endpoints
    # ("(/|$)" rather than a trailing "/" so the bare /health and /ready paths match)
    location ~ ^/(health|ready|oauth)(/|$) {
      proxy_pass http://aurora_prism;
      proxy_set_header Host $host;
    }

    # Next.js static assets (aggressively cached)
    location /_next/static/ {
      proxy_pass http://nextjs;
      expires 1y;
      add_header Cache-Control "public, immutable";
    }

    # Everything else → Next.js
    location / {
      limit_req zone=general burst=50 nodelay;
      proxy_pass http://nextjs;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }
  }
}

## Phase 6: TLS + AT Proto Identity

### 6.1 TLS Certificate

# Bootstrap: obtain the certificate before starting the Docker stack.
# certbot's standalone listener needs port 80 free, so stop the host nginx
# installed by the bootstrap script (the stack's own Nginx runs in Docker).
systemctl stop nginx && systemctl disable nginx
certbot certonly --standalone -d your-domain.com --email your@email.com --agree-tos --non-interactive

# Auto-renew via cron. Renewal also needs port 80, so pause the Docker Nginx around it.
(crontab -l; echo "0 3 * * * certbot renew --quiet --pre-hook 'docker compose -f /opt/docker-compose.yml stop nginx' --post-hook 'docker compose -f /opt/docker-compose.yml start nginx'") | crontab -

### 6.2 AT Proto did:web Identity

The did:web:your-domain.com identity requires:

  • https://your-domain.com/.well-known/did.json — served automatically by Aurora Prism
  • https://your-domain.com/.well-known/atproto-did — also served by Aurora Prism

Both are routed through Nginx via the /.well-known/ location block above.

Critical: Run ./setup-did-and-keys.sh before first launch. The APPVIEW_DID and APPVIEW_HOSTNAME env vars in Aurora Prism's .env must match your domain exactly. This cannot be changed easily after launch without regenerating keys.


## Phase 7: Deploy + Operations

### 7.1 Initial Deploy Sequence

# 1. On Droplet: ensure directories exist
mkdir -p /opt/aurora-prism /opt/nextjs-app /opt/nginx

# 2. Push configs (from local machine via rsync or scp)
rsync -avz ./nextjs-app/ droplet:/opt/nextjs-app/
rsync -avz ./nginx/     droplet:/opt/nginx/
# Aurora Prism is cloned directly on the Droplet (step in Phase 2)

# 3. Start all services
cd /opt
docker compose up -d

# 4. Run Aurora Prism DB migrations
docker compose exec aurora-prism npm run db:push

# 5. Verify health
curl https://your-domain.com/health
curl https://your-domain.com/xrpc/com.atproto.server.describeServer

### 7.2 Update Script

#!/bin/bash
# /opt/deploy.sh
set -e

docker pull ghcr.io/dollspace-gay/aurora-prism:latest
docker compose build nextjs-app
docker compose up -d --no-deps aurora-prism nextjs-app
docker compose exec aurora-prism npm run db:push

echo "Deploy complete ✓"

### 7.3 Monitoring

# Install DO Monitoring Agent (free, adds CPU/memory/disk alerts in DO console)
curl -sSL https://repos.insights.digitalocean.com/install.sh | sudo bash

# Watch logs
docker compose logs -f                        # all services
docker compose logs -f aurora-prism           # just the AppView
docker compose logs -f nextjs-app             # just the Next.js app

## Cost Summary

| Line item | Cost/month |
|-----------|------------|
| DO Droplet — 8 GB / 4 vCPU General Purpose | $48 |
| DO Block Storage — 100 GB | $10 |
| Domain name (annualized) | ~$1 |
| Let's Encrypt TLS | Free |
| DO Monitoring Agent | Free |
| **Total** | **~$59/month** |

## Todo List

To be added as a granular checklist by your agent


## Open Questions for Annotation

  1. Aurora Prism cursor persistence — must verify before launch. Does Aurora Prism write the Jetstream time_us cursor to PostgreSQL after each event batch? If it only stores it in-memory, a process restart causes a timeline gap for the downtime window. This must be confirmed by reading the data-plane/ source. If absent, it must be added before launch — it is a prerequisite for the completeness guarantee.

  2. Does Aurora Prism trigger per-follow PDS backfill? When an app.bsky.graph.follow event is processed, does the data-plane automatically fetch the new followee's post history from their PDS? If not, new follows will have no historical posts until the next manual backfill cycle. This is likely the biggest gap to verify — if missing, it requires a code contribution to Aurora Prism's data-plane.

  3. Upsert semantics on the posts table. The PostgreSQL schema must use INSERT ... ON CONFLICT (uri) DO NOTHING (or equivalent upsert) so Jetstream live events and PDS backfill don't create duplicate posts. Verify the unique constraint on uri exists in the migrations/ folder before launch.

  4. Full AT Proto OAuth 2.0 (DPoP/PKCE) vs app-password auth? The plan currently covers app-password auth (simpler, works today). True OAuth 2.0 is the decentralized ideal but adds significant implementation complexity. Which do you want first?

  5. Should Aurora Prism's built-in dashboard be accessible (e.g. at /admin) for monitoring, or fully hidden behind Nginx in production?

  6. CI/CD pipeline? GitHub Actions can SSH into the Droplet and run deploy.sh on every push to main — do you want that wired up as part of the plan?

  7. Domain name chosen? The did:web identity, TLS certificate, and Nginx config all depend on a fixed domain. It must be decided before running ./setup-did-and-keys.sh — the domain is baked into Aurora Prism's signing keys and cannot be changed without a full key rotation.