Deployment Infrastructure Implementation Plan#
For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
Goal: Package atBB as a production-ready Docker image with automated CI/CD and comprehensive deployment documentation.
Architecture: Single Docker container with nginx routing between appview (port 3000) and web (port 3001). Multi-stage build produces ~200-250MB production image. GitHub Actions handles PR checks and image publishing to GHCR.
Tech Stack: Docker (multi-stage builds), nginx, GitHub Actions, pnpm, Node.js 22
Prerequisites#
- Design document approved: docs/plans/2026-02-11-deployment-infrastructure-design.md
- Working in worktree: .worktrees/feat-deployment-infrastructure
- Branch: feat/deployment-infrastructure
- All tests passing baseline
Task 1: Docker Build Configuration Files#
Step 1: Create .dockerignore#
Purpose: Exclude unnecessary files from Docker build context for faster builds and smaller images.
File: Create .dockerignore (monorepo root)
# Git
.git/
.gitignore
.gitattributes
.worktrees/
# Dependencies
node_modules/
.pnpm-store/
# Build artifacts (will be regenerated in Docker)
dist/
*.tsbuildinfo
# Environment files (never copy secrets)
.env
.env.*
!.env.example
!.env.production.example
# Tests
**/__tests__/
**/*.test.ts
**/*.spec.ts
*.test.js
*.spec.js
# Documentation
*.md
!README.md
docs/
prior-art/
# IDE
.vscode/
.idea/
*.swp
*.swo
# OS
.DS_Store
Thumbs.db
# Logs
*.log
npm-debug.log*
# Nix/devenv (not needed in container)
.devenv/
.direnv/
devenv.nix
devenv.yaml
devenv.lock
# Git hooks
.lefthook/
lefthook.yml
# Bruno API testing
bruno/
# Misc
.cache/
coverage/
.turbo/
Verification:
# Check file exists
ls -la .dockerignore
# Check size
wc -l .dockerignore
Expected: File created with ~70 lines
Step 2: Commit .dockerignore#
git add .dockerignore
git commit -m "build: add .dockerignore for Docker build optimization"
Task 2: Nginx Routing Configuration#
Step 1: Create nginx.conf#
Purpose: Route /api/* to appview and everything else to web UI.
File: Create nginx.conf (monorepo root)
events {
worker_connections 1024;
}
http {
# Basic settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# MIME types
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log warn;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml+rss
application/rss+xml font/truetype font/opentype
application/vnd.ms-fontobject image/svg+xml;
upstream appview {
server 127.0.0.1:3000;
}
upstream web {
server 127.0.0.1:3001;
}
server {
listen 80;
server_name _;
# Health check endpoint (bypass routing, check nginx itself)
location /nginx-health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# API routes to appview
location /api/ {
proxy_pass http://appview;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# OAuth callback routes to appview
location /oauth/ {
proxy_pass http://appview;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# All other routes to web UI
location / {
proxy_pass http://web;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
}
}
Verification:
# Check file exists
ls -la nginx.conf
# Validate syntax (requires nginx installed locally, OK to skip if not available)
nginx -t -c $(pwd)/nginx.conf 2>&1 || echo "Skip validation - nginx not installed locally"
Expected: File created with ~90 lines
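The routing above is longest-prefix dispatch: `/api/` and `/oauth/` go to appview, everything else to web. As a toy illustration only (not nginx's real matcher, which also supports exact and regex locations that this config doesn't use), the decision table looks like:

```shell
# Simplified model of the location matching in nginx.conf above.
route() {
  case "$1" in
    /nginx-health) echo "nginx (200 healthy)" ;;   # answered by nginx itself
    /api/*)        echo "appview:3000" ;;          # API routes
    /oauth/*)      echo "appview:3000" ;;          # OAuth callbacks
    *)             echo "web:3001" ;;              # everything else: web UI
  esac
}

route /api/forums
route /oauth/callback
route /topics/42
```

Note that `location /api/` matches the prefix with the trailing slash, so a bare `/api` request falls through to the web UI.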
Step 2: Commit nginx.conf#
git add nginx.conf
git commit -m "build: add nginx routing configuration for container"
Task 3: Container Entrypoint Script#
Step 1: Create entrypoint.sh#
Purpose: Start nginx and both Node.js apps, ensuring all services start correctly and exit together.
File: Create entrypoint.sh (monorepo root)
#!/bin/bash
# bash is required for `wait -n` below; busybox /bin/sh on Alpine lacks it.
# (bash is installed in the runtime image.)
set -e
echo "Starting atBB container..."
# Start nginx in background
echo "Starting nginx..."
nginx -g "daemon off;" &
NGINX_PID=$!
# Wait for nginx to be ready
sleep 2
# Start appview in background
echo "Starting appview (port 3000)..."
cd /app/apps/appview
node dist/index.js &
APPVIEW_PID=$!
# Wait for appview to be ready
sleep 2
# Start web in background
echo "Starting web (port 3001)..."
cd /app/apps/web
node dist/index.js &
WEB_PID=$!
echo "All services started successfully"
echo " - nginx: PID $NGINX_PID (port 80)"
echo " - appview: PID $APPVIEW_PID (port 3000)"
echo " - web: PID $WEB_PID (port 3001)"
# Function to handle shutdown
shutdown() {
echo "Shutting down services..."
kill $WEB_PID 2>/dev/null || true
kill $APPVIEW_PID 2>/dev/null || true
kill $NGINX_PID 2>/dev/null || true
exit 0
}
# Trap signals
trap shutdown SIGTERM SIGINT
# Wait for any process to exit
wait -n
# If we get here, one process died - shut down everything
echo "A service has stopped unexpectedly, shutting down..."
shutdown
Verification:
# Check file exists
ls -la entrypoint.sh
# Make executable
chmod +x entrypoint.sh
# Verify executable bit set
ls -l entrypoint.sh | grep -q "^-rwxr" && echo "Executable: OK"
Expected: File created with ~50 lines, executable bit set
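The supervision pattern in entrypoint.sh (background children, a signal trap, `wait -n`) can be exercised in isolation. A minimal sketch, using `sleep` processes in place of nginx and the Node apps (requires bash >= 4.3 for `wait -n`; busybox `sh` does not support it):

```shell
#!/bin/bash
set -e

# stand-ins for long-running services
sleep 10 &
LONG_PID=$!
sleep 1 &       # "crashes" after one second
SHORT_PID=$!

shutdown() {
  kill "$LONG_PID" 2>/dev/null || true
  kill "$SHORT_PID" 2>/dev/null || true
}
trap shutdown TERM INT

# blocks until ANY child exits -- here, the 1-second sleep
wait -n
echo "a service stopped; shutting down the rest"
shutdown
SUPERVISOR_DONE=1
```

Without `-n`, a plain `wait` would block until every child exited, so a crashed appview would leave nginx serving errors indefinitely instead of letting the container restart.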
Step 2: Commit entrypoint.sh#
git add entrypoint.sh
git commit -m "build: add container entrypoint script for process management"
Task 4: Multi-Stage Dockerfile#
Step 1: Create Dockerfile - Build Stage#
Purpose: Build stage compiles TypeScript to JavaScript for all packages.
File: Create Dockerfile (monorepo root) - Part 1
# syntax=docker/dockerfile:1
# ============================================================================
# Build Stage - Compile TypeScript and build all packages
# ============================================================================
FROM node:22-alpine AS builder
# Install pnpm
RUN corepack enable && corepack prepare pnpm@latest --activate
# Set working directory
WORKDIR /build
# Copy package files for dependency installation
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY packages/db/package.json packages/db/
COPY packages/lexicon/package.json packages/lexicon/
COPY apps/appview/package.json apps/appview/
COPY apps/web/package.json apps/web/
# Install all dependencies (including devDependencies for build)
RUN pnpm install --frozen-lockfile
# Copy source code
COPY packages/ packages/
COPY apps/ apps/
COPY turbo.json ./
# Build all packages (lexicon → db → appview + web)
RUN pnpm build
# ============================================================================
# Runtime Stage - Minimal production image
# ============================================================================
Verification:
# Check file exists and contains FROM node:22-alpine
grep -q "FROM node:22-alpine AS builder" Dockerfile && echo "Build stage: OK"
Expected: Dockerfile created with build stage (~30 lines)
Step 2: Add Dockerfile - Runtime Stage#
Purpose: Runtime stage copies only production artifacts for small final image.
File: Modify Dockerfile - Part 2 (append to existing file)
FROM node:22-alpine AS runtime
# Install nginx and bash
RUN apk add --no-cache nginx bash
# Install pnpm
RUN corepack enable && corepack prepare pnpm@latest --activate
# Set working directory
WORKDIR /app
# Copy package files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY packages/db/package.json packages/db/
COPY packages/lexicon/package.json packages/lexicon/
COPY apps/appview/package.json apps/appview/
COPY apps/web/package.json apps/web/
# Install production dependencies only
RUN pnpm install --prod --frozen-lockfile
# Copy built artifacts from builder stage
COPY --from=builder /build/packages/db/dist packages/db/dist
COPY --from=builder /build/packages/lexicon/dist packages/lexicon/dist
COPY --from=builder /build/apps/appview/dist apps/appview/dist
COPY --from=builder /build/apps/web/dist apps/web/dist
# Copy nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf
# Copy entrypoint script
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Create nginx directories
RUN mkdir -p /var/log/nginx /var/lib/nginx/tmp /run/nginx
# Expose only port 80 (nginx)
EXPOSE 80
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost/nginx-health || exit 1
# Run entrypoint script
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
Verification:
# Verify complete Dockerfile
grep -q "FROM node:22-alpine AS builder" Dockerfile && \
grep -q "FROM node:22-alpine AS runtime" Dockerfile && \
grep -q "ENTRYPOINT.*entrypoint.sh" Dockerfile && \
echo "Dockerfile complete: OK"
# Count stages
grep -c "^FROM" Dockerfile
Expected: 2 stages found, file ~70 lines total
Step 3: Test Docker build locally#
# Build the image (this will take several minutes)
docker build -t atbb:test .
Expected: Build completes without error and the image is tagged atbb:test (the legacy builder prints "Successfully built"/"Successfully tagged"; BuildKit output differs but ends with the image being exported)
If build fails: Check error message, fix issue, re-run build
Step 4: Verify image size#
# Check final image size
docker images atbb:test --format "{{.Size}}"
Expected: Size between 150MB and 300MB (target ~200-250MB)
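If you want to script this check, here is a hedged helper sketch. It assumes docker reports the size with an `MB` suffix, as it does for images in this range:

```shell
# Parse docker's human-readable size string and test it against the
# 150-300MB window from the plan.
size_ok() {
  mb=$(printf '%s' "$1" | sed 's/MB$//')
  mb=${mb%%.*}   # drop any decimal part for integer comparison
  [ "$mb" -ge 150 ] && [ "$mb" -le 300 ]
}

size_ok "213MB" && echo "size within target"
size_ok "512MB" || echo "image larger than expected"
```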
Step 5: Test container starts#
# Create test .env file
cat > .env.test <<'EOF'
PORT=3000
FORUM_DID=did:plc:test
PDS_URL=https://bsky.social
DATABASE_URL=postgres://atbb:atbb@localhost:5432/atbb
OAUTH_PUBLIC_URL=http://localhost:3000
SESSION_SECRET=test-secret-key-at-least-32-chars-long-12345
EOF
# Start container detached (it will fail to connect to the DB, but services should still start)
docker run -d --rm --name atbb-smoke --env-file .env.test -p 8080:80 atbb:test
# Wait for startup
sleep 5
# Test nginx health check
curl -f http://localhost:8080/nginx-health
HEALTH_STATUS=$?
# Stop container (removed automatically via --rm)
docker stop atbb-smoke
# Clean up test env
rm .env.test
# Check result
if [ $HEALTH_STATUS -eq 0 ]; then
echo "Container test: PASS"
else
echo "Container test: FAIL - nginx not responding"
exit 1
fi
Expected: "healthy" response from nginx, "Container test: PASS"
Step 6: Commit Dockerfile#
git add Dockerfile
git commit -m "build: add multi-stage Dockerfile for production container
- Build stage: compile TypeScript with pnpm + turbo
- Runtime stage: slim image with nginx + production deps
- Target size: 200-250MB
- Exposes port 80, includes health check"
Task 5: GitHub Actions - CI Workflow#
Step 1: Create .github/workflows directory#
mkdir -p .github/workflows
Step 2: Create ci.yml workflow#
Purpose: Run lint, typecheck, and tests on every PR to catch issues before merge.
File: Create .github/workflows/ci.yml
name: CI
on:
  pull_request:
    branches: [main]
  push:
    branches: [main]
  # Required so publish.yml can invoke this as a reusable workflow
  workflow_call:
jobs:
test:
name: Lint, Typecheck, and Test
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22'
- name: Install pnpm
run: corepack enable && corepack prepare pnpm@latest --activate
- name: Get pnpm store directory
id: pnpm-cache
run: echo "STORE_PATH=$(pnpm store path)" >> $GITHUB_OUTPUT
- name: Setup pnpm cache
uses: actions/cache@v4
with:
path: ${{ steps.pnpm-cache.outputs.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Build packages
run: pnpm build
- name: Lint
run: pnpm turbo lint
- name: Run tests
run: pnpm test
env:
# Provide test database URL for CI
DATABASE_URL: postgres://atbb:atbb@localhost:5432/atbb_test
# Mock other required env vars for tests
FORUM_DID: did:plc:ci-test
PDS_URL: https://bsky.social
OAUTH_PUBLIC_URL: http://localhost:3000
SESSION_SECRET: ci-test-secret-at-least-32-chars-long
services:
postgres:
image: postgres:16-alpine
env:
POSTGRES_USER: atbb
POSTGRES_PASSWORD: atbb
POSTGRES_DB: atbb_test
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
Verification:
# Check file exists
ls -la .github/workflows/ci.yml
# Validate YAML syntax
python3 -c "import yaml; yaml.safe_load(open('.github/workflows/ci.yml'))" && echo "YAML valid: OK"
Expected: File created, YAML syntax valid
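For intuition about the cache key, here is a rough local analogue. GitHub's `hashFiles()` computes a SHA-256 over the matched files, so this single-file approximation will not reproduce the exact digest; it only shows the key's shape (`Linux` stands in for `runner.os`):

```shell
# Approximate the cache key ${{ runner.os }}-pnpm-store-${{ hashFiles(...) }}
lockfile_key() {
  printf 'Linux-pnpm-store-%s\n' "$(sha256sum "$1" | cut -d' ' -f1)"
}

echo "demo lock contents" > /tmp/pnpm-lock.yaml   # stand-in lockfile
lockfile_key /tmp/pnpm-lock.yaml
```

Any change to the lockfile produces a new key, forcing a fresh store; the `restore-keys` prefix lets a stale store be reused as a starting point.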
Step 3: Commit CI workflow#
git add .github/workflows/ci.yml
git commit -m "ci: add GitHub Actions workflow for PR checks
- Runs lint, typecheck, and tests on every PR
- Uses pnpm cache for faster builds
- Includes PostgreSQL service for database tests
- Blocks merge if checks fail"
Task 6: GitHub Actions - Publish Workflow#
Step 1: Create publish.yml workflow#
Purpose: Build and publish Docker images to GHCR after CI passes, on main push and version tags.
File: Create .github/workflows/publish.yml
name: Build and Publish
on:
push:
branches: [main]
tags: ['v*']
workflow_dispatch:
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
# Re-run CI checks before building (safety net)
ci:
name: Run CI Checks
uses: ./.github/workflows/ci.yml
build-and-push:
name: Build and Push Docker Image
runs-on: ubuntu-latest
needs: ci
permissions:
contents: read
packages: write
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
# Tag as 'latest' on main branch
type=raw,value=latest,enable={{is_default_branch}}
# Tag with git SHA on main branch (e.g., main-abc1234)
type=raw,value=main-{{sha}},enable={{is_default_branch}}
# Tag with version on version tags (e.g., v1.0.0)
type=semver,pattern={{version}}
# Tag with major.minor on version tags (e.g., v1.0)
type=semver,pattern={{major}}.{{minor}}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: linux/amd64
- name: Output image details
run: |
echo "### Docker Image Published :rocket:" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Images:**" >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo "${{ steps.meta.outputs.tags }}" >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
Verification:
# Check file exists
ls -la .github/workflows/publish.yml
# Validate YAML syntax
python3 -c "import yaml; yaml.safe_load(open('.github/workflows/publish.yml'))" && echo "YAML valid: OK"
# Verify it references ci.yml
grep -q "uses: ./.github/workflows/ci.yml" .github/workflows/publish.yml && echo "CI dependency: OK"
Expected: File created, YAML valid, references CI workflow
Step 2: Commit publish workflow#
git add .github/workflows/publish.yml
git commit -m "ci: add GitHub Actions workflow for Docker image publishing
- Builds and publishes to GHCR after CI passes
- Triggers on main push and version tags
- Tags: latest, main-<sha>, version numbers
- Uses GitHub Container Registry with automatic authentication
- Includes build cache for faster subsequent builds"
Task 7: Production Environment Template#
Step 1: Create .env.production.example#
Purpose: Template for operators to configure their production deployment.
File: Create .env.production.example
# =============================================================================
# atBB Production Environment Configuration
# =============================================================================
#
# Copy this file to .env or provide variables via your deployment system
# (Kubernetes secrets, docker-compose env_file, etc.)
#
# REQUIRED: All variables marked REQUIRED must be set for atBB to start
# OPTIONAL: Variables with defaults can be omitted
# -----------------------------------------------------------------------------
# Application Configuration
# -----------------------------------------------------------------------------
# Port the appview API listens on (inside container - nginx proxies to this)
# Default: 3000
# PORT=3000
# Port the web UI listens on (inside container - nginx proxies to this)
# Default: 3001 (set via WEB_PORT in web package, not shown here)
# -----------------------------------------------------------------------------
# AT Protocol Configuration
# -----------------------------------------------------------------------------
# REQUIRED: Your forum's DID (Decentralized Identifier)
# Obtain by creating an account on a PDS
# Example: did:plc:abcd1234xyz567890
FORUM_DID=
# REQUIRED: URL of your forum's Personal Data Server
# This is where your forum's records are stored
# Example: https://bsky.social
# Example: https://your-pds.example.com
PDS_URL=
# OPTIONAL: Jetstream firehose URL for real-time indexing
# Default: wss://jetstream2.us-east.bsky.network/subscribe
# JETSTREAM_URL=wss://jetstream2.us-east.bsky.network/subscribe
# -----------------------------------------------------------------------------
# Database Configuration
# -----------------------------------------------------------------------------
# REQUIRED: PostgreSQL connection string
# Format: postgres://username:password@host:port/database
# Example: postgres://atbb:secure_password@db.example.com:5432/atbb
# For managed databases (AWS RDS, DigitalOcean):
# - Use SSL: ?sslmode=require
# - Example: postgres://atbb:pass@mydb.abc123.us-east-1.rds.amazonaws.com:5432/atbb?sslmode=require
DATABASE_URL=
# -----------------------------------------------------------------------------
# OAuth & Session Configuration
# -----------------------------------------------------------------------------
# REQUIRED: Public URL where your forum is accessible
# Used for OAuth client_id and redirect_uri
# Must be HTTPS in production (except localhost for development)
# Example: https://forum.example.com
OAUTH_PUBLIC_URL=
# REQUIRED: Secret key for signing session tokens (min 32 characters)
# Generate with: openssl rand -hex 32
# IMPORTANT: Keep this secret! Changing it invalidates all sessions.
SESSION_SECRET=
# OPTIONAL: Session expiration in days
# Default: 7
# SESSION_TTL_DAYS=7
# OPTIONAL: Redis URL for session storage (multi-instance deployments)
# If not set, uses in-memory storage (single-instance only)
# Sessions will be lost on container restart with in-memory storage
# Example: redis://redis.example.com:6379
# Example: rediss://default:password@redis.example.com:6380 (TLS)
# REDIS_URL=
# -----------------------------------------------------------------------------
# Forum Service Account (for AppView writes to PDS)
# -----------------------------------------------------------------------------
# REQUIRED: Forum service account handle
# Example: forum.bsky.social
FORUM_HANDLE=
# REQUIRED: Forum service account password or app password
# Obtain from your PDS account settings
FORUM_PASSWORD=
# -----------------------------------------------------------------------------
# Deployment Notes
# -----------------------------------------------------------------------------
#
# 1. Run database migrations before starting:
# docker run --rm --env-file .env.production \
# ghcr.io/<org>/atbb:latest \
# pnpm --filter @atbb/appview db:migrate
#
# 2. Start the container:
# docker run -d --name atbb \
# -p 80:80 \
# --env-file .env.production \
# --restart unless-stopped \
# ghcr.io/<org>/atbb:latest
#
# 3. Configure reverse proxy (Caddy, nginx, Traefik) for HTTPS:
# - Proxy to container port 80
# - Enable HTTPS via Let's Encrypt
#
# See docs/deployment-guide.md for complete instructions
Verification:
# Check file exists
ls -la .env.production.example
# Verify all REQUIRED vars are documented
grep -c "REQUIRED:" .env.production.example
Expected: File created, multiple REQUIRED markers found
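A small pre-flight check against the filled-in file can catch empty required values before a deploy. Sketch only — the variable list mirrors the template above, and the demo file path is arbitrary:

```shell
required="FORUM_DID PDS_URL DATABASE_URL OAUTH_PUBLIC_URL SESSION_SECRET FORUM_HANDLE FORUM_PASSWORD"

check_env_file() {
  file="$1"; missing=0
  for var in $required; do
    # require a non-empty VAR=value line
    if ! grep -Eq "^${var}=.+" "$file"; then
      echo "missing or empty: $var"
      missing=1
    fi
  done
  return $missing
}

# demo against a deliberately incomplete file
cat > /tmp/env.demo <<'EOF'
FORUM_DID=did:plc:abc123
PDS_URL=https://bsky.social
SESSION_SECRET=
EOF
check_env_file /tmp/env.demo || echo "pre-flight failed as expected"
```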
Step 2: Commit .env.production.example#
git add .env.production.example
git commit -m "docs: add production environment configuration template
- Documents all required and optional environment variables
- Includes examples and default values
- Provides deployment commands and notes
- Serves as template for operators"
Task 8: Docker Compose Example#
Step 1: Create docker-compose.example.yml#
Purpose: Working example with PostgreSQL for local testing and small deployments.
File: Create docker-compose.example.yml
version: '3.8'
# =============================================================================
# atBB Docker Compose Example
# =============================================================================
#
# This file provides a complete working example for local testing or simple
# production deployments. It includes PostgreSQL and the atBB application.
#
# Usage:
# 1. Copy .env.production.example to .env and fill in required values
# 2. docker-compose -f docker-compose.example.yml up -d
# 3. Run migrations: docker-compose -f docker-compose.example.yml exec app \
# pnpm --filter @atbb/appview db:migrate
# 4. Access forum at http://localhost (or your OAUTH_PUBLIC_URL)
#
# For production: Use managed PostgreSQL and Redis instead of these services
# =============================================================================
services:
# PostgreSQL database
postgres:
image: postgres:16-alpine
container_name: atbb-postgres
environment:
POSTGRES_USER: atbb
POSTGRES_PASSWORD: atbb
POSTGRES_DB: atbb
volumes:
# Persist database data
- postgres_data:/var/lib/postgresql/data
ports:
# Expose for debugging (optional - can be removed for production)
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U atbb"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
# Redis for session storage (optional - comment out to use in-memory)
# redis:
# image: redis:7-alpine
# container_name: atbb-redis
# volumes:
# - redis_data:/data
# ports:
# - "6379:6379"
# healthcheck:
# test: ["CMD", "redis-cli", "ping"]
# interval: 10s
# timeout: 3s
# retries: 5
# restart: unless-stopped
# atBB application
app:
image: ghcr.io/${GITHUB_REPOSITORY:-your-org/atbb}:${VERSION:-latest}
container_name: atbb-app
ports:
- "80:80"
env_file:
- .env
environment:
# Override DATABASE_URL to use compose service name
DATABASE_URL: postgres://atbb:atbb@postgres:5432/atbb
# Uncomment to use Redis from compose
# REDIS_URL: redis://redis:6379
depends_on:
postgres:
condition: service_healthy
# Uncomment if using Redis
# redis:
# condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost/nginx-health"]
interval: 30s
timeout: 3s
start_period: 10s
retries: 3
restart: unless-stopped
volumes:
postgres_data:
driver: local
# Uncomment if using Redis
# redis_data:
# driver: local
# =============================================================================
# Management Commands
# =============================================================================
#
# Start services:
# docker-compose -f docker-compose.example.yml up -d
#
# View logs:
# docker-compose -f docker-compose.example.yml logs -f
#
# Run migrations:
# docker-compose -f docker-compose.example.yml exec app \
# pnpm --filter @atbb/appview db:migrate
#
# Stop services:
# docker-compose -f docker-compose.example.yml down
#
# Stop and remove data:
# docker-compose -f docker-compose.example.yml down -v
Verification:
# Check file exists
ls -la docker-compose.example.yml
# Validate YAML syntax
python3 -c "import yaml; yaml.safe_load(open('docker-compose.example.yml'))" && echo "YAML valid: OK"
# Check for required services
grep -q "postgres:" docker-compose.example.yml && \
grep -q "app:" docker-compose.example.yml && \
echo "Required services defined: OK"
Expected: File created, YAML valid, postgres and app services defined
Step 2: Test docker-compose configuration#
Note: This test requires .env file and built image. Skip if not available locally.
# Create minimal .env for testing (or skip if you already have one)
if [ ! -f .env ]; then
echo "Skipping docker-compose test - no .env file"
else
# Validate compose file
docker-compose -f docker-compose.example.yml config > /dev/null && \
echo "Docker Compose config valid: OK"
fi
Expected: "Docker Compose config valid: OK" or skip message
Step 3: Commit docker-compose.example.yml#
git add docker-compose.example.yml
git commit -m "docs: add Docker Compose example for local testing
- Includes PostgreSQL service with persistent volume
- Includes optional Redis service (commented out)
- Provides complete working example
- Documents management commands
- Health checks for all services"
Task 9: Administrator's Deployment Guide#
Step 1: Create deployment guide outline#
Purpose: Comprehensive guide for operators deploying atBB.
File: Create docs/deployment-guide.md - Part 1 (outline and prerequisites)
# atBB Deployment Guide
Complete guide for deploying and operating an atBB forum instance.
## Table of Contents
1. [Prerequisites](#prerequisites)
2. [Quick Start](#quick-start)
3. [AT Protocol Setup](#at-protocol-setup)
4. [Database Setup](#database-setup)
5. [Environment Configuration](#environment-configuration)
6. [Running Migrations](#running-migrations)
7. [Starting the Container](#starting-the-container)
8. [Reverse Proxy Setup](#reverse-proxy-setup)
9. [Monitoring & Logs](#monitoring--logs)
10. [Upgrading](#upgrading)
11. [Troubleshooting](#troubleshooting)
12. [Docker Compose Example](#docker-compose-example)
---
## Prerequisites
Before deploying atBB, ensure you have:
### Infrastructure
- **Docker** 20.10+ or compatible container runtime (Podman, containerd)
- **PostgreSQL** 14+ database (managed service recommended for production)
- **Domain name** with DNS configured to your server
- **HTTPS capable** reverse proxy (Caddy, nginx, Traefik)
### AT Protocol Requirements
- **Forum Account**: AT Protocol account (DID) for your forum identity
- **Personal Data Server (PDS)**: PDS instance to host your forum's records
- Options:
- Use hosted PDS (e.g., bsky.social)
- Self-host PDS (advanced - see [atproto.com](https://atproto.com))
- **Forum Credentials**: Handle and password for your forum account
### Knowledge
- Basic Docker usage (`docker run`, `docker logs`)
- Environment variable configuration
- Database connection strings
- Reverse proxy configuration (Caddy/nginx)
---
## Quick Start
For experienced operators who want to get running quickly:
```bash
# 1. Pull the latest image
docker pull ghcr.io/<your-org>/atbb:latest
# 2. Create environment file from template
cp .env.production.example .env
# Edit .env with your configuration
# 3. Run database migrations
docker run --rm --env-file .env \
ghcr.io/<your-org>/atbb:latest \
sh -c "cd /app && pnpm --filter @atbb/appview db:migrate"
# 4. Start the container
docker run -d --name atbb \
-p 80:80 \
--env-file .env \
--restart unless-stopped \
ghcr.io/<your-org>/atbb:latest
# 5. Configure reverse proxy for HTTPS (see Reverse Proxy Setup section)
# 6. Verify deployment
curl http://localhost/nginx-health
# Expected: "healthy"
```

See detailed sections below for an explanation of each step.
## AT Protocol Setup

Your forum needs an identity on the AT Protocol network.

### Step 1: Create Forum Account

Choose one of these options:

#### Option A: Use Existing PDS (Easier)

1. Create an account on a hosted PDS (e.g., bsky.app)
2. Choose a handle for your forum (e.g., `atbb-forum.bsky.social`)
3. Save your handle and password for the `.env` file
4. Find your DID:

```bash
# Replace YOUR_HANDLE with your actual handle
curl "https://bsky.social/xrpc/com.atproto.identity.resolveHandle?handle=YOUR_HANDLE"
# Response includes: "did":"did:plc:abc123..."
```

#### Option B: Self-Host PDS (Advanced)

1. Follow PDS setup guide
2. Create an account on your PDS instance
3. Note your DID, handle, and PDS URL

### Step 2: Configure OAuth

AT Protocol OAuth requires a publicly accessible HTTPS URL:

- ❌ `http://localhost` - Won't work for OAuth
- ❌ `http://192.168.1.100` - Local IPs not accessible from PDS
- ✅ `https://forum.example.com` - Public HTTPS URL (production)
- ✅ `https://abc123.ngrok.io` - Tunneling service (development)
- ✅ `https://forum.local` - Local HTTPS with mkcert (development)

For production, use your actual domain with HTTPS.
For development, see OAuth development options.

### Step 3: Lexicon Namespace

atBB uses the `space.atbb.*` lexicon namespace. Your forum's records will be:

- `space.atbb.forum.forum` - Forum metadata
- `space.atbb.forum.category` - Forum categories
- `space.atbb.post` - User posts (topics and replies)
- `space.atbb.membership` - User forum memberships
- `space.atbb.modAction` - Moderator actions

No configuration needed - these are defined in the `@atbb/lexicon` package.
**Verification:**
```bash
# Check file exists and has content
ls -la docs/deployment-guide.md
wc -l docs/deployment-guide.md
```
Expected: File created with ~150 lines
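The resolveHandle call in the guide's Step 1 returns JSON like `{"did":"did:plc:..."}`. A sed one-liner can pull the value out without jq (the sample DID below is made up):

```shell
# Sample response shape from com.atproto.identity.resolveHandle (fabricated DID)
response='{"did":"did:plc:abc123xyz"}'
did=$(printf '%s' "$response" | sed -n 's/.*"did":"\([^"]*\)".*/\1/p')
echo "FORUM_DID=$did"
```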
Step 2: Add deployment guide - Database and Configuration sections#
File: Append to docs/deployment-guide.md - Part 2
## Database Setup
atBB requires PostgreSQL 14 or later.
### Option A: Managed Database (Recommended)
Use a managed PostgreSQL service for production:
- **AWS RDS**: [RDS PostgreSQL](https://aws.amazon.com/rds/postgresql/)
- **DigitalOcean**: [Managed Databases](https://www.digitalocean.com/products/managed-databases-postgresql)
- **Google Cloud SQL**: [Cloud SQL for PostgreSQL](https://cloud.google.com/sql/postgresql)
- **Azure Database**: [Azure Database for PostgreSQL](https://azure.microsoft.com/en-us/products/postgresql)
**Benefits:**
- Automatic backups
- High availability
- Easy scaling
- Managed updates
**Connection string format:**
postgres://username:password@host:port/database?sslmode=require
### Option B: Self-Managed PostgreSQL
If hosting your own PostgreSQL:
1. Install PostgreSQL 14+
2. Create database and user:
```sql
CREATE DATABASE atbb;
CREATE USER atbb WITH PASSWORD 'secure_password';
GRANT ALL PRIVILEGES ON DATABASE atbb TO atbb;
```
3. Configure connection string:
   `postgres://atbb:secure_password@localhost:5432/atbb`
### Option C: Docker Compose (Development/Testing)

See Docker Compose Example section.

## Environment Configuration

Configuration is done via environment variables. Copy the template and fill in values:

```bash
cp .env.production.example .env
```

### Required Variables

Edit `.env` and set these required variables:

```bash
# Your forum's AT Protocol DID
FORUM_DID=did:plc:your-actual-did-here
# PDS URL (where your forum account lives)
PDS_URL=https://bsky.social
# Database connection (from Database Setup section)
DATABASE_URL=postgres://atbb:password@db.example.com:5432/atbb?sslmode=require
# Public URL for OAuth (your actual domain)
OAUTH_PUBLIC_URL=https://forum.example.com
# Session secret (generate with: openssl rand -hex 32)
SESSION_SECRET=your-64-character-hex-string-here
# Forum service account credentials
FORUM_HANDLE=forum.bsky.social
FORUM_PASSWORD=your-forum-account-password
```

### Optional Variables

These have sensible defaults but can be customized:

```bash
# Session expiration (default: 7 days)
SESSION_TTL_DAYS=7
# Jetstream firehose URL (default: bsky.network)
JETSTREAM_URL=wss://jetstream2.us-east.bsky.network/subscribe
# Redis for session storage (default: in-memory)
# Only needed for multi-instance deployments
# REDIS_URL=redis://redis.example.com:6379
```

### Using Individual Environment Variables

For Kubernetes or other orchestration tools:

```bash
docker run -d \
  -e FORUM_DID=did:plc:abc123 \
  -e PDS_URL=https://bsky.social \
  -e DATABASE_URL=postgres://... \
  -e OAUTH_PUBLIC_URL=https://forum.example.com \
  -e SESSION_SECRET=$(openssl rand -hex 32) \
  -e FORUM_HANDLE=forum.bsky.social \
  -e FORUM_PASSWORD=password \
  ghcr.io/<your-org>/atbb:latest
```
Running Migrations#
CRITICAL: Run database migrations before starting the application, especially after upgrades.
First Time Setup#
docker run --rm --env-file .env \
ghcr.io/<your-org>/atbb:latest \
sh -c "cd /app && pnpm --filter @atbb/appview db:migrate"
After Upgrades#
Check release notes for migration requirements. If migrations are needed:
# Stop running container
docker stop atbb
# Run migrations with new version
docker run --rm --env-file .env \
ghcr.io/<your-org>/atbb:v1.1.0 \
sh -c "cd /app && pnpm --filter @atbb/appview db:migrate"
# Remove old container
docker rm atbb

# Start new version
docker run -d --name atbb \
-p 80:80 \
--env-file .env \
--restart unless-stopped \
ghcr.io/<your-org>/atbb:v1.1.0
Troubleshooting Migrations#
If migrations fail:
# Check migration status
docker run --rm --env-file .env \
ghcr.io/<your-org>/atbb:latest \
sh -c "cd /app/apps/appview && pnpm exec drizzle-kit status"
# View migration history
docker exec atbb-postgres psql -U atbb -d atbb \
-c "SELECT * FROM drizzle_migrations ORDER BY created_at DESC LIMIT 5;"
**Verification:**
```bash
# Check file length
wc -l docs/deployment-guide.md
```
Expected: File now ~300 lines
Step 3: Add deployment guide - Container, Proxy, and Operations sections#
File: Append to docs/deployment-guide.md - Part 3
## Starting the Container
### Basic Usage
```bash
docker run -d \
--name atbb \
-p 80:80 \
--env-file .env \
--restart unless-stopped \
ghcr.io/<your-org>/atbb:latest
```
With Specific Version#
Use version tags instead of latest for production:
docker run -d \
--name atbb \
-p 80:80 \
--env-file .env \
--restart unless-stopped \
ghcr.io/<your-org>/atbb:v1.0.0
Available Tags#
- `latest` - Most recent build from main branch
- `v1.0.0` - Specific version tag
- `main-abc1234` - Specific commit SHA from main
Health Checks#
The container includes a built-in health check:
# Check container health status
docker ps --filter name=atbb --format "{{.Status}}"
# Expected: "Up X minutes (healthy)"
# Manual health check
curl http://localhost/nginx-health
# Expected: "healthy"
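The health endpoint also lends itself to a readiness gate in deploy scripts. A minimal sketch, assuming `curl` is available; the function name, polling interval, and default timeout are illustrative, not part of the image:

```bash
# Sketch: poll the nginx health endpoint until it answers, or give up.
wait_for_healthy() {
  url=$1
  timeout=${2:-60}   # seconds to wait before giving up
  elapsed=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "health check did not pass within ${timeout}s" >&2
      return 1
    fi
    sleep 2
    elapsed=$((elapsed + 2))
  done
  echo "healthy after ~${elapsed}s"
}

# Example (after `docker run`):
# wait_for_healthy http://localhost/nginx-health 60 || exit 1
```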
Reverse Proxy Setup#
IMPORTANT: Do not expose port 80 directly to the internet. Use a reverse proxy with HTTPS.
Option A: Caddy (Recommended)#
Caddy automatically handles HTTPS via Let's Encrypt.
Install Caddy:
# See https://caddyserver.com/docs/install
Caddyfile:
forum.example.com {
reverse_proxy localhost:80
}
Start Caddy:
caddy run --config Caddyfile
Option B: Nginx#
Install Nginx:
# Ubuntu/Debian
sudo apt install nginx certbot python3-certbot-nginx
# RHEL/CentOS
sudo yum install nginx certbot python3-certbot-nginx
/etc/nginx/sites-available/atbb:
server {
listen 80;
server_name forum.example.com;
location / {
proxy_pass http://localhost:80;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
Enable site and get HTTPS certificate:
sudo ln -s /etc/nginx/sites-available/atbb /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
sudo certbot --nginx -d forum.example.com
Option C: Traefik#
See Traefik documentation for Docker labels or configuration file setup.
Monitoring & Logs#
View Container Logs#
# Follow logs in real-time
docker logs -f atbb
# View last 100 lines
docker logs --tail 100 atbb
# View logs since 10 minutes ago
docker logs --since 10m atbb
Log Format#
Logs are structured JSON for easy parsing:
{
"level": "info",
"msg": "Starting appview server",
"port": 3000,
"timestamp": "2026-02-11T12:00:00.000Z"
}
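Because each log line is a self-contained JSON object, error lines can be pulled out with standard tools before reaching for a full log stack. A sketch using sample lines (in production, pipe `docker logs atbb 2>&1` instead of the `printf`):

```bash
# Sketch: count error-level entries in JSON-per-line logs.
# The printf stands in for `docker logs atbb 2>&1`.
printf '%s\n' \
  '{"level":"info","msg":"Starting appview server","port":3000}' \
  '{"level":"error","msg":"database connection lost"}' \
  '{"level":"info","msg":"indexer connected"}' \
  | grep -c '"level":"error"'
# prints: 1
```

With `jq` installed, `docker logs atbb 2>&1 | jq -r 'select(.level == "error") | .msg'` extracts the messages themselves; the field names match the format shown above.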
Health Monitoring#
Set up monitoring for the health endpoint:
# Add to your monitoring system (Prometheus, Datadog, etc.)
curl http://localhost/nginx-health
Expected response: healthy with 200 status code
Resource Usage#
Monitor container resource usage:
# Current stats
docker stats atbb --no-stream
# Continuous monitoring
docker stats atbb
Upgrading#
Before Upgrading#
- Read release notes for breaking changes
- Backup your database
- Test in staging environment if possible
Upgrade Process#
# 1. Pull new image
docker pull ghcr.io/<your-org>/atbb:v1.1.0
# 2. Stop current container
docker stop atbb
# 3. Run migrations (if required - check release notes)
docker run --rm --env-file .env \
ghcr.io/<your-org>/atbb:v1.1.0 \
sh -c "cd /app && pnpm --filter @atbb/appview db:migrate"
# 4. Remove old container
docker rm atbb
# 5. Start new version
docker run -d --name atbb \
-p 80:80 \
--env-file .env \
--restart unless-stopped \
ghcr.io/<your-org>/atbb:v1.1.0
# 6. Verify health
docker logs atbb
curl http://localhost/nginx-health
Rollback#
If the upgrade fails:
# Stop new version
docker stop atbb
docker rm atbb
# Start previous version
docker run -d --name atbb \
-p 80:80 \
--env-file .env \
--restart unless-stopped \
ghcr.io/<your-org>/atbb:v1.0.0
Important: If migrations were run, you may need to restore your database backup.
Downtime Notes#
Expected downtime: 10-30 seconds during container restart
Zero-downtime deployments: Not yet supported. Future work:
- Load balancer with health checks
- Blue-green deployment
- Redis session storage (prevents session loss)
Troubleshooting#
Container Won't Start#
Check logs:
docker logs atbb
Common issues:
- **Missing environment variables**
  - `Error: SESSION_SECRET is required`
  - Solution: Check `.env` file has all required variables
- **Database connection failed**
  - `Error: connection to server failed`
  - Solution: Verify `DATABASE_URL` is correct and the database is accessible
- **Port already in use**
  - `Error: bind: address already in use`
  - Solution: Stop other services on port 80 or use a different port: `-p 8080:80`
Health Check Failing#
curl http://localhost/nginx-health
If connection refused:
- Container not running: check `docker ps | grep atbb`
- Wrong port: check the `-p` mapping in the `docker run` command

If the endpoint returns an error:
- Check container logs: `docker logs atbb`
- Check the nginx config: `docker exec atbb nginx -t`
OAuth Not Working#
Symptoms:
- Login redirects fail
- "Invalid client_id" errors
Solutions:
1. Check `OAUTH_PUBLIC_URL` matches your actual domain
   - Wrong: `OAUTH_PUBLIC_URL=http://localhost`
   - Correct: `OAUTH_PUBLIC_URL=https://forum.example.com`
2. Verify HTTPS is enabled
   - OAuth requires HTTPS (except localhost for dev)
   - Check the reverse proxy SSL certificate
3. Check `SESSION_SECRET` length
   - Must be at least 32 characters
   - Generate a new one: `openssl rand -hex 32`
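The length requirement can be checked directly in the shell before restarting the container. A small sketch (the example value is a dummy standing in for the output of `openssl rand -hex 32`, not a real secret):

```bash
# Sketch: confirm SESSION_SECRET meets the 32-character minimum.
# Dummy value: 64 hex chars, the shape `openssl rand -hex 32` produces.
SESSION_SECRET="0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"

if [ "${#SESSION_SECRET}" -ge 32 ]; then
  echo "SESSION_SECRET length ok (${#SESSION_SECRET} chars)"
else
  echo "SESSION_SECRET too short (${#SESSION_SECRET} < 32)" >&2
fi
```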
Database Migrations Fail#
Check migration status:
docker run --rm --env-file .env \
ghcr.io/<your-org>/atbb:latest \
sh -c "cd /app/apps/appview && pnpm exec drizzle-kit status"
If stuck:
- Check database connectivity
- Verify the user in `DATABASE_URL` has the required permissions
- Check the migration table exists: `SELECT * FROM drizzle_migrations;`
Performance Issues#
Container using too much CPU/memory:
docker stats atbb
Solutions:
1. Set resource limits: `docker run -d --memory=2g --cpus=2 --name atbb ...`
2. Check database query performance
   - Enable PostgreSQL slow query logging
   - Review database indexes
3. Monitor the firehose connection
   - Check Jetstream connectivity
   - Review indexer logs for errors
Docker Compose Example#
For development or simple production deployments:
# 1. Copy environment template
cp .env.production.example .env
# Edit .env with your configuration
# 2. Copy compose example
cp docker-compose.example.yml docker-compose.yml
# 3. Start services
docker-compose up -d
# 4. Run migrations
docker-compose exec app sh -c "cd /app && pnpm --filter @atbb/appview db:migrate"
# 5. View logs
docker-compose logs -f
# 6. Stop services
docker-compose down
See docker-compose.example.yml for complete configuration.
Getting Help#
- GitHub Issues: `github.com/<your-org>/atbb/issues`
- Documentation: docs/
- AT Protocol: atproto.com
License#
AGPL-3.0 - See LICENSE file for details
**Verification:**
```bash
# Check complete file
wc -l docs/deployment-guide.md
# Should be ~700+ lines

# Verify all major sections present
grep -E "^## " docs/deployment-guide.md | wc -l
# Should be 12 sections
```
Expected: Complete guide with ~700+ lines, 12 major sections
Step 4: Commit deployment guide#
git add docs/deployment-guide.md
git commit -m "docs: add comprehensive deployment guide
Complete administrator's guide covering:
- Prerequisites and infrastructure requirements
- AT Protocol account setup and configuration
- Database setup (managed, self-hosted, compose)
- Environment variable configuration
- Migration procedures
- Container operations
- Reverse proxy setup (Caddy, nginx, Traefik)
- Monitoring, logging, and health checks
- Upgrade and rollback procedures
- Troubleshooting common issues
- Docker Compose example usage"
Task 10: Final Integration Testing#
Step 1: Verify all files committed#
# Check git status - should be clean
git status
Expected: "nothing to commit, working tree clean"
Step 2: Run tests to ensure nothing broke#
export PATH="/Users/jacob.zweifel/workspace/malpercio-dev/atbb-monorepo/.devenv/profile/bin:$PATH"
pnpm test
Expected: All tests pass (same count as baseline: 352 tests)
Step 3: Test Docker build one more time#
# Clean build to verify all files are included
docker build --no-cache -t atbb:final-test .
Expected: Build succeeds
Step 4: Quick container smoke test#
# Create test env
cat > .env.smoke-test <<'EOF'
PORT=3000
FORUM_DID=did:plc:test
PDS_URL=https://bsky.social
DATABASE_URL=postgres://test:test@nonexistent:5432/test
OAUTH_PUBLIC_URL=http://localhost:3000
SESSION_SECRET=smoke-test-secret-at-least-32-characters-long
FORUM_HANDLE=test.bsky.social
FORUM_PASSWORD=test
EOF
# Start container (will fail to connect to DB, but services should start)
docker run --rm -d --name atbb-smoke --env-file .env.smoke-test -p 8080:80 atbb:final-test
# Wait for startup
sleep 5
# Test nginx health
curl -f http://localhost:8080/nginx-health
SMOKE_RESULT=$?
# Clean up
docker stop atbb-smoke 2>/dev/null
rm .env.smoke-test
if [ $SMOKE_RESULT -eq 0 ]; then
echo "✅ Smoke test PASSED"
else
echo "❌ Smoke test FAILED"
exit 1
fi
Expected: "✅ Smoke test PASSED"
Step 5: Verify documentation completeness#
# Check all key files exist
echo "Verifying files..."
files=(
"Dockerfile"
".dockerignore"
"nginx.conf"
"entrypoint.sh"
".env.production.example"
"docker-compose.example.yml"
".github/workflows/ci.yml"
".github/workflows/publish.yml"
"docs/deployment-guide.md"
)
all_exist=true
for file in "${files[@]}"; do
if [ -f "$file" ]; then
echo "✅ $file"
else
echo "❌ $file MISSING"
all_exist=false
fi
done
if $all_exist; then
echo ""
echo "✅ All deployment files present"
else
echo ""
echo "❌ Some files missing"
exit 1
fi
Expected: All files present
Step 6: Push branch to remote#
# Push feature branch
git push -u origin feat/deployment-infrastructure
Expected: Branch pushed successfully
Task 11: Create Pull Request#
Step 1: Create PR using gh CLI#
# Create PR with detailed description
gh pr create \
--title "Deployment Infrastructure - Docker + CI/CD + Docs" \
--body "$(cat <<'EOF'
## Summary
Implements complete deployment infrastructure for atBB as designed in #[PR number from design doc].
## Changes
### Docker Containerization
- **Dockerfile**: Multi-stage build (build + slim runtime)
- Build stage: Compiles TypeScript with pnpm + turbo
- Runtime stage: ~200-250MB production image
- Includes nginx for routing, health checks
- **nginx.conf**: Routes `/api/*` → appview, `/` → web
- **entrypoint.sh**: Process manager for nginx + both apps
- **.dockerignore**: Optimizes build context (~70 excludes)
### CI/CD Pipeline
- **.github/workflows/ci.yml**: PR checks (lint, test, build)
- Runs on every PR
- Includes PostgreSQL service for tests
- Uses pnpm cache for speed
- **.github/workflows/publish.yml**: Image publishing
- Builds after CI passes
- Publishes to GHCR
- Tags: `latest`, `main-<sha>`, version numbers
- Triggered on main push and version tags
### Deployment Documentation
- **docs/deployment-guide.md**: Complete administrator's guide (~700 lines)
- Prerequisites and infrastructure requirements
- AT Protocol account setup
- Database configuration (managed, self-hosted, compose)
- Environment variables (required + optional)
- Migration procedures
- Container operations
- Reverse proxy setup (Caddy, nginx, Traefik)
- Monitoring and troubleshooting
- **.env.production.example**: Production config template
- **docker-compose.example.yml**: Working example with PostgreSQL
## Testing
- ✅ All 352 tests pass
- ✅ Docker build succeeds (~250MB image)
- ✅ Container smoke test passes (nginx health check)
- ✅ All configuration files validated
## Deployment
After merge, operators can deploy atBB by:
1. Pull image: `docker pull ghcr.io/<org>/atbb:latest`
2. Configure: Copy `.env.production.example` to `.env` and fill in values
3. Migrate: Run database migrations
4. Deploy: `docker run` with environment file
5. Proxy: Configure Caddy/nginx for HTTPS
See `docs/deployment-guide.md` for complete instructions.
## Open Questions
- [ ] What should the GitHub Container Registry path be? Currently using placeholder `ghcr.io/<org>/atbb`
- [ ] Do we want multi-arch builds (amd64 + arm64)?
## Checklist
- [x] Design document approved
- [x] Dockerfile implemented and tested
- [x] CI workflow implemented
- [x] Publish workflow implemented
- [x] Deployment guide written
- [x] All tests passing
- [x] Docker build successful
- [x] Smoke test passing
EOF
)" \
--label "enhancement" \
--label "deployment"
Expected: PR created with URL
Step 2: Record PR number#
# Get PR number
gh pr view --json number --jq .number
Expected: PR number (e.g., 27)
Success Criteria#
✅ All tasks completed:
- Docker configuration files created and tested
- CI/CD workflows implemented
- Comprehensive deployment guide written
- All tests passing
- Docker build succeeds (~200-250MB image)
- Container smoke test passes
- Pull request created
✅ Deliverables:
- Production-ready Dockerfile with multi-stage build
- GitHub Actions workflows for PR checks and image publishing
- Complete deployment documentation
- Example configurations (.env, docker-compose)
- Working containerized application
✅ Quality Gates:
- All 352 existing tests still pass
- Docker image builds successfully
- nginx health check responds correctly
- All configuration files validated (YAML, nginx)
- Documentation complete and comprehensive
Notes for Executors#
- Tasks are ordered for logical progression (config → build → CI → docs)
- Each step has explicit verification commands
- Commit after each major component (not after every step - batch related changes)
- Test Docker build after creating Dockerfile (Task 4)
- Smoke test at end ensures everything works together
- Use git worktree: `.worktrees/feat-deployment-infrastructure`
- Branch: `feat/deployment-infrastructure`
Open Issues to Resolve#
- GHCR path: Decide on organization name for `ghcr.io/<org>/atbb`
- Multi-arch: Determine if arm64 support is needed (adds build time)
- Secrets: Decide on CI secrets strategy (GitHub, external vault)