
docs: add Cloudflare target design spec

Worker + Container architecture with D1, Service Binding RPC
for communication, and hatk build --target cloudflare output.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

+171
docs/superpowers/specs/2026-03-18-cloudflare-target-design.md
# Cloudflare Target Design

## Goal

Add `target: 'cloudflare'` to hatk so apps can deploy to Cloudflare Workers + Containers with D1 as the database. Same hatk project, different deployment target.

## Audience

hatk users who want edge deployment or scale-to-zero, or who prefer Cloudflare over Railway/VPS.

## Architecture

When `hatk.config.ts` has `target: 'cloudflare'`, `hatk build` produces two deployment artifacts plus a generated Cloudflare config:

```
dist/
├── worker/          # Cloudflare Worker
│   └── index.ts     # Fetch handler: XRPC + SvelteKit SSR + OAuth + admin
├── container/       # Cloudflare Container (Node process)
│   └── index.ts     # Firehose subscription + backfill + label evaluation
└── wrangler.jsonc   # Generated config (D1 binding, Service Binding, Container)
```

### Component Split

| Component | Runtime | Responsibilities |
|-----------|---------|------------------|
| **Worker** | Cloudflare Worker | XRPC handlers, SvelteKit SSR, OAuth, admin API |
| **Container** | Cloudflare Container | Firehose, backfill, label evaluation |
| **D1** | Shared | All data storage (SQLite-compatible) |

### Communication

Worker → Container via **Service Binding RPC**. The Worker calls methods on the Container directly as function calls (near-zero latency, no HTTP overhead):

- `env.CONTAINER.resync(did)` — trigger backfill for a single repo
- `env.CONTAINER.resyncAll()` — trigger full re-enumeration
- `env.CONTAINER.getStatus()` — backfill progress

No Container → Worker communication needed.

---

## D1 Database Adapter

New file: `packages/hatk/src/database/adapters/d1.ts`

Implements the existing `DatabasePort` interface using Cloudflare's D1 binding API. No changes to the interface itself.
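As a rough sketch of what the adapter could look like, assuming `DatabasePort` exposes `query`/`execute`/`executeMultiple` with roughly these shapes (the real interface lives in `packages/hatk/src/database` and may differ; `D1Like` is a structural stand-in for the subset of Cloudflare's D1 binding used here):

```typescript
// Structural stand-in for the subset of Cloudflare's D1 binding used below.
interface D1Prepared {
  bind(...params: unknown[]): D1Prepared;
  all(): Promise<{ results: unknown[] }>;
  run(): Promise<unknown>;
}
interface D1Like {
  prepare(sql: string): D1Prepared;
  batch(stmts: D1Prepared[]): Promise<unknown[]>;
}

export class D1Adapter {
  constructor(private d1: D1Like) {}

  // Reads: rows come back via .all().results
  async query(sql: string, params: unknown[] = []): Promise<unknown[]> {
    const { results } = await this.d1.prepare(sql).bind(...params).all();
    return results;
  }

  // Writes: .run() executes without returning rows
  async execute(sql: string, params: unknown[] = []): Promise<void> {
    await this.d1.prepare(sql).bind(...params).run();
  }

  // Multi-statement scripts: split on ';' and submit as one atomic batch
  async executeMultiple(sql: string): Promise<void> {
    const stmts = sql
      .split(';')
      .map((s) => s.trim())
      .filter((s) => s.length > 0)
      .map((s) => this.d1.prepare(s));
    if (stmts.length > 0) await this.d1.batch(stmts);
  }
}
```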
### Query Execution

- **`query(sql, params)`** — `d1.prepare(sql).bind(...params).all()`
- **`execute(sql, params)`** — `d1.prepare(sql).bind(...params).run()`
- **`executeMultiple(sql)`** — split by `;`, run as `d1.batch([...])`

### Transactions

D1 has no `BEGIN/COMMIT/ROLLBACK`. The adapter fakes it:

1. `beginTransaction()` — start buffering statements
2. `execute()` during a transaction — append to the buffer instead of executing
3. `commit()` — flush the buffer as `d1.batch([...])` (atomic — all succeed or all fail)
4. `rollback()` — clear the buffer

This preserves the same atomicity guarantee as real transactions.

### Bulk Insert

The existing `BulkInserter` interface stays the same. The D1 implementation:

1. `append(values)` — generate an INSERT statement, add to the batch buffer
2. `flush()` — send the buffer as `d1.batch([...])`, clear the buffer
3. Buffer size tuned to stay under D1's CPU limits per batch

No prepared statements or native appenders — all dynamic SQL generation.

### Dialect

D1 is SQLite under the hood. Reuse the SQLite dialect's type map and placeholder style. No new dialect flags — the D1 adapter handles its own constraints internally.

### FTS

D1 supports FTS5. The existing SQLite search port should work with D1 as-is.

---

## Config

```ts
// hatk.config.ts
export default defineConfig({
  target: 'cloudflare', // new field, default: 'node'
  // ... everything else stays the same
})
```

No other config changes. The `database` path field is ignored when target is `cloudflare` (the D1 binding is configured in `wrangler.jsonc`).
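For orientation, the generated `wrangler.jsonc` might look roughly like this. This is a sketch, not a committed format: the field values are placeholders, and the exact container/service wiring is an assumption about what the build step would emit.

```jsonc
{
  "name": "my-hatk-app",
  "main": "dist/worker/index.js",
  "compatibility_date": "2026-03-18",
  // D1 binding consumed by the Worker's D1 adapter (env.DB)
  "d1_databases": [
    { "binding": "DB", "database_name": "my-hatk-app", "database_id": "<filled in by wrangler>" }
  ],
  // Service Binding the Worker uses for Container RPC (env.CONTAINER)
  "services": [
    { "binding": "CONTAINER", "service": "my-hatk-app-container" }
  ]
}
```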
---

## Worker Entry

Standard Cloudflare Workers fetch handler:

```ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // 1. Initialize D1 adapter from env.DB binding
    // 2. Try XRPC routes (same handlers, D1-backed ctx)
    // 3. Try admin routes (resync → env.CONTAINER.resync(did) via RPC)
    // 4. Fall through to SvelteKit for everything else
  }
}
```

XRPC handlers don't change — they receive `ctx` with `ctx.db.query()` backed by D1 instead of SQLite. OAuth is pure JS crypto, no native deps.

SvelteKit uses `@sveltejs/adapter-cloudflare` instead of `adapter-node`.

### Admin Resync Change

The only behavioral difference from the Node target: admin resync calls `env.CONTAINER.resync(did)` over RPC instead of `triggerAutoBackfill(did)` in-process.

---

## Container Entry

A standard Node process (Cloudflare Containers support Node). Essentially the current `main.ts` minus the HTTP server:

- Connects to the firehose relay via WebSocket
- Processes commits, validates records, writes to D1
- Runs the backfill loop for pending repos
- Exposes RPC methods for the Worker to call

None of the Workers-style CPU/memory limits apply — it's a real container.

---

## Build Step

`hatk build` with `target: 'cloudflare'`:

1. Run `@sveltejs/adapter-cloudflare` for SvelteKit
2. Bundle XRPC handlers + OAuth into the Worker entry
3. Bundle firehose + backfill into the Container entry
4. Generate `wrangler.jsonc` with the D1 binding and the Service Binding to the Container

The user deploys with `npx wrangler deploy`.

---

## Known Limitations

- **D1 size limit**: 10 GB per database, and it cannot be increased. Sufficient for most hatk apps, but large-scale indexing may hit it. Document as a constraint of the Cloudflare target.
- **Backfill speed**: D1 writes are ~30 ms (HTTP) vs sub-ms for local SQLite, so backfill will be noticeably slower on Cloudflare.
- **No horizontal sharding**: a single D1 database. Cloudflare's recommended pattern is per-tenant sharding, but that doesn't apply to firehose indexing.

---

## Implementation Scope

| Component | Estimated Lines | Description |
|-----------|-----------------|-------------|
| D1 adapter | ~400 | `DatabasePort` implementation with batch transactions, bulk inserter |
| Dialect update | ~20 | Minimal — reuse SQLite dialect |
| Adapter factory | ~20 | Add `d1` engine selection |
| Worker entry | ~150 | Fetch handler wiring XRPC + SvelteKit + OAuth |
| Container entry | ~100 | Firehose + backfill + RPC methods |
| Build command | ~200 | `--target cloudflare` output generation + `wrangler.jsonc` |
| Config | ~10 | `target` field in `HatkConfig` |
| **Total** | **~900** | |
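
---

## Appendix: Transaction Buffering Sketch

The faked-transaction scheme from the D1 adapter section, sketched in isolation. Class and interface names here are illustrative; the real adapter would fold this into the `DatabasePort` implementation.

```typescript
// Buffer statements between beginTransaction() and commit(), then flush
// them atomically via d1.batch(). Stand-in types for the D1 binding subset:
interface Stmt { run(): Promise<unknown>; }
interface BatchDb {
  prepare(sql: string): { bind(...params: unknown[]): Stmt };
  batch(stmts: Stmt[]): Promise<unknown[]>;
}

class BufferedTransaction {
  private buffer: Stmt[] | null = null; // null = no transaction open
  constructor(private db: BatchDb) {}

  beginTransaction(): void {
    this.buffer = [];
  }

  async execute(sql: string, params: unknown[] = []): Promise<void> {
    const stmt = this.db.prepare(sql).bind(...params);
    if (this.buffer) {
      this.buffer.push(stmt); // inside a transaction: defer until commit
    } else {
      await stmt.run(); // outside a transaction: run immediately
    }
  }

  async commit(): Promise<void> {
    if (!this.buffer) throw new Error('no transaction in progress');
    const stmts = this.buffer;
    this.buffer = null;
    // d1.batch() is atomic: all statements succeed or all fail
    if (stmts.length > 0) await this.db.batch(stmts);
  }

  rollback(): void {
    this.buffer = null; // nothing was executed, so just drop the buffer
  }
}
```

One consequence of this design worth keeping in mind: reads issued mid-transaction cannot observe buffered writes, since nothing executes until `commit()`.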