hatk.dev: AppView in a box as a Vite plugin

docs: add Cloudflare target implementation plan

9 tasks in 5 batches: D1 adapter, Worker entry, Container entry,
build command, integration and docs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

+520
docs/superpowers/plans/2026-03-18-cloudflare-target.md
# Cloudflare Target Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add `target: 'cloudflare'` to hatk so apps deploy to Cloudflare Workers + Containers with D1 as the database.

**Architecture:** Worker handles HTTP (XRPC, SvelteKit SSR, OAuth, admin). Container handles firehose + backfill. Both share D1. Worker calls Container via Service Binding RPC for resync.

**Tech Stack:** Cloudflare Workers, Cloudflare Containers, D1, Service Bindings RPC, `@sveltejs/adapter-cloudflare`

**Design doc:** `docs/superpowers/specs/2026-03-18-cloudflare-target-design.md`

---

## Batch 1: Config + D1 Adapter

### Task 1: Add `target` field to config

**Files:**
- Modify: `packages/hatk/src/config.ts`

**Step 1: Add `target` to `HatkConfig` and `HatkConfigInput`**

In the `HatkConfig` interface, add after `databaseEngine`:

```ts
target: 'node' | 'cloudflare'
```

In `HatkConfigInput`, `target` is already optional via the `Partial<>` wrapping.
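
The env-over-file-over-default precedence that `loadConfig()` applies can be illustrated standalone. This is a sketch; `resolveTarget` is a hypothetical helper for illustration, not part of hatk:

```ts
// Hypothetical helper showing the resolution order; not hatk's actual code.
type Target = 'node' | 'cloudflare'

function resolveTarget(
  env: { HATK_TARGET?: string },
  parsed: { target?: Target },
): Target {
  // The env var wins, then the config-file value, then the default.
  return (env.HATK_TARGET as Target | undefined) || parsed.target || 'node'
}
```
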

In `loadConfig()`, add to the `config` object construction (after the `databaseEngine` line):

```ts
target: (env.HATK_TARGET || parsed.target || 'node') as HatkConfig['target'],
```

**Step 2: Verify build**

Run: `npm run build`
Expected: Compiles without errors

**Step 3: Commit**

```bash
git add packages/hatk/src/config.ts
git commit -m "feat: add target field to HatkConfig (node | cloudflare)"
```

### Task 2: Add `Dialect` type for D1

**Files:**
- Modify: `packages/hatk/src/database/ports.ts`

**Step 1: Add `d1` to the `Dialect` union**

Change line 1 from:

```ts
export type Dialect = 'duckdb' | 'sqlite' | 'postgres'
```

to:

```ts
export type Dialect = 'duckdb' | 'sqlite' | 'd1' | 'postgres'
```

**Step 2: Commit**

```bash
git add packages/hatk/src/database/ports.ts
git commit -m "feat: add d1 to Dialect type"
```

### Task 3: Create D1 database adapter

**Files:**
- Create: `packages/hatk/src/database/adapters/d1.ts`

This is the core of the Cloudflare support. The adapter implements `DatabasePort` using Cloudflare's D1 binding API. It fakes transactions by buffering statements and flushing them as a single `d1.batch()` call.

**Step 1: Write the adapter**

```ts
import type { DatabasePort, BulkInserter, Dialect } from '../ports.ts'

/**
 * D1 database adapter for Cloudflare Workers/Containers.
 *
 * D1 is SQLite under the hood but accessed via an HTTP-based binding API.
 * Key differences from the SQLite adapter:
 * - No raw transactions — uses d1.batch() for atomic multi-statement execution
 * - No prepared statement reuse — each query is a fresh prepare+bind
 * - Bulk inserts use batched INSERT statements instead of native appenders
 */

/** Minimal D1 type definitions (matches Cloudflare's D1Database binding) */
interface D1Database {
  prepare(sql: string): D1PreparedStatement
  batch<T = unknown>(statements: D1PreparedStatement[]): Promise<D1Result<T>[]>
  exec(sql: string): Promise<D1ExecResult>
}

interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement
  all<T = Record<string, unknown>>(): Promise<D1Result<T>>
  run(): Promise<D1Result>
  first<T = Record<string, unknown>>(column?: string): Promise<T | null>
}

interface D1Result<T = unknown> {
  results: T[]
  success: boolean
  meta: Record<string, unknown>
}

interface D1ExecResult {
  count: number
  duration: number
}

/**
 * Translate DuckDB-style $1, $2 placeholders to ? placeholders.
 * Same logic as the SQLite adapter — D1 uses ? style.
 */
function translateParams(sql: string, params: unknown[]): { sql: string; params: unknown[] } {
  if (params.length === 0) return { sql, params }

  const expandedParams: unknown[] = []
  const translated = sql.replace(/\$(\d+)/g, (_match, numStr) => {
    const idx = parseInt(numStr, 10) - 1
    expandedParams.push(params[idx])
    return '?'
  })

  return { sql: translated, params: expandedParams }
}

export class D1Adapter implements DatabasePort {
  dialect: Dialect = 'd1'

  private db!: D1Database
  private txBuffer: D1PreparedStatement[] | null = null

  /**
   * Initialize with an existing D1 binding (from env.DB in Worker/Container).
   * The `path` argument is ignored — D1 bindings are configured in wrangler.jsonc.
   */
  async open(path: string): Promise<void> {
    // D1 binding is injected via initWithBinding(), not opened by path.
    // This is a no-op if already initialized.
    if (!this.db) {
      throw new Error('D1Adapter requires initWithBinding(db) before use')
    }
  }

  /** Set the D1 binding directly (called before open). */
  initWithBinding(db: D1Database): void {
    this.db = db
  }

  close(): void {
    // D1 bindings don't need explicit cleanup
  }

  async query<T = Record<string, unknown>>(sql: string, params: unknown[] = []): Promise<T[]> {
    const t = translateParams(sql, params)
    const stmt = this.db.prepare(t.sql).bind(...t.params)
    const result = await stmt.all<T>()
    return result.results
  }

  async execute(sql: string, params: unknown[] = []): Promise<void> {
    const t = translateParams(sql, params)
    const stmt = this.db.prepare(t.sql).bind(...t.params)

    // If inside a transaction, buffer instead of executing
    if (this.txBuffer !== null) {
      this.txBuffer.push(stmt)
      return
    }

    await stmt.run()
  }

  async executeMultiple(sql: string): Promise<void> {
    // D1's exec() handles multi-statement SQL
    await this.db.exec(sql)
  }

  async beginTransaction(): Promise<void> {
    this.txBuffer = []
  }

  async commit(): Promise<void> {
    if (this.txBuffer === null) return
    const statements = this.txBuffer
    this.txBuffer = null
    if (statements.length > 0) {
      await this.db.batch(statements)
    }
  }

  async rollback(): Promise<void> {
    this.txBuffer = null
  }

  async createBulkInserter(
    table: string,
    columns: string[],
    options?: { onConflict?: 'ignore' | 'replace'; batchSize?: number },
  ): Promise<BulkInserter> {
    const placeholders = columns.map(() => '?').join(', ')
    const conflict =
      options?.onConflict === 'ignore' ? ' OR IGNORE' : options?.onConflict === 'replace' ? ' OR REPLACE' : ''
    const sqlTemplate = `INSERT${conflict} INTO ${table} (${columns.join(', ')}) VALUES (${placeholders})`
    const buffer: D1PreparedStatement[] = []
    const batchSize = options?.batchSize ?? 200 // smaller batches for D1 CPU limits
    const db = this.db

    const flushBuffer = async () => {
      if (buffer.length === 0) return
      // Snapshot and clear before awaiting so appends arriving mid-flight aren't dropped
      const pending = buffer.splice(0, buffer.length)
      await db.batch(pending)
    }

    return {
      append(values: unknown[]) {
        buffer.push(db.prepare(sqlTemplate).bind(...values))
        if (buffer.length >= batchSize) {
          // Kick off a batch eagerly; flush()/close() drain whatever remains
          void flushBuffer()
        }
      },
      async flush() {
        await flushBuffer()
      },
      async close() {
        await flushBuffer()
      },
    }
  }
}
```

**Step 2: Verify build**

Run: `npm run build`
Expected: Compiles without errors

**Step 3: Commit**

```bash
git add packages/hatk/src/database/adapters/d1.ts
git commit -m "feat: add D1 database adapter for Cloudflare"
```

### Task 4: Add D1 to adapter factory and dialect

**Files:**
- Modify: `packages/hatk/src/database/adapter-factory.ts`
- Modify: `packages/hatk/src/database/dialect.ts`

**Step 1: Add a `d1` case to the adapter factory**

Read `packages/hatk/src/database/adapter-factory.ts`.
Add a new case after the `sqlite` case:

```ts
case 'd1': {
  const { D1Adapter } = await import('./adapters/d1.ts')
  const { SQLiteSearchPort } = await import('./adapters/sqlite-search.ts')
  const adapter = new D1Adapter()
  // D1 uses SQLite FTS5, same search port
  const searchPort = new SQLiteSearchPort(adapter)
  return { adapter, searchPort }
}
```

Also update the function signature to accept `'d1'`:

```ts
export async function createAdapter(engine: 'duckdb' | 'sqlite' | 'd1'): Promise<{
```

**Step 2: Add a D1 dialect to dialect.ts**

Read `packages/hatk/src/database/dialect.ts` to understand the `SqlDialect` shape. Add a D1 dialect entry that reuses the SQLite dialect's values; since D1 is SQLite under the hood, the D1 dialect should be identical to the SQLite one.

**Step 3: Verify build**

Run: `npm run build`
Expected: Compiles without errors

**Step 4: Commit**

```bash
git add packages/hatk/src/database/adapter-factory.ts packages/hatk/src/database/dialect.ts
git commit -m "feat: wire D1 into adapter factory and dialect"
```

---

## Batch 2: Worker Entry

### Task 5: Create Worker entry point

**Files:**
- Create: `packages/hatk/src/cloudflare/worker.ts`

This is the Cloudflare Worker fetch handler. It wires together the same XRPC handlers, OAuth, and admin routes that `server.ts` provides, but in a Worker context with D1.
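
The routing shape might look like the sketch below. The `Env` bindings and route paths are assumptions for illustration; the real handler would reuse `createHandler()` from server.ts rather than the stub responses shown here:

```ts
// Sketch only: bindings and routes are assumed shapes, not hatk's final API.
interface D1DatabaseLike {
  prepare(sql: string): unknown
}
interface ContainerBinding {
  resync(did: string): Promise<void>
}
export interface Env {
  DB: D1DatabaseLike
  CONTAINER: ContainerBinding
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url)

    if (url.pathname.startsWith('/xrpc/')) {
      // Real version: lazily initWithBinding(env.DB), then dispatch via createHandler()
      return new Response(
        JSON.stringify({ ok: true, nsid: url.pathname.slice('/xrpc/'.length) }),
        { headers: { 'content-type': 'application/json' } },
      )
    }

    if (url.pathname.startsWith('/admin/resync/')) {
      // Service Binding RPC to the Container instead of triggerAutoBackfill
      const did = decodeURIComponent(url.pathname.slice('/admin/resync/'.length))
      await env.CONTAINER.resync(did)
      return new Response('resync started', { status: 202 })
    }

    // Non-API routes would fall through to the SvelteKit handler here
    return new Response('not found', { status: 404 })
  },
}

export default worker
```
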

**Step 1: Read existing server code for reference**

Read these files to understand the current request handling:
- `packages/hatk/src/server.ts` — the `createHandler()` function and route matching
- `packages/hatk/src/server-init.ts` — server directory initialization
- `packages/hatk/src/adapter.ts` — HTTP serving
- `packages/hatk/src/main.ts` — full startup sequence

**Step 2: Write the Worker entry**

Create `packages/hatk/src/cloudflare/worker.ts` that:

1. Defines an `Env` interface with `DB: D1Database` and `CONTAINER: ContainerBinding`
2. Exports a default fetch handler
3. On first request, initializes the D1 adapter via `initWithBinding(env.DB)`
4. Runs XRPC route matching (reuse `createHandler()` from server.ts)
5. For admin resync routes, calls `env.CONTAINER.resync(did)` via RPC instead of `triggerAutoBackfill`
6. Falls through to SvelteKit for non-API routes

Note: The exact SvelteKit integration depends on the `@sveltejs/adapter-cloudflare` output. For now, create a placeholder that handles XRPC + admin + OAuth, and mark where SvelteKit would slot in.

**Step 3: Verify build**

Run: `npm run build`
Expected: Compiles without errors

**Step 4: Commit**

```bash
git add packages/hatk/src/cloudflare/worker.ts
git commit -m "feat: add Cloudflare Worker entry point"
```

---

## Batch 3: Container Entry

### Task 6: Create Container entry point

**Files:**
- Create: `packages/hatk/src/cloudflare/container.ts`

This is the Cloudflare Container entry — a long-lived Node process that runs the firehose and backfill, writing to D1.
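
A sketch of the shape, for orientation only: the RPC surface (`resync`, `resyncAll`, `getStatus`) comes from this plan, but the class name, constructor, and internals are assumptions, not hatk's real implementation:

```ts
// Sketch only: everything beyond the RPC method names is an assumed shape.
class HatkContainer {
  private firehoseRunning = false
  private resyncs = 0

  // Injected so the sketch stays self-contained; the real entry would call backfill.ts.
  constructor(private backfillOne: (did: string) => Promise<void>) {}

  async start(): Promise<void> {
    // Real version: init the D1 adapter from the env binding, load lexicons,
    // build schemas and tables, then kick off startIndexer() and runBackfill().
    this.firehoseRunning = true
  }

  /** RPC: resync a single repo by DID. */
  async resync(did: string): Promise<void> {
    this.resyncs++
    await this.backfillOne(did)
  }

  /** RPC: liveness/progress info for the Worker's admin routes. */
  getStatus(): { firehose: boolean; resyncs: number } {
    return { firehose: this.firehoseRunning, resyncs: this.resyncs }
  }
}
```
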

**Step 1: Read existing entry points for reference**

Read these files:
- `packages/hatk/src/main.ts` — full startup (we need the firehose + backfill parts)
- `packages/hatk/src/indexer.ts` — firehose subscription (the `startIndexer` function)
- `packages/hatk/src/backfill.ts` — repo backfill

**Step 2: Write the Container entry**

Create `packages/hatk/src/cloudflare/container.ts` that:

1. Initializes the D1 adapter (the Container gets the D1 binding from env)
2. Loads lexicons, builds schemas, initializes database tables
3. Initializes feeds, XRPC, labels (same as main.ts)
4. Starts the firehose indexer (`startIndexer`)
5. Starts the backfill loop (`runBackfill`)
6. Exposes RPC methods: `resync(did)`, `resyncAll()`, `getStatus()`

The Container does NOT start an HTTP server — communication is via RPC only.

**Step 3: Verify build**

Run: `npm run build`
Expected: Compiles without errors

**Step 4: Commit**

```bash
git add packages/hatk/src/cloudflare/container.ts
git commit -m "feat: add Cloudflare Container entry point"
```

---

## Batch 4: Build Command

### Task 7: Add `--target cloudflare` to the build command

**Files:**
- Modify: `packages/hatk/src/cli.ts`

**Step 1: Read the existing build command**

Read `packages/hatk/src/cli.ts` and find the `hatk build` command handler. Understand what it currently does (likely runs the Vite build for SvelteKit).

**Step 2: Add the cloudflare build path**

When `target === 'cloudflare'` in config (or the `--target cloudflare` CLI flag is passed):

1. Run the `@sveltejs/adapter-cloudflare` build for SvelteKit (this may require swapping the adapter in svelte.config.js or generating a cloudflare-specific one)
2.
Bundle `packages/hatk/src/cloudflare/worker.ts` into `dist/worker/index.js`
3. Bundle `packages/hatk/src/cloudflare/container.ts` into `dist/container/index.js`
4. Generate `dist/wrangler.jsonc` with:
   - D1 database binding (`DB`)
   - Service Binding to the Container (`CONTAINER`)
   - Container configuration

Example wrangler.jsonc structure:

```jsonc
{
  "name": "my-app",
  "main": "worker/index.js",
  "compatibility_date": "2026-03-18",
  "d1_databases": [
    { "binding": "DB", "database_name": "hatk", "database_id": "TODO" }
  ],
  "services": [
    { "binding": "CONTAINER", "service": "my-app-container" }
  ]
}
```

**Step 3: Verify build**

Run: `npm run build`
Expected: Compiles without errors (the cloudflare build path won't run unless the target is set)

**Step 4: Commit**

```bash
git add packages/hatk/src/cli.ts
git commit -m "feat: add --target cloudflare to build command"
```

---

## Batch 5: Integration & Docs

### Task 8: Update `databaseEngine` handling for the cloudflare target

**Files:**
- Modify: `packages/hatk/src/config.ts`
- Modify: `packages/hatk/src/main.ts`

**Step 1: Auto-set databaseEngine to 'd1' when target is cloudflare**

Update the `HatkConfig.databaseEngine` type to include `'d1'`:

```ts
databaseEngine: 'duckdb' | 'sqlite' | 'd1'
```

Then, in `loadConfig()`, after constructing the config object, add (no cast needed once the type includes `'d1'`):

```ts
if (config.target === 'cloudflare') {
  config.databaseEngine = 'd1'
}
```

**Step 2: Skip file-based operations in main.ts for cloudflare**

In `main.ts`, the `mkdirSync` for the data directory, the schema.sql write, and the `pragma` settings don't apply to D1.
Guard these with `config.target !== 'cloudflare'` checks (or let the D1 adapter handle them as no-ops — verify which approach is cleaner).

**Step 3: Verify build**

Run: `npm run build`
Expected: Compiles without errors

**Step 4: Commit**

```bash
git add packages/hatk/src/config.ts packages/hatk/src/main.ts
git commit -m "feat: auto-set d1 engine for cloudflare target, guard file ops"
```

### Task 9: Add Cloudflare deployment docs

**Files:**
- Create: `docs/site/cli/cloudflare.md`
- Modify: `docs/site/.vitepress/config.ts` (add to the CLI sidebar)

**Step 1: Write the docs page**

Cover:
- What `target: 'cloudflare'` does (Worker + Container + D1)
- Prerequisites (Cloudflare account, wrangler CLI)
- Config: add `target: 'cloudflare'` to `hatk.config.ts`
- Build: `hatk build` generates `dist/` with the worker, container, and wrangler.jsonc
- Create the D1 database: `npx wrangler d1 create hatk`
- Deploy: `npx wrangler deploy` (after setting the D1 database_id)
- Known limitations (10GB D1 limit, slower backfill)

**Step 2: Add to sidebar**

In `docs/site/.vitepress/config.ts`, add to the CLI Reference items:

```ts
{ text: 'Cloudflare', link: '/cli/cloudflare' },
```

**Step 3: Verify docs build**

Run: `cd docs/site && npx vitepress build`
Expected: Build succeeds

**Step 4: Commit**

```bash
git add docs/site/cli/cloudflare.md docs/site/.vitepress/config.ts
git commit -m "docs: add Cloudflare deployment guide"
```
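
Task 8's engine override is small enough to check in isolation. This sketch reduces the config to the two relevant fields; hatk's real `HatkConfig` has more:

```ts
// Reduced config shape for illustration only.
interface MiniConfig {
  target: 'node' | 'cloudflare'
  databaseEngine: 'duckdb' | 'sqlite' | 'd1'
}

function applyTargetDefaults(config: MiniConfig): MiniConfig {
  // The cloudflare target forces D1 — file-backed engines can't run in a Worker.
  if (config.target === 'cloudflare') {
    config.databaseEngine = 'd1'
  }
  return config
}
```
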