A container registry that uses the AT Protocol for manifest storage and S3 for blob storage. atcr.io

Removes the filesystem driver and the buffered upload path on holds; going forward, the only supported storage backend is S3. Also adds extra mocks and tests around uploading.
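Under the S3-only model this PR introduces, a hold's storage configuration reduces to a handful of environment variables. A minimal sketch using the variable names from the updated `.env` examples (all values are placeholders):

```shell
# Minimal S3-only hold configuration (placeholder values)
export HOLD_PUBLIC_URL=http://127.0.0.1:8080
export HOLD_OWNER=did:plc:your-did-here

# Required S3 settings
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export S3_BUCKET=atcr-blobs

# Only needed for non-AWS providers (Storj, Minio, UpCloud)
export S3_ENDPOINT=http://localhost:9000

echo "configured bucket: $S3_BUCKET"
```

With these set, `./bin/atcr-hold` starts with presigned URLs enabled; there is no driver selection step anymore.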

evan.jarrett.net 57593a86 3b7455a2

verified
+3332 -1039
+4 -14
.env.example
```diff
 # HOLD_ADMIN_ENABLED=false
 
 # ==============================================================================
-# STORAGE - S3 CONFIGURATION
+# STORAGE - S3 CONFIGURATION (REQUIRED)
 # ==============================================================================
 
-# Storage driver type
-# Options: s3, filesystem
-# Default: s3
-STORAGE_DRIVER=s3
+# S3 is the only supported storage backend. Presigned URLs enable direct
+# client-to-S3 transfers, reducing hold bandwidth by ~99%.
 
-# S3 Access Credentials
+# S3 Access Credentials (REQUIRED)
 AWS_ACCESS_KEY_ID=your_access_key
 AWS_SECRET_ACCESS_KEY=your_secret_key
···
 # - Minio: http://minio:9000
 # Leave empty for AWS S3
 # S3_ENDPOINT=https://gateway.storjshare.io
-
-# ==============================================================================
-# STORAGE - FILESYSTEM CONFIGURATION
-# ==============================================================================
-
-# Root directory for filesystem storage (when STORAGE_DRIVER=filesystem)
-# Default: /var/lib/atcr/hold
-# STORAGE_ROOT_DIR=/var/lib/atcr/hold
 
 # ==============================================================================
 # LOGGING (Shared by AppView and Hold)
```
+6 -16
.env.hold.example
```diff
 HOLD_PUBLIC_URL=http://127.0.0.1:8080
 
 # ==============================================================================
-# Storage Configuration
+# S3 Storage Configuration (REQUIRED)
 # ==============================================================================
 
-# Storage driver type (s3, filesystem)
-# Default: s3
-#
-# S3 Presigned URLs:
-#   When using S3 storage, presigned URLs are automatically enabled for direct
-#   client ↔ S3 transfers. This eliminates the hold service as a bandwidth
-#   bottleneck, reducing hold bandwidth by ~99% for push/pull operations.
-#   Falls back to proxy mode automatically for non-S3 drivers.
-STORAGE_DRIVER=filesystem
+# S3 is the only supported storage backend. Presigned URLs are used for direct
+# client ↔ S3 transfers, eliminating the hold service as a bandwidth bottleneck
+# and reducing hold bandwidth by ~99% for push/pull operations.
 
-# S3 Access Credentials
+# S3 Access Credentials (REQUIRED)
 AWS_ACCESS_KEY_ID=your_access_key
 AWS_SECRET_ACCESS_KEY=your_secret_key
···
 # Default: us-east-1
 AWS_REGION=us-east-1
 
-# S3 Bucket Name
+# S3 Bucket Name (REQUIRED)
 S3_BUCKET=atcr-blobs
 
 # S3 Endpoint (for S3-compatible services like Storj, Minio, UpCloud)
···
 # - Minio: http://minio:9000
 # Leave empty for AWS S3
 # S3_ENDPOINT=https://gateway.storjshare.io
-
-# For filesystem driver:
-# STORAGE_DRIVER=filesystem
-# STORAGE_ROOT_DIR=/var/lib/atcr/hold
 
 # ==============================================================================
 # Server Configuration
```
+26 -22
CLAUDE.md
```diff
 # ./bin/atcr-appview serve config/config.yml
 
 # Run hold service (configure via env vars - see .env.hold.example)
+# For local development, use Minio as S3-compatible storage:
+#   docker run -p 9000:9000 minio/minio server /data
 export HOLD_PUBLIC_URL=http://127.0.0.1:8080
-export STORAGE_DRIVER=filesystem
-export STORAGE_ROOT_DIR=/tmp/atcr-hold
+export AWS_ACCESS_KEY_ID=minioadmin
+export AWS_SECRET_ACCESS_KEY=minioadmin
+export S3_BUCKET=test
+export S3_ENDPOINT=http://localhost:9000
 export HOLD_OWNER=did:plc:your-did-here
 ./bin/atcr-hold
 # Hold starts immediately with embedded PDS
···
 2. **Hold Service** (`cmd/hold`) - Optional BYOS component
    - Lightweight HTTP server for presigned URLs
    - Embedded PDS with captain + crew records
-   - Supports S3, Storj, Minio, filesystem, etc.
+   - Supports S3-compatible storage (AWS S3, Storj, Minio, UpCloud, etc.)
    - Authorization based on captain record (public, allowAllCrew)
    - Self-describing via DID resolution
    - Configured entirely via environment variables
···
 5. Blob PUT → ProxyBlobStore calls hold's XRPC multipart upload endpoints:
    a. POST /xrpc/io.atcr.hold.initiateUpload (gets uploadID)
    b. POST /xrpc/io.atcr.hold.getPartUploadUrl (gets presigned URL for each part)
-   c. PUT to S3 presigned URL (or PUT /xrpc/io.atcr.hold.uploadPart for buffered mode)
+   c. PUT to S3 presigned URL (client uploads directly to S3)
    d. POST /xrpc/io.atcr.hold.completeUpload (finalizes upload)
 6. Manifest PUT → alice's PDS as io.atcr.manifest record (includes holdDid + holdEndpoint)
    → Manifest also uploaded to PDS blob storage (ATProto CID format)
···
 - Resolves hold DID → HTTP URL for XRPC requests (did:web resolution)
 - Gets service tokens from user's PDS (`com.atproto.server.getServiceAuth`)
 - Calls hold XRPC endpoints with service token authentication:
-  - Multipart upload: initiateUpload, getPartUploadUrl, uploadPart, completeUpload, abortUpload
+  - Multipart upload: initiateUpload, getPartUploadUrl, completeUpload, abortUpload
   - Blob read: com.atproto.sync.getBlob (returns presigned download URL)
 - Implements full `distribution.BlobStore` interface
-- Supports both presigned URL mode (S3 direct) and buffered mode (proxy via hold)
+- Uses presigned URLs for direct client-to-S3 transfers
 
 #### AppView Web UI (`pkg/appview/`)
···
 **Architecture:**
 - **Embedded PDS**: Each hold has a full ATProto PDS for storing captain + crew records
 - **DID**: Hold identified by did:web (e.g., `did:web:hold01.atcr.io`)
-- **Storage**: Reuses distribution's storage driver factory (S3, Storj, Minio, Azure, GCS, filesystem)
+- **Storage**: Requires S3-compatible storage (AWS S3, Storj, Minio, UpCloud, etc.)
 - **Authorization**: Based on captain + crew records in embedded PDS
 - **Blob operations**: Generates presigned URLs (15min expiry) or proxies uploads/downloads via XRPC
···
 All require blob:write permission via service token authentication:
 - `POST /xrpc/io.atcr.hold.initiateUpload` - Start multipart upload session
 - `POST /xrpc/io.atcr.hold.getPartUploadUrl` - Get presigned URL for uploading a part
-- `PUT /xrpc/io.atcr.hold.uploadPart` - Direct buffered part upload (alternative to presigned URLs)
 - `POST /xrpc/io.atcr.hold.completeUpload` - Finalize multipart upload and move to final location
 - `POST /xrpc/io.atcr.hold.abortUpload` - Cancel multipart upload and cleanup temp data
···
 **Configuration:** Environment variables (see `.env.hold.example`)
 - `HOLD_PUBLIC_URL` - Public URL of hold service (required, used for did:web generation)
-- `STORAGE_DRIVER` - Storage driver type (s3, filesystem)
-- `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` - S3 credentials
-- `S3_BUCKET`, `S3_ENDPOINT` - S3 configuration
+- `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` - S3 credentials (required)
+- `S3_BUCKET` - S3 bucket name (required)
+- `S3_ENDPOINT` - S3 endpoint URL (for non-AWS providers like Storj, Minio, UpCloud)
 - `HOLD_PUBLIC` - Allow public reads (default: false)
 - `HOLD_OWNER` - DID for captain record creation (optional)
 - `HOLD_ALLOW_ALL_CREW` - Allow any authenticated user to register as crew (default: false)
-- `HOLD_DATABASE_PATH` - Path for embedded PDS database (required)
-- `HOLD_DATABASE_KEY_PATH` - Path for PDS signing keys (optional, generated if missing)
+- `HOLD_DATABASE_DIR` - Directory for embedded PDS database (required)
+- `HOLD_KEY_PATH` - Path for PDS signing keys (optional, generated if missing)
 
 **Deployment:** Can run on Fly.io, Railway, Docker, Kubernetes, etc.
···
 See `.env.hold.example` for all available options. Key environment variables:
 - `HOLD_PUBLIC_URL` - Public URL of hold service (REQUIRED)
-- `STORAGE_DRIVER` - Storage backend (s3, filesystem)
-- `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` - S3 credentials
-- `S3_BUCKET`, `S3_ENDPOINT` - S3 configuration
+- `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` - S3 credentials (REQUIRED)
+- `S3_BUCKET` - S3 bucket name (REQUIRED)
+- `S3_ENDPOINT` - S3 endpoint URL (for non-AWS providers)
 - `HOLD_PUBLIC` - Allow public reads (default: false)
 - `HOLD_OWNER` - DID for captain record creation (optional)
 - `HOLD_ALLOW_ALL_CREW` - Allow any authenticated user to register as crew (default: false)
···
 5. AppView automatically queries hold's PDS and routes blobs to user's storage
 6. No AppView changes needed - fully decentralized
 
-**Supporting a new storage backend**:
-1. Ensure driver is registered in `cmd/hold/main.go` imports
-2. Distribution supports: S3, Azure, GCS, Swift, filesystem, OSS
-3. For custom drivers: implement `storagedriver.StorageDriver` interface
-4. Add case to `buildStorageConfig()` in `cmd/hold/main.go`
-5. Update `.env.example` with new driver's env vars
+**Using S3-compatible storage**:
+ATCR requires S3-compatible storage. Supported providers:
+- AWS S3 - Set `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `S3_BUCKET`
+- Storj - Set `S3_ENDPOINT=https://gateway.storjshare.io`
+- Minio - Set `S3_ENDPOINT=http://localhost:9000`
+- UpCloud - Set `S3_ENDPOINT=https://[bucket-id].upcloudobjects.com`
+- Azure/GCS - Use their S3-compatible API endpoints
 
 **Working with the database**:
 - **Base schema** defined in `pkg/appview/db/schema.sql` - source of truth for fresh installations
```
+3 -4
cmd/hold/main.go
```diff
 	"atcr.io/pkg/logging"
 	"atcr.io/pkg/s3"
 
-	// Import storage drivers
+	// Import S3 storage driver
 	"github.com/distribution/distribution/v3/registry/storage/driver/factory"
-	_ "github.com/distribution/distribution/v3/registry/storage/driver/filesystem"
 	_ "github.com/distribution/distribution/v3/registry/storage/driver/s3-aws"
 
 	"github.com/go-chi/chi/v5"
···
 		os.Exit(1)
 	}
 
-	s3Service, err := s3.NewS3Service(cfg.Storage.Parameters(), cfg.Server.DisablePresignedURLs, cfg.Storage.Type())
+	s3Service, err := s3.NewS3Service(cfg.Storage.Parameters())
 	if err != nil {
 		slog.Error("Failed to create S3 service", "error", err)
 		os.Exit(1)
···
 	xrpcHandler = pds.NewXRPCHandler(holdPDS, *s3Service, driver, broadcaster, nil, quotaMgr)
 
 	// Create OCI XRPC handler (multipart upload endpoints)
-	ociHandler = oci.NewXRPCHandler(holdPDS, *s3Service, driver, cfg.Server.DisablePresignedURLs, cfg.Registration.EnableBlueskyPosts, nil, quotaMgr)
+	ociHandler = oci.NewXRPCHandler(holdPDS, *s3Service, driver, cfg.Registration.EnableBlueskyPosts, nil, quotaMgr)
 
 	// Initialize garbage collector
 	gcConfig := gc.LoadConfigFromEnv()
```
+129
cmd/usage-report/main.go
```diff
 //
 //	go run ./cmd/usage-report --hold https://hold01.atcr.io
 //	go run ./cmd/usage-report --hold https://hold01.atcr.io --from-manifests
+//	go run ./cmd/usage-report --hold https://hold01.atcr.io --list-blobs
 package main
 
 import (
···
 var client = &http.Client{Timeout: 30 * time.Second}
 
+// BlobInfo represents a single blob with its metadata
+type BlobInfo struct {
+	Digest    string
+	Size      int64
+	MediaType string
+	UserDID   string
+	Handle    string
+}
+
 func main() {
 	holdURL := flag.String("hold", "https://hold01.atcr.io", "Hold service URL")
 	fromManifests := flag.Bool("from-manifests", false, "Calculate usage from user manifests instead of hold layer records (more accurate but slower)")
+	listBlobs := flag.Bool("list-blobs", false, "List all individual blobs sorted by size (largest first)")
 	flag.Parse()
 
 	// Normalize URL
···
 		os.Exit(1)
 	}
 	fmt.Printf("Hold DID: %s\n\n", holdDID)
+
+	// If --list-blobs flag is set, run blob listing mode
+	if *listBlobs {
+		listAllBlobs(baseURL, holdDID)
+		return
+	}
 
 	var userUsage map[string]*UserUsage
 
···
 	}
 	sort.Strings(repos)
 	fmt.Printf("%s,%s,%d,%d,%s,\"%s\"\n", u.Handle, u.DID, u.LayerCount, u.TotalSize, humanSize(u.TotalSize), strings.Join(repos, ";"))
+}
+
+// listAllBlobs fetches all blobs and lists them sorted by size (largest first)
+func listAllBlobs(baseURL, holdDID string) {
+	fmt.Println("=== Fetching all blob records ===")
+
+	layers, err := fetchAllLayerRecords(baseURL, holdDID)
+	if err != nil {
+		fmt.Fprintf(os.Stderr, "Failed to fetch layer records: %v\n", err)
+		os.Exit(1)
+	}
+
+	fmt.Printf("Fetched %d layer records\n", len(layers))
+
+	// Deduplicate by digest, keeping track of first seen user
+	blobMap := make(map[string]*BlobInfo)
+	for _, layer := range layers {
+		if existing, exists := blobMap[layer.Digest]; exists {
+			// If we have a record with a user DID and existing doesn't, prefer this one
+			if existing.UserDID == "" && layer.UserDID != "" {
+				existing.UserDID = layer.UserDID
+			}
+			continue
+		}
+		blobMap[layer.Digest] = &BlobInfo{
+			Digest:    layer.Digest,
+			Size:      layer.Size,
+			MediaType: layer.MediaType,
+			UserDID:   layer.UserDID,
+		}
+	}
+
+	// Convert to slice
+	var blobs []*BlobInfo
+	for _, b := range blobMap {
+		blobs = append(blobs, b)
+	}
+
+	// Sort by size (largest first)
+	sort.Slice(blobs, func(i, j int) bool {
+		return blobs[i].Size > blobs[j].Size
+	})
+
+	fmt.Printf("Found %d unique blobs\n\n", len(blobs))
+
+	// Resolve DIDs to handles (batch for efficiency)
+	fmt.Println("Resolving DIDs to handles...")
+	didToHandle := make(map[string]string)
+	for _, b := range blobs {
+		if b.UserDID == "" {
+			continue
+		}
+		if _, exists := didToHandle[b.UserDID]; !exists {
+			handle, err := resolveDIDToHandle(b.UserDID)
+			if err != nil {
+				didToHandle[b.UserDID] = b.UserDID
+			} else {
+				didToHandle[b.UserDID] = handle
+			}
+		}
+		b.Handle = didToHandle[b.UserDID]
+	}
+
+	// Calculate total
+	var totalSize int64
+	for _, b := range blobs {
+		totalSize += b.Size
+	}
+
+	// Print report
+	fmt.Println("\n========================================")
+	fmt.Println("BLOB SIZE REPORT (sorted largest to smallest)")
+	fmt.Println("========================================")
+	fmt.Printf("\nTotal Unique Blobs: %d\n", len(blobs))
+	fmt.Printf("Total Storage: %s\n\n", humanSize(totalSize))
+
+	fmt.Println("BLOBS:")
+	fmt.Println("----------------------------------------")
+	for i, b := range blobs {
+		pct := float64(0)
+		if totalSize > 0 {
+			pct = float64(b.Size) / float64(totalSize) * 100
+		}
+		owner := b.Handle
+		if owner == "" {
+			owner = "(unknown)"
+		}
+		fmt.Printf("%4d. %s\n", i+1, humanSize(b.Size))
+		fmt.Printf("      Digest: %s\n", b.Digest)
+		fmt.Printf("      Owner: %s\n", owner)
+		if b.MediaType != "" {
+			fmt.Printf("      Type: %s\n", b.MediaType)
+		}
+		fmt.Printf("      Share: %.2f%%\n\n", pct)
+	}
+
+	// Output CSV format
+	fmt.Println("\n========================================")
+	fmt.Println("CSV FORMAT")
+	fmt.Println("========================================")
+	fmt.Println("rank,size_bytes,size_human,digest,owner,media_type,share_pct")
+	for i, b := range blobs {
+		pct := float64(0)
+		if totalSize > 0 {
+			pct = float64(b.Size) / float64(totalSize) * 100
+		}
+		owner := b.Handle
+		fmt.Printf("%d,%d,%s,%s,%s,%s,%.2f\n", i+1, b.Size, humanSize(b.Size), b.Digest, owner, b.MediaType, pct)
+	}
 }
```
+3 -17
deploy/.env.prod.template
```diff
 HOLD_BLUESKY_POSTS_ENABLED=true
 
 # ==============================================================================
-# S3/UpCloud Object Storage Configuration
+# S3/UpCloud Object Storage Configuration (REQUIRED)
 # ==============================================================================
 
-# Storage driver type
-# Options: s3, filesystem
-# Default: s3
-STORAGE_DRIVER=s3
+# S3 is the only supported storage backend. Presigned URLs are used for direct
+# client ↔ S3 transfers, eliminating the hold service as a bandwidth bottleneck.
 
 # S3 Access Credentials
 # Get these from UpCloud Object Storage console
···
 # ATProto relay endpoint for backfill sync API
 # Default: https://relay1.us-east.bsky.network
 ATCR_RELAY_ENDPOINT=https://relay1.us-east.bsky.network
-
-# ==============================================================================
-# Optional: Filesystem Storage (alternative to S3)
-# ==============================================================================
-
-# If using filesystem storage instead of S3:
-# 1. Uncomment these lines
-# 2. Comment out all S3 variables above
-# 3. Set STORAGE_DRIVER=filesystem
-
-# STORAGE_DRIVER=filesystem
-# STORAGE_ROOT_DIR=/var/lib/atcr/hold
 
 # ==============================================================================
 # CHECKLIST
```
+3 -3
deploy/README.md
````diff
 docker logs atcr-hold | grep -i presigned
 ```
 
-**Check S3 driver:**
+**Check S3 configuration:**
 ```bash
-docker exec atcr-hold env | grep STORAGE_DRIVER
-# Should be: s3 (not filesystem)
+docker exec atcr-hold env | grep S3_BUCKET
+# Should show your S3 bucket name
 ```
 
 **Verify direct S3 access:**
````
+1 -8
deploy/docker-compose.prod.yml
```diff
       HOLD_DATABASE_DIR: ${HOLD_DATABASE_DIR:-/var/lib/atcr-hold}
       # HOLD_KEY_PATH: ${HOLD_KEY_PATH}  # Optional, defaults to {HOLD_DATABASE_DIR}/signing.key
 
-      # Storage driver
-      STORAGE_DRIVER: ${STORAGE_DRIVER:-s3}
-
-      # S3/UpCloud Object Storage configuration
+      # S3/UpCloud Object Storage configuration (REQUIRED)
       AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID:-}
       AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY:-}
       AWS_REGION: ${AWS_REGION:-us-east-1}
···
       # Logging
       ATCR_LOG_LEVEL: ${ATCR_LOG_LEVEL:-debug}
       ATCR_LOG_FORMATTER: ${ATCR_LOG_FORMATTER:-text}
-
-      # Optional: Filesystem storage (comment out S3 vars above)
-      # STORAGE_DRIVER: filesystem
-      # STORAGE_ROOT_DIR: /var/lib/atcr/hold
     volumes:
       # PDS data (carstore SQLite + signing keys)
       - atcr-hold-data:/var/lib/atcr-hold
```
+1 -4
docker-compose.yml
```diff
       HOLD_OWNER: did:plc:pddp4xt5lgnv2qsegbzzs4xg
       HOLD_PUBLIC: false
       HOLD_ALLOW_ALL_CREW: true
-      # STORAGE_DRIVER: filesystem
-      # STORAGE_ROOT_DIR: /var/lib/atcr/hold
       TEST_MODE: true
-      # DISABLE_PRESIGNED_URLS: true
       # Logging
       ATCR_LOG_LEVEL: debug
       # Log shipping (uncomment to enable)
       ATCR_LOG_SHIPPER_BACKEND: victoria
       ATCR_LOG_SHIPPER_URL: http://172.28.0.10:9428
-      # Storage config comes from env_file (STORAGE_DRIVER, AWS_*, S3_*)
+      # S3 storage config comes from env_file (AWS_*, S3_*)
       # Limit local Docker logs - real logs go to Victoria Logs
       # Local logs just for live tailing (docker logs -f)
     logging:
```
+21 -15
docs/BYOS.md
````diff
 ATCR supports "Bring Your Own Storage" (BYOS) for blob storage. Users can:
 - Deploy their own hold service with embedded PDS
 - Control access via crew membership in the hold's PDS
-- Keep blob data in their own S3/Storj/Minio while manifests stay in their user PDS
+- Keep blob data in their own S3-compatible storage (AWS S3, Storj, Minio, UpCloud, etc.) while manifests stay in their user PDS
 
 ## Architecture
···
 Each hold is a full ATProto actor with:
 - **DID**: `did:web:hold.example.com` (hold's identity)
 - **Embedded PDS**: Stores captain + crew records (shared data)
-- **Storage backend**: S3, Storj, Minio, filesystem, etc.
+- **Storage backend**: S3-compatible (AWS S3, Storj, Minio, UpCloud, etc.)
 - **XRPC endpoints**: Standard ATProto + custom OCI multipart upload
 
 ### Records in Hold's PDS
···
 HOLD_PUBLIC_URL=https://hold.example.com
 HOLD_OWNER=did:plc:your-did-here
 
-# Storage backend
-STORAGE_DRIVER=s3
+# S3 storage backend (REQUIRED)
 AWS_ACCESS_KEY_ID=your_access_key
 AWS_SECRET_ACCESS_KEY=your_secret_key
 AWS_REGION=us-east-1
···
 ```
 
 ### Running Locally
+
+For local development, use Minio as an S3-compatible storage:
 
 ```bash
+# Start Minio (in separate terminal)
+docker run -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"
+
 # Build
 go build -o bin/atcr-hold ./cmd/hold
 
 # Run (with env vars or .env file)
 export HOLD_PUBLIC_URL=http://localhost:8080
 export HOLD_OWNER=did:plc:your-did-here
-export STORAGE_DRIVER=filesystem
-export STORAGE_ROOT_DIR=/tmp/atcr-hold
+export AWS_ACCESS_KEY_ID=minioadmin
+export AWS_SECRET_ACCESS_KEY=minioadmin
+export S3_BUCKET=test
+export S3_ENDPOINT=http://localhost:9000
 export HOLD_DATABASE_PATH=/tmp/atcr-hold/hold.db
 
 ./bin/atcr-hold
···
 [env]
   HOLD_PUBLIC_URL = "https://my-atcr-hold.fly.dev"
-  STORAGE_DRIVER = "s3"
   AWS_REGION = "us-east-1"
   S3_BUCKET = "my-blobs"
   HOLD_PUBLIC = "false"
···
   --rkey "{memberDID}"
 ```
 
-## Storage Drivers
+## Storage Backends
 
-Hold service supports all distribution storage drivers:
-- **S3** - AWS S3, Minio, Storj (via S3 gateway)
-- **Filesystem** - Local disk (for testing)
-- **Azure** - Azure Blob Storage
-- **GCS** - Google Cloud Storage
-- **Swift** - OpenStack Swift
+Hold service requires S3-compatible storage. Supported providers:
+- **AWS S3** - Amazon Simple Storage Service
+- **Storj** - Decentralized cloud storage (via S3 gateway)
+- **Minio** - High-performance object storage (great for local development)
+- **UpCloud** - European cloud provider
+- **Azure** - Azure Blob Storage (via S3-compatible API)
+- **GCS** - Google Cloud Storage (via S3-compatible API)
 
 ## Example: Team Hold
···
 export HOLD_PUBLIC_URL=https://team-hold.fly.dev
 export HOLD_OWNER=did:plc:admin
 export HOLD_PUBLIC=false  # Private
-export STORAGE_DRIVER=s3
 export AWS_ACCESS_KEY_ID=...
+export AWS_SECRET_ACCESS_KEY=...
 export S3_BUCKET=team-blobs
 
 fly deploy
````
+5 -3
docs/SBOM_SCANNING.md
````diff
 ```bash
 # .env.hold
 HOLD_PUBLIC_URL=https://hold01.atcr.io
-STORAGE_DRIVER=s3
 S3_BUCKET=my-hold-blobs
+AWS_ACCESS_KEY_ID=your-access-key
+AWS_SECRET_ACCESS_KEY=your-secret-key
 HOLD_OWNER=did:plc:xyz123
-HOLD_DATABASE_PATH=/var/lib/atcr/hold.db
+HOLD_DATABASE_DIR=/var/lib/atcr-hold
 
 # Enable SBOM scanning
 HOLD_SBOM_ENABLED=true
···
 # 1. Configure hold with SBOM enabled
 cat > .env.hold <<EOF
 HOLD_PUBLIC_URL=https://myhold.example.com
-STORAGE_DRIVER=s3
 S3_BUCKET=my-blobs
+AWS_ACCESS_KEY_ID=your-access-key
+AWS_SECRET_ACCESS_KEY=your-secret-key
 HOLD_OWNER=did:plc:myid
 
 # Enable SBOM scanning
````
+8 -3
docs/appview.md
````diff
 ### Development/Testing
 
-Local Docker Compose setup:
+Local Docker Compose setup with Minio for S3-compatible storage:
 
 ```bash
+# Start Minio (S3-compatible storage)
+docker run -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"
+
 # AppView config
 ATCR_HTTP_ADDR=:5000
 ATCR_DEFAULT_HOLD_DID=did:web:atcr-hold:8080
 ATCR_LOG_LEVEL=debug
 
 # Hold config (linked hold service)
-STORAGE_DRIVER=filesystem
-STORAGE_ROOT_DIR=/tmp/atcr-hold
+AWS_ACCESS_KEY_ID=minioadmin
+AWS_SECRET_ACCESS_KEY=minioadmin
+S3_BUCKET=test
+S3_ENDPOINT=http://minio:9000
 HOLD_PUBLIC=true
 HOLD_ALLOW_ALL_CREW=true
 ```
````
+19 -40
docs/hold.md
··· 4 4 5 5 ## Overview 6 6 7 - **Hold Service** is the storage backend component of ATCR. It enables BYOS (Bring Your Own Storage) - users can store their own container image layers in their own S3, Storj, Minio, or filesystem storage. Each hold runs as a full ATProto user with an embedded PDS, exposing both standard ATProto sync endpoints and custom XRPC endpoints for OCI multipart blob uploads. 7 + **Hold Service** is the storage backend component of ATCR. It enables BYOS (Bring Your Own Storage) - users can store their own container image layers in their own S3-compatible storage (AWS S3, Storj, Minio, UpCloud, etc.). Each hold runs as a full ATProto user with an embedded PDS, exposing both standard ATProto sync endpoints and custom XRPC endpoints for OCI multipart blob uploads. 8 8 9 9 ### What Hold Service Does 10 10 11 11 Hold Service is the storage layer that: 12 12 13 - - **Bring Your Own Storage (BYOS)** - Store your own container image layers in your own S3, Storj, Minio, or filesystem 13 + - **Bring Your Own Storage (BYOS)** - Store your own container image layers in your own S3-compatible storage (AWS S3, Storj, Minio, UpCloud, etc.) 
14 14 - **Embedded ATProto PDS** - Each hold is a full ATProto user with its own DID, repository, and identity 15 15 - **Custom XRPC Endpoints** - OCI-compatible multipart upload endpoints (`io.atcr.hold.*`) for blob operations 16 16 - **Presigned URL Generation** - Creates time-limited S3 URLs for direct client-to-storage transfers (~99% bandwidth reduction) 17 17 - **Crew Management** - Controls access via captain and crew records stored in the hold's embedded PDS 18 18 - **Standard ATProto Sync** - Exposes com.atproto.sync.* endpoints for repository synchronization and firehose 19 - - **Multi-Backend Support** - Works with S3, Storj, Minio, filesystem, Azure, GCS via distribution's driver system 19 + - **S3 Storage** - Works with any S3-compatible storage (AWS S3, Storj, Minio, UpCloud, Azure, GCS via S3 gateway) 20 20 - **Bluesky Integration** - Optional: Posts container image push notifications from the hold's identity to Bluesky 21 21 22 22 ### The ATCR Ecosystem ··· 50 50 - Maintain data sovereignty (keep blobs in specific geographic regions) 51 51 52 52 **Prerequisites:** 53 - - S3-compatible storage (AWS S3, Storj, Minio, UpCloud, etc.) OR filesystem storage 53 + - S3-compatible storage (AWS S3, Storj, Minio, UpCloud, etc.) 
54 54 - (Optional) Domain name with SSL/TLS certificates for production 55 55 - ATProto DID for hold owner (get from: `https://bsky.social/xrpc/com.atproto.identity.resolveHandle?handle=yourhandle.bsky.social`) 56 56 ··· 87 87 # Required: Your ATProto DID (for captain record) 88 88 HOLD_OWNER=did:plc:your-did-here 89 89 90 - # Required: Storage driver type 91 - STORAGE_DRIVER=s3 92 - 93 - # Required for S3: Credentials and bucket 90 + # Required: S3 credentials and bucket 94 91 AWS_ACCESS_KEY_ID=your-access-key 95 92 AWS_SECRET_ACCESS_KEY=your-secret-key 96 93 S3_BUCKET=your-bucket-name ··· 132 129 - `true`: Public registry (anyone can pull, authenticated users can push if crew) 133 130 - `false`: Private registry (authentication required for both push and pull) 134 131 135 - ### Storage Configuration 136 - 137 - #### `STORAGE_DRIVER` 138 - - **Default:** `s3` 139 - - **Options:** `s3`, `filesystem` 140 - - **Description:** Storage backend type. S3 enables presigned URLs for direct client-to-storage transfers (~99% bandwidth reduction). Filesystem stores blobs locally (development/testing). 132 + ### S3 Storage Configuration 141 133 142 - #### S3 Storage (when `STORAGE_DRIVER=s3`) 134 + S3 is the only supported storage backend. Presigned URLs enable direct client-to-storage transfers (~99% bandwidth reduction). 
143 135 144 136 ##### `AWS_ACCESS_KEY_ID` ⚠️ REQUIRED for S3 145 137 - **Description:** S3 access key ID for authentication ··· 167 159 - **UpCloud:** `https://[bucket-id].upcloudobjects.com` 168 160 - **Minio:** `http://minio:9000` 169 161 - **Note:** Leave empty for AWS S3 170 - 171 - #### Filesystem Storage (when `STORAGE_DRIVER=filesystem`) 172 - 173 - ##### `STORAGE_ROOT_DIR` 174 - - **Default:** `/var/lib/atcr/hold` 175 - - **Description:** Directory path where blobs will be stored on local filesystem 176 - - **Use case:** Development, testing, or single-server deployments 177 - - **Note:** Presigned URLs are not available with filesystem driver (hold proxies all blob transfers) 178 162 179 163 ### Embedded PDS Configuration 180 164 ··· 227 211 - **Default:** `false` 228 212 - **Description:** Enable test mode (skips some validations). Do not use in production. 229 213 230 - #### `DISABLE_PRESIGNED_URLS` 231 - - **Default:** `false` 232 - - **Description:** Force proxy mode even with S3 configured (for testing). Disables presigned URL generation and routes all blob transfers through the hold service. 
233 - - **Use case:** Testing, debugging, or environments where presigned URLs don't work 234 - 235 214 ## XRPC Endpoints 236 215 237 216 Hold Service exposes two types of XRPC endpoints: ··· 250 229 ### OCI Multipart Upload Endpoints (Custom) 251 230 - `POST /xrpc/io.atcr.hold.initiateUpload` - Start multipart upload session 252 231 - `POST /xrpc/io.atcr.hold.getPartUploadUrl` - Get presigned URL for uploading a part 253 - - `PUT /xrpc/io.atcr.hold.uploadPart` - Direct buffered part upload (alternative to presigned URLs) 254 232 - `POST /xrpc/io.atcr.hold.completeUpload` - Finalize multipart upload 255 233 - `POST /xrpc/io.atcr.hold.abortUpload` - Cancel multipart upload 256 234 - `POST /xrpc/io.atcr.hold.notifyManifest` - Notify hold of manifest upload (creates layer records, Bluesky posts) ··· 301 279 HOLD_ALLOW_ALL_CREW=false # Only you can push 302 280 HOLD_DATABASE_DIR=/var/lib/atcr-hold 303 281 304 - # S3 storage 305 - STORAGE_DRIVER=s3 282 + # S3 storage (using Storj) 306 283 AWS_ACCESS_KEY_ID=your-key 307 284 AWS_SECRET_ACCESS_KEY=your-secret 308 285 S3_BUCKET=alice-container-registry 309 - S3_ENDPOINT=https://gateway.storjshare.io # Using Storj 286 + S3_ENDPOINT=https://gateway.storjshare.io 310 287 ``` 311 288 312 289 ### Shared Hold (Team/Organization) ··· 322 299 HOLD_DATABASE_DIR=/var/lib/atcr-hold 323 300 324 301 # S3 storage 325 - STORAGE_DRIVER=s3 326 302 AWS_ACCESS_KEY_ID=your-key 327 303 AWS_SECRET_ACCESS_KEY=your-secret 328 304 S3_BUCKET=acme-registry-blobs ··· 343 319 HOLD_DATABASE_DIR=/var/lib/atcr-hold 344 320 345 321 # S3 storage 346 - STORAGE_DRIVER=s3 347 322 AWS_ACCESS_KEY_ID=your-key 348 323 AWS_SECRET_ACCESS_KEY=your-secret 349 324 S3_BUCKET=community-registry-blobs 350 325 ``` 351 326 352 - ### Development/Testing 327 + ### Development/Testing with Minio 353 328 354 - Local filesystem storage for testing: 329 + For local development, use Minio as an S3-compatible storage: 355 330 356 331 ```bash 332 + # Start Minio 333 + docker run -p 
9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001" 334 + 357 335 # Hold config 358 336 HOLD_PUBLIC_URL=http://127.0.0.1:8080 359 337 HOLD_OWNER=did:plc:your-test-did ··· 361 339 HOLD_ALLOW_ALL_CREW=true 362 340 HOLD_DATABASE_DIR=/tmp/atcr-hold 363 341 364 - # Filesystem storage 365 - STORAGE_DRIVER=filesystem 366 - STORAGE_ROOT_DIR=/tmp/atcr-hold-blobs 342 + # Minio S3 storage 343 + AWS_ACCESS_KEY_ID=minioadmin 344 + AWS_SECRET_ACCESS_KEY=minioadmin 345 + S3_BUCKET=test 346 + S3_ENDPOINT=http://localhost:9000 367 347 ``` 368 348 369 349 ## Production Deployment ··· 383 363 384 364 - [ ] Set `HOLD_PUBLIC_URL` to your public HTTPS URL 385 365 - [ ] Set `HOLD_OWNER` to your ATProto DID 386 - - [ ] Configure S3 storage (`STORAGE_DRIVER=s3`) 387 366 - [ ] Set `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `S3_BUCKET`, `S3_ENDPOINT` 388 367 - [ ] Set `HOLD_DATABASE_DIR` to persistent directory 389 368 - [ ] Configure `HOLD_PUBLIC` and `HOLD_ALLOW_ALL_CREW` for desired access model
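To get a feel for how the writer chunks a push under the new S3-only flow, here is a minimal sketch of the part math this PR's tests exercise: the writer buffers incoming bytes and flushes a part to S3 each time the buffer reaches 10MB, so a 25MB push produces two flushed parts plus 5MB that stays buffered until `Commit` uploads it as the final part. The helper name `partsForUpload` is an illustrative assumption, not code from the writer itself.

```go
package main

import "fmt"

// flushThreshold mirrors the 10MB part size exercised by this PR's tests;
// the real constant lives inside the proxy blob writer.
const flushThreshold = 10 * 1024 * 1024

// partsForUpload is a hypothetical helper: it reports how many full parts
// a write of totalSize bytes flushes mid-stream, and how many bytes remain
// buffered until Commit uploads them as the final part.
func partsForUpload(totalSize int64) (fullParts, remainder int64) {
	return totalSize / flushThreshold, totalSize % flushThreshold
}

func main() {
	for _, size := range []int64{1 << 20, 10 << 20, 25 << 20} {
		full, rem := partsForUpload(size)
		fmt.Printf("%2d MiB write: %d flushed part(s), %d bytes left in buffer\n",
			size>>20, full, rem)
	}
}
```

This matches the test expectations in the new suite: a 1MB write stays fully buffered, a 10MB write flushes exactly one part, and a 25MB write flushes two.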
+4 -17
pkg/appview/storage/proxy_blob_store.go
··· 463 463 return result.UploadID, nil 464 464 } 465 465 466 - // PartUploadInfo contains structured information for uploading a part 466 + // PartUploadInfo contains the presigned URL for uploading a part 467 467 type PartUploadInfo struct { 468 - URL string `json:"url"` 469 - Method string `json:"method,omitempty"` 470 - Headers map[string]string `json:"headers,omitempty"` 468 + URL string `json:"url"` // Presigned URL to PUT the part to 471 469 } 472 470 473 471 // getPartUploadInfo gets structured upload info for uploading a specific part via XRPC ··· 653 651 return fmt.Errorf("failed to get part upload info: %w", err) 654 652 } 655 653 656 - // Determine HTTP method (default to PUT) 657 - method := uploadInfo.Method 658 - if method == "" { 659 - method = "PUT" 660 - } 661 - 662 - // Upload part (either to S3 presigned URL or back to XRPC with headers) 663 - req, err := http.NewRequestWithContext(ctx, method, uploadInfo.URL, bytes.NewReader(w.buffer.Bytes())) 654 + // Upload part to S3 presigned URL 655 + req, err := http.NewRequestWithContext(ctx, "PUT", uploadInfo.URL, bytes.NewReader(w.buffer.Bytes())) 664 656 if err != nil { 665 657 return err 666 658 } 667 659 req.Header.Set("Content-Type", "application/octet-stream") 668 - 669 - // Apply any additional headers from the response (for buffered mode) 670 - for key, value := range uploadInfo.Headers { 671 - req.Header.Set(key, value) 672 - } 673 660 674 661 resp, err := w.store.httpClient.Do(req) 675 662 if err != nil {
+1376
pkg/appview/storage/proxy_blob_store_test.go
··· 1 1 package storage 2 2 3 3 import ( 4 + "bytes" 4 5 "context" 5 6 "encoding/base64" 6 7 "encoding/json" 7 8 "fmt" 9 + "io" 8 10 "net/http" 9 11 "net/http/httptest" 12 + "strconv" 10 13 "strings" 14 + "sync" 11 15 "testing" 12 16 "time" 13 17 14 18 "atcr.io/pkg/atproto" 15 19 "atcr.io/pkg/auth" 20 + "github.com/distribution/distribution/v3" 16 21 "github.com/opencontainers/go-digest" 17 22 ) 18 23 ··· 525 530 // Verify S3 received NO Authorization header 526 531 if s3ReceivedAuthHeader != "" { 527 532 t.Errorf("S3 should not receive Authorization header for presigned URLs, got: %s", s3ReceivedAuthHeader) 533 + } 534 + } 535 + 536 + // ============================================================================ 537 + // ProxyBlobWriter Tests 538 + // ============================================================================ 539 + 540 + // mockHoldServer is a test helper that mocks the hold service XRPC endpoints 541 + type mockHoldServer struct { 542 + *httptest.Server 543 + 544 + // Track calls 545 + mu sync.Mutex 546 + InitiateCalls []mockInitiateCall 547 + PartURLCalls []mockPartURLCall 548 + CompleteCalls []mockCompleteCall 549 + AbortCalls []mockAbortCall 550 + 551 + // Error injection 552 + InitiateError error 553 + PartURLError error 554 + CompleteError error 555 + AbortError error 556 + 557 + // Response customization 558 + UploadID string 559 + } 560 + 561 + type mockInitiateCall struct { 562 + Digest string 563 + } 564 + 565 + type mockPartURLCall struct { 566 + UploadID string 567 + PartNumber int 568 + } 569 + 570 + type mockCompleteCall struct { 571 + UploadID string 572 + Digest string 573 + Parts []map[string]any 574 + } 575 + 576 + type mockAbortCall struct { 577 + UploadID string 578 + } 579 + 580 + // mockS3Server mocks S3 presigned URL uploads 581 + type mockS3Server struct { 582 + *httptest.Server 583 + 584 + // Track uploads 585 + mu sync.Mutex 586 + Parts map[int][]byte 587 + 588 + // Error injection 589 + UploadError error 590 + 591 + // 
Response customization 592 + ETagInHeader bool // true = ETag in header, false = in JSON body 593 + } 594 + 595 + // newMockHoldServer creates a mock hold service 596 + func newMockHoldServer(t *testing.T, s3URL string) *mockHoldServer { 597 + m := &mockHoldServer{ 598 + UploadID: "test-upload-id-" + fmt.Sprintf("%d", time.Now().UnixNano()), 599 + } 600 + 601 + m.Server = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { 602 + m.mu.Lock() 603 + defer m.mu.Unlock() 604 + 605 + w.Header().Set("Content-Type", "application/json") 606 + 607 + switch { 608 + case strings.Contains(r.URL.Path, atproto.HoldInitiateUpload): 609 + if m.InitiateError != nil { 610 + w.WriteHeader(http.StatusInternalServerError) 611 + fmt.Fprintf(w, `{"error":"%s"}`, m.InitiateError.Error()) 612 + return 613 + } 614 + 615 + var body map[string]any 616 + json.NewDecoder(r.Body).Decode(&body) 617 + m.InitiateCalls = append(m.InitiateCalls, mockInitiateCall{ 618 + Digest: body["digest"].(string), 619 + }) 620 + 621 + w.WriteHeader(http.StatusOK) 622 + json.NewEncoder(w).Encode(map[string]string{ 623 + "uploadId": m.UploadID, 624 + }) 625 + 626 + case strings.Contains(r.URL.Path, atproto.HoldGetPartUploadURL): 627 + if m.PartURLError != nil { 628 + w.WriteHeader(http.StatusInternalServerError) 629 + fmt.Fprintf(w, `{"error":"%s"}`, m.PartURLError.Error()) 630 + return 631 + } 632 + 633 + var body map[string]any 634 + json.NewDecoder(r.Body).Decode(&body) 635 + m.PartURLCalls = append(m.PartURLCalls, mockPartURLCall{ 636 + UploadID: body["uploadId"].(string), 637 + PartNumber: int(body["partNumber"].(float64)), 638 + }) 639 + 640 + // Return presigned URL pointing to mock S3 641 + partNum := int(body["partNumber"].(float64)) 642 + w.WriteHeader(http.StatusOK) 643 + json.NewEncoder(w).Encode(map[string]string{ 644 + "url": fmt.Sprintf("%s/upload?partNumber=%d", s3URL, partNum), 645 + }) 646 + 647 + case strings.Contains(r.URL.Path, atproto.HoldCompleteUpload): 648 + if 
m.CompleteError != nil { 649 + w.WriteHeader(http.StatusInternalServerError) 650 + fmt.Fprintf(w, `{"error":"%s"}`, m.CompleteError.Error()) 651 + return 652 + } 653 + 654 + var body map[string]any 655 + json.NewDecoder(r.Body).Decode(&body) 656 + parts, _ := body["parts"].([]any) 657 + partsArr := make([]map[string]any, len(parts)) 658 + for i, p := range parts { 659 + partsArr[i] = p.(map[string]any) 660 + } 661 + m.CompleteCalls = append(m.CompleteCalls, mockCompleteCall{ 662 + UploadID: body["uploadId"].(string), 663 + Digest: body["digest"].(string), 664 + Parts: partsArr, 665 + }) 666 + 667 + w.WriteHeader(http.StatusOK) 668 + json.NewEncoder(w).Encode(map[string]any{}) 669 + 670 + case strings.Contains(r.URL.Path, atproto.HoldAbortUpload): 671 + if m.AbortError != nil { 672 + w.WriteHeader(http.StatusInternalServerError) 673 + fmt.Fprintf(w, `{"error":"%s"}`, m.AbortError.Error()) 674 + return 675 + } 676 + 677 + var body map[string]any 678 + json.NewDecoder(r.Body).Decode(&body) 679 + m.AbortCalls = append(m.AbortCalls, mockAbortCall{ 680 + UploadID: body["uploadId"].(string), 681 + }) 682 + 683 + w.WriteHeader(http.StatusOK) 684 + json.NewEncoder(w).Encode(map[string]any{}) 685 + 686 + default: 687 + t.Errorf("Unexpected hold endpoint: %s", r.URL.Path) 688 + w.WriteHeader(http.StatusNotFound) 689 + } 690 + })) 691 + 692 + return m 693 + } 694 + 695 + // newMockS3Server creates a mock S3 server for presigned URL uploads 696 + func newMockS3Server(t *testing.T, etagInHeader bool) *mockS3Server { 697 + m := &mockS3Server{ 698 + Parts: make(map[int][]byte), 699 + ETagInHeader: etagInHeader, 700 + } 701 + 702 + m.Server = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { 703 + m.mu.Lock() 704 + defer m.mu.Unlock() 705 + 706 + if m.UploadError != nil { 707 + w.WriteHeader(http.StatusInternalServerError) 708 + w.Write([]byte(m.UploadError.Error())) 709 + return 710 + } 711 + 712 + // Parse part number from URL 713 + partNum, _ := 
strconv.Atoi(r.URL.Query().Get("partNumber")) 714 + 715 + // Read body 716 + body, _ := io.ReadAll(r.Body) 717 + m.Parts[partNum] = body 718 + 719 + // Generate ETag 720 + etag := fmt.Sprintf(`"etag-part-%d"`, partNum) 721 + 722 + if m.ETagInHeader { 723 + w.Header().Set("ETag", etag) 724 + w.WriteHeader(http.StatusOK) 725 + } else { 726 + w.Header().Set("Content-Type", "application/json") 727 + w.WriteHeader(http.StatusOK) 728 + json.NewEncoder(w).Encode(map[string]string{ 729 + "etag": etag, 730 + }) 731 + } 732 + })) 733 + 734 + return m 735 + } 736 + 737 + // createTestProxyBlobStore creates a ProxyBlobStore for testing with mock servers 738 + func createTestProxyBlobStore(t *testing.T, holdURL string) *ProxyBlobStore { 739 + ctx := &RegistryContext{ 740 + DID: "did:plc:testuser", 741 + HoldDID: "did:web:hold.example.com", 742 + PDSEndpoint: "https://pds.example.com", 743 + Repository: "test-repo", 744 + ServiceToken: "test-service-token", 745 + } 746 + store := NewProxyBlobStore(ctx) 747 + store.holdURL = holdURL 748 + return store 749 + } 750 + 751 + // generateTestData creates n bytes of predictable test data 752 + func generateTestData(n int) []byte { 753 + data := make([]byte, n) 754 + for i := 0; i < n; i++ { 755 + data[i] = byte(i % 256) 756 + } 757 + return data 758 + } 759 + 760 + // TestCreate_Success tests that Create() successfully initiates multipart upload 761 + func TestCreate_Success(t *testing.T) { 762 + s3Server := newMockS3Server(t, true) 763 + defer s3Server.Close() 764 + 765 + holdServer := newMockHoldServer(t, s3Server.URL) 766 + defer holdServer.Close() 767 + 768 + store := createTestProxyBlobStore(t, holdServer.URL) 769 + 770 + writer, err := store.Create(context.Background()) 771 + if err != nil { 772 + t.Fatalf("Create() failed: %v", err) 773 + } 774 + 775 + // Verify writer is returned 776 + if writer == nil { 777 + t.Fatal("Expected non-nil writer") 778 + } 779 + 780 + // Verify initiate was called 781 + holdServer.mu.Lock() 782 + if 
len(holdServer.InitiateCalls) != 1 { 783 + t.Errorf("Expected 1 initiate call, got %d", len(holdServer.InitiateCalls)) 784 + } 785 + holdServer.mu.Unlock() 786 + 787 + // Verify writer ID 788 + if writer.ID() == "" { 789 + t.Error("Expected non-empty writer ID") 790 + } 791 + 792 + // Verify writer is stored in global uploads 793 + globalUploadsMu.RLock() 794 + _, exists := globalUploads[writer.ID()] 795 + globalUploadsMu.RUnlock() 796 + if !exists { 797 + t.Error("Writer should be stored in globalUploads") 798 + } 799 + 800 + // Cleanup 801 + writer.Cancel(context.Background()) 802 + } 803 + 804 + // TestCreate_HoldError tests that Create() returns error when hold service fails 805 + func TestCreate_HoldError(t *testing.T) { 806 + s3Server := newMockS3Server(t, true) 807 + defer s3Server.Close() 808 + 809 + holdServer := newMockHoldServer(t, s3Server.URL) 810 + holdServer.InitiateError = fmt.Errorf("hold service unavailable") 811 + defer holdServer.Close() 812 + 813 + store := createTestProxyBlobStore(t, holdServer.URL) 814 + 815 + writer, err := store.Create(context.Background()) 816 + if err == nil { 817 + t.Fatal("Expected error from Create()") 818 + } 819 + if writer != nil { 820 + t.Error("Expected nil writer when error occurs") 821 + } 822 + 823 + if !strings.Contains(err.Error(), "hold service unavailable") { 824 + t.Errorf("Expected hold error message, got: %v", err) 825 + } 826 + } 827 + 828 + // TestWrite_BasicBuffer tests that small writes are buffered 829 + func TestWrite_BasicBuffer(t *testing.T) { 830 + s3Server := newMockS3Server(t, true) 831 + defer s3Server.Close() 832 + 833 + holdServer := newMockHoldServer(t, s3Server.URL) 834 + defer holdServer.Close() 835 + 836 + store := createTestProxyBlobStore(t, holdServer.URL) 837 + 838 + writer, err := store.Create(context.Background()) 839 + if err != nil { 840 + t.Fatalf("Create() failed: %v", err) 841 + } 842 + defer writer.Cancel(context.Background()) 843 + 844 + // Write small data (1MB - well under 
10MB threshold) 845 + data := generateTestData(1 * 1024 * 1024) 846 + n, err := writer.Write(data) 847 + if err != nil { 848 + t.Fatalf("Write() failed: %v", err) 849 + } 850 + 851 + if n != len(data) { 852 + t.Errorf("Expected to write %d bytes, wrote %d", len(data), n) 853 + } 854 + 855 + // Verify size is tracked 856 + if writer.Size() != int64(len(data)) { 857 + t.Errorf("Expected size %d, got %d", len(data), writer.Size()) 858 + } 859 + 860 + // Verify NO flush occurred (no part uploads to S3) 861 + s3Server.mu.Lock() 862 + partCount := len(s3Server.Parts) 863 + s3Server.mu.Unlock() 864 + 865 + if partCount != 0 { 866 + t.Errorf("Expected 0 parts uploaded (data should be buffered), got %d", partCount) 867 + } 868 + } 869 + 870 + // TestWrite_TriggerFlush tests that writing 10MB triggers flush 871 + func TestWrite_TriggerFlush(t *testing.T) { 872 + s3Server := newMockS3Server(t, true) 873 + defer s3Server.Close() 874 + 875 + holdServer := newMockHoldServer(t, s3Server.URL) 876 + defer holdServer.Close() 877 + 878 + store := createTestProxyBlobStore(t, holdServer.URL) 879 + 880 + writer, err := store.Create(context.Background()) 881 + if err != nil { 882 + t.Fatalf("Create() failed: %v", err) 883 + } 884 + defer writer.Cancel(context.Background()) 885 + 886 + // Write exactly 10MB (the threshold) 887 + data := generateTestData(10 * 1024 * 1024) 888 + _, err = writer.Write(data) 889 + if err != nil { 890 + t.Fatalf("Write() failed: %v", err) 891 + } 892 + 893 + // Verify flush occurred (1 part uploaded) 894 + s3Server.mu.Lock() 895 + partCount := len(s3Server.Parts) 896 + uploadedData := s3Server.Parts[1] 897 + s3Server.mu.Unlock() 898 + 899 + if partCount != 1 { 900 + t.Errorf("Expected 1 part uploaded after 10MB write, got %d", partCount) 901 + } 902 + 903 + if len(uploadedData) != 10*1024*1024 { 904 + t.Errorf("Expected uploaded part to be 10MB, got %d", len(uploadedData)) 905 + } 906 + } 907 + 908 + // TestWrite_MultipleFlushes tests that writing 25MB 
triggers 2 flushes 909 + func TestWrite_MultipleFlushes(t *testing.T) { 910 + s3Server := newMockS3Server(t, true) 911 + defer s3Server.Close() 912 + 913 + holdServer := newMockHoldServer(t, s3Server.URL) 914 + defer holdServer.Close() 915 + 916 + store := createTestProxyBlobStore(t, holdServer.URL) 917 + 918 + writer, err := store.Create(context.Background()) 919 + if err != nil { 920 + t.Fatalf("Create() failed: %v", err) 921 + } 922 + defer writer.Cancel(context.Background()) 923 + 924 + // Write 25MB in chunks (simulating Docker layer upload) 925 + totalSize := 25 * 1024 * 1024 926 + chunkSize := 64 * 1024 // 64KB chunks 927 + data := generateTestData(chunkSize) 928 + 929 + for written := 0; written < totalSize; written += chunkSize { 930 + _, err = writer.Write(data) 931 + if err != nil { 932 + t.Fatalf("Write() failed at byte %d: %v", written, err) 933 + } 934 + } 935 + 936 + // Verify 2 flushes occurred (10MB + 10MB), 5MB remains in buffer 937 + s3Server.mu.Lock() 938 + partCount := len(s3Server.Parts) 939 + s3Server.mu.Unlock() 940 + 941 + if partCount != 2 { 942 + t.Errorf("Expected 2 parts uploaded after 25MB write, got %d", partCount) 943 + } 944 + 945 + // Verify size tracking 946 + if writer.Size() != int64(totalSize) { 947 + t.Errorf("Expected size %d, got %d", totalSize, writer.Size()) 948 + } 949 + 950 + // Verify part URL calls 951 + holdServer.mu.Lock() 952 + partURLCount := len(holdServer.PartURLCalls) 953 + holdServer.mu.Unlock() 954 + 955 + if partURLCount != 2 { 956 + t.Errorf("Expected 2 part URL calls, got %d", partURLCount) 957 + } 958 + } 959 + 960 + // TestWrite_ClosedWriter tests that Write() fails on closed writer 961 + func TestWrite_ClosedWriter(t *testing.T) { 962 + s3Server := newMockS3Server(t, true) 963 + defer s3Server.Close() 964 + 965 + holdServer := newMockHoldServer(t, s3Server.URL) 966 + defer holdServer.Close() 967 + 968 + store := createTestProxyBlobStore(t, holdServer.URL) 969 + 970 + writer, err := 
store.Create(context.Background()) 971 + if err != nil { 972 + t.Fatalf("Create() failed: %v", err) 973 + } 974 + 975 + // Close the writer 976 + writer.Cancel(context.Background()) 977 + 978 + // Try to write 979 + data := generateTestData(1024) 980 + _, err = writer.Write(data) 981 + if err == nil { 982 + t.Fatal("Expected error writing to closed writer") 983 + } 984 + 985 + if !strings.Contains(err.Error(), "closed") { 986 + t.Errorf("Expected 'closed' error, got: %v", err) 987 + } 988 + } 989 + 990 + // TestFlushPart_Success tests successful part upload with ETag in header 991 + func TestFlushPart_Success(t *testing.T) { 992 + s3Server := newMockS3Server(t, true) // ETag in header 993 + defer s3Server.Close() 994 + 995 + holdServer := newMockHoldServer(t, s3Server.URL) 996 + defer holdServer.Close() 997 + 998 + store := createTestProxyBlobStore(t, holdServer.URL) 999 + 1000 + writer, err := store.Create(context.Background()) 1001 + if err != nil { 1002 + t.Fatalf("Create() failed: %v", err) 1003 + } 1004 + defer writer.Cancel(context.Background()) 1005 + 1006 + // Write enough to trigger flush 1007 + data := generateTestData(10 * 1024 * 1024) 1008 + _, err = writer.Write(data) 1009 + if err != nil { 1010 + t.Fatalf("Write() failed: %v", err) 1011 + } 1012 + 1013 + // Get the internal writer to check parts 1014 + pbw := writer.(*ProxyBlobWriter) 1015 + if len(pbw.parts) != 1 { 1016 + t.Errorf("Expected 1 part recorded, got %d", len(pbw.parts)) 1017 + } 1018 + 1019 + if pbw.parts[0].PartNumber != 1 { 1020 + t.Errorf("Expected part number 1, got %d", pbw.parts[0].PartNumber) 1021 + } 1022 + 1023 + if pbw.parts[0].ETag != `"etag-part-1"` { 1024 + t.Errorf("Expected ETag '\"etag-part-1\"', got %s", pbw.parts[0].ETag) 1025 + } 1026 + } 1027 + 1028 + // TestFlushPart_ETagInJSON tests successful part upload with ETag in JSON body 1029 + func TestFlushPart_ETagInJSON(t *testing.T) { 1030 + s3Server := newMockS3Server(t, false) // ETag in JSON body 1031 + defer 
s3Server.Close() 1032 + 1033 + holdServer := newMockHoldServer(t, s3Server.URL) 1034 + defer holdServer.Close() 1035 + 1036 + store := createTestProxyBlobStore(t, holdServer.URL) 1037 + 1038 + writer, err := store.Create(context.Background()) 1039 + if err != nil { 1040 + t.Fatalf("Create() failed: %v", err) 1041 + } 1042 + defer writer.Cancel(context.Background()) 1043 + 1044 + // Write enough to trigger flush 1045 + data := generateTestData(10 * 1024 * 1024) 1046 + _, err = writer.Write(data) 1047 + if err != nil { 1048 + t.Fatalf("Write() failed: %v", err) 1049 + } 1050 + 1051 + // Verify part was recorded with ETag from JSON 1052 + pbw := writer.(*ProxyBlobWriter) 1053 + if len(pbw.parts) != 1 { 1054 + t.Errorf("Expected 1 part recorded, got %d", len(pbw.parts)) 1055 + } 1056 + 1057 + if pbw.parts[0].ETag != `"etag-part-1"` { 1058 + t.Errorf("Expected ETag '\"etag-part-1\"', got %s", pbw.parts[0].ETag) 1059 + } 1060 + } 1061 + 1062 + // TestFlushPart_HoldError tests that flushPart returns error when hold fails 1063 + func TestFlushPart_HoldError(t *testing.T) { 1064 + s3Server := newMockS3Server(t, true) 1065 + defer s3Server.Close() 1066 + 1067 + holdServer := newMockHoldServer(t, s3Server.URL) 1068 + holdServer.PartURLError = fmt.Errorf("hold service error") 1069 + defer holdServer.Close() 1070 + 1071 + store := createTestProxyBlobStore(t, holdServer.URL) 1072 + 1073 + writer, err := store.Create(context.Background()) 1074 + if err != nil { 1075 + t.Fatalf("Create() failed: %v", err) 1076 + } 1077 + defer writer.Cancel(context.Background()) 1078 + 1079 + // Write enough to trigger flush 1080 + data := generateTestData(10 * 1024 * 1024) 1081 + _, err = writer.Write(data) 1082 + 1083 + if err == nil { 1084 + t.Fatal("Expected error when hold service fails") 1085 + } 1086 + 1087 + if !strings.Contains(err.Error(), "failed to get part upload info") { 1088 + t.Errorf("Expected part upload info error, got: %v", err) 1089 + } 1090 + } 1091 + 1092 + // 
TestFlushPart_S3Error tests that flushPart returns error when S3 fails 1093 + func TestFlushPart_S3Error(t *testing.T) { 1094 + s3Server := newMockS3Server(t, true) 1095 + s3Server.UploadError = fmt.Errorf("S3 unavailable") 1096 + defer s3Server.Close() 1097 + 1098 + holdServer := newMockHoldServer(t, s3Server.URL) 1099 + defer holdServer.Close() 1100 + 1101 + store := createTestProxyBlobStore(t, holdServer.URL) 1102 + 1103 + writer, err := store.Create(context.Background()) 1104 + if err != nil { 1105 + t.Fatalf("Create() failed: %v", err) 1106 + } 1107 + defer writer.Cancel(context.Background()) 1108 + 1109 + // Write enough to trigger flush 1110 + data := generateTestData(10 * 1024 * 1024) 1111 + _, err = writer.Write(data) 1112 + 1113 + if err == nil { 1114 + t.Fatal("Expected error when S3 fails") 1115 + } 1116 + 1117 + if !strings.Contains(err.Error(), "part upload failed") { 1118 + t.Errorf("Expected part upload failed error, got: %v", err) 1119 + } 1120 + } 1121 + 1122 + // TestFlushPart_NoETag tests that flushPart returns error when no ETag is returned 1123 + func TestFlushPart_NoETag(t *testing.T) { 1124 + // Create a custom S3 server that returns no ETag 1125 + s3Server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { 1126 + // Return 200 OK but no ETag anywhere 1127 + w.WriteHeader(http.StatusOK) 1128 + w.Write([]byte(`{}`)) 1129 + })) 1130 + defer s3Server.Close() 1131 + 1132 + holdServer := newMockHoldServer(t, s3Server.URL) 1133 + defer holdServer.Close() 1134 + 1135 + store := createTestProxyBlobStore(t, holdServer.URL) 1136 + 1137 + writer, err := store.Create(context.Background()) 1138 + if err != nil { 1139 + t.Fatalf("Create() failed: %v", err) 1140 + } 1141 + defer writer.Cancel(context.Background()) 1142 + 1143 + // Write enough to trigger flush 1144 + data := generateTestData(10 * 1024 * 1024) 1145 + _, err = writer.Write(data) 1146 + 1147 + if err == nil { 1148 + t.Fatal("Expected error when no ETag is 
returned") 1149 + } 1150 + 1151 + if !strings.Contains(err.Error(), "no ETag") { 1152 + t.Errorf("Expected no ETag error, got: %v", err) 1153 + } 1154 + } 1155 + 1156 + // TestReadFrom_SmallFile tests ReadFrom with data under flush threshold 1157 + func TestReadFrom_SmallFile(t *testing.T) { 1158 + s3Server := newMockS3Server(t, true) 1159 + defer s3Server.Close() 1160 + 1161 + holdServer := newMockHoldServer(t, s3Server.URL) 1162 + defer holdServer.Close() 1163 + 1164 + store := createTestProxyBlobStore(t, holdServer.URL) 1165 + 1166 + writer, err := store.Create(context.Background()) 1167 + if err != nil { 1168 + t.Fatalf("Create() failed: %v", err) 1169 + } 1170 + defer writer.Cancel(context.Background()) 1171 + 1172 + // Stream 1MB through ReadFrom 1173 + data := generateTestData(1 * 1024 * 1024) 1174 + reader := bytes.NewReader(data) 1175 + 1176 + n, err := writer.ReadFrom(reader) 1177 + if err != nil { 1178 + t.Fatalf("ReadFrom() failed: %v", err) 1179 + } 1180 + 1181 + if n != int64(len(data)) { 1182 + t.Errorf("Expected to read %d bytes, got %d", len(data), n) 1183 + } 1184 + 1185 + // Verify no flush occurred 1186 + s3Server.mu.Lock() 1187 + partCount := len(s3Server.Parts) 1188 + s3Server.mu.Unlock() 1189 + 1190 + if partCount != 0 { 1191 + t.Errorf("Expected 0 parts (data buffered), got %d", partCount) 1192 + } 1193 + } 1194 + 1195 + // TestReadFrom_LargeFile tests ReadFrom with multiple flushes 1196 + func TestReadFrom_LargeFile(t *testing.T) { 1197 + s3Server := newMockS3Server(t, true) 1198 + defer s3Server.Close() 1199 + 1200 + holdServer := newMockHoldServer(t, s3Server.URL) 1201 + defer holdServer.Close() 1202 + 1203 + store := createTestProxyBlobStore(t, holdServer.URL) 1204 + 1205 + writer, err := store.Create(context.Background()) 1206 + if err != nil { 1207 + t.Fatalf("Create() failed: %v", err) 1208 + } 1209 + defer writer.Cancel(context.Background()) 1210 + 1211 + // Stream 25MB through ReadFrom 1212 + data := generateTestData(25 * 1024 * 
1024) 1213 + reader := bytes.NewReader(data) 1214 + 1215 + n, err := writer.ReadFrom(reader) 1216 + if err != nil { 1217 + t.Fatalf("ReadFrom() failed: %v", err) 1218 + } 1219 + 1220 + if n != int64(len(data)) { 1221 + t.Errorf("Expected to read %d bytes, got %d", len(data), n) 1222 + } 1223 + 1224 + // Verify 2 flushes occurred 1225 + s3Server.mu.Lock() 1226 + partCount := len(s3Server.Parts) 1227 + s3Server.mu.Unlock() 1228 + 1229 + if partCount != 2 { 1230 + t.Errorf("Expected 2 parts (2x 10MB), got %d", partCount) 1231 + } 1232 + } 1233 + 1234 + // TestReadFrom_ClosedWriter tests that ReadFrom fails on closed writer 1235 + func TestReadFrom_ClosedWriter(t *testing.T) { 1236 + s3Server := newMockS3Server(t, true) 1237 + defer s3Server.Close() 1238 + 1239 + holdServer := newMockHoldServer(t, s3Server.URL) 1240 + defer holdServer.Close() 1241 + 1242 + store := createTestProxyBlobStore(t, holdServer.URL) 1243 + 1244 + writer, err := store.Create(context.Background()) 1245 + if err != nil { 1246 + t.Fatalf("Create() failed: %v", err) 1247 + } 1248 + 1249 + // Close the writer 1250 + writer.Cancel(context.Background()) 1251 + 1252 + // Try to ReadFrom 1253 + data := generateTestData(1024) 1254 + reader := bytes.NewReader(data) 1255 + 1256 + _, err = writer.ReadFrom(reader) 1257 + if err == nil { 1258 + t.Fatal("Expected error reading to closed writer") 1259 + } 1260 + 1261 + if !strings.Contains(err.Error(), "closed") { 1262 + t.Errorf("Expected 'closed' error, got: %v", err) 1263 + } 1264 + } 1265 + 1266 + // TestCommit_Success tests successful commit 1267 + func TestCommit_Success(t *testing.T) { 1268 + s3Server := newMockS3Server(t, true) 1269 + defer s3Server.Close() 1270 + 1271 + holdServer := newMockHoldServer(t, s3Server.URL) 1272 + defer holdServer.Close() 1273 + 1274 + store := createTestProxyBlobStore(t, holdServer.URL) 1275 + 1276 + writer, err := store.Create(context.Background()) 1277 + if err != nil { 1278 + t.Fatalf("Create() failed: %v", err) 1279 + } 
1280 + 1281 + // Write some data 1282 + data := generateTestData(5 * 1024 * 1024) 1283 + _, err = writer.Write(data) 1284 + if err != nil { 1285 + t.Fatalf("Write() failed: %v", err) 1286 + } 1287 + 1288 + // Commit 1289 + dgst := digest.FromBytes(data) 1290 + desc, err := writer.Commit(context.Background(), distribution.Descriptor{ 1291 + Digest: dgst, 1292 + Size: int64(len(data)), 1293 + MediaType: "application/octet-stream", 1294 + }) 1295 + if err != nil { 1296 + t.Fatalf("Commit() failed: %v", err) 1297 + } 1298 + 1299 + // Verify descriptor 1300 + if desc.Digest != dgst { 1301 + t.Errorf("Expected digest %s, got %s", dgst, desc.Digest) 1302 + } 1303 + 1304 + if desc.Size != int64(len(data)) { 1305 + t.Errorf("Expected size %d, got %d", len(data), desc.Size) 1306 + } 1307 + 1308 + // Verify complete was called 1309 + holdServer.mu.Lock() 1310 + completeCount := len(holdServer.CompleteCalls) 1311 + holdServer.mu.Unlock() 1312 + 1313 + if completeCount != 1 { 1314 + t.Errorf("Expected 1 complete call, got %d", completeCount) 1315 + } 1316 + 1317 + // Verify writer removed from global uploads 1318 + globalUploadsMu.RLock() 1319 + _, exists := globalUploads[writer.ID()] 1320 + globalUploadsMu.RUnlock() 1321 + if exists { 1322 + t.Error("Writer should be removed from globalUploads after commit") 1323 + } 1324 + } 1325 + 1326 + // TestCommit_WithRemainingBuffer tests commit with data in buffer 1327 + func TestCommit_WithRemainingBuffer(t *testing.T) { 1328 + s3Server := newMockS3Server(t, true) 1329 + defer s3Server.Close() 1330 + 1331 + holdServer := newMockHoldServer(t, s3Server.URL) 1332 + defer holdServer.Close() 1333 + 1334 + store := createTestProxyBlobStore(t, holdServer.URL) 1335 + 1336 + writer, err := store.Create(context.Background()) 1337 + if err != nil { 1338 + t.Fatalf("Create() failed: %v", err) 1339 + } 1340 + 1341 + // Write 15MB in chunks (simulating realistic upload) 1342 + // This ensures: 1 flush at 10MB threshold + 5MB remaining in buffer 
1343 + totalSize := 15 * 1024 * 1024 1344 + chunkSize := 64 * 1024 // 64KB chunks 1345 + chunk := generateTestData(chunkSize) 1346 + 1347 + allData := make([]byte, 0, totalSize) 1348 + for written := 0; written < totalSize; written += chunkSize { 1349 + _, err = writer.Write(chunk) 1350 + if err != nil { 1351 + t.Fatalf("Write() failed: %v", err) 1352 + } 1353 + allData = append(allData, chunk...) 1354 + } 1355 + 1356 + // At this point, 1 part should be uploaded (10MB), 5MB in buffer 1357 + s3Server.mu.Lock() 1358 + partsBeforeCommit := len(s3Server.Parts) 1359 + s3Server.mu.Unlock() 1360 + 1361 + if partsBeforeCommit != 1 { 1362 + t.Errorf("Expected 1 part before commit, got %d", partsBeforeCommit) 1363 + } 1364 + 1365 + // Commit 1366 + dgst := digest.FromBytes(allData) 1367 + _, err = writer.Commit(context.Background(), distribution.Descriptor{ 1368 + Digest: dgst, 1369 + Size: int64(totalSize), 1370 + MediaType: "application/octet-stream", 1371 + }) 1372 + if err != nil { 1373 + t.Fatalf("Commit() failed: %v", err) 1374 + } 1375 + 1376 + // Verify final flush happened (now 2 parts) 1377 + s3Server.mu.Lock() 1378 + partsAfterCommit := len(s3Server.Parts) 1379 + s3Server.mu.Unlock() 1380 + 1381 + if partsAfterCommit != 2 { 1382 + t.Errorf("Expected 2 parts after commit, got %d", partsAfterCommit) 1383 + } 1384 + 1385 + // Verify complete was called with 2 parts 1386 + holdServer.mu.Lock() 1387 + if len(holdServer.CompleteCalls) != 1 { 1388 + t.Fatalf("Expected 1 complete call, got %d", len(holdServer.CompleteCalls)) 1389 + } 1390 + completedParts := len(holdServer.CompleteCalls[0].Parts) 1391 + holdServer.mu.Unlock() 1392 + 1393 + if completedParts != 2 { 1394 + t.Errorf("Expected complete to have 2 parts, got %d", completedParts) 1395 + } 1396 + } 1397 + 1398 + // TestCommit_FlushError tests that commit handles flush error 1399 + func TestCommit_FlushError(t *testing.T) { 1400 + s3Server := newMockS3Server(t, true) 1401 + defer s3Server.Close() 1402 + 1403 + 
holdServer := newMockHoldServer(t, s3Server.URL) 1404 + defer holdServer.Close() 1405 + 1406 + store := createTestProxyBlobStore(t, holdServer.URL) 1407 + 1408 + writer, err := store.Create(context.Background()) 1409 + if err != nil { 1410 + t.Fatalf("Create() failed: %v", err) 1411 + } 1412 + 1413 + // Write some data 1414 + data := generateTestData(5 * 1024 * 1024) 1415 + _, err = writer.Write(data) 1416 + if err != nil { 1417 + t.Fatalf("Write() failed: %v", err) 1418 + } 1419 + 1420 + // Inject error for final flush 1421 + holdServer.mu.Lock() 1422 + holdServer.PartURLError = fmt.Errorf("flush error") 1423 + holdServer.mu.Unlock() 1424 + 1425 + // Commit should fail 1426 + dgst := digest.FromBytes(data) 1427 + _, err = writer.Commit(context.Background(), distribution.Descriptor{ 1428 + Digest: dgst, 1429 + Size: int64(len(data)), 1430 + }) 1431 + 1432 + if err == nil { 1433 + t.Fatal("Expected error from Commit() when flush fails") 1434 + } 1435 + 1436 + if !strings.Contains(err.Error(), "failed to flush final part") { 1437 + t.Errorf("Expected flush error, got: %v", err) 1438 + } 1439 + 1440 + // Verify abort was called 1441 + holdServer.mu.Lock() 1442 + abortCount := len(holdServer.AbortCalls) 1443 + holdServer.mu.Unlock() 1444 + 1445 + if abortCount != 1 { 1446 + t.Errorf("Expected 1 abort call after flush error, got %d", abortCount) 1447 + } 1448 + } 1449 + 1450 + // TestCommit_CompleteError tests that commit handles complete error 1451 + func TestCommit_CompleteError(t *testing.T) { 1452 + s3Server := newMockS3Server(t, true) 1453 + defer s3Server.Close() 1454 + 1455 + holdServer := newMockHoldServer(t, s3Server.URL) 1456 + holdServer.CompleteError = fmt.Errorf("complete failed") 1457 + defer holdServer.Close() 1458 + 1459 + store := createTestProxyBlobStore(t, holdServer.URL) 1460 + 1461 + writer, err := store.Create(context.Background()) 1462 + if err != nil { 1463 + t.Fatalf("Create() failed: %v", err) 1464 + } 1465 + 1466 + // Write some data 1467 + 
data := generateTestData(1 * 1024 * 1024) 1468 + _, err = writer.Write(data) 1469 + if err != nil { 1470 + t.Fatalf("Write() failed: %v", err) 1471 + } 1472 + 1473 + // Commit should fail 1474 + dgst := digest.FromBytes(data) 1475 + _, err = writer.Commit(context.Background(), distribution.Descriptor{ 1476 + Digest: dgst, 1477 + Size: int64(len(data)), 1478 + }) 1479 + 1480 + if err == nil { 1481 + t.Fatal("Expected error from Commit() when complete fails") 1482 + } 1483 + 1484 + if !strings.Contains(err.Error(), "failed to complete multipart upload") { 1485 + t.Errorf("Expected complete error, got: %v", err) 1486 + } 1487 + } 1488 + 1489 + // TestCommit_ClosedWriter tests that Commit() fails on closed writer 1490 + func TestCommit_ClosedWriter(t *testing.T) { 1491 + s3Server := newMockS3Server(t, true) 1492 + defer s3Server.Close() 1493 + 1494 + holdServer := newMockHoldServer(t, s3Server.URL) 1495 + defer holdServer.Close() 1496 + 1497 + store := createTestProxyBlobStore(t, holdServer.URL) 1498 + 1499 + writer, err := store.Create(context.Background()) 1500 + if err != nil { 1501 + t.Fatalf("Create() failed: %v", err) 1502 + } 1503 + 1504 + // Close the writer 1505 + writer.Cancel(context.Background()) 1506 + 1507 + // Try to commit 1508 + dgst := digest.FromString("test") 1509 + _, err = writer.Commit(context.Background(), distribution.Descriptor{ 1510 + Digest: dgst, 1511 + Size: 4, 1512 + }) 1513 + 1514 + if err == nil { 1515 + t.Fatal("Expected error committing closed writer") 1516 + } 1517 + 1518 + if !strings.Contains(err.Error(), "closed") { 1519 + t.Errorf("Expected 'closed' error, got: %v", err) 1520 + } 1521 + } 1522 + 1523 + // TestCancel_Success tests successful cancel 1524 + func TestCancel_Success(t *testing.T) { 1525 + s3Server := newMockS3Server(t, true) 1526 + defer s3Server.Close() 1527 + 1528 + holdServer := newMockHoldServer(t, s3Server.URL) 1529 + defer holdServer.Close() 1530 + 1531 + store := createTestProxyBlobStore(t, holdServer.URL) 1532 
+ 1533 + writer, err := store.Create(context.Background()) 1534 + if err != nil { 1535 + t.Fatalf("Create() failed: %v", err) 1536 + } 1537 + 1538 + writerID := writer.ID() 1539 + 1540 + // Cancel 1541 + err = writer.Cancel(context.Background()) 1542 + if err != nil { 1543 + t.Fatalf("Cancel() failed: %v", err) 1544 + } 1545 + 1546 + // Verify abort was called 1547 + holdServer.mu.Lock() 1548 + abortCount := len(holdServer.AbortCalls) 1549 + holdServer.mu.Unlock() 1550 + 1551 + if abortCount != 1 { 1552 + t.Errorf("Expected 1 abort call, got %d", abortCount) 1553 + } 1554 + 1555 + // Verify removed from global uploads 1556 + globalUploadsMu.RLock() 1557 + _, exists := globalUploads[writerID] 1558 + globalUploadsMu.RUnlock() 1559 + 1560 + if exists { 1561 + t.Error("Writer should be removed from globalUploads after cancel") 1562 + } 1563 + } 1564 + 1565 + // TestCancel_AbortError tests that Cancel() still succeeds when abort fails 1566 + func TestCancel_AbortError(t *testing.T) { 1567 + s3Server := newMockS3Server(t, true) 1568 + defer s3Server.Close() 1569 + 1570 + holdServer := newMockHoldServer(t, s3Server.URL) 1571 + holdServer.AbortError = fmt.Errorf("abort failed") 1572 + defer holdServer.Close() 1573 + 1574 + store := createTestProxyBlobStore(t, holdServer.URL) 1575 + 1576 + writer, err := store.Create(context.Background()) 1577 + if err != nil { 1578 + t.Fatalf("Create() failed: %v", err) 1579 + } 1580 + 1581 + writerID := writer.ID() 1582 + 1583 + // Cancel should still return nil (graceful) 1584 + err = writer.Cancel(context.Background()) 1585 + if err != nil { 1586 + t.Errorf("Cancel() should return nil even when abort fails, got: %v", err) 1587 + } 1588 + 1589 + // Verify still removed from global uploads 1590 + globalUploadsMu.RLock() 1591 + _, exists := globalUploads[writerID] 1592 + globalUploadsMu.RUnlock() 1593 + 1594 + if exists { 1595 + t.Error("Writer should be removed from globalUploads even when abort fails") 1596 + } 1597 + } 1598 + 1599 + // 
TestCancel_AlreadyClosed tests that Cancel() is idempotent 1600 + func TestCancel_AlreadyClosed(t *testing.T) { 1601 + s3Server := newMockS3Server(t, true) 1602 + defer s3Server.Close() 1603 + 1604 + holdServer := newMockHoldServer(t, s3Server.URL) 1605 + defer holdServer.Close() 1606 + 1607 + store := createTestProxyBlobStore(t, holdServer.URL) 1608 + 1609 + writer, err := store.Create(context.Background()) 1610 + if err != nil { 1611 + t.Fatalf("Create() failed: %v", err) 1612 + } 1613 + 1614 + // Cancel twice 1615 + err1 := writer.Cancel(context.Background()) 1616 + err2 := writer.Cancel(context.Background()) 1617 + 1618 + if err1 != nil || err2 != nil { 1619 + t.Errorf("Cancel() should be idempotent, got err1=%v, err2=%v", err1, err2) 1620 + } 1621 + } 1622 + 1623 + // TestResume_Success tests successful resume 1624 + func TestResume_Success(t *testing.T) { 1625 + s3Server := newMockS3Server(t, true) 1626 + defer s3Server.Close() 1627 + 1628 + holdServer := newMockHoldServer(t, s3Server.URL) 1629 + defer holdServer.Close() 1630 + 1631 + store := createTestProxyBlobStore(t, holdServer.URL) 1632 + 1633 + writer, err := store.Create(context.Background()) 1634 + if err != nil { 1635 + t.Fatalf("Create() failed: %v", err) 1636 + } 1637 + defer writer.Cancel(context.Background()) 1638 + 1639 + writerID := writer.ID() 1640 + 1641 + // Write some data 1642 + data := generateTestData(1024) 1643 + _, err = writer.Write(data) 1644 + if err != nil { 1645 + t.Fatalf("Write() failed: %v", err) 1646 + } 1647 + 1648 + // Resume should return the same writer 1649 + resumedWriter, err := store.Resume(context.Background(), writerID) 1650 + if err != nil { 1651 + t.Fatalf("Resume() failed: %v", err) 1652 + } 1653 + 1654 + // Should be the same writer instance 1655 + if resumedWriter.ID() != writerID { 1656 + t.Errorf("Expected same writer ID, got %s", resumedWriter.ID()) 1657 + } 1658 + 1659 + // Size should be preserved 1660 + if resumedWriter.Size() != int64(len(data)) { 1661 + 
t.Errorf("Expected size %d, got %d", len(data), resumedWriter.Size()) 1662 + } 1663 + } 1664 + 1665 + // TestResume_NotFound tests that Resume() returns error for unknown ID 1666 + func TestResume_NotFound(t *testing.T) { 1667 + s3Server := newMockS3Server(t, true) 1668 + defer s3Server.Close() 1669 + 1670 + holdServer := newMockHoldServer(t, s3Server.URL) 1671 + defer holdServer.Close() 1672 + 1673 + store := createTestProxyBlobStore(t, holdServer.URL) 1674 + 1675 + // Try to resume non-existent upload 1676 + _, err := store.Resume(context.Background(), "non-existent-upload-id") 1677 + if err == nil { 1678 + t.Fatal("Expected error for non-existent upload") 1679 + } 1680 + 1681 + if err != distribution.ErrBlobUploadUnknown { 1682 + t.Errorf("Expected ErrBlobUploadUnknown, got: %v", err) 1683 + } 1684 + } 1685 + 1686 + // TestFullUploadFlow_25MB tests the complete upload flow with a 25MB file 1687 + func TestFullUploadFlow_25MB(t *testing.T) { 1688 + s3Server := newMockS3Server(t, true) 1689 + defer s3Server.Close() 1690 + 1691 + holdServer := newMockHoldServer(t, s3Server.URL) 1692 + defer holdServer.Close() 1693 + 1694 + store := createTestProxyBlobStore(t, holdServer.URL) 1695 + 1696 + // Create writer 1697 + writer, err := store.Create(context.Background()) 1698 + if err != nil { 1699 + t.Fatalf("Create() failed: %v", err) 1700 + } 1701 + 1702 + // Write 25MB in chunks (simulating Docker layer upload) 1703 + totalSize := 25 * 1024 * 1024 1704 + chunkSize := 64 * 1024 // 64KB chunks (realistic for Docker) 1705 + allData := generateTestData(totalSize) 1706 + 1707 + for i := 0; i < totalSize; i += chunkSize { 1708 + end := i + chunkSize 1709 + if end > totalSize { 1710 + end = totalSize 1711 + } 1712 + _, err = writer.Write(allData[i:end]) 1713 + if err != nil { 1714 + t.Fatalf("Write() failed at byte %d: %v", i, err) 1715 + } 1716 + } 1717 + 1718 + // Verify 2 parts uploaded during write (10MB + 10MB) 1719 + s3Server.mu.Lock() 1720 + partsBeforeCommit := 
len(s3Server.Parts) 1721 + s3Server.mu.Unlock() 1722 + if partsBeforeCommit != 2 { 1723 + t.Errorf("Expected 2 parts before commit, got %d", partsBeforeCommit) 1724 + } 1725 + 1726 + // Commit 1727 + dgst := digest.FromBytes(allData) 1728 + desc, err := writer.Commit(context.Background(), distribution.Descriptor{ 1729 + Digest: dgst, 1730 + Size: int64(totalSize), 1731 + MediaType: "application/vnd.oci.image.layer.v1.tar+gzip", 1732 + }) 1733 + if err != nil { 1734 + t.Fatalf("Commit() failed: %v", err) 1735 + } 1736 + 1737 + // Verify 3 total parts (10MB + 10MB + 5MB final) 1738 + s3Server.mu.Lock() 1739 + partsAfterCommit := len(s3Server.Parts) 1740 + s3Server.mu.Unlock() 1741 + if partsAfterCommit != 3 { 1742 + t.Errorf("Expected 3 parts after commit, got %d", partsAfterCommit) 1743 + } 1744 + 1745 + // Verify complete was called with correct parts 1746 + holdServer.mu.Lock() 1747 + if len(holdServer.CompleteCalls) != 1 { 1748 + t.Fatalf("Expected 1 complete call, got %d", len(holdServer.CompleteCalls)) 1749 + } 1750 + completeCall := holdServer.CompleteCalls[0] 1751 + holdServer.mu.Unlock() 1752 + 1753 + if len(completeCall.Parts) != 3 { 1754 + t.Errorf("Expected 3 parts in complete call, got %d", len(completeCall.Parts)) 1755 + } 1756 + 1757 + if completeCall.Digest != dgst.String() { 1758 + t.Errorf("Expected digest %s, got %s", dgst.String(), completeCall.Digest) 1759 + } 1760 + 1761 + // Verify descriptor 1762 + if desc.Size != int64(totalSize) { 1763 + t.Errorf("Expected descriptor size %d, got %d", totalSize, desc.Size) 1764 + } 1765 + 1766 + // Verify data integrity - check each part has correct size 1767 + s3Server.mu.Lock() 1768 + part1Size := len(s3Server.Parts[1]) 1769 + part2Size := len(s3Server.Parts[2]) 1770 + part3Size := len(s3Server.Parts[3]) 1771 + s3Server.mu.Unlock() 1772 + 1773 + expectedPart1 := 10 * 1024 * 1024 1774 + expectedPart2 := 10 * 1024 * 1024 1775 + expectedPart3 := 5 * 1024 * 1024 1776 + 1777 + if part1Size != expectedPart1 { 
1778 + t.Errorf("Part 1 expected %d bytes, got %d", expectedPart1, part1Size) 1779 + } 1780 + if part2Size != expectedPart2 { 1781 + t.Errorf("Part 2 expected %d bytes, got %d", expectedPart2, part2Size) 1782 + } 1783 + if part3Size != expectedPart3 { 1784 + t.Errorf("Part 3 expected %d bytes, got %d", expectedPart3, part3Size) 1785 + } 1786 + } 1787 + 1788 + // TestProxyBlobWriter_ID tests the ID() method 1789 + func TestProxyBlobWriter_ID(t *testing.T) { 1790 + s3Server := newMockS3Server(t, true) 1791 + defer s3Server.Close() 1792 + 1793 + holdServer := newMockHoldServer(t, s3Server.URL) 1794 + defer holdServer.Close() 1795 + 1796 + store := createTestProxyBlobStore(t, holdServer.URL) 1797 + 1798 + writer, err := store.Create(context.Background()) 1799 + if err != nil { 1800 + t.Fatalf("Create() failed: %v", err) 1801 + } 1802 + defer writer.Cancel(context.Background()) 1803 + 1804 + id := writer.ID() 1805 + if id == "" { 1806 + t.Error("Expected non-empty ID") 1807 + } 1808 + 1809 + if !strings.HasPrefix(id, "upload-") { 1810 + t.Errorf("Expected ID to start with 'upload-', got %s", id) 1811 + } 1812 + } 1813 + 1814 + // TestProxyBlobWriter_StartedAt tests the StartedAt() method 1815 + func TestProxyBlobWriter_StartedAt(t *testing.T) { 1816 + s3Server := newMockS3Server(t, true) 1817 + defer s3Server.Close() 1818 + 1819 + holdServer := newMockHoldServer(t, s3Server.URL) 1820 + defer holdServer.Close() 1821 + 1822 + store := createTestProxyBlobStore(t, holdServer.URL) 1823 + 1824 + before := time.Now() 1825 + writer, err := store.Create(context.Background()) 1826 + if err != nil { 1827 + t.Fatalf("Create() failed: %v", err) 1828 + } 1829 + after := time.Now() 1830 + defer writer.Cancel(context.Background()) 1831 + 1832 + startedAt := writer.StartedAt() 1833 + if startedAt.Before(before) || startedAt.After(after) { 1834 + t.Errorf("StartedAt() should be between %v and %v, got %v", before, after, startedAt) 1835 + } 1836 + } 1837 + 1838 + // 
TestProxyBlobWriter_Size tests the Size() method 1839 + func TestProxyBlobWriter_Size(t *testing.T) { 1840 + s3Server := newMockS3Server(t, true) 1841 + defer s3Server.Close() 1842 + 1843 + holdServer := newMockHoldServer(t, s3Server.URL) 1844 + defer holdServer.Close() 1845 + 1846 + store := createTestProxyBlobStore(t, holdServer.URL) 1847 + 1848 + writer, err := store.Create(context.Background()) 1849 + if err != nil { 1850 + t.Fatalf("Create() failed: %v", err) 1851 + } 1852 + defer writer.Cancel(context.Background()) 1853 + 1854 + // Initial size should be 0 1855 + if writer.Size() != 0 { 1856 + t.Errorf("Expected initial size 0, got %d", writer.Size()) 1857 + } 1858 + 1859 + // Write 1KB 1860 + data := generateTestData(1024) 1861 + writer.Write(data) 1862 + 1863 + if writer.Size() != 1024 { 1864 + t.Errorf("Expected size 1024 after write, got %d", writer.Size()) 1865 + } 1866 + 1867 + // Write another 2KB 1868 + data2 := generateTestData(2048) 1869 + writer.Write(data2) 1870 + 1871 + if writer.Size() != 3072 { 1872 + t.Errorf("Expected size 3072 after second write, got %d", writer.Size()) 1873 + } 1874 + } 1875 + 1876 + // TestProxyBlobWriter_Close tests the Close() method 1877 + func TestProxyBlobWriter_Close(t *testing.T) { 1878 + s3Server := newMockS3Server(t, true) 1879 + defer s3Server.Close() 1880 + 1881 + holdServer := newMockHoldServer(t, s3Server.URL) 1882 + defer holdServer.Close() 1883 + 1884 + store := createTestProxyBlobStore(t, holdServer.URL) 1885 + 1886 + writer, err := store.Create(context.Background()) 1887 + if err != nil { 1888 + t.Fatalf("Create() failed: %v", err) 1889 + } 1890 + defer writer.Cancel(context.Background()) 1891 + 1892 + // Close should not error 1893 + err = writer.Close() 1894 + if err != nil { 1895 + t.Errorf("Close() should not return error, got: %v", err) 1896 + } 1897 + 1898 + // Close should NOT mark writer as closed (allows resume) 1899 + // Write should still work after Close 1900 + data := generateTestData(1024) 
1901 + _, err = writer.Write(data) 1902 + if err != nil { 1903 + t.Errorf("Write() should work after Close(), got: %v", err) 528 1904 } 529 1905 } 530 1906
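The part-count expectations asserted in these tests (1 part before commit for 15MB, 2 before / 3 after for 25MB) follow from simple arithmetic, assuming the writer flushes a full part at a fixed 10MB threshold and uploads the remaining short part during Commit, as the tests imply. A minimal sketch of that arithmetic (the `expectedParts` helper is hypothetical, not part of the codebase):

```go
package main

import "fmt"

// expectedParts computes how many full parts are flushed while writing,
// and the total part count after the final short part is flushed at
// commit time. Assumes a fixed flush threshold (10MB in these tests).
func expectedParts(totalSize, partSize int) (beforeCommit, afterCommit int) {
	beforeCommit = totalSize / partSize
	afterCommit = beforeCommit
	if totalSize%partSize != 0 {
		afterCommit++ // trailing partial part uploaded during Commit
	}
	return
}

func main() {
	const mb = 1024 * 1024
	b, a := expectedParts(15*mb, 10*mb)
	fmt.Println(b, a) // 1 2 — the 15MB commit-flush case
	b, a = expectedParts(25*mb, 10*mb)
	fmt.Println(b, a) // 2 3 — the TestFullUploadFlow_25MB case
}
```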
+24 -41
pkg/hold/config.go
··· 103 103 // TestMode uses localhost for OAuth redirects while storing real URL in hold record (from env: TEST_MODE) 104 104 TestMode bool `yaml:"test_mode"` 105 105 106 - // DisablePresignedURLs forces proxy mode even with S3 configured (for testing) (from env: DISABLE_PRESIGNED_URLS) 107 - DisablePresignedURLs bool `yaml:"disable_presigned_urls"` 108 - 109 106 // RelayEndpoint is the ATProto relay URL to request crawl from on startup (from env: HOLD_RELAY_ENDPOINT) 110 107 // If empty, no crawl request is made. Default: https://bsky.network 111 108 RelayEndpoint string `yaml:"relay_endpoint"` ··· 153 150 } 154 151 cfg.Server.Public = os.Getenv("HOLD_PUBLIC") == "true" 155 152 cfg.Server.TestMode = os.Getenv("TEST_MODE") == "true" 156 - cfg.Server.DisablePresignedURLs = os.Getenv("DISABLE_PRESIGNED_URLS") == "true" 157 153 cfg.Server.RelayEndpoint = os.Getenv("HOLD_RELAY_ENDPOINT") 158 154 cfg.Server.ReadTimeout = 5 * time.Minute // Increased for large blob uploads 159 155 cfg.Server.WriteTimeout = 5 * time.Minute // Increased for large blob uploads ··· 173 169 cfg.Database.KeyPath = filepath.Join(cfg.Database.Path, "signing.key") 174 170 } 175 171 176 - // Storage configuration - build from env vars based on storage type 177 - storageType := getEnvOrDefault("STORAGE_DRIVER", "s3") 172 + // Storage configuration - S3 is required (filesystem support removed) 178 173 var err error 179 - cfg.Storage, err = buildStorageConfig(storageType) 174 + cfg.Storage, err = buildStorageConfig() 180 175 if err != nil { 181 176 return nil, fmt.Errorf("failed to build storage config: %w", err) 182 177 } ··· 189 184 cfg.Registration.Region = meta.Region 190 185 slog.Info("Detected cloud metadata", "region", meta.Region) 191 186 } else { 192 - // Fall back to S3 region 193 - if storageType == "s3" { 194 - cfg.Registration.Region = getEnvOrDefault("AWS_REGION", "us-east-1") 195 - slog.Info("Using S3 region", "region", cfg.Registration.Region) 196 - } 187 + // Fall back to S3 region 
(S3 is always used) 188 + cfg.Registration.Region = getEnvOrDefault("AWS_REGION", "us-east-1") 189 + slog.Info("Using S3 region", "region", cfg.Registration.Region) 197 190 } 198 191 199 192 return cfg, nil 200 193 } 201 194 202 - // buildStorageConfig creates storage configuration based on driver type 203 - func buildStorageConfig(driver string) (StorageConfig, error) { 195 + // buildStorageConfig creates S3 storage configuration from environment variables 196 + // S3 is the only supported storage backend 197 + func buildStorageConfig() (StorageConfig, error) { 204 198 params := make(map[string]any) 205 199 206 - switch driver { 207 - case "s3": 208 - // S3/Storj/Minio configuration from standard AWS env vars 209 - accessKey := os.Getenv("AWS_ACCESS_KEY_ID") 210 - secretKey := os.Getenv("AWS_SECRET_ACCESS_KEY") 211 - region := getEnvOrDefault("AWS_REGION", "us-east-1") 212 - bucket := os.Getenv("S3_BUCKET") 213 - endpoint := os.Getenv("S3_ENDPOINT") // For Storj/Minio 200 + // S3/Storj/Minio configuration from standard AWS env vars 201 + accessKey := os.Getenv("AWS_ACCESS_KEY_ID") 202 + secretKey := os.Getenv("AWS_SECRET_ACCESS_KEY") 203 + region := getEnvOrDefault("AWS_REGION", "us-east-1") 204 + bucket := os.Getenv("S3_BUCKET") 205 + endpoint := os.Getenv("S3_ENDPOINT") // For Storj/Minio 214 206 215 - if bucket == "" { 216 - return StorageConfig{}, fmt.Errorf("S3_BUCKET is required for S3 storage") 217 - } 218 - 219 - params["accesskey"] = accessKey 220 - params["secretkey"] = secretKey 221 - params["region"] = region 222 - params["bucket"] = bucket 223 - if endpoint != "" { 224 - params["regionendpoint"] = endpoint 225 - } 226 - 227 - case "filesystem": 228 - // Filesystem configuration 229 - rootDir := getEnvOrDefault("STORAGE_ROOT_DIR", "/var/lib/atcr/hold") 230 - params["rootdirectory"] = rootDir 207 + if bucket == "" { 208 + return StorageConfig{}, fmt.Errorf("S3_BUCKET is required - S3 is the only supported storage backend") 209 + } 231 210 232 - default: 
233 - return StorageConfig{}, fmt.Errorf("unsupported storage driver: %s", driver) 211 + params["accesskey"] = accessKey 212 + params["secretkey"] = secretKey 213 + params["region"] = region 214 + params["bucket"] = bucket 215 + if endpoint != "" { 216 + params["regionendpoint"] = endpoint 234 217 } 235 218 236 219 // Build distribution Storage config 237 220 storageCfg := configuration.Storage{} 238 - storageCfg[driver] = configuration.Parameters(params) 221 + storageCfg["s3"] = configuration.Parameters(params) 239 222 240 223 return StorageConfig{Storage: storageCfg}, nil 241 224 }
+38 -98
pkg/hold/config_test.go
··· 36 36 37 37 func TestLoadConfigFromEnv_Success(t *testing.T) { 38 38 cleanup := setupEnv(t, map[string]string{ 39 - "HOLD_PUBLIC_URL": "https://hold.example.com", 40 - "HOLD_SERVER_ADDR": ":9000", 41 - "HOLD_PUBLIC": "true", 42 - "TEST_MODE": "true", 43 - "HOLD_OWNER": "did:plc:owner123", 44 - "HOLD_ALLOW_ALL_CREW": "true", 45 - "STORAGE_DRIVER": "filesystem", 46 - "STORAGE_ROOT_DIR": "/tmp/test-storage", 47 - "HOLD_DATABASE_DIR": "/tmp/test-db", 48 - "HOLD_KEY_PATH": "/tmp/test-key.pem", 39 + "HOLD_PUBLIC_URL": "https://hold.example.com", 40 + "HOLD_SERVER_ADDR": ":9000", 41 + "HOLD_PUBLIC": "true", 42 + "TEST_MODE": "true", 43 + "HOLD_OWNER": "did:plc:owner123", 44 + "HOLD_ALLOW_ALL_CREW": "true", 45 + "S3_BUCKET": "test-bucket", 46 + "AWS_ACCESS_KEY_ID": "test-key", 47 + "AWS_SECRET_ACCESS_KEY": "test-secret", 48 + "HOLD_DATABASE_DIR": "/tmp/test-db", 49 + "HOLD_KEY_PATH": "/tmp/test-key.pem", 49 50 }) 50 51 defer cleanup() 51 52 ··· 91 92 func TestLoadConfigFromEnv_MissingPublicURL(t *testing.T) { 92 93 cleanup := setupEnv(t, map[string]string{ 93 94 "HOLD_PUBLIC_URL": "", // Missing required field 94 - "STORAGE_DRIVER": "filesystem", 95 + "S3_BUCKET": "test-bucket", 95 96 }) 96 97 defer cleanup() 97 98 ··· 101 102 } 102 103 } 103 104 104 - func TestLoadConfigFromEnv_Defaults(t *testing.T) { 105 + func TestLoadConfigFromEnv_MissingS3Bucket(t *testing.T) { 105 106 cleanup := setupEnv(t, map[string]string{ 106 107 "HOLD_PUBLIC_URL": "https://hold.example.com", 107 - "STORAGE_DRIVER": "filesystem", 108 + "S3_BUCKET": "", // Missing required field 109 + }) 110 + defer cleanup() 111 + 112 + _, err := LoadConfigFromEnv() 113 + if err == nil { 114 + t.Error("Expected error for missing S3_BUCKET") 115 + } 116 + } 117 + 118 + func TestLoadConfigFromEnv_Defaults(t *testing.T) { 119 + cleanup := setupEnv(t, map[string]string{ 120 + "HOLD_PUBLIC_URL": "https://hold.example.com", 121 + "S3_BUCKET": "test-bucket", 122 + "AWS_ACCESS_KEY_ID": "test-key", 123 + 
"AWS_SECRET_ACCESS_KEY": "test-secret", 108 124 // Don't set optional vars - test defaults 109 125 "HOLD_SERVER_ADDR": "", 110 126 "HOLD_PUBLIC": "", ··· 112 128 "HOLD_OWNER": "", 113 129 "HOLD_ALLOW_ALL_CREW": "", 114 130 "AWS_REGION": "", 115 - "STORAGE_ROOT_DIR": "", 116 131 "HOLD_DATABASE_DIR": "", 117 132 }) 118 133 defer cleanup() ··· 132 147 if cfg.Server.TestMode { 133 148 t.Error("Expected default TestMode=false") 134 149 } 135 - if cfg.Server.DisablePresignedURLs { 136 - t.Error("Expected default DisablePresignedURLs=false") 137 - } 138 150 if cfg.Registration.OwnerDID != "" { 139 151 t.Error("Expected default OwnerDID to be empty") 140 152 } ··· 148 160 149 161 func TestLoadConfigFromEnv_KeyPathDefault(t *testing.T) { 150 162 cleanup := setupEnv(t, map[string]string{ 151 - "HOLD_PUBLIC_URL": "https://hold.example.com", 152 - "STORAGE_DRIVER": "filesystem", 153 - "HOLD_DATABASE_DIR": "/custom/db/path", 154 - "HOLD_KEY_PATH": "", // Should default to {Database.Path}/signing.key 163 + "HOLD_PUBLIC_URL": "https://hold.example.com", 164 + "S3_BUCKET": "test-bucket", 165 + "AWS_ACCESS_KEY_ID": "test-key", 166 + "AWS_SECRET_ACCESS_KEY": "test-secret", 167 + "HOLD_DATABASE_DIR": "/custom/db/path", 168 + "HOLD_KEY_PATH": "", // Should default to {Database.Path}/signing.key 155 169 }) 156 170 defer cleanup() 157 171 ··· 166 180 } 167 181 } 168 182 169 - func TestLoadConfigFromEnv_DisablePresignedURLs(t *testing.T) { 170 - cleanup := setupEnv(t, map[string]string{ 171 - "HOLD_PUBLIC_URL": "https://hold.example.com", 172 - "STORAGE_DRIVER": "filesystem", 173 - "DISABLE_PRESIGNED_URLS": "true", 174 - }) 175 - defer cleanup() 176 - 177 - cfg, err := LoadConfigFromEnv() 178 - if err != nil { 179 - t.Fatalf("Expected success, got error: %v", err) 180 - } 181 - 182 - if !cfg.Server.DisablePresignedURLs { 183 - t.Error("Expected DisablePresignedURLs=true") 184 - } 185 - } 186 - 187 183 func TestBuildStorageConfig_S3_Complete(t *testing.T) { 188 184 cleanup := setupEnv(t, 
map[string]string{ 189 185 "AWS_ACCESS_KEY_ID": "test-access-key", ··· 194 190 }) 195 191 defer cleanup() 196 192 197 - cfg, err := buildStorageConfig("s3") 193 + cfg, err := buildStorageConfig() 198 194 if err != nil { 199 195 t.Fatalf("Expected success, got error: %v", err) 200 196 } ··· 233 229 }) 234 230 defer cleanup() 235 231 236 - cfg, err := buildStorageConfig("s3") 232 + cfg, err := buildStorageConfig() 237 233 if err != nil { 238 234 t.Fatalf("Expected success, got error: %v", err) 239 235 } ··· 264 260 }) 265 261 defer cleanup() 266 262 267 - _, err := buildStorageConfig("s3") 263 + _, err := buildStorageConfig() 268 264 if err == nil { 269 265 t.Error("Expected error for missing S3_BUCKET") 270 - } 271 - } 272 - 273 - func TestBuildStorageConfig_Filesystem(t *testing.T) { 274 - cleanup := setupEnv(t, map[string]string{ 275 - "STORAGE_ROOT_DIR": "/custom/storage/path", 276 - }) 277 - defer cleanup() 278 - 279 - cfg, err := buildStorageConfig("filesystem") 280 - if err != nil { 281 - t.Fatalf("Expected success, got error: %v", err) 282 - } 283 - 284 - fsParams, ok := cfg.Storage["filesystem"] 285 - if !ok { 286 - t.Fatal("Expected filesystem storage config") 287 - } 288 - 289 - params := map[string]any(fsParams) 290 - 291 - if params["rootdirectory"] != "/custom/storage/path" { 292 - t.Errorf("Expected rootdirectory=/custom/storage/path, got %v", params["rootdirectory"]) 293 - } 294 - } 295 - 296 - func TestBuildStorageConfig_Filesystem_Default(t *testing.T) { 297 - cleanup := setupEnv(t, map[string]string{ 298 - "STORAGE_ROOT_DIR": "", // Test default 299 - }) 300 - defer cleanup() 301 - 302 - cfg, err := buildStorageConfig("filesystem") 303 - if err != nil { 304 - t.Fatalf("Expected success, got error: %v", err) 305 - } 306 - 307 - fsParams, ok := cfg.Storage["filesystem"] 308 - if !ok { 309 - t.Fatal("Expected filesystem storage config") 310 - } 311 - 312 - params := map[string]any(fsParams) 313 - 314 - if params["rootdirectory"] != 
"/var/lib/atcr/hold" { 315 - t.Errorf("Expected default rootdirectory=/var/lib/atcr/hold, got %v", params["rootdirectory"]) 316 - } 317 - } 318 - 319 - func TestBuildStorageConfig_UnsupportedDriver(t *testing.T) { 320 - cleanup := setupEnv(t, map[string]string{}) 321 - defer cleanup() 322 - 323 - _, err := buildStorageConfig("azure") 324 - if err == nil { 325 - t.Error("Expected error for unsupported driver") 326 266 } 327 267 } 328 268
+131 -306
pkg/hold/oci/multipart.go
··· 2 2 3 3 import ( 4 4 "context" 5 - "crypto/sha256" 6 - "encoding/hex" 7 5 "fmt" 8 6 "log/slog" 9 7 "sort" ··· 11 9 "sync" 12 10 "time" 13 11 14 - "atcr.io/pkg/atproto" 15 12 "atcr.io/pkg/s3" 16 13 awss3 "github.com/aws/aws-sdk-go/service/s3" 17 14 "github.com/google/uuid" 18 15 ) 19 16 20 - // MultipartMode indicates how multipart uploads are handled 21 - type MultipartMode int 22 - 23 - const ( 24 - // S3Native uses S3's native multipart API with presigned URLs 25 - S3Native MultipartMode = iota 26 - // Buffered buffers parts in memory and assembles them in the hold service 27 - Buffered 28 - ) 29 - 30 17 // PartInfo represents an uploaded part with its ETag 31 18 type PartInfo struct { 32 19 PartNumber int `json:"part_number"` 33 20 ETag string `json:"etag"` 34 21 } 35 22 36 - // PartUploadInfo contains structured information for uploading a part 37 - // Used for both S3 presigned URLs and buffered mode with headers 23 + // PartUploadInfo contains the presigned URL for uploading a part 38 24 type PartUploadInfo struct { 39 - URL string `json:"url"` // URL to PUT the part to 40 - Method string `json:"method,omitempty"` // HTTP method (usually "PUT") 41 - Headers map[string]string `json:"headers,omitempty"` // Additional headers required for the request 25 + URL string `json:"url"` // Presigned URL to PUT the part to 42 26 } 43 27 44 28 // MultipartSession tracks an in-progress multipart upload 45 29 type MultipartSession struct { 46 - UploadID string // Unique upload ID 47 - Digest string // Target digest path 48 - Mode MultipartMode // Upload mode (S3Native or Buffered) 49 - S3UploadID string // S3 upload ID (for S3Native mode) 50 - Parts map[int]*MultipartPart // Buffered parts (for Buffered mode) 51 - CreatedAt time.Time // When upload started 52 - LastActivity time.Time // Last part upload 53 - mu sync.RWMutex // Protects Parts map 30 + UploadID string // Unique upload ID 31 + Digest string // Target digest path 32 + S3UploadID string // S3 upload ID 33 + 
CreatedAt time.Time // When upload started 34 + LastActivity time.Time // Last part upload 54 35 } 55 36 56 37 // MultipartPart represents a single part in a multipart upload 57 38 type MultipartPart struct { 58 39 PartNumber int // Part number (1-indexed) 59 - Data []byte // Part data (for Buffered mode) 60 - ETag string // ETag from S3 or computed hash 40 + ETag string // ETag from S3 61 41 Size int64 // Part size in bytes 62 42 UploadedAt time.Time // When part was uploaded 63 43 } ··· 107 87 } 108 88 109 89 // CreateSession creates a new multipart upload session 110 - func (m *MultipartManager) CreateSession(digest string, mode MultipartMode, s3UploadID string) *MultipartSession { 90 + func (m *MultipartManager) CreateSession(digest string, s3UploadID string) *MultipartSession { 111 91 uploadID := uuid.New().String() 112 92 113 93 session := &MultipartSession{ 114 94 UploadID: uploadID, 115 95 Digest: digest, 116 - Mode: mode, 117 96 S3UploadID: s3UploadID, 118 - Parts: make(map[int]*MultipartPart), 119 97 CreatedAt: time.Now(), 120 98 LastActivity: time.Now(), 121 99 } ··· 126 104 127 105 slog.Debug("Created multipart session", 128 106 "uploadID", uploadID, 129 - "digest", digest, 130 - "mode", mode) 107 + "digest", digest) 131 108 return session 132 109 } 133 110 ··· 153 130 slog.Debug("Deleted multipart session", "uploadID", uploadID) 154 131 } 155 132 156 - // StorePart stores a part in the session (for Buffered mode) 157 - func (s *MultipartSession) StorePart(partNumber int, data []byte) string { 158 - s.mu.Lock() 159 - defer s.mu.Unlock() 160 - 161 - // Compute ETag as SHA256 hash of part data 162 - hash := sha256.Sum256(data) 163 - etag := hex.EncodeToString(hash[:]) 164 - 165 - part := &MultipartPart{ 166 - PartNumber: partNumber, 167 - Data: data, 168 - ETag: etag, 169 - Size: int64(len(data)), 170 - UploadedAt: time.Now(), 133 + // StartMultipartUploadWithManager initiates a multipart upload using the manager 134 + // Returns the upload ID for 
tracking the session 135 + func (h *XRPCHandler) StartMultipartUploadWithManager(ctx context.Context, digest string) (string, error) { 136 + if h.s3Service.Client == nil { 137 + return "", fmt.Errorf("S3 not configured - S3 is required for blob storage") 171 138 } 172 139 173 - s.Parts[partNumber] = part 174 - s.LastActivity = time.Now() 175 - 176 - slog.Debug("Stored part", 177 - "uploadID", s.UploadID, 178 - "part", partNumber, 179 - "size", len(data), 180 - "etag", etag) 181 - return etag 182 - } 183 - 184 - // AssembleBufferedParts assembles all buffered parts into a single blob 185 - // Returns the complete data and total size 186 - func (s *MultipartSession) AssembleBufferedParts() ([]byte, int64, error) { 187 - s.mu.RLock() 188 - defer s.mu.RUnlock() 189 - 190 - if s.Mode != Buffered { 191 - return nil, 0, fmt.Errorf("session is not in buffered mode") 140 + path := s3.BlobPath(digest) 141 + s3Key := strings.TrimPrefix(path, "/") 142 + if h.s3Service.PathPrefix != "" { 143 + s3Key = h.s3Service.PathPrefix + "/" + s3Key 192 144 } 193 145 194 - // Calculate total size 195 - var totalSize int64 196 - maxPart := 0 197 - for partNum, part := range s.Parts { 198 - totalSize += part.Size 199 - if partNum > maxPart { 200 - maxPart = partNum 201 - } 146 + result, err := h.s3Service.Client.CreateMultipartUploadWithContext(ctx, &awss3.CreateMultipartUploadInput{ 147 + Bucket: &h.s3Service.Bucket, 148 + Key: &s3Key, 149 + }) 150 + if err != nil { 151 + return "", fmt.Errorf("failed to start S3 multipart upload: %w", err) 202 152 } 203 153 204 - // Check for missing parts 205 - for i := 1; i <= maxPart; i++ { 206 - if _, ok := s.Parts[i]; !ok { 207 - return nil, 0, fmt.Errorf("missing part %d", i) 208 - } 209 - } 210 - 211 - // Assemble parts in order 212 - assembled := make([]byte, 0, totalSize) 213 - for i := 1; i <= maxPart; i++ { 214 - part := s.Parts[i] 215 - assembled = append(assembled, part.Data...) 
216 - } 217 - 218 - slog.Debug("Assembled buffered parts", 219 - "uploadID", s.UploadID, 220 - "parts", maxPart, 221 - "totalSize", totalSize) 222 - return assembled, totalSize, nil 223 - } 224 - 225 - // StartMultipartUploadWithManager initiates a multipart upload using the manager 226 - // Returns uploadID and mode 227 - func (h *XRPCHandler) StartMultipartUploadWithManager(ctx context.Context, digest string) (string, MultipartMode, error) { 228 - // Check if presigned URLs are disabled for testing 229 - if h.disablePresignedURLs { 230 - slog.Debug("Presigned URLs disabled, using buffered mode", "reason", "DISABLE_PRESIGNED_URLS=true") 231 - session := h.MultipartMgr.CreateSession(digest, Buffered, "") 232 - slog.Debug("Started buffered multipart", "uploadID", session.UploadID) 233 - return session.UploadID, Buffered, nil 234 - } 235 - 236 - // Try S3 native multipart first 237 - if h.s3Service.Client != nil { 238 - if h.s3Service.Client == nil { 239 - return "", S3Native, fmt.Errorf("S3 not configured") 240 - } 241 - path := s3.BlobPath(digest) 242 - s3Key := strings.TrimPrefix(path, "/") 243 - if h.s3Service.PathPrefix != "" { 244 - s3Key = h.s3Service.PathPrefix + "/" + s3Key 245 - } 246 - 247 - result, err := h.s3Service.Client.CreateMultipartUploadWithContext(ctx, &awss3.CreateMultipartUploadInput{ 248 - Bucket: &h.s3Service.Bucket, 249 - Key: &s3Key, 250 - }) 251 - if err == nil { 252 - s3UploadID := *result.UploadId 253 - // S3 native multipart succeeded 254 - session := h.MultipartMgr.CreateSession(digest, S3Native, s3UploadID) 255 - slog.Debug("Started S3 native multipart", 256 - "digest", digest, 257 - "uploadID", session.UploadID, 258 - "s3UploadID", s3UploadID) 259 - return session.UploadID, S3Native, nil 260 - } 261 - slog.Warn("S3 native multipart failed, falling back to buffered mode", "error", err) 262 - } 263 - 264 - // Fallback to buffered mode 265 - session := h.MultipartMgr.CreateSession(digest, Buffered, "") 266 - slog.Debug("Started buffered 
multipart", "uploadID", session.UploadID) 267 - return session.UploadID, Buffered, nil 154 + s3UploadID := *result.UploadId 155 + session := h.MultipartMgr.CreateSession(digest, s3UploadID) 156 + slog.Debug("Started S3 multipart upload", 157 + "digest", digest, 158 + "uploadID", session.UploadID, 159 + "s3UploadID", s3UploadID) 160 + return session.UploadID, nil 268 161 } 269 162 270 163 // GetPartUploadURL generates a presigned URL for uploading a part 271 - // Only used for S3Native mode - Buffered mode is handled by blobstore adapter 272 164 func (h *XRPCHandler) GetPartUploadURL(ctx context.Context, uploadID string, partNumber int) (*PartUploadInfo, error) { 273 165 session, err := h.MultipartMgr.GetSession(uploadID) 274 166 if err != nil { 275 167 return nil, err 276 168 } 277 169 278 - // For S3Native mode: return presigned URL 279 - if session.Mode == S3Native { 280 - if h.s3Service.Client == nil { 281 - return nil, fmt.Errorf("S3 not configured") 282 - } 170 + if h.s3Service.Client == nil { 171 + return nil, fmt.Errorf("S3 not configured") 172 + } 283 173 284 - path := s3.BlobPath(session.Digest) 285 - s3Key := strings.TrimPrefix(path, "/") 286 - if h.s3Service.PathPrefix != "" { 287 - s3Key = h.s3Service.PathPrefix + "/" + s3Key 288 - } 289 - pnum := int64(partNumber) 290 - req, _ := h.s3Service.Client.UploadPartRequest(&awss3.UploadPartInput{ 291 - Bucket: &h.s3Service.Bucket, 292 - Key: &s3Key, 293 - UploadId: &session.S3UploadID, 294 - PartNumber: &pnum, 295 - }) 174 + path := s3.BlobPath(session.Digest) 175 + s3Key := strings.TrimPrefix(path, "/") 176 + if h.s3Service.PathPrefix != "" { 177 + s3Key = h.s3Service.PathPrefix + "/" + s3Key 178 + } 179 + pnum := int64(partNumber) 180 + req := h.s3Service.Client.UploadPartPresignable(&awss3.UploadPartInput{ 181 + Bucket: &h.s3Service.Bucket, 182 + Key: &s3Key, 183 + UploadId: &session.S3UploadID, 184 + PartNumber: &pnum, 185 + }) 296 186 297 - url, err := req.Presign(15 * time.Minute) 298 - if err != nil { 
299 - return nil, err 300 - } 187 + url, err := req.Presign(15 * time.Minute) 188 + if err != nil { 189 + return nil, err 190 + } 301 191 302 - slog.Debug("Generated part presigned URL", 303 - "digest", session.Digest, 304 - "uploadID", uploadID, 305 - "part", partNumber) 306 - 307 - return &PartUploadInfo{ 308 - URL: url, 309 - Method: "PUT", 310 - }, nil 311 - } 192 + slog.Debug("Generated part presigned URL", 193 + "digest", session.Digest, 194 + "uploadID", uploadID, 195 + "part", partNumber) 312 196 313 - // Buffered mode: return XRPC endpoint with headers 314 197 return &PartUploadInfo{ 315 - URL: fmt.Sprintf("%s%s", h.pds.PublicURL, atproto.HoldUploadPart), 316 - Method: "PUT", 317 - Headers: map[string]string{ 318 - "X-Upload-Id": uploadID, 319 - "X-Part-Number": fmt.Sprintf("%d", partNumber), 320 - }, 198 + URL: url, 321 199 }, nil 322 200 } 323 201 ··· 331 209 return err 332 210 } 333 211 334 - if session.Mode == S3Native { 335 - if h.s3Service.Client == nil { 336 - return fmt.Errorf("S3 not configured") 337 - } 212 + if h.s3Service.Client == nil { 213 + return fmt.Errorf("S3 not configured") 214 + } 338 215 339 - // Sort parts by part number (S3 requires ascending order) 340 - sort.Slice(parts, func(i, j int) bool { 341 - return parts[i].PartNumber < parts[j].PartNumber 342 - }) 216 + // Sort parts by part number (S3 requires ascending order) 217 + sort.Slice(parts, func(i, j int) bool { 218 + return parts[i].PartNumber < parts[j].PartNumber 219 + }) 343 220 344 - // Convert to S3 CompletedPart format 345 - // IMPORTANT: S3 requires ETags to be quoted in the CompleteMultipartUpload XML 346 - s3Parts := make([]*awss3.CompletedPart, len(parts)) 347 - for i, p := range parts { 348 - etag := normalizeETag(p.ETag) 349 - pnum := int64(p.PartNumber) 350 - s3Parts[i] = &awss3.CompletedPart{ 351 - PartNumber: &pnum, 352 - ETag: &etag, 353 - } 221 + // Convert to S3 CompletedPart format 222 + // IMPORTANT: S3 requires ETags to be quoted in the 
CompleteMultipartUpload XML 223 + s3Parts := make([]*awss3.CompletedPart, len(parts)) 224 + for i, p := range parts { 225 + etag := normalizeETag(p.ETag) 226 + pnum := int64(p.PartNumber) 227 + s3Parts[i] = &awss3.CompletedPart{ 228 + PartNumber: &pnum, 229 + ETag: &etag, 354 230 } 355 - sourcePath := s3.BlobPath(session.Digest) 356 - s3Key := strings.TrimPrefix(sourcePath, "/") 357 - if h.s3Service.PathPrefix != "" { 358 - s3Key = h.s3Service.PathPrefix + "/" + s3Key 359 - } 360 - 361 - _, err = h.s3Service.Client.CompleteMultipartUploadWithContext(ctx, &awss3.CompleteMultipartUploadInput{ 362 - Bucket: &h.s3Service.Bucket, 363 - Key: &s3Key, 364 - UploadId: &session.S3UploadID, 365 - MultipartUpload: &awss3.CompletedMultipartUpload{ 366 - Parts: s3Parts, 367 - }, 368 - }) 369 - if err != nil { 370 - return fmt.Errorf("failed to complete multipart upload: digest=%s, uploadID=%s, err=%v", session.Digest, uploadID, err) 371 - } 372 - slog.Info("Completed S3 native multipart at temp location", 373 - "digest", session.Digest, 374 - "uploadID", session.UploadID, 375 - "parts", len(s3Parts)) 376 - 377 - // Verify the blob exists at temp location before moving 378 - destPath := s3.BlobPath(finalDigest) 379 - slog.Debug("About to move blob", 380 - "source", sourcePath, 381 - "dest", destPath) 382 - 383 - if _, err := h.driver.Stat(ctx, sourcePath); err != nil { 384 - slog.Error("Source blob not found after multipart complete", 385 - "path", sourcePath, 386 - "error", err) 387 - return fmt.Errorf("source blob not found after multipart complete: %w", err) 388 - } 389 - slog.Debug("Source blob verified", "path", sourcePath) 390 - 391 - // Move from temp to final digest location using driver 392 - // Driver handles path management correctly (including S3 prefix) 393 - if err := h.driver.Move(ctx, sourcePath, destPath); err != nil { 394 - slog.Error("Failed to move blob", 395 - "source", sourcePath, 396 - "dest", destPath, 397 - "error", err) 398 - return fmt.Errorf("failed to 
move blob to final location: %w", err) 399 - } 400 - 401 - slog.Info("Moved blob to final location", 402 - "from", session.Digest, 403 - "to", finalDigest, 404 - "sourcePath", sourcePath, 405 - "destPath", destPath) 406 - return nil 407 231 } 408 - 409 - // Buffered mode: assemble parts and write directly to final location 410 - data, size, err := session.AssembleBufferedParts() 411 - if err != nil { 412 - return fmt.Errorf("failed to assemble parts: %w", err) 232 + sourcePath := s3.BlobPath(session.Digest) 233 + s3Key := strings.TrimPrefix(sourcePath, "/") 234 + if h.s3Service.PathPrefix != "" { 235 + s3Key = h.s3Service.PathPrefix + "/" + s3Key 413 236 } 414 237 415 - // Write assembled blob to final digest location (not temp) 416 - path := s3.BlobPath(finalDigest) 417 - writer, err := h.driver.Writer(ctx, path, false) 238 + _, err = h.s3Service.Client.CompleteMultipartUploadWithContext(ctx, &awss3.CompleteMultipartUploadInput{ 239 + Bucket: &h.s3Service.Bucket, 240 + Key: &s3Key, 241 + UploadId: &session.S3UploadID, 242 + MultipartUpload: &awss3.CompletedMultipartUpload{ 243 + Parts: s3Parts, 244 + }, 245 + }) 418 246 if err != nil { 419 - return fmt.Errorf("failed to create writer: %w", err) 247 + return fmt.Errorf("failed to complete multipart upload: digest=%s, uploadID=%s, err=%v", session.Digest, uploadID, err) 420 248 } 249 + slog.Info("Completed S3 native multipart at temp location", 250 + "digest", session.Digest, 251 + "uploadID", session.UploadID, 252 + "parts", len(s3Parts)) 421 253 422 - written, err := writer.Write(data) 423 - if err != nil { 424 - writer.Cancel(ctx) 425 - return fmt.Errorf("failed to write blob: %w", err) 254 + // Verify the blob exists at temp location before moving 255 + destPath := s3.BlobPath(finalDigest) 256 + slog.Debug("About to move blob", 257 + "source", sourcePath, 258 + "dest", destPath) 259 + 260 + if _, err := h.driver.Stat(ctx, sourcePath); err != nil { 261 + slog.Error("Source blob not found after multipart 
complete", 262 + "path", sourcePath, 263 + "error", err) 264 + return fmt.Errorf("source blob not found after multipart complete: %w", err) 426 265 } 266 + slog.Debug("Source blob verified", "path", sourcePath) 427 267 428 - if err := writer.Commit(ctx); err != nil { 429 - return fmt.Errorf("failed to commit blob: %w", err) 268 + // Move from temp to final digest location using driver 269 + // Driver handles path management correctly (including S3 prefix) 270 + if err := h.driver.Move(ctx, sourcePath, destPath); err != nil { 271 + slog.Error("Failed to move blob", 272 + "source", sourcePath, 273 + "dest", destPath, 274 + "error", err) 275 + return fmt.Errorf("failed to move blob to final location: %w", err) 430 276 } 431 277 432 - slog.Info("Completed buffered multipart", 433 - "uploadID", session.UploadID, 434 - "finalDigest", finalDigest, 435 - "size", size, 436 - "written", written) 278 + slog.Info("Moved blob to final location", 279 + "from", session.Digest, 280 + "to", finalDigest, 281 + "sourcePath", sourcePath, 282 + "destPath", destPath) 437 283 return nil 438 284 } 439 285 ··· 445 291 return err 446 292 } 447 293 448 - if session.Mode == S3Native { 449 - if h.s3Service.Client == nil { 450 - return fmt.Errorf("S3 not configured") 451 - } 452 - path := s3.BlobPath(session.Digest) 453 - s3Key := strings.TrimPrefix(path, "/") 454 - if h.s3Service.PathPrefix != "" { 455 - s3Key = h.s3Service.PathPrefix + "/" + s3Key 456 - } 457 - 458 - _, err := h.s3Service.Client.AbortMultipartUploadWithContext(ctx, &awss3.AbortMultipartUploadInput{ 459 - Bucket: &h.s3Service.Bucket, 460 - Key: &s3Key, 461 - UploadId: &session.S3UploadID, 462 - }) 463 - // Abort S3 multipart upload 464 - if err != nil { 465 - return fmt.Errorf("failed to abort multipart upload: digest=%s, uploadID=%s, err=%v", session.Digest, uploadID, err) 466 - } 467 - slog.Debug("Aborted S3 native multipart", 468 - "digest", session.Digest, 469 - "uploadID", session.UploadID) 470 - return nil 294 + if 
h.s3Service.Client == nil { 295 + return fmt.Errorf("S3 not configured") 471 296 } 472 297 473 - // Buffered mode: just delete the session (parts are in memory) 474 - slog.Debug("Aborted buffered multipart", "uploadID", session.UploadID) 475 - return nil 476 - } 298 + path := s3.BlobPath(session.Digest) 299 + s3Key := strings.TrimPrefix(path, "/") 300 + if h.s3Service.PathPrefix != "" { 301 + s3Key = h.s3Service.PathPrefix + "/" + s3Key 302 + } 477 303 478 - // HandleBufferedPartUpload handles uploading a part in buffered mode 479 - func (h *XRPCHandler) HandleBufferedPartUpload(ctx context.Context, uploadID string, partNumber int, data []byte) (string, error) { 480 - session, err := h.MultipartMgr.GetSession(uploadID) 304 + _, err = h.s3Service.Client.AbortMultipartUploadWithContext(ctx, &awss3.AbortMultipartUploadInput{ 305 + Bucket: &h.s3Service.Bucket, 306 + Key: &s3Key, 307 + UploadId: &session.S3UploadID, 308 + }) 481 309 if err != nil { 482 - return "", err 310 + return fmt.Errorf("failed to abort multipart upload: digest=%s, uploadID=%s, err=%v", session.Digest, uploadID, err) 483 311 } 484 - 485 - if session.Mode != Buffered { 486 - return "", fmt.Errorf("session is not in buffered mode") 487 - } 488 - 489 - etag := session.StorePart(partNumber, data) 490 - return etag, nil 312 + slog.Debug("Aborted S3 multipart", 313 + "digest", session.Digest, 314 + "uploadID", session.UploadID) 315 + return nil 491 316 } 492 317 493 318 // normalizeETag ensures an ETag has quotes (required by S3 CompleteMultipartUpload)
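The comment above notes that S3's CompleteMultipartUpload XML rejects unquoted ETags, which is why every part's ETag is passed through `normalizeETag` before completion. The actual implementation is not shown in this diff; the following is a minimal self-contained sketch of the quoting behavior the comment describes:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeETag wraps an ETag in double quotes if it isn't already quoted.
// S3 returns quoted ETags from UploadPart, but clients sometimes strip the
// quotes, and CompleteMultipartUpload requires them. Sketch only; the real
// normalizeETag in pkg/hold/oci may differ in edge-case handling.
func normalizeETag(etag string) string {
	if len(etag) >= 2 && strings.HasPrefix(etag, `"`) && strings.HasSuffix(etag, `"`) {
		return etag
	}
	return `"` + strings.Trim(etag, `"`) + `"`
}

func main() {
	fmt.Println(normalizeETag("d41d8cd98f00b204e9800998ecf8427e"))
	fmt.Println(normalizeETag(`"d41d8cd98f00b204e9800998ecf8427e"`))
}
```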
+7 -121
pkg/hold/oci/multipart_test.go
··· 12 12 sessions: make(map[string]*MultipartSession), 13 13 } 14 14 15 - session := mgr.CreateSession("sha256:test123", Buffered, "") 15 + session := mgr.CreateSession("sha256:test123", "aws-upload-id") 16 16 17 17 if session.UploadID == "" { 18 18 t.Error("Expected non-empty uploadID") ··· 20 20 if session.Digest != "sha256:test123" { 21 21 t.Errorf("Expected digest=sha256:test123, got %s", session.Digest) 22 22 } 23 - if session.Mode != Buffered { 24 - t.Errorf("Expected mode=Buffered, got %v", session.Mode) 25 - } 26 - if session.Parts == nil { 27 - t.Error("Expected Parts map to be initialized") 23 + if session.S3UploadID != "aws-upload-id" { 24 + t.Errorf("Expected S3UploadID=aws-upload-id, got %s", session.S3UploadID) 28 25 } 29 26 if session.CreatedAt.IsZero() { 30 27 t.Error("Expected CreatedAt to be set") 31 28 } 32 29 } 33 30 34 - func TestCreateSession_S3Native(t *testing.T) { 35 - mgr := &MultipartManager{ 36 - sessions: make(map[string]*MultipartSession), 37 - } 38 - 39 - s3UploadID := "aws-multipart-id-123" 40 - session := mgr.CreateSession("sha256:test123", S3Native, s3UploadID) 41 - 42 - if session.Mode != S3Native { 43 - t.Errorf("Expected mode=S3Native, got %v", session.Mode) 44 - } 45 - if session.S3UploadID != s3UploadID { 46 - t.Errorf("Expected S3UploadID=%s, got %s", s3UploadID, session.S3UploadID) 47 - } 48 - } 49 - 50 31 func TestGetSession_Success(t *testing.T) { 51 32 mgr := &MultipartManager{ 52 33 sessions: make(map[string]*MultipartSession), 53 34 } 54 35 55 - created := mgr.CreateSession("sha256:test123", Buffered, "") 36 + created := mgr.CreateSession("sha256:test123", "aws-upload-id") 56 37 57 38 retrieved, err := mgr.GetSession(created.UploadID) 58 39 if err != nil { ··· 80 61 sessions: make(map[string]*MultipartSession), 81 62 } 82 63 83 - session := mgr.CreateSession("sha256:test123", Buffered, "") 64 + session := mgr.CreateSession("sha256:test123", "aws-upload-id") 84 65 uploadID := session.UploadID 85 66 86 67 // Verify it 
exists ··· 99 80 } 100 81 } 101 82 102 - func TestStorePart(t *testing.T) { 103 - session := &MultipartSession{ 104 - UploadID: "test-upload", 105 - Digest: "sha256:test", 106 - Mode: Buffered, 107 - Parts: make(map[int]*MultipartPart), 108 - } 109 - 110 - data := []byte("test part data") 111 - etag := session.StorePart(1, data) 112 - 113 - if etag == "" { 114 - t.Error("Expected non-empty etag") 115 - } 116 - 117 - part, exists := session.Parts[1] 118 - if !exists { 119 - t.Fatal("Part 1 should exist") 120 - } 121 - 122 - if part.PartNumber != 1 { 123 - t.Errorf("Expected partNumber=1, got %d", part.PartNumber) 124 - } 125 - if string(part.Data) != string(data) { 126 - t.Errorf("Expected data=%s, got %s", string(data), string(part.Data)) 127 - } 128 - if part.ETag != etag { 129 - t.Errorf("Expected etag=%s, got %s", etag, part.ETag) 130 - } 131 - if part.Size != int64(len(data)) { 132 - t.Errorf("Expected size=%d, got %d", len(data), part.Size) 133 - } 134 - } 135 - 136 - func TestAssembleBufferedParts_Success(t *testing.T) { 137 - session := &MultipartSession{ 138 - UploadID: "test-upload", 139 - Digest: "sha256:test", 140 - Mode: Buffered, 141 - Parts: make(map[int]*MultipartPart), 142 - } 143 - 144 - // Add parts in non-sequential order to test sorting 145 - session.StorePart(2, []byte("second part")) 146 - session.StorePart(1, []byte("first part")) 147 - session.StorePart(3, []byte("third part")) 148 - 149 - data, size, err := session.AssembleBufferedParts() 150 - if err != nil { 151 - t.Fatalf("Expected success, got error: %v", err) 152 - } 153 - 154 - expected := "first partsecond partthird part" 155 - if string(data) != expected { 156 - t.Errorf("Expected data=%s, got %s", expected, string(data)) 157 - } 158 - 159 - if size != int64(len(expected)) { 160 - t.Errorf("Expected size=%d, got %d", len(expected), size) 161 - } 162 - } 163 - 164 - func TestAssembleBufferedParts_MissingPart(t *testing.T) { 165 - session := &MultipartSession{ 166 - UploadID: 
"test-upload", 167 - Digest: "sha256:test", 168 - Mode: Buffered, 169 - Parts: make(map[int]*MultipartPart), 170 - } 171 - 172 - // Add parts 1 and 3, but not 2 173 - session.StorePart(1, []byte("first part")) 174 - session.StorePart(3, []byte("third part")) 175 - 176 - _, _, err := session.AssembleBufferedParts() 177 - if err == nil { 178 - t.Error("Expected error for missing part 2") 179 - } 180 - } 181 - 182 - func TestAssembleBufferedParts_WrongMode(t *testing.T) { 183 - session := &MultipartSession{ 184 - UploadID: "test-upload", 185 - Digest: "sha256:test", 186 - Mode: S3Native, 187 - Parts: make(map[int]*MultipartPart), 188 - } 189 - 190 - _, _, err := session.AssembleBufferedParts() 191 - if err == nil { 192 - t.Error("Expected error for S3Native mode") 193 - } 194 - } 195 - 196 83 func TestCleanupExpiredSessions(t *testing.T) { 197 84 mgr := &MultipartManager{ 198 85 sessions: make(map[string]*MultipartSession), ··· 202 89 oldSession := &MultipartSession{ 203 90 UploadID: "old-session", 204 91 Digest: "sha256:old", 205 - Mode: Buffered, 206 - Parts: make(map[int]*MultipartPart), 92 + S3UploadID: "aws-old-upload", 207 93 CreatedAt: time.Now().Add(-25 * time.Hour), 208 94 LastActivity: time.Now().Add(-25 * time.Hour), 209 95 } 210 96 mgr.sessions[oldSession.UploadID] = oldSession 211 97 212 98 // Create a recent session 213 - recentSession := mgr.CreateSession("sha256:recent", Buffered, "") 99 + recentSession := mgr.CreateSession("sha256:recent", "aws-recent-upload") 214 100 215 101 // Run cleanup 216 102 mgr.cleanupExpiredSessions()
+16 -59
pkg/hold/oci/xrpc.go
··· 3 3 4 4 import ( 5 5 "fmt" 6 - "io" 7 6 "log/slog" 8 7 "net/http" 9 - "strconv" 10 8 "strings" 11 9 12 10 "atcr.io/pkg/atproto" ··· 20 18 21 19 // XRPCHandler handles OCI-specific XRPC endpoints for multipart uploads 22 20 type XRPCHandler struct { 23 - driver storagedriver.StorageDriver 24 - disablePresignedURLs bool 25 - s3Service s3.S3Service 26 - MultipartMgr *MultipartManager // Exported for access in route handlers 27 - pds *pds.HoldPDS 28 - httpClient pds.HTTPClient 29 - enableBlueskyPosts bool 30 - quotaMgr *quota.Manager // Quota manager for tier-based limits 21 + driver storagedriver.StorageDriver 22 + s3Service s3.S3Service 23 + MultipartMgr *MultipartManager // Exported for access in route handlers 24 + pds *pds.HoldPDS 25 + httpClient pds.HTTPClient 26 + enableBlueskyPosts bool 27 + quotaMgr *quota.Manager // Quota manager for tier-based limits 31 28 } 32 29 33 30 // NewXRPCHandler creates a new OCI XRPC handler 34 - func NewXRPCHandler(holdPDS *pds.HoldPDS, s3Service s3.S3Service, driver storagedriver.StorageDriver, disablePresignedURLs bool, enableBlueskyPosts bool, httpClient pds.HTTPClient, quotaMgr *quota.Manager) *XRPCHandler { 31 + func NewXRPCHandler(holdPDS *pds.HoldPDS, s3Service s3.S3Service, driver storagedriver.StorageDriver, enableBlueskyPosts bool, httpClient pds.HTTPClient, quotaMgr *quota.Manager) *XRPCHandler { 35 32 return &XRPCHandler{ 36 - driver: driver, 37 - disablePresignedURLs: disablePresignedURLs, 38 - MultipartMgr: NewMultipartManager(), 39 - s3Service: s3Service, 40 - pds: holdPDS, 41 - httpClient: httpClient, 42 - enableBlueskyPosts: enableBlueskyPosts, 43 - quotaMgr: quotaMgr, 33 + driver: driver, 34 + MultipartMgr: NewMultipartManager(), 35 + s3Service: s3Service, 36 + pds: holdPDS, 37 + httpClient: httpClient, 38 + enableBlueskyPosts: enableBlueskyPosts, 39 + quotaMgr: quotaMgr, 44 40 } 45 41 } 46 42 ··· 52 48 53 49 r.Post(atproto.HoldInitiateUpload, h.HandleInitiateUpload) 54 50 r.Post(atproto.HoldGetPartUploadURL, 
h.HandleGetPartUploadURL) 55 - r.Put(atproto.HoldUploadPart, h.HandleUploadPart) 56 51 r.Post(atproto.HoldCompleteUpload, h.HandleCompleteUpload) 57 52 r.Post(atproto.HoldAbortUpload, h.HandleAbortUpload) 58 53 r.Post(atproto.HoldNotifyManifest, h.HandleNotifyManifest) ··· 78 73 return 79 74 } 80 75 81 - uploadID, _, err := h.StartMultipartUploadWithManager(r.Context(), req.Digest) 76 + uploadID, err := h.StartMultipartUploadWithManager(r.Context(), req.Digest) 82 77 if err != nil { 83 78 render.Status(r, http.StatusInternalServerError) 84 79 render.JSON(w, r, map[string]string{"error": fmt.Sprintf("failed to initiate upload: %v", err)}) ··· 118 113 } 119 114 120 115 render.JSON(w, r, uploadInfo) 121 - } 122 - 123 - // HandleUploadPart handles direct buffered part uploads 124 - // Moved from pds/xrpc.go - this is OCI-specific multipart upload logic 125 - func (h *XRPCHandler) HandleUploadPart(w http.ResponseWriter, r *http.Request) { 126 - uploadID := r.Header.Get("X-Upload-Id") 127 - partNumberStr := r.Header.Get("X-Part-Number") 128 - 129 - if uploadID == "" || partNumberStr == "" { 130 - render.Status(r, http.StatusBadRequest) 131 - render.JSON(w, r, map[string]string{"error": "X-Upload-Id and X-Part-Number headers are required"}) 132 - return 133 - } 134 - 135 - partNumber, err := strconv.Atoi(partNumberStr) 136 - if err != nil { 137 - render.Status(r, http.StatusBadRequest) 138 - render.JSON(w, r, map[string]string{"error": fmt.Sprintf("invalid part number: %v", err)}) 139 - return 140 - } 141 - 142 - data, err := io.ReadAll(r.Body) 143 - if err != nil { 144 - render.Status(r, http.StatusInternalServerError) 145 - render.JSON(w, r, map[string]string{"error": fmt.Sprintf("failed to read part data: %v", err)}) 146 - return 147 - } 148 - 149 - etag, err := h.HandleBufferedPartUpload(r.Context(), uploadID, partNumber, data) 150 - if err != nil { 151 - render.Status(r, http.StatusInternalServerError) 152 - render.JSON(w, r, map[string]string{"error": 
fmt.Sprintf("failed to upload part: %v", err)}) 153 - return 154 - } 155 - 156 - render.JSON(w, r, map[string]any{ 157 - "etag": etag, 158 - }) 159 116 } 160 117 161 118 // HandleCompleteUpload finalizes a multipart upload
+970 -172
pkg/hold/oci/xrpc_test.go
··· 10 10 "net/http/httptest" 11 11 "os" 12 12 "path/filepath" 13 - "strconv" 13 + "sync" 14 14 "testing" 15 + "time" 15 16 16 17 "atcr.io/pkg/atproto" 17 18 "atcr.io/pkg/auth/oauth" 18 19 "atcr.io/pkg/hold/pds" 19 20 "atcr.io/pkg/s3" 21 + storagedriver "github.com/distribution/distribution/v3/registry/storage/driver" 20 22 "github.com/distribution/distribution/v3/registry/storage/driver/factory" 21 - _ "github.com/distribution/distribution/v3/registry/storage/driver/filesystem" 23 + _ "github.com/distribution/distribution/v3/registry/storage/driver/s3-aws" 22 24 ) 23 25 24 26 // Shared test resources for OCI package ··· 66 68 }, nil 67 69 } 68 70 69 - // setupTestOCIHandler creates a test OCI XRPC handler with filesystem driver 70 - func setupTestOCIHandler(t *testing.T) (*XRPCHandler, context.Context) { 71 + // mockStorageDriver implements storagedriver.StorageDriver for testing 72 + type mockStorageDriver struct { 73 + mu sync.RWMutex 74 + blobs map[string][]byte 75 + 76 + // Error injection for testing error handling 77 + StatError error 78 + MoveError error 79 + } 80 + 81 + func newMockStorageDriver() *mockStorageDriver { 82 + return &mockStorageDriver{ 83 + blobs: make(map[string][]byte), 84 + } 85 + } 86 + 87 + func (m *mockStorageDriver) Name() string { return "mock" } 88 + 89 + func (m *mockStorageDriver) GetContent(ctx context.Context, path string) ([]byte, error) { 90 + m.mu.RLock() 91 + defer m.mu.RUnlock() 92 + if data, ok := m.blobs[path]; ok { 93 + return data, nil 94 + } 95 + return nil, storagedriver.PathNotFoundError{Path: path} 96 + } 97 + 98 + func (m *mockStorageDriver) PutContent(ctx context.Context, path string, content []byte) error { 99 + m.mu.Lock() 100 + defer m.mu.Unlock() 101 + m.blobs[path] = content 102 + return nil 103 + } 104 + 105 + func (m *mockStorageDriver) Reader(ctx context.Context, path string, offset int64) (io.ReadCloser, error) { 106 + data, err := m.GetContent(ctx, path) 107 + if err != nil { 108 + return nil, err 109 + } 
110 + return io.NopCloser(bytes.NewReader(data[offset:])), nil 111 + } 112 + 113 + func (m *mockStorageDriver) Writer(ctx context.Context, path string, append bool) (storagedriver.FileWriter, error) { 114 + return &mockFileWriter{driver: m, path: path}, nil 115 + } 116 + 117 + func (m *mockStorageDriver) Stat(ctx context.Context, path string) (storagedriver.FileInfo, error) { 118 + m.mu.RLock() 119 + defer m.mu.RUnlock() 120 + 121 + // Check for injected error 122 + if m.StatError != nil { 123 + return nil, m.StatError 124 + } 125 + 126 + if data, ok := m.blobs[path]; ok { 127 + return &mockFileInfo{path: path, size: int64(len(data))}, nil 128 + } 129 + return nil, storagedriver.PathNotFoundError{Path: path} 130 + } 131 + 132 + func (m *mockStorageDriver) List(ctx context.Context, path string) ([]string, error) { 133 + return nil, nil 134 + } 135 + 136 + func (m *mockStorageDriver) Move(ctx context.Context, sourcePath string, destPath string) error { 137 + m.mu.Lock() 138 + defer m.mu.Unlock() 139 + 140 + // Check for injected error 141 + if m.MoveError != nil { 142 + return m.MoveError 143 + } 144 + 145 + if data, ok := m.blobs[sourcePath]; ok { 146 + m.blobs[destPath] = data 147 + delete(m.blobs, sourcePath) 148 + return nil 149 + } 150 + return storagedriver.PathNotFoundError{Path: sourcePath} 151 + } 152 + 153 + func (m *mockStorageDriver) Delete(ctx context.Context, path string) error { 154 + m.mu.Lock() 155 + defer m.mu.Unlock() 156 + delete(m.blobs, path) 157 + return nil 158 + } 159 + 160 + func (m *mockStorageDriver) RedirectURL(r *http.Request, path string) (string, error) { 161 + return "", storagedriver.ErrUnsupportedMethod{} 162 + } 163 + 164 + func (m *mockStorageDriver) Walk(ctx context.Context, path string, f storagedriver.WalkFn, options ...func(*storagedriver.WalkOptions)) error { 165 + return nil 166 + } 167 + 168 + // mockFileWriter implements storagedriver.FileWriter 169 + type mockFileWriter struct { 170 + driver *mockStorageDriver 171 + path 
string 172 + buf bytes.Buffer 173 + } 174 + 175 + func (w *mockFileWriter) Write(p []byte) (int, error) { 176 + return w.buf.Write(p) 177 + } 178 + 179 + func (w *mockFileWriter) Size() int64 { 180 + return int64(w.buf.Len()) 181 + } 182 + 183 + func (w *mockFileWriter) Close() error { 184 + return nil 185 + } 186 + 187 + func (w *mockFileWriter) Cancel(ctx context.Context) error { 188 + return nil 189 + } 190 + 191 + func (w *mockFileWriter) Commit(ctx context.Context) error { 192 + w.driver.mu.Lock() 193 + defer w.driver.mu.Unlock() 194 + w.driver.blobs[w.path] = w.buf.Bytes() 195 + return nil 196 + } 197 + 198 + // mockFileInfo implements storagedriver.FileInfo 199 + type mockFileInfo struct { 200 + path string 201 + size int64 202 + } 203 + 204 + func (f *mockFileInfo) Path() string { return f.path } 205 + func (f *mockFileInfo) Size() int64 { return f.size } 206 + func (f *mockFileInfo) ModTime() time.Time { return time.Time{} } 207 + func (f *mockFileInfo) IsDir() bool { return false } 208 + 209 + // setupTestOCIHandlerWithMockS3 creates a test OCI XRPC handler with mock S3 210 + // This does NOT require real S3 credentials - uses MockS3Client 211 + // Returns the handler, mock S3 client, and mock storage driver for test manipulation 212 + func setupTestOCIHandlerWithMockS3(t *testing.T) (*XRPCHandler, *s3.MockS3Client, *mockStorageDriver) { 71 213 t.Helper() 72 214 73 - // Create temp directory for test storage 215 + // Create temp directory for PDS database 216 + tmpDir := t.TempDir() 217 + ctx := t.Context() 218 + 219 + // Create mock S3 client 220 + mockS3Client := s3.NewMockS3Client("http://mock-s3.test") 221 + 222 + // Create S3 service with mock client 223 + s3Service := s3.S3Service{ 224 + Client: mockS3Client, 225 + Bucket: "test-bucket", 226 + PathPrefix: "test-prefix", 227 + } 228 + 229 + // Create mock storage driver 230 + mockDriver := newMockStorageDriver() 231 + 232 + // Create minimal PDS for DID/auth 233 + dbPath := ":memory:" 234 + keyPath 
:= filepath.Join(tmpDir, "signing-key") 235 + holdDID := "did:web:hold.example.com" 236 + publicURL := "https://hold.example.com" 237 + 238 + // Copy shared signing key instead of generating a new one 239 + if err := os.WriteFile(keyPath, sharedTestKey, 0600); err != nil { 240 + t.Fatalf("Failed to copy shared signing key: %v", err) 241 + } 242 + 243 + holdPDS, err := pds.NewHoldPDS(ctx, holdDID, publicURL, dbPath, keyPath, false) 244 + if err != nil { 245 + t.Fatalf("Failed to create PDS: %v", err) 246 + } 247 + 248 + // Bootstrap PDS, suppressing stdout to avoid log spam 249 + ownerDID := "did:plc:owner123" 250 + 251 + // Redirect stdout to suppress bootstrap logging 252 + oldStdout := os.Stdout 253 + r, w, _ := os.Pipe() 254 + os.Stdout = w 255 + 256 + err = holdPDS.Bootstrap(ctx, nil, ownerDID, true, false, "", "") 257 + 258 + // Restore stdout 259 + w.Close() 260 + os.Stdout = oldStdout 261 + io.ReadAll(r) // Drain the pipe 262 + 263 + if err != nil { 264 + t.Fatalf("Failed to bootstrap PDS: %v", err) 265 + } 266 + 267 + // Create mock HTTP client 268 + mockClient := &mockPDSClient{} 269 + 270 + // Create OCI handler with mock S3 271 + handler := NewXRPCHandler(holdPDS, s3Service, mockDriver, false, mockClient, nil) 272 + 273 + return handler, mockS3Client, mockDriver 274 + } 275 + 276 + // setupTestOCIHandlerWithS3 creates a test OCI XRPC handler with S3 driver 277 + // This requires real S3 credentials or a mock S3 server (like MinIO) 278 + // Returns nil if S3 is not configured 279 + func setupTestOCIHandlerWithS3(t *testing.T) (*XRPCHandler, bool) { 280 + t.Helper() 281 + 282 + // Check if S3 credentials are available 283 + bucket := os.Getenv("S3_BUCKET") 284 + if bucket == "" { 285 + return nil, false // S3 not configured, skip S3-dependent tests 286 + } 287 + 288 + // Create temp directory for PDS database 74 289 tmpDir := t.TempDir() 75 290 storageDir := filepath.Join(tmpDir, "blobs") 76 291 77 292 // Create context 78 - ctx := context.Background() 293 
+ ctx := t.Context() 79 294 80 - // Create filesystem storage driver 81 - params := map[string]any{ 82 - "rootdirectory": storageDir, 295 + // Create S3 storage driver 296 + s3Params := map[string]any{ 297 + "bucket": bucket, 298 + "region": os.Getenv("AWS_REGION"), 299 + "accesskey": os.Getenv("AWS_ACCESS_KEY_ID"), 300 + "secretkey": os.Getenv("AWS_SECRET_ACCESS_KEY"), 83 301 } 84 - driver, err := factory.Create(ctx, "filesystem", params) 302 + if endpoint := os.Getenv("S3_ENDPOINT"); endpoint != "" { 303 + s3Params["regionendpoint"] = endpoint 304 + } 305 + if s3Params["region"] == nil || s3Params["region"] == "" { 306 + s3Params["region"] = "us-east-1" 307 + } 308 + s3Params["rootdirectory"] = storageDir 309 + 310 + driver, err := factory.Create(ctx, "s3", s3Params) 311 + if err != nil { 312 + t.Logf("Failed to create S3 storage driver: %v", err) 313 + return nil, false 314 + } 315 + 316 + // Create S3 service 317 + s3Service, err := s3.NewS3Service(s3Params) 85 318 if err != nil { 86 - t.Fatalf("Failed to create storage driver: %v", err) 319 + t.Logf("Failed to create S3 service: %v", err) 320 + return nil, false 87 321 } 88 322 89 323 // Create minimal PDS for DID/auth 90 - // Use in-memory database for speed 91 324 dbPath := ":memory:" 92 325 keyPath := filepath.Join(tmpDir, "signing-key") 93 326 holdDID := "did:web:hold.example.com" ··· 125 358 // Create mock HTTP client 126 359 mockClient := &mockPDSClient{} 127 360 128 - // Create OCI handler with buffered mode (no S3) 129 - mockS3 := s3.S3Service{} 130 - handler := NewXRPCHandler(holdPDS, mockS3, driver, true, false, mockClient, nil) 361 + // Create OCI handler with S3 362 + handler := NewXRPCHandler(holdPDS, *s3Service, driver, false, mockClient, nil) 131 363 132 - return handler, ctx 364 + return handler, true 133 365 } 134 366 135 367 // Helper function to create JSON request ··· 157 389 } 158 390 } 159 391 160 - // Tests for HandleInitiateUpload 392 + // Tests for HandleInitiateUpload - Mock S3 (no 
credentials required) 161 393 162 - func TestHandleInitiateUpload_Success(t *testing.T) { 163 - handler, _ := setupTestOCIHandler(t) 394 + func TestHandleInitiateUpload_MockS3_Success(t *testing.T) { 395 + handler, mockS3Client, _ := setupTestOCIHandlerWithMockS3(t) 164 396 165 397 req := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 166 398 "digest": "sha256:abc123", ··· 181 413 if !ok || uploadID == "" { 182 414 t.Error("Expected uploadId in response") 183 415 } 416 + 417 + // Verify mock S3 was called 418 + if len(mockS3Client.CreateMultipartCalls) != 1 { 419 + t.Errorf("Expected 1 CreateMultipartUpload call, got %d", len(mockS3Client.CreateMultipartCalls)) 420 + } 184 421 } 185 422 186 - func TestHandleInitiateUpload_MissingDigest(t *testing.T) { 187 - handler, _ := setupTestOCIHandler(t) 423 + func TestHandleInitiateUpload_MockS3_MissingDigest(t *testing.T) { 424 + handler, _, _ := setupTestOCIHandlerWithMockS3(t) 188 425 189 426 req := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{}) 190 427 addMockAuth(req) ··· 197 434 } 198 435 } 199 436 200 - // NOTE: Authorization tests are handled separately via chi router middleware tests. 201 - // When calling handlers directly (not through router), middleware doesn't execute. 202 - // See TestRequireBlobWriteAccess_* tests for middleware auth validation. 437 + // Tests for full Mock S3 upload flow (no credentials required) 203 438 204 - // Tests for HandleGetPartUploadUrl 439 + func TestFullMockS3UploadFlow(t *testing.T) { 440 + handler, mockS3Client, _ := setupTestOCIHandlerWithMockS3(t) 205 441 206 - func TestHandleGetPartUploadUrl_Buffered(t *testing.T) { 207 - handler, _ := setupTestOCIHandler(t) 208 - 209 - // First, initiate an upload 442 + // 1. 
Initiate upload
 210 443  initReq := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{
 211 -  "digest": "sha256:abc123",
 444 +  "digest": "uploads/temp-test-flow",
 212 445  })
 213 446  addMockAuth(initReq)
 214 447  initW := httptest.NewRecorder()
 215 448  handler.HandleInitiateUpload(initW, initReq)
 216 449 
 450 +  if initW.Code != http.StatusOK {
 451 +  t.Fatalf("Expected status 200, got %d: %s", initW.Code, initW.Body.String())
 452 +  }
 453 + 
 217 454  var initResp map[string]any
 218 455  decodeJSONResponse(t, initW, &initResp)
 219 456  uploadID := initResp["uploadId"].(string)
 220 457 
 221 -  // Now get part upload URL
 222 -  req := makeJSONRequest("POST", atproto.HoldGetPartUploadURL, map[string]any{
 458 +  // 2. Get part upload URL
 459 +  partReq := makeJSONRequest("POST", atproto.HoldGetPartUploadURL, map[string]any{
 223 460  "uploadId": uploadID,
 224 461  "partNumber": 1,
 225 462  })
 463 +  addMockAuth(partReq)
 464 +  partW := httptest.NewRecorder()
 465 +  handler.HandleGetPartUploadURL(partW, partReq)
 466 + 
 467 +  if partW.Code != http.StatusOK {
 468 +  t.Fatalf("Expected status 200, got %d: %s", partW.Code, partW.Body.String())
 469 +  }
 470 + 
 471 +  var partResp PartUploadInfo
 472 +  decodeJSONResponse(t, partW, &partResp)
 473 + 
 474 +  // Should return presigned URL from mock
 475 +  if partResp.URL == "" {
 476 +  t.Error("Expected presigned URL")
 477 +  }
 478 + 
 479 +  // URL should point to mock server
 480 +  if len(partResp.URL) < 4 || partResp.URL[:4] != "http" {
 481 +  t.Errorf("Expected valid URL, got %s", partResp.URL)
 482 +  }
 483 + 
 484 +  // 3. 
Abort the upload 485 + abortReq := makeJSONRequest("POST", atproto.HoldAbortUpload, map[string]string{ 486 + "uploadId": uploadID, 487 + }) 488 + addMockAuth(abortReq) 489 + abortW := httptest.NewRecorder() 490 + handler.HandleAbortUpload(abortW, abortReq) 491 + 492 + if abortW.Code != http.StatusOK { 493 + t.Errorf("Expected status 200, got %d: %s", abortW.Code, abortW.Body.String()) 494 + } 495 + 496 + // Verify S3 operations were called 497 + if len(mockS3Client.CreateMultipartCalls) != 1 { 498 + t.Errorf("Expected 1 CreateMultipart call, got %d", len(mockS3Client.CreateMultipartCalls)) 499 + } 500 + if len(mockS3Client.UploadPartCalls) != 1 { 501 + t.Errorf("Expected 1 UploadPart call, got %d", len(mockS3Client.UploadPartCalls)) 502 + } 503 + if len(mockS3Client.AbortCalls) != 1 { 504 + t.Errorf("Expected 1 Abort call, got %d", len(mockS3Client.AbortCalls)) 505 + } 506 + } 507 + 508 + func TestHandleGetPartUploadUrl_MockS3_InvalidSession(t *testing.T) { 509 + handler, _, _ := setupTestOCIHandlerWithMockS3(t) 510 + 511 + req := makeJSONRequest("POST", atproto.HoldGetPartUploadURL, map[string]any{ 512 + "uploadId": "invalid-upload-id", 513 + "partNumber": 1, 514 + }) 226 515 addMockAuth(req) 227 516 228 517 w := httptest.NewRecorder() 229 518 handler.HandleGetPartUploadURL(w, req) 230 519 520 + if w.Code != http.StatusInternalServerError { 521 + t.Errorf("Expected status 500, got %d", w.Code) 522 + } 523 + } 524 + 525 + func TestHandleAbortUpload_MockS3_InvalidSession(t *testing.T) { 526 + handler, _, _ := setupTestOCIHandlerWithMockS3(t) 527 + 528 + req := makeJSONRequest("POST", atproto.HoldAbortUpload, map[string]string{ 529 + "uploadId": "invalid-upload-id", 530 + }) 531 + addMockAuth(req) 532 + 533 + w := httptest.NewRecorder() 534 + handler.HandleAbortUpload(w, req) 535 + 536 + if w.Code != http.StatusInternalServerError { 537 + t.Errorf("Expected status 500, got %d", w.Code) 538 + } 539 + } 540 + 541 + // Tests for HandleInitiateUpload - requires real S3 
542 + 543 + func TestHandleInitiateUpload_WithS3_Success(t *testing.T) { 544 + handler, hasS3 := setupTestOCIHandlerWithS3(t) 545 + if !hasS3 { 546 + t.Skip("S3 not configured, skipping test") 547 + } 548 + 549 + req := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 550 + "digest": "sha256:abc123", 551 + }) 552 + addMockAuth(req) 553 + 554 + w := httptest.NewRecorder() 555 + handler.HandleInitiateUpload(w, req) 556 + 231 557 if w.Code != http.StatusOK { 232 558 t.Errorf("Expected status 200, got %d: %s", w.Code, w.Body.String()) 233 559 } 234 560 235 - var resp PartUploadInfo 561 + var resp map[string]any 236 562 decodeJSONResponse(t, w, &resp) 237 563 238 - // Buffered mode should return XRPC endpoint 239 - if resp.Method != "PUT" { 240 - t.Errorf("Expected method PUT, got %s", resp.Method) 564 + uploadID, ok := resp["uploadId"].(string) 565 + if !ok || uploadID == "" { 566 + t.Error("Expected uploadId in response") 241 567 } 242 - if resp.Headers == nil || resp.Headers["X-Upload-Id"] != uploadID { 243 - t.Error("Expected X-Upload-Id header in buffered mode") 568 + } 569 + 570 + func TestHandleInitiateUpload_MissingDigest(t *testing.T) { 571 + handler, hasS3 := setupTestOCIHandlerWithS3(t) 572 + if !hasS3 { 573 + t.Skip("S3 not configured, skipping test") 574 + } 575 + 576 + req := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{}) 577 + addMockAuth(req) 578 + 579 + w := httptest.NewRecorder() 580 + handler.HandleInitiateUpload(w, req) 581 + 582 + if w.Code != http.StatusBadRequest { 583 + t.Errorf("Expected status 400, got %d", w.Code) 244 584 } 245 585 } 246 586 587 + // Tests for HandleGetPartUploadUrl - requires S3 588 + 247 589 func TestHandleGetPartUploadUrl_InvalidSession(t *testing.T) { 248 - handler, _ := setupTestOCIHandler(t) 590 + handler, hasS3 := setupTestOCIHandlerWithS3(t) 591 + if !hasS3 { 592 + t.Skip("S3 not configured, skipping test") 593 + } 249 594 250 595 req := makeJSONRequest("POST", 
atproto.HoldGetPartUploadURL, map[string]any{ 251 596 "uploadId": "invalid-upload-id", ··· 262 607 } 263 608 264 609 func TestHandleGetPartUploadUrl_MissingParams(t *testing.T) { 265 - handler, _ := setupTestOCIHandler(t) 610 + handler, hasS3 := setupTestOCIHandlerWithS3(t) 611 + if !hasS3 { 612 + t.Skip("S3 not configured, skipping test") 613 + } 266 614 267 615 tests := []struct { 268 616 name string ··· 288 636 } 289 637 } 290 638 291 - // Tests for HandleUploadPart 639 + // Tests for HandleCompleteUpload 640 + 641 + func TestHandleCompleteUpload_MissingParts(t *testing.T) { 642 + handler, hasS3 := setupTestOCIHandlerWithS3(t) 643 + if !hasS3 { 644 + t.Skip("S3 not configured, skipping test") 645 + } 646 + 647 + req := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 648 + "uploadId": "test-id", 649 + "digest": "sha256:test", 650 + "parts": []any{}, 651 + }) 652 + addMockAuth(req) 653 + 654 + w := httptest.NewRecorder() 655 + handler.HandleCompleteUpload(w, req) 656 + 657 + if w.Code != http.StatusBadRequest { 658 + t.Errorf("Expected status 400, got %d", w.Code) 659 + } 660 + } 661 + 662 + func TestHandleCompleteUpload_InvalidSession(t *testing.T) { 663 + handler, hasS3 := setupTestOCIHandlerWithS3(t) 664 + if !hasS3 { 665 + t.Skip("S3 not configured, skipping test") 666 + } 667 + 668 + req := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 669 + "uploadId": "invalid-upload-id", 670 + "digest": "sha256:test", 671 + "parts": []any{ 672 + map[string]any{"partNumber": 1, "etag": "abc"}, 673 + }, 674 + }) 675 + addMockAuth(req) 676 + 677 + w := httptest.NewRecorder() 678 + handler.HandleCompleteUpload(w, req) 679 + 680 + if w.Code != http.StatusInternalServerError { 681 + t.Errorf("Expected status 500, got %d", w.Code) 682 + } 683 + } 684 + 685 + // Tests for HandleAbortUpload 686 + 687 + func TestHandleAbortUpload_InvalidSession(t *testing.T) { 688 + handler, hasS3 := setupTestOCIHandlerWithS3(t) 689 + if !hasS3 { 690 + 
t.Skip("S3 not configured, skipping test") 691 + } 692 + 693 + req := makeJSONRequest("POST", atproto.HoldAbortUpload, map[string]string{ 694 + "uploadId": "invalid-upload-id", 695 + }) 696 + addMockAuth(req) 697 + 698 + w := httptest.NewRecorder() 699 + handler.HandleAbortUpload(w, req) 700 + 701 + if w.Code != http.StatusInternalServerError { 702 + t.Errorf("Expected status 500, got %d", w.Code) 703 + } 704 + } 292 705 293 - func TestHandleUploadPart_Success(t *testing.T) { 294 - handler, _ := setupTestOCIHandler(t) 706 + // Tests for full S3 upload flow 295 707 296 - // Initiate upload 708 + func TestFullS3UploadFlow(t *testing.T) { 709 + handler, hasS3 := setupTestOCIHandlerWithS3(t) 710 + if !hasS3 { 711 + t.Skip("S3 not configured, skipping test") 712 + } 713 + 714 + // 1. Initiate upload 297 715 initReq := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 298 - "digest": "sha256:abc123", 716 + "digest": "uploads/temp-test-flow", 299 717 }) 300 718 addMockAuth(initReq) 301 719 initW := httptest.NewRecorder() 302 720 handler.HandleInitiateUpload(initW, initReq) 303 721 722 + if initW.Code != http.StatusOK { 723 + t.Fatalf("Expected status 200, got %d: %s", initW.Code, initW.Body.String()) 724 + } 725 + 304 726 var initResp map[string]any 305 727 decodeJSONResponse(t, initW, &initResp) 306 728 uploadID := initResp["uploadId"].(string) 307 729 308 - // Upload a part 309 - partData := []byte("test part data") 310 - req := httptest.NewRequest("PUT", atproto.HoldUploadPart, bytes.NewReader(partData)) 311 - req.Header.Set("X-Upload-Id", uploadID) 312 - req.Header.Set("X-Part-Number", "1") 313 - addMockAuth(req) 730 + // 2. 
Get part upload URL 731 + partReq := makeJSONRequest("POST", atproto.HoldGetPartUploadURL, map[string]any{ 732 + "uploadId": uploadID, 733 + "partNumber": 1, 734 + }) 735 + addMockAuth(partReq) 736 + partW := httptest.NewRecorder() 737 + handler.HandleGetPartUploadURL(partW, partReq) 314 738 315 - w := httptest.NewRecorder() 316 - handler.HandleUploadPart(w, req) 739 + if partW.Code != http.StatusOK { 740 + t.Fatalf("Expected status 200, got %d: %s", partW.Code, partW.Body.String()) 741 + } 742 + 743 + var partResp PartUploadInfo 744 + decodeJSONResponse(t, partW, &partResp) 317 745 318 - if w.Code != http.StatusOK { 319 - t.Errorf("Expected status 200, got %d: %s", w.Code, w.Body.String()) 746 + // Should return presigned URL 747 + if partResp.URL == "" { 748 + t.Error("Expected presigned URL") 320 749 } 321 750 322 - var resp map[string]any 323 - decodeJSONResponse(t, w, &resp) 751 + // 3. Abort the upload (we're not actually uploading to S3 in tests) 752 + abortReq := makeJSONRequest("POST", atproto.HoldAbortUpload, map[string]string{ 753 + "uploadId": uploadID, 754 + }) 755 + addMockAuth(abortReq) 756 + abortW := httptest.NewRecorder() 757 + handler.HandleAbortUpload(abortW, abortReq) 324 758 325 - etag, ok := resp["etag"].(string) 326 - if !ok || etag == "" { 327 - t.Error("Expected etag in response") 759 + if abortW.Code != http.StatusOK { 760 + t.Errorf("Expected status 200, got %d: %s", abortW.Code, abortW.Body.String()) 328 761 } 329 762 } 330 763 331 - func TestHandleUploadPart_MissingHeaders(t *testing.T) { 332 - handler, _ := setupTestOCIHandler(t) 764 + // Tests for HandleCompleteUpload with Mock S3 765 + 766 + func TestHandleCompleteUpload_MockS3_Success(t *testing.T) { 767 + handler, mockS3Client, mockDriver := setupTestOCIHandlerWithMockS3(t) 768 + 769 + // 1. 
Initiate upload with temp path 770 + tempDigest := "uploads/temp-test-complete" 771 + initReq := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 772 + "digest": tempDigest, 773 + }) 774 + addMockAuth(initReq) 775 + initW := httptest.NewRecorder() 776 + handler.HandleInitiateUpload(initW, initReq) 777 + 778 + if initW.Code != http.StatusOK { 779 + t.Fatalf("Expected status 200 for initiate, got %d: %s", initW.Code, initW.Body.String()) 780 + } 781 + 782 + var initResp map[string]any 783 + decodeJSONResponse(t, initW, &initResp) 784 + uploadID := initResp["uploadId"].(string) 785 + 786 + // 2. Get part upload URL (so session has S3 upload info) 787 + partReq := makeJSONRequest("POST", atproto.HoldGetPartUploadURL, map[string]any{ 788 + "uploadId": uploadID, 789 + "partNumber": 1, 790 + }) 791 + addMockAuth(partReq) 792 + partW := httptest.NewRecorder() 793 + handler.HandleGetPartUploadURL(partW, partReq) 794 + 795 + if partW.Code != http.StatusOK { 796 + t.Fatalf("Expected status 200 for part URL, got %d: %s", partW.Code, partW.Body.String()) 797 + } 798 + 799 + // 3. Pre-populate mock storage driver with temp blob (simulates S3 upload completing) 800 + // Path format: /docker/registry/v2/uploads/temp-{id}/data 801 + tempBlobPath := fmt.Sprintf("/docker/registry/v2/%s/data", tempDigest) 802 + mockDriver.PutContent(t.Context(), tempBlobPath, []byte("test blob content")) 803 + 804 + // 4. 
Complete upload with parts 805 + finalDigest := "sha256:abc123def456" 806 + completeReq := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 807 + "uploadId": uploadID, 808 + "digest": finalDigest, 809 + "parts": []any{ 810 + map[string]any{"part_number": 1, "etag": "abc123"}, 811 + }, 812 + }) 813 + addMockAuth(completeReq) 814 + completeW := httptest.NewRecorder() 815 + handler.HandleCompleteUpload(completeW, completeReq) 816 + 817 + if completeW.Code != http.StatusOK { 818 + t.Fatalf("Expected status 200, got %d: %s", completeW.Code, completeW.Body.String()) 819 + } 820 + 821 + var completeResp map[string]any 822 + decodeJSONResponse(t, completeW, &completeResp) 823 + 824 + if completeResp["status"] != "completed" { 825 + t.Errorf("Expected status=completed, got %v", completeResp["status"]) 826 + } 827 + if completeResp["digest"] != finalDigest { 828 + t.Errorf("Expected digest=%s, got %v", finalDigest, completeResp["digest"]) 829 + } 830 + 831 + // 5. Verify S3 operations were called 832 + if len(mockS3Client.CompleteCalls) != 1 { 833 + t.Errorf("Expected 1 Complete call, got %d", len(mockS3Client.CompleteCalls)) 834 + } 835 + 836 + // 6. 
Verify blob was moved to final location 837 + // Final path format: /docker/registry/v2/blobs/sha256/ab/abc123def456/data 838 + finalBlobPath := "/docker/registry/v2/blobs/sha256/ab/abc123def456/data" 839 + _, err := mockDriver.Stat(t.Context(), finalBlobPath) 840 + if err != nil { 841 + t.Errorf("Expected blob at final location %s, got error: %v", finalBlobPath, err) 842 + } 843 + } 844 + 845 + func TestHandleCompleteUpload_MockS3_InvalidSession(t *testing.T) { 846 + handler, _, _ := setupTestOCIHandlerWithMockS3(t) 847 + 848 + req := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 849 + "uploadId": "non-existent-upload-id", 850 + "digest": "sha256:test", 851 + "parts": []any{ 852 + map[string]any{"part_number": 1, "etag": "abc123"}, 853 + }, 854 + }) 855 + addMockAuth(req) 856 + 857 + w := httptest.NewRecorder() 858 + handler.HandleCompleteUpload(w, req) 859 + 860 + if w.Code != http.StatusInternalServerError { 861 + t.Errorf("Expected status 500, got %d: %s", w.Code, w.Body.String()) 862 + } 863 + } 864 + 865 + func TestHandleCompleteUpload_MockS3_MissingParams(t *testing.T) { 866 + handler, _, _ := setupTestOCIHandlerWithMockS3(t) 333 867 334 868 tests := []struct { 335 - name string 336 - uploadID string 337 - partNumber string 338 - expectedCode int 339 - expectedError string 869 + name string 870 + body map[string]any 340 871 }{ 341 - {"missing both headers", "", "", 400, "X-Upload-Id and X-Part-Number headers are required"}, 342 - {"missing upload ID", "", "1", 400, "X-Upload-Id and X-Part-Number headers are required"}, 343 - {"missing part number", "test-id", "", 400, "X-Upload-Id and X-Part-Number headers are required"}, 872 + {"missing uploadId", map[string]any{"digest": "sha256:test", "parts": []any{map[string]any{"part_number": 1, "etag": "abc"}}}}, 873 + {"missing digest", map[string]any{"uploadId": "test-id", "parts": []any{map[string]any{"part_number": 1, "etag": "abc"}}}}, 874 + {"missing parts", map[string]any{"uploadId": 
"test-id", "digest": "sha256:test"}}, 875 + {"empty parts", map[string]any{"uploadId": "test-id", "digest": "sha256:test", "parts": []any{}}}, 344 876 } 345 877 346 878 for _, tt := range tests { 347 879 t.Run(tt.name, func(t *testing.T) { 348 - req := httptest.NewRequest("PUT", atproto.HoldUploadPart, bytes.NewReader([]byte("data"))) 349 - if tt.uploadID != "" { 350 - req.Header.Set("X-Upload-Id", tt.uploadID) 351 - } 352 - if tt.partNumber != "" { 353 - req.Header.Set("X-Part-Number", tt.partNumber) 354 - } 880 + req := makeJSONRequest("POST", atproto.HoldCompleteUpload, tt.body) 355 881 addMockAuth(req) 356 882 357 883 w := httptest.NewRecorder() 358 - handler.HandleUploadPart(w, req) 884 + handler.HandleCompleteUpload(w, req) 359 885 360 - if w.Code != tt.expectedCode { 361 - t.Errorf("Expected status %d, got %d", tt.expectedCode, w.Code) 886 + if w.Code != http.StatusBadRequest { 887 + t.Errorf("Expected status 400, got %d: %s", w.Code, w.Body.String()) 362 888 } 363 889 }) 364 890 } 365 891 } 366 892 367 - func TestHandleUploadPart_InvalidPartNumber(t *testing.T) { 368 - handler, _ := setupTestOCIHandler(t) 893 + func TestHandleCompleteUpload_MockS3_ETagNormalization(t *testing.T) { 894 + handler, mockS3Client, mockDriver := setupTestOCIHandlerWithMockS3(t) 895 + 896 + // Setup upload session 897 + tempDigest := "uploads/temp-etag-test" 898 + initReq := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 899 + "digest": tempDigest, 900 + }) 901 + addMockAuth(initReq) 902 + initW := httptest.NewRecorder() 903 + handler.HandleInitiateUpload(initW, initReq) 369 904 370 - req := httptest.NewRequest("PUT", atproto.HoldUploadPart, bytes.NewReader([]byte("data"))) 371 - req.Header.Set("X-Upload-Id", "test-id") 372 - req.Header.Set("X-Part-Number", "not-a-number") 373 - addMockAuth(req) 905 + var initResp map[string]any 906 + decodeJSONResponse(t, initW, &initResp) 907 + uploadID := initResp["uploadId"].(string) 374 908 375 - w := 
httptest.NewRecorder() 376 - handler.HandleUploadPart(w, req) 909 + // Pre-populate mock driver 910 + tempBlobPath := fmt.Sprintf("/docker/registry/v2/%s/data", tempDigest) 911 + mockDriver.PutContent(t.Context(), tempBlobPath, []byte("test")) 377 912 378 - if w.Code != http.StatusBadRequest { 379 - t.Errorf("Expected status 400, got %d", w.Code) 913 + // Complete with unquoted ETags 914 + completeReq := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 915 + "uploadId": uploadID, 916 + "digest": "sha256:etagtest123", 917 + "parts": []any{ 918 + map[string]any{"part_number": 1, "etag": "unquoted-etag"}, // No quotes 919 + map[string]any{"part_number": 2, "etag": "\"already-quoted\""}, // Already quoted 920 + }, 921 + }) 922 + addMockAuth(completeReq) 923 + completeW := httptest.NewRecorder() 924 + handler.HandleCompleteUpload(completeW, completeReq) 925 + 926 + // Should succeed - handler normalizes ETags 927 + if completeW.Code != http.StatusOK { 928 + t.Errorf("Expected status 200, got %d: %s", completeW.Code, completeW.Body.String()) 929 + } 930 + 931 + // Verify Complete was called with 2 parts 932 + if len(mockS3Client.CompleteCalls) != 1 { 933 + t.Fatalf("Expected 1 Complete call, got %d", len(mockS3Client.CompleteCalls)) 934 + } 935 + if mockS3Client.CompleteCalls[0].Parts != 2 { 936 + t.Errorf("Expected 2 parts in Complete call, got %d", mockS3Client.CompleteCalls[0].Parts) 380 937 } 381 938 } 382 939 383 - // Tests for HandleCompleteUpload 940 + func TestHandleCompleteUpload_MockS3_S3Error(t *testing.T) { 941 + handler, mockS3Client, mockDriver := setupTestOCIHandlerWithMockS3(t) 384 942 385 - func TestHandleCompleteUpload_BufferedMode(t *testing.T) { 386 - handler, _ := setupTestOCIHandler(t) 943 + // Setup upload session 944 + tempDigest := "uploads/temp-s3-error" 945 + initReq := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 946 + "digest": tempDigest, 947 + }) 948 + addMockAuth(initReq) 949 + initW := 
httptest.NewRecorder() 950 + handler.HandleInitiateUpload(initW, initReq) 951 + 952 + var initResp map[string]any 953 + decodeJSONResponse(t, initW, &initResp) 954 + uploadID := initResp["uploadId"].(string) 955 + 956 + // Pre-populate mock driver 957 + tempBlobPath := fmt.Sprintf("/docker/registry/v2/%s/data", tempDigest) 958 + mockDriver.PutContent(t.Context(), tempBlobPath, []byte("test")) 959 + 960 + // Inject S3 error 961 + mockS3Client.CompleteError = fmt.Errorf("simulated S3 CompleteMultipartUpload failure") 962 + 963 + // Complete upload should fail 964 + completeReq := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 965 + "uploadId": uploadID, 966 + "digest": "sha256:s3error123", 967 + "parts": []any{ 968 + map[string]any{"part_number": 1, "etag": "abc"}, 969 + }, 970 + }) 971 + addMockAuth(completeReq) 972 + completeW := httptest.NewRecorder() 973 + handler.HandleCompleteUpload(completeW, completeReq) 974 + 975 + if completeW.Code != http.StatusInternalServerError { 976 + t.Errorf("Expected status 500, got %d: %s", completeW.Code, completeW.Body.String()) 977 + } 978 + 979 + // Verify error message mentions the failure 980 + if !bytes.Contains(completeW.Body.Bytes(), []byte("failed to complete upload")) { 981 + t.Errorf("Expected error message about failed complete, got: %s", completeW.Body.String()) 982 + } 983 + } 984 + 985 + func TestHandleCompleteUpload_MockS3_StatError(t *testing.T) { 986 + handler, _, mockDriver := setupTestOCIHandlerWithMockS3(t) 387 987 388 - // Initiate upload 988 + // Setup upload session 989 + tempDigest := "uploads/temp-stat-error" 389 990 initReq := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 390 - "digest": "sha256:abc123", 991 + "digest": tempDigest, 391 992 }) 392 993 addMockAuth(initReq) 393 994 initW := httptest.NewRecorder() ··· 397 998 decodeJSONResponse(t, initW, &initResp) 398 999 uploadID := initResp["uploadId"].(string) 399 1000 400 - // Upload parts 401 - parts := 
[]struct { 402 - number int 403 - data string 404 - }{ 405 - {1, "part one data"}, 406 - {2, "part two data"}, 1001 + // Pre-populate mock driver (so S3 complete succeeds) 1002 + tempBlobPath := fmt.Sprintf("/docker/registry/v2/%s/data", tempDigest) 1003 + mockDriver.PutContent(t.Context(), tempBlobPath, []byte("test")) 1004 + 1005 + // Inject Stat error (simulates blob not found after S3 complete) 1006 + mockDriver.StatError = fmt.Errorf("simulated stat failure") 1007 + 1008 + // Complete upload should fail 1009 + completeReq := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 1010 + "uploadId": uploadID, 1011 + "digest": "sha256:staterror123", 1012 + "parts": []any{ 1013 + map[string]any{"part_number": 1, "etag": "abc"}, 1014 + }, 1015 + }) 1016 + addMockAuth(completeReq) 1017 + completeW := httptest.NewRecorder() 1018 + handler.HandleCompleteUpload(completeW, completeReq) 1019 + 1020 + if completeW.Code != http.StatusInternalServerError { 1021 + t.Errorf("Expected status 500, got %d: %s", completeW.Code, completeW.Body.String()) 407 1022 } 1023 + } 408 1024 409 - var partInfos []map[string]any 410 - for _, p := range parts { 411 - req := httptest.NewRequest("PUT", atproto.HoldUploadPart, bytes.NewReader([]byte(p.data))) 412 - req.Header.Set("X-Upload-Id", uploadID) 413 - req.Header.Set("X-Part-Number", strconv.Itoa(p.number)) 414 - addMockAuth(req) 1025 + func TestHandleCompleteUpload_MockS3_MoveError(t *testing.T) { 1026 + handler, _, mockDriver := setupTestOCIHandlerWithMockS3(t) 415 1027 416 - w := httptest.NewRecorder() 417 - handler.HandleUploadPart(w, req) 1028 + // Setup upload session 1029 + tempDigest := "uploads/temp-move-error" 1030 + initReq := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 1031 + "digest": tempDigest, 1032 + }) 1033 + addMockAuth(initReq) 1034 + initW := httptest.NewRecorder() 1035 + handler.HandleInitiateUpload(initW, initReq) 1036 + 1037 + var initResp map[string]any 1038 + 
decodeJSONResponse(t, initW, &initResp) 1039 + uploadID := initResp["uploadId"].(string) 418 1040 419 - var resp map[string]any 420 - decodeJSONResponse(t, w, &resp) 1041 + // Pre-populate mock driver 1042 + tempBlobPath := fmt.Sprintf("/docker/registry/v2/%s/data", tempDigest) 1043 + mockDriver.PutContent(t.Context(), tempBlobPath, []byte("test")) 421 1044 422 - partInfos = append(partInfos, map[string]any{ 423 - "partNumber": p.number, 424 - "etag": resp["etag"], 425 - }) 426 - } 1045 + // Inject Move error 1046 + mockDriver.MoveError = fmt.Errorf("simulated move failure") 427 1047 428 - // Complete upload 1048 + // Complete upload should fail 429 1049 completeReq := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 430 1050 "uploadId": uploadID, 431 - "digest": "sha256:finaldigest123", 432 - "parts": partInfos, 1051 + "digest": "sha256:moveerror123", 1052 + "parts": []any{ 1053 + map[string]any{"part_number": 1, "etag": "abc"}, 1054 + }, 433 1055 }) 434 1056 addMockAuth(completeReq) 1057 + completeW := httptest.NewRecorder() 1058 + handler.HandleCompleteUpload(completeW, completeReq) 435 1059 436 - w := httptest.NewRecorder() 437 - handler.HandleCompleteUpload(w, completeReq) 1060 + if completeW.Code != http.StatusInternalServerError { 1061 + t.Errorf("Expected status 500, got %d: %s", completeW.Code, completeW.Body.String()) 1062 + } 438 1063 439 - if w.Code != http.StatusOK { 440 - t.Errorf("Expected status 200, got %d: %s", w.Code, w.Body.String()) 1064 + // Verify error message mentions move failure 1065 + if !bytes.Contains(completeW.Body.Bytes(), []byte("failed to complete upload")) { 1066 + t.Errorf("Expected error message about failed complete, got: %s", completeW.Body.String()) 441 1067 } 1068 + } 442 1069 443 - var resp map[string]any 444 - decodeJSONResponse(t, w, &resp) 1070 + func TestHandleCompleteUpload_MockS3_UnsortedParts(t *testing.T) { 1071 + handler, mockS3Client, mockDriver := setupTestOCIHandlerWithMockS3(t) 445 1072 446 - 
if resp["status"] != "completed" { 447 - t.Errorf("Expected status=completed, got %v", resp["status"]) 1073 + // Setup upload session 1074 + tempDigest := "uploads/temp-unsorted" 1075 + initReq := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 1076 + "digest": tempDigest, 1077 + }) 1078 + addMockAuth(initReq) 1079 + initW := httptest.NewRecorder() 1080 + handler.HandleInitiateUpload(initW, initReq) 1081 + 1082 + var initResp map[string]any 1083 + decodeJSONResponse(t, initW, &initResp) 1084 + uploadID := initResp["uploadId"].(string) 1085 + 1086 + // Pre-populate mock driver 1087 + tempBlobPath := fmt.Sprintf("/docker/registry/v2/%s/data", tempDigest) 1088 + mockDriver.PutContent(t.Context(), tempBlobPath, []byte("test")) 1089 + 1090 + // Complete with unsorted parts (3, 1, 2) - handler should sort them 1091 + completeReq := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 1092 + "uploadId": uploadID, 1093 + "digest": "sha256:unsorted123", 1094 + "parts": []any{ 1095 + map[string]any{"part_number": 3, "etag": "etag3"}, 1096 + map[string]any{"part_number": 1, "etag": "etag1"}, 1097 + map[string]any{"part_number": 2, "etag": "etag2"}, 1098 + }, 1099 + }) 1100 + addMockAuth(completeReq) 1101 + completeW := httptest.NewRecorder() 1102 + handler.HandleCompleteUpload(completeW, completeReq) 1103 + 1104 + if completeW.Code != http.StatusOK { 1105 + t.Errorf("Expected status 200, got %d: %s", completeW.Code, completeW.Body.String()) 448 1106 } 449 - if resp["digest"] != "sha256:finaldigest123" { 450 - t.Errorf("Expected digest=sha256:finaldigest123, got %v", resp["digest"]) 1107 + 1108 + // Verify Complete was called with 3 parts 1109 + if len(mockS3Client.CompleteCalls) != 1 { 1110 + t.Fatalf("Expected 1 Complete call, got %d", len(mockS3Client.CompleteCalls)) 1111 + } 1112 + if mockS3Client.CompleteCalls[0].Parts != 3 { 1113 + t.Errorf("Expected 3 parts, got %d", mockS3Client.CompleteCalls[0].Parts) 451 1114 } 452 1115 } 453 
1116 454 - func TestHandleCompleteUpload_MissingParts(t *testing.T) { 455 - handler, _ := setupTestOCIHandler(t) 1117 + // Tests for HandleInitiateUpload edge cases 1118 + 1119 + func TestHandleInitiateUpload_MockS3_S3Error(t *testing.T) { 1120 + handler, mockS3Client, _ := setupTestOCIHandlerWithMockS3(t) 1121 + 1122 + // Inject S3 error 1123 + mockS3Client.CreateMultipartError = fmt.Errorf("simulated S3 CreateMultipartUpload failure") 456 1124 457 - req := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 458 - "uploadId": "test-id", 459 - "digest": "sha256:test", 460 - "parts": []any{}, 1125 + req := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 1126 + "digest": "sha256:test123", 461 1127 }) 462 1128 addMockAuth(req) 463 1129 464 1130 w := httptest.NewRecorder() 465 - handler.HandleCompleteUpload(w, req) 1131 + handler.HandleInitiateUpload(w, req) 466 1132 467 - if w.Code != http.StatusBadRequest { 468 - t.Errorf("Expected status 400, got %d", w.Code) 1133 + if w.Code != http.StatusInternalServerError { 1134 + t.Errorf("Expected status 500, got %d: %s", w.Code, w.Body.String()) 1135 + } 1136 + 1137 + // Verify error message 1138 + if !bytes.Contains(w.Body.Bytes(), []byte("failed to initiate upload")) { 1139 + t.Errorf("Expected error about failed initiate, got: %s", w.Body.String()) 469 1140 } 470 1141 } 471 1142 472 - func TestHandleCompleteUpload_InvalidSession(t *testing.T) { 473 - handler, _ := setupTestOCIHandler(t) 1143 + func TestHandleInitiateUpload_MockS3_WhitespaceDigest(t *testing.T) { 1144 + handler, _, _ := setupTestOCIHandlerWithMockS3(t) 474 1145 475 - req := makeJSONRequest("POST", atproto.HoldCompleteUpload, map[string]any{ 476 - "uploadId": "invalid-upload-id", 477 - "digest": "sha256:test", 478 - "parts": []any{ 479 - map[string]any{"partNumber": 1, "etag": "abc"}, 480 - }, 1146 + req := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 1147 + "digest": " ", // Whitespace only 
481 1148 }) 482 1149 addMockAuth(req) 483 1150 484 1151 w := httptest.NewRecorder() 485 - handler.HandleCompleteUpload(w, req) 1152 + handler.HandleInitiateUpload(w, req) 486 1153 487 - if w.Code != http.StatusInternalServerError { 488 - t.Errorf("Expected status 500, got %d", w.Code) 1154 + // Current implementation doesn't validate whitespace, so this may succeed 1155 + // or fail based on S3 key validation. This test documents the behavior. 1156 + // If you want to enforce validation, update the handler and change this assertion. 1157 + if w.Code == http.StatusOK { 1158 + t.Log("Handler accepts whitespace digest (S3 may reject it)") 489 1159 } 490 1160 } 491 1161 492 - // Tests for HandleAbortUpload 1162 + // Tests for HandleGetPartUploadURL edge cases 1163 + 1164 + func TestHandleGetPartUploadUrl_MockS3_MissingParams(t *testing.T) { 1165 + handler, _, _ := setupTestOCIHandlerWithMockS3(t) 1166 + 1167 + tests := []struct { 1168 + name string 1169 + body map[string]any 1170 + }{ 1171 + {"missing uploadId", map[string]any{"partNumber": 1}}, 1172 + {"missing partNumber", map[string]any{"uploadId": "test-id"}}, 1173 + {"partNumber is zero", map[string]any{"uploadId": "test-id", "partNumber": 0}}, 1174 + } 1175 + 1176 + for _, tt := range tests { 1177 + t.Run(tt.name, func(t *testing.T) { 1178 + req := makeJSONRequest("POST", atproto.HoldGetPartUploadURL, tt.body) 1179 + addMockAuth(req) 1180 + 1181 + w := httptest.NewRecorder() 1182 + handler.HandleGetPartUploadURL(w, req) 1183 + 1184 + if w.Code != http.StatusBadRequest { 1185 + t.Errorf("Expected status 400, got %d: %s", w.Code, w.Body.String()) 1186 + } 1187 + }) 1188 + } 1189 + } 493 1190 494 - func TestHandleAbortUpload_Success(t *testing.T) { 495 - handler, _ := setupTestOCIHandler(t) 1191 + func TestHandleGetPartUploadUrl_MockS3_ValidSession(t *testing.T) { 1192 + handler, mockS3Client, _ := setupTestOCIHandlerWithMockS3(t) 496 1193 497 - // Initiate upload 1194 + // First initiate an upload 498 1195 initReq 
:= makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 499 - "digest": "sha256:abc123", 1196 + "digest": "sha256:parttest123", 500 1197 }) 501 1198 addMockAuth(initReq) 502 1199 initW := httptest.NewRecorder() ··· 506 1203 decodeJSONResponse(t, initW, &initResp) 507 1204 uploadID := initResp["uploadId"].(string) 508 1205 509 - // Abort upload 510 - req := makeJSONRequest("POST", atproto.HoldAbortUpload, map[string]string{ 511 - "uploadId": uploadID, 1206 + // Now get part upload URL 1207 + partReq := makeJSONRequest("POST", atproto.HoldGetPartUploadURL, map[string]any{ 1208 + "uploadId": uploadID, 1209 + "partNumber": 5, // Arbitrary part number 512 1210 }) 1211 + addMockAuth(partReq) 1212 + 1213 + partW := httptest.NewRecorder() 1214 + handler.HandleGetPartUploadURL(partW, partReq) 1215 + 1216 + if partW.Code != http.StatusOK { 1217 + t.Errorf("Expected status 200, got %d: %s", partW.Code, partW.Body.String()) 1218 + } 1219 + 1220 + var partResp PartUploadInfo 1221 + decodeJSONResponse(t, partW, &partResp) 1222 + 1223 + // Verify URL is returned 1224 + if partResp.URL == "" { 1225 + t.Error("Expected non-empty presigned URL") 1226 + } 1227 + 1228 + // Verify mock S3 was called with correct part number 1229 + if len(mockS3Client.UploadPartCalls) != 1 { 1230 + t.Fatalf("Expected 1 UploadPart call, got %d", len(mockS3Client.UploadPartCalls)) 1231 + } 1232 + if mockS3Client.UploadPartCalls[0].PartNumber != 5 { 1233 + t.Errorf("Expected partNumber=5, got %d", mockS3Client.UploadPartCalls[0].PartNumber) 1234 + } 1235 + } 1236 + 1237 + // Tests for HandleAbortUpload edge cases 1238 + 1239 + func TestHandleAbortUpload_MockS3_MissingUploadId(t *testing.T) { 1240 + handler, _, _ := setupTestOCIHandlerWithMockS3(t) 1241 + 1242 + req := makeJSONRequest("POST", atproto.HoldAbortUpload, map[string]string{}) 513 1243 addMockAuth(req) 514 1244 515 1245 w := httptest.NewRecorder() 516 1246 handler.HandleAbortUpload(w, req) 517 1247 518 - if w.Code != 
http.StatusOK { 519 - t.Errorf("Expected status 200, got %d: %s", w.Code, w.Body.String()) 1248 + if w.Code != http.StatusBadRequest { 1249 + t.Errorf("Expected status 400, got %d: %s", w.Code, w.Body.String()) 520 1250 } 1251 + } 1252 + 1253 + func TestHandleAbortUpload_MockS3_S3Error(t *testing.T) { 1254 + handler, mockS3Client, _ := setupTestOCIHandlerWithMockS3(t) 521 1255 522 - var resp map[string]any 523 - decodeJSONResponse(t, w, &resp) 1256 + // First initiate an upload 1257 + initReq := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 1258 + "digest": "sha256:aborttest123", 1259 + }) 1260 + addMockAuth(initReq) 1261 + initW := httptest.NewRecorder() 1262 + handler.HandleInitiateUpload(initW, initReq) 1263 + 1264 + var initResp map[string]any 1265 + decodeJSONResponse(t, initW, &initResp) 1266 + uploadID := initResp["uploadId"].(string) 524 1267 1268 + // Inject S3 abort error 1269 + mockS3Client.AbortError = fmt.Errorf("simulated S3 AbortMultipartUpload failure") 1270 + 1271 + // Abort should fail 1272 + abortReq := makeJSONRequest("POST", atproto.HoldAbortUpload, map[string]string{ 1273 + "uploadId": uploadID, 1274 + }) 1275 + addMockAuth(abortReq) 1276 + 1277 + abortW := httptest.NewRecorder() 1278 + handler.HandleAbortUpload(abortW, abortReq) 1279 + 1280 + if abortW.Code != http.StatusInternalServerError { 1281 + t.Errorf("Expected status 500, got %d: %s", abortW.Code, abortW.Body.String()) 1282 + } 1283 + 1284 + // Verify error message 1285 + if !bytes.Contains(abortW.Body.Bytes(), []byte("failed to abort upload")) { 1286 + t.Errorf("Expected error about failed abort, got: %s", abortW.Body.String()) 1287 + } 1288 + } 1289 + 1290 + func TestHandleAbortUpload_MockS3_ValidSession(t *testing.T) { 1291 + handler, mockS3Client, _ := setupTestOCIHandlerWithMockS3(t) 1292 + 1293 + // First initiate an upload 1294 + initReq := makeJSONRequest("POST", atproto.HoldInitiateUpload, map[string]string{ 1295 + "digest": "sha256:abortvalid123", 
1296 + }) 1297 + addMockAuth(initReq) 1298 + initW := httptest.NewRecorder() 1299 + handler.HandleInitiateUpload(initW, initReq) 1300 + 1301 + var initResp map[string]any 1302 + decodeJSONResponse(t, initW, &initResp) 1303 + uploadID := initResp["uploadId"].(string) 1304 + 1305 + // Abort should succeed 1306 + abortReq := makeJSONRequest("POST", atproto.HoldAbortUpload, map[string]string{ 1307 + "uploadId": uploadID, 1308 + }) 1309 + addMockAuth(abortReq) 1310 + 1311 + abortW := httptest.NewRecorder() 1312 + handler.HandleAbortUpload(abortW, abortReq) 1313 + 1314 + if abortW.Code != http.StatusOK { 1315 + t.Errorf("Expected status 200, got %d: %s", abortW.Code, abortW.Body.String()) 1316 + } 1317 + 1318 + // Verify response 1319 + var resp map[string]any 1320 + decodeJSONResponse(t, abortW, &resp) 525 1321 if resp["status"] != "aborted" { 526 1322 t.Errorf("Expected status=aborted, got %v", resp["status"]) 527 1323 } 528 - } 529 1324 530 - func TestHandleAbortUpload_InvalidSession(t *testing.T) { 531 - handler, _ := setupTestOCIHandler(t) 1325 + // Verify S3 was called 1326 + if len(mockS3Client.AbortCalls) != 1 { 1327 + t.Errorf("Expected 1 Abort call, got %d", len(mockS3Client.AbortCalls)) 1328 + } 532 1329 533 - req := makeJSONRequest("POST", atproto.HoldAbortUpload, map[string]string{ 534 - "uploadId": "invalid-upload-id", 1330 + // Session should be deleted - trying to abort again should fail 1331 + abortReq2 := makeJSONRequest("POST", atproto.HoldAbortUpload, map[string]string{ 1332 + "uploadId": uploadID, 535 1333 }) 536 - addMockAuth(req) 1334 + addMockAuth(abortReq2) 537 1335 538 - w := httptest.NewRecorder() 539 - handler.HandleAbortUpload(w, req) 1336 + abortW2 := httptest.NewRecorder() 1337 + handler.HandleAbortUpload(abortW2, abortReq2) 540 1338 541 - if w.Code != http.StatusInternalServerError { 542 - t.Errorf("Expected status 500, got %d", w.Code) 1339 + if abortW2.Code != http.StatusInternalServerError { 1340 + t.Errorf("Expected status 500 for 
deleted session, got %d", abortW2.Code) 543 1341 } 544 1342 }
+3 -3
pkg/hold/pds/xrpc.go
··· 1395 1395 switch operation { 1396 1396 case http.MethodGet: 1397 1397 // Note: Don't use ResponseContentType - not supported by all S3-compatible services 1398 - req, _ = h.s3Service.Client.GetObjectRequest(&awss3.GetObjectInput{ 1398 + req = h.s3Service.Client.GetObjectPresignable(&awss3.GetObjectInput{ 1399 1399 Bucket: &h.s3Service.Bucket, 1400 1400 Key: &s3Key, 1401 1401 }) 1402 1402 1403 1403 case http.MethodHead: 1404 - req, _ = h.s3Service.Client.HeadObjectRequest(&awss3.HeadObjectInput{ 1404 + req = h.s3Service.Client.HeadObjectPresignable(&awss3.HeadObjectInput{ 1405 1405 Bucket: &h.s3Service.Bucket, 1406 1406 Key: &s3Key, 1407 1407 }) 1408 1408 1409 1409 case http.MethodPut: 1410 - req, _ = h.s3Service.Client.PutObjectRequest(&awss3.PutObjectInput{ 1410 + req = h.s3Service.Client.PutObjectPresignable(&awss3.PutObjectInput{ 1411 1411 Bucket: &h.s3Service.Bucket, 1412 1412 Key: &s3Key, 1413 1413 ContentType: &contentType,
+200 -1
pkg/hold/pds/xrpc_test.go
··· 1957 1957 } 1958 1958 } 1959 1959 1960 + // setupTestXRPCHandlerWithMockS3 creates handler with MockS3Client for testing presigned URLs 1961 + func setupTestXRPCHandlerWithMockS3(t *testing.T) (*XRPCHandler, *s3.MockS3Client, context.Context) { 1962 + t.Helper() 1963 + 1964 + ctx := context.Background() 1965 + tmpDir := t.TempDir() 1966 + 1967 + // Use in-memory database for speed 1968 + dbPath := ":memory:" 1969 + keyPath := filepath.Join(tmpDir, "signing-key") 1970 + 1971 + // Copy shared signing key instead of generating a new one 1972 + if err := os.WriteFile(keyPath, sharedTestKey, 0600); err != nil { 1973 + t.Fatalf("Failed to copy shared signing key: %v", err) 1974 + } 1975 + 1976 + pds, err := NewHoldPDS(ctx, "did:web:hold.example.com", "https://hold.example.com", dbPath, keyPath, false) 1977 + if err != nil { 1978 + t.Fatalf("Failed to create test PDS: %v", err) 1979 + } 1980 + 1981 + // Bootstrap with a test owner, suppressing stdout to avoid log spam 1982 + ownerDID := "did:plc:testowner123" 1983 + 1984 + // Redirect stdout to suppress bootstrap logging 1985 + oldStdout := os.Stdout 1986 + r, w, _ := os.Pipe() 1987 + os.Stdout = w 1988 + 1989 + err = pds.Bootstrap(ctx, nil, ownerDID, true, false, "", "") 1990 + 1991 + // Restore stdout 1992 + w.Close() 1993 + os.Stdout = oldStdout 1994 + io.ReadAll(r) // Drain the pipe 1995 + 1996 + if err != nil { 1997 + t.Fatalf("Failed to bootstrap PDS: %v", err) 1998 + } 1999 + 2000 + // Create MockS3Client for testing presigned URLs 2001 + mockS3Client := s3.NewMockS3Client("https://mock-s3.example.com") 2002 + s3Service := s3.S3Service{ 2003 + Client: mockS3Client, 2004 + Bucket: "test-bucket", 2005 + PathPrefix: "test-prefix", 2006 + } 2007 + 2008 + // Create filesystem storage driver for tests 2009 + storageDir := filepath.Join(tmpDir, "storage") 2010 + params := map[string]any{ 2011 + "rootdirectory": storageDir, 2012 + } 2013 + driver, err := factory.Create(ctx, "filesystem", params) 2014 + if err != nil { 
2015 + t.Fatalf("Failed to create storage driver: %v", err) 2016 + } 2017 + 2018 + // Create mock PDS client for DPoP validation 2019 + mockClient := &mockPDSClient{} 2020 + 2021 + // Create XRPC handler with mock S3 client and real filesystem driver 2022 + handler := NewXRPCHandler(pds, s3Service, driver, nil, mockClient, nil) 2023 + 2024 + return handler, mockS3Client, ctx 2025 + } 2026 + 1960 2027 // setupTestXRPCHandlerWithBlobs creates handler with mock s3 service and real filesystem driver 1961 2028 func setupTestXRPCHandlerWithBlobs(t *testing.T) (*XRPCHandler, *mockS3Service, context.Context) { 1962 2029 t.Helper() ··· 2138 2205 t.Skip("Skipping blob store error test - using real filesystem driver now") 2139 2206 } 2140 2207 2141 - // Tests for HandleGetBlob 2208 + // Tests for HandleGetBlob with Mock S3 (presigned URLs) 2209 + 2210 + // TestHandleGetBlob_MockS3_SHA256Digest tests getBlob with OCI sha256 digest format using MockS3Client 2211 + // This verifies that presigned S3 URLs are correctly generated 2212 + func TestHandleGetBlob_MockS3_SHA256Digest(t *testing.T) { 2213 + handler, mockS3Client, _ := setupTestXRPCHandlerWithMockS3(t) 2214 + 2215 + holdDID := "did:web:hold.example.com" 2216 + digest := "sha256:abc123def456" // OCI digest format 2217 + 2218 + req := makeXRPCGetRequest(atproto.SyncGetBlob, map[string]string{ 2219 + "did": holdDID, 2220 + "cid": digest, 2221 + }) 2222 + w := httptest.NewRecorder() 2223 + 2224 + handler.HandleGetBlob(w, req) 2225 + 2226 + // Should return 200 OK with JSON response 2227 + if w.Code != http.StatusOK { 2228 + t.Errorf("Expected status 200 OK, got %d", w.Code) 2229 + } 2230 + 2231 + // Parse JSON response 2232 + var response map[string]string 2233 + if err := json.Unmarshal(w.Body.Bytes(), &response); err != nil { 2234 + t.Fatalf("Failed to parse JSON response: %v", err) 2235 + } 2236 + 2237 + // Verify URL field points to mock S3 2238 + if response["url"] == "" { 2239 + t.Error("Expected url field in 
response") 2240 + } 2241 + 2242 + // URL should be from mock S3 server 2243 + if !strings.Contains(response["url"], "mock-s3.example.com") { 2244 + t.Errorf("Expected mock S3 URL, got: %s", response["url"]) 2245 + } 2246 + 2247 + // Verify GetObjectPresignable was called 2248 + if len(mockS3Client.GetObjectCalls) != 1 { 2249 + t.Errorf("Expected 1 GetObject call, got %d", len(mockS3Client.GetObjectCalls)) 2250 + } 2251 + } 2252 + 2253 + // TestHandleGetBlob_MockS3_HeadMethod tests HEAD request support with MockS3Client 2254 + func TestHandleGetBlob_MockS3_HeadMethod(t *testing.T) { 2255 + handler, mockS3Client, _ := setupTestXRPCHandlerWithMockS3(t) 2256 + 2257 + holdDID := "did:web:hold.example.com" 2258 + digest := "sha256:abc123def456" // OCI digest format 2259 + 2260 + // Use HEAD instead of GET with method query param 2261 + req := makeXRPCGetRequest(atproto.SyncGetBlob, map[string]string{ 2262 + "did": holdDID, 2263 + "cid": digest, 2264 + "method": "HEAD", 2265 + }) 2266 + w := httptest.NewRecorder() 2267 + 2268 + handler.HandleGetBlob(w, req) 2269 + 2270 + // Should return 200 OK with JSON response 2271 + if w.Code != http.StatusOK { 2272 + t.Errorf("Expected status 200 OK, got %d", w.Code) 2273 + } 2274 + 2275 + // Parse JSON response 2276 + var response map[string]string 2277 + if err := json.Unmarshal(w.Body.Bytes(), &response); err != nil { 2278 + t.Fatalf("Failed to parse JSON response: %v", err) 2279 + } 2280 + 2281 + // Verify URL field points to mock S3 2282 + if response["url"] == "" { 2283 + t.Error("Expected url field in response") 2284 + } 2285 + 2286 + // URL should be from mock S3 server 2287 + if !strings.Contains(response["url"], "mock-s3.example.com") { 2288 + t.Errorf("Expected mock S3 URL, got: %s", response["url"]) 2289 + } 2290 + 2291 + // Verify HeadObjectPresignable was called 2292 + if len(mockS3Client.HeadObjectCalls) != 1 { 2293 + t.Errorf("Expected 1 HeadObject call, got %d", len(mockS3Client.HeadObjectCalls)) 2294 + } 2295 + } 
2296 + 2297 + // TestHandleGetBlob_MockS3_PutMethod tests PUT request support with MockS3Client 2298 + func TestHandleGetBlob_MockS3_PutMethod(t *testing.T) { 2299 + handler, mockS3Client, _ := setupTestXRPCHandlerWithMockS3(t) 2300 + 2301 + holdDID := "did:web:hold.example.com" 2302 + digest := "sha256:abc123def456" // OCI digest format 2303 + 2304 + req := makeXRPCGetRequest(atproto.SyncGetBlob, map[string]string{ 2305 + "did": holdDID, 2306 + "cid": digest, 2307 + "method": "PUT", 2308 + }) 2309 + w := httptest.NewRecorder() 2310 + 2311 + handler.HandleGetBlob(w, req) 2312 + 2313 + // Should return 200 OK with JSON response 2314 + if w.Code != http.StatusOK { 2315 + t.Errorf("Expected status 200 OK, got %d", w.Code) 2316 + } 2317 + 2318 + // Parse JSON response 2319 + var response map[string]string 2320 + if err := json.Unmarshal(w.Body.Bytes(), &response); err != nil { 2321 + t.Fatalf("Failed to parse JSON response: %v", err) 2322 + } 2323 + 2324 + // Verify URL field points to mock S3 2325 + if response["url"] == "" { 2326 + t.Error("Expected url field in response") 2327 + } 2328 + 2329 + // URL should be from mock S3 server 2330 + if !strings.Contains(response["url"], "mock-s3.example.com") { 2331 + t.Errorf("Expected mock S3 URL, got: %s", response["url"]) 2332 + } 2333 + 2334 + // Verify PutObjectPresignable was called 2335 + if len(mockS3Client.PutObjectCalls) != 1 { 2336 + t.Errorf("Expected 1 PutObject call, got %d", len(mockS3Client.PutObjectCalls)) 2337 + } 2338 + } 2339 + 2340 + // Tests for HandleGetBlob (without S3 - fallback to XRPC proxy) 2142 2341 2143 2342 // TestHandleGetBlob tests com.atproto.sync.getBlob with ATProto CID 2144 2343 // ATProto blobs should return 307 redirect to presigned URL
+245
pkg/s3/mock.go
··· 1 + package s3 2 + 3 + import ( 4 + "fmt" 5 + "sync" 6 + "time" 7 + 8 + "github.com/aws/aws-sdk-go/aws" 9 + "github.com/aws/aws-sdk-go/aws/request" 10 + "github.com/aws/aws-sdk-go/service/s3" 11 + "github.com/google/uuid" 12 + ) 13 + 14 + // mockPresignable implements Presignable for testing 15 + type mockPresignable struct { 16 + url string 17 + } 18 + 19 + // Presign returns the pre-configured mock URL 20 + func (m *mockPresignable) Presign(expire time.Duration) (string, error) { 21 + return m.url, nil 22 + } 23 + 24 + // MockS3Client implements S3Client for testing without real S3 credentials. 25 + // It generates fake presigned URLs that point to a test server. 26 + type MockS3Client struct { 27 + // TestServerURL is the base URL for generating fake presigned URLs. 28 + // Requests to these URLs should be handled by a test server (httptest.Server). 29 + TestServerURL string 30 + 31 + // UploadID is returned by CreateMultipartUploadWithContext. 32 + // If empty, a UUID is generated. 
33 + UploadID string 34 + 35 + // Track calls for verification in tests 36 + mu sync.Mutex 37 + CreateMultipartCalls []CreateMultipartCall 38 + CompleteCalls []CompleteCall 39 + AbortCalls []AbortCall 40 + UploadPartCalls []UploadPartCall 41 + GetObjectCalls []GetObjectCall 42 + HeadObjectCalls []HeadObjectCall 43 + PutObjectCalls []PutObjectCall 44 + 45 + // Error injection for testing error handling 46 + CreateMultipartError error 47 + CompleteError error 48 + AbortError error 49 + } 50 + 51 + // CreateMultipartCall records a CreateMultipartUploadWithContext call 52 + type CreateMultipartCall struct { 53 + Bucket string 54 + Key string 55 + } 56 + 57 + // CompleteCall records a CompleteMultipartUploadWithContext call 58 + type CompleteCall struct { 59 + Bucket string 60 + Key string 61 + UploadID string 62 + Parts int 63 + } 64 + 65 + // AbortCall records an AbortMultipartUploadWithContext call 66 + type AbortCall struct { 67 + Bucket string 68 + Key string 69 + UploadID string 70 + } 71 + 72 + // UploadPartCall records an UploadPartRequest call 73 + type UploadPartCall struct { 74 + Bucket string 75 + Key string 76 + UploadID string 77 + PartNumber int64 78 + } 79 + 80 + // GetObjectCall records a GetObjectRequest call 81 + type GetObjectCall struct { 82 + Bucket string 83 + Key string 84 + } 85 + 86 + // HeadObjectCall records a HeadObjectRequest call 87 + type HeadObjectCall struct { 88 + Bucket string 89 + Key string 90 + } 91 + 92 + // PutObjectCall records a PutObjectRequest call 93 + type PutObjectCall struct { 94 + Bucket string 95 + Key string 96 + } 97 + 98 + // NewMockS3Client creates a new mock S3 client for testing 99 + func NewMockS3Client(testServerURL string) *MockS3Client { 100 + return &MockS3Client{ 101 + TestServerURL: testServerURL, 102 + CreateMultipartCalls: []CreateMultipartCall{}, 103 + CompleteCalls: []CompleteCall{}, 104 + AbortCalls: []AbortCall{}, 105 + UploadPartCalls: []UploadPartCall{}, 106 + GetObjectCalls: []GetObjectCall{}, 107 
+ HeadObjectCalls: []HeadObjectCall{}, 108 + PutObjectCalls: []PutObjectCall{}, 109 + } 110 + } 111 + 112 + // CreateMultipartUploadWithContext implements S3Client 113 + func (m *MockS3Client) CreateMultipartUploadWithContext(ctx aws.Context, input *s3.CreateMultipartUploadInput, opts ...request.Option) (*s3.CreateMultipartUploadOutput, error) { 114 + m.mu.Lock() 115 + defer m.mu.Unlock() 116 + 117 + m.CreateMultipartCalls = append(m.CreateMultipartCalls, CreateMultipartCall{ 118 + Bucket: aws.StringValue(input.Bucket), 119 + Key: aws.StringValue(input.Key), 120 + }) 121 + 122 + if m.CreateMultipartError != nil { 123 + return nil, m.CreateMultipartError 124 + } 125 + 126 + uploadID := m.UploadID 127 + if uploadID == "" { 128 + uploadID = "mock-upload-" + uuid.New().String() 129 + } 130 + 131 + return &s3.CreateMultipartUploadOutput{ 132 + UploadId: aws.String(uploadID), 133 + }, nil 134 + } 135 + 136 + // CompleteMultipartUploadWithContext implements S3Client 137 + func (m *MockS3Client) CompleteMultipartUploadWithContext(ctx aws.Context, input *s3.CompleteMultipartUploadInput, opts ...request.Option) (*s3.CompleteMultipartUploadOutput, error) { 138 + m.mu.Lock() 139 + defer m.mu.Unlock() 140 + 141 + partsCount := 0 142 + if input.MultipartUpload != nil { 143 + partsCount = len(input.MultipartUpload.Parts) 144 + } 145 + 146 + m.CompleteCalls = append(m.CompleteCalls, CompleteCall{ 147 + Bucket: aws.StringValue(input.Bucket), 148 + Key: aws.StringValue(input.Key), 149 + UploadID: aws.StringValue(input.UploadId), 150 + Parts: partsCount, 151 + }) 152 + 153 + if m.CompleteError != nil { 154 + return nil, m.CompleteError 155 + } 156 + 157 + // Return a mock ETag 158 + etag := "\"mock-etag-" + uuid.New().String() + "\"" 159 + return &s3.CompleteMultipartUploadOutput{ 160 + ETag: aws.String(etag), 161 + }, nil 162 + } 163 + 164 + // AbortMultipartUploadWithContext implements S3Client 165 + func (m *MockS3Client) AbortMultipartUploadWithContext(ctx aws.Context, input 
*s3.AbortMultipartUploadInput, opts ...request.Option) (*s3.AbortMultipartUploadOutput, error) { 166 + m.mu.Lock() 167 + defer m.mu.Unlock() 168 + 169 + m.AbortCalls = append(m.AbortCalls, AbortCall{ 170 + Bucket: aws.StringValue(input.Bucket), 171 + Key: aws.StringValue(input.Key), 172 + UploadID: aws.StringValue(input.UploadId), 173 + }) 174 + 175 + if m.AbortError != nil { 176 + return nil, m.AbortError 177 + } 178 + 179 + return &s3.AbortMultipartUploadOutput{}, nil 180 + } 181 + 182 + // UploadPartPresignable implements S3Client 183 + // Returns a mock Presignable that generates test server URLs 184 + func (m *MockS3Client) UploadPartPresignable(input *s3.UploadPartInput) Presignable { 185 + m.mu.Lock() 186 + defer m.mu.Unlock() 187 + 188 + m.UploadPartCalls = append(m.UploadPartCalls, UploadPartCall{ 189 + Bucket: aws.StringValue(input.Bucket), 190 + Key: aws.StringValue(input.Key), 191 + UploadID: aws.StringValue(input.UploadId), 192 + PartNumber: aws.Int64Value(input.PartNumber), 193 + }) 194 + 195 + // Create a mock presignable request 196 + url := fmt.Sprintf("%s/upload/%s?partNumber=%d&uploadId=%s", 197 + m.TestServerURL, 198 + aws.StringValue(input.Key), 199 + aws.Int64Value(input.PartNumber), 200 + aws.StringValue(input.UploadId)) 201 + 202 + return &mockPresignable{url: url} 203 + } 204 + 205 + // GetObjectPresignable implements S3Client 206 + func (m *MockS3Client) GetObjectPresignable(input *s3.GetObjectInput) Presignable { 207 + m.mu.Lock() 208 + defer m.mu.Unlock() 209 + 210 + m.GetObjectCalls = append(m.GetObjectCalls, GetObjectCall{ 211 + Bucket: aws.StringValue(input.Bucket), 212 + Key: aws.StringValue(input.Key), 213 + }) 214 + 215 + url := fmt.Sprintf("%s/get/%s", m.TestServerURL, aws.StringValue(input.Key)) 216 + return &mockPresignable{url: url} 217 + } 218 + 219 + // HeadObjectPresignable implements S3Client 220 + func (m *MockS3Client) HeadObjectPresignable(input *s3.HeadObjectInput) Presignable { 221 + m.mu.Lock() 222 + defer 
m.mu.Unlock() 223 + 224 + m.HeadObjectCalls = append(m.HeadObjectCalls, HeadObjectCall{ 225 + Bucket: aws.StringValue(input.Bucket), 226 + Key: aws.StringValue(input.Key), 227 + }) 228 + 229 + url := fmt.Sprintf("%s/head/%s", m.TestServerURL, aws.StringValue(input.Key)) 230 + return &mockPresignable{url: url} 231 + } 232 + 233 + // PutObjectPresignable implements S3Client 234 + func (m *MockS3Client) PutObjectPresignable(input *s3.PutObjectInput) Presignable { 235 + m.mu.Lock() 236 + defer m.mu.Unlock() 237 + 238 + m.PutObjectCalls = append(m.PutObjectCalls, PutObjectCall{ 239 + Bucket: aws.StringValue(input.Bucket), 240 + Key: aws.StringValue(input.Key), 241 + }) 242 + 243 + url := fmt.Sprintf("%s/put/%s", m.TestServerURL, aws.StringValue(input.Key)) 244 + return &mockPresignable{url: url} 245 + }
+83 -25
pkg/s3/types.go
··· 1 1 // Package s3 provides S3 client initialization and presigned URL generation 2 - // for hold services. It supports S3, Storj, and Minio storage backends, 3 - // with fallback to buffered proxy mode when presigned URLs are unavailable. 2 + // for hold services. It supports S3, Storj, and Minio storage backends. 4 3 package s3 5 4 6 5 import ( 7 6 "fmt" 8 7 "log/slog" 9 8 "strings" 9 + "time" 10 10 11 11 "github.com/aws/aws-sdk-go/aws" 12 12 "github.com/aws/aws-sdk-go/aws/credentials" 13 + "github.com/aws/aws-sdk-go/aws/request" 13 14 "github.com/aws/aws-sdk-go/aws/session" 14 15 "github.com/aws/aws-sdk-go/service/s3" 15 16 ) 16 17 17 - type S3Service struct { 18 - Client *s3.S3 // S3 client for presigned URLs (nil if not S3 storage) 19 - Bucket string // S3 bucket name 20 - PathPrefix string // S3 path prefix (if any) 18 + // Presignable represents a request that can be presigned for direct client access. 19 + // This interface allows mocking the Presign() method in tests. 20 + type Presignable interface { 21 + Presign(expire time.Duration) (string, error) 21 22 } 22 23 23 - // NewS3Service initializes the S3 client for presigned URL generation 24 - // Returns nil error if S3 client is successfully initialized 25 - // Returns error if storage is not S3 or if initialization fails (service will fall back to proxy mode) 26 - func NewS3Service(params map[string]any, disablePresigned bool, storageType string) (*S3Service, error) { 27 - // Check if presigned URLs are explicitly disabled 28 - if disablePresigned { 29 - slog.Warn("S3 presigned URLs DISABLED by config", 30 - "reason", "DISABLE_PRESIGNED_URLS=true", 31 - "uploadMode", "buffered") 32 - return &S3Service{}, nil 33 - } 24 + // S3Client defines the S3 operations used by the hold service. 25 + // This interface allows mocking S3 for tests without real credentials. 26 + // Use RealS3Client to wrap *s3.S3, or MockS3Client for testing. 
27 + type S3Client interface { 28 + // Multipart upload operations 29 + CreateMultipartUploadWithContext(ctx aws.Context, input *s3.CreateMultipartUploadInput, opts ...request.Option) (*s3.CreateMultipartUploadOutput, error) 30 + CompleteMultipartUploadWithContext(ctx aws.Context, input *s3.CompleteMultipartUploadInput, opts ...request.Option) (*s3.CompleteMultipartUploadOutput, error) 31 + AbortMultipartUploadWithContext(ctx aws.Context, input *s3.AbortMultipartUploadInput, opts ...request.Option) (*s3.AbortMultipartUploadOutput, error) 34 32 35 - // Check if storage driver is S3 36 - if storageType != "s3" { 37 - slog.Info("Presigned URLs disabled for non-S3 storage", 38 - "storageDriver", storageType) 39 - return &S3Service{}, nil 40 - } 33 + // Presigned URL operations - return Presignable interface for testability 34 + // (the underlying SDK request constructors also return an output struct, which callers discard) 35 + UploadPartPresignable(input *s3.UploadPartInput) Presignable 36 + GetObjectPresignable(input *s3.GetObjectInput) Presignable 37 + HeadObjectPresignable(input *s3.HeadObjectInput) Presignable 38 + PutObjectPresignable(input *s3.PutObjectInput) Presignable 39 + } 40 + 41 + // RealS3Client wraps *s3.S3 to implement S3Client interface 42 + type RealS3Client struct { 43 + client *s3.S3 44 + } 41 45 46 + // NewRealS3Client creates a new RealS3Client wrapper 47 + func NewRealS3Client(client *s3.S3) *RealS3Client { 48 + return &RealS3Client{client: client} 49 + } 50 + 51 + // CreateMultipartUploadWithContext implements S3Client 52 + func (r *RealS3Client) CreateMultipartUploadWithContext(ctx aws.Context, input *s3.CreateMultipartUploadInput, opts ...request.Option) (*s3.CreateMultipartUploadOutput, error) { 53 + return r.client.CreateMultipartUploadWithContext(ctx, input, opts...)
54 + } 55 + 56 + // CompleteMultipartUploadWithContext implements S3Client 57 + func (r *RealS3Client) CompleteMultipartUploadWithContext(ctx aws.Context, input *s3.CompleteMultipartUploadInput, opts ...request.Option) (*s3.CompleteMultipartUploadOutput, error) { 58 + return r.client.CompleteMultipartUploadWithContext(ctx, input, opts...) 59 + } 60 + 61 + // AbortMultipartUploadWithContext implements S3Client 62 + func (r *RealS3Client) AbortMultipartUploadWithContext(ctx aws.Context, input *s3.AbortMultipartUploadInput, opts ...request.Option) (*s3.AbortMultipartUploadOutput, error) { 63 + return r.client.AbortMultipartUploadWithContext(ctx, input, opts...) 64 + } 65 + 66 + // UploadPartPresignable implements S3Client 67 + func (r *RealS3Client) UploadPartPresignable(input *s3.UploadPartInput) Presignable { 68 + req, _ := r.client.UploadPartRequest(input) 69 + return req 70 + } 71 + 72 + // GetObjectPresignable implements S3Client 73 + func (r *RealS3Client) GetObjectPresignable(input *s3.GetObjectInput) Presignable { 74 + req, _ := r.client.GetObjectRequest(input) 75 + return req 76 + } 77 + 78 + // HeadObjectPresignable implements S3Client 79 + func (r *RealS3Client) HeadObjectPresignable(input *s3.HeadObjectInput) Presignable { 80 + req, _ := r.client.HeadObjectRequest(input) 81 + return req 82 + } 83 + 84 + // PutObjectPresignable implements S3Client 85 + func (r *RealS3Client) PutObjectPresignable(input *s3.PutObjectInput) Presignable { 86 + req, _ := r.client.PutObjectRequest(input) 87 + return req 88 + } 89 + 90 + // S3Service wraps an S3 client for presigned URL generation 91 + type S3Service struct { 92 + Client S3Client // S3 client for presigned URLs (interface for testability) 93 + Bucket string // S3 bucket name 94 + PathPrefix string // S3 path prefix (if any) 95 + } 96 + 97 + // NewS3Service initializes the S3 client for presigned URL generation 98 + // S3 is required - this will return an error if not properly configured 99 + func 
NewS3Service(params map[string]any) (*S3Service, error) { 42 100 // Extract required S3 configuration 43 101 region, _ := params["region"].(string) 44 102 if region == "" { ··· 86 144 "region", region, 87 145 "pathPrefix", s3PathPrefix) 88 146 89 - // Create S3 client 147 + // Create S3 client wrapped in RealS3Client for interface compatibility 90 148 return &S3Service{ 91 - Client: s3.New(sess), 149 + Client: NewRealS3Client(s3.New(sess)), 92 150 Bucket: bucket, 93 151 PathPrefix: s3PathPrefix, 94 152 }, nil
+6 -47
pkg/s3/types_test.go
··· 6 6 "atcr.io/pkg/logging" 7 7 ) 8 8 9 - func TestNewS3Service_PresignedDisabled(t *testing.T) { 10 - t.Cleanup(logging.SetupTestLogger()) 11 - 12 - params := map[string]any{ 13 - "bucket": "test-bucket", 14 - "region": "us-west-2", 15 - } 16 - 17 - service, err := NewS3Service(params, true, "s3") 18 - if err != nil { 19 - t.Fatalf("Expected success when presigned disabled, got error: %v", err) 20 - } 21 - 22 - if service.Client != nil { 23 - t.Error("Expected Client to be nil when presigned disabled") 24 - } 25 - if service.Bucket != "" { 26 - t.Error("Expected empty Bucket when presigned disabled") 27 - } 28 - } 29 - 30 - func TestNewS3Service_NonS3Storage(t *testing.T) { 31 - t.Cleanup(logging.SetupTestLogger()) 32 - 33 - params := map[string]any{ 34 - "rootdirectory": "/tmp/test", 35 - } 36 - 37 - service, err := NewS3Service(params, false, "filesystem") 38 - if err != nil { 39 - t.Fatalf("Expected success for non-S3 storage, got error: %v", err) 40 - } 41 - 42 - if service.Client != nil { 43 - t.Error("Expected Client to be nil for non-S3 storage") 44 - } 45 - if service.Bucket != "" { 46 - t.Error("Expected empty Bucket for non-S3 storage") 47 - } 48 - } 49 - 50 9 func TestNewS3Service_MissingBucket(t *testing.T) { 51 10 t.Cleanup(logging.SetupTestLogger()) 52 11 ··· 57 16 // Missing bucket 58 17 } 59 18 60 - _, err := NewS3Service(params, false, "s3") 19 + _, err := NewS3Service(params) 61 20 if err == nil { 62 21 t.Error("Expected error when bucket is missing") 63 22 } ··· 73 32 "secretkey": "test-secret-key", 74 33 } 75 34 76 - service, err := NewS3Service(params, false, "s3") 35 + service, err := NewS3Service(params) 77 36 if err != nil { 78 37 t.Fatalf("Expected success, got error: %v", err) 79 38 } ··· 100 59 "regionendpoint": "https://s3.storj.io", 101 60 } 102 61 103 - service, err := NewS3Service(params, false, "s3") 62 + service, err := NewS3Service(params) 104 63 if err != nil { 105 64 t.Fatalf("Expected success with custom endpoint, got error: 
%v", err) 106 65 } ··· 123 82 // No region specified - should use default 124 83 } 125 84 126 - service, err := NewS3Service(params, false, "s3") 85 + service, err := NewS3Service(params) 127 86 if err != nil { 128 87 t.Fatalf("Expected success with default region, got error: %v", err) 129 88 } ··· 147 106 "rootdirectory": "/my/prefix/path", 148 107 } 149 108 150 - service, err := NewS3Service(params, false, "s3") 109 + service, err := NewS3Service(params) 151 110 if err != nil { 152 111 t.Fatalf("Expected success, got error: %v", err) 153 112 } ··· 166 125 // No credentials - should allow IAM role auth 167 126 } 168 127 169 - service, err := NewS3Service(params, false, "s3") 128 + service, err := NewS3Service(params) 170 129 if err != nil { 171 130 t.Fatalf("Expected success without credentials (IAM role), got error: %v", err) 172 131 }