
# ATProto Signature Verification Plugins and Examples

This directory contains reference implementations and examples for integrating ATProto signature verification into various tools and workflows.

## Overview

ATCR uses ATProto's native signature system to cryptographically sign container images. To integrate signature verification into existing tools (Kubernetes, CI/CD, container runtimes), you can:

  1. Build plugins for verification frameworks (Ratify, Gatekeeper, Containerd)
  2. Use external services called by policy engines
  3. Integrate CLI tools in your CI/CD pipelines

## Directory Structure

```text
examples/plugins/
├── README.md                    # This file
├── ratify-verifier/            # Ratify plugin for Kubernetes
│   ├── README.md
│   ├── verifier.go
│   ├── config.go
│   ├── resolver.go
│   ├── crypto.go
│   ├── Dockerfile
│   ├── deployment.yaml
│   └── verifier-crd.yaml
├── gatekeeper-provider/        # OPA Gatekeeper external provider
│   ├── README.md
│   ├── main.go
│   ├── verifier.go
│   ├── resolver.go
│   ├── crypto.go
│   ├── Dockerfile
│   ├── deployment.yaml
│   └── provider-crd.yaml
├── containerd-verifier/        # Containerd bindir plugin
│   ├── README.md
│   ├── main.go
│   └── Dockerfile
└── ci-cd/                      # CI/CD integration examples
    ├── github-actions.yml
    ├── gitlab-ci.yml
    └── jenkins-pipeline.groovy
```

## Quick Start

**Option A: Ratify Plugin**

```bash
cd ratify-verifier
# Build plugin and deploy to Kubernetes
./build.sh
kubectl apply -f deployment.yaml
kubectl apply -f verifier-crd.yaml
```

**Option B: Gatekeeper Provider**

```bash
cd gatekeeper-provider
# Build and deploy external provider
docker build -t atcr.io/atcr/gatekeeper-provider:latest .
kubectl apply -f deployment.yaml
kubectl apply -f provider-crd.yaml
```

### For CI/CD

**GitHub Actions**

```bash
# Copy examples/plugins/ci-cd/github-actions.yml to .github/workflows/
cp ci-cd/github-actions.yml ../.github/workflows/verify-and-deploy.yml
```

**GitLab CI**

```bash
# Copy examples/plugins/ci-cd/gitlab-ci.yml to your repo
cp ci-cd/gitlab-ci.yml ../.gitlab-ci.yml
```

### For Containerd

```bash
cd containerd-verifier
# Build plugin
./build.sh
# Install to containerd plugins directory
sudo cp atcr-verifier /opt/containerd/bin/
```

## Plugins Overview

### Ratify Verifier Plugin ⭐

Use case: Kubernetes admission control with OPA Gatekeeper

How it works:

  1. Gatekeeper receives pod creation request
  2. Calls Ratify verification engine
  3. Ratify loads ATProto verifier plugin
  4. Plugin verifies signature and checks trust policy
  5. Returns allow/deny decision to Gatekeeper

Pros:

  • Standard Ratify plugin interface
  • Works with existing Gatekeeper deployments
  • Can combine with other verifiers (Notation, Cosign)
  • Policy-based enforcement

Cons:

  • Requires building custom Ratify image
  • Plugin must be compiled into image
  • More complex deployment

See: ratify-verifier/README.md

### Gatekeeper External Provider ⭐

Use case: Kubernetes admission control with OPA Gatekeeper

How it works (a minimal provider sketch follows the list):

  1. Gatekeeper receives pod creation request
  2. Rego policy calls external data provider API
  3. Provider verifies ATProto signature
  4. Returns verification result to Gatekeeper
  5. Rego policy makes allow/deny decision
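
To make the provider side concrete, here is a minimal, hypothetical sketch of the provider's HTTP endpoint in Go. The request/response field names follow Gatekeeper's external data protocol and should be checked against the Gatekeeper release you deploy; `verifyImage` is a stand-in for the ATProto verification flow described under Common Components below.

```go
// Hypothetical Gatekeeper external data provider endpoint (sketch).
package main

import (
	"context"
	"encoding/json"
	"log"
	"net/http"
)

// providerRequest/providerResponse mirror Gatekeeper's external data
// payloads; verify the field names against your Gatekeeper version.
type providerRequest struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Request    struct {
		Keys []string `json:"keys"` // image references to verify
	} `json:"request"`
}

type responseItem struct {
	Key   string `json:"key"`
	Value any    `json:"value,omitempty"`
	Error string `json:"error,omitempty"`
}

type providerResponse struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Response   struct {
		Idempotent bool           `json:"idempotent"`
		Items      []responseItem `json:"items"`
	} `json:"response"`
}

// verifyImage stands in for the real flow: resolve the signer DID,
// fetch the signed commit from the PDS, verify the signature, and
// check the trust policy.
func verifyImage(ctx context.Context, image string) error {
	_, _ = ctx, image
	return nil
}

func handleValidate(w http.ResponseWriter, r *http.Request) {
	var req providerRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Echo the caller's apiVersion rather than hard-coding one.
	resp := providerResponse{APIVersion: req.APIVersion, Kind: "ProviderResponse"}
	resp.Response.Idempotent = true
	for _, image := range req.Request.Keys {
		item := responseItem{Key: image}
		if err := verifyImage(r.Context(), image); err != nil {
			item.Error = err.Error() // the Rego policy turns this into a deny
		} else {
			item.Value = "verified"
		}
		resp.Response.Items = append(resp.Response.Items, item)
	}

	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/validate", handleValidate)
	log.Fatal(http.ListenAndServe(":8090", nil))
}
```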

Pros:

  • Simpler deployment (separate service)
  • Easy to update (no Gatekeeper changes)
  • Flexible Rego policies
  • Can add caching, rate limiting

Cons:

  • Additional service to maintain
  • Network dependency (provider must be reachable)
  • Slightly higher latency

See: gatekeeper-provider/README.md

### Containerd Bindir Plugin

Use case: Runtime-level verification for all images

How it works:

  1. Containerd pulls image
  2. Calls verifier plugin (bindir)
  3. Plugin verifies ATProto signature
  4. Returns result to containerd
  5. Containerd allows/blocks image

Pros:

  • Works at runtime level (not just Kubernetes)
  • Verification logic can be reused by other runtimes (CRI-O, Podman) via their own hooks
  • No Kubernetes required
  • Applies to all images

Cons:

  • Containerd 2.0+ required
  • More complex to debug
  • Less flexible policies

See: containerd-verifier/README.md

## CI/CD Integration Examples

### GitHub Actions

Complete workflow with:

  • Image signature verification
  • DID trust checking
  • Automated deployment on success

See: ci-cd/github-actions.yml

### GitLab CI

Pipeline with:

  • Multi-stage verification
  • Trust policy enforcement
  • Manual deployment approval

See: ci-cd/gitlab-ci.yml

### Jenkins

Declarative pipeline with:

  • Signature verification stage
  • Deployment gates
  • Rollback on failure

See: ci-cd/jenkins-pipeline.groovy (coming soon)

## Common Components

All plugins share common functionality:

### DID Resolution

Resolve a DID to its public key (a sketch follows the steps below):

```go
func ResolveDIDToPublicKey(ctx context.Context, did string) (*PublicKey, error)
```

Steps:

  1. Fetch DID document from PLC directory or did:web
  2. Extract verification method
  3. Decode multibase public key
  4. Parse as K-256 public key
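
A minimal sketch of these steps, assuming a did:plc identity resolved via plc.directory and the mr-tron/base58 and decred secp256k1 libraries (the real plugins wrap the key in their own PublicKey type and also handle did:web):

```go
// ResolveDIDToPublicKey: illustrative sketch of the four steps above.
package resolver

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"

	"github.com/decred/dcrd/dcrec/secp256k1/v4"
	"github.com/mr-tron/base58"
)

type didDocument struct {
	VerificationMethod []struct {
		ID                 string `json:"id"`
		Type               string `json:"type"`
		PublicKeyMultibase string `json:"publicKeyMultibase"`
	} `json:"verificationMethod"`
}

func ResolveDIDToPublicKey(ctx context.Context, did string) (*secp256k1.PublicKey, error) {
	// 1. Fetch the DID document (did:plc shown; did:web resolves from the domain instead).
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://plc.directory/"+did, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("plc.directory returned %s for %s", resp.Status, did)
	}

	var doc didDocument
	if err := json.NewDecoder(resp.Body).Decode(&doc); err != nil {
		return nil, err
	}

	// 2. Extract the verification method.
	if len(doc.VerificationMethod) == 0 {
		return nil, fmt.Errorf("no verification method in DID document for %s", did)
	}
	encoded := doc.VerificationMethod[0].PublicKeyMultibase

	// 3. Decode the multibase value: a leading 'z' means base58btc.
	if len(encoded) == 0 || encoded[0] != 'z' {
		return nil, fmt.Errorf("unexpected multibase prefix in key %q", encoded)
	}
	raw, err := base58.Decode(encoded[1:])
	if err != nil {
		return nil, err
	}

	// 4. Strip the multicodec prefix (0xe7 0x01 for secp256k1-pub) and parse
	//    the remaining compressed K-256 point.
	if len(raw) < 2 || raw[0] != 0xe7 || raw[1] != 0x01 {
		return nil, fmt.Errorf("key %q is not multicodec secp256k1-pub", encoded)
	}
	return secp256k1.ParsePubKey(raw[2:])
}
```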

### PDS Communication

Fetch a repository commit (a sketch follows the steps below):

```go
func FetchCommit(ctx context.Context, pdsEndpoint, did, commitCID string) (*Commit, error)
```

Steps:

  1. Call com.atproto.sync.getRepo XRPC endpoint
  2. Parse CAR file response
  3. Extract commit with matching CID
  4. Return commit data and signature
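
A sketch of the fetch, with CAR decoding left to whatever library the plugin uses (for example ipld/go-car plus a DAG-CBOR decoder); only the XRPC call is shown concretely:

```go
// FetchCommit sketch: calls com.atproto.sync.getRepo and hands the CAR
// bytes to a parser. Commit and parseCommitFromCAR are placeholders.
package resolver

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// Commit holds the fields needed for verification (sketch).
type Commit struct {
	DID          string
	Rev          string
	UnsignedCBOR []byte // DAG-CBOR of the commit without the sig field
	Sig          []byte // 64-byte ECDSA K-256 signature
}

func FetchCommit(ctx context.Context, pdsEndpoint, did, commitCID string) (*Commit, error) {
	// 1. Call the com.atproto.sync.getRepo XRPC endpoint; the response is a CAR file.
	u := fmt.Sprintf("%s/xrpc/com.atproto.sync.getRepo?did=%s", pdsEndpoint, url.QueryEscape(did))
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, u, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("getRepo returned %s", resp.Status)
	}

	car, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	// 2-4. Parse the CAR, locate the block whose CID matches commitCID, and
	// return its commit data plus signature (delegated to a CAR library).
	return parseCommitFromCAR(car, commitCID)
}

// parseCommitFromCAR is a stand-in for real CAR/DAG-CBOR decoding.
func parseCommitFromCAR(car []byte, commitCID string) (*Commit, error) {
	return nil, fmt.Errorf("CAR parsing not implemented in this sketch (cid %s, %d bytes)", commitCID, len(car))
}
```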

### Signature Verification

Verify the ECDSA K-256 signature over the commit (a sketch follows the steps below):

```go
func VerifySignature(pubKey *PublicKey, commit *Commit) error
```

Steps:

  1. Serialize the commit without its sig field (DAG-CBOR) to get the unsigned bytes
  2. Hash with SHA-256
  3. Verify ECDSA signature over hash
  4. Check signature is valid for public key
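
A sketch of the verification itself, assuming a raw 64-byte r||s signature and the decred secp256k1 package (the unsigned commit bytes come from the DAG-CBOR encoding in step 1; the signature layout should be confirmed against the atproto cryptography spec):

```go
// VerifySignature sketch: checks a 64-byte r||s ECDSA signature over the
// SHA-256 hash of the unsigned commit bytes. The parameters are the raw
// bytes rather than the wrapper types used elsewhere in this README.
package crypto

import (
	"crypto/sha256"
	"fmt"

	"github.com/decred/dcrd/dcrec/secp256k1/v4"
	"github.com/decred/dcrd/dcrec/secp256k1/v4/ecdsa"
)

func VerifySignature(pubKey *secp256k1.PublicKey, unsignedCommit, sig []byte) error {
	// 1-2. Hash the unsigned commit bytes with SHA-256.
	digest := sha256.Sum256(unsignedCommit)

	// 3. ATProto signatures are 64 bytes: r (32 bytes) followed by s (32 bytes).
	if len(sig) != 64 {
		return fmt.Errorf("expected 64-byte signature, got %d bytes", len(sig))
	}
	var r, s secp256k1.ModNScalar
	if overflow := r.SetByteSlice(sig[:32]); overflow {
		return fmt.Errorf("invalid signature: r overflows curve order")
	}
	if overflow := s.SetByteSlice(sig[32:]); overflow {
		return fmt.Errorf("invalid signature: s overflows curve order")
	}

	// 4. Verify the signature against the signer's public key.
	if !ecdsa.NewSignature(&r, &s).Verify(digest[:], pubKey) {
		return fmt.Errorf("signature does not match public key")
	}
	return nil
}
```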

### Trust Policy

Check whether a signer DID is trusted (a sketch follows the steps below):

```go
func IsTrusted(did string, now time.Time) bool
```

Steps:

  1. Load trust policy from config
  2. Check if DID in trusted list
  3. Verify validFrom/expiresAt timestamps
  4. Return true if trusted
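
A sketch of the check against the trust policy format shown in the next section (field names mirror that YAML; the actual plugin code may differ):

```go
package trust

import "time"

// TrustedDID mirrors one entry under trustedDIDs in the policy file;
// timestamps are kept as RFC 3339 strings and parsed here.
type TrustedDID struct {
	Name      string `yaml:"name"`
	ValidFrom string `yaml:"validFrom"`
	ExpiresAt string `yaml:"expiresAt"` // empty/null means no expiry
}

// Policy holds the parsed trustedDIDs map.
type Policy struct {
	TrustedDIDs map[string]TrustedDID `yaml:"trustedDIDs"`
}

// IsTrusted implements steps 2-4; step 1 (loading the policy) happens once at startup.
func (p *Policy) IsTrusted(did string, now time.Time) bool {
	entry, ok := p.TrustedDIDs[did]
	if !ok {
		return false // DID is not in the trusted list
	}
	if from, err := time.Parse(time.RFC3339, entry.ValidFrom); err != nil || now.Before(from) {
		return false // missing/invalid validFrom, or key not yet valid
	}
	if entry.ExpiresAt != "" {
		until, err := time.Parse(time.RFC3339, entry.ExpiresAt)
		if err != nil || now.After(until) {
			return false // invalid or passed expiry
		}
	}
	return true
}
```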

## Trust Policy Format

All plugins use the same trust policy format (a Go mapping is sketched after the example):

```yaml
version: 1.0

trustedDIDs:
  did:plc:alice123:
    name: "Alice (DevOps Lead)"
    validFrom: "2024-01-01T00:00:00Z"
    expiresAt: null

  did:plc:bob456:
    name: "Bob (Security Team)"
    validFrom: "2024-06-01T00:00:00Z"
    expiresAt: "2025-12-31T23:59:59Z"

policies:
  - name: production-images
    scope: "atcr.io/*/prod-*"
    require:
      signature: true
      trustedDIDs:
        - did:plc:alice123
        - did:plc:bob456
      minSignatures: 1
    action: enforce

  - name: dev-images
    scope: "atcr.io/*/dev-*"
    require:
      signature: false
    action: audit
```
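
One way to map this format onto Go with gopkg.in/yaml.v3, reusing the TrustedDID type from the Trust Policy sketch above (field names follow the example; treat it as illustrative rather than a canonical schema):

```go
package trust

import (
	"os"

	"gopkg.in/yaml.v3"
)

// PolicyRule mirrors one entry of the policies list in the example above.
type PolicyRule struct {
	Name    string `yaml:"name"`
	Scope   string `yaml:"scope"`
	Require struct {
		Signature     bool     `yaml:"signature"`
		TrustedDIDs   []string `yaml:"trustedDIDs"`
		MinSignatures int      `yaml:"minSignatures"`
	} `yaml:"require"`
	Action string `yaml:"action"` // "enforce" or "audit"
}

// PolicyFile is the whole document: the trustedDIDs map (TrustedDID is
// sketched under Trust Policy above) plus the policy rules.
type PolicyFile struct {
	TrustedDIDs map[string]TrustedDID `yaml:"trustedDIDs"`
	Policies    []PolicyRule          `yaml:"policies"`
}

// LoadPolicy reads and parses a policy file from disk.
func LoadPolicy(path string) (*PolicyFile, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var pf PolicyFile
	if err := yaml.Unmarshal(data, &pf); err != nil {
		return nil, err
	}
	return &pf, nil
}
```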

## Implementation Notes

### Dependencies

All plugins require:

  • Go 1.21+ for building
  • ATProto DID resolution (PLC directory, did:web)
  • ATProto PDS XRPC API access
  • ECDSA K-256 signature verification

### Caching

Recommended caching strategy (a small in-memory TTL cache is sketched after the list):

  • DID documents: 5 minute TTL
  • Public keys: 5 minute TTL
  • PDS endpoints: 5 minute TTL
  • Signature results: 5 minute TTL
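
A minimal in-memory TTL cache along these lines is enough for a single replica (Redis or an existing cache library is the usual choice when running multiple replicas, see Scaling below):

```go
// TTLCache sketch: a tiny in-memory cache with a fixed TTL, suitable for
// DID documents, public keys, PDS endpoints, and signature results.
package cache

import (
	"sync"
	"time"
)

type entry struct {
	value     any
	expiresAt time.Time
}

type TTLCache struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[string]entry
}

func New(ttl time.Duration) *TTLCache {
	return &TTLCache{ttl: ttl, m: make(map[string]entry)}
}

func (c *TTLCache) Get(key string) (any, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.m[key]
	if !ok || time.Now().After(e.expiresAt) {
		delete(c.m, key)
		return nil, false
	}
	return e.value, true
}

func (c *TTLCache) Set(key string, value any) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = entry{value: value, expiresAt: time.Now().Add(c.ttl)}
}
```

For example, `didDocs := cache.New(5 * time.Minute)` gives the 5-minute behaviour recommended above.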

### Error Handling

Plugins should handle (typed errors for these cases are sketched after the list):

  • DID resolution failures (network, invalid DID)
  • PDS connectivity issues (timeout, 404, 500)
  • Invalid signature format
  • Untrusted DIDs
  • Network timeouts
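
One way to represent these cases is a small set of sentinel errors, so callers, metrics, and policies can tell an untrusted image apart from an infrastructure failure (names here are illustrative):

```go
// Sentinel errors sketch for the failure modes listed above.
package verify

import (
	"errors"
	"fmt"
)

var (
	ErrDIDResolution    = errors.New("DID resolution failed")
	ErrPDSUnavailable   = errors.New("PDS unreachable or returned an error")
	ErrInvalidSignature = errors.New("signature is malformed or does not verify")
	ErrUntrustedDID     = errors.New("signer DID is not in the trust policy")
	ErrTimeout          = errors.New("verification timed out")
)

// wrap attaches context while keeping the sentinel matchable with errors.Is.
func wrap(sentinel error, cause error) error {
	return fmt.Errorf("%w: %v", sentinel, cause)
}
```

Callers can then branch with `errors.Is(err, ErrUntrustedDID)` and decide whether to deny, retry, or fall back to audit mode.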

### Logging

Use structured logging with the following fields (a log/slog example follows the list):

  • `image` - Image being verified
  • `did` - Signer DID
  • `duration` - Operation duration
  • `error` - Error message (if failed)
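
With Go 1.21+ this maps directly onto the standard library's log/slog (any structured logger works the same way):

```go
// Logging sketch using log/slog with the fields listed above.
package verify

import (
	"log/slog"
	"time"
)

func logResult(image, did string, duration time.Duration, err error) {
	if err != nil {
		slog.Error("verification failed",
			"image", image,
			"did", did,
			"duration", duration,
			"error", err)
		return
	}
	slog.Info("verification succeeded",
		"image", image,
		"did", did,
		"duration", duration)
}
```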

### Metrics

Expose Prometheus metrics (registration is sketched after the list):

  • `atcr_verifications_total{result="verified|failed|error"}`
  • `atcr_verification_duration_seconds`
  • `atcr_did_resolutions_total{result="success|failure"}`
  • `atcr_cache_hits_total`
  • `atcr_cache_misses_total`
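
Registering these with the official Prometheus Go client (github.com/prometheus/client_golang) might look like this:

```go
// Metrics sketch; metric names match the list above.
package verify

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	verificationsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "atcr_verifications_total",
		Help: "Verification outcomes by result.",
	}, []string{"result"}) // "verified", "failed", or "error"

	verificationDuration = promauto.NewHistogram(prometheus.HistogramOpts{
		Name:    "atcr_verification_duration_seconds",
		Help:    "End-to-end verification latency.",
		Buckets: prometheus.DefBuckets,
	})

	didResolutionsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "atcr_did_resolutions_total",
		Help: "DID resolution attempts by result.",
	}, []string{"result"}) // "success" or "failure"

	cacheHitsTotal   = promauto.NewCounter(prometheus.CounterOpts{Name: "atcr_cache_hits_total", Help: "Cache hits."})
	cacheMissesTotal = promauto.NewCounter(prometheus.CounterOpts{Name: "atcr_cache_misses_total", Help: "Cache misses."})
)
```

A promhttp handler then serves them on /metrics, and verification code records results with, for example, `verificationsTotal.WithLabelValues("verified").Inc()`.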

## Testing

### Unit Tests

Test individual components:

```bash
# Test DID resolution
go test ./pkg/resolver -v

# Test signature verification
go test ./pkg/crypto -v

# Test trust policy
go test ./pkg/trust -v
```

### Integration Tests

Test with real services:

```bash
# Test against ATCR registry
go test ./integration -tags=integration -v

# Test with test PDS
go test ./integration -tags=integration -pds=https://test.pds.example.com
```

### End-to-End Tests

Test a full deployment:

```bash
# Deploy to test cluster
kubectl apply -f test/fixtures/

# Create pod with signed image (should succeed)
kubectl run test-signed --image=atcr.io/test/signed:latest

# Create pod with unsigned image (should fail)
kubectl run test-unsigned --image=atcr.io/test/unsigned:latest
```

## Performance Considerations

### Latency

Typical verification latency:

  • DID resolution: 50-200ms (cached: <1ms)
  • PDS query: 100-500ms (cached: <1ms)
  • Signature verification: 1-5ms
  • Total: 150-700ms (uncached), <10ms (cached)

### Throughput

Expected throughput (single instance):

  • Without caching: ~5-10 verifications/second
  • With caching: ~100-500 verifications/second

### Scaling

For high traffic:

  • Deploy multiple replicas (stateless)
  • Use Redis for distributed caching
  • Implement rate limiting
  • Monitor P95/P99 latency

## Security Considerations

### Network Policies

Restrict access to:

  • DID resolution (PLC directory only)
  • PDS XRPC endpoints
  • Internal services only

### Denial of Service

Protect against:

  • High verification request rate
  • Slow DID resolution
  • Malicious images with many signatures
  • Large signature artifacts

### Trust Model

Verification rests on these trust assumptions:

  • DID resolution is accurate (PLC directory)
  • PDS serves correct records
  • Private keys are secure
  • Trust policy is maintained

## Troubleshooting

### Plugin Not Loading

```bash
# Check plugin exists
ls -la /path/to/plugin

# Make the plugin executable
chmod +x /path/to/plugin

# Check plugin logs
tail -f /var/log/atcr-verifier.log
```

### Verification Failing

```bash
# Test DID resolution
curl https://plc.directory/did:plc:alice123

# Test PDS connectivity
curl https://bsky.social/xrpc/com.atproto.server.describeServer

# Check that a signature artifact exists
oras discover atcr.io/alice/myapp:latest \
  --artifact-type application/vnd.atproto.signature.v1+json
```

### Policy Not Enforcing

```bash
# Check policy is loaded
kubectl get configmap atcr-trust-policy -n gatekeeper-system

# Check constraint is active
kubectl get constraint atcr-signatures-required -o yaml

# Check logs
kubectl logs -n gatekeeper-system deployment/ratify
```

## See Also

### Documentation

### Examples

### External Resources

## Support

For questions or issues:

## Contributing

Contributions welcome! Please:

  1. Follow existing code structure
  2. Add tests for new features
  3. Update documentation
  4. Submit pull request

## License

See LICENSE file in repository root.