# Service Creation Guide
This guide explains how to create services that work with tinsnip's NFS-backed persistence architecture.
## Overview
tinsnip provides standardized infrastructure for service deployment with:
- NFS-backed persistence: Data survives machine rebuilds
- XDG integration: Data accessible through standard Linux paths
- UID isolation: Each service runs with dedicated user/permissions
- Port allocation: Automatic port assignment based on service UID
Services can be created using two patterns depending on their origin and requirements.
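The UID-based port allocation mentioned above can be sketched in plain shell. This is an illustrative sketch, assuming each service's ports are `TIN_SERVICE_UID + offset` (the sample UID 11100 matches the example `.env` later in this guide):

```shell
# Sketch of UID-based port allocation (assumption: TIN_PORT_n = TIN_SERVICE_UID + n)
TIN_SERVICE_UID=11100

TIN_PORT_0=$((TIN_SERVICE_UID + 0))
TIN_PORT_1=$((TIN_SERVICE_UID + 1))
TIN_PORT_2=$((TIN_SERVICE_UID + 2))

echo "$TIN_PORT_0 $TIN_PORT_1 $TIN_PORT_2"  # 11100 11101 11102
```

Because the port block is derived from the UID, two services with distinct UIDs can never collide on ports.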
## CRITICAL: docker-compose.yml Requirements
These requirements are mandatory for all services. Violations will cause deployment failures.
### 1. env_file Directive (REQUIRED)

All services MUST use the `env_file` directive to load environment variables:

```yaml
services:
  myservice:
    env_file:
      - ../../.machine/machine.env  # Machine infrastructure variables
      - .env                        # Service-specific variables
    # ... rest of config
```
Why this matters:

- Docker Compose needs exported variables for YAML interpolation (`${TIN_PORT_0}`)
- Containers need environment variables for service configuration
- The Docker daemon must not inherit NFS-backed XDG paths (this causes failures)
- `env_file` ensures containers get a clean environment from files

What each file contains:

- `machine.env`: TIN_MACHINE_NAME, TIN_SERVICE_UID, DOCKER_HOST, XDG paths
- `.env`: TIN_CATALOG_SERVICE, TIN_PORT_0, TIN_PORT_1, etc.
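A missing variable in either file surfaces as an interpolation failure at `docker compose up` time. A quick pre-flight check can catch that earlier. This is a hypothetical sketch, not part of tinsnip; the required-variable list and file contents are illustrative stand-ins:

```shell
# Hypothetical pre-flight check: fail fast if a required variable is
# missing from an env file before compose tries to interpolate it.
envfile=$(mktemp)                                   # stand-in for .env
printf 'TIN_CATALOG_SERVICE=demo\nTIN_PORT_0=11100\n' > "$envfile"

missing=0
for var in TIN_PORT_0 TIN_PORT_1; do
  if ! grep -q "^${var}=" "$envfile"; then
    echo "missing: $var"
    missing=1
  fi
done

if [ "$missing" -eq 0 ]; then
  echo "env ok"
else
  echo "env incomplete"
fi
```

Here the stand-in file lacks `TIN_PORT_1`, so the check reports it before deployment would fail.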
### 2. user: Directive (REQUIRED)

All services MUST specify the `user` directive with TIN_SERVICE_UID:

```yaml
services:
  myservice:
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"
    # ... rest of config
```
Why this matters:
- UID isolation: Each service runs as its dedicated user
- NFS permissions: Container UID must match NFS export UID
- Security: Rootless containers, no privilege escalation
### 3. working_dir Constraints (IMPORTANT)

When using the `user:` directive, the `working_dir` must be writable by non-root users.

FAILS - root-owned directory:

```yaml
services:
  myservice:
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"
    working_dir: /app  # Owned by root, non-root user can't write
```
WORKS - Options:

Option A: Use a world-writable directory (simplest for test services)

```yaml
services:
  myservice:
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"
    working_dir: /tmp  # World-writable, always works
```

Option B: Make directories writable in the Dockerfile (for production)

```dockerfile
RUN mkdir -p /app/data /app/config /app/state && \
    chmod 777 /app/data /app/config /app/state
```

```yaml
services:
  myservice:
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"
    working_dir: /app  # Now writable via chmod
```

Option C: Use volume mounts for writable paths

```yaml
services:
  myservice:
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"
    working_dir: /app  # Base is read-only
    volumes:
      - ${XDG_DATA_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/data  # Writable via NFS
```
Common symptoms of working_dir issues:

```
Permission denied: can't create 'file.txt'
EACCES: permission denied, open '/app/output.log'
```

Why this happens:

- Docker images are built as root, so files are owned by root:root
- The `user:` directive makes the container run as non-root (e.g., UID 10720)
- Non-root users cannot write to root-owned directories
- Solution: Use writable locations or fix ownership in the Dockerfile
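The underlying check is simple enough to express outside a container. A minimal sketch of what "writable working_dir" means in practice (the directory names are stand-ins):

```shell
# Can a process write to its working_dir? Same test a service process
# implicitly performs when it tries to create its first file.
check_writable() {
  dir=$1
  if [ -d "$dir" ] && [ -w "$dir" ]; then
    echo "$dir: writable"
  else
    echo "$dir: NOT writable"
  fi
}

check_writable /tmp                   # world-writable, always works
check_writable /nonexistent-app-dir   # missing (or root-owned) dir fails the check
```

If the second case matches your service's `working_dir`, apply Option A, B, or C above.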
### 4. Complete Example with All Requirements

```yaml
services:
  myservice:
    image: myorg/myservice:latest
    container_name: ${TIN_SERVICE_NAME}-${TIN_SERVICE_ENVIRONMENT}

    # REQUIRED: Load environment from files
    env_file:
      - ../../.machine/machine.env
      - .env

    # REQUIRED: Run as tinsnip service UID
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"

    # IMPORTANT: Use writable working_dir or volume mounts
    working_dir: /tmp  # Or /app if fixed in Dockerfile

    # REQUIRED: Use XDG-backed bind mounts, not named volumes
    volumes:
      - ${XDG_DATA_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/data
      - ${XDG_CONFIG_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/config
      - ${XDG_STATE_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/state

    # REQUIRED: Use TIN_PORT_* variables, not hardcoded ports
    ports:
      - "${TIN_PORT_0}:8000"
      - "${TIN_PORT_1}:8001"

    # Optional: Service-specific environment overrides
    environment:
      - LOG_LEVEL=info

    restart: unless-stopped
```
## Testing Your Service Configuration

Before deployment, verify:

- `env_file` directive present with correct paths
- `user:` directive uses `${TIN_SERVICE_UID}:${TIN_SERVICE_UID}`
- `working_dir` is writable or volumes provide writable paths
- Ports use `${TIN_PORT_*}` variables
- Volumes use XDG environment variables, not named volumes
- Test container starts without permission errors
- Test service can write to expected paths

Reference: See service/test-http/docker-compose.yml for a complete working example.
## The tinsnip Target Pattern
tinsnip establishes a standard environment that services can leverage:
### Infrastructure Provided

NFS Mount Structure:

```
/mnt/tinsnip/              # NFS mount point
├── data/                  # Persistent application data
├── config/                # Service configuration files
├── state/                 # Service state (logs, databases, etc.)
└── service/               # Docker compose location
    └── myservice/
        ├── docker-compose.yml
        └── setup.sh (optional)
```
Service Environment File (.env): Generated by tinsnip setup with deployment-specific paths:

```bash
# Tinsnip deployment - direct NFS mounts
XDG_DATA_HOME=/mnt/tinsnip/data
XDG_CONFIG_HOME=/mnt/tinsnip/config
XDG_STATE_HOME=/mnt/tinsnip/state

# Service metadata
TIN_SERVICE_NAME=myservice
TIN_SERVICE_ENVIRONMENT=prod
TIN_SERVICE_UID=11100
TIN_PORT_0=11100
TIN_PORT_1=11101
TIN_PORT_2=11102
```
### Environment Variable Mapping

| Environment Variable | Value (set in .env) | Container Path |
|---|---|---|
| `TIN_SERVICE_UID` | `11100` | Used for `user` |
| `TIN_SERVICE_NAME` | `myservice` | - |
| `TIN_SERVICE_ENVIRONMENT` | `prod` | - |
| `TIN_SHEET` | `dynamicalsystem` | - |
| `TIN_PORT_0` | `11100` | `11100` |
| `TIN_PORT_1` | `11101` | `11101` |
| `TIN_PORT_2` | `11102` | `11102` |
| `XDG_DATA_HOME` | `/mnt/tinsnip/data` | `/data` |
| `XDG_CONFIG_HOME` | `/mnt/tinsnip/config` | `/config` |
| `XDG_STATE_HOME` | `/mnt/tinsnip/state` | `/state` |
### Path Resolution

| Host Path | Container Path | Description |
|---|---|---|
| `${XDG_DATA_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}` | `/data` | Application data |
| `${XDG_CONFIG_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}` | `/config` | Configuration files |
| `${XDG_STATE_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}` | `/state` | State/logs |
Example Resolution:

```
# For myservice-prod in the dynamicalsystem sheet
${XDG_DATA_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}
  ↓ (from .env)
/mnt/tinsnip/data/dynamicalsystem/myservice
  ↓ (NFS mount)
nas-server:/volume1/topsheet/myservice/prod/data
```
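The first step of that resolution is ordinary shell variable expansion, so it can be checked directly with the sample values from the `.env` above:

```shell
# Reproduce the path resolution with the sample .env values
XDG_DATA_HOME=/mnt/tinsnip/data
TIN_SHEET=dynamicalsystem
TIN_SERVICE_NAME=myservice

host_path="${XDG_DATA_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}"
echo "$host_path"  # /mnt/tinsnip/data/dynamicalsystem/myservice
```

The final hop (from that path to the NAS export) is handled by the NFS mount itself, not by the compose file.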
### Volume Requirements

CRITICAL: Services MUST use bind mounts to XDG-integrated NFS directories, not Docker named volumes.

CORRECT (XDG + NFS-backed persistence):

```yaml
volumes:
  - ${XDG_DATA_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/data
  - ${XDG_CONFIG_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/config
  - ${XDG_STATE_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/state
```

INCORRECT (local storage - data lost on rebuild):

```yaml
services:
  myservice:
    volumes:
      - myservice_data:/app/data  # Stored locally, lost on rebuild

volumes:
  myservice_data:  # Breaks continuous delivery
```
### Why This Matters
tinsnip's Value Proposition: Continuous delivery with persistent data that survives machine rebuilds.
- With XDG Bind Mounts: Data stored on NFS → Survives machine rebuilds → True continuous delivery
- With Named Volumes: Data stored locally → Lost on rebuild → Breaks continuous delivery
## Pattern 1: Home-grown Services
Use Case: Services built specifically for tinsnip that can follow conventions natively.
### Design Principles
- Built to expect tinsnip's XDG + NFS directory structure
- Uses environment variables for all configuration
- Designed for the target UID and port scheme
- No adaptation layer needed
### Example: Custom Web Service (Gazette)

```yaml
services:
  gazette:
    image: myorg/gazette:latest
    ports:
      - "${TIN_PORT_0}:3000"
    volumes:
      - ${XDG_DATA_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/documents
      - ${XDG_CONFIG_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/config
      - ${XDG_STATE_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/logs
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"
    environment:
      # Service-specific environment variables
      - GAZETTE_DOCUMENT_ROOT=/app/documents
      - GAZETTE_CONFIG_FILE=/app/config/gazette.yaml
      - GAZETTE_LOG_DIR=/app/logs
      - GAZETTE_PORT=3000
      - GAZETTE_BASE_URL=http://localhost:${TIN_PORT_0}
      - GAZETTE_UID=${TIN_SERVICE_UID}
      - GAZETTE_SHEET=${TIN_SHEET}
    restart: unless-stopped
    networks:
      - tinsnip_network
```
### Home-grown Service Benefits
- Clean, simple docker-compose.yml
- No path translations or adaptations needed
- Full leverage of tinsnip environment
- Predictable behavior across deployments
- Direct XDG compliance
## Pattern 2: Third-party Adaptation
Use Case: Existing external containers that need to be wrapped to work with tinsnip's conventions.
### Adaptation Strategies
- Path Mapping: Map container's expected paths to tinsnip XDG structure
- Port Injection: Override container's ports with tinsnip allocation
- User Override: Force container to run as tinsnip service UID
- Config Adaptation: Transform tinsnip config to container's expected format
- Environment Translation: Convert tinsnip variables to container's expectations
### Example: LLDAP (Identity Service)

LLDAP is an external container with its own conventions that we adapt:

```yaml
# Third-party container adaptation
services:
  lldap:
    image: lldap/lldap:latest-alpine-rootless
    container_name: ${TIN_SERVICE_NAME:-lldap}-${TIN_SERVICE_ENVIRONMENT:-prod}
    ports:
      # Adapt: LLDAP's default ports → tinsnip port allocation
      - "${TIN_PORT_0}:3890"   # LDAP protocol
      - "${TIN_PORT_1}:17170"  # Web UI
    volumes:
      # Adapt: LLDAP expects /data → map to tinsnip XDG structure
      - ${XDG_DATA_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"
    environment:
      # Adapt: Translate tinsnip config to LLDAP's expected variables
      - LLDAP_JWT_SECRET=changeme-jwt-secret-32-chars-min
      - LLDAP_KEY_SEED=changeme-key-seed-32-chars-minimum
      - LLDAP_BASE_DN=dc=home,dc=local
      - LLDAP_LDAP_USER_DN=admin
      - LLDAP_LDAP_USER_PASS=changeme-admin-password
      - LLDAP_DATABASE_URL=sqlite:///data/users.db
    restart: unless-stopped
    networks:
      - tinsnip_network

networks:
  tinsnip_network:
    external: true
```
## Build-time UID Awareness
Some services need to know the runtime UID at build time to set correct file ownership or create users. This is common for:
- Python services with virtual environments
- Services that install packages or create files during build
- Services that need specific file ownership for security
### Passing Build Arguments

Docker Compose can pass environment variables as build arguments:

```yaml
services:
  myservice:
    build:
      context: .
      args:
        TIN_SERVICE_UID: ${TIN_SERVICE_UID}
    container_name: ${TIN_SERVICE_NAME}-${TIN_SERVICE_ENVIRONMENT}
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"
    # ... rest of config
```
### Using Build Arguments in Dockerfile

The Dockerfile receives the argument and can use it during the build:

```dockerfile
FROM python:3.13-slim

# Receive UID from build args
ARG TIN_SERVICE_UID=10700

# Install dependencies
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

# Create service user with the specified UID
RUN groupadd -g ${TIN_SERVICE_UID} appuser && \
    useradd -m -u ${TIN_SERVICE_UID} -g ${TIN_SERVICE_UID} -s /bin/bash appuser

WORKDIR /app

# Install packages/dependencies as root
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy application code
COPY src/ ./src/

# Fix ownership for service user
RUN chown -R appuser:appuser /app

# Switch to service user
USER appuser

CMD ["python", "-m", "myapp"]
```
### Benefits of Build-time UID Awareness
- Correct Ownership: Files created during build are owned by the runtime user
- No Entrypoint Scripts: No need for privilege-switching entrypoints
- Security: Container can start as non-root from the beginning
- Simplicity: Direct execution without wrapper scripts
### When to Use This Pattern
Use build arguments when:
- Service installs packages or creates files at build time
- File ownership must match runtime UID for write access
- You want to avoid entrypoint scripts that switch users
- Building language-specific environments (Python venv, Node modules, etc.)
Skip build arguments when:
- Using pre-built images from registries
- All files are read-only at runtime
- Service handles UID internally
## Service Deployment

### Deployment Process

1. Prepare infrastructure:

   ```bash
   tin machine create myservice prod nas-server
   ```

2. Deploy the service:

   ```bash
   # Switch to service user
   sudo -u myservice-prod -i

   # Copy from service catalog or create locally
   cp -r ~/.local/opt/dynamicalsystem.service/myservice /mnt/myservice-prod/service/
   cd /mnt/myservice-prod/service/myservice

   # Run setup if present
   [[ -f setup.sh ]] && ./setup.sh

   # Deploy
   docker compose up -d
   ```

3. Verify the deployment:

   ```bash
   docker compose ps
   docker compose logs -f
   ```
### Service Management

```bash
# Status check
docker compose ps

# View logs
docker compose logs -f [service-name]

# Restart service
docker compose restart

# Update service
docker compose pull
docker compose up -d

# Stop service
docker compose down
```
### Data Access

Direct NFS mount access:

```bash
# Access service data directly on NFS mounts
ls /mnt/tinsnip/data/dynamicalsystem/myservice    # Application data
ls /mnt/tinsnip/config/dynamicalsystem/myservice  # Configuration
ls /mnt/tinsnip/state/dynamicalsystem/myservice   # State/logs
```

Note: With the direct mount approach, XDG paths point straight at NFS mounts via the .env file, eliminating the need for symlinks.
## Validation Checklist

Before deploying, verify your service:

### Volume Configuration

- No named volumes in the `volumes:` section
- All volumes use XDG environment variables
- Volumes map to appropriate container paths
- XDG paths resolve to NFS-backed directories

### User and Permissions

- Service specifies `user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"`
- Container processes run as non-root
- File permissions work with the tinsnip UID

### Port Configuration

- Ports use environment variables (`${TIN_PORT_0}`, `${TIN_PORT_1}`, etc.)
- No hardcoded port numbers
- Port allocation fits within the UID range (UID to UID+9)

### Network Configuration

- Service connects to `tinsnip_network`
- Network is marked as `external: true`
- Inter-service communication uses service names

### Environment Variables

- Uses tinsnip-provided variables where appropriate
- No hardcoded values that should be dynamic
- Secrets loaded from files, not environment variables

### XDG Integration

- Volumes reference XDG environment variables
- Paths follow the XDG Base Directory specification
- Data accessible through both XDG and direct paths
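Several of these checks are mechanical enough to lint with grep. The following is a rough sketch, not a tinsnip tool; the regexes are heuristics and the sample compose file is inlined purely for illustration:

```shell
# Hypothetical compose lint: flag hardcoded host ports and named volumes.
compose=$(mktemp)
cat > "$compose" <<'EOF'
services:
  myservice:
    ports:
      - "8080:8000"
    volumes:
      - myservice_data:/app/data
EOF

issues=0
# Host ports should be ${TIN_PORT_*} variables, not numeric literals
if grep -Eq '^[[:space:]]*-[[:space:]]*"[0-9]+:' "$compose"; then
  echo "hardcoded host port found"
  issues=$((issues + 1))
fi
# Volume sources should be ${XDG_*} paths, not bare named volumes
if grep -Eq '^[[:space:]]*-[[:space:]]*[A-Za-z0-9_]+:/' "$compose"; then
  echo "named volume found"
  issues=$((issues + 1))
fi
echo "issues: $issues"
```

The deliberately bad sample trips both rules; a compliant file using `${TIN_PORT_0}` and `${XDG_DATA_HOME}` sources would report zero issues.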
## Troubleshooting

### Data Not Persisting

Problem: Data lost after `docker compose down`
Solution: Check for named volumes; ensure XDG bind mounts

### Permission Denied

Problem: Container can't write to mounted directories
Solution: Verify the `user:` directive and NFS mount permissions
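One way to narrow this down is to compare the mounted directory's owner UID against the service UID. A sketch of that diagnostic, where the temporary directory stands in for the NFS-backed data path and 11100 is the sample UID from earlier (assumes GNU `stat`):

```shell
# Hypothetical diagnostic: does the data dir's owner match the service UID?
TIN_SERVICE_UID=11100
data_dir=$(mktemp -d)  # stand-in for ${XDG_DATA_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}

owner=$(stat -c %u "$data_dir")
if [ "$owner" = "$TIN_SERVICE_UID" ]; then
  echo "ownership ok"
else
  echo "owner mismatch: $owner != $TIN_SERVICE_UID (fix NFS export or chown)"
fi
```

A mismatch here explains the permission errors even when the `user:` directive is correct: the container UID is right, but the export's ownership is not.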
### XDG Paths Not Working

Problem: XDG symlinks broken or missing
Solution: Re-run machine setup to recreate XDG symlinks

### Port Conflicts

Problem: Service won't start, port already in use
Solution: Check environment variable usage, verify UID calculation

### Config Not Loading

Problem: Third-party service ignoring configuration
Solution: Verify config file paths match container expectations