# Deployment Strategy for dynamicalsystem Services

## Overview

This document defines the deployment strategy for dynamicalsystem services, covering:

- **Service isolation**: Users, UIDs, and process separation
- **Storage strategy**: NFS-backed persistence with XDG integration
- **Network strategy**: Port allocation and Docker networking
- **Deployment patterns**: Machine setup and service orchestration

Each service runs under a dedicated user with specific UID and port allocations, ensuring complete isolation between services and environments.
## SMEP Numbering Scheme

**SMEP** is a 5-digit numbering scheme used for **both UIDs and ports** in tinsnip:

**S-M-E-P**, where the fields represent:

- **S** (Sheet): `1-9` = Sheet number (one digit, dynamically calculated from the sheet name)
- **M** (Machine): `00-99` = Machine number within the sheet (two digits)
- **E** (Environment): `0-9` = Environment number (expanded from 2 to 10 environments)
- **P** (Port Index): `0-9` = Port index within the machine's allocation

### SMEP Applied to UIDs vs Ports

**IMPORTANT**: UIDs and ports are **different things** that share the same numbering scheme:

- **Machine UID** (system user ID): Always P=0
  - Example: `50300` = system user for homelab-prod
  - Created once per machine-environment

- **Machine Ports** (TCP/UDP port numbers): P=0 through P=9
  - Example: `50300-50309` = 10 ports allocated to homelab-prod
  - Used by catalog services running on that machine

The SMEP **base number** (e.g., `5030` for homelab-prod) determines:
- The machine UID: `50300` (base + P=0)
- The port range: `50300-50309` (base + P=0-9)
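
The field layout can be sanity-checked with plain shell arithmetic. This helper is illustrative only, not part of tinsnip:

```bash
# Illustrative helper: split a 5-digit SMEP number into its S / M / E / P fields
smep_decode() {
    local smep="$1"
    printf 'S=%d M=%02d E=%d P=%d\n' \
        $((smep / 10000)) $((smep / 100 % 100)) $((smep / 10 % 10)) $((smep % 10))
}

smep_decode 50300   # the homelab-prod machine UID -> S=5 M=03 E=0 P=0
```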

### SMEP Implementation

The UID is calculated during `machine/setup.sh` based on the inputs:

```bash
# Example: ./machine/setup.sh gazette prod DS412plus
TIN_SERVICE_UID=$(calculate_machine_uid "gazette" "prod")
# Result: 10100 (S=1, M=01, E=0, P=0)
```
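
A minimal sketch of the arithmetic behind `calculate_machine_uid`; the real implementation consults `get_sheet_number` and the machine registry, so the hard-coded lookup tables below are assumptions for illustration:

```bash
# Sketch only: the case tables stand in for the registry and environment lookup
calculate_machine_uid() {
    local machine="$1" env="$2"
    local sheet_num=1   # dynamicalsystem -> S=1 per this document
    local machine_num env_num
    case "$machine" in
        station) machine_num=0 ;;
        gazette) machine_num=1 ;;
        lldap)   machine_num=2 ;;
        gateway) machine_num=3 ;;
        *) echo "unknown machine: $machine" >&2; return 1 ;;
    esac
    case "$env" in
        prod) env_num=0 ;;
        test) env_num=1 ;;
        dev)  env_num=2 ;;
        *) echo "unknown environment: $env" >&2; return 1 ;;
    esac
    # S*10000 + MM*100 + E*10 + P, with P=0 for the machine UID
    echo $((sheet_num * 10000 + machine_num * 100 + env_num * 10))
}

calculate_machine_uid gazette prod   # -> 10100
```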

**Machine Number Mapping (M digits):**
- `00` = station (sheet infrastructure: registry, shared config)
- `01` = gazette (first machine)
- `02` = lldap (identity management)
- `03` = gateway (gateway machine)
- `04-99` = Additional machines (auto-assigned or manually configured)

**Port Allocation:**
The machine's SMEP base number determines both its UID and port range:

```bash
# gazette-prod machine
TIN_SERVICE_UID=10100   # System user ID (P=0)

# Port range for this machine
TIN_PORT_0=10100        # P=0, main service port
TIN_PORT_1=10101        # P=1, admin/management port
TIN_PORT_2=10102        # P=2, API endpoint
# etc... up to TIN_PORT_9=10109 for 10 total ports per machine
```

### Sheet Number Calculation

The sheet number (S) is automatically calculated from the sheet name using this deterministic hash function:

```bash
# Implementation from machine/scripts/lib.sh:get_sheet_number()
get_sheet_number() {
    local sheet="${1:-dynamicalsystem}"

    # Hash sheet to 1-9 range using MD5
    echo "$sheet" | md5sum | cut -c1-1 | {
        read hex
        printf "%d\n" "0x$hex" | awk '{n=($1 % 9) + 1; print n}'
    }
}
```

This ensures:
- Same sheet always gets the same number across all deployments
- No central registry needed for sheet coordination
- Supports up to 9 different sheets (S=1 through S=9)

Examples:
- `topsheet` → S=5 (UID starts with 5xxxx)
- `mycompany` → S=7 (UID starts with 7xxxx)
- `acmecorp` → S=3 (UID starts with 3xxxx)

### S-M-E-P Examples

**Default sheet (dynamicalsystem, S=1):**
- `10000` = S:1, M:00, E:0, P:0 → dynamicalsystem.station.prod
- `10010` = S:1, M:00, E:1, P:0 → dynamicalsystem.station.test
- `10100` = S:1, M:01, E:0, P:0 → dynamicalsystem.gazette.prod
- `10110` = S:1, M:01, E:1, P:0 → dynamicalsystem.gazette.test
- `10120` = S:1, M:01, E:2, P:0 → dynamicalsystem.gazette.dev
- `10200` = S:1, M:02, E:0, P:0 → dynamicalsystem.lldap.prod
- `10210` = S:1, M:02, E:1, P:0 → dynamicalsystem.lldap.test

**Custom sheet (mycompany, S=7):**
- `70000` = S:7, M:00, E:0, P:0 → mycompany.station.prod
- `70010` = S:7, M:00, E:1, P:0 → mycompany.station.test
- `70100` = S:7, M:01, E:0, P:0 → mycompany.gazette.prod
- `70110` = S:7, M:01, E:1, P:0 → mycompany.gazette.test

**Port allocation example (lldap-prod, UID=10200):**
- `10200` = LDAP protocol port
- `10201` = Web admin interface
- `10202` = API endpoint
- `10203` = Metrics/monitoring

**Environment mapping example (gazette service in sheet 1):**
- `10100` = gazette-prod (E:0)
- `10110` = gazette-test (E:1)
- `10120` = gazette-dev (E:2)
- `10130` = gazette-staging (E:3)
- `10140` = gazette-demo (E:4)
- `10150` = gazette-qa (E:5)
- `10160` = gazette-uat (E:6)
- `10170` = gazette-preview (E:7)
- `10180` = gazette-canary (E:8)
- `10190` = gazette-local (E:9)

## Sheet Station (M=00)

The sheet station (M=00) provides infrastructure services for the sheet:

### Machine Registry
Located at `/volume1/{sheet}/station/prod/machine-registry`, this file maps machine names to machine numbers, for example:
```
gazette=01
lldap=02
gateway=03
prometheus=04
```

### Directory Structure
```
/volume1/{sheet}/station/
├── prod/                    # UID: S0000
│   ├── machine-registry     # Machine name to number mapping
│   ├── port-allocations     # Track allocated ports (optional)
│   └── config/              # Shared sheet configuration
└── test/                    # UID: S0010
    └── machine-registry     # Test environment registry
```

### Access Permissions
- The station exports are readable by all machine users in the sheet
- Only administrators can write to the registry
- Machines consult the registry during deployment to determine their machine number
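
That lookup can be a simple `grep` over the registry file. A sketch, with the helper name illustrative and `/tmp/machine-registry` standing in for the real NFS path:

```bash
# Sketch: look up a machine's number in the key=value registry format above
get_machine_number() {
    local machine="$1" registry="$2"
    local num
    num=$(grep "^${machine}=" "$registry" | cut -d= -f2)
    if [ -z "$num" ]; then
        echo "machine '$machine' not found in $registry" >&2
        return 1
    fi
    echo "$num"
}

# Throwaway copy of the registry format for demonstration
printf 'gazette=01\nlldap=02\n' > /tmp/machine-registry
get_machine_number gazette /tmp/machine-registry   # -> 01
```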

## Port Allocation Strategy

Ports are automatically allocated based on the machine's SMEP number to ensure no conflicts when running multiple machines and environments on the same host.

### Port Calculation

The P field in S-M-E-P provides port indexing within a machine. Each machine allocates 10 ports (P=0-9):

```bash
# Example: lldap-test machine
TIN_SERVICE_UID=10210           # System user ID (S:1, M:02, E:1, P:0)
BASE_PORT=$TIN_SERVICE_UID

# Increment P digit for additional ports, for example:
TIN_PORT_0=$BASE_PORT           # 10210 (P=0) - LDAP protocol
TIN_PORT_1=$((BASE_PORT + 1))   # 10211 (P=1) - Web interface
TIN_PORT_2=$((BASE_PORT + 2))   # 10212 (P=2) - REST API
TIN_PORT_3=$((BASE_PORT + 3))   # 10213 (P=3) - Prometheus metrics
# ... up to 10219 (P=9) for 10 total ports per service
```

**Implementation in machine/scripts/lib.sh:**
```bash
calculate_service_ports() {
    local service_uid="$1"
    local port_count="${2:-3}"

    local base_port=$service_uid
    for ((i=0; i<port_count; i++)); do
        echo $((base_port + i))
    done
}
```
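
Because a machine owns only the ten ports P=0-9, a caller could guard against walking past that window. A hypothetical wrapper, not part of lib.sh:

```bash
# Hypothetical guard around the same arithmetic: refuse more than 10 ports,
# because ports beyond P=9 belong to the next machine/environment.
allocate_ports_checked() {
    local service_uid="$1"
    local port_count="${2:-3}"

    if (( port_count < 1 || port_count > 10 )); then
        echo "error: port_count must be 1-10 (a machine owns P=0-9)" >&2
        return 1
    fi

    local i
    for ((i = 0; i < port_count; i++)); do
        echo $((service_uid + i))
    done
}

allocate_ports_checked 10210 3   # -> 10210, 10211, 10212 (one per line)
```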

### Port Allocation Table

| Machine | Environment | SMEP UID | Primary Port | Secondary Port | API Port |
|---------|-------------|----------|--------------|----------------|----------|
| station | prod        | 10000    | 10000        | 10001          | 10002    |
| station | test        | 10010    | 10010        | 10011          | 10012    |
| gazette | prod        | 10100    | 10100        | 10101          | 10102    |
| gazette | test        | 10110    | 10110        | 10111          | 10112    |
| lldap   | prod        | 10200    | 10200        | 10201          | 10202    |
| lldap   | test        | 10210    | 10210        | 10211          | 10212    |

### Service-Specific Port Mapping

Services can map tinsnip ports to more descriptive names:

```bash
# LLDAP example - mapping tinsnip ports to service-specific names
LDAP_PORT=${TIN_PORT_0}     # 10210 - LDAP protocol
WEB_UI_PORT=${TIN_PORT_1}   # 10211 - Web administration

# Or directly use tinsnip port variables in docker-compose.yml
# Application example (gazette-prod)
# TIN_PORT_0=10100 # Web server
# TIN_PORT_1=10101 # REST API
# TIN_PORT_2=10102 # Prometheus metrics
```

### Handling Opinionated Clients

Some clients are hardcoded to expect services on standard ports (e.g., the Synology LDAP client expects port 389, and many applications expect HTTP on port 80). When you encounter such clients, you can manually configure port forwarding as needed:

```bash
# Example: Forward standard LDAP port to tinsnip production
sudo iptables -t nat -A PREROUTING -p tcp --dport 389 -j REDIRECT --to-port 10200

# Example: Use an nginx reverse proxy to map HTTP port 80 to 10100
```

**Important**:
- This configuration is **manual and optional** - most clients can be configured to use the UID-based ports directly
- Only one service per standard port per host - choose which environment gets the standard port
- Configure port forwarding only when you encounter clients that cannot be configured to use custom ports
- Document any port forwarding rules for future reference
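
For the nginx route, a minimal reverse-proxy server block along these lines maps port 80 onto the tinsnip port. This is a sketch: the file path, server name, and upstream address are illustrative and should be adapted to your nginx layout:

```nginx
# Sketch: /etc/nginx/conf.d/gazette-prod.conf
server {
    listen 80;
    server_name gazette.example.lan;

    location / {
        proxy_pass http://127.0.0.1:10100;   # gazette-prod TIN_PORT_0
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```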

---

# CRITICAL: Environment Variable Loading for Docker Compose

**READ THIS BEFORE MODIFYING DEPLOYMENT SCRIPTS**

This section documents a critical, debugged, and stabilized pattern that **must not be modified** without full understanding of the constraints. Violations will cause circular debugging and service deployment failures.

## The Core Problem

Docker Compose requires environment variables for YAML interpolation (e.g., `${TIN_PORT_0}` in docker-compose.yml), but the rootless Docker daemon **breaks** when it inherits NFS-backed XDG environment variables from the parent shell.

## The Solution

Service deployment uses a specific bash command pattern that:
1. Exports variables for Docker Compose YAML interpolation
2. Unsets problematic NFS-backed paths before starting Docker
3. Ensures containers receive variables via the `env_file` directive

**Implementation in `cmd/service/deploy.sh` (lines 159-162):**

```bash
# Source env files with auto-export for Docker Compose YAML interpolation,
# but unset XDG vars that break rootless Docker.
# Containers get all vars via the env_file directive in docker-compose.yml.
sudo -u "$service_user" bash -c "set -a && source /mnt/$service_env/.machine/machine.env && source /mnt/$service_env/service/$catalog_service/.env && set +a && unset XDG_DATA_HOME XDG_CONFIG_HOME XDG_STATE_HOME && cd /mnt/$service_env/service/$catalog_service && docker compose up -d"
```

## Why Each Component Matters

### 1. `set -a` (allexport mode)

**Required** before sourcing environment files.

- `source` alone loads variables into the shell but **does NOT export them**
- Docker Compose runs as a subprocess and needs **exported** variables
- Without `set -a`: variables are loaded but invisible to subprocesses
- Result: `${TIN_PORT_0}` becomes an empty string in docker-compose.yml
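
The difference is easy to demonstrate in isolation; here a throwaway `/tmp/demo.env` stands in for machine.env:

```bash
# Demo: sourced-but-unexported variables are invisible to child processes
printf 'TIN_PORT_0=10100\n' > /tmp/demo.env

source /tmp/demo.env                               # loaded, NOT exported
without=$(bash -c 'echo "${TIN_PORT_0:-<empty>}"')
echo "plain source:     $without"                  # -> <empty>

set -a
source /tmp/demo.env                               # loaded AND exported
set +a
with=$(bash -c 'echo "${TIN_PORT_0:-<empty>}"')
echo "set -a && source: $with"                     # -> 10100
```

The single-quoted child command is expanded by the child shell, which only sees exported variables, exactly like `docker compose` reading `${TIN_PORT_0}` from YAML.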

### 2. Source both environment files

```bash
source /mnt/$service_env/.machine/machine.env && source /mnt/$service_env/service/$catalog_service/.env
```

- **machine.env**: Infrastructure variables (TIN_MACHINE_NAME, TIN_SERVICE_UID, DOCKER_HOST, XDG paths)
- **service/.env**: Service-specific variables (TIN_CATALOG_SERVICE, TIN_PORT_0, TIN_PORT_1, etc.)
- Both files are required for complete YAML interpolation

### 3. `set +a` (disable allexport)

Turns off auto-export after sourcing the files. Good practice to prevent unintended exports.

### 4. `unset XDG_DATA_HOME XDG_CONFIG_HOME XDG_STATE_HOME` (CRITICAL)

**This is the most important and fragile part:**

- These variables point to NFS-backed paths (e.g., `/mnt/service-env/data`)
- The rootless Docker daemon **inherits its environment from the parent shell**
- If the Docker daemon inherits NFS XDG paths, it tries to use NFS for internal storage
- NFS + Docker internal storage = permission failures, daemon crashes
- **Must unset AFTER sourcing** so the variables are still in the environment for Docker Compose
- Containers will still receive these via the `env_file` directive

### 5. `env_file` directive (docker-compose.yml)

Containers receive their environment directly from files, not from the parent process:

```yaml
services:
  myservice:
    env_file:
      - ../../.machine/machine.env   # Infrastructure variables
      - .env                         # Service-specific variables
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"
```

- This is why we can safely unset the XDG vars for the Docker daemon
- Containers load vars from the files independently
- Containers **can** safely use NFS XDG paths (they're inside containers, not the daemon)

## Three Environment Contexts

| Component | Needs Vars? | Source | XDG Path Constraints |
|-----------|-------------|--------|----------------------|
| **Host shell** (docker compose CLI) | Yes | `source` with `set -a` | Must have vars for YAML interpolation; must unset XDG before Docker |
| **Docker daemon** (dockerd-rootless.sh) | No | Inherits from parent | **MUST NOT** inherit NFS XDG paths (breaks daemon) |
| **Containers** (service processes) | Yes | `env_file` directive | **CAN** use NFS XDG paths safely |

## Common Mistakes and Their Symptoms

| Mistake | Symptom | How to Fix |
|---------|---------|------------|
| Remove `set -a` | `${TIN_PORT_0}` → empty string in YAML | Add `set -a` before `source` |
| Remove `source .machine/machine.env` | Missing TIN_MACHINE_NAME, DOCKER_HOST | Source both files |
| Remove `unset XDG_*` | Docker daemon fails with permission errors | Keep `unset` after `source` |
| Remove `env_file` from docker-compose.yml | Containers missing environment | Add `env_file` directive |
| `source` without exporting | Variables loaded but not visible to docker compose | Use `set -a && source` |
| Unset XDG before sourcing | Variables never loaded; both YAML and containers broken | Unset AFTER sourcing |

## Testing Checklist

Before modifying deployment code, verify all of these work:

- [ ] Docker Compose YAML interpolation: `${TIN_PORT_0}` expands to the correct port number
- [ ] Docker daemon starts without errors (check with `docker ps`)
- [ ] Containers receive the complete environment (check with `docker exec <container> env`)
- [ ] Services can write to NFS-backed XDG paths inside containers
- [ ] Multiple services can deploy to the same machine without interference
- [ ] Service logs show correct port bindings
- [ ] Containers don't crash-loop with permission errors

## Why This Pattern Is Hard to Maintain

1. **Three separate contexts**: Shell, daemon, and containers each have different needs
2. **Conflicting requirements**: The daemon needs a clean environment; containers need the full environment
3. **Timing matters**: The order of source/unset operations is critical
4. **Non-obvious failure**: A missing `set -a` looks like it works (no error) but silently breaks interpolation
5. **NFS interaction**: XDG vars work fine in most contexts and only break the Docker daemon

## Historical Context

- **Bug introduced**: Oct 16, 2025 (commit 3bb4514)
  - Systemd detection checked the user session instead of system capability
  - Resulted in a wrong DOCKER_HOST path
- **Fixed**: Oct 22, 2025
  - Added `set -a` for proper variable export
  - Added XDG unset logic to protect the Docker daemon
  - Fixed systemd detection in lib/docker.sh
  - Updated the env loader script in lib/core.sh for ACT-2 metadata paths
- **Root cause**: Multiple bugs compounded over time, circular debugging
- **Prevention**: This documentation section

## Related Code

- `cmd/service/deploy.sh`: Service deployment orchestration (lines 159-162)
- `lib/docker.sh`: Docker installation and systemd detection (lines 178-188, 247-256)
- `lib/core.sh`: Shell environment loader script generation (lines 104-135)
- `service/*/docker-compose.yml`: Service definitions with the env_file directive

## References

See also:
- OODA ACT-3 plan: `ooda/2025-10-multi-service-architecture/act/03-port-allocation/plan.md`
- Docker rootless mode docs: https://docs.docker.com/engine/security/rootless/
- NFS + Docker issues: https://github.com/moby/moby/issues/47962

---

## NFS Storage Strategy

### Directory Structure
```
/volume1/topsheet/
├── station/
│   ├── prod/ (UID: 50000) - machine registry, shared config
│   └── test/ (UID: 50010) - test registry
└── gazette/
    ├── prod/ (UID: 50100)
    └── test/ (UID: 50110)
```

### NFS Export Requirements

Each service/environment requires a dedicated NFS export with UID mapping:
- **all_squash**: Maps all users to a specific UID/GID
- **anonuid/anongid**: Maps to the service-specific UID (50000, 50010, etc.)
- **Host restrictions**: Limit access to specific machines

For detailed NFS setup instructions, see [CREATE_MACHINE.md](CREATE_MACHINE.md).

### Storage Organization

Each service environment uses a standardized directory structure:
```
/mnt/<service>-<environment>/   # NFS mount point
├── state/                      # Service state (logs, history, etc.)
├── data/                       # Service data files
├── config/                     # Service configuration
└── service/                    # Docker Compose configurations
    └── <service-name>/
        ├── docker-compose.yml
        └── .env (optional)
```

## XDG Base Directory Integration

To align with the [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/latest/) and make service data accessible to user applications, symlink NFS mount subdirectories to their XDG locations.

### XDG Directory Assumptions
- **XDG_CACHE_HOME**: Local, host-specific cache files (not backed by NFS)
- **XDG_RUNTIME_DIR**: Local, ephemeral runtime files (not backed by NFS)
- **XDG_STATE_HOME**: Persistent state data (backed by NFS)
- **XDG_DATA_HOME**: User-specific data files (backed by NFS)
- **XDG_CONFIG_HOME**: User-specific configuration (backed by NFS)
- **XDG_DATA_DIRS**: System-managed data directories (read-only)
- **XDG_CONFIG_DIRS**: System-managed config directories (read-only)

### Directory Mapping
```bash
# After mounting NFS to /mnt/<service>-<environment>, create XDG symlinks
TIN_SHEET=dynamicalsystem
TIN_SERVICE_NAME=tinsnip

# Ensure XDG directories exist
mkdir -p "${XDG_STATE_HOME:-$HOME/.local/state}/${TIN_SHEET}"
mkdir -p "${XDG_DATA_HOME:-$HOME/.local/share}/${TIN_SHEET}"
mkdir -p "${XDG_CONFIG_HOME:-$HOME/.config}/${TIN_SHEET}"

# Create symlinks from NFS mount to XDG locations
ln -sf /mnt/${TIN_SERVICE_NAME}-${TIN_SERVICE_ENVIRONMENT}/state "${XDG_STATE_HOME:-$HOME/.local/state}/${TIN_SHEET}/@${TIN_SERVICE_NAME}"
ln -sf /mnt/${TIN_SERVICE_NAME}-${TIN_SERVICE_ENVIRONMENT}/data "${XDG_DATA_HOME:-$HOME/.local/share}/${TIN_SHEET}/@${TIN_SERVICE_NAME}"
ln -sf /mnt/${TIN_SERVICE_NAME}-${TIN_SERVICE_ENVIRONMENT}/config "${XDG_CONFIG_HOME:-$HOME/.config}/${TIN_SHEET}/@${TIN_SERVICE_NAME}"
```

### Example Structure
```
/mnt/tinsnip-test/   # NFS mount point
├── state/           # Service state (logs, history, etc.)
├── data/            # Service data files
└── config/          # Service configuration

~/.local/state/dynamicalsystem/@tinsnip -> /mnt/tinsnip-test/state
~/.local/share/dynamicalsystem/@tinsnip -> /mnt/tinsnip-test/data
~/.config/dynamicalsystem/@tinsnip -> /mnt/tinsnip-test/config
```

### Benefits of XDG Integration
1. **Standard Compliance**: Follows the XDG Base Directory specification
2. **User Access**: Applications can access service data through standard paths
3. **Backup Integration**: XDG paths are commonly included in user backups
4. **Clear Organization**: The `@` prefix clearly indicates NFS-backed service data
5. **Performance**: Cache and runtime data remain local for speed

### Implementation in Makefile
```makefile
setup-xdg-links: mount-nfs
	@mkdir -p "$${XDG_STATE_HOME:-$$HOME/.local/state}/$(TIN_SHEET)"
	@mkdir -p "$${XDG_DATA_HOME:-$$HOME/.local/share}/$(TIN_SHEET)"
	@mkdir -p "$${XDG_CONFIG_HOME:-$$HOME/.config}/$(TIN_SHEET)"
	@ln -sfn $(MOUNT_POINT)/state "$${XDG_STATE_HOME:-$$HOME/.local/state}/$(TIN_SHEET)/@$(TIN_SERVICE_NAME)"
	@ln -sfn $(MOUNT_POINT)/data "$${XDG_DATA_HOME:-$$HOME/.local/share}/$(TIN_SHEET)/@$(TIN_SERVICE_NAME)"
	@ln -sfn $(MOUNT_POINT)/config "$${XDG_CONFIG_HOME:-$$HOME/.config}/$(TIN_SHEET)/@$(TIN_SERVICE_NAME)"
	@echo "Created XDG symlinks for $(TIN_SERVICE_NAME)"
```

## Benefits

1. **Complete Isolation**: Each service/environment has its own UID and NFS directory
2. **No Shared Credentials**: NFS `all_squash` eliminates the need for LDAP/shared users
3. **Persistent Data**: All data survives host rebuilds
4. **Easy Backup**: Centralized data on the Synology NAS
5. **Scalable**: The UID convention supports multiple sheets, services, and environments
6. **XDG Compliance**: Integrates with Linux desktop standards

## Language-Specific Patterns

### Python Services with UV

Python services using the UV package manager require specific handling to work correctly with tinsnip's UID isolation.

#### The Editable Install Problem

UV workspace packages (using `[tool.uv.workspace]`) are installed in editable/development mode by default. This causes permission errors in containers because:

1. Editable packages create symlinks/metadata that point to source code
2. Python attempts to rebuild/update package metadata at import time
3. The container runs as `TIN_SERVICE_UID`, but the venv was built as root
4. Imports fail with "Permission denied" errors

#### Solution: Non-Editable Production Install

**Dockerfile pattern:**
```dockerfile
FROM python:3.13-slim

# Install system dependencies
RUN apt-get update && apt-get install -y \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv

WORKDIR /app

# Copy dependency files first (for layer caching)
COPY pyproject.toml .
COPY uv.lock .

# Install dependencies WITHOUT the workspace package
RUN uv sync --frozen --no-install-workspace

# Copy application code
COPY myservice/ ./myservice/

# Install the package as non-editable
RUN uv pip install --no-deps ./myservice/

# Create directories
RUN mkdir -p data config state logs

EXPOSE 10700

# Use the venv directly - no need for 'uv run' at runtime
CMD [".venv/bin/gunicorn", \
     "--bind", "0.0.0.0:10700", \
     "myservice.app:create_app()"]
```

**docker-compose.yml pattern:**
```yaml
services:
  myservice:
    build: .
    container_name: ${TIN_SERVICE_NAME}-${TIN_SERVICE_ENVIRONMENT}
    ports:
      - "${TIN_PORT_0}:10700"
    volumes:
      - ${XDG_DATA_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/data
      - ${XDG_CONFIG_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/config
      - ${XDG_STATE_HOME}/${TIN_SHEET}/${TIN_SERVICE_NAME}:/app/state
    user: "${TIN_SERVICE_UID}:${TIN_SERVICE_UID}"
    environment:
      - TIN_SERVICE_UID=${TIN_SERVICE_UID}
      - UV_NO_CACHE=1   # Disable cache directory creation
      - PYTHONUNBUFFERED=1
    restart: unless-stopped
```

#### Key Points

1. **No user creation in the Dockerfile** - let docker-compose handle the UID via the `user:` directive
2. **Two-stage install** - dependencies first (cached), then the package non-editable
3. **UV_NO_CACHE=1** - prevents UV from trying to create cache directories
4. **Direct venv execution** - use `.venv/bin/python` or `.venv/bin/gunicorn`, not `uv run`
5. **Read-only venv** - the venv is built as root, readable by all, never modified at runtime

#### Why This Works

- **Build time**: The venv is created as root with all dependencies and the package installed
- **Runtime**: The container runs as `TIN_SERVICE_UID`; the venv is read-only
- **No writes needed**: A non-editable install means Python never modifies the venv
- **Permission model**: Follows the tinsnip pattern - specific UID, no privilege escalation

#### Common Mistakes

**INCORRECT: Using `uv run` in CMD** - triggers package rebuilds
```dockerfile
CMD ["uv", "run", "gunicorn", ...]   # BAD
```

**INCORRECT: Editable install** - requires venv write access
```dockerfile
RUN uv sync --frozen   # Installs the workspace package as editable
```

**INCORRECT: Entrypoint with user switching** - violates the tinsnip pattern
```dockerfile
ENTRYPOINT ["/entrypoint.sh"]   # Container must start as root
```

**CORRECT approach:**
```dockerfile
RUN uv sync --frozen --no-install-workspace   # Dependencies only
RUN uv pip install --no-deps ./myservice/     # Non-editable
CMD [".venv/bin/gunicorn", ...]               # Direct execution
```

#### Testing Locally

Development and testing should still use editable installs:

```bash
# Local development
cd myservice
uv sync   # Editable install for development

# Local testing
uv run pytest
uv run flask run

# Production build test
docker compose build
docker compose up
```

## Adding a New Service

1. Choose the next available machine number (e.g., `02` for a new service)
2. Calculate the UIDs: `10200` (prod), `10210` (test)
3. Create the NFS directories on the Synology with appropriate ownership
4. Add NFS exports to `/etc/exports` via SSH (the GUI doesn't support custom UIDs)
5. Create a Makefile using the pattern above
6. Deploy using `make setup && make deploy`
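
For step 4, the `/etc/exports` entry for the new service's prod directory would look something like this. The client IP, sheet name, and `newservice` directory are illustrative; the export options follow the requirements listed earlier (all_squash with a per-service anonuid/anongid):

```
# /etc/exports on the Synology - one line per service/environment
/volume1/dynamicalsystem/newservice/prod 192.168.1.10(rw,sync,no_subtree_check,all_squash,anonuid=10200,anongid=10200)
```

After editing, re-export with `exportfs -ra` (or restart the NFS service).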

## Security Notes

- Each NFS export is restricted to specific hosts
- UIDs are in the 10000+ range to avoid conflicts
- Services cannot access each other's data due to UID isolation
- No root access is required within containers (rootless Docker)