plyr.fm Status History - November 2025#

November 2025 Work#

ATProto labeler and admin UI (PRs #385-395, Nov 29-Dec 1)#

motivation: integrate with ATProto labeling protocol for proper copyright violation signaling, and improve admin tooling for reviewing flagged content.

what shipped:

  • ATProto labeler implementation (PRs #385, #391):
    • standalone labeler service integrated into moderation Rust service
    • implements com.atproto.label.queryLabels and subscribeLabels XRPC endpoints
    • k256 ECDSA signing for cryptographic label verification
    • SQLite storage for labels with sequence numbers
    • labels emitted when copyright violations detected
    • negation labels for false positive resolution
  • admin UI (PRs #390, #392, #395):
    • web interface at /admin for reviewing copyright flags
    • htmx for server-rendered interactivity (no inline JS bloat)
    • static files extracted to moderation/static/ for proper syntax highlighting
    • plyr.fm design tokens for brand consistency
    • shows track title, artist handle, match scores, and potential matches
    • "mark false positive" button emits negation label
  • label context enrichment (PR #392):
    • labels now include track_title, artist_handle, artist_did, highest_score, matches
    • backfill script (scripts/backfill_label_context.py) populated 25 existing flags
    • admin UI displays rich context instead of just ATProto URIs
  • copyright flag visibility (PRs #387, #389):
    • artist portal shows copyright flag indicator on flagged tracks
    • tooltip shows primary match (artist - title) for quick context
  • documentation (PR #386):
    • comprehensive docs at docs/moderation/atproto-labeler.md
    • covers architecture, label schema, XRPC protocol, signing keys
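The labels the service emits (and their negations for false positives) follow the ATProto label shape. A minimal unsigned sketch in Python, assuming a hypothetical label value and DIDs; the real service signs each payload with its k256 key before emitting:

```python
from datetime import datetime, timezone

def make_label(src_did: str, subject_uri: str, val: str, neg: bool = False) -> dict:
    """Build an unsigned label payload (field names follow com.atproto.label.defs#label).
    The real labeler appends a k256 ECDSA signature before storing/emitting."""
    return {
        "ver": 1,                # label schema version
        "src": src_did,          # DID of the labeler service
        "uri": subject_uri,      # at:// URI of the labeled record
        "val": val,              # label value (value string here is an assumption)
        "neg": neg,              # True negates a previously emitted label
        "cts": datetime.now(timezone.utc).isoformat(),  # creation timestamp
    }

# hypothetical usage: flag a track, then retract the label on false positive
flag = make_label("did:web:plyr-moderation.fly.dev",
                  "at://did:plc:abc/fm.plyr.track/xyz", "copyright-violation")
retract = make_label("did:web:plyr-moderation.fly.dev",
                     "at://did:plc:abc/fm.plyr.track/xyz", "copyright-violation", neg=True)
```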

admin UI architecture:

  • moderation/static/admin.html - page structure
  • moderation/static/admin.css - plyr.fm design tokens
  • moderation/static/admin.js - auth handling (~40 lines)
  • htmx endpoints: /admin/flags-html, /admin/resolve-htmx
  • server-rendered HTML partials for flag cards
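To illustrate the server-rendered-partial approach htmx swaps in: a sketch of one flag card, in Python for brevity (the real cards are rendered by the Rust service; the field names and markup here are assumptions):

```python
from html import escape

def render_flag_card(flag: dict) -> str:
    """Render one copyright-flag card as an HTML partial for htmx to swap into
    the admin page. Illustrative only; field names and classes are assumptions."""
    matches = "".join(
        f"<li>{escape(m['artist'])} - {escape(m['title'])}</li>" for m in flag["matches"]
    )
    return (
        f'<article class="flag-card" id="flag-{flag["id"]}">'
        f"<h3>{escape(flag['track_title'])}</h3>"
        f"<p>@{escape(flag['artist_handle'])} · score {flag['highest_score']}</p>"
        f"<ul>{matches}</ul>"
        f'<button hx-post="/admin/resolve-htmx" hx-vals=\'{{"id": {flag["id"]}}}\'>'
        "mark false positive</button>"
        "</article>"
    )
```

htmx posts to /admin/resolve-htmx and replaces the card with the server's response, so no client-side state management is needed.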

copyright scanning with AuDD fingerprinting (PRs #382-384)#

motivation: detect potential copyright violations in uploaded tracks to avoid DMCA issues and protect the platform.

what shipped:

  • moderation service (Rust/Axum on Fly.io):
    • standalone service at plyr-moderation.fly.dev
    • integrates with AuDD enterprise API for audio fingerprinting
    • scans audio URLs and returns matches with metadata (artist, title, album, ISRC, timecode)
    • auth via X-Moderation-Key header
  • backend integration (PR #382):
    • ModerationSettings in config (service URL, auth token, timeout)
    • moderation client module (backend/_internal/moderation.py)
    • fire-and-forget background task on track upload
    • stores results in copyright_scans table
    • scan errors stored as "clear" so tracks aren't stuck unscanned
  • flagging fix (PR #384):
    • AuDD enterprise API returns no confidence scores (all 0)
    • changed from score threshold to presence-based flagging: is_flagged = !matches.is_empty()
    • removed unused score_threshold config
  • backfill script (scripts/scan_tracks_copyright.py):
    • scans existing tracks that haven't been checked
    • --max-duration flag to skip long DJ sets (estimated from file size)
    • --dry-run mode to preview what would be scanned
    • supports dev/staging/prod environments
  • review workflow:
    • copyright_scans table has resolution, reviewed_at, reviewed_by, review_notes columns
    • resolution values: violation, false_positive, original_artist
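The flagging decision from PR #384 is simple enough to sketch: since the enterprise API returns no usable confidence scores, any match at all flags the track. A hedged Python equivalent of the Rust `is_flagged = !matches.is_empty()` (field names are assumptions):

```python
def evaluate_scan(matches: list[dict]) -> dict:
    """Decide flag status from AuDD fingerprint matches. Presence-based:
    the enterprise API returns all-zero confidence scores, so any match flags
    the track. Resolution is filled in later by a human reviewer."""
    return {
        "is_flagged": bool(matches),
        "resolution": None,  # later: violation / false_positive / original_artist
        "matches": matches,
    }
```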

initial review results (25 flagged tracks):

  • 8 violations (actual copyright issues)
  • 11 false positives (fingerprint noise)
  • 6 original artists (people uploading their own distributed music)

developer tokens with independent OAuth grants (PR #367, Nov 28)#

motivation: programmatic API access (scripts, CLIs, automation) needed tokens that survive browser logout and don't become stale when browser sessions refresh.

what shipped:

  • OAuth-based dev tokens: each developer token gets its own OAuth authorization flow
    • user clicks "create token" → redirected to PDS for authorization → token created with independent credentials
    • tokens have their own DPoP keypair, access/refresh tokens - completely separate from browser session
  • cookie isolation: dev token exchange doesn't set browser cookie
    • added is_dev_token flag to ExchangeToken model
    • /auth/exchange skips Set-Cookie for dev token flows
    • prevents logout from deleting dev tokens (critical bug fixed during implementation)
  • token management UI: portal → "your data" → "developer tokens"
    • create with optional name and expiration (30/90/180/365 days or never)
    • list active tokens with creation/expiration dates
    • revoke individual tokens
  • API endpoints:
    • POST /auth/developer-token/start - initiates OAuth flow, returns auth_url
    • GET /auth/developer-tokens - list user's tokens
    • DELETE /auth/developer-tokens/{prefix} - revoke by 8-char prefix
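The cookie-isolation fix and the prefix-based revocation both reduce to small decisions, sketched here in Python (helper names are hypothetical; the real logic lives in the /auth/exchange handler):

```python
def should_set_browser_cookie(exchange: dict) -> bool:
    """Dev-token exchanges must not touch the browser session cookie;
    otherwise logging out of the browser would delete the dev token's session
    (the critical bug fixed during implementation)."""
    return not exchange.get("is_dev_token", False)

def token_prefix(token: str) -> str:
    """Tokens are identified for revocation by their first 8 characters,
    matching DELETE /auth/developer-tokens/{prefix}."""
    return token[:8]
```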

security properties:

  • tokens are full sessions with encrypted OAuth credentials (Fernet)
  • each token refreshes independently (no staleness from browser session refresh)
  • revokable individually without affecting browser or other tokens
  • explicit OAuth consent required at PDS for each token created

documentation: see docs/authentication.md "developer tokens" section


platform stats and media session integration (PRs #359-379, Nov 27-29)#

motivation: show platform activity at a glance, improve playback experience across devices, and give users control over their data.

what shipped:

  • platform stats endpoint and UI (PRs #376, #378, #379):
    • GET /stats returns total plays, tracks, and artists
    • stats bar displays in homepage header (e.g., "1,691 plays • 55 tracks • 8 artists")
    • skeleton loading animation while fetching
    • responsive layout: visible in header on wide screens, collapses to menu on narrow
    • end-of-list animation on homepage
  • Media Session API (PR #371):
    • provides track metadata to CarPlay, lock screens, Bluetooth devices, macOS control center
    • artwork display with fallback to artist avatar
    • play/pause, prev/next, seek controls all work from system UI
    • position state syncs scrubbers on external interfaces
  • browser tab title (PR #374):
    • shows "track - artist • plyr.fm" while playing
    • persists across page navigation
    • reverts to page title when playback stops
  • timed comments (PR #359):
    • comments capture timestamp when added during playback
    • clickable timestamp buttons seek to that moment
    • compact scrollable comments section on track pages
  • constellation integration (PR #360):
    • queries constellation.microcosm.blue backlink index
    • enables network-wide like counts (not just plyr.fm internal)
    • environment-aware namespace handling
  • account deletion (PR #363):
    • explicit confirmation flow (type handle to confirm)
    • deletes all plyr.fm data (tracks, albums, likes, comments, preferences)
    • optional ATProto record cleanup with clear warnings about orphaned references
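The stats bar formatting above is a one-liner worth pinning down; a sketch assuming the /stats endpoint returns plain integer counts:

```python
def format_stats(plays: int, tracks: int, artists: int) -> str:
    """Format the homepage stats bar from GET /stats counts,
    with thousands separators, e.g. "1,691 plays • 55 tracks • 8 artists"."""
    return f"{plays:,} plays • {tracks:,} tracks • {artists:,} artists"
```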

oEmbed endpoint for Leaflet.pub embeds (PRs #355-358, Nov 25)#

motivation: plyr.fm tracks embedded in Leaflet.pub (via iframely) showed a black HTML5 audio box instead of our custom embed player.

what shipped:

  • oEmbed endpoint (PR #355): /oembed returns proper embed HTML with iframe
    • follows oEmbed spec with type: "rich" and iframe in html field
    • discovery link in track page <head> for automatic detection
  • iframely domain registration: registered plyr.fm on iframely.com (free tier)
    • this was the key fix - iframely now returns our embed iframe as links.player[0]
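The shape of the /oembed payload is what makes iframely pick up the player: a `type: "rich"` response whose `html` field carries the embed iframe. A sketch (iframe dimensions are illustrative, required fields follow the oEmbed 1.0 spec):

```python
def oembed_response(embed_url: str, title: str) -> dict:
    """Build a rich-type oEmbed payload whose html field carries the embed
    iframe. Consumers like iframely surface this as links.player[0]."""
    return {
        "version": "1.0",
        "type": "rich",
        "title": title,
        "provider_name": "plyr.fm",
        "provider_url": "https://plyr.fm",
        "html": f'<iframe src="{embed_url}" width="400" height="200" frameborder="0"></iframe>',
        "width": 400,   # required for type "rich" per the oEmbed spec
        "height": 200,
    }
```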

debugging journey (PRs #356-358):

  • initially tried og:video meta tags to hint iframe embed - didn't work
  • tried removing og:audio to force oEmbed fallback - resulted in no player link
  • discovered iframely requires domain registration to trust oEmbed providers
  • after registration, iframely correctly returns embed iframe URL

export & upload reliability (PRs #337-344, Nov 24)#

motivation: exports were failing silently on large files (OOM), uploads showed incorrect progress, and SSE connections triggered false error toasts.

what shipped:

  • database-backed jobs (PR #337): moved upload/export tracking from in-memory to postgres
    • jobs table persists state across server restarts
    • enables reliable progress tracking via SSE polling
  • streaming exports (PR #343): fixed OOM on large file exports
    • previously loaded entire files into memory via response["Body"].read()
    • now streams to temp files, adds to zip from disk (constant memory)
    • 90-minute WAV files now export successfully on 1GB VM
  • progress tracking fix (PR #340): upload progress was receiving bytes but treating as percentage
    • UploadProgressTracker now properly converts bytes to percentage
    • upload progress bar works correctly again
  • UX improvements (PRs #338-339, #341-342, #344):
    • export filename now includes date (plyr-tracks-2025-11-24.zip)
    • toast notification on track deletion
    • fixed false "lost connection" error when SSE completes normally
    • progress now shows "downloading track X of Y" instead of confusing count
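The streaming-export fix in PR #343 boils down to never holding a whole file in memory: spool each object to a temp file, then add it to the zip from disk. A minimal sketch, where `body` stands in for an R2/S3 response body (any file-like object):

```python
import os
import tempfile
import zipfile

def add_streamed(zf: zipfile.ZipFile, body, arcname: str,
                 chunk_size: int = 8 * 1024 * 1024) -> None:
    """Stream a storage object to a temp file in chunks, then add it to the
    zip from disk, so memory use stays constant regardless of file size
    (sketch of the PR #343 approach; previously body.read() loaded it all)."""
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        while chunk := body.read(chunk_size):
            tmp.write(chunk)
        path = tmp.name
    try:
        zf.write(path, arcname=arcname)
    finally:
        os.unlink(path)  # temp file is only needed until it's in the zip
```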

queue hydration + ATProto token hardening (Nov 12)#

why: queue endpoints were occasionally taking 2s+ and restore operations could 401 when multiple requests refreshed an expired ATProto token simultaneously.

what shipped:

  • added persistent image_url on Track rows so queue hydration no longer probes R2 for every track. Queue payloads now pull art directly from Postgres, with a one-time fallback for legacy rows.
  • updated _internal/queue.py to backfill any missing URLs once (with caching) instead of per-request GETs.
  • introduced per-session locks in _refresh_session_tokens so only one coroutine hits oauth_client.refresh_session at a time; others reuse the refreshed tokens. This removes the race that caused the batch restore flow to intermittently 500/401.

impact: queue tail latency dropped back under 500 ms in staging tests, ATProto restore flows are now reliable under concurrent use, and Logfire no longer shows 500s from the PDS.


performance optimization session (Nov 12)#

issue: slow /tracks/liked endpoint

symptoms:

  • /tracks/liked taking 600-900ms consistently
  • only ~25ms spent in database queries
  • mysterious 575ms gap with no spans in Logfire traces

root cause:

  • PR #184 added image_url column to tracks table to eliminate N+1 R2 API calls
  • legacy tracks (15 tracks uploaded before PR) had image_url = NULL
  • fallback code called track.get_image_url() which makes uninstrumented R2 head_object API calls
  • 5 tracks × 120ms = ~600ms of uninstrumented latency

solution: created scripts/backfill_image_urls.py to populate missing image_url values

results:

  • /tracks/liked now sub-200ms (down from 600-900ms)
  • all endpoints now consistently sub-second response times

database cleanup:

  • discovered queue_state had 265% bloat (53 dead rows, 20 live rows)
  • ran VACUUM (FULL, ANALYZE) queue_state against production

track detail pages (PR #164, Nov 12)#

  • ✅ dedicated track detail pages with large cover art
  • ✅ play button updates queue state correctly (#169)
  • ✅ liked state loaded efficiently via server-side fetch
  • ✅ mobile-optimized layouts with proper scrolling constraints
  • ✅ origin validation for image URLs (#168)

liked tracks feature (PR #157, Nov 11)#

  • ✅ server-side persistent collections
  • ✅ ATProto record publication for cross-platform visibility
  • ✅ UI for adding/removing tracks from liked collection
  • ✅ like counts displayed in track responses and analytics (#170)
  • ✅ analytics cards now clickable links to track detail pages (#171)
  • ✅ liked state shown on artist page tracks (#163)

status: COMPLETE (issue #144 closed)


upload streaming + progress UX (PR #182, Nov 11)#

  • Frontend switched from fetch to XMLHttpRequest so we can display upload progress toasts (critical for >50 MB mixes on mobile).
  • Upload form now clears only after the request succeeds; failed attempts leave the form intact so users don't lose metadata.
  • Backend writes uploads/images to temp files in 8 MB chunks before handing them to the storage layer, eliminating whole-file buffering and iOS crashes for hour-long mixes.
  • Deployment verified locally and by rerunning the exact repro Stella hit (85-minute mix from mobile).
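The backend side of this can be sketched as spooling the incoming stream to a temp file in 8 MB chunks before handing it to the storage layer (names are illustrative; `stream` is any readable file-like object):

```python
import shutil
import tempfile

CHUNK = 8 * 1024 * 1024  # 8 MB, matching the chunk size above

def spool_upload(stream) -> str:
    """Copy an incoming upload to a temp file in 8 MB chunks so an hour-long
    mix never sits fully in memory; returns the temp path for the storage
    layer to consume."""
    with tempfile.NamedTemporaryFile(delete=False, suffix=".upload") as tmp:
        shutil.copyfileobj(stream, tmp, length=CHUNK)
        return tmp.name
```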

transcoder API deployment (PR #156, Nov 11)#

standalone Rust transcoding service 🎉

  • deployed: https://plyr-transcoder.fly.dev/
  • purpose: convert AIFF/FLAC/etc. to MP3 for browser compatibility
  • technology: Axum + ffmpeg + Docker
  • security: X-Transcoder-Key header authentication (shared secret)
  • capacity: handles 1GB uploads, tested with 85-minute AIFF files (~858MB → 195MB MP3 in 32 seconds)
  • architecture:
    • 2 Fly machines for high availability
    • auto-stop/start for cost efficiency
    • stateless design (no R2 integration yet)
    • 320kbps MP3 output with proper ID3 tags
  • status: deployed and tested, ready for integration into plyr.fm upload pipeline
  • next steps: wire into backend with R2 integration and job queue (see issue #153)
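The conversion itself is a straightforward ffmpeg invocation; a sketch of the argument list the service effectively runs (flag set is an assumption, and the real service drives ffmpeg from Rust):

```python
def transcode_cmd(src: str, dst: str, bitrate: str = "320k") -> list[str]:
    """Assemble an ffmpeg command line: any input format -> 320 kbps MP3
    with broadly compatible ID3 tags. Sketch only; exact flags may differ."""
    return [
        "ffmpeg", "-y",            # overwrite output if it exists
        "-i", src,                 # input: AIFF/FLAC/WAV/...
        "-codec:a", "libmp3lame",  # MP3 encoder
        "-b:a", bitrate,           # constant 320 kbps output
        "-id3v2_version", "3",     # ID3v2.3 for wide player compatibility
        dst,
    ]
```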

AIFF/AIF browser compatibility fix (PR #152, Nov 11)#

format validation improvements

  • problem discovered: AIFF/AIF files only work in Safari, not Chrome/Firefox
    • browsers throw MediaError code 4: MEDIA_ERR_SRC_NOT_SUPPORTED
    • users could upload files but they wouldn't play in most browsers
  • immediate solution: reject AIFF/AIF uploads at both backend and frontend
    • removed AIFF/AIF from AudioFormat enum
    • added format hints to upload UI: "supported: mp3, wav, m4a"
    • client-side validation with helpful error messages
  • long-term solution: deployed standalone transcoder service (see above)
    • separate Rust/Axum service with ffmpeg
    • accepts all formats, converts to browser-compatible MP3
    • integration into upload pipeline pending (issue #153)

observability improvements:

  • added logfire instrumentation to upload background tasks
  • added logfire spans to R2 storage operations
  • documented logfire querying patterns in docs/logfire-querying.md

async I/O performance fixes (PRs #149-151, Nov 10-11)#

eliminated event loop blocking across the backend with three critical PRs:

  1. PR #149: async R2 reads - converted R2 head_object operations from sync boto3 to async aioboto3

    • portal page load time: 2+ seconds → ~200ms
    • root cause: track.image_url was blocking on serial R2 HEAD requests
  2. PR #150: concurrent PDS resolution - parallelized ATProto PDS URL lookups

    • homepage load time: 2-6 seconds → 200-400ms
    • root cause: serial resolve_atproto_data() calls (8 artists × 200-300ms each)
    • fix: asyncio.gather() for batch resolution, database caching for subsequent loads
  3. PR #151: async storage writes/deletes - made save/delete operations non-blocking

    • R2: switched to aioboto3 for uploads/deletes (async S3 operations)
    • filesystem: used anyio.Path and anyio.open_file() for chunked async I/O (64KB chunks)
    • impact: multi-MB uploads no longer monopolize worker thread, constant memory usage
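The PR #150 fix reduces to one pattern: gather the lookups instead of awaiting them one by one. A sketch where `resolve_one` stands in for resolve_atproto_data:

```python
import asyncio

async def resolve_all(handles: list[str], resolve_one) -> dict[str, str]:
    """Resolve every artist's PDS URL concurrently instead of serially
    (sketch of PR #150). Serially, 8 artists x 250 ms is ~2 s; gathered,
    the whole batch takes roughly one round trip."""
    urls = await asyncio.gather(*(resolve_one(h) for h in handles))
    return dict(zip(handles, urls))
```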

mobile UI improvements (PRs #159-185, Nov 11-12)#

  • ✅ compact action menus and better navigation (#161)
  • ✅ improved mobile responsiveness (#159)
  • ✅ consistent button layouts across mobile/desktop (#176-181, #185)
  • ✅ always show play count and like count on mobile (#177)
  • ✅ login page UX improvements (#174-175)
  • ✅ liked page UX improvements (#173)
  • ✅ accent color for liked tracks (#160)