# Rate Limiting

plyr.fm uses [slowapi](https://github.com/laurentS/slowapi) to implement application-side rate limiting. This protects the backend from abuse, brute-force attacks, and denial-of-service attempts.

## Configuration

Rate limits are configured via environment variables. Defaults are set in `src/backend/config.py`.

| Environment Variable | Default | Description |
|---------------------|---------|-------------|
| `RATE_LIMIT_ENABLED` | `true` | Enable/disable rate limiting globally. |
| `RATE_LIMIT_DEFAULT_LIMIT` | `100/minute` | Global limit applied to all endpoints by default. |
| `RATE_LIMIT_AUTH_LIMIT` | `10/minute` | Strict limit for auth endpoints (`/auth/start`, `/auth/exchange`). |
| `RATE_LIMIT_UPLOAD_LIMIT` | `5/minute` | Strict limit for file uploads (`/tracks/`). |

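For illustration, here is a minimal sketch of how these variables could surface as settings, assuming a pydantic-settings style class; the `Settings` class and field names are illustrative, not the actual contents of `config.py`:

```python
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # each field is populated from the environment variable of the same name (upper-cased)
    rate_limit_enabled: bool = True
    rate_limit_default_limit: str = "100/minute"
    rate_limit_auth_limit: str = "10/minute"
    rate_limit_upload_limit: str = "5/minute"


settings = Settings()
```
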
## Architecture

The current implementation uses **in-memory storage**.

* **Per-Instance:** Limits are tracked per application instance (Fly Machine).
* **Scaling:** With multiple replicas (e.g., 2 machines), the **effective global limit** scales linearly.
  * Example: A limit of `100/minute` with 2 machines results in a total capacity of roughly `200/minute`.
* **Keying:** Limits are applied by **IP address** (`get_remote_address`); see the sketch below.

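As a rough illustration of this keying strategy, the following sketch shows how the limiter in `backend/utilities/rate_limit.py` might be constructed and wired into the FastAPI app, following slowapi's standard setup. The hardcoded limit strings and wiring are assumptions, not a copy of the actual module:

```python
from fastapi import FastAPI
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

# bucket requests by client IP; counters live in this process's memory
limiter = Limiter(
    key_func=get_remote_address,
    default_limits=["100/minute"],  # RATE_LIMIT_DEFAULT_LIMIT in the real config
    enabled=True,                   # RATE_LIMIT_ENABLED in the real config
)

app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
```
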
### Why in-memory?
For our current scale, in-memory storage is sufficient and avoids the complexity and cost of a dedicated Redis cluster. It still provides effective protection against single-source flooding (brute-force or denial-of-service attempts) directed at any specific instance.

### Future State (Redis)
If strict global synchronization or complex tier-based limiting is required in the future, we will migrate to a Redis-backed limiter. `slowapi` supports Redis out of the box, which would allow maintaining shared counters across all application instances.

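If we did migrate, the change would likely be confined to the limiter's construction: slowapi accepts a `storage_uri` pointing at Redis. A minimal sketch, assuming a Redis client library is installed and the URL comes from configuration:

```python
from slowapi import Limiter
from slowapi.util import get_remote_address

# with a storage_uri, counters live in Redis and are shared by every app instance
limiter = Limiter(
    key_func=get_remote_address,
    default_limits=["100/minute"],
    storage_uri="redis://localhost:6379",  # illustrative URL; would come from settings
)
```
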
## Adding Limits to Endpoints

To apply a specific limit to a route, use the `@limiter.limit` decorator:

```python
from fastapi import APIRouter, Request

from backend.config import settings
from backend.utilities.rate_limit import limiter

router = APIRouter()


@router.post("/my-expensive-endpoint")
@limiter.limit("5/minute")  # prefer a value from `settings` over a hardcoded string
async def my_endpoint(request: Request):
    ...
```

**Requirements:**
* The endpoint function **must** accept a `request: Request` parameter.
* Use configuration settings instead of hardcoded strings where possible.

## Monitoring

Requests that exceed a limit are rejected with `429 Too Many Requests`. These events are logged and appear in Logfire traces with the `429` status code.
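
To see this behavior locally, a quick sanity check along these lines (a sketch using FastAPI's `TestClient`; the `backend.main` app path and endpoint name are hypothetical) should produce the `429` once the limit is exhausted:

```python
from fastapi.testclient import TestClient

from backend.main import app  # hypothetical location of the FastAPI app

client = TestClient(app)

# with a 5/minute limit, the sixth request inside a minute should be rejected
responses = [client.post("/my-expensive-endpoint") for _ in range(6)]
assert responses[-1].status_code == 429
```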