+54
-10
docs/configuration-reference.md
···
290
290
**Required**: No
291
291
**Type**: String
292
292
**Default**: `mpsc`
293
-
**Values**: `mpsc`, `redis`, `noop`, `none`
293
+
**Values**: `mpsc`, `redis`, `sqlite`, `noop`, `none`
294
294
295
295
The type of queue adapter for background handle resolution.
296
296
297
297
**Options**:
298
298
- `mpsc`: In-memory multi-producer single-consumer queue (default)
299
299
- `redis`: Redis-backed distributed queue
300
+
- `sqlite`: SQLite-backed persistent queue
300
301
- `noop`: Disable queue processing (testing only)
301
302
- `none`: Alias for `noop`
302
303
···
307
308
308
309
# Multi-instance or high availability
309
310
QUEUE_ADAPTER=redis
311
+
312
+
# Single instance with persistence
313
+
QUEUE_ADAPTER=sqlite
310
314
311
315
# Testing without background processing
312
316
QUEUE_ADAPTER=noop
···
415
419
QUEUE_BUFFER_SIZE=10000 # Very high traffic
416
420
```
417
421
422
+
### `QUEUE_SQLITE_MAX_SIZE`
423
+
424
+
**Required**: No
425
+
**Type**: Integer
426
+
**Default**: `10000`
427
+
**Range**: 100-1000000 (recommended)
428
+
**Constraints**: Must be >= 0
429
+
430
+
Maximum queue size for SQLite adapter work shedding. When the queue exceeds this limit, the oldest entries are automatically deleted in batches, preserving the most recently queued work items.
431
+
432
+
**Work Shedding Behavior**:
433
+
- New work items are always accepted
434
+
- When queue size exceeds `QUEUE_SQLITE_MAX_SIZE`, oldest entries are deleted
435
+
- Shedding runs as a batched cleanup after insertion, so the queue may briefly exceed the limit before being trimmed back
436
+
- Essential for long-running deployments to prevent unbounded disk growth
437
+
- Set to `0` to disable work shedding (unlimited queue size)
438
+
439
+
**Examples**:
440
+
```bash
441
+
QUEUE_SQLITE_MAX_SIZE=0 # Unlimited (disable work shedding)
442
+
QUEUE_SQLITE_MAX_SIZE=1000 # Small deployment, frequent processing
443
+
QUEUE_SQLITE_MAX_SIZE=10000 # Default, balanced for most deployments
444
+
QUEUE_SQLITE_MAX_SIZE=100000 # High-traffic deployment with slower processing
445
+
QUEUE_SQLITE_MAX_SIZE=1000000 # Very high-traffic, maximum recommended
446
+
```
447
+
448
+
**Recommendations**:
449
+
- **Small deployments**: 1000-5000 entries
450
+
- **Production deployments**: 10000-50000 entries
451
+
- **High-traffic deployments**: 50000-1000000 entries
452
+
- **Development/testing**: 100-1000 entries
453
+
- **Disk space concerns**: Lower values (1000-5000)
454
+
- **High ingestion rate**: Higher values (50000-1000000)
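
To sanity-check whichever limit you pick against a running instance, the queue table can be queried directly with the `sqlite3` CLI; this is a minimal sketch that assumes the `sqlite:./quickdid.db` database path used elsewhere in these docs:

```bash
# Compare the live queue depth against the configured QUEUE_SQLITE_MAX_SIZE
sqlite3 ./quickdid.db "SELECT COUNT(*) FROM handle_resolution_queue;"
```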
455
+
418
456
## Configuration Examples
419
457
420
458
### Minimal Development Configuration
···
469
507
CACHE_TTL_MEMORY=600
470
508
CACHE_TTL_SQLITE=86400 # 1 day
471
509
472
-
# Queue (MPSC for single instance)
473
-
QUEUE_ADAPTER=mpsc
510
+
# Queue (SQLite for single instance with persistence)
511
+
QUEUE_ADAPTER=sqlite
474
512
QUEUE_BUFFER_SIZE=5000
513
+
QUEUE_SQLITE_MAX_SIZE=10000
475
514
476
515
# Logging
477
516
RUST_LOG=info
···
579
618
SQLITE_URL: sqlite:/data/quickdid.db
580
619
CACHE_TTL_MEMORY: 600
581
620
CACHE_TTL_SQLITE: 86400
582
-
QUEUE_ADAPTER: mpsc
621
+
QUEUE_ADAPTER: sqlite
583
622
QUEUE_BUFFER_SIZE: 5000
623
+
QUEUE_SQLITE_MAX_SIZE: 10000
584
624
RUST_LOG: info
585
625
ports:
586
626
- "8080:8080"
···
612
652
- Recommended range: 1-60 seconds
613
653
614
654
3. **Queue Adapter** (`QUEUE_ADAPTER`):
615
-
- Must be one of: `mpsc`, `redis`, `noop`, `none`
655
+
- Must be one of: `mpsc`, `redis`, `sqlite`, `noop`, `none`
616
656
- Case-sensitive
617
657
618
658
4. **Port** (`HTTP_PORT`):
···
661
701
4. **High traffic**: Increase QUEUE_BUFFER_SIZE (5000-10000)
662
702
5. **Multi-region**: Use region-specific QUEUE_WORKER_ID
663
703
664
-
### Caching Strategy
704
+
### Caching and Queue Strategy
665
705
666
-
1. **Multi-instance/HA deployments**: Use Redis for distributed caching
667
-
2. **Single-instance deployments**: Use SQLite for persistent caching
668
-
3. **Development/testing**: Use memory-only caching
706
+
1. **Multi-instance/HA deployments**: Use Redis for distributed caching and queuing
707
+
2. **Single-instance deployments**: Use SQLite for persistent caching and queuing
708
+
3. **Development/testing**: Use memory-only caching with MPSC queuing
669
709
4. **Hybrid setups**: Configure both Redis and SQLite for redundancy
670
-
5. **Cache TTL guidelines**:
710
+
5. **Queue adapter guidelines**:
711
+
- Redis: Best for multi-instance deployments with distributed processing
712
+
- SQLite: Best for single-instance deployments needing persistence
713
+
- MPSC: Best for single-instance deployments without persistence needs
714
+
6. **Cache TTL guidelines**:
671
715
- Redis: Shorter TTLs (1-7 days) for frequently updated handles
672
716
- SQLite: Longer TTLs (7-90 days) for stable single-instance caching
673
717
- Memory: Short TTLs (5-30 minutes) as fallback
+26
-6
docs/production-deployment.md
···
111
111
# QUEUE CONFIGURATION
112
112
# ----------------------------------------------------------------------------
113
113
114
-
# Queue adapter type: 'mpsc', 'redis', 'noop', or 'none' (default: mpsc)
114
+
# Queue adapter type: 'mpsc', 'redis', 'sqlite', 'noop', or 'none' (default: mpsc)
115
115
# - 'mpsc': In-memory queue for single-instance deployments
116
116
# - 'redis': Distributed queue for multi-instance or HA deployments
117
+
# - 'sqlite': Persistent queue for single-instance deployments
117
118
# - 'noop': Disable queue processing (testing only)
118
119
# - 'none': Alias for 'noop'
119
120
QUEUE_ADAPTER=redis
···
141
142
# Range: 100-100000
142
143
# Increase for high-traffic deployments using MPSC adapter
143
144
QUEUE_BUFFER_SIZE=5000
145
+
146
+
# Maximum queue size for SQLite adapter work shedding (default: 10000)
147
+
# Range: 100-1000000 (recommended)
148
+
# When exceeded, oldest entries are deleted to maintain this limit
149
+
# Set to 0 to disable work shedding (unlimited queue size)
150
+
# Benefits: Prevents unbounded disk usage, maintains recent work items
151
+
QUEUE_SQLITE_MAX_SIZE=10000
144
152
145
153
# ----------------------------------------------------------------------------
146
154
# HTTP CLIENT CONFIGURATION
···
389
397
SQLITE_URL: sqlite:/data/quickdid.db
390
398
CACHE_TTL_MEMORY: 600
391
399
CACHE_TTL_SQLITE: 86400
392
-
QUEUE_ADAPTER: mpsc
400
+
QUEUE_ADAPTER: sqlite
393
401
QUEUE_BUFFER_SIZE: 5000
402
+
QUEUE_SQLITE_MAX_SIZE: 10000
394
403
RUST_LOG: info
395
404
ports:
396
405
- "8080:8080"
···
695
704
696
705
# Check SQLite cache entries (if using SQLite)
697
706
docker exec quickdid sqlite3 /data/quickdid.db "SELECT COUNT(*) as total_entries, MIN(updated) as oldest, MAX(updated) as newest FROM handle_resolution_cache;"
707
+
708
+
# Check SQLite queue entries (if using SQLite queue adapter)
709
+
docker exec quickdid sqlite3 /data/quickdid.db "SELECT COUNT(*) as queue_entries, MIN(queued_at) as oldest, MAX(queued_at) as newest FROM handle_resolution_queue;"
698
710
699
711
# Monitor real-time logs
700
712
docker-compose logs -f quickdid | grep -E "ERROR|WARN"
···
770
782
3. **Memory** (fast, but lost on restart)
771
783
772
784
**Recommendations by Deployment Type**:
773
-
- **Single instance, persistent**: Use SQLite (`SQLITE_URL=sqlite:./quickdid.db`)
774
-
- **Multi-instance, HA**: Use Redis (`REDIS_URL=redis://redis:6379/0`)
775
-
- **Testing/development**: Use memory only (no Redis/SQLite URLs)
785
+
- **Single instance, persistent**: Use SQLite for both caching and queuing (`SQLITE_URL=sqlite:./quickdid.db`, `QUEUE_ADAPTER=sqlite`)
786
+
- **Multi-instance, HA**: Use Redis for both caching and queuing (`REDIS_URL=redis://redis:6379/0`, `QUEUE_ADAPTER=redis`)
787
+
- **Testing/development**: Use memory-only caching with MPSC queuing (`QUEUE_ADAPTER=mpsc`)
776
788
- **Hybrid**: Configure both Redis and SQLite for redundancy
789
+
790
+
### Queue Strategy Selection
791
+
792
+
**Queue Adapter Options**:
793
+
1. **Redis** (`QUEUE_ADAPTER=redis`) - Distributed queuing, best for multi-instance deployments
794
+
2. **SQLite** (`QUEUE_ADAPTER=sqlite`) - Persistent queuing, best for single-instance deployments
795
+
3. **MPSC** (`QUEUE_ADAPTER=mpsc`) - In-memory queuing, lightweight for single-instance without persistence needs
796
+
4. **No-op** (`QUEUE_ADAPTER=noop`) - Disable queuing entirely (testing only)
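
A minimal sketch of how each adapter pairs with its backing configuration (values are illustrative and taken from the examples above):

```bash
# Single instance with persistence
QUEUE_ADAPTER=sqlite
SQLITE_URL=sqlite:/data/quickdid.db
QUEUE_SQLITE_MAX_SIZE=10000

# Multi-instance / HA
QUEUE_ADAPTER=redis
REDIS_URL=redis://redis:6379/0
```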
777
797
778
798
### Redis Optimization
779
799
···
816
836
- Recommended range: 1-60 seconds
817
837
818
838
3. **Queue Adapter** (`QUEUE_ADAPTER`):
819
-
- Must be one of: `mpsc`, `redis`, `noop`, `none`
839
+
- Must be one of: `mpsc`, `redis`, `sqlite`, `noop`, `none`
820
840
- Case-sensitive
821
841
822
842
### Validation Errors
+34
-1
src/bin/quickdid.rs
···
17
17
http::{AppContext, create_router},
18
18
queue_adapter::{
19
19
HandleResolutionWork, QueueAdapter, create_mpsc_queue_from_channel, create_noop_queue,
20
-
create_redis_queue,
20
+
create_redis_queue, create_sqlite_queue, create_sqlite_queue_with_max_size,
21
21
},
22
22
task_manager::spawn_cancellable_task,
23
23
};
···
210
210
} else {
211
211
tracing::warn!(
212
212
"Redis queue adapter requested but no Redis URL configured, using no-op adapter"
213
+
);
214
+
create_noop_queue::<HandleResolutionWork>()
215
+
}
216
+
}
217
+
"sqlite" => {
218
+
// Use SQLite adapter
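// Resolution order: SQLite pool (with work shedding when
// queue_sqlite_max_size > 0) -> fall back to MPSC if the pool cannot be
// created -> no-op when no SQLite URL is configured at all.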
219
+
if let Some(url) = config.sqlite_url.as_ref() {
220
+
if let Some(pool) = try_create_sqlite_pool(url, "queue adapter").await {
221
+
if config.queue_sqlite_max_size > 0 {
222
+
tracing::info!(
223
+
"Creating SQLite queue adapter with work shedding (max_size: {})",
224
+
config.queue_sqlite_max_size
225
+
);
226
+
create_sqlite_queue_with_max_size::<HandleResolutionWork>(
227
+
pool,
228
+
config.queue_sqlite_max_size,
229
+
)
230
+
} else {
231
+
tracing::info!("Creating SQLite queue adapter (unlimited size)");
232
+
create_sqlite_queue::<HandleResolutionWork>(pool)
233
+
}
234
+
} else {
235
+
tracing::warn!("Failed to create SQLite pool for queue, falling back to MPSC queue adapter");
236
+
// Fall back to MPSC if SQLite fails
237
+
let (handle_sender, handle_receiver) =
238
+
tokio::sync::mpsc::channel::<HandleResolutionWork>(
239
+
config.queue_buffer_size,
240
+
);
241
+
create_mpsc_queue_from_channel(handle_sender, handle_receiver)
242
+
}
243
+
} else {
244
+
tracing::warn!(
245
+
"SQLite queue adapter requested but no SQLite URL configured, using no-op adapter"
213
246
);
214
247
create_noop_queue::<HandleResolutionWork>()
215
248
}
+25
-4
src/config.rs
···
96
96
CACHE_TTL_SQLITE TTL for SQLite cache in seconds (default: 7776000 = 90 days)
97
97
98
98
QUEUE CONFIGURATION:
99
-
QUEUE_ADAPTER Queue adapter: 'mpsc', 'redis', 'noop', 'none' (default: mpsc)
99
+
QUEUE_ADAPTER Queue adapter: 'mpsc', 'redis', 'sqlite', 'noop', 'none' (default: mpsc)
100
100
QUEUE_REDIS_URL Redis URL for queue adapter (uses REDIS_URL if not set)
101
101
QUEUE_REDIS_PREFIX Redis key prefix for queues (default: queue:handleresolver:)
102
102
QUEUE_REDIS_TIMEOUT Queue blocking timeout in seconds (default: 5)
103
103
QUEUE_WORKER_ID Worker ID for Redis queue (default: worker1)
104
104
QUEUE_BUFFER_SIZE Buffer size for MPSC queue (default: 1000)
105
+
QUEUE_SQLITE_MAX_SIZE Maximum SQLite queue size for work shedding (default: 10000, 0=unlimited)
105
106
"
106
107
)]
107
108
/// Command-line arguments and environment variables configuration
···
192
193
/// Values:
193
194
/// - "mpsc": In-memory multi-producer single-consumer queue
194
195
/// - "redis": Redis-backed distributed queue
196
+
/// - "sqlite": SQLite-backed persistent queue
195
197
/// - "noop": Disable queue processing (for testing)
196
198
/// - "none": Alias for "noop"
197
199
///
···
276
278
/// Higher values = less Redis polling overhead
277
279
#[arg(long, env = "QUEUE_REDIS_TIMEOUT", default_value = "5")]
278
280
pub queue_redis_timeout: u64,
281
+
282
+
/// Maximum queue size for SQLite adapter (work shedding)
283
+
///
284
+
/// Range: 100-1000000 (recommended)
285
+
/// Default: 10000
286
+
///
287
+
/// When the SQLite queue exceeds this limit, the oldest entries are deleted
288
+
/// to maintain the queue size. This prevents unbounded queue growth while
289
+
/// preserving the most recently queued work items.
290
+
///
291
+
/// Set to 0 to disable work shedding (unlimited queue size)
292
+
#[arg(long, env = "QUEUE_SQLITE_MAX_SIZE", default_value = "10000")]
293
+
pub queue_sqlite_max_size: u64,
279
294
}
280
295
281
296
/// Validated configuration for QuickDID service
···
332
347
/// SQLite database URL for caching (e.g., "sqlite:./quickdid.db")
333
348
pub sqlite_url: Option<String>,
334
349
335
-
/// Queue adapter type: "mpsc", "redis", or "noop"
350
+
/// Queue adapter type: "mpsc", "redis", "sqlite", or "noop"
336
351
pub queue_adapter: String,
337
352
338
353
/// Redis URL for queue operations (falls back to redis_url)
···
358
373
359
374
/// Redis blocking timeout for queue operations in seconds (e.g., 5)
360
375
pub queue_redis_timeout: u64,
376
+
377
+
/// Maximum queue size for SQLite adapter work shedding (e.g., 10000)
378
+
/// When exceeded, oldest entries are deleted to maintain this limit.
379
+
/// Set to 0 to disable work shedding (unlimited queue size).
380
+
pub queue_sqlite_max_size: u64,
361
381
}
362
382
363
383
impl Config {
···
510
530
cache_ttl_redis: args.cache_ttl_redis,
511
531
cache_ttl_sqlite: args.cache_ttl_sqlite,
512
532
queue_redis_timeout: args.queue_redis_timeout,
533
+
queue_sqlite_max_size: args.queue_sqlite_max_size,
513
534
})
514
535
}
515
536
···
560
581
));
561
582
}
562
583
match self.queue_adapter.as_str() {
563
-
"mpsc" | "redis" | "noop" | "none" => {}
584
+
"mpsc" | "redis" | "sqlite" | "noop" | "none" => {}
564
585
_ => {
565
586
return Err(ConfigError::InvalidValue(format!(
566
-
"Invalid QUEUE_ADAPTER '{}', must be 'mpsc', 'redis', or 'noop'",
587
+
"Invalid QUEUE_ADAPTER '{}', must be 'mpsc', 'redis', 'sqlite', or 'noop'",
567
588
self.queue_adapter
568
589
)));
569
590
}
+953
src/queue_adapter.rs
···
6
6
use async_trait::async_trait;
7
7
use deadpool_redis::{Pool as RedisPool, redis::AsyncCommands};
8
8
use serde::{Deserialize, Serialize};
9
+
use sqlx::{self, Row};
9
10
use std::sync::Arc;
10
11
use thiserror::Error;
11
12
use tokio::sync::{Mutex, mpsc};
···
447
448
}
448
449
}
449
450
451
+
/// SQLite-backed queue adapter implementation.
452
+
///
453
+
/// This adapter uses SQLite database for persistent queuing of work items.
454
+
/// It's suitable for single-instance deployments that need persistence
455
+
/// across service restarts while remaining lightweight.
456
+
///
457
+
/// # Features
458
+
///
459
+
/// - Persistent queuing across service restarts
460
+
/// - Simple FIFO ordering based on insertion time
461
+
/// - Single consumer design (no complex locking needed)
462
+
/// - Simple pull-and-delete semantics
463
+
/// - Work shedding to prevent unbounded queue growth
464
+
///
465
+
/// # Work Shedding
466
+
///
467
+
/// When `max_size` is configured (> 0), the adapter implements work shedding:
468
+
/// - New work items are always accepted
469
+
/// - When the queue exceeds `max_size`, oldest entries are automatically deleted
470
+
/// - This maintains the most recent work items while preventing unbounded growth
471
+
/// - Essential for long-running deployments to avoid disk space issues
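///
/// Shedding is batched rather than exact: `push()` only considers cleanup once
/// the queue is roughly 10% over `max_size`, then trims it back to about 80%
/// of `max_size` in a single `DELETE` (see the constants in `push()`).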
472
+
pub(crate) struct SqliteQueueAdapter<T>
473
+
where
474
+
T: Send + Sync + Serialize + for<'de> Deserialize<'de> + 'static,
475
+
{
476
+
/// SQLite connection pool
477
+
pool: sqlx::SqlitePool,
478
+
/// Maximum queue size (0 = unlimited)
479
+
/// When exceeded, oldest entries are deleted to maintain this limit
480
+
max_size: u64,
481
+
/// Type marker for generic parameter
482
+
_phantom: std::marker::PhantomData<T>,
483
+
}
484
+
485
+
impl<T> SqliteQueueAdapter<T>
486
+
where
487
+
T: Send + Sync + Serialize + for<'de> Deserialize<'de> + 'static,
488
+
{
489
+
/// Create a new SQLite queue adapter with unlimited queue size.
490
+
pub(crate) fn new(pool: sqlx::SqlitePool) -> Self {
491
+
Self::with_max_size(pool, 0)
492
+
}
493
+
494
+
/// Create a new SQLite queue adapter with specified maximum queue size.
495
+
///
496
+
/// # Arguments
497
+
///
498
+
/// * `pool` - SQLite connection pool
499
+
/// * `max_size` - Maximum number of entries in queue (0 = unlimited)
500
+
///
501
+
/// # Work Shedding Behavior
502
+
///
503
+
/// When `max_size` > 0:
504
+
/// - New work items are always accepted
505
+
/// - If queue size exceeds `max_size` after insertion, oldest entries are deleted
506
+
/// - This preserves the most recent work while preventing unbounded growth
507
+
pub(crate) fn with_max_size(pool: sqlx::SqlitePool, max_size: u64) -> Self {
508
+
Self {
509
+
pool,
510
+
max_size,
511
+
_phantom: std::marker::PhantomData,
512
+
}
513
+
}
514
+
}
515
+
516
+
#[async_trait]
517
+
impl<T> QueueAdapter<T> for SqliteQueueAdapter<T>
518
+
where
519
+
T: Send + Sync + Serialize + for<'de> Deserialize<'de> + 'static,
520
+
{
521
+
async fn pull(&self) -> Option<T> {
522
+
// Get the oldest queued item and delete it in a transaction
523
+
let mut transaction = match self.pool.begin().await {
524
+
Ok(tx) => tx,
525
+
Err(e) => {
526
+
error!("Failed to start SQLite transaction: {}", e);
527
+
return None;
528
+
}
529
+
};
530
+
531
+
// Select the oldest queued item
532
+
let record = match sqlx::query(
533
+
"SELECT id, work FROM handle_resolution_queue
534
+
ORDER BY queued_at ASC, id ASC
535
+
LIMIT 1"
536
+
)
537
+
.fetch_optional(&mut *transaction)
538
+
.await
539
+
{
540
+
Ok(Some(row)) => row,
541
+
Ok(None) => {
542
+
// No queued items available
543
+
debug!("No queued items available in SQLite queue");
544
+
return None;
545
+
}
546
+
Err(e) => {
547
+
error!("Failed to query SQLite queue: {}", e);
548
+
return None;
549
+
}
550
+
};
551
+
552
+
let item_id: i64 = record.get("id");
553
+
let work_json: String = record.get("work");
554
+
555
+
// Delete the item from the queue
556
+
if let Err(e) = sqlx::query("DELETE FROM handle_resolution_queue WHERE id = ?1")
557
+
.bind(item_id)
558
+
.execute(&mut *transaction)
559
+
.await
560
+
{
561
+
error!("Failed to delete item from queue: {}", e);
562
+
return None;
563
+
}
564
+
565
+
// Commit the transaction
566
+
if let Err(e) = transaction.commit().await {
567
+
error!("Failed to commit SQLite transaction: {}", e);
568
+
return None;
569
+
}
570
+
571
+
// Deserialize the work item from JSON
572
+
match serde_json::from_str(&work_json) {
573
+
Ok(work) => {
574
+
debug!("Pulled work item from SQLite queue");
575
+
Some(work)
576
+
}
577
+
Err(e) => {
578
+
error!("Failed to deserialize work item: {}", e);
579
+
None
580
+
}
581
+
}
582
+
}
583
+
584
+
async fn push(&self, work: T) -> Result<()> {
585
+
// Serialize the entire work item as JSON
586
+
let work_json = serde_json::to_string(&work)
587
+
.map_err(|e| QueueError::SerializationFailed(e.to_string()))?;
588
+
589
+
let current_timestamp = std::time::SystemTime::now()
590
+
.duration_since(std::time::UNIX_EPOCH)
591
+
.unwrap_or_default()
592
+
.as_secs() as i64;
593
+
594
+
// Optimized approach: Insert first, then check if cleanup needed
595
+
// The size check below uses a LIMIT-bounded count and is skipped entirely when max_size == 0
596
+
sqlx::query(
597
+
"INSERT INTO handle_resolution_queue (work, queued_at) VALUES (?1, ?2)"
598
+
)
599
+
.bind(&work_json)
600
+
.bind(current_timestamp)
601
+
.execute(&self.pool)
602
+
.await
603
+
.map_err(|e| QueueError::PushFailed(format!("Failed to insert work item: {}", e)))?;
604
+
605
+
// Implement optimized work shedding if max_size is configured
606
+
if self.max_size > 0 {
607
+
// The size check runs on every push but stays cheap; actual cleanup only fires once the queue is likely over the limit
608
+
// Use a limited count to avoid full table scan
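// Worked example, assuming max_size = 10_000:
//   check_limit = 10_000 + 10_000 / 10 = 11_000 (cleanup considered once ~10% over)
//   target_size = (10_000 as f64 * 0.8) as i64 = 8_000 (trim back to ~80%)
//   to_delete   = count - 8_000 oldest rows, removed in a single batch DELETE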
609
+
let check_limit = self.max_size as i64 + (self.max_size as i64 / 10).max(1); // Check 10% over limit
610
+
let approx_count: Option<i64> = sqlx::query_scalar(
611
+
"SELECT COUNT(*) FROM (
612
+
SELECT 1 FROM handle_resolution_queue LIMIT ?1
613
+
) AS limited_count"
614
+
)
615
+
.bind(check_limit)
616
+
.fetch_one(&self.pool)
617
+
.await
618
+
.map_err(|e| QueueError::PushFailed(format!("Failed to check queue size: {}", e)))?;
619
+
620
+
// Only perform cleanup if we're definitely over the limit
621
+
if let Some(count) = approx_count && count >= check_limit {
622
+
// Perform batch cleanup - delete more than just the excess to reduce frequency
623
+
// Delete 20% more than needed to avoid frequent shedding
624
+
let target_size = (self.max_size as f64 * 0.8) as i64; // Keep 80% of max_size
625
+
let to_delete = count - target_size;
626
+
627
+
if to_delete > 0 {
628
+
// Optimized deletion: First get the cutoff id and timestamp
629
+
// This avoids the expensive subquery in the DELETE statement
630
+
let cutoff: Option<(i64, i64)> = sqlx::query_as(
631
+
"SELECT id, queued_at FROM handle_resolution_queue
632
+
ORDER BY queued_at ASC, id ASC
633
+
LIMIT 1 OFFSET ?1"
634
+
)
635
+
.bind(to_delete - 1)
636
+
.fetch_optional(&self.pool)
637
+
.await
638
+
.map_err(|e| QueueError::PushFailed(format!("Failed to find cutoff: {}", e)))?;
639
+
640
+
if let Some((cutoff_id, cutoff_timestamp)) = cutoff {
641
+
// Delete entries older than cutoff, or equal timestamp with lower id
642
+
// This handles the case where multiple entries have the same timestamp
643
+
let deleted_result = sqlx::query(
644
+
"DELETE FROM handle_resolution_queue
645
+
WHERE queued_at < ?1
646
+
OR (queued_at = ?1 AND id <= ?2)"
647
+
)
648
+
.bind(cutoff_timestamp)
649
+
.bind(cutoff_id)
650
+
.execute(&self.pool)
651
+
.await
652
+
.map_err(|e| QueueError::PushFailed(format!("Failed to delete excess entries: {}", e)))?;
653
+
654
+
let deleted_count = deleted_result.rows_affected();
655
+
if deleted_count > 0 {
656
+
tracing::info!(
657
+
"Work shedding: deleted {} oldest entries (target size: {}, max: {})",
658
+
deleted_count,
659
+
target_size,
660
+
self.max_size
661
+
);
662
+
}
663
+
}
664
+
}
665
+
}
666
+
}
667
+
668
+
debug!("Pushed work item to SQLite queue (max_size: {})", self.max_size);
669
+
Ok(())
670
+
}
671
+
672
+
async fn ack(&self, _item: &T) -> Result<()> {
673
+
// With the simplified SQLite queue design, items are deleted when pulled,
674
+
// so acknowledgment is a no-op (item is already processed and removed)
675
+
debug!("Acknowledged work item in SQLite queue (no-op)");
676
+
Ok(())
677
+
}
678
+
679
+
async fn depth(&self) -> Option<usize> {
680
+
match sqlx::query_scalar::<_, i64>(
681
+
"SELECT COUNT(*) FROM handle_resolution_queue"
682
+
)
683
+
.fetch_one(&self.pool)
684
+
.await
685
+
{
686
+
Ok(count) => Some(count as usize),
687
+
Err(e) => {
688
+
warn!("Failed to get SQLite queue depth: {}", e);
689
+
None
690
+
}
691
+
}
692
+
}
693
+
694
+
async fn is_healthy(&self) -> bool {
695
+
// Test the connection by running a simple query
696
+
sqlx::query_scalar::<_, i64>("SELECT 1")
697
+
.fetch_one(&self.pool)
698
+
.await
699
+
.map(|_| true)
700
+
.unwrap_or(false)
701
+
}
702
+
}
703
+
450
704
// ========= Factory Functions for Queue Adapters =========
451
705
452
706
/// Create a new MPSC queue adapter with the specified buffer size.
···
519
773
Arc::new(NoopQueueAdapter::new())
520
774
}
521
775
776
+
/// Create a new SQLite queue adapter with unlimited queue size.
777
+
///
778
+
/// This creates a persistent queue backed by SQLite database suitable
779
+
/// for single-instance deployments that need persistence across restarts.
780
+
/// The queue has no size limit and may grow unbounded.
781
+
///
782
+
/// # Arguments
783
+
///
784
+
/// * `pool` - SQLite connection pool
785
+
///
786
+
/// # Example
787
+
///
788
+
/// ```no_run
789
+
/// use quickdid::queue_adapter::{create_sqlite_queue, HandleResolutionWork};
790
+
/// use quickdid::sqlite_schema::create_sqlite_pool;
791
+
/// use std::sync::Arc;
792
+
///
793
+
/// # async fn example() -> anyhow::Result<()> {
794
+
/// let pool = create_sqlite_pool("sqlite:./quickdid.db").await?;
795
+
/// let queue = create_sqlite_queue::<HandleResolutionWork>(pool);
796
+
/// # Ok(())
797
+
/// # }
798
+
/// ```
799
+
pub fn create_sqlite_queue<T>(pool: sqlx::SqlitePool) -> Arc<dyn QueueAdapter<T>>
800
+
where
801
+
T: Send + Sync + Serialize + for<'de> Deserialize<'de> + 'static,
802
+
{
803
+
Arc::new(SqliteQueueAdapter::new(pool))
804
+
}
805
+
806
+
/// Create a new SQLite queue adapter with work shedding.
807
+
///
808
+
/// This creates a persistent queue with configurable maximum size.
809
+
/// When the queue exceeds `max_size`, the oldest entries are automatically
810
+
/// deleted to maintain the limit, preserving the most recent work items.
811
+
///
812
+
/// # Arguments
813
+
///
814
+
/// * `pool` - SQLite connection pool
815
+
/// * `max_size` - Maximum number of entries (0 = unlimited)
816
+
///
817
+
/// # Work Shedding Behavior
818
+
///
819
+
/// - New work items are always accepted
820
+
/// - When queue size exceeds `max_size`, oldest entries are deleted
821
+
/// - Cleanup is batched after insertion, so the queue may briefly exceed `max_size`
822
+
/// - Essential for long-running deployments to prevent disk space issues
823
+
///
824
+
/// # Example
825
+
///
826
+
/// ```no_run
827
+
/// use quickdid::queue_adapter::{create_sqlite_queue_with_max_size, HandleResolutionWork};
828
+
/// use quickdid::sqlite_schema::create_sqlite_pool;
829
+
/// use std::sync::Arc;
830
+
///
831
+
/// # async fn example() -> anyhow::Result<()> {
832
+
/// let pool = create_sqlite_pool("sqlite:./quickdid.db").await?;
833
+
/// // Limit queue to 10,000 entries with automatic work shedding
834
+
/// let queue = create_sqlite_queue_with_max_size::<HandleResolutionWork>(pool, 10000);
835
+
/// # Ok(())
836
+
/// # }
837
+
/// ```
838
+
pub fn create_sqlite_queue_with_max_size<T>(
839
+
pool: sqlx::SqlitePool,
840
+
max_size: u64,
841
+
) -> Arc<dyn QueueAdapter<T>>
842
+
where
843
+
T: Send + Sync + Serialize + for<'de> Deserialize<'de> + 'static,
844
+
{
845
+
Arc::new(SqliteQueueAdapter::with_max_size(pool, max_size))
846
+
}
847
+
522
848
#[cfg(test)]
523
849
mod tests {
524
850
use super::*;
···
669
995
670
996
// Should be healthy if Redis is running
671
997
assert!(adapter.is_healthy().await);
998
+
}
999
+
1000
+
#[tokio::test]
1001
+
async fn test_sqlite_queue_adapter_push_pull() {
1002
+
// Create in-memory SQLite database for testing
1003
+
let pool = sqlx::SqlitePool::connect("sqlite::memory:")
1004
+
.await
1005
+
.expect("Failed to connect to in-memory SQLite");
1006
+
1007
+
// Create the queue schema
1008
+
crate::sqlite_schema::create_schema(&pool)
1009
+
.await
1010
+
.expect("Failed to create schema");
1011
+
1012
+
let adapter = Arc::new(SqliteQueueAdapter::<HandleResolutionWork>::new(pool.clone()));
1013
+
1014
+
let test_handle = "alice.example.com";
1015
+
let work = HandleResolutionWork {
1016
+
handle: test_handle.to_string(),
1017
+
};
1018
+
1019
+
// Test push
1020
+
adapter.push(work.clone()).await.unwrap();
1021
+
1022
+
// Verify the record is actually in the database
1023
+
let records: Vec<(i64, String, i64)> = sqlx::query_as(
1024
+
"SELECT id, work, queued_at FROM handle_resolution_queue ORDER BY queued_at ASC"
1025
+
)
1026
+
.fetch_all(&pool)
1027
+
.await
1028
+
.expect("Failed to query database");
1029
+
1030
+
assert_eq!(records.len(), 1);
1031
+
let (db_id, db_work_json, db_queued_at) = &records[0];
1032
+
assert!(*db_id > 0);
1033
+
assert!(*db_queued_at > 0);
1034
+
1035
+
// Verify the JSON content
1036
+
let stored_work: HandleResolutionWork = serde_json::from_str(db_work_json)
1037
+
.expect("Failed to deserialize stored work");
1038
+
assert_eq!(stored_work.handle, test_handle);
1039
+
1040
+
// Verify depth
1041
+
assert_eq!(adapter.depth().await, Some(1));
1042
+
1043
+
// Test pull
1044
+
let pulled = adapter.pull().await;
1045
+
assert!(pulled.is_some());
1046
+
let pulled_work = pulled.unwrap();
1047
+
assert_eq!(pulled_work.handle, test_handle);
1048
+
1049
+
// Verify the record was deleted from database after pull
1050
+
let records_after_pull: Vec<(i64, String, i64)> = sqlx::query_as(
1051
+
"SELECT id, work, queued_at FROM handle_resolution_queue"
1052
+
)
1053
+
.fetch_all(&pool)
1054
+
.await
1055
+
.expect("Failed to query database after pull");
1056
+
1057
+
assert_eq!(records_after_pull.len(), 0);
1058
+
1059
+
// Test ack - should be no-op since item already deleted
1060
+
adapter.ack(&pulled_work).await.expect("Ack should succeed");
1061
+
1062
+
// Verify depth after ack
1063
+
assert_eq!(adapter.depth().await, Some(0));
1064
+
1065
+
// Verify no more items to pull
1066
+
let empty_pull = adapter.pull().await;
1067
+
assert!(empty_pull.is_none());
1068
+
}
1069
+
1070
+
#[tokio::test]
1071
+
async fn test_sqlite_queue_adapter_multiple_items() {
1072
+
// Create in-memory SQLite database for testing
1073
+
let pool = sqlx::SqlitePool::connect("sqlite::memory:")
1074
+
.await
1075
+
.expect("Failed to connect to in-memory SQLite");
1076
+
1077
+
// Create the queue schema
1078
+
crate::sqlite_schema::create_schema(&pool)
1079
+
.await
1080
+
.expect("Failed to create schema");
1081
+
1082
+
let adapter = Arc::new(SqliteQueueAdapter::<HandleResolutionWork>::new(pool.clone()));
1083
+
1084
+
// Push multiple items
1085
+
let handles = vec!["alice.example.com", "bob.example.com", "charlie.example.com"];
1086
+
for handle in &handles {
1087
+
let work = HandleResolutionWork {
1088
+
handle: handle.to_string(),
1089
+
};
1090
+
adapter.push(work).await.unwrap();
1091
+
}
1092
+
1093
+
// Verify all records are in database with correct order
1094
+
let records: Vec<(i64, String, i64)> = sqlx::query_as(
1095
+
"SELECT id, work, queued_at FROM handle_resolution_queue ORDER BY queued_at ASC"
1096
+
)
1097
+
.fetch_all(&pool)
1098
+
.await
1099
+
.expect("Failed to query database");
1100
+
1101
+
assert_eq!(records.len(), 3);
1102
+
1103
+
// Verify FIFO ordering by timestamp
1104
+
assert!(records[0].2 <= records[1].2); // queued_at timestamps should be in order
1105
+
assert!(records[1].2 <= records[2].2);
1106
+
1107
+
// Verify JSON content matches expected handles
1108
+
for (i, (db_id, db_work_json, _)) in records.iter().enumerate() {
1109
+
assert!(*db_id > 0);
1110
+
let stored_work: HandleResolutionWork = serde_json::from_str(db_work_json)
1111
+
.expect("Failed to deserialize stored work");
1112
+
assert_eq!(stored_work.handle, handles[i]);
1113
+
}
1114
+
1115
+
// Verify depth
1116
+
assert_eq!(adapter.depth().await, Some(3));
1117
+
1118
+
// Pull items in FIFO order and verify database state changes
1119
+
for (i, expected_handle) in handles.iter().enumerate() {
1120
+
let pulled = adapter.pull().await;
1121
+
assert!(pulled.is_some());
1122
+
let pulled_work = pulled.unwrap();
1123
+
assert_eq!(pulled_work.handle, *expected_handle);
1124
+
1125
+
// Verify database now has one fewer record
1126
+
let remaining_count: i64 = sqlx::query_scalar(
1127
+
"SELECT COUNT(*) FROM handle_resolution_queue"
1128
+
)
1129
+
.fetch_one(&pool)
1130
+
.await
1131
+
.expect("Failed to count remaining records");
1132
+
assert_eq!(remaining_count, (handles.len() - i - 1) as i64);
1133
+
1134
+
// Ack the item (no-op)
1135
+
adapter.ack(&pulled_work).await.expect("Ack should succeed");
1136
+
}
1137
+
1138
+
// Verify queue is empty in both adapter and database
1139
+
assert_eq!(adapter.depth().await, Some(0));
1140
+
1141
+
let final_records: Vec<(i64, String, i64)> = sqlx::query_as(
1142
+
"SELECT id, work, queued_at FROM handle_resolution_queue"
1143
+
)
1144
+
.fetch_all(&pool)
1145
+
.await
1146
+
.expect("Failed to query database");
1147
+
assert_eq!(final_records.len(), 0);
1148
+
1149
+
let empty_pull = adapter.pull().await;
1150
+
assert!(empty_pull.is_none());
1151
+
}
1152
+
1153
+
#[tokio::test]
1154
+
async fn test_sqlite_queue_adapter_simple_pull_delete() {
1155
+
// Create in-memory SQLite database for testing
1156
+
let pool = sqlx::SqlitePool::connect("sqlite::memory:")
1157
+
.await
1158
+
.expect("Failed to connect to in-memory SQLite");
1159
+
1160
+
// Create the queue schema
1161
+
crate::sqlite_schema::create_schema(&pool)
1162
+
.await
1163
+
.expect("Failed to create schema");
1164
+
1165
+
let adapter = Arc::new(SqliteQueueAdapter::<HandleResolutionWork>::new(pool.clone()));
1166
+
1167
+
let test_handle = "simple.example.com";
1168
+
let work = HandleResolutionWork {
1169
+
handle: test_handle.to_string(),
1170
+
};
1171
+
1172
+
// Push item
1173
+
adapter.push(work.clone()).await.unwrap();
1174
+
1175
+
// Verify the record exists in database with correct JSON
1176
+
let records: Vec<(i64, String, i64)> = sqlx::query_as(
1177
+
"SELECT id, work, queued_at FROM handle_resolution_queue"
1178
+
)
1179
+
.fetch_all(&pool)
1180
+
.await
1181
+
.expect("Failed to query database");
1182
+
1183
+
assert_eq!(records.len(), 1);
1184
+
let (db_id, db_work_json, db_queued_at) = &records[0];
1185
+
assert!(*db_id > 0);
1186
+
assert!(*db_queued_at > 0);
1187
+
1188
+
// Verify JSON content
1189
+
let stored_work: HandleResolutionWork = serde_json::from_str(db_work_json)
1190
+
.expect("Failed to deserialize stored work");
1191
+
assert_eq!(stored_work.handle, test_handle);
1192
+
1193
+
// Verify item is in queue using schema stats
1194
+
let total_before = crate::sqlite_schema::get_queue_stats(&pool)
1195
+
.await
1196
+
.expect("Failed to get queue stats");
1197
+
assert_eq!(total_before, 1);
1198
+
1199
+
// Pull item (should delete it immediately)
1200
+
let pulled = adapter.pull().await;
1201
+
assert!(pulled.is_some());
1202
+
let pulled_work = pulled.unwrap();
1203
+
assert_eq!(pulled_work.handle, test_handle);
1204
+
1205
+
// Verify the record was deleted from database
1206
+
let records_after_pull: Vec<(i64, String, i64)> = sqlx::query_as(
1207
+
"SELECT id, work, queued_at FROM handle_resolution_queue"
1208
+
)
1209
+
.fetch_all(&pool)
1210
+
.await
1211
+
.expect("Failed to query database after pull");
1212
+
assert_eq!(records_after_pull.len(), 0);
1213
+
1214
+
// Verify that pulling again returns None (no records left)
1215
+
let empty_pull = adapter.pull().await;
1216
+
assert!(empty_pull.is_none());
1217
+
1218
+
// Verify queue is now empty after pull using schema stats
1219
+
let total_after = crate::sqlite_schema::get_queue_stats(&pool)
1220
+
.await
1221
+
.expect("Failed to get queue stats");
1222
+
assert_eq!(total_after, 0);
1223
+
1224
+
// Ack the item (should be no-op)
1225
+
adapter.ack(&pulled_work).await.expect("Ack should succeed");
1226
+
1227
+
// Verify queue is still empty using schema stats
1228
+
let total_final = crate::sqlite_schema::get_queue_stats(&pool)
1229
+
.await
1230
+
.expect("Failed to get queue stats after ack");
1231
+
assert_eq!(total_final, 0);
1232
+
}
1233
+
1234
+
#[tokio::test]
1235
+
async fn test_sqlite_queue_adapter_health() {
1236
+
// Create in-memory SQLite database for testing
1237
+
let pool = sqlx::SqlitePool::connect("sqlite::memory:")
1238
+
.await
1239
+
.expect("Failed to connect to in-memory SQLite");
1240
+
1241
+
// Create the queue schema
1242
+
crate::sqlite_schema::create_schema(&pool)
1243
+
.await
1244
+
.expect("Failed to create schema");
1245
+
1246
+
let adapter = Arc::new(SqliteQueueAdapter::<HandleResolutionWork>::new(pool));
1247
+
1248
+
// Should be healthy if SQLite is working
1249
+
assert!(adapter.is_healthy().await);
1250
+
}
1251
+
1252
+
#[tokio::test]
1253
+
async fn test_sqlite_queue_adapter_ack_no_op() {
1254
+
// Create in-memory SQLite database for testing
1255
+
let pool = sqlx::SqlitePool::connect("sqlite::memory:")
1256
+
.await
1257
+
.expect("Failed to connect to in-memory SQLite");
1258
+
1259
+
// Create the queue schema
1260
+
crate::sqlite_schema::create_schema(&pool)
1261
+
.await
1262
+
.expect("Failed to create schema");
1263
+
1264
+
let adapter = Arc::new(SqliteQueueAdapter::<HandleResolutionWork>::new(pool));
1265
+
1266
+
// Ack should always succeed as it's a no-op
1267
+
let any_work = HandleResolutionWork {
1268
+
handle: "any.example.com".to_string(),
1269
+
};
1270
+
1271
+
let result = adapter.ack(&any_work).await;
1272
+
assert!(result.is_ok());
1273
+
}
1274
+
1275
+
#[tokio::test]
1276
+
async fn test_sqlite_queue_adapter_generic_work_type() {
1277
+
// Test with a different work type to demonstrate genericity
1278
+
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
1279
+
struct CustomWork {
1280
+
id: u64,
1281
+
name: String,
1282
+
data: Vec<i32>,
1283
+
}
1284
+
1285
+
// Create in-memory SQLite database for testing
1286
+
let pool = sqlx::SqlitePool::connect("sqlite::memory:")
1287
+
.await
1288
+
.expect("Failed to connect to in-memory SQLite");
1289
+
1290
+
// Create the queue schema
1291
+
crate::sqlite_schema::create_schema(&pool)
1292
+
.await
1293
+
.expect("Failed to create schema");
1294
+
1295
+
let adapter = Arc::new(SqliteQueueAdapter::<CustomWork>::new(pool.clone()));
1296
+
1297
+
// Create custom work items
1298
+
let work1 = CustomWork {
1299
+
id: 123,
1300
+
name: "test_work".to_string(),
1301
+
data: vec![1, 2, 3, 4, 5],
1302
+
};
1303
+
1304
+
let work2 = CustomWork {
1305
+
id: 456,
1306
+
name: "another_work".to_string(),
1307
+
data: vec![10, 20],
1308
+
};
1309
+
1310
+
// Test push for both items
1311
+
adapter.push(work1.clone()).await.unwrap();
1312
+
adapter.push(work2.clone()).await.unwrap();
1313
+
1314
+
// Verify the records are in database with correct JSON serialization
1315
+
let records: Vec<(i64, String, i64)> = sqlx::query_as(
1316
+
"SELECT id, work, queued_at FROM handle_resolution_queue ORDER BY queued_at ASC"
1317
+
)
1318
+
.fetch_all(&pool)
1319
+
.await
1320
+
.expect("Failed to query database");
1321
+
1322
+
assert_eq!(records.len(), 2);
1323
+
1324
+
// Verify first work item JSON
1325
+
let stored_work1: CustomWork = serde_json::from_str(&records[0].1)
1326
+
.expect("Failed to deserialize first work item");
1327
+
assert_eq!(stored_work1, work1);
1328
+
1329
+
// Verify the JSON contains all expected fields
1330
+
let json_value1: serde_json::Value = serde_json::from_str(&records[0].1)
1331
+
.expect("Failed to parse JSON");
1332
+
assert_eq!(json_value1["id"], 123);
1333
+
assert_eq!(json_value1["name"], "test_work");
1334
+
assert_eq!(json_value1["data"], serde_json::json!([1, 2, 3, 4, 5]));
1335
+
1336
+
// Verify second work item JSON
1337
+
let stored_work2: CustomWork = serde_json::from_str(&records[1].1)
1338
+
.expect("Failed to deserialize second work item");
1339
+
assert_eq!(stored_work2, work2);
1340
+
1341
+
let json_value2: serde_json::Value = serde_json::from_str(&records[1].1)
1342
+
.expect("Failed to parse JSON");
1343
+
assert_eq!(json_value2["id"], 456);
1344
+
assert_eq!(json_value2["name"], "another_work");
1345
+
assert_eq!(json_value2["data"], serde_json::json!([10, 20]));
1346
+
1347
+
// Verify depth
1348
+
assert_eq!(adapter.depth().await, Some(2));
1349
+
1350
+
// Test pull - should get items in FIFO order
1351
+
let pulled1 = adapter.pull().await;
1352
+
assert!(pulled1.is_some());
1353
+
let pulled_work1 = pulled1.unwrap();
1354
+
assert_eq!(pulled_work1, work1);
1355
+
1356
+
// Verify database now has one record
1357
+
let count_after_first_pull: i64 = sqlx::query_scalar(
1358
+
"SELECT COUNT(*) FROM handle_resolution_queue"
1359
+
)
1360
+
.fetch_one(&pool)
1361
+
.await
1362
+
.expect("Failed to count records");
1363
+
assert_eq!(count_after_first_pull, 1);
1364
+
1365
+
let pulled2 = adapter.pull().await;
1366
+
assert!(pulled2.is_some());
1367
+
let pulled_work2 = pulled2.unwrap();
1368
+
assert_eq!(pulled_work2, work2);
1369
+
1370
+
// Verify database is now empty
1371
+
let count_after_second_pull: i64 = sqlx::query_scalar(
1372
+
"SELECT COUNT(*) FROM handle_resolution_queue"
1373
+
)
1374
+
.fetch_one(&pool)
1375
+
.await
1376
+
.expect("Failed to count records");
1377
+
assert_eq!(count_after_second_pull, 0);
1378
+
1379
+
// Verify queue is empty
1380
+
assert_eq!(adapter.depth().await, Some(0));
1381
+
let empty_pull = adapter.pull().await;
1382
+
assert!(empty_pull.is_none());
1383
+
1384
+
// Test ack (should be no-op)
1385
+
adapter.ack(&pulled_work1).await.expect("Ack should succeed");
1386
+
adapter.ack(&pulled_work2).await.expect("Ack should succeed");
1387
+
}
1388
+
1389
+
#[tokio::test]
1390
+
async fn test_sqlite_queue_adapter_work_shedding() {
1391
+
// Create in-memory SQLite database for testing
1392
+
let pool = sqlx::SqlitePool::connect("sqlite::memory:")
1393
+
.await
1394
+
.expect("Failed to connect to in-memory SQLite");
1395
+
1396
+
// Create the queue schema
1397
+
crate::sqlite_schema::create_schema(&pool)
1398
+
.await
1399
+
.expect("Failed to create schema");
1400
+
1401
+
// Create adapter with small max_size for testing work shedding
1402
+
let max_size = 10; // Use larger size to properly test batch deletion
1403
+
let adapter = Arc::new(SqliteQueueAdapter::<HandleResolutionWork>::with_max_size(
1404
+
pool.clone(),
1405
+
max_size
1406
+
));
1407
+
1408
+
// Verify initial empty state
1409
+
assert_eq!(adapter.depth().await, Some(0));
1410
+
1411
+
// Push items up to the limit (should not trigger shedding)
1412
+
let mut handles = Vec::new();
1413
+
for i in 0..max_size {
1414
+
let handle = format!("test-{:03}", i);
1415
+
handles.push(handle.clone());
1416
+
let work = HandleResolutionWork { handle };
1417
+
adapter.push(work).await.expect("Push should succeed");
1418
+
}
1419
+
1420
+
// Verify all items are present
1421
+
assert_eq!(adapter.depth().await, Some(max_size as usize));
1422
+
1423
+
// Push beyond 110% of max_size to trigger batch shedding
1424
+
// The implementation checks at 110% and deletes down to 80%
1425
+
let trigger_point = max_size + (max_size / 10) + 1;
1426
+
for i in max_size..trigger_point {
1427
+
let handle = format!("test-{:03}", i);
1428
+
handles.push(handle.clone());
1429
+
let work = HandleResolutionWork { handle };
1430
+
adapter.push(work).await.expect("Push should succeed");
1431
+
}
1432
+
1433
+
// After triggering shedding, queue should be around 80% of max_size
1434
+
let depth_after_shedding = adapter.depth().await.unwrap();
1435
+
let expected_size = (max_size as f64 * 0.8) as usize;
1436
+
1437
+
// Allow some variance due to batch deletion
1438
+
assert!(
1439
+
depth_after_shedding <= expected_size + 1,
1440
+
"Queue size {} should be around 80% of max_size ({})",
1441
+
depth_after_shedding,
1442
+
expected_size
1443
+
);
1444
+
1445
+
// Verify oldest items were deleted and newest items remain
1446
+
let records: Vec<(i64, String, i64)> = sqlx::query_as(
1447
+
"SELECT id, work, queued_at FROM handle_resolution_queue ORDER BY queued_at ASC"
1448
+
)
1449
+
.fetch_all(&pool)
1450
+
.await
1451
+
.expect("Failed to query database after shedding");
1452
+
1453
+
// Some of the oldest items should be gone (but not necessarily all the first ones)
1454
+
// With batch deletion to 80%, we keep recent items
1455
+
let last_item: HandleResolutionWork = serde_json::from_str(&records[records.len() - 1].1)
1456
+
.expect("Failed to deserialize last work");
1457
+
// Should have the most recent items
1458
+
assert!(last_item.handle.starts_with("test-01"), "Should have recent items");
1459
+
1460
+
// Verify FIFO order is maintained for remaining items
1461
+
let mut prev_id = 0;
1462
+
for record in &records {
1463
+
let id: i64 = record.0;
1464
+
assert!(id > prev_id, "IDs should be in ascending order");
1465
+
prev_id = id;
1466
+
}
1467
+
}
1468
+
1469
+
#[tokio::test]
1470
+
async fn test_sqlite_queue_adapter_work_shedding_disabled() {
1471
+
// Create in-memory SQLite database for testing
1472
+
let pool = sqlx::SqlitePool::connect("sqlite::memory:")
1473
+
.await
1474
+
.expect("Failed to connect to in-memory SQLite");
1475
+
1476
+
// Create the queue schema
1477
+
crate::sqlite_schema::create_schema(&pool)
1478
+
.await
1479
+
.expect("Failed to create schema");
1480
+
1481
+
// Create adapter with max_size = 0 (disabled work shedding)
1482
+
let adapter = Arc::new(SqliteQueueAdapter::<HandleResolutionWork>::with_max_size(
1483
+
pool.clone(),
1484
+
0
1485
+
));
1486
+
1487
+
// Push many items (should not trigger any shedding)
1488
+
let mut expected_handles = Vec::new();
1489
+
for i in 0..100 {
1490
+
let handle = format!("test-{:03}", i);
1491
+
expected_handles.push(handle.clone());
1492
+
let work = HandleResolutionWork { handle };
1493
+
adapter.push(work).await.expect("Push should succeed");
1494
+
}
1495
+
1496
+
// Verify all items are present (no shedding occurred)
1497
+
assert_eq!(adapter.depth().await, Some(100));
1498
+
1499
+
// Verify all items are in database
1500
+
let records: Vec<(i64, String, i64)> = sqlx::query_as(
1501
+
"SELECT id, work, queued_at FROM handle_resolution_queue ORDER BY queued_at ASC"
1502
+
)
1503
+
.fetch_all(&pool)
1504
+
.await
1505
+
.expect("Failed to query database");
1506
+
1507
+
assert_eq!(records.len(), 100);
1508
+
1509
+
// Verify all items are present in correct order
1510
+
for (i, expected_handle) in expected_handles.iter().enumerate() {
1511
+
let stored_work: HandleResolutionWork = serde_json::from_str(&records[i].1)
1512
+
.expect("Failed to deserialize stored work");
1513
+
assert_eq!(stored_work.handle, *expected_handle);
1514
+
}
1515
+
}
1516
+
1517
+
#[tokio::test]
1518
+
async fn test_sqlite_queue_adapter_performance_optimizations() {
1519
+
// Create in-memory SQLite database for testing
1520
+
let pool = sqlx::SqlitePool::connect("sqlite::memory:")
1521
+
.await
1522
+
.expect("Failed to connect to in-memory SQLite");
1523
+
1524
+
// Create the queue schema
1525
+
crate::sqlite_schema::create_schema(&pool)
1526
+
.await
1527
+
.expect("Failed to create schema");
1528
+
1529
+
// Create adapter with reasonable max_size
1530
+
let max_size = 100;
1531
+
let adapter = Arc::new(SqliteQueueAdapter::<HandleResolutionWork>::with_max_size(
1532
+
pool.clone(),
1533
+
max_size
1534
+
));
1535
+
1536
+
// Test 1: Verify inserts don't trigger cleanup when well under limit
1537
+
// Push 50 items (50% of max_size) - should not trigger any cleanup checks
1538
+
for i in 0..50 {
1539
+
let work = HandleResolutionWork {
1540
+
handle: format!("handle-{:04}", i),
1541
+
};
1542
+
adapter.push(work).await.expect("Push should succeed");
1543
+
}
1544
+
assert_eq!(adapter.depth().await, Some(50));
1545
+
1546
+
// Test 2: Verify batch deletion efficiency
1547
+
// Push to 110% to trigger batch cleanup
1548
+
for i in 50..111 {
1549
+
let work = HandleResolutionWork {
1550
+
handle: format!("handle-{:04}", i),
1551
+
};
1552
+
adapter.push(work).await.expect("Push should succeed");
1553
+
// Add tiny delay to ensure different timestamps for proper ordering
1554
+
if i % 10 == 0 {
1555
+
tokio::time::sleep(tokio::time::Duration::from_millis(1)).await;
1556
+
}
1557
+
}
1558
+
1559
+
// Should have deleted down to ~80% of max_size
1560
+
let depth_after_batch = adapter.depth().await.unwrap();
1561
+
assert!(
1562
+
(79..=81).contains(&depth_after_batch),
1563
+
"After batch deletion, size should be ~80 (got {})",
1564
+
depth_after_batch
1565
+
);
1566
+
1567
+
// Test 3: Verify cleanup doesn't happen again immediately
1568
+
// Push a few more items - should not trigger another cleanup
1569
+
for i in 111..115 {
1570
+
let work = HandleResolutionWork {
1571
+
handle: format!("handle-{:04}", i),
1572
+
};
1573
+
adapter.push(work).await.expect("Push should succeed");
1574
+
}
1575
+
1576
+
let depth_no_cleanup = adapter.depth().await.unwrap();
1577
+
assert!(
1578
+
depth_no_cleanup > 80 && depth_no_cleanup < 90,
1579
+
"Should not have triggered cleanup yet (got {})",
1580
+
depth_no_cleanup
1581
+
);
1582
+
1583
+
// Test 4: Verify timestamp-based deletion is working correctly
1584
+
// The oldest items should be gone after batch deletion
1585
+
let records: Vec<(i64, String, i64)> = sqlx::query_as(
1586
+
"SELECT id, work, queued_at FROM handle_resolution_queue
1587
+
ORDER BY queued_at ASC LIMIT 5"
1588
+
)
1589
+
.fetch_all(&pool)
1590
+
.await
1591
+
.expect("Failed to query database");
1592
+
1593
+
// Verify we have recent items (not necessarily the oldest)
1594
+
let oldest_work: HandleResolutionWork = serde_json::from_str(&records[0].1)
1595
+
.expect("Failed to deserialize work");
1596
+
let oldest_num: i32 = oldest_work.handle
1597
+
.trim_start_matches("handle-")
1598
+
.parse()
1599
+
.unwrap_or(0);
1600
+
1601
+
// After batch deletion to 80%, roughly the first 30 oldest items should have been deleted
1602
+
// But allow some variance due to timing
1603
+
assert!(
1604
+
oldest_num >= 20,
1605
+
"Should have deleted old items, oldest is now: {}",
1606
+
oldest_work.handle
1607
+
);
1608
+
1609
+
// Test 5: Verify FIFO order is maintained after batch operations
1610
+
let mut prev_timestamp = 0i64;
1611
+
let all_records: Vec<(i64, String, i64)> = sqlx::query_as(
1612
+
"SELECT id, work, queued_at FROM handle_resolution_queue ORDER BY queued_at ASC"
1613
+
)
1614
+
.fetch_all(&pool)
1615
+
.await
1616
+
.expect("Failed to query database");
1617
+
1618
+
for record in &all_records {
1619
+
assert!(
1620
+
record.2 >= prev_timestamp,
1621
+
"Timestamps should be in ascending order"
1622
+
);
1623
+
prev_timestamp = record.2;
1624
+
}
672
1625
}
673
1626
}
+112
-14
src/sqlite_schema.rs
···
20
20
ON handle_resolution_cache(updated);
21
21
"#;
22
22
23
+
/// SQL schema for the handle resolution queue table.
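///
/// `work` holds the JSON-serialized work item and `queued_at` a unix timestamp
/// in seconds; the index on `queued_at` supports FIFO pulls and work shedding
/// in the SQLite queue adapter.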
24
+
const CREATE_HANDLE_RESOLUTION_QUEUE_TABLE: &str = r#"
25
+
CREATE TABLE IF NOT EXISTS handle_resolution_queue (
26
+
id INTEGER PRIMARY KEY AUTOINCREMENT,
27
+
work TEXT NOT NULL,
28
+
queued_at INTEGER NOT NULL
29
+
);
30
+
31
+
CREATE INDEX IF NOT EXISTS idx_handle_resolution_queue_queued_at
32
+
ON handle_resolution_queue(queued_at);
33
+
"#;
34
+
23
35
/// Create or connect to a SQLite database and ensure schema is initialized.
24
36
///
25
37
/// # Arguments
···
48
60
tracing::info!("Initializing SQLite database: {}", database_url);
49
61
50
62
// Extract the database path from the URL for file-based databases
51
-
if let Some(path) = database_url.strip_prefix("sqlite:") {
52
-
if path != ":memory:" && !path.is_empty() {
53
-
// Create the database file if it doesn't exist
54
-
if !Sqlite::database_exists(database_url).await? {
55
-
tracing::info!("Creating SQLite database file: {}", path);
56
-
Sqlite::create_database(database_url).await?;
57
-
}
63
+
if let Some(path) = database_url.strip_prefix("sqlite:")
64
+
&& path != ":memory:"
65
+
&& !path.is_empty()
66
+
{
67
+
// Create the database file if it doesn't exist
68
+
if !Sqlite::database_exists(database_url).await? {
69
+
tracing::info!("Creating SQLite database file: {}", path);
70
+
Sqlite::create_database(database_url).await?;
71
+
}
58
72
59
-
// Ensure the parent directory exists
60
-
if let Some(parent) = Path::new(path).parent() {
61
-
if !parent.exists() {
62
-
tracing::info!("Creating directory: {}", parent.display());
63
-
std::fs::create_dir_all(parent)?;
64
-
}
65
-
}
73
+
// Ensure the parent directory exists
74
+
if let Some(parent) = Path::new(path).parent()
75
+
&& !parent.exists()
76
+
{
77
+
tracing::info!("Creating directory: {}", parent.display());
78
+
std::fs::create_dir_all(parent)?;
66
79
}
67
80
}
68
81
···
99
112
100
113
// Execute the schema creation SQL
101
114
sqlx::query(CREATE_HANDLE_RESOLUTION_CACHE_TABLE)
115
+
.execute(pool)
116
+
.await?;
117
+
118
+
sqlx::query(CREATE_HANDLE_RESOLUTION_QUEUE_TABLE)
102
119
.execute(pool)
103
120
.await?;
104
121
···
196
213
let size_bytes = page_size * page_count;
197
214
198
215
Ok((total_entries, size_bytes))
216
+
}
217
+
218
+
/// Clean up old entries from the handle resolution queue.
219
+
///
220
+
/// This function removes entries that are older than the specified age.
221
+
///
222
+
/// # Arguments
223
+
///
224
+
/// * `pool` - SQLite connection pool
225
+
/// * `max_age_seconds` - Maximum age in seconds for queue entries to be kept
226
+
///
227
+
/// # Returns
228
+
///
229
+
/// Returns the number of entries deleted.
230
+
///
231
+
/// # Example
232
+
///
233
+
/// ```no_run
234
+
/// use quickdid::sqlite_schema::cleanup_queue_entries;
235
+
/// use sqlx::SqlitePool;
236
+
///
237
+
/// # async fn example() -> anyhow::Result<()> {
238
+
/// let pool = SqlitePool::connect("sqlite:./quickdid.db").await?;
239
+
/// let deleted_count = cleanup_queue_entries(&pool, 86400).await?; // 1 day
240
+
/// println!("Deleted {} old queue entries", deleted_count);
241
+
/// # Ok(())
242
+
/// # }
243
+
/// ```
244
+
pub async fn cleanup_queue_entries(pool: &SqlitePool, max_age_seconds: u64) -> Result<u64> {
245
+
let current_timestamp = std::time::SystemTime::now()
246
+
.duration_since(std::time::UNIX_EPOCH)
247
+
.unwrap_or_default()
248
+
.as_secs() as i64;
249
+
250
+
let cutoff_timestamp = current_timestamp - (max_age_seconds as i64);
251
+
252
+
let result = sqlx::query(
253
+
"DELETE FROM handle_resolution_queue WHERE queued_at < ?1"
254
+
)
255
+
.bind(cutoff_timestamp)
256
+
.execute(pool)
257
+
.await?;
258
+
259
+
let deleted_count = result.rows_affected();
260
+
if deleted_count > 0 {
261
+
tracing::info!("Cleaned up {} old queue entries", deleted_count);
262
+
}
263
+
264
+
Ok(deleted_count)
265
+
}
266
+
267
+
/// Get statistics about the handle resolution queue.
268
+
///
269
+
/// # Arguments
270
+
///
271
+
/// * `pool` - SQLite connection pool
272
+
///
273
+
/// # Returns
274
+
///
275
+
/// Returns the total number of entries in the queue.
276
+
///
277
+
/// # Example
278
+
///
279
+
/// ```no_run
280
+
/// use quickdid::sqlite_schema::get_queue_stats;
281
+
/// use sqlx::SqlitePool;
282
+
///
283
+
/// # async fn example() -> anyhow::Result<()> {
284
+
/// let pool = SqlitePool::connect("sqlite:./quickdid.db").await?;
285
+
/// let total = get_queue_stats(&pool).await?;
286
+
/// println!("Queue: {} total entries", total);
287
+
/// # Ok(())
288
+
/// # }
289
+
/// ```
290
+
pub async fn get_queue_stats(pool: &SqlitePool) -> Result<i64> {
291
+
// Get total entries
292
+
let total_entries: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM handle_resolution_queue")
293
+
.fetch_one(pool)
294
+
.await?;
295
+
296
+
Ok(total_entries)
199
297
}
200
298
201
299
#[cfg(test)]