+2
-14
aethelos-source/Cargo.lock
+2
-14
aethelos-source/Cargo.lock
···
32
32
"lazy_static",
33
33
"pic8259",
34
34
"spin",
35
-
"x86_64 0.15.2",
35
+
"x86_64",
36
36
]
37
37
38
38
[[package]]
···
71
71
source = "registry+https://github.com/rust-lang/crates.io-index"
72
72
checksum = "cb844b5b01db1e0b17938685738f113bfc903846f18932b378bc0eabfa40e194"
73
73
dependencies = [
74
-
"x86_64 0.14.13",
74
+
"x86_64",
75
75
]
76
76
77
77
[[package]]
···
134
134
"rustversion",
135
135
"volatile",
136
136
]
137
-
138
-
[[package]]
139
-
name = "x86_64"
140
-
version = "0.15.2"
141
-
source = "registry+https://github.com/rust-lang/crates.io-index"
142
-
checksum = "0f042214de98141e9c8706e8192b73f56494087cc55ebec28ce10f26c5c364ae"
143
-
dependencies = [
144
-
"bit_field",
145
-
"bitflags 2.10.0",
146
-
"rustversion",
147
-
"volatile",
148
-
]
+1
-1
aethelos-source/Cargo.toml
+1
-1
aethelos-source/Cargo.toml
aethelos-source/aethelos.iso
aethelos-source/aethelos.iso
This is a binary file and will not be displayed.
+681
aethelos-source/docs/PREEMPTIVE_MULTITASKING_PLAN.md
+681
aethelos-source/docs/PREEMPTIVE_MULTITASKING_PLAN.md
···
1
+
# Preemptive Multitasking Implementation Plan
2
+
3
+
This document provides a detailed, step-by-step plan for implementing preemptive multitasking in AethelOS. Preemptive multitasking is complex because it introduces race conditions and requires careful synchronization.
4
+
5
+
## Overview
6
+
7
+
**Current State:**
8
+
- Cooperative multitasking (threads call `yield_now()` explicitly)
9
+
- Timer interrupts do NOT trigger context switches
10
+
- Safe but threads can monopolize CPU if they don't yield
11
+
12
+
**Goal:**
13
+
- Preemptive multitasking with timer-driven context switches
14
+
- Threads are interrupted and switched automatically
15
+
- Must maintain system stability and avoid deadlocks
16
+
17
+
---
18
+
19
+
## Why Preemptive Multitasking is Hard
20
+
21
+
### Challenge 1: Critical Sections
22
+
**Problem:** If we preempt a thread while it's in a critical section (holding a lock), another thread might try to acquire the same lock → deadlock.
23
+
24
+
**Example:**
25
+
```
26
+
Thread A: Acquires lock → Timer interrupt → Context switch to Thread B
27
+
Thread B: Tries to acquire same lock → Deadlock!
28
+
```
29
+
30
+
**Solution:** Use interrupt-safe locks that disable interrupts while held (already implemented in `InterruptSafeLock`).
31
+
32
+
### Challenge 2: Context Switch Atomicity
33
+
**Problem:** Context switching involves multiple steps (save registers, switch stack, restore registers). If interrupted mid-switch, corruption occurs.
34
+
35
+
**Solution:** Disable interrupts during context switches, use atomic operations.
36
+
37
+
### Challenge 3: Drop Implementations
38
+
**Problem:** Rust Drop implementations can be called at any time. If preempted during Drop, resources may leak or corrupt.
39
+
40
+
**Example:**
41
+
```rust
42
+
impl Drop for MyStruct {
43
+
fn drop(&mut self) {
44
+
// If preempted here, lock may not be released!
45
+
self.lock.unlock();
46
+
}
47
+
}
48
+
```
49
+
50
+
**Solution:** Make Drop implementations interrupt-safe, or disable preemption during Drop.
51
+
52
+
### Challenge 4: Stack Safety
53
+
**Problem:** If we preempt while stack is in an invalid state (mid-push, mid-pop), corruption occurs.
54
+
55
+
**Solution:** Stack operations are atomic at instruction level, but we must ensure proper alignment.
56
+
57
+
---
58
+
59
+
## Implementation Strategy
60
+
61
+
We'll implement preemptive multitasking in **5 phases**, each building on the previous:
62
+
63
+
### Phase 1: Audit and Fix Critical Sections ⚠️ CRITICAL
64
+
### Phase 2: Implement Preemption Control
65
+
### Phase 3: Timer-Driven Context Switching
66
+
### Phase 4: Testing and Debugging
67
+
### Phase 5: Optimization and Tuning
68
+
69
+
---
70
+
71
+
## Phase 1: Audit and Fix Critical Sections
72
+
73
+
**Goal:** Ensure all locks are interrupt-safe
74
+
75
+
### Step 1.1: Identify All Locks in the System
76
+
77
+
Search for all mutex/spinlock usage:
78
+
- `spin::Mutex` (NOT interrupt-safe)
79
+
- `InterruptSafeLock` (IS interrupt-safe)
80
+
- Custom locks
81
+
82
+
**Files to audit:**
83
+
- `mana_pool/mod.rs` - MANA_POOL mutex
84
+
- `loom_of_fate/mod.rs` - LOOM mutex
85
+
- `nexus/mod.rs` - NEXUS mutex
86
+
- `vga_buffer.rs` - WRITER mutex
87
+
- `eldarin.rs` - BUFFER mutex
88
+
- Any other modules with static mutexes
89
+
90
+
### Step 1.2: Convert to Interrupt-Safe Locks
91
+
92
+
For each lock found, determine if it needs to be interrupt-safe:
93
+
94
+
**Needs to be interrupt-safe if:**
95
+
- Used in both regular code AND interrupt handlers
96
+
- Holds critical system state (scheduler, allocator)
97
+
- Guards shared resources
98
+
99
+
**Strategy:**
100
+
```rust
101
+
// BEFORE (unsafe with preemption)
102
+
static LOCK: Mutex<Data> = Mutex::new(Data::new());
103
+
104
+
// AFTER (safe with preemption)
105
+
static LOCK: InterruptSafeLock<Data> = InterruptSafeLock::new(Data::new());
106
+
```
107
+
108
+
**Files to modify:**
109
+
1. Replace `spin::Mutex` with `InterruptSafeLock` in:
110
+
- `mana_pool/mod.rs` (MANA_POOL)
111
+
- `loom_of_fate/mod.rs` (LOOM)
112
+
- `nexus/mod.rs` (NEXUS)
113
+
- `vga_buffer.rs` (WRITER)
114
+
- `eldarin.rs` (BUFFER)
115
+
116
+
2. Test that system still works with new locks
117
+
118
+
**Verification:**
119
+
- [ ] All system locks use `InterruptSafeLock`
120
+
- [ ] Build succeeds
121
+
- [ ] System boots without deadlock
122
+
- [ ] Stats display works
123
+
124
+
---
125
+
126
+
## Phase 2: Implement Preemption Control
127
+
128
+
**Goal:** Add ability to enable/disable preemption
129
+
130
+
### Step 2.1: Add Preemption State to Scheduler
131
+
132
+
Add to `Scheduler` struct:
133
+
```rust
134
+
pub struct Scheduler {
135
+
// ... existing fields ...
136
+
137
+
/// Is preemptive scheduling enabled?
138
+
preemption_enabled: bool,
139
+
140
+
/// Time quantum in timer ticks (e.g., 10ms = 10 ticks if timer is 1ms)
141
+
time_quantum: u64,
142
+
143
+
/// Ticks remaining in current thread's quantum
144
+
quantum_remaining: u64,
145
+
}
146
+
```
147
+
148
+
### Step 2.2: Add Preemption Control API
149
+
150
+
```rust
151
+
impl Scheduler {
152
+
/// Enable preemptive multitasking
153
+
pub fn enable_preemption(&mut self, quantum_ms: u64) {
154
+
self.preemption_enabled = true;
155
+
self.time_quantum = quantum_ms;
156
+
self.quantum_remaining = quantum_ms;
157
+
}
158
+
159
+
/// Disable preemptive multitasking (return to cooperative)
160
+
pub fn disable_preemption(&mut self) {
161
+
self.preemption_enabled = false;
162
+
}
163
+
164
+
/// Check if this thread's quantum has expired
165
+
pub fn should_preempt(&mut self) -> bool {
166
+
if !self.preemption_enabled {
167
+
return false;
168
+
}
169
+
170
+
if self.quantum_remaining == 0 {
171
+
// Quantum expired, reset for next thread
172
+
self.quantum_remaining = self.time_quantum;
173
+
return true;
174
+
}
175
+
176
+
false
177
+
}
178
+
179
+
/// Decrement the current thread's quantum
180
+
pub fn tick_quantum(&mut self) {
181
+
if self.quantum_remaining > 0 {
182
+
self.quantum_remaining -= 1;
183
+
}
184
+
}
185
+
}
186
+
```
187
+
188
+
### Step 2.3: Add Public API in mod.rs
189
+
190
+
```rust
191
+
/// Enable preemptive multitasking with given time quantum
192
+
pub fn enable_preemption(quantum_ms: u64) {
193
+
without_interrupts(|| {
194
+
unsafe { get_loom().lock().enable_preemption(quantum_ms) }
195
+
});
196
+
}
197
+
198
+
/// Disable preemptive multitasking (return to cooperative)
199
+
pub fn disable_preemption() {
200
+
without_interrupts(|| {
201
+
unsafe { get_loom().lock().disable_preemption() }
202
+
});
203
+
}
204
+
```
205
+
206
+
**Verification:**
207
+
- [ ] Preemption can be enabled/disabled
208
+
- [ ] Default is disabled (cooperative mode)
209
+
- [ ] API is interrupt-safe
210
+
211
+
---
212
+
213
+
## Phase 3: Timer-Driven Context Switching
214
+
215
+
**Goal:** Make timer interrupt trigger context switches
216
+
217
+
### Step 3.1: Understand Current Timer Handler
218
+
219
+
Current code ([idt.rs:40-56](../heartwood/src/attunement/idt.rs#L40-L56)):
220
+
```rust
221
+
extern "x86-interrupt" fn timer_interrupt_handler(_stack_frame: InterruptStackFrame) {
222
+
crate::attunement::timer::tick();
223
+
224
+
// REMOVED: yield_now() causes issues
225
+
// TODO: Implement preemptive multitasking
226
+
227
+
unsafe {
228
+
super::PICS.lock().notify_end_of_interrupt(32);
229
+
}
230
+
}
231
+
```
232
+
233
+
### Step 3.2: Add Preemption Check to Timer Handler
234
+
235
+
**Modify timer handler:**
236
+
```rust
237
+
extern "x86-interrupt" fn timer_interrupt_handler(stack_frame: InterruptStackFrame) {
238
+
// Increment tick counter
239
+
crate::attunement::timer::tick();
240
+
241
+
// Check if preemption is enabled and quantum expired
242
+
let should_switch = unsafe {
243
+
let mut loom = crate::loom_of_fate::get_loom().lock();
244
+
loom.tick_quantum();
245
+
loom.should_preempt()
246
+
};
247
+
248
+
// If quantum expired, trigger context switch
249
+
if should_switch {
250
+
crate::loom_of_fate::preemptive_yield();
251
+
}
252
+
253
+
// Send End of Interrupt
254
+
unsafe {
255
+
super::PICS.lock().notify_end_of_interrupt(32);
256
+
}
257
+
}
258
+
```
259
+
260
+
### Step 3.3: Implement Preemptive Yield
261
+
262
+
**Add to loom_of_fate/mod.rs:**
263
+
```rust
264
+
/// Preemptive yield - called from timer interrupt
265
+
///
266
+
/// This is different from cooperative yield_now() because:
267
+
/// - We're already in an interrupt context
268
+
/// - We must be extremely careful with locks
269
+
/// - Stack frame is different (interrupt stack frame)
270
+
pub fn preemptive_yield() {
271
+
// NOTE: Interrupts are already disabled in interrupt handler!
272
+
273
+
unsafe {
274
+
// Step 1: Lock scheduler and prepare for context switch
275
+
let (should_switch, from_ctx_ptr, to_ctx_ptr) = {
276
+
let mut loom = get_loom().lock();
277
+
278
+
// Only yield if we have a current thread
279
+
if loom.current_thread_id().is_none() {
280
+
return;
281
+
}
282
+
283
+
// Prepare for context switch
284
+
loom.prepare_yield()
285
+
};
286
+
287
+
// Step 2: If we should switch, do it
288
+
if should_switch {
289
+
// Context switch (lock is dropped)
290
+
context::switch_context_cooperative(from_ctx_ptr, to_ctx_ptr);
291
+
292
+
// After returning, update state
293
+
let mut loom = get_loom().lock();
294
+
loom.after_yield();
295
+
}
296
+
}
297
+
}
298
+
```
299
+
300
+
### Step 3.4: Handle Interrupt Stack Frame
301
+
302
+
**Challenge:** When switching from interrupt context, we need to handle the interrupt stack frame properly.
303
+
304
+
**Options:**
305
+
1. **Save/Restore Interrupt Stack:** Complex but complete
306
+
2. **Return from Interrupt, Then Switch:** Simpler but requires careful design
307
+
3. **Use Separate Interrupt Stacks:** Advanced but cleanest
308
+
309
+
**Recommended: Option 2 (Return First)**
310
+
311
+
Modify approach:
312
+
```rust
313
+
extern "x86-interrupt" fn timer_interrupt_handler(stack_frame: InterruptStackFrame) {
314
+
// Increment tick counter
315
+
crate::attunement::timer::tick();
316
+
317
+
// Mark that we need to yield after returning from interrupt
318
+
crate::loom_of_fate::request_preemptive_yield();
319
+
320
+
// Send EOI and return from interrupt
321
+
unsafe {
322
+
super::PICS.lock().notify_end_of_interrupt(32);
323
+
}
324
+
325
+
// NOTE: After IRET, the CPU will check if a yield is pending
326
+
// and perform it before continuing the interrupted thread
327
+
}
328
+
```
329
+
330
+
Then add a "yield pending" flag that gets checked after every interrupt return.
331
+
332
+
**Verification:**
333
+
- [ ] Timer interrupt increments quantum counter
334
+
- [ ] When quantum expires, context switch occurs
335
+
- [ ] System remains stable under preemption
336
+
337
+
---
338
+
339
+
## Phase 4: Testing and Debugging
340
+
341
+
**Goal:** Verify preemptive multitasking works correctly
342
+
343
+
### Test 4.1: Create CPU-Bound Test Thread
344
+
345
+
Create a thread that NEVER yields:
346
+
```rust
347
+
fn cpu_hog_thread() -> ! {
348
+
let mut counter = 0u64;
349
+
loop {
350
+
counter += 1;
351
+
352
+
// Print every million iterations (but DON'T yield!)
353
+
if counter % 1_000_000 == 0 {
354
+
crate::println!("[CPU Hog: {}]", counter);
355
+
}
356
+
357
+
// NO yield_now() call!
358
+
}
359
+
}
360
+
```
361
+
362
+
**Expected behavior:**
363
+
- Without preemption: Other threads starve
364
+
- With preemption: Other threads still run
365
+
366
+
### Test 4.2: Create Lock Contention Test
367
+
368
+
Create threads that compete for locks:
369
+
```rust
370
+
static TEST_LOCK: InterruptSafeLock<u64> = InterruptSafeLock::new(0);
371
+
372
+
fn lock_test_thread() -> ! {
373
+
loop {
374
+
{
375
+
let mut data = TEST_LOCK.lock();
376
+
*data += 1;
377
+
378
+
// Simulate work while holding lock
379
+
for _ in 0..1000 {
380
+
core::hint::spin_loop();
381
+
}
382
+
}
383
+
384
+
// Yield occasionally
385
+
if data % 100 == 0 {
386
+
yield_now();
387
+
}
388
+
}
389
+
}
390
+
```
391
+
392
+
**Expected behavior:**
393
+
- No deadlocks
394
+
- Lock counter increments correctly
395
+
- All threads make progress
396
+
397
+
### Test 4.3: Stress Test
398
+
399
+
Run system for extended period:
400
+
- Multiple threads
401
+
- Heavy allocation/deallocation
402
+
- Lock contention
403
+
- I/O operations
404
+
405
+
**Monitor for:**
406
+
- Deadlocks (system freezes)
407
+
- Memory corruption (crashes, panics)
408
+
- Lost interrupts (keyboard stops working)
409
+
- Stack corruption (garbage output)
410
+
411
+
### Test 4.4: Gradual Rollout
412
+
413
+
**Strategy:**
414
+
1. Start with preemption DISABLED (cooperative mode)
415
+
2. Enable preemption with LONG quantum (100ms)
416
+
3. Gradually reduce quantum (50ms, 20ms, 10ms)
417
+
4. Monitor stability at each step
418
+
419
+
**Verification:**
420
+
- [ ] CPU-bound threads are preempted
421
+
- [ ] No deadlocks under lock contention
422
+
- [ ] System stable for 1+ minute
423
+
- [ ] Keyboard still responsive
424
+
425
+
---
426
+
427
+
## Phase 5: Optimization and Tuning
428
+
429
+
**Goal:** Fine-tune preemptive scheduling for best performance
430
+
431
+
### Optimization 5.1: Dynamic Quantum Adjustment
432
+
433
+
Adjust quantum based on system load:
434
+
```rust
435
+
impl Scheduler {
436
+
fn adjust_quantum(&mut self) {
437
+
// If high harmony, allow longer quantums (less switching)
438
+
if self.latest_metrics.system_harmony > 0.8 {
439
+
self.time_quantum = 20; // 20ms
440
+
}
441
+
// If low harmony, use shorter quantums (more fair)
442
+
else if self.latest_metrics.system_harmony < 0.5 {
443
+
self.time_quantum = 5; // 5ms
444
+
}
445
+
// Normal: 10ms
446
+
else {
447
+
self.time_quantum = 10;
448
+
}
449
+
}
450
+
}
451
+
```
452
+
453
+
### Optimization 5.2: Priority-Based Preemption
454
+
455
+
High-priority threads can preempt low-priority ones:
456
+
```rust
457
+
impl Scheduler {
458
+
fn should_preempt(&mut self) -> bool {
459
+
if !self.preemption_enabled {
460
+
return false;
461
+
}
462
+
463
+
// Quantum expired?
464
+
if self.quantum_remaining == 0 {
465
+
return true;
466
+
}
467
+
468
+
// High-priority thread waiting?
469
+
if self.has_high_priority_ready() {
470
+
let current_priority = self.get_current_priority();
471
+
if current_priority < ThreadPriority::High {
472
+
return true; // Preempt for higher priority
473
+
}
474
+
}
475
+
476
+
false
477
+
}
478
+
}
479
+
```
480
+
481
+
### Optimization 5.3: Interrupt Latency Reduction
482
+
483
+
Minimize time with interrupts disabled:
484
+
```rust
485
+
// BEFORE: Hold lock for entire operation
486
+
let result = {
487
+
let lock = LOCK.lock(); // Interrupts disabled
488
+
do_long_operation(&lock) // Interrupts disabled for ENTIRE operation
489
+
}; // Interrupts re-enabled
490
+
491
+
// AFTER: Copy data, release lock, then process
492
+
let data_copy = {
493
+
let lock = LOCK.lock(); // Interrupts disabled
494
+
lock.clone() // Quick copy
495
+
}; // Interrupts re-enabled
496
+
497
+
let result = do_long_operation(&data_copy); // Interrupts ENABLED during work
498
+
```
499
+
500
+
### Optimization 5.4: Context Switch Profiling
501
+
502
+
Track context switch overhead:
503
+
```rust
504
+
pub struct SchedulerStats {
505
+
// ... existing fields ...
506
+
507
+
/// Total time spent in context switches (in timer ticks)
508
+
pub context_switch_overhead: u64,
509
+
510
+
/// Average context switch time
511
+
pub avg_switch_time: u64,
512
+
}
513
+
```
514
+
515
+
**Verification:**
516
+
- [ ] Context switches take < 100 microseconds
517
+
- [ ] Interrupt latency < 50 microseconds
518
+
- [ ] CPU efficiency > 95% (< 5% overhead)
519
+
520
+
---
521
+
522
+
## Rollback Plan
523
+
524
+
If preemptive multitasking causes issues:
525
+
526
+
### Immediate Rollback (Emergency)
527
+
1. Set `preemption_enabled = false` in scheduler
528
+
2. Rebuild and deploy
529
+
3. System returns to cooperative mode
530
+
531
+
### Conditional Compilation (Safe Rollback)
532
+
```rust
533
+
#[cfg(feature = "preemptive")]
534
+
const PREEMPTION_ENABLED: bool = true;
535
+
536
+
#[cfg(not(feature = "preemptive"))]
537
+
const PREEMPTION_ENABLED: bool = false;
538
+
```
539
+
540
+
Build with/without preemption:
541
+
```bash
542
+
# With preemption
543
+
cargo build --features preemptive
544
+
545
+
# Without preemption (safe mode)
546
+
cargo build
547
+
```
548
+
549
+
---
550
+
551
+
## Implementation Checklist
552
+
553
+
### Phase 1: Critical Sections (MUST complete first)
554
+
- [ ] Audit all mutexes in codebase
555
+
- [ ] Replace `spin::Mutex` with `InterruptSafeLock` in:
556
+
- [ ] `mana_pool/mod.rs`
557
+
- [ ] `loom_of_fate/mod.rs`
558
+
- [ ] `nexus/mod.rs`
559
+
- [ ] `vga_buffer.rs`
560
+
- [ ] `eldarin.rs`
561
+
- [ ] Test that all locks work correctly
562
+
- [ ] Verify no deadlocks in current system
563
+
564
+
### Phase 2: Preemption Control
565
+
- [ ] Add preemption state to Scheduler
566
+
- [ ] Implement `enable_preemption()` / `disable_preemption()`
567
+
- [ ] Add quantum tracking
568
+
- [ ] Add public API in mod.rs
569
+
- [ ] Test enabling/disabling (should have no effect yet)
570
+
571
+
### Phase 3: Timer Integration
572
+
- [ ] Modify timer interrupt handler
573
+
- [ ] Implement `preemptive_yield()`
574
+
- [ ] Handle interrupt stack frame correctly
575
+
- [ ] Test with LONG quantum (100ms) first
576
+
- [ ] Gradually reduce quantum
577
+
578
+
### Phase 4: Testing
579
+
- [ ] Create CPU-bound test thread
580
+
- [ ] Test lock contention
581
+
- [ ] Stress test for 5+ minutes
582
+
- [ ] Verify keyboard still works
583
+
- [ ] Test stats display works
584
+
- [ ] Test memory allocation/deallocation
585
+
586
+
### Phase 5: Optimization
587
+
- [ ] Dynamic quantum adjustment
588
+
- [ ] Priority-based preemption
589
+
- [ ] Interrupt latency reduction
590
+
- [ ] Context switch profiling
591
+
592
+
---
593
+
594
+
## Success Criteria
595
+
596
+
Preemptive multitasking is considered successful when:
597
+
598
+
1. **Correctness:**
599
+
- [ ] CPU-bound threads are preempted automatically
600
+
- [ ] All threads make forward progress
601
+
- [ ] No deadlocks occur
602
+
603
+
2. **Stability:**
604
+
- [ ] System runs for 10+ minutes without crash
605
+
- [ ] No memory corruption
606
+
- [ ] No stack corruption
607
+
608
+
3. **Responsiveness:**
609
+
- [ ] Keyboard input works during CPU-heavy load
610
+
- [ ] UI remains responsive
611
+
- [ ] Stats display updates correctly
612
+
613
+
4. **Performance:**
614
+
- [ ] Context switch overhead < 5%
615
+
- [ ] Interrupt latency < 50μs
616
+
- [ ] System harmony maintained > 0.7
617
+
618
+
---
619
+
620
+
## Risk Assessment
621
+
622
+
| Risk | Severity | Likelihood | Mitigation |
623
+
|------|----------|------------|------------|
624
+
| Deadlocks from preemption | HIGH | MEDIUM | Use interrupt-safe locks everywhere |
625
+
| Stack corruption | HIGH | LOW | Disable interrupts during context switch |
626
+
| Drop implementation issues | MEDIUM | MEDIUM | Audit all Drop impls, make interrupt-safe |
627
+
| Performance degradation | LOW | LOW | Profile and optimize |
628
+
| Keyboard stops working | MEDIUM | LOW | Test interrupt handling thoroughly |
629
+
630
+
---
631
+
632
+
## Timeline Estimate
633
+
634
+
**Conservative estimate (with testing):**
635
+
- Phase 1: 2-3 days (critical, must be thorough)
636
+
- Phase 2: 1 day
637
+
- Phase 3: 2-3 days (complex, requires debugging)
638
+
- Phase 4: 2-3 days (extensive testing)
639
+
- Phase 5: 1-2 days (optional optimization)
640
+
641
+
**Total: 8-12 days**
642
+
643
+
**Aggressive estimate (experienced developer):**
644
+
- Phase 1: 1 day
645
+
- Phase 2: 4 hours
646
+
- Phase 3: 1-2 days
647
+
- Phase 4: 1 day
648
+
- Phase 5: 1 day
649
+
650
+
**Total: 4-5 days**
651
+
652
+
---
653
+
654
+
## References
655
+
656
+
### Technical Resources
657
+
- [OSDev Wiki: Interrupt Service Routines](https://wiki.osdev.org/Interrupt_Service_Routines)
658
+
- [OSDev Wiki: Context Switching](https://wiki.osdev.org/Context_Switching)
659
+
- x86_64 interrupt handling and stack frames
660
+
- [Rust Atomics and Locks](https://marabos.nl/atomics/) - Mara Bos
661
+
662
+
### Similar OS Implementations
663
+
- Linux: Completely Fair Scheduler (CFS)
664
+
- Redox OS: Rust-based preemptive scheduling
665
+
- [Writing an OS in Rust](https://os.phil-opp.com/) - Philipp Oppermann
666
+
667
+
### AethelOS-Specific
668
+
- [PRODUCTION_READINESS_PLAN.md](PRODUCTION_READINESS_PLAN.md) - Overall roadmap
669
+
- [heartwood/src/attunement/idt.rs](../heartwood/src/attunement/idt.rs) - Current timer handler
670
+
- [heartwood/src/loom_of_fate/scheduler.rs](../heartwood/src/loom_of_fate/scheduler.rs) - Scheduler implementation
671
+
672
+
---
673
+
674
+
## Notes
675
+
676
+
- **Start Simple:** Begin with long quantum (100ms), reduce gradually
677
+
- **Test Often:** After each phase, test thoroughly before proceeding
678
+
- **Have Rollback Ready:** Keep cooperative mode as fallback
679
+
- **Document Issues:** Keep log of bugs found and how they were fixed
680
+
- **Measure Everything:** Profile context switch times, interrupt latency
681
+
- **Maintain Philosophy:** Even with preemption, preserve AethelOS's harmony-based approach
+13
-4
aethelos-source/docs/PRODUCTION_READINESS_PLAN.md
+13
-4
aethelos-source/docs/PRODUCTION_READINESS_PLAN.md
···
2
2
3
3
This document outlines the implementation plan for addressing TODOs and "In a real OS" comments found in the codebase. These improvements are necessary to transition from a demonstration OS to a production-ready system.
4
4
5
+
## Status Overview
6
+
7
+
| Item | Status | Date Completed | Notes |
8
+
|------|--------|----------------|-------|
9
+
| 1. Preemptive Multitasking | 🟡 **PLANNED** | - | See [PREEMPTIVE_MULTITASKING_PLAN.md](PREEMPTIVE_MULTITASKING_PLAN.md) |
10
+
| 2. Interrupt-safe Statistics | ✅ **COMPLETE** | 2025-10-25 | Uses `without_interrupts()` wrapper |
11
+
| 3. Production Memory Allocator | ✅ **COMPLETE** | 2025-10-25 | Buddy allocator with 64B-64KB blocks |
12
+
| 4. Memory Deallocation | ✅ **COMPLETE** | 2025-10-25 | Integrated with buddy allocator |
13
+
5
14
## Overview
6
15
7
16
Found 4 critical areas requiring implementation:
8
-
1. Preemptive Multitasking
9
-
2. Interrupt-safe Statistics
10
-
3. Production Memory Allocator
11
-
4. Memory Deallocation
17
+
1. Preemptive Multitasking (PLANNED - detailed plan available)
18
+
2. ✅ Interrupt-safe Statistics (COMPLETE)
19
+
3. ✅ Production Memory Allocator (COMPLETE)
20
+
4. ✅ Memory Deallocation (COMPLETE)
12
21
13
22
---
14
23
+44
-9
aethelos-source/heartwood/src/attunement/idt.rs
+44
-9
aethelos-source/heartwood/src/attunement/idt.rs
···
37
37
}
38
38
39
39
/// The Timer Interrupt Handler - The Rhythm of Time
40
-
extern "x86-interrupt" fn timer_interrupt_handler(_stack_frame: InterruptStackFrame) {
40
+
///
41
+
/// This handler is called on every timer tick (typically 1ms).
42
+
/// It increments the tick counter and, if preemptive multitasking is enabled,
43
+
/// tracks quantum usage and triggers context switches.
44
+
extern "x86-interrupt" fn timer_interrupt_handler(stack_frame: InterruptStackFrame) {
41
45
// Increment the timer tick counter
42
46
crate::attunement::timer::tick();
43
47
44
-
// REMOVED: Calling yield_now() from interrupt handler causes preemptive
45
-
// context switches that can interrupt critical sections (like Drop implementations).
46
-
// For cooperative multitasking, threads should only yield explicitly.
47
-
// TODO: If you want preemptive multitasking, you need to:
48
-
// 1. Ensure all critical sections are interrupt-safe
49
-
// 2. Use proper interrupt-safe context switching
50
-
// crate::loom_of_fate::yield_now();
48
+
// === PREEMPTIVE MULTITASKING (Phase 3) ===
49
+
// Now that all critical sections are interrupt-safe (Phase 1 complete),
50
+
// we can safely perform preemptive context switches from interrupt context.
51
+
52
+
unsafe {
53
+
let should_preempt = {
54
+
let mut loom = crate::loom_of_fate::get_loom().lock();
55
+
56
+
// Decrement the current thread's quantum
57
+
loom.tick_quantum();
58
+
59
+
// Check if we should preempt
60
+
loom.should_preempt()
61
+
// Lock is dropped here
62
+
};
63
+
64
+
// If quantum expired and preemption is enabled, switch threads
65
+
if should_preempt {
66
+
// Send End of Interrupt BEFORE context switch
67
+
// This ensures the PIC is ready for the next interrupt
68
+
super::PICS.lock().notify_end_of_interrupt(32);
51
69
52
-
// Send End of Interrupt
70
+
// Create an array with the interrupt frame values in the correct order
71
+
// The order matches the hardware interrupt frame: RIP, CS, RFLAGS, RSP, SS
72
+
let frame_values: [u64; 5] = [
73
+
stack_frame.instruction_pointer.as_u64(),
74
+
stack_frame.code_segment,
75
+
stack_frame.cpu_flags,
76
+
stack_frame.stack_pointer.as_u64(),
77
+
stack_frame.stack_segment,
78
+
];
79
+
let frame_ptr = frame_values.as_ptr();
80
+
81
+
// Perform preemptive context switch
82
+
// This function never returns - it uses IRETQ to jump to the new thread
83
+
crate::loom_of_fate::preemptive_yield(frame_ptr);
84
+
}
85
+
}
86
+
87
+
// Send End of Interrupt (only if we didn't preempt)
53
88
unsafe {
54
89
super::PICS.lock().notify_end_of_interrupt(32);
55
90
}
+5
-4
aethelos-source/heartwood/src/attunement/keyboard.rs
+5
-4
aethelos-source/heartwood/src/attunement/keyboard.rs
···
2
2
//!
3
3
//! Minimal keyboard initialization - we trust that BIOS has already set up the PS/2 controller
4
4
5
-
use spin::Mutex;
5
+
use crate::mana_pool::InterruptSafeLock;
6
6
use core::mem::MaybeUninit;
7
7
use x86_64::instructions::port::Port;
8
8
···
24
24
}
25
25
}
26
26
27
-
static mut KEYBOARD: MaybeUninit<Mutex<Keyboard>> = MaybeUninit::uninit();
27
+
// Interrupt-safe keyboard state (accessed from keyboard interrupt handler)
28
+
static mut KEYBOARD: MaybeUninit<InterruptSafeLock<Keyboard>> = MaybeUninit::uninit();
28
29
static mut KEYBOARD_INITIALIZED: bool = false;
29
30
30
31
/// Initialize keyboard - just create the state structure
···
33
34
pub fn init() {
34
35
unsafe {
35
36
let keyboard = Keyboard::new();
36
-
let mutex = Mutex::new(keyboard);
37
-
core::ptr::write(KEYBOARD.as_mut_ptr(), mutex);
37
+
let lock = InterruptSafeLock::new(keyboard);
38
+
core::ptr::write(KEYBOARD.as_mut_ptr(), lock);
38
39
KEYBOARD_INITIALIZED = true;
39
40
}
40
41
}
+4
-3
aethelos-source/heartwood/src/attunement/mod.rs
+4
-3
aethelos-source/heartwood/src/attunement/mod.rs
···
11
11
pub mod timer;
12
12
13
13
use pic8259::ChainedPics;
14
-
use spin::Mutex;
14
+
use crate::mana_pool::InterruptSafeLock;
15
15
16
16
/// The standard offset for remapping the PICs
17
17
/// IRQs 0-15 become interrupts 32-47
···
20
20
21
21
/// The Guardian - Our Programmable Interrupt Controller
22
22
/// This is THE source of truth for PIC management
23
-
pub static PICS: Mutex<ChainedPics> =
24
-
Mutex::new(unsafe { ChainedPics::new(PIC_1_OFFSET, PIC_2_OFFSET) });
23
+
/// CRITICAL: Must be interrupt-safe since accessed from interrupt handlers for EOI
24
+
pub static PICS: InterruptSafeLock<ChainedPics> =
25
+
InterruptSafeLock::new(unsafe { ChainedPics::new(PIC_1_OFFSET, PIC_2_OFFSET) });
25
26
26
27
/// Initialize the Attunement Layer
27
28
/// This follows The Grand Unification sequence
+57
-10
aethelos-source/heartwood/src/eldarin.rs
+57
-10
aethelos-source/heartwood/src/eldarin.rs
···
10
10
//! with both action and wisdom.
11
11
12
12
use core::fmt::Write;
13
-
use spin::Mutex;
13
+
use crate::mana_pool::InterruptSafeLock;
14
14
use core::mem::MaybeUninit;
15
15
16
16
/// Maximum command buffer size
···
69
69
}
70
70
}
71
71
72
-
/// Global command buffer
73
-
static mut COMMAND_BUFFER: MaybeUninit<Mutex<CommandBuffer>> = MaybeUninit::uninit();
72
+
/// Global command buffer (interrupt-safe for keyboard input)
73
+
static mut COMMAND_BUFFER: MaybeUninit<InterruptSafeLock<CommandBuffer>> = MaybeUninit::uninit();
74
74
static mut BUFFER_INITIALIZED: bool = false;
75
75
76
76
/// Initialize the shell
77
77
pub fn init() {
78
78
unsafe {
79
79
let buffer = CommandBuffer::new();
80
-
let mutex = Mutex::new(buffer);
81
-
core::ptr::write(COMMAND_BUFFER.as_mut_ptr(), mutex);
80
+
let lock = InterruptSafeLock::new(buffer);
81
+
core::ptr::write(COMMAND_BUFFER.as_mut_ptr(), lock);
82
82
BUFFER_INITIALIZED = true;
83
83
}
84
84
}
85
85
86
86
/// Get reference to command buffer
87
-
unsafe fn get_buffer() -> &'static Mutex<CommandBuffer> {
87
+
unsafe fn get_buffer() -> &'static InterruptSafeLock<CommandBuffer> {
88
88
COMMAND_BUFFER.assume_init_ref()
89
89
}
90
90
···
215
215
"echo" => cmd_echo(args),
216
216
"clear" => cmd_clear(),
217
217
"help" => cmd_help(),
218
+
"preempt" => cmd_preempt(args),
218
219
"" => {
219
220
// Empty command, just show new prompt
220
221
}
···
290
291
fn cmd_help() {
291
292
crate::println!("◈ Eldarin Shell - Available Commands");
292
293
crate::println!();
293
-
crate::println!(" harmony - Display system harmony and scheduler statistics");
294
-
crate::println!(" echo [text] - Echo the provided text back to the screen");
295
-
crate::println!(" clear - Clear the screen and redisplay the banner");
296
-
crate::println!(" help - Show this help message");
294
+
crate::println!(" harmony - Display system harmony and scheduler statistics");
295
+
crate::println!(" preempt [cmd] - Control preemptive multitasking");
296
+
crate::println!(" 'status' - Show current preemption state");
297
+
crate::println!(" 'enable' - Enable preemption (10ms quantum)");
298
+
crate::println!(" 'disable' - Disable preemption");
299
+
crate::println!(" echo [text] - Echo the provided text back to the screen");
300
+
crate::println!(" clear - Clear the screen and redisplay the banner");
301
+
crate::println!(" help - Show this help message");
297
302
crate::println!();
298
303
crate::println!("The shell listens to your intentions and translates them into action.");
299
304
}
305
+
306
+
/// The Preempt Spell - Control preemptive multitasking
307
+
fn cmd_preempt(args: &str) {
308
+
match args.trim() {
309
+
"status" | "" => {
310
+
// Show current status
311
+
crate::println!("◈ Preemption Status");
312
+
crate::println!();
313
+
314
+
let enabled = crate::loom_of_fate::is_preemption_enabled();
315
+
let quantum = crate::loom_of_fate::get_time_quantum();
316
+
317
+
crate::println!(" Mode: {}", if enabled { "PREEMPTIVE" } else { "COOPERATIVE" });
318
+
crate::println!(" Time Quantum: {}ms", quantum);
319
+
crate::println!();
320
+
321
+
if enabled {
322
+
crate::println!(" ⚠ Preemption is ENABLED");
323
+
crate::println!(" Threads will be interrupted after {}ms", quantum);
324
+
crate::println!(" Note: Timer interrupt integration not yet active");
325
+
} else {
326
+
crate::println!(" ✓ Cooperative mode (threads yield voluntarily)");
327
+
}
328
+
}
329
+
"enable" => {
330
+
crate::println!("◈ Enabling preemptive multitasking...");
331
+
crate::loom_of_fate::enable_preemption(100); // 100ms quantum (conservative)
332
+
crate::println!(" ✓ Preemption enabled with 100ms quantum");
333
+
crate::println!(" ⚠ ACTIVE: Timer will now trigger context switches!");
334
+
crate::println!(" Use 'preempt disable' if system becomes unstable");
335
+
}
336
+
"disable" => {
337
+
crate::println!("◈ Disabling preemptive multitasking...");
338
+
crate::loom_of_fate::disable_preemption();
339
+
crate::println!(" ✓ Returned to cooperative mode");
340
+
}
341
+
_ => {
342
+
crate::println!("Unknown preempt command: '{}'", args);
343
+
crate::println!("Usage: preempt [status|enable|disable]");
344
+
}
345
+
}
346
+
}
+118
aethelos-source/heartwood/src/loom_of_fate/context.rs
+118
aethelos-source/heartwood/src/loom_of_fate/context.rs
···
314
314
stack_top
315
315
}
316
316
317
+
/// Save the current thread's context from an interrupt frame (for preemptive multitasking)
318
+
///
319
+
/// When a timer interrupt fires, the CPU has already pushed an interrupt frame onto the stack.
320
+
/// This function captures that state and stores it in the thread's context structure.
321
+
///
322
+
/// # Arguments
323
+
/// * `context` - Where to save the interrupted thread's state
324
+
/// * `stack_frame` - The interrupt stack frame pushed by the CPU
325
+
///
326
+
/// # Safety
327
+
/// Must be called from interrupt context with a valid interrupt stack frame
328
+
#[unsafe(naked)]
329
+
pub unsafe extern "C" fn save_preempted_context(_context: *mut ThreadContext, _stack_frame: *const u64) {
330
+
core::arch::naked_asm!(
331
+
// rdi = context pointer (first argument)
332
+
// rsi = stack_frame pointer (second argument)
333
+
334
+
// The interrupt frame on stack contains (from low to high address):
335
+
// [rsi+0]: RIP (instruction pointer when interrupted)
336
+
// [rsi+8]: CS (code segment)
337
+
// [rsi+16]: RFLAGS
338
+
// [rsi+24]: RSP (stack pointer when interrupted)
339
+
// [rsi+32]: SS (stack segment)
340
+
341
+
// Save general purpose registers
342
+
"mov [rdi + 0x00], r15",
343
+
"mov [rdi + 0x08], r14",
344
+
"mov [rdi + 0x10], r13",
345
+
"mov [rdi + 0x18], r12",
346
+
"mov [rdi + 0x20], rbp",
347
+
"mov [rdi + 0x28], rbx",
348
+
"mov [rdi + 0x30], r11",
349
+
"mov [rdi + 0x38], r10",
350
+
"mov [rdi + 0x40], r9",
351
+
"mov [rdi + 0x48], r8",
352
+
"mov [rdi + 0x50], rax",
353
+
"mov [rdi + 0x58], rcx",
354
+
"mov [rdi + 0x60], rdx",
355
+
// Save rsi before we overwrite it
356
+
"mov [rdi + 0x68], rsi",
357
+
"mov [rdi + 0x70], rdi",
358
+
359
+
// Now save the interrupt frame fields from [rsi]
360
+
"mov rax, [rsi + 0]", // RIP from interrupt frame
361
+
"mov [rdi + 0x78], rax",
362
+
363
+
"mov rax, [rsi + 8]", // CS from interrupt frame
364
+
"mov [rdi + 0x80], rax",
365
+
366
+
"mov rax, [rsi + 16]", // RFLAGS from interrupt frame
367
+
"mov [rdi + 0x88], rax",
368
+
369
+
"mov rax, [rsi + 24]", // RSP from interrupt frame
370
+
"mov [rdi + 0x90], rax",
371
+
372
+
"mov rax, [rsi + 32]", // SS from interrupt frame
373
+
"mov [rdi + 0x98], rax",
374
+
375
+
"ret",
376
+
);
377
+
}
378
+
379
+
/// Switch from a preempted thread to another thread (called from interrupt context)
380
+
///
381
+
/// This is similar to switch_context but assumes the old context was already saved
382
+
/// via save_preempted_context. It only restores the new context and uses IRETQ.
383
+
///
384
+
/// # Arguments
385
+
/// * `new_context` - The context to restore and resume
386
+
///
387
+
/// # Safety
388
+
/// Must be called from interrupt context after save_preempted_context
389
+
#[unsafe(naked)]
390
+
pub unsafe extern "C" fn restore_context(_new_context: *const ThreadContext) -> ! {
391
+
core::arch::naked_asm!(
392
+
// rdi = new_context pointer
393
+
394
+
// Restore general purpose registers
395
+
"mov r15, [rdi + 0x00]",
396
+
"mov r14, [rdi + 0x08]",
397
+
"mov r13, [rdi + 0x10]",
398
+
"mov r12, [rdi + 0x18]",
399
+
"mov rbp, [rdi + 0x20]",
400
+
"mov rbx, [rdi + 0x28]",
401
+
"mov r11, [rdi + 0x30]",
402
+
"mov r10, [rdi + 0x38]",
403
+
"mov r9, [rdi + 0x40]",
404
+
"mov r8, [rdi + 0x48]",
405
+
"mov rax, [rdi + 0x50]",
406
+
"mov rcx, [rdi + 0x58]",
407
+
"mov rdx, [rdi + 0x60]",
408
+
409
+
// Switch to the new thread's stack
410
+
"mov rsp, [rdi + 0x90]",
411
+
412
+
// Build interrupt frame on the new stack for IRETQ
413
+
// Push in reverse order: SS, RSP, RFLAGS, CS, RIP
414
+
"push qword ptr [rdi + 0x98]", // SS
415
+
"push qword ptr [rdi + 0x90]", // RSP
416
+
417
+
// Push RFLAGS with IF (interrupt flag) set to enable interrupts
418
+
"mov rax, [rdi + 0x88]",
419
+
"or rax, 0x200", // Set IF flag (bit 9)
420
+
"push rax", // RFLAGS
421
+
422
+
"push qword ptr [rdi + 0x80]", // CS
423
+
"push qword ptr [rdi + 0x78]", // RIP
424
+
425
+
// Restore the last two registers
426
+
"mov rsi, [rdi + 0x68]",
427
+
"mov rdi, [rdi + 0x70]",
428
+
429
+
// Use IRETQ to return to the new thread
430
+
// This pops RIP, CS, RFLAGS, RSP, SS and properly returns from interrupt
431
+
"iretq",
432
+
);
433
+
}
434
+
317
435
/// Helper wrapper for thread entry points
318
436
///
319
437
/// This function is called when a new thread is first scheduled.
+116
-9
aethelos-source/heartwood/src/loom_of_fate/mod.rs
+116
-9
aethelos-source/heartwood/src/loom_of_fate/mod.rs
···
26
26
pub use thread::{Thread, ThreadId, ThreadState, ThreadPriority};
27
27
pub use harmony::{HarmonyAnalyzer, HarmonyMetrics};
28
28
29
-
use spin::Mutex;
29
+
use crate::mana_pool::InterruptSafeLock;
30
30
use core::mem::MaybeUninit;
31
31
use alloc::boxed::Box;
32
32
33
33
// Manual static initialization using MaybeUninit
34
34
// LOOM stores a Box<Scheduler> - a small pointer to heap-allocated Scheduler
35
-
static mut LOOM: MaybeUninit<Mutex<Box<Scheduler>>> = MaybeUninit::uninit();
35
+
// Using InterruptSafeLock to prevent deadlocks during preemptive multitasking
36
+
static mut LOOM: MaybeUninit<InterruptSafeLock<Box<Scheduler>>> = MaybeUninit::uninit();
36
37
static mut LOOM_INITIALIZED: bool = false;
37
38
38
39
/// Helper to write to serial for debugging
···
62
63
let scheduler_on_heap = Scheduler::new_boxed();
63
64
serial_out(b'B'); // Scheduler::new_boxed returned
64
65
65
-
// Create a mutex around the small Box pointer
66
-
serial_out(b'C'); // About to create Mutex
67
-
let mutex = Mutex::new(scheduler_on_heap);
68
-
serial_out(b'D'); // Mutex created
66
+
// Create an interrupt-safe lock around the small Box pointer
67
+
serial_out(b'C'); // About to create InterruptSafeLock
68
+
let lock = InterruptSafeLock::new(scheduler_on_heap);
69
+
serial_out(b'D'); // InterruptSafeLock created
69
70
70
-
// Write the small Mutex<Box<Scheduler>> to static
71
-
core::ptr::write(LOOM.as_mut_ptr(), mutex);
71
+
// Write the small InterruptSafeLock<Box<Scheduler>> to static
72
+
core::ptr::write(LOOM.as_mut_ptr(), lock);
72
73
serial_out(b'y'); // Written to MaybeUninit
73
74
74
75
LOOM_INITIALIZED = true;
···
105
106
}
106
107
107
108
/// Get reference to LOOM (assumes initialized)
108
-
unsafe fn get_loom() -> &'static Mutex<Box<Scheduler>> {
109
+
///
110
+
/// # Safety
111
+
/// LOOM must be initialized before calling this function
112
+
pub unsafe fn get_loom() -> &'static InterruptSafeLock<Box<Scheduler>> {
109
113
LOOM.assume_init_ref()
110
114
}
111
115
···
190
194
});
191
195
}
192
196
197
+
/// Preemptive yield - called from timer interrupt when quantum expires
198
+
///
199
+
/// This is specifically designed for interrupt context and properly handles
200
+
/// the interrupt stack frame to avoid corruption.
201
+
///
202
+
/// # Arguments
203
+
/// * `interrupt_frame_ptr` - Pointer to the interrupt stack frame (RIP, CS, RFLAGS, RSP, SS)
204
+
///
205
+
/// # Safety
206
+
/// - Must only be called from timer interrupt handler
207
+
/// - Interrupts are already disabled in interrupt context
208
+
/// - All locks are interrupt-safe (Phase 1 complete)
209
+
/// - The interrupt_frame_ptr must point to the valid interrupt frame on stack
210
+
pub unsafe fn preemptive_yield(interrupt_frame_ptr: *const u64) -> ! {
211
+
// We're already in interrupt context with interrupts disabled
212
+
// No need for without_interrupts() wrapper
213
+
214
+
// Step 1: Lock scheduler and prepare for context switch
215
+
let (should_switch, from_ctx_ptr, to_ctx_ptr) = {
216
+
let mut loom = get_loom().lock();
217
+
218
+
// Only yield if we actually have a current thread
219
+
if loom.current_thread_id().is_none() {
220
+
// No current thread - this shouldn't happen, but handle it gracefully
221
+
// We need to return via IRETQ, not return normally
222
+
// Just restore the interrupted state
223
+
drop(loom);
224
+
core::arch::asm!(
225
+
"iretq",
226
+
options(noreturn)
227
+
);
228
+
}
229
+
230
+
// Prepare for context switch and get pointers
231
+
loom.prepare_yield()
232
+
};
233
+
234
+
// Step 2: If we should switch, save current context and switch
235
+
if should_switch {
236
+
// The lock is now dropped! (loom was dropped at end of block above)
237
+
238
+
// Save the interrupted thread's context from the interrupt frame
239
+
context::save_preempted_context(from_ctx_ptr as *mut context::ThreadContext, interrupt_frame_ptr);
240
+
241
+
// Now restore the new thread's context and jump to it
242
+
// This uses IRETQ and never returns
243
+
context::restore_context(to_ctx_ptr);
244
+
245
+
// UNREACHABLE - restore_context uses iretq and never returns
246
+
} else {
247
+
// No context switch needed - just return from interrupt normally
248
+
// Use IRETQ to properly return from the interrupt
249
+
core::arch::asm!(
250
+
"iretq",
251
+
options(noreturn)
252
+
);
253
+
}
254
+
}
255
+
193
256
/// Get the current thread ID
194
257
pub fn current_thread() -> Option<ThreadId> {
195
258
without_interrupts(|| {
···
201
264
pub fn stats() -> SchedulerStats {
202
265
without_interrupts(|| {
203
266
unsafe { get_loom().lock().stats() }
267
+
})
268
+
}
269
+
270
+
// === Preemptive Multitasking Control ===
271
+
272
+
/// Enable preemptive multitasking with the given time quantum
273
+
///
274
+
/// # Arguments
275
+
/// * `quantum_ms` - Time quantum in milliseconds (e.g., 10 = 10ms per thread)
276
+
///
277
+
/// When enabled, the timer interrupt will trigger context switches
278
+
/// after the quantum expires, even if the thread hasn't yielded.
279
+
///
280
+
/// # Example
281
+
/// ```
282
+
/// // Enable preemption with 10ms quantum
283
+
/// loom_of_fate::enable_preemption(10);
284
+
/// ```
285
+
pub fn enable_preemption(quantum_ms: u64) {
286
+
without_interrupts(|| {
287
+
unsafe { get_loom().lock().enable_preemption(quantum_ms) }
288
+
});
289
+
}
290
+
291
+
/// Disable preemptive multitasking (return to cooperative mode)
292
+
///
293
+
/// Threads will only switch when they explicitly call yield_now().
294
+
pub fn disable_preemption() {
295
+
without_interrupts(|| {
296
+
unsafe { get_loom().lock().disable_preemption() }
297
+
});
298
+
}
299
+
300
+
/// Check if preemption is currently enabled
301
+
pub fn is_preemption_enabled() -> bool {
302
+
without_interrupts(|| {
303
+
unsafe { get_loom().lock().is_preemption_enabled() }
304
+
})
305
+
}
306
+
307
+
/// Get the current time quantum setting
308
+
pub fn get_time_quantum() -> u64 {
309
+
without_interrupts(|| {
310
+
unsafe { get_loom().lock().get_time_quantum() }
204
311
})
205
312
}
206
313
+84
-1
aethelos-source/heartwood/src/loom_of_fate/scheduler.rs
+84
-1
aethelos-source/heartwood/src/loom_of_fate/scheduler.rs
···
10
10
11
11
const MAX_THREADS: usize = 1024;
12
12
13
-
/// The harmony-based cooperative scheduler
13
+
/// The harmony-based cooperative/preemptive scheduler
14
14
pub struct Scheduler {
15
15
threads: Vec<Thread>,
16
16
stacks: Vec<Stack>, // Stack storage (owned by scheduler)
···
22
22
latest_metrics: HarmonyMetrics,
23
23
/// Total number of context switches performed
24
24
context_switches: u64,
25
+
26
+
// === Preemptive Multitasking Support ===
27
+
/// Is preemptive scheduling enabled?
28
+
preemption_enabled: bool,
29
+
/// Time quantum in timer ticks (e.g., 10 ticks = 10ms if timer is 1ms)
30
+
time_quantum: u64,
31
+
/// Ticks remaining in current thread's quantum
32
+
quantum_remaining: u64,
25
33
}
26
34
27
35
impl Default for Scheduler {
···
65
73
harmony_analyzer,
66
74
latest_metrics: HarmonyMetrics::default(),
67
75
context_switches: 0,
76
+
// Preemption disabled by default (cooperative mode)
77
+
preemption_enabled: false,
78
+
time_quantum: 100, // Default: 100ms quantum (conservative for testing)
79
+
quantum_remaining: 100,
68
80
}
69
81
}
70
82
···
105
117
106
118
core::ptr::write(core::ptr::addr_of_mut!((*ptr).context_switches), 0);
107
119
serial_out(b'j');
120
+
121
+
// Initialize preemption fields (disabled by default, 100ms quantum for testing)
122
+
core::ptr::write(core::ptr::addr_of_mut!((*ptr).preemption_enabled), false);
123
+
serial_out(b'k');
124
+
125
+
core::ptr::write(core::ptr::addr_of_mut!((*ptr).time_quantum), 100);
126
+
serial_out(b'l');
127
+
128
+
core::ptr::write(core::ptr::addr_of_mut!((*ptr).quantum_remaining), 100);
129
+
serial_out(b'm');
108
130
109
131
boxed.assume_init()
110
132
}
···
421
443
if let Some(thread) = self.find_thread_mut(thread_id) {
422
444
thread.set_state(ThreadState::Weaving);
423
445
}
446
+
}
447
+
448
+
// === Preemptive Multitasking Control ===
449
+
450
+
/// Enable preemptive multitasking with the given time quantum
451
+
///
452
+
/// # Arguments
453
+
/// * `quantum_ms` - Time quantum in milliseconds (e.g., 10 = 10ms per thread)
454
+
///
455
+
/// When enabled, the timer interrupt will trigger context switches
456
+
/// after the quantum expires, even if the thread hasn't yielded.
457
+
pub fn enable_preemption(&mut self, quantum_ms: u64) {
458
+
self.preemption_enabled = true;
459
+
self.time_quantum = quantum_ms;
460
+
self.quantum_remaining = quantum_ms;
461
+
}
462
+
463
+
/// Disable preemptive multitasking (return to cooperative mode)
464
+
///
465
+
/// Threads will only switch when they explicitly call yield_now().
466
+
pub fn disable_preemption(&mut self) {
467
+
self.preemption_enabled = false;
468
+
}
469
+
470
+
/// Check if the current thread's quantum has expired and should be preempted
471
+
///
472
+
/// Returns true if:
473
+
/// - Preemption is enabled
474
+
/// - Current thread's quantum has expired (quantum_remaining == 0)
475
+
pub fn should_preempt(&mut self) -> bool {
476
+
if !self.preemption_enabled {
477
+
return false;
478
+
}
479
+
480
+
if self.quantum_remaining == 0 {
481
+
// Quantum expired! Reset for next thread
482
+
self.quantum_remaining = self.time_quantum;
483
+
return true;
484
+
}
485
+
486
+
false
487
+
}
488
+
489
+
/// Decrement the current thread's quantum (called on each timer tick)
490
+
///
491
+
/// This is called from the timer interrupt handler to track how much
492
+
/// time the current thread has used.
493
+
pub fn tick_quantum(&mut self) {
494
+
if self.preemption_enabled && self.quantum_remaining > 0 {
495
+
self.quantum_remaining -= 1;
496
+
}
497
+
}
498
+
499
+
/// Check if preemption is currently enabled
500
+
pub fn is_preemption_enabled(&self) -> bool {
501
+
self.preemption_enabled
502
+
}
503
+
504
+
/// Get the current time quantum setting
505
+
pub fn get_time_quantum(&self) -> u64 {
506
+
self.time_quantum
424
507
}
425
508
}
426
509
+9
-8
aethelos-source/heartwood/src/mana_pool/mod.rs
+9
-8
aethelos-source/heartwood/src/mana_pool/mod.rs
···
27
27
pub use capability::{Capability, CapabilityRights};
28
28
pub use sanctuary::Sanctuary;
29
29
pub use ephemeral_mist::EphemeralMist;
30
+
pub use interrupt_lock::InterruptSafeLock;
30
31
31
-
use spin::Mutex;
32
32
use core::mem::MaybeUninit;
33
33
use alloc::boxed::Box;
34
34
35
35
// MANA_POOL stores a Box<ManaPool> - a small pointer to heap-allocated ManaPool
36
-
static mut MANA_POOL: MaybeUninit<Mutex<Box<ManaPool>>> = MaybeUninit::uninit();
36
+
// Using InterruptSafeLock to prevent deadlocks during preemptive multitasking
37
+
static mut MANA_POOL: MaybeUninit<InterruptSafeLock<Box<ManaPool>>> = MaybeUninit::uninit();
37
38
static mut MANA_POOL_INITIALIZED: bool = false;
38
39
39
40
pub struct ManaPool {
···
168
169
let mana_pool_on_heap = ManaPool::new_boxed();
169
170
serial_out(b'O'); // ManaPool::new_boxed returned
170
171
171
-
// Create mutex and write to static
172
-
serial_out(b'P'); // Before Mutex::new
173
-
let mutex = Mutex::new(mana_pool_on_heap);
174
-
serial_out(b'Q'); // After Mutex::new
172
+
// Create interrupt-safe lock and write to static
173
+
serial_out(b'P'); // Before InterruptSafeLock::new
174
+
let lock = InterruptSafeLock::new(mana_pool_on_heap);
175
+
serial_out(b'Q'); // After InterruptSafeLock::new
175
176
176
-
core::ptr::write(MANA_POOL.as_mut_ptr(), mutex);
177
+
core::ptr::write(MANA_POOL.as_mut_ptr(), lock);
177
178
serial_out(b'R'); // Written to static
178
179
179
180
MANA_POOL_INITIALIZED = true;
···
182
183
}
183
184
184
185
/// Get reference to MANA_POOL (assumes initialized)
185
-
unsafe fn get_mana_pool() -> &'static Mutex<Box<ManaPool>> {
186
+
unsafe fn get_mana_pool() -> &'static InterruptSafeLock<Box<ManaPool>> {
186
187
MANA_POOL.assume_init_ref()
187
188
}
188
189
+9
-8
aethelos-source/heartwood/src/nexus/mod.rs
+9
-8
aethelos-source/heartwood/src/nexus/mod.rs
···
23
23
pub use nexus_core::NexusCore;
24
24
pub use channel::{Channel, ChannelId, ChannelCapability};
25
25
26
-
use spin::Mutex;
26
+
use crate::mana_pool::InterruptSafeLock;
27
27
use core::mem::MaybeUninit;
28
28
29
-
static mut NEXUS: MaybeUninit<Mutex<NexusCore>> = MaybeUninit::uninit();
29
+
// Using InterruptSafeLock to prevent deadlocks during preemptive multitasking
30
+
static mut NEXUS: MaybeUninit<InterruptSafeLock<NexusCore>> = MaybeUninit::uninit();
30
31
static mut NEXUS_INITIALIZED: bool = false;
31
32
32
33
/// Helper to write to serial for debugging
···
46
47
let nexus_core = NexusCore::new();
47
48
serial_out(b'x'); // NexusCore::new() complete
48
49
49
-
// Create mutex and write to static
50
-
serial_out(b's'); // Before Mutex::new
51
-
let mutex = Mutex::new(nexus_core);
52
-
serial_out(b'u'); // After Mutex::new
50
+
// Create interrupt-safe lock and write to static
51
+
serial_out(b's'); // Before InterruptSafeLock::new
52
+
let lock = InterruptSafeLock::new(nexus_core);
53
+
serial_out(b'u'); // After InterruptSafeLock::new
53
54
54
-
core::ptr::write(NEXUS.as_mut_ptr(), mutex);
55
+
core::ptr::write(NEXUS.as_mut_ptr(), lock);
55
56
serial_out(b'!'); // Written to static
56
57
57
58
NEXUS_INITIALIZED = true;
···
59
60
}
60
61
61
62
/// Get reference to NEXUS (assumes initialized)
62
-
unsafe fn get_nexus() -> &'static Mutex<NexusCore> {
63
+
unsafe fn get_nexus() -> &'static InterruptSafeLock<NexusCore> {
63
64
NEXUS.assume_init_ref()
64
65
}
65
66
aethelos-source/isodir/boot/aethelos/heartwood.bin
aethelos-source/isodir/boot/aethelos/heartwood.bin
This is a binary file and will not be displayed.