commits
* Delete .github/workflows/claude.yml
* Delete .github/workflows/claude-code-review.yml
Added profiling tools 'samply', 'perf', 'valgrind', and 'cargo-valgrind' for testing and performance analysis.
* refactor: overhaul address space bootstrapping code
* refactor(kmem): do not tie `PhysMap` to `HardwareAddressSpace`.
Previously every `HardwareAddressSpace` had an associated `PhysMap` that was allocated and mapped during bootstrapping. This turns out to be inflexible
and not quite correct:
- We don't actually want every address space to have its own physmap. In practice there would only ever be one physmap (possibly globally mapped), and it's a bit awkward
to rely on this unspoken, uncodified assumption.
- With a per-address-space physmap we would potentially be unable to modify userspace address spaces from the root kernel one.
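A minimal sketch of what the decoupled design might look like; the names and fields here are hypothetical illustrations, not the actual k23 API. The key point is that `HardwareAddressSpace` borrows a `PhysMap` per call instead of owning one, so a single global physmap can be used to edit any address space, including userspace ones, from the kernel:

```rust
// Hypothetical sketch: PhysMap passed explicitly instead of owned per address space.
struct PhysMap {
    base: usize, // virtual base where all physical memory is linearly mapped
}

impl PhysMap {
    /// Translate a physical address into a virtual one through the linear mapping.
    fn phys_to_virt(&self, phys: usize) -> usize {
        self.base + phys
    }
}

struct HardwareAddressSpace {
    root_table: usize, // physical address of the root page table
}

impl HardwareAddressSpace {
    /// The physmap is borrowed per call rather than stored in the address space,
    /// so one global physmap can serve every address space.
    fn map(&mut self, physmap: &PhysMap, virt: usize, phys: usize) {
        let _table_virt = physmap.phys_to_virt(self.root_table);
        // ... walk and edit page tables through `_table_virt` ...
        let _ = (virt, phys);
    }
}

fn main() {
    let physmap = PhysMap { base: 0xffff_8000_0000_0000 };
    let mut kernel = HardwareAddressSpace { root_table: 0x8000_0000 };
    let mut user = HardwareAddressSpace { root_table: 0x8040_0000 };
    // The same physmap serves both the kernel and a userspace address space.
    kernel.map(&physmap, 0xffff_ffc0_0000_0000, 0x8020_0000);
    user.map(&physmap, 0x1000, 0x8060_0000);
    assert_eq!(physmap.phys_to_virt(0x8000_0000), 0xffff_8000_8000_0000);
}
```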
* refactor(kmem): streamline test utils
* refactor(kmem): test utils use layout instead of sizes
* docs(kmem): improve test utils docs
* fix(kmem): fix permission check in test utils `Machine::write`
Updated feedback guidelines for code reviews.
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
This change adopts a new, more descriptive naming policy for workspace crates: `k23-<crate name>` rather than the previous much less helpful `k<crate name>`.
* refactor(kmem): fix FrameAllocator::allocate/allocate_zeroed
This change fixes the way non-contiguous allocation works. Previously chunks were allocated on demand, which works, but it makes discontiguous *zeroed* allocations difficult (we
need to borrow the physmap and arch while at the same time we probably want to map the allocated chunks) and leads to less-than-ideal behaviour on allocation errors:
instead of failing early, we would fail in the middle of whatever we were doing.
This change forces `FrameAllocator` implementations to allocate all chunks upfront (or at least reserve them and check for allocation errors upfront) and return an infallible
iterator over the allocated chunks.
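The upfront-allocation contract could be sketched as below; the trait shape, `Frame` type, and bump allocator are hypothetical stand-ins for the real kmem types. The essential property is that `allocate` either fails before anything is handed out or returns an iterator that cannot fail mid-walk:

```rust
// Hypothetical sketch: reserve all chunks upfront, then hand back an
// infallible iterator, so errors surface before any mapping work begins.
#[derive(Debug, PartialEq)]
struct Frame(usize);

#[derive(Debug, PartialEq)]
struct OutOfMemory;

trait FrameAllocator {
    /// Reserve `n` frames upfront; the returned iterator cannot fail.
    fn allocate(&mut self, n: usize) -> Result<std::vec::IntoIter<Frame>, OutOfMemory>;
}

struct BumpFrameAllocator {
    next: usize,
    end: usize,
}

impl FrameAllocator for BumpFrameAllocator {
    fn allocate(&mut self, n: usize) -> Result<std::vec::IntoIter<Frame>, OutOfMemory> {
        const FRAME_SIZE: usize = 4096;
        // Check for exhaustion before committing to anything.
        if self.end - self.next < n * FRAME_SIZE {
            return Err(OutOfMemory);
        }
        let frames: Vec<Frame> = (0..n)
            .map(|i| Frame(self.next + i * FRAME_SIZE))
            .collect();
        self.next += n * FRAME_SIZE;
        Ok(frames.into_iter())
    }
}

fn main() {
    let mut alloc = BumpFrameAllocator { next: 0x8000_0000, end: 0x8000_0000 + 3 * 4096 };
    let frames: Vec<usize> = alloc.allocate(2).unwrap().map(|f| f.0).collect();
    assert_eq!(frames, vec![0x8000_0000, 0x8000_1000]);
    // A request that cannot be satisfied fails early, not mid-operation.
    assert_eq!(alloc.allocate(2).unwrap_err(), OutOfMemory);
}
```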
* refactor(BootstrapAllocator): fix unaligned allocations & clean up implementation
The existing `BootstrapAllocator` implementation was less than ideal: it did not return correctly aligned blocks, its internal implementation with the cross-region offset
was hard to understand and debug, and it potentially wasted physical memory by "jumping" the offset to the next region.
This change completely overhauls the implementation, which now features a list of `Arena`s that each manage a contiguous physical memory region and hold their own bump pointer.
This is both much easier to understand and produces actually correct allocations (including cleaning up on partial failures in `allocate`).
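The per-arena bump scheme could look roughly like this; the struct layout and method names are illustrative assumptions, not the real implementation. Each arena rounds its own cursor up to the requested alignment, so the cross-region offset bookkeeping of the old design disappears:

```rust
// Hypothetical sketch: each `Arena` owns one contiguous region and its own
// bump pointer; alignment is handled by rounding the cursor up first.
struct Arena {
    cursor: usize,
    end: usize,
}

impl Arena {
    /// Bump-allocate `size` bytes aligned to `align` (a power of two),
    /// or return `None` if this arena cannot satisfy the request.
    fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
        let aligned = self.cursor.checked_add(align - 1)? & !(align - 1);
        let new_cursor = aligned.checked_add(size)?;
        if new_cursor > self.end {
            return None;
        }
        self.cursor = new_cursor;
        Some(aligned)
    }
}

struct BootstrapAllocator {
    arenas: Vec<Arena>,
}

impl BootstrapAllocator {
    /// Try each arena in turn; the first one with room wins.
    fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
        self.arenas.iter_mut().find_map(|a| a.alloc(size, align))
    }
}

fn main() {
    let mut alloc = BootstrapAllocator {
        arenas: vec![
            Arena { cursor: 0x1004, end: 0x2000 },
            Arena { cursor: 0x8000, end: 0x9000 },
        ],
    };
    // 0x1004 is rounded up to the next 16-byte boundary.
    assert_eq!(alloc.alloc(64, 16), Some(0x1010));
    // Too large for the first arena, so it falls through to the second.
    assert_eq!(alloc.alloc(0x1000, 16), Some(0x8000));
}
```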
Updated wording to emphasize brevity in comments.
* clean up kmem
* chore(kmem): clean up test utils
Clarify review comment guidelines for code reviews.
* "Claude PR Assistant workflow"
* "Claude Code Review workflow"
Adds the now required engine fields to the tangled CI pipeline declaration files.
This change replaces the external `arrayvec` dependency with an API-equivalent internal one. We do this for two reasons:
1. the external dependency does not mark enough functions as `const`, which we would like to have
2. general code-quality and maintenance concerns with the external dependency
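To illustrate why `const` matters here, a toy sketch of an internal array-backed vector (not the actual replacement, which presumably uses `MaybeUninit` rather than `Option`): a `const` constructor lets the type back `static`s with zero runtime initialization, which an ordinary `fn new` cannot.

```rust
// Hypothetical sketch: a minimal internal ArrayVec whose constructor is
// `const`, so it can be used in `static` initializers.
struct ArrayVec<T, const N: usize> {
    len: usize,
    buf: [Option<T>; N], // Option avoids unsafe MaybeUninit for this sketch
}

impl<T, const N: usize> ArrayVec<T, N> {
    /// `const`, so the vector can be constructed at compile time.
    const fn new() -> Self {
        Self { len: 0, buf: [const { None }; N] }
    }

    const fn len(&self) -> usize {
        self.len
    }

    fn push(&mut self, value: T) {
        assert!(self.len < N, "capacity exceeded");
        self.buf[self.len] = Some(value);
        self.len += 1;
    }
}

fn main() {
    // Compile-time construction: only possible because `new` is `const`.
    static _EMPTY: ArrayVec<u32, 4> = ArrayVec::new();

    let mut v: ArrayVec<u32, 4> = ArrayVec::new();
    v.push(1);
    v.push(2);
    assert_eq!(v.len(), 2);
}
```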
This change replaces the `checked_` methods previously exposed by the address types with regular `add`/`sub` methods that panic in debug mode and _would_ wrap in release mode.
However, they do not wrap in release mode, because this change also enables overflow checks across the board. The intention is to bring the behaviour of the address types in line with that of the other integer types used in the kernel: knobs that control overflow checks should apply to both.
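The trick is to implement `add`/`sub` with plain integer arithmetic rather than explicit `checked_`/`wrapping_` calls, so the methods inherit whatever the build's `overflow-checks` setting is, exactly like primitive integers do. A minimal sketch (the type name is illustrative):

```rust
// Hypothetical sketch: `add`/`sub` that inherit the build's overflow-check
// setting, mirroring the behaviour of primitive integer arithmetic.
#[derive(Debug, Clone, Copy, PartialEq)]
struct VirtualAddress(usize);

impl VirtualAddress {
    /// Panics on overflow whenever overflow checks are enabled (debug builds,
    /// or any profile with `overflow-checks = true`), just like `usize + usize`.
    fn add(self, offset: usize) -> Self {
        Self(self.0 + offset)
    }

    /// Panics on underflow under the same conditions.
    fn sub(self, offset: usize) -> Self {
        Self(self.0 - offset)
    }
}

fn main() {
    let addr = VirtualAddress(0x4000);
    assert_eq!(addr.add(0x1000), VirtualAddress(0x5000));
    assert_eq!(addr.sub(0x1000), VirtualAddress(0x3000));
}
```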
The `AddressRangeExt` trait needed some improvement and cleanup, in preparation for making the address types safer, saner, and compatible with strict provenance.
Instead of attempting to access address `0x0`, which Rust would catch in debug mode, we dereference `0x1` instead.