
Merge tag 'rust-6.13' of https://github.com/Rust-for-Linux/linux

Pull rust updates from Miguel Ojeda:
"Toolchain and infrastructure:

- Enable a series of lints, including safety-related ones, e.g. the
compiler will now warn about missing safety comments, as well as
unnecessary ones. How safety documentation is organized is a
frequent source of review comments, thus having the compiler guide
new developers on where they are expected (and where not) is very
nice.

- Start using '#[expect]': an interesting feature in Rust (stabilized
in 1.81.0) that makes the compiler warn if an expected warning was
_not_ emitted. This is useful to avoid forgetting to clean up
locally ignored diagnostics ('#[allow]'s).

- Introduce '.clippy.toml' configuration file for Clippy, the Rust
linter, which will allow us to tweak its behaviour. For instance,
our first use cases are declaring a disallowed macro and, more
importantly, enabling the checking of private items.

- Lints-related fixes and cleanups related to the items above.

- Migrate from 'receiver_trait' to 'arbitrary_self_types': to get the
kernel into stable Rust, one of the major pieces of the puzzle is
support for writing custom types that can be used as 'self', i.e.
as receivers, since the kernel needs to write types such as 'Arc'
that common userspace Rust would not. 'arbitrary_self_types' has
been accepted to become stable, and this is one of the steps
required to get there.
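
As an illustration (a plain userspace sketch, not kernel code: std's
'Arc' already supports this receiver position on stable Rust, while
'arbitrary_self_types' is what will let custom types like the
kernel's own 'Arc' do the same):

```rust
use std::sync::Arc;

// A method whose receiver is a smart pointer rather than `&self`:
// `self: Arc<Self>` works on stable for std's `Arc`; the
// `arbitrary_self_types` feature generalizes this to custom types.
struct Counter {
    n: i32,
}

impl Counter {
    fn get(self: Arc<Self>) -> i32 {
        self.n
    }
}

fn main() {
    let c = Arc::new(Counter { n: 42 });
    assert_eq!(c.get(), 42);
}
```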

- Remove usage of the 'new_uninit' unstable feature.

- Use custom C FFI types. Includes a new 'ffi' crate to contain our
custom mapping, instead of using the standard library 'core::ffi'
one. The actual remapping will be introduced in a later cycle.

- Map '__kernel_{size_t,ssize_t,ptrdiff_t}' to 'usize'/'isize'
instead of 32/64-bit integers.

- Fix 'size_t' in bindgen generated prototypes of C builtins.

- Warn on bindgen < 0.69.5 and libclang >= 19.1 due to a pair of
issues in those projects, which we managed to trigger with the upcoming
tracepoint support. It includes a build test since some
distributions backported the fix (e.g. Debian -- thanks!). All
major distributions we list should now be OK except Ubuntu non-LTS.

'macros' crate:

- Adapt the build system to be able to run the doctests there too; and
clean up and enable the corresponding doctests.

'kernel' crate:

- Add 'alloc' module with generic kernel allocator support and remove
the dependency on the Rust standard library 'alloc' and the
extension traits we used to provide fallible methods with flags.

Add the 'Allocator' trait and its implementations '{K,V,KV}malloc'.
Add the 'Box' type (a heap allocation for a single value of type
'T' that is also generic over an allocator and considers the
kernel's GFP flags) and its shorthand aliases '{K,V,KV}Box'. Add
'ArrayLayout' type. Add 'Vec' (a contiguous growable array type)
and its shorthand aliases '{K,V,KV}Vec', including iterator
support.

For instance, now we may write code such as:

let mut v = KVec::new();
v.push(1, GFP_KERNEL)?;
assert_eq!(&v, &[1]);

Treewide, also move old users to these new types.

- 'sync' module: add global lock support, including the
'GlobalLockBackend' trait; the 'Global{Lock,Guard,LockedBy}' types
and the 'global_lock!' macro. Add the 'Lock::try_lock' method.
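
The semantics of 'try_lock' can be sketched with std's 'Mutex' as a
userspace stand-in (not the kernel type): acquire the lock if it is
free, or fail immediately without sleeping if it is contended:

```rust
use std::sync::Mutex;

fn main() {
    let lock = Mutex::new(0u32);

    // Uncontended: try_lock succeeds and returns a guard.
    let guard = lock.try_lock().expect("uncontended lock should be free");

    // While the guard is held, a second try_lock fails without blocking.
    assert!(lock.try_lock().is_err());

    // Once the guard is dropped, the lock can be taken again.
    drop(guard);
    assert!(lock.try_lock().is_ok());
}
```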

- 'error' module: optimize 'Error' type to use 'NonZeroI32' and make
conversion functions public.
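
The layout benefit can be sketched in plain Rust with a hypothetical
errno wrapper (not the kernel's actual 'Error' definition): since 0 is
never a valid error number, 'NonZeroI32' gives the compiler a niche,
so wrapping the type in 'Option' costs no extra space:

```rust
use core::mem::size_of;
use core::num::NonZeroI32;

// Hypothetical stand-in for an errno-holding error type.
#[allow(dead_code)]
struct Error(NonZeroI32);

fn main() {
    // Same size as a bare i32 ...
    assert_eq!(size_of::<Error>(), size_of::<i32>());
    // ... even when wrapped in Option, thanks to the 0 niche.
    assert_eq!(size_of::<Option<Error>>(), size_of::<i32>());
}
```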

- 'page' module: add 'page_align' function.
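
The underlying arithmetic is the usual power-of-two round-up; a
userspace sketch assuming 4 KiB pages (the kernel's 'page_align' uses
the configured PAGE_SIZE, which is always a power of two, so the same
mask trick applies):

```rust
const PAGE_SIZE: usize = 4096;

// Round `addr` up to the next page boundary.
fn page_align(addr: usize) -> usize {
    (addr + PAGE_SIZE - 1) & !(PAGE_SIZE - 1)
}

fn main() {
    assert_eq!(page_align(0), 0);
    assert_eq!(page_align(1), 4096);
    assert_eq!(page_align(4096), 4096);
    assert_eq!(page_align(4097), 8192);
}
```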

- Add 'transmute' module with the existing 'FromBytes' and 'AsBytes'
traits.
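
The property these traits capture (every bit pattern is a valid value,
no padding) can be illustrated with plain 'u32' byte round-trips; the
kernel traits themselves are not used here:

```rust
fn main() {
    // A u32 can be viewed as bytes and reconstructed losslessly,
    // which is the kind of guarantee `FromBytes`/`AsBytes` encode
    // at the type level.
    let v: u32 = 0xDEAD_BEEF;
    let bytes = v.to_ne_bytes();
    assert_eq!(bytes.len(), 4);
    assert_eq!(u32::from_ne_bytes(bytes), v);
}
```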

- 'block::mq::request' module: improve rendered documentation.

- 'types' module: extend 'Opaque' type documentation and add simple
examples for the 'Either' types.

drm/panic:

- Clean up a series of Clippy warnings.

Documentation:

- Add coding guidelines for lints and the '#[expect]' feature.

- Add Ubuntu to the list of distributions in the Quick Start guide.

MAINTAINERS:

- Add Danilo Krummrich as maintainer of the new 'alloc' module.

And a few other small cleanups and fixes"

* tag 'rust-6.13' of https://github.com/Rust-for-Linux/linux: (82 commits)
rust: alloc: Fix `ArrayLayout` allocations
docs: rust: remove spurious item in `expect` list
rust: allow `clippy::needless_lifetimes`
rust: warn on bindgen < 0.69.5 and libclang >= 19.1
rust: use custom FFI integer types
rust: map `__kernel_size_t` and friends also to usize/isize
rust: fix size_t in bindgen prototypes of C builtins
rust: sync: add global lock support
rust: macros: enable the rest of the tests
rust: macros: enable paste! use from macro_rules!
rust: enable macros::module! tests
rust: kbuild: expand rusttest target for macros
rust: types: extend `Opaque` documentation
rust: block: fix formatting of `kernel::block::mq::request` module
rust: macros: fix documentation of the paste! macro
rust: kernel: fix THIS_MODULE header path in ThisModule doc comment
rust: page: add Rust version of PAGE_ALIGN
rust: helpers: remove unnecessary header includes
rust: exports: improve grammar in commentary
drm/panic: allow verbose version check
...

+3176 -901
+9
.clippy.toml
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + check-private-items = true 4 + 5 + disallowed-macros = [ 6 + # The `clippy::dbg_macro` lint only works with `std::dbg!`, thus we simulate 7 + # it here, see: https://github.com/rust-lang/rust-clippy/issues/11303. 8 + { path = "kernel::dbg", reason = "the `dbg!` macro is intended as a debugging tool" }, 9 + ]
+1
.gitignore
··· 103 103 # We don't want to ignore the following even if they are dot-files 104 104 # 105 105 !.clang-format 106 + !.clippy.toml 106 107 !.cocciconfig 107 108 !.editorconfig 108 109 !.get_maintainer.ignore
+146
Documentation/rust/coding-guidelines.rst
··· 227 227 That is, the equivalent of ``GPIO_LINE_DIRECTION_IN`` would be referred to as 228 228 ``gpio::LineDirection::In``. In particular, it should not be named 229 229 ``gpio::gpio_line_direction::GPIO_LINE_DIRECTION_IN``. 230 + 231 + 232 + Lints 233 + ----- 234 + 235 + In Rust, it is possible to ``allow`` particular warnings (diagnostics, lints) 236 + locally, making the compiler ignore instances of a given warning within a given 237 + function, module, block, etc. 238 + 239 + It is similar to ``#pragma GCC diagnostic push`` + ``ignored`` + ``pop`` in C 240 + [#]_: 241 + 242 + .. code-block:: c 243 + 244 + #pragma GCC diagnostic push 245 + #pragma GCC diagnostic ignored "-Wunused-function" 246 + static void f(void) {} 247 + #pragma GCC diagnostic pop 248 + 249 + .. [#] In this particular case, the kernel's ``__{always,maybe}_unused`` 250 + attributes (C23's ``[[maybe_unused]]``) may be used; however, the example 251 + is meant to reflect the equivalent lint in Rust discussed afterwards. 252 + 253 + But way less verbose: 254 + 255 + .. code-block:: rust 256 + 257 + #[allow(dead_code)] 258 + fn f() {} 259 + 260 + By that virtue, it makes it possible to comfortably enable more diagnostics by 261 + default (i.e. outside ``W=`` levels). In particular, those that may have some 262 + false positives but that are otherwise quite useful to keep enabled to catch 263 + potential mistakes. 264 + 265 + On top of that, Rust provides the ``expect`` attribute which takes this further. 266 + It makes the compiler warn if the warning was not produced. For instance, the 267 + following will ensure that, when ``f()`` is called somewhere, we will have to 268 + remove the attribute: 269 +
270 + .. code-block:: rust 271 + 272 + #[expect(dead_code)] 273 + fn f() {} 274 + 275 + If we do not, we get a warning from the compiler:: 276 + 277 + warning: this lint expectation is unfulfilled 278 + --> x.rs:3:10 279 + | 280 + 3 | #[expect(dead_code)] 281 + | ^^^^^^^^^ 282 + | 283 + = note: `#[warn(unfulfilled_lint_expectations)]` on by default 284 + 285 + This means that ``expect``\ s do not get forgotten when they are not needed, which 286 + may happen in several situations, e.g.: 287 + 288 + - Temporary attributes added while developing. 289 + 290 + - Improvements in lints in the compiler, Clippy or custom tools which may 291 + remove a false positive. 292 + 293 + - When the lint is not needed anymore because it was expected that it would be 294 + removed at some point, such as the ``dead_code`` example above. 295 + 296 + It also increases the visibility of the remaining ``allow``\ s and reduces the 297 + chance of misapplying one. 298 + 299 + Thus prefer ``expect`` over ``allow`` unless: 300 + 301 + - Conditional compilation triggers the warning in some cases but not others. 302 + 303 + If there are only a few cases where the warning triggers (or does not 304 + trigger) compared to the total number of cases, then one may consider using 305 + a conditional ``expect`` (i.e. ``cfg_attr(..., expect(...))``). Otherwise, 306 + it is likely simpler to just use ``allow``. 307 + 308 + - Inside macros, when the different invocations may create expanded code that 309 + triggers the warning in some cases but not in others. 310 + 311 + - When code may trigger a warning for some architectures but not others, such 312 + as an ``as`` cast to a C FFI type. 313 + 314 + As a more developed example, consider for instance this program: 315 + 316 + .. code-block:: rust 317 + 318 + fn g() {} 319 + 320 + fn main() { 321 + #[cfg(CONFIG_X)] 322 + g(); 323 + } 324 + 325 + Here, function ``g()`` is dead code if ``CONFIG_X`` is not set. Can we use 326 + ``expect`` here? 327 +
328 + .. code-block:: rust 329 + 330 + #[expect(dead_code)] 331 + fn g() {} 332 + 333 + fn main() { 334 + #[cfg(CONFIG_X)] 335 + g(); 336 + } 337 + 338 + This would emit a lint if ``CONFIG_X`` is set, since it is not dead code in that 339 + configuration. Therefore, in cases like this, we cannot use ``expect`` as-is. 340 + 341 + A simple possibility is using ``allow``: 342 + 343 + .. code-block:: rust 344 + 345 + #[allow(dead_code)] 346 + fn g() {} 347 + 348 + fn main() { 349 + #[cfg(CONFIG_X)] 350 + g(); 351 + } 352 + 353 + An alternative would be using a conditional ``expect``: 354 + 355 + .. code-block:: rust 356 + 357 + #[cfg_attr(not(CONFIG_X), expect(dead_code))] 358 + fn g() {} 359 + 360 + fn main() { 361 + #[cfg(CONFIG_X)] 362 + g(); 363 + } 364 + 365 + This would ensure that, if someone introduces another call to ``g()`` somewhere 366 + (e.g. unconditionally), then it would be spotted that it is not dead code 367 + anymore. However, the ``cfg_attr`` is more complex than a simple ``allow``. 368 + 369 + Therefore, it is likely that it is not worth using conditional ``expect``\ s when 370 + more than one or two configurations are involved or when the lint may be 371 + triggered due to non-local changes (such as ``dead_code``). 372 + 373 + For more information about diagnostics in Rust, please see: 374 + 375 + https://doc.rust-lang.org/stable/reference/attributes/diagnostics.html
+17
Documentation/rust/quick-start.rst
··· 87 87 zypper install rust rust1.79-src rust-bindgen clang 88 88 89 89 90 + Ubuntu 91 + ****** 92 + 93 + Ubuntu LTS and non-LTS (interim) releases provide recent Rust releases and thus 94 + they should generally work out of the box, e.g.:: 95 + 96 + apt install rustc-1.80 rust-1.80-src bindgen-0.65 rustfmt-1.80 rust-1.80-clippy 97 + 98 + ``RUST_LIB_SRC`` needs to be set when using the versioned packages, e.g.:: 99 + 100 + RUST_LIB_SRC=/usr/src/rustc-$(rustc-1.80 --version | cut -d' ' -f2)/library 101 + 102 + In addition, ``bindgen-0.65`` is available in newer releases (24.04 LTS and 103 + 24.10), but it may not be available in older ones (20.04 LTS and 22.04 LTS), 104 + thus ``bindgen`` may need to be built manually (please see below). 105 + 106 + 90 107 Requirements: Building 91 108 ---------------------- 92 109
+8
MAINTAINERS
··· 20368 20368 C: zulip://rust-for-linux.zulipchat.com 20369 20369 P: https://rust-for-linux.com/contributing 20370 20370 T: git https://github.com/Rust-for-Linux/linux.git rust-next 20371 + F: .clippy.toml 20371 20372 F: Documentation/rust/ 20372 20373 F: include/trace/events/rust_sample.h 20373 20374 F: rust/ ··· 20376 20375 F: scripts/*rust* 20377 20376 F: tools/testing/selftests/rust/ 20378 20377 K: \b(?i:rust)\b 20378 + 20379 + RUST [ALLOC] 20380 + M: Danilo Krummrich <dakr@kernel.org> 20381 + L: rust-for-linux@vger.kernel.org 20382 + S: Maintained 20383 + F: rust/kernel/alloc.rs 20384 + F: rust/kernel/alloc/ 20379 20385 20380 20386 RXRPC SOCKETS (AF_RXRPC) 20381 20387 M: David Howells <dhowells@redhat.com>
+12 -4
Makefile
··· 446 446 export rust_common_flags := --edition=2021 \ 447 447 -Zbinary_dep_depinfo=y \ 448 448 -Astable_features \ 449 - -Dunsafe_op_in_unsafe_fn \ 450 449 -Dnon_ascii_idents \ 450 + -Dunsafe_op_in_unsafe_fn \ 451 + -Wmissing_docs \ 451 452 -Wrust_2018_idioms \ 452 453 -Wunreachable_pub \ 453 - -Wmissing_docs \ 454 - -Wrustdoc::missing_crate_level_docs \ 455 454 -Wclippy::all \ 455 + -Wclippy::ignored_unit_patterns \ 456 456 -Wclippy::mut_mut \ 457 457 -Wclippy::needless_bitwise_bool \ 458 458 -Wclippy::needless_continue \ 459 + -Aclippy::needless_lifetimes \ 459 460 -Wclippy::no_mangle_with_rust_abi \ 460 - -Wclippy::dbg_macro 461 + -Wclippy::undocumented_unsafe_blocks \ 462 + -Wclippy::unnecessary_safety_comment \ 463 + -Wclippy::unnecessary_safety_doc \ 464 + -Wrustdoc::missing_crate_level_docs \ 465 + -Wrustdoc::unescaped_backticks 461 466 462 467 KBUILD_HOSTCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(HOST_LFS_CFLAGS) \ 463 468 $(HOSTCFLAGS) -I $(srctree)/scripts/include ··· 586 581 587 582 # Allows the usage of unstable features in stable compilers. 588 583 export RUSTC_BOOTSTRAP := 1 584 + 585 + # Allows finding `.clippy.toml` in out-of-srctree builds. 586 + export CLIPPY_CONF_DIR := $(srctree) 589 587 590 588 export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC HOSTPKG_CONFIG 591 589 export RUSTC RUSTDOC RUSTFMT RUSTC_OR_CLIPPY_QUIET RUSTC_OR_CLIPPY BINDGEN
+2 -2
drivers/block/rnull.rs
··· 32 32 } 33 33 34 34 struct NullBlkModule { 35 - _disk: Pin<Box<Mutex<GenDisk<NullBlkDevice>>>>, 35 + _disk: Pin<KBox<Mutex<GenDisk<NullBlkDevice>>>>, 36 36 } 37 37 38 38 impl kernel::Module for NullBlkModule { ··· 47 47 .rotational(false) 48 48 .build(format_args!("rnullb{}", 0), tagset)?; 49 49 50 - let disk = Box::pin_init(new_mutex!(disk, "nullb:disk"), flags::GFP_KERNEL)?; 50 + let disk = KBox::pin_init(new_mutex!(disk, "nullb:disk"), flags::GFP_KERNEL)?; 51 51 52 52 Ok(Self { _disk: disk }) 53 53 }
+12 -11
drivers/gpu/drm/drm_panic_qr.rs
··· 209 209 impl Version { 210 210 /// Returns the smallest QR version than can hold these segments. 211 211 fn from_segments(segments: &[&Segment<'_>]) -> Option<Version> { 212 - for v in (1..=40).map(|k| Version(k)) { 213 - if v.max_data() * 8 >= segments.iter().map(|s| s.total_size_bits(v)).sum() { 214 - return Some(v); 215 - } 216 - } 217 - None 212 + (1..=40) 213 + .map(Version) 214 + .find(|&v| v.max_data() * 8 >= segments.iter().map(|s| s.total_size_bits(v)).sum()) 218 215 } 219 216 220 217 fn width(&self) -> u8 { ··· 239 242 } 240 243 241 244 fn alignment_pattern(&self) -> &'static [u8] { 242 - &ALIGNMENT_PATTERNS[self.0 - 1] 245 + ALIGNMENT_PATTERNS[self.0 - 1] 243 246 } 244 247 245 248 fn poly(&self) -> &'static [u8] { ··· 476 479 /// Data to be put in the QR code, with correct segment encoding, padding, and 477 480 /// Error Code Correction. 478 481 impl EncodedMsg<'_> { 479 - fn new<'a, 'b>(segments: &[&Segment<'b>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> { 482 + fn new<'a>(segments: &[&Segment<'_>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> { 480 483 let version = Version::from_segments(segments)?; 481 484 let ec_size = version.ec_size(); 482 485 let g1_blocks = version.g1_blocks(); ··· 489 492 data.fill(0); 490 493 491 494 let mut em = EncodedMsg { 492 - data: data, 495 + data, 493 496 ec_size, 494 497 g1_blocks, 495 498 g2_blocks, ··· 719 722 720 723 fn is_finder(&self, x: u8, y: u8) -> bool { 721 724 let end = self.width - 8; 722 - (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8) 725 + #[expect(clippy::nonminimal_bool)] 726 + { 727 + (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8) 728 + } 723 729 } 724 730 725 731 // Alignment pattern: 5x5 squares in a grid. ··· 979 979 /// * `url_len`: Length of the URL. 980 980 /// 981 981 /// * If `url_len` > 0, remove the 2 segments header/length and also count the 982 - /// conversion to numeric segments. 982 + /// conversion to numeric segments. 
983 983 /// * If `url_len` = 0, only removes 3 bytes for 1 binary segment. 984 984 #[no_mangle] 985 985 pub extern "C" fn drm_panic_qr_max_data_size(version: u8, url_len: usize) -> usize { 986 + #[expect(clippy::manual_range_contains)] 986 987 if version < 1 || version > 40 { 987 988 return 0; 988 989 }
+2 -1
mm/kasan/kasan_test_rust.rs
··· 11 11 /// drop the vector, and touch it. 12 12 #[no_mangle] 13 13 pub extern "C" fn kasan_test_rust_uaf() -> u8 { 14 - let mut v: Vec<u8> = Vec::new(); 14 + let mut v: KVec<u8> = KVec::new(); 15 15 for _ in 0..4096 { 16 16 v.push(0x42, GFP_KERNEL).unwrap(); 17 17 } 18 18 let ptr: *mut u8 = addr_of_mut!(v[2048]); 19 19 drop(v); 20 + // SAFETY: Incorrect, on purpose. 20 21 unsafe { *ptr } 21 22 }
+51 -44
rust/Makefile
··· 3 3 # Where to place rustdoc generated documentation 4 4 rustdoc_output := $(objtree)/Documentation/output/rust/rustdoc 5 5 6 - obj-$(CONFIG_RUST) += core.o compiler_builtins.o 6 + obj-$(CONFIG_RUST) += core.o compiler_builtins.o ffi.o 7 7 always-$(CONFIG_RUST) += exports_core_generated.h 8 8 9 9 # Missing prototypes are expected in the helpers since these are exported ··· 15 15 no-clean-files += libmacros.so 16 16 17 17 always-$(CONFIG_RUST) += bindings/bindings_generated.rs bindings/bindings_helpers_generated.rs 18 - obj-$(CONFIG_RUST) += alloc.o bindings.o kernel.o 19 - always-$(CONFIG_RUST) += exports_alloc_generated.h exports_helpers_generated.h \ 18 + obj-$(CONFIG_RUST) += bindings.o kernel.o 19 + always-$(CONFIG_RUST) += exports_helpers_generated.h \ 20 20 exports_bindings_generated.h exports_kernel_generated.h 21 21 22 22 always-$(CONFIG_RUST) += uapi/uapi_generated.rs ··· 55 55 core-cfgs = \ 56 56 --cfg no_fp_fmt_parse 57 57 58 - alloc-cfgs = \ 59 - --cfg no_global_oom_handling \ 60 - --cfg no_rc \ 61 - --cfg no_sync 62 - 63 58 quiet_cmd_rustdoc = RUSTDOC $(if $(rustdoc_host),H, ) $< 64 59 cmd_rustdoc = \ 65 60 OBJTREE=$(abspath $(objtree)) \ 66 - $(RUSTDOC) $(if $(rustdoc_host),$(rust_common_flags),$(rust_flags)) \ 61 + $(RUSTDOC) $(filter-out $(skip_flags),$(if $(rustdoc_host),$(rust_common_flags),$(rust_flags))) \ 67 62 $(rustc_target_flags) -L$(objtree)/$(obj) \ 68 63 -Zunstable-options --generate-link-to-definition \ 69 64 --output $(rustdoc_output) \ ··· 78 83 # command-like flags to solve the issue. Meanwhile, we use the non-custom case 79 84 # and then retouch the generated files. 
80 85 rustdoc: rustdoc-core rustdoc-macros rustdoc-compiler_builtins \ 81 - rustdoc-alloc rustdoc-kernel 86 + rustdoc-kernel 82 87 $(Q)cp $(srctree)/Documentation/images/logo.svg $(rustdoc_output)/static.files/ 83 88 $(Q)cp $(srctree)/Documentation/images/COPYING-logo $(rustdoc_output)/static.files/ 84 89 $(Q)find $(rustdoc_output) -name '*.html' -type f -print0 | xargs -0 sed -Ei \ ··· 95 100 rustdoc-macros: $(src)/macros/lib.rs FORCE 96 101 +$(call if_changed,rustdoc) 97 102 103 + # Starting with Rust 1.82.0, skipping `-Wrustdoc::unescaped_backticks` should 104 + # not be needed -- see https://github.com/rust-lang/rust/pull/128307. 105 + rustdoc-core: private skip_flags = -Wrustdoc::unescaped_backticks 98 106 rustdoc-core: private rustc_target_flags = $(core-cfgs) 99 107 rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs FORCE 100 108 +$(call if_changed,rustdoc) ··· 105 107 rustdoc-compiler_builtins: $(src)/compiler_builtins.rs rustdoc-core FORCE 106 108 +$(call if_changed,rustdoc) 107 109 108 - # We need to allow `rustdoc::broken_intra_doc_links` because some 109 - # `no_global_oom_handling` functions refer to non-`no_global_oom_handling` 110 - # functions. Ideally `rustdoc` would have a way to distinguish broken links 111 - # due to things that are "configured out" vs. entirely non-existing ones. 
112 - rustdoc-alloc: private rustc_target_flags = $(alloc-cfgs) \ 113 - -Arustdoc::broken_intra_doc_links 114 - rustdoc-alloc: $(RUST_LIB_SRC)/alloc/src/lib.rs rustdoc-core rustdoc-compiler_builtins FORCE 110 + rustdoc-ffi: $(src)/ffi.rs rustdoc-core FORCE 115 111 +$(call if_changed,rustdoc) 116 112 117 - rustdoc-kernel: private rustc_target_flags = --extern alloc \ 113 + rustdoc-kernel: private rustc_target_flags = --extern ffi \ 118 114 --extern build_error --extern macros=$(objtree)/$(obj)/libmacros.so \ 119 115 --extern bindings --extern uapi 120 - rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-macros \ 121 - rustdoc-compiler_builtins rustdoc-alloc $(obj)/libmacros.so \ 116 + rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-ffi rustdoc-macros \ 117 + rustdoc-compiler_builtins $(obj)/libmacros.so \ 122 118 $(obj)/bindings.o FORCE 123 119 +$(call if_changed,rustdoc) 124 120 ··· 129 137 rusttestlib-build_error: $(src)/build_error.rs FORCE 130 138 +$(call if_changed,rustc_test_library) 131 139 140 + rusttestlib-ffi: $(src)/ffi.rs FORCE 141 + +$(call if_changed,rustc_test_library) 142 + 132 143 rusttestlib-macros: private rustc_target_flags = --extern proc_macro 133 144 rusttestlib-macros: private rustc_test_library_proc = yes 134 145 rusttestlib-macros: $(src)/macros/lib.rs FORCE 135 146 +$(call if_changed,rustc_test_library) 136 147 137 - rusttestlib-bindings: $(src)/bindings/lib.rs FORCE 148 + rusttestlib-kernel: private rustc_target_flags = --extern ffi \ 149 + --extern build_error --extern macros \ 150 + --extern bindings --extern uapi 151 + rusttestlib-kernel: $(src)/kernel/lib.rs \ 152 + rusttestlib-bindings rusttestlib-uapi rusttestlib-build_error \ 153 + $(obj)/libmacros.so $(obj)/bindings.o FORCE 138 154 +$(call if_changed,rustc_test_library) 139 155 140 - rusttestlib-uapi: $(src)/uapi/lib.rs FORCE 156 + rusttestlib-bindings: private rustc_target_flags = --extern ffi 157 + rusttestlib-bindings: $(src)/bindings/lib.rs rusttestlib-ffi FORCE
158 + +$(call if_changed,rustc_test_library) 159 + 160 + rusttestlib-uapi: private rustc_target_flags = --extern ffi 161 + rusttestlib-uapi: $(src)/uapi/lib.rs rusttestlib-ffi FORCE 141 162 +$(call if_changed,rustc_test_library) 142 163 143 164 quiet_cmd_rustdoc_test = RUSTDOC T $< 144 165 cmd_rustdoc_test = \ 166 + RUST_MODFILE=test.rs \ 145 167 OBJTREE=$(abspath $(objtree)) \ 146 168 $(RUSTDOC) --test $(rust_common_flags) \ 147 169 @$(objtree)/include/generated/rustc_cfg \ ··· 170 164 mkdir -p $(objtree)/$(obj)/test/doctests/kernel; \ 171 165 OBJTREE=$(abspath $(objtree)) \ 172 166 $(RUSTDOC) --test $(rust_flags) \ 173 - -L$(objtree)/$(obj) --extern alloc --extern kernel \ 167 + -L$(objtree)/$(obj) --extern ffi --extern kernel \ 174 168 --extern build_error --extern macros \ 175 169 --extern bindings --extern uapi \ 176 170 --no-run --crate-name kernel -Zunstable-options \ ··· 200 194 201 195 rusttest: rusttest-macros rusttest-kernel 202 196 203 - rusttest-macros: private rustc_target_flags = --extern proc_macro 197 + rusttest-macros: private rustc_target_flags = --extern proc_macro \ 198 + --extern macros --extern kernel 204 199 rusttest-macros: private rustdoc_test_target_flags = --crate-type proc-macro 205 - rusttest-macros: $(src)/macros/lib.rs FORCE 200 + rusttest-macros: $(src)/macros/lib.rs \ 201 + rusttestlib-macros rusttestlib-kernel FORCE 206 202 +$(call if_changed,rustc_test) 207 203 +$(call if_changed,rustdoc_test) 208 204 209 - rusttest-kernel: private rustc_target_flags = --extern alloc \ 205 + rusttest-kernel: private rustc_target_flags = --extern ffi \ 210 206 --extern build_error --extern macros --extern bindings --extern uapi 211 - rusttest-kernel: $(src)/kernel/lib.rs \ 207 + rusttest-kernel: $(src)/kernel/lib.rs rusttestlib-ffi rusttestlib-kernel \ 212 208 rusttestlib-build_error rusttestlib-macros rusttestlib-bindings \ 213 209 rusttestlib-uapi FORCE 214 210 +$(call if_changed,rustc_test) 215 - +$(call if_changed,rustc_test_library) 216
211 217 212 ifdef CONFIG_CC_IS_CLANG 218 213 bindgen_c_flags = $(c_flags) ··· 274 267 bindgen_c_flags_lto = $(bindgen_c_flags) 275 268 endif 276 269 277 - bindgen_c_flags_final = $(bindgen_c_flags_lto) -D__BINDGEN__ 270 + # `-fno-builtin` is passed to avoid `bindgen` from using `clang` builtin 271 + # prototypes for functions like `memcpy` -- if this flag is not passed, 272 + # `bindgen`-generated prototypes use `c_ulong` or `c_uint` depending on 273 + # architecture instead of generating `usize`. 274 + bindgen_c_flags_final = $(bindgen_c_flags_lto) -fno-builtin -D__BINDGEN__ 278 275 279 276 quiet_cmd_bindgen = BINDGEN $@ 280 277 cmd_bindgen = \ 281 278 $(BINDGEN) $< $(bindgen_target_flags) \ 282 - --use-core --with-derive-default --ctypes-prefix core::ffi --no-layout-tests \ 279 + --use-core --with-derive-default --ctypes-prefix ffi --no-layout-tests \ 283 280 --no-debug '.*' --enable-function-attribute-detection \ 284 281 -o $@ -- $(bindgen_c_flags_final) -DMODULE \ 285 282 $(bindgen_target_cflags) $(bindgen_target_extra) ··· 322 311 | awk '$$2~/(T|R|D|B)/ && $$3!~/__cfi/ {printf "EXPORT_SYMBOL_RUST_GPL(%s);\n",$$3}' > $@ 323 312 324 313 $(obj)/exports_core_generated.h: $(obj)/core.o FORCE 325 - $(call if_changed,exports) 326 - 327 - $(obj)/exports_alloc_generated.h: $(obj)/alloc.o FORCE 328 314 $(call if_changed,exports) 329 315 330 316 # Even though Rust kernel modules should never use the bindings directly, ··· 370 362 371 363 rust-analyzer: 372 364 $(Q)$(srctree)/scripts/generate_rust_analyzer.py \ 373 - --cfgs='core=$(core-cfgs)' --cfgs='alloc=$(alloc-cfgs)' \ 365 + --cfgs='core=$(core-cfgs)' \ 374 366 $(realpath $(srctree)) $(realpath $(objtree)) \ 375 367 $(rustc_sysroot) $(RUST_LIB_SRC) $(KBUILD_EXTMOD) > \ 376 368 $(if $(KBUILD_EXTMOD),$(extmod_prefix),$(objtree))/rust-project.json ··· 408 400 $(obj)/compiler_builtins.o: $(src)/compiler_builtins.rs $(obj)/core.o FORCE 409 401 +$(call if_changed_rule,rustc_library) 410 402 411 - $(obj)/alloc.o: private skip_clippy = 1
412 - $(obj)/alloc.o: private skip_flags = -Wunreachable_pub 413 - $(obj)/alloc.o: private rustc_target_flags = $(alloc-cfgs) 414 - $(obj)/alloc.o: $(RUST_LIB_SRC)/alloc/src/lib.rs $(obj)/compiler_builtins.o FORCE 415 - +$(call if_changed_rule,rustc_library) 416 - 417 403 $(obj)/build_error.o: $(src)/build_error.rs $(obj)/compiler_builtins.o FORCE 418 404 +$(call if_changed_rule,rustc_library) 419 405 406 + $(obj)/ffi.o: $(src)/ffi.rs $(obj)/compiler_builtins.o FORCE 407 + +$(call if_changed_rule,rustc_library) 408 + 409 + $(obj)/bindings.o: private rustc_target_flags = --extern ffi 420 410 $(obj)/bindings.o: $(src)/bindings/lib.rs \ 421 - $(obj)/compiler_builtins.o \ 411 + $(obj)/ffi.o \ 422 412 $(obj)/bindings/bindings_generated.rs \ 423 413 $(obj)/bindings/bindings_helpers_generated.rs FORCE 424 414 +$(call if_changed_rule,rustc_library) 425 415 416 + $(obj)/uapi.o: private rustc_target_flags = --extern ffi 426 417 $(obj)/uapi.o: $(src)/uapi/lib.rs \ 427 - $(obj)/compiler_builtins.o \ 418 + $(obj)/ffi.o \ 428 419 $(obj)/uapi/uapi_generated.rs FORCE 429 420 +$(call if_changed_rule,rustc_library) 430 421 431 - $(obj)/kernel.o: private rustc_target_flags = --extern alloc \ 422 + $(obj)/kernel.o: private rustc_target_flags = --extern ffi \ 432 423 --extern build_error --extern macros --extern bindings --extern uapi 433 - $(obj)/kernel.o: $(src)/kernel/lib.rs $(obj)/alloc.o $(obj)/build_error.o \ 424 + $(obj)/kernel.o: $(src)/kernel/lib.rs $(obj)/build_error.o \ 434 425 $(obj)/libmacros.so $(obj)/bindings.o $(obj)/uapi.o FORCE 435 426 +$(call if_changed_rule,rustc_library) 436 427
+5
rust/bindgen_parameters
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 + # We want to map these types to `isize`/`usize` manually, instead of 4 + # define them as `int`/`long` depending on platform bitwidth. 5 + --blocklist-type __kernel_s?size_t 6 + --blocklist-type __kernel_ptrdiff_t 7 + 3 8 --opaque-type xregs_state 4 9 --opaque-type desc_struct 5 10 --opaque-type arch_lbr_state
+1
rust/bindings/bindings_helper.h
··· 40 40 const gfp_t RUST_CONST_HELPER_GFP_NOWAIT = GFP_NOWAIT; 41 41 const gfp_t RUST_CONST_HELPER___GFP_ZERO = __GFP_ZERO; 42 42 const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM; 43 + const gfp_t RUST_CONST_HELPER___GFP_NOWARN = ___GFP_NOWARN; 43 44 const blk_features_t RUST_CONST_HELPER_BLK_FEAT_ROTATIONAL = BLK_FEAT_ROTATIONAL;
+6
rust/bindings/lib.rs
··· 25 25 )] 26 26 27 27 #[allow(dead_code)] 28 + #[allow(clippy::undocumented_unsafe_blocks)] 28 29 mod bindings_raw { 30 + // Manual definition for blocklisted types. 31 + type __kernel_size_t = usize; 32 + type __kernel_ssize_t = isize; 33 + type __kernel_ptrdiff_t = isize; 34 + 29 35 // Use glob import here to expose all helpers. 30 36 // Symbols defined within the module will take precedence to the glob import. 31 37 pub use super::bindings_helper::*;
+3 -4
rust/exports.c
··· 3 3 * A hack to export Rust symbols for loadable modules without having to redo 4 4 * the entire `include/linux/export.h` logic in Rust. 5 5 * 6 - * This requires the Rust's new/future `v0` mangling scheme because the default 7 - * one ("legacy") uses invalid characters for C identifiers (thus we cannot use 8 - * the `EXPORT_SYMBOL_*` macros). 6 + * This requires Rust's new/future `v0` mangling scheme because the default one 7 + * ("legacy") uses invalid characters for C identifiers (thus we cannot use the 8 + * `EXPORT_SYMBOL_*` macros). 9 9 * 10 10 * All symbols are exported as GPL-only to guarantee no GPL-only feature is 11 11 * accidentally exposed. ··· 16 16 #define EXPORT_SYMBOL_RUST_GPL(sym) extern int sym; EXPORT_SYMBOL_GPL(sym) 17 17 18 18 #include "exports_core_generated.h" 19 - #include "exports_alloc_generated.h" 20 19 #include "exports_helpers_generated.h" 21 20 #include "exports_bindings_generated.h" 22 21 #include "exports_kernel_generated.h"
+13
rust/ffi.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Foreign function interface (FFI) types. 4 + //! 5 + //! This crate provides mapping from C primitive types to Rust ones. 6 + //! 7 + //! The Rust [`core`] crate provides [`core::ffi`], which maps integer types to the platform default 8 + //! C ABI. The kernel does not use [`core::ffi`], so it can customise the mapping that deviates from 9 + //! the platform default. 10 + 11 + #![no_std] 12 + 13 + pub use core::ffi::*;
-1
rust/helpers/build_bug.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - #include <linux/export.h> 4 3 #include <linux/errname.h> 5 4 6 5 const char *rust_helper_errname(int err)
-1
rust/helpers/err.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 3 #include <linux/err.h> 4 - #include <linux/export.h> 5 4 6 5 __force void *rust_helper_ERR_PTR(long err) 7 6 {
+1
rust/helpers/helpers.c
··· 27 27 #include "spinlock.c" 28 28 #include "task.c" 29 29 #include "uaccess.c" 30 + #include "vmalloc.c" 30 31 #include "wait.c" 31 32 #include "workqueue.c"
-1
rust/helpers/kunit.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 3 #include <kunit/test-bug.h> 4 - #include <linux/export.h> 5 4 6 5 struct kunit *rust_helper_kunit_get_current_test(void) 7 6 {
-1
rust/helpers/mutex.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - #include <linux/export.h> 4 3 #include <linux/mutex.h> 5 4 6 5 void rust_helper_mutex_lock(struct mutex *lock)
-1
rust/helpers/refcount.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - #include <linux/export.h> 4 3 #include <linux/refcount.h> 5 4 6 5 refcount_t rust_helper_REFCOUNT_INIT(int n)
-1
rust/helpers/signal.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - #include <linux/export.h> 4 3 #include <linux/sched/signal.h> 5 4 6 5 int rust_helper_signal_pending(struct task_struct *t)
+6
rust/helpers/slab.c
··· 7 7 { 8 8 return krealloc(objp, new_size, flags); 9 9 } 10 + 11 + void * __must_check __realloc_size(2) 12 + rust_helper_kvrealloc(const void *p, size_t size, gfp_t flags) 13 + { 14 + return kvrealloc(p, size, flags); 15 + }
+5 -1
rust/helpers/spinlock.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - #include <linux/export.h> 4 3 #include <linux/spinlock.h> 5 4 6 5 void rust_helper___spin_lock_init(spinlock_t *lock, const char *name, ··· 24 25 void rust_helper_spin_unlock(spinlock_t *lock) 25 26 { 26 27 spin_unlock(lock); 28 + } 29 + 30 + int rust_helper_spin_trylock(spinlock_t *lock) 31 + { 32 + return spin_trylock(lock); 27 33 }
-1
rust/helpers/task.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - #include <linux/export.h> 4 3 #include <linux/sched/task.h> 5 4 6 5 struct task_struct *rust_helper_get_current(void)
+9
rust/helpers/vmalloc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/vmalloc.h> 4 + 5 + void * __must_check __realloc_size(2) 6 + rust_helper_vrealloc(const void *p, size_t size, gfp_t flags) 7 + { 8 + return vrealloc(p, size, flags); 9 + }
-1
rust/helpers/wait.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - #include <linux/export.h> 4 3 #include <linux/wait.h> 5 4 6 5 void rust_helper_init_wait(struct wait_queue_entry *wq_entry)
-1
rust/helpers/workqueue.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - #include <linux/export.h> 4 3 #include <linux/workqueue.h> 5 4 6 5 void rust_helper_init_work_with_key(struct work_struct *work, work_func_t func,
+143 -7
rust/kernel/alloc.rs
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - //! Extensions to the [`alloc`] crate. 3 + //! Implementation of the kernel's memory allocation infrastructure. 4 4 5 - #[cfg(not(test))] 6 - #[cfg(not(testlib))] 7 - mod allocator; 8 - pub mod box_ext; 9 - pub mod vec_ext; 5 + #[cfg(not(any(test, testlib)))] 6 + pub mod allocator; 7 + pub mod kbox; 8 + pub mod kvec; 9 + pub mod layout; 10 + 11 + #[cfg(any(test, testlib))] 12 + pub mod allocator_test; 13 + 14 + #[cfg(any(test, testlib))] 15 + pub use self::allocator_test as allocator; 16 + 17 + pub use self::kbox::Box; 18 + pub use self::kbox::KBox; 19 + pub use self::kbox::KVBox; 20 + pub use self::kbox::VBox; 21 + 22 + pub use self::kvec::IntoIter; 23 + pub use self::kvec::KVVec; 24 + pub use self::kvec::KVec; 25 + pub use self::kvec::VVec; 26 + pub use self::kvec::Vec; 10 27 11 28 /// Indicates an allocation error. 12 29 #[derive(Copy, Clone, PartialEq, Eq, Debug)] 13 30 pub struct AllocError; 31 + use core::{alloc::Layout, ptr::NonNull}; 14 32 15 33 /// Flags to be used when allocating memory. 16 34 /// 17 35 /// They can be combined with the operators `|`, `&`, and `!`. 18 36 /// 19 37 /// Values can be used from the [`flags`] module. 20 - #[derive(Clone, Copy)] 38 + #[derive(Clone, Copy, PartialEq)] 21 39 pub struct Flags(u32); 22 40 23 41 impl Flags { 24 42 /// Get the raw representation of this flag. 25 43 pub(crate) fn as_raw(self) -> u32 { 26 44 self.0 45 + } 46 + 47 + /// Check whether `flags` is contained in `self`. 48 + pub fn contains(self, flags: Flags) -> bool { 49 + (self & flags) == flags 27 50 } 28 51 } 29 52 ··· 108 85 /// use any filesystem callback. It is very likely to fail to allocate memory, even for very 109 86 /// small allocations. 110 87 pub const GFP_NOWAIT: Flags = Flags(bindings::GFP_NOWAIT); 88 + 89 + /// Suppresses allocation failure reports. 90 + /// 91 + /// This is normally or'd with other flags. 
92 + pub const __GFP_NOWARN: Flags = Flags(bindings::__GFP_NOWARN); 93 + } 94 + 95 + /// The kernel's [`Allocator`] trait. 96 + /// 97 + /// An implementation of [`Allocator`] can allocate, re-allocate and free memory buffers described 98 + /// via [`Layout`]. 99 + /// 100 + /// [`Allocator`] is designed to be implemented as a ZST; [`Allocator`] functions do not operate on 101 + /// an object instance. 102 + /// 103 + /// In order to be able to support `#[derive(SmartPointer)]` later on, we need to avoid a design 104 + /// that requires an `Allocator` to be instantiated, hence its functions must not contain any kind 105 + /// of `self` parameter. 106 + /// 107 + /// # Safety 108 + /// 109 + /// - A memory allocation returned from an allocator must remain valid until it is explicitly freed. 110 + /// 111 + /// - Any pointer to a valid memory allocation must be valid to be passed to any other [`Allocator`] 112 + /// function of the same type. 113 + /// 114 + /// - Implementers must ensure that all trait functions abide by the guarantees documented in the 115 + /// `# Guarantees` sections. 116 + pub unsafe trait Allocator { 117 + /// Allocate memory based on `layout` and `flags`. 118 + /// 119 + /// On success, returns a buffer represented as `NonNull<[u8]>` that satisfies the layout 120 + /// constraints (i.e. minimum size and alignment as specified by `layout`). 121 + /// 122 + /// This function is equivalent to `realloc` when called with `None`. 123 + /// 124 + /// # Guarantees 125 + /// 126 + /// When the return value is `Ok(ptr)`, then `ptr` is 127 + /// - valid for reads and writes for `layout.size()` bytes, until it is passed to 128 + /// [`Allocator::free`] or [`Allocator::realloc`], 129 + /// - aligned to `layout.align()`, 130 + /// 131 + /// Additionally, `Flags` are honored as documented in 132 + /// <https://docs.kernel.org/core-api/mm-api.html#mm-api-gfp-flags>. 
133 + fn alloc(layout: Layout, flags: Flags) -> Result<NonNull<[u8]>, AllocError> { 134 + // SAFETY: Passing `None` to `realloc` is valid by its safety requirements and asks for a 135 + // new memory allocation. 136 + unsafe { Self::realloc(None, layout, Layout::new::<()>(), flags) } 137 + } 138 + 139 + /// Re-allocate an existing memory allocation to satisfy the requested `layout`. 140 + /// 141 + /// If the requested size is zero, `realloc` behaves equivalently to `free`. 142 + /// 143 + /// If the requested size is larger than the size of the existing allocation, a successful call 144 + /// to `realloc` guarantees that the new or grown buffer has at least `layout.size()` bytes, but 145 + /// may also be larger. 146 + /// 147 + /// If the requested size is smaller than the size of the existing allocation, `realloc` may or 148 + /// may not shrink the buffer; this is specific to the allocator implementation. 149 + /// 150 + /// On allocation failure, the existing buffer, if any, remains valid. 151 + /// 152 + /// The buffer is represented as `NonNull<[u8]>`. 153 + /// 154 + /// # Safety 155 + /// 156 + /// - If `ptr == Some(p)`, then `p` must point to an existing and valid memory allocation 157 + /// created by this [`Allocator`]; if `old_layout` is zero-sized, `p` does not need to be a 158 + /// pointer returned by this [`Allocator`]. 159 + /// - `ptr` is allowed to be `None`; in this case a new memory allocation is created and 160 + /// `old_layout` is ignored. 161 + /// - `old_layout` must match the `Layout` the allocation has been created with. 162 + /// 163 + /// # Guarantees 164 + /// 165 + /// This function has the same guarantees as [`Allocator::alloc`]. When `ptr == Some(p)`, then 166 + /// it additionally guarantees that: 167 + /// - the contents of the memory pointed to by `p` are preserved up to the lesser of the new 168 + /// and old size, i.e. `ret_ptr[0..min(layout.size(), old_layout.size())] == 169 + /// p[0..min(layout.size(), old_layout.size())]`. 
170 + /// - when the return value is `Err(AllocError)`, then `ptr` is still valid. 171 + unsafe fn realloc( 172 + ptr: Option<NonNull<u8>>, 173 + layout: Layout, 174 + old_layout: Layout, 175 + flags: Flags, 176 + ) -> Result<NonNull<[u8]>, AllocError>; 177 + 178 + /// Free an existing memory allocation. 179 + /// 180 + /// # Safety 181 + /// 182 + /// - `ptr` must point to an existing and valid memory allocation created by this [`Allocator`]; 183 + /// if `layout` is zero-sized, `ptr` does not need to be a pointer returned by this 184 + /// [`Allocator`]. 185 + /// - `layout` must match the `Layout` the allocation has been created with. 186 + /// - The memory allocation at `ptr` must never again be read from or written to. 187 + unsafe fn free(ptr: NonNull<u8>, layout: Layout) { 188 + // SAFETY: The caller guarantees that `ptr` points at a valid allocation created by this 189 + // allocator. We are passing a `Layout` with the smallest possible alignment, so it is 190 + // smaller than or equal to the alignment previously used with this allocation. 191 + let _ = unsafe { Self::realloc(Some(ptr), Layout::new::<()>(), layout, Flags(0)) }; 192 + } 193 + } 194 + 195 + /// Returns a properly aligned dangling pointer from the given `layout`. 196 + pub(crate) fn dangling_from_layout(layout: Layout) -> NonNull<u8> { 197 + let ptr = layout.align() as *mut u8; 198 + 199 + // SAFETY: `layout.align()` (and hence `ptr`) is guaranteed to be non-zero. 200 + unsafe { NonNull::new_unchecked(ptr) } 111 201 }
+164 -50
rust/kernel/alloc/allocator.rs
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 3 //! Allocator support. 4 + //! 5 + //! Documentation for the kernel's memory allocators can be found in the "Memory Allocation Guide" 6 + //! linked below. For instance, this includes the concept of "get free page" (GFP) flags and the 7 + //! typical application of the different kernel allocators. 8 + //! 9 + //! Reference: <https://docs.kernel.org/core-api/memory-allocation.html> 4 10 5 - use super::{flags::*, Flags}; 6 - use core::alloc::{GlobalAlloc, Layout}; 11 + use super::Flags; 12 + use core::alloc::Layout; 7 13 use core::ptr; 14 + use core::ptr::NonNull; 8 15 9 - struct KernelAllocator; 16 + use crate::alloc::{AllocError, Allocator}; 17 + use crate::bindings; 18 + use crate::pr_warn; 10 19 11 - /// Calls `krealloc` with a proper size to alloc a new object aligned to `new_layout`'s alignment. 20 + /// The contiguous kernel allocator. 12 21 /// 13 - /// # Safety 22 + /// `Kmalloc` is typically used for physically contiguous allocations up to page size, but also 23 + /// supports larger allocations up to `bindings::KMALLOC_MAX_SIZE`, which is hardware specific. 14 24 /// 15 - /// - `ptr` can be either null or a pointer which has been allocated by this allocator. 16 - /// - `new_layout` must have a non-zero size. 17 - pub(crate) unsafe fn krealloc_aligned(ptr: *mut u8, new_layout: Layout, flags: Flags) -> *mut u8 { 25 + /// For more details see [self]. 26 + pub struct Kmalloc; 27 + 28 + /// The virtually contiguous kernel allocator. 29 + /// 30 + /// `Vmalloc` allocates pages from the page level allocator and maps them into the contiguous kernel 31 + /// virtual space. It is typically used for large allocations. The memory allocated with this 32 + /// allocator is not physically contiguous. 33 + /// 34 + /// For more details see [self]. 35 + pub struct Vmalloc; 36 + 37 + /// The kvmalloc kernel allocator. 
38 + /// 39 + /// `KVmalloc` attempts to allocate memory with `Kmalloc` first, but falls back to `Vmalloc` upon 40 + /// failure. This allocator is typically used when the size for the requested allocation is not 41 + /// known and may exceed the capabilities of `Kmalloc`. 42 + /// 43 + /// For more details see [self]. 44 + pub struct KVmalloc; 45 + 46 + /// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment. 47 + fn aligned_size(new_layout: Layout) -> usize { 18 48 // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first. 19 49 let layout = new_layout.pad_to_align(); 20 50 21 51 // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()` 22 52 // which together with the slab guarantees means the `krealloc` will return a properly aligned 23 53 // object (see comments in `kmalloc()` for more information). 24 - let size = layout.size(); 25 - 26 - // SAFETY: 27 - // - `ptr` is either null or a pointer returned from a previous `k{re}alloc()` by the 28 - // function safety requirement. 29 - // - `size` is greater than 0 since it's from `layout.size()` (which cannot be zero according 30 - // to the function safety requirement) 31 - unsafe { bindings::krealloc(ptr as *const core::ffi::c_void, size, flags.0) as *mut u8 } 54 + layout.size() 32 55 } 33 56 34 - unsafe impl GlobalAlloc for KernelAllocator { 35 - unsafe fn alloc(&self, layout: Layout) -> *mut u8 { 36 - // SAFETY: `ptr::null_mut()` is null and `layout` has a non-zero size by the function safety 37 - // requirement. 38 - unsafe { krealloc_aligned(ptr::null_mut(), layout, GFP_KERNEL) } 39 - } 57 + /// # Invariants 58 + /// 59 + /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`. 
60 + struct ReallocFunc( 61 + unsafe extern "C" fn(*const crate::ffi::c_void, usize, u32) -> *mut crate::ffi::c_void, 62 + ); 40 63 41 - unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) { 42 - unsafe { 43 - bindings::kfree(ptr as *const core::ffi::c_void); 64 + impl ReallocFunc { 65 + // INVARIANT: `krealloc` satisfies the type invariants. 66 + const KREALLOC: Self = Self(bindings::krealloc); 67 + 68 + // INVARIANT: `vrealloc` satisfies the type invariants. 69 + const VREALLOC: Self = Self(bindings::vrealloc); 70 + 71 + // INVARIANT: `kvrealloc` satisfies the type invariants. 72 + const KVREALLOC: Self = Self(bindings::kvrealloc); 73 + 74 + /// # Safety 75 + /// 76 + /// This method has the same safety requirements as [`Allocator::realloc`]. 77 + /// 78 + /// # Guarantees 79 + /// 80 + /// This method has the same guarantees as `Allocator::realloc`. Additionally 81 + /// - it accepts any pointer to a valid memory allocation allocated by this function. 82 + /// - memory allocated by this function remains valid until it is passed to this function. 83 + unsafe fn call( 84 + &self, 85 + ptr: Option<NonNull<u8>>, 86 + layout: Layout, 87 + old_layout: Layout, 88 + flags: Flags, 89 + ) -> Result<NonNull<[u8]>, AllocError> { 90 + let size = aligned_size(layout); 91 + let ptr = match ptr { 92 + Some(ptr) => { 93 + if old_layout.size() == 0 { 94 + ptr::null() 95 + } else { 96 + ptr.as_ptr() 97 + } 98 + } 99 + None => ptr::null(), 100 + }; 101 + 102 + // SAFETY: 103 + // - `self.0` is one of `krealloc`, `vrealloc`, `kvrealloc` and thus only requires that 104 + // `ptr` is NULL or valid. 105 + // - `ptr` is either NULL or valid by the safety requirements of this function. 106 + // 107 + // GUARANTEE: 108 + // - `self.0` is one of `krealloc`, `vrealloc`, `kvrealloc`. 109 + // - Those functions provide the guarantees of this function. 110 + let raw_ptr = unsafe { 111 + // If `size == 0` and `ptr != NULL` the memory behind the pointer is freed. 
112 + self.0(ptr.cast(), size, flags.0).cast() 113 + }; 114 + 115 + let ptr = if size == 0 { 116 + crate::alloc::dangling_from_layout(layout) 117 + } else { 118 + NonNull::new(raw_ptr).ok_or(AllocError)? 119 + }; 120 + 121 + Ok(NonNull::slice_from_raw_parts(ptr, size)) 122 + } 123 + } 124 + 125 + // SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that 126 + // - memory remains valid until it is explicitly freed, 127 + // - passing a pointer to a valid memory allocation is OK, 128 + // - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same. 129 + unsafe impl Allocator for Kmalloc { 130 + #[inline] 131 + unsafe fn realloc( 132 + ptr: Option<NonNull<u8>>, 133 + layout: Layout, 134 + old_layout: Layout, 135 + flags: Flags, 136 + ) -> Result<NonNull<[u8]>, AllocError> { 137 + // SAFETY: `ReallocFunc::call` has the same safety requirements as `Allocator::realloc`. 138 + unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags) } 139 + } 140 + } 141 + 142 + // SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that 143 + // - memory remains valid until it is explicitly freed, 144 + // - passing a pointer to a valid memory allocation is OK, 145 + // - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same. 146 + unsafe impl Allocator for Vmalloc { 147 + #[inline] 148 + unsafe fn realloc( 149 + ptr: Option<NonNull<u8>>, 150 + layout: Layout, 151 + old_layout: Layout, 152 + flags: Flags, 153 + ) -> Result<NonNull<[u8]>, AllocError> { 154 + // TODO: Support alignments larger than PAGE_SIZE. 
155 + if layout.align() > bindings::PAGE_SIZE { 156 + pr_warn!("Vmalloc does not support alignments larger than PAGE_SIZE yet.\n"); 157 + return Err(AllocError); 44 158 } 45 - } 46 159 47 - unsafe fn realloc(&self, ptr: *mut u8, layout: Layout, new_size: usize) -> *mut u8 { 48 - // SAFETY: 49 - // - `new_size`, when rounded up to the nearest multiple of `layout.align()`, will not 50 - // overflow `isize` by the function safety requirement. 51 - // - `layout.align()` is a proper alignment (i.e. not zero and must be a power of two). 52 - let layout = unsafe { Layout::from_size_align_unchecked(new_size, layout.align()) }; 53 - 54 - // SAFETY: 55 - // - `ptr` is either null or a pointer allocated by this allocator by the function safety 56 - // requirement. 57 - // - the size of `layout` is not zero because `new_size` is not zero by the function safety 58 - // requirement. 59 - unsafe { krealloc_aligned(ptr, layout, GFP_KERNEL) } 60 - } 61 - 62 - unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8 { 63 - // SAFETY: `ptr::null_mut()` is null and `layout` has a non-zero size by the function safety 64 - // requirement. 65 - unsafe { krealloc_aligned(ptr::null_mut(), layout, GFP_KERNEL | __GFP_ZERO) } 160 + // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously 161 + // allocated with this `Allocator`. 162 + unsafe { ReallocFunc::VREALLOC.call(ptr, layout, old_layout, flags) } 66 163 } 67 164 } 68 165 69 - #[global_allocator] 70 - static ALLOCATOR: KernelAllocator = KernelAllocator; 166 + // SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that 167 + // - memory remains valid until it is explicitly freed, 168 + // - passing a pointer to a valid memory allocation is OK, 169 + // - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same. 
170 + unsafe impl Allocator for KVmalloc { 171 + #[inline] 172 + unsafe fn realloc( 173 + ptr: Option<NonNull<u8>>, 174 + layout: Layout, 175 + old_layout: Layout, 176 + flags: Flags, 177 + ) -> Result<NonNull<[u8]>, AllocError> { 178 + // TODO: Support alignments larger than PAGE_SIZE. 179 + if layout.align() > bindings::PAGE_SIZE { 180 + pr_warn!("KVmalloc does not support alignments larger than PAGE_SIZE yet.\n"); 181 + return Err(AllocError); 182 + } 71 183 72 - // See <https://github.com/rust-lang/rust/pull/86844>. 73 - #[no_mangle] 74 - static __rust_no_alloc_shim_is_unstable: u8 = 0; 184 + // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously 185 + // allocated with this `Allocator`. 186 + unsafe { ReallocFunc::KVREALLOC.call(ptr, layout, old_layout, flags) } 187 + } 188 + }
+95
rust/kernel/alloc/allocator_test.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! So far the kernel's `Box` and `Vec` types can't be used by userspace test cases, since all users 4 + //! of those types (e.g. `CString`) use kernel allocators for instantiation. 5 + //! 6 + //! In order to allow userspace test cases to make use of such types as well, implement the 7 + //! `Cmalloc` allocator within the allocator_test module and type alias all kernel allocators to 8 + //! `Cmalloc`. The `Cmalloc` allocator uses libc's `realloc()` function as allocator backend. 9 + 10 + #![allow(missing_docs)] 11 + 12 + use super::{flags::*, AllocError, Allocator, Flags}; 13 + use core::alloc::Layout; 14 + use core::cmp; 15 + use core::ptr; 16 + use core::ptr::NonNull; 17 + 18 + /// The userspace allocator based on libc. 19 + pub struct Cmalloc; 20 + 21 + pub type Kmalloc = Cmalloc; 22 + pub type Vmalloc = Kmalloc; 23 + pub type KVmalloc = Kmalloc; 24 + 25 + extern "C" { 26 + #[link_name = "aligned_alloc"] 27 + fn libc_aligned_alloc(align: usize, size: usize) -> *mut crate::ffi::c_void; 28 + 29 + #[link_name = "free"] 30 + fn libc_free(ptr: *mut crate::ffi::c_void); 31 + } 32 + 33 + // SAFETY: 34 + // - memory remains valid until it is explicitly freed, 35 + // - passing a pointer to a valid memory allocation created by this `Allocator` is always OK, 36 + // - `realloc` provides the guarantees as provided in the `# Guarantees` section. 
37 + unsafe impl Allocator for Cmalloc { 38 + unsafe fn realloc( 39 + ptr: Option<NonNull<u8>>, 40 + layout: Layout, 41 + old_layout: Layout, 42 + flags: Flags, 43 + ) -> Result<NonNull<[u8]>, AllocError> { 44 + let src = match ptr { 45 + Some(src) => { 46 + if old_layout.size() == 0 { 47 + ptr::null_mut() 48 + } else { 49 + src.as_ptr() 50 + } 51 + } 52 + None => ptr::null_mut(), 53 + }; 54 + 55 + if layout.size() == 0 { 56 + // SAFETY: `src` is either NULL or was previously allocated with this `Allocator`. 57 + unsafe { libc_free(src.cast()) }; 58 + 59 + return Ok(NonNull::slice_from_raw_parts( 60 + crate::alloc::dangling_from_layout(layout), 61 + 0, 62 + )); 63 + } 64 + 65 + // SAFETY: Returns either NULL or a pointer to a memory allocation that satisfies or 66 + // exceeds the given size and alignment requirements. 67 + let dst = unsafe { libc_aligned_alloc(layout.align(), layout.size()) } as *mut u8; 68 + let dst = NonNull::new(dst).ok_or(AllocError)?; 69 + 70 + if flags.contains(__GFP_ZERO) { 71 + // SAFETY: The preceding calls to `libc_aligned_alloc` and `NonNull::new` 72 + // guarantee that `dst` points to memory of at least `layout.size()` bytes. 73 + unsafe { dst.as_ptr().write_bytes(0, layout.size()) }; 74 + } 75 + 76 + if !src.is_null() { 77 + // SAFETY: 78 + // - `src` has previously been allocated with this `Allocator`; `dst` has just been 79 + // newly allocated, hence the memory regions do not overlap. 80 + // - both `src` and `dst` are properly aligned and valid for reads and writes. 81 + unsafe { 82 + ptr::copy_nonoverlapping( 83 + src, 84 + dst.as_ptr(), 85 + cmp::min(layout.size(), old_layout.size()), 86 + ) 87 + }; 88 + } 89 + 90 + // SAFETY: `src` is either NULL or was previously allocated with this `Allocator`. 91 + unsafe { libc_free(src.cast()) }; 92 + 93 + Ok(NonNull::slice_from_raw_parts(dst, layout.size())) 94 + } 95 + }
-89
rust/kernel/alloc/box_ext.rs
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - 3 - //! Extensions to [`Box`] for fallible allocations. 4 - 5 - use super::{AllocError, Flags}; 6 - use alloc::boxed::Box; 7 - use core::{mem::MaybeUninit, ptr, result::Result}; 8 - 9 - /// Extensions to [`Box`]. 10 - pub trait BoxExt<T>: Sized { 11 - /// Allocates a new box. 12 - /// 13 - /// The allocation may fail, in which case an error is returned. 14 - fn new(x: T, flags: Flags) -> Result<Self, AllocError>; 15 - 16 - /// Allocates a new uninitialised box. 17 - /// 18 - /// The allocation may fail, in which case an error is returned. 19 - fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>>, AllocError>; 20 - 21 - /// Drops the contents, but keeps the allocation. 22 - /// 23 - /// # Examples 24 - /// 25 - /// ``` 26 - /// use kernel::alloc::{flags, box_ext::BoxExt}; 27 - /// let value = Box::new([0; 32], flags::GFP_KERNEL)?; 28 - /// assert_eq!(*value, [0; 32]); 29 - /// let mut value = Box::drop_contents(value); 30 - /// // Now we can re-use `value`: 31 - /// value.write([1; 32]); 32 - /// // SAFETY: We just wrote to it. 33 - /// let value = unsafe { value.assume_init() }; 34 - /// assert_eq!(*value, [1; 32]); 35 - /// # Ok::<(), Error>(()) 36 - /// ``` 37 - fn drop_contents(this: Self) -> Box<MaybeUninit<T>>; 38 - } 39 - 40 - impl<T> BoxExt<T> for Box<T> { 41 - fn new(x: T, flags: Flags) -> Result<Self, AllocError> { 42 - let mut b = <Self as BoxExt<_>>::new_uninit(flags)?; 43 - b.write(x); 44 - // SAFETY: We just wrote to it. 
45 - Ok(unsafe { b.assume_init() }) 46 - } 47 - 48 - #[cfg(any(test, testlib))] 49 - fn new_uninit(_flags: Flags) -> Result<Box<MaybeUninit<T>>, AllocError> { 50 - Ok(Box::new_uninit()) 51 - } 52 - 53 - #[cfg(not(any(test, testlib)))] 54 - fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>>, AllocError> { 55 - let ptr = if core::mem::size_of::<MaybeUninit<T>>() == 0 { 56 - core::ptr::NonNull::<_>::dangling().as_ptr() 57 - } else { 58 - let layout = core::alloc::Layout::new::<MaybeUninit<T>>(); 59 - 60 - // SAFETY: Memory is being allocated (first arg is null). The only other source of 61 - // safety issues is sleeping on atomic context, which is addressed by klint. Lastly, 62 - // the type is not a SZT (checked above). 63 - let ptr = 64 - unsafe { super::allocator::krealloc_aligned(core::ptr::null_mut(), layout, flags) }; 65 - if ptr.is_null() { 66 - return Err(AllocError); 67 - } 68 - 69 - ptr.cast::<MaybeUninit<T>>() 70 - }; 71 - 72 - // SAFETY: For non-zero-sized types, we allocate above using the global allocator. For 73 - // zero-sized types, we use `NonNull::dangling`. 74 - Ok(unsafe { Box::from_raw(ptr) }) 75 - } 76 - 77 - fn drop_contents(this: Self) -> Box<MaybeUninit<T>> { 78 - let ptr = Box::into_raw(this); 79 - // SAFETY: `ptr` is valid, because it came from `Box::into_raw`. 80 - unsafe { ptr::drop_in_place(ptr) }; 81 - 82 - // CAST: `MaybeUninit<T>` is a transparent wrapper of `T`. 83 - let ptr = ptr.cast::<MaybeUninit<T>>(); 84 - 85 - // SAFETY: `ptr` is valid for writes, because it came from `Box::into_raw` and it is valid for 86 - // reads, since the pointer came from `Box::into_raw` and the type is `MaybeUninit<T>`. 87 - unsafe { Box::from_raw(ptr) } 88 - } 89 - }
+456
rust/kernel/alloc/kbox.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Implementation of [`Box`]. 4 + 5 + #[allow(unused_imports)] // Used in doc comments. 6 + use super::allocator::{KVmalloc, Kmalloc, Vmalloc}; 7 + use super::{AllocError, Allocator, Flags}; 8 + use core::alloc::Layout; 9 + use core::fmt; 10 + use core::marker::PhantomData; 11 + use core::mem::ManuallyDrop; 12 + use core::mem::MaybeUninit; 13 + use core::ops::{Deref, DerefMut}; 14 + use core::pin::Pin; 15 + use core::ptr::NonNull; 16 + use core::result::Result; 17 + 18 + use crate::init::{InPlaceInit, InPlaceWrite, Init, PinInit}; 19 + use crate::types::ForeignOwnable; 20 + 21 + /// The kernel's [`Box`] type -- a heap allocation for a single value of type `T`. 22 + /// 23 + /// This is the kernel's version of the Rust stdlib's `Box`. There are several differences, 24 + /// for example no `noalias` attribute is emitted and partially moving out of a `Box` is not 25 + /// supported. There are also several API differences, e.g. `Box` always requires an [`Allocator`] 26 + /// implementation to be passed as generic, [`Flags`] to be passed when allocating memory, and all 27 + /// functions that may allocate memory are fallible. 28 + /// 29 + /// `Box` works with any of the kernel's allocators, e.g. [`Kmalloc`], [`Vmalloc`] or [`KVmalloc`]. 30 + /// There are aliases for `Box` with these allocators ([`KBox`], [`VBox`], [`KVBox`]). 31 + /// 32 + /// When dropping a [`Box`], the value is also dropped and the heap memory is automatically freed. 
33 + /// 34 + /// # Examples 35 + /// 36 + /// ``` 37 + /// let b = KBox::<u64>::new(24_u64, GFP_KERNEL)?; 38 + /// 39 + /// assert_eq!(*b, 24_u64); 40 + /// # Ok::<(), Error>(()) 41 + /// ``` 42 + /// 43 + /// ``` 44 + /// # use kernel::bindings; 45 + /// const SIZE: usize = bindings::KMALLOC_MAX_SIZE as usize + 1; 46 + /// struct Huge([u8; SIZE]); 47 + /// 48 + /// assert!(KBox::<Huge>::new_uninit(GFP_KERNEL | __GFP_NOWARN).is_err()); 49 + /// ``` 50 + /// 51 + /// ``` 52 + /// # use kernel::bindings; 53 + /// const SIZE: usize = bindings::KMALLOC_MAX_SIZE as usize + 1; 54 + /// struct Huge([u8; SIZE]); 55 + /// 56 + /// assert!(KVBox::<Huge>::new_uninit(GFP_KERNEL).is_ok()); 57 + /// ``` 58 + /// 59 + /// # Invariants 60 + /// 61 + /// `self.0` is always properly aligned and either points to memory allocated with `A` or, for 62 + /// zero-sized types, is a dangling, well aligned pointer. 63 + #[repr(transparent)] 64 + pub struct Box<T: ?Sized, A: Allocator>(NonNull<T>, PhantomData<A>); 65 + 66 + /// Type alias for [`Box`] with a [`Kmalloc`] allocator. 67 + /// 68 + /// # Examples 69 + /// 70 + /// ``` 71 + /// let b = KBox::new(24_u64, GFP_KERNEL)?; 72 + /// 73 + /// assert_eq!(*b, 24_u64); 74 + /// # Ok::<(), Error>(()) 75 + /// ``` 76 + pub type KBox<T> = Box<T, super::allocator::Kmalloc>; 77 + 78 + /// Type alias for [`Box`] with a [`Vmalloc`] allocator. 79 + /// 80 + /// # Examples 81 + /// 82 + /// ``` 83 + /// let b = VBox::new(24_u64, GFP_KERNEL)?; 84 + /// 85 + /// assert_eq!(*b, 24_u64); 86 + /// # Ok::<(), Error>(()) 87 + /// ``` 88 + pub type VBox<T> = Box<T, super::allocator::Vmalloc>; 89 + 90 + /// Type alias for [`Box`] with a [`KVmalloc`] allocator. 
91 + /// 92 + /// # Examples 93 + /// 94 + /// ``` 95 + /// let b = KVBox::new(24_u64, GFP_KERNEL)?; 96 + /// 97 + /// assert_eq!(*b, 24_u64); 98 + /// # Ok::<(), Error>(()) 99 + /// ``` 100 + pub type KVBox<T> = Box<T, super::allocator::KVmalloc>; 101 + 102 + // SAFETY: `Box` is `Send` if `T` is `Send` because the `Box` owns a `T`. 103 + unsafe impl<T, A> Send for Box<T, A> 104 + where 105 + T: Send + ?Sized, 106 + A: Allocator, 107 + { 108 + } 109 + 110 + // SAFETY: `Box` is `Sync` if `T` is `Sync` because the `Box` owns a `T`. 111 + unsafe impl<T, A> Sync for Box<T, A> 112 + where 113 + T: Sync + ?Sized, 114 + A: Allocator, 115 + { 116 + } 117 + 118 + impl<T, A> Box<T, A> 119 + where 120 + T: ?Sized, 121 + A: Allocator, 122 + { 123 + /// Creates a new `Box<T, A>` from a raw pointer. 124 + /// 125 + /// # Safety 126 + /// 127 + /// For non-ZSTs, `raw` must point at an allocation allocated with `A` that is sufficiently 128 + /// aligned for and holds a valid `T`. The caller passes ownership of the allocation to the 129 + /// `Box`. 130 + /// 131 + /// For ZSTs, `raw` must be a dangling, well aligned pointer. 132 + #[inline] 133 + pub const unsafe fn from_raw(raw: *mut T) -> Self { 134 + // INVARIANT: Validity of `raw` is guaranteed by the safety preconditions of this function. 135 + // SAFETY: By the safety preconditions of this function, `raw` is not a NULL pointer. 136 + Self(unsafe { NonNull::new_unchecked(raw) }, PhantomData) 137 + } 138 + 139 + /// Consumes the `Box<T, A>` and returns a raw pointer. 140 + /// 141 + /// This will not run the destructor of `T` and for non-ZSTs the allocation will stay alive 142 + /// indefinitely. Use [`Box::from_raw`] to recover the [`Box`], drop the value and free the 143 + /// allocation, if any. 144 + /// 145 + /// # Examples 146 + /// 147 + /// ``` 148 + /// let x = KBox::new(24, GFP_KERNEL)?; 149 + /// let ptr = KBox::into_raw(x); 150 + /// // SAFETY: `ptr` comes from a previous call to `KBox::into_raw`. 
151 + /// let x = unsafe { KBox::from_raw(ptr) }; 152 + /// 153 + /// assert_eq!(*x, 24); 154 + /// # Ok::<(), Error>(()) 155 + /// ``` 156 + #[inline] 157 + pub fn into_raw(b: Self) -> *mut T { 158 + ManuallyDrop::new(b).0.as_ptr() 159 + } 160 + 161 + /// Consumes and leaks the `Box<T, A>` and returns a mutable reference. 162 + /// 163 + /// See [`Box::into_raw`] for more details. 164 + #[inline] 165 + pub fn leak<'a>(b: Self) -> &'a mut T { 166 + // SAFETY: `Box::into_raw` always returns a properly aligned and dereferenceable pointer 167 + // which points to an initialized instance of `T`. 168 + unsafe { &mut *Box::into_raw(b) } 169 + } 170 + } 171 + 172 + impl<T, A> Box<MaybeUninit<T>, A> 173 + where 174 + A: Allocator, 175 + { 176 + /// Converts a `Box<MaybeUninit<T>, A>` to a `Box<T, A>`. 177 + /// 178 + /// It is undefined behavior to call this function while the value inside of `b` is not yet 179 + /// fully initialized. 180 + /// 181 + /// # Safety 182 + /// 183 + /// Callers must ensure that the value inside of `b` is in an initialized state. 184 + pub unsafe fn assume_init(self) -> Box<T, A> { 185 + let raw = Self::into_raw(self); 186 + 187 + // SAFETY: `raw` comes from a previous call to `Box::into_raw`. By the safety requirements 188 + // of this function, the value inside the `Box` is in an initialized state. Hence, it is 189 + // safe to reconstruct the `Box` as `Box<T, A>`. 190 + unsafe { Box::from_raw(raw.cast()) } 191 + } 192 + 193 + /// Writes the value and converts to `Box<T, A>`. 194 + pub fn write(mut self, value: T) -> Box<T, A> { 195 + (*self).write(value); 196 + 197 + // SAFETY: We've just initialized `b`'s value. 198 + unsafe { self.assume_init() } 199 + } 200 + } 201 + 202 + impl<T, A> Box<T, A> 203 + where 204 + A: Allocator, 205 + { 206 + /// Creates a new `Box<T, A>` and initializes its contents with `x`. 207 + /// 208 + /// New memory is allocated with `A`. The allocation may fail, in which case an error is 209 + /// returned. 
For ZSTs no memory is allocated. 210 + pub fn new(x: T, flags: Flags) -> Result<Self, AllocError> { 211 + let b = Self::new_uninit(flags)?; 212 + Ok(Box::write(b, x)) 213 + } 214 + 215 + /// Creates a new `Box<T, A>` with uninitialized contents. 216 + /// 217 + /// New memory is allocated with `A`. The allocation may fail, in which case an error is 218 + /// returned. For ZSTs no memory is allocated. 219 + /// 220 + /// # Examples 221 + /// 222 + /// ``` 223 + /// let b = KBox::<u64>::new_uninit(GFP_KERNEL)?; 224 + /// let b = KBox::write(b, 24); 225 + /// 226 + /// assert_eq!(*b, 24_u64); 227 + /// # Ok::<(), Error>(()) 228 + /// ``` 229 + pub fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>, A>, AllocError> { 230 + let layout = Layout::new::<MaybeUninit<T>>(); 231 + let ptr = A::alloc(layout, flags)?; 232 + 233 + // INVARIANT: `ptr` is either a dangling pointer or points to memory allocated with `A`, 234 + // which is sufficient in size and alignment for storing a `T`. 235 + Ok(Box(ptr.cast(), PhantomData)) 236 + } 237 + 238 + /// Constructs a new `Pin<Box<T, A>>`. If `T` does not implement [`Unpin`], then `x` will be 239 + /// pinned in memory and can't be moved. 240 + #[inline] 241 + pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> 242 + where 243 + A: 'static, 244 + { 245 + Ok(Self::new(x, flags)?.into()) 246 + } 247 + 248 + /// Forgets the contents (does not run the destructor), but keeps the allocation. 249 + fn forget_contents(this: Self) -> Box<MaybeUninit<T>, A> { 250 + let ptr = Self::into_raw(this); 251 + 252 + // SAFETY: `ptr` is valid, because it came from `Box::into_raw`. 253 + unsafe { Box::from_raw(ptr.cast()) } 254 + } 255 + 256 + /// Drops the contents, but keeps the allocation. 
257 + /// 258 + /// # Examples 259 + /// 260 + /// ``` 261 + /// let value = KBox::new([0; 32], GFP_KERNEL)?; 262 + /// assert_eq!(*value, [0; 32]); 263 + /// let value = KBox::drop_contents(value); 264 + /// // Now we can re-use `value`: 265 + /// let value = KBox::write(value, [1; 32]); 266 + /// assert_eq!(*value, [1; 32]); 267 + /// # Ok::<(), Error>(()) 268 + /// ``` 269 + pub fn drop_contents(this: Self) -> Box<MaybeUninit<T>, A> { 270 + let ptr = this.0.as_ptr(); 271 + 272 + // SAFETY: `ptr` is valid, because it came from `this`. After this call we never access the 273 + // value stored in `this` again. 274 + unsafe { core::ptr::drop_in_place(ptr) }; 275 + 276 + Self::forget_contents(this) 277 + } 278 + 279 + /// Moves the `Box`'s value out of the `Box` and consumes the `Box`. 280 + pub fn into_inner(b: Self) -> T { 281 + // SAFETY: By the type invariant `&*b` is valid for `read`. 282 + let value = unsafe { core::ptr::read(&*b) }; 283 + let _ = Self::forget_contents(b); 284 + value 285 + } 286 + } 287 + 288 + impl<T, A> From<Box<T, A>> for Pin<Box<T, A>> 289 + where 290 + T: ?Sized, 291 + A: Allocator, 292 + { 293 + /// Converts a `Box<T, A>` into a `Pin<Box<T, A>>`. If `T` does not implement [`Unpin`], then 294 + /// `*b` will be pinned in memory and can't be moved. 295 + /// 296 + /// This moves `b` into `Pin` without moving `*b` or allocating and copying any memory. 297 + fn from(b: Box<T, A>) -> Self { 298 + // SAFETY: The value wrapped inside a `Pin<Box<T, A>>` cannot be moved or replaced as long 299 + // as `T` does not implement `Unpin`. 
300 + unsafe { Pin::new_unchecked(b) } 301 + } 302 + } 303 + 304 + impl<T, A> InPlaceWrite<T> for Box<MaybeUninit<T>, A> 305 + where 306 + A: Allocator + 'static, 307 + { 308 + type Initialized = Box<T, A>; 309 + 310 + fn write_init<E>(mut self, init: impl Init<T, E>) -> Result<Self::Initialized, E> { 311 + let slot = self.as_mut_ptr(); 312 + // SAFETY: When init errors/panics, slot will get deallocated but not dropped, 313 + // slot is valid. 314 + unsafe { init.__init(slot)? }; 315 + // SAFETY: All fields have been initialized. 316 + Ok(unsafe { Box::assume_init(self) }) 317 + } 318 + 319 + fn write_pin_init<E>(mut self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E> { 320 + let slot = self.as_mut_ptr(); 321 + // SAFETY: When init errors/panics, slot will get deallocated but not dropped, 322 + // slot is valid and will not be moved, because we pin it later. 323 + unsafe { init.__pinned_init(slot)? }; 324 + // SAFETY: All fields have been initialized. 325 + Ok(unsafe { Box::assume_init(self) }.into()) 326 + } 327 + } 328 + 329 + impl<T, A> InPlaceInit<T> for Box<T, A> 330 + where 331 + A: Allocator + 'static, 332 + { 333 + type PinnedSelf = Pin<Self>; 334 + 335 + #[inline] 336 + fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Pin<Self>, E> 337 + where 338 + E: From<AllocError>, 339 + { 340 + Box::<_, A>::new_uninit(flags)?.write_pin_init(init) 341 + } 342 + 343 + #[inline] 344 + fn try_init<E>(init: impl Init<T, E>, flags: Flags) -> Result<Self, E> 345 + where 346 + E: From<AllocError>, 347 + { 348 + Box::<_, A>::new_uninit(flags)?.write_init(init) 349 + } 350 + } 351 + 352 + impl<T: 'static, A> ForeignOwnable for Box<T, A> 353 + where 354 + A: Allocator, 355 + { 356 + type Borrowed<'a> = &'a T; 357 + 358 + fn into_foreign(self) -> *const crate::ffi::c_void { 359 + Box::into_raw(self) as _ 360 + } 361 + 362 + unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self { 363 + // SAFETY: The safety requirements of this 
function ensure that `ptr` comes from a previous 364 + // call to `Self::into_foreign`. 365 + unsafe { Box::from_raw(ptr as _) } 366 + } 367 + 368 + unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> &'a T { 369 + // SAFETY: The safety requirements of this method ensure that the object remains alive and 370 + // immutable for the duration of 'a. 371 + unsafe { &*ptr.cast() } 372 + } 373 + } 374 + 375 + impl<T: 'static, A> ForeignOwnable for Pin<Box<T, A>> 376 + where 377 + A: Allocator, 378 + { 379 + type Borrowed<'a> = Pin<&'a T>; 380 + 381 + fn into_foreign(self) -> *const crate::ffi::c_void { 382 + // SAFETY: We are still treating the box as pinned. 383 + Box::into_raw(unsafe { Pin::into_inner_unchecked(self) }) as _ 384 + } 385 + 386 + unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self { 387 + // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous 388 + // call to `Self::into_foreign`. 389 + unsafe { Pin::new_unchecked(Box::from_raw(ptr as _)) } 390 + } 391 + 392 + unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> Pin<&'a T> { 393 + // SAFETY: The safety requirements for this function ensure that the object is still alive, 394 + // so it is safe to dereference the raw pointer. 395 + // The safety requirements of `from_foreign` also ensure that the object remains alive for 396 + // the lifetime of the returned value. 397 + let r = unsafe { &*ptr.cast() }; 398 + 399 + // SAFETY: This pointer originates from a `Pin<Box<T>>`. 400 + unsafe { Pin::new_unchecked(r) } 401 + } 402 + } 403 + 404 + impl<T, A> Deref for Box<T, A> 405 + where 406 + T: ?Sized, 407 + A: Allocator, 408 + { 409 + type Target = T; 410 + 411 + fn deref(&self) -> &T { 412 + // SAFETY: `self.0` is always properly aligned, dereferenceable and points to an initialized 413 + // instance of `T`. 
414 + unsafe { self.0.as_ref() } 415 + } 416 + } 417 + 418 + impl<T, A> DerefMut for Box<T, A> 419 + where 420 + T: ?Sized, 421 + A: Allocator, 422 + { 423 + fn deref_mut(&mut self) -> &mut T { 424 + // SAFETY: `self.0` is always properly aligned, dereferenceable and points to an initialized 425 + // instance of `T`. 426 + unsafe { self.0.as_mut() } 427 + } 428 + } 429 + 430 + impl<T, A> fmt::Debug for Box<T, A> 431 + where 432 + T: ?Sized + fmt::Debug, 433 + A: Allocator, 434 + { 435 + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 436 + fmt::Debug::fmt(&**self, f) 437 + } 438 + } 439 + 440 + impl<T, A> Drop for Box<T, A> 441 + where 442 + T: ?Sized, 443 + A: Allocator, 444 + { 445 + fn drop(&mut self) { 446 + let layout = Layout::for_value::<T>(self); 447 + 448 + // SAFETY: The pointer in `self.0` is guaranteed to be valid by the type invariant. 449 + unsafe { core::ptr::drop_in_place::<T>(self.deref_mut()) }; 450 + 451 + // SAFETY: 452 + // - `self.0` was previously allocated with `A`. 453 + // - `layout` is equal to the `Layout` `self.0` was allocated with. 454 + unsafe { A::free(self.0.cast(), layout) }; 455 + } 456 + }
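The `into_raw`/`from_raw` round-trip and `leak` above follow the same ownership contract as the standard library's `Box`. A minimal userspace sketch of the same patterns, using `std::boxed::Box` as a stand-in for the kernel's allocator-parameterized `Box` (there is no `Flags`/`GFP_KERNEL` outside the kernel, so the fallible constructors are replaced by the infallible `Box::new`):

```rust
fn main() {
    // `into_raw` gives up ownership without running the destructor;
    // `from_raw` takes ownership back so the value is dropped normally.
    let b = Box::new(24_u64);
    let ptr: *mut u64 = Box::into_raw(b);
    // SAFETY: `ptr` comes from the `Box::into_raw` call above and has not
    // been freed or converted back in between.
    let b = unsafe { Box::from_raw(ptr) };
    assert_eq!(*b, 24);

    // `leak` consumes the box and hands out a reference valid for an
    // arbitrary lifetime; the allocation is never freed, matching the
    // "allocation will stay alive indefinitely" wording in the docs above.
    let v: &'static mut u64 = Box::leak(Box::new(7));
    *v += 1;
    assert_eq!(*v, 8);
}
```

Skipping the `from_raw` step (or calling it twice on the same pointer) is exactly the leak/double-free hazard the safety comments in the kernel version guard against.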
rust/kernel/alloc/kvec.rs (+913 lines)
1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Implementation of [`Vec`]. 4 + 5 + use super::{ 6 + allocator::{KVmalloc, Kmalloc, Vmalloc}, 7 + layout::ArrayLayout, 8 + AllocError, Allocator, Box, Flags, 9 + }; 10 + use core::{ 11 + fmt, 12 + marker::PhantomData, 13 + mem::{ManuallyDrop, MaybeUninit}, 14 + ops::Deref, 15 + ops::DerefMut, 16 + ops::Index, 17 + ops::IndexMut, 18 + ptr, 19 + ptr::NonNull, 20 + slice, 21 + slice::SliceIndex, 22 + }; 23 + 24 + /// Create a [`KVec`] containing the arguments. 25 + /// 26 + /// New memory is allocated with `GFP_KERNEL`. 27 + /// 28 + /// # Examples 29 + /// 30 + /// ``` 31 + /// let mut v = kernel::kvec![]; 32 + /// v.push(1, GFP_KERNEL)?; 33 + /// assert_eq!(v, [1]); 34 + /// 35 + /// let mut v = kernel::kvec![1; 3]?; 36 + /// v.push(4, GFP_KERNEL)?; 37 + /// assert_eq!(v, [1, 1, 1, 4]); 38 + /// 39 + /// let mut v = kernel::kvec![1, 2, 3]?; 40 + /// v.push(4, GFP_KERNEL)?; 41 + /// assert_eq!(v, [1, 2, 3, 4]); 42 + /// 43 + /// # Ok::<(), Error>(()) 44 + /// ``` 45 + #[macro_export] 46 + macro_rules! kvec { 47 + () => ( 48 + $crate::alloc::KVec::new() 49 + ); 50 + ($elem:expr; $n:expr) => ( 51 + $crate::alloc::KVec::from_elem($elem, $n, GFP_KERNEL) 52 + ); 53 + ($($x:expr),+ $(,)?) => ( 54 + match $crate::alloc::KBox::new_uninit(GFP_KERNEL) { 55 + Ok(b) => Ok($crate::alloc::KVec::from($crate::alloc::KBox::write(b, [$($x),+]))), 56 + Err(e) => Err(e), 57 + } 58 + ); 59 + } 60 + 61 + /// The kernel's [`Vec`] type. 62 + /// 63 + /// A contiguous growable array type with contents allocated with the kernel's allocators (e.g. 64 + /// [`Kmalloc`], [`Vmalloc`] or [`KVmalloc`]), written `Vec<T, A>`. 65 + /// 66 + /// For non-zero-sized values, a [`Vec`] will use the given allocator `A` for its allocation. For 67 + /// the most common allocators the type aliases [`KVec`], [`VVec`] and [`KVVec`] exist. 68 + /// 69 + /// For zero-sized types the [`Vec`]'s pointer must be `dangling_mut::<T>`; no memory is allocated.
70 + /// 71 + /// Generally, [`Vec`] consists of a pointer that represents the vector's backing buffer, the 72 + /// capacity of the vector (the number of elements that currently fit into the vector), its length 73 + /// (the number of elements that are currently stored in the vector) and the `Allocator` type used 74 + /// to allocate (and free) the backing buffer. 75 + /// 76 + /// A [`Vec`] can be deconstructed into and (re-)constructed from its previously named raw parts 77 + /// and manually modified. 78 + /// 79 + /// [`Vec`]'s backing buffer gets, if required, automatically increased (re-allocated) when elements 80 + /// are added to the vector. 81 + /// 82 + /// # Invariants 83 + /// 84 + /// - `self.ptr` is always properly aligned and either points to memory allocated with `A` or, for 85 + /// zero-sized types, is a dangling, well aligned pointer. 86 + /// 87 + /// - `self.len` always represents the exact number of elements stored in the vector. 88 + /// 89 + /// - `self.layout` represents the absolute number of elements that can be stored within the vector 90 + /// without re-allocation. For ZSTs `self.layout`'s capacity is zero. However, it is legal for the 91 + /// backing buffer to be larger than `layout`. 92 + /// 93 + /// - The `Allocator` type `A` of the vector is the exact same `Allocator` type the backing buffer 94 + /// was allocated with (and must be freed with). 95 + pub struct Vec<T, A: Allocator> { 96 + ptr: NonNull<T>, 97 + /// Represents the actual buffer size as `cap` times `size_of::<T>` bytes. 98 + /// 99 + /// Note: This isn't quite the same as `Self::capacity`, which in contrast returns the number of 100 + /// elements we can still store without reallocating. 101 + layout: ArrayLayout<T>, 102 + len: usize, 103 + _p: PhantomData<A>, 104 + } 105 + 106 + /// Type alias for [`Vec`] with a [`Kmalloc`] allocator. 
107 + /// 108 + /// # Examples 109 + /// 110 + /// ``` 111 + /// let mut v = KVec::new(); 112 + /// v.push(1, GFP_KERNEL)?; 113 + /// assert_eq!(&v, &[1]); 114 + /// 115 + /// # Ok::<(), Error>(()) 116 + /// ``` 117 + pub type KVec<T> = Vec<T, Kmalloc>; 118 + 119 + /// Type alias for [`Vec`] with a [`Vmalloc`] allocator. 120 + /// 121 + /// # Examples 122 + /// 123 + /// ``` 124 + /// let mut v = VVec::new(); 125 + /// v.push(1, GFP_KERNEL)?; 126 + /// assert_eq!(&v, &[1]); 127 + /// 128 + /// # Ok::<(), Error>(()) 129 + /// ``` 130 + pub type VVec<T> = Vec<T, Vmalloc>; 131 + 132 + /// Type alias for [`Vec`] with a [`KVmalloc`] allocator. 133 + /// 134 + /// # Examples 135 + /// 136 + /// ``` 137 + /// let mut v = KVVec::new(); 138 + /// v.push(1, GFP_KERNEL)?; 139 + /// assert_eq!(&v, &[1]); 140 + /// 141 + /// # Ok::<(), Error>(()) 142 + /// ``` 143 + pub type KVVec<T> = Vec<T, KVmalloc>; 144 + 145 + // SAFETY: `Vec` is `Send` if `T` is `Send` because `Vec` owns its elements. 146 + unsafe impl<T, A> Send for Vec<T, A> 147 + where 148 + T: Send, 149 + A: Allocator, 150 + { 151 + } 152 + 153 + // SAFETY: `Vec` is `Sync` if `T` is `Sync` because `Vec` owns its elements. 154 + unsafe impl<T, A> Sync for Vec<T, A> 155 + where 156 + T: Sync, 157 + A: Allocator, 158 + { 159 + } 160 + 161 + impl<T, A> Vec<T, A> 162 + where 163 + A: Allocator, 164 + { 165 + #[inline] 166 + const fn is_zst() -> bool { 167 + core::mem::size_of::<T>() == 0 168 + } 169 + 170 + /// Returns the number of elements that can be stored within the vector without allocating 171 + /// additional memory. 172 + pub fn capacity(&self) -> usize { 173 + if const { Self::is_zst() } { 174 + usize::MAX 175 + } else { 176 + self.layout.len() 177 + } 178 + } 179 + 180 + /// Returns the number of elements stored within the vector. 181 + #[inline] 182 + pub fn len(&self) -> usize { 183 + self.len 184 + } 185 + 186 + /// Forcefully sets `self.len` to `new_len`. 
187 + /// 188 + /// # Safety 189 + /// 190 + /// - `new_len` must be less than or equal to [`Self::capacity`]. 191 + /// - If `new_len` is greater than `self.len`, all elements within the interval 192 + /// [`self.len`,`new_len`) must be initialized. 193 + #[inline] 194 + pub unsafe fn set_len(&mut self, new_len: usize) { 195 + debug_assert!(new_len <= self.capacity()); 196 + self.len = new_len; 197 + } 198 + 199 + /// Returns a slice of the entire vector. 200 + #[inline] 201 + pub fn as_slice(&self) -> &[T] { 202 + self 203 + } 204 + 205 + /// Returns a mutable slice of the entire vector. 206 + #[inline] 207 + pub fn as_mut_slice(&mut self) -> &mut [T] { 208 + self 209 + } 210 + 211 + /// Returns a mutable raw pointer to the vector's backing buffer, or, if `T` is a ZST, a 212 + /// dangling raw pointer. 213 + #[inline] 214 + pub fn as_mut_ptr(&mut self) -> *mut T { 215 + self.ptr.as_ptr() 216 + } 217 + 218 + /// Returns a raw pointer to the vector's backing buffer, or, if `T` is a ZST, a dangling raw 219 + /// pointer. 220 + #[inline] 221 + pub fn as_ptr(&self) -> *const T { 222 + self.ptr.as_ptr() 223 + } 224 + 225 + /// Returns `true` if the vector contains no elements, `false` otherwise. 226 + /// 227 + /// # Examples 228 + /// 229 + /// ``` 230 + /// let mut v = KVec::new(); 231 + /// assert!(v.is_empty()); 232 + /// 233 + /// v.push(1, GFP_KERNEL); 234 + /// assert!(!v.is_empty()); 235 + /// ``` 236 + #[inline] 237 + pub fn is_empty(&self) -> bool { 238 + self.len() == 0 239 + } 240 + 241 + /// Creates a new, empty `Vec<T, A>`. 242 + /// 243 + /// This method does not allocate by itself. 244 + #[inline] 245 + pub const fn new() -> Self { 246 + // INVARIANT: Since this is a new, empty `Vec` with no backing memory yet, 247 + // - `ptr` is a properly aligned dangling pointer for type `T`, 248 + // - `layout` is an empty `ArrayLayout` (zero capacity) 249 + // - `len` is zero, since no elements can be or have been stored, 250 + // - `A` is always valid. 
251 + Self { 252 + ptr: NonNull::dangling(), 253 + layout: ArrayLayout::empty(), 254 + len: 0, 255 + _p: PhantomData::<A>, 256 + } 257 + } 258 + 259 + /// Returns a slice of `MaybeUninit<T>` for the remaining spare capacity of the vector. 260 + pub fn spare_capacity_mut(&mut self) -> &mut [MaybeUninit<T>] { 261 + // SAFETY: 262 + // - `self.len` is smaller than `self.capacity` and hence, the resulting pointer is 263 + // guaranteed to be part of the same allocated object. 264 + // - `self.len` can not overflow `isize`. 265 + let ptr = unsafe { self.as_mut_ptr().add(self.len) } as *mut MaybeUninit<T>; 266 + 267 + // SAFETY: The memory between `self.len` and `self.capacity` is guaranteed to be allocated 268 + // and valid, but uninitialized. 269 + unsafe { slice::from_raw_parts_mut(ptr, self.capacity() - self.len) } 270 + } 271 + 272 + /// Appends an element to the back of the [`Vec`] instance. 273 + /// 274 + /// # Examples 275 + /// 276 + /// ``` 277 + /// let mut v = KVec::new(); 278 + /// v.push(1, GFP_KERNEL)?; 279 + /// assert_eq!(&v, &[1]); 280 + /// 281 + /// v.push(2, GFP_KERNEL)?; 282 + /// assert_eq!(&v, &[1, 2]); 283 + /// # Ok::<(), Error>(()) 284 + /// ``` 285 + pub fn push(&mut self, v: T, flags: Flags) -> Result<(), AllocError> { 286 + self.reserve(1, flags)?; 287 + 288 + // SAFETY: 289 + // - `self.len` is smaller than `self.capacity` and hence, the resulting pointer is 290 + // guaranteed to be part of the same allocated object. 291 + // - `self.len` can not overflow `isize`. 292 + let ptr = unsafe { self.as_mut_ptr().add(self.len) }; 293 + 294 + // SAFETY: 295 + // - `ptr` is properly aligned and valid for writes. 296 + unsafe { core::ptr::write(ptr, v) }; 297 + 298 + // SAFETY: We just initialised the first spare entry, so it is safe to increase the length 299 + // by 1. We also know that the new length is <= capacity because of the previous call to 300 + // `reserve` above. 
301 + unsafe { self.set_len(self.len() + 1) }; 302 + Ok(()) 303 + } 304 + 305 + /// Creates a new [`Vec`] instance with at least the given capacity. 306 + /// 307 + /// # Examples 308 + /// 309 + /// ``` 310 + /// let v = KVec::<u32>::with_capacity(20, GFP_KERNEL)?; 311 + /// 312 + /// assert!(v.capacity() >= 20); 313 + /// # Ok::<(), Error>(()) 314 + /// ``` 315 + pub fn with_capacity(capacity: usize, flags: Flags) -> Result<Self, AllocError> { 316 + let mut v = Vec::new(); 317 + 318 + v.reserve(capacity, flags)?; 319 + 320 + Ok(v) 321 + } 322 + 323 + /// Creates a `Vec<T, A>` from a pointer, a length and a capacity using the allocator `A`. 324 + /// 325 + /// # Examples 326 + /// 327 + /// ``` 328 + /// let mut v = kernel::kvec![1, 2, 3]?; 329 + /// v.reserve(1, GFP_KERNEL)?; 330 + /// 331 + /// let (mut ptr, mut len, cap) = v.into_raw_parts(); 332 + /// 333 + /// // SAFETY: We've just reserved memory for another element. 334 + /// unsafe { ptr.add(len).write(4) }; 335 + /// len += 1; 336 + /// 337 + /// // SAFETY: We only wrote an additional element at the end of the `KVec`'s buffer and 338 + /// // correspondingly increased the length of the `KVec` by one. Otherwise, we construct it 339 + /// // from the exact same raw parts. 340 + /// let v = unsafe { KVec::from_raw_parts(ptr, len, cap) }; 341 + /// 342 + /// assert_eq!(v, [1, 2, 3, 4]); 343 + /// 344 + /// # Ok::<(), Error>(()) 345 + /// ``` 346 + /// 347 + /// # Safety 348 + /// 349 + /// If `T` is a ZST: 350 + /// 351 + /// - `ptr` must be a dangling, well aligned pointer. 352 + /// 353 + /// Otherwise: 354 + /// 355 + /// - `ptr` must have been allocated with the allocator `A`. 356 + /// - `ptr` must satisfy or exceed the alignment requirements of `T`. 357 + /// - `ptr` must point to memory with a size of at least `size_of::<T>() * capacity` bytes. 358 + /// - The allocated size in bytes must not be larger than `isize::MAX`. 359 + /// - `length` must be less than or equal to `capacity`. 
360 + /// - The first `length` elements must be initialized values of type `T`. 361 + /// 362 + /// It is also valid to create an empty `Vec` passing a dangling pointer for `ptr` and zero for 363 + /// `cap` and `len`. 364 + pub unsafe fn from_raw_parts(ptr: *mut T, length: usize, capacity: usize) -> Self { 365 + let layout = if Self::is_zst() { 366 + ArrayLayout::empty() 367 + } else { 368 + // SAFETY: By the safety requirements of this function, `capacity * size_of::<T>()` is 369 + // smaller than `isize::MAX`. 370 + unsafe { ArrayLayout::new_unchecked(capacity) } 371 + }; 372 + 373 + // INVARIANT: For ZSTs, we store an empty `ArrayLayout`, all other type invariants are 374 + // covered by the safety requirements of this function. 375 + Self { 376 + // SAFETY: By the safety requirements, `ptr` is either dangling or pointing to a valid 377 + // memory allocation, allocated with `A`. 378 + ptr: unsafe { NonNull::new_unchecked(ptr) }, 379 + layout, 380 + len: length, 381 + _p: PhantomData::<A>, 382 + } 383 + } 384 + 385 + /// Consumes the `Vec<T, A>` and returns its raw components `pointer`, `length` and `capacity`. 386 + /// 387 + /// This will not run the destructor of the contained elements and for non-ZSTs the allocation 388 + /// will stay alive indefinitely. Use [`Vec::from_raw_parts`] to recover the [`Vec`], drop the 389 + /// elements and free the allocation, if any. 390 + pub fn into_raw_parts(self) -> (*mut T, usize, usize) { 391 + let mut me = ManuallyDrop::new(self); 392 + let len = me.len(); 393 + let capacity = me.capacity(); 394 + let ptr = me.as_mut_ptr(); 395 + (ptr, len, capacity) 396 + } 397 + 398 + /// Ensures that the capacity exceeds the length by at least `additional` elements. 
399 + /// 400 + /// # Examples 401 + /// 402 + /// ``` 403 + /// let mut v = KVec::new(); 404 + /// v.push(1, GFP_KERNEL)?; 405 + /// 406 + /// v.reserve(10, GFP_KERNEL)?; 407 + /// let cap = v.capacity(); 408 + /// assert!(cap >= 10); 409 + /// 410 + /// v.reserve(10, GFP_KERNEL)?; 411 + /// let new_cap = v.capacity(); 412 + /// assert_eq!(new_cap, cap); 413 + /// 414 + /// # Ok::<(), Error>(()) 415 + /// ``` 416 + pub fn reserve(&mut self, additional: usize, flags: Flags) -> Result<(), AllocError> { 417 + let len = self.len(); 418 + let cap = self.capacity(); 419 + 420 + if cap - len >= additional { 421 + return Ok(()); 422 + } 423 + 424 + if Self::is_zst() { 425 + // The capacity is already `usize::MAX` for ZSTs, we can't go higher. 426 + return Err(AllocError); 427 + } 428 + 429 + // We know that `cap <= isize::MAX` because of the type invariants of `Self`. So the 430 + // multiplication by two won't overflow. 431 + let new_cap = core::cmp::max(cap * 2, len.checked_add(additional).ok_or(AllocError)?); 432 + let layout = ArrayLayout::new(new_cap).map_err(|_| AllocError)?; 433 + 434 + // SAFETY: 435 + // - `ptr` is valid because it's either `None` or comes from a previous call to 436 + // `A::realloc`. 437 + // - `self.layout` matches the `ArrayLayout` of the preceding allocation. 438 + let ptr = unsafe { 439 + A::realloc( 440 + Some(self.ptr.cast()), 441 + layout.into(), 442 + self.layout.into(), 443 + flags, 444 + )? 445 + }; 446 + 447 + // INVARIANT: 448 + // - `layout` is some `ArrayLayout::<T>`, 449 + // - `ptr` has been created by `A::realloc` from `layout`. 450 + self.ptr = ptr.cast(); 451 + self.layout = layout; 452 + 453 + Ok(()) 454 + } 455 + } 456 + 457 + impl<T: Clone, A: Allocator> Vec<T, A> { 458 + /// Extend the vector by `n` clones of `value`. 
459 + pub fn extend_with(&mut self, n: usize, value: T, flags: Flags) -> Result<(), AllocError> { 460 + if n == 0 { 461 + return Ok(()); 462 + } 463 + 464 + self.reserve(n, flags)?; 465 + 466 + let spare = self.spare_capacity_mut(); 467 + 468 + for item in spare.iter_mut().take(n - 1) { 469 + item.write(value.clone()); 470 + } 471 + 472 + // We can write the last element directly without cloning needlessly. 473 + spare[n - 1].write(value); 474 + 475 + // SAFETY: 476 + // - `self.len() + n <= self.capacity()` due to the call to `reserve` above, 477 + // - the loop and the line above initialized the next `n` elements. 478 + unsafe { self.set_len(self.len() + n) }; 479 + 480 + Ok(()) 481 + } 482 + 483 + /// Pushes clones of the elements of the given slice into the [`Vec`] instance. 484 + /// 485 + /// # Examples 486 + /// 487 + /// ``` 488 + /// let mut v = KVec::new(); 489 + /// v.push(1, GFP_KERNEL)?; 490 + /// 491 + /// v.extend_from_slice(&[20, 30, 40], GFP_KERNEL)?; 492 + /// assert_eq!(&v, &[1, 20, 30, 40]); 493 + /// 494 + /// v.extend_from_slice(&[50, 60], GFP_KERNEL)?; 495 + /// assert_eq!(&v, &[1, 20, 30, 40, 50, 60]); 496 + /// # Ok::<(), Error>(()) 497 + /// ``` 498 + pub fn extend_from_slice(&mut self, other: &[T], flags: Flags) -> Result<(), AllocError> { 499 + self.reserve(other.len(), flags)?; 500 + for (slot, item) in core::iter::zip(self.spare_capacity_mut(), other) { 501 + slot.write(item.clone()); 502 + } 503 + 504 + // SAFETY: 505 + // - `other.len()` spare entries have just been initialized, so it is safe to increase 506 + // the length by the same number. 507 + // - `self.len() + other.len() <= self.capacity()` is guaranteed by the preceding `reserve` 508 + // call. 509 + unsafe { self.set_len(self.len() + other.len()) }; 510 + Ok(()) 511 + } 512 + 513 + /// Creates a new `Vec<T, A>` and extends it by `n` clones of `value`.
514 + pub fn from_elem(value: T, n: usize, flags: Flags) -> Result<Self, AllocError> { 515 + let mut v = Self::with_capacity(n, flags)?; 516 + 517 + v.extend_with(n, value, flags)?; 518 + 519 + Ok(v) 520 + } 521 + } 522 + 523 + impl<T, A> Drop for Vec<T, A> 524 + where 525 + A: Allocator, 526 + { 527 + fn drop(&mut self) { 528 + // SAFETY: `self.as_mut_ptr` is guaranteed to be valid by the type invariant. 529 + unsafe { 530 + ptr::drop_in_place(core::ptr::slice_from_raw_parts_mut( 531 + self.as_mut_ptr(), 532 + self.len, 533 + )) 534 + }; 535 + 536 + // SAFETY: 537 + // - `self.ptr` was previously allocated with `A`. 538 + // - `self.layout` matches the `ArrayLayout` of the preceding allocation. 539 + unsafe { A::free(self.ptr.cast(), self.layout.into()) }; 540 + } 541 + } 542 + 543 + impl<T, A, const N: usize> From<Box<[T; N], A>> for Vec<T, A> 544 + where 545 + A: Allocator, 546 + { 547 + fn from(b: Box<[T; N], A>) -> Vec<T, A> { 548 + let len = b.len(); 549 + let ptr = Box::into_raw(b); 550 + 551 + // SAFETY: 552 + // - `b` has been allocated with `A`, 553 + // - `ptr` fulfills the alignment requirements for `T`, 554 + // - `ptr` points to memory with at least a size of `size_of::<T>() * len`, 555 + // - all elements within `b` are initialized values of `T`, 556 + // - `len` does not exceed `isize::MAX`. 
557 + unsafe { Vec::from_raw_parts(ptr as _, len, len) } 558 + } 559 + } 560 + 561 + impl<T> Default for KVec<T> { 562 + #[inline] 563 + fn default() -> Self { 564 + Self::new() 565 + } 566 + } 567 + 568 + impl<T: fmt::Debug, A: Allocator> fmt::Debug for Vec<T, A> { 569 + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 570 + fmt::Debug::fmt(&**self, f) 571 + } 572 + } 573 + 574 + impl<T, A> Deref for Vec<T, A> 575 + where 576 + A: Allocator, 577 + { 578 + type Target = [T]; 579 + 580 + #[inline] 581 + fn deref(&self) -> &[T] { 582 + // SAFETY: The memory behind `self.as_ptr()` is guaranteed to contain `self.len` 583 + // initialized elements of type `T`. 584 + unsafe { slice::from_raw_parts(self.as_ptr(), self.len) } 585 + } 586 + } 587 + 588 + impl<T, A> DerefMut for Vec<T, A> 589 + where 590 + A: Allocator, 591 + { 592 + #[inline] 593 + fn deref_mut(&mut self) -> &mut [T] { 594 + // SAFETY: The memory behind `self.as_ptr()` is guaranteed to contain `self.len` 595 + // initialized elements of type `T`. 596 + unsafe { slice::from_raw_parts_mut(self.as_mut_ptr(), self.len) } 597 + } 598 + } 599 + 600 + impl<T: Eq, A> Eq for Vec<T, A> where A: Allocator {} 601 + 602 + impl<T, I: SliceIndex<[T]>, A> Index<I> for Vec<T, A> 603 + where 604 + A: Allocator, 605 + { 606 + type Output = I::Output; 607 + 608 + #[inline] 609 + fn index(&self, index: I) -> &Self::Output { 610 + Index::index(&**self, index) 611 + } 612 + } 613 + 614 + impl<T, I: SliceIndex<[T]>, A> IndexMut<I> for Vec<T, A> 615 + where 616 + A: Allocator, 617 + { 618 + #[inline] 619 + fn index_mut(&mut self, index: I) -> &mut Self::Output { 620 + IndexMut::index_mut(&mut **self, index) 621 + } 622 + } 623 + 624 + macro_rules! impl_slice_eq { 625 + ($([$($vars:tt)*] $lhs:ty, $rhs:ty,)*) => { 626 + $( 627 + impl<T, U, $($vars)*> PartialEq<$rhs> for $lhs 628 + where 629 + T: PartialEq<U>, 630 + { 631 + #[inline] 632 + fn eq(&self, other: &$rhs) -> bool { self[..] == other[..] 
} 633 + } 634 + )* 635 + } 636 + } 637 + 638 + impl_slice_eq! { 639 + [A1: Allocator, A2: Allocator] Vec<T, A1>, Vec<U, A2>, 640 + [A: Allocator] Vec<T, A>, &[U], 641 + [A: Allocator] Vec<T, A>, &mut [U], 642 + [A: Allocator] &[T], Vec<U, A>, 643 + [A: Allocator] &mut [T], Vec<U, A>, 644 + [A: Allocator] Vec<T, A>, [U], 645 + [A: Allocator] [T], Vec<U, A>, 646 + [A: Allocator, const N: usize] Vec<T, A>, [U; N], 647 + [A: Allocator, const N: usize] Vec<T, A>, &[U; N], 648 + } 649 + 650 + impl<'a, T, A> IntoIterator for &'a Vec<T, A> 651 + where 652 + A: Allocator, 653 + { 654 + type Item = &'a T; 655 + type IntoIter = slice::Iter<'a, T>; 656 + 657 + fn into_iter(self) -> Self::IntoIter { 658 + self.iter() 659 + } 660 + } 661 + 662 + impl<'a, T, A> IntoIterator for &'a mut Vec<T, A> 663 + where 664 + A: Allocator, 665 + { 666 + type Item = &'a mut T; 667 + type IntoIter = slice::IterMut<'a, T>; 668 + 669 + fn into_iter(self) -> Self::IntoIter { 670 + self.iter_mut() 671 + } 672 + } 673 + 674 + /// An [`Iterator`] implementation for [`Vec`] that moves elements out of a vector. 675 + /// 676 + /// This structure is created by the [`Vec::into_iter`] method on [`Vec`] (provided by the 677 + /// [`IntoIterator`] trait).
678 + /// 679 + /// # Examples 680 + /// 681 + /// ``` 682 + /// let v = kernel::kvec![0, 1, 2]?; 683 + /// let iter = v.into_iter(); 684 + /// 685 + /// # Ok::<(), Error>(()) 686 + /// ``` 687 + pub struct IntoIter<T, A: Allocator> { 688 + ptr: *mut T, 689 + buf: NonNull<T>, 690 + len: usize, 691 + layout: ArrayLayout<T>, 692 + _p: PhantomData<A>, 693 + } 694 + 695 + impl<T, A> IntoIter<T, A> 696 + where 697 + A: Allocator, 698 + { 699 + fn into_raw_parts(self) -> (*mut T, NonNull<T>, usize, usize) { 700 + let me = ManuallyDrop::new(self); 701 + let ptr = me.ptr; 702 + let buf = me.buf; 703 + let len = me.len; 704 + let cap = me.layout.len(); 705 + (ptr, buf, len, cap) 706 + } 707 + 708 + /// Same as `Iterator::collect` but specialized for `Vec`'s `IntoIter`. 709 + /// 710 + /// # Examples 711 + /// 712 + /// ``` 713 + /// let v = kernel::kvec![1, 2, 3]?; 714 + /// let mut it = v.into_iter(); 715 + /// 716 + /// assert_eq!(it.next(), Some(1)); 717 + /// 718 + /// let v = it.collect(GFP_KERNEL); 719 + /// assert_eq!(v, [2, 3]); 720 + /// 721 + /// # Ok::<(), Error>(()) 722 + /// ``` 723 + /// 724 + /// # Implementation details 725 + /// 726 + /// Currently, we can't implement `FromIterator`. There are a couple of issues with this trait 727 + /// in the kernel, namely: 728 + /// 729 + /// - Rust's specialization feature is unstable. This prevents us from optimizing for the special 730 + /// case where `I::IntoIter` equals `Vec`'s `IntoIter` type. 731 + /// - We can't use `I::IntoIter`'s type ID to work around this either, since `FromIterator` 732 + /// doesn't require this type to be `'static`. 733 + /// - `FromIterator::from_iter` returns `Self` instead of `Result<Self, AllocError>`, hence 734 + /// we can't properly handle allocation failures. 735 + /// - Neither `Iterator::collect` nor `FromIterator::from_iter` can handle additional allocation 736 + /// flags.
737 + /// 738 + /// Instead, provide `IntoIter::collect`, such that we can at least convert an `IntoIter` into a 739 + /// `Vec` again. 740 + /// 741 + /// Note that `IntoIter::collect` doesn't allocate a new backing buffer, since it re-uses the 742 + /// existing one; the given `Flags` are only used when shrinking the buffer to the actual count of elements. 743 + pub fn collect(self, flags: Flags) -> Vec<T, A> { 744 + let old_layout = self.layout; 745 + let (mut ptr, buf, len, mut cap) = self.into_raw_parts(); 746 + let has_advanced = ptr != buf.as_ptr(); 747 + 748 + if has_advanced { 749 + // Copy the remaining elements to the beginning of the buffer. 750 + // 751 + // SAFETY: 752 + // - `ptr` is valid for reads of `len * size_of::<T>()` bytes, 753 + // - `buf.as_ptr()` is valid for writes of `len * size_of::<T>()` bytes, 754 + // - `ptr` and `buf.as_ptr()` are not subject to aliasing restrictions relative to 755 + // each other, 756 + // - both `ptr` and `buf.as_ptr()` are properly aligned. 757 + unsafe { ptr::copy(ptr, buf.as_ptr(), len) }; 758 + ptr = buf.as_ptr(); 759 + 760 + // SAFETY: `len` is guaranteed to be smaller than `self.layout.len()`. 761 + let layout = unsafe { ArrayLayout::<T>::new_unchecked(len) }; 762 + 763 + // SAFETY: `buf` points to the start of the backing buffer and `len` is guaranteed to be 764 + // smaller than `cap`. Depending on `alloc` this operation may shrink the buffer or leave 765 + // it as it is. 766 + ptr = match unsafe { 767 + A::realloc(Some(buf.cast()), layout.into(), old_layout.into(), flags) 768 + } { 769 + // If we fail to shrink, which likely can't even happen, continue with the existing 770 + // buffer. 771 + Err(_) => ptr, 772 + Ok(ptr) => { 773 + cap = len; 774 + ptr.as_ptr().cast() 775 + } 776 + }; 777 + } 778 + 779 + // SAFETY: If the iterator has been advanced, the remaining elements have been copied to 780 + // the beginning of the buffer and `len` has been adjusted accordingly.
781 + // 782 + // - `ptr` is guaranteed to point to the start of the backing buffer. 783 + // - `cap` is either the original capacity or, after shrinking the buffer, equal to `len`. 784 + // - `alloc` is guaranteed to be unchanged since `into_iter` has been called on the original 785 + // `Vec`. 786 + unsafe { Vec::from_raw_parts(ptr, len, cap) } 787 + } 788 + } 789 + 790 + impl<T, A> Iterator for IntoIter<T, A> 791 + where 792 + A: Allocator, 793 + { 794 + type Item = T; 795 + 796 + /// # Examples 797 + /// 798 + /// ``` 799 + /// let v = kernel::kvec![1, 2, 3]?; 800 + /// let mut it = v.into_iter(); 801 + /// 802 + /// assert_eq!(it.next(), Some(1)); 803 + /// assert_eq!(it.next(), Some(2)); 804 + /// assert_eq!(it.next(), Some(3)); 805 + /// assert_eq!(it.next(), None); 806 + /// 807 + /// # Ok::<(), Error>(()) 808 + /// ``` 809 + fn next(&mut self) -> Option<T> { 810 + if self.len == 0 { 811 + return None; 812 + } 813 + 814 + let current = self.ptr; 815 + 816 + // SAFETY: We can't overflow; decreasing `self.len` by one every time we advance `self.ptr` 817 + // by one guarantees that. 818 + unsafe { self.ptr = self.ptr.add(1) }; 819 + 820 + self.len -= 1; 821 + 822 + // SAFETY: `current` is guaranteed to point at a valid element within the buffer. 
823 + Some(unsafe { current.read() }) 824 + } 825 + 826 + /// # Examples 827 + /// 828 + /// ``` 829 + /// let v: KVec<u32> = kernel::kvec![1, 2, 3]?; 830 + /// let mut iter = v.into_iter(); 831 + /// let size = iter.size_hint().0; 832 + /// 833 + /// iter.next(); 834 + /// assert_eq!(iter.size_hint().0, size - 1); 835 + /// 836 + /// iter.next(); 837 + /// assert_eq!(iter.size_hint().0, size - 2); 838 + /// 839 + /// iter.next(); 840 + /// assert_eq!(iter.size_hint().0, size - 3); 841 + /// 842 + /// # Ok::<(), Error>(()) 843 + /// ``` 844 + fn size_hint(&self) -> (usize, Option<usize>) { 845 + (self.len, Some(self.len)) 846 + } 847 + } 848 + 849 + impl<T, A> Drop for IntoIter<T, A> 850 + where 851 + A: Allocator, 852 + { 853 + fn drop(&mut self) { 854 + // SAFETY: `self.ptr` is guaranteed to be valid by the type invariant. 855 + unsafe { ptr::drop_in_place(ptr::slice_from_raw_parts_mut(self.ptr, self.len)) }; 856 + 857 + // SAFETY: 858 + // - `self.buf` was previously allocated with `A`. 859 + // - `self.layout` matches the `ArrayLayout` of the preceding allocation. 860 + unsafe { A::free(self.buf.cast(), self.layout.into()) }; 861 + } 862 + } 863 + 864 + impl<T, A> IntoIterator for Vec<T, A> 865 + where 866 + A: Allocator, 867 + { 868 + type Item = T; 869 + type IntoIter = IntoIter<T, A>; 870 + 871 + /// Consumes the `Vec<T, A>` and creates an `Iterator`, which moves each value out of the 872 + /// vector (from start to end). 
873 + /// 874 + /// # Examples 875 + /// 876 + /// ``` 877 + /// let v = kernel::kvec![1, 2]?; 878 + /// let mut v_iter = v.into_iter(); 879 + /// 880 + /// let first_element: Option<u32> = v_iter.next(); 881 + /// 882 + /// assert_eq!(first_element, Some(1)); 883 + /// assert_eq!(v_iter.next(), Some(2)); 884 + /// assert_eq!(v_iter.next(), None); 885 + /// 886 + /// # Ok::<(), Error>(()) 887 + /// ``` 888 + /// 889 + /// ``` 890 + /// let v = kernel::kvec![]; 891 + /// let mut v_iter = v.into_iter(); 892 + /// 893 + /// let first_element: Option<u32> = v_iter.next(); 894 + /// 895 + /// assert_eq!(first_element, None); 896 + /// 897 + /// # Ok::<(), Error>(()) 898 + /// ``` 899 + #[inline] 900 + fn into_iter(self) -> Self::IntoIter { 901 + let buf = self.ptr; 902 + let layout = self.layout; 903 + let (ptr, len, _) = self.into_raw_parts(); 904 + 905 + IntoIter { 906 + ptr, 907 + buf, 908 + len, 909 + layout, 910 + _p: PhantomData::<A>, 911 + } 912 + } 913 + }
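The `into_raw_parts` helper above leans on `core::mem::ManuallyDrop` to disassemble a value into its fields without running its `Drop` impl, which would otherwise free the buffer the caller is about to take over. A minimal userspace sketch of the same pattern, using a hypothetical `Owner` type instead of the kernel's `IntoIter`:

```rust
use core::mem::ManuallyDrop;

// Hypothetical stand-in for a type whose `Drop` impl must not run while
// we move its fields out (in the real `IntoIter`, `drop` frees the buffer).
pub struct Owner {
    value: Box<u32>,
    len: usize,
}

impl Drop for Owner {
    fn drop(&mut self) {
        // In the kernel code this would call `A::free` on the backing buffer.
        println!("dropping Owner");
    }
}

impl Owner {
    pub fn new(value: u32, len: usize) -> Self {
        Owner { value: Box::new(value), len }
    }

    // Disassemble `self` into its parts without invoking `Drop`, mirroring
    // `IntoIter::into_raw_parts` above.
    pub fn into_raw_parts(self) -> (Box<u32>, usize) {
        let me = ManuallyDrop::new(self);
        // Read the non-`Copy` field out exactly once; `Drop` will not run,
        // so ownership is not duplicated.
        let value = unsafe { core::ptr::read(&me.value) };
        let len = me.len;
        (value, len)
    }
}

fn main() {
    let o = Owner::new(7, 3);
    let (value, len) = o.into_raw_parts();
    // "dropping Owner" was never printed; we now own the parts directly.
    assert_eq!(*value, 7);
    assert_eq!(len, 3);
}
```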
+91
rust/kernel/alloc/layout.rs
+ // SPDX-License-Identifier: GPL-2.0
+
+ //! Memory layout.
+ //!
+ //! Custom layout types extending or improving [`Layout`].
+
+ use core::{alloc::Layout, marker::PhantomData};
+
+ /// Error when constructing an [`ArrayLayout`].
+ pub struct LayoutError;
+
+ /// A layout for an array `[T; n]`.
+ ///
+ /// # Invariants
+ ///
+ /// - `len * size_of::<T>() <= isize::MAX`.
+ pub struct ArrayLayout<T> {
+     len: usize,
+     _phantom: PhantomData<fn() -> T>,
+ }
+
+ impl<T> Clone for ArrayLayout<T> {
+     fn clone(&self) -> Self {
+         *self
+     }
+ }
+ impl<T> Copy for ArrayLayout<T> {}
+
+ const ISIZE_MAX: usize = isize::MAX as usize;
+
+ impl<T> ArrayLayout<T> {
+     /// Creates a new layout for `[T; 0]`.
+     pub const fn empty() -> Self {
+         // INVARIANT: `0 * size_of::<T>() <= isize::MAX`.
+         Self {
+             len: 0,
+             _phantom: PhantomData,
+         }
+     }
+
+     /// Creates a new layout for `[T; len]`.
+     ///
+     /// # Errors
+     ///
+     /// When `len * size_of::<T>()` overflows or when `len * size_of::<T>() > isize::MAX`.
+     pub const fn new(len: usize) -> Result<Self, LayoutError> {
+         match len.checked_mul(core::mem::size_of::<T>()) {
+             Some(size) if size <= ISIZE_MAX => {
+                 // INVARIANT: We checked above that `len * size_of::<T>() <= isize::MAX`.
+                 Ok(Self {
+                     len,
+                     _phantom: PhantomData,
+                 })
+             }
+             _ => Err(LayoutError),
+         }
+     }
+
+     /// Creates a new layout for `[T; len]`.
+     ///
+     /// # Safety
+     ///
+     /// `len` must be a value for which `len * size_of::<T>() <= isize::MAX` holds.
+     pub unsafe fn new_unchecked(len: usize) -> Self {
+         // INVARIANT: By the safety requirements of this function
+         // `len * size_of::<T>() <= isize::MAX`.
+         Self {
+             len,
+             _phantom: PhantomData,
+         }
+     }
+
+     /// Returns the number of array elements represented by this layout.
+     pub const fn len(&self) -> usize {
+         self.len
+     }
+
+     /// Returns `true` when no array elements are represented by this layout.
+     pub const fn is_empty(&self) -> bool {
+         self.len == 0
+     }
+ }
+
+ impl<T> From<ArrayLayout<T>> for Layout {
+     fn from(value: ArrayLayout<T>) -> Self {
+         let res = Layout::array::<T>(value.len);
+         // SAFETY: By the type invariant of `ArrayLayout` we have
+         // `len * size_of::<T>() <= isize::MAX` and thus the result must be `Ok`.
+         unsafe { res.unwrap_unchecked() }
+     }
+ }
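The `len * size_of::<T>() <= isize::MAX` invariant that `ArrayLayout` encodes is the same bound `core::alloc::Layout::array` enforces, which is what lets the `From` impl above justify `unwrap_unchecked`. A userspace sketch of that check (the `array_layout_len` helper is an illustrative analogue, not kernel API):

```rust
use core::alloc::Layout;

// Rough userspace analogue of `ArrayLayout::new`: reject any element count
// whose total byte size would overflow or exceed `isize::MAX`.
pub fn array_layout_len<T>(len: usize) -> Option<usize> {
    match len.checked_mul(core::mem::size_of::<T>()) {
        Some(size) if size <= isize::MAX as usize => Some(len),
        _ => None,
    }
}

fn main() {
    // Small arrays are accepted.
    assert_eq!(array_layout_len::<u64>(10), Some(10));
    // An overflowing byte size is rejected, matching `Layout::array`.
    assert!(array_layout_len::<u64>(usize::MAX).is_none());
    assert!(Layout::array::<u64>(usize::MAX).is_err());
    // Zero-sized element types can never exceed the bound.
    assert_eq!(array_layout_len::<()>(usize::MAX), Some(usize::MAX));
}
```

Because both checks agree, any `len` accepted by the constructor is guaranteed to be accepted by `Layout::array` later, so the conversion can never fail at runtime.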
-185
rust/kernel/alloc/vec_ext.rs
- // SPDX-License-Identifier: GPL-2.0
-
- //! Extensions to [`Vec`] for fallible allocations.
-
- use super::{AllocError, Flags};
- use alloc::vec::Vec;
-
- /// Extensions to [`Vec`].
- pub trait VecExt<T>: Sized {
-     /// Creates a new [`Vec`] instance with at least the given capacity.
-     ///
-     /// # Examples
-     ///
-     /// ```
-     /// let v = Vec::<u32>::with_capacity(20, GFP_KERNEL)?;
-     ///
-     /// assert!(v.capacity() >= 20);
-     /// # Ok::<(), Error>(())
-     /// ```
-     fn with_capacity(capacity: usize, flags: Flags) -> Result<Self, AllocError>;
-
-     /// Appends an element to the back of the [`Vec`] instance.
-     ///
-     /// # Examples
-     ///
-     /// ```
-     /// let mut v = Vec::new();
-     /// v.push(1, GFP_KERNEL)?;
-     /// assert_eq!(&v, &[1]);
-     ///
-     /// v.push(2, GFP_KERNEL)?;
-     /// assert_eq!(&v, &[1, 2]);
-     /// # Ok::<(), Error>(())
-     /// ```
-     fn push(&mut self, v: T, flags: Flags) -> Result<(), AllocError>;
-
-     /// Pushes clones of the elements of slice into the [`Vec`] instance.
-     ///
-     /// # Examples
-     ///
-     /// ```
-     /// let mut v = Vec::new();
-     /// v.push(1, GFP_KERNEL)?;
-     ///
-     /// v.extend_from_slice(&[20, 30, 40], GFP_KERNEL)?;
-     /// assert_eq!(&v, &[1, 20, 30, 40]);
-     ///
-     /// v.extend_from_slice(&[50, 60], GFP_KERNEL)?;
-     /// assert_eq!(&v, &[1, 20, 30, 40, 50, 60]);
-     /// # Ok::<(), Error>(())
-     /// ```
-     fn extend_from_slice(&mut self, other: &[T], flags: Flags) -> Result<(), AllocError>
-     where
-         T: Clone;
-
-     /// Ensures that the capacity exceeds the length by at least `additional` elements.
-     ///
-     /// # Examples
-     ///
-     /// ```
-     /// let mut v = Vec::new();
-     /// v.push(1, GFP_KERNEL)?;
-     ///
-     /// v.reserve(10, GFP_KERNEL)?;
-     /// let cap = v.capacity();
-     /// assert!(cap >= 10);
-     ///
-     /// v.reserve(10, GFP_KERNEL)?;
-     /// let new_cap = v.capacity();
-     /// assert_eq!(new_cap, cap);
-     ///
-     /// # Ok::<(), Error>(())
-     /// ```
-     fn reserve(&mut self, additional: usize, flags: Flags) -> Result<(), AllocError>;
- }
-
- impl<T> VecExt<T> for Vec<T> {
-     fn with_capacity(capacity: usize, flags: Flags) -> Result<Self, AllocError> {
-         let mut v = Vec::new();
-         <Self as VecExt<_>>::reserve(&mut v, capacity, flags)?;
-         Ok(v)
-     }
-
-     fn push(&mut self, v: T, flags: Flags) -> Result<(), AllocError> {
-         <Self as VecExt<_>>::reserve(self, 1, flags)?;
-         let s = self.spare_capacity_mut();
-         s[0].write(v);
-
-         // SAFETY: We just initialised the first spare entry, so it is safe to increase the
-         // length by 1. We also know that the new length is <= capacity because of the previous
-         // call to `reserve` above.
-         unsafe { self.set_len(self.len() + 1) };
-         Ok(())
-     }
-
-     fn extend_from_slice(&mut self, other: &[T], flags: Flags) -> Result<(), AllocError>
-     where
-         T: Clone,
-     {
-         <Self as VecExt<_>>::reserve(self, other.len(), flags)?;
-         for (slot, item) in core::iter::zip(self.spare_capacity_mut(), other) {
-             slot.write(item.clone());
-         }
-
-         // SAFETY: We just initialised the `other.len()` spare entries, so it is safe to
-         // increase the length by the same amount. We also know that the new length is <=
-         // capacity because of the previous call to `reserve` above.
-         unsafe { self.set_len(self.len() + other.len()) };
-         Ok(())
-     }
-
-     #[cfg(any(test, testlib))]
-     fn reserve(&mut self, additional: usize, _flags: Flags) -> Result<(), AllocError> {
-         Vec::reserve(self, additional);
-         Ok(())
-     }
-
-     #[cfg(not(any(test, testlib)))]
-     fn reserve(&mut self, additional: usize, flags: Flags) -> Result<(), AllocError> {
-         let len = self.len();
-         let cap = self.capacity();
-
-         if cap - len >= additional {
-             return Ok(());
-         }
-
-         if core::mem::size_of::<T>() == 0 {
-             // The capacity is already `usize::MAX` for ZSTs, we can't go higher.
-             return Err(AllocError);
-         }
-
-         // We know cap is <= `isize::MAX` because `Layout::array` fails if the resulting byte
-         // size is greater than `isize::MAX`. So the multiplication by two won't overflow.
-         let new_cap = core::cmp::max(cap * 2, len.checked_add(additional).ok_or(AllocError)?);
-         let layout = core::alloc::Layout::array::<T>(new_cap).map_err(|_| AllocError)?;
-
-         let (old_ptr, len, cap) = destructure(self);
-
-         // We need to make sure that `ptr` is either NULL or comes from a previous call to
-         // `krealloc_aligned`. A `Vec<T>`'s `ptr` value is not guaranteed to be NULL and might
-         // be dangling after being created with `Vec::new`. Instead, we can rely on `Vec<T>`'s
-         // capacity to be zero if no memory has been allocated yet.
-         let ptr = if cap == 0 {
-             core::ptr::null_mut()
-         } else {
-             old_ptr
-         };
-
-         // SAFETY: `ptr` is valid because it's either NULL or comes from a previous call to
-         // `krealloc_aligned`. We also verified that the type is not a ZST.
-         let new_ptr = unsafe { super::allocator::krealloc_aligned(ptr.cast(), layout, flags) };
-         if new_ptr.is_null() {
-             // SAFETY: We are just rebuilding the existing `Vec` with no changes.
-             unsafe { rebuild(self, old_ptr, len, cap) };
-             Err(AllocError)
-         } else {
-             // SAFETY: `ptr` has been reallocated with the layout for `new_cap` elements. New
-             // cap is greater than `cap`, so it continues to be >= `len`.
-             unsafe { rebuild(self, new_ptr.cast::<T>(), len, new_cap) };
-             Ok(())
-         }
-     }
- }
-
- #[cfg(not(any(test, testlib)))]
- fn destructure<T>(v: &mut Vec<T>) -> (*mut T, usize, usize) {
-     let mut tmp = Vec::new();
-     core::mem::swap(&mut tmp, v);
-     let mut tmp = core::mem::ManuallyDrop::new(tmp);
-     let len = tmp.len();
-     let cap = tmp.capacity();
-     (tmp.as_mut_ptr(), len, cap)
- }
-
- /// Rebuilds a `Vec` from a pointer, length, and capacity.
- ///
- /// # Safety
- ///
- /// The same as [`Vec::from_raw_parts`].
- #[cfg(not(any(test, testlib)))]
- unsafe fn rebuild<T>(v: &mut Vec<T>, ptr: *mut T, len: usize, cap: usize) {
-     // SAFETY: The safety requirements from this function satisfy those of `from_raw_parts`.
-     let mut tmp = unsafe { Vec::from_raw_parts(ptr, len, cap) };
-     core::mem::swap(&mut tmp, v);
- }
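The removed `reserve` used the standard amortized-doubling growth policy: grow to `max(cap * 2, len + additional)`, and do nothing when the spare capacity already suffices. A small sketch of just that policy (pure capacity arithmetic, with the allocator calls left out):

```rust
// Growth policy from the removed `VecExt::reserve`: double the capacity,
// but never allocate less than the caller actually needs. Returns `None`
// on arithmetic overflow, mirroring the `checked_add` in the original.
pub fn new_capacity(cap: usize, len: usize, additional: usize) -> Option<usize> {
    let needed = len.checked_add(additional)?;
    if cap - len >= additional {
        // Enough spare capacity already; no reallocation required.
        return Some(cap);
    }
    // As the removed code notes, `cap <= isize::MAX` (enforced by
    // `Layout::array`), so `cap * 2` cannot overflow here.
    Some(core::cmp::max(cap * 2, needed))
}

fn main() {
    assert_eq!(new_capacity(4, 4, 1), Some(8)); // doubling wins
    assert_eq!(new_capacity(4, 4, 100), Some(104)); // the request wins
    assert_eq!(new_capacity(8, 4, 2), Some(8)); // nothing to do
    assert_eq!(new_capacity(0, 0, usize::MAX - 1), Some(usize::MAX - 1));
}
```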
+9 -9
rust/kernel/block/mq/operations.rs
···
      unsafe extern "C" fn poll_callback(
          _hctx: *mut bindings::blk_mq_hw_ctx,
          _iob: *mut bindings::io_comp_batch,
-     ) -> core::ffi::c_int {
+     ) -> crate::ffi::c_int {
          T::poll().into()
      }
···
      /// for the same context.
      unsafe extern "C" fn init_hctx_callback(
          _hctx: *mut bindings::blk_mq_hw_ctx,
-         _tagset_data: *mut core::ffi::c_void,
-         _hctx_idx: core::ffi::c_uint,
-     ) -> core::ffi::c_int {
+         _tagset_data: *mut crate::ffi::c_void,
+         _hctx_idx: crate::ffi::c_uint,
+     ) -> crate::ffi::c_int {
          from_result(|| Ok(0))
      }
···
      /// This function may only be called by blk-mq C infrastructure.
      unsafe extern "C" fn exit_hctx_callback(
          _hctx: *mut bindings::blk_mq_hw_ctx,
-         _hctx_idx: core::ffi::c_uint,
+         _hctx_idx: crate::ffi::c_uint,
      ) {
      }
···
      unsafe extern "C" fn init_request_callback(
          _set: *mut bindings::blk_mq_tag_set,
          rq: *mut bindings::request,
-         _hctx_idx: core::ffi::c_uint,
-         _numa_node: core::ffi::c_uint,
-     ) -> core::ffi::c_int {
+         _hctx_idx: crate::ffi::c_uint,
+         _numa_node: crate::ffi::c_uint,
+     ) -> crate::ffi::c_int {
          from_result(|| {
              // SAFETY: By the safety requirements of this function, `rq` points
              // to a valid allocation.
···
      unsafe extern "C" fn exit_request_callback(
          _set: *mut bindings::blk_mq_tag_set,
          rq: *mut bindings::request,
-         _hctx_idx: core::ffi::c_uint,
+         _hctx_idx: crate::ffi::c_uint,
      ) {
          // SAFETY: The tagset invariants guarantee that all requests are allocated with extra memory
          // for the request data.
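The `core::ffi` → `crate::ffi` switch in the callbacks above routes every C type alias through a kernel-owned module, so individual types can later be remapped without touching users (the cover letter mentions `__kernel_size_t` and friends). A minimal userspace sketch of such a module; the `c_size_t = usize` mapping here is an assumed illustration of a future remap, not the current kernel definition:

```rust
// Hypothetical kernel-style `ffi` module: today it simply re-exports the
// `core::ffi` aliases, but owning the module means a single alias can be
// remapped later without changing any call sites.
mod ffi {
    pub type c_int = core::ffi::c_int;
    pub type c_uint = core::ffi::c_uint;
    pub type c_void = core::ffi::c_void;
    // Assumed future remap for illustration: kernel `size_t` as `usize`
    // rather than a fixed-width integer.
    pub type c_size_t = usize;
}

// Callback-style code then spells `ffi::c_int` instead of `core::ffi::c_int`.
fn status_to_int(ok: bool) -> ffi::c_int {
    if ok { 0 } else { -1 }
}

fn main() {
    assert_eq!(status_to_int(true), 0);
    assert_eq!(status_to_int(false), -1);
    // The remapped alias tracks the pointer width, unlike a fixed u32/u64.
    assert_eq!(
        core::mem::size_of::<ffi::c_size_t>(),
        core::mem::size_of::<usize>()
    );
}
```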
+1 -1
rust/kernel/block/mq/raw_writer.rs
···
      }

      pub(crate) fn from_array<const N: usize>(
-         a: &'a mut [core::ffi::c_char; N],
+         a: &'a mut [crate::ffi::c_char; N],
      ) -> Result<RawWriter<'a>> {
          Self::new(
              // SAFETY: the buffer of `a` is valid for read and write as `u8` for
+38 -29
rust/kernel/block/mq/request.rs
···
      sync::atomic::{AtomicU64, Ordering},
  };

- /// A wrapper around a blk-mq `struct request`. This represents an IO request.
+ /// A wrapper around a blk-mq [`struct request`]. This represents an IO request.
  ///
  /// # Implementation details
  ///
  /// There are four states for a request that the Rust bindings care about:
  ///
- /// A) Request is owned by block layer (refcount 0)
- /// B) Request is owned by driver but with zero `ARef`s in existence
- ///    (refcount 1)
- /// C) Request is owned by driver with exactly one `ARef` in existence
- ///    (refcount 2)
- /// D) Request is owned by driver with more than one `ARef` in existence
- ///    (refcount > 2)
+ /// 1. Request is owned by block layer (refcount 0).
+ /// 2. Request is owned by driver but with zero [`ARef`]s in existence
+ ///    (refcount 1).
+ /// 3. Request is owned by driver with exactly one [`ARef`] in existence
+ ///    (refcount 2).
+ /// 4. Request is owned by driver with more than one [`ARef`] in existence
+ ///    (refcount > 2).
  ///
  ///
- /// We need to track A and B to ensure we fail tag to request conversions for
+ /// We need to track 1 and 2 to ensure we fail tag to request conversions for
  /// requests that are not owned by the driver.
  ///
- /// We need to track C and D to ensure that it is safe to end the request and hand
+ /// We need to track 3 and 4 to ensure that it is safe to end the request and hand
  /// back ownership to the block layer.
  ///
  /// The states are tracked through the private `refcount` field of
  /// `RequestDataWrapper`. This structure lives in the private data area of the C
- /// `struct request`.
+ /// [`struct request`].
  ///
  /// # Invariants
  ///
- /// * `self.0` is a valid `struct request` created by the C portion of the kernel.
+ /// * `self.0` is a valid [`struct request`] created by the C portion of the
+ ///   kernel.
  /// * The private data area associated with this request must be an initialized
  ///   and valid `RequestDataWrapper<T>`.
  /// * `self` is reference counted by atomic modification of
- ///   self.wrapper_ref().refcount().
+ ///   `self.wrapper_ref().refcount()`.
+ ///
+ /// [`struct request`]: srctree/include/linux/blk-mq.h
  ///
  #[repr(transparent)]
  pub struct Request<T: Operations>(Opaque<bindings::request>, PhantomData<T>);

  impl<T: Operations> Request<T> {
-     /// Create an `ARef<Request>` from a `struct request` pointer.
+     /// Create an [`ARef<Request>`] from a [`struct request`] pointer.
      ///
      /// # Safety
      ///
      /// * The caller must own a refcount on `ptr` that is transferred to the
-     ///   returned `ARef`.
-     /// * The type invariants for `Request` must hold for the pointee of `ptr`.
+     ///   returned [`ARef`].
+     /// * The type invariants for [`Request`] must hold for the pointee of `ptr`.
+     ///
+     /// [`struct request`]: srctree/include/linux/blk-mq.h
      pub(crate) unsafe fn aref_from_raw(ptr: *mut bindings::request) -> ARef<Self> {
          // INVARIANT: By the safety requirements of this function, invariants are upheld.
          // SAFETY: By the safety requirement of this function, we own a
···
      }

      /// Try to take exclusive ownership of `this` by dropping the refcount to 0.
-     /// This fails if `this` is not the only `ARef` pointing to the underlying
-     /// `Request`.
+     /// This fails if `this` is not the only [`ARef`] pointing to the underlying
+     /// [`Request`].
      ///
-     /// If the operation is successful, `Ok` is returned with a pointer to the
-     /// C `struct request`. If the operation fails, `this` is returned in the
-     /// `Err` variant.
+     /// If the operation is successful, [`Ok`] is returned with a pointer to the
+     /// C [`struct request`]. If the operation fails, `this` is returned in the
+     /// [`Err`] variant.
+     ///
+     /// [`struct request`]: srctree/include/linux/blk-mq.h
      fn try_set_end(this: ARef<Self>) -> Result<*mut bindings::request, ARef<Self>> {
          // We can race with `TagSet::tag_to_rq`
          if let Err(_old) = this.wrapper_ref().refcount().compare_exchange(
···

      /// Notify the block layer that the request has been completed without errors.
      ///
-     /// This function will return `Err` if `this` is not the only `ARef`
+     /// This function will return [`Err`] if `this` is not the only [`ARef`]
      /// referencing the request.
      pub fn end_ok(this: ARef<Self>) -> Result<(), ARef<Self>> {
          let request_ptr = Self::try_set_end(this)?;
···
          Ok(())
      }

-     /// Return a pointer to the `RequestDataWrapper` stored in the private area
+     /// Return a pointer to the [`RequestDataWrapper`] stored in the private area
      /// of the request structure.
      ///
      /// # Safety
      ///
      /// - `this` must point to a valid allocation of size at least size of
-     ///   `Self` plus size of `RequestDataWrapper`.
+     ///   [`Self`] plus size of [`RequestDataWrapper`].
      pub(crate) unsafe fn wrapper_ptr(this: *mut Self) -> NonNull<RequestDataWrapper> {
          let request_ptr = this.cast::<bindings::request>();
          // SAFETY: By safety requirements for this function, `this` is a
···
          unsafe { NonNull::new_unchecked(wrapper_ptr) }
      }

-     /// Return a reference to the `RequestDataWrapper` stored in the private
+     /// Return a reference to the [`RequestDataWrapper`] stored in the private
      /// area of the request structure.
      pub(crate) fn wrapper_ref(&self) -> &RequestDataWrapper {
          // SAFETY: By type invariant, `self.0` is a valid allocation. Further,
···
      }
  }

- /// A wrapper around data stored in the private area of the C `struct request`.
+ /// A wrapper around data stored in the private area of the C [`struct request`].
+ ///
+ /// [`struct request`]: srctree/include/linux/blk-mq.h
  pub(crate) struct RequestDataWrapper {
      /// The Rust request refcount has the following states:
      ///
      /// - 0: The request is owned by C block layer.
-     /// - 1: The request is owned by Rust abstractions but there are no ARef references to it.
-     /// - 2+: There are `ARef` references to the request.
+     /// - 1: The request is owned by Rust abstractions but there are no [`ARef`] references to it.
+     /// - 2+: There are [`ARef`] references to the request.
      refcount: AtomicU64,
  }
···
  }

  /// Store the result of `op(target.load)` in `target` if `target.load() !=
- /// pred`, returning true if the target was updated.
+ /// pred`, returning [`true`] if the target was updated.
  fn atomic_relaxed_op_unless(target: &AtomicU64, op: impl Fn(u64) -> u64, pred: u64) -> bool {
      target
          .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |x| {
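`try_set_end` above wins exclusive ownership only in state 3 (exactly one `ARef` plus the driver's base reference, i.e. refcount 2) by atomically swapping the count to 0. A userspace sketch of that check, with the kernel's `Request`/`ARef` machinery stubbed down to the bare atomic:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Mimics the core of `Request::try_set_end`: succeed only if the caller
// holds the sole `ARef` (refcount == 2), atomically dropping the count
// to 0 so no racing reader can revive the request.
pub fn try_set_end(refcount: &AtomicU64) -> bool {
    refcount
        .compare_exchange(2, 0, Ordering::Relaxed, Ordering::Relaxed)
        .is_ok()
}

fn main() {
    // Exactly one ARef in existence: ending the request succeeds.
    let refcount = AtomicU64::new(2);
    assert!(try_set_end(&refcount));
    assert_eq!(refcount.load(Ordering::Relaxed), 0);

    // More than one ARef: the swap fails and the count is untouched.
    let refcount = AtomicU64::new(3);
    assert!(!try_set_end(&refcount));
    assert_eq!(refcount.load(Ordering::Relaxed), 3);
}
```

Using a single `compare_exchange` (rather than a load followed by a store) is what makes the race with `TagSet::tag_to_rq` mentioned in the comment safe: the ownership transition is one indivisible step.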
+1 -1
rust/kernel/block/mq/tag_set.rs
···
      queue_depth: num_tags,
      cmd_size,
      flags: bindings::BLK_MQ_F_SHOULD_MERGE,
-     driver_data: core::ptr::null_mut::<core::ffi::c_void>(),
+     driver_data: core::ptr::null_mut::<crate::ffi::c_void>(),
      nr_maps: num_maps,
      ..tag_set
  }
+48 -31
rust/kernel/error.rs
···

  use crate::{alloc::AllocError, str::CStr};

- use alloc::alloc::LayoutError;
+ use core::alloc::LayoutError;

  use core::fmt;
+ use core::num::NonZeroI32;
  use core::num::TryFromIntError;
  use core::str::Utf8Error;
···
          $(
              #[doc = $doc]
          )*
-         pub const $err: super::Error = super::Error(-(crate::bindings::$err as i32));
+         pub const $err: super::Error =
+             match super::Error::try_from_errno(-(crate::bindings::$err as i32)) {
+                 Some(err) => err,
+                 None => panic!("Invalid errno in `declare_err!`"),
+             };
      };
  }
···
  ///
  /// The value is a valid `errno` (i.e. `>= -MAX_ERRNO && < 0`).
  #[derive(Clone, Copy, PartialEq, Eq)]
- pub struct Error(core::ffi::c_int);
+ pub struct Error(NonZeroI32);

  impl Error {
      /// Creates an [`Error`] from a kernel error code.
      ///
      /// It is a bug to pass an out-of-range `errno`. `EINVAL` would
      /// be returned in such a case.
-     pub(crate) fn from_errno(errno: core::ffi::c_int) -> Error {
+     pub fn from_errno(errno: crate::ffi::c_int) -> Error {
          if errno < -(bindings::MAX_ERRNO as i32) || errno >= 0 {
              // TODO: Make it a `WARN_ONCE` once available.
              crate::pr_warn!(
···

          // INVARIANT: The check above ensures the type invariant
          // will hold.
-         Error(errno)
+         // SAFETY: `errno` is checked above to be in a valid range.
+         unsafe { Error::from_errno_unchecked(errno) }
+     }
+
+     /// Creates an [`Error`] from a kernel error code.
+     ///
+     /// Returns [`None`] if `errno` is out-of-range.
+     const fn try_from_errno(errno: crate::ffi::c_int) -> Option<Error> {
+         if errno < -(bindings::MAX_ERRNO as i32) || errno >= 0 {
+             return None;
+         }
+
+         // SAFETY: `errno` is checked above to be in a valid range.
+         Some(unsafe { Error::from_errno_unchecked(errno) })
      }

      /// Creates an [`Error`] from a kernel error code.
···
      /// # Safety
      ///
      /// `errno` must be within error code range (i.e. `>= -MAX_ERRNO && < 0`).
-     unsafe fn from_errno_unchecked(errno: core::ffi::c_int) -> Error {
+     const unsafe fn from_errno_unchecked(errno: crate::ffi::c_int) -> Error {
          // INVARIANT: The contract ensures the type invariant
          // will hold.
-         Error(errno)
+         // SAFETY: The caller guarantees `errno` is non-zero.
+         Error(unsafe { NonZeroI32::new_unchecked(errno) })
      }

      /// Returns the kernel error code.
-     pub fn to_errno(self) -> core::ffi::c_int {
-         self.0
+     pub fn to_errno(self) -> crate::ffi::c_int {
+         self.0.get()
      }

      #[cfg(CONFIG_BLOCK)]
      pub(crate) fn to_blk_status(self) -> bindings::blk_status_t {
          // SAFETY: `self.0` is a valid error due to its invariant.
-         unsafe { bindings::errno_to_blk_status(self.0) }
+         unsafe { bindings::errno_to_blk_status(self.0.get()) }
      }

      /// Returns the error encoded as a pointer.
-     #[allow(dead_code)]
-     pub(crate) fn to_ptr<T>(self) -> *mut T {
+     pub fn to_ptr<T>(self) -> *mut T {
          #[cfg_attr(target_pointer_width = "32", allow(clippy::useless_conversion))]
          // SAFETY: `self.0` is a valid error due to its invariant.
          unsafe {
-             bindings::ERR_PTR(self.0.into()) as *mut _
+             bindings::ERR_PTR(self.0.get().into()) as *mut _
          }
      }

      /// Returns a string representing the error, if one exists.
-     #[cfg(not(testlib))]
+     #[cfg(not(any(test, testlib)))]
      pub fn name(&self) -> Option<&'static CStr> {
          // SAFETY: Just an FFI call, there are no extra safety requirements.
-         let ptr = unsafe { bindings::errname(-self.0) };
+         let ptr = unsafe { bindings::errname(-self.0.get()) };
          if ptr.is_null() {
              None
          } else {
···
      /// When `testlib` is configured, this always returns `None` to avoid the dependency on a
      /// kernel function so that tests that use this (e.g., by calling [`Result::unwrap`]) can still
      /// run in userspace.
-     #[cfg(testlib)]
+     #[cfg(any(test, testlib))]
      pub fn name(&self) -> Option<&'static CStr> {
          None
      }
···
          match self.name() {
              // Print out number if no name can be found.
              None => f.debug_tuple("Error").field(&-self.0).finish(),
-             // SAFETY: These strings are ASCII-only.
              Some(name) => f
-                 .debug_tuple(unsafe { core::str::from_utf8_unchecked(name) })
+                 .debug_tuple(
+                     // SAFETY: These strings are ASCII-only.
+                     unsafe { core::str::from_utf8_unchecked(name) },
+                 )
                  .finish(),
          }
      }
···

  /// Converts an integer as returned by a C kernel function to an error if it's negative, and
  /// `Ok(())` otherwise.
- pub fn to_result(err: core::ffi::c_int) -> Result {
+ pub fn to_result(err: crate::ffi::c_int) -> Result {
      if err < 0 {
          Err(Error::from_errno(err))
      } else {
···
  /// fn devm_platform_ioremap_resource(
  ///     pdev: &mut PlatformDevice,
  ///     index: u32,
- /// ) -> Result<*mut core::ffi::c_void> {
+ /// ) -> Result<*mut kernel::ffi::c_void> {
  ///     // SAFETY: `pdev` points to a valid platform device. There are no safety requirements
  ///     // on `index`.
  ///     from_err_ptr(unsafe { bindings::devm_platform_ioremap_resource(pdev.to_ptr(), index) })
  /// }
  /// ```
- // TODO: Remove `dead_code` marker once an in-kernel client is available.
- #[allow(dead_code)]
- pub(crate) fn from_err_ptr<T>(ptr: *mut T) -> Result<*mut T> {
-     // CAST: Casting a pointer to `*const core::ffi::c_void` is always valid.
-     let const_ptr: *const core::ffi::c_void = ptr.cast();
+ pub fn from_err_ptr<T>(ptr: *mut T) -> Result<*mut T> {
+     // CAST: Casting a pointer to `*const crate::ffi::c_void` is always valid.
+     let const_ptr: *const crate::ffi::c_void = ptr.cast();
      // SAFETY: The FFI function does not deref the pointer.
      if unsafe { bindings::IS_ERR(const_ptr) } {
          // SAFETY: The FFI function does not deref the pointer.
          let err = unsafe { bindings::PTR_ERR(const_ptr) };
+
+         #[allow(clippy::unnecessary_cast)]
          // CAST: If `IS_ERR()` returns `true`,
          // then `PTR_ERR()` is guaranteed to return a
          // negative value greater-or-equal to `-bindings::MAX_ERRNO`,
···
          //
          // SAFETY: `IS_ERR()` ensures `err` is a
          // negative value greater-or-equal to `-bindings::MAX_ERRNO`.
-         #[allow(clippy::unnecessary_cast)]
-         return Err(unsafe { Error::from_errno_unchecked(err as core::ffi::c_int) });
+         return Err(unsafe { Error::from_errno_unchecked(err as crate::ffi::c_int) });
      }
      Ok(ptr)
  }
···
  /// # use kernel::bindings;
  /// unsafe extern "C" fn probe_callback(
  ///     pdev: *mut bindings::platform_device,
- /// ) -> core::ffi::c_int {
+ /// ) -> kernel::ffi::c_int {
  ///     from_result(|| {
  ///         let ptr = devm_alloc(pdev)?;
  ///         bindings::platform_set_drvdata(pdev, ptr);
···
  ///     })
  /// }
  /// ```
- // TODO: Remove `dead_code` marker once an in-kernel client is available.
- #[allow(dead_code)]
- pub(crate) fn from_result<T, F>(f: F) -> T
+ pub fn from_result<T, F>(f: F) -> T
  where
      T: From<i16>,
      F: FnOnce() -> Result<T>,
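One payoff of storing the errno in a `NonZeroI32` (besides statically ruling out a zero "error") is the niche optimization: `Option<Error>`-shaped types stay the size of a plain `i32`, because the all-zero bit pattern is free to encode `None`. This is observable in userspace with the standard library type alone:

```rust
use core::mem::size_of;
use core::num::NonZeroI32;

fn main() {
    // The zero bit pattern represents `None`, so the Option costs nothing.
    assert_eq!(size_of::<Option<NonZeroI32>>(), size_of::<i32>());

    // A plain i32 has no such niche: its Option needs a separate discriminant.
    assert!(size_of::<Option<i32>>() > size_of::<i32>());

    // Construction mirrors the `try_from_errno` pattern above: zero is
    // rejected, any other value round-trips through `get`.
    assert!(NonZeroI32::new(0).is_none());
    assert_eq!(NonZeroI32::new(-22).map(NonZeroI32::get), Some(-22));
}
```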
+43 -84
rust/kernel/init.rs
···
  //! To initialize a `struct` with an in-place constructor you will need two things:
  //! - an in-place constructor,
  //! - a memory location that can hold your `struct` (this can be the [stack], an [`Arc<T>`],
- //!   [`UniqueArc<T>`], [`Box<T>`] or any other smart pointer that implements [`InPlaceInit`]).
+ //!   [`UniqueArc<T>`], [`KBox<T>`] or any other smart pointer that implements [`InPlaceInit`]).
  //!
  //! To get an in-place constructor there are generally three options:
  //! - directly creating an in-place constructor using the [`pin_init!`] macro,
···
  //! that you need to write `<-` instead of `:` for fields that you want to initialize in-place.
  //!
  //! ```rust
- //! # #![allow(clippy::disallowed_names)]
+ //! # #![expect(clippy::disallowed_names)]
  //! use kernel::sync::{new_mutex, Mutex};
  //! # use core::pin::Pin;
  //! #[pin_data]
···
  //! (or just the stack) to actually initialize a `Foo`:
  //!
  //! ```rust
- //! # #![allow(clippy::disallowed_names)]
+ //! # #![expect(clippy::disallowed_names)]
  //! # use kernel::sync::{new_mutex, Mutex};
  //! # use core::pin::Pin;
  //! # #[pin_data]
···
  //! #     a <- new_mutex!(42, "Foo::a"),
  //! #     b: 24,
  //! # });
- //! let foo: Result<Pin<Box<Foo>>> = Box::pin_init(foo, GFP_KERNEL);
+ //! let foo: Result<Pin<KBox<Foo>>> = KBox::pin_init(foo, GFP_KERNEL);
  //! ```
  //!
  //! For more information see the [`pin_init!`] macro.
···
  //! To declare an init macro/function you just return an [`impl PinInit<T, E>`]:
  //!
  //! ```rust
- //! # #![allow(clippy::disallowed_names)]
  //! # use kernel::{sync::Mutex, new_mutex, init::PinInit, try_pin_init};
  //! #[pin_data]
  //! struct DriverData {
  //!     #[pin]
  //!     status: Mutex<i32>,
- //!     buffer: Box<[u8; 1_000_000]>,
+ //!     buffer: KBox<[u8; 1_000_000]>,
  //! }
  //!
  //! impl DriverData {
  //!     fn new() -> impl PinInit<Self, Error> {
  //!         try_pin_init!(Self {
  //!             status <- new_mutex!(0, "DriverData::status"),
- //!             buffer: Box::init(kernel::init::zeroed(), GFP_KERNEL)?,
+ //!             buffer: KBox::init(kernel::init::zeroed(), GFP_KERNEL)?,
  //!         })
  //!     }
  //! }
···
  //! `slot` gets called.
  //!
  //! ```rust
- //! # #![allow(unreachable_pub, clippy::disallowed_names)]
+ //! # #![expect(unreachable_pub, clippy::disallowed_names)]
  //! use kernel::{init, types::Opaque};
  //! use core::{ptr::addr_of_mut, marker::PhantomPinned, pin::Pin};
  //! # mod bindings {
- //! #     #![allow(non_camel_case_types)]
+ //! #     #![expect(non_camel_case_types)]
+ //! #     #![expect(clippy::missing_safety_doc)]
  //! #     pub struct foo;
  //! #     pub unsafe fn init_foo(_ptr: *mut foo) {}
  //! #     pub unsafe fn destroy_foo(_ptr: *mut foo) {}
···
  //! # }
  //! # // `Error::from_errno` is `pub(crate)` in the `kernel` crate, thus provide a workaround.
  //! # trait FromErrno {
- //! #     fn from_errno(errno: core::ffi::c_int) -> Error {
+ //! #     fn from_errno(errno: kernel::ffi::c_int) -> Error {
  //! #         // Dummy error that can be constructed outside the `kernel` crate.
  //! #         Error::from(core::fmt::Error)
  //! #     }
···
  //! [`pin_init!`]: crate::pin_init!

  use crate::{
-     alloc::{box_ext::BoxExt, AllocError, Flags},
+     alloc::{AllocError, Flags, KBox},
      error::{self, Error},
      sync::Arc,
      sync::UniqueArc,
      types::{Opaque, ScopeGuard},
  };
- use alloc::boxed::Box;
  use core::{
      cell::UnsafeCell,
      convert::Infallible,
···
  /// # Examples
  ///
  /// ```rust
- /// # #![allow(clippy::disallowed_names)]
+ /// # #![expect(clippy::disallowed_names)]
  /// # use kernel::{init, macros::pin_data, pin_init, stack_pin_init, init::*, sync::Mutex, new_mutex};
  /// # use core::pin::Pin;
  /// #[pin_data]
···
  /// # Examples
  ///
  /// ```rust,ignore
- /// # #![allow(clippy::disallowed_names)]
+ /// # #![expect(clippy::disallowed_names)]
  /// # use kernel::{init, pin_init, stack_try_pin_init, init::*, sync::Mutex, new_mutex};
  /// # use macros::pin_data;
  /// # use core::{alloc::AllocError, pin::Pin};
···
  /// struct Foo {
  ///     #[pin]
  ///     a: Mutex<usize>,
- ///     b: Box<Bar>,
+ ///     b: KBox<Bar>,
  /// }
  ///
  /// struct Bar {
···
  ///
  /// stack_try_pin_init!(let foo: Result<Pin<&mut Foo>, AllocError> = pin_init!(Foo {
  ///     a <- new_mutex!(42),
- ///     b: Box::new(Bar {
+ ///     b: KBox::new(Bar {
  ///         x: 64,
  ///     }, GFP_KERNEL)?,
  /// }));
···
  /// ```
  ///
  /// ```rust,ignore
- /// # #![allow(clippy::disallowed_names)]
+ /// # #![expect(clippy::disallowed_names)]
  /// # use kernel::{init, pin_init, stack_try_pin_init, init::*, sync::Mutex, new_mutex};
  /// # use macros::pin_data;
  /// # use core::{alloc::AllocError, pin::Pin};
···
  /// struct Foo {
  ///     #[pin]
  ///     a: Mutex<usize>,
- ///     b: Box<Bar>,
+ ///     b: KBox<Bar>,
  /// }
  ///
  /// struct Bar {
334 /// stack_try_pin_init!(let foo: Pin<&mut Foo> =? pin_init!(Foo { 334 335 /// a <- new_mutex!(42), 335 - /// b: Box::new(Bar { 336 + /// b: KBox::new(Bar { 336 337 /// x: 64, 337 338 /// }, GFP_KERNEL)?, 338 339 /// })); ··· 367 368 /// The syntax is almost identical to that of a normal `struct` initializer: 368 369 /// 369 370 /// ```rust 370 - /// # #![allow(clippy::disallowed_names)] 371 371 /// # use kernel::{init, pin_init, macros::pin_data, init::*}; 372 372 /// # use core::pin::Pin; 373 373 /// #[pin_data] ··· 390 392 /// }, 391 393 /// }); 392 394 /// # initializer } 393 - /// # Box::pin_init(demo(), GFP_KERNEL).unwrap(); 395 + /// # KBox::pin_init(demo(), GFP_KERNEL).unwrap(); 394 396 /// ``` 395 397 /// 396 398 /// Arbitrary Rust expressions can be used to set the value of a variable. ··· 411 413 /// To create an initializer function, simply declare it like this: 412 414 /// 413 415 /// ```rust 414 - /// # #![allow(clippy::disallowed_names)] 415 416 /// # use kernel::{init, pin_init, init::*}; 416 417 /// # use core::pin::Pin; 417 418 /// # #[pin_data] ··· 437 440 /// Users of `Foo` can now create it like this: 438 441 /// 439 442 /// ```rust 440 - /// # #![allow(clippy::disallowed_names)] 443 + /// # #![expect(clippy::disallowed_names)] 441 444 /// # use kernel::{init, pin_init, macros::pin_data, init::*}; 442 445 /// # use core::pin::Pin; 443 446 /// # #[pin_data] ··· 459 462 /// # }) 460 463 /// # } 461 464 /// # } 462 - /// let foo = Box::pin_init(Foo::new(), GFP_KERNEL); 465 + /// let foo = KBox::pin_init(Foo::new(), GFP_KERNEL); 463 466 /// ``` 464 467 /// 465 468 /// They can also easily embed it into their own `struct`s: 466 469 /// 467 470 /// ```rust 468 - /// # #![allow(clippy::disallowed_names)] 469 471 /// # use kernel::{init, pin_init, macros::pin_data, init::*}; 470 472 /// # use core::pin::Pin; 471 473 /// # #[pin_data] ··· 537 541 /// } 538 542 /// pin_init!(&this in Buf { 539 543 /// buf: [0; 64], 544 + /// // SAFETY: TODO. 
540 545 /// ptr: unsafe { addr_of_mut!((*this.as_ptr()).buf).cast() }, 541 546 /// pin: PhantomPinned, 542 547 /// }); ··· 587 590 /// # Examples 588 591 /// 589 592 /// ```rust 590 - /// # #![feature(new_uninit)] 591 593 /// use kernel::{init::{self, PinInit}, error::Error}; 592 594 /// #[pin_data] 593 595 /// struct BigBuf { 594 - /// big: Box<[u8; 1024 * 1024 * 1024]>, 596 + /// big: KBox<[u8; 1024 * 1024 * 1024]>, 595 597 /// small: [u8; 1024 * 1024], 596 598 /// ptr: *mut u8, 597 599 /// } ··· 598 602 /// impl BigBuf { 599 603 /// fn new() -> impl PinInit<Self, Error> { 600 604 /// try_pin_init!(Self { 601 - /// big: Box::init(init::zeroed(), GFP_KERNEL)?, 605 + /// big: KBox::init(init::zeroed(), GFP_KERNEL)?, 602 606 /// small: [0; 1024 * 1024], 603 607 /// ptr: core::ptr::null_mut(), 604 608 /// }? Error) ··· 690 694 /// # Examples 691 695 /// 692 696 /// ```rust 693 - /// use kernel::{init::{PinInit, zeroed}, error::Error}; 697 + /// use kernel::{alloc::KBox, init::{PinInit, zeroed}, error::Error}; 694 698 /// struct BigBuf { 695 - /// big: Box<[u8; 1024 * 1024 * 1024]>, 699 + /// big: KBox<[u8; 1024 * 1024 * 1024]>, 696 700 /// small: [u8; 1024 * 1024], 697 701 /// } 698 702 /// 699 703 /// impl BigBuf { 700 704 /// fn new() -> impl Init<Self, Error> { 701 705 /// try_init!(Self { 702 - /// big: Box::init(zeroed(), GFP_KERNEL)?, 706 + /// big: KBox::init(zeroed(), GFP_KERNEL)?, 703 707 /// small: [0; 1024 * 1024], 704 708 /// }? Error) 705 709 /// } ··· 810 814 /// A pin-initializer for the type `T`. 811 815 /// 812 816 /// To use this initializer, you will need a suitable memory location that can hold a `T`. This can 813 - /// be [`Box<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use the 814 - /// [`InPlaceInit::pin_init`] function of a smart pointer like [`Arc<T>`] on this. 817 + /// be [`KBox<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). 
Use 818 + /// the [`InPlaceInit::pin_init`] function of a smart pointer like [`Arc<T>`] on this. 815 819 /// 816 820 /// Also see the [module description](self). 817 821 /// ··· 850 854 /// # Examples 851 855 /// 852 856 /// ```rust 853 - /// # #![allow(clippy::disallowed_names)] 857 + /// # #![expect(clippy::disallowed_names)] 854 858 /// use kernel::{types::Opaque, init::pin_init_from_closure}; 855 859 /// #[repr(C)] 856 860 /// struct RawFoo([u8; 16]); ··· 871 875 /// } 872 876 /// 873 877 /// let foo = pin_init!(Foo { 878 + /// // SAFETY: TODO. 874 879 /// raw <- unsafe { 875 880 /// Opaque::ffi_init(|s| { 876 881 /// init_foo(s); ··· 891 894 } 892 895 893 896 /// An initializer returned by [`PinInit::pin_chain`]. 894 - pub struct ChainPinInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, Box<T>)>); 897 + pub struct ChainPinInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, KBox<T>)>); 895 898 896 899 // SAFETY: The `__pinned_init` function is implemented such that it 897 900 // - returns `Ok(())` on successful initialization, ··· 917 920 /// An initializer for `T`. 918 921 /// 919 922 /// To use this initializer, you will need a suitable memory location that can hold a `T`. This can 920 - /// be [`Box<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use the 921 - /// [`InPlaceInit::init`] function of a smart pointer like [`Arc<T>`] on this. Because 923 + /// be [`KBox<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use 924 + /// the [`InPlaceInit::init`] function of a smart pointer like [`Arc<T>`] on this. Because 922 925 /// [`PinInit<T, E>`] is a super trait, you can use every function that takes it as well. 923 926 /// 924 927 /// Also see the [module description](self). 
··· 962 965 /// # Examples 963 966 /// 964 967 /// ```rust 965 - /// # #![allow(clippy::disallowed_names)] 968 + /// # #![expect(clippy::disallowed_names)] 966 969 /// use kernel::{types::Opaque, init::{self, init_from_closure}}; 967 970 /// struct Foo { 968 971 /// buf: [u8; 1_000_000], ··· 990 993 } 991 994 992 995 /// An initializer returned by [`Init::chain`]. 993 - pub struct ChainInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, Box<T>)>); 996 + pub struct ChainInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, KBox<T>)>); 994 997 995 998 // SAFETY: The `__init` function is implemented such that it 996 999 // - returns `Ok(())` on successful initialization, ··· 1074 1077 /// # Examples 1075 1078 /// 1076 1079 /// ```rust 1077 - /// use kernel::{error::Error, init::init_array_from_fn}; 1078 - /// let array: Box<[usize; 1_000]> = Box::init::<Error>(init_array_from_fn(|i| i), GFP_KERNEL).unwrap(); 1080 + /// use kernel::{alloc::KBox, error::Error, init::init_array_from_fn}; 1081 + /// let array: KBox<[usize; 1_000]> = 1082 + /// KBox::init::<Error>(init_array_from_fn(|i| i), GFP_KERNEL).unwrap(); 1079 1083 /// assert_eq!(array.len(), 1_000); 1080 1084 /// ``` 1081 1085 pub fn init_array_from_fn<I, const N: usize, T, E>( ··· 1160 1162 // SAFETY: Every type can be initialized by-value. 1161 1163 unsafe impl<T, E> Init<T, E> for T { 1162 1164 unsafe fn __init(self, slot: *mut T) -> Result<(), E> { 1165 + // SAFETY: TODO. 1163 1166 unsafe { slot.write(self) }; 1164 1167 Ok(()) 1165 1168 } ··· 1169 1170 // SAFETY: Every type can be initialized by-value. `__pinned_init` calls `__init`. 1170 1171 unsafe impl<T, E> PinInit<T, E> for T { 1171 1172 unsafe fn __pinned_init(self, slot: *mut T) -> Result<(), E> { 1173 + // SAFETY: TODO. 
1172 1174 unsafe { self.__init(slot) } 1173 1175 } 1174 1176 } ··· 1243 1243 } 1244 1244 } 1245 1245 1246 - impl<T> InPlaceInit<T> for Box<T> { 1247 - type PinnedSelf = Pin<Self>; 1248 - 1249 - #[inline] 1250 - fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Self::PinnedSelf, E> 1251 - where 1252 - E: From<AllocError>, 1253 - { 1254 - <Box<_> as BoxExt<_>>::new_uninit(flags)?.write_pin_init(init) 1255 - } 1256 - 1257 - #[inline] 1258 - fn try_init<E>(init: impl Init<T, E>, flags: Flags) -> Result<Self, E> 1259 - where 1260 - E: From<AllocError>, 1261 - { 1262 - <Box<_> as BoxExt<_>>::new_uninit(flags)?.write_init(init) 1263 - } 1264 - } 1265 - 1266 1246 impl<T> InPlaceInit<T> for UniqueArc<T> { 1267 1247 type PinnedSelf = Pin<Self>; 1268 1248 ··· 1277 1297 /// 1278 1298 /// Does not drop the current value and considers it as uninitialized memory. 1279 1299 fn write_pin_init<E>(self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E>; 1280 - } 1281 - 1282 - impl<T> InPlaceWrite<T> for Box<MaybeUninit<T>> { 1283 - type Initialized = Box<T>; 1284 - 1285 - fn write_init<E>(mut self, init: impl Init<T, E>) -> Result<Self::Initialized, E> { 1286 - let slot = self.as_mut_ptr(); 1287 - // SAFETY: When init errors/panics, slot will get deallocated but not dropped, 1288 - // slot is valid. 1289 - unsafe { init.__init(slot)? }; 1290 - // SAFETY: All fields have been initialized. 1291 - Ok(unsafe { self.assume_init() }) 1292 - } 1293 - 1294 - fn write_pin_init<E>(mut self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E> { 1295 - let slot = self.as_mut_ptr(); 1296 - // SAFETY: When init errors/panics, slot will get deallocated but not dropped, 1297 - // slot is valid and will not be moved, because we pin it later. 1298 - unsafe { init.__pinned_init(slot)? }; 1299 - // SAFETY: All fields have been initialized. 
1300 - Ok(unsafe { self.assume_init() }.into()) 1301 - } 1302 1300 } 1303 1301 1304 1302 impl<T> InPlaceWrite<T> for UniqueArc<MaybeUninit<T>> { ··· 1369 1411 1370 1412 macro_rules! impl_zeroable { 1371 1413 ($($({$($generics:tt)*})? $t:ty, )*) => { 1414 + // SAFETY: Safety comments written in the macro invocation. 1372 1415 $(unsafe impl$($($generics)*)? Zeroable for $t {})* 1373 1416 }; 1374 1417 } ··· 1410 1451 // 1411 1452 // In this case we are allowed to use `T: ?Sized`, since all zeros is the `None` variant. 1412 1453 {<T: ?Sized>} Option<NonNull<T>>, 1413 - {<T: ?Sized>} Option<Box<T>>, 1454 + {<T: ?Sized>} Option<KBox<T>>, 1414 1455 1415 1456 // SAFETY: `null` pointer is valid. 1416 1457 //
rust/kernel/init/__internal.rs (+9 -4)
··· 15 15 /// [this table]: https://doc.rust-lang.org/nomicon/phantom-data.html#table-of-phantomdata-patterns 16 16 pub(super) type Invariant<T> = PhantomData<fn(*mut T) -> *mut T>; 17 17 18 - /// This is the module-internal type implementing `PinInit` and `Init`. It is unsafe to create this 19 - /// type, since the closure needs to fulfill the same safety requirement as the 20 - /// `__pinned_init`/`__init` functions. 18 + /// Module-internal type implementing `PinInit` and `Init`. 19 + /// 20 + /// It is unsafe to create this type, since the closure needs to fulfill the same safety 21 + /// requirement as the `__pinned_init`/`__init` functions. 21 22 pub(crate) struct InitClosure<F, T: ?Sized, E>(pub(crate) F, pub(crate) Invariant<(E, T)>); 22 23 23 24 // SAFETY: While constructing the `InitClosure`, the user promised that it upholds the ··· 54 53 pub unsafe trait HasPinData { 55 54 type PinData: PinData; 56 55 56 + #[expect(clippy::missing_safety_doc)] 57 57 unsafe fn __pin_data() -> Self::PinData; 58 58 } 59 59 ··· 84 82 pub unsafe trait HasInitData { 85 83 type InitData: InitData; 86 84 85 + #[expect(clippy::missing_safety_doc)] 87 86 unsafe fn __init_data() -> Self::InitData; 88 87 } 89 88 ··· 105 102 } 106 103 } 107 104 108 - pub struct AllData<T: ?Sized>(PhantomData<fn(Box<T>) -> Box<T>>); 105 + pub struct AllData<T: ?Sized>(PhantomData<fn(KBox<T>) -> KBox<T>>); 109 106 110 107 impl<T: ?Sized> Clone for AllData<T> { 111 108 fn clone(&self) -> Self { ··· 115 112 116 113 impl<T: ?Sized> Copy for AllData<T> {} 117 114 115 + // SAFETY: TODO. 118 116 unsafe impl<T: ?Sized> InitData for AllData<T> { 119 117 type Datee = T; 120 118 } 121 119 120 + // SAFETY: TODO. 122 121 unsafe impl<T: ?Sized> HasInitData for T { 123 122 type InitData = AllData<T>; 124 123
rust/kernel/init/macros.rs (+14 -4)
··· 182 182 //! // Normally `Drop` bounds do not have the correct semantics, but for this purpose they do 183 183 //! // (normally people want to know if a type has any kind of drop glue at all, here we want 184 184 //! // to know if it has any kind of custom drop glue, which is exactly what this bound does). 185 - //! #[allow(drop_bounds)] 185 + //! #[expect(drop_bounds)] 186 186 //! impl<T: ::core::ops::Drop> MustNotImplDrop for T {} 187 187 //! impl<T> MustNotImplDrop for Bar<T> {} 188 188 //! // Here comes a convenience check, if one implemented `PinnedDrop`, but forgot to add it to 189 189 //! // `#[pin_data]`, then this will error with the same mechanic as above, this is not needed 190 190 //! // for safety, but a good sanity check, since no normal code calls `PinnedDrop::drop`. 191 - //! #[allow(non_camel_case_types)] 191 + //! #[expect(non_camel_case_types)] 192 192 //! trait UselessPinnedDropImpl_you_need_to_specify_PinnedDrop {} 193 193 //! impl< 194 194 //! T: ::kernel::init::PinnedDrop, ··· 513 513 } 514 514 ), 515 515 ) => { 516 + // SAFETY: TODO. 516 517 unsafe $($impl_sig)* { 517 518 // Inherit all attributes and the type/ident tokens for the signature. 518 519 $(#[$($attr)*])* ··· 873 872 } 874 873 } 875 874 875 + // SAFETY: TODO. 876 876 unsafe impl<$($impl_generics)*> 877 877 $crate::init::__internal::PinData for __ThePinData<$($ty_generics)*> 878 878 where $($whr)* ··· 925 923 // `Drop`. Additionally we will implement this trait for the struct leading to a conflict, 926 924 // if it also implements `Drop` 927 925 trait MustNotImplDrop {} 928 - #[allow(drop_bounds)] 926 + #[expect(drop_bounds)] 929 927 impl<T: ::core::ops::Drop> MustNotImplDrop for T {} 930 928 impl<$($impl_generics)*> MustNotImplDrop for $name<$($ty_generics)*> 931 929 where $($whr)* {} 932 930 // We also take care to prevent users from writing a useless `PinnedDrop` implementation. 
933 931 // They might implement `PinnedDrop` correctly for the struct, but forget to give 934 932 // `PinnedDrop` as the parameter to `#[pin_data]`. 935 - #[allow(non_camel_case_types)] 933 + #[expect(non_camel_case_types)] 936 934 trait UselessPinnedDropImpl_you_need_to_specify_PinnedDrop {} 937 935 impl<T: $crate::init::PinnedDrop> 938 936 UselessPinnedDropImpl_you_need_to_specify_PinnedDrop for T {} ··· 989 987 // 990 988 // The functions are `unsafe` to prevent accidentally calling them. 991 989 #[allow(dead_code)] 990 + #[expect(clippy::missing_safety_doc)] 992 991 impl<$($impl_generics)*> $pin_data<$($ty_generics)*> 993 992 where $($whr)* 994 993 { ··· 1000 997 slot: *mut $p_type, 1001 998 init: impl $crate::init::PinInit<$p_type, E>, 1002 999 ) -> ::core::result::Result<(), E> { 1000 + // SAFETY: TODO. 1003 1001 unsafe { $crate::init::PinInit::__pinned_init(init, slot) } 1004 1002 } 1005 1003 )* ··· 1011 1007 slot: *mut $type, 1012 1008 init: impl $crate::init::Init<$type, E>, 1013 1009 ) -> ::core::result::Result<(), E> { 1010 + // SAFETY: TODO. 1014 1011 unsafe { $crate::init::Init::__init(init, slot) } 1015 1012 } 1016 1013 )* ··· 1126 1121 // no possibility of returning without `unsafe`. 1127 1122 struct __InitOk; 1128 1123 // Get the data about fields from the supplied type. 1124 + // 1125 + // SAFETY: TODO. 1129 1126 let data = unsafe { 1130 1127 use $crate::init::__internal::$has_data; 1131 1128 // Here we abuse `paste!` to retokenize `$t`. Declarative macros have some internal ··· 1183 1176 let init = move |slot| -> ::core::result::Result<(), $err> { 1184 1177 init(slot).map(|__InitOk| ()) 1185 1178 }; 1179 + // SAFETY: TODO. 1186 1180 let init = unsafe { $crate::init::$construct_closure::<_, $err>(init) }; 1187 1181 init 1188 1182 }}; ··· 1332 1324 // Endpoint, nothing more to munch, create the initializer. 1333 1325 // Since we are in the closure that is never called, this will never get executed. 
1334 1326 // We abuse `slot` to get the correct type inference here: 1327 + // 1328 + // SAFETY: TODO. 1335 1329 unsafe { 1336 1330 // Here we abuse `paste!` to retokenize `$t`. Declarative macros have some internal 1337 1331 // information that is associated to already parsed fragments, so a path fragment
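The `MustNotImplDrop` machinery in the expansion above relies on trait coherence: a blanket impl of a marker trait for every `Drop` type can only coexist with a per-type impl if that type has no custom drop glue. A standalone sketch of the trick (hypothetical `Bar`; Rust 1.81+ for `#[expect]`):

```rust
trait MustNotImplDrop {}

// The bound deliberately uses `Drop` to mean "has custom drop glue";
// the `drop_bounds` lint fires on such bounds, so it is expected here.
#[expect(drop_bounds)]
impl<T: core::ops::Drop> MustNotImplDrop for T {}

struct Bar(u32);
// Compiles because `Bar` has no `Drop` impl; adding `impl Drop for Bar`
// would make this impl overlap with the blanket one and fail to compile.
impl MustNotImplDrop for Bar {}

fn main() {
    let bar = Bar(42);
    assert_eq!(bar.0, 42);
}
```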
rust/kernel/ioctl.rs (+1 -1)
··· 4 4 //! 5 5 //! C header: [`include/asm-generic/ioctl.h`](srctree/include/asm-generic/ioctl.h) 6 6 7 - #![allow(non_snake_case)] 7 + #![expect(non_snake_case)] 8 8 9 9 use crate::build_assert; 10 10
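The switch from `#![allow(non_snake_case)]` to `#![expect(non_snake_case)]` means the compiler will now also warn when the suppressed lint stops firing, so the suppression cannot silently go stale. A userspace sketch of the behaviour (Rust 1.81+; `_IOC_NR` modeled loosely on the C ioctl macro of the same name):

```rust
// Like `#[allow]`, this silences the lint; unlike `#[allow]`, rustc emits
// `unfulfilled_lint_expectations` if `non_snake_case` never fires here.
#[expect(non_snake_case)]
fn _IOC_NR(nr: u32) -> u32 {
    // Extract the low command-number byte, as the C macro does.
    nr & 0xff
}

fn main() {
    assert_eq!(_IOC_NR(0x1234), 0x34);
}
```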
rust/kernel/lib.rs (+7 -3)
··· 12 12 //! do so first instead of bypassing this crate. 13 13 14 14 #![no_std] 15 + #![feature(arbitrary_self_types)] 15 16 #![feature(coerce_unsized)] 16 17 #![feature(dispatch_from_dyn)] 17 - #![feature(new_uninit)] 18 - #![feature(receiver_trait)] 18 + #![feature(inline_const)] 19 + #![feature(lint_reasons)] 19 20 #![feature(unsize)] 20 21 21 22 // Ensure conditional compilation based on the kernel configuration works; ··· 26 25 27 26 // Allow proc-macros to refer to `::kernel` inside the `kernel` crate (this crate). 28 27 extern crate self as kernel; 28 + 29 + pub use ffi; 29 30 30 31 pub mod alloc; 31 32 #[cfg(CONFIG_BLOCK)] ··· 63 60 pub mod task; 64 61 pub mod time; 65 62 pub mod tracepoint; 63 + pub mod transmute; 66 64 pub mod types; 67 65 pub mod uaccess; 68 66 pub mod workqueue; ··· 94 90 95 91 /// Equivalent to `THIS_MODULE` in the C API. 96 92 /// 97 - /// C header: [`include/linux/export.h`](srctree/include/linux/export.h) 93 + /// C header: [`include/linux/init.h`](srctree/include/linux/init.h) 98 94 pub struct ThisModule(*mut bindings::module); 99 95 100 96 // SAFETY: `THIS_MODULE` may be used from all threads within a module.
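The `receiver_trait` to `arbitrary_self_types` migration concerns which types may appear as `self`. Stable Rust already accepts a fixed list (e.g. `Box<Self>`, `Rc<Self>`, `Pin<&mut Self>`); `arbitrary_self_types` extends that to custom smart pointers such as the kernel's `Arc`. A stable-Rust sketch of the existing behaviour, with an illustrative `Counter` type:

```rust
struct Counter(u32);

impl Counter {
    // `Box<Self>` is one of the receiver types stable Rust accepts today;
    // `arbitrary_self_types` generalizes this to user-defined pointers.
    fn into_value(self: Box<Self>) -> u32 {
        self.0
    }
}

fn main() {
    assert_eq!(Box::new(Counter(7)).into_value(), 7);
}
```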
rust/kernel/list.rs (+1)
··· 354 354 /// 355 355 /// `item` must not be in a different linked list (with the same id). 356 356 pub unsafe fn remove(&mut self, item: &T) -> Option<ListArc<T, ID>> { 357 + // SAFETY: TODO. 357 358 let mut item = unsafe { ListLinks::fields(T::view_links(item)) }; 358 359 // SAFETY: The user provided a reference, and reference are never dangling. 359 360 //
rust/kernel/list/arc.rs (-3)
··· 441 441 } 442 442 } 443 443 444 - // This is to allow [`ListArc`] (and variants) to be used as the type of `self`. 445 - impl<T, const ID: u64> core::ops::Receiver for ListArc<T, ID> where T: ListArcSafe<ID> + ?Sized {} 446 - 447 444 // This is to allow coercion from `ListArc<T>` to `ListArc<U>` if `T` can be converted to the 448 445 // dynamically-sized type (DST) `U`. 449 446 impl<T, U, const ID: u64> core::ops::CoerceUnsized<ListArc<U, ID>> for ListArc<T, ID>
rust/kernel/list/arc_field.rs (+1 -1)
··· 56 56 /// 57 57 /// The caller must have mutable access to the `ListArc<ID>` containing the struct with this 58 58 /// field for the duration of the returned reference. 59 - #[allow(clippy::mut_from_ref)] 59 + #[expect(clippy::mut_from_ref)] 60 60 pub unsafe fn assert_mut(&self) -> &mut T { 61 61 // SAFETY: The caller has exclusive access to the `ListArc`, so they also have exclusive 62 62 // access to this field.
rust/kernel/net/phy.rs (+8 -8)
··· 314 314 /// `phydev` must be passed by the corresponding callback in `phy_driver`. 315 315 unsafe extern "C" fn soft_reset_callback( 316 316 phydev: *mut bindings::phy_device, 317 - ) -> core::ffi::c_int { 317 + ) -> crate::ffi::c_int { 318 318 from_result(|| { 319 319 // SAFETY: This callback is called only in contexts 320 320 // where we hold `phy_device->lock`, so the accessors on ··· 328 328 /// # Safety 329 329 /// 330 330 /// `phydev` must be passed by the corresponding callback in `phy_driver`. 331 - unsafe extern "C" fn probe_callback(phydev: *mut bindings::phy_device) -> core::ffi::c_int { 331 + unsafe extern "C" fn probe_callback(phydev: *mut bindings::phy_device) -> crate::ffi::c_int { 332 332 from_result(|| { 333 333 // SAFETY: This callback is called only in contexts 334 334 // where we can exclusively access `phy_device` because ··· 345 345 /// `phydev` must be passed by the corresponding callback in `phy_driver`. 346 346 unsafe extern "C" fn get_features_callback( 347 347 phydev: *mut bindings::phy_device, 348 - ) -> core::ffi::c_int { 348 + ) -> crate::ffi::c_int { 349 349 from_result(|| { 350 350 // SAFETY: This callback is called only in contexts 351 351 // where we hold `phy_device->lock`, so the accessors on ··· 359 359 /// # Safety 360 360 /// 361 361 /// `phydev` must be passed by the corresponding callback in `phy_driver`. 362 - unsafe extern "C" fn suspend_callback(phydev: *mut bindings::phy_device) -> core::ffi::c_int { 362 + unsafe extern "C" fn suspend_callback(phydev: *mut bindings::phy_device) -> crate::ffi::c_int { 363 363 from_result(|| { 364 364 // SAFETY: The C core code ensures that the accessors on 365 365 // `Device` are okay to call even though `phy_device->lock` ··· 373 373 /// # Safety 374 374 /// 375 375 /// `phydev` must be passed by the corresponding callback in `phy_driver`. 
376 - unsafe extern "C" fn resume_callback(phydev: *mut bindings::phy_device) -> core::ffi::c_int { 376 + unsafe extern "C" fn resume_callback(phydev: *mut bindings::phy_device) -> crate::ffi::c_int { 377 377 from_result(|| { 378 378 // SAFETY: The C core code ensures that the accessors on 379 379 // `Device` are okay to call even though `phy_device->lock` ··· 389 389 /// `phydev` must be passed by the corresponding callback in `phy_driver`. 390 390 unsafe extern "C" fn config_aneg_callback( 391 391 phydev: *mut bindings::phy_device, 392 - ) -> core::ffi::c_int { 392 + ) -> crate::ffi::c_int { 393 393 from_result(|| { 394 394 // SAFETY: This callback is called only in contexts 395 395 // where we hold `phy_device->lock`, so the accessors on ··· 405 405 /// `phydev` must be passed by the corresponding callback in `phy_driver`. 406 406 unsafe extern "C" fn read_status_callback( 407 407 phydev: *mut bindings::phy_device, 408 - ) -> core::ffi::c_int { 408 + ) -> crate::ffi::c_int { 409 409 from_result(|| { 410 410 // SAFETY: This callback is called only in contexts 411 411 // where we hold `phy_device->lock`, so the accessors on ··· 421 421 /// `phydev` must be passed by the corresponding callback in `phy_driver`. 422 422 unsafe extern "C" fn match_phy_device_callback( 423 423 phydev: *mut bindings::phy_device, 424 - ) -> core::ffi::c_int { 424 + ) -> crate::ffi::c_int { 425 425 // SAFETY: This callback is called only in contexts 426 426 // where we hold `phy_device->lock`, so the accessors on 427 427 // `Device` are okay to call.
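These callbacks now take `crate::ffi::c_int` instead of `core::ffi::c_int`, so the kernel controls its own C type mapping (the actual remapping lands in a later cycle). A hypothetical userspace sketch of such an alias module; the kernel's real `ffi` crate differs:

```rust
mod ffi {
    // Illustrative mapping only: the point is that the aliases are owned
    // by the crate, not inherited from `core::ffi`'s platform mapping.
    pub type c_int = i32;
    pub type c_size_t = usize;
}

// Errno-style return convention, as produced by `from_result` above.
fn ret_errno(ok: bool) -> ffi::c_int {
    if ok { 0 } else { -22 } // -EINVAL
}

fn main() {
    assert_eq!(ret_errno(true), 0);
    assert_eq!(ret_errno(false), -22);
}
```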
rust/kernel/page.rs (+10)
··· 20 20 /// A bitmask that gives the page containing a given address. 21 21 pub const PAGE_MASK: usize = !(PAGE_SIZE - 1); 22 22 23 + /// Round up the given number to the next multiple of [`PAGE_SIZE`]. 24 + /// 25 + /// It is incorrect to pass an address where the next multiple of [`PAGE_SIZE`] doesn't fit in a 26 + /// [`usize`]. 27 + pub const fn page_align(addr: usize) -> usize { 28 + // Parentheses around `PAGE_SIZE - 1` to avoid triggering overflow sanitizers in the wrong 29 + // cases. 30 + (addr + (PAGE_SIZE - 1)) & PAGE_MASK 31 + } 32 + 23 33 /// A pointer to a page that owns the page allocation. 24 34 /// 25 35 /// # Invariants
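The new `page_align` uses the classic round-up-and-mask trick. A userspace sketch with `PAGE_SIZE` fixed at 4096 for illustration (the kernel value is configuration-dependent):

```rust
const PAGE_SIZE: usize = 4096;
const PAGE_MASK: usize = !(PAGE_SIZE - 1);

const fn page_align(addr: usize) -> usize {
    // Adding PAGE_SIZE - 1 pushes any unaligned address past the next
    // page boundary; masking then snaps it down to that boundary, while
    // already-aligned addresses are left unchanged.
    (addr + (PAGE_SIZE - 1)) & PAGE_MASK
}

fn main() {
    assert_eq!(page_align(0), 0);
    assert_eq!(page_align(1), 4096);
    assert_eq!(page_align(4096), 4096);
    assert_eq!(page_align(4097), 8192);
}
```

As the doc comment notes, the sum must not overflow `usize`; addresses near `usize::MAX` are out of contract.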
rust/kernel/prelude.rs (+1 -4)
··· 14 14 #[doc(no_inline)] 15 15 pub use core::pin::Pin; 16 16 17 - pub use crate::alloc::{box_ext::BoxExt, flags::*, vec_ext::VecExt}; 18 - 19 - #[doc(no_inline)] 20 - pub use alloc::{boxed::Box, vec::Vec}; 17 + pub use crate::alloc::{flags::*, Box, KBox, KVBox, KVVec, KVec, VBox, VVec, Vec}; 21 18 22 19 #[doc(no_inline)] 23 20 pub use macros::{module, pin_data, pinned_drop, vtable, Zeroable};
rust/kernel/print.rs (+4 -1)
··· 14 14 use crate::str::RawFormatter; 15 15 16 16 // Called from `vsprintf` with format specifier `%pA`. 17 + #[expect(clippy::missing_safety_doc)] 17 18 #[no_mangle] 18 19 unsafe extern "C" fn rust_fmt_argument( 19 20 buf: *mut c_char, ··· 24 23 use fmt::Write; 25 24 // SAFETY: The C contract guarantees that `buf` is valid if it's less than `end`. 26 25 let mut w = unsafe { RawFormatter::from_ptrs(buf.cast(), end.cast()) }; 26 + // SAFETY: TODO. 27 27 let _ = w.write_fmt(unsafe { *(ptr as *const fmt::Arguments<'_>) }); 28 28 w.pos().cast() 29 29 } ··· 104 102 ) { 105 103 // `_printk` does not seem to fail in any path. 106 104 #[cfg(CONFIG_PRINTK)] 105 + // SAFETY: TODO. 107 106 unsafe { 108 107 bindings::_printk( 109 108 format_string.as_ptr() as _, ··· 140 137 #[doc(hidden)] 141 138 #[cfg(not(testlib))] 142 139 #[macro_export] 143 - #[allow(clippy::crate_in_macro_def)] 140 + #[expect(clippy::crate_in_macro_def)] 144 141 macro_rules! print_macro ( 145 142 // The non-continuation cases (most of them, e.g. `INFO`). 146 143 ($format_string:path, false, $($arg:tt)+) => (
rust/kernel/rbtree.rs (+33 -25)
··· 7 7 //! Reference: <https://docs.kernel.org/core-api/rbtree.html> 8 8 9 9 use crate::{alloc::Flags, bindings, container_of, error::Result, prelude::*}; 10 - use alloc::boxed::Box; 11 10 use core::{ 12 11 cmp::{Ord, Ordering}, 13 12 marker::PhantomData, ··· 496 497 // but it is not observable. The loop invariant is still maintained. 497 498 498 499 // SAFETY: `this` is valid per the loop invariant. 499 - unsafe { drop(Box::from_raw(this.cast_mut())) }; 500 + unsafe { drop(KBox::from_raw(this.cast_mut())) }; 500 501 } 501 502 } 502 503 } ··· 763 764 // point to the links field of `Node<K, V>` objects. 764 765 let this = unsafe { container_of!(self.current.as_ptr(), Node<K, V>, links) }.cast_mut(); 765 766 // SAFETY: `this` is valid by the type invariants as described above. 766 - let node = unsafe { Box::from_raw(this) }; 767 + let node = unsafe { KBox::from_raw(this) }; 767 768 let node = RBTreeNode { node }; 768 769 // SAFETY: The reference to the tree used to create the cursor outlives the cursor, so 769 770 // the tree cannot change. By the tree invariant, all nodes are valid. ··· 808 809 // point to the links field of `Node<K, V>` objects. 809 810 let this = unsafe { container_of!(neighbor, Node<K, V>, links) }.cast_mut(); 810 811 // SAFETY: `this` is valid by the type invariants as described above. 811 - let node = unsafe { Box::from_raw(this) }; 812 + let node = unsafe { KBox::from_raw(this) }; 812 813 return Some(RBTreeNode { node }); 813 814 } 814 815 None ··· 883 884 NonNull::new(neighbor) 884 885 } 885 886 886 - /// SAFETY: 887 + /// # Safety 888 + /// 887 889 /// - `node` must be a valid pointer to a node in an [`RBTree`]. 888 890 /// - The caller has immutable access to `node` for the duration of 'b. 
889 891 unsafe fn to_key_value<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b V) { ··· 894 894 (k, unsafe { &*v }) 895 895 } 896 896 897 - /// SAFETY: 897 + /// # Safety 898 + /// 898 899 /// - `node` must be a valid pointer to a node in an [`RBTree`]. 899 900 /// - The caller has mutable access to `node` for the duration of 'b. 900 901 unsafe fn to_key_value_mut<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b mut V) { ··· 905 904 (k, unsafe { &mut *v }) 906 905 } 907 906 908 - /// SAFETY: 907 + /// # Safety 908 + /// 909 909 /// - `node` must be a valid pointer to a node in an [`RBTree`]. 910 910 /// - The caller has immutable access to the key for the duration of 'b. 911 911 unsafe fn to_key_value_raw<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, *mut V) { ··· 1037 1035 /// It contains the memory needed to hold a node that can be inserted into a red-black tree. One 1038 1036 /// can be obtained by directly allocating it ([`RBTreeNodeReservation::new`]). 1039 1037 pub struct RBTreeNodeReservation<K, V> { 1040 - node: Box<MaybeUninit<Node<K, V>>>, 1038 + node: KBox<MaybeUninit<Node<K, V>>>, 1041 1039 } 1042 1040 1043 1041 impl<K, V> RBTreeNodeReservation<K, V> { ··· 1045 1043 /// call to [`RBTree::insert`]. 1046 1044 pub fn new(flags: Flags) -> Result<RBTreeNodeReservation<K, V>> { 1047 1045 Ok(RBTreeNodeReservation { 1048 - node: <Box<_> as BoxExt<_>>::new_uninit(flags)?, 1046 + node: KBox::new_uninit(flags)?, 1049 1047 }) 1050 1048 } 1051 1049 } ··· 1061 1059 /// Initialises a node reservation. 1062 1060 /// 1063 1061 /// It then becomes an [`RBTreeNode`] that can be inserted into a tree. 1064 - pub fn into_node(mut self, key: K, value: V) -> RBTreeNode<K, V> { 1065 - self.node.write(Node { 1066 - key, 1067 - value, 1068 - links: bindings::rb_node::default(), 1069 - }); 1070 - // SAFETY: We just wrote to it. 
1071 - let node = unsafe { self.node.assume_init() }; 1062 + pub fn into_node(self, key: K, value: V) -> RBTreeNode<K, V> { 1063 + let node = KBox::write( 1064 + self.node, 1065 + Node { 1066 + key, 1067 + value, 1068 + links: bindings::rb_node::default(), 1069 + }, 1070 + ); 1072 1071 RBTreeNode { node } 1073 1072 } 1074 1073 } ··· 1079 1076 /// The node is fully initialised (with key and value) and can be inserted into a tree without any 1080 1077 /// extra allocations or failure paths. 1081 1078 pub struct RBTreeNode<K, V> { 1082 - node: Box<Node<K, V>>, 1079 + node: KBox<Node<K, V>>, 1083 1080 } 1084 1081 1085 1082 impl<K, V> RBTreeNode<K, V> { ··· 1091 1088 1092 1089 /// Get the key and value from inside the node. 1093 1090 pub fn to_key_value(self) -> (K, V) { 1094 - (self.node.key, self.node.value) 1091 + let node = KBox::into_inner(self.node); 1092 + 1093 + (node.key, node.value) 1095 1094 } 1096 1095 } 1097 1096 ··· 1115 1110 /// may be freed (but only for the key/value; memory for the node itself is kept for reuse). 1116 1111 pub fn into_reservation(self) -> RBTreeNodeReservation<K, V> { 1117 1112 RBTreeNodeReservation { 1118 - node: Box::drop_contents(self.node), 1113 + node: KBox::drop_contents(self.node), 1119 1114 } 1120 1115 } 1121 1116 } ··· 1166 1161 /// The `node` must have a key such that inserting it here does not break the ordering of this 1167 1162 /// [`RBTree`]. 1168 1163 fn insert(self, node: RBTreeNode<K, V>) -> &'a mut V { 1169 - let node = Box::into_raw(node.node); 1164 + let node = KBox::into_raw(node.node); 1170 1165 1171 1166 // SAFETY: `node` is valid at least until we call `Box::from_raw`, which only happens when 1172 1167 // the node is removed or replaced. ··· 1240 1235 // SAFETY: The node was a node in the tree, but we removed it, so we can convert it 1241 1236 // back into a box. 
1242 1237 node: unsafe { 1243 - Box::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) 1238 + KBox::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) 1244 1239 }, 1245 1240 } 1246 1241 } 1247 1242 1248 1243 /// Takes the value of the entry out of the map, and returns it. 1249 1244 pub fn remove(self) -> V { 1250 - self.remove_node().node.value 1245 + let rb_node = self.remove_node(); 1246 + let node = KBox::into_inner(rb_node.node); 1247 + 1248 + node.value 1251 1249 } 1252 1250 1253 1251 /// Swap the current node for the provided node. 1254 1252 /// 1255 1253 /// The key of both nodes must be equal. 1256 1254 fn replace(self, node: RBTreeNode<K, V>) -> RBTreeNode<K, V> { 1257 - let node = Box::into_raw(node.node); 1255 + let node = KBox::into_raw(node.node); 1258 1256 1259 1257 // SAFETY: `node` is valid at least until we call `Box::from_raw`, which only happens when 1260 1258 // the node is removed or replaced. ··· 1273 1265 // - `self.node_ptr` produces a valid pointer to a node in the tree. 1274 1266 // - Now that we removed this entry from the tree, we can convert the node to a box. 1275 1267 let old_node = 1276 - unsafe { Box::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) }; 1268 + unsafe { KBox::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) }; 1277 1269 1278 1270 RBTreeNode { node: old_node } 1279 1271 }
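Several hunks above replace direct field moves out of a box with an explicit `KBox::into_inner`, since the kernel's custom `KBox` does not get `Box`'s special move-out-of-deref treatment. With std's `Box` the same move is written as a plain deref; a userspace sketch with an illustrative `Node`:

```rust
struct Node<K, V> {
    key: K,
    value: V,
}

fn to_key_value<K, V>(node: Box<Node<K, V>>) -> (K, V) {
    // `*node` moves the `Node` out of its heap allocation; this is the
    // std counterpart of the kernel's `KBox::into_inner`.
    let node = *node;
    (node.key, node.value)
}

fn main() {
    let n = Box::new(Node { key: "a", value: 1 });
    assert_eq!(to_key_value(n), ("a", 1));
}
```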
+18 -20
rust/kernel/std_vendor.rs
··· 1 1 // SPDX-License-Identifier: Apache-2.0 OR MIT 2 2 3 + //! Rust standard library vendored code. 4 + //! 3 5 //! The contents of this file come from the Rust standard library, hosted in 4 6 //! the <https://github.com/rust-lang/rust> repository, licensed under 5 7 //! "Apache-2.0 OR MIT" and adapted for kernel use. For copyright details, ··· 16 14 /// 17 15 /// ```rust 18 16 /// let a = 2; 19 - /// # #[allow(clippy::dbg_macro)] 17 + /// # #[expect(clippy::disallowed_macros)] 20 18 /// let b = dbg!(a * 2) + 1; 21 - /// // ^-- prints: [src/main.rs:2] a * 2 = 4 19 + /// // ^-- prints: [src/main.rs:3:9] a * 2 = 4 22 20 /// assert_eq!(b, 5); 23 21 /// ``` 24 22 /// ··· 54 52 /// With a method call: 55 53 /// 56 54 /// ```rust 57 - /// # #[allow(clippy::dbg_macro)] 55 + /// # #[expect(clippy::disallowed_macros)] 58 56 /// fn foo(n: usize) { 59 57 /// if dbg!(n.checked_sub(4)).is_some() { 60 58 /// // ... ··· 67 65 /// This prints to the kernel log: 68 66 /// 69 67 /// ```text,ignore 70 - /// [src/main.rs:4] n.checked_sub(4) = None 68 + /// [src/main.rs:3:8] n.checked_sub(4) = None 71 69 /// ``` 72 70 /// 73 71 /// Naive factorial implementation: 74 72 /// 75 73 /// ```rust 76 - /// # #[allow(clippy::dbg_macro)] 77 - /// # { 74 + /// # #![expect(clippy::disallowed_macros)] 78 75 /// fn factorial(n: u32) -> u32 { 79 76 /// if dbg!(n <= 1) { 80 77 /// dbg!(1) ··· 83 82 /// } 84 83 /// 85 84 /// dbg!(factorial(4)); 86 - /// # } 87 85 /// ``` 88 86 /// 89 87 /// This prints to the kernel log: 90 88 /// 91 89 /// ```text,ignore 92 - /// [src/main.rs:3] n <= 1 = false 93 - /// [src/main.rs:3] n <= 1 = false 94 - /// [src/main.rs:3] n <= 1 = false 95 - /// [src/main.rs:3] n <= 1 = true 96 - /// [src/main.rs:4] 1 = 1 97 - /// [src/main.rs:5] n * factorial(n - 1) = 2 98 - /// [src/main.rs:5] n * factorial(n - 1) = 6 99 - /// [src/main.rs:5] n * factorial(n - 1) = 24 100 - /// [src/main.rs:11] factorial(4) = 24 90 + /// [src/main.rs:3:8] n <= 1 = false 91 + /// 
[src/main.rs:3:8] n <= 1 = false 92 + /// [src/main.rs:3:8] n <= 1 = false 93 + /// [src/main.rs:3:8] n <= 1 = true 94 + /// [src/main.rs:4:9] 1 = 1 95 + /// [src/main.rs:5:9] n * factorial(n - 1) = 2 96 + /// [src/main.rs:5:9] n * factorial(n - 1) = 6 97 + /// [src/main.rs:5:9] n * factorial(n - 1) = 24 98 + /// [src/main.rs:11:1] factorial(4) = 24 101 99 /// ``` 102 100 /// 103 101 /// The `dbg!(..)` macro moves the input: ··· 118 118 /// a tuple (and return it, too): 119 119 /// 120 120 /// ``` 121 - /// # #[allow(clippy::dbg_macro)] 121 + /// # #![expect(clippy::disallowed_macros)] 122 122 /// assert_eq!(dbg!(1usize, 2u32), (1, 2)); 123 123 /// ``` 124 124 /// ··· 127 127 /// invocations. You can use a 1-tuple directly if you need one: 128 128 /// 129 129 /// ``` 130 - /// # #[allow(clippy::dbg_macro)] 131 - /// # { 130 + /// # #![expect(clippy::disallowed_macros)] 132 131 /// assert_eq!(1, dbg!(1u32,)); // trailing comma ignored 133 132 /// assert_eq!((1,), dbg!((1u32,))); // 1-tuple 134 - /// # } 135 133 /// ``` 136 134 /// 137 135 /// [`std::dbg`]: https://doc.rust-lang.org/std/macro.dbg.html
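The updated `dbg!` documentation reflects two things: the suppression attribute becomes `#[expect(clippy::disallowed_macros)]`, which additionally warns if the expected lint stops firing, and the printed location gains a column, in `[file:line:column]` form. The second point follows directly from how a `dbg!`-style macro is built; a reduced sketch (`my_dbg` is illustrative, not the kernel macro):

```rust
// A dbg!-style macro evaluates its argument once, prints it with the
// file!/line!/column! of the invocation (hence "[src/main.rs:3:9]"-style
// output), and returns the value so it can be used inside expressions.
macro_rules! my_dbg {
    ($val:expr) => {{
        let tmp = $val;
        println!(
            "[{}:{}:{}] {} = {:?}",
            file!(),
            line!(),
            column!(),
            stringify!($val),
            tmp
        );
        tmp
    }};
}

fn double_plus_one(a: i32) -> i32 {
    // Works as an expression because the macro returns its input.
    my_dbg!(a * 2) + 1
}
```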
+33 -13
rust/kernel/str.rs
··· 2 2 3 3 //! String representations. 4 4 5 - use crate::alloc::{flags::*, vec_ext::VecExt, AllocError}; 6 - use alloc::vec::Vec; 5 + use crate::alloc::{flags::*, AllocError, KVec}; 7 6 use core::fmt::{self, Write}; 8 7 use core::ops::{self, Deref, DerefMut, Index}; 9 8 ··· 161 162 /// Returns the length of this string with `NUL`. 162 163 #[inline] 163 164 pub const fn len_with_nul(&self) -> usize { 164 - // SAFETY: This is one of the invariant of `CStr`. 165 - // We add a `unreachable_unchecked` here to hint the optimizer that 166 - // the value returned from this function is non-zero. 167 165 if self.0.is_empty() { 166 + // SAFETY: This is one of the invariant of `CStr`. 167 + // We add a `unreachable_unchecked` here to hint the optimizer that 168 + // the value returned from this function is non-zero. 168 169 unsafe { core::hint::unreachable_unchecked() }; 169 170 } 170 171 self.0.len() ··· 184 185 /// last at least `'a`. When `CStr` is alive, the memory pointed by `ptr` 185 186 /// must not be mutated. 186 187 #[inline] 187 - pub unsafe fn from_char_ptr<'a>(ptr: *const core::ffi::c_char) -> &'a Self { 188 + pub unsafe fn from_char_ptr<'a>(ptr: *const crate::ffi::c_char) -> &'a Self { 188 189 // SAFETY: The safety precondition guarantees `ptr` is a valid pointer 189 190 // to a `NUL`-terminated C string. 190 191 let len = unsafe { bindings::strlen(ptr) } + 1; ··· 247 248 248 249 /// Returns a C pointer to the string. 249 250 #[inline] 250 - pub const fn as_char_ptr(&self) -> *const core::ffi::c_char { 251 + pub const fn as_char_ptr(&self) -> *const crate::ffi::c_char { 251 252 self.0.as_ptr() as _ 252 253 } 253 254 ··· 300 301 /// ``` 301 302 #[inline] 302 303 pub unsafe fn as_str_unchecked(&self) -> &str { 304 + // SAFETY: TODO. 
303 305 unsafe { core::str::from_utf8_unchecked(self.as_bytes()) } 304 306 } 305 307 ··· 524 524 #[cfg(test)] 525 525 mod tests { 526 526 use super::*; 527 - use alloc::format; 527 + 528 + struct String(CString); 529 + 530 + impl String { 531 + fn from_fmt(args: fmt::Arguments<'_>) -> Self { 532 + String(CString::try_from_fmt(args).unwrap()) 533 + } 534 + } 535 + 536 + impl Deref for String { 537 + type Target = str; 538 + 539 + fn deref(&self) -> &str { 540 + self.0.to_str().unwrap() 541 + } 542 + } 543 + 544 + macro_rules! format { 545 + ($($f:tt)*) => ({ 546 + &*String::from_fmt(kernel::fmt!($($f)*)) 547 + }) 548 + } 528 549 529 550 const ALL_ASCII_CHARS: &'static str = 530 551 "\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09\\x0a\\x0b\\x0c\\x0d\\x0e\\x0f\ ··· 811 790 /// assert_eq!(s.is_ok(), false); 812 791 /// ``` 813 792 pub struct CString { 814 - buf: Vec<u8>, 793 + buf: KVec<u8>, 815 794 } 816 795 817 796 impl CString { ··· 824 803 let size = f.bytes_written(); 825 804 826 805 // Allocate a vector with the required number of bytes, and write to it. 827 - let mut buf = <Vec<_> as VecExt<_>>::with_capacity(size, GFP_KERNEL)?; 806 + let mut buf = KVec::with_capacity(size, GFP_KERNEL)?; 828 807 // SAFETY: The buffer stored in `buf` is at least of size `size` and is valid for writes. 829 808 let mut f = unsafe { Formatter::from_buffer(buf.as_mut_ptr(), size) }; 830 809 f.write_fmt(args)?; ··· 871 850 type Error = AllocError; 872 851 873 852 fn try_from(cstr: &'a CStr) -> Result<CString, AllocError> { 874 - let mut buf = Vec::new(); 853 + let mut buf = KVec::new(); 875 854 876 - <Vec<_> as VecExt<_>>::extend_from_slice(&mut buf, cstr.as_bytes_with_nul(), GFP_KERNEL) 877 - .map_err(|_| AllocError)?; 855 + buf.extend_from_slice(cstr.as_bytes_with_nul(), GFP_KERNEL)?; 878 856 879 857 // INVARIANT: The `CStr` and `CString` types have the same invariants for 880 858 // the string data, and we copied it over without changes.
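In the `str.rs` hunks, `CString` moves from `Vec`/`VecExt` to the kernel's `KVec`, and the test module grows a local `format!` macro built on `CString::try_from_fmt`. The formatting flow itself can be sketched in userspace with `std`'s `Vec` standing in for `KVec` (no allocation flags here; `ByteWriter` and `try_from_fmt` below are an illustrative reduction, not the kernel implementation):

```rust
use std::fmt::{self, Write};

// Sketch of the CString::try_from_fmt flow: format into a growable byte
// buffer, reject interior NUL bytes, then append the trailing NUL.
struct ByteWriter(Vec<u8>);

impl Write for ByteWriter {
    fn write_str(&mut self, s: &str) -> fmt::Result {
        self.0.extend_from_slice(s.as_bytes());
        Ok(())
    }
}

fn try_from_fmt(args: fmt::Arguments<'_>) -> Result<Vec<u8>, ()> {
    let mut w = ByteWriter(Vec::new());
    w.write_fmt(args).map_err(|_| ())?;
    if w.0.contains(&0) {
        // An interior NUL would silently truncate the string for C callers.
        return Err(());
    }
    let mut buf = w.0;
    buf.push(0); // trailing NUL, part of the CString invariant
    Ok(buf)
}
```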
+1
rust/kernel/sync.rs
··· 15 15 16 16 pub use arc::{Arc, ArcBorrow, UniqueArc}; 17 17 pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult}; 18 + pub use lock::global::{global_lock, GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy}; 18 19 pub use lock::mutex::{new_mutex, Mutex}; 19 20 pub use lock::spinlock::{new_spinlock, SpinLock}; 20 21 pub use locked_by::LockedBy;
+12 -19
rust/kernel/sync/arc.rs
··· 17 17 //! [`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html 18 18 19 19 use crate::{ 20 - alloc::{box_ext::BoxExt, AllocError, Flags}, 20 + alloc::{AllocError, Flags, KBox}, 21 21 bindings, 22 22 init::{self, InPlaceInit, Init, PinInit}, 23 23 try_init, 24 24 types::{ForeignOwnable, Opaque}, 25 25 }; 26 - use alloc::boxed::Box; 27 26 use core::{ 28 27 alloc::Layout, 29 28 fmt, ··· 170 171 } 171 172 } 172 173 173 - // This is to allow [`Arc`] (and variants) to be used as the type of `self`. 174 - impl<T: ?Sized> core::ops::Receiver for Arc<T> {} 175 - 176 174 // This is to allow coercion from `Arc<T>` to `Arc<U>` if `T` can be converted to the 177 175 // dynamically-sized type (DST) `U`. 178 176 impl<T: ?Sized + Unsize<U>, U: ?Sized> core::ops::CoerceUnsized<Arc<U>> for Arc<T> {} ··· 200 204 data: contents, 201 205 }; 202 206 203 - let inner = <Box<_> as BoxExt<_>>::new(value, flags)?; 207 + let inner = KBox::new(value, flags)?; 204 208 205 209 // SAFETY: We just created `inner` with a reference count of 1, which is owned by the new 206 210 // `Arc` object. 207 - Ok(unsafe { Self::from_inner(Box::leak(inner).into()) }) 211 + Ok(unsafe { Self::from_inner(KBox::leak(inner).into()) }) 208 212 } 209 213 } 210 214 ··· 332 336 impl<T: 'static> ForeignOwnable for Arc<T> { 333 337 type Borrowed<'a> = ArcBorrow<'a, T>; 334 338 335 - fn into_foreign(self) -> *const core::ffi::c_void { 339 + fn into_foreign(self) -> *const crate::ffi::c_void { 336 340 ManuallyDrop::new(self).ptr.as_ptr() as _ 337 341 } 338 342 339 - unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> ArcBorrow<'a, T> { 340 - // SAFETY: By the safety requirement of this function, we know that `ptr` came from 343 + unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> ArcBorrow<'a, T> { 344 + // By the safety requirement of this function, we know that `ptr` came from 341 345 // a previous call to `Arc::into_foreign`. 
342 346 let inner = NonNull::new(ptr as *mut ArcInner<T>).unwrap(); 343 347 ··· 346 350 unsafe { ArcBorrow::new(inner) } 347 351 } 348 352 349 - unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self { 353 + unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self { 350 354 // SAFETY: By the safety requirement of this function, we know that `ptr` came from 351 355 // a previous call to `Arc::into_foreign`, which guarantees that `ptr` is valid and 352 356 // holds a reference count increment that is transferrable to us. ··· 397 401 if is_zero { 398 402 // The count reached zero, we must free the memory. 399 403 // 400 - // SAFETY: The pointer was initialised from the result of `Box::leak`. 401 - unsafe { drop(Box::from_raw(self.ptr.as_ptr())) }; 404 + // SAFETY: The pointer was initialised from the result of `KBox::leak`. 405 + unsafe { drop(KBox::from_raw(self.ptr.as_ptr())) }; 402 406 } 403 407 } 404 408 } ··· 475 479 inner: NonNull<ArcInner<T>>, 476 480 _p: PhantomData<&'a ()>, 477 481 } 478 - 479 - // This is to allow [`ArcBorrow`] (and variants) to be used as the type of `self`. 480 - impl<T: ?Sized> core::ops::Receiver for ArcBorrow<'_, T> {} 481 482 482 483 // This is to allow `ArcBorrow<U>` to be dispatched on when `ArcBorrow<T>` can be coerced into 483 484 // `ArcBorrow<U>`. ··· 640 647 /// Tries to allocate a new [`UniqueArc`] instance whose contents are not initialised yet. 641 648 pub fn new_uninit(flags: Flags) -> Result<UniqueArc<MaybeUninit<T>>, AllocError> { 642 649 // INVARIANT: The refcount is initialised to a non-zero value. 643 - let inner = Box::try_init::<AllocError>( 650 + let inner = KBox::try_init::<AllocError>( 644 651 try_init!(ArcInner { 645 652 // SAFETY: There are no safety requirements for this FFI call. 646 653 refcount: Opaque::new(unsafe { bindings::REFCOUNT_INIT(1) }), ··· 650 657 )?; 651 658 Ok(UniqueArc { 652 659 // INVARIANT: The newly-created object has a refcount of 1. 
653 - // SAFETY: The pointer from the `Box` is valid. 654 - inner: unsafe { Arc::from_inner(Box::leak(inner).into()) }, 660 + // SAFETY: The pointer from the `KBox` is valid. 661 + inner: unsafe { Arc::from_inner(KBox::leak(inner).into()) }, 655 662 }) 656 663 } 657 664 }
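Part of the `arc.rs` diff removes the `core::ops::Receiver` impls for `Arc` and `ArcBorrow` as the kernel migrates toward `arbitrary_self_types`. Both features serve the same end: methods whose `self` type is the smart pointer rather than a reference. Stable Rust already allows this for `std::sync::Arc`, which makes for a small illustration (the kernel's own `Arc` is a distinct type and needs the unstable feature to do the same):

```rust
use std::sync::Arc;

struct Counter {
    value: u32,
}

impl Counter {
    // `self: Arc<Self>` makes the smart pointer itself the receiver, the
    // kind of signature the kernel wants for its own Arc type.
    fn value_via_arc(self: Arc<Self>) -> u32 {
        self.value
    }
}
```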
+2
rust/kernel/sync/arc/std_vendor.rs
··· 1 1 // SPDX-License-Identifier: Apache-2.0 OR MIT 2 2 3 + //! Rust standard library vendored code. 4 + //! 3 5 //! The contents of this file come from the Rust standard library, hosted in 4 6 //! the <https://github.com/rust-lang/rust> repository, licensed under 5 7 //! "Apache-2.0 OR MIT" and adapted for kernel use. For copyright details,
+3 -4
rust/kernel/sync/condvar.rs
··· 7 7 8 8 use super::{lock::Backend, lock::Guard, LockClassKey}; 9 9 use crate::{ 10 + ffi::{c_int, c_long}, 10 11 init::PinInit, 11 12 pin_init, 12 13 str::CStr, ··· 15 14 time::Jiffies, 16 15 types::Opaque, 17 16 }; 18 - use core::ffi::{c_int, c_long}; 19 17 use core::marker::PhantomPinned; 20 18 use core::ptr; 21 19 use macros::pin_data; ··· 70 70 /// } 71 71 /// 72 72 /// /// Allocates a new boxed `Example`. 73 - /// fn new_example() -> Result<Pin<Box<Example>>> { 74 - /// Box::pin_init(pin_init!(Example { 73 + /// fn new_example() -> Result<Pin<KBox<Example>>> { 74 + /// KBox::pin_init(pin_init!(Example { 75 75 /// value <- new_mutex!(0), 76 76 /// value_changed <- new_condvar!(), 77 77 /// }), GFP_KERNEL) ··· 93 93 } 94 94 95 95 // SAFETY: `CondVar` only uses a `struct wait_queue_head`, which is safe to use on any thread. 96 - #[allow(clippy::non_send_fields_in_send_ty)] 97 96 unsafe impl Send for CondVar {} 98 97 99 98 // SAFETY: `CondVar` only uses a `struct wait_queue_head`, which is safe to use on multiple threads
+23 -4
rust/kernel/sync/lock.rs
··· 18 18 pub mod mutex; 19 19 pub mod spinlock; 20 20 21 + pub(super) mod global; 22 + pub use global::{GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy}; 23 + 21 24 /// The "backend" of a lock. 22 25 /// 23 26 /// It is the actual implementation of the lock, without the need to repeat patterns used in all ··· 54 51 /// remain valid for read indefinitely. 55 52 unsafe fn init( 56 53 ptr: *mut Self::State, 57 - name: *const core::ffi::c_char, 54 + name: *const crate::ffi::c_char, 58 55 key: *mut bindings::lock_class_key, 59 56 ); 60 57 ··· 65 62 /// Callers must ensure that [`Backend::init`] has been previously called. 66 63 #[must_use] 67 64 unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState; 65 + 66 + /// Tries to acquire the lock. 67 + /// 68 + /// # Safety 69 + /// 70 + /// Callers must ensure that [`Backend::init`] has been previously called. 71 + unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState>; 68 72 69 73 /// Releases the lock, giving up its ownership. 70 74 /// ··· 143 133 // SAFETY: The lock was just acquired. 144 134 unsafe { Guard::new(self, state) } 145 135 } 136 + 137 + /// Tries to acquire the lock. 138 + /// 139 + /// Returns a guard that can be used to access the data protected by the lock if successful. 140 + pub fn try_lock(&self) -> Option<Guard<'_, T, B>> { 141 + // SAFETY: The constructor of the type calls `init`, so the existence of the object proves 142 + // that `init` was called. 143 + unsafe { B::try_lock(self.state.get()).map(|state| Guard::new(self, state)) } 144 + } 146 145 } 147 146 148 147 /// A lock guard. ··· 174 155 // SAFETY: The caller owns the lock, so it is safe to unlock it. 175 156 unsafe { B::unlock(self.lock.state.get(), &self.state) }; 176 157 177 - // SAFETY: The lock was just unlocked above and is being relocked now. 
178 - let _relock = 179 - ScopeGuard::new(|| unsafe { B::relock(self.lock.state.get(), &mut self.state) }); 158 + let _relock = ScopeGuard::new(|| 159 + // SAFETY: The lock was just unlocked above and is being relocked now. 160 + unsafe { B::relock(self.lock.state.get(), &mut self.state) }); 180 161 181 162 cb() 182 163 }
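The new `Backend::try_lock` method and the `Lock::try_lock` wrapper add a fallible acquire path: `Some(state)` when the lock was taken, `None` when it was contended, never blocking. The contract can be sketched outside the kernel with an atomic flag standing in for the C lock primitive (`RawLock` is illustrative, not the kernel backend):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Stand-in for a lock backend: try_lock either acquires and returns the
// guard state (here just `()`) or returns None without blocking.
struct RawLock {
    locked: AtomicBool,
}

impl RawLock {
    const fn new() -> Self {
        RawLock {
            locked: AtomicBool::new(false),
        }
    }

    fn try_lock(&self) -> Option<()> {
        // Succeeds only when the flag was still `false` (lock free).
        self.locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .ok()
            .map(|_| ())
    }

    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}
```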
+301
rust/kernel/sync/lock/global.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + // Copyright (C) 2024 Google LLC. 4 + 5 + //! Support for defining statics containing locks. 6 + 7 + use crate::{ 8 + str::CStr, 9 + sync::lock::{Backend, Guard, Lock}, 10 + sync::{LockClassKey, LockedBy}, 11 + types::Opaque, 12 + }; 13 + use core::{ 14 + cell::UnsafeCell, 15 + marker::{PhantomData, PhantomPinned}, 16 + }; 17 + 18 + /// Trait implemented for marker types for global locks. 19 + /// 20 + /// See [`global_lock!`] for examples. 21 + pub trait GlobalLockBackend { 22 + /// The name for this global lock. 23 + const NAME: &'static CStr; 24 + /// Item type stored in this global lock. 25 + type Item: 'static; 26 + /// The backend used for this global lock. 27 + type Backend: Backend + 'static; 28 + /// The class for this global lock. 29 + fn get_lock_class() -> &'static LockClassKey; 30 + } 31 + 32 + /// Type used for global locks. 33 + /// 34 + /// See [`global_lock!`] for examples. 35 + pub struct GlobalLock<B: GlobalLockBackend> { 36 + inner: Lock<B::Item, B::Backend>, 37 + } 38 + 39 + impl<B: GlobalLockBackend> GlobalLock<B> { 40 + /// Creates a global lock. 41 + /// 42 + /// # Safety 43 + /// 44 + /// * Before any other method on this lock is called, [`Self::init`] must be called. 45 + /// * The type `B` must not be used with any other lock. 46 + pub const unsafe fn new(data: B::Item) -> Self { 47 + Self { 48 + inner: Lock { 49 + state: Opaque::uninit(), 50 + data: UnsafeCell::new(data), 51 + _pin: PhantomPinned, 52 + }, 53 + } 54 + } 55 + 56 + /// Initializes a global lock. 57 + /// 58 + /// # Safety 59 + /// 60 + /// Must not be called more than once on a given lock. 61 + pub unsafe fn init(&'static self) { 62 + // SAFETY: The pointer to `state` is valid for the duration of this call, and both `name` 63 + // and `key` are valid indefinitely. The `state` is pinned since we have a `'static` 64 + // reference to `self`. 
65 + // 66 + // We have exclusive access to the `state` since the caller of `new` promised to call 67 + // `init` before using any other methods. As `init` can only be called once, all other 68 + // uses of this lock must happen after this call. 69 + unsafe { 70 + B::Backend::init( 71 + self.inner.state.get(), 72 + B::NAME.as_char_ptr(), 73 + B::get_lock_class().as_ptr(), 74 + ) 75 + } 76 + } 77 + 78 + /// Lock this global lock. 79 + pub fn lock(&'static self) -> GlobalGuard<B> { 80 + GlobalGuard { 81 + inner: self.inner.lock(), 82 + } 83 + } 84 + 85 + /// Try to lock this global lock. 86 + pub fn try_lock(&'static self) -> Option<GlobalGuard<B>> { 87 + Some(GlobalGuard { 88 + inner: self.inner.try_lock()?, 89 + }) 90 + } 91 + } 92 + 93 + /// A guard for a [`GlobalLock`]. 94 + /// 95 + /// See [`global_lock!`] for examples. 96 + pub struct GlobalGuard<B: GlobalLockBackend> { 97 + inner: Guard<'static, B::Item, B::Backend>, 98 + } 99 + 100 + impl<B: GlobalLockBackend> core::ops::Deref for GlobalGuard<B> { 101 + type Target = B::Item; 102 + 103 + fn deref(&self) -> &Self::Target { 104 + &self.inner 105 + } 106 + } 107 + 108 + impl<B: GlobalLockBackend> core::ops::DerefMut for GlobalGuard<B> { 109 + fn deref_mut(&mut self) -> &mut Self::Target { 110 + &mut self.inner 111 + } 112 + } 113 + 114 + /// A version of [`LockedBy`] for a [`GlobalLock`]. 115 + /// 116 + /// See [`global_lock!`] for examples. 117 + pub struct GlobalLockedBy<T: ?Sized, B: GlobalLockBackend> { 118 + _backend: PhantomData<B>, 119 + value: UnsafeCell<T>, 120 + } 121 + 122 + // SAFETY: The same thread-safety rules as `LockedBy` apply to `GlobalLockedBy`. 123 + unsafe impl<T, B> Send for GlobalLockedBy<T, B> 124 + where 125 + T: ?Sized, 126 + B: GlobalLockBackend, 127 + LockedBy<T, B::Item>: Send, 128 + { 129 + } 130 + 131 + // SAFETY: The same thread-safety rules as `LockedBy` apply to `GlobalLockedBy`. 
132 + unsafe impl<T, B> Sync for GlobalLockedBy<T, B> 133 + where 134 + T: ?Sized, 135 + B: GlobalLockBackend, 136 + LockedBy<T, B::Item>: Sync, 137 + { 138 + } 139 + 140 + impl<T, B: GlobalLockBackend> GlobalLockedBy<T, B> { 141 + /// Create a new [`GlobalLockedBy`]. 142 + /// 143 + /// The provided value will be protected by the global lock indicated by `B`. 144 + pub fn new(val: T) -> Self { 145 + Self { 146 + value: UnsafeCell::new(val), 147 + _backend: PhantomData, 148 + } 149 + } 150 + } 151 + 152 + impl<T: ?Sized, B: GlobalLockBackend> GlobalLockedBy<T, B> { 153 + /// Access the value immutably. 154 + /// 155 + /// The caller must prove shared access to the lock. 156 + pub fn as_ref<'a>(&'a self, _guard: &'a GlobalGuard<B>) -> &'a T { 157 + // SAFETY: The lock is globally unique, so there can only be one guard. 158 + unsafe { &*self.value.get() } 159 + } 160 + 161 + /// Access the value mutably. 162 + /// 163 + /// The caller must prove shared exclusive to the lock. 164 + pub fn as_mut<'a>(&'a self, _guard: &'a mut GlobalGuard<B>) -> &'a mut T { 165 + // SAFETY: The lock is globally unique, so there can only be one guard. 166 + unsafe { &mut *self.value.get() } 167 + } 168 + 169 + /// Access the value mutably directly. 170 + /// 171 + /// The caller has exclusive access to this `GlobalLockedBy`, so they do not need to hold the 172 + /// lock. 173 + pub fn get_mut(&mut self) -> &mut T { 174 + self.value.get_mut() 175 + } 176 + } 177 + 178 + /// Defines a global lock. 179 + /// 180 + /// The global mutex must be initialized before first use. Usually this is done by calling 181 + /// [`GlobalLock::init`] in the module initializer. 182 + /// 183 + /// # Examples 184 + /// 185 + /// A global counter: 186 + /// 187 + /// ``` 188 + /// # mod ex { 189 + /// # use kernel::prelude::*; 190 + /// kernel::sync::global_lock! { 191 + /// // SAFETY: Initialized in module initializer before first use. 
192 + /// unsafe(uninit) static MY_COUNTER: Mutex<u32> = 0; 193 + /// } 194 + /// 195 + /// fn increment_counter() -> u32 { 196 + /// let mut guard = MY_COUNTER.lock(); 197 + /// *guard += 1; 198 + /// *guard 199 + /// } 200 + /// 201 + /// impl kernel::Module for MyModule { 202 + /// fn init(_module: &'static ThisModule) -> Result<Self> { 203 + /// // SAFETY: Called exactly once. 204 + /// unsafe { MY_COUNTER.init() }; 205 + /// 206 + /// Ok(MyModule {}) 207 + /// } 208 + /// } 209 + /// # struct MyModule {} 210 + /// # } 211 + /// ``` 212 + /// 213 + /// A global mutex used to protect all instances of a given struct: 214 + /// 215 + /// ``` 216 + /// # mod ex { 217 + /// # use kernel::prelude::*; 218 + /// use kernel::sync::{GlobalGuard, GlobalLockedBy}; 219 + /// 220 + /// kernel::sync::global_lock! { 221 + /// // SAFETY: Initialized in module initializer before first use. 222 + /// unsafe(uninit) static MY_MUTEX: Mutex<()> = (); 223 + /// } 224 + /// 225 + /// /// All instances of this struct are protected by `MY_MUTEX`. 226 + /// struct MyStruct { 227 + /// my_counter: GlobalLockedBy<u32, MY_MUTEX>, 228 + /// } 229 + /// 230 + /// impl MyStruct { 231 + /// /// Increment the counter in this instance. 232 + /// /// 233 + /// /// The caller must hold the `MY_MUTEX` mutex. 234 + /// fn increment(&self, guard: &mut GlobalGuard<MY_MUTEX>) -> u32 { 235 + /// let my_counter = self.my_counter.as_mut(guard); 236 + /// *my_counter += 1; 237 + /// *my_counter 238 + /// } 239 + /// } 240 + /// 241 + /// impl kernel::Module for MyModule { 242 + /// fn init(_module: &'static ThisModule) -> Result<Self> { 243 + /// // SAFETY: Called exactly once. 244 + /// unsafe { MY_MUTEX.init() }; 245 + /// 246 + /// Ok(MyModule {}) 247 + /// } 248 + /// } 249 + /// # struct MyModule {} 250 + /// # } 251 + /// ``` 252 + #[macro_export] 253 + macro_rules! 
global_lock { 254 + { 255 + $(#[$meta:meta])* $pub:vis 256 + unsafe(uninit) static $name:ident: $kind:ident<$valuety:ty> = $value:expr; 257 + } => { 258 + #[doc = ::core::concat!( 259 + "Backend type used by [`", 260 + ::core::stringify!($name), 261 + "`](static@", 262 + ::core::stringify!($name), 263 + ")." 264 + )] 265 + #[allow(non_camel_case_types, unreachable_pub)] 266 + $pub enum $name {} 267 + 268 + impl $crate::sync::lock::GlobalLockBackend for $name { 269 + const NAME: &'static $crate::str::CStr = $crate::c_str!(::core::stringify!($name)); 270 + type Item = $valuety; 271 + type Backend = $crate::global_lock_inner!(backend $kind); 272 + 273 + fn get_lock_class() -> &'static $crate::sync::LockClassKey { 274 + $crate::static_lock_class!() 275 + } 276 + } 277 + 278 + $(#[$meta])* 279 + $pub static $name: $crate::sync::lock::GlobalLock<$name> = { 280 + // Defined here to be outside the unsafe scope. 281 + let init: $valuety = $value; 282 + 283 + // SAFETY: 284 + // * The user of this macro promises to initialize the macro before use. 285 + // * We are only generating one static with this backend type. 286 + unsafe { $crate::sync::lock::GlobalLock::new(init) } 287 + }; 288 + }; 289 + } 290 + pub use global_lock; 291 + 292 + #[doc(hidden)] 293 + #[macro_export] 294 + macro_rules! global_lock_inner { 295 + (backend Mutex) => { 296 + $crate::sync::lock::mutex::MutexBackend 297 + }; 298 + (backend SpinLock) => { 299 + $crate::sync::lock::spinlock::SpinLockBackend 300 + }; 301 + }
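The `global_lock!` machinery above lets one static lock protect fields spread across many structs: `GlobalLockedBy::as_mut` hands out `&mut T` in exchange for a `&mut GlobalGuard` as proof of exclusive lock ownership. The shape of that idea can be sketched with `std` types; note this simplified version does not tie the guard to one specific lock the way the kernel's marker type `B` does, so it is weaker than the real API:

```rust
use std::cell::UnsafeCell;
use std::sync::{Mutex, MutexGuard};

static MY_MUTEX: Mutex<()> = Mutex::new(());

// Data living outside the mutex, but only reachable while holding it,
// playing the role of GlobalLockedBy.
struct LockedCounter {
    value: UnsafeCell<u32>,
}

// SAFETY (sketch): all access to `value` goes through a held guard below.
unsafe impl Sync for LockedCounter {}

impl LockedCounter {
    // The `&mut` guard plays the role of `&mut GlobalGuard`: exclusive
    // access to the guard stands for exclusive ownership of the lock.
    fn increment(&self, _guard: &mut MutexGuard<'_, ()>) -> u32 {
        unsafe {
            let v = &mut *self.value.get();
            *v += 1;
            *v
        }
    }
}
```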
+13 -2
rust/kernel/sync/lock/mutex.rs
··· 58 58 /// } 59 59 /// 60 60 /// // Allocate a boxed `Example`. 61 - /// let e = Box::pin_init(Example::new(), GFP_KERNEL)?; 61 + /// let e = KBox::pin_init(Example::new(), GFP_KERNEL)?; 62 62 /// assert_eq!(e.c, 10); 63 63 /// assert_eq!(e.d.lock().a, 20); 64 64 /// assert_eq!(e.d.lock().b, 30); ··· 96 96 97 97 unsafe fn init( 98 98 ptr: *mut Self::State, 99 - name: *const core::ffi::c_char, 99 + name: *const crate::ffi::c_char, 100 100 key: *mut bindings::lock_class_key, 101 101 ) { 102 102 // SAFETY: The safety requirements ensure that `ptr` is valid for writes, and `name` and ··· 114 114 // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the 115 115 // caller is the owner of the mutex. 116 116 unsafe { bindings::mutex_unlock(ptr) }; 117 + } 118 + 119 + unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> { 120 + // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use. 121 + let result = unsafe { bindings::mutex_trylock(ptr) }; 122 + 123 + if result != 0 { 124 + Some(()) 125 + } else { 126 + None 127 + } 117 128 } 118 129 }
+13 -2
rust/kernel/sync/lock/spinlock.rs
··· 56 56 /// } 57 57 /// 58 58 /// // Allocate a boxed `Example`. 59 - /// let e = Box::pin_init(Example::new(), GFP_KERNEL)?; 59 + /// let e = KBox::pin_init(Example::new(), GFP_KERNEL)?; 60 60 /// assert_eq!(e.c, 10); 61 61 /// assert_eq!(e.d.lock().a, 20); 62 62 /// assert_eq!(e.d.lock().b, 30); ··· 95 95 96 96 unsafe fn init( 97 97 ptr: *mut Self::State, 98 - name: *const core::ffi::c_char, 98 + name: *const crate::ffi::c_char, 99 99 key: *mut bindings::lock_class_key, 100 100 ) { 101 101 // SAFETY: The safety requirements ensure that `ptr` is valid for writes, and `name` and ··· 113 113 // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the 114 114 // caller is the owner of the spinlock. 115 115 unsafe { bindings::spin_unlock(ptr) } 116 + } 117 + 118 + unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> { 119 + // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use. 120 + let result = unsafe { bindings::spin_trylock(ptr) }; 121 + 122 + if result != 0 { 123 + Some(()) 124 + } else { 125 + None 126 + } 116 127 } 117 128 }
+1 -1
rust/kernel/sync/locked_by.rs
··· 43 43 /// struct InnerDirectory { 44 44 /// /// The sum of the bytes used by all files. 45 45 /// bytes_used: u64, 46 - /// _files: Vec<File>, 46 + /// _files: KVec<File>, 47 47 /// } 48 48 /// 49 49 /// struct Directory {
+2 -6
rust/kernel/task.rs
··· 9 9 pid_namespace::PidNamespace, 10 10 types::{ARef, NotThreadSafe, Opaque}, 11 11 }; 12 - use core::{ 13 - cmp::{Eq, PartialEq}, 14 - ffi::{c_int, c_long, c_uint}, 15 - ops::Deref, 16 - ptr, 17 - }; 12 + use crate::ffi::{c_int, c_long, c_uint}; 13 + use core::{cmp::{Eq, PartialEq}, ops::Deref, ptr}; 18 14 19 15 /// A sentinel value used for infinite timeouts. 20 16 pub const MAX_SCHEDULE_TIMEOUT: c_long = c_long::MAX;

+2 -2
rust/kernel/time.rs
··· 12 12 pub const NSEC_PER_MSEC: i64 = bindings::NSEC_PER_MSEC as i64; 13 13 14 14 /// The time unit of Linux kernel. One jiffy equals (1/HZ) second. 15 - pub type Jiffies = core::ffi::c_ulong; 15 + pub type Jiffies = crate::ffi::c_ulong; 16 16 17 17 /// The millisecond time unit. 18 - pub type Msecs = core::ffi::c_uint; 18 + pub type Msecs = crate::ffi::c_uint; 19 19 20 20 /// Converts milliseconds to jiffies. 21 21 #[inline]
+71
rust/kernel/transmute.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Traits for transmuting types. 4 + 5 + /// Types for which any bit pattern is valid. 6 + /// 7 + /// Not all types are valid for all values. For example, a `bool` must be either zero or one, so 8 + /// reading arbitrary bytes into something that contains a `bool` is not okay. 9 + /// 10 + /// It's okay for the type to have padding, as initializing those bytes has no effect. 11 + /// 12 + /// # Safety 13 + /// 14 + /// All bit-patterns must be valid for this type. This type must not have interior mutability. 15 + pub unsafe trait FromBytes {} 16 + 17 + macro_rules! impl_frombytes { 18 + ($($({$($generics:tt)*})? $t:ty, )*) => { 19 + // SAFETY: Safety comments written in the macro invocation. 20 + $(unsafe impl$($($generics)*)? FromBytes for $t {})* 21 + }; 22 + } 23 + 24 + impl_frombytes! { 25 + // SAFETY: All bit patterns are acceptable values of the types below. 26 + u8, u16, u32, u64, usize, 27 + i8, i16, i32, i64, isize, 28 + 29 + // SAFETY: If all bit patterns are acceptable for individual values in an array, then all bit 30 + // patterns are also acceptable for arrays of that type. 31 + {<T: FromBytes>} [T], 32 + {<T: FromBytes, const N: usize>} [T; N], 33 + } 34 + 35 + /// Types that can be viewed as an immutable slice of initialized bytes. 36 + /// 37 + /// If a struct implements this trait, then it is okay to copy it byte-for-byte to userspace. This 38 + /// means that it should not have any padding, as padding bytes are uninitialized. Reading 39 + /// uninitialized memory is not just undefined behavior, it may even lead to leaking sensitive 40 + /// information on the stack to userspace. 41 + /// 42 + /// The struct should also not hold kernel pointers, as kernel pointer addresses are also considered 43 + /// sensitive. However, leaking kernel pointers is not considered undefined behavior by Rust, so 44 + /// this is a correctness requirement, but not a safety requirement. 
45 + /// 46 + /// # Safety 47 + /// 48 + /// Values of this type may not contain any uninitialized bytes. This type must not have interior 49 + /// mutability. 50 + pub unsafe trait AsBytes {} 51 + 52 + macro_rules! impl_asbytes { 53 + ($($({$($generics:tt)*})? $t:ty, )*) => { 54 + // SAFETY: Safety comments written in the macro invocation. 55 + $(unsafe impl$($($generics)*)? AsBytes for $t {})* 56 + }; 57 + } 58 + 59 + impl_asbytes! { 60 + // SAFETY: Instances of the following types have no uninitialized portions. 61 + u8, u16, u32, u64, usize, 62 + i8, i16, i32, i64, isize, 63 + bool, 64 + char, 65 + str, 66 + 67 + // SAFETY: If individual values in an array have no uninitialized portions, then the array 68 + // itself does not have any uninitialized portions either. 69 + {<T: AsBytes>} [T], 70 + {<T: AsBytes, const N: usize>} [T; N], 71 + }
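`FromBytes` in the new `transmute.rs` is an unsafe marker trait asserting that every bit pattern is a valid value, which is exactly what makes reading a value out of an arbitrary byte buffer sound. A reduced userspace sketch of how such a marker gets used (`read_from_bytes` is illustrative, not kernel API):

```rust
use std::mem::size_of;
use std::ptr;

// Reduced version of the kernel marker: implementing this asserts that
// any bit pattern is a valid `Self` (true for integers, not for `bool`).
unsafe trait FromBytes: Copy {}

// SAFETY: all bit patterns are valid for these integer types.
unsafe impl FromBytes for u32 {}
unsafe impl FromBytes for i64 {}

fn read_from_bytes<T: FromBytes>(bytes: &[u8]) -> Option<T> {
    if bytes.len() < size_of::<T>() {
        return None;
    }
    // SAFETY: the length was checked, `read_unaligned` tolerates any
    // alignment, and `T: FromBytes` makes every bit pattern valid.
    Some(unsafe { ptr::read_unaligned(bytes.as_ptr() as *const T) })
}
```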
rust/kernel/types.rs (+72 -124)

···
 //! Kernel types.
 
 use crate::init::{self, PinInit};
-use alloc::boxed::Box;
 use core::{
     cell::UnsafeCell,
     marker::{PhantomData, PhantomPinned},
     mem::{ManuallyDrop, MaybeUninit},
     ops::{Deref, DerefMut},
-    pin::Pin,
     ptr::NonNull,
 };
···
     /// For example, it might be invalid, dangling or pointing to uninitialized memory. Using it in
     /// any way except for [`ForeignOwnable::from_foreign`], [`ForeignOwnable::borrow`],
     /// [`ForeignOwnable::try_from_foreign`] can result in undefined behavior.
-    fn into_foreign(self) -> *const core::ffi::c_void;
+    fn into_foreign(self) -> *const crate::ffi::c_void;
 
     /// Borrows a foreign-owned object.
     ///
···
     ///
     /// `ptr` must have been returned by a previous call to [`ForeignOwnable::into_foreign`] for
     /// which a previous matching [`ForeignOwnable::from_foreign`] hasn't been called yet.
-    unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> Self::Borrowed<'a>;
+    unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> Self::Borrowed<'a>;
 
     /// Converts a foreign-owned object back to a Rust-owned one.
     ///
···
     /// which a previous matching [`ForeignOwnable::from_foreign`] hasn't been called yet.
     /// Additionally, all instances (if any) of values returned by [`ForeignOwnable::borrow`] for
     /// this object must have been dropped.
-    unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self;
+    unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self;
 
     /// Tries to convert a foreign-owned object back to a Rust-owned one.
     ///
···
     ///
     /// `ptr` must either be null or satisfy the safety requirements for
     /// [`ForeignOwnable::from_foreign`].
-    unsafe fn try_from_foreign(ptr: *const core::ffi::c_void) -> Option<Self> {
+    unsafe fn try_from_foreign(ptr: *const crate::ffi::c_void) -> Option<Self> {
         if ptr.is_null() {
             None
         } else {
···
         }
     }
 }
 
-impl<T: 'static> ForeignOwnable for Box<T> {
-    type Borrowed<'a> = &'a T;
-
-    fn into_foreign(self) -> *const core::ffi::c_void {
-        Box::into_raw(self) as _
-    }
-
-    unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> &'a T {
-        // SAFETY: The safety requirements for this function ensure that the object is still alive,
-        // so it is safe to dereference the raw pointer.
-        // The safety requirements of `from_foreign` also ensure that the object remains alive for
-        // the lifetime of the returned value.
-        unsafe { &*ptr.cast() }
-    }
-
-    unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self {
-        // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous
-        // call to `Self::into_foreign`.
-        unsafe { Box::from_raw(ptr as _) }
-    }
-}
-
-impl<T: 'static> ForeignOwnable for Pin<Box<T>> {
-    type Borrowed<'a> = Pin<&'a T>;
-
-    fn into_foreign(self) -> *const core::ffi::c_void {
-        // SAFETY: We are still treating the box as pinned.
-        Box::into_raw(unsafe { Pin::into_inner_unchecked(self) }) as _
-    }
-
-    unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> Pin<&'a T> {
-        // SAFETY: The safety requirements for this function ensure that the object is still alive,
-        // so it is safe to dereference the raw pointer.
-        // The safety requirements of `from_foreign` also ensure that the object remains alive for
-        // the lifetime of the returned value.
-        let r = unsafe { &*ptr.cast() };
-
-        // SAFETY: This pointer originates from a `Pin<Box<T>>`.
-        unsafe { Pin::new_unchecked(r) }
-    }
-
-    unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self {
-        // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous
-        // call to `Self::into_foreign`.
-        unsafe { Pin::new_unchecked(Box::from_raw(ptr as _)) }
-    }
-}
-
 impl ForeignOwnable for () {
     type Borrowed<'a> = ();
 
-    fn into_foreign(self) -> *const core::ffi::c_void {
+    fn into_foreign(self) -> *const crate::ffi::c_void {
         core::ptr::NonNull::dangling().as_ptr()
     }
 
-    unsafe fn borrow<'a>(_: *const core::ffi::c_void) -> Self::Borrowed<'a> {}
+    unsafe fn borrow<'a>(_: *const crate::ffi::c_void) -> Self::Borrowed<'a> {}
 
-    unsafe fn from_foreign(_: *const core::ffi::c_void) -> Self {}
+    unsafe fn from_foreign(_: *const crate::ffi::c_void) -> Self {}
 }
 
 /// Runs a cleanup function/closure when dropped.
···
 /// # use kernel::types::ScopeGuard;
 /// fn example3(arg: bool) -> Result {
 ///     let mut vec =
-///         ScopeGuard::new_with_data(Vec::new(), |v| pr_info!("vec had {} elements\n", v.len()));
+///         ScopeGuard::new_with_data(KVec::new(), |v| pr_info!("vec had {} elements\n", v.len()));
 ///
 ///     vec.push(10u8, GFP_KERNEL)?;
 ///     if arg {
···
 impl ScopeGuard<(), fn(())> {
     /// Creates a new guarded object with the given cleanup function.
     pub fn new(cleanup: impl FnOnce()) -> ScopeGuard<(), impl FnOnce(())> {
-        ScopeGuard::new_with_data((), move |_| cleanup())
+        ScopeGuard::new_with_data((), move |()| cleanup())
     }
 }
···
 
 /// Stores an opaque value.
 ///
-/// This is meant to be used with FFI objects that are never interpreted by Rust code.
+/// `Opaque<T>` is meant to be used with FFI objects that are never interpreted by Rust code.
+///
+/// It is used to wrap structs from the C side, like for example `Opaque<bindings::mutex>`.
+/// It gets rid of all the usual assumptions that Rust has for a value:
+///
+/// * The value is allowed to be uninitialized (for example have invalid bit patterns: `3` for a
+///   [`bool`]).
+/// * The value is allowed to be mutated, when a `&Opaque<T>` exists on the Rust side.
+/// * No uniqueness for mutable references: it is fine to have multiple `&mut Opaque<T>` point to
+///   the same value.
+/// * The value is not allowed to be shared with other threads (i.e. it is `!Sync`).
+///
+/// This has to be used for all values that the C side has access to, because it can't be ensured
+/// that the C side is adhering to the usual constraints that Rust needs.
+///
+/// Using `Opaque<T>` allows to continue to use references on the Rust side even for values shared
+/// with C.
+///
+/// # Examples
+///
+/// ```
+/// # #![expect(unreachable_pub, clippy::disallowed_names)]
+/// use kernel::types::Opaque;
+/// # // Emulate a C struct binding which is from C, maybe uninitialized or not, only the C side
+/// # // knows.
+/// # mod bindings {
+/// #     pub struct Foo {
+/// #         pub val: u8,
+/// #     }
+/// # }
+///
+/// // `foo.val` is assumed to be handled on the C side, so we use `Opaque` to wrap it.
+/// pub struct Foo {
+///     foo: Opaque<bindings::Foo>,
+/// }
+///
+/// impl Foo {
+///     pub fn get_val(&self) -> u8 {
+///         let ptr = Opaque::get(&self.foo);
+///
+///         // SAFETY: `Self` is valid from C side.
+///         unsafe { (*ptr).val }
+///     }
+/// }
+///
+/// // Create an instance of `Foo` with the `Opaque` wrapper.
+/// let foo = Foo {
+///     foo: Opaque::new(bindings::Foo { val: 0xdb }),
+/// };
+///
+/// assert_eq!(foo.get_val(), 0xdb);
+/// ```
 #[repr(transparent)]
 pub struct Opaque<T> {
     value: UnsafeCell<MaybeUninit<T>>,
···
 ///
 /// struct Empty {}
 ///
+/// # // SAFETY: TODO.
 /// unsafe impl AlwaysRefCounted for Empty {
 ///     fn inc_ref(&self) {}
 ///     unsafe fn dec_ref(_obj: NonNull<Self>) {}
···
 ///
 /// let mut data = Empty {};
 /// let ptr = NonNull::<Empty>::new(&mut data as *mut _).unwrap();
+/// # // SAFETY: TODO.
 /// let data_ref: ARef<Empty> = unsafe { ARef::from_raw(ptr) };
 /// let raw_ptr: NonNull<Empty> = ARef::into_raw(data_ref);
 ///
···
 }
 
 /// A sum type that always holds either a value of type `L` or `R`.
+///
+/// # Examples
+///
+/// ```
+/// use kernel::types::Either;
+///
+/// let left_value: Either<i32, &str> = Either::Left(7);
+/// let right_value: Either<i32, &str> = Either::Right("right value");
+/// ```
 pub enum Either<L, R> {
     /// Constructs an instance of [`Either`] containing a value of type `L`.
     Left(L),
···
     /// Constructs an instance of [`Either`] containing a value of type `R`.
     Right(R),
 }
-
-/// Types for which any bit pattern is valid.
-///
-/// Not all types are valid for all values. For example, a `bool` must be either zero or one, so
-/// reading arbitrary bytes into something that contains a `bool` is not okay.
-///
-/// It's okay for the type to have padding, as initializing those bytes has no effect.
-///
-/// # Safety
-///
-/// All bit-patterns must be valid for this type. This type must not have interior mutability.
-pub unsafe trait FromBytes {}
-
-// SAFETY: All bit patterns are acceptable values of the types below.
-unsafe impl FromBytes for u8 {}
-unsafe impl FromBytes for u16 {}
-unsafe impl FromBytes for u32 {}
-unsafe impl FromBytes for u64 {}
-unsafe impl FromBytes for usize {}
-unsafe impl FromBytes for i8 {}
-unsafe impl FromBytes for i16 {}
-unsafe impl FromBytes for i32 {}
-unsafe impl FromBytes for i64 {}
-unsafe impl FromBytes for isize {}
-// SAFETY: If all bit patterns are acceptable for individual values in an array, then all bit
-// patterns are also acceptable for arrays of that type.
-unsafe impl<T: FromBytes> FromBytes for [T] {}
-unsafe impl<T: FromBytes, const N: usize> FromBytes for [T; N] {}
-
-/// Types that can be viewed as an immutable slice of initialized bytes.
-///
-/// If a struct implements this trait, then it is okay to copy it byte-for-byte to userspace. This
-/// means that it should not have any padding, as padding bytes are uninitialized. Reading
-/// uninitialized memory is not just undefined behavior, it may even lead to leaking sensitive
-/// information on the stack to userspace.
-///
-/// The struct should also not hold kernel pointers, as kernel pointer addresses are also considered
-/// sensitive. However, leaking kernel pointers is not considered undefined behavior by Rust, so
-/// this is a correctness requirement, but not a safety requirement.
-///
-/// # Safety
-///
-/// Values of this type may not contain any uninitialized bytes. This type must not have interior
-/// mutability.
-pub unsafe trait AsBytes {}
-
-// SAFETY: Instances of the following types have no uninitialized portions.
-unsafe impl AsBytes for u8 {}
-unsafe impl AsBytes for u16 {}
-unsafe impl AsBytes for u32 {}
-unsafe impl AsBytes for u64 {}
-unsafe impl AsBytes for usize {}
-unsafe impl AsBytes for i8 {}
-unsafe impl AsBytes for i16 {}
-unsafe impl AsBytes for i32 {}
-unsafe impl AsBytes for i64 {}
-unsafe impl AsBytes for isize {}
-unsafe impl AsBytes for bool {}
-unsafe impl AsBytes for char {}
-unsafe impl AsBytes for str {}
-// SAFETY: If individual values in an array have no uninitialized portions, then the array itself
-// does not have any uninitialized portions either.
-unsafe impl<T: AsBytes> AsBytes for [T] {}
-unsafe impl<T: AsBytes, const N: usize> AsBytes for [T; N] {}
 
 /// Zero-sized type to mark types not [`Send`].
 ///
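The new `Opaque<T>` documentation describes a wrapper that suspends Rust's usual validity and aliasing assumptions for values the C side may touch. A rough userspace sketch of the underlying `UnsafeCell<MaybeUninit<T>>` layout (illustrative only; the kernel type has more methods and pinned-init support):

```rust
use std::cell::UnsafeCell;
use std::mem::MaybeUninit;

// Sketch of an `Opaque`-style wrapper: `repr(transparent)` keeps the
// layout identical to `T`, `UnsafeCell` permits mutation through shared
// references, and `MaybeUninit` permits uninitialized contents.
#[repr(transparent)]
pub struct Opaque<T> {
    value: UnsafeCell<MaybeUninit<T>>,
}

impl<T> Opaque<T> {
    pub fn new(value: T) -> Self {
        Opaque { value: UnsafeCell::new(MaybeUninit::new(value)) }
    }

    pub fn uninit() -> Self {
        Opaque { value: UnsafeCell::new(MaybeUninit::uninit()) }
    }

    // Returns a raw pointer to the wrapped value; the caller decides
    // whether it is currently valid to read through it.
    pub fn get(&self) -> *mut T {
        self.value.get().cast::<T>()
    }
}

fn main() {
    let o = Opaque::new(0xdbu8);
    // SAFETY: `o` was initialized by `Opaque::new` above.
    let v = unsafe { *o.get() };
    assert_eq!(v, 0xdb);
    println!("ok");
}
```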
rust/kernel/uaccess.rs (+11 -14)

···
     alloc::Flags,
     bindings,
     error::Result,
+    ffi::{c_ulong, c_void},
     prelude::*,
-    types::{AsBytes, FromBytes},
+    transmute::{AsBytes, FromBytes},
 };
-use alloc::vec::Vec;
-use core::ffi::{c_ulong, c_void};
 use core::mem::{size_of, MaybeUninit};
 
 /// The type used for userspace addresses.
···
 /// every byte in the region.
 ///
 /// ```no_run
-/// use alloc::vec::Vec;
-/// use core::ffi::c_void;
+/// use kernel::ffi::c_void;
 /// use kernel::error::Result;
 /// use kernel::uaccess::{UserPtr, UserSlice};
 ///
 /// fn bytes_add_one(uptr: UserPtr, len: usize) -> Result<()> {
 ///     let (read, mut write) = UserSlice::new(uptr, len).reader_writer();
 ///
-///     let mut buf = Vec::new();
+///     let mut buf = KVec::new();
 ///     read.read_all(&mut buf, GFP_KERNEL)?;
 ///
 ///     for b in &mut buf {
···
 /// Example illustrating a TOCTOU (time-of-check to time-of-use) bug.
 ///
 /// ```no_run
-/// use alloc::vec::Vec;
-/// use core::ffi::c_void;
+/// use kernel::ffi::c_void;
 /// use kernel::error::{code::EINVAL, Result};
 /// use kernel::uaccess::{UserPtr, UserSlice};
 ///
···
 /// fn is_valid(uptr: UserPtr, len: usize) -> Result<bool> {
 ///     let read = UserSlice::new(uptr, len).reader();
 ///
-///     let mut buf = Vec::new();
+///     let mut buf = KVec::new();
 ///     read.read_all(&mut buf, GFP_KERNEL)?;
 ///
 ///     todo!()
 /// }
 ///
 /// /// Returns the bytes behind this user pointer if they are valid.
-/// fn get_bytes_if_valid(uptr: UserPtr, len: usize) -> Result<Vec<u8>> {
+/// fn get_bytes_if_valid(uptr: UserPtr, len: usize) -> Result<KVec<u8>> {
 ///     if !is_valid(uptr, len)? {
 ///         return Err(EINVAL);
 ///     }
 ///
 ///     let read = UserSlice::new(uptr, len).reader();
 ///
-///     let mut buf = Vec::new();
+///     let mut buf = KVec::new();
 ///     read.read_all(&mut buf, GFP_KERNEL)?;
 ///
 ///     // THIS IS A BUG! The bytes could have changed since we checked them.
···
     /// Reads the entirety of the user slice, appending it to the end of the provided buffer.
     ///
     /// Fails with [`EFAULT`] if the read happens on a bad address.
-    pub fn read_all(self, buf: &mut Vec<u8>, flags: Flags) -> Result {
+    pub fn read_all(self, buf: &mut KVec<u8>, flags: Flags) -> Result {
         self.reader().read_all(buf, flags)
     }
 
···
     /// Reads the entirety of the user slice, appending it to the end of the provided buffer.
     ///
     /// Fails with [`EFAULT`] if the read happens on a bad address.
-    pub fn read_all(mut self, buf: &mut Vec<u8>, flags: Flags) -> Result {
+    pub fn read_all(mut self, buf: &mut KVec<u8>, flags: Flags) -> Result {
         let len = self.length;
-        VecExt::<u8>::reserve(buf, len, flags)?;
+        buf.reserve(len, flags)?;
 
         // The call to `try_reserve` was successful, so the spare capacity is at least `len` bytes
         // long.
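The TOCTOU example in the `uaccess` documentation is worth internalizing: any check performed on a separate read of user memory can be invalidated before the use, because userspace can rewrite the buffer between the two reads. A userspace sketch of the correct pattern, with a plain slice standing in for user memory (names here are stand-ins, not the kernel's `UserSlice` API):

```rust
// Simulated "user" memory; in the kernel the bytes would be copied in
// exactly once via a `UserSliceReader`.
fn read_all(user_mem: &[u8], buf: &mut Vec<u8>) {
    // Single copy: the later check and use both see these same bytes.
    buf.extend_from_slice(user_mem);
}

fn is_valid(bytes: &[u8]) -> bool {
    bytes.iter().all(|b| b.is_ascii())
}

// Correct pattern: copy once, then validate and use the *same* copy,
// instead of reading user memory twice (check, then use).
fn get_bytes_if_valid(user_mem: &[u8]) -> Option<Vec<u8>> {
    let mut buf = Vec::new();
    read_all(user_mem, &mut buf);
    if is_valid(&buf) {
        Some(buf)
    } else {
        None
    }
}

fn main() {
    assert_eq!(get_bytes_if_valid(b"hello"), Some(b"hello".to_vec()));
    assert_eq!(get_bytes_if_valid(&[0xff, 0x01]), None);
    println!("ok");
}
```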
rust/kernel/workqueue.rs (+16 -13)

···
             func: Some(func),
         });
 
-        self.enqueue(Box::pin_init(init, flags).map_err(|_| AllocError)?);
+        self.enqueue(KBox::pin_init(init, flags).map_err(|_| AllocError)?);
         Ok(())
     }
 }
···
 }
 
 impl<T: FnOnce()> WorkItem for ClosureWork<T> {
-    type Pointer = Pin<Box<Self>>;
+    type Pointer = Pin<KBox<Self>>;
 
-    fn run(mut this: Pin<Box<Self>>) {
+    fn run(mut this: Pin<KBox<Self>>) {
         if let Some(func) = this.as_mut().project().take() {
             (func)()
         }
···
 
 /// Defines the method that should be called directly when a work item is executed.
 ///
-/// This trait is implemented by `Pin<Box<T>>` and [`Arc<T>`], and is mainly intended to be
+/// This trait is implemented by `Pin<KBox<T>>` and [`Arc<T>`], and is mainly intended to be
 /// implemented for smart pointer types. For your own structs, you would implement [`WorkItem`]
 /// instead. The [`run`] method on this trait will usually just perform the appropriate
 /// `container_of` translation and then call into the [`run`][WorkItem::run] method from the
···
 /// This trait is used when the `work_struct` field is defined using the [`Work`] helper.
 pub trait WorkItem<const ID: u64 = 0> {
     /// The pointer type that this struct is wrapped in. This will typically be `Arc<Self>` or
-    /// `Pin<Box<Self>>`.
+    /// `Pin<KBox<Self>>`.
     type Pointer: WorkItemPointer<ID>;
 
     /// The method that should be called when this work item is executed.
···
 impl<T: ?Sized, const ID: u64> Work<T, ID> {
     /// Creates a new instance of [`Work`].
     #[inline]
-    #[allow(clippy::new_ret_no_self)]
     pub fn new(name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self>
     where
         T: WorkItem<ID>,
···
     impl{T} HasWork<Self> for ClosureWork<T> { self.work }
 }
 
+// SAFETY: TODO.
 unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Arc<T>
 where
     T: WorkItem<ID, Pointer = Self>,
     T: HasWork<T, ID>,
 {
     unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
-        // SAFETY: The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
+        // The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
         let ptr = ptr as *mut Work<T, ID>;
         // SAFETY: This computes the pointer that `__enqueue` got from `Arc::into_raw`.
         let ptr = unsafe { T::work_container_of(ptr) };
···
     }
 }
 
+// SAFETY: TODO.
 unsafe impl<T, const ID: u64> RawWorkItem<ID> for Arc<T>
 where
     T: WorkItem<ID, Pointer = Self>,
···
     }
 }
 
-unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<Box<T>>
+// SAFETY: TODO.
+unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<KBox<T>>
 where
     T: WorkItem<ID, Pointer = Self>,
     T: HasWork<T, ID>,
 {
     unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
-        // SAFETY: The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
+        // The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
         let ptr = ptr as *mut Work<T, ID>;
         // SAFETY: This computes the pointer that `__enqueue` got from `Arc::into_raw`.
         let ptr = unsafe { T::work_container_of(ptr) };
         // SAFETY: This pointer comes from `Arc::into_raw` and we've been given back ownership.
-        let boxed = unsafe { Box::from_raw(ptr) };
+        let boxed = unsafe { KBox::from_raw(ptr) };
         // SAFETY: The box was already pinned when it was enqueued.
         let pinned = unsafe { Pin::new_unchecked(boxed) };
···
     }
 }
 
-unsafe impl<T, const ID: u64> RawWorkItem<ID> for Pin<Box<T>>
+// SAFETY: TODO.
+unsafe impl<T, const ID: u64> RawWorkItem<ID> for Pin<KBox<T>>
 where
     T: WorkItem<ID, Pointer = Self>,
     T: HasWork<T, ID>,
···
         // SAFETY: We're not going to move `self` or any of its fields, so its okay to temporarily
         // remove the `Pin` wrapper.
         let boxed = unsafe { Pin::into_inner_unchecked(self) };
-        let ptr = Box::into_raw(boxed);
+        let ptr = KBox::into_raw(boxed);
 
-        // SAFETY: Pointers into a `Box` point at a valid value.
+        // SAFETY: Pointers into a `KBox` point at a valid value.
         let work_ptr = unsafe { T::raw_get_work(ptr) };
         // SAFETY: `raw_get_work` returns a pointer to a valid value.
         let work_ptr = unsafe { Work::raw_get(work_ptr) };
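Both `WorkItemPointer` impls in the workqueue diff rely on the same ownership discipline: enqueueing leaks the pointer with `into_raw`, and the C-side callback reclaims it with `from_raw` exactly once. A userspace sketch of that round trip using plain `Box` (the kernel uses `KBox`/`Arc` plus a `container_of` translation, omitted here):

```rust
struct Work {
    payload: i32,
}

// "Enqueue": give up ownership, producing a raw pointer that a C-side
// callback could carry around.
fn enqueue(w: Box<Work>) -> *mut Work {
    Box::into_raw(w)
}

// "Run": reclaim ownership exactly once and consume the work item.
unsafe fn run(ptr: *mut Work) -> i32 {
    // SAFETY: the caller guarantees `ptr` came from `enqueue` and that
    // this is the only reclamation of it.
    let boxed = unsafe { Box::from_raw(ptr) };
    boxed.payload
}

fn main() {
    let ptr = enqueue(Box::new(Work { payload: 42 }));
    // SAFETY: `ptr` was just produced by `enqueue` and is reclaimed once.
    let result = unsafe { run(ptr) };
    assert_eq!(result, 42);
    println!("ok");
}
```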
rust/macros/lib.rs (+102 -40)

···
 ///
 /// # Examples
 ///
-/// ```ignore
+/// ```
 /// use kernel::prelude::*;
 ///
 /// module!{
···
 ///     alias: ["alternate_module_name"],
 /// }
 ///
-/// struct MyModule;
+/// struct MyModule(i32);
 ///
 /// impl kernel::Module for MyModule {
-///     fn init() -> Result<Self> {
-///         // If the parameter is writeable, then the kparam lock must be
-///         // taken to read the parameter:
-///         {
-///             let lock = THIS_MODULE.kernel_param_lock();
-///             pr_info!("i32 param is: {}\n", writeable_i32.read(&lock));
-///         }
-///         // If the parameter is read only, it can be read without locking
-///         // the kernel parameters:
-///         pr_info!("i32 param is: {}\n", my_i32.read());
-///         Ok(Self)
+///     fn init(_module: &'static ThisModule) -> Result<Self> {
+///         let foo: i32 = 42;
+///         pr_info!("I contain: {}\n", foo);
+///         Ok(Self(foo))
 ///     }
 /// }
+/// # fn main() {}
 /// ```
 ///
 /// ## Firmware
···
 /// build an initramfs uses this information to put the firmware files into
 /// the initramfs image.
 ///
-/// ```ignore
+/// ```
 /// use kernel::prelude::*;
 ///
 /// module!{
···
 /// struct MyDeviceDriverModule;
 ///
 /// impl kernel::Module for MyDeviceDriverModule {
-///     fn init() -> Result<Self> {
+///     fn init(_module: &'static ThisModule) -> Result<Self> {
 ///         Ok(Self)
 ///     }
 /// }
+/// # fn main() {}
 /// ```
 ///
 /// # Supported argument types
···
 /// calls to this function at compile time:
 ///
 /// ```compile_fail
-/// # use kernel::error::VTABLE_DEFAULT_ERROR;
+/// # // Intentionally missing `use`s to simplify `rusttest`.
 /// kernel::build_error(VTABLE_DEFAULT_ERROR)
 /// ```
 ///
···
 ///
 /// # Examples
 ///
-/// ```ignore
+/// ```
 /// use kernel::error::VTABLE_DEFAULT_ERROR;
 /// use kernel::prelude::*;
 ///
···
 ///
 /// # Examples
 ///
-/// ```ignore
-/// use kernel::macro::concat_idents;
+/// ```
+/// # const binder_driver_return_protocol_BR_OK: u32 = 0;
+/// # const binder_driver_return_protocol_BR_ERROR: u32 = 1;
+/// # const binder_driver_return_protocol_BR_TRANSACTION: u32 = 2;
+/// # const binder_driver_return_protocol_BR_REPLY: u32 = 3;
+/// # const binder_driver_return_protocol_BR_DEAD_REPLY: u32 = 4;
+/// # const binder_driver_return_protocol_BR_TRANSACTION_COMPLETE: u32 = 5;
+/// # const binder_driver_return_protocol_BR_INCREFS: u32 = 6;
+/// # const binder_driver_return_protocol_BR_ACQUIRE: u32 = 7;
+/// # const binder_driver_return_protocol_BR_RELEASE: u32 = 8;
+/// # const binder_driver_return_protocol_BR_DECREFS: u32 = 9;
+/// # const binder_driver_return_protocol_BR_NOOP: u32 = 10;
+/// # const binder_driver_return_protocol_BR_SPAWN_LOOPER: u32 = 11;
+/// # const binder_driver_return_protocol_BR_DEAD_BINDER: u32 = 12;
+/// # const binder_driver_return_protocol_BR_CLEAR_DEATH_NOTIFICATION_DONE: u32 = 13;
+/// # const binder_driver_return_protocol_BR_FAILED_REPLY: u32 = 14;
+/// use kernel::macros::concat_idents;
 ///
 /// macro_rules! pub_no_prefix {
 ///     ($prefix:ident, $($newname:ident),+) => {
-///         $(pub(crate) const $newname: u32 = kernel::macros::concat_idents!($prefix, $newname);)+
+///         $(pub(crate) const $newname: u32 = concat_idents!($prefix, $newname);)+
 ///     };
 /// }
 ///
···
 ///
 /// # Examples
 ///
-/// ```rust,ignore
+/// ```
+/// # #![feature(lint_reasons)]
+/// # use kernel::prelude::*;
+/// # use std::{sync::Mutex, process::Command};
+/// # use kernel::macros::pin_data;
 /// #[pin_data]
 /// struct DriverData {
 ///     #[pin]
-///     queue: Mutex<Vec<Command>>,
-///     buf: Box<[u8; 1024 * 1024]>,
+///     queue: Mutex<KVec<Command>>,
+///     buf: KBox<[u8; 1024 * 1024]>,
 /// }
 /// ```
 ///
-/// ```rust,ignore
+/// ```
+/// # #![feature(lint_reasons)]
+/// # use kernel::prelude::*;
+/// # use std::{sync::Mutex, process::Command};
+/// # use core::pin::Pin;
+/// # pub struct Info;
+/// # mod bindings {
+/// #     pub unsafe fn destroy_info(_ptr: *mut super::Info) {}
+/// # }
+/// use kernel::macros::{pin_data, pinned_drop};
+///
 /// #[pin_data(PinnedDrop)]
 /// struct DriverData {
 ///     #[pin]
-///     queue: Mutex<Vec<Command>>,
-///     buf: Box<[u8; 1024 * 1024]>,
+///     queue: Mutex<KVec<Command>>,
+///     buf: KBox<[u8; 1024 * 1024]>,
 ///     raw_info: *mut Info,
 /// }
 ///
···
 ///         unsafe { bindings::destroy_info(self.raw_info) };
 ///     }
 /// }
+/// # fn main() {}
 /// ```
 ///
 /// [`pin_init!`]: ../kernel/macro.pin_init.html
···
 ///
 /// # Examples
 ///
-/// ```rust,ignore
+/// ```
+/// # #![feature(lint_reasons)]
+/// # use kernel::prelude::*;
+/// # use macros::{pin_data, pinned_drop};
+/// # use std::{sync::Mutex, process::Command};
+/// # use core::pin::Pin;
+/// # mod bindings {
+/// #     pub struct Info;
+/// #     pub unsafe fn destroy_info(_ptr: *mut Info) {}
+/// # }
 /// #[pin_data(PinnedDrop)]
 /// struct DriverData {
 ///     #[pin]
-///     queue: Mutex<Vec<Command>>,
-///     buf: Box<[u8; 1024 * 1024]>,
-///     raw_info: *mut Info,
+///     queue: Mutex<KVec<Command>>,
+///     buf: KBox<[u8; 1024 * 1024]>,
+///     raw_info: *mut bindings::Info,
 /// }
 ///
 /// #[pinned_drop]
···
 ///
 /// # Example
 ///
-/// ```ignore
-/// use kernel::macro::paste;
-///
+/// ```
+/// # const binder_driver_return_protocol_BR_OK: u32 = 0;
+/// # const binder_driver_return_protocol_BR_ERROR: u32 = 1;
+/// # const binder_driver_return_protocol_BR_TRANSACTION: u32 = 2;
+/// # const binder_driver_return_protocol_BR_REPLY: u32 = 3;
+/// # const binder_driver_return_protocol_BR_DEAD_REPLY: u32 = 4;
+/// # const binder_driver_return_protocol_BR_TRANSACTION_COMPLETE: u32 = 5;
+/// # const binder_driver_return_protocol_BR_INCREFS: u32 = 6;
+/// # const binder_driver_return_protocol_BR_ACQUIRE: u32 = 7;
+/// # const binder_driver_return_protocol_BR_RELEASE: u32 = 8;
+/// # const binder_driver_return_protocol_BR_DECREFS: u32 = 9;
+/// # const binder_driver_return_protocol_BR_NOOP: u32 = 10;
+/// # const binder_driver_return_protocol_BR_SPAWN_LOOPER: u32 = 11;
+/// # const binder_driver_return_protocol_BR_DEAD_BINDER: u32 = 12;
+/// # const binder_driver_return_protocol_BR_CLEAR_DEATH_NOTIFICATION_DONE: u32 = 13;
+/// # const binder_driver_return_protocol_BR_FAILED_REPLY: u32 = 14;
 /// macro_rules! pub_no_prefix {
 ///     ($prefix:ident, $($newname:ident),+) => {
-///         paste! {
+///         kernel::macros::paste! {
 ///             $(pub(crate) const $newname: u32 = [<$prefix $newname>];)+
 ///         }
 ///     };
···
 /// * `lower`: change the identifier to lower case.
 /// * `upper`: change the identifier to upper case.
 ///
-/// ```ignore
-/// use kernel::macro::paste;
-///
+/// ```
+/// # const binder_driver_return_protocol_BR_OK: u32 = 0;
+/// # const binder_driver_return_protocol_BR_ERROR: u32 = 1;
+/// # const binder_driver_return_protocol_BR_TRANSACTION: u32 = 2;
+/// # const binder_driver_return_protocol_BR_REPLY: u32 = 3;
+/// # const binder_driver_return_protocol_BR_DEAD_REPLY: u32 = 4;
+/// # const binder_driver_return_protocol_BR_TRANSACTION_COMPLETE: u32 = 5;
+/// # const binder_driver_return_protocol_BR_INCREFS: u32 = 6;
+/// # const binder_driver_return_protocol_BR_ACQUIRE: u32 = 7;
+/// # const binder_driver_return_protocol_BR_RELEASE: u32 = 8;
+/// # const binder_driver_return_protocol_BR_DECREFS: u32 = 9;
+/// # const binder_driver_return_protocol_BR_NOOP: u32 = 10;
+/// # const binder_driver_return_protocol_BR_SPAWN_LOOPER: u32 = 11;
+/// # const binder_driver_return_protocol_BR_DEAD_BINDER: u32 = 12;
+/// # const binder_driver_return_protocol_BR_CLEAR_DEATH_NOTIFICATION_DONE: u32 = 13;
+/// # const binder_driver_return_protocol_BR_FAILED_REPLY: u32 = 14;
 /// macro_rules! pub_no_prefix {
 ///     ($prefix:ident, $($newname:ident),+) => {
 ///         kernel::macros::paste! {
-///             $(pub(crate) const fn [<$newname:lower:span>]: u32 = [<$prefix $newname:span>];)+
+///             $(pub(crate) const fn [<$newname:lower:span>]() -> u32 { [<$prefix $newname:span>] })+
 ///         }
 ///     };
 /// }
···
 ///
 /// Literals can also be concatenated with other identifiers:
 ///
-/// ```ignore
+/// ```
 /// macro_rules! create_numbered_fn {
 ///     ($name:literal, $val:literal) => {
 ///         kernel::macros::paste! {
···
 ///
 /// # Examples
 ///
-/// ```rust,ignore
+/// ```
+/// use kernel::macros::Zeroable;
+///
 /// #[derive(Zeroable)]
 /// pub struct DriverData {
 ///     id: i64,
rust/macros/module.rs (+4 -4)

···
             #[doc(hidden)]
             #[no_mangle]
             #[link_section = \".init.text\"]
-            pub unsafe extern \"C\" fn init_module() -> core::ffi::c_int {{
+            pub unsafe extern \"C\" fn init_module() -> kernel::ffi::c_int {{
                 // SAFETY: This function is inaccessible to the outside due to the double
                 // module wrapping it. It is called exactly once by the C side via its
                 // unique name.
···
             #[doc(hidden)]
             #[link_section = \"{initcall_section}\"]
             #[used]
-            pub static __{name}_initcall: extern \"C\" fn() -> core::ffi::c_int = __{name}_init;
+            pub static __{name}_initcall: extern \"C\" fn() -> kernel::ffi::c_int = __{name}_init;
 
             #[cfg(not(MODULE))]
             #[cfg(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)]
···
             #[cfg(not(MODULE))]
             #[doc(hidden)]
             #[no_mangle]
-            pub extern \"C\" fn __{name}_init() -> core::ffi::c_int {{
+            pub extern \"C\" fn __{name}_init() -> kernel::ffi::c_int {{
                 // SAFETY: This function is inaccessible to the outside due to the double
                 // module wrapping it. It is called exactly once by the C side via its
                 // placement above in the initcall section.
···
             /// # Safety
             ///
             /// This function must only be called once.
-            unsafe fn __init() -> core::ffi::c_int {{
+            unsafe fn __init() -> kernel::ffi::c_int {{
                 match <{type_} as kernel::Module>::init(&super::super::THIS_MODULE) {{
                     Ok(m) => {{
                         // SAFETY: No data race, since `__MOD` can only be accessed by this
rust/macros/paste.rs (+12 -3)

···
 
 use proc_macro::{Delimiter, Group, Ident, Spacing, Span, TokenTree};
 
-fn concat(tokens: &[TokenTree], group_span: Span) -> TokenTree {
+fn concat_helper(tokens: &[TokenTree]) -> Vec<(String, Span)> {
     let mut tokens = tokens.iter();
     let mut segments = Vec::new();
     let mut span = None;
···
                 };
                 segments.push((value, sp));
             }
-            _ => panic!("unexpected token in paste segments"),
+            Some(TokenTree::Group(group)) if group.delimiter() == Delimiter::None => {
+                let tokens = group.stream().into_iter().collect::<Vec<TokenTree>>();
+                segments.append(&mut concat_helper(tokens.as_slice()));
+            }
+            token => panic!("unexpected token in paste segments: {:?}", token),
         };
     }
 
+    segments
+}
+
+fn concat(tokens: &[TokenTree], group_span: Span) -> TokenTree {
+    let segments = concat_helper(tokens);
     let pasted: String = segments.into_iter().map(|x| x.0).collect();
-    TokenTree::Ident(Ident::new(&pasted, span.unwrap_or(group_span)))
+    TokenTree::Ident(Ident::new(&pasted, group_span))
 }
 
 pub(crate) fn expand(tokens: &mut Vec<TokenTree>) {
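The new `concat_helper` recurses into `Delimiter::None` groups, the invisible token groupings that appear when one macro passes matched fragments into `paste!`; previously those groups hit the `panic!` arm. A toy model of that flattening over a simplified token type (a made-up `Tok` enum, not the real `proc_macro` API):

```rust
// Simplified stand-in for proc-macro tokens: either a plain segment or
// an invisible (`Delimiter::None`-like) group of nested tokens.
enum Tok {
    Ident(String),
    Invisible(Vec<Tok>),
}

// Mirrors the diff's `concat_helper`: collect segments, recursing into
// invisible groups instead of panicking on them.
fn concat_helper(tokens: &[Tok]) -> Vec<String> {
    let mut segments = Vec::new();
    for tok in tokens {
        match tok {
            Tok::Ident(s) => segments.push(s.clone()),
            Tok::Invisible(inner) => segments.append(&mut concat_helper(inner)),
        }
    }
    segments
}

// Mirrors `concat`: join all collected segments into one identifier.
fn concat(tokens: &[Tok]) -> String {
    concat_helper(tokens).concat()
}

fn main() {
    let tokens = vec![
        Tok::Ident("foo".into()),
        Tok::Invisible(vec![Tok::Ident("_".into()), Tok::Ident("bar".into())]),
    ];
    assert_eq!(concat(&tokens), "foo_bar");
    println!("ok");
}
```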
rust/uapi/lib.rs (+6)

···
 #![cfg_attr(test, allow(unsafe_op_in_unsafe_fn))]
 #![allow(
     clippy::all,
+    clippy::undocumented_unsafe_blocks,
     dead_code,
     missing_docs,
     non_camel_case_types,
···
     unreachable_pub,
     unsafe_op_in_unsafe_fn
 )]
 
+// Manual definition of blocklisted types.
+type __kernel_size_t = usize;
+type __kernel_ssize_t = isize;
+type __kernel_ptrdiff_t = isize;
+
 include!(concat!(env!("OBJTREE"), "/rust/uapi/uapi_generated.rs"));
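Mapping `__kernel_size_t` and friends to `usize`/`isize` (rather than fixed 32/64-bit integers) is sound because Rust defines `usize` and `isize` to be pointer-sized, which matches C's `size_t`/`ssize_t`/`ptrdiff_t` on the targets the kernel supports. A quick userspace check of those width assumptions:

```rust
use std::mem::size_of;

fn main() {
    // `usize`/`isize` are defined to be pointer-sized, the same property
    // C's `size_t`/`ssize_t`/`ptrdiff_t` have on kernel-supported targets.
    assert_eq!(size_of::<usize>(), size_of::<*mut u8>());
    assert_eq!(size_of::<isize>(), size_of::<*mut u8>());
    assert_eq!(size_of::<usize>(), size_of::<isize>());
    println!("ok");
}
```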
samples/rust/rust_minimal.rs (+2 -2)

···
 }
 
 struct RustMinimal {
-    numbers: Vec<i32>,
+    numbers: KVec<i32>,
 }
 
 impl kernel::Module for RustMinimal {
···
         pr_info!("Rust minimal sample (init)\n");
         pr_info!("Am I built-in? {}\n", !cfg!(MODULE));
 
-        let mut numbers = Vec::new();
+        let mut numbers = KVec::new();
         numbers.push(72, GFP_KERNEL)?;
         numbers.push(108, GFP_KERNEL)?;
         numbers.push(200, GFP_KERNEL)?;
samples/rust/rust_print_main.rs (+1)

···
 
 struct RustPrint;
 
+#[expect(clippy::disallowed_macros)]
 fn arc_print() -> Result {
     use kernel::sync::*;
 
scripts/Makefile.build (+2 -2)

···
 # Compile Rust sources (.rs)
 # ---------------------------------------------------------------------------
 
-rust_allowed_features := asm_const,asm_goto,new_uninit
+rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons
 
 # `--out-dir` is required to avoid temporaries being created by `rustc` in the
 # current working directory, which may be not accessible in the out-of-tree
···
       -Zallow-features=$(rust_allowed_features) \
       -Zcrate-attr=no_std \
       -Zcrate-attr='feature($(rust_allowed_features))' \
-      -Zunstable-options --extern force:alloc --extern kernel \
+      -Zunstable-options --extern kernel \
       --crate-type rlib -L $(objtree)/rust/ \
       --crate-name $(basename $(notdir $@)) \
       --sysroot=/dev/null \
+2 -9
scripts/generate_rust_analyzer.py
···
 )
 
 append_crate(
-    "alloc",
-    sysroot_src / "alloc" / "src" / "lib.rs",
-    ["core", "compiler_builtins"],
-    cfg=crates_cfgs.get("alloc", []),
-)
-
-append_crate(
     "macros",
     srctree / "rust" / "macros" / "lib.rs",
     [],
···
 append_crate(
     "kernel",
     srctree / "rust" / "kernel" / "lib.rs",
-    ["core", "alloc", "macros", "build_error", "bindings"],
+    ["core", "macros", "build_error", "bindings"],
     cfg=cfg,
 )
 crates[-1]["source"] = {
···
 append_crate(
     name,
     path,
-    ["core", "alloc", "kernel"],
+    ["core", "kernel"],
     cfg=cfg,
 )
 
+15
scripts/rust_is_available.sh
···
 	exit 1
 fi
 
+if [ "$bindgen_libclang_cversion" -ge 1900100 ] &&
+   [ "$rust_bindings_generator_cversion" -lt 6905 ]; then
+	# Distributions may have patched the issue (e.g. Debian did).
+	if ! "$BINDGEN" $(dirname $0)/rust_is_available_bindgen_libclang_concat.h | grep -q foofoo; then
+		echo >&2 "***"
+		echo >&2 "*** Rust bindings generator '$BINDGEN' < 0.69.5 together with libclang >= 19.1"
+		echo >&2 "*** may not work due to a bug (https://github.com/rust-lang/rust-bindgen/pull/2824),"
+		echo >&2 "*** unless patched (like Debian's)."
+		echo >&2 "*** Your bindgen version: $rust_bindings_generator_version"
+		echo >&2 "*** Your libclang version: $bindgen_libclang_version"
+		echo >&2 "***"
+		warning=1
+	fi
+fi
+
 # If the C compiler is Clang, then we can also check whether its version
 # matches the `libclang` version used by the Rust bindings generator.
 #
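The magic numbers `1900100` and `6905` in the new check are canonicalized versions: libclang 19.1.0 and bindgen 0.69.5 respectively, which suggests the script packs `major.minor.patch` as `major * 100000 + minor * 100 + patch`. A sketch of that encoding, assuming that formula (the `cversion` helper is illustrative, not the script's actual function name):

```rust
// Assumed canonical version encoding: major * 100000 + minor * 100 + patch.
fn cversion(major: u64, minor: u64, patch: u64) -> u64 {
    100_000 * major + 100 * minor + patch
}

fn main() {
    assert_eq!(cversion(19, 1, 0), 1900100); // libclang >= 19.1 threshold
    assert_eq!(cversion(0, 69, 5), 6905);    // bindgen < 0.69.5 threshold
    // bindgen 0.69.4 together with libclang 19.1.0 takes the warning path:
    assert!(cversion(19, 1, 0) >= 1900100 && cversion(0, 69, 4) < 6905);
    println!("ok");
}
```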
+3
scripts/rust_is_available_bindgen_libclang_concat.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#define F(x) int x##x
+F(foo);
+33 -1
scripts/rust_is_available_test.py
···
     """)
 
 @classmethod
-def generate_bindgen(cls, version_stdout, libclang_stderr, version_0_66_patched=False):
+def generate_bindgen(cls, version_stdout, libclang_stderr, version_0_66_patched=False, libclang_concat_patched=False):
     if libclang_stderr is None:
         libclang_case = f"raise SystemExit({cls.bindgen_default_bindgen_libclang_failure_exit_code})"
     else:
···
     else:
         version_0_66_case = "raise SystemExit(1)"
 
+    if libclang_concat_patched:
+        libclang_concat_case = "print('pub static mut foofoo: ::std::os::raw::c_int;')"
+    else:
+        libclang_concat_case = "pass"
+
     return cls.generate_executable(f"""#!/usr/bin/env python3
 import sys
 if "rust_is_available_bindgen_libclang.h" in " ".join(sys.argv):
     {libclang_case}
 elif "rust_is_available_bindgen_0_66.h" in " ".join(sys.argv):
     {version_0_66_case}
+elif "rust_is_available_bindgen_libclang_concat.h" in " ".join(sys.argv):
+    {libclang_concat_case}
 else:
     print({repr(version_stdout)})
 """)
···
     bindgen = self.generate_bindgen_libclang("scripts/rust_is_available_bindgen_libclang.h:2:9: warning: clang version 10.0.0 [-W#pragma-messages], err: false")
     result = self.run_script(self.Expected.FAILURE, { "BINDGEN": bindgen })
     self.assertIn(f"libclang (used by the Rust bindings generator '{bindgen}') is too old.", result.stderr)
+
+def test_bindgen_bad_libclang_concat(self):
+    for (bindgen_version, libclang_version, expected_not_patched) in (
+        ("0.69.4", "18.0.0", self.Expected.SUCCESS),
+        ("0.69.4", "19.1.0", self.Expected.SUCCESS_WITH_WARNINGS),
+        ("0.69.4", "19.2.0", self.Expected.SUCCESS_WITH_WARNINGS),
+
+        ("0.69.5", "18.0.0", self.Expected.SUCCESS),
+        ("0.69.5", "19.1.0", self.Expected.SUCCESS),
+        ("0.69.5", "19.2.0", self.Expected.SUCCESS),
+
+        ("0.70.0", "18.0.0", self.Expected.SUCCESS),
+        ("0.70.0", "19.1.0", self.Expected.SUCCESS),
+        ("0.70.0", "19.2.0", self.Expected.SUCCESS),
+    ):
+        with self.subTest(bindgen_version=bindgen_version, libclang_version=libclang_version):
+            cc = self.generate_clang(f"clang version {libclang_version}")
+            libclang_stderr = f"scripts/rust_is_available_bindgen_libclang.h:2:9: warning: clang version {libclang_version} [-W#pragma-messages], err: false"
+            bindgen = self.generate_bindgen(f"bindgen {bindgen_version}", libclang_stderr)
+            result = self.run_script(expected_not_patched, { "BINDGEN": bindgen, "CC": cc })
+            if expected_not_patched == self.Expected.SUCCESS_WITH_WARNINGS:
+                self.assertIn(f"Rust bindings generator '{bindgen}' < 0.69.5 together with libclang >= 19.1", result.stderr)
+
+            bindgen = self.generate_bindgen(f"bindgen {bindgen_version}", libclang_stderr, libclang_concat_patched=True)
+            result = self.run_script(self.Expected.SUCCESS, { "BINDGEN": bindgen, "CC": cc })
 
 def test_clang_matches_bindgen_libclang_different_bindgen(self):
     bindgen = self.generate_bindgen_libclang("scripts/rust_is_available_bindgen_libclang.h:2:9: warning: clang version 999.0.0 [-W#pragma-messages], err: false")