
Merge tag 'rust-6.17' of git://git.kernel.org/pub/scm/linux/kernel/git/ojeda/linux

Pull Rust updates from Miguel Ojeda:
"Toolchain and infrastructure:

- Enable a set of Clippy lints: 'ptr_as_ptr', 'ptr_cast_constness',
'as_ptr_cast_mut', 'as_underscore', 'cast_lossless' and
'ref_as_ptr'

These are intended to steer type casts away from the 'as' operator,
which is quite powerful, toward restricted variants that are less
powerful and should thus help avoid mistakes

- Remove the 'author' key now that most instances were moved to the
plural one in the previous cycle

'kernel' crate:

- New 'bug' module: add 'warn_on!' macro which reuses the existing
'BUG'/'WARN' infrastructure, i.e. it respects the usual sysctls and
kernel parameters:

warn_on!(value == 42);

To avoid duplicating the assembly code, the same strategy is
followed as for the static branch code in order to share the
assembly between both C and Rust

This required a few rearrangements on C arch headers -- the
existing C macros should still generate the same outputs, thus no
functional change expected there

- 'workqueue' module: add delayed work items, including a
'DelayedWork' struct, a 'impl_has_delayed_work!' macro and an
'enqueue_delayed' method, e.g.:

/// Enqueue the struct for execution on the system workqueue,
/// where its value will be printed 42 jiffies later.
fn print_later(value: Arc<MyStruct>) {
let _ = workqueue::system().enqueue_delayed(value, 42);
}

- New 'bits' module: add support for 'bit' and 'genmask' functions,
with runtime- and compile-time variants, e.g.:

static_assert!(0b00010000 == bit_u8(4));
static_assert!(0b00011110 == genmask_u8(1..=4));

assert!(checked_bit_u32(u32::BITS).is_none());
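
For readers outside the kernel tree, the masking arithmetic behind these functions can be sketched in plain Rust. This is an illustration of the formula only (the function name `genmask32` is invented here), not the kernel's implementation:

```rust
// Illustrative sketch of a contiguous bitmask over an inclusive bit range:
// `high | (high - 1)` sets every bit up to `end`, and `!(low - 1)` clears
// the bits below `start`. Returns None for out-of-range or reversed ranges.
fn genmask32(start: u32, end: u32) -> Option<u32> {
    if start > end {
        return None;
    }
    let high = 1u32.checked_shl(end)?; // None if `end` >= 32
    let low = 1u32.checked_shl(start)?;
    Some((high | (high - 1)) & !(low - 1))
}

fn main() {
    assert_eq!(genmask32(1, 4), Some(0b0001_1110));
    assert_eq!(genmask32(0, 31), Some(u32::MAX));
    assert_eq!(genmask32(21, 40), None); // out of the supported bit range
    assert_eq!(genmask32(15, 8), None); // reversed range
    println!("ok");
}
```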

- 'uaccess' module: add 'UserSliceReader::strcpy_into_buf', which
reads NUL-terminated strings from userspace into a '&CStr'

Introduce 'UserPtr' newtype, similar in purpose to '__user' in C,
to minimize mistakes handling userspace pointers, including mixing
them up with integers and leaking them via the 'Debug' trait. Add
it to the prelude, too
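
The idea behind the 'UserPtr' newtype can be sketched in plain Rust. The type below is a made-up stand-in, not the kernel's implementation, but it shows how a newtype both separates userspace addresses from ordinary integers and redacts them when formatted:

```rust
use core::fmt;

// Hypothetical stand-in for the kernel's `UserPtr`: wraps a raw userspace
// address so it cannot be confused with a plain integer, and overrides
// `Debug` so the address cannot leak into logs.
#[derive(Clone, Copy)]
struct UserPtr(usize);

impl fmt::Debug for UserPtr {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Deliberately omit the address value.
        f.write_str("UserPtr(redacted)")
    }
}

fn main() {
    let p = UserPtr(0xdead_beef);
    let rendered = format!("{p:?}");
    assert_eq!(rendered, "UserPtr(redacted)");
    assert!(!rendered.contains("dead"));
    println!("{rendered}");
}
```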

- Start preparations for replacing our custom 'CStr' type with the
analogous type in the 'core' standard library. This will take place
across several cycles to make the transition easier. This cycle
includes a new 'fmt' module, the use of upstream method names and
some other cleanups

Replace 'fmt!' with a re-export, which helps Clippy lint properly,
and clean up the found 'uninlined-format-args' instances

- 'dma' module:

- Clarify wording and be consistent in 'coherent' nomenclature

- Convert the 'read!()' and 'write!()' macros to return a 'Result'

- Add 'as_slice()', 'write()' methods in 'CoherentAllocation'

- Expose 'count()' and 'size()' in 'CoherentAllocation' and add
the corresponding type invariants

- Implement 'CoherentAllocation::dma_handle_with_offset()'

- 'time' module:

- Make 'Instant' generic over clock source. This allows the
compiler to assert that arithmetic expressions involving the
'Instant' use 'Instants' based on the same clock source
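
The clock-source parameter can be illustrated with a standalone phantom-type sketch (names invented here; not the kernel's 'time' API): subtraction only type-checks when both instants share the same clock type.

```rust
use core::marker::PhantomData;
use core::ops::Sub;

// Invented clock-source marker types for this sketch.
struct Monotonic;
#[allow(dead_code)]
struct RealTime;

// An instant tagged at the type level with its clock source.
struct Instant<C> {
    nanos: u64,
    _clock: PhantomData<C>,
}

impl<C> Instant<C> {
    fn from_nanos(nanos: u64) -> Self {
        Instant { nanos, _clock: PhantomData }
    }
}

// Subtraction is only defined between instants of the *same* clock `C`.
impl<C> Sub for Instant<C> {
    type Output = u64; // elapsed nanoseconds

    fn sub(self, rhs: Self) -> u64 {
        self.nanos - rhs.nanos
    }
}

fn main() {
    let a = Instant::<Monotonic>::from_nanos(2_000);
    let b = Instant::<Monotonic>::from_nanos(500);
    assert_eq!(a - b, 1_500);
    // Mixing clocks does not compile:
    // let r = Instant::<RealTime>::from_nanos(0);
    // let _ = Instant::<Monotonic>::from_nanos(0) - r; // type error
    println!("ok");
}
```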

- Make 'HrTimer' generic over the timer mode. 'HrTimer' timers
take a 'Duration' or an 'Instant' when setting the expiry time,
depending on the timer mode. With this change, the compiler can
check the type matches the timer mode

- Add an abstraction for 'fsleep'. 'fsleep' is a flexible sleep
function that will select an appropriate sleep method depending
on the requested sleep time

- Avoid 64-bit divisions on 32-bit hardware when calculating
timestamps

- Seal the 'HrTimerMode' trait. This prevents users of the
'HrTimerMode' from implementing the trait on their own types
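
Sealing here is the standard Rust pattern of a public trait with a private supertrait; a minimal standalone sketch (names invented, not the kernel's trait):

```rust
// The sealed-trait pattern: `HrTimerMode` is nameable and usable outside
// the `timer` module, but cannot be implemented there, because its
// supertrait `Sealed` lives in a private submodule.
mod timer {
    mod private {
        pub trait Sealed {}
    }

    pub trait HrTimerMode: private::Sealed {
        fn mode_id(&self) -> u32;
    }

    pub struct Absolute;

    impl private::Sealed for Absolute {}
    impl HrTimerMode for Absolute {
        fn mode_id(&self) -> u32 {
            0
        }
    }
}

fn main() {
    use timer::HrTimerMode;
    assert_eq!(timer::Absolute.mode_id(), 0);
    // Outside `timer`, `impl timer::HrTimerMode for MyType` is rejected:
    // the required supertrait `Sealed` is not reachable.
    println!("ok");
}
```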

- Pass the correct timer mode ID to 'hrtimer_start_range_ns()'

- 'list' module: remove 'OFFSET' constants, allowing the removal of
pointer arithmetic; 'impl_list_item!' now invokes
'impl_has_list_links!' or 'impl_has_list_links_self_ptr!'. Other
simplifications too

- 'types' module: remove 'ForeignOwnable::PointedTo' in favor of a
constant, which avoids exposing the type of the opaque pointer, and
require 'into_foreign' to return non-null

Remove the 'Either<L, R>' type as well. It is unused, and we want
to encourage the use of custom enums for concrete use cases
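
The 'ForeignOwnable' change above can be pictured with a toy standalone version of the trait (simplified and invented here; the real kernel trait has more methods and requirements): the foreign form is an untyped, non-null pointer, and the pointee's alignment is exposed as a constant rather than through a 'PointedTo' associated type.

```rust
use core::ffi::c_void;

// Toy sketch of the `ForeignOwnable` pattern after this change: the
// foreign representation is an untyped `*mut c_void`, and the required
// alignment is a constant instead of leaking the pointee type.
trait ForeignOwnable: Sized {
    const FOREIGN_ALIGN: usize;
    fn into_foreign(self) -> *mut c_void;
    /// # Safety
    /// `ptr` must come from a previous `into_foreign` call.
    unsafe fn from_foreign(ptr: *mut c_void) -> Self;
}

impl ForeignOwnable for Box<u64> {
    const FOREIGN_ALIGN: usize = core::mem::align_of::<u64>();

    fn into_foreign(self) -> *mut c_void {
        // A `Box` always holds a non-null, well-aligned pointer.
        Box::into_raw(self).cast()
    }

    unsafe fn from_foreign(ptr: *mut c_void) -> Self {
        // SAFETY: `ptr` originates from `into_foreign` per the contract.
        unsafe { Box::from_raw(ptr.cast()) }
    }
}

fn main() {
    let ptr = Box::new(42u64).into_foreign();
    assert!(!ptr.is_null());
    let b = unsafe { <Box<u64> as ForeignOwnable>::from_foreign(ptr) };
    assert_eq!(*b, 42);
    println!("ok");
}
```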

- 'sync' module: implement 'Borrow' and 'BorrowMut' for 'Arc' types
to allow them to be used in generic APIs

- 'alloc' module: implement 'Borrow' and 'BorrowMut' for 'Box<T, A>';
and 'Borrow', 'BorrowMut' and 'Default' for 'Vec<T, A>'
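
What these impls buy is that owned containers and plain references can flow through the same generic API; a standalone sketch using std types in place of 'KBox'/'KVec':

```rust
use core::borrow::Borrow;

// A generic function that accepts anything borrowable as `[u32]`:
// owned vectors, boxed slices, or borrowed slices alike.
fn sum<B: Borrow<[u32]>>(data: B) -> u32 {
    data.borrow().iter().sum()
}

fn main() {
    let owned: Vec<u32> = vec![1, 2, 3];
    let boxed: Box<[u32]> = Box::new([4, 5]);
    let arr = [6u32, 7];

    assert_eq!(sum(owned), 6);
    assert_eq!(sum(boxed), 9);
    assert_eq!(sum(&arr[..]), 13);
    println!("ok");
}
```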

- 'Opaque' type: add 'cast_from' method to perform a restricted cast
that cannot change the inner type and use it in callers of
'container_of!'. Rename 'raw_get' to 'cast_into' to match it

- 'rbtree' module: add 'is_empty' method

- 'sync' module: new 'aref' submodule to hold 'AlwaysRefCounted' and
'ARef', which are moved from the too general 'types' module which
we want to reduce or eventually remove. Also fix a safety comment
in 'static_lock_class'

'pin-init' crate:

- Add 'impl<T, E> [Pin]Init<T, E> for Result<T, E>', so results are
now (pin-)initializers

- Add 'Zeroable::init_zeroed()' that delegates to 'init_zeroed()'

- New 'zeroed()', a safe version of 'mem::zeroed()' and also provide
it via 'Zeroable::zeroed()'

- Implement 'Zeroable' for 'Option<&T>', 'Option<&mut T>' and for
'Option<[unsafe] [extern "abi"] fn(...args...) -> ret>' for
'"Rust"' and '"C"' ABIs and up to 20 arguments
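
These impls are sound because Rust guarantees the null-pointer niche for such types: 'None' is represented as all-zero bits. A standalone demonstration via 'core::mem::zeroed', which is exactly the property a 'Zeroable' impl relies on:

```rust
// `Option<&T>` and `Option<fn(...)>` are guaranteed to use the null
// pointer as their `None` representation, so the all-zero bit pattern
// is a valid value of these types.
fn main() {
    // SAFETY: sound only because of the guaranteed niche described above.
    let r: Option<&u32> = unsafe { core::mem::zeroed() };
    let f: Option<fn(u32) -> u32> = unsafe { core::mem::zeroed() };
    assert!(r.is_none());
    assert!(f.is_none());
    println!("ok");
}
```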

- Changed blanket impls of 'Init' and 'PinInit' from 'impl<T, E>
[Pin]Init<T, E> for T' to 'impl<T> [Pin]Init<T> for T'

- Renamed 'zeroed()' to 'init_zeroed()'

- Upstream dev news: improve CI more to deny warnings, use
'--all-targets'. Check the synchronization status of the two
'-next' branches in upstream and the kernel

MAINTAINERS:

- Add Vlastimil Babka, Liam R. Howlett, Uladzislau Rezki and Lorenzo
Stoakes as reviewers (thanks everyone)

And a few other cleanups and improvements"

* tag 'rust-6.17' of git://git.kernel.org/pub/scm/linux/kernel/git/ojeda/linux: (76 commits)
rust: Add warn_on macro
arm64/bug: Add ARCH_WARN_ASM macro for BUG/WARN asm code sharing with Rust
riscv/bug: Add ARCH_WARN_ASM macro for BUG/WARN asm code sharing with Rust
x86/bug: Add ARCH_WARN_ASM macro for BUG/WARN asm code sharing with Rust
rust: kernel: move ARef and AlwaysRefCounted to sync::aref
rust: sync: fix safety comment for `static_lock_class`
rust: types: remove `Either<L, R>`
rust: kernel: use `core::ffi::CStr` method names
rust: str: add `CStr` methods matching `core::ffi::CStr`
rust: str: remove unnecessary qualification
rust: use `kernel::{fmt,prelude::fmt!}`
rust: kernel: add `fmt` module
rust: kernel: remove `fmt!`, fix clippy::uninlined-format-args
scripts: rust: emit path candidates in panic message
scripts: rust: replace length checks with match
rust: list: remove nonexistent generic parameter in link
rust: bits: add support for bits/genmask macros
rust: list: remove OFFSET constants
rust: list: add `impl_list_item!` examples
rust: list: use fully qualified path
...

+2561 -987
+4
MAINTAINERS
···
 RUST [ALLOC]
 M:	Danilo Krummrich <dakr@kernel.org>
+R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Vlastimil Babka <vbabka@suse.cz>
+R:	Liam R. Howlett <Liam.Howlett@oracle.com>
+R:	Uladzislau Rezki <urezki@gmail.com>
 L:	rust-for-linux@vger.kernel.org
 S:	Maintained
 T:	git https://github.com/Rust-for-Linux/linux.git alloc-next
+6
Makefile
···
 	-Wrust_2018_idioms \
 	-Wunreachable_pub \
 	-Wclippy::all \
+	-Wclippy::as_ptr_cast_mut \
+	-Wclippy::as_underscore \
+	-Wclippy::cast_lossless \
 	-Wclippy::ignored_unit_patterns \
 	-Wclippy::mut_mut \
 	-Wclippy::needless_bitwise_bool \
 	-Aclippy::needless_lifetimes \
 	-Wclippy::no_mangle_with_rust_abi \
+	-Wclippy::ptr_as_ptr \
+	-Wclippy::ptr_cast_constness \
+	-Wclippy::ref_as_ptr \
 	-Wclippy::undocumented_unsafe_blocks \
 	-Wclippy::unnecessary_safety_comment \
 	-Wclippy::unnecessary_safety_doc \
+29 -4
arch/arm64/include/asm/asm-bug.h
···
 #endif

 #ifdef CONFIG_GENERIC_BUG
-
-#define __BUG_ENTRY(flags) 			\
+#define __BUG_ENTRY_START			\
 		.pushsection __bug_table,"aw";	\
 		.align 2;			\
 	14470:	.long 14471f - .;		\
-_BUGVERBOSE_LOCATION(__FILE__, __LINE__)	\
-		.short flags; 			\
+
+#define __BUG_ENTRY_END				\
 		.align 2;			\
 		.popsection;			\
 	14471:
+
+#define __BUG_ENTRY(flags)			\
+	__BUG_ENTRY_START			\
+	_BUGVERBOSE_LOCATION(__FILE__, __LINE__)\
+	.short flags;				\
+	__BUG_ENTRY_END
 #else
 #define __BUG_ENTRY(flags)
 #endif
···
 	brk	BUG_BRK_IMM

 #define ASM_BUG()	ASM_BUG_FLAGS(0)
+
+#ifdef CONFIG_DEBUG_BUGVERBOSE
+#define __BUG_LOCATION_STRING(file, line)	\
+		".long " file "- .;"		\
+		".short " line ";"
+#else
+#define __BUG_LOCATION_STRING(file, line)
+#endif
+
+#define __BUG_ENTRY_STRING(file, line, flags)	\
+		__stringify(__BUG_ENTRY_START)	\
+		__BUG_LOCATION_STRING(file, line)\
+		".short " flags ";"		\
+		__stringify(__BUG_ENTRY_END)
+
+#define ARCH_WARN_ASM(file, line, flags, size)	\
+	__BUG_ENTRY_STRING(file, line, flags)	\
+	__stringify(brk BUG_BRK_IMM)
+
+#define ARCH_WARN_REACHABLE

 #endif /* __ASM_ASM_BUG_H */
+21 -14
arch/riscv/include/asm/bug.h
···
 #ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
 #define __BUG_ENTRY_ADDR	RISCV_INT " 1b - ."
-#define __BUG_ENTRY_FILE	RISCV_INT " %0 - ."
+#define __BUG_ENTRY_FILE(file)	RISCV_INT " " file " - ."
 #else
 #define __BUG_ENTRY_ADDR	RISCV_PTR " 1b"
-#define __BUG_ENTRY_FILE	RISCV_PTR " %0"
+#define __BUG_ENTRY_FILE(file)	RISCV_PTR " " file
 #endif

 #ifdef CONFIG_DEBUG_BUGVERBOSE
-#define __BUG_ENTRY			\
+#define __BUG_ENTRY(file, line, flags)	\
 	__BUG_ENTRY_ADDR "\n\t"		\
-	__BUG_ENTRY_FILE "\n\t"		\
-	RISCV_SHORT " %1\n\t"		\
-	RISCV_SHORT " %2"
+	__BUG_ENTRY_FILE(file) "\n\t"	\
+	RISCV_SHORT " " line "\n\t"	\
+	RISCV_SHORT " " flags
 #else
-#define __BUG_ENTRY			\
-	__BUG_ENTRY_ADDR "\n\t"		\
-	RISCV_SHORT " %2"
+#define __BUG_ENTRY(file, line, flags)	\
+	__BUG_ENTRY_ADDR "\n\t"		\
+	RISCV_SHORT " " flags
 #endif

 #ifdef CONFIG_GENERIC_BUG
-#define __BUG_FLAGS(flags)			\
-do {						\
-	__asm__ __volatile__ (			\
+
+#define ARCH_WARN_ASM(file, line, flags, size)	\
 		"1:\n\t"			\
 			"ebreak\n"		\
 			".pushsection __bug_table,\"aw\"\n\t" \
 		"2:\n\t"			\
-			__BUG_ENTRY "\n\t"	\
-			".org 2b + %3\n\t"	\
+			__BUG_ENTRY(file, line, flags) "\n\t" \
+			".org 2b + " size "\n\t" \
 			".popsection"		\
+
+#define __BUG_FLAGS(flags)			\
+do {						\
+	__asm__ __volatile__ (			\
+		ARCH_WARN_ASM("%0", "%1", "%2", "%3") \
 		:				\
 		: "i" (__FILE__), "i" (__LINE__), \
 		  "i" (flags),			\
 		  "i" (sizeof(struct bug_entry))); \
 } while (0)
+
 #else /* CONFIG_GENERIC_BUG */
 #define __BUG_FLAGS(flags) do {			\
 	__asm__ __volatile__ ("ebreak\n");	\
···
 } while (0)

 #define __WARN_FLAGS(flags) __BUG_FLAGS(BUGFLAG_WARNING|(flags))
+
+#define ARCH_WARN_REACHABLE

 #define HAVE_ARCH_BUG
+28 -28
arch/x86/include/asm/bug.h
···
 #ifdef CONFIG_GENERIC_BUG

 #ifdef CONFIG_X86_32
-# define __BUG_REL(val)	".long " __stringify(val)
+# define __BUG_REL(val)	".long " val
 #else
-# define __BUG_REL(val)	".long " __stringify(val) " - ."
+# define __BUG_REL(val)	".long " val " - ."
 #endif

 #ifdef CONFIG_DEBUG_BUGVERBOSE
+#define __BUG_ENTRY(file, line, flags)					\
+	"2:\t" __BUG_REL("1b") "\t# bug_entry::bug_addr\n"		\
+	"\t" __BUG_REL(file) "\t# bug_entry::file\n"			\
+	"\t.word " line "\t# bug_entry::line\n"				\
+	"\t.word " flags "\t# bug_entry::flags\n"
+#else
+#define __BUG_ENTRY(file, line, flags)					\
+	"2:\t" __BUG_REL("1b") "\t# bug_entry::bug_addr\n"		\
+	"\t.word " flags "\t# bug_entry::flags\n"
+#endif
+
+#define _BUG_FLAGS_ASM(ins, file, line, flags, size, extra)		\
+	"1:\t" ins "\n"							\
+	".pushsection __bug_table,\"aw\"\n"				\
+	__BUG_ENTRY(file, line, flags)					\
+	"\t.org 2b + " size "\n"					\
+	".popsection\n"							\
+	extra

 #define _BUG_FLAGS(ins, flags, extra)					\
 do {									\
-	asm_inline volatile("1:\t" ins "\n"				\
-		     ".pushsection __bug_table,\"aw\"\n"		\
-		     "2:\t" __BUG_REL(1b) "\t# bug_entry::bug_addr\n"	\
-		     "\t" __BUG_REL(%c0) "\t# bug_entry::file\n"	\
-		     "\t.word %c1"	"\t# bug_entry::line\n"		\
-		     "\t.word %c2"	"\t# bug_entry::flags\n"	\
-		     "\t.org 2b+%c3\n"					\
-		     ".popsection\n"					\
-		     extra						\
+	asm_inline volatile(_BUG_FLAGS_ASM(ins, "%c0",			\
+			    "%c1", "%c2", "%c3", extra)			\
		     : : "i" (__FILE__), "i" (__LINE__),		\
			 "i" (flags),					\
			 "i" (sizeof(struct bug_entry)));		\
 } while (0)

-#else /* !CONFIG_DEBUG_BUGVERBOSE */
-
-#define _BUG_FLAGS(ins, flags, extra)					\
-do {									\
-	asm_inline volatile("1:\t" ins "\n"				\
-		     ".pushsection __bug_table,\"aw\"\n"		\
-		     "2:\t" __BUG_REL(1b) "\t# bug_entry::bug_addr\n"	\
-		     "\t.word %c0"	"\t# bug_entry::flags\n"	\
-		     "\t.org 2b+%c1\n"					\
-		     ".popsection\n"					\
-		     extra						\
-		     : : "i" (flags),					\
-			 "i" (sizeof(struct bug_entry)));		\
-} while (0)
-
-#endif /* CONFIG_DEBUG_BUGVERBOSE */
+#define ARCH_WARN_ASM(file, line, flags, size)				\
+	_BUG_FLAGS_ASM(ASM_UD2, file, line, flags, size, "")

 #else
···
  * were to trigger, we'd rather wreck the machine in an attempt to get the
  * message out than not know about it.
  */
+
+#define ARCH_WARN_REACHABLE	ANNOTATE_REACHABLE(1b)
+
 #define __WARN_FLAGS(flags)					\
 do {								\
 	__auto_type __flags = BUGFLAG_WARNING|(flags);		\
 	instrumentation_begin();				\
-	_BUG_FLAGS(ASM_UD2, __flags, ANNOTATE_REACHABLE(1b));	\
+	_BUG_FLAGS(ASM_UD2, __flags, ARCH_WARN_REACHABLE);	\
 	instrumentation_end();					\
 } while (0)
+2 -3
drivers/cpufreq/rcpufreq_dt.rs
···
     cpumask::CpumaskVar,
     device::{Core, Device},
     error::code::*,
-    fmt,
     macros::vtable,
     module_platform_driver, of, opp, platform,
     prelude::*,
···
 /// Finds exact supply name from the OF node.
 fn find_supply_name_exact(dev: &Device, name: &str) -> Option<CString> {
-    let prop_name = CString::try_from_fmt(fmt!("{}-supply", name)).ok()?;
+    let prop_name = CString::try_from_fmt(fmt!("{name}-supply")).ok()?;
     dev.fwnode()?
         .property_present(&prop_name)
         .then(|| CString::try_from_fmt(fmt!("{name}")).ok())
···
 module_platform_driver! {
     type: CPUFreqDTDriver,
     name: "cpufreq-dt",
-    author: "Viresh Kumar <viresh.kumar@linaro.org>",
+    authors: ["Viresh Kumar <viresh.kumar@linaro.org>"],
     description: "Generic CPUFreq DT driver",
     license: "GPL v2",
 }
+2 -2
drivers/gpu/drm/drm_panic_qr.rs
···
         let mut out = 0;
         let mut exp = 1;
         for i in 0..poplen {
-            out += self.decimals[self.len + i] as u16 * exp;
+            out += u16::from(self.decimals[self.len + i]) * exp;
             exp *= 10;
         }
         Some((out, NUM_CHARS_BITS[poplen]))
···
         match self.segment {
             Segment::Binary(data) => {
                 if self.offset < data.len() {
-                    let byte = data[self.offset] as u16;
+                    let byte = u16::from(data[self.offset]);
                     self.offset += 1;
                     Some((byte, 8))
                 } else {
+1 -1
drivers/gpu/drm/nova/nova.rs
···
 kernel::module_auxiliary_driver! {
     type: NovaDriver,
     name: "Nova",
-    author: "Danilo Krummrich",
+    authors: ["Danilo Krummrich"],
     description: "Nova GPU driver",
     license: "GPL v2",
 }
+1 -1
drivers/gpu/nova-core/driver.rs
···
     MODULE_PCI_TABLE,
     <NovaCore as pci::Driver>::IdInfo,
     [(
-        pci::DeviceId::from_id(bindings::PCI_VENDOR_ID_NVIDIA, bindings::PCI_ANY_ID as _),
+        pci::DeviceId::from_id(bindings::PCI_VENDOR_ID_NVIDIA, bindings::PCI_ANY_ID as u32),
         ()
     )]
 );
+3 -2
drivers/gpu/nova-core/firmware.rs
···
 impl Firmware {
     pub(crate) fn new(dev: &device::Device, chipset: Chipset, ver: &str) -> Result<Firmware> {
-        let mut chip_name = CString::try_from_fmt(fmt!("{}", chipset))?;
+        let mut chip_name = CString::try_from_fmt(fmt!("{chipset}"))?;
         chip_name.make_ascii_lowercase();
+        let chip_name = &*chip_name;

         let request = |name_| {
-            CString::try_from_fmt(fmt!("nvidia/{}/gsp/{}-{}.bin", &*chip_name, name_, ver))
+            CString::try_from_fmt(fmt!("nvidia/{chip_name}/gsp/{name_}-{ver}.bin"))
                 .and_then(|path| firmware::Firmware::request(&path, dev))
         };
+1 -1
drivers/gpu/nova-core/nova_core.rs
···
 kernel::module_pci_driver! {
     type: driver::NovaCore,
     name: "NovaCore",
-    author: "Danilo Krummrich",
+    authors: ["Danilo Krummrich"],
     description: "Nova Core GPU driver",
     license: "GPL v2",
     firmware: [],
+1 -1
drivers/gpu/nova-core/regs.rs
···
     pub(crate) fn chipset(self) -> Result<Chipset> {
         self.architecture()
             .map(|arch| {
-                ((arch as u32) << Self::IMPLEMENTATION.len()) | self.implementation() as u32
+                ((arch as u32) << Self::IMPLEMENTATION.len()) | u32::from(self.implementation())
             })
             .and_then(Chipset::try_from)
     }
+1 -1
drivers/gpu/nova-core/regs/macros.rs
···
             pub(crate) fn [<set_ $field>](mut self, value: $to_type) -> Self {
                 const MASK: u32 = $name::[<$field:upper _MASK>];
                 const SHIFT: u32 = $name::[<$field:upper _SHIFT>];
-                let value = ((value as u32) << SHIFT) & MASK;
+                let value = (u32::from(value) << SHIFT) & MASK;
                 self.0 = (self.0 & !MASK) | value;

                 self
+2 -2
drivers/gpu/nova-core/util.rs
···
 // SPDX-License-Identifier: GPL-2.0

 use kernel::prelude::*;
-use kernel::time::{Delta, Instant};
+use kernel::time::{Delta, Instant, Monotonic};

 pub(crate) const fn to_lowercase_bytes<const N: usize>(s: &str) -> [u8; N] {
     let src = s.as_bytes();
···
 /// TODO[DLAY]: replace with `read_poll_timeout` once it is available.
 /// (https://lore.kernel.org/lkml/20250220070611.214262-8-fujita.tomonori@gmail.com/)
 pub(crate) fn wait_on<R, F: Fn() -> Option<R>>(timeout: Delta, cond: F) -> Result<R> {
-    let start_time = Instant::now();
+    let start_time = Instant::<Monotonic>::now();

     loop {
         if let Some(ret) = cond() {
+8
rust/Makefile
···
 obj-$(CONFIG_RUST_KERNEL_DOCTESTS) += doctests_kernel_generated_kunit.o

 always-$(subst y,$(CONFIG_RUST),$(CONFIG_JUMP_LABEL)) += kernel/generated_arch_static_branch_asm.rs
+ifndef CONFIG_UML
+always-$(subst y,$(CONFIG_RUST),$(CONFIG_BUG)) += kernel/generated_arch_warn_asm.rs kernel/generated_arch_reachable_asm.rs
+endif

 # Avoids running `$(RUSTC)` when it may not be available.
 ifdef CONFIG_RUST
···

 ifdef CONFIG_JUMP_LABEL
 $(obj)/kernel.o: $(obj)/kernel/generated_arch_static_branch_asm.rs
+endif
+ifndef CONFIG_UML
+ifdef CONFIG_BUG
+$(obj)/kernel.o: $(obj)/kernel/generated_arch_warn_asm.rs $(obj)/kernel/generated_arch_reachable_asm.rs
+endif
 endif

 endif # CONFIG_RUST
+3
rust/bindings/lib.rs
···
 )]

 #[allow(dead_code)]
+#[allow(clippy::cast_lossless)]
+#[allow(clippy::ptr_as_ptr)]
+#[allow(clippy::ref_as_ptr)]
 #[allow(clippy::undocumented_unsafe_blocks)]
 #[cfg_attr(CONFIG_RUSTC_HAS_UNNECESSARY_TRANSMUTES, allow(unnecessary_transmutes))]
 mod bindings_raw {
+5
rust/helpers/bug.c
···
 {
 	BUG();
 }
+
+bool rust_helper_WARN_ON(bool cond)
+{
+	return WARN_ON(cond);
+}
+3 -2
rust/helpers/helpers.c
···
 #include "mutex.c"
 #include "of.c"
 #include "page.c"
-#include "platform.c"
 #include "pci.c"
 #include "pid_namespace.c"
+#include "platform.c"
 #include "poll.c"
 #include "property.c"
 #include "rbtree.c"
-#include "regulator.c"
 #include "rcu.c"
 #include "refcount.c"
+#include "regulator.c"
 #include "security.c"
 #include "signal.c"
 #include "slab.c"
 #include "spinlock.c"
 #include "sync.c"
 #include "task.c"
+#include "time.c"
 #include "uaccess.c"
 #include "vmalloc.c"
 #include "wait.c"
+35
rust/helpers/time.c
···
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/delay.h>
+#include <linux/ktime.h>
+#include <linux/timekeeping.h>
+
+void rust_helper_fsleep(unsigned long usecs)
+{
+	fsleep(usecs);
+}
+
+ktime_t rust_helper_ktime_get_real(void)
+{
+	return ktime_get_real();
+}
+
+ktime_t rust_helper_ktime_get_boottime(void)
+{
+	return ktime_get_boottime();
+}
+
+ktime_t rust_helper_ktime_get_clocktai(void)
+{
+	return ktime_get_clocktai();
+}
+
+s64 rust_helper_ktime_to_us(const ktime_t kt)
+{
+	return ktime_to_us(kt);
+}
+
+s64 rust_helper_ktime_to_ms(const ktime_t kt)
+{
+	return ktime_to_ms(kt);
+}
+2
rust/kernel/.gitignore
···
 # SPDX-License-Identifier: GPL-2.0

 /generated_arch_static_branch_asm.rs
+/generated_arch_warn_asm.rs
+/generated_arch_reachable_asm.rs
+1 -1
rust/kernel/alloc/allocator_test.rs
···
         // SAFETY: Returns either NULL or a pointer to a memory allocation that satisfies or
         // exceeds the given size and alignment requirements.
-        let dst = unsafe { libc_aligned_alloc(layout.align(), layout.size()) } as *mut u8;
+        let dst = unsafe { libc_aligned_alloc(layout.align(), layout.size()) }.cast::<u8>();
         let dst = NonNull::new(dst).ok_or(AllocError)?;

         if flags.contains(__GFP_ZERO) {
+80 -18
rust/kernel/alloc/kbox.rs
···
 use super::allocator::{KVmalloc, Kmalloc, Vmalloc};
 use super::{AllocError, Allocator, Flags};
 use core::alloc::Layout;
+use core::borrow::{Borrow, BorrowMut};
 use core::fmt;
 use core::marker::PhantomData;
 use core::mem::ManuallyDrop;
···
 use core::ptr::NonNull;
 use core::result::Result;

+use crate::ffi::c_void;
 use crate::init::InPlaceInit;
 use crate::types::ForeignOwnable;
 use pin_init::{InPlaceWrite, Init, PinInit, ZeroableOption};
···
     }
 }

-// SAFETY: The `into_foreign` function returns a pointer that is well-aligned.
+// SAFETY: The pointer returned by `into_foreign` comes from a well aligned
+// pointer to `T`.
 unsafe impl<T: 'static, A> ForeignOwnable for Box<T, A>
 where
     A: Allocator,
 {
-    type PointedTo = T;
+    const FOREIGN_ALIGN: usize = core::mem::align_of::<T>();
     type Borrowed<'a> = &'a T;
     type BorrowedMut<'a> = &'a mut T;

-    fn into_foreign(self) -> *mut Self::PointedTo {
-        Box::into_raw(self)
+    fn into_foreign(self) -> *mut c_void {
+        Box::into_raw(self).cast()
     }

-    unsafe fn from_foreign(ptr: *mut Self::PointedTo) -> Self {
+    unsafe fn from_foreign(ptr: *mut c_void) -> Self {
         // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous
         // call to `Self::into_foreign`.
-        unsafe { Box::from_raw(ptr) }
+        unsafe { Box::from_raw(ptr.cast()) }
     }

-    unsafe fn borrow<'a>(ptr: *mut Self::PointedTo) -> &'a T {
+    unsafe fn borrow<'a>(ptr: *mut c_void) -> &'a T {
         // SAFETY: The safety requirements of this method ensure that the object remains alive and
         // immutable for the duration of 'a.
-        unsafe { &*ptr }
+        unsafe { &*ptr.cast() }
     }

-    unsafe fn borrow_mut<'a>(ptr: *mut Self::PointedTo) -> &'a mut T {
+    unsafe fn borrow_mut<'a>(ptr: *mut c_void) -> &'a mut T {
+        let ptr = ptr.cast();
         // SAFETY: The safety requirements of this method ensure that the pointer is valid and that
         // nothing else will access the value for the duration of 'a.
         unsafe { &mut *ptr }
     }
 }

-// SAFETY: The `into_foreign` function returns a pointer that is well-aligned.
+// SAFETY: The pointer returned by `into_foreign` comes from a well aligned
+// pointer to `T`.
 unsafe impl<T: 'static, A> ForeignOwnable for Pin<Box<T, A>>
 where
     A: Allocator,
 {
-    type PointedTo = T;
+    const FOREIGN_ALIGN: usize = core::mem::align_of::<T>();
     type Borrowed<'a> = Pin<&'a T>;
     type BorrowedMut<'a> = Pin<&'a mut T>;

-    fn into_foreign(self) -> *mut Self::PointedTo {
+    fn into_foreign(self) -> *mut c_void {
         // SAFETY: We are still treating the box as pinned.
-        Box::into_raw(unsafe { Pin::into_inner_unchecked(self) })
+        Box::into_raw(unsafe { Pin::into_inner_unchecked(self) }).cast()
     }

-    unsafe fn from_foreign(ptr: *mut Self::PointedTo) -> Self {
+    unsafe fn from_foreign(ptr: *mut c_void) -> Self {
         // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous
         // call to `Self::into_foreign`.
-        unsafe { Pin::new_unchecked(Box::from_raw(ptr)) }
+        unsafe { Pin::new_unchecked(Box::from_raw(ptr.cast())) }
     }

-    unsafe fn borrow<'a>(ptr: *mut Self::PointedTo) -> Pin<&'a T> {
+    unsafe fn borrow<'a>(ptr: *mut c_void) -> Pin<&'a T> {
         // SAFETY: The safety requirements for this function ensure that the object is still alive,
         // so it is safe to dereference the raw pointer.
         // The safety requirements of `from_foreign` also ensure that the object remains alive for
         // the lifetime of the returned value.
-        let r = unsafe { &*ptr };
+        let r = unsafe { &*ptr.cast() };

         // SAFETY: This pointer originates from a `Pin<Box<T>>`.
         unsafe { Pin::new_unchecked(r) }
     }

-    unsafe fn borrow_mut<'a>(ptr: *mut Self::PointedTo) -> Pin<&'a mut T> {
+    unsafe fn borrow_mut<'a>(ptr: *mut c_void) -> Pin<&'a mut T> {
+        let ptr = ptr.cast();
         // SAFETY: The safety requirements for this function ensure that the object is still alive,
         // so it is safe to dereference the raw pointer.
         // The safety requirements of `from_foreign` also ensure that the object remains alive for
···
         // SAFETY: `self.0` is always properly aligned, dereferenceable and points to an initialized
         // instance of `T`.
         unsafe { self.0.as_mut() }
     }
 }
+
+/// # Examples
+///
+/// ```
+/// # use core::borrow::Borrow;
+/// # use kernel::alloc::KBox;
+/// struct Foo<B: Borrow<u32>>(B);
+///
+/// // Owned instance.
+/// let owned = Foo(1);
+///
+/// // Owned instance using `KBox`.
+/// let owned_kbox = Foo(KBox::new(1, GFP_KERNEL)?);
+///
+/// let i = 1;
+/// // Borrowed from `i`.
+/// let borrowed = Foo(&i);
+/// # Ok::<(), Error>(())
+/// ```
+impl<T, A> Borrow<T> for Box<T, A>
+where
+    T: ?Sized,
+    A: Allocator,
+{
+    fn borrow(&self) -> &T {
+        self.deref()
+    }
+}
+
+/// # Examples
+///
+/// ```
+/// # use core::borrow::BorrowMut;
+/// # use kernel::alloc::KBox;
+/// struct Foo<B: BorrowMut<u32>>(B);
+///
+/// // Owned instance.
+/// let owned = Foo(1);
+///
+/// // Owned instance using `KBox`.
+/// let owned_kbox = Foo(KBox::new(1, GFP_KERNEL)?);
+///
+/// let mut i = 1;
+/// // Borrowed from `i`.
+/// let borrowed = Foo(&mut i);
+/// # Ok::<(), Error>(())
+/// ```
+impl<T, A> BorrowMut<T> for Box<T, A>
+where
+    T: ?Sized,
+    A: Allocator,
+{
+    fn borrow_mut(&mut self) -> &mut T {
+        self.deref_mut()
+    }
+}
+56 -3
rust/kernel/alloc/kvec.rs
···
     AllocError, Allocator, Box, Flags,
 };
 use core::{
+    borrow::{Borrow, BorrowMut},
     fmt,
     marker::PhantomData,
     mem::{ManuallyDrop, MaybeUninit},
···
         // - `self.len` is smaller than `self.capacity` by the type invariant and hence, the
         //   resulting pointer is guaranteed to be part of the same allocated object.
         // - `self.len` can not overflow `isize`.
-        let ptr = unsafe { self.as_mut_ptr().add(self.len) } as *mut MaybeUninit<T>;
+        let ptr = unsafe { self.as_mut_ptr().add(self.len) }.cast::<MaybeUninit<T>>();

         // SAFETY: The memory between `self.len` and `self.capacity` is guaranteed to be allocated
         // and valid, but uninitialized.
···
         // - `ptr` points to memory with at least a size of `size_of::<T>() * len`,
         // - all elements within `b` are initialized values of `T`,
         // - `len` does not exceed `isize::MAX`.
-        unsafe { Vec::from_raw_parts(ptr as _, len, len) }
+        unsafe { Vec::from_raw_parts(ptr.cast(), len, len) }
     }
 }

-impl<T> Default for KVec<T> {
+impl<T, A: Allocator> Default for Vec<T, A> {
     #[inline]
     fn default() -> Self {
         Self::new()
···
         // SAFETY: The memory behind `self.as_ptr()` is guaranteed to contain `self.len`
         // initialized elements of type `T`.
         unsafe { slice::from_raw_parts_mut(self.as_mut_ptr(), self.len) }
     }
 }
+
+/// # Examples
+///
+/// ```
+/// # use core::borrow::Borrow;
+/// struct Foo<B: Borrow<[u32]>>(B);
+///
+/// // Owned array.
+/// let owned_array = Foo([1, 2, 3]);
+///
+/// // Owned vector.
+/// let owned_vec = Foo(KVec::from_elem(0, 3, GFP_KERNEL)?);
+///
+/// let arr = [1, 2, 3];
+/// // Borrowed slice from `arr`.
+/// let borrowed_slice = Foo(&arr[..]);
+/// # Ok::<(), Error>(())
+/// ```
+impl<T, A> Borrow<[T]> for Vec<T, A>
+where
+    A: Allocator,
+{
+    fn borrow(&self) -> &[T] {
+        self.as_slice()
+    }
+}
+
+/// # Examples
+///
+/// ```
+/// # use core::borrow::BorrowMut;
+/// struct Foo<B: BorrowMut<[u32]>>(B);
+///
+/// // Owned array.
+/// let owned_array = Foo([1, 2, 3]);
+///
+/// // Owned vector.
+/// let owned_vec = Foo(KVec::from_elem(0, 3, GFP_KERNEL)?);
+///
+/// let mut arr = [1, 2, 3];
+/// // Borrowed slice from `arr`.
+/// let borrowed_slice = Foo(&mut arr[..]);
+/// # Ok::<(), Error>(())
+/// ```
+impl<T, A> BorrowMut<[T]> for Vec<T, A>
+where
+    A: Allocator,
+{
+    fn borrow_mut(&mut self) -> &mut [T] {
+        self.as_mut_slice()
+    }
+}
+203
rust/kernel/bits.rs
···
+// SPDX-License-Identifier: GPL-2.0
+
+//! Bit manipulation macros.
+//!
+//! C header: [`include/linux/bits.h`](srctree/include/linux/bits.h)
+
+use crate::prelude::*;
+use core::ops::RangeInclusive;
+use macros::paste;
+
+macro_rules! impl_bit_fn {
+    (
+        $ty:ty
+    ) => {
+        paste! {
+            /// Computes `1 << n` if `n` is in bounds, i.e.: if `n` is smaller than
+            /// the maximum number of bits supported by the type.
+            ///
+            /// Returns [`None`] otherwise.
+            #[inline]
+            pub fn [<checked_bit_ $ty>](n: u32) -> Option<$ty> {
+                (1 as $ty).checked_shl(n)
+            }
+
+            /// Computes `1 << n` by performing a compile-time assertion that `n` is
+            /// in bounds.
+            ///
+            /// This version is the default and should be used if `n` is known at
+            /// compile time.
+            #[inline]
+            pub const fn [<bit_ $ty>](n: u32) -> $ty {
+                build_assert!(n < <$ty>::BITS);
+                (1 as $ty) << n
+            }
+        }
+    };
+}
+
+impl_bit_fn!(u64);
+impl_bit_fn!(u32);
+impl_bit_fn!(u16);
+impl_bit_fn!(u8);
+
+macro_rules! impl_genmask_fn {
+    (
+        $ty:ty,
+        $(#[$genmask_checked_ex:meta])*,
+        $(#[$genmask_ex:meta])*
+    ) => {
+        paste! {
+            /// Creates a contiguous bitmask for the given range by validating
+            /// the range at runtime.
+            ///
+            /// Returns [`None`] if the range is invalid, i.e.: if the start is
+            /// greater than the end or if the range is outside of the
+            /// representable range for the type.
+            $(#[$genmask_checked_ex])*
+            #[inline]
+            pub fn [<genmask_checked_ $ty>](range: RangeInclusive<u32>) -> Option<$ty> {
+                let start = *range.start();
+                let end = *range.end();
+
+                if start > end {
+                    return None;
+                }
+
+                let high = [<checked_bit_ $ty>](end)?;
+                let low = [<checked_bit_ $ty>](start)?;
+                Some((high | (high - 1)) & !(low - 1))
+            }
+
+            /// Creates a compile-time contiguous bitmask for the given range by
+            /// performing a compile-time assertion that the range is valid.
+            ///
+            /// This version is the default and should be used if the range is known
+            /// at compile time.
+            $(#[$genmask_ex])*
+            #[inline]
+            pub const fn [<genmask_ $ty>](range: RangeInclusive<u32>) -> $ty {
+                let start = *range.start();
+                let end = *range.end();
+
+                build_assert!(start <= end);
+
+                let high = [<bit_ $ty>](end);
+                let low = [<bit_ $ty>](start);
+                (high | (high - 1)) & !(low - 1)
+            }
+        }
+    };
+}
+
+impl_genmask_fn!(
+    u64,
+    /// # Examples
+    ///
+    /// ```
+    /// # #![expect(clippy::reversed_empty_ranges)]
+    /// # use kernel::bits::genmask_checked_u64;
+    /// assert_eq!(genmask_checked_u64(0..=0), Some(0b1));
+    /// assert_eq!(genmask_checked_u64(0..=63), Some(u64::MAX));
+    /// assert_eq!(genmask_checked_u64(21..=39), Some(0x0000_00ff_ffe0_0000));
+    ///
+    /// // `80` is out of the supported bit range.
+    /// assert_eq!(genmask_checked_u64(21..=80), None);
+    ///
+    /// // Invalid range where the start is bigger than the end.
+    /// assert_eq!(genmask_checked_u64(15..=8), None);
+    /// ```
+    ,
+    /// # Examples
+    ///
+    /// ```
+    /// # use kernel::bits::genmask_u64;
+    /// assert_eq!(genmask_u64(21..=39), 0x0000_00ff_ffe0_0000);
+    /// assert_eq!(genmask_u64(0..=0), 0b1);
+    /// assert_eq!(genmask_u64(0..=63), u64::MAX);
+    /// ```
+);
+
+impl_genmask_fn!(
+    u32,
+    /// # Examples
+    ///
+    /// ```
+    /// # #![expect(clippy::reversed_empty_ranges)]
+    /// # use kernel::bits::genmask_checked_u32;
+    /// assert_eq!(genmask_checked_u32(0..=0), Some(0b1));
+    /// assert_eq!(genmask_checked_u32(0..=31), Some(u32::MAX));
+    /// assert_eq!(genmask_checked_u32(21..=31), Some(0xffe0_0000));
+    ///
+    /// // `40` is out of the supported bit range.
+    /// assert_eq!(genmask_checked_u32(21..=40), None);
+    ///
+    /// // Invalid range where the start is bigger than the end.
+    /// assert_eq!(genmask_checked_u32(15..=8), None);
+    /// ```
+    ,
+    /// # Examples
+    ///
+    /// ```
+    /// # use kernel::bits::genmask_u32;
+    /// assert_eq!(genmask_u32(21..=31), 0xffe0_0000);
+    /// assert_eq!(genmask_u32(0..=0), 0b1);
+    /// assert_eq!(genmask_u32(0..=31), u32::MAX);
+    /// ```
+);
+
+impl_genmask_fn!(
+    u16,
+    /// # Examples
+    ///
+    /// ```
+    /// # #![expect(clippy::reversed_empty_ranges)]
+    /// # use kernel::bits::genmask_checked_u16;
+    /// assert_eq!(genmask_checked_u16(0..=0), Some(0b1));
+    /// assert_eq!(genmask_checked_u16(0..=15), Some(u16::MAX));
+    /// assert_eq!(genmask_checked_u16(6..=15), Some(0xffc0));
+    ///
+    /// // `20` is out of the supported bit range.
+    /// assert_eq!(genmask_checked_u16(6..=20), None);
+    ///
+    /// // Invalid range where the start is bigger than the end.
+    /// assert_eq!(genmask_checked_u16(10..=5), None);
+    /// ```
+    ,
+    /// # Examples
+    ///
+    /// ```
+    /// # use kernel::bits::genmask_u16;
+    /// assert_eq!(genmask_u16(6..=15), 0xffc0);
+    /// assert_eq!(genmask_u16(0..=0), 0b1);
+    /// assert_eq!(genmask_u16(0..=15), u16::MAX);
+    /// ```
+);
+
+impl_genmask_fn!(
+    u8,
+    /// # Examples
+    ///
+    /// ```
+    /// # #![expect(clippy::reversed_empty_ranges)]
+    /// # use kernel::bits::genmask_checked_u8;
+    /// assert_eq!(genmask_checked_u8(0..=0), Some(0b1));
+    /// assert_eq!(genmask_checked_u8(0..=7), Some(u8::MAX));
+    /// assert_eq!(genmask_checked_u8(6..=7), Some(0xc0));
+    ///
+    /// // `10` is out of the supported bit range.
+    /// assert_eq!(genmask_checked_u8(6..=10), None);
+    ///
+    /// // Invalid range where the start is bigger than the end.
+    /// assert_eq!(genmask_checked_u8(5..=2), None);
+    /// ```
+    ,
+    /// # Examples
+    ///
+    /// ```
+    /// # use kernel::bits::genmask_u8;
+    /// assert_eq!(genmask_u8(6..=7), 0xc0);
+    /// assert_eq!(genmask_u8(0..=0), 0b1);
+    /// assert_eq!(genmask_u8(0..=7), u8::MAX);
+    /// ```
+);
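The mask construction above follows `(high | (high - 1)) & !(low - 1)`: set every bit up to and including `end`, then clear everything below `start`. A minimal userspace sketch of that math (here for `u32` only; the kernel version is generated per-type by the `impl_genmask_fn!` macro and offers a `build_assert!`-based compile-time variant as well):

```rust
// Sketch of the `genmask_checked_*` logic from `rust/kernel/bits.rs`,
// written as plain functions instead of kernel macros.
fn checked_bit_u32(n: u32) -> Option<u32> {
    // `checked_shl` returns `None` when `n >= u32::BITS`.
    1u32.checked_shl(n)
}

fn genmask_checked_u32(range: std::ops::RangeInclusive<u32>) -> Option<u32> {
    let (start, end) = (*range.start(), *range.end());
    if start > end {
        return None;
    }
    let high = checked_bit_u32(end)?;
    let low = checked_bit_u32(start)?;
    // Set bits 0..=end, then clear bits below `start`.
    Some((high | (high - 1)) & !(low - 1))
}

fn main() {
    assert_eq!(genmask_checked_u32(1..=4), Some(0b0001_1110));
    assert_eq!(genmask_checked_u32(0..=31), Some(u32::MAX));
    assert_eq!(genmask_checked_u32(21..=40), None); // end out of range
    assert_eq!(genmask_checked_u32(15..=8), None); // start > end
    println!("ok");
}
```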
+1 -1
rust/kernel/block/mq.rs
···
 //! [`GenDiskBuilder`]: gen_disk::GenDiskBuilder
 //! [`GenDiskBuilder::build`]: gen_disk::GenDiskBuilder::build
 //!
-//! # Example
+//! # Examples
 //!
 //! ```rust
 //! use kernel::{
+1 -1
rust/kernel/block/mq/operations.rs
···
     if let Err(e) = ret {
         e.to_blk_status()
     } else {
-        bindings::BLK_STS_OK as _
+        bindings::BLK_STS_OK as bindings::blk_status_t
     }
 }
+8 -3
rust/kernel/block/mq/request.rs
···
         // INVARIANT: By the safety requirements of this function, invariants are upheld.
         // SAFETY: By the safety requirement of this function, we own a
         // reference count that we can pass to `ARef`.
-        unsafe { ARef::from_raw(NonNull::new_unchecked(ptr as *const Self as *mut Self)) }
+        unsafe { ARef::from_raw(NonNull::new_unchecked(ptr.cast())) }
     }

     /// Notify the block layer that a request is going to be processed now.
···
         // success of the call to `try_set_end` guarantees that there are no
         // `ARef`s pointing to this request. Therefore it is safe to hand it
         // back to the block layer.
-        unsafe { bindings::blk_mq_end_request(request_ptr, bindings::BLK_STS_OK as _) };
+        unsafe {
+            bindings::blk_mq_end_request(
+                request_ptr,
+                bindings::BLK_STS_OK as bindings::blk_status_t,
+            )
+        };

         Ok(())
     }
···
         // the private data associated with this request is initialized and
         // valid. The existence of `&self` guarantees that the private data is
         // valid as a shared reference.
-        unsafe { Self::wrapper_ptr(self as *const Self as *mut Self).as_ref() }
+        unsafe { Self::wrapper_ptr(core::ptr::from_ref(self).cast_mut()).as_ref() }
     }
 }
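The pattern above recurs throughout this series: chained `as` casts like `ptr as *const Self as *mut Self` are replaced with `cast()`, `cast_mut()` and `core::ptr::from_ref()`, each of which changes exactly one property of the pointer (pointee type or mutability) and so cannot silently do both at once. A small standalone sketch of the difference, using a hypothetical `Wrapper` type:

```rust
use core::ptr;

// `repr(transparent)` guarantees `Wrapper` has the same layout as `u32`,
// which is what makes the pointer cast below sound.
#[repr(transparent)]
struct Wrapper(u32);

fn main() {
    let value = Wrapper(7);

    // Before: `&value as *const Wrapper as *const u32` -- a single `as` chain
    // that could also flip constness by accident without any compile error.
    // After: each step is explicit and restricted.
    let p: *const Wrapper = ptr::from_ref(&value);
    let q: *const u32 = p.cast::<u32>();

    // Changing mutability requires a visible `cast_mut()` at the call site.
    let _m: *mut u32 = q.cast_mut();

    // SAFETY: `Wrapper` is `repr(transparent)` over `u32`, so reading the
    // `u32` through the cast pointer is valid while `value` is live.
    assert_eq!(unsafe { *q }, 7);
    println!("ok");
}
```

This is exactly what the newly enabled Clippy lints (`ptr_as_ptr`, `ptr_cast_constness`, `ref_as_ptr`) push toward: the restricted helpers where the compiler can still check intent.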
+126
rust/kernel/bug.rs
···
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2024, 2025 FUJITA Tomonori <fujita.tomonori@gmail.com>
+
+//! Support for BUG and WARN functionality.
+//!
+//! C header: [`include/asm-generic/bug.h`](srctree/include/asm-generic/bug.h)
+
+#[macro_export]
+#[doc(hidden)]
+#[cfg(all(CONFIG_BUG, not(CONFIG_UML), not(CONFIG_LOONGARCH), not(CONFIG_ARM)))]
+#[cfg(CONFIG_DEBUG_BUGVERBOSE)]
+macro_rules! warn_flags {
+    ($flags:expr) => {
+        const FLAGS: u32 = $crate::bindings::BUGFLAG_WARNING | $flags;
+        const _FILE: &[u8] = file!().as_bytes();
+        // Plus one for null-terminator.
+        static FILE: [u8; _FILE.len() + 1] = {
+            let mut bytes = [0; _FILE.len() + 1];
+            let mut i = 0;
+            while i < _FILE.len() {
+                bytes[i] = _FILE[i];
+                i += 1;
+            }
+            bytes
+        };
+
+        // SAFETY:
+        // - `file`, `line`, `flags`, and `size` are all compile-time constants or
+        //   symbols, preventing any invalid memory access.
+        // - The asm block has no side effects and does not modify any registers
+        //   or memory. It is purely for embedding metadata into the ELF section.
+        unsafe {
+            $crate::asm!(
+                concat!(
+                    "/* {size} */",
+                    include!(concat!(env!("OBJTREE"), "/rust/kernel/generated_arch_warn_asm.rs")),
+                    include!(concat!(env!("OBJTREE"), "/rust/kernel/generated_arch_reachable_asm.rs")));
+                file = sym FILE,
+                line = const line!(),
+                flags = const FLAGS,
+                size = const ::core::mem::size_of::<$crate::bindings::bug_entry>(),
+            );
+        }
+    }
+}
+
+#[macro_export]
+#[doc(hidden)]
+#[cfg(all(CONFIG_BUG, not(CONFIG_UML), not(CONFIG_LOONGARCH), not(CONFIG_ARM)))]
+#[cfg(not(CONFIG_DEBUG_BUGVERBOSE))]
+macro_rules! warn_flags {
+    ($flags:expr) => {
+        const FLAGS: u32 = $crate::bindings::BUGFLAG_WARNING | $flags;
+
+        // SAFETY:
+        // - `flags` and `size` are all compile-time constants, preventing
+        //   any invalid memory access.
+        // - The asm block has no side effects and does not modify any registers
+        //   or memory. It is purely for embedding metadata into the ELF section.
+        unsafe {
+            $crate::asm!(
+                concat!(
+                    "/* {size} */",
+                    include!(concat!(env!("OBJTREE"), "/rust/kernel/generated_arch_warn_asm.rs")),
+                    include!(concat!(env!("OBJTREE"), "/rust/kernel/generated_arch_reachable_asm.rs")));
+                flags = const FLAGS,
+                size = const ::core::mem::size_of::<$crate::bindings::bug_entry>(),
+            );
+        }
+    }
+}
+
+#[macro_export]
+#[doc(hidden)]
+#[cfg(all(CONFIG_BUG, CONFIG_UML))]
+macro_rules! warn_flags {
+    ($flags:expr) => {
+        // SAFETY: It is always safe to call `warn_slowpath_fmt()`
+        // with a valid null-terminated string.
+        unsafe {
+            $crate::bindings::warn_slowpath_fmt(
+                $crate::c_str!(::core::file!()).as_char_ptr(),
+                line!() as $crate::ffi::c_int,
+                $flags as $crate::ffi::c_uint,
+                ::core::ptr::null(),
+            );
+        }
+    };
+}
+
+#[macro_export]
+#[doc(hidden)]
+#[cfg(all(CONFIG_BUG, any(CONFIG_LOONGARCH, CONFIG_ARM)))]
+macro_rules! warn_flags {
+    ($flags:expr) => {
+        // SAFETY: It is always safe to call `WARN_ON()`.
+        unsafe { $crate::bindings::WARN_ON(true) }
+    };
+}
+
+#[macro_export]
+#[doc(hidden)]
+#[cfg(not(CONFIG_BUG))]
+macro_rules! warn_flags {
+    ($flags:expr) => {};
+}
+
+#[doc(hidden)]
+pub const fn bugflag_taint(value: u32) -> u32 {
+    value << 8
+}
+
+/// Report a warning if `cond` is true and return the condition's evaluation result.
+#[macro_export]
+macro_rules! warn_on {
+    ($cond:expr) => {{
+        let cond = $cond;
+        if cond {
+            const WARN_ON_FLAGS: u32 = $crate::bug::bugflag_taint($crate::bindings::TAINT_WARN);
+
+            $crate::warn_flags!(WARN_ON_FLAGS);
+        }
+        cond
+    }};
+}
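The shape of `warn_on!` mirrors C's `WARN_ON()`: the condition is evaluated exactly once, the warning fires only when it holds, and the result is returned so it can drive control flow. A minimal userspace sketch of that pattern (the reporting here is just `eprintln!`; the kernel macro instead emits a `bug_entry` record via the shared inline-assembly path above):

```rust
// Sketch of the `warn_on!` evaluate-once-and-return pattern.
macro_rules! warn_on {
    ($cond:expr) => {{
        let cond = $cond;
        if cond {
            // Stand-in for the kernel's WARN machinery.
            eprintln!("WARNING at {}:{}: `{}`", file!(), line!(), stringify!($cond));
        }
        cond
    }};
}

fn main() {
    let value = 42;
    // The returned bool allows the C-style `if (WARN_ON(x)) return;` idiom.
    if warn_on!(value == 42) {
        return;
    }
    unreachable!();
}
```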
+3 -3
rust/kernel/clk.rs
···
 ///
 /// Represents a frequency in hertz, wrapping a [`c_ulong`] value.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// ```
 /// use kernel::clk::Hertz;
···
 /// Instances of this type are reference-counted. Calling [`Clk::get`] ensures that the
 /// allocation remains valid for the lifetime of the [`Clk`].
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to obtain and configure a clock for a device.
 ///
···
 /// Instances of this type are reference-counted. Calling [`OptionalClk::get`] ensures that the
 /// allocation remains valid for the lifetime of the [`OptionalClk`].
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to obtain and configure an optional clock for a
 /// device. The code functions correctly whether or not the clock is available.
+11 -19
rust/kernel/configfs.rs
···
 //!
 //! C header: [`include/linux/configfs.h`](srctree/include/linux/configfs.h)
 //!
-//! # Example
+//! # Examples
 //!
 //! ```ignore
 //! use kernel::alloc::flags;
···
         data: impl PinInit<Data, Error>,
     ) -> impl PinInit<Self, Error> {
         try_pin_init!(Self {
-            subsystem <- pin_init::zeroed().chain(
+            subsystem <- pin_init::init_zeroed().chain(
                 |place: &mut Opaque<bindings::configfs_subsystem>| {
                     // SAFETY: We initialized the required fields of `place.group` above.
                     unsafe {
···
         data: impl PinInit<Data, Error>,
     ) -> impl PinInit<Self, Error> {
         try_pin_init!(Self {
-            group <- pin_init::zeroed().chain(|v: &mut Opaque<bindings::config_group>| {
+            group <- pin_init::init_zeroed().chain(|v: &mut Opaque<bindings::config_group>| {
                 let place = v.get();
                 let name = name.as_bytes_with_nul().as_ptr();
                 // SAFETY: It is safe to initialize a group once it has been zeroed.
···
 // within the `group` field.
 unsafe impl<Data> HasGroup<Data> for Group<Data> {
     unsafe fn group(this: *const Self) -> *const bindings::config_group {
-        Opaque::raw_get(
+        Opaque::cast_into(
             // SAFETY: By impl and function safety requirements this field
             // projection is within bounds of the allocation.
             unsafe { &raw const (*this).group },
···
     };

     const fn vtable_ptr() -> *const bindings::configfs_group_operations {
-        &Self::VTABLE as *const bindings::configfs_group_operations
+        &Self::VTABLE
     }
 }
···
     };

     const fn vtable_ptr() -> *const bindings::configfs_item_operations {
-        &Self::VTABLE as *const bindings::configfs_item_operations
+        &Self::VTABLE
     }
 }
···
     };

     const fn vtable_ptr() -> *const bindings::configfs_item_operations {
-        &Self::VTABLE as *const bindings::configfs_item_operations
+        &Self::VTABLE
     }
 }
···
     let data: &Data = unsafe { get_group_data(c_group) };

     // SAFETY: By function safety requirements, `page` is writable for `PAGE_SIZE`.
-    let ret = O::show(data, unsafe { &mut *(page as *mut [u8; PAGE_SIZE]) });
+    let ret = O::show(data, unsafe { &mut *(page.cast::<[u8; PAGE_SIZE]>()) });

     match ret {
         Ok(size) => size as isize,
···

         // SAFETY: By function safety requirements, we have exclusive access to
         // `self` and the reference created below will be exclusive.
-        unsafe {
-            (&mut *self.0.get())[I] = (attribute as *const Attribute<ID, O, Data>)
-                .cast_mut()
-                .cast()
-        };
+        unsafe { (&mut *self.0.get())[I] = core::ptr::from_ref(attribute).cast_mut().cast() };
     }
 }
···
             ct_owner: owner.as_ptr(),
             ct_group_ops: GroupOperationsVTable::<Data, Child>::vtable_ptr().cast_mut(),
             ct_item_ops: ItemOperationsVTable::<$tpe, Data>::vtable_ptr().cast_mut(),
-            ct_attrs: (attributes as *const AttributeList<N, Data>)
-                .cast_mut()
-                .cast(),
+            ct_attrs: core::ptr::from_ref(attributes).cast_mut().cast(),
             ct_bin_attrs: core::ptr::null_mut(),
         }),
         _p: PhantomData,
···
             ct_owner: owner.as_ptr(),
             ct_group_ops: core::ptr::null_mut(),
             ct_item_ops: ItemOperationsVTable::<$tpe, Data>::vtable_ptr().cast_mut(),
-            ct_attrs: (attributes as *const AttributeList<N, Data>)
-                .cast_mut()
-                .cast(),
+            ct_attrs: core::ptr::from_ref(attributes).cast_mut().cast(),
             ct_bin_attrs: core::ptr::null_mut(),
         }),
         _p: PhantomData,
+5 -5
rust/kernel/cpufreq.rs
···
 /// The callers must ensure that the `struct cpufreq_frequency_table` is valid for access and
 /// remains valid for the lifetime of the returned reference.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to read a frequency value from [`Table`].
 ///
···
 ///
 /// This is used by the CPU frequency drivers to build a frequency table dynamically.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to create a CPU frequency table.
 ///
···
 /// The callers must ensure that the `struct cpufreq_policy` is valid for access and remains valid
 /// for the lifetime of the returned reference.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to create a CPU frequency table.
 ///
···
     fn set_data<T: ForeignOwnable>(&mut self, data: T) -> Result {
         if self.as_ref().driver_data.is_null() {
             // Transfer the ownership of the data to the foreign interface.
-            self.as_mut_ref().driver_data = <T as ForeignOwnable>::into_foreign(data) as _;
+            self.as_mut_ref().driver_data = <T as ForeignOwnable>::into_foreign(data).cast();
             Ok(())
         } else {
             Err(EBUSY)
···

 /// CPU frequency driver Registration.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to register a cpufreq driver.
 ///
+2 -2
rust/kernel/cpumask.rs
···
 /// The callers must ensure that the `struct cpumask` is valid for access and
 /// remains valid for the lifetime of the returned reference.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to update a [`Cpumask`].
 ///
···
 /// The callers must ensure that the `struct cpumask_var_t` is valid for access and remains valid
 /// for the lifetime of [`CpumaskVar`].
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to create and update a [`CpumaskVar`].
 ///
+2 -2
rust/kernel/device.rs
···
         #[cfg(CONFIG_PRINTK)]
         unsafe {
             bindings::_dev_printk(
-                klevel as *const _ as *const crate::ffi::c_char,
+                klevel.as_ptr().cast::<crate::ffi::c_char>(),
                 self.as_raw(),
                 c_str!("%pA").as_char_ptr(),
-                &msg as *const _ as *const crate::ffi::c_void,
+                core::ptr::from_ref(&msg).cast::<crate::ffi::c_void>(),
             )
         };
     }
+2 -2
rust/kernel/device_id.rs
···
         unsafe {
             raw_ids[i]
                 .as_mut_ptr()
-                .byte_offset(data_offset as _)
+                .byte_add(data_offset)
                 .cast::<usize>()
                 .write(i);
         }
···
     fn as_ptr(&self) -> *const T::RawType {
         // This cannot be `self.ids.as_ptr()`, as the return pointer must have correct provenance
         // to access the sentinel.
-        (self as *const Self).cast()
+        core::ptr::from_ref(self).cast()
     }

     fn id(&self, index: usize) -> &T::RawType {
+5 -5
rust/kernel/devres.rs
···
 /// [`Devres`] users should make sure to simply free the corresponding backing resource in `T`'s
 /// [`Drop`] implementation.
 ///
-/// # Example
+/// # Examples
 ///
 /// ```no_run
 /// # use kernel::{bindings, device::{Bound, Device}, devres::Devres, io::{Io, IoRaw}};
···
 ///     unsafe fn new(paddr: usize) -> Result<Self>{
 ///         // SAFETY: By the safety requirements of this function [`paddr`, `paddr` + `SIZE`) is
 ///         // valid for `ioremap`.
-///         let addr = unsafe { bindings::ioremap(paddr as _, SIZE as _) };
+///         let addr = unsafe { bindings::ioremap(paddr as bindings::phys_addr_t, SIZE) };
 ///         if addr.is_null() {
 ///             return Err(ENOMEM);
 ///         }
 ///
-///         Ok(IoMem(IoRaw::new(addr as _, SIZE)?))
+///         Ok(IoMem(IoRaw::new(addr as usize, SIZE)?))
 ///     }
 /// }
 ///
 /// impl<const SIZE: usize> Drop for IoMem<SIZE> {
 ///     fn drop(&mut self) {
 ///         // SAFETY: `self.0.addr()` is guaranteed to be properly mapped by `Self::new`.
-///         unsafe { bindings::iounmap(self.0.addr() as _); };
+///         unsafe { bindings::iounmap(self.0.addr() as *mut c_void); };
 ///     }
 /// }
 ///
···
     /// An error is returned if `dev` does not match the same [`Device`] this [`Devres`] instance
     /// has been created with.
     ///
-    /// # Example
+    /// # Examples
     ///
     /// ```no_run
     /// # #![cfg(CONFIG_PCI)]
+5 -5
rust/kernel/dma.rs
···
 impl Attrs {
     /// Get the raw representation of this attribute.
     pub(crate) fn as_raw(self) -> crate::ffi::c_ulong {
-        self.0 as _
+        self.0 as crate::ffi::c_ulong
     }

     /// Check whether `flags` is contained in `self`.
···
             dev: dev.into(),
             dma_handle,
             count,
-            cpu_addr: ret as *mut T,
+            cpu_addr: ret.cast::<T>(),
             dma_attrs,
         })
     }
···
     ///   slice is live.
     /// * Callers must ensure that this call does not race with a read or write to the same region
     ///   while the returned slice is live.
-    pub unsafe fn as_slice_mut(&self, offset: usize, count: usize) -> Result<&mut [T]> {
+    pub unsafe fn as_slice_mut(&mut self, offset: usize, count: usize) -> Result<&mut [T]> {
         self.validate_range(offset, count)?;
         // SAFETY:
         // - The pointer is valid due to type invariant on `CoherentAllocation`,
···
     /// unsafe { alloc.write(buf, 0)?; }
     /// # Ok::<(), Error>(()) }
     /// ```
-    pub unsafe fn write(&self, src: &[T], offset: usize) -> Result {
+    pub unsafe fn write(&mut self, src: &[T], offset: usize) -> Result {
         self.validate_range(offset, src.len())?;
         // SAFETY:
         // - The pointer is valid due to type invariant on `CoherentAllocation`
···
             bindings::dma_free_attrs(
                 self.dev.as_raw(),
                 size,
-                self.cpu_addr as _,
+                self.cpu_addr.cast(),
                 self.dma_handle,
                 self.dma_attrs.as_raw(),
             )
+4 -6
rust/kernel/drm/device.rs
···
             major: T::INFO.major,
             minor: T::INFO.minor,
             patchlevel: T::INFO.patchlevel,
-            name: T::INFO.name.as_char_ptr() as *mut _,
-            desc: T::INFO.desc.as_char_ptr() as *mut _,
+            name: T::INFO.name.as_char_ptr().cast_mut(),
+            desc: T::INFO.desc.as_char_ptr().cast_mut(),

             driver_features: drm::driver::FEAT_GEM,
             ioctls: T::IOCTLS.as_ptr(),
             num_ioctls: T::IOCTLS.len() as i32,
-            fops: &Self::GEM_FOPS as _,
+            fops: &Self::GEM_FOPS,
         };

         const GEM_FOPS: bindings::file_operations = drm::gem::create_fops();
···
     ///
     /// `ptr` must be a valid pointer to a `struct device` embedded in `Self`.
     unsafe fn from_drm_device(ptr: *const bindings::drm_device) -> *mut Self {
-        let ptr: *const Opaque<bindings::drm_device> = ptr.cast();
-
         // SAFETY: By the safety requirements of this function `ptr` is a valid pointer to a
         // `struct drm_device` embedded in `Self`.
-        unsafe { crate::container_of!(ptr, Self, dev) }.cast_mut()
+        unsafe { crate::container_of!(Opaque::cast_from(ptr), Self, dev) }.cast_mut()
     }

     /// Not intended to be called externally, except via declare_drm_ioctls!()
+1 -3
rust/kernel/drm/gem/mod.rs
···
     }

     unsafe fn from_raw<'a>(self_ptr: *mut bindings::drm_gem_object) -> &'a Self {
-        let self_ptr: *mut Opaque<bindings::drm_gem_object> = self_ptr.cast();
-
         // SAFETY: `obj` is guaranteed to be in an `Object<T>` via the safety contract of this
         // function
-        unsafe { &*crate::container_of!(self_ptr, Object<T>, obj) }
+        unsafe { &*crate::container_of!(Opaque::cast_from(self_ptr), Object<T>, obj) }
     }
 }
+5 -5
rust/kernel/error.rs
···

 use crate::{
     alloc::{layout::LayoutError, AllocError},
+    fmt,
     str::CStr,
 };

-use core::fmt;
 use core::num::NonZeroI32;
 use core::num::TryFromIntError;
 use core::str::Utf8Error;
···
     /// Returns the error encoded as a pointer.
     pub fn to_ptr<T>(self) -> *mut T {
         // SAFETY: `self.0` is a valid error due to its invariant.
-        unsafe { bindings::ERR_PTR(self.0.get() as _) as *mut _ }
+        unsafe { bindings::ERR_PTR(self.0.get() as crate::ffi::c_long).cast() }
     }

     /// Returns a string representing the error, if one exists.
···
             Some(name) => f
                 .debug_tuple(
                     // SAFETY: These strings are ASCII-only.
-                    unsafe { core::str::from_utf8_unchecked(name) },
+                    unsafe { core::str::from_utf8_unchecked(name.to_bytes()) },
                 )
                 .finish(),
         }
···
     }
 }

-impl From<core::fmt::Error> for Error {
-    fn from(_: core::fmt::Error) -> Error {
+impl From<fmt::Error> for Error {
+    fn from(_: fmt::Error) -> Error {
         code::EINVAL
     }
 }
+5 -4
rust/kernel/firmware.rs
···
     fn request_internal(name: &CStr, dev: &Device, func: FwFunc) -> Result<Self> {
         let mut fw: *mut bindings::firmware = core::ptr::null_mut();
         let pfw: *mut *mut bindings::firmware = &mut fw;
+        let pfw: *mut *const bindings::firmware = pfw.cast();

         // SAFETY: `pfw` is a valid pointer to a NULL initialized `bindings::firmware` pointer.
         // `name` and `dev` are valid as by their type invariants.
-        let ret = unsafe { func.0(pfw as _, name.as_char_ptr(), dev.as_raw()) };
+        let ret = unsafe { func.0(pfw, name.as_char_ptr(), dev.as_raw()) };
         if ret != 0 {
             return Err(Error::from_errno(ret));
         }
···
 /// Typically, such contracts would be enforced by a trait, however traits do not (yet) support
 /// const functions.
 ///
-/// # Example
+/// # Examples
 ///
 /// ```
 /// # mod module_firmware_test {
···
 /// module! {
 ///    type: MyModule,
 ///    name: "module_firmware_test",
-///    author: "Rust for Linux",
+///    authors: ["Rust for Linux"],
 ///    description: "module_firmware! test module",
 ///    license: "GPL",
 /// }
···
     /// Append path components to the [`ModInfoBuilder`] instance. Paths need to be separated
     /// with [`ModInfoBuilder::new_entry`].
     ///
-    /// # Example
+    /// # Examples
     ///
     /// ```
     /// use kernel::firmware::ModInfoBuilder;
+7
rust/kernel/fmt.rs
···
+// SPDX-License-Identifier: GPL-2.0
+
+//! Formatting utilities.
+//!
+//! This module is intended to be used in place of `core::fmt` in kernel code.
+
+pub use core::fmt::{Arguments, Debug, Display, Error, Formatter, Result, Write};
+1 -1
rust/kernel/fs/file.rs
···
         //
         // By the type invariants, there are no `fdget_pos` calls that did not take the
         // `f_pos_lock` mutex.
-        unsafe { LocalFile::from_raw_file(self as *const File as *const bindings::file) }
+        unsafe { LocalFile::from_raw_file(core::ptr::from_ref(self).cast()) }
     }
 }
+7
rust/kernel/generated_arch_reachable_asm.rs.S
···
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/bug.h>
+
+// Cut here.
+
+::kernel::concat_literals!(ARCH_WARN_REACHABLE)
+7
rust/kernel/generated_arch_warn_asm.rs.S
···
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/bug.h>
+
+// Cut here.
+
+::kernel::concat_literals!(ARCH_WARN_ASM("{file}", "{line}", "{flags}", "{size}"))
+13 -21
rust/kernel/init.rs
···
 //!
 //! ## General Examples
 //!
-//! ```rust,ignore
-//! # #![allow(clippy::disallowed_names)]
+//! ```rust
+//! # #![expect(clippy::disallowed_names, clippy::undocumented_unsafe_blocks)]
 //! use kernel::types::Opaque;
 //! use pin_init::pin_init_from_closure;
 //!
 //! // assume we have some `raw_foo` type in C:
 //! #[repr(C)]
 //! struct RawFoo([u8; 16]);
-//! extern {
+//! extern "C" {
 //!     fn init_foo(_: *mut RawFoo);
 //! }
 //!
···
 //! });
 //! ```
 //!
-//! ```rust,ignore
-//! # #![allow(unreachable_pub, clippy::disallowed_names)]
+//! ```rust
+//! # #![expect(unreachable_pub, clippy::disallowed_names)]
 //! use kernel::{prelude::*, types::Opaque};
 //! use core::{ptr::addr_of_mut, marker::PhantomPinned, pin::Pin};
 //! # mod bindings {
-//! #     #![allow(non_camel_case_types)]
+//! #     #![expect(non_camel_case_types, clippy::missing_safety_doc)]
 //! #     pub struct foo;
 //! #     pub unsafe fn init_foo(_ptr: *mut foo) {}
 //! #     pub unsafe fn destroy_foo(_ptr: *mut foo) {}
 //! #     pub unsafe fn enable_foo(_ptr: *mut foo, _flags: u32) -> i32 { 0 }
 //! # }
-//! # // `Error::from_errno` is `pub(crate)` in the `kernel` crate, thus provide a workaround.
-//! # trait FromErrno {
-//! #     fn from_errno(errno: core::ffi::c_int) -> Error {
-//! #         // Dummy error that can be constructed outside the `kernel` crate.
-//! #         Error::from(core::fmt::Error)
-//! #     }
-//! # }
-//! # impl FromErrno for Error {}
 //! /// # Invariants
 //! ///
 //! /// `foo` is always initialized
···
 //!             let foo = addr_of_mut!((*slot).foo);
 //!
 //!             // Initialize the `foo`
-//!             bindings::init_foo(Opaque::raw_get(foo));
+//!             bindings::init_foo(Opaque::cast_into(foo));
 //!
 //!             // Try to enable it.
-//!             let err = bindings::enable_foo(Opaque::raw_get(foo), flags);
+//!             let err = bindings::enable_foo(Opaque::cast_into(foo), flags);
 //!             if err != 0 {
 //!                 // Enabling has failed, first clean up the foo and then return the error.
-//!                 bindings::destroy_foo(Opaque::raw_get(foo));
+//!                 bindings::destroy_foo(Opaque::cast_into(foo));
 //!                 return Err(Error::from_errno(err));
 //!             }
 //!
···
 ///
 /// ```rust
 /// use kernel::error::Error;
-/// use pin_init::zeroed;
+/// use pin_init::init_zeroed;
 /// struct BigBuf {
 ///     big: KBox<[u8; 1024 * 1024 * 1024]>,
 ///     small: [u8; 1024 * 1024],
···
 /// impl BigBuf {
 ///     fn new() -> impl Init<Self, Error> {
 ///         try_init!(Self {
-///             big: KBox::init(zeroed(), GFP_KERNEL)?,
+///             big: KBox::init(init_zeroed(), GFP_KERNEL)?,
 ///             small: [0; 1024 * 1024],
 ///         }? Error)
 ///     }
···
 /// ```rust
 /// # #![feature(new_uninit)]
 /// use kernel::error::Error;
-/// use pin_init::zeroed;
+/// use pin_init::init_zeroed;
 /// #[pin_data]
 /// struct BigBuf {
 ///     big: KBox<[u8; 1024 * 1024 * 1024]>,
···
 /// impl BigBuf {
 ///     fn new() -> impl PinInit<Self, Error> {
 ///         try_pin_init!(Self {
-///             big: KBox::init(zeroed(), GFP_KERNEL)?,
+///             big: KBox::init(init_zeroed(), GFP_KERNEL)?,
 ///             small: [0; 1024 * 1024],
 ///             ptr: core::ptr::null_mut(),
 ///         }? Error)
+10 -10
rust/kernel/io.rs
··· 5 5 //! C header: [`include/asm-generic/io.h`](srctree/include/asm-generic/io.h) 6 6 7 7 use crate::error::{code::EINVAL, Result}; 8 - use crate::{bindings, build_assert}; 8 + use crate::{bindings, build_assert, ffi::c_void}; 9 9 10 10 pub mod mem; 11 11 pub mod resource; ··· 48 48 } 49 49 } 50 50 51 - /// IO-mapped memory, starting at the base address @addr and spanning @maxlen bytes. 51 + /// IO-mapped memory region. 52 52 /// 53 53 /// The creator (usually a subsystem / bus such as PCI) is responsible for creating the 54 54 /// mapping, performing an additional region request etc. ··· 61 61 /// # Examples 62 62 /// 63 63 /// ```no_run 64 - /// # use kernel::{bindings, io::{Io, IoRaw}}; 64 + /// # use kernel::{bindings, ffi::c_void, io::{Io, IoRaw}}; 65 65 /// # use core::ops::Deref; 66 66 /// 67 67 /// // See also [`pci::Bar`] for a real example. ··· 75 75 /// unsafe fn new(paddr: usize) -> Result<Self>{ 76 76 /// // SAFETY: By the safety requirements of this function [`paddr`, `paddr` + `SIZE`) is 77 77 /// // valid for `ioremap`. 78 - /// let addr = unsafe { bindings::ioremap(paddr as _, SIZE as _) }; 78 + /// let addr = unsafe { bindings::ioremap(paddr as bindings::phys_addr_t, SIZE) }; 79 79 /// if addr.is_null() { 80 80 /// return Err(ENOMEM); 81 81 /// } 82 82 /// 83 - /// Ok(IoMem(IoRaw::new(addr as _, SIZE)?)) 83 + /// Ok(IoMem(IoRaw::new(addr as usize, SIZE)?)) 84 84 /// } 85 85 /// } 86 86 /// 87 87 /// impl<const SIZE: usize> Drop for IoMem<SIZE> { 88 88 /// fn drop(&mut self) { 89 89 /// // SAFETY: `self.0.addr()` is guaranteed to be properly mapped by `Self::new`. 90 - /// unsafe { bindings::iounmap(self.0.addr() as _); }; 90 + /// unsafe { bindings::iounmap(self.0.addr() as *mut c_void); }; 91 91 /// } 92 92 /// } 93 93 /// ··· 124 124 let addr = self.io_addr_assert::<$type_name>(offset); 125 125 126 126 // SAFETY: By the type invariant `addr` is a valid address for MMIO operations. 
127 - unsafe { bindings::$c_fn(addr as _) } 127 + unsafe { bindings::$c_fn(addr as *const c_void) } 128 128 } 129 129 130 130 /// Read IO data from a given offset. ··· 136 136 let addr = self.io_addr::<$type_name>(offset)?; 137 137 138 138 // SAFETY: By the type invariant `addr` is a valid address for MMIO operations. 139 - Ok(unsafe { bindings::$c_fn(addr as _) }) 139 + Ok(unsafe { bindings::$c_fn(addr as *const c_void) }) 140 140 } 141 141 }; 142 142 } ··· 153 153 let addr = self.io_addr_assert::<$type_name>(offset); 154 154 155 155 // SAFETY: By the type invariant `addr` is a valid address for MMIO operations. 156 - unsafe { bindings::$c_fn(value, addr as _, ) } 156 + unsafe { bindings::$c_fn(value, addr as *mut c_void) } 157 157 } 158 158 159 159 /// Write IO data from a given offset. ··· 165 165 let addr = self.io_addr::<$type_name>(offset)?; 166 166 167 167 // SAFETY: By the type invariant `addr` is a valid address for MMIO operations. 168 - unsafe { bindings::$c_fn(value, addr as _) } 168 + unsafe { bindings::$c_fn(value, addr as *mut c_void) } 169 169 Ok(()) 170 170 } 171 171 };
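The io.rs hunk replaces inferred casts (`as _`) with explicit target types, in line with the newly enabled Clippy lints from the cover letter (`as_underscore` and friends). A standalone sketch of why the explicit spelling is preferred, using a hypothetical `ioremap_stub` stand-in rather than the real `bindings::ioremap`:

```rust
// Illustrative stand-in for `bindings::ioremap`; not kernel code.
fn ioremap_stub(paddr: u64, size: usize) -> *mut u8 {
    let _ = size;
    // Integer-to-pointer cast, fine for a sketch that never dereferences.
    paddr as *mut u8
}

fn main() {
    let paddr: usize = 0x1000;
    // With `paddr as u64` the widening is visible at the call site; with
    // `paddr as _` the target type silently follows the callee's signature.
    let mapped = ioremap_stub(paddr as u64, 64);
    assert_eq!(mapped as usize, 0x1000);
}
```

The point of the lint is that an `as _` cast keeps compiling, and silently changes meaning, if the callee's parameter type later changes.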
+8 -5
rust/kernel/kunit.rs
··· 7 7 //! Reference: <https://docs.kernel.org/dev-tools/kunit/index.html> 8 8 9 9 use crate::prelude::*; 10 - use core::{ffi::c_void, fmt}; 10 + use core::fmt; 11 + 12 + #[cfg(CONFIG_PRINTK)] 13 + use crate::c_str; 11 14 12 15 /// Prints a KUnit error-level message. 13 16 /// ··· 22 19 #[cfg(CONFIG_PRINTK)] 23 20 unsafe { 24 21 bindings::_printk( 25 - c"\x013%pA".as_ptr() as _, 26 - &args as *const _ as *const c_void, 22 + c_str!("\x013%pA").as_char_ptr(), 23 + core::ptr::from_ref(&args).cast::<c_void>(), 27 24 ); 28 25 } 29 26 } ··· 38 35 #[cfg(CONFIG_PRINTK)] 39 36 unsafe { 40 37 bindings::_printk( 41 - c"\x016%pA".as_ptr() as _, 42 - &args as *const _ as *const c_void, 38 + c_str!("\x016%pA").as_char_ptr(), 39 + core::ptr::from_ref(&args).cast::<c_void>(), 43 40 ); 44 41 } 45 42 }
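The kunit.rs hunk swaps the double cast `&args as *const _ as *const c_void` for `core::ptr::from_ref(&args).cast::<c_void>()`. Both spellings yield the same address; a minimal standalone check of that equivalence:

```rust
use core::ffi::c_void;

fn main() {
    let args = 42u32;
    // Old spelling: two chained `as` casts, either of which could silently
    // change mutability or pointee type.
    let old_style = &args as *const _ as *const c_void;
    // New spelling: `from_ref` fixes the mutability, `.cast()` only changes
    // the pointee type.
    let new_style = core::ptr::from_ref(&args).cast::<c_void>();
    assert_eq!(old_style, new_style);
}
```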
+10
rust/kernel/lib.rs
··· 62 62 pub mod alloc; 63 63 #[cfg(CONFIG_AUXILIARY_BUS)] 64 64 pub mod auxiliary; 65 + pub mod bits; 65 66 #[cfg(CONFIG_BLOCK)] 66 67 pub mod block; 68 + pub mod bug; 67 69 #[doc(hidden)] 68 70 pub mod build_assert; 69 71 pub mod clk; ··· 87 85 pub mod faux; 88 86 #[cfg(CONFIG_RUST_FW_LOADER_ABSTRACTIONS)] 89 87 pub mod firmware; 88 + pub mod fmt; 90 89 pub mod fs; 91 90 pub mod init; 92 91 pub mod io; ··· 215 212 } 216 213 217 214 /// Produces a pointer to an object from a pointer to one of its fields. 215 + /// 216 + /// If you encounter a type mismatch due to the [`Opaque`] type, then use [`Opaque::cast_into`] or 217 + /// [`Opaque::cast_from`] to resolve the mismatch. 218 + /// 219 + /// [`Opaque`]: crate::types::Opaque 220 + /// [`Opaque::cast_into`]: crate::types::Opaque::cast_into 221 + /// [`Opaque::cast_from`]: crate::types::Opaque::cast_from 218 222 /// 219 223 /// # Safety 220 224 ///
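The lib.rs hunk registers the new `bits` and `bug` modules described in the cover letter. The semantics of the `bits` helpers can be sketched standalone; the functions below are illustrative reimplementations matching the cover letter's examples, not the kernel's actual code:

```rust
// Sketch of `bit_u8`: a compile-time-usable single-bit mask.
const fn bit_u8(n: u32) -> u8 {
    1u8 << n
}

// Sketch of `genmask_u8` over an inclusive bit range [lo, hi].
const fn genmask_u8(lo: u32, hi: u32) -> u8 {
    let mut mask = 0u8;
    let mut i = lo;
    while i <= hi {
        mask |= 1 << i;
        i += 1;
    }
    mask
}

// Sketch of the checked runtime variant: `None` when the shift overflows.
fn checked_bit_u32(n: u32) -> Option<u32> {
    1u32.checked_shl(n)
}

fn main() {
    assert_eq!(bit_u8(4), 0b00010000);
    assert_eq!(genmask_u8(1, 4), 0b00011110);
    assert!(checked_bit_u32(u32::BITS).is_none());
}
```

The assertions mirror the `static_assert!`/`assert!` examples quoted in the pull message.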
+32 -31
rust/kernel/list.rs
··· 57 57 /// } 58 58 /// } 59 59 /// 60 - /// impl_has_list_links! { 61 - /// impl HasListLinks<0> for BasicItem { self.links } 62 - /// } 63 60 /// impl_list_arc_safe! { 64 61 /// impl ListArcSafe<0> for BasicItem { untracked; } 65 62 /// } 66 63 /// impl_list_item! { 67 - /// impl ListItem<0> for BasicItem { using ListLinks; } 64 + /// impl ListItem<0> for BasicItem { using ListLinks { self.links }; } 68 65 /// } 69 66 /// 70 67 /// // Create a new empty list. ··· 79 82 /// // [15, 10, 30] 80 83 /// { 81 84 /// let mut iter = list.iter(); 82 - /// assert_eq!(iter.next().unwrap().value, 15); 83 - /// assert_eq!(iter.next().unwrap().value, 10); 84 - /// assert_eq!(iter.next().unwrap().value, 30); 85 + /// assert_eq!(iter.next().ok_or(EINVAL)?.value, 15); 86 + /// assert_eq!(iter.next().ok_or(EINVAL)?.value, 10); 87 + /// assert_eq!(iter.next().ok_or(EINVAL)?.value, 30); 85 88 /// assert!(iter.next().is_none()); 86 89 /// 87 90 /// // Verify the length of the list. ··· 90 93 /// 91 94 /// // Pop the items from the list using `pop_back()` and verify the content. 92 95 /// { 93 - /// assert_eq!(list.pop_back().unwrap().value, 30); 94 - /// assert_eq!(list.pop_back().unwrap().value, 10); 95 - /// assert_eq!(list.pop_back().unwrap().value, 15); 96 + /// assert_eq!(list.pop_back().ok_or(EINVAL)?.value, 30); 97 + /// assert_eq!(list.pop_back().ok_or(EINVAL)?.value, 10); 98 + /// assert_eq!(list.pop_back().ok_or(EINVAL)?.value, 15); 96 99 /// } 97 100 /// 98 101 /// // Insert 3 elements using `push_front()`. 
··· 104 107 /// // [30, 10, 15] 105 108 /// { 106 109 /// let mut iter = list.iter(); 107 - /// assert_eq!(iter.next().unwrap().value, 30); 108 - /// assert_eq!(iter.next().unwrap().value, 10); 109 - /// assert_eq!(iter.next().unwrap().value, 15); 110 + /// assert_eq!(iter.next().ok_or(EINVAL)?.value, 30); 111 + /// assert_eq!(iter.next().ok_or(EINVAL)?.value, 10); 112 + /// assert_eq!(iter.next().ok_or(EINVAL)?.value, 15); 110 113 /// assert!(iter.next().is_none()); 111 114 /// 112 115 /// // Verify the length of the list. ··· 115 118 /// 116 119 /// // Pop the items from the list using `pop_front()` and verify the content. 117 120 /// { 118 - /// assert_eq!(list.pop_front().unwrap().value, 30); 119 - /// assert_eq!(list.pop_front().unwrap().value, 10); 121 + /// assert_eq!(list.pop_front().ok_or(EINVAL)?.value, 30); 122 + /// assert_eq!(list.pop_front().ok_or(EINVAL)?.value, 10); 120 123 /// } 121 124 /// 122 125 /// // Push `list2` to `list` through `push_all_back()`. ··· 132 135 /// // list: [15, 25, 35] 133 136 /// // list2: [] 134 137 /// let mut iter = list.iter(); 135 - /// assert_eq!(iter.next().unwrap().value, 15); 136 - /// assert_eq!(iter.next().unwrap().value, 25); 137 - /// assert_eq!(iter.next().unwrap().value, 35); 138 + /// assert_eq!(iter.next().ok_or(EINVAL)?.value, 15); 139 + /// assert_eq!(iter.next().ok_or(EINVAL)?.value, 25); 140 + /// assert_eq!(iter.next().ok_or(EINVAL)?.value, 35); 138 141 /// assert!(iter.next().is_none()); 139 142 /// assert!(list2.is_empty()); 140 143 /// } ··· 281 284 #[inline] 282 285 unsafe fn fields(me: *mut Self) -> *mut ListLinksFields { 283 286 // SAFETY: The caller promises that the pointer is valid. 
284 - unsafe { Opaque::raw_get(ptr::addr_of!((*me).inner)) } 287 + unsafe { Opaque::cast_into(ptr::addr_of!((*me).inner)) } 285 288 } 286 289 287 290 /// # Safety ··· 317 320 unsafe impl<T: ?Sized + Sync, const ID: u64> Sync for ListLinksSelfPtr<T, ID> {} 318 321 319 322 impl<T: ?Sized, const ID: u64> ListLinksSelfPtr<T, ID> { 320 - /// The offset from the [`ListLinks`] to the self pointer field. 321 - pub const LIST_LINKS_SELF_PTR_OFFSET: usize = core::mem::offset_of!(Self, self_ptr); 322 - 323 323 /// Creates a new initializer for this type. 324 324 pub fn new() -> impl PinInit<Self> { 325 325 // INVARIANT: Pin-init initializers can't be used on an existing `Arc`, so this value will ··· 330 336 }, 331 337 self_ptr: Opaque::uninit(), 332 338 } 339 + } 340 + 341 + /// Returns a pointer to the self pointer. 342 + /// 343 + /// # Safety 344 + /// 345 + /// The provided pointer must point at a valid struct of type `Self`. 346 + pub unsafe fn raw_get_self_ptr(me: *const Self) -> *const Opaque<*const T> { 347 + // SAFETY: The caller promises that the pointer is valid. 348 + unsafe { ptr::addr_of!((*me).self_ptr) } 333 349 } 334 350 } 335 351 ··· 715 711 /// } 716 712 /// } 717 713 /// 718 - /// kernel::list::impl_has_list_links! { 719 - /// impl HasListLinks<0> for ListItem { self.links } 720 - /// } 721 714 /// kernel::list::impl_list_arc_safe! { 722 715 /// impl ListArcSafe<0> for ListItem { untracked; } 723 716 /// } 724 717 /// kernel::list::impl_list_item! { 725 - /// impl ListItem<0> for ListItem { using ListLinks; } 718 + /// impl ListItem<0> for ListItem { using ListLinks { self.links }; } 726 719 /// } 727 720 /// 728 721 /// // Use a cursor to remove the first element with the given value. 
··· 810 809 /// merge_sorted(&mut list, list2); 811 810 /// 812 811 /// let mut items = list.into_iter(); 813 - /// assert_eq!(items.next().unwrap().value, 10); 814 - /// assert_eq!(items.next().unwrap().value, 11); 815 - /// assert_eq!(items.next().unwrap().value, 12); 816 - /// assert_eq!(items.next().unwrap().value, 13); 817 - /// assert_eq!(items.next().unwrap().value, 14); 812 + /// assert_eq!(items.next().ok_or(EINVAL)?.value, 10); 813 + /// assert_eq!(items.next().ok_or(EINVAL)?.value, 11); 814 + /// assert_eq!(items.next().ok_or(EINVAL)?.value, 12); 815 + /// assert_eq!(items.next().ok_or(EINVAL)?.value, 13); 816 + /// assert_eq!(items.next().ok_or(EINVAL)?.value, 14); 818 817 /// assert!(items.next().is_none()); 819 818 /// # Result::<(), Error>::Ok(()) 820 819 /// ```
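Throughout the list.rs doctests, `unwrap()` is replaced with `ok_or(EINVAL)?`, so a failed `next()` becomes an error return instead of a panic. A standalone sketch of the pattern, with a hypothetical `Einval` unit type standing in for the kernel's `EINVAL`:

```rust
// Stand-in for the kernel's EINVAL error code.
#[derive(Debug, PartialEq)]
struct Einval;

// Pull the first two items, turning a short input into an error rather
// than panicking, as the updated doctests do with `ok_or(EINVAL)?`.
fn first_two(values: &[u32]) -> Result<(u32, u32), Einval> {
    let mut iter = values.iter();
    let a = *iter.next().ok_or(Einval)?;
    let b = *iter.next().ok_or(Einval)?;
    Ok((a, b))
}

fn main() {
    assert_eq!(first_two(&[15, 10, 30]), Ok((15, 10)));
    assert_eq!(first_two(&[15]), Err(Einval));
}
```

In kernel doctests this matters because they return `Result`, so `?` reports a clean error where `unwrap()` would abort the test with a panic.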
+158 -81
rust/kernel/list/impl_list_item_mod.rs
··· 4 4 5 5 //! Helpers for implementing list traits safely. 6 6 7 - use crate::list::ListLinks; 8 - 9 - /// Declares that this type has a `ListLinks<ID>` field at a fixed offset. 7 + /// Declares that this type has a [`ListLinks<ID>`] field. 10 8 /// 11 - /// This trait is only used to help implement `ListItem` safely. If `ListItem` is implemented 9 + /// This trait is only used to help implement [`ListItem`] safely. If [`ListItem`] is implemented 12 10 /// manually, then this trait is not needed. Use the [`impl_has_list_links!`] macro to implement 13 11 /// this trait. 14 12 /// 15 13 /// # Safety 16 14 /// 17 - /// All values of this type must have a `ListLinks<ID>` field at the given offset. 15 + /// The methods on this trait must have exactly the behavior that the definitions given below have. 18 16 /// 19 - /// The behavior of `raw_get_list_links` must not be changed. 17 + /// [`ListLinks<ID>`]: crate::list::ListLinks 18 + /// [`ListItem`]: crate::list::ListItem 20 19 pub unsafe trait HasListLinks<const ID: u64 = 0> { 21 - /// The offset of the `ListLinks` field. 22 - const OFFSET: usize; 23 - 24 - /// Returns a pointer to the [`ListLinks<T, ID>`] field. 20 + /// Returns a pointer to the [`ListLinks<ID>`] field. 25 21 /// 26 22 /// # Safety 27 23 /// 28 24 /// The provided pointer must point at a valid struct of type `Self`. 29 25 /// 30 - /// [`ListLinks<T, ID>`]: ListLinks 31 - // We don't really need this method, but it's necessary for the implementation of 32 - // `impl_has_list_links!` to be correct. 33 - #[inline] 34 - unsafe fn raw_get_list_links(ptr: *mut Self) -> *mut ListLinks<ID> { 35 - // SAFETY: The caller promises that the pointer is valid. The implementer promises that the 36 - // `OFFSET` constant is correct. 
37 - unsafe { (ptr as *mut u8).add(Self::OFFSET) as *mut ListLinks<ID> } 38 - } 26 + /// [`ListLinks<ID>`]: crate::list::ListLinks 27 + unsafe fn raw_get_list_links(ptr: *mut Self) -> *mut crate::list::ListLinks<ID>; 39 28 } 40 29 41 30 /// Implements the [`HasListLinks`] trait for the given type. 42 31 #[macro_export] 43 32 macro_rules! impl_has_list_links { 44 - ($(impl$(<$($implarg:ident),*>)? 33 + ($(impl$({$($generics:tt)*})? 45 34 HasListLinks$(<$id:tt>)? 46 - for $self:ident $(<$($selfarg:ty),*>)? 35 + for $self:ty 47 36 { self$(.$field:ident)* } 48 37 )*) => {$( 49 38 // SAFETY: The implementation of `raw_get_list_links` only compiles if the field has the 50 39 // right type. 51 - // 52 - // The behavior of `raw_get_list_links` is not changed since the `addr_of_mut!` macro is 53 - // equivalent to the pointer offset operation in the trait definition. 54 - unsafe impl$(<$($implarg),*>)? $crate::list::HasListLinks$(<$id>)? for 55 - $self $(<$($selfarg),*>)? 56 - { 57 - const OFFSET: usize = ::core::mem::offset_of!(Self, $($field).*) as usize; 58 - 40 + unsafe impl$(<$($generics)*>)? $crate::list::HasListLinks$(<$id>)? for $self { 59 41 #[inline] 60 42 unsafe fn raw_get_list_links(ptr: *mut Self) -> *mut $crate::list::ListLinks$(<$id>)? { 43 + // Statically ensure that `$(.field)*` doesn't follow any pointers. 44 + // 45 + // Cannot be `const` because `$self` may contain generics and E0401 says constants 46 + // "can't use {`Self`,generic parameters} from outer item". 47 + if false { let _: usize = ::core::mem::offset_of!(Self, $($field).*); } 48 + 61 49 // SAFETY: The caller promises that the pointer is not dangling. We know that this 62 50 // expression doesn't follow any pointers, as the `offset_of!` invocation above 63 51 // would otherwise not compile. ··· 56 68 } 57 69 pub use impl_has_list_links; 58 70 59 - /// Declares that the `ListLinks<ID>` field in this struct is inside a `ListLinksSelfPtr<T, ID>`. 
71 + /// Declares that the [`ListLinks<ID>`] field in this struct is inside a 72 + /// [`ListLinksSelfPtr<T, ID>`]. 60 73 /// 61 74 /// # Safety 62 75 /// 63 - /// The `ListLinks<ID>` field of this struct at the offset `HasListLinks<ID>::OFFSET` must be 64 - /// inside a `ListLinksSelfPtr<T, ID>`. 76 + /// The [`ListLinks<ID>`] field of this struct at [`HasListLinks<ID>::raw_get_list_links`] must be 77 + /// inside a [`ListLinksSelfPtr<T, ID>`]. 78 + /// 79 + /// [`ListLinks<ID>`]: crate::list::ListLinks 80 + /// [`ListLinksSelfPtr<T, ID>`]: crate::list::ListLinksSelfPtr 65 81 pub unsafe trait HasSelfPtr<T: ?Sized, const ID: u64 = 0> 66 82 where 67 83 Self: HasListLinks<ID>, ··· 75 83 /// Implements the [`HasListLinks`] and [`HasSelfPtr`] traits for the given type. 76 84 #[macro_export] 77 85 macro_rules! impl_has_list_links_self_ptr { 78 - ($(impl$({$($implarg:tt)*})? 86 + ($(impl$({$($generics:tt)*})? 79 87 HasSelfPtr<$item_type:ty $(, $id:tt)?> 80 - for $self:ident $(<$($selfarg:ty),*>)? 81 - { self.$field:ident } 88 + for $self:ty 89 + { self$(.$field:ident)* } 82 90 )*) => {$( 83 91 // SAFETY: The implementation of `raw_get_list_links` only compiles if the field has the 84 92 // right type. 85 - unsafe impl$(<$($implarg)*>)? $crate::list::HasSelfPtr<$item_type $(, $id)?> for 86 - $self $(<$($selfarg),*>)? 87 - {} 93 + unsafe impl$(<$($generics)*>)? $crate::list::HasSelfPtr<$item_type $(, $id)?> for $self {} 88 94 89 - unsafe impl$(<$($implarg)*>)? $crate::list::HasListLinks$(<$id>)? for 90 - $self $(<$($selfarg),*>)? 91 - { 92 - const OFFSET: usize = ::core::mem::offset_of!(Self, $field) as usize; 93 - 95 + unsafe impl$(<$($generics)*>)? $crate::list::HasListLinks$(<$id>)? for $self { 94 96 #[inline] 95 97 unsafe fn raw_get_list_links(ptr: *mut Self) -> *mut $crate::list::ListLinks$(<$id>)? { 96 98 // SAFETY: The caller promises that the pointer is not dangling. 
97 99 let ptr: *mut $crate::list::ListLinksSelfPtr<$item_type $(, $id)?> = 98 - unsafe { ::core::ptr::addr_of_mut!((*ptr).$field) }; 100 + unsafe { ::core::ptr::addr_of_mut!((*ptr)$(.$field)*) }; 99 101 ptr.cast() 100 102 } 101 103 } ··· 103 117 /// implement that trait. 104 118 /// 105 119 /// [`ListItem`]: crate::list::ListItem 120 + /// 121 + /// # Examples 122 + /// 123 + /// ``` 124 + /// #[pin_data] 125 + /// struct SimpleListItem { 126 + /// value: u32, 127 + /// #[pin] 128 + /// links: kernel::list::ListLinks, 129 + /// } 130 + /// 131 + /// kernel::list::impl_list_arc_safe! { 132 + /// impl ListArcSafe<0> for SimpleListItem { untracked; } 133 + /// } 134 + /// 135 + /// kernel::list::impl_list_item! { 136 + /// impl ListItem<0> for SimpleListItem { using ListLinks { self.links }; } 137 + /// } 138 + /// 139 + /// struct ListLinksHolder { 140 + /// inner: kernel::list::ListLinks, 141 + /// } 142 + /// 143 + /// #[pin_data] 144 + /// struct ComplexListItem<T, U> { 145 + /// value: Result<T, U>, 146 + /// #[pin] 147 + /// links: ListLinksHolder, 148 + /// } 149 + /// 150 + /// kernel::list::impl_list_arc_safe! { 151 + /// impl{T, U} ListArcSafe<0> for ComplexListItem<T, U> { untracked; } 152 + /// } 153 + /// 154 + /// kernel::list::impl_list_item! { 155 + /// impl{T, U} ListItem<0> for ComplexListItem<T, U> { using ListLinks { self.links.inner }; } 156 + /// } 157 + /// ``` 158 + /// 159 + /// ``` 160 + /// #[pin_data] 161 + /// struct SimpleListItem { 162 + /// value: u32, 163 + /// #[pin] 164 + /// links: kernel::list::ListLinksSelfPtr<SimpleListItem>, 165 + /// } 166 + /// 167 + /// kernel::list::impl_list_arc_safe! { 168 + /// impl ListArcSafe<0> for SimpleListItem { untracked; } 169 + /// } 170 + /// 171 + /// kernel::list::impl_list_item! 
{ 172 + /// impl ListItem<0> for SimpleListItem { using ListLinksSelfPtr { self.links }; } 173 + /// } 174 + /// 175 + /// struct ListLinksSelfPtrHolder<T, U> { 176 + /// inner: kernel::list::ListLinksSelfPtr<ComplexListItem<T, U>>, 177 + /// } 178 + /// 179 + /// #[pin_data] 180 + /// struct ComplexListItem<T, U> { 181 + /// value: Result<T, U>, 182 + /// #[pin] 183 + /// links: ListLinksSelfPtrHolder<T, U>, 184 + /// } 185 + /// 186 + /// kernel::list::impl_list_arc_safe! { 187 + /// impl{T, U} ListArcSafe<0> for ComplexListItem<T, U> { untracked; } 188 + /// } 189 + /// 190 + /// kernel::list::impl_list_item! { 191 + /// impl{T, U} ListItem<0> for ComplexListItem<T, U> { 192 + /// using ListLinksSelfPtr { self.links.inner }; 193 + /// } 194 + /// } 195 + /// ``` 106 196 #[macro_export] 107 197 macro_rules! impl_list_item { 108 198 ( 109 - $(impl$({$($generics:tt)*})? ListItem<$num:tt> for $t:ty { 110 - using ListLinks; 199 + $(impl$({$($generics:tt)*})? ListItem<$num:tt> for $self:ty { 200 + using ListLinks { self$(.$field:ident)* }; 111 201 })* 112 202 ) => {$( 203 + $crate::list::impl_has_list_links! { 204 + impl$({$($generics)*})? HasListLinks<$num> for $self { self$(.$field)* } 205 + } 206 + 113 207 // SAFETY: See GUARANTEES comment on each method. 114 - unsafe impl$(<$($generics)*>)? $crate::list::ListItem<$num> for $t { 208 + unsafe impl$(<$($generics)*>)? $crate::list::ListItem<$num> for $self { 115 209 // GUARANTEES: 116 210 // * This returns the same pointer as `prepare_to_insert` because `prepare_to_insert` 117 211 // is implemented in terms of `view_links`. ··· 205 139 } 206 140 207 141 // GUARANTEES: 208 - // * `me` originates from the most recent call to `prepare_to_insert`, which just added 209 - // `offset` to the pointer passed to `prepare_to_insert`. This method subtracts 210 - // `offset` from `me` so it returns the pointer originally passed to 211 - // `prepare_to_insert`. 
142 + // * `me` originates from the most recent call to `prepare_to_insert`, which calls 143 + // `raw_get_list_link`, which is implemented using `addr_of_mut!((*self)$(.$field)*)`. 144 + // This method uses `container_of` to perform the inverse operation, so it returns the 145 + // pointer originally passed to `prepare_to_insert`. 212 146 // * The pointer remains valid until the next call to `post_remove` because the caller 213 147 // of the most recent call to `prepare_to_insert` promised to retain ownership of the 214 148 // `ListArc` containing `Self` until the next call to `post_remove`. The value cannot 215 149 // be destroyed while a `ListArc` reference exists. 216 150 unsafe fn view_value(me: *mut $crate::list::ListLinks<$num>) -> *const Self { 217 - let offset = <Self as $crate::list::HasListLinks<$num>>::OFFSET; 218 151 // SAFETY: `me` originates from the most recent call to `prepare_to_insert`, so it 219 - // points at the field at offset `offset` in a value of type `Self`. Thus, 220 - // subtracting `offset` from `me` is still in-bounds of the allocation. 221 - unsafe { (me as *const u8).sub(offset) as *const Self } 152 + // points at the field `$field` in a value of type `Self`. Thus, reversing that 153 + // operation is still in-bounds of the allocation. 154 + $crate::container_of!(me, Self, $($field).*) 222 155 } 223 156 224 157 // GUARANTEES: ··· 234 169 } 235 170 236 171 // GUARANTEES: 237 - // * `me` originates from the most recent call to `prepare_to_insert`, which just added 238 - // `offset` to the pointer passed to `prepare_to_insert`. This method subtracts 239 - // `offset` from `me` so it returns the pointer originally passed to 240 - // `prepare_to_insert`. 172 + // * `me` originates from the most recent call to `prepare_to_insert`, which calls 173 + // `raw_get_list_link`, which is implemented using `addr_of_mut!((*self)$(.$field)*)`. 
174 + // This method uses `container_of` to perform the inverse operation, so it returns the 175 + // pointer originally passed to `prepare_to_insert`. 241 176 unsafe fn post_remove(me: *mut $crate::list::ListLinks<$num>) -> *const Self { 242 - let offset = <Self as $crate::list::HasListLinks<$num>>::OFFSET; 243 177 // SAFETY: `me` originates from the most recent call to `prepare_to_insert`, so it 244 - // points at the field at offset `offset` in a value of type `Self`. Thus, 245 - // subtracting `offset` from `me` is still in-bounds of the allocation. 246 - unsafe { (me as *const u8).sub(offset) as *const Self } 178 + // points at the field `$field` in a value of type `Self`. Thus, reversing that 179 + // operation is still in-bounds of the allocation. 180 + $crate::container_of!(me, Self, $($field).*) 247 181 } 248 182 } 249 183 )*}; 250 184 251 185 ( 252 - $(impl$({$($generics:tt)*})? ListItem<$num:tt> for $t:ty { 253 - using ListLinksSelfPtr; 186 + $(impl$({$($generics:tt)*})? ListItem<$num:tt> for $self:ty { 187 + using ListLinksSelfPtr { self$(.$field:ident)* }; 254 188 })* 255 189 ) => {$( 190 + $crate::list::impl_has_list_links_self_ptr! { 191 + impl$({$($generics)*})? HasSelfPtr<$self> for $self { self$(.$field)* } 192 + } 193 + 256 194 // SAFETY: See GUARANTEES comment on each method. 257 - unsafe impl$(<$($generics)*>)? $crate::list::ListItem<$num> for $t { 195 + unsafe impl$(<$($generics)*>)? $crate::list::ListItem<$num> for $self { 258 196 // GUARANTEES: 259 197 // This implementation of `ListItem` will not give out exclusive access to the same 260 198 // `ListLinks` several times because calls to `prepare_to_insert` and `post_remove` ··· 270 202 // SAFETY: The caller promises that `me` points at a valid value of type `Self`. 
271 203 let links_field = unsafe { <Self as $crate::list::ListItem<$num>>::view_links(me) }; 272 204 273 - let spoff = $crate::list::ListLinksSelfPtr::<Self, $num>::LIST_LINKS_SELF_PTR_OFFSET; 274 - // Goes via the offset as the field is private. 275 - // 276 - // SAFETY: The constant is equal to `offset_of!(ListLinksSelfPtr, self_ptr)`, so 277 - // the pointer stays in bounds of the allocation. 278 - let self_ptr = unsafe { (links_field as *const u8).add(spoff) } 279 - as *const $crate::types::Opaque<*const Self>; 280 - let cell_inner = $crate::types::Opaque::raw_get(self_ptr); 205 + let container = $crate::container_of!( 206 + links_field, $crate::list::ListLinksSelfPtr<Self, $num>, inner 207 + ); 208 + 209 + // SAFETY: By the same reasoning above, `links_field` is a valid pointer. 210 + let self_ptr = unsafe { 211 + $crate::list::ListLinksSelfPtr::raw_get_self_ptr(container) 212 + }; 213 + 214 + let cell_inner = $crate::types::Opaque::cast_into(self_ptr); 281 215 282 216 // SAFETY: This value is not accessed in any other places than `prepare_to_insert`, 283 217 // `post_remove`, or `view_value`. By the safety requirements of those methods, ··· 298 228 // this value is not in a list. 299 229 unsafe fn view_links(me: *const Self) -> *mut $crate::list::ListLinks<$num> { 300 230 // SAFETY: The caller promises that `me` points at a valid value of type `Self`. 301 - unsafe { <Self as HasListLinks<$num>>::raw_get_list_links(me.cast_mut()) } 231 + unsafe { 232 + <Self as $crate::list::HasListLinks<$num>>::raw_get_list_links(me.cast_mut()) 233 + } 302 234 } 303 235 304 236 // This function is also used as the implementation of `post_remove`, so the caller ··· 319 247 // `ListArc` containing `Self` until the next call to `post_remove`. The value cannot 320 248 // be destroyed while a `ListArc` reference exists. 
321 249 unsafe fn view_value(links_field: *mut $crate::list::ListLinks<$num>) -> *const Self { 322 - let spoff = $crate::list::ListLinksSelfPtr::<Self, $num>::LIST_LINKS_SELF_PTR_OFFSET; 323 - // SAFETY: The constant is equal to `offset_of!(ListLinksSelfPtr, self_ptr)`, so 324 - // the pointer stays in bounds of the allocation. 325 - let self_ptr = unsafe { (links_field as *const u8).add(spoff) } 326 - as *const ::core::cell::UnsafeCell<*const Self>; 327 - let cell_inner = ::core::cell::UnsafeCell::raw_get(self_ptr); 250 + let container = $crate::container_of!( 251 + links_field, $crate::list::ListLinksSelfPtr<Self, $num>, inner 252 + ); 253 + 254 + // SAFETY: By the same reasoning above, `links_field` is a valid pointer. 255 + let self_ptr = unsafe { 256 + $crate::list::ListLinksSelfPtr::raw_get_self_ptr(container) 257 + }; 258 + 259 + let cell_inner = $crate::types::Opaque::cast_into(self_ptr); 260 + 328 261 // SAFETY: This is not a data race, because the only function that writes to this 329 262 // value is `prepare_to_insert`, but by the safety requirements the 330 263 // `prepare_to_insert` method may not be called in parallel with `view_value` or
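The impl_list_item_mod.rs hunk drops the stored `OFFSET` constants in favor of `container_of!`, i.e. recovering the struct pointer from a field pointer by offset arithmetic. A standalone sketch of that round trip, with a hypothetical `Item` type whose `links` field stands in for `ListLinks`:

```rust
use core::mem::offset_of;

// Hypothetical container; `links` stands in for `ListLinks`.
#[repr(C)]
struct Item {
    value: u32,
    links: u64,
}

fn main() {
    let item = Item { value: 7, links: 0 };
    let item_ptr: *const Item = &item;
    // Field pointer, as `raw_get_list_links` would produce it.
    let links_ptr = unsafe { core::ptr::addr_of!((*item_ptr).links) };
    // Inverse operation, as `container_of!` performs it: subtract the
    // field's offset to get back to the containing struct.
    let recovered = (links_ptr as usize - offset_of!(Item, links)) as *const Item;
    assert_eq!(recovered, item_ptr);
    // SAFETY: `recovered` points back at `item`, which is still alive.
    assert_eq!(unsafe { (*recovered).value }, 7);
}
```

Expressing the subtraction through `container_of!` rather than a hand-maintained `OFFSET` constant removes the risk of the constant and the field access drifting apart.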
+6 -6
rust/kernel/miscdevice.rs
··· 33 33 pub const fn into_raw<T: MiscDevice>(self) -> bindings::miscdevice { 34 34 // SAFETY: All zeros is valid for this C type. 35 35 let mut result: bindings::miscdevice = unsafe { MaybeUninit::zeroed().assume_init() }; 36 - result.minor = bindings::MISC_DYNAMIC_MINOR as _; 36 + result.minor = bindings::MISC_DYNAMIC_MINOR as ffi::c_int; 37 37 result.name = self.name.as_char_ptr(); 38 38 result.fops = MiscdeviceVTable::<T>::build(); 39 39 result ··· 222 222 // type. 223 223 // 224 224 // SAFETY: The open call of a file can access the private data. 225 - unsafe { (*raw_file).private_data = ptr.into_foreign().cast() }; 225 + unsafe { (*raw_file).private_data = ptr.into_foreign() }; 226 226 227 227 0 228 228 } ··· 233 233 /// must be associated with a `MiscDeviceRegistration<T>`. 234 234 unsafe extern "C" fn release(_inode: *mut bindings::inode, file: *mut bindings::file) -> c_int { 235 235 // SAFETY: The release call of a file owns the private data. 236 - let private = unsafe { (*file).private_data }.cast(); 236 + let private = unsafe { (*file).private_data }; 237 237 // SAFETY: The release call of a file owns the private data. 238 238 let ptr = unsafe { <T::Ptr as ForeignOwnable>::from_foreign(private) }; 239 239 ··· 277 277 /// `file` must be a valid file that is associated with a `MiscDeviceRegistration<T>`. 278 278 unsafe extern "C" fn ioctl(file: *mut bindings::file, cmd: c_uint, arg: c_ulong) -> c_long { 279 279 // SAFETY: The ioctl call of a file can access the private data. 280 - let private = unsafe { (*file).private_data }.cast(); 280 + let private = unsafe { (*file).private_data }; 281 281 // SAFETY: Ioctl calls can borrow the private data of the file. 282 282 let device = unsafe { <T::Ptr as ForeignOwnable>::borrow(private) }; 283 283 ··· 302 302 arg: c_ulong, 303 303 ) -> c_long { 304 304 // SAFETY: The compat ioctl call of a file can access the private data. 
305 - let private = unsafe { (*file).private_data }.cast(); 305 + let private = unsafe { (*file).private_data }; 306 306 // SAFETY: Ioctl calls can borrow the private data of the file. 307 307 let device = unsafe { <T::Ptr as ForeignOwnable>::borrow(private) }; 308 308 ··· 323 323 /// - `seq_file` must be a valid `struct seq_file` that we can write to. 324 324 unsafe extern "C" fn show_fdinfo(seq_file: *mut bindings::seq_file, file: *mut bindings::file) { 325 325 // SAFETY: The release call of a file owns the private data. 326 - let private = unsafe { (*file).private_data }.cast(); 326 + let private = unsafe { (*file).private_data }; 327 327 // SAFETY: Ioctl calls can borrow the private data of the file. 328 328 let device = unsafe { <T::Ptr as ForeignOwnable>::borrow(private) }; 329 329 // SAFETY:
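The miscdevice.rs hunk drops the `.cast()` calls because the foreign-pointer type is now `*mut c_void` itself, so it can be stored in `private_data` directly. A standalone sketch of the underlying round trip; `into_foreign`/`from_foreign` below are illustrative stand-ins for the `ForeignOwnable` methods, implemented over `Box` for the sketch:

```rust
use std::ffi::c_void;

// Hand ownership of the box to C as an untyped pointer.
fn into_foreign(b: Box<u32>) -> *mut c_void {
    Box::into_raw(b).cast()
}

// Take ownership back from the untyped pointer.
//
// # Safety
//
// `ptr` must come from a previous `into_foreign` call.
unsafe fn from_foreign(ptr: *mut c_void) -> Box<u32> {
    // SAFETY: per the function's safety contract.
    unsafe { Box::from_raw(ptr.cast()) }
}

fn main() {
    // As in `open`: stash the pointer (e.g. in `file->private_data`) ...
    let private_data: *mut c_void = into_foreign(Box::new(42));
    // ... and as in `release`: reclaim ownership.
    let back = unsafe { from_foreign(private_data) };
    assert_eq!(*back, 42);
}
```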
+26 -26
rust/kernel/mm/virt.rs
··· 392 392 use crate::bindings; 393 393 394 394 /// No flags are set. 395 - pub const NONE: vm_flags_t = bindings::VM_NONE as _; 395 + pub const NONE: vm_flags_t = bindings::VM_NONE as vm_flags_t; 396 396 397 397 /// Mapping allows reads. 398 - pub const READ: vm_flags_t = bindings::VM_READ as _; 398 + pub const READ: vm_flags_t = bindings::VM_READ as vm_flags_t; 399 399 400 400 /// Mapping allows writes. 401 - pub const WRITE: vm_flags_t = bindings::VM_WRITE as _; 401 + pub const WRITE: vm_flags_t = bindings::VM_WRITE as vm_flags_t; 402 402 403 403 /// Mapping allows execution. 404 - pub const EXEC: vm_flags_t = bindings::VM_EXEC as _; 404 + pub const EXEC: vm_flags_t = bindings::VM_EXEC as vm_flags_t; 405 405 406 406 /// Mapping is shared. 407 - pub const SHARED: vm_flags_t = bindings::VM_SHARED as _; 407 + pub const SHARED: vm_flags_t = bindings::VM_SHARED as vm_flags_t; 408 408 409 409 /// Mapping may be updated to allow reads. 410 - pub const MAYREAD: vm_flags_t = bindings::VM_MAYREAD as _; 410 + pub const MAYREAD: vm_flags_t = bindings::VM_MAYREAD as vm_flags_t; 411 411 412 412 /// Mapping may be updated to allow writes. 413 - pub const MAYWRITE: vm_flags_t = bindings::VM_MAYWRITE as _; 413 + pub const MAYWRITE: vm_flags_t = bindings::VM_MAYWRITE as vm_flags_t; 414 414 415 415 /// Mapping may be updated to allow execution. 416 - pub const MAYEXEC: vm_flags_t = bindings::VM_MAYEXEC as _; 416 + pub const MAYEXEC: vm_flags_t = bindings::VM_MAYEXEC as vm_flags_t; 417 417 418 418 /// Mapping may be updated to be shared. 419 - pub const MAYSHARE: vm_flags_t = bindings::VM_MAYSHARE as _; 419 + pub const MAYSHARE: vm_flags_t = bindings::VM_MAYSHARE as vm_flags_t; 420 420 421 421 /// Page-ranges managed without `struct page`, just pure PFN. 422 - pub const PFNMAP: vm_flags_t = bindings::VM_PFNMAP as _; 422 + pub const PFNMAP: vm_flags_t = bindings::VM_PFNMAP as vm_flags_t; 423 423 424 424 /// Memory mapped I/O or similar. 
425 - pub const IO: vm_flags_t = bindings::VM_IO as _; 425 + pub const IO: vm_flags_t = bindings::VM_IO as vm_flags_t; 426 426 427 427 /// Do not copy this vma on fork. 428 - pub const DONTCOPY: vm_flags_t = bindings::VM_DONTCOPY as _; 428 + pub const DONTCOPY: vm_flags_t = bindings::VM_DONTCOPY as vm_flags_t; 429 429 430 430 /// Cannot expand with mremap(). 431 - pub const DONTEXPAND: vm_flags_t = bindings::VM_DONTEXPAND as _; 431 + pub const DONTEXPAND: vm_flags_t = bindings::VM_DONTEXPAND as vm_flags_t; 432 432 433 433 /// Lock the pages covered when they are faulted in. 434 - pub const LOCKONFAULT: vm_flags_t = bindings::VM_LOCKONFAULT as _; 434 + pub const LOCKONFAULT: vm_flags_t = bindings::VM_LOCKONFAULT as vm_flags_t; 435 435 436 436 /// Is a VM accounted object. 437 - pub const ACCOUNT: vm_flags_t = bindings::VM_ACCOUNT as _; 437 + pub const ACCOUNT: vm_flags_t = bindings::VM_ACCOUNT as vm_flags_t; 438 438 439 439 /// Should the VM suppress accounting. 440 - pub const NORESERVE: vm_flags_t = bindings::VM_NORESERVE as _; 440 + pub const NORESERVE: vm_flags_t = bindings::VM_NORESERVE as vm_flags_t; 441 441 442 442 /// Huge TLB Page VM. 443 - pub const HUGETLB: vm_flags_t = bindings::VM_HUGETLB as _; 443 + pub const HUGETLB: vm_flags_t = bindings::VM_HUGETLB as vm_flags_t; 444 444 445 445 /// Synchronous page faults. (DAX-specific) 446 - pub const SYNC: vm_flags_t = bindings::VM_SYNC as _; 446 + pub const SYNC: vm_flags_t = bindings::VM_SYNC as vm_flags_t; 447 447 448 448 /// Architecture-specific flag. 449 - pub const ARCH_1: vm_flags_t = bindings::VM_ARCH_1 as _; 449 + pub const ARCH_1: vm_flags_t = bindings::VM_ARCH_1 as vm_flags_t; 450 450 451 451 /// Wipe VMA contents in child on fork. 452 - pub const WIPEONFORK: vm_flags_t = bindings::VM_WIPEONFORK as _; 452 + pub const WIPEONFORK: vm_flags_t = bindings::VM_WIPEONFORK as vm_flags_t; 453 453 454 454 /// Do not include in the core dump. 
455 - pub const DONTDUMP: vm_flags_t = bindings::VM_DONTDUMP as _; 455 + pub const DONTDUMP: vm_flags_t = bindings::VM_DONTDUMP as vm_flags_t; 456 456 457 457 /// Not soft dirty clean area. 458 - pub const SOFTDIRTY: vm_flags_t = bindings::VM_SOFTDIRTY as _; 458 + pub const SOFTDIRTY: vm_flags_t = bindings::VM_SOFTDIRTY as vm_flags_t; 459 459 460 460 /// Can contain `struct page` and pure PFN pages. 461 - pub const MIXEDMAP: vm_flags_t = bindings::VM_MIXEDMAP as _; 461 + pub const MIXEDMAP: vm_flags_t = bindings::VM_MIXEDMAP as vm_flags_t; 462 462 463 463 /// MADV_HUGEPAGE marked this vma. 464 - pub const HUGEPAGE: vm_flags_t = bindings::VM_HUGEPAGE as _; 464 + pub const HUGEPAGE: vm_flags_t = bindings::VM_HUGEPAGE as vm_flags_t; 465 465 466 466 /// MADV_NOHUGEPAGE marked this vma. 467 - pub const NOHUGEPAGE: vm_flags_t = bindings::VM_NOHUGEPAGE as _; 467 + pub const NOHUGEPAGE: vm_flags_t = bindings::VM_NOHUGEPAGE as vm_flags_t; 468 468 469 469 /// KSM may merge identical pages. 470 - pub const MERGEABLE: vm_flags_t = bindings::VM_MERGEABLE as _; 470 + pub const MERGEABLE: vm_flags_t = bindings::VM_MERGEABLE as vm_flags_t; 471 471 }
+2 -2
rust/kernel/net/phy.rs
···
         // SAFETY: The struct invariant ensures that we may access
         // this field without additional synchronization.
         let bit_field = unsafe { &(*self.0.get())._bitfield_1 };
-        bit_field.get(13, 1) == bindings::AUTONEG_ENABLE as u64
+        bit_field.get(13, 1) == u64::from(bindings::AUTONEG_ENABLE)
     }

     /// Gets the current auto-negotiation state.
···
         // where we hold `phy_device->lock`, so the accessors on
         // `Device` are okay to call.
         let dev = unsafe { Device::from_raw(phydev) };
-        T::match_phy_device(dev) as i32
+        T::match_phy_device(dev).into()
     }

     /// # Safety
+3 -3
rust/kernel/of.rs
···
     const DRIVER_DATA_OFFSET: usize = core::mem::offset_of!(bindings::of_device_id, data);

     fn index(&self) -> usize {
-        self.0.data as _
+        self.0.data as usize
     }
 }
···
         // SAFETY: FFI type is valid to be zero-initialized.
         let mut of: bindings::of_device_id = unsafe { core::mem::zeroed() };

-        // TODO: Use `clone_from_slice` once the corresponding types do match.
+        // TODO: Use `copy_from_slice` once stabilized for `const`.
         let mut i = 0;
         while i < src.len() {
-            of.compatible[i] = src[i] as _;
+            of.compatible[i] = src[i];
             i += 1;
         }
+10 -10
rust/kernel/opp.rs
···
         let mut list = KVec::with_capacity(names.len() + 1, GFP_KERNEL)?;

         for name in names.iter() {
-            list.push(name.as_ptr() as _, GFP_KERNEL)?;
+            list.push(name.as_ptr().cast(), GFP_KERNEL)?;
         }

         list.push(ptr::null(), GFP_KERNEL)?;
···
 ///
 /// Represents voltage in microvolts, wrapping a [`c_ulong`] value.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// ```
 /// use kernel::opp::MicroVolt;
···
 ///
 /// Represents power in microwatts, wrapping a [`c_ulong`] value.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// ```
 /// use kernel::opp::MicroWatt;
···
 ///
 /// The associated [`OPP`] is automatically removed when the [`Token`] is dropped.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to create an [`OPP`] dynamically.
 ///
···
 /// Rust abstraction for the C `struct dev_pm_opp_data`, used to define operating performance
 /// points (OPPs) dynamically.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to create an [`OPP`] with [`Data`].
 ///
···
 /// [`OPP`] search options.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// Defines how to search for an [`OPP`] in a [`Table`] relative to a frequency.
 ///
···
 ///
 /// Rust abstraction for the C `struct dev_pm_opp_config`.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to set OPP property-name configuration for a [`Device`].
 ///
···
 /// impl ConfigOps for Driver {}
 ///
 /// fn configure(dev: &ARef<Device>) -> Result<ConfigToken> {
-///     let name = CString::try_from_fmt(fmt!("{}", "slow"))?;
+///     let name = CString::try_from_fmt(fmt!("slow"))?;
 ///
 ///     // The OPP configuration is cleared once the [`ConfigToken`] goes out of scope.
 ///     Config::<Driver>::new()
···
 ///
 /// Instances of this type are reference-counted.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to get OPP [`Table`] for a [`Cpumask`] and set its
 /// frequency.
···
 ///
 /// A reference to the [`OPP`], &[`OPP`], isn't refcounted by the Rust code.
 ///
-/// ## Examples
+/// # Examples
 ///
 /// The following example demonstrates how to get [`OPP`] corresponding to a frequency value and
 /// configure the device with it.
+8 -5
rust/kernel/pci.rs
···
 /// Declares a kernel module that exposes a single PCI driver.
 ///
-/// # Example
+/// # Examples
 ///
 ///```ignore
 /// kernel::module_pci_driver! {
···
     const DRIVER_DATA_OFFSET: usize = core::mem::offset_of!(bindings::pci_device_id, driver_data);

     fn index(&self) -> usize {
-        self.0.driver_data as _
+        self.0.driver_data
     }
 }
···
 /// The PCI driver trait.
 ///
-/// # Example
+/// # Examples
 ///
 ///```
 /// # use kernel::{bindings, device::Core, pci};
···
 ///     MODULE_PCI_TABLE,
 ///     <MyDriver as pci::Driver>::IdInfo,
 ///     [
-///         (pci::DeviceId::from_id(bindings::PCI_VENDOR_ID_REDHAT, bindings::PCI_ANY_ID as _), ())
+///         (
+///             pci::DeviceId::from_id(bindings::PCI_VENDOR_ID_REDHAT, bindings::PCI_ANY_ID as u32),
+///             (),
+///         )
 ///     ]
 /// );
···
         // `ioptr` is valid by the safety requirements.
         // `num` is valid by the safety requirements.
         unsafe {
-            bindings::pci_iounmap(pdev.as_raw(), ioptr as _);
+            bindings::pci_iounmap(pdev.as_raw(), ioptr as *mut kernel::ffi::c_void);
             bindings::pci_release_region(pdev.as_raw(), num);
         }
     }
+1 -1
rust/kernel/platform.rs
···
 ///
 /// Drivers must implement this trait in order to get a platform driver registered.
 ///
-/// # Example
+/// # Examples
 ///
 ///```
 /// # use kernel::{acpi, bindings, c_str, device::Core, of, platform};
+3 -1
rust/kernel/prelude.rs
···
 // `super::std_vendor` is hidden, which makes the macro inline for some reason.
 #[doc(no_inline)]
 pub use super::dbg;
-pub use super::fmt;
 pub use super::{dev_alert, dev_crit, dev_dbg, dev_emerg, dev_err, dev_info, dev_notice, dev_warn};
 pub use super::{pr_alert, pr_crit, pr_debug, pr_emerg, pr_err, pr_info, pr_notice, pr_warn};
+pub use core::format_args as fmt;

 pub use super::{try_init, try_pin_init};
···
 pub use super::init::InPlaceInit;

 pub use super::current;
+
+pub use super::uaccess::UserPtr;
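The prelude change above drops the removed `kernel::fmt!` wrapper macro and re-exports `core::format_args` under the same name. A minimal userspace sketch of what that alias means in practice (the `render` helper is illustrative, not kernel code):

```rust
// `fmt!` is now just `core::format_args` under another name, so any
// `format!`-style syntax works with it unchanged.
use core::format_args as fmt;

fn render() -> String {
    // `format_args!` borrows its arguments, so the resulting `Arguments`
    // value must be consumed in the same expression, e.g. by
    // `std::fmt::format` (in the kernel, by `CString::try_from_fmt`).
    std::fmt::format(fmt!("{}-{}", "rust", 617))
}

fn main() {
    assert_eq!(render(), "rust-617");
    println!("{}", render());
}
```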
+6 -6
rust/kernel/print.rs
···
 use crate::{
     ffi::{c_char, c_void},
+    fmt,
     prelude::*,
     str::RawFormatter,
 };
-use core::fmt;

 // Called from `vsprintf` with format specifier `%pA`.
 #[expect(clippy::missing_safety_doc)]
···
     // SAFETY: The C contract guarantees that `buf` is valid if it's less than `end`.
     let mut w = unsafe { RawFormatter::from_ptrs(buf.cast(), end.cast()) };
     // SAFETY: TODO.
-    let _ = w.write_fmt(unsafe { *(ptr as *const fmt::Arguments<'_>) });
+    let _ = w.write_fmt(unsafe { *ptr.cast::<fmt::Arguments<'_>>() });
     w.pos().cast()
 }
···
         bindings::_printk(
             format_string.as_ptr(),
             module_name.as_ptr(),
-            &args as *const _ as *const c_void,
+            core::ptr::from_ref(&args).cast::<c_void>(),
         );
     }
 }
···
     unsafe {
         bindings::_printk(
             format_strings::CONT.as_ptr(),
-            &args as *const _ as *const c_void,
+            core::ptr::from_ref(&args).cast::<c_void>(),
         );
     }
 }
···
         // takes borrows on the arguments, but does not extend the scope of temporaries.
         // Therefore, a `match` expression is used to keep them around, since
         // the scrutinee is kept until the end of the `match`.
-        match format_args!($($arg)+) {
+        match $crate::prelude::fmt!($($arg)+) {
             // SAFETY: This hidden macro should only be called by the documented
             // printing macros which ensure the format string is one of the fixed
             // ones. All `__LOG_PREFIX`s are null-terminated as they are generated
···
     // The `CONT` case.
     ($format_string:path, true, $($arg:tt)+) => (
         $crate::print::call_printk_cont(
-            format_args!($($arg)+),
+            $crate::prelude::fmt!($($arg)+),
         );
     );
 );
+13 -16
rust/kernel/rbtree.rs
···
         }
     }

+    /// Returns true if this tree is empty.
+    #[inline]
+    pub fn is_empty(&self) -> bool {
+        self.root.rb_node.is_null()
+    }
+
     /// Returns an iterator over the tree nodes, sorted by key.
     pub fn iter(&self) -> Iter<'_, K, V> {
         Iter {
···
         // the tree cannot change. By the tree invariant, all nodes are valid.
         unsafe { bindings::rb_erase(&mut (*this).links, addr_of_mut!(self.tree.root)) };

-        let current = match (prev, next) {
-            (_, Some(next)) => next,
-            (Some(prev), None) => prev,
-            (None, None) => {
-                return (None, node);
-            }
-        };
+        // INVARIANT:
+        // - `current` is a valid node in the [`RBTree`] pointed to by `self.tree`.
+        let cursor = next.or(prev).map(|current| Self {
+            current,
+            tree: self.tree,
+        });

-        (
-            // INVARIANT:
-            // - `current` is a valid node in the [`RBTree`] pointed to by `self.tree`.
-            Some(Self {
-                current,
-                tree: self.tree,
-            }),
-            node,
-        )
+        (cursor, node)
     }

     /// Remove the previous node, returning it if it exists.
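The cursor-removal rewrite above collapses a four-arm `match` over `(prev, next)` into `next.or(prev)`: prefer the successor, fall back to the predecessor, and yield `None` only when the removed node was the last one. A standalone sketch of that selection rule, with integers standing in for tree nodes:

```rust
// `Option::or` encodes "successor if present, else predecessor, else None",
// exactly the rule the rbtree cursor uses after removing its current node.
fn pick_current(prev: Option<u32>, next: Option<u32>) -> Option<u32> {
    next.or(prev)
}

fn main() {
    // The successor wins when both neighbors exist.
    assert_eq!(pick_current(Some(1), Some(2)), Some(2));
    // Otherwise fall back to the predecessor.
    assert_eq!(pick_current(Some(1), None), Some(1));
    // Removing the only node leaves no cursor.
    assert_eq!(pick_current(None, None), None);
    println!("ok");
}
```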
+4
rust/kernel/revocable.rs
···
 ///
 /// The RCU read-side lock is held while the guard is alive.
 pub struct RevocableGuard<'a, T> {
+    // This can't use the `&'a T` type because references that appear in function arguments must
+    // not become dangling during the execution of the function, which can happen if the
+    // `RevocableGuard` is passed as a function argument and then dropped during execution of the
+    // function.
     data_ref: *const T,
     _rcu_guard: rcu::Guard,
     _p: PhantomData<&'a ()>,
+1 -1
rust/kernel/seq_file.rs
···
             bindings::seq_printf(
                 self.inner.get(),
                 c_str!("%pA").as_char_ptr(),
-                &args as *const _ as *const crate::ffi::c_void,
+                core::ptr::from_ref(&args).cast::<crate::ffi::c_void>(),
             );
         }
     }
+68 -43
rust/kernel/str.rs
···
 //! String representations.

 use crate::alloc::{flags::*, AllocError, KVec};
-use core::fmt::{self, Write};
+use crate::fmt::{self, Write};
 use core::ops::{self, Deref, DerefMut, Index};

 use crate::prelude::*;
···
     #[inline]
     pub const fn from_bytes(bytes: &[u8]) -> &Self {
         // SAFETY: `BStr` is transparent to `[u8]`.
-        unsafe { &*(bytes as *const [u8] as *const BStr) }
+        unsafe { &*(core::ptr::from_ref(bytes) as *const BStr) }
     }

     /// Strip a prefix from `self`. Delegates to [`slice::strip_prefix`].
···
     /// Formats printable ASCII characters, escaping the rest.
     ///
     /// ```
-    /// # use kernel::{fmt, b_str, str::{BStr, CString}};
+    /// # use kernel::{prelude::fmt, b_str, str::{BStr, CString}};
     /// let ascii = b_str!("Hello, BStr!");
-    /// let s = CString::try_from_fmt(fmt!("{}", ascii))?;
-    /// assert_eq!(s.as_bytes(), "Hello, BStr!".as_bytes());
+    /// let s = CString::try_from_fmt(fmt!("{ascii}"))?;
+    /// assert_eq!(s.to_bytes(), "Hello, BStr!".as_bytes());
     ///
     /// let non_ascii = b_str!("🦀");
-    /// let s = CString::try_from_fmt(fmt!("{}", non_ascii))?;
-    /// assert_eq!(s.as_bytes(), "\\xf0\\x9f\\xa6\\x80".as_bytes());
+    /// let s = CString::try_from_fmt(fmt!("{non_ascii}"))?;
+    /// assert_eq!(s.to_bytes(), "\\xf0\\x9f\\xa6\\x80".as_bytes());
     /// # Ok::<(), kernel::error::Error>(())
     /// ```
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
···
     /// escaping the rest.
     ///
     /// ```
-    /// # use kernel::{fmt, b_str, str::{BStr, CString}};
+    /// # use kernel::{prelude::fmt, b_str, str::{BStr, CString}};
     /// // Embedded double quotes are escaped.
     /// let ascii = b_str!("Hello, \"BStr\"!");
-    /// let s = CString::try_from_fmt(fmt!("{:?}", ascii))?;
-    /// assert_eq!(s.as_bytes(), "\"Hello, \\\"BStr\\\"!\"".as_bytes());
+    /// let s = CString::try_from_fmt(fmt!("{ascii:?}"))?;
+    /// assert_eq!(s.to_bytes(), "\"Hello, \\\"BStr\\\"!\"".as_bytes());
     ///
     /// let non_ascii = b_str!("😺");
-    /// let s = CString::try_from_fmt(fmt!("{:?}", non_ascii))?;
-    /// assert_eq!(s.as_bytes(), "\"\\xf0\\x9f\\x98\\xba\"".as_bytes());
+    /// let s = CString::try_from_fmt(fmt!("{non_ascii:?}"))?;
+    /// assert_eq!(s.to_bytes(), "\"\\xf0\\x9f\\x98\\xba\"".as_bytes());
     /// # Ok::<(), kernel::error::Error>(())
     /// ```
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
···
     }};
 }

+/// Returns a C pointer to the string.
+// It is a free function rather than a method on an extension trait because:
+//
+// - error[E0379]: functions in trait impls cannot be declared const
+#[inline]
+pub const fn as_char_ptr_in_const_context(c_str: &CStr) -> *const c_char {
+    c_str.0.as_ptr()
+}
+
 /// Possible errors when using conversion functions in [`CStr`].
 #[derive(Debug, Clone, Copy)]
 pub enum CStrConvertError {
···
     /// last at least `'a`. When `CStr` is alive, the memory pointed by `ptr`
     /// must not be mutated.
     #[inline]
-    pub unsafe fn from_char_ptr<'a>(ptr: *const crate::ffi::c_char) -> &'a Self {
+    pub unsafe fn from_char_ptr<'a>(ptr: *const c_char) -> &'a Self {
         // SAFETY: The safety precondition guarantees `ptr` is a valid pointer
         // to a `NUL`-terminated C string.
         let len = unsafe { bindings::strlen(ptr) } + 1;
         // SAFETY: Lifetime guaranteed by the safety precondition.
-        let bytes = unsafe { core::slice::from_raw_parts(ptr as _, len) };
+        let bytes = unsafe { core::slice::from_raw_parts(ptr.cast(), len) };
         // SAFETY: As `len` is returned by `strlen`, `bytes` does not contain interior `NUL`.
         // As we have added 1 to `len`, the last byte is known to be `NUL`.
         unsafe { Self::from_bytes_with_nul_unchecked(bytes) }
···
     #[inline]
     pub unsafe fn from_bytes_with_nul_unchecked_mut(bytes: &mut [u8]) -> &mut CStr {
         // SAFETY: Properties of `bytes` guaranteed by the safety precondition.
-        unsafe { &mut *(bytes as *mut [u8] as *mut CStr) }
+        unsafe { &mut *(core::ptr::from_mut(bytes) as *mut CStr) }
     }

     /// Returns a C pointer to the string.
+    ///
+    /// Using this function in a const context is deprecated in favor of
+    /// [`as_char_ptr_in_const_context`] in preparation for replacing `CStr` with `core::ffi::CStr`
+    /// which does not have this method.
     #[inline]
-    pub const fn as_char_ptr(&self) -> *const crate::ffi::c_char {
-        self.0.as_ptr()
+    pub const fn as_char_ptr(&self) -> *const c_char {
+        as_char_ptr_in_const_context(self)
     }

     /// Convert the string to a byte slice without the trailing `NUL` byte.
     #[inline]
-    pub fn as_bytes(&self) -> &[u8] {
+    pub fn to_bytes(&self) -> &[u8] {
         &self.0[..self.len()]
+    }
+
+    /// Convert the string to a byte slice without the trailing `NUL` byte.
+    ///
+    /// This function is deprecated in favor of [`Self::to_bytes`] in preparation for replacing
+    /// `CStr` with `core::ffi::CStr` which does not have this method.
+    #[inline]
+    pub fn as_bytes(&self) -> &[u8] {
+        self.to_bytes()
     }

     /// Convert the string to a byte slice containing the trailing `NUL` byte.
     #[inline]
-    pub const fn as_bytes_with_nul(&self) -> &[u8] {
+    pub const fn to_bytes_with_nul(&self) -> &[u8] {
         &self.0
+    }
+
+    /// Convert the string to a byte slice containing the trailing `NUL` byte.
+    ///
+    /// This function is deprecated in favor of [`Self::to_bytes_with_nul`] in preparation for
+    /// replacing `CStr` with `core::ffi::CStr` which does not have this method.
+    #[inline]
+    pub const fn as_bytes_with_nul(&self) -> &[u8] {
+        self.to_bytes_with_nul()
     }

     /// Yields a [`&str`] slice if the [`CStr`] contains valid UTF-8.
···
     ///
     /// ```
     /// # use kernel::c_str;
-    /// # use kernel::fmt;
+    /// # use kernel::prelude::fmt;
     /// # use kernel::str::CStr;
     /// # use kernel::str::CString;
     /// let penguin = c_str!("🐧");
-    /// let s = CString::try_from_fmt(fmt!("{}", penguin))?;
-    /// assert_eq!(s.as_bytes_with_nul(), "\\xf0\\x9f\\x90\\xa7\0".as_bytes());
+    /// let s = CString::try_from_fmt(fmt!("{penguin}"))?;
+    /// assert_eq!(s.to_bytes_with_nul(), "\\xf0\\x9f\\x90\\xa7\0".as_bytes());
     ///
     /// let ascii = c_str!("so \"cool\"");
-    /// let s = CString::try_from_fmt(fmt!("{}", ascii))?;
-    /// assert_eq!(s.as_bytes_with_nul(), "so \"cool\"\0".as_bytes());
+    /// let s = CString::try_from_fmt(fmt!("{ascii}"))?;
+    /// assert_eq!(s.to_bytes_with_nul(), "so \"cool\"\0".as_bytes());
     /// # Ok::<(), kernel::error::Error>(())
     /// ```
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        for &c in self.as_bytes() {
+        for &c in self.to_bytes() {
             if (0x20..0x7f).contains(&c) {
                 // Printable character.
                 f.write_char(c as char)?;
···
     ///
     /// ```
     /// # use kernel::c_str;
-    /// # use kernel::fmt;
+    /// # use kernel::prelude::fmt;
     /// # use kernel::str::CStr;
     /// # use kernel::str::CString;
     /// let penguin = c_str!("🐧");
-    /// let s = CString::try_from_fmt(fmt!("{:?}", penguin))?;
+    /// let s = CString::try_from_fmt(fmt!("{penguin:?}"))?;
     /// assert_eq!(s.as_bytes_with_nul(), "\"\\xf0\\x9f\\x90\\xa7\"\0".as_bytes());
     ///
     /// // Embedded double quotes are escaped.
     /// let ascii = c_str!("so \"cool\"");
-    /// let s = CString::try_from_fmt(fmt!("{:?}", ascii))?;
+    /// let s = CString::try_from_fmt(fmt!("{ascii:?}"))?;
     /// assert_eq!(s.as_bytes_with_nul(), "\"so \\\"cool\\\"\"\0".as_bytes());
     /// # Ok::<(), kernel::error::Error>(())
     /// ```
···
 macro_rules! format {
     ($($f:tt)*) => ({
-        CString::try_from_fmt(::kernel::fmt!($($f)*))?.to_str()?
+        CString::try_from_fmt(fmt!($($f)*))?.to_str()?
     })
 }
···
     pub(crate) unsafe fn from_ptrs(pos: *mut u8, end: *mut u8) -> Self {
         // INVARIANT: The safety requirements guarantee the type invariants.
         Self {
-            beg: pos as _,
-            pos: pos as _,
-            end: end as _,
+            beg: pos as usize,
+            pos: pos as usize,
+            end: end as usize,
         }
     }
···
     ///
     /// N.B. It may point to invalid memory.
     pub(crate) fn pos(&self) -> *mut u8 {
-        self.pos as _
+        self.pos as *mut u8
     }

     /// Returns the number of bytes written to the formatter.
···
 /// # Examples
 ///
 /// ```
-/// use kernel::{str::CString, fmt};
+/// use kernel::{str::CString, prelude::fmt};
 ///
 /// let s = CString::try_from_fmt(fmt!("{}{}{}", "abc", 10, 20))?;
-/// assert_eq!(s.as_bytes_with_nul(), "abc1020\0".as_bytes());
+/// assert_eq!(s.to_bytes_with_nul(), "abc1020\0".as_bytes());
 ///
 /// let tmp = "testing";
 /// let s = CString::try_from_fmt(fmt!("{tmp}{}", 123))?;
-/// assert_eq!(s.as_bytes_with_nul(), "testing123\0".as_bytes());
+/// assert_eq!(s.to_bytes_with_nul(), "testing123\0".as_bytes());
 ///
 /// // This fails because it has an embedded `NUL` byte.
 /// let s = CString::try_from_fmt(fmt!("a\0b{}", 123));
···
     fn try_from(cstr: &'a CStr) -> Result<CString, AllocError> {
         let mut buf = KVec::new();

-        buf.extend_from_slice(cstr.as_bytes_with_nul(), GFP_KERNEL)?;
+        buf.extend_from_slice(cstr.to_bytes_with_nul(), GFP_KERNEL)?;

         // INVARIANT: The `CStr` and `CString` types have the same invariants for
         // the string data, and we copied it over without changes.
···
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         fmt::Debug::fmt(&**self, f)
     }
-}
-
-/// A convenience alias for [`core::format_args`].
-#[macro_export]
-macro_rules! fmt {
-    ($($f:tt)*) => ( ::core::format_args!($($f)*) )
 }
+7 -3
rust/kernel/sync.rs
···
 use pin_init;

 mod arc;
+pub mod aref;
 pub mod completion;
 mod condvar;
 pub mod lock;
···
 /// Initializes a dynamically allocated lock class key. In the common case of using a
 /// statically allocated lock class key, the static_lock_class! macro should be used instead.
 ///
-/// # Example
+/// # Examples
 /// ```
 /// # use kernel::c_str;
 /// # use kernel::alloc::KBox;
···
 macro_rules! static_lock_class {
     () => {{
         static CLASS: $crate::sync::LockClassKey =
-            // SAFETY: lockdep expects uninitialized memory when it's handed a statically allocated
-            // lock_class_key
+            // Lockdep expects uninitialized memory when it's handed a statically allocated `struct
+            // lock_class_key`.
+            //
+            // SAFETY: `LockClassKey` transparently wraps `Opaque` which permits uninitialized
+            // memory.
             unsafe { ::core::mem::MaybeUninit::uninit().assume_init() };
         $crate::prelude::Pin::static_ref(&CLASS)
     }};
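The reworded comment in `static_lock_class!` separates the informative note (lockdep wants uninitialized memory) from the actual safety argument: `assume_init` is sound only because `LockClassKey` transparently wraps `Opaque`, which tolerates uninitialized bytes. A userspace sketch of the same pattern; the `Key` type here is a hypothetical stand-in, not the kernel type:

```rust
use core::mem::MaybeUninit;

// Like `LockClassKey` wrapping `Opaque`, `Key`'s only field is itself a
// `MaybeUninit`, so every bit pattern -- including "uninitialized" -- is a
// valid `Key`.
#[repr(transparent)]
struct Key(MaybeUninit<[u8; 16]>);

fn main() {
    // SAFETY: `Key` explicitly permits uninitialized contents, so
    // materializing one from uninitialized memory is sound.
    let key: Key = unsafe { MaybeUninit::uninit().assume_init() };
    assert_eq!(core::mem::size_of_val(&key), 16);
    println!("{}", core::mem::size_of::<Key>());
}
```

The same reasoning would be unsound for a type such as `u32` or a reference, where not every bit pattern is valid.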
+90 -12
rust/kernel/sync/arc.rs
···
 use crate::{
     alloc::{AllocError, Flags, KBox},
     bindings,
+    ffi::c_void,
     init::InPlaceInit,
     try_init,
     types::{ForeignOwnable, Opaque},
 };
 use core::{
     alloc::Layout,
+    borrow::{Borrow, BorrowMut},
     fmt,
     marker::PhantomData,
     mem::{ManuallyDrop, MaybeUninit},
···
     _p: PhantomData<ArcInner<T>>,
 }

-#[doc(hidden)]
 #[pin_data]
 #[repr(C)]
-pub struct ArcInner<T: ?Sized> {
+struct ArcInner<T: ?Sized> {
     refcount: Opaque<bindings::refcount_t>,
     data: T,
 }
···
     }
 }

-// SAFETY: The `into_foreign` function returns a pointer that is well-aligned.
+// SAFETY: The pointer returned by `into_foreign` comes from a well aligned
+// pointer to `ArcInner<T>`.
 unsafe impl<T: 'static> ForeignOwnable for Arc<T> {
-    type PointedTo = ArcInner<T>;
+    const FOREIGN_ALIGN: usize = core::mem::align_of::<ArcInner<T>>();
+
     type Borrowed<'a> = ArcBorrow<'a, T>;
     type BorrowedMut<'a> = Self::Borrowed<'a>;

-    fn into_foreign(self) -> *mut Self::PointedTo {
-        ManuallyDrop::new(self).ptr.as_ptr()
+    fn into_foreign(self) -> *mut c_void {
+        ManuallyDrop::new(self).ptr.as_ptr().cast()
     }

-    unsafe fn from_foreign(ptr: *mut Self::PointedTo) -> Self {
+    unsafe fn from_foreign(ptr: *mut c_void) -> Self {
         // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous
         // call to `Self::into_foreign`.
-        let inner = unsafe { NonNull::new_unchecked(ptr) };
+        let inner = unsafe { NonNull::new_unchecked(ptr.cast::<ArcInner<T>>()) };

         // SAFETY: By the safety requirement of this function, we know that `ptr` came from
         // a previous call to `Arc::into_foreign`, which guarantees that `ptr` is valid and
···
         unsafe { Self::from_inner(inner) }
     }

-    unsafe fn borrow<'a>(ptr: *mut Self::PointedTo) -> ArcBorrow<'a, T> {
+    unsafe fn borrow<'a>(ptr: *mut c_void) -> ArcBorrow<'a, T> {
         // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous
         // call to `Self::into_foreign`.
-        let inner = unsafe { NonNull::new_unchecked(ptr) };
+        let inner = unsafe { NonNull::new_unchecked(ptr.cast::<ArcInner<T>>()) };

         // SAFETY: The safety requirements of `from_foreign` ensure that the object remains alive
         // for the lifetime of the returned value.
         unsafe { ArcBorrow::new(inner) }
     }

-    unsafe fn borrow_mut<'a>(ptr: *mut Self::PointedTo) -> ArcBorrow<'a, T> {
+    unsafe fn borrow_mut<'a>(ptr: *mut c_void) -> ArcBorrow<'a, T> {
         // SAFETY: The safety requirements for `borrow_mut` are a superset of the safety
         // requirements for `borrow`.
-        unsafe { Self::borrow(ptr) }
+        unsafe { <Self as ForeignOwnable>::borrow(ptr) }
     }
 }
···
 impl<T: ?Sized> AsRef<T> for Arc<T> {
     fn as_ref(&self) -> &T {
+        self.deref()
+    }
+}
+
+/// # Examples
+///
+/// ```
+/// # use core::borrow::Borrow;
+/// # use kernel::sync::Arc;
+/// struct Foo<B: Borrow<u32>>(B);
+///
+/// // Owned instance.
+/// let owned = Foo(1);
+///
+/// // Shared instance.
+/// let arc = Arc::new(1, GFP_KERNEL)?;
+/// let shared = Foo(arc.clone());
+///
+/// let i = 1;
+/// // Borrowed from `i`.
+/// let borrowed = Foo(&i);
+/// # Ok::<(), Error>(())
+/// ```
+impl<T: ?Sized> Borrow<T> for Arc<T> {
+    fn borrow(&self) -> &T {
         self.deref()
     }
 }
···
         // it is safe to dereference it. Additionally, we know there is only one reference when
         // it's inside a `UniqueArc`, so it is safe to get a mutable reference.
         unsafe { &mut self.inner.ptr.as_mut().data }
+    }
+}
+
+/// # Examples
+///
+/// ```
+/// # use core::borrow::Borrow;
+/// # use kernel::sync::UniqueArc;
+/// struct Foo<B: Borrow<u32>>(B);
+///
+/// // Owned instance.
+/// let owned = Foo(1);
+///
+/// // Owned instance using `UniqueArc`.
+/// let arc = UniqueArc::new(1, GFP_KERNEL)?;
+/// let shared = Foo(arc);
+///
+/// let i = 1;
+/// // Borrowed from `i`.
+/// let borrowed = Foo(&i);
+/// # Ok::<(), Error>(())
+/// ```
+impl<T: ?Sized> Borrow<T> for UniqueArc<T> {
+    fn borrow(&self) -> &T {
+        self.deref()
+    }
+}
+
+/// # Examples
+///
+/// ```
+/// # use core::borrow::BorrowMut;
+/// # use kernel::sync::UniqueArc;
+/// struct Foo<B: BorrowMut<u32>>(B);
+///
+/// // Owned instance.
+/// let owned = Foo(1);
+///
+/// // Owned instance using `UniqueArc`.
+/// let arc = UniqueArc::new(1, GFP_KERNEL)?;
+/// let shared = Foo(arc);
+///
+/// let mut i = 1;
+/// // Borrowed from `i`.
+/// let borrowed = Foo(&mut i);
+/// # Ok::<(), Error>(())
+/// ```
+impl<T: ?Sized> BorrowMut<T> for UniqueArc<T> {
+    fn borrow_mut(&mut self) -> &mut T {
+        self.deref_mut()
     }
 }
+154
rust/kernel/sync/aref.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Internal reference counting support. 4 + 5 + use core::{marker::PhantomData, mem::ManuallyDrop, ops::Deref, ptr::NonNull}; 6 + 7 + /// Types that are _always_ reference counted. 8 + /// 9 + /// It allows such types to define their own custom ref increment and decrement functions. 10 + /// Additionally, it allows users to convert from a shared reference `&T` to an owned reference 11 + /// [`ARef<T>`]. 12 + /// 13 + /// This is usually implemented by wrappers to existing structures on the C side of the code. For 14 + /// Rust code, the recommendation is to use [`Arc`](crate::sync::Arc) to create reference-counted 15 + /// instances of a type. 16 + /// 17 + /// # Safety 18 + /// 19 + /// Implementers must ensure that increments to the reference count keep the object alive in memory 20 + /// at least until matching decrements are performed. 21 + /// 22 + /// Implementers must also ensure that all instances are reference-counted. (Otherwise they 23 + /// won't be able to honour the requirement that [`AlwaysRefCounted::inc_ref`] keep the object 24 + /// alive.) 25 + pub unsafe trait AlwaysRefCounted { 26 + /// Increments the reference count on the object. 27 + fn inc_ref(&self); 28 + 29 + /// Decrements the reference count on the object. 30 + /// 31 + /// Frees the object when the count reaches zero. 32 + /// 33 + /// # Safety 34 + /// 35 + /// Callers must ensure that there was a previous matching increment to the reference count, 36 + /// and that the object is no longer used after its reference count is decremented (as it may 37 + /// result in the object being freed), unless the caller owns another increment on the refcount 38 + /// (e.g., it calls [`AlwaysRefCounted::inc_ref`] twice, then calls 39 + /// [`AlwaysRefCounted::dec_ref`] once). 40 + unsafe fn dec_ref(obj: NonNull<Self>); 41 + } 42 + 43 + /// An owned reference to an always-reference-counted object. 
44 + /// 45 + /// The object's reference count is automatically decremented when an instance of [`ARef`] is 46 + /// dropped. It is also automatically incremented when a new instance is created via 47 + /// [`ARef::clone`]. 48 + /// 49 + /// # Invariants 50 + /// 51 + /// The pointer stored in `ptr` is non-null and valid for the lifetime of the [`ARef`] instance. In 52 + /// particular, the [`ARef`] instance owns an increment on the underlying object's reference count. 53 + pub struct ARef<T: AlwaysRefCounted> { 54 + ptr: NonNull<T>, 55 + _p: PhantomData<T>, 56 + } 57 + 58 + // SAFETY: It is safe to send `ARef<T>` to another thread when the underlying `T` is `Sync` because 59 + // it effectively means sharing `&T` (which is safe because `T` is `Sync`); additionally, it needs 60 + // `T` to be `Send` because any thread that has an `ARef<T>` may ultimately access `T` using a 61 + // mutable reference, for example, when the reference count reaches zero and `T` is dropped. 62 + unsafe impl<T: AlwaysRefCounted + Sync + Send> Send for ARef<T> {} 63 + 64 + // SAFETY: It is safe to send `&ARef<T>` to another thread when the underlying `T` is `Sync` 65 + // because it effectively means sharing `&T` (which is safe because `T` is `Sync`); additionally, 66 + // it needs `T` to be `Send` because any thread that has a `&ARef<T>` may clone it and get an 67 + // `ARef<T>` on that thread, so the thread may ultimately access `T` using a mutable reference, for 68 + // example, when the reference count reaches zero and `T` is dropped. 69 + unsafe impl<T: AlwaysRefCounted + Sync + Send> Sync for ARef<T> {} 70 + 71 + impl<T: AlwaysRefCounted> ARef<T> { 72 + /// Creates a new instance of [`ARef`]. 73 + /// 74 + /// It takes over an increment of the reference count on the underlying object. 75 + /// 76 + /// # Safety 77 + /// 78 + /// Callers must ensure that the reference count was incremented at least once, and that they 79 + /// are properly relinquishing one increment. 
That is, if there is only one increment, callers 80 + /// must not use the underlying object anymore -- it is only safe to do so via the newly 81 + /// created [`ARef`]. 82 + pub unsafe fn from_raw(ptr: NonNull<T>) -> Self { 83 + // INVARIANT: The safety requirements guarantee that the new instance now owns the 84 + // increment on the refcount. 85 + Self { 86 + ptr, 87 + _p: PhantomData, 88 + } 89 + } 90 + 91 + /// Consumes the `ARef`, returning a raw pointer. 92 + /// 93 + /// This function does not change the refcount. After calling this function, the caller is 94 + /// responsible for the refcount previously managed by the `ARef`. 95 + /// 96 + /// # Examples 97 + /// 98 + /// ``` 99 + /// use core::ptr::NonNull; 100 + /// use kernel::types::{ARef, AlwaysRefCounted}; 101 + /// 102 + /// struct Empty {} 103 + /// 104 + /// # // SAFETY: TODO. 105 + /// unsafe impl AlwaysRefCounted for Empty { 106 + /// fn inc_ref(&self) {} 107 + /// unsafe fn dec_ref(_obj: NonNull<Self>) {} 108 + /// } 109 + /// 110 + /// let mut data = Empty {}; 111 + /// let ptr = NonNull::<Empty>::new(&mut data).unwrap(); 112 + /// # // SAFETY: TODO. 113 + /// let data_ref: ARef<Empty> = unsafe { ARef::from_raw(ptr) }; 114 + /// let raw_ptr: NonNull<Empty> = ARef::into_raw(data_ref); 115 + /// 116 + /// assert_eq!(ptr, raw_ptr); 117 + /// ``` 118 + pub fn into_raw(me: Self) -> NonNull<T> { 119 + ManuallyDrop::new(me).ptr 120 + } 121 + } 122 + 123 + impl<T: AlwaysRefCounted> Clone for ARef<T> { 124 + fn clone(&self) -> Self { 125 + self.inc_ref(); 126 + // SAFETY: We just incremented the refcount above. 127 + unsafe { Self::from_raw(self.ptr) } 128 + } 129 + } 130 + 131 + impl<T: AlwaysRefCounted> Deref for ARef<T> { 132 + type Target = T; 133 + 134 + fn deref(&self) -> &Self::Target { 135 + // SAFETY: The type invariants guarantee that the object is valid. 
136 + unsafe { self.ptr.as_ref() } 137 + } 138 + } 139 + 140 + impl<T: AlwaysRefCounted> From<&T> for ARef<T> { 141 + fn from(b: &T) -> Self { 142 + b.inc_ref(); 143 + // SAFETY: We just incremented the refcount above. 144 + unsafe { Self::from_raw(NonNull::from(b)) } 145 + } 146 + } 147 + 148 + impl<T: AlwaysRefCounted> Drop for ARef<T> { 149 + fn drop(&mut self) { 150 + // SAFETY: The type invariants guarantee that the `ARef` owns the reference we're about to 151 + // decrement. 152 + unsafe { T::dec_ref(self.ptr) }; 153 + } 154 + }
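The `ARef` hunk above is compact but subtle: `from_raw` takes over an already-made increment, `Clone` makes a fresh one, and `into_raw` forwards the owned increment to the caller via `ManuallyDrop` so that `Drop` never runs. The same ownership discipline can be sketched outside the kernel with a plain `AtomicUsize`; everything here (`Obj`, `MiniRef`, the counter orderings) is an illustrative stand-in, not kernel API.

```rust
use std::marker::PhantomData;
use std::mem::ManuallyDrop;
use std::ptr::NonNull;
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for an `AlwaysRefCounted` object: it owns its own counter.
struct Obj {
    refcount: AtomicUsize,
}

impl Obj {
    fn inc_ref(&self) {
        self.refcount.fetch_add(1, Ordering::Relaxed);
    }

    // SAFETY contract (mirrors the kernel trait): `obj` must be valid and
    // the caller must own the increment being released.
    unsafe fn dec_ref(obj: NonNull<Self>) {
        unsafe { obj.as_ref() }.refcount.fetch_sub(1, Ordering::Release);
    }
}

// Miniature `ARef`: owns exactly one increment on `ptr`'s refcount.
struct MiniRef {
    ptr: NonNull<Obj>,
    _p: PhantomData<Obj>,
}

impl MiniRef {
    // Takes over one increment previously made by the caller.
    unsafe fn from_raw(ptr: NonNull<Obj>) -> Self {
        Self { ptr, _p: PhantomData }
    }

    // Hands the owned increment back without decrementing: `ManuallyDrop`
    // suppresses `Drop`, so the count is untouched.
    fn into_raw(me: Self) -> NonNull<Obj> {
        ManuallyDrop::new(me).ptr
    }
}

impl Clone for MiniRef {
    fn clone(&self) -> Self {
        unsafe { self.ptr.as_ref() }.inc_ref();
        // The new instance takes over the increment we just made.
        unsafe { Self::from_raw(self.ptr) }
    }
}

impl Drop for MiniRef {
    fn drop(&mut self) {
        // This instance owned one increment; release it.
        unsafe { Obj::dec_ref(self.ptr) };
    }
}

fn refcount_after_clone_and_drop() -> usize {
    let obj: &'static Obj = Box::leak(Box::new(Obj { refcount: AtomicUsize::new(1) }));
    let ptr = NonNull::from(obj);
    let a = unsafe { MiniRef::from_raw(ptr) }; // takes over the initial increment
    let b = a.clone(); // count: 2
    drop(b); // count: 1
    let raw = MiniRef::into_raw(a); // count still 1; caller now responsible
    unsafe { raw.as_ref() }.refcount.load(Ordering::Acquire)
}
```

Note that `into_raw` leaves the count untouched, exactly as the doc comment in the hunk states: the caller inherits responsibility for the increment.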
+154 -79
rust/kernel/time.rs
··· 24 24 //! C header: [`include/linux/jiffies.h`](srctree/include/linux/jiffies.h). 25 25 //! C header: [`include/linux/ktime.h`](srctree/include/linux/ktime.h). 26 26 27 + use core::marker::PhantomData; 28 + 29 + pub mod delay; 27 30 pub mod hrtimer; 28 31 29 32 /// The number of nanoseconds per microsecond. ··· 52 49 unsafe { bindings::__msecs_to_jiffies(msecs) } 53 50 } 54 51 52 + /// Trait for clock sources. 53 + /// 54 + /// Selection of the clock source depends on the use case. In some cases the usage of a 55 + /// particular clock is mandatory, e.g. in network protocols, filesystems. In other 56 + /// cases the user of the clock has to decide which clock is best suited for the 57 + /// purpose. In most scenarios the [`Monotonic`] clock is the best choice as it 58 + /// provides an accurate monotonic notion of time (leap second smearing ignored). 59 + pub trait ClockSource { 60 + /// The kernel clock ID associated with this clock source. 61 + /// 62 + /// This constant corresponds to the C side `clockid_t` value. 63 + const ID: bindings::clockid_t; 64 + 65 + /// Get the current time from the clock source. 66 + /// 67 + /// The function must return a value in the range from 0 to `KTIME_MAX`. 68 + fn ktime_get() -> bindings::ktime_t; 69 + } 70 + 71 + /// A monotonically increasing clock. 72 + /// 73 + /// A nonsettable system-wide clock that represents monotonic time since, as 74 + /// described by POSIX, "some unspecified point in the past". On Linux, that 75 + /// point corresponds to the number of seconds that the system has been 76 + /// running since it was booted. 77 + /// 78 + /// The CLOCK_MONOTONIC clock is not affected by discontinuous jumps in the 79 + /// CLOCK_REALTIME clock (e.g., if the system administrator manually changes the 80 + /// clock), but is affected by frequency adjustments. This clock does not 81 + /// count time that the system is suspended.
82 + pub struct Monotonic; 83 + 84 + impl ClockSource for Monotonic { 85 + const ID: bindings::clockid_t = bindings::CLOCK_MONOTONIC as bindings::clockid_t; 86 + 87 + fn ktime_get() -> bindings::ktime_t { 88 + // SAFETY: It is always safe to call `ktime_get()` outside of NMI context. 89 + unsafe { bindings::ktime_get() } 90 + } 91 + } 92 + 93 + /// A settable system-wide clock that measures real (i.e., wall-clock) time. 94 + /// 95 + /// Setting this clock requires appropriate privileges. This clock is 96 + /// affected by discontinuous jumps in the system time (e.g., if the system 97 + /// administrator manually changes the clock), and by frequency adjustments 98 + /// performed by NTP and similar applications via adjtime(3), adjtimex(2), 99 + /// clock_adjtime(2), and ntp_adjtime(3). This clock normally counts the 100 + /// number of seconds since 1970-01-01 00:00:00 Coordinated Universal Time 101 + /// (UTC) except that it ignores leap seconds; near a leap second it may be 102 + /// adjusted by leap second smearing to stay roughly in sync with UTC. Leap 103 + /// second smearing applies frequency adjustments to the clock to speed up 104 + /// or slow down the clock to account for the leap second without 105 + /// discontinuities in the clock. If leap second smearing is not applied, 106 + /// the clock will experience a discontinuity around the leap second adjustment. 107 + pub struct RealTime; 108 + 109 + impl ClockSource for RealTime { 110 + const ID: bindings::clockid_t = bindings::CLOCK_REALTIME as bindings::clockid_t; 111 + 112 + fn ktime_get() -> bindings::ktime_t { 113 + // SAFETY: It is always safe to call `ktime_get_real()` outside of NMI context. 114 + unsafe { bindings::ktime_get_real() } 115 + } 116 + } 117 + 118 + /// A monotonic clock that ticks while the system is suspended. 119 + /// 120 + /// A nonsettable system-wide clock that is identical to CLOCK_MONOTONIC, 121 + /// except that it also includes any time that the system is suspended.
This 122 + /// allows applications to get a suspend-aware monotonic clock without 123 + /// having to deal with the complications of CLOCK_REALTIME, which may have 124 + /// discontinuities if the time is changed using settimeofday(2) or similar. 125 + pub struct BootTime; 126 + 127 + impl ClockSource for BootTime { 128 + const ID: bindings::clockid_t = bindings::CLOCK_BOOTTIME as bindings::clockid_t; 129 + 130 + fn ktime_get() -> bindings::ktime_t { 131 + // SAFETY: It is always safe to call `ktime_get_boottime()` outside of NMI context. 132 + unsafe { bindings::ktime_get_boottime() } 133 + } 134 + } 135 + 136 + /// International Atomic Time. 137 + /// 138 + /// A system-wide clock derived from wall-clock time but counting leap seconds. 139 + /// 140 + /// This clock is coupled to CLOCK_REALTIME and will be set when CLOCK_REALTIME is 141 + /// set, or when the offset to CLOCK_REALTIME is changed via adjtimex(2). This 142 + /// usually happens during boot and **should** not happen during normal operations. 143 + /// However, if NTP or another application adjusts CLOCK_REALTIME by leap second 144 + /// smearing, this clock will not be precise during leap second smearing. 145 + /// 146 + /// The acronym TAI refers to International Atomic Time. 147 + pub struct Tai; 148 + 149 + impl ClockSource for Tai { 150 + const ID: bindings::clockid_t = bindings::CLOCK_TAI as bindings::clockid_t; 151 + 152 + fn ktime_get() -> bindings::ktime_t { 153 + // SAFETY: It is always safe to call `ktime_get_clocktai()` outside of NMI context. 154 + unsafe { bindings::ktime_get_clocktai() } 155 + } 156 + } 157 + 55 158 /// A specific point in time. 56 159 /// 57 160 /// # Invariants 58 161 /// 59 162 /// The `inner` value is in the range from 0 to `KTIME_MAX`.
60 163 #[repr(transparent)] 61 - #[derive(Copy, Clone, PartialEq, PartialOrd, Eq, Ord)] 62 - pub struct Instant { 164 + #[derive(PartialEq, PartialOrd, Eq, Ord)] 165 + pub struct Instant<C: ClockSource> { 63 166 inner: bindings::ktime_t, 167 + _c: PhantomData<C>, 64 168 } 65 169 66 - impl Instant { 67 - /// Get the current time using `CLOCK_MONOTONIC`. 170 + impl<C: ClockSource> Clone for Instant<C> { 171 + fn clone(&self) -> Self { 172 + *self 173 + } 174 + } 175 + 176 + impl<C: ClockSource> Copy for Instant<C> {} 177 + 178 + impl<C: ClockSource> Instant<C> { 179 + /// Get the current time from the clock source. 68 180 #[inline] 69 181 pub fn now() -> Self { 70 - // INVARIANT: The `ktime_get()` function returns a value in the range 182 + // INVARIANT: The `ClockSource::ktime_get()` function returns a value in the range 71 183 // from 0 to `KTIME_MAX`. 72 184 Self { 73 - // SAFETY: It is always safe to call `ktime_get()` outside of NMI context. 74 - inner: unsafe { bindings::ktime_get() }, 185 + inner: C::ktime_get(), 186 + _c: PhantomData, 75 187 } 76 188 } 77 189 ··· 195 77 pub fn elapsed(&self) -> Delta { 196 78 Self::now() - *self 197 79 } 80 + 81 + #[inline] 82 + pub(crate) fn as_nanos(&self) -> i64 { 83 + self.inner 84 + } 198 85 } 199 86 200 - impl core::ops::Sub for Instant { 87 + impl<C: ClockSource> core::ops::Sub for Instant<C> { 201 88 type Output = Delta; 202 89 203 90 // By the type invariant, it never overflows. 204 91 #[inline] 205 - fn sub(self, other: Instant) -> Delta { 92 + fn sub(self, other: Instant<C>) -> Delta { 206 93 Delta { 207 94 nanos: self.inner - other.inner, 208 95 } 209 - } 210 - } 211 - 212 - /// An identifier for a clock. Used when specifying clock sources. 213 - /// 214 - /// 215 - /// Selection of the clock depends on the use case. In some cases the usage of a 216 - /// particular clock is mandatory, e.g. 
in network protocols, filesystems.In other 217 - /// cases the user of the clock has to decide which clock is best suited for the 218 - /// purpose. In most scenarios clock [`ClockId::Monotonic`] is the best choice as it 219 - /// provides a accurate monotonic notion of time (leap second smearing ignored). 220 - #[derive(Clone, Copy, PartialEq, Eq, Debug)] 221 - #[repr(u32)] 222 - pub enum ClockId { 223 - /// A settable system-wide clock that measures real (i.e., wall-clock) time. 224 - /// 225 - /// Setting this clock requires appropriate privileges. This clock is 226 - /// affected by discontinuous jumps in the system time (e.g., if the system 227 - /// administrator manually changes the clock), and by frequency adjustments 228 - /// performed by NTP and similar applications via adjtime(3), adjtimex(2), 229 - /// clock_adjtime(2), and ntp_adjtime(3). This clock normally counts the 230 - /// number of seconds since 1970-01-01 00:00:00 Coordinated Universal Time 231 - /// (UTC) except that it ignores leap seconds; near a leap second it may be 232 - /// adjusted by leap second smearing to stay roughly in sync with UTC. Leap 233 - /// second smearing applies frequency adjustments to the clock to speed up 234 - /// or slow down the clock to account for the leap second without 235 - /// discontinuities in the clock. If leap second smearing is not applied, 236 - /// the clock will experience discontinuity around leap second adjustment. 237 - RealTime = bindings::CLOCK_REALTIME, 238 - /// A monotonically increasing clock. 239 - /// 240 - /// A nonsettable system-wide clock that represents monotonic time since—as 241 - /// described by POSIX—"some unspecified point in the past". On Linux, that 242 - /// point corresponds to the number of seconds that the system has been 243 - /// running since it was booted. 
244 - /// 245 - /// The CLOCK_MONOTONIC clock is not affected by discontinuous jumps in the 246 - /// CLOCK_REAL (e.g., if the system administrator manually changes the 247 - /// clock), but is affected by frequency adjustments. This clock does not 248 - /// count time that the system is suspended. 249 - Monotonic = bindings::CLOCK_MONOTONIC, 250 - /// A monotonic that ticks while system is suspended. 251 - /// 252 - /// A nonsettable system-wide clock that is identical to CLOCK_MONOTONIC, 253 - /// except that it also includes any time that the system is suspended. This 254 - /// allows applications to get a suspend-aware monotonic clock without 255 - /// having to deal with the complications of CLOCK_REALTIME, which may have 256 - /// discontinuities if the time is changed using settimeofday(2) or similar. 257 - BootTime = bindings::CLOCK_BOOTTIME, 258 - /// International Atomic Time. 259 - /// 260 - /// A system-wide clock derived from wall-clock time but counting leap seconds. 261 - /// 262 - /// This clock is coupled to CLOCK_REALTIME and will be set when CLOCK_REALTIME is 263 - /// set, or when the offset to CLOCK_REALTIME is changed via adjtimex(2). This 264 - /// usually happens during boot and **should** not happen during normal operations. 265 - /// However, if NTP or another application adjusts CLOCK_REALTIME by leap second 266 - /// smearing, this clock will not be precise during leap second smearing. 267 - /// 268 - /// The acronym TAI refers to International Atomic Time. 269 - TAI = bindings::CLOCK_TAI, 270 - } 271 - 272 - impl ClockId { 273 - fn into_c(self) -> bindings::clockid_t { 274 - self as bindings::clockid_t 275 96 } 276 97 } 277 98 ··· 285 228 /// Return the smallest number of microseconds greater than or equal 286 229 /// to the value in the [`Delta`]. 
287 230 #[inline] 288 - pub const fn as_micros_ceil(self) -> i64 { 289 - self.as_nanos().saturating_add(NSEC_PER_USEC - 1) / NSEC_PER_USEC 231 + pub fn as_micros_ceil(self) -> i64 { 232 + #[cfg(CONFIG_64BIT)] 233 + { 234 + self.as_nanos().saturating_add(NSEC_PER_USEC - 1) / NSEC_PER_USEC 235 + } 236 + 237 + #[cfg(not(CONFIG_64BIT))] 238 + // SAFETY: It is always safe to call `ktime_to_us()` with any value. 239 + unsafe { 240 + bindings::ktime_to_us(self.as_nanos().saturating_add(NSEC_PER_USEC - 1)) 241 + } 290 242 } 291 243 292 244 /// Return the number of milliseconds in the [`Delta`]. 293 245 #[inline] 294 - pub const fn as_millis(self) -> i64 { 295 - self.as_nanos() / NSEC_PER_MSEC 246 + pub fn as_millis(self) -> i64 { 247 + #[cfg(CONFIG_64BIT)] 248 + { 249 + self.as_nanos() / NSEC_PER_MSEC 250 + } 251 + 252 + #[cfg(not(CONFIG_64BIT))] 253 + // SAFETY: It is always safe to call `ktime_to_ms()` with any value. 254 + unsafe { 255 + bindings::ktime_to_ms(self.as_nanos()) 256 + } 296 257 } 297 258 }
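The time.rs hunks replace the runtime `ClockId` enum with per-clock marker types and parameterize `Instant` by its clock, so mixing instants from different clocks becomes a compile error rather than a logic bug. A minimal out-of-kernel sketch of the pattern; `FakeMono`, `FakeBoot`, and their returned values are invented for illustration.

```rust
use std::marker::PhantomData;
use std::ops::Sub;

// Stand-in for the kernel's `ClockSource` trait.
trait ClockSource {
    const ID: i32; // stand-in for `clockid_t`
    fn ktime_get() -> i64; // nanoseconds
}

struct FakeMono;
impl ClockSource for FakeMono {
    const ID: i32 = 1;
    fn ktime_get() -> i64 {
        1_000_000 // fixed value so the sketch is deterministic
    }
}

struct FakeBoot;
impl ClockSource for FakeBoot {
    const ID: i32 = 7;
    fn ktime_get() -> i64 {
        2_000_000
    }
}

// An instant is tagged with its clock source at the type level.
struct Instant<C: ClockSource> {
    inner: i64,
    _c: PhantomData<C>,
}

// `derive(Clone, Copy)` would add a spurious `C: Copy` bound because of
// the `PhantomData<C>` field, which is why the diff hand-writes these.
impl<C: ClockSource> Clone for Instant<C> {
    fn clone(&self) -> Self {
        *self
    }
}
impl<C: ClockSource> Copy for Instant<C> {}

impl<C: ClockSource> Instant<C> {
    fn now() -> Self {
        Self { inner: C::ktime_get(), _c: PhantomData }
    }
    fn as_nanos(&self) -> i64 {
        self.inner
    }
}

// Subtraction is only defined between instants of the *same* clock.
impl<C: ClockSource> Sub for Instant<C> {
    type Output = i64; // stand-in for `Delta`

    fn sub(self, other: Self) -> i64 {
        self.inner - other.inner
    }
}
```

`Instant::<FakeMono>::now() - Instant::<FakeBoot>::now()` does not compile, which is the whole point of moving the clock from a runtime enum into the type.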
+49
rust/kernel/time/delay.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Delay and sleep primitives. 4 + //! 5 + //! This module contains the kernel APIs related to delay and sleep that 6 + //! have been ported or wrapped for usage by Rust code in the kernel. 7 + //! 8 + //! C header: [`include/linux/delay.h`](srctree/include/linux/delay.h). 9 + 10 + use super::Delta; 11 + use crate::prelude::*; 12 + 13 + /// Sleeps for at least the given duration. 14 + /// 15 + /// Equivalent to the C side [`fsleep()`], a flexible sleep function 16 + /// which automatically chooses the best sleep method based on the duration. 17 + /// 18 + /// `delta` must be within `[0, i32::MAX]` microseconds; 19 + /// otherwise, it is erroneous behavior. That is, it is considered a bug 20 + /// to call this function with an out-of-range value, in which case the function 21 + /// will sleep for at least the maximum value in the range and may warn 22 + /// in the future. 23 + /// 24 + /// The behavior above differs from the C side [`fsleep()`] for which out-of-range 25 + /// values mean "infinite timeout" instead. 26 + /// 27 + /// This function can only be used in a nonatomic context. 28 + /// 29 + /// [`fsleep()`]: https://docs.kernel.org/timers/delay_sleep_functions.html#c.fsleep 30 + pub fn fsleep(delta: Delta) { 31 + // The maximum value is set to `i32::MAX` microseconds to prevent integer 32 + // overflow inside fsleep, which could lead to unintentional infinite sleep. 33 + const MAX_DELTA: Delta = Delta::from_micros(i32::MAX as i64); 34 + 35 + let delta = if (Delta::ZERO..=MAX_DELTA).contains(&delta) { 36 + delta 37 + } else { 38 + // TODO: Add WARN_ONCE() when it's supported. 39 + MAX_DELTA 40 + }; 41 + 42 + // SAFETY: It is always safe to call `fsleep()` with any duration. 43 + unsafe { 44 + // Convert the duration to microseconds and round up to preserve 45 + // the guarantee; `fsleep()` sleeps for at least the provided duration, 46 + // but it may sleep for longer under some circumstances. 
47 + bindings::fsleep(delta.as_micros_ceil() as c_ulong) 48 + } 49 + }
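The clamp-then-round-up logic in `fsleep` is easy to check in isolation: out-of-range deltas are clamped to the maximum instead of turning into C `fsleep()`'s "infinite timeout", and the nanosecond-to-microsecond conversion rounds up so the sleep is never shorter than requested. A sketch with plain integers standing in for `Delta`; the function name `clamped_micros_ceil` is invented.

```rust
const NSEC_PER_USEC: i64 = 1_000;
// Mirrors `MAX_DELTA` in the hunk: `i32::MAX` microseconds, in nanoseconds.
const MAX_DELTA_NANOS: i64 = i32::MAX as i64 * NSEC_PER_USEC;

fn clamped_micros_ceil(delta_nanos: i64) -> u64 {
    let nanos = if (0..=MAX_DELTA_NANOS).contains(&delta_nanos) {
        delta_nanos
    } else {
        // Out of range: sleep for the maximum rather than "forever".
        // (The kernel code leaves a TODO to WARN_ONCE() here.)
        MAX_DELTA_NANOS
    };
    // Round up so that a partial microsecond still counts as a full one,
    // preserving the "sleeps at least this long" guarantee.
    (nanos.saturating_add(NSEC_PER_USEC - 1) / NSEC_PER_USEC) as u64
}
```

For example, 1001 ns rounds up to 2 us, while a negative or oversized delta is clamped to `i32::MAX` microseconds.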
+204 -100
rust/kernel/time/hrtimer.rs
··· 67 67 //! A `restart` operation on a timer in the **stopped** state is equivalent to a 68 68 //! `start` operation. 69 69 70 - use super::ClockId; 70 + use super::{ClockSource, Delta, Instant}; 71 71 use crate::{prelude::*, types::Opaque}; 72 72 use core::marker::PhantomData; 73 73 use pin_init::PinInit; 74 - 75 - /// A Rust wrapper around a `ktime_t`. 76 - // NOTE: Ktime is going to be removed when hrtimer is converted to Instant/Delta. 77 - #[repr(transparent)] 78 - #[derive(Copy, Clone, PartialEq, PartialOrd, Eq, Ord)] 79 - pub struct Ktime { 80 - inner: bindings::ktime_t, 81 - } 82 - 83 - impl Ktime { 84 - /// Returns the number of nanoseconds. 85 - #[inline] 86 - pub fn to_ns(self) -> i64 { 87 - self.inner 88 - } 89 - } 90 74 91 75 /// A timer backed by a C `struct hrtimer`. 92 76 /// ··· 82 98 pub struct HrTimer<T> { 83 99 #[pin] 84 100 timer: Opaque<bindings::hrtimer>, 85 - mode: HrTimerMode, 86 101 _t: PhantomData<T>, 87 102 } 88 103 ··· 95 112 96 113 impl<T> HrTimer<T> { 97 114 /// Return an initializer for a new timer instance. 98 - pub fn new(mode: HrTimerMode, clock: ClockId) -> impl PinInit<Self> 115 + pub fn new() -> impl PinInit<Self> 99 116 where 100 117 T: HrTimerCallback, 118 + T: HasHrTimer<T>, 101 119 { 102 120 pin_init!(Self { 103 121 // INVARIANT: We initialize `timer` with `hrtimer_setup` below. ··· 110 126 bindings::hrtimer_setup( 111 127 place, 112 128 Some(T::Pointer::run), 113 - clock.into_c(), 114 - mode.into_c(), 129 + <<T as HasHrTimer<T>>::TimerMode as HrTimerMode>::Clock::ID, 130 + <T as HasHrTimer<T>>::TimerMode::C_MODE, 115 131 ); 116 132 } 117 133 }), 118 - mode: mode, 119 134 _t: PhantomData, 120 135 }) 121 136 } ··· 131 148 // SAFETY: The field projection to `timer` does not go out of bounds, 132 149 // because the caller of this function promises that `this` points to an 133 150 // allocation of at least the size of `Self`. 
134 - unsafe { Opaque::raw_get(core::ptr::addr_of!((*this).timer)) } 151 + unsafe { Opaque::cast_into(core::ptr::addr_of!((*this).timer)) } 135 152 } 136 153 137 154 /// Cancel an initialized and potentially running timer. ··· 176 193 /// exist. A timer can be manipulated through any of the handles, and a handle 177 194 /// may represent a cancelled timer. 178 195 pub trait HrTimerPointer: Sync + Sized { 196 + /// The operational mode associated with this timer. 197 + /// 198 + /// This defines how the expiration value is interpreted. 199 + type TimerMode: HrTimerMode; 200 + 179 201 /// A handle representing a started or restarted timer. 180 202 /// 181 203 /// If the timer is running or if the timer callback is executing when the ··· 193 205 194 206 /// Start the timer with expiry after `expires` time units. If the timer was 195 207 /// already running, it is restarted with the new expiry time. 196 - fn start(self, expires: Ktime) -> Self::TimerHandle; 208 + fn start(self, expires: <Self::TimerMode as HrTimerMode>::Expires) -> Self::TimerHandle; 197 209 } 198 210 199 211 /// Unsafe version of [`HrTimerPointer`] for situations where leaking the ··· 208 220 /// [`UnsafeHrTimerPointer`] outlives any associated [`HrTimerPointer::TimerHandle`] 209 221 /// instances. 210 222 pub unsafe trait UnsafeHrTimerPointer: Sync + Sized { 223 + /// The operational mode associated with this timer. 224 + /// 225 + /// This defines how the expiration value is interpreted. 226 + type TimerMode: HrTimerMode; 227 + 211 228 /// A handle representing a running timer. 212 229 /// 213 230 /// # Safety ··· 229 236 /// 230 237 /// Caller promises to keep the timer structure alive until the timer is dead. 231 238 /// Caller can ensure this by not leaking the returned [`Self::TimerHandle`]. 
232 - unsafe fn start(self, expires: Ktime) -> Self::TimerHandle; 239 + unsafe fn start(self, expires: <Self::TimerMode as HrTimerMode>::Expires) -> Self::TimerHandle; 233 240 } 234 241 235 242 /// A trait for stack-allocated timers. ··· 239 246 /// Implementers must ensure that `start_scoped` does not return until the 240 247 /// timer is dead and the timer handler is not running. 241 248 pub unsafe trait ScopedHrTimerPointer { 249 + /// The operational mode associated with this timer. 250 + /// 251 + /// This defines how the expiration value is interpreted. 252 + type TimerMode: HrTimerMode; 253 + 242 254 /// Start the timer to run after `expires` time units and immediately 243 255 /// afterwards call `f`. When `f` returns, the timer is cancelled. 244 256 fn start_scoped<T, F>(self, expires: <Self::TimerMode as HrTimerMode>::Expires, f: F) -> T 245 257 where 246 258 F: FnOnce() -> T; 247 259 } ··· 258 260 where 259 261 T: UnsafeHrTimerPointer, 260 262 { 261 - fn start_scoped<U, F>(self, expires: Ktime, f: F) -> U 263 + type TimerMode = T::TimerMode; 264 + 265 + fn start_scoped<U, F>( 266 + self, 267 + expires: <<T as UnsafeHrTimerPointer>::TimerMode as HrTimerMode>::Expires, 268 + f: F, 269 + ) -> U 262 270 where 263 271 F: FnOnce() -> U, 264 272 { ··· 339 335 /// their documentation. All the methods of this trait must operate on the same 340 336 /// field. 341 337 pub unsafe trait HasHrTimer<T> { 338 + /// The operational mode associated with this timer. 339 + /// 340 + /// This defines how the expiration value is interpreted. 341 + type TimerMode: HrTimerMode; 342 + 342 343 /// Return a pointer to the [`HrTimer`] within `Self`. 343 344 /// 344 345 /// This function is useful to get access to the value without creating ··· 391 382 /// - `this` must point to a valid `Self`. 392 383 /// - Caller must ensure that the pointee of `this` lives until the timer 393 384 /// fires or is canceled. 
394 - unsafe fn start(this: *const Self, expires: Ktime) { 385 + unsafe fn start(this: *const Self, expires: <Self::TimerMode as HrTimerMode>::Expires) { 395 386 // SAFETY: By function safety requirement, `this` is a valid `Self`. 396 387 unsafe { 397 388 bindings::hrtimer_start_range_ns( 398 389 Self::c_timer_ptr(this).cast_mut(), 399 - expires.to_ns(), 390 + expires.as_nanos(), 400 391 0, 401 - (*Self::raw_get_timer(this)).mode.into_c(), 392 + <Self::TimerMode as HrTimerMode>::C_MODE, 402 393 ); 403 394 } 404 395 } ··· 420 411 } 421 412 } 422 413 423 - /// Operational mode of [`HrTimer`]. 424 - // NOTE: Some of these have the same encoding on the C side, so we keep 425 - // `repr(Rust)` and convert elsewhere. 426 - #[derive(Clone, Copy, PartialEq, Eq, Debug)] 427 - pub enum HrTimerMode { 428 - /// Timer expires at the given expiration time. 429 - Absolute, 430 - /// Timer expires after the given expiration time interpreted as a duration from now. 431 - Relative, 432 - /// Timer does not move between CPU cores. 433 - Pinned, 434 - /// Timer handler is executed in soft irq context. 435 - Soft, 436 - /// Timer handler is executed in hard irq context. 437 - Hard, 438 - /// Timer expires at the given expiration time. 439 - /// Timer does not move between CPU cores. 440 - AbsolutePinned, 441 - /// Timer expires after the given expiration time interpreted as a duration from now. 442 - /// Timer does not move between CPU cores. 443 - RelativePinned, 444 - /// Timer expires at the given expiration time. 445 - /// Timer handler is executed in soft irq context. 446 - AbsoluteSoft, 447 - /// Timer expires after the given expiration time interpreted as a duration from now. 448 - /// Timer handler is executed in soft irq context. 449 - RelativeSoft, 450 - /// Timer expires at the given expiration time. 451 - /// Timer does not move between CPU cores. 452 - /// Timer handler is executed in soft irq context. 
453 - AbsolutePinnedSoft, 454 - /// Timer expires after the given expiration time interpreted as a duration from now. 455 - /// Timer does not move between CPU cores. 456 - /// Timer handler is executed in soft irq context. 457 - RelativePinnedSoft, 458 - /// Timer expires at the given expiration time. 459 - /// Timer handler is executed in hard irq context. 460 - AbsoluteHard, 461 - /// Timer expires after the given expiration time interpreted as a duration from now. 462 - /// Timer handler is executed in hard irq context. 463 - RelativeHard, 464 - /// Timer expires at the given expiration time. 465 - /// Timer does not move between CPU cores. 466 - /// Timer handler is executed in hard irq context. 467 - AbsolutePinnedHard, 468 - /// Timer expires after the given expiration time interpreted as a duration from now. 469 - /// Timer does not move between CPU cores. 470 - /// Timer handler is executed in hard irq context. 471 - RelativePinnedHard, 414 + /// Time representations that can be used as expiration values in [`HrTimer`]. 415 + pub trait HrTimerExpires { 416 + /// Converts the expiration time into a nanosecond representation. 417 + /// 418 + /// This value corresponds to a raw ktime_t value, suitable for passing to kernel 419 + /// timer functions. The interpretation (absolute vs relative) depends on the 420 + /// associated [HrTimerMode] in use. 
421 + fn as_nanos(&self) -> i64; 472 422 } 473 423 474 - impl HrTimerMode { 475 - fn into_c(self) -> bindings::hrtimer_mode { 476 - use bindings::*; 477 - match self { 478 - HrTimerMode::Absolute => hrtimer_mode_HRTIMER_MODE_ABS, 479 - HrTimerMode::Relative => hrtimer_mode_HRTIMER_MODE_REL, 480 - HrTimerMode::Pinned => hrtimer_mode_HRTIMER_MODE_PINNED, 481 - HrTimerMode::Soft => hrtimer_mode_HRTIMER_MODE_SOFT, 482 - HrTimerMode::Hard => hrtimer_mode_HRTIMER_MODE_HARD, 483 - HrTimerMode::AbsolutePinned => hrtimer_mode_HRTIMER_MODE_ABS_PINNED, 484 - HrTimerMode::RelativePinned => hrtimer_mode_HRTIMER_MODE_REL_PINNED, 485 - HrTimerMode::AbsoluteSoft => hrtimer_mode_HRTIMER_MODE_ABS_SOFT, 486 - HrTimerMode::RelativeSoft => hrtimer_mode_HRTIMER_MODE_REL_SOFT, 487 - HrTimerMode::AbsolutePinnedSoft => hrtimer_mode_HRTIMER_MODE_ABS_PINNED_SOFT, 488 - HrTimerMode::RelativePinnedSoft => hrtimer_mode_HRTIMER_MODE_REL_PINNED_SOFT, 489 - HrTimerMode::AbsoluteHard => hrtimer_mode_HRTIMER_MODE_ABS_HARD, 490 - HrTimerMode::RelativeHard => hrtimer_mode_HRTIMER_MODE_REL_HARD, 491 - HrTimerMode::AbsolutePinnedHard => hrtimer_mode_HRTIMER_MODE_ABS_PINNED_HARD, 492 - HrTimerMode::RelativePinnedHard => hrtimer_mode_HRTIMER_MODE_REL_PINNED_HARD, 493 - } 424 + impl<C: ClockSource> HrTimerExpires for Instant<C> { 425 + #[inline] 426 + fn as_nanos(&self) -> i64 { 427 + Instant::<C>::as_nanos(self) 494 428 } 429 + } 430 + 431 + impl HrTimerExpires for Delta { 432 + #[inline] 433 + fn as_nanos(&self) -> i64 { 434 + Delta::as_nanos(*self) 435 + } 436 + } 437 + 438 + mod private { 439 + use crate::time::ClockSource; 440 + 441 + pub trait Sealed {} 442 + 443 + impl<C: ClockSource> Sealed for super::AbsoluteMode<C> {} 444 + impl<C: ClockSource> Sealed for super::RelativeMode<C> {} 445 + impl<C: ClockSource> Sealed for super::AbsolutePinnedMode<C> {} 446 + impl<C: ClockSource> Sealed for super::RelativePinnedMode<C> {} 447 + impl<C: ClockSource> Sealed for super::AbsoluteSoftMode<C> {} 448 + 
impl<C: ClockSource> Sealed for super::RelativeSoftMode<C> {} 449 + impl<C: ClockSource> Sealed for super::AbsolutePinnedSoftMode<C> {} 450 + impl<C: ClockSource> Sealed for super::RelativePinnedSoftMode<C> {} 451 + impl<C: ClockSource> Sealed for super::AbsoluteHardMode<C> {} 452 + impl<C: ClockSource> Sealed for super::RelativeHardMode<C> {} 453 + impl<C: ClockSource> Sealed for super::AbsolutePinnedHardMode<C> {} 454 + impl<C: ClockSource> Sealed for super::RelativePinnedHardMode<C> {} 455 + } 456 + 457 + /// Operational mode of [`HrTimer`]. 458 + pub trait HrTimerMode: private::Sealed { 459 + /// The C representation of hrtimer mode. 460 + const C_MODE: bindings::hrtimer_mode; 461 + 462 + /// Type representing the clock source. 463 + type Clock: ClockSource; 464 + 465 + /// Type representing the expiration specification (absolute or relative time). 466 + type Expires: HrTimerExpires; 467 + } 468 + 469 + /// Timer that expires at a fixed point in time. 470 + pub struct AbsoluteMode<C: ClockSource>(PhantomData<C>); 471 + 472 + impl<C: ClockSource> HrTimerMode for AbsoluteMode<C> { 473 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_ABS; 474 + 475 + type Clock = C; 476 + type Expires = Instant<C>; 477 + } 478 + 479 + /// Timer that expires after a delay from now. 480 + pub struct RelativeMode<C: ClockSource>(PhantomData<C>); 481 + 482 + impl<C: ClockSource> HrTimerMode for RelativeMode<C> { 483 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_REL; 484 + 485 + type Clock = C; 486 + type Expires = Delta; 487 + } 488 + 489 + /// Timer with absolute expiration time, pinned to its current CPU. 
490 + pub struct AbsolutePinnedMode<C: ClockSource>(PhantomData<C>); 491 + impl<C: ClockSource> HrTimerMode for AbsolutePinnedMode<C> { 492 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_ABS_PINNED; 493 + 494 + type Clock = C; 495 + type Expires = Instant<C>; 496 + } 497 + 498 + /// Timer with relative expiration time, pinned to its current CPU. 499 + pub struct RelativePinnedMode<C: ClockSource>(PhantomData<C>); 500 + impl<C: ClockSource> HrTimerMode for RelativePinnedMode<C> { 501 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_REL_PINNED; 502 + 503 + type Clock = C; 504 + type Expires = Delta; 505 + } 506 + 507 + /// Timer with absolute expiration, handled in soft irq context. 508 + pub struct AbsoluteSoftMode<C: ClockSource>(PhantomData<C>); 509 + impl<C: ClockSource> HrTimerMode for AbsoluteSoftMode<C> { 510 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_ABS_SOFT; 511 + 512 + type Clock = C; 513 + type Expires = Instant<C>; 514 + } 515 + 516 + /// Timer with relative expiration, handled in soft irq context. 517 + pub struct RelativeSoftMode<C: ClockSource>(PhantomData<C>); 518 + impl<C: ClockSource> HrTimerMode for RelativeSoftMode<C> { 519 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_REL_SOFT; 520 + 521 + type Clock = C; 522 + type Expires = Delta; 523 + } 524 + 525 + /// Timer with absolute expiration, pinned to CPU and handled in soft irq context. 526 + pub struct AbsolutePinnedSoftMode<C: ClockSource>(PhantomData<C>); 527 + impl<C: ClockSource> HrTimerMode for AbsolutePinnedSoftMode<C> { 528 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_ABS_PINNED_SOFT; 529 + 530 + type Clock = C; 531 + type Expires = Instant<C>; 532 + } 533 + 534 + /// Timer with relative expiration, pinned to CPU and handled in soft irq context. 
535 + pub struct RelativePinnedSoftMode<C: ClockSource>(PhantomData<C>); 536 + impl<C: ClockSource> HrTimerMode for RelativePinnedSoftMode<C> { 537 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_REL_PINNED_SOFT; 538 + 539 + type Clock = C; 540 + type Expires = Delta; 541 + } 542 + 543 + /// Timer with absolute expiration, handled in hard irq context. 544 + pub struct AbsoluteHardMode<C: ClockSource>(PhantomData<C>); 545 + impl<C: ClockSource> HrTimerMode for AbsoluteHardMode<C> { 546 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_ABS_HARD; 547 + 548 + type Clock = C; 549 + type Expires = Instant<C>; 550 + } 551 + 552 + /// Timer with relative expiration, handled in hard irq context. 553 + pub struct RelativeHardMode<C: ClockSource>(PhantomData<C>); 554 + impl<C: ClockSource> HrTimerMode for RelativeHardMode<C> { 555 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_REL_HARD; 556 + 557 + type Clock = C; 558 + type Expires = Delta; 559 + } 560 + 561 + /// Timer with absolute expiration, pinned to CPU and handled in hard irq context. 562 + pub struct AbsolutePinnedHardMode<C: ClockSource>(PhantomData<C>); 563 + impl<C: ClockSource> HrTimerMode for AbsolutePinnedHardMode<C> { 564 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_ABS_PINNED_HARD; 565 + 566 + type Clock = C; 567 + type Expires = Instant<C>; 568 + } 569 + 570 + /// Timer with relative expiration, pinned to CPU and handled in hard irq context. 571 + pub struct RelativePinnedHardMode<C: ClockSource>(PhantomData<C>); 572 + impl<C: ClockSource> HrTimerMode for RelativePinnedHardMode<C> { 573 + const C_MODE: bindings::hrtimer_mode = bindings::hrtimer_mode_HRTIMER_MODE_REL_PINNED_HARD; 574 + 575 + type Clock = C; 576 + type Expires = Delta; 495 577 } 496 578 497 579 /// Use to implement the [`HasHrTimer<T>`] trait. ··· 596 496 impl$({$($generics:tt)*})? 
597 497 HasHrTimer<$timer_type:ty> 598 498 for $self:ty 599 - { self.$field:ident } 499 + { 500 + mode : $mode:ty, 501 + field : self.$field:ident $(,)? 502 + } 600 503 $($rest:tt)* 601 504 ) => { 602 505 // SAFETY: This implementation of `raw_get_timer` only compiles if the 603 506 // field has the right type. 604 507 unsafe impl$(<$($generics)*>)? $crate::time::hrtimer::HasHrTimer<$timer_type> for $self { 508 + type TimerMode = $mode; 605 509 606 510 #[inline] 607 511 unsafe fn raw_get_timer(
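The sealed `HrTimerMode` trait in the hunk ties three things together at compile time: the C mode constant, the clock source, and the expiry representation, so a relative-mode timer can only be started with a `Delta` and an absolute-mode timer only with an `Instant`. A standalone sketch of that typestate idea with two modes; all names (`TimerMode`, `FakeInstant`, `start`, the constants) are invented stand-ins, not kernel API.

```rust
mod private {
    // Sealed-trait pattern: only this module can add mode types.
    pub trait Sealed {}
}

// Stand-ins for the two expiry representations.
#[derive(Clone, Copy)]
struct FakeInstant(i64); // absolute point in time
#[derive(Clone, Copy)]
struct FakeDelta(i64); // duration from now

trait Expires {
    fn as_nanos(&self) -> i64;
}
impl Expires for FakeInstant {
    fn as_nanos(&self) -> i64 {
        self.0
    }
}
impl Expires for FakeDelta {
    fn as_nanos(&self) -> i64 {
        self.0
    }
}

// Stand-in for `HrTimerMode`: couples the C mode constant with the
// expiry type that mode interprets.
trait TimerMode: private::Sealed {
    const C_MODE: u32;
    type Expires: Expires;
}

struct Absolute;
impl private::Sealed for Absolute {}
impl TimerMode for Absolute {
    const C_MODE: u32 = 0; // plays the role of HRTIMER_MODE_ABS
    type Expires = FakeInstant;
}

struct Relative;
impl private::Sealed for Relative {}
impl TimerMode for Relative {
    const C_MODE: u32 = 1; // plays the role of HRTIMER_MODE_REL
    type Expires = FakeDelta;
}

// Like the reworked `start`: the expiry argument's type is dictated by
// the mode, so `start::<Relative>(FakeInstant(..))` fails to compile.
fn start<M: TimerMode>(expires: M::Expires) -> (u32, i64) {
    (M::C_MODE, expires.as_nanos())
}
```

This is why the old 15-variant `HrTimerMode` enum and its `into_c` match disappear: each mode type carries its own constant, and the mismatched-expiry case is simply unrepresentable.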
+6 -2
rust/kernel/time/hrtimer/arc.rs
··· 4 4 use super::HrTimer; 5 5 use super::HrTimerCallback; 6 6 use super::HrTimerHandle; 7 + use super::HrTimerMode; 7 8 use super::HrTimerPointer; 8 - use super::Ktime; 9 9 use super::RawHrTimerCallback; 10 10 use crate::sync::Arc; 11 11 use crate::sync::ArcBorrow; ··· 54 54 T: HasHrTimer<T>, 55 55 T: for<'a> HrTimerCallback<Pointer<'a> = Self>, 56 56 { 57 + type TimerMode = <T as HasHrTimer<T>>::TimerMode; 57 58 type TimerHandle = ArcHrTimerHandle<T>; 58 59 59 - fn start(self, expires: Ktime) -> ArcHrTimerHandle<T> { 60 + fn start( 61 + self, 62 + expires: <<T as HasHrTimer<T>>::TimerMode as HrTimerMode>::Expires, 63 + ) -> ArcHrTimerHandle<T> { 60 64 // SAFETY: 61 65 // - We keep `self` alive by wrapping it in a handle below. 62 66 // - Since we generate the pointer passed to `start` from a valid
+7 -3
rust/kernel/time/hrtimer/pin.rs
··· 4 4 use super::HrTimer; 5 5 use super::HrTimerCallback; 6 6 use super::HrTimerHandle; 7 - use super::Ktime; 7 + use super::HrTimerMode; 8 8 use super::RawHrTimerCallback; 9 9 use super::UnsafeHrTimerPointer; 10 10 use core::pin::Pin; ··· 54 54 T: HasHrTimer<T>, 55 55 T: HrTimerCallback<Pointer<'a> = Self>, 56 56 { 57 + type TimerMode = <T as HasHrTimer<T>>::TimerMode; 57 58 type TimerHandle = PinHrTimerHandle<'a, T>; 58 59 59 - unsafe fn start(self, expires: Ktime) -> Self::TimerHandle { 60 + unsafe fn start( 61 + self, 62 + expires: <<T as HasHrTimer<T>>::TimerMode as HrTimerMode>::Expires, 63 + ) -> Self::TimerHandle { 60 64 // Cast to pointer 61 65 let self_ptr: *const T = self.get_ref(); 62 66 ··· 83 79 84 80 unsafe extern "C" fn run(ptr: *mut bindings::hrtimer) -> bindings::hrtimer_restart { 85 81 // `HrTimer` is `repr(C)` 86 - let timer_ptr = ptr as *mut HrTimer<T>; 82 + let timer_ptr = ptr.cast::<HrTimer<T>>(); 87 83 88 84 // SAFETY: By the safety requirement of this function, `timer_ptr` 89 85 // points to a `HrTimer<T>` contained in an `T`.
+7 -3
rust/kernel/time/hrtimer/pin_mut.rs
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 3 use super::{ 4 - HasHrTimer, HrTimer, HrTimerCallback, HrTimerHandle, Ktime, RawHrTimerCallback, 4 + HasHrTimer, HrTimer, HrTimerCallback, HrTimerHandle, HrTimerMode, RawHrTimerCallback, 5 5 UnsafeHrTimerPointer, 6 6 }; 7 7 use core::{marker::PhantomData, pin::Pin, ptr::NonNull}; ··· 52 52 T: HasHrTimer<T>, 53 53 T: HrTimerCallback<Pointer<'a> = Self>, 54 54 { 55 + type TimerMode = <T as HasHrTimer<T>>::TimerMode; 55 56 type TimerHandle = PinMutHrTimerHandle<'a, T>; 56 57 57 - unsafe fn start(mut self, expires: Ktime) -> Self::TimerHandle { 58 + unsafe fn start( 59 + mut self, 60 + expires: <<T as HasHrTimer<T>>::TimerMode as HrTimerMode>::Expires, 61 + ) -> Self::TimerHandle { 58 62 // SAFETY: 59 63 // - We promise not to move out of `self`. We only pass `self` 60 64 // back to the caller as a `Pin<&mut self>`. ··· 87 83 88 84 unsafe extern "C" fn run(ptr: *mut bindings::hrtimer) -> bindings::hrtimer_restart { 89 85 // `HrTimer` is `repr(C)` 90 - let timer_ptr = ptr as *mut HrTimer<T>; 86 + let timer_ptr = ptr.cast::<HrTimer<T>>(); 91 87 92 88 // SAFETY: By the safety requirement of this function, `timer_ptr` 93 89 // points to a `HrTimer<T>` contained in an `T`.
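A small detail repeated in the `run` callbacks above: `ptr as *mut HrTimer<T>` becomes `ptr.cast::<HrTimer<T>>()`, in line with the newly enabled `ptr_as_ptr`/`ptr_cast_constness` lints. `cast` can only change the pointee type, never the mutability, so an accidental `*const` to `*mut` conversion fails to compile instead of sneaking through. A standalone illustration (plain std Rust, not kernel code):

```rust
#[repr(C)]
struct Wrapper {
    value: u32,
}

fn read_value(p: *const u32) -> u32 {
    // `.cast` changes only the pointee type; the pointer stays `*const`.
    // With a broad `as` cast we could also have (accidentally) written
    // `p as *mut Wrapper` and the compiler would not have objected.
    let w: *const Wrapper = p.cast::<Wrapper>();
    // SAFETY: in this demo `p` was derived from a valid `Wrapper`, which is
    // `repr(C)` with a single `u32` field at offset 0.
    unsafe { (*w).value }
}

fn demo() -> u32 {
    let w = Wrapper { value: 42 };
    let p: *const u32 = core::ptr::from_ref(&w).cast::<u32>();
    read_value(p)
}

fn main() {
    assert_eq!(demo(), 42);
}
```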
+6 -2
rust/kernel/time/hrtimer/tbox.rs
··· 4 4 use super::HrTimer; 5 5 use super::HrTimerCallback; 6 6 use super::HrTimerHandle; 7 + use super::HrTimerMode; 7 8 use super::HrTimerPointer; 8 - use super::Ktime; 9 9 use super::RawHrTimerCallback; 10 10 use crate::prelude::*; 11 11 use core::ptr::NonNull; ··· 64 64 T: for<'a> HrTimerCallback<Pointer<'a> = Pin<Box<T, A>>>, 65 65 A: crate::alloc::Allocator, 66 66 { 67 + type TimerMode = <T as HasHrTimer<T>>::TimerMode; 67 68 type TimerHandle = BoxHrTimerHandle<T, A>; 68 69 69 - fn start(self, expires: Ktime) -> Self::TimerHandle { 70 + fn start( 71 + self, 72 + expires: <<T as HasHrTimer<T>>::TimerMode as HrTimerMode>::Expires, 73 + ) -> Self::TimerHandle { 70 74 // SAFETY: 71 75 // - We will not move out of this box during timer callback (we pass an 72 76 // immutable reference to the callback).
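All four pointer wrappers (`Arc`, `Pin<&T>`, `Pin<&mut T>`, `Pin<Box<T>>`) forward `type TimerMode = <T as HasHrTimer<T>>::TimerMode` rather than declaring a mode of their own, so the expiry type is fixed once, where the timer field is defined, and every `start` signature projects through it. The forwarding pattern in plain Rust (toy trait names, not the kernel ones):

```rust
// Toy version of `HrTimerMode`: each mode fixes an expiry type.
trait Mode {
    type Expires;
}

struct Relative;
impl Mode for Relative {
    type Expires = i64; // stand-in for `Delta`
}

// Toy version of `HasHrTimer`: the container picks the mode once.
trait HasTimer {
    type TimerMode: Mode;
}

struct MyStruct;
impl HasTimer for MyStruct {
    type TimerMode = Relative;
}

// A pointer wrapper forwards the mode instead of re-declaring it, so the
// expiry type in `start` is always the one chosen at the field definition.
struct ArcLike<T: HasTimer>(T);

impl<T: HasTimer> ArcLike<T> {
    fn start(
        &self,
        expires: <<T as HasTimer>::TimerMode as Mode>::Expires,
    ) -> <<T as HasTimer>::TimerMode as Mode>::Expires {
        expires
    }
}

fn demo() -> i64 {
    ArcLike(MyStruct).start(10)
}

fn main() {
    assert_eq!(demo(), 10);
}
```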
+34 -195
rust/kernel/types.rs
··· 2 2 3 3 //! Kernel types. 4 4 5 + use crate::ffi::c_void; 5 6 use core::{ 6 7 cell::UnsafeCell, 7 8 marker::{PhantomData, PhantomPinned}, 8 - mem::{ManuallyDrop, MaybeUninit}, 9 + mem::MaybeUninit, 9 10 ops::{Deref, DerefMut}, 10 - ptr::NonNull, 11 11 }; 12 12 use pin_init::{PinInit, Wrapper, Zeroable}; 13 + 14 + pub use crate::sync::aref::{ARef, AlwaysRefCounted}; 13 15 14 16 /// Used to transfer ownership to and from foreign (non-Rust) languages. 15 17 /// ··· 23 21 /// 24 22 /// # Safety 25 23 /// 26 - /// Implementers must ensure that [`into_foreign`] returns a pointer which meets the alignment 27 - /// requirements of [`PointedTo`]. 28 - /// 29 - /// [`into_foreign`]: Self::into_foreign 30 - /// [`PointedTo`]: Self::PointedTo 24 + /// - Implementations must satisfy the guarantees of [`Self::into_foreign`]. 31 25 pub unsafe trait ForeignOwnable: Sized { 32 - /// Type used when the value is foreign-owned. In practical terms only defines the alignment of 33 - /// the pointer. 34 - type PointedTo; 26 + /// The alignment of pointers returned by `into_foreign`. 27 + const FOREIGN_ALIGN: usize; 35 28 36 29 /// Type used to immutably borrow a value that is currently foreign-owned. 37 30 type Borrowed<'a>; ··· 36 39 37 40 /// Converts a Rust-owned object to a foreign-owned one. 38 41 /// 42 + /// The foreign representation is a pointer to void. Aside from the guarantees listed below, 43 + /// there are no other guarantees for this pointer. For example, it might be invalid, dangling 44 + /// or pointing to uninitialized memory. Using it in any way except for [`from_foreign`], 45 + /// [`try_from_foreign`], [`borrow`], or [`borrow_mut`] can result in undefined behavior. 46 + /// 39 47 /// # Guarantees 40 48 /// 41 - /// The return value is guaranteed to be well-aligned, but there are no other guarantees for 42 - /// this pointer. For example, it might be null, dangling, or point to uninitialized memory. 
43 - /// Using it in any way except for [`ForeignOwnable::from_foreign`], [`ForeignOwnable::borrow`], 44 - /// [`ForeignOwnable::try_from_foreign`] can result in undefined behavior. 49 + /// - Minimum alignment of returned pointer is [`Self::FOREIGN_ALIGN`]. 50 + /// - The returned pointer is not null. 45 51 /// 46 52 /// [`from_foreign`]: Self::from_foreign 47 53 /// [`try_from_foreign`]: Self::try_from_foreign 48 54 /// [`borrow`]: Self::borrow 49 55 /// [`borrow_mut`]: Self::borrow_mut 50 - fn into_foreign(self) -> *mut Self::PointedTo; 56 + fn into_foreign(self) -> *mut c_void; 51 57 52 58 /// Converts a foreign-owned object back to a Rust-owned one. 53 59 /// ··· 60 60 /// must not be passed to `from_foreign` more than once. 61 61 /// 62 62 /// [`into_foreign`]: Self::into_foreign 63 - unsafe fn from_foreign(ptr: *mut Self::PointedTo) -> Self; 63 + unsafe fn from_foreign(ptr: *mut c_void) -> Self; 64 64 65 65 /// Tries to convert a foreign-owned object back to a Rust-owned one. 66 66 /// ··· 72 72 /// `ptr` must either be null or satisfy the safety requirements for [`from_foreign`]. 73 73 /// 74 74 /// [`from_foreign`]: Self::from_foreign 75 - unsafe fn try_from_foreign(ptr: *mut Self::PointedTo) -> Option<Self> { 75 + unsafe fn try_from_foreign(ptr: *mut c_void) -> Option<Self> { 76 76 if ptr.is_null() { 77 77 None 78 78 } else { ··· 95 95 /// 96 96 /// [`into_foreign`]: Self::into_foreign 97 97 /// [`from_foreign`]: Self::from_foreign 98 - unsafe fn borrow<'a>(ptr: *mut Self::PointedTo) -> Self::Borrowed<'a>; 98 + unsafe fn borrow<'a>(ptr: *mut c_void) -> Self::Borrowed<'a>; 99 99 100 100 /// Borrows a foreign-owned object mutably. 
101 101 /// ··· 123 123 /// [`from_foreign`]: Self::from_foreign 124 124 /// [`borrow`]: Self::borrow 125 125 /// [`Arc`]: crate::sync::Arc 126 - unsafe fn borrow_mut<'a>(ptr: *mut Self::PointedTo) -> Self::BorrowedMut<'a>; 126 + unsafe fn borrow_mut<'a>(ptr: *mut c_void) -> Self::BorrowedMut<'a>; 127 127 } 128 128 129 - // SAFETY: The `into_foreign` function returns a pointer that is dangling, but well-aligned. 129 + // SAFETY: The pointer returned by `into_foreign` comes from a well aligned 130 + // pointer to `()`. 130 131 unsafe impl ForeignOwnable for () { 131 - type PointedTo = (); 132 + const FOREIGN_ALIGN: usize = core::mem::align_of::<()>(); 132 133 type Borrowed<'a> = (); 133 134 type BorrowedMut<'a> = (); 134 135 135 - fn into_foreign(self) -> *mut Self::PointedTo { 136 + fn into_foreign(self) -> *mut c_void { 136 137 core::ptr::NonNull::dangling().as_ptr() 137 138 } 138 139 139 - unsafe fn from_foreign(_: *mut Self::PointedTo) -> Self {} 140 + unsafe fn from_foreign(_: *mut c_void) -> Self {} 140 141 141 - unsafe fn borrow<'a>(_: *mut Self::PointedTo) -> Self::Borrowed<'a> {} 142 - unsafe fn borrow_mut<'a>(_: *mut Self::PointedTo) -> Self::BorrowedMut<'a> {} 142 + unsafe fn borrow<'a>(_: *mut c_void) -> Self::Borrowed<'a> {} 143 + unsafe fn borrow_mut<'a>(_: *mut c_void) -> Self::BorrowedMut<'a> {} 143 144 } 144 145 145 146 /// Runs a cleanup function/closure when dropped. ··· 367 366 // initialize the `T`. 368 367 unsafe { 369 368 pin_init::pin_init_from_closure::<_, ::core::convert::Infallible>(move |slot| { 370 - init_func(Self::raw_get(slot)); 369 + init_func(Self::cast_into(slot)); 371 370 Ok(()) 372 371 }) 373 372 } ··· 387 386 // SAFETY: We contain a `MaybeUninit`, so it is OK for the `init_func` to not fully 388 387 // initialize the `T`. 
389 388 unsafe { 390 - pin_init::pin_init_from_closure::<_, E>(move |slot| init_func(Self::raw_get(slot))) 389 + pin_init::pin_init_from_closure::<_, E>(move |slot| init_func(Self::cast_into(slot))) 391 390 } 392 391 } 393 392 ··· 400 399 /// 401 400 /// This function is useful to get access to the value without creating intermediate 402 401 /// references. 403 - pub const fn raw_get(this: *const Self) -> *mut T { 402 + pub const fn cast_into(this: *const Self) -> *mut T { 404 403 UnsafeCell::raw_get(this.cast::<UnsafeCell<MaybeUninit<T>>>()).cast::<T>() 404 + } 405 + 406 + /// The opposite operation of [`Opaque::cast_into`]. 407 + pub const fn cast_from(this: *const T) -> *const Self { 408 + this.cast() 405 409 } 406 410 } 407 411 ··· 421 415 unsafe { PinInit::<T, E>::__pinned_init(slot, ptr) } 422 416 }) 423 417 } 424 - } 425 - 426 - /// Types that are _always_ reference counted. 427 - /// 428 - /// It allows such types to define their own custom ref increment and decrement functions. 429 - /// Additionally, it allows users to convert from a shared reference `&T` to an owned reference 430 - /// [`ARef<T>`]. 431 - /// 432 - /// This is usually implemented by wrappers to existing structures on the C side of the code. For 433 - /// Rust code, the recommendation is to use [`Arc`](crate::sync::Arc) to create reference-counted 434 - /// instances of a type. 435 - /// 436 - /// # Safety 437 - /// 438 - /// Implementers must ensure that increments to the reference count keep the object alive in memory 439 - /// at least until matching decrements are performed. 440 - /// 441 - /// Implementers must also ensure that all instances are reference-counted. (Otherwise they 442 - /// won't be able to honour the requirement that [`AlwaysRefCounted::inc_ref`] keep the object 443 - /// alive.) 444 - pub unsafe trait AlwaysRefCounted { 445 - /// Increments the reference count on the object. 446 - fn inc_ref(&self); 447 - 448 - /// Decrements the reference count on the object. 
449 - /// 450 - /// Frees the object when the count reaches zero. 451 - /// 452 - /// # Safety 453 - /// 454 - /// Callers must ensure that there was a previous matching increment to the reference count, 455 - /// and that the object is no longer used after its reference count is decremented (as it may 456 - /// result in the object being freed), unless the caller owns another increment on the refcount 457 - /// (e.g., it calls [`AlwaysRefCounted::inc_ref`] twice, then calls 458 - /// [`AlwaysRefCounted::dec_ref`] once). 459 - unsafe fn dec_ref(obj: NonNull<Self>); 460 - } 461 - 462 - /// An owned reference to an always-reference-counted object. 463 - /// 464 - /// The object's reference count is automatically decremented when an instance of [`ARef`] is 465 - /// dropped. It is also automatically incremented when a new instance is created via 466 - /// [`ARef::clone`]. 467 - /// 468 - /// # Invariants 469 - /// 470 - /// The pointer stored in `ptr` is non-null and valid for the lifetime of the [`ARef`] instance. In 471 - /// particular, the [`ARef`] instance owns an increment on the underlying object's reference count. 472 - pub struct ARef<T: AlwaysRefCounted> { 473 - ptr: NonNull<T>, 474 - _p: PhantomData<T>, 475 - } 476 - 477 - // SAFETY: It is safe to send `ARef<T>` to another thread when the underlying `T` is `Sync` because 478 - // it effectively means sharing `&T` (which is safe because `T` is `Sync`); additionally, it needs 479 - // `T` to be `Send` because any thread that has an `ARef<T>` may ultimately access `T` using a 480 - // mutable reference, for example, when the reference count reaches zero and `T` is dropped. 
481 - unsafe impl<T: AlwaysRefCounted + Sync + Send> Send for ARef<T> {} 482 - 483 - // SAFETY: It is safe to send `&ARef<T>` to another thread when the underlying `T` is `Sync` 484 - // because it effectively means sharing `&T` (which is safe because `T` is `Sync`); additionally, 485 - // it needs `T` to be `Send` because any thread that has a `&ARef<T>` may clone it and get an 486 - // `ARef<T>` on that thread, so the thread may ultimately access `T` using a mutable reference, for 487 - // example, when the reference count reaches zero and `T` is dropped. 488 - unsafe impl<T: AlwaysRefCounted + Sync + Send> Sync for ARef<T> {} 489 - 490 - impl<T: AlwaysRefCounted> ARef<T> { 491 - /// Creates a new instance of [`ARef`]. 492 - /// 493 - /// It takes over an increment of the reference count on the underlying object. 494 - /// 495 - /// # Safety 496 - /// 497 - /// Callers must ensure that the reference count was incremented at least once, and that they 498 - /// are properly relinquishing one increment. That is, if there is only one increment, callers 499 - /// must not use the underlying object anymore -- it is only safe to do so via the newly 500 - /// created [`ARef`]. 501 - pub unsafe fn from_raw(ptr: NonNull<T>) -> Self { 502 - // INVARIANT: The safety requirements guarantee that the new instance now owns the 503 - // increment on the refcount. 504 - Self { 505 - ptr, 506 - _p: PhantomData, 507 - } 508 - } 509 - 510 - /// Consumes the `ARef`, returning a raw pointer. 511 - /// 512 - /// This function does not change the refcount. After calling this function, the caller is 513 - /// responsible for the refcount previously managed by the `ARef`. 514 - /// 515 - /// # Examples 516 - /// 517 - /// ``` 518 - /// use core::ptr::NonNull; 519 - /// use kernel::types::{ARef, AlwaysRefCounted}; 520 - /// 521 - /// struct Empty {} 522 - /// 523 - /// # // SAFETY: TODO. 
524 - /// unsafe impl AlwaysRefCounted for Empty { 525 - /// fn inc_ref(&self) {} 526 - /// unsafe fn dec_ref(_obj: NonNull<Self>) {} 527 - /// } 528 - /// 529 - /// let mut data = Empty {}; 530 - /// let ptr = NonNull::<Empty>::new(&mut data).unwrap(); 531 - /// # // SAFETY: TODO. 532 - /// let data_ref: ARef<Empty> = unsafe { ARef::from_raw(ptr) }; 533 - /// let raw_ptr: NonNull<Empty> = ARef::into_raw(data_ref); 534 - /// 535 - /// assert_eq!(ptr, raw_ptr); 536 - /// ``` 537 - pub fn into_raw(me: Self) -> NonNull<T> { 538 - ManuallyDrop::new(me).ptr 539 - } 540 - } 541 - 542 - impl<T: AlwaysRefCounted> Clone for ARef<T> { 543 - fn clone(&self) -> Self { 544 - self.inc_ref(); 545 - // SAFETY: We just incremented the refcount above. 546 - unsafe { Self::from_raw(self.ptr) } 547 - } 548 - } 549 - 550 - impl<T: AlwaysRefCounted> Deref for ARef<T> { 551 - type Target = T; 552 - 553 - fn deref(&self) -> &Self::Target { 554 - // SAFETY: The type invariants guarantee that the object is valid. 555 - unsafe { self.ptr.as_ref() } 556 - } 557 - } 558 - 559 - impl<T: AlwaysRefCounted> From<&T> for ARef<T> { 560 - fn from(b: &T) -> Self { 561 - b.inc_ref(); 562 - // SAFETY: We just incremented the refcount above. 563 - unsafe { Self::from_raw(NonNull::from(b)) } 564 - } 565 - } 566 - 567 - impl<T: AlwaysRefCounted> Drop for ARef<T> { 568 - fn drop(&mut self) { 569 - // SAFETY: The type invariants guarantee that the `ARef` owns the reference we're about to 570 - // decrement. 571 - unsafe { T::dec_ref(self.ptr) }; 572 - } 573 - } 574 - 575 - /// A sum type that always holds either a value of type `L` or `R`. 576 - /// 577 - /// # Examples 578 - /// 579 - /// ``` 580 - /// use kernel::types::Either; 581 - /// 582 - /// let left_value: Either<i32, &str> = Either::Left(7); 583 - /// let right_value: Either<i32, &str> = Either::Right("right value"); 584 - /// ``` 585 - pub enum Either<L, R> { 586 - /// Constructs an instance of [`Either`] containing a value of type `L`. 
587 - Left(L), 588 - 589 - /// Constructs an instance of [`Either`] containing a value of type `R`. 590 - Right(R), 591 418 } 592 419 593 420 /// Zero-sized type to mark types not [`Send`].
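The `types.rs` hunk erases `ForeignOwnable::PointedTo` in favour of a plain `*mut c_void` plus a `FOREIGN_ALIGN` constant, which is all that callers ever relied on. A userland sketch of the reshaped round-trip contract, using `Box` (this is a toy trait modelled on the kernel one, not the kernel trait itself):

```rust
use std::ffi::c_void;

// Toy version of the reworked trait: the foreign representation is an
// opaque `*mut c_void`; only the pointer's alignment survives, as a const.
unsafe trait ForeignOwnable: Sized {
    const FOREIGN_ALIGN: usize;

    fn into_foreign(self) -> *mut c_void;

    // Contract: `ptr` must come from `into_foreign` and be consumed once.
    unsafe fn from_foreign(ptr: *mut c_void) -> Self;
}

// SAFETY: `Box::into_raw` returns a non-null pointer aligned for `u64`.
unsafe impl ForeignOwnable for Box<u64> {
    const FOREIGN_ALIGN: usize = std::mem::align_of::<u64>();

    fn into_foreign(self) -> *mut c_void {
        Box::into_raw(self).cast::<c_void>()
    }

    unsafe fn from_foreign(ptr: *mut c_void) -> Self {
        // SAFETY: per the contract, `ptr` came from `Box::into_raw` above.
        unsafe { Box::from_raw(ptr.cast::<u64>()) }
    }
}

fn round_trip(v: u64) -> u64 {
    let raw = Box::new(v).into_foreign();
    // The only guarantees: non-null and at least `FOREIGN_ALIGN`-aligned.
    assert!(!raw.is_null());
    assert_eq!((raw as usize) % <Box<u64> as ForeignOwnable>::FOREIGN_ALIGN, 0);
    // SAFETY: `raw` was just produced by `into_foreign` and is used once.
    *unsafe { Box::<u64>::from_foreign(raw) }
}

fn main() {
    assert_eq!(round_trip(7), 7);
}
```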
+153 -14
rust/kernel/uaccess.rs
··· 8 8 alloc::{Allocator, Flags}, 9 9 bindings, 10 10 error::Result, 11 - ffi::c_void, 11 + ffi::{c_char, c_void}, 12 12 prelude::*, 13 13 transmute::{AsBytes, FromBytes}, 14 14 }; 15 15 use core::mem::{size_of, MaybeUninit}; 16 16 17 - /// The type used for userspace addresses. 18 - pub type UserPtr = usize; 17 + /// A pointer into userspace. 18 + /// 19 + /// This is the Rust equivalent to C pointers tagged with `__user`. 20 + #[repr(transparent)] 21 + #[derive(Copy, Clone)] 22 + pub struct UserPtr(*mut c_void); 23 + 24 + impl UserPtr { 25 + /// Create a `UserPtr` from an integer representing the userspace address. 26 + #[inline] 27 + pub fn from_addr(addr: usize) -> Self { 28 + Self(addr as *mut c_void) 29 + } 30 + 31 + /// Create a `UserPtr` from a pointer representing the userspace address. 32 + #[inline] 33 + pub fn from_ptr(addr: *mut c_void) -> Self { 34 + Self(addr) 35 + } 36 + 37 + /// Cast this userspace pointer to a raw const void pointer. 38 + /// 39 + /// It is up to the caller to use the returned pointer correctly. 40 + #[inline] 41 + pub fn as_const_ptr(self) -> *const c_void { 42 + self.0 43 + } 44 + 45 + /// Cast this userspace pointer to a raw mutable void pointer. 46 + /// 47 + /// It is up to the caller to use the returned pointer correctly. 48 + #[inline] 49 + pub fn as_mut_ptr(self) -> *mut c_void { 50 + self.0 51 + } 52 + 53 + /// Increment this user pointer by `add` bytes. 54 + /// 55 + /// This addition is wrapping, so wrapping around the address space does not result in a panic 56 + /// even if `CONFIG_RUST_OVERFLOW_CHECKS` is enabled. 57 + #[inline] 58 + pub fn wrapping_byte_add(self, add: usize) -> UserPtr { 59 + UserPtr(self.0.wrapping_byte_add(add)) 60 + } 61 + } 19 62 20 63 /// A pointer to an area in userspace memory, which can be either read-only or read-write. 21 64 /// ··· 220 177 pub fn skip(&mut self, num_skip: usize) -> Result { 221 178 // Update `self.length` first since that's the fallible part of this operation. 
222 179 self.length = self.length.checked_sub(num_skip).ok_or(EFAULT)?; 223 - self.ptr = self.ptr.wrapping_add(num_skip); 180 + self.ptr = self.ptr.wrapping_byte_add(num_skip); 224 181 Ok(()) 225 182 } 226 183 ··· 267 224 } 268 225 // SAFETY: `out_ptr` points into a mutable slice of length `len`, so we may write 269 226 // that many bytes to it. 270 - let res = unsafe { bindings::copy_from_user(out_ptr, self.ptr as *const c_void, len) }; 227 + let res = unsafe { bindings::copy_from_user(out_ptr, self.ptr.as_const_ptr(), len) }; 271 228 if res != 0 { 272 229 return Err(EFAULT); 273 230 } 274 - self.ptr = self.ptr.wrapping_add(len); 231 + self.ptr = self.ptr.wrapping_byte_add(len); 275 232 self.length -= len; 276 233 Ok(()) 277 234 } ··· 283 240 pub fn read_slice(&mut self, out: &mut [u8]) -> Result { 284 241 // SAFETY: The types are compatible and `read_raw` doesn't write uninitialized bytes to 285 242 // `out`. 286 - let out = unsafe { &mut *(out as *mut [u8] as *mut [MaybeUninit<u8>]) }; 243 + let out = unsafe { &mut *(core::ptr::from_mut(out) as *mut [MaybeUninit<u8>]) }; 287 244 self.read_raw(out) 288 245 } 289 246 ··· 305 262 let res = unsafe { 306 263 bindings::_copy_from_user( 307 264 out.as_mut_ptr().cast::<c_void>(), 308 - self.ptr as *const c_void, 265 + self.ptr.as_const_ptr(), 309 266 len, 310 267 ) 311 268 }; 312 269 if res != 0 { 313 270 return Err(EFAULT); 314 271 } 315 - self.ptr = self.ptr.wrapping_add(len); 272 + self.ptr = self.ptr.wrapping_byte_add(len); 316 273 self.length -= len; 317 274 // SAFETY: The read above has initialized all bytes in `out`, and since `T` implements 318 275 // `FromBytes`, any bit-pattern is a valid value for this type. ··· 333 290 // vector have been initialized. 334 291 unsafe { buf.inc_len(len) }; 335 292 Ok(()) 293 + } 294 + 295 + /// Read a NUL-terminated string from userspace and return it. 296 + /// 297 + /// The string is read into `buf` and a NUL-terminator is added if the end of `buf` is reached. 
298 + /// Since there must be space to add a NUL-terminator, the buffer must not be empty. The 299 + /// returned `&CStr` points into `buf`. 300 + /// 301 + /// Fails with [`EFAULT`] if the read happens on a bad address (some data may have been 302 + /// copied). 303 + #[doc(alias = "strncpy_from_user")] 304 + pub fn strcpy_into_buf<'buf>(self, buf: &'buf mut [u8]) -> Result<&'buf CStr> { 305 + if buf.is_empty() { 306 + return Err(EINVAL); 307 + } 308 + 309 + // SAFETY: The types are compatible and `strncpy_from_user` doesn't write uninitialized 310 + // bytes to `buf`. 311 + let mut dst = unsafe { &mut *(core::ptr::from_mut(buf) as *mut [MaybeUninit<u8>]) }; 312 + 313 + // We never read more than `self.length` bytes. 314 + if dst.len() > self.length { 315 + dst = &mut dst[..self.length]; 316 + } 317 + 318 + let mut len = raw_strncpy_from_user(dst, self.ptr)?; 319 + if len < dst.len() { 320 + // Add one to include the NUL-terminator. 321 + len += 1; 322 + } else if len < buf.len() { 323 + // This implies that `len == dst.len() < buf.len()`. 324 + // 325 + // This means that we could not fill the entire buffer, but we had to stop reading 326 + // because we hit the `self.length` limit of this `UserSliceReader`. Since we did not 327 + // fill the buffer, we treat this case as if we tried to read past the `self.length` 328 + // limit and received a page fault, which is consistent with other `UserSliceReader` 329 + // methods that also return page faults when you exceed `self.length`. 330 + return Err(EFAULT); 331 + } else { 332 + // This implies that `len == buf.len()`. 333 + // 334 + // This means that we filled the buffer exactly. In this case, we add a NUL-terminator 335 + // and return it. Unlike the `len < dst.len()` branch, don't modify `len` because it 336 + // already represents the length including the NUL-terminator. 337 + // 338 + // SAFETY: Due to the check at the beginning, the buffer is not empty. 
339 + unsafe { *buf.last_mut().unwrap_unchecked() = 0 }; 340 + } 341 + 342 + // This method consumes `self`, so it can only be called once, thus we do not need to 343 + // update `self.length`. This sidesteps concerns such as whether `self.length` should be 344 + // incremented by `len` or `len-1` in the `len == buf.len()` case. 345 + 346 + // SAFETY: There are two cases: 347 + // * If we hit the `len < dst.len()` case, then `raw_strncpy_from_user` guarantees that 348 + // this slice contains exactly one NUL byte at the end of the string. 349 + // * Otherwise, `raw_strncpy_from_user` guarantees that the string contained no NUL bytes, 350 + // and we have since added a NUL byte at the end. 351 + Ok(unsafe { CStr::from_bytes_with_nul_unchecked(&buf[..len]) }) 336 352 } 337 353 } 338 354 ··· 429 327 } 430 328 // SAFETY: `data_ptr` points into an immutable slice of length `len`, so we may read 431 329 // that many bytes from it. 432 - let res = unsafe { bindings::copy_to_user(self.ptr as *mut c_void, data_ptr, len) }; 330 + let res = unsafe { bindings::copy_to_user(self.ptr.as_mut_ptr(), data_ptr, len) }; 433 331 if res != 0 { 434 332 return Err(EFAULT); 435 333 } 436 - self.ptr = self.ptr.wrapping_add(len); 334 + self.ptr = self.ptr.wrapping_byte_add(len); 437 335 self.length -= len; 438 336 Ok(()) 439 337 } ··· 456 354 // is a compile-time constant. 457 355 let res = unsafe { 458 356 bindings::_copy_to_user( 459 - self.ptr as *mut c_void, 460 - (value as *const T).cast::<c_void>(), 357 + self.ptr.as_mut_ptr(), 358 + core::ptr::from_ref(value).cast::<c_void>(), 461 359 len, 462 360 ) 463 361 }; 464 362 if res != 0 { 465 363 return Err(EFAULT); 466 364 } 467 - self.ptr = self.ptr.wrapping_add(len); 365 + self.ptr = self.ptr.wrapping_byte_add(len); 468 366 self.length -= len; 469 367 Ok(()) 470 368 } 369 + } 370 + 371 + /// Reads a nul-terminated string into `dst` and returns the length. 
372 + /// 373 + /// This reads from userspace until a NUL byte is encountered, or until `dst.len()` bytes have been 374 + /// read. Fails with [`EFAULT`] if a read happens on a bad address (some data may have been 375 + /// copied). When the end of the buffer is encountered, no NUL byte is added, so the string is 376 + /// *not* guaranteed to be NUL-terminated when `Ok(dst.len())` is returned. 377 + /// 378 + /// # Guarantees 379 + /// 380 + /// When this function returns `Ok(len)`, it is guaranteed that the first `len` bytes of `dst` are 381 + /// initialized and non-zero. Furthermore, if `len < dst.len()`, then `dst[len]` is a NUL byte. 382 + #[inline] 383 + fn raw_strncpy_from_user(dst: &mut [MaybeUninit<u8>], src: UserPtr) -> Result<usize> { 384 + // CAST: Slice lengths are guaranteed to be `<= isize::MAX`. 385 + let len = dst.len() as isize; 386 + 387 + // SAFETY: `dst` is valid for writing `dst.len()` bytes. 388 + let res = unsafe { 389 + bindings::strncpy_from_user( 390 + dst.as_mut_ptr().cast::<c_char>(), 391 + src.as_const_ptr().cast::<c_char>(), 392 + len, 393 + ) 394 + }; 395 + 396 + if res < 0 { 397 + return Err(Error::from_errno(res as i32)); 398 + } 399 + 400 + #[cfg(CONFIG_RUST_OVERFLOW_CHECKS)] 401 + assert!(res <= len); 402 + 403 + // GUARANTEES: `strncpy_from_user` was successful, so `dst` has contents in accordance with the 404 + // guarantees of this function. 405 + Ok(res as usize) 471 406 }
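The three-way branch in `strcpy_into_buf` above is the subtle part: a NUL found within the window, the window cut short by `self.length`, or the buffer filled exactly. A userland model of the same decision table, with a byte slice standing in for userspace and a loop standing in for `strncpy_from_user` (a behavioural sketch, not the kernel implementation):

```rust
// Userland model of `strcpy_into_buf`'s three outcomes. `src` stands in for
// userspace memory and `limit` for the reader's remaining `self.length`.
// Returns the NUL-terminated contents of `buf`, or Err(()) for a "fault".
fn strcpy_model<'a>(src: &[u8], limit: usize, buf: &'a mut [u8]) -> Result<&'a [u8], ()> {
    if buf.is_empty() {
        return Err(()); // kernel version returns EINVAL here
    }
    let dst_len = buf.len().min(limit);
    // Stand-in for `strncpy_from_user`: copy until NUL or `dst_len` bytes.
    let mut len = 0;
    while len < dst_len {
        let b = *src.get(len).ok_or(())?; // out-of-bounds read = "fault"
        if b == 0 {
            break;
        }
        buf[len] = b;
        len += 1;
    }
    if len < dst_len {
        // Found a NUL within the window: include it in the returned slice.
        buf[len] = 0;
        len += 1;
    } else if len < buf.len() {
        // Stopped because of `limit`, not because the buffer was full: the
        // kernel treats this like reading past the slice, i.e. EFAULT.
        return Err(());
    } else {
        // Filled the buffer exactly: the last byte is replaced with a NUL,
        // truncating the string; `len` already counts the terminator.
        buf[len - 1] = 0;
    }
    Ok(&buf[..len])
}

fn main() {
    let mut buf = [0u8; 8];
    // String fits: terminator included in the result.
    assert_eq!(strcpy_model(b"hi\0", 8, &mut buf), Ok(&b"hi\0"[..]));
    // Hit the reader's length limit before the buffer end: fault.
    assert_eq!(strcpy_model(b"hello\0", 3, &mut buf), Err(()));
    // Buffer filled exactly: last byte replaced by NUL.
    let mut small = [0u8; 3];
    assert_eq!(strcpy_model(b"abcd\0", 8, &mut small), Ok(&b"ab\0"[..]));
}
```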
+333 -9
rust/kernel/workqueue.rs
··· 26 26 //! * The [`WorkItemPointer`] trait is implemented for the pointer type that points at a something 27 27 //! that implements [`WorkItem`]. 28 28 //! 29 - //! ## Example 29 + //! ## Examples 30 30 //! 31 31 //! This example defines a struct that holds an integer and can be scheduled on the workqueue. When 32 32 //! the struct is executed, it will print the integer. Since there is only one `work_struct` field, ··· 131 131 //! # print_2_later(MyStruct::new(41, 42).unwrap()); 132 132 //! ``` 133 133 //! 134 + //! This example shows how you can schedule delayed work items: 135 + //! 136 + //! ``` 137 + //! use kernel::sync::Arc; 138 + //! use kernel::workqueue::{self, impl_has_delayed_work, new_delayed_work, DelayedWork, WorkItem}; 139 + //! 140 + //! #[pin_data] 141 + //! struct MyStruct { 142 + //! value: i32, 143 + //! #[pin] 144 + //! work: DelayedWork<MyStruct>, 145 + //! } 146 + //! 147 + //! impl_has_delayed_work! { 148 + //! impl HasDelayedWork<Self> for MyStruct { self.work } 149 + //! } 150 + //! 151 + //! impl MyStruct { 152 + //! fn new(value: i32) -> Result<Arc<Self>> { 153 + //! Arc::pin_init( 154 + //! pin_init!(MyStruct { 155 + //! value, 156 + //! work <- new_delayed_work!("MyStruct::work"), 157 + //! }), 158 + //! GFP_KERNEL, 159 + //! ) 160 + //! } 161 + //! } 162 + //! 163 + //! impl WorkItem for MyStruct { 164 + //! type Pointer = Arc<MyStruct>; 165 + //! 166 + //! fn run(this: Arc<MyStruct>) { 167 + //! pr_info!("The value is: {}\n", this.value); 168 + //! } 169 + //! } 170 + //! 171 + //! /// This method will enqueue the struct for execution on the system workqueue, where its value 172 + //! /// will be printed 12 jiffies later. 173 + //! fn print_later(val: Arc<MyStruct>) { 174 + //! let _ = workqueue::system().enqueue_delayed(val, 12); 175 + //! } 176 + //! 177 + //! /// It is also possible to use the ordinary `enqueue` method together with `DelayedWork`. This 178 + //! 
/// is equivalent to calling `enqueue_delayed` with a delay of zero. 179 + //! fn print_now(val: Arc<MyStruct>) { 180 + //! let _ = workqueue::system().enqueue(val); 181 + //! } 182 + //! # print_later(MyStruct::new(42).unwrap()); 183 + //! # print_now(MyStruct::new(42).unwrap()); 184 + //! ``` 185 + //! 134 186 //! C header: [`include/linux/workqueue.h`](srctree/include/linux/workqueue.h) 135 187 136 - use crate::alloc::{AllocError, Flags}; 137 - use crate::{prelude::*, sync::Arc, sync::LockClassKey, types::Opaque}; 188 + use crate::{ 189 + alloc::{AllocError, Flags}, 190 + container_of, 191 + prelude::*, 192 + sync::Arc, 193 + sync::LockClassKey, 194 + time::Jiffies, 195 + types::Opaque, 196 + }; 138 197 use core::marker::PhantomData; 139 198 140 199 /// Creates a [`Work`] initialiser with the given name and a newly-created lock class. ··· 204 145 }; 205 146 } 206 147 pub use new_work; 148 + 149 + /// Creates a [`DelayedWork`] initialiser with the given name and a newly-created lock class. 150 + #[macro_export] 151 + macro_rules! new_delayed_work { 152 + () => { 153 + $crate::workqueue::DelayedWork::new( 154 + $crate::optional_name!(), 155 + $crate::static_lock_class!(), 156 + $crate::c_str!(::core::concat!( 157 + ::core::file!(), 158 + ":", 159 + ::core::line!(), 160 + "_timer" 161 + )), 162 + $crate::static_lock_class!(), 163 + ) 164 + }; 165 + ($name:literal) => { 166 + $crate::workqueue::DelayedWork::new( 167 + $crate::c_str!($name), 168 + $crate::static_lock_class!(), 169 + $crate::c_str!(::core::concat!($name, "_timer")), 170 + $crate::static_lock_class!(), 171 + ) 172 + }; 173 + } 174 + pub use new_delayed_work; 207 175 208 176 /// A kernel work queue. 209 177 /// ··· 256 170 pub unsafe fn from_raw<'a>(ptr: *const bindings::workqueue_struct) -> &'a Queue { 257 171 // SAFETY: The `Queue` type is `#[repr(transparent)]`, so the pointer cast is valid. The 258 172 // caller promises that the pointer is not dangling. 
-        unsafe { &*(ptr as *const Queue) }
+        unsafe { &*ptr.cast::<Queue>() }
     }

     /// Enqueues a work item.
···
         unsafe {
             w.__enqueue(move |work_ptr| {
                 bindings::queue_work_on(
-                    bindings::wq_misc_consts_WORK_CPU_UNBOUND as _,
+                    bindings::wq_misc_consts_WORK_CPU_UNBOUND as ffi::c_int,
                     queue_ptr,
                     work_ptr,
+                )
+            })
+        }
+    }
+
+    /// Enqueues a delayed work item.
+    ///
+    /// This may fail if the work item is already enqueued in a workqueue.
+    ///
+    /// The work item will be submitted using `WORK_CPU_UNBOUND`.
+    pub fn enqueue_delayed<W, const ID: u64>(&self, w: W, delay: Jiffies) -> W::EnqueueOutput
+    where
+        W: RawDelayedWorkItem<ID> + Send + 'static,
+    {
+        let queue_ptr = self.0.get();
+
+        // SAFETY: We only return `false` if the `work_struct` is already in a workqueue. The other
+        // `__enqueue` requirements are not relevant since `W` is `Send` and static.
+        //
+        // The call to `bindings::queue_delayed_work_on` will dereference the provided raw pointer,
+        // which is ok because `__enqueue` guarantees that the pointer is valid for the duration of
+        // this closure, and the safety requirements of `RawDelayedWorkItem` expands this
+        // requirement to apply to the entire `delayed_work`.
+        //
+        // Furthermore, if the C workqueue code accesses the pointer after this call to
+        // `__enqueue`, then the work item was successfully enqueued, and
+        // `bindings::queue_delayed_work_on` will have returned true. In this case, `__enqueue`
+        // promises that the raw pointer will stay valid until we call the function pointer in the
+        // `work_struct`, so the access is ok.
+        unsafe {
+            w.__enqueue(move |work_ptr| {
+                bindings::queue_delayed_work_on(
+                    bindings::wq_misc_consts_WORK_CPU_UNBOUND as ffi::c_int,
+                    queue_ptr,
+                    container_of!(work_ptr, bindings::delayed_work, work),
+                    delay,
                 )
             })
         }
···
     where
         F: FnOnce(*mut bindings::work_struct) -> bool;
 }
+
+/// A raw delayed work item.
+///
+/// # Safety
+///
+/// If the `__enqueue` method in the `RawWorkItem` implementation calls the closure, then the
+/// provided pointer must point at the `work` field of a valid `delayed_work`, and the guarantees
+/// that `__enqueue` provides about accessing the `work_struct` must also apply to the rest of the
+/// `delayed_work` struct.
+pub unsafe trait RawDelayedWorkItem<const ID: u64>: RawWorkItem<ID> {}

 /// Defines the method that should be called directly when a work item is executed.
 ///
···
         //
         // A pointer cast would also be ok due to `#[repr(transparent)]`. We use `addr_of!` so that
         // the compiler does not complain that the `work` field is unused.
-        unsafe { Opaque::raw_get(core::ptr::addr_of!((*ptr).work)) }
+        unsafe { Opaque::cast_into(core::ptr::addr_of!((*ptr).work)) }
     }
 }

-/// Declares that a type has a [`Work<T, ID>`] field.
+/// Declares that a type contains a [`Work<T, ID>`].
 ///
 /// The intended way of using this trait is via the [`impl_has_work!`] macro. You can use the macro
 /// like this:
···
     impl{T} HasWork<Self> for ClosureWork<T> { self.work }
 }

+/// Links for a delayed work item.
+///
+/// This struct contains a function pointer to the [`run`] function from the [`WorkItemPointer`]
+/// trait, and defines the linked list pointers necessary to enqueue a work item in a workqueue in
+/// a delayed manner.
+///
+/// Wraps the kernel's C `struct delayed_work`.
+///
+/// This is a helper type used to associate a `delayed_work` with the [`WorkItem`] that uses it.
+///
+/// [`run`]: WorkItemPointer::run
+#[pin_data]
+#[repr(transparent)]
+pub struct DelayedWork<T: ?Sized, const ID: u64 = 0> {
+    #[pin]
+    dwork: Opaque<bindings::delayed_work>,
+    _inner: PhantomData<T>,
+}
+
+// SAFETY: Kernel work items are usable from any thread.
+//
+// We do not need to constrain `T` since the work item does not actually contain a `T`.
+unsafe impl<T: ?Sized, const ID: u64> Send for DelayedWork<T, ID> {}
+// SAFETY: Kernel work items are usable from any thread.
+//
+// We do not need to constrain `T` since the work item does not actually contain a `T`.
+unsafe impl<T: ?Sized, const ID: u64> Sync for DelayedWork<T, ID> {}
+
+impl<T: ?Sized, const ID: u64> DelayedWork<T, ID> {
+    /// Creates a new instance of [`DelayedWork`].
+    #[inline]
+    pub fn new(
+        work_name: &'static CStr,
+        work_key: Pin<&'static LockClassKey>,
+        timer_name: &'static CStr,
+        timer_key: Pin<&'static LockClassKey>,
+    ) -> impl PinInit<Self>
+    where
+        T: WorkItem<ID>,
+    {
+        pin_init!(Self {
+            dwork <- Opaque::ffi_init(|slot: *mut bindings::delayed_work| {
+                // SAFETY: The `WorkItemPointer` implementation promises that `run` can be used as
+                // the work item function.
+                unsafe {
+                    bindings::init_work_with_key(
+                        core::ptr::addr_of_mut!((*slot).work),
+                        Some(T::Pointer::run),
+                        false,
+                        work_name.as_char_ptr(),
+                        work_key.as_ptr(),
+                    )
+                }
+
+                // SAFETY: The `delayed_work_timer_fn` function pointer can be used here because
+                // the timer is embedded in a `struct delayed_work`, and only ever scheduled via
+                // the core workqueue code, and configured to run in irqsafe context.
+                unsafe {
+                    bindings::timer_init_key(
+                        core::ptr::addr_of_mut!((*slot).timer),
+                        Some(bindings::delayed_work_timer_fn),
+                        bindings::TIMER_IRQSAFE,
+                        timer_name.as_char_ptr(),
+                        timer_key.as_ptr(),
+                    )
+                }
+            }),
+            _inner: PhantomData,
+        })
+    }
+
+    /// Get a pointer to the inner `delayed_work`.
+    ///
+    /// # Safety
+    ///
+    /// The provided pointer must not be dangling and must be properly aligned. (But the memory
+    /// need not be initialized.)
+    #[inline]
+    pub unsafe fn raw_as_work(ptr: *const Self) -> *mut Work<T, ID> {
+        // SAFETY: The caller promises that the pointer is aligned and not dangling.
+        let dw: *mut bindings::delayed_work =
+            unsafe { Opaque::cast_into(core::ptr::addr_of!((*ptr).dwork)) };
+        // SAFETY: The caller promises that the pointer is aligned and not dangling.
+        let wrk: *mut bindings::work_struct = unsafe { core::ptr::addr_of_mut!((*dw).work) };
+        // CAST: Work and work_struct have compatible layouts.
+        wrk.cast()
+    }
+}
+
+/// Declares that a type contains a [`DelayedWork<T, ID>`].
+///
+/// # Safety
+///
+/// The `HasWork<T, ID>` implementation must return a `work_struct` that is stored in the `work`
+/// field of a `delayed_work` with the same access rules as the `work_struct`.
+pub unsafe trait HasDelayedWork<T, const ID: u64 = 0>: HasWork<T, ID> {}
+
+/// Used to safely implement the [`HasDelayedWork<T, ID>`] trait.
+///
+/// This macro also implements the [`HasWork`] trait, so you do not need to use [`impl_has_work!`]
+/// when using this macro.
+///
+/// # Examples
+///
+/// ```
+/// use kernel::sync::Arc;
+/// use kernel::workqueue::{self, impl_has_delayed_work, DelayedWork};
+///
+/// struct MyStruct<'a, T, const N: usize> {
+///     work_field: DelayedWork<MyStruct<'a, T, N>, 17>,
+///     f: fn(&'a [T; N]),
+/// }
+///
+/// impl_has_delayed_work! {
+///     impl{'a, T, const N: usize} HasDelayedWork<MyStruct<'a, T, N>, 17>
+///     for MyStruct<'a, T, N> { self.work_field }
+/// }
+/// ```
+#[macro_export]
+macro_rules! impl_has_delayed_work {
+    ($(impl$({$($generics:tt)*})?
+       HasDelayedWork<$work_type:ty $(, $id:tt)?>
+       for $self:ty
+    { self.$field:ident }
+    )*) => {$(
+        // SAFETY: The implementation of `raw_get_work` only compiles if the field has the right
+        // type.
+        unsafe impl$(<$($generics)+>)?
+            $crate::workqueue::HasDelayedWork<$work_type $(, $id)?> for $self {}
+
+        // SAFETY: The implementation of `raw_get_work` only compiles if the field has the right
+        // type.
+        unsafe impl$(<$($generics)+>)? $crate::workqueue::HasWork<$work_type $(, $id)?> for $self {
+            #[inline]
+            unsafe fn raw_get_work(
+                ptr: *mut Self
+            ) -> *mut $crate::workqueue::Work<$work_type $(, $id)?> {
+                // SAFETY: The caller promises that the pointer is not dangling.
+                let ptr: *mut $crate::workqueue::DelayedWork<$work_type $(, $id)?> = unsafe {
+                    ::core::ptr::addr_of_mut!((*ptr).$field)
+                };
+
+                // SAFETY: The caller promises that the pointer is not dangling.
+                unsafe { $crate::workqueue::DelayedWork::raw_as_work(ptr) }
+            }
+
+            #[inline]
+            unsafe fn work_container_of(
+                ptr: *mut $crate::workqueue::Work<$work_type $(, $id)?>,
+            ) -> *mut Self {
+                // SAFETY: The caller promises that the pointer points at a field of the right type
+                // in the right kind of struct.
+                let ptr = unsafe { $crate::workqueue::Work::raw_get(ptr) };
+
+                // SAFETY: The caller promises that the pointer points at a field of the right type
+                // in the right kind of struct.
+                let delayed_work = unsafe {
+                    $crate::container_of!(ptr, $crate::bindings::delayed_work, work)
+                };
+
+                let delayed_work: *mut $crate::workqueue::DelayedWork<$work_type $(, $id)?> =
+                    delayed_work.cast();
+
+                // SAFETY: The caller promises that the pointer points at a field of the right type
+                // in the right kind of struct.
+                unsafe { $crate::container_of!(delayed_work, Self, $field) }
+            }
+        }
+    )*};
+}
+pub use impl_has_delayed_work;
+
 // SAFETY: The `__enqueue` implementation in RawWorkItem uses a `work_struct` initialized with the
 // `run` method of this trait as the function pointer because:
 // - `__enqueue` gets the `work_struct` from the `Work` field, using `T::raw_get_work`.
···
 {
     unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
         // The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
-        let ptr = ptr as *mut Work<T, ID>;
+        let ptr = ptr.cast::<Work<T, ID>>();
         // SAFETY: This computes the pointer that `__enqueue` got from `Arc::into_raw`.
         let ptr = unsafe { T::work_container_of(ptr) };
         // SAFETY: This pointer comes from `Arc::into_raw` and we've been given back ownership.
···
     }
 }

+// SAFETY: By the safety requirements of `HasDelayedWork`, the `work_struct` returned by methods in
+// `HasWork` provides a `work_struct` that is the `work` field of a `delayed_work`, and the rest of
+// the `delayed_work` has the same access rules as its `work` field.
+unsafe impl<T, const ID: u64> RawDelayedWorkItem<ID> for Arc<T>
+where
+    T: WorkItem<ID, Pointer = Self>,
+    T: HasDelayedWork<T, ID>,
+{
+}
+
 // SAFETY: TODO.
 unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<KBox<T>>
 where
···
 {
     unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
         // The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
-        let ptr = ptr as *mut Work<T, ID>;
+        let ptr = ptr.cast::<Work<T, ID>>();
         // SAFETY: This computes the pointer that `__enqueue` got from `Arc::into_raw`.
         let ptr = unsafe { T::work_container_of(ptr) };
         // SAFETY: This pointer comes from `Arc::into_raw` and we've been given back ownership.
···
         unsafe { ::core::hint::unreachable_unchecked() }
     }
 }
+}
+
+// SAFETY: By the safety requirements of `HasDelayedWork`, the `work_struct` returned by methods in
+// `HasWork` provides a `work_struct` that is the `work` field of a `delayed_work`, and the rest of
+// the `delayed_work` has the same access rules as its `work` field.
+unsafe impl<T, const ID: u64> RawDelayedWorkItem<ID> for Pin<KBox<T>>
+where
+    T: WorkItem<ID, Pointer = Self>,
+    T: HasDelayedWork<T, ID>,
+{
 }

 /// Returns the system work queue (`system_wq`).
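The `work_container_of` path above recovers a pointer to the containing struct from a pointer to its embedded `work_struct`, which is what the kernel's `container_of!` macro computes. A plain-Rust sketch of that pointer arithmetic, using the stable `core::mem::offset_of!` macro (the `DelayedItem` type and `item_container_of` helper are hypothetical stand-ins, not kernel API):

```rust
use core::mem::offset_of;

// Hypothetical stand-in for a kernel struct embedding a `work_struct`.
#[repr(C)]
pub struct DelayedItem {
    pub id: u64,
    pub work: u32, // stand-in for the embedded `work` field
}

/// Recover a pointer to the containing `DelayedItem` from a pointer to its
/// `work` field, by subtracting the field's byte offset.
///
/// # Safety
/// `work_ptr` must point at the `work` field of a live `DelayedItem`.
pub unsafe fn item_container_of(work_ptr: *const u32) -> *const DelayedItem {
    work_ptr
        .cast::<u8>()
        .wrapping_sub(offset_of!(DelayedItem, work))
        .cast::<DelayedItem>()
}

fn main() {
    let item = DelayedItem { id: 7, work: 0 };
    let work_ptr = core::ptr::addr_of!(item.work);
    // SAFETY: `work_ptr` points at the `work` field of `item`.
    let recovered = unsafe { &*item_container_of(work_ptr) };
    assert_eq!(recovered.id, 7);
    println!("recovered id = {}", recovered.id);
}
```

This is the same layering the `impl_has_delayed_work!` macro performs twice: `work_struct` to `delayed_work`, then `delayed_work` field to the user's struct.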
+5 -4
rust/kernel/xarray.rs
···
 use crate::{
     alloc, bindings, build_assert,
     error::{Error, Result},
+    ffi::c_void,
     types::{ForeignOwnable, NotThreadSafe, Opaque},
 };
-use core::{iter, marker::PhantomData, mem, pin::Pin, ptr::NonNull};
+use core::{iter, marker::PhantomData, pin::Pin, ptr::NonNull};
 use pin_init::{pin_data, pin_init, pinned_drop, PinInit};

 /// An array which efficiently maps sparse integer indices to owned objects.
···
         })
     }

-    fn iter(&self) -> impl Iterator<Item = NonNull<T::PointedTo>> + '_ {
+    fn iter(&self) -> impl Iterator<Item = NonNull<c_void>> + '_ {
         let mut index = 0;

         // SAFETY: `self.xa` is always valid by the type invariant.
···
 impl<'a, T: ForeignOwnable> Guard<'a, T> {
     fn load<F, U>(&self, index: usize, f: F) -> Option<U>
     where
-        F: FnOnce(NonNull<T::PointedTo>) -> U,
+        F: FnOnce(NonNull<c_void>) -> U,
     {
         // SAFETY: `self.xa.xa` is always valid by the type invariant.
         let ptr = unsafe { bindings::xa_load(self.xa.xa.get(), index) };
···
         gfp: alloc::Flags,
     ) -> Result<Option<T>, StoreError<T>> {
         build_assert!(
-            mem::align_of::<T::PointedTo>() >= 4,
+            T::FOREIGN_ALIGN >= 4,
             "pointers stored in XArray must be 4-byte aligned"
         );
         let new = value.into_foreign();
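The `build_assert!` above now checks the `ForeignOwnable` alignment constant rather than `align_of` on a concrete pointee type. The 4-byte minimum matters because the XArray reserves the low bits of stored pointers for its own entry tagging, so those bits must always be zero in any stored pointer. A plain-Rust illustration of why alignment frees up low bits (a simplified model, not the kernel's actual entry encoding):

```rust
use core::mem::align_of;

/// True when every pointer to `T` has its two lowest bits clear, i.e. when
/// those bits are free for a container to use as tags.
pub fn low_two_bits_free<T>() -> bool {
    align_of::<T>() >= 4
}

fn main() {
    // Pointees with >= 4-byte alignment always produce pointers that are
    // multiples of 4, so bits 0 and 1 are guaranteed zero.
    assert!(low_two_bits_free::<u32>());
    assert!(low_two_bits_free::<u64>());
    // A `u8` or `u16` allocation can land at any (1- or 2-byte) address, so
    // its pointer may have low bits set and cannot carry tag bits.
    assert!(!low_two_bits_free::<u8>());
    assert!(!low_two_bits_free::<u16>());
    println!("align_of::<u32>() = {}", align_of::<u32>());
}
```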
-6
rust/macros/module.rs
···
     type_: String,
     license: String,
     name: String,
-    author: Option<String>,
     authors: Option<Vec<String>>,
     description: Option<String>,
     alias: Option<Vec<String>>,
···
 const EXPECTED_KEYS: &[&str] = &[
     "type",
     "name",
-    "author",
     "authors",
     "description",
     "license",
···
     match key.as_str() {
         "type" => info.type_ = expect_ident(it),
         "name" => info.name = expect_string_ascii(it),
-        "author" => info.author = Some(expect_string(it)),
         "authors" => info.authors = Some(expect_string_array(it)),
         "description" => info.description = Some(expect_string(it)),
         "license" => info.license = expect_string_ascii(it),
···
     // Rust does not allow hyphens in identifiers, use underscore instead.
     let ident = info.name.replace('-', "_");
     let mut modinfo = ModInfoBuilder::new(ident.as_ref());
-    if let Some(author) = info.author {
-        modinfo.emit("author", &author);
-    }
     if let Some(authors) = info.authors {
         for author in authors {
             modinfo.emit("author", &author);
+1 -1
rust/pin-init/README.md
···
     fn new() -> impl PinInit<Self, Error> {
         try_pin_init!(Self {
             status <- CMutex::new(0),
-            buffer: Box::init(pin_init::zeroed())?,
+            buffer: Box::init(pin_init::init_zeroed())?,
         }? Error)
     }
 }
+16 -12
rust/pin-init/examples/big_struct_in_place.rs
···

 // Struct with size over 1GiB
 #[derive(Debug)]
+#[allow(dead_code)]
 pub struct BigStruct {
     buf: [u8; 1024 * 1024 * 1024],
     a: u64,
···
 impl ManagedBuf {
     pub fn new() -> impl Init<Self> {
-        init!(ManagedBuf { buf <- zeroed() })
+        init!(ManagedBuf { buf <- init_zeroed() })
     }
 }

 fn main() {
-    // we want to initialize the struct in-place, otherwise we would get a stackoverflow
-    let buf: Box<BigStruct> = Box::init(init!(BigStruct {
-        buf <- zeroed(),
-        a: 7,
-        b: 186,
-        c: 7789,
-        d: 34,
-        managed_buf <- ManagedBuf::new(),
-    }))
-    .unwrap();
-    println!("{}", core::mem::size_of_val(&*buf));
+    #[cfg(any(feature = "std", feature = "alloc"))]
+    {
+        // we want to initialize the struct in-place, otherwise we would get a stackoverflow
+        let buf: Box<BigStruct> = Box::init(init!(BigStruct {
+            buf <- init_zeroed(),
+            a: 7,
+            b: 186,
+            c: 7789,
+            d: 34,
+            managed_buf <- ManagedBuf::new(),
+        }))
+        .unwrap();
+        println!("{}", core::mem::size_of_val(&*buf));
+    }
 }
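The example above relies on pin-init to construct a >1 GiB struct directly in its heap allocation, since building it on the stack first would overflow. A plain-Rust sketch of the same idea without the pin-init API (the `big_zeroed_buf` helper is illustrative only): request zeroed heap memory from the allocator instead of moving a large value into a `Box`.

```rust
/// Allocate a zero-filled buffer directly on the heap.
fn big_zeroed_buf(len: usize) -> Box<[u8]> {
    // `vec![0u8; len]` obtains zeroed memory straight from the allocator;
    // no `len`-byte temporary ever exists on the stack, unlike
    // `Box::new([0u8; N])`, which may build the array on the stack first.
    vec![0u8; len].into_boxed_slice()
}

fn main() {
    // 64 MiB, comfortably past typical 8 MiB stack limits.
    let buf = big_zeroed_buf(64 * 1024 * 1024);
    assert_eq!(buf.len(), 64 * 1024 * 1024);
    assert!(buf.iter().all(|&b| b == 0));
    println!("allocated {} zeroed bytes on the heap", buf.len());
}
```

pin-init's `init!`/`init_zeroed` generalizes this to arbitrary structs with invariants, not just byte buffers.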
+9 -1
rust/pin-init/examples/linked_list.rs
···

 use pin_init::*;

-#[expect(unused_attributes)]
+#[allow(unused_attributes)]
 mod error;
+#[allow(unused_imports)]
 use error::Error;

 #[pin_data(PinnedDrop)]
···
     }

     #[inline]
+    #[allow(dead_code)]
     pub fn insert_next(list: &ListHead) -> impl PinInit<Self, Infallible> + '_ {
         try_pin_init!(&this in Self {
             prev: list.next.prev().replace(unsafe { Link::new_unchecked(this)}),
···
     }

     #[inline]
+    #[allow(dead_code)]
     fn prev(&self) -> &Link {
         unsafe { &(*self.0.get().as_ptr()).prev }
     }
···
 }

 #[allow(dead_code)]
+#[cfg(not(any(feature = "std", feature = "alloc")))]
+fn main() {}
+
+#[allow(dead_code)]
 #[cfg_attr(test, test)]
+#[cfg(any(feature = "std", feature = "alloc"))]
 fn main() -> Result<(), Error> {
     let a = Box::pin_init(ListHead::new())?;
     stack_pin_init!(let b = ListHead::insert_next(&a));
+56 -41
rust/pin-init/examples/mutex.rs
···
     pin::Pin,
     sync::atomic::{AtomicBool, Ordering},
 };
+#[cfg(feature = "std")]
 use std::{
     sync::Arc,
-    thread::{self, park, sleep, Builder, Thread},
+    thread::{self, sleep, Builder, Thread},
     time::Duration,
 };

 use pin_init::*;
-#[expect(unused_attributes)]
+#[allow(unused_attributes)]
 #[path = "./linked_list.rs"]
 pub mod linked_list;
 use linked_list::*;
···
         .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
         .is_err()
     {
+        #[cfg(feature = "std")]
         while self.inner.load(Ordering::Relaxed) {
             thread::yield_now();
         }
···
         // println!("wait list length: {}", self.wait_list.size());
         while self.locked.get() {
             drop(sguard);
-            park();
+            #[cfg(feature = "std")]
+            thread::park();
             sguard = self.spin_lock.acquire();
         }
         // This does have an effect, as the ListHead inside wait_entry implements Drop!
···
         let sguard = self.mtx.spin_lock.acquire();
         self.mtx.locked.set(false);
         if let Some(list_field) = self.mtx.wait_list.next() {
-            let wait_entry = list_field.as_ptr().cast::<WaitEntry>();
-            unsafe { (*wait_entry).thread.unpark() };
+            let _wait_entry = list_field.as_ptr().cast::<WaitEntry>();
+            #[cfg(feature = "std")]
+            unsafe {
+                (*_wait_entry).thread.unpark()
+            };
         }
         drop(sguard);
     }
···
 struct WaitEntry {
     #[pin]
     wait_list: ListHead,
+    #[cfg(feature = "std")]
     thread: Thread,
 }

 impl WaitEntry {
     #[inline]
     fn insert_new(list: &ListHead) -> impl PinInit<Self> + '_ {
-        pin_init!(Self {
-            thread: thread::current(),
-            wait_list <- ListHead::insert_prev(list),
-        })
+        #[cfg(feature = "std")]
+        {
+            pin_init!(Self {
+                thread: thread::current(),
+                wait_list <- ListHead::insert_prev(list),
+            })
+        }
+
+        #[cfg(not(feature = "std"))]
+        {
+            pin_init!(Self {
+                wait_list <- ListHead::insert_prev(list),
+            })
+        }
     }
 }

-#[cfg(not(any(feature = "std", feature = "alloc")))]
-fn main() {}
-
-#[allow(dead_code)]
 #[cfg_attr(test, test)]
-#[cfg(any(feature = "std", feature = "alloc"))]
+#[allow(dead_code)]
 fn main() {
-    let mtx: Pin<Arc<CMutex<usize>>> = Arc::pin_init(CMutex::new(0)).unwrap();
-    let mut handles = vec![];
-    let thread_count = 20;
-    let workload = if cfg!(miri) { 100 } else { 1_000 };
-    for i in 0..thread_count {
-        let mtx = mtx.clone();
-        handles.push(
-            Builder::new()
-                .name(format!("worker #{i}"))
-                .spawn(move || {
-                    for _ in 0..workload {
-                        *mtx.lock() += 1;
-                    }
-                    println!("{i} halfway");
-                    sleep(Duration::from_millis((i as u64) * 10));
-                    for _ in 0..workload {
-                        *mtx.lock() += 1;
-                    }
-                    println!("{i} finished");
-                })
-                .expect("should not fail"),
-        );
-    }
-    for h in handles {
-        h.join().expect("thread panicked");
-    }
-    println!("{:?}", &*mtx.lock());
-    assert_eq!(*mtx.lock(), workload * thread_count * 2);
+    #[cfg(feature = "std")]
+    {
+        let mtx: Pin<Arc<CMutex<usize>>> = Arc::pin_init(CMutex::new(0)).unwrap();
+        let mut handles = vec![];
+        let thread_count = 20;
+        let workload = if cfg!(miri) { 100 } else { 1_000 };
+        for i in 0..thread_count {
+            let mtx = mtx.clone();
+            handles.push(
+                Builder::new()
+                    .name(format!("worker #{i}"))
+                    .spawn(move || {
+                        for _ in 0..workload {
+                            *mtx.lock() += 1;
+                        }
+                        println!("{i} halfway");
+                        sleep(Duration::from_millis((i as u64) * 10));
+                        for _ in 0..workload {
+                            *mtx.lock() += 1;
+                        }
+                        println!("{i} finished");
+                    })
+                    .expect("should not fail"),
+            );
+        }
+        for h in handles {
+            h.join().expect("thread panicked");
+        }
+        println!("{:?}", &*mtx.lock());
+        assert_eq!(*mtx.lock(), workload * thread_count * 2);
+    }
 }
+4
rust/pin-init/examples/pthread_mutex.rs
···
 pub enum Error {
     #[allow(dead_code)]
     IO(std::io::Error),
+    #[allow(dead_code)]
     Alloc,
 }
···
 }

 impl<T> PThreadMutex<T> {
+    #[allow(dead_code)]
     pub fn new(data: T) -> impl PinInit<Self, Error> {
         fn init_raw() -> impl PinInit<UnsafeCell<libc::pthread_mutex_t>, Error> {
             let init = |slot: *mut UnsafeCell<libc::pthread_mutex_t>| {
···
         }? Error)
     }

+    #[allow(dead_code)]
     pub fn lock(&self) -> PThreadMutexGuard<'_, T> {
         // SAFETY: raw is always initialized
         unsafe { libc::pthread_mutex_lock(self.raw.get()) };
···
 }

 #[cfg_attr(test, test)]
+#[cfg_attr(all(test, miri), ignore)]
 fn main() {
     #[cfg(all(any(feature = "std", feature = "alloc"), not(windows)))]
     {
+38 -37
rust/pin-init/examples/static_init.rs
···
 #![allow(clippy::undocumented_unsafe_blocks)]
 #![cfg_attr(feature = "alloc", feature(allocator_api))]
 #![cfg_attr(not(RUSTC_LINT_REASONS_IS_STABLE), feature(lint_reasons))]
+#![allow(unused_imports)]

 use core::{
     cell::{Cell, UnsafeCell},
···
     time::Duration,
 };
 use pin_init::*;
+#[cfg(feature = "std")]
 use std::{
     sync::Arc,
     thread::{sleep, Builder},
 };

-#[expect(unused_attributes)]
+#[allow(unused_attributes)]
 mod mutex;
 use mutex::*;
···
 pub static COUNT: StaticInit<CMutex<usize>, CountInit> = StaticInit::new(CountInit);

-#[cfg(not(any(feature = "std", feature = "alloc")))]
-fn main() {}
-
-#[cfg(any(feature = "std", feature = "alloc"))]
 fn main() {
-    let mtx: Pin<Arc<CMutex<usize>>> = Arc::pin_init(CMutex::new(0)).unwrap();
-    let mut handles = vec![];
-    let thread_count = 20;
-    let workload = 1_000;
-    for i in 0..thread_count {
-        let mtx = mtx.clone();
-        handles.push(
-            Builder::new()
-                .name(format!("worker #{i}"))
-                .spawn(move || {
-                    for _ in 0..workload {
-                        *COUNT.lock() += 1;
-                        std::thread::sleep(std::time::Duration::from_millis(10));
-                        *mtx.lock() += 1;
-                        std::thread::sleep(std::time::Duration::from_millis(10));
-                        *COUNT.lock() += 1;
-                    }
-                    println!("{i} halfway");
-                    sleep(Duration::from_millis((i as u64) * 10));
-                    for _ in 0..workload {
-                        std::thread::sleep(std::time::Duration::from_millis(10));
-                        *mtx.lock() += 1;
-                    }
-                    println!("{i} finished");
-                })
-                .expect("should not fail"),
-        );
-    }
-    for h in handles {
-        h.join().expect("thread panicked");
-    }
-    println!("{:?}, {:?}", &*mtx.lock(), &*COUNT.lock());
-    assert_eq!(*mtx.lock(), workload * thread_count * 2);
+    #[cfg(feature = "std")]
+    {
+        let mtx: Pin<Arc<CMutex<usize>>> = Arc::pin_init(CMutex::new(0)).unwrap();
+        let mut handles = vec![];
+        let thread_count = 20;
+        let workload = 1_000;
+        for i in 0..thread_count {
+            let mtx = mtx.clone();
+            handles.push(
+                Builder::new()
+                    .name(format!("worker #{i}"))
+                    .spawn(move || {
+                        for _ in 0..workload {
+                            *COUNT.lock() += 1;
+                            std::thread::sleep(std::time::Duration::from_millis(10));
+                            *mtx.lock() += 1;
+                            std::thread::sleep(std::time::Duration::from_millis(10));
+                            *COUNT.lock() += 1;
+                        }
+                        println!("{i} halfway");
+                        sleep(Duration::from_millis((i as u64) * 10));
+                        for _ in 0..workload {
+                            std::thread::sleep(std::time::Duration::from_millis(10));
+                            *mtx.lock() += 1;
+                        }
+                        println!("{i} finished");
+                    })
+                    .expect("should not fail"),
+            );
+        }
+        for h in handles {
+            h.join().expect("thread panicked");
+        }
+        println!("{:?}, {:?}", &*mtx.lock(), &*COUNT.lock());
+        assert_eq!(*mtx.lock(), workload * thread_count * 2);
+    }
 }
+1
rust/pin-init/src/__internal.rs
···
 }

 #[test]
+#[cfg(feature = "std")]
 fn stack_init_reuse() {
     use ::std::{borrow::ToOwned, println, string::String};
     use core::pin::pin;
+104 -16
rust/pin-init/src/lib.rs
···
 //!     fn new() -> impl PinInit<Self, Error> {
 //!         try_pin_init!(Self {
 //!             status <- CMutex::new(0),
-//!             buffer: Box::init(pin_init::zeroed())?,
+//!             buffer: Box::init(pin_init::init_zeroed())?,
 //!         }? Error)
 //!     }
 //! }
···
 /// - Fields that you want to initialize in-place have to use `<-` instead of `:`.
 /// - In front of the initializer you can write `&this in` to have access to a [`NonNull<Self>`]
 ///   pointer named `this` inside of the initializer.
-/// - Using struct update syntax one can place `..Zeroable::zeroed()` at the very end of the
+/// - Using struct update syntax one can place `..Zeroable::init_zeroed()` at the very end of the
 ///   struct, this initializes every field with 0 and then runs all initializers specified in the
 ///   body. This can only be done if [`Zeroable`] is implemented for the struct.
···
 /// });
 /// let init = pin_init!(Buf {
 ///     buf: [1; 64],
-///     ..Zeroable::zeroed()
+///     ..Zeroable::init_zeroed()
 /// });
 /// ```
···
 /// ```rust
 /// # #![feature(allocator_api)]
 /// # #[path = "../examples/error.rs"] mod error; use error::Error;
-/// use pin_init::{pin_data, try_pin_init, PinInit, InPlaceInit, zeroed};
+/// use pin_init::{pin_data, try_pin_init, PinInit, InPlaceInit, init_zeroed};
 ///
 /// #[pin_data]
 /// struct BigBuf {
···
 /// impl BigBuf {
 ///     fn new() -> impl PinInit<Self, Error> {
 ///         try_pin_init!(Self {
-///             big: Box::init(zeroed())?,
+///             big: Box::init(init_zeroed())?,
 ///             small: [0; 1024 * 1024],
 ///             ptr: core::ptr::null_mut(),
 ///         }? Error)
···
 /// # #[path = "../examples/error.rs"] mod error; use error::Error;
 /// # #[path = "../examples/mutex.rs"] mod mutex; use mutex::*;
 /// # use pin_init::InPlaceInit;
-/// use pin_init::{init, Init, zeroed};
+/// use pin_init::{init, Init, init_zeroed};
 ///
 /// struct BigBuf {
 ///     small: [u8; 1024 * 1024],
···
 /// impl BigBuf {
 ///     fn new() -> impl Init<Self> {
 ///         init!(Self {
-///             small <- zeroed(),
+///             small <- init_zeroed(),
 ///         })
 ///     }
 /// }
···
 /// # #![feature(allocator_api)]
 /// # use core::alloc::AllocError;
 /// # use pin_init::InPlaceInit;
-/// use pin_init::{try_init, Init, zeroed};
+/// use pin_init::{try_init, Init, init_zeroed};
 ///
 /// struct BigBuf {
 ///     big: Box<[u8; 1024 * 1024 * 1024]>,
···
 /// impl BigBuf {
 ///     fn new() -> impl Init<Self, AllocError> {
 ///         try_init!(Self {
-///             big: Box::init(zeroed())?,
+///             big: Box::init(init_zeroed())?,
 ///             small: [0; 1024 * 1024],
 ///         }? AllocError)
 ///     }
···
 /// Asserts that a field on a struct using `#[pin_data]` is marked with `#[pin]` ie. that it is
 /// structurally pinned.
 ///
-/// # Example
+/// # Examples
 ///
 /// This will succeed:
 /// ```
···
 ///
 /// ```rust
 /// # #![expect(clippy::disallowed_names)]
-/// use pin_init::{init, zeroed, Init};
+/// use pin_init::{init, init_zeroed, Init};
 ///
 /// struct Foo {
 ///     buf: [u8; 1_000_000],
···
 /// }
 ///
 /// let foo = init!(Foo {
-///     buf <- zeroed()
+///     buf <- init_zeroed()
 /// }).chain(|foo| {
 ///     foo.setup();
 ///     Ok(())
···
 /// ```rust,ignore
 /// let val: Self = unsafe { core::mem::zeroed() };
 /// ```
-pub unsafe trait Zeroable {}
+pub unsafe trait Zeroable {
+    /// Create a new zeroed `Self`.
+    ///
+    /// The returned initializer will write `0x00` to every byte of the given `slot`.
+    #[inline]
+    fn init_zeroed() -> impl Init<Self>
+    where
+        Self: Sized,
+    {
+        init_zeroed()
+    }
+
+    /// Create a `Self` consisting of all zeroes.
+    ///
+    /// Whenever a type implements [`Zeroable`], this function should be preferred over
+    /// [`core::mem::zeroed()`] or using `MaybeUninit<T>::zeroed().assume_init()`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use pin_init::{Zeroable, zeroed};
+    ///
+    /// #[derive(Zeroable)]
+    /// struct Point {
+    ///     x: u32,
+    ///     y: u32,
+    /// }
+    ///
+    /// let point: Point = zeroed();
+    /// assert_eq!(point.x, 0);
+    /// assert_eq!(point.y, 0);
+    /// ```
+    fn zeroed() -> Self
+    where
+        Self: Sized,
+    {
+        zeroed()
+    }
+}

 /// Marker trait for types that allow `Option<Self>` to be set to all zeroes in order to write
 /// `None` to that location.
···
 // SAFETY: by the safety requirement of `ZeroableOption`, this is valid.
 unsafe impl<T: ZeroableOption> Zeroable for Option<T> {}

-/// Create a new zeroed T.
+// SAFETY: `Option<&T>` is part of the option layout optimization guarantee:
+// <https://doc.rust-lang.org/stable/std/option/index.html#representation>.
+unsafe impl<T> ZeroableOption for &T {}
+// SAFETY: `Option<&mut T>` is part of the option layout optimization guarantee:
+// <https://doc.rust-lang.org/stable/std/option/index.html#representation>.
+unsafe impl<T> ZeroableOption for &mut T {}
+// SAFETY: `Option<NonNull<T>>` is part of the option layout optimization guarantee:
+// <https://doc.rust-lang.org/stable/std/option/index.html#representation>.
+unsafe impl<T> ZeroableOption for NonNull<T> {}
+
+/// Create an initializer for a zeroed `T`.
 ///
 /// The returned initializer will write `0x00` to every byte of the given `slot`.
 #[inline]
-pub fn zeroed<T: Zeroable>() -> impl Init<T> {
+pub fn init_zeroed<T: Zeroable>() -> impl Init<T> {
     // SAFETY: Because `T: Zeroable`, all bytes zero is a valid bit pattern for `T`
     // and because we write all zeroes, the memory is initialized.
     unsafe {
···
         Ok(())
     })
 }
+
+/// Create a `T` consisting of all zeroes.
+///
+/// Whenever a type implements [`Zeroable`], this function should be preferred over
+/// [`core::mem::zeroed()`] or using `MaybeUninit<T>::zeroed().assume_init()`.
+///
+/// # Examples
+///
+/// ```
+/// use pin_init::{Zeroable, zeroed};
+///
+/// #[derive(Zeroable)]
+/// struct Point {
+///     x: u32,
+///     y: u32,
+/// }
+///
+/// let point: Point = zeroed();
+/// assert_eq!(point.x, 0);
+/// assert_eq!(point.y, 0);
+/// ```
+pub const fn zeroed<T: Zeroable>() -> T {
+    // SAFETY: By the type invariants of `Zeroable`, all zeroes is a valid bit pattern for `T`.
+    unsafe { core::mem::zeroed() }
+}

 macro_rules! impl_zeroable {
···
     Option<NonZeroU128>, Option<NonZeroUsize>,
     Option<NonZeroI8>, Option<NonZeroI16>, Option<NonZeroI32>, Option<NonZeroI64>,
     Option<NonZeroI128>, Option<NonZeroIsize>,
-    {<T>} Option<NonNull<T>>,

     // SAFETY: `null` pointer is valid.
     //
···
 }

 impl_tuple_zeroable!(A, B, C, D, E, F, G, H, I, J);
+
+macro_rules! impl_fn_zeroable_option {
+    ([$($abi:literal),* $(,)?] $args:tt) => {
+        $(impl_fn_zeroable_option!({extern $abi} $args);)*
+        $(impl_fn_zeroable_option!({unsafe extern $abi} $args);)*
+    };
+    ({$($prefix:tt)*} {$(,)?}) => {};
+    ({$($prefix:tt)*} {$ret:ident, $($rest:ident),* $(,)?}) => {
+        // SAFETY: function pointers are part of the option layout optimization:
+        // <https://doc.rust-lang.org/stable/std/option/index.html#representation>.
+        unsafe impl<$ret, $($rest),*> ZeroableOption for $($prefix)* fn($($rest),*) -> $ret {}
+        impl_fn_zeroable_option!({$($prefix)*} {$($rest),*,});
+    };
+}
+
+impl_fn_zeroable_option!(["Rust", "C"] { A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U });

 /// This trait allows creating an instance of `Self` which contains exactly one
 /// [structurally pinned value](https://doc.rust-lang.org/std/pin/index.html#projections-and-structural-pinning).
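The new `ZeroableOption` impls for references, `NonNull<T>`, and function pointers all rest on Rust's guaranteed null-pointer (niche) optimization: for these types, `Option<P>` is pointer-sized and `None` is represented as all-zero bits, which is exactly what a zeroing initializer produces. A small, self-contained check of that guarantee:

```rust
use core::mem::size_of;
use core::ptr::NonNull;

fn main() {
    // `Option<P>` costs no extra space for these pointer-like types.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
    assert_eq!(size_of::<Option<NonNull<u8>>>(), size_of::<*mut u8>());
    assert_eq!(size_of::<Option<fn(u8) -> u8>>(), size_of::<fn(u8) -> u8>());
    // All-zero bits decode as `None`, so zeroed memory is a valid
    // `Option<fn()>` — the property `ZeroableOption` encodes.
    let none: Option<fn()> = unsafe { core::mem::zeroed() };
    assert!(none.is_none());
    println!("null-pointer optimization holds for these types");
}
```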
+8 -8
rust/pin-init/src/macros.rs
··· 1030 1030 /// 1031 1031 /// This macro has multiple internal call configurations, these are always the very first ident: 1032 1032 /// - nothing: this is the base case and called by the `{try_}{pin_}init!` macros. 1033 - /// - `with_update_parsed`: when the `..Zeroable::zeroed()` syntax has been handled. 1033 + /// - `with_update_parsed`: when the `..Zeroable::init_zeroed()` syntax has been handled. 1034 1034 /// - `init_slot`: recursively creates the code that initializes all fields in `slot`. 1035 1035 /// - `make_initializer`: recursively create the struct initializer that guarantees that every 1036 1036 /// field has been initialized exactly once. ··· 1059 1059 @data($data, $($use_data)?), 1060 1060 @has_data($has_data, $get_data), 1061 1061 @construct_closure($construct_closure), 1062 - @zeroed(), // Nothing means default behavior. 1062 + @init_zeroed(), // Nothing means default behavior. 1063 1063 ) 1064 1064 }; 1065 1065 ( ··· 1074 1074 @has_data($has_data:ident, $get_data:ident), 1075 1075 // `pin_init_from_closure` or `init_from_closure`. 1076 1076 @construct_closure($construct_closure:ident), 1077 - @munch_fields(..Zeroable::zeroed()), 1077 + @munch_fields(..Zeroable::init_zeroed()), 1078 1078 ) => { 1079 1079 $crate::__init_internal!(with_update_parsed: 1080 1080 @this($($this)?), ··· 1084 1084 @data($data, $($use_data)?), 1085 1085 @has_data($has_data, $get_data), 1086 1086 @construct_closure($construct_closure), 1087 - @zeroed(()), // `()` means zero all fields not mentioned. 1087 + @init_zeroed(()), // `()` means zero all fields not mentioned. 1088 1088 ) 1089 1089 }; 1090 1090 ( ··· 1124 1124 @has_data($has_data:ident, $get_data:ident), 1125 1125 // `pin_init_from_closure` or `init_from_closure`. 
1126 1126     @construct_closure($construct_closure:ident),
1127      -     @zeroed($($init_zeroed:expr)?),
1127      +     @init_zeroed($($init_zeroed:expr)?),
1128 1128 ) => {{
1129 1129     // We do not want to allow arbitrary returns, so we declare this type as the `Ok` return
1130 1130     // type and shadow it later when we insert the arbitrary user code. That way there will be
···
1196 1196     @data($data:ident),
1197 1197     @slot($slot:ident),
1198 1198     @guards($($guards:ident,)*),
1199      -     @munch_fields($(..Zeroable::zeroed())? $(,)?),
1199      +     @munch_fields($(..Zeroable::init_zeroed())? $(,)?),
1200 1200 ) => {
1201 1201     // Endpoint of munching, no fields are left. If execution reaches this point, all fields
1202 1202     // have been initialized. Therefore we can now dismiss the guards by forgetting them.
···
1300 1300 (make_initializer:
1301 1301     @slot($slot:ident),
1302 1302     @type_name($t:path),
1303      -     @munch_fields(..Zeroable::zeroed() $(,)?),
1303      +     @munch_fields(..Zeroable::init_zeroed() $(,)?),
1304 1304     @acc($($acc:tt)*),
1305 1305 ) => {
1306 1306     // Endpoint, nothing more to munch, create the initializer. Since the users specified
1307      -     // `..Zeroable::zeroed()`, the slot will already have been zeroed and all field that have
1307      +     // `..Zeroable::init_zeroed()`, the slot will already have been zeroed and all field that have
1308 1308     // not been overwritten are thus zero and initialized. We still check that all fields are
1309 1309     // actually accessible by using the struct update syntax ourselves.
1310 1310     // We are inside of a closure that is never executed and thus we can abuse `slot` to
+3
rust/uapi/lib.rs
···
14 14 #![cfg_attr(test, allow(unsafe_op_in_unsafe_fn))]
15 15 #![allow(
16 16     clippy::all,
   17 +     clippy::cast_lossless,
   18 +     clippy::ptr_as_ptr,
   19 +     clippy::ref_as_ptr,
17 20     clippy::undocumented_unsafe_blocks,
18 21     dead_code,
19 22     missing_docs,
+1 -1
samples/rust/rust_configfs.rs
···
14 14 module! {
15 15     type: RustConfigfs,
16 16     name: "rust_configfs",
17    -     author: "Rust for Linux Contributors",
   17 +     authors: ["Rust for Linux Contributors"],
18 18     description: "Rust configfs sample",
19 19     license: "GPL",
20 20 }
+1 -1
samples/rust/rust_driver_auxiliary.rs
···
113 113 module! {
114 114     type: SampleModule,
115 115     name: "rust_driver_auxiliary",
116     -     author: "Danilo Krummrich",
    116 +     authors: ["Danilo Krummrich"],
117 117     description: "Rust auxiliary driver",
118 118     license: "GPL v2",
119 119 }
+2
samples/rust/rust_misc_device.rs
···
176 176     fn ioctl(me: Pin<&RustMiscDevice>, _file: &File, cmd: u32, arg: usize) -> Result<isize> {
177 177         dev_info!(me.dev, "IOCTLing Rust Misc Device Sample\n");
178 178
    179 +         // Treat the ioctl argument as a user pointer.
    180 +         let arg = UserPtr::from_addr(arg);
179 181         let size = _IOC_SIZE(cmd);
180 182
181 183         match cmd {
+1 -1
samples/rust/rust_print_main.rs
···
40 40     // behaviour, contract or protocol on both `i32` and `&str` into a single `Arc` of
41 41     // type `Arc<dyn Display>`.
42 42
43    -     use core::fmt::Display;
43    +     use kernel::fmt::Display;
44 44     fn arc_dyn_print(arc: &Arc<dyn Display>) {
45 45         pr_info!("Arc<dyn Display> says {arc}");
46 46     }
+3 -2
scripts/Makefile.build
···
309 309 # The features in this list are the ones allowed for non-`rust/` code.
310 310 #
311 311 # - Stable since Rust 1.81.0: `feature(lint_reasons)`.
312     - # - Stable since Rust 1.82.0: `feature(asm_const)`, `feature(raw_ref_op)`.
    312 + # - Stable since Rust 1.82.0: `feature(asm_const)`,
    313 + #   `feature(offset_of_nested)`, `feature(raw_ref_op)`.
313 314 # - Stable since Rust 1.87.0: `feature(asm_goto)`.
314 315 # - Expected to become stable: `feature(arbitrary_self_types)`.
315 316 # - To be determined: `feature(used_with_arg)`.
316 317 #
317 318 # Please see https://github.com/Rust-for-Linux/linux/issues/2 for details on
318 319 # the unstable features in use.
319     - rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,raw_ref_op,used_with_arg
    320 + rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,offset_of_nested,raw_ref_op,used_with_arg
320 321
321 322 # `--out-dir` is required to avoid temporaries being created by `rustc` in the
322 323 # current working directory, which may be not accessible in the out-of-tree
+16 -15
scripts/rustdoc_test_gen.rs
··· 85 85 } 86 86 } 87 87 88 - assert!( 89 - valid_paths.len() > 0, 90 - "No path candidates found for `{file}`. This is likely a bug in the build system, or some \ 91 - files went away while compiling." 92 - ); 88 + match valid_paths.as_slice() { 89 + [] => panic!( 90 + "No path candidates found for `{file}`. This is likely a bug in the build system, or \ 91 + some files went away while compiling." 92 + ), 93 + [valid_path] => valid_path.to_str().unwrap(), 94 + valid_paths => { 95 + use std::fmt::Write; 93 96 94 - if valid_paths.len() > 1 { 95 - eprintln!("Several path candidates found:"); 96 - for path in valid_paths { 97 - eprintln!(" {path:?}"); 97 + let mut candidates = String::new(); 98 + for path in valid_paths { 99 + writeln!(&mut candidates, " {path:?}").unwrap(); 100 + } 101 + panic!( 102 + "Several path candidates found for `{file}`, please resolve the ambiguity by \ 103 + renaming a file or folder. Candidates:\n{candidates}", 104 + ); 98 105 } 99 - panic!( 100 - "Several path candidates found for `{file}`, please resolve the ambiguity by renaming \ 101 - a file or folder." 102 - ); 103 106 } 104 - 105 - valid_paths[0].to_str().unwrap() 106 107 } 107 108 108 109 fn main() {