Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'rust-6.12' of https://github.com/Rust-for-Linux/linux

Pull Rust updates from Miguel Ojeda:
"Toolchain and infrastructure:

- Support 'MITIGATION_{RETHUNK,RETPOLINE,SLS}' (which cleans up
objtool warnings), teach objtool about 'noreturn' Rust symbols and
mimic '___ADDRESSABLE()' for 'module_{init,exit}'. With that, we
should be objtool-warning-free, so enable it to run for all Rust
object files.

- KASAN (no 'SW_TAGS'), KCFI and shadow call sanitizer support.

- Support 'RUSTC_VERSION', including re-config and re-build on
change.
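The version is carried as a single integer so Kconfig can compare it (e.g.
'RUSTC_VERSION >= 107900' for 1.79.0, as seen in the diffs below). A rough
standalone sketch of that encoding — the 'encode' helper here is illustrative
only, not the kernel's actual scripts/rustc-version.sh, and the
major * 100000 + minor * 100 + patch scheme is inferred from the values used
in the Kconfig snippets:

```rust
// Illustrative only: turn a rustc version string into the integer form
// assumed to be used by Kconfig comparisons, so that e.g.
// "1.79.0" -> 107900 and "1.82.0" -> 108200.
fn encode(version: &str) -> Option<u32> {
    let mut parts = version.split('.').map(|p| p.parse::<u32>().ok());
    let major = parts.next()??;
    let minor = parts.next()??;
    let patch = parts.next()??;
    Some(major * 100_000 + minor * 100 + patch)
}

fn main() {
    assert_eq!(encode("1.79.0"), Some(107_900));
    assert_eq!(encode("1.82.0"), Some(108_200));
    assert_eq!(encode("not-a-version"), None);
    println!("ok");
}
```

Integer encodings like this mirror how CC_VERSION/CLANG_VERSION are already
handled, which is why plain numeric 'depends on' comparisons work.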

- Split the helpers file into several files in a folder, to avoid
conflicts in it. Eventually those files will be moved to the right
places with the new build system. In addition, remove the need to
manually export the symbols defined there, reusing existing
machinery for that.

- Relax restriction on configurations with Rust + GCC plugins to just
the RANDSTRUCT plugin.

'kernel' crate:

- New 'list' module: doubly-linked list for use with reference
counted values, which is heavily used by the upcoming Rust Binder.

This includes 'ListArc' (a wrapper around 'Arc' that is guaranteed
unique for the given ID), 'AtomicTracker' (tracks whether a
'ListArc' exists using an atomic), 'ListLinks' (the prev/next
pointers for an item in a linked list), 'List' (the linked list
itself), 'Iter' (an iterator over a 'List'), 'Cursor' (a cursor
into a 'List' that allows removing elements), 'ListArcField' (a
field exclusively owned by a 'ListArc'), as well as support for
heterogeneous lists.
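The key invariant above — at most one 'ListArc' per value, tracked by an
atomic — can be sketched in standalone Rust. This is a simplified
illustration of the idea, not the kernel's actual 'ListArc'/'AtomicTracker'
API; the 'Node' type and 'try_from_arc' name are made up for the example:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Shared payload carries a flag recording whether a unique ListArc exists.
struct Node {
    in_list: AtomicBool,
    value: u32,
}

// Wrapper around Arc that is guaranteed unique: at most one ListArc per
// Node can exist at a time, enforced by the atomic flag.
struct ListArc(Arc<Node>);

impl ListArc {
    // Claim the unique ListArc for this node; fails if one already exists.
    fn try_from_arc(arc: Arc<Node>) -> Result<ListArc, Arc<Node>> {
        match arc
            .in_list
            .compare_exchange(false, true, Ordering::AcqRel, Ordering::Relaxed)
        {
            Ok(_) => Ok(ListArc(arc)),
            Err(_) => Err(arc),
        }
    }
}

impl Drop for ListArc {
    fn drop(&mut self) {
        // Releasing the ListArc allows a new one to be claimed later.
        self.0.in_list.store(false, Ordering::Release);
    }
}

fn main() {
    let node = Arc::new(Node { in_list: AtomicBool::new(false), value: 7 });
    let unique = ListArc::try_from_arc(node.clone()).ok().unwrap();
    assert_eq!(unique.0.value, 7);
    // A second ListArc for the same node cannot be created while one exists.
    assert!(ListArc::try_from_arc(node.clone()).is_err());
    drop(unique);
    assert!(ListArc::try_from_arc(node).is_ok());
    println!("ok");
}
```

The uniqueness guarantee is what makes intrusive prev/next pointers safe:
only the holder of the 'ListArc' may link or unlink the value.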

- New 'rbtree' module: red-black tree abstractions used by the
upcoming Rust Binder.

This includes 'RBTree' (the red-black tree itself), 'RBTreeNode' (a
node), 'RBTreeNodeReservation' (a memory reservation for a node),
'Iter' and 'IterMut' (immutable and mutable iterators), 'Cursor'
(bidirectional cursor that allows removing elements), as well as
an entry API similar to the Rust standard library one.
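The entry API mentioned is modelled on the one in Rust's standard library
collections. As a rough standalone illustration of the pattern using std's
BTreeMap (also an ordered search tree, though not the kernel's 'RBTree'):

```rust
use std::collections::BTreeMap;

fn main() {
    // `entry` returns a handle to the (possibly vacant) slot for a key,
    // so insert-or-update needs only a single tree traversal.
    let mut counts: BTreeMap<&str, u32> = BTreeMap::new();
    for word in ["list", "rbtree", "list"] {
        *counts.entry(word).or_insert(0) += 1;
    }
    assert_eq!(counts["list"], 2);
    assert_eq!(counts["rbtree"], 1);
    println!("ok");
}
```

In the kernel variant, allocation cannot happen at arbitrary points, which
is why 'RBTreeNodeReservation' exists: memory for the node is reserved up
front, and the entry-style insertion itself is then infallible.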

- 'init' module: add 'write_[pin_]init' methods and the
'InPlaceWrite' trait. Add the 'assert_pinned!' macro.

- 'sync' module: implement the 'InPlaceInit' trait for 'Arc' by
introducing an associated type in the trait.

- 'alloc' module: add 'drop_contents' method to 'BoxExt'.

- 'types' module: implement the 'ForeignOwnable' trait for
'Pin<Box<T>>' and improve the trait's documentation. In addition,
add the 'into_raw' method to the 'ARef' type.

- 'error' module: in preparation for the upcoming Rust support for
32-bit architectures, such as arm, locally allow a Clippy lint for
those.

Documentation:

- https://rust.docs.kernel.org has been announced, so link to it.

- Enable rustdoc's "jump to definition" feature, making its output a
bit closer to the experience in a cross-referencer.

- Debian Testing now also provides recent Rust releases (outside of
the freeze period), so add it to the list.

MAINTAINERS:

- Trevor is joining as reviewer of the "RUST" entry.

And a few other small bits"

* tag 'rust-6.12' of https://github.com/Rust-for-Linux/linux: (54 commits)
kasan: rust: Add KASAN smoke test via UAF
kbuild: rust: Enable KASAN support
rust: kasan: Rust does not support KHWASAN
kbuild: rust: Define probing macros for rustc
kasan: simplify and clarify Makefile
rust: cfi: add support for CFI_CLANG with Rust
cfi: add CONFIG_CFI_ICALL_NORMALIZE_INTEGERS
rust: support for shadow call stack sanitizer
docs: rust: include other expressions in conditional compilation section
kbuild: rust: replace proc macros dependency on `core.o` with the version text
kbuild: rust: rebuild if the version text changes
kbuild: rust: re-run Kconfig if the version text changes
kbuild: rust: add `CONFIG_RUSTC_VERSION`
rust: avoid `box_uninit_write` feature
MAINTAINERS: add Trevor Gross as Rust reviewer
rust: rbtree: add `RBTree::entry`
rust: rbtree: add cursor
rust: rbtree: add mutable iterator
rust: rbtree: add iterator
rust: rbtree: add red-black tree implementation backed by the C version
...

+3881 -405
+22 -5
Documentation/rust/general-information.rst
··· 15 15 kernel must opt into this behavior using the ``#![no_std]`` attribute. 16 16 17 17 18 + .. _rust_code_documentation: 19 + 18 20 Code documentation 19 21 ------------------ 20 22 ··· 24 22 generator. 25 23 26 24 The generated HTML docs include integrated search, linked items (e.g. types, 27 - functions, constants), source code, etc. They may be read at (TODO: link when 28 - in mainline and generated alongside the rest of the documentation): 25 + functions, constants), source code, etc. They may be read at: 29 26 30 - http://kernel.org/ 27 + https://rust.docs.kernel.org 28 + 29 + For linux-next, please see: 30 + 31 + https://rust.docs.kernel.org/next/ 32 + 33 + There are also tags for each main release, e.g.: 34 + 35 + https://rust.docs.kernel.org/6.10/ 31 36 32 37 The docs can also be easily generated and read locally. This is quite fast 33 38 (same order as compiling the code itself) and no special tools or environment ··· 84 75 .. code-block:: 85 76 86 77 rust/bindings/ 87 - (rust/helpers.c) 78 + (rust/helpers/) 88 79 89 80 include/ -----+ <-+ 90 81 | | ··· 121 112 122 113 For parts of the C header that ``bindgen`` does not auto generate, e.g. C 123 114 ``inline`` functions or non-trivial macros, it is acceptable to add a small 124 - wrapper function to ``rust/helpers.c`` to make it available for the Rust side as 115 + wrapper function to ``rust/helpers/`` to make it available for the Rust side as 125 116 well. 126 117 127 118 Abstractions ··· 151 142 #[cfg(CONFIG_X="y")] // Enabled as a built-in (`y`) 152 143 #[cfg(CONFIG_X="m")] // Enabled as a module (`m`) 153 144 #[cfg(not(CONFIG_X))] // Disabled 145 + 146 + For other predicates that Rust's ``cfg`` does not support, e.g. expressions with 147 + numerical comparisons, one may define a new Kconfig symbol: 148 + 149 + .. code-block:: kconfig 150 + 151 + config RUSTC_VERSION_MIN_107900 152 + def_bool y if RUSTC_VERSION >= 107900
+16 -2
Documentation/rust/index.rst
··· 25 25 configurations. 26 26 27 27 28 + Code documentation 29 + ------------------ 30 + 31 + Given a kernel configuration, the kernel may generate Rust code documentation, 32 + i.e. HTML rendered by the ``rustdoc`` tool. 33 + 28 34 .. only:: rustdoc and html 29 35 30 - You can also browse `rustdoc documentation <rustdoc/kernel/index.html>`_. 36 + This kernel documentation was built with `Rust code documentation 37 + <rustdoc/kernel/index.html>`_. 31 38 32 39 .. only:: not rustdoc and html 33 40 34 - This documentation does not include rustdoc generated information. 41 + This kernel documentation was not built with Rust code documentation. 42 + 43 + A pregenerated version is provided at: 44 + 45 + https://rust.docs.kernel.org 46 + 47 + Please see the :ref:`Code documentation <rust_code_documentation>` section for 48 + more details. 35 49 36 50 .. toctree:: 37 51 :maxdepth: 1
+2 -2
Documentation/rust/quick-start.rst
··· 39 39 Debian 40 40 ****** 41 41 42 - Debian Unstable (Sid), outside of the freeze period, provides recent Rust 43 - releases and thus it should generally work out of the box, e.g.:: 42 + Debian Testing and Debian Unstable (Sid), outside of the freeze period, provide 43 + recent Rust releases and thus they should generally work out of the box, e.g.:: 44 44 45 45 apt install rustc rust-src bindgen rustfmt rust-clippy 46 46
+1
MAINTAINERS
··· 20144 20144 R: Benno Lossin <benno.lossin@proton.me> 20145 20145 R: Andreas Hindborg <a.hindborg@kernel.org> 20146 20146 R: Alice Ryhl <aliceryhl@google.com> 20147 + R: Trevor Gross <tmgross@umich.edu> 20147 20148 L: rust-for-linux@vger.kernel.org 20148 20149 S: Supported 20149 20150 W: https://rust-for-linux.com
+16 -3
Makefile
··· 645 645 646 646 # The expansion should be delayed until arch/$(SRCARCH)/Makefile is included. 647 647 # Some architectures define CROSS_COMPILE in arch/$(SRCARCH)/Makefile. 648 - # CC_VERSION_TEXT is referenced from Kconfig (so it needs export), 649 - # and from include/config/auto.conf.cmd to detect the compiler upgrade. 648 + # CC_VERSION_TEXT and RUSTC_VERSION_TEXT are referenced from Kconfig (so they 649 + # need export), and from include/config/auto.conf.cmd to detect the compiler 650 + # upgrade. 650 651 CC_VERSION_TEXT = $(subst $(pound),,$(shell LC_ALL=C $(CC) --version 2>/dev/null | head -n 1)) 652 + RUSTC_VERSION_TEXT = $(subst $(pound),,$(shell $(RUSTC) --version 2>/dev/null)) 651 653 652 654 ifneq ($(findstring clang,$(CC_VERSION_TEXT)),) 653 655 include $(srctree)/scripts/Makefile.clang ··· 670 668 # KBUILD_DEFCONFIG may point out an alternative default configuration 671 669 # used for 'make defconfig' 672 670 include $(srctree)/arch/$(SRCARCH)/Makefile 673 - export KBUILD_DEFCONFIG KBUILD_KCONFIG CC_VERSION_TEXT 671 + export KBUILD_DEFCONFIG KBUILD_KCONFIG CC_VERSION_TEXT RUSTC_VERSION_TEXT 674 672 675 673 config: outputmakefile scripts_basic FORCE 676 674 $(Q)$(MAKE) $(build)=scripts/kconfig $@ ··· 926 924 ifndef CONFIG_DYNAMIC_SCS 927 925 CC_FLAGS_SCS := -fsanitize=shadow-call-stack 928 926 KBUILD_CFLAGS += $(CC_FLAGS_SCS) 927 + KBUILD_RUSTFLAGS += -Zsanitizer=shadow-call-stack 929 928 endif 930 929 export CC_FLAGS_SCS 931 930 endif ··· 951 948 952 949 ifdef CONFIG_CFI_CLANG 953 950 CC_FLAGS_CFI := -fsanitize=kcfi 951 + ifdef CONFIG_CFI_ICALL_NORMALIZE_INTEGERS 952 + CC_FLAGS_CFI += -fsanitize-cfi-icall-experimental-normalize-integers 953 + endif 954 + ifdef CONFIG_RUST 955 + # Always pass -Zsanitizer-cfi-normalize-integers as CONFIG_RUST selects 956 + # CONFIG_CFI_ICALL_NORMALIZE_INTEGERS. 
957 + RUSTC_FLAGS_CFI := -Zsanitizer=kcfi -Zsanitizer-cfi-normalize-integers 958 + KBUILD_RUSTFLAGS += $(RUSTC_FLAGS_CFI) 959 + export RUSTC_FLAGS_CFI 960 + endif 954 961 KBUILD_CFLAGS += $(CC_FLAGS_CFI) 955 962 export CC_FLAGS_CFI 956 963 endif
+16
arch/Kconfig
··· 835 835 836 836 https://clang.llvm.org/docs/ControlFlowIntegrity.html 837 837 838 + config CFI_ICALL_NORMALIZE_INTEGERS 839 + bool "Normalize CFI tags for integers" 840 + depends on CFI_CLANG 841 + depends on $(cc-option,-fsanitize=kcfi -fsanitize-cfi-icall-experimental-normalize-integers) 842 + help 843 + This option normalizes the CFI tags for integer types so that all 844 + integer types of the same size and signedness receive the same CFI 845 + tag. 846 + 847 + The option is separate from CONFIG_RUST because it affects the ABI. 848 + When working with build systems that care about the ABI, it is 849 + convenient to be able to turn on this flag first, before Rust is 850 + turned on. 851 + 852 + This option is necessary for using CFI with Rust. If unsure, say N. 853 + 838 854 config CFI_PERMISSIVE 839 855 bool "Use CFI in permissive mode" 840 856 depends on CFI_CLANG
+13 -1
arch/arm64/Kconfig
··· 235 235 select HAVE_FUNCTION_ARG_ACCESS_API 236 236 select MMU_GATHER_RCU_TABLE_FREE 237 237 select HAVE_RSEQ 238 - select HAVE_RUST if CPU_LITTLE_ENDIAN 238 + select HAVE_RUST if RUSTC_SUPPORTS_ARM64 239 239 select HAVE_STACKPROTECTOR 240 240 select HAVE_SYSCALL_TRACEPOINTS 241 241 select HAVE_KPROBES ··· 269 269 select VDSO_GETRANDOM 270 270 help 271 271 ARM 64-bit (AArch64) Linux support. 272 + 273 + config RUSTC_SUPPORTS_ARM64 274 + def_bool y 275 + depends on CPU_LITTLE_ENDIAN 276 + # Shadow call stack is only supported on certain rustc versions. 277 + # 278 + # When using the UNWIND_PATCH_PAC_INTO_SCS option, rustc version 1.80+ is 279 + # required due to use of the -Zfixed-x18 flag. 280 + # 281 + # Otherwise, rustc version 1.82+ is required due to use of the 282 + # -Zsanitizer=shadow-call-stack flag. 283 + depends on !SHADOW_CALL_STACK || RUSTC_VERSION >= 108200 || RUSTC_VERSION >= 108000 && UNWIND_PATCH_PAC_INTO_SCS 272 284 273 285 config CLANG_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS 274 286 def_bool CC_IS_CLANG
+3
arch/arm64/Makefile
··· 57 57 ifneq ($(CONFIG_UNWIND_TABLES),y) 58 58 KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables 59 59 KBUILD_AFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables 60 + KBUILD_RUSTFLAGS += -Cforce-unwind-tables=n 60 61 else 61 62 KBUILD_CFLAGS += -fasynchronous-unwind-tables 62 63 KBUILD_AFLAGS += -fasynchronous-unwind-tables 64 + KBUILD_RUSTFLAGS += -Cforce-unwind-tables=y -Zuse-sync-unwind=n 63 65 endif 64 66 65 67 ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y) ··· 116 114 117 115 ifeq ($(CONFIG_SHADOW_CALL_STACK), y) 118 116 KBUILD_CFLAGS += -ffixed-x18 117 + KBUILD_RUSTFLAGS += -Zfixed-x18 119 118 endif 120 119 121 120 ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
+8 -1
arch/riscv/Kconfig
··· 177 177 select HAVE_REGS_AND_STACK_ACCESS_API 178 178 select HAVE_RETHOOK if !XIP_KERNEL 179 179 select HAVE_RSEQ 180 - select HAVE_RUST if 64BIT 180 + select HAVE_RUST if RUSTC_SUPPORTS_RISCV 181 181 select HAVE_SAMPLE_FTRACE_DIRECT 182 182 select HAVE_SAMPLE_FTRACE_DIRECT_MULTI 183 183 select HAVE_STACKPROTECTOR ··· 208 208 select UACCESS_MEMCPY if !MMU 209 209 select USER_STACKTRACE_SUPPORT 210 210 select ZONE_DMA32 if 64BIT 211 + 212 + config RUSTC_SUPPORTS_RISCV 213 + def_bool y 214 + depends on 64BIT 215 + # Shadow call stack requires rustc version 1.82+ due to use of the 216 + # -Zsanitizer=shadow-call-stack flag. 217 + depends on !SHADOW_CALL_STACK || RUSTC_VERSION >= 108200 211 218 212 219 config CLANG_SUPPORTS_DYNAMIC_FTRACE 213 220 def_bool CC_IS_CLANG
+10 -1
arch/x86/Makefile
··· 24 24 25 25 ifdef CONFIG_MITIGATION_RETHUNK 26 26 RETHUNK_CFLAGS := -mfunction-return=thunk-extern 27 + RETHUNK_RUSTFLAGS := -Zfunction-return=thunk-extern 27 28 RETPOLINE_CFLAGS += $(RETHUNK_CFLAGS) 29 + RETPOLINE_RUSTFLAGS += $(RETHUNK_RUSTFLAGS) 28 30 endif 29 31 30 32 export RETHUNK_CFLAGS 33 + export RETHUNK_RUSTFLAGS 31 34 export RETPOLINE_CFLAGS 35 + export RETPOLINE_RUSTFLAGS 32 36 export RETPOLINE_VDSO_CFLAGS 33 37 34 38 # For gcc stack alignment is specified with -mpreferred-stack-boundary, ··· 222 218 # Avoid indirect branches in kernel to deal with Spectre 223 219 ifdef CONFIG_MITIGATION_RETPOLINE 224 220 KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) 221 + KBUILD_RUSTFLAGS += $(RETPOLINE_RUSTFLAGS) 225 222 # Additionally, avoid generating expensive indirect jumps which 226 223 # are subject to retpolines for small number of switch cases. 227 - # clang turns off jump table generation by default when under 224 + # LLVM turns off jump table generation by default when under 228 225 # retpoline builds, however, gcc does not for x86. This has 229 226 # only been fixed starting from gcc stable version 8.4.0 and 230 227 # onwards, but not for older ones. See gcc bug #86952. ··· 242 237 PADDING_CFLAGS := -fpatchable-function-entry=$(CONFIG_FUNCTION_PADDING_BYTES),$(CONFIG_FUNCTION_PADDING_BYTES) 243 238 KBUILD_CFLAGS += $(PADDING_CFLAGS) 244 239 export PADDING_CFLAGS 240 + 241 + PADDING_RUSTFLAGS := -Zpatchable-function-entry=$(CONFIG_FUNCTION_PADDING_BYTES),$(CONFIG_FUNCTION_PADDING_BYTES) 242 + KBUILD_RUSTFLAGS += $(PADDING_RUSTFLAGS) 243 + export PADDING_RUSTFLAGS 245 244 endif 246 245 247 246 KBUILD_LDFLAGS += -m elf_$(UTS_MACHINE)
+15 -4
init/Kconfig
··· 60 60 default $(ld-version) if LD_IS_LLD 61 61 default 0 62 62 63 + config RUSTC_VERSION 64 + int 65 + default $(shell,$(srctree)/scripts/rustc-version.sh $(RUSTC)) 66 + help 67 + It does not depend on `RUST` since that one may need to use the version 68 + in a `depends on`. 69 + 63 70 config RUST_IS_AVAILABLE 64 71 def_bool $(success,$(srctree)/scripts/rust_is_available.sh) 65 72 help ··· 1942 1935 bool "Rust support" 1943 1936 depends on HAVE_RUST 1944 1937 depends on RUST_IS_AVAILABLE 1945 - depends on !CFI_CLANG 1946 1938 depends on !MODVERSIONS 1947 - depends on !GCC_PLUGINS 1939 + depends on !GCC_PLUGIN_RANDSTRUCT 1948 1940 depends on !RANDSTRUCT 1949 - depends on !SHADOW_CALL_STACK 1950 1941 depends on !DEBUG_INFO_BTF || PAHOLE_HAS_LANG_EXCLUDE 1942 + depends on !CFI_CLANG || RUSTC_VERSION >= 107900 && $(cc-option,-fsanitize=kcfi -fsanitize-cfi-icall-experimental-normalize-integers) 1943 + select CFI_ICALL_NORMALIZE_INTEGERS if CFI_CLANG 1944 + depends on !CALL_PADDING || RUSTC_VERSION >= 108000 1945 + depends on !KASAN_SW_TAGS 1951 1946 help 1952 1947 Enables Rust support in the kernel. 1953 1948 ··· 1966 1957 config RUSTC_VERSION_TEXT 1967 1958 string 1968 1959 depends on RUST 1969 - default "$(shell,$(RUSTC) --version 2>/dev/null)" 1960 + default "$(RUSTC_VERSION_TEXT)" 1961 + help 1962 + See `CC_VERSION_TEXT`. 1970 1963 1971 1964 config BINDGEN_VERSION_TEXT 1972 1965 string
+7 -1
mm/kasan/Makefile
··· 44 44 CFLAGS_KASAN_TEST += -fno-builtin 45 45 endif 46 46 47 - CFLAGS_kasan_test.o := $(CFLAGS_KASAN_TEST) 47 + CFLAGS_kasan_test_c.o := $(CFLAGS_KASAN_TEST) 48 + RUSTFLAGS_kasan_test_rust.o := $(RUSTFLAGS_KASAN) 48 49 CFLAGS_kasan_test_module.o := $(CFLAGS_KASAN_TEST) 49 50 50 51 obj-y := common.o report.o 51 52 obj-$(CONFIG_KASAN_GENERIC) += init.o generic.o report_generic.o shadow.o quarantine.o 52 53 obj-$(CONFIG_KASAN_HW_TAGS) += hw_tags.o report_hw_tags.o tags.o report_tags.o 53 54 obj-$(CONFIG_KASAN_SW_TAGS) += init.o report_sw_tags.o shadow.o sw_tags.o tags.o report_tags.o 55 + 56 + kasan_test-objs := kasan_test_c.o 57 + ifdef CONFIG_RUST 58 + kasan_test-objs += kasan_test_rust.o 59 + endif 54 60 55 61 obj-$(CONFIG_KASAN_KUNIT_TEST) += kasan_test.o 56 62 obj-$(CONFIG_KASAN_MODULE_TEST) += kasan_test_module.o
+6
mm/kasan/kasan.h
··· 555 555 void kasan_kunit_test_suite_start(void); 556 556 void kasan_kunit_test_suite_end(void); 557 557 558 + #ifdef CONFIG_RUST 559 + char kasan_test_rust_uaf(void); 560 + #else 561 + static inline char kasan_test_rust_uaf(void) { return '\0'; } 562 + #endif 563 + 558 564 #else /* CONFIG_KASAN_KUNIT_TEST */ 559 565 560 566 static inline void kasan_kunit_test_suite_start(void) { }
+11
mm/kasan/kasan_test.c mm/kasan/kasan_test_c.c
··· 1944 1944 kfree(ptr); 1945 1945 } 1946 1946 1947 + /* 1948 + * Check that Rust performing a use-after-free using `unsafe` is detected. 1949 + * This is a smoke test to make sure that Rust is being sanitized properly. 1950 + */ 1951 + static void rust_uaf(struct kunit *test) 1952 + { 1953 + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_RUST); 1954 + KUNIT_EXPECT_KASAN_FAIL(test, kasan_test_rust_uaf()); 1955 + } 1956 + 1947 1957 static struct kunit_case kasan_kunit_test_cases[] = { 1948 1958 KUNIT_CASE(kmalloc_oob_right), 1949 1959 KUNIT_CASE(kmalloc_oob_left), ··· 2027 2017 KUNIT_CASE(match_all_not_assigned), 2028 2018 KUNIT_CASE(match_all_ptr_tag), 2029 2019 KUNIT_CASE(match_all_mem_tag), 2020 + KUNIT_CASE(rust_uaf), 2030 2021 {} 2031 2022 }; 2032 2023
+21
mm/kasan/kasan_test_rust.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Helper crate for KASAN testing. 4 + //! 5 + //! Provides behavior to check the sanitization of Rust code. 6 + 7 + use core::ptr::addr_of_mut; 8 + use kernel::prelude::*; 9 + 10 + /// Trivial UAF - allocate a big vector, grab a pointer partway through, 11 + /// drop the vector, and touch it. 12 + #[no_mangle] 13 + pub extern "C" fn kasan_test_rust_uaf() -> u8 { 14 + let mut v: Vec<u8> = Vec::new(); 15 + for _ in 0..4096 { 16 + v.push(0x42, GFP_KERNEL).unwrap(); 17 + } 18 + let ptr: *mut u8 = addr_of_mut!(v[2048]); 19 + drop(v); 20 + unsafe { *ptr } 21 + }
+37 -19
rust/Makefile
··· 8 8 9 9 # Missing prototypes are expected in the helpers since these are exported 10 10 # for Rust only, thus there is no header nor prototypes. 11 - obj-$(CONFIG_RUST) += helpers.o 12 - CFLAGS_REMOVE_helpers.o = -Wmissing-prototypes -Wmissing-declarations 11 + obj-$(CONFIG_RUST) += helpers/helpers.o 12 + CFLAGS_REMOVE_helpers/helpers.o = -Wmissing-prototypes -Wmissing-declarations 13 13 14 14 always-$(CONFIG_RUST) += libmacros.so 15 15 no-clean-files += libmacros.so 16 16 17 17 always-$(CONFIG_RUST) += bindings/bindings_generated.rs bindings/bindings_helpers_generated.rs 18 18 obj-$(CONFIG_RUST) += alloc.o bindings.o kernel.o 19 - always-$(CONFIG_RUST) += exports_alloc_generated.h exports_bindings_generated.h \ 20 - exports_kernel_generated.h 19 + always-$(CONFIG_RUST) += exports_alloc_generated.h exports_helpers_generated.h \ 20 + exports_bindings_generated.h exports_kernel_generated.h 21 21 22 22 always-$(CONFIG_RUST) += uapi/uapi_generated.rs 23 23 obj-$(CONFIG_RUST) += uapi.o ··· 63 63 OBJTREE=$(abspath $(objtree)) \ 64 64 $(RUSTDOC) $(if $(rustdoc_host),$(rust_common_flags),$(rust_flags)) \ 65 65 $(rustc_target_flags) -L$(objtree)/$(obj) \ 66 + -Zunstable-options --generate-link-to-definition \ 66 67 --output $(rustdoc_output) \ 67 68 --crate-name $(subst rustdoc-,,$@) \ 68 69 $(if $(rustdoc_host),,--sysroot=/dev/null) \ ··· 271 270 cmd_bindgen = \ 272 271 $(BINDGEN) $< $(bindgen_target_flags) \ 273 272 --use-core --with-derive-default --ctypes-prefix core::ffi --no-layout-tests \ 274 - --no-debug '.*' \ 273 + --no-debug '.*' --enable-function-attribute-detection \ 275 274 -o $@ -- $(bindgen_c_flags_final) -DMODULE \ 276 275 $(bindgen_target_cflags) $(bindgen_target_extra) 277 276 ··· 300 299 -I$(objtree)/$(obj) -Wno-missing-prototypes -Wno-missing-declarations 301 300 $(obj)/bindings/bindings_helpers_generated.rs: private bindgen_target_extra = ; \ 302 301 sed -Ei 's/pub fn rust_helper_([a-zA-Z0-9_]*)/#[link_name="rust_helper_\1"]\n pub fn \1/g' $@ 303 - 
$(obj)/bindings/bindings_helpers_generated.rs: $(src)/helpers.c FORCE 302 + $(obj)/bindings/bindings_helpers_generated.rs: $(src)/helpers/helpers.c FORCE 304 303 $(call if_changed_dep,bindgen) 305 304 306 305 quiet_cmd_exports = EXPORTS $@ 307 306 cmd_exports = \ 308 307 $(NM) -p --defined-only $< \ 309 - | awk '/ (T|R|D|B) / {printf "EXPORT_SYMBOL_RUST_GPL(%s);\n",$$3}' > $@ 308 + | awk '$$2~/(T|R|D|B)/ && $$3!~/__cfi/ {printf "EXPORT_SYMBOL_RUST_GPL(%s);\n",$$3}' > $@ 310 309 311 310 $(obj)/exports_core_generated.h: $(obj)/core.o FORCE 312 311 $(call if_changed,exports) 313 312 314 313 $(obj)/exports_alloc_generated.h: $(obj)/alloc.o FORCE 314 + $(call if_changed,exports) 315 + 316 + # Even though Rust kernel modules should never use the bindings directly, 317 + # symbols from the `bindings` crate and the C helpers need to be exported 318 + # because Rust generics and inlined functions may not get their code generated 319 + # in the crate where they are defined. Other helpers, called from non-inline 320 + # functions, may not be exported, in principle. However, in general, the Rust 321 + # compiler does not guarantee codegen will be performed for a non-inline 322 + # function either. Therefore, we export all symbols from helpers and bindings. 323 + # In the future, this may be revisited to reduce the number of exports after 324 + # the compiler is informed about the places codegen is required. 325 + $(obj)/exports_helpers_generated.h: $(obj)/helpers/helpers.o FORCE 315 326 $(call if_changed,exports) 316 327 317 328 $(obj)/exports_bindings_generated.h: $(obj)/bindings.o FORCE ··· 342 329 --crate-name $(patsubst lib%.so,%,$(notdir $@)) $< 343 330 344 331 # Procedural macros can only be used with the `rustc` that compiled it. 345 - # Therefore, to get `libmacros.so` automatically recompiled when the compiler 346 - # version changes, we add `core.o` as a dependency (even if it is not needed). 
347 - $(obj)/libmacros.so: $(src)/macros/lib.rs $(obj)/core.o FORCE 332 + $(obj)/libmacros.so: $(src)/macros/lib.rs FORCE 348 333 +$(call if_changed_dep,rustc_procmacro) 349 334 350 335 quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L $@ ··· 355 344 --crate-type rlib -L$(objtree)/$(obj) \ 356 345 --crate-name $(patsubst %.o,%,$(notdir $@)) $< \ 357 346 --sysroot=/dev/null \ 358 - $(if $(rustc_objcopy),;$(OBJCOPY) $(rustc_objcopy) $@) 347 + $(if $(rustc_objcopy),;$(OBJCOPY) $(rustc_objcopy) $@) \ 348 + $(cmd_objtool) 359 349 360 350 rust-analyzer: 361 351 $(Q)$(srctree)/scripts/generate_rust_analyzer.py \ ··· 378 366 __ashlti3 __lshrti3 379 367 endif 380 368 369 + define rule_rustc_library 370 + $(call cmd_and_fixdep,rustc_library) 371 + $(call cmd,gen_objtooldep) 372 + endef 373 + 381 374 $(obj)/core.o: private skip_clippy = 1 382 375 $(obj)/core.o: private skip_flags = -Wunreachable_pub 383 376 $(obj)/core.o: private rustc_objcopy = $(foreach sym,$(redirect-intrinsics),--redefine-sym $(sym)=__rust$(sym)) 384 377 $(obj)/core.o: private rustc_target_flags = $(core-cfgs) 385 - $(obj)/core.o: $(RUST_LIB_SRC)/core/src/lib.rs FORCE 386 - +$(call if_changed_dep,rustc_library) 378 + $(obj)/core.o: $(RUST_LIB_SRC)/core/src/lib.rs \ 379 + $(wildcard $(objtree)/include/config/RUSTC_VERSION_TEXT) FORCE 380 + +$(call if_changed_rule,rustc_library) 387 381 ifneq ($(or $(CONFIG_X86_64),$(CONFIG_X86_32)),) 388 382 $(obj)/core.o: scripts/target.json 389 383 endif 390 384 391 385 $(obj)/compiler_builtins.o: private rustc_objcopy = -w -W '__*' 392 386 $(obj)/compiler_builtins.o: $(src)/compiler_builtins.rs $(obj)/core.o FORCE 393 - +$(call if_changed_dep,rustc_library) 387 + +$(call if_changed_rule,rustc_library) 394 388 395 389 $(obj)/alloc.o: private skip_clippy = 1 396 390 $(obj)/alloc.o: private skip_flags = -Wunreachable_pub 397 391 $(obj)/alloc.o: private rustc_target_flags = $(alloc-cfgs) 398 392 $(obj)/alloc.o: $(RUST_LIB_SRC)/alloc/src/lib.rs 
$(obj)/compiler_builtins.o FORCE 399 - +$(call if_changed_dep,rustc_library) 393 + +$(call if_changed_rule,rustc_library) 400 394 401 395 $(obj)/build_error.o: $(src)/build_error.rs $(obj)/compiler_builtins.o FORCE 402 - +$(call if_changed_dep,rustc_library) 396 + +$(call if_changed_rule,rustc_library) 403 397 404 398 $(obj)/bindings.o: $(src)/bindings/lib.rs \ 405 399 $(obj)/compiler_builtins.o \ 406 400 $(obj)/bindings/bindings_generated.rs \ 407 401 $(obj)/bindings/bindings_helpers_generated.rs FORCE 408 - +$(call if_changed_dep,rustc_library) 402 + +$(call if_changed_rule,rustc_library) 409 403 410 404 $(obj)/uapi.o: $(src)/uapi/lib.rs \ 411 405 $(obj)/compiler_builtins.o \ 412 406 $(obj)/uapi/uapi_generated.rs FORCE 413 - +$(call if_changed_dep,rustc_library) 407 + +$(call if_changed_rule,rustc_library) 414 408 415 409 $(obj)/kernel.o: private rustc_target_flags = --extern alloc \ 416 410 --extern build_error --extern macros --extern bindings --extern uapi 417 411 $(obj)/kernel.o: $(src)/kernel/lib.rs $(obj)/alloc.o $(obj)/build_error.o \ 418 412 $(obj)/libmacros.so $(obj)/bindings.o $(obj)/uapi.o FORCE 419 - +$(call if_changed_dep,rustc_library) 413 + +$(call if_changed_rule,rustc_library) 420 414 421 415 endif # CONFIG_RUST
+1 -1
rust/bindings/bindings_helper.h
··· 7 7 */ 8 8 9 9 #include <kunit/test.h> 10 - #include <linux/blk_types.h> 11 10 #include <linux/blk-mq.h> 11 + #include <linux/blk_types.h> 12 12 #include <linux/blkdev.h> 13 13 #include <linux/errname.h> 14 14 #include <linux/ethtool.h>
+1
rust/exports.c
··· 17 17 18 18 #include "exports_core_generated.h" 19 19 #include "exports_alloc_generated.h" 20 + #include "exports_helpers_generated.h" 20 21 #include "exports_bindings_generated.h" 21 22 #include "exports_kernel_generated.h" 22 23
-239
rust/helpers.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * Non-trivial C macros cannot be used in Rust. Similarly, inlined C functions 4 - * cannot be called either. This file explicitly creates functions ("helpers") 5 - * that wrap those so that they can be called from Rust. 6 - * 7 - * Even though Rust kernel modules should never use the bindings directly, some 8 - * of these helpers need to be exported because Rust generics and inlined 9 - * functions may not get their code generated in the crate where they are 10 - * defined. Other helpers, called from non-inline functions, may not be 11 - * exported, in principle. However, in general, the Rust compiler does not 12 - * guarantee codegen will be performed for a non-inline function either. 13 - * Therefore, this file exports all the helpers. In the future, this may be 14 - * revisited to reduce the number of exports after the compiler is informed 15 - * about the places codegen is required. 16 - * 17 - * All symbols are exported as GPL-only to guarantee no GPL-only feature is 18 - * accidentally exposed. 19 - * 20 - * Sorted alphabetically. 
-  */
-
- #include <kunit/test-bug.h>
- #include <linux/bug.h>
- #include <linux/build_bug.h>
- #include <linux/device.h>
- #include <linux/err.h>
- #include <linux/errname.h>
- #include <linux/gfp.h>
- #include <linux/highmem.h>
- #include <linux/mutex.h>
- #include <linux/refcount.h>
- #include <linux/sched/signal.h>
- #include <linux/slab.h>
- #include <linux/spinlock.h>
- #include <linux/wait.h>
- #include <linux/workqueue.h>
-
- __noreturn void rust_helper_BUG(void)
- {
-         BUG();
- }
- EXPORT_SYMBOL_GPL(rust_helper_BUG);
-
- unsigned long rust_helper_copy_from_user(void *to, const void __user *from,
-                                          unsigned long n)
- {
-         return copy_from_user(to, from, n);
- }
- EXPORT_SYMBOL_GPL(rust_helper_copy_from_user);
-
- unsigned long rust_helper_copy_to_user(void __user *to, const void *from,
-                                        unsigned long n)
- {
-         return copy_to_user(to, from, n);
- }
- EXPORT_SYMBOL_GPL(rust_helper_copy_to_user);
-
- void rust_helper_mutex_lock(struct mutex *lock)
- {
-         mutex_lock(lock);
- }
- EXPORT_SYMBOL_GPL(rust_helper_mutex_lock);
-
- void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
-                                   struct lock_class_key *key)
- {
- #ifdef CONFIG_DEBUG_SPINLOCK
-         __raw_spin_lock_init(spinlock_check(lock), name, key, LD_WAIT_CONFIG);
- #else
-         spin_lock_init(lock);
- #endif
- }
- EXPORT_SYMBOL_GPL(rust_helper___spin_lock_init);
-
- void rust_helper_spin_lock(spinlock_t *lock)
- {
-         spin_lock(lock);
- }
- EXPORT_SYMBOL_GPL(rust_helper_spin_lock);
-
- void rust_helper_spin_unlock(spinlock_t *lock)
- {
-         spin_unlock(lock);
- }
- EXPORT_SYMBOL_GPL(rust_helper_spin_unlock);
-
- void rust_helper_init_wait(struct wait_queue_entry *wq_entry)
- {
-         init_wait(wq_entry);
- }
- EXPORT_SYMBOL_GPL(rust_helper_init_wait);
-
- int rust_helper_signal_pending(struct task_struct *t)
- {
-         return signal_pending(t);
- }
- EXPORT_SYMBOL_GPL(rust_helper_signal_pending);
-
- struct page *rust_helper_alloc_pages(gfp_t gfp_mask, unsigned int order)
- {
-         return alloc_pages(gfp_mask, order);
- }
- EXPORT_SYMBOL_GPL(rust_helper_alloc_pages);
-
- void *rust_helper_kmap_local_page(struct page *page)
- {
-         return kmap_local_page(page);
- }
- EXPORT_SYMBOL_GPL(rust_helper_kmap_local_page);
-
- void rust_helper_kunmap_local(const void *addr)
- {
-         kunmap_local(addr);
- }
- EXPORT_SYMBOL_GPL(rust_helper_kunmap_local);
-
- refcount_t rust_helper_REFCOUNT_INIT(int n)
- {
-         return (refcount_t)REFCOUNT_INIT(n);
- }
- EXPORT_SYMBOL_GPL(rust_helper_REFCOUNT_INIT);
-
- void rust_helper_refcount_inc(refcount_t *r)
- {
-         refcount_inc(r);
- }
- EXPORT_SYMBOL_GPL(rust_helper_refcount_inc);
-
- bool rust_helper_refcount_dec_and_test(refcount_t *r)
- {
-         return refcount_dec_and_test(r);
- }
- EXPORT_SYMBOL_GPL(rust_helper_refcount_dec_and_test);
-
- __force void *rust_helper_ERR_PTR(long err)
- {
-         return ERR_PTR(err);
- }
- EXPORT_SYMBOL_GPL(rust_helper_ERR_PTR);
-
- bool rust_helper_IS_ERR(__force const void *ptr)
- {
-         return IS_ERR(ptr);
- }
- EXPORT_SYMBOL_GPL(rust_helper_IS_ERR);
-
- long rust_helper_PTR_ERR(__force const void *ptr)
- {
-         return PTR_ERR(ptr);
- }
- EXPORT_SYMBOL_GPL(rust_helper_PTR_ERR);
-
- const char *rust_helper_errname(int err)
- {
-         return errname(err);
- }
- EXPORT_SYMBOL_GPL(rust_helper_errname);
-
- struct task_struct *rust_helper_get_current(void)
- {
-         return current;
- }
- EXPORT_SYMBOL_GPL(rust_helper_get_current);
-
- void rust_helper_get_task_struct(struct task_struct *t)
- {
-         get_task_struct(t);
- }
- EXPORT_SYMBOL_GPL(rust_helper_get_task_struct);
-
- void rust_helper_put_task_struct(struct task_struct *t)
- {
-         put_task_struct(t);
- }
- EXPORT_SYMBOL_GPL(rust_helper_put_task_struct);
-
- struct kunit *rust_helper_kunit_get_current_test(void)
- {
-         return kunit_get_current_test();
- }
- EXPORT_SYMBOL_GPL(rust_helper_kunit_get_current_test);
-
- void rust_helper_init_work_with_key(struct work_struct *work, work_func_t func,
-                                     bool onstack, const char *name,
-                                     struct lock_class_key *key)
- {
-         __init_work(work, onstack);
-         work->data = (atomic_long_t)WORK_DATA_INIT();
-         lockdep_init_map(&work->lockdep_map, name, key, 0);
-         INIT_LIST_HEAD(&work->entry);
-         work->func = func;
- }
- EXPORT_SYMBOL_GPL(rust_helper_init_work_with_key);
-
- void * __must_check __realloc_size(2)
- rust_helper_krealloc(const void *objp, size_t new_size, gfp_t flags)
- {
-         return krealloc(objp, new_size, flags);
- }
- EXPORT_SYMBOL_GPL(rust_helper_krealloc);
-
- /*
-  * `bindgen` binds the C `size_t` type as the Rust `usize` type, so we can
-  * use it in contexts where Rust expects a `usize` like slice (array) indices.
-  * `usize` is defined to be the same as C's `uintptr_t` type (can hold any
-  * pointer) but not necessarily the same as `size_t` (can hold the size of any
-  * single object). Most modern platforms use the same concrete integer type for
-  * both of them, but in case we find ourselves on a platform where
-  * that's not true, fail early instead of risking ABI or
-  * integer-overflow issues.
-  *
-  * If your platform fails this assertion, it means that you are in
-  * danger of integer-overflow bugs (even if you attempt to add
-  * `--no-size_t-is-usize`). It may be easiest to change the kernel ABI on
-  * your platform such that `size_t` matches `uintptr_t` (i.e., to increase
-  * `size_t`, because `uintptr_t` has to be at least as big as `size_t`).
-  */
- static_assert(
-         sizeof(size_t) == sizeof(uintptr_t) &&
-         __alignof__(size_t) == __alignof__(uintptr_t),
-         "Rust code expects C `size_t` to match Rust `usize`"
- );
-
- // This will soon be moved to a separate file, so no need to merge with above.
- #include <linux/blk-mq.h>
- #include <linux/blkdev.h>
-
- void *rust_helper_blk_mq_rq_to_pdu(struct request *rq)
- {
-         return blk_mq_rq_to_pdu(rq);
- }
- EXPORT_SYMBOL_GPL(rust_helper_blk_mq_rq_to_pdu);
-
- struct request *rust_helper_blk_mq_rq_from_pdu(void *pdu)
- {
-         return blk_mq_rq_from_pdu(pdu);
- }
- EXPORT_SYMBOL_GPL(rust_helper_blk_mq_rq_from_pdu);
+14
rust/helpers/blk.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/blk-mq.h>
+ #include <linux/blkdev.h>
+
+ void *rust_helper_blk_mq_rq_to_pdu(struct request *rq)
+ {
+         return blk_mq_rq_to_pdu(rq);
+ }
+
+ struct request *rust_helper_blk_mq_rq_from_pdu(void *pdu)
+ {
+         return blk_mq_rq_from_pdu(pdu);
+ }
+8
rust/helpers/bug.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/bug.h>
+
+ __noreturn void rust_helper_BUG(void)
+ {
+         BUG();
+ }
+25
rust/helpers/build_assert.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/build_bug.h>
+
+ /*
+  * `bindgen` binds the C `size_t` type as the Rust `usize` type, so we can
+  * use it in contexts where Rust expects a `usize` like slice (array) indices.
+  * `usize` is defined to be the same as C's `uintptr_t` type (can hold any
+  * pointer) but not necessarily the same as `size_t` (can hold the size of any
+  * single object). Most modern platforms use the same concrete integer type for
+  * both of them, but in case we find ourselves on a platform where
+  * that's not true, fail early instead of risking ABI or
+  * integer-overflow issues.
+  *
+  * If your platform fails this assertion, it means that you are in
+  * danger of integer-overflow bugs (even if you attempt to add
+  * `--no-size_t-is-usize`). It may be easiest to change the kernel ABI on
+  * your platform such that `size_t` matches `uintptr_t` (i.e., to increase
+  * `size_t`, because `uintptr_t` has to be at least as big as `size_t`).
+  */
+ static_assert(
+         sizeof(size_t) == sizeof(uintptr_t) &&
+         __alignof__(size_t) == __alignof__(uintptr_t),
+         "Rust code expects C `size_t` to match Rust `usize`"
+ );
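The C-side `static_assert` above can be cross-checked from the Rust side as well; here is a minimal userspace sketch (plain `const` assertions against a raw pointer type, not kernel code) of the same size/alignment property that `bindgen`'s `size_t`-to-`usize` mapping relies on:

```rust
use core::mem::{align_of, size_of};

// Compile-time checks, evaluated while building: `usize` must have the same
// size and alignment as a raw pointer (i.e. what C's `uintptr_t` holds).
// If either assertion fails, compilation aborts, mirroring `static_assert`.
const _: () = assert!(size_of::<usize>() == size_of::<*const ()>());
const _: () = assert!(align_of::<usize>() == align_of::<*const ()>());

fn main() {
    // Reaching this point means both assertions held at compile time.
    println!("usize is {} bytes, matching pointer size", size_of::<usize>());
}
```

As in the C version, a failing platform is caught at build time rather than at the first out-of-range index.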
+9
rust/helpers/build_bug.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/export.h>
+ #include <linux/errname.h>
+
+ const char *rust_helper_errname(int err)
+ {
+         return errname(err);
+ }
+19
rust/helpers/err.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/err.h>
+ #include <linux/export.h>
+
+ __force void *rust_helper_ERR_PTR(long err)
+ {
+         return ERR_PTR(err);
+ }
+
+ bool rust_helper_IS_ERR(__force const void *ptr)
+ {
+         return IS_ERR(ptr);
+ }
+
+ long rust_helper_PTR_ERR(__force const void *ptr)
+ {
+         return PTR_ERR(ptr);
+ }
+26
rust/helpers/helpers.c
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+  * Non-trivial C macros cannot be used in Rust. Similarly, inlined C functions
+  * cannot be called either. This file explicitly creates functions ("helpers")
+  * that wrap those so that they can be called from Rust.
+  *
+  * Sorted alphabetically.
+  */
+
+ #include "blk.c"
+ #include "bug.c"
+ #include "build_assert.c"
+ #include "build_bug.c"
+ #include "err.c"
+ #include "kunit.c"
+ #include "mutex.c"
+ #include "page.c"
+ #include "rbtree.c"
+ #include "refcount.c"
+ #include "signal.c"
+ #include "slab.c"
+ #include "spinlock.c"
+ #include "task.c"
+ #include "uaccess.c"
+ #include "wait.c"
+ #include "workqueue.c"
+9
rust/helpers/kunit.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <kunit/test-bug.h>
+ #include <linux/export.h>
+
+ struct kunit *rust_helper_kunit_get_current_test(void)
+ {
+         return kunit_get_current_test();
+ }
+9
rust/helpers/mutex.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/export.h>
+ #include <linux/mutex.h>
+
+ void rust_helper_mutex_lock(struct mutex *lock)
+ {
+         mutex_lock(lock);
+ }
+19
rust/helpers/page.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/gfp.h>
+ #include <linux/highmem.h>
+
+ struct page *rust_helper_alloc_pages(gfp_t gfp_mask, unsigned int order)
+ {
+         return alloc_pages(gfp_mask, order);
+ }
+
+ void *rust_helper_kmap_local_page(struct page *page)
+ {
+         return kmap_local_page(page);
+ }
+
+ void rust_helper_kunmap_local(const void *addr)
+ {
+         kunmap_local(addr);
+ }
+9
rust/helpers/rbtree.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/rbtree.h>
+
+ void rust_helper_rb_link_node(struct rb_node *node, struct rb_node *parent,
+                               struct rb_node **rb_link)
+ {
+         rb_link_node(node, parent, rb_link);
+ }
+19
rust/helpers/refcount.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/export.h>
+ #include <linux/refcount.h>
+
+ refcount_t rust_helper_REFCOUNT_INIT(int n)
+ {
+         return (refcount_t)REFCOUNT_INIT(n);
+ }
+
+ void rust_helper_refcount_inc(refcount_t *r)
+ {
+         refcount_inc(r);
+ }
+
+ bool rust_helper_refcount_dec_and_test(refcount_t *r)
+ {
+         return refcount_dec_and_test(r);
+ }
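The contract these wrappers expose can be illustrated in userspace; a hedged sketch (a demo `AtomicUsize` counter, not the kernel's saturating `refcount_t`) of the `inc`/`dec_and_test` pattern, including the release/acquire ordering that reference counting needs so the final owner observes all prior writes:

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

/// Demo refcount: `dec_and_test` returns true only for the final decrement.
struct DemoRefcount(AtomicUsize);

impl DemoRefcount {
    fn new(n: usize) -> Self {
        DemoRefcount(AtomicUsize::new(n))
    }

    fn inc(&self) {
        // Taking an extra reference needs no ordering with other accesses.
        self.0.fetch_add(1, Ordering::Relaxed);
    }

    fn dec_and_test(&self) -> bool {
        // Release: publish our writes before giving up the reference.
        if self.0.fetch_sub(1, Ordering::Release) == 1 {
            // Acquire: the last owner must see every other owner's writes
            // before it tears the object down.
            fence(Ordering::Acquire);
            true
        } else {
            false
        }
    }
}

fn main() {
    let r = DemoRefcount::new(1);
    r.inc();
    assert!(!r.dec_and_test()); // 2 -> 1: not the last reference
    assert!(r.dec_and_test()); // 1 -> 0: last reference dropped
}
```

The same fetch_sub/fence shape is what `std::sync::Arc::drop` uses; the kernel's `refcount_t` additionally saturates instead of wrapping on overflow.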
+9
rust/helpers/signal.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/export.h>
+ #include <linux/sched/signal.h>
+
+ int rust_helper_signal_pending(struct task_struct *t)
+ {
+         return signal_pending(t);
+ }
+9
rust/helpers/slab.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/slab.h>
+
+ void * __must_check __realloc_size(2)
+ rust_helper_krealloc(const void *objp, size_t new_size, gfp_t flags)
+ {
+         return krealloc(objp, new_size, flags);
+ }
+24
rust/helpers/spinlock.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/export.h>
+ #include <linux/spinlock.h>
+
+ void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
+                                   struct lock_class_key *key)
+ {
+ #ifdef CONFIG_DEBUG_SPINLOCK
+         __raw_spin_lock_init(spinlock_check(lock), name, key, LD_WAIT_CONFIG);
+ #else
+         spin_lock_init(lock);
+ #endif
+ }
+
+ void rust_helper_spin_lock(spinlock_t *lock)
+ {
+         spin_lock(lock);
+ }
+
+ void rust_helper_spin_unlock(spinlock_t *lock)
+ {
+         spin_unlock(lock);
+ }
+19
rust/helpers/task.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/export.h>
+ #include <linux/sched/task.h>
+
+ struct task_struct *rust_helper_get_current(void)
+ {
+         return current;
+ }
+
+ void rust_helper_get_task_struct(struct task_struct *t)
+ {
+         get_task_struct(t);
+ }
+
+ void rust_helper_put_task_struct(struct task_struct *t)
+ {
+         put_task_struct(t);
+ }
+15
rust/helpers/uaccess.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/uaccess.h>
+
+ unsigned long rust_helper_copy_from_user(void *to, const void __user *from,
+                                          unsigned long n)
+ {
+         return copy_from_user(to, from, n);
+ }
+
+ unsigned long rust_helper_copy_to_user(void __user *to, const void *from,
+                                        unsigned long n)
+ {
+         return copy_to_user(to, from, n);
+ }
+9
rust/helpers/wait.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/export.h>
+ #include <linux/wait.h>
+
+ void rust_helper_init_wait(struct wait_queue_entry *wq_entry)
+ {
+         init_wait(wq_entry);
+ }
+15
rust/helpers/workqueue.c
+ // SPDX-License-Identifier: GPL-2.0
+
+ #include <linux/export.h>
+ #include <linux/workqueue.h>
+
+ void rust_helper_init_work_with_key(struct work_struct *work, work_func_t func,
+                                     bool onstack, const char *name,
+                                     struct lock_class_key *key)
+ {
+         __init_work(work, onstack);
+         work->data = (atomic_long_t)WORK_DATA_INIT();
+         lockdep_init_map(&work->lockdep_map, name, key, 0);
+         INIT_LIST_HEAD(&work->entry);
+         work->func = func;
+ }
+32 -1
rust/kernel/alloc/box_ext.rs
···
 use super::{AllocError, Flags};
 use alloc::boxed::Box;
- use core::mem::MaybeUninit;
+ use core::{mem::MaybeUninit, ptr, result::Result};

 /// Extensions to [`Box`].
 pub trait BoxExt<T>: Sized {
···
     ///
     /// The allocation may fail, in which case an error is returned.
     fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>>, AllocError>;
+
+     /// Drops the contents, but keeps the allocation.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// use kernel::alloc::{flags, box_ext::BoxExt};
+     /// let value = Box::new([0; 32], flags::GFP_KERNEL)?;
+     /// assert_eq!(*value, [0; 32]);
+     /// let mut value = Box::drop_contents(value);
+     /// // Now we can re-use `value`:
+     /// value.write([1; 32]);
+     /// // SAFETY: We just wrote to it.
+     /// let value = unsafe { value.assume_init() };
+     /// assert_eq!(*value, [1; 32]);
+     /// # Ok::<(), Error>(())
+     /// ```
+     fn drop_contents(this: Self) -> Box<MaybeUninit<T>>;
 }

 impl<T> BoxExt<T> for Box<T> {
···
         // SAFETY: For non-zero-sized types, we allocate above using the global allocator. For
         // zero-sized types, we use `NonNull::dangling`.
         Ok(unsafe { Box::from_raw(ptr) })
+     }
+
+     fn drop_contents(this: Self) -> Box<MaybeUninit<T>> {
+         let ptr = Box::into_raw(this);
+         // SAFETY: `ptr` is valid, because it came from `Box::into_raw`.
+         unsafe { ptr::drop_in_place(ptr) };
+
+         // CAST: `MaybeUninit<T>` is a transparent wrapper of `T`.
+         let ptr = ptr.cast::<MaybeUninit<T>>();
+
+         // SAFETY: `ptr` is valid for writes, because it came from `Box::into_raw` and it is valid for
+         // reads, since the pointer came from `Box::into_raw` and the type is `MaybeUninit<T>`.
+         unsafe { Box::from_raw(ptr) }
     }
 }
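Outside the kernel, the new `drop_contents` can be mirrored with `std`'s `Box`; a minimal sketch (userspace `Box` and stable APIs only, not the kernel's allocator-aware `BoxExt`) of the same into_raw / drop_in_place / from_raw dance:

```rust
use core::mem::MaybeUninit;
use core::ptr;

/// Drops the boxed value in place but keeps the heap allocation, handing it
/// back as uninitialized memory (mirrors `BoxExt::drop_contents`).
fn drop_contents<T>(this: Box<T>) -> Box<MaybeUninit<T>> {
    let ptr = Box::into_raw(this);
    // SAFETY: `ptr` came from `Box::into_raw`, so it is valid for reads and writes.
    unsafe { ptr::drop_in_place(ptr) };
    // CAST: `MaybeUninit<T>` is a `#[repr(transparent)]` wrapper of `T`.
    let ptr = ptr.cast::<MaybeUninit<T>>();
    // SAFETY: same allocation, same layout; the contents are now uninitialized.
    unsafe { Box::from_raw(ptr) }
}

fn main() {
    let b = Box::new(String::from("old contents"));
    let mut slot = drop_contents(b); // the `String` is dropped, the allocation survives
    slot.write(String::from("new contents"));
    // SAFETY: we just wrote a valid `String` into the slot.
    let b: Box<String> = unsafe { Box::from_raw(Box::into_raw(slot).cast()) };
    assert_eq!(*b, "new contents");
}
```

The point of the cast through `MaybeUninit<T>` is that the allocation can be handed around safely while its contents are invalid; only the final `from_raw`/`assume_init` step reasserts initialization.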
+4 -1
rust/kernel/error.rs
···
     /// Returns the error encoded as a pointer.
     #[allow(dead_code)]
     pub(crate) fn to_ptr<T>(self) -> *mut T {
+         #[cfg_attr(target_pointer_width = "32", allow(clippy::useless_conversion))]
         // SAFETY: `self.0` is a valid error due to its invariant.
-         unsafe { bindings::ERR_PTR(self.0.into()) as *mut _ }
+         unsafe {
+             bindings::ERR_PTR(self.0.into()) as *mut _
+         }
     }

     /// Returns a string representing the error, if one exists.
+164 -29
rust/kernel/init.rs
···
 use crate::{
     alloc::{box_ext::BoxExt, AllocError, Flags},
     error::{self, Error},
+     sync::Arc,
     sync::UniqueArc,
     types::{Opaque, ScopeGuard},
 };
···
     };
 }

+ /// Asserts that a field on a struct using `#[pin_data]` is marked with `#[pin]`, i.e. that it is
+ /// structurally pinned.
+ ///
+ /// # Example
+ ///
+ /// This will succeed:
+ /// ```
+ /// use kernel::assert_pinned;
+ /// #[pin_data]
+ /// struct MyStruct {
+ ///     #[pin]
+ ///     some_field: u64,
+ /// }
+ ///
+ /// assert_pinned!(MyStruct, some_field, u64);
+ /// ```
+ ///
+ /// This will fail:
+ // TODO: replace with `compile_fail` when supported.
+ /// ```ignore
+ /// use kernel::assert_pinned;
+ /// #[pin_data]
+ /// struct MyStruct {
+ ///     some_field: u64,
+ /// }
+ ///
+ /// assert_pinned!(MyStruct, some_field, u64);
+ /// ```
+ ///
+ /// Some uses of the macro may trigger the `can't use generic parameters from outer item` error. To
+ /// work around this, you may pass the `inline` parameter to the macro. The `inline` parameter can
+ /// only be used when the macro is invoked from a function body.
+ /// ```
+ /// use kernel::assert_pinned;
+ /// #[pin_data]
+ /// struct Foo<T> {
+ ///     #[pin]
+ ///     elem: T,
+ /// }
+ ///
+ /// impl<T> Foo<T> {
+ ///     fn project(self: Pin<&mut Self>) -> Pin<&mut T> {
+ ///         assert_pinned!(Foo<T>, elem, T, inline);
+ ///
+ ///         // SAFETY: The field is structurally pinned.
+ ///         unsafe { self.map_unchecked_mut(|me| &mut me.elem) }
+ ///     }
+ /// }
+ /// ```
+ #[macro_export]
+ macro_rules! assert_pinned {
+     ($ty:ty, $field:ident, $field_ty:ty, inline) => {
+         let _ = move |ptr: *mut $field_ty| {
+             // SAFETY: This code is unreachable.
+             let data = unsafe { <$ty as $crate::init::__internal::HasPinData>::__pin_data() };
+             let init = $crate::init::__internal::AlwaysFail::<$field_ty>::new();
+             // SAFETY: This code is unreachable.
+             unsafe { data.$field(ptr, init) }.ok();
+         };
+     };
+
+     ($ty:ty, $field:ident, $field_ty:ty) => {
+         const _: () = {
+             $crate::assert_pinned!($ty, $field, $field_ty, inline);
+         };
+     };
+ }
+
 /// A pin-initializer for the type `T`.
 ///
 /// To use this initializer, you will need a suitable memory location that can hold a `T`. This can
···
 /// Smart pointer that can initialize memory in-place.
 pub trait InPlaceInit<T>: Sized {
+     /// Pinned version of `Self`.
+     ///
+     /// If a type already implicitly pins its pointee, `Pin<Self>` is unnecessary. In this case use
+     /// `Self`, otherwise just use `Pin<Self>`.
+     type PinnedSelf;
+
     /// Use the given pin-initializer to pin-initialize a `T` inside of a new smart pointer of this
     /// type.
     ///
     /// If `T: !Unpin` it will not be able to move afterwards.
-     fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Pin<Self>, E>
+     fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Self::PinnedSelf, E>
     where
         E: From<AllocError>;
···
     /// type.
     ///
     /// If `T: !Unpin` it will not be able to move afterwards.
-     fn pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> error::Result<Pin<Self>>
+     fn pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> error::Result<Self::PinnedSelf>
     where
         Error: From<E>,
     {
···
     }
 }

- impl<T> InPlaceInit<T> for Box<T> {
+ impl<T> InPlaceInit<T> for Arc<T> {
+     type PinnedSelf = Self;
+
     #[inline]
-     fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Pin<Self>, E>
+     fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Self::PinnedSelf, E>
     where
         E: From<AllocError>,
     {
-         let mut this = <Box<_> as BoxExt<_>>::new_uninit(flags)?;
-         let slot = this.as_mut_ptr();
-         // SAFETY: When init errors/panics, slot will get deallocated but not dropped,
-         // slot is valid and will not be moved, because we pin it later.
-         unsafe { init.__pinned_init(slot)? };
-         // SAFETY: All fields have been initialized.
-         Ok(unsafe { this.assume_init() }.into())
+         UniqueArc::try_pin_init(init, flags).map(|u| u.into())
     }

     #[inline]
···
     where
         E: From<AllocError>,
     {
-         let mut this = <Box<_> as BoxExt<_>>::new_uninit(flags)?;
-         let slot = this.as_mut_ptr();
-         // SAFETY: When init errors/panics, slot will get deallocated but not dropped,
-         // slot is valid.
-         unsafe { init.__init(slot)? };
-         // SAFETY: All fields have been initialized.
-         Ok(unsafe { this.assume_init() })
+         UniqueArc::try_init(init, flags).map(|u| u.into())
+     }
+ }
+
+ impl<T> InPlaceInit<T> for Box<T> {
+     type PinnedSelf = Pin<Self>;
+
+     #[inline]
+     fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Self::PinnedSelf, E>
+     where
+         E: From<AllocError>,
+     {
+         <Box<_> as BoxExt<_>>::new_uninit(flags)?.write_pin_init(init)
+     }
+
+     #[inline]
+     fn try_init<E>(init: impl Init<T, E>, flags: Flags) -> Result<Self, E>
+     where
+         E: From<AllocError>,
+     {
+         <Box<_> as BoxExt<_>>::new_uninit(flags)?.write_init(init)
     }
 }

 impl<T> InPlaceInit<T> for UniqueArc<T> {
+     type PinnedSelf = Pin<Self>;
+
     #[inline]
-     fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Pin<Self>, E>
+     fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Self::PinnedSelf, E>
     where
         E: From<AllocError>,
     {
-         let mut this = UniqueArc::new_uninit(flags)?;
-         let slot = this.as_mut_ptr();
-         // SAFETY: When init errors/panics, slot will get deallocated but not dropped,
-         // slot is valid and will not be moved, because we pin it later.
-         unsafe { init.__pinned_init(slot)? };
-         // SAFETY: All fields have been initialized.
-         Ok(unsafe { this.assume_init() }.into())
+         UniqueArc::new_uninit(flags)?.write_pin_init(init)
     }

     #[inline]
···
     where
         E: From<AllocError>,
     {
-         let mut this = UniqueArc::new_uninit(flags)?;
-         let slot = this.as_mut_ptr();
-         // SAFETY: When init errors/panics, slot will get deallocated but not dropped,
-         // slot is valid.
-         unsafe { init.__init(slot)? };
-         // SAFETY: All fields have been initialized.
-         Ok(unsafe { this.assume_init() })
+         UniqueArc::new_uninit(flags)?.write_init(init)
+     }
+ }
+
+ /// Smart pointer containing uninitialized memory and that can write a value.
+ pub trait InPlaceWrite<T> {
+     /// The type `Self` turns into when the contents are initialized.
+     type Initialized;
+
+     /// Use the given initializer to write a value into `self`.
+     ///
+     /// Does not drop the current value and considers it as uninitialized memory.
+     fn write_init<E>(self, init: impl Init<T, E>) -> Result<Self::Initialized, E>;
+
+     /// Use the given pin-initializer to write a value into `self`.
+     ///
+     /// Does not drop the current value and considers it as uninitialized memory.
+     fn write_pin_init<E>(self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E>;
+ }
+
+ impl<T> InPlaceWrite<T> for Box<MaybeUninit<T>> {
+     type Initialized = Box<T>;
+
+     fn write_init<E>(mut self, init: impl Init<T, E>) -> Result<Self::Initialized, E> {
+         let slot = self.as_mut_ptr();
+         // SAFETY: When init errors/panics, slot will get deallocated but not dropped,
+         // slot is valid.
+         unsafe { init.__init(slot)? };
+         // SAFETY: All fields have been initialized.
+         Ok(unsafe { self.assume_init() })
+     }
+
+     fn write_pin_init<E>(mut self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E> {
+         let slot = self.as_mut_ptr();
+         // SAFETY: When init errors/panics, slot will get deallocated but not dropped,
+         // slot is valid and will not be moved, because we pin it later.
+         unsafe { init.__pinned_init(slot)? };
+         // SAFETY: All fields have been initialized.
+         Ok(unsafe { self.assume_init() }.into())
+     }
+ }
+
+ impl<T> InPlaceWrite<T> for UniqueArc<MaybeUninit<T>> {
+     type Initialized = UniqueArc<T>;
+
+     fn write_init<E>(mut self, init: impl Init<T, E>) -> Result<Self::Initialized, E> {
+         let slot = self.as_mut_ptr();
+         // SAFETY: When init errors/panics, slot will get deallocated but not dropped,
+         // slot is valid.
+         unsafe { init.__init(slot)? };
+         // SAFETY: All fields have been initialized.
+         Ok(unsafe { self.assume_init() })
+     }
+
+     fn write_pin_init<E>(mut self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E> {
+         let slot = self.as_mut_ptr();
+         // SAFETY: When init errors/panics, slot will get deallocated but not dropped,
+         // slot is valid and will not be moved, because we pin it later.
+         unsafe { init.__pinned_init(slot)? };
+         // SAFETY: All fields have been initialized.
+         Ok(unsafe { self.assume_init() }.into())
     }
 }
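The shape of `InPlaceWrite` can be tried out in userspace; a hedged sketch (hypothetical `DemoInit` trait, infallible, unlike the kernel's fallible `Init<T, E>`) of `write_init` on a `Box<MaybeUninit<T>>`:

```rust
use core::mem::MaybeUninit;

/// Minimal stand-in for the kernel's `Init<T, E>`: something that can
/// initialize a `T` in place through a raw pointer. (Hypothetical demo trait.)
trait DemoInit<T> {
    /// # Safety
    ///
    /// `slot` must be valid for writes.
    unsafe fn init(self, slot: *mut T);
}

// Any one-shot closure over the slot pointer counts as an initializer.
impl<T, F: FnOnce(*mut T)> DemoInit<T> for F {
    unsafe fn init(self, slot: *mut T) {
        self(slot)
    }
}

/// Mirrors `InPlaceWrite::write_init` for `Box<MaybeUninit<T>>`: consume the
/// uninitialized allocation, run the initializer on the slot, then unwrap the
/// `MaybeUninit` layer without copying.
fn write_init<T>(mut this: Box<MaybeUninit<T>>, init: impl DemoInit<T>) -> Box<T> {
    let slot = this.as_mut_ptr();
    // SAFETY: `slot` points into the live allocation and is valid for writes.
    unsafe { init.init(slot) };
    // SAFETY: the initializer has fully initialized the slot.
    unsafe { Box::from_raw(Box::into_raw(this).cast::<T>()) }
}

fn main() {
    let uninit: Box<MaybeUninit<[u8; 4]>> = Box::new(MaybeUninit::uninit());
    let b = write_init(uninit, |slot: *mut [u8; 4]| unsafe { slot.write([1, 2, 3, 4]) });
    assert_eq!(*b, [1, 2, 3, 4]);
}
```

The design point this factoring captures is that allocation and initialization are separable: `try_init`/`try_pin_init` above become one-liners that allocate uninitialized memory and delegate the write step.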
+29
rust/kernel/init/__internal.rs
···
         Self(())
     }
 }
+
+ /// Initializer that always fails.
+ ///
+ /// Used by [`assert_pinned!`].
+ ///
+ /// [`assert_pinned!`]: crate::assert_pinned
+ pub struct AlwaysFail<T: ?Sized> {
+     _t: PhantomData<T>,
+ }
+
+ impl<T: ?Sized> AlwaysFail<T> {
+     /// Creates a new initializer that always fails.
+     pub fn new() -> Self {
+         Self { _t: PhantomData }
+     }
+ }
+
+ impl<T: ?Sized> Default for AlwaysFail<T> {
+     fn default() -> Self {
+         Self::new()
+     }
+ }
+
+ // SAFETY: `__pinned_init` always fails, which is always okay.
+ unsafe impl<T: ?Sized> PinInit<T, ()> for AlwaysFail<T> {
+     unsafe fn __pinned_init(self, _slot: *mut T) -> Result<(), ()> {
+         Err(())
+     }
+ }
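The trick `AlwaysFail` supports, type-checking code that never runs, works in plain Rust too; a hedged sketch (hypothetical `assert_field_type!` macro, using the same dead-closure-in-a-`const` shape as `assert_pinned!`):

```rust
// Assert at compile time that a named field has a given type, by type-checking
// a closure inside a `const` item. The closure is never called; if the field
// is missing or its type does not match, the program simply fails to compile.
macro_rules! assert_field_type {
    ($ty:ty, $field:ident, $field_ty:ty) => {
        const _: () = {
            let _ = |x: $ty| {
                let _: $field_ty = x.$field;
            };
        };
    };
}

struct MyStruct {
    some_field: u64,
}

assert_field_type!(MyStruct, some_field, u64);

fn main() {
    // Compiling at all is the "assertion"; nothing happens at runtime.
    let s = MyStruct { some_field: 7 };
    assert_eq!(s.some_field, 7);
}
```

`assert_pinned!` does the same thing, except that the property it checks, whether `data.$field` accepts a pin-initializer, is only expressible by handing `AlwaysFail` to the generated `__pin_data()` accessor.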
+2
rust/kernel/lib.rs
···
 pub mod ioctl;
 #[cfg(CONFIG_KUNIT)]
 pub mod kunit;
+ pub mod list;
 #[cfg(CONFIG_NET)]
 pub mod net;
 pub mod page;
 pub mod prelude;
 pub mod print;
 pub mod sizes;
+ pub mod rbtree;
 mod static_assert;
 #[doc(hidden)]
 pub mod std_vendor;
+686
rust/kernel/list.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + // Copyright (C) 2024 Google LLC. 4 + 5 + //! A linked list implementation. 6 + 7 + use crate::init::PinInit; 8 + use crate::sync::ArcBorrow; 9 + use crate::types::Opaque; 10 + use core::iter::{DoubleEndedIterator, FusedIterator}; 11 + use core::marker::PhantomData; 12 + use core::ptr; 13 + 14 + mod impl_list_item_mod; 15 + pub use self::impl_list_item_mod::{ 16 + impl_has_list_links, impl_has_list_links_self_ptr, impl_list_item, HasListLinks, HasSelfPtr, 17 + }; 18 + 19 + mod arc; 20 + pub use self::arc::{impl_list_arc_safe, AtomicTracker, ListArc, ListArcSafe, TryNewListArc}; 21 + 22 + mod arc_field; 23 + pub use self::arc_field::{define_list_arc_field_getter, ListArcField}; 24 + 25 + /// A linked list. 26 + /// 27 + /// All elements in this linked list will be [`ListArc`] references to the value. Since a value can 28 + /// only have one `ListArc` (for each pair of prev/next pointers), this ensures that the same 29 + /// prev/next pointers are not used for several linked lists. 30 + /// 31 + /// # Invariants 32 + /// 33 + /// * If the list is empty, then `first` is null. Otherwise, `first` points at the `ListLinks` 34 + /// field of the first element in the list. 35 + /// * All prev/next pointers in `ListLinks` fields of items in the list are valid and form a cycle. 36 + /// * For every item in the list, the list owns the associated [`ListArc`] reference and has 37 + /// exclusive access to the `ListLinks` field. 38 + pub struct List<T: ?Sized + ListItem<ID>, const ID: u64 = 0> { 39 + first: *mut ListLinksFields, 40 + _ty: PhantomData<ListArc<T, ID>>, 41 + } 42 + 43 + // SAFETY: This is a container of `ListArc<T, ID>`, and access to the container allows the same 44 + // type of access to the `ListArc<T, ID>` elements. 
45 + unsafe impl<T, const ID: u64> Send for List<T, ID> 46 + where 47 + ListArc<T, ID>: Send, 48 + T: ?Sized + ListItem<ID>, 49 + { 50 + } 51 + // SAFETY: This is a container of `ListArc<T, ID>`, and access to the container allows the same 52 + // type of access to the `ListArc<T, ID>` elements. 53 + unsafe impl<T, const ID: u64> Sync for List<T, ID> 54 + where 55 + ListArc<T, ID>: Sync, 56 + T: ?Sized + ListItem<ID>, 57 + { 58 + } 59 + 60 + /// Implemented by types where a [`ListArc<Self>`] can be inserted into a [`List`]. 61 + /// 62 + /// # Safety 63 + /// 64 + /// Implementers must ensure that they provide the guarantees documented on methods provided by 65 + /// this trait. 66 + /// 67 + /// [`ListArc<Self>`]: ListArc 68 + pub unsafe trait ListItem<const ID: u64 = 0>: ListArcSafe<ID> { 69 + /// Views the [`ListLinks`] for this value. 70 + /// 71 + /// # Guarantees 72 + /// 73 + /// If there is a previous call to `prepare_to_insert` and there is no call to `post_remove` 74 + /// since the most recent such call, then this returns the same pointer as the one returned by 75 + /// the most recent call to `prepare_to_insert`. 76 + /// 77 + /// Otherwise, the returned pointer points at a read-only [`ListLinks`] with two null pointers. 78 + /// 79 + /// # Safety 80 + /// 81 + /// The provided pointer must point at a valid value. (It need not be in an `Arc`.) 82 + unsafe fn view_links(me: *const Self) -> *mut ListLinks<ID>; 83 + 84 + /// View the full value given its [`ListLinks`] field. 85 + /// 86 + /// Can only be used when the value is in a list. 87 + /// 88 + /// # Guarantees 89 + /// 90 + /// * Returns the same pointer as the one passed to the most recent call to `prepare_to_insert`. 91 + /// * The returned pointer is valid until the next call to `post_remove`. 
92 + /// 93 + /// # Safety 94 + /// 95 + /// * The provided pointer must originate from the most recent call to `prepare_to_insert`, or 96 + /// from a call to `view_links` that happened after the most recent call to 97 + /// `prepare_to_insert`. 98 + /// * Since the most recent call to `prepare_to_insert`, the `post_remove` method must not have 99 + /// been called. 100 + unsafe fn view_value(me: *mut ListLinks<ID>) -> *const Self; 101 + 102 + /// This is called when an item is inserted into a [`List`]. 103 + /// 104 + /// # Guarantees 105 + /// 106 + /// The caller is granted exclusive access to the returned [`ListLinks`] until `post_remove` is 107 + /// called. 108 + /// 109 + /// # Safety 110 + /// 111 + /// * The provided pointer must point at a valid value in an [`Arc`]. 112 + /// * Calls to `prepare_to_insert` and `post_remove` on the same value must alternate. 113 + /// * The caller must own the [`ListArc`] for this value. 114 + /// * The caller must not give up ownership of the [`ListArc`] unless `post_remove` has been 115 + /// called after this call to `prepare_to_insert`. 116 + /// 117 + /// [`Arc`]: crate::sync::Arc 118 + unsafe fn prepare_to_insert(me: *const Self) -> *mut ListLinks<ID>; 119 + 120 + /// This undoes a previous call to `prepare_to_insert`. 121 + /// 122 + /// # Guarantees 123 + /// 124 + /// The returned pointer is the pointer that was originally passed to `prepare_to_insert`. 125 + /// 126 + /// # Safety 127 + /// 128 + /// The provided pointer must be the pointer returned by the most recent call to 129 + /// `prepare_to_insert`. 130 + unsafe fn post_remove(me: *mut ListLinks<ID>) -> *const Self; 131 + } 132 + 133 + #[repr(C)] 134 + #[derive(Copy, Clone)] 135 + struct ListLinksFields { 136 + next: *mut ListLinksFields, 137 + prev: *mut ListLinksFields, 138 + } 139 + 140 + /// The prev/next pointers for an item in a linked list. 
141 + /// 142 + /// # Invariants 143 + /// 144 + /// The fields are null if and only if this item is not in a list. 145 + #[repr(transparent)] 146 + pub struct ListLinks<const ID: u64 = 0> { 147 + // This type is `!Unpin` for aliasing reasons as the pointers are part of an intrusive linked 148 + // list. 149 + inner: Opaque<ListLinksFields>, 150 + } 151 + 152 + // SAFETY: The only way to access/modify the pointers inside of `ListLinks<ID>` is via holding the 153 + // associated `ListArc<T, ID>`. Since that type correctly implements `Send`, it is impossible to 154 + // move this an instance of this type to a different thread if the pointees are `!Send`. 155 + unsafe impl<const ID: u64> Send for ListLinks<ID> {} 156 + // SAFETY: The type is opaque so immutable references to a ListLinks are useless. Therefore, it's 157 + // okay to have immutable access to a ListLinks from several threads at once. 158 + unsafe impl<const ID: u64> Sync for ListLinks<ID> {} 159 + 160 + impl<const ID: u64> ListLinks<ID> { 161 + /// Creates a new initializer for this type. 162 + pub fn new() -> impl PinInit<Self> { 163 + // INVARIANT: Pin-init initializers can't be used on an existing `Arc`, so this value will 164 + // not be constructed in an `Arc` that already has a `ListArc`. 165 + ListLinks { 166 + inner: Opaque::new(ListLinksFields { 167 + prev: ptr::null_mut(), 168 + next: ptr::null_mut(), 169 + }), 170 + } 171 + } 172 + 173 + /// # Safety 174 + /// 175 + /// `me` must be dereferenceable. 176 + #[inline] 177 + unsafe fn fields(me: *mut Self) -> *mut ListLinksFields { 178 + // SAFETY: The caller promises that the pointer is valid. 179 + unsafe { Opaque::raw_get(ptr::addr_of!((*me).inner)) } 180 + } 181 + 182 + /// # Safety 183 + /// 184 + /// `me` must be dereferenceable. 185 + #[inline] 186 + unsafe fn from_fields(me: *mut ListLinksFields) -> *mut Self { 187 + me.cast() 188 + } 189 + } 190 + 191 + /// Similar to [`ListLinks`], but also contains a pointer to the full value. 
192 + /// 193 + /// This type can be used instead of [`ListLinks`] to support lists with trait objects. 194 + #[repr(C)] 195 + pub struct ListLinksSelfPtr<T: ?Sized, const ID: u64 = 0> { 196 + /// The `ListLinks` field inside this value. 197 + /// 198 + /// This is public so that it can be used with `impl_has_list_links!`. 199 + pub inner: ListLinks<ID>, 200 + // UnsafeCell is not enough here because we use `Opaque::uninit` as a dummy value, and 201 + // `ptr::null()` doesn't work for `T: ?Sized`. 202 + self_ptr: Opaque<*const T>, 203 + } 204 + 205 + // SAFETY: The fields of a ListLinksSelfPtr can be moved across thread boundaries. 206 + unsafe impl<T: ?Sized + Send, const ID: u64> Send for ListLinksSelfPtr<T, ID> {} 207 + // SAFETY: The type is opaque so immutable references to a ListLinksSelfPtr are useless. Therefore, 208 + // it's okay to have immutable access to a ListLinks from several threads at once. 209 + // 210 + // Note that `inner` being a public field does not prevent this type from being opaque, since 211 + // `inner` is a opaque type. 212 + unsafe impl<T: ?Sized + Sync, const ID: u64> Sync for ListLinksSelfPtr<T, ID> {} 213 + 214 + impl<T: ?Sized, const ID: u64> ListLinksSelfPtr<T, ID> { 215 + /// The offset from the [`ListLinks`] to the self pointer field. 216 + pub const LIST_LINKS_SELF_PTR_OFFSET: usize = core::mem::offset_of!(Self, self_ptr); 217 + 218 + /// Creates a new initializer for this type. 219 + pub fn new() -> impl PinInit<Self> { 220 + // INVARIANT: Pin-init initializers can't be used on an existing `Arc`, so this value will 221 + // not be constructed in an `Arc` that already has a `ListArc`. 222 + Self { 223 + inner: ListLinks { 224 + inner: Opaque::new(ListLinksFields { 225 + prev: ptr::null_mut(), 226 + next: ptr::null_mut(), 227 + }), 228 + }, 229 + self_ptr: Opaque::uninit(), 230 + } 231 + } 232 + } 233 + 234 + impl<T: ?Sized + ListItem<ID>, const ID: u64> List<T, ID> { 235 + /// Creates a new empty list. 
    pub const fn new() -> Self {
        Self {
            first: ptr::null_mut(),
            _ty: PhantomData,
        }
    }

    /// Returns whether this list is empty.
    pub fn is_empty(&self) -> bool {
        self.first.is_null()
    }

    /// Add the provided item to the back of the list.
    pub fn push_back(&mut self, item: ListArc<T, ID>) {
        let raw_item = ListArc::into_raw(item);
        // SAFETY:
        // * We just got `raw_item` from a `ListArc`, so it's in an `Arc`.
        // * Since we have ownership of the `ListArc`, `post_remove` must have been called after
        //   the most recent call to `prepare_to_insert`, if any.
        // * We own the `ListArc`.
        // * Removing items from this list is always done using `remove_internal_inner`, which
        //   calls `post_remove` before giving up ownership.
        let list_links = unsafe { T::prepare_to_insert(raw_item) };
        // SAFETY: We have not yet called `post_remove`, so `list_links` is still valid.
        let item = unsafe { ListLinks::fields(list_links) };

        if self.first.is_null() {
            self.first = item;
            // SAFETY: The caller just gave us ownership of these fields.
            // INVARIANT: A linked list with one item should be cyclic.
            unsafe {
                (*item).next = item;
                (*item).prev = item;
            }
        } else {
            let next = self.first;
            // SAFETY: By the type invariant, this pointer is valid or null. We just checked that
            // it's not null, so it must be valid.
            let prev = unsafe { (*next).prev };
            // SAFETY: Pointers in a linked list are never dangling, and the caller just gave us
            // ownership of the fields on `item`.
            // INVARIANT: This correctly inserts `item` between `prev` and `next`.
            unsafe {
                (*item).next = next;
                (*item).prev = prev;
                (*prev).next = item;
                (*next).prev = item;
            }
        }
    }

    /// Add the provided item to the front of the list.
    pub fn push_front(&mut self, item: ListArc<T, ID>) {
        let raw_item = ListArc::into_raw(item);
        // SAFETY:
        // * We just got `raw_item` from a `ListArc`, so it's in an `Arc`.
        // * If this requirement is violated, then the previous caller of `prepare_to_insert`
        //   violated the safety requirement that they can't give up ownership of the `ListArc`
        //   until they call `post_remove`.
        // * We own the `ListArc`.
        // * Removing items from this list is always done using `remove_internal_inner`, which
        //   calls `post_remove` before giving up ownership.
        let list_links = unsafe { T::prepare_to_insert(raw_item) };
        // SAFETY: We have not yet called `post_remove`, so `list_links` is still valid.
        let item = unsafe { ListLinks::fields(list_links) };

        if self.first.is_null() {
            // SAFETY: The caller just gave us ownership of these fields.
            // INVARIANT: A linked list with one item should be cyclic.
            unsafe {
                (*item).next = item;
                (*item).prev = item;
            }
        } else {
            let next = self.first;
            // SAFETY: We just checked that `next` is non-null.
            let prev = unsafe { (*next).prev };
            // SAFETY: Pointers in a linked list are never dangling, and the caller just gave us
            // ownership of the fields on `item`.
            // INVARIANT: This correctly inserts `item` between `prev` and `next`.
            unsafe {
                (*item).next = next;
                (*item).prev = prev;
                (*prev).next = item;
                (*next).prev = item;
            }
        }
        self.first = item;
    }

    /// Removes the last item from this list.
    pub fn pop_back(&mut self) -> Option<ListArc<T, ID>> {
        if self.first.is_null() {
            return None;
        }

        // SAFETY: We just checked that the list is not empty.
        let last = unsafe { (*self.first).prev };
        // SAFETY: The last item of this list is in this list.
        Some(unsafe { self.remove_internal(last) })
    }

    /// Removes the first item from this list.
    pub fn pop_front(&mut self) -> Option<ListArc<T, ID>> {
        if self.first.is_null() {
            return None;
        }

        // SAFETY: The first item of this list is in this list.
        Some(unsafe { self.remove_internal(self.first) })
    }

    /// Removes the provided item from this list and returns it.
    ///
    /// This returns `None` if the item is not in the list. (Note that by the safety requirements,
    /// this means that the item is not in any list.)
    ///
    /// # Safety
    ///
    /// `item` must not be in a different linked list (with the same id).
    pub unsafe fn remove(&mut self, item: &T) -> Option<ListArc<T, ID>> {
        let mut item = unsafe { ListLinks::fields(T::view_links(item)) };
        // SAFETY: The user provided a reference, and references are never dangling.
        //
        // As for why this is not a data race, there are two cases:
        //
        // * If `item` is not in any list, then these fields are read-only and null.
        // * If `item` is in this list, then we have exclusive access to these fields since we
        //   have a mutable reference to the list.
        //
        // In either case, there's no race.
        let ListLinksFields { next, prev } = unsafe { *item };

        debug_assert_eq!(next.is_null(), prev.is_null());
        if !next.is_null() {
            // This is really a no-op, but this ensures that `item` is a raw pointer that was
            // obtained without going through a pointer->reference->pointer conversion roundtrip.
            // This ensures that the list is valid under the more restrictive strict provenance
            // ruleset.
            //
            // SAFETY: We just checked that `next` is not null, and it's not dangling by the
            // list invariants.
            unsafe {
                debug_assert_eq!(item, (*next).prev);
                item = (*next).prev;
            }

            // SAFETY: We just checked that `item` is in a list, so the caller guarantees that it
            // is in this list. The pointers are in the right order.
            Some(unsafe { self.remove_internal_inner(item, next, prev) })
        } else {
            None
        }
    }

    /// Removes the provided item from the list.
    ///
    /// # Safety
    ///
    /// `item` must point at an item in this list.
    unsafe fn remove_internal(&mut self, item: *mut ListLinksFields) -> ListArc<T, ID> {
        // SAFETY: The caller promises that this pointer is not dangling, and there's no data race
        // since we have a mutable reference to the list containing `item`.
        let ListLinksFields { next, prev } = unsafe { *item };
        // SAFETY: The pointers are ok and in the right order.
        unsafe { self.remove_internal_inner(item, next, prev) }
    }

    /// Removes the provided item from the list.
    ///
    /// # Safety
    ///
    /// The `item` pointer must point at an item in this list, and we must have `(*item).next ==
    /// next` and `(*item).prev == prev`.
    unsafe fn remove_internal_inner(
        &mut self,
        item: *mut ListLinksFields,
        next: *mut ListLinksFields,
        prev: *mut ListLinksFields,
    ) -> ListArc<T, ID> {
        // SAFETY: We have exclusive access to the pointers of items in the list, and the prev/next
        // pointers are always valid for items in a list.
        //
        // INVARIANT: There are three cases:
        // * If the list has at least three items, then after removing the item, `prev` and `next`
        //   will be next to each other.
        // * If the list has two items, then the remaining item will point at itself.
        // * If the list has one item, then `next == prev == item`, so these writes have no
        //   effect. The list remains unchanged and `item` is still in the list for now.
        unsafe {
            (*next).prev = prev;
            (*prev).next = next;
        }
        // SAFETY: We have exclusive access to items in the list.
        // INVARIANT: `item` is being removed, so the pointers should be null.
        unsafe {
            (*item).prev = ptr::null_mut();
            (*item).next = ptr::null_mut();
        }
        // INVARIANT: There are three cases:
        // * If `item` was not the first item, then `self.first` should remain unchanged.
        // * If `item` was the first item and there is another item, then we just updated
        //   `prev->next` to `next`, which is the new first item, and setting `item->next` to null
        //   did not modify `prev->next`.
        // * If `item` was the only item in the list, then `prev == item`, and we just set
        //   `item->next` to null, so this correctly sets `first` to null now that the list is
        //   empty.
        if self.first == item {
            // SAFETY: The `prev` pointer is the value that `item->prev` had when it was in this
            // list, so it must be valid. There is no race since `prev` is still in the list and we
            // still have exclusive access to the list.
            self.first = unsafe { (*prev).next };
        }

        // SAFETY: `item` used to be in the list, so it is dereferenceable by the type invariants
        // of `List`.
        let list_links = unsafe { ListLinks::from_fields(item) };
        // SAFETY: Any pointer in the list originates from a `prepare_to_insert` call.
        let raw_item = unsafe { T::post_remove(list_links) };
        // SAFETY: The above call to `post_remove` guarantees that we can recreate the `ListArc`.
        unsafe { ListArc::from_raw(raw_item) }
    }

    /// Moves all items from `other` into `self`.
    ///
    /// The items of `other` are added to the back of `self`, so the last item of `other` becomes
    /// the last item of `self`.
    pub fn push_all_back(&mut self, other: &mut List<T, ID>) {
        // First, we insert the elements into `self`. At the end, we make `other` empty.
        if self.is_empty() {
            // INVARIANT: All of the elements in `other` become elements of `self`.
            self.first = other.first;
        } else if !other.is_empty() {
            let other_first = other.first;
            // SAFETY: The other list is not empty, so this pointer is valid.
            let other_last = unsafe { (*other_first).prev };
            let self_first = self.first;
            // SAFETY: The self list is not empty, so this pointer is valid.
            let self_last = unsafe { (*self_first).prev };

            // SAFETY: We have exclusive access to both lists, so we can update the pointers.
            // INVARIANT: This correctly sets the pointers to merge both lists. We do not need to
            // update `self.first` because the first element of `self` does not change.
            unsafe {
                (*self_first).prev = other_last;
                (*other_last).next = self_first;
                (*self_last).next = other_first;
                (*other_first).prev = self_last;
            }
        }

        // INVARIANT: The other list is now empty, so update its pointer.
        other.first = ptr::null_mut();
    }

    /// Returns a cursor to the first element of the list.
    ///
    /// If the list is empty, this returns `None`.
    pub fn cursor_front(&mut self) -> Option<Cursor<'_, T, ID>> {
        if self.first.is_null() {
            None
        } else {
            Some(Cursor {
                current: self.first,
                list: self,
            })
        }
    }

    /// Creates an iterator over the list.
    pub fn iter(&self) -> Iter<'_, T, ID> {
        // INVARIANT: If the list is empty, both pointers are null. Otherwise, both pointers point
        // at the first element of the same list.
        Iter {
            current: self.first,
            stop: self.first,
            _ty: PhantomData,
        }
    }
}

impl<T: ?Sized + ListItem<ID>, const ID: u64> Default for List<T, ID> {
    fn default() -> Self {
        List::new()
    }
}

impl<T: ?Sized + ListItem<ID>, const ID: u64> Drop for List<T, ID> {
    fn drop(&mut self) {
        while let Some(item) = self.pop_front() {
            drop(item);
        }
    }
}

/// An iterator over a [`List`].
///
/// # Invariants
///
/// * There must be a [`List`] that is immutably borrowed for the duration of `'a`.
/// * The `current` pointer is null or points at a value in that [`List`].
/// * The `stop` pointer is equal to the `first` field of that [`List`].
#[derive(Clone)]
pub struct Iter<'a, T: ?Sized + ListItem<ID>, const ID: u64 = 0> {
    current: *mut ListLinksFields,
    stop: *mut ListLinksFields,
    _ty: PhantomData<&'a ListArc<T, ID>>,
}

impl<'a, T: ?Sized + ListItem<ID>, const ID: u64> Iterator for Iter<'a, T, ID> {
    type Item = ArcBorrow<'a, T>;

    fn next(&mut self) -> Option<ArcBorrow<'a, T>> {
        if self.current.is_null() {
            return None;
        }

        let current = self.current;

        // SAFETY: We just checked that `current` is not null, so it is in a list, and hence not
        // dangling. There's no race because the iterator holds an immutable borrow to the list.
        let next = unsafe { (*current).next };
        // INVARIANT: If `current` was the last element of the list, then this updates it to null.
        // Otherwise, we update it to the next element.
        self.current = if next != self.stop {
            next
        } else {
            ptr::null_mut()
        };

        // SAFETY: The `current` pointer points at a value in the list.
        let item = unsafe { T::view_value(ListLinks::from_fields(current)) };
        // SAFETY:
        // * All values in a list are stored in an `Arc`.
        // * The value cannot be removed from the list for the duration of the lifetime annotated
        //   on the returned `ArcBorrow`, because removing it from the list would require mutable
        //   access to the list. However, the `ArcBorrow` is annotated with the iterator's
        //   lifetime, and the list is immutably borrowed for that lifetime.
        // * Values in a list never have a `UniqueArc` reference.
        Some(unsafe { ArcBorrow::from_raw(item) })
    }
}

/// A cursor into a [`List`].
///
/// # Invariants
///
/// The `current` pointer points at a value in `list`.
pub struct Cursor<'a, T: ?Sized + ListItem<ID>, const ID: u64 = 0> {
    current: *mut ListLinksFields,
    list: &'a mut List<T, ID>,
}

impl<'a, T: ?Sized + ListItem<ID>, const ID: u64> Cursor<'a, T, ID> {
    /// Access the current element of this cursor.
    pub fn current(&self) -> ArcBorrow<'_, T> {
        // SAFETY: The `current` pointer points at a value in the list.
        let me = unsafe { T::view_value(ListLinks::from_fields(self.current)) };
        // SAFETY:
        // * All values in a list are stored in an `Arc`.
        // * The value cannot be removed from the list for the duration of the lifetime annotated
        //   on the returned `ArcBorrow`, because removing it from the list would require mutable
        //   access to the cursor or the list. However, the `ArcBorrow` holds an immutable borrow
        //   on the cursor, which in turn holds a mutable borrow on the list, so any such
        //   mutable access requires first releasing the immutable borrow on the cursor.
        // * Values in a list never have a `UniqueArc` reference, because the list has a `ListArc`
        //   reference, and `UniqueArc` references must be unique.
        unsafe { ArcBorrow::from_raw(me) }
    }

    /// Move the cursor to the next element.
    pub fn next(self) -> Option<Cursor<'a, T, ID>> {
        // SAFETY: The `current` field is always in a list.
        let next = unsafe { (*self.current).next };

        if next == self.list.first {
            None
        } else {
            // INVARIANT: Since `self.current` is in the `list`, its `next` pointer is also in the
            // `list`.
            Some(Cursor {
                current: next,
                list: self.list,
            })
        }
    }

    /// Move the cursor to the previous element.
    pub fn prev(self) -> Option<Cursor<'a, T, ID>> {
        // SAFETY: The `current` field is always in a list.
        let prev = unsafe { (*self.current).prev };

        if self.current == self.list.first {
            None
        } else {
            // INVARIANT: Since `self.current` is in the `list`, its `prev` pointer is also in the
            // `list`.
            Some(Cursor {
                current: prev,
                list: self.list,
            })
        }
    }

    /// Remove the current element from the list.
    pub fn remove(self) -> ListArc<T, ID> {
        // SAFETY: The `current` pointer always points at a member of the list.
        unsafe { self.list.remove_internal(self.current) }
    }
}

impl<'a, T: ?Sized + ListItem<ID>, const ID: u64> FusedIterator for Iter<'a, T, ID> {}

impl<'a, T: ?Sized + ListItem<ID>, const ID: u64> IntoIterator for &'a List<T, ID> {
    type IntoIter = Iter<'a, T, ID>;
    type Item = ArcBorrow<'a, T>;

    fn into_iter(self) -> Iter<'a, T, ID> {
        self.iter()
    }
}

/// An owning iterator into a [`List`].
pub struct IntoIter<T: ?Sized + ListItem<ID>, const ID: u64 = 0> {
    list: List<T, ID>,
}

impl<T: ?Sized + ListItem<ID>, const ID: u64> Iterator for IntoIter<T, ID> {
    type Item = ListArc<T, ID>;

    fn next(&mut self) -> Option<ListArc<T, ID>> {
        self.list.pop_front()
    }
}

impl<T: ?Sized + ListItem<ID>, const ID: u64> FusedIterator for IntoIter<T, ID> {}

impl<T: ?Sized + ListItem<ID>, const ID: u64> DoubleEndedIterator for IntoIter<T, ID> {
    fn next_back(&mut self) -> Option<ListArc<T, ID>> {
        self.list.pop_back()
    }
}

impl<T: ?Sized + ListItem<ID>, const ID: u64> IntoIterator for List<T, ID> {
    type IntoIter = IntoIter<T, ID>;
    type Item = ListArc<T, ID>;

    fn into_iter(self) -> IntoIter<T, ID> {
        IntoIter { list: self }
    }
}
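The cyclic prev/next bookkeeping that `push_back` and `pop_front` perform above can be sketched outside the kernel with a plain heap-allocated doubly-linked list over raw pointers. This is an illustrative userland sketch only: it stores `i32` values in `Box`es instead of `ListArc`-wrapped items, and all names are hypothetical, but the pointer updates mirror the kernel code (a one-element list points at itself, and `first` is null exactly when the list is empty).

```rust
use std::ptr;

struct Node {
    value: i32,
    prev: *mut Node,
    next: *mut Node,
}

struct SketchList {
    first: *mut Node,
}

impl SketchList {
    fn new() -> Self {
        SketchList { first: ptr::null_mut() }
    }

    fn push_back(&mut self, value: i32) {
        let item = Box::into_raw(Box::new(Node {
            value,
            prev: ptr::null_mut(),
            next: ptr::null_mut(),
        }));
        unsafe {
            if self.first.is_null() {
                // A one-element list is cyclic: the node points at itself.
                (*item).next = item;
                (*item).prev = item;
                self.first = item;
            } else {
                // Splice `item` in between the current last and first nodes.
                let next = self.first;
                let prev = (*next).prev;
                (*item).next = next;
                (*item).prev = prev;
                (*prev).next = item;
                (*next).prev = item;
            }
        }
    }

    fn pop_front(&mut self) -> Option<i32> {
        if self.first.is_null() {
            return None;
        }
        unsafe {
            let item = self.first;
            let next = (*item).next;
            let prev = (*item).prev;
            if next == item {
                // Only element: the list becomes empty.
                self.first = ptr::null_mut();
            } else {
                // Unlink `item` and advance `first`.
                (*prev).next = next;
                (*next).prev = prev;
                self.first = next;
            }
            Some(Box::from_raw(item).value)
        }
    }
}

fn main() {
    let mut list = SketchList::new();
    list.push_back(1);
    list.push_back(2);
    list.push_back(3);
    assert_eq!(list.pop_front(), Some(1));
    assert_eq!(list.pop_front(), Some(2));
    assert_eq!(list.pop_front(), Some(3));
    assert_eq!(list.pop_front(), None);
}
```

Note how the non-empty branch of `push_back` needs no special case for a one-element list: there, `prev == next == first`, so the four writes degenerate to the correct two-element cycle, which is the same trick the kernel version relies on.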
rust/kernel/list/arc.rs
// SPDX-License-Identifier: GPL-2.0

// Copyright (C) 2024 Google LLC.

//! A wrapper around `Arc` for linked lists.

use crate::alloc::{AllocError, Flags};
use crate::prelude::*;
use crate::sync::{Arc, ArcBorrow, UniqueArc};
use core::marker::{PhantomPinned, Unsize};
use core::ops::Deref;
use core::pin::Pin;
use core::sync::atomic::{AtomicBool, Ordering};

/// Declares that this type has some way to ensure that there is exactly one `ListArc` instance for
/// this id.
///
/// Types that implement this trait should include some kind of logic for keeping track of whether
/// a [`ListArc`] exists or not. We refer to this logic as "the tracking inside `T`".
///
/// We allow the case where the tracking inside `T` thinks that a [`ListArc`] exists, but actually,
/// there isn't a [`ListArc`]. However, we do not allow the opposite situation where a [`ListArc`]
/// exists, but the tracking thinks it doesn't. This is because the former can at most result in us
/// failing to create a [`ListArc`] when the operation could succeed, whereas the latter can result
/// in the creation of two [`ListArc`] references. Only the latter situation can lead to memory
/// safety issues.
///
/// A consequence of the above is that you may implement the tracking inside `T` by not actually
/// keeping track of anything. To do this, you always claim that a [`ListArc`] exists, even if
/// there isn't one. This implementation is allowed by the above rule, but it means that
/// [`ListArc`] references can only be created if you have ownership of *all* references to the
/// refcounted object, as you otherwise have no way of knowing whether a [`ListArc`] exists.
pub trait ListArcSafe<const ID: u64 = 0> {
    /// Informs the tracking inside this type that it now has a [`ListArc`] reference.
    ///
    /// This method may be called even if the tracking inside this type thinks that a `ListArc`
    /// reference exists. (But only if that's not actually the case.)
    ///
    /// # Safety
    ///
    /// Must not be called if a [`ListArc`] already exists for this value.
    unsafe fn on_create_list_arc_from_unique(self: Pin<&mut Self>);

    /// Informs the tracking inside this type that there is no [`ListArc`] reference anymore.
    ///
    /// # Safety
    ///
    /// Must only be called if there is no [`ListArc`] reference, but the tracking thinks there is.
    unsafe fn on_drop_list_arc(&self);
}

/// Declares that this type is able to safely attempt to create `ListArc`s at any time.
///
/// # Safety
///
/// The guarantees of `try_new_list_arc` must be upheld.
pub unsafe trait TryNewListArc<const ID: u64 = 0>: ListArcSafe<ID> {
    /// Attempts to convert an `Arc<Self>` into a `ListArc<Self>`. Returns `true` if the
    /// conversion was successful.
    ///
    /// This method should not be called directly. Use [`ListArc::try_from_arc`] instead.
    ///
    /// # Guarantees
    ///
    /// If this call returns `true`, then there is no [`ListArc`] pointing to this value.
    /// Additionally, this call will have transitioned the tracking inside `Self` from not thinking
    /// that a [`ListArc`] exists, to thinking that a [`ListArc`] exists.
    fn try_new_list_arc(&self) -> bool;
}

/// Declares that this type supports [`ListArc`].
///
/// This macro supports a few different strategies for implementing the tracking inside the type:
///
/// * The `untracked` strategy does not actually keep track of whether a [`ListArc`] exists. When
///   using this strategy, the only way to create a [`ListArc`] is using a [`UniqueArc`].
/// * The `tracked_by` strategy defers the tracking to a field of the struct. The user must specify
///   which field to defer the tracking to. The field must implement [`ListArcSafe`]. If the field
///   implements [`TryNewListArc`], then the type will also implement [`TryNewListArc`].
///
/// The `tracked_by` strategy is usually used by deferring to a field of type
/// [`AtomicTracker`]. However, it is also possible to defer the tracking to another struct
/// also using this macro.
#[macro_export]
macro_rules! impl_list_arc_safe {
    (impl$({$($generics:tt)*})? ListArcSafe<$num:tt> for $t:ty { untracked; } $($rest:tt)*) => {
        impl$(<$($generics)*>)? $crate::list::ListArcSafe<$num> for $t {
            unsafe fn on_create_list_arc_from_unique(self: ::core::pin::Pin<&mut Self>) {}
            unsafe fn on_drop_list_arc(&self) {}
        }
        $crate::list::impl_list_arc_safe! { $($rest)* }
    };

    (impl$({$($generics:tt)*})? ListArcSafe<$num:tt> for $t:ty {
        tracked_by $field:ident : $fty:ty;
    } $($rest:tt)*) => {
        impl$(<$($generics)*>)? $crate::list::ListArcSafe<$num> for $t {
            unsafe fn on_create_list_arc_from_unique(self: ::core::pin::Pin<&mut Self>) {
                $crate::assert_pinned!($t, $field, $fty, inline);

                // SAFETY: This field is structurally pinned as per the above assertion.
                let field = unsafe {
                    ::core::pin::Pin::map_unchecked_mut(self, |me| &mut me.$field)
                };
                // SAFETY: The caller promises that there is no `ListArc`.
                unsafe {
                    <$fty as $crate::list::ListArcSafe<$num>>::on_create_list_arc_from_unique(field)
                };
            }
            unsafe fn on_drop_list_arc(&self) {
                // SAFETY: The caller promises that there is no `ListArc` reference, and also
                // promises that the tracking thinks there is a `ListArc` reference.
                unsafe { <$fty as $crate::list::ListArcSafe<$num>>::on_drop_list_arc(&self.$field) };
            }
        }
        unsafe impl$(<$($generics)*>)? $crate::list::TryNewListArc<$num> for $t
        where
            $fty: TryNewListArc<$num>,
        {
            fn try_new_list_arc(&self) -> bool {
                <$fty as $crate::list::TryNewListArc<$num>>::try_new_list_arc(&self.$field)
            }
        }
        $crate::list::impl_list_arc_safe! { $($rest)* }
    };

    () => {};
}
pub use impl_list_arc_safe;

/// A wrapper around [`Arc`] that's guaranteed unique for the given id.
///
/// The `ListArc` type can be thought of as a special reference to a refcounted object that owns the
/// permission to manipulate the `next`/`prev` pointers stored in the refcounted object. By ensuring
/// that each object has only one `ListArc` reference, the owner of that reference is assured
/// exclusive access to the `next`/`prev` pointers. When a `ListArc` is inserted into a [`List`],
/// the [`List`] takes ownership of the `ListArc` reference.
///
/// There are various strategies for ensuring that a value has only one `ListArc` reference. The
/// simplest is to convert a [`UniqueArc`] into a `ListArc`. However, the refcounted object could
/// also keep track of whether a `ListArc` exists using a boolean, which could allow for the
/// creation of new `ListArc` references from an [`Arc`] reference. Whatever strategy is used, the
/// relevant tracking is referred to as "the tracking inside `T`", and the [`ListArcSafe`] trait
/// (and its subtraits) are used to update the tracking when a `ListArc` is created or destroyed.
///
/// Note that we allow the case where the tracking inside `T` thinks that a `ListArc` exists, but
/// actually, there isn't a `ListArc`. However, we do not allow the opposite situation where a
/// `ListArc` exists, but the tracking thinks it doesn't. This is because the former can at most
/// result in us failing to create a `ListArc` when the operation could succeed, whereas the latter
/// can result in the creation of two `ListArc` references.
///
/// While this `ListArc` is unique for the given id, there still might exist normal `Arc`
/// references to the object.
///
/// # Invariants
///
/// * Each reference counted object has at most one `ListArc` for each value of `ID`.
/// * The tracking inside `T` is aware that a `ListArc` reference exists.
///
/// [`List`]: crate::list::List
#[repr(transparent)]
pub struct ListArc<T, const ID: u64 = 0>
where
    T: ListArcSafe<ID> + ?Sized,
{
    arc: Arc<T>,
}

impl<T: ListArcSafe<ID>, const ID: u64> ListArc<T, ID> {
    /// Constructs a new reference counted instance of `T`.
    #[inline]
    pub fn new(contents: T, flags: Flags) -> Result<Self, AllocError> {
        Ok(Self::from(UniqueArc::new(contents, flags)?))
    }

    /// Use the given initializer to in-place initialize a `T`.
    ///
    /// If `T: !Unpin` it will not be able to move afterwards.
    // We don't implement `InPlaceInit` because `ListArc` is implicitly pinned. This is similar to
    // what we do for `Arc`.
    #[inline]
    pub fn pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Self, E>
    where
        E: From<AllocError>,
    {
        Ok(Self::from(UniqueArc::try_pin_init(init, flags)?))
    }

    /// Use the given initializer to in-place initialize a `T`.
    ///
    /// This is equivalent to [`ListArc<T>::pin_init`], since a [`ListArc`] is always pinned.
    #[inline]
    pub fn init<E>(init: impl Init<T, E>, flags: Flags) -> Result<Self, E>
    where
        E: From<AllocError>,
    {
        Ok(Self::from(UniqueArc::try_init(init, flags)?))
    }
}

impl<T, const ID: u64> From<UniqueArc<T>> for ListArc<T, ID>
where
    T: ListArcSafe<ID> + ?Sized,
{
    /// Convert a [`UniqueArc`] into a [`ListArc`].
    #[inline]
    fn from(unique: UniqueArc<T>) -> Self {
        Self::from(Pin::from(unique))
    }
}

impl<T, const ID: u64> From<Pin<UniqueArc<T>>> for ListArc<T, ID>
where
    T: ListArcSafe<ID> + ?Sized,
{
    /// Convert a pinned [`UniqueArc`] into a [`ListArc`].
    #[inline]
    fn from(mut unique: Pin<UniqueArc<T>>) -> Self {
        // SAFETY: We have a `UniqueArc`, so there is no `ListArc`.
        unsafe { T::on_create_list_arc_from_unique(unique.as_mut()) };
        let arc = Arc::from(unique);
        // SAFETY: We just called `on_create_list_arc_from_unique` on an arc without a `ListArc`,
        // so we can create a `ListArc`.
        unsafe { Self::transmute_from_arc(arc) }
    }
}

impl<T, const ID: u64> ListArc<T, ID>
where
    T: ListArcSafe<ID> + ?Sized,
{
    /// Creates two `ListArc`s from a [`UniqueArc`].
    ///
    /// The two ids must be different.
    #[inline]
    pub fn pair_from_unique<const ID2: u64>(unique: UniqueArc<T>) -> (Self, ListArc<T, ID2>)
    where
        T: ListArcSafe<ID2>,
    {
        Self::pair_from_pin_unique(Pin::from(unique))
    }

    /// Creates two `ListArc`s from a pinned [`UniqueArc`].
    ///
    /// The two ids must be different.
    #[inline]
    pub fn pair_from_pin_unique<const ID2: u64>(
        mut unique: Pin<UniqueArc<T>>,
    ) -> (Self, ListArc<T, ID2>)
    where
        T: ListArcSafe<ID2>,
    {
        build_assert!(ID != ID2);

        // SAFETY: We have a `UniqueArc`, so there is no `ListArc`.
        unsafe { <T as ListArcSafe<ID>>::on_create_list_arc_from_unique(unique.as_mut()) };
        // SAFETY: We have a `UniqueArc`, so there is no `ListArc`.
        unsafe { <T as ListArcSafe<ID2>>::on_create_list_arc_from_unique(unique.as_mut()) };

        let arc1 = Arc::from(unique);
        let arc2 = Arc::clone(&arc1);

        // SAFETY: We just called `on_create_list_arc_from_unique` on an arc without a `ListArc`
        // for both IDs (which are different), so we can create two `ListArc`s.
        unsafe {
            (
                Self::transmute_from_arc(arc1),
                ListArc::transmute_from_arc(arc2),
            )
        }
    }

    /// Try to create a new `ListArc`.
    ///
    /// This fails if this value already has a `ListArc`.
    pub fn try_from_arc(arc: Arc<T>) -> Result<Self, Arc<T>>
    where
        T: TryNewListArc<ID>,
    {
        if arc.try_new_list_arc() {
            // SAFETY: The `try_new_list_arc` method returned true, so we made the tracking think
            // that a `ListArc` exists. This lets us create a `ListArc`.
            Ok(unsafe { Self::transmute_from_arc(arc) })
        } else {
            Err(arc)
        }
    }

    /// Try to create a new `ListArc`.
    ///
    /// This fails if this value already has a `ListArc`.
    pub fn try_from_arc_borrow(arc: ArcBorrow<'_, T>) -> Option<Self>
    where
        T: TryNewListArc<ID>,
    {
        if arc.try_new_list_arc() {
            // SAFETY: The `try_new_list_arc` method returned true, so we made the tracking think
            // that a `ListArc` exists. This lets us create a `ListArc`.
            Some(unsafe { Self::transmute_from_arc(Arc::from(arc)) })
        } else {
            None
        }
    }

    /// Try to create a new `ListArc`.
    ///
    /// If it's not possible to create a new `ListArc`, then the `Arc` is dropped. This will never
    /// run the destructor of the value.
    pub fn try_from_arc_or_drop(arc: Arc<T>) -> Option<Self>
    where
        T: TryNewListArc<ID>,
    {
        match Self::try_from_arc(arc) {
            Ok(list_arc) => Some(list_arc),
            Err(arc) => Arc::into_unique_or_drop(arc).map(Self::from),
        }
    }

    /// Transmutes an [`Arc`] into a `ListArc` without updating the tracking inside `T`.
    ///
    /// # Safety
    ///
    /// * The value must not already have a `ListArc` reference.
    /// * The tracking inside `T` must think that there is a `ListArc` reference.
    #[inline]
    unsafe fn transmute_from_arc(arc: Arc<T>) -> Self {
        // INVARIANT: By the safety requirements, the invariants on `ListArc` are satisfied.
        Self { arc }
    }

    /// Transmutes a `ListArc` into an [`Arc`] without updating the tracking inside `T`.
    ///
    /// After this call, the tracking inside `T` will still think that there is a `ListArc`
    /// reference.
    #[inline]
    fn transmute_to_arc(self) -> Arc<T> {
        // Use a transmute to skip destructor.
        //
        // SAFETY: ListArc is repr(transparent).
        unsafe { core::mem::transmute(self) }
    }

    /// Convert ownership of this `ListArc` into a raw pointer.
    ///
    /// The returned pointer is indistinguishable from pointers returned by [`Arc::into_raw`]. The
    /// tracking inside `T` will still think that a `ListArc` exists after this call.
    #[inline]
    pub fn into_raw(self) -> *const T {
        Arc::into_raw(Self::transmute_to_arc(self))
    }

    /// Take ownership of the `ListArc` from a raw pointer.
    ///
    /// # Safety
    ///
    /// * `ptr` must satisfy the safety requirements of [`Arc::from_raw`].
    /// * The value must not already have a `ListArc` reference.
    /// * The tracking inside `T` must think that there is a `ListArc` reference.
359 + #[inline] 360 + pub unsafe fn from_raw(ptr: *const T) -> Self { 361 + // SAFETY: The pointer satisfies the safety requirements for `Arc::from_raw`. 362 + let arc = unsafe { Arc::from_raw(ptr) }; 363 + // SAFETY: The value doesn't already have a `ListArc` reference, but the tracking thinks it 364 + // does. 365 + unsafe { Self::transmute_from_arc(arc) } 366 + } 367 + 368 + /// Converts the `ListArc` into an [`Arc`]. 369 + #[inline] 370 + pub fn into_arc(self) -> Arc<T> { 371 + let arc = Self::transmute_to_arc(self); 372 + // SAFETY: There is no longer a `ListArc`, but the tracking thinks there is. 373 + unsafe { T::on_drop_list_arc(&arc) }; 374 + arc 375 + } 376 + 377 + /// Clone a `ListArc` into an [`Arc`]. 378 + #[inline] 379 + pub fn clone_arc(&self) -> Arc<T> { 380 + self.arc.clone() 381 + } 382 + 383 + /// Returns a reference to an [`Arc`] from the given [`ListArc`]. 384 + /// 385 + /// This is useful when the argument of a function call is an [`&Arc`] (e.g., in a method 386 + /// receiver), but we have a [`ListArc`] instead. 387 + /// 388 + /// [`&Arc`]: Arc 389 + #[inline] 390 + pub fn as_arc(&self) -> &Arc<T> { 391 + &self.arc 392 + } 393 + 394 + /// Returns an [`ArcBorrow`] from the given [`ListArc`]. 395 + /// 396 + /// This is useful when the argument of a function call is an [`ArcBorrow`] (e.g., in a method 397 + /// receiver), but we have an [`Arc`] instead. Getting an [`ArcBorrow`] is free when optimised. 398 + #[inline] 399 + pub fn as_arc_borrow(&self) -> ArcBorrow<'_, T> { 400 + self.arc.as_arc_borrow() 401 + } 402 + 403 + /// Compare whether two [`ListArc`] pointers reference the same underlying object. 
404 + #[inline] 405 + pub fn ptr_eq(this: &Self, other: &Self) -> bool { 406 + Arc::ptr_eq(&this.arc, &other.arc) 407 + } 408 + } 409 + 410 + impl<T, const ID: u64> Deref for ListArc<T, ID> 411 + where 412 + T: ListArcSafe<ID> + ?Sized, 413 + { 414 + type Target = T; 415 + 416 + #[inline] 417 + fn deref(&self) -> &Self::Target { 418 + self.arc.deref() 419 + } 420 + } 421 + 422 + impl<T, const ID: u64> Drop for ListArc<T, ID> 423 + where 424 + T: ListArcSafe<ID> + ?Sized, 425 + { 426 + #[inline] 427 + fn drop(&mut self) { 428 + // SAFETY: There is no longer a `ListArc`, but the tracking thinks there is by the type 429 + // invariants on `Self`. 430 + unsafe { T::on_drop_list_arc(&self.arc) }; 431 + } 432 + } 433 + 434 + impl<T, const ID: u64> AsRef<Arc<T>> for ListArc<T, ID> 435 + where 436 + T: ListArcSafe<ID> + ?Sized, 437 + { 438 + #[inline] 439 + fn as_ref(&self) -> &Arc<T> { 440 + self.as_arc() 441 + } 442 + } 443 + 444 + // This is to allow [`ListArc`] (and variants) to be used as the type of `self`. 445 + impl<T, const ID: u64> core::ops::Receiver for ListArc<T, ID> where T: ListArcSafe<ID> + ?Sized {} 446 + 447 + // This is to allow coercion from `ListArc<T>` to `ListArc<U>` if `T` can be converted to the 448 + // dynamically-sized type (DST) `U`. 449 + impl<T, U, const ID: u64> core::ops::CoerceUnsized<ListArc<U, ID>> for ListArc<T, ID> 450 + where 451 + T: ListArcSafe<ID> + Unsize<U> + ?Sized, 452 + U: ListArcSafe<ID> + ?Sized, 453 + { 454 + } 455 + 456 + // This is to allow `ListArc<U>` to be dispatched on when `ListArc<T>` can be coerced into 457 + // `ListArc<U>`. 458 + impl<T, U, const ID: u64> core::ops::DispatchFromDyn<ListArc<U, ID>> for ListArc<T, ID> 459 + where 460 + T: ListArcSafe<ID> + Unsize<U> + ?Sized, 461 + U: ListArcSafe<ID> + ?Sized, 462 + { 463 + } 464 + 465 + /// A utility for tracking whether a [`ListArc`] exists using an atomic. 
466 + /// 467 + /// # Invariant 468 + /// 469 + /// If the boolean is `false`, then there is no [`ListArc`] for this value. 470 + #[repr(transparent)] 471 + pub struct AtomicTracker<const ID: u64 = 0> { 472 + inner: AtomicBool, 473 + // This value needs to be pinned to justify the INVARIANT: comment in `AtomicTracker::new`. 474 + _pin: PhantomPinned, 475 + } 476 + 477 + impl<const ID: u64> AtomicTracker<ID> { 478 + /// Creates a new initializer for this type. 479 + pub fn new() -> impl PinInit<Self> { 480 + // INVARIANT: Pin-init initializers can't be used on an existing `Arc`, so this value will 481 + // not be constructed in an `Arc` that already has a `ListArc`. 482 + Self { 483 + inner: AtomicBool::new(false), 484 + _pin: PhantomPinned, 485 + } 486 + } 487 + 488 + fn project_inner(self: Pin<&mut Self>) -> &mut AtomicBool { 489 + // SAFETY: The `inner` field is not structurally pinned, so we may obtain a mutable 490 + // reference to it even if we only have a pinned reference to `self`. 491 + unsafe { &mut Pin::into_inner_unchecked(self).inner } 492 + } 493 + } 494 + 495 + impl<const ID: u64> ListArcSafe<ID> for AtomicTracker<ID> { 496 + unsafe fn on_create_list_arc_from_unique(self: Pin<&mut Self>) { 497 + // INVARIANT: We just created a ListArc, so the boolean should be true. 498 + *self.project_inner().get_mut() = true; 499 + } 500 + 501 + unsafe fn on_drop_list_arc(&self) { 502 + // INVARIANT: We just dropped a ListArc, so the boolean should be false. 503 + self.inner.store(false, Ordering::Release); 504 + } 505 + } 506 + 507 + // SAFETY: If this method returns `true`, then by the type invariant there is no `ListArc` before 508 + // this call, so it is okay to create a new `ListArc`. 509 + // 510 + // The acquire ordering will synchronize with the release store from the destruction of any 511 + // previous `ListArc`, so if there was a previous `ListArc`, then the destruction of the previous 512 + // `ListArc` happens-before the creation of the new `ListArc`. 
513 + unsafe impl<const ID: u64> TryNewListArc<ID> for AtomicTracker<ID> { 514 + fn try_new_list_arc(&self) -> bool { 515 + // INVARIANT: If this method returns true, then the boolean used to be false, and is no 516 + // longer false, so it is okay for the caller to create a new [`ListArc`]. 517 + self.inner 518 + .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed) 519 + .is_ok() 520 + } 521 + }
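The `AtomicTracker` claim protocol above can be sketched as a standalone user-space program: a successful `compare_exchange(false, true, Acquire, Relaxed)` claims the unique `ListArc` slot, and the `Release` store on drop hands it back, so a previous holder's teardown happens-before the next claim. This is a minimal sketch using plain `std` atomics outside the kernel crate; the names `Tracker`, `try_claim` and `release` are illustrative stand-ins, and the const `ID` parameter and pin-init plumbing are omitted.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Hypothetical stand-in for `AtomicTracker`: `true` means a unique
/// `ListArc`-like claim currently exists.
struct Tracker {
    has_list_arc: AtomicBool,
}

impl Tracker {
    fn new() -> Self {
        Tracker {
            has_list_arc: AtomicBool::new(false),
        }
    }

    /// Mirrors `try_new_list_arc`: atomically flip `false` -> `true`.
    /// The `Acquire` ordering on success synchronizes with the `Release`
    /// store in `release`, so the previous holder's writes are visible
    /// to the new claimant.
    fn try_claim(&self) -> bool {
        self.has_list_arc
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }

    /// Mirrors `on_drop_list_arc`: give the claim back.
    fn release(&self) {
        self.has_list_arc.store(false, Ordering::Release);
    }
}
```

Only one `try_claim` can succeed between two `release` calls, which is exactly the "at most one `ListArc` per ID" guarantee the tracker provides.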
+96
rust/kernel/list/arc_field.rs
// SPDX-License-Identifier: GPL-2.0

// Copyright (C) 2024 Google LLC.

//! A field that is exclusively owned by a [`ListArc`].
//!
//! This can be used to have a reference counted struct where one of the reference counted
//! pointers has exclusive access to a field of the struct.
//!
//! [`ListArc`]: crate::list::ListArc

use core::cell::UnsafeCell;

/// A field owned by a specific [`ListArc`].
///
/// [`ListArc`]: crate::list::ListArc
pub struct ListArcField<T, const ID: u64 = 0> {
    value: UnsafeCell<T>,
}

// SAFETY: If the inner type is thread-safe, then it's also okay for `ListArc` to be thread-safe.
unsafe impl<T: Send + Sync, const ID: u64> Send for ListArcField<T, ID> {}
// SAFETY: If the inner type is thread-safe, then it's also okay for `ListArc` to be thread-safe.
unsafe impl<T: Send + Sync, const ID: u64> Sync for ListArcField<T, ID> {}

impl<T, const ID: u64> ListArcField<T, ID> {
    /// Creates a new `ListArcField`.
    pub fn new(value: T) -> Self {
        Self {
            value: UnsafeCell::new(value),
        }
    }

    /// Access the value when we have exclusive access to the `ListArcField`.
    ///
    /// This allows access to the field using a `UniqueArc` instead of a `ListArc`.
    pub fn get_mut(&mut self) -> &mut T {
        self.value.get_mut()
    }

    /// Unsafely assert that you have shared access to the `ListArc` for this field.
    ///
    /// # Safety
    ///
    /// The caller must have shared access to the `ListArc<ID>` containing the struct with this
    /// field for the duration of the returned reference.
    pub unsafe fn assert_ref(&self) -> &T {
        // SAFETY: The caller has shared access to the `ListArc`, so they also have shared access
        // to this field.
        unsafe { &*self.value.get() }
    }

    /// Unsafely assert that you have mutable access to the `ListArc` for this field.
    ///
    /// # Safety
    ///
    /// The caller must have mutable access to the `ListArc<ID>` containing the struct with this
    /// field for the duration of the returned reference.
    #[allow(clippy::mut_from_ref)]
    pub unsafe fn assert_mut(&self) -> &mut T {
        // SAFETY: The caller has exclusive access to the `ListArc`, so they also have exclusive
        // access to this field.
        unsafe { &mut *self.value.get() }
    }
}

/// Defines getters for a [`ListArcField`].
#[macro_export]
macro_rules! define_list_arc_field_getter {
    ($pub:vis fn $name:ident(&self $(<$id:tt>)?) -> &$typ:ty { $field:ident }
        $($rest:tt)*
    ) => {
        $pub fn $name<'a>(self: &'a $crate::list::ListArc<Self $(, $id)?>) -> &'a $typ {
            let field = &(&**self).$field;
            // SAFETY: We have a shared reference to the `ListArc`.
            unsafe { $crate::list::ListArcField::<$typ $(, $id)?>::assert_ref(field) }
        }

        $crate::list::define_list_arc_field_getter!($($rest)*);
    };

    ($pub:vis fn $name:ident(&mut self $(<$id:tt>)?) -> &mut $typ:ty { $field:ident }
        $($rest:tt)*
    ) => {
        $pub fn $name<'a>(self: &'a mut $crate::list::ListArc<Self $(, $id)?>) -> &'a mut $typ {
            let field = &(&**self).$field;
            // SAFETY: We have a mutable reference to the `ListArc`.
            unsafe { $crate::list::ListArcField::<$typ $(, $id)?>::assert_mut(field) }
        }

        $crate::list::define_list_arc_field_getter!($($rest)*);
    };

    () => {};
}
pub use define_list_arc_field_getter;
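The `ListArcField` pattern above, an `UnsafeCell` whose access discipline is supplied by a caller-side protocol rather than by the type itself, can be sketched in plain user-space Rust. `ExclusiveField` and its methods are hypothetical names, not kernel API; the real type additionally carries the const `ID` parameter and the `Send`/`Sync` impls shown in the diff.

```rust
use std::cell::UnsafeCell;

// Minimal sketch: exclusive (`&mut self`) access is always safe, while
// shared access requires the caller to uphold an external guarantee
// (holding the owning `ListArc`, in the kernel's case).
struct ExclusiveField<T> {
    value: UnsafeCell<T>,
}

impl<T> ExclusiveField<T> {
    fn new(value: T) -> Self {
        Self {
            value: UnsafeCell::new(value),
        }
    }

    // `&mut self` already proves exclusivity, so this needs no `unsafe`.
    fn get_mut(&mut self) -> &mut T {
        self.value.get_mut()
    }

    /// # Safety
    ///
    /// The caller must guarantee there is no concurrent mutable access for
    /// the duration of the returned reference.
    unsafe fn assert_ref(&self) -> &T {
        unsafe { &*self.value.get() }
    }
}
```

The kernel version splits the safety obligation by `ListArc` ownership: shared access to the `ListArc` justifies `assert_ref`, mutable access justifies `assert_mut`.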
+274
rust/kernel/list/impl_list_item_mod.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + // Copyright (C) 2024 Google LLC. 4 + 5 + //! Helpers for implementing list traits safely. 6 + 7 + use crate::list::ListLinks; 8 + 9 + /// Declares that this type has a `ListLinks<ID>` field at a fixed offset. 10 + /// 11 + /// This trait is only used to help implement `ListItem` safely. If `ListItem` is implemented 12 + /// manually, then this trait is not needed. Use the [`impl_has_list_links!`] macro to implement 13 + /// this trait. 14 + /// 15 + /// # Safety 16 + /// 17 + /// All values of this type must have a `ListLinks<ID>` field at the given offset. 18 + /// 19 + /// The behavior of `raw_get_list_links` must not be changed. 20 + pub unsafe trait HasListLinks<const ID: u64 = 0> { 21 + /// The offset of the `ListLinks` field. 22 + const OFFSET: usize; 23 + 24 + /// Returns a pointer to the [`ListLinks<T, ID>`] field. 25 + /// 26 + /// # Safety 27 + /// 28 + /// The provided pointer must point at a valid struct of type `Self`. 29 + /// 30 + /// [`ListLinks<T, ID>`]: ListLinks 31 + // We don't really need this method, but it's necessary for the implementation of 32 + // `impl_has_list_links!` to be correct. 33 + #[inline] 34 + unsafe fn raw_get_list_links(ptr: *mut Self) -> *mut ListLinks<ID> { 35 + // SAFETY: The caller promises that the pointer is valid. The implementer promises that the 36 + // `OFFSET` constant is correct. 37 + unsafe { (ptr as *mut u8).add(Self::OFFSET) as *mut ListLinks<ID> } 38 + } 39 + } 40 + 41 + /// Implements the [`HasListLinks`] trait for the given type. 42 + #[macro_export] 43 + macro_rules! impl_has_list_links { 44 + ($(impl$(<$($implarg:ident),*>)? 45 + HasListLinks$(<$id:tt>)? 46 + for $self:ident $(<$($selfarg:ty),*>)? 47 + { self$(.$field:ident)* } 48 + )*) => {$( 49 + // SAFETY: The implementation of `raw_get_list_links` only compiles if the field has the 50 + // right type. 
51 + // 52 + // The behavior of `raw_get_list_links` is not changed since the `addr_of_mut!` macro is 53 + // equivalent to the pointer offset operation in the trait definition. 54 + unsafe impl$(<$($implarg),*>)? $crate::list::HasListLinks$(<$id>)? for 55 + $self $(<$($selfarg),*>)? 56 + { 57 + const OFFSET: usize = ::core::mem::offset_of!(Self, $($field).*) as usize; 58 + 59 + #[inline] 60 + unsafe fn raw_get_list_links(ptr: *mut Self) -> *mut $crate::list::ListLinks$(<$id>)? { 61 + // SAFETY: The caller promises that the pointer is not dangling. We know that this 62 + // expression doesn't follow any pointers, as the `offset_of!` invocation above 63 + // would otherwise not compile. 64 + unsafe { ::core::ptr::addr_of_mut!((*ptr)$(.$field)*) } 65 + } 66 + } 67 + )*}; 68 + } 69 + pub use impl_has_list_links; 70 + 71 + /// Declares that the `ListLinks<ID>` field in this struct is inside a `ListLinksSelfPtr<T, ID>`. 72 + /// 73 + /// # Safety 74 + /// 75 + /// The `ListLinks<ID>` field of this struct at the offset `HasListLinks<ID>::OFFSET` must be 76 + /// inside a `ListLinksSelfPtr<T, ID>`. 77 + pub unsafe trait HasSelfPtr<T: ?Sized, const ID: u64 = 0> 78 + where 79 + Self: HasListLinks<ID>, 80 + { 81 + } 82 + 83 + /// Implements the [`HasListLinks`] and [`HasSelfPtr`] traits for the given type. 84 + #[macro_export] 85 + macro_rules! impl_has_list_links_self_ptr { 86 + ($(impl$({$($implarg:tt)*})? 87 + HasSelfPtr<$item_type:ty $(, $id:tt)?> 88 + for $self:ident $(<$($selfarg:ty),*>)? 89 + { self.$field:ident } 90 + )*) => {$( 91 + // SAFETY: The implementation of `raw_get_list_links` only compiles if the field has the 92 + // right type. 93 + unsafe impl$(<$($implarg)*>)? $crate::list::HasSelfPtr<$item_type $(, $id)?> for 94 + $self $(<$($selfarg),*>)? 95 + {} 96 + 97 + unsafe impl$(<$($implarg)*>)? $crate::list::HasListLinks$(<$id>)? for 98 + $self $(<$($selfarg),*>)? 
99 + { 100 + const OFFSET: usize = ::core::mem::offset_of!(Self, $field) as usize; 101 + 102 + #[inline] 103 + unsafe fn raw_get_list_links(ptr: *mut Self) -> *mut $crate::list::ListLinks$(<$id>)? { 104 + // SAFETY: The caller promises that the pointer is not dangling. 105 + let ptr: *mut $crate::list::ListLinksSelfPtr<$item_type $(, $id)?> = 106 + unsafe { ::core::ptr::addr_of_mut!((*ptr).$field) }; 107 + ptr.cast() 108 + } 109 + } 110 + )*}; 111 + } 112 + pub use impl_has_list_links_self_ptr; 113 + 114 + /// Implements the [`ListItem`] trait for the given type. 115 + /// 116 + /// Requires that the type implements [`HasListLinks`]. Use the [`impl_has_list_links!`] macro to 117 + /// implement that trait. 118 + /// 119 + /// [`ListItem`]: crate::list::ListItem 120 + #[macro_export] 121 + macro_rules! impl_list_item { 122 + ( 123 + $(impl$({$($generics:tt)*})? ListItem<$num:tt> for $t:ty { 124 + using ListLinks; 125 + })* 126 + ) => {$( 127 + // SAFETY: See GUARANTEES comment on each method. 128 + unsafe impl$(<$($generics)*>)? $crate::list::ListItem<$num> for $t { 129 + // GUARANTEES: 130 + // * This returns the same pointer as `prepare_to_insert` because `prepare_to_insert` 131 + // is implemented in terms of `view_links`. 132 + // * By the type invariants of `ListLinks`, the `ListLinks` has two null pointers when 133 + // this value is not in a list. 134 + unsafe fn view_links(me: *const Self) -> *mut $crate::list::ListLinks<$num> { 135 + // SAFETY: The caller guarantees that `me` points at a valid value of type `Self`. 136 + unsafe { 137 + <Self as $crate::list::HasListLinks<$num>>::raw_get_list_links(me.cast_mut()) 138 + } 139 + } 140 + 141 + // GUARANTEES: 142 + // * `me` originates from the most recent call to `prepare_to_insert`, which just added 143 + // `offset` to the pointer passed to `prepare_to_insert`. This method subtracts 144 + // `offset` from `me` so it returns the pointer originally passed to 145 + // `prepare_to_insert`. 
146 + // * The pointer remains valid until the next call to `post_remove` because the caller 147 + // of the most recent call to `prepare_to_insert` promised to retain ownership of the 148 + // `ListArc` containing `Self` until the next call to `post_remove`. The value cannot 149 + // be destroyed while a `ListArc` reference exists. 150 + unsafe fn view_value(me: *mut $crate::list::ListLinks<$num>) -> *const Self { 151 + let offset = <Self as $crate::list::HasListLinks<$num>>::OFFSET; 152 + // SAFETY: `me` originates from the most recent call to `prepare_to_insert`, so it 153 + // points at the field at offset `offset` in a value of type `Self`. Thus, 154 + // subtracting `offset` from `me` is still in-bounds of the allocation. 155 + unsafe { (me as *const u8).sub(offset) as *const Self } 156 + } 157 + 158 + // GUARANTEES: 159 + // This implementation of `ListItem` will not give out exclusive access to the same 160 + // `ListLinks` several times because calls to `prepare_to_insert` and `post_remove` 161 + // must alternate and exclusive access is given up when `post_remove` is called. 162 + // 163 + // Other invocations of `impl_list_item!` also cannot give out exclusive access to the 164 + // same `ListLinks` because you can only implement `ListItem` once for each value of 165 + // `ID`, and the `ListLinks` fields only work with the specified `ID`. 166 + unsafe fn prepare_to_insert(me: *const Self) -> *mut $crate::list::ListLinks<$num> { 167 + // SAFETY: The caller promises that `me` points at a valid value. 168 + unsafe { <Self as $crate::list::ListItem<$num>>::view_links(me) } 169 + } 170 + 171 + // GUARANTEES: 172 + // * `me` originates from the most recent call to `prepare_to_insert`, which just added 173 + // `offset` to the pointer passed to `prepare_to_insert`. This method subtracts 174 + // `offset` from `me` so it returns the pointer originally passed to 175 + // `prepare_to_insert`. 
176 + unsafe fn post_remove(me: *mut $crate::list::ListLinks<$num>) -> *const Self { 177 + let offset = <Self as $crate::list::HasListLinks<$num>>::OFFSET; 178 + // SAFETY: `me` originates from the most recent call to `prepare_to_insert`, so it 179 + // points at the field at offset `offset` in a value of type `Self`. Thus, 180 + // subtracting `offset` from `me` is still in-bounds of the allocation. 181 + unsafe { (me as *const u8).sub(offset) as *const Self } 182 + } 183 + } 184 + )*}; 185 + 186 + ( 187 + $(impl$({$($generics:tt)*})? ListItem<$num:tt> for $t:ty { 188 + using ListLinksSelfPtr; 189 + })* 190 + ) => {$( 191 + // SAFETY: See GUARANTEES comment on each method. 192 + unsafe impl$(<$($generics)*>)? $crate::list::ListItem<$num> for $t { 193 + // GUARANTEES: 194 + // This implementation of `ListItem` will not give out exclusive access to the same 195 + // `ListLinks` several times because calls to `prepare_to_insert` and `post_remove` 196 + // must alternate and exclusive access is given up when `post_remove` is called. 197 + // 198 + // Other invocations of `impl_list_item!` also cannot give out exclusive access to the 199 + // same `ListLinks` because you can only implement `ListItem` once for each value of 200 + // `ID`, and the `ListLinks` fields only work with the specified `ID`. 201 + unsafe fn prepare_to_insert(me: *const Self) -> *mut $crate::list::ListLinks<$num> { 202 + // SAFETY: The caller promises that `me` points at a valid value of type `Self`. 203 + let links_field = unsafe { <Self as $crate::list::ListItem<$num>>::view_links(me) }; 204 + 205 + let spoff = $crate::list::ListLinksSelfPtr::<Self, $num>::LIST_LINKS_SELF_PTR_OFFSET; 206 + // Goes via the offset as the field is private. 207 + // 208 + // SAFETY: The constant is equal to `offset_of!(ListLinksSelfPtr, self_ptr)`, so 209 + // the pointer stays in bounds of the allocation. 
210 + let self_ptr = unsafe { (links_field as *const u8).add(spoff) } 211 + as *const $crate::types::Opaque<*const Self>; 212 + let cell_inner = $crate::types::Opaque::raw_get(self_ptr); 213 + 214 + // SAFETY: This value is not accessed in any other places than `prepare_to_insert`, 215 + // `post_remove`, or `view_value`. By the safety requirements of those methods, 216 + // none of these three methods may be called in parallel with this call to 217 + // `prepare_to_insert`, so this write will not race with any other access to the 218 + // value. 219 + unsafe { ::core::ptr::write(cell_inner, me) }; 220 + 221 + links_field 222 + } 223 + 224 + // GUARANTEES: 225 + // * This returns the same pointer as `prepare_to_insert` because `prepare_to_insert` 226 + // returns the return value of `view_links`. 227 + // * By the type invariants of `ListLinks`, the `ListLinks` has two null pointers when 228 + // this value is not in a list. 229 + unsafe fn view_links(me: *const Self) -> *mut $crate::list::ListLinks<$num> { 230 + // SAFETY: The caller promises that `me` points at a valid value of type `Self`. 231 + unsafe { <Self as HasListLinks<$num>>::raw_get_list_links(me.cast_mut()) } 232 + } 233 + 234 + // This function is also used as the implementation of `post_remove`, so the caller 235 + // may choose to satisfy the safety requirements of `post_remove` instead of the safety 236 + // requirements for `view_value`. 237 + // 238 + // GUARANTEES: (always) 239 + // * This returns the same pointer as the one passed to the most recent call to 240 + // `prepare_to_insert` since that call wrote that pointer to this location. The value 241 + // is only modified in `prepare_to_insert`, so it has not been modified since the 242 + // most recent call. 
243 + // 244 + // GUARANTEES: (only when using the `view_value` safety requirements) 245 + // * The pointer remains valid until the next call to `post_remove` because the caller 246 + // of the most recent call to `prepare_to_insert` promised to retain ownership of the 247 + // `ListArc` containing `Self` until the next call to `post_remove`. The value cannot 248 + // be destroyed while a `ListArc` reference exists. 249 + unsafe fn view_value(links_field: *mut $crate::list::ListLinks<$num>) -> *const Self { 250 + let spoff = $crate::list::ListLinksSelfPtr::<Self, $num>::LIST_LINKS_SELF_PTR_OFFSET; 251 + // SAFETY: The constant is equal to `offset_of!(ListLinksSelfPtr, self_ptr)`, so 252 + // the pointer stays in bounds of the allocation. 253 + let self_ptr = unsafe { (links_field as *const u8).add(spoff) } 254 + as *const ::core::cell::UnsafeCell<*const Self>; 255 + let cell_inner = ::core::cell::UnsafeCell::raw_get(self_ptr); 256 + // SAFETY: This is not a data race, because the only function that writes to this 257 + // value is `prepare_to_insert`, but by the safety requirements the 258 + // `prepare_to_insert` method may not be called in parallel with `view_value` or 259 + // `post_remove`. 260 + unsafe { ::core::ptr::read(cell_inner) } 261 + } 262 + 263 + // GUARANTEES: 264 + // The first guarantee of `view_value` is exactly what `post_remove` guarantees. 265 + unsafe fn post_remove(me: *mut $crate::list::ListLinks<$num>) -> *const Self { 266 + // SAFETY: This specific implementation of `view_value` allows the caller to 267 + // promise the safety requirements of `post_remove` instead of the safety 268 + // requirements for `view_value`. 269 + unsafe { <Self as $crate::list::ListItem<$num>>::view_value(me) } 270 + } 271 + } 272 + )*}; 273 + } 274 + pub use impl_list_item;
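The offset arithmetic that `impl_has_list_links!` and `view_value` rely on is the classic `container_of` pattern: add `OFFSET` to go from a struct pointer to its embedded links field, and subtract it to recover the containing struct. Below is a standalone sketch with stand-in types `Item` and `Links` (not the kernel's `ListLinks`), using the stable `offset_of!` macro, which is also how the macros above compute `OFFSET`.

```rust
use std::mem::offset_of;
use std::ptr;

// Stand-in for `ListLinks`: embedded prev/next pointers.
struct Links {
    prev: *mut Links,
    next: *mut Links,
}

// Stand-in for a list element containing the links at a fixed offset.
struct Item {
    value: i32,
    links: Links,
}

// Computed at compile time, as in the `impl_has_list_links!` expansion.
const OFFSET: usize = offset_of!(Item, links);

// Mirrors `raw_get_list_links`: project `*mut Item` to its `links` field.
//
// SAFETY requirement: `item` must point at a valid `Item`.
unsafe fn view_links(item: *mut Item) -> *mut Links {
    unsafe { (item as *mut u8).add(OFFSET) as *mut Links }
}

// Mirrors `view_value`: recover the containing `Item` from a field pointer
// that was produced by `view_links`.
unsafe fn view_value(links: *mut Links) -> *const Item {
    unsafe { (links as *const u8).sub(OFFSET) as *const Item }
}
```

Round-tripping a pointer through `view_links` and `view_value` returns the original struct pointer, which is the guarantee the `ListItem` implementations above depend on.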
+1 -1
rust/kernel/prelude.rs
··· 37 37 38 38 pub use super::{str::CStr, ThisModule}; 39 39 40 - pub use super::init::{InPlaceInit, Init, PinInit}; 40 + pub use super::init::{InPlaceInit, InPlaceWrite, Init, PinInit}; 41 41 42 42 pub use super::current;
+10 -10
rust/kernel/print.rs
··· 4 4 //! 5 5 //! C header: [`include/linux/printk.h`](srctree/include/linux/printk.h) 6 6 //! 7 - //! Reference: <https://www.kernel.org/doc/html/latest/core-api/printk-basics.html> 7 + //! Reference: <https://docs.kernel.org/core-api/printk-basics.html> 8 8 9 9 use core::{ 10 10 ffi::{c_char, c_void}, ··· 197 197 /// Mimics the interface of [`std::print!`]. See [`core::fmt`] and 198 198 /// `alloc::format!` for information about the formatting syntax. 199 199 /// 200 - /// [`pr_emerg`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_emerg 200 + /// [`pr_emerg`]: https://docs.kernel.org/core-api/printk-basics.html#c.pr_emerg 201 201 /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 202 202 /// 203 203 /// # Examples ··· 221 221 /// Mimics the interface of [`std::print!`]. See [`core::fmt`] and 222 222 /// `alloc::format!` for information about the formatting syntax. 223 223 /// 224 - /// [`pr_alert`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_alert 224 + /// [`pr_alert`]: https://docs.kernel.org/core-api/printk-basics.html#c.pr_alert 225 225 /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 226 226 /// 227 227 /// # Examples ··· 245 245 /// Mimics the interface of [`std::print!`]. See [`core::fmt`] and 246 246 /// `alloc::format!` for information about the formatting syntax. 247 247 /// 248 - /// [`pr_crit`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_crit 248 + /// [`pr_crit`]: https://docs.kernel.org/core-api/printk-basics.html#c.pr_crit 249 249 /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 250 250 /// 251 251 /// # Examples ··· 269 269 /// Mimics the interface of [`std::print!`]. See [`core::fmt`] and 270 270 /// `alloc::format!` for information about the formatting syntax. 
271 271 /// 272 - /// [`pr_err`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_err 272 + /// [`pr_err`]: https://docs.kernel.org/core-api/printk-basics.html#c.pr_err 273 273 /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 274 274 /// 275 275 /// # Examples ··· 293 293 /// Mimics the interface of [`std::print!`]. See [`core::fmt`] and 294 294 /// `alloc::format!` for information about the formatting syntax. 295 295 /// 296 - /// [`pr_warn`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_warn 296 + /// [`pr_warn`]: https://docs.kernel.org/core-api/printk-basics.html#c.pr_warn 297 297 /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 298 298 /// 299 299 /// # Examples ··· 317 317 /// Mimics the interface of [`std::print!`]. See [`core::fmt`] and 318 318 /// `alloc::format!` for information about the formatting syntax. 319 319 /// 320 - /// [`pr_notice`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_notice 320 + /// [`pr_notice`]: https://docs.kernel.org/core-api/printk-basics.html#c.pr_notice 321 321 /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 322 322 /// 323 323 /// # Examples ··· 341 341 /// Mimics the interface of [`std::print!`]. See [`core::fmt`] and 342 342 /// `alloc::format!` for information about the formatting syntax. 343 343 /// 344 - /// [`pr_info`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_info 344 + /// [`pr_info`]: https://docs.kernel.org/core-api/printk-basics.html#c.pr_info 345 345 /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 346 346 /// 347 347 /// # Examples ··· 367 367 /// Mimics the interface of [`std::print!`]. See [`core::fmt`] and 368 368 /// `alloc::format!` for information about the formatting syntax. 
369 369 /// 370 - /// [`pr_debug`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_debug 370 + /// [`pr_debug`]: https://docs.kernel.org/core-api/printk-basics.html#c.pr_debug 371 371 /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 372 372 /// 373 373 /// # Examples ··· 395 395 /// `alloc::format!` for information about the formatting syntax. 396 396 /// 397 397 /// [`pr_info!`]: crate::pr_info! 398 - /// [`pr_cont`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_cont 398 + /// [`pr_cont`]: https://docs.kernel.org/core-api/printk-basics.html#c.pr_cont 399 399 /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 400 400 /// 401 401 /// # Examples
+1278
rust/kernel/rbtree.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Red-black trees. 4 + //! 5 + //! C header: [`include/linux/rbtree.h`](srctree/include/linux/rbtree.h) 6 + //! 7 + //! Reference: <https://docs.kernel.org/core-api/rbtree.html> 8 + 9 + use crate::{alloc::Flags, bindings, container_of, error::Result, prelude::*}; 10 + use alloc::boxed::Box; 11 + use core::{ 12 + cmp::{Ord, Ordering}, 13 + marker::PhantomData, 14 + mem::MaybeUninit, 15 + ptr::{addr_of_mut, from_mut, NonNull}, 16 + }; 17 + 18 + /// A red-black tree with owned nodes. 19 + /// 20 + /// It is backed by the kernel C red-black trees. 21 + /// 22 + /// # Examples 23 + /// 24 + /// In the example below we do several operations on a tree. We note that insertions may fail if 25 + /// the system is out of memory. 26 + /// 27 + /// ``` 28 + /// use kernel::{alloc::flags, rbtree::{RBTree, RBTreeNode, RBTreeNodeReservation}}; 29 + /// 30 + /// // Create a new tree. 31 + /// let mut tree = RBTree::new(); 32 + /// 33 + /// // Insert three elements. 34 + /// tree.try_create_and_insert(20, 200, flags::GFP_KERNEL)?; 35 + /// tree.try_create_and_insert(10, 100, flags::GFP_KERNEL)?; 36 + /// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?; 37 + /// 38 + /// // Check the nodes we just inserted. 39 + /// { 40 + /// assert_eq!(tree.get(&10).unwrap(), &100); 41 + /// assert_eq!(tree.get(&20).unwrap(), &200); 42 + /// assert_eq!(tree.get(&30).unwrap(), &300); 43 + /// } 44 + /// 45 + /// // Iterate over the nodes we just inserted. 46 + /// { 47 + /// let mut iter = tree.iter(); 48 + /// assert_eq!(iter.next().unwrap(), (&10, &100)); 49 + /// assert_eq!(iter.next().unwrap(), (&20, &200)); 50 + /// assert_eq!(iter.next().unwrap(), (&30, &300)); 51 + /// assert!(iter.next().is_none()); 52 + /// } 53 + /// 54 + /// // Print all elements. 55 + /// for (key, value) in &tree { 56 + /// pr_info!("{} = {}\n", key, value); 57 + /// } 58 + /// 59 + /// // Replace one of the elements. 
60 + /// tree.try_create_and_insert(10, 1000, flags::GFP_KERNEL)?; 61 + /// 62 + /// // Check that the tree reflects the replacement. 63 + /// { 64 + /// let mut iter = tree.iter(); 65 + /// assert_eq!(iter.next().unwrap(), (&10, &1000)); 66 + /// assert_eq!(iter.next().unwrap(), (&20, &200)); 67 + /// assert_eq!(iter.next().unwrap(), (&30, &300)); 68 + /// assert!(iter.next().is_none()); 69 + /// } 70 + /// 71 + /// // Change the value of one of the elements. 72 + /// *tree.get_mut(&30).unwrap() = 3000; 73 + /// 74 + /// // Check that the tree reflects the update. 75 + /// { 76 + /// let mut iter = tree.iter(); 77 + /// assert_eq!(iter.next().unwrap(), (&10, &1000)); 78 + /// assert_eq!(iter.next().unwrap(), (&20, &200)); 79 + /// assert_eq!(iter.next().unwrap(), (&30, &3000)); 80 + /// assert!(iter.next().is_none()); 81 + /// } 82 + /// 83 + /// // Remove an element. 84 + /// tree.remove(&10); 85 + /// 86 + /// // Check that the tree reflects the removal. 87 + /// { 88 + /// let mut iter = tree.iter(); 89 + /// assert_eq!(iter.next().unwrap(), (&20, &200)); 90 + /// assert_eq!(iter.next().unwrap(), (&30, &3000)); 91 + /// assert!(iter.next().is_none()); 92 + /// } 93 + /// 94 + /// # Ok::<(), Error>(()) 95 + /// ``` 96 + /// 97 + /// In the example below, we first allocate a node, acquire a spinlock, then insert the node into 98 + /// the tree. This is useful when the insertion context does not allow sleeping, for example, when 99 + /// holding a spinlock. 100 + /// 101 + /// ``` 102 + /// use kernel::{alloc::flags, rbtree::{RBTree, RBTreeNode}, sync::SpinLock}; 103 + /// 104 + /// fn insert_test(tree: &SpinLock<RBTree<u32, u32>>) -> Result { 105 + /// // Pre-allocate node. This may fail (as it allocates memory). 106 + /// let node = RBTreeNode::new(10, 100, flags::GFP_KERNEL)?; 107 + /// 108 + /// // Insert node while holding the lock. It is guaranteed to succeed with no allocation 109 + /// // attempts. 
110 + /// let mut guard = tree.lock(); 111 + /// guard.insert(node); 112 + /// Ok(()) 113 + /// } 114 + /// ``` 115 + /// 116 + /// In the example below, we reuse an existing node allocation from an element we removed. 117 + /// 118 + /// ``` 119 + /// use kernel::{alloc::flags, rbtree::{RBTree, RBTreeNodeReservation}}; 120 + /// 121 + /// // Create a new tree. 122 + /// let mut tree = RBTree::new(); 123 + /// 124 + /// // Insert three elements. 125 + /// tree.try_create_and_insert(20, 200, flags::GFP_KERNEL)?; 126 + /// tree.try_create_and_insert(10, 100, flags::GFP_KERNEL)?; 127 + /// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?; 128 + /// 129 + /// // Check the nodes we just inserted. 130 + /// { 131 + /// let mut iter = tree.iter(); 132 + /// assert_eq!(iter.next().unwrap(), (&10, &100)); 133 + /// assert_eq!(iter.next().unwrap(), (&20, &200)); 134 + /// assert_eq!(iter.next().unwrap(), (&30, &300)); 135 + /// assert!(iter.next().is_none()); 136 + /// } 137 + /// 138 + /// // Remove a node, getting back ownership of it. 139 + /// let existing = tree.remove(&30).unwrap(); 140 + /// 141 + /// // Check that the tree reflects the removal. 142 + /// { 143 + /// let mut iter = tree.iter(); 144 + /// assert_eq!(iter.next().unwrap(), (&10, &100)); 145 + /// assert_eq!(iter.next().unwrap(), (&20, &200)); 146 + /// assert!(iter.next().is_none()); 147 + /// } 148 + /// 149 + /// // Create a preallocated reservation that we can re-use later. 150 + /// let reservation = RBTreeNodeReservation::new(flags::GFP_KERNEL)?; 151 + /// 152 + /// // Insert a new node into the tree, reusing the previous allocation. This is guaranteed to 153 + /// // succeed (no memory allocations). 154 + /// tree.insert(reservation.into_node(15, 150)); 155 + /// 156 + /// // Check that the tree reflects the new insertion. 
/// {
///     let mut iter = tree.iter();
///     assert_eq!(iter.next().unwrap(), (&10, &100));
///     assert_eq!(iter.next().unwrap(), (&15, &150));
///     assert_eq!(iter.next().unwrap(), (&20, &200));
///     assert!(iter.next().is_none());
/// }
///
/// # Ok::<(), Error>(())
/// ```
///
/// # Invariants
///
/// Non-null parent/children pointers stored in instances of the `rb_node` C struct are always
/// valid, and pointing to a field of our internal representation of a node.
pub struct RBTree<K, V> {
    root: bindings::rb_root,
    _p: PhantomData<Node<K, V>>,
}

// SAFETY: An [`RBTree`] allows the same kinds of access to its values that a struct allows to its
// fields, so we use the same Send condition as would be used for a struct with K and V fields.
unsafe impl<K: Send, V: Send> Send for RBTree<K, V> {}

// SAFETY: An [`RBTree`] allows the same kinds of access to its values that a struct allows to its
// fields, so we use the same Sync condition as would be used for a struct with K and V fields.
unsafe impl<K: Sync, V: Sync> Sync for RBTree<K, V> {}

impl<K, V> RBTree<K, V> {
    /// Creates a new and empty tree.
    pub fn new() -> Self {
        Self {
            // INVARIANT: There are no nodes in the tree, so the invariant holds vacuously.
            root: bindings::rb_root::default(),
            _p: PhantomData,
        }
    }

    /// Returns an iterator over the tree nodes, sorted by key.
    pub fn iter(&self) -> Iter<'_, K, V> {
        Iter {
            _tree: PhantomData,
            // INVARIANT:
            //   - `self.root` is a valid pointer to a tree root.
            //   - `bindings::rb_first` produces a valid pointer to a node given `root` is valid.
            iter_raw: IterRaw {
                // SAFETY: by the invariants, all pointers are valid.
                next: unsafe { bindings::rb_first(&self.root) },
                _phantom: PhantomData,
            },
        }
    }

    /// Returns a mutable iterator over the tree nodes, sorted by key.
    pub fn iter_mut(&mut self) -> IterMut<'_, K, V> {
        IterMut {
            _tree: PhantomData,
            // INVARIANT:
            //   - `self.root` is a valid pointer to a tree root.
            //   - `bindings::rb_first` produces a valid pointer to a node given `root` is valid.
            iter_raw: IterRaw {
                // SAFETY: by the invariants, all pointers are valid.
                next: unsafe { bindings::rb_first(from_mut(&mut self.root)) },
                _phantom: PhantomData,
            },
        }
    }

    /// Returns an iterator over the keys of the nodes in the tree, in sorted order.
    pub fn keys(&self) -> impl Iterator<Item = &'_ K> {
        self.iter().map(|(k, _)| k)
    }

    /// Returns an iterator over the values of the nodes in the tree, sorted by key.
    pub fn values(&self) -> impl Iterator<Item = &'_ V> {
        self.iter().map(|(_, v)| v)
    }

    /// Returns a mutable iterator over the values of the nodes in the tree, sorted by key.
    pub fn values_mut(&mut self) -> impl Iterator<Item = &'_ mut V> {
        self.iter_mut().map(|(_, v)| v)
    }

    /// Returns a cursor over the tree nodes, starting with the smallest key.
    pub fn cursor_front(&mut self) -> Option<Cursor<'_, K, V>> {
        let root = addr_of_mut!(self.root);
        // SAFETY: `self.root` is always a valid root node.
        let current = unsafe { bindings::rb_first(root) };
        NonNull::new(current).map(|current| {
            // INVARIANT:
            //   - `current` is a valid node in the [`RBTree`] pointed to by `self`.
            Cursor {
                current,
                tree: self,
            }
        })
    }

    /// Returns a cursor over the tree nodes, starting with the largest key.
    pub fn cursor_back(&mut self) -> Option<Cursor<'_, K, V>> {
        let root = addr_of_mut!(self.root);
        // SAFETY: `self.root` is always a valid root node.
        let current = unsafe { bindings::rb_last(root) };
        NonNull::new(current).map(|current| {
            // INVARIANT:
            //   - `current` is a valid node in the [`RBTree`] pointed to by `self`.
            Cursor {
                current,
                tree: self,
            }
        })
    }
}

impl<K, V> RBTree<K, V>
where
    K: Ord,
{
    /// Tries to insert a new value into the tree.
    ///
    /// It overwrites a node if one already exists with the same key and returns it (containing the
    /// key/value pair). Returns [`None`] if a node with the same key didn't already exist.
    ///
    /// Returns an error if it cannot allocate memory for the new node.
    pub fn try_create_and_insert(
        &mut self,
        key: K,
        value: V,
        flags: Flags,
    ) -> Result<Option<RBTreeNode<K, V>>> {
        Ok(self.insert(RBTreeNode::new(key, value, flags)?))
    }

    /// Inserts a new node into the tree.
    ///
    /// It overwrites a node if one already exists with the same key and returns it (containing the
    /// key/value pair). Returns [`None`] if a node with the same key didn't already exist.
    ///
    /// This function always succeeds.
    pub fn insert(&mut self, node: RBTreeNode<K, V>) -> Option<RBTreeNode<K, V>> {
        match self.raw_entry(&node.node.key) {
            RawEntry::Occupied(entry) => Some(entry.replace(node)),
            RawEntry::Vacant(entry) => {
                entry.insert(node);
                None
            }
        }
    }

    fn raw_entry(&mut self, key: &K) -> RawEntry<'_, K, V> {
        let raw_self: *mut RBTree<K, V> = self;
        // The returned `RawEntry` is used to call either `rb_link_node` or `rb_replace_node`.
        // The parameters of `bindings::rb_link_node` are as follows:
        // - `node`: A pointer to an uninitialized node being inserted.
        // - `parent`: A pointer to an existing node in the tree. One of its child pointers must be
        //   null, and `node` will become a child of `parent` by replacing that child pointer with
        //   a pointer to `node`.
        // - `rb_link`: A pointer to either the left-child or right-child field of `parent`. This
        //   specifies which child of `parent` should hold `node` after this call. The value of
        //   `*rb_link` must be null before the call to `rb_link_node`. If the red/black tree is
        //   empty, then it’s also possible for `parent` to be null. In this case, `rb_link` is a
        //   pointer to the `root` field of the red/black tree.
        //
        // We will traverse the tree looking for a node that has a null pointer as its child,
        // representing an empty subtree where we can insert our new node. We need to make sure
        // that we preserve the ordering of the nodes in the tree. In each iteration of the loop
        // we store `parent` and `child_field_of_parent`, and the new `node` will go somewhere
        // in the subtree of `parent` that `child_field_of_parent` points at. Once we find an
        // empty subtree, we can insert the new node using `rb_link_node`.
        let mut parent = core::ptr::null_mut();
        let mut child_field_of_parent: &mut *mut bindings::rb_node =
            // SAFETY: `raw_self` is a valid pointer to the `RBTree` (created from `self` above).
            unsafe { &mut (*raw_self).root.rb_node };
        while !(*child_field_of_parent).is_null() {
            let curr = *child_field_of_parent;
            // SAFETY: All links fields we create are in a `Node<K, V>`.
            let node = unsafe { container_of!(curr, Node<K, V>, links) };

            // SAFETY: `node` is a non-null node so it is valid by the type invariants.
            match key.cmp(unsafe { &(*node).key }) {
                // SAFETY: `curr` is a non-null node so it is valid by the type invariants.
                Ordering::Less => child_field_of_parent = unsafe { &mut (*curr).rb_left },
                // SAFETY: `curr` is a non-null node so it is valid by the type invariants.
                Ordering::Greater => child_field_of_parent = unsafe { &mut (*curr).rb_right },
                Ordering::Equal => {
                    return RawEntry::Occupied(OccupiedEntry {
                        rbtree: self,
                        node_links: curr,
                    })
                }
            }
            parent = curr;
        }

        RawEntry::Vacant(RawVacantEntry {
            rbtree: raw_self,
            parent,
            child_field_of_parent,
            _phantom: PhantomData,
        })
    }

    /// Gets the given key's corresponding entry in the map for in-place manipulation.
    pub fn entry(&mut self, key: K) -> Entry<'_, K, V> {
        match self.raw_entry(&key) {
            RawEntry::Occupied(entry) => Entry::Occupied(entry),
            RawEntry::Vacant(entry) => Entry::Vacant(VacantEntry { raw: entry, key }),
        }
    }

    /// Used for accessing the given node, if it exists.
    pub fn find_mut(&mut self, key: &K) -> Option<OccupiedEntry<'_, K, V>> {
        match self.raw_entry(key) {
            RawEntry::Occupied(entry) => Some(entry),
            RawEntry::Vacant(_entry) => None,
        }
    }

    /// Returns a reference to the value corresponding to the key.
    pub fn get(&self, key: &K) -> Option<&V> {
        let mut node = self.root.rb_node;
        while !node.is_null() {
            // SAFETY: By the type invariant of `Self`, all non-null `rb_node` pointers stored in
            // `self` point to the links field of `Node<K, V>` objects.
            let this = unsafe { container_of!(node, Node<K, V>, links) };
            // SAFETY: `this` is a non-null node so it is valid by the type invariants.
            node = match key.cmp(unsafe { &(*this).key }) {
                // SAFETY: `node` is a non-null node so it is valid by the type invariants.
                Ordering::Less => unsafe { (*node).rb_left },
                // SAFETY: `node` is a non-null node so it is valid by the type invariants.
                Ordering::Greater => unsafe { (*node).rb_right },
                // SAFETY: `node` is a non-null node so it is valid by the type invariants.
                Ordering::Equal => return Some(unsafe { &(*this).value }),
            }
        }
        None
    }

    /// Returns a mutable reference to the value corresponding to the key.
    pub fn get_mut(&mut self, key: &K) -> Option<&mut V> {
        self.find_mut(key).map(|node| node.into_mut())
    }

    /// Removes the node with the given key from the tree.
    ///
    /// It returns the node that was removed if one exists, or [`None`] otherwise.
    pub fn remove_node(&mut self, key: &K) -> Option<RBTreeNode<K, V>> {
        self.find_mut(key).map(OccupiedEntry::remove_node)
    }

    /// Removes the node with the given key from the tree.
    ///
    /// It returns the value that was removed if one exists, or [`None`] otherwise.
    pub fn remove(&mut self, key: &K) -> Option<V> {
        self.find_mut(key).map(OccupiedEntry::remove)
    }

    /// Returns a cursor over the tree nodes based on the given key.
    ///
    /// If the given key exists, the cursor starts there.
    /// Otherwise it starts with the first larger key in sort order.
    /// If there is no larger key, it returns [`None`].
    pub fn cursor_lower_bound(&mut self, key: &K) -> Option<Cursor<'_, K, V>>
    where
        K: Ord,
    {
        let mut node = self.root.rb_node;
        let mut best_match: Option<NonNull<Node<K, V>>> = None;
        while !node.is_null() {
            // SAFETY: By the type invariant of `Self`, all non-null `rb_node` pointers stored in
            // `self` point to the links field of `Node<K, V>` objects.
            let this = unsafe { container_of!(node, Node<K, V>, links) }.cast_mut();
            // SAFETY: `this` is a non-null node so it is valid by the type invariants.
            let this_key = unsafe { &(*this).key };
            // SAFETY: `node` is a non-null node so it is valid by the type invariants.
            let left_child = unsafe { (*node).rb_left };
            // SAFETY: `node` is a non-null node so it is valid by the type invariants.
            let right_child = unsafe { (*node).rb_right };
            match key.cmp(this_key) {
                Ordering::Equal => {
                    best_match = NonNull::new(this);
                    break;
                }
                Ordering::Greater => {
                    node = right_child;
                }
                Ordering::Less => {
                    let is_better_match = match best_match {
                        None => true,
                        Some(best) => {
                            // SAFETY: `best` is a non-null node so it is valid by the type
                            // invariants.
                            let best_key = unsafe { &(*best.as_ptr()).key };
                            best_key > this_key
                        }
                    };
                    if is_better_match {
                        best_match = NonNull::new(this);
                    }
                    node = left_child;
                }
            };
        }

        let best = best_match?;

        // SAFETY: `best` is a non-null node so it is valid by the type invariants.
        let links = unsafe { addr_of_mut!((*best.as_ptr()).links) };

        NonNull::new(links).map(|current| {
            // INVARIANT:
            //   - `current` is a valid node in the [`RBTree`] pointed to by `self`.
            Cursor {
                current,
                tree: self,
            }
        })
    }
}

impl<K, V> Default for RBTree<K, V> {
    fn default() -> Self {
        Self::new()
    }
}

impl<K, V> Drop for RBTree<K, V> {
    fn drop(&mut self) {
        // SAFETY: `root` is valid as it's embedded in `self` and we have a valid `self`.
        let mut next = unsafe { bindings::rb_first_postorder(&self.root) };

        // INVARIANT: The loop invariant is that all tree nodes from `next` in postorder are valid.
        while !next.is_null() {
            // SAFETY: All links fields we create are in a `Node<K, V>`.
            let this = unsafe { container_of!(next, Node<K, V>, links) };

            // Find out what the next node is before disposing of the current one.
            // SAFETY: `next` and all nodes in postorder are still valid.
            next = unsafe { bindings::rb_next_postorder(next) };

            // INVARIANT: This is the destructor, so we break the type invariant during clean-up,
            // but it is not observable. The loop invariant is still maintained.

            // SAFETY: `this` is valid per the loop invariant.
            unsafe { drop(Box::from_raw(this.cast_mut())) };
        }
    }
}

/// A bidirectional cursor over the tree nodes, sorted by key.
///
/// # Examples
///
/// In the following example, we obtain a cursor to the first element in the tree.
/// The cursor allows us to iterate bidirectionally over key/value pairs in the tree.
///
/// ```
/// use kernel::{alloc::flags, rbtree::RBTree};
///
/// // Create a new tree.
/// let mut tree = RBTree::new();
///
/// // Insert three elements.
/// tree.try_create_and_insert(10, 100, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(20, 200, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// // Get a cursor to the first element.
/// let mut cursor = tree.cursor_front().unwrap();
/// let mut current = cursor.current();
/// assert_eq!(current, (&10, &100));
///
/// // Move the cursor, updating it to the 2nd element.
/// cursor = cursor.move_next().unwrap();
/// current = cursor.current();
/// assert_eq!(current, (&20, &200));
///
/// // Peek at the next element without impacting the cursor.
/// let next = cursor.peek_next().unwrap();
/// assert_eq!(next, (&30, &300));
/// current = cursor.current();
/// assert_eq!(current, (&20, &200));
///
/// // Moving past the last element causes the cursor to return [`None`].
/// cursor = cursor.move_next().unwrap();
/// current = cursor.current();
/// assert_eq!(current, (&30, &300));
/// let cursor = cursor.move_next();
/// assert!(cursor.is_none());
///
/// # Ok::<(), Error>(())
/// ```
///
/// A cursor can also be obtained at the last element in the tree.
///
/// ```
/// use kernel::{alloc::flags, rbtree::RBTree};
///
/// // Create a new tree.
/// let mut tree = RBTree::new();
///
/// // Insert three elements.
/// tree.try_create_and_insert(10, 100, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(20, 200, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// let mut cursor = tree.cursor_back().unwrap();
/// let current = cursor.current();
/// assert_eq!(current, (&30, &300));
///
/// # Ok::<(), Error>(())
/// ```
///
/// Obtaining a cursor returns [`None`] if the tree is empty.
///
/// ```
/// use kernel::rbtree::RBTree;
///
/// let mut tree: RBTree<u16, u16> = RBTree::new();
/// assert!(tree.cursor_front().is_none());
///
/// # Ok::<(), Error>(())
/// ```
///
/// [`RBTree::cursor_lower_bound`] can be used to start at an arbitrary node in the tree.
///
/// ```
/// use kernel::{alloc::flags, rbtree::RBTree};
///
/// // Create a new tree.
/// let mut tree = RBTree::new();
///
/// // Insert five elements.
/// tree.try_create_and_insert(10, 100, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(20, 200, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(40, 400, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(50, 500, flags::GFP_KERNEL)?;
///
/// // If the provided key exists, a cursor to that key is returned.
/// let cursor = tree.cursor_lower_bound(&20).unwrap();
/// let current = cursor.current();
/// assert_eq!(current, (&20, &200));
///
/// // If the provided key doesn't exist, a cursor to the first larger element in sort order is
/// // returned.
/// let cursor = tree.cursor_lower_bound(&25).unwrap();
/// let current = cursor.current();
/// assert_eq!(current, (&30, &300));
///
/// // If there is no larger key, [`None`] is returned.
/// let cursor = tree.cursor_lower_bound(&55);
/// assert!(cursor.is_none());
///
/// # Ok::<(), Error>(())
/// ```
///
/// The cursor allows mutation of values in the tree.
///
/// ```
/// use kernel::{alloc::flags, rbtree::RBTree};
///
/// // Create a new tree.
/// let mut tree = RBTree::new();
///
/// // Insert three elements.
/// tree.try_create_and_insert(10, 100, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(20, 200, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// // Retrieve a cursor.
/// let mut cursor = tree.cursor_front().unwrap();
///
/// // Get a mutable reference to the current value.
/// let (k, v) = cursor.current_mut();
/// *v = 1000;
///
/// // The updated value is reflected in the tree.
/// let updated = tree.get(&10).unwrap();
/// assert_eq!(updated, &1000);
///
/// # Ok::<(), Error>(())
/// ```
///
/// It also allows node removal. The following examples demonstrate the behavior of removing the
/// current node.
///
/// ```
/// use kernel::{alloc::flags, rbtree::RBTree};
///
/// // Create a new tree.
/// let mut tree = RBTree::new();
///
/// // Insert three elements.
/// tree.try_create_and_insert(10, 100, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(20, 200, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// // Remove the first element.
/// let mut cursor = tree.cursor_front().unwrap();
/// let mut current = cursor.current();
/// assert_eq!(current, (&10, &100));
/// cursor = cursor.remove_current().0.unwrap();
///
/// // If a node exists after the current element, it is returned.
/// current = cursor.current();
/// assert_eq!(current, (&20, &200));
///
/// // Get a cursor to the last element, and remove it.
/// cursor = tree.cursor_back().unwrap();
/// current = cursor.current();
/// assert_eq!(current, (&30, &300));
///
/// // Since there is no next node, the previous node is returned.
/// cursor = cursor.remove_current().0.unwrap();
/// current = cursor.current();
/// assert_eq!(current, (&20, &200));
///
/// // Removing the last element in the tree returns [`None`].
/// assert!(cursor.remove_current().0.is_none());
///
/// # Ok::<(), Error>(())
/// ```
///
/// Nodes adjacent to the current node can also be removed.
///
/// ```
/// use kernel::{alloc::flags, rbtree::RBTree};
///
/// // Create a new tree.
/// let mut tree = RBTree::new();
///
/// // Insert three elements.
/// tree.try_create_and_insert(10, 100, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(20, 200, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// // Get a cursor to the first element.
/// let mut cursor = tree.cursor_front().unwrap();
/// let mut current = cursor.current();
/// assert_eq!(current, (&10, &100));
///
/// // Calling `remove_prev` from the first element returns [`None`].
/// assert!(cursor.remove_prev().is_none());
///
/// // Get a cursor to the last element.
/// cursor = tree.cursor_back().unwrap();
/// current = cursor.current();
/// assert_eq!(current, (&30, &300));
///
/// // Calling `remove_prev` removes and returns the middle element.
/// assert_eq!(cursor.remove_prev().unwrap().to_key_value(), (20, 200));
///
/// // Calling `remove_next` from the last element returns [`None`].
/// assert!(cursor.remove_next().is_none());
///
/// // Move to the first element.
/// cursor = cursor.move_prev().unwrap();
/// current = cursor.current();
/// assert_eq!(current, (&10, &100));
///
/// // Calling `remove_next` removes and returns the last element.
/// assert_eq!(cursor.remove_next().unwrap().to_key_value(), (30, 300));
///
/// # Ok::<(), Error>(())
/// ```
///
/// # Invariants
/// - `current` points to a node that is in the same [`RBTree`] as `tree`.
pub struct Cursor<'a, K, V> {
    tree: &'a mut RBTree<K, V>,
    current: NonNull<bindings::rb_node>,
}

// SAFETY: The [`Cursor`] has exclusive access to both `K` and `V`, so it is sufficient to require
// them to be `Send`. The cursor only gives out immutable references to the keys, but since it has
// exclusive access to those same keys, `Send` is sufficient.
// `Sync` would be okay, but it is more restrictive to the user.
unsafe impl<'a, K: Send, V: Send> Send for Cursor<'a, K, V> {}

// SAFETY: The [`Cursor`] gives out immutable references to K and mutable references to V,
// so it has the same thread safety requirements as mutable references.
unsafe impl<'a, K: Sync, V: Sync> Sync for Cursor<'a, K, V> {}

impl<'a, K, V> Cursor<'a, K, V> {
    /// The current node.
    pub fn current(&self) -> (&K, &V) {
        // SAFETY:
        //   - `self.current` is a valid node by the type invariants.
        //   - We have an immutable reference by the function signature.
        unsafe { Self::to_key_value(self.current) }
    }

    /// The current node, with a mutable value.
    pub fn current_mut(&mut self) -> (&K, &mut V) {
        // SAFETY:
        //   - `self.current` is a valid node by the type invariants.
        //   - We have a mutable reference by the function signature.
        unsafe { Self::to_key_value_mut(self.current) }
    }

    /// Remove the current node from the tree.
    ///
    /// Returns a tuple where the first element is a cursor to the next node, if it exists,
    /// else the previous node, else [`None`] (if the tree becomes empty). The second element
    /// is the removed node.
    pub fn remove_current(self) -> (Option<Self>, RBTreeNode<K, V>) {
        let prev = self.get_neighbor_raw(Direction::Prev);
        let next = self.get_neighbor_raw(Direction::Next);
        // SAFETY: By the type invariant of `Self`, all non-null `rb_node` pointers stored in
        // `self` point to the links field of `Node<K, V>` objects.
        let this = unsafe { container_of!(self.current.as_ptr(), Node<K, V>, links) }.cast_mut();
        // SAFETY: `this` is valid by the type invariants as described above.
        let node = unsafe { Box::from_raw(this) };
        let node = RBTreeNode { node };
        // SAFETY: The reference to the tree used to create the cursor outlives the cursor, so
        // the tree cannot change. By the tree invariant, all nodes are valid.
        unsafe { bindings::rb_erase(&mut (*this).links, addr_of_mut!(self.tree.root)) };

        let current = match (prev, next) {
            (_, Some(next)) => next,
            (Some(prev), None) => prev,
            (None, None) => {
                return (None, node);
            }
        };

        (
            // INVARIANT:
            //   - `current` is a valid node in the [`RBTree`] pointed to by `self.tree`.
            Some(Self {
                current,
                tree: self.tree,
            }),
            node,
        )
    }

    /// Remove the previous node, returning it if it exists.
    pub fn remove_prev(&mut self) -> Option<RBTreeNode<K, V>> {
        self.remove_neighbor(Direction::Prev)
    }

    /// Remove the next node, returning it if it exists.
    pub fn remove_next(&mut self) -> Option<RBTreeNode<K, V>> {
        self.remove_neighbor(Direction::Next)
    }

    fn remove_neighbor(&mut self, direction: Direction) -> Option<RBTreeNode<K, V>> {
        if let Some(neighbor) = self.get_neighbor_raw(direction) {
            let neighbor = neighbor.as_ptr();
            // SAFETY: The reference to the tree used to create the cursor outlives the cursor, so
            // the tree cannot change. By the tree invariant, all nodes are valid.
            unsafe { bindings::rb_erase(neighbor, addr_of_mut!(self.tree.root)) };
            // SAFETY: By the type invariant of `Self`, all non-null `rb_node` pointers stored in
            // `self` point to the links field of `Node<K, V>` objects.
            let this = unsafe { container_of!(neighbor, Node<K, V>, links) }.cast_mut();
            // SAFETY: `this` is valid by the type invariants as described above.
            let node = unsafe { Box::from_raw(this) };
            return Some(RBTreeNode { node });
        }
        None
    }

    /// Move the cursor to the previous node, returning [`None`] if it doesn't exist.
    pub fn move_prev(self) -> Option<Self> {
        self.mv(Direction::Prev)
    }

    /// Move the cursor to the next node, returning [`None`] if it doesn't exist.
    pub fn move_next(self) -> Option<Self> {
        self.mv(Direction::Next)
    }

    fn mv(self, direction: Direction) -> Option<Self> {
        // INVARIANT:
        //   - `neighbor` is a valid node in the [`RBTree`] pointed to by `self.tree`.
        self.get_neighbor_raw(direction).map(|neighbor| Self {
            tree: self.tree,
            current: neighbor,
        })
    }

    /// Access the previous node without moving the cursor.
    pub fn peek_prev(&self) -> Option<(&K, &V)> {
        self.peek(Direction::Prev)
    }

    /// Access the next node without moving the cursor.
    pub fn peek_next(&self) -> Option<(&K, &V)> {
        self.peek(Direction::Next)
    }

    fn peek(&self, direction: Direction) -> Option<(&K, &V)> {
        self.get_neighbor_raw(direction).map(|neighbor| {
            // SAFETY:
            //   - `neighbor` is a valid tree node.
            //   - By the function signature, we have an immutable reference to `self`.
            unsafe { Self::to_key_value(neighbor) }
        })
    }

    /// Access the previous node mutably without moving the cursor.
    pub fn peek_prev_mut(&mut self) -> Option<(&K, &mut V)> {
        self.peek_mut(Direction::Prev)
    }

    /// Access the next node mutably without moving the cursor.
    pub fn peek_next_mut(&mut self) -> Option<(&K, &mut V)> {
        self.peek_mut(Direction::Next)
    }

    fn peek_mut(&mut self, direction: Direction) -> Option<(&K, &mut V)> {
        self.get_neighbor_raw(direction).map(|neighbor| {
            // SAFETY:
            //   - `neighbor` is a valid tree node.
            //   - By the function signature, we have a mutable reference to `self`.
            unsafe { Self::to_key_value_mut(neighbor) }
        })
    }

    fn get_neighbor_raw(&self, direction: Direction) -> Option<NonNull<bindings::rb_node>> {
        // SAFETY: `self.current` is valid by the type invariants.
        let neighbor = unsafe {
            match direction {
                Direction::Prev => bindings::rb_prev(self.current.as_ptr()),
                Direction::Next => bindings::rb_next(self.current.as_ptr()),
            }
        };

        NonNull::new(neighbor)
    }

    /// SAFETY:
    /// - `node` must be a valid pointer to a node in an [`RBTree`].
    /// - The caller has immutable access to `node` for the duration of 'b.
    unsafe fn to_key_value<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b V) {
        // SAFETY: the caller guarantees that `node` is a valid pointer in an `RBTree`.
        let (k, v) = unsafe { Self::to_key_value_raw(node) };
        // SAFETY: the caller guarantees immutable access to `node`.
        (k, unsafe { &*v })
    }

    /// SAFETY:
    /// - `node` must be a valid pointer to a node in an [`RBTree`].
    /// - The caller has mutable access to `node` for the duration of 'b.
    unsafe fn to_key_value_mut<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b mut V) {
        // SAFETY: the caller guarantees that `node` is a valid pointer in an `RBTree`.
        let (k, v) = unsafe { Self::to_key_value_raw(node) };
        // SAFETY: the caller guarantees mutable access to `node`.
        (k, unsafe { &mut *v })
    }

    /// SAFETY:
    /// - `node` must be a valid pointer to a node in an [`RBTree`].
    /// - The caller has immutable access to the key for the duration of 'b.
    unsafe fn to_key_value_raw<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, *mut V) {
        // SAFETY: By the type invariant of `Self`, all non-null `rb_node` pointers stored in
        // `self` point to the links field of `Node<K, V>` objects.
        let this = unsafe { container_of!(node.as_ptr(), Node<K, V>, links) }.cast_mut();
        // SAFETY: The passed `node` is the current node or a non-null neighbor,
        // thus `this` is valid by the type invariants.
        let k = unsafe { &(*this).key };
        // SAFETY: The passed `node` is the current node or a non-null neighbor,
        // thus `this` is valid by the type invariants.
        let v = unsafe { addr_of_mut!((*this).value) };
        (k, v)
    }
}

/// Direction for [`Cursor`] operations.
enum Direction {
    /// the node immediately before, in sort order
    Prev,
    /// the node immediately after, in sort order
    Next,
}

impl<'a, K, V> IntoIterator for &'a RBTree<K, V> {
    type Item = (&'a K, &'a V);
    type IntoIter = Iter<'a, K, V>;

    fn into_iter(self) -> Self::IntoIter {
        self.iter()
    }
}

/// An iterator over the nodes of a [`RBTree`].
///
/// Instances are created by calling [`RBTree::iter`].
pub struct Iter<'a, K, V> {
    _tree: PhantomData<&'a RBTree<K, V>>,
    iter_raw: IterRaw<K, V>,
}

// SAFETY: The [`Iter`] gives out immutable references to K and V, so it has the same
// thread safety requirements as immutable references.
unsafe impl<'a, K: Sync, V: Sync> Send for Iter<'a, K, V> {}

// SAFETY: The [`Iter`] gives out immutable references to K and V, so it has the same
// thread safety requirements as immutable references.
955 + unsafe impl<'a, K: Sync, V: Sync> Sync for Iter<'a, K, V> {} 956 + 957 + impl<'a, K, V> Iterator for Iter<'a, K, V> { 958 + type Item = (&'a K, &'a V); 959 + 960 + fn next(&mut self) -> Option<Self::Item> { 961 + // SAFETY: Due to `self._tree`, `k` and `v` are valid for the lifetime of `'a`. 962 + self.iter_raw.next().map(|(k, v)| unsafe { (&*k, &*v) }) 963 + } 964 + } 965 + 966 + impl<'a, K, V> IntoIterator for &'a mut RBTree<K, V> { 967 + type Item = (&'a K, &'a mut V); 968 + type IntoIter = IterMut<'a, K, V>; 969 + 970 + fn into_iter(self) -> Self::IntoIter { 971 + self.iter_mut() 972 + } 973 + } 974 + 975 + /// A mutable iterator over the nodes of a [`RBTree`]. 976 + /// 977 + /// Instances are created by calling [`RBTree::iter_mut`]. 978 + pub struct IterMut<'a, K, V> { 979 + _tree: PhantomData<&'a mut RBTree<K, V>>, 980 + iter_raw: IterRaw<K, V>, 981 + } 982 + 983 + // SAFETY: The [`IterMut`] has exclusive access to both `K` and `V`, so it is sufficient to require them to be `Send`. 984 + // The iterator only gives out immutable references to the keys, but since the iterator has exclusive access to those same 985 + // keys, `Send` is sufficient. `Sync` would be okay, but it is more restrictive to the user. 986 + unsafe impl<'a, K: Send, V: Send> Send for IterMut<'a, K, V> {} 987 + 988 + // SAFETY: The [`IterMut`] gives out immutable references to K and mutable references to V, so it has the same 989 + // thread safety requirements as mutable references. 990 + unsafe impl<'a, K: Sync, V: Sync> Sync for IterMut<'a, K, V> {} 991 + 992 + impl<'a, K, V> Iterator for IterMut<'a, K, V> { 993 + type Item = (&'a K, &'a mut V); 994 + 995 + fn next(&mut self) -> Option<Self::Item> { 996 + self.iter_raw.next().map(|(k, v)| 997 + // SAFETY: Due to `&mut self`, we have exclusive access to `k` and `v`, for the lifetime of `'a`. 998 + unsafe { (&*k, &mut *v) }) 999 + } 1000 + } 1001 + 1002 + /// A raw iterator over the nodes of a [`RBTree`]. 
1003 + /// 1004 + /// # Invariants 1005 + /// - `self.next` is a valid pointer. 1006 + /// - `self.next` points to a node stored inside of a valid `RBTree`. 1007 + struct IterRaw<K, V> { 1008 + next: *mut bindings::rb_node, 1009 + _phantom: PhantomData<fn() -> (K, V)>, 1010 + } 1011 + 1012 + impl<K, V> Iterator for IterRaw<K, V> { 1013 + type Item = (*mut K, *mut V); 1014 + 1015 + fn next(&mut self) -> Option<Self::Item> { 1016 + if self.next.is_null() { 1017 + return None; 1018 + } 1019 + 1020 + // SAFETY: By the type invariant of `IterRaw`, `self.next` is a valid node in an `RBTree`, 1021 + // and by the type invariant of `RBTree`, all nodes point to the links field of `Node<K, V>` objects. 1022 + let cur = unsafe { container_of!(self.next, Node<K, V>, links) }.cast_mut(); 1023 + 1024 + // SAFETY: `self.next` is a valid tree node by the type invariants. 1025 + self.next = unsafe { bindings::rb_next(self.next) }; 1026 + 1027 + // SAFETY: By the same reasoning above, it is safe to dereference the node. 1028 + Some(unsafe { (addr_of_mut!((*cur).key), addr_of_mut!((*cur).value)) }) 1029 + } 1030 + } 1031 + 1032 + /// A memory reservation for a red-black tree node. 1033 + /// 1034 + /// 1035 + /// It contains the memory needed to hold a node that can be inserted into a red-black tree. One 1036 + /// can be obtained by directly allocating it ([`RBTreeNodeReservation::new`]). 1037 + pub struct RBTreeNodeReservation<K, V> { 1038 + node: Box<MaybeUninit<Node<K, V>>>, 1039 + } 1040 + 1041 + impl<K, V> RBTreeNodeReservation<K, V> { 1042 + /// Allocates memory for a node to be eventually initialised and inserted into the tree via a 1043 + /// call to [`RBTree::insert`]. 1044 + pub fn new(flags: Flags) -> Result<RBTreeNodeReservation<K, V>> { 1045 + Ok(RBTreeNodeReservation { 1046 + node: <Box<_> as BoxExt<_>>::new_uninit(flags)?, 1047 + }) 1048 + } 1049 + } 1050 + 1051 + // SAFETY: This doesn't actually contain K or V, and is just a memory allocation. 
Those can always 1052 + // be moved across threads. 1053 + unsafe impl<K, V> Send for RBTreeNodeReservation<K, V> {} 1054 + 1055 + // SAFETY: This doesn't actually contain K or V, and is just a memory allocation. 1056 + unsafe impl<K, V> Sync for RBTreeNodeReservation<K, V> {} 1057 + 1058 + impl<K, V> RBTreeNodeReservation<K, V> { 1059 + /// Initialises a node reservation. 1060 + /// 1061 + /// It then becomes an [`RBTreeNode`] that can be inserted into a tree. 1062 + pub fn into_node(mut self, key: K, value: V) -> RBTreeNode<K, V> { 1063 + self.node.write(Node { 1064 + key, 1065 + value, 1066 + links: bindings::rb_node::default(), 1067 + }); 1068 + // SAFETY: We just wrote to it. 1069 + let node = unsafe { self.node.assume_init() }; 1070 + RBTreeNode { node } 1071 + } 1072 + } 1073 + 1074 + /// A red-black tree node. 1075 + /// 1076 + /// The node is fully initialised (with key and value) and can be inserted into a tree without any 1077 + /// extra allocations or failure paths. 1078 + pub struct RBTreeNode<K, V> { 1079 + node: Box<Node<K, V>>, 1080 + } 1081 + 1082 + impl<K, V> RBTreeNode<K, V> { 1083 + /// Allocates and initialises a node that can be inserted into the tree via 1084 + /// [`RBTree::insert`]. 1085 + pub fn new(key: K, value: V, flags: Flags) -> Result<RBTreeNode<K, V>> { 1086 + Ok(RBTreeNodeReservation::new(flags)?.into_node(key, value)) 1087 + } 1088 + 1089 + /// Get the key and value from inside the node. 1090 + pub fn to_key_value(self) -> (K, V) { 1091 + (self.node.key, self.node.value) 1092 + } 1093 + } 1094 + 1095 + // SAFETY: If K and V can be sent across threads, then it's also okay to send [`RBTreeNode`] across 1096 + // threads. 1097 + unsafe impl<K: Send, V: Send> Send for RBTreeNode<K, V> {} 1098 + 1099 + // SAFETY: If K and V can be accessed without synchronization, then it's also okay to access 1100 + // [`RBTreeNode`] without synchronization. 
1101 + unsafe impl<K: Sync, V: Sync> Sync for RBTreeNode<K, V> {} 1102 + 1103 + impl<K, V> RBTreeNode<K, V> { 1104 + /// Drop the key and value, but keep the allocation. 1105 + /// 1106 + /// It then becomes a reservation that can be re-initialised into a different node (i.e., with 1107 + /// a different key and/or value). 1108 + /// 1109 + /// The existing key and value are dropped in-place as part of this operation, that is, memory 1110 + /// may be freed (but only for the key/value; memory for the node itself is kept for reuse). 1111 + pub fn into_reservation(self) -> RBTreeNodeReservation<K, V> { 1112 + RBTreeNodeReservation { 1113 + node: Box::drop_contents(self.node), 1114 + } 1115 + } 1116 + } 1117 + 1118 + /// A view into a single entry in a map, which may either be vacant or occupied. 1119 + /// 1120 + /// This enum is constructed by [`RBTree::entry`]. 1121 + /// 1122 + /// [`entry`]: fn@RBTree::entry 1123 + pub enum Entry<'a, K, V> { 1124 + /// This [`RBTree`] does not have a node with this key. 1125 + Vacant(VacantEntry<'a, K, V>), 1126 + /// This [`RBTree`] already has a node with this key. 1127 + Occupied(OccupiedEntry<'a, K, V>), 1128 + } 1129 + 1130 + /// Like [`Entry`], except that it doesn't have ownership of the key. 1131 + enum RawEntry<'a, K, V> { 1132 + Vacant(RawVacantEntry<'a, K, V>), 1133 + Occupied(OccupiedEntry<'a, K, V>), 1134 + } 1135 + 1136 + /// A view into a vacant entry in a [`RBTree`]. It is part of the [`Entry`] enum. 1137 + pub struct VacantEntry<'a, K, V> { 1138 + key: K, 1139 + raw: RawVacantEntry<'a, K, V>, 1140 + } 1141 + 1142 + /// Like [`VacantEntry`], but doesn't hold on to the key. 1143 + /// 1144 + /// # Invariants 1145 + /// - `parent` may be null if the new node becomes the root. 1146 + /// - `child_field_of_parent` is a valid pointer to the left-child or right-child of `parent`. If `parent` is 1147 + /// null, it is a pointer to the root of the [`RBTree`]. 
1148 + struct RawVacantEntry<'a, K, V> { 1149 + rbtree: *mut RBTree<K, V>, 1150 + /// The node that will become the parent of the new node if we insert one. 1151 + parent: *mut bindings::rb_node, 1152 + /// This points to the left-child or right-child field of `parent`, or `root` if `parent` is 1153 + /// null. 1154 + child_field_of_parent: *mut *mut bindings::rb_node, 1155 + _phantom: PhantomData<&'a mut RBTree<K, V>>, 1156 + } 1157 + 1158 + impl<'a, K, V> RawVacantEntry<'a, K, V> { 1159 + /// Inserts the given node into the [`RBTree`] at this entry. 1160 + /// 1161 + /// The `node` must have a key such that inserting it here does not break the ordering of this 1162 + /// [`RBTree`]. 1163 + fn insert(self, node: RBTreeNode<K, V>) -> &'a mut V { 1164 + let node = Box::into_raw(node.node); 1165 + 1166 + // SAFETY: `node` is valid at least until we call `Box::from_raw`, which only happens when 1167 + // the node is removed or replaced. 1168 + let node_links = unsafe { addr_of_mut!((*node).links) }; 1169 + 1170 + // INVARIANT: We are linking in a new node, which is valid. It remains valid because we 1171 + // "forgot" it with `Box::into_raw`. 1172 + // SAFETY: The type invariants of `RawVacantEntry` are exactly the safety requirements of `rb_link_node`. 1173 + unsafe { bindings::rb_link_node(node_links, self.parent, self.child_field_of_parent) }; 1174 + 1175 + // SAFETY: All pointers are valid. `node` has just been inserted into the tree. 1176 + unsafe { bindings::rb_insert_color(node_links, addr_of_mut!((*self.rbtree).root)) }; 1177 + 1178 + // SAFETY: The node is valid until we remove it from the tree. 1179 + unsafe { &mut (*node).value } 1180 + } 1181 + } 1182 + 1183 + impl<'a, K, V> VacantEntry<'a, K, V> { 1184 + /// Inserts the given node into the [`RBTree`] at this entry. 
1185 + pub fn insert(self, value: V, reservation: RBTreeNodeReservation<K, V>) -> &'a mut V { 1186 + self.raw.insert(reservation.into_node(self.key, value)) 1187 + } 1188 + } 1189 + 1190 + /// A view into an occupied entry in a [`RBTree`]. It is part of the [`Entry`] enum. 1191 + /// 1192 + /// # Invariants 1193 + /// - `node_links` is a valid, non-null pointer to a tree node in `self.rbtree` 1194 + pub struct OccupiedEntry<'a, K, V> { 1195 + rbtree: &'a mut RBTree<K, V>, 1196 + /// The node that this entry corresponds to. 1197 + node_links: *mut bindings::rb_node, 1198 + } 1199 + 1200 + impl<'a, K, V> OccupiedEntry<'a, K, V> { 1201 + /// Gets a reference to the value in the entry. 1202 + pub fn get(&self) -> &V { 1203 + // SAFETY: 1204 + // - `self.node_links` is a valid pointer to a node in the tree. 1205 + // - We have shared access to the underlying tree, and can thus give out a shared reference. 1206 + unsafe { &(*container_of!(self.node_links, Node<K, V>, links)).value } 1207 + } 1208 + 1209 + /// Gets a mutable reference to the value in the entry. 1210 + pub fn get_mut(&mut self) -> &mut V { 1211 + // SAFETY: 1212 + // - `self.node_links` is a valid pointer to a node in the tree. 1213 + // - We have exclusive access to the underlying tree, and can thus give out a mutable reference. 1214 + unsafe { &mut (*(container_of!(self.node_links, Node<K, V>, links).cast_mut())).value } 1215 + } 1216 + 1217 + /// Converts the entry into a mutable reference to its value. 1218 + /// 1219 + /// If you need multiple references to the `OccupiedEntry`, see [`Self::get_mut`]. 1220 + pub fn into_mut(self) -> &'a mut V { 1221 + // SAFETY: 1222 + // - `self.node_links` is a valid pointer to a node in the tree. 1223 + // - This consumes the `&'a mut RBTree<K, V>`, therefore it can give out a mutable reference that lives for `'a`. 
1224 + unsafe { &mut (*(container_of!(self.node_links, Node<K, V>, links).cast_mut())).value } 1225 + } 1226 + 1227 + /// Remove this entry from the [`RBTree`]. 1228 + pub fn remove_node(self) -> RBTreeNode<K, V> { 1229 + // SAFETY: The node is a node in the tree, so it is valid. 1230 + unsafe { bindings::rb_erase(self.node_links, &mut self.rbtree.root) }; 1231 + 1232 + // INVARIANT: The node is being returned and the caller may free it, however, it was 1233 + // removed from the tree. So the invariants still hold. 1234 + RBTreeNode { 1235 + // SAFETY: The node was a node in the tree, but we removed it, so we can convert it 1236 + // back into a box. 1237 + node: unsafe { 1238 + Box::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) 1239 + }, 1240 + } 1241 + } 1242 + 1243 + /// Takes the value of the entry out of the map, and returns it. 1244 + pub fn remove(self) -> V { 1245 + self.remove_node().node.value 1246 + } 1247 + 1248 + /// Swap the current node for the provided node. 1249 + /// 1250 + /// The key of both nodes must be equal. 1251 + fn replace(self, node: RBTreeNode<K, V>) -> RBTreeNode<K, V> { 1252 + let node = Box::into_raw(node.node); 1253 + 1254 + // SAFETY: `node` is valid at least until we call `Box::from_raw`, which only happens when 1255 + // the node is removed or replaced. 1256 + let new_node_links = unsafe { addr_of_mut!((*node).links) }; 1257 + 1258 + // SAFETY: This updates the pointers so that `new_node_links` is in the tree where 1259 + // `self.node_links` used to be. 1260 + unsafe { 1261 + bindings::rb_replace_node(self.node_links, new_node_links, &mut self.rbtree.root) 1262 + }; 1263 + 1264 + // SAFETY: 1265 + // - `self.node_links` produces a valid pointer to a node in the tree. 1266 + // - Now that we removed this entry from the tree, we can convert the node to a box. 
1267 + let old_node = 1268 + unsafe { Box::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) }; 1269 + 1270 + RBTreeNode { node: old_node } 1271 + } 1272 + } 1273 + 1274 + struct Node<K, V> { 1275 + links: bindings::rb_node, 1276 + key: K, 1277 + value: V, 1278 + }
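The reservation flow above (`RBTreeNodeReservation::new` allocating, `into_node` initialising without a failure path) can be sketched in plain std Rust. This is an illustrative analogy, not kernel code: `Node`, `reserve`, and `into_node` here are hypothetical stand-ins for the kernel types, and the raw-pointer cast replaces the kernel's `Box` extension machinery.

```rust
use std::mem::MaybeUninit;

struct Node {
    key: u32,
    value: &'static str,
}

// Allocate uninitialized memory up front, where allocation can fail.
fn reserve() -> Box<MaybeUninit<Node>> {
    Box::new(MaybeUninit::uninit())
}

// Initialize the reservation later, with no allocation and no failure path,
// mirroring `RBTreeNodeReservation::into_node`.
fn into_node(mut slot: Box<MaybeUninit<Node>>, key: u32, value: &'static str) -> Box<Node> {
    slot.write(Node { key, value });
    // SAFETY: the value was fully written just above. `MaybeUninit<T>` has
    // the same layout as `T`, so the pointer cast is sound; this avoids the
    // `Box::assume_init` API, which is unstable on older toolchains.
    unsafe { Box::from_raw(Box::into_raw(slot).cast::<Node>()) }
}

fn main() {
    let node = into_node(reserve(), 7, "seven");
    assert_eq!(node.key, 7);
    println!("{} -> {}", node.key, node.value);
}
```

Splitting allocation from initialisation is what lets the kernel pre-allocate nodes while it may still sleep, then insert them later in contexts where allocation is forbidden.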
+1 -1
rust/kernel/std_vendor.rs
··· 136 136 /// 137 137 /// [`std::dbg`]: https://doc.rust-lang.org/std/macro.dbg.html 138 138 /// [`eprintln`]: https://doc.rust-lang.org/std/macro.eprintln.html 139 - /// [`printk`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html 139 + /// [`printk`]: https://docs.kernel.org/core-api/printk-basics.html 140 140 /// [`pr_info`]: crate::pr_info! 141 141 /// [`pr_debug`]: crate::pr_debug! 142 142 #[macro_export]
+2 -23
rust/kernel/sync/arc.rs
··· 12 12 //! 2. It does not support weak references, which allows it to be half the size. 13 13 //! 3. It saturates the reference count instead of aborting when it goes over a threshold. 14 14 //! 4. It does not provide a `get_mut` method, so the ref counted object is pinned. 15 + //! 5. The object in [`Arc`] is pinned implicitly. 15 16 //! 16 17 //! [`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html 17 18 18 19 use crate::{ 19 20 alloc::{box_ext::BoxExt, AllocError, Flags}, 20 - error::{self, Error}, 21 + bindings, 21 22 init::{self, InPlaceInit, Init, PinInit}, 22 23 try_init, 23 24 types::{ForeignOwnable, Opaque}, ··· 209 208 // SAFETY: We just created `inner` with a reference count of 1, which is owned by the new 210 209 // `Arc` object. 211 210 Ok(unsafe { Self::from_inner(Box::leak(inner).into()) }) 212 - } 213 - 214 - /// Use the given initializer to in-place initialize a `T`. 215 - /// 216 - /// If `T: !Unpin` it will not be able to move afterwards. 217 - #[inline] 218 - pub fn pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> error::Result<Self> 219 - where 220 - Error: From<E>, 221 - { 222 - UniqueArc::pin_init(init, flags).map(|u| u.into()) 223 - } 224 - 225 - /// Use the given initializer to in-place initialize a `T`. 226 - /// 227 - /// This is equivalent to [`Arc<T>::pin_init`], since an [`Arc`] is always pinned. 228 - #[inline] 229 - pub fn init<E>(init: impl Init<T, E>, flags: Flags) -> error::Result<Self> 230 - where 231 - Error: From<E>, 232 - { 233 - UniqueArc::init(init, flags).map(|u| u.into()) 234 211 } 235 212 } 236 213
+61 -2
rust/kernel/types.rs
··· 7 7 use core::{ 8 8 cell::UnsafeCell, 9 9 marker::{PhantomData, PhantomPinned}, 10 - mem::MaybeUninit, 10 + mem::{ManuallyDrop, MaybeUninit}, 11 11 ops::{Deref, DerefMut}, 12 + pin::Pin, 12 13 ptr::NonNull, 13 14 }; 14 15 ··· 27 26 28 27 /// Converts a Rust-owned object to a foreign-owned one. 29 28 /// 30 - /// The foreign representation is a pointer to void. 29 + /// The foreign representation is a pointer to void. There are no guarantees for this pointer. 30 + /// For example, it might be invalid, dangling or pointing to uninitialized memory. Using it in 31 + /// any way except for [`ForeignOwnable::from_foreign`], [`ForeignOwnable::borrow`], 32 + /// [`ForeignOwnable::try_from_foreign`] can result in undefined behavior. 31 33 fn into_foreign(self) -> *const core::ffi::c_void; 32 34 33 35 /// Borrows a foreign-owned object. ··· 90 86 // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous 91 87 // call to `Self::into_foreign`. 92 88 unsafe { Box::from_raw(ptr as _) } 89 + } 90 + } 91 + 92 + impl<T: 'static> ForeignOwnable for Pin<Box<T>> { 93 + type Borrowed<'a> = Pin<&'a T>; 94 + 95 + fn into_foreign(self) -> *const core::ffi::c_void { 96 + // SAFETY: We are still treating the box as pinned. 97 + Box::into_raw(unsafe { Pin::into_inner_unchecked(self) }) as _ 98 + } 99 + 100 + unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> Pin<&'a T> { 101 + // SAFETY: The safety requirements for this function ensure that the object is still alive, 102 + // so it is safe to dereference the raw pointer. 103 + // The safety requirements of `from_foreign` also ensure that the object remains alive for 104 + // the lifetime of the returned value. 105 + let r = unsafe { &*ptr.cast() }; 106 + 107 + // SAFETY: This pointer originates from a `Pin<Box<T>>`. 
108 + unsafe { Pin::new_unchecked(r) } 109 + } 110 + 111 + unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self { 112 + // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous 113 + // call to `Self::into_foreign`. 114 + unsafe { Pin::new_unchecked(Box::from_raw(ptr as _)) } 93 115 } 94 116 } 95 117 ··· 395 365 ptr, 396 366 _p: PhantomData, 397 367 } 368 + } 369 + 370 + /// Consumes the `ARef`, returning a raw pointer. 371 + /// 372 + /// This function does not change the refcount. After calling this function, the caller is 373 + /// responsible for the refcount previously managed by the `ARef`. 374 + /// 375 + /// # Examples 376 + /// 377 + /// ``` 378 + /// use core::ptr::NonNull; 379 + /// use kernel::types::{ARef, AlwaysRefCounted}; 380 + /// 381 + /// struct Empty {} 382 + /// 383 + /// unsafe impl AlwaysRefCounted for Empty { 384 + /// fn inc_ref(&self) {} 385 + /// unsafe fn dec_ref(_obj: NonNull<Self>) {} 386 + /// } 387 + /// 388 + /// let mut data = Empty {}; 389 + /// let ptr = NonNull::<Empty>::new(&mut data as *mut _).unwrap(); 390 + /// let data_ref: ARef<Empty> = unsafe { ARef::from_raw(ptr) }; 391 + /// let raw_ptr: NonNull<Empty> = ARef::into_raw(data_ref); 392 + /// 393 + /// assert_eq!(ptr, raw_ptr); 394 + /// ``` 395 + pub fn into_raw(me: Self) -> NonNull<T> { 396 + ManuallyDrop::new(me).ptr 398 397 } 399 398 } 400 399
+4
rust/macros/lib.rs
··· 2 2 3 3 //! Crate for all kernel procedural macros. 4 4 5 + // When fixdep scans this, it will find this string `CONFIG_RUSTC_VERSION_TEXT` 6 + // and thus add a dependency on `include/config/RUSTC_VERSION_TEXT`, which is 7 + // touched by Kconfig when the version string from the compiler changes. 8 + 5 9 #[macro_use] 6 10 mod quote; 7 11 mod concat_idents;
+12
rust/macros/module.rs
··· 262 262 263 263 #[cfg(MODULE)] 264 264 #[doc(hidden)] 265 + #[used] 266 + #[link_section = \".init.data\"] 267 + static __UNIQUE_ID___addressable_init_module: unsafe extern \"C\" fn() -> i32 = init_module; 268 + 269 + #[cfg(MODULE)] 270 + #[doc(hidden)] 265 271 #[no_mangle] 266 272 pub extern \"C\" fn cleanup_module() {{ 267 273 // SAFETY: ··· 278 272 // (which delegates to `__init`). 279 273 unsafe {{ __exit() }} 280 274 }} 275 + 276 + #[cfg(MODULE)] 277 + #[doc(hidden)] 278 + #[used] 279 + #[link_section = \".exit.data\"] 280 + static __UNIQUE_ID___addressable_cleanup_module: extern \"C\" fn() = cleanup_module; 281 281 282 282 // Built-in modules are initialized through an initcall pointer 283 283 // and the identifiers need to be unique.
+8
scripts/Kconfig.include
··· 64 64 cc-option-bit = $(if-success,$(CC) -Werror $(1) -E -x c /dev/null -o /dev/null,$(1)) 65 65 m32-flag := $(cc-option-bit,-m32) 66 66 m64-flag := $(cc-option-bit,-m64) 67 + 68 + # $(rustc-option,<flag>) 69 + # Return y if the Rust compiler supports <flag>, n otherwise 70 + # Calls to this should be guarded so that they are not evaluated if 71 + # CONFIG_RUST_IS_AVAILABLE is not set. 72 + # If you are testing for unstable features, consider testing RUSTC_VERSION 73 + # instead, as features may have different completeness while available. 74 + rustc-option = $(success,trap "rm -rf .tmp_$$" EXIT; mkdir .tmp_$$; $(RUSTC) $(1) --crate-type=rlib /dev/null --out-dir=.tmp_$$ -o .tmp_$$/tmp.rlib)
+7 -2
scripts/Makefile.build
··· 273 273 # would not match each other. 274 274 275 275 quiet_cmd_rustc_o_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@ 276 - cmd_rustc_o_rs = $(rust_common_cmd) --emit=obj=$@ $< 276 + cmd_rustc_o_rs = $(rust_common_cmd) --emit=obj=$@ $< $(cmd_objtool) 277 + 278 + define rule_rustc_o_rs 279 + $(call cmd_and_fixdep,rustc_o_rs) 280 + $(call cmd,gen_objtooldep) 281 + endef 277 282 278 283 $(obj)/%.o: $(obj)/%.rs FORCE 279 - +$(call if_changed_dep,rustc_o_rs) 284 + +$(call if_changed_rule,rustc_o_rs) 280 285 281 286 quiet_cmd_rustc_rsi_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@ 282 287 cmd_rustc_rsi_rs = \
+15
scripts/Makefile.compiler
··· 72 72 # ld-option 73 73 # Usage: KBUILD_LDFLAGS += $(call ld-option, -X, -Y) 74 74 ld-option = $(call try-run, $(LD) $(KBUILD_LDFLAGS) $(1) -v,$(1),$(2),$(3)) 75 + 76 + # __rustc-option 77 + # Usage: MY_RUSTFLAGS += $(call __rustc-option,$(RUSTC),$(MY_RUSTFLAGS),-Cinstrument-coverage,-Zinstrument-coverage) 78 + __rustc-option = $(call try-run,\ 79 + $(1) $(2) $(3) --crate-type=rlib /dev/null --out-dir=$$TMPOUT -o "$$TMP",$(3),$(4)) 80 + 81 + # rustc-option 82 + # Usage: rustflags-y += $(call rustc-option,-Cinstrument-coverage,-Zinstrument-coverage) 83 + rustc-option = $(call __rustc-option, $(RUSTC),\ 84 + $(KBUILD_RUSTFLAGS),$(1),$(2)) 85 + 86 + # rustc-option-yn 87 + # Usage: flag := $(call rustc-option-yn,-Cinstrument-coverage) 88 + rustc-option-yn = $(call try-run,\ 89 + $(RUSTC) $(KBUILD_RUSTFLAGS) $(1) --crate-type=rlib /dev/null --out-dir=$$TMPOUT -o "$$TMP",y,n)
+41 -16
scripts/Makefile.kasan
··· 12 12 KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET) 13 13 14 14 cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1))) 15 + rustc-param = $(call rustc-option, -Cllvm-args=-$(1),) 16 + 17 + check-args = $(foreach arg,$(2),$(call $(1),$(arg))) 18 + 19 + kasan_params := 15 20 16 21 ifdef CONFIG_KASAN_STACK 17 22 stack_enable := 1 ··· 46 41 $(call cc-option, -fsanitize=kernel-address \ 47 42 -mllvm -asan-mapping-offset=$(KASAN_SHADOW_OFFSET))) 48 43 49 - # Now, add other parameters enabled similarly in both GCC and Clang. 50 - # As some of them are not supported by older compilers, use cc-param. 51 - CFLAGS_KASAN += $(call cc-param,asan-instrumentation-with-call-threshold=$(call_threshold)) \ 52 - $(call cc-param,asan-stack=$(stack_enable)) \ 53 - $(call cc-param,asan-instrument-allocas=1) \ 54 - $(call cc-param,asan-globals=1) 44 + # The minimum supported `rustc` version has a minimum supported LLVM 45 + # version late enough that we can assume support for -asan-mapping-offset. 46 + RUSTFLAGS_KASAN := -Zsanitizer=kernel-address \ 47 + -Zsanitizer-recover=kernel-address \ 48 + -Cllvm-args=-asan-mapping-offset=$(KASAN_SHADOW_OFFSET) 49 + 50 + # Now, add other parameters enabled similarly in GCC, Clang, and rustc. 51 + # As some of them are not supported by older compilers, these will be filtered 52 + # through `cc-param` or `rust-param` as applicable. 53 + kasan_params += asan-instrumentation-with-call-threshold=$(call_threshold) \ 54 + asan-stack=$(stack_enable) \ 55 + asan-instrument-allocas=1 \ 56 + asan-globals=1 55 57 56 58 # Instrument memcpy/memset/memmove calls by using instrumented __asan_mem*() 57 59 # instead. With compilers that don't support this option, compiler-inserted 58 60 # memintrinsics won't be checked by KASAN on GENERIC_ENTRY architectures. 
59 - CFLAGS_KASAN += $(call cc-param,asan-kernel-mem-intrinsic-prefix=1) 61 + kasan_params += asan-kernel-mem-intrinsic-prefix=1 60 62 61 63 endif # CONFIG_KASAN_GENERIC 62 64 63 65 ifdef CONFIG_KASAN_SW_TAGS 64 66 67 + CFLAGS_KASAN := -fsanitize=kernel-hwaddress 68 + 69 + # This sets flags that will enable SW_TAGS KASAN once enabled in Rust. These 70 + # will not work today, and is guarded against in dependencies for CONFIG_RUST. 71 + RUSTFLAGS_KASAN := -Zsanitizer=kernel-hwaddress \ 72 + -Zsanitizer-recover=kernel-hwaddress 73 + 65 74 ifdef CONFIG_KASAN_INLINE 66 - instrumentation_flags := $(call cc-param,hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)) 75 + kasan_params += hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET) 67 76 else 68 - instrumentation_flags := $(call cc-param,hwasan-instrument-with-calls=1) 77 + kasan_params += hwasan-instrument-with-calls=1 69 78 endif 70 79 71 - CFLAGS_KASAN := -fsanitize=kernel-hwaddress \ 72 - $(call cc-param,hwasan-instrument-stack=$(stack_enable)) \ 73 - $(call cc-param,hwasan-use-short-granules=0) \ 74 - $(call cc-param,hwasan-inline-all-checks=0) \ 75 - $(instrumentation_flags) 80 + kasan_params += hwasan-instrument-stack=$(stack_enable) \ 81 + hwasan-use-short-granules=0 \ 82 + hwasan-inline-all-checks=0 76 83 77 84 # Instrument memcpy/memset/memmove calls by using instrumented __hwasan_mem*(). 78 85 ifeq ($(call clang-min-version, 150000)$(call gcc-min-version, 130000),y) 79 - CFLAGS_KASAN += $(call cc-param,hwasan-kernel-mem-intrinsic-prefix=1) 86 + kasan_params += hwasan-kernel-mem-intrinsic-prefix=1 80 87 endif 81 88 82 89 endif # CONFIG_KASAN_SW_TAGS 83 90 84 - export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE 91 + # Add all as-supported KASAN LLVM parameters requested by the configuration. 92 + CFLAGS_KASAN += $(call check-args, cc-param, $(kasan_params)) 93 + 94 + ifdef CONFIG_RUST 95 + # Avoid calling `rustc-param` unless Rust is enabled. 
96 + RUSTFLAGS_KASAN += $(call check-args, rustc-param, $(kasan_params)) 97 + endif # CONFIG_RUST 98 + 99 + export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE RUSTFLAGS_KASAN
+3
scripts/Makefile.lib
··· 146 146 _c_flags += $(if $(patsubst n%,, \ 147 147 $(KASAN_SANITIZE_$(target-stem).o)$(KASAN_SANITIZE)$(is-kernel-object)), \ 148 148 $(CFLAGS_KASAN), $(CFLAGS_KASAN_NOSANITIZE)) 149 + _rust_flags += $(if $(patsubst n%,, \ 150 + $(KASAN_SANITIZE_$(target-stem).o)$(KASAN_SANITIZE)$(is-kernel-object)), \ 151 + $(RUSTFLAGS_KASAN)) 149 152 endif 150 153 endif 151 154
+79 -39
scripts/generate_rust_target.rs
··· 20 20 Boolean(bool), 21 21 Number(i32), 22 22 String(String), 23 + Array(Vec<Value>), 23 24 Object(Object), 24 25 } 25 26 26 27 type Object = Vec<(String, Value)>; 27 28 28 - /// Minimal "almost JSON" generator (e.g. no `null`s, no arrays, no escaping), 29 + fn comma_sep<T>( 30 + seq: &[T], 31 + formatter: &mut Formatter<'_>, 32 + f: impl Fn(&mut Formatter<'_>, &T) -> Result, 33 + ) -> Result { 34 + if let [ref rest @ .., ref last] = seq[..] { 35 + for v in rest { 36 + f(formatter, v)?; 37 + formatter.write_str(",")?; 38 + } 39 + f(formatter, last)?; 40 + } 41 + Ok(()) 42 + } 43 + 44 + /// Minimal "almost JSON" generator (e.g. no `null`s, no escaping), 29 45 /// enough for this purpose. 30 46 impl Display for Value { 31 47 fn fmt(&self, formatter: &mut Formatter<'_>) -> Result { ··· 49 33 Value::Boolean(boolean) => write!(formatter, "{}", boolean), 50 34 Value::Number(number) => write!(formatter, "{}", number), 51 35 Value::String(string) => write!(formatter, "\"{}\"", string), 36 + Value::Array(values) => { 37 + formatter.write_str("[")?; 38 + comma_sep(&values[..], formatter, |formatter, v| v.fmt(formatter))?; 39 + formatter.write_str("]") 40 + } 52 41 Value::Object(object) => { 53 42 formatter.write_str("{")?; 54 - if let [ref rest @ .., ref last] = object[..] 
{ 55 - for (key, value) in rest { 56 - write!(formatter, "\"{}\": {},", key, value)?; 57 - } 58 - write!(formatter, "\"{}\": {}", last.0, last.1)?; 59 - } 43 + comma_sep(&object[..], formatter, |formatter, v| { 44 + write!(formatter, "\"{}\": {}", v.0, v.1) 45 + })?; 60 46 formatter.write_str("}") 61 47 } 62 48 } 49 + } 50 + } 51 + 52 + impl From<bool> for Value { 53 + fn from(value: bool) -> Self { 54 + Self::Boolean(value) 55 + } 56 + } 57 + 58 + impl From<i32> for Value { 59 + fn from(value: i32) -> Self { 60 + Self::Number(value) 61 + } 62 + } 63 + 64 + impl From<String> for Value { 65 + fn from(value: String) -> Self { 66 + Self::String(value) 67 + } 68 + } 69 + 70 + impl From<&str> for Value { 71 + fn from(value: &str) -> Self { 72 + Self::String(value.to_string()) 73 + } 74 + } 75 + 76 + impl From<Object> for Value { 77 + fn from(object: Object) -> Self { 78 + Self::Object(object) 79 + } 80 + } 81 + 82 + impl<T: Into<Value>, const N: usize> From<[T; N]> for Value { 83 + fn from(i: [T; N]) -> Self { 84 + Self::Array(i.into_iter().map(|v| v.into()).collect()) 63 85 } 64 86 } 65 87 ··· 107 53 fn new() -> TargetSpec { 108 54 TargetSpec(Vec::new()) 109 55 } 110 - } 111 56 112 - trait Push<T> { 113 - fn push(&mut self, key: &str, value: T); 114 - } 115 - 116 - impl Push<bool> for TargetSpec { 117 - fn push(&mut self, key: &str, value: bool) { 118 - self.0.push((key.to_string(), Value::Boolean(value))); 119 - } 120 - } 121 - 122 - impl Push<i32> for TargetSpec { 123 - fn push(&mut self, key: &str, value: i32) { 124 - self.0.push((key.to_string(), Value::Number(value))); 125 - } 126 - } 127 - 128 - impl Push<String> for TargetSpec { 129 - fn push(&mut self, key: &str, value: String) { 130 - self.0.push((key.to_string(), Value::String(value))); 131 - } 132 - } 133 - 134 - impl Push<&str> for TargetSpec { 135 - fn push(&mut self, key: &str, value: &str) { 136 - self.push(key, value.to_string()); 137 - } 138 - } 139 - 140 - impl Push<Object> for TargetSpec { 141 - fn 
push(&mut self, key: &str, value: Object) { 142 - self.0.push((key.to_string(), Value::Object(value))); 57 + fn push(&mut self, key: &str, value: impl Into<Value>) { 58 + self.0.push((key.to_string(), value.into())); 143 59 } 144 60 } 145 61 ··· 188 164 ); 189 165 let mut features = "-mmx,+soft-float".to_string(); 190 166 if cfg.has("MITIGATION_RETPOLINE") { 167 + // The kernel uses `-mretpoline-external-thunk` (for Clang), which Clang maps to the 168 + // target feature of the same name plus the other two target features in 169 + // `clang/lib/Driver/ToolChains/Arch/X86.cpp`. These should be eventually enabled via 170 + // `-Ctarget-feature` when `rustc` starts recognizing them (or via a new dedicated 171 + // flag); see https://github.com/rust-lang/rust/issues/116852. 191 172 features += ",+retpoline-external-thunk"; 173 + features += ",+retpoline-indirect-branches"; 174 + features += ",+retpoline-indirect-calls"; 175 + } 176 + if cfg.has("MITIGATION_SLS") { 177 + // The kernel uses `-mharden-sls=all`, which Clang maps to both these target features in 178 + // `clang/lib/Driver/ToolChains/Arch/X86.cpp`. These should be eventually enabled via 179 + // `-Ctarget-feature` when `rustc` starts recognizing them (or via a new dedicated 180 + // flag); see https://github.com/rust-lang/rust/issues/116851. 181 + features += ",+harden-sls-ijmp"; 182 + features += ",+harden-sls-ret"; 192 183 } 193 184 ts.push("features", features); 194 185 ts.push("llvm-target", "x86_64-linux-gnu"); 186 + ts.push("supported-sanitizers", ["kcfi", "kernel-address"]); 195 187 ts.push("target-pointer-width", "64"); 196 188 } else if cfg.has("X86_32") { 197 189 // This only works on UML, as i386 otherwise needs regparm support in rustc
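The `generate_rust_target.rs` changes above replace the per-type `Push<T>` trait with `From` conversions and add array support so target specs can say `ts.push("supported-sanitizers", ["kcfi", "kernel-address"])`. A condensed, standalone sketch of that pattern (only a subset of the variants, and a plain loop in place of `comma_sep`):

```rust
use std::fmt::{Display, Formatter, Result};

// Minimal "almost JSON" value, as in the generator.
enum Value {
    Number(i32),
    String(String),
    Array(Vec<Value>),
}

impl Display for Value {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result {
        match self {
            Value::Number(n) => write!(f, "{}", n),
            Value::String(s) => write!(f, "\"{}\"", s),
            Value::Array(values) => {
                f.write_str("[")?;
                // Comma-separate without a trailing comma, like `comma_sep`.
                for (i, v) in values.iter().enumerate() {
                    if i > 0 {
                        f.write_str(",")?;
                    }
                    write!(f, "{}", v)?;
                }
                f.write_str("]")
            }
        }
    }
}

impl From<&str> for Value {
    fn from(s: &str) -> Self {
        Value::String(s.to_string())
    }
}

// Any fixed-size array of convertible items becomes a JSON array; this one
// blanket impl replaces what used to need a `Push` impl per element type.
impl<T: Into<Value>, const N: usize> From<[T; N]> for Value {
    fn from(items: [T; N]) -> Self {
        Value::Array(items.into_iter().map(Into::into).collect())
    }
}

fn main() {
    let v = Value::from(["kcfi", "kernel-address"]);
    assert_eq!(v.to_string(), r#"["kcfi","kernel-address"]"#);
    println!("{}", v);
}
```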
+26
scripts/rustc-version.sh
··· 1 + #!/bin/sh 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Usage: $ ./rustc-version.sh rustc 5 + # 6 + # Print the Rust compiler version in a 6 or 7-digit form. 7 + 8 + # Convert the version string x.y.z to a canonical up-to-7-digits form. 9 + # 10 + # Note that this function uses one more digit (compared to other 11 + # instances in other version scripts) to give a bit more space to 12 + # `rustc` since it will reach 1.100.0 in late 2026. 13 + get_canonical_version() 14 + { 15 + IFS=. 16 + set -- $1 17 + echo $((100000 * $1 + 100 * $2 + $3)) 18 + } 19 + 20 + if output=$("$@" --version 2>/dev/null); then 21 + set -- $output 22 + get_canonical_version $2 23 + else 24 + echo 0 25 + exit 1 26 + fi
tools/objtool/check.c (+51, -1)
···
 }

 /*
+ * Checks if a string ends with another.
+ */
+static bool str_ends_with(const char *s, const char *sub)
+{
+	const int slen = strlen(s);
+	const int sublen = strlen(sub);
+
+	if (sublen > slen)
+		return 0;
+
+	return !memcmp(s + slen - sublen, sub, sublen);
+}
+
+/*
+ * Checks if a function is a Rust "noreturn" one.
+ */
+static bool is_rust_noreturn(const struct symbol *func)
+{
+	/*
+	 * If it does not start with "_R", then it is not a Rust symbol.
+	 */
+	if (strncmp(func->name, "_R", 2))
+		return false;
+
+	/*
+	 * These are just heuristics -- we do not control the precise symbol
+	 * name, due to the crate disambiguators (which depend on the compiler)
+	 * as well as changes to the source code itself between versions (since
+	 * these come from the Rust standard library).
+	 */
+	return str_ends_with(func->name, "_4core5sliceSp15copy_from_slice17len_mismatch_fail") ||
+	       str_ends_with(func->name, "_4core6option13unwrap_failed") ||
+	       str_ends_with(func->name, "_4core6result13unwrap_failed") ||
+	       str_ends_with(func->name, "_4core9panicking5panic") ||
+	       str_ends_with(func->name, "_4core9panicking9panic_fmt") ||
+	       str_ends_with(func->name, "_4core9panicking14panic_explicit") ||
+	       str_ends_with(func->name, "_4core9panicking14panic_nounwind") ||
+	       str_ends_with(func->name, "_4core9panicking18panic_bounds_check") ||
+	       str_ends_with(func->name, "_4core9panicking19assert_failed_inner") ||
+	       str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") ||
+	       strstr(func->name, "_4core9panicking11panic_const24panic_const_") ||
+	       (strstr(func->name, "_4core5slice5index24slice_") &&
+		str_ends_with(func->name, "_fail"));
+}
+
+/*
  * This checks to see if the given function is a "noreturn" function.
  *
  * For global functions which are outside the scope of this object file, we
···
 	if (!func)
 		return false;

-	if (func->bind == STB_GLOBAL || func->bind == STB_WEAK)
+	if (func->bind == STB_GLOBAL || func->bind == STB_WEAK) {
+		if (is_rust_noreturn(func))
+			return true;
+
 		for (i = 0; i < ARRAY_SIZE(global_noreturns); i++)
 			if (!strcmp(func->name, global_noreturns[i]))
 				return true;
+	}

 	if (func->bind == STB_WEAK)
 		return false;
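The C heuristic matches by suffix because v0-mangled Rust symbols embed compiler-dependent crate disambiguators ahead of the path, so only the trailing path segments are stable across toolchains. A small Rust sketch of the same check — the suffix list is abridged and the sample symbol is fabricated for illustration:

```rust
// Sketch of objtool's Rust-noreturn heuristic: a v0-mangled Rust symbol
// starts with "_R", and known noreturn functions from `core` are recognized
// by a stable name suffix (the middle of the name varies per compiler).
fn is_rust_noreturn(name: &str) -> bool {
    // Abridged from the list in check.c above.
    const SUFFIXES: &[&str] = &[
        "_4core6option13unwrap_failed",
        "_4core6result13unwrap_failed",
        "_4core9panicking5panic",
        "_4core9panicking9panic_fmt",
    ];
    name.starts_with("_R") && SUFFIXES.iter().any(|s| name.ends_with(s))
}

fn main() {
    // "Cs1234_" is a made-up crate disambiguator, purely for illustration.
    assert!(is_rust_noreturn("_RNvNtCs1234_4core9panicking5panic"));
    assert!(!is_rust_noreturn("memcpy"));
}
```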
tools/objtool/noreturns.h (+2)
···
 NORETURN(panic_smp_self_stop)
 NORETURN(rest_init)
 NORETURN(rewind_stack_and_make_dead)
+NORETURN(rust_begin_unwind)
+NORETURN(rust_helper_BUG)
 NORETURN(sev_es_terminate)
 NORETURN(snp_abort)
 NORETURN(start_kernel)