Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'rust-v6.1-rc1' of https://github.com/Rust-for-Linux/linux

Pull Rust introductory support from Kees Cook:
"The tree has a recent base, but has fundamentally been in linux-next
for a year and a half[1]. It's been updated based on feedback from the
Kernel Maintainer's Summit, and to gain recent Reviewed-by: tags.

Miguel is the primary maintainer, with me helping where needed/wanted.
Our plan is for the tree to switch to the standard non-rebasing
practice once this initial infrastructure series lands.

The contents are the absolute minimum to get Rust code building in the
kernel, with many more interfaces[2] (and drivers - NVMe[3], 9p[4], M1
GPU[5]) on the way.

The initial support of Rust-for-Linux comes in roughly 4 areas:

- Kernel internals (kallsyms expansion for Rust symbols, %pA format)

- Kbuild infrastructure (Rust build rules and support scripts)

- Rust crates and bindings for initial minimum viable build

- Rust kernel documentation and samples

Rust support has been in linux-next for a year and a half now, and the
short log doesn't do justice to the number of people who have
contributed not only to the Linux kernel side but also to the upstream
Rust side to support the kernel's needs. Thanks to these 173 people,
and many more, who have been involved in all kinds of ways:

Miguel Ojeda, Wedson Almeida Filho, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Andreas Hindborg, Adam Bratschi-Kaye, Benno Lossin,
Maciej Falkowski, Finn Behrens, Sven Van Asbroeck, Asahi Lina, FUJITA
Tomonori, John Baublitz, Wei Liu, Geoffrey Thomas, Philip Herron,
Arthur Cohen, David Faust, Antoni Boucher, Philip Li, Yujie Liu,
Jonathan Corbet, Greg Kroah-Hartman, Paul E. McKenney, Josh Triplett,
Kent Overstreet, David Gow, Alice Ryhl, Robin Randhawa, Kees Cook,
Nick Desaulniers, Matthew Wilcox, Linus Walleij, Joe Perches, Michael
Ellerman, Petr Mladek, Masahiro Yamada, Arnaldo Carvalho de Melo,
Andrii Nakryiko, Konstantin Shelekhin, Rasmus Villemoes, Konstantin
Ryabitsev, Stephen Rothwell, Andy Shevchenko, Sergey Senozhatsky, John
Paul Adrian Glaubitz, David Laight, Nathan Chancellor, Jonathan
Cameron, Daniel Latypov, Shuah Khan, Brendan Higgins, Julia Lawall,
Laurent Pinchart, Geert Uytterhoeven, Akira Yokosawa, Pavel Machek,
David S. Miller, John Hawley, James Bottomley, Arnd Bergmann,
Christian Brauner, Dan Robertson, Nicholas Piggin, Zhouyi Zhou, Elena
Zannoni, Jose E. Marchesi, Leon Romanovsky, Will Deacon, Richard
Weinberger, Randy Dunlap, Paolo Bonzini, Roland Dreier, Mark Brown,
Sasha Levin, Ted Ts'o, Steven Rostedt, Jarkko Sakkinen, Michal
Kubecek, Marco Elver, Al Viro, Keith Busch, Johannes Berg, Jan Kara,
David Sterba, Connor Kuehl, Andy Lutomirski, Andrew Lunn, Alexandre
Belloni, Peter Zijlstra, Russell King, Eric W. Biederman, Willy
Tarreau, Christoph Hellwig, Emilio Cobos Álvarez, Christian Poveda,
Mark Rousskov, John Ericson, TennyZhuang, Xuanwo, Daniel Paoliello,
Manish Goregaokar, comex, Josh Stone, Stephan Sokolow, Philipp Krones,
Guillaume Gomez, Joshua Nelson, Mats Larsen, Marc Poulhiès, Samantha
Miller, Esteban Blanc, Martin Schmidt, Martin Rodriguez Reboredo,
Daniel Xu, Viresh Kumar, Bartosz Golaszewski, Vegard Nossum, Milan
Landaverde, Dariusz Sosnowski, Yuki Okushi, Matthew Bakhtiari, Wu
XiangCheng, Tiago Lam, Boris-Chengbiao Zhou, Sumera Priyadarsini,
Viktor Garske, Niklas Mohrin, Nándor István Krácser, Morgan Bartlett,
Miguel Cano, Léo Lanteri Thauvin, Julian Merkle, Andreas Reindl,
Jiapeng Chong, Fox Chen, Douglas Su, Antonio Terceiro, SeongJae Park,
Sergio González Collado, Ngo Iok Ui (Wu Yu Wei), Joshua Abraham,
Milan, Daniel Kolsoi, ahomescu, Manas, Luis Gerhorst, Li Hongyu,
Philipp Gesang, Russell Currey, Jalil David Salamé Messina, Jon Olson,
Raghvender, Angelos, Kaviraj Kanagaraj, Paul Römer, Sladyn Nunes,
Mauro Baladés, Hsiang-Cheng Yang, Abhik Jain, Hongyu Li, Sean Nash,
Yuheng Su, Peng Hao, Anhad Singh, Roel Kluin, Sara Saa, Geert
Stappers, Garrett LeSage, IFo Hancroft, and Linus Torvalds"

Link: https://lwn.net/Articles/849849/ [1]
Link: https://github.com/Rust-for-Linux/linux/commits/rust [2]
Link: https://github.com/metaspace/rust-linux/commit/d88c3744d6cbdf11767e08bad56cbfb67c4c96d0 [3]
Link: https://github.com/wedsonaf/linux/commit/9367032607f7670de0ba1537cf09ab0f4365a338 [4]
Link: https://github.com/AsahiLinux/linux/commits/gpu/rust-wip [5]

* tag 'rust-v6.1-rc1' of https://github.com/Rust-for-Linux/linux: (27 commits)
MAINTAINERS: Rust
samples: add first Rust examples
x86: enable initial Rust support
docs: add Rust documentation
Kbuild: add Rust support
rust: add `.rustfmt.toml`
scripts: add `is_rust_module.sh`
scripts: add `rust_is_available.sh`
scripts: add `generate_rust_target.rs`
scripts: add `generate_rust_analyzer.py`
scripts: decode_stacktrace: demangle Rust symbols
scripts: checkpatch: enable language-independent checks for Rust
scripts: checkpatch: diagnose uses of `%pA` in the C side as errors
vsprintf: add new `%pA` format specifier
rust: export generated symbols
rust: add `kernel` crate
rust: add `bindings` crate
rust: add `macros` crate
rust: add `compiler_builtins` crate
rust: adapt `alloc` crate to the kernel
...

Overall diffstat: +12552 -51
.gitignore (+6)

···
  *.o
  *.o.*
  *.patch
+ *.rmeta
+ *.rsi
  *.s
  *.so
  *.so.dbg
···
  !.gitattributes
  !.gitignore
  !.mailmap
+ !.rustfmt.toml

  #
  # Generated include files
···

  # Documentation toolchain
  sphinx_*/
+
+ # Rust analyzer configuration
+ /rust-project.json
.rustfmt.toml (new file, +12)

edition = "2021"
newline_style = "Unix"

# Unstable options that help catching some mistakes in formatting and that we may want to enable
# when they become stable.
#
# They are kept here since they are useful to run from time to time.
#format_code_in_doc_comments = true
#reorder_impl_items = true
#comment_width = 100
#wrap_comments = true
#normalize_comments = true
Documentation/core-api/printk-formats.rst (+10)

···
  %p4cc   Y10  little-endian (0x20303159)
  %p4cc   NV12 big-endian (0xb231564e)

+ Rust
+ ----
+
+ ::
+
+         %pA
+
+ Only intended to be used from Rust code to format ``core::fmt::Arguments``.
+ Do *not* use it from C.
+
  Thanks
  ======
Documentation/doc-guide/kernel-doc.rst (+3)

···
  reasons. The kernel source contains tens of thousands of kernel-doc
  comments. Please stick to the style described here.

+ .. note:: kernel-doc does not cover Rust code: please see
+    Documentation/rust/general-information.rst instead.
+
  The kernel-doc structure is extracted from the comments, and proper
  `Sphinx C Domain`_ function and type descriptions with anchors are
  generated from them. The descriptions are filtered for special kernel-doc
Documentation/index.rst (+1)

···
  trace/index
  fault-injection/index
  livepatch/index
+ rust/index


  User-oriented documentation
Documentation/kbuild/kbuild.rst (+17)

···
  -------
  Additional options to the C compiler (for built-in and modules).

+ KRUSTFLAGS
+ ----------
+ Additional options to the Rust compiler (for built-in and modules).
+
  CFLAGS_KERNEL
  -------------
  Additional options for $(CC) when used to compile
···
  CFLAGS_MODULE
  -------------
  Additional module specific options to use for $(CC).
+
+ RUSTFLAGS_KERNEL
+ ----------------
+ Additional options for $(RUSTC) when used to compile
+ code that is compiled as built-in.
+
+ RUSTFLAGS_MODULE
+ ----------------
+ Additional module specific options to use for $(RUSTC).

  LDFLAGS_MODULE
  --------------
···
  HOSTCXXFLAGS
  ------------
  Additional flags to be passed to $(HOSTCXX) when building host programs.
+
+ HOSTRUSTFLAGS
+ -------------
+ Additional flags to be passed to $(HOSTRUSTC) when building host programs.

  HOSTLDFLAGS
  -----------
Documentation/kbuild/makefiles.rst (+46 -4)

···
  --- 4.1 Simple Host Program
  --- 4.2 Composite Host Programs
  --- 4.3 Using C++ for host programs
- --- 4.4 Controlling compiler options for host programs
- --- 4.5 When host programs are actually built
+ --- 4.4 Using Rust for host programs
+ --- 4.5 Controlling compiler options for host programs
+ --- 4.6 When host programs are actually built

  === 5 Userspace Program support
  --- 5.1 Simple Userspace Program
···
  qconf-cxxobjs := qconf.o
  qconf-objs    := check.o

- 4.4 Controlling compiler options for host programs
+ 4.4 Using Rust for host programs
+ --------------------------------
+
+ Kbuild offers support for host programs written in Rust. However,
+ since a Rust toolchain is not mandatory for kernel compilation,
+ it may only be used in scenarios where Rust is required to be
+ available (e.g. when ``CONFIG_RUST`` is enabled).
+
+ Example::
+
+     hostprogs     := target
+     target-rust   := y
+
+ Kbuild will compile ``target`` using ``target.rs`` as the crate root,
+ located in the same directory as the ``Makefile``. The crate may
+ consist of several source files (see ``samples/rust/hostprogs``).
+
+ 4.5 Controlling compiler options for host programs
  --------------------------------------------------

  When compiling host programs, it is possible to set specific flags.
···
  When linking qconf, it will be passed the extra option
  "-L$(QTDIR)/lib".

- 4.5 When host programs are actually built
+ 4.6 When host programs are actually built
  -----------------------------------------

  Kbuild will only build host-programs when they are referenced
···
  The first example utilises the trick that a config option expands
  to 'y' when selected.

+ KBUILD_RUSTFLAGS
+     $(RUSTC) compiler flags
+
+     Default value - see top level Makefile
+     Append or modify as required per architecture.
+
+     Often, the KBUILD_RUSTFLAGS variable depends on the configuration.
+
+     Note that target specification file generation (for ``--target``)
+     is handled in ``scripts/generate_rust_target.rs``.
+
  KBUILD_AFLAGS_KERNEL
      Assembler options specific for built-in
···
      $(KBUILD_CFLAGS_MODULE) is used to add arch-specific options that
      are used for $(CC).
      From commandline CFLAGS_MODULE shall be used (see kbuild.rst).
+
+ KBUILD_RUSTFLAGS_KERNEL
+     $(RUSTC) options specific for built-in
+
+     $(KBUILD_RUSTFLAGS_KERNEL) contains extra Rust compiler flags used to
+     compile resident kernel code.
+
+ KBUILD_RUSTFLAGS_MODULE
+     Options for $(RUSTC) when building modules
+
+     $(KBUILD_RUSTFLAGS_MODULE) is used to add arch-specific options that
+     are used for $(RUSTC).
+     From commandline RUSTFLAGS_MODULE shall be used (see kbuild.rst).

  KBUILD_LDFLAGS_MODULE
      Options for $(LD) when linking modules
Documentation/process/changes.rst (+41)

···
  ====================== ===============  ========================================
  GNU C                  5.1              gcc --version
  Clang/LLVM (optional)  11.0.0           clang --version
+ Rust (optional)        1.62.0           rustc --version
+ bindgen (optional)     0.56.0           bindgen --version
  GNU make               3.81             make --version
  bash                   4.2              bash --version
  binutils               2.23             ld -v
···
  kernels. Older releases aren't guaranteed to work, and we may drop workarounds
  from the kernel that were used to support older versions. Please see additional
  docs on :ref:`Building Linux with Clang/LLVM <kbuild_llvm>`.
+
+ Rust (optional)
+ ---------------
+
+ A particular version of the Rust toolchain is required. Newer versions may or
+ may not work because the kernel depends on some unstable Rust features, for
+ the moment.
+
+ Each Rust toolchain comes with several "components", some of which are required
+ (like ``rustc``) and some that are optional. The ``rust-src`` component (which
+ is optional) needs to be installed to build the kernel. Other components are
+ useful for developing.
+
+ Please see Documentation/rust/quick-start.rst for instructions on how to
+ satisfy the build requirements of Rust support. In particular, the ``Makefile``
+ target ``rustavailable`` is useful to check why the Rust toolchain may not
+ be detected.
+
+ bindgen (optional)
+ ------------------
+
+ ``bindgen`` is used to generate the Rust bindings to the C side of the kernel.
+ It depends on ``libclang``.

  Make
  ----
···
  Please see :ref:`sphinx_install` in :ref:`Documentation/doc-guide/sphinx.rst <sphinxdoc>`
  for details about Sphinx requirements.

+ rustdoc
+ -------
+
+ ``rustdoc`` is used to generate the documentation for Rust code. Please see
+ Documentation/rust/general-information.rst for more information.
+
  Getting updated software
  ========================

···
  ----------

  - :ref:`Getting LLVM <getting_llvm>`.
+
+ Rust
+ ----
+
+ - Documentation/rust/quick-start.rst.
+
+ bindgen
+ -------
+
+ - Documentation/rust/quick-start.rst.

  Make
  ----
Documentation/rust/arch-support.rst (new file, +19)

.. SPDX-License-Identifier: GPL-2.0

Arch Support
============

Currently, the Rust compiler (``rustc``) uses LLVM for code generation,
which limits the supported architectures that can be targeted. In addition,
support for building the kernel with LLVM/Clang varies (please see
Documentation/kbuild/llvm.rst). This support is needed for ``bindgen``
which uses ``libclang``.

Below is a general summary of architectures that currently work. Level of
support corresponds to ``S`` values in the ``MAINTAINERS`` file.

============  ================  ==============================================
Architecture  Level of support  Constraints
============  ================  ==============================================
``x86``       Maintained        ``x86_64`` only.
============  ================  ==============================================
Documentation/rust/coding-guidelines.rst (new file, +216)

.. SPDX-License-Identifier: GPL-2.0

Coding Guidelines
=================

This document describes how to write Rust code in the kernel.


Style & formatting
------------------

The code should be formatted using ``rustfmt``. In this way, a person
contributing from time to time to the kernel does not need to learn and
remember one more style guide. More importantly, reviewers and maintainers
do not need to spend time pointing out style issues anymore, and thus
less patch roundtrips may be needed to land a change.

.. note:: Conventions on comments and documentation are not checked by
   ``rustfmt``. Thus those are still needed to be taken care of.

The default settings of ``rustfmt`` are used. This means the idiomatic Rust
style is followed. For instance, 4 spaces are used for indentation rather
than tabs.

It is convenient to instruct editors/IDEs to format while typing,
when saving or at commit time. However, if for some reason reformatting
the entire kernel Rust sources is needed at some point, the following can be
run::

    make LLVM=1 rustfmt

It is also possible to check if everything is formatted (printing a diff
otherwise), for instance for a CI, with::

    make LLVM=1 rustfmtcheck

Like ``clang-format`` for the rest of the kernel, ``rustfmt`` works on
individual files, and does not require a kernel configuration. Sometimes it may
even work with broken code.


Comments
--------

"Normal" comments (i.e. ``//``, rather than code documentation which starts
with ``///`` or ``//!``) are written in Markdown the same way as documentation
comments are, even though they will not be rendered. This improves consistency,
simplifies the rules and allows to move content between the two kinds of
comments more easily. For instance:

.. code-block:: rust

    // `object` is ready to be handled now.
    f(object);

Furthermore, just like documentation, comments are capitalized at the beginning
of a sentence and ended with a period (even if it is a single sentence). This
includes ``// SAFETY:``, ``// TODO:`` and other "tagged" comments, e.g.:

.. code-block:: rust

    // FIXME: The error should be handled properly.

Comments should not be used for documentation purposes: comments are intended
for implementation details, not users. This distinction is useful even if the
reader of the source file is both an implementor and a user of an API. In fact,
sometimes it is useful to use both comments and documentation at the same time.
For instance, for a ``TODO`` list or to comment on the documentation itself.
For the latter case, comments can be inserted in the middle; that is, closer to
the line of documentation to be commented. For any other case, comments are
written after the documentation, e.g.:

.. code-block:: rust

    /// Returns a new [`Foo`].
    ///
    /// # Examples
    ///
    // TODO: Find a better example.
    /// ```
    /// let foo = f(42);
    /// ```
    // FIXME: Use fallible approach.
    pub fn f(x: i32) -> Foo {
        // ...
    }

One special kind of comments are the ``// SAFETY:`` comments. These must appear
before every ``unsafe`` block, and they explain why the code inside the block is
correct/sound, i.e. why it cannot trigger undefined behavior in any case, e.g.:

.. code-block:: rust

    // SAFETY: `p` is valid by the safety requirements.
    unsafe { *p = 0; }

``// SAFETY:`` comments are not to be confused with the ``# Safety`` sections
in code documentation. ``# Safety`` sections specify the contract that callers
(for functions) or implementors (for traits) need to abide by. ``// SAFETY:``
comments show why a call (for functions) or implementation (for traits) actually
respects the preconditions stated in a ``# Safety`` section or the language
reference.


Code documentation
------------------

Rust kernel code is not documented like C kernel code (i.e. via kernel-doc).
Instead, the usual system for documenting Rust code is used: the ``rustdoc``
tool, which uses Markdown (a lightweight markup language).

To learn Markdown, there are many guides available out there. For instance,
the one at:

    https://commonmark.org/help/

This is how a well-documented Rust function may look like:

.. code-block:: rust

    /// Returns the contained [`Some`] value, consuming the `self` value,
    /// without checking that the value is not [`None`].
    ///
    /// # Safety
    ///
    /// Calling this method on [`None`] is *[undefined behavior]*.
    ///
    /// [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
    ///
    /// # Examples
    ///
    /// ```
    /// let x = Some("air");
    /// assert_eq!(unsafe { x.unwrap_unchecked() }, "air");
    /// ```
    pub unsafe fn unwrap_unchecked(self) -> T {
        match self {
            Some(val) => val,

            // SAFETY: The safety contract must be upheld by the caller.
            None => unsafe { hint::unreachable_unchecked() },
        }
    }

This example showcases a few ``rustdoc`` features and some conventions followed
in the kernel:

- The first paragraph must be a single sentence briefly describing what
  the documented item does. Further explanations must go in extra paragraphs.

- Unsafe functions must document their safety preconditions under
  a ``# Safety`` section.

- While not shown here, if a function may panic, the conditions under which
  that happens must be described under a ``# Panics`` section.

  Please note that panicking should be very rare and used only with a good
  reason. In almost all cases, a fallible approach should be used, typically
  returning a ``Result``.

- If providing examples of usage would help readers, they must be written in
  a section called ``# Examples``.

- Rust items (functions, types, constants...) must be linked appropriately
  (``rustdoc`` will create a link automatically).

- Any ``unsafe`` block must be preceded by a ``// SAFETY:`` comment
  describing why the code inside is sound.

  While sometimes the reason might look trivial and therefore unneeded,
  writing these comments is not just a good way of documenting what has been
  taken into account, but most importantly, it provides a way to know that
  there are no *extra* implicit constraints.

To learn more about how to write documentation for Rust and extra features,
please take a look at the ``rustdoc`` book at:

    https://doc.rust-lang.org/rustdoc/how-to-write-documentation.html


Naming
------

Rust kernel code follows the usual Rust naming conventions:

    https://rust-lang.github.io/api-guidelines/naming.html

When existing C concepts (e.g. macros, functions, objects...) are wrapped into
a Rust abstraction, a name as close as reasonably possible to the C side should
be used in order to avoid confusion and to improve readability when switching
back and forth between the C and Rust sides. For instance, macros such as
``pr_info`` from C are named the same in the Rust side.

Having said that, casing should be adjusted to follow the Rust naming
conventions, and namespacing introduced by modules and types should not be
repeated in the item names. For instance, when wrapping constants like:

.. code-block:: c

    #define GPIO_LINE_DIRECTION_IN	0
    #define GPIO_LINE_DIRECTION_OUT	1

The equivalent in Rust may look like (ignoring documentation):

.. code-block:: rust

    pub mod gpio {
        pub enum LineDirection {
            In = bindings::GPIO_LINE_DIRECTION_IN as _,
            Out = bindings::GPIO_LINE_DIRECTION_OUT as _,
        }
    }

That is, the equivalent of ``GPIO_LINE_DIRECTION_IN`` would be referred to as
``gpio::LineDirection::In``. In particular, it should not be named
``gpio::gpio_line_direction::GPIO_LINE_DIRECTION_IN``.
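The naming pattern above can be exercised as a standalone, compilable sketch. Here the ``bindings`` module is a hypothetical stand-in for the constants that ``bindgen`` would generate from the C headers (illustration only, not kernel code):

```rust
// Hypothetical stand-ins for the C constants that `bindgen` would
// normally generate in the `bindings` crate (illustration only).
mod bindings {
    pub const GPIO_LINE_DIRECTION_IN: u32 = 0;
    pub const GPIO_LINE_DIRECTION_OUT: u32 = 1;
}

pub mod gpio {
    use super::bindings;

    // Namespacing comes from the module and type, so the variant
    // names stay short: `gpio::LineDirection::In`.
    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    pub enum LineDirection {
        In = bindings::GPIO_LINE_DIRECTION_IN as isize,
        Out = bindings::GPIO_LINE_DIRECTION_OUT as isize,
    }
}

fn main() {
    let dir = gpio::LineDirection::In;
    // The Rust-side value round-trips to the original C constant.
    assert_eq!(dir as u32, bindings::GPIO_LINE_DIRECTION_IN);
    println!("{:?}", dir); // prints "In"
}
```

The explicit discriminants keep the enum ABI-compatible with the C values, so converting back for a C call is a plain cast.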
Documentation/rust/general-information.rst (new file, +79)

.. SPDX-License-Identifier: GPL-2.0

General Information
===================

This document contains useful information to know when working with
the Rust support in the kernel.


Code documentation
------------------

Rust kernel code is documented using ``rustdoc``, its built-in documentation
generator.

The generated HTML docs include integrated search, linked items (e.g. types,
functions, constants), source code, etc. They may be read at (TODO: link when
in mainline and generated alongside the rest of the documentation):

    http://kernel.org/

The docs can also be easily generated and read locally. This is quite fast
(same order as compiling the code itself) and no special tools or environment
are needed. This has the added advantage that they will be tailored to
the particular kernel configuration used. To generate them, use the ``rustdoc``
target with the same invocation used for compilation, e.g.::

    make LLVM=1 rustdoc

To read the docs locally in your web browser, run e.g.::

    xdg-open rust/doc/kernel/index.html

To learn about how to write the documentation, please see coding-guidelines.rst.


Extra lints
-----------

While ``rustc`` is a very helpful compiler, some extra lints and analyses are
available via ``clippy``, a Rust linter. To enable it, pass ``CLIPPY=1`` to
the same invocation used for compilation, e.g.::

    make LLVM=1 CLIPPY=1

Please note that Clippy may change code generation, thus it should not be
enabled while building a production kernel.


Abstractions vs. bindings
-------------------------

Abstractions are Rust code wrapping kernel functionality from the C side.

In order to use functions and types from the C side, bindings are created.
Bindings are the declarations for Rust of those functions and types from
the C side.

For instance, one may write a ``Mutex`` abstraction in Rust which wraps
a ``struct mutex`` from the C side and calls its functions through the bindings.

Abstractions are not available for all the kernel internal APIs and concepts,
but it is intended that coverage is expanded as time goes on. "Leaf" modules
(e.g. drivers) should not use the C bindings directly. Instead, subsystems
should provide as-safe-as-possible abstractions as needed.


Conditional compilation
-----------------------

Rust code has access to conditional compilation based on the kernel
configuration:

.. code-block:: rust

    #[cfg(CONFIG_X)]       // Enabled               (`y` or `m`)
    #[cfg(CONFIG_X="y")]   // Enabled as a built-in (`y`)
    #[cfg(CONFIG_X="m")]   // Enabled as a module   (`m`)
    #[cfg(not(CONFIG_X))]  // Disabled
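The conditional-compilation mechanism above is ordinary ``rustc`` ``--cfg`` machinery: Kbuild passes ``--cfg CONFIG_X`` (and the ``="y"``/``="m"`` variants) based on the configuration. A minimal standalone sketch, compiled without any such flag (so only the ``not(...)`` branch survives; ``CONFIG_PRINTK`` is used purely as an example name):

```rust
// In the kernel, Kbuild passes `--cfg CONFIG_X` / `--cfg CONFIG_X="y"`
// to rustc based on the configuration. Compiled standalone, without
// those flags, only the `not(...)` branch below is kept.
#[cfg(CONFIG_PRINTK)]
fn printk_state() -> &'static str {
    "enabled"
}

#[cfg(not(CONFIG_PRINTK))]
fn printk_state() -> &'static str {
    "disabled"
}

fn main() {
    // Without `--cfg CONFIG_PRINTK` on the command line this
    // prints "CONFIG_PRINTK is disabled".
    println!("CONFIG_PRINTK is {}", printk_state());
}
```

Because the excluded branch is dropped before code generation, disabled-configuration code costs nothing at runtime, mirroring ``#ifdef CONFIG_X`` on the C side.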
Documentation/rust/index.rst (new file, +22)

.. SPDX-License-Identifier: GPL-2.0

Rust
====

Documentation related to Rust within the kernel. To start using Rust
in the kernel, please read the quick-start.rst guide.

.. toctree::
   :maxdepth: 1

   quick-start
   general-information
   coding-guidelines
   arch-support

.. only:: subproject and html

   Indices
   =======

   * :ref:`genindex`
+232
Documentation/rust/quick-start.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Quick Start 4 + =========== 5 + 6 + This document describes how to get started with kernel development in Rust. 7 + 8 + 9 + Requirements: Building 10 + ---------------------- 11 + 12 + This section explains how to fetch the tools needed for building. 13 + 14 + Some of these requirements might be available from Linux distributions 15 + under names like ``rustc``, ``rust-src``, ``rust-bindgen``, etc. However, 16 + at the time of writing, they are likely not to be recent enough unless 17 + the distribution tracks the latest releases. 18 + 19 + To easily check whether the requirements are met, the following target 20 + can be used:: 21 + 22 + make LLVM=1 rustavailable 23 + 24 + This triggers the same logic used by Kconfig to determine whether 25 + ``RUST_IS_AVAILABLE`` should be enabled; but it also explains why not 26 + if that is the case. 27 + 28 + 29 + rustc 30 + ***** 31 + 32 + A particular version of the Rust compiler is required. Newer versions may or 33 + may not work because, for the moment, the kernel depends on some unstable 34 + Rust features. 35 + 36 + If ``rustup`` is being used, enter the checked out source code directory 37 + and run:: 38 + 39 + rustup override set $(scripts/min-tool-version.sh rustc) 40 + 41 + Otherwise, fetch a standalone installer or install ``rustup`` from: 42 + 43 + https://www.rust-lang.org 44 + 45 + 46 + Rust standard library source 47 + **************************** 48 + 49 + The Rust standard library source is required because the build system will 50 + cross-compile ``core`` and ``alloc``. 51 + 52 + If ``rustup`` is being used, run:: 53 + 54 + rustup component add rust-src 55 + 56 + The components are installed per toolchain, thus upgrading the Rust compiler 57 + version later on requires re-adding the component. 
58 + 59 + Otherwise, if a standalone installer is used, the Rust repository may be cloned 60 + into the installation folder of the toolchain:: 61 + 62 + git clone --recurse-submodules \ 63 + --branch $(scripts/min-tool-version.sh rustc) \ 64 + https://github.com/rust-lang/rust \ 65 + $(rustc --print sysroot)/lib/rustlib/src/rust 66 + 67 + In this case, upgrading the Rust compiler version later on requires manually 68 + updating this clone. 69 + 70 + 71 + libclang 72 + ******** 73 + 74 + ``libclang`` (part of LLVM) is used by ``bindgen`` to understand the C code 75 + in the kernel, which means LLVM needs to be installed; like when the kernel 76 + is compiled with ``CC=clang`` or ``LLVM=1``. 77 + 78 + Linux distributions are likely to have a suitable one available, so it is 79 + best to check that first. 80 + 81 + There are also some binaries for several systems and architectures uploaded at: 82 + 83 + https://releases.llvm.org/download.html 84 + 85 + Otherwise, building LLVM takes quite a while, but it is not a complex process: 86 + 87 + https://llvm.org/docs/GettingStarted.html#getting-the-source-code-and-building-llvm 88 + 89 + Please see Documentation/kbuild/llvm.rst for more information and further ways 90 + to fetch pre-built releases and distribution packages. 91 + 92 + 93 + bindgen 94 + ******* 95 + 96 + The bindings to the C side of the kernel are generated at build time using 97 + the ``bindgen`` tool. A particular version is required. 98 + 99 + Install it via (note that this will download and build the tool from source):: 100 + 101 + cargo install --locked --version $(scripts/min-tool-version.sh bindgen) bindgen 102 + 103 + 104 + Requirements: Developing 105 + ------------------------ 106 + 107 + This section explains how to fetch the tools needed for developing. That is, 108 + they are not needed when just building the kernel. 
109 + 110 + 111 + rustfmt 112 + ******* 113 + 114 + The ``rustfmt`` tool is used to automatically format all the Rust kernel code, 115 + including the generated C bindings (for details, please see 116 + coding-guidelines.rst). 117 + 118 + If ``rustup`` is being used, its ``default`` profile already installs the tool, 119 + thus nothing needs to be done. If another profile is being used, the component 120 + can be installed manually:: 121 + 122 + rustup component add rustfmt 123 + 124 + The standalone installers also come with ``rustfmt``. 125 + 126 + 127 + clippy 128 + ****** 129 + 130 + ``clippy`` is a Rust linter. Running it provides extra warnings for Rust code. 131 + It can be run by passing ``CLIPPY=1`` to ``make`` (for details, please see 132 + general-information.rst). 133 + 134 + If ``rustup`` is being used, its ``default`` profile already installs the tool, 135 + thus nothing needs to be done. If another profile is being used, the component 136 + can be installed manually:: 137 + 138 + rustup component add clippy 139 + 140 + The standalone installers also come with ``clippy``. 141 + 142 + 143 + cargo 144 + ***** 145 + 146 + ``cargo`` is the Rust native build system. It is currently required to run 147 + the tests since it is used to build a custom standard library that contains 148 + the facilities provided by the custom ``alloc`` in the kernel. The tests can 149 + be run using the ``rusttest`` Make target. 150 + 151 + If ``rustup`` is being used, all the profiles already install the tool, 152 + thus nothing needs to be done. 153 + 154 + The standalone installers also come with ``cargo``. 155 + 156 + 157 + rustdoc 158 + ******* 159 + 160 + ``rustdoc`` is the documentation tool for Rust. It generates pretty HTML 161 + documentation for Rust code (for details, please see 162 + general-information.rst). 163 + 164 + ``rustdoc`` is also used to test the examples provided in documented Rust code 165 + (called doctests or documentation tests). 
The ``rusttest`` Make target uses 166 + this feature. 167 + 168 + If ``rustup`` is being used, all the profiles already install the tool, 169 + thus nothing needs to be done. 170 + 171 + The standalone installers also come with ``rustdoc``. 172 + 173 + 174 + rust-analyzer 175 + ************* 176 + 177 + The `rust-analyzer <https://rust-analyzer.github.io/>`_ language server can 178 + be used with many editors to enable syntax highlighting, completion, go to 179 + definition, and other features. 180 + 181 + ``rust-analyzer`` needs a configuration file, ``rust-project.json``, which 182 + can be generated by the ``rust-analyzer`` Make target. 183 + 184 + 185 + Configuration 186 + ------------- 187 + 188 + ``Rust support`` (``CONFIG_RUST``) needs to be enabled in the ``General setup`` 189 + menu. The option is only shown if a suitable Rust toolchain is found (see 190 + above), as long as the other requirements are met. In turn, this will make 191 + visible the rest of options that depend on Rust. 192 + 193 + Afterwards, go to:: 194 + 195 + Kernel hacking 196 + -> Sample kernel code 197 + -> Rust samples 198 + 199 + And enable some sample modules either as built-in or as loadable. 200 + 201 + 202 + Building 203 + -------- 204 + 205 + Building a kernel with a complete LLVM toolchain is the best supported setup 206 + at the moment. That is:: 207 + 208 + make LLVM=1 209 + 210 + For architectures that do not support a full LLVM toolchain, use:: 211 + 212 + make CC=clang 213 + 214 + Using GCC also works for some configurations, but it is very experimental at 215 + the moment. 216 + 217 + 218 + Hacking 219 + ------- 220 + 221 + To dive deeper, take a look at the source code of the samples 222 + at ``samples/rust/``, the Rust support code under ``rust/`` and 223 + the ``Rust hacking`` menu under ``Kernel hacking``. 
224 + 225 + If GDB/Binutils is used and Rust symbols are not getting demangled, the reason 226 + is that the toolchain does not support Rust's new v0 mangling scheme yet. 227 + There are a few ways out: 228 + 229 + - Install a newer release (GDB >= 10.2, Binutils >= 2.36). 230 + 231 + - Some versions of GDB (e.g. vanilla GDB 10.1) are able to use 232 + the pre-demangled names embedded in the debug info (``CONFIG_DEBUG_INFO``).
+18
MAINTAINERS
··· 17755 17755 F: kernel/trace/rv/ 17756 17756 F: tools/verification/ 17757 17757 17758 + RUST 17759 + M: Miguel Ojeda <ojeda@kernel.org> 17760 + M: Alex Gaynor <alex.gaynor@gmail.com> 17761 + M: Wedson Almeida Filho <wedsonaf@gmail.com> 17762 + R: Boqun Feng <boqun.feng@gmail.com> 17763 + R: Gary Guo <gary@garyguo.net> 17764 + R: Björn Roy Baron <bjorn3_gh@protonmail.com> 17765 + L: rust-for-linux@vger.kernel.org 17766 + S: Supported 17767 + W: https://github.com/Rust-for-Linux/linux 17768 + B: https://github.com/Rust-for-Linux/linux/issues 17769 + T: git https://github.com/Rust-for-Linux/linux.git rust-next 17770 + F: Documentation/rust/ 17771 + F: rust/ 17772 + F: samples/rust/ 17773 + F: scripts/*rust* 17774 + K: \b(?i:rust)\b 17775 + 17758 17776 RXRPC SOCKETS (AF_RXRPC) 17759 17777 M: David Howells <dhowells@redhat.com> 17760 17778 M: Marc Dionne <marc.dionne@auristor.com>
+163 -9
Makefile
··· 120 120 121 121 export KBUILD_CHECKSRC 122 122 123 + # Enable "clippy" (a linter) as part of the Rust compilation. 124 + # 125 + # Use 'make CLIPPY=1' to enable it. 126 + ifeq ("$(origin CLIPPY)", "command line") 127 + KBUILD_CLIPPY := $(CLIPPY) 128 + endif 129 + 130 + export KBUILD_CLIPPY 131 + 123 132 # Use make M=dir or set the environment variable KBUILD_EXTMOD to specify the 124 133 # directory of external module to build. Setting M= takes precedence. 125 134 ifeq ("$(origin M)", "command line") ··· 279 270 cscope gtags TAGS tags help% %docs check% coccicheck \ 280 271 $(version_h) headers headers_% archheaders archscripts \ 281 272 %asm-generic kernelversion %src-pkg dt_binding_check \ 282 - outputmakefile 273 + outputmakefile rustavailable rustfmt rustfmtcheck 283 274 # Installation targets should not require compiler. Unfortunately, vdso_install 284 275 # is an exception where build artifacts may be updated. This must be fixed. 285 276 no-compiler-targets := $(no-dot-config-targets) install dtbs_install \ 286 277 headers_install modules_install kernelrelease image_name 287 278 no-sync-config-targets := $(no-dot-config-targets) %install kernelrelease \ 288 279 image_name 289 - single-targets := %.a %.i %.ko %.lds %.ll %.lst %.mod %.o %.s %.symtypes %/ 280 + single-targets := %.a %.i %.rsi %.ko %.lds %.ll %.lst %.mod %.o %.s %.symtypes %/ 290 281 291 282 config-build := 292 283 mixed-build := ··· 448 439 HOSTCC = gcc 449 440 HOSTCXX = g++ 450 441 endif 442 + HOSTRUSTC = rustc 451 443 HOSTPKG_CONFIG = pkg-config 452 444 453 445 KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \ ··· 457 447 KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS) 458 448 KBUILD_USERLDFLAGS := $(USERLDFLAGS) 459 449 450 + # These flags apply to all Rust code in the tree, including the kernel and 451 + # host programs. 
452 + export rust_common_flags := --edition=2021 \ 453 + -Zbinary_dep_depinfo=y \ 454 + -Dunsafe_op_in_unsafe_fn -Drust_2018_idioms \ 455 + -Dunreachable_pub -Dnon_ascii_idents \ 456 + -Wmissing_docs \ 457 + -Drustdoc::missing_crate_level_docs \ 458 + -Dclippy::correctness -Dclippy::style \ 459 + -Dclippy::suspicious -Dclippy::complexity \ 460 + -Dclippy::perf \ 461 + -Dclippy::let_unit_value -Dclippy::mut_mut \ 462 + -Dclippy::needless_bitwise_bool \ 463 + -Dclippy::needless_continue \ 464 + -Wclippy::dbg_macro 465 + 460 466 KBUILD_HOSTCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(HOST_LFS_CFLAGS) $(HOSTCFLAGS) 461 467 KBUILD_HOSTCXXFLAGS := -Wall -O2 $(HOST_LFS_CFLAGS) $(HOSTCXXFLAGS) 468 + KBUILD_HOSTRUSTFLAGS := $(rust_common_flags) -O -Cstrip=debuginfo \ 469 + -Zallow-features= $(HOSTRUSTFLAGS) 462 470 KBUILD_HOSTLDFLAGS := $(HOST_LFS_LDFLAGS) $(HOSTLDFLAGS) 463 471 KBUILD_HOSTLDLIBS := $(HOST_LFS_LIBS) $(HOSTLDLIBS) 464 472 ··· 501 473 READELF = $(CROSS_COMPILE)readelf 502 474 STRIP = $(CROSS_COMPILE)strip 503 475 endif 476 + RUSTC = rustc 477 + RUSTDOC = rustdoc 478 + RUSTFMT = rustfmt 479 + CLIPPY_DRIVER = clippy-driver 480 + BINDGEN = bindgen 481 + CARGO = cargo 504 482 PAHOLE = pahole 505 483 RESOLVE_BTFIDS = $(objtree)/tools/bpf/resolve_btfids/resolve_btfids 506 484 LEX = flex ··· 532 498 -Wbitwise -Wno-return-void -Wno-unknown-attribute $(CF) 533 499 NOSTDINC_FLAGS := 534 500 CFLAGS_MODULE = 501 + RUSTFLAGS_MODULE = 535 502 AFLAGS_MODULE = 536 503 LDFLAGS_MODULE = 537 504 CFLAGS_KERNEL = 505 + RUSTFLAGS_KERNEL = 538 506 AFLAGS_KERNEL = 539 507 LDFLAGS_vmlinux = 540 508 ··· 565 529 -Werror=return-type -Wno-format-security \ 566 530 -std=gnu11 567 531 KBUILD_CPPFLAGS := -D__KERNEL__ 532 + KBUILD_RUSTFLAGS := $(rust_common_flags) \ 533 + --target=$(objtree)/rust/target.json \ 534 + -Cpanic=abort -Cembed-bitcode=n -Clto=n \ 535 + -Cforce-unwind-tables=n -Ccodegen-units=1 \ 536 + -Csymbol-mangling-version=v0 \ 537 + -Crelocation-model=static \ 538 + 
-Zfunction-sections=n \ 539 + -Dclippy::float_arithmetic 540 + 568 541 KBUILD_AFLAGS_KERNEL := 569 542 KBUILD_CFLAGS_KERNEL := 543 + KBUILD_RUSTFLAGS_KERNEL := 570 544 KBUILD_AFLAGS_MODULE := -DMODULE 571 545 KBUILD_CFLAGS_MODULE := -DMODULE 546 + KBUILD_RUSTFLAGS_MODULE := --cfg MODULE 572 547 KBUILD_LDFLAGS_MODULE := 573 548 KBUILD_LDFLAGS := 574 549 CLANG_FLAGS := 575 550 551 + ifeq ($(KBUILD_CLIPPY),1) 552 + RUSTC_OR_CLIPPY_QUIET := CLIPPY 553 + RUSTC_OR_CLIPPY = $(CLIPPY_DRIVER) 554 + else 555 + RUSTC_OR_CLIPPY_QUIET := RUSTC 556 + RUSTC_OR_CLIPPY = $(RUSTC) 557 + endif 558 + 559 + ifdef RUST_LIB_SRC 560 + export RUST_LIB_SRC 561 + endif 562 + 563 + # Allows the usage of unstable features in stable compilers. 564 + export RUSTC_BOOTSTRAP := 1 565 + 576 566 export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC HOSTPKG_CONFIG 567 + export RUSTC RUSTDOC RUSTFMT RUSTC_OR_CLIPPY_QUIET RUSTC_OR_CLIPPY BINDGEN CARGO 568 + export HOSTRUSTC KBUILD_HOSTRUSTFLAGS 577 569 export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AWK INSTALLKERNEL 578 570 export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX 579 571 export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD ··· 610 546 611 547 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS 612 548 export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE 549 + export KBUILD_RUSTFLAGS RUSTFLAGS_KERNEL RUSTFLAGS_MODULE 613 550 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE 614 - export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE 615 - export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL 551 + export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_RUSTFLAGS_MODULE KBUILD_LDFLAGS_MODULE 552 + export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL KBUILD_RUSTFLAGS_KERNEL 616 553 export PAHOLE_FLAGS 617 554 618 555 # Files to ignore in find ... statements ··· 794 729 # 795 730 # Do not use $(call cmd,...) here. 
That would suppress prompts from syncconfig, 796 731 # so you cannot notice that Kconfig is waiting for the user input. 797 - %/config/auto.conf %/config/auto.conf.cmd %/generated/autoconf.h: $(KCONFIG_CONFIG) 732 + %/config/auto.conf %/config/auto.conf.cmd %/generated/autoconf.h %/generated/rustc_cfg: $(KCONFIG_CONFIG) 798 733 $(Q)$(kecho) " SYNC $@" 799 734 $(Q)$(MAKE) -f $(srctree)/Makefile syncconfig 800 735 else # !may-sync-config ··· 823 758 824 759 ifdef CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE 825 760 KBUILD_CFLAGS += -O2 761 + KBUILD_RUSTFLAGS += -Copt-level=2 826 762 else ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE 827 763 KBUILD_CFLAGS += -Os 764 + KBUILD_RUSTFLAGS += -Copt-level=s 828 765 endif 766 + 767 + # Always set `debug-assertions` and `overflow-checks` because their default 768 + # depends on `opt-level` and `debug-assertions`, respectively. 769 + KBUILD_RUSTFLAGS += -Cdebug-assertions=$(if $(CONFIG_RUST_DEBUG_ASSERTIONS),y,n) 770 + KBUILD_RUSTFLAGS += -Coverflow-checks=$(if $(CONFIG_RUST_OVERFLOW_CHECKS),y,n) 829 771 830 772 # Tell gcc to never replace conditional load with a non-conditional one 831 773 ifdef CONFIG_CC_IS_GCC ··· 864 792 KBUILD_CFLAGS-$(CONFIG_CC_NO_ARRAY_BOUNDS) += -Wno-array-bounds 865 793 KBUILD_CFLAGS += $(KBUILD_CFLAGS-y) $(CONFIG_CC_IMPLICIT_FALLTHROUGH) 866 794 795 + KBUILD_RUSTFLAGS-$(CONFIG_WERROR) += -Dwarnings 796 + KBUILD_RUSTFLAGS += $(KBUILD_RUSTFLAGS-y) 797 + 867 798 ifdef CONFIG_CC_IS_CLANG 868 799 KBUILD_CPPFLAGS += -Qunused-arguments 869 800 # The kernel builds with '-std=gnu11' so use of GNU extensions is acceptable. ··· 887 812 888 813 ifdef CONFIG_FRAME_POINTER 889 814 KBUILD_CFLAGS += -fno-omit-frame-pointer -fno-optimize-sibling-calls 815 + KBUILD_RUSTFLAGS += -Cforce-frame-pointers=y 890 816 else 891 817 # Some targets (ARM with Thumb2, for example), can't be built with frame 892 818 # pointers. For those, we don't have FUNCTION_TRACER automatically 893 819 # select FRAME_POINTER. 
However, FUNCTION_TRACER adds -pg, and this is 894 820 # incompatible with -fomit-frame-pointer with current GCC, so we don't use 895 821 # -fomit-frame-pointer with FUNCTION_TRACER. 822 + # In the Rust target specification, "frame-pointer" is set explicitly 823 + # to "may-omit". 896 824 ifndef CONFIG_FUNCTION_TRACER 897 825 KBUILD_CFLAGS += -fomit-frame-pointer 898 826 endif ··· 960 882 KBUILD_CFLAGS += -fno-inline-functions-called-once 961 883 endif 962 884 885 + # `rustc`'s `-Zfunction-sections` applies to data too (as of 1.59.0). 963 886 ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION 964 887 KBUILD_CFLAGS_KERNEL += -ffunction-sections -fdata-sections 888 + KBUILD_RUSTFLAGS_KERNEL += -Zfunction-sections=y 965 889 LDFLAGS_vmlinux += --gc-sections 966 890 endif 967 891 ··· 1106 1026 # Do not add $(call cc-option,...) below this line. When you build the kernel 1107 1027 # from the clean source tree, the GCC plugins do not exist at this point. 1108 1028 1109 - # Add user supplied CPPFLAGS, AFLAGS and CFLAGS as the last assignments 1029 + # Add user supplied CPPFLAGS, AFLAGS, CFLAGS and RUSTFLAGS as the last assignments 1110 1030 KBUILD_CPPFLAGS += $(KCPPFLAGS) 1111 1031 KBUILD_AFLAGS += $(KAFLAGS) 1112 1032 KBUILD_CFLAGS += $(KCFLAGS) 1033 + KBUILD_RUSTFLAGS += $(KRUSTFLAGS) 1113 1034 1114 1035 KBUILD_LDFLAGS_MODULE += --build-id=sha1 1115 1036 LDFLAGS_vmlinux += --build-id=sha1 ··· 1185 1104 core-y += kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ 1186 1105 core-$(CONFIG_BLOCK) += block/ 1187 1106 core-$(CONFIG_IO_URING) += io_uring/ 1107 + core-$(CONFIG_RUST) += rust/ 1188 1108 1189 1109 vmlinux-dirs := $(patsubst %/,%,$(filter %/, \ 1190 1110 $(core-y) $(core-m) $(drivers-y) $(drivers-m) \ ··· 1288 1206 1289 1207 # All the preparing.. 
1290 1208 prepare: prepare0 1209 + ifdef CONFIG_RUST 1210 + $(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh -v 1211 + $(Q)$(MAKE) $(build)=rust 1212 + endif 1291 1213 1292 1214 PHONY += remove-stale-files 1293 1215 remove-stale-files: ··· 1585 1499 # Directories & files removed with 'make clean' 1586 1500 CLEAN_FILES += include/ksym vmlinux.symvers modules-only.symvers \ 1587 1501 modules.builtin modules.builtin.modinfo modules.nsdeps \ 1588 - compile_commands.json .thinlto-cache 1502 + compile_commands.json .thinlto-cache rust/test rust/doc 1589 1503 1590 1504 # Directories & files removed with 'make mrproper' 1591 1505 MRPROPER_FILES += include/config include/generated \ ··· 1596 1510 certs/signing_key.pem \ 1597 1511 certs/x509.genkey \ 1598 1512 vmlinux-gdb.py \ 1599 - *.spec 1513 + *.spec \ 1514 + rust/target.json rust/libmacros.so 1600 1515 1601 1516 # clean - Delete most, but leave enough to build external modules 1602 1517 # ··· 1622 1535 1623 1536 mrproper: clean $(mrproper-dirs) 1624 1537 $(call cmd,rmfiles) 1538 + @find . $(RCS_FIND_IGNORE) \ 1539 + \( -name '*.rmeta' \) \ 1540 + -type f -print | xargs rm -f 1625 1541 1626 1542 # distclean 1627 1543 # ··· 1712 1622 @echo ' kselftest-merge - Merge all the config dependencies of' 1713 1623 @echo ' kselftest to existing .config.' 1714 1624 @echo '' 1625 + @echo 'Rust targets:' 1626 + @echo ' rustavailable - Checks whether the Rust toolchain is' 1627 + @echo ' available and, if not, explains why.' 1628 + @echo ' rustfmt - Reformat all the Rust code in the kernel' 1629 + @echo ' rustfmtcheck - Checks if all the Rust code in the kernel' 1630 + @echo ' is formatted, printing a diff otherwise.' 
1631 + @echo ' rustdoc - Generate Rust documentation' 1632 + @echo ' (requires kernel .config)' 1633 + @echo ' rusttest - Runs the Rust tests' 1634 + @echo ' (requires kernel .config; downloads external repos)' 1635 + @echo ' rust-analyzer - Generate rust-project.json rust-analyzer support file' 1636 + @echo ' (requires kernel .config)' 1637 + @echo ' dir/file.[os] - Build specified target only' 1638 + @echo ' dir/file.rsi - Build macro expanded source, similar to C preprocessing.' 1639 + @echo ' Run with RUSTFMT=n to skip reformatting if needed.' 1640 + @echo ' The output is not intended to be compilable.' 1641 + @echo ' dir/file.ll - Build the LLVM assembly file' 1642 + @echo '' 1715 1643 @$(if $(dtstree), \ 1716 1644 echo 'Devicetree:'; \ 1717 1645 echo '* dtbs - Build device tree blobs for enabled boards'; \ ··· 1801 1693 PHONY += $(DOC_TARGETS) 1802 1694 $(DOC_TARGETS): 1803 1695 $(Q)$(MAKE) $(build)=Documentation $@ 1696 + 1697 + 1698 + # Rust targets 1699 + # --------------------------------------------------------------------------- 1700 + 1701 + # "Is Rust available?" target 1702 + PHONY += rustavailable 1703 + rustavailable: 1704 + $(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh -v && echo "Rust is available!" 1705 + 1706 + # Documentation target 1707 + # 1708 + # Using the singular to avoid running afoul of `no-dot-config-targets`. 1709 + PHONY += rustdoc 1710 + rustdoc: prepare 1711 + $(Q)$(MAKE) $(build)=rust $@ 1712 + 1713 + # Testing target 1714 + PHONY += rusttest 1715 + rusttest: prepare 1716 + $(Q)$(MAKE) $(build)=rust $@ 1717 + 1718 + # Formatting targets 1719 + PHONY += rustfmt rustfmtcheck 1720 + 1721 + # We skip `rust/alloc` since we want to minimize the diff w.r.t. upstream. 1722 + # 1723 + # We match using absolute paths since `find` does not resolve them 1724 + # when matching, which is a problem when e.g. `srctree` is `..`. 1725 + # We `grep` afterwards in order to remove the directory entry itself. 
1726 + rustfmt: 1727 + $(Q)find $(abs_srctree) -type f -name '*.rs' \ 1728 + -o -path $(abs_srctree)/rust/alloc -prune \ 1729 + -o -path $(abs_objtree)/rust/test -prune \ 1730 + | grep -Fv $(abs_srctree)/rust/alloc \ 1731 + | grep -Fv $(abs_objtree)/rust/test \ 1732 + | grep -Fv generated \ 1733 + | xargs $(RUSTFMT) $(rustfmt_flags) 1734 + 1735 + rustfmtcheck: rustfmt_flags = --check 1736 + rustfmtcheck: rustfmt 1737 + 1738 + # IDE support targets 1739 + PHONY += rust-analyzer 1740 + rust-analyzer: 1741 + $(Q)$(MAKE) $(build)=rust $@ 1804 1742 1805 1743 # Misc 1806 1744 # --------------------------------------------------------------------------- ··· 2015 1861 clean: $(clean-dirs) 2016 1862 $(call cmd,rmfiles) 2017 1863 @find $(or $(KBUILD_EXTMOD), .) $(RCS_FIND_IGNORE) \ 2018 - \( -name '*.[aios]' -o -name '*.ko' -o -name '.*.cmd' \ 1864 + \( -name '*.[aios]' -o -name '*.rsi' -o -name '*.ko' -o -name '.*.cmd' \ 2019 1865 -o -name '*.ko.*' \ 2020 1866 -o -name '*.dtb' -o -name '*.dtbo' -o -name '*.dtb.S' -o -name '*.dt.yaml' \ 2021 1867 -o -name '*.dwo' -o -name '*.lst' \
+6
arch/Kconfig
··· 355 355 This symbol should be selected by an architecture if it 356 356 supports an implementation of restartable sequences. 357 357 358 + config HAVE_RUST 359 + bool 360 + help 361 + This symbol should be selected by an architecture if it 362 + supports Rust. 363 + 358 364 config HAVE_FUNCTION_ARG_ACCESS_API 359 365 bool 360 366 help
+1
arch/x86/Kconfig
··· 257 257 select HAVE_STATIC_CALL_INLINE if HAVE_OBJTOOL 258 258 select HAVE_PREEMPT_DYNAMIC_CALL 259 259 select HAVE_RSEQ 260 + select HAVE_RUST if X86_64 260 261 select HAVE_SYSCALL_TRACEPOINTS 261 262 select HAVE_UACCESS_VALIDATION if HAVE_OBJTOOL 262 263 select HAVE_UNSTABLE_SCHED_CLOCK
+10
arch/x86/Makefile
··· 68 68 # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53383 69 69 # 70 70 KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx 71 + KBUILD_RUSTFLAGS += -Ctarget-feature=-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2 71 72 72 73 ifeq ($(CONFIG_X86_KERNEL_IBT),y) 73 74 # ··· 156 155 cflags-$(CONFIG_GENERIC_CPU) += -mtune=generic 157 156 KBUILD_CFLAGS += $(cflags-y) 158 157 158 + rustflags-$(CONFIG_MK8) += -Ctarget-cpu=k8 159 + rustflags-$(CONFIG_MPSC) += -Ctarget-cpu=nocona 160 + rustflags-$(CONFIG_MCORE2) += -Ctarget-cpu=core2 161 + rustflags-$(CONFIG_MATOM) += -Ctarget-cpu=atom 162 + rustflags-$(CONFIG_GENERIC_CPU) += -Ztune-cpu=generic 163 + KBUILD_RUSTFLAGS += $(rustflags-y) 164 + 159 165 KBUILD_CFLAGS += -mno-red-zone 160 166 KBUILD_CFLAGS += -mcmodel=kernel 167 + KBUILD_RUSTFLAGS += -Cno-redzone=y 168 + KBUILD_RUSTFLAGS += -Ccode-model=kernel 161 169 endif 162 170 163 171 #
+5 -1
include/linux/compiler_types.h
··· 4 4 5 5 #ifndef __ASSEMBLY__ 6 6 7 + /* 8 + * Skipped when running bindgen due to a libclang issue; 9 + * see https://github.com/rust-lang/rust-bindgen/issues/2244. 10 + */ 7 11 #if defined(CONFIG_DEBUG_INFO_BTF) && defined(CONFIG_PAHOLE_HAS_BTF_TAG) && \ 8 - __has_attribute(btf_type_tag) 12 + __has_attribute(btf_type_tag) && !defined(__BINDGEN__) 9 13 # define BTF_TYPE_TAG(value) __attribute__((btf_type_tag(#value))) 10 14 #else 11 15 # define BTF_TYPE_TAG(value) /* nothing */
+1 -1
include/linux/kallsyms.h
··· 15 15 16 16 #include <asm/sections.h> 17 17 18 - #define KSYM_NAME_LEN 128 18 + #define KSYM_NAME_LEN 512 19 19 #define KSYM_SYMBOL_LEN (sizeof("%s+%#lx/%#lx [%s %s]") + \ 20 20 (KSYM_NAME_LEN - 1) + \ 21 21 2*(BITS_PER_LONG*3/10) + (MODULE_NAME_LEN - 1) + \
+45 -1
init/Kconfig
··· 60 60 default $(ld-version) if LD_IS_LLD 61 61 default 0 62 62 63 + config RUST_IS_AVAILABLE 64 + def_bool $(success,$(srctree)/scripts/rust_is_available.sh) 65 + help 66 + This shows whether a suitable Rust toolchain is available (found). 67 + 68 + Please see Documentation/rust/quick-start.rst for instructions on how 69 + to satisfy the build requirements of Rust support. 70 + 71 + In particular, the Makefile target 'rustavailable' is useful to check 72 + why the Rust toolchain is not being detected. 73 + 63 74 config CC_CAN_LINK 64 75 bool 65 76 default $(success,$(srctree)/scripts/cc-can-link.sh $(CC) $(CLANG_FLAGS) $(USERCFLAGS) $(USERLDFLAGS) $(m64-flag)) if 64BIT ··· 158 147 default COMPILE_TEST 159 148 help 160 149 A kernel build should not cause any compiler warnings, and this 161 - enables the '-Werror' flag to enforce that rule by default. 150 + enables the '-Werror' (for C) and '-Dwarnings' (for Rust) flags 151 + to enforce that rule by default. 162 152 163 153 However, if you have a new (or very old) compiler with odd and 164 154 unusual warnings, or you have some architecture with problems, ··· 1910 1898 help 1911 1899 Say Y here to enable the extended profiling support mechanisms used 1912 1900 by profilers. 1901 + 1902 + config RUST 1903 + bool "Rust support" 1904 + depends on HAVE_RUST 1905 + depends on RUST_IS_AVAILABLE 1906 + depends on !MODVERSIONS 1907 + depends on !GCC_PLUGINS 1908 + depends on !RANDSTRUCT 1909 + depends on !DEBUG_INFO_BTF 1910 + select CONSTRUCTORS 1911 + help 1912 + Enables Rust support in the kernel. 1913 + 1914 + This allows other Rust-related options, like drivers written in Rust, 1915 + to be selected. 1916 + 1917 + It is also required to be able to load external kernel modules 1918 + written in Rust. 1919 + 1920 + See Documentation/rust/ for more information. 1921 + 1922 + If unsure, say N. 
1923 + 1924 + config RUSTC_VERSION_TEXT 1925 + string 1926 + depends on RUST 1927 + default $(shell,command -v $(RUSTC) >/dev/null 2>&1 && $(RUSTC) --version || echo n) 1928 + 1929 + config BINDGEN_VERSION_TEXT 1930 + string 1931 + depends on RUST 1932 + default $(shell,command -v $(BINDGEN) >/dev/null 2>&1 && $(BINDGEN) --version || echo n) 1913 1933 1914 1934 # 1915 1935 # Place an empty function call at each tracepoint site. Can be
+1
kernel/configs/rust.config
··· 1 + CONFIG_RUST=y
+22 -4
kernel/kallsyms.c
··· 50 50 data = &kallsyms_names[off]; 51 51 len = *data; 52 52 data++; 53 + off++; 54 + 55 + /* If MSB is 1, it is a "big" symbol, so needs an additional byte. */ 56 + if ((len & 0x80) != 0) { 57 + len = (len & 0x7F) | (*data << 7); 58 + data++; 59 + off++; 60 + } 53 61 54 62 /* 55 63 * Update the offset to return the offset for the next symbol on 56 64 * the compressed stream. 57 65 */ 58 - off += len + 1; 66 + off += len; 59 67 60 68 /* 61 69 * For every byte on the compressed symbol data, copy the table ··· 116 108 static unsigned int get_symbol_offset(unsigned long pos) 117 109 { 118 110 const u8 *name; 119 - int i; 111 + int i, len; 120 112 121 113 /* 122 114 * Use the closest marker we have. We have markers every 256 positions, ··· 130 122 * so we just need to add the len to the current pointer for every 131 123 * symbol we wish to skip. 132 124 */ 133 - for (i = 0; i < (pos & 0xFF); i++) 134 - name = name + (*name) + 1; 125 + for (i = 0; i < (pos & 0xFF); i++) { 126 + len = *name; 127 + 128 + /* 129 + * If MSB is 1, it is a "big" symbol, so we need to look into 130 + * the next byte (and skip it, too). 131 + */ 132 + if ((len & 0x80) != 0) 133 + len = ((len & 0x7F) | (name[1] << 7)) + 1; 134 + 135 + name = name + len + 1; 136 + } 135 137 136 138 return name - kallsyms_names; 137 139 }
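The kallsyms hunk above adds a two-byte length prefix: a single byte only covers lengths up to 255, which no longer suffices once KSYM_NAME_LEN grows to 512 for Rust's long v0-mangled symbols. A minimal user-space sketch of the same 7-bit little-endian scheme (the helper names are illustrative, not kernel API):

```c
#include <assert.h>

/*
 * Sketch of the kallsyms length prefix. Lengths below 128 take one
 * byte; otherwise the first byte holds the low 7 bits with the MSB
 * set, and the second byte holds the remaining high bits.
 */
static unsigned int decode_sym_len(const unsigned char *data,
                                   unsigned int *prefix_len)
{
    unsigned int len = data[0];

    if (len & 0x80) {           /* "big" symbol: two-byte prefix */
        len = (len & 0x7F) | ((unsigned int)data[1] << 7);
        *prefix_len = 2;
    } else {
        *prefix_len = 1;
    }
    return len;
}

/* The matching encoder; returns the number of prefix bytes written. */
static unsigned int encode_sym_len(unsigned int len, unsigned char *out)
{
    if (len < 0x80) {
        out[0] = (unsigned char)len;
        return 1;
    }
    out[0] = (unsigned char)((len & 0x7F) | 0x80);
    out[1] = (unsigned char)(len >> 7);
    return 2;
}
```

This is why both `kallsyms_expand_symbol()` and `get_symbol_offset()` must check the MSB before advancing past a symbol entry.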
+2 -2
kernel/livepatch/core.c
··· 213 213 * we use the smallest/strictest upper bound possible (56, based on 214 214 * the current definition of MODULE_NAME_LEN) to prevent overflows. 215 215 */ 216 - BUILD_BUG_ON(MODULE_NAME_LEN < 56 || KSYM_NAME_LEN != 128); 216 + BUILD_BUG_ON(MODULE_NAME_LEN < 56 || KSYM_NAME_LEN != 512); 217 217 218 218 relas = (Elf_Rela *) relasec->sh_addr; 219 219 /* For each rela in this klp relocation section */ ··· 227 227 228 228 /* Format: .klp.sym.sym_objname.sym_name,sympos */ 229 229 cnt = sscanf(strtab + sym->st_name, 230 - ".klp.sym.%55[^.].%127[^,],%lu", 230 + ".klp.sym.%55[^.].%511[^,],%lu", 231 231 sym_objname, sym_name, &sympos); 232 232 if (cnt != 3) { 233 233 pr_err("symbol %s has an incorrectly formatted name\n",
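The updated format string can be checked in isolation: the `sscanf` field widths 55 and 511 are `MODULE_NAME_LEN - 1` and `KSYM_NAME_LEN - 1`, leaving room for the NUL terminator in each destination buffer. A user-space sketch (`parse_klp_sym` is an illustrative helper, not the kernel function):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define MODULE_NAME_LEN 56
#define KSYM_NAME_LEN 512

/*
 * Parse a livepatch symbol of the form
 * ".klp.sym.sym_objname.sym_name,sympos".
 * Returns 0 on success, -1 on a malformed name.
 */
static int parse_klp_sym(const char *s, char *objname, char *name,
                         unsigned long *sympos)
{
    if (sscanf(s, ".klp.sym.%55[^.].%511[^,],%lu",
               objname, name, sympos) != 3)
        return -1;
    return 0;
}
```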
+34
lib/Kconfig.debug
··· 2710 2710 2711 2711 endmenu # "Kernel Testing and Coverage" 2712 2712 2713 + menu "Rust hacking" 2714 + 2715 + config RUST_DEBUG_ASSERTIONS 2716 + bool "Debug assertions" 2717 + depends on RUST 2718 + help 2719 + Enables rustc's `-Cdebug-assertions` codegen option. 2720 + 2721 + This flag lets you turn `cfg(debug_assertions)` conditional 2722 + compilation on or off. This can be used to enable extra debugging 2723 + code in development but not in production. For example, it controls 2724 + the behavior of the standard library's `debug_assert!` macro. 2725 + 2726 + Note that this will apply to all Rust code, including `core`. 2727 + 2728 + If unsure, say N. 2729 + 2730 + config RUST_OVERFLOW_CHECKS 2731 + bool "Overflow checks" 2732 + default y 2733 + depends on RUST 2734 + help 2735 + Enables rustc's `-Coverflow-checks` codegen option. 2736 + 2737 + This flag allows you to control the behavior of runtime integer 2738 + overflow. When overflow-checks are enabled, a Rust panic will occur 2739 + on overflow. 2740 + 2741 + Note that this will apply to all Rust code, including `core`. 2742 + 2743 + If unsure, say Y. 2744 + 2745 + endmenu # "Rust" 2746 + 2713 2747 source "Documentation/Kconfig" 2714 2748 2715 2749 endmenu # Kernel hacking
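For readers coming from C: `RUST_OVERFLOW_CHECKS` makes Rust arithmetic panic at runtime on overflow instead of silently wrapping. The same class of check can be expressed in C with the GCC/Clang overflow builtins; this is only an illustration of the behavior being configured, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Checked addition in the spirit of Rust's -Coverflow-checks: report
 * the overflow instead of silently wrapping. Relies on the GCC/Clang
 * __builtin_add_overflow intrinsic, which stores the wrapped result
 * and returns true when overflow occurred.
 */
static bool checked_add_u32(uint32_t a, uint32_t b, uint32_t *res)
{
    return !__builtin_add_overflow(a, b, res);
}
```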
+13
lib/vsprintf.c
··· 2246 2246 } 2247 2247 early_param("no_hash_pointers", no_hash_pointers_enable); 2248 2248 2249 + /* Used for Rust formatting ('%pA'). */ 2250 + char *rust_fmt_argument(char *buf, char *end, void *ptr); 2251 + 2249 2252 /* 2250 2253 * Show a '%p' thing. A kernel extension is that the '%p' is followed 2251 2254 * by an extra set of alphanumeric characters that are extended format ··· 2375 2372 * 2376 2373 * Note: The default behaviour (unadorned %p) is to hash the address, 2377 2374 * rendering it useful as a unique identifier. 2375 + * 2376 + * There is also a '%pA' format specifier, but it is only intended to be used 2377 + * from Rust code to format core::fmt::Arguments. Do *not* use it from C. 2378 + * See rust/kernel/print.rs for details. 2378 2379 */ 2379 2380 static noinline_for_stack 2380 2381 char *pointer(const char *fmt, char *buf, char *end, void *ptr, ··· 2451 2444 return device_node_string(buf, end, ptr, spec, fmt + 1); 2452 2445 case 'f': 2453 2446 return fwnode_string(buf, end, ptr, spec, fmt + 1); 2447 + case 'A': 2448 + if (!IS_ENABLED(CONFIG_RUST)) { 2449 + WARN_ONCE(1, "Please remove %%pA from non-Rust code\n"); 2450 + return error_string(buf, end, "(%pA?)", spec); 2451 + } 2452 + return rust_fmt_argument(buf, end, ptr); 2454 2453 case 'x': 2455 2454 return pointer_string(buf, end, ptr, spec); 2456 2455 case 'e':
+8
rust/.gitignore
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + target.json 4 + bindings_generated.rs 5 + bindings_helpers_generated.rs 6 + exports_*_generated.h 7 + doc/ 8 + test/
+381
rust/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + always-$(CONFIG_RUST) += target.json 4 + no-clean-files += target.json 5 + 6 + obj-$(CONFIG_RUST) += core.o compiler_builtins.o 7 + always-$(CONFIG_RUST) += exports_core_generated.h 8 + 9 + # Missing prototypes are expected in the helpers since these are exported 10 + # for Rust only, thus there is no header nor prototypes. 11 + obj-$(CONFIG_RUST) += helpers.o 12 + CFLAGS_REMOVE_helpers.o = -Wmissing-prototypes -Wmissing-declarations 13 + 14 + always-$(CONFIG_RUST) += libmacros.so 15 + no-clean-files += libmacros.so 16 + 17 + always-$(CONFIG_RUST) += bindings/bindings_generated.rs bindings/bindings_helpers_generated.rs 18 + obj-$(CONFIG_RUST) += alloc.o bindings.o kernel.o 19 + always-$(CONFIG_RUST) += exports_alloc_generated.h exports_bindings_generated.h \ 20 + exports_kernel_generated.h 21 + 22 + obj-$(CONFIG_RUST) += exports.o 23 + 24 + # Avoids running `$(RUSTC)` for the sysroot when it may not be available. 25 + ifdef CONFIG_RUST 26 + 27 + # `$(rust_flags)` is passed in case the user added `--sysroot`. 
28 + rustc_sysroot := $(shell $(RUSTC) $(rust_flags) --print sysroot) 29 + rustc_host_target := $(shell $(RUSTC) --version --verbose | grep -F 'host: ' | cut -d' ' -f2) 30 + RUST_LIB_SRC ?= $(rustc_sysroot)/lib/rustlib/src/rust/library 31 + 32 + ifeq ($(quiet),silent_) 33 + cargo_quiet=-q 34 + rust_test_quiet=-q 35 + rustdoc_test_quiet=--test-args -q 36 + else ifeq ($(quiet),quiet_) 37 + rust_test_quiet=-q 38 + rustdoc_test_quiet=--test-args -q 39 + else 40 + cargo_quiet=--verbose 41 + endif 42 + 43 + core-cfgs = \ 44 + --cfg no_fp_fmt_parse 45 + 46 + alloc-cfgs = \ 47 + --cfg no_fmt \ 48 + --cfg no_global_oom_handling \ 49 + --cfg no_macros \ 50 + --cfg no_rc \ 51 + --cfg no_str \ 52 + --cfg no_string \ 53 + --cfg no_sync \ 54 + --cfg no_thin 55 + 56 + quiet_cmd_rustdoc = RUSTDOC $(if $(rustdoc_host),H, ) $< 57 + cmd_rustdoc = \ 58 + OBJTREE=$(abspath $(objtree)) \ 59 + $(RUSTDOC) $(if $(rustdoc_host),$(rust_common_flags),$(rust_flags)) \ 60 + $(rustc_target_flags) -L$(objtree)/$(obj) \ 61 + --output $(objtree)/$(obj)/doc \ 62 + --crate-name $(subst rustdoc-,,$@) \ 63 + @$(objtree)/include/generated/rustc_cfg $< 64 + 65 + # The `html_logo_url` and `html_favicon_url` forms of the `doc` attribute 66 + # can be used to specify a custom logo. However: 67 + # - The given value is used as-is, thus it cannot be relative or a local file 68 + # (unlike the non-custom case) since the generated docs have subfolders. 69 + # - It requires adding it to every crate. 70 + # - It requires changing `core` which comes from the sysroot. 71 + # 72 + # Using `-Zcrate-attr` would solve the last two points, but not the first. 73 + # The https://github.com/rust-lang/rfcs/pull/3226 RFC suggests two new 74 + # command-like flags to solve the issue. Meanwhile, we use the non-custom case 75 + # and then retouch the generated files. 
76 + rustdoc: rustdoc-core rustdoc-macros rustdoc-compiler_builtins \ 77 + rustdoc-alloc rustdoc-kernel 78 + $(Q)cp $(srctree)/Documentation/images/logo.svg $(objtree)/$(obj)/doc 79 + $(Q)cp $(srctree)/Documentation/images/COPYING-logo $(objtree)/$(obj)/doc 80 + $(Q)find $(objtree)/$(obj)/doc -name '*.html' -type f -print0 | xargs -0 sed -Ei \ 81 + -e 's:rust-logo\.svg:logo.svg:g' \ 82 + -e 's:rust-logo\.png:logo.svg:g' \ 83 + -e 's:favicon\.svg:logo.svg:g' \ 84 + -e 's:<link rel="alternate icon" type="image/png" href="[./]*favicon-(16x16|32x32)\.png">::g' 85 + $(Q)echo '.logo-container > img { object-fit: contain; }' \ 86 + >> $(objtree)/$(obj)/doc/rustdoc.css 87 + 88 + rustdoc-macros: private rustdoc_host = yes 89 + rustdoc-macros: private rustc_target_flags = --crate-type proc-macro \ 90 + --extern proc_macro 91 + rustdoc-macros: $(src)/macros/lib.rs FORCE 92 + $(call if_changed,rustdoc) 93 + 94 + rustdoc-core: private rustc_target_flags = $(core-cfgs) 95 + rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs FORCE 96 + $(call if_changed,rustdoc) 97 + 98 + rustdoc-compiler_builtins: $(src)/compiler_builtins.rs rustdoc-core FORCE 99 + $(call if_changed,rustdoc) 100 + 101 + # We need to allow `rustdoc::broken_intra_doc_links` because some 102 + # `no_global_oom_handling` functions refer to non-`no_global_oom_handling` 103 + # functions. Ideally `rustdoc` would have a way to distinguish broken links 104 + # due to things that are "configured out" vs. entirely non-existing ones. 
+ rustdoc-alloc: private rustc_target_flags = $(alloc-cfgs) \
+     -Arustdoc::broken_intra_doc_links
+ rustdoc-alloc: $(src)/alloc/lib.rs rustdoc-core rustdoc-compiler_builtins FORCE
+	$(call if_changed,rustdoc)
+ 
+ rustdoc-kernel: private rustc_target_flags = --extern alloc \
+     --extern macros=$(objtree)/$(obj)/libmacros.so \
+     --extern bindings
+ rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-macros \
+     rustdoc-compiler_builtins rustdoc-alloc $(obj)/libmacros.so \
+     $(obj)/bindings.o FORCE
+	$(call if_changed,rustdoc)
+ 
+ quiet_cmd_rustc_test_library = RUSTC TL $<
+       cmd_rustc_test_library = \
+	OBJTREE=$(abspath $(objtree)) \
+	$(RUSTC) $(rust_common_flags) \
+		@$(objtree)/include/generated/rustc_cfg $(rustc_target_flags) \
+		--crate-type $(if $(rustc_test_library_proc),proc-macro,rlib) \
+		--out-dir $(objtree)/$(obj)/test --cfg testlib \
+		--sysroot $(objtree)/$(obj)/test/sysroot \
+		-L$(objtree)/$(obj)/test \
+		--crate-name $(subst rusttest-,,$(subst rusttestlib-,,$@)) $<
+ 
+ rusttestlib-macros: private rustc_target_flags = --extern proc_macro
+ rusttestlib-macros: private rustc_test_library_proc = yes
+ rusttestlib-macros: $(src)/macros/lib.rs rusttest-prepare FORCE
+	$(call if_changed,rustc_test_library)
+ 
+ rusttestlib-bindings: $(src)/bindings/lib.rs rusttest-prepare FORCE
+	$(call if_changed,rustc_test_library)
+ 
+ quiet_cmd_rustdoc_test = RUSTDOC T $<
+       cmd_rustdoc_test = \
+	OBJTREE=$(abspath $(objtree)) \
+	$(RUSTDOC) --test $(rust_common_flags) \
+		@$(objtree)/include/generated/rustc_cfg \
+		$(rustc_target_flags) $(rustdoc_test_target_flags) \
+		--sysroot $(objtree)/$(obj)/test/sysroot $(rustdoc_test_quiet) \
+		-L$(objtree)/$(obj)/test --output $(objtree)/$(obj)/doc \
+		--crate-name $(subst rusttest-,,$@) $<
+ 
+ # We cannot use `-Zpanic-abort-tests` because some tests are dynamic,
+ # so for the moment we skip `-Cpanic=abort`.
+ quiet_cmd_rustc_test = RUSTC T $<
+       cmd_rustc_test = \
+	OBJTREE=$(abspath $(objtree)) \
+	$(RUSTC) --test $(rust_common_flags) \
+		@$(objtree)/include/generated/rustc_cfg \
+		$(rustc_target_flags) --out-dir $(objtree)/$(obj)/test \
+		--sysroot $(objtree)/$(obj)/test/sysroot \
+		-L$(objtree)/$(obj)/test \
+		--crate-name $(subst rusttest-,,$@) $<; \
+	$(objtree)/$(obj)/test/$(subst rusttest-,,$@) $(rust_test_quiet) \
+		$(rustc_test_run_flags)
+ 
+ rusttest: rusttest-macros rusttest-kernel
+ 
+ # This prepares a custom sysroot with our custom `alloc` instead of
+ # the standard one.
+ #
+ # This requires several hacks:
+ # - Unlike `core` and `alloc`, `std` depends on more than a dozen crates,
+ #   including third-party crates that need to be downloaded, plus custom
+ #   `build.rs` steps. Thus hardcoding things here is not maintainable.
+ # - `cargo` knows how to build the standard library, but it is an unstable
+ #   feature so far (`-Zbuild-std`).
+ # - `cargo` only considers the use case of building the standard library
+ #   to use it in a given package. Thus we need to create a dummy package
+ #   and pick the generated libraries from there.
+ # - Since we only keep a subset of upstream `alloc` in-tree, we need
+ #   to recreate it on the fly by putting our sources on top.
+ # - The usual ways of modifying the dependency graph in `cargo` do not seem
+ #   to apply for the `-Zbuild-std` steps, thus we have to mislead it
+ #   by modifying the sources in the sysroot.
+ # - To avoid messing with the user's Rust installation, we create a clone
+ #   of the sysroot. However, `cargo` ignores `RUSTFLAGS` in the `-Zbuild-std`
+ #   steps, thus we use a wrapper binary passed via `RUSTC` to pass the flag.
+ #
+ # In the future, we hope to avoid the whole ordeal by either:
+ # - Making the `test` crate not depend on `std` (either improving upstream
+ #   or having our own custom crate).
+ # - Making the tests run in kernel space (requires the previous point).
+ # - Making `std` and friends be more like a "normal" crate, so that
+ #   `-Zbuild-std` and related hacks are not needed.
+ quiet_cmd_rustsysroot = RUSTSYSROOT
+       cmd_rustsysroot = \
+	rm -rf $(objtree)/$(obj)/test; \
+	mkdir -p $(objtree)/$(obj)/test; \
+	cp -a $(rustc_sysroot) $(objtree)/$(obj)/test/sysroot; \
+	cp -r $(srctree)/$(src)/alloc/* \
+		$(objtree)/$(obj)/test/sysroot/lib/rustlib/src/rust/library/alloc/src; \
+	echo '\#!/bin/sh' > $(objtree)/$(obj)/test/rustc_sysroot; \
+	echo "$(RUSTC) --sysroot=$(abspath $(objtree)/$(obj)/test/sysroot) \"\$$@\"" \
+		>> $(objtree)/$(obj)/test/rustc_sysroot; \
+	chmod u+x $(objtree)/$(obj)/test/rustc_sysroot; \
+	$(CARGO) -q new $(objtree)/$(obj)/test/dummy; \
+	RUSTC=$(objtree)/$(obj)/test/rustc_sysroot $(CARGO) $(cargo_quiet) \
+		test -Zbuild-std --target $(rustc_host_target) \
+		--manifest-path $(objtree)/$(obj)/test/dummy/Cargo.toml; \
+	rm $(objtree)/$(obj)/test/sysroot/lib/rustlib/$(rustc_host_target)/lib/*; \
+	cp $(objtree)/$(obj)/test/dummy/target/$(rustc_host_target)/debug/deps/* \
+		$(objtree)/$(obj)/test/sysroot/lib/rustlib/$(rustc_host_target)/lib
+ 
+ rusttest-prepare: FORCE
+	$(call if_changed,rustsysroot)
+ 
+ rusttest-macros: private rustc_target_flags = --extern proc_macro
+ rusttest-macros: private rustdoc_test_target_flags = --crate-type proc-macro
+ rusttest-macros: $(src)/macros/lib.rs rusttest-prepare FORCE
+	$(call if_changed,rustc_test)
+	$(call if_changed,rustdoc_test)
+ 
+ rusttest-kernel: private rustc_target_flags = --extern alloc \
+     --extern macros --extern bindings
+ rusttest-kernel: $(src)/kernel/lib.rs rusttest-prepare \
+     rusttestlib-macros rusttestlib-bindings FORCE
+	$(call if_changed,rustc_test)
+	$(call if_changed,rustc_test_library)
+ 
+ filechk_rust_target = $(objtree)/scripts/generate_rust_target < $<
+ 
+ $(obj)/target.json: $(objtree)/include/config/auto.conf FORCE
+	$(call filechk,rust_target)
+ 
+ ifdef CONFIG_CC_IS_CLANG
+ bindgen_c_flags = $(c_flags)
+ else
+ # bindgen relies on libclang to parse C. Ideally, bindgen would support a GCC
+ # plugin backend and/or the Clang driver would be perfectly compatible with GCC.
+ #
+ # For the moment, here we are tweaking the flags on the fly. This is a hack,
+ # and some kernel configurations may not work (e.g. `GCC_PLUGIN_RANDSTRUCT`
+ # if we end up using one of those structs).
+ bindgen_skip_c_flags := -mno-fp-ret-in-387 -mpreferred-stack-boundary=% \
+	-mskip-rax-setup -mgeneral-regs-only -msign-return-address=% \
+	-mindirect-branch=thunk-extern -mindirect-branch-register \
+	-mfunction-return=thunk-extern -mrecord-mcount -mabi=lp64 \
+	-mindirect-branch-cs-prefix -mstack-protector-guard% -mtraceback=no \
+	-mno-pointers-to-nested-functions -mno-string \
+	-mno-strict-align -mstrict-align \
+	-fconserve-stack -falign-jumps=% -falign-loops=% \
+	-femit-struct-debug-baseonly -fno-ipa-cp-clone -fno-ipa-sra \
+	-fno-partial-inlining -fplugin-arg-arm_ssp_per_task_plugin-% \
+	-fno-reorder-blocks -fno-allow-store-data-races -fasan-shadow-offset=% \
+	-fzero-call-used-regs=% -fno-stack-clash-protection \
+	-fno-inline-functions-called-once \
+	--param=% --param asan-%
+ 
+ # Derived from `scripts/Makefile.clang`.
+ BINDGEN_TARGET_x86	:= x86_64-linux-gnu
+ BINDGEN_TARGET		:= $(BINDGEN_TARGET_$(SRCARCH))
+ 
+ # All warnings are inhibited since GCC builds are very experimental,
+ # many GCC warnings are not supported by Clang, they may only appear in
+ # some configurations, with new GCC versions, etc.
+ bindgen_extra_c_flags = -w --target=$(BINDGEN_TARGET)
+ 
+ bindgen_c_flags = $(filter-out $(bindgen_skip_c_flags), $(c_flags)) \
+	$(bindgen_extra_c_flags)
+ endif
+ 
+ ifdef CONFIG_LTO
+ bindgen_c_flags_lto = $(filter-out $(CC_FLAGS_LTO), $(bindgen_c_flags))
+ else
+ bindgen_c_flags_lto = $(bindgen_c_flags)
+ endif
+ 
+ bindgen_c_flags_final = $(bindgen_c_flags_lto) -D__BINDGEN__
+ 
+ quiet_cmd_bindgen = BINDGEN $@
+       cmd_bindgen = \
+	$(BINDGEN) $< $(bindgen_target_flags) \
+		--use-core --with-derive-default --ctypes-prefix core::ffi --no-layout-tests \
+		--no-debug '.*' \
+		--size_t-is-usize -o $@ -- $(bindgen_c_flags_final) -DMODULE \
+		$(bindgen_target_cflags) $(bindgen_target_extra)
+ 
+ $(obj)/bindings/bindings_generated.rs: private bindgen_target_flags = \
+     $(shell grep -v '^\#\|^$$' $(srctree)/$(src)/bindgen_parameters)
+ $(obj)/bindings/bindings_generated.rs: $(src)/bindings/bindings_helper.h \
+     $(src)/bindgen_parameters FORCE
+	$(call if_changed_dep,bindgen)
+ 
+ # See `CFLAGS_REMOVE_helpers.o` above. In addition, Clang on C does not warn
+ # with `-Wmissing-declarations` (unlike GCC), so it is not strictly needed here
+ # given it is `libclang`; but for consistency, future Clang changes and/or
+ # a potential future GCC backend for `bindgen`, we disable it too.
+ $(obj)/bindings/bindings_helpers_generated.rs: private bindgen_target_flags = \
+     --blacklist-type '.*' --whitelist-var '' \
+     --whitelist-function 'rust_helper_.*'
+ $(obj)/bindings/bindings_helpers_generated.rs: private bindgen_target_cflags = \
+     -I$(objtree)/$(obj) -Wno-missing-prototypes -Wno-missing-declarations
+ $(obj)/bindings/bindings_helpers_generated.rs: private bindgen_target_extra = ; \
+     sed -Ei 's/pub fn rust_helper_([a-zA-Z0-9_]*)/#[link_name="rust_helper_\1"]\n    pub fn \1/g' $@
+ $(obj)/bindings/bindings_helpers_generated.rs: $(src)/helpers.c FORCE
+	$(call if_changed_dep,bindgen)
+ 
+ quiet_cmd_exports = EXPORTS $@
+       cmd_exports = \
+	$(NM) -p --defined-only $< \
+		| grep -E ' (T|R|D) ' | cut -d ' ' -f 3 \
+		| xargs -Isymbol \
+		echo 'EXPORT_SYMBOL_RUST_GPL(symbol);' > $@
+ 
+ $(obj)/exports_core_generated.h: $(obj)/core.o FORCE
+	$(call if_changed,exports)
+ 
+ $(obj)/exports_alloc_generated.h: $(obj)/alloc.o FORCE
+	$(call if_changed,exports)
+ 
+ $(obj)/exports_bindings_generated.h: $(obj)/bindings.o FORCE
+	$(call if_changed,exports)
+ 
+ $(obj)/exports_kernel_generated.h: $(obj)/kernel.o FORCE
+	$(call if_changed,exports)
+ 
+ quiet_cmd_rustc_procmacro = $(RUSTC_OR_CLIPPY_QUIET) P $@
+       cmd_rustc_procmacro = \
+	$(RUSTC_OR_CLIPPY) $(rust_common_flags) \
+		--emit=dep-info,link --extern proc_macro \
+		--crate-type proc-macro --out-dir $(objtree)/$(obj) \
+		--crate-name $(patsubst lib%.so,%,$(notdir $@)) $<; \
+	mv $(objtree)/$(obj)/$(patsubst lib%.so,%,$(notdir $@)).d $(depfile); \
+	sed -i '/^\#/d' $(depfile)
+ 
+ # Procedural macros can only be used with the `rustc` that compiled it.
+ # Therefore, to get `libmacros.so` automatically recompiled when the compiler
+ # version changes, we add `core.o` as a dependency (even if it is not needed).
+ $(obj)/libmacros.so: $(src)/macros/lib.rs $(obj)/core.o FORCE
+	$(call if_changed_dep,rustc_procmacro)
+ 
+ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L $@
+       cmd_rustc_library = \
+	OBJTREE=$(abspath $(objtree)) \
+	$(if $(skip_clippy),$(RUSTC),$(RUSTC_OR_CLIPPY)) \
+		$(filter-out $(skip_flags),$(rust_flags) $(rustc_target_flags)) \
+		--emit=dep-info,obj,metadata --crate-type rlib \
+		--out-dir $(objtree)/$(obj) -L$(objtree)/$(obj) \
+		--crate-name $(patsubst %.o,%,$(notdir $@)) $<; \
+	mv $(objtree)/$(obj)/$(patsubst %.o,%,$(notdir $@)).d $(depfile); \
+	sed -i '/^\#/d' $(depfile) \
+	$(if $(rustc_objcopy),;$(OBJCOPY) $(rustc_objcopy) $@)
+ 
+ rust-analyzer:
+	$(Q)$(srctree)/scripts/generate_rust_analyzer.py $(srctree) $(objtree) \
+		$(RUST_LIB_SRC) > $(objtree)/rust-project.json
+ 
+ $(obj)/core.o: private skip_clippy = 1
+ $(obj)/core.o: private skip_flags = -Dunreachable_pub
+ $(obj)/core.o: private rustc_target_flags = $(core-cfgs)
+ $(obj)/core.o: $(RUST_LIB_SRC)/core/src/lib.rs $(obj)/target.json FORCE
+	$(call if_changed_dep,rustc_library)
+ 
+ $(obj)/compiler_builtins.o: private rustc_objcopy = -w -W '__*'
+ $(obj)/compiler_builtins.o: $(src)/compiler_builtins.rs $(obj)/core.o FORCE
+	$(call if_changed_dep,rustc_library)
+ 
+ $(obj)/alloc.o: private skip_clippy = 1
+ $(obj)/alloc.o: private skip_flags = -Dunreachable_pub
+ $(obj)/alloc.o: private rustc_target_flags = $(alloc-cfgs)
+ $(obj)/alloc.o: $(src)/alloc/lib.rs $(obj)/compiler_builtins.o FORCE
+	$(call if_changed_dep,rustc_library)
+ 
+ $(obj)/bindings.o: $(src)/bindings/lib.rs \
+     $(obj)/compiler_builtins.o \
+     $(obj)/bindings/bindings_generated.rs \
+     $(obj)/bindings/bindings_helpers_generated.rs FORCE
+	$(call if_changed_dep,rustc_library)
+ 
+ $(obj)/kernel.o: private rustc_target_flags = --extern alloc \
+     --extern macros --extern bindings
+ $(obj)/kernel.o: $(src)/kernel/lib.rs $(obj)/alloc.o \
+     $(obj)/libmacros.so $(obj)/bindings.o FORCE
+	$(call if_changed_dep,rustc_library)
+ 
+ endif # CONFIG_RUST
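The `cmd_exports` rule above turns `nm -p --defined-only` output into `EXPORT_SYMBOL_RUST_GPL(...)` lines via `grep`, `cut`, and `xargs`. The following is a minimal userspace sketch of that same transformation in Rust, for illustration only; the sample symbol names are made up and not real kernel symbols.

```rust
// Sketch of the `cmd_exports` pipeline: keep symbols of kind T, R, or D
// (text, read-only data, data) and emit one EXPORT_SYMBOL_RUST_GPL line each.
fn generate_exports(nm_output: &str) -> String {
    nm_output
        .lines()
        .filter_map(|line| {
            // `nm -p` lines look like: "<address> <kind> <name>".
            let mut fields = line.split(' ');
            let _addr = fields.next()?;
            let kind = fields.next()?;
            let name = fields.next()?;
            if matches!(kind, "T" | "R" | "D") {
                Some(format!("EXPORT_SYMBOL_RUST_GPL({name});\n"))
            } else {
                None
            }
        })
        .collect()
}

fn main() {
    // Hypothetical `nm` output; only the T and D symbols are exported,
    // the lowercase (local) `t` symbol is skipped.
    let sample = "0000000000000000 T rust_symbol_a\n\
                  0000000000000010 t local_helper\n\
                  0000000000000020 D rust_symbol_b\n";
    print!("{}", generate_exports(sample));
}
```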
rust/alloc/README.md (+33)

+ # `alloc`
+ 
+ These source files come from the Rust standard library, hosted in
+ the <https://github.com/rust-lang/rust> repository, licensed under
+ "Apache-2.0 OR MIT" and adapted for kernel use. For copyright details,
+ see <https://github.com/rust-lang/rust/blob/master/COPYRIGHT>.
+ 
+ Please note that these files should be kept as close as possible to
+ upstream. In general, only additions should be performed (e.g. new
+ methods). Eventually, changes should make it into upstream so that,
+ at some point, this fork can be dropped from the kernel tree.
+ 
+ 
+ ## Rationale
+ 
+ On one hand, kernel folks wanted to keep `alloc` in-tree to have more
+ freedom in both workflow and actual features if actually needed
+ (e.g. receiver types if we ended up using them), which is reasonable.
+ 
+ On the other hand, Rust folks wanted to keep `alloc` as close to
+ upstream as possible and avoid as much divergence as possible, which
+ is also reasonable.
+ 
+ We agreed on a middle-ground: we would keep a subset of `alloc`
+ in-tree that would be as small and as close as possible to upstream.
+ Then, upstream can start adding the functions that we add to `alloc`
+ etc., until we reach a point where the kernel already knows exactly
+ what it needs in `alloc` and all the new methods are merged into
+ upstream, so that we can drop `alloc` from the kernel tree and go back
+ to using the upstream one.
+ 
+ By doing this, the kernel can go a bit faster now, and Rust can
+ slowly incorporate and discuss the changes as needed.
rust/alloc/alloc.rs (+440)

+ // SPDX-License-Identifier: Apache-2.0 OR MIT
+ 
+ //! Memory allocation APIs
+ 
+ #![stable(feature = "alloc_module", since = "1.28.0")]
+ 
+ #[cfg(not(test))]
+ use core::intrinsics;
+ use core::intrinsics::{min_align_of_val, size_of_val};
+ 
+ use core::ptr::Unique;
+ #[cfg(not(test))]
+ use core::ptr::{self, NonNull};
+ 
+ #[stable(feature = "alloc_module", since = "1.28.0")]
+ #[doc(inline)]
+ pub use core::alloc::*;
+ 
+ use core::marker::Destruct;
+ 
+ #[cfg(test)]
+ mod tests;
+ 
+ extern "Rust" {
+     // These are the magic symbols to call the global allocator. rustc generates
+     // them to call `__rg_alloc` etc. if there is a `#[global_allocator]` attribute
+     // (the code expanding that attribute macro generates those functions), or to call
+     // the default implementations in libstd (`__rdl_alloc` etc. in `library/std/src/alloc.rs`)
+     // otherwise.
+     // The rustc fork of LLVM also special-cases these function names to be able to optimize them
+     // like `malloc`, `realloc`, and `free`, respectively.
+     #[rustc_allocator]
+     #[rustc_allocator_nounwind]
+     fn __rust_alloc(size: usize, align: usize) -> *mut u8;
+     #[rustc_allocator_nounwind]
+     fn __rust_dealloc(ptr: *mut u8, size: usize, align: usize);
+     #[rustc_allocator_nounwind]
+     fn __rust_realloc(ptr: *mut u8, old_size: usize, align: usize, new_size: usize) -> *mut u8;
+     #[rustc_allocator_nounwind]
+     fn __rust_alloc_zeroed(size: usize, align: usize) -> *mut u8;
+ }
+ 
+ /// The global memory allocator.
+ ///
+ /// This type implements the [`Allocator`] trait by forwarding calls
+ /// to the allocator registered with the `#[global_allocator]` attribute
+ /// if there is one, or the `std` crate’s default.
+ ///
+ /// Note: while this type is unstable, the functionality it provides can be
+ /// accessed through the [free functions in `alloc`](self#functions).
+ #[unstable(feature = "allocator_api", issue = "32838")]
+ #[derive(Copy, Clone, Default, Debug)]
+ #[cfg(not(test))]
+ pub struct Global;
+ 
+ #[cfg(test)]
+ pub use std::alloc::Global;
+ 
+ /// Allocate memory with the global allocator.
+ ///
+ /// This function forwards calls to the [`GlobalAlloc::alloc`] method
+ /// of the allocator registered with the `#[global_allocator]` attribute
+ /// if there is one, or the `std` crate’s default.
+ ///
+ /// This function is expected to be deprecated in favor of the `alloc` method
+ /// of the [`Global`] type when it and the [`Allocator`] trait become stable.
+ ///
+ /// # Safety
+ ///
+ /// See [`GlobalAlloc::alloc`].
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::alloc::{alloc, dealloc, Layout};
+ ///
+ /// unsafe {
+ ///     let layout = Layout::new::<u16>();
+ ///     let ptr = alloc(layout);
+ ///
+ ///     *(ptr as *mut u16) = 42;
+ ///     assert_eq!(*(ptr as *mut u16), 42);
+ ///
+ ///     dealloc(ptr, layout);
+ /// }
+ /// ```
+ #[stable(feature = "global_alloc", since = "1.28.0")]
+ #[must_use = "losing the pointer will leak memory"]
+ #[inline]
+ pub unsafe fn alloc(layout: Layout) -> *mut u8 {
+     unsafe { __rust_alloc(layout.size(), layout.align()) }
+ }
+ 
+ /// Deallocate memory with the global allocator.
+ ///
+ /// This function forwards calls to the [`GlobalAlloc::dealloc`] method
+ /// of the allocator registered with the `#[global_allocator]` attribute
+ /// if there is one, or the `std` crate’s default.
+ ///
+ /// This function is expected to be deprecated in favor of the `dealloc` method
+ /// of the [`Global`] type when it and the [`Allocator`] trait become stable.
+ ///
+ /// # Safety
+ ///
+ /// See [`GlobalAlloc::dealloc`].
+ #[stable(feature = "global_alloc", since = "1.28.0")]
+ #[inline]
+ pub unsafe fn dealloc(ptr: *mut u8, layout: Layout) {
+     unsafe { __rust_dealloc(ptr, layout.size(), layout.align()) }
+ }
+ 
+ /// Reallocate memory with the global allocator.
+ ///
+ /// This function forwards calls to the [`GlobalAlloc::realloc`] method
+ /// of the allocator registered with the `#[global_allocator]` attribute
+ /// if there is one, or the `std` crate’s default.
+ ///
+ /// This function is expected to be deprecated in favor of the `realloc` method
+ /// of the [`Global`] type when it and the [`Allocator`] trait become stable.
+ ///
+ /// # Safety
+ ///
+ /// See [`GlobalAlloc::realloc`].
+ #[stable(feature = "global_alloc", since = "1.28.0")]
+ #[must_use = "losing the pointer will leak memory"]
+ #[inline]
+ pub unsafe fn realloc(ptr: *mut u8, layout: Layout, new_size: usize) -> *mut u8 {
+     unsafe { __rust_realloc(ptr, layout.size(), layout.align(), new_size) }
+ }
+ 
+ /// Allocate zero-initialized memory with the global allocator.
+ ///
+ /// This function forwards calls to the [`GlobalAlloc::alloc_zeroed`] method
+ /// of the allocator registered with the `#[global_allocator]` attribute
+ /// if there is one, or the `std` crate’s default.
+ ///
+ /// This function is expected to be deprecated in favor of the `alloc_zeroed` method
+ /// of the [`Global`] type when it and the [`Allocator`] trait become stable.
+ ///
+ /// # Safety
+ ///
+ /// See [`GlobalAlloc::alloc_zeroed`].
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use std::alloc::{alloc_zeroed, dealloc, Layout};
+ ///
+ /// unsafe {
+ ///     let layout = Layout::new::<u16>();
+ ///     let ptr = alloc_zeroed(layout);
+ ///
+ ///     assert_eq!(*(ptr as *mut u16), 0);
+ ///
+ ///     dealloc(ptr, layout);
+ /// }
+ /// ```
+ #[stable(feature = "global_alloc", since = "1.28.0")]
+ #[must_use = "losing the pointer will leak memory"]
+ #[inline]
+ pub unsafe fn alloc_zeroed(layout: Layout) -> *mut u8 {
+     unsafe { __rust_alloc_zeroed(layout.size(), layout.align()) }
+ }
+ 
+ #[cfg(not(test))]
+ impl Global {
+     #[inline]
+     fn alloc_impl(&self, layout: Layout, zeroed: bool) -> Result<NonNull<[u8]>, AllocError> {
+         match layout.size() {
+             0 => Ok(NonNull::slice_from_raw_parts(layout.dangling(), 0)),
+             // SAFETY: `layout` is non-zero in size,
+             size => unsafe {
+                 let raw_ptr = if zeroed { alloc_zeroed(layout) } else { alloc(layout) };
+                 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
+                 Ok(NonNull::slice_from_raw_parts(ptr, size))
+             },
+         }
+     }
+ 
+     // SAFETY: Same as `Allocator::grow`
+     #[inline]
+     unsafe fn grow_impl(
+         &self,
+         ptr: NonNull<u8>,
+         old_layout: Layout,
+         new_layout: Layout,
+         zeroed: bool,
+     ) -> Result<NonNull<[u8]>, AllocError> {
+         debug_assert!(
+             new_layout.size() >= old_layout.size(),
+             "`new_layout.size()` must be greater than or equal to `old_layout.size()`"
+         );
+ 
+         match old_layout.size() {
+             0 => self.alloc_impl(new_layout, zeroed),
+ 
+             // SAFETY: `new_size` is non-zero as `old_size` is greater than or equal to `new_size`
+             // as required by safety conditions. Other conditions must be upheld by the caller
+             old_size if old_layout.align() == new_layout.align() => unsafe {
+                 let new_size = new_layout.size();
+ 
+                 // `realloc` probably checks for `new_size >= old_layout.size()` or something similar.
+                 intrinsics::assume(new_size >= old_layout.size());
+ 
+                 let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size);
+                 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
+                 if zeroed {
+                     raw_ptr.add(old_size).write_bytes(0, new_size - old_size);
+                 }
+                 Ok(NonNull::slice_from_raw_parts(ptr, new_size))
+             },
+ 
+             // SAFETY: because `new_layout.size()` must be greater than or equal to `old_size`,
+             // both the old and new memory allocation are valid for reads and writes for `old_size`
+             // bytes. Also, because the old allocation wasn't yet deallocated, it cannot overlap
+             // `new_ptr`. Thus, the call to `copy_nonoverlapping` is safe. The safety contract
+             // for `dealloc` must be upheld by the caller.
+             old_size => unsafe {
+                 let new_ptr = self.alloc_impl(new_layout, zeroed)?;
+                 ptr::copy_nonoverlapping(ptr.as_ptr(), new_ptr.as_mut_ptr(), old_size);
+                 self.deallocate(ptr, old_layout);
+                 Ok(new_ptr)
+             },
+         }
+     }
+ }
+ 
+ #[unstable(feature = "allocator_api", issue = "32838")]
+ #[cfg(not(test))]
+ unsafe impl Allocator for Global {
+     #[inline]
+     fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
+         self.alloc_impl(layout, false)
+     }
+ 
+     #[inline]
+     fn allocate_zeroed(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
+         self.alloc_impl(layout, true)
+     }
+ 
+     #[inline]
+     unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout) {
+         if layout.size() != 0 {
+             // SAFETY: `layout` is non-zero in size,
+             // other conditions must be upheld by the caller
+             unsafe { dealloc(ptr.as_ptr(), layout) }
+         }
+     }
+ 
+     #[inline]
+     unsafe fn grow(
+         &self,
+         ptr: NonNull<u8>,
+         old_layout: Layout,
+         new_layout: Layout,
+     ) -> Result<NonNull<[u8]>, AllocError> {
+         // SAFETY: all conditions must be upheld by the caller
+         unsafe { self.grow_impl(ptr, old_layout, new_layout, false) }
+     }
+ 
+     #[inline]
+     unsafe fn grow_zeroed(
+         &self,
+         ptr: NonNull<u8>,
+         old_layout: Layout,
+         new_layout: Layout,
+     ) -> Result<NonNull<[u8]>, AllocError> {
+         // SAFETY: all conditions must be upheld by the caller
+         unsafe { self.grow_impl(ptr, old_layout, new_layout, true) }
+     }
+ 
+     #[inline]
+     unsafe fn shrink(
+         &self,
+         ptr: NonNull<u8>,
+         old_layout: Layout,
+         new_layout: Layout,
+     ) -> Result<NonNull<[u8]>, AllocError> {
+         debug_assert!(
+             new_layout.size() <= old_layout.size(),
+             "`new_layout.size()` must be smaller than or equal to `old_layout.size()`"
+         );
+ 
+         match new_layout.size() {
+             // SAFETY: conditions must be upheld by the caller
+             0 => unsafe {
+                 self.deallocate(ptr, old_layout);
+                 Ok(NonNull::slice_from_raw_parts(new_layout.dangling(), 0))
+             },
+ 
+             // SAFETY: `new_size` is non-zero. Other conditions must be upheld by the caller
+             new_size if old_layout.align() == new_layout.align() => unsafe {
+                 // `realloc` probably checks for `new_size <= old_layout.size()` or something similar.
+                 intrinsics::assume(new_size <= old_layout.size());
+ 
+                 let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size);
+                 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
+                 Ok(NonNull::slice_from_raw_parts(ptr, new_size))
+             },
+ 
+             // SAFETY: because `new_size` must be smaller than or equal to `old_layout.size()`,
+             // both the old and new memory allocation are valid for reads and writes for `new_size`
+             // bytes. Also, because the old allocation wasn't yet deallocated, it cannot overlap
+             // `new_ptr`. Thus, the call to `copy_nonoverlapping` is safe. The safety contract
+             // for `dealloc` must be upheld by the caller.
+             new_size => unsafe {
+                 let new_ptr = self.allocate(new_layout)?;
+                 ptr::copy_nonoverlapping(ptr.as_ptr(), new_ptr.as_mut_ptr(), new_size);
+                 self.deallocate(ptr, old_layout);
+                 Ok(new_ptr)
+             },
+         }
+     }
+ }
+ 
+ /// The allocator for unique pointers.
+ #[cfg(all(not(no_global_oom_handling), not(test)))]
+ #[lang = "exchange_malloc"]
+ #[inline]
+ unsafe fn exchange_malloc(size: usize, align: usize) -> *mut u8 {
+     let layout = unsafe { Layout::from_size_align_unchecked(size, align) };
+     match Global.allocate(layout) {
+         Ok(ptr) => ptr.as_mut_ptr(),
+         Err(_) => handle_alloc_error(layout),
+     }
+ }
+ 
+ #[cfg_attr(not(test), lang = "box_free")]
+ #[inline]
+ #[rustc_const_unstable(feature = "const_box", issue = "92521")]
+ // This signature has to be the same as `Box`, otherwise an ICE will happen.
+ // When an additional parameter to `Box` is added (like `A: Allocator`), this has to be added here as
+ // well.
+ // For example if `Box` is changed to `struct Box<T: ?Sized, A: Allocator>(Unique<T>, A)`,
+ // this function has to be changed to `fn box_free<T: ?Sized, A: Allocator>(Unique<T>, A)` as well.
+ pub(crate) const unsafe fn box_free<T: ?Sized, A: ~const Allocator + ~const Destruct>(
+     ptr: Unique<T>,
+     alloc: A,
+ ) {
+     unsafe {
+         let size = size_of_val(ptr.as_ref());
+         let align = min_align_of_val(ptr.as_ref());
+         let layout = Layout::from_size_align_unchecked(size, align);
+         alloc.deallocate(From::from(ptr.cast()), layout)
+     }
+ }
+ 
+ // # Allocation error handler
+ 
+ #[cfg(not(no_global_oom_handling))]
+ extern "Rust" {
+     // This is the magic symbol to call the global alloc error handler. rustc generates
+     // it to call `__rg_oom` if there is a `#[alloc_error_handler]`, or to call the
+     // default implementations below (`__rdl_oom`) otherwise.
+     fn __rust_alloc_error_handler(size: usize, align: usize) -> !;
+ }
+ 
+ /// Abort on memory allocation error or failure.
+ ///
+ /// Callers of memory allocation APIs wishing to abort computation
+ /// in response to an allocation error are encouraged to call this function,
+ /// rather than directly invoking `panic!` or similar.
+ ///
+ /// The default behavior of this function is to print a message to standard error
+ /// and abort the process.
+ /// It can be replaced with [`set_alloc_error_hook`] and [`take_alloc_error_hook`].
+ ///
+ /// [`set_alloc_error_hook`]: ../../std/alloc/fn.set_alloc_error_hook.html
+ /// [`take_alloc_error_hook`]: ../../std/alloc/fn.take_alloc_error_hook.html
+ #[stable(feature = "global_alloc", since = "1.28.0")]
+ #[rustc_const_unstable(feature = "const_alloc_error", issue = "92523")]
+ #[cfg(all(not(no_global_oom_handling), not(test)))]
+ #[cold]
+ pub const fn handle_alloc_error(layout: Layout) -> ! {
+     const fn ct_error(_: Layout) -> ! {
+         panic!("allocation failed");
+     }
+ 
+     fn rt_error(layout: Layout) -> ! {
+         unsafe {
+             __rust_alloc_error_handler(layout.size(), layout.align());
+         }
+     }
+ 
+     unsafe { core::intrinsics::const_eval_select((layout,), ct_error, rt_error) }
+ }
+ 
+ // For alloc test `std::alloc::handle_alloc_error` can be used directly.
+ #[cfg(all(not(no_global_oom_handling), test))]
+ pub use std::alloc::handle_alloc_error;
+ 
+ #[cfg(all(not(no_global_oom_handling), not(test)))]
+ #[doc(hidden)]
+ #[allow(unused_attributes)]
+ #[unstable(feature = "alloc_internals", issue = "none")]
+ pub mod __alloc_error_handler {
+     use crate::alloc::Layout;
+ 
+     // called via generated `__rust_alloc_error_handler`
+ 
+     // if there is no `#[alloc_error_handler]`
+     #[rustc_std_internal_symbol]
+     pub unsafe extern "C-unwind" fn __rdl_oom(size: usize, _align: usize) -> ! {
+         panic!("memory allocation of {size} bytes failed")
+     }
+ 
+     // if there is an `#[alloc_error_handler]`
+     #[rustc_std_internal_symbol]
+     pub unsafe extern "C-unwind" fn __rg_oom(size: usize, align: usize) -> ! {
+         let layout = unsafe { Layout::from_size_align_unchecked(size, align) };
+         extern "Rust" {
+             #[lang = "oom"]
+             fn oom_impl(layout: Layout) -> !;
+         }
+         unsafe { oom_impl(layout) }
+     }
+ }
+ 
+ /// Specialize clones into pre-allocated, uninitialized memory.
+ /// Used by `Box::clone` and `Rc`/`Arc::make_mut`.
+ pub(crate) trait WriteCloneIntoRaw: Sized {
+     unsafe fn write_clone_into_raw(&self, target: *mut Self);
+ }
+ 
+ impl<T: Clone> WriteCloneIntoRaw for T {
+     #[inline]
+     default unsafe fn write_clone_into_raw(&self, target: *mut Self) {
+         // Having allocated *first* may allow the optimizer to create
+         // the cloned value in-place, skipping the local and move.
+         unsafe { target.write(self.clone()) };
+     }
+ }
+ 
+ impl<T: Copy> WriteCloneIntoRaw for T {
+     #[inline]
+     unsafe fn write_clone_into_raw(&self, target: *mut Self) {
+         // We can always copy in-place, without ever involving a local value.
+         unsafe { target.copy_from_nonoverlapping(self, 1) };
+     }
+ }
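The free functions in this file (`alloc`, `alloc_zeroed`, `realloc`, `dealloc`) have the same shape as the stable `std::alloc` API, so their behavior can be exercised in a small userspace sketch; in the kernel, the same `__rust_*` symbols are instead backed by the kernel allocator rather than the `std` default.

```rust
use std::alloc::{alloc_zeroed, dealloc, realloc, Layout};

fn main() {
    unsafe {
        // Allocate room for four u16 values, zero-initialized.
        let layout = Layout::array::<u16>(4).unwrap();
        let ptr = alloc_zeroed(layout) as *mut u16;
        assert!(!ptr.is_null());
        assert_eq!(*ptr, 0); // `alloc_zeroed` hands back zeroed memory.
        *ptr = 42;

        // Grow the block; `realloc` takes the *old* layout plus the new size,
        // and the old contents are preserved.
        let new_size = Layout::array::<u16>(8).unwrap().size();
        let grown = realloc(ptr as *mut u8, layout, new_size) as *mut u16;
        assert!(!grown.is_null());
        assert_eq!(*grown, 42);

        // `dealloc` must be given the layout the block currently has.
        let grown_layout = Layout::from_size_align(new_size, layout.align()).unwrap();
        dealloc(grown as *mut u8, grown_layout);
    }
}
```

Note the asymmetry that `Global::grow_impl` above papers over: `realloc` only reuses the block when the alignment is unchanged, which is why the in-tree implementation falls back to allocate-copy-deallocate when alignments differ.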
rust/alloc/borrow.rs (+498)

+ // SPDX-License-Identifier: Apache-2.0 OR MIT
+ 
+ //! A module for working with borrowed data.
+ 
+ #![stable(feature = "rust1", since = "1.0.0")]
+ 
+ use core::cmp::Ordering;
+ use core::hash::{Hash, Hasher};
+ use core::ops::Deref;
+ #[cfg(not(no_global_oom_handling))]
+ use core::ops::{Add, AddAssign};
+ 
+ #[stable(feature = "rust1", since = "1.0.0")]
+ pub use core::borrow::{Borrow, BorrowMut};
+ 
+ use core::fmt;
+ #[cfg(not(no_global_oom_handling))]
+ use crate::string::String;
+ 
+ use Cow::*;
+ 
+ #[stable(feature = "rust1", since = "1.0.0")]
+ impl<'a, B: ?Sized> Borrow<B> for Cow<'a, B>
+ where
+     B: ToOwned,
+     <B as ToOwned>::Owned: 'a,
+ {
+     fn borrow(&self) -> &B {
+         &**self
+     }
+ }
+ 
+ /// A generalization of `Clone` to borrowed data.
+ ///
+ /// Some types make it possible to go from borrowed to owned, usually by
+ /// implementing the `Clone` trait. But `Clone` works only for going from `&T`
+ /// to `T`. The `ToOwned` trait generalizes `Clone` to construct owned data
+ /// from any borrow of a given type.
+ #[cfg_attr(not(test), rustc_diagnostic_item = "ToOwned")]
+ #[stable(feature = "rust1", since = "1.0.0")]
+ pub trait ToOwned {
+     /// The resulting type after obtaining ownership.
+     #[stable(feature = "rust1", since = "1.0.0")]
+     type Owned: Borrow<Self>;
+ 
+     /// Creates owned data from borrowed data, usually by cloning.
+     ///
+     /// # Examples
+     ///
+     /// Basic usage:
+     ///
+     /// ```
+     /// let s: &str = "a";
+     /// let ss: String = s.to_owned();
+     ///
+     /// let v: &[i32] = &[1, 2];
+     /// let vv: Vec<i32> = v.to_owned();
+     /// ```
+     #[stable(feature = "rust1", since = "1.0.0")]
+     #[must_use = "cloning is often expensive and is not expected to have side effects"]
+     fn to_owned(&self) -> Self::Owned;
+ 
+     /// Uses borrowed data to replace owned data, usually by cloning.
64 + /// 65 + /// This is borrow-generalized version of `Clone::clone_from`. 66 + /// 67 + /// # Examples 68 + /// 69 + /// Basic usage: 70 + /// 71 + /// ``` 72 + /// # #![feature(toowned_clone_into)] 73 + /// let mut s: String = String::new(); 74 + /// "hello".clone_into(&mut s); 75 + /// 76 + /// let mut v: Vec<i32> = Vec::new(); 77 + /// [1, 2][..].clone_into(&mut v); 78 + /// ``` 79 + #[unstable(feature = "toowned_clone_into", reason = "recently added", issue = "41263")] 80 + fn clone_into(&self, target: &mut Self::Owned) { 81 + *target = self.to_owned(); 82 + } 83 + } 84 + 85 + #[stable(feature = "rust1", since = "1.0.0")] 86 + impl<T> ToOwned for T 87 + where 88 + T: Clone, 89 + { 90 + type Owned = T; 91 + fn to_owned(&self) -> T { 92 + self.clone() 93 + } 94 + 95 + fn clone_into(&self, target: &mut T) { 96 + target.clone_from(self); 97 + } 98 + } 99 + 100 + /// A clone-on-write smart pointer. 101 + /// 102 + /// The type `Cow` is a smart pointer providing clone-on-write functionality: it 103 + /// can enclose and provide immutable access to borrowed data, and clone the 104 + /// data lazily when mutation or ownership is required. The type is designed to 105 + /// work with general borrowed data via the `Borrow` trait. 106 + /// 107 + /// `Cow` implements `Deref`, which means that you can call 108 + /// non-mutating methods directly on the data it encloses. If mutation 109 + /// is desired, `to_mut` will obtain a mutable reference to an owned 110 + /// value, cloning if necessary. 111 + /// 112 + /// If you need reference-counting pointers, note that 113 + /// [`Rc::make_mut`][crate::rc::Rc::make_mut] and 114 + /// [`Arc::make_mut`][crate::sync::Arc::make_mut] can provide clone-on-write 115 + /// functionality as well. 
116 + /// 117 + /// # Examples 118 + /// 119 + /// ``` 120 + /// use std::borrow::Cow; 121 + /// 122 + /// fn abs_all(input: &mut Cow<[i32]>) { 123 + /// for i in 0..input.len() { 124 + /// let v = input[i]; 125 + /// if v < 0 { 126 + /// // Clones into a vector if not already owned. 127 + /// input.to_mut()[i] = -v; 128 + /// } 129 + /// } 130 + /// } 131 + /// 132 + /// // No clone occurs because `input` doesn't need to be mutated. 133 + /// let slice = [0, 1, 2]; 134 + /// let mut input = Cow::from(&slice[..]); 135 + /// abs_all(&mut input); 136 + /// 137 + /// // Clone occurs because `input` needs to be mutated. 138 + /// let slice = [-1, 0, 1]; 139 + /// let mut input = Cow::from(&slice[..]); 140 + /// abs_all(&mut input); 141 + /// 142 + /// // No clone occurs because `input` is already owned. 143 + /// let mut input = Cow::from(vec![-1, 0, 1]); 144 + /// abs_all(&mut input); 145 + /// ``` 146 + /// 147 + /// Another example showing how to keep `Cow` in a struct: 148 + /// 149 + /// ``` 150 + /// use std::borrow::Cow; 151 + /// 152 + /// struct Items<'a, X: 'a> where [X]: ToOwned<Owned = Vec<X>> { 153 + /// values: Cow<'a, [X]>, 154 + /// } 155 + /// 156 + /// impl<'a, X: Clone + 'a> Items<'a, X> where [X]: ToOwned<Owned = Vec<X>> { 157 + /// fn new(v: Cow<'a, [X]>) -> Self { 158 + /// Items { values: v } 159 + /// } 160 + /// } 161 + /// 162 + /// // Creates a container from borrowed values of a slice 163 + /// let readonly = [1, 2]; 164 + /// let borrowed = Items::new((&readonly[..]).into()); 165 + /// match borrowed { 166 + /// Items { values: Cow::Borrowed(b) } => println!("borrowed {b:?}"), 167 + /// _ => panic!("expect borrowed value"), 168 + /// } 169 + /// 170 + /// let mut clone_on_write = borrowed; 171 + /// // Mutates the data from slice into owned vec and pushes a new value on top 172 + /// clone_on_write.values.to_mut().push(3); 173 + /// println!("clone_on_write = {:?}", clone_on_write.values); 174 + /// 175 + /// // The data was mutated. 
Let's check it out. 176 + /// match clone_on_write { 177 + /// Items { values: Cow::Owned(_) } => println!("clone_on_write contains owned data"), 178 + /// _ => panic!("expect owned data"), 179 + /// } 180 + /// ``` 181 + #[stable(feature = "rust1", since = "1.0.0")] 182 + #[cfg_attr(not(test), rustc_diagnostic_item = "Cow")] 183 + pub enum Cow<'a, B: ?Sized + 'a> 184 + where 185 + B: ToOwned, 186 + { 187 + /// Borrowed data. 188 + #[stable(feature = "rust1", since = "1.0.0")] 189 + Borrowed(#[stable(feature = "rust1", since = "1.0.0")] &'a B), 190 + 191 + /// Owned data. 192 + #[stable(feature = "rust1", since = "1.0.0")] 193 + Owned(#[stable(feature = "rust1", since = "1.0.0")] <B as ToOwned>::Owned), 194 + } 195 + 196 + #[stable(feature = "rust1", since = "1.0.0")] 197 + impl<B: ?Sized + ToOwned> Clone for Cow<'_, B> { 198 + fn clone(&self) -> Self { 199 + match *self { 200 + Borrowed(b) => Borrowed(b), 201 + Owned(ref o) => { 202 + let b: &B = o.borrow(); 203 + Owned(b.to_owned()) 204 + } 205 + } 206 + } 207 + 208 + fn clone_from(&mut self, source: &Self) { 209 + match (self, source) { 210 + (&mut Owned(ref mut dest), &Owned(ref o)) => o.borrow().clone_into(dest), 211 + (t, s) => *t = s.clone(), 212 + } 213 + } 214 + } 215 + 216 + impl<B: ?Sized + ToOwned> Cow<'_, B> { 217 + /// Returns true if the data is borrowed, i.e. if `to_mut` would require additional work. 
218 + /// 219 + /// # Examples 220 + /// 221 + /// ``` 222 + /// #![feature(cow_is_borrowed)] 223 + /// use std::borrow::Cow; 224 + /// 225 + /// let cow = Cow::Borrowed("moo"); 226 + /// assert!(cow.is_borrowed()); 227 + /// 228 + /// let bull: Cow<'_, str> = Cow::Owned("...moo?".to_string()); 229 + /// assert!(!bull.is_borrowed()); 230 + /// ``` 231 + #[unstable(feature = "cow_is_borrowed", issue = "65143")] 232 + #[rustc_const_unstable(feature = "const_cow_is_borrowed", issue = "65143")] 233 + pub const fn is_borrowed(&self) -> bool { 234 + match *self { 235 + Borrowed(_) => true, 236 + Owned(_) => false, 237 + } 238 + } 239 + 240 + /// Returns true if the data is owned, i.e. if `to_mut` would be a no-op. 241 + /// 242 + /// # Examples 243 + /// 244 + /// ``` 245 + /// #![feature(cow_is_borrowed)] 246 + /// use std::borrow::Cow; 247 + /// 248 + /// let cow: Cow<'_, str> = Cow::Owned("moo".to_string()); 249 + /// assert!(cow.is_owned()); 250 + /// 251 + /// let bull = Cow::Borrowed("...moo?"); 252 + /// assert!(!bull.is_owned()); 253 + /// ``` 254 + #[unstable(feature = "cow_is_borrowed", issue = "65143")] 255 + #[rustc_const_unstable(feature = "const_cow_is_borrowed", issue = "65143")] 256 + pub const fn is_owned(&self) -> bool { 257 + !self.is_borrowed() 258 + } 259 + 260 + /// Acquires a mutable reference to the owned form of the data. 261 + /// 262 + /// Clones the data if it is not already owned. 263 + /// 264 + /// # Examples 265 + /// 266 + /// ``` 267 + /// use std::borrow::Cow; 268 + /// 269 + /// let mut cow = Cow::Borrowed("foo"); 270 + /// cow.to_mut().make_ascii_uppercase(); 271 + /// 272 + /// assert_eq!( 273 + /// cow, 274 + /// Cow::Owned(String::from("FOO")) as Cow<str> 275 + /// ); 276 + /// ``` 277 + #[stable(feature = "rust1", since = "1.0.0")] 278 + pub fn to_mut(&mut self) -> &mut <B as ToOwned>::Owned { 279 + match *self { 280 + Borrowed(borrowed) => { 281 + *self = Owned(borrowed.to_owned()); 282 + match *self { 283 + Borrowed(..) 
=> unreachable!(), 284 + Owned(ref mut owned) => owned, 285 + } 286 + } 287 + Owned(ref mut owned) => owned, 288 + } 289 + } 290 + 291 + /// Extracts the owned data. 292 + /// 293 + /// Clones the data if it is not already owned. 294 + /// 295 + /// # Examples 296 + /// 297 + /// Calling `into_owned` on a `Cow::Borrowed` returns a clone of the borrowed data: 298 + /// 299 + /// ``` 300 + /// use std::borrow::Cow; 301 + /// 302 + /// let s = "Hello world!"; 303 + /// let cow = Cow::Borrowed(s); 304 + /// 305 + /// assert_eq!( 306 + /// cow.into_owned(), 307 + /// String::from(s) 308 + /// ); 309 + /// ``` 310 + /// 311 + /// Calling `into_owned` on a `Cow::Owned` returns the owned data. The data is moved out of the 312 + /// `Cow` without being cloned. 313 + /// 314 + /// ``` 315 + /// use std::borrow::Cow; 316 + /// 317 + /// let s = "Hello world!"; 318 + /// let cow: Cow<str> = Cow::Owned(String::from(s)); 319 + /// 320 + /// assert_eq!( 321 + /// cow.into_owned(), 322 + /// String::from(s) 323 + /// ); 324 + /// ``` 325 + #[stable(feature = "rust1", since = "1.0.0")] 326 + pub fn into_owned(self) -> <B as ToOwned>::Owned { 327 + match self { 328 + Borrowed(borrowed) => borrowed.to_owned(), 329 + Owned(owned) => owned, 330 + } 331 + } 332 + } 333 + 334 + #[stable(feature = "rust1", since = "1.0.0")] 335 + #[rustc_const_unstable(feature = "const_deref", issue = "88955")] 336 + impl<B: ?Sized + ToOwned> const Deref for Cow<'_, B> 337 + where 338 + B::Owned: ~const Borrow<B>, 339 + { 340 + type Target = B; 341 + 342 + fn deref(&self) -> &B { 343 + match *self { 344 + Borrowed(borrowed) => borrowed, 345 + Owned(ref owned) => owned.borrow(), 346 + } 347 + } 348 + } 349 + 350 + #[stable(feature = "rust1", since = "1.0.0")] 351 + impl<B: ?Sized> Eq for Cow<'_, B> where B: Eq + ToOwned {} 352 + 353 + #[stable(feature = "rust1", since = "1.0.0")] 354 + impl<B: ?Sized> Ord for Cow<'_, B> 355 + where 356 + B: Ord + ToOwned, 357 + { 358 + #[inline] 359 + fn cmp(&self, other: 
&Self) -> Ordering { 360 + Ord::cmp(&**self, &**other) 361 + } 362 + } 363 + 364 + #[stable(feature = "rust1", since = "1.0.0")] 365 + impl<'a, 'b, B: ?Sized, C: ?Sized> PartialEq<Cow<'b, C>> for Cow<'a, B> 366 + where 367 + B: PartialEq<C> + ToOwned, 368 + C: ToOwned, 369 + { 370 + #[inline] 371 + fn eq(&self, other: &Cow<'b, C>) -> bool { 372 + PartialEq::eq(&**self, &**other) 373 + } 374 + } 375 + 376 + #[stable(feature = "rust1", since = "1.0.0")] 377 + impl<'a, B: ?Sized> PartialOrd for Cow<'a, B> 378 + where 379 + B: PartialOrd + ToOwned, 380 + { 381 + #[inline] 382 + fn partial_cmp(&self, other: &Cow<'a, B>) -> Option<Ordering> { 383 + PartialOrd::partial_cmp(&**self, &**other) 384 + } 385 + } 386 + 387 + #[stable(feature = "rust1", since = "1.0.0")] 388 + impl<B: ?Sized> fmt::Debug for Cow<'_, B> 389 + where 390 + B: fmt::Debug + ToOwned<Owned: fmt::Debug>, 391 + { 392 + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 393 + match *self { 394 + Borrowed(ref b) => fmt::Debug::fmt(b, f), 395 + Owned(ref o) => fmt::Debug::fmt(o, f), 396 + } 397 + } 398 + } 399 + 400 + #[stable(feature = "rust1", since = "1.0.0")] 401 + impl<B: ?Sized> fmt::Display for Cow<'_, B> 402 + where 403 + B: fmt::Display + ToOwned<Owned: fmt::Display>, 404 + { 405 + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 406 + match *self { 407 + Borrowed(ref b) => fmt::Display::fmt(b, f), 408 + Owned(ref o) => fmt::Display::fmt(o, f), 409 + } 410 + } 411 + } 412 + 413 + #[stable(feature = "default", since = "1.11.0")] 414 + impl<B: ?Sized> Default for Cow<'_, B> 415 + where 416 + B: ToOwned<Owned: Default>, 417 + { 418 + /// Creates an owned Cow<'a, B> with the default value for the contained owned value. 
419 + fn default() -> Self { 420 + Owned(<B as ToOwned>::Owned::default()) 421 + } 422 + } 423 + 424 + #[stable(feature = "rust1", since = "1.0.0")] 425 + impl<B: ?Sized> Hash for Cow<'_, B> 426 + where 427 + B: Hash + ToOwned, 428 + { 429 + #[inline] 430 + fn hash<H: Hasher>(&self, state: &mut H) { 431 + Hash::hash(&**self, state) 432 + } 433 + } 434 + 435 + #[stable(feature = "rust1", since = "1.0.0")] 436 + impl<T: ?Sized + ToOwned> AsRef<T> for Cow<'_, T> { 437 + fn as_ref(&self) -> &T { 438 + self 439 + } 440 + } 441 + 442 + #[cfg(not(no_global_oom_handling))] 443 + #[stable(feature = "cow_add", since = "1.14.0")] 444 + impl<'a> Add<&'a str> for Cow<'a, str> { 445 + type Output = Cow<'a, str>; 446 + 447 + #[inline] 448 + fn add(mut self, rhs: &'a str) -> Self::Output { 449 + self += rhs; 450 + self 451 + } 452 + } 453 + 454 + #[cfg(not(no_global_oom_handling))] 455 + #[stable(feature = "cow_add", since = "1.14.0")] 456 + impl<'a> Add<Cow<'a, str>> for Cow<'a, str> { 457 + type Output = Cow<'a, str>; 458 + 459 + #[inline] 460 + fn add(mut self, rhs: Cow<'a, str>) -> Self::Output { 461 + self += rhs; 462 + self 463 + } 464 + } 465 + 466 + #[cfg(not(no_global_oom_handling))] 467 + #[stable(feature = "cow_add", since = "1.14.0")] 468 + impl<'a> AddAssign<&'a str> for Cow<'a, str> { 469 + fn add_assign(&mut self, rhs: &'a str) { 470 + if self.is_empty() { 471 + *self = Cow::Borrowed(rhs) 472 + } else if !rhs.is_empty() { 473 + if let Cow::Borrowed(lhs) = *self { 474 + let mut s = String::with_capacity(lhs.len() + rhs.len()); 475 + s.push_str(lhs); 476 + *self = Cow::Owned(s); 477 + } 478 + self.to_mut().push_str(rhs); 479 + } 480 + } 481 + } 482 + 483 + #[cfg(not(no_global_oom_handling))] 484 + #[stable(feature = "cow_add", since = "1.14.0")] 485 + impl<'a> AddAssign<Cow<'a, str>> for Cow<'a, str> { 486 + fn add_assign(&mut self, rhs: Cow<'a, str>) { 487 + if self.is_empty() { 488 + *self = rhs 489 + } else if !rhs.is_empty() { 490 + if let Cow::Borrowed(lhs) = 
*self { 491 + let mut s = String::with_capacity(lhs.len() + rhs.len()); 492 + s.push_str(lhs); 493 + *self = Cow::Owned(s); 494 + } 495 + self.to_mut().push_str(&rhs); 496 + } 497 + } 498 + }
+2028
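The `Cow` type vendored above implements clone-on-write: a function can return borrowed data on the fast path and allocate only when it actually has to change something. A minimal userspace sketch of this pattern (plain `std` Rust, not part of the kernel tree; the `upper` helper is a hypothetical example, not an API from this file):

```rust
use std::borrow::Cow;

// Return the uppercased string, but allocate only when the input
// actually contains lowercase ASCII; otherwise borrow the input.
fn upper(input: &str) -> Cow<'_, str> {
    if input.chars().any(|c| c.is_ascii_lowercase()) {
        Cow::Owned(input.to_ascii_uppercase())
    } else {
        Cow::Borrowed(input)
    }
}

fn main() {
    // Already uppercase: no allocation, the input is borrowed through.
    let unchanged = upper("HELLO");
    assert!(matches!(unchanged, Cow::Borrowed(_)));

    // Needs a change: the data is cloned into an owned String.
    let changed = upper("hello");
    assert!(matches!(changed, Cow::Owned(_)));
    assert_eq!(changed, "HELLO");
}
```

This is the same trade-off the `abs_all` example in the doc comment demonstrates: callers that never mutate pay nothing for ownership.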
rust/alloc/boxed.rs
··· 1 + // SPDX-License-Identifier: Apache-2.0 OR MIT 2 + 3 + //! A pointer type for heap allocation. 4 + //! 5 + //! [`Box<T>`], casually referred to as a 'box', provides the simplest form of 6 + //! heap allocation in Rust. Boxes provide ownership for this allocation, and 7 + //! drop their contents when they go out of scope. Boxes also ensure that they 8 + //! never allocate more than `isize::MAX` bytes. 9 + //! 10 + //! # Examples 11 + //! 12 + //! Move a value from the stack to the heap by creating a [`Box`]: 13 + //! 14 + //! ``` 15 + //! let val: u8 = 5; 16 + //! let boxed: Box<u8> = Box::new(val); 17 + //! ``` 18 + //! 19 + //! Move a value from a [`Box`] back to the stack by [dereferencing]: 20 + //! 21 + //! ``` 22 + //! let boxed: Box<u8> = Box::new(5); 23 + //! let val: u8 = *boxed; 24 + //! ``` 25 + //! 26 + //! Creating a recursive data structure: 27 + //! 28 + //! ``` 29 + //! #[derive(Debug)] 30 + //! enum List<T> { 31 + //! Cons(T, Box<List<T>>), 32 + //! Nil, 33 + //! } 34 + //! 35 + //! let list: List<i32> = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil)))); 36 + //! println!("{list:?}"); 37 + //! ``` 38 + //! 39 + //! This will print `Cons(1, Cons(2, Nil))`. 40 + //! 41 + //! Recursive structures must be boxed, because if the definition of `Cons` 42 + //! looked like this: 43 + //! 44 + //! ```compile_fail,E0072 45 + //! # enum List<T> { 46 + //! Cons(T, List<T>), 47 + //! # } 48 + //! ``` 49 + //! 50 + //! It wouldn't work. This is because the size of a `List` depends on how many 51 + //! elements are in the list, and so we don't know how much memory to allocate 52 + //! for a `Cons`. By introducing a [`Box<T>`], which has a defined size, we know how 53 + //! big `Cons` needs to be. 54 + //! 55 + //! # Memory layout 56 + //! 57 + //! For non-zero-sized values, a [`Box`] will use the [`Global`] allocator for 58 + //! its allocation. It is valid to convert both ways between a [`Box`] and a 59 + //! 
raw pointer allocated with the [`Global`] allocator, given that the 60 + //! [`Layout`] used with the allocator is correct for the type. More precisely, 61 + //! a `value: *mut T` that has been allocated with the [`Global`] allocator 62 + //! with `Layout::for_value(&*value)` may be converted into a box using 63 + //! [`Box::<T>::from_raw(value)`]. Conversely, the memory backing a `value: *mut 64 + //! T` obtained from [`Box::<T>::into_raw`] may be deallocated using the 65 + //! [`Global`] allocator with [`Layout::for_value(&*value)`]. 66 + //! 67 + //! For zero-sized values, the `Box` pointer still has to be [valid] for reads 68 + //! and writes and sufficiently aligned. In particular, casting any aligned 69 + //! non-zero integer literal to a raw pointer produces a valid pointer, but a 70 + //! pointer pointing into previously allocated memory that since got freed is 71 + //! not valid. The recommended way to build a Box to a ZST if `Box::new` cannot 72 + //! be used is to use [`ptr::NonNull::dangling`]. 73 + //! 74 + //! So long as `T: Sized`, a `Box<T>` is guaranteed to be represented 75 + //! as a single pointer and is also ABI-compatible with C pointers 76 + //! (i.e. the C type `T*`). This means that if you have extern "C" 77 + //! Rust functions that will be called from C, you can define those 78 + //! Rust functions using `Box<T>` types, and use `T*` as corresponding 79 + //! type on the C side. As an example, consider this C header which 80 + //! declares functions that create and destroy some kind of `Foo` 81 + //! value: 82 + //! 83 + //! ```c 84 + //! /* C header */ 85 + //! 86 + //! /* Returns ownership to the caller */ 87 + //! struct Foo* foo_new(void); 88 + //! 89 + //! /* Takes ownership from the caller; no-op when invoked with null */ 90 + //! void foo_delete(struct Foo*); 91 + //! ``` 92 + //! 93 + //! These two functions might be implemented in Rust as follows. Here, the 94 + //! 
`struct Foo*` type from C is translated to `Box<Foo>`, which captures 95 + //! the ownership constraints. Note also that the nullable argument to 96 + //! `foo_delete` is represented in Rust as `Option<Box<Foo>>`, since `Box<Foo>` 97 + //! cannot be null. 98 + //! 99 + //! ``` 100 + //! #[repr(C)] 101 + //! pub struct Foo; 102 + //! 103 + //! #[no_mangle] 104 + //! pub extern "C" fn foo_new() -> Box<Foo> { 105 + //! Box::new(Foo) 106 + //! } 107 + //! 108 + //! #[no_mangle] 109 + //! pub extern "C" fn foo_delete(_: Option<Box<Foo>>) {} 110 + //! ``` 111 + //! 112 + //! Even though `Box<T>` has the same representation and C ABI as a C pointer, 113 + //! this does not mean that you can convert an arbitrary `T*` into a `Box<T>` 114 + //! and expect things to work. `Box<T>` values will always be fully aligned, 115 + //! non-null pointers. Moreover, the destructor for `Box<T>` will attempt to 116 + //! free the value with the global allocator. In general, the best practice 117 + //! is to only use `Box<T>` for pointers that originated from the global 118 + //! allocator. 119 + //! 120 + //! **Important.** At least at present, you should avoid using 121 + //! `Box<T>` types for functions that are defined in C but invoked 122 + //! from Rust. In those cases, you should directly mirror the C types 123 + //! as closely as possible. Using types like `Box<T>` where the C 124 + //! definition is just using `T*` can lead to undefined behavior, as 125 + //! described in [rust-lang/unsafe-code-guidelines#198][ucg#198]. 126 + //! 127 + //! [ucg#198]: https://github.com/rust-lang/unsafe-code-guidelines/issues/198 128 + //! [dereferencing]: core::ops::Deref 129 + //! [`Box::<T>::from_raw(value)`]: Box::from_raw 130 + //! [`Global`]: crate::alloc::Global 131 + //! [`Layout`]: crate::alloc::Layout 132 + //! [`Layout::for_value(&*value)`]: crate::alloc::Layout::for_value 133 + //! 
[valid]: ptr#safety 134 + 135 + #![stable(feature = "rust1", since = "1.0.0")] 136 + 137 + use core::any::Any; 138 + use core::async_iter::AsyncIterator; 139 + use core::borrow; 140 + use core::cmp::Ordering; 141 + use core::convert::{From, TryFrom}; 142 + use core::fmt; 143 + use core::future::Future; 144 + use core::hash::{Hash, Hasher}; 145 + #[cfg(not(no_global_oom_handling))] 146 + use core::iter::FromIterator; 147 + use core::iter::{FusedIterator, Iterator}; 148 + use core::marker::{Destruct, Unpin, Unsize}; 149 + use core::mem; 150 + use core::ops::{ 151 + CoerceUnsized, Deref, DerefMut, DispatchFromDyn, Generator, GeneratorState, Receiver, 152 + }; 153 + use core::pin::Pin; 154 + use core::ptr::{self, Unique}; 155 + use core::task::{Context, Poll}; 156 + 157 + #[cfg(not(no_global_oom_handling))] 158 + use crate::alloc::{handle_alloc_error, WriteCloneIntoRaw}; 159 + use crate::alloc::{AllocError, Allocator, Global, Layout}; 160 + #[cfg(not(no_global_oom_handling))] 161 + use crate::borrow::Cow; 162 + use crate::raw_vec::RawVec; 163 + #[cfg(not(no_global_oom_handling))] 164 + use crate::str::from_boxed_utf8_unchecked; 165 + #[cfg(not(no_global_oom_handling))] 166 + use crate::vec::Vec; 167 + 168 + #[cfg(not(no_thin))] 169 + #[unstable(feature = "thin_box", issue = "92791")] 170 + pub use thin::ThinBox; 171 + 172 + #[cfg(not(no_thin))] 173 + mod thin; 174 + 175 + /// A pointer type for heap allocation. 176 + /// 177 + /// See the [module-level documentation](../../std/boxed/index.html) for more. 178 + #[lang = "owned_box"] 179 + #[fundamental] 180 + #[stable(feature = "rust1", since = "1.0.0")] 181 + // The declaration of the `Box` struct must be kept in sync with the 182 + // `alloc::alloc::box_free` function or ICEs will happen. See the comment 183 + // on `box_free` for more details. 
184 + pub struct Box< 185 + T: ?Sized, 186 + #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global, 187 + >(Unique<T>, A); 188 + 189 + impl<T> Box<T> { 190 + /// Allocates memory on the heap and then places `x` into it. 191 + /// 192 + /// This doesn't actually allocate if `T` is zero-sized. 193 + /// 194 + /// # Examples 195 + /// 196 + /// ``` 197 + /// let five = Box::new(5); 198 + /// ``` 199 + #[cfg(not(no_global_oom_handling))] 200 + #[inline(always)] 201 + #[stable(feature = "rust1", since = "1.0.0")] 202 + #[must_use] 203 + pub fn new(x: T) -> Self { 204 + box x 205 + } 206 + 207 + /// Constructs a new box with uninitialized contents. 208 + /// 209 + /// # Examples 210 + /// 211 + /// ``` 212 + /// #![feature(new_uninit)] 213 + /// 214 + /// let mut five = Box::<u32>::new_uninit(); 215 + /// 216 + /// let five = unsafe { 217 + /// // Deferred initialization: 218 + /// five.as_mut_ptr().write(5); 219 + /// 220 + /// five.assume_init() 221 + /// }; 222 + /// 223 + /// assert_eq!(*five, 5) 224 + /// ``` 225 + #[cfg(not(no_global_oom_handling))] 226 + #[unstable(feature = "new_uninit", issue = "63291")] 227 + #[must_use] 228 + #[inline] 229 + pub fn new_uninit() -> Box<mem::MaybeUninit<T>> { 230 + Self::new_uninit_in(Global) 231 + } 232 + 233 + /// Constructs a new `Box` with uninitialized contents, with the memory 234 + /// being filled with `0` bytes. 235 + /// 236 + /// See [`MaybeUninit::zeroed`][zeroed] for examples of correct and incorrect usage 237 + /// of this method. 
238 + /// 239 + /// # Examples 240 + /// 241 + /// ``` 242 + /// #![feature(new_uninit)] 243 + /// 244 + /// let zero = Box::<u32>::new_zeroed(); 245 + /// let zero = unsafe { zero.assume_init() }; 246 + /// 247 + /// assert_eq!(*zero, 0) 248 + /// ``` 249 + /// 250 + /// [zeroed]: mem::MaybeUninit::zeroed 251 + #[cfg(not(no_global_oom_handling))] 252 + #[inline] 253 + #[unstable(feature = "new_uninit", issue = "63291")] 254 + #[must_use] 255 + pub fn new_zeroed() -> Box<mem::MaybeUninit<T>> { 256 + Self::new_zeroed_in(Global) 257 + } 258 + 259 + /// Constructs a new `Pin<Box<T>>`. If `T` does not implement `Unpin`, then 260 + /// `x` will be pinned in memory and unable to be moved. 261 + #[cfg(not(no_global_oom_handling))] 262 + #[stable(feature = "pin", since = "1.33.0")] 263 + #[must_use] 264 + #[inline(always)] 265 + pub fn pin(x: T) -> Pin<Box<T>> { 266 + (box x).into() 267 + } 268 + 269 + /// Allocates memory on the heap then places `x` into it, 270 + /// returning an error if the allocation fails 271 + /// 272 + /// This doesn't actually allocate if `T` is zero-sized. 
273 + /// 274 + /// # Examples 275 + /// 276 + /// ``` 277 + /// #![feature(allocator_api)] 278 + /// 279 + /// let five = Box::try_new(5)?; 280 + /// # Ok::<(), std::alloc::AllocError>(()) 281 + /// ``` 282 + #[unstable(feature = "allocator_api", issue = "32838")] 283 + #[inline] 284 + pub fn try_new(x: T) -> Result<Self, AllocError> { 285 + Self::try_new_in(x, Global) 286 + } 287 + 288 + /// Constructs a new box with uninitialized contents on the heap, 289 + /// returning an error if the allocation fails 290 + /// 291 + /// # Examples 292 + /// 293 + /// ``` 294 + /// #![feature(allocator_api, new_uninit)] 295 + /// 296 + /// let mut five = Box::<u32>::try_new_uninit()?; 297 + /// 298 + /// let five = unsafe { 299 + /// // Deferred initialization: 300 + /// five.as_mut_ptr().write(5); 301 + /// 302 + /// five.assume_init() 303 + /// }; 304 + /// 305 + /// assert_eq!(*five, 5); 306 + /// # Ok::<(), std::alloc::AllocError>(()) 307 + /// ``` 308 + #[unstable(feature = "allocator_api", issue = "32838")] 309 + // #[unstable(feature = "new_uninit", issue = "63291")] 310 + #[inline] 311 + pub fn try_new_uninit() -> Result<Box<mem::MaybeUninit<T>>, AllocError> { 312 + Box::try_new_uninit_in(Global) 313 + } 314 + 315 + /// Constructs a new `Box` with uninitialized contents, with the memory 316 + /// being filled with `0` bytes on the heap 317 + /// 318 + /// See [`MaybeUninit::zeroed`][zeroed] for examples of correct and incorrect usage 319 + /// of this method. 
320 + /// 321 + /// # Examples 322 + /// 323 + /// ``` 324 + /// #![feature(allocator_api, new_uninit)] 325 + /// 326 + /// let zero = Box::<u32>::try_new_zeroed()?; 327 + /// let zero = unsafe { zero.assume_init() }; 328 + /// 329 + /// assert_eq!(*zero, 0); 330 + /// # Ok::<(), std::alloc::AllocError>(()) 331 + /// ``` 332 + /// 333 + /// [zeroed]: mem::MaybeUninit::zeroed 334 + #[unstable(feature = "allocator_api", issue = "32838")] 335 + // #[unstable(feature = "new_uninit", issue = "63291")] 336 + #[inline] 337 + pub fn try_new_zeroed() -> Result<Box<mem::MaybeUninit<T>>, AllocError> { 338 + Box::try_new_zeroed_in(Global) 339 + } 340 + } 341 + 342 + impl<T, A: Allocator> Box<T, A> { 343 + /// Allocates memory in the given allocator then places `x` into it. 344 + /// 345 + /// This doesn't actually allocate if `T` is zero-sized. 346 + /// 347 + /// # Examples 348 + /// 349 + /// ``` 350 + /// #![feature(allocator_api)] 351 + /// 352 + /// use std::alloc::System; 353 + /// 354 + /// let five = Box::new_in(5, System); 355 + /// ``` 356 + #[cfg(not(no_global_oom_handling))] 357 + #[unstable(feature = "allocator_api", issue = "32838")] 358 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 359 + #[must_use] 360 + #[inline] 361 + pub const fn new_in(x: T, alloc: A) -> Self 362 + where 363 + A: ~const Allocator + ~const Destruct, 364 + { 365 + let mut boxed = Self::new_uninit_in(alloc); 366 + unsafe { 367 + boxed.as_mut_ptr().write(x); 368 + boxed.assume_init() 369 + } 370 + } 371 + 372 + /// Allocates memory in the given allocator then places `x` into it, 373 + /// returning an error if the allocation fails 374 + /// 375 + /// This doesn't actually allocate if `T` is zero-sized. 
376 + /// 377 + /// # Examples 378 + /// 379 + /// ``` 380 + /// #![feature(allocator_api)] 381 + /// 382 + /// use std::alloc::System; 383 + /// 384 + /// let five = Box::try_new_in(5, System)?; 385 + /// # Ok::<(), std::alloc::AllocError>(()) 386 + /// ``` 387 + #[unstable(feature = "allocator_api", issue = "32838")] 388 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 389 + #[inline] 390 + pub const fn try_new_in(x: T, alloc: A) -> Result<Self, AllocError> 391 + where 392 + T: ~const Destruct, 393 + A: ~const Allocator + ~const Destruct, 394 + { 395 + let mut boxed = Self::try_new_uninit_in(alloc)?; 396 + unsafe { 397 + boxed.as_mut_ptr().write(x); 398 + Ok(boxed.assume_init()) 399 + } 400 + } 401 + 402 + /// Constructs a new box with uninitialized contents in the provided allocator. 403 + /// 404 + /// # Examples 405 + /// 406 + /// ``` 407 + /// #![feature(allocator_api, new_uninit)] 408 + /// 409 + /// use std::alloc::System; 410 + /// 411 + /// let mut five = Box::<u32, _>::new_uninit_in(System); 412 + /// 413 + /// let five = unsafe { 414 + /// // Deferred initialization: 415 + /// five.as_mut_ptr().write(5); 416 + /// 417 + /// five.assume_init() 418 + /// }; 419 + /// 420 + /// assert_eq!(*five, 5) 421 + /// ``` 422 + #[unstable(feature = "allocator_api", issue = "32838")] 423 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 424 + #[cfg(not(no_global_oom_handling))] 425 + #[must_use] 426 + // #[unstable(feature = "new_uninit", issue = "63291")] 427 + pub const fn new_uninit_in(alloc: A) -> Box<mem::MaybeUninit<T>, A> 428 + where 429 + A: ~const Allocator + ~const Destruct, 430 + { 431 + let layout = Layout::new::<mem::MaybeUninit<T>>(); 432 + // NOTE: Prefer match over unwrap_or_else since closure sometimes not inlineable. 433 + // That would make code size bigger. 
434 + match Box::try_new_uninit_in(alloc) { 435 + Ok(m) => m, 436 + Err(_) => handle_alloc_error(layout), 437 + } 438 + } 439 + 440 + /// Constructs a new box with uninitialized contents in the provided allocator, 441 + /// returning an error if the allocation fails 442 + /// 443 + /// # Examples 444 + /// 445 + /// ``` 446 + /// #![feature(allocator_api, new_uninit)] 447 + /// 448 + /// use std::alloc::System; 449 + /// 450 + /// let mut five = Box::<u32, _>::try_new_uninit_in(System)?; 451 + /// 452 + /// let five = unsafe { 453 + /// // Deferred initialization: 454 + /// five.as_mut_ptr().write(5); 455 + /// 456 + /// five.assume_init() 457 + /// }; 458 + /// 459 + /// assert_eq!(*five, 5); 460 + /// # Ok::<(), std::alloc::AllocError>(()) 461 + /// ``` 462 + #[unstable(feature = "allocator_api", issue = "32838")] 463 + // #[unstable(feature = "new_uninit", issue = "63291")] 464 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 465 + pub const fn try_new_uninit_in(alloc: A) -> Result<Box<mem::MaybeUninit<T>, A>, AllocError> 466 + where 467 + A: ~const Allocator + ~const Destruct, 468 + { 469 + let layout = Layout::new::<mem::MaybeUninit<T>>(); 470 + let ptr = alloc.allocate(layout)?.cast(); 471 + unsafe { Ok(Box::from_raw_in(ptr.as_ptr(), alloc)) } 472 + } 473 + 474 + /// Constructs a new `Box` with uninitialized contents, with the memory 475 + /// being filled with `0` bytes in the provided allocator. 476 + /// 477 + /// See [`MaybeUninit::zeroed`][zeroed] for examples of correct and incorrect usage 478 + /// of this method. 
479 + /// 480 + /// # Examples 481 + /// 482 + /// ``` 483 + /// #![feature(allocator_api, new_uninit)] 484 + /// 485 + /// use std::alloc::System; 486 + /// 487 + /// let zero = Box::<u32, _>::new_zeroed_in(System); 488 + /// let zero = unsafe { zero.assume_init() }; 489 + /// 490 + /// assert_eq!(*zero, 0) 491 + /// ``` 492 + /// 493 + /// [zeroed]: mem::MaybeUninit::zeroed 494 + #[unstable(feature = "allocator_api", issue = "32838")] 495 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 496 + #[cfg(not(no_global_oom_handling))] 497 + // #[unstable(feature = "new_uninit", issue = "63291")] 498 + #[must_use] 499 + pub const fn new_zeroed_in(alloc: A) -> Box<mem::MaybeUninit<T>, A> 500 + where 501 + A: ~const Allocator + ~const Destruct, 502 + { 503 + let layout = Layout::new::<mem::MaybeUninit<T>>(); 504 + // NOTE: Prefer match over unwrap_or_else since closure sometimes not inlineable. 505 + // That would make code size bigger. 506 + match Box::try_new_zeroed_in(alloc) { 507 + Ok(m) => m, 508 + Err(_) => handle_alloc_error(layout), 509 + } 510 + } 511 + 512 + /// Constructs a new `Box` with uninitialized contents, with the memory 513 + /// being filled with `0` bytes in the provided allocator, 514 + /// returning an error if the allocation fails, 515 + /// 516 + /// See [`MaybeUninit::zeroed`][zeroed] for examples of correct and incorrect usage 517 + /// of this method. 
518 + /// 519 + /// # Examples 520 + /// 521 + /// ``` 522 + /// #![feature(allocator_api, new_uninit)] 523 + /// 524 + /// use std::alloc::System; 525 + /// 526 + /// let zero = Box::<u32, _>::try_new_zeroed_in(System)?; 527 + /// let zero = unsafe { zero.assume_init() }; 528 + /// 529 + /// assert_eq!(*zero, 0); 530 + /// # Ok::<(), std::alloc::AllocError>(()) 531 + /// ``` 532 + /// 533 + /// [zeroed]: mem::MaybeUninit::zeroed 534 + #[unstable(feature = "allocator_api", issue = "32838")] 535 + // #[unstable(feature = "new_uninit", issue = "63291")] 536 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 537 + pub const fn try_new_zeroed_in(alloc: A) -> Result<Box<mem::MaybeUninit<T>, A>, AllocError> 538 + where 539 + A: ~const Allocator + ~const Destruct, 540 + { 541 + let layout = Layout::new::<mem::MaybeUninit<T>>(); 542 + let ptr = alloc.allocate_zeroed(layout)?.cast(); 543 + unsafe { Ok(Box::from_raw_in(ptr.as_ptr(), alloc)) } 544 + } 545 + 546 + /// Constructs a new `Pin<Box<T, A>>`. If `T` does not implement `Unpin`, then 547 + /// `x` will be pinned in memory and unable to be moved. 548 + #[cfg(not(no_global_oom_handling))] 549 + #[unstable(feature = "allocator_api", issue = "32838")] 550 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 551 + #[must_use] 552 + #[inline(always)] 553 + pub const fn pin_in(x: T, alloc: A) -> Pin<Self> 554 + where 555 + A: 'static + ~const Allocator + ~const Destruct, 556 + { 557 + Self::into_pin(Self::new_in(x, alloc)) 558 + } 559 + 560 + /// Converts a `Box<T>` into a `Box<[T]>` 561 + /// 562 + /// This conversion does not allocate on the heap and happens in place. 
563 + #[unstable(feature = "box_into_boxed_slice", issue = "71582")] 564 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 565 + pub const fn into_boxed_slice(boxed: Self) -> Box<[T], A> { 566 + let (raw, alloc) = Box::into_raw_with_allocator(boxed); 567 + unsafe { Box::from_raw_in(raw as *mut [T; 1], alloc) } 568 + } 569 + 570 + /// Consumes the `Box`, returning the wrapped value. 571 + /// 572 + /// # Examples 573 + /// 574 + /// ``` 575 + /// #![feature(box_into_inner)] 576 + /// 577 + /// let c = Box::new(5); 578 + /// 579 + /// assert_eq!(Box::into_inner(c), 5); 580 + /// ``` 581 + #[unstable(feature = "box_into_inner", issue = "80437")] 582 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 583 + #[inline] 584 + pub const fn into_inner(boxed: Self) -> T 585 + where 586 + Self: ~const Destruct, 587 + { 588 + *boxed 589 + } 590 + } 591 + 592 + impl<T> Box<[T]> { 593 + /// Constructs a new boxed slice with uninitialized contents. 594 + /// 595 + /// # Examples 596 + /// 597 + /// ``` 598 + /// #![feature(new_uninit)] 599 + /// 600 + /// let mut values = Box::<[u32]>::new_uninit_slice(3); 601 + /// 602 + /// let values = unsafe { 603 + /// // Deferred initialization: 604 + /// values[0].as_mut_ptr().write(1); 605 + /// values[1].as_mut_ptr().write(2); 606 + /// values[2].as_mut_ptr().write(3); 607 + /// 608 + /// values.assume_init() 609 + /// }; 610 + /// 611 + /// assert_eq!(*values, [1, 2, 3]) 612 + /// ``` 613 + #[cfg(not(no_global_oom_handling))] 614 + #[unstable(feature = "new_uninit", issue = "63291")] 615 + #[must_use] 616 + pub fn new_uninit_slice(len: usize) -> Box<[mem::MaybeUninit<T>]> { 617 + unsafe { RawVec::with_capacity(len).into_box(len) } 618 + } 619 + 620 + /// Constructs a new boxed slice with uninitialized contents, with the memory 621 + /// being filled with `0` bytes. 622 + /// 623 + /// See [`MaybeUninit::zeroed`][zeroed] for examples of correct and incorrect usage 624 + /// of this method. 
625 + /// 626 + /// # Examples 627 + /// 628 + /// ``` 629 + /// #![feature(new_uninit)] 630 + /// 631 + /// let values = Box::<[u32]>::new_zeroed_slice(3); 632 + /// let values = unsafe { values.assume_init() }; 633 + /// 634 + /// assert_eq!(*values, [0, 0, 0]) 635 + /// ``` 636 + /// 637 + /// [zeroed]: mem::MaybeUninit::zeroed 638 + #[cfg(not(no_global_oom_handling))] 639 + #[unstable(feature = "new_uninit", issue = "63291")] 640 + #[must_use] 641 + pub fn new_zeroed_slice(len: usize) -> Box<[mem::MaybeUninit<T>]> { 642 + unsafe { RawVec::with_capacity_zeroed(len).into_box(len) } 643 + } 644 + 645 + /// Constructs a new boxed slice with uninitialized contents. Returns an error if 646 + /// the allocation fails. 647 + /// 648 + /// # Examples 649 + /// 650 + /// ``` 651 + /// #![feature(allocator_api, new_uninit)] 652 + /// 653 + /// let mut values = Box::<[u32]>::try_new_uninit_slice(3)?; 654 + /// let values = unsafe { 655 + /// // Deferred initialization: 656 + /// values[0].as_mut_ptr().write(1); 657 + /// values[1].as_mut_ptr().write(2); 658 + /// values[2].as_mut_ptr().write(3); 659 + /// values.assume_init() 660 + /// }; 661 + /// 662 + /// assert_eq!(*values, [1, 2, 3]); 663 + /// # Ok::<(), std::alloc::AllocError>(()) 664 + /// ``` 665 + #[unstable(feature = "allocator_api", issue = "32838")] 666 + #[inline] 667 + pub fn try_new_uninit_slice(len: usize) -> Result<Box<[mem::MaybeUninit<T>]>, AllocError> { 668 + unsafe { 669 + let layout = match Layout::array::<mem::MaybeUninit<T>>(len) { 670 + Ok(l) => l, 671 + Err(_) => return Err(AllocError), 672 + }; 673 + let ptr = Global.allocate(layout)?; 674 + Ok(RawVec::from_raw_parts_in(ptr.as_mut_ptr() as *mut _, len, Global).into_box(len)) 675 + } 676 + } 677 + 678 + /// Constructs a new boxed slice with uninitialized contents, with the memory 679 + /// being filled with `0` bytes. 
Returns an error if the allocation fails. 680 + /// 681 + /// See [`MaybeUninit::zeroed`][zeroed] for examples of correct and incorrect usage 682 + /// of this method. 683 + /// 684 + /// # Examples 685 + /// 686 + /// ``` 687 + /// #![feature(allocator_api, new_uninit)] 688 + /// 689 + /// let values = Box::<[u32]>::try_new_zeroed_slice(3)?; 690 + /// let values = unsafe { values.assume_init() }; 691 + /// 692 + /// assert_eq!(*values, [0, 0, 0]); 693 + /// # Ok::<(), std::alloc::AllocError>(()) 694 + /// ``` 695 + /// 696 + /// [zeroed]: mem::MaybeUninit::zeroed 697 + #[unstable(feature = "allocator_api", issue = "32838")] 698 + #[inline] 699 + pub fn try_new_zeroed_slice(len: usize) -> Result<Box<[mem::MaybeUninit<T>]>, AllocError> { 700 + unsafe { 701 + let layout = match Layout::array::<mem::MaybeUninit<T>>(len) { 702 + Ok(l) => l, 703 + Err(_) => return Err(AllocError), 704 + }; 705 + let ptr = Global.allocate_zeroed(layout)?; 706 + Ok(RawVec::from_raw_parts_in(ptr.as_mut_ptr() as *mut _, len, Global).into_box(len)) 707 + } 708 + } 709 + } 710 + 711 + impl<T, A: Allocator> Box<[T], A> { 712 + /// Constructs a new boxed slice with uninitialized contents in the provided allocator. 
713 + /// 714 + /// # Examples 715 + /// 716 + /// ``` 717 + /// #![feature(allocator_api, new_uninit)] 718 + /// 719 + /// use std::alloc::System; 720 + /// 721 + /// let mut values = Box::<[u32], _>::new_uninit_slice_in(3, System); 722 + /// 723 + /// let values = unsafe { 724 + /// // Deferred initialization: 725 + /// values[0].as_mut_ptr().write(1); 726 + /// values[1].as_mut_ptr().write(2); 727 + /// values[2].as_mut_ptr().write(3); 728 + /// 729 + /// values.assume_init() 730 + /// }; 731 + /// 732 + /// assert_eq!(*values, [1, 2, 3]) 733 + /// ``` 734 + #[cfg(not(no_global_oom_handling))] 735 + #[unstable(feature = "allocator_api", issue = "32838")] 736 + // #[unstable(feature = "new_uninit", issue = "63291")] 737 + #[must_use] 738 + pub fn new_uninit_slice_in(len: usize, alloc: A) -> Box<[mem::MaybeUninit<T>], A> { 739 + unsafe { RawVec::with_capacity_in(len, alloc).into_box(len) } 740 + } 741 + 742 + /// Constructs a new boxed slice with uninitialized contents in the provided allocator, 743 + /// with the memory being filled with `0` bytes. 744 + /// 745 + /// See [`MaybeUninit::zeroed`][zeroed] for examples of correct and incorrect usage 746 + /// of this method. 
747 + /// 748 + /// # Examples 749 + /// 750 + /// ``` 751 + /// #![feature(allocator_api, new_uninit)] 752 + /// 753 + /// use std::alloc::System; 754 + /// 755 + /// let values = Box::<[u32], _>::new_zeroed_slice_in(3, System); 756 + /// let values = unsafe { values.assume_init() }; 757 + /// 758 + /// assert_eq!(*values, [0, 0, 0]) 759 + /// ``` 760 + /// 761 + /// [zeroed]: mem::MaybeUninit::zeroed 762 + #[cfg(not(no_global_oom_handling))] 763 + #[unstable(feature = "allocator_api", issue = "32838")] 764 + // #[unstable(feature = "new_uninit", issue = "63291")] 765 + #[must_use] 766 + pub fn new_zeroed_slice_in(len: usize, alloc: A) -> Box<[mem::MaybeUninit<T>], A> { 767 + unsafe { RawVec::with_capacity_zeroed_in(len, alloc).into_box(len) } 768 + } 769 + } 770 + 771 + impl<T, A: Allocator> Box<mem::MaybeUninit<T>, A> { 772 + /// Converts to `Box<T, A>`. 773 + /// 774 + /// # Safety 775 + /// 776 + /// As with [`MaybeUninit::assume_init`], 777 + /// it is up to the caller to guarantee that the value 778 + /// really is in an initialized state. 779 + /// Calling this when the content is not yet fully initialized 780 + /// causes immediate undefined behavior. 
781 + /// 782 + /// [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init 783 + /// 784 + /// # Examples 785 + /// 786 + /// ``` 787 + /// #![feature(new_uninit)] 788 + /// 789 + /// let mut five = Box::<u32>::new_uninit(); 790 + /// 791 + /// let five: Box<u32> = unsafe { 792 + /// // Deferred initialization: 793 + /// five.as_mut_ptr().write(5); 794 + /// 795 + /// five.assume_init() 796 + /// }; 797 + /// 798 + /// assert_eq!(*five, 5) 799 + /// ``` 800 + #[unstable(feature = "new_uninit", issue = "63291")] 801 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 802 + #[inline] 803 + pub const unsafe fn assume_init(self) -> Box<T, A> { 804 + let (raw, alloc) = Box::into_raw_with_allocator(self); 805 + unsafe { Box::from_raw_in(raw as *mut T, alloc) } 806 + } 807 + 808 + /// Writes the value and converts to `Box<T, A>`. 809 + /// 810 + /// This method converts the box similarly to [`Box::assume_init`] but 811 + /// writes `value` into it before conversion, thus guaranteeing safety. 812 + /// In some scenarios use of this method may improve performance because 813 + /// the compiler may be able to optimize copying from the stack. 814 + /// 815 + /// # Examples 816 + /// 817 + /// ``` 818 + /// #![feature(new_uninit)] 819 + /// 820 + /// let big_box = Box::<[usize; 1024]>::new_uninit(); 821 + /// 822 + /// let mut array = [0; 1024]; 823 + /// for (i, place) in array.iter_mut().enumerate() { 824 + /// *place = i; 825 + /// } 826 + /// 827 + /// // The optimizer may be able to elide this copy, so the previous code writes 828 + /// // to the heap directly. 
829 + /// let big_box = Box::write(big_box, array); 830 + /// 831 + /// for (i, x) in big_box.iter().enumerate() { 832 + /// assert_eq!(*x, i); 833 + /// } 834 + /// ``` 835 + #[unstable(feature = "new_uninit", issue = "63291")] 836 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 837 + #[inline] 838 + pub const fn write(mut boxed: Self, value: T) -> Box<T, A> { 839 + unsafe { 840 + (*boxed).write(value); 841 + boxed.assume_init() 842 + } 843 + } 844 + } 845 + 846 + impl<T, A: Allocator> Box<[mem::MaybeUninit<T>], A> { 847 + /// Converts to `Box<[T], A>`. 848 + /// 849 + /// # Safety 850 + /// 851 + /// As with [`MaybeUninit::assume_init`], 852 + /// it is up to the caller to guarantee that the values 853 + /// really are in an initialized state. 854 + /// Calling this when the content is not yet fully initialized 855 + /// causes immediate undefined behavior. 856 + /// 857 + /// [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init 858 + /// 859 + /// # Examples 860 + /// 861 + /// ``` 862 + /// #![feature(new_uninit)] 863 + /// 864 + /// let mut values = Box::<[u32]>::new_uninit_slice(3); 865 + /// 866 + /// let values = unsafe { 867 + /// // Deferred initialization: 868 + /// values[0].as_mut_ptr().write(1); 869 + /// values[1].as_mut_ptr().write(2); 870 + /// values[2].as_mut_ptr().write(3); 871 + /// 872 + /// values.assume_init() 873 + /// }; 874 + /// 875 + /// assert_eq!(*values, [1, 2, 3]) 876 + /// ``` 877 + #[unstable(feature = "new_uninit", issue = "63291")] 878 + #[inline] 879 + pub unsafe fn assume_init(self) -> Box<[T], A> { 880 + let (raw, alloc) = Box::into_raw_with_allocator(self); 881 + unsafe { Box::from_raw_in(raw as *mut [T], alloc) } 882 + } 883 + } 884 + 885 + impl<T: ?Sized> Box<T> { 886 + /// Constructs a box from a raw pointer. 887 + /// 888 + /// After calling this function, the raw pointer is owned by the 889 + /// resulting `Box`. 
Specifically, the `Box` destructor will call 890 + /// the destructor of `T` and free the allocated memory. For this 891 + /// to be safe, the memory must have been allocated in accordance 892 + /// with the [memory layout] used by `Box`. 893 + /// 894 + /// # Safety 895 + /// 896 + /// This function is unsafe because improper use may lead to 897 + /// memory problems. For example, a double-free may occur if the 898 + /// function is called twice on the same raw pointer. 899 + /// 900 + /// The safety conditions are described in the [memory layout] section. 901 + /// 902 + /// # Examples 903 + /// 904 + /// Recreate a `Box` which was previously converted to a raw pointer 905 + /// using [`Box::into_raw`]: 906 + /// ``` 907 + /// let x = Box::new(5); 908 + /// let ptr = Box::into_raw(x); 909 + /// let x = unsafe { Box::from_raw(ptr) }; 910 + /// ``` 911 + /// Manually create a `Box` from scratch by using the global allocator: 912 + /// ``` 913 + /// use std::alloc::{alloc, Layout}; 914 + /// 915 + /// unsafe { 916 + /// let ptr = alloc(Layout::new::<i32>()) as *mut i32; 917 + /// // In general .write is required to avoid attempting to destruct 918 + /// // the (uninitialized) previous contents of `ptr`, though for this 919 + /// // simple example `*ptr = 5` would have worked as well. 920 + /// ptr.write(5); 921 + /// let x = Box::from_raw(ptr); 922 + /// } 923 + /// ``` 924 + /// 925 + /// [memory layout]: self#memory-layout 926 + /// [`Layout`]: crate::Layout 927 + #[stable(feature = "box_raw", since = "1.4.0")] 928 + #[inline] 929 + pub unsafe fn from_raw(raw: *mut T) -> Self { 930 + unsafe { Self::from_raw_in(raw, Global) } 931 + } 932 + } 933 + 934 + impl<T: ?Sized, A: Allocator> Box<T, A> { 935 + /// Constructs a box from a raw pointer in the given allocator. 936 + /// 937 + /// After calling this function, the raw pointer is owned by the 938 + /// resulting `Box`. 
Specifically, the `Box` destructor will call 939 + /// the destructor of `T` and free the allocated memory. For this 940 + /// to be safe, the memory must have been allocated in accordance 941 + /// with the [memory layout] used by `Box`. 942 + /// 943 + /// # Safety 944 + /// 945 + /// This function is unsafe because improper use may lead to 946 + /// memory problems. For example, a double-free may occur if the 947 + /// function is called twice on the same raw pointer. 948 + /// 949 + /// 950 + /// # Examples 951 + /// 952 + /// Recreate a `Box` which was previously converted to a raw pointer 953 + /// using [`Box::into_raw_with_allocator`]: 954 + /// ``` 955 + /// #![feature(allocator_api)] 956 + /// 957 + /// use std::alloc::System; 958 + /// 959 + /// let x = Box::new_in(5, System); 960 + /// let (ptr, alloc) = Box::into_raw_with_allocator(x); 961 + /// let x = unsafe { Box::from_raw_in(ptr, alloc) }; 962 + /// ``` 963 + /// Manually create a `Box` from scratch by using the system allocator: 964 + /// ``` 965 + /// #![feature(allocator_api, slice_ptr_get)] 966 + /// 967 + /// use std::alloc::{Allocator, Layout, System}; 968 + /// 969 + /// unsafe { 970 + /// let ptr = System.allocate(Layout::new::<i32>())?.as_mut_ptr() as *mut i32; 971 + /// // In general .write is required to avoid attempting to destruct 972 + /// // the (uninitialized) previous contents of `ptr`, though for this 973 + /// // simple example `*ptr = 5` would have worked as well. 
974 + /// ptr.write(5); 975 + /// let x = Box::from_raw_in(ptr, System); 976 + /// } 977 + /// # Ok::<(), std::alloc::AllocError>(()) 978 + /// ``` 979 + /// 980 + /// [memory layout]: self#memory-layout 981 + /// [`Layout`]: crate::Layout 982 + #[unstable(feature = "allocator_api", issue = "32838")] 983 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 984 + #[inline] 985 + pub const unsafe fn from_raw_in(raw: *mut T, alloc: A) -> Self { 986 + Box(unsafe { Unique::new_unchecked(raw) }, alloc) 987 + } 988 + 989 + /// Consumes the `Box`, returning a wrapped raw pointer. 990 + /// 991 + /// The pointer will be properly aligned and non-null. 992 + /// 993 + /// After calling this function, the caller is responsible for the 994 + /// memory previously managed by the `Box`. In particular, the 995 + /// caller should properly destroy `T` and release the memory, taking 996 + /// into account the [memory layout] used by `Box`. The easiest way to 997 + /// do this is to convert the raw pointer back into a `Box` with the 998 + /// [`Box::from_raw`] function, allowing the `Box` destructor to perform 999 + /// the cleanup. 1000 + /// 1001 + /// Note: this is an associated function, which means that you have 1002 + /// to call it as `Box::into_raw(b)` instead of `b.into_raw()`. This 1003 + /// is so that there is no conflict with a method on the inner type. 
1004 + /// 1005 + /// # Examples 1006 + /// Converting the raw pointer back into a `Box` with [`Box::from_raw`] 1007 + /// for automatic cleanup: 1008 + /// ``` 1009 + /// let x = Box::new(String::from("Hello")); 1010 + /// let ptr = Box::into_raw(x); 1011 + /// let x = unsafe { Box::from_raw(ptr) }; 1012 + /// ``` 1013 + /// Manual cleanup by explicitly running the destructor and deallocating 1014 + /// the memory: 1015 + /// ``` 1016 + /// use std::alloc::{dealloc, Layout}; 1017 + /// use std::ptr; 1018 + /// 1019 + /// let x = Box::new(String::from("Hello")); 1020 + /// let p = Box::into_raw(x); 1021 + /// unsafe { 1022 + /// ptr::drop_in_place(p); 1023 + /// dealloc(p as *mut u8, Layout::new::<String>()); 1024 + /// } 1025 + /// ``` 1026 + /// 1027 + /// [memory layout]: self#memory-layout 1028 + #[stable(feature = "box_raw", since = "1.4.0")] 1029 + #[inline] 1030 + pub fn into_raw(b: Self) -> *mut T { 1031 + Self::into_raw_with_allocator(b).0 1032 + } 1033 + 1034 + /// Consumes the `Box`, returning a wrapped raw pointer and the allocator. 1035 + /// 1036 + /// The pointer will be properly aligned and non-null. 1037 + /// 1038 + /// After calling this function, the caller is responsible for the 1039 + /// memory previously managed by the `Box`. In particular, the 1040 + /// caller should properly destroy `T` and release the memory, taking 1041 + /// into account the [memory layout] used by `Box`. The easiest way to 1042 + /// do this is to convert the raw pointer back into a `Box` with the 1043 + /// [`Box::from_raw_in`] function, allowing the `Box` destructor to perform 1044 + /// the cleanup. 1045 + /// 1046 + /// Note: this is an associated function, which means that you have 1047 + /// to call it as `Box::into_raw_with_allocator(b)` instead of `b.into_raw_with_allocator()`. This 1048 + /// is so that there is no conflict with a method on the inner type. 
1049 + /// 1050 + /// # Examples 1051 + /// Converting the raw pointer back into a `Box` with [`Box::from_raw_in`] 1052 + /// for automatic cleanup: 1053 + /// ``` 1054 + /// #![feature(allocator_api)] 1055 + /// 1056 + /// use std::alloc::System; 1057 + /// 1058 + /// let x = Box::new_in(String::from("Hello"), System); 1059 + /// let (ptr, alloc) = Box::into_raw_with_allocator(x); 1060 + /// let x = unsafe { Box::from_raw_in(ptr, alloc) }; 1061 + /// ``` 1062 + /// Manual cleanup by explicitly running the destructor and deallocating 1063 + /// the memory: 1064 + /// ``` 1065 + /// #![feature(allocator_api)] 1066 + /// 1067 + /// use std::alloc::{Allocator, Layout, System}; 1068 + /// use std::ptr::{self, NonNull}; 1069 + /// 1070 + /// let x = Box::new_in(String::from("Hello"), System); 1071 + /// let (ptr, alloc) = Box::into_raw_with_allocator(x); 1072 + /// unsafe { 1073 + /// ptr::drop_in_place(ptr); 1074 + /// let non_null = NonNull::new_unchecked(ptr); 1075 + /// alloc.deallocate(non_null.cast(), Layout::new::<String>()); 1076 + /// } 1077 + /// ``` 1078 + /// 1079 + /// [memory layout]: self#memory-layout 1080 + #[unstable(feature = "allocator_api", issue = "32838")] 1081 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 1082 + #[inline] 1083 + pub const fn into_raw_with_allocator(b: Self) -> (*mut T, A) { 1084 + let (leaked, alloc) = Box::into_unique(b); 1085 + (leaked.as_ptr(), alloc) 1086 + } 1087 + 1088 + #[unstable( 1089 + feature = "ptr_internals", 1090 + issue = "none", 1091 + reason = "use `Box::leak(b).into()` or `Unique::from(Box::leak(b))` instead" 1092 + )] 1093 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 1094 + #[inline] 1095 + #[doc(hidden)] 1096 + pub const fn into_unique(b: Self) -> (Unique<T>, A) { 1097 + // Box is recognized as a "unique pointer" by Stacked Borrows, but internally it is a 1098 + // raw pointer for the type system. 
Turning it directly into a raw pointer would not be 1099 + // recognized as "releasing" the unique pointer to permit aliased raw accesses, 1100 + // so all raw pointer methods have to go through `Box::leak`. Turning *that* to a raw pointer 1101 + // behaves correctly. 1102 + let alloc = unsafe { ptr::read(&b.1) }; 1103 + (Unique::from(Box::leak(b)), alloc) 1104 + } 1105 + 1106 + /// Returns a reference to the underlying allocator. 1107 + /// 1108 + /// Note: this is an associated function, which means that you have 1109 + /// to call it as `Box::allocator(&b)` instead of `b.allocator()`. This 1110 + /// is so that there is no conflict with a method on the inner type. 1111 + #[unstable(feature = "allocator_api", issue = "32838")] 1112 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 1113 + #[inline] 1114 + pub const fn allocator(b: &Self) -> &A { 1115 + &b.1 1116 + } 1117 + 1118 + /// Consumes and leaks the `Box`, returning a mutable reference, 1119 + /// `&'a mut T`. Note that the type `T` must outlive the chosen lifetime 1120 + /// `'a`. If the type has only static references, or none at all, then this 1121 + /// may be chosen to be `'static`. 1122 + /// 1123 + /// This function is mainly useful for data that lives for the remainder of 1124 + /// the program's life. Dropping the returned reference will cause a memory 1125 + /// leak. If this is not acceptable, the reference should first be wrapped 1126 + /// with the [`Box::from_raw`] function producing a `Box`. This `Box` can 1127 + /// then be dropped which will properly destroy `T` and release the 1128 + /// allocated memory. 1129 + /// 1130 + /// Note: this is an associated function, which means that you have 1131 + /// to call it as `Box::leak(b)` instead of `b.leak()`. This 1132 + /// is so that there is no conflict with a method on the inner type. 
1133 + /// 1134 + /// # Examples 1135 + /// 1136 + /// Simple usage: 1137 + /// 1138 + /// ``` 1139 + /// let x = Box::new(41); 1140 + /// let static_ref: &'static mut usize = Box::leak(x); 1141 + /// *static_ref += 1; 1142 + /// assert_eq!(*static_ref, 42); 1143 + /// ``` 1144 + /// 1145 + /// Unsized data: 1146 + /// 1147 + /// ``` 1148 + /// let x = vec![1, 2, 3].into_boxed_slice(); 1149 + /// let static_ref = Box::leak(x); 1150 + /// static_ref[0] = 4; 1151 + /// assert_eq!(*static_ref, [4, 2, 3]); 1152 + /// ``` 1153 + #[stable(feature = "box_leak", since = "1.26.0")] 1154 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 1155 + #[inline] 1156 + pub const fn leak<'a>(b: Self) -> &'a mut T 1157 + where 1158 + A: 'a, 1159 + { 1160 + unsafe { &mut *mem::ManuallyDrop::new(b).0.as_ptr() } 1161 + } 1162 + 1163 + /// Converts a `Box<T>` into a `Pin<Box<T>>` 1164 + /// 1165 + /// This conversion does not allocate on the heap and happens in place. 1166 + /// 1167 + /// This is also available via [`From`]. 1168 + #[unstable(feature = "box_into_pin", issue = "62370")] 1169 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 1170 + pub const fn into_pin(boxed: Self) -> Pin<Self> 1171 + where 1172 + A: 'static, 1173 + { 1174 + // It's not possible to move or replace the insides of a `Pin<Box<T>>` 1175 + // when `T: !Unpin`, so it's safe to pin it directly without any 1176 + // additional requirements. 1177 + unsafe { Pin::new_unchecked(boxed) } 1178 + } 1179 + } 1180 + 1181 + #[stable(feature = "rust1", since = "1.0.0")] 1182 + unsafe impl<#[may_dangle] T: ?Sized, A: Allocator> Drop for Box<T, A> { 1183 + fn drop(&mut self) { 1184 + // FIXME: Do nothing, drop is currently performed by compiler. 1185 + } 1186 + } 1187 + 1188 + #[cfg(not(no_global_oom_handling))] 1189 + #[stable(feature = "rust1", since = "1.0.0")] 1190 + impl<T: Default> Default for Box<T> { 1191 + /// Creates a `Box<T>`, with the `Default` value for T. 
1192 + fn default() -> Self { 1193 + box T::default() 1194 + } 1195 + } 1196 + 1197 + #[cfg(not(no_global_oom_handling))] 1198 + #[stable(feature = "rust1", since = "1.0.0")] 1199 + #[rustc_const_unstable(feature = "const_default_impls", issue = "87864")] 1200 + impl<T> const Default for Box<[T]> { 1201 + fn default() -> Self { 1202 + let ptr: Unique<[T]> = Unique::<[T; 0]>::dangling(); 1203 + Box(ptr, Global) 1204 + } 1205 + } 1206 + 1207 + #[cfg(not(no_global_oom_handling))] 1208 + #[stable(feature = "default_box_extra", since = "1.17.0")] 1209 + #[rustc_const_unstable(feature = "const_default_impls", issue = "87864")] 1210 + impl const Default for Box<str> { 1211 + fn default() -> Self { 1212 + // SAFETY: This is the same as `Unique::cast<U>` but with an unsized `U = str`. 1213 + let ptr: Unique<str> = unsafe { 1214 + let bytes: Unique<[u8]> = Unique::<[u8; 0]>::dangling(); 1215 + Unique::new_unchecked(bytes.as_ptr() as *mut str) 1216 + }; 1217 + Box(ptr, Global) 1218 + } 1219 + } 1220 + 1221 + #[cfg(not(no_global_oom_handling))] 1222 + #[stable(feature = "rust1", since = "1.0.0")] 1223 + impl<T: Clone, A: Allocator + Clone> Clone for Box<T, A> { 1224 + /// Returns a new box with a `clone()` of this box's contents. 1225 + /// 1226 + /// # Examples 1227 + /// 1228 + /// ``` 1229 + /// let x = Box::new(5); 1230 + /// let y = x.clone(); 1231 + /// 1232 + /// // The value is the same 1233 + /// assert_eq!(x, y); 1234 + /// 1235 + /// // But they are unique objects 1236 + /// assert_ne!(&*x as *const i32, &*y as *const i32); 1237 + /// ``` 1238 + #[inline] 1239 + fn clone(&self) -> Self { 1240 + // Pre-allocate memory to allow writing the cloned value directly. 1241 + let mut boxed = Self::new_uninit_in(self.1.clone()); 1242 + unsafe { 1243 + (**self).write_clone_into_raw(boxed.as_mut_ptr()); 1244 + boxed.assume_init() 1245 + } 1246 + } 1247 + 1248 + /// Copies `source`'s contents into `self` without creating a new allocation. 
1249 + /// 1250 + /// # Examples 1251 + /// 1252 + /// ``` 1253 + /// let x = Box::new(5); 1254 + /// let mut y = Box::new(10); 1255 + /// let yp: *const i32 = &*y; 1256 + /// 1257 + /// y.clone_from(&x); 1258 + /// 1259 + /// // The value is the same 1260 + /// assert_eq!(x, y); 1261 + /// 1262 + /// // And no allocation occurred 1263 + /// assert_eq!(yp, &*y); 1264 + /// ``` 1265 + #[inline] 1266 + fn clone_from(&mut self, source: &Self) { 1267 + (**self).clone_from(&(**source)); 1268 + } 1269 + } 1270 + 1271 + #[cfg(not(no_global_oom_handling))] 1272 + #[stable(feature = "box_slice_clone", since = "1.3.0")] 1273 + impl Clone for Box<str> { 1274 + fn clone(&self) -> Self { 1275 + // this makes a copy of the data 1276 + let buf: Box<[u8]> = self.as_bytes().into(); 1277 + unsafe { from_boxed_utf8_unchecked(buf) } 1278 + } 1279 + } 1280 + 1281 + #[stable(feature = "rust1", since = "1.0.0")] 1282 + impl<T: ?Sized + PartialEq, A: Allocator> PartialEq for Box<T, A> { 1283 + #[inline] 1284 + fn eq(&self, other: &Self) -> bool { 1285 + PartialEq::eq(&**self, &**other) 1286 + } 1287 + #[inline] 1288 + fn ne(&self, other: &Self) -> bool { 1289 + PartialEq::ne(&**self, &**other) 1290 + } 1291 + } 1292 + #[stable(feature = "rust1", since = "1.0.0")] 1293 + impl<T: ?Sized + PartialOrd, A: Allocator> PartialOrd for Box<T, A> { 1294 + #[inline] 1295 + fn partial_cmp(&self, other: &Self) -> Option<Ordering> { 1296 + PartialOrd::partial_cmp(&**self, &**other) 1297 + } 1298 + #[inline] 1299 + fn lt(&self, other: &Self) -> bool { 1300 + PartialOrd::lt(&**self, &**other) 1301 + } 1302 + #[inline] 1303 + fn le(&self, other: &Self) -> bool { 1304 + PartialOrd::le(&**self, &**other) 1305 + } 1306 + #[inline] 1307 + fn ge(&self, other: &Self) -> bool { 1308 + PartialOrd::ge(&**self, &**other) 1309 + } 1310 + #[inline] 1311 + fn gt(&self, other: &Self) -> bool { 1312 + PartialOrd::gt(&**self, &**other) 1313 + } 1314 + } 1315 + #[stable(feature = "rust1", since = "1.0.0")] 1316 + impl<T: 
?Sized + Ord, A: Allocator> Ord for Box<T, A> { 1317 + #[inline] 1318 + fn cmp(&self, other: &Self) -> Ordering { 1319 + Ord::cmp(&**self, &**other) 1320 + } 1321 + } 1322 + #[stable(feature = "rust1", since = "1.0.0")] 1323 + impl<T: ?Sized + Eq, A: Allocator> Eq for Box<T, A> {} 1324 + 1325 + #[stable(feature = "rust1", since = "1.0.0")] 1326 + impl<T: ?Sized + Hash, A: Allocator> Hash for Box<T, A> { 1327 + fn hash<H: Hasher>(&self, state: &mut H) { 1328 + (**self).hash(state); 1329 + } 1330 + } 1331 + 1332 + #[stable(feature = "indirect_hasher_impl", since = "1.22.0")] 1333 + impl<T: ?Sized + Hasher, A: Allocator> Hasher for Box<T, A> { 1334 + fn finish(&self) -> u64 { 1335 + (**self).finish() 1336 + } 1337 + fn write(&mut self, bytes: &[u8]) { 1338 + (**self).write(bytes) 1339 + } 1340 + fn write_u8(&mut self, i: u8) { 1341 + (**self).write_u8(i) 1342 + } 1343 + fn write_u16(&mut self, i: u16) { 1344 + (**self).write_u16(i) 1345 + } 1346 + fn write_u32(&mut self, i: u32) { 1347 + (**self).write_u32(i) 1348 + } 1349 + fn write_u64(&mut self, i: u64) { 1350 + (**self).write_u64(i) 1351 + } 1352 + fn write_u128(&mut self, i: u128) { 1353 + (**self).write_u128(i) 1354 + } 1355 + fn write_usize(&mut self, i: usize) { 1356 + (**self).write_usize(i) 1357 + } 1358 + fn write_i8(&mut self, i: i8) { 1359 + (**self).write_i8(i) 1360 + } 1361 + fn write_i16(&mut self, i: i16) { 1362 + (**self).write_i16(i) 1363 + } 1364 + fn write_i32(&mut self, i: i32) { 1365 + (**self).write_i32(i) 1366 + } 1367 + fn write_i64(&mut self, i: i64) { 1368 + (**self).write_i64(i) 1369 + } 1370 + fn write_i128(&mut self, i: i128) { 1371 + (**self).write_i128(i) 1372 + } 1373 + fn write_isize(&mut self, i: isize) { 1374 + (**self).write_isize(i) 1375 + } 1376 + fn write_length_prefix(&mut self, len: usize) { 1377 + (**self).write_length_prefix(len) 1378 + } 1379 + fn write_str(&mut self, s: &str) { 1380 + (**self).write_str(s) 1381 + } 1382 + } 1383 + 1384 + 
#[cfg(not(no_global_oom_handling))] 1385 + #[stable(feature = "from_for_ptrs", since = "1.6.0")] 1386 + impl<T> From<T> for Box<T> { 1387 + /// Converts a `T` into a `Box<T>` 1388 + /// 1389 + /// The conversion allocates on the heap and moves `t` 1390 + /// from the stack into it. 1391 + /// 1392 + /// # Examples 1393 + /// 1394 + /// ```rust 1395 + /// let x = 5; 1396 + /// let boxed = Box::new(5); 1397 + /// 1398 + /// assert_eq!(Box::from(x), boxed); 1399 + /// ``` 1400 + fn from(t: T) -> Self { 1401 + Box::new(t) 1402 + } 1403 + } 1404 + 1405 + #[stable(feature = "pin", since = "1.33.0")] 1406 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 1407 + impl<T: ?Sized, A: Allocator> const From<Box<T, A>> for Pin<Box<T, A>> 1408 + where 1409 + A: 'static, 1410 + { 1411 + /// Converts a `Box<T>` into a `Pin<Box<T>>` 1412 + /// 1413 + /// This conversion does not allocate on the heap and happens in place. 1414 + fn from(boxed: Box<T, A>) -> Self { 1415 + Box::into_pin(boxed) 1416 + } 1417 + } 1418 + 1419 + #[cfg(not(no_global_oom_handling))] 1420 + #[stable(feature = "box_from_slice", since = "1.17.0")] 1421 + impl<T: Copy> From<&[T]> for Box<[T]> { 1422 + /// Converts a `&[T]` into a `Box<[T]>` 1423 + /// 1424 + /// This conversion allocates on the heap 1425 + /// and performs a copy of `slice`. 
1426 + /// 1427 + /// # Examples 1428 + /// ```rust 1429 + /// // create a &[u8] which will be used to create a Box<[u8]> 1430 + /// let slice: &[u8] = &[104, 101, 108, 108, 111]; 1431 + /// let boxed_slice: Box<[u8]> = Box::from(slice); 1432 + /// 1433 + /// println!("{boxed_slice:?}"); 1434 + /// ``` 1435 + fn from(slice: &[T]) -> Box<[T]> { 1436 + let len = slice.len(); 1437 + let buf = RawVec::with_capacity(len); 1438 + unsafe { 1439 + ptr::copy_nonoverlapping(slice.as_ptr(), buf.ptr(), len); 1440 + buf.into_box(slice.len()).assume_init() 1441 + } 1442 + } 1443 + } 1444 + 1445 + #[cfg(not(no_global_oom_handling))] 1446 + #[stable(feature = "box_from_cow", since = "1.45.0")] 1447 + impl<T: Copy> From<Cow<'_, [T]>> for Box<[T]> { 1448 + /// Converts a `Cow<'_, [T]>` into a `Box<[T]>` 1449 + /// 1450 + /// When `cow` is the `Cow::Borrowed` variant, this 1451 + /// conversion allocates on the heap and copies the 1452 + /// underlying slice. Otherwise, it will try to reuse the owned 1453 + /// `Vec`'s allocation. 1454 + #[inline] 1455 + fn from(cow: Cow<'_, [T]>) -> Box<[T]> { 1456 + match cow { 1457 + Cow::Borrowed(slice) => Box::from(slice), 1458 + Cow::Owned(slice) => Box::from(slice), 1459 + } 1460 + } 1461 + } 1462 + 1463 + #[cfg(not(no_global_oom_handling))] 1464 + #[stable(feature = "box_from_slice", since = "1.17.0")] 1465 + impl From<&str> for Box<str> { 1466 + /// Converts a `&str` into a `Box<str>` 1467 + /// 1468 + /// This conversion allocates on the heap 1469 + /// and performs a copy of `s`. 
1470 + /// 1471 + /// # Examples 1472 + /// 1473 + /// ```rust 1474 + /// let boxed: Box<str> = Box::from("hello"); 1475 + /// println!("{boxed}"); 1476 + /// ``` 1477 + #[inline] 1478 + fn from(s: &str) -> Box<str> { 1479 + unsafe { from_boxed_utf8_unchecked(Box::from(s.as_bytes())) } 1480 + } 1481 + } 1482 + 1483 + #[cfg(not(no_global_oom_handling))] 1484 + #[stable(feature = "box_from_cow", since = "1.45.0")] 1485 + impl From<Cow<'_, str>> for Box<str> { 1486 + /// Converts a `Cow<'_, str>` into a `Box<str>` 1487 + /// 1488 + /// When `cow` is the `Cow::Borrowed` variant, this 1489 + /// conversion allocates on the heap and copies the 1490 + /// underlying `str`. Otherwise, it will try to reuse the owned 1491 + /// `String`'s allocation. 1492 + /// 1493 + /// # Examples 1494 + /// 1495 + /// ```rust 1496 + /// use std::borrow::Cow; 1497 + /// 1498 + /// let unboxed = Cow::Borrowed("hello"); 1499 + /// let boxed: Box<str> = Box::from(unboxed); 1500 + /// println!("{boxed}"); 1501 + /// ``` 1502 + /// 1503 + /// ```rust 1504 + /// # use std::borrow::Cow; 1505 + /// let unboxed = Cow::Owned("hello".to_string()); 1506 + /// let boxed: Box<str> = Box::from(unboxed); 1507 + /// println!("{boxed}"); 1508 + /// ``` 1509 + #[inline] 1510 + fn from(cow: Cow<'_, str>) -> Box<str> { 1511 + match cow { 1512 + Cow::Borrowed(s) => Box::from(s), 1513 + Cow::Owned(s) => Box::from(s), 1514 + } 1515 + } 1516 + } 1517 + 1518 + #[stable(feature = "boxed_str_conv", since = "1.19.0")] 1519 + impl<A: Allocator> From<Box<str, A>> for Box<[u8], A> { 1520 + /// Converts a `Box<str>` into a `Box<[u8]>` 1521 + /// 1522 + /// This conversion does not allocate on the heap and happens in place. 
1523 + /// 1524 + /// # Examples 1525 + /// ```rust 1526 + /// // create a Box<str> which will be used to create a Box<[u8]> 1527 + /// let boxed: Box<str> = Box::from("hello"); 1528 + /// let boxed_str: Box<[u8]> = Box::from(boxed); 1529 + /// 1530 + /// // create a &[u8] which will be used to create a Box<[u8]> 1531 + /// let slice: &[u8] = &[104, 101, 108, 108, 111]; 1532 + /// let boxed_slice = Box::from(slice); 1533 + /// 1534 + /// assert_eq!(boxed_slice, boxed_str); 1535 + /// ``` 1536 + #[inline] 1537 + fn from(s: Box<str, A>) -> Self { 1538 + let (raw, alloc) = Box::into_raw_with_allocator(s); 1539 + unsafe { Box::from_raw_in(raw as *mut [u8], alloc) } 1540 + } 1541 + } 1542 + 1543 + #[cfg(not(no_global_oom_handling))] 1544 + #[stable(feature = "box_from_array", since = "1.45.0")] 1545 + impl<T, const N: usize> From<[T; N]> for Box<[T]> { 1546 + /// Converts a `[T; N]` into a `Box<[T]>` 1547 + /// 1548 + /// This conversion moves the array to newly heap-allocated memory. 1549 + /// 1550 + /// # Examples 1551 + /// 1552 + /// ```rust 1553 + /// let boxed: Box<[u8]> = Box::from([4, 2]); 1554 + /// println!("{boxed:?}"); 1555 + /// ``` 1556 + fn from(array: [T; N]) -> Box<[T]> { 1557 + box array 1558 + } 1559 + } 1560 + 1561 + #[stable(feature = "boxed_slice_try_from", since = "1.43.0")] 1562 + impl<T, const N: usize> TryFrom<Box<[T]>> for Box<[T; N]> { 1563 + type Error = Box<[T]>; 1564 + 1565 + /// Attempts to convert a `Box<[T]>` into a `Box<[T; N]>`. 1566 + /// 1567 + /// The conversion occurs in-place and does not require a 1568 + /// new memory allocation. 1569 + /// 1570 + /// # Errors 1571 + /// 1572 + /// Returns the old `Box<[T]>` in the `Err` variant if 1573 + /// `boxed_slice.len()` does not equal `N`. 
1574 + fn try_from(boxed_slice: Box<[T]>) -> Result<Self, Self::Error> { 1575 + if boxed_slice.len() == N { 1576 + Ok(unsafe { Box::from_raw(Box::into_raw(boxed_slice) as *mut [T; N]) }) 1577 + } else { 1578 + Err(boxed_slice) 1579 + } 1580 + } 1581 + } 1582 + 1583 + impl<A: Allocator> Box<dyn Any, A> { 1584 + /// Attempt to downcast the box to a concrete type. 1585 + /// 1586 + /// # Examples 1587 + /// 1588 + /// ``` 1589 + /// use std::any::Any; 1590 + /// 1591 + /// fn print_if_string(value: Box<dyn Any>) { 1592 + /// if let Ok(string) = value.downcast::<String>() { 1593 + /// println!("String ({}): {}", string.len(), string); 1594 + /// } 1595 + /// } 1596 + /// 1597 + /// let my_string = "Hello World".to_string(); 1598 + /// print_if_string(Box::new(my_string)); 1599 + /// print_if_string(Box::new(0i8)); 1600 + /// ``` 1601 + #[inline] 1602 + #[stable(feature = "rust1", since = "1.0.0")] 1603 + pub fn downcast<T: Any>(self) -> Result<Box<T, A>, Self> { 1604 + if self.is::<T>() { unsafe { Ok(self.downcast_unchecked::<T>()) } } else { Err(self) } 1605 + } 1606 + 1607 + /// Downcasts the box to a concrete type. 1608 + /// 1609 + /// For a safe alternative see [`downcast`]. 1610 + /// 1611 + /// # Examples 1612 + /// 1613 + /// ``` 1614 + /// #![feature(downcast_unchecked)] 1615 + /// 1616 + /// use std::any::Any; 1617 + /// 1618 + /// let x: Box<dyn Any> = Box::new(1_usize); 1619 + /// 1620 + /// unsafe { 1621 + /// assert_eq!(*x.downcast_unchecked::<usize>(), 1); 1622 + /// } 1623 + /// ``` 1624 + /// 1625 + /// # Safety 1626 + /// 1627 + /// The contained value must be of type `T`. Calling this method 1628 + /// with the incorrect type is *undefined behavior*. 
1629 + /// 1630 + /// [`downcast`]: Self::downcast 1631 + #[inline] 1632 + #[unstable(feature = "downcast_unchecked", issue = "90850")] 1633 + pub unsafe fn downcast_unchecked<T: Any>(self) -> Box<T, A> { 1634 + debug_assert!(self.is::<T>()); 1635 + unsafe { 1636 + let (raw, alloc): (*mut dyn Any, _) = Box::into_raw_with_allocator(self); 1637 + Box::from_raw_in(raw as *mut T, alloc) 1638 + } 1639 + } 1640 + } 1641 + 1642 + impl<A: Allocator> Box<dyn Any + Send, A> { 1643 + /// Attempt to downcast the box to a concrete type. 1644 + /// 1645 + /// # Examples 1646 + /// 1647 + /// ``` 1648 + /// use std::any::Any; 1649 + /// 1650 + /// fn print_if_string(value: Box<dyn Any + Send>) { 1651 + /// if let Ok(string) = value.downcast::<String>() { 1652 + /// println!("String ({}): {}", string.len(), string); 1653 + /// } 1654 + /// } 1655 + /// 1656 + /// let my_string = "Hello World".to_string(); 1657 + /// print_if_string(Box::new(my_string)); 1658 + /// print_if_string(Box::new(0i8)); 1659 + /// ``` 1660 + #[inline] 1661 + #[stable(feature = "rust1", since = "1.0.0")] 1662 + pub fn downcast<T: Any>(self) -> Result<Box<T, A>, Self> { 1663 + if self.is::<T>() { unsafe { Ok(self.downcast_unchecked::<T>()) } } else { Err(self) } 1664 + } 1665 + 1666 + /// Downcasts the box to a concrete type. 1667 + /// 1668 + /// For a safe alternative see [`downcast`]. 1669 + /// 1670 + /// # Examples 1671 + /// 1672 + /// ``` 1673 + /// #![feature(downcast_unchecked)] 1674 + /// 1675 + /// use std::any::Any; 1676 + /// 1677 + /// let x: Box<dyn Any + Send> = Box::new(1_usize); 1678 + /// 1679 + /// unsafe { 1680 + /// assert_eq!(*x.downcast_unchecked::<usize>(), 1); 1681 + /// } 1682 + /// ``` 1683 + /// 1684 + /// # Safety 1685 + /// 1686 + /// The contained value must be of type `T`. Calling this method 1687 + /// with the incorrect type is *undefined behavior*. 
1688 + /// 1689 + /// [`downcast`]: Self::downcast 1690 + #[inline] 1691 + #[unstable(feature = "downcast_unchecked", issue = "90850")] 1692 + pub unsafe fn downcast_unchecked<T: Any>(self) -> Box<T, A> { 1693 + debug_assert!(self.is::<T>()); 1694 + unsafe { 1695 + let (raw, alloc): (*mut (dyn Any + Send), _) = Box::into_raw_with_allocator(self); 1696 + Box::from_raw_in(raw as *mut T, alloc) 1697 + } 1698 + } 1699 + } 1700 + 1701 + impl<A: Allocator> Box<dyn Any + Send + Sync, A> { 1702 + /// Attempt to downcast the box to a concrete type. 1703 + /// 1704 + /// # Examples 1705 + /// 1706 + /// ``` 1707 + /// use std::any::Any; 1708 + /// 1709 + /// fn print_if_string(value: Box<dyn Any + Send + Sync>) { 1710 + /// if let Ok(string) = value.downcast::<String>() { 1711 + /// println!("String ({}): {}", string.len(), string); 1712 + /// } 1713 + /// } 1714 + /// 1715 + /// let my_string = "Hello World".to_string(); 1716 + /// print_if_string(Box::new(my_string)); 1717 + /// print_if_string(Box::new(0i8)); 1718 + /// ``` 1719 + #[inline] 1720 + #[stable(feature = "box_send_sync_any_downcast", since = "1.51.0")] 1721 + pub fn downcast<T: Any>(self) -> Result<Box<T, A>, Self> { 1722 + if self.is::<T>() { unsafe { Ok(self.downcast_unchecked::<T>()) } } else { Err(self) } 1723 + } 1724 + 1725 + /// Downcasts the box to a concrete type. 1726 + /// 1727 + /// For a safe alternative see [`downcast`]. 1728 + /// 1729 + /// # Examples 1730 + /// 1731 + /// ``` 1732 + /// #![feature(downcast_unchecked)] 1733 + /// 1734 + /// use std::any::Any; 1735 + /// 1736 + /// let x: Box<dyn Any + Send + Sync> = Box::new(1_usize); 1737 + /// 1738 + /// unsafe { 1739 + /// assert_eq!(*x.downcast_unchecked::<usize>(), 1); 1740 + /// } 1741 + /// ``` 1742 + /// 1743 + /// # Safety 1744 + /// 1745 + /// The contained value must be of type `T`. Calling this method 1746 + /// with the incorrect type is *undefined behavior*. 
1747 + /// 1748 + /// [`downcast`]: Self::downcast 1749 + #[inline] 1750 + #[unstable(feature = "downcast_unchecked", issue = "90850")] 1751 + pub unsafe fn downcast_unchecked<T: Any>(self) -> Box<T, A> { 1752 + debug_assert!(self.is::<T>()); 1753 + unsafe { 1754 + let (raw, alloc): (*mut (dyn Any + Send + Sync), _) = 1755 + Box::into_raw_with_allocator(self); 1756 + Box::from_raw_in(raw as *mut T, alloc) 1757 + } 1758 + } 1759 + } 1760 + 1761 + #[stable(feature = "rust1", since = "1.0.0")] 1762 + impl<T: fmt::Display + ?Sized, A: Allocator> fmt::Display for Box<T, A> { 1763 + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 1764 + fmt::Display::fmt(&**self, f) 1765 + } 1766 + } 1767 + 1768 + #[stable(feature = "rust1", since = "1.0.0")] 1769 + impl<T: fmt::Debug + ?Sized, A: Allocator> fmt::Debug for Box<T, A> { 1770 + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 1771 + fmt::Debug::fmt(&**self, f) 1772 + } 1773 + } 1774 + 1775 + #[stable(feature = "rust1", since = "1.0.0")] 1776 + impl<T: ?Sized, A: Allocator> fmt::Pointer for Box<T, A> { 1777 + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 1778 + // It's not possible to extract the inner Uniq directly from the Box, 1779 + // instead we cast it to a *const which aliases the Unique 1780 + let ptr: *const T = &**self; 1781 + fmt::Pointer::fmt(&ptr, f) 1782 + } 1783 + } 1784 + 1785 + #[stable(feature = "rust1", since = "1.0.0")] 1786 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 1787 + impl<T: ?Sized, A: Allocator> const Deref for Box<T, A> { 1788 + type Target = T; 1789 + 1790 + fn deref(&self) -> &T { 1791 + &**self 1792 + } 1793 + } 1794 + 1795 + #[stable(feature = "rust1", since = "1.0.0")] 1796 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 1797 + impl<T: ?Sized, A: Allocator> const DerefMut for Box<T, A> { 1798 + fn deref_mut(&mut self) -> &mut T { 1799 + &mut **self 1800 + } 1801 + } 1802 + 1803 + #[unstable(feature = "receiver_trait", 
issue = "none")] 1804 + impl<T: ?Sized, A: Allocator> Receiver for Box<T, A> {} 1805 + 1806 + #[stable(feature = "rust1", since = "1.0.0")] 1807 + impl<I: Iterator + ?Sized, A: Allocator> Iterator for Box<I, A> { 1808 + type Item = I::Item; 1809 + fn next(&mut self) -> Option<I::Item> { 1810 + (**self).next() 1811 + } 1812 + fn size_hint(&self) -> (usize, Option<usize>) { 1813 + (**self).size_hint() 1814 + } 1815 + fn nth(&mut self, n: usize) -> Option<I::Item> { 1816 + (**self).nth(n) 1817 + } 1818 + fn last(self) -> Option<I::Item> { 1819 + BoxIter::last(self) 1820 + } 1821 + } 1822 + 1823 + trait BoxIter { 1824 + type Item; 1825 + fn last(self) -> Option<Self::Item>; 1826 + } 1827 + 1828 + impl<I: Iterator + ?Sized, A: Allocator> BoxIter for Box<I, A> { 1829 + type Item = I::Item; 1830 + default fn last(self) -> Option<I::Item> { 1831 + #[inline] 1832 + fn some<T>(_: Option<T>, x: T) -> Option<T> { 1833 + Some(x) 1834 + } 1835 + 1836 + self.fold(None, some) 1837 + } 1838 + } 1839 + 1840 + /// Specialization for sized `I`s that uses `I`s implementation of `last()` 1841 + /// instead of the default. 
1842 + #[stable(feature = "rust1", since = "1.0.0")] 1843 + impl<I: Iterator, A: Allocator> BoxIter for Box<I, A> { 1844 + fn last(self) -> Option<I::Item> { 1845 + (*self).last() 1846 + } 1847 + } 1848 + 1849 + #[stable(feature = "rust1", since = "1.0.0")] 1850 + impl<I: DoubleEndedIterator + ?Sized, A: Allocator> DoubleEndedIterator for Box<I, A> { 1851 + fn next_back(&mut self) -> Option<I::Item> { 1852 + (**self).next_back() 1853 + } 1854 + fn nth_back(&mut self, n: usize) -> Option<I::Item> { 1855 + (**self).nth_back(n) 1856 + } 1857 + } 1858 + #[stable(feature = "rust1", since = "1.0.0")] 1859 + impl<I: ExactSizeIterator + ?Sized, A: Allocator> ExactSizeIterator for Box<I, A> { 1860 + fn len(&self) -> usize { 1861 + (**self).len() 1862 + } 1863 + fn is_empty(&self) -> bool { 1864 + (**self).is_empty() 1865 + } 1866 + } 1867 + 1868 + #[stable(feature = "fused", since = "1.26.0")] 1869 + impl<I: FusedIterator + ?Sized, A: Allocator> FusedIterator for Box<I, A> {} 1870 + 1871 + #[stable(feature = "boxed_closure_impls", since = "1.35.0")] 1872 + impl<Args, F: FnOnce<Args> + ?Sized, A: Allocator> FnOnce<Args> for Box<F, A> { 1873 + type Output = <F as FnOnce<Args>>::Output; 1874 + 1875 + extern "rust-call" fn call_once(self, args: Args) -> Self::Output { 1876 + <F as FnOnce<Args>>::call_once(*self, args) 1877 + } 1878 + } 1879 + 1880 + #[stable(feature = "boxed_closure_impls", since = "1.35.0")] 1881 + impl<Args, F: FnMut<Args> + ?Sized, A: Allocator> FnMut<Args> for Box<F, A> { 1882 + extern "rust-call" fn call_mut(&mut self, args: Args) -> Self::Output { 1883 + <F as FnMut<Args>>::call_mut(self, args) 1884 + } 1885 + } 1886 + 1887 + #[stable(feature = "boxed_closure_impls", since = "1.35.0")] 1888 + impl<Args, F: Fn<Args> + ?Sized, A: Allocator> Fn<Args> for Box<F, A> { 1889 + extern "rust-call" fn call(&self, args: Args) -> Self::Output { 1890 + <F as Fn<Args>>::call(self, args) 1891 + } 1892 + } 1893 + 1894 + #[unstable(feature = "coerce_unsized", issue = 
"27732")] 1895 + impl<T: ?Sized + Unsize<U>, U: ?Sized, A: Allocator> CoerceUnsized<Box<U, A>> for Box<T, A> {} 1896 + 1897 + #[unstable(feature = "dispatch_from_dyn", issue = "none")] 1898 + impl<T: ?Sized + Unsize<U>, U: ?Sized> DispatchFromDyn<Box<U>> for Box<T, Global> {} 1899 + 1900 + #[cfg(not(no_global_oom_handling))] 1901 + #[stable(feature = "boxed_slice_from_iter", since = "1.32.0")] 1902 + impl<I> FromIterator<I> for Box<[I]> { 1903 + fn from_iter<T: IntoIterator<Item = I>>(iter: T) -> Self { 1904 + iter.into_iter().collect::<Vec<_>>().into_boxed_slice() 1905 + } 1906 + } 1907 + 1908 + #[cfg(not(no_global_oom_handling))] 1909 + #[stable(feature = "box_slice_clone", since = "1.3.0")] 1910 + impl<T: Clone, A: Allocator + Clone> Clone for Box<[T], A> { 1911 + fn clone(&self) -> Self { 1912 + let alloc = Box::allocator(self).clone(); 1913 + self.to_vec_in(alloc).into_boxed_slice() 1914 + } 1915 + 1916 + fn clone_from(&mut self, other: &Self) { 1917 + if self.len() == other.len() { 1918 + self.clone_from_slice(&other); 1919 + } else { 1920 + *self = other.clone(); 1921 + } 1922 + } 1923 + } 1924 + 1925 + #[stable(feature = "box_borrow", since = "1.1.0")] 1926 + impl<T: ?Sized, A: Allocator> borrow::Borrow<T> for Box<T, A> { 1927 + fn borrow(&self) -> &T { 1928 + &**self 1929 + } 1930 + } 1931 + 1932 + #[stable(feature = "box_borrow", since = "1.1.0")] 1933 + impl<T: ?Sized, A: Allocator> borrow::BorrowMut<T> for Box<T, A> { 1934 + fn borrow_mut(&mut self) -> &mut T { 1935 + &mut **self 1936 + } 1937 + } 1938 + 1939 + #[stable(since = "1.5.0", feature = "smart_ptr_as_ref")] 1940 + impl<T: ?Sized, A: Allocator> AsRef<T> for Box<T, A> { 1941 + fn as_ref(&self) -> &T { 1942 + &**self 1943 + } 1944 + } 1945 + 1946 + #[stable(since = "1.5.0", feature = "smart_ptr_as_ref")] 1947 + impl<T: ?Sized, A: Allocator> AsMut<T> for Box<T, A> { 1948 + fn as_mut(&mut self) -> &mut T { 1949 + &mut **self 1950 + } 1951 + } 1952 + 1953 + /* Nota bene 1954 + * 1955 + * We could 
have chosen not to add this impl, and instead have written a 1956 + * function of Pin<Box<T>> to Pin<T>. Such a function would not be sound, 1957 + * because Box<T> implements Unpin even when T does not, as a result of 1958 + * this impl. 1959 + * 1960 + * We chose this API instead of the alternative for a few reasons: 1961 + * - Logically, it is helpful to understand pinning in regard to the 1962 + * memory region being pointed to. For this reason none of the 1963 + * standard library pointer types support projecting through a pin 1964 + * (Box<T> is the only pointer type in std for which this would be 1965 + * safe.) 1966 + * - It is in practice very useful to have Box<T> be unconditionally 1967 + * Unpin because of trait objects, for which the structural auto 1968 + * trait functionality does not apply (e.g., Box<dyn Foo> would 1969 + * otherwise not be Unpin). 1970 + * 1971 + * Another type with the same semantics as Box but only a conditional 1972 + * implementation of `Unpin` (where `T: Unpin`) would be valid/safe, and 1973 + * could have a method to project a Pin<T> from it. 
1974 + */ 1975 + #[stable(feature = "pin", since = "1.33.0")] 1976 + #[rustc_const_unstable(feature = "const_box", issue = "92521")] 1977 + impl<T: ?Sized, A: Allocator> const Unpin for Box<T, A> where A: 'static {} 1978 + 1979 + #[unstable(feature = "generator_trait", issue = "43122")] 1980 + impl<G: ?Sized + Generator<R> + Unpin, R, A: Allocator> Generator<R> for Box<G, A> 1981 + where 1982 + A: 'static, 1983 + { 1984 + type Yield = G::Yield; 1985 + type Return = G::Return; 1986 + 1987 + fn resume(mut self: Pin<&mut Self>, arg: R) -> GeneratorState<Self::Yield, Self::Return> { 1988 + G::resume(Pin::new(&mut *self), arg) 1989 + } 1990 + } 1991 + 1992 + #[unstable(feature = "generator_trait", issue = "43122")] 1993 + impl<G: ?Sized + Generator<R>, R, A: Allocator> Generator<R> for Pin<Box<G, A>> 1994 + where 1995 + A: 'static, 1996 + { 1997 + type Yield = G::Yield; 1998 + type Return = G::Return; 1999 + 2000 + fn resume(mut self: Pin<&mut Self>, arg: R) -> GeneratorState<Self::Yield, Self::Return> { 2001 + G::resume((*self).as_mut(), arg) 2002 + } 2003 + } 2004 + 2005 + #[stable(feature = "futures_api", since = "1.36.0")] 2006 + impl<F: ?Sized + Future + Unpin, A: Allocator> Future for Box<F, A> 2007 + where 2008 + A: 'static, 2009 + { 2010 + type Output = F::Output; 2011 + 2012 + fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { 2013 + F::poll(Pin::new(&mut *self), cx) 2014 + } 2015 + } 2016 + 2017 + #[unstable(feature = "async_iterator", issue = "79024")] 2018 + impl<S: ?Sized + AsyncIterator + Unpin> AsyncIterator for Box<S> { 2019 + type Item = S::Item; 2020 + 2021 + fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> { 2022 + Pin::new(&mut **self).poll_next(cx) 2023 + } 2024 + 2025 + fn size_hint(&self) -> (usize, Option<usize>) { 2026 + (**self).size_hint() 2027 + } 2028 + }
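Taken together, the `From`/`TryFrom` impls and the `dyn Any` downcasts in this file are all reachable from stable Rust. A small standalone sketch (using std's `Box`, not the kernel's vendored copy) that exercises them:

```rust
use std::any::Any;

fn main() {
    // From<T>: moves the value from the stack onto the heap.
    let boxed: Box<i32> = Box::from(5);
    assert_eq!(*boxed, 5);

    // From<&str> copies the string data into a new allocation...
    let s: Box<str> = Box::from("hello");
    // ...while From<Box<str>> for Box<[u8]> reuses that allocation in place.
    let bytes: Box<[u8]> = Box::from(s);
    assert_eq!(&*bytes, "hello".as_bytes());

    // TryFrom<Box<[T]>> for Box<[T; N]> succeeds only when the length is N,
    // and returns the original box in the Err variant otherwise.
    let slice: Box<[u8]> = Box::from([1u8, 2, 3]);
    let arr: Box<[u8; 3]> = <Box<[u8; 3]>>::try_from(slice).unwrap();
    assert_eq!(*arr, [1, 2, 3]);

    // Box<dyn Any>::downcast recovers the concrete type, or hands the box back.
    let any: Box<dyn Any> = Box::new(String::from("hi"));
    match any.downcast::<String>() {
        Ok(string) => assert_eq!(*string, "hi"),
        Err(_) => unreachable!(),
    }
}
```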
+156
rust/alloc/collections/mod.rs
··· 1 + // SPDX-License-Identifier: Apache-2.0 OR MIT 2 + 3 + //! Collection types. 4 + 5 + #![stable(feature = "rust1", since = "1.0.0")] 6 + 7 + #[cfg(not(no_global_oom_handling))] 8 + pub mod binary_heap; 9 + #[cfg(not(no_global_oom_handling))] 10 + mod btree; 11 + #[cfg(not(no_global_oom_handling))] 12 + pub mod linked_list; 13 + #[cfg(not(no_global_oom_handling))] 14 + pub mod vec_deque; 15 + 16 + #[cfg(not(no_global_oom_handling))] 17 + #[stable(feature = "rust1", since = "1.0.0")] 18 + pub mod btree_map { 19 + //! An ordered map based on a B-Tree. 20 + #[stable(feature = "rust1", since = "1.0.0")] 21 + pub use super::btree::map::*; 22 + } 23 + 24 + #[cfg(not(no_global_oom_handling))] 25 + #[stable(feature = "rust1", since = "1.0.0")] 26 + pub mod btree_set { 27 + //! An ordered set based on a B-Tree. 28 + #[stable(feature = "rust1", since = "1.0.0")] 29 + pub use super::btree::set::*; 30 + } 31 + 32 + #[cfg(not(no_global_oom_handling))] 33 + #[stable(feature = "rust1", since = "1.0.0")] 34 + #[doc(no_inline)] 35 + pub use binary_heap::BinaryHeap; 36 + 37 + #[cfg(not(no_global_oom_handling))] 38 + #[stable(feature = "rust1", since = "1.0.0")] 39 + #[doc(no_inline)] 40 + pub use btree_map::BTreeMap; 41 + 42 + #[cfg(not(no_global_oom_handling))] 43 + #[stable(feature = "rust1", since = "1.0.0")] 44 + #[doc(no_inline)] 45 + pub use btree_set::BTreeSet; 46 + 47 + #[cfg(not(no_global_oom_handling))] 48 + #[stable(feature = "rust1", since = "1.0.0")] 49 + #[doc(no_inline)] 50 + pub use linked_list::LinkedList; 51 + 52 + #[cfg(not(no_global_oom_handling))] 53 + #[stable(feature = "rust1", since = "1.0.0")] 54 + #[doc(no_inline)] 55 + pub use vec_deque::VecDeque; 56 + 57 + use crate::alloc::{Layout, LayoutError}; 58 + use core::fmt::Display; 59 + 60 + /// The error type for `try_reserve` methods. 
61 + #[derive(Clone, PartialEq, Eq, Debug)] 62 + #[stable(feature = "try_reserve", since = "1.57.0")] 63 + pub struct TryReserveError { 64 + kind: TryReserveErrorKind, 65 + } 66 + 67 + impl TryReserveError { 68 + /// Details about the allocation that caused the error 69 + #[inline] 70 + #[must_use] 71 + #[unstable( 72 + feature = "try_reserve_kind", 73 + reason = "Uncertain how much info should be exposed", 74 + issue = "48043" 75 + )] 76 + pub fn kind(&self) -> TryReserveErrorKind { 77 + self.kind.clone() 78 + } 79 + } 80 + 81 + /// Details of the allocation that caused a `TryReserveError` 82 + #[derive(Clone, PartialEq, Eq, Debug)] 83 + #[unstable( 84 + feature = "try_reserve_kind", 85 + reason = "Uncertain how much info should be exposed", 86 + issue = "48043" 87 + )] 88 + pub enum TryReserveErrorKind { 89 + /// Error due to the computed capacity exceeding the collection's maximum 90 + /// (usually `isize::MAX` bytes). 91 + CapacityOverflow, 92 + 93 + /// The memory allocator returned an error 94 + AllocError { 95 + /// The layout of the allocation request that failed 96 + layout: Layout, 97 + 98 + #[doc(hidden)] 99 + #[unstable( 100 + feature = "container_error_extra", 101 + issue = "none", 102 + reason = "\ 103 + Enable exposing the allocator’s custom error value \ 104 + if an associated type is added in the future: \ 105 + https://github.com/rust-lang/wg-allocators/issues/23" 106 + )] 107 + non_exhaustive: (), 108 + }, 109 + } 110 + 111 + #[unstable( 112 + feature = "try_reserve_kind", 113 + reason = "Uncertain how much info should be exposed", 114 + issue = "48043" 115 + )] 116 + impl From<TryReserveErrorKind> for TryReserveError { 117 + #[inline] 118 + fn from(kind: TryReserveErrorKind) -> Self { 119 + Self { kind } 120 + } 121 + } 122 + 123 + #[unstable(feature = "try_reserve_kind", reason = "new API", issue = "48043")] 124 + impl From<LayoutError> for TryReserveErrorKind { 125 + /// Always evaluates to [`TryReserveErrorKind::CapacityOverflow`]. 
126 + #[inline] 127 + fn from(_: LayoutError) -> Self { 128 + TryReserveErrorKind::CapacityOverflow 129 + } 130 + } 131 + 132 + #[stable(feature = "try_reserve", since = "1.57.0")] 133 + impl Display for TryReserveError { 134 + fn fmt( 135 + &self, 136 + fmt: &mut core::fmt::Formatter<'_>, 137 + ) -> core::result::Result<(), core::fmt::Error> { 138 + fmt.write_str("memory allocation failed")?; 139 + let reason = match self.kind { 140 + TryReserveErrorKind::CapacityOverflow => { 141 + " because the computed capacity exceeded the collection's maximum" 142 + } 143 + TryReserveErrorKind::AllocError { .. } => { 144 + " because the memory allocator returned an error" 145 + } 146 + }; 147 + fmt.write_str(reason) 148 + } 149 + } 150 + 151 + /// An intermediate trait for specialization of `Extend`. 152 + #[doc(hidden)] 153 + trait SpecExtend<I: IntoIterator> { 154 + /// Extends `self` with the contents of the given iterator. 155 + fn spec_extend(&mut self, iter: I); 156 + }
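`TryReserveError` is the piece of this file that matters most for the kernel, since it is what makes fallible allocation expressible instead of aborting on failure. A minimal sketch of the error path against stable std (the vendored copy behaves the same):

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();

    // A request that cannot possibly be satisfied reports a TryReserveError
    // (here CapacityOverflow) instead of aborting the process.
    let err = v.try_reserve(usize::MAX).unwrap_err();
    println!("{err}"); // rendered by the Display impl shown in the diff

    // Reasonable requests still succeed.
    assert!(v.try_reserve(16).is_ok());
    assert!(v.capacity() >= 16);
}
```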
+244
rust/alloc/lib.rs
··· 1 + // SPDX-License-Identifier: Apache-2.0 OR MIT 2 + 3 + //! # The Rust core allocation and collections library 4 + //! 5 + //! This library provides smart pointers and collections for managing 6 + //! heap-allocated values. 7 + //! 8 + //! This library, like libcore, normally doesn’t need to be used directly 9 + //! since its contents are re-exported in the [`std` crate](../std/index.html). 10 + //! Crates that use the `#![no_std]` attribute however will typically 11 + //! not depend on `std`, so they’d use this crate instead. 12 + //! 13 + //! ## Boxed values 14 + //! 15 + //! The [`Box`] type is a smart pointer type. There can only be one owner of a 16 + //! [`Box`], and the owner can decide to mutate the contents, which live on the 17 + //! heap. 18 + //! 19 + //! This type can be sent among threads efficiently as the size of a `Box` value 20 + //! is the same as that of a pointer. Tree-like data structures are often built 21 + //! with boxes because each node often has only one owner, the parent. 22 + //! 23 + //! ## Reference counted pointers 24 + //! 25 + //! The [`Rc`] type is a non-threadsafe reference-counted pointer type intended 26 + //! for sharing memory within a thread. An [`Rc`] pointer wraps a type, `T`, and 27 + //! only allows access to `&T`, a shared reference. 28 + //! 29 + //! This type is useful when inherited mutability (such as using [`Box`]) is too 30 + //! constraining for an application, and is often paired with the [`Cell`] or 31 + //! [`RefCell`] types in order to allow mutation. 32 + //! 33 + //! ## Atomically reference counted pointers 34 + //! 35 + //! The [`Arc`] type is the threadsafe equivalent of the [`Rc`] type. It 36 + //! provides all the same functionality of [`Rc`], except it requires that the 37 + //! contained type `T` is shareable. Additionally, [`Arc<T>`][`Arc`] is itself 38 + //! sendable while [`Rc<T>`][`Rc`] is not. 39 + //! 40 + //! This type allows for shared access to the contained data, and is often 41 + //! 
paired with synchronization primitives such as mutexes to allow mutation of 42 + //! shared resources. 43 + //! 44 + //! ## Collections 45 + //! 46 + //! Implementations of the most common general purpose data structures are 47 + //! defined in this library. They are re-exported through the 48 + //! [standard collections library](../std/collections/index.html). 49 + //! 50 + //! ## Heap interfaces 51 + //! 52 + //! The [`alloc`](alloc/index.html) module defines the low-level interface to the 53 + //! default global allocator. It is not compatible with the libc allocator API. 54 + //! 55 + //! [`Arc`]: sync 56 + //! [`Box`]: boxed 57 + //! [`Cell`]: core::cell 58 + //! [`Rc`]: rc 59 + //! [`RefCell`]: core::cell 60 + 61 + // To run liballoc tests without x.py without ending up with two copies of liballoc, Miri needs to be 62 + // able to "empty" this crate. See <https://github.com/rust-lang/miri-test-libstd/issues/4>. 63 + // rustc itself never sets the feature, so this line has no effect there. 
64 + #![cfg(any(not(feature = "miri-test-libstd"), test, doctest))] 65 + #![allow(unused_attributes)] 66 + #![stable(feature = "alloc", since = "1.36.0")] 67 + #![doc( 68 + html_playground_url = "https://play.rust-lang.org/", 69 + issue_tracker_base_url = "https://github.com/rust-lang/rust/issues/", 70 + test(no_crate_inject, attr(allow(unused_variables), deny(warnings))) 71 + )] 72 + #![doc(cfg_hide( 73 + not(test), 74 + not(any(test, bootstrap)), 75 + any(not(feature = "miri-test-libstd"), test, doctest), 76 + no_global_oom_handling, 77 + not(no_global_oom_handling), 78 + target_has_atomic = "ptr" 79 + ))] 80 + #![no_std] 81 + #![needs_allocator] 82 + // 83 + // Lints: 84 + #![deny(unsafe_op_in_unsafe_fn)] 85 + #![warn(deprecated_in_future)] 86 + #![warn(missing_debug_implementations)] 87 + #![warn(missing_docs)] 88 + #![allow(explicit_outlives_requirements)] 89 + // 90 + // Library features: 91 + #![cfg_attr(not(no_global_oom_handling), feature(alloc_c_string))] 92 + #![feature(alloc_layout_extra)] 93 + #![feature(allocator_api)] 94 + #![feature(array_chunks)] 95 + #![feature(array_methods)] 96 + #![feature(array_windows)] 97 + #![feature(assert_matches)] 98 + #![feature(async_iterator)] 99 + #![feature(coerce_unsized)] 100 + #![cfg_attr(not(no_global_oom_handling), feature(const_alloc_error))] 101 + #![feature(const_box)] 102 + #![cfg_attr(not(no_global_oom_handling), feature(const_btree_new))] 103 + #![feature(const_cow_is_borrowed)] 104 + #![feature(const_convert)] 105 + #![feature(const_size_of_val)] 106 + #![feature(const_align_of_val)] 107 + #![feature(const_ptr_read)] 108 + #![feature(const_maybe_uninit_write)] 109 + #![feature(const_maybe_uninit_as_mut_ptr)] 110 + #![feature(const_refs_to_cell)] 111 + #![feature(core_c_str)] 112 + #![feature(core_intrinsics)] 113 + #![feature(core_ffi_c)] 114 + #![feature(const_eval_select)] 115 + #![feature(const_pin)] 116 + #![feature(cstr_from_bytes_until_nul)] 117 + #![feature(dispatch_from_dyn)] 118 + 
#![feature(exact_size_is_empty)]
#![feature(extend_one)]
#![feature(fmt_internals)]
#![feature(fn_traits)]
#![feature(hasher_prefixfree_extras)]
#![feature(inplace_iteration)]
#![feature(iter_advance_by)]
#![feature(layout_for_ptr)]
#![feature(maybe_uninit_slice)]
#![cfg_attr(test, feature(new_uninit))]
#![feature(nonnull_slice_from_raw_parts)]
#![feature(pattern)]
#![feature(ptr_internals)]
#![feature(ptr_metadata)]
#![feature(ptr_sub_ptr)]
#![feature(receiver_trait)]
#![feature(set_ptr_value)]
#![feature(slice_group_by)]
#![feature(slice_ptr_get)]
#![feature(slice_ptr_len)]
#![feature(slice_range)]
#![feature(str_internals)]
#![feature(strict_provenance)]
#![feature(trusted_len)]
#![feature(trusted_random_access)]
#![feature(try_trait_v2)]
#![feature(unchecked_math)]
#![feature(unicode_internals)]
#![feature(unsize)]
//
// Language features:
#![feature(allocator_internals)]
#![feature(allow_internal_unstable)]
#![feature(associated_type_bounds)]
#![feature(box_syntax)]
#![feature(cfg_sanitize)]
#![feature(const_deref)]
#![feature(const_mut_refs)]
#![feature(const_ptr_write)]
#![feature(const_precise_live_drops)]
#![feature(const_trait_impl)]
#![feature(const_try)]
#![feature(dropck_eyepatch)]
#![feature(exclusive_range_pattern)]
#![feature(fundamental)]
#![cfg_attr(not(test), feature(generator_trait))]
#![feature(hashmap_internals)]
#![feature(lang_items)]
#![feature(let_else)]
#![feature(min_specialization)]
#![feature(negative_impls)]
#![feature(never_type)]
#![feature(nll)] // Not necessary, but here to test the `nll` feature.
#![feature(rustc_allow_const_fn_unstable)]
#![feature(rustc_attrs)]
#![feature(slice_internals)]
#![feature(staged_api)]
#![cfg_attr(test, feature(test))]
#![feature(unboxed_closures)]
#![feature(unsized_fn_params)]
#![feature(c_unwind)]
//
// Rustdoc features:
#![feature(doc_cfg)]
#![feature(doc_cfg_hide)]
// Technically, this is a bug in rustdoc: rustdoc sees the documentation on `#[lang = slice_alloc]`
// blocks is for `&[T]`, which also has documentation using this feature in `core`, and gets mad
// that the feature-gate isn't enabled. Ideally, it wouldn't check for the feature gate for docs
// from other crates, but since this can only appear for lang items, it doesn't seem worth fixing.
#![feature(intra_doc_pointers)]

// Allow testing this library
#[cfg(test)]
#[macro_use]
extern crate std;
#[cfg(test)]
extern crate test;

// Module with internal macros used by other modules (needs to be included before other modules).
#[cfg(not(no_macros))]
#[macro_use]
mod macros;

mod raw_vec;

// Heaps provided for low-level allocation strategies

pub mod alloc;

// Primitive types using the heaps above

// Need to conditionally define the mod from `boxed.rs` to avoid
// duplicating the lang-items when building in test cfg; but also need
// to allow code to have `use boxed::Box;` declarations.
#[cfg(not(test))]
pub mod boxed;
#[cfg(test)]
mod boxed {
    pub use std::boxed::Box;
}
pub mod borrow;
pub mod collections;
#[cfg(not(no_global_oom_handling))]
pub mod ffi;
#[cfg(not(no_fmt))]
pub mod fmt;
#[cfg(not(no_rc))]
pub mod rc;
pub mod slice;
#[cfg(not(no_str))]
pub mod str;
#[cfg(not(no_string))]
pub mod string;
#[cfg(not(no_sync))]
#[cfg(target_has_atomic = "ptr")]
pub mod sync;
#[cfg(all(not(no_global_oom_handling), target_has_atomic = "ptr"))]
pub mod task;
#[cfg(test)]
mod tests;
pub mod vec;

#[doc(hidden)]
#[unstable(feature = "liballoc_internals", issue = "none", reason = "implementation detail")]
pub mod __export {
    pub use core::format_args;
}
rust/alloc/raw_vec.rs (+527 lines)
// SPDX-License-Identifier: Apache-2.0 OR MIT

#![unstable(feature = "raw_vec_internals", reason = "unstable const warnings", issue = "none")]

use core::alloc::LayoutError;
use core::cmp;
use core::intrinsics;
use core::mem::{self, ManuallyDrop, MaybeUninit};
use core::ops::Drop;
use core::ptr::{self, NonNull, Unique};
use core::slice;

#[cfg(not(no_global_oom_handling))]
use crate::alloc::handle_alloc_error;
use crate::alloc::{Allocator, Global, Layout};
use crate::boxed::Box;
use crate::collections::TryReserveError;
use crate::collections::TryReserveErrorKind::*;

#[cfg(test)]
mod tests;

#[cfg(not(no_global_oom_handling))]
enum AllocInit {
    /// The contents of the new memory are uninitialized.
    Uninitialized,
    /// The new memory is guaranteed to be zeroed.
    Zeroed,
}

/// A low-level utility for more ergonomically allocating, reallocating, and deallocating
/// a buffer of memory on the heap without having to worry about all the corner cases
/// involved. This type is excellent for building your own data structures like Vec and VecDeque.
/// In particular:
///
/// * Produces `Unique::dangling()` on zero-sized types.
/// * Produces `Unique::dangling()` on zero-length allocations.
/// * Avoids freeing `Unique::dangling()`.
/// * Catches all overflows in capacity computations (promotes them to "capacity overflow" panics).
/// * Guards against 32-bit systems allocating more than `isize::MAX` bytes.
/// * Guards against overflowing your length.
/// * Calls `handle_alloc_error` for fallible allocations.
/// * Contains a `ptr::Unique` and thus endows the user with all related benefits.
/// * Uses the excess returned from the allocator to use the largest available capacity.
///
/// This type does not in any way inspect the memory that it manages. When dropped it *will*
/// free its memory, but it *won't* try to drop its contents. It is up to the user of `RawVec`
/// to handle the actual things *stored* inside of a `RawVec`.
///
/// Note that the excess of a zero-sized type is always infinite, so `capacity()` always returns
/// `usize::MAX`. This means that you need to be careful when round-tripping this type with a
/// `Box<[T]>`, since `capacity()` won't yield the length.
#[allow(missing_debug_implementations)]
pub(crate) struct RawVec<T, A: Allocator = Global> {
    ptr: Unique<T>,
    cap: usize,
    alloc: A,
}

impl<T> RawVec<T, Global> {
    /// HACK(Centril): This exists because stable `const fn` can only call stable `const fn`, so
    /// they cannot call `Self::new()`.
    ///
    /// If you change `RawVec<T>::new` or dependencies, please take care to not introduce anything
    /// that would truly const-call something unstable.
    pub const NEW: Self = Self::new();

    /// Creates the biggest possible `RawVec` (on the system heap)
    /// without allocating. If `T` has positive size, then this makes a
    /// `RawVec` with capacity `0`. If `T` is zero-sized, then it makes a
    /// `RawVec` with capacity `usize::MAX`. Useful for implementing
    /// delayed allocation.
    #[must_use]
    pub const fn new() -> Self {
        Self::new_in(Global)
    }

    /// Creates a `RawVec` (on the system heap) with exactly the
    /// capacity and alignment requirements for a `[T; capacity]`. This is
    /// equivalent to calling `RawVec::new` when `capacity` is `0` or `T` is
    /// zero-sized. Note that if `T` is zero-sized this means you will
    /// *not* get a `RawVec` with the requested capacity.
    ///
    /// # Panics
    ///
    /// Panics if the requested capacity exceeds `isize::MAX` bytes.
    ///
    /// # Aborts
    ///
    /// Aborts on OOM.
    #[cfg(not(any(no_global_oom_handling, test)))]
    #[must_use]
    #[inline]
    pub fn with_capacity(capacity: usize) -> Self {
        Self::with_capacity_in(capacity, Global)
    }

    /// Like `with_capacity`, but guarantees the buffer is zeroed.
    #[cfg(not(any(no_global_oom_handling, test)))]
    #[must_use]
    #[inline]
    pub fn with_capacity_zeroed(capacity: usize) -> Self {
        Self::with_capacity_zeroed_in(capacity, Global)
    }
}

impl<T, A: Allocator> RawVec<T, A> {
    // Tiny Vecs are dumb. Skip to:
    // - 8 if the element size is 1, because any heap allocator is likely
    //   to round up a request of less than 8 bytes to at least 8 bytes.
    // - 4 if elements are moderate-sized (<= 1 KiB).
    // - 1 otherwise, to avoid wasting too much space for very short Vecs.
    pub(crate) const MIN_NON_ZERO_CAP: usize = if mem::size_of::<T>() == 1 {
        8
    } else if mem::size_of::<T>() <= 1024 {
        4
    } else {
        1
    };

    /// Like `new`, but parameterized over the choice of allocator for
    /// the returned `RawVec`.
    pub const fn new_in(alloc: A) -> Self {
        // `cap: 0` means "unallocated". Zero-sized types are ignored.
        Self { ptr: Unique::dangling(), cap: 0, alloc }
    }

    /// Like `with_capacity`, but parameterized over the choice of
    /// allocator for the returned `RawVec`.
    #[cfg(not(no_global_oom_handling))]
    #[inline]
    pub fn with_capacity_in(capacity: usize, alloc: A) -> Self {
        Self::allocate_in(capacity, AllocInit::Uninitialized, alloc)
    }

    /// Like `with_capacity_zeroed`, but parameterized over the choice
    /// of allocator for the returned `RawVec`.
    #[cfg(not(no_global_oom_handling))]
    #[inline]
    pub fn with_capacity_zeroed_in(capacity: usize, alloc: A) -> Self {
        Self::allocate_in(capacity, AllocInit::Zeroed, alloc)
    }

    /// Converts the entire buffer into `Box<[MaybeUninit<T>]>` with the specified `len`.
    ///
    /// Note that this will correctly reconstitute any `cap` changes
    /// that may have been performed. (See description of type for details.)
    ///
    /// # Safety
    ///
    /// * `len` must be greater than or equal to the most recently requested capacity, and
    /// * `len` must be less than or equal to `self.capacity()`.
    ///
    /// Note that the requested capacity and `self.capacity()` could differ, as
    /// an allocator could overallocate and return a greater memory block than requested.
    pub unsafe fn into_box(self, len: usize) -> Box<[MaybeUninit<T>], A> {
        // Sanity-check one half of the safety requirement (we cannot check the other half).
        debug_assert!(
            len <= self.capacity(),
            "`len` must be smaller than or equal to `self.capacity()`"
        );

        let me = ManuallyDrop::new(self);
        unsafe {
            let slice = slice::from_raw_parts_mut(me.ptr() as *mut MaybeUninit<T>, len);
            Box::from_raw_in(slice, ptr::read(&me.alloc))
        }
    }

    #[cfg(not(no_global_oom_handling))]
    fn allocate_in(capacity: usize, init: AllocInit, alloc: A) -> Self {
        // Don't allocate here because `Drop` will not deallocate when `capacity` is 0.
        if mem::size_of::<T>() == 0 || capacity == 0 {
            Self::new_in(alloc)
        } else {
            // We avoid `unwrap_or_else` here because it bloats the amount of
            // LLVM IR generated.
            let layout = match Layout::array::<T>(capacity) {
                Ok(layout) => layout,
                Err(_) => capacity_overflow(),
            };
            match alloc_guard(layout.size()) {
                Ok(_) => {}
                Err(_) => capacity_overflow(),
            }
            let result = match init {
                AllocInit::Uninitialized => alloc.allocate(layout),
                AllocInit::Zeroed => alloc.allocate_zeroed(layout),
            };
            let ptr = match result {
                Ok(ptr) => ptr,
                Err(_) => handle_alloc_error(layout),
            };

            // Allocators currently return a `NonNull<[u8]>` whose length
            // matches the size requested. If that ever changes, the capacity
            // here should change to `ptr.len() / mem::size_of::<T>()`.
            Self {
                ptr: unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) },
                cap: capacity,
                alloc,
            }
        }
    }

    /// Reconstitutes a `RawVec` from a pointer, capacity, and allocator.
    ///
    /// # Safety
    ///
    /// The `ptr` must be allocated (via the given allocator `alloc`), and with the given
    /// `capacity`.
    /// The `capacity` cannot exceed `isize::MAX` for sized types (only a concern on 32-bit
    /// systems). ZST vectors may have a capacity up to `usize::MAX`.
    /// If the `ptr` and `capacity` come from a `RawVec` created via `alloc`, then this is
    /// guaranteed.
    #[inline]
    pub unsafe fn from_raw_parts_in(ptr: *mut T, capacity: usize, alloc: A) -> Self {
        Self { ptr: unsafe { Unique::new_unchecked(ptr) }, cap: capacity, alloc }
    }

    /// Gets a raw pointer to the start of the allocation. Note that this is
    /// `Unique::dangling()` if `capacity == 0` or `T` is zero-sized. In the former case, you must
    /// be careful.
    #[inline]
    pub fn ptr(&self) -> *mut T {
        self.ptr.as_ptr()
    }

    /// Gets the capacity of the allocation.
    ///
    /// This will always be `usize::MAX` if `T` is zero-sized.
    #[inline(always)]
    pub fn capacity(&self) -> usize {
        if mem::size_of::<T>() == 0 { usize::MAX } else { self.cap }
    }

    /// Returns a shared reference to the allocator backing this `RawVec`.
    pub fn allocator(&self) -> &A {
        &self.alloc
    }

    fn current_memory(&self) -> Option<(NonNull<u8>, Layout)> {
        if mem::size_of::<T>() == 0 || self.cap == 0 {
            None
        } else {
            // We have an allocated chunk of memory, so we can bypass runtime
            // checks to get our current layout.
            unsafe {
                let layout = Layout::array::<T>(self.cap).unwrap_unchecked();
                Some((self.ptr.cast().into(), layout))
            }
        }
    }

    /// Ensures that the buffer contains at least enough space to hold `len +
    /// additional` elements. If it doesn't already have enough capacity, will
    /// reallocate enough space plus comfortable slack space to get amortized
    /// *O*(1) behavior. Will limit this behavior if it would needlessly cause
    /// itself to panic.
    ///
    /// If `len` exceeds `self.capacity()`, this may fail to actually allocate
    /// the requested space. This is not really unsafe, but the unsafe
    /// code *you* write that relies on the behavior of this function may break.
    ///
    /// This is ideal for implementing a bulk-push operation like `extend`.
    ///
    /// # Panics
    ///
    /// Panics if the new capacity exceeds `isize::MAX` bytes.
    ///
    /// # Aborts
    ///
    /// Aborts on OOM.
    #[cfg(not(no_global_oom_handling))]
    #[inline]
    pub fn reserve(&mut self, len: usize, additional: usize) {
        // Callers expect this function to be very cheap when there is already sufficient capacity.
        // Therefore, we move all the resizing and error-handling logic from grow_amortized and
        // handle_reserve behind a call, while making sure that this function is likely to be
        // inlined as just a comparison and a call if the comparison fails.
        #[cold]
        fn do_reserve_and_handle<T, A: Allocator>(
            slf: &mut RawVec<T, A>,
            len: usize,
            additional: usize,
        ) {
            handle_reserve(slf.grow_amortized(len, additional));
        }

        if self.needs_to_grow(len, additional) {
            do_reserve_and_handle(self, len, additional);
        }
    }

    /// A specialized version of `reserve()` used only by the hot and
    /// oft-instantiated `Vec::push()`, which does its own capacity check.
    #[cfg(not(no_global_oom_handling))]
    #[inline(never)]
    pub fn reserve_for_push(&mut self, len: usize) {
        handle_reserve(self.grow_amortized(len, 1));
    }

    /// The same as `reserve`, but returns on errors instead of panicking or aborting.
    pub fn try_reserve(&mut self, len: usize, additional: usize) -> Result<(), TryReserveError> {
        if self.needs_to_grow(len, additional) {
            self.grow_amortized(len, additional)
        } else {
            Ok(())
        }
    }

    /// The same as `reserve_for_push`, but returns on errors instead of panicking or aborting.
    #[inline(never)]
    pub fn try_reserve_for_push(&mut self, len: usize) -> Result<(), TryReserveError> {
        self.grow_amortized(len, 1)
    }

    /// Ensures that the buffer contains at least enough space to hold `len +
    /// additional` elements. If it doesn't already, will reallocate the
    /// minimum possible amount of memory necessary. Generally this will be
    /// exactly the amount of memory necessary, but in principle the allocator
    /// is free to give back more than we asked for.
    ///
    /// If `len` exceeds `self.capacity()`, this may fail to actually allocate
    /// the requested space. This is not really unsafe, but the unsafe code
    /// *you* write that relies on the behavior of this function may break.
    ///
    /// # Panics
    ///
    /// Panics if the new capacity exceeds `isize::MAX` bytes.
    ///
    /// # Aborts
    ///
    /// Aborts on OOM.
    #[cfg(not(no_global_oom_handling))]
    pub fn reserve_exact(&mut self, len: usize, additional: usize) {
        handle_reserve(self.try_reserve_exact(len, additional));
    }

    /// The same as `reserve_exact`, but returns on errors instead of panicking or aborting.
    pub fn try_reserve_exact(
        &mut self,
        len: usize,
        additional: usize,
    ) -> Result<(), TryReserveError> {
        if self.needs_to_grow(len, additional) { self.grow_exact(len, additional) } else { Ok(()) }
    }

    /// Shrinks the buffer down to the specified capacity. If the given amount
    /// is 0, actually completely deallocates.
    ///
    /// # Panics
    ///
    /// Panics if the given amount is *larger* than the current capacity.
    ///
    /// # Aborts
    ///
    /// Aborts on OOM.
    #[cfg(not(no_global_oom_handling))]
    pub fn shrink_to_fit(&mut self, cap: usize) {
        handle_reserve(self.shrink(cap));
    }
}

impl<T, A: Allocator> RawVec<T, A> {
    /// Returns if the buffer needs to grow to fulfill the needed extra capacity.
    /// Mainly used to make inlining reserve-calls possible without inlining `grow`.
    fn needs_to_grow(&self, len: usize, additional: usize) -> bool {
        additional > self.capacity().wrapping_sub(len)
    }

    fn set_ptr_and_cap(&mut self, ptr: NonNull<[u8]>, cap: usize) {
        // Allocators currently return a `NonNull<[u8]>` whose length matches
        // the size requested. If that ever changes, the capacity here should
        // change to `ptr.len() / mem::size_of::<T>()`.
        self.ptr = unsafe { Unique::new_unchecked(ptr.cast().as_ptr()) };
        self.cap = cap;
    }

    // This method is usually instantiated many times. So we want it to be as
    // small as possible, to improve compile times. But we also want as much of
    // its contents to be statically computable as possible, to make the
    // generated code run faster. Therefore, this method is carefully written
    // so that all of the code that depends on `T` is within it, while as much
    // of the code that doesn't depend on `T` as possible is in functions that
    // are non-generic over `T`.
    fn grow_amortized(&mut self, len: usize, additional: usize) -> Result<(), TryReserveError> {
        // This is ensured by the calling contexts.
        debug_assert!(additional > 0);

        if mem::size_of::<T>() == 0 {
            // Since we return a capacity of `usize::MAX` when `elem_size` is
            // 0, getting to here necessarily means the `RawVec` is overfull.
            return Err(CapacityOverflow.into());
        }

        // Nothing we can really do about these checks, sadly.
        let required_cap = len.checked_add(additional).ok_or(CapacityOverflow)?;

        // This guarantees exponential growth. The doubling cannot overflow
        // because `cap <= isize::MAX` and the type of `cap` is `usize`.
        let cap = cmp::max(self.cap * 2, required_cap);
        let cap = cmp::max(Self::MIN_NON_ZERO_CAP, cap);

        let new_layout = Layout::array::<T>(cap);

        // `finish_grow` is non-generic over `T`.
        let ptr = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
        self.set_ptr_and_cap(ptr, cap);
        Ok(())
    }

    // The constraints on this method are much the same as those on
    // `grow_amortized`, but this method is usually instantiated less often so
    // it's less critical.
    fn grow_exact(&mut self, len: usize, additional: usize) -> Result<(), TryReserveError> {
        if mem::size_of::<T>() == 0 {
            // Since we return a capacity of `usize::MAX` when the type size is
            // 0, getting to here necessarily means the `RawVec` is overfull.
            return Err(CapacityOverflow.into());
        }

        let cap = len.checked_add(additional).ok_or(CapacityOverflow)?;
        let new_layout = Layout::array::<T>(cap);

        // `finish_grow` is non-generic over `T`.
        let ptr = finish_grow(new_layout, self.current_memory(), &mut self.alloc)?;
        self.set_ptr_and_cap(ptr, cap);
        Ok(())
    }

    #[allow(dead_code)]
    fn shrink(&mut self, cap: usize) -> Result<(), TryReserveError> {
        assert!(cap <= self.capacity(), "Tried to shrink to a larger capacity");

        let (ptr, layout) = if let Some(mem) = self.current_memory() { mem } else { return Ok(()) };

        let ptr = unsafe {
            // `Layout::array` cannot overflow here because it would have
            // overflowed earlier when capacity was larger.
            let new_layout = Layout::array::<T>(cap).unwrap_unchecked();
            self.alloc
                .shrink(ptr, layout, new_layout)
                .map_err(|_| AllocError { layout: new_layout, non_exhaustive: () })?
        };
        self.set_ptr_and_cap(ptr, cap);
        Ok(())
    }
}

// This function is outside `RawVec` to minimize compile times. See the comment
// above `RawVec::grow_amortized` for details. (The `A` parameter isn't
// significant, because the number of different `A` types seen in practice is
// much smaller than the number of `T` types.)
#[inline(never)]
fn finish_grow<A>(
    new_layout: Result<Layout, LayoutError>,
    current_memory: Option<(NonNull<u8>, Layout)>,
    alloc: &mut A,
) -> Result<NonNull<[u8]>, TryReserveError>
where
    A: Allocator,
{
    // Check for the error here to minimize the size of `RawVec::grow_*`.
    let new_layout = new_layout.map_err(|_| CapacityOverflow)?;

    alloc_guard(new_layout.size())?;

    let memory = if let Some((ptr, old_layout)) = current_memory {
        debug_assert_eq!(old_layout.align(), new_layout.align());
        unsafe {
            // The allocator checks for alignment equality
            intrinsics::assume(old_layout.align() == new_layout.align());
            alloc.grow(ptr, old_layout, new_layout)
        }
    } else {
        alloc.allocate(new_layout)
    };

    memory.map_err(|_| AllocError { layout: new_layout, non_exhaustive: () }.into())
}

unsafe impl<#[may_dangle] T, A: Allocator> Drop for RawVec<T, A> {
    /// Frees the memory owned by the `RawVec` *without* trying to drop its contents.
    fn drop(&mut self) {
        if let Some((ptr, layout)) = self.current_memory() {
            unsafe { self.alloc.deallocate(ptr, layout) }
        }
    }
}

// Central function for reserve error handling.
#[cfg(not(no_global_oom_handling))]
#[inline]
fn handle_reserve(result: Result<(), TryReserveError>) {
    match result.map_err(|e| e.kind()) {
        Err(CapacityOverflow) => capacity_overflow(),
        Err(AllocError { layout, .. }) => handle_alloc_error(layout),
        Ok(()) => { /* yay */ }
    }
}

// We need to guarantee the following:
// * We don't ever allocate `> isize::MAX` byte-size objects.
// * We don't overflow `usize::MAX` and actually allocate too little.
//
// On 64-bit we just need to check for overflow since trying to allocate
// `> isize::MAX` bytes will surely fail. On 32-bit and 16-bit we need to add
// an extra guard for this in case we're running on a platform which can use
// all 4GB in user-space, e.g., PAE or x32.

#[inline]
fn alloc_guard(alloc_size: usize) -> Result<(), TryReserveError> {
    if usize::BITS < 64 && alloc_size > isize::MAX as usize {
        Err(CapacityOverflow.into())
    } else {
        Ok(())
    }
}

// One central function responsible for reporting capacity overflows. This'll
// ensure that the code generation related to these panics is minimal as there's
// only one location which panics rather than a bunch throughout the module.
#[cfg(not(no_global_oom_handling))]
fn capacity_overflow() -> ! {
    panic!("capacity overflow");
}
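The growth policy in `grow_amortized` above boils down to `new_cap = max(MIN_NON_ZERO_CAP, max(cap * 2, len + additional))`, with `MIN_NON_ZERO_CAP` chosen by element size. A standalone re-derivation of just that arithmetic (illustrative, not the kernel's code):

```rust
// Sketch of the amortized growth rule from `RawVec::grow_amortized` and
// `MIN_NON_ZERO_CAP`, extracted into plain functions for illustration.

const fn min_non_zero_cap(elem_size: usize) -> usize {
    if elem_size == 1 {
        8 // tiny elements: allocators round small requests up to ~8 bytes anyway
    } else if elem_size <= 1024 {
        4 // moderate-sized elements
    } else {
        1 // large elements: avoid wasting space on short Vecs
    }
}

fn amortized_cap(cap: usize, len: usize, additional: usize, elem_size: usize) -> Option<usize> {
    // `None` corresponds to the `CapacityOverflow` error in the real code.
    let required = len.checked_add(additional)?;
    // Doubling guarantees exponential growth; the larger request wins.
    let cap = core::cmp::max(cap * 2, required);
    Some(core::cmp::max(min_non_zero_cap(elem_size), cap))
}

fn main() {
    // First push into an empty buffer of 1-byte elements jumps straight to 8.
    assert_eq!(amortized_cap(0, 0, 1, 1), Some(8));
    // Once non-trivial, doubling dominates single pushes.
    assert_eq!(amortized_cap(8, 8, 1, 1), Some(16));
    // A large one-shot reservation wins over doubling.
    assert_eq!(amortized_cap(8, 8, 100, 1), Some(108));
    // Overflow is reported, never wrapped.
    assert_eq!(amortized_cap(0, usize::MAX, 1, 1), None);
}
```

This is why `Vec::push` is amortized *O*(1): doubling makes the total copy work over *n* pushes linear in *n*.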
rust/alloc/slice.rs (+1204 lines)
··· 1 + // SPDX-License-Identifier: Apache-2.0 OR MIT 2 + 3 + //! A dynamically-sized view into a contiguous sequence, `[T]`. 4 + //! 5 + //! *[See also the slice primitive type](slice).* 6 + //! 7 + //! Slices are a view into a block of memory represented as a pointer and a 8 + //! length. 9 + //! 10 + //! ``` 11 + //! // slicing a Vec 12 + //! let vec = vec![1, 2, 3]; 13 + //! let int_slice = &vec[..]; 14 + //! // coercing an array to a slice 15 + //! let str_slice: &[&str] = &["one", "two", "three"]; 16 + //! ``` 17 + //! 18 + //! Slices are either mutable or shared. The shared slice type is `&[T]`, 19 + //! while the mutable slice type is `&mut [T]`, where `T` represents the element 20 + //! type. For example, you can mutate the block of memory that a mutable slice 21 + //! points to: 22 + //! 23 + //! ``` 24 + //! let x = &mut [1, 2, 3]; 25 + //! x[1] = 7; 26 + //! assert_eq!(x, &[1, 7, 3]); 27 + //! ``` 28 + //! 29 + //! Here are some of the things this module contains: 30 + //! 31 + //! ## Structs 32 + //! 33 + //! There are several structs that are useful for slices, such as [`Iter`], which 34 + //! represents iteration over a slice. 35 + //! 36 + //! ## Trait Implementations 37 + //! 38 + //! There are several implementations of common traits for slices. Some examples 39 + //! include: 40 + //! 41 + //! * [`Clone`] 42 + //! * [`Eq`], [`Ord`] - for slices whose element type are [`Eq`] or [`Ord`]. 43 + //! * [`Hash`] - for slices whose element type is [`Hash`]. 44 + //! 45 + //! ## Iteration 46 + //! 47 + //! The slices implement `IntoIterator`. The iterator yields references to the 48 + //! slice elements. 49 + //! 50 + //! ``` 51 + //! let numbers = &[0, 1, 2]; 52 + //! for n in numbers { 53 + //! println!("{n} is a number!"); 54 + //! } 55 + //! ``` 56 + //! 57 + //! The mutable slice yields mutable references to the elements: 58 + //! 59 + //! ``` 60 + //! let mut scores = [7, 8, 9]; 61 + //! for score in &mut scores[..] { 62 + //! *score += 1; 63 + //! 
} 64 + //! ``` 65 + //! 66 + //! This iterator yields mutable references to the slice's elements, so while 67 + //! the element type of the slice is `i32`, the element type of the iterator is 68 + //! `&mut i32`. 69 + //! 70 + //! * [`.iter`] and [`.iter_mut`] are the explicit methods to return the default 71 + //! iterators. 72 + //! * Further methods that return iterators are [`.split`], [`.splitn`], 73 + //! [`.chunks`], [`.windows`] and more. 74 + //! 75 + //! [`Hash`]: core::hash::Hash 76 + //! [`.iter`]: slice::iter 77 + //! [`.iter_mut`]: slice::iter_mut 78 + //! [`.split`]: slice::split 79 + //! [`.splitn`]: slice::splitn 80 + //! [`.chunks`]: slice::chunks 81 + //! [`.windows`]: slice::windows 82 + #![stable(feature = "rust1", since = "1.0.0")] 83 + // Many of the usings in this module are only used in the test configuration. 84 + // It's cleaner to just turn off the unused_imports warning than to fix them. 85 + #![cfg_attr(test, allow(unused_imports, dead_code))] 86 + 87 + use core::borrow::{Borrow, BorrowMut}; 88 + #[cfg(not(no_global_oom_handling))] 89 + use core::cmp::Ordering::{self, Less}; 90 + #[cfg(not(no_global_oom_handling))] 91 + use core::mem; 92 + #[cfg(not(no_global_oom_handling))] 93 + use core::mem::size_of; 94 + #[cfg(not(no_global_oom_handling))] 95 + use core::ptr; 96 + 97 + use crate::alloc::Allocator; 98 + #[cfg(not(no_global_oom_handling))] 99 + use crate::alloc::Global; 100 + #[cfg(not(no_global_oom_handling))] 101 + use crate::borrow::ToOwned; 102 + use crate::boxed::Box; 103 + use crate::vec::Vec; 104 + 105 + #[unstable(feature = "slice_range", issue = "76393")] 106 + pub use core::slice::range; 107 + #[unstable(feature = "array_chunks", issue = "74985")] 108 + pub use core::slice::ArrayChunks; 109 + #[unstable(feature = "array_chunks", issue = "74985")] 110 + pub use core::slice::ArrayChunksMut; 111 + #[unstable(feature = "array_windows", issue = "75027")] 112 + pub use core::slice::ArrayWindows; 113 + #[stable(feature = 
"inherent_ascii_escape", since = "1.60.0")] 114 + pub use core::slice::EscapeAscii; 115 + #[stable(feature = "slice_get_slice", since = "1.28.0")] 116 + pub use core::slice::SliceIndex; 117 + #[stable(feature = "from_ref", since = "1.28.0")] 118 + pub use core::slice::{from_mut, from_ref}; 119 + #[stable(feature = "rust1", since = "1.0.0")] 120 + pub use core::slice::{from_raw_parts, from_raw_parts_mut}; 121 + #[stable(feature = "rust1", since = "1.0.0")] 122 + pub use core::slice::{Chunks, Windows}; 123 + #[stable(feature = "chunks_exact", since = "1.31.0")] 124 + pub use core::slice::{ChunksExact, ChunksExactMut}; 125 + #[stable(feature = "rust1", since = "1.0.0")] 126 + pub use core::slice::{ChunksMut, Split, SplitMut}; 127 + #[unstable(feature = "slice_group_by", issue = "80552")] 128 + pub use core::slice::{GroupBy, GroupByMut}; 129 + #[stable(feature = "rust1", since = "1.0.0")] 130 + pub use core::slice::{Iter, IterMut}; 131 + #[stable(feature = "rchunks", since = "1.31.0")] 132 + pub use core::slice::{RChunks, RChunksExact, RChunksExactMut, RChunksMut}; 133 + #[stable(feature = "slice_rsplit", since = "1.27.0")] 134 + pub use core::slice::{RSplit, RSplitMut}; 135 + #[stable(feature = "rust1", since = "1.0.0")] 136 + pub use core::slice::{RSplitN, RSplitNMut, SplitN, SplitNMut}; 137 + #[stable(feature = "split_inclusive", since = "1.51.0")] 138 + pub use core::slice::{SplitInclusive, SplitInclusiveMut}; 139 + 140 + //////////////////////////////////////////////////////////////////////////////// 141 + // Basic slice extension methods 142 + //////////////////////////////////////////////////////////////////////////////// 143 + 144 + // HACK(japaric) needed for the implementation of `vec!` macro during testing 145 + // N.B., see the `hack` module in this file for more details. 
146 + #[cfg(test)] 147 + pub use hack::into_vec; 148 + 149 + // HACK(japaric) needed for the implementation of `Vec::clone` during testing 150 + // N.B., see the `hack` module in this file for more details. 151 + #[cfg(test)] 152 + pub use hack::to_vec; 153 + 154 + // HACK(japaric): With cfg(test) `impl [T]` is not available, these three 155 + // functions are actually methods that are in `impl [T]` but not in 156 + // `core::slice::SliceExt` - we need to supply these functions for the 157 + // `test_permutations` test 158 + pub(crate) mod hack { 159 + use core::alloc::Allocator; 160 + 161 + use crate::boxed::Box; 162 + use crate::vec::Vec; 163 + 164 + // We shouldn't add inline attribute to this since this is used in 165 + // `vec!` macro mostly and causes perf regression. See #71204 for 166 + // discussion and perf results. 167 + pub fn into_vec<T, A: Allocator>(b: Box<[T], A>) -> Vec<T, A> { 168 + unsafe { 169 + let len = b.len(); 170 + let (b, alloc) = Box::into_raw_with_allocator(b); 171 + Vec::from_raw_parts_in(b as *mut T, len, len, alloc) 172 + } 173 + } 174 + 175 + #[cfg(not(no_global_oom_handling))] 176 + #[inline] 177 + pub fn to_vec<T: ConvertVec, A: Allocator>(s: &[T], alloc: A) -> Vec<T, A> { 178 + T::to_vec(s, alloc) 179 + } 180 + 181 + #[cfg(not(no_global_oom_handling))] 182 + pub trait ConvertVec { 183 + fn to_vec<A: Allocator>(s: &[Self], alloc: A) -> Vec<Self, A> 184 + where 185 + Self: Sized; 186 + } 187 + 188 + #[cfg(not(no_global_oom_handling))] 189 + impl<T: Clone> ConvertVec for T { 190 + #[inline] 191 + default fn to_vec<A: Allocator>(s: &[Self], alloc: A) -> Vec<Self, A> { 192 + struct DropGuard<'a, T, A: Allocator> { 193 + vec: &'a mut Vec<T, A>, 194 + num_init: usize, 195 + } 196 + impl<'a, T, A: Allocator> Drop for DropGuard<'a, T, A> { 197 + #[inline] 198 + fn drop(&mut self) { 199 + // SAFETY: 200 + // items were marked initialized in the loop below 201 + unsafe { 202 + self.vec.set_len(self.num_init); 203 + } 204 + } 205 + } 206 + 
let mut vec = Vec::with_capacity_in(s.len(), alloc); 207 + let mut guard = DropGuard { vec: &mut vec, num_init: 0 }; 208 + let slots = guard.vec.spare_capacity_mut(); 209 + // .take(slots.len()) is necessary for LLVM to remove bounds checks 210 + // and has better codegen than zip. 211 + for (i, b) in s.iter().enumerate().take(slots.len()) { 212 + guard.num_init = i; 213 + slots[i].write(b.clone()); 214 + } 215 + core::mem::forget(guard); 216 + // SAFETY: 217 + // the vec was allocated and initialized above to at least this length. 218 + unsafe { 219 + vec.set_len(s.len()); 220 + } 221 + vec 222 + } 223 + } 224 + 225 + #[cfg(not(no_global_oom_handling))] 226 + impl<T: Copy> ConvertVec for T { 227 + #[inline] 228 + fn to_vec<A: Allocator>(s: &[Self], alloc: A) -> Vec<Self, A> { 229 + let mut v = Vec::with_capacity_in(s.len(), alloc); 230 + // SAFETY: 231 + // allocated above with the capacity of `s`, and initialize to `s.len()` in 232 + // ptr::copy_to_non_overlapping below. 233 + unsafe { 234 + s.as_ptr().copy_to_nonoverlapping(v.as_mut_ptr(), s.len()); 235 + v.set_len(s.len()); 236 + } 237 + v 238 + } 239 + } 240 + } 241 + 242 + #[cfg(not(test))] 243 + impl<T> [T] { 244 + /// Sorts the slice. 245 + /// 246 + /// This sort is stable (i.e., does not reorder equal elements) and *O*(*n* \* log(*n*)) worst-case. 247 + /// 248 + /// When applicable, unstable sorting is preferred because it is generally faster than stable 249 + /// sorting and it doesn't allocate auxiliary memory. 250 + /// See [`sort_unstable`](slice::sort_unstable). 251 + /// 252 + /// # Current implementation 253 + /// 254 + /// The current algorithm is an adaptive, iterative merge sort inspired by 255 + /// [timsort](https://en.wikipedia.org/wiki/Timsort). 256 + /// It is designed to be very fast in cases where the slice is nearly sorted, or consists of 257 + /// two or more sorted sequences concatenated one after another. 
+     ///
+     /// Also, it allocates temporary storage half the size of `self`, but for short slices a
+     /// non-allocating insertion sort is used instead.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// let mut v = [-5, 4, 1, -3, 2];
+     ///
+     /// v.sort();
+     /// assert!(v == [-5, -3, 1, 2, 4]);
+     /// ```
+     #[cfg(not(no_global_oom_handling))]
+     #[rustc_allow_incoherent_impl]
+     #[stable(feature = "rust1", since = "1.0.0")]
+     #[inline]
+     pub fn sort(&mut self)
+     where
+         T: Ord,
+     {
+         merge_sort(self, |a, b| a.lt(b));
+     }
+
+     /// Sorts the slice with a comparator function.
+     ///
+     /// This sort is stable (i.e., does not reorder equal elements) and *O*(*n* \* log(*n*)) worst-case.
+     ///
+     /// The comparator function must define a total ordering for the elements in the slice. If
+     /// the ordering is not total, the order of the elements is unspecified. An order is a
+     /// total order if it is (for all `a`, `b` and `c`):
+     ///
+     /// * total and antisymmetric: exactly one of `a < b`, `a == b` or `a > b` is true, and
+     /// * transitive, `a < b` and `b < c` implies `a < c`. The same must hold for both `==` and `>`.
+     ///
+     /// For example, while [`f64`] doesn't implement [`Ord`] because `NaN != NaN`, we can use
+     /// `partial_cmp` as our sort function when we know the slice doesn't contain a `NaN`.
+     ///
+     /// ```
+     /// let mut floats = [5f64, 4.0, 1.0, 3.0, 2.0];
+     /// floats.sort_by(|a, b| a.partial_cmp(b).unwrap());
+     /// assert_eq!(floats, [1.0, 2.0, 3.0, 4.0, 5.0]);
+     /// ```
+     ///
+     /// When applicable, unstable sorting is preferred because it is generally faster than stable
+     /// sorting and it doesn't allocate auxiliary memory.
+     /// See [`sort_unstable_by`](slice::sort_unstable_by).
+     ///
+     /// # Current implementation
+     ///
+     /// The current algorithm is an adaptive, iterative merge sort inspired by
+     /// [timsort](https://en.wikipedia.org/wiki/Timsort).
+     /// It is designed to be very fast in cases where the slice is nearly sorted, or consists of
+     /// two or more sorted sequences concatenated one after another.
+     ///
+     /// Also, it allocates temporary storage half the size of `self`, but for short slices a
+     /// non-allocating insertion sort is used instead.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// let mut v = [5, 4, 1, 3, 2];
+     /// v.sort_by(|a, b| a.cmp(b));
+     /// assert!(v == [1, 2, 3, 4, 5]);
+     ///
+     /// // reverse sorting
+     /// v.sort_by(|a, b| b.cmp(a));
+     /// assert!(v == [5, 4, 3, 2, 1]);
+     /// ```
+     #[cfg(not(no_global_oom_handling))]
+     #[rustc_allow_incoherent_impl]
+     #[stable(feature = "rust1", since = "1.0.0")]
+     #[inline]
+     pub fn sort_by<F>(&mut self, mut compare: F)
+     where
+         F: FnMut(&T, &T) -> Ordering,
+     {
+         merge_sort(self, |a, b| compare(a, b) == Less);
+     }
+
+     /// Sorts the slice with a key extraction function.
+     ///
+     /// This sort is stable (i.e., does not reorder equal elements) and *O*(*m* \* *n* \* log(*n*))
+     /// worst-case, where the key function is *O*(*m*).
+     ///
+     /// For expensive key functions (e.g. functions that are not simple property accesses or
+     /// basic operations), [`sort_by_cached_key`](slice::sort_by_cached_key) is likely to be
+     /// significantly faster, as it does not recompute element keys.
+     ///
+     /// When applicable, unstable sorting is preferred because it is generally faster than stable
+     /// sorting and it doesn't allocate auxiliary memory.
+     /// See [`sort_unstable_by_key`](slice::sort_unstable_by_key).
+     ///
+     /// # Current implementation
+     ///
+     /// The current algorithm is an adaptive, iterative merge sort inspired by
+     /// [timsort](https://en.wikipedia.org/wiki/Timsort).
+     /// It is designed to be very fast in cases where the slice is nearly sorted, or consists of
+     /// two or more sorted sequences concatenated one after another.
+     ///
+     /// Also, it allocates temporary storage half the size of `self`, but for short slices a
+     /// non-allocating insertion sort is used instead.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// let mut v = [-5i32, 4, 1, -3, 2];
+     ///
+     /// v.sort_by_key(|k| k.abs());
+     /// assert!(v == [1, 2, -3, 4, -5]);
+     /// ```
+     #[cfg(not(no_global_oom_handling))]
+     #[rustc_allow_incoherent_impl]
+     #[stable(feature = "slice_sort_by_key", since = "1.7.0")]
+     #[inline]
+     pub fn sort_by_key<K, F>(&mut self, mut f: F)
+     where
+         F: FnMut(&T) -> K,
+         K: Ord,
+     {
+         merge_sort(self, |a, b| f(a).lt(&f(b)));
+     }
+
+     /// Sorts the slice with a key extraction function.
+     ///
+     /// During sorting, the key function is called at most once per element, by using
+     /// temporary storage to remember the results of key evaluation.
+     /// The order of calls to the key function is unspecified and may change in future versions
+     /// of the standard library.
+     ///
+     /// This sort is stable (i.e., does not reorder equal elements) and *O*(*m* \* *n* + *n* \* log(*n*))
+     /// worst-case, where the key function is *O*(*m*).
+     ///
+     /// For simple key functions (e.g., functions that are property accesses or
+     /// basic operations), [`sort_by_key`](slice::sort_by_key) is likely to be
+     /// faster.
+     ///
+     /// # Current implementation
+     ///
+     /// The current algorithm is based on [pattern-defeating quicksort][pdqsort] by Orson Peters,
+     /// which combines the fast average case of randomized quicksort with the fast worst case of
+     /// heapsort, while achieving linear time on slices with certain patterns. It uses some
+     /// randomization to avoid degenerate cases, but with a fixed seed to always provide
+     /// deterministic behavior.
+     ///
+     /// In the worst case, the algorithm allocates temporary storage in a `Vec<(K, usize)>` the
+     /// length of the slice.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// let mut v = [-5i32, 4, 32, -3, 2];
+     ///
+     /// v.sort_by_cached_key(|k| k.to_string());
+     /// assert!(v == [-3, -5, 2, 32, 4]);
+     /// ```
+     ///
+     /// [pdqsort]: https://github.com/orlp/pdqsort
+     #[cfg(not(no_global_oom_handling))]
+     #[rustc_allow_incoherent_impl]
+     #[stable(feature = "slice_sort_by_cached_key", since = "1.34.0")]
+     #[inline]
+     pub fn sort_by_cached_key<K, F>(&mut self, f: F)
+     where
+         F: FnMut(&T) -> K,
+         K: Ord,
+     {
+         // Helper macro for indexing our vector by the smallest possible type, to reduce allocation.
+         macro_rules! sort_by_key {
+             ($t:ty, $slice:ident, $f:ident) => {{
+                 let mut indices: Vec<_> =
+                     $slice.iter().map($f).enumerate().map(|(i, k)| (k, i as $t)).collect();
+                 // The elements of `indices` are unique, as they are indexed, so any sort will be
+                 // stable with respect to the original slice. We use `sort_unstable` here because
+                 // it requires less memory allocation.
+                 indices.sort_unstable();
+                 for i in 0..$slice.len() {
+                     let mut index = indices[i].1;
+                     while (index as usize) < i {
+                         index = indices[index as usize].1;
+                     }
+                     indices[i].1 = index;
+                     $slice.swap(i, index as usize);
+                 }
+             }};
+         }
+
+         let sz_u8 = mem::size_of::<(K, u8)>();
+         let sz_u16 = mem::size_of::<(K, u16)>();
+         let sz_u32 = mem::size_of::<(K, u32)>();
+         let sz_usize = mem::size_of::<(K, usize)>();
+
+         let len = self.len();
+         if len < 2 {
+             return;
+         }
+         if sz_u8 < sz_u16 && len <= (u8::MAX as usize) {
+             return sort_by_key!(u8, self, f);
+         }
+         if sz_u16 < sz_u32 && len <= (u16::MAX as usize) {
+             return sort_by_key!(u16, self, f);
+         }
+         if sz_u32 < sz_usize && len <= (u32::MAX as usize) {
+             return sort_by_key!(u32, self, f);
+         }
+         sort_by_key!(usize, self, f)
+     }
+
+     /// Copies `self` into a new `Vec`.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// let s = [10, 40, 30];
+     /// let x = s.to_vec();
+     /// // Here, `s` and `x` can be modified independently.
+     /// ```
+     #[cfg(not(no_global_oom_handling))]
+     #[rustc_allow_incoherent_impl]
+     #[rustc_conversion_suggestion]
+     #[stable(feature = "rust1", since = "1.0.0")]
+     #[inline]
+     pub fn to_vec(&self) -> Vec<T>
+     where
+         T: Clone,
+     {
+         self.to_vec_in(Global)
+     }
+
+     /// Copies `self` into a new `Vec` with an allocator.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// #![feature(allocator_api)]
+     ///
+     /// use std::alloc::System;
+     ///
+     /// let s = [10, 40, 30];
+     /// let x = s.to_vec_in(System);
+     /// // Here, `s` and `x` can be modified independently.
+     /// ```
+     #[cfg(not(no_global_oom_handling))]
+     #[rustc_allow_incoherent_impl]
+     #[inline]
+     #[unstable(feature = "allocator_api", issue = "32838")]
+     pub fn to_vec_in<A: Allocator>(&self, alloc: A) -> Vec<T, A>
+     where
+         T: Clone,
+     {
+         // N.B., see the `hack` module in this file for more details.
+         hack::to_vec(self, alloc)
+     }
+
+     /// Converts `self` into a vector without clones or allocation.
+     ///
+     /// The resulting vector can be converted back into a box via
+     /// `Vec<T>`'s `into_boxed_slice` method.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// let s: Box<[i32]> = Box::new([10, 40, 30]);
+     /// let x = s.into_vec();
+     /// // `s` cannot be used anymore because it has been converted into `x`.
+     ///
+     /// assert_eq!(x, vec![10, 40, 30]);
+     /// ```
+     #[rustc_allow_incoherent_impl]
+     #[stable(feature = "rust1", since = "1.0.0")]
+     #[inline]
+     pub fn into_vec<A: Allocator>(self: Box<Self, A>) -> Vec<T, A> {
+         // N.B., see the `hack` module in this file for more details.
+         hack::into_vec(self)
+     }
+
+     /// Creates a vector by repeating a slice `n` times.
+     ///
+     /// # Panics
+     ///
+     /// This function will panic if the capacity would overflow.
+     ///
+     /// # Examples
+     ///
+     /// Basic usage:
+     ///
+     /// ```
+     /// assert_eq!([1, 2].repeat(3), vec![1, 2, 1, 2, 1, 2]);
+     /// ```
+     ///
+     /// A panic upon overflow:
+     ///
+     /// ```should_panic
+     /// // this will panic at runtime
+     /// b"0123456789abcdef".repeat(usize::MAX);
+     /// ```
+     #[rustc_allow_incoherent_impl]
+     #[cfg(not(no_global_oom_handling))]
+     #[stable(feature = "repeat_generic_slice", since = "1.40.0")]
+     pub fn repeat(&self, n: usize) -> Vec<T>
+     where
+         T: Copy,
+     {
+         if n == 0 {
+             return Vec::new();
+         }
+
+         // If `n` is larger than zero, it can be split as
+         // `n = 2^expn + rem (2^expn > rem, expn >= 0, rem >= 0)`.
+         // `2^expn` is the number represented by the leftmost '1' bit of `n`,
+         // and `rem` is the remaining part of `n`.
+
+         // Using `Vec` to access `set_len()`.
+         let capacity = self.len().checked_mul(n).expect("capacity overflow");
+         let mut buf = Vec::with_capacity(capacity);
+
+         // `2^expn` repetition is done by doubling `buf` `expn`-times.
+         buf.extend(self);
+         {
+             let mut m = n >> 1;
+             // If `m > 0`, there are remaining bits up to the leftmost '1'.
+             while m > 0 {
+                 // `buf.extend(buf)`:
+                 unsafe {
+                     ptr::copy_nonoverlapping(
+                         buf.as_ptr(),
+                         (buf.as_mut_ptr() as *mut T).add(buf.len()),
+                         buf.len(),
+                     );
+                     // `buf` has capacity of `self.len() * n`.
+                     let buf_len = buf.len();
+                     buf.set_len(buf_len * 2);
+                 }
+
+                 m >>= 1;
+             }
+         }
+
+         // `rem` (`= n - 2^expn`) repetition is done by copying
+         // first `rem` repetitions from `buf` itself.
+         let rem_len = capacity - buf.len(); // `self.len() * rem`
+         if rem_len > 0 {
+             // `buf.extend(buf[0 .. rem_len])`:
+             unsafe {
+                 // This is non-overlapping since `2^expn > rem`.
+                 ptr::copy_nonoverlapping(
+                     buf.as_ptr(),
+                     (buf.as_mut_ptr() as *mut T).add(buf.len()),
+                     rem_len,
+                 );
+                 // `buf.len() + rem_len` equals `buf.capacity()` (`= self.len() * n`).
+                 buf.set_len(capacity);
+             }
+         }
+         buf
+     }
+
+     /// Flattens a slice of `T` into a single value `Self::Output`.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// assert_eq!(["hello", "world"].concat(), "helloworld");
+     /// assert_eq!([[1, 2], [3, 4]].concat(), [1, 2, 3, 4]);
+     /// ```
+     #[rustc_allow_incoherent_impl]
+     #[stable(feature = "rust1", since = "1.0.0")]
+     pub fn concat<Item: ?Sized>(&self) -> <Self as Concat<Item>>::Output
+     where
+         Self: Concat<Item>,
+     {
+         Concat::concat(self)
+     }
+
+     /// Flattens a slice of `T` into a single value `Self::Output`, placing a
+     /// given separator between each.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// assert_eq!(["hello", "world"].join(" "), "hello world");
+     /// assert_eq!([[1, 2], [3, 4]].join(&0), [1, 2, 0, 3, 4]);
+     /// assert_eq!([[1, 2], [3, 4]].join(&[0, 0][..]), [1, 2, 0, 0, 3, 4]);
+     /// ```
+     #[rustc_allow_incoherent_impl]
+     #[stable(feature = "rename_connect_to_join", since = "1.3.0")]
+     pub fn join<Separator>(&self, sep: Separator) -> <Self as Join<Separator>>::Output
+     where
+         Self: Join<Separator>,
+     {
+         Join::join(self, sep)
+     }
+
+     /// Flattens a slice of `T` into a single value `Self::Output`, placing a
+     /// given separator between each.
+     ///
+     /// # Examples
+     ///
+     /// ```
+     /// # #![allow(deprecated)]
+     /// assert_eq!(["hello", "world"].connect(" "), "hello world");
+     /// assert_eq!([[1, 2], [3, 4]].connect(&0), [1, 2, 0, 3, 4]);
+     /// ```
+     #[rustc_allow_incoherent_impl]
+     #[stable(feature = "rust1", since = "1.0.0")]
+     #[deprecated(since = "1.3.0", note = "renamed to join")]
+     pub fn connect<Separator>(&self, sep: Separator) -> <Self as Join<Separator>>::Output
+     where
+         Self: Join<Separator>,
+     {
+         Join::join(self, sep)
+     }
+ }
+
+ #[cfg(not(test))]
+ impl [u8] {
+     /// Returns a vector containing a copy of this slice where each byte
+     /// is mapped to its ASCII upper case equivalent.
+     ///
+     /// ASCII letters 'a' to 'z' are mapped to 'A' to 'Z',
+     /// but non-ASCII letters are unchanged.
+     ///
+     /// To uppercase the value in-place, use [`make_ascii_uppercase`].
+     ///
+     /// [`make_ascii_uppercase`]: slice::make_ascii_uppercase
+     #[cfg(not(no_global_oom_handling))]
+     #[rustc_allow_incoherent_impl]
+     #[must_use = "this returns the uppercase bytes as a new Vec, \
+                   without modifying the original"]
+     #[stable(feature = "ascii_methods_on_intrinsics", since = "1.23.0")]
+     #[inline]
+     pub fn to_ascii_uppercase(&self) -> Vec<u8> {
+         let mut me = self.to_vec();
+         me.make_ascii_uppercase();
+         me
+     }
+
+     /// Returns a vector containing a copy of this slice where each byte
+     /// is mapped to its ASCII lower case equivalent.
+     ///
+     /// ASCII letters 'A' to 'Z' are mapped to 'a' to 'z',
+     /// but non-ASCII letters are unchanged.
+     ///
+     /// To lowercase the value in-place, use [`make_ascii_lowercase`].
+     ///
+     /// [`make_ascii_lowercase`]: slice::make_ascii_lowercase
+     #[cfg(not(no_global_oom_handling))]
+     #[rustc_allow_incoherent_impl]
+     #[must_use = "this returns the lowercase bytes as a new Vec, \
+                   without modifying the original"]
+     #[stable(feature = "ascii_methods_on_intrinsics", since = "1.23.0")]
+     #[inline]
+     pub fn to_ascii_lowercase(&self) -> Vec<u8> {
+         let mut me = self.to_vec();
+         me.make_ascii_lowercase();
+         me
+     }
+ }
+
+ ////////////////////////////////////////////////////////////////////////////////
+ // Extension traits for slices over specific kinds of data
+ ////////////////////////////////////////////////////////////////////////////////
+
+ /// Helper trait for [`[T]::concat`](slice::concat).
+ ///
+ /// Note: the `Item` type parameter is not used in this trait,
+ /// but it allows impls to be more generic.
+ /// Without it, we get this error:
+ ///
+ /// ```error
+ /// error[E0207]: the type parameter `T` is not constrained by the impl trait, self type, or predicates
+ ///    --> src/liballoc/slice.rs:608:6
+ ///     |
+ /// 608 | impl<T: Clone, V: Borrow<[T]>> Concat for [V] {
+ ///     |      ^ unconstrained type parameter
+ /// ```
+ ///
+ /// This is because there could exist `V` types with multiple `Borrow<[_]>` impls,
+ /// such that multiple `T` types would apply:
+ ///
+ /// ```
+ /// # #[allow(dead_code)]
+ /// pub struct Foo(Vec<u32>, Vec<String>);
+ ///
+ /// impl std::borrow::Borrow<[u32]> for Foo {
+ ///     fn borrow(&self) -> &[u32] { &self.0 }
+ /// }
+ ///
+ /// impl std::borrow::Borrow<[String]> for Foo {
+ ///     fn borrow(&self) -> &[String] { &self.1 }
+ /// }
+ /// ```
+ #[unstable(feature = "slice_concat_trait", issue = "27747")]
+ pub trait Concat<Item: ?Sized> {
+     #[unstable(feature = "slice_concat_trait", issue = "27747")]
+     /// The resulting type after concatenation
+     type Output;
+
+     /// Implementation of [`[T]::concat`](slice::concat)
+     #[unstable(feature = "slice_concat_trait", issue = "27747")]
+     fn concat(slice: &Self) -> Self::Output;
+ }
+
+ /// Helper trait for [`[T]::join`](slice::join)
+ #[unstable(feature = "slice_concat_trait", issue = "27747")]
+ pub trait Join<Separator> {
+     #[unstable(feature = "slice_concat_trait", issue = "27747")]
+     /// The resulting type after concatenation
+     type Output;
+
+     /// Implementation of [`[T]::join`](slice::join)
+     #[unstable(feature = "slice_concat_trait", issue = "27747")]
+     fn join(slice: &Self, sep: Separator) -> Self::Output;
+ }
+
+ #[cfg(not(no_global_oom_handling))]
+ #[unstable(feature = "slice_concat_ext", issue = "27747")]
+ impl<T: Clone, V: Borrow<[T]>> Concat<T> for [V] {
+     type Output = Vec<T>;
+
+     fn concat(slice: &Self) -> Vec<T> {
+         let size = slice.iter().map(|slice| slice.borrow().len()).sum();
+         let mut result = Vec::with_capacity(size);
+         for v in slice {
+             result.extend_from_slice(v.borrow())
+         }
+         result
+     }
+ }
+
+ #[cfg(not(no_global_oom_handling))]
+ #[unstable(feature = "slice_concat_ext", issue = "27747")]
+ impl<T: Clone, V: Borrow<[T]>> Join<&T> for [V] {
+     type Output = Vec<T>;
+
+     fn join(slice: &Self, sep: &T) -> Vec<T> {
+         let mut iter = slice.iter();
+         let first = match iter.next() {
+             Some(first) => first,
+             None => return vec![],
+         };
+         let size = slice.iter().map(|v| v.borrow().len()).sum::<usize>() + slice.len() - 1;
+         let mut result = Vec::with_capacity(size);
+         result.extend_from_slice(first.borrow());
+
+         for v in iter {
+             result.push(sep.clone());
+             result.extend_from_slice(v.borrow())
+         }
+         result
+     }
+ }
+
+ #[cfg(not(no_global_oom_handling))]
+ #[unstable(feature = "slice_concat_ext", issue = "27747")]
+ impl<T: Clone, V: Borrow<[T]>> Join<&[T]> for [V] {
+     type Output = Vec<T>;
+
+     fn join(slice: &Self, sep: &[T]) -> Vec<T> {
+         let mut iter = slice.iter();
+         let first = match iter.next() {
+             Some(first) => first,
+             None => return vec![],
+         };
+         let size =
+             slice.iter().map(|v| v.borrow().len()).sum::<usize>() + sep.len() * (slice.len() - 1);
+         let mut result = Vec::with_capacity(size);
+         result.extend_from_slice(first.borrow());
+
+         for v in iter {
+             result.extend_from_slice(sep);
+             result.extend_from_slice(v.borrow())
+         }
+         result
+     }
+ }
+
+ ////////////////////////////////////////////////////////////////////////////////
+ // Standard trait implementations for slices
+ ////////////////////////////////////////////////////////////////////////////////
+
+ #[stable(feature = "rust1", since = "1.0.0")]
+ impl<T> Borrow<[T]> for Vec<T> {
+     fn borrow(&self) -> &[T] {
+         &self[..]
+     }
+ }
+
+ #[stable(feature = "rust1", since = "1.0.0")]
+ impl<T> BorrowMut<[T]> for Vec<T> {
+     fn borrow_mut(&mut self) -> &mut [T] {
+         &mut self[..]
+     }
+ }
+
+ #[cfg(not(no_global_oom_handling))]
+ #[stable(feature = "rust1", since = "1.0.0")]
+ impl<T: Clone> ToOwned for [T] {
+     type Owned = Vec<T>;
+     #[cfg(not(test))]
+     fn to_owned(&self) -> Vec<T> {
+         self.to_vec()
+     }
+
+     #[cfg(test)]
+     fn to_owned(&self) -> Vec<T> {
+         hack::to_vec(self, Global)
+     }
+
+     fn clone_into(&self, target: &mut Vec<T>) {
+         // drop anything in target that will not be overwritten
+         target.truncate(self.len());
+
+         // target.len <= self.len due to the truncate above, so the
+         // slices here are always in-bounds.
+         let (init, tail) = self.split_at(target.len());
+
+         // reuse the contained values' allocations/resources.
+         target.clone_from_slice(init);
+         target.extend_from_slice(tail);
+     }
+ }
+
+ ////////////////////////////////////////////////////////////////////////////////
+ // Sorting
+ ////////////////////////////////////////////////////////////////////////////////
+
+ /// Inserts `v[0]` into pre-sorted sequence `v[1..]` so that whole `v[..]` becomes sorted.
+ ///
+ /// This is the integral subroutine of insertion sort.
+ #[cfg(not(no_global_oom_handling))]
+ fn insert_head<T, F>(v: &mut [T], is_less: &mut F)
+ where
+     F: FnMut(&T, &T) -> bool,
+ {
+     if v.len() >= 2 && is_less(&v[1], &v[0]) {
+         unsafe {
+             // There are three ways to implement insertion here:
+             //
+             // 1. Swap adjacent elements until the first one gets to its final destination.
+             //    However, this way we copy data around more than is necessary. If elements are big
+             //    structures (costly to copy), this method will be slow.
+             //
+             // 2. Iterate until the right place for the first element is found. Then shift the
+             //    elements succeeding it to make room for it and finally place it into the
+             //    remaining hole. This is a good method.
+             //
+             // 3. Copy the first element into a temporary variable. Iterate until the right place
+             //    for it is found. As we go along, copy every traversed element into the slot
+             //    preceding it. Finally, copy data from the temporary variable into the remaining
+             //    hole. This method is very good. Benchmarks demonstrated slightly better
+             //    performance than with the 2nd method.
+             //
+             // All methods were benchmarked, and the 3rd showed best results. So we chose that one.
+             let tmp = mem::ManuallyDrop::new(ptr::read(&v[0]));
+
+             // Intermediate state of the insertion process is always tracked by `hole`, which
+             // serves two purposes:
+             // 1. Protects integrity of `v` from panics in `is_less`.
+             // 2. Fills the remaining hole in `v` in the end.
+             //
+             // Panic safety:
+             //
+             // If `is_less` panics at any point during the process, `hole` will get dropped and
+             // fill the hole in `v` with `tmp`, thus ensuring that `v` still holds every object it
+             // initially held exactly once.
+             let mut hole = InsertionHole { src: &*tmp, dest: &mut v[1] };
+             ptr::copy_nonoverlapping(&v[1], &mut v[0], 1);
+
+             for i in 2..v.len() {
+                 if !is_less(&v[i], &*tmp) {
+                     break;
+                 }
+                 ptr::copy_nonoverlapping(&v[i], &mut v[i - 1], 1);
+                 hole.dest = &mut v[i];
+             }
+             // `hole` gets dropped and thus copies `tmp` into the remaining hole in `v`.
+         }
+     }
+
+     // When dropped, copies from `src` into `dest`.
+     struct InsertionHole<T> {
+         src: *const T,
+         dest: *mut T,
+     }
+
+     impl<T> Drop for InsertionHole<T> {
+         fn drop(&mut self) {
+             unsafe {
+                 ptr::copy_nonoverlapping(self.src, self.dest, 1);
+             }
+         }
+     }
+ }
+
+ /// Merges non-decreasing runs `v[..mid]` and `v[mid..]` using `buf` as temporary storage, and
+ /// stores the result into `v[..]`.
+ ///
+ /// # Safety
+ ///
+ /// The two slices must be non-empty and `mid` must be in bounds. Buffer `buf` must be long enough
+ /// to hold a copy of the shorter slice. Also, `T` must not be a zero-sized type.
+ #[cfg(not(no_global_oom_handling))]
+ unsafe fn merge<T, F>(v: &mut [T], mid: usize, buf: *mut T, is_less: &mut F)
+ where
+     F: FnMut(&T, &T) -> bool,
+ {
+     let len = v.len();
+     let v = v.as_mut_ptr();
+     let (v_mid, v_end) = unsafe { (v.add(mid), v.add(len)) };
+
+     // The merge process first copies the shorter run into `buf`. Then it traces the newly copied
+     // run and the longer run forwards (or backwards), comparing their next unconsumed elements and
+     // copying the lesser (or greater) one into `v`.
+     //
+     // As soon as the shorter run is fully consumed, the process is done. If the longer run gets
+     // consumed first, then we must copy whatever is left of the shorter run into the remaining
+     // hole in `v`.
+     //
+     // Intermediate state of the process is always tracked by `hole`, which serves two purposes:
+     // 1. Protects integrity of `v` from panics in `is_less`.
+     // 2. Fills the remaining hole in `v` if the longer run gets consumed first.
+     //
+     // Panic safety:
+     //
+     // If `is_less` panics at any point during the process, `hole` will get dropped and fill the
+     // hole in `v` with the unconsumed range in `buf`, thus ensuring that `v` still holds every
+     // object it initially held exactly once.
+     let mut hole;
+
+     if mid <= len - mid {
+         // The left run is shorter.
+         unsafe {
+             ptr::copy_nonoverlapping(v, buf, mid);
+             hole = MergeHole { start: buf, end: buf.add(mid), dest: v };
+         }
+
+         // Initially, these pointers point to the beginnings of their arrays.
+         let left = &mut hole.start;
+         let mut right = v_mid;
+         let out = &mut hole.dest;
+
+         while *left < hole.end && right < v_end {
+             // Consume the lesser side.
+             // If equal, prefer the left run to maintain stability.
+             unsafe {
+                 let to_copy = if is_less(&*right, &**left) {
+                     get_and_increment(&mut right)
+                 } else {
+                     get_and_increment(left)
+                 };
+                 ptr::copy_nonoverlapping(to_copy, get_and_increment(out), 1);
+             }
+         }
+     } else {
+         // The right run is shorter.
+         unsafe {
+             ptr::copy_nonoverlapping(v_mid, buf, len - mid);
+             hole = MergeHole { start: buf, end: buf.add(len - mid), dest: v_mid };
+         }
+
+         // Initially, these pointers point past the ends of their arrays.
1019 + let left = &mut hole.dest; 1020 + let right = &mut hole.end; 1021 + let mut out = v_end; 1022 + 1023 + while v < *left && buf < *right { 1024 + // Consume the greater side. 1025 + // If equal, prefer the right run to maintain stability. 1026 + unsafe { 1027 + let to_copy = if is_less(&*right.offset(-1), &*left.offset(-1)) { 1028 + decrement_and_get(left) 1029 + } else { 1030 + decrement_and_get(right) 1031 + }; 1032 + ptr::copy_nonoverlapping(to_copy, decrement_and_get(&mut out), 1); 1033 + } 1034 + } 1035 + } 1036 + // Finally, `hole` gets dropped. If the shorter run was not fully consumed, whatever remains of 1037 + // it will now be copied into the hole in `v`. 1038 + 1039 + unsafe fn get_and_increment<T>(ptr: &mut *mut T) -> *mut T { 1040 + let old = *ptr; 1041 + *ptr = unsafe { ptr.offset(1) }; 1042 + old 1043 + } 1044 + 1045 + unsafe fn decrement_and_get<T>(ptr: &mut *mut T) -> *mut T { 1046 + *ptr = unsafe { ptr.offset(-1) }; 1047 + *ptr 1048 + } 1049 + 1050 + // When dropped, copies the range `start..end` into `dest..`. 1051 + struct MergeHole<T> { 1052 + start: *mut T, 1053 + end: *mut T, 1054 + dest: *mut T, 1055 + } 1056 + 1057 + impl<T> Drop for MergeHole<T> { 1058 + fn drop(&mut self) { 1059 + // `T` is not a zero-sized type, and these are pointers into a slice's elements. 1060 + unsafe { 1061 + let len = self.end.sub_ptr(self.start); 1062 + ptr::copy_nonoverlapping(self.start, self.dest, len); 1063 + } 1064 + } 1065 + } 1066 + } 1067 + 1068 + /// This merge sort borrows some (but not all) ideas from TimSort, which is described in detail 1069 + /// [here](https://github.com/python/cpython/blob/main/Objects/listsort.txt). 1070 + /// 1071 + /// The algorithm identifies strictly descending and non-descending subsequences, which are called 1072 + /// natural runs. There is a stack of pending runs yet to be merged. 
Each newly found run is pushed 1073 + /// onto the stack, and then some pairs of adjacent runs are merged until these two invariants are 1074 + /// satisfied: 1075 + /// 1076 + /// 1. for every `i` in `1..runs.len()`: `runs[i - 1].len > runs[i].len` 1077 + /// 2. for every `i` in `2..runs.len()`: `runs[i - 2].len > runs[i - 1].len + runs[i].len` 1078 + /// 1079 + /// The invariants ensure that the total running time is *O*(*n* \* log(*n*)) worst-case. 1080 + #[cfg(not(no_global_oom_handling))] 1081 + fn merge_sort<T, F>(v: &mut [T], mut is_less: F) 1082 + where 1083 + F: FnMut(&T, &T) -> bool, 1084 + { 1085 + // Slices of up to this length get sorted using insertion sort. 1086 + const MAX_INSERTION: usize = 20; 1087 + // Very short runs are extended using insertion sort to span at least this many elements. 1088 + const MIN_RUN: usize = 10; 1089 + 1090 + // Sorting has no meaningful behavior on zero-sized types. 1091 + if size_of::<T>() == 0 { 1092 + return; 1093 + } 1094 + 1095 + let len = v.len(); 1096 + 1097 + // Short arrays get sorted in-place via insertion sort to avoid allocations. 1098 + if len <= MAX_INSERTION { 1099 + if len >= 2 { 1100 + for i in (0..len - 1).rev() { 1101 + insert_head(&mut v[i..], &mut is_less); 1102 + } 1103 + } 1104 + return; 1105 + } 1106 + 1107 + // Allocate a buffer to use as scratch memory. We keep the length 0 so we can keep in it 1108 + // shallow copies of the contents of `v` without risking the dtors running on copies if 1109 + // `is_less` panics. When merging two sorted runs, this buffer holds a copy of the shorter run, 1110 + // which will always have length at most `len / 2`. 1111 + let mut buf = Vec::with_capacity(len / 2); 1112 + 1113 + // In order to identify natural runs in `v`, we traverse it backwards. That might seem like a 1114 + // strange decision, but consider the fact that merges more often go in the opposite direction 1115 + // (forwards). 
According to benchmarks, merging forwards is slightly faster than merging 1116 + // backwards. To conclude, identifying runs by traversing backwards improves performance. 1117 + let mut runs = vec![]; 1118 + let mut end = len; 1119 + while end > 0 { 1120 + // Find the next natural run, and reverse it if it's strictly descending. 1121 + let mut start = end - 1; 1122 + if start > 0 { 1123 + start -= 1; 1124 + unsafe { 1125 + if is_less(v.get_unchecked(start + 1), v.get_unchecked(start)) { 1126 + while start > 0 && is_less(v.get_unchecked(start), v.get_unchecked(start - 1)) { 1127 + start -= 1; 1128 + } 1129 + v[start..end].reverse(); 1130 + } else { 1131 + while start > 0 && !is_less(v.get_unchecked(start), v.get_unchecked(start - 1)) 1132 + { 1133 + start -= 1; 1134 + } 1135 + } 1136 + } 1137 + } 1138 + 1139 + // Insert some more elements into the run if it's too short. Insertion sort is faster than 1140 + // merge sort on short sequences, so this significantly improves performance. 1141 + while start > 0 && end - start < MIN_RUN { 1142 + start -= 1; 1143 + insert_head(&mut v[start..end], &mut is_less); 1144 + } 1145 + 1146 + // Push this run onto the stack. 1147 + runs.push(Run { start, len: end - start }); 1148 + end = start; 1149 + 1150 + // Merge some pairs of adjacent runs to satisfy the invariants. 1151 + while let Some(r) = collapse(&runs) { 1152 + let left = runs[r + 1]; 1153 + let right = runs[r]; 1154 + unsafe { 1155 + merge( 1156 + &mut v[left.start..right.start + right.len], 1157 + left.len, 1158 + buf.as_mut_ptr(), 1159 + &mut is_less, 1160 + ); 1161 + } 1162 + runs[r] = Run { start: left.start, len: left.len + right.len }; 1163 + runs.remove(r + 1); 1164 + } 1165 + } 1166 + 1167 + // Finally, exactly one run must remain in the stack. 1168 + debug_assert!(runs.len() == 1 && runs[0].start == 0 && runs[0].len == len); 1169 + 1170 + // Examines the stack of runs and identifies the next pair of runs to merge. 
More specifically, 1171 + // if `Some(r)` is returned, that means `runs[r]` and `runs[r + 1]` must be merged next. If the 1172 + // algorithm should continue building a new run instead, `None` is returned. 1173 + // 1174 + // TimSort is infamous for its buggy implementations, as described here: 1175 + // http://envisage-project.eu/timsort-specification-and-verification/ 1176 + // 1177 + // The gist of the story is: we must enforce the invariants on the top four runs on the stack. 1178 + // Enforcing them on just top three is not sufficient to ensure that the invariants will still 1179 + // hold for *all* runs in the stack. 1180 + // 1181 + // This function correctly checks invariants for the top four runs. Additionally, if the top 1182 + // run starts at index 0, it will always demand a merge operation until the stack is fully 1183 + // collapsed, in order to complete the sort. 1184 + #[inline] 1185 + fn collapse(runs: &[Run]) -> Option<usize> { 1186 + let n = runs.len(); 1187 + if n >= 2 1188 + && (runs[n - 1].start == 0 1189 + || runs[n - 2].len <= runs[n - 1].len 1190 + || (n >= 3 && runs[n - 3].len <= runs[n - 2].len + runs[n - 1].len) 1191 + || (n >= 4 && runs[n - 4].len <= runs[n - 3].len + runs[n - 2].len)) 1192 + { 1193 + if n >= 3 && runs[n - 3].len < runs[n - 1].len { Some(n - 3) } else { Some(n - 2) } 1194 + } else { 1195 + None 1196 + } 1197 + } 1198 + 1199 + #[derive(Clone, Copy)] 1200 + struct Run { 1201 + start: usize, 1202 + len: usize, 1203 + } 1204 + }
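The run stack and its two length invariants can be modeled in ordinary stable Rust. The sketch below is illustrative only (hypothetical names, and the `start == 0` clause that forces the final collapse is omitted); it mirrors the `collapse` logic shown above:

```rust
/// Toy model of the TimSort run stack described above. Only run lengths
/// are tracked; the `start == 0` clause that forces the final collapse
/// is omitted for brevity (illustrative sketch, not the kernel code).
fn collapse_point(lens: &[usize]) -> Option<usize> {
    let n = lens.len();
    if n >= 2
        && (lens[n - 2] <= lens[n - 1]
            || (n >= 3 && lens[n - 3] <= lens[n - 2] + lens[n - 1])
            || (n >= 4 && lens[n - 4] <= lens[n - 3] + lens[n - 2]))
    {
        if n >= 3 && lens[n - 3] < lens[n - 1] { Some(n - 3) } else { Some(n - 2) }
    } else {
        None
    }
}

/// Push a new run length, then merge adjacent runs until the invariants
/// `lens[i - 1] > lens[i]` and `lens[i - 2] > lens[i - 1] + lens[i]` hold.
fn push_run(stack: &mut Vec<usize>, len: usize) {
    stack.push(len);
    while let Some(r) = collapse_point(stack) {
        stack[r] += stack[r + 1];
        stack.remove(r + 1);
    }
}

fn main() {
    let mut stack = Vec::new();
    for len in [5, 3, 4, 2, 1, 6] {
        push_run(&mut stack, len);
    }
    // No elements are lost, and the first invariant holds on the result.
    assert_eq!(stack.iter().sum::<usize>(), 21);
    for i in 1..stack.len() {
        assert!(stack[i - 1] > stack[i]);
    }
}
```

Because the invariants bound how slowly run lengths may shrink down the stack, the stack depth stays logarithmic in `len`, which is what gives the *O*(*n* \* log(*n*)) worst case.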
rust/alloc/vec/drain.rs (+186)
··· 1 + // SPDX-License-Identifier: Apache-2.0 OR MIT 2 + 3 + use crate::alloc::{Allocator, Global}; 4 + use core::fmt; 5 + use core::iter::{FusedIterator, TrustedLen}; 6 + use core::mem; 7 + use core::ptr::{self, NonNull}; 8 + use core::slice::{self}; 9 + 10 + use super::Vec; 11 + 12 + /// A draining iterator for `Vec<T>`. 13 + /// 14 + /// This `struct` is created by [`Vec::drain`]. 15 + /// See its documentation for more. 16 + /// 17 + /// # Example 18 + /// 19 + /// ``` 20 + /// let mut v = vec![0, 1, 2]; 21 + /// let iter: std::vec::Drain<_> = v.drain(..); 22 + /// ``` 23 + #[stable(feature = "drain", since = "1.6.0")] 24 + pub struct Drain< 25 + 'a, 26 + T: 'a, 27 + #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator + 'a = Global, 28 + > { 29 + /// Index of tail to preserve 30 + pub(super) tail_start: usize, 31 + /// Length of tail 32 + pub(super) tail_len: usize, 33 + /// Current remaining range to remove 34 + pub(super) iter: slice::Iter<'a, T>, 35 + pub(super) vec: NonNull<Vec<T, A>>, 36 + } 37 + 38 + #[stable(feature = "collection_debug", since = "1.17.0")] 39 + impl<T: fmt::Debug, A: Allocator> fmt::Debug for Drain<'_, T, A> { 40 + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 41 + f.debug_tuple("Drain").field(&self.iter.as_slice()).finish() 42 + } 43 + } 44 + 45 + impl<'a, T, A: Allocator> Drain<'a, T, A> { 46 + /// Returns the remaining items of this iterator as a slice. 47 + /// 48 + /// # Examples 49 + /// 50 + /// ``` 51 + /// let mut vec = vec!['a', 'b', 'c']; 52 + /// let mut drain = vec.drain(..); 53 + /// assert_eq!(drain.as_slice(), &['a', 'b', 'c']); 54 + /// let _ = drain.next().unwrap(); 55 + /// assert_eq!(drain.as_slice(), &['b', 'c']); 56 + /// ``` 57 + #[must_use] 58 + #[stable(feature = "vec_drain_as_slice", since = "1.46.0")] 59 + pub fn as_slice(&self) -> &[T] { 60 + self.iter.as_slice() 61 + } 62 + 63 + /// Returns a reference to the underlying allocator. 
64 + #[unstable(feature = "allocator_api", issue = "32838")] 65 + #[must_use] 66 + #[inline] 67 + pub fn allocator(&self) -> &A { 68 + unsafe { self.vec.as_ref().allocator() } 69 + } 70 + } 71 + 72 + #[stable(feature = "vec_drain_as_slice", since = "1.46.0")] 73 + impl<'a, T, A: Allocator> AsRef<[T]> for Drain<'a, T, A> { 74 + fn as_ref(&self) -> &[T] { 75 + self.as_slice() 76 + } 77 + } 78 + 79 + #[stable(feature = "drain", since = "1.6.0")] 80 + unsafe impl<T: Sync, A: Sync + Allocator> Sync for Drain<'_, T, A> {} 81 + #[stable(feature = "drain", since = "1.6.0")] 82 + unsafe impl<T: Send, A: Send + Allocator> Send for Drain<'_, T, A> {} 83 + 84 + #[stable(feature = "drain", since = "1.6.0")] 85 + impl<T, A: Allocator> Iterator for Drain<'_, T, A> { 86 + type Item = T; 87 + 88 + #[inline] 89 + fn next(&mut self) -> Option<T> { 90 + self.iter.next().map(|elt| unsafe { ptr::read(elt as *const _) }) 91 + } 92 + 93 + fn size_hint(&self) -> (usize, Option<usize>) { 94 + self.iter.size_hint() 95 + } 96 + } 97 + 98 + #[stable(feature = "drain", since = "1.6.0")] 99 + impl<T, A: Allocator> DoubleEndedIterator for Drain<'_, T, A> { 100 + #[inline] 101 + fn next_back(&mut self) -> Option<T> { 102 + self.iter.next_back().map(|elt| unsafe { ptr::read(elt as *const _) }) 103 + } 104 + } 105 + 106 + #[stable(feature = "drain", since = "1.6.0")] 107 + impl<T, A: Allocator> Drop for Drain<'_, T, A> { 108 + fn drop(&mut self) { 109 + /// Moves back the un-`Drain`ed elements to restore the original `Vec`. 
110 + struct DropGuard<'r, 'a, T, A: Allocator>(&'r mut Drain<'a, T, A>); 111 + 112 + impl<'r, 'a, T, A: Allocator> Drop for DropGuard<'r, 'a, T, A> { 113 + fn drop(&mut self) { 114 + if self.0.tail_len > 0 { 115 + unsafe { 116 + let source_vec = self.0.vec.as_mut(); 117 + // memmove back untouched tail, update to new length 118 + let start = source_vec.len(); 119 + let tail = self.0.tail_start; 120 + if tail != start { 121 + let src = source_vec.as_ptr().add(tail); 122 + let dst = source_vec.as_mut_ptr().add(start); 123 + ptr::copy(src, dst, self.0.tail_len); 124 + } 125 + source_vec.set_len(start + self.0.tail_len); 126 + } 127 + } 128 + } 129 + } 130 + 131 + let iter = mem::replace(&mut self.iter, (&mut []).iter()); 132 + let drop_len = iter.len(); 133 + 134 + let mut vec = self.vec; 135 + 136 + if mem::size_of::<T>() == 0 { 137 + // ZSTs have no identity, so we don't need to move them around, we only need to drop the correct amount. 138 + // this can be achieved by manipulating the Vec length instead of moving values out from `iter`. 139 + unsafe { 140 + let vec = vec.as_mut(); 141 + let old_len = vec.len(); 142 + vec.set_len(old_len + drop_len + self.tail_len); 143 + vec.truncate(old_len + self.tail_len); 144 + } 145 + 146 + return; 147 + } 148 + 149 + // ensure elements are moved back into their appropriate places, even when drop_in_place panics 150 + let _guard = DropGuard(self); 151 + 152 + if drop_len == 0 { 153 + return; 154 + } 155 + 156 + // as_slice() must only be called when iter.len() is > 0 because 157 + // vec::Splice modifies vec::Drain fields and may grow the vec which would invalidate 158 + // the iterator's internal pointers. Creating a reference to deallocated memory 159 + // is invalid even when it is zero-length 160 + let drop_ptr = iter.as_slice().as_ptr(); 161 + 162 + unsafe { 163 + // drop_ptr comes from a slice::Iter which only gives us a &[T] but for drop_in_place 164 + // a pointer with mutable provenance is necessary. 
Therefore we must reconstruct 165 + // it from the original vec but also avoid creating a &mut to the front since that could 166 + // invalidate raw pointers to it which some unsafe code might rely on. 167 + let vec_ptr = vec.as_mut().as_mut_ptr(); 168 + let drop_offset = drop_ptr.sub_ptr(vec_ptr); 169 + let to_drop = ptr::slice_from_raw_parts_mut(vec_ptr.add(drop_offset), drop_len); 170 + ptr::drop_in_place(to_drop); 171 + } 172 + } 173 + } 174 + 175 + #[stable(feature = "drain", since = "1.6.0")] 176 + impl<T, A: Allocator> ExactSizeIterator for Drain<'_, T, A> { 177 + fn is_empty(&self) -> bool { 178 + self.iter.is_empty() 179 + } 180 + } 181 + 182 + #[unstable(feature = "trusted_len", issue = "37572")] 183 + unsafe impl<T, A: Allocator> TrustedLen for Drain<'_, T, A> {} 184 + 185 + #[stable(feature = "fused", since = "1.26.0")] 186 + impl<T, A: Allocator> FusedIterator for Drain<'_, T, A> {}
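The tail-restoring behaviour implemented by `DropGuard` above is observable through the stable `Vec::drain` API. A small sketch (the helper `drain_middle` is hypothetical, for illustration):

```rust
/// Drain the middle of a Vec and return (drained, remainder).
/// The tail is memmoved back when the Drain's Drop runs, as described above.
fn drain_middle(mut v: Vec<i32>) -> (Vec<i32>, Vec<i32>) {
    let drained: Vec<i32> = v.drain(1..3).collect();
    (drained, v)
}

fn main() {
    assert_eq!(drain_middle(vec![1, 2, 3, 4, 5]), (vec![2, 3], vec![1, 4, 5]));

    // Dropping a partially consumed Drain still drops the leftover
    // elements and shifts the tail back to close the gap.
    let mut w = vec![10, 20, 30, 40];
    {
        let mut d = w.drain(0..2);
        assert_eq!(d.next(), Some(10));
        // `d` dropped here: 20 is dropped, tail [30, 40] is moved back.
    }
    assert_eq!(w, [30, 40]);
}
```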
rust/alloc/vec/drain_filter.rs (+145)
··· 1 + // SPDX-License-Identifier: Apache-2.0 OR MIT 2 + 3 + use crate::alloc::{Allocator, Global}; 4 + use core::ptr::{self}; 5 + use core::slice::{self}; 6 + 7 + use super::Vec; 8 + 9 + /// An iterator which uses a closure to determine if an element should be removed. 10 + /// 11 + /// This struct is created by [`Vec::drain_filter`]. 12 + /// See its documentation for more. 13 + /// 14 + /// # Example 15 + /// 16 + /// ``` 17 + /// #![feature(drain_filter)] 18 + /// 19 + /// let mut v = vec![0, 1, 2]; 20 + /// let iter: std::vec::DrainFilter<_, _> = v.drain_filter(|x| *x % 2 == 0); 21 + /// ``` 22 + #[unstable(feature = "drain_filter", reason = "recently added", issue = "43244")] 23 + #[derive(Debug)] 24 + pub struct DrainFilter< 25 + 'a, 26 + T, 27 + F, 28 + #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global, 29 + > where 30 + F: FnMut(&mut T) -> bool, 31 + { 32 + pub(super) vec: &'a mut Vec<T, A>, 33 + /// The index of the item that will be inspected by the next call to `next`. 34 + pub(super) idx: usize, 35 + /// The number of items that have been drained (removed) thus far. 36 + pub(super) del: usize, 37 + /// The original length of `vec` prior to draining. 38 + pub(super) old_len: usize, 39 + /// The filter test predicate. 40 + pub(super) pred: F, 41 + /// A flag that indicates a panic has occurred in the filter test predicate. 42 + /// This is used as a hint in the drop implementation to prevent consumption 43 + /// of the remainder of the `DrainFilter`. Any unprocessed items will be 44 + /// backshifted in the `vec`, but no further items will be dropped or 45 + /// tested by the filter predicate. 46 + pub(super) panic_flag: bool, 47 + } 48 + 49 + impl<T, F, A: Allocator> DrainFilter<'_, T, F, A> 50 + where 51 + F: FnMut(&mut T) -> bool, 52 + { 53 + /// Returns a reference to the underlying allocator. 
54 + #[unstable(feature = "allocator_api", issue = "32838")] 55 + #[inline] 56 + pub fn allocator(&self) -> &A { 57 + self.vec.allocator() 58 + } 59 + } 60 + 61 + #[unstable(feature = "drain_filter", reason = "recently added", issue = "43244")] 62 + impl<T, F, A: Allocator> Iterator for DrainFilter<'_, T, F, A> 63 + where 64 + F: FnMut(&mut T) -> bool, 65 + { 66 + type Item = T; 67 + 68 + fn next(&mut self) -> Option<T> { 69 + unsafe { 70 + while self.idx < self.old_len { 71 + let i = self.idx; 72 + let v = slice::from_raw_parts_mut(self.vec.as_mut_ptr(), self.old_len); 73 + self.panic_flag = true; 74 + let drained = (self.pred)(&mut v[i]); 75 + self.panic_flag = false; 76 + // Update the index *after* the predicate is called. If the index 77 + // is updated prior and the predicate panics, the element at this 78 + // index would be leaked. 79 + self.idx += 1; 80 + if drained { 81 + self.del += 1; 82 + return Some(ptr::read(&v[i])); 83 + } else if self.del > 0 { 84 + let del = self.del; 85 + let src: *const T = &v[i]; 86 + let dst: *mut T = &mut v[i - del]; 87 + ptr::copy_nonoverlapping(src, dst, 1); 88 + } 89 + } 90 + None 91 + } 92 + } 93 + 94 + fn size_hint(&self) -> (usize, Option<usize>) { 95 + (0, Some(self.old_len - self.idx)) 96 + } 97 + } 98 + 99 + #[unstable(feature = "drain_filter", reason = "recently added", issue = "43244")] 100 + impl<T, F, A: Allocator> Drop for DrainFilter<'_, T, F, A> 101 + where 102 + F: FnMut(&mut T) -> bool, 103 + { 104 + fn drop(&mut self) { 105 + struct BackshiftOnDrop<'a, 'b, T, F, A: Allocator> 106 + where 107 + F: FnMut(&mut T) -> bool, 108 + { 109 + drain: &'b mut DrainFilter<'a, T, F, A>, 110 + } 111 + 112 + impl<'a, 'b, T, F, A: Allocator> Drop for BackshiftOnDrop<'a, 'b, T, F, A> 113 + where 114 + F: FnMut(&mut T) -> bool, 115 + { 116 + fn drop(&mut self) { 117 + unsafe { 118 + if self.drain.idx < self.drain.old_len && self.drain.del > 0 { 119 + // This is a pretty messed up state, and there isn't really an 120 + // 
obviously right thing to do. We don't want to keep trying 121 + // to execute `pred`, so we just backshift all the unprocessed 122 + // elements and tell the vec that they still exist. The backshift 123 + // is required to prevent a double-drop of the last successfully 124 + // drained item prior to a panic in the predicate. 125 + let ptr = self.drain.vec.as_mut_ptr(); 126 + let src = ptr.add(self.drain.idx); 127 + let dst = src.sub(self.drain.del); 128 + let tail_len = self.drain.old_len - self.drain.idx; 129 + src.copy_to(dst, tail_len); 130 + } 131 + self.drain.vec.set_len(self.drain.old_len - self.drain.del); 132 + } 133 + } 134 + } 135 + 136 + let backshift = BackshiftOnDrop { drain: self }; 137 + 138 + // Attempt to consume any remaining elements if the filter predicate 139 + // has not yet panicked. We'll backshift any remaining elements 140 + // whether we've already panicked or if the consumption here panics. 141 + if !backshift.drain.panic_flag { 142 + backshift.drain.for_each(drop); 143 + } 144 + } 145 + }
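`drain_filter` is gated behind the nightly `drain_filter` feature, as the attribute above shows. On stable Rust the same split can be approximated with `Iterator::partition`, at the cost of consuming the vector (the helper `split_evens` is hypothetical, for illustration):

```rust
/// Stable approximation of `drain_filter`: split a Vec by a predicate.
/// Unlike `drain_filter`, `partition` consumes the Vec rather than
/// backshifting the kept elements in place.
fn split_evens(v: Vec<i32>) -> (Vec<i32>, Vec<i32>) {
    v.into_iter().partition(|x| x % 2 == 0)
}

fn main() {
    assert_eq!(split_evens(vec![0, 1, 2, 3, 4, 5]), (vec![0, 2, 4], vec![1, 3, 5]));

    // `retain` removes in place (like drain_filter's backshift) but does
    // not yield the removed elements.
    let mut w = vec![0, 1, 2, 3, 4, 5];
    w.retain(|x| x % 2 != 0);
    assert_eq!(w, [1, 3, 5]);
}
```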
rust/alloc/vec/into_iter.rs (+366)
··· 1 + // SPDX-License-Identifier: Apache-2.0 OR MIT 2 + 3 + #[cfg(not(no_global_oom_handling))] 4 + use super::AsVecIntoIter; 5 + use crate::alloc::{Allocator, Global}; 6 + use crate::raw_vec::RawVec; 7 + use core::fmt; 8 + use core::intrinsics::arith_offset; 9 + use core::iter::{ 10 + FusedIterator, InPlaceIterable, SourceIter, TrustedLen, TrustedRandomAccessNoCoerce, 11 + }; 12 + use core::marker::PhantomData; 13 + use core::mem::{self, ManuallyDrop}; 14 + #[cfg(not(no_global_oom_handling))] 15 + use core::ops::Deref; 16 + use core::ptr::{self, NonNull}; 17 + use core::slice::{self}; 18 + 19 + /// An iterator that moves out of a vector. 20 + /// 21 + /// This `struct` is created by the `into_iter` method on [`Vec`](super::Vec) 22 + /// (provided by the [`IntoIterator`] trait). 23 + /// 24 + /// # Example 25 + /// 26 + /// ``` 27 + /// let v = vec![0, 1, 2]; 28 + /// let iter: std::vec::IntoIter<_> = v.into_iter(); 29 + /// ``` 30 + #[stable(feature = "rust1", since = "1.0.0")] 31 + #[rustc_insignificant_dtor] 32 + pub struct IntoIter< 33 + T, 34 + #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global, 35 + > { 36 + pub(super) buf: NonNull<T>, 37 + pub(super) phantom: PhantomData<T>, 38 + pub(super) cap: usize, 39 + // the drop impl reconstructs a RawVec from buf, cap and alloc 40 + // to avoid dropping the allocator twice we need to wrap it into ManuallyDrop 41 + pub(super) alloc: ManuallyDrop<A>, 42 + pub(super) ptr: *const T, 43 + pub(super) end: *const T, 44 + } 45 + 46 + #[stable(feature = "vec_intoiter_debug", since = "1.13.0")] 47 + impl<T: fmt::Debug, A: Allocator> fmt::Debug for IntoIter<T, A> { 48 + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 49 + f.debug_tuple("IntoIter").field(&self.as_slice()).finish() 50 + } 51 + } 52 + 53 + impl<T, A: Allocator> IntoIter<T, A> { 54 + /// Returns the remaining items of this iterator as a slice. 
55 + /// 56 + /// # Examples 57 + /// 58 + /// ``` 59 + /// let vec = vec!['a', 'b', 'c']; 60 + /// let mut into_iter = vec.into_iter(); 61 + /// assert_eq!(into_iter.as_slice(), &['a', 'b', 'c']); 62 + /// let _ = into_iter.next().unwrap(); 63 + /// assert_eq!(into_iter.as_slice(), &['b', 'c']); 64 + /// ``` 65 + #[stable(feature = "vec_into_iter_as_slice", since = "1.15.0")] 66 + pub fn as_slice(&self) -> &[T] { 67 + unsafe { slice::from_raw_parts(self.ptr, self.len()) } 68 + } 69 + 70 + /// Returns the remaining items of this iterator as a mutable slice. 71 + /// 72 + /// # Examples 73 + /// 74 + /// ``` 75 + /// let vec = vec!['a', 'b', 'c']; 76 + /// let mut into_iter = vec.into_iter(); 77 + /// assert_eq!(into_iter.as_slice(), &['a', 'b', 'c']); 78 + /// into_iter.as_mut_slice()[2] = 'z'; 79 + /// assert_eq!(into_iter.next().unwrap(), 'a'); 80 + /// assert_eq!(into_iter.next().unwrap(), 'b'); 81 + /// assert_eq!(into_iter.next().unwrap(), 'z'); 82 + /// ``` 83 + #[stable(feature = "vec_into_iter_as_slice", since = "1.15.0")] 84 + pub fn as_mut_slice(&mut self) -> &mut [T] { 85 + unsafe { &mut *self.as_raw_mut_slice() } 86 + } 87 + 88 + /// Returns a reference to the underlying allocator. 89 + #[unstable(feature = "allocator_api", issue = "32838")] 90 + #[inline] 91 + pub fn allocator(&self) -> &A { 92 + &self.alloc 93 + } 94 + 95 + fn as_raw_mut_slice(&mut self) -> *mut [T] { 96 + ptr::slice_from_raw_parts_mut(self.ptr as *mut T, self.len()) 97 + } 98 + 99 + /// Drops remaining elements and relinquishes the backing allocation. 
100 + /// 101 + /// This is roughly equivalent to the following, but more efficient 102 + /// 103 + /// ``` 104 + /// # let mut into_iter = Vec::<u8>::with_capacity(10).into_iter(); 105 + /// (&mut into_iter).for_each(core::mem::drop); 106 + /// unsafe { core::ptr::write(&mut into_iter, Vec::new().into_iter()); } 107 + /// ``` 108 + /// 109 + /// This method is used by in-place iteration, refer to the vec::in_place_collect 110 + /// documentation for an overview. 111 + #[cfg(not(no_global_oom_handling))] 112 + pub(super) fn forget_allocation_drop_remaining(&mut self) { 113 + let remaining = self.as_raw_mut_slice(); 114 + 115 + // overwrite the individual fields instead of creating a new 116 + // struct and then overwriting &mut self. 117 + // this creates less assembly 118 + self.cap = 0; 119 + self.buf = unsafe { NonNull::new_unchecked(RawVec::NEW.ptr()) }; 120 + self.ptr = self.buf.as_ptr(); 121 + self.end = self.buf.as_ptr(); 122 + 123 + unsafe { 124 + ptr::drop_in_place(remaining); 125 + } 126 + } 127 + 128 + /// Forgets to Drop the remaining elements while still allowing the backing allocation to be freed. 
129 + #[allow(dead_code)] 130 + pub(crate) fn forget_remaining_elements(&mut self) { 131 + self.ptr = self.end; 132 + } 133 + } 134 + 135 + #[stable(feature = "vec_intoiter_as_ref", since = "1.46.0")] 136 + impl<T, A: Allocator> AsRef<[T]> for IntoIter<T, A> { 137 + fn as_ref(&self) -> &[T] { 138 + self.as_slice() 139 + } 140 + } 141 + 142 + #[stable(feature = "rust1", since = "1.0.0")] 143 + unsafe impl<T: Send, A: Allocator + Send> Send for IntoIter<T, A> {} 144 + #[stable(feature = "rust1", since = "1.0.0")] 145 + unsafe impl<T: Sync, A: Allocator + Sync> Sync for IntoIter<T, A> {} 146 + 147 + #[stable(feature = "rust1", since = "1.0.0")] 148 + impl<T, A: Allocator> Iterator for IntoIter<T, A> { 149 + type Item = T; 150 + 151 + #[inline] 152 + fn next(&mut self) -> Option<T> { 153 + if self.ptr as *const _ == self.end { 154 + None 155 + } else if mem::size_of::<T>() == 0 { 156 + // purposefully don't use 'ptr.offset' because for 157 + // vectors with 0-size elements this would return the 158 + // same pointer. 159 + self.ptr = unsafe { arith_offset(self.ptr as *const i8, 1) as *mut T }; 160 + 161 + // Make up a value of this ZST. 
162 + Some(unsafe { mem::zeroed() }) 163 + } else { 164 + let old = self.ptr; 165 + self.ptr = unsafe { self.ptr.offset(1) }; 166 + 167 + Some(unsafe { ptr::read(old) }) 168 + } 169 + } 170 + 171 + #[inline] 172 + fn size_hint(&self) -> (usize, Option<usize>) { 173 + let exact = if mem::size_of::<T>() == 0 { 174 + self.end.addr().wrapping_sub(self.ptr.addr()) 175 + } else { 176 + unsafe { self.end.sub_ptr(self.ptr) } 177 + }; 178 + (exact, Some(exact)) 179 + } 180 + 181 + #[inline] 182 + fn advance_by(&mut self, n: usize) -> Result<(), usize> { 183 + let step_size = self.len().min(n); 184 + let to_drop = ptr::slice_from_raw_parts_mut(self.ptr as *mut T, step_size); 185 + if mem::size_of::<T>() == 0 { 186 + // SAFETY: due to unchecked casts of unsigned amounts to signed offsets the wraparound 187 + // effectively results in unsigned pointers representing positions 0..usize::MAX, 188 + // which is valid for ZSTs. 189 + self.ptr = unsafe { arith_offset(self.ptr as *const i8, step_size as isize) as *mut T } 190 + } else { 191 + // SAFETY: the min() above ensures that step_size is in bounds 192 + self.ptr = unsafe { self.ptr.add(step_size) }; 193 + } 194 + // SAFETY: the min() above ensures that step_size is in bounds 195 + unsafe { 196 + ptr::drop_in_place(to_drop); 197 + } 198 + if step_size < n { 199 + return Err(step_size); 200 + } 201 + Ok(()) 202 + } 203 + 204 + #[inline] 205 + fn count(self) -> usize { 206 + self.len() 207 + } 208 + 209 + unsafe fn __iterator_get_unchecked(&mut self, i: usize) -> Self::Item 210 + where 211 + Self: TrustedRandomAccessNoCoerce, 212 + { 213 + // SAFETY: the caller must guarantee that `i` is in bounds of the 214 + // `Vec<T>`, so `i` cannot overflow an `isize`, and the `self.ptr.add(i)` 215 + // is guaranteed to pointer to an element of the `Vec<T>` and 216 + // thus guaranteed to be valid to dereference. 
217 + // 218 + // Also note the implementation of `Self: TrustedRandomAccess` requires 219 + // that `T: Copy` so reading elements from the buffer doesn't invalidate 220 + // them for `Drop`. 221 + unsafe { 222 + if mem::size_of::<T>() == 0 { mem::zeroed() } else { ptr::read(self.ptr.add(i)) } 223 + } 224 + } 225 + } 226 + 227 + #[stable(feature = "rust1", since = "1.0.0")] 228 + impl<T, A: Allocator> DoubleEndedIterator for IntoIter<T, A> { 229 + #[inline] 230 + fn next_back(&mut self) -> Option<T> { 231 + if self.end == self.ptr { 232 + None 233 + } else if mem::size_of::<T>() == 0 { 234 + // See above for why 'ptr.offset' isn't used 235 + self.end = unsafe { arith_offset(self.end as *const i8, -1) as *mut T }; 236 + 237 + // Make up a value of this ZST. 238 + Some(unsafe { mem::zeroed() }) 239 + } else { 240 + self.end = unsafe { self.end.offset(-1) }; 241 + 242 + Some(unsafe { ptr::read(self.end) }) 243 + } 244 + } 245 + 246 + #[inline] 247 + fn advance_back_by(&mut self, n: usize) -> Result<(), usize> { 248 + let step_size = self.len().min(n); 249 + if mem::size_of::<T>() == 0 { 250 + // SAFETY: same as for advance_by() 251 + self.end = unsafe { 252 + arith_offset(self.end as *const i8, step_size.wrapping_neg() as isize) as *mut T 253 + } 254 + } else { 255 + // SAFETY: same as for advance_by() 256 + self.end = unsafe { self.end.offset(step_size.wrapping_neg() as isize) }; 257 + } 258 + let to_drop = ptr::slice_from_raw_parts_mut(self.end as *mut T, step_size); 259 + // SAFETY: same as for advance_by() 260 + unsafe { 261 + ptr::drop_in_place(to_drop); 262 + } 263 + if step_size < n { 264 + return Err(step_size); 265 + } 266 + Ok(()) 267 + } 268 + } 269 + 270 + #[stable(feature = "rust1", since = "1.0.0")] 271 + impl<T, A: Allocator> ExactSizeIterator for IntoIter<T, A> { 272 + fn is_empty(&self) -> bool { 273 + self.ptr == self.end 274 + } 275 + } 276 + 277 + #[stable(feature = "fused", since = "1.26.0")] 278 + impl<T, A: Allocator> FusedIterator for 
IntoIter<T, A> {} 279 + 280 + #[unstable(feature = "trusted_len", issue = "37572")] 281 + unsafe impl<T, A: Allocator> TrustedLen for IntoIter<T, A> {} 282 + 283 + #[doc(hidden)] 284 + #[unstable(issue = "none", feature = "std_internals")] 285 + #[rustc_unsafe_specialization_marker] 286 + pub trait NonDrop {} 287 + 288 + // T: Copy as approximation for !Drop since get_unchecked does not advance self.ptr 289 + // and thus we can't implement drop-handling 290 + #[unstable(issue = "none", feature = "std_internals")] 291 + impl<T: Copy> NonDrop for T {} 292 + 293 + #[doc(hidden)] 294 + #[unstable(issue = "none", feature = "std_internals")] 295 + // TrustedRandomAccess (without NoCoerce) must not be implemented because 296 + // subtypes/supertypes of `T` might not be `NonDrop` 297 + unsafe impl<T, A: Allocator> TrustedRandomAccessNoCoerce for IntoIter<T, A> 298 + where 299 + T: NonDrop, 300 + { 301 + const MAY_HAVE_SIDE_EFFECT: bool = false; 302 + } 303 + 304 + #[cfg(not(no_global_oom_handling))] 305 + #[stable(feature = "vec_into_iter_clone", since = "1.8.0")] 306 + impl<T: Clone, A: Allocator + Clone> Clone for IntoIter<T, A> { 307 + #[cfg(not(test))] 308 + fn clone(&self) -> Self { 309 + self.as_slice().to_vec_in(self.alloc.deref().clone()).into_iter() 310 + } 311 + #[cfg(test)] 312 + fn clone(&self) -> Self { 313 + crate::slice::to_vec(self.as_slice(), self.alloc.deref().clone()).into_iter() 314 + } 315 + } 316 + 317 + #[stable(feature = "rust1", since = "1.0.0")] 318 + unsafe impl<#[may_dangle] T, A: Allocator> Drop for IntoIter<T, A> { 319 + fn drop(&mut self) { 320 + struct DropGuard<'a, T, A: Allocator>(&'a mut IntoIter<T, A>); 321 + 322 + impl<T, A: Allocator> Drop for DropGuard<'_, T, A> { 323 + fn drop(&mut self) { 324 + unsafe { 325 + // `IntoIter::alloc` is not used anymore after this and will be dropped by RawVec 326 + let alloc = ManuallyDrop::take(&mut self.0.alloc); 327 + // RawVec handles deallocation 328 + let _ = 
RawVec::from_raw_parts_in(self.0.buf.as_ptr(), self.0.cap, alloc); 329 + } 330 + } 331 + } 332 + 333 + let guard = DropGuard(self); 334 + // destroy the remaining elements 335 + unsafe { 336 + ptr::drop_in_place(guard.0.as_raw_mut_slice()); 337 + } 338 + // now `guard` will be dropped and do the rest 339 + } 340 + } 341 + 342 + // In addition to the SAFETY invariants of the following three unsafe traits 343 + // also refer to the vec::in_place_collect module documentation to get an overview 344 + #[unstable(issue = "none", feature = "inplace_iteration")] 345 + #[doc(hidden)] 346 + unsafe impl<T, A: Allocator> InPlaceIterable for IntoIter<T, A> {} 347 + 348 + #[unstable(issue = "none", feature = "inplace_iteration")] 349 + #[doc(hidden)] 350 + unsafe impl<T, A: Allocator> SourceIter for IntoIter<T, A> { 351 + type Source = Self; 352 + 353 + #[inline] 354 + unsafe fn as_inner(&mut self) -> &mut Self::Source { 355 + self 356 + } 357 + } 358 + 359 + #[cfg(not(no_global_oom_handling))] 360 + unsafe impl<T> AsVecIntoIter for IntoIter<T> { 361 + type Item = T; 362 + 363 + fn as_into_iter(&mut self) -> &mut IntoIter<Self::Item> { 364 + self 365 + } 366 + }
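The `ptr`/`end` pair moving toward each other is observable through the stable `as_slice`/`next`/`next_back` API. A minimal sketch (the helper `take_ends` is hypothetical, for illustration):

```rust
/// Consume one element from each end of a Vec's IntoIter.
/// `next` advances `ptr`; `next_back` moves `end` down instead.
fn take_ends(v: Vec<char>) -> (Option<char>, Option<char>, Vec<char>) {
    let mut it = v.into_iter();
    let front = it.next();
    let back = it.next_back();
    // `as_slice` views only the not-yet-yielded elements.
    (front, back, it.collect())
}

fn main() {
    assert_eq!(
        take_ends(vec!['a', 'b', 'c']),
        (Some('a'), Some('c'), vec!['b'])
    );
    // Dropping a partially consumed IntoIter drops the remaining
    // elements and frees the backing buffer, as in the Drop impl above.
}
```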
rust/alloc/vec/is_zero.rs (+120)
··· 1 + // SPDX-License-Identifier: Apache-2.0 OR MIT 2 + 3 + use crate::boxed::Box; 4 + 5 + #[rustc_specialization_trait] 6 + pub(super) unsafe trait IsZero { 7 + /// Whether this value's representation is all zeros 8 + fn is_zero(&self) -> bool; 9 + } 10 + 11 + macro_rules! impl_is_zero { 12 + ($t:ty, $is_zero:expr) => { 13 + unsafe impl IsZero for $t { 14 + #[inline] 15 + fn is_zero(&self) -> bool { 16 + $is_zero(*self) 17 + } 18 + } 19 + }; 20 + } 21 + 22 + impl_is_zero!(i16, |x| x == 0); 23 + impl_is_zero!(i32, |x| x == 0); 24 + impl_is_zero!(i64, |x| x == 0); 25 + impl_is_zero!(i128, |x| x == 0); 26 + impl_is_zero!(isize, |x| x == 0); 27 + 28 + impl_is_zero!(u16, |x| x == 0); 29 + impl_is_zero!(u32, |x| x == 0); 30 + impl_is_zero!(u64, |x| x == 0); 31 + impl_is_zero!(u128, |x| x == 0); 32 + impl_is_zero!(usize, |x| x == 0); 33 + 34 + impl_is_zero!(bool, |x| x == false); 35 + impl_is_zero!(char, |x| x == '\0'); 36 + 37 + impl_is_zero!(f32, |x: f32| x.to_bits() == 0); 38 + impl_is_zero!(f64, |x: f64| x.to_bits() == 0); 39 + 40 + unsafe impl<T> IsZero for *const T { 41 + #[inline] 42 + fn is_zero(&self) -> bool { 43 + (*self).is_null() 44 + } 45 + } 46 + 47 + unsafe impl<T> IsZero for *mut T { 48 + #[inline] 49 + fn is_zero(&self) -> bool { 50 + (*self).is_null() 51 + } 52 + } 53 + 54 + unsafe impl<T: IsZero, const N: usize> IsZero for [T; N] { 55 + #[inline] 56 + fn is_zero(&self) -> bool { 57 + // Because this is generated as a runtime check, it's not obvious that 58 + // it's worth doing if the array is really long. The threshold here 59 + // is largely arbitrary, but was picked because as of 2022-05-01 LLVM 60 + // can const-fold the check in `vec![[0; 32]; n]` but not in 61 + // `vec![[0; 64]; n]`: https://godbolt.org/z/WTzjzfs5b 62 + // Feel free to tweak if you have better evidence. 63 + 64 + N <= 32 && self.iter().all(IsZero::is_zero) 65 + } 66 + } 67 + 68 + // `Option<&T>` and `Option<Box<T>>` are guaranteed to represent `None` as null. 
69 + // For fat pointers, the bytes that would be the pointer metadata in the `Some` 70 + // variant are padding in the `None` variant, so ignoring them and 71 + // zero-initializing instead is ok. 72 + // `Option<&mut T>` never implements `Clone`, so there's no need for an impl of 73 + // `SpecFromElem`. 74 + 75 + unsafe impl<T: ?Sized> IsZero for Option<&T> { 76 + #[inline] 77 + fn is_zero(&self) -> bool { 78 + self.is_none() 79 + } 80 + } 81 + 82 + unsafe impl<T: ?Sized> IsZero for Option<Box<T>> { 83 + #[inline] 84 + fn is_zero(&self) -> bool { 85 + self.is_none() 86 + } 87 + } 88 + 89 + // `Option<num::NonZeroU32>` and similar have a representation guarantee that 90 + // they're the same size as the corresponding `u32` type, as well as a guarantee 91 + // that transmuting between `NonZeroU32` and `Option<num::NonZeroU32>` works. 92 + // While the documentation officially makes it UB to transmute from `None`, 93 + // we're the standard library so we can make extra inferences, and we know that 94 + // the only niche available to represent `None` is the one that's all zeros. 95 + 96 + macro_rules! impl_is_zero_option_of_nonzero { 97 + ($($t:ident,)+) => {$( 98 + unsafe impl IsZero for Option<core::num::$t> { 99 + #[inline] 100 + fn is_zero(&self) -> bool { 101 + self.is_none() 102 + } 103 + } 104 + )+}; 105 + } 106 + 107 + impl_is_zero_option_of_nonzero!( 108 + NonZeroU8, 109 + NonZeroU16, 110 + NonZeroU32, 111 + NonZeroU64, 112 + NonZeroU128, 113 + NonZeroI8, 114 + NonZeroI16, 115 + NonZeroI32, 116 + NonZeroI64, 117 + NonZeroI128, 118 + NonZeroUsize, 119 + NonZeroIsize, 120 + );
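The `IsZero` machinery depends on `#[rustc_specialization_trait]`, which is only usable inside the standard library. The underlying idea (detect an all-zero-bytes representation so `vec![0; n]` can be served from zeroed memory) can be sketched on stable Rust; this is an illustrative model, not the real fast path:

```rust
/// Stable sketch of the IsZero idea above: report whether a value's
/// representation is all zero bytes (illustrative, no specialization).
trait IsZero {
    fn is_zero(&self) -> bool;
}

macro_rules! impl_is_zero {
    ($($t:ty),+) => {$(
        impl IsZero for $t {
            fn is_zero(&self) -> bool { *self == 0 }
        }
    )+};
}

impl_is_zero!(u8, u16, u32, u64, u128, usize, i8, i16, i32, i64, i128, isize);

// Floats must compare bit patterns: `-0.0 == 0.0` numerically, but the
// sign bit means its bytes are not all zero.
impl IsZero for f64 {
    fn is_zero(&self) -> bool { self.to_bits() == 0 }
}

fn main() {
    assert!(0u32.is_zero());
    assert!(!1u32.is_zero());
    assert!(0.0f64.is_zero());
    assert!(!(-0.0f64).is_zero());
}
```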
rust/alloc/vec/mod.rs (+3140)
··· 1 + // SPDX-License-Identifier: Apache-2.0 OR MIT 2 + 3 + //! A contiguous growable array type with heap-allocated contents, written 4 + //! `Vec<T>`. 5 + //! 6 + //! Vectors have *O*(1) indexing, amortized *O*(1) push (to the end) and 7 + //! *O*(1) pop (from the end). 8 + //! 9 + //! Vectors ensure they never allocate more than `isize::MAX` bytes. 10 + //! 11 + //! # Examples 12 + //! 13 + //! You can explicitly create a [`Vec`] with [`Vec::new`]: 14 + //! 15 + //! ``` 16 + //! let v: Vec<i32> = Vec::new(); 17 + //! ``` 18 + //! 19 + //! ...or by using the [`vec!`] macro: 20 + //! 21 + //! ``` 22 + //! let v: Vec<i32> = vec![]; 23 + //! 24 + //! let v = vec![1, 2, 3, 4, 5]; 25 + //! 26 + //! let v = vec![0; 10]; // ten zeroes 27 + //! ``` 28 + //! 29 + //! You can [`push`] values onto the end of a vector (which will grow the vector 30 + //! as needed): 31 + //! 32 + //! ``` 33 + //! let mut v = vec![1, 2]; 34 + //! 35 + //! v.push(3); 36 + //! ``` 37 + //! 38 + //! Popping values works in much the same way: 39 + //! 40 + //! ``` 41 + //! let mut v = vec![1, 2]; 42 + //! 43 + //! let two = v.pop(); 44 + //! ``` 45 + //! 46 + //! Vectors also support indexing (through the [`Index`] and [`IndexMut`] traits): 47 + //! 48 + //! ``` 49 + //! let mut v = vec![1, 2, 3]; 50 + //! let three = v[2]; 51 + //! v[1] = v[1] + 5; 52 + //! ``` 53 + //! 54 + //! 
[`push`]: Vec::push 55 + 56 + #![stable(feature = "rust1", since = "1.0.0")] 57 + 58 + #[cfg(not(no_global_oom_handling))] 59 + use core::cmp; 60 + use core::cmp::Ordering; 61 + use core::convert::TryFrom; 62 + use core::fmt; 63 + use core::hash::{Hash, Hasher}; 64 + use core::intrinsics::{arith_offset, assume}; 65 + use core::iter; 66 + #[cfg(not(no_global_oom_handling))] 67 + use core::iter::FromIterator; 68 + use core::marker::PhantomData; 69 + use core::mem::{self, ManuallyDrop, MaybeUninit}; 70 + use core::ops::{self, Index, IndexMut, Range, RangeBounds}; 71 + use core::ptr::{self, NonNull}; 72 + use core::slice::{self, SliceIndex}; 73 + 74 + use crate::alloc::{Allocator, Global}; 75 + use crate::borrow::{Cow, ToOwned}; 76 + use crate::boxed::Box; 77 + use crate::collections::TryReserveError; 78 + use crate::raw_vec::RawVec; 79 + 80 + #[unstable(feature = "drain_filter", reason = "recently added", issue = "43244")] 81 + pub use self::drain_filter::DrainFilter; 82 + 83 + mod drain_filter; 84 + 85 + #[cfg(not(no_global_oom_handling))] 86 + #[stable(feature = "vec_splice", since = "1.21.0")] 87 + pub use self::splice::Splice; 88 + 89 + #[cfg(not(no_global_oom_handling))] 90 + mod splice; 91 + 92 + #[stable(feature = "drain", since = "1.6.0")] 93 + pub use self::drain::Drain; 94 + 95 + mod drain; 96 + 97 + #[cfg(not(no_global_oom_handling))] 98 + mod cow; 99 + 100 + #[cfg(not(no_global_oom_handling))] 101 + pub(crate) use self::in_place_collect::AsVecIntoIter; 102 + #[stable(feature = "rust1", since = "1.0.0")] 103 + pub use self::into_iter::IntoIter; 104 + 105 + mod into_iter; 106 + 107 + #[cfg(not(no_global_oom_handling))] 108 + use self::is_zero::IsZero; 109 + 110 + mod is_zero; 111 + 112 + #[cfg(not(no_global_oom_handling))] 113 + mod in_place_collect; 114 + 115 + mod partial_eq; 116 + 117 + #[cfg(not(no_global_oom_handling))] 118 + use self::spec_from_elem::SpecFromElem; 119 + 120 + #[cfg(not(no_global_oom_handling))] 121 + mod spec_from_elem; 122 + 123 + 
#[cfg(not(no_global_oom_handling))] 124 + use self::set_len_on_drop::SetLenOnDrop; 125 + 126 + #[cfg(not(no_global_oom_handling))] 127 + mod set_len_on_drop; 128 + 129 + #[cfg(not(no_global_oom_handling))] 130 + use self::in_place_drop::InPlaceDrop; 131 + 132 + #[cfg(not(no_global_oom_handling))] 133 + mod in_place_drop; 134 + 135 + #[cfg(not(no_global_oom_handling))] 136 + use self::spec_from_iter_nested::SpecFromIterNested; 137 + 138 + #[cfg(not(no_global_oom_handling))] 139 + mod spec_from_iter_nested; 140 + 141 + #[cfg(not(no_global_oom_handling))] 142 + use self::spec_from_iter::SpecFromIter; 143 + 144 + #[cfg(not(no_global_oom_handling))] 145 + mod spec_from_iter; 146 + 147 + #[cfg(not(no_global_oom_handling))] 148 + use self::spec_extend::SpecExtend; 149 + 150 + #[cfg(not(no_global_oom_handling))] 151 + mod spec_extend; 152 + 153 + /// A contiguous growable array type, written as `Vec<T>`, short for 'vector'. 154 + /// 155 + /// # Examples 156 + /// 157 + /// ``` 158 + /// let mut vec = Vec::new(); 159 + /// vec.push(1); 160 + /// vec.push(2); 161 + /// 162 + /// assert_eq!(vec.len(), 2); 163 + /// assert_eq!(vec[0], 1); 164 + /// 165 + /// assert_eq!(vec.pop(), Some(2)); 166 + /// assert_eq!(vec.len(), 1); 167 + /// 168 + /// vec[0] = 7; 169 + /// assert_eq!(vec[0], 7); 170 + /// 171 + /// vec.extend([1, 2, 3].iter().copied()); 172 + /// 173 + /// for x in &vec { 174 + /// println!("{x}"); 175 + /// } 176 + /// assert_eq!(vec, [7, 1, 2, 3]); 177 + /// ``` 178 + /// 179 + /// The [`vec!`] macro is provided for convenient initialization: 180 + /// 181 + /// ``` 182 + /// let mut vec1 = vec![1, 2, 3]; 183 + /// vec1.push(4); 184 + /// let vec2 = Vec::from([1, 2, 3, 4]); 185 + /// assert_eq!(vec1, vec2); 186 + /// ``` 187 + /// 188 + /// It can also initialize each element of a `Vec<T>` with a given value. 
189 + /// This may be more efficient than performing allocation and initialization 190 + /// in separate steps, especially when initializing a vector of zeros: 191 + /// 192 + /// ``` 193 + /// let vec = vec![0; 5]; 194 + /// assert_eq!(vec, [0, 0, 0, 0, 0]); 195 + /// 196 + /// // The following is equivalent, but potentially slower: 197 + /// let mut vec = Vec::with_capacity(5); 198 + /// vec.resize(5, 0); 199 + /// assert_eq!(vec, [0, 0, 0, 0, 0]); 200 + /// ``` 201 + /// 202 + /// For more information, see 203 + /// [Capacity and Reallocation](#capacity-and-reallocation). 204 + /// 205 + /// Use a `Vec<T>` as an efficient stack: 206 + /// 207 + /// ``` 208 + /// let mut stack = Vec::new(); 209 + /// 210 + /// stack.push(1); 211 + /// stack.push(2); 212 + /// stack.push(3); 213 + /// 214 + /// while let Some(top) = stack.pop() { 215 + /// // Prints 3, 2, 1 216 + /// println!("{top}"); 217 + /// } 218 + /// ``` 219 + /// 220 + /// # Indexing 221 + /// 222 + /// The `Vec` type allows access to values by index, because it implements the 223 + /// [`Index`] trait. For example: 224 + /// 225 + /// ``` 226 + /// let v = vec![0, 2, 4, 6]; 227 + /// println!("{}", v[1]); // it will display '2' 228 + /// ``` 229 + /// 230 + /// However, be careful: if you try to access an index which isn't in the `Vec`, 231 + /// your software will panic! You cannot do this: 232 + /// 233 + /// ```should_panic 234 + /// let v = vec![0, 2, 4, 6]; 235 + /// println!("{}", v[6]); // it will panic! 236 + /// ``` 237 + /// 238 + /// Use [`get`] and [`get_mut`] if you want to check whether the index is in 239 + /// the `Vec`. 240 + /// 241 + /// # Slicing 242 + /// 243 + /// A `Vec` can be mutable. Slices, on the other hand, are read-only objects. 244 + /// To get a [slice][prim@slice], use [`&`]. Example: 245 + /// 246 + /// ``` 247 + /// fn read_slice(slice: &[usize]) { 248 + /// // ...
249 + /// } 250 + /// 251 + /// let v = vec![0, 1]; 252 + /// read_slice(&v); 253 + /// 254 + /// // ... and that's all! 255 + /// // you can also do it like this: 256 + /// let u: &[usize] = &v; 257 + /// // or like this: 258 + /// let u: &[_] = &v; 259 + /// ``` 260 + /// 261 + /// In Rust, it's more common to pass slices as arguments rather than vectors 262 + /// when you just want to provide read access. The same goes for [`String`] and 263 + /// [`&str`]. 264 + /// 265 + /// # Capacity and reallocation 266 + /// 267 + /// The capacity of a vector is the amount of space allocated for any future 268 + /// elements that will be added onto the vector. This is not to be confused with 269 + /// the *length* of a vector, which specifies the number of actual elements 270 + /// within the vector. If a vector's length exceeds its capacity, its capacity 271 + /// will automatically be increased, but its elements will have to be 272 + /// reallocated. 273 + /// 274 + /// For example, a vector with capacity 10 and length 0 would be an empty vector 275 + /// with space for 10 more elements. Pushing 10 or fewer elements onto the 276 + /// vector will not change its capacity or cause reallocation to occur. However, 277 + /// if the vector's length is increased to 11, it will have to reallocate, which 278 + /// can be slow. For this reason, it is recommended to use [`Vec::with_capacity`] 279 + /// whenever possible to specify how big the vector is expected to get. 280 + /// 281 + /// # Guarantees 282 + /// 283 + /// Due to its incredibly fundamental nature, `Vec` makes a lot of guarantees 284 + /// about its design. This ensures that it's as low-overhead as possible in 285 + /// the general case, and can be correctly manipulated in primitive ways 286 + /// by unsafe code. Note that these guarantees refer to an unqualified `Vec<T>`. 
287 + /// If additional type parameters are added (e.g., to support custom allocators), 288 + /// overriding their defaults may change the behavior. 289 + /// 290 + /// Most fundamentally, `Vec` is and always will be a (pointer, capacity, length) 291 + /// triplet. No more, no less. The order of these fields is completely 292 + /// unspecified, and you should use the appropriate methods to modify these. 293 + /// The pointer will never be null, so this type is null-pointer-optimized. 294 + /// 295 + /// However, the pointer might not actually point to allocated memory. In particular, 296 + /// if you construct a `Vec` with capacity 0 via [`Vec::new`], [`vec![]`][`vec!`], 297 + /// [`Vec::with_capacity(0)`][`Vec::with_capacity`], or by calling [`shrink_to_fit`] 298 + /// on an empty Vec, it will not allocate memory. Similarly, if you store zero-sized 299 + /// types inside a `Vec`, it will not allocate space for them. *Note that in this case 300 + /// the `Vec` might not report a [`capacity`] of 0*. `Vec` will allocate if and only 301 + /// if <code>[mem::size_of::\<T>]\() * [capacity]\() > 0</code>. In general, `Vec`'s allocation 302 + /// details are very subtle --- if you intend to allocate memory using a `Vec` 303 + /// and use it for something else (either to pass to unsafe code, or to build your 304 + /// own memory-backed collection), be sure to deallocate this memory by using 305 + /// `from_raw_parts` to recover the `Vec` and then dropping it. 306 + /// 307 + /// If a `Vec` *has* allocated memory, then the memory it points to is on the heap 308 + /// (as defined by the allocator Rust is configured to use by default), and its 309 + /// pointer points to [`len`] initialized, contiguous elements in order (what 310 + /// you would see if you coerced it to a slice), followed by <code>[capacity] - [len]</code> 311 + /// logically uninitialized, contiguous elements. 
312 + /// 313 + /// A vector containing the elements `'a'` and `'b'` with capacity 4 can be 314 + /// visualized as below. The top part is the `Vec` struct, it contains a 315 + /// pointer to the head of the allocation in the heap, length and capacity. 316 + /// The bottom part is the allocation on the heap, a contiguous memory block. 317 + /// 318 + /// ```text 319 + /// ptr len capacity 320 + /// +--------+--------+--------+ 321 + /// | 0x0123 | 2 | 4 | 322 + /// +--------+--------+--------+ 323 + /// | 324 + /// v 325 + /// Heap +--------+--------+--------+--------+ 326 + /// | 'a' | 'b' | uninit | uninit | 327 + /// +--------+--------+--------+--------+ 328 + /// ``` 329 + /// 330 + /// - **uninit** represents memory that is not initialized, see [`MaybeUninit`]. 331 + /// - Note: the ABI is not stable and `Vec` makes no guarantees about its memory 332 + /// layout (including the order of fields). 333 + /// 334 + /// `Vec` will never perform a "small optimization" where elements are actually 335 + /// stored on the stack for two reasons: 336 + /// 337 + /// * It would make it more difficult for unsafe code to correctly manipulate 338 + /// a `Vec`. The contents of a `Vec` wouldn't have a stable address if it were 339 + /// only moved, and it would be more difficult to determine if a `Vec` had 340 + /// actually allocated memory. 341 + /// 342 + /// * It would penalize the general case, incurring an additional branch 343 + /// on every access. 344 + /// 345 + /// `Vec` will never automatically shrink itself, even if completely empty. This 346 + /// ensures no unnecessary allocations or deallocations occur. Emptying a `Vec` 347 + /// and then filling it back up to the same [`len`] should incur no calls to 348 + /// the allocator. If you wish to free up unused memory, use 349 + /// [`shrink_to_fit`] or [`shrink_to`]. 350 + /// 351 + /// [`push`] and [`insert`] will never (re)allocate if the reported capacity is 352 + /// sufficient. 
[`push`] and [`insert`] *will* (re)allocate if 353 + /// <code>[len] == [capacity]</code>. That is, the reported capacity is completely 354 + /// accurate, and can be relied on. It can even be used to manually free the memory 355 + /// allocated by a `Vec` if desired. Bulk insertion methods *may* reallocate, even 356 + /// when not necessary. 357 + /// 358 + /// `Vec` does not guarantee any particular growth strategy when reallocating 359 + /// when full, nor when [`reserve`] is called. The current strategy is basic 360 + /// and it may prove desirable to use a non-constant growth factor. Whatever 361 + /// strategy is used will of course guarantee *O*(1) amortized [`push`]. 362 + /// 363 + /// `vec![x; n]`, `vec![a, b, c, d]`, and 364 + /// [`Vec::with_capacity(n)`][`Vec::with_capacity`] will all produce a `Vec` 365 + /// with exactly the requested capacity. If <code>[len] == [capacity]</code> 366 + /// (as is the case for the [`vec!`] macro), then a `Vec<T>` can be converted to 367 + /// and from a [`Box<[T]>`][owned slice] without reallocating or moving the elements. 368 + /// 369 + /// `Vec` will not specifically overwrite any data that is removed from it, 370 + /// but also won't specifically preserve it. Its uninitialized memory is 371 + /// scratch space that it may use however it wants. It will generally just do 372 + /// whatever is most efficient or otherwise easy to implement. Do not rely on 373 + /// removed data to be erased for security purposes. Even if you drop a `Vec`, its 374 + /// buffer may simply be reused by another allocation. Even if you zero a `Vec`'s memory 375 + /// first, that might not actually happen because the optimizer does not consider 376 + /// this a side-effect that must be preserved. There is one case which we will 377 + /// not break, however: using `unsafe` code to write to the excess capacity, 378 + /// and then increasing the length to match, is always valid.
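The excess-capacity guarantee stated in the paragraph above can be exercised directly. A minimal illustrative sketch (not part of the vendored source): write into the spare capacity through the raw pointer, then bump the length with `set_len`.

```rust
fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(4);
    v.push(1);

    // Write into the excess capacity, then increase the length to match;
    // this is the one pattern the guarantee above promises never to break.
    unsafe {
        v.as_mut_ptr().add(v.len()).write(2);
        v.set_len(v.len() + 1);
    }
    assert_eq!(v, [1, 2]);
}
```

`with_capacity(4)` guarantees room for the write at index 1, so the pointer arithmetic stays inside the allocation.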
379 + /// 380 + /// Currently, `Vec` does not guarantee the order in which elements are dropped. 381 + /// The order has changed in the past and may change again. 382 + /// 383 + /// [`get`]: ../../std/vec/struct.Vec.html#method.get 384 + /// [`get_mut`]: ../../std/vec/struct.Vec.html#method.get_mut 385 + /// [`String`]: crate::string::String 386 + /// [`&str`]: type@str 387 + /// [`shrink_to_fit`]: Vec::shrink_to_fit 388 + /// [`shrink_to`]: Vec::shrink_to 389 + /// [capacity]: Vec::capacity 390 + /// [`capacity`]: Vec::capacity 391 + /// [mem::size_of::\<T>]: core::mem::size_of 392 + /// [len]: Vec::len 393 + /// [`len`]: Vec::len 394 + /// [`push`]: Vec::push 395 + /// [`insert`]: Vec::insert 396 + /// [`reserve`]: Vec::reserve 397 + /// [`MaybeUninit`]: core::mem::MaybeUninit 398 + /// [owned slice]: Box 399 + #[stable(feature = "rust1", since = "1.0.0")] 400 + #[cfg_attr(not(test), rustc_diagnostic_item = "Vec")] 401 + #[rustc_insignificant_dtor] 402 + pub struct Vec<T, #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global> { 403 + buf: RawVec<T, A>, 404 + len: usize, 405 + } 406 + 407 + //////////////////////////////////////////////////////////////////////////////// 408 + // Inherent methods 409 + //////////////////////////////////////////////////////////////////////////////// 410 + 411 + impl<T> Vec<T> { 412 + /// Constructs a new, empty `Vec<T>`. 413 + /// 414 + /// The vector will not allocate until elements are pushed onto it. 415 + /// 416 + /// # Examples 417 + /// 418 + /// ``` 419 + /// # #![allow(unused_mut)] 420 + /// let mut vec: Vec<i32> = Vec::new(); 421 + /// ``` 422 + #[inline] 423 + #[rustc_const_stable(feature = "const_vec_new", since = "1.39.0")] 424 + #[stable(feature = "rust1", since = "1.0.0")] 425 + #[must_use] 426 + pub const fn new() -> Self { 427 + Vec { buf: RawVec::NEW, len: 0 } 428 + } 429 + 430 + /// Constructs a new, empty `Vec<T>` with the specified capacity. 
431 + /// 432 + /// The vector will be able to hold exactly `capacity` elements without 433 + /// reallocating. If `capacity` is 0, the vector will not allocate. 434 + /// 435 + /// It is important to note that although the returned vector has the 436 + /// *capacity* specified, the vector will have a zero *length*. For an 437 + /// explanation of the difference between length and capacity, see 438 + /// *[Capacity and reallocation]*. 439 + /// 440 + /// [Capacity and reallocation]: #capacity-and-reallocation 441 + /// 442 + /// # Panics 443 + /// 444 + /// Panics if the new capacity exceeds `isize::MAX` bytes. 445 + /// 446 + /// # Examples 447 + /// 448 + /// ``` 449 + /// let mut vec = Vec::with_capacity(10); 450 + /// 451 + /// // The vector contains no items, even though it has capacity for more 452 + /// assert_eq!(vec.len(), 0); 453 + /// assert_eq!(vec.capacity(), 10); 454 + /// 455 + /// // These are all done without reallocating... 456 + /// for i in 0..10 { 457 + /// vec.push(i); 458 + /// } 459 + /// assert_eq!(vec.len(), 10); 460 + /// assert_eq!(vec.capacity(), 10); 461 + /// 462 + /// // ...but this may make the vector reallocate 463 + /// vec.push(11); 464 + /// assert_eq!(vec.len(), 11); 465 + /// assert!(vec.capacity() >= 11); 466 + /// ``` 467 + #[cfg(not(no_global_oom_handling))] 468 + #[inline] 469 + #[stable(feature = "rust1", since = "1.0.0")] 470 + #[must_use] 471 + pub fn with_capacity(capacity: usize) -> Self { 472 + Self::with_capacity_in(capacity, Global) 473 + } 474 + 475 + /// Creates a `Vec<T>` directly from the raw components of another vector. 476 + /// 477 + /// # Safety 478 + /// 479 + /// This is highly unsafe, due to the number of invariants that aren't 480 + /// checked: 481 + /// 482 + /// * `ptr` needs to have been previously allocated via [`String`]/`Vec<T>` 483 + /// (at least, it's highly likely to be incorrect if it wasn't). 484 + /// * `T` needs to have the same alignment as what `ptr` was allocated with. 
485 + /// (`T` having a less strict alignment is not sufficient, the alignment really 486 + /// needs to be equal to satisfy the [`dealloc`] requirement that memory must be 487 + /// allocated and deallocated with the same layout.) 488 + /// * The size of `T` times the `capacity` (ie. the allocated size in bytes) needs 489 + /// to be the same size as the pointer was allocated with. (Because similar to 490 + /// alignment, [`dealloc`] must be called with the same layout `size`.) 491 + /// * `length` needs to be less than or equal to `capacity`. 492 + /// 493 + /// Violating these may cause problems like corrupting the allocator's 494 + /// internal data structures. For example it is normally **not** safe 495 + /// to build a `Vec<u8>` from a pointer to a C `char` array with length 496 + /// `size_t`, doing so is only safe if the array was initially allocated by 497 + /// a `Vec` or `String`. 498 + /// It's also not safe to build one from a `Vec<u16>` and its length, because 499 + /// the allocator cares about the alignment, and these two types have different 500 + /// alignments. The buffer was allocated with alignment 2 (for `u16`), but after 501 + /// turning it into a `Vec<u8>` it'll be deallocated with alignment 1. To avoid 502 + /// these issues, it is often preferable to do casting/transmuting using 503 + /// [`slice::from_raw_parts`] instead. 504 + /// 505 + /// The ownership of `ptr` is effectively transferred to the 506 + /// `Vec<T>` which may then deallocate, reallocate or change the 507 + /// contents of memory pointed to by the pointer at will. Ensure 508 + /// that nothing else uses the pointer after calling this 509 + /// function. 
510 + /// 511 + /// [`String`]: crate::string::String 512 + /// [`dealloc`]: crate::alloc::GlobalAlloc::dealloc 513 + /// 514 + /// # Examples 515 + /// 516 + /// ``` 517 + /// use std::ptr; 518 + /// use std::mem; 519 + /// 520 + /// let v = vec![1, 2, 3]; 521 + /// 522 + // FIXME Update this when vec_into_raw_parts is stabilized 523 + /// // Prevent running `v`'s destructor so we are in complete control 524 + /// // of the allocation. 525 + /// let mut v = mem::ManuallyDrop::new(v); 526 + /// 527 + /// // Pull out the various important pieces of information about `v` 528 + /// let p = v.as_mut_ptr(); 529 + /// let len = v.len(); 530 + /// let cap = v.capacity(); 531 + /// 532 + /// unsafe { 533 + /// // Overwrite memory with 4, 5, 6 534 + /// for i in 0..len as isize { 535 + /// ptr::write(p.offset(i), 4 + i); 536 + /// } 537 + /// 538 + /// // Put everything back together into a Vec 539 + /// let rebuilt = Vec::from_raw_parts(p, len, cap); 540 + /// assert_eq!(rebuilt, [4, 5, 6]); 541 + /// } 542 + /// ``` 543 + #[inline] 544 + #[stable(feature = "rust1", since = "1.0.0")] 545 + pub unsafe fn from_raw_parts(ptr: *mut T, length: usize, capacity: usize) -> Self { 546 + unsafe { Self::from_raw_parts_in(ptr, length, capacity, Global) } 547 + } 548 + } 549 + 550 + impl<T, A: Allocator> Vec<T, A> { 551 + /// Constructs a new, empty `Vec<T, A>`. 552 + /// 553 + /// The vector will not allocate until elements are pushed onto it. 554 + /// 555 + /// # Examples 556 + /// 557 + /// ``` 558 + /// #![feature(allocator_api)] 559 + /// 560 + /// use std::alloc::System; 561 + /// 562 + /// # #[allow(unused_mut)] 563 + /// let mut vec: Vec<i32, _> = Vec::new_in(System); 564 + /// ``` 565 + #[inline] 566 + #[unstable(feature = "allocator_api", issue = "32838")] 567 + pub const fn new_in(alloc: A) -> Self { 568 + Vec { buf: RawVec::new_in(alloc), len: 0 } 569 + } 570 + 571 + /// Constructs a new, empty `Vec<T, A>` with the specified capacity with the provided 572 + /// allocator. 
573 + /// 574 + /// The vector will be able to hold exactly `capacity` elements without 575 + /// reallocating. If `capacity` is 0, the vector will not allocate. 576 + /// 577 + /// It is important to note that although the returned vector has the 578 + /// *capacity* specified, the vector will have a zero *length*. For an 579 + /// explanation of the difference between length and capacity, see 580 + /// *[Capacity and reallocation]*. 581 + /// 582 + /// [Capacity and reallocation]: #capacity-and-reallocation 583 + /// 584 + /// # Panics 585 + /// 586 + /// Panics if the new capacity exceeds `isize::MAX` bytes. 587 + /// 588 + /// # Examples 589 + /// 590 + /// ``` 591 + /// #![feature(allocator_api)] 592 + /// 593 + /// use std::alloc::System; 594 + /// 595 + /// let mut vec = Vec::with_capacity_in(10, System); 596 + /// 597 + /// // The vector contains no items, even though it has capacity for more 598 + /// assert_eq!(vec.len(), 0); 599 + /// assert_eq!(vec.capacity(), 10); 600 + /// 601 + /// // These are all done without reallocating... 602 + /// for i in 0..10 { 603 + /// vec.push(i); 604 + /// } 605 + /// assert_eq!(vec.len(), 10); 606 + /// assert_eq!(vec.capacity(), 10); 607 + /// 608 + /// // ...but this may make the vector reallocate 609 + /// vec.push(11); 610 + /// assert_eq!(vec.len(), 11); 611 + /// assert!(vec.capacity() >= 11); 612 + /// ``` 613 + #[cfg(not(no_global_oom_handling))] 614 + #[inline] 615 + #[unstable(feature = "allocator_api", issue = "32838")] 616 + pub fn with_capacity_in(capacity: usize, alloc: A) -> Self { 617 + Vec { buf: RawVec::with_capacity_in(capacity, alloc), len: 0 } 618 + } 619 + 620 + /// Creates a `Vec<T, A>` directly from the raw components of another vector. 
621 + /// 622 + /// # Safety 623 + /// 624 + /// This is highly unsafe, due to the number of invariants that aren't 625 + /// checked: 626 + /// 627 + /// * `ptr` needs to have been previously allocated via [`String`]/`Vec<T>` 628 + /// (at least, it's highly likely to be incorrect if it wasn't). 629 + /// * `T` needs to have the same size and alignment as what `ptr` was allocated with. 630 + /// (`T` having a less strict alignment is not sufficient, the alignment really 631 + /// needs to be equal to satisfy the [`dealloc`] requirement that memory must be 632 + /// allocated and deallocated with the same layout.) 633 + /// * `length` needs to be less than or equal to `capacity`. 634 + /// * `capacity` needs to be the capacity that the pointer was allocated with. 635 + /// 636 + /// Violating these may cause problems like corrupting the allocator's 637 + /// internal data structures. For example it is **not** safe 638 + /// to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`. 639 + /// It's also not safe to build one from a `Vec<u16>` and its length, because 640 + /// the allocator cares about the alignment, and these two types have different 641 + /// alignments. The buffer was allocated with alignment 2 (for `u16`), but after 642 + /// turning it into a `Vec<u8>` it'll be deallocated with alignment 1. 643 + /// 644 + /// The ownership of `ptr` is effectively transferred to the 645 + /// `Vec<T>` which may then deallocate, reallocate or change the 646 + /// contents of memory pointed to by the pointer at will. Ensure 647 + /// that nothing else uses the pointer after calling this 648 + /// function. 
649 + /// 650 + /// [`String`]: crate::string::String 651 + /// [`dealloc`]: crate::alloc::GlobalAlloc::dealloc 652 + /// 653 + /// # Examples 654 + /// 655 + /// ``` 656 + /// #![feature(allocator_api)] 657 + /// 658 + /// use std::alloc::System; 659 + /// 660 + /// use std::ptr; 661 + /// use std::mem; 662 + /// 663 + /// let mut v = Vec::with_capacity_in(3, System); 664 + /// v.push(1); 665 + /// v.push(2); 666 + /// v.push(3); 667 + /// 668 + // FIXME Update this when vec_into_raw_parts is stabilized 669 + /// // Prevent running `v`'s destructor so we are in complete control 670 + /// // of the allocation. 671 + /// let mut v = mem::ManuallyDrop::new(v); 672 + /// 673 + /// // Pull out the various important pieces of information about `v` 674 + /// let p = v.as_mut_ptr(); 675 + /// let len = v.len(); 676 + /// let cap = v.capacity(); 677 + /// let alloc = v.allocator(); 678 + /// 679 + /// unsafe { 680 + /// // Overwrite memory with 4, 5, 6 681 + /// for i in 0..len as isize { 682 + /// ptr::write(p.offset(i), 4 + i); 683 + /// } 684 + /// 685 + /// // Put everything back together into a Vec 686 + /// let rebuilt = Vec::from_raw_parts_in(p, len, cap, alloc.clone()); 687 + /// assert_eq!(rebuilt, [4, 5, 6]); 688 + /// } 689 + /// ``` 690 + #[inline] 691 + #[unstable(feature = "allocator_api", issue = "32838")] 692 + pub unsafe fn from_raw_parts_in(ptr: *mut T, length: usize, capacity: usize, alloc: A) -> Self { 693 + unsafe { Vec { buf: RawVec::from_raw_parts_in(ptr, capacity, alloc), len: length } } 694 + } 695 + 696 + /// Decomposes a `Vec<T>` into its raw components. 697 + /// 698 + /// Returns the raw pointer to the underlying data, the length of 699 + /// the vector (in elements), and the allocated capacity of the 700 + /// data (in elements). These are the same arguments in the same 701 + /// order as the arguments to [`from_raw_parts`]. 
702 + /// 703 + /// After calling this function, the caller is responsible for the 704 + /// memory previously managed by the `Vec`. The only way to do 705 + /// this is to convert the raw pointer, length, and capacity back 706 + /// into a `Vec` with the [`from_raw_parts`] function, allowing 707 + /// the destructor to perform the cleanup. 708 + /// 709 + /// [`from_raw_parts`]: Vec::from_raw_parts 710 + /// 711 + /// # Examples 712 + /// 713 + /// ``` 714 + /// #![feature(vec_into_raw_parts)] 715 + /// let v: Vec<i32> = vec![-1, 0, 1]; 716 + /// 717 + /// let (ptr, len, cap) = v.into_raw_parts(); 718 + /// 719 + /// let rebuilt = unsafe { 720 + /// // We can now make changes to the components, such as 721 + /// // transmuting the raw pointer to a compatible type. 722 + /// let ptr = ptr as *mut u32; 723 + /// 724 + /// Vec::from_raw_parts(ptr, len, cap) 725 + /// }; 726 + /// assert_eq!(rebuilt, [4294967295, 0, 1]); 727 + /// ``` 728 + #[unstable(feature = "vec_into_raw_parts", reason = "new API", issue = "65816")] 729 + pub fn into_raw_parts(self) -> (*mut T, usize, usize) { 730 + let mut me = ManuallyDrop::new(self); 731 + (me.as_mut_ptr(), me.len(), me.capacity()) 732 + } 733 + 734 + /// Decomposes a `Vec<T>` into its raw components. 735 + /// 736 + /// Returns the raw pointer to the underlying data, the length of the vector (in elements), 737 + /// the allocated capacity of the data (in elements), and the allocator. These are the same 738 + /// arguments in the same order as the arguments to [`from_raw_parts_in`]. 739 + /// 740 + /// After calling this function, the caller is responsible for the 741 + /// memory previously managed by the `Vec`. The only way to do 742 + /// this is to convert the raw pointer, length, and capacity back 743 + /// into a `Vec` with the [`from_raw_parts_in`] function, allowing 744 + /// the destructor to perform the cleanup. 
745 + /// 746 + /// [`from_raw_parts_in`]: Vec::from_raw_parts_in 747 + /// 748 + /// # Examples 749 + /// 750 + /// ``` 751 + /// #![feature(allocator_api, vec_into_raw_parts)] 752 + /// 753 + /// use std::alloc::System; 754 + /// 755 + /// let mut v: Vec<i32, System> = Vec::new_in(System); 756 + /// v.push(-1); 757 + /// v.push(0); 758 + /// v.push(1); 759 + /// 760 + /// let (ptr, len, cap, alloc) = v.into_raw_parts_with_alloc(); 761 + /// 762 + /// let rebuilt = unsafe { 763 + /// // We can now make changes to the components, such as 764 + /// // transmuting the raw pointer to a compatible type. 765 + /// let ptr = ptr as *mut u32; 766 + /// 767 + /// Vec::from_raw_parts_in(ptr, len, cap, alloc) 768 + /// }; 769 + /// assert_eq!(rebuilt, [4294967295, 0, 1]); 770 + /// ``` 771 + #[unstable(feature = "allocator_api", issue = "32838")] 772 + // #[unstable(feature = "vec_into_raw_parts", reason = "new API", issue = "65816")] 773 + pub fn into_raw_parts_with_alloc(self) -> (*mut T, usize, usize, A) { 774 + let mut me = ManuallyDrop::new(self); 775 + let len = me.len(); 776 + let capacity = me.capacity(); 777 + let ptr = me.as_mut_ptr(); 778 + let alloc = unsafe { ptr::read(me.allocator()) }; 779 + (ptr, len, capacity, alloc) 780 + } 781 + 782 + /// Returns the number of elements the vector can hold without 783 + /// reallocating. 784 + /// 785 + /// # Examples 786 + /// 787 + /// ``` 788 + /// let vec: Vec<i32> = Vec::with_capacity(10); 789 + /// assert_eq!(vec.capacity(), 10); 790 + /// ``` 791 + #[inline] 792 + #[stable(feature = "rust1", since = "1.0.0")] 793 + pub fn capacity(&self) -> usize { 794 + self.buf.capacity() 795 + } 796 + 797 + /// Reserves capacity for at least `additional` more elements to be inserted 798 + /// in the given `Vec<T>`. The collection may reserve more space to avoid 799 + /// frequent reallocations. After calling `reserve`, capacity will be 800 + /// greater than or equal to `self.len() + additional`. 
Does nothing if 801 + /// capacity is already sufficient. 802 + /// 803 + /// # Panics 804 + /// 805 + /// Panics if the new capacity exceeds `isize::MAX` bytes. 806 + /// 807 + /// # Examples 808 + /// 809 + /// ``` 810 + /// let mut vec = vec![1]; 811 + /// vec.reserve(10); 812 + /// assert!(vec.capacity() >= 11); 813 + /// ``` 814 + #[cfg(not(no_global_oom_handling))] 815 + #[stable(feature = "rust1", since = "1.0.0")] 816 + pub fn reserve(&mut self, additional: usize) { 817 + self.buf.reserve(self.len, additional); 818 + } 819 + 820 + /// Reserves the minimum capacity for exactly `additional` more elements to 821 + /// be inserted in the given `Vec<T>`. After calling `reserve_exact`, 822 + /// capacity will be greater than or equal to `self.len() + additional`. 823 + /// Does nothing if the capacity is already sufficient. 824 + /// 825 + /// Note that the allocator may give the collection more space than it 826 + /// requests. Therefore, capacity can not be relied upon to be precisely 827 + /// minimal. Prefer [`reserve`] if future insertions are expected. 828 + /// 829 + /// [`reserve`]: Vec::reserve 830 + /// 831 + /// # Panics 832 + /// 833 + /// Panics if the new capacity exceeds `isize::MAX` bytes. 834 + /// 835 + /// # Examples 836 + /// 837 + /// ``` 838 + /// let mut vec = vec![1]; 839 + /// vec.reserve_exact(10); 840 + /// assert!(vec.capacity() >= 11); 841 + /// ``` 842 + #[cfg(not(no_global_oom_handling))] 843 + #[stable(feature = "rust1", since = "1.0.0")] 844 + pub fn reserve_exact(&mut self, additional: usize) { 845 + self.buf.reserve_exact(self.len, additional); 846 + } 847 + 848 + /// Tries to reserve capacity for at least `additional` more elements to be inserted 849 + /// in the given `Vec<T>`. The collection may reserve more space to avoid 850 + /// frequent reallocations. After calling `try_reserve`, capacity will be 851 + /// greater than or equal to `self.len() + additional`. Does nothing if 852 + /// capacity is already sufficient. 
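The difference between `reserve` and `reserve_exact` documented above can be summarized in a short illustrative example; note that in either case only the lower bound on capacity is guaranteed, never an exact value.

```rust
fn main() {
    let mut amortized = vec![1];
    amortized.reserve(10); // may over-allocate to amortize future growth

    let mut exact = vec![1];
    exact.reserve_exact(10); // requests exactly len + 10, allocator permitting

    // Both guarantee capacity >= len + additional; neither promises more.
    assert!(amortized.capacity() >= 11);
    assert!(exact.capacity() >= 11);
}
```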
853 + /// 854 + /// # Errors 855 + /// 856 + /// If the capacity overflows, or the allocator reports a failure, then an error 857 + /// is returned. 858 + /// 859 + /// # Examples 860 + /// 861 + /// ``` 862 + /// use std::collections::TryReserveError; 863 + /// 864 + /// fn process_data(data: &[u32]) -> Result<Vec<u32>, TryReserveError> { 865 + /// let mut output = Vec::new(); 866 + /// 867 + /// // Pre-reserve the memory, exiting if we can't 868 + /// output.try_reserve(data.len())?; 869 + /// 870 + /// // Now we know this can't OOM in the middle of our complex work 871 + /// output.extend(data.iter().map(|&val| { 872 + /// val * 2 + 5 // very complicated 873 + /// })); 874 + /// 875 + /// Ok(output) 876 + /// } 877 + /// # process_data(&[1, 2, 3]).expect("why is the test harness OOMing on 12 bytes?"); 878 + /// ``` 879 + #[stable(feature = "try_reserve", since = "1.57.0")] 880 + pub fn try_reserve(&mut self, additional: usize) -> Result<(), TryReserveError> { 881 + self.buf.try_reserve(self.len, additional) 882 + } 883 + 884 + /// Tries to reserve the minimum capacity for exactly `additional` 885 + /// elements to be inserted in the given `Vec<T>`. After calling 886 + /// `try_reserve_exact`, capacity will be greater than or equal to 887 + /// `self.len() + additional` if it returns `Ok(())`. 888 + /// Does nothing if the capacity is already sufficient. 889 + /// 890 + /// Note that the allocator may give the collection more space than it 891 + /// requests. Therefore, capacity can not be relied upon to be precisely 892 + /// minimal. Prefer [`try_reserve`] if future insertions are expected. 893 + /// 894 + /// [`try_reserve`]: Vec::try_reserve 895 + /// 896 + /// # Errors 897 + /// 898 + /// If the capacity overflows, or the allocator reports a failure, then an error 899 + /// is returned. 
900 + /// 901 + /// # Examples 902 + /// 903 + /// ``` 904 + /// use std::collections::TryReserveError; 905 + /// 906 + /// fn process_data(data: &[u32]) -> Result<Vec<u32>, TryReserveError> { 907 + /// let mut output = Vec::new(); 908 + /// 909 + /// // Pre-reserve the memory, exiting if we can't 910 + /// output.try_reserve_exact(data.len())?; 911 + /// 912 + /// // Now we know this can't OOM in the middle of our complex work 913 + /// output.extend(data.iter().map(|&val| { 914 + /// val * 2 + 5 // very complicated 915 + /// })); 916 + /// 917 + /// Ok(output) 918 + /// } 919 + /// # process_data(&[1, 2, 3]).expect("why is the test harness OOMing on 12 bytes?"); 920 + /// ``` 921 + #[stable(feature = "try_reserve", since = "1.57.0")] 922 + pub fn try_reserve_exact(&mut self, additional: usize) -> Result<(), TryReserveError> { 923 + self.buf.try_reserve_exact(self.len, additional) 924 + } 925 + 926 + /// Shrinks the capacity of the vector as much as possible. 927 + /// 928 + /// It will drop down as close as possible to the length but the allocator 929 + /// may still inform the vector that there is space for a few more elements. 930 + /// 931 + /// # Examples 932 + /// 933 + /// ``` 934 + /// let mut vec = Vec::with_capacity(10); 935 + /// vec.extend([1, 2, 3]); 936 + /// assert_eq!(vec.capacity(), 10); 937 + /// vec.shrink_to_fit(); 938 + /// assert!(vec.capacity() >= 3); 939 + /// ``` 940 + #[cfg(not(no_global_oom_handling))] 941 + #[stable(feature = "rust1", since = "1.0.0")] 942 + pub fn shrink_to_fit(&mut self) { 943 + // The capacity is never less than the length, and there's nothing to do when 944 + // they are equal, so we can avoid the panic case in `RawVec::shrink_to_fit` 945 + // by only calling it with a greater capacity. 946 + if self.capacity() > self.len { 947 + self.buf.shrink_to_fit(self.len); 948 + } 949 + } 950 + 951 + /// Shrinks the capacity of the vector with a lower bound. 
+    ///
+    /// The capacity will remain at least as large as both the length
+    /// and the supplied value.
+    ///
+    /// If the current capacity is less than the lower limit, this is a no-op.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = Vec::with_capacity(10);
+    /// vec.extend([1, 2, 3]);
+    /// assert_eq!(vec.capacity(), 10);
+    /// vec.shrink_to(4);
+    /// assert!(vec.capacity() >= 4);
+    /// vec.shrink_to(0);
+    /// assert!(vec.capacity() >= 3);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[stable(feature = "shrink_to", since = "1.56.0")]
+    pub fn shrink_to(&mut self, min_capacity: usize) {
+        if self.capacity() > min_capacity {
+            self.buf.shrink_to_fit(cmp::max(self.len, min_capacity));
+        }
+    }
+
+    /// Converts the vector into [`Box<[T]>`][owned slice].
+    ///
+    /// Note that this will drop any excess capacity.
+    ///
+    /// [owned slice]: Box
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let v = vec![1, 2, 3];
+    ///
+    /// let slice = v.into_boxed_slice();
+    /// ```
+    ///
+    /// Any excess capacity is removed:
+    ///
+    /// ```
+    /// let mut vec = Vec::with_capacity(10);
+    /// vec.extend([1, 2, 3]);
+    ///
+    /// assert_eq!(vec.capacity(), 10);
+    /// let slice = vec.into_boxed_slice();
+    /// assert_eq!(slice.into_vec().capacity(), 3);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub fn into_boxed_slice(mut self) -> Box<[T], A> {
+        unsafe {
+            self.shrink_to_fit();
+            let me = ManuallyDrop::new(self);
+            let buf = ptr::read(&me.buf);
+            let len = me.len();
+            buf.into_box(len).assume_init()
+        }
+    }
+
+    /// Shortens the vector, keeping the first `len` elements and dropping
+    /// the rest.
+    ///
+    /// If `len` is greater than the vector's current length, this has no
+    /// effect.
+    ///
+    /// The [`drain`] method can emulate `truncate`, but causes the excess
+    /// elements to be returned instead of dropped.
+    ///
+    /// Note that this method has no effect on the allocated capacity
+    /// of the vector.
+    ///
+    /// # Examples
+    ///
+    /// Truncating a five element vector to two elements:
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3, 4, 5];
+    /// vec.truncate(2);
+    /// assert_eq!(vec, [1, 2]);
+    /// ```
+    ///
+    /// No truncation occurs when `len` is greater than the vector's current
+    /// length:
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3];
+    /// vec.truncate(8);
+    /// assert_eq!(vec, [1, 2, 3]);
+    /// ```
+    ///
+    /// Truncating when `len == 0` is equivalent to calling the [`clear`]
+    /// method.
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3];
+    /// vec.truncate(0);
+    /// assert_eq!(vec, []);
+    /// ```
+    ///
+    /// [`clear`]: Vec::clear
+    /// [`drain`]: Vec::drain
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub fn truncate(&mut self, len: usize) {
+        // This is safe because:
+        //
+        // * the slice passed to `drop_in_place` is valid; the `len > self.len`
+        //   case avoids creating an invalid slice, and
+        // * the `len` of the vector is shrunk before calling `drop_in_place`,
+        //   such that no value will be dropped twice in case `drop_in_place`
+        //   were to panic once (if it panics twice, the program aborts).
+        unsafe {
+            // Note: It's intentional that this is `>` and not `>=`.
+            // Changing it to `>=` has negative performance
+            // implications in some cases. See #78884 for more.
+            if len > self.len {
+                return;
+            }
+            let remaining_len = self.len - len;
+            let s = ptr::slice_from_raw_parts_mut(self.as_mut_ptr().add(len), remaining_len);
+            self.len = len;
+            ptr::drop_in_place(s);
+        }
+    }
+
+    /// Extracts a slice containing the entire vector.
+    ///
+    /// Equivalent to `&s[..]`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use std::io::{self, Write};
+    /// let buffer = vec![1, 2, 3, 5, 8];
+    /// io::sink().write(buffer.as_slice()).unwrap();
+    /// ```
+    #[inline]
+    #[stable(feature = "vec_as_slice", since = "1.7.0")]
+    pub fn as_slice(&self) -> &[T] {
+        self
+    }
+
+    /// Extracts a mutable slice of the entire vector.
+    ///
+    /// Equivalent to `&mut s[..]`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use std::io::{self, Read};
+    /// let mut buffer = vec![0; 3];
+    /// io::repeat(0b101).read_exact(buffer.as_mut_slice()).unwrap();
+    /// ```
+    #[inline]
+    #[stable(feature = "vec_as_slice", since = "1.7.0")]
+    pub fn as_mut_slice(&mut self) -> &mut [T] {
+        self
+    }
+
+    /// Returns a raw pointer to the vector's buffer.
+    ///
+    /// The caller must ensure that the vector outlives the pointer this
+    /// function returns, or else it will end up pointing to garbage.
+    /// Modifying the vector may cause its buffer to be reallocated,
+    /// which would also make any pointers to it invalid.
+    ///
+    /// The caller must also ensure that the memory the pointer (non-transitively) points to
+    /// is never written to (except inside an `UnsafeCell`) using this pointer or any pointer
+    /// derived from it. If you need to mutate the contents of the slice, use [`as_mut_ptr`].
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let x = vec![1, 2, 4];
+    /// let x_ptr = x.as_ptr();
+    ///
+    /// unsafe {
+    ///     for i in 0..x.len() {
+    ///         assert_eq!(*x_ptr.add(i), 1 << i);
+    ///     }
+    /// }
+    /// ```
+    ///
+    /// [`as_mut_ptr`]: Vec::as_mut_ptr
+    #[stable(feature = "vec_as_ptr", since = "1.37.0")]
+    #[inline]
+    pub fn as_ptr(&self) -> *const T {
+        // We shadow the slice method of the same name to avoid going through
+        // `deref`, which creates an intermediate reference.
+        let ptr = self.buf.ptr();
+        unsafe {
+            assume(!ptr.is_null());
+        }
+        ptr
+    }
+
+    /// Returns an unsafe mutable pointer to the vector's buffer.
+    ///
+    /// The caller must ensure that the vector outlives the pointer this
+    /// function returns, or else it will end up pointing to garbage.
+    /// Modifying the vector may cause its buffer to be reallocated,
+    /// which would also make any pointers to it invalid.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// // Allocate vector big enough for 4 elements.
+    /// let size = 4;
+    /// let mut x: Vec<i32> = Vec::with_capacity(size);
+    /// let x_ptr = x.as_mut_ptr();
+    ///
+    /// // Initialize elements via raw pointer writes, then set length.
+    /// unsafe {
+    ///     for i in 0..size {
+    ///         *x_ptr.add(i) = i as i32;
+    ///     }
+    ///     x.set_len(size);
+    /// }
+    /// assert_eq!(&*x, &[0, 1, 2, 3]);
+    /// ```
+    #[stable(feature = "vec_as_ptr", since = "1.37.0")]
+    #[inline]
+    pub fn as_mut_ptr(&mut self) -> *mut T {
+        // We shadow the slice method of the same name to avoid going through
+        // `deref_mut`, which creates an intermediate reference.
+        let ptr = self.buf.ptr();
+        unsafe {
+            assume(!ptr.is_null());
+        }
+        ptr
+    }
+
+    /// Returns a reference to the underlying allocator.
+    #[unstable(feature = "allocator_api", issue = "32838")]
+    #[inline]
+    pub fn allocator(&self) -> &A {
+        self.buf.allocator()
+    }
+
+    /// Forces the length of the vector to `new_len`.
+    ///
+    /// This is a low-level operation that maintains none of the normal
+    /// invariants of the type. Normally changing the length of a vector
+    /// is done using one of the safe operations instead, such as
+    /// [`truncate`], [`resize`], [`extend`], or [`clear`].
+    ///
+    /// [`truncate`]: Vec::truncate
+    /// [`resize`]: Vec::resize
+    /// [`extend`]: Extend::extend
+    /// [`clear`]: Vec::clear
+    ///
+    /// # Safety
+    ///
+    /// - `new_len` must be less than or equal to [`capacity()`].
+    /// - The elements at `old_len..new_len` must be initialized.
+    ///
+    /// [`capacity()`]: Vec::capacity
+    ///
+    /// # Examples
+    ///
+    /// This method can be useful for situations in which the vector
+    /// is serving as a buffer for other code, particularly over FFI:
+    ///
+    /// ```no_run
+    /// # #![allow(dead_code)]
+    /// # // This is just a minimal skeleton for the doc example;
+    /// # // don't use this as a starting point for a real library.
+    /// # pub struct StreamWrapper { strm: *mut std::ffi::c_void }
+    /// # const Z_OK: i32 = 0;
+    /// # extern "C" {
+    /// #     fn deflateGetDictionary(
+    /// #         strm: *mut std::ffi::c_void,
+    /// #         dictionary: *mut u8,
+    /// #         dictLength: *mut usize,
+    /// #     ) -> i32;
+    /// # }
+    /// # impl StreamWrapper {
+    /// pub fn get_dictionary(&self) -> Option<Vec<u8>> {
+    ///     // Per the FFI method's docs, "32768 bytes is always enough".
+    ///     let mut dict = Vec::with_capacity(32_768);
+    ///     let mut dict_length = 0;
+    ///     // SAFETY: When `deflateGetDictionary` returns `Z_OK`, it holds that:
+    ///     // 1. `dict_length` elements were initialized.
+    ///     // 2. `dict_length` <= the capacity (32_768)
+    ///     // which makes `set_len` safe to call.
+    ///     unsafe {
+    ///         // Make the FFI call...
+    ///         let r = deflateGetDictionary(self.strm, dict.as_mut_ptr(), &mut dict_length);
+    ///         if r == Z_OK {
+    ///             // ...and update the length to what was initialized.
+    ///             dict.set_len(dict_length);
+    ///             Some(dict)
+    ///         } else {
+    ///             None
+    ///         }
+    ///     }
+    /// }
+    /// # }
+    /// ```
+    ///
+    /// While the following example is sound, there is a memory leak since
+    /// the inner vectors were not freed prior to the `set_len` call:
+    ///
+    /// ```
+    /// let mut vec = vec![vec![1, 0, 0],
+    ///                    vec![0, 1, 0],
+    ///                    vec![0, 0, 1]];
+    /// // SAFETY:
+    /// // 1. `old_len..0` is empty so no elements need to be initialized.
+    /// // 2. `0 <= capacity` always holds whatever `capacity` is.
+    /// unsafe {
+    ///     vec.set_len(0);
+    /// }
+    /// ```
+    ///
+    /// Normally, here, one would use [`clear`] instead to correctly drop
+    /// the contents and thus not leak memory.
+    #[inline]
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub unsafe fn set_len(&mut self, new_len: usize) {
+        debug_assert!(new_len <= self.capacity());
+
+        self.len = new_len;
+    }
+
+    /// Removes an element from the vector and returns it.
+    ///
+    /// The removed element is replaced by the last element of the vector.
+    ///
+    /// This does not preserve ordering, but is *O*(1).
+    /// If you need to preserve the element order, use [`remove`] instead.
+    ///
+    /// [`remove`]: Vec::remove
+    ///
+    /// # Panics
+    ///
+    /// Panics if `index` is out of bounds.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut v = vec!["foo", "bar", "baz", "qux"];
+    ///
+    /// assert_eq!(v.swap_remove(1), "bar");
+    /// assert_eq!(v, ["foo", "qux", "baz"]);
+    ///
+    /// assert_eq!(v.swap_remove(0), "foo");
+    /// assert_eq!(v, ["baz", "qux"]);
+    /// ```
+    #[inline]
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub fn swap_remove(&mut self, index: usize) -> T {
+        #[cold]
+        #[inline(never)]
+        fn assert_failed(index: usize, len: usize) -> ! {
+            panic!("swap_remove index (is {index}) should be < len (is {len})");
+        }
+
+        let len = self.len();
+        if index >= len {
+            assert_failed(index, len);
+        }
+        unsafe {
+            // We replace self[index] with the last element. Note that if the
+            // bounds check above succeeds there must be a last element (which
+            // can be self[index] itself).
+            let value = ptr::read(self.as_ptr().add(index));
+            let base_ptr = self.as_mut_ptr();
+            ptr::copy(base_ptr.add(len - 1), base_ptr.add(index), 1);
+            self.set_len(len - 1);
+            value
+        }
+    }
+
+    /// Inserts an element at position `index` within the vector, shifting all
+    /// elements after it to the right.
+    ///
+    /// # Panics
+    ///
+    /// Panics if `index > len`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3];
+    /// vec.insert(1, 4);
+    /// assert_eq!(vec, [1, 4, 2, 3]);
+    /// vec.insert(4, 5);
+    /// assert_eq!(vec, [1, 4, 2, 3, 5]);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub fn insert(&mut self, index: usize, element: T) {
+        #[cold]
+        #[inline(never)]
+        fn assert_failed(index: usize, len: usize) -> ! {
+            panic!("insertion index (is {index}) should be <= len (is {len})");
+        }
+
+        let len = self.len();
+        if index > len {
+            assert_failed(index, len);
+        }
+
+        // space for the new element
+        if len == self.buf.capacity() {
+            self.reserve(1);
+        }
+
+        unsafe {
+            // infallible
+            // The spot to put the new value
+            {
+                let p = self.as_mut_ptr().add(index);
+                // Shift everything over to make space. (Duplicating the
+                // `index`th element into two consecutive places.)
+                ptr::copy(p, p.offset(1), len - index);
+                // Write it in, overwriting the first copy of the `index`th
+                // element.
+                ptr::write(p, element);
+            }
+            self.set_len(len + 1);
+        }
+    }
+
+    /// Removes and returns the element at position `index` within the vector,
+    /// shifting all elements after it to the left.
+    ///
+    /// Note: Because this shifts over the remaining elements, it has a
+    /// worst-case performance of *O*(*n*). If you don't need the order of elements
+    /// to be preserved, use [`swap_remove`] instead. If you'd like to remove
+    /// elements from the beginning of the `Vec`, consider using
+    /// [`VecDeque::pop_front`] instead.
+    ///
+    /// [`swap_remove`]: Vec::swap_remove
+    /// [`VecDeque::pop_front`]: crate::collections::VecDeque::pop_front
+    ///
+    /// # Panics
+    ///
+    /// Panics if `index` is out of bounds.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut v = vec![1, 2, 3];
+    /// assert_eq!(v.remove(1), 2);
+    /// assert_eq!(v, [1, 3]);
+    /// ```
+    #[stable(feature = "rust1", since = "1.0.0")]
+    #[track_caller]
+    pub fn remove(&mut self, index: usize) -> T {
+        #[cold]
+        #[inline(never)]
+        #[track_caller]
+        fn assert_failed(index: usize, len: usize) -> ! {
+            panic!("removal index (is {index}) should be < len (is {len})");
+        }
+
+        let len = self.len();
+        if index >= len {
+            assert_failed(index, len);
+        }
+        unsafe {
+            // infallible
+            let ret;
+            {
+                // the place we are taking from.
+                let ptr = self.as_mut_ptr().add(index);
+                // copy it out, unsafely having a copy of the value on
+                // the stack and in the vector at the same time.
+                ret = ptr::read(ptr);
+
+                // Shift everything down to fill in that spot.
+                ptr::copy(ptr.offset(1), ptr, len - index - 1);
+            }
+            self.set_len(len - 1);
+            ret
+        }
+    }
+
+    /// Retains only the elements specified by the predicate.
+    ///
+    /// In other words, remove all elements `e` for which `f(&e)` returns `false`.
+    /// This method operates in place, visiting each element exactly once in the
+    /// original order, and preserves the order of the retained elements.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3, 4];
+    /// vec.retain(|&x| x % 2 == 0);
+    /// assert_eq!(vec, [2, 4]);
+    /// ```
+    ///
+    /// Because the elements are visited exactly once in the original order,
+    /// external state may be used to decide which elements to keep.
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3, 4, 5];
+    /// let keep = [false, true, true, false, true];
+    /// let mut iter = keep.iter();
+    /// vec.retain(|_| *iter.next().unwrap());
+    /// assert_eq!(vec, [2, 3, 5]);
+    /// ```
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub fn retain<F>(&mut self, mut f: F)
+    where
+        F: FnMut(&T) -> bool,
+    {
+        self.retain_mut(|elem| f(elem));
+    }
+
+    /// Retains only the elements specified by the predicate, passing a mutable reference to it.
+    ///
+    /// In other words, remove all elements `e` such that `f(&mut e)` returns `false`.
+    /// This method operates in place, visiting each element exactly once in the
+    /// original order, and preserves the order of the retained elements.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3, 4];
+    /// vec.retain_mut(|x| if *x > 3 {
+    ///     false
+    /// } else {
+    ///     *x += 1;
+    ///     true
+    /// });
+    /// assert_eq!(vec, [2, 3, 4]);
+    /// ```
+    #[stable(feature = "vec_retain_mut", since = "1.61.0")]
+    pub fn retain_mut<F>(&mut self, mut f: F)
+    where
+        F: FnMut(&mut T) -> bool,
+    {
+        let original_len = self.len();
+        // Avoid double drop if the drop guard is not executed,
+        // since we may make some holes during the process.
+        unsafe { self.set_len(0) };
+
+        // Vec: [Kept, Kept, Hole, Hole, Hole, Hole, Unchecked, Unchecked]
+        //      |<-              processed len     ->| ^- next to check
+        //                  |<-  deleted cnt      ->|
+        //      |<-                    original_len                     ->|
+        // Kept: Elements which predicate returns true on.
+        // Hole: Moved or dropped element slot.
+        // Unchecked: Unchecked valid elements.
+        //
+        // This drop guard will be invoked when predicate or `drop` of element panicked.
+        // It shifts unchecked elements to cover holes and `set_len` to the correct length.
+        // In cases when predicate and `drop` never panic, it will be optimized out.
+        struct BackshiftOnDrop<'a, T, A: Allocator> {
+            v: &'a mut Vec<T, A>,
+            processed_len: usize,
+            deleted_cnt: usize,
+            original_len: usize,
+        }
+
+        impl<T, A: Allocator> Drop for BackshiftOnDrop<'_, T, A> {
+            fn drop(&mut self) {
+                if self.deleted_cnt > 0 {
+                    // SAFETY: Trailing unchecked items must be valid since we never touch them.
+                    unsafe {
+                        ptr::copy(
+                            self.v.as_ptr().add(self.processed_len),
+                            self.v.as_mut_ptr().add(self.processed_len - self.deleted_cnt),
+                            self.original_len - self.processed_len,
+                        );
+                    }
+                }
+                // SAFETY: After filling holes, all items are in contiguous memory.
+                unsafe {
+                    self.v.set_len(self.original_len - self.deleted_cnt);
+                }
+            }
+        }
+
+        let mut g = BackshiftOnDrop { v: self, processed_len: 0, deleted_cnt: 0, original_len };
+
+        fn process_loop<F, T, A: Allocator, const DELETED: bool>(
+            original_len: usize,
+            f: &mut F,
+            g: &mut BackshiftOnDrop<'_, T, A>,
+        ) where
+            F: FnMut(&mut T) -> bool,
+        {
+            while g.processed_len != original_len {
+                // SAFETY: Unchecked element must be valid.
+                let cur = unsafe { &mut *g.v.as_mut_ptr().add(g.processed_len) };
+                if !f(cur) {
+                    // Advance early to avoid double drop if `drop_in_place` panicked.
+                    g.processed_len += 1;
+                    g.deleted_cnt += 1;
+                    // SAFETY: We never touch this element again after dropped.
+                    unsafe { ptr::drop_in_place(cur) };
+                    // We already advanced the counter.
+                    if DELETED {
+                        continue;
+                    } else {
+                        break;
+                    }
+                }
+                if DELETED {
+                    // SAFETY: `deleted_cnt` > 0, so the hole slot must not overlap with current element.
+                    // We use copy for move, and never touch this element again.
+                    unsafe {
+                        let hole_slot = g.v.as_mut_ptr().add(g.processed_len - g.deleted_cnt);
+                        ptr::copy_nonoverlapping(cur, hole_slot, 1);
+                    }
+                }
+                g.processed_len += 1;
+            }
+        }
+
+        // Stage 1: Nothing was deleted.
+        process_loop::<F, T, A, false>(original_len, &mut f, &mut g);
+
+        // Stage 2: Some elements were deleted.
+        process_loop::<F, T, A, true>(original_len, &mut f, &mut g);
+
+        // All items are processed. This can be optimized to `set_len` by LLVM.
+        drop(g);
+    }
+
+    /// Removes all but the first of consecutive elements in the vector that resolve to the same
+    /// key.
+    ///
+    /// If the vector is sorted, this removes all duplicates.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![10, 20, 21, 30, 20];
+    ///
+    /// vec.dedup_by_key(|i| *i / 10);
+    ///
+    /// assert_eq!(vec, [10, 20, 30, 20]);
+    /// ```
+    #[stable(feature = "dedup_by", since = "1.16.0")]
+    #[inline]
+    pub fn dedup_by_key<F, K>(&mut self, mut key: F)
+    where
+        F: FnMut(&mut T) -> K,
+        K: PartialEq,
+    {
+        self.dedup_by(|a, b| key(a) == key(b))
+    }
+
+    /// Removes all but the first of consecutive elements in the vector satisfying a given equality
+    /// relation.
+    ///
+    /// The `same_bucket` function is passed references to two elements from the vector and
+    /// must determine if the elements compare equal. The elements are passed in opposite order
+    /// from their order in the slice, so if `same_bucket(a, b)` returns `true`, `a` is removed.
+    ///
+    /// If the vector is sorted, this removes all duplicates.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec!["foo", "bar", "Bar", "baz", "bar"];
+    ///
+    /// vec.dedup_by(|a, b| a.eq_ignore_ascii_case(b));
+    ///
+    /// assert_eq!(vec, ["foo", "bar", "baz", "bar"]);
+    /// ```
+    #[stable(feature = "dedup_by", since = "1.16.0")]
+    pub fn dedup_by<F>(&mut self, mut same_bucket: F)
+    where
+        F: FnMut(&mut T, &mut T) -> bool,
+    {
+        let len = self.len();
+        if len <= 1 {
+            return;
+        }
+
+        /* INVARIANT: vec.len() > read >= write > write-1 >= 0 */
+        struct FillGapOnDrop<'a, T, A: core::alloc::Allocator> {
+            /* Offset of the element we want to check if it is duplicate */
+            read: usize,
+
+            /* Offset of the place where we want to place the non-duplicate
+             * when we find it. */
+            write: usize,
+
+            /* The Vec that would need correction if `same_bucket` panicked */
+            vec: &'a mut Vec<T, A>,
+        }
+
+        impl<'a, T, A: core::alloc::Allocator> Drop for FillGapOnDrop<'a, T, A> {
+            fn drop(&mut self) {
+                /* This code gets executed when `same_bucket` panics */
+
+                /* SAFETY: invariant guarantees that `read - write`
+                 * and `len - read` never overflow and that the copy is always
+                 * in-bounds. */
+                unsafe {
+                    let ptr = self.vec.as_mut_ptr();
+                    let len = self.vec.len();
+
+                    /* How many items were left when `same_bucket` panicked.
+                     * Basically vec[read..].len() */
+                    let items_left = len.wrapping_sub(self.read);
+
+                    /* Pointer to first item in vec[write..write+items_left] slice */
+                    let dropped_ptr = ptr.add(self.write);
+                    /* Pointer to first item in vec[read..] slice */
+                    let valid_ptr = ptr.add(self.read);
+
+                    /* Copy `vec[read..]` to `vec[write..write+items_left]`.
+                     * The slices can overlap, so `copy_nonoverlapping` cannot be used */
+                    ptr::copy(valid_ptr, dropped_ptr, items_left);
+
+                    /* How many items have been already dropped
+                     * Basically vec[read..write].len() */
+                    let dropped = self.read.wrapping_sub(self.write);
+
+                    self.vec.set_len(len - dropped);
+                }
+            }
+        }
+
+        let mut gap = FillGapOnDrop { read: 1, write: 1, vec: self };
+        let ptr = gap.vec.as_mut_ptr();
+
+        /* Drop items while going through Vec, it should be more efficient than
+         * doing slice partition_dedup + truncate */
+
+        /* SAFETY: Because of the invariant, read_ptr, prev_ptr and write_ptr
+         * are always in-bounds and read_ptr never aliases prev_ptr */
+        unsafe {
+            while gap.read < len {
+                let read_ptr = ptr.add(gap.read);
+                let prev_ptr = ptr.add(gap.write.wrapping_sub(1));
+
+                if same_bucket(&mut *read_ptr, &mut *prev_ptr) {
+                    // Increase `gap.read` now since the drop may panic.
+                    gap.read += 1;
+                    /* We have found duplicate, drop it in-place */
+                    ptr::drop_in_place(read_ptr);
+                } else {
+                    let write_ptr = ptr.add(gap.write);
+
+                    /* Because `read_ptr` can be equal to `write_ptr`, we either
+                     * have to use `copy` or conditional `copy_nonoverlapping`.
+                     * Looks like the first option is faster. */
+                    ptr::copy(read_ptr, write_ptr, 1);
+
+                    /* We have filled that place, so go further */
+                    gap.write += 1;
+                    gap.read += 1;
+                }
+            }
+
+            /* Technically we could let `gap` clean up with its Drop, but
+             * when `same_bucket` is guaranteed to not panic, this bloats a little
+             * the codegen, so we just do it manually */
+            gap.vec.set_len(gap.write);
+            mem::forget(gap);
+        }
+    }
+
+    /// Appends an element to the back of a collection.
+    ///
+    /// # Panics
+    ///
+    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2];
+    /// vec.push(3);
+    /// assert_eq!(vec, [1, 2, 3]);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[inline]
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub fn push(&mut self, value: T) {
+        // This will panic or abort if we would allocate > isize::MAX bytes
+        // or if the length increment would overflow for zero-sized types.
+        if self.len == self.buf.capacity() {
+            self.buf.reserve_for_push(self.len);
+        }
+        unsafe {
+            let end = self.as_mut_ptr().add(self.len);
+            ptr::write(end, value);
+            self.len += 1;
+        }
+    }
+
+    /// Tries to append an element to the back of a collection.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2];
+    /// vec.try_push(3).unwrap();
+    /// assert_eq!(vec, [1, 2, 3]);
+    /// ```
+    #[inline]
+    #[stable(feature = "kernel", since = "1.0.0")]
+    pub fn try_push(&mut self, value: T) -> Result<(), TryReserveError> {
+        if self.len == self.buf.capacity() {
+            self.buf.try_reserve_for_push(self.len)?;
+        }
+        unsafe {
+            let end = self.as_mut_ptr().add(self.len);
+            ptr::write(end, value);
+            self.len += 1;
+        }
+        Ok(())
+    }
+
+    /// Removes the last element from a vector and returns it, or [`None`] if it
+    /// is empty.
+    ///
+    /// If you'd like to pop the first element, consider using
+    /// [`VecDeque::pop_front`] instead.
+    ///
+    /// [`VecDeque::pop_front`]: crate::collections::VecDeque::pop_front
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3];
+    /// assert_eq!(vec.pop(), Some(3));
+    /// assert_eq!(vec, [1, 2]);
+    /// ```
+    #[inline]
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub fn pop(&mut self) -> Option<T> {
+        if self.len == 0 {
+            None
+        } else {
+            unsafe {
+                self.len -= 1;
+                Some(ptr::read(self.as_ptr().add(self.len())))
+            }
+        }
+    }
+
+    /// Moves all the elements of `other` into `self`, leaving `other` empty.
+    ///
+    /// # Panics
+    ///
+    /// Panics if the number of elements in the vector overflows a `usize`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3];
+    /// let mut vec2 = vec![4, 5, 6];
+    /// vec.append(&mut vec2);
+    /// assert_eq!(vec, [1, 2, 3, 4, 5, 6]);
+    /// assert_eq!(vec2, []);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[inline]
+    #[stable(feature = "append", since = "1.4.0")]
+    pub fn append(&mut self, other: &mut Self) {
+        unsafe {
+            self.append_elements(other.as_slice() as _);
+            other.set_len(0);
+        }
+    }
+
+    /// Appends elements to `self` from another buffer.
+    #[cfg(not(no_global_oom_handling))]
+    #[inline]
+    unsafe fn append_elements(&mut self, other: *const [T]) {
+        let count = unsafe { (*other).len() };
+        self.reserve(count);
+        let len = self.len();
+        unsafe { ptr::copy_nonoverlapping(other as *const T, self.as_mut_ptr().add(len), count) };
+        self.len += count;
+    }
+
+    /// Removes the specified range from the vector in bulk, returning all
+    /// removed elements as an iterator. If the iterator is dropped before
+    /// being fully consumed, it drops the remaining removed elements.
+    ///
+    /// The returned iterator keeps a mutable borrow on the vector to optimize
+    /// its implementation.
+    ///
+    /// # Panics
+    ///
+    /// Panics if the starting point is greater than the end point or if
+    /// the end point is greater than the length of the vector.
+    ///
+    /// # Leaking
+    ///
+    /// If the returned iterator goes out of scope without being dropped (due to
+    /// [`mem::forget`], for example), the vector may have lost and leaked
+    /// elements arbitrarily, including elements outside the range.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut v = vec![1, 2, 3];
+    /// let u: Vec<_> = v.drain(1..).collect();
+    /// assert_eq!(v, &[1]);
+    /// assert_eq!(u, &[2, 3]);
+    ///
+    /// // A full range clears the vector, like `clear()` does
+    /// v.drain(..);
+    /// assert_eq!(v, &[]);
+    /// ```
+    #[stable(feature = "drain", since = "1.6.0")]
+    pub fn drain<R>(&mut self, range: R) -> Drain<'_, T, A>
+    where
+        R: RangeBounds<usize>,
+    {
+        // Memory safety
+        //
+        // When the Drain is first created, it shortens the length of
+        // the source vector to make sure no uninitialized or moved-from elements
+        // are accessible at all if the Drain's destructor never gets to run.
+        //
+        // Drain will ptr::read out the values to remove.
+        // When finished, the remaining tail of the vec is copied back to cover
+        // the hole, and the vector length is restored to the new length.
+        //
+        let len = self.len();
+        let Range { start, end } = slice::range(range, ..len);
+
+        unsafe {
+            // Set self's length to `start`, to be safe in case Drain is leaked.
+            self.set_len(start);
+            // Use the borrow in the IterMut to indicate borrowing behavior of the
+            // whole Drain iterator (like &mut T).
+            let range_slice = slice::from_raw_parts_mut(self.as_mut_ptr().add(start), end - start);
+            Drain {
+                tail_start: end,
+                tail_len: len - end,
+                iter: range_slice.iter(),
+                vec: NonNull::from(self),
+            }
+        }
+    }
+
+    /// Clears the vector, removing all values.
+    ///
+    /// Note that this method has no effect on the allocated capacity
+    /// of the vector.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut v = vec![1, 2, 3];
+    ///
+    /// v.clear();
+    ///
+    /// assert!(v.is_empty());
+    /// ```
+    #[inline]
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub fn clear(&mut self) {
+        let elems: *mut [T] = self.as_mut_slice();
+
+        // SAFETY:
+        // - `elems` comes directly from `as_mut_slice` and is therefore valid.
+        // - Setting `self.len` before calling `drop_in_place` means that,
+        //   if an element's `Drop` impl panics, the vector's `Drop` impl will
+        //   do nothing (leaking the rest of the elements) instead of dropping
+        //   some twice.
+        unsafe {
+            self.len = 0;
+            ptr::drop_in_place(elems);
+        }
+    }
+
+    /// Returns the number of elements in the vector, also referred to
+    /// as its 'length'.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let a = vec![1, 2, 3];
+    /// assert_eq!(a.len(), 3);
+    /// ```
+    #[inline]
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub fn len(&self) -> usize {
+        self.len
+    }
+
+    /// Returns `true` if the vector contains no elements.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut v = Vec::new();
+    /// assert!(v.is_empty());
+    ///
+    /// v.push(1);
+    /// assert!(!v.is_empty());
+    /// ```
+    #[stable(feature = "rust1", since = "1.0.0")]
+    pub fn is_empty(&self) -> bool {
+        self.len() == 0
+    }
+
+    /// Splits the collection into two at the given index.
+    ///
+    /// Returns a newly allocated vector containing the elements in the range
+    /// `[at, len)`. After the call, the original vector will be left containing
+    /// the elements `[0, at)` with its previous capacity unchanged.
+    ///
+    /// # Panics
+    ///
+    /// Panics if `at > len`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3];
+    /// let vec2 = vec.split_off(1);
+    /// assert_eq!(vec, [1]);
+    /// assert_eq!(vec2, [2, 3]);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[inline]
+    #[must_use = "use `.truncate()` if you don't need the other half"]
+    #[stable(feature = "split_off", since = "1.4.0")]
+    pub fn split_off(&mut self, at: usize) -> Self
+    where
+        A: Clone,
+    {
+        #[cold]
+        #[inline(never)]
+        fn assert_failed(at: usize, len: usize) -> ! {
+            panic!("`at` split index (is {at}) should be <= len (is {len})");
+        }
+
+        if at > self.len() {
+            assert_failed(at, self.len());
+        }
+
+        if at == 0 {
+            // the new vector can take over the original buffer and avoid the copy
+            return mem::replace(
+                self,
+                Vec::with_capacity_in(self.capacity(), self.allocator().clone()),
+            );
+        }
+
+        let other_len = self.len - at;
+        let mut other = Vec::with_capacity_in(other_len, self.allocator().clone());
+
+        // Unsafely `set_len` and copy items to `other`.
+        unsafe {
+            self.set_len(at);
+            other.set_len(other_len);
+
+            ptr::copy_nonoverlapping(self.as_ptr().add(at), other.as_mut_ptr(), other.len());
+        }
+        other
+    }
+
+    /// Resizes the `Vec` in-place so that `len` is equal to `new_len`.
+    ///
+    /// If `new_len` is greater than `len`, the `Vec` is extended by the
+    /// difference, with each additional slot filled with the result of
+    /// calling the closure `f`. The return values from `f` will end up
+    /// in the `Vec` in the order they have been generated.
+    ///
+    /// If `new_len` is less than `len`, the `Vec` is simply truncated.
+    ///
+    /// This method uses a closure to create new values on every push. If
+    /// you'd rather [`Clone`] a given value, use [`Vec::resize`]. If you
+    /// want to use the [`Default`] trait to generate values, you can
+    /// pass [`Default::default`] as the second argument.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 3];
+    /// vec.resize_with(5, Default::default);
+    /// assert_eq!(vec, [1, 2, 3, 0, 0]);
+    ///
+    /// let mut vec = vec![];
+    /// let mut p = 1;
+    /// vec.resize_with(4, || { p *= 2; p });
+    /// assert_eq!(vec, [2, 4, 8, 16]);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[stable(feature = "vec_resize_with", since = "1.33.0")]
+    pub fn resize_with<F>(&mut self, new_len: usize, f: F)
+    where
+        F: FnMut() -> T,
+    {
+        let len = self.len();
+        if new_len > len {
+            self.extend_with(new_len - len, ExtendFunc(f));
+        } else {
+            self.truncate(new_len);
+        }
+    }
+
+    /// Consumes and leaks the `Vec`, returning a mutable reference to the contents,
+    /// `&'a mut [T]`. Note that the type `T` must outlive the chosen lifetime
+    /// `'a`. If the type has only static references, or none at all, then this
+    /// may be chosen to be `'static`.
+    ///
+    /// As of Rust 1.57, this method does not reallocate or shrink the `Vec`,
+    /// so the leaked allocation may include unused capacity that is not part
+    /// of the returned slice.
+    ///
+    /// This function is mainly useful for data that lives for the remainder of
+    /// the program's life. Dropping the returned reference will cause a memory
+    /// leak.
+    ///
+    /// # Examples
+    ///
+    /// Simple usage:
+    ///
+    /// ```
+    /// let x = vec![1, 2, 3];
+    /// let static_ref: &'static mut [usize] = x.leak();
+    /// static_ref[0] += 1;
+    /// assert_eq!(static_ref, &[2, 2, 3]);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[stable(feature = "vec_leak", since = "1.47.0")]
+    #[inline]
+    pub fn leak<'a>(self) -> &'a mut [T]
+    where
+        A: 'a,
+    {
+        let mut me = ManuallyDrop::new(self);
+        unsafe { slice::from_raw_parts_mut(me.as_mut_ptr(), me.len) }
+    }
+
+    /// Returns the remaining spare capacity of the vector as a slice of
+    /// `MaybeUninit<T>`.
+    ///
+    /// The returned slice can be used to fill the vector with data (e.g. by
+    /// reading from a file) before marking the data as initialized using the
+    /// [`set_len`] method.
+    ///
+    /// [`set_len`]: Vec::set_len
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// // Allocate vector big enough for 10 elements.
+    /// let mut v = Vec::with_capacity(10);
+    ///
+    /// // Fill in the first 3 elements.
+    /// let uninit = v.spare_capacity_mut();
+    /// uninit[0].write(0);
+    /// uninit[1].write(1);
+    /// uninit[2].write(2);
+    ///
+    /// // Mark the first 3 elements of the vector as being initialized.
+    /// unsafe {
+    ///     v.set_len(3);
+    /// }
+    ///
+    /// assert_eq!(&v, &[0, 1, 2]);
+    /// ```
+    #[stable(feature = "vec_spare_capacity", since = "1.60.0")]
+    #[inline]
+    pub fn spare_capacity_mut(&mut self) -> &mut [MaybeUninit<T>] {
+        // Note:
+        // This method is not implemented in terms of `split_at_spare_mut`,
+        // to prevent invalidation of pointers to the buffer.
+        unsafe {
+            slice::from_raw_parts_mut(
+                self.as_mut_ptr().add(self.len) as *mut MaybeUninit<T>,
+                self.buf.capacity() - self.len,
+            )
+        }
+    }
+
+    /// Returns vector content as a slice of `T`, along with the remaining spare
+    /// capacity of the vector as a slice of `MaybeUninit<T>`.
+    ///
+    /// The returned spare capacity slice can be used to fill the vector with data
+    /// (e.g. by reading from a file) before marking the data as initialized using
+    /// the [`set_len`] method.
+    ///
+    /// [`set_len`]: Vec::set_len
+    ///
+    /// Note that this is a low-level API, which should be used with care for
+    /// optimization purposes. If you need to append data to a `Vec`
+    /// you can use [`push`], [`extend`], [`extend_from_slice`],
+    /// [`extend_from_within`], [`insert`], [`append`], [`resize`] or
+    /// [`resize_with`], depending on your exact needs.
+    ///
+    /// [`push`]: Vec::push
+    /// [`extend`]: Vec::extend
+    /// [`extend_from_slice`]: Vec::extend_from_slice
+    /// [`extend_from_within`]: Vec::extend_from_within
+    /// [`insert`]: Vec::insert
+    /// [`append`]: Vec::append
+    /// [`resize`]: Vec::resize
+    /// [`resize_with`]: Vec::resize_with
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// #![feature(vec_split_at_spare)]
+    ///
+    /// let mut v = vec![1, 1, 2];
+    ///
+    /// // Reserve additional space big enough for 10 elements.
+    /// v.reserve(10);
+    ///
+    /// let (init, uninit) = v.split_at_spare_mut();
+    /// let sum = init.iter().copied().sum::<u32>();
+    ///
+    /// // Fill in the next 4 elements.
+    /// uninit[0].write(sum);
+    /// uninit[1].write(sum * 2);
+    /// uninit[2].write(sum * 3);
+    /// uninit[3].write(sum * 4);
+    ///
+    /// // Mark the 4 elements of the vector as being initialized.
+    /// unsafe {
+    ///     let len = v.len();
+    ///     v.set_len(len + 4);
+    /// }
+    ///
+    /// assert_eq!(&v, &[1, 1, 2, 4, 8, 12, 16]);
+    /// ```
+    #[unstable(feature = "vec_split_at_spare", issue = "81944")]
+    #[inline]
+    pub fn split_at_spare_mut(&mut self) -> (&mut [T], &mut [MaybeUninit<T>]) {
+        // SAFETY:
+        // - len is ignored and so never changed
+        let (init, spare, _) = unsafe { self.split_at_spare_mut_with_len() };
+        (init, spare)
+    }
+
+    /// Safety: changing the returned `.2` (`&mut usize`) is considered the same as calling `.set_len(_)`.
+    ///
+    /// This method provides unique access to all vec parts at once in `extend_from_within`.
+    unsafe fn split_at_spare_mut_with_len(
+        &mut self,
+    ) -> (&mut [T], &mut [MaybeUninit<T>], &mut usize) {
+        let ptr = self.as_mut_ptr();
+        // SAFETY:
+        // - `ptr` is guaranteed to be valid for `self.len` elements
+        // - but the allocation extends out to `self.buf.capacity()` elements, possibly
+        //   uninitialized
+        let spare_ptr = unsafe { ptr.add(self.len) };
+        let spare_ptr = spare_ptr.cast::<MaybeUninit<T>>();
+        let spare_len = self.buf.capacity() - self.len;
+
+        // SAFETY:
+        // - `ptr` is guaranteed to be valid for `self.len` elements
+        // - `spare_ptr` is pointing one element past the buffer, so it doesn't overlap with `initialized`
+        unsafe {
+            let initialized = slice::from_raw_parts_mut(ptr, self.len);
+            let spare = slice::from_raw_parts_mut(spare_ptr, spare_len);
+
+            (initialized, spare, &mut self.len)
+        }
+    }
+}
+
+impl<T: Clone, A: Allocator> Vec<T, A> {
+    /// Resizes the `Vec` in-place so that `len` is equal to `new_len`.
+    ///
+    /// If `new_len` is greater than `len`, the `Vec` is extended by the
+    /// difference, with each additional slot filled with `value`.
+    /// If `new_len` is less than `len`, the `Vec` is simply truncated.
+    ///
+    /// This method requires `T` to implement [`Clone`],
+    /// in order to be able to clone the passed value.
+    /// If you need more flexibility (or want to rely on [`Default`] instead of
+    /// [`Clone`]), use [`Vec::resize_with`].
+    /// If you only need to resize to a smaller size, use [`Vec::truncate`].
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec!["hello"];
+    /// vec.resize(3, "world");
+    /// assert_eq!(vec, ["hello", "world", "world"]);
+    ///
+    /// let mut vec = vec![1, 2, 3, 4];
+    /// vec.resize(2, 0);
+    /// assert_eq!(vec, [1, 2]);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[stable(feature = "vec_resize", since = "1.5.0")]
+    pub fn resize(&mut self, new_len: usize, value: T) {
+        let len = self.len();
+
+        if new_len > len {
+            self.extend_with(new_len - len, ExtendElement(value))
+        } else {
+            self.truncate(new_len);
+        }
+    }
+
+    /// Clones and appends all elements in a slice to the `Vec`.
+    ///
+    /// Iterates over the slice `other`, clones each element, and then appends
+    /// it to this `Vec`. The `other` slice is traversed in-order.
+    ///
+    /// Note that this function is the same as [`extend`] except that it is
+    /// specialized to work with slices instead. If and when Rust gets
+    /// specialization this function will likely be deprecated (but still
+    /// available).
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1];
+    /// vec.extend_from_slice(&[2, 3, 4]);
+    /// assert_eq!(vec, [1, 2, 3, 4]);
+    /// ```
+    ///
+    /// [`extend`]: Vec::extend
+    #[cfg(not(no_global_oom_handling))]
+    #[stable(feature = "vec_extend_from_slice", since = "1.6.0")]
+    pub fn extend_from_slice(&mut self, other: &[T]) {
+        self.spec_extend(other.iter())
+    }
+
+    /// Copies elements from `src` range to the end of the vector.
+    ///
+    /// # Panics
+    ///
+    /// Panics if the starting point is greater than the end point or if
+    /// the end point is greater than the length of the vector.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![0, 1, 2, 3, 4];
+    ///
+    /// vec.extend_from_within(2..);
+    /// assert_eq!(vec, [0, 1, 2, 3, 4, 2, 3, 4]);
+    ///
+    /// vec.extend_from_within(..2);
+    /// assert_eq!(vec, [0, 1, 2, 3, 4, 2, 3, 4, 0, 1]);
+    ///
+    /// vec.extend_from_within(4..8);
+    /// assert_eq!(vec, [0, 1, 2, 3, 4, 2, 3, 4, 0, 1, 4, 2, 3, 4]);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[stable(feature = "vec_extend_from_within", since = "1.53.0")]
+    pub fn extend_from_within<R>(&mut self, src: R)
+    where
+        R: RangeBounds<usize>,
+    {
+        let range = slice::range(src, ..self.len());
+        self.reserve(range.len());
+
+        // SAFETY:
+        // - `slice::range` guarantees that the given range is valid for indexing self
+        unsafe {
+            self.spec_extend_from_within(range);
+        }
+    }
+}
+
+impl<T, A: Allocator, const N: usize> Vec<[T; N], A> {
+    /// Takes a `Vec<[T; N]>` and flattens it into a `Vec<T>`.
+    ///
+    /// # Panics
+    ///
+    /// Panics if the length of the resulting vector would overflow a `usize`.
+    ///
+    /// This is only possible when flattening a vector of arrays of zero-sized
+    /// types, and thus tends to be irrelevant in practice. If
+    /// `size_of::<T>() > 0`, this will never panic.
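The length bookkeeping done by this flattening (the resulting length is `len * N`) can be sanity-checked on stable Rust with `concat`, which produces the same flattened contents; this is only a sketch, not the unstable `into_flattened` itself:

```rust
fn main() {
    let vec: Vec<[i32; 3]> = vec![[1, 2, 3], [4, 5, 6]];
    let len = vec.len();

    // Flattening multiplies the length by N: 2 arrays of 3 -> 6 elements.
    let flattened: Vec<i32> = vec.concat();
    assert_eq!(flattened.len(), len * 3);
    assert_eq!(flattened, [1, 2, 3, 4, 5, 6]);
}
```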
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// #![feature(slice_flatten)]
+    ///
+    /// let mut vec = vec![[1, 2, 3], [4, 5, 6], [7, 8, 9]];
+    /// assert_eq!(vec.pop(), Some([7, 8, 9]));
+    ///
+    /// let mut flattened = vec.into_flattened();
+    /// assert_eq!(flattened.pop(), Some(6));
+    /// ```
+    #[unstable(feature = "slice_flatten", issue = "95629")]
+    pub fn into_flattened(self) -> Vec<T, A> {
+        let (ptr, len, cap, alloc) = self.into_raw_parts_with_alloc();
+        let (new_len, new_cap) = if mem::size_of::<T>() == 0 {
+            (len.checked_mul(N).expect("vec len overflow"), usize::MAX)
+        } else {
+            // SAFETY:
+            // - `cap * N` cannot overflow because the allocation is already in
+            //   the address space.
+            // - Each `[T; N]` has `N` valid elements, so there are `len * N`
+            //   valid elements in the allocation.
+            unsafe { (len.unchecked_mul(N), cap.unchecked_mul(N)) }
+        };
+        // SAFETY:
+        // - `ptr` was allocated by `self`
+        // - `ptr` is well-aligned because `[T; N]` has the same alignment as `T`.
+        // - `new_cap` refers to the same sized allocation as `cap` because
+        //   `new_cap * size_of::<T>()` == `cap * size_of::<[T; N]>()`
+        // - `len` <= `cap`, so `len * N` <= `cap * N`.
+        unsafe { Vec::<T, A>::from_raw_parts_in(ptr.cast(), new_len, new_cap, alloc) }
+    }
+}
+
+// This code generalizes `extend_with_{element,default}`.
+trait ExtendWith<T> {
+    fn next(&mut self) -> T;
+    fn last(self) -> T;
+}
+
+struct ExtendElement<T>(T);
+impl<T: Clone> ExtendWith<T> for ExtendElement<T> {
+    fn next(&mut self) -> T {
+        self.0.clone()
+    }
+    fn last(self) -> T {
+        self.0
+    }
+}
+
+struct ExtendFunc<F>(F);
+impl<T, F: FnMut() -> T> ExtendWith<T> for ExtendFunc<F> {
+    fn next(&mut self) -> T {
+        (self.0)()
+    }
+    fn last(mut self) -> T {
+        (self.0)()
+    }
+}
+
+impl<T, A: Allocator> Vec<T, A> {
+    #[cfg(not(no_global_oom_handling))]
+    /// Extend the vector by `n` values, using the given generator.
+    fn extend_with<E: ExtendWith<T>>(&mut self, n: usize, mut value: E) {
+        self.reserve(n);
+
+        unsafe {
+            let mut ptr = self.as_mut_ptr().add(self.len());
+            // Use SetLenOnDrop to work around a compiler bug where it might not
+            // realize that the stores through `ptr` and the length writes via
+            // `self.set_len()` don't alias.
+            let mut local_len = SetLenOnDrop::new(&mut self.len);
+
+            // Write all elements except the last one
+            for _ in 1..n {
+                ptr::write(ptr, value.next());
+                ptr = ptr.offset(1);
+                // Increment the length in every step in case next() panics
+                local_len.increment_len(1);
+            }
+
+            if n > 0 {
+                // We can write the last element directly without cloning needlessly
+                ptr::write(ptr, value.last());
+                local_len.increment_len(1);
+            }
+
+            // len set by scope guard
+        }
+    }
+}
+
+impl<T: PartialEq, A: Allocator> Vec<T, A> {
+    /// Removes consecutive repeated elements in the vector according to the
+    /// [`PartialEq`] trait implementation.
+    ///
+    /// If the vector is sorted, this removes all duplicates.
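Since `dedup` only collapses runs of *consecutive* equal elements, sorting first makes every duplicate adjacent and turns it into a full deduplication pass; a minimal sketch:

```rust
fn main() {
    // `dedup` alone does nothing here: no two equal elements are adjacent.
    let mut v = vec![3, 1, 3, 2, 1, 3];
    v.dedup();
    assert_eq!(v, [3, 1, 3, 2, 1, 3]);

    // Sorting first makes all duplicates adjacent, so `dedup`
    // then removes every one of them.
    v.sort_unstable();
    v.dedup();
    assert_eq!(v, [1, 2, 3]);
}
```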
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut vec = vec![1, 2, 2, 3, 2];
+    ///
+    /// vec.dedup();
+    ///
+    /// assert_eq!(vec, [1, 2, 3, 2]);
+    /// ```
+    #[stable(feature = "rust1", since = "1.0.0")]
+    #[inline]
+    pub fn dedup(&mut self) {
+        self.dedup_by(|a, b| a == b)
+    }
+}
+
+////////////////////////////////////////////////////////////////////////////////
+// Internal methods and functions
+////////////////////////////////////////////////////////////////////////////////
+
+#[doc(hidden)]
+#[cfg(not(no_global_oom_handling))]
+#[stable(feature = "rust1", since = "1.0.0")]
+pub fn from_elem<T: Clone>(elem: T, n: usize) -> Vec<T> {
+    <T as SpecFromElem>::from_elem(elem, n, Global)
+}
+
+#[doc(hidden)]
+#[cfg(not(no_global_oom_handling))]
+#[unstable(feature = "allocator_api", issue = "32838")]
+pub fn from_elem_in<T: Clone, A: Allocator>(elem: T, n: usize, alloc: A) -> Vec<T, A> {
+    <T as SpecFromElem>::from_elem(elem, n, alloc)
+}
+
+trait ExtendFromWithinSpec {
+    /// # Safety
+    ///
+    /// - `src` needs to be a valid index
+    /// - `self.capacity() - self.len()` must be `>= src.len()`
+    unsafe fn spec_extend_from_within(&mut self, src: Range<usize>);
+}
+
+impl<T: Clone, A: Allocator> ExtendFromWithinSpec for Vec<T, A> {
+    default unsafe fn spec_extend_from_within(&mut self, src: Range<usize>) {
+        // SAFETY:
+        // - len is increased only after initializing elements
+        let (this, spare, len) = unsafe { self.split_at_spare_mut_with_len() };
+
+        // SAFETY:
+        // - caller guarantees that src is a valid index
+        let to_clone = unsafe { this.get_unchecked(src) };
+
+        iter::zip(to_clone, spare)
+            .map(|(src, dst)| dst.write(src.clone()))
+            // Note:
+            // - Each element was just initialized with `MaybeUninit::write`, so it's ok to increase len
+            // - len is increased after each element to prevent leaks (see issue #82533)
+            .for_each(|_| *len += 1);
+    }
+}
+
+impl<T: Copy, A: Allocator> ExtendFromWithinSpec for Vec<T, A> {
+    unsafe fn spec_extend_from_within(&mut self, src: Range<usize>) {
+        let count = src.len();
+        {
+            let (init, spare) = self.split_at_spare_mut();
+
+            // SAFETY:
+            // - caller guarantees that `src` is a valid index
+            let source = unsafe { init.get_unchecked(src) };
+
+            // SAFETY:
+            // - Both pointers are created from unique slice references (`&mut [_]`)
+            //   so they are valid and do not overlap.
+            // - Elements are `Copy` so it's OK to copy them, without doing
+            //   anything with the original values
+            // - `count` is equal to the len of `source`, so source is valid for
+            //   `count` reads
+            // - `.reserve(count)` guarantees that `spare.len() >= count` so spare
+            //   is valid for `count` writes
+            unsafe { ptr::copy_nonoverlapping(source.as_ptr(), spare.as_mut_ptr() as _, count) };
+        }
+
+        // SAFETY:
+        // - The elements were just initialized by `copy_nonoverlapping`
+        self.len += count;
+    }
+}
+
+////////////////////////////////////////////////////////////////////////////////
+// Common trait implementations for Vec
+////////////////////////////////////////////////////////////////////////////////
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T, A: Allocator> ops::Deref for Vec<T, A> {
+    type Target = [T];
+
+    fn deref(&self) -> &[T] {
+        unsafe { slice::from_raw_parts(self.as_ptr(), self.len) }
+    }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T, A: Allocator> ops::DerefMut for Vec<T, A> {
+    fn deref_mut(&mut self) -> &mut [T] {
+        unsafe { slice::from_raw_parts_mut(self.as_mut_ptr(), self.len) }
+    }
+}
+
+#[cfg(not(no_global_oom_handling))]
+trait SpecCloneFrom {
+    fn clone_from(this: &mut Self, other: &Self);
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<T: Clone, A: Allocator> SpecCloneFrom for Vec<T, A> {
+    default fn clone_from(this: &mut Self, other: &Self) {
+        // drop anything that will not be overwritten
+        this.truncate(other.len());
+
+        // self.len <= other.len due to the truncate above, so the
+        // slices here are always in-bounds.
+        let (init, tail) = other.split_at(this.len());
+
+        // reuse the contained values' allocations/resources.
+        this.clone_from_slice(init);
+        this.extend_from_slice(tail);
+    }
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<T: Copy, A: Allocator> SpecCloneFrom for Vec<T, A> {
+    fn clone_from(this: &mut Self, other: &Self) {
+        this.clear();
+        this.extend_from_slice(other);
+    }
+}
+
+#[cfg(not(no_global_oom_handling))]
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T: Clone, A: Allocator + Clone> Clone for Vec<T, A> {
+    #[cfg(not(test))]
+    fn clone(&self) -> Self {
+        let alloc = self.allocator().clone();
+        <[T]>::to_vec_in(&**self, alloc)
+    }
+
+    // HACK(japaric): with cfg(test) the inherent `[T]::to_vec` method, which is
+    // required for this method definition, is not available. Instead use the
+    // `slice::to_vec` function, which is only available with cfg(test).
+    // NB see the slice::hack module in slice.rs for more information
+    #[cfg(test)]
+    fn clone(&self) -> Self {
+        let alloc = self.allocator().clone();
+        crate::slice::to_vec(&**self, alloc)
+    }
+
+    fn clone_from(&mut self, other: &Self) {
+        SpecCloneFrom::clone_from(self, other)
+    }
+}
+
+/// The hash of a vector is the same as that of the corresponding slice,
+/// as required by the `core::borrow::Borrow` implementation.
+///
+/// ```
+/// #![feature(build_hasher_simple_hash_one)]
+/// use std::hash::BuildHasher;
+///
+/// let b = std::collections::hash_map::RandomState::new();
+/// let v: Vec<u8> = vec![0xa8, 0x3c, 0x09];
+/// let s: &[u8] = &[0xa8, 0x3c, 0x09];
+/// assert_eq!(b.hash_one(v), b.hash_one(s));
+/// ```
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T: Hash, A: Allocator> Hash for Vec<T, A> {
+    #[inline]
+    fn hash<H: Hasher>(&self, state: &mut H) {
+        Hash::hash(&**self, state)
+    }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+#[rustc_on_unimplemented(
+    message = "vector indices are of type `usize` or ranges of `usize`",
+    label = "vector indices are of type `usize` or ranges of `usize`"
+)]
+impl<T, I: SliceIndex<[T]>, A: Allocator> Index<I> for Vec<T, A> {
+    type Output = I::Output;
+
+    #[inline]
+    fn index(&self, index: I) -> &Self::Output {
+        Index::index(&**self, index)
+    }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+#[rustc_on_unimplemented(
+    message = "vector indices are of type `usize` or ranges of `usize`",
+    label = "vector indices are of type `usize` or ranges of `usize`"
+)]
+impl<T, I: SliceIndex<[T]>, A: Allocator> IndexMut<I> for Vec<T, A> {
+    #[inline]
+    fn index_mut(&mut self, index: I) -> &mut Self::Output {
+        IndexMut::index_mut(&mut **self, index)
+    }
+}
+
+#[cfg(not(no_global_oom_handling))]
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T> FromIterator<T> for Vec<T> {
+    #[inline]
+    fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Vec<T> {
+        <Self as SpecFromIter<T, I::IntoIter>>::from_iter(iter.into_iter())
+    }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T, A: Allocator> IntoIterator for Vec<T, A> {
+    type Item = T;
+    type IntoIter = IntoIter<T, A>;
+
+    /// Creates a consuming iterator, that is, one that moves each value out of
+    /// the vector (from start to end). The vector cannot be used after calling
+    /// this.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let v = vec!["a".to_string(), "b".to_string()];
+    /// for s in v.into_iter() {
+    ///     // s has type String, not &String
+    ///     println!("{s}");
+    /// }
+    /// ```
+    #[inline]
+    fn into_iter(self) -> IntoIter<T, A> {
+        unsafe {
+            let mut me = ManuallyDrop::new(self);
+            let alloc = ManuallyDrop::new(ptr::read(me.allocator()));
+            let begin = me.as_mut_ptr();
+            let end = if mem::size_of::<T>() == 0 {
+                arith_offset(begin as *const i8, me.len() as isize) as *const T
+            } else {
+                begin.add(me.len()) as *const T
+            };
+            let cap = me.buf.capacity();
+            IntoIter {
+                buf: NonNull::new_unchecked(begin),
+                phantom: PhantomData,
+                cap,
+                alloc,
+                ptr: begin,
+                end,
+            }
+        }
+    }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<'a, T, A: Allocator> IntoIterator for &'a Vec<T, A> {
+    type Item = &'a T;
+    type IntoIter = slice::Iter<'a, T>;
+
+    fn into_iter(self) -> slice::Iter<'a, T> {
+        self.iter()
+    }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<'a, T, A: Allocator> IntoIterator for &'a mut Vec<T, A> {
+    type Item = &'a mut T;
+    type IntoIter = slice::IterMut<'a, T>;
+
+    fn into_iter(self) -> slice::IterMut<'a, T> {
+        self.iter_mut()
+    }
+}
+
+#[cfg(not(no_global_oom_handling))]
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<T, A: Allocator> Extend<T> for Vec<T, A> {
+    #[inline]
+    fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I) {
+        <Self as SpecExtend<T, I::IntoIter>>::spec_extend(self, iter.into_iter())
+    }
+
+    #[inline]
+    fn extend_one(&mut self, item: T) {
+        self.push(item);
+    }
+
+    #[inline]
+    fn extend_reserve(&mut self, additional: usize) {
+        self.reserve(additional);
+    }
+}
+
+impl<T, A: Allocator> Vec<T, A> {
+    // leaf method to which various SpecFrom/SpecExtend implementations delegate when
+    // they have no further optimizations to apply
+    #[cfg(not(no_global_oom_handling))]
+    fn extend_desugared<I: Iterator<Item = T>>(&mut self, mut iterator: I) {
+        // This is the case for a general iterator.
+        //
+        // This function should be the moral equivalent of:
+        //
+        //      for item in iterator {
+        //          self.push(item);
+        //      }
+        while let Some(element) = iterator.next() {
+            let len = self.len();
+            if len == self.capacity() {
+                let (lower, _) = iterator.size_hint();
+                self.reserve(lower.saturating_add(1));
+            }
+            unsafe {
+                ptr::write(self.as_mut_ptr().add(len), element);
+                // Since next() executes user code which can panic we have to bump the length
+                // after each step.
+                // NB can't overflow since we would have had to alloc the address space
+                self.set_len(len + 1);
+            }
+        }
+    }
+
+    /// Creates a splicing iterator that replaces the specified range in the vector
+    /// with the given `replace_with` iterator and yields the removed items.
+    /// `replace_with` does not need to be the same length as `range`.
+    ///
+    /// `range` is removed even if the iterator is not consumed until the end.
+    ///
+    /// It is unspecified how many elements are removed from the vector
+    /// if the `Splice` value is leaked.
+    ///
+    /// The input iterator `replace_with` is only consumed when the `Splice` value is dropped.
+    ///
+    /// This is optimal if:
+    ///
+    /// * The tail (elements in the vector after `range`) is empty,
+    /// * or `replace_with` yields no more elements than `range`'s length,
+    /// * or the lower bound of its `size_hint()` is exact.
+    ///
+    /// Otherwise, a temporary vector is allocated and the tail is moved twice.
+    ///
+    /// # Panics
+    ///
+    /// Panics if the starting point is greater than the end point or if
+    /// the end point is greater than the length of the vector.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// let mut v = vec![1, 2, 3, 4];
+    /// let new = [7, 8, 9];
+    /// let u: Vec<_> = v.splice(1..3, new).collect();
+    /// assert_eq!(v, &[1, 7, 8, 9, 4]);
+    /// assert_eq!(u, &[2, 3]);
+    /// ```
+    #[cfg(not(no_global_oom_handling))]
+    #[inline]
+    #[stable(feature = "vec_splice", since = "1.21.0")]
+    pub fn splice<R, I>(&mut self, range: R, replace_with: I) -> Splice<'_, I::IntoIter, A>
+    where
+        R: RangeBounds<usize>,
+        I: IntoIterator<Item = T>,
+    {
+        Splice { drain: self.drain(range), replace_with: replace_with.into_iter() }
+    }
+
+    /// Creates an iterator which uses a closure to determine if an element should be removed.
+    ///
+    /// If the closure returns true, then the element is removed and yielded.
+    /// If the closure returns false, the element will remain in the vector and will not be yielded
+    /// by the iterator.
+    ///
+    /// Using this method is equivalent to the following code:
+    ///
+    /// ```
+    /// # let some_predicate = |x: &mut i32| { *x == 2 || *x == 3 || *x == 6 };
+    /// # let mut vec = vec![1, 2, 3, 4, 5, 6];
+    /// let mut i = 0;
+    /// while i < vec.len() {
+    ///     if some_predicate(&mut vec[i]) {
+    ///         let val = vec.remove(i);
+    ///         // your code here
+    ///     } else {
+    ///         i += 1;
+    ///     }
+    /// }
+    ///
+    /// # assert_eq!(vec, vec![1, 4, 5]);
+    /// ```
+    ///
+    /// But `drain_filter` is easier to use. `drain_filter` is also more efficient,
+    /// because it can backshift the elements of the array in bulk.
+    ///
+    /// Note that `drain_filter` also lets you mutate every element in the filter closure,
+    /// regardless of whether you choose to keep or remove it.
+    ///
+    /// # Examples
+    ///
+    /// Splitting an array into evens and odds, reusing the original allocation:
+    ///
+    /// ```
+    /// #![feature(drain_filter)]
+    /// let mut numbers = vec![1, 2, 3, 4, 5, 6, 8, 9, 11, 13, 14, 15];
+    ///
+    /// let evens = numbers.drain_filter(|x| *x % 2 == 0).collect::<Vec<_>>();
+    /// let odds = numbers;
+    ///
+    /// assert_eq!(evens, vec![2, 4, 6, 8, 14]);
+    /// assert_eq!(odds, vec![1, 3, 5, 9, 11, 13, 15]);
+    /// ```
+    #[unstable(feature = "drain_filter", reason = "recently added", issue = "43244")]
+    pub fn drain_filter<F>(&mut self, filter: F) -> DrainFilter<'_, T, F, A>
+    where
+        F: FnMut(&mut T) -> bool,
+    {
+        let old_len = self.len();
+
+        // Guard against us getting leaked (leak amplification)
+        unsafe {
+            self.set_len(0);
+        }
+
+        DrainFilter { vec: self, idx: 0, del: 0, old_len, pred: filter, panic_flag: false }
+    }
+}
+
+/// Extend implementation that copies elements out of references before pushing them onto the Vec.
+///
+/// This implementation is specialized for slice iterators, where it uses [`copy_from_slice`] to
+/// append the entire slice at once.
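A short sketch of what this by-reference `Extend` impl enables on stable Rust: a `Vec<T>` can be extended directly from an iterator of `&T` (such as a slice iterator), with each `Copy` element copied out of its reference before being pushed:

```rust
fn main() {
    let mut v: Vec<i32> = vec![1];

    // `Extend<&'a T>` lets iterators of `&i32` feed a `Vec<i32>`:
    // each reference is dereferenced (copied) before the push.
    v.extend(&[2, 3]);
    v.extend([4, 5].iter());

    assert_eq!(v, [1, 2, 3, 4, 5]);
}
```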
2858 + /// 2859 + /// [`copy_from_slice`]: slice::copy_from_slice 2860 + #[cfg(not(no_global_oom_handling))] 2861 + #[stable(feature = "extend_ref", since = "1.2.0")] 2862 + impl<'a, T: Copy + 'a, A: Allocator + 'a> Extend<&'a T> for Vec<T, A> { 2863 + fn extend<I: IntoIterator<Item = &'a T>>(&mut self, iter: I) { 2864 + self.spec_extend(iter.into_iter()) 2865 + } 2866 + 2867 + #[inline] 2868 + fn extend_one(&mut self, &item: &'a T) { 2869 + self.push(item); 2870 + } 2871 + 2872 + #[inline] 2873 + fn extend_reserve(&mut self, additional: usize) { 2874 + self.reserve(additional); 2875 + } 2876 + } 2877 + 2878 + /// Implements comparison of vectors, [lexicographically](core::cmp::Ord#lexicographical-comparison). 2879 + #[stable(feature = "rust1", since = "1.0.0")] 2880 + impl<T: PartialOrd, A: Allocator> PartialOrd for Vec<T, A> { 2881 + #[inline] 2882 + fn partial_cmp(&self, other: &Self) -> Option<Ordering> { 2883 + PartialOrd::partial_cmp(&**self, &**other) 2884 + } 2885 + } 2886 + 2887 + #[stable(feature = "rust1", since = "1.0.0")] 2888 + impl<T: Eq, A: Allocator> Eq for Vec<T, A> {} 2889 + 2890 + /// Implements ordering of vectors, [lexicographically](core::cmp::Ord#lexicographical-comparison). 
2891 + #[stable(feature = "rust1", since = "1.0.0")] 2892 + impl<T: Ord, A: Allocator> Ord for Vec<T, A> { 2893 + #[inline] 2894 + fn cmp(&self, other: &Self) -> Ordering { 2895 + Ord::cmp(&**self, &**other) 2896 + } 2897 + } 2898 + 2899 + #[stable(feature = "rust1", since = "1.0.0")] 2900 + unsafe impl<#[may_dangle] T, A: Allocator> Drop for Vec<T, A> { 2901 + fn drop(&mut self) { 2902 + unsafe { 2903 + // use drop for [T] 2904 + // use a raw slice to refer to the elements of the vector as weakest necessary type; 2905 + // could avoid questions of validity in certain cases 2906 + ptr::drop_in_place(ptr::slice_from_raw_parts_mut(self.as_mut_ptr(), self.len)) 2907 + } 2908 + // RawVec handles deallocation 2909 + } 2910 + } 2911 + 2912 + #[stable(feature = "rust1", since = "1.0.0")] 2913 + #[rustc_const_unstable(feature = "const_default_impls", issue = "87864")] 2914 + impl<T> const Default for Vec<T> { 2915 + /// Creates an empty `Vec<T>`. 2916 + fn default() -> Vec<T> { 2917 + Vec::new() 2918 + } 2919 + } 2920 + 2921 + #[stable(feature = "rust1", since = "1.0.0")] 2922 + impl<T: fmt::Debug, A: Allocator> fmt::Debug for Vec<T, A> { 2923 + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 2924 + fmt::Debug::fmt(&**self, f) 2925 + } 2926 + } 2927 + 2928 + #[stable(feature = "rust1", since = "1.0.0")] 2929 + impl<T, A: Allocator> AsRef<Vec<T, A>> for Vec<T, A> { 2930 + fn as_ref(&self) -> &Vec<T, A> { 2931 + self 2932 + } 2933 + } 2934 + 2935 + #[stable(feature = "vec_as_mut", since = "1.5.0")] 2936 + impl<T, A: Allocator> AsMut<Vec<T, A>> for Vec<T, A> { 2937 + fn as_mut(&mut self) -> &mut Vec<T, A> { 2938 + self 2939 + } 2940 + } 2941 + 2942 + #[stable(feature = "rust1", since = "1.0.0")] 2943 + impl<T, A: Allocator> AsRef<[T]> for Vec<T, A> { 2944 + fn as_ref(&self) -> &[T] { 2945 + self 2946 + } 2947 + } 2948 + 2949 + #[stable(feature = "vec_as_mut", since = "1.5.0")] 2950 + impl<T, A: Allocator> AsMut<[T]> for Vec<T, A> { 2951 + fn as_mut(&mut self) -> 
&mut [T] { 2952 + self 2953 + } 2954 + } 2955 + 2956 + #[cfg(not(no_global_oom_handling))] 2957 + #[stable(feature = "rust1", since = "1.0.0")] 2958 + impl<T: Clone> From<&[T]> for Vec<T> { 2959 + /// Allocate a `Vec<T>` and fill it by cloning `s`'s items. 2960 + /// 2961 + /// # Examples 2962 + /// 2963 + /// ``` 2964 + /// assert_eq!(Vec::from(&[1, 2, 3][..]), vec![1, 2, 3]); 2965 + /// ``` 2966 + #[cfg(not(test))] 2967 + fn from(s: &[T]) -> Vec<T> { 2968 + s.to_vec() 2969 + } 2970 + #[cfg(test)] 2971 + fn from(s: &[T]) -> Vec<T> { 2972 + crate::slice::to_vec(s, Global) 2973 + } 2974 + } 2975 + 2976 + #[cfg(not(no_global_oom_handling))] 2977 + #[stable(feature = "vec_from_mut", since = "1.19.0")] 2978 + impl<T: Clone> From<&mut [T]> for Vec<T> { 2979 + /// Allocate a `Vec<T>` and fill it by cloning `s`'s items. 2980 + /// 2981 + /// # Examples 2982 + /// 2983 + /// ``` 2984 + /// assert_eq!(Vec::from(&mut [1, 2, 3][..]), vec![1, 2, 3]); 2985 + /// ``` 2986 + #[cfg(not(test))] 2987 + fn from(s: &mut [T]) -> Vec<T> { 2988 + s.to_vec() 2989 + } 2990 + #[cfg(test)] 2991 + fn from(s: &mut [T]) -> Vec<T> { 2992 + crate::slice::to_vec(s, Global) 2993 + } 2994 + } 2995 + 2996 + #[cfg(not(no_global_oom_handling))] 2997 + #[stable(feature = "vec_from_array", since = "1.44.0")] 2998 + impl<T, const N: usize> From<[T; N]> for Vec<T> { 2999 + /// Allocate a `Vec<T>` and move `s`'s items into it. 
3000 + /// 3001 + /// # Examples 3002 + /// 3003 + /// ``` 3004 + /// assert_eq!(Vec::from([1, 2, 3]), vec![1, 2, 3]); 3005 + /// ``` 3006 + #[cfg(not(test))] 3007 + fn from(s: [T; N]) -> Vec<T> { 3008 + <[T]>::into_vec(box s) 3009 + } 3010 + 3011 + #[cfg(test)] 3012 + fn from(s: [T; N]) -> Vec<T> { 3013 + crate::slice::into_vec(box s) 3014 + } 3015 + } 3016 + 3017 + #[stable(feature = "vec_from_cow_slice", since = "1.14.0")] 3018 + impl<'a, T> From<Cow<'a, [T]>> for Vec<T> 3019 + where 3020 + [T]: ToOwned<Owned = Vec<T>>, 3021 + { 3022 + /// Convert a clone-on-write slice into a vector. 3023 + /// 3024 + /// If `s` already owns a `Vec<T>`, it will be returned directly. 3025 + /// If `s` is borrowing a slice, a new `Vec<T>` will be allocated and 3026 + /// filled by cloning `s`'s items into it. 3027 + /// 3028 + /// # Examples 3029 + /// 3030 + /// ``` 3031 + /// # use std::borrow::Cow; 3032 + /// let o: Cow<[i32]> = Cow::Owned(vec![1, 2, 3]); 3033 + /// let b: Cow<[i32]> = Cow::Borrowed(&[1, 2, 3]); 3034 + /// assert_eq!(Vec::from(o), Vec::from(b)); 3035 + /// ``` 3036 + fn from(s: Cow<'a, [T]>) -> Vec<T> { 3037 + s.into_owned() 3038 + } 3039 + } 3040 + 3041 + // note: test pulls in libstd, which causes errors here 3042 + #[cfg(not(test))] 3043 + #[stable(feature = "vec_from_box", since = "1.18.0")] 3044 + impl<T, A: Allocator> From<Box<[T], A>> for Vec<T, A> { 3045 + /// Convert a boxed slice into a vector by transferring ownership of 3046 + /// the existing heap allocation. 
3047 + /// 3048 + /// # Examples 3049 + /// 3050 + /// ``` 3051 + /// let b: Box<[i32]> = vec![1, 2, 3].into_boxed_slice(); 3052 + /// assert_eq!(Vec::from(b), vec![1, 2, 3]); 3053 + /// ``` 3054 + fn from(s: Box<[T], A>) -> Self { 3055 + s.into_vec() 3056 + } 3057 + } 3058 + 3059 + // note: test pulls in libstd, which causes errors here 3060 + #[cfg(not(no_global_oom_handling))] 3061 + #[cfg(not(test))] 3062 + #[stable(feature = "box_from_vec", since = "1.20.0")] 3063 + impl<T, A: Allocator> From<Vec<T, A>> for Box<[T], A> { 3064 + /// Convert a vector into a boxed slice. 3065 + /// 3066 + /// If `v` has excess capacity, its items will be moved into a 3067 + /// newly-allocated buffer with exactly the right capacity. 3068 + /// 3069 + /// # Examples 3070 + /// 3071 + /// ``` 3072 + /// assert_eq!(Box::from(vec![1, 2, 3]), vec![1, 2, 3].into_boxed_slice()); 3073 + /// ``` 3074 + fn from(v: Vec<T, A>) -> Self { 3075 + v.into_boxed_slice() 3076 + } 3077 + } 3078 + 3079 + #[cfg(not(no_global_oom_handling))] 3080 + #[stable(feature = "rust1", since = "1.0.0")] 3081 + impl From<&str> for Vec<u8> { 3082 + /// Allocate a `Vec<u8>` and fill it with a UTF-8 string. 3083 + /// 3084 + /// # Examples 3085 + /// 3086 + /// ``` 3087 + /// assert_eq!(Vec::from("123"), vec![b'1', b'2', b'3']); 3088 + /// ``` 3089 + fn from(s: &str) -> Vec<u8> { 3090 + From::from(s.as_bytes()) 3091 + } 3092 + } 3093 + 3094 + #[stable(feature = "array_try_from_vec", since = "1.48.0")] 3095 + impl<T, A: Allocator, const N: usize> TryFrom<Vec<T, A>> for [T; N] { 3096 + type Error = Vec<T, A>; 3097 + 3098 + /// Gets the entire contents of the `Vec<T>` as an array, 3099 + /// if its size exactly matches that of the requested array. 
3100 + /// 3101 + /// # Examples 3102 + /// 3103 + /// ``` 3104 + /// assert_eq!(vec![1, 2, 3].try_into(), Ok([1, 2, 3])); 3105 + /// assert_eq!(<Vec<i32>>::new().try_into(), Ok([])); 3106 + /// ``` 3107 + /// 3108 + /// If the length doesn't match, the input comes back in `Err`: 3109 + /// ``` 3110 + /// let r: Result<[i32; 4], _> = (0..10).collect::<Vec<_>>().try_into(); 3111 + /// assert_eq!(r, Err(vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9])); 3112 + /// ``` 3113 + /// 3114 + /// If you're fine with just getting a prefix of the `Vec<T>`, 3115 + /// you can call [`.truncate(N)`](Vec::truncate) first. 3116 + /// ``` 3117 + /// let mut v = String::from("hello world").into_bytes(); 3118 + /// v.sort(); 3119 + /// v.truncate(2); 3120 + /// let [a, b]: [_; 2] = v.try_into().unwrap(); 3121 + /// assert_eq!(a, b' '); 3122 + /// assert_eq!(b, b'd'); 3123 + /// ``` 3124 + fn try_from(mut vec: Vec<T, A>) -> Result<[T; N], Vec<T, A>> { 3125 + if vec.len() != N { 3126 + return Err(vec); 3127 + } 3128 + 3129 + // SAFETY: `.set_len(0)` is always sound. 3130 + unsafe { vec.set_len(0) }; 3131 + 3132 + // SAFETY: A `Vec`'s pointer is always aligned properly, and 3133 + // the alignment the array needs is the same as the items. 3134 + // We checked earlier that we have sufficient items. 3135 + // The items will not double-drop as the `set_len` 3136 + // tells the `Vec` not to also drop them. 3137 + let array = unsafe { ptr::read(vec.as_ptr() as *const [T; N]) }; 3138 + Ok(array) 3139 + } 3140 + }
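Outside the kernel tree, the stable parts of this chunk behave identically in std's `Vec`; a small sketch (plain standard Rust, nothing kernel-specific) exercising `splice` and the fallible `Vec`-to-array conversion:

```rust
fn main() {
    // `splice` replaces a range in place and yields the removed items.
    let mut v = vec![1, 2, 3, 4];
    let removed: Vec<i32> = v.splice(1..3, [7, 8, 9]).collect();
    assert_eq!(v, [1, 7, 8, 9, 4]);
    assert_eq!(removed, [2, 3]);

    // `TryFrom<Vec<T>> for [T; N]` succeeds only on an exact length match,
    // handing the vector back in `Err` otherwise.
    let exact: Result<[i32; 5], _> = v.clone().try_into();
    assert!(exact.is_ok());
    let wrong: Result<[i32; 3], Vec<i32>> = v.try_into();
    assert_eq!(wrong, Err(vec![1, 7, 8, 9, 4]));
}
```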
+49
rust/alloc/vec/partial_eq.rs
// SPDX-License-Identifier: Apache-2.0 OR MIT

use crate::alloc::Allocator;
#[cfg(not(no_global_oom_handling))]
use crate::borrow::Cow;

use super::Vec;

macro_rules! __impl_slice_eq1 {
    ([$($vars:tt)*] $lhs:ty, $rhs:ty $(where $ty:ty: $bound:ident)?, #[$stability:meta]) => {
        #[$stability]
        impl<T, U, $($vars)*> PartialEq<$rhs> for $lhs
        where
            T: PartialEq<U>,
            $($ty: $bound)?
        {
            #[inline]
            fn eq(&self, other: &$rhs) -> bool { self[..] == other[..] }
            #[inline]
            fn ne(&self, other: &$rhs) -> bool { self[..] != other[..] }
        }
    }
}

__impl_slice_eq1! { [A1: Allocator, A2: Allocator] Vec<T, A1>, Vec<U, A2>, #[stable(feature = "rust1", since = "1.0.0")] }
__impl_slice_eq1! { [A: Allocator] Vec<T, A>, &[U], #[stable(feature = "rust1", since = "1.0.0")] }
__impl_slice_eq1! { [A: Allocator] Vec<T, A>, &mut [U], #[stable(feature = "rust1", since = "1.0.0")] }
__impl_slice_eq1! { [A: Allocator] &[T], Vec<U, A>, #[stable(feature = "partialeq_vec_for_ref_slice", since = "1.46.0")] }
__impl_slice_eq1! { [A: Allocator] &mut [T], Vec<U, A>, #[stable(feature = "partialeq_vec_for_ref_slice", since = "1.46.0")] }
__impl_slice_eq1! { [A: Allocator] Vec<T, A>, [U], #[stable(feature = "partialeq_vec_for_slice", since = "1.48.0")] }
__impl_slice_eq1! { [A: Allocator] [T], Vec<U, A>, #[stable(feature = "partialeq_vec_for_slice", since = "1.48.0")] }
#[cfg(not(no_global_oom_handling))]
__impl_slice_eq1! { [A: Allocator] Cow<'_, [T]>, Vec<U, A> where T: Clone, #[stable(feature = "rust1", since = "1.0.0")] }
#[cfg(not(no_global_oom_handling))]
__impl_slice_eq1! { [] Cow<'_, [T]>, &[U] where T: Clone, #[stable(feature = "rust1", since = "1.0.0")] }
#[cfg(not(no_global_oom_handling))]
__impl_slice_eq1! { [] Cow<'_, [T]>, &mut [U] where T: Clone, #[stable(feature = "rust1", since = "1.0.0")] }
__impl_slice_eq1! { [A: Allocator, const N: usize] Vec<T, A>, [U; N], #[stable(feature = "rust1", since = "1.0.0")] }
__impl_slice_eq1! { [A: Allocator, const N: usize] Vec<T, A>, &[U; N], #[stable(feature = "rust1", since = "1.0.0")] }

// NOTE: some less important impls are omitted to reduce code bloat
// FIXME(Centril): Reconsider this?
//__impl_slice_eq1! { [const N: usize] Vec<A>, &mut [B; N], }
//__impl_slice_eq1! { [const N: usize] [A; N], Vec<B>, }
//__impl_slice_eq1! { [const N: usize] &[A; N], Vec<B>, }
//__impl_slice_eq1! { [const N: usize] &mut [A; N], Vec<B>, }
//__impl_slice_eq1! { [const N: usize] Cow<'a, [A]>, [B; N], }
//__impl_slice_eq1! { [const N: usize] Cow<'a, [A]>, &[B; N], }
//__impl_slice_eq1! { [const N: usize] Cow<'a, [A]>, &mut [B; N], }
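The impls the macro stamps out allow comparisons across element types whenever `T: PartialEq<U>` holds. The same impls exist in std's `alloc` crate, so their effect can be sketched in plain Rust (here `String: PartialEq<&str>` drives the cross-type comparisons):

```rust
use std::borrow::Cow;

fn main() {
    let v = vec![String::from("a"), String::from("b")];

    // Vec<T> vs [U; N] and Vec<T> vs &[U], with T = String, U = &str.
    assert_eq!(v, ["a", "b"]);
    assert_eq!(v, &["a", "b"][..]);

    // Cow<'_, [T]> vs Vec<U>.
    let c: Cow<'_, [i32]> = Cow::Borrowed(&[1, 2]);
    assert_eq!(c, vec![1, 2]);
}
```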
+21
rust/bindgen_parameters
# SPDX-License-Identifier: GPL-2.0

--opaque-type xregs_state
--opaque-type desc_struct
--opaque-type arch_lbr_state
--opaque-type local_apic

# Packed type cannot transitively contain a `#[repr(align)]` type.
--opaque-type x86_msi_data
--opaque-type x86_msi_addr_lo

# `try` is a reserved keyword since Rust 2018; solved in `bindgen` v0.59.2,
# commit 2aed6b021680 ("context: Escape the try keyword properly").
--opaque-type kunit_try_catch

# If SMP is disabled, `arch_spinlock_t` is defined as a ZST which triggers a Rust
# warning. We don't need to peek into it anyway.
--opaque-type spinlock

# `seccomp`'s comment gets understood as a doctest
--no-doc-comments
+13
rust/bindings/bindings_helper.h
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Header that contains the code (mostly headers) for which Rust bindings
 * will be automatically generated by `bindgen`.
 *
 * Sorted alphabetically.
 */

#include <linux/slab.h>

/* `bindgen` gets confused at certain things. */
const gfp_t BINDINGS_GFP_KERNEL = GFP_KERNEL;
const gfp_t BINDINGS___GFP_ZERO = __GFP_ZERO;
+53
rust/bindings/lib.rs
// SPDX-License-Identifier: GPL-2.0

//! Bindings.
//!
//! Imports the generated bindings by `bindgen`.
//!
//! This crate may not be directly used. If you need a kernel C API that is
//! not ported or wrapped in the `kernel` crate, then do so first instead of
//! using this crate.

#![no_std]
#![feature(core_ffi_c)]
// See <https://github.com/rust-lang/rust-bindgen/issues/1651>.
#![cfg_attr(test, allow(deref_nullptr))]
#![cfg_attr(test, allow(unaligned_references))]
#![cfg_attr(test, allow(unsafe_op_in_unsafe_fn))]
#![allow(
    clippy::all,
    missing_docs,
    non_camel_case_types,
    non_upper_case_globals,
    non_snake_case,
    improper_ctypes,
    unreachable_pub,
    unsafe_op_in_unsafe_fn
)]

mod bindings_raw {
    // Use glob import here to expose all helpers.
    // Symbols defined within the module will take precedence to the glob import.
    pub use super::bindings_helper::*;
    include!(concat!(
        env!("OBJTREE"),
        "/rust/bindings/bindings_generated.rs"
    ));
}

// When both a directly exposed symbol and a helper exists for the same function,
// the directly exposed symbol is preferred and the helper becomes dead code, so
// ignore the warning here.
#[allow(dead_code)]
mod bindings_helper {
    // Import the generated bindings for types.
    include!(concat!(
        env!("OBJTREE"),
        "/rust/bindings/bindings_helpers_generated.rs"
    ));
}

pub use bindings_raw::*;

pub const GFP_KERNEL: gfp_t = BINDINGS_GFP_KERNEL;
pub const __GFP_ZERO: gfp_t = BINDINGS___GFP_ZERO;
+63
rust/compiler_builtins.rs
// SPDX-License-Identifier: GPL-2.0

//! Our own `compiler_builtins`.
//!
//! Rust provides [`compiler_builtins`] as a port of LLVM's [`compiler-rt`].
//! Since we do not need the vast majority of them, we avoid the dependency
//! by providing this file.
//!
//! At the moment, some builtins are required that should not be. For instance,
//! [`core`] has 128-bit integers functionality which we should not be compiling
//! in. We will work with upstream [`core`] to provide feature flags to disable
//! the parts we do not need. For the moment, we define them to [`panic!`] at
//! runtime for simplicity to catch mistakes, instead of performing surgery
//! on `core.o`.
//!
//! In any case, all these symbols are weakened to ensure we do not override
//! those that may be provided by the rest of the kernel.
//!
//! [`compiler_builtins`]: https://github.com/rust-lang/compiler-builtins
//! [`compiler-rt`]: https://compiler-rt.llvm.org/

#![feature(compiler_builtins)]
#![compiler_builtins]
#![no_builtins]
#![no_std]

macro_rules! define_panicking_intrinsics(
    ($reason: tt, { $($ident: ident, )* }) => {
        $(
            #[doc(hidden)]
            #[no_mangle]
            pub extern "C" fn $ident() {
                panic!($reason);
            }
        )*
    }
);

define_panicking_intrinsics!("`f32` should not be used", {
    __eqsf2,
    __gesf2,
    __lesf2,
    __nesf2,
    __unordsf2,
});

define_panicking_intrinsics!("`f64` should not be used", {
    __unorddf2,
});

define_panicking_intrinsics!("`i128` should not be used", {
    __ashrti3,
    __muloti4,
    __multi3,
});

define_panicking_intrinsics!("`u128` should not be used", {
    __ashlti3,
    __lshrti3,
    __udivmodti4,
    __udivti3,
    __umodti3,
});
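The `define_panicking_intrinsics!` macro is a plain `macro_rules!` repetition; a user-space sketch of the same pattern (the `demo_*` names are invented for illustration, and the stubs are plain `fn`s here rather than `extern "C"` so the panic can be caught for demonstration):

```rust
// One macro invocation stamps out several stubs that panic when reached,
// mirroring the kernel's define_panicking_intrinsics! above.
macro_rules! define_panicking_intrinsics {
    ($reason:tt, { $($ident:ident, )* }) => {
        $(
            pub fn $ident() {
                panic!($reason);
            }
        )*
    };
}

define_panicking_intrinsics!("`i128` should not be used", {
    demo_muloti4,
    demo_multi3,
});

fn main() {
    // Reaching a stub panics with the fixed reason string.
    let err = std::panic::catch_unwind(demo_muloti4).unwrap_err();
    assert_eq!(*err.downcast_ref::<&str>().unwrap(), "`i128` should not be used");
}
```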
+21
rust/exports.c
// SPDX-License-Identifier: GPL-2.0
/*
 * A hack to export Rust symbols for loadable modules without having to redo
 * the entire `include/linux/export.h` logic in Rust.
 *
 * This requires the Rust's new/future `v0` mangling scheme because the default
 * one ("legacy") uses invalid characters for C identifiers (thus we cannot use
 * the `EXPORT_SYMBOL_*` macros).
 *
 * All symbols are exported as GPL-only to guarantee no GPL-only feature is
 * accidentally exposed.
 */

#include <linux/module.h>

#define EXPORT_SYMBOL_RUST_GPL(sym) extern int sym; EXPORT_SYMBOL_GPL(sym)

#include "exports_core_generated.h"
#include "exports_alloc_generated.h"
#include "exports_bindings_generated.h"
#include "exports_kernel_generated.h"
+51
rust/helpers.c
// SPDX-License-Identifier: GPL-2.0
/*
 * Non-trivial C macros cannot be used in Rust. Similarly, inlined C functions
 * cannot be called either. This file explicitly creates functions ("helpers")
 * that wrap those so that they can be called from Rust.
 *
 * Even though Rust kernel modules should never use directly the bindings, some
 * of these helpers need to be exported because Rust generics and inlined
 * functions may not get their code generated in the crate where they are
 * defined. Other helpers, called from non-inline functions, may not be
 * exported, in principle. However, in general, the Rust compiler does not
 * guarantee codegen will be performed for a non-inline function either.
 * Therefore, this file exports all the helpers. In the future, this may be
 * revisited to reduce the number of exports after the compiler is informed
 * about the places codegen is required.
 *
 * All symbols are exported as GPL-only to guarantee no GPL-only feature is
 * accidentally exposed.
 */

#include <linux/bug.h>
#include <linux/build_bug.h>

__noreturn void rust_helper_BUG(void)
{
        BUG();
}
EXPORT_SYMBOL_GPL(rust_helper_BUG);

/*
 * We use `bindgen`'s `--size_t-is-usize` option to bind the C `size_t` type
 * as the Rust `usize` type, so we can use it in contexts where Rust
 * expects a `usize` like slice (array) indices. `usize` is defined to be
 * the same as C's `uintptr_t` type (can hold any pointer) but not
 * necessarily the same as `size_t` (can hold the size of any single
 * object). Most modern platforms use the same concrete integer type for
 * both of them, but in case we find ourselves on a platform where
 * that's not true, fail early instead of risking ABI or
 * integer-overflow issues.
 *
 * If your platform fails this assertion, it means that you are in
 * danger of integer-overflow bugs (even if you attempt to remove
 * `--size_t-is-usize`). It may be easiest to change the kernel ABI on
 * your platform such that `size_t` matches `uintptr_t` (i.e., to increase
 * `size_t`, because `uintptr_t` has to be at least as big as `size_t`).
 */
static_assert(
        sizeof(size_t) == sizeof(uintptr_t) &&
        __alignof__(size_t) == __alignof__(uintptr_t),
        "Rust code expects C `size_t` to match Rust `usize`"
);
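The Rust side of the same invariant can be checked with a small sketch (plain `std` Rust, not kernel code):

```rust
use std::mem::{align_of, size_of};

fn main() {
    // Mirrors the C static_assert above: `usize` must have the size and
    // alignment of a pointer (C `uintptr_t`). On platforms where `size_t`
    // also matches `uintptr_t`, binding `size_t` as `usize` is sound.
    assert_eq!(size_of::<usize>(), size_of::<*const u8>());
    assert_eq!(align_of::<usize>(), align_of::<*const u8>());
}
```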
+64
rust/kernel/allocator.rs
// SPDX-License-Identifier: GPL-2.0

//! Allocator support.

use core::alloc::{GlobalAlloc, Layout};
use core::ptr;

use crate::bindings;

struct KernelAllocator;

unsafe impl GlobalAlloc for KernelAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // `krealloc()` is used instead of `kmalloc()` because the latter is
        // an inline function and cannot be bound to as a result.
        unsafe { bindings::krealloc(ptr::null(), layout.size(), bindings::GFP_KERNEL) as *mut u8 }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
        unsafe {
            bindings::kfree(ptr as *const core::ffi::c_void);
        }
    }
}

#[global_allocator]
static ALLOCATOR: KernelAllocator = KernelAllocator;

// `rustc` only generates these for some crate types. Even then, we would need
// to extract the object file that has them from the archive. For the moment,
// let's generate them ourselves instead.
//
// Note that `#[no_mangle]` implies exported too, nowadays.
#[no_mangle]
fn __rust_alloc(size: usize, _align: usize) -> *mut u8 {
    unsafe { bindings::krealloc(core::ptr::null(), size, bindings::GFP_KERNEL) as *mut u8 }
}

#[no_mangle]
fn __rust_dealloc(ptr: *mut u8, _size: usize, _align: usize) {
    unsafe { bindings::kfree(ptr as *const core::ffi::c_void) };
}

#[no_mangle]
fn __rust_realloc(ptr: *mut u8, _old_size: usize, _align: usize, new_size: usize) -> *mut u8 {
    unsafe {
        bindings::krealloc(
            ptr as *const core::ffi::c_void,
            new_size,
            bindings::GFP_KERNEL,
        ) as *mut u8
    }
}

#[no_mangle]
fn __rust_alloc_zeroed(size: usize, _align: usize) -> *mut u8 {
    unsafe {
        bindings::krealloc(
            core::ptr::null(),
            size,
            bindings::GFP_KERNEL | bindings::__GFP_ZERO,
        ) as *mut u8
    }
}
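`KernelAllocator` is just a `GlobalAlloc` implementation routed to `krealloc`/`kfree`. The same `#[global_allocator]` plumbing can be sketched in user space by delegating to `System` instead (the `CountingAllocator` name and the counter are invented for illustration):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for KernelAllocator: same trait, but delegating to the system
// allocator and counting allocations so the effect is observable.
struct CountingAllocator;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static ALLOCATOR: CountingAllocator = CountingAllocator;

fn main() {
    let before = ALLOCS.load(Ordering::Relaxed);
    let v = vec![0u8; 64]; // forces a heap allocation through our allocator
    assert!(ALLOCS.load(Ordering::Relaxed) > before);
    drop(v);
}
```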
+59
rust/kernel/error.rs
// SPDX-License-Identifier: GPL-2.0

//! Kernel errors.
//!
//! C header: [`include/uapi/asm-generic/errno-base.h`](../../../include/uapi/asm-generic/errno-base.h)

use alloc::collections::TryReserveError;

/// Contains the C-compatible error codes.
pub mod code {
    /// Out of memory.
    pub const ENOMEM: super::Error = super::Error(-(crate::bindings::ENOMEM as i32));
}

/// Generic integer kernel error.
///
/// The kernel defines a set of integer generic error codes based on C and
/// POSIX ones. These codes may have a more specific meaning in some contexts.
///
/// # Invariants
///
/// The value is a valid `errno` (i.e. `>= -MAX_ERRNO && < 0`).
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct Error(core::ffi::c_int);

impl Error {
    /// Returns the kernel error code.
    pub fn to_kernel_errno(self) -> core::ffi::c_int {
        self.0
    }
}

impl From<TryReserveError> for Error {
    fn from(_: TryReserveError) -> Error {
        code::ENOMEM
    }
}

/// A [`Result`] with an [`Error`] error type.
///
/// To be used as the return type for functions that may fail.
///
/// # Error codes in C and Rust
///
/// In C, it is common that functions indicate success or failure through
/// their return value; modifying or returning extra data through non-`const`
/// pointer parameters. In particular, in the kernel, functions that may fail
/// typically return an `int` that represents a generic error code. We model
/// those as [`Error`].
///
/// In Rust, it is idiomatic to model functions that may fail as returning
/// a [`Result`]. Since in the kernel many functions return an error code,
/// [`Result`] is a type alias for a [`core::result::Result`] that uses
/// [`Error`] as its error type.
///
/// Note that even if a function does not return anything when it succeeds,
/// it should still be modeled as returning a `Result` rather than
/// just an [`Error`].
pub type Result<T = ()> = core::result::Result<T, Error>;
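The pattern above (an opaque integer error type plus a `Result` alias defaulting to `()`) can be sketched outside the kernel; the `may_fail` helper is invented for illustration, and `-12` matches `ENOMEM` from errno-base.h:

```rust
// User-space sketch of the kernel::error pattern.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct Error(i32);

pub mod code {
    /// Out of memory (errno 12).
    pub const ENOMEM: super::Error = super::Error(-12);
}

impl Error {
    pub fn to_kernel_errno(self) -> i32 {
        self.0
    }
}

// The `T = ()` default lets functions that return nothing on success
// write just `-> Result`.
pub type Result<T = ()> = core::result::Result<T, Error>;

fn may_fail(oom: bool) -> Result {
    if oom { Err(code::ENOMEM) } else { Ok(()) }
}

fn main() {
    assert_eq!(may_fail(false), Ok(()));
    assert_eq!(may_fail(true).unwrap_err().to_kernel_errno(), -12);
}
```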
+78
rust/kernel/lib.rs
// SPDX-License-Identifier: GPL-2.0

//! The `kernel` crate.
//!
//! This crate contains the kernel APIs that have been ported or wrapped for
//! usage by Rust code in the kernel and is shared by all of them.
//!
//! In other words, all the rest of the Rust code in the kernel (e.g. kernel
//! modules written in Rust) depends on [`core`], [`alloc`] and this crate.
//!
//! If you need a kernel C API that is not ported or wrapped yet here, then
//! do so first instead of bypassing this crate.

#![no_std]
#![feature(core_ffi_c)]

// Ensure conditional compilation based on the kernel configuration works;
// otherwise we may silently break things like initcall handling.
#[cfg(not(CONFIG_RUST))]
compile_error!("Missing kernel configuration for conditional compilation");

#[cfg(not(test))]
#[cfg(not(testlib))]
mod allocator;
pub mod error;
pub mod prelude;
pub mod print;
pub mod str;

#[doc(hidden)]
pub use bindings;
pub use macros;

/// Prefix to appear before log messages printed from within the `kernel` crate.
const __LOG_PREFIX: &[u8] = b"rust_kernel\0";

/// The top level entrypoint to implementing a kernel module.
///
/// For any teardown or cleanup operations, your type may implement [`Drop`].
pub trait Module: Sized + Sync {
    /// Called at module initialization time.
    ///
    /// Use this method to perform whatever setup or registration your module
    /// should do.
    ///
    /// Equivalent to the `module_init` macro in the C API.
    fn init(module: &'static ThisModule) -> error::Result<Self>;
}

/// Equivalent to `THIS_MODULE` in the C API.
///
/// C header: `include/linux/export.h`
pub struct ThisModule(*mut bindings::module);

// SAFETY: `THIS_MODULE` may be used from all threads within a module.
unsafe impl Sync for ThisModule {}

impl ThisModule {
    /// Creates a [`ThisModule`] given the `THIS_MODULE` pointer.
    ///
    /// # Safety
    ///
    /// The pointer must be equal to the right `THIS_MODULE`.
    pub const unsafe fn from_ptr(ptr: *mut bindings::module) -> ThisModule {
        ThisModule(ptr)
    }
}

#[cfg(not(any(testlib, test)))]
#[panic_handler]
fn panic(info: &core::panic::PanicInfo<'_>) -> ! {
    pr_emerg!("{}\n", info);
    // SAFETY: FFI call.
    unsafe { bindings::BUG() };
    // Bindgen currently does not recognize `__noreturn` so `BUG` returns `()`
    // instead of `!`. See <https://github.com/rust-lang/rust-bindgen/issues/2094>.
    loop {}
}
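The shape of the `Module` trait (fallible construction via `init`, teardown via `Drop`) can be sketched in user space; `HelloModule` and the placeholder `ThisModule`/`Result` types here are invented stand-ins, not the real kernel types:

```rust
// Placeholder for the real `ThisModule` wrapper around `THIS_MODULE`.
struct ThisModule;

// Mirrors the trait's shape: construction may fail, like `module_init`.
trait Module: Sized {
    fn init(module: &'static ThisModule) -> Result<Self, i32>;
}

struct HelloModule {
    message: String,
}

impl Module for HelloModule {
    fn init(_module: &'static ThisModule) -> Result<Self, i32> {
        Ok(HelloModule { message: String::from("hello") })
    }
}

impl Drop for HelloModule {
    fn drop(&mut self) {
        // Teardown/cleanup would go here, mirroring `module_exit`.
    }
}

static THIS: ThisModule = ThisModule;

fn main() {
    let m = HelloModule::init(&THIS).unwrap();
    assert_eq!(m.message, "hello");
}
```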
+20
rust/kernel/prelude.rs
// SPDX-License-Identifier: GPL-2.0

//! The `kernel` prelude.
//!
//! These are the most common items used by Rust code in the kernel,
//! intended to be imported by all Rust code, for convenience.
//!
//! # Examples
//!
//! ```
//! use kernel::prelude::*;
//! ```

pub use super::{
    error::{Error, Result},
    pr_emerg, pr_info, ThisModule,
};
pub use alloc::{boxed::Box, vec::Vec};
pub use core::pin::Pin;
pub use macros::module;
+198
rust/kernel/print.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Printing facilities. 4 + //! 5 + //! C header: [`include/linux/printk.h`](../../../../include/linux/printk.h) 6 + //! 7 + //! Reference: <https://www.kernel.org/doc/html/latest/core-api/printk-basics.html> 8 + 9 + use core::{ 10 + ffi::{c_char, c_void}, 11 + fmt, 12 + }; 13 + 14 + use crate::str::RawFormatter; 15 + 16 + #[cfg(CONFIG_PRINTK)] 17 + use crate::bindings; 18 + 19 + // Called from `vsprintf` with format specifier `%pA`. 20 + #[no_mangle] 21 + unsafe fn rust_fmt_argument(buf: *mut c_char, end: *mut c_char, ptr: *const c_void) -> *mut c_char { 22 + use fmt::Write; 23 + // SAFETY: The C contract guarantees that `buf` is valid if it's less than `end`. 24 + let mut w = unsafe { RawFormatter::from_ptrs(buf.cast(), end.cast()) }; 25 + let _ = w.write_fmt(unsafe { *(ptr as *const fmt::Arguments<'_>) }); 26 + w.pos().cast() 27 + } 28 + 29 + /// Format strings. 30 + /// 31 + /// Public but hidden since it should only be used from public macros. 32 + #[doc(hidden)] 33 + pub mod format_strings { 34 + use crate::bindings; 35 + 36 + /// The length we copy from the `KERN_*` kernel prefixes. 37 + const LENGTH_PREFIX: usize = 2; 38 + 39 + /// The length of the fixed format strings. 40 + pub const LENGTH: usize = 10; 41 + 42 + /// Generates a fixed format string for the kernel's [`_printk`]. 43 + /// 44 + /// The format string is always the same for a given level, i.e. for a 45 + /// given `prefix`, which are the kernel's `KERN_*` constants. 46 + /// 47 + /// [`_printk`]: ../../../../include/linux/printk.h 48 + const fn generate(is_cont: bool, prefix: &[u8; 3]) -> [u8; LENGTH] { 49 + // Ensure the `KERN_*` macros are what we expect. 
50 + assert!(prefix[0] == b'\x01'); 51 + if is_cont { 52 + assert!(prefix[1] == b'c'); 53 + } else { 54 + assert!(prefix[1] >= b'0' && prefix[1] <= b'7'); 55 + } 56 + assert!(prefix[2] == b'\x00'); 57 + 58 + let suffix: &[u8; LENGTH - LENGTH_PREFIX] = if is_cont { 59 + b"%pA\0\0\0\0\0" 60 + } else { 61 + b"%s: %pA\0" 62 + }; 63 + 64 + [ 65 + prefix[0], prefix[1], suffix[0], suffix[1], suffix[2], suffix[3], suffix[4], suffix[5], 66 + suffix[6], suffix[7], 67 + ] 68 + } 69 + 70 + // Generate the format strings at compile-time. 71 + // 72 + // This avoids the compiler generating the contents on the fly in the stack. 73 + // 74 + // Furthermore, `static` instead of `const` is used to share the strings 75 + // for all the kernel. 76 + pub static EMERG: [u8; LENGTH] = generate(false, bindings::KERN_EMERG); 77 + pub static INFO: [u8; LENGTH] = generate(false, bindings::KERN_INFO); 78 + } 79 + 80 + /// Prints a message via the kernel's [`_printk`]. 81 + /// 82 + /// Public but hidden since it should only be used from public macros. 83 + /// 84 + /// # Safety 85 + /// 86 + /// The format string must be one of the ones in [`format_strings`], and 87 + /// the module name must be null-terminated. 88 + /// 89 + /// [`_printk`]: ../../../../include/linux/_printk.h 90 + #[doc(hidden)] 91 + #[cfg_attr(not(CONFIG_PRINTK), allow(unused_variables))] 92 + pub unsafe fn call_printk( 93 + format_string: &[u8; format_strings::LENGTH], 94 + module_name: &[u8], 95 + args: fmt::Arguments<'_>, 96 + ) { 97 + // `_printk` does not seem to fail in any path. 98 + #[cfg(CONFIG_PRINTK)] 99 + unsafe { 100 + bindings::_printk( 101 + format_string.as_ptr() as _, 102 + module_name.as_ptr(), 103 + &args as *const _ as *const c_void, 104 + ); 105 + } 106 + } 107 + 108 + /// Performs formatting and forwards the string to [`call_printk`]. 109 + /// 110 + /// Public but hidden since it should only be used from public macros. 
111 + #[doc(hidden)] 112 + #[cfg(not(testlib))] 113 + #[macro_export] 114 + #[allow(clippy::crate_in_macro_def)] 115 + macro_rules! print_macro ( 116 + // The non-continuation cases (most of them, e.g. `INFO`). 117 + ($format_string:path, $($arg:tt)+) => ( 118 + // SAFETY: This hidden macro should only be called by the documented 119 + // printing macros which ensure the format string is one of the fixed 120 + // ones. All `__LOG_PREFIX`s are null-terminated as they are generated 121 + // by the `module!` proc macro or fixed values defined in a kernel 122 + // crate. 123 + unsafe { 124 + $crate::print::call_printk( 125 + &$format_string, 126 + crate::__LOG_PREFIX, 127 + format_args!($($arg)+), 128 + ); 129 + } 130 + ); 131 + ); 132 + 133 + /// Stub for doctests 134 + #[cfg(testlib)] 135 + #[macro_export] 136 + macro_rules! print_macro ( 137 + ($format_string:path, $e:expr, $($arg:tt)+) => ( 138 + () 139 + ); 140 + ); 141 + 142 + // We could use a macro to generate these macros. However, doing so ends 143 + // up being a bit ugly: it requires the dollar token trick to escape `$` as 144 + // well as playing with the `doc` attribute. Furthermore, they cannot be easily 145 + // imported in the prelude due to [1]. So, for the moment, we just write them 146 + // manually, like in the C side; while keeping most of the logic in another 147 + // macro, i.e. [`print_macro`]. 148 + // 149 + // [1]: https://github.com/rust-lang/rust/issues/52234 150 + 151 + /// Prints an emergency-level message (level 0). 152 + /// 153 + /// Use this level if the system is unusable. 154 + /// 155 + /// Equivalent to the kernel's [`pr_emerg`] macro. 156 + /// 157 + /// Mimics the interface of [`std::print!`]. See [`core::fmt`] and 158 + /// `alloc::format!` for information about the formatting syntax. 
159 + /// 160 + /// [`pr_emerg`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_emerg 161 + /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 162 + /// 163 + /// # Examples 164 + /// 165 + /// ``` 166 + /// pr_emerg!("hello {}\n", "there"); 167 + /// ``` 168 + #[macro_export] 169 + macro_rules! pr_emerg ( 170 + ($($arg:tt)*) => ( 171 + $crate::print_macro!($crate::print::format_strings::EMERG, $($arg)*) 172 + ) 173 + ); 174 + 175 + /// Prints an info-level message (level 6). 176 + /// 177 + /// Use this level for informational messages. 178 + /// 179 + /// Equivalent to the kernel's [`pr_info`] macro. 180 + /// 181 + /// Mimics the interface of [`std::print!`]. See [`core::fmt`] and 182 + /// `alloc::format!` for information about the formatting syntax. 183 + /// 184 + /// [`pr_info`]: https://www.kernel.org/doc/html/latest/core-api/printk-basics.html#c.pr_info 185 + /// [`std::print!`]: https://doc.rust-lang.org/std/macro.print.html 186 + /// 187 + /// # Examples 188 + /// 189 + /// ``` 190 + /// pr_info!("hello {}\n", "there"); 191 + /// ``` 192 + #[macro_export] 193 + #[doc(alias = "print")] 194 + macro_rules! pr_info ( 195 + ($($arg:tt)*) => ( 196 + $crate::print_macro!($crate::print::format_strings::INFO, $($arg)*) 197 + ) 198 + );
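The `generate` function in the `print.rs` hunk above builds the printk format strings at compile time: a 2-byte prefix kept from the C `KERN_*` marker (SOH plus a level byte), followed by an 8-byte suffix, either `"%s: %pA\0"` (module name plus Rust arguments) or `"%pA\0..."` for continuations. The same layout can be exercised as plain userspace Rust; the `KERN_INFO`/`KERN_CONT` byte values below are stand-ins for the bindgen constants, matching the definitions in `include/linux/kern_levels.h`:

```rust
// Userspace sketch of the compile-time format-string layout from
// rust/kernel/print.rs. The real code receives the 3-byte C `KERN_*`
// markers (SOH + level + NUL) via bindgen; here they are written out
// by hand, so treat the constants as illustrative.
const LENGTH_PREFIX: usize = 2; // SOH byte + level byte kept from KERN_*
const LENGTH: usize = 10; // prefix + longest suffix ("%s: %pA\0")

const fn generate(is_cont: bool, prefix: &[u8; 3]) -> [u8; LENGTH] {
    // Same sanity checks as the kernel code: SOH, a level digit
    // (or 'c' for continuations), then the C string's NUL terminator.
    assert!(prefix[0] == b'\x01');
    if is_cont {
        assert!(prefix[1] == b'c');
    } else {
        assert!(prefix[1] >= b'0' && prefix[1] <= b'7');
    }
    assert!(prefix[2] == b'\x00');

    let suffix: &[u8; LENGTH - LENGTH_PREFIX] = if is_cont {
        b"%pA\0\0\0\0\0" // continuation: no "module: " part
    } else {
        b"%s: %pA\0" // "%s" is the module name, "%pA" formats the Rust args
    };

    [
        prefix[0], prefix[1], suffix[0], suffix[1], suffix[2], suffix[3],
        suffix[4], suffix[5], suffix[6], suffix[7],
    ]
}

// Stand-ins for `bindings::KERN_INFO` / `KERN_CONT`.
static INFO: [u8; LENGTH] = generate(false, b"\x016\0");
static CONT: [u8; LENGTH] = generate(true, b"\x01c\0");

fn main() {
    assert_eq!(INFO, *b"\x016%s: %pA\0");
    assert_eq!(CONT, *b"\x01c%pA\0\0\0\0\0");
    println!("INFO format string bytes: {:?}", INFO);
}
```

Because `generate` is a `const fn` feeding `static` items, the strings exist once in the final image and the asserts are checked at compile time, which is exactly why the kernel code prefers `static` over `const` here.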
+72
rust/kernel/str.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! String representations. 4 + 5 + use core::fmt; 6 + 7 + /// Allows formatting of [`fmt::Arguments`] into a raw buffer. 8 + /// 9 + /// It does not fail if callers write past the end of the buffer so that they can calculate the 10 + /// size required to fit everything. 11 + /// 12 + /// # Invariants 13 + /// 14 + /// The memory region between `pos` (inclusive) and `end` (exclusive) is valid for writes if `pos` 15 + /// is less than `end`. 16 + pub(crate) struct RawFormatter { 17 + // Use `usize` to use `saturating_*` functions. 18 + #[allow(dead_code)] 19 + beg: usize, 20 + pos: usize, 21 + end: usize, 22 + } 23 + 24 + impl RawFormatter { 25 + /// Creates a new instance of [`RawFormatter`] with the given buffer pointers. 26 + /// 27 + /// # Safety 28 + /// 29 + /// If `pos` is less than `end`, then the region between `pos` (inclusive) and `end` 30 + /// (exclusive) must be valid for writes for the lifetime of the returned [`RawFormatter`]. 31 + pub(crate) unsafe fn from_ptrs(pos: *mut u8, end: *mut u8) -> Self { 32 + // INVARIANT: The safety requirements guarantee the type invariants. 33 + Self { 34 + beg: pos as _, 35 + pos: pos as _, 36 + end: end as _, 37 + } 38 + } 39 + 40 + /// Returns the current insert position. 41 + /// 42 + /// N.B. It may point to invalid memory. 43 + pub(crate) fn pos(&self) -> *mut u8 { 44 + self.pos as _ 45 + } 46 + } 47 + 48 + impl fmt::Write for RawFormatter { 49 + fn write_str(&mut self, s: &str) -> fmt::Result { 50 + // `pos` value after writing `len` bytes. This does not have to be bounded by `end`, but we 51 + // don't want it to wrap around to 0. 52 + let pos_new = self.pos.saturating_add(s.len()); 53 + 54 + // Amount that we can copy. `saturating_sub` ensures we get 0 if `pos` goes past `end`. 
55 + let len_to_copy = core::cmp::min(pos_new, self.end).saturating_sub(self.pos); 56 + 57 + if len_to_copy > 0 { 58 + // SAFETY: If `len_to_copy` is non-zero, then we know `pos` has not gone past `end` 59 + // yet, so it is valid for write per the type invariants. 60 + unsafe { 61 + core::ptr::copy_nonoverlapping( 62 + s.as_bytes().as_ptr(), 63 + self.pos as *mut u8, 64 + len_to_copy, 65 + ) 66 + }; 67 + } 68 + 69 + self.pos = pos_new; 70 + Ok(()) 71 + } 72 + }
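The `RawFormatter` bounds arithmetic above is the interesting part: writes past the end are silently truncated, but `pos` keeps counting so the caller can learn the size that would have been required. A safe userspace sketch of the same logic, with the raw pointers replaced by a slice and an offset, looks like this:

```rust
use std::fmt::{self, Write};

// Userspace sketch of the bounds logic in rust/kernel/str.rs. The
// kernel type stores `beg`/`pos`/`end` as `usize` pointer values;
// here `pos` is an offset into a slice so no unsafe code is needed.
struct RawFormatter<'a> {
    buf: &'a mut [u8],
    pos: usize, // may legitimately run past `buf.len()`
}

impl Write for RawFormatter<'_> {
    fn write_str(&mut self, s: &str) -> fmt::Result {
        // `pos` after writing `s.len()` bytes; `saturating_add` keeps
        // it from wrapping to 0, mirroring the kernel code.
        let pos_new = self.pos.saturating_add(s.len());
        // Bytes that actually fit: `saturating_sub` yields 0 once
        // `pos` has already passed the end of the buffer.
        let end = self.buf.len();
        let len_to_copy = core::cmp::min(pos_new, end).saturating_sub(self.pos);
        if len_to_copy > 0 {
            self.buf[self.pos..self.pos + len_to_copy]
                .copy_from_slice(&s.as_bytes()[..len_to_copy]);
        }
        self.pos = pos_new; // keep counting so callers learn the needed size
        Ok(())
    }
}

fn main() {
    let mut storage = [0u8; 8];
    let mut f = RawFormatter { buf: &mut storage, pos: 0 };
    write!(f, "hello, kernel").unwrap(); // 13 bytes into an 8-byte buffer
    assert_eq!(f.pos, 13); // required size, not bytes written
    assert_eq!(&storage[..], b"hello, k"); // only what fits was copied
}
```

This "measure by overflowing" pattern is the same trick C's `vsnprintf` uses: the return value reports the full length even when the buffer was too small.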
+51
rust/macros/helpers.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + use proc_macro::{token_stream, TokenTree}; 4 + 5 + pub(crate) fn try_ident(it: &mut token_stream::IntoIter) -> Option<String> { 6 + if let Some(TokenTree::Ident(ident)) = it.next() { 7 + Some(ident.to_string()) 8 + } else { 9 + None 10 + } 11 + } 12 + 13 + pub(crate) fn try_literal(it: &mut token_stream::IntoIter) -> Option<String> { 14 + if let Some(TokenTree::Literal(literal)) = it.next() { 15 + Some(literal.to_string()) 16 + } else { 17 + None 18 + } 19 + } 20 + 21 + pub(crate) fn try_byte_string(it: &mut token_stream::IntoIter) -> Option<String> { 22 + try_literal(it).and_then(|byte_string| { 23 + if byte_string.starts_with("b\"") && byte_string.ends_with('\"') { 24 + Some(byte_string[2..byte_string.len() - 1].to_string()) 25 + } else { 26 + None 27 + } 28 + }) 29 + } 30 + 31 + pub(crate) fn expect_ident(it: &mut token_stream::IntoIter) -> String { 32 + try_ident(it).expect("Expected Ident") 33 + } 34 + 35 + pub(crate) fn expect_punct(it: &mut token_stream::IntoIter) -> char { 36 + if let TokenTree::Punct(punct) = it.next().expect("Reached end of token stream for Punct") { 37 + punct.as_char() 38 + } else { 39 + panic!("Expected Punct"); 40 + } 41 + } 42 + 43 + pub(crate) fn expect_byte_string(it: &mut token_stream::IntoIter) -> String { 44 + try_byte_string(it).expect("Expected byte string") 45 + } 46 + 47 + pub(crate) fn expect_end(it: &mut token_stream::IntoIter) { 48 + if it.next().is_some() { 49 + panic!("Expected end"); 50 + } 51 + }
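The helpers above lean on the fact that a proc-macro `Literal` stringifies back to its source form, so a byte string arrives as the literal text `b"..."` and `try_byte_string` just peels the delimiters off. The core check can be shown with the literal text as a plain `&str`:

```rust
// Userspace sketch of `try_byte_string` from rust/macros/helpers.rs:
// the stringified literal keeps its source syntax, so `b"GPL"` in the
// `module!` invocation arrives as the 7-character text `b"GPL"`.
fn try_byte_string(literal: &str) -> Option<String> {
    if literal.starts_with("b\"") && literal.ends_with('"') {
        // Strip the leading `b"` and the trailing `"`.
        Some(literal[2..literal.len() - 1].to_string())
    } else {
        None
    }
}

fn main() {
    // `name: b"rust_minimal"` stringifies to `b"rust_minimal"`.
    assert_eq!(
        try_byte_string("b\"rust_minimal\""),
        Some("rust_minimal".to_string())
    );
    // A plain string or integer literal is rejected.
    assert_eq!(try_byte_string("\"not_bytes\""), None);
    assert_eq!(try_byte_string("42"), None);
}
```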
+72
rust/macros/lib.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Crate for all kernel procedural macros. 4 + 5 + mod helpers; 6 + mod module; 7 + 8 + use proc_macro::TokenStream; 9 + 10 + /// Declares a kernel module. 11 + /// 12 + /// The `type` argument should be a type which implements the [`Module`] 13 + /// trait. Also accepts various forms of kernel metadata. 14 + /// 15 + /// C header: [`include/linux/moduleparam.h`](../../../include/linux/moduleparam.h) 16 + /// 17 + /// [`Module`]: ../kernel/trait.Module.html 18 + /// 19 + /// # Examples 20 + /// 21 + /// ```ignore 22 + /// use kernel::prelude::*; 23 + /// 24 + /// module!{ 25 + /// type: MyModule, 26 + /// name: b"my_kernel_module", 27 + /// author: b"Rust for Linux Contributors", 28 + /// description: b"My very own kernel module!", 29 + /// license: b"GPL", 30 + /// params: { 31 + /// my_i32: i32 { 32 + /// default: 42, 33 + /// permissions: 0o000, 34 + /// description: b"Example of i32", 35 + /// }, 36 + /// writeable_i32: i32 { 37 + /// default: 42, 38 + /// permissions: 0o644, 39 + /// description: b"Example of i32", 40 + /// }, 41 + /// }, 42 + /// } 43 + /// 44 + /// struct MyModule; 45 + /// 46 + /// impl kernel::Module for MyModule { 47 + /// fn init() -> Result<Self> { 48 + /// // If the parameter is writeable, then the kparam lock must be 49 + /// // taken to read the parameter: 50 + /// { 51 + /// let lock = THIS_MODULE.kernel_param_lock(); 52 + /// pr_info!("i32 param is: {}\n", writeable_i32.read(&lock)); 53 + /// } 54 + /// // If the parameter is read only, it can be read without locking 55 + /// // the kernel parameters: 56 + /// pr_info!("i32 param is: {}\n", my_i32.read()); 57 + /// Ok(Self) 58 + /// } 59 + /// } 60 + /// ``` 61 + /// 62 + /// # Supported argument types 63 + /// - `type`: type which implements the [`Module`] trait (required). 64 + /// - `name`: byte array of the name of the kernel module (required). 65 + /// - `author`: byte array of the author of the kernel module. 
66 + /// - `description`: byte array of the description of the kernel module. 67 + /// - `license`: byte array of the license of the kernel module (required). 68 + /// - `alias`: byte array of alias name of the kernel module. 69 + #[proc_macro] 70 + pub fn module(ts: TokenStream) -> TokenStream { 71 + module::module(ts) 72 + }
+282
rust/macros/module.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + use crate::helpers::*; 4 + use proc_macro::{token_stream, Literal, TokenStream, TokenTree}; 5 + use std::fmt::Write; 6 + 7 + struct ModInfoBuilder<'a> { 8 + module: &'a str, 9 + counter: usize, 10 + buffer: String, 11 + } 12 + 13 + impl<'a> ModInfoBuilder<'a> { 14 + fn new(module: &'a str) -> Self { 15 + ModInfoBuilder { 16 + module, 17 + counter: 0, 18 + buffer: String::new(), 19 + } 20 + } 21 + 22 + fn emit_base(&mut self, field: &str, content: &str, builtin: bool) { 23 + let string = if builtin { 24 + // Built-in modules prefix their modinfo strings by `module.`. 25 + format!( 26 + "{module}.{field}={content}\0", 27 + module = self.module, 28 + field = field, 29 + content = content 30 + ) 31 + } else { 32 + // Loadable modules' modinfo strings go as-is. 33 + format!("{field}={content}\0", field = field, content = content) 34 + }; 35 + 36 + write!( 37 + &mut self.buffer, 38 + " 39 + {cfg} 40 + #[doc(hidden)] 41 + #[link_section = \".modinfo\"] 42 + #[used] 43 + pub static __{module}_{counter}: [u8; {length}] = *{string}; 44 + ", 45 + cfg = if builtin { 46 + "#[cfg(not(MODULE))]" 47 + } else { 48 + "#[cfg(MODULE)]" 49 + }, 50 + module = self.module.to_uppercase(), 51 + counter = self.counter, 52 + length = string.len(), 53 + string = Literal::byte_string(string.as_bytes()), 54 + ) 55 + .unwrap(); 56 + 57 + self.counter += 1; 58 + } 59 + 60 + fn emit_only_builtin(&mut self, field: &str, content: &str) { 61 + self.emit_base(field, content, true) 62 + } 63 + 64 + fn emit_only_loadable(&mut self, field: &str, content: &str) { 65 + self.emit_base(field, content, false) 66 + } 67 + 68 + fn emit(&mut self, field: &str, content: &str) { 69 + self.emit_only_builtin(field, content); 70 + self.emit_only_loadable(field, content); 71 + } 72 + } 73 + 74 + #[derive(Debug, Default)] 75 + struct ModuleInfo { 76 + type_: String, 77 + license: String, 78 + name: String, 79 + author: Option<String>, 80 + description: Option<String>, 
81 + alias: Option<String>, 82 + } 83 + 84 + impl ModuleInfo { 85 + fn parse(it: &mut token_stream::IntoIter) -> Self { 86 + let mut info = ModuleInfo::default(); 87 + 88 + const EXPECTED_KEYS: &[&str] = 89 + &["type", "name", "author", "description", "license", "alias"]; 90 + const REQUIRED_KEYS: &[&str] = &["type", "name", "license"]; 91 + let mut seen_keys = Vec::new(); 92 + 93 + loop { 94 + let key = match it.next() { 95 + Some(TokenTree::Ident(ident)) => ident.to_string(), 96 + Some(_) => panic!("Expected Ident or end"), 97 + None => break, 98 + }; 99 + 100 + if seen_keys.contains(&key) { 101 + panic!( 102 + "Duplicated key \"{}\". Keys can only be specified once.", 103 + key 104 + ); 105 + } 106 + 107 + assert_eq!(expect_punct(it), ':'); 108 + 109 + match key.as_str() { 110 + "type" => info.type_ = expect_ident(it), 111 + "name" => info.name = expect_byte_string(it), 112 + "author" => info.author = Some(expect_byte_string(it)), 113 + "description" => info.description = Some(expect_byte_string(it)), 114 + "license" => info.license = expect_byte_string(it), 115 + "alias" => info.alias = Some(expect_byte_string(it)), 116 + _ => panic!( 117 + "Unknown key \"{}\". Valid keys are: {:?}.", 118 + key, EXPECTED_KEYS 119 + ), 120 + } 121 + 122 + assert_eq!(expect_punct(it), ','); 123 + 124 + seen_keys.push(key); 125 + } 126 + 127 + expect_end(it); 128 + 129 + for key in REQUIRED_KEYS { 130 + if !seen_keys.iter().any(|e| e == key) { 131 + panic!("Missing required key \"{}\".", key); 132 + } 133 + } 134 + 135 + let mut ordered_keys: Vec<&str> = Vec::new(); 136 + for key in EXPECTED_KEYS { 137 + if seen_keys.iter().any(|e| e == key) { 138 + ordered_keys.push(key); 139 + } 140 + } 141 + 142 + if seen_keys != ordered_keys { 143 + panic!( 144 + "Keys are not ordered as expected. 
Order them like: {:?}.", 145 + ordered_keys 146 + ); 147 + } 148 + 149 + info 150 + } 151 + } 152 + 153 + pub(crate) fn module(ts: TokenStream) -> TokenStream { 154 + let mut it = ts.into_iter(); 155 + 156 + let info = ModuleInfo::parse(&mut it); 157 + 158 + let mut modinfo = ModInfoBuilder::new(info.name.as_ref()); 159 + if let Some(author) = info.author { 160 + modinfo.emit("author", &author); 161 + } 162 + if let Some(description) = info.description { 163 + modinfo.emit("description", &description); 164 + } 165 + modinfo.emit("license", &info.license); 166 + if let Some(alias) = info.alias { 167 + modinfo.emit("alias", &alias); 168 + } 169 + 170 + // Built-in modules also export the `file` modinfo string. 171 + let file = 172 + std::env::var("RUST_MODFILE").expect("Unable to fetch RUST_MODFILE environmental variable"); 173 + modinfo.emit_only_builtin("file", &file); 174 + 175 + format!( 176 + " 177 + /// The module name. 178 + /// 179 + /// Used by the printing macros, e.g. [`info!`]. 180 + const __LOG_PREFIX: &[u8] = b\"{name}\\0\"; 181 + 182 + /// The \"Rust loadable module\" mark, for `scripts/is_rust_module.sh`. 183 + // 184 + // This may be best done another way later on, e.g. as a new modinfo 185 + // key or a new section. For the moment, keep it simple. 186 + #[cfg(MODULE)] 187 + #[doc(hidden)] 188 + #[used] 189 + static __IS_RUST_MODULE: () = (); 190 + 191 + static mut __MOD: Option<{type_}> = None; 192 + 193 + // SAFETY: `__this_module` is constructed by the kernel at load time and will not be 194 + // freed until the module is unloaded. 195 + #[cfg(MODULE)] 196 + static THIS_MODULE: kernel::ThisModule = unsafe {{ 197 + kernel::ThisModule::from_ptr(&kernel::bindings::__this_module as *const _ as *mut _) 198 + }}; 199 + #[cfg(not(MODULE))] 200 + static THIS_MODULE: kernel::ThisModule = unsafe {{ 201 + kernel::ThisModule::from_ptr(core::ptr::null_mut()) 202 + }}; 203 + 204 + // Loadable modules need to export the `{{init,cleanup}}_module` identifiers. 
205 + #[cfg(MODULE)] 206 + #[doc(hidden)] 207 + #[no_mangle] 208 + pub extern \"C\" fn init_module() -> core::ffi::c_int {{ 209 + __init() 210 + }} 211 + 212 + #[cfg(MODULE)] 213 + #[doc(hidden)] 214 + #[no_mangle] 215 + pub extern \"C\" fn cleanup_module() {{ 216 + __exit() 217 + }} 218 + 219 + // Built-in modules are initialized through an initcall pointer 220 + // and the identifiers need to be unique. 221 + #[cfg(not(MODULE))] 222 + #[cfg(not(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS))] 223 + #[doc(hidden)] 224 + #[link_section = \"{initcall_section}\"] 225 + #[used] 226 + pub static __{name}_initcall: extern \"C\" fn() -> core::ffi::c_int = __{name}_init; 227 + 228 + #[cfg(not(MODULE))] 229 + #[cfg(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)] 230 + core::arch::global_asm!( 231 + r#\".section \"{initcall_section}\", \"a\" 232 + __{name}_initcall: 233 + .long __{name}_init - . 234 + .previous 235 + \"# 236 + ); 237 + 238 + #[cfg(not(MODULE))] 239 + #[doc(hidden)] 240 + #[no_mangle] 241 + pub extern \"C\" fn __{name}_init() -> core::ffi::c_int {{ 242 + __init() 243 + }} 244 + 245 + #[cfg(not(MODULE))] 246 + #[doc(hidden)] 247 + #[no_mangle] 248 + pub extern \"C\" fn __{name}_exit() {{ 249 + __exit() 250 + }} 251 + 252 + fn __init() -> core::ffi::c_int {{ 253 + match <{type_} as kernel::Module>::init(&THIS_MODULE) {{ 254 + Ok(m) => {{ 255 + unsafe {{ 256 + __MOD = Some(m); 257 + }} 258 + return 0; 259 + }} 260 + Err(e) => {{ 261 + return e.to_kernel_errno(); 262 + }} 263 + }} 264 + }} 265 + 266 + fn __exit() {{ 267 + unsafe {{ 268 + // Invokes `drop()` on `__MOD`, which should be used for cleanup. 269 + __MOD = None; 270 + }} 271 + }} 272 + 273 + {modinfo} 274 + ", 275 + type_ = info.type_, 276 + name = info.name, 277 + modinfo = modinfo.buffer, 278 + initcall_section = ".initcall6.init" 279 + ) 280 + .parse() 281 + .expect("Error parsing formatted string into token stream.") 282 + }
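The `ModInfoBuilder::emit_base` logic in `module.rs` above encodes the long-standing modinfo convention: built-in modules prefix each `.modinfo` string with `module.`, loadable modules emit it bare, and both forms are NUL-terminated so the C tooling can walk the section. The string shapes alone can be checked in userspace:

```rust
// Userspace sketch of the string built by `ModInfoBuilder::emit_base`
// in rust/macros/module.rs (the real code then embeds it in a
// `#[link_section = ".modinfo"]` static; that part is omitted here).
fn modinfo_string(module: &str, field: &str, content: &str, builtin: bool) -> String {
    if builtin {
        // Built-in modules prefix their modinfo strings by `module.`.
        format!("{}.{}={}\0", module, field, content)
    } else {
        // Loadable modules' modinfo strings go as-is.
        format!("{}={}\0", field, content)
    }
}

fn main() {
    // What `modinfo.emit("license", "GPL")` produces for rust_minimal:
    assert_eq!(
        modinfo_string("rust_minimal", "license", "GPL", true),
        "rust_minimal.license=GPL\0" // built-in (=y) form
    );
    assert_eq!(
        modinfo_string("rust_minimal", "license", "GPL", false),
        "license=GPL\0" // loadable (=m) form, what `modinfo` reads
    );
}
```

`emit` simply calls both variants, guarded by `#[cfg(MODULE)]` / `#[cfg(not(MODULE))]` in the generated code, so only one of the two statics survives in any given build.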
+2
samples/Kconfig
··· 263 263 This demonstrates how a user may create their own CoreSight 264 264 configurations and easily load them into the system at runtime. 265 265 266 + source "samples/rust/Kconfig" 267 + 266 268 endif # SAMPLES 267 269 268 270 config HAVE_SAMPLE_FTRACE_DIRECT
+1
samples/Makefile
··· 35 35 obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak/ 36 36 obj-$(CONFIG_SAMPLE_CORESIGHT_SYSCFG) += coresight/ 37 37 obj-$(CONFIG_SAMPLE_FPROBE) += fprobe/ 38 + obj-$(CONFIG_SAMPLES_RUST) += rust/
+30
samples/rust/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + menuconfig SAMPLES_RUST 4 + bool "Rust samples" 5 + depends on RUST 6 + help 7 + You can build sample Rust kernel code here. 8 + 9 + If unsure, say N. 10 + 11 + if SAMPLES_RUST 12 + 13 + config SAMPLE_RUST_MINIMAL 14 + tristate "Minimal" 15 + help 16 + This option builds the Rust minimal module sample. 17 + 18 + To compile this as a module, choose M here: 19 + the module will be called rust_minimal. 20 + 21 + If unsure, say N. 22 + 23 + config SAMPLE_RUST_HOSTPROGS 24 + bool "Host programs" 25 + help 26 + This option builds the Rust host program samples. 27 + 28 + If unsure, say N. 29 + 30 + endif # SAMPLES_RUST
+5
samples/rust/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + obj-$(CONFIG_SAMPLE_RUST_MINIMAL) += rust_minimal.o 4 + 5 + subdir-$(CONFIG_SAMPLE_RUST_HOSTPROGS) += hostprogs
+3
samples/rust/hostprogs/.gitignore
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + single
+5
samples/rust/hostprogs/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + hostprogs-always-y := single 4 + 5 + single-rust := y
+7
samples/rust/hostprogs/a.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Rust single host program sample: module `a`. 4 + 5 + pub(crate) fn f(x: i32) { 6 + println!("The number is {}.", x); 7 + }
+5
samples/rust/hostprogs/b.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Rust single host program sample: module `b`. 4 + 5 + pub(crate) const CONSTANT: i32 = 42;
+12
samples/rust/hostprogs/single.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Rust single host program sample. 4 + 5 + mod a; 6 + mod b; 7 + 8 + fn main() { 9 + println!("Hello world!"); 10 + 11 + a::f(b::CONSTANT); 12 + }
+38
samples/rust/rust_minimal.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Rust minimal sample. 4 + 5 + use kernel::prelude::*; 6 + 7 + module! { 8 + type: RustMinimal, 9 + name: b"rust_minimal", 10 + author: b"Rust for Linux Contributors", 11 + description: b"Rust minimal sample", 12 + license: b"GPL", 13 + } 14 + 15 + struct RustMinimal { 16 + numbers: Vec<i32>, 17 + } 18 + 19 + impl kernel::Module for RustMinimal { 20 + fn init(_module: &'static ThisModule) -> Result<Self> { 21 + pr_info!("Rust minimal sample (init)\n"); 22 + pr_info!("Am I built-in? {}\n", !cfg!(MODULE)); 23 + 24 + let mut numbers = Vec::new(); 25 + numbers.try_push(72)?; 26 + numbers.try_push(108)?; 27 + numbers.try_push(200)?; 28 + 29 + Ok(RustMinimal { numbers }) 30 + } 31 + } 32 + 33 + impl Drop for RustMinimal { 34 + fn drop(&mut self) { 35 + pr_info!("My numbers are {:?}\n", self.numbers); 36 + pr_info!("Rust minimal sample (exit)\n"); 37 + } 38 + }
+1
scripts/.gitignore
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 /asn1_compiler 3 3 /bin2c 4 + /generate_rust_target 4 5 /insert-sys-cert 5 6 /kallsyms 6 7 /module.lds
+3 -3
scripts/Kconfig.include
··· 36 36 as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -) 37 37 38 38 # check if $(CC) and $(LD) exist 39 - $(error-if,$(failure,command -v $(CC)),compiler '$(CC)' not found) 39 + $(error-if,$(failure,command -v $(CC)),C compiler '$(CC)' not found) 40 40 $(error-if,$(failure,command -v $(LD)),linker '$(LD)' not found) 41 41 42 - # Get the compiler name, version, and error out if it is not supported. 42 + # Get the C compiler name, version, and error out if it is not supported. 43 43 cc-info := $(shell,$(srctree)/scripts/cc-version.sh $(CC)) 44 - $(error-if,$(success,test -z "$(cc-info)"),Sorry$(comma) this compiler is not supported.) 44 + $(error-if,$(success,test -z "$(cc-info)"),Sorry$(comma) this C compiler is not supported.) 45 45 cc-name := $(shell,set -- $(cc-info) && echo $1) 46 46 cc-version := $(shell,set -- $(cc-info) && echo $2) 47 47
+3
scripts/Makefile
··· 10 10 hostprogs-always-$(CONFIG_ASN1) += asn1_compiler 11 11 hostprogs-always-$(CONFIG_MODULE_SIG_FORMAT) += sign-file 12 12 hostprogs-always-$(CONFIG_SYSTEM_EXTRA_CERTIFICATE) += insert-sys-cert 13 + hostprogs-always-$(CONFIG_RUST) += generate_rust_target 14 + 15 + generate_rust_target-rust := y 13 16 14 17 HOSTCFLAGS_sorttable.o = -I$(srctree)/tools/include 15 18 HOSTLDLIBS_sorttable = -lpthread
+60
scripts/Makefile.build
··· 26 26 EXTRA_LDFLAGS := 27 27 asflags-y := 28 28 ccflags-y := 29 + rustflags-y := 29 30 cppflags-y := 30 31 ldflags-y := 31 32 ··· 271 270 272 271 $(obj)/%.lst: $(src)/%.c FORCE 273 272 $(call if_changed_dep,cc_lst_c) 273 + 274 + # Compile Rust sources (.rs) 275 + # --------------------------------------------------------------------------- 276 + 277 + rust_allowed_features := core_ffi_c 278 + 279 + rust_common_cmd = \ 280 + RUST_MODFILE=$(modfile) $(RUSTC_OR_CLIPPY) $(rust_flags) \ 281 + -Zallow-features=$(rust_allowed_features) \ 282 + -Zcrate-attr=no_std \ 283 + -Zcrate-attr='feature($(rust_allowed_features))' \ 284 + --extern alloc --extern kernel \ 285 + --crate-type rlib --out-dir $(obj) -L $(objtree)/rust/ \ 286 + --crate-name $(basename $(notdir $@)) 287 + 288 + rust_handle_depfile = \ 289 + mv $(obj)/$(basename $(notdir $@)).d $(depfile); \ 290 + sed -i '/^\#/d' $(depfile) 291 + 292 + # `--emit=obj`, `--emit=asm` and `--emit=llvm-ir` imply a single codegen unit 293 + # will be used. We explicitly request `-Ccodegen-units=1` in any case, and 294 + # the compiler shows a warning if it is not 1. However, if we ever stop 295 + # requesting it explicitly and we start using some other `--emit` that does not 296 + # imply it (and for which codegen is performed), then we would be out of sync, 297 + # i.e. the outputs we would get for the different single targets (e.g. `.ll`) 298 + # would not match each other. 
299 + 300 + quiet_cmd_rustc_o_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@ 301 + cmd_rustc_o_rs = \ 302 + $(rust_common_cmd) --emit=dep-info,obj $<; \ 303 + $(rust_handle_depfile) 304 + 305 + $(obj)/%.o: $(src)/%.rs FORCE 306 + $(call if_changed_dep,rustc_o_rs) 307 + 308 + quiet_cmd_rustc_rsi_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@ 309 + cmd_rustc_rsi_rs = \ 310 + $(rust_common_cmd) --emit=dep-info -Zunpretty=expanded $< >$@; \ 311 + command -v $(RUSTFMT) >/dev/null && $(RUSTFMT) $@; \ 312 + $(rust_handle_depfile) 313 + 314 + $(obj)/%.rsi: $(src)/%.rs FORCE 315 + $(call if_changed_dep,rustc_rsi_rs) 316 + 317 + quiet_cmd_rustc_s_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@ 318 + cmd_rustc_s_rs = \ 319 + $(rust_common_cmd) --emit=dep-info,asm $<; \ 320 + $(rust_handle_depfile) 321 + 322 + $(obj)/%.s: $(src)/%.rs FORCE 323 + $(call if_changed_dep,rustc_s_rs) 324 + 325 + quiet_cmd_rustc_ll_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@ 326 + cmd_rustc_ll_rs = \ 327 + $(rust_common_cmd) --emit=dep-info,llvm-ir $<; \ 328 + $(rust_handle_depfile) 329 + 330 + $(obj)/%.ll: $(src)/%.rs FORCE 331 + $(call if_changed_dep,rustc_ll_rs) 274 332 275 333 # Compile assembler sources (.S) 276 334 # ---------------------------------------------------------------------------
+8
scripts/Makefile.debug
··· 1 1 DEBUG_CFLAGS := 2 + DEBUG_RUSTFLAGS := 3 + 2 4 debug-flags-y := -g 3 5 4 6 ifdef CONFIG_DEBUG_INFO_SPLIT ··· 19 17 20 18 ifdef CONFIG_DEBUG_INFO_REDUCED 21 19 DEBUG_CFLAGS += -fno-var-tracking 20 + DEBUG_RUSTFLAGS += -Cdebuginfo=1 22 21 ifdef CONFIG_CC_IS_GCC 23 22 DEBUG_CFLAGS += -femit-struct-debug-baseonly 24 23 endif 24 + else 25 + DEBUG_RUSTFLAGS += -Cdebuginfo=2 25 26 endif 26 27 27 28 ifdef CONFIG_DEBUG_INFO_COMPRESSED ··· 35 30 36 31 KBUILD_CFLAGS += $(DEBUG_CFLAGS) 37 32 export DEBUG_CFLAGS 33 + 34 + KBUILD_RUSTFLAGS += $(DEBUG_RUSTFLAGS) 35 + export DEBUG_RUSTFLAGS
+31 -3
scripts/Makefile.host
··· 22 22 # to preprocess a data file. 23 23 # 24 24 # Both C and C++ are supported, but preferred language is C for such utilities. 25 + # Rust is also supported, but it may only be used in scenarios where a Rust 26 + # toolchain is required to be available (e.g. when `CONFIG_RUST` is enabled). 25 27 # 26 28 # Sample syntax (see Documentation/kbuild/makefiles.rst for reference) 27 29 # hostprogs := bin2hex ··· 39 37 # qconf-objs := menu.o 40 38 # Will compile qconf as a C++ program, and menu as a C program. 41 39 # They are linked as C++ code to the executable qconf 40 + # 41 + # hostprogs := target 42 + # target-rust := y 43 + # Will compile `target` as a Rust program, using `target.rs` as the crate root. 44 + # The crate may consist of several source files. 42 45 43 46 # C code 44 47 # Executables compiled from a single .c file 45 48 host-csingle := $(foreach m,$(hostprogs), \ 46 - $(if $($(m)-objs)$($(m)-cxxobjs),,$(m))) 49 + $(if $($(m)-objs)$($(m)-cxxobjs)$($(m)-rust),,$(m))) 47 50 48 51 # C executables linked based on several .o files 49 52 host-cmulti := $(foreach m,$(hostprogs),\ 50 - $(if $($(m)-cxxobjs),,$(if $($(m)-objs),$(m)))) 53 + $(if $($(m)-cxxobjs)$($(m)-rust),,$(if $($(m)-objs),$(m)))) 51 54 52 55 # Object (.o) files compiled from .c files 53 56 host-cobjs := $(sort $(foreach m,$(hostprogs),$($(m)-objs))) ··· 65 58 # C++ Object (.o) files compiled from .cc files 66 59 host-cxxobjs := $(sort $(foreach m,$(host-cxxmulti),$($(m)-cxxobjs))) 67 60 61 + # Rust code 62 + # Executables compiled from a single Rust crate (which may consist of 63 + # one or more .rs files) 64 + host-rust := $(foreach m,$(hostprogs),$(if $($(m)-rust),$(m))) 65 + 68 66 host-csingle := $(addprefix $(obj)/,$(host-csingle)) 69 67 host-cmulti := $(addprefix $(obj)/,$(host-cmulti)) 70 68 host-cobjs := $(addprefix $(obj)/,$(host-cobjs)) 71 69 host-cxxmulti := $(addprefix $(obj)/,$(host-cxxmulti)) 72 70 host-cxxobjs := $(addprefix $(obj)/,$(host-cxxobjs)) 71 + host-rust := 
$(addprefix $(obj)/,$(host-rust)) 73 72 74 73 ##### 75 74 # Handle options to gcc. Support building with separate output directory ··· 84 71 $(HOSTCFLAGS_$(target-stem).o) 85 72 _hostcxx_flags = $(KBUILD_HOSTCXXFLAGS) $(HOST_EXTRACXXFLAGS) \ 86 73 $(HOSTCXXFLAGS_$(target-stem).o) 74 + _hostrust_flags = $(KBUILD_HOSTRUSTFLAGS) $(HOST_EXTRARUSTFLAGS) \ 75 + $(HOSTRUSTFLAGS_$(target-stem)) 87 76 88 77 # $(objtree)/$(obj) for including generated headers from checkin source files 89 78 ifeq ($(KBUILD_EXTMOD),) ··· 97 82 98 83 hostc_flags = -Wp,-MMD,$(depfile) $(_hostc_flags) 99 84 hostcxx_flags = -Wp,-MMD,$(depfile) $(_hostcxx_flags) 85 + hostrust_flags = $(_hostrust_flags) 100 86 101 87 ##### 102 88 # Compile programs on the host ··· 144 128 $(host-cxxobjs): $(obj)/%.o: $(src)/%.cc FORCE 145 129 $(call if_changed_dep,host-cxxobjs) 146 130 131 + # Create executable from a single Rust crate (which may consist of 132 + # one or more `.rs` files) 133 + # host-rust -> Executable 134 + quiet_cmd_host-rust = HOSTRUSTC $@ 135 + cmd_host-rust = \ 136 + $(HOSTRUSTC) $(hostrust_flags) --emit=dep-info,link \ 137 + --out-dir=$(obj)/ $<; \ 138 + mv $(obj)/$(target-stem).d $(depfile); \ 139 + sed -i '/^\#/d' $(depfile) 140 + $(host-rust): $(obj)/%: $(src)/%.rs FORCE 141 + $(call if_changed_dep,host-rust) 142 + 147 143 targets += $(host-csingle) $(host-cmulti) $(host-cobjs) \ 148 - $(host-cxxmulti) $(host-cxxobjs) 144 + $(host-cxxmulti) $(host-cxxobjs) $(host-rust)
+12
scripts/Makefile.lib
··· 8 8 # flags that take effect in current and sub directories 9 9 KBUILD_AFLAGS += $(subdir-asflags-y) 10 10 KBUILD_CFLAGS += $(subdir-ccflags-y) 11 + KBUILD_RUSTFLAGS += $(subdir-rustflags-y) 11 12 12 13 # Figure out what we need to build from the various variables 13 14 # =========================================================================== ··· 129 128 $(filter-out $(ccflags-remove-y), \ 130 129 $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(ccflags-y)) \ 131 130 $(CFLAGS_$(target-stem).o)) 131 + _rust_flags = $(filter-out $(RUSTFLAGS_REMOVE_$(target-stem).o), \ 132 + $(filter-out $(rustflags-remove-y), \ 133 + $(KBUILD_RUSTFLAGS) $(rustflags-y)) \ 134 + $(RUSTFLAGS_$(target-stem).o)) 132 135 _a_flags = $(filter-out $(AFLAGS_REMOVE_$(target-stem).o), \ 133 136 $(filter-out $(asflags-remove-y), \ 134 137 $(KBUILD_CPPFLAGS) $(KBUILD_AFLAGS) $(asflags-y)) \ ··· 207 202 $(KBUILD_CFLAGS_MODULE) $(CFLAGS_MODULE), \ 208 203 $(KBUILD_CFLAGS_KERNEL) $(CFLAGS_KERNEL) $(modfile_flags)) 209 204 205 + modkern_rustflags = \ 206 + $(if $(part-of-module), \ 207 + $(KBUILD_RUSTFLAGS_MODULE) $(RUSTFLAGS_MODULE), \ 208 + $(KBUILD_RUSTFLAGS_KERNEL) $(RUSTFLAGS_KERNEL)) 209 + 210 210 modkern_aflags = $(if $(part-of-module), \ 211 211 $(KBUILD_AFLAGS_MODULE) $(AFLAGS_MODULE), \ 212 212 $(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL)) ··· 220 210 -include $(srctree)/include/linux/compiler_types.h \ 221 211 $(_c_flags) $(modkern_cflags) \ 222 212 $(basename_flags) $(modname_flags) 213 + 214 + rust_flags = $(_rust_flags) $(modkern_rustflags) @$(objtree)/include/generated/rustc_cfg 223 215 224 216 a_flags = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \ 225 217 $(_a_flags) $(modkern_aflags)
+5 -3
scripts/Makefile.modfinal
··· 39 39 40 40 quiet_cmd_btf_ko = BTF [M] $@ 41 41 cmd_btf_ko = \ 42 - if [ -f vmlinux ]; then \ 42 + if [ ! -f vmlinux ]; then \ 43 + printf "Skipping BTF generation for %s due to unavailability of vmlinux\n" $@ 1>&2; \ 44 + elif [ -n "$(CONFIG_RUST)" ] && $(srctree)/scripts/is_rust_module.sh $@; then \ 45 + printf "Skipping BTF generation for %s because it's a Rust module\n" $@ 1>&2; \ 46 + else \ 43 47 LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J $(PAHOLE_FLAGS) --btf_base vmlinux $@; \ 44 48 $(RESOLVE_BTFIDS) -b vmlinux $@; \ 45 - else \ 46 - printf "Skipping BTF generation for %s due to unavailability of vmlinux\n" $@ 1>&2; \ 47 49 fi; 48 50 49 51 # Same as newer-prereqs, but allows to exclude specified extra dependencies
+6 -6
scripts/cc-version.sh
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # 4 - # Print the compiler name and its version in a 5 or 6-digit form. 4 + # Print the C compiler name and its version in a 5 or 6-digit form. 5 5 # Also, perform the minimum version check. 6 6 7 7 set -e 8 8 9 - # Print the compiler name and some version components. 10 - get_compiler_info() 9 + # Print the C compiler name and some version components. 10 + get_c_compiler_info() 11 11 { 12 12 cat <<- EOF | "$@" -E -P -x c - 2>/dev/null 13 13 #if defined(__clang__) ··· 32 32 33 33 # $@ instead of $1 because multiple words might be given, e.g. CC="ccache gcc". 34 34 orig_args="$@" 35 - set -- $(get_compiler_info "$@") 35 + set -- $(get_c_compiler_info "$@") 36 36 37 37 name=$1 38 38 ··· 52 52 min_version=$($min_tool_version icc) 53 53 ;; 54 54 *) 55 - echo "$orig_args: unknown compiler" >&2 55 + echo "$orig_args: unknown C compiler" >&2 56 56 exit 1 57 57 ;; 58 58 esac ··· 62 62 63 63 if [ "$cversion" -lt "$min_cversion" ]; then 64 64 echo >&2 "***" 65 - echo >&2 "*** Compiler is too old." 65 + echo >&2 "*** C compiler is too old." 66 66 echo >&2 "*** Your $name version: $version" 67 67 echo >&2 "*** Minimum $name version: $min_version" 68 68 echo >&2 "***"
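The "5 or 6-digit form" mentioned in the `cc-version.sh` header packs a version triple into one integer so a plain numeric comparison decides whether the compiler is new enough. A sketch of that encoding, assuming the same `major * 10000 + minor * 100 + patch` arithmetic the script uses:

```rust
// Sketch of the canonical version encoding used by
// scripts/cc-version.sh: pack (major, minor, patch) into one integer
// so "is this compiler new enough?" is a single comparison.
fn canonical_version(major: u32, minor: u32, patch: u32) -> u32 {
    10000 * major + 100 * minor + patch
}

fn main() {
    // GCC 5.1.0 encodes in 5 digits...
    assert_eq!(canonical_version(5, 1, 0), 50100);
    // ...while Clang 11.0.0 needs 6, hence "5 or 6-digit form".
    assert_eq!(canonical_version(11, 0, 0), 110000);
    // Ordering matches version ordering as long as minor/patch < 100.
    assert!(canonical_version(12, 2, 1) > canonical_version(12, 2, 0));
}
```

The caveat in the last comment is the reason the scheme works for compilers: their minor and patch numbers stay well below 100.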
+8 -4
scripts/checkpatch.pl
··· 3616 3616 my $comment = ""; 3617 3617 if ($realfile =~ /\.(h|s|S)$/) { 3618 3618 $comment = '/*'; 3619 - } elsif ($realfile =~ /\.(c|dts|dtsi)$/) { 3619 + } elsif ($realfile =~ /\.(c|rs|dts|dtsi)$/) { 3620 3620 $comment = '//'; 3621 3621 } elsif (($checklicenseline == 2) || $realfile =~ /\.(sh|pl|py|awk|tc|yaml)$/) { 3622 3622 $comment = '#'; ··· 3664 3664 } 3665 3665 3666 3666 # check we are in a valid source file if not then ignore this hunk 3667 - next if ($realfile !~ /\.(h|c|s|S|sh|dtsi|dts)$/); 3667 + next if ($realfile !~ /\.(h|c|rs|s|S|sh|dtsi|dts)$/); 3668 3668 3669 3669 # check for using SPDX-License-Identifier on the wrong line number 3670 3670 if ($realline != $checklicenseline && ··· 6783 6783 } 6784 6784 if ($bad_specifier ne "") { 6785 6785 my $stat_real = get_stat_real($linenr, $lc); 6786 + my $msg_level = \&WARN; 6786 6787 my $ext_type = "Invalid"; 6787 6788 my $use = ""; 6788 6789 if ($bad_specifier =~ /p[Ff]/) { 6789 6790 $use = " - use %pS instead"; 6790 6791 $use =~ s/pS/ps/ if ($bad_specifier =~ /pf/); 6792 + } elsif ($bad_specifier =~ /pA/) { 6793 + $use = " - '%pA' is only intended to be used from Rust code"; 6794 + $msg_level = \&ERROR; 6791 6795 } 6792 6796 6793 - WARN("VSPRINTF_POINTER_EXTENSION", 6794 - "$ext_type vsprintf pointer extension '$bad_specifier'$use\n" . "$here\n$stat_real\n"); 6797 + &{$msg_level}("VSPRINTF_POINTER_EXTENSION", 6798 + "$ext_type vsprintf pointer extension '$bad_specifier'$use\n" . "$here\n$stat_real\n"); 6795 6799 } 6796 6800 } 6797 6801 }
+14
scripts/decode_stacktrace.sh
···
 	echo "	$0 -r <release> | <vmlinux> [<base path>|auto] [<modules path>]"
 }

+# Try to find a Rust demangler
+if type llvm-cxxfilt >/dev/null 2>&1 ; then
+	cppfilt=llvm-cxxfilt
+elif type c++filt >/dev/null 2>&1 ; then
+	cppfilt=c++filt
+	cppfilt_opts=-i
+fi
+
 if [[ $1 == "-r" ]] ; then
 	vmlinux=""
 	basepath="auto"
···
 	# In the case of inlines, move everything to same line
 	code=${code//$'\n'/' '}
+
+	# Demangle if the name looks like a Rust symbol and if
+	# we got a Rust demangler
+	if [[ $name =~ ^_R && $cppfilt != "" ]] ; then
+		name=$("$cppfilt" "$cppfilt_opts" "$name")
+	fi

 	# Replace old address with pretty line numbers
 	symbol="$segment$name ($code)"
+135
scripts/generate_rust_analyzer.py
···
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+"""generate_rust_analyzer - Generates the `rust-project.json` file for `rust-analyzer`.
+"""
+
+import argparse
+import json
+import logging
+import pathlib
+import sys
+
+def generate_crates(srctree, objtree, sysroot_src):
+    # Generate the configuration list.
+    cfg = []
+    with open(objtree / "include" / "generated" / "rustc_cfg") as fd:
+        for line in fd:
+            line = line.replace("--cfg=", "")
+            line = line.replace("\n", "")
+            cfg.append(line)
+
+    # Now fill the crates list -- dependencies need to come first.
+    #
+    # Avoid O(n^2) iterations by keeping a map of indexes.
+    crates = []
+    crates_indexes = {}
+
+    def append_crate(display_name, root_module, deps, cfg=[], is_workspace_member=True, is_proc_macro=False):
+        crates_indexes[display_name] = len(crates)
+        crates.append({
+            "display_name": display_name,
+            "root_module": str(root_module),
+            "is_workspace_member": is_workspace_member,
+            "is_proc_macro": is_proc_macro,
+            "deps": [{"crate": crates_indexes[dep], "name": dep} for dep in deps],
+            "cfg": cfg,
+            "edition": "2021",
+            "env": {
+                "RUST_MODFILE": "This is only for rust-analyzer"
+            }
+        })
+
+    # First, the ones in `rust/` since they are a bit special.
+    append_crate(
+        "core",
+        sysroot_src / "core" / "src" / "lib.rs",
+        [],
+        is_workspace_member=False,
+    )
+
+    append_crate(
+        "compiler_builtins",
+        srctree / "rust" / "compiler_builtins.rs",
+        [],
+    )
+
+    append_crate(
+        "alloc",
+        srctree / "rust" / "alloc" / "lib.rs",
+        ["core", "compiler_builtins"],
+    )
+
+    append_crate(
+        "macros",
+        srctree / "rust" / "macros" / "lib.rs",
+        [],
+        is_proc_macro=True,
+    )
+    crates[-1]["proc_macro_dylib_path"] = "rust/libmacros.so"
+
+    append_crate(
+        "bindings",
+        srctree / "rust" / "bindings" / "lib.rs",
+        ["core"],
+        cfg=cfg,
+    )
+    crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True))
+
+    append_crate(
+        "kernel",
+        srctree / "rust" / "kernel" / "lib.rs",
+        ["core", "alloc", "macros", "bindings"],
+        cfg=cfg,
+    )
+    crates[-1]["source"] = {
+        "include_dirs": [
+            str(srctree / "rust" / "kernel"),
+            str(objtree / "rust")
+        ],
+        "exclude_dirs": [],
+    }
+
+    # Then, the rest outside of `rust/`.
+    #
+    # We explicitly mention the top-level folders we want to cover.
+    for folder in ("samples", "drivers"):
+        for path in (srctree / folder).rglob("*.rs"):
+            logging.info("Checking %s", path)
+            name = path.name.replace(".rs", "")
+
+            # Skip those that are not crate roots.
+            if f"{name}.o" not in open(path.parent / "Makefile").read():
+                continue
+
+            logging.info("Adding %s", name)
+            append_crate(
+                name,
+                path,
+                ["core", "alloc", "kernel"],
+                cfg=cfg,
+            )
+
+    return crates
+
+def main():
+    parser = argparse.ArgumentParser()
+    parser.add_argument('--verbose', '-v', action='store_true')
+    parser.add_argument("srctree", type=pathlib.Path)
+    parser.add_argument("objtree", type=pathlib.Path)
+    parser.add_argument("sysroot_src", type=pathlib.Path)
+    args = parser.parse_args()
+
+    logging.basicConfig(
+        format="[%(asctime)s] [%(levelname)s] %(message)s",
+        level=logging.INFO if args.verbose else logging.WARNING
+    )
+
+    rust_project = {
+        "crates": generate_crates(args.srctree, args.objtree, args.sysroot_src),
+        "sysroot_src": str(args.sysroot_src),
+    }
+
+    json.dump(rust_project, sys.stdout, sort_keys=True, indent=4)
+
+if __name__ == "__main__":
+    main()
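The `crates_indexes` map in the generator records each crate's position as it is appended, so a dependent crate can reference its dependencies by index without scanning the list. A stripped-down model of that scheme (the real helper also records `root_module`, `cfg`, edition, and env):

```python
# Each crate records its dependencies by index into the crates list,
# so dependencies must be appended before their dependents.
crates, crates_indexes = [], {}

def append_crate(display_name, deps):
    # Simplified sketch of generate_rust_analyzer.py's helper.
    crates_indexes[display_name] = len(crates)
    crates.append({
        "display_name": display_name,
        "deps": [{"crate": crates_indexes[d], "name": d} for d in deps],
    })

append_crate("core", [])
append_crate("compiler_builtins", [])
append_crate("alloc", ["core", "compiler_builtins"])
```

Appending `alloc` after its two dependencies yields index references 0 and 1, which is the layout `rust-project.json` expects.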
+182
scripts/generate_rust_target.rs
···
+// SPDX-License-Identifier: GPL-2.0
+
+//! The custom target specification file generator for `rustc`.
+//!
+//! To configure a target from scratch, a JSON-encoded file has to be passed
+//! to `rustc` (introduced in [RFC 131]). These options and the file itself are
+//! unstable. Eventually, `rustc` should provide a way to do this in a stable
+//! manner. For instance, via command-line arguments. Therefore, this file
+//! should avoid using keys which can be set via `-C` or `-Z` options.
+//!
+//! [RFC 131]: https://rust-lang.github.io/rfcs/0131-target-specification.html
+
+use std::{
+    collections::HashMap,
+    fmt::{Display, Formatter, Result},
+    io::BufRead,
+};
+
+enum Value {
+    Boolean(bool),
+    Number(i32),
+    String(String),
+    Object(Object),
+}
+
+type Object = Vec<(String, Value)>;
+
+/// Minimal "almost JSON" generator (e.g. no `null`s, no arrays, no escaping),
+/// enough for this purpose.
+impl Display for Value {
+    fn fmt(&self, formatter: &mut Formatter<'_>) -> Result {
+        match self {
+            Value::Boolean(boolean) => write!(formatter, "{}", boolean),
+            Value::Number(number) => write!(formatter, "{}", number),
+            Value::String(string) => write!(formatter, "\"{}\"", string),
+            Value::Object(object) => {
+                formatter.write_str("{")?;
+                if let [ref rest @ .., ref last] = object[..] {
+                    for (key, value) in rest {
+                        write!(formatter, "\"{}\": {},", key, value)?;
+                    }
+                    write!(formatter, "\"{}\": {}", last.0, last.1)?;
+                }
+                formatter.write_str("}")
+            }
+        }
+    }
+}
+
+struct TargetSpec(Object);
+
+impl TargetSpec {
+    fn new() -> TargetSpec {
+        TargetSpec(Vec::new())
+    }
+}
+
+trait Push<T> {
+    fn push(&mut self, key: &str, value: T);
+}
+
+impl Push<bool> for TargetSpec {
+    fn push(&mut self, key: &str, value: bool) {
+        self.0.push((key.to_string(), Value::Boolean(value)));
+    }
+}
+
+impl Push<i32> for TargetSpec {
+    fn push(&mut self, key: &str, value: i32) {
+        self.0.push((key.to_string(), Value::Number(value)));
+    }
+}
+
+impl Push<String> for TargetSpec {
+    fn push(&mut self, key: &str, value: String) {
+        self.0.push((key.to_string(), Value::String(value)));
+    }
+}
+
+impl Push<&str> for TargetSpec {
+    fn push(&mut self, key: &str, value: &str) {
+        self.push(key, value.to_string());
+    }
+}
+
+impl Push<Object> for TargetSpec {
+    fn push(&mut self, key: &str, value: Object) {
+        self.0.push((key.to_string(), Value::Object(value)));
+    }
+}
+
+impl Display for TargetSpec {
+    fn fmt(&self, formatter: &mut Formatter<'_>) -> Result {
+        // We add some newlines for clarity.
+        formatter.write_str("{\n")?;
+        if let [ref rest @ .., ref last] = self.0[..] {
+            for (key, value) in rest {
+                write!(formatter, "    \"{}\": {},\n", key, value)?;
+            }
+            write!(formatter, "    \"{}\": {}\n", last.0, last.1)?;
+        }
+        formatter.write_str("}")
+    }
+}
+
+struct KernelConfig(HashMap<String, String>);
+
+impl KernelConfig {
+    /// Parses `include/config/auto.conf` from `stdin`.
+    fn from_stdin() -> KernelConfig {
+        let mut result = HashMap::new();
+
+        let stdin = std::io::stdin();
+        let mut handle = stdin.lock();
+        let mut line = String::new();
+
+        loop {
+            line.clear();
+
+            if handle.read_line(&mut line).unwrap() == 0 {
+                break;
+            }
+
+            if line.starts_with('#') {
+                continue;
+            }
+
+            let (key, value) = line.split_once('=').expect("Missing `=` in line.");
+            result.insert(key.to_string(), value.trim_end_matches('\n').to_string());
+        }
+
+        KernelConfig(result)
+    }
+
+    /// Does the option exist in the configuration (any value)?
+    ///
+    /// The argument must be passed without the `CONFIG_` prefix.
+    /// This avoids repetition and it also avoids `fixdep` making us
+    /// depend on it.
+    fn has(&self, option: &str) -> bool {
+        let option = "CONFIG_".to_owned() + option;
+        self.0.contains_key(&option)
+    }
+}
+
+fn main() {
+    let cfg = KernelConfig::from_stdin();
+    let mut ts = TargetSpec::new();
+
+    // `llvm-target`s are taken from `scripts/Makefile.clang`.
+    if cfg.has("X86_64") {
+        ts.push("arch", "x86_64");
+        ts.push(
+            "data-layout",
+            "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+        );
+        let mut features = "-3dnow,-3dnowa,-mmx,+soft-float".to_string();
+        if cfg.has("RETPOLINE") {
+            features += ",+retpoline-external-thunk";
+        }
+        ts.push("features", features);
+        ts.push("llvm-target", "x86_64-linux-gnu");
+        ts.push("target-pointer-width", "64");
+    } else {
+        panic!("Unsupported architecture");
+    }
+
+    ts.push("emit-debug-gdb-scripts", false);
+    ts.push("frame-pointer", "may-omit");
+    ts.push(
+        "stack-probes",
+        vec![("kind".to_string(), Value::String("none".to_string()))],
+    );
+
+    // Everything else is LE, whether `CPU_LITTLE_ENDIAN` is declared or not
+    // (e.g. x86). It is also `rustc`'s default.
+    if cfg.has("CPU_BIG_ENDIAN") {
+        ts.push("target-endian", "big");
+    }
+
+    println!("{}", ts);
+}
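The generator's `Display` implementations emit a deliberately minimal "almost JSON" form: an ordered list of key/value pairs with no nulls, arrays, or string escaping. That serialization can be modeled compactly in Python (a sketch; the real Rust code also special-cases newlines at the top level):

```python
def to_almost_json(obj):
    # obj is an ordered list of (key, value) pairs; values may be bool,
    # int, str, or a nested pair list, matching the generator's Value enum.
    def fmt(v):
        if isinstance(v, bool):          # check bool before int: bool is an int subclass
            return "true" if v else "false"
        if isinstance(v, int):
            return str(v)
        if isinstance(v, str):
            return '"%s"' % v            # no escaping, as in the original
        return to_almost_json(v)
    return "{" + ", ".join('"%s": %s' % (k, fmt(v)) for k, v in obj) + "}"
```

Keeping the pairs ordered (rather than using a dict) preserves the emission order of the `push` calls, which is why the Rust code uses a `Vec<(String, Value)>` instead of a map.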
+16
scripts/is_rust_module.sh
···
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# is_rust_module.sh module.ko
+#
+# Returns `0` if `module.ko` is a Rust module, `1` otherwise.
+
+set -e
+
+# Using the `16_` prefix ensures other symbols with the same substring
+# are not picked up (even if it would be unlikely). The last part is
+# used just in case LLVM decides to use the `.` suffix.
+#
+# In the future, checking for the `.comment` section may be another
+# option, see https://github.com/rust-lang/rust/pull/97550.
+${NM} "$*" | grep -qE '^[0-9a-fA-F]+ r _R[^[:space:]]+16___IS_RUST_MODULE[^[:space:]]*$'
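The same check can be exercised outside the build with the equivalent regular expression in Python. The sample `nm` line below is fabricated for illustration; `16` is the length of `__IS_RUST_MODULE`, and in the v0 mangling scheme identifiers that begin with an underscore get an extra `_` separator after the length, hence `16___IS_RUST_MODULE`:

```python
import re

# Same pattern as the grep in is_rust_module.sh: a read-only ("r") symbol
# whose Rust-mangled name contains the length-prefixed marker identifier.
RUST_MARKER = re.compile(r'^[0-9a-fA-F]+ r _R\S+16___IS_RUST_MODULE\S*$')

def is_rust_module(nm_output):
    # nm_output: the text produced by `nm module.ko`.
    return any(RUST_MARKER.match(line) for line in nm_output.splitlines())
```

The `_RNvCs_5mymod` path component in the test below is made up; only the `_R` prefix and the `16___IS_RUST_MODULE` suffix carry the signal.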
+46 -7
scripts/kallsyms.c
···
 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))

-#define KSYM_NAME_LEN 128
+#define _stringify_1(x)	#x
+#define _stringify(x)	_stringify_1(x)
+
+#define KSYM_NAME_LEN 512
+
+/*
+ * A substantially bigger size than the current maximum.
+ *
+ * It cannot be defined as an expression because it gets stringified
+ * for the fscanf() format string. Therefore, a _Static_assert() is
+ * used instead to maintain the relationship with KSYM_NAME_LEN.
+ */
+#define KSYM_NAME_LEN_BUFFER 2048
+_Static_assert(
+	KSYM_NAME_LEN_BUFFER == KSYM_NAME_LEN * 4,
+	"Please keep KSYM_NAME_LEN_BUFFER in sync with KSYM_NAME_LEN"
+);

 struct sym_entry {
 	unsigned long long addr;
···
 static struct sym_entry *read_symbol(FILE *in)
 {
-	char name[500], type;
+	char name[KSYM_NAME_LEN_BUFFER+1], type;
 	unsigned long long addr;
 	unsigned int len;
 	struct sym_entry *sym;
 	int rc;

-	rc = fscanf(in, "%llx %c %499s\n", &addr, &type, name);
+	rc = fscanf(in, "%llx %c %" _stringify(KSYM_NAME_LEN_BUFFER) "s\n", &addr, &type, name);
 	if (rc != 3) {
-		if (rc != EOF && fgets(name, 500, in) == NULL)
+		if (rc != EOF && fgets(name, ARRAY_SIZE(name), in) == NULL)
 			fprintf(stderr, "Read error or end of file.\n");
 		return NULL;
 	}
···
 		if ((i & 0xFF) == 0)
 			markers[i >> 8] = off;

-		printf("\t.byte 0x%02x", table[i]->len);
+		/* There cannot be any symbol of length zero. */
+		if (table[i]->len == 0) {
+			fprintf(stderr, "kallsyms failure: "
+				"unexpected zero symbol length\n");
+			exit(EXIT_FAILURE);
+		}
+
+		/* Only lengths that fit in up-to-two-byte ULEB128 are supported. */
+		if (table[i]->len > 0x3FFF) {
+			fprintf(stderr, "kallsyms failure: "
+				"unexpected huge symbol length\n");
+			exit(EXIT_FAILURE);
+		}
+
+		/* Encode length with ULEB128. */
+		if (table[i]->len <= 0x7F) {
+			/* Most symbols use a single byte for the length. */
+			printf("\t.byte 0x%02x", table[i]->len);
+			off += table[i]->len + 1;
+		} else {
+			/* "Big" symbols use two bytes. */
+			printf("\t.byte 0x%02x, 0x%02x",
+			       (table[i]->len & 0x7F) | 0x80,
+			       (table[i]->len >> 7) & 0x7F);
+			off += table[i]->len + 2;
+		}
 		for (k = 0; k < table[i]->len; k++)
 			printf(", 0x%02x", table[i]->sym[k]);
 		printf("\n");
-
-		off += table[i]->len + 1;
 	}
 	printf("\n");
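The length encoding above is standard ULEB128, restricted to at most two bytes: lengths up to 0x7F take one byte, and lengths up to 0x3FFF take two, with the low 7 bits first and the continuation bit (0x80) set on them. A small Python model (the function name is hypothetical):

```python
def encode_sym_len(length):
    # ULEB128 restricted to two bytes, as in scripts/kallsyms.c.
    assert 0 < length <= 0x3FFF, "zero and huge lengths are rejected"
    if length <= 0x7F:
        # Most symbols: a single length byte, no continuation bit.
        return [length]
    # "Big" symbols: low 7 bits with the continuation bit, then the rest.
    return [(length & 0x7F) | 0x80, (length >> 7) & 0x7F]
```

For example, a 200-byte symbol name encodes as `0xC8 0x01`, since 200 = 0b1_1001000: the low seven bits 0b1001000 plus the 0x80 continuation bit give 0xC8, and the remaining high bit gives 0x01.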
+75
scripts/kconfig/confdata.c
···
 	return name ? name : "include/generated/autoconf.h";
 }

+static const char *conf_get_rustccfg_name(void)
+{
+	char *name = getenv("KCONFIG_RUSTCCFG");
+
+	return name ? name : "include/generated/rustc_cfg";
+}
+
 static int conf_set_sym_val(struct symbol *sym, int def, int def_flags, char *p)
 {
 	char *p2;
···
 static void conf_write_heading(FILE *fp, const struct comment_style *cs)
 {
+	if (!cs)
+		return;
+
 	fprintf(fp, "%s\n", cs->prefix);

 	fprintf(fp, "%s Automatically generated file; DO NOT EDIT.\n",
···
 		val_prefix, val);

 	free(escaped);
+}
+
+static void print_symbol_for_rustccfg(FILE *fp, struct symbol *sym)
+{
+	const char *val;
+	const char *val_prefix = "";
+	char *val_prefixed = NULL;
+	size_t val_prefixed_len;
+	char *escaped = NULL;
+
+	if (sym->type == S_UNKNOWN)
+		return;
+
+	val = sym_get_string_value(sym);
+
+	switch (sym->type) {
+	case S_BOOLEAN:
+	case S_TRISTATE:
+		/*
+		 * We do not care about disabled ones, i.e. no need for
+		 * what otherwise are "comments" in other printers.
+		 */
+		if (*val == 'n')
+			return;
+
+		/*
+		 * To have similar functionality to the C macro `IS_ENABLED()`
+		 * we provide an empty `--cfg CONFIG_X` here in both `y`
+		 * and `m` cases.
+		 *
+		 * Then, the common `fprintf()` below will also give us
+		 * a `--cfg CONFIG_X="y"` or `--cfg CONFIG_X="m"`, which can
+		 * be used as the equivalent of `IS_BUILTIN()`/`IS_MODULE()`.
+		 */
+		fprintf(fp, "--cfg=%s%s\n", CONFIG_, sym->name);
+		break;
+	case S_HEX:
+		if (val[0] != '0' || (val[1] != 'x' && val[1] != 'X'))
+			val_prefix = "0x";
+		break;
+	default:
+		break;
+	}
+
+	if (strlen(val_prefix) > 0) {
+		val_prefixed_len = strlen(val) + strlen(val_prefix) + 1;
+		val_prefixed = xmalloc(val_prefixed_len);
+		snprintf(val_prefixed, val_prefixed_len, "%s%s", val_prefix, val);
+		val = val_prefixed;
+	}
+
+	/* All values get escaped: the `--cfg` option only takes strings */
+	escaped = escape_string_value(val);
+	val = escaped;
+
+	fprintf(fp, "--cfg=%s%s=%s\n", CONFIG_, sym->name, val);
+
+	free(escaped);
+	free(val_prefixed);
 }

 /*
···
 	ret = __conf_write_autoconf(conf_get_autoheader_name(),
 				    print_symbol_for_c,
 				    &comment_style_c);
+	if (ret)
+		return ret;
+
+	ret = __conf_write_autoconf(conf_get_rustccfg_name(),
+				    print_symbol_for_rustccfg,
+				    NULL);
 	if (ret)
 		return ret;
+6
scripts/min-tool-version.sh
···
 		echo 11.0.0
 	fi
 	;;
+rustc)
+	echo 1.62.0
+	;;
+bindgen)
+	echo 0.56.0
+	;;
 *)
 	echo "$1: unknown tool" >&2
 	exit 1
+160
scripts/rust_is_available.sh
···
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# Tests whether a suitable Rust toolchain is available.
+#
+# Pass `-v` for human output and more checks (as warnings).
+
+set -e
+
+min_tool_version=$(dirname $0)/min-tool-version.sh
+
+# Convert the version string x.y.z to a canonical up-to-7-digits form.
+#
+# Note that this function uses one more digit (compared to other
+# instances in other version scripts) to give a bit more space to
+# `rustc` since it will reach 1.100.0 in late 2026.
+get_canonical_version()
+{
+	IFS=.
+	set -- $1
+	echo $((100000 * $1 + 100 * $2 + $3))
+}
+
+# Check that the Rust compiler exists.
+if ! command -v "$RUSTC" >/dev/null; then
+	if [ "$1" = -v ]; then
+		echo >&2 "***"
+		echo >&2 "*** Rust compiler '$RUSTC' could not be found."
+		echo >&2 "***"
+	fi
+	exit 1
+fi
+
+# Check that the Rust bindings generator exists.
+if ! command -v "$BINDGEN" >/dev/null; then
+	if [ "$1" = -v ]; then
+		echo >&2 "***"
+		echo >&2 "*** Rust bindings generator '$BINDGEN' could not be found."
+		echo >&2 "***"
+	fi
+	exit 1
+fi
+
+# Check that the Rust compiler version is suitable.
+#
+# Non-stable and distributions' versions may have a version suffix, e.g. `-dev`.
+rust_compiler_version=$( \
+	LC_ALL=C "$RUSTC" --version 2>/dev/null \
+		| head -n 1 \
+		| grep -oE '[0-9]+\.[0-9]+\.[0-9]+' \
+)
+rust_compiler_min_version=$($min_tool_version rustc)
+rust_compiler_cversion=$(get_canonical_version $rust_compiler_version)
+rust_compiler_min_cversion=$(get_canonical_version $rust_compiler_min_version)
+if [ "$rust_compiler_cversion" -lt "$rust_compiler_min_cversion" ]; then
+	if [ "$1" = -v ]; then
+		echo >&2 "***"
+		echo >&2 "*** Rust compiler '$RUSTC' is too old."
+		echo >&2 "***   Your version:    $rust_compiler_version"
+		echo >&2 "***   Minimum version: $rust_compiler_min_version"
+		echo >&2 "***"
+	fi
+	exit 1
+fi
+if [ "$1" = -v ] && [ "$rust_compiler_cversion" -gt "$rust_compiler_min_cversion" ]; then
+	echo >&2 "***"
+	echo >&2 "*** Rust compiler '$RUSTC' is too new. This may or may not work."
+	echo >&2 "***   Your version:     $rust_compiler_version"
+	echo >&2 "***   Expected version: $rust_compiler_min_version"
+	echo >&2 "***"
+fi
+
+# Check that the Rust bindings generator is suitable.
+#
+# Non-stable and distributions' versions may have a version suffix, e.g. `-dev`.
+rust_bindings_generator_version=$( \
+	LC_ALL=C "$BINDGEN" --version 2>/dev/null \
+		| head -n 1 \
+		| grep -oE '[0-9]+\.[0-9]+\.[0-9]+' \
+)
+rust_bindings_generator_min_version=$($min_tool_version bindgen)
+rust_bindings_generator_cversion=$(get_canonical_version $rust_bindings_generator_version)
+rust_bindings_generator_min_cversion=$(get_canonical_version $rust_bindings_generator_min_version)
+if [ "$rust_bindings_generator_cversion" -lt "$rust_bindings_generator_min_cversion" ]; then
+	if [ "$1" = -v ]; then
+		echo >&2 "***"
+		echo >&2 "*** Rust bindings generator '$BINDGEN' is too old."
+		echo >&2 "***   Your version:    $rust_bindings_generator_version"
+		echo >&2 "***   Minimum version: $rust_bindings_generator_min_version"
+		echo >&2 "***"
+	fi
+	exit 1
+fi
+if [ "$1" = -v ] && [ "$rust_bindings_generator_cversion" -gt "$rust_bindings_generator_min_cversion" ]; then
+	echo >&2 "***"
+	echo >&2 "*** Rust bindings generator '$BINDGEN' is too new. This may or may not work."
+	echo >&2 "***   Your version:     $rust_bindings_generator_version"
+	echo >&2 "***   Expected version: $rust_bindings_generator_min_version"
+	echo >&2 "***"
+fi
+
+# Check that the `libclang` used by the Rust bindings generator is suitable.
+bindgen_libclang_version=$( \
+	LC_ALL=C "$BINDGEN" $(dirname $0)/rust_is_available_bindgen_libclang.h 2>&1 >/dev/null \
+		| grep -F 'clang version ' \
+		| grep -oE '[0-9]+\.[0-9]+\.[0-9]+' \
+		| head -n 1 \
+)
+bindgen_libclang_min_version=$($min_tool_version llvm)
+bindgen_libclang_cversion=$(get_canonical_version $bindgen_libclang_version)
+bindgen_libclang_min_cversion=$(get_canonical_version $bindgen_libclang_min_version)
+if [ "$bindgen_libclang_cversion" -lt "$bindgen_libclang_min_cversion" ]; then
+	if [ "$1" = -v ]; then
+		echo >&2 "***"
+		echo >&2 "*** libclang (used by the Rust bindings generator '$BINDGEN') is too old."
+		echo >&2 "***   Your version:    $bindgen_libclang_version"
+		echo >&2 "***   Minimum version: $bindgen_libclang_min_version"
+		echo >&2 "***"
+	fi
+	exit 1
+fi
+
+# If the C compiler is Clang, then we can also check whether its version
+# matches the `libclang` version used by the Rust bindings generator.
+#
+# In the future, we might be able to perform a full version check, see
+# https://github.com/rust-lang/rust-bindgen/issues/2138.
+if [ "$1" = -v ]; then
+	cc_name=$($(dirname $0)/cc-version.sh "$CC" | cut -f1 -d' ')
+	if [ "$cc_name" = Clang ]; then
+		clang_version=$( \
+			LC_ALL=C "$CC" --version 2>/dev/null \
+				| sed -nE '1s:.*version ([0-9]+\.[0-9]+\.[0-9]+).*:\1:p'
+		)
+		if [ "$clang_version" != "$bindgen_libclang_version" ]; then
+			echo >&2 "***"
+			echo >&2 "*** libclang (used by the Rust bindings generator '$BINDGEN')"
+			echo >&2 "*** version does not match Clang's. This may be a problem."
+			echo >&2 "***   libclang version: $bindgen_libclang_version"
+			echo >&2 "***   Clang version:    $clang_version"
+			echo >&2 "***"
+		fi
+	fi
+fi
+
+# Check that the source code for the `core` standard library exists.
+#
+# `$KRUSTFLAGS` is passed in case the user added `--sysroot`.
+rustc_sysroot=$("$RUSTC" $KRUSTFLAGS --print sysroot)
+rustc_src=${RUST_LIB_SRC:-"$rustc_sysroot/lib/rustlib/src/rust/library"}
+rustc_src_core="$rustc_src/core/src/lib.rs"
+if [ ! -e "$rustc_src_core" ]; then
+	if [ "$1" = -v ]; then
+		echo >&2 "***"
+		echo >&2 "*** Source code for the 'core' standard library could not be found"
+		echo >&2 "*** at '$rustc_src_core'."
+		echo >&2 "***"
+	fi
+	exit 1
+fi
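`get_canonical_version` maps `x.y.z` to `100000*x + 100*y + z`, so the 1.62.0 minimum becomes 106200, and the extra digit keeps the ordering correct once `rustc` minor versions pass 99. The same arithmetic in Python:

```python
def get_canonical_version(version):
    # Same computation as the shell helper: one extra digit of room for
    # the minor component compared to cc-version.sh, since rustc is
    # expected to reach 1.100.0 in late 2026.
    x, y, z = (int(part) for part in version.split("."))
    return 100000 * x + 100 * y + z
```

With only the usual five or six digits, 1.100.0 would collapse into the same range as 2.0.0; the seven-digit form keeps 1.62.0 < 1.100.0 < 2.0.0.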
+2
scripts/rust_is_available_bindgen_libclang.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#pragma message("clang version " __clang_version__)
+1 -1
tools/include/linux/kallsyms.h
···
 #include <stdio.h>
 #include <unistd.h>

-#define KSYM_NAME_LEN 128
+#define KSYM_NAME_LEN 512

 struct module;
+1 -1
tools/lib/perf/include/perf/event.h
···
 };

 #ifndef KSYM_NAME_LEN
-#define KSYM_NAME_LEN 256
+#define KSYM_NAME_LEN 512
 #endif

 struct perf_record_ksymbol {
+1 -1
tools/lib/symbol/kallsyms.h
···
 #include <linux/types.h>

 #ifndef KSYM_NAME_LEN
-#define KSYM_NAME_LEN 256
+#define KSYM_NAME_LEN 512
 #endif

 static inline u8 kallsyms2elf_binding(char type)