Clone of https://github.com/NixOS/nixpkgs.git (to stress-test knotserver)
at fix-function-merge
From b256fc507d4710287b22077834c16d18cee4ab17 Mon Sep 17 00:00:00 2001
From: Miguel Ojeda <ojeda@kernel.org>
Date: Sat, 17 Feb 2024 01:27:17 +0100
Subject: [PATCH] rust: upgrade to Rust 1.77.1

This is the next upgrade to the Rust toolchain, from 1.76.0 to 1.77.1
(i.e. the latest) [1].

See the upgrade policy [2] and the comments on the first upgrade in
commit 3ed03f4da06e ("rust: upgrade to Rust 1.68.2").

# Unstable features

The `offset_of` feature (single-field `offset_of!`) that we were using
got stabilized in Rust 1.77.0 [3].

Therefore, the only unstable feature now allowed to be used outside the
`kernel` crate is `new_uninit`, though other code to be upstreamed may
increase the list.

Please see [4] for details.
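
As a quick illustration (a made-up `Example` struct, not kernel code),
the single-field form now compiles on stable 1.77.0 with no feature
gate:

    use core::mem::offset_of;

    #[repr(C)]
    pub struct Example {
        pub a: u8,
        pub b: u32,
    }

    // Stable as of Rust 1.77.0; only the nested/multi-field forms of
    // `offset_of!` remain unstable.
    pub const B_OFFSET: usize = offset_of!(Example, b);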

# Required changes

Rust 1.77.0 merged the `unused_tuple_struct_fields` lint into `dead_code`,
thus upgrading it from `allow` to `warn` [5]. In turn, this made `rustc`
complain about `ThisModule`'s pointer field never being read, but
the previous patch adds the `as_ptr` method to it, needed by Binder [6],
so that we do not need to locally `allow` it.
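
The effect is akin to this sketch, with a hypothetical `ModuleHandle`
standing in for the actual `ThisModule`:

    // A tuple struct whose only field is written but never read: under
    // 1.77.0, `dead_code` (which absorbed `unused_tuple_struct_fields`)
    // warns about the field...
    pub struct ModuleHandle(*mut u8);

    impl ModuleHandle {
        // ...and a public accessor such as this counts as a read, so no
        // local `#[allow(dead_code)]` is needed.
        pub fn as_ptr(&self) -> *mut u8 {
            self.0
        }
    }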

# Other changes

Rust 1.77.0 introduces the `--check-cfg` feature [7], for which there
is a Call for Testing going on [8]. We were requested to test it and
we found it useful [9] -- we will likely enable it in the future.

# `alloc` upgrade and reviewing

The vast majority of changes are due to our `alloc` fork being upgraded
at once.

There are two kinds of changes to be aware of: the ones coming from
upstream, which we should follow as closely as possible, and the updates
needed in our added fallible APIs to keep them matching the newer
infallible APIs coming from upstream.

Instead of taking a look at the diff of this patch, an alternative
approach is reviewing a diff of the changes between upstream `alloc` and
the kernel's. This makes it easy to inspect only the kernel additions,
especially to check if the fallible methods we already have still match
the infallible ones in the new version coming from upstream.

Another approach is reviewing the changes introduced in the additions in
the kernel fork between the two versions. This is useful to spot
potentially unintended changes to our additions.

To apply these approaches, one may follow steps similar to the following
to generate a pair of patches that show the differences between upstream
Rust and the kernel (for the subset of `alloc` we use) before and after
applying this patch:

    # Get the difference with respect to the old version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > old.patch
    git -C linux restore rust/alloc

    # Apply this patch.
    git -C linux am rust-upgrade.patch

    # Get the difference with respect to the new version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > new.patch
    git -C linux restore rust/alloc

Now one may check the `new.patch` to take a look at the additions (first
approach) or at the difference between those two patches (second
approach). For the latter, a side-by-side tool is recommended.

Link: https://github.com/rust-lang/rust/blob/stable/RELEASES.md#version-1770-2024-03-21 [1]
Link: https://rust-for-linux.com/rust-version-policy [2]
Link: https://github.com/rust-lang/rust/pull/118799 [3]
Link: https://github.com/Rust-for-Linux/linux/issues/2 [4]
Link: https://github.com/rust-lang/rust/pull/118297 [5]
Link: https://lore.kernel.org/rust-for-linux/20231101-rust-binder-v1-2-08ba9197f637@google.com/#Z31rust:kernel:lib.rs [6]
Link: https://doc.rust-lang.org/nightly/unstable-book/compiler-flags/check-cfg.html [7]
Link: https://github.com/rust-lang/rfcs/pull/3013#issuecomment-1936648479 [8]
Link: https://github.com/rust-lang/rust/issues/82450#issuecomment-1947462977 [9]
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Tested-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20240217002717.57507-1-ojeda@kernel.org
[ Upgraded to 1.77.1. Removed `allow(dead_code)` thanks to the previous
  patch. Reworded accordingly. No changes to `alloc` during the beta. ]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Alyssa Ross <hi@alyssa.is>
---
 Documentation/process/changes.rst |   2 +-
 rust/alloc/alloc.rs               |   6 +-
 rust/alloc/boxed.rs               |   4 +-
 rust/alloc/lib.rs                 |   7 +-
 rust/alloc/raw_vec.rs             |  13 ++--
 rust/alloc/slice.rs               |   4 +-
 rust/alloc/vec/into_iter.rs       | 104 +++++++++++++++++++-----------
 rust/alloc/vec/mod.rs             | 101 ++++++++++++++++++++---------
 rust/kernel/lib.rs                |   1 -
 scripts/Makefile.build            |   2 +-
 scripts/min-tool-version.sh       |   2 +-
 11 files changed, 158 insertions(+), 88 deletions(-)

diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
index 7ef8de58f7f8..b5d3107c6734 100644
--- a/Documentation/process/changes.rst
+++ b/Documentation/process/changes.rst
@@ -31,7 +31,7 @@ you probably needn't concern yourself with pcmciautils.
 ====================== ===============  ========================================
 GNU C                  5.1              gcc --version
 Clang/LLVM (optional)  13.0.1           clang --version
-Rust (optional)        1.76.0           rustc --version
+Rust (optional)        1.77.1           rustc --version
 bindgen (optional)     0.65.1           bindgen --version
 GNU make               3.82             make --version
 bash                   4.2              bash --version
diff --git a/rust/alloc/alloc.rs b/rust/alloc/alloc.rs
index abb791cc2371..b1204f87227b 100644
--- a/rust/alloc/alloc.rs
+++ b/rust/alloc/alloc.rs
@@ -5,7 +5,7 @@
 #![stable(feature = "alloc_module", since = "1.28.0")]
 
 #[cfg(not(test))]
-use core::intrinsics;
+use core::hint;
 
 #[cfg(not(test))]
 use core::ptr::{self, NonNull};
@@ -210,7 +210,7 @@ unsafe fn grow_impl(
                 let new_size = new_layout.size();
 
                 // `realloc` probably checks for `new_size >= old_layout.size()` or something similar.
-                intrinsics::assume(new_size >= old_layout.size());
+                hint::assert_unchecked(new_size >= old_layout.size());
 
                 let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size);
                 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
@@ -301,7 +301,7 @@ unsafe fn shrink(
             // SAFETY: `new_size` is non-zero. Other conditions must be upheld by the caller
             new_size if old_layout.align() == new_layout.align() => unsafe {
                 // `realloc` probably checks for `new_size <= old_layout.size()` or something similar.
-                intrinsics::assume(new_size <= old_layout.size());
+                hint::assert_unchecked(new_size <= old_layout.size());
 
                 let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size);
                 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
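
These hunks swap the perma-unstable `intrinsics::assume` for
`core::hint::assert_unchecked`, still gated behind
`hint_assert_unchecked` in 1.77 (it reached stable later, in 1.81). A
standalone sketch of the same optimizer hint, unrelated to the kernel
code:

    // Tells the optimizer a condition holds without emitting a runtime
    // check; UB if the condition is ever false.
    pub fn first_half(v: &[u8]) -> &[u8] {
        let mid = v.len() / 2;
        // SAFETY: `mid <= v.len()` holds by construction, so the hint is
        // sound; it lets the optimizer drop the bounds check below.
        unsafe { core::hint::assert_unchecked(mid <= v.len()) };
        &v[..mid]
    }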
diff --git a/rust/alloc/boxed.rs b/rust/alloc/boxed.rs
index c93a22a5c97f..5fc39dfeb8e7 100644
--- a/rust/alloc/boxed.rs
+++ b/rust/alloc/boxed.rs
@@ -26,6 +26,7 @@
 //! Creating a recursive data structure:
 //!
 //! ```
+//! ##[allow(dead_code)]
 //! #[derive(Debug)]
 //! enum List<T> {
 //!     Cons(T, Box<List<T>>),
@@ -194,8 +195,7 @@
 #[fundamental]
 #[stable(feature = "rust1", since = "1.0.0")]
 // The declaration of the `Box` struct must be kept in sync with the
-// `alloc::alloc::box_free` function or ICEs will happen. See the comment
-// on `box_free` for more details.
+// compiler or ICEs will happen.
 pub struct Box<
     T: ?Sized,
     #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global,
diff --git a/rust/alloc/lib.rs b/rust/alloc/lib.rs
index 36f79c075593..39afd55ec074 100644
--- a/rust/alloc/lib.rs
+++ b/rust/alloc/lib.rs
@@ -105,7 +105,6 @@
 #![feature(allocator_api)]
 #![feature(array_chunks)]
 #![feature(array_into_iter_constructors)]
-#![feature(array_methods)]
 #![feature(array_windows)]
 #![feature(ascii_char)]
 #![feature(assert_matches)]
@@ -122,7 +121,6 @@
 #![feature(const_size_of_val)]
 #![feature(const_waker)]
 #![feature(core_intrinsics)]
-#![feature(core_panic)]
 #![feature(deprecated_suggestion)]
 #![feature(dispatch_from_dyn)]
 #![feature(error_generic_member_access)]
@@ -132,6 +130,7 @@
 #![feature(fmt_internals)]
 #![feature(fn_traits)]
 #![feature(hasher_prefixfree_extras)]
+#![feature(hint_assert_unchecked)]
 #![feature(inline_const)]
 #![feature(inplace_iteration)]
 #![feature(iter_advance_by)]
@@ -141,6 +140,8 @@
 #![feature(maybe_uninit_slice)]
 #![feature(maybe_uninit_uninit_array)]
 #![feature(maybe_uninit_uninit_array_transpose)]
+#![feature(non_null_convenience)]
+#![feature(panic_internals)]
 #![feature(pattern)]
 #![feature(ptr_internals)]
 #![feature(ptr_metadata)]
@@ -149,7 +150,6 @@
 #![feature(set_ptr_value)]
 #![feature(sized_type_properties)]
 #![feature(slice_from_ptr_range)]
-#![feature(slice_group_by)]
 #![feature(slice_ptr_get)]
 #![feature(slice_ptr_len)]
 #![feature(slice_range)]
@@ -182,6 +182,7 @@
 #![feature(const_ptr_write)]
 #![feature(const_trait_impl)]
 #![feature(const_try)]
+#![feature(decl_macro)]
 #![feature(dropck_eyepatch)]
 #![feature(exclusive_range_pattern)]
 #![feature(fundamental)]
diff --git a/rust/alloc/raw_vec.rs b/rust/alloc/raw_vec.rs
index 98b6abf30af6..1839d1c8ee7a 100644
--- a/rust/alloc/raw_vec.rs
+++ b/rust/alloc/raw_vec.rs
@@ -4,7 +4,7 @@
 
 use core::alloc::LayoutError;
 use core::cmp;
-use core::intrinsics;
+use core::hint;
 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
 use core::ptr::{self, NonNull, Unique};
 use core::slice;
@@ -317,7 +317,7 @@ fn current_memory(&self) -> Option<(NonNull<u8>, Layout)> {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Aborts
     ///
@@ -358,7 +358,7 @@ pub fn try_reserve(&mut self, len: usize, additional: usize) -> Result<(), TryRe
         }
         unsafe {
             // Inform the optimizer that the reservation has succeeded or wasn't needed
-            core::intrinsics::assume(!self.needs_to_grow(len, additional));
+            hint::assert_unchecked(!self.needs_to_grow(len, additional));
         }
         Ok(())
     }
@@ -381,7 +381,7 @@ pub fn try_reserve_for_push(&mut self, len: usize) -> Result<(), TryReserveError
     ///
     /// # Panics
    ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Aborts
     ///
@@ -402,7 +402,7 @@ pub fn try_reserve_exact(
         }
         unsafe {
             // Inform the optimizer that the reservation has succeeded or wasn't needed
-            core::intrinsics::assume(!self.needs_to_grow(len, additional));
+            hint::assert_unchecked(!self.needs_to_grow(len, additional));
         }
         Ok(())
     }
@@ -553,7 +553,7 @@ fn finish_grow<A>(
         debug_assert_eq!(old_layout.align(), new_layout.align());
         unsafe {
             // The allocator checks for alignment equality
-            intrinsics::assume(old_layout.align() == new_layout.align());
+            hint::assert_unchecked(old_layout.align() == new_layout.align());
             alloc.grow(ptr, old_layout, new_layout)
         }
     } else {
@@ -591,7 +591,6 @@ fn handle_reserve(result: Result<(), TryReserveError>) {
 // `> isize::MAX` bytes will surely fail. On 32-bit and 16-bit we need to add
 // an extra guard for this in case we're running on a platform which can use
 // all 4GB in user-space, e.g., PAE or x32.
-
 #[inline]
 fn alloc_guard(alloc_size: usize) -> Result<(), TryReserveError> {
     if usize::BITS < 64 && alloc_size > isize::MAX as usize {
diff --git a/rust/alloc/slice.rs b/rust/alloc/slice.rs
index 1181836da5f4..a36b072c9519 100644
--- a/rust/alloc/slice.rs
+++ b/rust/alloc/slice.rs
@@ -53,14 +53,14 @@
 pub use core::slice::{from_mut_ptr_range, from_ptr_range};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{from_raw_parts, from_raw_parts_mut};
+#[stable(feature = "slice_group_by", since = "1.77.0")]
+pub use core::slice::{ChunkBy, ChunkByMut};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{Chunks, Windows};
 #[stable(feature = "chunks_exact", since = "1.31.0")]
 pub use core::slice::{ChunksExact, ChunksExactMut};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{ChunksMut, Split, SplitMut};
-#[unstable(feature = "slice_group_by", issue = "80552")]
-pub use core::slice::{GroupBy, GroupByMut};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{Iter, IterMut};
 #[stable(feature = "rchunks", since = "1.31.0")]
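
The unstable `GroupBy`/`GroupByMut` re-exports become the stable
`ChunkBy`/`ChunkByMut` here; on the stable 1.77.0 slice API that looks
like this small sketch (not kernel code):

    // `chunk_by` is the stabilized rename of the old unstable
    // `group_by`: it splits a slice into maximal runs where the
    // predicate holds between neighboring elements.
    pub fn count_runs(v: &[i32]) -> usize {
        v.chunk_by(|a, b| a == b).count()
    }

    // count_runs(&[1, 1, 2, 2, 2, 3]) == 3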
diff --git a/rust/alloc/vec/into_iter.rs b/rust/alloc/vec/into_iter.rs
index 136bfe94af6c..0f11744c44b3 100644
--- a/rust/alloc/vec/into_iter.rs
+++ b/rust/alloc/vec/into_iter.rs
@@ -20,6 +20,17 @@
 use core::ptr::{self, NonNull};
 use core::slice::{self};
 
+macro non_null {
+    (mut $place:expr, $t:ident) => {{
+        #![allow(unused_unsafe)] // we're sometimes used within an unsafe block
+        unsafe { &mut *(ptr::addr_of_mut!($place) as *mut NonNull<$t>) }
+    }},
+    ($place:expr, $t:ident) => {{
+        #![allow(unused_unsafe)] // we're sometimes used within an unsafe block
+        unsafe { *(ptr::addr_of!($place) as *const NonNull<$t>) }
+    }},
+}
+
 /// An iterator that moves out of a vector.
 ///
 /// This `struct` is created by the `into_iter` method on [`Vec`](super::Vec)
@@ -43,10 +54,12 @@ pub struct IntoIter<
     // the drop impl reconstructs a RawVec from buf, cap and alloc
     // to avoid dropping the allocator twice we need to wrap it into ManuallyDrop
     pub(super) alloc: ManuallyDrop<A>,
-    pub(super) ptr: *const T,
-    pub(super) end: *const T, // If T is a ZST, this is actually ptr+len. This encoding is picked so that
-    // ptr == end is a quick test for the Iterator being empty, that works
-    // for both ZST and non-ZST.
+    pub(super) ptr: NonNull<T>,
+    /// If T is a ZST, this is actually ptr+len. This encoding is picked so that
+    /// ptr == end is a quick test for the Iterator being empty, that works
+    /// for both ZST and non-ZST.
+    /// For non-ZSTs the pointer is treated as `NonNull<T>`
+    pub(super) end: *const T,
 }
 
 #[stable(feature = "vec_intoiter_debug", since = "1.13.0")]
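
The encoding documented on the new `end` field can be seen in a toy
model (an illustration, not the real `IntoIter`): for a ZST, `end`
holds `ptr + len` in bytes, so `ptr == end` still means "empty" and
consuming an element is a one-byte decrement of `end`, never a move of
the aligned `ptr`:

    pub struct ZstCounter {
        pub ptr: *const (),
        pub end: *const (), // really `ptr + len` for a ZST
    }

    impl ZstCounter {
        pub fn next(&mut self) -> Option<()> {
            if self.ptr == self.end {
                None
            } else {
                // `ptr` must stay aligned, so shrink from the `end` side.
                self.end = self.end.wrapping_byte_sub(1);
                Some(())
            }
        }
    }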
@@ -70,7 +83,7 @@ impl<T, A: Allocator> IntoIter<T, A> {
     /// ```
     #[stable(feature = "vec_into_iter_as_slice", since = "1.15.0")]
     pub fn as_slice(&self) -> &[T] {
-        unsafe { slice::from_raw_parts(self.ptr, self.len()) }
+        unsafe { slice::from_raw_parts(self.ptr.as_ptr(), self.len()) }
     }
 
     /// Returns the remaining items of this iterator as a mutable slice.
@@ -99,7 +112,7 @@ pub fn allocator(&self) -> &A {
     }
 
     fn as_raw_mut_slice(&mut self) -> *mut [T] {
-        ptr::slice_from_raw_parts_mut(self.ptr as *mut T, self.len())
+        ptr::slice_from_raw_parts_mut(self.ptr.as_ptr(), self.len())
     }
 
     /// Drops remaining elements and relinquishes the backing allocation.
@@ -126,7 +139,7 @@ pub(super) fn forget_allocation_drop_remaining(&mut self) {
         // this creates less assembly
         self.cap = 0;
         self.buf = unsafe { NonNull::new_unchecked(RawVec::NEW.ptr()) };
-        self.ptr = self.buf.as_ptr();
+        self.ptr = self.buf;
         self.end = self.buf.as_ptr();
 
         // Dropping the remaining elements can panic, so this needs to be
@@ -138,9 +151,9 @@ pub(super) fn forget_allocation_drop_remaining(&mut self) {
 
     /// Forgets to Drop the remaining elements while still allowing the backing allocation to be freed.
     pub(crate) fn forget_remaining_elements(&mut self) {
-        // For th ZST case, it is crucial that we mutate `end` here, not `ptr`.
+        // For the ZST case, it is crucial that we mutate `end` here, not `ptr`.
         // `ptr` must stay aligned, while `end` may be unaligned.
-        self.end = self.ptr;
+        self.end = self.ptr.as_ptr();
     }
 
     #[cfg(not(no_global_oom_handling))]
@@ -162,7 +175,7 @@ pub(crate) fn into_vecdeque(self) -> VecDeque<T, A> {
             // say that they're all at the beginning of the "allocation".
             0..this.len()
         } else {
-            this.ptr.sub_ptr(buf)..this.end.sub_ptr(buf)
+            this.ptr.sub_ptr(this.buf)..this.end.sub_ptr(buf)
         };
         let cap = this.cap;
         let alloc = ManuallyDrop::take(&mut this.alloc);
@@ -189,29 +202,35 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> {
 
     #[inline]
     fn next(&mut self) -> Option<T> {
-        if self.ptr == self.end {
-            None
-        } else if T::IS_ZST {
-            // `ptr` has to stay where it is to remain aligned, so we reduce the length by 1 by
-            // reducing the `end`.
-            self.end = self.end.wrapping_byte_sub(1);
+        if T::IS_ZST {
+            if self.ptr.as_ptr() == self.end as *mut _ {
+                None
+            } else {
+                // `ptr` has to stay where it is to remain aligned, so we reduce the length by 1 by
+                // reducing the `end`.
+                self.end = self.end.wrapping_byte_sub(1);
 
-            // Make up a value of this ZST.
-            Some(unsafe { mem::zeroed() })
+                // Make up a value of this ZST.
+                Some(unsafe { mem::zeroed() })
+            }
         } else {
-            let old = self.ptr;
-            self.ptr = unsafe { self.ptr.add(1) };
+            if self.ptr == non_null!(self.end, T) {
+                None
+            } else {
+                let old = self.ptr;
+                self.ptr = unsafe { old.add(1) };
 
-            Some(unsafe { ptr::read(old) })
+                Some(unsafe { ptr::read(old.as_ptr()) })
+            }
         }
     }
 
     #[inline]
     fn size_hint(&self) -> (usize, Option<usize>) {
         let exact = if T::IS_ZST {
-            self.end.addr().wrapping_sub(self.ptr.addr())
+            self.end.addr().wrapping_sub(self.ptr.as_ptr().addr())
         } else {
-            unsafe { self.end.sub_ptr(self.ptr) }
+            unsafe { non_null!(self.end, T).sub_ptr(self.ptr) }
         };
         (exact, Some(exact))
     }
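
For the non-ZST branches above and below, the `non_null!` macro amounts
to this function-form sketch (an illustration, not part of the patch):

    use core::ptr::NonNull;

    // What `non_null!(self.end, T)` achieves in the non-ZST case:
    // treat the `*const T` stored in `end` as a `NonNull<T>`.
    pub unsafe fn end_as_non_null<T>(end: *const T) -> NonNull<T> {
        // SAFETY: in the non-ZST case `end` points into (or one past)
        // the backing buffer, so it is never null.
        NonNull::new_unchecked(end as *mut T)
    }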
@@ -219,7 +238,7 @@ fn size_hint(&self) -> (usize, Option<usize>) {
     #[inline]
     fn advance_by(&mut self, n: usize) -> Result<(), NonZeroUsize> {
         let step_size = self.len().min(n);
-        let to_drop = ptr::slice_from_raw_parts_mut(self.ptr as *mut T, step_size);
+        let to_drop = ptr::slice_from_raw_parts_mut(self.ptr.as_ptr(), step_size);
         if T::IS_ZST {
             // See `next` for why we sub `end` here.
             self.end = self.end.wrapping_byte_sub(step_size);
@@ -261,7 +280,7 @@ fn count(self) -> usize {
             // Safety: `len` indicates that this many elements are available and we just checked that
             // it fits into the array.
             unsafe {
-                ptr::copy_nonoverlapping(self.ptr, raw_ary.as_mut_ptr() as *mut T, len);
+                ptr::copy_nonoverlapping(self.ptr.as_ptr(), raw_ary.as_mut_ptr() as *mut T, len);
                 self.forget_remaining_elements();
                 return Err(array::IntoIter::new_unchecked(raw_ary, 0..len));
             }
@@ -270,7 +289,7 @@ fn count(self) -> usize {
         // Safety: `len` is larger than the array size. Copy a fixed amount here to fully initialize
         // the array.
         return unsafe {
-            ptr::copy_nonoverlapping(self.ptr, raw_ary.as_mut_ptr() as *mut T, N);
+            ptr::copy_nonoverlapping(self.ptr.as_ptr(), raw_ary.as_mut_ptr() as *mut T, N);
             self.ptr = self.ptr.add(N);
             Ok(raw_ary.transpose().assume_init())
         };
@@ -288,7 +307,7 @@ unsafe fn __iterator_get_unchecked(&mut self, i: usize) -> Self::Item
         // Also note the implementation of `Self: TrustedRandomAccess` requires
         // that `T: Copy` so reading elements from the buffer doesn't invalidate
         // them for `Drop`.
-        unsafe { if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) } }
+        unsafe { if T::IS_ZST { mem::zeroed() } else { self.ptr.add(i).read() } }
     }
 }
 
@@ -296,18 +315,25 @@ unsafe fn __iterator_get_unchecked(&mut self, i: usize) -> Self::Item
 impl<T, A: Allocator> DoubleEndedIterator for IntoIter<T, A> {
     #[inline]
     fn next_back(&mut self) -> Option<T> {
-        if self.end == self.ptr {
-            None
-        } else if T::IS_ZST {
-            // See above for why 'ptr.offset' isn't used
-            self.end = self.end.wrapping_byte_sub(1);
+        if T::IS_ZST {
+            if self.end as *mut _ == self.ptr.as_ptr() {
+                None
+            } else {
+                // See above for why 'ptr.offset' isn't used
+                self.end = self.end.wrapping_byte_sub(1);
 
-            // Make up a value of this ZST.
-            Some(unsafe { mem::zeroed() })
+                // Make up a value of this ZST.
+                Some(unsafe { mem::zeroed() })
+            }
         } else {
-            self.end = unsafe { self.end.sub(1) };
+            if non_null!(self.end, T) == self.ptr {
+                None
+            } else {
+                let new_end = unsafe { non_null!(self.end, T).sub(1) };
+                *non_null!(mut self.end, T) = new_end;
 
-            Some(unsafe { ptr::read(self.end) })
+                Some(unsafe { ptr::read(new_end.as_ptr()) })
+            }
         }
     }
 
@@ -333,7 +359,11 @@ fn advance_back_by(&mut self, n: usize) -> Result<(), NonZeroUsize> {
 #[stable(feature = "rust1", since = "1.0.0")]
 impl<T, A: Allocator> ExactSizeIterator for IntoIter<T, A> {
     fn is_empty(&self) -> bool {
-        self.ptr == self.end
+        if T::IS_ZST {
+            self.ptr.as_ptr() == self.end as *mut _
+        } else {
+            self.ptr == non_null!(self.end, T)
+        }
     }
 }
 
diff --git a/rust/alloc/vec/mod.rs b/rust/alloc/vec/mod.rs
index 220fb9d6f45b..0be27fff4554 100644
--- a/rust/alloc/vec/mod.rs
+++ b/rust/alloc/vec/mod.rs
@@ -360,7 +360,7 @@
 ///
 /// `vec![x; n]`, `vec![a, b, c, d]`, and
 /// [`Vec::with_capacity(n)`][`Vec::with_capacity`], will all produce a `Vec`
-/// with exactly the requested capacity. If <code>[len] == [capacity]</code>,
+/// with at least the requested capacity. If <code>[len] == [capacity]</code>,
 /// (as is the case for the [`vec!`] macro), then a `Vec<T>` can be converted to
 /// and from a [`Box<[T]>`][owned slice] without reallocating or moving the elements.
 ///
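
The reworded guarantee ("at least", not "exactly") is the observable
behavior; a trivial sketch:

    fn main() {
        let v: Vec<u64> = Vec::with_capacity(10);
        // The documentation now promises *at least* the requested
        // capacity, not an exact figure.
        assert!(v.capacity() >= 10);
    }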
@@ -447,7 +447,7 @@ pub const fn new() -> Self {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -690,7 +690,7 @@ pub const fn new_in(alloc: A) -> Self {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -1013,7 +1013,7 @@ pub fn capacity(&self) -> usize {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -1043,7 +1043,7 @@ pub fn reserve(&mut self, additional: usize) {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -1140,8 +1140,11 @@ pub fn try_reserve_exact(&mut self, additional: usize) -> Result<(), TryReserveE
 
     /// Shrinks the capacity of the vector as much as possible.
     ///
-    /// It will drop down as close as possible to the length but the allocator
-    /// may still inform the vector that there is space for a few more elements.
+    /// The behavior of this method depends on the allocator, which may either shrink the vector
+    /// in-place or reallocate. The resulting vector might still have some excess capacity, just as
+    /// is the case for [`with_capacity`]. See [`Allocator::shrink`] for more details.
+    ///
+    /// [`with_capacity`]: Vec::with_capacity
     ///
     /// # Examples
     ///
@@ -1191,10 +1194,10 @@ pub fn shrink_to(&mut self, min_capacity: usize) {
 
     /// Converts the vector into [`Box<[T]>`][owned slice].
     ///
-    /// If the vector has excess capacity, its items will be moved into a
-    /// newly-allocated buffer with exactly the right capacity.
+    /// Before doing the conversion, this method discards excess capacity like [`shrink_to_fit`].
     ///
     /// [owned slice]: Box
+    /// [`shrink_to_fit`]: Vec::shrink_to_fit
     ///
     /// # Examples
     ///
@@ -2017,7 +2020,7 @@ fn drop(&mut self) {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -2133,7 +2136,7 @@ pub fn pop(&mut self) -> Option<T> {
         } else {
             unsafe {
                 self.len -= 1;
-                core::intrinsics::assume(self.len < self.capacity());
+                core::hint::assert_unchecked(self.len < self.capacity());
                 Some(ptr::read(self.as_ptr().add(self.len())))
             }
         }
@@ -2143,7 +2146,7 @@ pub fn pop(&mut self) -> Option<T> {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -2315,6 +2318,12 @@ pub fn is_empty(&self) -> bool {
     /// `[at, len)`. After the call, the original vector will be left containing
     /// the elements `[0, at)` with its previous capacity unchanged.
     ///
+    /// - If you want to take ownership of the entire contents and capacity of
+    ///   the vector, see [`mem::take`] or [`mem::replace`].
+    /// - If you don't need the returned vector at all, see [`Vec::truncate`].
+    /// - If you want to take ownership of an arbitrary subslice, or you don't
+    ///   necessarily want to store the removed items in a vector, see [`Vec::drain`].
+    ///
     /// # Panics
     ///
     /// Panics if `at > len`.
@@ -2346,14 +2355,6 @@ fn assert_failed(at: usize, len: usize) -> ! {
             assert_failed(at, self.len());
         }
 
-        if at == 0 {
-            // the new vector can take over the original buffer and avoid the copy
-            return mem::replace(
-                self,
-                Vec::with_capacity_in(self.capacity(), self.allocator().clone()),
-            );
-        }
-
         let other_len = self.len - at;
         let mut other = Vec::with_capacity_in(other_len, self.allocator().clone());
 
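
With the `at == 0` fast path removed above, callers that want the whole
buffer, spare capacity included, reach for `mem::take` as the new doc
bullets suggest; a small sketch:

    fn main() {
        let mut v = Vec::with_capacity(16);
        v.extend([1, 2, 3]);
        // Take the entire contents *and* capacity without copying,
        // instead of relying on the removed `split_off(0)` fast path.
        let taken = std::mem::take(&mut v);
        assert_eq!(taken, [1, 2, 3]);
        assert_eq!(v.capacity(), 0);
    }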
@@ -3027,6 +3028,50 @@ fn index_mut(&mut self, index: I) -> &mut Self::Output {
     }
 }
 
+/// Collects an iterator into a Vec, commonly called via [`Iterator::collect()`]
+///
+/// # Allocation behavior
+///
+/// In general `Vec` does not guarantee any particular growth or allocation strategy.
+/// That also applies to this trait impl.
+///
+/// **Note:** This section covers implementation details and is therefore exempt from
+/// stability guarantees.
+///
+/// Vec may use any or none of the following strategies,
+/// depending on the supplied iterator:
+///
+/// * preallocate based on [`Iterator::size_hint()`]
+///   * and panic if the number of items is outside the provided lower/upper bounds
+/// * use an amortized growth strategy similar to `pushing` one item at a time
+/// * perform the iteration in-place on the original allocation backing the iterator
+///
+/// The last case warrants some attention. It is an optimization that in many cases reduces peak memory
+/// consumption and improves cache locality. But when big, short-lived allocations are created,
+/// only a small fraction of their items get collected, no further use is made of the spare capacity
+/// and the resulting `Vec` is moved into a longer-lived structure, then this can lead to the large
+/// allocations having their lifetimes unnecessarily extended which can result in increased memory
+/// footprint.
+///
+/// In cases where this is an issue, the excess capacity can be discarded with [`Vec::shrink_to()`],
+/// [`Vec::shrink_to_fit()`] or by collecting into [`Box<[T]>`][owned slice] instead, which additionally reduces
+/// the size of the long-lived struct.
+///
+/// [owned slice]: Box
+///
+/// ```rust
+/// # use std::sync::Mutex;
+/// static LONG_LIVED: Mutex<Vec<Vec<u16>>> = Mutex::new(Vec::new());
+///
+/// for i in 0..10 {
+///     let big_temporary: Vec<u16> = (0..1024).collect();
+///     // discard most items
+///     let mut result: Vec<_> = big_temporary.into_iter().filter(|i| i % 100 == 0).collect();
+///     // without this a lot of unused capacity might be moved into the global
+///     result.shrink_to_fit();
+///     LONG_LIVED.lock().unwrap().push(result);
+/// }
+/// ```
 #[cfg(not(no_global_oom_handling))]
 #[stable(feature = "rust1", since = "1.0.0")]
 impl<T> FromIterator<T> for Vec<T> {
@@ -3069,14 +3114,8 @@ fn into_iter(self) -> Self::IntoIter {
                 begin.add(me.len()) as *const T
             };
             let cap = me.buf.capacity();
-            IntoIter {
-                buf: NonNull::new_unchecked(begin),
-                phantom: PhantomData,
-                cap,
-                alloc,
-                ptr: begin,
-                end,
-            }
+            let buf = NonNull::new_unchecked(begin);
+            IntoIter { buf, phantom: PhantomData, cap, alloc, ptr: buf, end }
         }
     }
 }
@@ -3598,8 +3637,10 @@ fn from(s: Box<[T], A>) -> Self {
 impl<T, A: Allocator> From<Vec<T, A>> for Box<[T], A> {
     /// Convert a vector into a boxed slice.
     ///
-    /// If `v` has excess capacity, its items will be moved into a
-    /// newly-allocated buffer with exactly the right capacity.
+    /// Before doing the conversion, this method discards excess capacity like [`Vec::shrink_to_fit`].
+    ///
+    /// [owned slice]: Box
+    /// [`Vec::shrink_to_fit`]: Vec::shrink_to_fit
     ///
     /// # Examples
     ///
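
Both conversion docs above now defer to `shrink_to_fit` semantics; a
minimal sketch of the observable effect:

    fn main() {
        let mut v = Vec::with_capacity(10);
        v.extend([1, 2, 3]);
        // Excess capacity is discarded (as by `shrink_to_fit`) before
        // the buffer is handed over to the `Box<[T]>`.
        let b: Box<[i32]> = v.into();
        assert_eq!(b.len(), 3);
    }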
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 6858e2f8a3ed..9e9b245ebab5 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -16,7 +16,6 @@
 #![feature(coerce_unsized)]
 #![feature(dispatch_from_dyn)]
 #![feature(new_uninit)]
-#![feature(offset_of)]
 #![feature(receiver_trait)]
 #![feature(unsize)]
 
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 533a7799fdfe..5a6ab6d965bc 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -263,7 +263,7 @@ $(obj)/%.lst: $(src)/%.c FORCE
 # Compile Rust sources (.rs)
 # ---------------------------------------------------------------------------
 
-rust_allowed_features := new_uninit,offset_of
+rust_allowed_features := new_uninit
 
 # `--out-dir` is required to avoid temporaries being created by `rustc` in the
 # current working directory, which may be not accessible in the out-of-tree
diff --git a/scripts/min-tool-version.sh b/scripts/min-tool-version.sh
index 5927cc6b7de3..6086e00e640e 100755
--- a/scripts/min-tool-version.sh
+++ b/scripts/min-tool-version.sh
@@ -33,7 +33,7 @@ llvm)
 	fi
 	;;
 rustc)
-	echo 1.76.0
+	echo 1.77.1
 	;;
 bindgen)
 	echo 0.65.1
-- 
2.44.0