Clone of https://github.com/NixOS/nixpkgs.git (to stress-test knotserver)

kernelPatches.rust_1_77-6_8,kernelPatches.rust_1_77-6_9: update

Some changes were made when this patch was committed to rust-next.
Most importantly, the minimum rustc version was updated from 1.77.0 to
1.77.1, and if we use the latest version of the patch, we'll be able
to cleanly apply the 1.78.0 patch.

rust-next is sometimes force-pushed[1], so we shouldn't fetch from it
in a fixed-output derivation (FOD). That's why rust-1.77-6.8.patch now
has to live in-tree. In exchange, this saves us from also carrying
rust-1.78.patch in-tree, since that one can be fetched from lore.

[1]: https://github.com/Rust-for-Linux/linux/activity?ref=rust-next
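To illustrate the failure mode being avoided (a sketch with a
hypothetical URL and hash, not actual nixpkgs code): a fixed-output
derivation pins its output to a content hash recorded at packaging
time, so a fetch like

    # Hypothetical FOD fetching a patch generated from the head of the
    # rust-next branch; the hash pins whatever the branch pointed to
    # when the expression was written.
    patch = fetchurl {
      url = "https://github.com/Rust-for-Linux/linux/...rust-next....patch";
      hash = "sha256-..."; # fails to verify once rust-next is
                           # force-pushed and the content changes
    };

would start failing with a hash mismatch after any force push. Lore
URLs, by contrast, address immutable archived messages, so they are
safe to pin in a FOD.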

+881 -83
+1 -5
pkgs/os-specific/linux/kernel/patches.nix
···
 
 rust_1_77-6_8 = {
   name = "rust-1.77.patch";
-  patch = fetchurl {
-    name = "rust-1.77.patch";
-    url = "https://lore.kernel.org/rust-for-linux/20240217002717.57507-1-ojeda@kernel.org/raw";
-    hash = "sha256-0KW9nHpJeMSDssCPXWZbrN8kxq5bA434t+XuPfwslUc=";
-  };
+  patch = ./rust-1.77-6.8.patch;
 };
 
 rust_1_77-6_9 = {
+799
pkgs/os-specific/linux/kernel/rust-1.77-6.8.patch
From 82a754271336c7736fb0350692be85fecb30e38e Mon Sep 17 00:00:00 2001
From: Miguel Ojeda <ojeda@kernel.org>
Date: Sat, 17 Feb 2024 01:27:17 +0100
Subject: [PATCH] rust: upgrade to Rust 1.77.1

This is the next upgrade to the Rust toolchain, from 1.76.0 to 1.77.1
(i.e. the latest) [1].

See the upgrade policy [2] and the comments on the first upgrade in
commit 3ed03f4da06e ("rust: upgrade to Rust 1.68.2").

# Unstable features

The `offset_of` feature (single-field `offset_of!`) that we were using
got stabilized in Rust 1.77.0 [3].

Therefore, now the only unstable features allowed to be used outside the
`kernel` crate is `new_uninit`, though other code to be upstreamed may
increase the list.

Please see [4] for details.

# Required changes

Rust 1.77.0 merged the `unused_tuple_struct_fields` lint into `dead_code`,
thus upgrading it from `allow` to `warn` [5]. In turn, this made `rustc`
complain about the `ThisModule`'s pointer field being never read, but
the previous patch adds the `as_ptr` method to it, needed by Binder [6],
so that we do not need to locally `allow` it.

# Other changes

Rust 1.77.0 introduces the `--check-cfg` feature [7], for which there
is a Call for Testing going on [8]. We were requested to test it and
we found it useful [9] -- we will likely enable it in the future.

# `alloc` upgrade and reviewing

The vast majority of changes are due to our `alloc` fork being upgraded
at once.

There are two kinds of changes to be aware of: the ones coming from
upstream, which we should follow as closely as possible, and the updates
needed in our added fallible APIs to keep them matching the newer
infallible APIs coming from upstream.

Instead of taking a look at the diff of this patch, an alternative
approach is reviewing a diff of the changes between upstream `alloc` and
the kernel's. This allows to easily inspect the kernel additions only,
especially to check if the fallible methods we already have still match
the infallible ones in the new version coming from upstream.

Another approach is reviewing the changes introduced in the additions in
the kernel fork between the two versions. This is useful to spot
potentially unintended changes to our additions.

To apply these approaches, one may follow steps similar to the following
to generate a pair of patches that show the differences between upstream
Rust and the kernel (for the subset of `alloc` we use) before and after
applying this patch:

    # Get the difference with respect to the old version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > old.patch
    git -C linux restore rust/alloc

    # Apply this patch.
    git -C linux am rust-upgrade.patch

    # Get the difference with respect to the new version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > new.patch
    git -C linux restore rust/alloc

Now one may check the `new.patch` to take a look at the additions (first
approach) or at the difference between those two patches (second
approach). For the latter, a side-by-side tool is recommended.

Link: https://github.com/rust-lang/rust/blob/stable/RELEASES.md#version-1770-2024-03-21 [1]
Link: https://rust-for-linux.com/rust-version-policy [2]
Link: https://github.com/rust-lang/rust/pull/118799 [3]
Link: https://github.com/Rust-for-Linux/linux/issues/2 [4]
Link: https://github.com/rust-lang/rust/pull/118297 [5]
Link: https://lore.kernel.org/rust-for-linux/20231101-rust-binder-v1-2-08ba9197f637@google.com/#Z31rust:kernel:lib.rs [6]
Link: https://doc.rust-lang.org/nightly/unstable-book/compiler-flags/check-cfg.html [7]
Link: https://github.com/rust-lang/rfcs/pull/3013#issuecomment-1936648479 [8]
Link: https://github.com/rust-lang/rust/issues/82450#issuecomment-1947462977 [9]
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Tested-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20240217002717.57507-1-ojeda@kernel.org
[ Upgraded to 1.77.1. Removed `allow(dead_code)` thanks to the previous
  patch. Reworded accordingly. No changes to `alloc` during the beta. ]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Alyssa Ross <hi@alyssa.is>

# Conflicts:
#	Documentation/process/changes.rst
#	rust/kernel/lib.rs
---
 Documentation/process/changes.rst |   2 +-
 rust/alloc/alloc.rs               |   6 +-
 rust/alloc/boxed.rs               |   4 +-
 rust/alloc/lib.rs                 |   7 +-
 rust/alloc/raw_vec.rs             |  13 ++--
 rust/alloc/slice.rs               |   4 +-
 rust/alloc/vec/into_iter.rs       | 104 +++++++++++++++++++-----------
 rust/alloc/vec/mod.rs             | 101 ++++++++++++++++++++---------
 rust/kernel/lib.rs                |   1 -
 scripts/Makefile.build            |   2 +-
 scripts/min-tool-version.sh       |   2 +-
 11 files changed, 158 insertions(+), 88 deletions(-)

diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
index c78ecc1e176f..641d67363b92 100644
--- a/Documentation/process/changes.rst
+++ b/Documentation/process/changes.rst
@@ -31,7 +31,7 @@ you probably needn't concern yourself with pcmciautils.
 ====================== ===============  ========================================
 GNU C                  5.1              gcc --version
 Clang/LLVM (optional)  11.0.0           clang --version
-Rust (optional)        1.76.0           rustc --version
+Rust (optional)        1.77.1           rustc --version
 bindgen (optional)     0.65.1           bindgen --version
 GNU make               3.82             make --version
 bash                   4.2              bash --version
diff --git a/rust/alloc/alloc.rs b/rust/alloc/alloc.rs
index abb791cc2371..b1204f87227b 100644
--- a/rust/alloc/alloc.rs
+++ b/rust/alloc/alloc.rs
@@ -5,7 +5,7 @@
 #![stable(feature = "alloc_module", since = "1.28.0")]
 
 #[cfg(not(test))]
-use core::intrinsics;
+use core::hint;
 
 #[cfg(not(test))]
 use core::ptr::{self, NonNull};
@@ -210,7 +210,7 @@ unsafe fn grow_impl(
             let new_size = new_layout.size();
 
             // `realloc` probably checks for `new_size >= old_layout.size()` or something similar.
-            intrinsics::assume(new_size >= old_layout.size());
+            hint::assert_unchecked(new_size >= old_layout.size());
 
             let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size);
             let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
@@ -301,7 +301,7 @@ unsafe fn shrink(
             // SAFETY: `new_size` is non-zero. Other conditions must be upheld by the caller
             new_size if old_layout.align() == new_layout.align() => unsafe {
                 // `realloc` probably checks for `new_size <= old_layout.size()` or something similar.
-                intrinsics::assume(new_size <= old_layout.size());
+                hint::assert_unchecked(new_size <= old_layout.size());
 
                 let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size);
                 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
diff --git a/rust/alloc/boxed.rs b/rust/alloc/boxed.rs
index c93a22a5c97f..5fc39dfeb8e7 100644
--- a/rust/alloc/boxed.rs
+++ b/rust/alloc/boxed.rs
@@ -26,6 +26,7 @@
 //! Creating a recursive data structure:
 //!
 //! ```
+//! #[allow(dead_code)]
 //! #[derive(Debug)]
 //! enum List<T> {
 //!     Cons(T, Box<List<T>>),
@@ -194,8 +195,7 @@
 #[fundamental]
 #[stable(feature = "rust1", since = "1.0.0")]
 // The declaration of the `Box` struct must be kept in sync with the
-// `alloc::alloc::box_free` function or ICEs will happen. See the comment
-// on `box_free` for more details.
+// compiler or ICEs will happen.
 pub struct Box<
     T: ?Sized,
     #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global,
diff --git a/rust/alloc/lib.rs b/rust/alloc/lib.rs
index 36f79c075593..39afd55ec074 100644
--- a/rust/alloc/lib.rs
+++ b/rust/alloc/lib.rs
@@ -105,7 +105,6 @@
 #![feature(allocator_api)]
 #![feature(array_chunks)]
 #![feature(array_into_iter_constructors)]
-#![feature(array_methods)]
 #![feature(array_windows)]
 #![feature(ascii_char)]
 #![feature(assert_matches)]
@@ -122,7 +121,6 @@
 #![feature(const_size_of_val)]
 #![feature(const_waker)]
 #![feature(core_intrinsics)]
-#![feature(core_panic)]
 #![feature(deprecated_suggestion)]
 #![feature(dispatch_from_dyn)]
 #![feature(error_generic_member_access)]
@@ -132,6 +130,7 @@
 #![feature(fmt_internals)]
 #![feature(fn_traits)]
 #![feature(hasher_prefixfree_extras)]
+#![feature(hint_assert_unchecked)]
 #![feature(inline_const)]
 #![feature(inplace_iteration)]
 #![feature(iter_advance_by)]
@@ -141,6 +140,8 @@
 #![feature(maybe_uninit_slice)]
 #![feature(maybe_uninit_uninit_array)]
 #![feature(maybe_uninit_uninit_array_transpose)]
+#![feature(non_null_convenience)]
+#![feature(panic_internals)]
 #![feature(pattern)]
 #![feature(ptr_internals)]
 #![feature(ptr_metadata)]
@@ -149,7 +150,6 @@
 #![feature(set_ptr_value)]
 #![feature(sized_type_properties)]
 #![feature(slice_from_ptr_range)]
-#![feature(slice_group_by)]
 #![feature(slice_ptr_get)]
 #![feature(slice_ptr_len)]
 #![feature(slice_range)]
@@ -182,6 +182,7 @@
 #![feature(const_ptr_write)]
 #![feature(const_trait_impl)]
 #![feature(const_try)]
+#![feature(decl_macro)]
 #![feature(dropck_eyepatch)]
 #![feature(exclusive_range_pattern)]
 #![feature(fundamental)]
diff --git a/rust/alloc/raw_vec.rs b/rust/alloc/raw_vec.rs
index 98b6abf30af6..1839d1c8ee7a 100644
--- a/rust/alloc/raw_vec.rs
+++ b/rust/alloc/raw_vec.rs
@@ -4,7 +4,7 @@
 
 use core::alloc::LayoutError;
 use core::cmp;
-use core::intrinsics;
+use core::hint;
 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
 use core::ptr::{self, NonNull, Unique};
 use core::slice;
@@ -317,7 +317,7 @@ fn current_memory(&self) -> Option<(NonNull<u8>, Layout)> {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Aborts
     ///
@@ -358,7 +358,7 @@ pub fn try_reserve(&mut self, len: usize, additional: usize) -> Result<(), TryRe
         }
         unsafe {
             // Inform the optimizer that the reservation has succeeded or wasn't needed
-            core::intrinsics::assume(!self.needs_to_grow(len, additional));
+            hint::assert_unchecked(!self.needs_to_grow(len, additional));
         }
         Ok(())
     }
@@ -381,7 +381,7 @@ pub fn try_reserve_for_push(&mut self, len: usize) -> Result<(), TryReserveError
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Aborts
     ///
@@ -402,7 +402,7 @@ pub fn try_reserve_exact(
         }
         unsafe {
             // Inform the optimizer that the reservation has succeeded or wasn't needed
-            core::intrinsics::assume(!self.needs_to_grow(len, additional));
+            hint::assert_unchecked(!self.needs_to_grow(len, additional));
         }
         Ok(())
     }
@@ -553,7 +553,7 @@ fn finish_grow<A>(
         debug_assert_eq!(old_layout.align(), new_layout.align());
         unsafe {
             // The allocator checks for alignment equality
-            intrinsics::assume(old_layout.align() == new_layout.align());
+            hint::assert_unchecked(old_layout.align() == new_layout.align());
             alloc.grow(ptr, old_layout, new_layout)
         }
     } else {
@@ -591,7 +591,6 @@ fn handle_reserve(result: Result<(), TryReserveError>) {
 // `> isize::MAX` bytes will surely fail. On 32-bit and 16-bit we need to add
 // an extra guard for this in case we're running on a platform which can use
 // all 4GB in user-space, e.g., PAE or x32.
-
 #[inline]
 fn alloc_guard(alloc_size: usize) -> Result<(), TryReserveError> {
     if usize::BITS < 64 && alloc_size > isize::MAX as usize {
diff --git a/rust/alloc/slice.rs b/rust/alloc/slice.rs
index 1181836da5f4..a36b072c9519 100644
--- a/rust/alloc/slice.rs
+++ b/rust/alloc/slice.rs
@@ -53,14 +53,14 @@
 pub use core::slice::{from_mut_ptr_range, from_ptr_range};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{from_raw_parts, from_raw_parts_mut};
+#[stable(feature = "slice_group_by", since = "1.77.0")]
+pub use core::slice::{ChunkBy, ChunkByMut};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{Chunks, Windows};
 #[stable(feature = "chunks_exact", since = "1.31.0")]
 pub use core::slice::{ChunksExact, ChunksExactMut};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{ChunksMut, Split, SplitMut};
-#[unstable(feature = "slice_group_by", issue = "80552")]
-pub use core::slice::{GroupBy, GroupByMut};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{Iter, IterMut};
 #[stable(feature = "rchunks", since = "1.31.0")]
diff --git a/rust/alloc/vec/into_iter.rs b/rust/alloc/vec/into_iter.rs
index 136bfe94af6c..0f11744c44b3 100644
--- a/rust/alloc/vec/into_iter.rs
+++ b/rust/alloc/vec/into_iter.rs
@@ -20,6 +20,17 @@
 use core::ptr::{self, NonNull};
 use core::slice::{self};
 
+macro non_null {
+    (mut $place:expr, $t:ident) => {{
+        #![allow(unused_unsafe)] // we're sometimes used within an unsafe block
+        unsafe { &mut *(ptr::addr_of_mut!($place) as *mut NonNull<$t>) }
+    }},
+    ($place:expr, $t:ident) => {{
+        #![allow(unused_unsafe)] // we're sometimes used within an unsafe block
+        unsafe { *(ptr::addr_of!($place) as *const NonNull<$t>) }
+    }},
+}
+
 /// An iterator that moves out of a vector.
 ///
 /// This `struct` is created by the `into_iter` method on [`Vec`](super::Vec)
@@ -43,10 +54,12 @@ pub struct IntoIter<
     // the drop impl reconstructs a RawVec from buf, cap and alloc
     // to avoid dropping the allocator twice we need to wrap it into ManuallyDrop
     pub(super) alloc: ManuallyDrop<A>,
-    pub(super) ptr: *const T,
-    pub(super) end: *const T, // If T is a ZST, this is actually ptr+len. This encoding is picked so that
-    // ptr == end is a quick test for the Iterator being empty, that works
-    // for both ZST and non-ZST.
+    pub(super) ptr: NonNull<T>,
+    /// If T is a ZST, this is actually ptr+len. This encoding is picked so that
+    /// ptr == end is a quick test for the Iterator being empty, that works
+    /// for both ZST and non-ZST.
+    /// For non-ZSTs the pointer is treated as `NonNull<T>`
+    pub(super) end: *const T,
 }
 
 #[stable(feature = "vec_intoiter_debug", since = "1.13.0")]
@@ -70,7 +83,7 @@ impl<T, A: Allocator> IntoIter<T, A> {
     /// ```
     #[stable(feature = "vec_into_iter_as_slice", since = "1.15.0")]
     pub fn as_slice(&self) -> &[T] {
-        unsafe { slice::from_raw_parts(self.ptr, self.len()) }
+        unsafe { slice::from_raw_parts(self.ptr.as_ptr(), self.len()) }
     }
 
     /// Returns the remaining items of this iterator as a mutable slice.
@@ -99,7 +112,7 @@ pub fn allocator(&self) -> &A {
     }
 
     fn as_raw_mut_slice(&mut self) -> *mut [T] {
-        ptr::slice_from_raw_parts_mut(self.ptr as *mut T, self.len())
+        ptr::slice_from_raw_parts_mut(self.ptr.as_ptr(), self.len())
     }
 
     /// Drops remaining elements and relinquishes the backing allocation.
@@ -126,7 +139,7 @@ pub(super) fn forget_allocation_drop_remaining(&mut self) {
         // this creates less assembly
         self.cap = 0;
         self.buf = unsafe { NonNull::new_unchecked(RawVec::NEW.ptr()) };
-        self.ptr = self.buf.as_ptr();
+        self.ptr = self.buf;
         self.end = self.buf.as_ptr();
 
         // Dropping the remaining elements can panic, so this needs to be
@@ -138,9 +151,9 @@ pub(super) fn forget_allocation_drop_remaining(&mut self) {
 
     /// Forgets to Drop the remaining elements while still allowing the backing allocation to be freed.
     pub(crate) fn forget_remaining_elements(&mut self) {
-        // For th ZST case, it is crucial that we mutate `end` here, not `ptr`.
+        // For the ZST case, it is crucial that we mutate `end` here, not `ptr`.
         // `ptr` must stay aligned, while `end` may be unaligned.
-        self.end = self.ptr;
+        self.end = self.ptr.as_ptr();
     }
 
     #[cfg(not(no_global_oom_handling))]
@@ -162,7 +175,7 @@ pub(crate) fn into_vecdeque(self) -> VecDeque<T, A> {
             // say that they're all at the beginning of the "allocation".
             0..this.len()
         } else {
-            this.ptr.sub_ptr(buf)..this.end.sub_ptr(buf)
+            this.ptr.sub_ptr(this.buf)..this.end.sub_ptr(buf)
         };
         let cap = this.cap;
         let alloc = ManuallyDrop::take(&mut this.alloc);
@@ -189,29 +202,35 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> {
 
     #[inline]
     fn next(&mut self) -> Option<T> {
-        if self.ptr == self.end {
-            None
-        } else if T::IS_ZST {
-            // `ptr` has to stay where it is to remain aligned, so we reduce the length by 1 by
-            // reducing the `end`.
-            self.end = self.end.wrapping_byte_sub(1);
+        if T::IS_ZST {
+            if self.ptr.as_ptr() == self.end as *mut _ {
+                None
+            } else {
+                // `ptr` has to stay where it is to remain aligned, so we reduce the length by 1 by
+                // reducing the `end`.
+                self.end = self.end.wrapping_byte_sub(1);
 
-            // Make up a value of this ZST.
-            Some(unsafe { mem::zeroed() })
+                // Make up a value of this ZST.
+                Some(unsafe { mem::zeroed() })
+            }
         } else {
-            let old = self.ptr;
-            self.ptr = unsafe { self.ptr.add(1) };
+            if self.ptr == non_null!(self.end, T) {
+                None
+            } else {
+                let old = self.ptr;
+                self.ptr = unsafe { old.add(1) };
 
-            Some(unsafe { ptr::read(old) })
+                Some(unsafe { ptr::read(old.as_ptr()) })
+            }
         }
     }
 
     #[inline]
     fn size_hint(&self) -> (usize, Option<usize>) {
         let exact = if T::IS_ZST {
-            self.end.addr().wrapping_sub(self.ptr.addr())
+            self.end.addr().wrapping_sub(self.ptr.as_ptr().addr())
         } else {
-            unsafe { self.end.sub_ptr(self.ptr) }
+            unsafe { non_null!(self.end, T).sub_ptr(self.ptr) }
         };
         (exact, Some(exact))
     }
@@ -219,7 +238,7 @@ fn size_hint(&self) -> (usize, Option<usize>) {
     #[inline]
     fn advance_by(&mut self, n: usize) -> Result<(), NonZeroUsize> {
         let step_size = self.len().min(n);
-        let to_drop = ptr::slice_from_raw_parts_mut(self.ptr as *mut T, step_size);
+        let to_drop = ptr::slice_from_raw_parts_mut(self.ptr.as_ptr(), step_size);
         if T::IS_ZST {
             // See `next` for why we sub `end` here.
             self.end = self.end.wrapping_byte_sub(step_size);
@@ -261,7 +280,7 @@ fn count(self) -> usize {
             // Safety: `len` indicates that this many elements are available and we just checked that
             // it fits into the array.
             unsafe {
-                ptr::copy_nonoverlapping(self.ptr, raw_ary.as_mut_ptr() as *mut T, len);
+                ptr::copy_nonoverlapping(self.ptr.as_ptr(), raw_ary.as_mut_ptr() as *mut T, len);
                 self.forget_remaining_elements();
                 return Err(array::IntoIter::new_unchecked(raw_ary, 0..len));
             }
@@ -270,7 +289,7 @@ fn count(self) -> usize {
         // Safety: `len` is larger than the array size. Copy a fixed amount here to fully initialize
         // the array.
         return unsafe {
-            ptr::copy_nonoverlapping(self.ptr, raw_ary.as_mut_ptr() as *mut T, N);
+            ptr::copy_nonoverlapping(self.ptr.as_ptr(), raw_ary.as_mut_ptr() as *mut T, N);
             self.ptr = self.ptr.add(N);
             Ok(raw_ary.transpose().assume_init())
         };
@@ -288,7 +307,7 @@ unsafe fn __iterator_get_unchecked(&mut self, i: usize) -> Self::Item
         // Also note the implementation of `Self: TrustedRandomAccess` requires
         // that `T: Copy` so reading elements from the buffer doesn't invalidate
         // them for `Drop`.
-        unsafe { if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) } }
+        unsafe { if T::IS_ZST { mem::zeroed() } else { self.ptr.add(i).read() } }
     }
 }
 
@@ -296,18 +315,25 @@ unsafe fn __iterator_get_unchecked(&mut self, i: usize) -> Self::Item
 impl<T, A: Allocator> DoubleEndedIterator for IntoIter<T, A> {
     #[inline]
     fn next_back(&mut self) -> Option<T> {
-        if self.end == self.ptr {
-            None
-        } else if T::IS_ZST {
-            // See above for why 'ptr.offset' isn't used
-            self.end = self.end.wrapping_byte_sub(1);
+        if T::IS_ZST {
+            if self.end as *mut _ == self.ptr.as_ptr() {
+                None
+            } else {
+                // See above for why 'ptr.offset' isn't used
+                self.end = self.end.wrapping_byte_sub(1);
 
-            // Make up a value of this ZST.
-            Some(unsafe { mem::zeroed() })
+                // Make up a value of this ZST.
+                Some(unsafe { mem::zeroed() })
+            }
         } else {
-            self.end = unsafe { self.end.sub(1) };
+            if non_null!(self.end, T) == self.ptr {
+                None
+            } else {
+                let new_end = unsafe { non_null!(self.end, T).sub(1) };
+                *non_null!(mut self.end, T) = new_end;
 
-            Some(unsafe { ptr::read(self.end) })
+                Some(unsafe { ptr::read(new_end.as_ptr()) })
+            }
         }
     }
 
@@ -333,7 +359,11 @@ fn advance_back_by(&mut self, n: usize) -> Result<(), NonZeroUsize> {
 #[stable(feature = "rust1", since = "1.0.0")]
 impl<T, A: Allocator> ExactSizeIterator for IntoIter<T, A> {
     fn is_empty(&self) -> bool {
-        self.ptr == self.end
+        if T::IS_ZST {
+            self.ptr.as_ptr() == self.end as *mut _
+        } else {
+            self.ptr == non_null!(self.end, T)
+        }
     }
 }
 
diff --git a/rust/alloc/vec/mod.rs b/rust/alloc/vec/mod.rs
index 220fb9d6f45b..0be27fff4554 100644
--- a/rust/alloc/vec/mod.rs
+++ b/rust/alloc/vec/mod.rs
@@ -360,7 +360,7 @@
 ///
 /// `vec![x; n]`, `vec![a, b, c, d]`, and
 /// [`Vec::with_capacity(n)`][`Vec::with_capacity`], will all produce a `Vec`
-/// with exactly the requested capacity. If <code>[len] == [capacity]</code>,
+/// with at least the requested capacity. If <code>[len] == [capacity]</code>,
 /// (as is the case for the [`vec!`] macro), then a `Vec<T>` can be converted to
 /// and from a [`Box<[T]>`][owned slice] without reallocating or moving the elements.
 ///
@@ -447,7 +447,7 @@ pub const fn new() -> Self {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -690,7 +690,7 @@ pub const fn new_in(alloc: A) -> Self {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -1013,7 +1013,7 @@ pub fn capacity(&self) -> usize {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -1043,7 +1043,7 @@ pub fn reserve(&mut self, additional: usize) {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -1140,8 +1140,11 @@ pub fn try_reserve_exact(&mut self, additional: usize) -> Result<(), TryReserveE
 
     /// Shrinks the capacity of the vector as much as possible.
     ///
-    /// It will drop down as close as possible to the length but the allocator
-    /// may still inform the vector that there is space for a few more elements.
+    /// The behavior of this method depends on the allocator, which may either shrink the vector
+    /// in-place or reallocate. The resulting vector might still have some excess capacity, just as
+    /// is the case for [`with_capacity`]. See [`Allocator::shrink`] for more details.
+    ///
+    /// [`with_capacity`]: Vec::with_capacity
     ///
     /// # Examples
     ///
@@ -1191,10 +1194,10 @@ pub fn shrink_to(&mut self, min_capacity: usize) {
 
     /// Converts the vector into [`Box<[T]>`][owned slice].
     ///
-    /// If the vector has excess capacity, its items will be moved into a
-    /// newly-allocated buffer with exactly the right capacity.
+    /// Before doing the conversion, this method discards excess capacity like [`shrink_to_fit`].
     ///
     /// [owned slice]: Box
+    /// [`shrink_to_fit`]: Vec::shrink_to_fit
     ///
     /// # Examples
     ///
@@ -2017,7 +2020,7 @@ fn drop(&mut self) {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -2133,7 +2136,7 @@ pub fn pop(&mut self) -> Option<T> {
         } else {
             unsafe {
                 self.len -= 1;
-                core::intrinsics::assume(self.len < self.capacity());
+                core::hint::assert_unchecked(self.len < self.capacity());
                 Some(ptr::read(self.as_ptr().add(self.len())))
             }
         }
@@ -2143,7 +2146,7 @@ pub fn pop(&mut self) -> Option<T> {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Examples
     ///
@@ -2315,6 +2318,12 @@ pub fn is_empty(&self) -> bool {
     /// `[at, len)`. After the call, the original vector will be left containing
     /// the elements `[0, at)` with its previous capacity unchanged.
     ///
+    /// - If you want to take ownership of the entire contents and capacity of
+    ///   the vector, see [`mem::take`] or [`mem::replace`].
+    /// - If you don't need the returned vector at all, see [`Vec::truncate`].
+    /// - If you want to take ownership of an arbitrary subslice, or you don't
+    ///   necessarily want to store the removed items in a vector, see [`Vec::drain`].
+    ///
     /// # Panics
     ///
     /// Panics if `at > len`.
@@ -2346,14 +2355,6 @@ fn assert_failed(at: usize, len: usize) -> ! {
             assert_failed(at, self.len());
         }
 
-        if at == 0 {
-            // the new vector can take over the original buffer and avoid the copy
-            return mem::replace(
-                self,
-                Vec::with_capacity_in(self.capacity(), self.allocator().clone()),
-            );
-        }
-
         let other_len = self.len - at;
         let mut other = Vec::with_capacity_in(other_len, self.allocator().clone());
 
@@ -3027,6 +3028,50 @@ fn index_mut(&mut self, index: I) -> &mut Self::Output {
     }
 }
 
+/// Collects an iterator into a Vec, commonly called via [`Iterator::collect()`]
+///
+/// # Allocation behavior
+///
+/// In general `Vec` does not guarantee any particular growth or allocation strategy.
+/// That also applies to this trait impl.
+///
+/// **Note:** This section covers implementation details and is therefore exempt from
+/// stability guarantees.
+///
+/// Vec may use any or none of the following strategies,
+/// depending on the supplied iterator:
+///
+/// * preallocate based on [`Iterator::size_hint()`]
+///   * and panic if the number of items is outside the provided lower/upper bounds
+/// * use an amortized growth strategy similar to `pushing` one item at a time
+/// * perform the iteration in-place on the original allocation backing the iterator
+///
+/// The last case warrants some attention. It is an optimization that in many cases reduces peak memory
+/// consumption and improves cache locality. But when big, short-lived allocations are created,
+/// only a small fraction of their items get collected, no further use is made of the spare capacity
+/// and the resulting `Vec` is moved into a longer-lived structure, then this can lead to the large
+/// allocations having their lifetimes unnecessarily extended which can result in increased memory
+/// footprint.
+///
+/// In cases where this is an issue, the excess capacity can be discarded with [`Vec::shrink_to()`],
+/// [`Vec::shrink_to_fit()`] or by collecting into [`Box<[T]>`][owned slice] instead, which additionally reduces
+/// the size of the long-lived struct.
+///
+/// [owned slice]: Box
+///
+/// ```rust
+/// # use std::sync::Mutex;
+/// static LONG_LIVED: Mutex<Vec<Vec<u16>>> = Mutex::new(Vec::new());
+///
+/// for i in 0..10 {
+///     let big_temporary: Vec<u16> = (0..1024).collect();
+///     // discard most items
+///     let mut result: Vec<_> = big_temporary.into_iter().filter(|i| i % 100 == 0).collect();
+///     // without this a lot of unused capacity might be moved into the global
+///     result.shrink_to_fit();
+///     LONG_LIVED.lock().unwrap().push(result);
+/// }
+/// ```
 #[cfg(not(no_global_oom_handling))]
 #[stable(feature = "rust1", since = "1.0.0")]
 impl<T> FromIterator<T> for Vec<T> {
@@ -3069,14 +3114,8 @@ fn into_iter(self) -> Self::IntoIter {
                 begin.add(me.len()) as *const T
             };
             let cap = me.buf.capacity();
-            IntoIter {
-                buf: NonNull::new_unchecked(begin),
-                phantom: PhantomData,
-                cap,
-                alloc,
-                ptr: begin,
-                end,
-            }
+            let buf = NonNull::new_unchecked(begin);
+            IntoIter { buf, phantom: PhantomData, cap, alloc, ptr: buf, end }
         }
     }
 }
@@ -3598,8 +3637,10 @@ fn from(s: Box<[T], A>) -> Self {
 impl<T, A: Allocator> From<Vec<T, A>> for Box<[T], A> {
     /// Convert a vector into a boxed slice.
     ///
-    /// If `v` has excess capacity, its items will be moved into a
-    /// newly-allocated buffer with exactly the right capacity.
+    /// Before doing the conversion, this method discards excess capacity like [`Vec::shrink_to_fit`].
+    ///
+    /// [owned slice]: Box
+    /// [`Vec::shrink_to_fit`]: Vec::shrink_to_fit
     ///
     /// # Examples
     ///
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 75efe47522e4..f07bc5a2c6b4 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -17,7 +17,6 @@
 #![feature(const_maybe_uninit_zeroed)]
 #![feature(dispatch_from_dyn)]
 #![feature(new_uninit)]
-#![feature(offset_of)]
 #![feature(ptr_metadata)]
 #![feature(receiver_trait)]
 #![feature(unsize)]
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 1633175846df..0bc7c5fe64b1 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -262,7 +262,7 @@ $(obj)/%.lst: $(src)/%.c FORCE
 # Compile Rust sources (.rs)
 # ---------------------------------------------------------------------------
 
-rust_allowed_features := new_uninit,offset_of
+rust_allowed_features := new_uninit
 
 # `--out-dir` is required to avoid temporaries being created by `rustc` in the
 # current working directory, which may be not accessible in the out-of-tree
diff --git a/scripts/min-tool-version.sh b/scripts/min-tool-version.sh
index e217683b10d6..db2b0bc5866b 100755
--- a/scripts/min-tool-version.sh
+++ b/scripts/min-tool-version.sh
@@ -33,7 +33,7 @@ llvm)
 	fi
 	;;
 rustc)
-	echo 1.76.0
+	echo 1.77.1
 	;;
 bindgen)
 	echo 0.65.1
--
2.44.0
+81 -78
pkgs/os-specific/linux/kernel/rust-1.77.patch
··· 1 - From d69265b7d756931b2e763a3262f22ba4100895a0 Mon Sep 17 00:00:00 2001 2 From: Miguel Ojeda <ojeda@kernel.org> 3 Date: Sat, 17 Feb 2024 01:27:17 +0100 4 - Subject: [PATCH] rust: upgrade to Rust 1.77.0 5 6 - This is the next upgrade to the Rust toolchain, from 1.76.0 to 1.77.0 7 (i.e. the latest) [1]. 8 9 See the upgrade policy [2] and the comments on the first upgrade in 10 commit 3ed03f4da06e ("rust: upgrade to Rust 1.68.2"). 11 12 The `offset_of` feature (single-field `offset_of!`) that we were using 13 got stabilized in Rust 1.77.0 [3]. ··· 17 increase the list. 18 19 Please see [4] for details. 20 21 Rust 1.77.0 merged the `unused_tuple_struct_fields` lint into `dead_code`, 22 - thus upgrading it from `allow` to `warn` [5]. In turn, this makes `rustc` 23 - complain about the `ThisModule`'s pointer field being never read. Thus 24 - locally `allow` it for the moment, since we will have users later on 25 - (e.g. Binder needs a `as_ptr` method [6]). 26 27 Rust 1.77.0 introduces the `--check-cfg` feature [7], for which there 28 is a Call for Testing going on [8]. We were requested to test it and 29 we found it useful [9] -- we will likely enable it in the future. 30 31 The vast majority of changes are due to our `alloc` fork being upgraded 32 at once. 
33 ··· 85 Link: https://doc.rust-lang.org/nightly/unstable-book/compiler-flags/check-cfg.html [7] 86 Link: https://github.com/rust-lang/rfcs/pull/3013#issuecomment-1936648479 [8] 87 Link: https://github.com/rust-lang/rust/issues/82450#issuecomment-1947462977 [9] 88 Signed-off-by: Miguel Ojeda <ojeda@kernel.org> 89 - Link: https://lore.kernel.org/r/20240217002717.57507-1-ojeda@kernel.org 90 - Link: https://github.com/Rust-for-Linux/linux/commit/d69265b7d756931b2e763a3262f22ba4100895a0 91 Signed-off-by: Alyssa Ross <hi@alyssa.is> 92 --- 93 Documentation/process/changes.rst | 2 +- ··· 96 rust/alloc/lib.rs | 7 +- 97 rust/alloc/raw_vec.rs | 13 ++-- 98 rust/alloc/slice.rs | 4 +- 99 - rust/alloc/vec/into_iter.rs | 108 +++++++++++++++++++----------- 100 - rust/alloc/vec/mod.rs | 101 +++++++++++++++++++--------- 101 - rust/kernel/lib.rs | 3 +- 102 scripts/Makefile.build | 2 +- 103 scripts/min-tool-version.sh | 2 +- 104 - 11 files changed, 161 insertions(+), 91 deletions(-) 105 106 diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst 107 - index 7ef8de58f7f892..879ee628893ae1 100644 108 --- a/Documentation/process/changes.rst 109 +++ b/Documentation/process/changes.rst 110 @@ -31,7 +31,7 @@ you probably needn't concern yourself with pcmciautils. ··· 112 GNU C 5.1 gcc --version 113 Clang/LLVM (optional) 13.0.1 clang --version 114 -Rust (optional) 1.76.0 rustc --version 115 - +Rust (optional) 1.77.0 rustc --version 116 bindgen (optional) 0.65.1 bindgen --version 117 GNU make 3.82 make --version 118 bash 4.2 bash --version 119 diff --git a/rust/alloc/alloc.rs b/rust/alloc/alloc.rs 120 - index abb791cc23715a..b1204f87227b23 100644 121 --- a/rust/alloc/alloc.rs 122 +++ b/rust/alloc/alloc.rs 123 @@ -5,7 +5,7 @@ ··· 129 130 #[cfg(not(test))] 131 use core::ptr::{self, NonNull}; 132 - @@ -210,7 +210,7 @@ impl Global { 133 let new_size = new_layout.size(); 134 135 // `realloc` probably checks for `new_size >= old_layout.size()` or something similar. 
··· 138 139 let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size); 140 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?; 141 - @@ -301,7 +301,7 @@ unsafe impl Allocator for Global { 142 // SAFETY: `new_size` is non-zero. Other conditions must be upheld by the caller 143 new_size if old_layout.align() == new_layout.align() => unsafe { 144 // `realloc` probably checks for `new_size <= old_layout.size()` or something similar. ··· 148 let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size); 149 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?; 150 diff --git a/rust/alloc/boxed.rs b/rust/alloc/boxed.rs 151 - index c93a22a5c97f14..5fc39dfeb8e7bf 100644 152 --- a/rust/alloc/boxed.rs 153 +++ b/rust/alloc/boxed.rs 154 @@ -26,6 +26,7 @@ ··· 159 //! #[derive(Debug)] 160 //! enum List<T> { 161 //! Cons(T, Box<List<T>>), 162 - @@ -194,8 +195,7 @@ mod thin; 163 #[fundamental] 164 #[stable(feature = "rust1", since = "1.0.0")] 165 // The declaration of the `Box` struct must be kept in sync with the ··· 170 T: ?Sized, 171 #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global, 172 diff --git a/rust/alloc/lib.rs b/rust/alloc/lib.rs 173 - index 36f79c07559338..39afd55ec0749e 100644 174 --- a/rust/alloc/lib.rs 175 +++ b/rust/alloc/lib.rs 176 @@ -105,7 +105,6 @@ ··· 223 #![feature(exclusive_range_pattern)] 224 #![feature(fundamental)] 225 diff --git a/rust/alloc/raw_vec.rs b/rust/alloc/raw_vec.rs 226 - index 98b6abf30af6e4..1839d1c8ee7a04 100644 227 --- a/rust/alloc/raw_vec.rs 228 +++ b/rust/alloc/raw_vec.rs 229 @@ -4,7 +4,7 @@ ··· 235 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties}; 236 use core::ptr::{self, NonNull, Unique}; 237 use core::slice; 238 - @@ -317,7 +317,7 @@ impl<T, A: Allocator> RawVec<T, A> { 239 /// 240 /// # Panics 241 /// ··· 244 /// 245 /// # Aborts 246 /// 247 - @@ -358,7 +358,7 @@ impl<T, A: Allocator> RawVec<T, A> { 248 } 249 unsafe { 250 // Inform the optimizer that the reservation has succeeded or wasn't 
needed ··· 253 } 254 Ok(()) 255 } 256 - @@ -381,7 +381,7 @@ impl<T, A: Allocator> RawVec<T, A> { 257 /// 258 /// # Panics 259 /// ··· 262 /// 263 /// # Aborts 264 /// 265 - @@ -402,7 +402,7 @@ impl<T, A: Allocator> RawVec<T, A> { 266 } 267 unsafe { 268 // Inform the optimizer that the reservation has succeeded or wasn't needed ··· 271 } 272 Ok(()) 273 } 274 - @@ -553,7 +553,7 @@ where 275 debug_assert_eq!(old_layout.align(), new_layout.align()); 276 unsafe { 277 // The allocator checks for alignment equality ··· 289 fn alloc_guard(alloc_size: usize) -> Result<(), TryReserveError> { 290 if usize::BITS < 64 && alloc_size > isize::MAX as usize { 291 diff --git a/rust/alloc/slice.rs b/rust/alloc/slice.rs 292 - index 1181836da5f462..a36b072c95195f 100644 293 --- a/rust/alloc/slice.rs 294 +++ b/rust/alloc/slice.rs 295 - @@ -53,14 +53,14 @@ pub use core::slice::{from_mut, from_ref}; 296 pub use core::slice::{from_mut_ptr_range, from_ptr_range}; 297 #[stable(feature = "rust1", since = "1.0.0")] 298 pub use core::slice::{from_raw_parts, from_raw_parts_mut}; ··· 310 pub use core::slice::{Iter, IterMut}; 311 #[stable(feature = "rchunks", since = "1.31.0")] 312 diff --git a/rust/alloc/vec/into_iter.rs b/rust/alloc/vec/into_iter.rs 313 - index 136bfe94af6c83..0f11744c44b34c 100644 314 --- a/rust/alloc/vec/into_iter.rs 315 +++ b/rust/alloc/vec/into_iter.rs 316 - @@ -20,6 +20,17 @@ use core::ops::Deref; 317 use core::ptr::{self, NonNull}; 318 use core::slice::{self}; 319 ··· 357 } 358 359 /// Returns the remaining items of this iterator as a mutable slice. 360 - @@ -99,7 +112,7 @@ impl<T, A: Allocator> IntoIter<T, A> { 361 } 362 363 fn as_raw_mut_slice(&mut self) -> *mut [T] { ··· 366 } 367 368 /// Drops remaining elements and relinquishes the backing allocation. 
369 - @@ -126,7 +139,7 @@ impl<T, A: Allocator> IntoIter<T, A> { 370 // this creates less assembly 371 self.cap = 0; 372 self.buf = unsafe { NonNull::new_unchecked(RawVec::NEW.ptr()) }; ··· 375 self.end = self.buf.as_ptr(); 376 377 // Dropping the remaining elements can panic, so this needs to be 378 - @@ -138,9 +151,9 @@ impl<T, A: Allocator> IntoIter<T, A> { 379 380 /// Forgets to Drop the remaining elements while still allowing the backing allocation to be freed. 381 pub(crate) fn forget_remaining_elements(&mut self) { ··· 387 } 388 389 #[cfg(not(no_global_oom_handling))] 390 - @@ -162,7 +175,7 @@ impl<T, A: Allocator> IntoIter<T, A> { 391 // say that they're all at the beginning of the "allocation". 392 0..this.len() 393 } else { ··· 406 - // `ptr` has to stay where it is to remain aligned, so we reduce the length by 1 by 407 - // reducing the `end`. 408 - self.end = self.end.wrapping_byte_sub(1); 409 - - 410 - - // Make up a value of this ZST. 411 - - Some(unsafe { mem::zeroed() }) 412 + if T::IS_ZST { 413 + if self.ptr.as_ptr() == self.end as *mut _ { 414 + None ··· 416 + // `ptr` has to stay where it is to remain aligned, so we reduce the length by 1 by 417 + // reducing the `end`. 418 + self.end = self.end.wrapping_byte_sub(1); 419 - + 420 + // Make up a value of this ZST. 421 + Some(unsafe { mem::zeroed() }) 422 + } ··· 446 }; 447 (exact, Some(exact)) 448 } 449 - @@ -219,7 +238,7 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> { 450 #[inline] 451 fn advance_by(&mut self, n: usize) -> Result<(), NonZeroUsize> { 452 let step_size = self.len().min(n); ··· 455 if T::IS_ZST { 456 // See `next` for why we sub `end` here. 457 self.end = self.end.wrapping_byte_sub(step_size); 458 - @@ -261,7 +280,7 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> { 459 // Safety: `len` indicates that this many elements are available and we just checked that 460 // it fits into the array. 
461 unsafe { ··· 464 self.forget_remaining_elements(); 465 return Err(array::IntoIter::new_unchecked(raw_ary, 0..len)); 466 } 467 - @@ -270,7 +289,7 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> { 468 // Safety: `len` is larger than the array size. Copy a fixed amount here to fully initialize 469 // the array. 470 return unsafe { ··· 473 self.ptr = self.ptr.add(N); 474 Ok(raw_ary.transpose().assume_init()) 475 }; 476 - @@ -288,7 +307,7 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> { 477 // Also note the implementation of `Self: TrustedRandomAccess` requires 478 // that `T: Copy` so reading elements from the buffer doesn't invalidate 479 // them for `Drop`. ··· 482 } 483 } 484 485 - @@ -296,18 +315,25 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> { 486 impl<T, A: Allocator> DoubleEndedIterator for IntoIter<T, A> { 487 #[inline] 488 fn next_back(&mut self) -> Option<T> { ··· 491 - } else if T::IS_ZST { 492 - // See above for why 'ptr.offset' isn't used 493 - self.end = self.end.wrapping_byte_sub(1); 494 - - 495 - - // Make up a value of this ZST. 496 - - Some(unsafe { mem::zeroed() }) 497 + if T::IS_ZST { 498 + if self.end as *mut _ == self.ptr.as_ptr() { 499 + None 500 + } else { 501 + // See above for why 'ptr.offset' isn't used 502 + self.end = self.end.wrapping_byte_sub(1); 503 - + 504 + // Make up a value of this ZST. 
505 + Some(unsafe { mem::zeroed() }) 506 + } ··· 518 } 519 } 520 521 - @@ -333,7 +359,11 @@ impl<T, A: Allocator> DoubleEndedIterator for IntoIter<T, A> { 522 #[stable(feature = "rust1", since = "1.0.0")] 523 impl<T, A: Allocator> ExactSizeIterator for IntoIter<T, A> { 524 fn is_empty(&self) -> bool { ··· 532 } 533 534 diff --git a/rust/alloc/vec/mod.rs b/rust/alloc/vec/mod.rs 535 - index 220fb9d6f45b3f..0be27fff4554a1 100644 536 --- a/rust/alloc/vec/mod.rs 537 +++ b/rust/alloc/vec/mod.rs 538 - @@ -360,7 +360,7 @@ mod spec_extend; 539 /// 540 /// `vec![x; n]`, `vec![a, b, c, d]`, and 541 /// [`Vec::with_capacity(n)`][`Vec::with_capacity`], will all produce a `Vec` ··· 544 /// (as is the case for the [`vec!`] macro), then a `Vec<T>` can be converted to 545 /// and from a [`Box<[T]>`][owned slice] without reallocating or moving the elements. 546 /// 547 - @@ -447,7 +447,7 @@ impl<T> Vec<T> { 548 /// 549 /// # Panics 550 /// ··· 553 /// 554 /// # Examples 555 /// 556 - @@ -690,7 +690,7 @@ impl<T, A: Allocator> Vec<T, A> { 557 /// 558 /// # Panics 559 /// ··· 562 /// 563 /// # Examples 564 /// 565 - @@ -1013,7 +1013,7 @@ impl<T, A: Allocator> Vec<T, A> { 566 /// 567 /// # Panics 568 /// ··· 571 /// 572 /// # Examples 573 /// 574 - @@ -1043,7 +1043,7 @@ impl<T, A: Allocator> Vec<T, A> { 575 /// 576 /// # Panics 577 /// ··· 580 /// 581 /// # Examples 582 /// 583 - @@ -1140,8 +1140,11 @@ impl<T, A: Allocator> Vec<T, A> { 584 585 /// Shrinks the capacity of the vector as much as possible. 586 /// ··· 594 /// 595 /// # Examples 596 /// 597 - @@ -1191,10 +1194,10 @@ impl<T, A: Allocator> Vec<T, A> { 598 599 /// Converts the vector into [`Box<[T]>`][owned slice]. 
600 /// ··· 607 /// 608 /// # Examples 609 /// 610 - @@ -2017,7 +2020,7 @@ impl<T, A: Allocator> Vec<T, A> { 611 /// 612 /// # Panics 613 /// ··· 616 /// 617 /// # Examples 618 /// 619 - @@ -2133,7 +2136,7 @@ impl<T, A: Allocator> Vec<T, A> { 620 } else { 621 unsafe { 622 self.len -= 1; ··· 625 Some(ptr::read(self.as_ptr().add(self.len()))) 626 } 627 } 628 - @@ -2143,7 +2146,7 @@ impl<T, A: Allocator> Vec<T, A> { 629 /// 630 /// # Panics 631 /// ··· 634 /// 635 /// # Examples 636 /// 637 - @@ -2315,6 +2318,12 @@ impl<T, A: Allocator> Vec<T, A> { 638 /// `[at, len)`. After the call, the original vector will be left containing 639 /// the elements `[0, at)` with its previous capacity unchanged. 640 /// ··· 647 /// # Panics 648 /// 649 /// Panics if `at > len`. 650 - @@ -2346,14 +2355,6 @@ impl<T, A: Allocator> Vec<T, A> { 651 assert_failed(at, self.len()); 652 } 653 ··· 662 let other_len = self.len - at; 663 let mut other = Vec::with_capacity_in(other_len, self.allocator().clone()); 664 665 - @@ -3027,6 +3028,50 @@ impl<T, I: SliceIndex<[T]>, A: Allocator> IndexMut<I> for Vec<T, A> { 666 } 667 } 668 ··· 713 #[cfg(not(no_global_oom_handling))] 714 #[stable(feature = "rust1", since = "1.0.0")] 715 impl<T> FromIterator<T> for Vec<T> { 716 - @@ -3069,14 +3114,8 @@ impl<T, A: Allocator> IntoIterator for Vec<T, A> { 717 begin.add(me.len()) as *const T 718 }; 719 let cap = me.buf.capacity(); ··· 730 } 731 } 732 } 733 - @@ -3598,8 +3637,10 @@ impl<T, A: Allocator> From<Box<[T], A>> for Vec<T, A> { 734 impl<T, A: Allocator> From<Vec<T, A>> for Box<[T], A> { 735 /// Convert a vector into a boxed slice. 
736 /// ··· 744 /// # Examples 745 /// 746 diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs 747 - index be68d5e567b1a1..71f95e5aa09abd 100644 748 --- a/rust/kernel/lib.rs 749 +++ b/rust/kernel/lib.rs 750 @@ -16,7 +16,6 @@ ··· 755 #![feature(receiver_trait)] 756 #![feature(unsize)] 757 758 - @@ -78,7 +77,7 @@ pub trait Module: Sized + Sync { 759 - /// Equivalent to `THIS_MODULE` in the C API. 760 - /// 761 - /// C header: [`include/linux/export.h`](srctree/include/linux/export.h) 762 - -pub struct ThisModule(*mut bindings::module); 763 - +pub struct ThisModule(#[allow(dead_code)] *mut bindings::module); 764 - 765 - // SAFETY: `THIS_MODULE` may be used from all threads within a module. 766 - unsafe impl Sync for ThisModule {} 767 diff --git a/scripts/Makefile.build b/scripts/Makefile.build 768 - index baf86c0880b6d7..367cfeea74c5f5 100644 769 --- a/scripts/Makefile.build 770 +++ b/scripts/Makefile.build 771 @@ -263,7 +263,7 @@ $(obj)/%.lst: $(src)/%.c FORCE ··· 778 # `--out-dir` is required to avoid temporaries being created by `rustc` in the 779 # current working directory, which may be not accessible in the out-of-tree 780 diff --git a/scripts/min-tool-version.sh b/scripts/min-tool-version.sh 781 - index 5927cc6b7de338..cc5141b67b4a71 100755 782 --- a/scripts/min-tool-version.sh 783 +++ b/scripts/min-tool-version.sh 784 @@ -33,7 +33,7 @@ llvm) ··· 786 ;; 787 rustc) 788 - echo 1.76.0 789 - + echo 1.77.0 790 ;; 791 bindgen) 792 echo 0.65.1
··· 1 + From b256fc507d4710287b22077834c16d18cee4ab17 Mon Sep 17 00:00:00 2001 2 From: Miguel Ojeda <ojeda@kernel.org> 3 Date: Sat, 17 Feb 2024 01:27:17 +0100 4 + Subject: [PATCH] rust: upgrade to Rust 1.77.1 5 6 + This is the next upgrade to the Rust toolchain, from 1.76.0 to 1.77.1 7 (i.e. the latest) [1]. 8 9 See the upgrade policy [2] and the comments on the first upgrade in 10 commit 3ed03f4da06e ("rust: upgrade to Rust 1.68.2"). 11 + 12 + # Unstable features 13 14 The `offset_of` feature (single-field `offset_of!`) that we were using 15 got stabilized in Rust 1.77.0 [3]. ··· 19 increase the list. 20 21 Please see [4] for details. 22 + 23 + # Required changes 24 25 Rust 1.77.0 merged the `unused_tuple_struct_fields` lint into `dead_code`, 26 + thus upgrading it from `allow` to `warn` [5]. In turn, this made `rustc` 27 + complain about the `ThisModule`'s pointer field being never read, but 28 + the previous patch adds the `as_ptr` method to it, needed by Binder [6], 29 + so that we do not need to locally `allow` it. 30 + 31 + # Other changes 32 33 Rust 1.77.0 introduces the `--check-cfg` feature [7], for which there 34 is a Call for Testing going on [8]. We were requested to test it and 35 we found it useful [9] -- we will likely enable it in the future. 36 37 + # `alloc` upgrade and reviewing 38 + 39 The vast majority of changes are due to our `alloc` fork being upgraded 40 at once. 41 ··· 93 Link: https://doc.rust-lang.org/nightly/unstable-book/compiler-flags/check-cfg.html [7] 94 Link: https://github.com/rust-lang/rfcs/pull/3013#issuecomment-1936648479 [8] 95 Link: https://github.com/rust-lang/rust/issues/82450#issuecomment-1947462977 [9] 96 + Reviewed-by: Alice Ryhl <aliceryhl@google.com> 97 + Tested-by: Boqun Feng <boqun.feng@gmail.com> 98 + Link: https://lore.kernel.org/r/20240217002717.57507-1-ojeda@kernel.org 99 + [ Upgraded to 1.77.1. Removed `allow(dead_code)` thanks to the previous 100 + patch. Reworded accordingly. 
No changes to `alloc` during the beta. ] 101 Signed-off-by: Miguel Ojeda <ojeda@kernel.org> 102 Signed-off-by: Alyssa Ross <hi@alyssa.is> 103 --- 104 Documentation/process/changes.rst | 2 +- ··· 107 rust/alloc/lib.rs | 7 +- 108 rust/alloc/raw_vec.rs | 13 ++-- 109 rust/alloc/slice.rs | 4 +- 110 + rust/alloc/vec/into_iter.rs | 104 +++++++++++++++++++----------- 111 + rust/alloc/vec/mod.rs | 101 ++++++++++++++++++++--------- 112 + rust/kernel/lib.rs | 1 - 113 scripts/Makefile.build | 2 +- 114 scripts/min-tool-version.sh | 2 +- 115 + 11 files changed, 158 insertions(+), 88 deletions(-) 116 117 diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst 118 + index 7ef8de58f7f8..b5d3107c6734 100644 119 --- a/Documentation/process/changes.rst 120 +++ b/Documentation/process/changes.rst 121 @@ -31,7 +31,7 @@ you probably needn't concern yourself with pcmciautils. ··· 123 GNU C 5.1 gcc --version 124 Clang/LLVM (optional) 13.0.1 clang --version 125 -Rust (optional) 1.76.0 rustc --version 126 + +Rust (optional) 1.77.1 rustc --version 127 bindgen (optional) 0.65.1 bindgen --version 128 GNU make 3.82 make --version 129 bash 4.2 bash --version 130 diff --git a/rust/alloc/alloc.rs b/rust/alloc/alloc.rs 131 + index abb791cc2371..b1204f87227b 100644 132 --- a/rust/alloc/alloc.rs 133 +++ b/rust/alloc/alloc.rs 134 @@ -5,7 +5,7 @@ ··· 140 141 #[cfg(not(test))] 142 use core::ptr::{self, NonNull}; 143 + @@ -210,7 +210,7 @@ unsafe fn grow_impl( 144 let new_size = new_layout.size(); 145 146 // `realloc` probably checks for `new_size >= old_layout.size()` or something similar. ··· 149 150 let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size); 151 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?; 152 + @@ -301,7 +301,7 @@ unsafe fn shrink( 153 // SAFETY: `new_size` is non-zero. 
Other conditions must be upheld by the caller 154 new_size if old_layout.align() == new_layout.align() => unsafe { 155 // `realloc` probably checks for `new_size <= old_layout.size()` or something similar. ··· 159 let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size); 160 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?; 161 diff --git a/rust/alloc/boxed.rs b/rust/alloc/boxed.rs 162 + index c93a22a5c97f..5fc39dfeb8e7 100644 163 --- a/rust/alloc/boxed.rs 164 +++ b/rust/alloc/boxed.rs 165 @@ -26,6 +26,7 @@ ··· 170 //! #[derive(Debug)] 171 //! enum List<T> { 172 //! Cons(T, Box<List<T>>), 173 + @@ -194,8 +195,7 @@ 174 #[fundamental] 175 #[stable(feature = "rust1", since = "1.0.0")] 176 // The declaration of the `Box` struct must be kept in sync with the ··· 181 T: ?Sized, 182 #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global, 183 diff --git a/rust/alloc/lib.rs b/rust/alloc/lib.rs 184 + index 36f79c075593..39afd55ec074 100644 185 --- a/rust/alloc/lib.rs 186 +++ b/rust/alloc/lib.rs 187 @@ -105,7 +105,6 @@ ··· 234 #![feature(exclusive_range_pattern)] 235 #![feature(fundamental)] 236 diff --git a/rust/alloc/raw_vec.rs b/rust/alloc/raw_vec.rs 237 + index 98b6abf30af6..1839d1c8ee7a 100644 238 --- a/rust/alloc/raw_vec.rs 239 +++ b/rust/alloc/raw_vec.rs 240 @@ -4,7 +4,7 @@ ··· 246 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties}; 247 use core::ptr::{self, NonNull, Unique}; 248 use core::slice; 249 + @@ -317,7 +317,7 @@ fn current_memory(&self) -> Option<(NonNull<u8>, Layout)> { 250 /// 251 /// # Panics 252 /// ··· 255 /// 256 /// # Aborts 257 /// 258 + @@ -358,7 +358,7 @@ pub fn try_reserve(&mut self, len: usize, additional: usize) -> Result<(), TryRe 259 } 260 unsafe { 261 // Inform the optimizer that the reservation has succeeded or wasn't needed ··· 264 } 265 Ok(()) 266 } 267 + @@ -381,7 +381,7 @@ pub fn try_reserve_for_push(&mut self, len: usize) -> Result<(), TryReserveError 268 /// 269 /// # Panics 270 /// ··· 273 
/// 274 /// # Aborts 275 /// 276 + @@ -402,7 +402,7 @@ pub fn try_reserve_exact( 277 } 278 unsafe { 279 // Inform the optimizer that the reservation has succeeded or wasn't needed ··· 282 } 283 Ok(()) 284 } 285 + @@ -553,7 +553,7 @@ fn finish_grow<A>( 286 debug_assert_eq!(old_layout.align(), new_layout.align()); 287 unsafe { 288 // The allocator checks for alignment equality ··· 300 fn alloc_guard(alloc_size: usize) -> Result<(), TryReserveError> { 301 if usize::BITS < 64 && alloc_size > isize::MAX as usize { 302 diff --git a/rust/alloc/slice.rs b/rust/alloc/slice.rs 303 + index 1181836da5f4..a36b072c9519 100644 304 --- a/rust/alloc/slice.rs 305 +++ b/rust/alloc/slice.rs 306 + @@ -53,14 +53,14 @@ 307 pub use core::slice::{from_mut_ptr_range, from_ptr_range}; 308 #[stable(feature = "rust1", since = "1.0.0")] 309 pub use core::slice::{from_raw_parts, from_raw_parts_mut}; ··· 321 pub use core::slice::{Iter, IterMut}; 322 #[stable(feature = "rchunks", since = "1.31.0")] 323 diff --git a/rust/alloc/vec/into_iter.rs b/rust/alloc/vec/into_iter.rs 324 + index 136bfe94af6c..0f11744c44b3 100644 325 --- a/rust/alloc/vec/into_iter.rs 326 +++ b/rust/alloc/vec/into_iter.rs 327 + @@ -20,6 +20,17 @@ 328 use core::ptr::{self, NonNull}; 329 use core::slice::{self}; 330 ··· 368 } 369 370 /// Returns the remaining items of this iterator as a mutable slice. 371 + @@ -99,7 +112,7 @@ pub fn allocator(&self) -> &A { 372 } 373 374 fn as_raw_mut_slice(&mut self) -> *mut [T] { ··· 377 } 378 379 /// Drops remaining elements and relinquishes the backing allocation. 
380 + @@ -126,7 +139,7 @@ pub(super) fn forget_allocation_drop_remaining(&mut self) { 381 // this creates less assembly 382 self.cap = 0; 383 self.buf = unsafe { NonNull::new_unchecked(RawVec::NEW.ptr()) }; ··· 386 self.end = self.buf.as_ptr(); 387 388 // Dropping the remaining elements can panic, so this needs to be 389 + @@ -138,9 +151,9 @@ pub(super) fn forget_allocation_drop_remaining(&mut self) { 390 391 /// Forgets to Drop the remaining elements while still allowing the backing allocation to be freed. 392 pub(crate) fn forget_remaining_elements(&mut self) { ··· 398 } 399 400 #[cfg(not(no_global_oom_handling))] 401 + @@ -162,7 +175,7 @@ pub(crate) fn into_vecdeque(self) -> VecDeque<T, A> { 402 // say that they're all at the beginning of the "allocation". 403 0..this.len() 404 } else { ··· 417 - // `ptr` has to stay where it is to remain aligned, so we reduce the length by 1 by 418 - // reducing the `end`. 419 - self.end = self.end.wrapping_byte_sub(1); 420 + if T::IS_ZST { 421 + if self.ptr.as_ptr() == self.end as *mut _ { 422 + None ··· 424 + // `ptr` has to stay where it is to remain aligned, so we reduce the length by 1 by 425 + // reducing the `end`. 426 + self.end = self.end.wrapping_byte_sub(1); 427 + 428 + - // Make up a value of this ZST. 429 + - Some(unsafe { mem::zeroed() }) 430 + // Make up a value of this ZST. 431 + Some(unsafe { mem::zeroed() }) 432 + } ··· 456 }; 457 (exact, Some(exact)) 458 } 459 + @@ -219,7 +238,7 @@ fn size_hint(&self) -> (usize, Option<usize>) { 460 #[inline] 461 fn advance_by(&mut self, n: usize) -> Result<(), NonZeroUsize> { 462 let step_size = self.len().min(n); ··· 465 if T::IS_ZST { 466 // See `next` for why we sub `end` here. 467 self.end = self.end.wrapping_byte_sub(step_size); 468 + @@ -261,7 +280,7 @@ fn count(self) -> usize { 469 // Safety: `len` indicates that this many elements are available and we just checked that 470 // it fits into the array. 
471 unsafe { ··· 474 self.forget_remaining_elements(); 475 return Err(array::IntoIter::new_unchecked(raw_ary, 0..len)); 476 } 477 + @@ -270,7 +289,7 @@ fn count(self) -> usize { 478 // Safety: `len` is larger than the array size. Copy a fixed amount here to fully initialize 479 // the array. 480 return unsafe { ··· 483 self.ptr = self.ptr.add(N); 484 Ok(raw_ary.transpose().assume_init()) 485 }; 486 + @@ -288,7 +307,7 @@ unsafe fn __iterator_get_unchecked(&mut self, i: usize) -> Self::Item 487 // Also note the implementation of `Self: TrustedRandomAccess` requires 488 // that `T: Copy` so reading elements from the buffer doesn't invalidate 489 // them for `Drop`. ··· 492 } 493 } 494 495 + @@ -296,18 +315,25 @@ unsafe fn __iterator_get_unchecked(&mut self, i: usize) -> Self::Item 496 impl<T, A: Allocator> DoubleEndedIterator for IntoIter<T, A> { 497 #[inline] 498 fn next_back(&mut self) -> Option<T> { ··· 501 - } else if T::IS_ZST { 502 - // See above for why 'ptr.offset' isn't used 503 - self.end = self.end.wrapping_byte_sub(1); 504 + if T::IS_ZST { 505 + if self.end as *mut _ == self.ptr.as_ptr() { 506 + None 507 + } else { 508 + // See above for why 'ptr.offset' isn't used 509 + self.end = self.end.wrapping_byte_sub(1); 510 + 511 + - // Make up a value of this ZST. 512 + - Some(unsafe { mem::zeroed() }) 513 + // Make up a value of this ZST. 
514 + Some(unsafe { mem::zeroed() }) 515 + } ··· 527 } 528 } 529 530 + @@ -333,7 +359,11 @@ fn advance_back_by(&mut self, n: usize) -> Result<(), NonZeroUsize> { 531 #[stable(feature = "rust1", since = "1.0.0")] 532 impl<T, A: Allocator> ExactSizeIterator for IntoIter<T, A> { 533 fn is_empty(&self) -> bool { ··· 541 } 542 543 diff --git a/rust/alloc/vec/mod.rs b/rust/alloc/vec/mod.rs 544 + index 220fb9d6f45b..0be27fff4554 100644 545 --- a/rust/alloc/vec/mod.rs 546 +++ b/rust/alloc/vec/mod.rs 547 + @@ -360,7 +360,7 @@ 548 /// 549 /// `vec![x; n]`, `vec![a, b, c, d]`, and 550 /// [`Vec::with_capacity(n)`][`Vec::with_capacity`], will all produce a `Vec` ··· 553 /// (as is the case for the [`vec!`] macro), then a `Vec<T>` can be converted to 554 /// and from a [`Box<[T]>`][owned slice] without reallocating or moving the elements. 555 /// 556 + @@ -447,7 +447,7 @@ pub const fn new() -> Self { 557 /// 558 /// # Panics 559 /// ··· 562 /// 563 /// # Examples 564 /// 565 + @@ -690,7 +690,7 @@ pub const fn new_in(alloc: A) -> Self { 566 /// 567 /// # Panics 568 /// ··· 571 /// 572 /// # Examples 573 /// 574 + @@ -1013,7 +1013,7 @@ pub fn capacity(&self) -> usize { 575 /// 576 /// # Panics 577 /// ··· 580 /// 581 /// # Examples 582 /// 583 + @@ -1043,7 +1043,7 @@ pub fn reserve(&mut self, additional: usize) { 584 /// 585 /// # Panics 586 /// ··· 589 /// 590 /// # Examples 591 /// 592 + @@ -1140,8 +1140,11 @@ pub fn try_reserve_exact(&mut self, additional: usize) -> Result<(), TryReserveE 593 594 /// Shrinks the capacity of the vector as much as possible. 595 /// ··· 603 /// 604 /// # Examples 605 /// 606 + @@ -1191,10 +1194,10 @@ pub fn shrink_to(&mut self, min_capacity: usize) { 607 608 /// Converts the vector into [`Box<[T]>`][owned slice]. 
609 /// ··· 616 /// 617 /// # Examples 618 /// 619 + @@ -2017,7 +2020,7 @@ fn drop(&mut self) { 620 /// 621 /// # Panics 622 /// ··· 625 /// 626 /// # Examples 627 /// 628 + @@ -2133,7 +2136,7 @@ pub fn pop(&mut self) -> Option<T> { 629 } else { 630 unsafe { 631 self.len -= 1; ··· 634 Some(ptr::read(self.as_ptr().add(self.len()))) 635 } 636 } 637 + @@ -2143,7 +2146,7 @@ pub fn pop(&mut self) -> Option<T> { 638 /// 639 /// # Panics 640 /// ··· 643 /// 644 /// # Examples 645 /// 646 + @@ -2315,6 +2318,12 @@ pub fn is_empty(&self) -> bool { 647 /// `[at, len)`. After the call, the original vector will be left containing 648 /// the elements `[0, at)` with its previous capacity unchanged. 649 /// ··· 656 /// # Panics 657 /// 658 /// Panics if `at > len`. 659 + @@ -2346,14 +2355,6 @@ fn assert_failed(at: usize, len: usize) -> ! { 660 assert_failed(at, self.len()); 661 } 662 ··· 671 let other_len = self.len - at; 672 let mut other = Vec::with_capacity_in(other_len, self.allocator().clone()); 673 674 + @@ -3027,6 +3028,50 @@ fn index_mut(&mut self, index: I) -> &mut Self::Output { 675 } 676 } 677 ··· 722 #[cfg(not(no_global_oom_handling))] 723 #[stable(feature = "rust1", since = "1.0.0")] 724 impl<T> FromIterator<T> for Vec<T> { 725 + @@ -3069,14 +3114,8 @@ fn into_iter(self) -> Self::IntoIter { 726 begin.add(me.len()) as *const T 727 }; 728 let cap = me.buf.capacity(); ··· 739 } 740 } 741 } 742 + @@ -3598,8 +3637,10 @@ fn from(s: Box<[T], A>) -> Self { 743 impl<T, A: Allocator> From<Vec<T, A>> for Box<[T], A> { 744 /// Convert a vector into a boxed slice. 
745 /// ··· 753 /// # Examples 754 /// 755 diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs 756 + index 6858e2f8a3ed..9e9b245ebab5 100644 757 --- a/rust/kernel/lib.rs 758 +++ b/rust/kernel/lib.rs 759 @@ -16,7 +16,6 @@ ··· 764 #![feature(receiver_trait)] 765 #![feature(unsize)] 766 767 diff --git a/scripts/Makefile.build b/scripts/Makefile.build 768 + index 533a7799fdfe..5a6ab6d965bc 100644 769 --- a/scripts/Makefile.build 770 +++ b/scripts/Makefile.build 771 @@ -263,7 +263,7 @@ $(obj)/%.lst: $(src)/%.c FORCE ··· 778 # `--out-dir` is required to avoid temporaries being created by `rustc` in the 779 # current working directory, which may be not accessible in the out-of-tree 780 diff --git a/scripts/min-tool-version.sh b/scripts/min-tool-version.sh 781 + index 5927cc6b7de3..6086e00e640e 100755 782 --- a/scripts/min-tool-version.sh 783 +++ b/scripts/min-tool-version.sh 784 @@ -33,7 +33,7 @@ llvm) ··· 786 ;; 787 rustc) 788 - echo 1.76.0 789 + + echo 1.77.1 790 ;; 791 bindgen) 792 echo 0.65.1 793 + -- 794 + 2.44.0 795 +
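One behavioral note from the `alloc` hunks above: the upgraded `Vec::split_off` drops the old `at == 0` special case that handed the original buffer to the returned vector, so a front split now copies into a fresh allocation. The documented contract is unchanged. A minimal sketch in plain userspace Rust (not kernel code; the capacity assertion follows the doc comment visible in the patch, and the `mem::take` alternative is what the standard library docs suggest for taking the whole buffer):

```rust
use std::mem;

fn main() {
    // The upgraded `alloc` removes the `at == 0` fast path from
    // `Vec::split_off`; the observable contract stays the same:
    let mut v: Vec<u32> = Vec::with_capacity(16);
    v.extend([1, 2, 3]);
    let cap_before = v.capacity();

    let tail = v.split_off(0);

    assert_eq!(tail, [1, 2, 3]);
    assert!(v.is_empty());
    // "the original vector will be left containing the elements
    // [0, at) with its previous capacity unchanged"
    assert_eq!(v.capacity(), cap_before);

    // To hand over the whole buffer without copying, use mem::take
    // (or mem::replace) instead of split_off(0).
    let mut w = vec![4, 5, 6];
    let taken = mem::take(&mut w);
    assert_eq!(taken, [4, 5, 6]);
    assert!(w.is_empty());
}
```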