Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge v6.16-rc2 into timers/ptp

to pick up the __GENMASK() fix; otherwise the AUX clock VDSO patches fail
to compile for compat.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

+2382 -1386
+4
.mailmap
··· 426 426 Krzysztof Wilczyński <kwilczynski@kernel.org> <kw@linux.com> 427 427 Kshitiz Godara <quic_kgodara@quicinc.com> <kgodara@codeaurora.org> 428 428 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> 429 + Kuniyuki Iwashima <kuniyu@google.com> <kuniyu@amazon.com> 430 + Kuniyuki Iwashima <kuniyu@google.com> <kuniyu@amazon.co.jp> 431 + Kuniyuki Iwashima <kuniyu@google.com> <kuni1840@gmail.com> 429 432 Kuogee Hsieh <quic_khsieh@quicinc.com> <khsieh@codeaurora.org> 430 433 Lee Jones <lee@kernel.org> <joneslee@google.com> 431 434 Lee Jones <lee@kernel.org> <lee.jones@canonical.com> ··· 722 719 Sriram R <quic_srirrama@quicinc.com> <srirrama@codeaurora.org> 723 720 Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> <sriram.yagnaraman@est.tech> 724 721 Stanislav Fomichev <sdf@fomichev.me> <sdf@google.com> 722 + Stanislav Fomichev <sdf@fomichev.me> <stfomichev@gmail.com> 725 723 Stefan Wahren <wahrenst@gmx.net> <stefan.wahren@i2se.com> 726 724 Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr> 727 725 Stephen Hemminger <stephen@networkplumber.org> <shemminger@linux-foundation.org>
+2
Documentation/admin-guide/cifs/usage.rst
··· 270 270 illegal Windows/NTFS/SMB characters to a remap range (this mount parameter 271 271 is the default for SMB3). This remap (``mapposix``) range is also 272 272 compatible with Mac (and "Services for Mac" on some older Windows). 273 + When POSIX Extensions for SMB 3.1.1 are negotiated, remapping is automatically 274 + disabled. 273 275 274 276 CIFS VFS Mount Options 275 277 ======================
+77
Documentation/block/ublk.rst
···
352 352    parameter of `struct ublk_param_segment` with backend for avoiding
353 353    unnecessary IO split, which usually hurts io_uring performance.
354 354
    355 +  Auto Buffer Registration
    356 +  ------------------------
    357 +
    358 +  The ``UBLK_F_AUTO_BUF_REG`` feature automatically handles buffer registration
    359 +  and unregistration for I/O requests, which simplifies the buffer management
    360 +  process and reduces overhead in the ublk server implementation.
    361 +
    362 +  This is another feature flag for using zero copy, and it is compatible with
    363 +  ``UBLK_F_SUPPORT_ZERO_COPY``.
    364 +
    365 +  Feature Overview
    366 +  ~~~~~~~~~~~~~~~~
    367 +
    368 +  This feature automatically registers request buffers to the io_uring context
    369 +  before delivering I/O commands to the ublk server and unregisters them when
    370 +  completing I/O commands. This eliminates the need for manual buffer
    371 +  registration/unregistration via ``UBLK_IO_REGISTER_IO_BUF`` and
    372 +  ``UBLK_IO_UNREGISTER_IO_BUF`` commands, so IO handling in the ublk server
    373 +  can avoid depending on these two uring_cmd operations.
    374 +
    375 +  IOs can't be issued concurrently to io_uring if there is any dependency
    376 +  among these IOs. So this approach not only simplifies the ublk server
    377 +  implementation, but also makes concurrent IO handling possible by removing
    378 +  the dependency on the buffer registration & unregistration commands.
    379 +
    380 +  Usage Requirements
    381 +  ~~~~~~~~~~~~~~~~~~
    382 +
    383 +  1. The ublk server must create a sparse buffer table on the same ``io_ring_ctx``
    384 +     used for ``UBLK_IO_FETCH_REQ`` and ``UBLK_IO_COMMIT_AND_FETCH_REQ``. If
    385 +     uring_cmd is issued on a different ``io_ring_ctx``, manual buffer
    386 +     unregistration is required.
    387 +
    388 +  2. Buffer registration data must be passed via uring_cmd's ``sqe->addr`` with
    389 +     the following structure::
    390 +
    391 +         struct ublk_auto_buf_reg {
    392 +             __u16 index;      /* Buffer index for registration */
    393 +             __u8  flags;      /* Registration flags */
    394 +             __u8  reserved0;  /* Reserved for future use */
    395 +             __u32 reserved1;  /* Reserved for future use */
    396 +         };
    397 +
    398 +     ublk_auto_buf_reg_to_sqe_addr() converts the above structure into
    399 +     ``sqe->addr``.
    400 +
    401 +  3. All reserved fields in ``ublk_auto_buf_reg`` must be zeroed.
    402 +
    403 +  4. Optional flags can be passed via ``ublk_auto_buf_reg.flags``.
    404 +
    405 +  Fallback Behavior
    406 +  ~~~~~~~~~~~~~~~~~
    407 +
    408 +  If auto buffer registration fails:
    409 +
    410 +  1. When ``UBLK_AUTO_BUF_REG_FALLBACK`` is enabled:
    411 +
    412 +     - The uring_cmd is completed
    413 +     - ``UBLK_IO_F_NEED_REG_BUF`` is set in ``ublksrv_io_desc.op_flags``
    414 +     - The ublk server must handle the failure manually, for example by
    415 +       registering the buffer itself or by using the user copy feature to
    416 +       retrieve the data for the ublk IO
    417 +
    418 +  2. If fallback is not enabled:
    419 +
    420 +     - The ublk I/O request fails silently
    421 +     - The uring_cmd won't be completed
    422 +
    423 +  Limitations
    424 +  ~~~~~~~~~~~
    425 +
    426 +  - Requires the same ``io_ring_ctx`` for all operations
    427 +  - May require manual buffer management in fallback cases
    428 +  - The io_ring_ctx buffer table has a max size of 16K, which may not be enough
    429 +    when too many ublk devices are handled by a single io_ring_ctx and each
    430 +    one has a very large queue depth
    431 +
355 432    References
356 433    ==========
357 434
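The ``ublk_auto_buf_reg`` structure above fits in the low 64 bits of ``sqe->addr``. A minimal userspace sketch of that packing, assuming only the field layout shown in the hunk; the helper name here is a stand-in for the kernel's ublk_auto_buf_reg_to_sqe_addr(), whose exact definition is not quoted in this diff:

```c
#include <assert.h>
#include <stdint.h>

/* Field layout as shown in the documentation hunk above. */
struct ublk_auto_buf_reg {
	uint16_t index;     /* buffer index for registration */
	uint8_t  flags;     /* registration flags */
	uint8_t  reserved0; /* must be zero */
	uint32_t reserved1; /* must be zero */
};

/* Hypothetical stand-in for ublk_auto_buf_reg_to_sqe_addr(): pack the
 * fields into the 64-bit sqe->addr value, index in the low 16 bits. */
static inline uint64_t auto_buf_reg_to_addr(struct ublk_auto_buf_reg r)
{
	return (uint64_t)r.index |
	       ((uint64_t)r.flags << 16) |
	       ((uint64_t)r.reserved0 << 24) |
	       ((uint64_t)r.reserved1 << 32);
}
```

Leaving ``reserved0`` and ``reserved1`` zeroed, as requirement 3 above demands, means the packed value carries only the index and flags.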
+1 -1
Documentation/devicetree/bindings/pinctrl/starfive,jh7110-aon-pinctrl.yaml
··· 15 15 Some peripherals such as PWM have their I/O go through the 4 "GPIOs". 16 16 17 17 maintainers: 18 - - Jianlong Huang <jianlong.huang@starfivetech.com> 18 + - Hal Feng <hal.feng@starfivetech.com> 19 19 20 20 properties: 21 21 compatible:
+1 -1
Documentation/devicetree/bindings/pinctrl/starfive,jh7110-sys-pinctrl.yaml
··· 18 18 any GPIO can be set up to be controlled by any of the peripherals. 19 19 20 20 maintainers: 21 - - Jianlong Huang <jianlong.huang@starfivetech.com> 21 + - Hal Feng <hal.feng@starfivetech.com> 22 22 23 23 properties: 24 24 compatible:
+3 -1
Documentation/filesystems/proc.rst
··· 584 584 ms may share 585 585 gd stack segment growns down 586 586 pf pure PFN range 587 - dw disabled write to the mapped file 588 587 lo pages are locked in memory 589 588 io memory mapped I/O area 590 589 sr sequential read advise provided ··· 606 607 mt arm64 MTE allocation tags are enabled 607 608 um userfaultfd missing tracking 608 609 uw userfaultfd wr-protect tracking 610 + ui userfaultfd minor fault 609 611 ss shadow/guarded control stack page 610 612 sl sealed 613 + lf lock on fault pages 614 + dp always lazily freeable mapping 611 615 == ======================================= 612 616 613 617 Note that there is no guarantee that every flag and associated mnemonic will
+8 -4
MAINTAINERS
···
 4555  4555   M:	Martin KaFai Lau <martin.lau@linux.dev>
 4556  4556   M:	Daniel Borkmann <daniel@iogearbox.net>
 4557  4557   R:	John Fastabend <john.fastabend@gmail.com>
       4558 + R:	Stanislav Fomichev <sdf@fomichev.me>
 4558  4559   L:	bpf@vger.kernel.org
 4559  4560   L:	netdev@vger.kernel.org
 4560  4561   S:	Maintained
···
 6255  6254   F:	include/linux/smpboot.h
 6256  6255   F:	kernel/cpu.c
 6257  6256   F:	kernel/smpboot.*
       6257 + F:	rust/helper/cpu.c
 6258  6258   F:	rust/kernel/cpu.rs
 6259  6259
 6260  6260   CPU IDLE TIME MANAGEMENT FRAMEWORK
···
15921 15919   R:	Nico Pache <npache@redhat.com>
15922 15920   R:	Ryan Roberts <ryan.roberts@arm.com>
15923 15921   R:	Dev Jain <dev.jain@arm.com>
      15922 + R:	Barry Song <baohua@kernel.org>
15924 15923   L:	linux-mm@kvack.org
15925 15924   S:	Maintained
15926 15925   W:	http://www.linux-mm.org
···
17496 17493   NETWORKING [TCP]
17497 17494   M:	Eric Dumazet <edumazet@google.com>
17498 17495   M:	Neal Cardwell <ncardwell@google.com>
17499       - R:	Kuniyuki Iwashima <kuniyu@amazon.com>
      17496 + R:	Kuniyuki Iwashima <kuniyu@google.com>
17500 17497   L:	netdev@vger.kernel.org
17501 17498   S:	Maintained
17502 17499   F:	Documentation/networking/net_cachelines/tcp_sock.rst
···
17526 17523
17527 17524   NETWORKING [SOCKETS]
17528 17525   M:	Eric Dumazet <edumazet@google.com>
17529       - M:	Kuniyuki Iwashima <kuniyu@amazon.com>
      17526 + M:	Kuniyuki Iwashima <kuniyu@google.com>
17530 17527   M:	Paolo Abeni <pabeni@redhat.com>
17531 17528   M:	Willem de Bruijn <willemb@google.com>
17532 17529   S:	Maintained
···
17541 17538   F:	net/socket.c
17542 17539
17543 17540   NETWORKING [UNIX SOCKETS]
17544       - M:	Kuniyuki Iwashima <kuniyu@amazon.com>
      17541 + M:	Kuniyuki Iwashima <kuniyu@google.com>
17545 17542   S:	Maintained
17546 17543   F:	include/net/af_unix.h
17547 17544   F:	include/net/netns/unix.h
···
23664 23661
23665 23662   STARFIVE JH71X0 PINCTRL DRIVERS
23666 23663   M:	Emil Renner Berthing <kernel@esmil.dk>
23667       - M:	Jianlong Huang <jianlong.huang@starfivetech.com>
23668 23664   M:	Hal Feng <hal.feng@starfivetech.com>
23669 23665   L:	linux-gpio@vger.kernel.org
23670 23666   S:	Maintained
···
26969 26967   M:	Jakub Kicinski <kuba@kernel.org>
26970 26968   M:	Jesper Dangaard Brouer <hawk@kernel.org>
26971 26969   M:	John Fastabend <john.fastabend@gmail.com>
      26970 + R:	Stanislav Fomichev <sdf@fomichev.me>
26972 26971   L:	netdev@vger.kernel.org
26973 26972   L:	bpf@vger.kernel.org
26974 26973   S:	Supported
···
26991 26988   M:	Magnus Karlsson <magnus.karlsson@intel.com>
26992 26989   M:	Maciej Fijalkowski <maciej.fijalkowski@intel.com>
26993 26990   R:	Jonathan Lemon <jonathan.lemon@gmail.com>
      26991 + R:	Stanislav Fomichev <sdf@fomichev.me>
26994 26992   L:	netdev@vger.kernel.org
26995 26993   L:	bpf@vger.kernel.org
26996 26994   S:	Maintained
+1 -4
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 16 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc1 5 + EXTRAVERSION = -rc2 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION* ··· 1832 1832 # Misc 1833 1833 # --------------------------------------------------------------------------- 1834 1834 1835 - # Run misc checks when ${KBUILD_EXTRA_WARN} contains 1 1836 1835 PHONY += misc-check 1837 - ifneq ($(findstring 1,$(KBUILD_EXTRA_WARN)),) 1838 1836 misc-check: 1839 1837 $(Q)$(srctree)/scripts/misc-check 1840 - endif 1841 1838 1842 1839 all: misc-check 1843 1840
+1 -1
arch/alpha/include/asm/pgtable.h
··· 327 327 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 328 328 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 329 329 330 - static inline int pte_swp_exclusive(pte_t pte) 330 + static inline bool pte_swp_exclusive(pte_t pte) 331 331 { 332 332 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 333 333 }
+1 -1
arch/arc/include/asm/arcregs.h
··· 144 144 #define ARC_AUX_AGU_MOD2 0x5E2 145 145 #define ARC_AUX_AGU_MOD3 0x5E3 146 146 147 - #ifndef __ASSEMBLY__ 147 + #ifndef __ASSEMBLER__ 148 148 149 149 #include <soc/arc/arc_aux.h> 150 150
+2 -2
arch/arc/include/asm/atomic.h
··· 6 6 #ifndef _ASM_ARC_ATOMIC_H 7 7 #define _ASM_ARC_ATOMIC_H 8 8 9 - #ifndef __ASSEMBLY__ 9 + #ifndef __ASSEMBLER__ 10 10 11 11 #include <linux/types.h> 12 12 #include <linux/compiler.h> ··· 31 31 #include <asm/atomic64-arcv2.h> 32 32 #endif 33 33 34 - #endif /* !__ASSEMBLY__ */ 34 + #endif /* !__ASSEMBLER__ */ 35 35 36 36 #endif
+5 -10
arch/arc/include/asm/atomic64-arcv2.h
··· 137 137 #undef ATOMIC64_OP_RETURN 138 138 #undef ATOMIC64_OP 139 139 140 - static inline s64 141 - arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new) 140 + static inline u64 __arch_cmpxchg64_relaxed(volatile void *ptr, u64 old, u64 new) 142 141 { 143 - s64 prev; 144 - 145 - smp_mb(); 142 + u64 prev; 146 143 147 144 __asm__ __volatile__( 148 145 "1: llockd %0, [%1] \n" ··· 149 152 " bnz 1b \n" 150 153 "2: \n" 151 154 : "=&r"(prev) 152 - : "r"(ptr), "ir"(expected), "r"(new) 153 - : "cc"); /* memory clobber comes from smp_mb() */ 154 - 155 - smp_mb(); 155 + : "r"(ptr), "ir"(old), "r"(new) 156 + : "memory", "cc"); 156 157 157 158 return prev; 158 159 } 159 - #define arch_atomic64_cmpxchg arch_atomic64_cmpxchg 160 + #define arch_cmpxchg64_relaxed __arch_cmpxchg64_relaxed 160 161 161 162 static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new) 162 163 {
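The hunk above drops the smp_mb() pairs around the llockd/scondd loop and instead exposes a relaxed cmpxchg64, letting the generic atomics layer add ordering only where callers ask for it. A rough userspace analogue of a relaxed 64-bit compare-exchange, sketched with the GCC/Clang __atomic builtins rather than the ARCv2 asm (this is an illustration of the semantics, not the kernel implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Relaxed 64-bit compare-and-exchange.  Like the kernel primitive, it
 * returns the value observed before the attempt; callers needing
 * ordering wrap it in their own barriers. */
static uint64_t cmpxchg64_relaxed(uint64_t *ptr, uint64_t old, uint64_t new)
{
	uint64_t expected = old;

	/* On failure the builtin writes the observed value back into
	 * 'expected', which is exactly what we want to return. */
	__atomic_compare_exchange_n(ptr, &expected, new, 0,
				    __ATOMIC_RELAXED, __ATOMIC_RELAXED);
	return expected;
}
```

A caller checks success by comparing the return value against ``old``: equality means the store happened.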
+2 -2
arch/arc/include/asm/bitops.h
··· 10 10 #error only <linux/bitops.h> can be included directly 11 11 #endif 12 12 13 - #ifndef __ASSEMBLY__ 13 + #ifndef __ASSEMBLER__ 14 14 15 15 #include <linux/types.h> 16 16 #include <linux/compiler.h> ··· 192 192 #include <asm-generic/bitops/le.h> 193 193 #include <asm-generic/bitops/ext2-atomic-setbit.h> 194 194 195 - #endif /* !__ASSEMBLY__ */ 195 + #endif /* !__ASSEMBLER__ */ 196 196 197 197 #endif
+2 -2
arch/arc/include/asm/bug.h
··· 6 6 #ifndef _ASM_ARC_BUG_H 7 7 #define _ASM_ARC_BUG_H 8 8 9 - #ifndef __ASSEMBLY__ 9 + #ifndef __ASSEMBLER__ 10 10 11 11 #include <asm/ptrace.h> 12 12 ··· 29 29 30 30 #include <asm-generic/bug.h> 31 31 32 - #endif /* !__ASSEMBLY__ */ 32 + #endif /* !__ASSEMBLER__ */ 33 33 34 34 #endif
+2 -2
arch/arc/include/asm/cache.h
··· 23 23 */ 24 24 #define ARC_UNCACHED_ADDR_SPACE 0xc0000000 25 25 26 - #ifndef __ASSEMBLY__ 26 + #ifndef __ASSEMBLER__ 27 27 28 28 #include <linux/build_bug.h> 29 29 ··· 65 65 extern int ioc_enable; 66 66 extern unsigned long perip_base, perip_end; 67 67 68 - #endif /* !__ASSEMBLY__ */ 68 + #endif /* !__ASSEMBLER__ */ 69 69 70 70 /* Instruction cache related Auxiliary registers */ 71 71 #define ARC_REG_IC_BCR 0x77 /* Build Config reg */
+2 -2
arch/arc/include/asm/current.h
··· 9 9 #ifndef _ASM_ARC_CURRENT_H 10 10 #define _ASM_ARC_CURRENT_H 11 11 12 - #ifndef __ASSEMBLY__ 12 + #ifndef __ASSEMBLER__ 13 13 14 14 #ifdef CONFIG_ARC_CURR_IN_REG 15 15 ··· 20 20 #include <asm-generic/current.h> 21 21 #endif /* ! CONFIG_ARC_CURR_IN_REG */ 22 22 23 - #endif /* ! __ASSEMBLY__ */ 23 + #endif /* ! __ASSEMBLER__ */ 24 24 25 25 #endif /* _ASM_ARC_CURRENT_H */
+1 -1
arch/arc/include/asm/dsp-impl.h
··· 11 11 12 12 #define DSP_CTRL_DISABLED_ALL 0 13 13 14 - #ifdef __ASSEMBLY__ 14 + #ifdef __ASSEMBLER__ 15 15 16 16 /* clobbers r5 register */ 17 17 .macro DSP_EARLY_INIT
+2 -2
arch/arc/include/asm/dsp.h
··· 7 7 #ifndef __ASM_ARC_DSP_H 8 8 #define __ASM_ARC_DSP_H 9 9 10 - #ifndef __ASSEMBLY__ 10 + #ifndef __ASSEMBLER__ 11 11 12 12 /* 13 13 * DSP-related saved registers - need to be saved only when you are ··· 24 24 #endif 25 25 }; 26 26 27 - #endif /* !__ASSEMBLY__ */ 27 + #endif /* !__ASSEMBLER__ */ 28 28 29 29 #endif /* __ASM_ARC_DSP_H */
+2 -2
arch/arc/include/asm/dwarf.h
··· 6 6 #ifndef _ASM_ARC_DWARF_H 7 7 #define _ASM_ARC_DWARF_H 8 8 9 - #ifdef __ASSEMBLY__ 9 + #ifdef __ASSEMBLER__ 10 10 11 11 #ifdef ARC_DW2_UNWIND_AS_CFI 12 12 ··· 38 38 39 39 #endif /* !ARC_DW2_UNWIND_AS_CFI */ 40 40 41 - #endif /* __ASSEMBLY__ */ 41 + #endif /* __ASSEMBLER__ */ 42 42 43 43 #endif /* _ASM_ARC_DWARF_H */
+2 -2
arch/arc/include/asm/entry.h
··· 13 13 #include <asm/processor.h> /* For VMALLOC_START */ 14 14 #include <asm/mmu.h> 15 15 16 - #ifdef __ASSEMBLY__ 16 + #ifdef __ASSEMBLER__ 17 17 18 18 #ifdef CONFIG_ISA_ARCOMPACT 19 19 #include <asm/entry-compact.h> /* ISA specific bits */ ··· 146 146 147 147 #endif /* CONFIG_ARC_CURR_IN_REG */ 148 148 149 - #else /* !__ASSEMBLY__ */ 149 + #else /* !__ASSEMBLER__ */ 150 150 151 151 extern void do_signal(struct pt_regs *); 152 152 extern void do_notify_resume(struct pt_regs *);
+2 -2
arch/arc/include/asm/irqflags-arcv2.h
··· 50 50 #define ISA_INIT_STATUS_BITS (STATUS_IE_MASK | __AD_ENB | \ 51 51 (ARCV2_IRQ_DEF_PRIO << 1)) 52 52 53 - #ifndef __ASSEMBLY__ 53 + #ifndef __ASSEMBLER__ 54 54 55 55 /* 56 56 * Save IRQ state and disable IRQs ··· 170 170 seti 171 171 .endm 172 172 173 - #endif /* __ASSEMBLY__ */ 173 + #endif /* __ASSEMBLER__ */ 174 174 175 175 #endif
+2 -2
arch/arc/include/asm/irqflags-compact.h
··· 40 40 41 41 #define ISA_INIT_STATUS_BITS STATUS_IE_MASK 42 42 43 - #ifndef __ASSEMBLY__ 43 + #ifndef __ASSEMBLER__ 44 44 45 45 /****************************************************************** 46 46 * IRQ Control Macros ··· 196 196 flag \scratch 197 197 .endm 198 198 199 - #endif /* __ASSEMBLY__ */ 199 + #endif /* __ASSEMBLER__ */ 200 200 201 201 #endif
+2 -2
arch/arc/include/asm/jump_label.h
··· 2 2 #ifndef _ASM_ARC_JUMP_LABEL_H 3 3 #define _ASM_ARC_JUMP_LABEL_H 4 4 5 - #ifndef __ASSEMBLY__ 5 + #ifndef __ASSEMBLER__ 6 6 7 7 #include <linux/stringify.h> 8 8 #include <linux/types.h> ··· 68 68 jump_label_t key; 69 69 }; 70 70 71 - #endif /* __ASSEMBLY__ */ 71 + #endif /* __ASSEMBLER__ */ 72 72 #endif
+3 -3
arch/arc/include/asm/linkage.h
··· 12 12 #define __ALIGN .align 4 13 13 #define __ALIGN_STR __stringify(__ALIGN) 14 14 15 - #ifdef __ASSEMBLY__ 15 + #ifdef __ASSEMBLER__ 16 16 17 17 .macro ST2 e, o, off 18 18 #ifdef CONFIG_ARC_HAS_LL64 ··· 61 61 CFI_ENDPROC ASM_NL \ 62 62 .size name, .-name 63 63 64 - #else /* !__ASSEMBLY__ */ 64 + #else /* !__ASSEMBLER__ */ 65 65 66 66 #ifdef CONFIG_ARC_HAS_ICCM 67 67 #define __arcfp_code __section(".text.arcfp") ··· 75 75 #define __arcfp_data __section(".data") 76 76 #endif 77 77 78 - #endif /* __ASSEMBLY__ */ 78 + #endif /* __ASSEMBLER__ */ 79 79 80 80 #endif
+2 -2
arch/arc/include/asm/mmu-arcv2.h
··· 69 69 70 70 #define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK_PHYS | _PAGE_CACHEABLE) 71 71 72 - #ifndef __ASSEMBLY__ 72 + #ifndef __ASSEMBLER__ 73 73 74 74 struct mm_struct; 75 75 extern int pae40_exist_but_not_enab(void); ··· 100 100 sr \reg, [ARC_REG_PID] 101 101 .endm 102 102 103 - #endif /* !__ASSEMBLY__ */ 103 + #endif /* !__ASSEMBLER__ */ 104 104 105 105 #endif
+1 -1
arch/arc/include/asm/mmu.h
··· 6 6 #ifndef _ASM_ARC_MMU_H 7 7 #define _ASM_ARC_MMU_H 8 8 9 - #ifndef __ASSEMBLY__ 9 + #ifndef __ASSEMBLER__ 10 10 11 11 #include <linux/threads.h> /* NR_CPUS */ 12 12
+2 -2
arch/arc/include/asm/page.h
··· 19 19 20 20 #endif /* CONFIG_ARC_HAS_PAE40 */ 21 21 22 - #ifndef __ASSEMBLY__ 22 + #ifndef __ASSEMBLER__ 23 23 24 24 #define clear_page(paddr) memset((paddr), 0, PAGE_SIZE) 25 25 #define copy_user_page(to, from, vaddr, pg) copy_page(to, from) ··· 136 136 #include <asm-generic/memory_model.h> /* page_to_pfn, pfn_to_page */ 137 137 #include <asm-generic/getorder.h> 138 138 139 - #endif /* !__ASSEMBLY__ */ 139 + #endif /* !__ASSEMBLER__ */ 140 140 141 141 #endif
+3 -3
arch/arc/include/asm/pgtable-bits-arcv2.h
··· 75 75 * This is to enable COW mechanism 76 76 */ 77 77 /* xwr */ 78 - #ifndef __ASSEMBLY__ 78 + #ifndef __ASSEMBLER__ 79 79 80 80 #define pte_write(pte) (pte_val(pte) & _PAGE_WRITE) 81 81 #define pte_dirty(pte) (pte_val(pte) & _PAGE_DIRTY) ··· 130 130 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 131 131 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 132 132 133 - static inline int pte_swp_exclusive(pte_t pte) 133 + static inline bool pte_swp_exclusive(pte_t pte) 134 134 { 135 135 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 136 136 } ··· 142 142 #include <asm/hugepage.h> 143 143 #endif 144 144 145 - #endif /* __ASSEMBLY__ */ 145 + #endif /* __ASSEMBLER__ */ 146 146 147 147 #endif
+2 -2
arch/arc/include/asm/pgtable-levels.h
··· 85 85 86 86 #define PTRS_PER_PTE BIT(PMD_SHIFT - PAGE_SHIFT) 87 87 88 - #ifndef __ASSEMBLY__ 88 + #ifndef __ASSEMBLER__ 89 89 90 90 #if CONFIG_PGTABLE_LEVELS > 3 91 91 #include <asm-generic/pgtable-nop4d.h> ··· 181 181 #define pmd_leaf(x) (pmd_val(x) & _PAGE_HW_SZ) 182 182 #endif 183 183 184 - #endif /* !__ASSEMBLY__ */ 184 + #endif /* !__ASSEMBLER__ */ 185 185 186 186 #endif
+2 -2
arch/arc/include/asm/pgtable.h
··· 19 19 */ 20 20 #define USER_PTRS_PER_PGD (TASK_SIZE / PGDIR_SIZE) 21 21 22 - #ifndef __ASSEMBLY__ 22 + #ifndef __ASSEMBLER__ 23 23 24 24 extern char empty_zero_page[PAGE_SIZE]; 25 25 #define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page)) ··· 29 29 /* to cope with aliasing VIPT cache */ 30 30 #define HAVE_ARCH_UNMAPPED_AREA 31 31 32 - #endif /* __ASSEMBLY__ */ 32 + #endif /* __ASSEMBLER__ */ 33 33 34 34 #endif
+2 -2
arch/arc/include/asm/processor.h
··· 11 11 #ifndef __ASM_ARC_PROCESSOR_H 12 12 #define __ASM_ARC_PROCESSOR_H 13 13 14 - #ifndef __ASSEMBLY__ 14 + #ifndef __ASSEMBLER__ 15 15 16 16 #include <asm/ptrace.h> 17 17 #include <asm/dsp.h> ··· 66 66 67 67 extern unsigned int __get_wchan(struct task_struct *p); 68 68 69 - #endif /* !__ASSEMBLY__ */ 69 + #endif /* !__ASSEMBLER__ */ 70 70 71 71 /* 72 72 * Default System Memory Map on ARC
+2 -2
arch/arc/include/asm/ptrace.h
··· 10 10 #include <uapi/asm/ptrace.h> 11 11 #include <linux/compiler.h> 12 12 13 - #ifndef __ASSEMBLY__ 13 + #ifndef __ASSEMBLER__ 14 14 15 15 typedef union { 16 16 struct { ··· 172 172 extern int syscall_trace_enter(struct pt_regs *); 173 173 extern void syscall_trace_exit(struct pt_regs *); 174 174 175 - #endif /* !__ASSEMBLY__ */ 175 + #endif /* !__ASSEMBLER__ */ 176 176 177 177 #endif /* __ASM_PTRACE_H */
+1 -1
arch/arc/include/asm/switch_to.h
··· 6 6 #ifndef _ASM_ARC_SWITCH_TO_H 7 7 #define _ASM_ARC_SWITCH_TO_H 8 8 9 - #ifndef __ASSEMBLY__ 9 + #ifndef __ASSEMBLER__ 10 10 11 11 #include <linux/sched.h> 12 12 #include <asm/dsp-impl.h>
+2 -2
arch/arc/include/asm/thread_info.h
··· 24 24 #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER) 25 25 #define THREAD_SHIFT (PAGE_SHIFT << THREAD_SIZE_ORDER) 26 26 27 - #ifndef __ASSEMBLY__ 27 + #ifndef __ASSEMBLER__ 28 28 29 29 #include <linux/thread_info.h> 30 30 ··· 62 62 return (struct thread_info *)(sp & ~(THREAD_SIZE - 1)); 63 63 } 64 64 65 - #endif /* !__ASSEMBLY__ */ 65 + #endif /* !__ASSEMBLER__ */ 66 66 67 67 /* 68 68 * thread information flags
+2 -2
arch/arc/include/uapi/asm/ptrace.h
··· 14 14 15 15 #define PTRACE_GET_THREAD_AREA 25 16 16 17 - #ifndef __ASSEMBLY__ 17 + #ifndef __ASSEMBLER__ 18 18 /* 19 19 * Userspace ABI: Register state needed by 20 20 * -ptrace (gdbserver) ··· 53 53 unsigned long r30, r58, r59; 54 54 }; 55 55 56 - #endif /* !__ASSEMBLY__ */ 56 + #endif /* !__ASSEMBLER__ */ 57 57 58 58 #endif /* _UAPI__ASM_ARC_PTRACE_H */
+1 -10
arch/arc/kernel/unwind.c
··· 241 241 return (e1->start > e2->start) - (e1->start < e2->start); 242 242 } 243 243 244 - static void swap_eh_frame_hdr_table_entries(void *p1, void *p2, int size) 245 - { 246 - struct eh_frame_hdr_table_entry *e1 = p1; 247 - struct eh_frame_hdr_table_entry *e2 = p2; 248 - 249 - swap(e1->start, e2->start); 250 - swap(e1->fde, e2->fde); 251 - } 252 - 253 244 static void init_unwind_hdr(struct unwind_table *table, 254 245 void *(*alloc) (unsigned long)) 255 246 { ··· 336 345 sort(header->table, 337 346 n, 338 347 sizeof(*header->table), 339 - cmp_eh_frame_hdr_table_entries, swap_eh_frame_hdr_table_entries); 348 + cmp_eh_frame_hdr_table_entries, NULL); 340 349 341 350 table->hdrsz = hdrSize; 342 351 smp_wmb();
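The custom swap callback can go because passing NULL as sort()'s swap_func makes the kernel's generic sort fall back to a built-in byte-wise element swap, which moves each two-word entry as a unit and so keeps every start/fde pair together. A userspace sketch of the same idea, with libc qsort() standing in for the kernel's sort() (qsort likewise moves elements as opaque byte blocks):

```c
#include <assert.h>
#include <stdlib.h>

/* Mirrors struct eh_frame_hdr_table_entry: two words that must travel
 * together when the binary-search table is sorted. */
struct entry {
	unsigned long start;
	unsigned long fde;
};

/* Same comparison as cmp_eh_frame_hdr_table_entries() in the hunk. */
static int cmp_entries(const void *p1, const void *p2)
{
	const struct entry *e1 = p1, *e2 = p2;

	return (e1->start > e2->start) - (e1->start < e2->start);
}
```

Sorting an out-of-order table with this comparator alone leaves each fde attached to its start, which is why the dedicated field-by-field swap helper was redundant.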
+1 -1
arch/arm/include/asm/pgtable.h
··· 301 301 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 302 302 #define __swp_entry_to_pte(swp) __pte((swp).val) 303 303 304 - static inline int pte_swp_exclusive(pte_t pte) 304 + static inline bool pte_swp_exclusive(pte_t pte) 305 305 { 306 306 return pte_isset(pte, L_PTE_SWP_EXCLUSIVE); 307 307 }
+28 -6
arch/arm64/include/asm/kvm_host.h
··· 1107 1107 #define ctxt_sys_reg(c,r) (*__ctxt_sys_reg(c,r)) 1108 1108 1109 1109 u64 kvm_vcpu_apply_reg_masks(const struct kvm_vcpu *, enum vcpu_sysreg, u64); 1110 - #define __vcpu_sys_reg(v,r) \ 1111 - (*({ \ 1110 + 1111 + #define __vcpu_assign_sys_reg(v, r, val) \ 1112 + do { \ 1112 1113 const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \ 1113 - u64 *__r = __ctxt_sys_reg(ctxt, (r)); \ 1114 + u64 __v = (val); \ 1114 1115 if (vcpu_has_nv((v)) && (r) >= __SANITISED_REG_START__) \ 1115 - *__r = kvm_vcpu_apply_reg_masks((v), (r), *__r);\ 1116 - __r; \ 1117 - })) 1116 + __v = kvm_vcpu_apply_reg_masks((v), (r), __v); \ 1117 + \ 1118 + ctxt_sys_reg(ctxt, (r)) = __v; \ 1119 + } while (0) 1120 + 1121 + #define __vcpu_rmw_sys_reg(v, r, op, val) \ 1122 + do { \ 1123 + const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \ 1124 + u64 __v = ctxt_sys_reg(ctxt, (r)); \ 1125 + __v op (val); \ 1126 + if (vcpu_has_nv((v)) && (r) >= __SANITISED_REG_START__) \ 1127 + __v = kvm_vcpu_apply_reg_masks((v), (r), __v); \ 1128 + \ 1129 + ctxt_sys_reg(ctxt, (r)) = __v; \ 1130 + } while (0) 1131 + 1132 + #define __vcpu_sys_reg(v,r) \ 1133 + ({ \ 1134 + const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \ 1135 + u64 __v = ctxt_sys_reg(ctxt, (r)); \ 1136 + if (vcpu_has_nv((v)) && (r) >= __SANITISED_REG_START__) \ 1137 + __v = kvm_vcpu_apply_reg_masks((v), (r), __v); \ 1138 + __v; \ 1139 + }) 1118 1140 1119 1141 u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg); 1120 1142 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
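The point of splitting the old lvalue-returning ``__vcpu_sys_reg()`` into separate assign and read-modify-write forms is that an operation like ``|=`` must sanitise the value actually being stored, not just the value that was read. A self-contained sketch of that pattern, under loud assumptions: the register array and ``apply_masks()`` below are stand-ins for the vcpu context and kvm_vcpu_apply_reg_masks(), not the KVM API:

```c
#include <assert.h>
#include <stdint.h>

#define NR_REGS 4

struct vcpu { uint64_t regs[NR_REGS]; };

/* Stand-in for kvm_vcpu_apply_reg_masks(): force bit 0 clear (RES0). */
static uint64_t apply_masks(uint64_t v) { return v & ~1ULL; }

/* Assignment form: sanitise the new value, then store it. */
#define vcpu_assign_reg(v, r, val)			\
	do {						\
		uint64_t __v = (val);			\
		__v = apply_masks(__v);			\
		(v)->regs[(r)] = __v;			\
	} while (0)

/* Read-modify-write form: read, apply the operator, sanitise, store. */
#define vcpu_rmw_reg(v, r, op, val)			\
	do {						\
		uint64_t __v = (v)->regs[(r)];		\
		__v op (val);				\
		__v = apply_masks(__v);			\
		(v)->regs[(r)] = __v;			\
	} while (0)
```

Passing the operator (``|=``, ``&=``) as a macro argument mirrors the ``__vcpu_rmw_sys_reg(v, r, op, val)`` shape in the hunk: the mask runs after the modification, so no unsanitised bits can reach the stored register.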
+1 -1
arch/arm64/include/asm/pgtable.h
··· 563 563 return set_pte_bit(pte, __pgprot(PTE_SWP_EXCLUSIVE)); 564 564 } 565 565 566 - static inline int pte_swp_exclusive(pte_t pte) 566 + static inline bool pte_swp_exclusive(pte_t pte) 567 567 { 568 568 return pte_val(pte) & PTE_SWP_EXCLUSIVE; 569 569 }
+9 -9
arch/arm64/kvm/arch_timer.c
··· 108 108 109 109 switch(arch_timer_ctx_index(ctxt)) { 110 110 case TIMER_VTIMER: 111 - __vcpu_sys_reg(vcpu, CNTV_CTL_EL0) = ctl; 111 + __vcpu_assign_sys_reg(vcpu, CNTV_CTL_EL0, ctl); 112 112 break; 113 113 case TIMER_PTIMER: 114 - __vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = ctl; 114 + __vcpu_assign_sys_reg(vcpu, CNTP_CTL_EL0, ctl); 115 115 break; 116 116 case TIMER_HVTIMER: 117 - __vcpu_sys_reg(vcpu, CNTHV_CTL_EL2) = ctl; 117 + __vcpu_assign_sys_reg(vcpu, CNTHV_CTL_EL2, ctl); 118 118 break; 119 119 case TIMER_HPTIMER: 120 - __vcpu_sys_reg(vcpu, CNTHP_CTL_EL2) = ctl; 120 + __vcpu_assign_sys_reg(vcpu, CNTHP_CTL_EL2, ctl); 121 121 break; 122 122 default: 123 123 WARN_ON(1); ··· 130 130 131 131 switch(arch_timer_ctx_index(ctxt)) { 132 132 case TIMER_VTIMER: 133 - __vcpu_sys_reg(vcpu, CNTV_CVAL_EL0) = cval; 133 + __vcpu_assign_sys_reg(vcpu, CNTV_CVAL_EL0, cval); 134 134 break; 135 135 case TIMER_PTIMER: 136 - __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = cval; 136 + __vcpu_assign_sys_reg(vcpu, CNTP_CVAL_EL0, cval); 137 137 break; 138 138 case TIMER_HVTIMER: 139 - __vcpu_sys_reg(vcpu, CNTHV_CVAL_EL2) = cval; 139 + __vcpu_assign_sys_reg(vcpu, CNTHV_CVAL_EL2, cval); 140 140 break; 141 141 case TIMER_HPTIMER: 142 - __vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2) = cval; 142 + __vcpu_assign_sys_reg(vcpu, CNTHP_CVAL_EL2, cval); 143 143 break; 144 144 default: 145 145 WARN_ON(1); ··· 1036 1036 if (vcpu_has_nv(vcpu)) { 1037 1037 struct arch_timer_offset *offs = &vcpu_vtimer(vcpu)->offset; 1038 1038 1039 - offs->vcpu_offset = &__vcpu_sys_reg(vcpu, CNTVOFF_EL2); 1039 + offs->vcpu_offset = __ctxt_sys_reg(&vcpu->arch.ctxt, CNTVOFF_EL2); 1040 1040 offs->vm_offset = &vcpu->kvm->arch.timer_data.poffset; 1041 1041 } 1042 1042
+2 -2
arch/arm64/kvm/debug.c
··· 216 216 void kvm_debug_handle_oslar(struct kvm_vcpu *vcpu, u64 val) 217 217 { 218 218 if (val & OSLAR_EL1_OSLK) 219 - __vcpu_sys_reg(vcpu, OSLSR_EL1) |= OSLSR_EL1_OSLK; 219 + __vcpu_rmw_sys_reg(vcpu, OSLSR_EL1, |=, OSLSR_EL1_OSLK); 220 220 else 221 - __vcpu_sys_reg(vcpu, OSLSR_EL1) &= ~OSLSR_EL1_OSLK; 221 + __vcpu_rmw_sys_reg(vcpu, OSLSR_EL1, &=, ~OSLSR_EL1_OSLK); 222 222 223 223 preempt_disable(); 224 224 kvm_arch_vcpu_put(vcpu);
+2 -2
arch/arm64/kvm/fpsimd.c
··· 103 103 fp_state.sve_state = vcpu->arch.sve_state; 104 104 fp_state.sve_vl = vcpu->arch.sve_max_vl; 105 105 fp_state.sme_state = NULL; 106 - fp_state.svcr = &__vcpu_sys_reg(vcpu, SVCR); 107 - fp_state.fpmr = &__vcpu_sys_reg(vcpu, FPMR); 106 + fp_state.svcr = __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR); 107 + fp_state.fpmr = __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR); 108 108 fp_state.fp_type = &vcpu->arch.fp_type; 109 109 110 110 if (vcpu_has_sve(vcpu))
+2 -2
arch/arm64/kvm/hyp/exception.c
··· 37 37 if (unlikely(vcpu_has_nv(vcpu))) 38 38 vcpu_write_sys_reg(vcpu, val, reg); 39 39 else if (!__vcpu_write_sys_reg_to_cpu(val, reg)) 40 - __vcpu_sys_reg(vcpu, reg) = val; 40 + __vcpu_assign_sys_reg(vcpu, reg, val); 41 41 } 42 42 43 43 static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, unsigned long target_mode, ··· 51 51 } else if (has_vhe()) { 52 52 write_sysreg_el1(val, SYS_SPSR); 53 53 } else { 54 - __vcpu_sys_reg(vcpu, SPSR_EL1) = val; 54 + __vcpu_assign_sys_reg(vcpu, SPSR_EL1, val); 55 55 } 56 56 } 57 57
+2 -2
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 45 45 if (!vcpu_el1_is_32bit(vcpu)) 46 46 return; 47 47 48 - __vcpu_sys_reg(vcpu, FPEXC32_EL2) = read_sysreg(fpexc32_el2); 48 + __vcpu_assign_sys_reg(vcpu, FPEXC32_EL2, read_sysreg(fpexc32_el2)); 49 49 } 50 50 51 51 static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) ··· 456 456 */ 457 457 if (vcpu_has_sve(vcpu)) { 458 458 zcr_el1 = read_sysreg_el1(SYS_ZCR); 459 - __vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)) = zcr_el1; 459 + __vcpu_assign_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu), zcr_el1); 460 460 461 461 /* 462 462 * The guest's state is always saved using the guest's max VL.
+3 -3
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
··· 307 307 vcpu->arch.ctxt.spsr_irq = read_sysreg(spsr_irq); 308 308 vcpu->arch.ctxt.spsr_fiq = read_sysreg(spsr_fiq); 309 309 310 - __vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2); 311 - __vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2); 310 + __vcpu_assign_sys_reg(vcpu, DACR32_EL2, read_sysreg(dacr32_el2)); 311 + __vcpu_assign_sys_reg(vcpu, IFSR32_EL2, read_sysreg(ifsr32_el2)); 312 312 313 313 if (has_vhe() || kvm_debug_regs_in_use(vcpu)) 314 - __vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2); 314 + __vcpu_assign_sys_reg(vcpu, DBGVCR32_EL2, read_sysreg(dbgvcr32_el2)); 315 315 } 316 316 317 317 static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
+2 -2
arch/arm64/kvm/hyp/nvhe/hyp-main.c
··· 26 26 27 27 static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu) 28 28 { 29 - __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR); 29 + __vcpu_assign_sys_reg(vcpu, ZCR_EL1, read_sysreg_el1(SYS_ZCR)); 30 30 /* 31 31 * On saving/restoring guest sve state, always use the maximum VL for 32 32 * the guest. The layout of the data when saving the sve state depends ··· 79 79 80 80 has_fpmr = kvm_has_fpmr(kern_hyp_va(vcpu->kvm)); 81 81 if (has_fpmr) 82 - __vcpu_sys_reg(vcpu, FPMR) = read_sysreg_s(SYS_FPMR); 82 + __vcpu_assign_sys_reg(vcpu, FPMR, read_sysreg_s(SYS_FPMR)); 83 83 84 84 if (system_supports_sve()) 85 85 __hyp_sve_restore_host();
+2 -2
arch/arm64/kvm/hyp/vhe/switch.c
··· 223 223 */ 224 224 val = read_sysreg_el0(SYS_CNTP_CVAL); 225 225 if (map.direct_ptimer == vcpu_ptimer(vcpu)) 226 - __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = val; 226 + __vcpu_assign_sys_reg(vcpu, CNTP_CVAL_EL0, val); 227 227 if (map.direct_ptimer == vcpu_hptimer(vcpu)) 228 - __vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2) = val; 228 + __vcpu_assign_sys_reg(vcpu, CNTHP_CVAL_EL2, val); 229 229 230 230 offset = read_sysreg_s(SYS_CNTPOFF_EL2); 231 231
+23 -23
arch/arm64/kvm/hyp/vhe/sysreg-sr.c
··· 18 18 static void __sysreg_save_vel2_state(struct kvm_vcpu *vcpu) 19 19 { 20 20 /* These registers are common with EL1 */ 21 - __vcpu_sys_reg(vcpu, PAR_EL1) = read_sysreg(par_el1); 22 - __vcpu_sys_reg(vcpu, TPIDR_EL1) = read_sysreg(tpidr_el1); 21 + __vcpu_assign_sys_reg(vcpu, PAR_EL1, read_sysreg(par_el1)); 22 + __vcpu_assign_sys_reg(vcpu, TPIDR_EL1, read_sysreg(tpidr_el1)); 23 23 24 - __vcpu_sys_reg(vcpu, ESR_EL2) = read_sysreg_el1(SYS_ESR); 25 - __vcpu_sys_reg(vcpu, AFSR0_EL2) = read_sysreg_el1(SYS_AFSR0); 26 - __vcpu_sys_reg(vcpu, AFSR1_EL2) = read_sysreg_el1(SYS_AFSR1); 27 - __vcpu_sys_reg(vcpu, FAR_EL2) = read_sysreg_el1(SYS_FAR); 28 - __vcpu_sys_reg(vcpu, MAIR_EL2) = read_sysreg_el1(SYS_MAIR); 29 - __vcpu_sys_reg(vcpu, VBAR_EL2) = read_sysreg_el1(SYS_VBAR); 30 - __vcpu_sys_reg(vcpu, CONTEXTIDR_EL2) = read_sysreg_el1(SYS_CONTEXTIDR); 31 - __vcpu_sys_reg(vcpu, AMAIR_EL2) = read_sysreg_el1(SYS_AMAIR); 24 + __vcpu_assign_sys_reg(vcpu, ESR_EL2, read_sysreg_el1(SYS_ESR)); 25 + __vcpu_assign_sys_reg(vcpu, AFSR0_EL2, read_sysreg_el1(SYS_AFSR0)); 26 + __vcpu_assign_sys_reg(vcpu, AFSR1_EL2, read_sysreg_el1(SYS_AFSR1)); 27 + __vcpu_assign_sys_reg(vcpu, FAR_EL2, read_sysreg_el1(SYS_FAR)); 28 + __vcpu_assign_sys_reg(vcpu, MAIR_EL2, read_sysreg_el1(SYS_MAIR)); 29 + __vcpu_assign_sys_reg(vcpu, VBAR_EL2, read_sysreg_el1(SYS_VBAR)); 30 + __vcpu_assign_sys_reg(vcpu, CONTEXTIDR_EL2, read_sysreg_el1(SYS_CONTEXTIDR)); 31 + __vcpu_assign_sys_reg(vcpu, AMAIR_EL2, read_sysreg_el1(SYS_AMAIR)); 32 32 33 33 /* 34 34 * In VHE mode those registers are compatible between EL1 and EL2, ··· 46 46 * are always trapped, ensuring that the in-memory 47 47 * copy is always up-to-date. A small blessing... 
48 48 */ 49 - __vcpu_sys_reg(vcpu, SCTLR_EL2) = read_sysreg_el1(SYS_SCTLR); 50 - __vcpu_sys_reg(vcpu, TTBR0_EL2) = read_sysreg_el1(SYS_TTBR0); 51 - __vcpu_sys_reg(vcpu, TTBR1_EL2) = read_sysreg_el1(SYS_TTBR1); 52 - __vcpu_sys_reg(vcpu, TCR_EL2) = read_sysreg_el1(SYS_TCR); 49 + __vcpu_assign_sys_reg(vcpu, SCTLR_EL2, read_sysreg_el1(SYS_SCTLR)); 50 + __vcpu_assign_sys_reg(vcpu, TTBR0_EL2, read_sysreg_el1(SYS_TTBR0)); 51 + __vcpu_assign_sys_reg(vcpu, TTBR1_EL2, read_sysreg_el1(SYS_TTBR1)); 52 + __vcpu_assign_sys_reg(vcpu, TCR_EL2, read_sysreg_el1(SYS_TCR)); 53 53 54 54 if (ctxt_has_tcrx(&vcpu->arch.ctxt)) { 55 - __vcpu_sys_reg(vcpu, TCR2_EL2) = read_sysreg_el1(SYS_TCR2); 55 + __vcpu_assign_sys_reg(vcpu, TCR2_EL2, read_sysreg_el1(SYS_TCR2)); 56 56 57 57 if (ctxt_has_s1pie(&vcpu->arch.ctxt)) { 58 - __vcpu_sys_reg(vcpu, PIRE0_EL2) = read_sysreg_el1(SYS_PIRE0); 59 - __vcpu_sys_reg(vcpu, PIR_EL2) = read_sysreg_el1(SYS_PIR); 58 + __vcpu_assign_sys_reg(vcpu, PIRE0_EL2, read_sysreg_el1(SYS_PIRE0)); 59 + __vcpu_assign_sys_reg(vcpu, PIR_EL2, read_sysreg_el1(SYS_PIR)); 60 60 } 61 61 62 62 if (ctxt_has_s1poe(&vcpu->arch.ctxt)) 63 - __vcpu_sys_reg(vcpu, POR_EL2) = read_sysreg_el1(SYS_POR); 63 + __vcpu_assign_sys_reg(vcpu, POR_EL2, read_sysreg_el1(SYS_POR)); 64 64 } 65 65 66 66 /*
··· 70 70 */ 71 71 val = read_sysreg_el1(SYS_CNTKCTL); 72 72 val &= CNTKCTL_VALID_BITS; 73 - __vcpu_sys_reg(vcpu, CNTHCTL_EL2) &= ~CNTKCTL_VALID_BITS; 74 - __vcpu_sys_reg(vcpu, CNTHCTL_EL2) |= val; 73 + __vcpu_rmw_sys_reg(vcpu, CNTHCTL_EL2, &=, ~CNTKCTL_VALID_BITS); 74 + __vcpu_rmw_sys_reg(vcpu, CNTHCTL_EL2, |=, val); 75 75 } 76 76 77 - __vcpu_sys_reg(vcpu, SP_EL2) = read_sysreg(sp_el1); 78 - __vcpu_sys_reg(vcpu, ELR_EL2) = read_sysreg_el1(SYS_ELR); 79 - __vcpu_sys_reg(vcpu, SPSR_EL2) = read_sysreg_el1(SYS_SPSR); 77 + __vcpu_assign_sys_reg(vcpu, SP_EL2, read_sysreg(sp_el1)); 78 + __vcpu_assign_sys_reg(vcpu, ELR_EL2, read_sysreg_el1(SYS_ELR)); 79 + __vcpu_assign_sys_reg(vcpu, SPSR_EL2, read_sysreg_el1(SYS_SPSR)); 80 80 } 81 81 82 82 static void __sysreg_restore_vel2_state(struct kvm_vcpu *vcpu)
+1 -1
arch/arm64/kvm/nested.c
··· 1757 1757 1758 1758 out: 1759 1759 for (enum vcpu_sysreg sr = __SANITISED_REG_START__; sr < NR_SYS_REGS; sr++) 1760 - (void)__vcpu_sys_reg(vcpu, sr); 1760 + __vcpu_rmw_sys_reg(vcpu, sr, |=, 0); 1761 1761 1762 1762 return 0; 1763 1763 }
+12 -12
arch/arm64/kvm/pmu-emul.c
··· 178 178 val |= lower_32_bits(val); 179 179 } 180 180 181 - __vcpu_sys_reg(vcpu, reg) = val; 181 + __vcpu_assign_sys_reg(vcpu, reg, val); 182 182 183 183 /* Recreate the perf event to reflect the updated sample_period */ 184 184 kvm_pmu_create_perf_event(pmc);
··· 204 204 void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx, u64 val) 205 205 { 206 206 kvm_pmu_release_perf_event(kvm_vcpu_idx_to_pmc(vcpu, select_idx)); 207 - __vcpu_sys_reg(vcpu, counter_index_to_reg(select_idx)) = val; 207 + __vcpu_assign_sys_reg(vcpu, counter_index_to_reg(select_idx), val); 208 208 kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 209 209 } 210 210
··· 239 239 240 240 reg = counter_index_to_reg(pmc->idx); 241 241 242 - __vcpu_sys_reg(vcpu, reg) = val; 242 + __vcpu_assign_sys_reg(vcpu, reg, val); 243 243 244 244 kvm_pmu_release_perf_event(pmc); 245 245 }
··· 503 503 reg = __vcpu_sys_reg(vcpu, counter_index_to_reg(i)) + 1; 504 504 if (!kvm_pmc_is_64bit(pmc)) 505 505 reg = lower_32_bits(reg); 506 - __vcpu_sys_reg(vcpu, counter_index_to_reg(i)) = reg; 506 + __vcpu_assign_sys_reg(vcpu, counter_index_to_reg(i), reg); 507 507 508 508 /* No overflow? move on */ 509 509 if (kvm_pmc_has_64bit_overflow(pmc) ? reg : lower_32_bits(reg)) 510 510 continue; 511 511 512 512 /* Mark overflow */ 513 - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i); 513 + __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, BIT(i)); 514 514 515 515 if (kvm_pmu_counter_can_chain(pmc)) 516 516 kvm_pmu_counter_increment(vcpu, BIT(i + 1),
··· 556 556 perf_event->attr.sample_period = period; 557 557 perf_event->hw.sample_period = period; 558 558 559 - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx); 559 + __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, BIT(idx)); 560 560 561 561 if (kvm_pmu_counter_can_chain(pmc)) 562 562 kvm_pmu_counter_increment(vcpu, BIT(idx + 1),
··· 602 602 kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 603 603 604 604 /* The reset bits don't indicate any state, and shouldn't be saved. */ 605 - __vcpu_sys_reg(vcpu, PMCR_EL0) = val & ~(ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_P); 605 + __vcpu_assign_sys_reg(vcpu, PMCR_EL0, (val & ~(ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_P))); 606 606 607 607 if (val & ARMV8_PMU_PMCR_C) 608 608 kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
··· 779 779 u64 reg; 780 780 781 781 reg = counter_index_to_evtreg(pmc->idx); 782 - __vcpu_sys_reg(vcpu, reg) = data & kvm_pmu_evtyper_mask(vcpu->kvm); 782 + __vcpu_assign_sys_reg(vcpu, reg, (data & kvm_pmu_evtyper_mask(vcpu->kvm))); 783 783 784 784 kvm_pmu_create_perf_event(pmc); 785 785 }
··· 914 914 { 915 915 u64 mask = kvm_pmu_implemented_counter_mask(vcpu); 916 916 917 - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask; 918 - __vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= mask; 919 - __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= mask; 917 + __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, mask); 918 + __vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, mask); 919 + __vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, mask); 920 920 921 921 kvm_pmu_reprogram_counter_mask(vcpu, mask); 922 922 }
··· 1038 1038 u64 val = __vcpu_sys_reg(vcpu, MDCR_EL2); 1039 1039 val &= ~MDCR_EL2_HPMN; 1040 1040 val |= FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters); 1041 - __vcpu_sys_reg(vcpu, MDCR_EL2) = val; 1041 + __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val); 1042 1042 } 1043 1043 } 1044 1044 }
+31 -29
arch/arm64/kvm/sys_regs.c
··· 228 228 * to reverse-translate virtual EL2 system registers for a 229 229 * non-VHE guest hypervisor. 230 230 */ 231 - __vcpu_sys_reg(vcpu, reg) = val; 231 + __vcpu_assign_sys_reg(vcpu, reg, val); 232 232 233 233 switch (reg) { 234 234 case CNTHCTL_EL2: ··· 263 263 return; 264 264 265 265 memory_write: 266 - __vcpu_sys_reg(vcpu, reg) = val; 266 + __vcpu_assign_sys_reg(vcpu, reg, val); 267 267 } 268 268 269 269 /* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */ ··· 605 605 if ((val ^ rd->val) & ~OSLSR_EL1_OSLK) 606 606 return -EINVAL; 607 607 608 - __vcpu_sys_reg(vcpu, rd->reg) = val; 608 + __vcpu_assign_sys_reg(vcpu, rd->reg, val); 609 609 return 0; 610 610 } 611 611 ··· 791 791 mask |= GENMASK(n - 1, 0); 792 792 793 793 reset_unknown(vcpu, r); 794 - __vcpu_sys_reg(vcpu, r->reg) &= mask; 794 + __vcpu_rmw_sys_reg(vcpu, r->reg, &=, mask); 795 795 796 796 return __vcpu_sys_reg(vcpu, r->reg); 797 797 } ··· 799 799 static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 800 800 { 801 801 reset_unknown(vcpu, r); 802 - __vcpu_sys_reg(vcpu, r->reg) &= GENMASK(31, 0); 802 + __vcpu_rmw_sys_reg(vcpu, r->reg, &=, GENMASK(31, 0)); 803 803 804 804 return __vcpu_sys_reg(vcpu, r->reg); 805 805 } ··· 811 811 return 0; 812 812 813 813 reset_unknown(vcpu, r); 814 - __vcpu_sys_reg(vcpu, r->reg) &= kvm_pmu_evtyper_mask(vcpu->kvm); 814 + __vcpu_rmw_sys_reg(vcpu, r->reg, &=, kvm_pmu_evtyper_mask(vcpu->kvm)); 815 815 816 816 return __vcpu_sys_reg(vcpu, r->reg); 817 817 } ··· 819 819 static u64 reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 820 820 { 821 821 reset_unknown(vcpu, r); 822 - __vcpu_sys_reg(vcpu, r->reg) &= PMSELR_EL0_SEL_MASK; 822 + __vcpu_rmw_sys_reg(vcpu, r->reg, &=, PMSELR_EL0_SEL_MASK); 823 823 824 824 return __vcpu_sys_reg(vcpu, r->reg); 825 825 } ··· 835 835 * The value of PMCR.N field is included when the 836 836 * vCPU register is read via kvm_vcpu_read_pmcr(). 
837 837 */ 838 - __vcpu_sys_reg(vcpu, r->reg) = pmcr; 838 + __vcpu_assign_sys_reg(vcpu, r->reg, pmcr); 839 839 840 840 return __vcpu_sys_reg(vcpu, r->reg); 841 841 }
··· 907 907 return false; 908 908 909 909 if (p->is_write) 910 - __vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval; 910 + __vcpu_assign_sys_reg(vcpu, PMSELR_EL0, p->regval); 911 911 else 912 912 /* return PMSELR.SEL field */ 913 913 p->regval = __vcpu_sys_reg(vcpu, PMSELR_EL0)
··· 1076 1076 { 1077 1077 u64 mask = kvm_pmu_accessible_counter_mask(vcpu); 1078 1078 1079 - __vcpu_sys_reg(vcpu, r->reg) = val & mask; 1079 + __vcpu_assign_sys_reg(vcpu, r->reg, val & mask); 1080 1080 kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 1081 1081 1082 1082 return 0;
··· 1103 1103 val = p->regval & mask; 1104 1104 if (r->Op2 & 0x1) 1105 1105 /* accessing PMCNTENSET_EL0 */ 1106 - __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val; 1106 + __vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, |=, val); 1107 1107 else 1108 1108 /* accessing PMCNTENCLR_EL0 */ 1109 - __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val; 1109 + __vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, ~val); 1110 1110 1111 1111 kvm_pmu_reprogram_counter_mask(vcpu, val); 1112 1112 } else {
··· 1129 1129 1130 1130 if (r->Op2 & 0x1) 1131 1131 /* accessing PMINTENSET_EL1 */ 1132 - __vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val; 1132 + __vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, |=, val); 1133 1133 else 1134 1134 /* accessing PMINTENCLR_EL1 */ 1135 - __vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val; 1135 + __vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, ~val); 1136 1136 } else { 1137 1137 p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1); 1138 1138 }
··· 1151 1151 if (p->is_write) { 1152 1152 if (r->CRm & 0x2) 1153 1153 /* accessing PMOVSSET_EL0 */ 1154 - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask); 1154 + __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, (p->regval & mask)); 1155 1155 else 1156 1156 /* accessing PMOVSCLR_EL0 */ 1157 - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask); 1157 + __vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, ~(p->regval & mask)); 1158 1158 } else { 1159 1159 p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0); 1160 1160 }
··· 1185 1185 if (!vcpu_mode_priv(vcpu)) 1186 1186 return undef_access(vcpu, p, r); 1187 1187 1188 - __vcpu_sys_reg(vcpu, PMUSERENR_EL0) = 1189 - p->regval & ARMV8_PMU_USERENR_MASK; 1188 + __vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0, 1189 + (p->regval & ARMV8_PMU_USERENR_MASK)); 1190 1190 } else { 1191 1191 p->regval = __vcpu_sys_reg(vcpu, PMUSERENR_EL0) 1192 1192 & ARMV8_PMU_USERENR_MASK;
··· 1237 1237 if (!kvm_supports_32bit_el0()) 1238 1238 val |= ARMV8_PMU_PMCR_LC; 1239 1239 1240 - __vcpu_sys_reg(vcpu, r->reg) = val; 1240 + __vcpu_assign_sys_reg(vcpu, r->reg, val); 1241 1241 kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 1242 1242 1243 1243 return 0;
··· 2213 2213 if (kvm_has_mte(vcpu->kvm)) 2214 2214 clidr |= 2ULL << CLIDR_TTYPE_SHIFT(loc); 2215 2215 2216 - __vcpu_sys_reg(vcpu, r->reg) = clidr; 2216 + __vcpu_assign_sys_reg(vcpu, r->reg, clidr); 2217 2217 2218 2218 return __vcpu_sys_reg(vcpu, r->reg); 2219 2219 }
··· 2227 2227 if ((val & CLIDR_EL1_RES0) || (!(ctr_el0 & CTR_EL0_IDC) && idc)) 2228 2228 return -EINVAL; 2229 2229 2230 - __vcpu_sys_reg(vcpu, rd->reg) = val; 2230 + __vcpu_assign_sys_reg(vcpu, rd->reg, val); 2231 2231 2232 2232 return 0; 2233 2233 }
··· 2404 2404 const struct sys_reg_desc *r) 2405 2405 { 2406 2406 if (p->is_write) 2407 - __vcpu_sys_reg(vcpu, SP_EL1) = p->regval; 2407 + __vcpu_assign_sys_reg(vcpu, SP_EL1, p->regval); 2408 2408 else 2409 2409 p->regval = __vcpu_sys_reg(vcpu, SP_EL1); 2410 2410
··· 2428 2428 const struct sys_reg_desc *r) 2429 2429 { 2430 2430 if (p->is_write) 2431 - __vcpu_sys_reg(vcpu, SPSR_EL1) = p->regval; 2431 + __vcpu_assign_sys_reg(vcpu, SPSR_EL1, p->regval); 2432 2432 else 2433 2433 p->regval = __vcpu_sys_reg(vcpu, SPSR_EL1); 2434 2434
··· 2440 2440 const struct sys_reg_desc *r) 2441 2441 { 2442 2442 if (p->is_write) 2443 - __vcpu_sys_reg(vcpu, CNTKCTL_EL1) = p->regval; 2443 + __vcpu_assign_sys_reg(vcpu, CNTKCTL_EL1, p->regval); 2444 2444 else 2445 2445 p->regval = __vcpu_sys_reg(vcpu, CNTKCTL_EL1); 2446 2446
··· 2454 2454 if (!cpus_have_final_cap(ARM64_HAS_HCR_NV1)) 2455 2455 val |= HCR_E2H; 2456 2456 2457 - return __vcpu_sys_reg(vcpu, r->reg) = val; 2457 + __vcpu_assign_sys_reg(vcpu, r->reg, val); 2458 + 2459 + return __vcpu_sys_reg(vcpu, r->reg); 2458 2460 } 2459 2461 2460 2462 static unsigned int __el2_visibility(const struct kvm_vcpu *vcpu,
··· 2627 2625 u64_replace_bits(val, hpmn, MDCR_EL2_HPMN); 2628 2626 } 2629 2627 2630 - __vcpu_sys_reg(vcpu, MDCR_EL2) = val; 2628 + __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val); 2631 2629 2632 2630 /* 2633 2631 * Request a reload of the PMU to enable/disable the counters
··· 2756 2754 2757 2755 static u64 reset_mdcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 2758 2756 { 2759 - __vcpu_sys_reg(vcpu, r->reg) = vcpu->kvm->arch.nr_pmu_counters; 2757 + __vcpu_assign_sys_reg(vcpu, r->reg, vcpu->kvm->arch.nr_pmu_counters); 2760 2758 return vcpu->kvm->arch.nr_pmu_counters; 2761 2759 }
··· 4792 4790 r->reset(vcpu, r); 4793 4791 4794 4792 if (r->reg >= __SANITISED_REG_START__ && r->reg < NR_SYS_REGS) 4795 - (void)__vcpu_sys_reg(vcpu, r->reg); 4793 + __vcpu_rmw_sys_reg(vcpu, r->reg, |=, 0); 4796 4794 } 4797 4795 4798 4796 set_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags);
··· 5014 5012 if (r->set_user) { 5015 5013 ret = (r->set_user)(vcpu, r, val); 5016 5014 } else { 5017 - __vcpu_sys_reg(vcpu, r->reg) = val; 5015 + __vcpu_assign_sys_reg(vcpu, r->reg, val); 5018 5016 ret = 0; 5019 5017 } 5020 5018
+2 -2
arch/arm64/kvm/sys_regs.h
··· 137 137 { 138 138 BUG_ON(!r->reg); 139 139 BUG_ON(r->reg >= NR_SYS_REGS); 140 - __vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL; 140 + __vcpu_assign_sys_reg(vcpu, r->reg, 0x1de7ec7edbadc0deULL); 141 141 return __vcpu_sys_reg(vcpu, r->reg); 142 142 } 143 143 ··· 145 145 { 146 146 BUG_ON(!r->reg); 147 147 BUG_ON(r->reg >= NR_SYS_REGS); 148 - __vcpu_sys_reg(vcpu, r->reg) = r->val; 148 + __vcpu_assign_sys_reg(vcpu, r->reg, r->val); 149 149 return __vcpu_sys_reg(vcpu, r->reg); 150 150 } 151 151
+5 -5
arch/arm64/kvm/vgic/vgic-v3-nested.c
··· 356 356 val = __vcpu_sys_reg(vcpu, ICH_HCR_EL2); 357 357 val &= ~ICH_HCR_EL2_EOIcount_MASK; 358 358 val |= (s_cpu_if->vgic_hcr & ICH_HCR_EL2_EOIcount_MASK); 359 - __vcpu_sys_reg(vcpu, ICH_HCR_EL2) = val; 360 - __vcpu_sys_reg(vcpu, ICH_VMCR_EL2) = s_cpu_if->vgic_vmcr; 359 + __vcpu_assign_sys_reg(vcpu, ICH_HCR_EL2, val); 360 + __vcpu_assign_sys_reg(vcpu, ICH_VMCR_EL2, s_cpu_if->vgic_vmcr); 361 361 362 362 for (i = 0; i < 4; i++) { 363 - __vcpu_sys_reg(vcpu, ICH_AP0RN(i)) = s_cpu_if->vgic_ap0r[i]; 364 - __vcpu_sys_reg(vcpu, ICH_AP1RN(i)) = s_cpu_if->vgic_ap1r[i]; 363 + __vcpu_assign_sys_reg(vcpu, ICH_AP0RN(i), s_cpu_if->vgic_ap0r[i]); 364 + __vcpu_assign_sys_reg(vcpu, ICH_AP1RN(i), s_cpu_if->vgic_ap1r[i]); 365 365 } 366 366 367 367 for_each_set_bit(i, &shadow_if->lr_map, kvm_vgic_global_state.nr_lr) { ··· 370 370 val &= ~ICH_LR_STATE; 371 371 val |= s_cpu_if->vgic_lr[i] & ICH_LR_STATE; 372 372 373 - __vcpu_sys_reg(vcpu, ICH_LRN(i)) = val; 373 + __vcpu_assign_sys_reg(vcpu, ICH_LRN(i), val); 374 374 s_cpu_if->vgic_lr[i] = 0; 375 375 } 376 376
+1 -1
arch/csky/include/asm/pgtable.h
··· 200 200 return pte; 201 201 } 202 202 203 - static inline int pte_swp_exclusive(pte_t pte) 203 + static inline bool pte_swp_exclusive(pte_t pte) 204 204 { 205 205 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 206 206 }
+1 -1
arch/hexagon/include/asm/pgtable.h
··· 387 387 (((type & 0x1f) << 1) | \ 388 388 ((offset & 0x3ffff8) << 10) | ((offset & 0x7) << 7)) }) 389 389 390 - static inline int pte_swp_exclusive(pte_t pte) 390 + static inline bool pte_swp_exclusive(pte_t pte) 391 391 { 392 392 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 393 393 }
+1 -1
arch/loongarch/include/asm/pgtable.h
··· 301 301 #define __pmd_to_swp_entry(pmd) ((swp_entry_t) { pmd_val(pmd) }) 302 302 #define __swp_entry_to_pmd(x) ((pmd_t) { (x).val | _PAGE_HUGE }) 303 303 304 - static inline int pte_swp_exclusive(pte_t pte) 304 + static inline bool pte_swp_exclusive(pte_t pte) 305 305 { 306 306 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 307 307 }
+1 -1
arch/m68k/include/asm/mcf_pgtable.h
··· 268 268 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 269 269 #define __swp_entry_to_pte(x) (__pte((x).val)) 270 270 271 - static inline int pte_swp_exclusive(pte_t pte) 271 + static inline bool pte_swp_exclusive(pte_t pte) 272 272 { 273 273 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 274 274 }
+1 -1
arch/m68k/include/asm/motorola_pgtable.h
··· 185 185 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 186 186 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 187 187 188 - static inline int pte_swp_exclusive(pte_t pte) 188 + static inline bool pte_swp_exclusive(pte_t pte) 189 189 { 190 190 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 191 191 }
+1 -1
arch/m68k/include/asm/sun3_pgtable.h
··· 169 169 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 170 170 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 171 171 172 - static inline int pte_swp_exclusive(pte_t pte) 172 + static inline bool pte_swp_exclusive(pte_t pte) 173 173 { 174 174 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 175 175 }
+1 -1
arch/microblaze/include/asm/pgtable.h
··· 398 398 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 2 }) 399 399 #define __swp_entry_to_pte(x) ((pte_t) { (x).val << 2 }) 400 400 401 - static inline int pte_swp_exclusive(pte_t pte) 401 + static inline bool pte_swp_exclusive(pte_t pte) 402 402 { 403 403 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 404 404 }
+2 -2
arch/mips/include/asm/pgtable.h
··· 534 534 #endif 535 535 536 536 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) 537 - static inline int pte_swp_exclusive(pte_t pte) 537 + static inline bool pte_swp_exclusive(pte_t pte) 538 538 { 539 539 return pte.pte_low & _PAGE_SWP_EXCLUSIVE; 540 540 } ··· 551 551 return pte; 552 552 } 553 553 #else 554 - static inline int pte_swp_exclusive(pte_t pte) 554 + static inline bool pte_swp_exclusive(pte_t pte) 555 555 { 556 556 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 557 557 }
+1 -1
arch/nios2/include/asm/pgtable.h
··· 259 259 #define __swp_entry_to_pte(swp) ((pte_t) { (swp).val }) 260 260 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 261 261 262 - static inline int pte_swp_exclusive(pte_t pte) 262 + static inline bool pte_swp_exclusive(pte_t pte) 263 263 { 264 264 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 265 265 }
+1 -1
arch/openrisc/include/asm/pgtable.h
··· 411 411 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 412 412 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 413 413 414 - static inline int pte_swp_exclusive(pte_t pte) 414 + static inline bool pte_swp_exclusive(pte_t pte) 415 415 { 416 416 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 417 417 }
+1 -1
arch/parisc/include/asm/pgtable.h
··· 425 425 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 426 426 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 427 427 428 - static inline int pte_swp_exclusive(pte_t pte) 428 + static inline bool pte_swp_exclusive(pte_t pte) 429 429 { 430 430 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 431 431 }
+1 -1
arch/powerpc/include/asm/book3s/32/pgtable.h
··· 365 365 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 3 }) 366 366 #define __swp_entry_to_pte(x) ((pte_t) { (x).val << 3 }) 367 367 368 - static inline int pte_swp_exclusive(pte_t pte) 368 + static inline bool pte_swp_exclusive(pte_t pte) 369 369 { 370 370 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 371 371 }
+1 -1
arch/powerpc/include/asm/book3s/64/pgtable.h
··· 693 693 return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_SWP_EXCLUSIVE)); 694 694 } 695 695 696 - static inline int pte_swp_exclusive(pte_t pte) 696 + static inline bool pte_swp_exclusive(pte_t pte) 697 697 { 698 698 return !!(pte_raw(pte) & cpu_to_be64(_PAGE_SWP_EXCLUSIVE)); 699 699 }
+1 -1
arch/powerpc/include/asm/nohash/pgtable.h
··· 286 286 return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); 287 287 } 288 288 289 - static inline int pte_swp_exclusive(pte_t pte) 289 + static inline bool pte_swp_exclusive(pte_t pte) 290 290 { 291 291 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 292 292 }
+9
arch/powerpc/platforms/book3s/vas-api.c
··· 521 521 return -EINVAL; 522 522 } 523 523 524 + /* 525 + * Map complete page to the paste address. So the user 526 + * space should pass 0ULL to the offset parameter. 527 + */ 528 + if (vma->vm_pgoff) { 529 + pr_debug("Page offset unsupported to map paste address\n"); 530 + return -EINVAL; 531 + } 532 + 524 533 /* Ensure instance has an open send window */ 525 534 if (!txwin) { 526 535 pr_err("No send window open?\n");
+6 -2
arch/powerpc/platforms/powernv/memtrace.c
··· 48 48 static int memtrace_mmap(struct file *filp, struct vm_area_struct *vma) 49 49 { 50 50 struct memtrace_entry *ent = filp->private_data; 51 + unsigned long ent_nrpages = ent->size >> PAGE_SHIFT; 52 + unsigned long vma_nrpages = vma_pages(vma); 51 53 52 - if (ent->size < vma->vm_end - vma->vm_start) 54 + /* The requested page offset should be within object's page count */ 55 + if (vma->vm_pgoff >= ent_nrpages) 53 56 return -EINVAL; 54 57 55 - if (vma->vm_pgoff << PAGE_SHIFT >= ent->size) 58 + /* The requested mapping range should remain within the bounds */ 59 + if (vma_nrpages > ent_nrpages - vma->vm_pgoff) 56 60 return -EINVAL; 57 61 58 62 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+1 -1
arch/riscv/include/asm/pgtable.h
··· 1028 1028 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 1029 1029 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 1030 1030 1031 - static inline int pte_swp_exclusive(pte_t pte) 1031 + static inline bool pte_swp_exclusive(pte_t pte) 1032 1032 { 1033 1033 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 1034 1034 }
+1 -1
arch/s390/include/asm/pgtable.h
··· 915 915 } 916 916 #endif 917 917 918 - static inline int pte_swp_exclusive(pte_t pte) 918 + static inline bool pte_swp_exclusive(pte_t pte) 919 919 { 920 920 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 921 921 }
+1 -1
arch/sh/include/asm/pgtable_32.h
··· 470 470 /* In both cases, we borrow bit 6 to store the exclusive marker in swap PTEs. */ 471 471 #define _PAGE_SWP_EXCLUSIVE _PAGE_USER 472 472 473 - static inline int pte_swp_exclusive(pte_t pte) 473 + static inline bool pte_swp_exclusive(pte_t pte) 474 474 { 475 475 return pte.pte_low & _PAGE_SWP_EXCLUSIVE; 476 476 }
+1 -1
arch/sparc/include/asm/pgtable_32.h
··· 348 348 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 349 349 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 350 350 351 - static inline int pte_swp_exclusive(pte_t pte) 351 + static inline bool pte_swp_exclusive(pte_t pte) 352 352 { 353 353 return pte_val(pte) & SRMMU_SWP_EXCLUSIVE; 354 354 }
+1 -1
arch/sparc/include/asm/pgtable_64.h
··· 1023 1023 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 1024 1024 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 1025 1025 1026 - static inline int pte_swp_exclusive(pte_t pte) 1026 + static inline bool pte_swp_exclusive(pte_t pte) 1027 1027 { 1028 1028 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 1029 1029 }
+1 -1
arch/um/include/asm/pgtable.h
··· 314 314 ((swp_entry_t) { pte_val(pte_mkuptodate(pte)) }) 315 315 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 316 316 317 - static inline int pte_swp_exclusive(pte_t pte) 317 + static inline bool pte_swp_exclusive(pte_t pte) 318 318 { 319 319 return pte_get_bits(pte, _PAGE_SWP_EXCLUSIVE); 320 320 }
+1 -1
arch/x86/include/asm/pgtable.h
··· 1561 1561 return pte_set_flags(pte, _PAGE_SWP_EXCLUSIVE); 1562 1562 } 1563 1563 1564 - static inline int pte_swp_exclusive(pte_t pte) 1564 + static inline bool pte_swp_exclusive(pte_t pte) 1565 1565 { 1566 1566 return pte_flags(pte) & _PAGE_SWP_EXCLUSIVE; 1567 1567 }
+24
arch/x86/kernel/smp.c
··· 299 299 .send_call_func_single_ipi = native_send_call_func_single_ipi, 300 300 }; 301 301 EXPORT_SYMBOL_GPL(smp_ops); 302 + 303 + int arch_cpu_rescan_dead_smt_siblings(void) 304 + { 305 + enum cpuhp_smt_control old = cpu_smt_control; 306 + int ret; 307 + 308 + /* 309 + * If SMT has been disabled and SMT siblings are in HLT, bring them back 310 + * online and offline them again so that they end up in MWAIT proper. 311 + * 312 + * Called with hotplug enabled. 313 + */ 314 + if (old != CPU_SMT_DISABLED && old != CPU_SMT_FORCE_DISABLED) 315 + return 0; 316 + 317 + ret = cpuhp_smt_enable(); 318 + if (ret) 319 + return ret; 320 + 321 + ret = cpuhp_smt_disable(old); 322 + 323 + return ret; 324 + } 325 + EXPORT_SYMBOL_GPL(arch_cpu_rescan_dead_smt_siblings);
+7 -47
arch/x86/kernel/smpboot.c
··· 1244 1244 local_irq_disable(); 1245 1245 } 1246 1246 1247 + /* 1248 + * We need to flush the caches before going to sleep, lest we have 1249 + * dirty data in our caches when we come back up. 1250 + */ 1247 1251 void __noreturn mwait_play_dead(unsigned int eax_hint) 1248 1252 { 1249 1253 struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead); ··· 1291 1287 native_halt(); 1292 1288 } 1293 1289 } 1294 - } 1295 - 1296 - /* 1297 - * We need to flush the caches before going to sleep, lest we have 1298 - * dirty data in our caches when we come back up. 1299 - */ 1300 - static inline void mwait_play_dead_cpuid_hint(void) 1301 - { 1302 - unsigned int eax, ebx, ecx, edx; 1303 - unsigned int highest_cstate = 0; 1304 - unsigned int highest_subcstate = 0; 1305 - int i; 1306 - 1307 - if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || 1308 - boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) 1309 - return; 1310 - if (!this_cpu_has(X86_FEATURE_MWAIT)) 1311 - return; 1312 - if (!this_cpu_has(X86_FEATURE_CLFLUSH)) 1313 - return; 1314 - 1315 - eax = CPUID_LEAF_MWAIT; 1316 - ecx = 0; 1317 - native_cpuid(&eax, &ebx, &ecx, &edx); 1318 - 1319 - /* 1320 - * eax will be 0 if EDX enumeration is not valid. 1321 - * Initialized below to cstate, sub_cstate value when EDX is valid. 
1322 - */ 1323 - if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED)) { 1324 - eax = 0; 1325 - } else { 1326 - edx >>= MWAIT_SUBSTATE_SIZE; 1327 - for (i = 0; i < 7 && edx; i++, edx >>= MWAIT_SUBSTATE_SIZE) { 1328 - if (edx & MWAIT_SUBSTATE_MASK) { 1329 - highest_cstate = i; 1330 - highest_subcstate = edx & MWAIT_SUBSTATE_MASK; 1331 - } 1332 - } 1333 - eax = (highest_cstate << MWAIT_SUBSTATE_SIZE) | 1334 - (highest_subcstate - 1); 1335 - } 1336 - 1337 - mwait_play_dead(eax); 1338 1290 } 1339 1291 1340 1292 /* ··· 1343 1383 play_dead_common(); 1344 1384 tboot_shutdown(TB_SHUTDOWN_WFS); 1345 1385 1346 - mwait_play_dead_cpuid_hint(); 1347 - if (cpuidle_play_dead()) 1348 - hlt_play_dead(); 1386 + /* Below returns only on error. */ 1387 + cpuidle_play_dead(); 1388 + hlt_play_dead(); 1349 1389 } 1350 1390 1351 1391 #else /* ... !CONFIG_HOTPLUG_CPU */
+8 -1
arch/x86/kvm/mmu/mmu.c
··· 4896 4896 { 4897 4897 u64 error_code = PFERR_GUEST_FINAL_MASK; 4898 4898 u8 level = PG_LEVEL_4K; 4899 + u64 direct_bits; 4899 4900 u64 end; 4900 4901 int r; 4901 4902 4902 4903 if (!vcpu->kvm->arch.pre_fault_allowed) 4903 4904 return -EOPNOTSUPP; 4905 + 4906 + if (kvm_is_gfn_alias(vcpu->kvm, gpa_to_gfn(range->gpa))) 4907 + return -EINVAL; 4904 4908 4905 4909 /* 4906 4910 * reload is efficient when called repeatedly, so we can do it on ··· 4914 4910 if (r) 4915 4911 return r; 4916 4912 4913 + direct_bits = 0; 4917 4914 if (kvm_arch_has_private_mem(vcpu->kvm) && 4918 4915 kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa))) 4919 4916 error_code |= PFERR_PRIVATE_ACCESS; 4917 + else 4918 + direct_bits = gfn_to_gpa(kvm_gfn_direct_bits(vcpu->kvm)); 4920 4919 4921 4920 /* 4922 4921 * Shadow paging uses GVA for kvm page fault, so restrict to 4923 4922 * two-dimensional paging. 4924 4923 */ 4925 - r = kvm_tdp_map_page(vcpu, range->gpa, error_code, &level); 4924 + r = kvm_tdp_map_page(vcpu, range->gpa | direct_bits, error_code, &level); 4926 4925 if (r < 0) 4927 4926 return r; 4928 4927
+35 -9
arch/x86/kvm/svm/sev.c
··· 2871 2871 } 2872 2872 } 2873 2873 2874 + static bool is_sev_snp_initialized(void) 2875 + { 2876 + struct sev_user_data_snp_status *status; 2877 + struct sev_data_snp_addr buf; 2878 + bool initialized = false; 2879 + int ret, error = 0; 2880 + 2881 + status = snp_alloc_firmware_page(GFP_KERNEL | __GFP_ZERO); 2882 + if (!status) 2883 + return false; 2884 + 2885 + buf.address = __psp_pa(status); 2886 + ret = sev_do_cmd(SEV_CMD_SNP_PLATFORM_STATUS, &buf, &error); 2887 + if (ret) { 2888 + pr_err("SEV: SNP_PLATFORM_STATUS failed ret=%d, fw_error=%d (%#x)\n", 2889 + ret, error, error); 2890 + goto out; 2891 + } 2892 + 2893 + initialized = !!status->state; 2894 + 2895 + out: 2896 + snp_free_firmware_page(status); 2897 + 2898 + return initialized; 2899 + } 2900 + 2874 2901 void __init sev_hardware_setup(void) 2875 2902 { 2876 2903 unsigned int eax, ebx, ecx, edx, sev_asid_count, sev_es_asid_count; ··· 3002 2975 sev_snp_supported = sev_snp_enabled && cc_platform_has(CC_ATTR_HOST_SEV_SNP); 3003 2976 3004 2977 out: 2978 + if (sev_enabled) { 2979 + init_args.probe = true; 2980 + if (sev_platform_init(&init_args)) 2981 + sev_supported = sev_es_supported = sev_snp_supported = false; 2982 + else if (sev_snp_supported) 2983 + sev_snp_supported = is_sev_snp_initialized(); 2984 + } 2985 + 3005 2986 if (boot_cpu_has(X86_FEATURE_SEV)) 3006 2987 pr_info("SEV %s (ASIDs %u - %u)\n", 3007 2988 sev_supported ? min_sev_asid <= max_sev_asid ? "enabled" : ··· 3036 3001 sev_supported_vmsa_features = 0; 3037 3002 if (sev_es_debug_swap_enabled) 3038 3003 sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP; 3039 - 3040 - if (!sev_enabled) 3041 - return; 3042 - 3043 - /* 3044 - * Do both SNP and SEV initialization at KVM module load. 3045 - */ 3046 - init_args.probe = true; 3047 - sev_platform_init(&init_args); 3048 3004 } 3049 3005 3050 3006 void sev_hardware_unsetup(void)
+5 -12
arch/x86/power/hibernate.c
··· 192 192 193 193 int arch_resume_nosmt(void) 194 194 { 195 - int ret = 0; 195 + int ret; 196 + 196 197 /* 197 198 * We reached this while coming out of hibernation. This means 198 199 * that SMT siblings are sleeping in hlt, as mwait is not safe ··· 207 206 * Called with hotplug disabled. 208 207 */ 209 208 cpu_hotplug_enable(); 210 - if (cpu_smt_control == CPU_SMT_DISABLED || 211 - cpu_smt_control == CPU_SMT_FORCE_DISABLED) { 212 - enum cpuhp_smt_control old = cpu_smt_control; 213 209 214 - ret = cpuhp_smt_enable(); 215 - if (ret) 216 - goto out; 217 - ret = cpuhp_smt_disable(old); 218 - if (ret) 219 - goto out; 220 - } 221 - out: 210 + ret = arch_cpu_rescan_dead_smt_siblings(); 211 + 222 212 cpu_hotplug_disable(); 213 + 223 214 return ret; 224 215 }
+1 -1
arch/xtensa/include/asm/pgtable.h
··· 349 349 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 350 350 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 351 351 352 - static inline int pte_swp_exclusive(pte_t pte) 352 + static inline bool pte_swp_exclusive(pte_t pte) 353 353 { 354 354 return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; 355 355 }
+13 -13
block/blk-merge.c
··· 998 998 if (!plug || rq_list_empty(&plug->mq_list)) 999 999 return false; 1000 1000 1001 - rq_list_for_each(&plug->mq_list, rq) { 1002 - if (rq->q == q) { 1003 - if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) == 1004 - BIO_MERGE_OK) 1005 - return true; 1006 - break; 1007 - } 1001 + rq = plug->mq_list.tail; 1002 + if (rq->q == q) 1003 + return blk_attempt_bio_merge(q, rq, bio, nr_segs, false) == 1004 + BIO_MERGE_OK; 1005 + else if (!plug->multiple_queues) 1006 + return false; 1008 1007 1009 - /* 1010 - * Only keep iterating plug list for merges if we have multiple 1011 - * queues 1012 - */ 1013 - if (!plug->multiple_queues) 1014 - break; 1008 + rq_list_for_each(&plug->mq_list, rq) { 1009 + if (rq->q != q) 1010 + continue; 1011 + if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) == 1012 + BIO_MERGE_OK) 1013 + return true; 1014 + break; 1015 1015 } 1016 1016 return false; 1017 1017 }
+6 -2
block/blk-zoned.c
··· 1225 1225 if (bio_flagged(bio, BIO_EMULATES_ZONE_APPEND)) { 1226 1226 bio->bi_opf &= ~REQ_OP_MASK; 1227 1227 bio->bi_opf |= REQ_OP_ZONE_APPEND; 1228 + bio_clear_flag(bio, BIO_EMULATES_ZONE_APPEND); 1228 1229 } 1229 1230 1230 1231 /* ··· 1307 1306 spin_unlock_irqrestore(&zwplug->lock, flags); 1308 1307 1309 1308 bdev = bio->bi_bdev; 1310 - submit_bio_noacct_nocheck(bio); 1311 1309 1312 1310 /* 1313 1311 * blk-mq devices will reuse the extra reference on the request queue ··· 1314 1314 * path for BIO-based devices will not do that. So drop this extra 1315 1315 * reference here. 1316 1316 */ 1317 - if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) 1317 + if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) { 1318 + bdev->bd_disk->fops->submit_bio(bio); 1318 1319 blk_queue_exit(bdev->bd_disk->queue); 1320 + } else { 1321 + blk_mq_submit_bio(bio); 1322 + } 1319 1323 1320 1324 put_zwplug: 1321 1325 /* Drop the reference we took in disk_zone_wplug_schedule_bio_work(). */
+1 -1
crypto/hkdf.c
··· 566 566 567 567 static void __exit crypto_hkdf_module_exit(void) {} 568 568 569 - module_init(crypto_hkdf_module_init); 569 + late_initcall(crypto_hkdf_module_init); 570 570 module_exit(crypto_hkdf_module_exit); 571 571 572 572 MODULE_LICENSE("GPL");
+2 -2
drivers/accel/amdxdna/aie2_psp.c
··· 126 126 psp->ddev = ddev; 127 127 memcpy(psp->psp_regs, conf->psp_regs, sizeof(psp->psp_regs)); 128 128 129 - psp->fw_buf_sz = ALIGN(conf->fw_size, PSP_FW_ALIGN) + PSP_FW_ALIGN; 130 - psp->fw_buffer = drmm_kmalloc(ddev, psp->fw_buf_sz, GFP_KERNEL); 129 + psp->fw_buf_sz = ALIGN(conf->fw_size, PSP_FW_ALIGN); 130 + psp->fw_buffer = drmm_kmalloc(ddev, psp->fw_buf_sz + PSP_FW_ALIGN, GFP_KERNEL); 131 131 if (!psp->fw_buffer) { 132 132 drm_err(ddev, "no memory for fw buffer"); 133 133 return NULL;
+1 -1
drivers/acpi/acpi_pad.c
··· 33 33 static DEFINE_MUTEX(isolated_cpus_lock); 34 34 static DEFINE_MUTEX(round_robin_lock); 35 35 36 - static unsigned long power_saving_mwait_eax; 36 + static unsigned int power_saving_mwait_eax; 37 37 38 38 static unsigned char tsc_detected_unstable; 39 39 static unsigned char tsc_marked_unstable;
+3 -6
drivers/acpi/apei/einj-core.c
··· 883 883 } 884 884 885 885 einj_dev = faux_device_create("acpi-einj", NULL, &einj_device_ops); 886 - if (!einj_dev) 887 - return -ENODEV; 888 886 889 - einj_initialized = true; 887 + if (einj_dev) 888 + einj_initialized = true; 890 889 891 890 return 0; 892 891 } 893 892 894 893 static void __exit einj_exit(void) 895 894 { 896 - if (einj_initialized) 897 - faux_device_destroy(einj_dev); 898 - 895 + faux_device_destroy(einj_dev); 899 896 } 900 897 901 898 module_init(einj_init);
+1 -1
drivers/acpi/cppc_acpi.c
··· 476 476 struct cpc_desc *cpc_ptr; 477 477 int cpu; 478 478 479 - for_each_possible_cpu(cpu) { 479 + for_each_present_cpu(cpu) { 480 480 cpc_ptr = per_cpu(cpc_desc_ptr, cpu); 481 481 desired_reg = &cpc_ptr->cpc_regs[DESIRED_PERF]; 482 482 if (!CPC_IN_SYSTEM_MEMORY(desired_reg) &&
+17
drivers/acpi/ec.c
··· 23 23 #include <linux/delay.h> 24 24 #include <linux/interrupt.h> 25 25 #include <linux/list.h> 26 + #include <linux/printk.h> 26 27 #include <linux/spinlock.h> 27 28 #include <linux/slab.h> 29 + #include <linux/string.h> 28 30 #include <linux/suspend.h> 29 31 #include <linux/acpi.h> 30 32 #include <linux/dmi.h> ··· 2030 2028 * Asus X50GL: 2031 2029 * https://bugzilla.kernel.org/show_bug.cgi?id=11880 2032 2030 */ 2031 + goto out; 2032 + } 2033 + 2034 + if (!strstarts(ecdt_ptr->id, "\\")) { 2035 + /* 2036 + * The ECDT table on some MSI notebooks contains invalid data, together 2037 + * with an empty ID string (""). 2038 + * 2039 + * Section 5.2.15 of the ACPI specification requires the ID string to be 2040 + * a "fully qualified reference to the (...) embedded controller device", 2041 + * so this string always has to start with a backslash. 2042 + * 2043 + * By verifying this we can avoid such faulty ECDT tables in a safe way. 2044 + */ 2045 + pr_err(FW_BUG "Ignoring ECDT due to invalid ID string \"%s\"\n", ecdt_ptr->id); 2033 2046 goto out; 2034 2047 } 2035 2048
+6
drivers/acpi/internal.h
··· 175 175 static inline void acpi_early_processor_control_setup(void) {} 176 176 #endif 177 177 178 + #ifdef CONFIG_ACPI_PROCESSOR_CSTATE 179 + void acpi_idle_rescan_dead_smt_siblings(void); 180 + #else 181 + static inline void acpi_idle_rescan_dead_smt_siblings(void) {} 182 + #endif 183 + 178 184 /* -------------------------------------------------------------------------- 179 185 Embedded Controller 180 186 -------------------------------------------------------------------------- */
+3
drivers/acpi/processor_driver.c
··· 279 279 * after acpi_cppc_processor_probe() has been called for all online CPUs 280 280 */ 281 281 acpi_processor_init_invariance_cppc(); 282 + 283 + acpi_idle_rescan_dead_smt_siblings(); 284 + 282 285 return 0; 283 286 err: 284 287 driver_unregister(&acpi_processor_driver);
+8
drivers/acpi/processor_idle.c
··· 24 24 #include <acpi/processor.h> 25 25 #include <linux/context_tracking.h> 26 26 27 + #include "internal.h" 28 + 27 29 /* 28 30 * Include the apic definitions for x86 to have the APIC timer related defines 29 31 * available also for UP (on SMP it gets magically included via linux/smp.h). ··· 57 55 }; 58 56 59 57 #ifdef CONFIG_ACPI_PROCESSOR_CSTATE 58 + void acpi_idle_rescan_dead_smt_siblings(void) 59 + { 60 + if (cpuidle_get_driver() == &acpi_idle_driver) 61 + arch_cpu_rescan_dead_smt_siblings(); 62 + } 63 + 60 64 static 61 65 DEFINE_PER_CPU(struct acpi_processor_cx * [CPUIDLE_STATE_MAX], acpi_cstate); 62 66
+7
drivers/acpi/resource.c
··· 667 667 }, 668 668 }, 669 669 { 670 + /* MACHENIKE L16P/L16P */ 671 + .matches = { 672 + DMI_MATCH(DMI_SYS_VENDOR, "MACHENIKE"), 673 + DMI_MATCH(DMI_BOARD_NAME, "L16P"), 674 + }, 675 + }, 676 + { 670 677 /* 671 678 * TongFang GM5HG0A in case of the SKIKK Vanaheim relabel the 672 679 * board-name is changed, so check OEM strings instead. Note
+2 -1
drivers/base/faux.c
··· 86 86 .name = "faux_driver", 87 87 .bus = &faux_bus_type, 88 88 .probe_type = PROBE_FORCE_SYNCHRONOUS, 89 + .suppress_bind_attrs = true, 89 90 }; 90 91 91 92 static void faux_device_release(struct device *dev) ··· 170 169 * successful is almost impossible to determine by the caller. 171 170 */ 172 171 if (!dev->driver) { 173 - dev_err(dev, "probe did not succeed, tearing down the device\n"); 172 + dev_dbg(dev, "probe did not succeed, tearing down the device\n"); 174 173 faux_device_destroy(faux_dev); 175 174 faux_dev = NULL; 176 175 }
+5 -6
drivers/block/loop.c
··· 1248 1248 lo->lo_flags &= ~LOOP_SET_STATUS_CLEARABLE_FLAGS; 1249 1249 lo->lo_flags |= (info->lo_flags & LOOP_SET_STATUS_SETTABLE_FLAGS); 1250 1250 1251 - if (size_changed) { 1252 - loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit, 1253 - lo->lo_backing_file); 1254 - loop_set_size(lo, new_size); 1255 - } 1256 - 1257 1251 /* update the direct I/O flag if lo_offset changed */ 1258 1252 loop_update_dio(lo); 1259 1253 ··· 1255 1261 blk_mq_unfreeze_queue(lo->lo_queue, memflags); 1256 1262 if (partscan) 1257 1263 clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state); 1264 + if (!err && size_changed) { 1265 + loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit, 1266 + lo->lo_backing_file); 1267 + loop_set_size(lo, new_size); 1268 + } 1258 1269 out_unlock: 1259 1270 mutex_unlock(&lo->lo_mutex); 1260 1271 if (partscan)
+18 -13
drivers/bluetooth/btintel_pcie.c
··· 396 396 static int btintel_pcie_start_rx(struct btintel_pcie_data *data) 397 397 { 398 398 int i, ret; 399 + struct rxq *rxq = &data->rxq; 399 400 400 - for (i = 0; i < BTINTEL_PCIE_RX_MAX_QUEUE; i++) { 401 + /* Post (BTINTEL_PCIE_RX_DESCS_COUNT - 3) buffers to overcome the 402 + * hardware issues leading to race condition at the firmware. 403 + */ 404 + 405 + for (i = 0; i < rxq->count - 3; i++) { 401 406 ret = btintel_pcie_submit_rx(data); 402 407 if (ret) 403 408 return ret; ··· 1787 1782 * + size of index * Number of queues(2) * type of index array(4) 1788 1783 * + size of context information 1789 1784 */ 1790 - total = (sizeof(struct tfd) + sizeof(struct urbd0) + sizeof(struct frbd) 1791 - + sizeof(struct urbd1)) * BTINTEL_DESCS_COUNT; 1785 + total = (sizeof(struct tfd) + sizeof(struct urbd0)) * BTINTEL_PCIE_TX_DESCS_COUNT; 1786 + total += (sizeof(struct frbd) + sizeof(struct urbd1)) * BTINTEL_PCIE_RX_DESCS_COUNT; 1792 1787 1793 1788 /* Add the sum of size of index array and size of ci struct */ 1794 1789 total += (sizeof(u16) * BTINTEL_PCIE_NUM_QUEUES * 4) + sizeof(struct ctx_info); ··· 1813 1808 data->dma_v_addr = v_addr; 1814 1809 1815 1810 /* Setup descriptor count */ 1816 - data->txq.count = BTINTEL_DESCS_COUNT; 1817 - data->rxq.count = BTINTEL_DESCS_COUNT; 1811 + data->txq.count = BTINTEL_PCIE_TX_DESCS_COUNT; 1812 + data->rxq.count = BTINTEL_PCIE_RX_DESCS_COUNT; 1818 1813 1819 1814 /* Setup tfds */ 1820 1815 data->txq.tfds_p_addr = p_addr; 1821 1816 data->txq.tfds = v_addr; 1822 1817 1823 - p_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT); 1824 - v_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT); 1818 + p_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT); 1819 + v_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT); 1825 1820 1826 1821 /* Setup urbd0 */ 1827 1822 data->txq.urbd0s_p_addr = p_addr; 1828 1823 data->txq.urbd0s = v_addr; 1829 1824 1830 - p_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT); 1831 - v_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT); 1825 + p_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT); 1826 + v_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT); 1832 1827 1833 1828 /* Setup FRBD*/ 1834 1829 data->rxq.frbds_p_addr = p_addr; 1835 1830 data->rxq.frbds = v_addr; 1836 1831 1837 - p_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT); 1838 - v_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT); 1832 + p_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT); 1833 + v_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT); 1839 1834 1840 1835 /* Setup urbd1 */ 1841 1836 data->rxq.urbd1s_p_addr = p_addr; 1842 1837 data->rxq.urbd1s = v_addr; 1843 1838 1844 - p_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT); 1845 - v_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT); 1839 + p_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT); 1840 + v_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT); 1846 1841 1847 1842 /* Setup data buffers for txq */ 1848 1843 err = btintel_pcie_setup_txq_bufs(data, &data->txq);
+5 -5
drivers/bluetooth/btintel_pcie.h
··· 154 154 /* Default interrupt timeout in msec */ 155 155 #define BTINTEL_DEFAULT_INTR_TIMEOUT_MS 3000 156 156 157 - /* The number of descriptors in TX/RX queues */ 158 - #define BTINTEL_DESCS_COUNT 16 157 + /* The number of descriptors in TX queues */ 158 + #define BTINTEL_PCIE_TX_DESCS_COUNT 32 159 + 160 + /* The number of descriptors in RX queues */ 161 + #define BTINTEL_PCIE_RX_DESCS_COUNT 64 159 162 160 163 /* Number of Queue for TX and RX 161 164 * It indicates the index of the IA(Index Array) ··· 179 176 180 177 /* Doorbell vector for TFD */ 181 178 #define BTINTEL_PCIE_TX_DB_VEC 0 182 - 183 - /* Number of pending RX requests for downlink */ 184 - #define BTINTEL_PCIE_RX_MAX_QUEUE 6 185 179 186 180 /* Doorbell vector for FRBD */ 187 181 #define BTINTEL_PCIE_RX_DB_VEC 513
+2 -2
drivers/cpufreq/rcpufreq_dt.rs
··· 26 26 } 27 27 28 28 /// Finds supply name for the CPU from DT. 29 - fn find_supply_names(dev: &Device, cpu: u32) -> Option<KVec<CString>> { 29 + fn find_supply_names(dev: &Device, cpu: cpu::CpuId) -> Option<KVec<CString>> { 30 30 // Try "cpu0" for older DTs, fallback to "cpu". 31 - let name = (cpu == 0) 31 + let name = (cpu.as_u32() == 0) 32 32 .then(|| find_supply_name_exact(dev, "cpu0")) 33 33 .flatten() 34 34 .or_else(|| find_supply_name_exact(dev, "cpu"))?;
+1 -1
drivers/dma-buf/dma-buf.c
··· 1118 1118 * Catch exporters making buffers inaccessible even when 1119 1119 * attachments preventing that exist. 1120 1120 */ 1121 - WARN_ON_ONCE(ret == EBUSY); 1121 + WARN_ON_ONCE(ret == -EBUSY); 1122 1122 if (ret) 1123 1123 return ERR_PTR(ret); 1124 1124 }
+2 -3
drivers/dma-buf/udmabuf.c
··· 264 264 ubuf->sg = NULL; 265 265 } 266 266 } else { 267 - dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents, 268 - direction); 267 + dma_sync_sgtable_for_cpu(dev, ubuf->sg, direction); 269 268 } 270 269 271 270 return ret; ··· 279 280 if (!ubuf->sg) 280 281 return -EINVAL; 281 282 282 - dma_sync_sg_for_device(dev, ubuf->sg->sgl, ubuf->sg->nents, direction); 283 + dma_sync_sgtable_for_device(dev, ubuf->sg, direction); 283 284 return 0; 284 285 } 285 286
+1 -1
drivers/gpu/drm/meson/meson_encoder_hdmi.c
··· 109 109 venc_freq /= 2; 110 110 111 111 dev_dbg(priv->dev, 112 - "vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n", 112 + "phy:%lluHz vclk=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n", 113 113 phy_freq, vclk_freq, venc_freq, hdmi_freq, 114 114 priv->venc.hdmi_use_enci); 115 115
+34 -21
drivers/gpu/drm/meson/meson_vclk.c
··· 110 110 #define HDMI_PLL_LOCK BIT(31) 111 111 #define HDMI_PLL_LOCK_G12A (3 << 30) 112 112 113 - #define PIXEL_FREQ_1000_1001(_freq) \ 114 - DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL) 115 - #define PHY_FREQ_1000_1001(_freq) \ 116 - (PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10) 113 + #define FREQ_1000_1001(_freq) DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL) 117 114 118 115 /* VID PLL Dividers */ 119 116 enum { ··· 769 772 pll_freq); 770 773 } 771 774 775 + static bool meson_vclk_freqs_are_matching_param(unsigned int idx, 776 + unsigned long long phy_freq, 777 + unsigned long long vclk_freq) 778 + { 779 + DRM_DEBUG_DRIVER("i = %d vclk_freq = %lluHz alt = %lluHz\n", 780 + idx, params[idx].vclk_freq, 781 + FREQ_1000_1001(params[idx].vclk_freq)); 782 + DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n", 783 + idx, params[idx].phy_freq, 784 + FREQ_1000_1001(params[idx].phy_freq)); 785 + 786 + /* Match strict frequency */ 787 + if (phy_freq == params[idx].phy_freq && 788 + vclk_freq == params[idx].vclk_freq) 789 + return true; 790 + 791 + /* Match 1000/1001 variant: vclk deviation has to be less than 1kHz 792 + * (drm EDID is defined in 1kHz steps, so everything smaller must be 793 + * rounding error) and the PHY freq deviation has to be less than 794 + * 10kHz (as the TMDS clock is 10 times the pixel clock, so anything 795 + * smaller must be rounding error as well). 796 + */ 797 + if (abs(vclk_freq - FREQ_1000_1001(params[idx].vclk_freq)) < 1000 && 798 + abs(phy_freq - FREQ_1000_1001(params[idx].phy_freq)) < 10000) 799 + return true; 800 + 801 + /* no match */ 802 + return false; 803 + } 804 + 772 805 enum drm_mode_status 773 806 meson_vclk_vic_supported_freq(struct meson_drm *priv, 774 807 unsigned long long phy_freq, ··· 817 790 } 818 791 819 792 for (i = 0 ; params[i].pixel_freq ; ++i) { 820 - DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n", 821 - i, params[i].pixel_freq, 822 - PIXEL_FREQ_1000_1001(params[i].pixel_freq)); 823 - DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n", 824 - i, params[i].phy_freq, 825 - PHY_FREQ_1000_1001(params[i].phy_freq)); 826 - /* Match strict frequency */ 827 - if (phy_freq == params[i].phy_freq && 828 - vclk_freq == params[i].vclk_freq) 829 - return MODE_OK; 830 - /* Match 1000/1001 variant */ 831 - if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) && 832 - vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq)) 793 + if (meson_vclk_freqs_are_matching_param(i, phy_freq, vclk_freq)) 833 794 return MODE_OK; 834 795 } ··· 1090 1075 } 1091 1076 1092 1077 for (freq = 0 ; params[freq].pixel_freq ; ++freq) { 1093 - if ((phy_freq == params[freq].phy_freq || 1094 - phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) && 1095 - (vclk_freq == params[freq].vclk_freq || 1096 - vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) { 1078 + if (meson_vclk_freqs_are_matching_param(freq, phy_freq, 1079 + vclk_freq)) { 1097 1080 if (vclk_freq != params[freq].vclk_freq) 1098 1081 vic_alternate_clock = true; 1099 1082 else
+1
drivers/gpu/drm/sitronix/Kconfig
··· 5 5 select DRM_GEM_SHMEM_HELPER 6 6 select DRM_KMS_HELPER 7 7 select REGMAP_I2C 8 + select VIDEOMODE_HELPERS 8 9 help 9 10 DRM driver for Sitronix ST7571 panels controlled over I2C. 10 11
+6 -6
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 560 560 if (ret) 561 561 return ret; 562 562 563 - ret = drm_connector_hdmi_audio_init(connector, dev->dev, 564 - &vc4_hdmi_audio_funcs, 565 - 8, false, -1); 566 - if (ret) 567 - return ret; 568 - 569 563 drm_connector_helper_add(connector, &vc4_hdmi_connector_helper_funcs); 570 564 571 565 /* ··· 2284 2290 dev_err(dev, "Could not register CPU DAI: %d\n", ret); 2285 2291 return ret; 2286 2292 } 2293 + 2294 + ret = drm_connector_hdmi_audio_init(&vc4_hdmi->connector, dev, 2295 + &vc4_hdmi_audio_funcs, 8, false, 2296 + -1); 2297 + if (ret) 2298 + return ret; 2287 2299 2288 2300 dai_link->cpus = &vc4_hdmi->audio.cpu; 2289 2301 dai_link->codecs = &vc4_hdmi->audio.codec;
+20 -4
drivers/gpu/drm/xe/xe_lrc.c
··· 941 941 * store it in the PPHSWP. 942 942 */ 943 943 #define CONTEXT_ACTIVE 1ULL 944 - static void xe_lrc_setup_utilization(struct xe_lrc *lrc) 944 + static int xe_lrc_setup_utilization(struct xe_lrc *lrc) 945 945 { 946 - u32 *cmd; 946 + u32 *cmd, *buf = NULL; 947 947 948 - cmd = lrc->bb_per_ctx_bo->vmap.vaddr; 948 + if (lrc->bb_per_ctx_bo->vmap.is_iomem) { 949 + buf = kmalloc(lrc->bb_per_ctx_bo->size, GFP_KERNEL); 950 + if (!buf) 951 + return -ENOMEM; 952 + cmd = buf; 953 + } else { 954 + cmd = lrc->bb_per_ctx_bo->vmap.vaddr; 955 + } 949 956 950 957 *cmd++ = MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET; 951 958 *cmd++ = ENGINE_ID(0).addr; ··· 973 966 974 967 *cmd++ = MI_BATCH_BUFFER_END; 975 968 969 + if (buf) { 970 + xe_map_memcpy_to(gt_to_xe(lrc->gt), &lrc->bb_per_ctx_bo->vmap, 0, 971 + buf, (cmd - buf) * sizeof(*cmd)); 972 + kfree(buf); 973 + } 974 + 976 975 xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR, 977 976 xe_bo_ggtt_addr(lrc->bb_per_ctx_bo) | 1); 978 977 978 + return 0; 979 979 } 980 980 981 981 #define PVC_CTX_ASID (0x2e + 1) ··· 1139 1125 map = __xe_lrc_start_seqno_map(lrc); 1140 1126 xe_map_write32(lrc_to_xe(lrc), &map, lrc->fence_ctx.next_seqno - 1); 1141 1127 1142 - xe_lrc_setup_utilization(lrc); 1128 + err = xe_lrc_setup_utilization(lrc); 1129 + if (err) 1130 + goto err_lrc_finish; 1143 1131 1144 1132 return 0; 1145 1133
+1 -1
drivers/gpu/drm/xe/xe_svm.c
··· 764 764 return false; 765 765 } 766 766 767 - if (range_size <= SZ_64K && !supports_4K_migration(vm->xe)) { 767 + if (range_size < SZ_64K && !supports_4K_migration(vm->xe)) { 768 768 drm_dbg(&vm->xe->drm, "Platform doesn't support SZ_4K range migration\n"); 769 769 return false; 770 770 }
+7 -5
drivers/idle/intel_idle.c
··· 152 152 int index, bool irqoff) 153 153 { 154 154 struct cpuidle_state *state = &drv->states[index]; 155 - unsigned long eax = flg2MWAIT(state->flags); 156 - unsigned long ecx = 1*irqoff; /* break on interrupt flag */ 155 + unsigned int eax = flg2MWAIT(state->flags); 156 + unsigned int ecx = 1*irqoff; /* break on interrupt flag */ 157 157 158 158 mwait_idle_with_hints(eax, ecx); 159 159 ··· 226 226 static __cpuidle int intel_idle_s2idle(struct cpuidle_device *dev, 227 227 struct cpuidle_driver *drv, int index) 228 228 { 229 - unsigned long ecx = 1; /* break on interrupt flag */ 230 229 struct cpuidle_state *state = &drv->states[index]; 231 - unsigned long eax = flg2MWAIT(state->flags); 230 + unsigned int eax = flg2MWAIT(state->flags); 231 + unsigned int ecx = 1; /* break on interrupt flag */ 232 232 233 233 if (state->flags & CPUIDLE_FLAG_INIT_XSTATE) 234 234 fpu_idle_fpregs(); ··· 2507 2507 pr_debug("Local APIC timer is reliable in %s\n", 2508 2508 boot_cpu_has(X86_FEATURE_ARAT) ? "all C-states" : "C1"); 2509 2509 2510 + arch_cpu_rescan_dead_smt_siblings(); 2511 + 2510 2512 return 0; 2511 2513 2512 2514 hp_setup_fail: ··· 2520 2518 return retval; 2521 2519 2522 2520 } 2523 - device_initcall(intel_idle_init); 2521 + subsys_initcall_sync(intel_idle_init); 2524 2522 2525 2523 /* 2526 2524 * We are not really modular, but we used to support that. Meaning we also
+2 -2
drivers/iommu/tegra-smmu.c
··· 559 559 { 560 560 unsigned int pd_index = iova_pd_index(iova); 561 561 struct tegra_smmu *smmu = as->smmu; 562 - struct tegra_pd *pd = as->pd; 562 + u32 *pd = &as->pd->val[pd_index]; 563 563 unsigned long offset = pd_index * sizeof(*pd); 564 564 565 565 /* Set the page directory entry first */ 566 - pd->val[pd_index] = value; 566 + *pd = value; 567 567 568 568 /* The flush the page directory entry from caches */ 569 569 dma_sync_single_range_for_device(smmu->dev, as->pd_dma, offset,
+1 -5
drivers/net/dsa/b53/b53_common.c
··· 2034 2034 2035 2035 b53_get_vlan_entry(dev, pvid, vl); 2036 2036 vl->members &= ~BIT(port); 2037 - if (vl->members == BIT(cpu_port)) 2038 - vl->members &= ~BIT(cpu_port); 2039 - vl->untag = vl->members; 2040 2037 b53_set_vlan_entry(dev, pvid, vl); 2041 2038 } 2042 2039 ··· 2112 2115 } 2113 2116 2114 2117 b53_get_vlan_entry(dev, pvid, vl); 2115 - vl->members |= BIT(port) | BIT(cpu_port); 2116 - vl->untag |= BIT(port) | BIT(cpu_port); 2118 + vl->members |= BIT(port); 2117 2119 b53_set_vlan_entry(dev, pvid, vl); 2118 2120 } 2119 2121 }
+2 -1
drivers/net/ethernet/airoha/airoha_regs.h
··· 614 614 RX19_DONE_INT_MASK | RX18_DONE_INT_MASK | \ 615 615 RX17_DONE_INT_MASK | RX16_DONE_INT_MASK) 616 616 617 - #define RX_DONE_INT_MASK (RX_DONE_HIGH_INT_MASK | RX_DONE_LOW_INT_MASK) 618 617 #define RX_DONE_HIGH_OFFSET fls(RX_DONE_HIGH_INT_MASK) 618 + #define RX_DONE_INT_MASK \ 619 + ((RX_DONE_HIGH_INT_MASK << RX_DONE_HIGH_OFFSET) | RX_DONE_LOW_INT_MASK) 619 620 620 621 #define INT_RX2_MASK(_n) \ 621 622 ((RX_NO_CPU_DSCP_HIGH_INT_MASK & (_n)) | \
+5 -1
drivers/net/ethernet/freescale/enetc/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 config FSL_ENETC_CORE 3 3 tristate 4 + select NXP_NETC_LIB if NXP_NTMP 4 5 help 5 6 This module supports common functionality between the PF and VF 6 7 drivers for the NXP ENETC controller. ··· 22 21 This module provides common functionalities for both ENETC and NETC 23 22 Switch, such as NETC Table Management Protocol (NTMP) 2.0, common tc 24 23 flower and debugfs interfaces and so on. 24 + 25 + config NXP_NTMP 26 + bool 25 27 26 28 config FSL_ENETC 27 29 tristate "ENETC PF driver" ··· 49 45 select FSL_ENETC_CORE 50 46 select FSL_ENETC_MDIO 51 47 select NXP_ENETC_PF_COMMON 52 - select NXP_NETC_LIB 48 + select NXP_NTMP 53 49 select PHYLINK 54 50 select DIMLIB 55 51 help
+4 -4
drivers/net/ethernet/intel/e1000/e1000_main.c
··· 477 477 478 478 cancel_delayed_work_sync(&adapter->phy_info_task); 479 479 cancel_delayed_work_sync(&adapter->fifo_stall_task); 480 - 481 - /* Only kill reset task if adapter is not resetting */ 482 - if (!test_bit(__E1000_RESETTING, &adapter->flags)) 483 - cancel_work_sync(&adapter->reset_task); 484 480 } 485 481 486 482 void e1000_down(struct e1000_adapter *adapter) ··· 1261 1265 e1000_release_manageability(adapter); 1262 1266 1263 1267 unregister_netdev(netdev); 1268 + 1269 + /* Only kill reset task if adapter is not resetting */ 1270 + if (!test_bit(__E1000_RESETTING, &adapter->flags)) 1271 + cancel_work_sync(&adapter->reset_task); 1264 1272 1265 1273 e1000_phy_hw_reset(hw); 1266 1274
+7 -4
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 1546 1546 * @vf: pointer to the VF structure 1547 1547 * @flr: VFLR was issued or not 1548 1548 * 1549 - * Returns true if the VF is in reset, resets successfully, or resets 1550 - * are disabled and false otherwise. 1549 + * Return: True if reset was performed successfully or if resets are disabled. 1550 + * False if reset is already in progress. 1551 1551 **/ 1552 1552 bool i40e_reset_vf(struct i40e_vf *vf, bool flr) 1553 1553 { ··· 1566 1566 1567 1567 /* If VF is being reset already we don't need to continue. */ 1568 1568 if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states)) 1569 - return true; 1569 + return false; 1570 1570 1571 1571 i40e_trigger_vf_reset(vf, flr); 1572 1572 ··· 4328 4328 reg = rd32(hw, I40E_GLGEN_VFLRSTAT(reg_idx)); 4329 4329 if (reg & BIT(bit_idx)) 4330 4330 /* i40e_reset_vf will clear the bit in GLGEN_VFLRSTAT */ 4331 - i40e_reset_vf(vf, true); 4331 + if (!i40e_reset_vf(vf, true)) { 4332 + /* At least one VF did not finish resetting, retry next time */ 4333 + set_bit(__I40E_VFLR_EVENT_PENDING, pf->state); 4334 + } 4332 4335 } 4333 4336 4334 4337 return 0;
+11
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 3209 3209 } 3210 3210 3211 3211 continue_reset: 3212 + /* If we are still early in the state machine, just restart. */ 3213 + if (adapter->state <= __IAVF_INIT_FAILED) { 3214 + iavf_shutdown_adminq(hw); 3215 + iavf_change_state(adapter, __IAVF_STARTUP); 3216 + iavf_startup(adapter); 3217 + queue_delayed_work(adapter->wq, &adapter->watchdog_task, 3218 + msecs_to_jiffies(30)); 3219 + netdev_unlock(netdev); 3220 + return; 3221 + } 3222 + 3212 3223 /* We don't use netif_running() because it may be true prior to 3213 3224 * ndo_open() returning, so we can't assume it means all our open 3214 3225 * tasks have finished, since we're not holding the rtnl_lock here.
+17
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
··· 79 79 return iavf_status_to_errno(status); 80 80 received_op = 81 81 (enum virtchnl_ops)le32_to_cpu(event->desc.cookie_high); 82 + 83 + if (received_op == VIRTCHNL_OP_EVENT) { 84 + struct iavf_adapter *adapter = hw->back; 85 + struct virtchnl_pf_event *vpe = 86 + (struct virtchnl_pf_event *)event->msg_buf; 87 + 88 + if (vpe->event != VIRTCHNL_EVENT_RESET_IMPENDING) 89 + continue; 90 + 91 + dev_info(&adapter->pdev->dev, "Reset indication received from the PF\n"); 92 + if (!(adapter->flags & IAVF_FLAG_RESET_PENDING)) 93 + iavf_schedule_reset(adapter, 94 + IAVF_FLAG_RESET_PENDING); 95 + 96 + return -EIO; 97 + } 98 + 82 99 if (op_to_poll == received_op) 83 100 break; 84 101 }
+1
drivers/net/ethernet/intel/ice/ice_ptp.c
··· 2299 2299 ts = ((u64)ts_hi << 32) | ts_lo; 2300 2300 system->cycles = ts; 2301 2301 system->cs_id = CSID_X86_ART; 2302 + system->use_nsecs = true; 2302 2303 2303 2304 /* Read Device source clock time */ 2304 2305 ts_lo = rd32(hw, cfg->dev_time_l[tmr_idx]);
+1 -4
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 43 43 #include "en/fs_ethtool.h" 44 44 45 45 #define LANES_UNKNOWN 0 46 - #define MAX_LANES 8 47 46 48 47 void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv, 49 48 struct ethtool_drvinfo *drvinfo) ··· 1097 1098 speed = info->speed; 1098 1099 lanes = info->lanes; 1099 1100 duplex = DUPLEX_FULL; 1100 - } else if (data_rate_oper) { 1101 + } else if (data_rate_oper) 1101 1102 speed = 100 * data_rate_oper; 1102 - lanes = MAX_LANES; 1103 - } 1104 1103 1105 1104 out: 1106 1105 link_ksettings->base.duplex = duplex;
+6 -6
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 2028 2028 return err; 2029 2029 } 2030 2030 2031 - static bool mlx5_flow_has_geneve_opt(struct mlx5e_tc_flow *flow) 2031 + static bool mlx5_flow_has_geneve_opt(struct mlx5_flow_spec *spec) 2032 2032 { 2033 - struct mlx5_flow_spec *spec = &flow->attr->parse_attr->spec; 2034 2033 void *headers_v = MLX5_ADDR_OF(fte_match_param, 2035 2034 spec->match_value, 2036 2035 misc_parameters_3); ··· 2068 2069 } 2069 2070 complete_all(&flow->del_hw_done); 2070 2071 2071 - if (mlx5_flow_has_geneve_opt(flow)) 2072 + if (mlx5_flow_has_geneve_opt(&attr->parse_attr->spec)) 2072 2073 mlx5_geneve_tlv_option_del(priv->mdev->geneve); 2073 2074 2074 2075 if (flow->decap_route) ··· 2573 2574 2574 2575 err = mlx5e_tc_tun_parse(filter_dev, priv, tmp_spec, f, match_level); 2575 2576 if (err) { 2576 - kvfree(tmp_spec); 2577 2577 NL_SET_ERR_MSG_MOD(extack, "Failed to parse tunnel attributes"); 2578 2578 netdev_warn(priv->netdev, "Failed to parse tunnel attributes"); 2579 - return err; 2579 + } else { 2580 + err = mlx5e_tc_set_attr_rx_tun(flow, tmp_spec); 2580 2581 } 2581 - err = mlx5e_tc_set_attr_rx_tun(flow, tmp_spec); 2582 + if (mlx5_flow_has_geneve_opt(tmp_spec)) 2583 + mlx5_geneve_tlv_option_del(priv->mdev->geneve); 2582 2584 kvfree(tmp_spec); 2583 2585 if (err) 2584 2586 return err;
+13 -8
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 1295 1295 ret = mlx5_eswitch_load_pf_vf_vport(esw, MLX5_VPORT_ECPF, enabled_events); 1296 1296 if (ret) 1297 1297 goto ecpf_err; 1298 - if (mlx5_core_ec_sriov_enabled(esw->dev)) { 1299 - ret = mlx5_eswitch_load_ec_vf_vports(esw, esw->esw_funcs.num_ec_vfs, 1300 - enabled_events); 1301 - if (ret) 1302 - goto ec_vf_err; 1303 - } 1298 + } 1299 + 1300 + /* Enable ECVF vports */ 1301 + if (mlx5_core_ec_sriov_enabled(esw->dev)) { 1302 + ret = mlx5_eswitch_load_ec_vf_vports(esw, 1303 + esw->esw_funcs.num_ec_vfs, 1304 + enabled_events); 1305 + if (ret) 1306 + goto ec_vf_err; 1304 1307 } 1305 1308 1306 1309 /* Enable VF vports */ ··· 1334 1331 { 1335 1332 mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs); 1336 1333 1334 + if (mlx5_core_ec_sriov_enabled(esw->dev)) 1335 + mlx5_eswitch_unload_ec_vf_vports(esw, 1336 + esw->esw_funcs.num_ec_vfs); 1337 + 1337 1338 if (mlx5_ecpf_vport_exists(esw->dev)) { 1338 - if (mlx5_core_ec_sriov_enabled(esw->dev)) 1339 - mlx5_eswitch_unload_ec_vf_vports(esw, esw->esw_funcs.num_vfs); 1340 1339 mlx5_eswitch_unload_pf_vf_vport(esw, MLX5_VPORT_ECPF); 1341 1340 } 1342 1341
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 2228 2228 struct mlx5_flow_handle *rule; 2229 2229 struct match_list *iter; 2230 2230 bool take_write = false; 2231 + bool try_again = false; 2231 2232 struct fs_fte *fte; 2232 2233 u64 version = 0; 2233 2234 int err; ··· 2293 2292 nested_down_write_ref_node(&g->node, FS_LOCK_PARENT); 2294 2293 2295 2294 if (!g->node.active) { 2295 + try_again = true; 2296 2296 up_write_ref_node(&g->node, false); 2297 2297 continue; 2298 2298 } ··· 2315 2313 tree_put_node(&fte->node, false); 2316 2314 return rule; 2317 2315 } 2318 - rule = ERR_PTR(-ENOENT); 2316 + err = try_again ? -EAGAIN : -ENOENT; 2317 + rule = ERR_PTR(err); 2319 2318 out: 2320 2319 kmem_cache_free(steering->ftes_cache, fte); 2321 2320 return rule;
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
··· 291 291 static int alloc_system_page(struct mlx5_core_dev *dev, u32 function) 292 292 { 293 293 struct device *device = mlx5_core_dma_dev(dev); 294 - int nid = dev_to_node(device); 294 + int nid = dev->priv.numa_node; 295 295 struct page *page; 296 296 u64 zero_addr = 1; 297 297 u64 addr;
+7 -7
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
··· 1370 1370 struct mlx5hws_cmd_set_fte_attr fte_attr = {0}; 1371 1371 struct mlx5hws_cmd_forward_tbl *fw_island; 1372 1372 struct mlx5hws_action *action; 1373 - u32 i /*, packet_reformat_id*/; 1374 - int ret; 1373 + int ret, last_dest_idx = -1; 1374 + u32 i; 1375 1375 1376 1376 if (num_dest <= 1) { 1377 1377 mlx5hws_err(ctx, "Action must have multiple dests\n"); ··· 1401 1401 dest_list[i].destination_id = dests[i].dest->dest_obj.obj_id; 1402 1402 fte_attr.action_flags |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; 1403 1403 fte_attr.ignore_flow_level = ignore_flow_level; 1404 - /* ToDo: In SW steering we have a handling of 'go to WIRE' 1405 - * destination here by upper layer setting 'is_wire_ft' flag 1406 - * if the destination is wire. 1407 - * This is because uplink should be last dest in the list. 1408 - */ 1404 + if (dests[i].is_wire_ft) 1405 + last_dest_idx = i; 1409 1406 break; 1410 1407 case MLX5HWS_ACTION_TYP_VPORT: 1411 1408 dest_list[i].destination_type = MLX5_FLOW_DESTINATION_TYPE_VPORT; ··· 1425 1428 goto free_dest_list; 1426 1429 } 1427 1430 } 1431 + 1432 + if (last_dest_idx != -1) 1433 + swap(dest_list[last_dest_idx], dest_list[num_dest - 1]); 1428 1434 1429 1435 fte_attr.dests_num = num_dest; 1430 1436 fte_attr.dests = dest_list;
+17 -2
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c
··· 1070 1070 struct mlx5hws_bwc_matcher *bwc_matcher = bwc_rule->bwc_matcher; 1071 1071 struct mlx5hws_bwc_complex_rule_hash_node *node, *old_node; 1072 1072 struct rhashtable *refcount_hash; 1073 - int i; 1073 + int ret, i; 1074 1074 1075 1075 bwc_rule->complex_hash_node = NULL; 1076 1076 ··· 1078 1078 if (unlikely(!node)) 1079 1079 return -ENOMEM; 1080 1080 1081 - node->tag = ida_alloc(&bwc_matcher->complex->metadata_ida, GFP_KERNEL); 1081 + ret = ida_alloc(&bwc_matcher->complex->metadata_ida, GFP_KERNEL); 1082 + if (ret < 0) 1083 + goto err_free_node; 1084 + node->tag = ret; 1085 + 1082 1086 refcount_set(&node->refcount, 1); 1083 1087 1084 1088 /* Clear match buffer - turn off all the unrelated fields ··· 1098 1094 old_node = rhashtable_lookup_get_insert_fast(refcount_hash, 1099 1095 &node->hash_node, 1100 1096 hws_refcount_hash); 1097 + if (IS_ERR(old_node)) { 1098 + ret = PTR_ERR(old_node); 1099 + goto err_free_ida; 1100 + } 1101 + 1101 1102 if (old_node) { 1102 1103 /* Rule with the same tag already exists - update refcount */ 1103 1104 refcount_inc(&old_node->refcount); ··· 1121 1112 1122 1113 bwc_rule->complex_hash_node = node; 1123 1114 return 0; 1115 + 1116 + err_free_ida: 1117 + ida_free(&bwc_matcher->complex->metadata_ida, node->tag); 1118 + err_free_node: 1119 + kfree(node); 1120 + return ret; 1124 1121 } 1125 1122 1126 1123 static void
+3
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
··· 785 785 HWS_SET_HDR(fc, match_param, IP_PROTOCOL_O, 786 786 outer_headers.ip_protocol, 787 787 eth_l3_outer.protocol_next_header); 788 + HWS_SET_HDR(fc, match_param, IP_VERSION_O, 789 + outer_headers.ip_version, 790 + eth_l3_outer.ip_version); 788 791 HWS_SET_HDR(fc, match_param, IP_TTL_O, 789 792 outer_headers.ttl_hoplimit, 790 793 eth_l3_outer.time_to_live_hop_limit);
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
··· 966 966 switch (attr->type) { 967 967 case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE: 968 968 dest_action = mlx5_fs_get_dest_action_ft(fs_ctx, dst); 969 + if (dst->dest_attr.ft->flags & 970 + MLX5_FLOW_TABLE_UPLINK_VPORT) 971 + dest_actions[num_dest_actions].is_wire_ft = true; 969 972 break; 970 973 case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE_NUM: 971 974 dest_action = mlx5_fs_get_dest_action_table_num(fs_ctx, ··· 1360 1357 pkt_reformat->fs_hws_action.pr_data = pr_data; 1361 1358 } 1362 1359 1360 + mutex_init(&pkt_reformat->fs_hws_action.lock); 1363 1361 pkt_reformat->owner = MLX5_FLOW_RESOURCE_OWNER_HWS; 1364 1362 pkt_reformat->fs_hws_action.hws_action = hws_action; 1365 1363 return 0; ··· 1507 1503 err = -ENOMEM; 1508 1504 goto release_mh; 1509 1505 } 1510 - mutex_init(&modify_hdr->fs_hws_action.lock); 1511 1506 modify_hdr->fs_hws_action.mh_data = mh_data; 1512 1507 modify_hdr->fs_hws_action.fs_pool = pool; 1513 1508 modify_hdr->owner = MLX5_FLOW_RESOURCE_OWNER_SW;
+1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
··· 213 213 struct mlx5hws_action *dest; 214 214 /* Optional reformat action */ 215 215 struct mlx5hws_action *reformat; 216 + bool is_wire_ft; 216 217 }; 217 218 218 219 /**
+34 -6
drivers/net/macsec.c
··· 247 247 return sci; 248 248 } 249 249 250 - static sci_t macsec_frame_sci(struct macsec_eth_header *hdr, bool sci_present) 250 + static sci_t macsec_active_sci(struct macsec_secy *secy) 251 251 { 252 - sci_t sci; 252 + struct macsec_rx_sc *rx_sc = rcu_dereference_bh(secy->rx_sc); 253 253 254 - if (sci_present) 254 + /* Case single RX SC */ 255 + if (rx_sc && !rcu_dereference_bh(rx_sc->next)) 256 + return (rx_sc->active) ? rx_sc->sci : 0; 257 + /* Case no RX SC or multiple */ 258 + else 259 + return 0; 260 + } 261 + 262 + static sci_t macsec_frame_sci(struct macsec_eth_header *hdr, bool sci_present, 263 + struct macsec_rxh_data *rxd) 264 + { 265 + struct macsec_dev *macsec; 266 + sci_t sci = 0; 267 + 268 + /* SC = 1 */ 269 + if (sci_present) { 255 270 memcpy(&sci, hdr->secure_channel_id, 256 271 sizeof(hdr->secure_channel_id)); 257 - else 272 + /* SC = 0; ES = 0 */ 273 + } else if ((!(hdr->tci_an & (MACSEC_TCI_ES | MACSEC_TCI_SC))) && 274 + (list_is_singular(&rxd->secys))) { 275 + /* Only one SECY should exist on this scenario */ 276 + macsec = list_first_or_null_rcu(&rxd->secys, struct macsec_dev, 277 + secys); 278 + if (macsec) 279 + return macsec_active_sci(&macsec->secy); 280 + } else { 258 281 sci = make_sci(hdr->eth.h_source, MACSEC_PORT_ES); 282 + } 259 283 260 284 return sci; 261 285 } ··· 1133 1109 struct macsec_rxh_data *rxd; 1134 1110 struct macsec_dev *macsec; 1135 1111 unsigned int len; 1136 - sci_t sci; 1112 + sci_t sci = 0; 1137 1113 u32 hdr_pn; 1138 1114 bool cbit; 1139 1115 struct pcpu_rx_sc_stats *rxsc_stats; ··· 1180 1156 1181 1157 macsec_skb_cb(skb)->has_sci = !!(hdr->tci_an & MACSEC_TCI_SC); 1182 1158 macsec_skb_cb(skb)->assoc_num = hdr->tci_an & MACSEC_AN_MASK; 1183 - sci = macsec_frame_sci(hdr, macsec_skb_cb(skb)->has_sci); 1184 1159 1185 1160 rcu_read_lock(); 1186 1161 rxd = macsec_data_rcu(skb->dev); 1162 + 1163 + sci = macsec_frame_sci(hdr, macsec_skb_cb(skb)->has_sci, rxd); 1164 + if (!sci) 1165 + goto drop_nosc; 1187 1166 1188 1167 list_for_each_entry_rcu(macsec, &rxd->secys, secys) { 1189 1168 struct macsec_rx_sc *sc = find_rx_sc(&macsec->secy, sci); ··· 1310 1283 macsec_rxsa_put(rx_sa); 1311 1284 drop_nosa: 1312 1285 macsec_rxsc_put(rx_sc); 1286 + drop_nosc: 1313 1287 rcu_read_unlock(); 1314 1288 drop_direct: 1315 1289 kfree_skb(skb);
+1 -2
drivers/net/netconsole.c
··· 1252 1252 */ 1253 1253 static int prepare_extradata(struct netconsole_target *nt) 1254 1254 { 1255 - u32 fields = SYSDATA_CPU_NR | SYSDATA_TASKNAME; 1256 1255 int extradata_len; 1257 1256 1258 1257 /* userdata was appended when configfs write helper was called ··· 1259 1260 */ 1260 1261 extradata_len = nt->userdata_length; 1261 1262 1262 - if (!(nt->sysdata_fields & fields)) 1263 + if (!nt->sysdata_fields) 1263 1264 goto out; 1264 1265 1265 1266 if (nt->sysdata_fields & SYSDATA_CPU_NR)
+2 -1
drivers/net/netdevsim/netdev.c
··· 371 371 int done; 372 372 373 373 done = nsim_rcv(rq, budget); 374 - napi_complete(napi); 374 + if (done < budget) 375 + napi_complete_done(napi, done); 375 376 376 377 return done; 377 378 }
+12
drivers/net/phy/mdio_bus.c
··· 445 445 446 446 lockdep_assert_held_once(&bus->mdio_lock); 447 447 448 + if (addr >= PHY_MAX_ADDR) 449 + return -ENXIO; 450 + 448 451 if (bus->read) 449 452 retval = bus->read(bus, addr, regnum); 450 453 else ··· 476 473 int err; 477 474 478 475 lockdep_assert_held_once(&bus->mdio_lock); 476 + 477 + if (addr >= PHY_MAX_ADDR) 478 + return -ENXIO; 479 479 480 480 if (bus->write) 481 481 err = bus->write(bus, addr, regnum, val); ··· 541 535 542 536 lockdep_assert_held_once(&bus->mdio_lock); 543 537 538 + if (addr >= PHY_MAX_ADDR) 539 + return -ENXIO; 540 + 544 541 if (bus->read_c45) 545 542 retval = bus->read_c45(bus, addr, devad, regnum); 546 543 else ··· 574 565 int err; 575 566 576 567 lockdep_assert_held_once(&bus->mdio_lock); 568 + 569 + if (addr >= PHY_MAX_ADDR) 570 + return -ENXIO; 577 571 578 572 if (bus->write_c45) 579 573 err = bus->write_c45(bus, addr, devad, regnum, val);
+12 -6
drivers/net/phy/phy_caps.c
··· 188 188 * When @exact is not set, we return either an exact match, or matching capabilities 189 189 * at lower speed, or the lowest matching speed, or NULL. 190 190 * 191 + * Non-exact matches will try to return an exact speed and duplex match, but may 192 + * return matching capabilities with same speed but a different duplex. 193 + * 191 194 * Returns: a matched link_capabilities according to the above process, NULL 192 195 * otherwise. 193 196 */ ··· 198 195 phy_caps_lookup(int speed, unsigned int duplex, const unsigned long *supported, 199 196 bool exact) 200 197 { 201 - const struct link_capabilities *lcap, *last = NULL; 198 + const struct link_capabilities *lcap, *match = NULL, *last = NULL; 202 199 203 200 for_each_link_caps_desc_speed(lcap) { 204 201 if (linkmode_intersects(lcap->linkmodes, supported)) { ··· 207 204 if (lcap->speed == speed && lcap->duplex == duplex) { 208 205 return lcap; 209 206 } else if (!exact) { 210 - if (lcap->speed <= speed) 211 - return lcap; 207 + if (!match && lcap->speed <= speed) 208 + match = lcap; 209 + 210 + if (lcap->speed < speed) 211 + break; 212 212 } 213 213 } 214 214 } 215 215 216 - if (!exact) 217 - return last; 216 + if (!match && !exact) 217 + match = last; 218 218 219 - return NULL; 219 + return match; 220 220 } 221 221 EXPORT_SYMBOL_GPL(phy_caps_lookup); 222 222
+1
drivers/net/usb/r8152.c
··· 10054 10054 { USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041) }, 10055 10055 { USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff) }, 10056 10056 { USB_DEVICE(VENDOR_ID_TPLINK, 0x0601) }, 10057 + { USB_DEVICE(VENDOR_ID_TPLINK, 0x0602) }, 10057 10058 { USB_DEVICE(VENDOR_ID_DLINK, 0xb301) }, 10058 10059 { USB_DEVICE(VENDOR_ID_DELL, 0xb097) }, 10059 10060 { USB_DEVICE(VENDOR_ID_ASUS, 0x1976) },
+2 -2
drivers/net/veth.c
··· 909 909 910 910 /* NAPI functions as RCU section */ 911 911 peer_dev = rcu_dereference_check(priv->peer, rcu_read_lock_bh_held()); 912 - peer_txq = netdev_get_tx_queue(peer_dev, queue_idx); 912 + peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL; 913 913 914 914 for (i = 0; i < budget; i++) { 915 915 void *ptr = __ptr_ring_consume(&rq->xdp_ring); ··· 959 959 rq->stats.vs.xdp_packets += done; 960 960 u64_stats_update_end(&rq->stats.syncp); 961 961 962 - if (unlikely(netif_tx_queue_stopped(peer_txq))) 962 + if (peer_txq && unlikely(netif_tx_queue_stopped(peer_txq))) 963 963 netif_tx_wake_queue(peer_txq); 964 964 965 965 return done;
+25 -8
drivers/net/wireless/ath/ath10k/mac.c
··· 4 4 * Copyright (c) 2011-2017 Qualcomm Atheros, Inc. 5 5 * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved. 6 6 * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved. 7 + * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. 7 8 */ 8 9 9 10 #include "mac.h" ··· 1021 1020 return -ETIMEDOUT; 1022 1021 1023 1022 return ar->last_wmi_vdev_start_status; 1023 + } 1024 + 1025 + static inline int ath10k_vdev_delete_sync(struct ath10k *ar) 1026 + { 1027 + unsigned long time_left; 1028 + 1029 + lockdep_assert_held(&ar->conf_mutex); 1030 + 1031 + if (!test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map)) 1032 + return 0; 1033 + 1034 + if (test_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags)) 1035 + return -ESHUTDOWN; 1036 + 1037 + time_left = wait_for_completion_timeout(&ar->vdev_delete_done, 1038 + ATH10K_VDEV_DELETE_TIMEOUT_HZ); 1039 + if (time_left == 0) 1040 + return -ETIMEDOUT; 1041 + 1042 + return 0; 1024 1043 } 1025 1044 1026 1045 static int ath10k_monitor_vdev_start(struct ath10k *ar, int vdev_id) ··· 5921 5900 struct ath10k *ar = hw->priv; 5922 5901 struct ath10k_vif *arvif = (void *)vif->drv_priv; 5923 5902 struct ath10k_peer *peer; 5924 - unsigned long time_left; 5925 5903 int ret; 5926 5904 int i; 5927 5905 ··· 5960 5940 ath10k_warn(ar, "failed to delete WMI vdev %i: %d\n", 5961 5941 arvif->vdev_id, ret); 5962 5942 5963 - if (test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map)) { 5964 - time_left = wait_for_completion_timeout(&ar->vdev_delete_done, 5965 - ATH10K_VDEV_DELETE_TIMEOUT_HZ); 5966 - if (time_left == 0) { 5967 - ath10k_warn(ar, "Timeout in receiving vdev delete response\n"); 5968 - goto out; 5969 - } 5943 + ret = ath10k_vdev_delete_sync(ar); 5944 + if (ret) { 5945 + ath10k_warn(ar, "Error in receiving vdev delete response: %d\n", ret); 5946 + goto out; 5970 5947 } 5971 5948 5972 5949 /* Some firmware revisions don't notify host about self-peer removal
+3 -1
drivers/net/wireless/ath/ath10k/snoc.c
··· 938 938 939 939 dev_set_threaded(ar->napi_dev, true); 940 940 ath10k_core_napi_enable(ar); 941 - ath10k_snoc_irq_enable(ar); 941 + /* IRQs are left enabled when we restart due to a firmware crash */ 942 + if (!test_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags)) 943 + ath10k_snoc_irq_enable(ar); 942 944 ath10k_snoc_rx_post(ar); 943 945 944 946 clear_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags);
+15 -14
drivers/net/wireless/ath/ath11k/core.c
··· 990 990 INIT_LIST_HEAD(&ar->fw_stats.bcn); 991 991 992 992 init_completion(&ar->fw_stats_complete); 993 + init_completion(&ar->fw_stats_done); 993 994 } 994 995 995 996 void ath11k_fw_stats_free(struct ath11k_fw_stats *stats) ··· 2135 2134 { 2136 2135 int ret; 2137 2136 2137 + switch (ath11k_crypto_mode) { 2138 + case ATH11K_CRYPT_MODE_SW: 2139 + set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags); 2140 + set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags); 2141 + break; 2142 + case ATH11K_CRYPT_MODE_HW: 2143 + clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags); 2144 + clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags); 2145 + break; 2146 + default: 2147 + ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode); 2148 + return -EINVAL; 2149 + } 2150 + 2138 2151 ret = ath11k_core_start_firmware(ab, ab->fw_mode); 2139 2152 if (ret) { 2140 2153 ath11k_err(ab, "failed to start firmware: %d\n", ret); ··· 2165 2150 if (ret) { 2166 2151 ath11k_err(ab, "failed to init DP: %d\n", ret); 2167 2152 goto err_firmware_stop; 2168 - } 2169 - 2170 - switch (ath11k_crypto_mode) { 2171 - case ATH11K_CRYPT_MODE_SW: 2172 - set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags); 2173 - set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags); 2174 - break; 2175 - case ATH11K_CRYPT_MODE_HW: 2176 - clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags); 2177 - clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags); 2178 - break; 2179 - default: 2180 - ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode); 2181 - return -EINVAL; 2182 2153 } 2183 2154 2184 2155 if (ath11k_frame_mode == ATH11K_HW_TXRX_RAW)
+3 -1
drivers/net/wireless/ath/ath11k/core.h
··· 600 600 struct list_head pdevs; 601 601 struct list_head vdevs; 602 602 struct list_head bcn; 603 + u32 num_vdev_recvd; 604 + u32 num_bcn_recvd; 603 605 }; 604 606 605 607 struct ath11k_dbg_htt_stats { ··· 786 784 u8 alpha2[REG_ALPHA2_LEN + 1]; 787 785 struct ath11k_fw_stats fw_stats; 788 786 struct completion fw_stats_complete; 789 - bool fw_stats_done; 787 + struct completion fw_stats_done; 790 788 791 789 /* protected by conf_mutex */ 792 790 bool ps_state_enable;
+13 -135
drivers/net/wireless/ath/ath11k/debugfs.c
··· 1 1 // SPDX-License-Identifier: BSD-3-Clause-Clear 2 2 /* 3 3 * Copyright (c) 2018-2020 The Linux Foundation. All rights reserved. 4 - * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved. 4 + * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved. 5 5 */ 6 6 7 7 #include <linux/vmalloc.h> ··· 93 93 spin_unlock_bh(&dbr_data->lock); 94 94 } 95 95 96 - static void ath11k_debugfs_fw_stats_reset(struct ath11k *ar) 97 - { 98 - spin_lock_bh(&ar->data_lock); 99 - ar->fw_stats_done = false; 100 - ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs); 101 - ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs); 102 - spin_unlock_bh(&ar->data_lock); 103 - } 104 - 105 96 void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats) 106 97 { 107 98 struct ath11k_base *ab = ar->ab; 108 - struct ath11k_pdev *pdev; 109 - bool is_end; 110 - static unsigned int num_vdev, num_bcn; 111 - size_t total_vdevs_started = 0; 112 - int i; 99 + bool is_end = true; 113 100 114 - /* WMI_REQUEST_PDEV_STAT request has been already processed */ 115 - 116 - if (stats->stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) { 117 - ar->fw_stats_done = true; 118 - return; 119 - } 120 - 121 - if (stats->stats_id == WMI_REQUEST_VDEV_STAT) { 122 - if (list_empty(&stats->vdevs)) { 123 - ath11k_warn(ab, "empty vdev stats"); 124 - return; 125 - } 126 - /* FW sends all the active VDEV stats irrespective of PDEV, 127 - * hence limit until the count of all VDEVs started 128 - */ 129 - for (i = 0; i < ab->num_radios; i++) { 130 - pdev = rcu_dereference(ab->pdevs_active[i]); 131 - if (pdev && pdev->ar) 132 - total_vdevs_started += ar->num_started_vdevs; 133 - } 134 - 135 - is_end = ((++num_vdev) == total_vdevs_started); 136 - 137 - list_splice_tail_init(&stats->vdevs, 138 - &ar->fw_stats.vdevs); 139 - 140 - if (is_end) { 141 - ar->fw_stats_done = true; 142 - num_vdev = 0; 143 - } 144 - return; 145 - } 146 - 101 + /* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_RSSI_PER_CHAIN_STAT and 102 + * WMI_REQUEST_VDEV_STAT requests have been already processed. 103 + */ 147 104 if (stats->stats_id == WMI_REQUEST_BCN_STAT) { 148 105 if (list_empty(&stats->bcn)) { 149 106 ath11k_warn(ab, "empty bcn stats"); ··· 109 152 /* Mark end until we reached the count of all started VDEVs 110 153 * within the PDEV 111 154 */ 112 - is_end = ((++num_bcn) == ar->num_started_vdevs); 155 + if (ar->num_started_vdevs) 156 + is_end = ((++ar->fw_stats.num_bcn_recvd) == 157 + ar->num_started_vdevs); 113 158 114 159 list_splice_tail_init(&stats->bcn, 115 160 &ar->fw_stats.bcn); 116 161 117 - if (is_end) { 118 - ar->fw_stats_done = true; 119 - num_bcn = 0; 120 - } 162 + if (is_end) 163 + complete(&ar->fw_stats_done); 121 164 } 122 - } 123 - 124 - static int ath11k_debugfs_fw_stats_request(struct ath11k *ar, 125 - struct stats_request_params *req_param) 126 - { 127 - struct ath11k_base *ab = ar->ab; 128 - unsigned long timeout, time_left; 129 - int ret; 130 - 131 - lockdep_assert_held(&ar->conf_mutex); 132 - 133 - /* FW stats can get split when exceeding the stats data buffer limit. 134 - * In that case, since there is no end marking for the back-to-back 135 - * received 'update stats' event, we keep a 3 seconds timeout in case, 136 - * fw_stats_done is not marked yet 137 - */ 138 - timeout = jiffies + secs_to_jiffies(3); 139 - 140 - ath11k_debugfs_fw_stats_reset(ar); 141 - 142 - reinit_completion(&ar->fw_stats_complete); 143 - 144 - ret = ath11k_wmi_send_stats_request_cmd(ar, req_param); 145 - 146 - if (ret) { 147 - ath11k_warn(ab, "could not request fw stats (%d)\n", 148 - ret); 149 - return ret; 150 - } 151 - 152 - time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ); 153 - 154 - if (!time_left) 155 - return -ETIMEDOUT; 156 - 157 - for (;;) { 158 - if (time_after(jiffies, timeout)) 159 - break; 160 - 161 - spin_lock_bh(&ar->data_lock); 162 - if (ar->fw_stats_done) { 163 - spin_unlock_bh(&ar->data_lock); 164 - break; 165 - } 166 - spin_unlock_bh(&ar->data_lock); 167 - } 168 - return 0; 169 - } 170 - 171 - int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id, 172 - u32 vdev_id, u32 stats_id) 173 - { 174 - struct ath11k_base *ab = ar->ab; 175 - struct stats_request_params req_param; 176 - int ret; 177 - 178 - mutex_lock(&ar->conf_mutex); 179 - 180 - if (ar->state != ATH11K_STATE_ON) { 181 - ret = -ENETDOWN; 182 - goto err_unlock; 183 - } 184 - 185 - req_param.pdev_id = pdev_id; 186 - req_param.vdev_id = vdev_id; 187 - req_param.stats_id = stats_id; 188 - 189 - ret = ath11k_debugfs_fw_stats_request(ar, &req_param); 190 - if (ret) 191 - ath11k_warn(ab, "failed to request fw stats: %d\n", ret); 192 - 193 - ath11k_dbg(ab, ATH11K_DBG_WMI, 194 - "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n", 195 - pdev_id, vdev_id, stats_id); 196 - 197 - err_unlock: 198 - mutex_unlock(&ar->conf_mutex); 199 - 200 - return ret; 201 165 } 202 166 203 167 static int ath11k_open_pdev_stats(struct inode *inode, struct file *file) ··· 146 268 req_param.vdev_id = 0; 147 269 req_param.stats_id = WMI_REQUEST_PDEV_STAT; 148 270 149 - ret = ath11k_debugfs_fw_stats_request(ar, &req_param); 271 + ret = ath11k_mac_fw_stats_request(ar, &req_param); 150 272 if (ret) { 151 273 ath11k_warn(ab, "failed to request fw pdev stats: %d\n", ret); 152 274 goto err_free; ··· 217 339 req_param.vdev_id = 0; 218 340 req_param.stats_id = WMI_REQUEST_VDEV_STAT; 219 341 220 - ret = ath11k_debugfs_fw_stats_request(ar, &req_param); 342 + ret = ath11k_mac_fw_stats_request(ar, &req_param); 221 343 if (ret) { 222 344 ath11k_warn(ar->ab, "failed to request fw vdev stats: %d\n", ret); 223 345 goto err_free; ··· 293 415 continue; 294 416 295 417 req_param.vdev_id = arvif->vdev_id; 296 - ret = ath11k_debugfs_fw_stats_request(ar, &req_param); 418 + ret = ath11k_mac_fw_stats_request(ar, &req_param); 297 419 if (ret) { 298 420 ath11k_warn(ar->ab, "failed to request fw bcn stats: %d\n", ret); 299 421 goto err_free;
+1 -9
drivers/net/wireless/ath/ath11k/debugfs.h
··· 1 1 /* SPDX-License-Identifier: BSD-3-Clause-Clear */ 2 2 /* 3 3 * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved. 4 - * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved. 4 + * Copyright (c) 2021-2022, 2025 Qualcomm Innovation Center, Inc. All rights reserved. 5 5 */ 6 6 7 7 #ifndef _ATH11K_DEBUGFS_H_ ··· 273 273 void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats); 274 274 275 275 void ath11k_debugfs_fw_stats_init(struct ath11k *ar); 276 - int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id, 277 - u32 vdev_id, u32 stats_id); 278 276 279 277 static inline bool ath11k_debugfs_is_pktlog_lite_mode_enabled(struct ath11k *ar) 280 278 { ··· 375 377 } 376 378 377 379 static inline int ath11k_debugfs_rx_filter(struct ath11k *ar) 378 - { 379 - return 0; 380 - } 381 - 382 - static inline int ath11k_debugfs_get_fw_stats(struct ath11k *ar, 383 - u32 pdev_id, u32 vdev_id, u32 stats_id) 384 380 { 385 381 return 0; 386 382 }
+83 -44
drivers/net/wireless/ath/ath11k/mac.c
··· 8997 8997 } 8998 8998 } 8999 8999 9000 + static void ath11k_mac_fw_stats_reset(struct ath11k *ar) 9001 + { 9002 + spin_lock_bh(&ar->data_lock); 9003 + ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs); 9004 + ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs); 9005 + ar->fw_stats.num_vdev_recvd = 0; 9006 + ar->fw_stats.num_bcn_recvd = 0; 9007 + spin_unlock_bh(&ar->data_lock); 9008 + } 9009 + 9010 + int ath11k_mac_fw_stats_request(struct ath11k *ar, 9011 + struct stats_request_params *req_param) 9012 + { 9013 + struct ath11k_base *ab = ar->ab; 9014 + unsigned long time_left; 9015 + int ret; 9016 + 9017 + lockdep_assert_held(&ar->conf_mutex); 9018 + 9019 + ath11k_mac_fw_stats_reset(ar); 9020 + 9021 + reinit_completion(&ar->fw_stats_complete); 9022 + reinit_completion(&ar->fw_stats_done); 9023 + 9024 + ret = ath11k_wmi_send_stats_request_cmd(ar, req_param); 9025 + 9026 + if (ret) { 9027 + ath11k_warn(ab, "could not request fw stats (%d)\n", 9028 + ret); 9029 + return ret; 9030 + } 9031 + 9032 + time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ); 9033 + if (!time_left) 9034 + return -ETIMEDOUT; 9035 + 9036 + /* FW stats can get split when exceeding the stats data buffer limit. 9037 + * In that case, since there is no end marking for the back-to-back 9038 + * received 'update stats' event, we keep a 3 seconds timeout in case, 9039 + * fw_stats_done is not marked yet 9040 + */ 9041 + time_left = wait_for_completion_timeout(&ar->fw_stats_done, 3 * HZ); 9042 + if (!time_left) 9043 + return -ETIMEDOUT; 9044 + 9045 + return 0; 9046 + } 9047 + 9048 + static int ath11k_mac_get_fw_stats(struct ath11k *ar, u32 pdev_id, 9049 + u32 vdev_id, u32 stats_id) 9050 + { 9051 + struct ath11k_base *ab = ar->ab; 9052 + struct stats_request_params req_param; 9053 + int ret; 9054 + 9055 + lockdep_assert_held(&ar->conf_mutex); 9056 + 9057 + if (ar->state != ATH11K_STATE_ON) 9058 + return -ENETDOWN; 9059 + 9060 + req_param.pdev_id = pdev_id; 9061 + req_param.vdev_id = vdev_id; 9062 + req_param.stats_id = stats_id; 9063 + 9064 + ret = ath11k_mac_fw_stats_request(ar, &req_param); 9065 + if (ret) 9066 + ath11k_warn(ab, "failed to request fw stats: %d\n", ret); 9067 + 9068 + ath11k_dbg(ab, ATH11K_DBG_WMI, 9069 + "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n", 9070 + pdev_id, vdev_id, stats_id); 9071 + 9072 + return ret; 9073 + } 9074 + 9000 9075 static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw, 9001 9076 struct ieee80211_vif *vif, 9002 9077 struct ieee80211_sta *sta, ··· 9106 9031 9107 9032 ath11k_mac_put_chain_rssi(sinfo, arsta, "ppdu", false); 9108 9033 9034 + mutex_lock(&ar->conf_mutex); 9109 9035 if (!(sinfo->filled & BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL)) && 9110 9036 arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA && 9111 9037 ar->ab->hw_params.supports_rssi_stats && 9112 - !ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0, 9113 - WMI_REQUEST_RSSI_PER_CHAIN_STAT)) { 9038 + !ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0, 9039 + WMI_REQUEST_RSSI_PER_CHAIN_STAT)) { 9114 9040 ath11k_mac_put_chain_rssi(sinfo, arsta, "fw stats", true); 9115 9041 } 9116 9042 ··· 9119 9043 if (!signal && 9120 9044 arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA && 9121 9045 ar->ab->hw_params.supports_rssi_stats && 9122 - !(ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0, 9123 - WMI_REQUEST_VDEV_STAT))) 9046 + !(ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0, 9047 + WMI_REQUEST_VDEV_STAT))) 9124 9048 signal = arsta->rssi_beacon; 9049 + mutex_unlock(&ar->conf_mutex); 9125 9050 9126 9051 ath11k_dbg(ar->ab, ATH11K_DBG_MAC, 9127 9052 "sta statistics db2dbm %u rssi comb %d rssi beacon %d\n", ··· 9457 9380 return ret; 9458 9381 } 9459 9382 9460 - static int ath11k_fw_stats_request(struct ath11k *ar, 9461 - struct stats_request_params *req_param) 9462 - { 9463 - struct ath11k_base *ab = ar->ab; 9464 - unsigned long time_left; 9465 - int ret; 9466 - 9467 - lockdep_assert_held(&ar->conf_mutex); 9468 - 9469 - spin_lock_bh(&ar->data_lock); 9470 - ar->fw_stats_done = false; 9471 - ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs); 9472 - spin_unlock_bh(&ar->data_lock); 9473 - 9474 - reinit_completion(&ar->fw_stats_complete); 9475 - 9476 - ret = ath11k_wmi_send_stats_request_cmd(ar, req_param); 9477 - if (ret) { 9478 - ath11k_warn(ab, "could not request fw stats (%d)\n", 9479 - ret); 9480 - return ret; 9481 - } 9482 - 9483 - time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 9484 - 1 * HZ); 9485 - 9486 - if (!time_left) 9487 - return -ETIMEDOUT; 9488 - 9489 - return 0; 9490 - } 9491 - 9492 9383 static int ath11k_mac_op_get_txpower(struct ieee80211_hw *hw, 9493 9384 struct ieee80211_vif *vif, 9494 9385 unsigned int link_id, ··· 9464 9419 { 9465 9420 struct ath11k *ar = hw->priv; 9466 9421 struct ath11k_base *ab = ar->ab; 9467 - struct stats_request_params req_param = {0}; 9468 9422 struct ath11k_fw_stats_pdev *pdev; 9469 9423 int ret; 9470 9424 ··· 9475 9431 */ 9476 9432 mutex_lock(&ar->conf_mutex); 9477 9433 9478 - if (ar->state != ATH11K_STATE_ON) 9479 - goto err_fallback; 9480 - 9481 9434 /* Firmware doesn't provide Tx power during CAC hence no need to fetch 9482 9435 * the stats. 9483 9436 */ ··· 9483 9442 return -EAGAIN; 9484 9443 } 9485 9444 9486 - req_param.pdev_id = ar->pdev->pdev_id; 9487 - req_param.stats_id = WMI_REQUEST_PDEV_STAT; 9488 - 9489 - ret = ath11k_fw_stats_request(ar, &req_param); 9445 + ret = ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0, 9446 + WMI_REQUEST_PDEV_STAT); 9490 9447 if (ret) { 9491 9448 ath11k_warn(ab, "failed to request fw pdev stats: %d\n", ret); 9492 9449 goto err_fallback;
+3 -1
drivers/net/wireless/ath/ath11k/mac.h
··· 1 1 /* SPDX-License-Identifier: BSD-3-Clause-Clear */ 2 2 /* 3 3 * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved. 4 - * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. 4 + * Copyright (c) 2021-2023, 2025 Qualcomm Innovation Center, Inc. All rights reserved. 5 5 */ 6 6 7 7 #ifndef ATH11K_MAC_H ··· 179 179 void ath11k_mac_fill_reg_tpc_info(struct ath11k *ar, 180 180 struct ieee80211_vif *vif, 181 181 struct ieee80211_chanctx_conf *ctx); 182 + int ath11k_mac_fw_stats_request(struct ath11k *ar, 183 + struct stats_request_params *req_param); 182 184 #endif
+43 -6
drivers/net/wireless/ath/ath11k/wmi.c
··· 8158 8158 static void ath11k_update_stats_event(struct ath11k_base *ab, struct sk_buff *skb) 8159 8159 { 8160 8160 struct ath11k_fw_stats stats = {}; 8161 + size_t total_vdevs_started = 0; 8162 + struct ath11k_pdev *pdev; 8163 + bool is_end = true; 8164 + int i; 8165 + 8161 8166 struct ath11k *ar; 8162 8167 int ret; 8163 8168 ··· 8189 8184 8190 8185 spin_lock_bh(&ar->data_lock); 8191 8186 8192 - /* WMI_REQUEST_PDEV_STAT can be requested via .get_txpower mac ops or via 8187 + /* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_VDEV_STAT and 8188 + * WMI_REQUEST_RSSI_PER_CHAIN_STAT can be requested via mac ops or via 8193 8189 * debugfs fw stats. Therefore, processing it separately. 8194 8190 */ 8195 8191 if (stats.stats_id == WMI_REQUEST_PDEV_STAT) { 8196 8192 list_splice_tail_init(&stats.pdevs, &ar->fw_stats.pdevs); 8197 - ar->fw_stats_done = true; 8193 + complete(&ar->fw_stats_done); 8198 8194 goto complete; 8199 8195 } 8200 8196 8201 - /* WMI_REQUEST_VDEV_STAT, WMI_REQUEST_BCN_STAT and WMI_REQUEST_RSSI_PER_CHAIN_STAT 8202 - * are currently requested only via debugfs fw stats. Hence, processing these 8203 - * in debugfs context 8197 + if (stats.stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) { 8198 + complete(&ar->fw_stats_done); 8199 + goto complete; 8200 + } 8201 + 8202 + if (stats.stats_id == WMI_REQUEST_VDEV_STAT) { 8203 + if (list_empty(&stats.vdevs)) { 8204 + ath11k_warn(ab, "empty vdev stats"); 8205 + goto complete; 8206 + } 8207 + /* FW sends all the active VDEV stats irrespective of PDEV, 8208 + * hence limit until the count of all VDEVs started 8209 + */ 8210 + for (i = 0; i < ab->num_radios; i++) { 8211 + pdev = rcu_dereference(ab->pdevs_active[i]); 8212 + if (pdev && pdev->ar) 8213 + total_vdevs_started += ar->num_started_vdevs; 8214 + } 8215 + 8216 + if (total_vdevs_started) 8217 + is_end = ((++ar->fw_stats.num_vdev_recvd) == 8218 + total_vdevs_started); 8219 + 8220 + list_splice_tail_init(&stats.vdevs, 8221 + &ar->fw_stats.vdevs); 8222 + 8223 + if (is_end) 8224 + complete(&ar->fw_stats_done); 8225 + 8226 + goto complete; 8227 + } 8228 + 8229 + /* WMI_REQUEST_BCN_STAT is currently requested only via debugfs fw stats. 8230 + * Hence, processing it in debugfs context 8204 8231 */ 8205 8232 ath11k_debugfs_fw_stats_process(ar, &stats); 8206 8233 8207 8234 complete: 8208 8235 complete(&ar->fw_stats_complete); 8209 - rcu_read_unlock(); 8210 8236 spin_unlock_bh(&ar->data_lock); 8237 + rcu_read_unlock(); 8211 8238 8212 8239 /* Since the stats's pdev, vdev and beacon list are spliced and reinitialised 8213 8240 * at this point, no need to free the individual list.
+7 -3
drivers/net/wireless/ath/ath12k/core.c
··· 2129 2129 if (!ag) { 2130 2130 mutex_unlock(&ath12k_hw_group_mutex); 2131 2131 ath12k_warn(ab, "unable to get hw group\n"); 2132 - return -ENODEV; 2132 + ret = -ENODEV; 2133 + goto err_unregister_notifier; 2133 2134 } 2134 2135 2135 2136 mutex_unlock(&ath12k_hw_group_mutex); ··· 2145 2144 if (ret) { 2146 2145 mutex_unlock(&ag->mutex); 2147 2146 ath12k_warn(ab, "unable to create hw group\n"); 2148 - goto err; 2147 + goto err_destroy_hw_group; 2149 2148 } 2150 2149 } 2151 2150 ··· 2153 2152 2154 2153 return 0; 2155 2154 2156 - err: 2155 + err_destroy_hw_group: 2157 2156 ath12k_core_hw_group_destroy(ab->ag); 2158 2157 ath12k_core_hw_group_unassign(ab); 2158 + err_unregister_notifier: 2159 + ath12k_core_panic_notifier_unregister(ab); 2160 + 2159 2161 return ret; 2160 2162 } 2161 2163
+2 -1
drivers/net/wireless/ath/ath12k/hal.h
··· 585 585 * or cache was blocked 586 586 * @HAL_REO_CMD_FAILED: Command execution failed, could be due to 587 587 * invalid queue desc 588 - * @HAL_REO_CMD_RESOURCE_BLOCKED: 588 + * @HAL_REO_CMD_RESOURCE_BLOCKED: Command could not be executed because 589 + * one or more descriptors were blocked 589 590 * @HAL_REO_CMD_DRAIN: 590 591 */ 591 592 enum hal_reo_cmd_status {
+6
drivers/net/wireless/ath/ath12k/hw.c
··· 951 951 .hal_umac_ce0_dest_reg_base = 0x01b81000, 952 952 .hal_umac_ce1_src_reg_base = 0x01b82000, 953 953 .hal_umac_ce1_dest_reg_base = 0x01b83000, 954 + 955 + .gcc_gcc_pcie_hot_rst = 0x1e38338, 954 956 }; 955 957 956 958 static const struct ath12k_hw_regs qcn9274_v2_regs = { ··· 1044 1042 .hal_umac_ce0_dest_reg_base = 0x01b81000, 1045 1043 .hal_umac_ce1_src_reg_base = 0x01b82000, 1046 1044 .hal_umac_ce1_dest_reg_base = 0x01b83000, 1045 + 1046 + .gcc_gcc_pcie_hot_rst = 0x1e38338, 1047 1047 }; 1048 1048 1049 1049 static const struct ath12k_hw_regs ipq5332_regs = { ··· 1219 1215 .hal_umac_ce0_dest_reg_base = 0x01b81000, 1220 1216 .hal_umac_ce1_src_reg_base = 0x01b82000, 1221 1217 .hal_umac_ce1_dest_reg_base = 0x01b83000, 1218 + 1219 + .gcc_gcc_pcie_hot_rst = 0x1e40304, 1222 1220 }; 1223 1221 1224 1222 static const struct ath12k_hw_hal_params ath12k_hw_hal_params_qcn9274 = {
+2
drivers/net/wireless/ath/ath12k/hw.h
··· 375 375 u32 hal_reo_cmd_ring_base; 376 376 377 377 u32 hal_reo_status_ring_base; 378 + 379 + u32 gcc_gcc_pcie_hot_rst; 378 380 }; 379 381 380 382 static inline const char *ath12k_bd_ie_type_str(enum ath12k_bd_ie_type type)
+3 -3
drivers/net/wireless/ath/ath12k/pci.c
··· 292 292 293 293 ath12k_dbg(ab, ATH12K_DBG_PCI, "pci ltssm 0x%x\n", val); 294 294 295 - val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST); 295 + val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST(ab)); 296 296 val |= GCC_GCC_PCIE_HOT_RST_VAL; 297 - ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST, val); 298 - val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST); 297 + ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST(ab), val); 298 + val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST(ab)); 299 299 300 300 ath12k_dbg(ab, ATH12K_DBG_PCI, "pci pcie_hot_rst 0x%x\n", val); 301 301
+3 -1
drivers/net/wireless/ath/ath12k/pci.h
··· 28 28 #define PCIE_PCIE_PARF_LTSSM 0x1e081b0 29 29 #define PARM_LTSSM_VALUE 0x111 30 30 31 - #define GCC_GCC_PCIE_HOT_RST 0x1e38338 31 + #define GCC_GCC_PCIE_HOT_RST(ab) \ 32 + ((ab)->hw_params->regs->gcc_gcc_pcie_hot_rst) 33 + 32 34 #define GCC_GCC_PCIE_HOT_RST_VAL 0x10 33 35 34 36 #define PCIE_PCIE_INT_ALL_CLEAR 0x1e08228
+16 -10
drivers/net/wireless/ath/wil6210/interrupt.c
··· 179 179 wil_dbg_irq(wil, "mask_irq\n"); 180 180 181 181 wil6210_mask_irq_tx(wil); 182 - wil6210_mask_irq_tx_edma(wil); 182 + if (wil->use_enhanced_dma_hw) 183 + wil6210_mask_irq_tx_edma(wil); 183 184 wil6210_mask_irq_rx(wil); 184 - wil6210_mask_irq_rx_edma(wil); 185 + if (wil->use_enhanced_dma_hw) 186 + wil6210_mask_irq_rx_edma(wil); 185 187 wil6210_mask_irq_misc(wil, true); 186 188 wil6210_mask_irq_pseudo(wil); 187 189 } ··· 192 190 { 193 191 wil_dbg_irq(wil, "unmask_irq\n"); 194 192 195 - wil_w(wil, RGF_DMA_EP_RX_ICR + offsetof(struct RGF_ICR, ICC), 196 - WIL_ICR_ICC_VALUE); 197 - wil_w(wil, RGF_DMA_EP_TX_ICR + offsetof(struct RGF_ICR, ICC), 198 - WIL_ICR_ICC_VALUE); 193 + if (wil->use_enhanced_dma_hw) { 194 + wil_w(wil, RGF_DMA_EP_RX_ICR + offsetof(struct RGF_ICR, ICC), 195 + WIL_ICR_ICC_VALUE); 196 + wil_w(wil, RGF_DMA_EP_TX_ICR + offsetof(struct RGF_ICR, ICC), 197 + WIL_ICR_ICC_VALUE); 198 + } 199 199 wil_w(wil, RGF_DMA_EP_MISC_ICR + offsetof(struct RGF_ICR, ICC), 200 200 WIL_ICR_ICC_MISC_VALUE); 201 201 wil_w(wil, RGF_INT_GEN_TX_ICR + offsetof(struct RGF_ICR, ICC), ··· 849 845 offsetof(struct RGF_ICR, ICR)); 850 846 wil_clear32(wil->csr + HOSTADDR(RGF_DMA_EP_TX_ICR) + 851 847 offsetof(struct RGF_ICR, ICR)); 852 - wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_RX_ICR) + 853 - offsetof(struct RGF_ICR, ICR)); 854 - wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_TX_ICR) + 855 - offsetof(struct RGF_ICR, ICR)); 848 + if (wil->use_enhanced_dma_hw) { 849 + wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_RX_ICR) + 850 + offsetof(struct RGF_ICR, ICR)); 851 + wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_TX_ICR) + 852 + offsetof(struct RGF_ICR, ICR)); 853 + } 856 854 wil_clear32(wil->csr + HOSTADDR(RGF_DMA_EP_MISC_ICR) + 857 855 offsetof(struct RGF_ICR, ICR)); 858 856 wmb(); /* make sure write completed */
+20 -4
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 1501 1501 * Scratch value was altered, this means the device was powered off, we 1502 1502 * need to reset it completely. 1503 1503 * Note: MAC (bits 0:7) will be cleared upon suspend even with wowlan, 1504 - * so assume that any bits there mean that the device is usable. 1504 + * but not bits [15:8]. So if we have bits set in lower word, assume 1505 + * the device is alive. 1506 + * For older devices, just try silently to grab the NIC. 1505 1507 */ 1506 - if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ && 1507 - !iwl_read32(trans, CSR_FUNC_SCRATCH)) 1508 - device_was_powered_off = true; 1508 + if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) { 1509 + if (!(iwl_read32(trans, CSR_FUNC_SCRATCH) & 1510 + CSR_FUNC_SCRATCH_POWER_OFF_MASK)) 1511 + device_was_powered_off = true; 1512 + } else { 1513 + /* 1514 + * bh are re-enabled by iwl_trans_pcie_release_nic_access, 1515 + * so re-enable them if _iwl_trans_pcie_grab_nic_access fails. 1516 + */ 1517 + local_bh_disable(); 1518 + if (_iwl_trans_pcie_grab_nic_access(trans, true)) { 1519 + iwl_trans_pcie_release_nic_access(trans); 1520 + } else { 1521 + device_was_powered_off = true; 1522 + local_bh_enable(); 1523 + } 1524 + } 1509 1525 1510 1526 if (restore || device_was_powered_off) { 1511 1527 trans->state = IWL_TRANS_NO_FW;
+2 -4
drivers/net/wireless/marvell/mwifiex/11n.c
··· 403 403 404 404 if (sband->ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40 && 405 405 bss_desc->bcn_ht_oper->ht_param & 406 - IEEE80211_HT_PARAM_CHAN_WIDTH_ANY) { 407 - chan_list->chan_scan_param[0].radio_type |= 408 - CHAN_BW_40MHZ << 2; 406 + IEEE80211_HT_PARAM_CHAN_WIDTH_ANY) 409 407 SET_SECONDARYCHAN(chan_list->chan_scan_param[0]. 410 408 radio_type, 411 409 (bss_desc->bcn_ht_oper->ht_param & 412 410 IEEE80211_HT_PARAM_CHA_SEC_OFFSET)); 413 - } 411 + 414 412 *buffer += struct_size(chan_list, chan_scan_param, 1); 415 413 ret_len += struct_size(chan_list, chan_scan_param, 1); 416 414 }
+7 -14
drivers/nvme/host/ioctl.c
··· 429 429 pdu->result = le64_to_cpu(nvme_req(req)->result.u64); 430 430 431 431 /* 432 - * For iopoll, complete it directly. Note that using the uring_cmd 433 - * helper for this is safe only because we check blk_rq_is_poll(). 434 - * As that returns false if we're NOT on a polled queue, then it's 435 - * safe to use the polled completion helper. 436 - * 437 - * Otherwise, move the completion to task work. 432 + * IOPOLL could potentially complete this request directly, but 433 + * if multiple rings are polling on the same queue, then it's possible 434 + * for one ring to find completions for another ring. Punting the 435 + * completion via task_work will always direct it to the right 436 + * location, rather than potentially complete requests for ringA 437 + * under iopoll invocations from ringB. 438 438 */ 439 - if (blk_rq_is_poll(req)) { 440 - if (pdu->bio) 441 - blk_rq_unmap_user(pdu->bio); 442 - io_uring_cmd_iopoll_done(ioucmd, pdu->result, pdu->status); 443 - } else { 444 - io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb); 445 - } 446 - 439 + io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb); 447 440 return RQ_END_IO_FREE; 448 441 } 449 442
-5
drivers/pinctrl/pinctrl-st.c
··· 374 374 } 375 375 376 376 /* Low level functions.. */ 377 - static inline int st_gpio_bank(int gpio) 378 - { 379 - return gpio/ST_GPIO_PINS_PER_BANK; 380 - } 381 - 382 377 static inline int st_gpio_pin(int gpio) 383 378 { 384 379 return gpio%ST_GPIO_PINS_PER_BANK;
+1 -1
drivers/pinctrl/pinctrl-tb10x.c
··· 823 823 .remove = tb10x_pinctrl_remove, 824 824 .driver = { 825 825 .name = "tb10x_pinctrl", 826 - .of_match_table = of_match_ptr(tb10x_pinctrl_dt_ids), 826 + .of_match_table = tb10x_pinctrl_dt_ids, 827 827 } 828 828 }; 829 829
-1
drivers/pinctrl/qcom/pinctrl-apq8064.c
··· 629 629 .of_match_table = apq8064_pinctrl_of_match, 630 630 }, 631 631 .probe = apq8064_pinctrl_probe, 632 - .remove = msm_pinctrl_remove, 633 632 }; 634 633 635 634 static int __init apq8064_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-apq8084.c
··· 1207 1207 .of_match_table = apq8084_pinctrl_of_match, 1208 1208 }, 1209 1209 .probe = apq8084_pinctrl_probe, 1210 - .remove = msm_pinctrl_remove, 1211 1210 }; 1212 1211 1213 1212 static int __init apq8084_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-ipq4019.c
··· 710 710 .of_match_table = ipq4019_pinctrl_of_match, 711 711 }, 712 712 .probe = ipq4019_pinctrl_probe, 713 - .remove = msm_pinctrl_remove, 714 713 }; 715 714 716 715 static int __init ipq4019_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-ipq5018.c
··· 754 754 .of_match_table = ipq5018_pinctrl_of_match, 755 755 }, 756 756 .probe = ipq5018_pinctrl_probe, 757 - .remove = msm_pinctrl_remove, 758 757 }; 759 758 760 759 static int __init ipq5018_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-ipq5332.c
··· 834 834 .of_match_table = ipq5332_pinctrl_of_match, 835 835 }, 836 836 .probe = ipq5332_pinctrl_probe, 837 - .remove = msm_pinctrl_remove, 838 837 }; 839 838 840 839 static int __init ipq5332_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-ipq5424.c
··· 791 791 .of_match_table = ipq5424_pinctrl_of_match, 792 792 }, 793 793 .probe = ipq5424_pinctrl_probe, 794 - .remove = msm_pinctrl_remove, 795 794 }; 796 795 797 796 static int __init ipq5424_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-ipq6018.c
··· 1080 1080 .of_match_table = ipq6018_pinctrl_of_match, 1081 1081 }, 1082 1082 .probe = ipq6018_pinctrl_probe, 1083 - .remove = msm_pinctrl_remove, 1084 1083 }; 1085 1084 1086 1085 static int __init ipq6018_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-ipq8064.c
··· 631 631 .of_match_table = ipq8064_pinctrl_of_match, 632 632 }, 633 633 .probe = ipq8064_pinctrl_probe, 634 - .remove = msm_pinctrl_remove, 635 634 }; 636 635 637 636 static int __init ipq8064_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-ipq8074.c
··· 1041 1041 .of_match_table = ipq8074_pinctrl_of_match, 1042 1042 }, 1043 1043 .probe = ipq8074_pinctrl_probe, 1044 - .remove = msm_pinctrl_remove, 1045 1044 }; 1046 1045 1047 1046 static int __init ipq8074_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-ipq9574.c
··· 799 799 .of_match_table = ipq9574_pinctrl_of_match, 800 800 }, 801 801 .probe = ipq9574_pinctrl_probe, 802 - .remove = msm_pinctrl_remove, 803 802 }; 804 803 805 804 static int __init ipq9574_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-mdm9607.c
··· 1059 1059 .of_match_table = mdm9607_pinctrl_of_match, 1060 1060 }, 1061 1061 .probe = mdm9607_pinctrl_probe, 1062 - .remove = msm_pinctrl_remove, 1063 1062 }; 1064 1063 1065 1064 static int __init mdm9607_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-mdm9615.c
··· 446 446 .of_match_table = mdm9615_pinctrl_of_match, 447 447 }, 448 448 .probe = mdm9615_pinctrl_probe, 449 - .remove = msm_pinctrl_remove, 450 449 }; 451 450 452 451 static int __init mdm9615_pinctrl_init(void)
+1 -10
drivers/pinctrl/qcom/pinctrl-msm.c
··· 1442 1442 girq->handler = handle_bad_irq; 1443 1443 girq->parents[0] = pctrl->irq; 1444 1444 1445 - ret = gpiochip_add_data(&pctrl->chip, pctrl); 1445 + ret = devm_gpiochip_add_data(pctrl->dev, &pctrl->chip, pctrl); 1446 1446 if (ret) { 1447 1447 dev_err(pctrl->dev, "Failed register gpiochip\n"); 1448 1448 return ret; ··· 1463 1463 dev_name(pctrl->dev), 0, 0, chip->ngpio); 1464 1464 if (ret) { 1465 1465 dev_err(pctrl->dev, "Failed to add pin range\n"); 1466 - gpiochip_remove(&pctrl->chip); 1467 1466 return ret; 1468 1467 } 1469 1468 } ··· 1597 1598 return 0; 1598 1599 } 1599 1600 EXPORT_SYMBOL(msm_pinctrl_probe); 1600 - 1601 - void msm_pinctrl_remove(struct platform_device *pdev) 1602 - { 1603 - struct msm_pinctrl *pctrl = platform_get_drvdata(pdev); 1604 - 1605 - gpiochip_remove(&pctrl->chip); 1606 - } 1607 - EXPORT_SYMBOL(msm_pinctrl_remove); 1608 1601 1609 1602 MODULE_DESCRIPTION("Qualcomm Technologies, Inc. TLMM driver"); 1610 1603 MODULE_LICENSE("GPL v2");
-1
drivers/pinctrl/qcom/pinctrl-msm.h
··· 171 171 172 172 int msm_pinctrl_probe(struct platform_device *pdev, 173 173 const struct msm_pinctrl_soc_data *soc_data); 174 - void msm_pinctrl_remove(struct platform_device *pdev); 175 174 176 175 #endif
-1
drivers/pinctrl/qcom/pinctrl-msm8226.c
··· 654 654 .of_match_table = msm8226_pinctrl_of_match, 655 655 }, 656 656 .probe = msm8226_pinctrl_probe, 657 - .remove = msm_pinctrl_remove, 658 657 }; 659 658 660 659 static int __init msm8226_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8660.c
··· 981 981 .of_match_table = msm8660_pinctrl_of_match, 982 982 }, 983 983 .probe = msm8660_pinctrl_probe, 984 - .remove = msm_pinctrl_remove, 985 984 }; 986 985 987 986 static int __init msm8660_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8909.c
··· 929 929 .of_match_table = msm8909_pinctrl_of_match, 930 930 }, 931 931 .probe = msm8909_pinctrl_probe, 932 - .remove = msm_pinctrl_remove, 933 932 }; 934 933 935 934 static int __init msm8909_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8916.c
··· 969 969 .of_match_table = msm8916_pinctrl_of_match, 970 970 }, 971 971 .probe = msm8916_pinctrl_probe, 972 - .remove = msm_pinctrl_remove, 973 972 }; 974 973 975 974 static int __init msm8916_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8917.c
··· 1607 1607 .of_match_table = msm8917_pinctrl_of_match, 1608 1608 }, 1609 1609 .probe = msm8917_pinctrl_probe, 1610 - .remove = msm_pinctrl_remove, 1611 1610 }; 1612 1611 1613 1612 static int __init msm8917_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8953.c
··· 1816 1816 .of_match_table = msm8953_pinctrl_of_match, 1817 1817 }, 1818 1818 .probe = msm8953_pinctrl_probe, 1819 - .remove = msm_pinctrl_remove, 1820 1819 }; 1821 1820 1822 1821 static int __init msm8953_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8960.c
··· 1246 1246 .of_match_table = msm8960_pinctrl_of_match, 1247 1247 }, 1248 1248 .probe = msm8960_pinctrl_probe, 1249 - .remove = msm_pinctrl_remove, 1250 1249 }; 1251 1250 1252 1251 static int __init msm8960_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8976.c
··· 1096 1096 .of_match_table = msm8976_pinctrl_of_match, 1097 1097 }, 1098 1098 .probe = msm8976_pinctrl_probe, 1099 - .remove = msm_pinctrl_remove, 1100 1099 }; 1101 1100 1102 1101 static int __init msm8976_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8994.c
··· 1343 1343 .of_match_table = msm8994_pinctrl_of_match, 1344 1344 }, 1345 1345 .probe = msm8994_pinctrl_probe, 1346 - .remove = msm_pinctrl_remove, 1347 1346 }; 1348 1347 1349 1348 static int __init msm8994_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8996.c
··· 1920 1920 .of_match_table = msm8996_pinctrl_of_match, 1921 1921 }, 1922 1922 .probe = msm8996_pinctrl_probe, 1923 - .remove = msm_pinctrl_remove, 1924 1923 }; 1925 1924 1926 1925 static int __init msm8996_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8998.c
··· 1535 1535 .of_match_table = msm8998_pinctrl_of_match, 1536 1536 }, 1537 1537 .probe = msm8998_pinctrl_probe, 1538 - .remove = msm_pinctrl_remove, 1539 1538 }; 1540 1539 1541 1540 static int __init msm8998_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-msm8x74.c
··· 1083 1083 .of_match_table = msm8x74_pinctrl_of_match, 1084 1084 }, 1085 1085 .probe = msm8x74_pinctrl_probe, 1086 - .remove = msm_pinctrl_remove, 1087 1086 }; 1088 1087 1089 1088 static int __init msm8x74_pinctrl_init(void)
+9 -1
drivers/pinctrl/qcom/pinctrl-qcm2290.c
··· 167 167 PINCTRL_PIN(62, "GPIO_62"), 168 168 PINCTRL_PIN(63, "GPIO_63"), 169 169 PINCTRL_PIN(64, "GPIO_64"), 170 + PINCTRL_PIN(65, "GPIO_65"), 171 + PINCTRL_PIN(66, "GPIO_66"), 172 + PINCTRL_PIN(67, "GPIO_67"), 173 + PINCTRL_PIN(68, "GPIO_68"), 170 174 PINCTRL_PIN(69, "GPIO_69"), 171 175 PINCTRL_PIN(70, "GPIO_70"), 172 176 PINCTRL_PIN(71, "GPIO_71"), ··· 185 181 PINCTRL_PIN(80, "GPIO_80"), 186 182 PINCTRL_PIN(81, "GPIO_81"), 187 183 PINCTRL_PIN(82, "GPIO_82"), 184 + PINCTRL_PIN(83, "GPIO_83"), 185 + PINCTRL_PIN(84, "GPIO_84"), 186 + PINCTRL_PIN(85, "GPIO_85"), 188 187 PINCTRL_PIN(86, "GPIO_86"), 189 188 PINCTRL_PIN(87, "GPIO_87"), 190 189 PINCTRL_PIN(88, "GPIO_88"), 191 190 PINCTRL_PIN(89, "GPIO_89"), 192 191 PINCTRL_PIN(90, "GPIO_90"), 193 192 PINCTRL_PIN(91, "GPIO_91"), 193 + PINCTRL_PIN(92, "GPIO_92"), 194 + PINCTRL_PIN(93, "GPIO_93"), 194 195 PINCTRL_PIN(94, "GPIO_94"), 195 196 PINCTRL_PIN(95, "GPIO_95"), 196 197 PINCTRL_PIN(96, "GPIO_96"), ··· 1134 1125 .of_match_table = qcm2290_pinctrl_of_match, 1135 1126 }, 1136 1127 .probe = qcm2290_pinctrl_probe, 1137 - .remove = msm_pinctrl_remove, 1138 1128 }; 1139 1129 1140 1130 static int __init qcm2290_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-qcs404.c
··· 1644 1644 .of_match_table = qcs404_pinctrl_of_match, 1645 1645 }, 1646 1646 .probe = qcs404_pinctrl_probe, 1647 - .remove = msm_pinctrl_remove, 1648 1647 }; 1649 1648 1650 1649 static int __init qcs404_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-qcs615.c
··· 1087 1087 .of_match_table = qcs615_tlmm_of_match, 1088 1088 }, 1089 1089 .probe = qcs615_tlmm_probe, 1090 - .remove = msm_pinctrl_remove, 1091 1090 }; 1092 1091 1093 1092 static int __init qcs615_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-qcs8300.c
··· 1227 1227 .of_match_table = qcs8300_pinctrl_of_match, 1228 1228 }, 1229 1229 .probe = qcs8300_pinctrl_probe, 1230 - .remove = msm_pinctrl_remove, 1231 1230 }; 1232 1231 1233 1232 static int __init qcs8300_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-qdf2xxx.c
··· 145 145 .acpi_match_table = qdf2xxx_acpi_ids, 146 146 }, 147 147 .probe = qdf2xxx_pinctrl_probe, 148 - .remove = msm_pinctrl_remove, 149 148 }; 150 149 151 150 static int __init qdf2xxx_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-qdu1000.c
··· 1248 1248 .of_match_table = qdu1000_tlmm_of_match, 1249 1249 }, 1250 1250 .probe = qdu1000_tlmm_probe, 1251 - .remove = msm_pinctrl_remove, 1252 1251 }; 1253 1252 1254 1253 static int __init qdu1000_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sa8775p.c
··· 1540 1540 .of_match_table = sa8775p_pinctrl_of_match, 1541 1541 }, 1542 1542 .probe = sa8775p_pinctrl_probe, 1543 - .remove = msm_pinctrl_remove, 1544 1543 }; 1545 1544 1546 1545 static int __init sa8775p_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sar2130p.c
··· 1486 1486 .of_match_table = sar2130p_tlmm_of_match, 1487 1487 }, 1488 1488 .probe = sar2130p_tlmm_probe, 1489 - .remove = msm_pinctrl_remove, 1490 1489 }; 1491 1490 1492 1491 static int __init sar2130p_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sc7180.c
··· 1159 1159 .of_match_table = sc7180_pinctrl_of_match, 1160 1160 }, 1161 1161 .probe = sc7180_pinctrl_probe, 1162 - .remove = msm_pinctrl_remove, 1163 1162 }; 1164 1163 1165 1164 static int __init sc7180_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sc7280.c
··· 1505 1505 .of_match_table = sc7280_pinctrl_of_match, 1506 1506 }, 1507 1507 .probe = sc7280_pinctrl_probe, 1508 - .remove = msm_pinctrl_remove, 1509 1508 }; 1510 1509 1511 1510 static int __init sc7280_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sc8180x.c
··· 1720 1720 .acpi_match_table = sc8180x_pinctrl_acpi_match, 1721 1721 }, 1722 1722 .probe = sc8180x_pinctrl_probe, 1723 - .remove = msm_pinctrl_remove, 1724 1723 }; 1725 1724 1726 1725 static int __init sc8180x_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sc8280xp.c
··· 1926 1926 .of_match_table = sc8280xp_pinctrl_of_match, 1927 1927 }, 1928 1928 .probe = sc8280xp_pinctrl_probe, 1929 - .remove = msm_pinctrl_remove, 1930 1929 }; 1931 1930 1932 1931 static int __init sc8280xp_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sdm660.c
··· 1442 1442 .of_match_table = sdm660_pinctrl_of_match, 1443 1443 }, 1444 1444 .probe = sdm660_pinctrl_probe, 1445 - .remove = msm_pinctrl_remove, 1446 1445 }; 1447 1446 1448 1447 static int __init sdm660_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sdm670.c
··· 1337 1337 .of_match_table = sdm670_pinctrl_of_match, 1338 1338 }, 1339 1339 .probe = sdm670_pinctrl_probe, 1340 - .remove = msm_pinctrl_remove, 1341 1340 }; 1342 1341 1343 1342 static int __init sdm670_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sdm845.c
··· 1351 1351 .acpi_match_table = ACPI_PTR(sdm845_pinctrl_acpi_match), 1352 1352 }, 1353 1353 .probe = sdm845_pinctrl_probe, 1354 - .remove = msm_pinctrl_remove, 1355 1354 }; 1356 1355 1357 1356 static int __init sdm845_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sdx55.c
··· 990 990 .of_match_table = sdx55_pinctrl_of_match, 991 991 }, 992 992 .probe = sdx55_pinctrl_probe, 993 - .remove = msm_pinctrl_remove, 994 993 }; 995 994 996 995 static int __init sdx55_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sdx65.c
··· 939 939 .of_match_table = sdx65_pinctrl_of_match, 940 940 }, 941 941 .probe = sdx65_pinctrl_probe, 942 - .remove = msm_pinctrl_remove, 943 942 }; 944 943 945 944 static int __init sdx65_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sdx75.c
··· 1124 1124 .of_match_table = sdx75_pinctrl_of_match, 1125 1125 }, 1126 1126 .probe = sdx75_pinctrl_probe, 1127 - .remove = msm_pinctrl_remove, 1128 1127 }; 1129 1128 1130 1129 static int __init sdx75_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm4450.c
··· 994 994 .of_match_table = sm4450_tlmm_of_match, 995 995 }, 996 996 .probe = sm4450_tlmm_probe, 997 - .remove = msm_pinctrl_remove, 998 997 }; 999 998 MODULE_DEVICE_TABLE(of, sm4450_tlmm_of_match); 1000 999
-1
drivers/pinctrl/qcom/pinctrl-sm6115.c
··· 907 907 .of_match_table = sm6115_tlmm_of_match, 908 908 }, 909 909 .probe = sm6115_tlmm_probe, 910 - .remove = msm_pinctrl_remove, 911 910 }; 912 911 913 912 static int __init sm6115_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm6125.c
··· 1266 1266 .of_match_table = sm6125_tlmm_of_match, 1267 1267 }, 1268 1268 .probe = sm6125_tlmm_probe, 1269 - .remove = msm_pinctrl_remove, 1270 1269 }; 1271 1270 1272 1271 static int __init sm6125_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm6350.c
··· 1373 1373 .of_match_table = sm6350_tlmm_of_match, 1374 1374 }, 1375 1375 .probe = sm6350_tlmm_probe, 1376 - .remove = msm_pinctrl_remove, 1377 1376 }; 1378 1377 1379 1378 static int __init sm6350_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm6375.c
··· 1516 1516 .of_match_table = sm6375_tlmm_of_match, 1517 1517 }, 1518 1518 .probe = sm6375_tlmm_probe, 1519 - .remove = msm_pinctrl_remove, 1520 1519 }; 1521 1520 1522 1521 static int __init sm6375_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm7150.c
··· 1255 1255 .of_match_table = sm7150_tlmm_of_match, 1256 1256 }, 1257 1257 .probe = sm7150_tlmm_probe, 1258 - .remove = msm_pinctrl_remove, 1259 1258 }; 1260 1259 1261 1260 static int __init sm7150_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm8150.c
··· 1542 1542 .of_match_table = sm8150_pinctrl_of_match, 1543 1543 }, 1544 1544 .probe = sm8150_pinctrl_probe, 1545 - .remove = msm_pinctrl_remove, 1546 1545 }; 1547 1546 1548 1547 static int __init sm8150_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm8250.c
··· 1351 1351 .of_match_table = sm8250_pinctrl_of_match, 1352 1352 }, 1353 1353 .probe = sm8250_pinctrl_probe, 1354 - .remove = msm_pinctrl_remove, 1355 1354 }; 1356 1355 1357 1356 static int __init sm8250_pinctrl_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm8350.c
··· 1642 1642 .of_match_table = sm8350_tlmm_of_match, 1643 1643 }, 1644 1644 .probe = sm8350_tlmm_probe, 1645 - .remove = msm_pinctrl_remove, 1646 1645 }; 1647 1646 1648 1647 static int __init sm8350_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm8450.c
··· 1677 1677 .of_match_table = sm8450_tlmm_of_match, 1678 1678 }, 1679 1679 .probe = sm8450_tlmm_probe, 1680 - .remove = msm_pinctrl_remove, 1681 1680 }; 1682 1681 1683 1682 static int __init sm8450_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm8550.c
··· 1762 1762 .of_match_table = sm8550_tlmm_of_match, 1763 1763 }, 1764 1764 .probe = sm8550_tlmm_probe, 1765 - .remove = msm_pinctrl_remove, 1766 1765 }; 1767 1766 1768 1767 static int __init sm8550_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm8650.c
··· 1742 1742 .of_match_table = sm8650_tlmm_of_match, 1743 1743 }, 1744 1744 .probe = sm8650_tlmm_probe, 1745 - .remove = msm_pinctrl_remove, 1746 1745 }; 1747 1746 1748 1747 static int __init sm8650_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-sm8750.c
··· 1711 1711 .of_match_table = sm8750_tlmm_of_match, 1712 1712 }, 1713 1713 .probe = sm8750_tlmm_probe, 1714 - .remove = msm_pinctrl_remove, 1715 1714 }; 1716 1715 1717 1716 static int __init sm8750_tlmm_init(void)
-1
drivers/pinctrl/qcom/pinctrl-x1e80100.c
··· 1861 1861 .of_match_table = x1e80100_pinctrl_of_match, 1862 1862 }, 1863 1863 .probe = x1e80100_pinctrl_probe, 1864 - .remove = msm_pinctrl_remove, 1865 1864 }; 1866 1865 1867 1866 static int __init x1e80100_pinctrl_init(void)
+4 -4
drivers/pinctrl/sunxi/pinctrl-sunxi-dt.c
··· 143 143 */ 144 144 static int prepare_function_table(struct device *dev, struct device_node *pnode, 145 145 struct sunxi_desc_pin *pins, int npins, 146 - const u8 *irq_bank_muxes) 146 + unsigned pin_base, const u8 *irq_bank_muxes) 147 147 { 148 148 struct device_node *node; 149 149 struct property *prop; ··· 166 166 */ 167 167 for (i = 0; i < npins; i++) { 168 168 struct sunxi_desc_pin *pin = &pins[i]; 169 - int bank = pin->pin.number / PINS_PER_BANK; 169 + int bank = (pin->pin.number - pin_base) / PINS_PER_BANK; 170 170 171 171 if (irq_bank_muxes[bank]) { 172 172 pin->variant++; ··· 211 211 last_bank = 0; 212 212 for (i = 0; i < npins; i++) { 213 213 struct sunxi_desc_pin *pin = &pins[i]; 214 - int bank = pin->pin.number / PINS_PER_BANK; 214 + int bank = (pin->pin.number - pin_base) / PINS_PER_BANK; 215 215 int lastfunc = pin->variant + 1; 216 216 int irq_mux = irq_bank_muxes[bank]; 217 217 ··· 353 353 return PTR_ERR(pins); 354 354 355 355 ret = prepare_function_table(&pdev->dev, pnode, pins, desc->npins, 356 - irq_bank_muxes); 356 + desc->pin_base, irq_bank_muxes); 357 357 if (ret) 358 358 return ret; 359 359
+1 -11
drivers/ptp/ptp_private.h
··· 98 98 /* Check if ptp virtual clock is in use */ 99 99 static inline bool ptp_vclock_in_use(struct ptp_clock *ptp) 100 100 { 101 - bool in_use = false; 102 - 103 - if (mutex_lock_interruptible(&ptp->n_vclocks_mux)) 104 - return true; 105 - 106 - if (!ptp->is_virtual_clock && ptp->n_vclocks) 107 - in_use = true; 108 - 109 - mutex_unlock(&ptp->n_vclocks_mux); 110 - 111 - return in_use; 101 + return !ptp->is_virtual_clock; 112 102 } 113 103 114 104 /* Check if ptp clock shall be free running */
+3
drivers/rapidio/rio_cm.c
··· 783 783 if (buf == NULL || ch_id == 0 || len == 0 || len > RIO_MAX_MSG_SIZE) 784 784 return -EINVAL; 785 785 786 + if (len < sizeof(struct rio_ch_chan_hdr)) 787 + return -EINVAL; /* insufficient data from user */ 788 + 786 789 ch = riocm_get_channel(ch_id); 787 790 if (!ch) { 788 791 riocm_error("%s(%d) ch_%d not found", current->comm,
+3 -3
drivers/regulator/max20086-regulator.c
··· 5 5 // Copyright (C) 2022 Laurent Pinchart <laurent.pinchart@idesonboard.com> 6 6 // Copyright (C) 2018 Avnet, Inc. 7 7 8 + #include <linux/cleanup.h> 8 9 #include <linux/err.h> 9 10 #include <linux/gpio/consumer.h> 10 11 #include <linux/i2c.h> ··· 134 133 static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on) 135 134 { 136 135 struct of_regulator_match *matches; 137 - struct device_node *node; 138 136 unsigned int i; 139 137 int ret; 140 138 141 - node = of_get_child_by_name(chip->dev->of_node, "regulators"); 139 + struct device_node *node __free(device_node) = 140 + of_get_child_by_name(chip->dev->of_node, "regulators"); 142 141 if (!node) { 143 142 dev_err(chip->dev, "regulators node not found\n"); 144 143 return -ENODEV; ··· 154 153 155 154 ret = of_regulator_match(chip->dev, node, matches, 156 155 chip->info->num_outputs); 157 - of_node_put(node); 158 156 if (ret < 0) { 159 157 dev_err(chip->dev, "Failed to match regulators\n"); 160 158 return -EINVAL;
+2
drivers/s390/scsi/zfcp_sysfs.c
··· 449 449 if (kstrtoull(buf, 0, (unsigned long long *) &fcp_lun)) 450 450 return -EINVAL; 451 451 452 + flush_work(&port->rport_work); 453 + 452 454 retval = zfcp_unit_add(port, fcp_lun); 453 455 if (retval) 454 456 return retval;
+2 -2
drivers/scsi/mvsas/mv_defs.h
··· 215 215 216 216 /* MVS_Px_INT_STAT, MVS_Px_INT_MASK (per-phy events) */ 217 217 PHYEV_DEC_ERR = (1U << 24), /* Phy Decoding Error */ 218 - PHYEV_DCDR_ERR = (1U << 23), /* STP Deocder Error */ 218 + PHYEV_DCDR_ERR = (1U << 23), /* STP Decoder Error */ 219 219 PHYEV_CRC_ERR = (1U << 22), /* STP CRC Error */ 220 220 PHYEV_UNASSOC_FIS = (1U << 19), /* unassociated FIS rx'd */ 221 221 PHYEV_AN = (1U << 18), /* SATA async notification */ ··· 347 347 CMD_SATA_PORT_MEM_CTL0 = 0x158, /* SATA Port Memory Control 0 */ 348 348 CMD_SATA_PORT_MEM_CTL1 = 0x15c, /* SATA Port Memory Control 1 */ 349 349 CMD_XOR_MEM_BIST_CTL = 0x160, /* XOR Memory BIST Control */ 350 - CMD_XOR_MEM_BIST_STAT = 0x164, /* XOR Memroy BIST Status */ 350 + CMD_XOR_MEM_BIST_STAT = 0x164, /* XOR Memory BIST Status */ 351 351 CMD_DMA_MEM_BIST_CTL = 0x168, /* DMA Memory BIST Control */ 352 352 CMD_DMA_MEM_BIST_STAT = 0x16c, /* DMA Memory BIST Status */ 353 353 CMD_PORT_MEM_BIST_CTL = 0x170, /* Port Memory BIST Control */
+2 -1
drivers/scsi/scsi_error.c
··· 665 665 * if the device is in the process of becoming ready, we 666 666 * should retry. 667 667 */ 668 - if ((sshdr.asc == 0x04) && (sshdr.ascq == 0x01)) 668 + if ((sshdr.asc == 0x04) && 669 + (sshdr.ascq == 0x01 || sshdr.ascq == 0x0a)) 669 670 return NEEDS_RETRY; 670 671 /* 671 672 * if the device is not started, we need to wake
+5 -6
drivers/scsi/scsi_transport_iscsi.c
··· 3499 3499 pr_err("%s could not find host no %u\n", 3500 3500 __func__, ev->u.new_flashnode.host_no); 3501 3501 err = -ENODEV; 3502 - goto put_host; 3502 + goto exit_new_fnode; 3503 3503 } 3504 3504 3505 3505 index = transport->new_flashnode(shost, data, len); ··· 3509 3509 else 3510 3510 err = -EIO; 3511 3511 3512 - put_host: 3513 3512 scsi_host_put(shost); 3514 3513 3515 3514 exit_new_fnode: ··· 3533 3534 pr_err("%s could not find host no %u\n", 3534 3535 __func__, ev->u.del_flashnode.host_no); 3535 3536 err = -ENODEV; 3536 - goto put_host; 3537 + goto exit_del_fnode; 3537 3538 } 3538 3539 3539 3540 idx = ev->u.del_flashnode.flashnode_idx; ··· 3575 3576 pr_err("%s could not find host no %u\n", 3576 3577 __func__, ev->u.login_flashnode.host_no); 3577 3578 err = -ENODEV; 3578 - goto put_host; 3579 + goto exit_login_fnode; 3579 3580 } 3580 3581 3581 3582 idx = ev->u.login_flashnode.flashnode_idx; ··· 3627 3628 pr_err("%s could not find host no %u\n", 3628 3629 __func__, ev->u.logout_flashnode.host_no); 3629 3630 err = -ENODEV; 3630 - goto put_host; 3631 + goto exit_logout_fnode; 3631 3632 } 3632 3633 3633 3634 idx = ev->u.logout_flashnode.flashnode_idx; ··· 3677 3678 pr_err("%s could not find host no %u\n", 3678 3679 __func__, ev->u.logout_flashnode.host_no); 3679 3680 err = -ENODEV; 3680 - goto put_host; 3681 + goto exit_logout_sid; 3681 3682 } 3682 3683 3683 3684 session = iscsi_session_lookup(ev->u.logout_flashnode_sid.sid);
+6 -4
drivers/scsi/storvsc_drv.c
··· 362 362 /* 363 363 * Timeout in seconds for all devices managed by this driver. 364 364 */ 365 - static int storvsc_timeout = 180; 365 + static const int storvsc_timeout = 180; 366 366 367 367 #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS) 368 368 static struct scsi_transport_template *fc_transport_template; ··· 768 768 return; 769 769 } 770 770 771 - t = wait_for_completion_timeout(&request->wait_event, 10*HZ); 771 + t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ); 772 772 if (t == 0) { 773 773 dev_err(dev, "Failed to create sub-channel: timed out\n"); 774 774 return; ··· 833 833 if (ret != 0) 834 834 return ret; 835 835 836 - t = wait_for_completion_timeout(&request->wait_event, 5*HZ); 836 + t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ); 837 837 if (t == 0) 838 838 return -ETIMEDOUT; 839 839 ··· 1350 1350 return ret; 1351 1351 1352 1352 ret = storvsc_channel_init(device, is_fc); 1353 + if (ret) 1354 + vmbus_close(device->channel); 1353 1355 1354 1356 return ret; 1355 1357 } ··· 1670 1668 if (ret != 0) 1671 1669 return FAILED; 1672 1670 1673 - t = wait_for_completion_timeout(&request->wait_event, 5*HZ); 1671 + t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ); 1674 1672 if (t == 0) 1675 1673 return TIMEOUT_ERROR; 1676 1674
+1
drivers/spi/spi-loongson-core.c
··· 5 5 #include <linux/clk.h> 6 6 #include <linux/delay.h> 7 7 #include <linux/err.h> 8 + #include <linux/export.h> 8 9 #include <linux/init.h> 9 10 #include <linux/interrupt.h> 10 11 #include <linux/io.h>
+1 -1
drivers/spi/spi-offload.c
··· 297 297 if (trigger->ops->enable) { 298 298 ret = trigger->ops->enable(trigger, config); 299 299 if (ret) { 300 - if (offload->ops->trigger_disable) 300 + if (offload->ops && offload->ops->trigger_disable) 301 301 offload->ops->trigger_disable(offload); 302 302 return ret; 303 303 }
+18 -12
drivers/spi/spi-omap2-mcspi.c
··· 134 134 size_t max_xfer_len; 135 135 u32 ref_clk_hz; 136 136 bool use_multi_mode; 137 + bool last_msg_kept_cs; 137 138 }; 138 139 139 140 struct omap2_mcspi_cs { ··· 1270 1269 * multi-mode is applicable. 1271 1270 */ 1272 1271 mcspi->use_multi_mode = true; 1272 + 1273 + if (mcspi->last_msg_kept_cs) 1274 + mcspi->use_multi_mode = false; 1275 + 1273 1276 list_for_each_entry(tr, &msg->transfers, transfer_list) { 1274 1277 if (!tr->bits_per_word) 1275 1278 bits_per_word = msg->spi->bits_per_word; ··· 1292 1287 mcspi->use_multi_mode = false; 1293 1288 } 1294 1289 1295 - /* Check if transfer asks to change the CS status after the transfer */ 1296 - if (!tr->cs_change) 1297 - mcspi->use_multi_mode = false; 1298 - 1299 - /* 1300 - * If at least one message is not compatible, switch back to single mode 1301 - * 1302 - * The bits_per_word of certain transfer can be different, but it will have no 1303 - * impact on the signal itself. 1304 - */ 1305 - if (!mcspi->use_multi_mode) 1306 - break; 1290 + if (list_is_last(&tr->transfer_list, &msg->transfers)) { 1291 + /* Check if transfer asks to keep the CS status after the whole message */ 1292 + if (tr->cs_change) { 1293 + mcspi->use_multi_mode = false; 1294 + mcspi->last_msg_kept_cs = true; 1295 + } else { 1296 + mcspi->last_msg_kept_cs = false; 1297 + } 1298 + } else { 1299 + /* Check if transfer asks to change the CS status after the transfer */ 1300 + if (!tr->cs_change) 1301 + mcspi->use_multi_mode = false; 1302 + } 1307 1303 } 1308 1304 1309 1305 omap2_mcspi_set_mode(ctlr);
+2 -2
drivers/spi/spi-pci1xxxx.c
··· 762 762 return -EINVAL; 763 763 764 764 num_vector = pci_alloc_irq_vectors(pdev, 1, hw_inst_cnt, 765 - PCI_IRQ_ALL_TYPES); 765 + PCI_IRQ_INTX | PCI_IRQ_MSI); 766 766 if (num_vector < 0) { 767 767 dev_err(&pdev->dev, "Error allocating MSI vectors\n"); 768 - return ret; 768 + return num_vector; 769 769 } 770 770 771 771 init_completion(&spi_sub_ptr->spi_xfer_done);
+19 -5
drivers/spi/spi-stm32-ospi.c
··· 804 804 return ret; 805 805 } 806 806 807 - ospi->rstc = devm_reset_control_array_get_exclusive(dev); 807 + ospi->rstc = devm_reset_control_array_get_exclusive_released(dev); 808 808 if (IS_ERR(ospi->rstc)) 809 809 return dev_err_probe(dev, PTR_ERR(ospi->rstc), 810 810 "Can't get reset\n"); ··· 936 936 if (ret < 0) 937 937 goto err_pm_enable; 938 938 939 - if (ospi->rstc) { 940 - reset_control_assert(ospi->rstc); 941 - udelay(2); 942 - reset_control_deassert(ospi->rstc); 939 + ret = reset_control_acquire(ospi->rstc); 940 + if (ret) { 941 + dev_err_probe(dev, ret, "Can not acquire reset %d\n", ret); 942 + goto err_pm_resume; 943 943 } 944 + 945 + reset_control_assert(ospi->rstc); 946 + udelay(2); 947 + reset_control_deassert(ospi->rstc); 944 948 945 949 ret = spi_register_controller(ctrl); 946 950 if (ret) { ··· 991 987 if (ospi->dma_chrx) 992 988 dma_release_channel(ospi->dma_chrx); 993 989 990 + reset_control_release(ospi->rstc); 991 + 994 992 pm_runtime_put_sync_suspend(ospi->dev); 995 993 pm_runtime_force_suspend(ospi->dev); 996 994 } ··· 1002 996 struct stm32_ospi *ospi = dev_get_drvdata(dev); 1003 997 1004 998 pinctrl_pm_select_sleep_state(dev); 999 + 1000 + reset_control_release(ospi->rstc); 1005 1001 1006 1002 return pm_runtime_force_suspend(ospi->dev); 1007 1003 } ··· 1023 1015 ret = pm_runtime_resume_and_get(ospi->dev); 1024 1016 if (ret < 0) 1025 1017 return ret; 1018 + 1019 + ret = reset_control_acquire(ospi->rstc); 1020 + if (ret) { 1021 + dev_err(dev, "Can not acquire reset\n"); 1022 + return ret; 1023 + } 1026 1024 1027 1025 writel_relaxed(ospi->cr_reg, regs_base + OSPI_CR); 1028 1026 writel_relaxed(ospi->dcr_reg, regs_base + OSPI_DCR1);
+6 -1
drivers/ufs/core/ufshcd.c
··· 6623 6623 up(&hba->host_sem); 6624 6624 return; 6625 6625 } 6626 + spin_unlock_irqrestore(hba->host->host_lock, flags); 6627 + 6628 + ufshcd_err_handling_prepare(hba); 6629 + 6630 + spin_lock_irqsave(hba->host->host_lock, flags); 6626 6631 ufshcd_set_eh_in_progress(hba); 6627 6632 spin_unlock_irqrestore(hba->host->host_lock, flags); 6628 - ufshcd_err_handling_prepare(hba); 6633 + 6629 6634 /* Complete requests that have door-bell cleared by h/w */ 6630 6635 ufshcd_complete_requests(hba, false); 6631 6636 spin_lock_irqsave(hba->host->host_lock, flags);
-1
fs/bcachefs/bcachefs.h
··· 296 296 #define bch2_fmt(_c, fmt) bch2_log_msg(_c, fmt "\n") 297 297 298 298 void bch2_print_str(struct bch_fs *, const char *, const char *); 299 - void bch2_print_str_nonblocking(struct bch_fs *, const char *, const char *); 300 299 301 300 __printf(2, 3) 302 301 void bch2_print_opts(struct bch_opts *, const char *, ...);
+60 -35
fs/bcachefs/btree_gc.c
··· 397 397 continue; 398 398 } 399 399 400 - ret = btree_check_node_boundaries(trans, b, prev, cur, pulled_from_scan); 400 + ret = lockrestart_do(trans, 401 + btree_check_node_boundaries(trans, b, prev, cur, pulled_from_scan)); 402 + if (ret < 0) 403 + goto err; 404 + 401 405 if (ret == DID_FILL_FROM_SCAN) { 402 406 new_pass = true; 403 407 ret = 0; ··· 442 438 443 439 if (!ret && !IS_ERR_OR_NULL(prev)) { 444 440 BUG_ON(cur); 445 - ret = btree_repair_node_end(trans, b, prev, pulled_from_scan); 441 + ret = lockrestart_do(trans, 442 + btree_repair_node_end(trans, b, prev, pulled_from_scan)); 446 443 if (ret == DID_FILL_FROM_SCAN) { 447 444 new_pass = true; 448 445 ret = 0; ··· 524 519 bch2_bkey_buf_exit(&prev_k, c); 525 520 bch2_bkey_buf_exit(&cur_k, c); 526 521 printbuf_exit(&buf); 522 + bch_err_fn(c, ret); 523 + return ret; 524 + } 525 + 526 + static int bch2_check_root(struct btree_trans *trans, enum btree_id i, 527 + bool *reconstructed_root) 528 + { 529 + struct bch_fs *c = trans->c; 530 + struct btree_root *r = bch2_btree_id_root(c, i); 531 + struct printbuf buf = PRINTBUF; 532 + int ret = 0; 533 + 534 + bch2_btree_id_to_text(&buf, i); 535 + 536 + if (r->error) { 537 + bch_info(c, "btree root %s unreadable, must recover from scan", buf.buf); 538 + 539 + r->alive = false; 540 + r->error = 0; 541 + 542 + if (!bch2_btree_has_scanned_nodes(c, i)) { 543 + __fsck_err(trans, 544 + FSCK_CAN_FIX|(!btree_id_important(i) ? FSCK_AUTOFIX : 0), 545 + btree_root_unreadable_and_scan_found_nothing, 546 + "no nodes found for btree %s, continue?", buf.buf); 547 + bch2_btree_root_alloc_fake_trans(trans, i, 0); 548 + } else { 549 + bch2_btree_root_alloc_fake_trans(trans, i, 1); 550 + bch2_shoot_down_journal_keys(c, i, 1, BTREE_MAX_DEPTH, POS_MIN, SPOS_MAX); 551 + ret = bch2_get_scanned_nodes(c, i, 0, POS_MIN, SPOS_MAX); 552 + if (ret) 553 + goto err; 554 + } 555 + 556 + *reconstructed_root = true; 557 + } 558 + err: 559 + fsck_err: 560 + printbuf_exit(&buf); 561 + bch_err_fn(c, ret); 527 562 return ret; 528 563 } 529 564 ··· 571 526 { 572 527 struct btree_trans *trans = bch2_trans_get(c); 573 528 struct bpos pulled_from_scan = POS_MIN; 574 - struct printbuf buf = PRINTBUF; 575 529 int ret = 0; 576 530 577 531 bch2_trans_srcu_unlock(trans); 578 532 579 533 for (unsigned i = 0; i < btree_id_nr_alive(c) && !ret; i++) { 580 - struct btree_root *r = bch2_btree_id_root(c, i); 581 534 bool reconstructed_root = false; 535 + recover: 536 + ret = lockrestart_do(trans, bch2_check_root(trans, i, &reconstructed_root)); 537 + if (ret) 538 + break; 582 539 583 - printbuf_reset(&buf); 584 - bch2_btree_id_to_text(&buf, i); 585 - 586 - if (r->error) { 587 - reconstruct_root: 588 - bch_info(c, "btree root %s unreadable, must recover from scan", buf.buf); 589 - 590 - r->alive = false; 591 - r->error = 0; 592 - 593 - if (!bch2_btree_has_scanned_nodes(c, i)) { 594 - __fsck_err(trans, 595 - FSCK_CAN_FIX|(!btree_id_important(i) ? FSCK_AUTOFIX : 0), 596 - btree_root_unreadable_and_scan_found_nothing, 597 - "no nodes found for btree %s, continue?", buf.buf); 598 - bch2_btree_root_alloc_fake_trans(trans, i, 0); 599 - } else { 600 - bch2_btree_root_alloc_fake_trans(trans, i, 1); 601 - bch2_shoot_down_journal_keys(c, i, 1, BTREE_MAX_DEPTH, POS_MIN, SPOS_MAX); 602 - ret = bch2_get_scanned_nodes(c, i, 0, POS_MIN, SPOS_MAX); 603 - if (ret) 604 - break; 605 - } 606 - 607 - reconstructed_root = true; 608 - } 609 - 540 + struct btree_root *r = bch2_btree_id_root(c, i); 610 541 struct btree *b = r->b; 611 542 612 543 btree_node_lock_nopath_nofail(trans, &b->c, SIX_LOCK_read); ··· 596 575 597 576 r->b = NULL; 598 577 599 - if (!reconstructed_root) 600 - goto reconstruct_root; 578 + if (!reconstructed_root) { 579 + r->error = -EIO; 580 + goto recover; 581 + } 601 582 583 + struct printbuf buf = PRINTBUF; 584 + bch2_btree_id_to_text(&buf, i); 602 585 bch_err(c, "empty btree root %s", buf.buf); 586 + printbuf_exit(&buf); 603 587 bch2_btree_root_alloc_fake_trans(trans, i, 0); 604 588 r->alive = false; 605 589 ret = 0; 606 590 } 607 591 } 608 - fsck_err: 609 - printbuf_exit(&buf); 592 + 610 593 bch2_trans_put(trans); 611 594 return ret; 612 595 }
+19 -7
fs/bcachefs/btree_io.c
··· 741 741 BCH_VERSION_MAJOR(version), 742 742 BCH_VERSION_MINOR(version)); 743 743 744 - if (btree_err_on(version < c->sb.version_min, 744 + if (c->recovery.curr_pass != BCH_RECOVERY_PASS_scan_for_btree_nodes && 745 + btree_err_on(version < c->sb.version_min, 745 746 -BCH_ERR_btree_node_read_err_fixable, 746 747 c, NULL, b, i, NULL, 747 748 btree_node_bset_older_than_sb_min, 748 749 "bset version %u older than superblock version_min %u", 749 750 version, c->sb.version_min)) { 750 - mutex_lock(&c->sb_lock); 751 - c->disk_sb.sb->version_min = cpu_to_le16(version); 752 - bch2_write_super(c); 753 - mutex_unlock(&c->sb_lock); 751 + if (bch2_version_compatible(version)) { 752 + mutex_lock(&c->sb_lock); 753 + c->disk_sb.sb->version_min = cpu_to_le16(version); 754 + bch2_write_super(c); 755 + mutex_unlock(&c->sb_lock); 756 + } else { 757 + /* We have no idea what's going on: */ 758 + i->version = cpu_to_le16(c->sb.version); 759 + } 754 760 } 755 761 756 762 if (btree_err_on(BCH_VERSION_MAJOR(version) > ··· 1051 1045 le16_add_cpu(&i->u64s, -next_good_key); 1052 1046 memmove_u64s_down(k, (u64 *) k + next_good_key, (u64 *) vstruct_end(i) - (u64 *) k); 1053 1047 set_btree_node_need_rewrite(b); 1048 + set_btree_node_need_rewrite_error(b); 1054 1049 } 1055 1050 fsck_err: 1056 1051 printbuf_exit(&buf); ··· 1312 1305 (u64 *) vstruct_end(i) - (u64 *) k); 1313 1306 set_btree_bset_end(b, b->set); 1314 1307 set_btree_node_need_rewrite(b); 1308 + set_btree_node_need_rewrite_error(b); 1315 1309 continue; 1316 1310 } 1317 1311 if (ret) ··· 1337 1329 bkey_for_each_ptr(bch2_bkey_ptrs(bkey_i_to_s(&b->key)), ptr) { 1338 1330 struct bch_dev *ca2 = bch2_dev_rcu(c, ptr->dev); 1339 1331 1340 - if (!ca2 || ca2->mi.state != BCH_MEMBER_STATE_rw) 1332 + if (!ca2 || ca2->mi.state != BCH_MEMBER_STATE_rw) { 1341 1333 set_btree_node_need_rewrite(b); 1334 + set_btree_node_need_rewrite_degraded(b); 1335 + } 1342 1336 } 1343 1337 1344 - if (!ptr_written) 1338 + if (!ptr_written) { 1345 1339 set_btree_node_need_rewrite(b); 1340 + set_btree_node_need_rewrite_ptr_written_zero(b); 1341 + } 1346 1342 fsck_err: 1347 1343 mempool_free(iter, &c->fill_iter); 1348 1344 printbuf_exit(&buf);
+1 -1
fs/bcachefs/btree_locking.c
··· 213 213 prt_newline(&buf); 214 214 } 215 215 216 - bch2_print_str_nonblocking(g->g->trans->c, KERN_ERR, buf.buf); 216 + bch2_print_str(g->g->trans->c, KERN_ERR, buf.buf); 217 217 printbuf_exit(&buf); 218 218 BUG(); 219 219 }
+4 -2
fs/bcachefs/btree_locking.h
··· 417 417 EBUG_ON(!btree_node_locked(path, path->level)); 418 418 EBUG_ON(path->uptodate); 419 419 420 - path->should_be_locked = true; 421 - trace_btree_path_should_be_locked(trans, path); 420 + if (!path->should_be_locked) { 421 + path->should_be_locked = true; 422 + trace_btree_path_should_be_locked(trans, path); 423 + } 422 424 } 423 425 424 426 static inline void __btree_path_set_level_up(struct btree_trans *trans,
+29
fs/bcachefs/btree_types.h
··· 617 617 x(dying) \ 618 618 x(fake) \ 619 619 x(need_rewrite) \ 620 + x(need_rewrite_error) \ 621 + x(need_rewrite_degraded) \ 622 + x(need_rewrite_ptr_written_zero) \ 620 623 x(never_write) \ 621 624 x(pinned) 622 625 ··· 643 640 644 641 BTREE_FLAGS() 645 642 #undef x 643 + 644 + #define BTREE_NODE_REWRITE_REASON() \ 645 + x(none) \ 646 + x(unknown) \ 647 + x(error) \ 648 + x(degraded) \ 649 + x(ptr_written_zero) 650 + 651 + enum btree_node_rewrite_reason { 652 + #define x(n) BTREE_NODE_REWRITE_##n, 653 + BTREE_NODE_REWRITE_REASON() 654 + #undef x 655 + }; 656 + 657 + static inline enum btree_node_rewrite_reason btree_node_rewrite_reason(struct btree *b) 658 + { 659 + if (btree_node_need_rewrite_ptr_written_zero(b)) 660 + return BTREE_NODE_REWRITE_ptr_written_zero; 661 + if (btree_node_need_rewrite_degraded(b)) 662 + return BTREE_NODE_REWRITE_degraded; 663 + if (btree_node_need_rewrite_error(b)) 664 + return BTREE_NODE_REWRITE_error; 665 + if (btree_node_need_rewrite(b)) 666 + return BTREE_NODE_REWRITE_unknown; 667 + return BTREE_NODE_REWRITE_none; 668 + } 646 669 647 670 static inline struct btree_write *btree_current_write(struct btree *b) 648 671 {
+31 -2
fs/bcachefs/btree_update_interior.c
··· 1138 1138 start_time); 1139 1139 } 1140 1140 1141 + static const char * const btree_node_reawrite_reason_strs[] = { 1142 + #define x(n) #n, 1143 + BTREE_NODE_REWRITE_REASON() 1144 + #undef x 1145 + NULL, 1146 + }; 1147 + 1141 1148 static struct btree_update * 1142 1149 bch2_btree_update_start(struct btree_trans *trans, struct btree_path *path, 1143 1150 unsigned level_start, bool split, ··· 1238 1231 mutex_lock(&c->btree_interior_update_lock); 1239 1232 list_add_tail(&as->list, &c->btree_interior_update_list); 1240 1233 mutex_unlock(&c->btree_interior_update_lock); 1234 + 1235 + struct btree *b = btree_path_node(path, path->level); 1236 + as->node_start = b->data->min_key; 1237 + as->node_end = b->data->max_key; 1238 + as->node_needed_rewrite = btree_node_rewrite_reason(b); 1239 + as->node_written = b->written; 1240 + as->node_sectors = btree_buf_bytes(b) >> 9; 1241 + as->node_remaining = __bch2_btree_u64s_remaining(b, 1242 + btree_bkey_last(b, bset_tree_last(b))); 1241 1243 1242 1244 /* 1243 1245 * We don't want to allocate if we're in an error state, that can cause ··· 2124 2108 if (ret) 2125 2109 goto err; 2126 2110 2111 + as->node_start = prev->data->min_key; 2112 + as->node_end = next->data->max_key; 2113 + 2127 2114 trace_and_count(c, btree_node_merge, trans, b); 2128 2115 2129 2116 n = bch2_btree_node_alloc(as, trans, b->c.level); ··· 2700 2681 2701 2682 prt_str(out, " "); 2702 2683 bch2_btree_id_to_text(out, as->btree_id); 2703 - prt_printf(out, " l=%u-%u mode=%s nodes_written=%u cl.remaining=%u journal_seq=%llu\n", 2684 + prt_printf(out, " l=%u-%u ", 2704 2685 as->update_level_start, 2705 - as->update_level_end, 2686 + as->update_level_end); 2687 + bch2_bpos_to_text(out, as->node_start); 2688 + prt_char(out, ' '); 2689 + bch2_bpos_to_text(out, as->node_end); 2690 + prt_printf(out, "\nwritten %u/%u u64s_remaining %u need_rewrite %s", 2691 + as->node_written, 2692 + as->node_sectors, 2693 + as->node_remaining, 2694 + btree_node_reawrite_reason_strs[as->node_needed_rewrite]); 2695 + 2696 + prt_printf(out, "\nmode=%s nodes_written=%u cl.remaining=%u journal_seq=%llu\n", 2706 2697 bch2_btree_update_modes[as->mode], 2707 2698 as->nodes_written, 2708 2699 closure_nr_remaining(&as->cl),
+7
fs/bcachefs/btree_update_interior.h
··· 57 57 unsigned took_gc_lock:1; 58 58 59 59 enum btree_id btree_id; 60 + struct bpos node_start; 61 + struct bpos node_end; 62 + enum btree_node_rewrite_reason node_needed_rewrite; 63 + u16 node_written; 64 + u16 node_sectors; 65 + u16 node_remaining; 66 + 60 67 unsigned update_level_start; 61 68 unsigned update_level_end; 62 69
+2 -2
fs/bcachefs/chardev.c
··· 399 399 return ret; 400 400 } 401 401 402 - static long bch2_ioctl_fs_usage(struct bch_fs *c, 402 + static noinline_for_stack long bch2_ioctl_fs_usage(struct bch_fs *c, 403 403 struct bch_ioctl_fs_usage __user *user_arg) 404 404 { 405 405 struct bch_ioctl_fs_usage arg = {}; ··· 469 469 } 470 470 471 471 /* obsolete, didn't allow for new data types: */ 472 - static long bch2_ioctl_dev_usage(struct bch_fs *c, 472 + static noinline_for_stack long bch2_ioctl_dev_usage(struct bch_fs *c, 473 473 struct bch_ioctl_dev_usage __user *user_arg) 474 474 { 475 475 struct bch_ioctl_dev_usage arg;
+3 -1
fs/bcachefs/disk_accounting.c
··· 618 618 for (unsigned j = 0; j < nr; j++) 619 619 src_v[j] -= dst_v[j]; 620 620 621 - if (fsck_err(trans, accounting_mismatch, "%s", buf.buf)) { 621 + bch2_trans_unlock_long(trans); 622 + 623 + if (fsck_err(c, accounting_mismatch, "%s", buf.buf)) { 622 624 percpu_up_write(&c->mark_lock); 623 625 ret = commit_do(trans, NULL, NULL, 0, 624 626 bch2_disk_accounting_mod(trans, &acc_k, src_v, nr, false));
+4 -1
fs/bcachefs/error.c
··· 69 69 if (trans) 70 70 bch2_trans_updates_to_text(&buf, trans); 71 71 bool ret = __bch2_inconsistent_error(c, &buf); 72 - bch2_print_str_nonblocking(c, KERN_ERR, buf.buf); 72 + bch2_print_str(c, KERN_ERR, buf.buf); 73 73 74 74 printbuf_exit(&buf); 75 75 return ret; ··· 620 620 621 621 if (s) 622 622 s->ret = ret; 623 + 624 + if (trans) 625 + ret = bch2_trans_log_str(trans, bch2_sb_error_strs[err]) ?: ret; 623 626 err_unlock: 624 627 mutex_unlock(&c->fsck_error_msgs_lock); 625 628 err:
+8
fs/bcachefs/fs.c
··· 2490 2490 if (ret) 2491 2491 goto err_stop_fs; 2492 2492 2493 + /* 2494 + * We might be doing a RO mount because other options required it, or we 2495 + * have no alloc info and it's a small image with no room to regenerate 2496 + * it 2497 + */ 2498 + if (c->opts.read_only) 2499 + fc->sb_flags |= SB_RDONLY; 2500 + 2493 2501 sb = sget(fc->fs_type, NULL, bch2_set_super, fc->sb_flags|SB_NOSEC, c); 2494 2502 ret = PTR_ERR_OR_ZERO(sb); 2495 2503 if (ret)
+9 -2
fs/bcachefs/io_read.c
··· 343 343 344 344 *bounce = true; 345 345 *read_full = promote_full; 346 + 347 + if (have_io_error(failed)) 348 + orig->self_healing = true; 349 + 346 350 return promote; 347 351 nopromote: 348 352 trace_io_read_nopromote(c, ret); ··· 639 635 prt_str(&buf, "(internal move) "); 640 636 641 637 prt_str(&buf, "data read error, "); 642 - if (!ret) 638 + if (!ret) { 643 639 prt_str(&buf, "successful retry"); 644 - else 640 + if (rbio->self_healing) 641 + prt_str(&buf, ", self healing"); 642 + } else 645 643 prt_str(&buf, bch2_err_str(ret)); 646 644 prt_newline(&buf); 645 + 647 646 648 647 if (!bkey_deleted(&sk.k->k)) { 649 648 bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(sk.k));
+1
fs/bcachefs/io_read.h
··· 44 44 have_ioref:1, 45 45 narrow_crcs:1, 46 46 saw_error:1, 47 + self_healing:1, 47 48 context:2; 48 49 }; 49 50 u16 _state;
+13 -9
fs/bcachefs/movinggc.c
··· 28 28 #include <linux/wait.h> 29 29 30 30 struct buckets_in_flight { 31 - struct rhashtable table; 31 + struct rhashtable *table; 32 32 struct move_bucket *first; 33 33 struct move_bucket *last; 34 34 size_t nr; ··· 98 98 static void move_bucket_free(struct buckets_in_flight *list, 99 99 struct move_bucket *b) 100 100 { 101 - int ret = rhashtable_remove_fast(&list->table, &b->hash, 101 + int ret = rhashtable_remove_fast(list->table, &b->hash, 102 102 bch_move_bucket_params); 103 103 BUG_ON(ret); 104 104 kfree(b); ··· 133 133 static bool bucket_in_flight(struct buckets_in_flight *list, 134 134 struct move_bucket_key k) 135 135 { 136 - return rhashtable_lookup_fast(&list->table, &k, bch_move_bucket_params); 136 + return rhashtable_lookup_fast(list->table, &k, bch_move_bucket_params); 137 137 } 138 138 139 139 static int bch2_copygc_get_buckets(struct moving_context *ctxt, ··· 185 185 goto err; 186 186 } 187 187 188 - ret2 = rhashtable_lookup_insert_fast(&buckets_in_flight->table, &b_i->hash, 188 + ret2 = rhashtable_lookup_insert_fast(buckets_in_flight->table, &b_i->hash, 189 189 bch_move_bucket_params); 190 190 BUG_ON(ret2); 191 191 ··· 350 350 struct buckets_in_flight buckets = {}; 351 351 u64 last, wait; 352 352 353 - int ret = rhashtable_init(&buckets.table, &bch_move_bucket_params); 353 + buckets.table = kzalloc(sizeof(*buckets.table), GFP_KERNEL); 354 + int ret = !buckets.table 355 + ? -ENOMEM 356 + : rhashtable_init(buckets.table, &bch_move_bucket_params); 354 357 bch_err_msg(c, ret, "allocating copygc buckets in flight"); 355 358 if (ret) 356 - return ret; 359 + goto err; 357 360 358 361 set_freezable(); 359 362 ··· 424 421 } 425 422 426 423 move_buckets_wait(&ctxt, &buckets, true); 427 - rhashtable_destroy(&buckets.table); 424 + rhashtable_destroy(buckets.table); 428 425 bch2_moving_ctxt_exit(&ctxt); 429 426 bch2_move_stats_exit(&move_stats, c); 430 - 431 - return 0; 427 + err: 428 + kfree(buckets.table); 429 + return ret; 432 430 } 433 431 434 432 void bch2_copygc_stop(struct bch_fs *c)
+10
fs/bcachefs/namei.c
··· 175 175 new_inode->bi_dir_offset = dir_offset; 176 176 } 177 177 178 + if (S_ISDIR(mode)) { 179 + ret = bch2_maybe_propagate_has_case_insensitive(trans, 180 + (subvol_inum) { 181 + new_inode->bi_subvol ?: dir.subvol, 182 + new_inode->bi_inum }, 183 + new_inode); 184 + if (ret) 185 + goto err; 186 + } 187 + 178 188 if (S_ISDIR(mode) && 179 189 !new_inode->bi_subvol) 180 190 new_inode->bi_depth = dir_u->bi_depth + 1;
+11 -11
fs/bcachefs/rcu_pending.c
··· 182 182 while (nr--) 183 183 kfree(*p); 184 184 } 185 - 186 - #define local_irq_save(flags) \ 187 - do { \ 188 - flags = 0; \ 189 - } while (0) 190 185 #endif 191 186 192 187 static noinline void __process_finished_items(struct rcu_pending *pending, ··· 424 429 425 430 BUG_ON((ptr != NULL) != (pending->process == RCU_PENDING_KVFREE_FN)); 426 431 427 - local_irq_save(flags); 428 - p = this_cpu_ptr(pending->p); 429 - spin_lock(&p->lock); 432 + /* We could technically be scheduled before taking the lock and end up 433 + * using a different cpu's rcu_pending_pcpu: that's ok, it needs a lock 434 + * anyways 435 + * 436 + * And we have to do it this way to avoid breaking PREEMPT_RT, which 437 + * redefines how spinlocks work: 438 + */ 439 + p = raw_cpu_ptr(pending->p); 440 + spin_lock_irqsave(&p->lock, flags); 430 441 rcu_gp_poll_state_t seq = __get_state_synchronize_rcu(pending->srcu); 431 442 restart: 432 443 if (may_sleep && ··· 521 520 goto free_node; 522 521 } 523 522 524 - local_irq_save(flags); 525 - p = this_cpu_ptr(pending->p); 526 - spin_lock(&p->lock); 523 + p = raw_cpu_ptr(pending->p); 524 + spin_lock_irqsave(&p->lock, flags); 527 525 goto restart; 528 526 } 529 527
+21 -6
fs/bcachefs/recovery.c
··· 99 99 goto out; 100 100 case BTREE_ID_snapshots: 101 101 ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_reconstruct_snapshots, 0) ?: ret; 102 + ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_check_topology, 0) ?: ret; 102 103 ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_scan_for_btree_nodes, 0) ?: ret; 103 104 goto out; 104 105 default: 106 + ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_check_topology, 0) ?: ret; 105 107 ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_scan_for_btree_nodes, 0) ?: ret; 106 108 goto out; 107 109 } ··· 273 271 goto out; 274 272 275 273 struct btree_path *path = btree_iter_path(trans, &iter); 276 - if (unlikely(!btree_path_node(path, k->level))) { 274 + if (unlikely(!btree_path_node(path, k->level) && 275 + !k->allocated)) { 276 + struct bch_fs *c = trans->c; 277 + 278 + if (!(c->recovery.passes_complete & (BIT_ULL(BCH_RECOVERY_PASS_scan_for_btree_nodes)| 279 + BIT_ULL(BCH_RECOVERY_PASS_check_topology)))) { 280 + bch_err(c, "have key in journal replay for btree depth that does not exist, confused"); 281 + ret = -EINVAL; 282 + } 283 + #if 0 277 284 bch2_trans_iter_exit(trans, &iter); 278 285 bch2_trans_node_iter_init(trans, &iter, k->btree_id, k->k->k.p, 279 286 BTREE_MAX_DEPTH, 0, iter_flags); 280 287 ret = bch2_btree_iter_traverse(trans, &iter) ?: 281 288 bch2_btree_increase_depth(trans, iter.path, 0) ?: 282 289 -BCH_ERR_transaction_restart_nested; 290 + #endif 291 + k->overwritten = true; 283 292 goto out; 284 293 } 285 294 ··· 752 739 ? min(c->opts.recovery_pass_last, BCH_RECOVERY_PASS_snapshots_read) 753 740 : BCH_RECOVERY_PASS_snapshots_read; 754 741 c->opts.nochanges = true; 755 - c->opts.read_only = true; 756 742 } 743 + 744 + if (c->opts.nochanges) 745 + c->opts.read_only = true; 757 746 758 747 mutex_lock(&c->sb_lock); 759 748 struct bch_sb_field_ext *ext = bch2_sb_field_get(c->disk_sb.sb, ext); ··· 1108 1093 out: 1109 1094 bch2_flush_fsck_errs(c); 1110 1095 1111 - if (!IS_ERR(clean)) 1112 - kfree(clean); 1113 - 1114 1096 if (!ret && 1115 1097 test_bit(BCH_FS_need_delete_dead_snapshots, &c->flags) && 1116 1098 !c->opts.nochanges) { ··· 1116 1104 } 1117 1105 1118 1106 bch_err_fn(c, ret); 1107 + final_out: 1108 + if (!IS_ERR(clean)) 1109 + kfree(clean); 1119 1110 return ret; 1120 1111 err: 1121 1112 fsck_err: ··· 1132 1117 bch2_print_str(c, KERN_ERR, buf.buf); 1133 1118 printbuf_exit(&buf); 1134 1119 } 1135 - return ret; 1120 + goto final_out; 1136 1121 } 1137 1122 1138 1123 int bch2_fs_initialize(struct bch_fs *c)
+11 -3
fs/bcachefs/recovery_passes.c
··· 294 294 enum bch_run_recovery_pass_flags *flags) 295 295 { 296 296 struct bch_fs_recovery *r = &c->recovery; 297 - bool in_recovery = test_bit(BCH_FS_in_recovery, &c->flags); 298 - bool persistent = !in_recovery || !(*flags & RUN_RECOVERY_PASS_nopersistent); 297 + 298 + /* 299 + * Never run scan_for_btree_nodes persistently: check_topology will run 300 + * it if required 301 + */ 302 + if (pass == BCH_RECOVERY_PASS_scan_for_btree_nodes) 303 + *flags |= RUN_RECOVERY_PASS_nopersistent; 299 304 300 305 if ((*flags & RUN_RECOVERY_PASS_ratelimit) && 301 306 !bch2_recovery_pass_want_ratelimit(c, pass)) ··· 315 310 * Otherwise, we run run_explicit_recovery_pass when we find damage, so 316 311 * it should run again even if it's already run: 317 312 */ 313 + bool in_recovery = test_bit(BCH_FS_in_recovery, &c->flags); 314 + bool persistent = !in_recovery || !(*flags & RUN_RECOVERY_PASS_nopersistent); 318 315 319 316 if (persistent 320 317 ? !(c->sb.recovery_passes_required & BIT_ULL(pass)) ··· 340 333 { 341 334 struct bch_fs_recovery *r = &c->recovery; 342 335 int ret = 0; 336 + 343 337 344 338 lockdep_assert_held(&c->sb_lock); 345 339 ··· 454 446 455 447 int bch2_run_print_explicit_recovery_pass(struct bch_fs *c, enum bch_recovery_pass pass) 456 448 { 457 - enum bch_run_recovery_pass_flags flags = RUN_RECOVERY_PASS_nopersistent; 449 + enum bch_run_recovery_pass_flags flags = 0; 458 450 459 451 if (!recovery_pass_needs_set(c, pass, &flags)) 460 452 return 0;
+4 -1
fs/bcachefs/sb-downgrade.c
··· 253 253 254 254 static int downgrade_table_extra(struct bch_fs *c, darray_char *table) 255 255 { 256 + unsigned dst_offset = table->nr; 256 257 struct bch_sb_field_downgrade_entry *dst = (void *) &darray_top(*table); 257 258 unsigned bytes = sizeof(*dst) + sizeof(dst->errors[0]) * le16_to_cpu(dst->nr_errors); 258 259 int ret = 0; ··· 269 268 if (ret) 270 269 return ret; 271 270 271 + dst = (void *) &table->data[dst_offset]; 272 + dst->nr_errors = cpu_to_le16(nr_errors + 1); 273 + 272 274 /* open coded __set_bit_le64, as dst is packed and 273 275 * dst->recovery_passes is misaligned */ 274 276 unsigned b = BCH_RECOVERY_PASS_STABLE_check_allocations; ··· 282 278 break; 283 279 } 284 280 285 - dst->nr_errors = cpu_to_le16(nr_errors); 286 281 return ret; 287 282 } 288 283
+5 -5
fs/bcachefs/sb-errors_format.h
··· 134 134 x(bucket_gens_to_invalid_buckets, 121, FSCK_AUTOFIX) \ 135 135 x(bucket_gens_nonzero_for_invalid_buckets, 122, FSCK_AUTOFIX) \ 136 136 x(need_discard_freespace_key_to_invalid_dev_bucket, 123, 0) \ 137 - x(need_discard_freespace_key_bad, 124, 0) \ 137 + x(need_discard_freespace_key_bad, 124, FSCK_AUTOFIX) \ 138 138 x(discarding_bucket_not_in_need_discard_btree, 291, 0) \ 139 139 x(backpointer_bucket_offset_wrong, 125, 0) \ 140 140 x(backpointer_level_bad, 294, 0) \ ··· 165 165 x(ptr_to_missing_replicas_entry, 149, FSCK_AUTOFIX) \ 166 166 x(ptr_to_missing_stripe, 150, 0) \ 167 167 x(ptr_to_incorrect_stripe, 151, 0) \ 168 - x(ptr_gen_newer_than_bucket_gen, 152, 0) \ 168 + x(ptr_gen_newer_than_bucket_gen, 152, FSCK_AUTOFIX) \ 169 169 x(ptr_too_stale, 153, 0) \ 170 170 x(stale_dirty_ptr, 154, FSCK_AUTOFIX) \ 171 171 x(ptr_bucket_data_type_mismatch, 155, 0) \ ··· 236 236 x(inode_multiple_links_but_nlink_0, 207, FSCK_AUTOFIX) \ 237 237 x(inode_wrong_backpointer, 208, FSCK_AUTOFIX) \ 238 238 x(inode_wrong_nlink, 209, FSCK_AUTOFIX) \ 239 - x(inode_has_child_snapshots_wrong, 287, 0) \ 239 + x(inode_has_child_snapshots_wrong, 287, FSCK_AUTOFIX) \ 240 240 x(inode_unreachable, 210, FSCK_AUTOFIX) \ 241 241 x(inode_journal_seq_in_future, 299, FSCK_AUTOFIX) \ 242 242 x(inode_i_sectors_underflow, 312, FSCK_AUTOFIX) \ ··· 279 279 x(root_dir_missing, 239, 0) \ 280 280 x(root_inode_not_dir, 240, 0) \ 281 281 x(dir_loop, 241, 0) \ 282 - x(hash_table_key_duplicate, 242, 0) \ 283 - x(hash_table_key_wrong_offset, 243, 0) \ 282 + x(hash_table_key_duplicate, 242, FSCK_AUTOFIX) \ 283 + x(hash_table_key_wrong_offset, 243, FSCK_AUTOFIX) \ 284 284 x(unlinked_inode_not_on_deleted_list, 244, FSCK_AUTOFIX) \ 285 285 x(reflink_p_front_pad_bad, 245, 0) \ 286 286 x(journal_entry_dup_same_device, 246, 0) \
+30 -4
fs/bcachefs/sb-members.c
··· 325 325 { 326 326 struct bch_sb_field_members_v1 *mi = field_to_type(f, members_v1); 327 327 struct bch_sb_field_disk_groups *gi = bch2_sb_field_get(sb, disk_groups); 328 - unsigned i; 329 328 330 - for (i = 0; i < sb->nr_devices; i++) 329 + if (vstruct_end(&mi->field) <= (void *) &mi->_members[0]) { 330 + prt_printf(out, "field ends before start of entries"); 331 + return; 332 + } 333 + 334 + unsigned nr = (vstruct_end(&mi->field) - (void *) &mi->_members[0]) / sizeof(mi->_members[0]); 335 + if (nr != sb->nr_devices) 336 + prt_printf(out, "nr_devices mismatch: have %i entries, should be %u", nr, sb->nr_devices); 337 + 338 + for (unsigned i = 0; i < min(sb->nr_devices, nr); i++) 331 339 member_to_text(out, members_v1_get(mi, i), gi, sb, i); 332 340 } 333 341 ··· 349 341 { 350 342 struct bch_sb_field_members_v2 *mi = field_to_type(f, members_v2); 351 343 struct bch_sb_field_disk_groups *gi = bch2_sb_field_get(sb, disk_groups); 352 - unsigned i; 353 344 354 - for (i = 0; i < sb->nr_devices; i++) 345 + if (vstruct_end(&mi->field) <= (void *) &mi->_members[0]) { 346 + prt_printf(out, "field ends before start of entries"); 347 + return; 348 + } 349 + 350 + if (!le16_to_cpu(mi->member_bytes)) { 351 + prt_printf(out, "member_bytes 0"); 352 + return; 353 + } 354 + 355 + unsigned nr = (vstruct_end(&mi->field) - (void *) &mi->_members[0]) / le16_to_cpu(mi->member_bytes); 356 + if (nr != sb->nr_devices) 357 + prt_printf(out, "nr_devices mismatch: have %i entries, should be %u", nr, sb->nr_devices); 358 + 359 + /* 360 + * We call to_text() on superblock sections that haven't passed 361 + * validate, so we can't trust sb->nr_devices. 362 + */ 363 + 364 + for (unsigned i = 0; i < min(sb->nr_devices, nr); i++) 355 365 member_to_text(out, members_v2_get(mi, i), gi, sb, i); 356 366 } 357 367
+33 -14
fs/bcachefs/super.c
··· 104 104 #undef x 105 105 106 106 static void __bch2_print_str(struct bch_fs *c, const char *prefix, 107 - const char *str, bool nonblocking) 107 + const char *str) 108 108 { 109 109 #ifdef __KERNEL__ 110 110 struct stdio_redirect *stdio = bch2_fs_stdio_redirect(c); ··· 114 114 return; 115 115 } 116 116 #endif 117 - bch2_print_string_as_lines(KERN_ERR, str, nonblocking); 117 + bch2_print_string_as_lines(KERN_ERR, str); 118 118 } 119 119 120 120 void bch2_print_str(struct bch_fs *c, const char *prefix, const char *str) 121 121 { 122 - __bch2_print_str(c, prefix, str, false); 123 - } 124 - 125 - void bch2_print_str_nonblocking(struct bch_fs *c, const char *prefix, const char *str) 126 - { 127 - __bch2_print_str(c, prefix, str, true); 122 + __bch2_print_str(c, prefix, str); 128 123 } 129 124 130 125 __printf(2, 0) ··· 1067 1072 static void print_mount_opts(struct bch_fs *c) 1068 1073 { 1069 1074 enum bch_opt_id i; 1070 - struct printbuf p = PRINTBUF; 1071 - bool first = true; 1075 + CLASS(printbuf, p)(); 1076 + bch2_log_msg_start(c, &p); 1072 1077 1073 1078 prt_str(&p, "starting version "); 1074 1079 bch2_version_to_text(&p, c->sb.version); 1075 1080 1081 + bool first = true; 1076 1082 for (i = 0; i < bch2_opts_nr; i++) { 1077 1083 const struct bch_option *opt = &bch2_opt_table[i]; 1078 1084 u64 v = bch2_opt_get_by_id(&c->opts, i); ··· 1090 1094 } 1091 1095 1092 1096 if (c->sb.version_incompat_allowed != c->sb.version) { 1093 - prt_printf(&p, "\n allowing incompatible features above "); 1097 + prt_printf(&p, "\nallowing incompatible features above "); 1094 1098 bch2_version_to_text(&p, c->sb.version_incompat_allowed); 1095 1099 } 1096 1100 1097 1101 if (c->opts.verbose) { 1098 - prt_printf(&p, "\n features: "); 1102 + prt_printf(&p, "\nfeatures: "); 1099 1103 prt_bitflags(&p, bch2_sb_features, c->sb.features); 1100 1104 } 1101 1105 1102 - bch_info(c, "%s", p.buf); 1103 - printbuf_exit(&p); 1106 + if (c->sb.multi_device) { 1107 + prt_printf(&p, "\nwith devices"); 1108 + for_each_online_member(c, ca, BCH_DEV_READ_REF_bch2_online_devs) { 1109 + prt_char(&p, ' '); 1110 + prt_str(&p, ca->name); 1111 + } 1112 + } 1113 + 1114 + bch2_print_str(c, KERN_INFO, p.buf); 1104 1115 } 1105 1116 1106 1117 static bool bch2_fs_may_start(struct bch_fs *c) ··· 1997 1994 if (ret) 1998 1995 goto err_late; 1999 1996 } 1997 + 1998 + /* 1999 + * We just changed the superblock UUID, invalidate cache and send a 2000 + * uevent to update /dev/disk/by-uuid 2001 + */ 2002 + invalidate_bdev(ca->disk_sb.bdev); 2003 + 2004 + char uuid_str[37]; 2005 + snprintf(uuid_str, sizeof(uuid_str), "UUID=%pUb", &c->sb.uuid); 2006 + 2007 + char *envp[] = { 2008 + "CHANGE=uuid", 2009 + uuid_str, 2010 + NULL, 2011 + }; 2012 + kobject_uevent_env(&ca->disk_sb.bdev->bd_device.kobj, KOBJ_CHANGE, envp); 2000 2013 2001 2014 up_write(&c->state_lock); 2002 2015 out:
+2 -8
fs/bcachefs/util.c
··· 262 262 return true; 263 263 } 264 264 265 - void bch2_print_string_as_lines(const char *prefix, const char *lines, 266 - bool nonblocking) 265 + void bch2_print_string_as_lines(const char *prefix, const char *lines) 267 266 { 268 267 bool locked = false; 269 268 const char *p; ··· 272 273 return; 273 274 } 274 275 275 - if (!nonblocking) { 276 - console_lock(); 277 - locked = true; 278 - } else { 279 - locked = console_trylock(); 280 - } 276 + locked = console_trylock(); 281 277 282 278 while (*lines) { 283 279 p = strchrnul(lines, '\n');
+1 -1
fs/bcachefs/util.h
··· 214 214 void bch2_prt_u64_base2_nbits(struct printbuf *, u64, unsigned); 215 215 void bch2_prt_u64_base2(struct printbuf *, u64); 216 216 217 - void bch2_print_string_as_lines(const char *, const char *, bool); 217 + void bch2_print_string_as_lines(const char *, const char *); 218 218 219 219 typedef DARRAY(unsigned long) bch_stacktrace; 220 220 int bch2_save_backtrace(bch_stacktrace *stack, struct task_struct *, unsigned, gfp_t);
+4 -4
fs/smb/client/cached_dir.h
··· 21 21 struct cached_dirents { 22 22 bool is_valid:1; 23 23 bool is_failed:1; 24 - struct dir_context *ctx; /* 25 - * Only used to make sure we only take entries 26 - * from a single context. Never dereferenced. 27 - */ 24 + struct file *file; /* 25 + * Used to associate the cache with a single 26 + * open file instance. 27 + */ 28 28 struct mutex de_mutex; 29 29 int pos; /* Expected ctx->pos */ 30 30 struct list_head entries;
+8 -2
fs/smb/client/connect.c
··· 3718 3718 goto out; 3719 3719 } 3720 3720 3721 - /* if new SMB3.11 POSIX extensions are supported do not remap / and \ */ 3722 - if (tcon->posix_extensions) 3721 + /* 3722 + * if new SMB3.11 POSIX extensions are supported, do not change anything in the 3723 + * path (i.e., do not remap / and \ and do not map any special characters) 3724 + */ 3725 + if (tcon->posix_extensions) { 3723 3726 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_POSIX_PATHS; 3727 + cifs_sb->mnt_cifs_flags &= ~(CIFS_MOUNT_MAP_SFM_CHR | 3728 + CIFS_MOUNT_MAP_SPECIAL_CHR); 3729 + } 3724 3730 3725 3731 #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY 3726 3732 /* tell server which Unix caps we support */
+6 -3
fs/smb/client/file.c
··· 999 999 rc = cifs_get_readable_path(tcon, full_path, &cfile); 1000 1000 } 1001 1001 if (rc == 0) { 1002 - if (file->f_flags == cfile->f_flags) { 1002 + unsigned int oflags = file->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC); 1003 + unsigned int cflags = cfile->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC); 1004 + 1005 + if (cifs_convert_flags(oflags, 0) == cifs_convert_flags(cflags, 0) && 1006 + (oflags & (O_SYNC|O_DIRECT)) == (cflags & (O_SYNC|O_DIRECT))) { 1003 1007 file->private_data = cfile; 1004 1008 spin_lock(&CIFS_I(inode)->deferred_lock); 1005 1009 cifs_del_deferred_close(cfile); 1006 1010 spin_unlock(&CIFS_I(inode)->deferred_lock); 1007 1011 goto use_cache; 1008 - } else { 1009 - _cifsFileInfo_put(cfile, true, false); 1010 1012 } 1013 + _cifsFileInfo_put(cfile, true, false); 1011 1014 } else { 1012 1015 /* hard link on the defeered close file */ 1013 1016 rc = cifs_get_hardlink_path(tcon, inode, file);
+15 -13
fs/smb/client/readdir.c
··· 851 851 } 852 852 853 853 static void update_cached_dirents_count(struct cached_dirents *cde, 854 - struct dir_context *ctx) 854 + struct file *file) 855 855 { 856 - if (cde->ctx != ctx) 856 + if (cde->file != file) 857 857 return; 858 858 if (cde->is_valid || cde->is_failed) 859 859 return; ··· 862 862 } 863 863 864 864 static void finished_cached_dirents_count(struct cached_dirents *cde, 865 - struct dir_context *ctx) 865 + struct dir_context *ctx, struct file *file) 866 866 { 867 - if (cde->ctx != ctx) 867 + if (cde->file != file) 868 868 return; 869 869 if (cde->is_valid || cde->is_failed) 870 870 return; ··· 877 877 static void add_cached_dirent(struct cached_dirents *cde, 878 878 struct dir_context *ctx, 879 879 const char *name, int namelen, 880 - struct cifs_fattr *fattr) 880 + struct cifs_fattr *fattr, 881 + struct file *file) 881 882 { 882 883 struct cached_dirent *de; 883 884 884 - if (cde->ctx != ctx) 885 + if (cde->file != file) 885 886 return; 886 887 if (cde->is_valid || cde->is_failed) 887 888 return; ··· 912 911 static bool cifs_dir_emit(struct dir_context *ctx, 913 912 const char *name, int namelen, 914 913 struct cifs_fattr *fattr, 915 - struct cached_fid *cfid) 914 + struct cached_fid *cfid, 915 + struct file *file) 916 916 { 917 917 bool rc; 918 918 ino_t ino = cifs_uniqueid_to_ino_t(fattr->cf_uniqueid); ··· 925 923 if (cfid) { 926 924 mutex_lock(&cfid->dirents.de_mutex); 927 925 add_cached_dirent(&cfid->dirents, ctx, name, namelen, 928 - fattr); 926 + fattr, file); 929 927 mutex_unlock(&cfid->dirents.de_mutex); 930 928 } 931 929 ··· 1025 1023 cifs_prime_dcache(file_dentry(file), &name, &fattr); 1026 1024 1027 1025 return !cifs_dir_emit(ctx, name.name, name.len, 1028 - &fattr, cfid); 1026 + &fattr, cfid, file); 1029 1027 } 1030 1028 1031 1029 ··· 1076 1074 * we need to initialize scanning and storing the 1077 1075 * directory content. 
1078 1076 */ 1079 - if (ctx->pos == 0 && cfid->dirents.ctx == NULL) { 1080 - cfid->dirents.ctx = ctx; 1077 + if (ctx->pos == 0 && cfid->dirents.file == NULL) { 1078 + cfid->dirents.file = file; 1081 1079 cfid->dirents.pos = 2; 1082 1080 } 1083 1081 /* ··· 1145 1143 } else { 1146 1144 if (cfid) { 1147 1145 mutex_lock(&cfid->dirents.de_mutex); 1148 - finished_cached_dirents_count(&cfid->dirents, ctx); 1146 + finished_cached_dirents_count(&cfid->dirents, ctx, file); 1149 1147 mutex_unlock(&cfid->dirents.de_mutex); 1150 1148 } 1151 1149 cifs_dbg(FYI, "Could not find entry\n"); ··· 1186 1184 ctx->pos++; 1187 1185 if (cfid) { 1188 1186 mutex_lock(&cfid->dirents.de_mutex); 1189 - update_cached_dirents_count(&cfid->dirents, ctx); 1187 + update_cached_dirents_count(&cfid->dirents, file); 1190 1188 mutex_unlock(&cfid->dirents.de_mutex); 1191 1189 } 1192 1190
+1 -1
include/linux/bio.h
··· 291 291 292 292 fi->folio = page_folio(bvec->bv_page); 293 293 fi->offset = bvec->bv_offset + 294 - PAGE_SIZE * (bvec->bv_page - &fi->folio->page); 294 + PAGE_SIZE * folio_page_idx(fi->folio, bvec->bv_page); 295 295 fi->_seg_count = bvec->bv_len; 296 296 fi->length = min(folio_size(fi->folio) - fi->offset, fi->_seg_count); 297 297 fi->_next = folio_next(fi->folio);
+5 -2
include/linux/bvec.h
··· 57 57 * @offset: offset into the folio 58 58 */ 59 59 static inline void bvec_set_folio(struct bio_vec *bv, struct folio *folio, 60 - unsigned int len, unsigned int offset) 60 + size_t len, size_t offset) 61 61 { 62 - bvec_set_page(bv, &folio->page, len, offset); 62 + unsigned long nr = offset / PAGE_SIZE; 63 + 64 + WARN_ON_ONCE(len > UINT_MAX); 65 + bvec_set_page(bv, folio_page(folio, nr), len, offset % PAGE_SIZE); 63 66 } 64 67 65 68 /**
+3
include/linux/cpu.h
··· 120 120 extern void cpu_maps_update_done(void); 121 121 int bringup_hibernate_cpu(unsigned int sleep_cpu); 122 122 void bringup_nonboot_cpus(unsigned int max_cpus); 123 + int arch_cpu_rescan_dead_smt_siblings(void); 123 124 124 125 #else /* CONFIG_SMP */ 125 126 #define cpuhp_tasks_frozen 0 ··· 134 133 } 135 134 136 135 static inline int add_cpu(unsigned int cpu) { return 0;} 136 + 137 + static inline int arch_cpu_rescan_dead_smt_siblings(void) { return 0; } 137 138 138 139 #endif /* CONFIG_SMP */ 139 140 extern const struct bus_type cpu_subsys;
+4 -2
include/linux/fs.h
··· 2274 2274 return true; 2275 2275 } 2276 2276 2277 + int compat_vma_mmap_prepare(struct file *file, struct vm_area_struct *vma); 2278 + 2277 2279 static inline int call_mmap(struct file *file, struct vm_area_struct *vma) 2278 2280 { 2279 - if (WARN_ON_ONCE(file->f_op->mmap_prepare)) 2280 - return -EINVAL; 2281 + if (file->f_op->mmap_prepare) 2282 + return compat_vma_mmap_prepare(file, vma); 2281 2283 2282 2284 return file->f_op->mmap(file, vma); 2283 2285 }
+1 -1
include/linux/key.h
··· 236 236 #define KEY_FLAG_ROOT_CAN_INVAL 7 /* set if key can be invalidated by root without permission */ 237 237 #define KEY_FLAG_KEEP 8 /* set if key should not be removed */ 238 238 #define KEY_FLAG_UID_KEYRING 9 /* set if key is a user or user session keyring */ 239 - #define KEY_FLAG_FINAL_PUT 10 /* set if final put has happened on key */ 239 + #define KEY_FLAG_USER_ALIVE 10 /* set if final put has not happened on key yet */ 240 240 241 241 /* the key type and key description string 242 242 * - the desc is used to match a key against search criteria
+2 -2
include/linux/scatterlist.h
··· 99 99 * @sg: The current sg entry 100 100 * 101 101 * Description: 102 - * Usually the next entry will be @sg@ + 1, but if this sg element is part 102 + * Usually the next entry will be @sg + 1, but if this sg element is part 103 103 * of a chained scatterlist, it could jump to the start of a new 104 104 * scatterlist array. 105 105 * ··· 254 254 * @sgl: Second scatterlist 255 255 * 256 256 * Description: 257 - * Links @prv@ and @sgl@ together, to form a longer scatterlist. 257 + * Links @prv and @sgl together, to form a longer scatterlist. 258 258 * 259 259 **/ 260 260 static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
+7 -4
include/net/bluetooth/hci_core.h
··· 242 242 __u8 mesh; 243 243 __u8 instance; 244 244 __u8 handle; 245 + __u8 sid; 245 246 __u32 flags; 246 247 __u16 timeout; 247 248 __u16 remaining_time; ··· 547 546 struct hci_conn_hash conn_hash; 548 547 549 548 struct list_head mesh_pending; 549 + struct mutex mgmt_pending_lock; 550 550 struct list_head mgmt_pending; 551 551 struct list_head reject_list; 552 552 struct list_head accept_list; ··· 1552 1550 u16 timeout); 1553 1551 struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst, 1554 1552 __u8 dst_type, struct bt_iso_qos *qos); 1555 - struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst, 1553 + struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst, __u8 sid, 1556 1554 struct bt_iso_qos *qos, 1557 1555 __u8 base_len, __u8 *base); 1558 1556 struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst, 1559 1557 __u8 dst_type, struct bt_iso_qos *qos); 1560 1558 struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst, 1561 1559 __u8 dst_type, __u8 sid, 1560 + struct bt_iso_qos *qos, 1562 1561 __u8 data_len, __u8 *data); 1563 1562 struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, 1564 1563 __u8 dst_type, __u8 sid, struct bt_iso_qos *qos); ··· 1834 1831 1835 1832 void hci_adv_instances_clear(struct hci_dev *hdev); 1836 1833 struct adv_info *hci_find_adv_instance(struct hci_dev *hdev, u8 instance); 1834 + struct adv_info *hci_find_adv_sid(struct hci_dev *hdev, u8 sid); 1837 1835 struct adv_info *hci_get_next_instance(struct hci_dev *hdev, u8 instance); 1838 1836 struct adv_info *hci_add_adv_instance(struct hci_dev *hdev, u8 instance, 1839 1837 u32 flags, u16 adv_data_len, u8 *adv_data, ··· 1842 1838 u16 timeout, u16 duration, s8 tx_power, 1843 1839 u32 min_interval, u32 max_interval, 1844 1840 u8 mesh_handle); 1845 - struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance, 1841 + struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance, u8 sid, 1846 1842 u32 flags, u8 data_len, u8 *data, 1847 1843 u32 min_interval, u32 max_interval); 1848 1844 int hci_set_adv_instance_data(struct hci_dev *hdev, u8 instance, ··· 2404 2400 u8 instance); 2405 2401 void mgmt_advertising_removed(struct sock *sk, struct hci_dev *hdev, 2406 2402 u8 instance); 2407 - void mgmt_adv_monitor_removed(struct hci_dev *hdev, u16 handle); 2408 2403 int mgmt_phy_configuration_changed(struct hci_dev *hdev, struct sock *skip); 2409 2404 void mgmt_adv_monitor_device_lost(struct hci_dev *hdev, u16 handle, 2410 2405 bdaddr_t *bdaddr, u8 addr_type);
+2 -2
include/net/bluetooth/hci_sync.h
··· 115 115 int hci_enable_advertising_sync(struct hci_dev *hdev); 116 116 int hci_enable_advertising(struct hci_dev *hdev); 117 117 118 - int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 data_len, 119 - u8 *data, u32 flags, u16 min_interval, 118 + int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 sid, 119 + u8 data_len, u8 *data, u32 flags, u16 min_interval, 120 120 u16 max_interval, u16 sync_interval); 121 121 122 122 int hci_disable_per_advertising_sync(struct hci_dev *hdev, u8 instance);
-8
include/net/sch_generic.h
··· 973 973 *backlog = qstats.backlog; 974 974 } 975 975 976 - static inline void qdisc_tree_flush_backlog(struct Qdisc *sch) 977 - { 978 - __u32 qlen, backlog; 979 - 980 - qdisc_qstats_qlen_backlog(sch, &qlen, &backlog); 981 - qdisc_tree_reduce_backlog(sch, qlen, backlog); 982 - } 983 - 984 976 static inline void qdisc_purge_queue(struct Qdisc *sch) 985 977 { 986 978 __u32 qlen, backlog;
+5 -2
include/net/sock.h
··· 3010 3010 int sk_ioctl(struct sock *sk, unsigned int cmd, void __user *arg); 3011 3011 static inline bool sk_is_readable(struct sock *sk) 3012 3012 { 3013 - if (sk->sk_prot->sock_is_readable) 3014 - return sk->sk_prot->sock_is_readable(sk); 3013 + const struct proto *prot = READ_ONCE(sk->sk_prot); 3014 + 3015 + if (prot->sock_is_readable) 3016 + return prot->sock_is_readable(sk); 3017 + 3015 3018 return false; 3016 3019 } 3017 3020 #endif /* _SOCK_H */
+2 -2
include/uapi/linux/bits.h
··· 4 4 #ifndef _UAPI_LINUX_BITS_H 5 5 #define _UAPI_LINUX_BITS_H 6 6 7 - #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (BITS_PER_LONG - 1 - (h)))) 7 + #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h)))) 8 8 9 - #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h)))) 9 + #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h)))) 10 10 11 11 #define __GENMASK_U128(h, l) \ 12 12 ((_BIT128((h)) << 1) - (_BIT128(l)))
+1
init/initramfs.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <linux/init.h> 3 3 #include <linux/async.h> 4 + #include <linux/export.h> 4 5 #include <linux/fs.h> 5 6 #include <linux/slab.h> 6 7 #include <linux/types.h>
+1
init/main.c
··· 13 13 #define DEBUG /* Enable initcall_debug */ 14 14 15 15 #include <linux/types.h> 16 + #include <linux/export.h> 16 17 #include <linux/extable.h> 17 18 #include <linux/module.h> 18 19 #include <linux/proc_fs.h>
+10 -2
io_uring/fdinfo.c
··· 141 141 142 142 if (ctx->flags & IORING_SETUP_SQPOLL) { 143 143 struct io_sq_data *sq = ctx->sq_data; 144 + struct task_struct *tsk; 144 145 146 + rcu_read_lock(); 147 + tsk = rcu_dereference(sq->thread); 145 148 /* 146 149 * sq->thread might be NULL if we raced with the sqpoll 147 150 * thread termination. 148 151 */ 149 - if (sq->thread) { 152 + if (tsk) { 153 + get_task_struct(tsk); 154 + rcu_read_unlock(); 155 + getrusage(tsk, RUSAGE_SELF, &sq_usage); 156 + put_task_struct(tsk); 150 157 sq_pid = sq->task_pid; 151 158 sq_cpu = sq->sq_cpu; 152 - getrusage(sq->thread, RUSAGE_SELF, &sq_usage); 153 159 sq_total_time = (sq_usage.ru_stime.tv_sec * 1000000 154 160 + sq_usage.ru_stime.tv_usec); 155 161 sq_work_time = sq->work_time; 162 + } else { 163 + rcu_read_unlock(); 156 164 } 157 165 } 158 166
+5 -2
io_uring/io_uring.c
··· 1523 1523 } 1524 1524 } 1525 1525 mutex_unlock(&ctx->uring_lock); 1526 + 1527 + if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) 1528 + io_move_task_work_from_local(ctx); 1526 1529 } 1527 1530 1528 1531 static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_events) ··· 2909 2906 struct task_struct *tsk; 2910 2907 2911 2908 io_sq_thread_park(sqd); 2912 - tsk = sqd->thread; 2909 + tsk = sqpoll_task_locked(sqd); 2913 2910 if (tsk && tsk->io_uring && tsk->io_uring->io_wq) 2914 2911 io_wq_cancel_cb(tsk->io_uring->io_wq, 2915 2912 io_cancel_ctx_cb, ctx, true); ··· 3145 3142 s64 inflight; 3146 3143 DEFINE_WAIT(wait); 3147 3144 3148 - WARN_ON_ONCE(sqd && sqd->thread != current); 3145 + WARN_ON_ONCE(sqd && sqpoll_task_locked(sqd) != current); 3149 3146 3150 3147 if (!current->io_uring) 3151 3148 return;
+4 -1
io_uring/kbuf.c
··· 270 270 /* truncate end piece, if needed, for non partial buffers */ 271 271 if (len > arg->max_len) { 272 272 len = arg->max_len; 273 - if (!(bl->flags & IOBL_INC)) 273 + if (!(bl->flags & IOBL_INC)) { 274 + if (iov != arg->iovs) 275 + break; 274 276 buf->len = len; 277 + } 275 278 } 276 279 277 280 iov->iov_base = u64_to_user_ptr(buf->addr);
+5 -2
io_uring/register.c
··· 273 273 if (ctx->flags & IORING_SETUP_SQPOLL) { 274 274 sqd = ctx->sq_data; 275 275 if (sqd) { 276 + struct task_struct *tsk; 277 + 276 278 /* 277 279 * Observe the correct sqd->lock -> ctx->uring_lock 278 280 * ordering. Fine to drop uring_lock here, we hold ··· 284 282 mutex_unlock(&ctx->uring_lock); 285 283 mutex_lock(&sqd->lock); 286 284 mutex_lock(&ctx->uring_lock); 287 - if (sqd->thread) 288 - tctx = sqd->thread->io_uring; 285 + tsk = sqpoll_task_locked(sqd); 286 + if (tsk) 287 + tctx = tsk->io_uring; 289 288 } 290 289 } else { 291 290 tctx = current->io_uring;
+28 -15
io_uring/sqpoll.c
··· 30 30 void io_sq_thread_unpark(struct io_sq_data *sqd) 31 31 __releases(&sqd->lock) 32 32 { 33 - WARN_ON_ONCE(sqd->thread == current); 33 + WARN_ON_ONCE(sqpoll_task_locked(sqd) == current); 34 34 35 35 /* 36 36 * Do the dance but not conditional clear_bit() because it'd race with ··· 46 46 void io_sq_thread_park(struct io_sq_data *sqd) 47 47 __acquires(&sqd->lock) 48 48 { 49 - WARN_ON_ONCE(data_race(sqd->thread) == current); 49 + struct task_struct *tsk; 50 50 51 51 atomic_inc(&sqd->park_pending); 52 52 set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state); 53 53 mutex_lock(&sqd->lock); 54 - if (sqd->thread) 55 - wake_up_process(sqd->thread); 54 + 55 + tsk = sqpoll_task_locked(sqd); 56 + if (tsk) { 57 + WARN_ON_ONCE(tsk == current); 58 + wake_up_process(tsk); 59 + } 56 60 } 57 61 58 62 void io_sq_thread_stop(struct io_sq_data *sqd) 59 63 { 60 - WARN_ON_ONCE(sqd->thread == current); 64 + struct task_struct *tsk; 65 + 61 66 WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state)); 62 67 63 68 set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state); 64 69 mutex_lock(&sqd->lock); 65 - if (sqd->thread) 66 - wake_up_process(sqd->thread); 70 + tsk = sqpoll_task_locked(sqd); 71 + if (tsk) { 72 + WARN_ON_ONCE(tsk == current); 73 + wake_up_process(tsk); 74 + } 67 75 mutex_unlock(&sqd->lock); 68 76 wait_for_completion(&sqd->exited); 69 77 } ··· 278 270 /* offload context creation failed, just exit */ 279 271 if (!current->io_uring) { 280 272 mutex_lock(&sqd->lock); 281 - sqd->thread = NULL; 273 + rcu_assign_pointer(sqd->thread, NULL); 274 + put_task_struct(current); 282 275 mutex_unlock(&sqd->lock); 283 276 goto err_out; 284 277 } ··· 388 379 io_sq_tw(&retry_list, UINT_MAX); 389 380 390 381 io_uring_cancel_generic(true, sqd); 391 - sqd->thread = NULL; 382 + rcu_assign_pointer(sqd->thread, NULL); 383 + put_task_struct(current); 392 384 list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) 393 385 atomic_or(IORING_SQ_NEED_WAKEUP, &ctx->rings->sq_flags); 394 386 io_run_task_work(); ··· 
494 484 goto err_sqpoll; 495 485 } 496 486 497 - sqd->thread = tsk; 487 + mutex_lock(&sqd->lock); 488 + rcu_assign_pointer(sqd->thread, tsk); 489 + mutex_unlock(&sqd->lock); 490 + 498 491 task_to_put = get_task_struct(tsk); 499 492 ret = io_uring_alloc_task_context(tsk, ctx); 500 493 wake_up_new_task(tsk); ··· 508 495 ret = -EINVAL; 509 496 goto err; 510 497 } 511 - 512 - if (task_to_put) 513 - put_task_struct(task_to_put); 514 498 return 0; 515 499 err_sqpoll: 516 500 complete(&ctx->sq_data->exited); ··· 525 515 int ret = -EINVAL; 526 516 527 517 if (sqd) { 518 + struct task_struct *tsk; 519 + 528 520 io_sq_thread_park(sqd); 529 521 /* Don't set affinity for a dying thread */ 530 - if (sqd->thread) 531 - ret = io_wq_cpu_affinity(sqd->thread->io_uring, mask); 522 + tsk = sqpoll_task_locked(sqd); 523 + if (tsk) 524 + ret = io_wq_cpu_affinity(tsk->io_uring, mask); 532 525 io_sq_thread_unpark(sqd); 533 526 } 534 527
+7 -1
io_uring/sqpoll.h
··· 8 8 /* ctx's that are using this sqd */ 9 9 struct list_head ctx_list; 10 10 11 - struct task_struct *thread; 11 + struct task_struct __rcu *thread; 12 12 struct wait_queue_head wait; 13 13 14 14 unsigned sq_thread_idle; ··· 29 29 void io_put_sq_data(struct io_sq_data *sqd); 30 30 void io_sqpoll_wait_sq(struct io_ring_ctx *ctx); 31 31 int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx, cpumask_var_t mask); 32 + 33 + static inline struct task_struct *sqpoll_task_locked(struct io_sq_data *sqd) 34 + { 35 + return rcu_dereference_protected(sqd->thread, 36 + lockdep_is_held(&sqd->lock)); 37 + }
+9
kernel/time/posix-cpu-timers.c
··· 1406 1406 lockdep_assert_irqs_disabled(); 1407 1407 1408 1408 /* 1409 + * Ensure that release_task(tsk) can't happen while 1410 + * handle_posix_cpu_timers() is running. Otherwise, a concurrent 1411 + * posix_cpu_timer_del() may fail to lock_task_sighand(tsk) and 1412 + * miss timer->it.cpu.firing != 0. 1413 + */ 1414 + if (tsk->exit_state) 1415 + return; 1416 + 1417 + /* 1409 1418 * If the actual expiry is deferred to task work context and the 1410 1419 * work is already scheduled there is no point to do anything here. 1411 1420 */
+1 -3
kernel/trace/trace_events_filter.c
··· 1437 1437 INIT_LIST_HEAD(&head->list); 1438 1438 1439 1439 item = kmalloc(sizeof(*item), GFP_KERNEL); 1440 - if (!item) { 1441 - kfree(head); 1440 + if (!item) 1442 1441 goto free_now; 1443 - } 1444 1442 1445 1443 item->filter = filter; 1446 1444 list_add_tail(&item->list, &head->list);
+4 -4
lib/scatterlist.c
··· 73 73 * Should only be used casually, it (currently) scans the entire list 74 74 * to get the last entry. 75 75 * 76 - * Note that the @sgl@ pointer passed in need not be the first one, 77 - * the important bit is that @nents@ denotes the number of entries that 78 - * exist from @sgl@. 76 + * Note that the @sgl pointer passed in need not be the first one, 77 + * the important bit is that @nents denotes the number of entries that 78 + * exist from @sgl. 79 79 * 80 80 **/ 81 81 struct scatterlist *sg_last(struct scatterlist *sgl, unsigned int nents) ··· 345 345 * @gfp_mask: GFP allocation mask 346 346 * 347 347 * Description: 348 - * Allocate and initialize an sg table. If @nents@ is larger than 348 + * Allocate and initialize an sg table. If @nents is larger than 349 349 * SG_MAX_SINGLE_ALLOC a chained sg table will be setup. 350 350 * 351 351 **/
-1
mm/damon/Kconfig
··· 4 4 5 5 config DAMON 6 6 bool "DAMON: Data Access Monitoring Framework" 7 - default y 8 7 help 9 8 This builds a framework that allows kernel subsystems to monitor 10 9 access frequency of each memory region. The information can be useful
+2
mm/madvise.c
··· 508 508 pte_offset_map_lock(mm, pmd, addr, &ptl); 509 509 if (!start_pte) 510 510 break; 511 + flush_tlb_batched_pending(mm); 511 512 arch_enter_lazy_mmu_mode(); 512 513 if (!err) 513 514 nr = 0; ··· 742 741 start_pte = pte; 743 742 if (!start_pte) 744 743 break; 744 + flush_tlb_batched_pending(mm); 745 745 arch_enter_lazy_mmu_mode(); 746 746 if (!err) 747 747 nr = 0;
+40
mm/util.c
··· 1131 1131 } 1132 1132 EXPORT_SYMBOL(flush_dcache_folio); 1133 1133 #endif 1134 + 1135 + /** 1136 + * compat_vma_mmap_prepare() - Apply the file's .mmap_prepare() hook to an 1137 + * existing VMA 1138 + * @file: The file which possesses an f_op->mmap_prepare() hook 1139 + * @vma: The VMA to apply the .mmap_prepare() hook to. 1140 + * 1141 + * Ordinarily, .mmap_prepare() is invoked directly upon mmap(). However, certain 1142 + * 'wrapper' file systems invoke a nested mmap hook of an underlying file. 1143 + * 1144 + * Until all filesystems are converted to use .mmap_prepare(), we must be 1145 + * conservative and continue to invoke these 'wrapper' filesystems using the 1146 + * deprecated .mmap() hook. 1147 + * 1148 + * However, we have a problem if the underlying file system possesses an 1149 + * .mmap_prepare() hook, as we are in a different context when we invoke the 1150 + * .mmap() hook, already having a VMA to deal with. 1151 + * 1152 + * compat_vma_mmap_prepare() is a compatibility function that takes VMA state, 1153 + * establishes a struct vm_area_desc descriptor, passes it to the underlying 1154 + * .mmap_prepare() hook and applies any changes performed by it. 1155 + * 1156 + * Once the conversion of filesystems is complete this function will no longer 1157 + * be required and will be removed. 1158 + * 1159 + * Returns: 0 on success or error. 1160 + */ 1161 + int compat_vma_mmap_prepare(struct file *file, struct vm_area_struct *vma) 1162 + { 1163 + struct vm_area_desc desc; 1164 + int err; 1165 + 1166 + err = file->f_op->mmap_prepare(vma_to_desc(vma, &desc)); 1167 + if (err) 1168 + return err; 1169 + set_vma_from_desc(vma, &desc); 1170 + 1171 + return 0; 1172 + } 1173 + EXPORT_SYMBOL(compat_vma_mmap_prepare);
+4 -19
mm/vma.c
··· 967 967 err = dup_anon_vma(next, middle, &anon_dup); 968 968 } 969 969 970 - if (err) 970 + if (err || commit_merge(vmg)) 971 971 goto abort; 972 - 973 - err = commit_merge(vmg); 974 - if (err) { 975 - VM_WARN_ON(err != -ENOMEM); 976 - 977 - if (anon_dup) 978 - unlink_anon_vmas(anon_dup); 979 - 980 - /* 981 - * We've cleaned up any cloned anon_vma's, no VMAs have been 982 - * modified, no harm no foul if the user requests that we not 983 - * report this and just give up, leaving the VMAs unmerged. 984 - */ 985 - if (!vmg->give_up_on_oom) 986 - vmg->state = VMA_MERGE_ERROR_NOMEM; 987 - return NULL; 988 - } 989 972 990 973 khugepaged_enter_vma(vmg->target, vmg->flags); 991 974 vmg->state = VMA_MERGE_SUCCESS; ··· 977 994 abort: 978 995 vma_iter_set(vmg->vmi, start); 979 996 vma_iter_load(vmg->vmi); 997 + 998 + if (anon_dup) 999 + unlink_anon_vmas(anon_dup); 980 1000 981 1001 /* 982 1002 * This means we have failed to clone anon_vma's correctly, but no ··· 3112 3126 userfaultfd_unmap_complete(mm, &uf); 3113 3127 return ret; 3114 3128 } 3115 - 3116 3129 3117 3130 /* Insert vm structure into process list sorted by address 3118 3131 * and into the inode's i_mmap tree. If vm_file is non-NULL
+47
mm/vma.h
··· 222 222 return 0; 223 223 } 224 224 225 + 226 + /* 227 + * Temporary helper functions for file systems which wrap an invocation of 228 + * f_op->mmap() but which might have an underlying file system which implements 229 + * f_op->mmap_prepare(). 230 + */ 231 + 232 + static inline struct vm_area_desc *vma_to_desc(struct vm_area_struct *vma, 233 + struct vm_area_desc *desc) 234 + { 235 + desc->mm = vma->vm_mm; 236 + desc->start = vma->vm_start; 237 + desc->end = vma->vm_end; 238 + 239 + desc->pgoff = vma->vm_pgoff; 240 + desc->file = vma->vm_file; 241 + desc->vm_flags = vma->vm_flags; 242 + desc->page_prot = vma->vm_page_prot; 243 + 244 + desc->vm_ops = NULL; 245 + desc->private_data = NULL; 246 + 247 + return desc; 248 + } 249 + 250 + static inline void set_vma_from_desc(struct vm_area_struct *vma, 251 + struct vm_area_desc *desc) 252 + { 253 + /* 254 + * Since we're invoking .mmap_prepare() despite having a partially 255 + * established VMA, we must take care to handle setting fields 256 + * correctly. 257 + */ 258 + 259 + /* Mutable fields. Populated with initial state. */ 260 + vma->vm_pgoff = desc->pgoff; 261 + if (vma->vm_file != desc->file) 262 + vma_set_file(vma, desc->file); 263 + if (vma->vm_flags != desc->vm_flags) 264 + vm_flags_set(vma, desc->vm_flags); 265 + vma->vm_page_prot = desc->page_prot; 266 + 267 + /* User-defined fields. */ 268 + vma->vm_ops = desc->vm_ops; 269 + vma->vm_private_data = desc->private_data; 270 + } 271 + 225 272 int 226 273 do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma, 227 274 struct mm_struct *mm, unsigned long start,
+10 -7
net/bluetooth/eir.c
··· 242 242 return ad_len; 243 243 } 244 244 245 - u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr) 245 + u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size) 246 246 { 247 247 struct adv_info *adv = NULL; 248 248 u8 ad_len = 0, flags = 0; ··· 286 286 /* If flags would still be empty, then there is no need to 287 287 * include the "Flags" AD field". 288 288 */ 289 - if (flags) { 289 + if (flags && (ad_len + eir_precalc_len(1) <= size)) { 290 290 ptr[0] = 0x02; 291 291 ptr[1] = EIR_FLAGS; 292 292 ptr[2] = flags; ··· 316 316 } 317 317 318 318 /* Provide Tx Power only if we can provide a valid value for it */ 319 - if (adv_tx_power != HCI_TX_POWER_INVALID) { 319 + if (adv_tx_power != HCI_TX_POWER_INVALID && 320 + (ad_len + eir_precalc_len(1) <= size)) { 320 321 ptr[0] = 0x02; 321 322 ptr[1] = EIR_TX_POWER; 322 323 ptr[2] = (u8)adv_tx_power; ··· 367 366 368 367 void *eir_get_service_data(u8 *eir, size_t eir_len, u16 uuid, size_t *len) 369 368 { 370 - while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, len))) { 369 + size_t dlen; 370 + 371 + while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, &dlen))) { 371 372 u16 value = get_unaligned_le16(eir); 372 373 373 374 if (uuid == value) { 374 375 if (len) 375 - *len -= 2; 376 + *len = dlen - 2; 376 377 return &eir[2]; 377 378 } 378 379 379 - eir += *len; 380 - eir_len -= *len; 380 + eir += dlen; 381 + eir_len -= dlen; 381 382 } 382 383 383 384 return NULL;
+1 -1
net/bluetooth/eir.h
··· 9 9 10 10 void eir_create(struct hci_dev *hdev, u8 *data); 11 11 12 - u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr); 12 + u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size); 13 13 u8 eir_create_scan_rsp(struct hci_dev *hdev, u8 instance, u8 *ptr); 14 14 u8 eir_create_per_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr); 15 15
+24 -7
net/bluetooth/hci_conn.c
··· 1501 1501 1502 1502 /* This function requires the caller holds hdev->lock */ 1503 1503 static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst, 1504 - struct bt_iso_qos *qos, __u8 base_len, 1505 - __u8 *base) 1504 + __u8 sid, struct bt_iso_qos *qos, 1505 + __u8 base_len, __u8 *base) 1506 1506 { 1507 1507 struct hci_conn *conn; 1508 1508 int err; ··· 1543 1543 return conn; 1544 1544 1545 1545 conn->state = BT_CONNECT; 1546 + conn->sid = sid; 1546 1547 1547 1548 hci_conn_hold(conn); 1548 1549 return conn; ··· 2063 2062 if (qos->bcast.bis) 2064 2063 sync_interval = interval * 4; 2065 2064 2066 - err = hci_start_per_adv_sync(hdev, qos->bcast.bis, conn->le_per_adv_data_len, 2065 + err = hci_start_per_adv_sync(hdev, qos->bcast.bis, conn->sid, 2066 + conn->le_per_adv_data_len, 2067 2067 conn->le_per_adv_data, flags, interval, 2068 2068 interval, sync_interval); 2069 2069 if (err) ··· 2136 2134 } 2137 2135 } 2138 2136 2139 - struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst, 2137 + struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst, __u8 sid, 2140 2138 struct bt_iso_qos *qos, 2141 2139 __u8 base_len, __u8 *base) 2142 2140 { ··· 2158 2156 base, base_len); 2159 2157 2160 2158 /* We need hci_conn object using the BDADDR_ANY as dst */ 2161 - conn = hci_add_bis(hdev, dst, qos, base_len, eir); 2159 + conn = hci_add_bis(hdev, dst, sid, qos, base_len, eir); 2162 2160 if (IS_ERR(conn)) 2163 2161 return conn; 2164 2162 ··· 2209 2207 } 2210 2208 2211 2209 struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst, 2212 - __u8 dst_type, struct bt_iso_qos *qos, 2210 + __u8 dst_type, __u8 sid, 2211 + struct bt_iso_qos *qos, 2213 2212 __u8 base_len, __u8 *base) 2214 2213 { 2215 2214 struct hci_conn *conn; 2216 2215 int err; 2217 2216 struct iso_list_data data; 2218 2217 2219 - conn = hci_bind_bis(hdev, dst, qos, base_len, base); 2218 + conn = hci_bind_bis(hdev, dst, sid, qos, base_len, base); 2220 2219 if (IS_ERR(conn)) 2221 
2220 return conn; 2222 2221 2223 2222 if (conn->state == BT_CONNECTED) 2224 2223 return conn; 2224 + 2225 + /* Check if SID needs to be allocated then search for the first 2226 + * available. 2227 + */ 2228 + if (conn->sid == HCI_SID_INVALID) { 2229 + u8 sid; 2230 + 2231 + for (sid = 0; sid <= 0x0f; sid++) { 2232 + if (!hci_find_adv_sid(hdev, sid)) { 2233 + conn->sid = sid; 2234 + break; 2235 + } 2236 + } 2237 + } 2225 2238 2226 2239 data.big = qos->bcast.big; 2227 2240 data.bis = qos->bcast.bis;
+20 -12
net/bluetooth/hci_core.c
··· 1585 1585 } 1586 1586 1587 1587 /* This function requires the caller holds hdev->lock */ 1588 + struct adv_info *hci_find_adv_sid(struct hci_dev *hdev, u8 sid) 1589 + { 1590 + struct adv_info *adv; 1591 + 1592 + list_for_each_entry(adv, &hdev->adv_instances, list) { 1593 + if (adv->sid == sid) 1594 + return adv; 1595 + } 1596 + 1597 + return NULL; 1598 + } 1599 + 1600 + /* This function requires the caller holds hdev->lock */ 1588 1601 struct adv_info *hci_get_next_instance(struct hci_dev *hdev, u8 instance) 1589 1602 { 1590 1603 struct adv_info *cur_instance; ··· 1749 1736 } 1750 1737 1751 1738 /* This function requires the caller holds hdev->lock */ 1752 - struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance, 1739 + struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance, u8 sid, 1753 1740 u32 flags, u8 data_len, u8 *data, 1754 1741 u32 min_interval, u32 max_interval) 1755 1742 { ··· 1761 1748 if (IS_ERR(adv)) 1762 1749 return adv; 1763 1750 1751 + adv->sid = sid; 1764 1752 adv->periodic = true; 1765 1753 adv->per_adv_data_len = data_len; 1766 1754 ··· 1891 1877 if (monitor->handle) 1892 1878 idr_remove(&hdev->adv_monitors_idr, monitor->handle); 1893 1879 1894 - if (monitor->state != ADV_MONITOR_STATE_NOT_REGISTERED) { 1880 + if (monitor->state != ADV_MONITOR_STATE_NOT_REGISTERED) 1895 1881 hdev->adv_monitors_cnt--; 1896 - mgmt_adv_monitor_removed(hdev, monitor->handle); 1897 - } 1898 1882 1899 1883 kfree(monitor); 1900 1884 } ··· 2499 2487 2500 2488 mutex_init(&hdev->lock); 2501 2489 mutex_init(&hdev->req_lock); 2490 + mutex_init(&hdev->mgmt_pending_lock); 2502 2491 2503 2492 ida_init(&hdev->unset_handle_ida); 2504 2493 ··· 3430 3417 3431 3418 bt_dev_err(hdev, "link tx timeout"); 3432 3419 3433 - rcu_read_lock(); 3420 + hci_dev_lock(hdev); 3434 3421 3435 3422 /* Kill stalled connections */ 3436 - list_for_each_entry_rcu(c, &h->list, list) { 3423 + list_for_each_entry(c, &h->list, list) { 3437 3424 if (c->type == type && 
c->sent) { 3438 3425 bt_dev_err(hdev, "killing stalled connection %pMR", 3439 3426 &c->dst); 3440 - /* hci_disconnect might sleep, so, we have to release 3441 - * the RCU read lock before calling it. 3442 - */ 3443 - rcu_read_unlock(); 3444 3427 hci_disconnect(c, HCI_ERROR_REMOTE_USER_TERM); 3445 - rcu_read_lock(); 3446 3428 } 3447 3429 } 3448 3430 3449 - rcu_read_unlock(); 3431 + hci_dev_unlock(hdev); 3450 3432 } 3451 3433 3452 3434 static struct hci_chan *hci_chan_sent(struct hci_dev *hdev, __u8 type,
+35 -10
net/bluetooth/hci_sync.c
··· 1261 1261 hci_cpu_to_le24(adv->min_interval, cp.min_interval); 1262 1262 hci_cpu_to_le24(adv->max_interval, cp.max_interval); 1263 1263 cp.tx_power = adv->tx_power; 1264 + cp.sid = adv->sid; 1264 1265 } else { 1265 1266 hci_cpu_to_le24(hdev->le_adv_min_interval, cp.min_interval); 1266 1267 hci_cpu_to_le24(hdev->le_adv_max_interval, cp.max_interval); 1267 1268 cp.tx_power = HCI_ADV_TX_POWER_NO_PREFERENCE; 1269 + cp.sid = 0x00; 1268 1270 } 1269 1271 1270 1272 secondary_adv = (flags & MGMT_ADV_FLAG_SEC_MASK); ··· 1561 1559 static int hci_adv_bcast_annoucement(struct hci_dev *hdev, struct adv_info *adv) 1562 1560 { 1563 1561 u8 bid[3]; 1564 - u8 ad[4 + 3]; 1562 + u8 ad[HCI_MAX_EXT_AD_LENGTH]; 1563 + u8 len; 1565 1564 1566 1565 /* Skip if NULL adv as instance 0x00 is used for general purpose 1567 1566 * advertising so it cannot used for the likes of Broadcast Announcement ··· 1588 1585 1589 1586 /* Generate Broadcast ID */ 1590 1587 get_random_bytes(bid, sizeof(bid)); 1591 - eir_append_service_data(ad, 0, 0x1852, bid, sizeof(bid)); 1592 - hci_set_adv_instance_data(hdev, adv->instance, sizeof(ad), ad, 0, NULL); 1588 + len = eir_append_service_data(ad, 0, 0x1852, bid, sizeof(bid)); 1589 + memcpy(ad + len, adv->adv_data, adv->adv_data_len); 1590 + hci_set_adv_instance_data(hdev, adv->instance, len + adv->adv_data_len, 1591 + ad, 0, NULL); 1593 1592 1594 1593 return hci_update_adv_data_sync(hdev, adv->instance); 1595 1594 } 1596 1595 1597 - int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 data_len, 1598 - u8 *data, u32 flags, u16 min_interval, 1596 + int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 sid, 1597 + u8 data_len, u8 *data, u32 flags, u16 min_interval, 1599 1598 u16 max_interval, u16 sync_interval) 1600 1599 { 1601 1600 struct adv_info *adv = NULL; ··· 1608 1603 1609 1604 if (instance) { 1610 1605 adv = hci_find_adv_instance(hdev, instance); 1611 - /* Create an instance if that could not be found */ 1612 - if (!adv) { 1613 - adv 
= hci_add_per_instance(hdev, instance, flags, 1606 + if (adv) { 1607 + if (sid != HCI_SID_INVALID && adv->sid != sid) { 1608 + /* If the SID don't match attempt to find by 1609 + * SID. 1610 + */ 1611 + adv = hci_find_adv_sid(hdev, sid); 1612 + if (!adv) { 1613 + bt_dev_err(hdev, 1614 + "Unable to find adv_info"); 1615 + return -EINVAL; 1616 + } 1617 + } 1618 + 1619 + /* Turn it into periodic advertising */ 1620 + adv->periodic = true; 1621 + adv->per_adv_data_len = data_len; 1622 + if (data) 1623 + memcpy(adv->per_adv_data, data, data_len); 1624 + adv->flags = flags; 1625 + } else if (!adv) { 1626 + /* Create an instance if that could not be found */ 1627 + adv = hci_add_per_instance(hdev, instance, sid, flags, 1614 1628 data_len, data, 1615 1629 sync_interval, 1616 1630 sync_interval); ··· 1836 1812 return 0; 1837 1813 } 1838 1814 1839 - len = eir_create_adv_data(hdev, instance, pdu->data); 1815 + len = eir_create_adv_data(hdev, instance, pdu->data, 1816 + HCI_MAX_EXT_AD_LENGTH); 1840 1817 1841 1818 pdu->length = len; 1842 1819 pdu->handle = adv ? adv->handle : instance; ··· 1868 1843 1869 1844 memset(&cp, 0, sizeof(cp)); 1870 1845 1871 - len = eir_create_adv_data(hdev, instance, cp.data); 1846 + len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data)); 1872 1847 1873 1848 /* There's nothing to do if the data hasn't changed */ 1874 1849 if (hdev->adv_data_len == len &&
+12 -5
net/bluetooth/iso.c
··· 336 336 struct hci_dev *hdev; 337 337 int err; 338 338 339 - BT_DBG("%pMR", &iso_pi(sk)->src); 339 + BT_DBG("%pMR (SID 0x%2.2x)", &iso_pi(sk)->src, iso_pi(sk)->bc_sid); 340 340 341 341 hdev = hci_get_route(&iso_pi(sk)->dst, &iso_pi(sk)->src, 342 342 iso_pi(sk)->src_type); ··· 365 365 366 366 /* Just bind if DEFER_SETUP has been set */ 367 367 if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) { 368 - hcon = hci_bind_bis(hdev, &iso_pi(sk)->dst, 368 + hcon = hci_bind_bis(hdev, &iso_pi(sk)->dst, iso_pi(sk)->bc_sid, 369 369 &iso_pi(sk)->qos, iso_pi(sk)->base_len, 370 370 iso_pi(sk)->base); 371 371 if (IS_ERR(hcon)) { ··· 375 375 } else { 376 376 hcon = hci_connect_bis(hdev, &iso_pi(sk)->dst, 377 377 le_addr_type(iso_pi(sk)->dst_type), 378 - &iso_pi(sk)->qos, iso_pi(sk)->base_len, 379 - iso_pi(sk)->base); 378 + iso_pi(sk)->bc_sid, &iso_pi(sk)->qos, 379 + iso_pi(sk)->base_len, iso_pi(sk)->base); 380 380 if (IS_ERR(hcon)) { 381 381 err = PTR_ERR(hcon); 382 382 goto unlock; 383 383 } 384 + 385 + /* Update SID if it was not set */ 386 + if (iso_pi(sk)->bc_sid == HCI_SID_INVALID) 387 + iso_pi(sk)->bc_sid = hcon->sid; 384 388 } 385 389 386 390 conn = iso_conn_add(hcon); ··· 1341 1337 addr->sa_family = AF_BLUETOOTH; 1342 1338 1343 1339 if (peer) { 1340 + struct hci_conn *hcon = iso_pi(sk)->conn ? 1341 + iso_pi(sk)->conn->hcon : NULL; 1342 + 1344 1343 bacpy(&sa->iso_bdaddr, &iso_pi(sk)->dst); 1345 1344 sa->iso_bdaddr_type = iso_pi(sk)->dst_type; 1346 1345 1347 - if (test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags)) { 1346 + if (hcon && hcon->type == BIS_LINK) { 1348 1347 sa->iso_bc->bc_sid = iso_pi(sk)->bc_sid; 1349 1348 sa->iso_bc->bc_num_bis = iso_pi(sk)->bc_num_bis; 1350 1349 memcpy(sa->iso_bc->bc_bis, iso_pi(sk)->bc_bis,
+61 -79
net/bluetooth/mgmt.c
··· 1447 1447 1448 1448 send_settings_rsp(cmd->sk, cmd->opcode, match->hdev); 1449 1449 1450 - list_del(&cmd->list); 1451 - 1452 1450 if (match->sk == NULL) { 1453 1451 match->sk = cmd->sk; 1454 1452 sock_hold(match->sk); 1455 1453 } 1456 - 1457 - mgmt_pending_free(cmd); 1458 1454 } 1459 1455 1460 1456 static void cmd_status_rsp(struct mgmt_pending_cmd *cmd, void *data) 1461 1457 { 1462 1458 u8 *status = data; 1463 1459 1464 - mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, *status); 1465 - mgmt_pending_remove(cmd); 1460 + mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, *status); 1466 1461 } 1467 1462 1468 1463 static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data) ··· 1471 1476 1472 1477 if (cmd->cmd_complete) { 1473 1478 cmd->cmd_complete(cmd, match->mgmt_status); 1474 - mgmt_pending_remove(cmd); 1475 - 1476 1479 return; 1477 1480 } 1478 1481 ··· 1479 1486 1480 1487 static int generic_cmd_complete(struct mgmt_pending_cmd *cmd, u8 status) 1481 1488 { 1482 - return mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status, 1489 + return mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status, 1483 1490 cmd->param, cmd->param_len); 1484 1491 } 1485 1492 1486 1493 static int addr_cmd_complete(struct mgmt_pending_cmd *cmd, u8 status) 1487 1494 { 1488 - return mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status, 1495 + return mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status, 1489 1496 cmd->param, sizeof(struct mgmt_addr_info)); 1490 1497 } 1491 1498 ··· 1525 1532 1526 1533 if (err) { 1527 1534 u8 mgmt_err = mgmt_status(err); 1528 - mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err); 1535 + mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err); 1529 1536 hci_dev_clear_flag(hdev, HCI_LIMITED_DISCOVERABLE); 1530 1537 goto done; 1531 1538 } ··· 1700 1707 1701 1708 if (err) { 1702 1709 u8 mgmt_err = mgmt_status(err); 1703 - mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err); 1710 + 
mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err); 1704 1711 goto done; 1705 1712 } 1706 1713 ··· 1936 1943 new_settings(hdev, NULL); 1937 1944 } 1938 1945 1939 - mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, cmd_status_rsp, 1940 - &mgmt_err); 1946 + mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true, 1947 + cmd_status_rsp, &mgmt_err); 1941 1948 return; 1942 1949 } 1943 1950 ··· 1947 1954 changed = hci_dev_test_and_clear_flag(hdev, HCI_SSP_ENABLED); 1948 1955 } 1949 1956 1950 - mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, settings_rsp, &match); 1957 + mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true, settings_rsp, &match); 1951 1958 1952 1959 if (changed) 1953 1960 new_settings(hdev, match.sk); ··· 2067 2074 bt_dev_dbg(hdev, "err %d", err); 2068 2075 2069 2076 if (status) { 2070 - mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, cmd_status_rsp, 2071 - &status); 2077 + mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, cmd_status_rsp, 2078 + &status); 2072 2079 return; 2073 2080 } 2074 2081 2075 - mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, settings_rsp, &match); 2082 + mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, settings_rsp, &match); 2076 2083 2077 2084 new_settings(hdev, match.sk); 2078 2085 ··· 2131 2138 struct sock *sk = cmd->sk; 2132 2139 2133 2140 if (status) { 2134 - mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, 2141 + mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true, 2135 2142 cmd_status_rsp, &status); 2136 2143 return; 2137 2144 } ··· 2631 2638 2632 2639 bt_dev_dbg(hdev, "err %d", err); 2633 2640 2634 - mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, 2641 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, 2635 2642 mgmt_status(err), hdev->dev_class, 3); 2636 2643 2637 2644 mgmt_pending_free(cmd); ··· 3420 3427 bacpy(&rp.addr.bdaddr, &conn->dst); 3421 3428 rp.addr.type = link_to_bdaddr(conn->type, conn->dst_type); 3422 3429 3423 - err = mgmt_cmd_complete(cmd->sk, cmd->index, MGMT_OP_PAIR_DEVICE, 3430 + err = 
mgmt_cmd_complete(cmd->sk, cmd->hdev->id, MGMT_OP_PAIR_DEVICE, 3424 3431 status, &rp, sizeof(rp)); 3425 3432 3426 3433 /* So we don't get further callbacks for this connection */ ··· 5101 5108 mgmt_event(MGMT_EV_ADV_MONITOR_ADDED, hdev, &ev, sizeof(ev), sk); 5102 5109 } 5103 5110 5104 - void mgmt_adv_monitor_removed(struct hci_dev *hdev, u16 handle) 5111 + static void mgmt_adv_monitor_removed(struct sock *sk, struct hci_dev *hdev, 5112 + __le16 handle) 5105 5113 { 5106 5114 struct mgmt_ev_adv_monitor_removed ev; 5107 - struct mgmt_pending_cmd *cmd; 5108 - struct sock *sk_skip = NULL; 5109 - struct mgmt_cp_remove_adv_monitor *cp; 5110 5115 5111 - cmd = pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev); 5112 - if (cmd) { 5113 - cp = cmd->param; 5116 + ev.monitor_handle = handle; 5114 5117 5115 - if (cp->monitor_handle) 5116 - sk_skip = cmd->sk; 5117 - } 5118 - 5119 - ev.monitor_handle = cpu_to_le16(handle); 5120 - 5121 - mgmt_event(MGMT_EV_ADV_MONITOR_REMOVED, hdev, &ev, sizeof(ev), sk_skip); 5118 + mgmt_event(MGMT_EV_ADV_MONITOR_REMOVED, hdev, &ev, sizeof(ev), sk); 5122 5119 } 5123 5120 5124 5121 static int read_adv_mon_features(struct sock *sk, struct hci_dev *hdev, ··· 5179 5196 hci_update_passive_scan(hdev); 5180 5197 } 5181 5198 5182 - mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, 5199 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, 5183 5200 mgmt_status(status), &rp, sizeof(rp)); 5184 5201 mgmt_pending_remove(cmd); 5185 5202 ··· 5210 5227 5211 5228 if (pending_find(MGMT_OP_SET_LE, hdev) || 5212 5229 pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR, hdev) || 5213 - pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev) || 5214 - pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev)) { 5230 + pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev)) { 5215 5231 status = MGMT_STATUS_BUSY; 5216 5232 goto unlock; 5217 5233 } ··· 5380 5398 struct mgmt_pending_cmd *cmd = data; 5381 5399 struct mgmt_cp_remove_adv_monitor *cp; 5382 5400 5383 - if (status == 
-ECANCELED || 5384 - cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev)) 5401 + if (status == -ECANCELED) 5385 5402 return; 5386 5403 5387 5404 hci_dev_lock(hdev); ··· 5389 5408 5390 5409 rp.monitor_handle = cp->monitor_handle; 5391 5410 5392 - if (!status) 5411 + if (!status) { 5412 + mgmt_adv_monitor_removed(cmd->sk, hdev, cp->monitor_handle); 5393 5413 hci_update_passive_scan(hdev); 5414 + } 5394 5415 5395 - mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, 5416 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, 5396 5417 mgmt_status(status), &rp, sizeof(rp)); 5397 - mgmt_pending_remove(cmd); 5418 + mgmt_pending_free(cmd); 5398 5419 5399 5420 hci_dev_unlock(hdev); 5400 5421 bt_dev_dbg(hdev, "remove monitor %d complete, status %d", ··· 5406 5423 static int mgmt_remove_adv_monitor_sync(struct hci_dev *hdev, void *data) 5407 5424 { 5408 5425 struct mgmt_pending_cmd *cmd = data; 5409 - 5410 - if (cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev)) 5411 - return -ECANCELED; 5412 - 5413 5426 struct mgmt_cp_remove_adv_monitor *cp = cmd->param; 5414 5427 u16 handle = __le16_to_cpu(cp->monitor_handle); 5415 5428 ··· 5424 5445 hci_dev_lock(hdev); 5425 5446 5426 5447 if (pending_find(MGMT_OP_SET_LE, hdev) || 5427 - pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev) || 5428 5448 pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR, hdev) || 5429 5449 pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev)) { 5430 5450 status = MGMT_STATUS_BUSY; 5431 5451 goto unlock; 5432 5452 } 5433 5453 5434 - cmd = mgmt_pending_add(sk, MGMT_OP_REMOVE_ADV_MONITOR, hdev, data, len); 5454 + cmd = mgmt_pending_new(sk, MGMT_OP_REMOVE_ADV_MONITOR, hdev, data, len); 5435 5455 if (!cmd) { 5436 5456 status = MGMT_STATUS_NO_RESOURCES; 5437 5457 goto unlock; ··· 5440 5462 mgmt_remove_adv_monitor_complete); 5441 5463 5442 5464 if (err) { 5443 - mgmt_pending_remove(cmd); 5465 + mgmt_pending_free(cmd); 5444 5466 5445 5467 if (err == -ENOMEM) 5446 5468 status = MGMT_STATUS_NO_RESOURCES; ··· 5770 
5792 cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev)) 5771 5793 return; 5772 5794 5773 - mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err), 5795 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err), 5774 5796 cmd->param, 1); 5775 5797 mgmt_pending_remove(cmd); 5776 5798 ··· 5991 6013 5992 6014 bt_dev_dbg(hdev, "err %d", err); 5993 6015 5994 - mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err), 6016 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err), 5995 6017 cmd->param, 1); 5996 6018 mgmt_pending_remove(cmd); 5997 6019 ··· 6216 6238 u8 status = mgmt_status(err); 6217 6239 6218 6240 if (status) { 6219 - mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, 6241 + mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true, 6220 6242 cmd_status_rsp, &status); 6221 6243 return; 6222 6244 } ··· 6226 6248 else 6227 6249 hci_dev_clear_flag(hdev, HCI_ADVERTISING); 6228 6250 6229 - mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, settings_rsp, 6251 + mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true, settings_rsp, 6230 6252 &match); 6231 6253 6232 6254 new_settings(hdev, match.sk); ··· 6570 6592 */ 6571 6593 hci_dev_clear_flag(hdev, HCI_BREDR_ENABLED); 6572 6594 6573 - mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err); 6595 + mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err); 6574 6596 } else { 6575 6597 send_settings_rsp(cmd->sk, MGMT_OP_SET_BREDR, hdev); 6576 6598 new_settings(hdev, cmd->sk); ··· 6707 6729 if (err) { 6708 6730 u8 mgmt_err = mgmt_status(err); 6709 6731 6710 - mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err); 6732 + mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err); 6711 6733 goto done; 6712 6734 } 6713 6735 ··· 7154 7176 rp.max_tx_power = HCI_TX_POWER_INVALID; 7155 7177 } 7156 7178 7157 - mgmt_cmd_complete(cmd->sk, cmd->index, MGMT_OP_GET_CONN_INFO, status, 7179 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, 
MGMT_OP_GET_CONN_INFO, status, 7158 7180 &rp, sizeof(rp)); 7159 7181 7160 7182 mgmt_pending_free(cmd); ··· 7314 7336 } 7315 7337 7316 7338 complete: 7317 - mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status, &rp, 7339 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status, &rp, 7318 7340 sizeof(rp)); 7319 7341 7320 7342 mgmt_pending_free(cmd); ··· 8564 8586 rp.instance = cp->instance; 8565 8587 8566 8588 if (err) 8567 - mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, 8589 + mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, 8568 8590 mgmt_status(err)); 8569 8591 else 8570 - mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, 8592 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, 8571 8593 mgmt_status(err), &rp, sizeof(rp)); 8572 8594 8573 8595 add_adv_complete(hdev, cmd->sk, cp->instance, err); ··· 8755 8777 8756 8778 hci_remove_adv_instance(hdev, cp->instance); 8757 8779 8758 - mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, 8780 + mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, 8759 8781 mgmt_status(err)); 8760 8782 } else { 8761 - mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, 8783 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, 8762 8784 mgmt_status(err), &rp, sizeof(rp)); 8763 8785 } 8764 8786 ··· 8905 8927 rp.instance = cp->instance; 8906 8928 8907 8929 if (err) 8908 - mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, 8930 + mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, 8909 8931 mgmt_status(err)); 8910 8932 else 8911 - mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, 8933 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, 8912 8934 mgmt_status(err), &rp, sizeof(rp)); 8913 8935 8914 8936 mgmt_pending_free(cmd); ··· 9067 9089 rp.instance = cp->instance; 9068 9090 9069 9091 if (err) 9070 - mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, 9092 + mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, 9071 9093 mgmt_status(err)); 9072 9094 else 9073 - mgmt_cmd_complete(cmd->sk, cmd->index, 
cmd->opcode, 9095 + mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, 9074 9096 MGMT_STATUS_SUCCESS, &rp, sizeof(rp)); 9075 9097 9076 9098 mgmt_pending_free(cmd); ··· 9342 9364 if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks)) 9343 9365 return; 9344 9366 9345 - mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match); 9367 + mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match); 9346 9368 9347 9369 if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) { 9348 9370 mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev, NULL, 0, ··· 9380 9402 hci_update_passive_scan(hdev); 9381 9403 } 9382 9404 9383 - mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match); 9405 + mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, true, settings_rsp, 9406 + &match); 9384 9407 9385 9408 new_settings(hdev, match.sk); 9386 9409 ··· 9396 9417 struct cmd_lookup match = { NULL, hdev }; 9397 9418 u8 zero_cod[] = { 0, 0, 0 }; 9398 9419 9399 - mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match); 9420 + mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, true, settings_rsp, 9421 + &match); 9400 9422 9401 9423 /* If the power off is because of hdev unregistration let 9402 9424 * use the appropriate INVALID_INDEX status. 
Otherwise use ··· 9411 9431 else 9412 9432 match.mgmt_status = MGMT_STATUS_NOT_POWERED; 9413 9433 9414 - mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match); 9434 + mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match); 9415 9435 9416 9436 if (memcmp(hdev->dev_class, zero_cod, sizeof(zero_cod)) != 0) { 9417 9437 mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev, ··· 9652 9672 device_unpaired(hdev, &cp->addr.bdaddr, cp->addr.type, cmd->sk); 9653 9673 9654 9674 cmd->cmd_complete(cmd, 0); 9655 - mgmt_pending_remove(cmd); 9656 9675 } 9657 9676 9658 9677 bool mgmt_powering_down(struct hci_dev *hdev) ··· 9707 9728 struct mgmt_cp_disconnect *cp; 9708 9729 struct mgmt_pending_cmd *cmd; 9709 9730 9710 - mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, unpair_device_rsp, 9711 - hdev); 9731 + mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, true, 9732 + unpair_device_rsp, hdev); 9712 9733 9713 9734 cmd = pending_find(MGMT_OP_DISCONNECT, hdev); 9714 9735 if (!cmd) ··· 9901 9922 9902 9923 if (status) { 9903 9924 u8 mgmt_err = mgmt_status(status); 9904 - mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, 9925 + mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, true, 9905 9926 cmd_status_rsp, &mgmt_err); 9906 9927 return; 9907 9928 } ··· 9911 9932 else 9912 9933 changed = hci_dev_test_and_clear_flag(hdev, HCI_LINK_SECURITY); 9913 9934 9914 - mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, settings_rsp, 9915 - &match); 9935 + mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, true, 9936 + settings_rsp, &match); 9916 9937 9917 9938 if (changed) 9918 9939 new_settings(hdev, match.sk); ··· 9936 9957 { 9937 9958 struct cmd_lookup match = { NULL, hdev, mgmt_status(status) }; 9938 9959 9939 - mgmt_pending_foreach(MGMT_OP_SET_DEV_CLASS, hdev, sk_lookup, &match); 9940 - mgmt_pending_foreach(MGMT_OP_ADD_UUID, hdev, sk_lookup, &match); 9941 - mgmt_pending_foreach(MGMT_OP_REMOVE_UUID, hdev, sk_lookup, &match); 9960 + 
mgmt_pending_foreach(MGMT_OP_SET_DEV_CLASS, hdev, false, sk_lookup, 9961 + &match); 9962 + mgmt_pending_foreach(MGMT_OP_ADD_UUID, hdev, false, sk_lookup, 9963 + &match); 9964 + mgmt_pending_foreach(MGMT_OP_REMOVE_UUID, hdev, false, sk_lookup, 9965 + &match); 9942 9966 9943 9967 if (!status) { 9944 9968 mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev, dev_class,
+27 -5
net/bluetooth/mgmt_util.c
··· 217 217 struct mgmt_pending_cmd *mgmt_pending_find(unsigned short channel, u16 opcode, 218 218 struct hci_dev *hdev) 219 219 { 220 - struct mgmt_pending_cmd *cmd; 220 + struct mgmt_pending_cmd *cmd, *tmp; 221 221 222 - list_for_each_entry(cmd, &hdev->mgmt_pending, list) { 222 + mutex_lock(&hdev->mgmt_pending_lock); 223 + 224 + list_for_each_entry_safe(cmd, tmp, &hdev->mgmt_pending, list) { 223 225 if (hci_sock_get_channel(cmd->sk) != channel) 224 226 continue; 225 - if (cmd->opcode == opcode) 227 + 228 + if (cmd->opcode == opcode) { 229 + mutex_unlock(&hdev->mgmt_pending_lock); 226 230 return cmd; 231 + } 227 232 } 233 + 234 + mutex_unlock(&hdev->mgmt_pending_lock); 228 235 229 236 return NULL; 230 237 } 231 238 232 - void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, 239 + void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, bool remove, 233 240 void (*cb)(struct mgmt_pending_cmd *cmd, void *data), 234 241 void *data) 235 242 { 236 243 struct mgmt_pending_cmd *cmd, *tmp; 237 244 245 + mutex_lock(&hdev->mgmt_pending_lock); 246 + 238 247 list_for_each_entry_safe(cmd, tmp, &hdev->mgmt_pending, list) { 239 248 if (opcode > 0 && cmd->opcode != opcode) 240 249 continue; 241 250 251 + if (remove) 252 + list_del(&cmd->list); 253 + 242 254 cb(cmd, data); 255 + 256 + if (remove) 257 + mgmt_pending_free(cmd); 243 258 } 259 + 260 + mutex_unlock(&hdev->mgmt_pending_lock); 244 261 } 245 262 246 263 struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode, ··· 271 254 return NULL; 272 255 273 256 cmd->opcode = opcode; 274 - cmd->index = hdev->id; 257 + cmd->hdev = hdev; 275 258 276 259 cmd->param = kmemdup(data, len, GFP_KERNEL); 277 260 if (!cmd->param) { ··· 297 280 if (!cmd) 298 281 return NULL; 299 282 283 + mutex_lock(&hdev->mgmt_pending_lock); 300 284 list_add_tail(&cmd->list, &hdev->mgmt_pending); 285 + mutex_unlock(&hdev->mgmt_pending_lock); 301 286 302 287 return cmd; 303 288 } ··· 313 294 314 295 void mgmt_pending_remove(struct 
mgmt_pending_cmd *cmd) 315 296 { 297 + mutex_lock(&cmd->hdev->mgmt_pending_lock); 316 298 list_del(&cmd->list); 299 + mutex_unlock(&cmd->hdev->mgmt_pending_lock); 300 + 317 301 mgmt_pending_free(cmd); 318 302 } 319 303
+2 -2
net/bluetooth/mgmt_util.h
··· 33 33 struct mgmt_pending_cmd { 34 34 struct list_head list; 35 35 u16 opcode; 36 - int index; 36 + struct hci_dev *hdev; 37 37 void *param; 38 38 size_t param_len; 39 39 struct sock *sk; ··· 54 54 55 55 struct mgmt_pending_cmd *mgmt_pending_find(unsigned short channel, u16 opcode, 56 56 struct hci_dev *hdev); 57 - void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, 57 + void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, bool remove, 58 58 void (*cb)(struct mgmt_pending_cmd *cmd, void *data), 59 59 void *data); 60 60 struct mgmt_pending_cmd *mgmt_pending_add(struct sock *sk, u16 opcode,
+13 -6
net/core/filter.c
··· 3233 3233 .arg1_type = ARG_PTR_TO_CTX, 3234 3234 }; 3235 3235 3236 + static void bpf_skb_change_protocol(struct sk_buff *skb, u16 proto) 3237 + { 3238 + skb->protocol = htons(proto); 3239 + if (skb_valid_dst(skb)) 3240 + skb_dst_drop(skb); 3241 + } 3242 + 3236 3243 static int bpf_skb_generic_push(struct sk_buff *skb, u32 off, u32 len) 3237 3244 { 3238 3245 /* Caller already did skb_cow() with len as headroom, ··· 3336 3329 } 3337 3330 } 3338 3331 3339 - skb->protocol = htons(ETH_P_IPV6); 3332 + bpf_skb_change_protocol(skb, ETH_P_IPV6); 3340 3333 skb_clear_hash(skb); 3341 3334 3342 3335 return 0; ··· 3366 3359 } 3367 3360 } 3368 3361 3369 - skb->protocol = htons(ETH_P_IP); 3362 + bpf_skb_change_protocol(skb, ETH_P_IP); 3370 3363 skb_clear_hash(skb); 3371 3364 3372 3365 return 0; ··· 3557 3550 /* Match skb->protocol to new outer l3 protocol */ 3558 3551 if (skb->protocol == htons(ETH_P_IP) && 3559 3552 flags & BPF_F_ADJ_ROOM_ENCAP_L3_IPV6) 3560 - skb->protocol = htons(ETH_P_IPV6); 3553 + bpf_skb_change_protocol(skb, ETH_P_IPV6); 3561 3554 else if (skb->protocol == htons(ETH_P_IPV6) && 3562 3555 flags & BPF_F_ADJ_ROOM_ENCAP_L3_IPV4) 3563 - skb->protocol = htons(ETH_P_IP); 3556 + bpf_skb_change_protocol(skb, ETH_P_IP); 3564 3557 } 3565 3558 3566 3559 if (skb_is_gso(skb)) { ··· 3613 3606 /* Match skb->protocol to new outer l3 protocol */ 3614 3607 if (skb->protocol == htons(ETH_P_IP) && 3615 3608 flags & BPF_F_ADJ_ROOM_DECAP_L3_IPV6) 3616 - skb->protocol = htons(ETH_P_IPV6); 3609 + bpf_skb_change_protocol(skb, ETH_P_IPV6); 3617 3610 else if (skb->protocol == htons(ETH_P_IPV6) && 3618 3611 flags & BPF_F_ADJ_ROOM_DECAP_L3_IPV4) 3619 - skb->protocol = htons(ETH_P_IP); 3612 + bpf_skb_change_protocol(skb, ETH_P_IP); 3620 3613 3621 3614 if (skb_is_gso(skb)) { 3622 3615 struct skb_shared_info *shinfo = skb_shinfo(skb);
+2 -1
net/ethtool/ioctl.c
··· 1083 1083 ethtool_get_flow_spec_ring(info.fs.ring_cookie)) 1084 1084 return -EINVAL; 1085 1085 1086 - if (!xa_load(&dev->ethtool->rss_ctx, info.rss_context)) 1086 + if (info.rss_context && 1087 + !xa_load(&dev->ethtool->rss_ctx, info.rss_context)) 1087 1088 return -EINVAL; 1088 1089 } 1089 1090
+55 -55
net/ipv6/route.c
··· 3737 3737 } 3738 3738 } 3739 3739 3740 + static int fib6_config_validate(struct fib6_config *cfg, 3741 + struct netlink_ext_ack *extack) 3742 + { 3743 + /* RTF_PCPU is an internal flag; can not be set by userspace */ 3744 + if (cfg->fc_flags & RTF_PCPU) { 3745 + NL_SET_ERR_MSG(extack, "Userspace can not set RTF_PCPU"); 3746 + goto errout; 3747 + } 3748 + 3749 + /* RTF_CACHE is an internal flag; can not be set by userspace */ 3750 + if (cfg->fc_flags & RTF_CACHE) { 3751 + NL_SET_ERR_MSG(extack, "Userspace can not set RTF_CACHE"); 3752 + goto errout; 3753 + } 3754 + 3755 + if (cfg->fc_type > RTN_MAX) { 3756 + NL_SET_ERR_MSG(extack, "Invalid route type"); 3757 + goto errout; 3758 + } 3759 + 3760 + if (cfg->fc_dst_len > 128) { 3761 + NL_SET_ERR_MSG(extack, "Invalid prefix length"); 3762 + goto errout; 3763 + } 3764 + 3765 + #ifdef CONFIG_IPV6_SUBTREES 3766 + if (cfg->fc_src_len > 128) { 3767 + NL_SET_ERR_MSG(extack, "Invalid source address length"); 3768 + goto errout; 3769 + } 3770 + 3771 + if (cfg->fc_nh_id && cfg->fc_src_len) { 3772 + NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing"); 3773 + goto errout; 3774 + } 3775 + #else 3776 + if (cfg->fc_src_len) { 3777 + NL_SET_ERR_MSG(extack, 3778 + "Specifying source address requires IPV6_SUBTREES to be enabled"); 3779 + goto errout; 3780 + } 3781 + #endif 3782 + return 0; 3783 + errout: 3784 + return -EINVAL; 3785 + } 3786 + 3740 3787 static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg, 3741 3788 gfp_t gfp_flags, 3742 3789 struct netlink_ext_ack *extack) ··· 3932 3885 { 3933 3886 struct fib6_info *rt; 3934 3887 int err; 3888 + 3889 + err = fib6_config_validate(cfg, extack); 3890 + if (err) 3891 + return err; 3935 3892 3936 3893 rt = ip6_route_info_create(cfg, gfp_flags, extack); 3937 3894 if (IS_ERR(rt)) ··· 4530 4479 rcu_read_unlock(); 4531 4480 } 4532 4481 4533 - static int fib6_config_validate(struct fib6_config *cfg, 4534 - struct netlink_ext_ack *extack) 4535 - { 4536 - /* 
RTF_PCPU is an internal flag; can not be set by userspace */ 4537 - if (cfg->fc_flags & RTF_PCPU) { 4538 - NL_SET_ERR_MSG(extack, "Userspace can not set RTF_PCPU"); 4539 - goto errout; 4540 - } 4541 - 4542 - /* RTF_CACHE is an internal flag; can not be set by userspace */ 4543 - if (cfg->fc_flags & RTF_CACHE) { 4544 - NL_SET_ERR_MSG(extack, "Userspace can not set RTF_CACHE"); 4545 - goto errout; 4546 - } 4547 - 4548 - if (cfg->fc_type > RTN_MAX) { 4549 - NL_SET_ERR_MSG(extack, "Invalid route type"); 4550 - goto errout; 4551 - } 4552 - 4553 - if (cfg->fc_dst_len > 128) { 4554 - NL_SET_ERR_MSG(extack, "Invalid prefix length"); 4555 - goto errout; 4556 - } 4557 - 4558 - #ifdef CONFIG_IPV6_SUBTREES 4559 - if (cfg->fc_src_len > 128) { 4560 - NL_SET_ERR_MSG(extack, "Invalid source address length"); 4561 - goto errout; 4562 - } 4563 - 4564 - if (cfg->fc_nh_id && cfg->fc_src_len) { 4565 - NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing"); 4566 - goto errout; 4567 - } 4568 - #else 4569 - if (cfg->fc_src_len) { 4570 - NL_SET_ERR_MSG(extack, 4571 - "Specifying source address requires IPV6_SUBTREES to be enabled"); 4572 - goto errout; 4573 - } 4574 - #endif 4575 - return 0; 4576 - errout: 4577 - return -EINVAL; 4578 - } 4579 - 4580 4482 static void rtmsg_to_fib6_config(struct net *net, 4581 4483 struct in6_rtmsg *rtmsg, 4582 4484 struct fib6_config *cfg) ··· 4567 4563 4568 4564 switch (cmd) { 4569 4565 case SIOCADDRT: 4570 - err = fib6_config_validate(&cfg, NULL); 4571 - if (err) 4572 - break; 4573 - 4574 4566 /* Only do the default setting of fc_metric in route adding */ 4575 4567 if (cfg.fc_metric == 0) 4576 4568 cfg.fc_metric = IP6_RT_PRIO_USER; ··· 5402 5402 int nhn = 0; 5403 5403 int err; 5404 5404 5405 + err = fib6_config_validate(cfg, extack); 5406 + if (err) 5407 + return err; 5408 + 5405 5409 replace = (cfg->fc_nlinfo.nlh && 5406 5410 (cfg->fc_nlinfo.nlh->nlmsg_flags & NLM_F_REPLACE)); 5407 5411 ··· 5638 5634 5639 5635 err = 
rtm_to_fib6_config(skb, nlh, &cfg, extack); 5640 5636 if (err < 0) 5641 - return err; 5642 - 5643 - err = fib6_config_validate(&cfg, extack); 5644 - if (err) 5645 5637 return err; 5646 5638 5647 5639 if (cfg.fc_metric == 0)
+1 -1
net/sched/sch_ets.c
··· 661 661 for (i = q->nbands; i < oldbands; i++) { 662 662 if (i >= q->nstrict && q->classes[i].qdisc->q.qlen) 663 663 list_del_init(&q->classes[i].alist); 664 - qdisc_tree_flush_backlog(q->classes[i].qdisc); 664 + qdisc_purge_queue(q->classes[i].qdisc); 665 665 } 666 666 WRITE_ONCE(q->nstrict, nstrict); 667 667 memcpy(q->prio2band, priomap, sizeof(priomap));
+1 -1
net/sched/sch_prio.c
··· 211 211 memcpy(q->prio2band, qopt->priomap, TC_PRIO_MAX+1); 212 212 213 213 for (i = q->bands; i < oldbands; i++) 214 - qdisc_tree_flush_backlog(q->queues[i]); 214 + qdisc_purge_queue(q->queues[i]); 215 215 216 216 for (i = oldbands; i < q->bands; i++) { 217 217 q->queues[i] = queues[i];
+1 -1
net/sched/sch_red.c
··· 285 285 q->userbits = userbits; 286 286 q->limit = ctl->limit; 287 287 if (child) { 288 - qdisc_tree_flush_backlog(q->qdisc); 288 + qdisc_purge_queue(q->qdisc); 289 289 old_child = q->qdisc; 290 290 q->qdisc = child; 291 291 }
+12 -3
net/sched/sch_sfq.c
··· 310 310 /* It is difficult to believe, but ALL THE SLOTS HAVE LENGTH 1. */ 311 311 x = q->tail->next; 312 312 slot = &q->slots[x]; 313 - q->tail->next = slot->next; 313 + if (slot->next == x) 314 + q->tail = NULL; /* no more active slots */ 315 + else 316 + q->tail->next = slot->next; 314 317 q->ht[slot->hash] = SFQ_EMPTY_SLOT; 315 318 goto drop; 316 319 } ··· 656 653 NL_SET_ERR_MSG_MOD(extack, "invalid quantum"); 657 654 return -EINVAL; 658 655 } 656 + 657 + if (ctl->perturb_period < 0 || 658 + ctl->perturb_period > INT_MAX / HZ) { 659 + NL_SET_ERR_MSG_MOD(extack, "invalid perturb period"); 660 + return -EINVAL; 661 + } 662 + perturb_period = ctl->perturb_period * HZ; 663 + 659 664 if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max, 660 665 ctl_v1->Wlog, ctl_v1->Scell_log, NULL)) 661 666 return -EINVAL; ··· 680 669 headdrop = q->headdrop; 681 670 maxdepth = q->maxdepth; 682 671 maxflows = q->maxflows; 683 - perturb_period = q->perturb_period; 684 672 quantum = q->quantum; 685 673 flags = q->flags; 686 674 687 675 /* update and validate configuration */ 688 676 if (ctl->quantum) 689 677 quantum = ctl->quantum; 690 - perturb_period = ctl->perturb_period * HZ; 691 678 if (ctl->flows) 692 679 maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS); 693 680 if (ctl->divisor) {
+1 -1
net/sched/sch_tbf.c
··· 452 452 453 453 sch_tree_lock(sch); 454 454 if (child) { 455 - qdisc_tree_flush_backlog(q->qdisc); 455 + qdisc_purge_queue(q->qdisc); 456 456 old = q->qdisc; 457 457 q->qdisc = child; 458 458 }
+2 -1
net/unix/af_unix.c
··· 1971 1971 if (UNIXCB(skb).pid) 1972 1972 return; 1973 1973 1974 - if (unix_may_passcred(sk) || unix_may_passcred(other)) { 1974 + if (unix_may_passcred(sk) || unix_may_passcred(other) || 1975 + !other->sk_socket) { 1975 1976 UNIXCB(skb).pid = get_pid(task_tgid(current)); 1976 1977 current_uid_gid(&UNIXCB(skb).uid, &UNIXCB(skb).gid); 1977 1978 }
+1 -1
net/wireless/nl80211.c
··· 1583 1583 1584 1584 return result; 1585 1585 error: 1586 - kfree(result); 1586 + kfree_sensitive(result); 1587 1587 return ERR_PTR(err); 1588 1588 } 1589 1589
+8
rust/helpers/cpu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/smp.h> 4 + 5 + unsigned int rust_helper_raw_smp_processor_id(void) 6 + { 7 + return raw_smp_processor_id(); 8 + }
+1
rust/helpers/helpers.c
··· 13 13 #include "build_assert.c" 14 14 #include "build_bug.c" 15 15 #include "clk.c" 16 + #include "cpu.c" 16 17 #include "cpufreq.c" 17 18 #include "cpumask.c" 18 19 #include "cred.c"
+123 -2
rust/kernel/cpu.rs
··· 6 6 7 7 use crate::{bindings, device::Device, error::Result, prelude::ENODEV}; 8 8 9 + /// Returns the maximum number of possible CPUs in the current system configuration. 10 + #[inline] 11 + pub fn nr_cpu_ids() -> u32 { 12 + #[cfg(any(NR_CPUS_1, CONFIG_FORCE_NR_CPUS))] 13 + { 14 + bindings::NR_CPUS 15 + } 16 + 17 + #[cfg(not(any(NR_CPUS_1, CONFIG_FORCE_NR_CPUS)))] 18 + // SAFETY: `nr_cpu_ids` is a valid global provided by the kernel. 19 + unsafe { 20 + bindings::nr_cpu_ids 21 + } 22 + } 23 + 24 + /// The CPU ID. 25 + /// 26 + /// Represents a CPU identifier as a wrapper around an [`u32`]. 27 + /// 28 + /// # Invariants 29 + /// 30 + /// The CPU ID lies within the range `[0, nr_cpu_ids())`. 31 + /// 32 + /// # Examples 33 + /// 34 + /// ``` 35 + /// use kernel::cpu::CpuId; 36 + /// 37 + /// let cpu = 0; 38 + /// 39 + /// // SAFETY: 0 is always a valid CPU number. 40 + /// let id = unsafe { CpuId::from_u32_unchecked(cpu) }; 41 + /// 42 + /// assert_eq!(id.as_u32(), cpu); 43 + /// assert!(CpuId::from_i32(0).is_some()); 44 + /// assert!(CpuId::from_i32(-1).is_none()); 45 + /// ``` 46 + #[derive(Copy, Clone, PartialEq, Eq, Debug)] 47 + pub struct CpuId(u32); 48 + 49 + impl CpuId { 50 + /// Creates a new [`CpuId`] from the given `id` without checking bounds. 51 + /// 52 + /// # Safety 53 + /// 54 + /// The caller must ensure that `id` is a valid CPU ID (i.e., `0 <= id < nr_cpu_ids()`). 55 + #[inline] 56 + pub unsafe fn from_i32_unchecked(id: i32) -> Self { 57 + debug_assert!(id >= 0); 58 + debug_assert!((id as u32) < nr_cpu_ids()); 59 + 60 + // INVARIANT: The function safety guarantees `id` is a valid CPU id. 61 + Self(id as u32) 62 + } 63 + 64 + /// Creates a new [`CpuId`] from the given `id`, checking that it is valid. 65 + pub fn from_i32(id: i32) -> Option<Self> { 66 + if id < 0 || id as u32 >= nr_cpu_ids() { 67 + None 68 + } else { 69 + // INVARIANT: `id` has just been checked as a valid CPU ID. 
70 + Some(Self(id as u32)) 71 + } 72 + } 73 + 74 + /// Creates a new [`CpuId`] from the given `id` without checking bounds. 75 + /// 76 + /// # Safety 77 + /// 78 + /// The caller must ensure that `id` is a valid CPU ID (i.e., `0 <= id < nr_cpu_ids()`). 79 + #[inline] 80 + pub unsafe fn from_u32_unchecked(id: u32) -> Self { 81 + debug_assert!(id < nr_cpu_ids()); 82 + 83 + // Ensure the `id` fits in an [`i32`] as it's also representable that way. 84 + debug_assert!(id <= i32::MAX as u32); 85 + 86 + // INVARIANT: The function safety guarantees `id` is a valid CPU id. 87 + Self(id) 88 + } 89 + 90 + /// Creates a new [`CpuId`] from the given `id`, checking that it is valid. 91 + pub fn from_u32(id: u32) -> Option<Self> { 92 + if id >= nr_cpu_ids() { 93 + None 94 + } else { 95 + // INVARIANT: `id` has just been checked as a valid CPU ID. 96 + Some(Self(id)) 97 + } 98 + } 99 + 100 + /// Returns CPU number. 101 + #[inline] 102 + pub fn as_u32(&self) -> u32 { 103 + self.0 104 + } 105 + 106 + /// Returns the ID of the CPU the code is currently running on. 107 + /// 108 + /// The returned value is considered unstable because it may change 109 + /// unexpectedly due to preemption or CPU migration. It should only be 110 + /// used when the context ensures that the task remains on the same CPU 111 + /// or the users could use a stale (yet valid) CPU ID. 112 + pub fn current() -> Self { 113 + // SAFETY: raw_smp_processor_id() always returns a valid CPU ID. 114 + unsafe { Self::from_u32_unchecked(bindings::raw_smp_processor_id()) } 115 + } 116 + } 117 + 118 + impl From<CpuId> for u32 { 119 + fn from(id: CpuId) -> Self { 120 + id.as_u32() 121 + } 122 + } 123 + 124 + impl From<CpuId> for i32 { 125 + fn from(id: CpuId) -> Self { 126 + id.as_u32() as i32 127 + } 128 + } 129 + 9 130 /// Creates a new instance of CPU's device. 10 131 /// 11 132 /// # Safety ··· 138 17 /// Callers must ensure that the CPU device is not used after it has been unregistered. 
139 18 /// This can be achieved, for example, by registering a CPU hotplug notifier and removing 140 19 /// any references to the CPU device within the notifier's callback. 141 - pub unsafe fn from_cpu(cpu: u32) -> Result<&'static Device> { 20 + pub unsafe fn from_cpu(cpu: CpuId) -> Result<&'static Device> { 142 21 // SAFETY: It is safe to call `get_cpu_device()` for any CPU. 143 - let ptr = unsafe { bindings::get_cpu_device(cpu) }; 22 + let ptr = unsafe { bindings::get_cpu_device(u32::from(cpu)) }; 144 23 if ptr.is_null() { 145 24 return Err(ENODEV); 146 25 }
+128 -45
rust/kernel/cpufreq.rs
··· 10 10 11 11 use crate::{ 12 12 clk::Hertz, 13 + cpu::CpuId, 13 14 cpumask, 14 15 device::{Bound, Device}, 15 16 devres::Devres, ··· 466 465 467 466 /// Returns the primary CPU for the [`Policy`]. 468 467 #[inline] 469 - pub fn cpu(&self) -> u32 { 470 - self.as_ref().cpu 468 + pub fn cpu(&self) -> CpuId { 469 + // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number. 470 + unsafe { CpuId::from_u32_unchecked(self.as_ref().cpu) } 471 471 } 472 472 473 473 /// Returns the minimum frequency for the [`Policy`]. ··· 527 525 #[inline] 528 526 pub fn generic_get(&self) -> Result<u32> { 529 527 // SAFETY: By the type invariant, the pointer stored in `self` is valid. 530 - Ok(unsafe { bindings::cpufreq_generic_get(self.cpu()) }) 528 + Ok(unsafe { bindings::cpufreq_generic_get(u32::from(self.cpu())) }) 531 529 } 532 530 533 531 /// Provides a wrapper to the register with energy model using the OPP core. ··· 680 678 struct PolicyCpu<'a>(&'a mut Policy); 681 679 682 680 impl<'a> PolicyCpu<'a> { 683 - fn from_cpu(cpu: u32) -> Result<Self> { 681 + fn from_cpu(cpu: CpuId) -> Result<Self> { 684 682 // SAFETY: It is safe to call `cpufreq_cpu_get` for any valid CPU. 685 - let ptr = from_err_ptr(unsafe { bindings::cpufreq_cpu_get(cpu) })?; 683 + let ptr = from_err_ptr(unsafe { bindings::cpufreq_cpu_get(u32::from(cpu)) })?; 686 684 687 685 Ok(Self( 688 686 // SAFETY: The `ptr` is guaranteed to be valid and remains valid for the lifetime of ··· 1057 1055 impl<T: Driver> Registration<T> { 1058 1056 /// Driver's `init` callback. 1059 1057 /// 1060 - /// SAFETY: Called from C. Inputs must be valid pointers. 1061 - extern "C" fn init_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1058 + /// # Safety 1059 + /// 1060 + /// - This function may only be called from the cpufreq C infrastructure. 1061 + /// - The pointer arguments must be valid pointers. 
1062 + unsafe extern "C" fn init_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1062 1063 from_result(|| { 1063 1064 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1064 1065 // lifetime of `policy`. ··· 1075 1070 1076 1071 /// Driver's `exit` callback. 1077 1072 /// 1078 - /// SAFETY: Called from C. Inputs must be valid pointers. 1079 - extern "C" fn exit_callback(ptr: *mut bindings::cpufreq_policy) { 1073 + /// # Safety 1074 + /// 1075 + /// - This function may only be called from the cpufreq C infrastructure. 1076 + /// - The pointer arguments must be valid pointers. 1077 + unsafe extern "C" fn exit_callback(ptr: *mut bindings::cpufreq_policy) { 1080 1078 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1081 1079 // lifetime of `policy`. 1082 1080 let policy = unsafe { Policy::from_raw_mut(ptr) }; ··· 1090 1082 1091 1083 /// Driver's `online` callback. 1092 1084 /// 1093 - /// SAFETY: Called from C. Inputs must be valid pointers. 1094 - extern "C" fn online_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1085 + /// # Safety 1086 + /// 1087 + /// - This function may only be called from the cpufreq C infrastructure. 1088 + /// - The pointer arguments must be valid pointers. 1089 + unsafe extern "C" fn online_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1095 1090 from_result(|| { 1096 1091 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1097 1092 // lifetime of `policy`. ··· 1105 1094 1106 1095 /// Driver's `offline` callback. 1107 1096 /// 1108 - /// SAFETY: Called from C. Inputs must be valid pointers. 1109 - extern "C" fn offline_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1097 + /// # Safety 1098 + /// 1099 + /// - This function may only be called from the cpufreq C infrastructure. 1100 + /// - The pointer arguments must be valid pointers. 
1101 + unsafe extern "C" fn offline_callback( 1102 + ptr: *mut bindings::cpufreq_policy, 1103 + ) -> kernel::ffi::c_int { 1110 1104 from_result(|| { 1111 1105 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1112 1106 // lifetime of `policy`. ··· 1122 1106 1123 1107 /// Driver's `suspend` callback. 1124 1108 /// 1125 - /// SAFETY: Called from C. Inputs must be valid pointers. 1126 - extern "C" fn suspend_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1109 + /// # Safety 1110 + /// 1111 + /// - This function may only be called from the cpufreq C infrastructure. 1112 + /// - The pointer arguments must be valid pointers. 1113 + unsafe extern "C" fn suspend_callback( 1114 + ptr: *mut bindings::cpufreq_policy, 1115 + ) -> kernel::ffi::c_int { 1127 1116 from_result(|| { 1128 1117 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1129 1118 // lifetime of `policy`. ··· 1139 1118 1140 1119 /// Driver's `resume` callback. 1141 1120 /// 1142 - /// SAFETY: Called from C. Inputs must be valid pointers. 1143 - extern "C" fn resume_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1121 + /// # Safety 1122 + /// 1123 + /// - This function may only be called from the cpufreq C infrastructure. 1124 + /// - The pointer arguments must be valid pointers. 1125 + unsafe extern "C" fn resume_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1144 1126 from_result(|| { 1145 1127 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1146 1128 // lifetime of `policy`. ··· 1154 1130 1155 1131 /// Driver's `ready` callback. 1156 1132 /// 1157 - /// SAFETY: Called from C. Inputs must be valid pointers. 1158 - extern "C" fn ready_callback(ptr: *mut bindings::cpufreq_policy) { 1133 + /// # Safety 1134 + /// 1135 + /// - This function may only be called from the cpufreq C infrastructure. 
1136 + /// - The pointer arguments must be valid pointers. 1137 + unsafe extern "C" fn ready_callback(ptr: *mut bindings::cpufreq_policy) { 1159 1138 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1160 1139 // lifetime of `policy`. 1161 1140 let policy = unsafe { Policy::from_raw_mut(ptr) }; ··· 1167 1140 1168 1141 /// Driver's `verify` callback. 1169 1142 /// 1170 - /// SAFETY: Called from C. Inputs must be valid pointers. 1171 - extern "C" fn verify_callback(ptr: *mut bindings::cpufreq_policy_data) -> kernel::ffi::c_int { 1143 + /// # Safety 1144 + /// 1145 + /// - This function may only be called from the cpufreq C infrastructure. 1146 + /// - The pointer arguments must be valid pointers. 1147 + unsafe extern "C" fn verify_callback( 1148 + ptr: *mut bindings::cpufreq_policy_data, 1149 + ) -> kernel::ffi::c_int { 1172 1150 from_result(|| { 1173 1151 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1174 1152 // lifetime of `policy`. ··· 1184 1152 1185 1153 /// Driver's `setpolicy` callback. 1186 1154 /// 1187 - /// SAFETY: Called from C. Inputs must be valid pointers. 1188 - extern "C" fn setpolicy_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int { 1155 + /// # Safety 1156 + /// 1157 + /// - This function may only be called from the cpufreq C infrastructure. 1158 + /// - The pointer arguments must be valid pointers. 1159 + unsafe extern "C" fn setpolicy_callback( 1160 + ptr: *mut bindings::cpufreq_policy, 1161 + ) -> kernel::ffi::c_int { 1189 1162 from_result(|| { 1190 1163 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1191 1164 // lifetime of `policy`. ··· 1201 1164 1202 1165 /// Driver's `target` callback. 1203 1166 /// 1204 - /// SAFETY: Called from C. Inputs must be valid pointers. 
1205 - extern "C" fn target_callback( 1167 + /// # Safety 1168 + /// 1169 + /// - This function may only be called from the cpufreq C infrastructure. 1170 + /// - The pointer arguments must be valid pointers. 1171 + unsafe extern "C" fn target_callback( 1206 1172 ptr: *mut bindings::cpufreq_policy, 1207 1173 target_freq: u32, 1208 1174 relation: u32, ··· 1220 1180 1221 1181 /// Driver's `target_index` callback. 1222 1182 /// 1223 - /// SAFETY: Called from C. Inputs must be valid pointers. 1224 - extern "C" fn target_index_callback( 1183 + /// # Safety 1184 + /// 1185 + /// - This function may only be called from the cpufreq C infrastructure. 1186 + /// - The pointer arguments must be valid pointers. 1187 + unsafe extern "C" fn target_index_callback( 1225 1188 ptr: *mut bindings::cpufreq_policy, 1226 1189 index: u32, 1227 1190 ) -> kernel::ffi::c_int { ··· 1243 1200 1244 1201 /// Driver's `fast_switch` callback. 1245 1202 /// 1246 - /// SAFETY: Called from C. Inputs must be valid pointers. 1247 - extern "C" fn fast_switch_callback( 1203 + /// # Safety 1204 + /// 1205 + /// - This function may only be called from the cpufreq C infrastructure. 1206 + /// - The pointer arguments must be valid pointers. 1207 + unsafe extern "C" fn fast_switch_callback( 1248 1208 ptr: *mut bindings::cpufreq_policy, 1249 1209 target_freq: u32, 1250 1210 ) -> kernel::ffi::c_uint { ··· 1258 1212 } 1259 1213 1260 1214 /// Driver's `adjust_perf` callback. 1261 - extern "C" fn adjust_perf_callback( 1215 + /// 1216 + /// # Safety 1217 + /// 1218 + /// - This function may only be called from the cpufreq C infrastructure. 1219 + unsafe extern "C" fn adjust_perf_callback( 1262 1220 cpu: u32, 1263 1221 min_perf: usize, 1264 1222 target_perf: usize, 1265 1223 capacity: usize, 1266 1224 ) { 1267 - if let Ok(mut policy) = PolicyCpu::from_cpu(cpu) { 1225 + // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number. 
1226 + let cpu_id = unsafe { CpuId::from_u32_unchecked(cpu) }; 1227 + 1228 + if let Ok(mut policy) = PolicyCpu::from_cpu(cpu_id) { 1268 1229 T::adjust_perf(&mut policy, min_perf, target_perf, capacity); 1269 1230 } 1270 1231 } 1271 1232 1272 1233 /// Driver's `get_intermediate` callback. 1273 1234 /// 1274 - /// SAFETY: Called from C. Inputs must be valid pointers. 1275 - extern "C" fn get_intermediate_callback( 1235 + /// # Safety 1236 + /// 1237 + /// - This function may only be called from the cpufreq C infrastructure. 1238 + /// - The pointer arguments must be valid pointers. 1239 + unsafe extern "C" fn get_intermediate_callback( 1276 1240 ptr: *mut bindings::cpufreq_policy, 1277 1241 index: u32, 1278 1242 ) -> kernel::ffi::c_uint { ··· 1299 1243 1300 1244 /// Driver's `target_intermediate` callback. 1301 1245 /// 1302 - /// SAFETY: Called from C. Inputs must be valid pointers. 1303 - extern "C" fn target_intermediate_callback( 1246 + /// # Safety 1247 + /// 1248 + /// - This function may only be called from the cpufreq C infrastructure. 1249 + /// - The pointer arguments must be valid pointers. 1250 + unsafe extern "C" fn target_intermediate_callback( 1304 1251 ptr: *mut bindings::cpufreq_policy, 1305 1252 index: u32, 1306 1253 ) -> kernel::ffi::c_int { ··· 1321 1262 } 1322 1263 1323 1264 /// Driver's `get` callback. 1324 - extern "C" fn get_callback(cpu: u32) -> kernel::ffi::c_uint { 1325 - PolicyCpu::from_cpu(cpu).map_or(0, |mut policy| T::get(&mut policy).map_or(0, |f| f)) 1265 + /// 1266 + /// # Safety 1267 + /// 1268 + /// - This function may only be called from the cpufreq C infrastructure. 1269 + unsafe extern "C" fn get_callback(cpu: u32) -> kernel::ffi::c_uint { 1270 + // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number. 
1271 + let cpu_id = unsafe { CpuId::from_u32_unchecked(cpu) }; 1272 + 1273 + PolicyCpu::from_cpu(cpu_id).map_or(0, |mut policy| T::get(&mut policy).map_or(0, |f| f)) 1326 1274 } 1327 1275 1328 1276 /// Driver's `update_limit` callback. 1329 - extern "C" fn update_limits_callback(ptr: *mut bindings::cpufreq_policy) { 1277 + /// 1278 + /// # Safety 1279 + /// 1280 + /// - This function may only be called from the cpufreq C infrastructure. 1281 + /// - The pointer arguments must be valid pointers. 1282 + unsafe extern "C" fn update_limits_callback(ptr: *mut bindings::cpufreq_policy) { 1330 1283 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1331 1284 // lifetime of `policy`. 1332 1285 let policy = unsafe { Policy::from_raw_mut(ptr) }; ··· 1347 1276 1348 1277 /// Driver's `bios_limit` callback. 1349 1278 /// 1350 - /// SAFETY: Called from C. Inputs must be valid pointers. 1351 - extern "C" fn bios_limit_callback(cpu: i32, limit: *mut u32) -> kernel::ffi::c_int { 1279 + /// # Safety 1280 + /// 1281 + /// - This function may only be called from the cpufreq C infrastructure. 1282 + /// - The pointer arguments must be valid pointers. 1283 + unsafe extern "C" fn bios_limit_callback(cpu: i32, limit: *mut u32) -> kernel::ffi::c_int { 1284 + // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number. 1285 + let cpu_id = unsafe { CpuId::from_i32_unchecked(cpu) }; 1286 + 1352 1287 from_result(|| { 1353 - let mut policy = PolicyCpu::from_cpu(cpu as u32)?; 1288 + let mut policy = PolicyCpu::from_cpu(cpu_id)?; 1354 1289 1355 1290 // SAFETY: `limit` is guaranteed by the C code to be valid. 1356 1291 T::bios_limit(&mut policy, &mut (unsafe { *limit })).map(|()| 0) ··· 1365 1288 1366 1289 /// Driver's `set_boost` callback. 1367 1290 /// 1368 - /// SAFETY: Called from C. Inputs must be valid pointers. 
1369 - extern "C" fn set_boost_callback( 1291 + /// # Safety 1292 + /// 1293 + /// - This function may only be called from the cpufreq C infrastructure. 1294 + /// - The pointer arguments must be valid pointers. 1295 + unsafe extern "C" fn set_boost_callback( 1370 1296 ptr: *mut bindings::cpufreq_policy, 1371 1297 state: i32, 1372 1298 ) -> kernel::ffi::c_int { ··· 1383 1303 1384 1304 /// Driver's `register_em` callback. 1385 1305 /// 1386 - /// SAFETY: Called from C. Inputs must be valid pointers. 1387 - extern "C" fn register_em_callback(ptr: *mut bindings::cpufreq_policy) { 1306 + /// # Safety 1307 + /// 1308 + /// - This function may only be called from the cpufreq C infrastructure. 1309 + /// - The pointer arguments must be valid pointers. 1310 + unsafe extern "C" fn register_em_callback(ptr: *mut bindings::cpufreq_policy) { 1388 1311 // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the 1389 1312 // lifetime of `policy`. 1390 1313 let policy = unsafe { Policy::from_raw_mut(ptr) };
+36 -15
rust/kernel/cpumask.rs
··· 6 6 7 7 use crate::{ 8 8 alloc::{AllocError, Flags}, 9 + cpu::CpuId, 9 10 prelude::*, 10 11 types::Opaque, 11 12 }; ··· 36 35 /// 37 36 /// ``` 38 37 /// use kernel::bindings; 38 + /// use kernel::cpu::CpuId; 39 39 /// use kernel::cpumask::Cpumask; 40 40 /// 41 - /// fn set_clear_cpu(ptr: *mut bindings::cpumask, set_cpu: u32, clear_cpu: i32) { 41 + /// fn set_clear_cpu(ptr: *mut bindings::cpumask, set_cpu: CpuId, clear_cpu: CpuId) { 42 42 /// // SAFETY: The `ptr` is valid for writing and remains valid for the lifetime of the 43 43 /// // returned reference. 44 44 /// let mask = unsafe { Cpumask::as_mut_ref(ptr) }; ··· 92 90 /// This mismatches kernel naming convention and corresponds to the C 93 91 /// function `__cpumask_set_cpu()`. 94 92 #[inline] 95 - pub fn set(&mut self, cpu: u32) { 93 + pub fn set(&mut self, cpu: CpuId) { 96 94 // SAFETY: By the type invariant, `self.as_raw` is a valid argument to `__cpumask_set_cpu`. 97 - unsafe { bindings::__cpumask_set_cpu(cpu, self.as_raw()) }; 95 + unsafe { bindings::__cpumask_set_cpu(u32::from(cpu), self.as_raw()) }; 98 96 } 99 97 100 98 /// Clear `cpu` in the cpumask. ··· 103 101 /// This mismatches kernel naming convention and corresponds to the C 104 102 /// function `__cpumask_clear_cpu()`. 105 103 #[inline] 106 - pub fn clear(&mut self, cpu: i32) { 104 + pub fn clear(&mut self, cpu: CpuId) { 107 105 // SAFETY: By the type invariant, `self.as_raw` is a valid argument to 108 106 // `__cpumask_clear_cpu`. 109 - unsafe { bindings::__cpumask_clear_cpu(cpu, self.as_raw()) }; 107 + unsafe { bindings::__cpumask_clear_cpu(i32::from(cpu), self.as_raw()) }; 110 108 } 111 109 112 110 /// Test `cpu` in the cpumask. 113 111 /// 114 112 /// Equivalent to the kernel's `cpumask_test_cpu` API. 115 113 #[inline] 116 - pub fn test(&self, cpu: i32) -> bool { 114 + pub fn test(&self, cpu: CpuId) -> bool { 117 115 // SAFETY: By the type invariant, `self.as_raw` is a valid argument to `cpumask_test_cpu`. 
118 - unsafe { bindings::cpumask_test_cpu(cpu, self.as_raw()) } 116 + unsafe { bindings::cpumask_test_cpu(i32::from(cpu), self.as_raw()) } 119 117 } 120 118 121 119 /// Set all CPUs in the cpumask. ··· 180 178 /// The following example demonstrates how to create and update a [`CpumaskVar`]. 181 179 /// 182 180 /// ``` 181 + /// use kernel::cpu::CpuId; 183 182 /// use kernel::cpumask::CpumaskVar; 184 183 /// 185 184 /// let mut mask = CpumaskVar::new_zero(GFP_KERNEL).unwrap(); 186 185 /// 187 186 /// assert!(mask.empty()); 188 - /// mask.set(2); 189 - /// assert!(mask.test(2)); 190 - /// mask.set(3); 191 - /// assert!(mask.test(3)); 192 - /// assert_eq!(mask.weight(), 2); 187 + /// let mut count = 0; 188 + /// 189 + /// let cpu2 = CpuId::from_u32(2); 190 + /// if let Some(cpu) = cpu2 { 191 + /// mask.set(cpu); 192 + /// assert!(mask.test(cpu)); 193 + /// count += 1; 194 + /// } 195 + /// 196 + /// let cpu3 = CpuId::from_u32(3); 197 + /// if let Some(cpu) = cpu3 { 198 + /// mask.set(cpu); 199 + /// assert!(mask.test(cpu)); 200 + /// count += 1; 201 + /// } 202 + /// 203 + /// assert_eq!(mask.weight(), count); 193 204 /// 194 205 /// let mask2 = CpumaskVar::try_clone(&mask).unwrap(); 195 - /// assert!(mask2.test(2)); 196 - /// assert!(mask2.test(3)); 197 - /// assert_eq!(mask2.weight(), 2); 206 + /// 207 + /// if let Some(cpu) = cpu2 { 208 + /// assert!(mask2.test(cpu)); 209 + /// } 210 + /// 211 + /// if let Some(cpu) = cpu3 { 212 + /// assert!(mask2.test(cpu)); 213 + /// } 214 + /// assert_eq!(mask2.weight(), count); 198 215 /// ``` 199 216 pub struct CpumaskVar { 200 217 #[cfg(CONFIG_CPUMASK_OFFSTACK)]
+1 -1
rust/kernel/time/hrtimer.rs
··· 517 517 ) -> *mut Self { 518 518 // SAFETY: As per the safety requirement of this function, `ptr` 519 519 // is pointing inside a `$timer_type`. 520 - unsafe { ::kernel::container_of!(ptr, $timer_type, $field).cast_mut() } 520 + unsafe { ::kernel::container_of!(ptr, $timer_type, $field) } 521 521 } 522 522 } 523 523 }
+2 -12
scripts/gendwarfksyms/gendwarfksyms.h
··· 216 216 void cache_init(struct cache *cache); 217 217 void cache_free(struct cache *cache); 218 218 219 - static inline void __cache_mark_expanded(struct cache *cache, uintptr_t addr) 220 - { 221 - cache_set(cache, addr, 1); 222 - } 223 - 224 - static inline bool __cache_was_expanded(struct cache *cache, uintptr_t addr) 225 - { 226 - return cache_get(cache, addr) == 1; 227 - } 228 - 229 219 static inline void cache_mark_expanded(struct cache *cache, void *addr) 230 220 { 231 - __cache_mark_expanded(cache, (uintptr_t)addr); 221 + cache_set(cache, (unsigned long)addr, 1); 232 222 } 233 223 234 224 static inline bool cache_was_expanded(struct cache *cache, void *addr) 235 225 { 236 - return __cache_was_expanded(cache, (uintptr_t)addr); 226 + return cache_get(cache, (unsigned long)addr) == 1; 237 227 } 238 228 239 229 /*
+19 -46
scripts/gendwarfksyms/types.c
··· 333 333 cache_free(&expansion_cache); 334 334 } 335 335 336 - static void __type_expand(struct die *cache, struct type_expansion *type, 337 - bool recursive); 338 - 339 - static void type_expand_child(struct die *cache, struct type_expansion *type, 340 - bool recursive) 341 - { 342 - struct type_expansion child; 343 - char *name; 344 - 345 - name = get_type_name(cache); 346 - if (!name) { 347 - __type_expand(cache, type, recursive); 348 - return; 349 - } 350 - 351 - if (recursive && !__cache_was_expanded(&expansion_cache, cache->addr)) { 352 - __cache_mark_expanded(&expansion_cache, cache->addr); 353 - type_expansion_init(&child); 354 - __type_expand(cache, &child, true); 355 - type_map_add(name, &child); 356 - type_expansion_free(&child); 357 - } 358 - 359 - type_expansion_append(type, name, name); 360 - } 361 - 362 - static void __type_expand(struct die *cache, struct type_expansion *type, 363 - bool recursive) 336 + static void __type_expand(struct die *cache, struct type_expansion *type) 364 337 { 365 338 struct die_fragment *df; 366 339 struct die *child; 340 + char *name; 367 341 368 342 list_for_each_entry(df, &cache->fragments, list) { 369 343 switch (df->type) { ··· 353 379 error("unknown child: %" PRIxPTR, 354 380 df->data.addr); 355 381 356 - type_expand_child(child, type, recursive); 382 + name = get_type_name(child); 383 + if (name) 384 + type_expansion_append(type, name, name); 385 + else 386 + __type_expand(child, type); 387 + 357 388 break; 358 389 case FRAGMENT_LINEBREAK: 359 390 /* ··· 376 397 } 377 398 } 378 399 379 - static void type_expand(struct die *cache, struct type_expansion *type, 380 - bool recursive) 400 + static void type_expand(const char *name, struct die *cache, 401 + struct type_expansion *type) 381 402 { 403 + const char *override; 404 + 382 405 type_expansion_init(type); 383 - __type_expand(cache, type, recursive); 384 - cache_free(&expansion_cache); 406 + 407 + if (stable && kabi_get_type_string(name, &override)) 408 + 
type_parse(name, override, type); 409 + else 410 + __type_expand(cache, type); 385 411 } 386 412 387 413 static void type_parse(const char *name, const char *str, ··· 399 415 400 416 if (!*str) 401 417 error("empty type string override for '%s'", name); 402 - 403 - type_expansion_init(type); 404 418 405 419 for (pos = 0; str[pos]; ++pos) { 406 420 bool empty; ··· 460 478 static void expand_type(struct die *cache, void *arg) 461 479 { 462 480 struct type_expansion type; 463 - const char *override; 464 481 char *name; 465 482 466 483 if (cache->mapped) ··· 485 504 486 505 debug("%s", name); 487 506 488 - if (stable && kabi_get_type_string(name, &override)) 489 - type_parse(name, override, &type); 490 - else 491 - type_expand(cache, &type, true); 492 - 507 + type_expand(name, cache, &type); 493 508 type_map_add(name, &type); 494 509 type_expansion_free(&type); 495 510 free(name); ··· 495 518 { 496 519 struct type_expansion type; 497 520 struct version version; 498 - const char *override; 499 521 struct die *cache; 500 522 501 523 /* ··· 508 532 if (__die_map_get(sym->die_addr, DIE_SYMBOL, &cache)) 509 533 return; /* We'll warn about missing CRCs later. */ 510 534 511 - if (stable && kabi_get_type_string(sym->name, &override)) 512 - type_parse(sym->name, override, &type); 513 - else 514 - type_expand(cache, &type, false); 535 + type_expand(sym->name, cache, &type); 515 536 516 537 /* If the symbol already has a version, don't calculate it again. */ 517 538 if (sym->state != SYMBOL_PROCESSED) {
+12 -3
scripts/misc-check
··· 62 62 xargs -r printf "%s: warning: EXPORT_SYMBOL() is not used, but #include <linux/export.h> is present\n" >&2 63 63 } 64 64 65 - check_tracked_ignored_files 66 - check_missing_include_linux_export_h 67 - check_unnecessary_include_linux_export_h 65 + case "${KBUILD_EXTRA_WARN}" in 66 + *1*) 67 + check_tracked_ignored_files 68 + ;; 69 + esac 70 + 71 + case "${KBUILD_EXTRA_WARN}" in 72 + *2*) 73 + check_missing_include_linux_export_h 74 + check_unnecessary_include_linux_export_h 75 + ;; 76 + esac
+2 -2
security/keys/gc.c
··· 218 218 key = rb_entry(cursor, struct key, serial_node); 219 219 cursor = rb_next(cursor); 220 220 221 - if (test_bit(KEY_FLAG_FINAL_PUT, &key->flags)) { 222 - smp_mb(); /* Clobber key->user after FINAL_PUT seen. */ 221 + if (!test_bit_acquire(KEY_FLAG_USER_ALIVE, &key->flags)) { 222 + /* Clobber key->user after final put seen. */ 223 223 goto found_unreferenced_key; 224 224 } 225 225
+3 -2
security/keys/key.c
··· 298 298 key->restrict_link = restrict_link; 299 299 key->last_used_at = ktime_get_real_seconds(); 300 300 301 + key->flags |= 1 << KEY_FLAG_USER_ALIVE; 301 302 if (!(flags & KEY_ALLOC_NOT_IN_QUOTA)) 302 303 key->flags |= 1 << KEY_FLAG_IN_QUOTA; 303 304 if (flags & KEY_ALLOC_BUILT_IN) ··· 659 658 key->user->qnbytes -= key->quotalen; 660 659 spin_unlock_irqrestore(&key->user->lock, flags); 661 660 } 662 - smp_mb(); /* key->user before FINAL_PUT set. */ 663 - set_bit(KEY_FLAG_FINAL_PUT, &key->flags); 661 + /* Mark key as safe for GC after key->user done. */ 662 + clear_bit_unlock(KEY_FLAG_USER_ALIVE, &key->flags); 664 663 schedule_work(&key_gc_work); 665 664 } 666 665 }
+1 -1
tools/bpf/resolve_btfids/Makefile
··· 17 17 18 18 # Overrides for the prepare step libraries. 19 19 HOST_OVERRIDES := AR="$(HOSTAR)" CC="$(HOSTCC)" LD="$(HOSTLD)" ARCH="$(HOSTARCH)" \ 20 - CROSS_COMPILE="" EXTRA_CFLAGS="$(HOSTCFLAGS)" 20 + CROSS_COMPILE="" CLANG_CROSS_FLAGS="" EXTRA_CFLAGS="$(HOSTCFLAGS)" 21 21 22 22 RM ?= rm 23 23 HOSTCC ?= gcc
+3 -3
tools/lib/bpf/btf.c
··· 1384 1384 1385 1385 fd = open(path, O_RDONLY); 1386 1386 if (fd < 0) 1387 - return libbpf_err_ptr(-errno); 1387 + return ERR_PTR(-errno); 1388 1388 1389 1389 if (fstat(fd, &st) < 0) { 1390 1390 err = -errno; 1391 1391 close(fd); 1392 - return libbpf_err_ptr(err); 1392 + return ERR_PTR(err); 1393 1393 } 1394 1394 1395 1395 data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0); ··· 1397 1397 close(fd); 1398 1398 1399 1399 if (data == MAP_FAILED) 1400 - return libbpf_err_ptr(err); 1400 + return ERR_PTR(err); 1401 1401 1402 1402 btf = btf_new(data, st.st_size, base_btf, true); 1403 1403 if (IS_ERR(btf))
+5 -4
tools/power/cpupower/Makefile
··· 73 73 mandir ?= /usr/man 74 74 libdir ?= /usr/lib 75 75 libexecdir ?= /usr/libexec 76 + unitdir ?= /usr/lib/systemd/system 76 77 includedir ?= /usr/include 77 78 localedir ?= /usr/share/locale 78 79 docdir ?= /usr/share/doc/packages/cpupower ··· 310 309 $(INSTALL_DATA) cpupower-service.conf '$(DESTDIR)${confdir}' 311 310 $(INSTALL) -d $(DESTDIR)${libexecdir} 312 311 $(INSTALL_PROGRAM) cpupower.sh '$(DESTDIR)${libexecdir}/cpupower' 313 - $(INSTALL) -d $(DESTDIR)${libdir}/systemd/system 314 - sed 's|___CDIR___|${confdir}|; s|___LDIR___|${libexecdir}|' cpupower.service.in > '$(DESTDIR)${libdir}/systemd/system/cpupower.service' 315 - $(SETPERM_DATA) '$(DESTDIR)${libdir}/systemd/system/cpupower.service' 312 + $(INSTALL) -d $(DESTDIR)${unitdir} 313 + sed 's|___CDIR___|${confdir}|; s|___LDIR___|${libexecdir}|' cpupower.service.in > '$(DESTDIR)${unitdir}/cpupower.service' 314 + $(SETPERM_DATA) '$(DESTDIR)${unitdir}/cpupower.service' 316 315 317 316 install-man: 318 317 $(INSTALL_DATA) -D man/cpupower.1 $(DESTDIR)${mandir}/man1/cpupower.1 ··· 349 348 - rm -f $(DESTDIR)${bindir}/utils/cpupower 350 349 - rm -f $(DESTDIR)${confdir}cpupower-service.conf 351 350 - rm -f $(DESTDIR)${libexecdir}/cpupower 352 - - rm -f $(DESTDIR)${libdir}/systemd/system/cpupower.service 351 + - rm -f $(DESTDIR)${unitdir}/cpupower.service 353 352 - rm -f $(DESTDIR)${mandir}/man1/cpupower.1 354 353 - rm -f $(DESTDIR)${mandir}/man1/cpupower-frequency-set.1 355 354 - rm -f $(DESTDIR)${mandir}/man1/cpupower-frequency-info.1
+58 -1
tools/testing/selftests/drivers/net/hw/rss_ctx.py
··· 747 747 'noise' : (0,) }) 748 748 749 749 750 + def test_rss_default_context_rule(cfg): 751 + """ 752 + Allocate a port, direct this port to context 0, then create a new RSS 753 + context and steer all TCP traffic to it (context 1). Verify that: 754 + * Traffic to the specific port continues to use queues of the main 755 + context (0/1). 756 + * Traffic to any other TCP port is redirected to the new context 757 + (queues 2/3). 758 + """ 759 + 760 + require_ntuple(cfg) 761 + 762 + queue_cnt = len(_get_rx_cnts(cfg)) 763 + if queue_cnt < 4: 764 + try: 765 + ksft_pr(f"Increasing queue count {queue_cnt} -> 4") 766 + ethtool(f"-L {cfg.ifname} combined 4") 767 + defer(ethtool, f"-L {cfg.ifname} combined {queue_cnt}") 768 + except Exception as exc: 769 + raise KsftSkipEx("Not enough queues for the test") from exc 770 + 771 + # Use queues 0 and 1 for the main context 772 + ethtool(f"-X {cfg.ifname} equal 2") 773 + defer(ethtool, f"-X {cfg.ifname} default") 774 + 775 + # Create a new RSS context that uses queues 2 and 3 776 + ctx_id = ethtool_create(cfg, "-X", "context new start 2 equal 2") 777 + defer(ethtool, f"-X {cfg.ifname} context {ctx_id} delete") 778 + 779 + # Generic low-priority rule: redirect all TCP traffic to the new context. 780 + # Give it an explicit higher location number (lower priority). 781 + flow_generic = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} context {ctx_id} loc 1" 782 + ethtool(f"-N {cfg.ifname} {flow_generic}") 783 + defer(ethtool, f"-N {cfg.ifname} delete 1") 784 + 785 + # Specific high-priority rule for a random port that should stay on context 0. 786 + # Assign loc 0 so it is evaluated before the generic rule. 
787 + port_main = rand_port() 788 + flow_main = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} dst-port {port_main} context 0 loc 0" 789 + ethtool(f"-N {cfg.ifname} {flow_main}") 790 + defer(ethtool, f"-N {cfg.ifname} delete 0") 791 + 792 + _ntuple_rule_check(cfg, 1, ctx_id) 793 + 794 + # Verify that traffic matching the specific rule still goes to queues 0/1 795 + _send_traffic_check(cfg, port_main, "context 0", 796 + { 'target': (0, 1), 797 + 'empty' : (2, 3) }) 798 + 799 + # And that traffic for any other port is steered to the new context 800 + port_other = rand_port() 801 + _send_traffic_check(cfg, port_other, f"context {ctx_id}", 802 + { 'target': (2, 3), 803 + 'noise' : (0, 1) }) 804 + 805 + 750 806 def main() -> None: 751 807 with NetDrvEpEnv(__file__, nsim_test=False) as cfg: 752 808 cfg.context_cnt = None ··· 816 760 test_rss_context_overlap, test_rss_context_overlap2, 817 761 test_rss_context_out_of_order, test_rss_context4_create_with_cfg, 818 762 test_flow_add_context_missing, 819 - test_delete_rss_context_busy, test_rss_ntuple_addition], 763 + test_delete_rss_context_busy, test_rss_ntuple_addition, 764 + test_rss_default_context_rule], 820 765 args=(cfg, )) 821 766 ksft_exit() 822 767
+25 -14
tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
··· 22 22 #include "gic.h" 23 23 #include "vgic.h" 24 24 25 - static const uint64_t CVAL_MAX = ~0ULL; 25 + /* Depends on counter width. */ 26 + static uint64_t CVAL_MAX; 26 27 /* tval is a signed 32-bit int. */ 27 28 static const int32_t TVAL_MAX = INT32_MAX; 28 29 static const int32_t TVAL_MIN = INT32_MIN; ··· 31 30 /* After how much time we say there is no IRQ. */ 32 31 static const uint32_t TIMEOUT_NO_IRQ_US = 50000; 33 32 34 - /* A nice counter value to use as the starting one for most tests. */ 35 - static const uint64_t DEF_CNT = (CVAL_MAX / 2); 33 + /* Counter value to use as the starting one for most tests. Set to CVAL_MAX/2 */ 34 + static uint64_t DEF_CNT; 36 35 37 36 /* Number of runs. */ 38 37 static const uint32_t NR_TEST_ITERS_DEF = 5; ··· 192 191 { 193 192 atomic_set(&shared_data.handled, 0); 194 193 atomic_set(&shared_data.spurious, 0); 195 - timer_set_ctl(timer, ctl); 196 194 timer_set_tval(timer, tval_cycles); 195 + timer_set_ctl(timer, ctl); 197 196 } 198 197 199 198 static void set_xval_irq(enum arch_timer timer, uint64_t xval, uint32_t ctl, ··· 733 732 test_set_cnt_after_tval(timer, 0, tval, (uint64_t) tval + 1, 734 733 wm); 735 734 } 736 - 737 - for (i = 0; i < ARRAY_SIZE(sleep_method); i++) { 738 - sleep_method_t sm = sleep_method[i]; 739 - 740 - test_set_cnt_after_cval_no_irq(timer, 0, DEF_CNT, CVAL_MAX, sm); 741 - } 742 735 } 743 736 744 737 /* ··· 844 849 GUEST_DONE(); 845 850 } 846 851 852 + static cpu_set_t default_cpuset; 853 + 847 854 static uint32_t next_pcpu(void) 848 855 { 849 856 uint32_t max = get_nprocs(); 850 857 uint32_t cur = sched_getcpu(); 851 858 uint32_t next = cur; 852 - cpu_set_t cpuset; 859 + cpu_set_t cpuset = default_cpuset; 853 860 854 861 TEST_ASSERT(max > 1, "Need at least two physical cpus"); 855 - 856 - sched_getaffinity(0, sizeof(cpuset), &cpuset); 857 862 858 863 do { 859 864 next = (next + 1) % CPU_SETSIZE; ··· 970 975 test_init_timer_irq(*vm, *vcpu); 971 976 vgic_v3_setup(*vm, 1, 64); 972 977 
sync_global_to_guest(*vm, test_args); 978 + sync_global_to_guest(*vm, CVAL_MAX); 979 + sync_global_to_guest(*vm, DEF_CNT); 973 980 } 974 981 975 982 static void test_print_help(char *name) ··· 983 986 pr_info("\t-b: Test both physical and virtual timers (default: true)\n"); 984 987 pr_info("\t-l: Delta (in ms) used for long wait time test (default: %u)\n", 985 988 LONG_WAIT_TEST_MS); 986 - pr_info("\t-l: Delta (in ms) used for wait times (default: %u)\n", 989 + pr_info("\t-w: Delta (in ms) used for wait times (default: %u)\n", 987 990 WAIT_TEST_MS); 988 991 pr_info("\t-p: Test physical timer (default: true)\n"); 989 992 pr_info("\t-v: Test virtual timer (default: true)\n"); ··· 1032 1035 return false; 1033 1036 } 1034 1037 1038 + static void set_counter_defaults(void) 1039 + { 1040 + const uint64_t MIN_ROLLOVER_SECS = 40ULL * 365 * 24 * 3600; 1041 + uint64_t freq = read_sysreg(CNTFRQ_EL0); 1042 + uint64_t width = ilog2(MIN_ROLLOVER_SECS * freq); 1043 + 1044 + width = clamp(width, 56, 64); 1045 + CVAL_MAX = GENMASK_ULL(width - 1, 0); 1046 + DEF_CNT = CVAL_MAX / 2; 1047 + } 1048 + 1035 1049 int main(int argc, char *argv[]) 1036 1050 { 1037 1051 struct kvm_vcpu *vcpu; ··· 1053 1045 1054 1046 if (!parse_args(argc, argv)) 1055 1047 exit(KSFT_SKIP); 1048 + 1049 + sched_getaffinity(0, sizeof(default_cpuset), &default_cpuset); 1050 + set_counter_defaults(); 1056 1051 1057 1052 if (test_args.test_virtual) { 1058 1053 test_vm_create(&vm, &vcpu, VIRTUAL);
+6 -1
tools/testing/selftests/mm/gup_longterm.c
··· 298 298 log_test_start("%s ... with memfd", desc); 299 299 300 300 fd = memfd_create("test", 0); 301 - if (fd < 0) 301 + if (fd < 0) { 302 302 ksft_print_msg("memfd_create() failed (%s)\n", strerror(errno)); 303 + log_test_result(KSFT_SKIP); 304 + return; 305 + } 303 306 304 307 fn(fd, pagesize); 305 308 close(fd); ··· 369 366 fd = memfd_create("test", flags); 370 367 if (fd < 0) { 371 368 ksft_print_msg("memfd_create() failed (%s)\n", strerror(errno)); 369 + log_test_result(KSFT_SKIP); 370 + return; 372 371 } 373 372 374 373 fn(fd, hugetlbsize);
+1
tools/testing/selftests/net/Makefile
··· 27 27 TEST_PROGS += unicast_extensions.sh 28 28 TEST_PROGS += udpgro_fwd.sh 29 29 TEST_PROGS += udpgro_frglist.sh 30 + TEST_PROGS += nat6to4.sh 30 31 TEST_PROGS += veth.sh 31 32 TEST_PROGS += ioam6.sh 32 33 TEST_PROGS += gro.sh
+15
tools/testing/selftests/net/nat6to4.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + NS="ns-peer-$(mktemp -u XXXXXX)" 5 + 6 + ip netns add "${NS}" 7 + ip -netns "${NS}" link set lo up 8 + ip -netns "${NS}" route add default via 127.0.0.2 dev lo 9 + 10 + tc -n "${NS}" qdisc add dev lo ingress 11 + tc -n "${NS}" filter add dev lo ingress prio 4 protocol ip \ 12 + bpf object-file nat6to4.bpf.o section schedcls/egress4/snat4 direct-action 13 + 14 + ip netns exec "${NS}" \ 15 + bash -c 'echo 012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789abc | socat - UDP4-DATAGRAM:224.1.0.1:6666,ip-multicast-loop=1'
+16
tools/testing/vma/vma_internal.h
··· 159 159 160 160 #define ASSERT_EXCLUSIVE_WRITER(x) 161 161 162 + /** 163 + * swap - swap values of @a and @b 164 + * @a: first value 165 + * @b: second value 166 + */ 167 + #define swap(a, b) \ 168 + do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0) 169 + 162 170 struct kref { 163 171 refcount_t refcount; 164 172 }; ··· 1474 1466 static inline void fixup_hugetlb_reservations(struct vm_area_struct *vma) 1475 1467 { 1476 1468 (void)vma; 1469 + } 1470 + 1471 + static inline void vma_set_file(struct vm_area_struct *vma, struct file *file) 1472 + { 1473 + /* Changing an anonymous vma with this is illegal */ 1474 + get_file(file); 1475 + swap(vma->vm_file, file); 1476 + fput(file); 1477 1477 } 1478 1478 1479 1479 #endif /* __MM_VMA_INTERNAL_H */