Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.17-rc6).

Conflicts:

net/netfilter/nft_set_pipapo.c
net/netfilter/nft_set_pipapo_avx2.c
c4eaca2e1052 ("netfilter: nft_set_pipapo: don't check genbit from packetpath lookups")
84c1da7b38d9 ("netfilter: nft_set_pipapo: use avx2 algorithm for insertions too")

Only trivial adjacent changes (in a doc and a Makefile).

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2633 -1130
+1
Documentation/ABI/testing/sysfs-devices-system-cpu
··· 586 586 /sys/devices/system/cpu/vulnerabilities/srbds 587 587 /sys/devices/system/cpu/vulnerabilities/tsa 588 588 /sys/devices/system/cpu/vulnerabilities/tsx_async_abort 589 + /sys/devices/system/cpu/vulnerabilities/vmscape 589 590 Date: January 2018 590 591 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org> 591 592 Description: Information about CPU vulnerabilities
+1
Documentation/admin-guide/hw-vuln/index.rst
··· 26 26 rsb 27 27 old_microcode 28 28 indirect-target-selection 29 + vmscape
+110
Documentation/admin-guide/hw-vuln/vmscape.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + VMSCAPE 4 + ======= 5 + 6 + VMSCAPE is a vulnerability that may allow a guest to influence the branch 7 + prediction in host userspace. It particularly affects hypervisors like QEMU. 8 + 9 + Even if a hypervisor does not have any sensitive data like disk encryption keys, 10 + guest userspace may be able to attack the guest kernel using the hypervisor as 11 + a confused deputy. 12 + 13 + Affected processors 14 + ------------------- 15 + 16 + The following CPU families are affected by VMSCAPE: 17 + 18 + **Intel processors:** 19 + - Skylake generation (Parts without Enhanced-IBRS) 20 + - Cascade Lake generation (Parts affected by ITS guest/host separation) 21 + - Alder Lake and newer (Parts affected by BHI) 22 + 23 + Note that BHI-affected parts that use the BHB clearing software mitigation, e.g. 24 + Ice Lake, are not vulnerable to VMSCAPE. 25 + 26 + **AMD processors:** 27 + - Zen series (families 0x17, 0x19, 0x1a) 28 + 29 + **Hygon processors:** 30 + - Family 0x18 31 + 32 + Mitigation 33 + ---------- 34 + 35 + Conditional IBPB 36 + ---------------- 37 + 38 + The kernel tracks when a CPU has run a potentially malicious guest and issues an 39 + IBPB before the first exit to userspace after VM-exit. If userspace did not run 40 + between VM-exit and the next VM-entry, no IBPB is issued. 41 + 42 + Note that the existing userspace mitigations against Spectre-v2 are effective in 43 + protecting userspace; they are insufficient to protect the userspace VMMs 44 + from a malicious guest. This is because Spectre-v2 mitigations are applied at 45 + context switch time, while the userspace VMM can run after a VM-exit without a 46 + context switch. 47 + 48 + Vulnerability enumeration and mitigation are not applied inside a guest. This is 49 + because nested hypervisors should already be deploying IBPB to isolate 50 + themselves from nested guests. 
51 + 52 + SMT considerations 53 + ------------------ 54 + 55 + When Simultaneous Multi-Threading (SMT) is enabled, hypervisors can be 56 + vulnerable to cross-thread attacks. For complete protection against VMSCAPE 57 + attacks in SMT environments, STIBP should be enabled. 58 + 59 + The kernel will issue a warning if SMT is enabled without adequate STIBP 60 + protection. Warning is not issued when: 61 + 62 + - SMT is disabled 63 + - STIBP is enabled system-wide 64 + - Intel eIBRS is enabled (which implies STIBP protection) 65 + 66 + System information and options 67 + ------------------------------ 68 + 69 + The sysfs file showing VMSCAPE mitigation status is: 70 + 71 + /sys/devices/system/cpu/vulnerabilities/vmscape 72 + 73 + The possible values in this file are: 74 + 75 + * 'Not affected': 76 + 77 + The processor is not vulnerable to VMSCAPE attacks. 78 + 79 + * 'Vulnerable': 80 + 81 + The processor is vulnerable and no mitigation has been applied. 82 + 83 + * 'Mitigation: IBPB before exit to userspace': 84 + 85 + Conditional IBPB mitigation is enabled. The kernel tracks when a CPU has 86 + run a potentially malicious guest and issues an IBPB before the first 87 + exit to userspace after VM-exit. 88 + 89 + * 'Mitigation: IBPB on VMEXIT': 90 + 91 + IBPB is issued on every VM-exit. This occurs when other mitigations like 92 + RETBLEED or SRSO are already issuing IBPB on VM-exit. 93 + 94 + Mitigation control on the kernel command line 95 + ---------------------------------------------- 96 + 97 + The mitigation can be controlled via the ``vmscape=`` command line parameter: 98 + 99 + * ``vmscape=off``: 100 + 101 + Disable the VMSCAPE mitigation. 102 + 103 + * ``vmscape=ibpb``: 104 + 105 + Enable conditional IBPB mitigation (default when CONFIG_MITIGATION_VMSCAPE=y). 106 + 107 + * ``vmscape=force``: 108 + 109 + Force vulnerability detection and mitigation even on processors that are 110 + not known to be affected.
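The conditional IBPB flow described in the new document can be sketched in plain C. This is an illustrative model, not kernel code: all names here (the flag, the callbacks, the counter) are hypothetical stand-ins for the per-CPU `x86_ibpb_exit_to_user` bookkeeping.

```c
#include <stdbool.h>

/* Hypothetical sketch of the conditional IBPB logic: a per-CPU flag is set
 * on VM-exit and checked, then cleared, on the next exit to userspace. */
static bool ibpb_pending; /* stands in for the per-CPU x86_ibpb_exit_to_user */
static int ibpb_issued;   /* counts barriers, for demonstration only */

static void issue_ibpb(void)
{
    ibpb_issued++;        /* real code would issue the barrier via PRED_CMD */
}

static void on_vm_exit(void)
{
    ibpb_pending = true;  /* guest may have trained the branch predictor */
}

static void on_exit_to_user(void)
{
    /* Pay the IBPB cost only if a guest ran since the last userspace exit. */
    if (ibpb_pending) {
        issue_ibpb();
        ibpb_pending = false;
    }
}
```

If userspace bounces in and out without an intervening VM-entry, the flag stays clear and no further barrier is issued, which is the "conditional" behaviour the document describes.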
+11
Documentation/admin-guide/kernel-parameters.txt
··· 3829 3829 srbds=off [X86,INTEL] 3830 3830 ssbd=force-off [ARM64] 3831 3831 tsx_async_abort=off [X86] 3832 + vmscape=off [X86] 3832 3833 3833 3834 Exceptions: 3834 3835 This does not have any effect on ··· 8041 8040 8042 8041 vmpoff= [KNL,S390] Perform z/VM CP command after power off. 8043 8042 Format: <command> 8043 + 8044 + vmscape= [X86] Controls mitigation for VMscape attacks. 8045 + VMscape attacks can leak information from a userspace 8046 + hypervisor to a guest via speculative side-channels. 8047 + 8048 + off - disable the mitigation 8049 + ibpb - use Indirect Branch Prediction Barrier 8050 + (IBPB) mitigation (default) 8051 + force - force vulnerability detection even on 8052 + unaffected processors 8044 8053 8045 8054 vsyscall= [X86-64,EARLY] 8046 8055 Controls the behavior of vsyscalls (i.e. calls to
+5
Documentation/devicetree/bindings/spi/spi-fsl-lpspi.yaml
··· 20 20 - enum: 21 21 - fsl,imx7ulp-spi 22 22 - fsl,imx8qxp-spi 23 + - nxp,s32g2-lpspi 23 24 - items: 24 25 - enum: 25 26 - fsl,imx8ulp-spi ··· 28 27 - fsl,imx94-spi 29 28 - fsl,imx95-spi 30 29 - const: fsl,imx7ulp-spi 30 + - items: 31 + - const: nxp,s32g3-lpspi 32 + - const: nxp,s32g2-lpspi 33 + 31 34 reg: 32 35 maxItems: 1 33 36
+1 -1
Documentation/netlink/specs/mptcp_pm.yaml
··· 256 256 type: u32 257 257 - 258 258 name: if-idx 259 - type: u32 259 + type: s32 260 260 - 261 261 name: reset-reason 262 262 type: u32
+1 -1
Documentation/networking/can.rst
··· 742 742 struct timeval ival1, ival2; /* count and subsequent interval */ 743 743 canid_t can_id; /* unique can_id for task */ 744 744 __u32 nframes; /* number of can_frames following */ 745 - struct can_frame frames[0]; 745 + struct can_frame frames[]; 746 746 }; 747 747 748 748 The aligned payload 'frames' uses the same basic CAN frame structure defined
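The `frames[]` change above is the C99 flexible array member replacing the pre-C99 `frames[0]` hack. A minimal standalone sketch of how such a structure is allocated, using simplified stand-in types rather than the real CAN headers:

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-ins for the real CAN types, for illustration only. */
typedef uint32_t canid_t;
struct can_frame {
    canid_t can_id;
    uint8_t len;
    uint8_t data[8];
};

struct bcm_msg_head {
    uint32_t opcode;
    canid_t  can_id;
    uint32_t nframes;          /* number of can_frames following */
    struct can_frame frames[]; /* C99 flexible array member */
};

/* One allocation covers the header plus 'nframes' trailing frames. */
static struct bcm_msg_head *bcm_alloc(uint32_t nframes)
{
    struct bcm_msg_head *m =
        calloc(1, sizeof(*m) + nframes * sizeof(struct can_frame));
    if (m)
        m->nframes = nframes;
    return m;
}
```

Unlike `frames[0]`, the `frames[]` form lets the compiler and sanitizers reason correctly about the bounds of the trailing array.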
+4 -4
Documentation/networking/mptcp.rst
··· 60 60 and the server side that announces additional addresses via the ``ADD_ADDR`` and 61 61 ``REMOVE_ADDR`` options. 62 62 63 - Path managers are controlled by the ``net.mptcp.pm_type`` sysctl knob -- see 64 - mptcp-sysctl.rst. There are two types: the in-kernel one (type ``0``) where the 65 - same rules are applied for all the connections (see: ``ip mptcp``) ; and the 66 - userspace one (type ``1``), controlled by a userspace daemon (i.e. `mptcpd 63 + Path managers are controlled by the ``net.mptcp.path_manager`` sysctl knob -- 64 + see mptcp-sysctl.rst. There are two types: the in-kernel one (``kernel``) where 65 + the same rules are applied for all the connections (see: ``ip mptcp``) ; and the 66 + userspace one (``userspace``), controlled by a userspace daemon (i.e. `mptcpd 67 67 <https://mptcpd.mptcp.dev/>`_) where different rules can be applied for each 68 68 connection. The path managers can be controlled via a Netlink API; see 69 69 ../netlink/specs/mptcp_pm.rst.
+3 -3
MAINTAINERS
··· 4683 4683 BPF [SELFTESTS] (Test Runners & Infrastructure) 4684 4684 M: Andrii Nakryiko <andrii@kernel.org> 4685 4685 M: Eduard Zingerman <eddyz87@gmail.com> 4686 - R: Mykola Lysenko <mykolal@fb.com> 4687 4686 L: bpf@vger.kernel.org 4688 4687 S: Maintained 4689 4688 F: tools/testing/selftests/bpf/ ··· 5258 5259 5259 5260 BTRFS FILE SYSTEM 5260 5261 M: Chris Mason <clm@fb.com> 5261 - M: Josef Bacik <josef@toxicpanda.com> 5262 5262 M: David Sterba <dsterba@suse.com> 5263 5263 L: linux-btrfs@vger.kernel.org 5264 5264 S: Maintained ··· 7820 7822 Q: https://gitlab.freedesktop.org/drm/nouveau/-/merge_requests 7821 7823 B: https://gitlab.freedesktop.org/drm/nouveau/-/issues 7822 7824 C: irc://irc.oftc.net/nouveau 7823 - T: git https://gitlab.freedesktop.org/drm/nouveau.git 7825 + T: git https://gitlab.freedesktop.org/drm/misc/kernel.git 7824 7826 F: drivers/gpu/drm/nouveau/ 7825 7827 F: include/uapi/drm/nouveau_drm.h 7826 7828 ··· 16125 16127 M: Mike Rapoport <rppt@kernel.org> 16126 16128 L: linux-mm@kvack.org 16127 16129 S: Maintained 16130 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git 16128 16131 F: include/linux/numa_memblks.h 16129 16132 F: mm/numa.c 16130 16133 F: mm/numa_emulation.c ··· 17478 17479 M: Pablo Neira Ayuso <pablo@netfilter.org> 17479 17480 M: Jozsef Kadlecsik <kadlec@netfilter.org> 17480 17481 M: Florian Westphal <fw@strlen.de> 17482 + R: Phil Sutter <phil@nwl.cc> 17481 17483 L: netfilter-devel@vger.kernel.org 17482 17484 L: coreteam@netfilter.org 17483 17485 S: Maintained
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 17 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+1
arch/arm64/include/asm/module.h
··· 19 19 20 20 /* for CONFIG_DYNAMIC_FTRACE */ 21 21 struct plt_entry *ftrace_trampolines; 22 + struct plt_entry *init_ftrace_trampolines; 22 23 }; 23 24 24 25 u64 module_emit_plt_entry(struct module *mod, Elf64_Shdr *sechdrs,
+1
arch/arm64/include/asm/module.lds.h
··· 2 2 .plt 0 : { BYTE(0) } 3 3 .init.plt 0 : { BYTE(0) } 4 4 .text.ftrace_trampoline 0 : { BYTE(0) } 5 + .init.text.ftrace_trampoline 0 : { BYTE(0) } 5 6 6 7 #ifdef CONFIG_KASAN_SW_TAGS 7 8 /*
+5
arch/arm64/include/uapi/asm/bitsperlong.h
··· 17 17 #ifndef __ASM_BITSPERLONG_H 18 18 #define __ASM_BITSPERLONG_H 19 19 20 + #if defined(__KERNEL__) && !defined(__aarch64__) 21 + /* Used by the compat vDSO */ 22 + #define __BITS_PER_LONG 32 23 + #else 20 24 #define __BITS_PER_LONG 64 25 + #endif 21 26 22 27 #include <asm-generic/bitsperlong.h> 23 28
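The value the header hard-codes has to agree with the ABI of the translation unit being built, which is what the compat vDSO case above is about. A tiny sketch of that relationship (illustrative; the macro name is invented, not the kernel's):

```c
#include <limits.h>

/* An LP64 target (native arm64) has a 64-bit long; an ILP32 target, like
 * the compat vDSO the patch accommodates, has a 32-bit long. The hard-coded
 * __BITS_PER_LONG must match this ABI-derived value for each build. */
#define DEMO_BITS_PER_LONG ((int)(sizeof(long) * CHAR_BIT))
```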
+10 -3
arch/arm64/kernel/ftrace.c
··· 258 258 return ftrace_modify_code(pc, 0, new, false); 259 259 } 260 260 261 - static struct plt_entry *get_ftrace_plt(struct module *mod) 261 + static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr) 262 262 { 263 263 #ifdef CONFIG_MODULES 264 - struct plt_entry *plt = mod->arch.ftrace_trampolines; 264 + struct plt_entry *plt = NULL; 265 + 266 + if (within_module_mem_type(addr, mod, MOD_INIT_TEXT)) 267 + plt = mod->arch.init_ftrace_trampolines; 268 + else if (within_module_mem_type(addr, mod, MOD_TEXT)) 269 + plt = mod->arch.ftrace_trampolines; 270 + else 271 + return NULL; 265 272 266 273 return &plt[FTRACE_PLT_IDX]; 267 274 #else ··· 339 332 if (WARN_ON(!mod)) 340 333 return false; 341 334 342 - plt = get_ftrace_plt(mod); 335 + plt = get_ftrace_plt(mod, pc); 343 336 if (!plt) { 344 337 pr_err("ftrace: no module PLT for %ps\n", (void *)*addr); 345 338 return false;
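The range dispatch that `get_ftrace_plt()` now performs can be sketched in plain C. Types and field names below are hypothetical stand-ins for the module structures; the point is selecting a trampoline table by which text region the patch site falls in:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins: a module with separate core and init text
 * regions, each with its own trampoline table, as in get_ftrace_plt(). */
struct region { uintptr_t start, end; };
struct mod_like {
    struct region core_text, init_text;
    int core_plts[2], init_plts[2]; /* stand-ins for the plt_entry arrays */
};

static int *plt_for_addr(struct mod_like *m, uintptr_t addr)
{
    if (addr >= m->init_text.start && addr < m->init_text.end)
        return m->init_plts;        /* patch site lives in init text */
    if (addr >= m->core_text.start && addr < m->core_text.end)
        return m->core_plts;        /* patch site lives in core text */
    return NULL;                    /* not in this module's text at all */
}

/* Small classifier used to exercise the dispatch. */
static char plt_kind(uintptr_t addr)
{
    static struct mod_like m = {
        .core_text = { 0x1000, 0x2000 },
        .init_text = { 0x3000, 0x4000 },
    };
    int *plt = plt_for_addr(&m, addr);
    if (plt == m.init_plts) return 'i';
    if (plt == m.core_plts) return 'c';
    return '-';
}
```

The NULL case matches the patched code's new early return: an address outside both regions gets no trampoline rather than silently using the core table.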
+1 -1
arch/arm64/kernel/machine_kexec_file.c
··· 94 94 char *initrd, unsigned long initrd_len, 95 95 char *cmdline) 96 96 { 97 - struct kexec_buf kbuf; 97 + struct kexec_buf kbuf = {}; 98 98 void *dtb = NULL; 99 99 unsigned long initrd_load_addr = 0, dtb_len, 100 100 orig_segments = image->nr_segments;
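The `= {}` initializer here (and in the matching riscv and s390 hunks below) matters because the setup paths assign only some members of the buffer. A minimal illustration with a hypothetical stand-in struct; the layout is invented, only the initializer behaviour is the point:

```c
#include <stddef.h>

/* Hypothetical stand-in for struct kexec_buf. */
struct buf_like {
    void *image;
    unsigned long mem, memsz;
    int top_down;
};

static int buf_starts_zeroed(void)
{
    /* '= {0}' (the kernel spells it '= {}', a GNU extension) zeroes every
     * member, so fields the caller never assigns are not indeterminate. */
    struct buf_like kbuf = {0};
    return kbuf.image == NULL && kbuf.mem == 0 &&
           kbuf.memsz == 0 && kbuf.top_down == 0;
}
```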
+11 -1
arch/arm64/kernel/module-plts.c
··· 283 283 unsigned long core_plts = 0; 284 284 unsigned long init_plts = 0; 285 285 Elf64_Sym *syms = NULL; 286 - Elf_Shdr *pltsec, *tramp = NULL; 286 + Elf_Shdr *pltsec, *tramp = NULL, *init_tramp = NULL; 287 287 int i; 288 288 289 289 /* ··· 298 298 else if (!strcmp(secstrings + sechdrs[i].sh_name, 299 299 ".text.ftrace_trampoline")) 300 300 tramp = sechdrs + i; 301 + else if (!strcmp(secstrings + sechdrs[i].sh_name, 302 + ".init.text.ftrace_trampoline")) 303 + init_tramp = sechdrs + i; 301 304 else if (sechdrs[i].sh_type == SHT_SYMTAB) 302 305 syms = (Elf64_Sym *)sechdrs[i].sh_addr; 303 306 } ··· 364 361 tramp->sh_flags = SHF_EXECINSTR | SHF_ALLOC; 365 362 tramp->sh_addralign = __alignof__(struct plt_entry); 366 363 tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry); 364 + } 365 + 366 + if (init_tramp) { 367 + init_tramp->sh_type = SHT_NOBITS; 368 + init_tramp->sh_flags = SHF_EXECINSTR | SHF_ALLOC; 369 + init_tramp->sh_addralign = __alignof__(struct plt_entry); 370 + init_tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry); 367 371 } 368 372 369 373 return 0;
+11
arch/arm64/kernel/module.c
··· 466 466 __init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR); 467 467 468 468 mod->arch.ftrace_trampolines = plts; 469 + 470 + s = find_section(hdr, sechdrs, ".init.text.ftrace_trampoline"); 471 + if (!s) 472 + return -ENOEXEC; 473 + 474 + plts = (void *)s->sh_addr; 475 + 476 + __init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR); 477 + 478 + mod->arch.init_ftrace_trampolines = plts; 479 + 469 480 #endif 470 481 return 0; 471 482 }
+1 -1
arch/riscv/Kconfig
··· 65 65 select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE 66 66 select ARCH_SUPPORTS_HUGETLBFS if MMU 67 67 # LLD >= 14: https://github.com/llvm/llvm-project/issues/50505 68 - select ARCH_SUPPORTS_LTO_CLANG if LLD_VERSION >= 140000 68 + select ARCH_SUPPORTS_LTO_CLANG if LLD_VERSION >= 140000 && CMODEL_MEDANY 69 69 select ARCH_SUPPORTS_LTO_CLANG_THIN if LLD_VERSION >= 140000 70 70 select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS if 64BIT && MMU 71 71 select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU
+1 -1
arch/riscv/include/asm/asm.h
··· 91 91 #endif 92 92 93 93 .macro asm_per_cpu dst sym tmp 94 - REG_L \tmp, TASK_TI_CPU_NUM(tp) 94 + lw \tmp, TASK_TI_CPU_NUM(tp) 95 95 slli \tmp, \tmp, PER_CPU_OFFSET_SHIFT 96 96 la \dst, __per_cpu_offset 97 97 add \dst, \dst, \tmp
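The `REG_L` → `lw` change matters because `cpu` is a 32-bit field: on rv64 a 64-bit `ld` at its address also pulls in whatever follows it in the struct. A C sketch of the difference, under the assumptions of a hypothetical layout and a little-endian host:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical layout: a 32-bit 'cpu' followed by another 32-bit field,
 * as in struct thread_info where TASK_TI_CPU_NUM points at an int. */
struct ti_like {
    int32_t cpu;
    int32_t next_field;
};

/* What a 64-bit REG_L (ld) at &cpu fetches: 8 bytes. */
static uint64_t wide_load_of_cpu(void)
{
    struct ti_like t = { .cpu = 3, .next_field = -1 };
    uint64_t v;
    memcpy(&v, &t.cpu, sizeof(v));
    return v;
}

/* What 'lw' fetches: exactly the 4-byte field. */
static int32_t narrow_load_of_cpu(void)
{
    struct ti_like t = { .cpu = 3, .next_field = -1 };
    int32_t v;
    memcpy(&v, &t.cpu, sizeof(v));
    return v;
}
```

On a little-endian host the wide load's low half still holds the CPU number, but the high half is polluted by the neighbouring field, which breaks the shift-based per-CPU indexing that follows.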
+4 -4
arch/riscv/include/asm/uaccess.h
··· 209 209 err = 0; \ 210 210 break; \ 211 211 __gu_failed: \ 212 - x = 0; \ 212 + x = (__typeof__(x))0; \ 213 213 err = -EFAULT; \ 214 214 } while (0) 215 215 ··· 311 311 do { \ 312 312 if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && \ 313 313 !IS_ALIGNED((uintptr_t)__gu_ptr, sizeof(*__gu_ptr))) { \ 314 - __inttype(x) ___val = (__inttype(x))x; \ 314 + __typeof__(*(__gu_ptr)) ___val = (x); \ 315 315 if (__asm_copy_to_user_sum_enabled(__gu_ptr, &(___val), sizeof(*__gu_ptr))) \ 316 316 goto label; \ 317 317 break; \ ··· 438 438 } 439 439 440 440 #define __get_kernel_nofault(dst, src, type, err_label) \ 441 - __get_user_nocheck(*((type *)(dst)), (type *)(src), err_label) 441 + __get_user_nocheck(*((type *)(dst)), (__force __user type *)(src), err_label) 442 442 443 443 #define __put_kernel_nofault(dst, src, type, err_label) \ 444 - __put_user_nocheck(*((type *)(src)), (type *)(dst), err_label) 444 + __put_user_nocheck(*((type *)(src)), (__force __user type *)(dst), err_label) 445 445 446 446 static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len) 447 447 {
+1 -1
arch/riscv/kernel/entry.S
··· 46 46 * a0 = &new_vmalloc[BIT_WORD(cpu)] 47 47 * a1 = BIT_MASK(cpu) 48 48 */ 49 - REG_L a2, TASK_TI_CPU(tp) 49 + lw a2, TASK_TI_CPU(tp) 50 50 /* 51 51 * Compute the new_vmalloc element position: 52 52 * (cpu / 64) * 8 = (cpu >> 6) << 3
+2 -2
arch/riscv/kernel/kexec_elf.c
··· 28 28 int i; 29 29 int ret = 0; 30 30 size_t size; 31 - struct kexec_buf kbuf; 31 + struct kexec_buf kbuf = {}; 32 32 const struct elf_phdr *phdr; 33 33 34 34 kbuf.image = image; ··· 66 66 { 67 67 int i; 68 68 int ret; 69 - struct kexec_buf kbuf; 69 + struct kexec_buf kbuf = {}; 70 70 const struct elf_phdr *phdr; 71 71 unsigned long lowest_paddr = ULONG_MAX; 72 72 unsigned long lowest_vaddr = ULONG_MAX;
+1 -1
arch/riscv/kernel/kexec_image.c
··· 41 41 struct riscv_image_header *h; 42 42 u64 flags; 43 43 bool be_image, be_kernel; 44 - struct kexec_buf kbuf; 44 + struct kexec_buf kbuf = {}; 45 45 int ret; 46 46 47 47 /* Check Image header */
+1 -1
arch/riscv/kernel/machine_kexec_file.c
··· 261 261 int ret; 262 262 void *fdt; 263 263 unsigned long initrd_pbase = 0UL; 264 - struct kexec_buf kbuf; 264 + struct kexec_buf kbuf = {}; 265 265 char *modified_cmdline = NULL; 266 266 267 267 kbuf.image = image;
+2 -2
arch/riscv/net/bpf_jit_comp64.c
··· 1356 1356 emit_mv(rd, rs, ctx); 1357 1357 #ifdef CONFIG_SMP 1358 1358 /* Load current CPU number in T1 */ 1359 - emit_ld(RV_REG_T1, offsetof(struct thread_info, cpu), 1359 + emit_lw(RV_REG_T1, offsetof(struct thread_info, cpu), 1360 1360 RV_REG_TP, ctx); 1361 1361 /* Load address of __per_cpu_offset array in T2 */ 1362 1362 emit_addr(RV_REG_T2, (u64)&__per_cpu_offset, extra_pass, ctx); ··· 1763 1763 */ 1764 1764 if (insn->src_reg == 0 && insn->imm == BPF_FUNC_get_smp_processor_id) { 1765 1765 /* Load current CPU number in R0 */ 1766 - emit_ld(bpf_to_rv_reg(BPF_REG_0, ctx), offsetof(struct thread_info, cpu), 1766 + emit_lw(bpf_to_rv_reg(BPF_REG_0, ctx), offsetof(struct thread_info, cpu), 1767 1767 RV_REG_TP, ctx); 1768 1768 break; 1769 1769 }
+1 -1
arch/s390/kernel/kexec_elf.c
··· 16 16 static int kexec_file_add_kernel_elf(struct kimage *image, 17 17 struct s390_load_data *data) 18 18 { 19 - struct kexec_buf buf; 19 + struct kexec_buf buf = {}; 20 20 const Elf_Ehdr *ehdr; 21 21 const Elf_Phdr *phdr; 22 22 Elf_Addr entry;
+1 -1
arch/s390/kernel/kexec_image.c
··· 16 16 static int kexec_file_add_kernel_image(struct kimage *image, 17 17 struct s390_load_data *data) 18 18 { 19 - struct kexec_buf buf; 19 + struct kexec_buf buf = {}; 20 20 21 21 buf.image = image; 22 22
+3 -3
arch/s390/kernel/machine_kexec_file.c
··· 129 129 static int kexec_file_add_purgatory(struct kimage *image, 130 130 struct s390_load_data *data) 131 131 { 132 - struct kexec_buf buf; 132 + struct kexec_buf buf = {}; 133 133 int ret; 134 134 135 135 buf.image = image; ··· 152 152 static int kexec_file_add_initrd(struct kimage *image, 153 153 struct s390_load_data *data) 154 154 { 155 - struct kexec_buf buf; 155 + struct kexec_buf buf = {}; 156 156 int ret; 157 157 158 158 buf.image = image; ··· 184 184 { 185 185 __u32 *lc_ipl_parmblock_ptr; 186 186 unsigned int len, ncerts; 187 - struct kexec_buf buf; 187 + struct kexec_buf buf = {}; 188 188 unsigned long addr; 189 189 void *ptr, *end; 190 190 int ret;
+2 -2
arch/s390/kernel/perf_cpum_cf.c
··· 760 760 break; 761 761 762 762 case PERF_TYPE_HARDWARE: 763 - if (is_sampling_event(event)) /* No sampling support */ 764 - return -ENOENT; 765 763 ev = attr->config; 766 764 if (!attr->exclude_user && attr->exclude_kernel) { 767 765 /* ··· 857 859 unsigned int type = event->attr.type; 858 860 int err = -ENOENT; 859 861 862 + if (is_sampling_event(event)) /* No sampling support */ 863 + return err; 860 864 if (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_RAW) 861 865 err = __hw_perf_event_init(event, type); 862 866 else if (event->pmu->type == type)
+2 -2
arch/s390/kernel/perf_pai_crypto.c
··· 285 285 /* PAI crypto PMU registered as PERF_TYPE_RAW, check event type */ 286 286 if (a->type != PERF_TYPE_RAW && event->pmu->type != a->type) 287 287 return -ENOENT; 288 - /* PAI crypto event must be in valid range */ 288 + /* PAI crypto event must be in valid range, try others if not */ 289 289 if (a->config < PAI_CRYPTO_BASE || 290 290 a->config > PAI_CRYPTO_BASE + paicrypt_cnt) 291 - return -EINVAL; 291 + return -ENOENT; 292 292 /* Allow only CRYPTO_ALL for sampling */ 293 293 if (a->sample_period && a->config != PAI_CRYPTO_BASE) 294 294 return -EINVAL;
+1 -1
arch/s390/kernel/perf_pai_ext.c
··· 265 265 event->hw.config_base = offsetof(struct paiext_cb, acc); 266 266 return 0; 267 267 } 268 - return -EINVAL; 268 + return -ENOENT; 269 269 } 270 270 271 271 /* Might be called on different CPU than the one the event is intended for. */
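The `-EINVAL` → `-ENOENT` changes here and in perf_pai_crypto.c follow perf's PMU probing convention: `-ENOENT` means "not my event, try the next PMU", while any other error aborts the search. A hypothetical sketch of that contract; the PMU names, configs, and probe loop are invented for illustration:

```c
#include <errno.h>

/* Hypothetical PMUs: each init callback returns 0 on success, -ENOENT for
 * "not my event", or a hard error. */
static int pmu_a_init(int config) { return config == 1 ? 0 : -ENOENT; }
static int pmu_b_init(int config) { return config == 2 ? 0 : -ENOENT; }
/* A PMU that wrongly reports -EINVAL for events outside its range: */
static int pmu_c_init(int config) { return config == 3 ? 0 : -EINVAL; }

typedef int (*pmu_init_t)(int);

/* Sketch of the core's probe loop: -ENOENT keeps the search going,
 * anything else (success or a hard error) ends it. */
static int perf_try_init(int config)
{
    pmu_init_t pmus[] = { pmu_c_init, pmu_a_init, pmu_b_init };
    for (unsigned i = 0; i < sizeof(pmus) / sizeof(pmus[0]); i++) {
        int err = pmus[i](config);
        if (err != -ENOENT)
            return err;
    }
    return -ENOENT;
}
```

With `pmu_c_init` probed first, its spurious `-EINVAL` stops the search before `pmu_a_init` ever sees config 1; returning `-ENOENT` instead would let the event fall through to the PMU that actually owns it, which is what the patch restores for the PAI PMUs.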
-2
arch/s390/mm/pgtable.c
··· 335 335 int nodat; 336 336 struct mm_struct *mm = vma->vm_mm; 337 337 338 - preempt_disable(); 339 338 pgste = ptep_xchg_start(mm, addr, ptep); 340 339 nodat = !!(pgste_val(pgste) & _PGSTE_GPS_NODAT); 341 340 old = ptep_flush_lazy(mm, addr, ptep, nodat); ··· 359 360 } else { 360 361 set_pte(ptep, pte); 361 362 } 362 - preempt_enable(); 363 363 } 364 364 365 365 static inline void pmdp_idte_local(struct mm_struct *mm,
+9
arch/x86/Kconfig
··· 2701 2701 security vulnerability on AMD CPUs which can lead to forwarding of 2702 2702 invalid info to subsequent instructions and thus can affect their 2703 2703 timing and thereby cause a leakage. 2704 + 2705 + config MITIGATION_VMSCAPE 2706 + bool "Mitigate VMSCAPE" 2707 + depends on KVM 2708 + default y 2709 + help 2710 + Enable mitigation for VMSCAPE attacks. VMSCAPE is a hardware security 2711 + vulnerability on Intel and AMD CPUs that may allow a guest to do 2712 + Spectre v2 style attacks on userspace hypervisor. 2704 2713 endif 2705 2714 2706 2715 config ARCH_HAS_ADD_PAGES
+2
arch/x86/include/asm/cpufeatures.h
··· 495 495 #define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */ 496 496 #define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */ 497 497 #define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */ 498 + #define X86_FEATURE_IBPB_EXIT_TO_USER (21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */ 498 499 499 500 /* 500 501 * BUG word(s) ··· 552 551 #define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */ 553 552 #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */ 554 553 #define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */ 554 + #define X86_BUG_VMSCAPE X86_BUG( 1*32+10) /* "vmscape" CPU is affected by VMSCAPE attacks from guests */ 555 555 #endif /* _ASM_X86_CPUFEATURES_H */
+7
arch/x86/include/asm/entry-common.h
··· 93 93 * 8 (ia32) bits. 94 94 */ 95 95 choose_random_kstack_offset(rdtsc()); 96 + 97 + /* Avoid unnecessary reads of 'x86_ibpb_exit_to_user' */ 98 + if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) && 99 + this_cpu_read(x86_ibpb_exit_to_user)) { 100 + indirect_branch_prediction_barrier(); 101 + this_cpu_write(x86_ibpb_exit_to_user, false); 102 + } 96 103 } 97 104 #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare 98 105
+2
arch/x86/include/asm/nospec-branch.h
··· 530 530 : "memory"); 531 531 } 532 532 533 + DECLARE_PER_CPU(bool, x86_ibpb_exit_to_user); 534 + 533 535 static inline void indirect_branch_prediction_barrier(void) 534 536 { 535 537 asm_inline volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
+203 -82
arch/x86/kernel/cpu/bugs.c
··· 96 96 static void __init its_apply_mitigation(void); 97 97 static void __init tsa_select_mitigation(void); 98 98 static void __init tsa_apply_mitigation(void); 99 + static void __init vmscape_select_mitigation(void); 100 + static void __init vmscape_update_mitigation(void); 101 + static void __init vmscape_apply_mitigation(void); 99 102 100 103 /* The base value of the SPEC_CTRL MSR without task-specific bits set */ 101 104 u64 x86_spec_ctrl_base; ··· 107 104 /* The current value of the SPEC_CTRL MSR with task-specific bits set */ 108 105 DEFINE_PER_CPU(u64, x86_spec_ctrl_current); 109 106 EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current); 107 + 108 + /* 109 + * Set when the CPU has run a potentially malicious guest. An IBPB will 110 + * be needed before running userspace. That IBPB will flush the branch 111 + * predictor content. 112 + */ 113 + DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user); 114 + EXPORT_PER_CPU_SYMBOL_GPL(x86_ibpb_exit_to_user); 110 115 111 116 u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB; 112 117 ··· 273 262 its_select_mitigation(); 274 263 bhi_select_mitigation(); 275 264 tsa_select_mitigation(); 265 + vmscape_select_mitigation(); 276 266 277 267 /* 278 268 * After mitigations are selected, some may need to update their ··· 305 293 bhi_update_mitigation(); 306 294 /* srso_update_mitigation() depends on retbleed_update_mitigation(). */ 307 295 srso_update_mitigation(); 296 + vmscape_update_mitigation(); 308 297 309 298 spectre_v1_apply_mitigation(); 310 299 spectre_v2_apply_mitigation(); ··· 323 310 its_apply_mitigation(); 324 311 bhi_apply_mitigation(); 325 312 tsa_apply_mitigation(); 313 + vmscape_apply_mitigation(); 326 314 } 327 315 328 316 /* ··· 2552 2538 } 2553 2539 } 2554 2540 2555 - #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n" 2556 - #define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n" 2557 - #define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n" 2558 - 2559 - void cpu_bugs_smt_update(void) 2560 - { 2561 - mutex_lock(&spec_ctrl_mutex); 2562 - 2563 - if (sched_smt_active() && unprivileged_ebpf_enabled() && 2564 - spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE) 2565 - pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG); 2566 - 2567 - switch (spectre_v2_user_stibp) { 2568 - case SPECTRE_V2_USER_NONE: 2569 - break; 2570 - case SPECTRE_V2_USER_STRICT: 2571 - case SPECTRE_V2_USER_STRICT_PREFERRED: 2572 - update_stibp_strict(); 2573 - break; 2574 - case SPECTRE_V2_USER_PRCTL: 2575 - case SPECTRE_V2_USER_SECCOMP: 2576 - update_indir_branch_cond(); 2577 - break; 2578 - } 2579 - 2580 - switch (mds_mitigation) { 2581 - case MDS_MITIGATION_FULL: 2582 - case MDS_MITIGATION_AUTO: 2583 - case MDS_MITIGATION_VMWERV: 2584 - if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY)) 2585 - pr_warn_once(MDS_MSG_SMT); 2586 - update_mds_branch_idle(); 2587 - break; 2588 - case MDS_MITIGATION_OFF: 2589 - break; 2590 - } 2591 - 2592 - switch (taa_mitigation) { 2593 - case TAA_MITIGATION_VERW: 2594 - case TAA_MITIGATION_AUTO: 2595 - case TAA_MITIGATION_UCODE_NEEDED: 2596 - if (sched_smt_active()) 2597 - pr_warn_once(TAA_MSG_SMT); 2598 - break; 2599 - case TAA_MITIGATION_TSX_DISABLED: 2600 - case TAA_MITIGATION_OFF: 2601 - break; 2602 - } 2603 - 2604 - switch (mmio_mitigation) { 2605 - case MMIO_MITIGATION_VERW: 2606 - case MMIO_MITIGATION_AUTO: 2607 - case MMIO_MITIGATION_UCODE_NEEDED: 2608 - if (sched_smt_active()) 2609 - pr_warn_once(MMIO_MSG_SMT); 2610 - break; 2611 - case MMIO_MITIGATION_OFF: 2612 - break; 2613 - } 2614 - 2615 - switch (tsa_mitigation) { 2616 - case TSA_MITIGATION_USER_KERNEL: 2617 - case 
TSA_MITIGATION_VM: 2618 - case TSA_MITIGATION_AUTO: 2619 - case TSA_MITIGATION_FULL: 2620 - /* 2621 - * TSA-SQ can potentially lead to info leakage between 2622 - * SMT threads. 2623 - */ 2624 - if (sched_smt_active()) 2625 - static_branch_enable(&cpu_buf_idle_clear); 2626 - else 2627 - static_branch_disable(&cpu_buf_idle_clear); 2628 - break; 2629 - case TSA_MITIGATION_NONE: 2630 - case TSA_MITIGATION_UCODE_NEEDED: 2631 - break; 2632 - } 2633 - 2634 - mutex_unlock(&spec_ctrl_mutex); 2635 - } 2636 - 2637 2541 #undef pr_fmt 2638 2542 #define pr_fmt(fmt) "Speculative Store Bypass: " fmt 2639 2543 ··· 3263 3331 } 3264 3332 3265 3333 #undef pr_fmt 3334 + #define pr_fmt(fmt) "VMSCAPE: " fmt 3335 + 3336 + enum vmscape_mitigations { 3337 + VMSCAPE_MITIGATION_NONE, 3338 + VMSCAPE_MITIGATION_AUTO, 3339 + VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER, 3340 + VMSCAPE_MITIGATION_IBPB_ON_VMEXIT, 3341 + }; 3342 + 3343 + static const char * const vmscape_strings[] = { 3344 + [VMSCAPE_MITIGATION_NONE] = "Vulnerable", 3345 + /* [VMSCAPE_MITIGATION_AUTO] */ 3346 + [VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER] = "Mitigation: IBPB before exit to userspace", 3347 + [VMSCAPE_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT", 3348 + }; 3349 + 3350 + static enum vmscape_mitigations vmscape_mitigation __ro_after_init = 3351 + IS_ENABLED(CONFIG_MITIGATION_VMSCAPE) ? 
	VMSCAPE_MITIGATION_AUTO : VMSCAPE_MITIGATION_NONE;
+
+static int __init vmscape_parse_cmdline(char *str)
+{
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off")) {
+		vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
+	} else if (!strcmp(str, "ibpb")) {
+		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
+	} else if (!strcmp(str, "force")) {
+		setup_force_cpu_bug(X86_BUG_VMSCAPE);
+		vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
+	} else {
+		pr_err("Ignoring unknown vmscape=%s option.\n", str);
+	}
+
+	return 0;
+}
+early_param("vmscape", vmscape_parse_cmdline);
+
+static void __init vmscape_select_mitigation(void)
+{
+	if (cpu_mitigations_off() ||
+	    !boot_cpu_has_bug(X86_BUG_VMSCAPE) ||
+	    !boot_cpu_has(X86_FEATURE_IBPB)) {
+		vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
+		return;
+	}
+
+	if (vmscape_mitigation == VMSCAPE_MITIGATION_AUTO)
+		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
+}
+
+static void __init vmscape_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_VMSCAPE))
+		return;
+
+	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB ||
+	    srso_mitigation == SRSO_MITIGATION_IBPB_ON_VMEXIT)
+		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_ON_VMEXIT;
+
+	pr_info("%s\n", vmscape_strings[vmscape_mitigation]);
+}
+
+static void __init vmscape_apply_mitigation(void)
+{
+	if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
+		setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_TO_USER);
+}
+
+#undef pr_fmt
 #define pr_fmt(fmt) fmt
+
+#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
+#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
+#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
+#define VMSCAPE_MSG_SMT "VMSCAPE: SMT on, STIBP is required for full protection. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/vmscape.html for more details.\n"
+
+void cpu_bugs_smt_update(void)
+{
+	mutex_lock(&spec_ctrl_mutex);
+
+	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
+	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
+		pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
+
+	switch (spectre_v2_user_stibp) {
+	case SPECTRE_V2_USER_NONE:
+		break;
+	case SPECTRE_V2_USER_STRICT:
+	case SPECTRE_V2_USER_STRICT_PREFERRED:
+		update_stibp_strict();
+		break;
+	case SPECTRE_V2_USER_PRCTL:
+	case SPECTRE_V2_USER_SECCOMP:
+		update_indir_branch_cond();
+		break;
+	}
+
+	switch (mds_mitigation) {
+	case MDS_MITIGATION_FULL:
+	case MDS_MITIGATION_AUTO:
+	case MDS_MITIGATION_VMWERV:
+		if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
+			pr_warn_once(MDS_MSG_SMT);
+		update_mds_branch_idle();
+		break;
+	case MDS_MITIGATION_OFF:
+		break;
+	}
+
+	switch (taa_mitigation) {
+	case TAA_MITIGATION_VERW:
+	case TAA_MITIGATION_AUTO:
+	case TAA_MITIGATION_UCODE_NEEDED:
+		if (sched_smt_active())
+			pr_warn_once(TAA_MSG_SMT);
+		break;
+	case TAA_MITIGATION_TSX_DISABLED:
+	case TAA_MITIGATION_OFF:
+		break;
+	}
+
+	switch (mmio_mitigation) {
+	case MMIO_MITIGATION_VERW:
+	case MMIO_MITIGATION_AUTO:
+	case MMIO_MITIGATION_UCODE_NEEDED:
+		if (sched_smt_active())
+			pr_warn_once(MMIO_MSG_SMT);
+		break;
+	case MMIO_MITIGATION_OFF:
+		break;
+	}
+
+	switch (tsa_mitigation) {
+	case TSA_MITIGATION_USER_KERNEL:
+	case TSA_MITIGATION_VM:
+	case TSA_MITIGATION_AUTO:
+	case TSA_MITIGATION_FULL:
+		/*
+		 * TSA-SQ can potentially lead to info leakage between
+		 * SMT threads.
+		 */
+		if (sched_smt_active())
+			static_branch_enable(&cpu_buf_idle_clear);
+		else
+			static_branch_disable(&cpu_buf_idle_clear);
+		break;
+	case TSA_MITIGATION_NONE:
+	case TSA_MITIGATION_UCODE_NEEDED:
+		break;
+	}
+
+	switch (vmscape_mitigation) {
+	case VMSCAPE_MITIGATION_NONE:
+	case VMSCAPE_MITIGATION_AUTO:
+		break;
+	case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT:
+	case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
+		/*
+		 * Hypervisors can be attacked across-threads, warn for SMT when
+		 * STIBP is not already enabled system-wide.
+		 *
+		 * Intel eIBRS (!AUTOIBRS) implies STIBP on.
+		 */
+		if (!sched_smt_active() ||
+		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
+		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ||
+		    (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
+		     !boot_cpu_has(X86_FEATURE_AUTOIBRS)))
+			break;
+		pr_warn_once(VMSCAPE_MSG_SMT);
+		break;
+	}
+
+	mutex_unlock(&spec_ctrl_mutex);
+}

 #ifdef CONFIG_SYSFS

···
 	return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]);
 }

+static ssize_t vmscape_show_state(char *buf)
+{
+	return sysfs_emit(buf, "%s\n", vmscape_strings[vmscape_mitigation]);
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
···
 	case X86_BUG_TSA:
 		return tsa_show_state(buf);
+
+	case X86_BUG_VMSCAPE:
+		return vmscape_show_state(buf);

 	default:
 		break;
···
 ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_TSA);
+}
+
+ssize_t cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_VMSCAPE);
 }
 #endif
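The `vmscape=` parser above dispatches on a fixed set of option strings and leaves the current setting untouched for unknown values. A standalone userspace sketch of that dispatch (the enum names mirror the kernel's, but `parse_vmscape` itself is a hypothetical model, not kernel API):

```c
#include <string.h>

/* Model of the vmscape= cmdline dispatch; illustrative only. */
enum vmscape_mitigation {
	VMSCAPE_MITIGATION_NONE,
	VMSCAPE_MITIGATION_AUTO,
	VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
};

/* Returns the mitigation chosen for a given vmscape= value; unknown
 * strings leave the current setting unchanged, as the kernel parser does
 * (it only logs a warning). */
static enum vmscape_mitigation parse_vmscape(const char *str,
					     enum vmscape_mitigation cur)
{
	if (!str)
		return cur;
	if (!strcmp(str, "off"))
		return VMSCAPE_MITIGATION_NONE;
	if (!strcmp(str, "ibpb"))
		return VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
	if (!strcmp(str, "force"))
		return VMSCAPE_MITIGATION_AUTO; /* kernel also force-sets the bug bit */
	return cur; /* unknown option: ignored */
}
```

`VMSCAPE_MITIGATION_AUTO` is later resolved to IBPB-before-exit-to-user by `vmscape_select_mitigation()` when the CPU is affected and IBPB is available.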
+55 -31
arch/x86/kernel/cpu/common.c
···
 #define ITS_NATIVE_ONLY	BIT(9)
 /* CPU is affected by Transient Scheduler Attacks */
 #define TSA		BIT(10)
+/* CPU is affected by VMSCAPE */
+#define VMSCAPE		BIT(11)

 static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
-	VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE,	X86_STEP_MAX,	SRBDS),
-	VULNBL_INTEL_STEPS(INTEL_HASWELL,	X86_STEP_MAX,	SRBDS),
-	VULNBL_INTEL_STEPS(INTEL_HASWELL_L,	X86_STEP_MAX,	SRBDS),
-	VULNBL_INTEL_STEPS(INTEL_HASWELL_G,	X86_STEP_MAX,	SRBDS),
-	VULNBL_INTEL_STEPS(INTEL_HASWELL_X,	X86_STEP_MAX,	MMIO),
-	VULNBL_INTEL_STEPS(INTEL_BROADWELL_D,	X86_STEP_MAX,	MMIO),
-	VULNBL_INTEL_STEPS(INTEL_BROADWELL_G,	X86_STEP_MAX,	SRBDS),
-	VULNBL_INTEL_STEPS(INTEL_BROADWELL_X,	X86_STEP_MAX,	MMIO),
-	VULNBL_INTEL_STEPS(INTEL_BROADWELL,	X86_STEP_MAX,	SRBDS),
-	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X,	0x5,		MMIO | RETBLEED | GDS),
-	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | ITS),
-	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS),
-	VULNBL_INTEL_STEPS(INTEL_SKYLAKE,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS),
-	VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L,	0xb,		MMIO | RETBLEED | GDS | SRBDS),
-	VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | ITS),
-	VULNBL_INTEL_STEPS(INTEL_KABYLAKE,	0xc,		MMIO | RETBLEED | GDS | SRBDS),
-	VULNBL_INTEL_STEPS(INTEL_KABYLAKE,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | ITS),
-	VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L,	X86_STEP_MAX,	RETBLEED),
+	VULNBL_INTEL_STEPS(INTEL_SANDYBRIDGE_X,	X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_SANDYBRIDGE,	X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE_X,	X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE,	X86_STEP_MAX,	SRBDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_HASWELL,	X86_STEP_MAX,	SRBDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_HASWELL_L,	X86_STEP_MAX,	SRBDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_HASWELL_G,	X86_STEP_MAX,	SRBDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_HASWELL_X,	X86_STEP_MAX,	MMIO | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_BROADWELL_D,	X86_STEP_MAX,	MMIO | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_BROADWELL_X,	X86_STEP_MAX,	MMIO | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_BROADWELL_G,	X86_STEP_MAX,	SRBDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_BROADWELL,	X86_STEP_MAX,	SRBDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X,	0x5,		MMIO | RETBLEED | GDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | ITS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_SKYLAKE,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L,	0xb,		MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_KABYLAKE,	0xc,		MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_KABYLAKE,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L,	X86_STEP_MAX,	RETBLEED | VMSCAPE),
 	VULNBL_INTEL_STEPS(INTEL_ICELAKE_L,	X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
 	VULNBL_INTEL_STEPS(INTEL_ICELAKE_D,	X86_STEP_MAX,	MMIO | GDS | ITS | ITS_NATIVE_ONLY),
 	VULNBL_INTEL_STEPS(INTEL_ICELAKE_X,	X86_STEP_MAX,	MMIO | GDS | ITS | ITS_NATIVE_ONLY),
-	VULNBL_INTEL_STEPS(INTEL_COMETLAKE,	X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
-	VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L,	0x0,		MMIO | RETBLEED | ITS),
-	VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L,	X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
+	VULNBL_INTEL_STEPS(INTEL_COMETLAKE,	X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L,	0x0,		MMIO | RETBLEED | ITS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L,	X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE),
 	VULNBL_INTEL_STEPS(INTEL_TIGERLAKE_L,	X86_STEP_MAX,	GDS | ITS | ITS_NATIVE_ONLY),
 	VULNBL_INTEL_STEPS(INTEL_TIGERLAKE,	X86_STEP_MAX,	GDS | ITS | ITS_NATIVE_ONLY),
 	VULNBL_INTEL_STEPS(INTEL_LAKEFIELD,	X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED),
 	VULNBL_INTEL_STEPS(INTEL_ROCKETLAKE,	X86_STEP_MAX,	MMIO | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
-	VULNBL_INTEL_TYPE(INTEL_ALDERLAKE,	ATOM,		RFDS),
-	VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L,	X86_STEP_MAX,	RFDS),
-	VULNBL_INTEL_TYPE(INTEL_RAPTORLAKE,	ATOM,		RFDS),
-	VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_P,	X86_STEP_MAX,	RFDS),
-	VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_S,	X86_STEP_MAX,	RFDS),
-	VULNBL_INTEL_STEPS(INTEL_ATOM_GRACEMONT, X86_STEP_MAX,	RFDS),
+	VULNBL_INTEL_TYPE(INTEL_ALDERLAKE,	ATOM,		RFDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_ALDERLAKE,	X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L,	X86_STEP_MAX,	RFDS | VMSCAPE),
+	VULNBL_INTEL_TYPE(INTEL_RAPTORLAKE,	ATOM,		RFDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE,	X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_P,	X86_STEP_MAX,	RFDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_S,	X86_STEP_MAX,	RFDS | VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_METEORLAKE_L,	X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_ARROWLAKE_H,	X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_ARROWLAKE,	X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_ARROWLAKE_U,	X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_LUNARLAKE_M,	X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_SAPPHIRERAPIDS_X, X86_STEP_MAX, VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_GRANITERAPIDS_X, X86_STEP_MAX,	VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_EMERALDRAPIDS_X, X86_STEP_MAX, VMSCAPE),
+	VULNBL_INTEL_STEPS(INTEL_ATOM_GRACEMONT, X86_STEP_MAX,	RFDS | VMSCAPE),
 	VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT,	X86_STEP_MAX,	MMIO | MMIO_SBDS | RFDS),
 	VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT_D, X86_STEP_MAX,	MMIO | RFDS),
 	VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT_L, X86_STEP_MAX,	MMIO | MMIO_SBDS | RFDS),
 	VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT, X86_STEP_MAX,	RFDS),
 	VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT_D, X86_STEP_MAX, RFDS),
 	VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT_PLUS, X86_STEP_MAX, RFDS),
+	VULNBL_INTEL_STEPS(INTEL_ATOM_CRESTMONT_X, X86_STEP_MAX, VMSCAPE),

 	VULNBL_AMD(0x15, RETBLEED),
 	VULNBL_AMD(0x16, RETBLEED),
-	VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
-	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO),
-	VULNBL_AMD(0x19, SRSO | TSA),
-	VULNBL_AMD(0x1a, SRSO),
+	VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
+	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
+	VULNBL_AMD(0x19, SRSO | TSA | VMSCAPE),
+	VULNBL_AMD(0x1a, SRSO | VMSCAPE),
 	{}
 };
···
 			setup_force_cpu_bug(X86_BUG_TSA);
 		}
 	}
+
+	/*
+	 * Set the bug only on bare-metal. A nested hypervisor should already be
+	 * deploying IBPB to isolate itself from nested guests.
+	 */
+	if (cpu_matches(cpu_vuln_blacklist, VMSCAPE) &&
+	    !boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		setup_force_cpu_bug(X86_BUG_VMSCAPE);

 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;
+9
arch/x86/kvm/x86.c
··· 11011 11011 wrmsrq(MSR_IA32_XFD_ERR, 0); 11012 11012 11013 11013 /* 11014 + * Mark this CPU as needing a branch predictor flush before running 11015 + * userspace. Must be done before enabling preemption to ensure it gets 11016 + * set for the CPU that actually ran the guest, and not the CPU that it 11017 + * may migrate to. 11018 + */ 11019 + if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER)) 11020 + this_cpu_write(x86_ibpb_exit_to_user, true); 11021 + 11022 + /* 11014 11023 * Consume any pending interrupts, including the possible source of 11015 11024 * VM-Exit on SVM and any ticks that occur between VM-Exit and now. 11016 11025 * An instruction is required after local_irq_enable() to fully unblock
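The KVM hunk above defers the branch-predictor flush: VM-exit only marks the CPU as dirty, and the IBPB is issued on the next return to userspace, so back-to-back VM-exits with no intervening userspace cost a single barrier. A standalone userspace model of that pattern (names and the counter are illustrative, not kernel API):

```c
#include <stdbool.h>

/* Stand-ins for the per-CPU x86_ibpb_exit_to_user flag and for the
 * barrier itself; this sketch counts flushes instead of issuing IBPB. */
static bool ibpb_pending;
static int ibpb_count;

/* Runs right after VM-exit, before the vCPU task can migrate or be
 * preempted, so the flag lands on the CPU that actually ran the guest. */
static void on_vm_exit(void)
{
	ibpb_pending = true;
}

/* Runs on the exit-to-userspace path: flush only if a guest ran on this
 * CPU since the last flush. */
static void on_exit_to_user(void)
{
	if (ibpb_pending) {
		ibpb_count++; /* real code would execute an IBPB here */
		ibpb_pending = false;
	}
}
```

If userspace never runs between VM-exit and the next VM-entry, `on_exit_to_user()` is never reached and no barrier is paid, which is exactly the conditional-IBPB behaviour the vmscape documentation describes.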
+8 -5
block/fops.c
··· 7 7 #include <linux/init.h> 8 8 #include <linux/mm.h> 9 9 #include <linux/blkdev.h> 10 + #include <linux/blk-integrity.h> 10 11 #include <linux/buffer_head.h> 11 12 #include <linux/mpage.h> 12 13 #include <linux/uio.h> ··· 55 54 struct bio bio; 56 55 ssize_t ret; 57 56 58 - WARN_ON_ONCE(iocb->ki_flags & IOCB_HAS_METADATA); 59 57 if (nr_pages <= DIO_INLINE_BIO_VECS) 60 58 vecs = inline_vecs; 61 59 else { ··· 131 131 if (bio->bi_status && !dio->bio.bi_status) 132 132 dio->bio.bi_status = bio->bi_status; 133 133 134 - if (!is_sync && (dio->iocb->ki_flags & IOCB_HAS_METADATA)) 134 + if (bio_integrity(bio)) 135 135 bio_integrity_unmap_user(bio); 136 136 137 137 if (atomic_dec_and_test(&dio->ref)) { ··· 233 233 } 234 234 bio->bi_opf |= REQ_NOWAIT; 235 235 } 236 - if (!is_sync && (iocb->ki_flags & IOCB_HAS_METADATA)) { 236 + if (iocb->ki_flags & IOCB_HAS_METADATA) { 237 237 ret = bio_integrity_map_iter(bio, iocb->private); 238 238 if (unlikely(ret)) 239 239 goto fail; ··· 301 301 ret = blk_status_to_errno(bio->bi_status); 302 302 } 303 303 304 - if (iocb->ki_flags & IOCB_HAS_METADATA) 304 + if (bio_integrity(bio)) 305 305 bio_integrity_unmap_user(bio); 306 306 307 307 iocb->ki_complete(iocb, ret); ··· 422 422 } 423 423 424 424 nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1); 425 - if (likely(nr_pages <= BIO_MAX_VECS)) { 425 + if (likely(nr_pages <= BIO_MAX_VECS && 426 + !(iocb->ki_flags & IOCB_HAS_METADATA))) { 426 427 if (is_sync_kiocb(iocb)) 427 428 return __blkdev_direct_IO_simple(iocb, iter, bdev, 428 429 nr_pages); ··· 688 687 689 688 if (bdev_can_atomic_write(bdev)) 690 689 filp->f_mode |= FMODE_CAN_ATOMIC_WRITE; 690 + if (blk_get_integrity(bdev->bd_disk)) 691 + filp->f_mode |= FMODE_HAS_METADATA; 691 692 692 693 ret = bdev_open(bdev, mode, filp->private_data, NULL, filp); 693 694 if (ret)
+39
crypto/sha1.c
···
 	return 0;
 }

+static int __crypto_sha1_export_core(const struct sha1_ctx *ctx, void *out)
+{
+	memcpy(out, ctx, offsetof(struct sha1_ctx, buf));
+	return 0;
+}
+
+static int __crypto_sha1_import_core(struct sha1_ctx *ctx, const void *in)
+{
+	memcpy(ctx, in, offsetof(struct sha1_ctx, buf));
+	return 0;
+}
+
 const u8 sha1_zero_message_hash[SHA1_DIGEST_SIZE] = {
 	0xda, 0x39, 0xa3, 0xee, 0x5e, 0x6b, 0x4b, 0x0d,
 	0x32, 0x55, 0xbf, 0xef, 0x95, 0x60, 0x18, 0x90,
···
 static int crypto_sha1_import(struct shash_desc *desc, const void *in)
 {
 	return __crypto_sha1_import(SHA1_CTX(desc), in);
+}
+
+static int crypto_sha1_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha1_export_core(SHA1_CTX(desc), out);
+}
+
+static int crypto_sha1_import_core(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha1_import_core(SHA1_CTX(desc), in);
 }

 #define HMAC_SHA1_KEY(tfm) ((struct hmac_sha1_key *)crypto_shash_ctx(tfm))
···
 	return __crypto_sha1_import(&ctx->sha_ctx, in);
 }

+static int crypto_hmac_sha1_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha1_export_core(&HMAC_SHA1_CTX(desc)->sha_ctx, out);
+}
+
+static int crypto_hmac_sha1_import_core(struct shash_desc *desc, const void *in)
+{
+	struct hmac_sha1_ctx *ctx = HMAC_SHA1_CTX(desc);
+
+	ctx->ostate = HMAC_SHA1_KEY(desc->tfm)->ostate;
+	return __crypto_sha1_import_core(&ctx->sha_ctx, in);
+}
+
 static struct shash_alg algs[] = {
 	{
 		.base.cra_name		= "sha1",
···
 		.digest			= crypto_sha1_digest,
 		.export			= crypto_sha1_export,
 		.import			= crypto_sha1_import,
+		.export_core		= crypto_sha1_export_core,
+		.import_core		= crypto_sha1_import_core,
 		.descsize		= sizeof(struct sha1_ctx),
 		.statesize		= SHA1_SHASH_STATE_SIZE,
 	},
···
 		.digest			= crypto_hmac_sha1_digest,
 		.export			= crypto_hmac_sha1_export,
 		.import			= crypto_hmac_sha1_import,
+		.export_core		= crypto_hmac_sha1_export_core,
+		.import_core		= crypto_hmac_sha1_import_core,
 		.descsize		= sizeof(struct hmac_sha1_ctx),
 		.statesize		= SHA1_SHASH_STATE_SIZE,
 	},
+71
crypto/sha256.c
···
 	return 0;
 }

+static int __crypto_sha256_export_core(const struct __sha256_ctx *ctx,
+				       void *out)
+{
+	memcpy(out, ctx, offsetof(struct __sha256_ctx, buf));
+	return 0;
+}
+
+static int __crypto_sha256_import_core(struct __sha256_ctx *ctx, const void *in)
+{
+	memcpy(ctx, in, offsetof(struct __sha256_ctx, buf));
+	return 0;
+}
+
 /* SHA-224 */

 const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] = {
···
 	return __crypto_sha256_import(&SHA224_CTX(desc)->ctx, in);
 }

+static int crypto_sha224_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export_core(&SHA224_CTX(desc)->ctx, out);
+}
+
+static int crypto_sha224_import_core(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha256_import_core(&SHA224_CTX(desc)->ctx, in);
+}
+
 /* SHA-256 */

 const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] = {
···
 static int crypto_sha256_import(struct shash_desc *desc, const void *in)
 {
 	return __crypto_sha256_import(&SHA256_CTX(desc)->ctx, in);
+}
+
+static int crypto_sha256_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export_core(&SHA256_CTX(desc)->ctx, out);
+}
+
+static int crypto_sha256_import_core(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha256_import_core(&SHA256_CTX(desc)->ctx, in);
 }

 /* HMAC-SHA224 */
···
 	return __crypto_sha256_import(&ctx->ctx.sha_ctx, in);
 }

+static int crypto_hmac_sha224_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export_core(&HMAC_SHA224_CTX(desc)->ctx.sha_ctx,
+					   out);
+}
+
+static int crypto_hmac_sha224_import_core(struct shash_desc *desc,
+					  const void *in)
+{
+	struct hmac_sha224_ctx *ctx = HMAC_SHA224_CTX(desc);
+
+	ctx->ctx.ostate = HMAC_SHA224_KEY(desc->tfm)->key.ostate;
+	return __crypto_sha256_import_core(&ctx->ctx.sha_ctx, in);
+}
+
 /* HMAC-SHA256 */

 #define HMAC_SHA256_KEY(tfm) ((struct hmac_sha256_key *)crypto_shash_ctx(tfm))
···
 	return __crypto_sha256_import(&ctx->ctx.sha_ctx, in);
 }

+static int crypto_hmac_sha256_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export_core(&HMAC_SHA256_CTX(desc)->ctx.sha_ctx,
+					   out);
+}
+
+static int crypto_hmac_sha256_import_core(struct shash_desc *desc,
+					  const void *in)
+{
+	struct hmac_sha256_ctx *ctx = HMAC_SHA256_CTX(desc);
+
+	ctx->ctx.ostate = HMAC_SHA256_KEY(desc->tfm)->key.ostate;
+	return __crypto_sha256_import_core(&ctx->ctx.sha_ctx, in);
+}
+
 /* Algorithm definitions */

 static struct shash_alg algs[] = {
···
 		.digest			= crypto_sha224_digest,
 		.export			= crypto_sha224_export,
 		.import			= crypto_sha224_import,
+		.export_core		= crypto_sha224_export_core,
+		.import_core		= crypto_sha224_import_core,
 		.descsize		= sizeof(struct sha224_ctx),
 		.statesize		= SHA256_SHASH_STATE_SIZE,
 	},
···
 		.digest			= crypto_sha256_digest,
 		.export			= crypto_sha256_export,
 		.import			= crypto_sha256_import,
+		.export_core		= crypto_sha256_export_core,
+		.import_core		= crypto_sha256_import_core,
 		.descsize		= sizeof(struct sha256_ctx),
 		.statesize		= SHA256_SHASH_STATE_SIZE,
 	},
···
 		.digest			= crypto_hmac_sha224_digest,
 		.export			= crypto_hmac_sha224_export,
 		.import			= crypto_hmac_sha224_import,
+		.export_core		= crypto_hmac_sha224_export_core,
+		.import_core		= crypto_hmac_sha224_import_core,
 		.descsize		= sizeof(struct hmac_sha224_ctx),
 		.statesize		= SHA256_SHASH_STATE_SIZE,
 	},
···
 		.digest			= crypto_hmac_sha256_digest,
 		.export			= crypto_hmac_sha256_export,
 		.import			= crypto_hmac_sha256_import,
+		.export_core		= crypto_hmac_sha256_export_core,
+		.import_core		= crypto_hmac_sha256_import_core,
 		.descsize		= sizeof(struct hmac_sha256_ctx),
 		.statesize		= SHA256_SHASH_STATE_SIZE,
 	},
+71
crypto/sha512.c
···
 	return 0;
 }

+static int __crypto_sha512_export_core(const struct __sha512_ctx *ctx,
+				       void *out)
+{
+	memcpy(out, ctx, offsetof(struct __sha512_ctx, buf));
+	return 0;
+}
+
+static int __crypto_sha512_import_core(struct __sha512_ctx *ctx, const void *in)
+{
+	memcpy(ctx, in, offsetof(struct __sha512_ctx, buf));
+	return 0;
+}
+
 /* SHA-384 */

 const u8 sha384_zero_message_hash[SHA384_DIGEST_SIZE] = {
···
 static int crypto_sha384_import(struct shash_desc *desc, const void *in)
 {
 	return __crypto_sha512_import(&SHA384_CTX(desc)->ctx, in);
+}
+
+static int crypto_sha384_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha512_export_core(&SHA384_CTX(desc)->ctx, out);
+}
+
+static int crypto_sha384_import_core(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha512_import_core(&SHA384_CTX(desc)->ctx, in);
 }

 /* SHA-512 */
···
 	return __crypto_sha512_import(&SHA512_CTX(desc)->ctx, in);
 }

+static int crypto_sha512_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha512_export_core(&SHA512_CTX(desc)->ctx, out);
+}
+
+static int crypto_sha512_import_core(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha512_import_core(&SHA512_CTX(desc)->ctx, in);
+}
+
 /* HMAC-SHA384 */

 #define HMAC_SHA384_KEY(tfm) ((struct hmac_sha384_key *)crypto_shash_ctx(tfm))
···
 	ctx->ctx.ostate = HMAC_SHA384_KEY(desc->tfm)->key.ostate;
 	return __crypto_sha512_import(&ctx->ctx.sha_ctx, in);
+}
+
+static int crypto_hmac_sha384_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha512_export_core(&HMAC_SHA384_CTX(desc)->ctx.sha_ctx,
+					   out);
+}
+
+static int crypto_hmac_sha384_import_core(struct shash_desc *desc,
+					  const void *in)
+{
+	struct hmac_sha384_ctx *ctx = HMAC_SHA384_CTX(desc);
+
+	ctx->ctx.ostate = HMAC_SHA384_KEY(desc->tfm)->key.ostate;
+	return __crypto_sha512_import_core(&ctx->ctx.sha_ctx, in);
 }

 /* HMAC-SHA512 */
···
 	return __crypto_sha512_import(&ctx->ctx.sha_ctx, in);
 }

+static int crypto_hmac_sha512_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha512_export_core(&HMAC_SHA512_CTX(desc)->ctx.sha_ctx,
+					   out);
+}
+
+static int crypto_hmac_sha512_import_core(struct shash_desc *desc,
+					  const void *in)
+{
+	struct hmac_sha512_ctx *ctx = HMAC_SHA512_CTX(desc);
+
+	ctx->ctx.ostate = HMAC_SHA512_KEY(desc->tfm)->key.ostate;
+	return __crypto_sha512_import_core(&ctx->ctx.sha_ctx, in);
+}
+
 /* Algorithm definitions */

 static struct shash_alg algs[] = {
···
 		.digest			= crypto_sha384_digest,
 		.export			= crypto_sha384_export,
 		.import			= crypto_sha384_import,
+		.export_core		= crypto_sha384_export_core,
+		.import_core		= crypto_sha384_import_core,
 		.descsize		= sizeof(struct sha384_ctx),
 		.statesize		= SHA512_SHASH_STATE_SIZE,
 	},
···
 		.digest			= crypto_sha512_digest,
 		.export			= crypto_sha512_export,
 		.import			= crypto_sha512_import,
+		.export_core		= crypto_sha512_export_core,
+		.import_core		= crypto_sha512_import_core,
 		.descsize		= sizeof(struct sha512_ctx),
 		.statesize		= SHA512_SHASH_STATE_SIZE,
 	},
···
 		.digest			= crypto_hmac_sha384_digest,
 		.export			= crypto_hmac_sha384_export,
 		.import			= crypto_hmac_sha384_import,
+		.export_core		= crypto_hmac_sha384_export_core,
+		.import_core		= crypto_hmac_sha384_import_core,
 		.descsize		= sizeof(struct hmac_sha384_ctx),
 		.statesize		= SHA512_SHASH_STATE_SIZE,
 	},
···
 		.digest			= crypto_hmac_sha512_digest,
 		.export			= crypto_hmac_sha512_export,
 		.import			= crypto_hmac_sha512_import,
+		.export_core		= crypto_hmac_sha512_export_core,
+		.import_core		= crypto_hmac_sha512_import_core,
 		.descsize		= sizeof(struct hmac_sha512_ctx),
 		.statesize		= SHA512_SHASH_STATE_SIZE,
 	},
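The `export_core`/`import_core` helpers added across the sha1/sha256/sha512 diffs all use the same trick: serialize only the fields that precede the partial-block buffer, with `offsetof()` as the copy length. A standalone sketch of that pattern (the struct layout here is illustrative, not the kernel's actual `sha1_ctx`):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical hash context: everything before `buf` is the "core"
 * state; the partial-block buffer is deliberately excluded. */
struct sha_ctx_model {
	uint64_t state[8];      /* chaining values */
	uint64_t bytecount;     /* message length so far */
	unsigned char buf[128]; /* partial block, not part of the core export */
};

static void export_core(const struct sha_ctx_model *ctx, void *out)
{
	/* offsetof() gives exactly the size of the fields before buf */
	memcpy(out, ctx, offsetof(struct sha_ctx_model, buf));
}

static void import_core(struct sha_ctx_model *ctx, const void *in)
{
	memcpy(ctx, in, offsetof(struct sha_ctx_model, buf));
}
```

The point of the pattern is that the exported blob stays fixed-size and free of buffered plaintext, while `import_core` restores a context that can resume hashing once a fresh partial block is supplied.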
+1 -1
drivers/accel/ivpu/ivpu_drv.c
··· 677 677 static void ivpu_dev_fini(struct ivpu_device *vdev) 678 678 { 679 679 ivpu_jobs_abort_all(vdev); 680 - ivpu_pm_cancel_recovery(vdev); 680 + ivpu_pm_disable_recovery(vdev); 681 681 ivpu_pm_disable(vdev); 682 682 ivpu_prepare_for_reset(vdev); 683 683 ivpu_shutdown(vdev);
+2 -2
drivers/accel/ivpu/ivpu_pm.c
··· 417 417 ivpu_dbg(vdev, PM, "Autosuspend delay = %d\n", delay); 418 418 } 419 419 420 - void ivpu_pm_cancel_recovery(struct ivpu_device *vdev) 420 + void ivpu_pm_disable_recovery(struct ivpu_device *vdev) 421 421 { 422 422 drm_WARN_ON(&vdev->drm, delayed_work_pending(&vdev->pm->job_timeout_work)); 423 - cancel_work_sync(&vdev->pm->recovery_work); 423 + disable_work_sync(&vdev->pm->recovery_work); 424 424 } 425 425 426 426 void ivpu_pm_enable(struct ivpu_device *vdev)
+1 -1
drivers/accel/ivpu/ivpu_pm.h
··· 25 25 void ivpu_pm_init(struct ivpu_device *vdev); 26 26 void ivpu_pm_enable(struct ivpu_device *vdev); 27 27 void ivpu_pm_disable(struct ivpu_device *vdev); 28 - void ivpu_pm_cancel_recovery(struct ivpu_device *vdev); 28 + void ivpu_pm_disable_recovery(struct ivpu_device *vdev); 29 29 30 30 int ivpu_pm_suspend_cb(struct device *dev); 31 31 int ivpu_pm_resume_cb(struct device *dev);
+3 -1
drivers/acpi/arm64/iort.c
··· 937 937 938 938 new_sids = krealloc_array(sids, count + new_count, 939 939 sizeof(*new_sids), GFP_KERNEL); 940 - if (!new_sids) 940 + if (!new_sids) { 941 + kfree(sids); 941 942 return NULL; 943 + } 942 944 943 945 for (i = count; i < total_count; i++) 944 946 new_sids[i] = id_start++;
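The iort.c fix above addresses the classic `realloc`-family pitfall: when `krealloc_array()` fails, the old buffer is still allocated, so returning `NULL` without freeing it leaks. A minimal userspace equivalent of the corrected pattern (`grow_ids` and its zero-fill of the new tail are illustrative, not the kernel function):

```c
#include <stdlib.h>
#include <string.h>

/* Grow an id array; on failure, free the old buffer before returning
 * NULL, mirroring the kfree(sids) added in the fix above. */
static int *grow_ids(int *ids, size_t old_count, size_t new_count)
{
	int *tmp = realloc(ids, new_count * sizeof(*ids));

	if (!tmp) {
		free(ids); /* old block still exists after a failed realloc */
		return NULL;
	}
	/* realloc does not zero the newly added tail */
	memset(tmp + old_count, 0, (new_count - old_count) * sizeof(*tmp));
	return tmp;
}
```

Note the assignment to a temporary: `ids = realloc(ids, ...)` would overwrite the only pointer to the old block on failure, making the leak unfixable.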
+2 -2
drivers/acpi/riscv/cppc.c
··· 119 119 120 120 *val = data.ret.value; 121 121 122 - return (data.ret.error) ? sbi_err_map_linux_errno(data.ret.error) : 0; 122 + return data.ret.error; 123 123 } 124 124 125 125 return -EINVAL; ··· 148 148 149 149 smp_call_function_single(cpu, cppc_ffh_csr_write, &data, 1); 150 150 151 - return (data.ret.error) ? sbi_err_map_linux_errno(data.ret.error) : 0; 151 + return data.ret.error; 152 152 } 153 153 154 154 return -EINVAL;
+3
drivers/base/cpu.c
··· 603 603 CPU_SHOW_VULN_FALLBACK(old_microcode); 604 604 CPU_SHOW_VULN_FALLBACK(indirect_target_selection); 605 605 CPU_SHOW_VULN_FALLBACK(tsa); 606 + CPU_SHOW_VULN_FALLBACK(vmscape); 606 607 607 608 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); 608 609 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); ··· 623 622 static DEVICE_ATTR(old_microcode, 0444, cpu_show_old_microcode, NULL); 624 623 static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL); 625 624 static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL); 625 + static DEVICE_ATTR(vmscape, 0444, cpu_show_vmscape, NULL); 626 626 627 627 static struct attribute *cpu_root_vulnerabilities_attrs[] = { 628 628 &dev_attr_meltdown.attr, ··· 644 642 &dev_attr_old_microcode.attr, 645 643 &dev_attr_indirect_target_selection.attr, 646 644 &dev_attr_tsa.attr, 645 + &dev_attr_vmscape.attr, 647 646 NULL 648 647 }; 649 648
+11 -8
drivers/cpufreq/amd-pstate.c
···
 	pr_debug("CPU %d exiting\n", policy->cpu);
 }

-static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
+static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy, bool policy_change)
 {
 	struct amd_cpudata *cpudata = policy->driver_data;
 	union perf_cached perf;
 	u8 epp;

-	if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
+	if (policy_change ||
+	    policy->min != cpudata->min_limit_freq ||
+	    policy->max != cpudata->max_limit_freq)
 		amd_pstate_update_min_max_limit(policy);

 	if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
···
 	cpudata->policy = policy->policy;

-	ret = amd_pstate_epp_update_limit(policy);
+	ret = amd_pstate_epp_update_limit(policy, true);
 	if (ret)
 		return ret;
···
 	 * min_perf value across kexec reboots. If this CPU is just resumed back without kexec,
 	 * the limits, epp and desired perf will get reset to the cached values in cpudata struct
 	 */
-	ret = amd_pstate_update_perf(policy, perf.bios_min_perf, 0U, 0U, 0U, false);
+	ret = amd_pstate_update_perf(policy, perf.bios_min_perf,
+				     FIELD_GET(AMD_CPPC_DES_PERF_MASK, cpudata->cppc_req_cached),
+				     FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),
+				     FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached),
+				     false);
 	if (ret)
 		return ret;
-
-	/* invalidate to ensure it's rewritten during resume */
-	cpudata->cppc_req_cached = 0;

 	/* set this flag to avoid setting core offline*/
 	cpudata->suspended = true;
···
 	int ret;

 	/* enable amd pstate from suspend state*/
-	ret = amd_pstate_epp_update_limit(policy);
+	ret = amd_pstate_epp_update_limit(policy, false);
 	if (ret)
 		return ret;
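The suspend path above now re-extracts the desired/max/EPP fields from the cached CPPC request word with `FIELD_GET()` instead of zeroing them. A minimal model of `FIELD_GET`-style extraction (the masks here are illustrative, not the real `AMD_CPPC_*` register layout):

```c
#include <stdint.h>

/* Hypothetical field masks within a 32-bit request word. */
#define DES_PERF_MASK	0x0000ff00u
#define MAX_PERF_MASK	0x00ff0000u

/* Mask out the field and shift it down to bit 0, which is what the
 * kernel's FIELD_GET() macro computes from the mask at compile time. */
static uint32_t field_get(uint32_t mask, uint32_t reg)
{
	return (reg & mask) >> __builtin_ctz(mask);
}
```

Deriving the shift from the mask (here via `__builtin_ctz`, a GCC/Clang builtin) keeps one definition per field, so the extract and insert sides can never disagree about where the field lives.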
+2 -2
drivers/cpufreq/intel_pstate.c
··· 1034 1034 if (!cpu_dev) 1035 1035 return false; 1036 1036 1037 - if (em_dev_register_perf_domain(cpu_dev, HYBRID_EM_STATE_COUNT, &cb, 1038 - cpumask_of(cpu), false)) 1037 + if (em_dev_register_pd_no_update(cpu_dev, HYBRID_EM_STATE_COUNT, &cb, 1038 + cpumask_of(cpu), false)) 1039 1039 return false; 1040 1040 1041 1041 cpudata->pd_registered = true;
-1
drivers/edac/altera_edac.c
··· 128 128 129 129 ptemp = dma_alloc_coherent(mci->pdev, 16, &dma_handle, GFP_KERNEL); 130 130 if (!ptemp) { 131 - dma_free_coherent(mci->pdev, 16, ptemp, dma_handle); 132 131 edac_printk(KERN_ERR, EDAC_MC, 133 132 "Inject: Buffer Allocation error\n"); 134 133 return -ENOMEM;
+3 -3
drivers/gpio/Kconfig
··· 3 3 # GPIO infrastructure and drivers 4 4 # 5 5 6 + config GPIOLIB_LEGACY 7 + def_bool y 8 + 6 9 menuconfig GPIOLIB 7 10 bool "GPIO Support" 8 11 help ··· 14 11 one or more of the GPIO drivers below. 15 12 16 13 If unsure, say N. 17 - 18 - config GPIOLIB_LEGACY 19 - def_bool y 20 14 21 15 if GPIOLIB 22 16
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
··· 448 448 psp->cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL); 449 449 if (!psp->cmd) { 450 450 dev_err(adev->dev, "Failed to allocate memory to command buffer!\n"); 451 - ret = -ENOMEM; 451 + return -ENOMEM; 452 452 } 453 453 454 454 adev->psp.xgmi_context.supports_extended_data =
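The psp.c change above fixes an "error assigned but never returned" path: setting `ret = -ENOMEM` and falling through would continue with a NULL command buffer. A minimal sketch of the corrected shape (`struct ctx` and `ctx_init` are hypothetical, not the PSP code):

```c
#include <errno.h>
#include <stdlib.h>

struct ctx {
	void *cmd; /* stand-in for psp->cmd */
};

/* Bail out of the error path immediately; merely recording the error
 * code in a local and continuing is the bug being fixed above. */
static int ctx_init(struct ctx *c, size_t size)
{
	c->cmd = calloc(1, size);
	if (!c->cmd)
		return -ENOMEM;
	return 0;
}
```

The general rule: in a function with multiple setup steps, each allocation failure should either `return` or `goto` a cleanup label on the spot, so no later step can dereference the pointer that failed to materialize.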
-5
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
··· 1462 1462 1463 1463 static void dce_v10_0_audio_fini(struct amdgpu_device *adev) 1464 1464 { 1465 - int i; 1466 - 1467 1465 if (!amdgpu_audio) 1468 1466 return; 1469 1467 1470 1468 if (!adev->mode_info.audio.enabled) 1471 1469 return; 1472 - 1473 - for (i = 0; i < adev->mode_info.audio.num_pins; i++) 1474 - dce_v10_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); 1475 1470 1476 1471 adev->mode_info.audio.enabled = false; 1477 1472 }
-5
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
··· 1511 1511 1512 1512 static void dce_v11_0_audio_fini(struct amdgpu_device *adev) 1513 1513 { 1514 - int i; 1515 - 1516 1514 if (!amdgpu_audio) 1517 1515 return; 1518 1516 1519 1517 if (!adev->mode_info.audio.enabled) 1520 1518 return; 1521 - 1522 - for (i = 0; i < adev->mode_info.audio.num_pins; i++) 1523 - dce_v11_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); 1524 1519 1525 1520 adev->mode_info.audio.enabled = false; 1526 1521 }
-5
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
··· 1451 1451 1452 1452 static void dce_v6_0_audio_fini(struct amdgpu_device *adev) 1453 1453 { 1454 - int i; 1455 - 1456 1454 if (!amdgpu_audio) 1457 1455 return; 1458 1456 1459 1457 if (!adev->mode_info.audio.enabled) 1460 1458 return; 1461 - 1462 - for (i = 0; i < adev->mode_info.audio.num_pins; i++) 1463 - dce_v6_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); 1464 1459 1465 1460 adev->mode_info.audio.enabled = false; 1466 1461 }
-5
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
··· 1443 1443 1444 1444 static void dce_v8_0_audio_fini(struct amdgpu_device *adev) 1445 1445 { 1446 - int i; 1447 - 1448 1446 if (!amdgpu_audio) 1449 1447 return; 1450 1448 1451 1449 if (!adev->mode_info.audio.enabled) 1452 1450 return; 1453 - 1454 - for (i = 0; i < adev->mode_info.audio.num_pins; i++) 1455 - dce_v8_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); 1456 1451 1457 1452 adev->mode_info.audio.enabled = false; 1458 1453 }
+3 -2
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 641 641 break; 642 642 case MES_MISC_OP_CHANGE_CONFIG: 643 643 if ((mes->adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) < 0x63) { 644 - dev_err(mes->adev->dev, "MES FW version must be larger than 0x63 to support limit single process feature.\n"); 645 - return -EINVAL; 644 + dev_warn_once(mes->adev->dev, 645 + "MES FW version must be larger than 0x63 to support limit single process feature.\n"); 646 + return 0; 646 647 } 647 648 misc_pkt.opcode = MESAPI_MISC__CHANGE_CONFIG; 648 649 misc_pkt.change_config.opcode =
+3 -3
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
··· 1377 1377 1378 1378 switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) { 1379 1379 case IP_VERSION(6, 0, 0): 1380 - if ((adev->sdma.instance[0].fw_version >= 24) && !adev->sdma.disable_uq) 1380 + if ((adev->sdma.instance[0].fw_version >= 27) && !adev->sdma.disable_uq) 1381 1381 adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1382 1382 break; 1383 1383 case IP_VERSION(6, 0, 1): ··· 1385 1385 adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1386 1386 break; 1387 1387 case IP_VERSION(6, 0, 2): 1388 - if ((adev->sdma.instance[0].fw_version >= 21) && !adev->sdma.disable_uq) 1388 + if ((adev->sdma.instance[0].fw_version >= 23) && !adev->sdma.disable_uq) 1389 1389 adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1390 1390 break; 1391 1391 case IP_VERSION(6, 0, 3): 1392 - if ((adev->sdma.instance[0].fw_version >= 25) && !adev->sdma.disable_uq) 1392 + if ((adev->sdma.instance[0].fw_version >= 27) && !adev->sdma.disable_uq) 1393 1393 adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1394 1394 break; 1395 1395 case IP_VERSION(6, 1, 0):
+1 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 8381 8381 drm_add_modes_noedid(connector, 1920, 1080); 8382 8382 } else { 8383 8383 amdgpu_dm_connector_ddc_get_modes(connector, drm_edid); 8384 - if (encoder && (connector->connector_type != DRM_MODE_CONNECTOR_eDP) && 8385 - (connector->connector_type != DRM_MODE_CONNECTOR_LVDS)) 8384 + if (encoder) 8386 8385 amdgpu_dm_connector_add_common_modes(encoder, connector); 8387 8386 amdgpu_dm_connector_add_freesync_modes(connector, drm_edid); 8388 8387 }
+9
drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
··· 520 520 REG_UPDATE(DPP_CONTROL, DPP_CLOCK_ENABLE, 0); 521 521 } 522 522 523 + void dpp_force_disable_cursor(struct dpp *dpp_base) 524 + { 525 + struct dcn10_dpp *dpp = TO_DCN10_DPP(dpp_base); 526 + 527 + /* Force disable cursor */ 528 + REG_UPDATE(CURSOR0_CONTROL, CUR0_ENABLE, 0); 529 + dpp_base->pos.cur0_ctl.bits.cur0_enable = 0; 530 + } 531 + 523 532 static const struct dpp_funcs dcn10_dpp_funcs = { 524 533 .dpp_read_state = dpp_read_state, 525 534 .dpp_reset = dpp_reset,
+2
drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.h
··· 1525 1525 1526 1526 void dpp1_cm_get_gamut_remap(struct dpp *dpp_base, 1527 1527 struct dpp_grph_csc_adjustment *adjust); 1528 + void dpp_force_disable_cursor(struct dpp *dpp_base); 1529 + 1528 1530 #endif
+1
drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
··· 1494 1494 .dpp_dppclk_control = dpp1_dppclk_control, 1495 1495 .dpp_set_hdr_multiplier = dpp3_set_hdr_multiplier, 1496 1496 .dpp_get_gamut_remap = dpp3_cm_get_gamut_remap, 1497 + .dpp_force_disable_cursor = dpp_force_disable_cursor, 1497 1498 }; 1498 1499 1499 1500
+72
drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
··· 528 528
529 529 	apply_symclk_on_tx_off_wa(link);
530 530 }
531 +
532 + /**
533 +  * dcn314_dpp_pg_control - DPP power gate control.
534 +  *
535 +  * @hws: dce_hwseq reference.
536 +  * @dpp_inst: DPP instance reference.
537 +  * @power_on: true if we want to enable power gate, false otherwise.
538 +  *
539 +  * Enable or disable power gate in the specific DPP instance.
540 +  * If power gating is disabled, will force disable cursor in the DPP instance.
541 +  */
542 + void dcn314_dpp_pg_control(
543 + 		struct dce_hwseq *hws,
544 + 		unsigned int dpp_inst,
545 + 		bool power_on)
546 + {
547 + 	uint32_t power_gate = power_on ? 0 : 1;
548 + 	uint32_t pwr_status = power_on ? 0 : 2;
549 +
550 +
551 + 	if (hws->ctx->dc->debug.disable_dpp_power_gate) {
552 + 		/* Workaround for DCN314 with disabled power gating */
553 + 		if (!power_on) {
554 +
555 + 			/* Force disable cursor if power gating is disabled */
556 + 			struct dpp *dpp = hws->ctx->dc->res_pool->dpps[dpp_inst];
557 + 			if (dpp && dpp->funcs->dpp_force_disable_cursor)
558 + 				dpp->funcs->dpp_force_disable_cursor(dpp);
559 + 		}
560 + 		return;
561 + 	}
562 + 	if (REG(DOMAIN1_PG_CONFIG) == 0)
563 + 		return;
564 +
565 + 	switch (dpp_inst) {
566 + 	case 0: /* DPP0 */
567 + 		REG_UPDATE(DOMAIN1_PG_CONFIG,
568 + 				DOMAIN1_POWER_GATE, power_gate);
569 +
570 + 		REG_WAIT(DOMAIN1_PG_STATUS,
571 + 				DOMAIN1_PGFSM_PWR_STATUS, pwr_status,
572 + 				1, 1000);
573 + 		break;
574 + 	case 1: /* DPP1 */
575 + 		REG_UPDATE(DOMAIN3_PG_CONFIG,
576 + 				DOMAIN3_POWER_GATE, power_gate);
577 +
578 + 		REG_WAIT(DOMAIN3_PG_STATUS,
579 + 				DOMAIN3_PGFSM_PWR_STATUS, pwr_status,
580 + 				1, 1000);
581 + 		break;
582 + 	case 2: /* DPP2 */
583 + 		REG_UPDATE(DOMAIN5_PG_CONFIG,
584 + 				DOMAIN5_POWER_GATE, power_gate);
585 +
586 + 		REG_WAIT(DOMAIN5_PG_STATUS,
587 + 				DOMAIN5_PGFSM_PWR_STATUS, pwr_status,
588 + 				1, 1000);
589 + 		break;
590 + 	case 3: /* DPP3 */
591 + 		REG_UPDATE(DOMAIN7_PG_CONFIG,
592 + 				DOMAIN7_POWER_GATE, power_gate);
593 +
594 + 		REG_WAIT(DOMAIN7_PG_STATUS,
595 + 				DOMAIN7_PGFSM_PWR_STATUS, pwr_status,
596 + 				1, 1000);
597 + 		break;
598 + 	default:
599 + 		BREAK_TO_DEBUGGER();
600 + 		break;
601 + 	}
602 + }
+2
drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.h
··· 47 47 48 48 void dcn314_disable_link_output(struct dc_link *link, const struct link_resource *link_res, enum signal_type signal); 49 49 50 + void dcn314_dpp_pg_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool power_on); 51 + 50 52 #endif /* __DC_HWSS_DCN314_H__ */
+1
drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c
··· 141 141 .enable_power_gating_plane = dcn314_enable_power_gating_plane, 142 142 .dpp_root_clock_control = dcn314_dpp_root_clock_control, 143 143 .hubp_pg_control = dcn31_hubp_pg_control, 144 + .dpp_pg_control = dcn314_dpp_pg_control, 144 145 .program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree, 145 146 .update_odm = dcn314_update_odm, 146 147 .dsc_pg_control = dcn314_dsc_pg_control,
+3
drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h
··· 349 349 struct dpp *dpp_base, 350 350 enum dc_color_space color_space, 351 351 struct dc_csc_transform cursor_csc_color_matrix); 352 + 353 + void (*dpp_force_disable_cursor)(struct dpp *dpp_base); 354 + 352 355 }; 353 356 354 357
+11
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 393 393 	gpiod_set_value_cansleep(pdata->enable_gpio, 1);
394 394
395 395 	/*
396 + 	 * After EN is deasserted and an external clock is detected, the bridge
397 + 	 * will sample GPIO3:1 to determine its frequency. The driver will
398 + 	 * overwrite this setting in ti_sn_bridge_set_refclk_freq(). But this is
399 + 	 * racy. Thus we have to wait a couple of us. According to the datasheet
400 + 	 * the GPIO lines have to be stable at least 5 us (td5) but it seems that
401 + 	 * is not enough and the refclk frequency value is still lost or
402 + 	 * overwritten by the bridge itself. Waiting for 20us seems to work.
403 + 	 */
404 + 	usleep_range(20, 30);
405 +
406 + 	/*
396 407 	 * If we have a reference clock we can enable communication w/ the
397 408 	 * panel (including the aux channel) w/out any need for an input clock
398 409 	 * so we can do it in resume which lets us read the EDID before
+6 -1
drivers/gpu/drm/nouveau/gv100_fence.c
··· 18 18 struct nvif_push *push = &chan->chan.push; 19 19 int ret; 20 20 21 - ret = PUSH_WAIT(push, 8); 21 + ret = PUSH_WAIT(push, 13); 22 22 if (ret) 23 23 return ret; 24 24 ··· 31 31 NVDEF(NVC36F, SEM_EXECUTE, RELEASE_WFI, EN) | 32 32 NVDEF(NVC36F, SEM_EXECUTE, PAYLOAD_SIZE, 32BIT) | 33 33 NVDEF(NVC36F, SEM_EXECUTE, RELEASE_TIMESTAMP, DIS)); 34 + 35 + PUSH_MTHD(push, NVC36F, MEM_OP_A, 0, 36 + MEM_OP_B, 0, 37 + MEM_OP_C, NVDEF(NVC36F, MEM_OP_C, MEMBAR_TYPE, SYS_MEMBAR), 38 + MEM_OP_D, NVDEF(NVC36F, MEM_OP_D, OPERATION, MEMBAR)); 34 39 35 40 PUSH_MTHD(push, NVC36F, NON_STALL_INTERRUPT, 0); 36 41
+85
drivers/gpu/drm/nouveau/include/nvhw/class/clc36f.h
··· 7 7
8 8 #define NVC36F_NON_STALL_INTERRUPT (0x00000020)
9 9 #define NVC36F_NON_STALL_INTERRUPT_HANDLE 31:0
10 + // NOTE - MEM_OP_A and MEM_OP_B have been replaced in gp100 with methods for
11 + // specifying the page address for a targeted TLB invalidate and the uTLB for
12 + // a targeted REPLAY_CANCEL for UVM.
13 + // The previous MEM_OP_A/B functionality is in MEM_OP_C/D, with slightly
14 + // rearranged fields.
15 + #define NVC36F_MEM_OP_A (0x00000028)
16 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_TARGET_CLIENT_UNIT_ID 5:0 // only relevant for REPLAY_CANCEL_TARGETED
17 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_INVALIDATION_SIZE 5:0 // Used to specify size of invalidate, used for invalidates which are not of the REPLAY_CANCEL_TARGETED type
18 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_TARGET_GPC_ID 10:6 // only relevant for REPLAY_CANCEL_TARGETED
19 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_MMU_ENGINE_ID 6:0 // only relevant for REPLAY_CANCEL_VA_GLOBAL
20 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR 11:11
21 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR_EN 0x00000001
22 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR_DIS 0x00000000
23 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_TARGET_ADDR_LO 31:12
24 + #define NVC36F_MEM_OP_B (0x0000002c)
25 + #define NVC36F_MEM_OP_B_TLB_INVALIDATE_TARGET_ADDR_HI 31:0
26 + #define NVC36F_MEM_OP_C (0x00000030)
27 + #define NVC36F_MEM_OP_C_MEMBAR_TYPE 2:0
28 + #define NVC36F_MEM_OP_C_MEMBAR_TYPE_SYS_MEMBAR 0x00000000
29 + #define NVC36F_MEM_OP_C_MEMBAR_TYPE_MEMBAR 0x00000001
30 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB 0:0
31 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ONE 0x00000000
32 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ALL 0x00000001 // Probably nonsensical for MMU_TLB_INVALIDATE_TARGETED
33 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC 1:1
34 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC_ENABLE 0x00000000
35 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC_DISABLE 0x00000001
36 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY 4:2 // only relevant if GPC ENABLE
37 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_NONE 0x00000000
38 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_START 0x00000001
39 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_START_ACK_ALL 0x00000002
40 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_TARGETED 0x00000003
41 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_GLOBAL 0x00000004
42 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_VA_GLOBAL 0x00000005
43 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE 6:5 // only relevant if GPC ENABLE
44 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_NONE 0x00000000
45 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_GLOBALLY 0x00000001
46 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_INTRANODE 0x00000002
47 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE 9:7 //only relevant for REPLAY_CANCEL_VA_GLOBAL
48 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_READ 0
49 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_WRITE 1
50 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_STRONG 2
51 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_RSVRVD 3
52 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_WEAK 4
53 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_ALL 5
54 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_WRITE_AND_ATOMIC 6
55 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ALL 7
56 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL 9:7 // Invalidate affects this level and all below
57 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_ALL 0x00000000 // Invalidate tlb caches at all levels of the page table
58 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_PTE_ONLY 0x00000001
59 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE0 0x00000002
60 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE1 0x00000003
61 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE2 0x00000004
62 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE3 0x00000005
63 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE4 0x00000006
64 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE5 0x00000007
65 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE 11:10 // only relevant if PDB_ONE
66 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_VID_MEM 0x00000000
67 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_SYS_MEM_COHERENT 0x00000002
68 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_SYS_MEM_NONCOHERENT 0x00000003
69 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ADDR_LO 31:12 // only relevant if PDB_ONE
70 + #define NVC36F_MEM_OP_C_ACCESS_COUNTER_CLR_TARGETED_NOTIFY_TAG 19:0
71 + // MEM_OP_D MUST be preceded by MEM_OPs A-C.
72 + #define NVC36F_MEM_OP_D (0x00000034)
73 + #define NVC36F_MEM_OP_D_TLB_INVALIDATE_PDB_ADDR_HI 26:0 // only relevant if PDB_ONE
74 + #define NVC36F_MEM_OP_D_OPERATION 31:27
75 + #define NVC36F_MEM_OP_D_OPERATION_MEMBAR 0x00000005
76 + #define NVC36F_MEM_OP_D_OPERATION_MMU_TLB_INVALIDATE 0x00000009
77 + #define NVC36F_MEM_OP_D_OPERATION_MMU_TLB_INVALIDATE_TARGETED 0x0000000a
78 + #define NVC36F_MEM_OP_D_OPERATION_L2_PEERMEM_INVALIDATE 0x0000000d
79 + #define NVC36F_MEM_OP_D_OPERATION_L2_SYSMEM_INVALIDATE 0x0000000e
80 + // CLEAN_LINES is an alias for Tegra/GPU IP usage
81 + #define NVC36F_MEM_OP_B_OPERATION_L2_INVALIDATE_CLEAN_LINES 0x0000000e
82 + #define NVC36F_MEM_OP_D_OPERATION_L2_CLEAN_COMPTAGS 0x0000000f
83 + #define NVC36F_MEM_OP_D_OPERATION_L2_FLUSH_DIRTY 0x00000010
84 + #define NVC36F_MEM_OP_D_OPERATION_L2_WAIT_FOR_SYS_PENDING_READS 0x00000015
85 + #define NVC36F_MEM_OP_D_OPERATION_ACCESS_COUNTER_CLR 0x00000016
86 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE 1:0
87 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_MIMC 0x00000000
88 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_MOMC 0x00000001
89 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_ALL 0x00000002
90 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_TARGETED 0x00000003
91 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE 2:2
92 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE_MIMC 0x00000000
93 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE_MOMC 0x00000001
94 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_BANK 6:3
10 95 #define NVC36F_SEM_ADDR_LO (0x0000005c)
11 96 #define NVC36F_SEM_ADDR_LO_OFFSET 31:2
12 97 #define NVC36F_SEM_ADDR_HI (0x00000060)
+2
drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c
··· 350 350 nvkm_chid_unref(&fifo->chid); 351 351 352 352 nvkm_event_fini(&fifo->nonstall.event); 353 + if (fifo->func->nonstall_dtor) 354 + fifo->func->nonstall_dtor(fifo); 353 355 mutex_destroy(&fifo->mutex); 354 356 355 357 if (fifo->func->dtor)
+15 -8
drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c
··· 517 517 static void 518 518 ga100_fifo_nonstall_block(struct nvkm_event *event, int type, int index) 519 519 { 520 - struct nvkm_fifo *fifo = container_of(event, typeof(*fifo), nonstall.event); 521 - struct nvkm_runl *runl = nvkm_runl_get(fifo, index, 0); 522 - 523 - nvkm_inth_block(&runl->nonstall.inth); 524 520 } 525 521 526 522 static void 527 523 ga100_fifo_nonstall_allow(struct nvkm_event *event, int type, int index) 528 524 { 529 - struct nvkm_fifo *fifo = container_of(event, typeof(*fifo), nonstall.event); 530 - struct nvkm_runl *runl = nvkm_runl_get(fifo, index, 0); 531 - 532 - nvkm_inth_allow(&runl->nonstall.inth); 533 525 } 534 526 535 527 const struct nvkm_event_func ··· 556 564 if (ret) 557 565 return ret; 558 566 567 + nvkm_inth_allow(&runl->nonstall.inth); 568 + 559 569 nr = max(nr, runl->id + 1); 560 570 } 561 571 562 572 return nr; 573 + } 574 + 575 + void 576 + ga100_fifo_nonstall_dtor(struct nvkm_fifo *fifo) 577 + { 578 + struct nvkm_runl *runl; 579 + 580 + nvkm_runl_foreach(runl, fifo) { 581 + if (runl->nonstall.vector < 0) 582 + continue; 583 + nvkm_inth_block(&runl->nonstall.inth); 584 + } 563 585 } 564 586 565 587 int ··· 605 599 .runl_ctor = ga100_fifo_runl_ctor, 606 600 .mmu_fault = &tu102_fifo_mmu_fault, 607 601 .nonstall_ctor = ga100_fifo_nonstall_ctor, 602 + .nonstall_dtor = ga100_fifo_nonstall_dtor, 608 603 .nonstall = &ga100_fifo_nonstall, 609 604 .runl = &ga100_runl, 610 605 .runq = &ga100_runq,
+1
drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c
··· 30 30 .runl_ctor = ga100_fifo_runl_ctor, 31 31 .mmu_fault = &tu102_fifo_mmu_fault, 32 32 .nonstall_ctor = ga100_fifo_nonstall_ctor, 33 + .nonstall_dtor = ga100_fifo_nonstall_dtor, 33 34 .nonstall = &ga100_fifo_nonstall, 34 35 .runl = &ga100_runl, 35 36 .runq = &ga100_runq,
+2
drivers/gpu/drm/nouveau/nvkm/engine/fifo/priv.h
··· 41 41 void (*start)(struct nvkm_fifo *, unsigned long *); 42 42 43 43 int (*nonstall_ctor)(struct nvkm_fifo *); 44 + void (*nonstall_dtor)(struct nvkm_fifo *); 44 45 const struct nvkm_event_func *nonstall; 45 46 46 47 const struct nvkm_runl_func *runl; ··· 201 200 202 201 int ga100_fifo_runl_ctor(struct nvkm_fifo *); 203 202 int ga100_fifo_nonstall_ctor(struct nvkm_fifo *); 203 + void ga100_fifo_nonstall_dtor(struct nvkm_fifo *); 204 204 extern const struct nvkm_event_func ga100_fifo_nonstall; 205 205 extern const struct nvkm_runl_func ga100_runl; 206 206 extern const struct nvkm_runq_func ga100_runq;
+1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/fifo.c
··· 601 601 rm->chan.func = &r535_chan; 602 602 rm->nonstall = &ga100_fifo_nonstall; 603 603 rm->nonstall_ctor = ga100_fifo_nonstall_ctor; 604 + rm->nonstall_dtor = ga100_fifo_nonstall_dtor; 604 605 605 606 return nvkm_fifo_new_(rm, device, type, inst, pfifo); 606 607 }
+7 -4
drivers/gpu/drm/scheduler/sched_entity.c
··· 391 391 * Add a callback to the current dependency of the entity to wake up the 392 392 * scheduler when the entity becomes available. 393 393 */ 394 - static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity) 394 + static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity, 395 + struct drm_sched_job *sched_job) 395 396 { 396 397 struct drm_gpu_scheduler *sched = entity->rq->sched; 397 398 struct dma_fence *fence = entity->dependency; ··· 421 420 dma_fence_put(entity->dependency); 422 421 entity->dependency = fence; 423 422 } 423 + 424 + if (trace_drm_sched_job_unschedulable_enabled() && 425 + !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &entity->dependency->flags)) 426 + trace_drm_sched_job_unschedulable(sched_job, entity->dependency); 424 427 425 428 if (!dma_fence_add_callback(entity->dependency, &entity->cb, 426 429 drm_sched_entity_wakeup)) ··· 466 461 467 462 while ((entity->dependency = 468 463 drm_sched_job_dependency(sched_job, entity))) { 469 - if (drm_sched_entity_add_dependency_cb(entity)) { 470 - trace_drm_sched_job_unschedulable(sched_job, entity->dependency); 464 + if (drm_sched_entity_add_dependency_cb(entity, sched_job)) 471 465 return NULL; 472 - } 473 466 } 474 467 475 468 /* skip jobs from entity that marked guilty */
+1 -2
drivers/gpu/drm/xe/xe_bo.c
··· 819 819 return ret; 820 820 } 821 821 822 - tt_has_data = ttm && (ttm_tt_is_populated(ttm) || 823 - (ttm->page_flags & TTM_TT_FLAG_SWAPPED)); 822 + tt_has_data = ttm && (ttm_tt_is_populated(ttm) || ttm_tt_is_swapped(ttm)); 824 823 825 824 move_lacks_source = !old_mem || (handle_system_ccs ? (!bo->ccs_cleared) : 826 825 (!mem_type_is_vram(old_mem_type) && !tt_has_data));
+5 -4
drivers/hwmon/ina238.c
··· 379 379 regval = clamp_val(val, -163, 163); 380 380 regval = (regval * 1000 * 4) / 381 381 (INA238_SHUNT_VOLTAGE_LSB * data->gain); 382 - regval = clamp_val(regval, S16_MIN, S16_MAX); 382 + regval = clamp_val(regval, S16_MIN, S16_MAX) & 0xffff; 383 383 384 384 switch (attr) { 385 385 case hwmon_in_max: ··· 517 517 * Unsigned postive values. Compared against the 24-bit power register, 518 518 * lower 8-bits are truncated. Same conversion to/from uW as POWER 519 519 * register. 520 + * The first clamp_val() is to establish a baseline to avoid overflows. 520 521 */ 521 - regval = clamp_val(val, 0, LONG_MAX); 522 - regval = div_u64(val * 4 * 100 * data->rshunt, data->config->power_calculate_factor * 522 + regval = clamp_val(val, 0, LONG_MAX / 2); 523 + regval = div_u64(regval * 4 * 100 * data->rshunt, data->config->power_calculate_factor * 523 524 1000ULL * INA238_FIXED_SHUNT * data->gain); 524 525 regval = clamp_val(regval >> 8, 0, U16_MAX); 525 526 ··· 573 572 return -EOPNOTSUPP; 574 573 575 574 /* Signed */ 576 - regval = clamp_val(val, -40000, 125000); 575 + val = clamp_val(val, -40000, 125000); 577 576 regval = div_s64(val * 10000, data->config->temp_lsb) << data->config->temp_shift; 578 577 regval = clamp_val(regval, S16_MIN, S16_MAX) & (0xffff << data->config->temp_shift); 579 578
+2 -3
drivers/hwmon/mlxreg-fan.c
··· 561 561 if (!pwm->connected) 562 562 continue; 563 563 pwm->fan = fan; 564 + /* Set minimal PWM speed. */ 565 + pwm->last_hwmon_state = MLXREG_FAN_PWM_DUTY2STATE(MLXREG_FAN_MIN_DUTY); 564 566 pwm->cdev = devm_thermal_of_cooling_device_register(dev, NULL, mlxreg_fan_name[i], 565 567 pwm, &mlxreg_fan_cooling_ops); 566 568 if (IS_ERR(pwm->cdev)) { 567 569 dev_err(dev, "Failed to register cooling device\n"); 568 570 return PTR_ERR(pwm->cdev); 569 571 } 570 - 571 - /* Set minimal PWM speed. */ 572 - pwm->last_hwmon_state = MLXREG_FAN_PWM_DUTY2STATE(MLXREG_FAN_MIN_DUTY); 573 572 } 574 573 575 574 return 0;
+1 -1
drivers/i2c/busses/i2c-i801.c
··· 1052 1052 { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_P_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1053 1053 { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_SOC_S_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1054 1054 { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_PCH_S_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1055 - { PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1055 + { PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS, FEATURES_ICH5) }, 1056 1056 { PCI_DEVICE_DATA(INTEL, ARROW_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1057 1057 { PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1058 1058 { PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_P_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+8 -14
drivers/i2c/busses/i2c-rtl9300.c
··· 99 99 { 100 100 u32 val, mask; 101 101 102 + if (len < 1 || len > 16) 103 + return -EINVAL; 104 + 102 105 val = chan->bus_freq << RTL9300_I2C_MST_CTRL2_SCL_FREQ_OFS; 103 106 mask = RTL9300_I2C_MST_CTRL2_SCL_FREQ_MASK; 104 107 ··· 225 222 } 226 223 227 224 switch (size) { 228 - case I2C_SMBUS_QUICK: 229 - ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0); 230 - if (ret) 231 - goto out_unlock; 232 - ret = rtl9300_i2c_reg_addr_set(i2c, 0, 0); 233 - if (ret) 234 - goto out_unlock; 235 - break; 236 - 237 225 case I2C_SMBUS_BYTE: 238 226 if (read_write == I2C_SMBUS_WRITE) { 239 227 ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0); ··· 306 312 307 313 static u32 rtl9300_i2c_func(struct i2c_adapter *a) 308 314 { 309 - return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE | 310 - I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA | 311 - I2C_FUNC_SMBUS_BLOCK_DATA; 315 + return I2C_FUNC_SMBUS_BYTE | I2C_FUNC_SMBUS_BYTE_DATA | 316 + I2C_FUNC_SMBUS_WORD_DATA | I2C_FUNC_SMBUS_BLOCK_DATA | 317 + I2C_FUNC_SMBUS_I2C_BLOCK; 312 318 } 313 319 314 320 static const struct i2c_algorithm rtl9300_i2c_algo = { ··· 317 323 }; 318 324 319 325 static struct i2c_adapter_quirks rtl9300_i2c_quirks = { 320 - .flags = I2C_AQ_NO_CLK_STRETCH, 326 + .flags = I2C_AQ_NO_CLK_STRETCH | I2C_AQ_NO_ZERO_LEN, 321 327 .max_read_len = 16, 322 328 .max_write_len = 16, 323 329 }; ··· 347 353 348 354 platform_set_drvdata(pdev, i2c); 349 355 350 - if (device_get_child_node_count(dev) >= RTL9300_I2C_MUX_NCHAN) 356 + if (device_get_child_node_count(dev) > RTL9300_I2C_MUX_NCHAN) 351 357 return dev_err_probe(dev, -EINVAL, "Too many channels\n"); 352 358 353 359 device_for_each_child_node(dev, child) {
+5
drivers/md/md.c
··· 9125 9125 } 9126 9126 9127 9127 action = md_sync_action(mddev); 9128 + if (action == ACTION_FROZEN || action == ACTION_IDLE) { 9129 + set_bit(MD_RECOVERY_INTR, &mddev->recovery); 9130 + goto skip; 9131 + } 9132 + 9128 9133 desc = md_sync_action_name(action); 9129 9134 mddev->last_sync_action = action; 9130 9135
+1 -1
drivers/md/raid1.c
··· 1225 1225 int i = 0; 1226 1226 struct bio *behind_bio = NULL; 1227 1227 1228 - behind_bio = bio_alloc_bioset(NULL, vcnt, 0, GFP_NOIO, 1228 + behind_bio = bio_alloc_bioset(NULL, vcnt, bio->bi_opf, GFP_NOIO, 1229 1229 &r1_bio->mddev->bio_set); 1230 1230 1231 1231 /* discard op, we don't support writezero/writesame yet */
+1 -7
drivers/net/can/rcar/rcar_can.c
··· 861 861 { 862 862 struct net_device *ndev = dev_get_drvdata(dev); 863 863 struct rcar_can_priv *priv = netdev_priv(ndev); 864 - u16 ctlr; 865 864 int err; 866 865 867 866 if (!netif_running(ndev)) ··· 872 873 return err; 873 874 } 874 875 875 - ctlr = readw(&priv->regs->ctlr); 876 - ctlr &= ~RCAR_CAN_CTLR_SLPM; 877 - writew(ctlr, &priv->regs->ctlr); 878 - ctlr &= ~RCAR_CAN_CTLR_CANM; 879 - writew(ctlr, &priv->regs->ctlr); 880 - priv->can.state = CAN_STATE_ERROR_ACTIVE; 876 + rcar_can_start(ndev); 881 877 882 878 netif_device_attach(ndev); 883 879 netif_start_queue(ndev);
+8 -8
drivers/net/can/xilinx_can.c
··· 690 690 dlc |= XCAN_DLCR_EDL_MASK; 691 691 } 692 692 693 - if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) && 694 - (priv->devtype.flags & XCAN_FLAG_TXFEMP)) 695 - can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0); 696 - else 697 - can_put_echo_skb(skb, ndev, 0, 0); 698 - 699 - priv->tx_head++; 700 - 701 693 priv->write_reg(priv, XCAN_FRAME_ID_OFFSET(frame_offset), id); 702 694 /* If the CAN frame is RTR frame this write triggers transmission 703 695 * (not on CAN FD) ··· 722 730 data[1]); 723 731 } 724 732 } 733 + 734 + if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) && 735 + (priv->devtype.flags & XCAN_FLAG_TXFEMP)) 736 + can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0); 737 + else 738 + can_put_echo_skb(skb, ndev, 0, 0); 739 + 740 + priv->tx_head++; 725 741 } 726 742 727 743 /**
+13 -4
drivers/net/dsa/b53/b53_common.c
··· 1273 1273 */ 1274 1274 ds->untag_vlan_aware_bridge_pvid = true; 1275 1275 1276 - /* Ageing time is set in seconds */ 1277 - ds->ageing_time_min = 1 * 1000; 1278 - ds->ageing_time_max = AGE_TIME_MAX * 1000; 1276 + if (dev->chip_id == BCM53101_DEVICE_ID) { 1277 + /* BCM53101 uses 0.5 second increments */ 1278 + ds->ageing_time_min = 1 * 500; 1279 + ds->ageing_time_max = AGE_TIME_MAX * 500; 1280 + } else { 1281 + /* Everything else uses 1 second increments */ 1282 + ds->ageing_time_min = 1 * 1000; 1283 + ds->ageing_time_max = AGE_TIME_MAX * 1000; 1284 + } 1279 1285 1280 1286 ret = b53_reset_switch(dev); 1281 1287 if (ret) { ··· 2565 2559 else 2566 2560 reg = B53_AGING_TIME_CONTROL; 2567 2561 2568 - atc = DIV_ROUND_CLOSEST(msecs, 1000); 2562 + if (dev->chip_id == BCM53101_DEVICE_ID) 2563 + atc = DIV_ROUND_CLOSEST(msecs, 500); 2564 + else 2565 + atc = DIV_ROUND_CLOSEST(msecs, 1000); 2569 2566 2570 2567 if (!is5325(dev) && !is5365(dev)) 2571 2568 atc |= AGE_CHANGE;
+2 -1
drivers/net/ethernet/freescale/fec_main.c
··· 2363 2363 */ 2364 2364 phy_dev = of_phy_find_device(fep->phy_node); 2365 2365 phy_reset_after_clk_enable(phy_dev); 2366 - put_device(&phy_dev->mdio.dev); 2366 + if (phy_dev) 2367 + put_device(&phy_dev->mdio.dev); 2367 2368 } 2368 2369 } 2369 2370
+1
drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
··· 1561 1561 struct i40e_aq_set_mac_config { 1562 1562 __le16 max_frame_size; 1563 1563 u8 params; 1564 + #define I40E_AQ_SET_MAC_CONFIG_CRC_EN BIT(2) 1564 1565 u8 tx_timer_priority; /* bitmap */ 1565 1566 __le16 tx_timer_value; 1566 1567 __le16 fc_refresh_threshold;
+34
drivers/net/ethernet/intel/i40e/i40e_common.c
··· 1190 1190 } 1191 1191 1192 1192 /** 1193 + * i40e_aq_set_mac_config - Configure MAC settings 1194 + * @hw: pointer to the hw struct 1195 + * @max_frame_size: Maximum Frame Size to be supported by the port 1196 + * @cmd_details: pointer to command details structure or NULL 1197 + * 1198 + * Set MAC configuration (0x0603). Note that max_frame_size must be greater 1199 + * than zero. 1200 + * 1201 + * Return: 0 on success, or a negative error code on failure. 1202 + */ 1203 + int i40e_aq_set_mac_config(struct i40e_hw *hw, u16 max_frame_size, 1204 + struct i40e_asq_cmd_details *cmd_details) 1205 + { 1206 + struct i40e_aq_set_mac_config *cmd; 1207 + struct libie_aq_desc desc; 1208 + 1209 + cmd = libie_aq_raw(&desc); 1210 + 1211 + if (max_frame_size == 0) 1212 + return -EINVAL; 1213 + 1214 + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_mac_config); 1215 + 1216 + cmd->max_frame_size = cpu_to_le16(max_frame_size); 1217 + cmd->params = I40E_AQ_SET_MAC_CONFIG_CRC_EN; 1218 + 1219 + #define I40E_AQ_SET_MAC_CONFIG_FC_DEFAULT_THRESHOLD 0x7FFF 1220 + cmd->fc_refresh_threshold = 1221 + cpu_to_le16(I40E_AQ_SET_MAC_CONFIG_FC_DEFAULT_THRESHOLD); 1222 + 1223 + return i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); 1224 + } 1225 + 1226 + /** 1193 1227 * i40e_aq_clear_pxe_mode 1194 1228 * @hw: pointer to the hw struct 1195 1229 * @cmd_details: pointer to command details structure or NULL
+11 -7
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 4156 4156 irq_num = pf->msix_entries[base + vector].vector; 4157 4157 irq_set_affinity_notifier(irq_num, NULL); 4158 4158 irq_update_affinity_hint(irq_num, NULL); 4159 - free_irq(irq_num, &vsi->q_vectors[vector]); 4159 + free_irq(irq_num, vsi->q_vectors[vector]); 4160 4160 } 4161 4161 return err; 4162 4162 } ··· 16045 16045 dev_dbg(&pf->pdev->dev, "get supported phy types ret = %pe last_status = %s\n", 16046 16046 ERR_PTR(err), libie_aq_str(pf->hw.aq.asq_last_status)); 16047 16047 16048 - /* make sure the MFS hasn't been set lower than the default */ 16049 16048 #define MAX_FRAME_SIZE_DEFAULT 0x2600 16050 - val = FIELD_GET(I40E_PRTGL_SAH_MFS_MASK, 16051 - rd32(&pf->hw, I40E_PRTGL_SAH)); 16052 - if (val < MAX_FRAME_SIZE_DEFAULT) 16053 - dev_warn(&pdev->dev, "MFS for port %x (%d) has been set below the default (%d)\n", 16054 - pf->hw.port, val, MAX_FRAME_SIZE_DEFAULT); 16049 + 16050 + err = i40e_aq_set_mac_config(hw, MAX_FRAME_SIZE_DEFAULT, NULL); 16051 + if (err) 16052 + dev_warn(&pdev->dev, "set mac config ret = %pe last_status = %s\n", 16053 + ERR_PTR(err), libie_aq_str(pf->hw.aq.asq_last_status)); 16054 + 16055 + /* Make sure the MFS is set to the expected value */ 16056 + val = rd32(hw, I40E_PRTGL_SAH); 16057 + FIELD_MODIFY(I40E_PRTGL_SAH_MFS_MASK, &val, MAX_FRAME_SIZE_DEFAULT); 16058 + wr32(hw, I40E_PRTGL_SAH, val); 16055 16059 16056 16060 /* Add a filter to drop all Flow control frames from any VSI from being 16057 16061 * transmitted. By doing so we stop a malicious VF from sending out
+2
drivers/net/ethernet/intel/i40e/i40e_prototype.h
··· 98 98 struct i40e_asq_cmd_details *cmd_details); 99 99 int i40e_aq_set_phy_int_mask(struct i40e_hw *hw, u16 mask, 100 100 struct i40e_asq_cmd_details *cmd_details); 101 + int i40e_aq_set_mac_config(struct i40e_hw *hw, u16 max_frame_size, 102 + struct i40e_asq_cmd_details *cmd_details); 101 103 int i40e_aq_clear_pxe_mode(struct i40e_hw *hw, 102 104 struct i40e_asq_cmd_details *cmd_details); 103 105 int i40e_aq_set_link_restart_an(struct i40e_hw *hw,
+1 -4
drivers/net/ethernet/intel/igb/igb_ethtool.c
··· 2081 2081 } else { 2082 2082 dev_info(&adapter->pdev->dev, "online testing starting\n"); 2083 2083 2084 - /* PHY is powered down when interface is down */ 2085 - if (if_running && igb_link_test(adapter, &data[TEST_LINK])) 2084 + if (igb_link_test(adapter, &data[TEST_LINK])) 2086 2085 eth_test->flags |= ETH_TEST_FL_FAILED; 2087 - else 2088 - data[TEST_LINK] = 0; 2089 2086 2090 2087 /* Online tests aren't run; pass by default */ 2091 2088 data[TEST_REG] = 0;
+1 -2
drivers/net/ethernet/intel/igb/igb_main.c
··· 4452 4452 if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq)) 4453 4453 xdp_rxq_info_unreg(&rx_ring->xdp_rxq); 4454 4454 res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev, 4455 - rx_ring->queue_index, 4456 - rx_ring->q_vector->napi.napi_id); 4455 + rx_ring->queue_index, 0); 4457 4456 if (res < 0) { 4458 4457 dev_err(dev, "Failed to register xdp_rxq index %u\n", 4459 4458 rx_ring->queue_index);
+14 -6
drivers/net/ethernet/ti/icssg/icssg_prueth.c
··· 654 654 655 655 static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr) 656 656 { 657 - struct net_device *real_dev; 657 + struct net_device *real_dev, *port_dev; 658 658 struct prueth_emac *emac; 659 659 u8 vlan_id, i; 660 660 ··· 663 663 664 664 if (is_hsr_master(real_dev)) { 665 665 for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) { 666 - emac = netdev_priv(hsr_get_port_ndev(real_dev, i)); 667 - if (!emac) 666 + port_dev = hsr_get_port_ndev(real_dev, i); 667 + emac = netdev_priv(port_dev); 668 + if (!emac) { 669 + dev_put(port_dev); 668 670 return -EINVAL; 671 + } 669 672 icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id, 670 673 true); 674 + dev_put(port_dev); 671 675 } 672 676 } else { 673 677 emac = netdev_priv(real_dev); ··· 683 679 684 680 static int icssg_prueth_hsr_del_mcast(struct net_device *ndev, const u8 *addr) 685 681 { 686 - struct net_device *real_dev; 682 + struct net_device *real_dev, *port_dev; 687 683 struct prueth_emac *emac; 688 684 u8 vlan_id, i; 689 685 ··· 692 688 693 689 if (is_hsr_master(real_dev)) { 694 690 for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) { 695 - emac = netdev_priv(hsr_get_port_ndev(real_dev, i)); 696 - if (!emac) 691 + port_dev = hsr_get_port_ndev(real_dev, i); 692 + emac = netdev_priv(port_dev); 693 + if (!emac) { 694 + dev_put(port_dev); 697 695 return -EINVAL; 696 + } 698 697 icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id, 699 698 false); 699 + dev_put(port_dev); 700 700 } 701 701 } else { 702 702 emac = netdev_priv(real_dev);
-4
drivers/net/ethernet/wangxun/libwx/wx_hw.c
··· 2078 2078 { 2079 2079 u32 rss_field = 0; 2080 2080 2081 - /* VT, and RSS do not coexist at the same time */ 2082 - if (test_bit(WX_FLAG_VMDQ_ENABLED, wx->flags)) 2083 - return; 2084 - 2085 2081 /* Disable indicating checksum in descriptor, enables RSS hash */ 2086 2082 wr32m(wx, WX_PSR_CTL, WX_PSR_CTL_PCSD, WX_PSR_CTL_PCSD); 2087 2083
+1
drivers/net/macsec.c
··· 4211 4211 if (err < 0) 4212 4212 goto del_dev; 4213 4213 4214 + netdev_update_features(dev); 4214 4215 netif_stacked_transfer_operstate(real_dev, dev); 4215 4216 linkwatch_fire_event(dev); 4216 4217
+1 -1
drivers/net/phy/Kconfig
··· 361 361 tristate "NXP TJA11xx PHYs support" 362 362 depends on HWMON 363 363 help 364 - Currently supports the NXP TJA1100 and TJA1101 PHY. 364 + Currently supports the NXP TJA1100, TJA1101 and TJA1102 PHYs. 365 365 366 366 config NCN26000_PHY 367 367 tristate "Onsemi 10BASE-T1S Ethernet PHY"
+4 -8
drivers/net/phy/phy.c
··· 1065 1065 */ 1066 1066 int phy_config_inband(struct phy_device *phydev, unsigned int modes) 1067 1067 { 1068 - int err; 1068 + lockdep_assert_held(&phydev->lock); 1069 1069 1070 1070 if (!!(modes & LINK_INBAND_DISABLE) + 1071 1071 !!(modes & LINK_INBAND_ENABLE) + 1072 1072 !!(modes & LINK_INBAND_BYPASS) != 1) 1073 1073 return -EINVAL; 1074 1074 1075 - mutex_lock(&phydev->lock); 1076 1075 if (!phydev->drv) 1077 - err = -EIO; 1076 + return -EIO; 1078 1077 else if (!phydev->drv->config_inband) 1079 - err = -EOPNOTSUPP; 1080 - else 1081 - err = phydev->drv->config_inband(phydev, modes); 1082 - mutex_unlock(&phydev->lock); 1078 + return -EOPNOTSUPP; 1083 1079 1084 - return err; 1080 + return phydev->drv->config_inband(phydev, modes); 1085 1081 } 1086 1082 EXPORT_SYMBOL(phy_config_inband); 1087 1083
+3 -2
drivers/net/phy/phy_device.c
··· 287 287 if (phydev->phy_link_change == phy_link_change) 288 288 return phydev->attached_dev && phydev->adjust_link; 289 289 290 - /* phydev->phy_link_change is implicitly phylink_phy_change() */ 291 - return true; 290 + return !!phydev->phy_link_change; 292 291 } 293 292 294 293 static bool mdio_bus_phy_may_suspend(struct phy_device *phydev) ··· 1863 1864 phydev->attached_dev = NULL; 1864 1865 phy_link_topo_del_phy(dev, phydev); 1865 1866 } 1867 + 1868 + phydev->phy_link_change = NULL; 1866 1869 phydev->phylink = NULL; 1867 1870 1868 1871 if (!phydev->is_on_sfp_module)
+25 -3
drivers/net/phy/phylink.c
··· 67 67 struct timer_list link_poll; 68 68 69 69 struct mutex state_mutex; 70 + /* Serialize updates to pl->phydev with phylink_resolve() */ 71 + struct mutex phydev_mutex; 70 72 struct phylink_link_state phy_state; 71 73 unsigned int phy_ib_mode; 72 74 struct work_struct resolve; ··· 1434 1432 static void phylink_mac_initial_config(struct phylink *pl, bool force_restart) 1435 1433 { 1436 1434 struct phylink_link_state link_state; 1435 + struct phy_device *phy = pl->phydev; 1437 1436 1438 1437 switch (pl->req_link_an_mode) { 1439 1438 case MLO_AN_PHY: ··· 1458 1455 link_state.link = false; 1459 1456 1460 1457 phylink_apply_manual_flow(pl, &link_state); 1458 + if (phy) 1459 + mutex_lock(&phy->lock); 1461 1460 phylink_major_config(pl, force_restart, &link_state); 1461 + if (phy) 1462 + mutex_unlock(&phy->lock); 1462 1463 } 1463 1464 1464 1465 static const char *phylink_pause_to_str(int pause) ··· 1598 1591 struct phylink_link_state link_state; 1599 1592 bool mac_config = false; 1600 1593 bool retrigger = false; 1594 + struct phy_device *phy; 1601 1595 bool cur_link_state; 1602 1596 1597 + mutex_lock(&pl->phydev_mutex); 1598 + phy = pl->phydev; 1599 + if (phy) 1600 + mutex_lock(&phy->lock); 1603 1601 mutex_lock(&pl->state_mutex); 1604 1602 cur_link_state = phylink_link_is_up(pl); 1605 1603 ··· 1638 1626 /* If we have a phy, the "up" state is the union of both the 1639 1627 * PHY and the MAC 1640 1628 */ 1641 - if (pl->phydev) 1629 + if (phy) 1642 1630 link_state.link &= pl->phy_state.link; 1643 1631 1644 1632 /* Only update if the PHY link is up */ 1645 - if (pl->phydev && pl->phy_state.link) { 1633 + if (phy && pl->phy_state.link) { 1646 1634 /* If the interface has changed, force a link down 1647 1635 * event if the link isn't already down, and re-resolve. 
1648 1636 */ ··· 1706 1694 queue_work(system_power_efficient_wq, &pl->resolve); 1707 1695 } 1708 1696 mutex_unlock(&pl->state_mutex); 1697 + if (phy) 1698 + mutex_unlock(&phy->lock); 1699 + mutex_unlock(&pl->phydev_mutex); 1709 1700 } 1710 1701 1711 1702 static void phylink_run_resolve(struct phylink *pl) ··· 1844 1829 if (!pl) 1845 1830 return ERR_PTR(-ENOMEM); 1846 1831 1832 + mutex_init(&pl->phydev_mutex); 1847 1833 mutex_init(&pl->state_mutex); 1848 1834 INIT_WORK(&pl->resolve, phylink_resolve); 1849 1835 ··· 2105 2089 dev_name(&phy->mdio.dev), phy->drv->name, irq_str); 2106 2090 kfree(irq_str); 2107 2091 2092 + mutex_lock(&pl->phydev_mutex); 2108 2093 mutex_lock(&phy->lock); 2109 2094 mutex_lock(&pl->state_mutex); 2110 2095 pl->phydev = phy; ··· 2151 2134 2152 2135 mutex_unlock(&pl->state_mutex); 2153 2136 mutex_unlock(&phy->lock); 2137 + mutex_unlock(&pl->phydev_mutex); 2154 2138 2155 2139 phylink_dbg(pl, 2156 2140 "phy: %s setting supported %*pb advertising %*pb\n", ··· 2330 2312 2331 2313 ASSERT_RTNL(); 2332 2314 2315 + mutex_lock(&pl->phydev_mutex); 2333 2316 phy = pl->phydev; 2334 2317 if (phy) { 2335 2318 mutex_lock(&phy->lock); ··· 2340 2321 pl->mac_tx_clk_stop = false; 2341 2322 mutex_unlock(&pl->state_mutex); 2342 2323 mutex_unlock(&phy->lock); 2343 - flush_work(&pl->resolve); 2324 + } 2325 + mutex_unlock(&pl->phydev_mutex); 2344 2326 2327 + if (phy) { 2328 + flush_work(&pl->resolve); 2345 2329 phy_disconnect(phy); 2346 2330 } 2347 2331 }
+67 -55
drivers/net/wireless/ath/ath12k/mac.c
··· 4078 4078 return ret; 4079 4079 } 4080 4080 4081 + static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif) 4082 + { 4083 + struct ath12k *ar = arvif->ar; 4084 + struct ieee80211_vif *vif = arvif->ahvif->vif; 4085 + struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf; 4086 + enum wmi_sta_powersave_param param; 4087 + struct ieee80211_bss_conf *info; 4088 + enum wmi_sta_ps_mode psmode; 4089 + int ret; 4090 + int timeout; 4091 + bool enable_ps; 4092 + 4093 + lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy); 4094 + 4095 + if (vif->type != NL80211_IFTYPE_STATION) 4096 + return; 4097 + 4098 + enable_ps = arvif->ahvif->ps; 4099 + if (enable_ps) { 4100 + psmode = WMI_STA_PS_MODE_ENABLED; 4101 + param = WMI_STA_PS_PARAM_INACTIVITY_TIME; 4102 + 4103 + timeout = conf->dynamic_ps_timeout; 4104 + if (timeout == 0) { 4105 + info = ath12k_mac_get_link_bss_conf(arvif); 4106 + if (!info) { 4107 + ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n", 4108 + vif->addr, arvif->link_id); 4109 + return; 4110 + } 4111 + 4112 + /* firmware doesn't like 0 */ 4113 + timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000; 4114 + } 4115 + 4116 + ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param, 4117 + timeout); 4118 + if (ret) { 4119 + ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n", 4120 + arvif->vdev_id, ret); 4121 + return; 4122 + } 4123 + } else { 4124 + psmode = WMI_STA_PS_MODE_DISABLED; 4125 + } 4126 + 4127 + ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n", 4128 + arvif->vdev_id, psmode ? "enable" : "disable");
4129 + 4130 + ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode); 4131 + if (ret) 4132 + ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n", 4133 + psmode, arvif->vdev_id, ret); 4134 + } 4135 + 4081 4136 static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw, 4082 4137 struct ieee80211_vif *vif, 4083 4138 u64 changed) 4084 4139 { 4085 4140 struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif); 4086 4141 unsigned long links = ahvif->links_map; 4142 + struct ieee80211_vif_cfg *vif_cfg; 4087 4143 struct ieee80211_bss_conf *info; 4088 4144 struct ath12k_link_vif *arvif; 4089 4145 struct ieee80211_sta *sta; ··· 4203 4147 } 4204 4148 } 4205 4149 } 4206 - } 4207 4150 4208 - static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif) 4209 - { 4210 - struct ath12k *ar = arvif->ar; 4211 - struct ieee80211_vif *vif = arvif->ahvif->vif; 4212 - struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf; 4213 - enum wmi_sta_powersave_param param; 4214 - struct ieee80211_bss_conf *info; 4215 - enum wmi_sta_ps_mode psmode; 4216 - int ret; 4217 - int timeout; 4218 - bool enable_ps; 4151 + if (changed & BSS_CHANGED_PS) { 4152 + links = ahvif->links_map; 4153 + vif_cfg = &vif->cfg; 4219 4154 4220 - lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy); 4155 + for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) { 4156 + arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]); 4157 + if (!arvif || !arvif->ar) 4158 + continue; 4221 4159 4222 - if (vif->type != NL80211_IFTYPE_STATION) 4223 - return; 4160 + ar = arvif->ar; 4224 4161 4225 - enable_ps = arvif->ahvif->ps; 4226 - if (enable_ps) { 4227 - psmode = WMI_STA_PS_MODE_ENABLED; 4228 - param = WMI_STA_PS_PARAM_INACTIVITY_TIME; 4229 - 4230 - timeout = conf->dynamic_ps_timeout; 4231 - if (timeout == 0) { 4232 - info = ath12k_mac_get_link_bss_conf(arvif); 4233 - if (!info) { 4234 - ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n",
4235 - vif->addr, arvif->link_id); 4236 - return; 4162 + if (ar->ab->hw_params->supports_sta_ps) { 4163 + ahvif->ps = vif_cfg->ps; 4164 + ath12k_mac_vif_setup_ps(arvif); 4237 4165 } 4238 - 4239 - /* firmware doesn't like 0 */ 4240 - timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000; 4241 4166 } 4242 - 4243 - ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param, 4244 - timeout); 4245 - if (ret) { 4246 - ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n", 4247 - arvif->vdev_id, ret); 4248 - return; 4249 - } 4250 - } else { 4251 - psmode = WMI_STA_PS_MODE_DISABLED; 4252 4167 } 4253 - 4254 - ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n", 4255 - arvif->vdev_id, psmode ? "enable" : "disable"); 4256 - 4257 - ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode); 4258 - if (ret) 4259 - ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n", 4260 - psmode, arvif->vdev_id, ret); 4261 4168 } 4262 4169 4263 4170 static bool ath12k_mac_supports_tpc(struct ath12k *ar, struct ath12k_vif *ahvif, ··· 4242 4223 { 4243 4224 struct ath12k_vif *ahvif = arvif->ahvif; 4244 4225 struct ieee80211_vif *vif = ath12k_ahvif_to_vif(ahvif); 4245 - struct ieee80211_vif_cfg *vif_cfg = &vif->cfg; 4246 4226 struct cfg80211_chan_def def; 4247 4227 u32 param_id, param_value; 4248 4228 enum nl80211_band band; ··· 4528 4510 } 4529 4511 4530 4512 ath12k_mac_fils_discovery(arvif, info); 4531 - 4532 - if (changed & BSS_CHANGED_PS && 4533 - ar->ab->hw_params->supports_sta_ps) { 4534 - ahvif->ps = vif_cfg->ps; 4535 - ath12k_mac_vif_setup_ps(arvif); 4536 - } 4537 4513 }
+1 -1
drivers/net/wireless/ath/ath12k/wmi.c
··· 843 843 cmd->tx_params_valid = 0; 844 844 845 845 frame_tlv = (struct wmi_tlv *)(skb->data + sizeof(*cmd)); 846 - frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len); 846 + frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len_aligned); 847 847 848 848 memcpy(frame_tlv->value, frame->data, buf_len); 849 849
+13 -13
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 124 124 {IWL_PCI_DEVICE(0x0082, 0x1304, iwl6005_mac_cfg)},/* low 5GHz active */ 125 125 {IWL_PCI_DEVICE(0x0082, 0x1305, iwl6005_mac_cfg)},/* high 5GHz active */ 126 126 127 - /* 6x30 Series */ 128 - {IWL_PCI_DEVICE(0x008A, 0x5305, iwl1000_mac_cfg)}, 129 - {IWL_PCI_DEVICE(0x008A, 0x5307, iwl1000_mac_cfg)}, 130 - {IWL_PCI_DEVICE(0x008A, 0x5325, iwl1000_mac_cfg)}, 131 - {IWL_PCI_DEVICE(0x008A, 0x5327, iwl1000_mac_cfg)}, 132 - {IWL_PCI_DEVICE(0x008B, 0x5315, iwl1000_mac_cfg)}, 133 - {IWL_PCI_DEVICE(0x008B, 0x5317, iwl1000_mac_cfg)}, 127 + /* 1030/6x30 Series */ 128 + {IWL_PCI_DEVICE(0x008A, 0x5305, iwl6030_mac_cfg)}, 129 + {IWL_PCI_DEVICE(0x008A, 0x5307, iwl6030_mac_cfg)}, 130 + {IWL_PCI_DEVICE(0x008A, 0x5325, iwl6030_mac_cfg)}, 131 + {IWL_PCI_DEVICE(0x008A, 0x5327, iwl6030_mac_cfg)}, 132 + {IWL_PCI_DEVICE(0x008B, 0x5315, iwl6030_mac_cfg)}, 133 + {IWL_PCI_DEVICE(0x008B, 0x5317, iwl6030_mac_cfg)}, 134 134 {IWL_PCI_DEVICE(0x0090, 0x5211, iwl6030_mac_cfg)}, 135 135 {IWL_PCI_DEVICE(0x0090, 0x5215, iwl6030_mac_cfg)}, 136 136 {IWL_PCI_DEVICE(0x0090, 0x5216, iwl6030_mac_cfg)}, ··· 181 181 {IWL_PCI_DEVICE(0x08AE, 0x1027, iwl1000_mac_cfg)}, 182 182 183 183 /* 130 Series WiFi */ 184 - {IWL_PCI_DEVICE(0x0896, 0x5005, iwl1000_mac_cfg)}, 185 - {IWL_PCI_DEVICE(0x0896, 0x5007, iwl1000_mac_cfg)}, 186 - {IWL_PCI_DEVICE(0x0897, 0x5015, iwl1000_mac_cfg)}, 187 - {IWL_PCI_DEVICE(0x0897, 0x5017, iwl1000_mac_cfg)}, 188 - {IWL_PCI_DEVICE(0x0896, 0x5025, iwl1000_mac_cfg)}, 189 - {IWL_PCI_DEVICE(0x0896, 0x5027, iwl1000_mac_cfg)}, 184 + {IWL_PCI_DEVICE(0x0896, 0x5005, iwl6030_mac_cfg)}, 185 + {IWL_PCI_DEVICE(0x0896, 0x5007, iwl6030_mac_cfg)}, 186 + {IWL_PCI_DEVICE(0x0897, 0x5015, iwl6030_mac_cfg)}, 187 + {IWL_PCI_DEVICE(0x0897, 0x5017, iwl6030_mac_cfg)}, 188 + {IWL_PCI_DEVICE(0x0896, 0x5025, iwl6030_mac_cfg)}, 189 + {IWL_PCI_DEVICE(0x0896, 0x5027, iwl6030_mac_cfg)}, 190 190 191 191 /* 2x00 Series */ 192 192 {IWL_PCI_DEVICE(0x0890, 0x4022, iwl2000_mac_cfg)},
+3 -1
drivers/net/wireless/virtual/virt_wifi.c
··· 277 277 priv->is_connected = true; 278 278 279 279 /* Schedules an event that acquires the rtnl lock. */ 280 - cfg80211_connect_result(priv->upperdev, requested_bss, NULL, 0, NULL, 0, 280 + cfg80211_connect_result(priv->upperdev, 281 + priv->is_connected ? fake_router_bssid : NULL, 282 + NULL, 0, NULL, 0, 281 283 status, GFP_KERNEL); 282 284 netif_carrier_on(priv->upperdev); 283 285 }
+11 -7
drivers/nvme/host/core.c
··· 903 903 u32 upper, lower; 904 904 u64 ref48; 905 905 906 + /* only type1 and type 2 PI formats have a reftag */ 907 + switch (ns->head->pi_type) { 908 + case NVME_NS_DPS_PI_TYPE1: 909 + case NVME_NS_DPS_PI_TYPE2: 910 + break; 911 + default: 912 + return; 913 + } 914 + 906 915 /* both rw and write zeroes share the same reftag format */ 907 916 switch (ns->head->guard_type) { 908 917 case NVME_NVM_NS_16B_GUARD: ··· 951 942 952 943 if (nvme_ns_has_pi(ns->head)) { 953 944 cmnd->write_zeroes.control |= cpu_to_le16(NVME_RW_PRINFO_PRACT); 954 - 955 - switch (ns->head->pi_type) { 956 - case NVME_NS_DPS_PI_TYPE1: 957 - case NVME_NS_DPS_PI_TYPE2: 958 - nvme_set_ref_tag(ns, cmnd, req); 959 - break; 960 - } 945 + nvme_set_ref_tag(ns, cmnd, req); 961 946 } 962 947 963 948 return BLK_STS_OK; ··· 1042 1039 if (WARN_ON_ONCE(!nvme_ns_has_pi(ns->head))) 1043 1040 return BLK_STS_NOTSUPP; 1044 1041 control |= NVME_RW_PRINFO_PRACT; 1042 + nvme_set_ref_tag(ns, cmnd, req); 1045 1043 } 1046 1044 1047 1045 if (bio_integrity_flagged(req->bio, BIP_CHECK_GUARD))
-3
drivers/pcmcia/Kconfig
··· 250 250 config PCCARD_NONSTATIC 251 251 bool 252 252 253 - config PCCARD_IODYN 254 - bool 255 - 256 253 endif # PCCARD
-1
drivers/pcmcia/Makefile
··· 12 12 13 13 pcmcia_rsrc-y += rsrc_mgr.o 14 14 pcmcia_rsrc-$(CONFIG_PCCARD_NONSTATIC) += rsrc_nonstatic.o 15 - pcmcia_rsrc-$(CONFIG_PCCARD_IODYN) += rsrc_iodyn.o 16 15 obj-$(CONFIG_PCCARD) += pcmcia_rsrc.o 17 16 18 17
-17
drivers/pcmcia/cs.c
··· 229 229 EXPORT_SYMBOL(pcmcia_unregister_socket); 230 230 231 231 232 - struct pcmcia_socket *pcmcia_get_socket_by_nr(unsigned int nr) 233 - { 234 - struct pcmcia_socket *s; 235 - 236 - down_read(&pcmcia_socket_list_rwsem); 237 - list_for_each_entry(s, &pcmcia_socket_list, socket_list) 238 - if (s->sock == nr) { 239 - up_read(&pcmcia_socket_list_rwsem); 240 - return s; 241 - } 242 - up_read(&pcmcia_socket_list_rwsem); 243 - 244 - return NULL; 245 - 246 - } 247 - EXPORT_SYMBOL(pcmcia_get_socket_by_nr); 248 - 249 232 static int socket_reset(struct pcmcia_socket *skt) 250 233 { 251 234 int status, i;
-1
drivers/pcmcia/cs_internal.h
··· 116 116 extern const struct class pcmcia_socket_class; 117 117 118 118 int pccard_register_pcmcia(struct pcmcia_socket *s, struct pcmcia_callback *c); 119 - struct pcmcia_socket *pcmcia_get_socket_by_nr(unsigned int nr); 120 119 121 120 void pcmcia_parse_uevents(struct pcmcia_socket *socket, unsigned int events); 122 121 #define PCMCIA_UEVENT_EJECT 0x0001
+1 -1
drivers/pcmcia/ds.c
··· 1308 1308 * physically present, even if the call to this function returns 1309 1309 * non-NULL. Furthermore, the device driver most likely is unbound 1310 1310 * almost immediately, so the timeframe where pcmcia_dev_present 1311 - * returns NULL is probably really really small. 1311 + * returns NULL is probably really, really small. 1312 1312 */ 1313 1313 struct pcmcia_device *pcmcia_dev_present(struct pcmcia_device *_p_dev) 1314 1314 {
+9 -1
drivers/pcmcia/omap_cf.c
··· 215 215 return -EINVAL; 216 216 217 217 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 218 + if (!res) 219 + return -EINVAL; 218 220 219 221 cf = kzalloc(sizeof *cf, GFP_KERNEL); 220 222 if (!cf) ··· 304 302 kfree(cf); 305 303 } 306 304 307 - static struct platform_driver omap_cf_driver = { 305 + /* 306 + * omap_cf_remove() lives in .exit.text. For drivers registered via 307 + * platform_driver_probe() this is ok because they cannot get unbound at 308 + * runtime. So mark the driver struct with __refdata to prevent modpost 309 + * triggering a section mismatch warning. 310 + */ 311 + static struct platform_driver omap_cf_driver __refdata = { 308 312 .driver = { 309 313 .name = driver_name, 310 314 },
-168
drivers/pcmcia/rsrc_iodyn.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * rsrc_iodyn.c -- Resource management routines for MEM-static sockets. 4 - * 5 - * The initial developer of the original code is David A. Hinds 6 - * <dahinds@users.sourceforge.net>. Portions created by David A. Hinds 7 - * are Copyright (C) 1999 David A. Hinds. All Rights Reserved. 8 - * 9 - * (C) 1999 David A. Hinds 10 - */ 11 - 12 - #include <linux/slab.h> 13 - #include <linux/module.h> 14 - #include <linux/kernel.h> 15 - 16 - #include <pcmcia/ss.h> 17 - #include <pcmcia/cistpl.h> 18 - #include "cs_internal.h" 19 - 20 - 21 - struct pcmcia_align_data { 22 - unsigned long mask; 23 - unsigned long offset; 24 - }; 25 - 26 - static resource_size_t pcmcia_align(void *align_data, 27 - const struct resource *res, 28 - resource_size_t size, resource_size_t align) 29 - { 30 - struct pcmcia_align_data *data = align_data; 31 - resource_size_t start; 32 - 33 - start = (res->start & ~data->mask) + data->offset; 34 - if (start < res->start) 35 - start += data->mask + 1; 36 - 37 - #ifdef CONFIG_X86 38 - if (res->flags & IORESOURCE_IO) { 39 - if (start & 0x300) 40 - start = (start + 0x3ff) & ~0x3ff; 41 - } 42 - #endif 43 - 44 - #ifdef CONFIG_M68K 45 - if (res->flags & IORESOURCE_IO) { 46 - if ((res->start + size - 1) >= 1024) 47 - start = res->end; 48 - } 49 - #endif 50 - 51 - return start; 52 - } 53 - 54 - 55 - static struct resource *__iodyn_find_io_region(struct pcmcia_socket *s, 56 - unsigned long base, int num, 57 - unsigned long align) 58 - { 59 - struct resource *res = pcmcia_make_resource(0, num, IORESOURCE_IO, 60 - dev_name(&s->dev)); 61 - struct pcmcia_align_data data; 62 - unsigned long min = base; 63 - int ret; 64 - 65 - data.mask = align - 1; 66 - data.offset = base & data.mask; 67 - 68 - #ifdef CONFIG_PCI 69 - if (s->cb_dev) { 70 - ret = pci_bus_alloc_resource(s->cb_dev->bus, res, num, 1, 71 - min, 0, pcmcia_align, &data); 72 - } else 73 - #endif 74 - ret = allocate_resource(&ioport_resource, res, num, min, ~0UL,
75 - 1, pcmcia_align, &data); 76 - 77 - if (ret != 0) { 78 - kfree(res); 79 - res = NULL; 80 - } 81 - return res; 82 - } 83 - 84 - static int iodyn_find_io(struct pcmcia_socket *s, unsigned int attr, 85 - unsigned int *base, unsigned int num, 86 - unsigned int align, struct resource **parent) 87 - { 88 - int i, ret = 0; 89 - 90 - /* Check for an already-allocated window that must conflict with 91 - * what was asked for. It is a hack because it does not catch all 92 - * potential conflicts, just the most obvious ones. 93 - */ 94 - for (i = 0; i < MAX_IO_WIN; i++) { 95 - if (!s->io[i].res) 96 - continue; 97 - 98 - if (!*base) 99 - continue; 100 - 101 - if ((s->io[i].res->start & (align-1)) == *base) 102 - return -EBUSY; 103 - } 104 - 105 - for (i = 0; i < MAX_IO_WIN; i++) { 106 - struct resource *res = s->io[i].res; 107 - unsigned int try; 108 - 109 - if (res && (res->flags & IORESOURCE_BITS) != 110 - (attr & IORESOURCE_BITS)) 111 - continue; 112 - 113 - if (!res) { 114 - if (align == 0) 115 - align = 0x10000; 116 - 117 - res = s->io[i].res = __iodyn_find_io_region(s, *base, 118 - num, align); 119 - if (!res) 120 - return -EINVAL; 121 - 122 - *base = res->start; 123 - s->io[i].res->flags = 124 - ((res->flags & ~IORESOURCE_BITS) | 125 - (attr & IORESOURCE_BITS)); 126 - s->io[i].InUse = num; 127 - *parent = res; 128 - return 0; 129 - } 130 - 131 - /* Try to extend top of window */ 132 - try = res->end + 1; 133 - if ((*base == 0) || (*base == try)) { 134 - if (adjust_resource(s->io[i].res, res->start, 135 - resource_size(res) + num)) 136 - continue; 137 - *base = try; 138 - s->io[i].InUse += num; 139 - *parent = res; 140 - return 0; 141 - } 142 - 143 - /* Try to extend bottom of window */ 144 - try = res->start - num; 145 - if ((*base == 0) || (*base == try)) { 146 - if (adjust_resource(s->io[i].res, 147 - res->start - num, 148 - resource_size(res) + num)) 149 - continue; 150 - *base = try; 151 - s->io[i].InUse += num; 152 - *parent = res; 153 - return 0; 154 - }
155 - } 156 - 157 - return -EINVAL; 158 - } 159 - 160 - 161 - struct pccard_resource_ops pccard_iodyn_ops = { 162 - .validate_mem = NULL, 163 - .find_io = iodyn_find_io, 164 - .find_mem = NULL, 165 - .init = static_init, 166 - .exit = NULL, 167 - }; 168 - EXPORT_SYMBOL(pccard_iodyn_ops);
+3 -1
drivers/pcmcia/rsrc_nonstatic.c
··· 375 375 376 376 if (validate && !s->fake_cis) { 377 377 /* move it to the validated data set */ 378 - add_interval(&s_data->mem_db_valid, base, size); 378 + ret = add_interval(&s_data->mem_db_valid, base, size); 379 + if (ret) 380 + return ret; 379 381 sub_interval(&s_data->mem_db, base, size); 380 382 } 381 383
+3 -2
drivers/pcmcia/socket_sysfs.c
··· 10 10 #include <linux/init.h> 11 11 #include <linux/kernel.h> 12 12 #include <linux/string.h> 13 + #include <linux/string_choices.h> 13 14 #include <linux/major.h> 14 15 #include <linux/errno.h> 15 16 #include <linux/mm.h> ··· 99 98 char *buf) 100 99 { 101 100 struct pcmcia_socket *s = to_socket(dev); 102 - return sysfs_emit(buf, "%s\n", s->state & SOCKET_SUSPEND ? "off" : "on"); 101 + return sysfs_emit(buf, "%s\n", str_off_on(s->state & SOCKET_SUSPEND)); 103 102 } 104 103 105 104 static ssize_t pccard_store_card_pm_state(struct device *dev, ··· 178 177 struct device_attribute *attr, char *buf) 179 178 { 180 179 struct pcmcia_socket *s = to_socket(dev); 181 - return sysfs_emit(buf, "%s\n", s->resource_setup_done ? "yes" : "no"); 180 + return sysfs_emit(buf, "%s\n", str_yes_no(s->resource_setup_done)); 182 181 } 183 182 184 183 static ssize_t pccard_store_resource(struct device *dev,
+12 -59
drivers/platform/x86/acer-wmi.c
··· 129 129 enum acer_wmi_gaming_misc_setting { 130 130 ACER_WMID_MISC_SETTING_OC_1 = 0x0005, 131 131 ACER_WMID_MISC_SETTING_OC_2 = 0x0007, 132 + /* Unreliable on some models */ 132 133 ACER_WMID_MISC_SETTING_SUPPORTED_PROFILES = 0x000A, 133 134 ACER_WMID_MISC_SETTING_PLATFORM_PROFILE = 0x000B, 134 135 }; ··· 794 793 * returning from turbo mode when the mode key is in toggle mode. 795 794 */ 796 795 static int last_non_turbo_profile = INT_MIN; 797 - 798 - /* The most performant supported profile */ 799 - static int acer_predator_v4_max_perf; 800 796 801 797 enum acer_predator_v4_thermal_profile { 802 798 ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET = 0x00, ··· 2012 2014 if (err) 2013 2015 return err; 2014 2016 2015 - if (tp != acer_predator_v4_max_perf) 2017 + if (tp != ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO) 2016 2018 last_non_turbo_profile = tp; 2017 2019 2018 2020 return 0; ··· 2021 2023 static int 2022 2024 acer_predator_v4_platform_profile_probe(void *drvdata, unsigned long *choices) 2023 2025 { 2024 - unsigned long supported_profiles; 2025 - int err; 2026 + set_bit(PLATFORM_PROFILE_PERFORMANCE, choices); 2027 + set_bit(PLATFORM_PROFILE_BALANCED_PERFORMANCE, choices); 2028 + set_bit(PLATFORM_PROFILE_BALANCED, choices); 2029 + set_bit(PLATFORM_PROFILE_QUIET, choices); 2030 + set_bit(PLATFORM_PROFILE_LOW_POWER, choices); 2026 2031 2027 - err = WMID_gaming_get_misc_setting(ACER_WMID_MISC_SETTING_SUPPORTED_PROFILES, 2028 - (u8 *)&supported_profiles); 2029 - if (err) 2030 - return err; 2031 - 2032 - /* Iterate through supported profiles in order of increasing performance */ 2033 - if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_ECO, &supported_profiles)) { 2034 - set_bit(PLATFORM_PROFILE_LOW_POWER, choices); 2035 - acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_ECO; 2036 - last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_ECO; 2037 - } 2038 - 2039 - if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET, &supported_profiles)) { 2040 - set_bit(PLATFORM_PROFILE_QUIET, choices);
2041 - acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET; 2042 - last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET; 2043 - } 2044 - 2045 - if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED, &supported_profiles)) { 2046 - set_bit(PLATFORM_PROFILE_BALANCED, choices); 2047 - acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED; 2048 - last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED; 2049 - } 2050 - 2051 - if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE, &supported_profiles)) { 2052 - set_bit(PLATFORM_PROFILE_BALANCED_PERFORMANCE, choices); 2053 - acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE; 2054 - 2055 - /* We only use this profile as a fallback option in case no prior 2056 - * profile is supported. 2057 - */ 2058 - if (last_non_turbo_profile < 0) 2059 - last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE; 2060 - } 2061 - 2062 - if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO, &supported_profiles)) { 2063 - set_bit(PLATFORM_PROFILE_PERFORMANCE, choices); 2064 - acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO; 2065 - 2066 - /* We need to handle the hypothetical case where only the turbo profile 2067 - * is supported. In this case the turbo toggle will essentially be a 2068 - * no-op.
2069 - */ 2070 - if (last_non_turbo_profile < 0) 2071 - last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO; 2072 - } 2032 + /* Set default non-turbo profile */ 2033 + last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED; 2073 2034 2074 2035 return 0; 2075 2036 } ··· 2065 2108 if (cycle_gaming_thermal_profile) { 2066 2109 platform_profile_cycle(); 2067 2110 } else { 2068 - /* Do nothing if no suitable platform profiles where found */ 2069 - if (last_non_turbo_profile < 0) 2070 - return 0; 2071 - 2072 2111 err = WMID_gaming_get_misc_setting( 2073 2112 ACER_WMID_MISC_SETTING_PLATFORM_PROFILE, &current_tp); 2074 2113 if (err) 2075 2114 return err; 2076 2115 2077 - if (current_tp == acer_predator_v4_max_perf) 2116 + if (current_tp == ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO) 2078 2117 tp = last_non_turbo_profile; 2079 2118 else 2080 - tp = acer_predator_v4_max_perf; 2119 + tp = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO; 2081 2120 2082 2121 err = WMID_gaming_set_misc_setting( 2083 2122 ACER_WMID_MISC_SETTING_PLATFORM_PROFILE, tp); ··· 2081 2128 return err; 2082 2129 2083 2130 /* Store last profile for toggle */ 2084 - if (current_tp != acer_predator_v4_max_perf) 2131 + if (current_tp != ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO) 2085 2132 last_non_turbo_profile = current_tp; 2086 2133 2087 2134 platform_profile_notify(platform_profile_device);
+10 -4
drivers/platform/x86/amd/hfi/hfi.c
··· 385 385 amd_hfi_data->pcct_entry = pcct_entry; 386 386 pcct_ext = (struct acpi_pcct_ext_pcc_slave *)pcct_entry; 387 387 388 - if (pcct_ext->length <= 0) 389 - return -EINVAL; 388 + if (pcct_ext->length <= 0) { 389 + ret = -EINVAL; 390 + goto out; 391 + } 390 392 391 393 amd_hfi_data->shmem = devm_kzalloc(amd_hfi_data->dev, pcct_ext->length, GFP_KERNEL); 392 - if (!amd_hfi_data->shmem) 393 - return -ENOMEM; 394 + if (!amd_hfi_data->shmem) { 395 + ret = -ENOMEM; 396 + goto out; 397 + } 394 398 395 399 pcc_chan->shmem_base_addr = pcct_ext->base_address; 396 400 pcc_chan->shmem_size = pcct_ext->length; ··· 402 398 /* parse the shared memory info from the PCCT table */ 403 399 ret = amd_hfi_fill_metadata(amd_hfi_data); 404 400 401 + out: 402 + /* Don't leak any ACPI memory */ 405 403 acpi_put_table(pcct_tbl); 406 404 407 405 return ret;
+14
drivers/platform/x86/amd/pmc/pmc-quirks.c
··· 248 248 DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"), 249 249 } 250 250 }, 251 + { 252 + .ident = "TUXEDO InfinityBook Pro 14/15 AMD Gen10", 253 + .driver_data = &quirk_spurious_8042, 254 + .matches = { 255 + DMI_MATCH(DMI_BOARD_NAME, "XxHP4NAx"), 256 + } 257 + }, 258 + { 259 + .ident = "TUXEDO InfinityBook Pro 14/15 AMD Gen10", 260 + .driver_data = &quirk_spurious_8042, 261 + .matches = { 262 + DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"), 263 + } 264 + }, 251 265 {} 252 266 }; 253 267
+22 -6
drivers/platform/x86/asus-nb-wmi.c
···
147 147 };
148 148
149 149 static struct quirk_entry quirk_asus_zenbook_duo_kbd = {
150  -	.ignore_key_wlan = true,
150  +	.key_wlan_event = ASUS_WMI_KEY_IGNORE,
151  + };
152  +
153  + static struct quirk_entry quirk_asus_z13 = {
154  +	.key_wlan_event = ASUS_WMI_KEY_ARMOURY,
155  +	.tablet_switch_mode = asus_wmi_kbd_dock_devid,
151 156 };
152 157
153 158 static int dmi_matched(const struct dmi_system_id *dmi)
···
544 539		},
545 540		.driver_data = &quirk_asus_zenbook_duo_kbd,
546 541	},
542  +	{
543  +		.callback = dmi_matched,
544  +		.ident = "ASUS ROG Z13",
545  +		.matches = {
546  +			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
547  +			DMI_MATCH(DMI_PRODUCT_NAME, "ROG Flow Z13"),
548  +		},
549  +		.driver_data = &quirk_asus_z13,
550  +	},
547 551	{},
548 552 };
···
632 618	{ KE_KEY, 0x93, { KEY_SWITCHVIDEOMODE } }, /* SDSP LCD + CRT + TV + DVI */
633 619	{ KE_KEY, 0x95, { KEY_MEDIA } },
634 620	{ KE_KEY, 0x99, { KEY_PHONE } }, /* Conflicts with fan mode switch */
621  +	{ KE_KEY, 0X9D, { KEY_FN_F } },
635 622	{ KE_KEY, 0xA0, { KEY_SWITCHVIDEOMODE } }, /* SDSP HDMI only */
636 623	{ KE_KEY, 0xA1, { KEY_SWITCHVIDEOMODE } }, /* SDSP LCD + HDMI */
637 624	{ KE_KEY, 0xA2, { KEY_SWITCHVIDEOMODE } }, /* SDSP CRT + HDMI */
···
647 632	{ KE_IGNORE, 0xC0, }, /* External display connect/disconnect notification */
648 633	{ KE_KEY, 0xC4, { KEY_KBDILLUMUP } },
649 634	{ KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } },
635  +	{ KE_KEY, 0xCA, { KEY_F13 } }, /* Noise cancelling on Expertbook B9 */
636  +	{ KE_KEY, 0xCB, { KEY_F14 } }, /* Fn+noise-cancel */
650 637	{ KE_IGNORE, 0xC6, }, /* Ambient Light Sensor notification */
651 638	{ KE_IGNORE, 0xCF, }, /* AC mode */
652 639	{ KE_KEY, 0xFA, { KEY_PROG2 } }, /* Lid flip action */
653 640	{ KE_KEY, 0xBD, { KEY_PROG2 } }, /* Lid flip action on ROG xflow laptops */
641  +	{ KE_KEY, ASUS_WMI_KEY_ARMOURY, { KEY_PROG3 } },
654 642	{ KE_END, 0},
655 643 };
···
673 655		if (atkbd_reports_vol_keys)
674 656			*code = ASUS_WMI_KEY_IGNORE;
675 657		break;
676  -	case 0x5D: /* Wireless console Toggle */
677  -	case 0x5E: /* Wireless console Enable */
678  -	case 0x5F: /* Wireless console Disable */
679  -		if (quirks->ignore_key_wlan)
680  -			*code = ASUS_WMI_KEY_IGNORE;
658  +	case 0x5F: /* Wireless console Disable / Special Key */
659  +		if (quirks->key_wlan_event)
660  +			*code = quirks->key_wlan_event;
681 661		break;
682 662	}
683 663 }
+8 -1
drivers/platform/x86/asus-wmi.c
···
5088 5088
5089 5089	asus_s2idle_check_register();
5090 5090
5091  -	return asus_wmi_add(pdev);
5091  +	ret = asus_wmi_add(pdev);
5092  +	if (ret)
5093  +		asus_s2idle_check_unregister();
5094  +
5095  +	return ret;
5092 5096 }
5093 5097
5094 5098 static bool used;
5099  + static DEFINE_MUTEX(register_mutex);
5095 5100
5096 5101 int __init_or_module asus_wmi_register_driver(struct asus_wmi_driver *driver)
5097 5102 {
5098 5103	struct platform_driver *platform_driver;
5099 5104	struct platform_device *platform_device;
5100 5105
5106  +	guard(mutex)(&register_mutex);
5101 5107	if (used)
5102 5108		return -EBUSY;
···
5126 5120
5127 5121 void asus_wmi_unregister_driver(struct asus_wmi_driver *driver)
5128 5122 {
5123  +	guard(mutex)(&register_mutex);
5129 5124	asus_s2idle_check_unregister();
5130 5125
5131 5126	platform_device_unregister(driver->platform_device);
+2 -1
drivers/platform/x86/asus-wmi.h
···
18 18 #include <linux/i8042.h>
19 19
20 20 #define ASUS_WMI_KEY_IGNORE (-1)
21  + #define ASUS_WMI_KEY_ARMOURY 0xffff01
21 22 #define ASUS_WMI_BRN_DOWN 0x2e
22 23 #define ASUS_WMI_BRN_UP 0x2f
···
41 40	bool wmi_force_als_set;
42 41	bool wmi_ignore_fan;
43 42	bool filter_i8042_e1_extended_codes;
44  -	bool ignore_key_wlan;
43  +	int key_wlan_event;
45 44	enum asus_wmi_tablet_switch_mode tablet_switch_mode;
46 45	int wapf;
47 46	/*
+4
drivers/platform/x86/hp/hp-wmi.c
···
122 122	HPWMI_BATTERY_CHARGE_PERIOD = 0x10,
123 123	HPWMI_SANITIZATION_MODE = 0x17,
124 124	HPWMI_CAMERA_TOGGLE = 0x1A,
125  +	HPWMI_FN_P_HOTKEY = 0x1B,
125 126	HPWMI_OMEN_KEY = 0x1D,
126 127	HPWMI_SMART_EXPERIENCE_APP = 0x21,
127 128 };
···
981 980		if (!sparse_keymap_report_event(hp_wmi_input_dev,
982 981						key_code, 1, true))
983 982			pr_info("Unknown key code - 0x%x\n", key_code);
983  +		break;
984  +	case HPWMI_FN_P_HOTKEY:
985  +		platform_profile_cycle();
984 986		break;
985 987	case HPWMI_OMEN_KEY:
986 988		if (event_data) /* Only should be true for HP Omen */
+1
drivers/platform/x86/intel/pmc/core.c
···
1625 1625	X86_MATCH_VFM(INTEL_RAPTORLAKE_P, &tgl_l_pmc_dev),
1626 1626	X86_MATCH_VFM(INTEL_RAPTORLAKE, &adl_pmc_dev),
1627 1627	X86_MATCH_VFM(INTEL_RAPTORLAKE_S, &adl_pmc_dev),
1628  +	X86_MATCH_VFM(INTEL_BARTLETTLAKE, &adl_pmc_dev),
1628 1629	X86_MATCH_VFM(INTEL_METEORLAKE_L, &mtl_pmc_dev),
1629 1630	X86_MATCH_VFM(INTEL_ARROWLAKE, &arl_pmc_dev),
1630 1631	X86_MATCH_VFM(INTEL_ARROWLAKE_H, &arl_h_pmc_dev),
+1 -1
drivers/platform/x86/intel/tpmi_power_domains.c
···
178 178
179 179	info->punit_thread_id = FIELD_GET(LP_ID_MASK, data);
180 180	info->punit_core_id = FIELD_GET(MODULE_ID_MASK, data);
181  -	info->pkg_id = topology_physical_package_id(cpu);
181  +	info->pkg_id = topology_logical_package_id(cpu);
182 182	info->linux_cpu = cpu;
183 183
184 184	return 0;
+6 -4
drivers/scsi/lpfc/lpfc_nvmet.c
···
1243 1243	struct lpfc_nvmet_tgtport *tgtp;
1244 1244	struct lpfc_async_xchg_ctx *ctxp =
1245 1245		container_of(rsp, struct lpfc_async_xchg_ctx, hdlrctx.fcp_req);
1246  -	struct rqb_dmabuf *nvmebuf = ctxp->rqb_buffer;
1246  +	struct rqb_dmabuf *nvmebuf;
1247 1247	struct lpfc_hba *phba = ctxp->phba;
1248 1248	unsigned long iflag;
1249 1249
···
1251 1251	lpfc_nvmeio_data(phba, "NVMET DEFERRCV: xri x%x sz %d CPU %02x\n",
1252 1252			 ctxp->oxid, ctxp->size, raw_smp_processor_id());
1253 1253
1254  +	spin_lock_irqsave(&ctxp->ctxlock, iflag);
1255  +	nvmebuf = ctxp->rqb_buffer;
1254 1256	if (!nvmebuf) {
1257  +		spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
1255 1258		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
1256 1259				"6425 Defer rcv: no buffer oxid x%x: "
1257 1260				"flg %x ste %x\n",
1258 1261				ctxp->oxid, ctxp->flag, ctxp->state);
1259 1262		return;
1260 1263	}
1264  +	ctxp->rqb_buffer = NULL;
1265  +	spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
1261 1266
1262 1267	tgtp = phba->targetport->private;
1263 1268	if (tgtp)
···
1270 1265
1271 1266	/* Free the nvmebuf since a new buffer already replaced it */
1272 1267	nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
1273  -	spin_lock_irqsave(&ctxp->ctxlock, iflag);
1274  -	ctxp->rqb_buffer = NULL;
1275  -	spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
1276 1268 }
1277 1269
1278 1270 /**
+10 -6
drivers/scsi/sr.c
···
475 475
476 476 static int sr_revalidate_disk(struct scsi_cd *cd)
477 477 {
478  +	struct request_queue *q = cd->device->request_queue;
478 479	struct scsi_sense_hdr sshdr;
480  +	struct queue_limits lim;
481  +	int sector_size;
479 482
480 483	/* if the unit is not ready, nothing more to do */
481 484	if (scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES, &sshdr))
482 485		return 0;
483 486	sr_cd_check(&cd->cdi);
484  -	return get_sectorsize(cd);
487  +	sector_size = get_sectorsize(cd);
488  +
489  +	lim = queue_limits_start_update(q);
490  +	lim.logical_block_size = sector_size;
491  +	lim.features |= BLK_FEAT_ROTATIONAL;
492  +	return queue_limits_commit_update_frozen(q, &lim);
485 493 }
486 494
487 495 static int sr_block_open(struct gendisk *disk, blk_mode_t mode)
···
729 721
730 722 static int get_sectorsize(struct scsi_cd *cd)
731 723 {
732  -	struct request_queue *q = cd->device->request_queue;
733 724	static const u8 cmd[10] = { READ_CAPACITY };
734 725	unsigned char buffer[8] = { };
735  -	struct queue_limits lim;
736 726	int err;
737 727	int sector_size;
738 728	struct scsi_failure failure_defs[] = {
···
801 795		set_capacity(cd->disk, cd->capacity);
802 796	}
803 797
804  -	lim = queue_limits_start_update(q);
805  -	lim.logical_block_size = sector_size;
806  -	return queue_limits_commit_update_frozen(q, &lim);
798  +	return sector_size;
807 799 }
808 800
809 801 static int get_capabilities(struct scsi_cd *cd)
+33
drivers/spi/spi-cadence-quadspi.c
···
108 108
109 109	bool is_jh7110; /* Flag for StarFive JH7110 SoC */
110 110	bool disable_stig_mode;
111  +	refcount_t refcount;
112  +	refcount_t inflight_ops;
111 113
112 114	const struct cqspi_driver_platdata *ddata;
113 115 };
···
737 735	u8 *rxbuf_end = rxbuf + n_rx;
738 736	int ret = 0;
739 737
738  +	if (!refcount_read(&cqspi->refcount))
739  +		return -ENODEV;
740  +
740 741	writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR);
741 742	writel(remaining, reg_base + CQSPI_REG_INDIRECTRDBYTES);
···
1075 1070	unsigned int remaining = n_tx;
1076 1071	unsigned int write_bytes;
1077 1072	int ret;
1073  +
1074  +	if (!refcount_read(&cqspi->refcount))
1075  +		return -ENODEV;
1078 1076
1079 1077	writel(to_addr, reg_base + CQSPI_REG_INDIRECTWRSTARTADDR);
1080 1078	writel(remaining, reg_base + CQSPI_REG_INDIRECTWRBYTES);
···
1469 1461	struct cqspi_st *cqspi = spi_controller_get_devdata(mem->spi->controller);
1470 1462	struct device *dev = &cqspi->pdev->dev;
1471 1463
1464  +	if (refcount_read(&cqspi->inflight_ops) == 0)
1465  +		return -ENODEV;
1466  +
1472 1467	ret = pm_runtime_resume_and_get(dev);
1473 1468	if (ret) {
1474 1469		dev_err(&mem->spi->dev, "resume failed with %d\n", ret);
1475 1470		return ret;
1471  +	}
1472  +
1473  +	if (!refcount_read(&cqspi->refcount))
1474  +		return -EBUSY;
1475  +
1476  +	refcount_inc(&cqspi->inflight_ops);
1477  +
1478  +	if (!refcount_read(&cqspi->refcount)) {
1479  +		if (refcount_read(&cqspi->inflight_ops))
1480  +			refcount_dec(&cqspi->inflight_ops);
1481  +		return -EBUSY;
1476 1482	}
1477 1483
1478 1484	ret = cqspi_mem_process(mem, op);
···
1495 1473
1496 1474	if (ret)
1497 1475		dev_err(&mem->spi->dev, "operation failed with %d\n", ret);
1476  +
1477  +	if (refcount_read(&cqspi->inflight_ops) > 1)
1478  +		refcount_dec(&cqspi->inflight_ops);
1498 1479
1499 1480	return ret;
1500 1481 }
···
1950 1925		}
1951 1926	}
1952 1927
1928  +	refcount_set(&cqspi->refcount, 1);
1929  +	refcount_set(&cqspi->inflight_ops, 1);
1930  +
1953 1931	ret = devm_request_irq(dev, irq, cqspi_irq_handler, 0,
1954 1932			       pdev->name, cqspi);
1955 1933	if (ret) {
···
2014 1986 static void cqspi_remove(struct platform_device *pdev)
2015 1987 {
2016 1988	struct cqspi_st *cqspi = platform_get_drvdata(pdev);
1989  +
1990  +	refcount_set(&cqspi->refcount, 0);
1991  +
1992  +	if (!refcount_dec_and_test(&cqspi->inflight_ops))
1993  +		cqspi_wait_idle(cqspi);
2017 1994
2018 1995	spi_unregister_controller(cqspi->host);
2019 1996	cqspi_controller_enable(cqspi, 0);
+28 -19
drivers/spi/spi-fsl-lpspi.c
···
3 3 // Freescale i.MX7ULP LPSPI driver
4 4 //
5 5 // Copyright 2016 Freescale Semiconductor, Inc.
6  - // Copyright 2018 NXP Semiconductors
6  + // Copyright 2018, 2023, 2025 NXP
7 7
8  + #include <linux/bitfield.h>
8 9 #include <linux/clk.h>
9 10 #include <linux/completion.h>
10 11 #include <linux/delay.h>
···
71 70 #define DER_TDDE BIT(0)
72 71 #define CFGR1_PCSCFG BIT(27)
73 72 #define CFGR1_PINCFG (BIT(24)|BIT(25))
74  - #define CFGR1_PCSPOL BIT(8)
73  + #define CFGR1_PCSPOL_MASK GENMASK(11, 8)
75 74 #define CFGR1_NOSTALL BIT(3)
76 75 #define CFGR1_HOST BIT(0)
77 76 #define FSR_TXCOUNT (0xFF)
···
83 82 #define TCR_RXMSK BIT(19)
84 83 #define TCR_TXMSK BIT(18)
85 84
85  + #define SR_CLEAR_MASK GENMASK(13, 8)
86  +
86 87 struct fsl_lpspi_devtype_data {
87  -	u8 prescale_max;
88  +	u8 prescale_max : 3; /* 0 == no limit */
89  +	bool query_hw_for_num_cs : 1;
88 90 };
89 91
90 92 struct lpspi_config {
···
133 129 };
134 130
135 131 /*
136  -  * ERR051608 fixed or not:
137  -  * https://www.nxp.com/docs/en/errata/i.MX93_1P87f.pdf
132  +  * Devices with ERR051608 have a max TCR_PRESCALE value of 1, otherwise there is
133  +  * no prescale limit: https://www.nxp.com/docs/en/errata/i.MX93_1P87f.pdf
138 134  */
139  - static struct fsl_lpspi_devtype_data imx93_lpspi_devtype_data = {
135  + static const struct fsl_lpspi_devtype_data imx93_lpspi_devtype_data = {
140 136	.prescale_max = 1,
137  +	.query_hw_for_num_cs = true,
141 138 };
142 139
143  - static struct fsl_lpspi_devtype_data imx7ulp_lpspi_devtype_data = {
144  -	.prescale_max = 7,
140  + static const struct fsl_lpspi_devtype_data imx7ulp_lpspi_devtype_data = {
141  +	/* All defaults */
142  + };
143  +
144  + static const struct fsl_lpspi_devtype_data s32g_lpspi_devtype_data = {
145  +	.query_hw_for_num_cs = true,
145 146 };
146 147
147 148 static const struct of_device_id fsl_lpspi_dt_ids[] = {
148 149	{ .compatible = "fsl,imx7ulp-spi", .data = &imx7ulp_lpspi_devtype_data,},
149 150	{ .compatible = "fsl,imx93-spi", .data = &imx93_lpspi_devtype_data,},
151  +	{ .compatible = "nxp,s32g2-lpspi", .data = &s32g_lpspi_devtype_data,},
150 152	{ /* sentinel */ }
151 153 };
152 154 MODULE_DEVICE_TABLE(of, fsl_lpspi_dt_ids);
···
331 321	int scldiv;
332 322
333 323	perclk_rate = clk_get_rate(fsl_lpspi->clk_per);
334  -	prescale_max = fsl_lpspi->devtype_data->prescale_max;
324  +	prescale_max = fsl_lpspi->devtype_data->prescale_max ?: 7;
335 325
336 326	if (!config.speed_hz) {
337 327		dev_err(fsl_lpspi->dev,
···
433 423	else
434 424		temp = CFGR1_PINCFG;
435 425	if (fsl_lpspi->config.mode & SPI_CS_HIGH)
436  -		temp |= CFGR1_PCSPOL;
426  +		temp |= FIELD_PREP(CFGR1_PCSPOL_MASK,
427  +				   BIT(fsl_lpspi->config.chip_select));
428  +
437 429	writel(temp, fsl_lpspi->base + IMX7ULP_CFGR1);
438 430
439 431	temp = readl(fsl_lpspi->base + IMX7ULP_CR);
···
544 532		fsl_lpspi_intctrl(fsl_lpspi, 0);
545 533	}
546 534
547  -	/* W1C for all flags in SR */
548  -	temp = 0x3F << 8;
549  -	writel(temp, fsl_lpspi->base + IMX7ULP_SR);
550  -
551 535	/* Clear FIFO and disable module */
552 536	temp = CR_RRF | CR_RTF;
553 537	writel(temp, fsl_lpspi->base + IMX7ULP_CR);
538  +
539  +	/* W1C for all flags in SR */
540  +	writel(SR_CLEAR_MASK, fsl_lpspi->base + IMX7ULP_SR);
554 541
555 542	return 0;
556 543 }
···
741 730	fsl_lpspi_write_tx_fifo(fsl_lpspi);
742 731
743 732	ret = fsl_lpspi_wait_for_completion(controller);
744  -	if (ret)
745  -		return ret;
746 733
747 734	fsl_lpspi_reset(fsl_lpspi);
748 735
749  -	return 0;
736  +	return ret;
750 737 }
751 738
752 739 static int fsl_lpspi_transfer_one(struct spi_controller *controller,
···
794 785	if (temp_SR & SR_MBF ||
795 786	    readl(fsl_lpspi->base + IMX7ULP_FSR) & FSR_TXCOUNT) {
796 787		writel(SR_FCF, fsl_lpspi->base + IMX7ULP_SR);
797  -		fsl_lpspi_intctrl(fsl_lpspi, IER_FCIE);
788  +		fsl_lpspi_intctrl(fsl_lpspi, IER_FCIE | (temp_IER & IER_TDIE));
798 789		return IRQ_HANDLED;
799 790	}
···
939 930	fsl_lpspi->rxfifosize = 1 << ((temp >> 8) & 0x0f);
940 931	if (of_property_read_u32((&pdev->dev)->of_node, "num-cs",
941 932				 &num_cs)) {
942  -		if (of_device_is_compatible(pdev->dev.of_node, "fsl,imx93-spi"))
933  +		if (devtype_data->query_hw_for_num_cs)
943 934			num_cs = ((temp >> 16) & 0xf);
944 935		else
945 936			num_cs = 1;
-12
drivers/spi/spi-microchip-core-qspi.c
···
531 531
532 532 static bool mchp_coreqspi_supports_op(struct spi_mem *mem, const struct spi_mem_op *op)
533 533 {
534  -	struct mchp_coreqspi *qspi = spi_controller_get_devdata(mem->spi->controller);
535  -	unsigned long clk_hz;
536  -	u32 baud_rate_val;
537  -
538 534	if (!spi_mem_default_supports_op(mem, op))
539 535		return false;
540 536
···
552 556		if (op->data.dir == SPI_MEM_DATA_OUT)
553 557			return false;
554 558	}
555  -
556  -	clk_hz = clk_get_rate(qspi->clk);
557  -	if (!clk_hz)
558  -		return false;
559  -
560  -	baud_rate_val = DIV_ROUND_UP(clk_hz, 2 * op->max_freq);
561  -	if (baud_rate_val > MAX_DIVIDER || baud_rate_val < MIN_DIVIDER)
562  -		return false;
563 559
564 560	return true;
565 561 }
+4 -2
drivers/spi/spi-qpic-snand.c
···
1615 1615	ret = spi_register_controller(ctlr);
1616 1616	if (ret) {
1617 1617		dev_err(&pdev->dev, "spi_register_controller failed.\n");
1618  -		goto err_spi_init;
1618  +		goto err_register_controller;
1619 1619	}
1620 1620
1621 1621	return 0;
1622 1622
1623  + err_register_controller:
1624  +	nand_ecc_unregister_on_host_hw_engine(&snandc->qspi->ecc_eng);
1623 1625 err_spi_init:
1624 1626	qcom_nandc_unalloc(snandc);
1625 1627 err_snand_alloc:
···
1643 1641	struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
1644 1642
1645 1643	spi_unregister_controller(ctlr);
1646  -
1644  +	nand_ecc_unregister_on_host_hw_engine(&snandc->qspi->ecc_eng);
1647 1645	qcom_nandc_unalloc(snandc);
1648 1646
1649 1647	clk_disable_unprepare(snandc->aon_clk);
+30 -10
fs/btrfs/extent_io.c
···
111 111	 */
112 112	unsigned long submit_bitmap;
113 113	struct readahead_control *ractl;
114  +
115  +	/*
116  +	 * The start offset of the last used extent map by a read operation.
117  +	 *
118  +	 * This is for proper compressed read merge.
119  +	 * U64_MAX means we are starting the read and have made no progress yet.
120  +	 *
121  +	 * The current btrfs_bio_is_contig() only uses disk_bytenr as
122  +	 * the condition to check if the read can be merged with previous
123  +	 * bio, which is not correct. E.g. two file extents pointing to the
124  +	 * same extent but with different offset.
125  +	 *
126  +	 * So here we need to do extra checks to only merge reads that are
127  +	 * covered by the same extent map.
128  +	 * Just extent_map::start will be enough, as they are unique
129  +	 * inside the same inode.
130  +	 */
131  +	u64 last_em_start;
114 132 };
115 133
116 134 static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl)
···
927 909  * return 0 on success, otherwise return error
928 910  */
929 911 static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
930  -			     struct btrfs_bio_ctrl *bio_ctrl, u64 *prev_em_start)
912  +			     struct btrfs_bio_ctrl *bio_ctrl)
931 913 {
932 914	struct inode *inode = folio->mapping->host;
933 915	struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
···
1037 1019		 * non-optimal behavior (submitting 2 bios for the same extent).
1038 1020		 */
1039 1021		if (compress_type != BTRFS_COMPRESS_NONE &&
1040  -		    prev_em_start && *prev_em_start != (u64)-1 &&
1041  -		    *prev_em_start != em->start)
1022  +		    bio_ctrl->last_em_start != U64_MAX &&
1023  +		    bio_ctrl->last_em_start != em->start)
1042 1024			force_bio_submit = true;
1043 1025
1044  -		if (prev_em_start)
1045  -			*prev_em_start = em->start;
1026  +		bio_ctrl->last_em_start = em->start;
1046 1027
1047 1028		btrfs_free_extent_map(em);
1048 1029		em = NULL;
···
1255 1238	const u64 start = folio_pos(folio);
1256 1239	const u64 end = start + folio_size(folio) - 1;
1257 1240	struct extent_state *cached_state = NULL;
1258  -	struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ };
1241  +	struct btrfs_bio_ctrl bio_ctrl = {
1242  +		.opf = REQ_OP_READ,
1243  +		.last_em_start = U64_MAX,
1244  +	};
1259 1245	struct extent_map *em_cached = NULL;
1260 1246	int ret;
1261 1247
1262 1248	lock_extents_for_read(inode, start, end, &cached_state);
1263  -	ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
1249  +	ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl);
1264 1250	btrfs_unlock_extent(&inode->io_tree, start, end, &cached_state);
1265 1251
1266 1252	btrfs_free_extent_map(em_cached);
···
2603 2583 {
2604 2584	struct btrfs_bio_ctrl bio_ctrl = {
2605 2585		.opf = REQ_OP_READ | REQ_RAHEAD,
2606  -		.ractl = rac
2586  +		.ractl = rac,
2587  +		.last_em_start = U64_MAX,
2607 2588	};
2608 2589	struct folio *folio;
2609 2590	struct btrfs_inode *inode = BTRFS_I(rac->mapping->host);
···
2612 2591	const u64 end = start + readahead_length(rac) - 1;
2613 2592	struct extent_state *cached_state = NULL;
2614 2593	struct extent_map *em_cached = NULL;
2615  -	u64 prev_em_start = (u64)-1;
2616 2594
2617 2595	lock_extents_for_read(inode, start, end, &cached_state);
2618 2596
2619 2597	while ((folio = readahead_folio(rac)) != NULL)
2620  -		btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
2598  +		btrfs_do_readpage(folio, &em_cached, &bio_ctrl);
2621 2599
2622 2600	btrfs_unlock_extent(&inode->io_tree, start, end, &cached_state);
2623 2601
+11 -1
fs/btrfs/inode.c
···
5696 5696	bool empty = false;
5697 5697
5698 5698	xa_lock(&root->inodes);
5699  -	entry = __xa_erase(&root->inodes, btrfs_ino(inode));
5699  +	/*
5700  +	 * This btrfs_inode is being freed and has already been unhashed at this
5701  +	 * point. It's possible that another btrfs_inode has already been
5702  +	 * allocated for the same inode and inserted itself into the root, so
5703  +	 * don't delete it in that case.
5704  +	 *
5705  +	 * Note that this shouldn't need to allocate memory, so the gfp flags
5706  +	 * don't really matter.
5707  +	 */
5708  +	entry = __xa_cmpxchg(&root->inodes, btrfs_ino(inode), inode, NULL,
5709  +			     GFP_ATOMIC);
5700 5710	if (entry == inode)
5701 5711		empty = xa_empty(&root->inodes);
5702 5712	xa_unlock(&root->inodes);
+4 -2
fs/btrfs/qgroup.c
···
1455 1455	struct btrfs_qgroup *qgroup;
1456 1456	LIST_HEAD(qgroup_list);
1457 1457	u64 num_bytes = src->excl;
1458  +	u64 num_bytes_cmpr = src->excl_cmpr;
1458 1459	int ret = 0;
1459 1460
1460 1461	qgroup = find_qgroup_rb(fs_info, ref_root);
···
1467 1466		struct btrfs_qgroup_list *glist;
1468 1467
1469 1468		qgroup->rfer += sign * num_bytes;
1470  -		qgroup->rfer_cmpr += sign * num_bytes;
1469  +		qgroup->rfer_cmpr += sign * num_bytes_cmpr;
1471 1470
1472 1471		WARN_ON(sign < 0 && qgroup->excl < num_bytes);
1472  +		WARN_ON(sign < 0 && qgroup->excl_cmpr < num_bytes_cmpr);
1473 1473		qgroup->excl += sign * num_bytes;
1474  -		qgroup->excl_cmpr += sign * num_bytes;
1474  +		qgroup->excl_cmpr += sign * num_bytes_cmpr;
1475 1475
1476 1476		if (sign > 0)
1477 1477			qgroup_rsv_add_by_qgroup(fs_info, qgroup, src);
+6 -3
fs/btrfs/super.c
···
299 299		btrfs_set_opt(ctx->mount_opt, COMPRESS);
300 300		btrfs_clear_opt(ctx->mount_opt, NODATACOW);
301 301		btrfs_clear_opt(ctx->mount_opt, NODATASUM);
302  -	} else if (btrfs_match_compress_type(string, "lzo", false)) {
302  +	} else if (btrfs_match_compress_type(string, "lzo", true)) {
303 303		ctx->compress_type = BTRFS_COMPRESS_LZO;
304  -		ctx->compress_level = 0;
304  +		ctx->compress_level = btrfs_compress_str2level(BTRFS_COMPRESS_LZO,
305  +							       string + 3);
306  +		if (string[3] == ':' && string[4])
307  +			btrfs_warn(NULL, "Compression level ignored for LZO");
305 308		btrfs_set_opt(ctx->mount_opt, COMPRESS);
306 309		btrfs_clear_opt(ctx->mount_opt, NODATACOW);
307 310		btrfs_clear_opt(ctx->mount_opt, NODATASUM);
···
1082 1079		seq_printf(seq, ",compress-force=%s", compress_type);
1083 1080	else
1084 1081		seq_printf(seq, ",compress=%s", compress_type);
1085  -	if (info->compress_level)
1082  +	if (info->compress_level && info->compress_type != BTRFS_COMPRESS_LZO)
1086 1083		seq_printf(seq, ":%d", info->compress_level);
1087 1084	}
1088 1085	if (btrfs_test_opt(info, NOSSD))
+5
fs/btrfs/volumes.c
···
2722 2722		goto error;
2723 2723	}
2724 2724
2725  +	if (bdev_nr_bytes(file_bdev(bdev_file)) <= BTRFS_DEVICE_RANGE_RESERVED) {
2726  +		ret = -EINVAL;
2727  +		goto error;
2728  +	}
2729  +
2725 2730	if (fs_devices->seeding) {
2726 2731		seeding_dev = true;
2727 2732		down_write(&sb->s_umount);
+4
fs/coredump.c
···
1466 1466	ssize_t retval;
1467 1467	char old_core_pattern[CORENAME_MAX_SIZE];
1468 1468
1469  +	if (write)
1470  +		return proc_dostring(table, write, buffer, lenp, ppos);
1471  +
1469 1472	retval = strscpy(old_core_pattern, core_pattern, CORENAME_MAX_SIZE);
1470 1473
1471 1474	error = proc_dostring(table, write, buffer, lenp, ppos);
1472 1475	if (error)
1473 1476		return error;
1477  +
1474 1478	if (!check_coredump_socket()) {
1475 1479		strscpy(core_pattern, old_core_pattern, retval + 1);
1476 1480		return -EINVAL;
+1 -1
fs/exec.c
···
2048 2048 {
2049 2049	int error = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
2050 2050
2051  -	if (!error)
2051  +	if (!error && !write)
2052 2052		validate_coredump_safety();
2053 2053	return error;
2054 2054 }
+8
fs/fhandle.c
···
208 208		return 1;
209 209
210 210	/*
211  +	 * Verify that the decoded dentry itself has a valid id mapping.
212  +	 * In case the decoded dentry is the mountfd root itself, this
213  +	 * verifies that the mountfd inode itself has a valid id mapping.
214  +	 */
215  +	if (!privileged_wrt_inode_uidgid(user_ns, idmap, d_inode(dentry)))
216  +		return 0;
217  +
218  +	/*
211 219	 * It's racy as we're not taking rename_lock but we're able to ignore
212 220	 * permissions and we just need an approximation whether we were able
213 221	 * to follow a path to the file.
+1 -1
fs/fuse/dev.c
···
1893 1893
1894 1894	index = outarg->offset >> PAGE_SHIFT;
1895 1895
1896  -	while (num) {
1896  +	while (num && ap->num_folios < num_pages) {
1897 1897		struct folio *folio;
1898 1898		unsigned int folio_offset;
1899 1899		unsigned int nr_bytes;
+2 -1
fs/fuse/dir.c
···
1199 1199	if (attr->blksize != 0)
1200 1200		blkbits = ilog2(attr->blksize);
1201 1201	else
1202  -		blkbits = inode->i_sb->s_blocksize_bits;
1202  +		blkbits = fc->blkbits;
1203 1203
1204 1204	stat->blksize = 1 << blkbits;
1205 1205 }
···
1377 1377	generic_fillattr(idmap, request_mask, inode, stat);
1378 1378	stat->mode = fi->orig_i_mode;
1379 1379	stat->ino = fi->orig_ino;
1380  +	stat->blksize = 1 << fi->cached_i_blkbits;
1380 1381	if (test_bit(FUSE_I_BTIME, &fi->state)) {
1381 1382		stat->btime = fi->i_btime;
1382 1383		stat->result_mask |= STATX_BTIME;
+4 -1
fs/fuse/file.c
···
2960 2960		.nodeid_out = ff_out->nodeid,
2961 2961		.fh_out = ff_out->fh,
2962 2962		.off_out = pos_out,
2963  -		.len = len,
2963  +		.len = min_t(size_t, len, UINT_MAX & PAGE_MASK),
2964 2964		.flags = flags
2965 2965	};
2966 2966	struct fuse_write_out outarg;
···
3026 3026		fc->no_copy_file_range = 1;
3027 3027		err = -EOPNOTSUPP;
3028 3028	}
3029  +	if (!err && outarg.size > len)
3030  +		err = -EIO;
3031  +
3029 3032	if (err)
3030 3033		goto out;
3031 3034
+14
fs/fuse/fuse_i.h
···
210 210	/** Reference to backing file in passthrough mode */
211 211	struct fuse_backing *fb;
212 212 #endif
213  +
214  +	/*
215  +	 * The underlying inode->i_blkbits value will not be modified,
216  +	 * so preserve the blocksize specified by the server.
217  +	 */
218  +	u8 cached_i_blkbits;
213 219 };
214 220
215 221 /** FUSE inode state bits */
···
975 969		/* Request timeout (in jiffies). 0 = no timeout */
976 970		unsigned int req_timeout;
977 971	} timeout;
972  +
973  +	/*
974  +	 * This is a workaround until fuse uses iomap for reads.
975  +	 * For fuseblk servers, this represents the blocksize passed in at
976  +	 * mount time and for regular fuse servers, this is equivalent to
977  +	 * inode->i_blkbits.
978  +	 */
979  +	u8 blkbits;
978 980 };
979 981
980 982 /*
+16
fs/fuse/inode.c
···
289 289		}
290 290	}
291 291
292  +	if (attr->blksize)
293  +		fi->cached_i_blkbits = ilog2(attr->blksize);
294  +	else
295  +		fi->cached_i_blkbits = fc->blkbits;
296  +
292 297	/*
293 298	 * Don't set the sticky bit in i_mode, unless we want the VFS
294 299	 * to check permissions. This prevents failures due to the
···
1810 1805		err = -EINVAL;
1811 1806		if (!sb_set_blocksize(sb, ctx->blksize))
1812 1807			goto err;
1808  +		/*
1809  +		 * This is a workaround until fuse hooks into iomap for reads.
1810  +		 * Use PAGE_SIZE for the blocksize else if the writeback cache
1811  +		 * is enabled, buffered writes go through iomap and a read may
1812  +		 * overwrite partially written data if blocksize < PAGE_SIZE
1813  +		 */
1814  +		fc->blkbits = sb->s_blocksize_bits;
1815  +		if (ctx->blksize != PAGE_SIZE &&
1816  +		    !sb_set_blocksize(sb, PAGE_SIZE))
1817  +			goto err;
1813 1818 #endif
1814 1819	} else {
1815 1820		sb->s_blocksize = PAGE_SIZE;
1816 1821		sb->s_blocksize_bits = PAGE_SHIFT;
1822  +		fc->blkbits = sb->s_blocksize_bits;
1817 1823	}
1818 1824
1819 1825	sb->s_subtype = ctx->subtype;
+5
fs/fuse/passthrough.c
···
237 237	if (!file)
238 238		goto out;
239 239
240  +	/* read/write/splice/mmap passthrough only relevant for regular files */
241  +	res = d_is_dir(file->f_path.dentry) ? -EISDIR : -EINVAL;
242  +	if (!d_is_reg(file->f_path.dentry))
243  +		goto out_fput;
244  +
240 245	backing_sb = file_inode(file)->i_sb;
241 246	res = -ELOOP;
242 247	if (backing_sb->s_stack_depth >= fc->max_stack_depth)
+1 -1
fs/fuse/virtio_fs.c
···
1016 1016	if (kaddr)
1017 1017		*kaddr = fs->window_kaddr + offset;
1018 1018	if (pfn)
1019  -		*pfn = fs->window_phys_addr + offset;
1019  +		*pfn = PHYS_PFN(fs->window_phys_addr + offset);
1020 1020	return nr_pages > max_nr_pages ? max_nr_pages : nr_pages;
1021 1021 }
1022 1022
+1 -1
fs/namespace.c
···
2455 2455		return ERR_PTR(-EINVAL);
2456 2456	}
2457 2457
2458  -	if (!ns_capable(old_mnt->mnt_ns->user_ns, CAP_SYS_ADMIN))
2458  +	if (!ns_capable(old_mnt->mnt_ns->user_ns, CAP_SYS_ADMIN))
2459 2459		return ERR_PTR(-EPERM);
2460 2460
2461 2461	if (__has_locked_children(old_mnt, path->dentry))
+2
fs/nfs/client.c
···
888 888
889 889	if (fsinfo->xattr_support)
890 890		server->caps |= NFS_CAP_XATTR;
891  +	else
892  +		server->caps &= ~NFS_CAP_XATTR;
891 893 #endif
892 894 }
893 895
+37 -3
fs/nfs/file.c
···
28 28 #include <linux/mm.h>
29 29 #include <linux/pagemap.h>
30 30 #include <linux/gfp.h>
31  + #include <linux/rmap.h>
31 32 #include <linux/swap.h>
32 33 #include <linux/compaction.h>
33 34
···
281 280 }
282 281 EXPORT_SYMBOL_GPL(nfs_file_fsync);
283 282
283  + void nfs_truncate_last_folio(struct address_space *mapping, loff_t from,
284  +			      loff_t to)
285  + {
286  +	struct folio *folio;
287  +
288  +	if (from >= to)
289  +		return;
290  +
291  +	folio = filemap_lock_folio(mapping, from >> PAGE_SHIFT);
292  +	if (IS_ERR(folio))
293  +		return;
294  +
295  +	if (folio_mkclean(folio))
296  +		folio_mark_dirty(folio);
297  +
298  +	if (folio_test_uptodate(folio)) {
299  +		loff_t fpos = folio_pos(folio);
300  +		size_t offset = from - fpos;
301  +		size_t end = folio_size(folio);
302  +
303  +		if (to - fpos < end)
304  +			end = to - fpos;
305  +		folio_zero_segment(folio, offset, end);
306  +		trace_nfs_size_truncate_folio(mapping->host, to);
307  +	}
308  +
309  +	folio_unlock(folio);
310  +	folio_put(folio);
311  + }
312  + EXPORT_SYMBOL_GPL(nfs_truncate_last_folio);
313  +
284 314 /*
285 315  * Decide whether a read/modify/write cycle may be more efficient
286 316  * then a modify/write/read cycle when writing to a page in the
···
388 356
389 357	dfprintk(PAGECACHE, "NFS: write_begin(%pD2(%lu), %u@%lld)\n",
390 358		file, mapping->host->i_ino, len, (long long) pos);
359  +	nfs_truncate_last_folio(mapping, i_size_read(mapping->host), pos);
391 360
392 361	fgp |= fgf_set_order(len);
393 362 start:
···
475 442	dfprintk(PAGECACHE, "NFS: invalidate_folio(%lu, %zu, %zu)\n",
476 443		 folio->index, offset, length);
477 444
478  -	if (offset != 0 || length < folio_size(folio))
479  -		return;
480 445	/* Cancel any unstarted writes on this page */
481  -	nfs_wb_folio_cancel(inode, folio);
446  +	if (offset != 0 || length < folio_size(folio))
447  +		nfs_wb_folio(inode, folio);
448  +	else
449  +		nfs_wb_folio_cancel(inode, folio);
482 450	folio_wait_private_2(folio); /* [DEPRECATED] */
483 451	trace_nfs_invalidate_folio(inode, folio_pos(folio) + offset, length);
484 452 }
+13 -8
fs/nfs/flexfilelayout/flexfilelayout.c
···
293 293				    struct pnfs_layout_segment *l2)
294 294 {
295 295	const struct nfs4_ff_layout_segment *fl1 = FF_LAYOUT_LSEG(l1);
296  -	const struct nfs4_ff_layout_segment *fl2 = FF_LAYOUT_LSEG(l1);
296  +	const struct nfs4_ff_layout_segment *fl2 = FF_LAYOUT_LSEG(l2);
297 297	u32 i;
298 298
299 299	if (fl1->mirror_array_cnt != fl2->mirror_array_cnt)
···
773 773			continue;
774 774
775 775		if (check_device &&
776  -		    nfs4_test_deviceid_unavailable(&mirror->mirror_ds->id_node))
776  +		    nfs4_test_deviceid_unavailable(&mirror->mirror_ds->id_node)) {
777  +			// reinitialize the error state in case if this is the last iteration
778  +			ds = ERR_PTR(-EINVAL);
777 779			continue;
780  +		}
778 781
779 782		*best_idx = idx;
780 783		break;
···
807 804	struct nfs4_pnfs_ds *ds;
808 805
809 806	ds = ff_layout_choose_valid_ds_for_read(lseg, start_idx, best_idx);
810  -	if (ds)
807  +	if (!IS_ERR(ds))
811 808		return ds;
812 809	return ff_layout_choose_any_ds_for_read(lseg, start_idx, best_idx);
813 810 }
···
821 818
822 819	ds = ff_layout_choose_best_ds_for_read(lseg, pgio->pg_mirror_idx,
823 820					       best_idx);
824  -	if (ds || !pgio->pg_mirror_idx)
821  +	if (!IS_ERR(ds) || !pgio->pg_mirror_idx)
825 822		return ds;
826 823	return ff_layout_choose_best_ds_for_read(lseg, 0, best_idx);
827 824 }
···
871 868	req->wb_nio = 0;
872 869
873 870	ds = ff_layout_get_ds_for_read(pgio, &ds_idx);
874  -	if (!ds) {
871  +	if (IS_ERR(ds)) {
875 872		if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
876 873			goto out_mds;
877 874		pnfs_generic_pg_cleanup(pgio);
···
1075 1072 {
1076 1073	u32 idx = hdr->pgio_mirror_idx + 1;
1077 1074	u32 new_idx = 0;
1075  +	struct nfs4_pnfs_ds *ds;
1078 1076
1079  -	if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx, &new_idx))
1080  -		ff_layout_send_layouterror(hdr->lseg);
1081  -	else
1077  +	ds = ff_layout_choose_any_ds_for_read(hdr->lseg, idx, &new_idx);
1078  +	if (IS_ERR(ds))
1082 1079		pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
1080  +	else
1081  +		ff_layout_send_layouterror(hdr->lseg);
1083 1082	pnfs_read_resend_pnfs(hdr, new_idx);
1084 1083
1085 1084
+10 -3
fs/nfs/inode.c
··· 716 716 { 717 717 struct inode *inode = d_inode(dentry); 718 718 struct nfs_fattr *fattr; 719 + loff_t oldsize = i_size_read(inode); 719 720 int error = 0; 720 721 721 722 nfs_inc_stats(inode, NFSIOS_VFSSETATTR); ··· 732 731 if (error) 733 732 return error; 734 733 735 - if (attr->ia_size == i_size_read(inode)) 734 + if (attr->ia_size == oldsize) 736 735 attr->ia_valid &= ~ATTR_SIZE; 737 736 } 738 737 ··· 768 767 trace_nfs_setattr_enter(inode); 769 768 770 769 /* Write all dirty data */ 771 - if (S_ISREG(inode->i_mode)) 770 + if (S_ISREG(inode->i_mode)) { 771 + nfs_file_block_o_direct(NFS_I(inode)); 772 772 nfs_sync_inode(inode); 773 + } 773 774 774 775 fattr = nfs_alloc_fattr_with_label(NFS_SERVER(inode)); 775 776 if (fattr == NULL) { ··· 780 777 } 781 778 782 779 error = NFS_PROTO(inode)->setattr(dentry, fattr, attr); 783 - if (error == 0) 780 + if (error == 0) { 781 + if (attr->ia_valid & ATTR_SIZE) 782 + nfs_truncate_last_folio(inode->i_mapping, oldsize, 783 + attr->ia_size); 784 784 error = nfs_refresh_inode(inode, fattr); 785 + } 785 786 nfs_free_fattr(fattr); 786 787 out: 787 788 trace_nfs_setattr_exit(inode, error);
+12
fs/nfs/internal.h
··· 437 437 int nfs_lock(struct file *, int, struct file_lock *); 438 438 int nfs_flock(struct file *, int, struct file_lock *); 439 439 int nfs_check_flags(int); 440 + void nfs_truncate_last_folio(struct address_space *mapping, loff_t from, 441 + loff_t to); 440 442 441 443 /* inode.c */ 442 444 extern struct workqueue_struct *nfsiod_workqueue; ··· 531 529 { 532 530 return test_bit(NFS_INO_ODIRECT, &nfsi->flags) == 0; 533 531 } 532 + 533 + /* Must be called with exclusively locked inode->i_rwsem */ 534 + static inline void nfs_file_block_o_direct(struct nfs_inode *nfsi) 535 + { 536 + if (test_bit(NFS_INO_ODIRECT, &nfsi->flags)) { 537 + clear_bit(NFS_INO_ODIRECT, &nfsi->flags); 538 + inode_dio_wait(&nfsi->vfs_inode); 539 + } 540 + } 541 + 534 542 535 543 /* namespace.c */ 536 544 #define NFS_PATH_CANONICAL 1
+2 -11
fs/nfs/io.c
··· 14 14 15 15 #include "internal.h" 16 16 17 - /* Call with exclusively locked inode->i_rwsem */ 18 - static void nfs_block_o_direct(struct nfs_inode *nfsi, struct inode *inode) 19 - { 20 - if (test_bit(NFS_INO_ODIRECT, &nfsi->flags)) { 21 - clear_bit(NFS_INO_ODIRECT, &nfsi->flags); 22 - inode_dio_wait(inode); 23 - } 24 - } 25 - 26 17 /** 27 18 * nfs_start_io_read - declare the file is being used for buffered reads 28 19 * @inode: file inode ··· 48 57 err = down_write_killable(&inode->i_rwsem); 49 58 if (err) 50 59 return err; 51 - nfs_block_o_direct(nfsi, inode); 60 + nfs_file_block_o_direct(nfsi); 52 61 downgrade_write(&inode->i_rwsem); 53 62 54 63 return 0; ··· 81 90 82 91 err = down_write_killable(&inode->i_rwsem); 83 92 if (!err) 84 - nfs_block_o_direct(NFS_I(inode), inode); 93 + nfs_file_block_o_direct(NFS_I(inode)); 85 94 return err; 86 95 } 87 96
+11 -10
fs/nfs/localio.c
··· 180 180 return; 181 181 } 182 182 183 - if (nfs_client_is_local(clp)) { 184 - /* If already enabled, disable and re-enable */ 185 - nfs_localio_disable_client(clp); 186 - } 183 + if (nfs_client_is_local(clp)) 184 + return; 187 185 188 186 if (!nfs_uuid_begin(&clp->cl_uuid)) 189 187 return; ··· 242 244 case -ENOMEM: 243 245 case -ENXIO: 244 246 case -ENOENT: 245 - /* Revalidate localio, will disable if unsupported */ 247 + /* Revalidate localio */ 248 + nfs_localio_disable_client(clp); 246 249 nfs_local_probe(clp); 247 250 } 248 251 } ··· 452 453 nfs_local_iter_init(&iter, iocb, READ); 453 454 454 455 status = filp->f_op->read_iter(&iocb->kiocb, &iter); 456 + 457 + revert_creds(save_cred); 458 + 455 459 if (status != -EIOCBQUEUED) { 456 460 nfs_local_read_done(iocb, status); 457 461 nfs_local_pgio_release(iocb); 458 462 } 459 - 460 - revert_creds(save_cred); 461 463 } 462 464 463 465 static int ··· 648 648 file_start_write(filp); 649 649 status = filp->f_op->write_iter(&iocb->kiocb, &iter); 650 650 file_end_write(filp); 651 + 652 + revert_creds(save_cred); 653 + current->flags = old_flags; 654 + 651 655 if (status != -EIOCBQUEUED) { 652 656 nfs_local_write_done(iocb, status); 653 657 nfs_local_vfs_getattr(iocb); 654 658 nfs_local_pgio_release(iocb); 655 659 } 656 - 657 - revert_creds(save_cred); 658 - current->flags = old_flags; 659 660 } 660 661 661 662 static int
+26 -9
fs/nfs/nfs42proc.c
··· 114 114 exception.inode = inode; 115 115 exception.state = lock->open_context->state; 116 116 117 + nfs_file_block_o_direct(NFS_I(inode)); 117 118 err = nfs_sync_inode(inode); 118 119 if (err) 119 120 goto out; ··· 138 137 .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ALLOCATE], 139 138 }; 140 139 struct inode *inode = file_inode(filep); 140 + loff_t oldsize = i_size_read(inode); 141 141 int err; 142 142 143 143 if (!nfs_server_capable(inode, NFS_CAP_ALLOCATE)) ··· 147 145 inode_lock(inode); 148 146 149 147 err = nfs42_proc_fallocate(&msg, filep, offset, len); 150 - if (err == -EOPNOTSUPP) 148 + 149 + if (err == 0) 150 + nfs_truncate_last_folio(inode->i_mapping, oldsize, 151 + offset + len); 152 + else if (err == -EOPNOTSUPP) 151 153 NFS_SERVER(inode)->caps &= ~(NFS_CAP_ALLOCATE | 152 154 NFS_CAP_ZERO_RANGE); 153 155 ··· 189 183 .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ZERO_RANGE], 190 184 }; 191 185 struct inode *inode = file_inode(filep); 186 + loff_t oldsize = i_size_read(inode); 192 187 int err; 193 188 194 189 if (!nfs_server_capable(inode, NFS_CAP_ZERO_RANGE)) ··· 198 191 inode_lock(inode); 199 192 200 193 err = nfs42_proc_fallocate(&msg, filep, offset, len); 201 - if (err == 0) 194 + if (err == 0) { 195 + nfs_truncate_last_folio(inode->i_mapping, oldsize, 196 + offset + len); 202 197 truncate_pagecache_range(inode, offset, (offset + len) -1); 203 - if (err == -EOPNOTSUPP) 198 + } else if (err == -EOPNOTSUPP) 204 199 NFS_SERVER(inode)->caps &= ~NFS_CAP_ZERO_RANGE; 205 200 206 201 inode_unlock(inode); ··· 363 354 364 355 /** 365 356 * nfs42_copy_dest_done - perform inode cache updates after clone/copy offload 366 - * @inode: pointer to destination inode 357 + * @file: pointer to destination file 367 358 * @pos: destination offset 368 359 * @len: copy length 360 + * @oldsize: length of the file prior to clone/copy 369 361 * 370 362 * Punch a hole in the inode page cache, so that the NFS client will 371 363 * know to retrieve new data. 372 364 * Update the file size if necessary, and then mark the inode as having 373 365 * invalid cached values for change attribute, ctime, mtime and space used. 374 366 */ 375 - static void nfs42_copy_dest_done(struct inode *inode, loff_t pos, loff_t len) 367 + static void nfs42_copy_dest_done(struct file *file, loff_t pos, loff_t len, 368 + loff_t oldsize) 376 369 { 370 + struct inode *inode = file_inode(file); 371 + struct address_space *mapping = file->f_mapping; 377 372 loff_t newsize = pos + len; 378 373 loff_t end = newsize - 1; 379 374 380 - WARN_ON_ONCE(invalidate_inode_pages2_range(inode->i_mapping, 381 - pos >> PAGE_SHIFT, end >> PAGE_SHIFT)); 375 + nfs_truncate_last_folio(mapping, oldsize, pos); 376 + WARN_ON_ONCE(invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT, 377 + end >> PAGE_SHIFT)); 382 378 383 379 spin_lock(&inode->i_lock); 384 380 if (newsize > i_size_read(inode)) ··· 416 402 struct nfs_server *src_server = NFS_SERVER(src_inode); 417 403 loff_t pos_src = args->src_pos; 418 404 loff_t pos_dst = args->dst_pos; 405 + loff_t oldsize_dst = i_size_read(dst_inode); 419 406 size_t count = args->count; 420 407 ssize_t status; 421 408 ··· 445 430 return status; 446 431 } 447 432 433 + nfs_file_block_o_direct(NFS_I(dst_inode)); 448 434 status = nfs_sync_inode(dst_inode); 449 435 if (status) 450 436 return status; ··· 491 475 goto out; 492 476 } 493 477 494 - nfs42_copy_dest_done(dst_inode, pos_dst, res->write_res.count); 478 + nfs42_copy_dest_done(dst, pos_dst, res->write_res.count, oldsize_dst); 495 479 nfs_invalidate_atime(src_inode); 496 480 status = res->write_res.count; 497 481 out: ··· 1258 1242 struct nfs42_clone_res res = { 1259 1243 .server = server, 1260 1244 }; 1245 + loff_t oldsize_dst = i_size_read(dst_inode); 1261 1246 int status; 1262 1247 1263 1248 msg->rpc_argp = &args; ··· 1293 1276 /* a zero-length count means clone to EOF in src */ 1294 1277 if (count == 0 && res.dst_fattr->valid & NFS_ATTR_FATTR_SIZE) 1295 1278 count = nfs_size_to_loff_t(res.dst_fattr->size) - dst_offset; 1296 - nfs42_copy_dest_done(dst_inode, dst_offset, count); 1279 + nfs42_copy_dest_done(dst_f, dst_offset, count, oldsize_dst); 1297 1280 status = nfs_post_op_update_inode(dst_inode, res.dst_fattr); 1298 1281
+2
fs/nfs/nfs4file.c
··· 278 278 lock_two_nondirectories(src_inode, dst_inode); 279 279 /* flush all pending writes on both src and dst so that server 280 280 * has the latest data */ 281 + nfs_file_block_o_direct(NFS_I(src_inode)); 281 282 ret = nfs_sync_inode(src_inode); 282 283 if (ret) 283 284 goto out_unlock; 285 + nfs_file_block_o_direct(NFS_I(dst_inode)); 284 286 ret = nfs_sync_inode(dst_inode); 285 287 if (ret) 286 288 goto out_unlock;
+4 -3
fs/nfs/nfs4proc.c
··· 4013 4013 res.attr_bitmask[2]; 4014 4014 } 4015 4015 memcpy(server->attr_bitmask, res.attr_bitmask, sizeof(server->attr_bitmask)); 4016 - server->caps &= ~(NFS_CAP_ACLS | NFS_CAP_HARDLINKS | 4017 - NFS_CAP_SYMLINKS| NFS_CAP_SECURITY_LABEL); 4016 + server->caps &= 4017 + ~(NFS_CAP_ACLS | NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS | 4018 + NFS_CAP_SECURITY_LABEL | NFS_CAP_FS_LOCATIONS | 4019 + NFS_CAP_OPEN_XOR | NFS_CAP_DELEGTIME); 4018 4020 server->fattr_valid = NFS_ATTR_FATTR_V4; 4019 4021 if (res.attr_bitmask[0] & FATTR4_WORD0_ACL && 4020 4022 res.acl_bitmask & ACL4_SUPPORT_ALLOW_ACL) ··· 4094 4092 }; 4095 4093 int err; 4096 4094 4097 - nfs_server_set_init_caps(server); 4098 4095 do { 4099 4096 err = nfs4_handle_exception(server, 4100 4097 _nfs4_server_capabilities(server, fhandle),
+1
fs/nfs/nfstrace.h
··· 272 272 TP_ARGS(inode, new_size)) 273 273 274 274 DEFINE_NFS_UPDATE_SIZE_EVENT(truncate); 275 + DEFINE_NFS_UPDATE_SIZE_EVENT(truncate_folio); 275 276 DEFINE_NFS_UPDATE_SIZE_EVENT(wcc); 276 277 DEFINE_NFS_UPDATE_SIZE_EVENT(update); 277 278 DEFINE_NFS_UPDATE_SIZE_EVENT(grow);
+6 -47
fs/nfs/write.c
··· 237 237 } 238 238 239 239 /* 240 - * nfs_page_group_search_locked 241 - * @head - head request of page group 242 - * @page_offset - offset into page 240 + * nfs_page_covers_folio 241 + * @req: struct nfs_page 243 242 * 244 - * Search page group with head @head to find a request that contains the 245 - * page offset @page_offset. 246 - * 247 - * Returns a pointer to the first matching nfs request, or NULL if no 248 - * match is found. 249 - * 250 - * Must be called with the page group lock held 251 - */ 252 - static struct nfs_page * 253 - nfs_page_group_search_locked(struct nfs_page *head, unsigned int page_offset) 254 - { 255 - struct nfs_page *req; 256 - 257 - req = head; 258 - do { 259 - if (page_offset >= req->wb_pgbase && 260 - page_offset < (req->wb_pgbase + req->wb_bytes)) 261 - return req; 262 - 263 - req = req->wb_this_page; 264 - } while (req != head); 265 - 266 - return NULL; 267 - } 268 - 269 - /* 270 - * nfs_page_group_covers_page 271 - * @head - head request of page group 272 - * 273 - * Return true if the page group with head @head covers the whole page, 274 - * returns false otherwise 243 + * Return true if the request covers the whole folio. 244 + * Note that the caller should ensure all subrequests have been joined 275 245 */ 276 246 static bool nfs_page_group_covers_page(struct nfs_page *req) 277 247 { 278 248 unsigned int len = nfs_folio_length(nfs_page_to_folio(req)); 279 - struct nfs_page *tmp; 280 - unsigned int pos = 0; 281 249 282 - nfs_page_group_lock(req); 283 - 284 - for (;;) { 285 - tmp = nfs_page_group_search_locked(req->wb_head, pos); 286 - if (!tmp) 287 - break; 288 - pos = tmp->wb_pgbase + tmp->wb_bytes; 289 - } 290 - 291 - nfs_page_group_unlock(req); 292 - return pos >= len; 250 + return req->wb_pgbase == 0 && req->wb_bytes == len; 293 251 } 294 252 295 253 /* We can set the PG_uptodate flag if we see that a write request ··· 2003 2045 * release it */ 2004 2046 nfs_inode_remove_request(req); 2005 2047 nfs_unlock_and_release_request(req); 2048 + folio_cancel_dirty(folio); 2006 2049 } 2007 2050 2008 2051 return ret;
+9 -1
fs/ocfs2/extent_map.c
··· 706 706 * it not only handles the fiemap for inlined files, but also deals 707 707 * with the fast symlink, cause they have no difference for extent 708 708 * mapping per se. 709 + * 710 + * Must be called with ip_alloc_sem semaphore held. 709 711 */ 710 712 static int ocfs2_fiemap_inline(struct inode *inode, struct buffer_head *di_bh, 711 713 struct fiemap_extent_info *fieinfo, ··· 719 717 u64 phys; 720 718 u32 flags = FIEMAP_EXTENT_DATA_INLINE|FIEMAP_EXTENT_LAST; 721 719 struct ocfs2_inode_info *oi = OCFS2_I(inode); 720 + lockdep_assert_held_read(&oi->ip_alloc_sem); 722 721 723 722 di = (struct ocfs2_dinode *)di_bh->b_data; 724 723 if (ocfs2_inode_is_fast_symlink(inode)) ··· 735 732 phys += offsetof(struct ocfs2_dinode, 736 733 id2.i_data.id_data); 737 734 735 + /* Release the ip_alloc_sem to prevent deadlock on page fault */ 736 + up_read(&OCFS2_I(inode)->ip_alloc_sem); 738 737 ret = fiemap_fill_next_extent(fieinfo, 0, phys, id_count, 739 738 flags); 739 + down_read(&OCFS2_I(inode)->ip_alloc_sem); 740 740 if (ret < 0) 741 741 return ret; 742 742 } ··· 808 802 len_bytes = (u64)le16_to_cpu(rec.e_leaf_clusters) << osb->s_clustersize_bits; 809 803 phys_bytes = le64_to_cpu(rec.e_blkno) << osb->sb->s_blocksize_bits; 810 804 virt_bytes = (u64)le32_to_cpu(rec.e_cpos) << osb->s_clustersize_bits; 811 - 805 + /* Release the ip_alloc_sem to prevent deadlock on page fault */ 806 + up_read(&OCFS2_I(inode)->ip_alloc_sem); 812 807 ret = fiemap_fill_next_extent(fieinfo, virt_bytes, phys_bytes, 813 808 len_bytes, fe_flags); 809 + down_read(&OCFS2_I(inode)->ip_alloc_sem); 814 810 if (ret) 815 811 break; 816 812
+2 -1
fs/proc/generic.c
··· 393 393 if (proc_alloc_inum(&dp->low_ino)) 394 394 goto out_free_entry; 395 395 396 - pde_set_flags(dp); 396 + if (!S_ISDIR(dp->mode)) 397 + pde_set_flags(dp); 397 398 398 399 write_lock(&proc_subdir_lock); 399 400 dp->parent = dir;
+24 -7
fs/smb/client/cifs_debug.c
··· 304 304 list_for_each(tmp1, &ses->tcon_list) { 305 305 tcon = list_entry(tmp1, struct cifs_tcon, tcon_list); 306 306 cfids = tcon->cfids; 307 + if (!cfids) 308 + continue; 307 309 spin_lock(&cfids->cfid_list_lock); /* check lock ordering */ 308 310 seq_printf(m, "Num entries: %d\n", cfids->num_entries); 309 311 list_for_each_entry(cfid, &cfids->entries, entry) { ··· 321 319 seq_printf(m, "\n"); 322 320 } 323 321 spin_unlock(&cfids->cfid_list_lock); 324 - 325 - 326 322 } 327 323 } 328 324 } ··· 344 344 return "Pattern_V1"; 345 345 default: 346 346 return "invalid"; 347 + } 348 + } 349 + 350 + static __always_inline const char *cipher_alg_str(__le16 cipher) 351 + { 352 + switch (cipher) { 353 + case SMB2_ENCRYPTION_AES128_CCM: 354 + return "AES128-CCM"; 355 + case SMB2_ENCRYPTION_AES128_GCM: 356 + return "AES128-GCM"; 357 + case SMB2_ENCRYPTION_AES256_CCM: 358 + return "AES256-CCM"; 359 + case SMB2_ENCRYPTION_AES256_GCM: 360 + return "AES256-GCM"; 361 + default: 362 + return "UNKNOWN"; 347 363 } 348 364 } 349 365 ··· 555 539 else 556 540 seq_puts(m, "disabled (not supported by this server)"); 557 541 542 + /* Show negotiated encryption cipher, even if not required */ 543 + seq_puts(m, "\nEncryption: "); 544 + if (server->cipher_type) 545 + seq_printf(m, "Negotiated cipher (%s)", cipher_alg_str(server->cipher_type)); 546 + 558 547 seq_printf(m, "\n\n\tSessions: "); 559 548 i = 0; 560 549 list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) { ··· 597 576 598 577 /* dump session id helpful for use with network trace */ 599 578 seq_printf(m, " SessionId: 0x%llx", ses->Suid); 600 - if (ses->session_flags & SMB2_SESSION_FLAG_ENCRYPT_DATA) { 579 + if (ses->session_flags & SMB2_SESSION_FLAG_ENCRYPT_DATA) 601 580 seq_puts(m, " encrypted"); 602 - /* can help in debugging to show encryption type */ 603 - if (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM) 604 - seq_puts(m, "(gcm256)"); 605 - } 606 581 if (ses->sign) 607 582 seq_puts(m, " signed"); 608 583
+3
fs/smb/client/cifs_unicode.c
··· 629 629 int len; 630 630 __le16 *dst; 631 631 632 + if (!src) 633 + return NULL; 634 + 632 635 len = cifs_local_to_utf16_bytes(src, maxlen, cp); 633 636 len += 2; /* NULL */ 634 637 dst = kmalloc(len, GFP_KERNEL);
+1 -1
fs/smb/client/reparse.c
··· 278 278 } 279 279 280 280 /* 281 - * For absolute symlinks it is not possible to determinate 281 + * For absolute symlinks it is not possible to determine 282 282 * if it should point to directory or file. 283 283 */ 284 284 if (symname[0] == '/') {
+2 -2
fs/smb/client/smb1ops.c
··· 1005 1005 rc = -EOPNOTSUPP; 1006 1006 } 1007 1007 1008 - /* Fallback to SMB_COM_SETATTR command when absolutelty needed. */ 1008 + /* Fallback to SMB_COM_SETATTR command when absolutely needed. */ 1009 1009 if (rc == -EOPNOTSUPP) { 1010 1010 cifs_dbg(FYI, "calling SetInformation since SetPathInfo for attrs/times not supported by this server\n"); 1011 1011 rc = SMBSetInformation(xid, tcon, full_path, ··· 1039 1039 cifsFileInfo_put(open_file); 1040 1040 1041 1041 /* 1042 - * Setting the read-only bit is not honered on non-NT servers when done 1042 + * Setting the read-only bit is not honored on non-NT servers when done 1043 1043 * via open-semantics. So for setting it, use SMB_COM_SETATTR command. 1044 1044 * This command works only after the file is closed, so use it only when 1045 1045 * operation was called without the filehandle.
+15 -4
fs/smb/client/smb2misc.c
··· 614 614 struct cifs_tcon *tcon; 615 615 struct cifs_pending_open *open; 616 616 617 + /* Trace receipt of lease break request from server */ 618 + trace_smb3_lease_break_enter(le32_to_cpu(rsp->CurrentLeaseState), 619 + le32_to_cpu(rsp->Flags), 620 + le16_to_cpu(rsp->Epoch), 621 + le32_to_cpu(rsp->hdr.Id.SyncId.TreeId), 622 + le64_to_cpu(rsp->hdr.SessionId), 623 + *((u64 *)rsp->LeaseKey), 624 + *((u64 *)&rsp->LeaseKey[8])); 625 + 617 626 cifs_dbg(FYI, "Checking for lease break\n"); 618 627 619 628 /* If server is a channel, select the primary channel */ ··· 669 660 spin_unlock(&cifs_tcp_ses_lock); 670 661 cifs_dbg(FYI, "Can not process lease break - no lease matched\n"); 671 662 trace_smb3_lease_not_found(le32_to_cpu(rsp->CurrentLeaseState), 672 - le32_to_cpu(rsp->hdr.Id.SyncId.TreeId), 673 - le64_to_cpu(rsp->hdr.SessionId), 674 - *((u64 *)rsp->LeaseKey), 675 - *((u64 *)&rsp->LeaseKey[8])); 663 + le32_to_cpu(rsp->Flags), 664 + le16_to_cpu(rsp->Epoch), 665 + le32_to_cpu(rsp->hdr.Id.SyncId.TreeId), 666 + le64_to_cpu(rsp->hdr.SessionId), 667 + *((u64 *)rsp->LeaseKey), 668 + *((u64 *)&rsp->LeaseKey[8])); 676 669 677 670 return false; 678 671 }
+2 -2
fs/smb/client/smb2pdu.c
··· 6192 6192 please_key_high = (__u64 *)(lease_key+8); 6193 6193 if (rc) { 6194 6194 cifs_stats_fail_inc(tcon, SMB2_OPLOCK_BREAK_HE); 6195 - trace_smb3_lease_err(le32_to_cpu(lease_state), tcon->tid, 6195 + trace_smb3_lease_ack_err(le32_to_cpu(lease_state), tcon->tid, 6196 6196 ses->Suid, *please_key_low, *please_key_high, rc); 6197 6197 cifs_dbg(FYI, "Send error in Lease Break = %d\n", rc); 6198 6198 } else 6199 - trace_smb3_lease_done(le32_to_cpu(lease_state), tcon->tid, 6199 + trace_smb3_lease_ack_done(le32_to_cpu(lease_state), tcon->tid, 6200 6200 ses->Suid, *please_key_low, *please_key_high); 6201 6201 6202 6202 return rc;
+49 -3
fs/smb/client/trace.h
··· 1171 1171 __u64 lease_key_high), \ 1172 1172 TP_ARGS(lease_state, tid, sesid, lease_key_low, lease_key_high)) 1173 1173 1174 - DEFINE_SMB3_LEASE_DONE_EVENT(lease_done); 1175 - DEFINE_SMB3_LEASE_DONE_EVENT(lease_not_found); 1174 + DEFINE_SMB3_LEASE_DONE_EVENT(lease_ack_done); 1175 + /* Tracepoint when a lease break request is received/entered (includes epoch and flags) */ 1176 + DECLARE_EVENT_CLASS(smb3_lease_enter_class, 1177 + TP_PROTO(__u32 lease_state, 1178 + __u32 flags, 1179 + __u16 epoch, 1180 + __u32 tid, 1181 + __u64 sesid, 1182 + __u64 lease_key_low, 1183 + __u64 lease_key_high), 1184 + TP_ARGS(lease_state, flags, epoch, tid, sesid, lease_key_low, lease_key_high), 1185 + TP_STRUCT__entry( 1186 + __field(__u32, lease_state) 1187 + __field(__u32, flags) 1188 + __field(__u16, epoch) 1189 + __field(__u32, tid) 1190 + __field(__u64, sesid) 1191 + __field(__u64, lease_key_low) 1192 + __field(__u64, lease_key_high) 1193 + ), 1194 + TP_fast_assign( 1195 + __entry->lease_state = lease_state; 1196 + __entry->flags = flags; 1197 + __entry->epoch = epoch; 1198 + __entry->tid = tid; 1199 + __entry->sesid = sesid; 1200 + __entry->lease_key_low = lease_key_low; 1201 + __entry->lease_key_high = lease_key_high; 1202 + ), 1203 + TP_printk("sid=0x%llx tid=0x%x lease_key=0x%llx%llx lease_state=0x%x flags=0x%x epoch=%u", 1204 + __entry->sesid, __entry->tid, __entry->lease_key_high, 1205 + __entry->lease_key_low, __entry->lease_state, __entry->flags, __entry->epoch) 1206 + ) 1207 + 1208 + #define DEFINE_SMB3_LEASE_ENTER_EVENT(name) \ 1209 + DEFINE_EVENT(smb3_lease_enter_class, smb3_##name, \ 1210 + TP_PROTO(__u32 lease_state, \ 1211 + __u32 flags, \ 1212 + __u16 epoch, \ 1213 + __u32 tid, \ 1214 + __u64 sesid, \ 1215 + __u64 lease_key_low, \ 1216 + __u64 lease_key_high), \ 1217 + TP_ARGS(lease_state, flags, epoch, tid, sesid, lease_key_low, lease_key_high)) 1218 + 1219 + DEFINE_SMB3_LEASE_ENTER_EVENT(lease_break_enter); 1220 + /* Lease not found: reuse lease_enter payload (includes epoch and flags) */ 1221 + DEFINE_SMB3_LEASE_ENTER_EVENT(lease_not_found); 1176 1222 1177 1223 DECLARE_EVENT_CLASS(smb3_lease_err_class, 1178 1224 TP_PROTO(__u32 lease_state, ··· 1259 1213 int rc), \ 1260 1214 TP_ARGS(lease_state, tid, sesid, lease_key_low, lease_key_high, rc)) 1261 1215 1262 - DEFINE_SMB3_LEASE_ERR_EVENT(lease_err); 1216 + DEFINE_SMB3_LEASE_ERR_EVENT(lease_ack_err); 1263 1217 1264 1218 DECLARE_EVENT_CLASS(smb3_connect_class, 1265 1219 TP_PROTO(char *hostname,
+25 -6
include/linux/compiler-clang.h
··· 18 18 #define KASAN_ABI_VERSION 5 19 19 20 20 /* 21 + * Clang 22 added preprocessor macros to match GCC, in hopes of eventually 22 + * dropping __has_feature support for sanitizers: 23 + * https://github.com/llvm/llvm-project/commit/568c23bbd3303518c5056d7f03444dae4fdc8a9c 24 + * Create these macros for older versions of clang so that it is easy to clean 25 + * up once the minimum supported version of LLVM for building the kernel always 26 + * creates these macros. 27 + * 21 28 * Note: Checking __has_feature(*_sanitizer) is only true if the feature is 22 29 * enabled. Therefore it is not required to additionally check defined(CONFIG_*) 23 30 * to avoid adding redundant attributes in other configurations. 24 31 */ 25 - 26 - #if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer) 27 - /* Emulate GCC's __SANITIZE_ADDRESS__ flag */ 32 + #if __has_feature(address_sanitizer) && !defined(__SANITIZE_ADDRESS__) 28 33 #define __SANITIZE_ADDRESS__ 34 + #endif 35 + #if __has_feature(hwaddress_sanitizer) && !defined(__SANITIZE_HWADDRESS__) 36 + #define __SANITIZE_HWADDRESS__ 37 + #endif 38 + #if __has_feature(thread_sanitizer) && !defined(__SANITIZE_THREAD__) 39 + #define __SANITIZE_THREAD__ 40 + #endif 41 + 42 + /* 43 + * Treat __SANITIZE_HWADDRESS__ the same as __SANITIZE_ADDRESS__ in the kernel. 44 + */ 45 + #ifdef __SANITIZE_HWADDRESS__ 46 + #define __SANITIZE_ADDRESS__ 47 + #endif 48 + 49 + #ifdef __SANITIZE_ADDRESS__ 29 50 #define __no_sanitize_address \ 30 51 __attribute__((no_sanitize("address", "hwaddress"))) 31 52 #else 32 53 #define __no_sanitize_address 33 54 #endif 34 55 35 - #if __has_feature(thread_sanitizer) 36 - /* emulate gcc's __SANITIZE_THREAD__ flag */ 37 - #define __SANITIZE_THREAD__ 56 + #ifdef __SANITIZE_THREAD__ 38 57 #define __no_sanitize_thread \ 39 58 __attribute__((no_sanitize("thread"))) 40 59 #else
+1
include/linux/cpu.h
··· 83 83 extern ssize_t cpu_show_indirect_target_selection(struct device *dev, 84 84 struct device_attribute *attr, char *buf); 85 85 extern ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf); 86 + extern ssize_t cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf); 86 87 87 88 extern __printf(4, 5) 88 89 struct device *cpu_device_create(struct device *parent, void *drvdata,
+10
include/linux/energy_model.h
··· 171 171 int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, 172 172 const struct em_data_callback *cb, 173 173 const cpumask_t *cpus, bool microwatts); 174 + int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states, 175 + const struct em_data_callback *cb, 176 + const cpumask_t *cpus, bool microwatts); 174 177 void em_dev_unregister_perf_domain(struct device *dev); 175 178 struct em_perf_table *em_table_alloc(struct em_perf_domain *pd); 176 179 void em_table_free(struct em_perf_table *table); ··· 350 347 int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, 351 348 const struct em_data_callback *cb, 352 349 const cpumask_t *cpus, bool microwatts) 350 + { 351 + return -EINVAL; 352 + } 353 + static inline 354 + int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states, 355 + const struct em_data_callback *cb, 356 + const cpumask_t *cpus, bool microwatts) 353 357 { 354 358 return -EINVAL; 355 359 }
+2 -2
include/linux/ethtool.h
··· 856 856 enum hwtstamp_provider_qualifier phc_qualifier; 857 857 enum hwtstamp_source phc_source; 858 858 int phc_phyindex; 859 - enum hwtstamp_tx_types tx_types; 860 - enum hwtstamp_rx_filters rx_filters; 859 + u32 tx_types; 860 + u32 rx_filters; 861 861 }; 862 862 863 863 /**
+2 -1
include/linux/fs.h
··· 149 149 /* Expect random access pattern */ 150 150 #define FMODE_RANDOM ((__force fmode_t)(1 << 12)) 151 151 152 - /* FMODE_* bit 13 */ 152 + /* Supports IOCB_HAS_METADATA */ 153 + #define FMODE_HAS_METADATA ((__force fmode_t)(1 << 13)) 153 154 154 155 /* File is opened with O_PATH; almost nothing can be done with it */ 155 156 #define FMODE_PATH ((__force fmode_t)(1 << 14))
+3 -3
include/linux/kasan.h
··· 562 562 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) 563 563 564 564 void kasan_populate_early_vm_area_shadow(void *start, unsigned long size); 565 - int kasan_populate_vmalloc(unsigned long addr, unsigned long size); 565 + int kasan_populate_vmalloc(unsigned long addr, unsigned long size, gfp_t gfp_mask); 566 566 void kasan_release_vmalloc(unsigned long start, unsigned long end, 567 567 unsigned long free_region_start, 568 568 unsigned long free_region_end, ··· 574 574 unsigned long size) 575 575 { } 576 576 static inline int kasan_populate_vmalloc(unsigned long start, 577 - unsigned long size) 577 + unsigned long size, gfp_t gfp_mask) 578 578 { 579 579 return 0; 580 580 } ··· 610 610 static inline void kasan_populate_early_vm_area_shadow(void *start, 611 611 unsigned long size) { } 612 612 static inline int kasan_populate_vmalloc(unsigned long start, 613 - unsigned long size) 613 + unsigned long size, gfp_t gfp_mask) 614 614 { 615 615 return 0; 616 616 }
+8 -1
include/linux/timekeeper_internal.h
··· 76 76 * @cs_was_changed_seq: The sequence number of clocksource change events 77 77 * @clock_valid: Indicator for valid clock 78 78 * @monotonic_to_boot: CLOCK_MONOTONIC to CLOCK_BOOTTIME offset 79 + * @monotonic_to_aux: CLOCK_MONOTONIC to CLOCK_AUX offset 79 80 * @cycle_interval: Number of clock cycles in one NTP interval 80 81 * @xtime_interval: Number of clock shifted nano seconds in one NTP 81 82 * interval. ··· 117 116 * 118 117 * @offs_aux is used by the auxiliary timekeepers which do not utilize any 119 118 * of the regular timekeeper offset fields. 119 + * 120 + * @monotonic_to_aux is a timespec64 representation of @offs_aux to 121 + * accelerate the VDSO update for CLOCK_AUX. 120 122 * 121 123 * The cacheline ordering of the structure is optimized for in kernel usage of 122 124 * the ktime_get() and ktime_get_ts64() family of time accessors. Struct ··· 163 159 u8 cs_was_changed_seq; 164 160 u8 clock_valid; 165 161 166 - struct timespec64 monotonic_to_boot; 162 + union { 163 + struct timespec64 monotonic_to_boot; 164 + struct timespec64 monotonic_to_aux; 165 + }; 167 166 168 167 u64 cycle_interval; 169 168 u64 xtime_interval;
-1
include/net/netfilter/nf_tables.h
··· 1914 1914 struct mutex commit_mutex; 1915 1915 u64 table_handle; 1916 1916 u64 tstamp; 1917 - unsigned int base_seq; 1918 1917 unsigned int gc_seq; 1919 1918 u8 validate_state; 1920 1919 struct work_struct destroy_work;
+2 -8
include/net/netfilter/nf_tables_core.h
··· 109 109 const struct nft_set_ext * 110 110 nft_hash_lookup(const struct net *net, const struct nft_set *set, 111 111 const u32 *key); 112 + #endif 113 + 112 114 const struct nft_set_ext * 113 115 nft_set_do_lookup(const struct net *net, const struct nft_set *set, 114 116 const u32 *key); 115 - #else 116 - static inline const struct nft_set_ext * 117 - nft_set_do_lookup(const struct net *net, const struct nft_set *set, 118 - const u32 *key) 119 - { 120 - return set->ops->lookup(net, set, key); 121 - } 122 - #endif 123 117 124 118 /* called from nft_pipapo_avx2.c */ 125 119 const struct nft_set_ext *
+1
include/net/netns/nftables.h
··· 3 3 #define _NETNS_NFTABLES_H_ 4 4 5 5 struct netns_nftables { 6 + unsigned int base_seq; 6 7 u8 gencursor; 7 8 }; 8 9
+1 -7
include/pcmcia/ss.h
··· 227 227 228 228 229 229 /* socket drivers must define the resource operations type they use. There 230 - * are three options: 230 + * are two options: 231 231 * - pccard_static_ops iomem and ioport areas are assigned statically 232 - * - pccard_iodyn_ops iomem areas is assigned statically, ioport 233 - * areas dynamically 234 - * If this option is selected, use 235 - * "select PCCARD_IODYN" in Kconfig. 236 232 * - pccard_nonstatic_ops iomem and ioport areas are assigned dynamically. 237 233 * If this option is selected, use 238 234 * "select PCCARD_NONSTATIC" in Kconfig. ··· 236 240 */ 237 241 extern struct pccard_resource_ops pccard_static_ops; 238 242 #if defined(CONFIG_PCMCIA) || defined(CONFIG_PCMCIA_MODULE) 239 - extern struct pccard_resource_ops pccard_iodyn_ops; 240 243 extern struct pccard_resource_ops pccard_nonstatic_ops; 241 244 #else 242 245 /* If PCMCIA is not used, but only CARDBUS, these functions are not used 243 246 * at all. Therefore, do not use the large (240K!) rsrc_nonstatic module 244 247 */ 245 - #define pccard_iodyn_ops pccard_static_ops 246 248 #define pccard_nonstatic_ops pccard_static_ops 247 249 #endif 248 250
+3
init/Kconfig
··· 146 146 config RUSTC_HAS_FILE_WITH_NUL 147 147 def_bool RUSTC_VERSION >= 108900 148 148 149 + config RUSTC_HAS_FILE_AS_C_STR 150 + def_bool RUSTC_VERSION >= 109100 151 + 149 152 config PAHOLE_VERSION 150 153 int 151 154 default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+1 -1
init/main.c
··· 956 956 sort_main_extable(); 957 957 trap_init(); 958 958 mm_core_init(); 959 + maple_tree_init(); 959 960 poking_init(); 960 961 ftrace_init(); 961 962 ··· 974 973 "Interrupts were enabled *very* early, fixing it\n")) 975 974 local_irq_disable(); 976 975 radix_tree_init(); 977 - maple_tree_init(); 978 976 979 977 /* 980 978 * Set up housekeeping before setting up workqueues to allow the unbound
+3
io_uring/rw.c
··· 886 886 if (req->flags & REQ_F_HAS_METADATA) { 887 887 struct io_async_rw *io = req->async_data; 888 888 889 + if (!(file->f_mode & FMODE_HAS_METADATA)) 890 + return -EINVAL; 891 + 889 892 /* 890 893 * We have a union of meta fields with wpq used for buffered-io 891 894 * in io_async_rw, so fail it here.
+1 -1
kernel/auditfilter.c
··· 1326 1326 1327 1327 /* handle trailing slashes */ 1328 1328 pathlen -= parentlen; 1329 - while (p[pathlen - 1] == '/') 1329 + while (pathlen > 0 && p[pathlen - 1] == '/') 1330 1330 pathlen--; 1331 1331 1332 1332 if (pathlen != dlen)
+1
kernel/bpf/Makefile
··· 62 62 CFLAGS_REMOVE_queue_stack_maps.o = $(CC_FLAGS_FTRACE) 63 63 CFLAGS_REMOVE_lpm_trie.o = $(CC_FLAGS_FTRACE) 64 64 CFLAGS_REMOVE_ringbuf.o = $(CC_FLAGS_FTRACE) 65 + CFLAGS_REMOVE_rqspinlock.o = $(CC_FLAGS_FTRACE)
+13 -8
kernel/bpf/core.c
··· 2366 2366 const struct bpf_insn *insn) 2367 2367 { 2368 2368 /* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON 2369 - * is not working properly, or interpreter is being used when 2370 - * prog->jit_requested is not 0, so warn about it! 2369 + * is not working properly, so warn about it! 2371 2370 */ 2372 2371 WARN_ON_ONCE(1); 2373 2372 return 0; ··· 2467 2468 return ret; 2468 2469 } 2469 2470 2470 - static void bpf_prog_select_func(struct bpf_prog *fp) 2471 + static bool bpf_prog_select_interpreter(struct bpf_prog *fp) 2471 2472 { 2473 + bool select_interpreter = false; 2472 2474 #ifndef CONFIG_BPF_JIT_ALWAYS_ON 2473 2475 u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1); 2474 2476 u32 idx = (round_up(stack_depth, 32) / 32) - 1; ··· 2478 2478 * But for non-JITed programs, we don't need bpf_func, so no bounds 2479 2479 * check needed. 2480 2480 */ 2481 - if (!fp->jit_requested && 2482 - !WARN_ON_ONCE(idx >= ARRAY_SIZE(interpreters))) { 2481 + if (idx < ARRAY_SIZE(interpreters)) { 2483 2482 fp->bpf_func = interpreters[idx]; 2483 + select_interpreter = true; 2484 2484 } else { 2485 2485 fp->bpf_func = __bpf_prog_ret0_warn; 2486 2486 } 2487 2487 #else 2488 2488 fp->bpf_func = __bpf_prog_ret0_warn; 2489 2489 #endif 2490 + return select_interpreter; 2490 2491 } 2491 2492 2492 2493 /** ··· 2506 2505 /* In case of BPF to BPF calls, verifier did all the prep 2507 2506 * work with regards to JITing, etc. 2508 2507 */ 2509 - bool jit_needed = fp->jit_requested; 2508 + bool jit_needed = false; 2510 2509 2511 2510 if (fp->bpf_func) 2512 2511 goto finalize; ··· 2515 2514 bpf_prog_has_kfunc_call(fp)) 2516 2515 jit_needed = true; 2517 2516 2518 - bpf_prog_select_func(fp); 2517 + if (!bpf_prog_select_interpreter(fp)) 2518 + jit_needed = true; 2519 2519 2520 2520 /* eBPF JITs can rewrite the program in case constant 2521 2521 * blinding is active. However, in case of error during ··· 3026 3024 3027 3025 /* Always built-in helper functions. */ 3028 3026 const struct bpf_func_proto bpf_tail_call_proto = { 3029 - .func = NULL, 3027 + /* func is unused for tail_call, we set it to pass the 3028 + * get_helper_proto check 3029 + */ 3030 + .func = BPF_PTR_POISON, 3030 3031 .gpl_only = false, 3031 3032 .ret_type = RET_VOID, 3032 3033 .arg1_type = ARG_PTR_TO_CTX,
+2 -2
kernel/bpf/cpumap.c
··· 186 186 struct xdp_buff xdp; 187 187 int i, nframes = 0; 188 188 189 - xdp_set_return_frame_no_direct(); 190 189 xdp.rxq = &rxq; 191 190 192 191 for (i = 0; i < n; i++) { ··· 230 231 } 231 232 } 232 233 233 - xdp_clear_return_frame_no_direct(); 234 234 stats->pass += nframes; 235 235 236 236 return nframes; ··· 253 255 254 256 rcu_read_lock(); 255 257 bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); 258 + xdp_set_return_frame_no_direct(); 256 259 257 260 ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats); 258 261 if (unlikely(ret->skb_n)) ··· 263 264 if (stats->redirect) 264 265 xdp_do_flush(); 265 266 267 + xdp_clear_return_frame_no_direct(); 266 268 bpf_net_ctx_clear(bpf_net_ctx); 267 269 rcu_read_unlock(); 268 270
+1 -1
kernel/bpf/crypto.c
··· 278 278 siv_len = siv ? __bpf_dynptr_size(siv) : 0; 279 279 src_len = __bpf_dynptr_size(src); 280 280 dst_len = __bpf_dynptr_size(dst); 281 - if (!src_len || !dst_len) 281 + if (!src_len || !dst_len || src_len > dst_len) 282 282 return -EINVAL; 283 283 284 284 if (siv_len != ctx->siv_len)
+13 -3
kernel/bpf/helpers.c
··· 1274 1274 goto out; 1275 1275 } 1276 1276 1277 - /* allocate hrtimer via map_kmalloc to use memcg accounting */ 1278 - cb = bpf_map_kmalloc_node(map, size, GFP_ATOMIC, map->numa_node); 1277 + /* Allocate via bpf_map_kmalloc_node() for memcg accounting. Until 1278 + * kmalloc_nolock() is available, avoid locking issues by using 1279 + * __GFP_HIGH (GFP_ATOMIC & ~__GFP_RECLAIM). 1280 + */ 1281 + cb = bpf_map_kmalloc_node(map, size, __GFP_HIGH, map->numa_node); 1279 1282 if (!cb) { 1280 1283 ret = -ENOMEM; 1281 1284 goto out; ··· 3678 3675 3679 3676 guard(pagefault)(); 3680 3677 for (i = 0; i < XATTR_SIZE_MAX; i++) { 3681 - for (j = 0; i + j < len && j < XATTR_SIZE_MAX; j++) { 3678 + for (j = 0; i + j <= len && j < XATTR_SIZE_MAX; j++) { 3682 3679 __get_kernel_nofault(&c2, s2__ign + j, char, err_out); 3683 3680 if (c2 == '\0') 3684 3681 return i; 3682 + /* 3683 + * We allow reading an extra byte from s2 (note the 3684 + * `i + j <= len` above) to cover the case when s2 is 3685 + * a suffix of the first len chars of s1. 3686 + */ 3687 + if (i + j == len) 3688 + break; 3685 3689 __get_kernel_nofault(&c1, s1__ign + j, char, err_out); 3686 3690 if (c1 == '\0') 3687 3691 return -ENOENT;
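The `i + j <= len` change above allows `s2` to match as a suffix of the first `len` bytes of `s1`. A plain-C stand-in for that loop, without the `__get_kernel_nofault` accessors (the helper name is illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Return the first offset within the first len bytes of s1 where the
 * NUL-terminated needle s2 occurs, permitting a match that ends
 * exactly at the len boundary (the suffix case), or -1 if absent. */
static long find_in_prefix(const char *s1, size_t len, const char *s2)
{
	for (size_t i = 0; i <= len; i++) {
		for (size_t j = 0; i + j <= len; j++) {
			if (s2[j] == '\0')
				return (long)i;	/* needle exhausted: match */
			if (i + j == len)
				break;		/* hit the boundary: no byte left to compare */
			if (s1[i + j] != s2[j])
				break;
		}
	}
	return -1;
}
```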
+1 -1
kernel/bpf/rqspinlock.c
··· 471 471 * any MCS node. This is not the most elegant solution, but is 472 472 * simple enough. 473 473 */ 474 - if (unlikely(idx >= _Q_MAX_NODES)) { 474 + if (unlikely(idx >= _Q_MAX_NODES || in_nmi())) { 475 475 lockevent_inc(lock_no_node); 476 476 RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT); 477 477 while (!queued_spin_trylock(lock)) {
+5 -1
kernel/bpf/verifier.c
··· 8552 8552 verifier_bug(env, "Two map pointers in a timer helper"); 8553 8553 return -EFAULT; 8554 8554 } 8555 + if (IS_ENABLED(CONFIG_PREEMPT_RT)) { 8556 + verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n"); 8557 + return -EOPNOTSUPP; 8558 + } 8555 8559 meta->map_uid = reg->map_uid; 8556 8560 meta->map_ptr = map; 8557 8561 return 0; ··· 11363 11359 return -EINVAL; 11364 11360 11365 11361 *ptr = env->ops->get_func_proto(func_id, env->prog); 11366 - return *ptr ? 0 : -EINVAL; 11362 + return *ptr && (*ptr)->func ? 0 : -EINVAL; 11367 11363 } 11368 11364 11369 11365 static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+47 -1
kernel/dma/debug.c
··· 39 39 dma_debug_sg, 40 40 dma_debug_coherent, 41 41 dma_debug_resource, 42 + dma_debug_noncoherent, 42 43 }; 43 44 44 45 enum map_err_types { ··· 142 141 [dma_debug_sg] = "scatter-gather", 143 142 [dma_debug_coherent] = "coherent", 144 143 [dma_debug_resource] = "resource", 144 + [dma_debug_noncoherent] = "noncoherent", 145 145 }; 146 146 147 147 static const char *dir2name[] = { ··· 995 993 "[mapped as %s] [unmapped as %s]\n", 996 994 ref->dev_addr, ref->size, 997 995 type2name[entry->type], type2name[ref->type]); 998 - } else if (entry->type == dma_debug_coherent && 996 + } else if ((entry->type == dma_debug_coherent || 997 + entry->type == dma_debug_noncoherent) && 999 998 ref->paddr != entry->paddr) { 1000 999 err_printk(ref->dev, entry, "device driver frees " 1001 1000 "DMA memory with different CPU address " ··· 1582 1579 1583 1580 check_sync(dev, &ref, false); 1584 1581 } 1582 } 1583 + 1584 + void debug_dma_alloc_pages(struct device *dev, struct page *page, 1585 + size_t size, int direction, 1586 + dma_addr_t dma_addr, 1587 + unsigned long attrs) 1588 + { 1589 + struct dma_debug_entry *entry; 1590 + 1591 + if (unlikely(dma_debug_disabled())) 1592 + return; 1593 + 1594 + entry = dma_entry_alloc(); 1595 + if (!entry) 1596 + return; 1597 + 1598 + entry->type = dma_debug_noncoherent; 1599 + entry->dev = dev; 1600 + entry->paddr = page_to_phys(page); 1601 + entry->size = size; 1602 + entry->dev_addr = dma_addr; 1603 + entry->direction = direction; 1604 + 1605 + add_dma_entry(entry, attrs); 1606 + } 1607 + 1608 + void debug_dma_free_pages(struct device *dev, struct page *page, 1609 + size_t size, int direction, 1610 + dma_addr_t dma_addr) 1611 + { 1612 + struct dma_debug_entry ref = { 1613 + .type = dma_debug_noncoherent, 1614 + .dev = dev, 1615 + .paddr = page_to_phys(page), 1616 + .dev_addr = dma_addr, 1617 + .size = size, 1618 + .direction = direction, 1619 + }; 1620 + 1621 + if (unlikely(dma_debug_disabled())) 1622 + return; 1623 + 1624 + check_unmap(&ref); 1585 1625 } 1586 1626 1587 1627 static int __init dma_debug_driver_setup(char *str)
+20
kernel/dma/debug.h
··· 54 54 extern void debug_dma_sync_sg_for_device(struct device *dev, 55 55 struct scatterlist *sg, 56 56 int nelems, int direction); 57 + extern void debug_dma_alloc_pages(struct device *dev, struct page *page, 58 + size_t size, int direction, 59 + dma_addr_t dma_addr, 60 + unsigned long attrs); 61 + extern void debug_dma_free_pages(struct device *dev, struct page *page, 62 + size_t size, int direction, 63 + dma_addr_t dma_addr); 57 64 #else /* CONFIG_DMA_API_DEBUG */ 58 65 static inline void debug_dma_map_page(struct device *dev, struct page *page, 59 66 size_t offset, size_t size, ··· 131 124 static inline void debug_dma_sync_sg_for_device(struct device *dev, 132 125 struct scatterlist *sg, 133 126 int nelems, int direction) 127 + { 128 + } 129 + 130 + static inline void debug_dma_alloc_pages(struct device *dev, struct page *page, 131 + size_t size, int direction, 132 + dma_addr_t dma_addr, 133 + unsigned long attrs) 134 + { 135 + } 136 + 137 + static inline void debug_dma_free_pages(struct device *dev, struct page *page, 138 + size_t size, int direction, 139 + dma_addr_t dma_addr) 134 140 { 135 141 } 136 142 #endif /* CONFIG_DMA_API_DEBUG */
+2 -2
kernel/dma/mapping.c
··· 712 712 if (page) { 713 713 trace_dma_alloc_pages(dev, page_to_virt(page), *dma_handle, 714 714 size, dir, gfp, 0); 715 - debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0); 715 + debug_dma_alloc_pages(dev, page, size, dir, *dma_handle, 0); 716 716 } else { 717 717 trace_dma_alloc_pages(dev, NULL, 0, size, dir, gfp, 0); 718 718 } ··· 738 738 dma_addr_t dma_handle, enum dma_data_direction dir) 739 739 { 740 740 trace_dma_free_pages(dev, page_to_virt(page), dma_handle, size, dir, 0); 741 - debug_dma_unmap_page(dev, dma_handle, size, dir); 741 + debug_dma_free_pages(dev, page, size, dir, dma_handle); 742 742 __dma_free_pages(dev, size, page, dma_handle, dir); 743 743 } 744 744 EXPORT_SYMBOL_GPL(dma_free_pages);
+1
kernel/events/core.c
··· 10330 10330 ret = 1; 10331 10331 event->pending_kill = POLL_HUP; 10332 10332 perf_event_disable_inatomic(event); 10333 + event->pmu->stop(event, 0); 10333 10334 } 10334 10335 10335 10336 if (event->attr.sigtrap) {
+1 -1
kernel/fork.c
··· 689 689 mm_pasid_drop(mm); 690 690 mm_destroy_cid(mm); 691 691 percpu_counter_destroy_many(mm->rss_stat, NR_MM_COUNTERS); 692 - futex_hash_free(mm); 693 692 694 693 free_mm(mm); 695 694 } ··· 1137 1138 if (mm->binfmt) 1138 1139 module_put(mm->binfmt->module); 1139 1140 lru_gen_del_mm(mm); 1141 + futex_hash_free(mm); 1140 1142 mmdrop(mm); 1141 1143 } 1142 1144
+12 -4
kernel/futex/core.c
··· 1722 1722 RCU_INIT_POINTER(mm->futex_phash, NULL); 1723 1723 mm->futex_phash_new = NULL; 1724 1724 /* futex-ref */ 1725 + mm->futex_ref = NULL; 1725 1726 atomic_long_set(&mm->futex_atomic, 0); 1726 1727 mm->futex_batches = get_state_synchronize_rcu(); 1727 - mm->futex_ref = alloc_percpu(unsigned int); 1728 - if (!mm->futex_ref) 1729 - return -ENOMEM; 1730 - this_cpu_inc(*mm->futex_ref); /* 0 -> 1 */ 1731 1728 return 0; 1732 1729 } 1733 1730 ··· 1796 1799 return -EBUSY; 1797 1800 return 0; 1798 1801 } 1802 + } 1803 + 1804 + if (!mm->futex_ref) { 1805 + /* 1806 + * This will always be allocated by the first thread and 1807 + * therefore requires no locking. 1808 + */ 1809 + mm->futex_ref = alloc_percpu(unsigned int); 1810 + if (!mm->futex_ref) 1811 + return -ENOMEM; 1812 + this_cpu_inc(*mm->futex_ref); /* 0 -> 1 */ 1799 1813 } 1800 1814 1801 1815 fph = kvzalloc(struct_size(fph, queues, hash_slots),
+25 -4
kernel/power/energy_model.c
··· 553 553 const struct em_data_callback *cb, 554 554 const cpumask_t *cpus, bool microwatts) 555 555 { 556 + int ret = em_dev_register_pd_no_update(dev, nr_states, cb, cpus, microwatts); 557 + 558 + if (_is_cpu_device(dev)) 559 + em_check_capacity_update(); 560 + 561 + return ret; 562 + } 563 + EXPORT_SYMBOL_GPL(em_dev_register_perf_domain); 564 + 565 + /** 566 + * em_dev_register_pd_no_update() - Register a perf domain for a device 567 + * @dev : Device to register the PD for 568 + * @nr_states : Number of performance states in the new PD 569 + * @cb : Callback functions for populating the energy model 570 + * @cpus : CPUs to include in the new PD (mandatory if @dev is a CPU device) 571 + * @microwatts : Whether or not the power values in the EM will be in uW 572 + * 573 + * Like em_dev_register_perf_domain(), but does not trigger a CPU capacity 574 + * update after registering the PD, even if @dev is a CPU device. 575 + */ 576 + int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states, 577 + const struct em_data_callback *cb, 578 + const cpumask_t *cpus, bool microwatts) 579 + { 556 580 struct em_perf_table *em_table; 557 581 unsigned long cap, prev_cap = 0; 558 582 unsigned long flags = 0; ··· 660 636 unlock: 661 637 mutex_unlock(&em_pd_mutex); 662 638 663 - if (_is_cpu_device(dev)) 664 - em_check_capacity_update(); 665 - 666 639 return ret; 667 640 } 668 - EXPORT_SYMBOL_GPL(em_dev_register_perf_domain); 641 + EXPORT_SYMBOL_GPL(em_dev_register_pd_no_update); 669 642 670 643 /** 671 644 * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device
+1
kernel/power/hibernate.c
··· 449 449 shrink_shmem_memory(); 450 450 451 451 console_suspend_all(); 452 + pm_restrict_gfp_mask(); 452 453 453 454 error = dpm_suspend(PMSG_FREEZE); 454 455
+8 -2
kernel/time/timekeeping.c
··· 83 83 } 84 84 #endif 85 85 86 + static inline void tk_update_aux_offs(struct timekeeper *tk, ktime_t offs) 87 + { 88 + tk->offs_aux = offs; 89 + tk->monotonic_to_aux = ktime_to_timespec64(offs); 90 + } 91 + 86 92 /* flag for if timekeeping is suspended */ 87 93 int __read_mostly timekeeping_suspended; 88 94 ··· 1512 1506 timekeeping_restore_shadow(tkd); 1513 1507 return -EINVAL; 1514 1508 } 1515 - tks->offs_aux = offs; 1509 + tk_update_aux_offs(tks, offs); 1516 1510 } 1517 1511 1518 1512 timekeeping_update_from_shadow(tkd, TK_UPDATE_ALL); ··· 2943 2937 * xtime ("realtime") is not applicable for auxiliary clocks and 2944 2938 * kept in sync with "monotonic". 2945 2939 */ 2946 - aux_tks->offs_aux = ktime_sub(timespec64_to_ktime(*tnew), tnow); 2940 + tk_update_aux_offs(aux_tks, ktime_sub(timespec64_to_ktime(*tnew), tnow)); 2947 2941 2948 2942 timekeeping_update_from_shadow(aux_tkd, TK_UPDATE_ALL); 2949 2943 return 0;
+2 -2
kernel/time/vsyscall.c
··· 159 159 if (clock_mode != VDSO_CLOCKMODE_NONE) { 160 160 fill_clock_configuration(vc, &tk->tkr_mono); 161 161 162 - vdso_ts->sec = tk->xtime_sec; 162 + vdso_ts->sec = tk->xtime_sec + tk->monotonic_to_aux.tv_sec; 163 163 164 164 nsec = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift; 165 - nsec += tk->offs_aux; 165 + nsec += tk->monotonic_to_aux.tv_nsec; 166 166 vdso_ts->sec += __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec); 167 167 nsec = nsec << tk->tkr_mono.shift; 168 168 vdso_ts->nsec = nsec;
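The vsyscall change above adds the auxiliary offset as separate seconds plus normalized nanoseconds (the `monotonic_to_aux` timespec64 kept by `tk_update_aux_offs()`) instead of a raw ktime_t, so the nsec accumulator stays small. A hedged C sketch of that normalization, split into two helpers with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000LL

/* Seconds part of a signed nanosecond offset, normalized so the
 * nanoseconds remainder is always in [0, NSEC_PER_SEC). */
static int64_t offs_sec(int64_t kt)
{
	int64_t sec = kt / NSEC_PER_SEC;

	if (kt % NSEC_PER_SEC < 0)
		sec -= 1;	/* C division truncates toward zero */
	return sec;
}

/* Matching normalized nanoseconds remainder of the same offset. */
static int64_t offs_nsec(int64_t kt)
{
	int64_t nsec = kt % NSEC_PER_SEC;

	if (nsec < 0)
		nsec += NSEC_PER_SEC;
	return nsec;
}
```

For any `kt`, `offs_sec(kt) * NSEC_PER_SEC + offs_nsec(kt)` reconstructs the original offset.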
+2 -1
kernel/trace/fgraph.c
··· 1397 1397 ftrace_graph_active--; 1398 1398 gops->saved_func = NULL; 1399 1399 fgraph_lru_release_index(i); 1400 - unregister_pm_notifier(&ftrace_suspend_notifier); 1400 + if (!ftrace_graph_active) 1401 + unregister_pm_notifier(&ftrace_suspend_notifier); 1401 1402 } 1402 1403 return ret; 1403 1404 }
+7 -3
kernel/trace/trace.c
··· 834 834 /* copy the current bits to the new max */ 835 835 ret = trace_pid_list_first(filtered_pids, &pid); 836 836 while (!ret) { 837 - trace_pid_list_set(pid_list, pid); 837 + ret = trace_pid_list_set(pid_list, pid); 838 + if (ret < 0) 839 + goto out; 840 + 838 841 ret = trace_pid_list_next(filtered_pids, pid + 1, &pid); 839 842 nr_pids++; 840 843 } ··· 874 871 trace_parser_clear(&parser); 875 872 ret = 0; 876 873 } 874 + out: 877 875 trace_parser_put(&parser); 878 876 879 877 if (ret < 0) { ··· 7213 7209 entry = ring_buffer_event_data(event); 7214 7210 entry->ip = ip; 7215 7211 7216 - len = __copy_from_user_inatomic(&entry->buf, ubuf, cnt); 7212 + len = copy_from_user_nofault(&entry->buf, ubuf, cnt); 7217 7213 if (len) { 7218 7214 memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE); 7219 7215 cnt = FAULTED_SIZE; ··· 7310 7306 7311 7307 entry = ring_buffer_event_data(event); 7312 7308 7313 - len = __copy_from_user_inatomic(&entry->id, ubuf, cnt); 7309 + len = copy_from_user_nofault(&entry->id, ubuf, cnt); 7314 7310 if (len) { 7315 7311 entry->id = -1; 7316 7312 memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
+1 -1
kernel/trace/trace_events_user.c
··· 496 496 { 497 497 struct user_event_enabler_fault *fault; 498 498 499 - fault = kmem_cache_zalloc(fault_cache, GFP_NOWAIT | __GFP_NOWARN); 499 + fault = kmem_cache_zalloc(fault_cache, GFP_NOWAIT); 500 500 501 501 if (!fault) 502 502 return false;
+3
kernel/trace/trace_osnoise.c
··· 2322 2322 int running, err; 2323 2323 char *buf __free(kfree) = NULL; 2324 2324 2325 + if (count < 1) 2326 + return 0; 2327 + 2325 2328 buf = kmalloc(count, GFP_KERNEL); 2326 2329 if (!buf) 2327 2330 return -ENOMEM;
+4
mm/damon/core.c
··· 2111 2111 if (!quota->ms && !quota->sz && list_empty(&quota->goals)) 2112 2112 return; 2113 2113 2114 + /* First charge window */ 2115 + if (!quota->total_charged_sz && !quota->charged_from) 2116 + quota->charged_from = jiffies; 2117 + 2114 2118 /* New charge window starts */ 2115 2119 if (time_after_eq(jiffies, quota->charged_from + 2116 2120 msecs_to_jiffies(quota->reset_interval))) {
+5
mm/damon/lru_sort.c
··· 198 198 if (err) 199 199 return err; 200 200 201 + if (!damon_lru_sort_mon_attrs.sample_interval) { 202 + err = -EINVAL; 203 + goto out; 204 + } 205 + 201 206 err = damon_set_attrs(ctx, &damon_lru_sort_mon_attrs); 202 207 if (err) 203 208 goto out;
+5
mm/damon/reclaim.c
··· 194 194 if (err) 195 195 return err; 196 196 197 + if (!damon_reclaim_mon_attrs.aggr_interval) { 198 + err = -EINVAL; 199 + goto out; 200 + } 201 + 197 202 err = damon_set_attrs(param_ctx, &damon_reclaim_mon_attrs); 198 203 if (err) 199 204 goto out;
+9 -5
mm/damon/sysfs.c
··· 1260 1260 { 1261 1261 struct damon_sysfs_kdamond *kdamond = container_of(kobj, 1262 1262 struct damon_sysfs_kdamond, kobj); 1263 - struct damon_ctx *ctx = kdamond->damon_ctx; 1264 - bool running; 1263 + struct damon_ctx *ctx; 1264 + bool running = false; 1265 1265 1266 - if (!ctx) 1267 - running = false; 1268 - else 1266 + if (!mutex_trylock(&damon_sysfs_lock)) 1267 + return -EBUSY; 1268 + 1269 + ctx = kdamond->damon_ctx; 1270 + if (ctx) 1269 1271 running = damon_is_running(ctx); 1272 + 1273 + mutex_unlock(&damon_sysfs_lock); 1270 1274 1271 1275 return sysfs_emit(buf, "%s\n", running ? 1272 1276 damon_sysfs_cmd_strs[DAMON_SYSFS_CMD_ON] :
+6 -3
mm/hugetlb.c
··· 5851 5851 spinlock_t *ptl; 5852 5852 struct hstate *h = hstate_vma(vma); 5853 5853 unsigned long sz = huge_page_size(h); 5854 - bool adjust_reservation = false; 5854 + bool adjust_reservation; 5855 5855 unsigned long last_addr_mask; 5856 5856 bool force_flush = false; 5857 5857 ··· 5944 5944 sz); 5945 5945 hugetlb_count_sub(pages_per_huge_page(h), mm); 5946 5946 hugetlb_remove_rmap(folio); 5947 + spin_unlock(ptl); 5947 5948 5948 5949 /* 5949 5950 * Restore the reservation for anonymous page, otherwise the ··· 5952 5951 * If there we are freeing a surplus, do not set the restore 5953 5952 * reservation bit. 5954 5953 */ 5954 + adjust_reservation = false; 5955 + 5956 + spin_lock_irq(&hugetlb_lock); 5955 5957 if (!h->surplus_huge_pages && __vma_private_lock(vma) && 5956 5958 folio_test_anon(folio)) { 5957 5959 folio_set_hugetlb_restore_reserve(folio); 5958 5960 /* Reservation to be adjusted after the spin lock */ 5959 5961 adjust_reservation = true; 5960 5962 } 5961 - 5962 - spin_unlock(ptl); 5963 + spin_unlock_irq(&hugetlb_lock); 5963 5964 5964 5965 /* 5965 5966 * Adjust the reservation for the region that will have the
+24 -7
mm/kasan/shadow.c
··· 336 336 } 337 337 } 338 338 339 - static int ___alloc_pages_bulk(struct page **pages, int nr_pages) 339 + static int ___alloc_pages_bulk(struct page **pages, int nr_pages, gfp_t gfp_mask) 340 340 { 341 341 unsigned long nr_populated, nr_total = nr_pages; 342 342 struct page **page_array = pages; 343 343 344 344 while (nr_pages) { 345 - nr_populated = alloc_pages_bulk(GFP_KERNEL, nr_pages, pages); 345 + nr_populated = alloc_pages_bulk(gfp_mask, nr_pages, pages); 346 346 if (!nr_populated) { 347 347 ___free_pages_bulk(page_array, nr_total - nr_pages); 348 348 return -ENOMEM; ··· 354 354 return 0; 355 355 } 356 356 357 - static int __kasan_populate_vmalloc(unsigned long start, unsigned long end) 357 + static int __kasan_populate_vmalloc(unsigned long start, unsigned long end, gfp_t gfp_mask) 358 358 { 359 359 unsigned long nr_pages, nr_total = PFN_UP(end - start); 360 360 struct vmalloc_populate_data data; 361 + unsigned int flags; 361 362 int ret = 0; 362 363 363 - data.pages = (struct page **)__get_free_page(GFP_KERNEL | __GFP_ZERO); 364 + data.pages = (struct page **)__get_free_page(gfp_mask | __GFP_ZERO); 364 365 if (!data.pages) 365 366 return -ENOMEM; 366 367 367 368 while (nr_total) { 368 369 nr_pages = min(nr_total, PAGE_SIZE / sizeof(data.pages[0])); 369 - ret = ___alloc_pages_bulk(data.pages, nr_pages); 370 + ret = ___alloc_pages_bulk(data.pages, nr_pages, gfp_mask); 370 371 if (ret) 371 372 break; 372 373 373 374 data.start = start; 375 + 376 + /* 377 + * page tables allocations ignore external gfp mask, enforce it 378 + * by the scope API 379 + */ 380 + if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO) 381 + flags = memalloc_nofs_save(); 382 + else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0) 383 + flags = memalloc_noio_save(); 384 + 374 385 ret = apply_to_page_range(&init_mm, start, nr_pages * PAGE_SIZE, 375 386 kasan_populate_vmalloc_pte, &data); 387 + 388 + if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO) 389 + memalloc_nofs_restore(flags); 390 + else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0) 391 + memalloc_noio_restore(flags); 392 + 376 393 ___free_pages_bulk(data.pages, nr_pages); 377 394 if (ret) 378 395 break; ··· 403 386 return ret; 404 387 } 405 388 406 - int kasan_populate_vmalloc(unsigned long addr, unsigned long size) 389 + int kasan_populate_vmalloc(unsigned long addr, unsigned long size, gfp_t gfp_mask) 407 390 { 408 391 unsigned long shadow_start, shadow_end; 409 392 int ret; ··· 432 415 shadow_start = PAGE_ALIGN_DOWN(shadow_start); 433 416 shadow_end = PAGE_ALIGN(shadow_end); 434 417 435 - ret = __kasan_populate_vmalloc(shadow_start, shadow_end); 418 + ret = __kasan_populate_vmalloc(shadow_start, shadow_end, gfp_mask); 436 419 if (ret) 437 420 return ret; 438 421
+2 -2
mm/khugepaged.c
··· 1417 1417 */ 1418 1418 if (cc->is_khugepaged && 1419 1419 (pte_young(pteval) || folio_test_young(folio) || 1420 - folio_test_referenced(folio) || mmu_notifier_test_young(vma->vm_mm, 1421 - address))) 1420 + folio_test_referenced(folio) || 1421 + mmu_notifier_test_young(vma->vm_mm, _address))) 1422 1422 referenced++; 1423 1423 } 1424 1424 if (!writable) {
+9 -11
mm/memory-failure.c
··· 956 956 [MF_MSG_BUDDY] = "free buddy page", 957 957 [MF_MSG_DAX] = "dax page", 958 958 [MF_MSG_UNSPLIT_THP] = "unsplit thp", 959 - [MF_MSG_ALREADY_POISONED] = "already poisoned", 959 + [MF_MSG_ALREADY_POISONED] = "already poisoned page", 960 960 [MF_MSG_UNKNOWN] = "unknown page", 961 961 }; 962 962 ··· 1349 1349 { 1350 1350 trace_memory_failure_event(pfn, type, result); 1351 1351 1352 - num_poisoned_pages_inc(pfn); 1353 - 1354 - update_per_node_mf_stats(pfn, result); 1352 + if (type != MF_MSG_ALREADY_POISONED) { 1353 + num_poisoned_pages_inc(pfn); 1354 + update_per_node_mf_stats(pfn, result); 1355 + } 1355 1356 1356 1357 pr_err("%#lx: recovery action for %s: %s\n", 1357 1358 pfn, action_page_types[type], action_name[result]); ··· 2095 2094 *hugetlb = 0; 2096 2095 return 0; 2097 2096 } else if (res == -EHWPOISON) { 2098 - pr_err("%#lx: already hardware poisoned\n", pfn); 2099 2097 if (flags & MF_ACTION_REQUIRED) { 2100 2098 folio = page_folio(p); 2101 2099 res = kill_accessing_process(current, folio_pfn(folio), flags); 2102 - action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED); 2103 2100 } 2101 + action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED); 2104 2102 return res; 2105 2103 } else if (res == -EBUSY) { 2106 2104 if (!(flags & MF_NO_RETRY)) { ··· 2285 2285 goto unlock_mutex; 2286 2286 2287 2287 if (TestSetPageHWPoison(p)) { 2288 - pr_err("%#lx: already hardware poisoned\n", pfn); 2289 2288 res = -EHWPOISON; 2290 2289 if (flags & MF_ACTION_REQUIRED) 2291 2290 res = kill_accessing_process(current, pfn, flags); ··· 2568 2569 static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL, 2569 2570 DEFAULT_RATELIMIT_BURST); 2570 2571 2571 - if (!pfn_valid(pfn)) 2572 - return -ENXIO; 2573 - 2574 - p = pfn_to_page(pfn); 2572 + p = pfn_to_online_page(pfn); 2573 + if (!p) 2574 + return -EIO; 2575 2575 folio = page_folio(p); 2576 2576 2577 2577 mutex_lock(&mf_mutex);
+8 -2
mm/memory_hotplug.c
··· 1815 1815 pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1; 1816 1816 1817 1817 if (folio_contain_hwpoisoned_page(folio)) { 1818 - if (WARN_ON(folio_test_lru(folio))) 1819 - folio_isolate_lru(folio); 1818 + /* 1819 + * unmap_poisoned_folio() cannot handle large folios 1820 + * in all cases yet. 1821 + */ 1822 + if (folio_test_large(folio) && !folio_test_hugetlb(folio)) 1823 + goto put_folio; 1824 + if (folio_test_lru(folio) && !folio_isolate_lru(folio)) 1825 + goto put_folio; 1820 1826 if (folio_mapped(folio)) { 1821 1827 folio_lock(folio); 1822 1828 unmap_poisoned_folio(folio, pfn, false);
+6 -3
mm/mremap.c
··· 1774 1774 if (!vrm->new_len) 1775 1775 return -EINVAL; 1776 1776 1777 - /* Is the new length or address silly? */ 1778 - if (vrm->new_len > TASK_SIZE || 1779 - vrm->new_addr > TASK_SIZE - vrm->new_len) 1777 + /* Is the new length silly? */ 1778 + if (vrm->new_len > TASK_SIZE) 1780 1779 return -EINVAL; 1781 1780 1782 1781 /* Remainder of checks are for cases with specific new_addr. */ 1783 1782 if (!vrm_implies_new_addr(vrm)) 1784 1783 return 0; 1784 + 1785 + /* Is the new address silly? */ 1786 + if (vrm->new_addr > TASK_SIZE - vrm->new_len) 1787 + return -EINVAL; 1785 1788 1786 1789 /* The new address must be page-aligned. */ 1787 1790 if (offset_in_page(vrm->new_addr))
+12 -8
mm/percpu.c
··· 1734 1734 bool is_atomic; 1735 1735 bool do_warn; 1736 1736 struct obj_cgroup *objcg = NULL; 1737 - static int warn_limit = 10; 1737 + static atomic_t warn_limit = ATOMIC_INIT(10); 1738 1738 struct pcpu_chunk *chunk, *next; 1739 1739 const char *err; 1740 1740 int slot, off, cpu, ret; ··· 1904 1904 fail: 1905 1905 trace_percpu_alloc_percpu_fail(reserved, is_atomic, size, align); 1906 1906 1907 - if (do_warn && warn_limit) { 1908 - pr_warn("allocation failed, size=%zu align=%zu atomic=%d, %s\n", 1909 - size, align, is_atomic, err); 1910 - if (!is_atomic) 1911 - dump_stack(); 1912 - if (!--warn_limit) 1913 - pr_info("limit reached, disable warning\n"); 1907 + if (do_warn) { 1908 + int remaining = atomic_dec_if_positive(&warn_limit); 1909 + 1910 + if (remaining >= 0) { 1911 + pr_warn("allocation failed, size=%zu align=%zu atomic=%d, %s\n", 1912 + size, align, is_atomic, err); 1913 + if (!is_atomic) 1914 + dump_stack(); 1915 + if (remaining == 0) 1916 + pr_info("limit reached, disable warning\n"); 1917 + } 1914 1918 } 1915 1919 1916 1920 if (is_atomic) {
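The hunk above replaces a racy `static int warn_limit` with an atomic countdown read via `atomic_dec_if_positive()`. A C11 sketch of the same idea (simplified: it reports -1 once exhausted rather than reproducing the kernel primitive's exact return contract; names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int warn_budget = 3;	/* stand-in for warn_limit */

/* Decrement *v only if the result stays >= 0; return the new value,
 * or -1 once the counter is exhausted. Each caller that sees a
 * non-negative result "owns" one warning slot, with no lost updates
 * even under concurrency. */
static int dec_if_positive(atomic_int *v)
{
	int old = atomic_load(v);

	while (old > 0) {
		if (atomic_compare_exchange_weak(v, &old, old - 1))
			return old - 1;
		/* old was reloaded by the failed CAS; retry */
	}
	return -1;
}
```

As in the patch, the caller warns while the result is >= 0 and prints the "limit reached" notice exactly once, when it sees 0.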
+4 -4
mm/vmalloc.c
··· 2026 2026 if (unlikely(!vmap_initialized)) 2027 2027 return ERR_PTR(-EBUSY); 2028 2028 2029 + /* Only reclaim behaviour flags are relevant. */ 2030 + gfp_mask = gfp_mask & GFP_RECLAIM_MASK; 2029 2031 might_sleep(); 2030 2032 2031 2033 /* ··· 2040 2038 */ 2041 2039 va = node_alloc(size, align, vstart, vend, &addr, &vn_id); 2042 2040 if (!va) { 2043 - gfp_mask = gfp_mask & GFP_RECLAIM_MASK; 2044 - 2045 2041 va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node); 2046 2042 if (unlikely(!va)) 2047 2043 return ERR_PTR(-ENOMEM); ··· 2089 2089 BUG_ON(va->va_start < vstart); 2090 2090 BUG_ON(va->va_end > vend); 2091 2091 2092 - ret = kasan_populate_vmalloc(addr, size); 2092 + ret = kasan_populate_vmalloc(addr, size, gfp_mask); 2093 2093 if (ret) { 2094 2094 free_vmap_area(va); 2095 2095 return ERR_PTR(ret); ··· 4826 4826 4827 4827 /* populate the kasan shadow space */ 4828 4828 for (area = 0; area < nr_vms; area++) { 4829 - if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area])) 4829 + if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area], GFP_KERNEL)) 4830 4830 goto err_free_shadow; 4831 4831 } 4832 4832
+7
net/bridge/br.c
··· 324 324 int err = 0; 325 325 int opt_id; 326 326 327 + opt_id = find_next_bit(&bitmap, BITS_PER_LONG, BR_BOOLOPT_MAX); 328 + if (opt_id != BITS_PER_LONG) { 329 + NL_SET_ERR_MSG_FMT_MOD(extack, "Unknown boolean option %d", 330 + opt_id); 331 + return -EINVAL; 332 + } 333 + 327 334 for_each_set_bit(opt_id, &bitmap, BR_BOOLOPT_MAX) { 328 335 bool on = !!(bm->optval & BIT(opt_id)); 329 336
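The added `find_next_bit()` guard rejects boolean-option bitmaps with any bit set at or above `BR_BOOLOPT_MAX` before iterating over the known options. A minimal sketch of that check (names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Return -1 if bitmap has any bit set at index max_opt or above,
 * i.e. the caller requested an option this build does not know. */
static int validate_optmask(unsigned long bitmap, unsigned int max_opt)
{
	if (max_opt >= sizeof(bitmap) * 8)
		return 0;	/* every representable bit is a known option */
	return (bitmap >> max_opt) ? -1 : 0;
}
```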
+4 -1
net/can/j1939/bus.c
··· 290 290 if (!ecu) 291 291 ecu = j1939_ecu_create_locked(priv, name); 292 292 err = PTR_ERR_OR_ZERO(ecu); 293 - if (err) 293 + if (err) { 294 + if (j1939_address_is_unicast(sa)) 295 + priv->ents[sa].nusers--; 294 296 goto done; 297 + } 295 298 296 299 ecu->nusers++; 297 300 /* TODO: do we care if ecu->addr != sa? */
+1
net/can/j1939/j1939-priv.h
··· 212 212 213 213 /* notify/alert all j1939 sockets bound to ifindex */ 214 214 void j1939_sk_netdev_event_netdown(struct j1939_priv *priv); 215 + void j1939_sk_netdev_event_unregister(struct j1939_priv *priv); 215 216 int j1939_cancel_active_session(struct j1939_priv *priv, struct sock *sk); 216 217 void j1939_tp_init(struct j1939_priv *priv); 217 218
+3
net/can/j1939/main.c
··· 377 377 j1939_sk_netdev_event_netdown(priv); 378 378 j1939_ecu_unmap_all(priv); 379 379 break; 380 + case NETDEV_UNREGISTER: 381 + j1939_sk_netdev_event_unregister(priv); 382 + break; 380 383 } 381 384 382 385 j1939_priv_put(priv);
+52
net/can/j1939/socket.c
··· 521 521 ret = j1939_local_ecu_get(priv, jsk->addr.src_name, jsk->addr.sa); 522 522 if (ret) { 523 523 j1939_netdev_stop(priv); 524 + jsk->priv = NULL; 525 + synchronize_rcu(); 526 + j1939_priv_put(priv); 524 527 goto out_release_sock; 525 528 } 526 529 ··· 1301 1298 j1939_sk_queue_drop_all(priv, jsk, error_code); 1302 1299 } 1303 1300 read_unlock_bh(&priv->j1939_socks_lock); 1301 } 1302 + 1303 + void j1939_sk_netdev_event_unregister(struct j1939_priv *priv) 1304 + { 1305 + struct sock *sk; 1306 + struct j1939_sock *jsk; 1307 + bool wait_rcu = false; 1308 + 1309 + rescan: /* The caller is holding a ref on this "priv" via j1939_priv_get_by_ndev(). */ 1310 + read_lock_bh(&priv->j1939_socks_lock); 1311 + list_for_each_entry(jsk, &priv->j1939_socks, list) { 1312 + /* Skip if j1939_jsk_add() is not called on this socket. */ 1313 + if (!(jsk->state & J1939_SOCK_BOUND)) 1314 + continue; 1315 + sk = &jsk->sk; 1316 + sock_hold(sk); 1317 + read_unlock_bh(&priv->j1939_socks_lock); 1318 + /* Check if j1939_jsk_del() is not yet called on this socket after holding 1319 + * socket's lock, for both j1939_sk_bind() and j1939_sk_release() call 1320 + * j1939_jsk_del() with socket's lock held. 1321 + */ 1322 + lock_sock(sk); 1323 + if (jsk->state & J1939_SOCK_BOUND) { 1324 + /* Neither j1939_sk_bind() nor j1939_sk_release() called j1939_jsk_del(). 1325 + * Make this socket no longer bound, by pretending as if j1939_sk_bind() 1326 + * dropped old references but did not get new references. 1327 + */ 1328 + j1939_jsk_del(priv, jsk); 1329 + j1939_local_ecu_put(priv, jsk->addr.src_name, jsk->addr.sa); 1330 + j1939_netdev_stop(priv); 1331 + /* Call j1939_priv_put() now and prevent j1939_sk_sock_destruct() from 1332 + * calling the corresponding j1939_priv_put(). 1333 + * 1334 + * j1939_sk_sock_destruct() is supposed to call j1939_priv_put() after 1335 + * an RCU grace period. But since the caller is holding a ref on this 1336 + * "priv", we can defer synchronize_rcu() until immediately before 1337 + * the caller calls j1939_priv_put(). 1338 + */ 1339 + j1939_priv_put(priv); 1340 + jsk->priv = NULL; 1341 + wait_rcu = true; 1342 + } 1343 + release_sock(sk); 1344 + sock_put(sk); 1345 + goto rescan; 1346 + } 1347 + read_unlock_bh(&priv->j1939_socks_lock); 1348 + if (wait_rcu) 1349 + synchronize_rcu(); 1304 1350 } 1305 1351 1306 1352 static int j1939_sk_no_ioctlcmd(struct socket *sock, unsigned int cmd,
+18 -4
net/core/dev_ioctl.c
··· 464 464 if (!netif_device_present(dev)) 465 465 return -ENODEV; 466 466 467 - if (ops->ndo_hwtstamp_get) 468 - return dev_get_hwtstamp_phylib(dev, kernel_cfg); 467 + if (ops->ndo_hwtstamp_get) { 468 + int err; 469 + 470 + netdev_lock_ops(dev); 471 + err = dev_get_hwtstamp_phylib(dev, kernel_cfg); 472 + netdev_unlock_ops(dev); 473 + 474 + return err; 475 + } 469 476 470 477 /* Legacy path: unconverted lower driver */ 471 478 return generic_hwtstamp_ioctl_lower(dev, SIOCGHWTSTAMP, kernel_cfg); ··· 488 481 if (!netif_device_present(dev)) 489 482 return -ENODEV; 490 483 491 - if (ops->ndo_hwtstamp_set) 492 - return dev_set_hwtstamp_phylib(dev, kernel_cfg, extack); 484 + if (ops->ndo_hwtstamp_set) { 485 + int err; 486 + 487 + netdev_lock_ops(dev); 488 + err = dev_set_hwtstamp_phylib(dev, kernel_cfg, extack); 489 + netdev_unlock_ops(dev); 490 + 491 + return err; 492 + } 493 493 494 494 /* Legacy path: unconverted lower driver */ 495 495 return generic_hwtstamp_ioctl_lower(dev, SIOCSHWTSTAMP, kernel_cfg);
+18 -10
net/hsr/hsr_device.c
··· 49 49 50 50 ASSERT_RTNL(); 51 51 52 - hsr_for_each_port(master->hsr, port) { 52 + hsr_for_each_port_rtnl(master->hsr, port) { 53 53 if (port->type != HSR_PT_MASTER && is_slave_up(port->dev)) { 54 54 netif_carrier_on(master->dev); 55 55 return true; ··· 105 105 struct hsr_port *port; 106 106 107 107 mtu_max = ETH_DATA_LEN; 108 - hsr_for_each_port(hsr, port) 108 + hsr_for_each_port_rtnl(hsr, port) 109 109 if (port->type != HSR_PT_MASTER) 110 110 mtu_max = min(port->dev->mtu, mtu_max); 111 111 ··· 139 139 140 140 hsr = netdev_priv(dev); 141 141 142 - hsr_for_each_port(hsr, port) { 142 + hsr_for_each_port_rtnl(hsr, port) { 143 143 if (port->type == HSR_PT_MASTER) 144 144 continue; 145 145 switch (port->type) { ··· 172 172 struct hsr_priv *hsr; 173 173 174 174 hsr = netdev_priv(dev); 175 - hsr_for_each_port(hsr, port) { 175 + hsr_for_each_port_rtnl(hsr, port) { 176 176 if (port->type == HSR_PT_MASTER) 177 177 continue; 178 178 switch (port->type) { ··· 205 205 * may become enabled. 206 206 */ 207 207 features &= ~NETIF_F_ONE_FOR_ALL; 208 - hsr_for_each_port(hsr, port) 208 + hsr_for_each_port_rtnl(hsr, port) 209 209 features = netdev_increment_features(features, 210 210 port->dev->features, 211 211 mask); ··· 226 226 struct hsr_priv *hsr = netdev_priv(dev); 227 227 struct hsr_port *master; 228 228 229 + rcu_read_lock(); 229 230 master = hsr_port_get_hsr(hsr, HSR_PT_MASTER); 230 231 if (master) { 231 232 skb->dev = master->dev; ··· 239 238 dev_core_stats_tx_dropped_inc(dev); 240 239 dev_kfree_skb_any(skb); 241 240 } 241 + rcu_read_unlock(); 242 + 242 243 return NETDEV_TX_OK; 243 244 } 244 245 ··· 487 484 488 485 hsr = netdev_priv(dev); 489 486 490 - hsr_for_each_port(hsr, port) { 487 + hsr_for_each_port_rtnl(hsr, port) { 491 488 if (port->type == HSR_PT_MASTER) 492 489 continue; 493 490 switch (port->type) { ··· 509 506 510 507 hsr = netdev_priv(dev); 511 508 512 - hsr_for_each_port(hsr, port) { 509 + hsr_for_each_port_rtnl(hsr, port) { 513 510 if (port->type == 
HSR_PT_MASTER) 514 511 continue; 515 512 switch (port->type) { ··· 537 534 538 535 hsr = netdev_priv(dev); 539 536 540 - hsr_for_each_port(hsr, port) { 537 + hsr_for_each_port_rtnl(hsr, port) { 541 538 if (port->type == HSR_PT_MASTER || 542 539 port->type == HSR_PT_INTERLINK) 543 540 continue; ··· 583 580 584 581 hsr = netdev_priv(dev); 585 582 586 - hsr_for_each_port(hsr, port) { 583 + hsr_for_each_port_rtnl(hsr, port) { 587 584 switch (port->type) { 588 585 case HSR_PT_SLAVE_A: 589 586 case HSR_PT_SLAVE_B: ··· 675 672 struct hsr_priv *hsr = netdev_priv(ndev); 676 673 struct hsr_port *port; 677 674 675 + rcu_read_lock(); 678 676 hsr_for_each_port(hsr, port) 679 - if (port->type == pt) 677 + if (port->type == pt) { 678 + dev_hold(port->dev); 679 + rcu_read_unlock(); 680 680 return port->dev; 681 + } 682 + rcu_read_unlock(); 681 683 return NULL; 682 684 } 683 685 EXPORT_SYMBOL(hsr_get_port_ndev);
+2 -2
net/hsr/hsr_main.c
··· 22 22 { 23 23 struct hsr_port *port; 24 24 25 - hsr_for_each_port(hsr, port) 25 + hsr_for_each_port_rtnl(hsr, port) 26 26 if (port->type != HSR_PT_MASTER) 27 27 return false; 28 28 return true; ··· 134 134 { 135 135 struct hsr_port *port; 136 136 137 - hsr_for_each_port(hsr, port) 137 + hsr_for_each_port_rtnl(hsr, port) 138 138 if (port->type == pt) 139 139 return port; 140 140 return NULL;
+3
net/hsr/hsr_main.h
··· 224 224 #define hsr_for_each_port(hsr, port) \ 225 225 list_for_each_entry_rcu((port), &(hsr)->ports, port_list) 226 226 227 + #define hsr_for_each_port_rtnl(hsr, port) \ 228 + list_for_each_entry_rcu((port), &(hsr)->ports, port_list, lockdep_rtnl_is_held()) 229 + 227 230 struct hsr_port *hsr_port_get_hsr(struct hsr_priv *hsr, enum hsr_port_type pt); 228 231 229 232 /* Caller must ensure skb is a valid HSR frame */
+6
net/ipv4/ip_tunnel_core.c
··· 206 206 if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct iphdr))) 207 207 return -EINVAL; 208 208 209 + if (skb_is_gso(skb)) 210 + skb_gso_reset(skb); 211 + 209 212 skb_copy_bits(skb, skb_mac_offset(skb), &eh, ETH_HLEN); 210 213 pskb_pull(skb, ETH_HLEN); 211 214 skb_reset_network_header(skb); ··· 302 299 303 300 if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct ipv6hdr))) 304 301 return -EINVAL; 302 + 303 + if (skb_is_gso(skb)) 304 + skb_gso_reset(skb); 305 305 306 306 skb_copy_bits(skb, skb_mac_offset(skb), &eh, ETH_HLEN); 307 307 pskb_pull(skb, ETH_HLEN);
+4 -1
net/ipv4/tcp_bpf.c
··· 408 408 if (!psock->cork) { 409 409 psock->cork = kzalloc(sizeof(*psock->cork), 410 410 GFP_ATOMIC | __GFP_NOWARN); 411 - if (!psock->cork) 411 + if (!psock->cork) { 412 + sk_msg_free(sk, msg); 413 + *copied = 0; 412 414 return -ENOMEM; 415 + } 413 416 } 414 417 memcpy(psock->cork, msg, sizeof(*msg)); 415 418 return 0;
+5 -6
net/mptcp/sockopt.c
··· 1532 1532 { 1533 1533 static const unsigned int tx_rx_locks = SOCK_RCVBUF_LOCK | SOCK_SNDBUF_LOCK; 1534 1534 struct sock *sk = (struct sock *)msk; 1535 + bool keep_open; 1535 1536 1536 - if (ssk->sk_prot->keepalive) { 1537 - if (sock_flag(sk, SOCK_KEEPOPEN)) 1538 - ssk->sk_prot->keepalive(ssk, 1); 1539 - else 1540 - ssk->sk_prot->keepalive(ssk, 0); 1541 - } 1537 + keep_open = sock_flag(sk, SOCK_KEEPOPEN); 1538 + if (ssk->sk_prot->keepalive) 1539 + ssk->sk_prot->keepalive(ssk, keep_open); 1540 + sock_valbool_flag(ssk, SOCK_KEEPOPEN, keep_open); 1542 1541 1543 1542 ssk->sk_priority = sk->sk_priority; 1544 1543 ssk->sk_bound_dev_if = sk->sk_bound_dev_if;
+34 -32
net/netfilter/nf_tables_api.c
··· 1123 1123 return ERR_PTR(-ENOENT); 1124 1124 } 1125 1125 1126 - static __be16 nft_base_seq(const struct net *net) 1126 + static unsigned int nft_base_seq(const struct net *net) 1127 1127 { 1128 - struct nftables_pernet *nft_net = nft_pernet(net); 1128 + return READ_ONCE(net->nft.base_seq); 1129 + } 1129 1130 1130 - return htons(nft_net->base_seq & 0xffff); 1131 + static __be16 nft_base_seq_be16(const struct net *net) 1132 + { 1133 + return htons(nft_base_seq(net) & 0xffff); 1131 1134 } 1132 1135 1133 1136 static const struct nla_policy nft_table_policy[NFTA_TABLE_MAX + 1] = { ··· 1150 1147 1151 1148 nlh = nfnl_msg_put(skb, portid, seq, 1152 1149 nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event), 1153 - flags, family, NFNETLINK_V0, nft_base_seq(net)); 1150 + flags, family, NFNETLINK_V0, nft_base_seq_be16(net)); 1154 1151 if (!nlh) 1155 1152 goto nla_put_failure; 1156 1153 ··· 1243 1240 1244 1241 rcu_read_lock(); 1245 1242 nft_net = nft_pernet(net); 1246 - cb->seq = READ_ONCE(nft_net->base_seq); 1243 + cb->seq = nft_base_seq(net); 1247 1244 1248 1245 list_for_each_entry_rcu(table, &nft_net->tables, list) { 1249 1246 if (family != NFPROTO_UNSPEC && family != table->family) ··· 2025 2022 2026 2023 nlh = nfnl_msg_put(skb, portid, seq, 2027 2024 nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event), 2028 - flags, family, NFNETLINK_V0, nft_base_seq(net)); 2025 + flags, family, NFNETLINK_V0, nft_base_seq_be16(net)); 2029 2026 if (!nlh) 2030 2027 goto nla_put_failure; 2031 2028 ··· 2128 2125 2129 2126 rcu_read_lock(); 2130 2127 nft_net = nft_pernet(net); 2131 - cb->seq = READ_ONCE(nft_net->base_seq); 2128 + cb->seq = nft_base_seq(net); 2132 2129 2133 2130 list_for_each_entry_rcu(table, &nft_net->tables, list) { 2134 2131 if (family != NFPROTO_UNSPEC && family != table->family) ··· 3666 3663 u16 type = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event); 3667 3664 3668 3665 nlh = nfnl_msg_put(skb, portid, seq, type, flags, family, NFNETLINK_V0, 3669 - nft_base_seq(net)); 3666 + 
nft_base_seq_be16(net)); 3670 3667 if (!nlh) 3671 3668 goto nla_put_failure; 3672 3669 ··· 3834 3831 3835 3832 rcu_read_lock(); 3836 3833 nft_net = nft_pernet(net); 3837 - cb->seq = READ_ONCE(nft_net->base_seq); 3834 + cb->seq = nft_base_seq(net); 3838 3835 3839 3836 list_for_each_entry_rcu(table, &nft_net->tables, list) { 3840 3837 if (family != NFPROTO_UNSPEC && family != table->family) ··· 4045 4042 buf = kasprintf(GFP_ATOMIC, "%.*s:%u", 4046 4043 nla_len(nla[NFTA_RULE_TABLE]), 4047 4044 (char *)nla_data(nla[NFTA_RULE_TABLE]), 4048 - nft_net->base_seq); 4045 + nft_base_seq(net)); 4049 4046 audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1, 4050 4047 AUDIT_NFT_OP_RULE_RESET, GFP_ATOMIC); 4051 4048 kfree(buf); ··· 4882 4879 nlh = nfnl_msg_put(skb, portid, seq, 4883 4880 nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event), 4884 4881 flags, ctx->family, NFNETLINK_V0, 4885 - nft_base_seq(ctx->net)); 4882 + nft_base_seq_be16(ctx->net)); 4886 4883 if (!nlh) 4887 4884 goto nla_put_failure; 4888 4885 ··· 5027 5024 5028 5025 rcu_read_lock(); 5029 5026 nft_net = nft_pernet(net); 5030 - cb->seq = READ_ONCE(nft_net->base_seq); 5027 + cb->seq = nft_base_seq(net); 5031 5028 5032 5029 list_for_each_entry_rcu(table, &nft_net->tables, list) { 5033 5030 if (ctx->family != NFPROTO_UNSPEC && ··· 6204 6201 6205 6202 rcu_read_lock(); 6206 6203 nft_net = nft_pernet(net); 6207 - cb->seq = READ_ONCE(nft_net->base_seq); 6204 + cb->seq = nft_base_seq(net); 6208 6205 6209 6206 list_for_each_entry_rcu(table, &nft_net->tables, list) { 6210 6207 if (dump_ctx->ctx.family != NFPROTO_UNSPEC && ··· 6233 6230 seq = cb->nlh->nlmsg_seq; 6234 6231 6235 6232 nlh = nfnl_msg_put(skb, portid, seq, event, NLM_F_MULTI, 6236 - table->family, NFNETLINK_V0, nft_base_seq(net)); 6233 + table->family, NFNETLINK_V0, nft_base_seq_be16(net)); 6237 6234 if (!nlh) 6238 6235 goto nla_put_failure; 6239 6236 ··· 6326 6323 6327 6324 event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event); 6328 6325 nlh = nfnl_msg_put(skb, portid, seq, 
event, flags, ctx->family, 6329 - NFNETLINK_V0, nft_base_seq(ctx->net)); 6326 + NFNETLINK_V0, nft_base_seq_be16(ctx->net)); 6330 6327 if (!nlh) 6331 6328 goto nla_put_failure; 6332 6329 ··· 6625 6622 } 6626 6623 nelems++; 6627 6624 } 6628 - audit_log_nft_set_reset(dump_ctx.ctx.table, nft_net->base_seq, nelems); 6625 + audit_log_nft_set_reset(dump_ctx.ctx.table, nft_base_seq(info->net), nelems); 6629 6626 6630 6627 out_unlock: 6631 6628 rcu_read_unlock(); ··· 8375 8372 8376 8373 nlh = nfnl_msg_put(skb, portid, seq, 8377 8374 nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event), 8378 - flags, family, NFNETLINK_V0, nft_base_seq(net)); 8375 + flags, family, NFNETLINK_V0, nft_base_seq_be16(net)); 8379 8376 if (!nlh) 8380 8377 goto nla_put_failure; 8381 8378 ··· 8440 8437 8441 8438 rcu_read_lock(); 8442 8439 nft_net = nft_pernet(net); 8443 - cb->seq = READ_ONCE(nft_net->base_seq); 8440 + cb->seq = nft_base_seq(net); 8444 8441 8445 8442 list_for_each_entry_rcu(table, &nft_net->tables, list) { 8446 8443 if (family != NFPROTO_UNSPEC && family != table->family) ··· 8474 8471 idx++; 8475 8472 } 8476 8473 if (ctx->reset && entries) 8477 - audit_log_obj_reset(table, nft_net->base_seq, entries); 8474 + audit_log_obj_reset(table, nft_base_seq(net), entries); 8478 8475 if (rc < 0) 8479 8476 break; 8480 8477 } ··· 8643 8640 buf = kasprintf(GFP_ATOMIC, "%.*s:%u", 8644 8641 nla_len(nla[NFTA_OBJ_TABLE]), 8645 8642 (char *)nla_data(nla[NFTA_OBJ_TABLE]), 8646 - nft_net->base_seq); 8643 + nft_base_seq(net)); 8647 8644 audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1, 8648 8645 AUDIT_NFT_OP_OBJ_RESET, GFP_ATOMIC); 8649 8646 kfree(buf); ··· 8748 8745 struct nft_object *obj, u32 portid, u32 seq, int event, 8749 8746 u16 flags, int family, int report, gfp_t gfp) 8750 8747 { 8751 - struct nftables_pernet *nft_net = nft_pernet(net); 8752 8748 char *buf = kasprintf(gfp, "%s:%u", 8753 - table->name, nft_net->base_seq); 8749 + table->name, nft_base_seq(net)); 8754 8750 8755 8751 audit_log_nfcfg(buf, 8756 
8752 family, ··· 9435 9433 9436 9434 nlh = nfnl_msg_put(skb, portid, seq, 9437 9435 nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event), 9438 - flags, family, NFNETLINK_V0, nft_base_seq(net)); 9436 + flags, family, NFNETLINK_V0, nft_base_seq_be16(net)); 9439 9437 if (!nlh) 9440 9438 goto nla_put_failure; 9441 9439 ··· 9504 9502 9505 9503 rcu_read_lock(); 9506 9504 nft_net = nft_pernet(net); 9507 - cb->seq = READ_ONCE(nft_net->base_seq); 9505 + cb->seq = nft_base_seq(net); 9508 9506 9509 9507 list_for_each_entry_rcu(table, &nft_net->tables, list) { 9510 9508 if (family != NFPROTO_UNSPEC && family != table->family) ··· 9689 9687 static int nf_tables_fill_gen_info(struct sk_buff *skb, struct net *net, 9690 9688 u32 portid, u32 seq) 9691 9689 { 9692 - struct nftables_pernet *nft_net = nft_pernet(net); 9693 9690 struct nlmsghdr *nlh; 9694 9691 char buf[TASK_COMM_LEN]; 9695 9692 int event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWGEN); 9696 9693 9697 9694 nlh = nfnl_msg_put(skb, portid, seq, event, 0, AF_UNSPEC, 9698 - NFNETLINK_V0, nft_base_seq(net)); 9695 + NFNETLINK_V0, nft_base_seq_be16(net)); 9699 9696 if (!nlh) 9700 9697 goto nla_put_failure; 9701 9698 9702 - if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_net->base_seq)) || 9699 + if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_base_seq(net))) || 9703 9700 nla_put_be32(skb, NFTA_GEN_PROC_PID, htonl(task_pid_nr(current))) || 9704 9701 nla_put_string(skb, NFTA_GEN_PROC_NAME, get_task_comm(buf, current))) 9705 9702 goto nla_put_failure; ··· 10960 10959 * Bump generation counter, invalidate any dump in progress. 10961 10960 * Cannot fail after this point. 
10962 10961 */ 10963 - base_seq = READ_ONCE(nft_net->base_seq); 10962 + base_seq = nft_base_seq(net); 10964 10963 while (++base_seq == 0) 10965 10964 ; 10966 10965 10967 - WRITE_ONCE(nft_net->base_seq, base_seq); 10966 + /* pairs with smp_load_acquire in nft_lookup_eval */ 10967 + smp_store_release(&net->nft.base_seq, base_seq); 10968 10968 10969 10969 gc_seq = nft_gc_seq_begin(nft_net); 10970 10970 ··· 11174 11172 11175 11173 nft_commit_notify(net, NETLINK_CB(skb).portid); 11176 11174 nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN); 11177 - nf_tables_commit_audit_log(&adl, nft_net->base_seq); 11175 + nf_tables_commit_audit_log(&adl, nft_base_seq(net)); 11178 11176 11179 11177 nft_gc_seq_end(nft_net, gc_seq); 11180 11178 nft_net->validate_state = NFT_VALIDATE_SKIP; ··· 11499 11497 mutex_lock(&nft_net->commit_mutex); 11500 11498 nft_net->tstamp = get_jiffies_64(); 11501 11499 11502 - genid_ok = genid == 0 || nft_net->base_seq == genid; 11500 + genid_ok = genid == 0 || nft_base_seq(net) == genid; 11503 11501 if (!genid_ok) 11504 11502 mutex_unlock(&nft_net->commit_mutex); 11505 11503 ··· 12136 12134 INIT_LIST_HEAD(&nft_net->module_list); 12137 12135 INIT_LIST_HEAD(&nft_net->notify_list); 12138 12136 mutex_init(&nft_net->commit_mutex); 12139 - nft_net->base_seq = 1; 12137 + net->nft.base_seq = 1; 12140 12138 nft_net->gc_seq = 0; 12141 12139 nft_net->validate_state = NFT_VALIDATE_SKIP; 12142 12140 INIT_WORK(&nft_net->destroy_work, nf_tables_trans_destroy_work);
+41 -5
net/netfilter/nft_lookup.c
··· 24 24 struct nft_set_binding binding; 25 25 }; 26 26 27 - #ifdef CONFIG_MITIGATION_RETPOLINE 28 - const struct nft_set_ext * 29 - nft_set_do_lookup(const struct net *net, const struct nft_set *set, 30 - const u32 *key) 27 + static const struct nft_set_ext * 28 + __nft_set_do_lookup(const struct net *net, const struct nft_set *set, 29 + const u32 *key) 31 30 { 31 + #ifdef CONFIG_MITIGATION_RETPOLINE 32 32 if (set->ops == &nft_set_hash_fast_type.ops) 33 33 return nft_hash_lookup_fast(net, set, key); 34 34 if (set->ops == &nft_set_hash_type.ops) ··· 51 51 return nft_rbtree_lookup(net, set, key); 52 52 53 53 WARN_ON_ONCE(1); 54 + #endif 54 55 return set->ops->lookup(net, set, key); 55 56 } 57 + 58 + static unsigned int nft_base_seq(const struct net *net) 59 + { 60 + /* pairs with smp_store_release() in nf_tables_commit() */ 61 + return smp_load_acquire(&net->nft.base_seq); 62 + } 63 + 64 + static bool nft_lookup_should_retry(const struct net *net, unsigned int seq) 65 + { 66 + return unlikely(seq != nft_base_seq(net)); 67 + } 68 + 69 + const struct nft_set_ext * 70 + nft_set_do_lookup(const struct net *net, const struct nft_set *set, 71 + const u32 *key) 72 + { 73 + const struct nft_set_ext *ext; 74 + unsigned int base_seq; 75 + 76 + do { 77 + base_seq = nft_base_seq(net); 78 + 79 + ext = __nft_set_do_lookup(net, set, key); 80 + if (ext) 81 + break; 82 + /* No match? There is a small chance that lookup was 83 + * performed in the old generation, but nf_tables_commit() 84 + * already unlinked a (matching) element. 85 + * 86 + * We need to repeat the lookup to make sure that we didn't 87 + * miss a matching element in the new generation. 88 + */ 89 + } while (nft_lookup_should_retry(net, base_seq)); 90 + 91 + return ext; 92 + } 56 93 EXPORT_SYMBOL_GPL(nft_set_do_lookup); 57 - #endif 58 94 59 95 void nft_lookup_eval(const struct nft_expr *expr, 60 96 struct nft_regs *regs,
+2 -1
net/netfilter/nft_set_bitmap.c
··· 226 226 const struct nft_bitmap *priv = nft_set_priv(set); 227 227 struct nft_bitmap_elem *be; 228 228 229 - list_for_each_entry_rcu(be, &priv->list, head) { 229 + list_for_each_entry_rcu(be, &priv->list, head, 230 + lockdep_is_held(&nft_pernet(ctx->net)->commit_mutex)) { 230 231 if (iter->count < iter->skip) 231 232 goto cont; 232 233
+18 -2
net/netfilter/nft_set_pipapo.c
··· 549 549 * 550 550 * This function is called from the data path. It will search for 551 551 * an element matching the given key in the current active copy. 552 + * Unlike other set types, this uses NFT_GENMASK_ANY instead of 553 + * nft_genmask_cur(). 554 + * 555 + * This is because new (future) elements are not reachable from 556 + * priv->match, they get added to priv->clone instead. 557 + * When the commit phase flips the generation bitmask, the 558 + * 'now old' entries are skipped but without the 'now current' 559 + * elements becoming visible. Using nft_genmask_cur() thus creates 560 + * inconsistent state: matching old entries get skipped but the 561 + * newly matching entries are unreachable. 562 + * 563 + * GENMASK will still find the 'now old' entries which ensures consistent 564 + * priv->match view. 565 + * 566 + * nft_pipapo_commit swaps ->clone and ->match shortly after the 567 + * genbit flip. As ->clone doesn't contain the old entries in the first 568 + * place, lookup will only find the now-current ones. 552 569 * 553 570 * Return: nftables API extension pointer or NULL if no match. 554 571 */ ··· 574 557 const u32 *key) 575 558 { 576 559 struct nft_pipapo *priv = nft_set_priv(set); 577 - u8 genmask = nft_genmask_cur(net); 578 560 const struct nft_pipapo_match *m; 579 561 const struct nft_pipapo_elem *e; 580 562 581 563 m = rcu_dereference(priv->match); 582 - e = pipapo_get_slow(m, (const u8 *)key, genmask, get_jiffies_64()); 583 564 + e = pipapo_get_slow(m, (const u8 *)key, NFT_GENMASK_ANY, get_jiffies_64()); 584 566 585 567 return e ? &e->ext : NULL; 586 568 }
+1 -2
net/netfilter/nft_set_pipapo_avx2.c
··· 1275 1275 const u32 *key) 1276 1276 { 1277 1277 struct nft_pipapo *priv = nft_set_priv(set); 1278 - u8 genmask = nft_genmask_cur(net); 1279 1278 const struct nft_pipapo_match *m; 1280 1279 const u8 *rp = (const u8 *)key; 1281 1280 const struct nft_pipapo_elem *e; ··· 1292 1293 1293 1294 m = rcu_dereference(priv->match); 1294 1295 1295 - e = pipapo_get_avx2(m, rp, genmask, get_jiffies_64()); 1296 + e = pipapo_get_avx2(m, rp, NFT_GENMASK_ANY, get_jiffies_64()); 1296 1297 local_bh_enable(); 1297 1298 1298 1299 return e ? &e->ext : NULL;
+3 -3
net/netfilter/nft_set_rbtree.c
··· 77 77 nft_rbtree_interval_end(rbe) && 78 78 nft_rbtree_interval_start(interval)) 79 79 continue; 80 - interval = rbe; 80 + if (nft_set_elem_active(&rbe->ext, genmask) && 81 + !nft_rbtree_elem_expired(rbe)) 82 + interval = rbe; 81 83 } else if (d > 0) 82 84 parent = rcu_dereference_raw(parent->rb_right); 83 85 else { ··· 104 102 } 105 103 106 104 if (set->flags & NFT_SET_INTERVAL && interval != NULL && 107 - nft_set_elem_active(&interval->ext, genmask) && 108 - !nft_rbtree_elem_expired(interval) && 109 105 nft_rbtree_interval_start(interval)) 110 106 return &interval->ext; 111 107
+3
net/netlink/genetlink.c
··· 1836 1836 !ns_capable(net->user_ns, CAP_SYS_ADMIN)) 1837 1837 ret = -EPERM; 1838 1838 1839 + if (ret) 1840 + break; 1841 + 1839 1842 if (family->bind) 1840 1843 family->bind(i); 1841 1844
-2
net/sunrpc/sched.c
··· 276 276 277 277 static int rpc_wait_bit_killable(struct wait_bit_key *key, int mode) 278 278 { 279 - if (unlikely(current->flags & PF_EXITING)) 280 - return -EINTR; 281 279 schedule(); 282 280 if (signal_pending_state(mode, current)) 283 281 return -ERESTARTSYS;
+3 -3
net/sunrpc/xprtsock.c
··· 407 407 iov_iter_kvec(&msg.msg_iter, ITER_DEST, &alert_kvec, 1, 408 408 alert_kvec.iov_len); 409 409 ret = sock_recvmsg(sock, &msg, flags); 410 - if (ret > 0 && 411 - tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT) { 412 - iov_iter_revert(&msg.msg_iter, ret); 410 + if (ret > 0) { 411 + if (tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT) 412 + iov_iter_revert(&msg.msg_iter, ret); 413 413 ret = xs_sock_process_cmsg(sock, &msg, msg_flags, &u.cmsg, 414 414 -EAGAIN); 415 415 }
+7 -6
net/wireless/nl80211.c
··· 7062 7062 u32 seq, int flags, 7063 7063 struct cfg80211_registered_device *rdev, 7064 7064 struct net_device *dev, 7065 - const u8 *mac_addr, struct station_info *sinfo) 7065 + const u8 *mac_addr, struct station_info *sinfo, 7066 + bool link_stats) 7066 7067 { 7067 7068 void *hdr; 7068 7069 struct nlattr *sinfoattr, *bss_param; ··· 7284 7283 goto nla_put_failure; 7285 7284 } 7286 7285 7287 - if (sinfo->valid_links) { 7286 + if (link_stats && sinfo->valid_links) { 7288 7287 links = nla_nest_start(msg, NL80211_ATTR_MLO_LINKS); 7289 7288 if (!links) 7290 7289 goto nla_put_failure; ··· 7575 7574 NETLINK_CB(cb->skb).portid, 7576 7575 cb->nlh->nlmsg_seq, NLM_F_MULTI, 7577 7576 rdev, wdev->netdev, mac_addr, 7578 - &sinfo) < 0) 7577 + &sinfo, false) < 0) 7579 7578 goto out; 7580 7579 7581 7580 sta_idx++; ··· 7636 7635 7637 7636 if (nl80211_send_station(msg, NL80211_CMD_NEW_STATION, 7638 7637 info->snd_portid, info->snd_seq, 0, 7639 - rdev, dev, mac_addr, &sinfo) < 0) { 7638 + rdev, dev, mac_addr, &sinfo, false) < 0) { 7640 7639 nlmsg_free(msg); 7641 7640 return -ENOBUFS; 7642 7641 } ··· 19681 19680 return; 19682 19681 19683 19682 if (nl80211_send_station(msg, NL80211_CMD_NEW_STATION, 0, 0, 0, 19684 - rdev, dev, mac_addr, sinfo) < 0) { 19683 + rdev, dev, mac_addr, sinfo, false) < 0) { 19685 19684 nlmsg_free(msg); 19686 19685 return; 19687 19686 } ··· 19711 19710 } 19712 19711 19713 19712 if (nl80211_send_station(msg, NL80211_CMD_DEL_STATION, 0, 0, 0, 19714 - rdev, dev, mac_addr, sinfo) < 0) { 19713 + rdev, dev, mac_addr, sinfo, false) < 0) { 19715 19714 nlmsg_free(msg); 19716 19715 return; 19717 19716 }
+99 -14
net/xdp/xsk.c
··· 36 36 #define TX_BATCH_SIZE 32 37 37 #define MAX_PER_SOCKET_BUDGET 32 38 38 39 + struct xsk_addr_node { 40 + u64 addr; 41 + struct list_head addr_node; 42 + }; 43 + 44 + struct xsk_addr_head { 45 + u32 num_descs; 46 + struct list_head addrs_list; 47 + }; 48 + 49 + static struct kmem_cache *xsk_tx_generic_cache; 50 + 51 + #define XSKCB(skb) ((struct xsk_addr_head *)((skb)->cb)) 52 + 39 53 void xsk_set_rx_need_wakeup(struct xsk_buff_pool *pool) 40 54 { 41 55 if (pool->cached_need_wakeup & XDP_WAKEUP_RX) ··· 546 532 return dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags); 547 533 } 548 534 549 - static int xsk_cq_reserve_addr_locked(struct xsk_buff_pool *pool, u64 addr) 535 + static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool) 550 536 { 551 537 unsigned long flags; 552 538 int ret; 553 539 554 540 spin_lock_irqsave(&pool->cq_lock, flags); 555 - ret = xskq_prod_reserve_addr(pool->cq, addr); 541 + ret = xskq_prod_reserve(pool->cq); 556 542 spin_unlock_irqrestore(&pool->cq_lock, flags); 557 543 558 544 return ret; 559 545 } 560 546 561 - static void xsk_cq_submit_locked(struct xsk_buff_pool *pool, u32 n) 547 + static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool, 548 + struct sk_buff *skb) 562 549 { 550 + struct xsk_addr_node *pos, *tmp; 551 + u32 descs_processed = 0; 563 552 unsigned long flags; 553 + u32 idx; 564 554 565 555 spin_lock_irqsave(&pool->cq_lock, flags); 566 - xskq_prod_submit_n(pool->cq, n); 556 + idx = xskq_get_prod(pool->cq); 557 + 558 + xskq_prod_write_addr(pool->cq, idx, 559 + (u64)(uintptr_t)skb_shinfo(skb)->destructor_arg); 560 + descs_processed++; 561 + 562 + if (unlikely(XSKCB(skb)->num_descs > 1)) { 563 + list_for_each_entry_safe(pos, tmp, &XSKCB(skb)->addrs_list, addr_node) { 564 + xskq_prod_write_addr(pool->cq, idx + descs_processed, 565 + pos->addr); 566 + descs_processed++; 567 + list_del(&pos->addr_node); 568 + kmem_cache_free(xsk_tx_generic_cache, pos); 569 + } 570 + } 571 + xskq_prod_submit_n(pool->cq, 
descs_processed); 567 572 spin_unlock_irqrestore(&pool->cq_lock, flags); 568 573 } 569 574 ··· 595 562 spin_unlock_irqrestore(&pool->cq_lock, flags); 596 563 } 597 564 565 + static void xsk_inc_num_desc(struct sk_buff *skb) 566 + { 567 + XSKCB(skb)->num_descs++; 568 + } 569 + 598 570 static u32 xsk_get_num_desc(struct sk_buff *skb) 599 571 { 600 - return skb ? (long)skb_shinfo(skb)->destructor_arg : 0; 572 + return XSKCB(skb)->num_descs; 601 573 } 602 574 603 575 static void xsk_destruct_skb(struct sk_buff *skb) ··· 614 576 *compl->tx_timestamp = ktime_get_tai_fast_ns(); 615 577 } 616 578 617 - xsk_cq_submit_locked(xdp_sk(skb->sk)->pool, xsk_get_num_desc(skb)); 579 + xsk_cq_submit_addr_locked(xdp_sk(skb->sk)->pool, skb); 618 580 sock_wfree(skb); 619 581 } 620 582 621 - static void xsk_set_destructor_arg(struct sk_buff *skb) 583 + static void xsk_set_destructor_arg(struct sk_buff *skb, u64 addr) 622 584 { 623 - long num = xsk_get_num_desc(xdp_sk(skb->sk)->skb) + 1; 624 - 625 - skb_shinfo(skb)->destructor_arg = (void *)num; 585 + BUILD_BUG_ON(sizeof(struct xsk_addr_head) > sizeof(skb->cb)); 586 + INIT_LIST_HEAD(&XSKCB(skb)->addrs_list); 587 + XSKCB(skb)->num_descs = 0; 588 + skb_shinfo(skb)->destructor_arg = (void *)(uintptr_t)addr; 626 589 } 627 590 628 591 static void xsk_consume_skb(struct sk_buff *skb) 629 592 { 630 593 struct xdp_sock *xs = xdp_sk(skb->sk); 594 + u32 num_descs = xsk_get_num_desc(skb); 595 + struct xsk_addr_node *pos, *tmp; 596 + 597 + if (unlikely(num_descs > 1)) { 598 + list_for_each_entry_safe(pos, tmp, &XSKCB(skb)->addrs_list, addr_node) { 599 + list_del(&pos->addr_node); 600 + kmem_cache_free(xsk_tx_generic_cache, pos); 601 + } 602 + } 631 603 632 604 skb->destructor = sock_wfree; 633 - xsk_cq_cancel_locked(xs->pool, xsk_get_num_desc(skb)); 605 + xsk_cq_cancel_locked(xs->pool, num_descs); 634 606 /* Free skb without triggering the perf drop trace */ 635 607 consume_skb(skb); 636 608 xs->skb = NULL; ··· 657 609 { 658 610 struct xsk_buff_pool 
*pool = xs->pool; 659 611 u32 hr, len, ts, offset, copy, copied; 612 + struct xsk_addr_node *xsk_addr; 660 613 struct sk_buff *skb = xs->skb; 661 614 struct page *page; 662 615 void *buffer; ··· 672 623 return ERR_PTR(err); 673 624 674 625 skb_reserve(skb, hr); 626 + 627 + xsk_set_destructor_arg(skb, desc->addr); 628 + } else { 629 + xsk_addr = kmem_cache_zalloc(xsk_tx_generic_cache, GFP_KERNEL); 630 + if (!xsk_addr) 631 + return ERR_PTR(-ENOMEM); 632 + 633 + /* in case of -EOVERFLOW that could happen below, 634 + * xsk_consume_skb() will release this node as whole skb 635 + * would be dropped, which implies freeing all list elements 636 + */ 637 + xsk_addr->addr = desc->addr; 638 + list_add_tail(&xsk_addr->addr_node, &XSKCB(skb)->addrs_list); 675 639 } 676 640 677 641 addr = desc->addr; ··· 756 694 err = skb_store_bits(skb, 0, buffer, len); 757 695 if (unlikely(err)) 758 696 goto free_err; 697 + 698 + xsk_set_destructor_arg(skb, desc->addr); 759 699 } else { 760 700 int nr_frags = skb_shinfo(skb)->nr_frags; 701 + struct xsk_addr_node *xsk_addr; 761 702 struct page *page; 762 703 u8 *vaddr; 763 704 ··· 775 710 goto free_err; 776 711 } 777 712 713 + xsk_addr = kmem_cache_zalloc(xsk_tx_generic_cache, GFP_KERNEL); 714 + if (!xsk_addr) { 715 + __free_page(page); 716 + err = -ENOMEM; 717 + goto free_err; 718 + } 719 + 778 720 vaddr = kmap_local_page(page); 779 721 memcpy(vaddr, buffer, len); 780 722 kunmap_local(vaddr); 781 723 782 724 skb_add_rx_frag(skb, nr_frags, page, 0, len, PAGE_SIZE); 783 725 refcount_add(PAGE_SIZE, &xs->sk.sk_wmem_alloc); 726 + 727 + xsk_addr->addr = desc->addr; 728 + list_add_tail(&xsk_addr->addr_node, &XSKCB(skb)->addrs_list); 784 729 } 785 730 786 731 if (first_frag && desc->options & XDP_TX_METADATA) { ··· 834 759 skb->mark = READ_ONCE(xs->sk.sk_mark); 835 760 skb->destructor = xsk_destruct_skb; 836 761 xsk_tx_metadata_to_compl(meta, &skb_shinfo(skb)->xsk_meta); 837 - xsk_set_destructor_arg(skb); 762 + xsk_inc_num_desc(skb); 838 763 839 764 
return skb; 840 765 ··· 844 769 845 770 if (err == -EOVERFLOW) { 846 771 /* Drop the packet */ 847 - xsk_set_destructor_arg(xs->skb); 772 + xsk_inc_num_desc(xs->skb); 848 773 xsk_drop_skb(xs->skb); 849 774 xskq_cons_release(xs->tx); 850 775 } else { ··· 887 812 * if there is space in it. This avoids having to implement 888 813 * any buffering in the Tx path. 889 814 */ 890 - err = xsk_cq_reserve_addr_locked(xs->pool, desc.addr); 815 + err = xsk_cq_reserve_locked(xs->pool); 891 816 if (err) { 892 817 err = -EAGAIN; 893 818 goto out; ··· 1890 1815 if (err) 1891 1816 goto out_pernet; 1892 1817 1818 + xsk_tx_generic_cache = kmem_cache_create("xsk_generic_xmit_cache", 1819 + sizeof(struct xsk_addr_node), 1820 + 0, SLAB_HWCACHE_ALIGN, NULL); 1821 + if (!xsk_tx_generic_cache) { 1822 + err = -ENOMEM; 1823 + goto out_unreg_notif; 1824 + } 1825 + 1893 1826 return 0; 1894 1827 1828 + out_unreg_notif: 1829 + unregister_netdevice_notifier(&xsk_netdev_notifier); 1895 1830 out_pernet: 1896 1831 unregister_pernet_subsys(&xsk_net_ops); 1897 1832 out_sk:
+12
net/xdp/xsk_queue.h
··· 344 344 345 345 /* Functions for producers */ 346 346 347 + static inline u32 xskq_get_prod(struct xsk_queue *q) 348 + { 349 + return READ_ONCE(q->ring->producer); 350 + } 351 + 347 352 static inline u32 xskq_prod_nb_free(struct xsk_queue *q, u32 max) 348 353 { 349 354 u32 free_entries = q->nentries - (q->cached_prod - q->cached_cons); ··· 393 388 /* A, matches D */ 394 389 ring->desc[q->cached_prod++ & q->ring_mask] = addr; 395 390 return 0; 391 + } 392 + 393 + static inline void xskq_prod_write_addr(struct xsk_queue *q, u32 idx, u64 addr) 394 + { 395 + struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring; 396 + 397 + ring->desc[idx & q->ring_mask] = addr; 396 398 } 397 399 398 400 static inline void xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_desc *descs,
+10 -5
rust/kernel/lib.rs
··· 296 296 297 297 /// Gets the C string file name of a [`Location`]. 298 298 /// 299 - /// If `file_with_nul()` is not available, returns a string that warns about it. 299 + /// If `Location::file_as_c_str()` is not available, returns a string that warns about it. 300 300 /// 301 301 /// [`Location`]: core::panic::Location 302 302 /// ··· 310 310 /// let caller = core::panic::Location::caller(); 311 311 /// 312 312 /// // Output: 313 - /// // - A path like "rust/kernel/example.rs" if file_with_nul() is available. 314 - /// // - "<Location::file_with_nul() not supported>" otherwise. 313 + /// // - A path like "rust/kernel/example.rs" if `file_as_c_str()` is available. 314 + /// // - "<Location::file_as_c_str() not supported>" otherwise. 315 315 /// let caller_file = file_from_location(caller); 316 316 /// 317 317 /// // Prints out the message with caller's file name. ··· 326 326 /// ``` 327 327 #[inline] 328 328 pub fn file_from_location<'a>(loc: &'a core::panic::Location<'a>) -> &'a core::ffi::CStr { 329 - #[cfg(CONFIG_RUSTC_HAS_FILE_WITH_NUL)] 329 + #[cfg(CONFIG_RUSTC_HAS_FILE_AS_C_STR)] 330 + { 331 + loc.file_as_c_str() 332 + } 333 + 334 + #[cfg(all(CONFIG_RUSTC_HAS_FILE_WITH_NUL, not(CONFIG_RUSTC_HAS_FILE_AS_C_STR)))] 330 335 { 331 336 loc.file_with_nul() 332 337 } ··· 339 334 #[cfg(not(CONFIG_RUSTC_HAS_FILE_WITH_NUL))] 340 335 { 341 336 let _ = loc; 342 - c"<Location::file_with_nul() not supported>" 337 + c"<Location::file_as_c_str() not supported>" 343 338 } 344 339 }
+1 -1
samples/ftrace/ftrace-direct-modify.c
··· 75 75 CALL_DEPTH_ACCOUNT 76 76 " call my_direct_func1\n" 77 77 " leave\n" 78 - " .size my_tramp1, .-my_tramp1\n" 79 78 ASM_RET 79 + " .size my_tramp1, .-my_tramp1\n" 80 80 81 81 " .type my_tramp2, @function\n" 82 82 " .globl my_tramp2\n"
+10 -2
scripts/generate_rust_target.rs
··· 225 225 ts.push("features", features); 226 226 ts.push("llvm-target", "x86_64-linux-gnu"); 227 227 ts.push("supported-sanitizers", ["kcfi", "kernel-address"]); 228 - ts.push("target-pointer-width", "64"); 228 + if cfg.rustc_version_atleast(1, 91, 0) { 229 + ts.push("target-pointer-width", 64); 230 + } else { 231 + ts.push("target-pointer-width", "64"); 232 + } 229 233 } else if cfg.has("X86_32") { 230 234 // This only works on UML, as i386 otherwise needs regparm support in rustc 231 235 if !cfg.has("UML") { ··· 249 245 } 250 246 ts.push("features", features); 251 247 ts.push("llvm-target", "i386-unknown-linux-gnu"); 252 - ts.push("target-pointer-width", "32"); 248 + if cfg.rustc_version_atleast(1, 91, 0) { 249 + ts.push("target-pointer-width", 32); 250 + } else { 251 + ts.push("target-pointer-width", "32"); 252 + } 253 253 } else if cfg.has("LOONGARCH") { 254 254 panic!("loongarch uses the builtin rustc loongarch64-unknown-none-softfloat target"); 255 255 } else {
+1 -1
tools/gpio/Makefile
··· 77 77 78 78 clean: 79 79 rm -f $(ALL_PROGRAMS) 80 - rm -f $(OUTPUT)include/linux/gpio.h 80 + rm -rf $(OUTPUT)include 81 81 find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete -o -name '\.*.cmd' -delete 82 82 83 83 install: $(ALL_PROGRAMS)
+2 -2
tools/perf/tests/pe-file-parsing.c
··· 37 37 size_t idx; 38 38 39 39 scnprintf(filename, PATH_MAX, "%s/pe-file.exe", d); 40 - ret = filename__read_build_id(filename, &bid); 40 + ret = filename__read_build_id(filename, &bid, /*block=*/true); 41 41 TEST_ASSERT_VAL("Failed to read build_id", 42 42 ret == sizeof(expect_build_id)); 43 43 TEST_ASSERT_VAL("Wrong build_id", !memcmp(bid.data, expect_build_id, ··· 49 49 !strcmp(debuglink, expect_debuglink)); 50 50 51 51 scnprintf(debugfile, PATH_MAX, "%s/%s", d, debuglink); 52 - ret = filename__read_build_id(debugfile, &bid); 52 + ret = filename__read_build_id(debugfile, &bid, /*block=*/true); 53 53 TEST_ASSERT_VAL("Failed to read debug file build_id", 54 54 ret == sizeof(expect_build_id)); 55 55 TEST_ASSERT_VAL("Wrong build_id", !memcmp(bid.data, expect_build_id,
+1 -1
tools/perf/tests/shell/test_bpf_metadata.sh
··· 61 61 /perf_version/ { 62 62 if (entry) print $NF; 63 63 } 64 - ' | egrep "$VERS" > /dev/null 64 + ' | grep -qF "$VERS" 65 65 then 66 66 echo "Basic BPF metadata test [Failed invalid output]" 67 67 err=1
+27 -12
tools/perf/util/bpf-event.c
··· 657 657 info_node->info_linear = info_linear; 658 658 info_node->metadata = NULL; 659 659 if (!perf_env__insert_bpf_prog_info(env, info_node)) { 660 - free(info_linear); 660 + /* 661 + * Insert failed, likely because of a duplicate event 662 + * made by the sideband thread. Ignore synthesizing the 663 + * metadata. 664 + */ 661 665 free(info_node); 666 + goto out; 662 667 } 668 + /* info_linear is now owned by info_node and shouldn't be freed below. */ 663 669 info_linear = NULL; 664 670 665 671 /* ··· 833 827 return err; 834 828 } 835 829 836 - static void perf_env__add_bpf_info(struct perf_env *env, u32 id) 830 + static int perf_env__add_bpf_info(struct perf_env *env, u32 id) 837 831 { 838 832 struct bpf_prog_info_node *info_node; 839 833 struct perf_bpil *info_linear; 840 834 struct btf *btf = NULL; 841 835 u64 arrays; 842 836 u32 btf_id; 843 - int fd; 837 + int fd, err = 0; 844 838 845 839 fd = bpf_prog_get_fd_by_id(id); 846 840 if (fd < 0) 847 - return; 841 + return -EINVAL; 848 842 849 843 arrays = 1UL << PERF_BPIL_JITED_KSYMS; 850 844 arrays |= 1UL << PERF_BPIL_JITED_FUNC_LENS; ··· 858 852 info_linear = get_bpf_prog_info_linear(fd, arrays); 859 853 if (IS_ERR_OR_NULL(info_linear)) { 860 854 pr_debug("%s: failed to get BPF program info. 
aborting\n", __func__); 855 + err = PTR_ERR(info_linear); 861 856 goto out; 862 857 } 863 858 ··· 869 862 info_node->info_linear = info_linear; 870 863 info_node->metadata = bpf_metadata_create(&info_linear->info); 871 864 if (!perf_env__insert_bpf_prog_info(env, info_node)) { 865 + pr_debug("%s: duplicate add bpf info request for id %u\n", 866 + __func__, btf_id); 872 867 free(info_linear); 873 868 free(info_node); 869 + goto out; 874 870 } 875 - } else 871 + } else { 876 872 free(info_linear); 873 + err = -ENOMEM; 874 + goto out; 875 + } 877 876 878 877 if (btf_id == 0) 879 878 goto out; 880 879 881 880 btf = btf__load_from_kernel_by_id(btf_id); 882 - if (libbpf_get_error(btf)) { 883 - pr_debug("%s: failed to get BTF of id %u, aborting\n", 884 - __func__, btf_id); 885 - goto out; 881 + if (!btf) { 882 + err = -errno; 883 + pr_debug("%s: failed to get BTF of id %u %d\n", __func__, btf_id, err); 884 + } else { 885 + perf_env__fetch_btf(env, btf_id, btf); 886 886 } 887 - perf_env__fetch_btf(env, btf_id, btf); 888 887 889 888 out: 890 889 btf__free(btf); 891 890 close(fd); 891 + return err; 892 892 } 893 893 894 894 static int bpf_event__sb_cb(union perf_event *event, void *data) 895 895 { 896 896 struct perf_env *env = data; 897 + int ret = 0; 897 898 898 899 if (event->header.type != PERF_RECORD_BPF_EVENT) 899 900 return -1; 900 901 901 902 switch (event->bpf.type) { 902 903 case PERF_BPF_EVENT_PROG_LOAD: 903 - perf_env__add_bpf_info(env, event->bpf.id); 904 + ret = perf_env__add_bpf_info(env, event->bpf.id); 904 905 905 906 case PERF_BPF_EVENT_PROG_UNLOAD: 906 907 /* ··· 922 907 break; 923 908 } 924 909 925 - return 0; 910 + return ret; 926 911 } 927 912 928 913 int evlist__add_bpf_sb_event(struct evlist *evlist, struct perf_env *env)
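The change above converts `perf_env__add_bpf_info()` from `void` to an `int` returning a negative errno, and has the sideband callback forward that value instead of an unconditional 0. A minimal sketch of the pattern, under the assumption that the callback's caller treats a nonzero return as failure (`add_info()` and `sb_cb()` are stand-in names, not the perf functions):

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for perf_env__add_bpf_info(): report failure as a
 * negative errno instead of silently returning void. */
static int add_info(int id)
{
	if (id < 0)
		return -EINVAL; /* e.g. bpf_prog_get_fd_by_id() failed */
	return 0;
}

/* Stand-in for the sideband callback: propagate the helper's
 * result instead of always returning 0. */
static int sb_cb(int id)
{
	int ret = add_info(id); /* was: a void call, ret always 0 */

	return ret;
}
```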
+39 -22
tools/perf/util/bpf-utils.c
··· 20 20 */ 21 21 }; 22 22 23 - static struct bpil_array_desc bpil_array_desc[] = { 23 + static const struct bpil_array_desc bpil_array_desc[] = { 24 24 [PERF_BPIL_JITED_INSNS] = { 25 25 offsetof(struct bpf_prog_info, jited_prog_insns), 26 26 offsetof(struct bpf_prog_info, jited_prog_len), ··· 115 115 __u32 info_len = sizeof(info); 116 116 __u32 data_len = 0; 117 117 int i, err; 118 - void *ptr; 118 + __u8 *ptr; 119 119 120 120 if (arrays >> PERF_BPIL_LAST_ARRAY) 121 121 return ERR_PTR(-EINVAL); ··· 126 126 pr_debug("can't get prog info: %s", strerror(errno)); 127 127 return ERR_PTR(-EFAULT); 128 128 } 129 + if (info.type >= __MAX_BPF_PROG_TYPE) 130 + pr_debug("%s:%d: unexpected program type %u\n", __func__, __LINE__, info.type); 129 131 130 132 /* step 2: calculate total size of all arrays */ 131 133 for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) { 134 + const struct bpil_array_desc *desc = &bpil_array_desc[i]; 132 135 bool include_array = (arrays & (1UL << i)) > 0; 133 - struct bpil_array_desc *desc; 134 136 __u32 count, size; 135 - 136 - desc = bpil_array_desc + i; 137 137 138 138 /* kernel is too old to support this field */ 139 139 if (info_len < desc->array_offset + sizeof(__u32) || ··· 163 163 ptr = info_linear->data; 164 164 165 165 for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) { 166 - struct bpil_array_desc *desc; 166 + const struct bpil_array_desc *desc = &bpil_array_desc[i]; 167 167 __u32 count, size; 168 168 169 169 if ((arrays & (1UL << i)) == 0) 170 170 continue; 171 171 172 - desc = bpil_array_desc + i; 173 172 count = bpf_prog_info_read_offset_u32(&info, desc->count_offset); 174 173 size = bpf_prog_info_read_offset_u32(&info, desc->size_offset); 175 174 bpf_prog_info_set_offset_u32(&info_linear->info, 176 175 desc->count_offset, count); 177 176 bpf_prog_info_set_offset_u32(&info_linear->info, 178 177 desc->size_offset, size); 178 + assert(ptr >= info_linear->data); 179 + assert(ptr < &info_linear->data[data_len]); 
179 180 bpf_prog_info_set_offset_u64(&info_linear->info, 180 181 desc->array_offset, 181 182 ptr_to_u64(ptr)); ··· 190 189 free(info_linear); 191 190 return ERR_PTR(-EFAULT); 192 191 } 192 + if (info_linear->info.type >= __MAX_BPF_PROG_TYPE) { 193 + pr_debug("%s:%d: unexpected program type %u\n", 194 + __func__, __LINE__, info_linear->info.type); 195 + } 193 196 194 197 /* step 6: verify the data */ 198 + ptr = info_linear->data; 195 199 for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) { 196 - struct bpil_array_desc *desc; 197 - __u32 v1, v2; 200 + const struct bpil_array_desc *desc = &bpil_array_desc[i]; 201 + __u32 count1, count2, size1, size2; 202 + __u64 ptr2; 198 203 199 204 if ((arrays & (1UL << i)) == 0) 200 205 continue; 201 206 202 - desc = bpil_array_desc + i; 203 - v1 = bpf_prog_info_read_offset_u32(&info, desc->count_offset); 204 - v2 = bpf_prog_info_read_offset_u32(&info_linear->info, 207 + count1 = bpf_prog_info_read_offset_u32(&info, desc->count_offset); 208 + count2 = bpf_prog_info_read_offset_u32(&info_linear->info, 205 209 desc->count_offset); 206 - if (v1 != v2) 207 - pr_warning("%s: mismatch in element count\n", __func__); 210 + if (count1 != count2) { 211 + pr_warning("%s: mismatch in element count %u vs %u\n", __func__, count1, count2); 212 + free(info_linear); 213 + return ERR_PTR(-ERANGE); 214 + } 208 215 209 - v1 = bpf_prog_info_read_offset_u32(&info, desc->size_offset); 210 - v2 = bpf_prog_info_read_offset_u32(&info_linear->info, 216 + size1 = bpf_prog_info_read_offset_u32(&info, desc->size_offset); 217 + size2 = bpf_prog_info_read_offset_u32(&info_linear->info, 211 218 desc->size_offset); 212 - if (v1 != v2) 213 - pr_warning("%s: mismatch in rec size\n", __func__); 219 + if (size1 != size2) { 220 + pr_warning("%s: mismatch in rec size %u vs %u\n", __func__, size1, size2); 221 + free(info_linear); 222 + return ERR_PTR(-ERANGE); 223 + } 224 + ptr2 = bpf_prog_info_read_offset_u64(&info_linear->info, desc->array_offset); 225 + 
if (ptr_to_u64(ptr) != ptr2) { 226 + pr_warning("%s: mismatch in array %p vs %llx\n", __func__, ptr, ptr2); 227 + free(info_linear); 228 + return ERR_PTR(-ERANGE); 229 + } 230 + ptr += roundup(count1 * size1, sizeof(__u64)); 214 231 } 215 232 216 233 /* step 7: update info_len and data_len */ ··· 243 224 int i; 244 225 245 226 for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) { 246 - struct bpil_array_desc *desc; 227 + const struct bpil_array_desc *desc = &bpil_array_desc[i]; 247 228 __u64 addr, offs; 248 229 249 230 if ((info_linear->arrays & (1UL << i)) == 0) 250 231 continue; 251 232 252 - desc = bpil_array_desc + i; 253 233 addr = bpf_prog_info_read_offset_u64(&info_linear->info, 254 234 desc->array_offset); 255 235 offs = addr - ptr_to_u64(info_linear->data); ··· 262 244 int i; 263 245 264 246 for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) { 265 - struct bpil_array_desc *desc; 247 + const struct bpil_array_desc *desc = &bpil_array_desc[i]; 266 248 __u64 addr, offs; 267 249 268 250 if ((info_linear->arrays & (1UL << i)) == 0) 269 251 continue; 270 252 271 - desc = bpil_array_desc + i; 272 253 offs = bpf_prog_info_read_offset_u64(&info_linear->info, 273 254 desc->array_offset); 274 255 addr = offs + ptr_to_u64(info_linear->data);
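The verification loop added above walks the linear buffer with `ptr += roundup(count1 * size1, sizeof(__u64))`, reflecting how `get_bpf_prog_info_linear()` packs each variable-length array into one allocation with every array starting 8-byte aligned. A sketch of that offset computation, with illustrative names rather than the perf helpers:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Each array i occupies roundup(counts[i] * sizes[i], 8) bytes in the
 * linear buffer, so array `upto` begins at the sum of the rounded sizes
 * of all arrays before it. */
#define ROUNDUP8(x) (((x) + (size_t)7) & ~(size_t)7)

static size_t linear_offset(const uint32_t *counts, const uint32_t *sizes,
			    int upto)
{
	size_t off = 0;

	for (int i = 0; i < upto; i++)
		off += ROUNDUP8((size_t)counts[i] * sizes[i]);
	return off;
}
```

Rounding each array up to an 8-byte multiple is what makes the `assert(ptr >= info_linear->data)` / in-bounds checks in the hunk meaningful: the expected cursor position is fully determined by the counts and sizes read back from the kernel.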
+7 -3
tools/perf/util/symbol-elf.c
··· 873 873 874 874 #ifdef HAVE_LIBBFD_BUILDID_SUPPORT 875 875 876 - static int read_build_id(const char *filename, struct build_id *bid) 876 + static int read_build_id(const char *filename, struct build_id *bid, bool block) 877 877 { 878 878 size_t size = sizeof(bid->data); 879 - int err = -1; 879 + int err = -1, fd; 880 880 bfd *abfd; 881 881 882 - abfd = bfd_openr(filename, NULL); 882 + fd = open(filename, block ? O_RDONLY : (O_RDONLY | O_NONBLOCK)); 883 + if (fd < 0) 884 + return -1; 885 + 886 + abfd = bfd_fdopenr(filename, /*target=*/NULL, fd); 883 887 if (!abfd) 884 888 return -1; 885 889
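The `read_build_id()` change above opens the file itself, optionally with `O_NONBLOCK`, before handing the descriptor to libbfd via `bfd_fdopenr()`. The point of the flag is that opening a FIFO (or similar special file) for reading returns immediately instead of blocking until a writer appears; for regular files it has no effect. A minimal sketch of just the open pattern (the wrapper name is illustrative):

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Open read-only; when block is false, add O_NONBLOCK so an open of a
 * FIFO cannot stall the caller. Regular files behave identically
 * either way. */
static int open_maybe_nonblock(const char *path, int block)
{
	return open(path, block ? O_RDONLY : (O_RDONLY | O_NONBLOCK));
}
```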
+4
tools/testing/selftests/bpf/prog_tests/free_timer.c
··· 124 124 int err; 125 125 126 126 skel = free_timer__open_and_load(); 127 + if (!skel && errno == EOPNOTSUPP) { 128 + test__skip(); 129 + return; 130 + } 127 131 if (!ASSERT_OK_PTR(skel, "open_load")) 128 132 return; 129 133
+4
tools/testing/selftests/bpf/prog_tests/timer.c
··· 86 86 int err; 87 87 88 88 timer_skel = timer__open_and_load(); 89 + if (!timer_skel && errno == EOPNOTSUPP) { 90 + test__skip(); 91 + return; 92 + } 89 93 if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load")) 90 94 return; 91 95
+4
tools/testing/selftests/bpf/prog_tests/timer_crash.c
··· 12 12 struct timer_crash *skel; 13 13 14 14 skel = timer_crash__open_and_load(); 15 + if (!skel && errno == EOPNOTSUPP) { 16 + test__skip(); 17 + return; 18 + } 15 19 if (!ASSERT_OK_PTR(skel, "timer_crash__open_and_load")) 16 20 return; 17 21 skel->bss->pid = getpid();
+4
tools/testing/selftests/bpf/prog_tests/timer_lockup.c
··· 59 59 } 60 60 61 61 skel = timer_lockup__open_and_load(); 62 + if (!skel && errno == EOPNOTSUPP) { 63 + test__skip(); 64 + return; 65 + } 62 66 if (!ASSERT_OK_PTR(skel, "timer_lockup__open_and_load")) 63 67 return; 64 68
+4
tools/testing/selftests/bpf/prog_tests/timer_mim.c
··· 65 65 goto cleanup; 66 66 67 67 timer_skel = timer_mim__open_and_load(); 68 + if (!timer_skel && errno == EOPNOTSUPP) { 69 + test__skip(); 70 + return; 71 + } 68 72 if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load")) 69 73 goto cleanup; 70 74
+2 -2
tools/testing/selftests/bpf/progs/bpf_arena_spin_lock.h
··· 302 302 * barriers. 303 303 */ 304 304 if (val & _Q_LOCKED_MASK) 305 - smp_cond_load_acquire_label(&lock->locked, !VAL, release_err); 305 + (void)smp_cond_load_acquire_label(&lock->locked, !VAL, release_err); 306 306 307 307 /* 308 308 * take ownership and clear the pending bit. ··· 380 380 /* Link @node into the waitqueue. */ 381 381 WRITE_ONCE(prev->next, node); 382 382 383 - arch_mcs_spin_lock_contended_label(&node->locked, release_node_err); 383 + (void)arch_mcs_spin_lock_contended_label(&node->locked, release_node_err); 384 384 385 385 /* 386 386 * While waiting for the MCS lock, the next pointer may have
+32 -14
tools/testing/selftests/bpf/progs/crypto_sanity.c
··· 14 14 u16 udp_test_port = 7777; 15 15 u32 authsize, key_len; 16 16 char algo[128] = {}; 17 - char dst[16] = {}; 17 + char dst[16] = {}, dst_bad[8] = {}; 18 18 int status; 19 19 20 20 static int skb_dynptr_validate(struct __sk_buff *skb, struct bpf_dynptr *psrc) ··· 59 59 .authsize = authsize, 60 60 }; 61 61 struct bpf_crypto_ctx *cctx; 62 - int err = 0; 62 + int err; 63 63 64 64 status = 0; 65 - 66 65 if (key_len > 256) { 67 66 status = -EINVAL; 68 67 return 0; ··· 69 70 70 71 __builtin_memcpy(&params.algo, algo, sizeof(algo)); 71 72 __builtin_memcpy(&params.key, key, sizeof(key)); 72 - cctx = bpf_crypto_ctx_create(&params, sizeof(params), &err); 73 73 74 + cctx = bpf_crypto_ctx_create(&params, sizeof(params), &err); 74 75 if (!cctx) { 75 76 status = err; 76 77 return 0; ··· 79 80 err = crypto_ctx_insert(cctx); 80 81 if (err && err != -EEXIST) 81 82 status = err; 82 - 83 83 return 0; 84 84 } 85 85 ··· 90 92 struct bpf_dynptr psrc, pdst; 91 93 int err; 92 94 95 + status = 0; 93 96 err = skb_dynptr_validate(skb, &psrc); 94 97 if (err < 0) { 95 98 status = err; ··· 109 110 return TC_ACT_SHOT; 110 111 } 111 112 112 - /* dst is a global variable to make testing part easier to check. In real 113 - * production code, a percpu map should be used to store the result. 113 + /* Check also bad case where the dst buffer is smaller than the 114 + * skb's linear section. 115 + */ 116 + bpf_dynptr_from_mem(dst_bad, sizeof(dst_bad), 0, &pdst); 117 + status = bpf_crypto_decrypt(ctx, &psrc, &pdst, NULL); 118 + if (!status) 119 + status = -EIO; 120 + if (status != -EINVAL) 121 + goto err; 122 + 123 + /* dst is a global variable to make testing part easier to check. 124 + * In real production code, a percpu map should be used to store 125 + * the result. 
114 126 */ 115 127 bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst); 116 - 117 128 status = bpf_crypto_decrypt(ctx, &psrc, &pdst, NULL); 118 - 129 + err: 119 130 return TC_ACT_SHOT; 120 131 } 121 132 ··· 138 129 int err; 139 130 140 131 status = 0; 141 - 142 132 err = skb_dynptr_validate(skb, &psrc); 143 133 if (err < 0) { 144 134 status = err; ··· 156 148 return TC_ACT_SHOT; 157 149 } 158 150 159 - /* dst is a global variable to make testing part easier to check. In real 160 - * production code, a percpu map should be used to store the result. 151 + /* Check also bad case where the dst buffer is smaller than the 152 + * skb's linear section. 153 + */ 154 + bpf_dynptr_from_mem(dst_bad, sizeof(dst_bad), 0, &pdst); 155 + status = bpf_crypto_encrypt(ctx, &psrc, &pdst, NULL); 156 + if (!status) 157 + status = -EIO; 158 + if (status != -EINVAL) 159 + goto err; 160 + 161 + /* dst is a global variable to make testing part easier to check. 162 + * In real production code, a percpu map should be used to store 163 + * the result. 161 164 */ 162 165 bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst); 163 - 164 166 status = bpf_crypto_encrypt(ctx, &psrc, &pdst, NULL); 165 - 167 + err: 166 168 return TC_ACT_SHOT; 167 169 } 168 170
+2 -3
tools/testing/selftests/bpf/progs/linked_list_fail.c
··· 226 226 SEC("?tc") 227 227 int obj_new_no_struct(void *ctx) 228 228 { 229 - 230 - bpf_obj_new(union { int data; unsigned udata; }); 229 + (void)bpf_obj_new(union { int data; unsigned udata; }); 231 230 return 0; 232 231 } 233 232 ··· 251 252 SEC("?tc") 252 253 int obj_new_acq(void *ctx) 253 254 { 254 - bpf_obj_new(struct foo); 255 + (void)bpf_obj_new(struct foo); 255 256 return 0; 256 257 } 257 258
+6 -2
tools/testing/selftests/bpf/progs/string_kfuncs_success.c
··· 30 30 __test(6) int test_strstr_found(void *ctx) { return bpf_strstr(str, "world"); } 31 31 __test(-ENOENT) int test_strstr_notfound(void *ctx) { return bpf_strstr(str, "hi"); } 32 32 __test(0) int test_strstr_empty(void *ctx) { return bpf_strstr(str, ""); } 33 - __test(0) int test_strnstr_found(void *ctx) { return bpf_strnstr(str, "hello", 6); } 34 - __test(-ENOENT) int test_strnstr_notfound(void *ctx) { return bpf_strnstr(str, "hi", 10); } 33 + __test(0) int test_strnstr_found1(void *ctx) { return bpf_strnstr("", "", 0); } 34 + __test(0) int test_strnstr_found2(void *ctx) { return bpf_strnstr(str, "hello", 5); } 35 + __test(0) int test_strnstr_found3(void *ctx) { return bpf_strnstr(str, "hello", 6); } 36 + __test(-ENOENT) int test_strnstr_notfound1(void *ctx) { return bpf_strnstr(str, "hi", 10); } 37 + __test(-ENOENT) int test_strnstr_notfound2(void *ctx) { return bpf_strnstr(str, "hello", 4); } 38 + __test(-ENOENT) int test_strnstr_notfound3(void *ctx) { return bpf_strnstr("", "a", 0); } 35 39 __test(0) int test_strnstr_empty(void *ctx) { return bpf_strnstr(str, "", 1); } 36 40 37 41 char _license[] SEC("license") = "GPL";
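The expanded test matrix above pins down `bpf_strnstr()`'s boundary behavior: the needle must fit entirely within the first `len` bytes of the haystack, so `("hello...", "hello", 5)` is found but `len = 4` is not, and an empty needle matches at offset 0 even with `len = 0`. A userspace model of those semantics, written only to mirror what the tests assert (the helper itself returns `-ENOENT` rather than `-1` on failure):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Model of the bpf_strnstr() semantics the selftests exercise: return
 * the offset of sub within the first len bytes of s, or -1 if sub does
 * not fit entirely inside that window. */
static long strnstr_off(const char *s, const char *sub, size_t len)
{
	size_t sublen = strlen(sub);

	for (size_t i = 0; i + sublen <= len; i++) {
		if (strncmp(s + i, sub, sublen) == 0)
			return (long)i;
		if (s[i] == '\0')
			break; /* haystack ended before the window did */
	}
	return -1;
}
```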
+8 -9
tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
··· 2 2 // Copyright (c) 2025 Miklos Szeredi <miklos@szeredi.hu> 3 3 4 4 #define _GNU_SOURCE 5 + 6 + // Needed for linux/fanotify.h 7 + typedef struct { 8 + int val[2]; 9 + } __kernel_fsid_t; 10 + #define __kernel_fsid_t __kernel_fsid_t 11 + 5 12 #include <fcntl.h> 6 13 #include <sched.h> 7 14 #include <stdio.h> ··· 17 10 #include <sys/mount.h> 18 11 #include <unistd.h> 19 12 #include <sys/syscall.h> 13 + #include <sys/fanotify.h> 20 14 21 15 #include "../../kselftest_harness.h" 22 16 #include "../statmount/statmount.h" 23 17 #include "../utils.h" 24 - 25 - // Needed for linux/fanotify.h 26 - #ifndef __kernel_fsid_t 27 - typedef struct { 28 - int val[2]; 29 - } __kernel_fsid_t; 30 - #endif 31 - 32 - #include <sys/fanotify.h> 33 18 34 19 static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX"; 35 20
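The fix above supplies `__kernel_fsid_t` before any headers are pulled in, and then defines a macro of the same name expanding to itself. That self-expanding define is a classic preprocessor guard trick: a later header protected by `#ifndef` on that name will skip its own conflicting definition, while every ordinary use of the identifier still expands back to the typedef. A toy illustration of the mechanism with hypothetical names (not the actual uapi headers):

```c
#include <assert.h>

/* Provide the type ourselves... */
typedef struct {
	int val[2];
} my_fsid_t;

/* ...then define a macro of the same name expanding to itself, so a
 * later #ifndef guard sees the name as "already defined". */
#define my_fsid_t my_fsid_t

/* Simulates a system header guarding its own definition: */
#ifndef my_fsid_t
typedef struct {
	long bogus;
} my_fsid_t; /* conflicting definition, correctly skipped */
#endif
```

Because the macro expands to its own name, `my_fsid_t` remains usable as the typedef everywhere else in the translation unit.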
+8 -10
tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c
··· 2 2 // Copyright (c) 2025 Miklos Szeredi <miklos@szeredi.hu> 3 3 4 4 #define _GNU_SOURCE 5 + 6 + // Needed for linux/fanotify.h 7 + typedef struct { 8 + int val[2]; 9 + } __kernel_fsid_t; 10 + #define __kernel_fsid_t __kernel_fsid_t 11 + 5 12 #include <fcntl.h> 6 13 #include <sched.h> 7 14 #include <stdio.h> ··· 17 10 #include <sys/mount.h> 18 11 #include <unistd.h> 19 12 #include <sys/syscall.h> 13 + #include <sys/fanotify.h> 20 14 21 15 #include "../../kselftest_harness.h" 22 - #include "../../pidfd/pidfd.h" 23 16 #include "../statmount/statmount.h" 24 17 #include "../utils.h" 25 - 26 - // Needed for linux/fanotify.h 27 - #ifndef __kernel_fsid_t 28 - typedef struct { 29 - int val[2]; 30 - } __kernel_fsid_t; 31 - #endif 32 - 33 - #include <sys/fanotify.h> 34 18 35 19 static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX"; 36 20
+1
tools/testing/selftests/net/Makefile
··· 117 117 TEST_GEN_FILES += skf_net_off 118 118 TEST_GEN_FILES += tfo 119 119 TEST_PROGS += tfo_passive.sh 120 + TEST_PROGS += broadcast_ether_dst.sh 120 121 TEST_PROGS += broadcast_pmtu.sh 121 122 TEST_PROGS += ipv6_force_forwarding.sh 122 123 TEST_GEN_PROGS += ipv6_fragmentation
+83
tools/testing/selftests/net/broadcast_ether_dst.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Author: Brett A C Sheffield <bacs@librecast.net> 5 + # Author: Oscar Maes <oscmaes92@gmail.com> 6 + # 7 + # Ensure destination ethernet field is correctly set for 8 + # broadcast packets 9 + 10 + source lib.sh 11 + 12 + CLIENT_IP4="192.168.0.1" 13 + GW_IP4="192.168.0.2" 14 + 15 + setup() { 16 + setup_ns CLIENT_NS SERVER_NS 17 + 18 + ip -net "${SERVER_NS}" link add link1 type veth \ 19 + peer name link0 netns "${CLIENT_NS}" 20 + 21 + ip -net "${CLIENT_NS}" link set link0 up 22 + ip -net "${CLIENT_NS}" addr add "${CLIENT_IP4}"/24 dev link0 23 + 24 + ip -net "${SERVER_NS}" link set link1 up 25 + 26 + ip -net "${CLIENT_NS}" route add default via "${GW_IP4}" 27 + ip netns exec "${CLIENT_NS}" arp -s "${GW_IP4}" 00:11:22:33:44:55 28 + } 29 + 30 + cleanup() { 31 + rm -f "${CAPFILE}" "${OUTPUT}" 32 + ip -net "${SERVER_NS}" link del link1 33 + cleanup_ns "${CLIENT_NS}" "${SERVER_NS}" 34 + } 35 + 36 + test_broadcast_ether_dst() { 37 + local rc=0 38 + CAPFILE=$(mktemp -u cap.XXXXXXXXXX) 39 + OUTPUT=$(mktemp -u out.XXXXXXXXXX) 40 + 41 + echo "Testing ethernet broadcast destination" 42 + 43 + # start tcpdump listening for icmp 44 + # tcpdump will exit after receiving a single packet 45 + # timeout will kill tcpdump if it is still running after 2s 46 + timeout 2s ip netns exec "${CLIENT_NS}" \ 47 + tcpdump -i link0 -c 1 -w "${CAPFILE}" icmp &> "${OUTPUT}" & 48 + pid=$! 
49 + slowwait 1 grep -qs "listening" "${OUTPUT}" 50 + 51 + # send broadcast ping 52 + ip netns exec "${CLIENT_NS}" \ 53 + ping -W0.01 -c1 -b 255.255.255.255 &> /dev/null 54 + 55 + # wait for tcpdump for exit after receiving packet 56 + wait "${pid}" 57 + 58 + # compare ethernet destination field to ff:ff:ff:ff:ff:ff 59 + ether_dst=$(tcpdump -r "${CAPFILE}" -tnne 2>/dev/null | \ 60 + awk '{sub(/,/,"",$3); print $3}') 61 + if [[ "${ether_dst}" == "ff:ff:ff:ff:ff:ff" ]]; then 62 + echo "[ OK ]" 63 + rc="${ksft_pass}" 64 + else 65 + echo "[FAIL] expected dst ether addr to be ff:ff:ff:ff:ff:ff," \ 66 + "got ${ether_dst}" 67 + rc="${ksft_fail}" 68 + fi 69 + 70 + return "${rc}" 71 + } 72 + 73 + if [ ! -x "$(command -v tcpdump)" ]; then 74 + echo "SKIP: Could not run test without tcpdump tool" 75 + exit "${ksft_skip}" 76 + fi 77 + 78 + trap cleanup EXIT 79 + 80 + setup 81 + test_broadcast_ether_dst 82 + 83 + exit $?
+3
tools/testing/selftests/net/can/config
··· 1 + CONFIG_CAN=m 2 + CONFIG_CAN_DEV=m 3 + CONFIG_CAN_VCAN=m
+1 -1
tools/testing/selftests/net/mptcp/diag.sh
··· 28 28 } 29 29 30 30 # This function is used in the cleanup trap 31 - #shellcheck disable=SC2317 31 + #shellcheck disable=SC2317,SC2329 32 32 cleanup() 33 33 { 34 34 ip netns pids "${ns}" | xargs --no-run-if-empty kill -SIGKILL &>/dev/null
+1 -1
tools/testing/selftests/net/mptcp/mptcp_connect.sh
··· 134 134 TEST_GROUP="" 135 135 136 136 # This function is used in the cleanup trap 137 - #shellcheck disable=SC2317 137 + #shellcheck disable=SC2317,SC2329 138 138 cleanup() 139 139 { 140 140 rm -f "$cin_disconnect"
+1 -1
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 8 8 9 9 # ShellCheck incorrectly believes that most of the code here is unreachable 10 10 # because it's invoked by variable name, see how the "tests" array is used 11 - #shellcheck disable=SC2317 11 + #shellcheck disable=SC2317,SC2329 12 12 13 13 . "$(dirname "${0}")/mptcp_lib.sh" 14 14
+1 -1
tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
··· 95 95 } 96 96 97 97 # This function is used in the cleanup trap 98 - #shellcheck disable=SC2317 98 + #shellcheck disable=SC2317,SC2329 99 99 cleanup() 100 100 { 101 101 mptcp_lib_ns_exit "${ns1}" "${ns2}" "${ns_sbox}"
+3 -2
tools/testing/selftests/net/mptcp/pm_netlink.sh
··· 32 32 err=$(mktemp) 33 33 34 34 # This function is used in the cleanup trap 35 - #shellcheck disable=SC2317 35 + #shellcheck disable=SC2317,SC2329 36 36 cleanup() 37 37 { 38 38 rm -f "${err}" ··· 70 70 mptcp_lib_pm_nl_format_endpoints "${@}" 71 71 } 72 72 73 + # This function is invoked indirectly 74 + #shellcheck disable=SC2317,SC2329 73 75 get_endpoint() { 74 - # shellcheck disable=SC2317 # invoked indirectly 75 76 mptcp_lib_pm_nl_get_endpoint "${ns1}" "${@}" 76 77 } 77 78
+1 -1
tools/testing/selftests/net/mptcp/simult_flows.sh
··· 35 35 } 36 36 37 37 # This function is used in the cleanup trap 38 - #shellcheck disable=SC2317 38 + #shellcheck disable=SC2317,SC2329 39 39 cleanup() 40 40 { 41 41 rm -f "$cout" "$sout"
+1 -1
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 94 94 } 95 95 96 96 # This function is used in the cleanup trap 97 - #shellcheck disable=SC2317 97 + #shellcheck disable=SC2317,SC2329 98 98 cleanup() 99 99 { 100 100 print_title "Cleanup"