Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 6.17-rc3 into char-misc-next

We need the char/misc/iio fixes in here as well to build on.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4396 -1814
+2
.mailmap
··· 226 226 Douglas Gilbert <dougg@torque.net> 227 227 Drew Fustini <fustini@kernel.org> <drew@pdp7.com> 228 228 <duje@dujemihanovic.xyz> <duje.mihanovic@skole.hr> 229 + Easwar Hariharan <easwar.hariharan@linux.microsoft.com> <easwar.hariharan@intel.com> 230 + Easwar Hariharan <easwar.hariharan@linux.microsoft.com> <eahariha@linux.microsoft.com> 229 231 Ed L. Cashin <ecashin@coraid.com> 230 232 Elliot Berman <quic_eberman@quicinc.com> <eberman@codeaurora.org> 231 233 Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com>
+2 -2
Documentation/admin-guide/cgroup-v2.rst
··· 435 435 Controlling Controllers 436 436 ----------------------- 437 437 438 - Availablity 439 - ~~~~~~~~~~~ 438 + Availability 439 + ~~~~~~~~~~~~ 440 440 441 441 A controller is available in a cgroup when it is supported by the kernel (i.e., 442 442 compiled in, not disabled and not attached to a v1 hierarchy) and listed in the
+6 -5
Documentation/core-api/symbol-namespaces.rst
··· 76 76 within the corresponding compilation unit before the #include for 77 77 <linux/export.h>. Typically it's placed before the first #include statement. 78 78 79 - Using the EXPORT_SYMBOL_GPL_FOR_MODULES() macro 80 - ----------------------------------------------- 79 + Using the EXPORT_SYMBOL_FOR_MODULES() macro 80 + ------------------------------------------- 81 81 82 82 Symbols exported using this macro are put into a module namespace. This 83 - namespace cannot be imported. 83 + namespace cannot be imported. These exports are GPL-only as they are only 84 + intended for in-tree modules. 84 85 85 86 The macro takes a comma separated list of module names, allowing only those 86 87 modules to access this symbol. Simple tail-globs are supported. 87 88 88 89 For example:: 89 90 90 - EXPORT_SYMBOL_GPL_FOR_MODULES(preempt_notifier_inc, "kvm,kvm-*") 91 + EXPORT_SYMBOL_FOR_MODULES(preempt_notifier_inc, "kvm,kvm-*") 91 92 92 - will limit usage of this symbol to modules whoes name matches the given 93 + will limit usage of this symbol to modules whose name matches the given 93 94 patterns. 94 95 95 96 How to use Symbols exported in Namespaces
+1 -1
Documentation/devicetree/bindings/regulator/infineon,ir38060.yaml
··· 7 7 title: Infineon Buck Regulators with PMBUS interfaces 8 8 9 9 maintainers: 10 - - Not Me. 10 + - Guenter Roeck <linux@roeck-us.net> 11 11 12 12 allOf: 13 13 - $ref: regulator.yaml#
+2
Documentation/networking/mptcp-sysctl.rst
··· 12 12 resent to an MPTCP peer that has not acknowledged a previous 13 13 ADD_ADDR message. 14 14 15 + Do not retransmit if set to 0. 16 + 15 17 The default value matches TCP_RTO_MAX. This is a per-namespace 16 18 sysctl. 17 19
+16 -9
Documentation/process/security-bugs.rst
··· 8 8 disclosed as quickly as possible. Please report security bugs to the 9 9 Linux kernel security team. 10 10 11 - Contact 12 - ------- 11 + The security team and maintainers almost always require additional 12 + information beyond what was initially provided in a report and rely on 13 + active and efficient collaboration with the reporter to perform further 14 + testing (e.g., verifying versions, configuration options, mitigations, or 15 + patches). Before contacting the security team, the reporter must ensure 16 + they are available to explain their findings, engage in discussions, and 17 + run additional tests. Reports where the reporter does not respond promptly 18 + or cannot effectively discuss their findings may be abandoned if the 19 + communication does not quickly improve. 20 + 21 + As it is with any bug, the more information provided the easier it 22 + will be to diagnose and fix. Please review the procedure outlined in 23 + 'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what 24 + information is helpful. Any exploit code is very helpful and will not 25 + be released without consent from the reporter unless it has already been 26 + made public. 13 27 14 28 The Linux kernel security team can be contacted by email at 15 29 <security@kernel.org>. This is a private list of security officers ··· 32 18 that can speed up the process considerably. It is possible that the 33 19 security team will bring in extra help from area maintainers to 34 20 understand and fix the security vulnerability. 35 - 36 - As it is with any bug, the more information provided the easier it 37 - will be to diagnose and fix. Please review the procedure outlined in 38 - 'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what 39 - information is helpful. Any exploit code is very helpful and will not 40 - be released without consent from the reporter unless it has already been 41 - made public. 
42 21 43 22 Please send plain text emails without attachments where possible. 44 23 It is much harder to have a context-quoted discussion about a complex
+2 -2
Documentation/userspace-api/iommufd.rst
··· 43 43 44 44 - IOMMUFD_OBJ_HWPT_PAGING, representing an actual hardware I/O page table 45 45 (i.e. a single struct iommu_domain) managed by the iommu driver. "PAGING" 46 - primarly indicates this type of HWPT should be linked to an IOAS. It also 46 + primarily indicates this type of HWPT should be linked to an IOAS. It also 47 47 indicates that it is backed by an iommu_domain with __IOMMU_DOMAIN_PAGING 48 48 feature flag. This can be either an UNMANAGED stage-1 domain for a device 49 49 running in the user space, or a nesting parent stage-2 domain for mappings ··· 76 76 77 77 * Security namespace for guest owned ID, e.g. guest-controlled cache tags 78 78 * Non-device-affiliated event reporting, e.g. invalidation queue errors 79 - * Access to a sharable nesting parent pagetable across physical IOMMUs 79 + * Access to a shareable nesting parent pagetable across physical IOMMUs 80 80 * Virtualization of various platforms IDs, e.g. RIDs and others 81 81 * Delivery of paravirtualized invalidation 82 82 * Direct assigned invalidation queues
+34 -4
MAINTAINERS
··· 8427 8427 F: drivers/gpu/drm/scheduler/ 8428 8428 F: include/drm/gpu_scheduler.h 8429 8429 8430 + DRM GPUVM 8431 + M: Danilo Krummrich <dakr@kernel.org> 8432 + R: Matthew Brost <matthew.brost@intel.com> 8433 + R: Thomas Hellström <thomas.hellstrom@linux.intel.com> 8434 + R: Alice Ryhl <aliceryhl@google.com> 8435 + L: dri-devel@lists.freedesktop.org 8436 + S: Supported 8437 + T: git https://gitlab.freedesktop.org/drm/misc/kernel.git 8438 + F: drivers/gpu/drm/drm_gpuvm.c 8439 + F: include/drm/drm_gpuvm.h 8440 + 8430 8441 DRM LOG 8431 8442 M: Jocelyn Falempe <jfalempe@redhat.com> 8432 8443 M: Javier Martinez Canillas <javierm@redhat.com> ··· 10667 10656 F: block/partitions/efi.* 10668 10657 10669 10658 HABANALABS PCI DRIVER 10670 - M: Yaron Avizrat <yaron.avizrat@intel.com> 10659 + M: Koby Elbaz <koby.elbaz@intel.com> 10660 + M: Konstantin Sinyuk <konstantin.sinyuk@intel.com> 10671 10661 L: dri-devel@lists.freedesktop.org 10672 10662 S: Supported 10673 10663 C: irc://irc.oftc.net/dri-devel ··· 11026 11014 F: drivers/perf/hisilicon/hns3_pmu.c 11027 11015 11028 11016 HISILICON I2C CONTROLLER DRIVER 11029 - M: Yicong Yang <yangyicong@hisilicon.com> 11017 + M: Devyn Liu <liudingyuan@h-partners.com> 11030 11018 L: linux-i2c@vger.kernel.org 11031 11019 S: Maintained 11032 11020 W: https://www.hisilicon.com ··· 12294 12282 F: include/linux/net/intel/*/ 12295 12283 12296 12284 INTEL ETHERNET PROTOCOL DRIVER FOR RDMA 12297 - M: Mustafa Ismail <mustafa.ismail@intel.com> 12298 12285 M: Tatyana Nikolova <tatyana.e.nikolova@intel.com> 12299 12286 L: linux-rdma@vger.kernel.org 12300 12287 S: Supported ··· 16070 16059 F: mm/migrate.c 16071 16060 F: mm/migrate_device.c 16072 16061 16062 + MEMORY MANAGEMENT - MGLRU (MULTI-GEN LRU) 16063 + M: Andrew Morton <akpm@linux-foundation.org> 16064 + M: Axel Rasmussen <axelrasmussen@google.com> 16065 + M: Yuanchu Xie <yuanchu@google.com> 16066 + R: Wei Xu <weixugc@google.com> 16067 + L: linux-mm@kvack.org 16068 + S: Maintained 16069 + W: 
http://www.linux-mm.org 16070 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm 16071 + F: Documentation/admin-guide/mm/multigen_lru.rst 16072 + F: Documentation/mm/multigen_lru.rst 16073 + F: include/linux/mm_inline.h 16074 + F: include/linux/mmzone.h 16075 + F: mm/swap.c 16076 + F: mm/vmscan.c 16077 + F: mm/workingset.c 16078 + 16073 16079 MEMORY MANAGEMENT - MISC 16074 16080 M: Andrew Morton <akpm@linux-foundation.org> 16075 16081 M: David Hildenbrand <david@redhat.com> ··· 16277 16249 W: http://www.linux-mm.org 16278 16250 T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm 16279 16251 F: rust/helpers/mm.c 16252 + F: rust/helpers/page.c 16280 16253 F: rust/kernel/mm.rs 16281 16254 F: rust/kernel/mm/ 16255 + F: rust/kernel/page.rs 16282 16256 16283 16257 MEMORY MAPPING 16284 16258 M: Andrew Morton <akpm@linux-foundation.org> ··· 22205 22175 22206 22176 S390 NETWORK DRIVERS 22207 22177 M: Alexandra Winter <wintera@linux.ibm.com> 22208 - M: Thorsten Winkler <twinkler@linux.ibm.com> 22178 + R: Aswin Karuvally <aswin@linux.ibm.com> 22209 22179 L: linux-s390@vger.kernel.org 22210 22180 L: netdev@vger.kernel.org 22211 22181 S: Supported
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 17 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+6
arch/loongarch/Makefile
··· 102 102 103 103 ifdef CONFIG_OBJTOOL 104 104 ifdef CONFIG_CC_HAS_ANNOTATE_TABLEJUMP 105 + # The annotate-tablejump option can not be passed to LLVM backend when LTO is enabled. 106 + # Ensure it is aware of linker with LTO, '--loongarch-annotate-tablejump' also needs to 107 + # be passed via '-mllvm' to ld.lld. 105 108 KBUILD_CFLAGS += -mannotate-tablejump 109 + ifdef CONFIG_LTO_CLANG 110 + KBUILD_LDFLAGS += -mllvm --loongarch-annotate-tablejump 111 + endif 106 112 else 107 113 KBUILD_CFLAGS += -fno-jump-tables # keep compatibility with older compilers 108 114 endif
+1 -1
arch/loongarch/include/asm/stackframe.h
··· 58 58 .endm 59 59 60 60 .macro STACKLEAK_ERASE 61 - #ifdef CONFIG_GCC_PLUGIN_STACKLEAK 61 + #ifdef CONFIG_KSTACK_ERASE 62 62 bl stackleak_erase_on_task_stack 63 63 #endif 64 64 .endm
+8
arch/loongarch/include/uapi/asm/setup.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + 3 + #ifndef _UAPI_ASM_LOONGARCH_SETUP_H 4 + #define _UAPI_ASM_LOONGARCH_SETUP_H 5 + 6 + #define COMMAND_LINE_SIZE 4096 7 + 8 + #endif /* _UAPI_ASM_LOONGARCH_SETUP_H */
+19 -19
arch/loongarch/kernel/module-sections.c
··· 8 8 #include <linux/module.h> 9 9 #include <linux/moduleloader.h> 10 10 #include <linux/ftrace.h> 11 + #include <linux/sort.h> 11 12 12 13 Elf_Addr module_emit_got_entry(struct module *mod, Elf_Shdr *sechdrs, Elf_Addr val) 13 14 { ··· 62 61 return (Elf_Addr)&plt[nr]; 63 62 } 64 63 65 - static int is_rela_equal(const Elf_Rela *x, const Elf_Rela *y) 64 + #define cmp_3way(a, b) ((a) < (b) ? -1 : (a) > (b)) 65 + 66 + static int compare_rela(const void *x, const void *y) 66 67 { 67 - return x->r_info == y->r_info && x->r_addend == y->r_addend; 68 - } 68 + int ret; 69 + const Elf_Rela *rela_x = x, *rela_y = y; 69 70 70 - static bool duplicate_rela(const Elf_Rela *rela, int idx) 71 - { 72 - int i; 71 + ret = cmp_3way(rela_x->r_info, rela_y->r_info); 72 + if (ret == 0) 73 + ret = cmp_3way(rela_x->r_addend, rela_y->r_addend); 73 74 74 - for (i = 0; i < idx; i++) { 75 - if (is_rela_equal(&rela[i], &rela[idx])) 76 - return true; 77 - } 78 - 79 - return false; 75 + return ret; 80 76 } 81 77 82 78 static void count_max_entries(Elf_Rela *relas, int num, 83 79 unsigned int *plts, unsigned int *gots) 84 80 { 85 - unsigned int i, type; 81 + unsigned int i; 82 + 83 + sort(relas, num, sizeof(Elf_Rela), compare_rela, NULL); 86 84 87 85 for (i = 0; i < num; i++) { 88 - type = ELF_R_TYPE(relas[i].r_info); 89 - switch (type) { 86 + if (i && !compare_rela(&relas[i-1], &relas[i])) 87 + continue; 88 + 89 + switch (ELF_R_TYPE(relas[i].r_info)) { 90 90 case R_LARCH_SOP_PUSH_PLT_PCREL: 91 91 case R_LARCH_B26: 92 - if (!duplicate_rela(relas, i)) 93 - (*plts)++; 92 + (*plts)++; 94 93 break; 95 94 case R_LARCH_GOT_PC_HI20: 96 - if (!duplicate_rela(relas, i)) 97 - (*gots)++; 95 + (*gots)++; 98 96 break; 99 97 default: 100 98 break; /* Do nothing. */
+5 -5
arch/loongarch/kernel/signal.c
··· 677 677 for (i = 1; i < 32; i++) 678 678 err |= __put_user(regs->regs[i], &sc->sc_regs[i]); 679 679 680 + #ifdef CONFIG_CPU_HAS_LBT 681 + if (extctx->lbt.addr) 682 + err |= protected_save_lbt_context(extctx); 683 + #endif 684 + 680 685 if (extctx->lasx.addr) 681 686 err |= protected_save_lasx_context(extctx); 682 687 else if (extctx->lsx.addr) 683 688 err |= protected_save_lsx_context(extctx); 684 689 else if (extctx->fpu.addr) 685 690 err |= protected_save_fpu_context(extctx); 686 - 687 - #ifdef CONFIG_CPU_HAS_LBT 688 - if (extctx->lbt.addr) 689 - err |= protected_save_lbt_context(extctx); 690 - #endif 691 691 692 692 /* Set the "end" magic */ 693 693 info = (struct sctx_info *)extctx->end.addr;
+22
arch/loongarch/kernel/time.c
··· 5 5 * Copyright (C) 2020-2022 Loongson Technology Corporation Limited 6 6 */ 7 7 #include <linux/clockchips.h> 8 + #include <linux/cpuhotplug.h> 8 9 #include <linux/delay.h> 9 10 #include <linux/export.h> 10 11 #include <linux/init.h> ··· 103 102 return 0; 104 103 } 105 104 105 + static int arch_timer_starting(unsigned int cpu) 106 + { 107 + set_csr_ecfg(ECFGF_TIMER); 108 + 109 + return 0; 110 + } 111 + 112 + static int arch_timer_dying(unsigned int cpu) 113 + { 114 + constant_set_state_shutdown(this_cpu_ptr(&constant_clockevent_device)); 115 + 116 + /* Clear Timer Interrupt */ 117 + write_csr_tintclear(CSR_TINTCLR_TI); 118 + 119 + return 0; 120 + } 121 + 106 122 static unsigned long get_loops_per_jiffy(void) 107 123 { 108 124 unsigned long lpj = (unsigned long)const_clock_freq; ··· 189 171 190 172 lpj_fine = get_loops_per_jiffy(); 191 173 pr_info("Constant clock event device register\n"); 174 + 175 + cpuhp_setup_state(CPUHP_AP_LOONGARCH_ARCH_TIMER_STARTING, 176 + "clockevents/loongarch/timer:starting", 177 + arch_timer_starting, arch_timer_dying); 192 178 193 179 return 0; 194 180 }
+6 -1
arch/loongarch/kvm/intc/eiointc.c
··· 45 45 } 46 46 47 47 cpu = s->sw_coremap[irq]; 48 - vcpu = kvm_get_vcpu(s->kvm, cpu); 48 + vcpu = kvm_get_vcpu_by_id(s->kvm, cpu); 49 + if (unlikely(vcpu == NULL)) { 50 + kvm_err("%s: invalid target cpu: %d\n", __func__, cpu); 51 + return; 52 + } 53 + 49 54 if (level) { 50 55 /* if not enable return false */ 51 56 if (!test_bit(irq, (unsigned long *)s->enable.reg_u32))
+4 -4
arch/loongarch/kvm/intc/ipi.c
··· 99 99 static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data) 100 100 { 101 101 int i, idx, ret; 102 - uint32_t val = 0, mask = 0; 102 + uint64_t val = 0, mask = 0; 103 103 104 104 /* 105 105 * Bit 27-30 is mask for byte writing. ··· 108 108 if ((data >> 27) & 0xf) { 109 109 /* Read the old val */ 110 110 idx = srcu_read_lock(&vcpu->kvm->srcu); 111 - ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val); 111 + ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, 4, &val); 112 112 srcu_read_unlock(&vcpu->kvm->srcu, idx); 113 113 if (unlikely(ret)) { 114 114 kvm_err("%s: : read data from addr %llx failed\n", __func__, addr); ··· 124 124 } 125 125 val |= ((uint32_t)(data >> 32) & ~mask); 126 126 idx = srcu_read_lock(&vcpu->kvm->srcu); 127 - ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val); 127 + ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, 4, &val); 128 128 srcu_read_unlock(&vcpu->kvm->srcu, idx); 129 129 if (unlikely(ret)) 130 130 kvm_err("%s: : write data to addr %llx failed\n", __func__, addr); ··· 298 298 cpu = (attr->attr >> 16) & 0x3ff; 299 299 addr = attr->attr & 0xff; 300 300 301 - vcpu = kvm_get_vcpu(dev->kvm, cpu); 301 + vcpu = kvm_get_vcpu_by_id(dev->kvm, cpu); 302 302 if (unlikely(vcpu == NULL)) { 303 303 kvm_err("%s: invalid target cpu: %d\n", __func__, cpu); 304 304 return -EINVAL;
+10
arch/loongarch/kvm/intc/pch_pic.c
··· 195 195 return -EINVAL; 196 196 } 197 197 198 + if (addr & (len - 1)) { 199 + kvm_err("%s: pch pic not aligned addr %llx len %d\n", __func__, addr, len); 200 + return -EINVAL; 201 + } 202 + 198 203 /* statistics of pch pic reading */ 199 204 vcpu->stat.pch_pic_read_exits++; 200 205 ret = loongarch_pch_pic_read(s, addr, len, val); ··· 304 299 305 300 if (!s) { 306 301 kvm_err("%s: pch pic irqchip not valid!\n", __func__); 302 + return -EINVAL; 303 + } 304 + 305 + if (addr & (len - 1)) { 306 + kvm_err("%s: pch pic not aligned addr %llx len %d\n", __func__, addr, len); 307 307 return -EINVAL; 308 308 } 309 309
+5 -3
arch/loongarch/kvm/vcpu.c
··· 1283 1283 return -EINVAL; 1284 1284 1285 1285 preempt_disable(); 1286 - set_csr_euen(CSR_EUEN_LBTEN); 1287 - _restore_lbt(&vcpu->arch.lbt); 1288 - vcpu->arch.aux_inuse |= KVM_LARCH_LBT; 1286 + if (!(vcpu->arch.aux_inuse & KVM_LARCH_LBT)) { 1287 + set_csr_euen(CSR_EUEN_LBTEN); 1288 + _restore_lbt(&vcpu->arch.lbt); 1289 + vcpu->arch.aux_inuse |= KVM_LARCH_LBT; 1290 + } 1289 1291 preempt_enable(); 1290 1292 1291 1293 return 0;
+4 -1
arch/mips/boot/dts/lantiq/danube_easy50712.dts
··· 82 82 }; 83 83 }; 84 84 85 - etop@e180000 { 85 + ethernet@e180000 { 86 86 compatible = "lantiq,etop-xway"; 87 87 reg = <0xe180000 0x40000>; 88 88 interrupt-parent = <&icu0>; 89 89 interrupts = <73 78>; 90 + interrupt-names = "tx", "rx"; 90 91 phy-mode = "rmii"; 91 92 mac-address = [ 00 11 22 33 44 55 ]; 93 + lantiq,rx-burst-length = <4>; 94 + lantiq,tx-burst-length = <4>; 92 95 }; 93 96 94 97 stp0: stp@e100bb0 {
+5 -5
arch/mips/lantiq/xway/sysctrl.c
··· 497 497 ifccr = CGU_IFCCR_VR9; 498 498 pcicr = CGU_PCICR_VR9; 499 499 } else { 500 - clkdev_add_pmu("1e180000.etop", NULL, 1, 0, PMU_PPE); 500 + clkdev_add_pmu("1e180000.ethernet", NULL, 1, 0, PMU_PPE); 501 501 } 502 502 503 503 if (!of_machine_is_compatible("lantiq,ase")) ··· 531 531 CLOCK_133M, CLOCK_133M); 532 532 clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0); 533 533 clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 0, PMU_USB0_P); 534 - clkdev_add_pmu("1e180000.etop", "ppe", 1, 0, PMU_PPE); 535 - clkdev_add_cgu("1e180000.etop", "ephycgu", CGU_EPHY); 536 - clkdev_add_pmu("1e180000.etop", "ephy", 1, 0, PMU_EPHY); 534 + clkdev_add_pmu("1e180000.ethernet", "ppe", 1, 0, PMU_PPE); 535 + clkdev_add_cgu("1e180000.ethernet", "ephycgu", CGU_EPHY); 536 + clkdev_add_pmu("1e180000.ethernet", "ephy", 1, 0, PMU_EPHY); 537 537 clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_ASE_SDIO); 538 538 clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE); 539 539 } else if (of_machine_is_compatible("lantiq,grx390")) { ··· 592 592 clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0 | PMU_AHBM); 593 593 clkdev_add_pmu("1f203034.usb2-phy", "phy", 1, 0, PMU_USB1_P); 594 594 clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1 | PMU_AHBM); 595 - clkdev_add_pmu("1e180000.etop", "switch", 1, 0, PMU_SWITCH); 595 + clkdev_add_pmu("1e180000.ethernet", "switch", 1, 0, PMU_SWITCH); 596 596 clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO); 597 597 clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU); 598 598 clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
+3
arch/s390/boot/vmem.c
··· 530 530 lowcore_address + sizeof(struct lowcore), 531 531 POPULATE_LOWCORE); 532 532 for_each_physmem_usable_range(i, &start, &end) { 533 + /* Do not map lowcore with identity mapping */ 534 + if (!start) 535 + start = sizeof(struct lowcore); 533 536 pgtable_populate((unsigned long)__identity_va(start), 534 537 (unsigned long)__identity_va(end), 535 538 POPULATE_IDENTITY);
+16 -17
arch/s390/configs/debug_defconfig
··· 5 5 CONFIG_AUDIT=y 6 6 CONFIG_NO_HZ_IDLE=y 7 7 CONFIG_HIGH_RES_TIMERS=y 8 + CONFIG_POSIX_AUX_CLOCKS=y 8 9 CONFIG_BPF_SYSCALL=y 9 10 CONFIG_BPF_JIT=y 10 11 CONFIG_BPF_JIT_ALWAYS_ON=y ··· 20 19 CONFIG_TASK_IO_ACCOUNTING=y 21 20 CONFIG_IKCONFIG=y 22 21 CONFIG_IKCONFIG_PROC=y 22 + CONFIG_SCHED_PROXY_EXEC=y 23 23 CONFIG_NUMA_BALANCING=y 24 24 CONFIG_MEMCG=y 25 25 CONFIG_BLK_CGROUP=y ··· 44 42 CONFIG_KEXEC=y 45 43 CONFIG_KEXEC_FILE=y 46 44 CONFIG_KEXEC_SIG=y 45 + CONFIG_CRASH_DM_CRYPT=y 47 46 CONFIG_LIVEPATCH=y 48 47 CONFIG_MARCH_Z13=y 49 48 CONFIG_NR_CPUS=512 ··· 108 105 CONFIG_MEM_SOFT_DIRTY=y 109 106 CONFIG_DEFERRED_STRUCT_PAGE_INIT=y 110 107 CONFIG_IDLE_PAGE_TRACKING=y 108 + CONFIG_ZONE_DEVICE=y 111 109 CONFIG_PERCPU_STATS=y 112 110 CONFIG_GUP_TEST=y 113 111 CONFIG_ANON_VMA_NAME=y ··· 227 223 CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m 228 224 CONFIG_NETFILTER_XT_TARGET_CT=m 229 225 CONFIG_NETFILTER_XT_TARGET_DSCP=m 226 + CONFIG_NETFILTER_XT_TARGET_HL=m 230 227 CONFIG_NETFILTER_XT_TARGET_HMARK=m 231 228 CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m 232 229 CONFIG_NETFILTER_XT_TARGET_LOG=m 233 230 CONFIG_NETFILTER_XT_TARGET_MARK=m 231 + CONFIG_NETFILTER_XT_NAT=m 234 232 CONFIG_NETFILTER_XT_TARGET_NETMAP=m 235 233 CONFIG_NETFILTER_XT_TARGET_NFLOG=m 236 234 CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m 237 235 CONFIG_NETFILTER_XT_TARGET_REDIRECT=m 236 + CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m 238 237 CONFIG_NETFILTER_XT_TARGET_TEE=m 239 238 CONFIG_NETFILTER_XT_TARGET_TPROXY=m 240 - CONFIG_NETFILTER_XT_TARGET_TRACE=m 241 239 CONFIG_NETFILTER_XT_TARGET_SECMARK=m 242 240 CONFIG_NETFILTER_XT_TARGET_TCPMSS=m 243 241 CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m ··· 254 248 CONFIG_NETFILTER_XT_MATCH_CONNMARK=m 255 249 CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m 256 250 CONFIG_NETFILTER_XT_MATCH_CPU=m 251 + CONFIG_NETFILTER_XT_MATCH_DCCP=m 257 252 CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m 258 253 CONFIG_NETFILTER_XT_MATCH_DSCP=m 259 254 CONFIG_NETFILTER_XT_MATCH_ESP=m ··· 325 318 CONFIG_IP_NF_MATCH_ECN=m 
326 319 CONFIG_IP_NF_MATCH_RPFILTER=m 327 320 CONFIG_IP_NF_MATCH_TTL=m 328 - CONFIG_IP_NF_FILTER=m 329 321 CONFIG_IP_NF_TARGET_REJECT=m 330 - CONFIG_IP_NF_NAT=m 331 - CONFIG_IP_NF_TARGET_MASQUERADE=m 332 - CONFIG_IP_NF_MANGLE=m 333 322 CONFIG_IP_NF_TARGET_ECN=m 334 - CONFIG_IP_NF_TARGET_TTL=m 335 - CONFIG_IP_NF_RAW=m 336 - CONFIG_IP_NF_SECURITY=m 337 - CONFIG_IP_NF_ARPFILTER=m 338 323 CONFIG_IP_NF_ARP_MANGLE=m 339 324 CONFIG_NFT_FIB_IPV6=m 340 325 CONFIG_IP6_NF_IPTABLES=m ··· 339 340 CONFIG_IP6_NF_MATCH_MH=m 340 341 CONFIG_IP6_NF_MATCH_RPFILTER=m 341 342 CONFIG_IP6_NF_MATCH_RT=m 342 - CONFIG_IP6_NF_TARGET_HL=m 343 - CONFIG_IP6_NF_FILTER=m 344 343 CONFIG_IP6_NF_TARGET_REJECT=m 345 - CONFIG_IP6_NF_MANGLE=m 346 - CONFIG_IP6_NF_RAW=m 347 - CONFIG_IP6_NF_SECURITY=m 348 - CONFIG_IP6_NF_NAT=m 349 - CONFIG_IP6_NF_TARGET_MASQUERADE=m 350 344 CONFIG_NF_TABLES_BRIDGE=m 345 + CONFIG_IP_SCTP=m 351 346 CONFIG_RDS=m 352 347 CONFIG_RDS_RDMA=m 353 348 CONFIG_RDS_TCP=m ··· 376 383 CONFIG_NET_SCH_INGRESS=m 377 384 CONFIG_NET_SCH_PLUG=m 378 385 CONFIG_NET_SCH_ETS=m 386 + CONFIG_NET_SCH_DUALPI2=m 379 387 CONFIG_NET_CLS_BASIC=m 380 388 CONFIG_NET_CLS_ROUTE4=m 381 389 CONFIG_NET_CLS_FW=m ··· 498 504 CONFIG_NETDEVICES=y 499 505 CONFIG_BONDING=m 500 506 CONFIG_DUMMY=m 507 + CONFIG_OVPN=m 501 508 CONFIG_EQUALIZER=m 502 509 CONFIG_IFB=m 503 510 CONFIG_MACVLAN=m ··· 636 641 CONFIG_VHOST_NET=m 637 642 CONFIG_VHOST_VSOCK=m 638 643 CONFIG_VHOST_VDPA=m 644 + CONFIG_DEV_DAX=m 639 645 CONFIG_EXT4_FS=y 640 646 CONFIG_EXT4_FS_POSIX_ACL=y 641 647 CONFIG_EXT4_FS_SECURITY=y ··· 661 665 CONFIG_BCACHEFS_FS=y 662 666 CONFIG_BCACHEFS_QUOTA=y 663 667 CONFIG_BCACHEFS_POSIX_ACL=y 668 + CONFIG_FS_DAX=y 664 669 CONFIG_EXPORTFS_BLOCK_OPS=y 665 670 CONFIG_FS_ENCRYPTION=y 666 671 CONFIG_FS_VERITY=y ··· 752 755 CONFIG_BUG_ON_DATA_CORRUPTION=y 753 756 CONFIG_CRYPTO_USER=m 754 757 CONFIG_CRYPTO_SELFTESTS=y 758 + CONFIG_CRYPTO_SELFTESTS_FULL=y 759 + CONFIG_CRYPTO_NULL=y 755 760 CONFIG_CRYPTO_PCRYPT=m 756 761 
CONFIG_CRYPTO_CRYPTD=m 757 762 CONFIG_CRYPTO_BENCHMARK=m ··· 782 783 CONFIG_CRYPTO_LRW=m 783 784 CONFIG_CRYPTO_PCBC=m 784 785 CONFIG_CRYPTO_AEGIS128=m 785 - CONFIG_CRYPTO_CHACHA20POLY1305=m 786 786 CONFIG_CRYPTO_GCM=y 787 787 CONFIG_CRYPTO_SEQIV=y 788 788 CONFIG_CRYPTO_MD4=m ··· 820 822 CONFIG_CRYPTO_KRB5=m 821 823 CONFIG_CRYPTO_KRB5_SELFTESTS=y 822 824 CONFIG_CORDIC=m 825 + CONFIG_TRACE_MMIO_ACCESS=y 823 826 CONFIG_RANDOM32_SELFTEST=y 824 827 CONFIG_XZ_DEC_MICROLZMA=y 825 828 CONFIG_DMA_CMA=y
+15 -19
arch/s390/configs/defconfig
··· 4 4 CONFIG_AUDIT=y 5 5 CONFIG_NO_HZ_IDLE=y 6 6 CONFIG_HIGH_RES_TIMERS=y 7 + CONFIG_POSIX_AUX_CLOCKS=y 7 8 CONFIG_BPF_SYSCALL=y 8 9 CONFIG_BPF_JIT=y 9 10 CONFIG_BPF_JIT_ALWAYS_ON=y ··· 18 17 CONFIG_TASK_IO_ACCOUNTING=y 19 18 CONFIG_IKCONFIG=y 20 19 CONFIG_IKCONFIG_PROC=y 20 + CONFIG_SCHED_PROXY_EXEC=y 21 21 CONFIG_NUMA_BALANCING=y 22 22 CONFIG_MEMCG=y 23 23 CONFIG_BLK_CGROUP=y ··· 42 40 CONFIG_KEXEC=y 43 41 CONFIG_KEXEC_FILE=y 44 42 CONFIG_KEXEC_SIG=y 43 + CONFIG_CRASH_DM_CRYPT=y 45 44 CONFIG_LIVEPATCH=y 46 45 CONFIG_MARCH_Z13=y 47 46 CONFIG_NR_CPUS=512 48 47 CONFIG_NUMA=y 49 - CONFIG_HZ_100=y 48 + CONFIG_HZ_1000=y 50 49 CONFIG_CERT_STORE=y 51 50 CONFIG_EXPOLINE=y 52 51 CONFIG_EXPOLINE_AUTO=y ··· 100 97 CONFIG_MEM_SOFT_DIRTY=y 101 98 CONFIG_DEFERRED_STRUCT_PAGE_INIT=y 102 99 CONFIG_IDLE_PAGE_TRACKING=y 100 + CONFIG_ZONE_DEVICE=y 103 101 CONFIG_PERCPU_STATS=y 104 102 CONFIG_ANON_VMA_NAME=y 105 103 CONFIG_USERFAULTFD=y ··· 218 214 CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m 219 215 CONFIG_NETFILTER_XT_TARGET_CT=m 220 216 CONFIG_NETFILTER_XT_TARGET_DSCP=m 217 + CONFIG_NETFILTER_XT_TARGET_HL=m 221 218 CONFIG_NETFILTER_XT_TARGET_HMARK=m 222 219 CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m 223 220 CONFIG_NETFILTER_XT_TARGET_LOG=m 224 221 CONFIG_NETFILTER_XT_TARGET_MARK=m 222 + CONFIG_NETFILTER_XT_NAT=m 225 223 CONFIG_NETFILTER_XT_TARGET_NETMAP=m 226 224 CONFIG_NETFILTER_XT_TARGET_NFLOG=m 227 225 CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m 228 226 CONFIG_NETFILTER_XT_TARGET_REDIRECT=m 227 + CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m 229 228 CONFIG_NETFILTER_XT_TARGET_TEE=m 230 229 CONFIG_NETFILTER_XT_TARGET_TPROXY=m 231 - CONFIG_NETFILTER_XT_TARGET_TRACE=m 232 230 CONFIG_NETFILTER_XT_TARGET_SECMARK=m 233 231 CONFIG_NETFILTER_XT_TARGET_TCPMSS=m 234 232 CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m ··· 245 239 CONFIG_NETFILTER_XT_MATCH_CONNMARK=m 246 240 CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m 247 241 CONFIG_NETFILTER_XT_MATCH_CPU=m 242 + CONFIG_NETFILTER_XT_MATCH_DCCP=m 248 243 
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m 249 244 CONFIG_NETFILTER_XT_MATCH_DSCP=m 250 245 CONFIG_NETFILTER_XT_MATCH_ESP=m ··· 316 309 CONFIG_IP_NF_MATCH_ECN=m 317 310 CONFIG_IP_NF_MATCH_RPFILTER=m 318 311 CONFIG_IP_NF_MATCH_TTL=m 319 - CONFIG_IP_NF_FILTER=m 320 312 CONFIG_IP_NF_TARGET_REJECT=m 321 - CONFIG_IP_NF_NAT=m 322 - CONFIG_IP_NF_TARGET_MASQUERADE=m 323 - CONFIG_IP_NF_MANGLE=m 324 313 CONFIG_IP_NF_TARGET_ECN=m 325 - CONFIG_IP_NF_TARGET_TTL=m 326 - CONFIG_IP_NF_RAW=m 327 - CONFIG_IP_NF_SECURITY=m 328 - CONFIG_IP_NF_ARPFILTER=m 329 314 CONFIG_IP_NF_ARP_MANGLE=m 330 315 CONFIG_NFT_FIB_IPV6=m 331 316 CONFIG_IP6_NF_IPTABLES=m ··· 330 331 CONFIG_IP6_NF_MATCH_MH=m 331 332 CONFIG_IP6_NF_MATCH_RPFILTER=m 332 333 CONFIG_IP6_NF_MATCH_RT=m 333 - CONFIG_IP6_NF_TARGET_HL=m 334 - CONFIG_IP6_NF_FILTER=m 335 334 CONFIG_IP6_NF_TARGET_REJECT=m 336 - CONFIG_IP6_NF_MANGLE=m 337 - CONFIG_IP6_NF_RAW=m 338 - CONFIG_IP6_NF_SECURITY=m 339 - CONFIG_IP6_NF_NAT=m 340 - CONFIG_IP6_NF_TARGET_MASQUERADE=m 341 335 CONFIG_NF_TABLES_BRIDGE=m 336 + CONFIG_IP_SCTP=m 342 337 CONFIG_RDS=m 343 338 CONFIG_RDS_RDMA=m 344 339 CONFIG_RDS_TCP=m ··· 366 373 CONFIG_NET_SCH_INGRESS=m 367 374 CONFIG_NET_SCH_PLUG=m 368 375 CONFIG_NET_SCH_ETS=m 376 + CONFIG_NET_SCH_DUALPI2=m 369 377 CONFIG_NET_CLS_BASIC=m 370 378 CONFIG_NET_CLS_ROUTE4=m 371 379 CONFIG_NET_CLS_FW=m ··· 488 494 CONFIG_NETDEVICES=y 489 495 CONFIG_BONDING=m 490 496 CONFIG_DUMMY=m 497 + CONFIG_OVPN=m 491 498 CONFIG_EQUALIZER=m 492 499 CONFIG_IFB=m 493 500 CONFIG_MACVLAN=m ··· 626 631 CONFIG_VHOST_NET=m 627 632 CONFIG_VHOST_VSOCK=m 628 633 CONFIG_VHOST_VDPA=m 634 + CONFIG_DEV_DAX=m 629 635 CONFIG_EXT4_FS=y 630 636 CONFIG_EXT4_FS_POSIX_ACL=y 631 637 CONFIG_EXT4_FS_SECURITY=y ··· 648 652 CONFIG_BCACHEFS_FS=m 649 653 CONFIG_BCACHEFS_QUOTA=y 650 654 CONFIG_BCACHEFS_POSIX_ACL=y 655 + CONFIG_FS_DAX=y 651 656 CONFIG_EXPORTFS_BLOCK_OPS=y 652 657 CONFIG_FS_ENCRYPTION=y 653 658 CONFIG_FS_VERITY=y ··· 680 683 CONFIG_TMPFS_INODE64=y 681 684 CONFIG_TMPFS_QUOTA=y 
682 685 CONFIG_HUGETLBFS=y 683 - CONFIG_CONFIGFS_FS=m 684 686 CONFIG_ECRYPT_FS=m 685 687 CONFIG_CRAMFS=m 686 688 CONFIG_SQUASHFS=m ··· 737 741 CONFIG_CRYPTO_FIPS=y 738 742 CONFIG_CRYPTO_USER=m 739 743 CONFIG_CRYPTO_SELFTESTS=y 744 + CONFIG_CRYPTO_NULL=y 740 745 CONFIG_CRYPTO_PCRYPT=m 741 746 CONFIG_CRYPTO_CRYPTD=m 742 747 CONFIG_CRYPTO_BENCHMARK=m ··· 766 769 CONFIG_CRYPTO_LRW=m 767 770 CONFIG_CRYPTO_PCBC=m 768 771 CONFIG_CRYPTO_AEGIS128=m 769 - CONFIG_CRYPTO_CHACHA20POLY1305=m 770 772 CONFIG_CRYPTO_GCM=y 771 773 CONFIG_CRYPTO_SEQIV=y 772 774 CONFIG_CRYPTO_MD4=m
+2 -1
arch/s390/configs/zfcpdump_defconfig
··· 1 1 CONFIG_NO_HZ_IDLE=y 2 2 CONFIG_HIGH_RES_TIMERS=y 3 + CONFIG_POSIX_AUX_CLOCKS=y 3 4 CONFIG_BPF_SYSCALL=y 4 5 # CONFIG_CPU_ISOLATION is not set 5 6 # CONFIG_UTS_NS is not set ··· 12 11 CONFIG_KEXEC=y 13 12 CONFIG_MARCH_Z13=y 14 13 CONFIG_NR_CPUS=2 15 - CONFIG_HZ_100=y 14 + CONFIG_HZ_1000=y 16 15 # CONFIG_CHSC_SCH is not set 17 16 # CONFIG_SCM_BUS is not set 18 17 # CONFIG_AP is not set
+12 -7
arch/s390/hypfs/hypfs_dbfs.c
··· 6 6 * Author(s): Michael Holzheu <holzheu@linux.vnet.ibm.com> 7 7 */ 8 8 9 + #include <linux/security.h> 9 10 #include <linux/slab.h> 10 11 #include "hypfs.h" 11 12 ··· 67 66 long rc; 68 67 69 68 mutex_lock(&df->lock); 70 - if (df->unlocked_ioctl) 71 - rc = df->unlocked_ioctl(file, cmd, arg); 72 - else 73 - rc = -ENOTTY; 69 + rc = df->unlocked_ioctl(file, cmd, arg); 74 70 mutex_unlock(&df->lock); 75 71 return rc; 76 72 } 77 73 78 - static const struct file_operations dbfs_ops = { 74 + static const struct file_operations dbfs_ops_ioctl = { 79 75 .read = dbfs_read, 80 76 .unlocked_ioctl = dbfs_ioctl, 81 77 }; 82 78 79 + static const struct file_operations dbfs_ops = { 80 + .read = dbfs_read, 81 + }; 82 + 83 83 void hypfs_dbfs_create_file(struct hypfs_dbfs_file *df) 84 84 { 85 - df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df, 86 - &dbfs_ops); 85 + const struct file_operations *fops = &dbfs_ops; 86 + 87 + if (df->unlocked_ioctl && !security_locked_down(LOCKDOWN_DEBUGFS)) 88 + fops = &dbfs_ops_ioctl; 89 + df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df, fops); 87 90 mutex_init(&df->lock); 88 91 } 89 92
+3 -2
arch/x86/include/asm/xen/hypercall.h
··· 94 94 #ifdef MODULE 95 95 #define __ADDRESSABLE_xen_hypercall 96 96 #else 97 - #define __ADDRESSABLE_xen_hypercall __ADDRESSABLE_ASM_STR(__SCK__xen_hypercall) 97 + #define __ADDRESSABLE_xen_hypercall \ 98 + __stringify(.global STATIC_CALL_KEY(xen_hypercall);) 98 99 #endif 99 100 100 101 #define __HYPERCALL \ 101 102 __ADDRESSABLE_xen_hypercall \ 102 - "call __SCT__xen_hypercall" 103 + __stringify(call STATIC_CALL_TRAMP(xen_hypercall)) 103 104 104 105 #define __HYPERCALL_ENTRY(x) "a" (x) 105 106
+6 -2
arch/x86/kernel/cpu/amd.c
··· 1326 1326 1327 1327 static __init int print_s5_reset_status_mmio(void) 1328 1328 { 1329 - unsigned long value; 1330 1329 void __iomem *addr; 1330 + u32 value; 1331 1331 int i; 1332 1332 1333 1333 if (!cpu_feature_enabled(X86_FEATURE_ZEN)) ··· 1340 1340 value = ioread32(addr); 1341 1341 iounmap(addr); 1342 1342 1343 + /* Value with "all bits set" is an error response and should be ignored. */ 1344 + if (value == U32_MAX) 1345 + return 0; 1346 + 1343 1347 for (i = 0; i < ARRAY_SIZE(s5_reset_reason_txt); i++) { 1344 1348 if (!(value & BIT(i))) 1345 1349 continue; 1346 1350 1347 1351 if (s5_reset_reason_txt[i]) { 1348 - pr_info("x86/amd: Previous system reset reason [0x%08lx]: %s\n", 1352 + pr_info("x86/amd: Previous system reset reason [0x%08x]: %s\n", 1349 1353 value, s5_reset_reason_txt[i]); 1350 1354 } 1351 1355 }
+1 -3
arch/x86/kernel/cpu/bugs.c
··· 1068 1068 if (gds_mitigation == GDS_MITIGATION_AUTO) { 1069 1069 if (should_mitigate_vuln(X86_BUG_GDS)) 1070 1070 gds_mitigation = GDS_MITIGATION_FULL; 1071 - else { 1071 + else 1072 1072 gds_mitigation = GDS_MITIGATION_OFF; 1073 - return; 1074 - } 1075 1073 } 1076 1074 1077 1075 /* No microcode */
+3
arch/x86/kernel/cpu/hygon.c
··· 16 16 #include <asm/spec-ctrl.h> 17 17 #include <asm/delay.h> 18 18 #include <asm/msr.h> 19 + #include <asm/resctrl.h> 19 20 20 21 #include "cpu.h" 21 22 ··· 118 117 x86_amd_ls_cfg_ssbd_mask = 1ULL << 10; 119 118 } 120 119 } 120 + 121 + resctrl_cpu_detect(c); 121 122 } 122 123 123 124 static void early_init_hygon(struct cpuinfo_x86 *c)
+1 -1
block/blk-core.c
··· 557 557 sector_t maxsector = bdev_nr_sectors(bio->bi_bdev); 558 558 unsigned int nr_sectors = bio_sectors(bio); 559 559 560 - if (nr_sectors && 560 + if (nr_sectors && maxsector && 561 561 (nr_sectors > maxsector || 562 562 bio->bi_iter.bi_sector > maxsector - nr_sectors)) { 563 563 pr_info_ratelimited("%s: attempt to access beyond end of device\n"
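The subtraction form of the comparison is what makes this check overflow-safe, and the added `maxsector &&` term skips it (and the underflowing subtraction) for zero-capacity devices. A minimal standalone model of the check (not the kernel function):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t sector_t;

/* Minimal model of the bounds check, not the kernel function.
 * Writing "start > maxsector - nr_sectors" instead of
 * "start + nr_sectors > maxsector" cannot wrap sector_t, and the
 * maxsector test bails out early for zero-sized devices. */
static bool beyond_eod(sector_t start, unsigned int nr_sectors,
		       sector_t maxsector)
{
	return nr_sectors && maxsector &&
	       (nr_sectors > maxsector || start > maxsector - nr_sectors);
}
```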
+1
block/blk-mq-debugfs.c
··· 95 95 QUEUE_FLAG_NAME(SQ_SCHED), 96 96 QUEUE_FLAG_NAME(DISABLE_WBT_DEF), 97 97 QUEUE_FLAG_NAME(NO_ELV_SWITCH), 98 + QUEUE_FLAG_NAME(QOS_ENABLED), 98 99 }; 99 100 #undef QUEUE_FLAG_NAME 100 101
+9 -4
block/blk-mq.c
··· 5033 5033 unsigned int memflags; 5034 5034 int i; 5035 5035 struct xarray elv_tbl, et_tbl; 5036 + bool queues_frozen = false; 5036 5037 5037 5038 lockdep_assert_held(&set->tag_list_lock); 5038 5039 ··· 5057 5056 blk_mq_sysfs_unregister_hctxs(q); 5058 5057 } 5059 5058 5060 - list_for_each_entry(q, &set->tag_list, tag_set_list) 5061 - blk_mq_freeze_queue_nomemsave(q); 5062 - 5063 5059 /* 5064 5060 * Switch IO scheduler to 'none', cleaning up the data associated 5065 5061 * with the previous scheduler. We will switch back once we are done ··· 5066 5068 if (blk_mq_elv_switch_none(q, &elv_tbl)) 5067 5069 goto switch_back; 5068 5070 5071 + list_for_each_entry(q, &set->tag_list, tag_set_list) 5072 + blk_mq_freeze_queue_nomemsave(q); 5073 + queues_frozen = true; 5069 5074 if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0) 5070 5075 goto switch_back; 5071 5076 ··· 5092 5091 } 5093 5092 switch_back: 5094 5093 /* The blk_mq_elv_switch_back unfreezes queue for us. */ 5095 - list_for_each_entry(q, &set->tag_list, tag_set_list) 5094 + list_for_each_entry(q, &set->tag_list, tag_set_list) { 5095 + /* switch_back expects queue to be frozen */ 5096 + if (!queues_frozen) 5097 + blk_mq_freeze_queue_nomemsave(q); 5096 5098 blk_mq_elv_switch_back(q, &elv_tbl, &et_tbl); 5099 + } 5097 5100 5098 5101 list_for_each_entry(q, &set->tag_list, tag_set_list) { 5099 5102 blk_mq_sysfs_register_hctxs(q);
+4 -4
block/blk-rq-qos.c
··· 2 2 3 3 #include "blk-rq-qos.h" 4 4 5 - __read_mostly DEFINE_STATIC_KEY_FALSE(block_rq_qos); 6 - 7 5 /* 8 6 * Increment 'v', if 'v' is below 'below'. Returns true if we succeeded, 9 7 * false if 'v' + 1 would be bigger than 'below'. ··· 317 319 struct rq_qos *rqos = q->rq_qos; 318 320 q->rq_qos = rqos->next; 319 321 rqos->ops->exit(rqos); 320 - static_branch_dec(&block_rq_qos); 321 322 } 323 + blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q); 322 324 mutex_unlock(&q->rq_qos_mutex); 323 325 } 324 326 ··· 344 346 goto ebusy; 345 347 rqos->next = q->rq_qos; 346 348 q->rq_qos = rqos; 347 - static_branch_inc(&block_rq_qos); 349 + blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q); 348 350 349 351 blk_mq_unfreeze_queue(q, memflags); 350 352 ··· 375 377 break; 376 378 } 377 379 } 380 + if (!q->rq_qos) 381 + blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q); 378 382 blk_mq_unfreeze_queue(q, memflags); 379 383 380 384 mutex_lock(&q->debugfs_mutex);
+31 -17
block/blk-rq-qos.h
··· 12 12 #include "blk-mq-debugfs.h" 13 13 14 14 struct blk_mq_debugfs_attr; 15 - extern struct static_key_false block_rq_qos; 16 15 17 16 enum rq_qos_id { 18 17 RQ_QOS_WBT, ··· 112 113 113 114 static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio) 114 115 { 115 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) 116 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 117 + q->rq_qos) 116 118 __rq_qos_cleanup(q->rq_qos, bio); 117 119 } 118 120 119 121 static inline void rq_qos_done(struct request_queue *q, struct request *rq) 120 122 { 121 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos && 122 - !blk_rq_is_passthrough(rq)) 123 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 124 + q->rq_qos && !blk_rq_is_passthrough(rq)) 123 125 __rq_qos_done(q->rq_qos, rq); 124 126 } 125 127 126 128 static inline void rq_qos_issue(struct request_queue *q, struct request *rq) 127 129 { 128 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) 130 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 131 + q->rq_qos) 129 132 __rq_qos_issue(q->rq_qos, rq); 130 133 } 131 134 132 135 static inline void rq_qos_requeue(struct request_queue *q, struct request *rq) 133 136 { 134 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) 137 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 138 + q->rq_qos) 135 139 __rq_qos_requeue(q->rq_qos, rq); 136 140 } 137 141 138 142 static inline void rq_qos_done_bio(struct bio *bio) 139 143 { 140 - if (static_branch_unlikely(&block_rq_qos) && 141 - bio->bi_bdev && (bio_flagged(bio, BIO_QOS_THROTTLED) || 142 - bio_flagged(bio, BIO_QOS_MERGED))) { 143 - struct request_queue *q = bdev_get_queue(bio->bi_bdev); 144 - if (q->rq_qos) 145 - __rq_qos_done_bio(q->rq_qos, bio); 146 - } 144 + struct request_queue *q; 145 + 146 + if (!bio->bi_bdev || (!bio_flagged(bio, BIO_QOS_THROTTLED) && 147 + !bio_flagged(bio, BIO_QOS_MERGED))) 148 + return; 149 + 
150 + q = bdev_get_queue(bio->bi_bdev); 151 + 152 + /* 153 + * If a bio has BIO_QOS_xxx set, it implicitly implies that 154 + * q->rq_qos is present. So, we skip re-checking q->rq_qos 155 + * here as an extra optimization and directly call 156 + * __rq_qos_done_bio(). 157 + */ 158 + __rq_qos_done_bio(q->rq_qos, bio); 147 159 } 148 160 149 161 static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio) 150 162 { 151 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) { 163 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 164 + q->rq_qos) { 152 165 bio_set_flag(bio, BIO_QOS_THROTTLED); 153 166 __rq_qos_throttle(q->rq_qos, bio); 154 167 } ··· 169 158 static inline void rq_qos_track(struct request_queue *q, struct request *rq, 170 159 struct bio *bio) 171 160 { 172 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) 161 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 162 + q->rq_qos) 173 163 __rq_qos_track(q->rq_qos, rq, bio); 174 164 } 175 165 176 166 static inline void rq_qos_merge(struct request_queue *q, struct request *rq, 177 167 struct bio *bio) 178 168 { 179 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) { 169 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 170 + q->rq_qos) { 180 171 bio_set_flag(bio, BIO_QOS_MERGED); 181 172 __rq_qos_merge(q->rq_qos, rq, bio); 182 173 } ··· 186 173 187 174 static inline void rq_qos_queue_depth_changed(struct request_queue *q) 188 175 { 189 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) 176 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 177 + q->rq_qos) 190 178 __rq_qos_queue_depth_changed(q->rq_qos); 191 179 } 192 180
+6 -6
block/blk-settings.c
··· 157 157 switch (bi->csum_type) { 158 158 case BLK_INTEGRITY_CSUM_NONE: 159 159 if (bi->pi_tuple_size) { 160 - pr_warn("pi_tuple_size must be 0 when checksum type \ 161 - is none\n"); 160 + pr_warn("pi_tuple_size must be 0 when checksum type is none\n"); 162 161 return -EINVAL; 163 162 } 164 163 break; 165 164 case BLK_INTEGRITY_CSUM_CRC: 166 165 case BLK_INTEGRITY_CSUM_IP: 167 166 if (bi->pi_tuple_size != sizeof(struct t10_pi_tuple)) { 168 - pr_warn("pi_tuple_size mismatch for T10 PI: expected \ 169 - %zu, got %u\n", 167 + pr_warn("pi_tuple_size mismatch for T10 PI: expected %zu, got %u\n", 170 168 sizeof(struct t10_pi_tuple), 171 169 bi->pi_tuple_size); 172 170 return -EINVAL; ··· 172 174 break; 173 175 case BLK_INTEGRITY_CSUM_CRC64: 174 176 if (bi->pi_tuple_size != sizeof(struct crc64_pi_tuple)) { 175 - pr_warn("pi_tuple_size mismatch for CRC64 PI: \ 176 - expected %zu, got %u\n", 177 + pr_warn("pi_tuple_size mismatch for CRC64 PI: expected %zu, got %u\n", 177 178 sizeof(struct crc64_pi_tuple), 178 179 bi->pi_tuple_size); 179 180 return -EINVAL; ··· 969 972 goto incompatible; 970 973 if (ti->csum_type != bi->csum_type) 971 974 goto incompatible; 975 + if (ti->pi_tuple_size != bi->pi_tuple_size) 976 + goto incompatible; 972 977 if ((ti->flags & BLK_INTEGRITY_REF_TAG) != 973 978 (bi->flags & BLK_INTEGRITY_REF_TAG)) 974 979 goto incompatible; ··· 979 980 ti->flags |= (bi->flags & BLK_INTEGRITY_DEVICE_CAPABLE) | 980 981 (bi->flags & BLK_INTEGRITY_REF_TAG); 981 982 ti->csum_type = bi->csum_type; 983 + ti->pi_tuple_size = bi->pi_tuple_size; 982 984 ti->metadata_size = bi->metadata_size; 983 985 ti->pi_offset = bi->pi_offset; 984 986 ti->interval_exp = bi->interval_exp;
+1 -1
drivers/accel/habanalabs/gaudi2/gaudi2.c
··· 10437 10437 (u64 *)(lin_dma_pkts_arr), DEBUGFS_WRITE64); 10438 10438 WREG32(sob_addr, 0); 10439 10439 10440 - kfree(lin_dma_pkts_arr); 10440 + kvfree(lin_dma_pkts_arr); 10441 10441 10442 10442 return rc; 10443 10443 }
+7 -10
drivers/acpi/apei/einj-core.c
··· 315 315 memcpy_fromio(&v5param, p, v5param_size); 316 316 acpi5 = 1; 317 317 check_vendor_extension(pa_v5, &v5param); 318 - if (available_error_type & ACPI65_EINJV2_SUPP) { 318 + if (is_v2 && available_error_type & ACPI65_EINJV2_SUPP) { 319 319 len = v5param.einjv2_struct.length; 320 320 offset = offsetof(struct einjv2_extension_struct, component_arr); 321 321 max_nr_components = (len - offset) / ··· 540 540 struct set_error_type_with_address *v5param; 541 541 542 542 v5param = kmalloc(v5param_size, GFP_KERNEL); 543 + if (!v5param) 544 + return -ENOMEM; 545 + 543 546 memcpy_fromio(v5param, einj_param, v5param_size); 544 547 v5param->type = type; 545 548 if (type & ACPI5_VENDOR_BIT) { ··· 1094 1091 return rc; 1095 1092 } 1096 1093 1097 - static void __exit einj_remove(struct faux_device *fdev) 1094 + static void einj_remove(struct faux_device *fdev) 1098 1095 { 1099 1096 struct apei_exec_context ctx; 1100 1097 ··· 1117 1114 } 1118 1115 1119 1116 static struct faux_device *einj_dev; 1120 - /* 1121 - * einj_remove() lives in .exit.text. For drivers registered via 1122 - * platform_driver_probe() this is ok because they cannot get unbound at 1123 - * runtime. So mark the driver struct with __refdata to prevent modpost 1124 - * triggering a section mismatch warning. 1125 - */ 1126 - static struct faux_device_ops einj_device_ops __refdata = { 1117 + static struct faux_device_ops einj_device_ops = { 1127 1118 .probe = einj_probe, 1128 - .remove = __exit_p(einj_remove), 1119 + .remove = einj_remove, 1129 1120 }; 1130 1121 1131 1122 static int __init einj_init(void)
+1 -1
drivers/acpi/pfr_update.c
··· 329 329 if (type == PFRU_CODE_INJECT_TYPE) 330 330 return payload_hdr->rt_ver >= cap->code_rt_version; 331 331 332 - return payload_hdr->rt_ver >= cap->drv_rt_version; 332 + return payload_hdr->svn_ver >= cap->drv_svn; 333 333 } 334 334 335 335 static void print_update_debug_info(struct pfru_updated_result *result,
+21 -18
drivers/block/loop.c
··· 137 137 static int max_part; 138 138 static int part_shift; 139 139 140 - static loff_t get_size(loff_t offset, loff_t sizelimit, struct file *file) 140 + static loff_t lo_calculate_size(struct loop_device *lo, struct file *file) 141 141 { 142 + struct kstat stat; 142 143 loff_t loopsize; 144 + int ret; 143 145 144 - /* Compute loopsize in bytes */ 145 - loopsize = i_size_read(file->f_mapping->host); 146 - if (offset > 0) 147 - loopsize -= offset; 146 + /* 147 + * Get the accurate file size. This provides better results than 148 + * cached inode data, particularly for network filesystems where 149 + * metadata may be stale. 150 + */ 151 + ret = vfs_getattr_nosec(&file->f_path, &stat, STATX_SIZE, 0); 152 + if (ret) 153 + return 0; 154 + 155 + loopsize = stat.size; 156 + if (lo->lo_offset > 0) 157 + loopsize -= lo->lo_offset; 148 158 /* offset is beyond i_size, weird but possible */ 149 159 if (loopsize < 0) 150 160 return 0; 151 - 152 - if (sizelimit > 0 && sizelimit < loopsize) 153 - loopsize = sizelimit; 161 + if (lo->lo_sizelimit > 0 && lo->lo_sizelimit < loopsize) 162 + loopsize = lo->lo_sizelimit; 154 163 /* 155 164 * Unfortunately, if we want to do I/O on the device, 156 165 * the number of 512-byte sectors has to fit into a sector_t. 
157 166 */ 158 167 return loopsize >> 9; 159 - } 160 - 161 - static loff_t get_loop_size(struct loop_device *lo, struct file *file) 162 - { 163 - return get_size(lo->lo_offset, lo->lo_sizelimit, file); 164 168 } 165 169 166 170 /* ··· 573 569 error = -EINVAL; 574 570 575 571 /* size of the new backing store needs to be the same */ 576 - if (get_loop_size(lo, file) != get_loop_size(lo, old_file)) 572 + if (lo_calculate_size(lo, file) != lo_calculate_size(lo, old_file)) 577 573 goto out_err; 578 574 579 575 /* ··· 1067 1063 loop_update_dio(lo); 1068 1064 loop_sysfs_init(lo); 1069 1065 1070 - size = get_loop_size(lo, file); 1066 + size = lo_calculate_size(lo, file); 1071 1067 loop_set_size(lo, size); 1072 1068 1073 1069 /* Order wrt reading lo_state in loop_validate_file(). */ ··· 1259 1255 if (partscan) 1260 1256 clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state); 1261 1257 if (!err && size_changed) { 1262 - loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit, 1263 - lo->lo_backing_file); 1258 + loff_t new_size = lo_calculate_size(lo, lo->lo_backing_file); 1264 1259 loop_set_size(lo, new_size); 1265 1260 } 1266 1261 out_unlock: ··· 1402 1399 if (unlikely(lo->lo_state != Lo_bound)) 1403 1400 return -ENXIO; 1404 1401 1405 - size = get_loop_size(lo, lo->lo_backing_file); 1402 + size = lo_calculate_size(lo, lo->lo_backing_file); 1406 1403 loop_set_size(lo, size); 1407 1404 1408 1405 return 0;
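The arithmetic that `lo_calculate_size()` performs on the freshly fetched size can be sketched on its own (hypothetical helper name, not the driver code): bytes remaining after the offset, clamped by an optional size limit, expressed in 512-byte sectors.

```c
#include <stdint.h>

typedef int64_t loff_t;

/* Standalone model of the loop-device size computation: bytes
 * available after lo_offset, clamped by an optional lo_sizelimit,
 * converted to 512-byte sectors. */
static loff_t backing_size_sectors(loff_t file_size, loff_t offset,
				   loff_t sizelimit)
{
	loff_t loopsize = file_size;

	if (offset > 0)
		loopsize -= offset;
	if (loopsize < 0)	/* offset beyond EOF: weird but possible */
		return 0;
	if (sizelimit > 0 && sizelimit < loopsize)
		loopsize = sizelimit;
	return loopsize >> 9;
}
```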
+1 -6
drivers/bluetooth/btmtk.c
··· 642 642 * WMT command. 643 643 */ 644 644 err = wait_on_bit_timeout(&data->flags, BTMTK_TX_WAIT_VND_EVT, 645 - TASK_INTERRUPTIBLE, HCI_INIT_TIMEOUT); 646 - if (err == -EINTR) { 647 - bt_dev_err(hdev, "Execution of wmt command interrupted"); 648 - clear_bit(BTMTK_TX_WAIT_VND_EVT, &data->flags); 649 - goto err_free_wc; 650 - } 645 + TASK_UNINTERRUPTIBLE, HCI_INIT_TIMEOUT); 651 646 652 647 if (err) { 653 648 bt_dev_err(hdev, "Execution of wmt command timed out");
+4 -4
drivers/bluetooth/btnxpuart.c
··· 543 543 } 544 544 545 545 if (psdata->wakeup_source) { 546 - ret = devm_request_irq(&serdev->dev, psdata->irq_handler, 547 - ps_host_wakeup_irq_handler, 548 - IRQF_ONESHOT | IRQF_TRIGGER_FALLING, 549 - dev_name(&serdev->dev), nxpdev); 546 + ret = devm_request_threaded_irq(&serdev->dev, psdata->irq_handler, 547 + NULL, ps_host_wakeup_irq_handler, 548 + IRQF_ONESHOT, 549 + dev_name(&serdev->dev), nxpdev); 550 550 if (ret) 551 551 bt_dev_info(hdev, "error setting wakeup IRQ handler, ignoring\n"); 552 552 disable_irq(psdata->irq_handler);
+1 -2
drivers/cdx/controller/cdx_rpmsg.c
··· 129 129 130 130 chinfo.src = RPMSG_ADDR_ANY; 131 131 chinfo.dst = rpdev->dst; 132 - strscpy(chinfo.name, cdx_rpmsg_id_table[0].name, 133 - strlen(cdx_rpmsg_id_table[0].name)); 132 + strscpy(chinfo.name, cdx_rpmsg_id_table[0].name, sizeof(chinfo.name)); 134 133 135 134 cdx_mcdi->ept = rpmsg_create_ept(rpdev, cdx_rpmsg_cb, NULL, chinfo); 136 135 if (!cdx_mcdi->ept) {
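The fix matters because strscpy()'s size argument is the destination capacity, not the source length. A minimal userspace model (a sketch, not the kernel implementation) shows how passing `strlen(src)` silently drops the final character:

```c
#include <stddef.h>
#include <string.h>

/* Minimal userspace model of strscpy(): copy at most size - 1 bytes,
 * always NUL-terminate, return -1 (-E2BIG in the kernel) on
 * truncation. */
static long my_strscpy(char *dst, const char *src, size_t size)
{
	size_t len;

	if (size == 0)
		return -1;
	len = strlen(src);
	if (len >= size) {
		memcpy(dst, src, size - 1);	/* truncated copy */
		dst[size - 1] = '\0';
		return -1;
	}
	memcpy(dst, src, len + 1);
	return (long)len;
}

/* Test helper: copy src with the given size, compare the result. */
static int copies_equal(const char *src, size_t size, const char *expect)
{
	char buf[16] = "";

	my_strscpy(buf, src, size);
	return strcmp(buf, expect) == 0;
}
```

With `size = strlen("rpmsg")` only four characters survive, which is exactly the bug the one-line change above removes by passing `sizeof(chinfo.name)`.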
+5
drivers/comedi/comedi_fops.c
··· 1587 1587 memset(&data[n], 0, (MIN_SAMPLES - n) * 1588 1588 sizeof(unsigned int)); 1589 1589 } 1590 + } else { 1591 + memset(data, 0, max_t(unsigned int, n, MIN_SAMPLES) * 1592 + sizeof(unsigned int)); 1590 1593 } 1591 1594 ret = parse_insn(dev, insns + i, data, file); 1592 1595 if (ret < 0) ··· 1673 1670 memset(&data[insn->n], 0, 1674 1671 (MIN_SAMPLES - insn->n) * sizeof(unsigned int)); 1675 1672 } 1673 + } else { 1674 + memset(data, 0, n_data * sizeof(unsigned int)); 1676 1675 } 1677 1676 ret = parse_insn(dev, insn, data, file); 1678 1677 if (ret < 0)
+14 -13
drivers/comedi/drivers.c
··· 620 620 unsigned int chan = CR_CHAN(insn->chanspec); 621 621 unsigned int base_chan = (chan < 32) ? 0 : chan; 622 622 unsigned int _data[2]; 623 + unsigned int i; 623 624 int ret; 624 - 625 - if (insn->n == 0) 626 - return 0; 627 625 628 626 memset(_data, 0, sizeof(_data)); 629 627 memset(&_insn, 0, sizeof(_insn)); ··· 633 635 if (insn->insn == INSN_WRITE) { 634 636 if (!(s->subdev_flags & SDF_WRITABLE)) 635 637 return -EINVAL; 636 - _data[0] = 1U << (chan - base_chan); /* mask */ 637 - _data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */ 638 + _data[0] = 1U << (chan - base_chan); /* mask */ 639 + } 640 + for (i = 0; i < insn->n; i++) { 641 + if (insn->insn == INSN_WRITE) 642 + _data[1] = data[i] ? _data[0] : 0; /* bits */ 643 + 644 + ret = s->insn_bits(dev, s, &_insn, _data); 645 + if (ret < 0) 646 + return ret; 647 + 648 + if (insn->insn == INSN_READ) 649 + data[i] = (_data[1] >> (chan - base_chan)) & 1; 638 650 } 639 651 640 - ret = s->insn_bits(dev, s, &_insn, _data); 641 - if (ret < 0) 642 - return ret; 643 - 644 - if (insn->insn == INSN_READ) 645 - data[0] = (_data[1] >> (chan - base_chan)) & 1; 646 - 647 - return 1; 652 + return insn->n; 648 653 } 649 654 650 655 static int __comedi_device_postconfig_async(struct comedi_device *dev,
+2 -1
drivers/comedi/drivers/pcl726.c
··· 328 328 * Hook up the external trigger source interrupt only if the 329 329 * user config option is valid and the board supports interrupts. 330 330 */ 331 - if (it->options[1] && (board->irq_mask & (1 << it->options[1]))) { 331 + if (it->options[1] > 0 && it->options[1] < 16 && 332 + (board->irq_mask & (1U << it->options[1]))) { 332 333 ret = request_irq(it->options[1], pcl726_interrupt, 0, 333 334 dev->board_name, dev); 334 335 if (ret == 0) {
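The added range check is not cosmetic: a user-supplied option is used as a shift count, and `1 << n` is undefined for negative `n` or `n` at or beyond the width of the promoted type. A hypothetical helper modelling the fixed condition:

```c
#include <stdbool.h>

/* Hypothetical helper, not the driver code: validate a user-supplied
 * IRQ option before using it as a shift count, then test it against
 * the board's IRQ mask. */
static bool irq_option_valid(int option, unsigned int irq_mask)
{
	return option > 0 && option < 16 &&
	       (irq_mask & (1U << option)) != 0;
}
```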
+12 -17
drivers/cpuidle/governors/menu.c
··· 287 287 return 0; 288 288 } 289 289 290 - if (tick_nohz_tick_stopped()) { 291 - /* 292 - * If the tick is already stopped, the cost of possible short 293 - * idle duration misprediction is much higher, because the CPU 294 - * may be stuck in a shallow idle state for a long time as a 295 - * result of it. In that case say we might mispredict and use 296 - * the known time till the closest timer event for the idle 297 - * state selection. 298 - */ 299 - if (predicted_ns < TICK_NSEC) 300 - predicted_ns = data->next_timer_ns; 301 - } else if (latency_req > predicted_ns) { 302 - latency_req = predicted_ns; 303 - } 290 + /* 291 + * If the tick is already stopped, the cost of possible short idle 292 + * duration misprediction is much higher, because the CPU may be stuck 293 + * in a shallow idle state for a long time as a result of it. In that 294 + * case, say we might mispredict and use the known time till the closest 295 + * timer event for the idle state selection. 296 + */ 297 + if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC) 298 + predicted_ns = data->next_timer_ns; 304 299 305 300 /* 306 301 * Find the idle state with the lowest power while satisfying ··· 311 316 if (idx == -1) 312 317 idx = i; /* first enabled state */ 313 318 319 + if (s->exit_latency_ns > latency_req) 320 + break; 321 + 314 322 if (s->target_residency_ns > predicted_ns) { 315 323 /* 316 324 * Use a physical idle state, not busy polling, unless 317 325 * a timer is going to trigger soon enough. 318 326 */ 319 327 if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) && 320 - s->exit_latency_ns <= latency_req && 321 328 s->target_residency_ns <= data->next_timer_ns) { 322 329 predicted_ns = s->target_residency_ns; 323 330 idx = i; ··· 351 354 352 355 return idx; 353 356 } 354 - if (s->exit_latency_ns > latency_req) 355 - break; 356 357 357 358 idx = i; 358 359 }
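Reduced to its core, the reordered selection loop behaves like the sketch below (a heavy simplification that omits the polling-state and tick handling of the real governor): moving the latency cutoff ahead of the residency comparison means a state that violates `latency_req` can never be returned.

```c
#include <stdint.h>

struct idle_state {
	uint64_t exit_latency_ns;
	uint64_t target_residency_ns;
};

/* Demo data for the sketch: states ordered shallow to deep. */
static const struct idle_state demo_states[] = {
	{ .exit_latency_ns = 1,    .target_residency_ns = 1 },
	{ .exit_latency_ns = 100,  .target_residency_ns = 500 },
	{ .exit_latency_ns = 1000, .target_residency_ns = 5000 },
};

/* Simplified model of the selection loop after the reordering: pick
 * the deepest state whose exit latency and target residency both fit
 * the constraints; -1 means no state qualifies. */
static int pick_state(const struct idle_state *states, int n,
		      uint64_t predicted_ns, uint64_t latency_req)
{
	int i, idx = -1;

	for (i = 0; i < n; i++) {
		if (states[i].exit_latency_ns > latency_req)
			break;
		if (states[i].target_residency_ns > predicted_ns)
			break;
		idx = i;
	}
	return idx;
}
```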
+4 -4
drivers/fpga/zynq-fpga.c
··· 405 405 } 406 406 } 407 407 408 - priv->dma_nelms = 409 - dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0); 410 - if (priv->dma_nelms == 0) { 408 + err = dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0); 409 + if (err) { 411 410 dev_err(&mgr->dev, "Unable to DMA map (TO_DEVICE)\n"); 412 - return -ENOMEM; 411 + return err; 413 412 } 413 + priv->dma_nelms = sgt->nents; 414 414 415 415 /* enable clock */ 416 416 err = clk_enable(priv->clk);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
··· 514 514 return false; 515 515 516 516 if (drm_gem_is_imported(obj)) { 517 - struct dma_buf *dma_buf = obj->dma_buf; 517 + struct dma_buf *dma_buf = obj->import_attach->dmabuf; 518 518 519 519 if (dma_buf->ops != &amdgpu_dmabuf_ops) 520 520 /* No XGMI with non AMD GPUs */
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
··· 317 317 */ 318 318 if (!vm->is_compute_context || !vm->process_info) 319 319 return 0; 320 - if (!drm_gem_is_imported(obj) || !dma_buf_is_dynamic(obj->dma_buf)) 320 + if (!drm_gem_is_imported(obj) || 321 + !dma_buf_is_dynamic(obj->import_attach->dmabuf)) 321 322 return 0; 322 323 mutex_lock_nested(&vm->process_info->lock, 1); 323 324 if (!WARN_ON(!vm->process_info->eviction_fence)) {
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1283 1283 struct drm_gem_object *obj = &bo->tbo.base; 1284 1284 1285 1285 if (drm_gem_is_imported(obj) && bo_va->is_xgmi) { 1286 - struct dma_buf *dma_buf = obj->dma_buf; 1286 + struct dma_buf *dma_buf = obj->import_attach->dmabuf; 1287 1287 struct drm_gem_object *gobj = dma_buf->priv; 1288 1288 struct amdgpu_bo *abo = gem_to_amdgpu_bo(gobj); 1289 1289
+3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 7792 7792 struct amdgpu_dm_connector *aconn = to_amdgpu_dm_connector(conn); 7793 7793 int ret; 7794 7794 7795 + if (WARN_ON(unlikely(!old_con_state || !new_con_state))) 7796 + return -EINVAL; 7797 + 7795 7798 trace_amdgpu_dm_connector_atomic_check(new_con_state); 7796 7799 7797 7800 if (conn->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
+19
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 299 299 irq_type = amdgpu_display_crtc_idx_to_irq_type(adev, acrtc->crtc_id); 300 300 301 301 if (enable) { 302 + struct dc *dc = adev->dm.dc; 303 + struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); 304 + struct psr_settings *psr = &acrtc_state->stream->link->psr_settings; 305 + struct replay_settings *pr = &acrtc_state->stream->link->replay_settings; 306 + bool sr_supported = (psr->psr_version != DC_PSR_VERSION_UNSUPPORTED) || 307 + pr->config.replay_supported; 308 + 309 + /* 310 + * IPS & self-refresh feature can cause vblank counter resets between 311 + * vblank disable and enable. 312 + * It may cause system stuck due to waiting for the vblank counter. 313 + * Call this function to estimate missed vblanks by using timestamps and 314 + * update the vblank counter in DRM. 315 + */ 316 + if (dc->caps.ips_support && 317 + dc->config.disable_ips != DMUB_IPS_DISABLE_ALL && 318 + sr_supported && vblank->config.disable_immediate) 319 + drm_crtc_vblank_restore(crtc); 320 + 302 321 /* vblank irq on -> Only need vupdate irq in vrr mode */ 303 322 if (amdgpu_dm_crtc_vrr_active(acrtc_state)) 304 323 rc = amdgpu_dm_crtc_set_vupdate_irq(crtc, true);
+1 -4
drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
··· 174 174 return object_id; 175 175 } 176 176 177 - if (tbl->ucNumberOfObjects <= i) { 178 - dm_error("Can't find connector id %d in connector table of size %d.\n", 179 - i, tbl->ucNumberOfObjects); 177 + if (tbl->ucNumberOfObjects <= i) 180 178 return object_id; 181 - } 182 179 183 180 id = le16_to_cpu(tbl->asObjects[i].usObjectID); 184 181 object_id = object_id_from_bios_object_id(id);
+1 -1
drivers/gpu/drm/amd/display/dc/bios/command_table.c
··· 993 993 allocation.sPCLKInput.usFbDiv = 994 994 cpu_to_le16((uint16_t)bp_params->feedback_divider); 995 995 allocation.sPCLKInput.ucFracFbDiv = 996 - (uint8_t)bp_params->fractional_feedback_divider; 996 + (uint8_t)(bp_params->fractional_feedback_divider / 100000); 997 997 allocation.sPCLKInput.ucPostDiv = 998 998 (uint8_t)bp_params->pixel_clock_post_divider; 999 999
+5 -9
drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c
··· 72 72 /* ClocksStateLow */ 73 73 { .display_clk_khz = 352000, .pixel_clk_khz = 330000}, 74 74 /* ClocksStateNominal */ 75 - { .display_clk_khz = 600000, .pixel_clk_khz = 400000 }, 75 + { .display_clk_khz = 625000, .pixel_clk_khz = 400000 }, 76 76 /* ClocksStatePerformance */ 77 - { .display_clk_khz = 600000, .pixel_clk_khz = 400000 } }; 77 + { .display_clk_khz = 625000, .pixel_clk_khz = 400000 } }; 78 78 79 79 int dentist_get_divider_from_did(int did) 80 80 { ··· 391 391 { 392 392 struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg; 393 393 394 - pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context); 395 - 396 394 dce110_fill_display_configs(context, pp_display_cfg); 397 395 398 396 if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0) ··· 403 405 { 404 406 struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base); 405 407 struct dm_pp_power_level_change_request level_change_req; 406 - int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz; 407 - 408 - /*TODO: W/A for dal3 linux, investigate why this works */ 409 - if (!clk_mgr_dce->dfs_bypass_active) 410 - patched_disp_clk = patched_disp_clk * 115 / 100; 408 + const int max_disp_clk = 409 + clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz; 410 + int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz); 411 411 412 412 level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context); 413 413 /* get max clock state from PPLIB */
+23 -17
drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
··· 120 120 const struct dc_state *context, 121 121 struct dm_pp_display_configuration *pp_display_cfg) 122 122 { 123 + struct dc *dc = context->clk_mgr->ctx->dc; 123 124 int j; 124 125 int num_cfgs = 0; 126 + 127 + pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context); 128 + pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz; 129 + pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0; 130 + pp_display_cfg->crtc_index = dc->res_pool->res_cap->num_timing_generator; 125 131 126 132 for (j = 0; j < context->stream_count; j++) { 127 133 int k; ··· 170 164 cfg->v_refresh /= stream->timing.h_total; 171 165 cfg->v_refresh = (cfg->v_refresh + stream->timing.v_total / 2) 172 166 / stream->timing.v_total; 167 + 168 + /* Find first CRTC index and calculate its line time. 169 + * This is necessary for DPM on SI GPUs. 170 + */ 171 + if (cfg->pipe_idx < pp_display_cfg->crtc_index) { 172 + const struct dc_crtc_timing *timing = 173 + &context->streams[0]->timing; 174 + 175 + pp_display_cfg->crtc_index = cfg->pipe_idx; 176 + pp_display_cfg->line_time_in_us = 177 + timing->h_total * 10000 / timing->pix_clk_100hz; 178 + } 179 + } 180 + 181 + if (!num_cfgs) { 182 + pp_display_cfg->crtc_index = 0; 183 + pp_display_cfg->line_time_in_us = 0; 173 184 } 174 185 175 186 pp_display_cfg->display_count = num_cfgs; ··· 246 223 pp_display_cfg->min_engine_clock_deep_sleep_khz 247 224 = context->bw_ctx.bw.dce.sclk_deep_sleep_khz; 248 225 249 - pp_display_cfg->avail_mclk_switch_time_us = 250 - dce110_get_min_vblank_time_us(context); 251 - /* TODO: dce11.2*/ 252 - pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0; 253 - 254 - pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz; 255 - 256 226 dce110_fill_display_configs(context, pp_display_cfg); 257 - 258 - /* TODO: is this still applicable?*/ 259 - if (pp_display_cfg->display_count == 1) { 260 - const struct dc_crtc_timing *timing = 261 - &context->streams[0]->timing; 262 - 263 - 
pp_display_cfg->crtc_index = 264 - pp_display_cfg->disp_configs[0].pipe_idx; 265 - pp_display_cfg->line_time_in_us = timing->h_total * 10000 / timing->pix_clk_100hz; 266 - } 267 227 268 228 if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0) 269 229 dm_pp_apply_display_requirements(dc->ctx, pp_display_cfg);
+9 -22
drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c
··· 83 83 static int dce60_get_dp_ref_freq_khz(struct clk_mgr *clk_mgr_base) 84 84 { 85 85 struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base); 86 - int dprefclk_wdivider; 87 - int dp_ref_clk_khz; 88 - int target_div; 86 + struct dc_context *ctx = clk_mgr_base->ctx; 87 + int dp_ref_clk_khz = 0; 89 88 90 - /* DCE6 has no DPREFCLK_CNTL to read DP Reference Clock source */ 91 - 92 - /* Read the mmDENTIST_DISPCLK_CNTL to get the currently 93 - * programmed DID DENTIST_DPREFCLK_WDIVIDER*/ 94 - REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DPREFCLK_WDIVIDER, &dprefclk_wdivider); 95 - 96 - /* Convert DENTIST_DPREFCLK_WDIVIDERto actual divider*/ 97 - target_div = dentist_get_divider_from_did(dprefclk_wdivider); 98 - 99 - /* Calculate the current DFS clock, in kHz.*/ 100 - dp_ref_clk_khz = (DENTIST_DIVIDER_RANGE_SCALE_FACTOR 101 - * clk_mgr->base.dentist_vco_freq_khz) / target_div; 89 + if (ASIC_REV_IS_TAHITI_P(ctx->asic_id.hw_internal_rev)) 90 + dp_ref_clk_khz = ctx->dc_bios->fw_info.default_display_engine_pll_frequency; 91 + else 92 + dp_ref_clk_khz = clk_mgr_base->clks.dispclk_khz; 102 93 103 94 return dce_adjust_dp_ref_freq_for_ss(clk_mgr, dp_ref_clk_khz); 104 95 } ··· 99 108 struct dc_state *context) 100 109 { 101 110 struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg; 102 - 103 - pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context); 104 111 105 112 dce110_fill_display_configs(context, pp_display_cfg); 106 113 ··· 112 123 { 113 124 struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base); 114 125 struct dm_pp_power_level_change_request level_change_req; 115 - int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz; 116 - 117 - /*TODO: W/A for dal3 linux, investigate why this works */ 118 - if (!clk_mgr_dce->dfs_bypass_active) 119 - patched_disp_clk = patched_disp_clk * 115 / 100; 126 + const int max_disp_clk = 127 + 
clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz; 128 + int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz); 120 129 121 130 level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context); 122 131 /* get max clock state from PPLIB */
+14 -1
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 217 217 connectors_num, 218 218 num_virtual_links); 219 219 220 - // condition loop on link_count to allow skipping invalid indices 220 + /* When getting the number of connectors, the VBIOS reports the number of valid indices, 221 + * but it doesn't say which indices are valid, and not every index has an actual connector. 222 + * So, if we don't find a connector on an index, that is not an error. 223 + * 224 + * - There is no guarantee that the first N indices will be valid 225 + * - VBIOS may report a higher amount of valid indices than there are actual connectors 226 + * - Some VBIOS have valid configurations for more connectors than there actually are 227 + * on the card. This may be because the manufacturer used the same VBIOS for different 228 + * variants of the same card. 229 + */ 221 230 for (i = 0; dc->link_count < connectors_num && i < MAX_LINKS; i++) { 231 + struct graphics_object_id connector_id = bios->funcs->get_connector_id(bios, i); 222 232 struct link_init_data link_init_params = {0}; 223 233 struct dc_link *link; 234 + 235 + if (connector_id.id == CONNECTOR_ID_UNKNOWN) 236 + continue; 224 237 225 238 DC_LOG_DC("BIOS object table - printing link object info for connector number: %d, link_index: %d", i, dc->link_count); 226 239
+3 -40
drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
··· 4 4 5 5 #include "dc.h" 6 6 #include "dc_dmub_srv.h" 7 - #include "dc_dp_types.h" 8 7 #include "dmub/dmub_srv.h" 9 8 #include "core_types.h" 10 9 #include "dmub_replay.h" ··· 43 44 /* 44 45 * Enable/Disable Replay. 45 46 */ 46 - static void dmub_replay_enable(struct dmub_replay *dmub, bool enable, bool wait, uint8_t panel_inst, 47 - struct dc_link *link) 47 + static void dmub_replay_enable(struct dmub_replay *dmub, bool enable, bool wait, uint8_t panel_inst) 48 48 { 49 49 union dmub_rb_cmd cmd; 50 50 struct dc_context *dc = dmub->ctx; 51 51 uint32_t retry_count; 52 52 enum replay_state state = REPLAY_STATE_0; 53 - struct pipe_ctx *pipe_ctx = NULL; 54 - struct resource_context *res_ctx = &link->ctx->dc->current_state->res_ctx; 55 - uint8_t i; 56 53 57 54 memset(&cmd, 0, sizeof(cmd)); 58 55 cmd.replay_enable.header.type = DMUB_CMD__REPLAY; 59 56 cmd.replay_enable.data.panel_inst = panel_inst; 60 57 61 58 cmd.replay_enable.header.sub_type = DMUB_CMD__REPLAY_ENABLE; 62 - if (enable) { 59 + if (enable) 63 60 cmd.replay_enable.data.enable = REPLAY_ENABLE; 64 - // hpo stream/link encoder assignments are not static, need to update everytime we try to enable replay 65 - if (link->cur_link_settings.link_rate >= LINK_RATE_UHBR10) { 66 - for (i = 0; i < MAX_PIPES; i++) { 67 - if (res_ctx && 68 - res_ctx->pipe_ctx[i].stream && 69 - res_ctx->pipe_ctx[i].stream->link && 70 - res_ctx->pipe_ctx[i].stream->link == link && 71 - res_ctx->pipe_ctx[i].stream->link->connector_signal == SIGNAL_TYPE_EDP) { 72 - pipe_ctx = &res_ctx->pipe_ctx[i]; 73 - //TODO: refactor for multi edp support 74 - break; 75 - } 76 - } 77 - 78 - if (!pipe_ctx) 79 - return; 80 - 81 - cmd.replay_enable.data.hpo_stream_enc_inst = pipe_ctx->stream_res.hpo_dp_stream_enc->inst; 82 - cmd.replay_enable.data.hpo_link_enc_inst = pipe_ctx->link_res.hpo_dp_link_enc->inst; 83 - } 84 - } else 61 + else 85 62 cmd.replay_enable.data.enable = REPLAY_DISABLE; 86 63 87 64 cmd.replay_enable.header.payload_bytes = sizeof(struct 
dmub_rb_cmd_replay_enable_data); ··· 149 174 copy_settings_data->digbe_inst = replay_context->digbe_inst; 150 175 copy_settings_data->digfe_inst = replay_context->digfe_inst; 151 176 152 - if (link->cur_link_settings.link_rate >= LINK_RATE_UHBR10) { 153 - if (pipe_ctx->stream_res.hpo_dp_stream_enc) 154 - copy_settings_data->hpo_stream_enc_inst = pipe_ctx->stream_res.hpo_dp_stream_enc->inst; 155 - else 156 - copy_settings_data->hpo_stream_enc_inst = 0; 157 - if (pipe_ctx->link_res.hpo_dp_link_enc) 158 - copy_settings_data->hpo_link_enc_inst = pipe_ctx->link_res.hpo_dp_link_enc->inst; 159 - else 160 - copy_settings_data->hpo_link_enc_inst = 0; 161 - } 162 - 163 177 if (pipe_ctx->plane_res.dpp) 164 178 copy_settings_data->dpp_inst = pipe_ctx->plane_res.dpp->inst; 165 179 else ··· 211 247 pCmd->header.type = DMUB_CMD__REPLAY; 212 248 pCmd->header.sub_type = DMUB_CMD__REPLAY_SET_COASTING_VTOTAL; 213 249 pCmd->header.payload_bytes = sizeof(struct dmub_cmd_replay_set_coasting_vtotal_data); 214 - pCmd->replay_set_coasting_vtotal_data.panel_inst = panel_inst; 215 250 pCmd->replay_set_coasting_vtotal_data.coasting_vtotal = (coasting_vtotal & 0xFFFF); 216 251 pCmd->replay_set_coasting_vtotal_data.coasting_vtotal_high = (coasting_vtotal & 0xFFFF0000) >> 16; 217 252
+1 -1
drivers/gpu/drm/amd/display/dc/dce/dmub_replay.h
··· 19 19 void (*replay_get_state)(struct dmub_replay *dmub, enum replay_state *state, 20 20 uint8_t panel_inst); 21 21 void (*replay_enable)(struct dmub_replay *dmub, bool enable, bool wait, 22 - uint8_t panel_inst, struct dc_link *link); 22 + uint8_t panel_inst); 23 23 bool (*replay_copy_settings)(struct dmub_replay *dmub, struct dc_link *link, 24 24 struct replay_context *replay_context, uint8_t panel_inst); 25 25 void (*replay_set_power_opt)(struct dmub_replay *dmub, unsigned int power_opt,
-20
drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
··· 4048 4048 */ 4049 4049 uint8_t digbe_inst; 4050 4050 /** 4051 - * @hpo_stream_enc_inst: HPO stream encoder instance 4052 - */ 4053 - uint8_t hpo_stream_enc_inst; 4054 - /** 4055 - * @hpo_link_enc_inst: HPO link encoder instance 4056 - */ 4057 - uint8_t hpo_link_enc_inst; 4058 - /** 4059 4051 * AUX HW instance. 4060 4052 */ 4061 4053 uint8_t aux_inst; ··· 4151 4159 * This does not support HDMI/DP2 for now. 4152 4160 */ 4153 4161 uint8_t phy_rate; 4154 - /** 4155 - * @hpo_stream_enc_inst: HPO stream encoder instance 4156 - */ 4157 - uint8_t hpo_stream_enc_inst; 4158 - /** 4159 - * @hpo_link_enc_inst: HPO link encoder instance 4160 - */ 4161 - uint8_t hpo_link_enc_inst; 4162 - /** 4163 - * @pad: Align structure to 4 byte boundary. 4164 - */ 4165 - uint8_t pad[2]; 4166 4162 }; 4167 4163 4168 4164 /**
+3
drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
··· 260 260 return MOD_HDCP_STATUS_FAILURE; 261 261 } 262 262 263 + if (!display) 264 + return MOD_HDCP_STATUS_DISPLAY_NOT_FOUND; 265 + 263 266 hdcp_cmd = (struct ta_hdcp_shared_memory *)psp->hdcp_context.context.mem_context.shared_buf; 264 267 265 268 mutex_lock(&psp->hdcp_context.mutex);
+25 -5
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 1697 1697 uint32_t *min_power_limit) 1698 1698 { 1699 1699 struct smu_table_context *table_context = &smu->smu_table; 1700 + struct smu_14_0_2_powerplay_table *powerplay_table = 1701 + table_context->power_play_table; 1700 1702 PPTable_t *pptable = table_context->driver_pptable; 1701 1703 CustomSkuTable_t *skutable = &pptable->CustomSkuTable; 1702 - uint32_t power_limit; 1704 + uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0; 1703 1705 uint32_t msg_limit = pptable->SkuTable.MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC]; 1704 1706 1705 1707 if (smu_v14_0_get_current_power_limit(smu, &power_limit)) ··· 1714 1712 if (default_power_limit) 1715 1713 *default_power_limit = power_limit; 1716 1714 1717 - if (max_power_limit) 1718 - *max_power_limit = msg_limit; 1715 + if (powerplay_table) { 1716 + if (smu->od_enabled && 1717 + smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) { 1718 + od_percent_upper = pptable->SkuTable.OverDriveLimitsBasicMax.Ppt; 1719 + od_percent_lower = pptable->SkuTable.OverDriveLimitsBasicMin.Ppt; 1720 + } else if (smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) { 1721 + od_percent_upper = 0; 1722 + od_percent_lower = pptable->SkuTable.OverDriveLimitsBasicMin.Ppt; 1723 + } 1724 + } 1719 1725 1720 - if (min_power_limit) 1721 - *min_power_limit = 0; 1726 + dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n", 1727 + od_percent_upper, od_percent_lower, power_limit); 1728 + 1729 + if (max_power_limit) { 1730 + *max_power_limit = msg_limit * (100 + od_percent_upper); 1731 + *max_power_limit /= 100; 1732 + } 1733 + 1734 + if (min_power_limit) { 1735 + *min_power_limit = power_limit * (100 + od_percent_lower); 1736 + *min_power_limit /= 100; 1737 + } 1722 1738 1723 1739 return 0; 1724 1740 }
+2 -2
drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
··· 1474 1474 1475 1475 dp = devm_drm_bridge_alloc(dev, struct analogix_dp_device, bridge, 1476 1476 &analogix_dp_bridge_funcs); 1477 - if (!dp) 1478 - return ERR_PTR(-ENOMEM); 1477 + if (IS_ERR(dp)) 1478 + return ERR_CAST(dp); 1479 1479 1480 1480 dp->dev = &pdev->dev; 1481 1481 dp->dpms_mode = DRM_MODE_DPMS_OFF;
+2
drivers/gpu/drm/drm_gpuvm.c
··· 2432 2432 * 2433 2433 * The expected usage is: 2434 2434 * 2435 + * .. code-block:: c 2436 + * 2435 2437 * vm_bind { 2436 2438 * struct drm_exec exec; 2437 2439 *
+21 -1
drivers/gpu/drm/drm_panic_qr.rs
··· 381 381 len: usize, 382 382 } 383 383 384 + // On arm32 architecture, dividing an `u64` by a constant will generate a call 385 + // to `__aeabi_uldivmod` which is not present in the kernel. 386 + // So use the multiply by inverse method for this architecture. 387 + fn div10(val: u64) -> u64 { 388 + if cfg!(target_arch = "arm") { 389 + let val_h = val >> 32; 390 + let val_l = val & 0xFFFFFFFF; 391 + let b_h: u64 = 0x66666666; 392 + let b_l: u64 = 0x66666667; 393 + 394 + let tmp1 = val_h * b_l + ((val_l * b_l) >> 32); 395 + let tmp2 = val_l * b_h + (tmp1 & 0xffffffff); 396 + let tmp3 = val_h * b_h + (tmp1 >> 32) + (tmp2 >> 32); 397 + 398 + tmp3 >> 2 399 + } else { 400 + val / 10 401 + } 402 + } 403 + 384 404 impl DecFifo { 385 405 fn push(&mut self, data: u64, len: usize) { 386 406 let mut chunk = data; ··· 409 389 } 410 390 for i in 0..len { 411 391 self.decimals[i] = (chunk % 10) as u8; 412 - chunk /= 10; 392 + chunk = div10(chunk); 413 393 } 414 394 self.len += len; 415 395 }
+12 -2
drivers/gpu/drm/hisilicon/hibmc/dp/dp_link.c
··· 325 325 return hibmc_dp_link_reduce_rate(dp); 326 326 } 327 327 328 + static void hibmc_dp_update_caps(struct hibmc_dp_dev *dp) 329 + { 330 + dp->link.cap.link_rate = dp->dpcd[DP_MAX_LINK_RATE]; 331 + if (dp->link.cap.link_rate > DP_LINK_BW_8_1 || !dp->link.cap.link_rate) 332 + dp->link.cap.link_rate = DP_LINK_BW_8_1; 333 + 334 + dp->link.cap.lanes = dp->dpcd[DP_MAX_LANE_COUNT] & DP_MAX_LANE_COUNT_MASK; 335 + if (dp->link.cap.lanes > HIBMC_DP_LANE_NUM_MAX) 336 + dp->link.cap.lanes = HIBMC_DP_LANE_NUM_MAX; 337 + } 338 + 328 339 int hibmc_dp_link_training(struct hibmc_dp_dev *dp) 329 340 { 330 341 struct hibmc_dp_link *link = &dp->link; ··· 345 334 if (ret) 346 335 drm_err(dp->dev, "dp aux read dpcd failed, ret: %d\n", ret); 347 336 348 - dp->link.cap.link_rate = dp->dpcd[DP_MAX_LINK_RATE]; 349 - dp->link.cap.lanes = 0x2; 337 + hibmc_dp_update_caps(dp); 350 338 351 339 ret = hibmc_dp_get_serdes_rate_cfg(dp); 352 340 if (ret < 0)
+13 -9
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
··· 32 32 33 33 DEFINE_DRM_GEM_FOPS(hibmc_fops); 34 34 35 - static const char *g_irqs_names_map[HIBMC_MAX_VECTORS] = { "vblank", "hpd" }; 35 + static const char *g_irqs_names_map[HIBMC_MAX_VECTORS] = { "hibmc-vblank", "hibmc-hpd" }; 36 36 37 37 static irqreturn_t hibmc_interrupt(int irq, void *arg) 38 38 { ··· 115 115 static int hibmc_kms_init(struct hibmc_drm_private *priv) 116 116 { 117 117 struct drm_device *dev = &priv->dev; 118 + struct drm_encoder *encoder; 119 + u32 clone_mask = 0; 118 120 int ret; 119 121 120 122 ret = drmm_mode_config_init(dev); ··· 155 153 drm_err(dev, "failed to init vdac: %d\n", ret); 156 154 return ret; 157 155 } 156 + 157 + drm_for_each_encoder(encoder, dev) 158 + clone_mask |= drm_encoder_mask(encoder); 159 + 160 + drm_for_each_encoder(encoder, dev) 161 + encoder->possible_clones = clone_mask; 158 162 159 163 return 0; 160 164 } ··· 285 277 static int hibmc_msi_init(struct drm_device *dev) 286 278 { 287 279 struct pci_dev *pdev = to_pci_dev(dev->dev); 288 - char name[32] = {0}; 289 280 int valid_irq_num; 290 281 int irq; 291 282 int ret; ··· 299 292 valid_irq_num = ret; 300 293 301 294 for (int i = 0; i < valid_irq_num; i++) { 302 - snprintf(name, ARRAY_SIZE(name) - 1, "%s-%s-%s", 303 - dev->driver->name, pci_name(pdev), g_irqs_names_map[i]); 304 - 305 295 irq = pci_irq_vector(pdev, i); 306 296 307 297 if (i) ··· 306 302 ret = devm_request_threaded_irq(&pdev->dev, irq, 307 303 hibmc_dp_interrupt, 308 304 hibmc_dp_hpd_isr, 309 - IRQF_SHARED, name, dev); 305 + IRQF_SHARED, g_irqs_names_map[i], dev); 310 306 else 311 307 ret = devm_request_irq(&pdev->dev, irq, hibmc_interrupt, 312 - IRQF_SHARED, name, dev); 308 + IRQF_SHARED, g_irqs_names_map[i], dev); 313 309 if (ret) { 314 310 drm_err(dev, "install irq failed: %d\n", ret); 315 311 return ret; ··· 327 323 328 324 ret = hibmc_hw_init(priv); 329 325 if (ret) 330 - goto err; 326 + return ret; 331 327 332 328 ret = drmm_vram_helper_init(dev, pci_resource_start(pdev, 0), 333 329 
pci_resource_len(pdev, 0)); 334 330 if (ret) { 335 331 drm_err(dev, "Error initializing VRAM MM; %d\n", ret); 336 - goto err; 332 + return ret; 337 333 } 338 334 339 335 ret = hibmc_kms_init(priv);
+1
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
··· 69 69 int hibmc_vdac_init(struct hibmc_drm_private *priv); 70 70 71 71 int hibmc_ddc_create(struct drm_device *drm_dev, struct hibmc_vdac *connector); 72 + void hibmc_ddc_del(struct hibmc_vdac *vdac); 72 73 73 74 int hibmc_dp_init(struct hibmc_drm_private *priv); 74 75
+5
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
··· 95 95 96 96 return i2c_bit_add_bus(&vdac->adapter); 97 97 } 98 + 99 + void hibmc_ddc_del(struct hibmc_vdac *vdac) 100 + { 101 + i2c_del_adapter(&vdac->adapter); 102 + }
+8 -3
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
··· 53 53 { 54 54 struct hibmc_vdac *vdac = to_hibmc_vdac(connector); 55 55 56 - i2c_del_adapter(&vdac->adapter); 56 + hibmc_ddc_del(vdac); 57 57 drm_connector_cleanup(connector); 58 58 } 59 59 ··· 110 110 ret = drmm_encoder_init(dev, encoder, NULL, DRM_MODE_ENCODER_DAC, NULL); 111 111 if (ret) { 112 112 drm_err(dev, "failed to init encoder: %d\n", ret); 113 - return ret; 113 + goto err; 114 114 } 115 115 116 116 drm_encoder_helper_add(encoder, &hibmc_encoder_helper_funcs); ··· 121 121 &vdac->adapter); 122 122 if (ret) { 123 123 drm_err(dev, "failed to init connector: %d\n", ret); 124 - return ret; 124 + goto err; 125 125 } 126 126 127 127 drm_connector_helper_add(connector, &hibmc_connector_helper_funcs); ··· 131 131 connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; 132 132 133 133 return 0; 134 + 135 + err: 136 + hibmc_ddc_del(vdac); 137 + 138 + return ret; 134 139 }
+4
drivers/gpu/drm/i915/display/intel_display_irq.c
··· 1506 1506 if (!(master_ctl & GEN11_GU_MISC_IRQ)) 1507 1507 return 0; 1508 1508 1509 + intel_display_rpm_assert_block(display); 1510 + 1509 1511 iir = intel_de_read(display, GEN11_GU_MISC_IIR); 1510 1512 if (likely(iir)) 1511 1513 intel_de_write(display, GEN11_GU_MISC_IIR, iir); 1514 + 1515 + intel_display_rpm_assert_unblock(display); 1512 1516 1513 1517 return iir; 1514 1518 }
+75 -18
drivers/gpu/drm/i915/display/intel_tc.c
··· 23 23 #include "intel_modeset_lock.h" 24 24 #include "intel_tc.h" 25 25 26 + #define DP_PIN_ASSIGNMENT_NONE 0x0 26 27 #define DP_PIN_ASSIGNMENT_C 0x3 27 28 #define DP_PIN_ASSIGNMENT_D 0x4 28 29 #define DP_PIN_ASSIGNMENT_E 0x5 ··· 67 66 enum tc_port_mode init_mode; 68 67 enum phy_fia phy_fia; 69 68 u8 phy_fia_idx; 69 + u8 max_lane_count; 70 70 }; 71 71 72 72 static enum intel_display_power_domain ··· 309 307 REG_FIELD_GET(TCSS_DDI_STATUS_PIN_ASSIGNMENT_MASK, val); 310 308 311 309 switch (pin_assignment) { 310 + case DP_PIN_ASSIGNMENT_NONE: 311 + return 0; 312 312 default: 313 313 MISSING_CASE(pin_assignment); 314 314 fallthrough; ··· 369 365 } 370 366 } 371 367 372 - int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port) 368 + static int get_max_lane_count(struct intel_tc_port *tc) 373 369 { 374 - struct intel_display *display = to_intel_display(dig_port); 375 - struct intel_tc_port *tc = to_tc_port(dig_port); 370 + struct intel_display *display = to_intel_display(tc->dig_port); 371 + struct intel_digital_port *dig_port = tc->dig_port; 376 372 377 - if (!intel_encoder_is_tc(&dig_port->base) || tc->mode != TC_PORT_DP_ALT) 373 + if (tc->mode != TC_PORT_DP_ALT) 378 374 return 4; 379 375 380 376 assert_tc_cold_blocked(tc); ··· 386 382 return mtl_tc_port_get_max_lane_count(dig_port); 387 383 388 384 return intel_tc_port_get_max_lane_count(dig_port); 385 + } 386 + 387 + static void read_pin_configuration(struct intel_tc_port *tc) 388 + { 389 + tc->max_lane_count = get_max_lane_count(tc); 390 + } 391 + 392 + int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port) 393 + { 394 + struct intel_display *display = to_intel_display(dig_port); 395 + struct intel_tc_port *tc = to_tc_port(dig_port); 396 + 397 + if (!intel_encoder_is_tc(&dig_port->base)) 398 + return 4; 399 + 400 + if (DISPLAY_VER(display) < 20) 401 + return get_max_lane_count(tc); 402 + 403 + return tc->max_lane_count; 389 404 } 390 405 391 406 void 
intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port, ··· 619 596 tc_cold_wref = __tc_cold_block(tc, &domain); 620 597 621 598 tc->mode = tc_phy_get_current_mode(tc); 622 - if (tc->mode != TC_PORT_DISCONNECTED) 599 + if (tc->mode != TC_PORT_DISCONNECTED) { 623 600 tc->lock_wakeref = tc_cold_block(tc); 601 + 602 + read_pin_configuration(tc); 603 + } 624 604 625 605 __tc_cold_unblock(tc, domain, tc_cold_wref); 626 606 } ··· 682 656 683 657 tc->lock_wakeref = tc_cold_block(tc); 684 658 685 - if (tc->mode == TC_PORT_TBT_ALT) 659 + if (tc->mode == TC_PORT_TBT_ALT) { 660 + read_pin_configuration(tc); 661 + 686 662 return true; 663 + } 687 664 688 665 if ((!tc_phy_is_ready(tc) || 689 666 !icl_tc_phy_take_ownership(tc, true)) && ··· 697 668 goto out_unblock_tc_cold; 698 669 } 699 670 671 + read_pin_configuration(tc); 700 672 701 673 if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes)) 702 674 goto out_release_phy; ··· 888 858 port_wakeref = intel_display_power_get(display, port_power_domain); 889 859 890 860 tc->mode = tc_phy_get_current_mode(tc); 891 - if (tc->mode != TC_PORT_DISCONNECTED) 861 + if (tc->mode != TC_PORT_DISCONNECTED) { 892 862 tc->lock_wakeref = tc_cold_block(tc); 863 + 864 + read_pin_configuration(tc); 865 + } 893 866 894 867 intel_display_power_put(display, port_power_domain, port_wakeref); 895 868 } ··· 906 873 907 874 if (tc->mode == TC_PORT_TBT_ALT) { 908 875 tc->lock_wakeref = tc_cold_block(tc); 876 + 877 + read_pin_configuration(tc); 878 + 909 879 return true; 910 880 } 911 881 ··· 929 893 } 930 894 931 895 tc->lock_wakeref = tc_cold_block(tc); 896 + 897 + read_pin_configuration(tc); 932 898 933 899 if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes)) 934 900 goto out_unblock_tc_cold; ··· 1162 1124 tc_cold_wref = __tc_cold_block(tc, &domain); 1163 1125 1164 1126 tc->mode = tc_phy_get_current_mode(tc); 1165 - if (tc->mode != TC_PORT_DISCONNECTED) 1127 + if (tc->mode != TC_PORT_DISCONNECTED) { 1166 1128 
tc->lock_wakeref = tc_cold_block(tc); 1129 + 1130 + read_pin_configuration(tc); 1131 + /* 1132 + * Set a valid lane count value for a DP-alt sink which got 1133 + * disconnected. The driver can only disable the output on this PHY. 1134 + */ 1135 + if (tc->max_lane_count == 0) 1136 + tc->max_lane_count = 4; 1137 + } 1167 1138 1168 1139 drm_WARN_ON(display->drm, 1169 1140 (tc->mode == TC_PORT_DP_ALT || tc->mode == TC_PORT_LEGACY) && ··· 1185 1138 { 1186 1139 tc->lock_wakeref = tc_cold_block(tc); 1187 1140 1188 - if (tc->mode == TC_PORT_TBT_ALT) 1141 + if (tc->mode == TC_PORT_TBT_ALT) { 1142 + read_pin_configuration(tc); 1143 + 1189 1144 return true; 1145 + } 1190 1146 1191 1147 if (!xelpdp_tc_phy_enable_tcss_power(tc, true)) 1192 1148 goto out_unblock_tccold; 1193 1149 1194 1150 xelpdp_tc_phy_take_ownership(tc, true); 1151 + 1152 + read_pin_configuration(tc); 1195 1153 1196 1154 if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes)) 1197 1155 goto out_release_phy; ··· 1278 1226 tc->phy_ops->get_hw_state(tc); 1279 1227 } 1280 1228 1281 - static bool tc_phy_is_ready_and_owned(struct intel_tc_port *tc, 1282 - bool phy_is_ready, bool phy_is_owned) 1229 + /* Is the PHY owned by display i.e. is it in legacy or DP-alt mode? 
*/ 1230 + static bool tc_phy_owned_by_display(struct intel_tc_port *tc, 1231 + bool phy_is_ready, bool phy_is_owned) 1283 1232 { 1284 1233 struct intel_display *display = to_intel_display(tc->dig_port); 1285 1234 1286 - drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready); 1235 + if (DISPLAY_VER(display) < 20) { 1236 + drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready); 1287 1237 1288 - return phy_is_ready && phy_is_owned; 1238 + return phy_is_ready && phy_is_owned; 1239 + } else { 1240 + return phy_is_owned; 1241 + } 1289 1242 } 1290 1243 1291 1244 static bool tc_phy_is_connected(struct intel_tc_port *tc, ··· 1301 1244 bool phy_is_owned = tc_phy_is_owned(tc); 1302 1245 bool is_connected; 1303 1246 1304 - if (tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned)) 1247 + if (tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned)) 1305 1248 is_connected = port_pll_type == ICL_PORT_DPLL_MG_PHY; 1306 1249 else 1307 1250 is_connected = port_pll_type == ICL_PORT_DPLL_DEFAULT; ··· 1409 1352 phy_is_ready = tc_phy_is_ready(tc); 1410 1353 phy_is_owned = tc_phy_is_owned(tc); 1411 1354 1412 - if (!tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned)) { 1355 + if (!tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned)) { 1413 1356 mode = get_tc_mode_in_phy_not_owned_state(tc, live_mode); 1414 1357 } else { 1415 1358 drm_WARN_ON(display->drm, live_mode == TC_PORT_TBT_ALT); ··· 1498 1441 intel_display_power_flush_work(display); 1499 1442 if (!intel_tc_cold_requires_aux_pw(dig_port)) { 1500 1443 enum intel_display_power_domain aux_domain; 1501 - bool aux_powered; 1502 1444 1503 1445 aux_domain = intel_aux_power_domain(dig_port); 1504 - aux_powered = intel_display_power_is_enabled(display, aux_domain); 1505 - drm_WARN_ON(display->drm, aux_powered); 1446 + if (intel_display_power_is_enabled(display, aux_domain)) 1447 + drm_dbg_kms(display->drm, "Port %s: AUX unexpectedly powered\n", 1448 + tc->port_name); 1506 1449 } 1507 1450 1508 1451 tc_phy_disconnect(tc);
+11 -9
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 634 634 static void icl_ctx_workarounds_init(struct intel_engine_cs *engine, 635 635 struct i915_wa_list *wal) 636 636 { 637 + struct drm_i915_private *i915 = engine->i915; 638 + 637 639 /* Wa_1406697149 (WaDisableBankHangMode:icl) */ 638 640 wa_write(wal, GEN8_L3CNTLREG, GEN8_ERRDETBCTRL); 639 641 ··· 671 669 672 670 /* Wa_1406306137:icl,ehl */ 673 671 wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN4, GEN11_DIS_PICK_2ND_EU); 672 + 673 + if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) { 674 + /* 675 + * Disable Repacking for Compression (masked R/W access) 676 + * before rendering compressed surfaces for display. 677 + */ 678 + wa_masked_en(wal, CACHE_MODE_0_GEN7, 679 + DISABLE_REPACKING_FOR_COMPRESSION); 680 + } 674 681 } 675 682 676 683 /* ··· 2315 2304 RING_PSMI_CTL(RENDER_RING_BASE), 2316 2305 GEN12_WAIT_FOR_EVENT_POWER_DOWN_DISABLE | 2317 2306 GEN8_RC_SEMA_IDLE_MSG_DISABLE); 2318 - } 2319 - 2320 - if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) { 2321 - /* 2322 - * "Disable Repacking for Compression (masked R/W access) 2323 - * before rendering compressed surfaces for display." 2324 - */ 2325 - wa_masked_en(wal, CACHE_MODE_0_GEN7, 2326 - DISABLE_REPACKING_FOR_COMPRESSION); 2327 2307 } 2328 2308 2329 2309 if (GRAPHICS_VER(i915) == 11) {
+3 -3
drivers/gpu/drm/nouveau/nouveau_exec.c
··· 60 60 * virtual address in the GPU's VA space there is no guarantee that the actual 61 61 * mappings are created in the GPU's MMU. If the given memory is swapped out 62 62 * at the time the bind operation is executed the kernel will stash the mapping 63 - * details into it's internal alloctor and create the actual MMU mappings once 63 + * details into it's internal allocator and create the actual MMU mappings once 64 64 * the memory is swapped back in. While this is transparent for userspace, it is 65 65 * guaranteed that all the backing memory is swapped back in and all the memory 66 66 * mappings, as requested by userspace previously, are actually mapped once the 67 67 * DRM_NOUVEAU_EXEC ioctl is called to submit an exec job. 68 68 * 69 69 * A VM_BIND job can be executed either synchronously or asynchronously. If 70 - * exectued asynchronously, userspace may provide a list of syncobjs this job 70 + * executed asynchronously, userspace may provide a list of syncobjs this job 71 71 * will wait for and/or a list of syncobj the kernel will signal once the 72 72 * VM_BIND job finished execution. If executed synchronously the ioctl will 73 73 * block until the bind job is finished. For synchronous jobs the kernel will ··· 82 82 * Since VM_BIND jobs update the GPU's VA space on job submit, EXEC jobs do have 83 83 * an up to date view of the VA space. However, the actual mappings might still 84 84 * be pending. Hence, EXEC jobs require to have the particular fences - of 85 - * the corresponding VM_BIND jobs they depent on - attached to them. 85 + * the corresponding VM_BIND jobs they depend on - attached to them. 86 86 */ 87 87 88 88 static int
+2 -1
drivers/gpu/drm/nouveau/nvif/vmm.c
··· 219 219 case RAW: args->type = NVIF_VMM_V0_TYPE_RAW; break; 220 220 default: 221 221 WARN_ON(1); 222 - return -EINVAL; 222 + ret = -EINVAL; 223 + goto done; 223 224 } 224 225 225 226 memcpy(args->data, argv, argc);
+2 -2
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
··· 325 325 326 326 rpc = r535_gsp_msgq_peek(gsp, sizeof(*rpc), info.retries); 327 327 if (IS_ERR_OR_NULL(rpc)) { 328 - kfree(buf); 328 + kvfree(buf); 329 329 return rpc; 330 330 } 331 331 ··· 334 334 335 335 rpc = r535_gsp_msgq_recv_one_elem(gsp, &info); 336 336 if (IS_ERR_OR_NULL(rpc)) { 337 - kfree(buf); 337 + kvfree(buf); 338 338 return rpc; 339 339 } 340 340
+2 -1
drivers/gpu/drm/nova/file.rs
··· 39 39 _ => return Err(EINVAL), 40 40 }; 41 41 42 - getparam.set_value(value); 42 + #[allow(clippy::useless_conversion)] 43 + getparam.set_value(value.into()); 43 44 44 45 Ok(0) 45 46 }
+1
drivers/gpu/drm/rockchip/Kconfig
··· 53 53 bool "Rockchip cdn DP" 54 54 depends on EXTCON=y || (EXTCON=m && DRM_ROCKCHIP=m) 55 55 select DRM_DISPLAY_HELPER 56 + select DRM_BRIDGE_CONNECTOR 56 57 select DRM_DISPLAY_DP_HELPER 57 58 help 58 59 This selects support for Rockchip SoC specific extensions
+5 -4
drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
··· 2579 2579 } 2580 2580 2581 2581 /* 2582 - * The window registers are only updated when config done is written. 2583 - * Until that they read back the old value. As we read-modify-write 2584 - * these registers mark them as non-volatile. This makes sure we read 2585 - * the new values from the regmap register cache. 2582 + * The window and video port registers are only updated when config 2583 + * done is written. Until that they read back the old value. As we 2584 + * read-modify-write these registers mark them as non-volatile. This 2585 + * makes sure we read the new values from the regmap register cache. 2586 2586 */ 2587 2587 static const struct regmap_range vop2_nonvolatile_range[] = { 2588 + regmap_reg_range(RK3568_VP0_CTRL_BASE, RK3588_VP3_CTRL_BASE + 255), 2588 2589 regmap_reg_range(0x1000, 0x23ff), 2589 2590 }; 2590 2591
+2 -1
drivers/gpu/drm/tests/drm_format_helper_test.c
··· 1033 1033 NULL : &result->dst_pitch; 1034 1034 1035 1035 drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); 1036 - buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32)); 1036 + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); 1037 1037 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 1038 1038 1039 1039 buf = dst.vaddr; /* restore original value of buf */ 1040 1040 memset(buf, 0, dst_size); 1041 1041 1042 1042 drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); 1043 + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); 1043 1044 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 1044 1045 } 1045 1046
+1 -1
drivers/gpu/drm/xe/xe_migrate.c
··· 408 408 409 409 /* Special layout, prepared below.. */ 410 410 vm = xe_vm_create(xe, XE_VM_FLAG_MIGRATION | 411 - XE_VM_FLAG_SET_TILE_ID(tile)); 411 + XE_VM_FLAG_SET_TILE_ID(tile), NULL); 412 412 if (IS_ERR(vm)) 413 413 return ERR_CAST(vm); 414 414
+1 -1
drivers/gpu/drm/xe/xe_pxp_submit.c
··· 101 101 xe_assert(xe, hwe); 102 102 103 103 /* PXP instructions must be issued from PPGTT */ 104 - vm = xe_vm_create(xe, XE_VM_FLAG_GSC); 104 + vm = xe_vm_create(xe, XE_VM_FLAG_GSC, NULL); 105 105 if (IS_ERR(vm)) 106 106 return PTR_ERR(vm); 107 107
+23 -25
drivers/gpu/drm/xe/xe_vm.c
··· 1640 1640 } 1641 1641 } 1642 1642 1643 - struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags) 1643 + struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef) 1644 1644 { 1645 1645 struct drm_gem_object *vm_resv_obj; 1646 1646 struct xe_vm *vm; ··· 1661 1661 vm->xe = xe; 1662 1662 1663 1663 vm->size = 1ull << xe->info.va_bits; 1664 - 1665 1664 vm->flags = flags; 1666 1665 1666 + if (xef) 1667 + vm->xef = xe_file_get(xef); 1667 1668 /** 1668 1669 * GSC VMs are kernel-owned, only used for PXP ops and can sometimes be 1669 1670 * manipulated under the PXP mutex. However, the PXP mutex can be taken ··· 1795 1794 if (number_tiles > 1) 1796 1795 vm->composite_fence_ctx = dma_fence_context_alloc(1); 1797 1796 1797 + if (xef && xe->info.has_asid) { 1798 + u32 asid; 1799 + 1800 + down_write(&xe->usm.lock); 1801 + err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm, 1802 + XA_LIMIT(1, XE_MAX_ASID - 1), 1803 + &xe->usm.next_asid, GFP_KERNEL); 1804 + up_write(&xe->usm.lock); 1805 + if (err < 0) 1806 + goto err_unlock_close; 1807 + 1808 + vm->usm.asid = asid; 1809 + } 1810 + 1798 1811 trace_xe_vm_create(vm); 1799 1812 1800 1813 return vm; ··· 1829 1814 for_each_tile(tile, xe, id) 1830 1815 xe_range_fence_tree_fini(&vm->rftree[id]); 1831 1816 ttm_lru_bulk_move_fini(&xe->ttm, &vm->lru_bulk_move); 1817 + if (vm->xef) 1818 + xe_file_put(vm->xef); 1832 1819 kfree(vm); 1833 1820 if (flags & XE_VM_FLAG_LR_MODE) 1834 1821 xe_pm_runtime_put(xe); ··· 2076 2059 struct xe_device *xe = to_xe_device(dev); 2077 2060 struct xe_file *xef = to_xe_file(file); 2078 2061 struct drm_xe_vm_create *args = data; 2079 - struct xe_tile *tile; 2080 2062 struct xe_vm *vm; 2081 - u32 id, asid; 2063 + u32 id; 2082 2064 int err; 2083 2065 u32 flags = 0; 2084 2066 ··· 2113 2097 if (args->flags & DRM_XE_VM_CREATE_FLAG_FAULT_MODE) 2114 2098 flags |= XE_VM_FLAG_FAULT_MODE; 2115 2099 2116 - vm = xe_vm_create(xe, flags); 2100 + vm = xe_vm_create(xe, flags, xef); 2117 2101 if 
(IS_ERR(vm)) 2118 2102 return PTR_ERR(vm); 2119 - 2120 - if (xe->info.has_asid) { 2121 - down_write(&xe->usm.lock); 2122 - err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm, 2123 - XA_LIMIT(1, XE_MAX_ASID - 1), 2124 - &xe->usm.next_asid, GFP_KERNEL); 2125 - up_write(&xe->usm.lock); 2126 - if (err < 0) 2127 - goto err_close_and_put; 2128 - 2129 - vm->usm.asid = asid; 2130 - } 2131 - 2132 - vm->xef = xe_file_get(xef); 2133 - 2134 - /* Record BO memory for VM pagetable created against client */ 2135 - for_each_tile(tile, xe, id) 2136 - if (vm->pt_root[id]) 2137 - xe_drm_client_add_bo(vm->xef->client, vm->pt_root[id]->bo); 2138 2103 2139 2104 #if IS_ENABLED(CONFIG_DRM_XE_DEBUG_MEM) 2140 2105 /* Warning: Security issue - never enable by default */ ··· 3418 3421 free_bind_ops: 3419 3422 if (args->num_binds > 1) 3420 3423 kvfree(*bind_ops); 3424 + *bind_ops = NULL; 3421 3425 return err; 3422 3426 } 3423 3427 ··· 3525 3527 struct xe_exec_queue *q = NULL; 3526 3528 u32 num_syncs, num_ufence = 0; 3527 3529 struct xe_sync_entry *syncs = NULL; 3528 - struct drm_xe_vm_bind_op *bind_ops; 3530 + struct drm_xe_vm_bind_op *bind_ops = NULL; 3529 3531 struct xe_vma_ops vops; 3530 3532 struct dma_fence *fence; 3531 3533 int err;
+1 -1
drivers/gpu/drm/xe/xe_vm.h
··· 26 26 struct xe_svm_range; 27 27 struct drm_exec; 28 28 29 - struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags); 29 + struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef); 30 30 31 31 struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id); 32 32 int xe_vma_cmp_vma_cb(const void *key, const struct rb_node *node);
+12 -8
drivers/i2c/busses/i2c-rtl9300.c
··· 143 143 return -EIO; 144 144 145 145 for (i = 0; i < len; i++) { 146 - if (i % 4 == 0) 147 - vals[i/4] = 0; 148 - vals[i/4] <<= 8; 149 - vals[i/4] |= buf[i]; 146 + unsigned int shift = (i % 4) * 8; 147 + unsigned int reg = i / 4; 148 + 149 + vals[reg] |= buf[i] << shift; 150 150 } 151 151 152 152 return regmap_bulk_write(i2c->regmap, i2c->reg_base + RTL9300_I2C_MST_DATA_WORD0, ··· 175 175 return ret; 176 176 177 177 ret = regmap_read_poll_timeout(i2c->regmap, i2c->reg_base + RTL9300_I2C_MST_CTRL1, 178 - val, !(val & RTL9300_I2C_MST_CTRL1_I2C_TRIG), 100, 2000); 178 + val, !(val & RTL9300_I2C_MST_CTRL1_I2C_TRIG), 100, 100000); 179 179 if (ret) 180 180 return ret; 181 181 ··· 281 281 ret = rtl9300_i2c_reg_addr_set(i2c, command, 1); 282 282 if (ret) 283 283 goto out_unlock; 284 - ret = rtl9300_i2c_config_xfer(i2c, chan, addr, data->block[0]); 284 + if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX) { 285 + ret = -EINVAL; 286 + goto out_unlock; 287 + } 288 + ret = rtl9300_i2c_config_xfer(i2c, chan, addr, data->block[0] + 1); 285 289 if (ret) 286 290 goto out_unlock; 287 291 if (read_write == I2C_SMBUS_WRITE) { 288 - ret = rtl9300_i2c_write(i2c, &data->block[1], data->block[0]); 292 + ret = rtl9300_i2c_write(i2c, &data->block[0], data->block[0] + 1); 289 293 if (ret) 290 294 goto out_unlock; 291 295 } 292 - len = data->block[0]; 296 + len = data->block[0] + 1; 293 297 break; 294 298 295 299 default:
+1 -1
drivers/iio/accel/sca3300.c
··· 477 477 struct iio_dev *indio_dev = pf->indio_dev; 478 478 struct sca3300_data *data = iio_priv(indio_dev); 479 479 int bit, ret, val, i = 0; 480 - IIO_DECLARE_BUFFER_WITH_TS(s16, channels, SCA3300_SCAN_MAX); 480 + IIO_DECLARE_BUFFER_WITH_TS(s16, channels, SCA3300_SCAN_MAX) = { }; 481 481 482 482 iio_for_each_active_channel(indio_dev, bit) { 483 483 ret = sca3300_read_reg(data, indio_dev->channels[bit].address, &val);
+1 -1
drivers/iio/adc/Kconfig
··· 1300 1300 1301 1301 config ROHM_BD79124 1302 1302 tristate "Rohm BD79124 ADC driver" 1303 - depends on I2C 1303 + depends on I2C && GPIOLIB 1304 1304 select REGMAP_I2C 1305 1305 select IIO_ADC_HELPER 1306 1306 help
+7 -7
drivers/iio/adc/ad7124.c
··· 849 849 static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan_spec *chan) 850 850 { 851 851 struct device *dev = &st->sd.spi->dev; 852 - struct ad7124_channel *ch = &st->channels[chan->channel]; 852 + struct ad7124_channel *ch = &st->channels[chan->address]; 853 853 int ret; 854 854 855 855 if (ch->syscalib_mode == AD7124_SYSCALIB_ZERO_SCALE) { ··· 865 865 if (ret < 0) 866 866 return ret; 867 867 868 - dev_dbg(dev, "offset for channel %d after zero-scale calibration: 0x%x\n", 869 - chan->channel, ch->cfg.calibration_offset); 868 + dev_dbg(dev, "offset for channel %lu after zero-scale calibration: 0x%x\n", 869 + chan->address, ch->cfg.calibration_offset); 870 870 } else { 871 871 ch->cfg.calibration_gain = st->gain_default; 872 872 ··· 880 880 if (ret < 0) 881 881 return ret; 882 882 883 - dev_dbg(dev, "gain for channel %d after full-scale calibration: 0x%x\n", 884 - chan->channel, ch->cfg.calibration_gain); 883 + dev_dbg(dev, "gain for channel %lu after full-scale calibration: 0x%x\n", 884 + chan->address, ch->cfg.calibration_gain); 885 885 } 886 886 887 887 return 0; ··· 924 924 { 925 925 struct ad7124_state *st = iio_priv(indio_dev); 926 926 927 - st->channels[chan->channel].syscalib_mode = mode; 927 + st->channels[chan->address].syscalib_mode = mode; 928 928 929 929 return 0; 930 930 } ··· 934 934 { 935 935 struct ad7124_state *st = iio_priv(indio_dev); 936 936 937 - return st->channels[chan->channel].syscalib_mode; 937 + return st->channels[chan->address].syscalib_mode; 938 938 } 939 939 940 940 static const struct iio_enum ad7124_syscalib_mode_enum = {
+75 -12
drivers/iio/adc/ad7173.c
··· 200 200 /* 201 201 * Following fields are used to compare equality. If you 202 202 * make adaptations in it, you most likely also have to adapt 203 - * ad7173_find_live_config(), too. 203 + * ad7173_is_setup_equal(), too. 204 204 */ 205 205 struct_group(config_props, 206 206 bool bipolar; ··· 561 561 st->config_usage_counter = 0; 562 562 } 563 563 564 - static struct ad7173_channel_config * 565 - ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg) 564 + /** 565 + * ad7173_is_setup_equal - Compare two channel setups 566 + * @cfg1: First channel configuration 567 + * @cfg2: Second channel configuration 568 + * 569 + * Compares all configuration options that affect the registers connected to 570 + * SETUP_SEL, namely CONFIGx, FILTERx, GAINx and OFFSETx. 571 + * 572 + * Returns: true if the setups are identical, false otherwise 573 + */ 574 + static bool ad7173_is_setup_equal(const struct ad7173_channel_config *cfg1, 575 + const struct ad7173_channel_config *cfg2) 566 576 { 567 - struct ad7173_channel_config *cfg_aux; 568 - int i; 569 - 570 577 /* 571 578 * This is just to make sure that the comparison is adapted after 572 579 * struct ad7173_channel_config was changed. 
··· 586 579 u8 ref_sel; 587 580 })); 588 581 582 + return cfg1->bipolar == cfg2->bipolar && 583 + cfg1->input_buf == cfg2->input_buf && 584 + cfg1->odr == cfg2->odr && 585 + cfg1->ref_sel == cfg2->ref_sel; 586 + } 587 + 588 + static struct ad7173_channel_config * 589 + ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg) 590 + { 591 + struct ad7173_channel_config *cfg_aux; 592 + int i; 593 + 589 594 for (i = 0; i < st->num_channels; i++) { 590 595 cfg_aux = &st->channels[i].cfg; 591 596 592 - if (cfg_aux->live && 593 - cfg->bipolar == cfg_aux->bipolar && 594 - cfg->input_buf == cfg_aux->input_buf && 595 - cfg->odr == cfg_aux->odr && 596 - cfg->ref_sel == cfg_aux->ref_sel) 597 + if (cfg_aux->live && ad7173_is_setup_equal(cfg, cfg_aux)) 597 598 return cfg_aux; 598 599 } 599 600 return NULL; ··· 1243 1228 const unsigned long *scan_mask) 1244 1229 { 1245 1230 struct ad7173_state *st = iio_priv(indio_dev); 1246 - int i, ret; 1231 + int i, j, k, ret; 1247 1232 1248 1233 for (i = 0; i < indio_dev->num_channels; i++) { 1249 1234 if (test_bit(i, scan_mask)) ··· 1252 1237 ret = ad_sd_write_reg(&st->sd, AD7173_REG_CH(i), 2, 0); 1253 1238 if (ret < 0) 1254 1239 return ret; 1240 + } 1241 + 1242 + /* 1243 + * On some chips, there are more channels than setups, so if there were 1244 + * more unique setups requested than the number of available slots, 1245 + * ad7173_set_channel() will have written over some of the slots. We 1246 + * can detect this by making sure each assigned cfg_slot matches the 1247 + * requested configuration. If it doesn't, we know that the slot was 1248 + * overwritten by a different channel. 
1249 + */ 1250 + for_each_set_bit(i, scan_mask, indio_dev->num_channels) { 1251 + const struct ad7173_channel_config *cfg1, *cfg2; 1252 + 1253 + cfg1 = &st->channels[i].cfg; 1254 + 1255 + for_each_set_bit(j, scan_mask, indio_dev->num_channels) { 1256 + cfg2 = &st->channels[j].cfg; 1257 + 1258 + /* 1259 + * Only compare configs that are assigned to the same 1260 + * SETUP_SEL slot and don't compare a channel to itself. 1261 + */ 1262 + if (i == j || cfg1->cfg_slot != cfg2->cfg_slot) 1263 + continue; 1264 + 1265 + /* 1266 + * If we find two different configs trying to use the 1267 + * same SETUP_SEL slot, then we know that we 1268 + * have too many unique configurations requested for 1269 + * the available slots and at least one was overwritten. 1270 + */ 1271 + if (!ad7173_is_setup_equal(cfg1, cfg2)) { 1272 + /* 1273 + * At this point, there isn't a way to tell 1274 + * which setups are actually programmed in the 1275 + * ADC anymore, so we could read them back to 1276 + * see, but it is simpler to just turn off all 1277 + * of the live flags so that everything gets 1278 + * reprogrammed on the next attempt to read a sample. 1279 + */ 1280 + for (k = 0; k < st->num_channels; k++) 1281 + st->channels[k].cfg.live = false; 1282 + 1283 + dev_err(&st->sd.spi->dev, 1284 + "Too many unique channel configurations requested for scan\n"); 1285 + return -EINVAL; 1286 + } 1287 + } 1255 1288 } 1256 1289 1257 1290 return 0;
+1
drivers/iio/adc/ad7380.c
··· 873 873 .has_hardware_gain = true, 874 874 .available_scan_masks = ad7380_4_channel_scan_masks, 875 875 .timing_specs = &ad7380_4_timing, 876 + .max_conversion_rate_hz = 4 * MEGA, 876 877 }; 877 878 878 879 static const struct spi_offload_config ad7380_offload_config = {
+10 -23
drivers/iio/adc/rzg2l_adc.c
··· 89 89 struct completion completion; 90 90 struct mutex lock; 91 91 u16 last_val[RZG2L_ADC_MAX_CHANNELS]; 92 - bool was_rpm_active; 93 92 }; 94 93 95 94 /** ··· 427 428 if (!indio_dev) 428 429 return -ENOMEM; 429 430 431 + platform_set_drvdata(pdev, indio_dev); 432 + 430 433 adc = iio_priv(indio_dev); 431 434 432 435 adc->hw_params = device_get_match_data(dev); ··· 460 459 ret = devm_pm_runtime_enable(dev); 461 460 if (ret) 462 461 return ret; 463 - 464 - platform_set_drvdata(pdev, indio_dev); 465 462 466 463 ret = rzg2l_adc_hw_init(dev, adc); 467 464 if (ret) ··· 540 541 }; 541 542 int ret; 542 543 543 - if (pm_runtime_suspended(dev)) { 544 - adc->was_rpm_active = false; 545 - } else { 546 - ret = pm_runtime_force_suspend(dev); 547 - if (ret) 548 - return ret; 549 - adc->was_rpm_active = true; 550 - } 544 + ret = pm_runtime_force_suspend(dev); 545 + if (ret) 546 + return ret; 551 547 552 548 ret = reset_control_bulk_assert(ARRAY_SIZE(resets), resets); 553 549 if (ret) ··· 551 557 return 0; 552 558 553 559 rpm_restore: 554 - if (adc->was_rpm_active) 555 - pm_runtime_force_resume(dev); 556 - 560 + pm_runtime_force_resume(dev); 557 561 return ret; 558 562 } 559 563 ··· 569 577 if (ret) 570 578 return ret; 571 579 572 - if (adc->was_rpm_active) { 573 - ret = pm_runtime_force_resume(dev); 574 - if (ret) 575 - goto resets_restore; 576 - } 580 + ret = pm_runtime_force_resume(dev); 581 + if (ret) 582 + goto resets_restore; 577 583 578 584 ret = rzg2l_adc_hw_init(dev, adc); 579 585 if (ret) ··· 580 590 return 0; 581 591 582 592 rpm_restore: 583 - if (adc->was_rpm_active) { 584 - pm_runtime_mark_last_busy(dev); 585 - pm_runtime_put_autosuspend(dev); 586 - } 593 + pm_runtime_force_suspend(dev); 587 594 resets_restore: 588 595 reset_control_bulk_assert(ARRAY_SIZE(resets), resets); 589 596 return ret;
+5 -1
drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
··· 32 32 goto exit; 33 33 34 34 *temp = (s16)be16_to_cpup(raw); 35 + /* 36 + * Temperature data is invalid if both accel and gyro are off. 37 + * Return -EBUSY in this case. 38 + */ 35 39 if (*temp == INV_ICM42600_DATA_INVALID) 36 - ret = -EINVAL; 40 + ret = -EBUSY; 37 41 38 42 exit: 39 43 mutex_unlock(&st->lock);
+1 -1
drivers/iio/light/as73211.c
··· 639 639 struct { 640 640 __le16 chan[4]; 641 641 aligned_s64 ts; 642 - } scan; 642 + } scan = { }; 643 643 int data_result, ret; 644 644 645 645 mutex_lock(&data->mutex);
+5 -4
drivers/iio/pressure/bmp280-core.c
··· 3213 3213 3214 3214 /* Bring chip out of reset if there is an assigned GPIO line */ 3215 3215 gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); 3216 + if (IS_ERR(gpiod)) 3217 + return dev_err_probe(dev, PTR_ERR(gpiod), "failed to get reset GPIO\n"); 3218 + 3216 3219 /* Deassert the signal */ 3217 - if (gpiod) { 3218 - dev_info(dev, "release reset\n"); 3219 - gpiod_set_value(gpiod, 0); 3220 - } 3220 + dev_info(dev, "release reset\n"); 3221 + gpiod_set_value(gpiod, 0); 3221 3222 3222 3223 data->regmap = regmap; 3223 3224
+10 -4
drivers/iio/proximity/isl29501.c
··· 938 938 struct iio_dev *indio_dev = pf->indio_dev; 939 939 struct isl29501_private *isl29501 = iio_priv(indio_dev); 940 940 const unsigned long *active_mask = indio_dev->active_scan_mask; 941 - u32 buffer[4] __aligned(8) = {}; /* 1x16-bit + naturally aligned ts */ 941 + u32 value; 942 + struct { 943 + u16 data; 944 + aligned_s64 ts; 945 + } scan = { }; 942 946 943 - if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask)) 944 - isl29501_register_read(isl29501, REG_DISTANCE, buffer); 947 + if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask)) { 948 + isl29501_register_read(isl29501, REG_DISTANCE, &value); 949 + scan.data = value; 950 + } 945 951 946 - iio_push_to_buffers_with_timestamp(indio_dev, buffer, pf->timestamp); 952 + iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp); 947 953 iio_trigger_notify_done(indio_dev->trig); 948 954 949 955 return IRQ_HANDLED;
+16 -10
drivers/iio/temperature/maxim_thermocouple.c
··· 11 11 #include <linux/module.h> 12 12 #include <linux/err.h> 13 13 #include <linux/spi/spi.h> 14 + #include <linux/types.h> 14 15 #include <linux/iio/iio.h> 15 16 #include <linux/iio/sysfs.h> 16 17 #include <linux/iio/trigger.h> ··· 122 121 struct spi_device *spi; 123 122 const struct maxim_thermocouple_chip *chip; 124 123 char tc_type; 125 - 126 - u8 buffer[16] __aligned(IIO_DMA_MINALIGN); 124 + /* Buffer for reading up to 2 hardware channels. */ 125 + struct { 126 + union { 127 + __be16 raw16; 128 + __be32 raw32; 129 + __be16 raw[2]; 130 + }; 131 + aligned_s64 timestamp; 132 + } buffer __aligned(IIO_DMA_MINALIGN); 127 133 }; 128 134 129 135 static int maxim_thermocouple_read(struct maxim_thermocouple_data *data, ··· 138 130 { 139 131 unsigned int storage_bytes = data->chip->read_size; 140 132 unsigned int shift = chan->scan_type.shift + (chan->address * 8); 141 - __be16 buf16; 142 - __be32 buf32; 143 133 int ret; 144 134 145 135 switch (storage_bytes) { 146 136 case 2: 147 - ret = spi_read(data->spi, (void *)&buf16, storage_bytes); 148 - *val = be16_to_cpu(buf16); 137 + ret = spi_read(data->spi, &data->buffer.raw16, storage_bytes); 138 + *val = be16_to_cpu(data->buffer.raw16); 149 139 break; 150 140 case 4: 151 - ret = spi_read(data->spi, (void *)&buf32, storage_bytes); 152 - *val = be32_to_cpu(buf32); 141 + ret = spi_read(data->spi, &data->buffer.raw32, storage_bytes); 142 + *val = be32_to_cpu(data->buffer.raw32); 153 143 break; 154 144 default: 155 145 ret = -EINVAL; ··· 172 166 struct maxim_thermocouple_data *data = iio_priv(indio_dev); 173 167 int ret; 174 168 175 - ret = spi_read(data->spi, data->buffer, data->chip->read_size); 169 + ret = spi_read(data->spi, data->buffer.raw, data->chip->read_size); 176 170 if (!ret) { 177 - iio_push_to_buffers_with_ts(indio_dev, data->buffer, 171 + iio_push_to_buffers_with_ts(indio_dev, &data->buffer, 178 172 sizeof(data->buffer), 179 173 iio_get_time_ns(indio_dev)); 180 174 }
+2 -2
drivers/infiniband/core/umem_odp.c
··· 115 115 116 116 out_free_map: 117 117 if (ib_uses_virt_dma(dev)) 118 - kfree(map->pfn_list); 118 + kvfree(map->pfn_list); 119 119 else 120 120 hmm_dma_map_free(dev->dma_device, map); 121 121 return ret; ··· 287 287 mutex_unlock(&umem_odp->umem_mutex); 288 288 mmu_interval_notifier_remove(&umem_odp->notifier); 289 289 if (ib_uses_virt_dma(dev)) 290 - kfree(umem_odp->map.pfn_list); 290 + kvfree(umem_odp->map.pfn_list); 291 291 else 292 292 hmm_dma_map_free(dev->dma_device, &umem_odp->map); 293 293 }
+2 -6
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 1921 1921 struct bnxt_re_srq *srq = container_of(ib_srq, struct bnxt_re_srq, 1922 1922 ib_srq); 1923 1923 struct bnxt_re_dev *rdev = srq->rdev; 1924 - int rc; 1925 1924 1926 1925 switch (srq_attr_mask) { 1927 1926 case IB_SRQ_MAX_WR: ··· 1932 1933 return -EINVAL; 1933 1934 1934 1935 srq->qplib_srq.threshold = srq_attr->srq_limit; 1935 - rc = bnxt_qplib_modify_srq(&rdev->qplib_res, &srq->qplib_srq); 1936 - if (rc) { 1937 - ibdev_err(&rdev->ibdev, "Modify HW SRQ failed!"); 1938 - return rc; 1939 - } 1936 + bnxt_qplib_srq_arm_db(&srq->qplib_srq.dbinfo, srq->qplib_srq.threshold); 1937 + 1940 1938 /* On success, update the shadow */ 1941 1939 srq->srq_limit = srq_attr->srq_limit; 1942 1940 /* No need to Build and send response back to udata */
+23
drivers/infiniband/hw/bnxt_re/main.c
··· 2017 2017 rdev->nqr = NULL; 2018 2018 } 2019 2019 2020 + /* When DEL_GID fails, driver is not freeing GID ctx memory. 2021 + * To avoid the memory leak, free the memory during unload 2022 + */ 2023 + static void bnxt_re_free_gid_ctx(struct bnxt_re_dev *rdev) 2024 + { 2025 + struct bnxt_qplib_sgid_tbl *sgid_tbl = &rdev->qplib_res.sgid_tbl; 2026 + struct bnxt_re_gid_ctx *ctx, **ctx_tbl; 2027 + int i; 2028 + 2029 + if (!sgid_tbl->active) 2030 + return; 2031 + 2032 + ctx_tbl = sgid_tbl->ctx; 2033 + for (i = 0; i < sgid_tbl->max; i++) { 2034 + if (sgid_tbl->hw_id[i] == 0xFFFF) 2035 + continue; 2036 + 2037 + ctx = ctx_tbl[i]; 2038 + kfree(ctx); 2039 + } 2040 + } 2041 + 2020 2042 static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev, u8 op_type) 2021 2043 { 2022 2044 u8 type; ··· 2052 2030 if (test_and_clear_bit(BNXT_RE_FLAG_QOS_WORK_REG, &rdev->flags)) 2053 2031 cancel_delayed_work_sync(&rdev->worker); 2054 2032 2033 + bnxt_re_free_gid_ctx(rdev); 2055 2034 if (test_and_clear_bit(BNXT_RE_FLAG_RESOURCES_INITIALIZED, 2056 2035 &rdev->flags)) 2057 2036 bnxt_re_cleanup_res(rdev);
+1 -29
drivers/infiniband/hw/bnxt_re/qplib_fp.c
··· 705 705 srq->dbinfo.db = srq->dpi->dbr; 706 706 srq->dbinfo.max_slot = 1; 707 707 srq->dbinfo.priv_db = res->dpi_tbl.priv_db; 708 - if (srq->threshold) 709 - bnxt_qplib_armen_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ_ARMENA); 710 - srq->arm_req = false; 708 + bnxt_qplib_armen_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ_ARMENA); 711 709 712 710 return 0; 713 711 fail: ··· 713 715 kfree(srq->swq); 714 716 715 717 return rc; 716 - } 717 - 718 - int bnxt_qplib_modify_srq(struct bnxt_qplib_res *res, 719 - struct bnxt_qplib_srq *srq) 720 - { 721 - struct bnxt_qplib_hwq *srq_hwq = &srq->hwq; 722 - u32 count; 723 - 724 - count = __bnxt_qplib_get_avail(srq_hwq); 725 - if (count > srq->threshold) { 726 - srq->arm_req = false; 727 - bnxt_qplib_srq_arm_db(&srq->dbinfo, srq->threshold); 728 - } else { 729 - /* Deferred arming */ 730 - srq->arm_req = true; 731 - } 732 - 733 - return 0; 734 718 } 735 719 736 720 int bnxt_qplib_query_srq(struct bnxt_qplib_res *res, ··· 756 776 struct bnxt_qplib_hwq *srq_hwq = &srq->hwq; 757 777 struct rq_wqe *srqe; 758 778 struct sq_sge *hw_sge; 759 - u32 count = 0; 760 779 int i, next; 761 780 762 781 spin_lock(&srq_hwq->lock); ··· 787 808 788 809 bnxt_qplib_hwq_incr_prod(&srq->dbinfo, srq_hwq, srq->dbinfo.max_slot); 789 810 790 - spin_lock(&srq_hwq->lock); 791 - count = __bnxt_qplib_get_avail(srq_hwq); 792 - spin_unlock(&srq_hwq->lock); 793 811 /* Ring DB */ 794 812 bnxt_qplib_ring_prod_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ); 795 - if (srq->arm_req == true && count > srq->threshold) { 796 - srq->arm_req = false; 797 - bnxt_qplib_srq_arm_db(&srq->dbinfo, srq->threshold); 798 - } 799 813 800 814 return 0; 801 815 }
-2
drivers/infiniband/hw/bnxt_re/qplib_fp.h
··· 546 546 srqn_handler_t srq_handler); 547 547 int bnxt_qplib_create_srq(struct bnxt_qplib_res *res, 548 548 struct bnxt_qplib_srq *srq); 549 - int bnxt_qplib_modify_srq(struct bnxt_qplib_res *res, 550 - struct bnxt_qplib_srq *srq); 551 549 int bnxt_qplib_query_srq(struct bnxt_qplib_res *res, 552 550 struct bnxt_qplib_srq *srq); 553 551 void bnxt_qplib_destroy_srq(struct bnxt_qplib_res *res,
+2
drivers/infiniband/hw/bnxt_re/qplib_res.c
··· 121 121 pbl->pg_arr = vmalloc_array(pages, sizeof(void *)); 122 122 if (!pbl->pg_arr) 123 123 return -ENOMEM; 124 + memset(pbl->pg_arr, 0, pages * sizeof(void *)); 124 125 125 126 pbl->pg_map_arr = vmalloc_array(pages, sizeof(dma_addr_t)); 126 127 if (!pbl->pg_map_arr) { ··· 129 128 pbl->pg_arr = NULL; 130 129 return -ENOMEM; 131 130 } 131 + memset(pbl->pg_map_arr, 0, pages * sizeof(dma_addr_t)); 132 132 pbl->pg_count = 0; 133 133 pbl->pg_size = sginfo->pgsize; 134 134
+5 -1
drivers/infiniband/hw/erdma/erdma_verbs.c
··· 994 994 old_entry = xa_store(&dev->qp_xa, 1, qp, GFP_KERNEL); 995 995 if (xa_is_err(old_entry)) 996 996 ret = xa_err(old_entry); 997 + else 998 + qp->ibqp.qp_num = 1; 997 999 } else { 998 1000 ret = xa_alloc_cyclic(&dev->qp_xa, &qp->ibqp.qp_num, qp, 999 1001 XA_LIMIT(1, dev->attrs.max_qp - 1), ··· 1033 1031 if (ret) 1034 1032 goto err_out_cmd; 1035 1033 } else { 1036 - init_kernel_qp(dev, qp, attrs); 1034 + ret = init_kernel_qp(dev, qp, attrs); 1035 + if (ret) 1036 + goto err_out_xa; 1037 1037 } 1038 1038 1039 1039 qp->attrs.max_send_sge = attrs->cap.max_send_sge;
+3 -3
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 3043 3043 if (!hr_dev->is_vf) 3044 3044 hns_roce_free_link_table(hr_dev); 3045 3045 3046 - if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP09) 3046 + if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09) 3047 3047 free_dip_entry(hr_dev); 3048 3048 } 3049 3049 ··· 5476 5476 return ret; 5477 5477 } 5478 5478 5479 - static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 qpn, 5479 + static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 sccn, 5480 5480 void *buffer) 5481 5481 { 5482 5482 struct hns_roce_v2_scc_context *context; ··· 5488 5488 return PTR_ERR(mailbox); 5489 5489 5490 5490 ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma, HNS_ROCE_CMD_QUERY_SCCC, 5491 - qpn); 5491 + sccn); 5492 5492 if (ret) 5493 5493 goto out; 5494 5494
+8 -1
drivers/infiniband/hw/hns/hns_roce_restrack.c
··· 100 100 struct hns_roce_v2_qp_context qpc; 101 101 struct hns_roce_v2_scc_context sccc; 102 102 } context = {}; 103 + u32 sccn = hr_qp->qpn; 103 104 int ret; 104 105 105 106 if (!hr_dev->hw->query_qpc) ··· 117 116 !hr_dev->hw->query_sccc) 118 117 goto out; 119 118 120 - ret = hr_dev->hw->query_sccc(hr_dev, hr_qp->qpn, &context.sccc); 119 + if (hr_qp->cong_type == CONG_TYPE_DIP) { 120 + if (!hr_qp->dip) 121 + goto out; 122 + sccn = hr_qp->dip->dip_idx; 123 + } 124 + 125 + ret = hr_dev->hw->query_sccc(hr_dev, sccn, &context.sccc); 121 126 if (ret) 122 127 ibdev_warn_ratelimited(&hr_dev->ib_dev, 123 128 "failed to query SCCC, ret = %d.\n",
+8 -21
drivers/infiniband/sw/rxe/rxe_net.c
··· 345 345 346 346 static void rxe_skb_tx_dtor(struct sk_buff *skb) 347 347 { 348 - struct net_device *ndev = skb->dev; 349 - struct rxe_dev *rxe; 350 - unsigned int qp_index; 351 - struct rxe_qp *qp; 348 + struct rxe_qp *qp = skb->sk->sk_user_data; 352 349 int skb_out; 353 350 354 - rxe = rxe_get_dev_from_net(ndev); 355 - if (!rxe && is_vlan_dev(ndev)) 356 - rxe = rxe_get_dev_from_net(vlan_dev_real_dev(ndev)); 357 - if (WARN_ON(!rxe)) 358 - return; 359 - 360 - qp_index = (int)(uintptr_t)skb->sk->sk_user_data; 361 - if (!qp_index) 362 - return; 363 - 364 - qp = rxe_pool_get_index(&rxe->qp_pool, qp_index); 365 - if (!qp) 366 - goto put_dev; 367 - 368 351 skb_out = atomic_dec_return(&qp->skb_out); 369 - if (qp->need_req_skb && skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW) 352 + if (unlikely(qp->need_req_skb && 353 + skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW)) 370 354 rxe_sched_task(&qp->send_task); 371 355 372 356 rxe_put(qp); 373 - put_dev: 374 - ib_device_put(&rxe->ib_dev); 375 357 sock_put(skb->sk); 376 358 } 377 359 ··· 365 383 sock_hold(sk); 366 384 skb->sk = sk; 367 385 skb->destructor = rxe_skb_tx_dtor; 386 + rxe_get(pkt->qp); 368 387 atomic_inc(&pkt->qp->skb_out); 369 388 370 389 if (skb->protocol == htons(ETH_P_IP)) ··· 388 405 sock_hold(sk); 389 406 skb->sk = sk; 390 407 skb->destructor = rxe_skb_tx_dtor; 408 + rxe_get(pkt->qp); 391 409 atomic_inc(&pkt->qp->skb_out); 392 410 393 411 if (skb->protocol == htons(ETH_P_IP)) ··· 480 496 rcu_read_unlock(); 481 497 goto out; 482 498 } 499 + 500 + /* Add time stamp to skb. */ 501 + skb->tstamp = ktime_get(); 483 502 484 503 skb_reserve(skb, hdr_len + LL_RESERVED_SPACE(ndev)); 485 504
+1 -1
drivers/infiniband/sw/rxe/rxe_qp.c
··· 244 244 err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk); 245 245 if (err < 0) 246 246 return err; 247 - qp->sk->sk->sk_user_data = (void *)(uintptr_t)qp->elem.index; 247 + qp->sk->sk->sk_user_data = qp; 248 248 249 249 /* pick a source UDP port number for this QP based on 250 250 * the source QPN. this spreads traffic for different QPs
+2 -2
drivers/iommu/amd/init.c
··· 3638 3638 { 3639 3639 u32 seg = 0, bus, dev, fn; 3640 3640 char *hid, *uid, *p, *addr; 3641 - char acpiid[ACPIID_LEN] = {0}; 3641 + char acpiid[ACPIID_LEN + 1] = { }; /* size with NULL terminator */ 3642 3642 int i; 3643 3643 3644 3644 addr = strchr(str, '@'); ··· 3664 3664 /* We have the '@', make it the terminator to get just the acpiid */ 3665 3665 *addr++ = 0; 3666 3666 3667 - if (strlen(str) > ACPIID_LEN + 1) 3667 + if (strlen(str) > ACPIID_LEN) 3668 3668 goto not_found; 3669 3669 3670 3670 if (sscanf(str, "=%s", acpiid) != 1)
+1 -1
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
··· 2997 2997 /* ATS is being switched off, invalidate the entire ATC */ 2998 2998 arm_smmu_atc_inv_master(master, IOMMU_NO_PASID); 2999 2999 } 3000 - master->ats_enabled = state->ats_enabled; 3001 3000 3002 3001 arm_smmu_remove_master_domain(master, state->old_domain, state->ssid); 3002 + master->ats_enabled = state->ats_enabled; 3003 3003 } 3004 3004 3005 3005 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+5 -3
drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
··· 301 301 struct iommu_vevent_tegra241_cmdqv vevent_data; 302 302 int i; 303 303 304 - for (i = 0; i < LVCMDQ_ERR_MAP_NUM_64; i++) 305 - vevent_data.lvcmdq_err_map[i] = 306 - readq_relaxed(REG_VINTF(vintf, LVCMDQ_ERR_MAP_64(i))); 304 + for (i = 0; i < LVCMDQ_ERR_MAP_NUM_64; i++) { 305 + u64 err = readq_relaxed(REG_VINTF(vintf, LVCMDQ_ERR_MAP_64(i))); 306 + 307 + vevent_data.lvcmdq_err_map[i] = cpu_to_le64(err); 308 + } 307 309 308 310 iommufd_viommu_report_event(viommu, IOMMU_VEVENTQ_TYPE_TEGRA241_CMDQV, 309 311 &vevent_data, sizeof(vevent_data));
+2 -2
drivers/iommu/iommufd/viommu.c
··· 339 339 } 340 340 341 341 *base_pa = (page_to_pfn(pages[0]) << PAGE_SHIFT) + offset; 342 - kfree(pages); 342 + kvfree(pages); 343 343 return access; 344 344 345 345 out_unpin: ··· 349 349 out_destroy: 350 350 iommufd_access_destroy_internal(viommu->ictx, access); 351 351 out_free: 352 - kfree(pages); 352 + kvfree(pages); 353 353 return ERR_PTR(rc); 354 354 } 355 355
+1 -1
drivers/iommu/riscv/iommu.c
··· 1283 1283 unsigned long *ptr; 1284 1284 1285 1285 ptr = riscv_iommu_pte_fetch(domain, iova, &pte_size); 1286 - if (_io_pte_none(*ptr) || !_io_pte_present(*ptr)) 1286 + if (!ptr) 1287 1287 return 0; 1288 1288 1289 1289 return pfn_to_phys(__page_val_to_pfn(*ptr)) | (iova & (pte_size - 1));
+9 -6
drivers/iommu/virtio-iommu.c
··· 998 998 iommu_dma_get_resv_regions(dev, head); 999 999 } 1000 1000 1001 - static const struct iommu_ops viommu_ops; 1002 - static struct virtio_driver virtio_iommu_drv; 1001 + static const struct bus_type *virtio_bus_type; 1003 1002 1004 1003 static int viommu_match_node(struct device *dev, const void *data) 1005 1004 { ··· 1007 1008 1008 1009 static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode) 1009 1010 { 1010 - struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL, 1011 - fwnode, viommu_match_node); 1011 + struct device *dev = bus_find_device(virtio_bus_type, NULL, fwnode, 1012 + viommu_match_node); 1013 + 1012 1014 put_device(dev); 1013 1015 1014 1016 return dev ? dev_to_virtio(dev)->priv : NULL; ··· 1160 1160 if (!viommu) 1161 1161 return -ENOMEM; 1162 1162 1163 + /* Borrow this for easy lookups later */ 1164 + virtio_bus_type = dev->bus; 1165 + 1163 1166 spin_lock_init(&viommu->request_lock); 1164 1167 ida_init(&viommu->domain_ids); 1165 1168 viommu->dev = dev; ··· 1232 1229 if (ret) 1233 1230 goto err_free_vqs; 1234 1231 1235 - iommu_device_register(&viommu->iommu, &viommu_ops, parent_dev); 1236 - 1237 1232 vdev->priv = viommu; 1233 + 1234 + iommu_device_register(&viommu->iommu, &viommu_ops, parent_dev); 1238 1235 1239 1236 dev_info(dev, "input address: %u bits\n", 1240 1237 order_base_2(viommu->geometry.aperture_end));
+92 -30
drivers/md/md.c
··· 339 339 * so all the races disappear. 340 340 */ 341 341 static bool create_on_open = true; 342 + static bool legacy_async_del_gendisk = true; 342 343 343 344 /* 344 345 * We have a system wide 'event count' that is incremented ··· 878 877 export_rdev(rdev, mddev); 879 878 } 880 879 881 - /* Call del_gendisk after release reconfig_mutex to avoid 882 - * deadlock (e.g. call del_gendisk under the lock and an 883 - * access to sysfs files waits the lock) 884 - * And MD_DELETED is only used for md raid which is set in 885 - * do_md_stop. dm raid only uses md_stop to stop. So dm raid 886 - * doesn't need to check MD_DELETED when getting reconfig lock 887 - */ 888 - if (test_bit(MD_DELETED, &mddev->flags)) 889 - del_gendisk(mddev->gendisk); 880 + if (!legacy_async_del_gendisk) { 881 + /* 882 + * Call del_gendisk after release reconfig_mutex to avoid 883 + * deadlock (e.g. call del_gendisk under the lock and an 884 + * access to sysfs files waits the lock) 885 + * And MD_DELETED is only used for md raid which is set in 886 + * do_md_stop. dm raid only uses md_stop to stop. 
So dm raid 887 + * doesn't need to check MD_DELETED when getting reconfig lock 888 + */ 889 + if (test_bit(MD_DELETED, &mddev->flags)) 890 + del_gendisk(mddev->gendisk); 891 + } 890 892 } 891 893 EXPORT_SYMBOL_GPL(mddev_unlock); 892 894 ··· 1423 1419 else { 1424 1420 if (sb->events_hi == sb->cp_events_hi && 1425 1421 sb->events_lo == sb->cp_events_lo) { 1426 - mddev->resync_offset = sb->resync_offset; 1422 + mddev->resync_offset = sb->recovery_cp; 1427 1423 } else 1428 1424 mddev->resync_offset = 0; 1429 1425 } ··· 1551 1547 mddev->minor_version = sb->minor_version; 1552 1548 if (mddev->in_sync) 1553 1549 { 1554 - sb->resync_offset = mddev->resync_offset; 1550 + sb->recovery_cp = mddev->resync_offset; 1555 1551 sb->cp_events_hi = (mddev->events>>32); 1556 1552 sb->cp_events_lo = (u32)mddev->events; 1557 1553 if (mddev->resync_offset == MaxSector) 1558 1554 sb->state = (1<< MD_SB_CLEAN); 1559 1555 } else 1560 - sb->resync_offset = 0; 1556 + sb->recovery_cp = 0; 1561 1557 1562 1558 sb->layout = mddev->layout; 1563 1559 sb->chunk_size = mddev->chunk_sectors << 9; ··· 4839 4835 static struct md_sysfs_entry md_metadata = 4840 4836 __ATTR_PREALLOC(metadata_version, S_IRUGO|S_IWUSR, metadata_show, metadata_store); 4841 4837 4838 + static bool rdev_needs_recovery(struct md_rdev *rdev, sector_t sectors) 4839 + { 4840 + return rdev->raid_disk >= 0 && 4841 + !test_bit(Journal, &rdev->flags) && 4842 + !test_bit(Faulty, &rdev->flags) && 4843 + !test_bit(In_sync, &rdev->flags) && 4844 + rdev->recovery_offset < sectors; 4845 + } 4846 + 4847 + static enum sync_action md_get_active_sync_action(struct mddev *mddev) 4848 + { 4849 + struct md_rdev *rdev; 4850 + bool is_recover = false; 4851 + 4852 + if (mddev->resync_offset < MaxSector) 4853 + return ACTION_RESYNC; 4854 + 4855 + if (mddev->reshape_position != MaxSector) 4856 + return ACTION_RESHAPE; 4857 + 4858 + rcu_read_lock(); 4859 + rdev_for_each_rcu(rdev, mddev) { 4860 + if (rdev_needs_recovery(rdev, MaxSector)) { 4861 + 
is_recover = true; 4862 + break; 4863 + } 4864 + } 4865 + rcu_read_unlock(); 4866 + 4867 + return is_recover ? ACTION_RECOVER : ACTION_IDLE; 4868 + } 4869 + 4842 4870 enum sync_action md_sync_action(struct mddev *mddev) 4843 4871 { 4844 4872 unsigned long recovery = mddev->recovery; 4873 + enum sync_action active_action; 4845 4874 4846 4875 /* 4847 4876 * frozen has the highest priority, means running sync_thread will be ··· 4898 4861 !test_bit(MD_RECOVERY_NEEDED, &recovery)) 4899 4862 return ACTION_IDLE; 4900 4863 4901 - if (test_bit(MD_RECOVERY_RESHAPE, &recovery) || 4902 - mddev->reshape_position != MaxSector) 4864 + /* 4865 + * Check if any sync operation (resync/recover/reshape) is 4866 + * currently active. This ensures that only one sync operation 4867 + * can run at a time. Returns the type of active operation, or 4868 + * ACTION_IDLE if none are active. 4869 + */ 4870 + active_action = md_get_active_sync_action(mddev); 4871 + if (active_action != ACTION_IDLE) 4872 + return active_action; 4873 + 4874 + if (test_bit(MD_RECOVERY_RESHAPE, &recovery)) 4903 4875 return ACTION_RESHAPE; 4904 4876 4905 4877 if (test_bit(MD_RECOVERY_RECOVER, &recovery)) ··· 5864 5818 { 5865 5819 struct mddev *mddev = container_of(ko, struct mddev, kobj); 5866 5820 5821 + if (legacy_async_del_gendisk) { 5822 + if (mddev->sysfs_state) 5823 + sysfs_put(mddev->sysfs_state); 5824 + if (mddev->sysfs_level) 5825 + sysfs_put(mddev->sysfs_level); 5826 + del_gendisk(mddev->gendisk); 5827 + } 5867 5828 put_disk(mddev->gendisk); 5868 5829 } 5869 5830 ··· 6073 6020 static int md_alloc_and_put(dev_t dev, char *name) 6074 6021 { 6075 6022 struct mddev *mddev = md_alloc(dev, name); 6023 + 6024 + if (legacy_async_del_gendisk) 6025 + pr_warn("md: async del_gendisk mode will be removed in future, please upgrade to mdadm-4.5+\n"); 6076 6026 6077 6027 if (IS_ERR(mddev)) 6078 6028 return PTR_ERR(mddev); ··· 6487 6431 mddev->persistent = 0; 6488 6432 mddev->level = LEVEL_NONE; 6489 6433 mddev->clevel[0] = 
0; 6490 - /* if UNTIL_STOP is set, it's cleared here */ 6491 - mddev->hold_active = 0; 6492 - /* Don't clear MD_CLOSING, or mddev can be opened again. */ 6493 - mddev->flags &= BIT_ULL_MASK(MD_CLOSING); 6434 + 6435 + /* 6436 + * For legacy_async_del_gendisk mode, it can stop the array in the 6437 + * middle of assembling it, then it still can access the array. So 6438 + * it needs to clear MD_CLOSING. If not legacy_async_del_gendisk, 6439 + * it can't open the array again after stopping it. So it doesn't 6440 + * clear MD_CLOSING. 6441 + */ 6442 + if (legacy_async_del_gendisk && mddev->hold_active) { 6443 + clear_bit(MD_CLOSING, &mddev->flags); 6444 + } else { 6445 + /* if UNTIL_STOP is set, it's cleared here */ 6446 + mddev->hold_active = 0; 6447 + /* Don't clear MD_CLOSING, or mddev can be opened again. */ 6448 + mddev->flags &= BIT_ULL_MASK(MD_CLOSING); 6449 + } 6494 6450 mddev->sb_flags = 0; 6495 6451 mddev->ro = MD_RDWR; 6496 6452 mddev->metadata_type[0] = 0; ··· 6726 6658 6727 6659 export_array(mddev); 6728 6660 md_clean(mddev); 6729 - set_bit(MD_DELETED, &mddev->flags); 6661 + if (!legacy_async_del_gendisk) 6662 + set_bit(MD_DELETED, &mddev->flags); 6730 6663 } 6731 6664 md_new_event(); 6732 6665 sysfs_notify_dirent_safe(mddev->sysfs_state); ··· 9037 8968 start = MaxSector; 9038 8969 rcu_read_lock(); 9039 8970 rdev_for_each_rcu(rdev, mddev) 9040 - if (rdev->raid_disk >= 0 && 9041 - !test_bit(Journal, &rdev->flags) && 9042 - !test_bit(Faulty, &rdev->flags) && 9043 - !test_bit(In_sync, &rdev->flags) && 9044 - rdev->recovery_offset < start) 8971 + if (rdev_needs_recovery(rdev, start)) 9045 8972 start = rdev->recovery_offset; 9046 8973 rcu_read_unlock(); 9047 8974 ··· 9396 9331 test_bit(MD_RECOVERY_RECOVER, &mddev->recovery)) { 9397 9332 rcu_read_lock(); 9398 9333 rdev_for_each_rcu(rdev, mddev) 9399 - if (rdev->raid_disk >= 0 && 9400 - mddev->delta_disks >= 0 && 9401 - !test_bit(Journal, &rdev->flags) && 9402 - !test_bit(Faulty, &rdev->flags) && 9403 - 
!test_bit(In_sync, &rdev->flags) && 9404 - rdev->recovery_offset < mddev->curr_resync) 9334 + if (mddev->delta_disks >= 0 && 9335 + rdev_needs_recovery(rdev, mddev->curr_resync)) 9405 9336 rdev->recovery_offset = mddev->curr_resync; 9406 9337 rcu_read_unlock(); 9407 9338 } ··· 10453 10392 module_param(start_dirty_degraded, int, S_IRUGO|S_IWUSR); 10454 10393 module_param_call(new_array, add_named_array, NULL, NULL, S_IWUSR); 10455 10394 module_param(create_on_open, bool, S_IRUSR|S_IWUSR); 10395 + module_param(legacy_async_del_gendisk, bool, 0600); 10456 10396 10457 10397 MODULE_LICENSE("GPL"); 10458 10398 MODULE_DESCRIPTION("MD RAID framework");
-1
drivers/memstick/core/memstick.c
··· 555 555 */ 556 556 void memstick_remove_host(struct memstick_host *host) 557 557 { 558 - host->removing = 1; 559 558 flush_workqueue(workqueue); 560 559 mutex_lock(&host->lock); 561 560 if (host->card)
+1
drivers/memstick/host/rtsx_usb_ms.c
··· 812 812 int err; 813 813 814 814 host->eject = true; 815 + msh->removing = true; 815 816 cancel_work_sync(&host->handle_req); 816 817 cancel_delayed_work_sync(&host->poll_card); 817 818
+31 -2
drivers/mmc/host/sdhci-of-arasan.c
··· 99 99 #define HIWORD_UPDATE(val, mask, shift) \ 100 100 ((val) << (shift) | (mask) << ((shift) + 16)) 101 101 102 + #define CD_STABLE_TIMEOUT_US 1000000 103 + #define CD_STABLE_MAX_SLEEP_US 10 104 + 102 105 /** 103 106 * struct sdhci_arasan_soc_ctl_field - Field used in sdhci_arasan_soc_ctl_map 104 107 * ··· 209 206 * 19MHz instead 210 207 */ 211 208 #define SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN BIT(2) 209 + /* Enable CD stable check before power-up */ 210 + #define SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE BIT(3) 212 211 }; 213 212 214 213 struct sdhci_arasan_of_data { 215 214 const struct sdhci_arasan_soc_ctl_map *soc_ctl_map; 216 215 const struct sdhci_pltfm_data *pdata; 217 216 const struct sdhci_arasan_clk_ops *clk_ops; 217 + u32 quirks; 218 218 }; 219 219 220 220 static const struct sdhci_arasan_soc_ctl_map rk3399_soc_ctl_map = { ··· 520 514 return -EINVAL; 521 515 } 522 516 517 + static void sdhci_arasan_set_power_and_bus_voltage(struct sdhci_host *host, unsigned char mode, 518 + unsigned short vdd) 519 + { 520 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 521 + struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host); 522 + u32 reg; 523 + 524 + /* 525 + * Ensure that the card detect logic has stabilized before powering up, this is 526 + * necessary after a host controller reset. 
527 + */ 528 + if (mode == MMC_POWER_UP && sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE) 529 + read_poll_timeout(sdhci_readl, reg, reg & SDHCI_CD_STABLE, CD_STABLE_MAX_SLEEP_US, 530 + CD_STABLE_TIMEOUT_US, false, host, SDHCI_PRESENT_STATE); 531 + 532 + sdhci_set_power_and_bus_voltage(host, mode, vdd); 533 + } 534 + 523 535 static const struct sdhci_ops sdhci_arasan_ops = { 524 536 .set_clock = sdhci_arasan_set_clock, 525 537 .get_max_clock = sdhci_pltfm_clk_get_max_clock, ··· 545 521 .set_bus_width = sdhci_set_bus_width, 546 522 .reset = sdhci_arasan_reset, 547 523 .set_uhs_signaling = sdhci_set_uhs_signaling, 548 - .set_power = sdhci_set_power_and_bus_voltage, 524 + .set_power = sdhci_arasan_set_power_and_bus_voltage, 549 525 .hw_reset = sdhci_arasan_hw_reset, 550 526 }; 551 527 ··· 594 570 .set_bus_width = sdhci_set_bus_width, 595 571 .reset = sdhci_arasan_reset, 596 572 .set_uhs_signaling = sdhci_set_uhs_signaling, 597 - .set_power = sdhci_set_power_and_bus_voltage, 573 + .set_power = sdhci_arasan_set_power_and_bus_voltage, 598 574 .irq = sdhci_arasan_cqhci_irq, 599 575 }; 600 576 ··· 1471 1447 static struct sdhci_arasan_of_data sdhci_arasan_zynqmp_data = { 1472 1448 .pdata = &sdhci_arasan_zynqmp_pdata, 1473 1449 .clk_ops = &zynqmp_clk_ops, 1450 + .quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE, 1474 1451 }; 1475 1452 1476 1453 static const struct sdhci_arasan_clk_ops versal_clk_ops = { ··· 1482 1457 static struct sdhci_arasan_of_data sdhci_arasan_versal_data = { 1483 1458 .pdata = &sdhci_arasan_zynqmp_pdata, 1484 1459 .clk_ops = &versal_clk_ops, 1460 + .quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE, 1485 1461 }; 1486 1462 1487 1463 static const struct sdhci_arasan_clk_ops versal_net_clk_ops = { ··· 1493 1467 static struct sdhci_arasan_of_data sdhci_arasan_versal_net_data = { 1494 1468 .pdata = &sdhci_arasan_versal_net_pdata, 1495 1469 .clk_ops = &versal_net_clk_ops, 1470 + .quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE, 1496 1471 }; 1497 1472 1498 
1473 static struct sdhci_arasan_of_data intel_keembay_emmc_data = { ··· 1963 1936 1964 1937 if (of_device_is_compatible(np, "rockchip,rk3399-sdhci-5.1")) 1965 1938 sdhci_arasan_update_clockmultiplier(host, 0x0); 1939 + 1940 + sdhci_arasan->quirks |= data->quirks; 1966 1941 1967 1942 if (of_device_is_compatible(np, "intel,keembay-sdhci-5.1-emmc") || 1968 1943 of_device_is_compatible(np, "intel,keembay-sdhci-5.1-sd") ||
+21 -16
drivers/mmc/host/sdhci-pci-gli.c
··· 287 287 #define GLI_MAX_TUNING_LOOP 40 288 288 289 289 /* Genesys Logic chipset */ 290 + static void sdhci_gli_mask_replay_timer_timeout(struct pci_dev *pdev) 291 + { 292 + int aer; 293 + u32 value; 294 + 295 + /* mask the replay timer timeout of AER */ 296 + aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR); 297 + if (aer) { 298 + pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value); 299 + value |= PCI_ERR_COR_REP_TIMER; 300 + pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value); 301 + } 302 + } 303 + 290 304 static inline void gl9750_wt_on(struct sdhci_host *host) 291 305 { 292 306 u32 wt_value; ··· 621 607 { 622 608 struct sdhci_pci_slot *slot = sdhci_priv(host); 623 609 struct pci_dev *pdev; 624 - int aer; 625 610 u32 value; 626 611 627 612 pdev = slot->chip->pdev; ··· 639 626 pci_set_power_state(pdev, PCI_D0); 640 627 641 628 /* mask the replay timer timeout of AER */ 642 - aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR); 643 - if (aer) { 644 - pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value); 645 - value |= PCI_ERR_COR_REP_TIMER; 646 - pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value); 647 - } 629 + sdhci_gli_mask_replay_timer_timeout(pdev); 648 630 649 631 gl9750_wt_off(host); 650 632 } ··· 814 806 static void gl9755_hw_setting(struct sdhci_pci_slot *slot) 815 807 { 816 808 struct pci_dev *pdev = slot->chip->pdev; 817 - int aer; 818 809 u32 value; 819 810 820 811 gl9755_wt_on(pdev); ··· 848 841 pci_set_power_state(pdev, PCI_D0); 849 842 850 843 /* mask the replay timer timeout of AER */ 851 - aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR); 852 - if (aer) { 853 - pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value); 854 - value |= PCI_ERR_COR_REP_TIMER; 855 - pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value); 856 - } 844 + sdhci_gli_mask_replay_timer_timeout(pdev); 857 845 858 846 gl9755_wt_off(pdev); 859 847 } ··· 1753 1751 return ret; 1754 1752 } 1755 1753 1756 - static void 
gli_set_gl9763e(struct sdhci_pci_slot *slot) 1754 + static void gl9763e_hw_setting(struct sdhci_pci_slot *slot) 1757 1755 { 1758 1756 struct pci_dev *pdev = slot->chip->pdev; 1759 1757 u32 value; ··· 1781 1779 value &= ~GLI_9763E_HS400_RXDLY; 1782 1780 value |= FIELD_PREP(GLI_9763E_HS400_RXDLY, GLI_9763E_HS400_RXDLY_5); 1783 1781 pci_write_config_dword(pdev, PCIE_GLI_9763E_CLKRXDLY, value); 1782 + 1783 + /* mask the replay timer timeout of AER */ 1784 + sdhci_gli_mask_replay_timer_timeout(pdev); 1784 1785 1785 1786 pci_read_config_dword(pdev, PCIE_GLI_9763E_VHS, &value); 1786 1787 value &= ~GLI_9763E_VHS_REV; ··· 1928 1923 gli_pcie_enable_msi(slot); 1929 1924 host->mmc_host_ops.hs400_enhanced_strobe = 1930 1925 gl9763e_hs400_enhanced_strobe; 1931 - gli_set_gl9763e(slot); 1926 + gl9763e_hw_setting(slot); 1932 1927 sdhci_enable_v4_mode(host); 1933 1928 1934 1929 return 0;
+18
drivers/mmc/host/sdhci_am654.c
··· 156 156 157 157 #define SDHCI_AM654_QUIRK_FORCE_CDTEST BIT(0) 158 158 #define SDHCI_AM654_QUIRK_SUPPRESS_V1P8_ENA BIT(1) 159 + #define SDHCI_AM654_QUIRK_DISABLE_HS400 BIT(2) 159 160 }; 160 161 161 162 struct window { ··· 766 765 { 767 766 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 768 767 struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host); 768 + struct device *dev = mmc_dev(host->mmc); 769 769 u32 ctl_cfg_2 = 0; 770 770 u32 mask; 771 771 u32 val; ··· 821 819 ret = sdhci_am654_get_otap_delay(host, sdhci_am654); 822 820 if (ret) 823 821 goto err_cleanup_host; 822 + 823 + if (sdhci_am654->quirks & SDHCI_AM654_QUIRK_DISABLE_HS400 && 824 + host->mmc->caps2 & (MMC_CAP2_HS400 | MMC_CAP2_HS400_ES)) { 825 + dev_info(dev, "HS400 mode not supported on this silicon revision, disabling it\n"); 826 + host->mmc->caps2 &= ~(MMC_CAP2_HS400 | MMC_CAP2_HS400_ES); 827 + } 824 828 825 829 ret = __sdhci_add_host(host); 826 830 if (ret) ··· 890 882 891 883 return 0; 892 884 } 885 + 886 + static const struct soc_device_attribute sdhci_am654_descope_hs400[] = { 887 + { .family = "AM62PX", .revision = "SR1.0" }, 888 + { .family = "AM62PX", .revision = "SR1.1" }, 889 + { /* sentinel */ } 890 + }; 893 891 894 892 static const struct of_device_id sdhci_am654_of_match[] = { 895 893 { ··· 983 969 ret = mmc_of_parse(host->mmc); 984 970 if (ret) 985 971 return dev_err_probe(dev, ret, "parsing dt failed\n"); 972 + 973 + soc = soc_device_match(sdhci_am654_descope_hs400); 974 + if (soc) 975 + sdhci_am654->quirks |= SDHCI_AM654_QUIRK_DISABLE_HS400; 986 976 987 977 host->mmc_host_ops.start_signal_voltage_switch = sdhci_am654_start_signal_voltage_switch; 988 978 host->mmc_host_ops.execute_tuning = sdhci_am654_execute_tuning;
+1 -1
drivers/most/core.c
··· 538 538 dev = bus_find_device_by_name(&mostbus, NULL, mdev); 539 539 if (!dev) 540 540 return NULL; 541 - put_device(dev); 542 541 iface = dev_get_drvdata(dev); 542 + put_device(dev); 543 543 list_for_each_entry_safe(c, tmp, &iface->p->channel_list, list) { 544 544 if (!strcmp(dev_name(&c->dev), mdev_ch)) 545 545 return c;
+49 -18
drivers/net/bonding/bond_3ad.c
··· 95 95 static void ad_mux_machine(struct port *port, bool *update_slave_arr); 96 96 static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port); 97 97 static void ad_tx_machine(struct port *port); 98 - static void ad_periodic_machine(struct port *port, struct bond_params *bond_params); 98 + static void ad_periodic_machine(struct port *port); 99 99 static void ad_port_selection_logic(struct port *port, bool *update_slave_arr); 100 100 static void ad_agg_selection_logic(struct aggregator *aggregator, 101 101 bool *update_slave_arr); 102 102 static void ad_clear_agg(struct aggregator *aggregator); 103 103 static void ad_initialize_agg(struct aggregator *aggregator); 104 - static void ad_initialize_port(struct port *port, int lacp_fast); 104 + static void ad_initialize_port(struct port *port, const struct bond_params *bond_params); 105 105 static void ad_enable_collecting(struct port *port); 106 106 static void ad_disable_distributing(struct port *port, 107 107 bool *update_slave_arr); ··· 1307 1307 * case of EXPIRED even if LINK_DOWN didn't arrive for 1308 1308 * the port. 
1309 1309 */ 1310 - port->partner_oper.port_state &= ~LACP_STATE_SYNCHRONIZATION; 1311 1310 port->sm_vars &= ~AD_PORT_MATCHED; 1311 + /* Based on IEEE 802.1AX-2014, Figure 6-18 - Receive 1312 + * machine state diagram, the state should be 1313 + * Partner_Oper_Port_State.Synchronization = FALSE; 1314 + * Partner_Oper_Port_State.LACP_Timeout = Short Timeout; 1315 + * start current_while_timer(Short Timeout); 1316 + * Actor_Oper_Port_State.Expired = TRUE; 1317 + */ 1318 + port->partner_oper.port_state &= ~LACP_STATE_SYNCHRONIZATION; 1312 1319 port->partner_oper.port_state |= LACP_STATE_LACP_TIMEOUT; 1313 - port->partner_oper.port_state |= LACP_STATE_LACP_ACTIVITY; 1314 1320 port->sm_rx_timer_counter = __ad_timer_to_ticks(AD_CURRENT_WHILE_TIMER, (u16)(AD_SHORT_TIMEOUT)); 1315 1321 port->actor_oper_port_state |= LACP_STATE_EXPIRED; 1316 1322 port->sm_vars |= AD_PORT_CHURNED; ··· 1423 1417 /** 1424 1418 * ad_periodic_machine - handle a port's periodic state machine 1425 1419 * @port: the port we're looking at 1426 - * @bond_params: bond parameters we will use 1427 1420 * 1428 1421 * Turn ntt flag on periodically to perform periodic transmission of lacpdu's. 
1429 1422 */ 1430 - static void ad_periodic_machine(struct port *port, struct bond_params *bond_params) 1423 + static void ad_periodic_machine(struct port *port) 1431 1424 { 1432 1425 periodic_states_t last_state; 1433 1426 ··· 1435 1430 1436 1431 /* check if port was reinitialized */ 1437 1432 if (((port->sm_vars & AD_PORT_BEGIN) || !(port->sm_vars & AD_PORT_LACP_ENABLED) || !port->is_enabled) || 1438 - (!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY)) || 1439 - !bond_params->lacp_active) { 1433 + (!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY))) { 1440 1434 port->sm_periodic_state = AD_NO_PERIODIC; 1441 1435 } 1442 1436 /* check if state machine should change state */ ··· 1959 1955 /** 1960 1956 * ad_initialize_port - initialize a given port's parameters 1961 1957 * @port: the port we're looking at 1962 - * @lacp_fast: boolean. whether fast periodic should be used 1958 + * @bond_params: bond parameters we will use 1963 1959 */ 1964 - static void ad_initialize_port(struct port *port, int lacp_fast) 1960 + static void ad_initialize_port(struct port *port, const struct bond_params *bond_params) 1965 1961 { 1966 1962 static const struct port_params tmpl = { 1967 1963 .system_priority = 0xffff, 1968 1964 .key = 1, 1969 1965 .port_number = 1, 1970 1966 .port_priority = 0xff, 1971 - .port_state = 1, 1967 + .port_state = 0, 1972 1968 }; 1973 1969 static const struct lacpdu lacpdu = { 1974 1970 .subtype = 0x01, ··· 1986 1982 port->actor_port_priority = 0xff; 1987 1983 port->actor_port_aggregator_identifier = 0; 1988 1984 port->ntt = false; 1989 - port->actor_admin_port_state = LACP_STATE_AGGREGATION | 1990 - LACP_STATE_LACP_ACTIVITY; 1991 - port->actor_oper_port_state = LACP_STATE_AGGREGATION | 1992 - LACP_STATE_LACP_ACTIVITY; 1985 + port->actor_admin_port_state = LACP_STATE_AGGREGATION; 1986 + port->actor_oper_port_state = 
LACP_STATE_AGGREGATION; 1987 + if (bond_params->lacp_active) { 1988 + port->actor_admin_port_state |= LACP_STATE_LACP_ACTIVITY; 1989 + port->actor_oper_port_state |= LACP_STATE_LACP_ACTIVITY; 1990 + } 1993 1991 1994 - if (lacp_fast) 1992 + if (bond_params->lacp_fast) 1995 1993 port->actor_oper_port_state |= LACP_STATE_LACP_TIMEOUT; 1996 1994 1997 1995 memcpy(&port->partner_admin, &tmpl, sizeof(tmpl)); ··· 2207 2201 /* port initialization */ 2208 2202 port = &(SLAVE_AD_INFO(slave)->port); 2209 2203 2210 - ad_initialize_port(port, bond->params.lacp_fast); 2204 + ad_initialize_port(port, &bond->params); 2211 2205 2212 2206 port->slave = slave; 2213 2207 port->actor_port_number = SLAVE_AD_INFO(slave)->id; ··· 2519 2513 } 2520 2514 2521 2515 ad_rx_machine(NULL, port); 2522 - ad_periodic_machine(port, &bond->params); 2516 + ad_periodic_machine(port); 2523 2517 ad_port_selection_logic(port, &update_slave_arr); 2524 2518 ad_mux_machine(port, &update_slave_arr); 2525 2519 ad_tx_machine(port); ··· 2885 2879 port->actor_oper_port_state |= LACP_STATE_LACP_TIMEOUT; 2886 2880 else 2887 2881 port->actor_oper_port_state &= ~LACP_STATE_LACP_TIMEOUT; 2882 + } 2883 + spin_unlock_bh(&bond->mode_lock); 2884 + } 2885 + 2886 + /** 2887 + * bond_3ad_update_lacp_active - change the lacp active 2888 + * @bond: bonding struct 2889 + * 2890 + * Update actor_oper_port_state when lacp_active is modified. 
2891 + */ 2892 + void bond_3ad_update_lacp_active(struct bonding *bond) 2893 + { 2894 + struct port *port = NULL; 2895 + struct list_head *iter; 2896 + struct slave *slave; 2897 + int lacp_active; 2898 + 2899 + lacp_active = bond->params.lacp_active; 2900 + spin_lock_bh(&bond->mode_lock); 2901 + bond_for_each_slave(bond, slave, iter) { 2902 + port = &(SLAVE_AD_INFO(slave)->port); 2903 + if (lacp_active) 2904 + port->actor_oper_port_state |= LACP_STATE_LACP_ACTIVITY; 2905 + else 2906 + port->actor_oper_port_state &= ~LACP_STATE_LACP_ACTIVITY; 2888 2907 } 2889 2908 spin_unlock_bh(&bond->mode_lock); 2890 2909 }
+1
drivers/net/bonding/bond_options.c
··· 1660 1660 netdev_dbg(bond->dev, "Setting LACP active to %s (%llu)\n", 1661 1661 newval->string, newval->value); 1662 1662 bond->params.lacp_active = newval->value; 1663 + bond_3ad_update_lacp_active(bond); 1663 1664 1664 1665 return 0; 1665 1666 }
+1 -1
drivers/net/dsa/b53/b53_common.c
··· 2078 2078 2079 2079 /* Start search operation */ 2080 2080 reg = ARL_SRCH_STDN; 2081 - b53_write8(priv, offset, B53_ARL_SRCH_CTL, reg); 2081 + b53_write8(priv, B53_ARLIO_PAGE, offset, reg); 2082 2082 2083 2083 do { 2084 2084 ret = b53_arl_search_wait(priv);
+6
drivers/net/dsa/microchip/ksz_common.c
··· 2457 2457 dev->dev_ops->cfg_port_member(dev, i, val | cpu_port); 2458 2458 } 2459 2459 2460 + /* HSR ports are set up once, so we need to use the assigned membership 2461 + * when the port is enabled. 2462 + */ 2463 + if (!port_member && p->stp_state == BR_STATE_FORWARDING && 2464 + (dev->hsr_ports & BIT(port))) 2465 + port_member = dev->hsr_ports; 2460 2466 dev->dev_ops->cfg_port_member(dev, port, port_member | cpu_port); 2461 2467 }
+1 -3
drivers/net/ethernet/airoha/airoha_ppe.c
··· 736 736 continue; 737 737 } 738 738 739 - if (commit_done || !airoha_ppe_foe_compare_entry(e, hwe)) { 740 - e->hash = 0xffff; 739 + if (!airoha_ppe_foe_compare_entry(e, hwe)) 741 740 continue; 742 - } 743 741 744 742 airoha_ppe_foe_commit_entry(ppe, &e->data, hash); 745 743 commit_done = true;
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 5332 5332 { 5333 5333 int i; 5334 5334 5335 - netdev_assert_locked(bp->dev); 5335 + netdev_assert_locked_or_invisible(bp->dev); 5336 5336 5337 5337 /* Under netdev instance lock and all our NAPIs have been disabled. 5338 5338 * It's safe to delete the hash table.
+2 -1
drivers/net/ethernet/cadence/macb_main.c
··· 5113 5113 5114 5114 static const struct macb_config sama7g5_emac_config = { 5115 5115 .caps = MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII | 5116 - MACB_CAPS_MIIONRGMII | MACB_CAPS_GEM_HAS_PTP, 5116 + MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_MIIONRGMII | 5117 + MACB_CAPS_GEM_HAS_PTP, 5117 5118 .dma_burst_length = 16, 5118 5119 .clk_init = macb_clk_init, 5119 5120 .init = macb_init,
+2
drivers/net/ethernet/google/gve/gve_main.c
··· 2870 2870 struct gve_priv *priv = netdev_priv(netdev); 2871 2871 bool was_up = netif_running(priv->dev); 2872 2872 2873 + netif_device_detach(netdev); 2874 + 2873 2875 rtnl_lock(); 2874 2876 netdev_lock(netdev); 2875 2877 if (was_up && gve_close(priv->dev)) {
+7 -7
drivers/net/ethernet/intel/igc/igc_main.c
··· 7149 7149 adapter->port_num = hw->bus.func; 7150 7150 adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE); 7151 7151 7152 + /* PCI config space info */ 7153 + hw->vendor_id = pdev->vendor; 7154 + hw->device_id = pdev->device; 7155 + hw->revision_id = pdev->revision; 7156 + hw->subsystem_vendor_id = pdev->subsystem_vendor; 7157 + hw->subsystem_device_id = pdev->subsystem_device; 7158 + 7152 7159 /* Disable ASPM L1.2 on I226 devices to avoid packet loss */ 7153 7160 if (igc_is_device_id_i226(hw)) 7154 7161 pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2); ··· 7181 7174 7182 7175 netdev->mem_start = pci_resource_start(pdev, 0); 7183 7176 netdev->mem_end = pci_resource_end(pdev, 0); 7184 - 7185 - /* PCI config space info */ 7186 - hw->vendor_id = pdev->vendor; 7187 - hw->device_id = pdev->device; 7188 - hw->revision_id = pdev->revision; 7189 - hw->subsystem_vendor_id = pdev->subsystem_vendor; 7190 - hw->subsystem_device_id = pdev->subsystem_device; 7191 7177 7192 7178 /* Copy the default MAC and PHY function pointers */ 7193 7179 memcpy(&hw->mac.ops, ei->mac_ops, sizeof(hw->mac.ops));
+11 -23
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 968 968 for (i = 0; i < adapter->num_tx_queues; i++) 969 969 clear_bit(__IXGBE_HANG_CHECK_ARMED, 970 970 &adapter->tx_ring[i]->state); 971 - 972 - for (i = 0; i < adapter->num_xdp_queues; i++) 973 - clear_bit(__IXGBE_HANG_CHECK_ARMED, 974 - &adapter->xdp_ring[i]->state); 975 971 } 976 972 977 973 static void ixgbe_update_xoff_received(struct ixgbe_adapter *adapter) ··· 1210 1214 struct ixgbe_adapter *adapter = netdev_priv(tx_ring->netdev); 1211 1215 struct ixgbe_hw *hw = &adapter->hw; 1212 1216 1213 - e_err(drv, "Detected Tx Unit Hang%s\n" 1217 + e_err(drv, "Detected Tx Unit Hang\n" 1214 1218 " Tx Queue <%d>\n" 1215 1219 " TDH, TDT <%x>, <%x>\n" 1216 1220 " next_to_use <%x>\n" ··· 1218 1222 "tx_buffer_info[next_to_clean]\n" 1219 1223 " time_stamp <%lx>\n" 1220 1224 " jiffies <%lx>\n", 1221 - ring_is_xdp(tx_ring) ? " (XDP)" : "", 1222 1225 tx_ring->queue_index, 1223 1226 IXGBE_READ_REG(hw, IXGBE_TDH(tx_ring->reg_idx)), 1224 1227 IXGBE_READ_REG(hw, IXGBE_TDT(tx_ring->reg_idx)), 1225 1228 tx_ring->next_to_use, next, 1226 1229 tx_ring->tx_buffer_info[next].time_stamp, jiffies); 1227 1230 1228 - if (!ring_is_xdp(tx_ring)) 1229 - netif_stop_subqueue(tx_ring->netdev, 1230 - tx_ring->queue_index); 1231 + netif_stop_subqueue(tx_ring->netdev, 1232 + tx_ring->queue_index); 1231 1233 } 1232 1234 1233 1235 /** ··· 1445 1451 total_bytes); 1446 1452 adapter->tx_ipsec += total_ipsec; 1447 1453 1454 + if (ring_is_xdp(tx_ring)) 1455 + return !!budget; 1456 + 1448 1457 if (check_for_tx_hang(tx_ring) && ixgbe_check_tx_hang(tx_ring)) { 1449 1458 if (adapter->hw.mac.type == ixgbe_mac_e610) 1450 1459 ixgbe_handle_mdd_event(adapter, tx_ring); ··· 1464 1467 /* the adapter is about to reset, no point in enabling stuff */ 1465 1468 return true; 1466 1469 } 1467 - 1468 - if (ring_is_xdp(tx_ring)) 1469 - return !!budget; 1470 1470 1471 1471 #define TX_WAKE_THRESHOLD (DESC_NEEDED * 2) 1472 1472 txq = netdev_get_tx_queue(tx_ring->netdev, tx_ring->queue_index); ··· 7968 7974 return; 7969 7975 
7970 7976 /* Force detection of hung controller */ 7971 - if (netif_carrier_ok(adapter->netdev)) { 7977 + if (netif_carrier_ok(adapter->netdev)) 7972 7978 for (i = 0; i < adapter->num_tx_queues; i++) 7973 7979 set_check_for_tx_hang(adapter->tx_ring[i]); 7974 - for (i = 0; i < adapter->num_xdp_queues; i++) 7975 - set_check_for_tx_hang(adapter->xdp_ring[i]); 7976 - } 7977 7980 7978 7981 if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) { 7979 7982 /* ··· 8187 8196 struct ixgbe_ring *tx_ring = adapter->tx_ring[i]; 8188 8197 8189 8198 if (tx_ring->next_to_use != tx_ring->next_to_clean) 8190 - return true; 8191 - } 8192 - 8193 - for (i = 0; i < adapter->num_xdp_queues; i++) { 8194 - struct ixgbe_ring *ring = adapter->xdp_ring[i]; 8195 - 8196 - if (ring->next_to_use != ring->next_to_clean) 8197 8199 return true; 8198 8200 } 8199 8201 ··· 10987 11003 int i; 10988 11004 10989 11005 if (unlikely(test_bit(__IXGBE_DOWN, &adapter->state))) 11006 + return -ENETDOWN; 11007 + 11008 + if (!netif_carrier_ok(adapter->netdev) || 11009 + !netif_running(adapter->netdev)) 10990 11010 return -ENETDOWN; 10991 11011 10992 11012 if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+3 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
··· 398 398 dma_addr_t dma; 399 399 u32 cmd_type; 400 400 401 - while (budget-- > 0) { 401 + while (likely(budget)) { 402 402 if (unlikely(!ixgbe_desc_unused(xdp_ring))) { 403 403 work_done = false; 404 404 break; ··· 433 433 xdp_ring->next_to_use++; 434 434 if (xdp_ring->next_to_use == xdp_ring->count) 435 435 xdp_ring->next_to_use = 0; 436 + 437 + budget--; 436 438 } 437 439 438 440 if (tx_desc) {
+2 -2
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
··· 606 606 if (!npc_check_field(rvu, blkaddr, NPC_LB, intf)) 607 607 *features &= ~BIT_ULL(NPC_OUTER_VID); 608 608 609 - /* Set SPI flag only if AH/ESP and IPSEC_SPI are in the key */ 610 - if (npc_check_field(rvu, blkaddr, NPC_IPSEC_SPI, intf) && 609 + /* Allow extracting SPI field from AH and ESP headers at same offset */ 610 + if (npc_is_field_present(rvu, NPC_IPSEC_SPI, intf) && 611 611 (*features & (BIT_ULL(NPC_IPPROTO_ESP) | BIT_ULL(NPC_IPPROTO_AH)))) 612 612 *features |= BIT_ULL(NPC_IPSEC_SPI); 613 613
+2
drivers/net/ethernet/mediatek/mtk_ppe_offload.c
··· 101 101 if (!IS_ENABLED(CONFIG_NET_MEDIATEK_SOC_WED)) 102 102 return -1; 103 103 104 + rcu_read_lock(); 104 105 err = dev_fill_forward_path(dev, addr, &stack); 106 + rcu_read_unlock(); 105 107 if (err) 106 108 return err; 107 109
-1
drivers/net/ethernet/mellanox/mlx5/core/en/dcbnl.h
··· 26 26 u8 cap; 27 27 28 28 /* Buffer configuration */ 29 - bool manual_buffer; 30 29 u32 cable_len; 31 30 u32 xoff; 32 31 u16 port_buff_cell_sz;
+8 -10
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
··· 272 272 /* Total shared buffer size is split in a ratio of 3:1 between 273 273 * lossy and lossless pools respectively. 274 274 */ 275 - lossy_epool_size = (shared_buffer_size / 4) * 3; 276 275 lossless_ipool_size = shared_buffer_size / 4; 276 + lossy_epool_size = shared_buffer_size - lossless_ipool_size; 277 277 278 278 mlx5e_port_set_sbpr(mdev, 0, MLX5_EGRESS_DIR, MLX5_LOSSY_POOL, 0, 279 279 lossy_epool_size); ··· 288 288 u16 port_buff_cell_sz = priv->dcbx.port_buff_cell_sz; 289 289 struct mlx5_core_dev *mdev = priv->mdev; 290 290 int sz = MLX5_ST_SZ_BYTES(pbmc_reg); 291 - u32 new_headroom_size = 0; 292 - u32 current_headroom_size; 291 + u32 current_headroom_cells = 0; 292 + u32 new_headroom_cells = 0; 293 293 void *in; 294 294 int err; 295 295 int i; 296 - 297 - current_headroom_size = port_buffer->headroom_size; 298 296 299 297 in = kzalloc(sz, GFP_KERNEL); 300 298 if (!in) ··· 304 306 305 307 for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 306 308 void *buffer = MLX5_ADDR_OF(pbmc_reg, in, buffer[i]); 309 + current_headroom_cells += MLX5_GET(bufferx_reg, buffer, size); 310 + 307 311 u64 size = port_buffer->buffer[i].size; 308 312 u64 xoff = port_buffer->buffer[i].xoff; 309 313 u64 xon = port_buffer->buffer[i].xon; 310 314 311 - new_headroom_size += size; 312 315 do_div(size, port_buff_cell_sz); 316 + new_headroom_cells += size; 313 317 do_div(xoff, port_buff_cell_sz); 314 318 do_div(xon, port_buff_cell_sz); 315 319 MLX5_SET(bufferx_reg, buffer, size, size); ··· 320 320 MLX5_SET(bufferx_reg, buffer, xon_threshold, xon); 321 321 } 322 322 323 - new_headroom_size /= port_buff_cell_sz; 324 - current_headroom_size /= port_buff_cell_sz; 325 - err = port_update_shared_buffer(priv->mdev, current_headroom_size, 326 - new_headroom_size); 323 + err = port_update_shared_buffer(priv->mdev, current_headroom_cells, 324 + new_headroom_cells); 327 325 if (err) 328 326 goto out; 329 327
+2
drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_hmfs.c
··· 173 173 174 174 memset(rule_actions, 0, NUM_CT_HMFS_RULES * sizeof(*rule_actions)); 175 175 rule_actions[0].action = mlx5_fc_get_hws_action(fs_hmfs->ctx, attr->counter); 176 + rule_actions[0].counter.offset = 177 + attr->counter->id - attr->counter->bulk->base_id; 176 178 /* Modify header is special, it may require extra arguments outside the action itself. */ 177 179 if (mh_action->mh_data) { 178 180 rule_actions[1].modify_header.offset = mh_action->mh_data->offset;
+9 -3
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
··· 362 362 static int mlx5e_dcbnl_ieee_setpfc(struct net_device *dev, 363 363 struct ieee_pfc *pfc) 364 364 { 365 + u8 buffer_ownership = MLX5_BUF_OWNERSHIP_UNKNOWN; 365 366 struct mlx5e_priv *priv = netdev_priv(dev); 366 367 struct mlx5_core_dev *mdev = priv->mdev; 367 368 u32 old_cable_len = priv->dcbx.cable_len; ··· 390 389 391 390 if (MLX5_BUFFER_SUPPORTED(mdev)) { 392 391 pfc_new.pfc_en = (changed & MLX5E_PORT_BUFFER_PFC) ? pfc->pfc_en : curr_pfc_en; 393 - if (priv->dcbx.manual_buffer) 392 + ret = mlx5_query_port_buffer_ownership(mdev, 393 + &buffer_ownership); 394 + if (ret) 395 + netdev_err(dev, 396 + "%s, Failed to get buffer ownership: %d\n", 397 + __func__, ret); 398 + 399 + if (buffer_ownership == MLX5_BUF_OWNERSHIP_SW_OWNED) 394 400 ret = mlx5e_port_manual_buffer_config(priv, changed, 395 401 dev->mtu, &pfc_new, 396 402 NULL, NULL); ··· 990 982 if (!changed) 991 983 return 0; 992 984 993 - priv->dcbx.manual_buffer = true; 994 985 err = mlx5e_port_manual_buffer_config(priv, changed, dev->mtu, NULL, 995 986 buffer_size, prio2buffer); 996 987 return err; ··· 1259 1252 priv->dcbx.cap |= DCB_CAP_DCBX_HOST; 1260 1253 1261 1254 priv->dcbx.port_buff_cell_sz = mlx5e_query_port_buffers_cell_size(priv); 1262 - priv->dcbx.manual_buffer = false; 1263 1255 priv->dcbx.cable_len = MLX5E_DEFAULT_CABLE_LEN; 1264 1256 1265 1257 mlx5e_ets_init(priv);
+98 -85
drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
··· 102 102 u8 level; 103 103 /* Valid only when this node represents a traffic class. */ 104 104 u8 tc; 105 + /* Valid only for a TC arbiter node or vport TC arbiter. */ 106 + u32 tc_bw[DEVLINK_RATE_TCS_MAX]; 105 107 }; 106 108 107 109 static void esw_qos_node_attach_to_parent(struct mlx5_esw_sched_node *node) ··· 464 462 esw_qos_vport_create_sched_element(struct mlx5_esw_sched_node *vport_node, 465 463 struct netlink_ext_ack *extack) 466 464 { 465 + struct mlx5_esw_sched_node *parent = vport_node->parent; 467 466 u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; 468 467 struct mlx5_core_dev *dev = vport_node->esw->dev; 469 468 void *attr; ··· 480 477 attr = MLX5_ADDR_OF(scheduling_context, sched_ctx, element_attributes); 481 478 MLX5_SET(vport_element, attr, vport_number, vport_node->vport->vport); 482 479 MLX5_SET(scheduling_context, sched_ctx, parent_element_id, 483 - vport_node->parent->ix); 480 + parent ? parent->ix : vport_node->esw->qos.root_tsar_ix); 484 481 MLX5_SET(scheduling_context, sched_ctx, max_average_bw, 485 482 vport_node->max_rate); 486 483 ··· 611 608 esw_qos_tc_arbiter_get_bw_shares(struct mlx5_esw_sched_node *tc_arbiter_node, 612 609 u32 *tc_bw) 613 610 { 614 - struct mlx5_esw_sched_node *vports_tc_node; 615 - 616 - list_for_each_entry(vports_tc_node, &tc_arbiter_node->children, entry) 617 - tc_bw[vports_tc_node->tc] = vports_tc_node->bw_share; 611 + memcpy(tc_bw, tc_arbiter_node->tc_bw, sizeof(tc_arbiter_node->tc_bw)); 618 612 } 619 613 620 614 static void ··· 628 628 u8 tc = vports_tc_node->tc; 629 629 u32 bw_share; 630 630 631 + tc_arbiter_node->tc_bw[tc] = tc_bw[tc]; 631 632 bw_share = tc_bw[tc] * fw_max_bw_share; 632 633 bw_share = esw_qos_calc_bw_share(bw_share, divider, 633 634 fw_max_bw_share); ··· 787 786 return err; 788 787 } 789 788 790 - if (MLX5_CAP_QOS(dev, log_esw_max_sched_depth)) { 791 - esw->qos.node0 = __esw_qos_create_vports_sched_node(esw, NULL, extack); 792 - } else { 793 - /* The eswitch doesn't support scheduling 
nodes. 794 - * Create a software-only node0 using the root TSAR to attach vport QoS to. 795 - */ 796 - if (!__esw_qos_alloc_node(esw, 797 - esw->qos.root_tsar_ix, 798 - SCHED_NODE_TYPE_VPORTS_TSAR, 799 - NULL)) 800 - esw->qos.node0 = ERR_PTR(-ENOMEM); 801 - else 802 - list_add_tail(&esw->qos.node0->entry, 803 - &esw->qos.domain->nodes); 804 - } 805 - if (IS_ERR(esw->qos.node0)) { 806 - err = PTR_ERR(esw->qos.node0); 807 - esw_warn(dev, "E-Switch create rate node 0 failed (%d)\n", err); 808 - goto err_node0; 809 - } 810 789 refcount_set(&esw->qos.refcnt, 1); 811 790 812 791 return 0; 813 - 814 - err_node0: 815 - if (mlx5_destroy_scheduling_element_cmd(esw->dev, SCHEDULING_HIERARCHY_E_SWITCH, 816 - esw->qos.root_tsar_ix)) 817 - esw_warn(esw->dev, "E-Switch destroy root TSAR failed.\n"); 818 - 819 - return err; 820 792 } 821 793 822 794 static void esw_qos_destroy(struct mlx5_eswitch *esw) 823 795 { 824 796 int err; 825 - 826 - if (esw->qos.node0->ix != esw->qos.root_tsar_ix) 827 - __esw_qos_destroy_node(esw->qos.node0, NULL); 828 - else 829 - __esw_qos_free_node(esw->qos.node0); 830 - esw->qos.node0 = NULL; 831 797 832 798 err = mlx5_destroy_scheduling_element_cmd(esw->dev, 833 799 SCHEDULING_HIERARCHY_E_SWITCH, ··· 958 990 struct netlink_ext_ack *extack) 959 991 { 960 992 struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; 961 - int err, new_level, max_level; 993 + struct mlx5_esw_sched_node *parent = vport_node->parent; 994 + int err; 962 995 963 996 if (type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) { 997 + int new_level, max_level; 998 + 964 999 /* Increase the parent's level by 2 to account for both the 965 1000 * TC arbiter and the vports TC scheduling element. 966 1001 */ 967 - new_level = vport_node->parent->level + 2; 1002 + new_level = (parent ? 
parent->level : 2) + 2; 968 1003 max_level = 1 << MLX5_CAP_QOS(vport_node->esw->dev, 969 1004 log_esw_max_sched_depth); 970 1005 if (new_level > max_level) { ··· 1004 1033 err_sched_nodes: 1005 1034 if (type == SCHED_NODE_TYPE_RATE_LIMITER) { 1006 1035 esw_qos_node_destroy_sched_element(vport_node, NULL); 1007 - list_add_tail(&vport_node->entry, 1008 - &vport_node->parent->children); 1009 - vport_node->level = vport_node->parent->level + 1; 1036 + esw_qos_node_attach_to_parent(vport_node); 1010 1037 } else { 1011 1038 esw_qos_tc_arbiter_scheduling_teardown(vport_node, NULL); 1012 1039 } ··· 1052 1083 static void esw_qos_vport_disable(struct mlx5_vport *vport, struct netlink_ext_ack *extack) 1053 1084 { 1054 1085 struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; 1055 - struct mlx5_esw_sched_node *parent = vport_node->parent; 1056 1086 enum sched_node_type curr_type = vport_node->type; 1057 1087 1058 1088 if (curr_type == SCHED_NODE_TYPE_VPORT) ··· 1060 1092 esw_qos_vport_tc_disable(vport, extack); 1061 1093 1062 1094 vport_node->bw_share = 0; 1095 + memset(vport_node->tc_bw, 0, sizeof(vport_node->tc_bw)); 1063 1096 list_del_init(&vport_node->entry); 1064 - esw_qos_normalize_min_rate(parent->esw, parent, extack); 1097 + esw_qos_normalize_min_rate(vport_node->esw, vport_node->parent, extack); 1065 1098 1066 1099 trace_mlx5_esw_vport_qos_destroy(vport_node->esw->dev, vport); 1067 1100 } ··· 1072 1103 struct mlx5_esw_sched_node *parent, 1073 1104 struct netlink_ext_ack *extack) 1074 1105 { 1106 + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; 1075 1107 int err; 1076 1108 1077 1109 esw_assert_qos_lock_held(vport->dev->priv.eswitch); 1078 1110 1079 - esw_qos_node_set_parent(vport->qos.sched_node, parent); 1080 - if (type == SCHED_NODE_TYPE_VPORT) { 1081 - err = esw_qos_vport_create_sched_element(vport->qos.sched_node, 1082 - extack); 1083 - } else { 1111 + esw_qos_node_set_parent(vport_node, parent); 1112 + if (type == SCHED_NODE_TYPE_VPORT) 
1113 + err = esw_qos_vport_create_sched_element(vport_node, extack); 1114 + else 1084 1115 err = esw_qos_vport_tc_enable(vport, type, extack); 1085 - } 1086 1116 if (err) 1087 1117 return err; 1088 1118 1089 - vport->qos.sched_node->type = type; 1090 - esw_qos_normalize_min_rate(parent->esw, parent, extack); 1091 - trace_mlx5_esw_vport_qos_create(vport->dev, vport, 1092 - vport->qos.sched_node->max_rate, 1093 - vport->qos.sched_node->bw_share); 1119 + vport_node->type = type; 1120 + esw_qos_normalize_min_rate(vport_node->esw, parent, extack); 1121 + trace_mlx5_esw_vport_qos_create(vport->dev, vport, vport_node->max_rate, 1122 + vport_node->bw_share); 1094 1123 1095 1124 return 0; 1096 1125 } ··· 1099 1132 { 1100 1133 struct mlx5_eswitch *esw = vport->dev->priv.eswitch; 1101 1134 struct mlx5_esw_sched_node *sched_node; 1135 + struct mlx5_eswitch *parent_esw; 1102 1136 int err; 1103 1137 1104 1138 esw_assert_qos_lock_held(esw); ··· 1107 1139 if (err) 1108 1140 return err; 1109 1141 1110 - parent = parent ?: esw->qos.node0; 1111 - sched_node = __esw_qos_alloc_node(parent->esw, 0, type, parent); 1112 - if (!sched_node) 1142 + parent_esw = parent ? parent->esw : esw; 1143 + sched_node = __esw_qos_alloc_node(parent_esw, 0, type, parent); 1144 + if (!sched_node) { 1145 + esw_qos_put(esw); 1113 1146 return -ENOMEM; 1147 + } 1148 + if (!parent) 1149 + list_add_tail(&sched_node->entry, &esw->qos.domain->nodes); 1114 1150 1115 1151 sched_node->max_rate = max_rate; 1116 1152 sched_node->min_rate = min_rate; ··· 1122 1150 vport->qos.sched_node = sched_node; 1123 1151 err = esw_qos_vport_enable(vport, type, parent, extack); 1124 1152 if (err) { 1153 + __esw_qos_free_node(sched_node); 1125 1154 esw_qos_put(esw); 1126 1155 vport->qos.sched_node = NULL; 1127 1156 } 1128 1157 1129 1158 return err; 1159 + } 1160 + 1161 + static void mlx5_esw_qos_vport_disable_locked(struct mlx5_vport *vport) 1162 + { 1163 + struct mlx5_eswitch *esw = vport->dev->priv.eswitch; 1164 + 1165 + esw_assert_qos_lock_held(esw); 1166 + if (!vport->qos.sched_node) 1167 + return; 1168 + 1169 + esw_qos_vport_disable(vport, NULL); 1170 + mlx5_esw_qos_vport_qos_free(vport); 1171 + esw_qos_put(esw); 1130 1172 } 1131 1173 1132 1174 void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport) ··· 1154 1168 goto unlock; 1155 1169 1156 1170 parent = vport->qos.sched_node->parent; 1157 - WARN(parent != esw->qos.node0, "Disabling QoS on port before detaching it from node"); 1171 + WARN(parent, "Disabling QoS on port before detaching it from node"); 1158 1172 1159 - esw_qos_vport_disable(vport, NULL); 1160 - mlx5_esw_qos_vport_qos_free(vport); 1161 - esw_qos_put(esw); 1173 + mlx5_esw_qos_vport_disable_locked(vport); 1162 1174 unlock: 1163 1175 esw_qos_unlock(esw); 1164 1176 } ··· 1246 1262 struct mlx5_esw_sched_node *parent, 1247 1263 struct netlink_ext_ack *extack) 1248 1264 { 1249 - struct mlx5_esw_sched_node *curr_parent = vport->qos.sched_node->parent; 1250 - enum sched_node_type curr_type = vport->qos.sched_node->type; 1265 + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; 1266 + struct mlx5_esw_sched_node *curr_parent = vport_node->parent;
1267 + enum sched_node_type curr_type = vport_node->type; 1251 1268 u32 curr_tc_bw[DEVLINK_RATE_TCS_MAX] = {0}; 1252 1269 int err; 1253 1270 1254 1271 esw_assert_qos_lock_held(vport->dev->priv.eswitch); 1255 - parent = parent ?: curr_parent; 1256 1272 if (curr_type == type && curr_parent == parent) 1257 1273 return 0; 1258 1274 ··· 1260 1276 if (err) 1261 1277 return err; 1262 1278 1263 - if (curr_type == SCHED_NODE_TYPE_TC_ARBITER_TSAR && curr_type == type) { 1264 - esw_qos_tc_arbiter_get_bw_shares(vport->qos.sched_node, 1265 - curr_tc_bw); 1266 - } 1279 + if (curr_type == SCHED_NODE_TYPE_TC_ARBITER_TSAR && curr_type == type) 1280 + esw_qos_tc_arbiter_get_bw_shares(vport_node, curr_tc_bw); 1267 1281 1268 1282 esw_qos_vport_disable(vport, extack); 1269 1283 ··· 1272 1290 } 1273 1291 1274 1292 if (curr_type == SCHED_NODE_TYPE_TC_ARBITER_TSAR && curr_type == type) { 1275 - esw_qos_set_tc_arbiter_bw_shares(vport->qos.sched_node, 1276 - curr_tc_bw, extack); 1293 + esw_qos_set_tc_arbiter_bw_shares(vport_node, curr_tc_bw, 1294 + extack); 1277 1295 } 1278 1296 1279 1297 return err; ··· 1288 1306 1289 1307 esw_assert_qos_lock_held(esw); 1290 1308 curr_parent = vport->qos.sched_node->parent; 1291 - parent = parent ?: esw->qos.node0; 1292 1309 if (curr_parent == parent) 1293 1310 return 0; 1294 1311 1295 1312 /* Set vport QoS type based on parent node type if different from 1296 1313 * default QoS; otherwise, use the vport's current QoS type. 
1297 1314 */ 1298 - if (parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) 1315 + if (parent && parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) 1299 1316 type = SCHED_NODE_TYPE_RATE_LIMITER; 1300 - else if (curr_parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) 1317 + else if (curr_parent && 1318 + curr_parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) 1301 1319 type = SCHED_NODE_TYPE_VPORT; 1302 1320 else 1303 1321 type = vport->qos.sched_node->type; ··· 1636 1654 static bool esw_qos_vport_validate_unsupported_tc_bw(struct mlx5_vport *vport, 1637 1655 u32 *tc_bw) 1638 1656 { 1639 - struct mlx5_eswitch *esw = vport->qos.sched_node ? 1640 - vport->qos.sched_node->parent->esw : 1641 - vport->dev->priv.eswitch; 1657 + struct mlx5_esw_sched_node *node = vport->qos.sched_node; 1658 + struct mlx5_eswitch *esw = vport->dev->priv.eswitch; 1659 + 1660 + esw = (node && node->parent) ? node->parent->esw : esw; 1642 1661 1643 1662 return esw_qos_validate_unsupported_tc_bw(esw, tc_bw); 1644 1663 } ··· 1654 1671 } 1655 1672 1656 1673 return true; 1674 + } 1675 + 1676 + static void esw_vport_qos_prune_empty(struct mlx5_vport *vport) 1677 + { 1678 + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; 1679 + 1680 + esw_assert_qos_lock_held(vport->dev->priv.eswitch); 1681 + if (!vport_node) 1682 + return; 1683 + 1684 + if (vport_node->parent || vport_node->max_rate || 1685 + vport_node->min_rate || !esw_qos_tc_bw_disabled(vport_node->tc_bw)) 1686 + return; 1687 + 1688 + mlx5_esw_qos_vport_disable_locked(vport); 1657 1689 } 1658 1690 1659 1691 int mlx5_esw_qos_init(struct mlx5_eswitch *esw) ··· 1704 1706 1705 1707 esw_qos_lock(esw); 1706 1708 err = mlx5_esw_qos_set_vport_min_rate(vport, tx_share, extack); 1709 + if (err) 1710 + goto out; 1711 + esw_vport_qos_prune_empty(vport); 1712 + out: 1707 1713 esw_qos_unlock(esw); 1708 1714 return err; 1709 1715 } ··· 1729 1727 1730 1728 esw_qos_lock(esw); 1731 1729 err = mlx5_esw_qos_set_vport_max_rate(vport, tx_max, extack); 1730 + if (err) 1731 + goto out; 1732 + esw_vport_qos_prune_empty(vport); 1733 + out: 1732 1734 esw_qos_unlock(esw); 1733 1735 return err; 1734 1736 } ··· 1769 1763 if (disable) { 1770 1764 if (vport_node->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) 1771 1765 err = esw_qos_vport_update(vport, SCHED_NODE_TYPE_VPORT, 1772 - NULL, extack); 1766 + vport_node->parent, extack); 1767 + esw_vport_qos_prune_empty(vport); 1773 1768 goto unlock; 1774 1769 } 1775 1770 ··· 1782 1775 } else { 1783 1776 err = esw_qos_vport_update(vport, 1784 1777 SCHED_NODE_TYPE_TC_ARBITER_TSAR, 1785 - NULL, extack); 1778 + vport_node->parent, extack); 1786 1779 } 1787 1780 if (!err) 1788 1781 esw_qos_set_tc_arbiter_bw_shares(vport_node, tc_bw, extack); ··· 1931 1924 void *priv, void *parent_priv, 1932 1925 struct netlink_ext_ack *extack) 1933 1926 { 1934 - struct mlx5_esw_sched_node *node; 1927 + struct mlx5_esw_sched_node *node = parent ? parent_priv : NULL; 1935 1928 struct mlx5_vport *vport = priv; 1929 + int err; 1936 1930 1937 - if (!parent) 1938 - return mlx5_esw_qos_vport_update_parent(vport, NULL, extack); 1931 + err = mlx5_esw_qos_vport_update_parent(vport, node, extack); 1932 + if (!err) { 1933 + struct mlx5_eswitch *esw = vport->dev->priv.eswitch; 1939 1934 1940 - node = parent_priv; 1941 - return mlx5_esw_qos_vport_update_parent(vport, node, extack); 1935 + esw_qos_lock(esw); 1936 + esw_vport_qos_prune_empty(vport); 1937 + esw_qos_unlock(esw); 1938 + } 1939 + 1940 + return err; 1942 1941 } 1943 1942 1944 1943 static bool esw_qos_is_node_empty(struct mlx5_esw_sched_node *node)
-5
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
··· 373 373 refcount_t refcnt; 374 374 u32 root_tsar_ix; 375 375 struct mlx5_qos_domain *domain; 376 - /* Contains all vports with QoS enabled but no explicit node. 377 - * Cannot be NULL if QoS is enabled, but may be a fake node 378 - * referencing the root TSAR if the esw doesn't support nodes. 379 - */ 380 - struct mlx5_esw_sched_node *node0; 381 376 } qos; 382 377 383 378 struct mlx5_esw_bridge_offloads *br_offloads;
+2
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
··· 367 367 int mlx5_set_port_dcbx_param(struct mlx5_core_dev *mdev, u32 *in); 368 368 int mlx5_set_trust_state(struct mlx5_core_dev *mdev, u8 trust_state); 369 369 int mlx5_query_trust_state(struct mlx5_core_dev *mdev, u8 *trust_state); 370 + int mlx5_query_port_buffer_ownership(struct mlx5_core_dev *mdev, 371 + u8 *buffer_ownership); 370 372 int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, u8 dscp, u8 prio); 371 373 int mlx5_query_dscp2prio(struct mlx5_core_dev *mdev, u8 *dscp2prio); 372 374
+20
drivers/net/ethernet/mellanox/mlx5/core/port.c
··· 968 968 return err; 969 969 } 970 970 971 + int mlx5_query_port_buffer_ownership(struct mlx5_core_dev *mdev, 972 + u8 *buffer_ownership) 973 + { 974 + u32 out[MLX5_ST_SZ_DW(pfcc_reg)] = {}; 975 + int err; 976 + 977 + if (!MLX5_CAP_PCAM_FEATURE(mdev, buffer_ownership)) { 978 + *buffer_ownership = MLX5_BUF_OWNERSHIP_UNKNOWN; 979 + return 0; 980 + } 981 + 982 + err = mlx5_query_pfcc_reg(mdev, out, sizeof(out)); 983 + if (err) 984 + return err; 985 + 986 + *buffer_ownership = MLX5_GET(pfcc_reg, out, buf_ownership); 987 + 988 + return 0; 989 + } 990 + 971 991 int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, u8 dscp, u8 prio) 972 992 { 973 993 int sz = MLX5_ST_SZ_BYTES(qpdpm_reg);
+62 -19
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
··· 74 74 static int 75 75 hws_bwc_matcher_move_all_simple(struct mlx5hws_bwc_matcher *bwc_matcher) 76 76 { 77 - bool move_error = false, poll_error = false, drain_error = false; 78 77 struct mlx5hws_context *ctx = bwc_matcher->matcher->tbl->ctx; 79 78 struct mlx5hws_matcher *matcher = bwc_matcher->matcher; 80 + int drain_error = 0, move_error = 0, poll_error = 0; 80 81 u16 bwc_queues = mlx5hws_bwc_queues(ctx); 81 82 struct mlx5hws_rule_attr rule_attr; 82 83 struct mlx5hws_bwc_rule *bwc_rule; ··· 84 84 struct list_head *rules_list; 85 85 u32 pending_rules; 86 86 int i, ret = 0; 87 + bool drain; 87 88 88 89 mlx5hws_bwc_rule_fill_attr(bwc_matcher, 0, 0, &rule_attr); 89 90 ··· 100 99 ret = mlx5hws_matcher_resize_rule_move(matcher, 101 100 bwc_rule->rule, 102 101 &rule_attr); 103 - if (unlikely(ret && !move_error)) { 104 - mlx5hws_err(ctx, 105 - "Moving BWC rule: move failed (%d), attempting to move rest of the rules\n", 106 - ret); 107 - move_error = true; 102 + if (unlikely(ret)) { 103 + if (!move_error) { 104 + mlx5hws_err(ctx, 105 + "Moving BWC rule: move failed (%d), attempting to move rest of the rules\n", 106 + ret); 107 + move_error = ret; 108 + } 109 + /* Rule wasn't queued, no need to poll */ 110 + continue; 108 111 } 109 112 110 113 pending_rules++; 114 + drain = pending_rules >= 115 + hws_bwc_get_burst_th(ctx, rule_attr.queue_id); 111 116 ret = mlx5hws_bwc_queue_poll(ctx, 112 117 rule_attr.queue_id, 113 118 &pending_rules, 114 - false); 115 - if (unlikely(ret && !poll_error)) { 116 - mlx5hws_err(ctx, 117 - "Moving BWC rule: poll failed (%d), attempting to move rest of the rules\n", 118 - ret); 119 - poll_error = true; 119 + drain); 120 + if (unlikely(ret)) { 121 + if (ret == -ETIMEDOUT) { 122 + mlx5hws_err(ctx, 123 + "Moving BWC rule: timeout polling for completions (%d), aborting rehash\n", 124 + ret); 125 + return ret; 126 + } 127 + if (!poll_error) { 128 + mlx5hws_err(ctx, 129 + "Moving BWC rule: polling for completions failed (%d), attempting to move rest of the rules\n", 130 + ret); 131 + poll_error = ret; 132 + } 120 133 } 121 134 } 122 135 ··· 141 126 rule_attr.queue_id, 142 127 &pending_rules, 143 128 true); 144 - if (unlikely(ret && !drain_error)) { 145 - mlx5hws_err(ctx, 146 - "Moving BWC rule: drain failed (%d), attempting to move rest of the rules\n", 147 - ret); 148 - drain_error = true; 129 + if (unlikely(ret)) { 130 + if (ret == -ETIMEDOUT) { 131 + mlx5hws_err(ctx, 132 + "Moving bwc rule: timeout draining completions (%d), aborting rehash\n", 133 + ret); 134 + return ret; 135 + } 136 + if (!drain_error) { 137 + mlx5hws_err(ctx, 138 + "Moving bwc rule: drain failed (%d), attempting to move rest of the rules\n", 139 + ret); 140 + drain_error = ret; 141 + } 149 142 } 150 143 } 151 144 } 152 145 153 - if (move_error || poll_error || drain_error) 154 - ret = -EINVAL; 146 + /* Return the first error that happened */ 147 + if (unlikely(move_error)) 148 + return move_error; 149 + if (unlikely(poll_error)) 150 + return poll_error; 151 + if (unlikely(drain_error)) 152 + return drain_error; 155 153 156 154 return ret; 157 155 } ··· 1061 1033 hws_bwc_rule_list_add(bwc_rule, bwc_queue_idx); 1062 1034 mutex_unlock(queue_lock); 1063 1035 return 0; /* rule inserted successfully */ 1036 + } 1037 + 1038 + /* Rule insertion could fail due to queue being full, timeout, or 1039 + * matcher in resize. In such cases, no point in trying to rehash. 1040 + */ 1041 + if (ret == -EBUSY || ret == -ETIMEDOUT || ret == -EAGAIN) { 1042 + mutex_unlock(queue_lock); 1043 + mlx5hws_err(ctx, 1044 + "BWC rule insertion failed - %s (%d)\n", 1045 + ret == -EBUSY ? "queue is full" : 1046 + ret == -ETIMEDOUT ? "timeout" : 1047 + ret == -EAGAIN ? "matcher in resize" : "N/A", 1048 + ret); 1049 + hws_bwc_rule_cnt_dec(bwc_rule); 1050 + return ret; 1064 1051 } 1065 1052 1066 1053 /* At this point the rule wasn't added.
+28 -13
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c
··· 1328 1328 { 1329 1329 struct mlx5hws_context *ctx = bwc_matcher->matcher->tbl->ctx; 1330 1330 struct mlx5hws_matcher *matcher = bwc_matcher->matcher; 1331 - bool move_error = false, poll_error = false; 1332 1331 u16 bwc_queues = mlx5hws_bwc_queues(ctx); 1333 1332 struct mlx5hws_bwc_rule *tmp_bwc_rule; 1334 1333 struct mlx5hws_rule_attr rule_attr; 1335 1334 struct mlx5hws_table *isolated_tbl; 1335 + int move_error = 0, poll_error = 0; 1336 1336 struct mlx5hws_rule *tmp_rule; 1337 1337 struct list_head *rules_list; 1338 1338 u32 expected_completions = 1; ··· 1391 1391 ret = mlx5hws_matcher_resize_rule_move(matcher, 1392 1392 tmp_rule, 1393 1393 &rule_attr); 1394 - if (unlikely(ret && !move_error)) { 1395 - mlx5hws_err(ctx, 1396 - "Moving complex BWC rule failed (%d), attempting to move rest of the rules\n", 1397 - ret); 1398 - move_error = true; 1394 + if (unlikely(ret)) { 1395 + if (!move_error) { 1396 + mlx5hws_err(ctx, 1397 + "Moving complex BWC rule: move failed (%d), attempting to move rest of the rules\n", 1398 + ret); 1399 + move_error = ret; 1400 + } 1401 + /* Rule wasn't queued, no need to poll */ 1402 + continue; 1399 1403 } 1400 1404 1401 1405 expected_completions = 1; ··· 1407 1403 rule_attr.queue_id, 1408 1404 &expected_completions, 1409 1405 true); 1410 - if (unlikely(ret && !poll_error)) { 1411 - mlx5hws_err(ctx, 1412 - "Moving complex BWC rule: poll failed (%d), attempting to move rest of the rules\n", 1413 - ret); 1414 - poll_error = true; 1406 + if (unlikely(ret)) { 1407 + if (ret == -ETIMEDOUT) { 1408 + mlx5hws_err(ctx, 1409 + "Moving complex BWC rule: timeout polling for completions (%d), aborting rehash\n", 1410 + ret); 1411 + return ret; 1412 + } 1413 + if (!poll_error) { 1414 + mlx5hws_err(ctx, 1415 + "Moving complex BWC rule: polling for completions failed (%d), attempting to move rest of the rules\n", 1416 + ret); 1417 + poll_error = ret; 1418 + } 1415 1419 } 1416 1420 1417 1421 /* Done moving the rule to the new matcher, ··· 1434 1422 } 
1435 1423 } 1436 1424 1437 - if (move_error || poll_error) 1438 - ret = -EINVAL; 1425 + /* Return the first error that happened */ 1426 + if (unlikely(move_error)) 1427 + return move_error; 1428 + if (unlikely(poll_error)) 1429 + return poll_error; 1439 1430 1440 1431 return ret; 1441 1432 }
+1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c
··· 55 55 56 56 MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE); 57 57 MLX5_SET(create_flow_table_in, in, table_type, ft_attr->type); 58 + MLX5_SET(create_flow_table_in, in, uid, ft_attr->uid); 58 59 59 60 ft_ctx = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context); 60 61 MLX5_SET(flow_table_context, ft_ctx, level, ft_attr->level);
+1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h
··· 36 36 struct mlx5hws_cmd_ft_create_attr { 37 37 u8 type; 38 38 u8 level; 39 + u16 uid; 39 40 bool rtc_valid; 40 41 bool decap_en; 41 42 bool reformat_en;
+1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
··· 267 267 268 268 tbl_attr.type = MLX5HWS_TABLE_TYPE_FDB; 269 269 tbl_attr.level = ft_attr->level; 270 + tbl_attr.uid = ft_attr->uid; 270 271 tbl = mlx5hws_table_create(ctx, &tbl_attr); 271 272 if (!tbl) { 272 273 mlx5_core_err(ns->dev, "Failed creating hws flow_table\n");
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c
··· 85 85 86 86 ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, 87 87 tbl, 88 + 0, 88 89 &matcher->end_ft_id); 89 90 if (ret) { 90 91 mlx5hws_err(tbl->ctx, "Isolated matcher: failed to create end flow table\n"); ··· 113 112 if (mlx5hws_matcher_is_isolated(matcher)) 114 113 ret = hws_matcher_create_end_ft_isolated(matcher); 115 114 else 116 - ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, tbl, 115 + ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, 116 + tbl, 117 + 0, 117 118 &matcher->end_ft_id); 118 119 119 120 if (ret) {
+1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
··· 75 75 struct mlx5hws_table_attr { 76 76 enum mlx5hws_table_type type; 77 77 u32 level; 78 + u16 uid; 78 79 }; 79 80 80 81 enum mlx5hws_matcher_flow_src {
-1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c
··· 964 964 return -ENOMEM; 965 965 966 966 MLX5_SET(cqc, cqc_data, uar_page, mdev->priv.uar->index); 967 - MLX5_SET(cqc, cqc_data, cqe_sz, queue->num_entries); 968 967 MLX5_SET(cqc, cqc_data, log_cq_size, ilog2(queue->num_entries)); 969 968 970 969 err = hws_send_ring_alloc_cq(mdev, numa_node, queue, cqc_data, cq);
+10 -3
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.c
··· 9 9 } 10 10 11 11 static void hws_table_init_next_ft_attr(struct mlx5hws_table *tbl, 12 + u16 uid, 12 13 struct mlx5hws_cmd_ft_create_attr *ft_attr) 13 14 { 14 15 ft_attr->type = tbl->fw_ft_type; ··· 17 16 ft_attr->level = tbl->ctx->caps->fdb_ft.max_level - 1; 18 17 else 19 18 ft_attr->level = tbl->ctx->caps->nic_ft.max_level - 1; 19 + 20 20 ft_attr->rtc_valid = true; 21 + ft_attr->uid = uid; 21 22 } 22 23 23 24 static void hws_table_set_cap_attr(struct mlx5hws_table *tbl, ··· 122 119 123 120 int mlx5hws_table_create_default_ft(struct mlx5_core_dev *mdev, 124 121 struct mlx5hws_table *tbl, 125 - u32 *ft_id) 122 + u16 uid, u32 *ft_id) 126 123 { 127 124 struct mlx5hws_cmd_ft_create_attr ft_attr = {0}; 128 125 int ret; 129 126 130 - hws_table_init_next_ft_attr(tbl, &ft_attr); 127 + hws_table_init_next_ft_attr(tbl, uid, &ft_attr); 131 128 hws_table_set_cap_attr(tbl, &ft_attr); 132 129 133 130 ret = mlx5hws_cmd_flow_table_create(mdev, &ft_attr, ft_id); ··· 192 189 } 193 190 194 191 mutex_lock(&ctx->ctrl_lock); 195 - ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, tbl, &tbl->ft_id); 192 + ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, 193 + tbl, 194 + tbl->uid, 195 + &tbl->ft_id); 196 196 if (ret) { 197 197 mlx5hws_err(tbl->ctx, "Failed to create flow table object\n"); 198 198 mutex_unlock(&ctx->ctrl_lock); ··· 245 239 tbl->ctx = ctx; 246 240 tbl->type = attr->type; 247 241 tbl->level = attr->level; 242 + tbl->uid = attr->uid; 248 243 249 244 ret = hws_table_init(tbl); 250 245 if (ret) {
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.h
··· 18 18 enum mlx5hws_table_type type; 19 19 u32 fw_ft_type; 20 20 u32 level; 21 + u16 uid; 21 22 struct list_head matchers_list; 22 23 struct list_head tbl_list_node; 23 24 struct mlx5hws_default_miss default_miss; ··· 48 47 49 48 int mlx5hws_table_create_default_ft(struct mlx5_core_dev *mdev, 50 49 struct mlx5hws_table *tbl, 51 - u32 *ft_id); 50 + u16 uid, u32 *ft_id); 52 51 53 52 void mlx5hws_table_destroy_default_ft(struct mlx5hws_table *tbl, 54 53 u32 ft_id);
+2
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
··· 2375 2375 ROUTER_EXP, false), 2376 2376 MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_DIP_LINK_LOCAL, FORWARD, 2377 2377 ROUTER_EXP, false), 2378 + MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_SIP_LINK_LOCAL, FORWARD, 2379 + ROUTER_EXP, false), 2378 2380 /* Multicast Router Traps */ 2379 2381 MLXSW_SP_RXL_MARK(ACL1, TRAP_TO_CPU, MULTICAST, false), 2380 2382 MLXSW_SP_RXL_L3_MARK(ACL2, TRAP_TO_CPU, MULTICAST, false),
+1
drivers/net/ethernet/mellanox/mlxsw/trap.h
··· 94 94 MLXSW_TRAP_ID_DISCARD_ING_ROUTER_IPV4_SIP_BC = 0x16A, 95 95 MLXSW_TRAP_ID_DISCARD_ING_ROUTER_IPV4_DIP_LOCAL_NET = 0x16B, 96 96 MLXSW_TRAP_ID_DISCARD_ING_ROUTER_DIP_LINK_LOCAL = 0x16C, 97 + MLXSW_TRAP_ID_DISCARD_ING_ROUTER_SIP_LINK_LOCAL = 0x16D, 97 98 MLXSW_TRAP_ID_DISCARD_ROUTER_IRIF_EN = 0x178, 98 99 MLXSW_TRAP_ID_DISCARD_ROUTER_ERIF_EN = 0x179, 99 100 MLXSW_TRAP_ID_DISCARD_ROUTER_LPM4 = 0x17B,
+21
drivers/net/ethernet/microchip/lan865x/lan865x.c
··· 32 32 /* MAC Specific Addr 1 Top Reg */ 33 33 #define LAN865X_REG_MAC_H_SADDR1 0x00010023 34 34 35 + /* MAC TSU Timer Increment Register */ 36 + #define LAN865X_REG_MAC_TSU_TIMER_INCR 0x00010077 37 + #define MAC_TSU_TIMER_INCR_COUNT_NANOSECONDS 0x0028 38 + 35 39 struct lan865x_priv { 36 40 struct work_struct multicast_work; 37 41 struct net_device *netdev; ··· 315 311 316 312 phy_start(netdev->phydev); 317 313 314 + netif_start_queue(netdev); 315 + 318 316 return 0; 319 317 } 320 318 ··· 348 342 if (!priv->tc6) { 349 343 ret = -ENODEV; 350 344 goto free_netdev; 345 + } 346 + 347 + /* LAN865x Rev.B0/B1 configuration parameters from AN1760 348 + * As per the Configuration Application Note AN1760 published in the 349 + * link, https://www.microchip.com/en-us/application-notes/an1760 350 + * Revision F (DS60001760G - June 2024), configure the MAC to set time 351 + * stamping at the end of the Start of Frame Delimiter (SFD) and set the 352 + * Timer Increment reg to 40 ns to be used as a 25 MHz internal clock. 353 + */ 354 + ret = oa_tc6_write_register(priv->tc6, LAN865X_REG_MAC_TSU_TIMER_INCR, 355 + MAC_TSU_TIMER_INCR_COUNT_NANOSECONDS); 356 + if (ret) { 357 + dev_err(&spi->dev, "Failed to config TSU Timer Incr reg: %d\n", 358 + ret); 359 + goto oa_tc6_exit; 351 360 } 352 361 353 362 /* As per the point s3 in the below errata, SPI receive Ethernet frame
+1 -1
drivers/net/ethernet/realtek/rtase/rtase.h
··· 241 241 #define RTASE_RX_RES BIT(20) 242 242 #define RTASE_RX_RUNT BIT(19) 243 243 #define RTASE_RX_RWT BIT(18) 244 - #define RTASE_RX_CRC BIT(16) 244 + #define RTASE_RX_CRC BIT(17) 245 245 #define RTASE_RX_V6F BIT(31) 246 246 #define RTASE_RX_V4F BIT(30) 247 247 #define RTASE_RX_UDPT BIT(29)
+8 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c
··· 152 152 static int thead_dwmac_enable_clk(struct plat_stmmacenet_data *plat) 153 153 { 154 154 struct thead_dwmac *dwmac = plat->bsp_priv; 155 - u32 reg; 155 + u32 reg, div; 156 156 157 157 switch (plat->mac_interface) { 158 158 case PHY_INTERFACE_MODE_MII: ··· 164 164 case PHY_INTERFACE_MODE_RGMII_RXID: 165 165 case PHY_INTERFACE_MODE_RGMII_TXID: 166 166 /* use pll */ 167 + div = clk_get_rate(plat->stmmac_clk) / rgmii_clock(SPEED_1000); 168 + reg = FIELD_PREP(GMAC_PLLCLK_DIV_EN, 1) | 169 + FIELD_PREP(GMAC_PLLCLK_DIV_NUM, div); 170 + 171 + writel(0, dwmac->apb_base + GMAC_PLLCLK_DIV); 172 + writel(reg, dwmac->apb_base + GMAC_PLLCLK_DIV); 173 + 167 174 writel(GMAC_GTXCLK_SEL_PLL, dwmac->apb_base + GMAC_GTXCLK_SEL); 168 175 reg = GMAC_TX_CLK_EN | GMAC_TX_CLK_N_EN | GMAC_TX_CLK_OUT_EN | 169 176 GMAC_RX_CLK_EN | GMAC_RX_CLK_N_EN;
+41 -31
drivers/net/ethernet/ti/icssg/icssg_prueth.c
··· 203 203 } 204 204 } 205 205 206 + static void icssg_enable_fw_offload(struct prueth *prueth) 207 + { 208 + struct prueth_emac *emac; 209 + int mac; 210 + 211 + for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) { 212 + emac = prueth->emac[mac]; 213 + if (prueth->is_hsr_offload_mode) { 214 + if (emac->ndev->features & NETIF_F_HW_HSR_TAG_RM) 215 + icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_ENABLE); 216 + else 217 + icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_DISABLE); 218 + } 219 + 220 + if (prueth->is_switch_mode || prueth->is_hsr_offload_mode) { 221 + if (netif_running(emac->ndev)) { 222 + icssg_fdb_add_del(emac, eth_stp_addr, prueth->default_vlan, 223 + ICSSG_FDB_ENTRY_P0_MEMBERSHIP | 224 + ICSSG_FDB_ENTRY_P1_MEMBERSHIP | 225 + ICSSG_FDB_ENTRY_P2_MEMBERSHIP | 226 + ICSSG_FDB_ENTRY_BLOCK, 227 + true); 228 + icssg_vtbl_modify(emac, emac->port_vlan | DEFAULT_VID, 229 + BIT(emac->port_id) | DEFAULT_PORT_MASK, 230 + BIT(emac->port_id) | DEFAULT_UNTAG_MASK, 231 + true); 232 + if (prueth->is_hsr_offload_mode) 233 + icssg_vtbl_modify(emac, DEFAULT_VID, 234 + DEFAULT_PORT_MASK, 235 + DEFAULT_UNTAG_MASK, true); 236 + icssg_set_pvid(prueth, emac->port_vlan, emac->port_id); 237 + if (prueth->is_switch_mode) 238 + icssg_set_port_state(emac, ICSSG_EMAC_PORT_VLAN_AWARE_ENABLE); 239 + } 240 + } 241 + } 242 + } 243 + 206 244 static int prueth_emac_common_start(struct prueth *prueth) 207 245 { 208 246 struct prueth_emac *emac; ··· 791 753 ret = prueth_emac_common_start(prueth); 792 754 if (ret) 793 755 goto free_rx_irq; 756 + icssg_enable_fw_offload(prueth); 794 757 } 795 758 796 759 flow_cfg = emac->dram.va + ICSSG_CONFIG_OFFSET + PSI_L_REGULAR_FLOW_ID_BASE_OFFSET; ··· 1399 1360 1400 1361 static void icssg_change_mode(struct prueth *prueth) 1401 1362 { 1402 - struct prueth_emac *emac; 1403 - int mac, ret; 1363 + int ret; 1404 1364 1405 1365 ret = prueth_emac_restart(prueth); 1406 1366 if (ret) { ··· 1407 1369 return; 1408 1370 } 1409 1371 1410 - for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) { 1411 - emac = prueth->emac[mac]; 1412 - if (prueth->is_hsr_offload_mode) { 1413 - if (emac->ndev->features & NETIF_F_HW_HSR_TAG_RM) 1414 - icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_ENABLE); 1415 - else 1416 - icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_DISABLE); 1417 - } 1418 - 1419 - if (netif_running(emac->ndev)) { 1420 - icssg_fdb_add_del(emac, eth_stp_addr, prueth->default_vlan, 1421 - ICSSG_FDB_ENTRY_P0_MEMBERSHIP | 1422 - ICSSG_FDB_ENTRY_P1_MEMBERSHIP | 1423 - ICSSG_FDB_ENTRY_P2_MEMBERSHIP | 1424 - ICSSG_FDB_ENTRY_BLOCK, 1425 - true); 1426 - icssg_vtbl_modify(emac, emac->port_vlan | DEFAULT_VID, 1427 - BIT(emac->port_id) | DEFAULT_PORT_MASK, 1428 - BIT(emac->port_id) | DEFAULT_UNTAG_MASK, 1429 - true); 1430 - if (prueth->is_hsr_offload_mode) 1431 - icssg_vtbl_modify(emac, DEFAULT_VID, 1432 - DEFAULT_PORT_MASK, 1433 - DEFAULT_UNTAG_MASK, true); 1434 - icssg_set_pvid(prueth, emac->port_vlan, emac->port_id); 1435 - if (prueth->is_switch_mode) 1436 - icssg_set_port_state(emac, ICSSG_EMAC_PORT_VLAN_AWARE_ENABLE); 1437 - } 1438 - } 1372 + icssg_enable_fw_offload(prueth); 1439 1373 } 1440 1374 1441 1375 static int prueth_netdevice_port_link(struct net_device *ndev,
+1 -1
drivers/net/ethernet/wangxun/libwx/wx_vf_lib.c
··· 192 192 u8 i, j; 193 193 194 194 /* Fill out hash function seeds */ 195 - netdev_rss_key_fill(wx->rss_key, sizeof(wx->rss_key)); 195 + netdev_rss_key_fill(wx->rss_key, WX_RSS_KEY_SIZE); 196 196 for (i = 0; i < WX_RSS_KEY_SIZE / 4; i++) 197 197 wr32(wx, WX_VXRSSRK(i), wx->rss_key[i]); 198 198
+6 -2
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 1160 1160 struct axienet_local *lp = data; 1161 1161 struct sk_buff *skb; 1162 1162 u32 *app_metadata; 1163 + int i; 1163 1164 1164 1165 skbuf_dma = axienet_get_rx_desc(lp, lp->rx_ring_tail++); 1165 1166 skb = skbuf_dma->skb; ··· 1179 1178 u64_stats_add(&lp->rx_packets, 1); 1180 1179 u64_stats_add(&lp->rx_bytes, rx_len); 1181 1180 u64_stats_update_end(&lp->rx_stat_sync); 1182 - axienet_rx_submit_desc(lp->ndev); 1181 + 1182 + for (i = 0; i < CIRC_SPACE(lp->rx_ring_head, lp->rx_ring_tail, 1183 + RX_BUF_NUM_DEFAULT); i++) 1184 + axienet_rx_submit_desc(lp->ndev); 1183 1185 dma_async_issue_pending(lp->rx_chan); 1184 1186 } 1185 1187 ··· 1461 1457 if (!skbuf_dma) 1462 1458 return; 1463 1459 1464 - lp->rx_ring_head++; 1465 1460 skb = netdev_alloc_skb(ndev, lp->max_frm_size); 1466 1461 if (!skb) 1467 1462 return; ··· 1485 1482 skbuf_dma->desc = dma_rx_desc; 1486 1483 dma_rx_desc->callback_param = lp; 1487 1484 dma_rx_desc->callback_result = axienet_dma_rx_cb; 1485 + lp->rx_ring_head++; 1488 1486 dmaengine_submit(dma_rx_desc); 1489 1487 1490 1488 return;
+12
drivers/net/phy/mscc/mscc.h
··· 362 362 u16 mask; 363 363 }; 364 364 365 + struct vsc8531_skb_cb { 366 + u32 ns; 367 + }; 368 + 369 + #define VSC8531_SKB_CB(skb) \ 370 + ((struct vsc8531_skb_cb *)((skb)->cb)) 371 + 365 372 struct vsc8531_private { 366 373 int rate_magic; 367 374 u16 supp_led_modes; ··· 417 410 */ 418 411 struct mutex ts_lock; 419 412 struct mutex phc_lock; 413 + 414 + /* list of skbs that were received and need timestamp information but it 415 + * didn't received it yet 416 + */ 417 + struct sk_buff_head rx_skbs_list; 420 418 }; 421 419 422 420 /* Shared structure between the PHYs of the same package.
+12
drivers/net/phy/mscc/mscc_main.c
··· 2335 2335 return vsc85xx_dt_led_modes_get(phydev, default_mode); 2336 2336 } 2337 2337 2338 + static void vsc85xx_remove(struct phy_device *phydev) 2339 + { 2340 + struct vsc8531_private *priv = phydev->priv; 2341 + 2342 + skb_queue_purge(&priv->rx_skbs_list); 2343 + } 2344 + 2338 2345 /* Microsemi VSC85xx PHYs */ 2339 2346 static struct phy_driver vsc85xx_driver[] = { 2340 2347 { ··· 2596 2589 .config_intr = &vsc85xx_config_intr, 2597 2590 .suspend = &genphy_suspend, 2598 2591 .resume = &genphy_resume, 2592 + .remove = &vsc85xx_remove, 2599 2593 .probe = &vsc8574_probe, 2600 2594 .set_wol = &vsc85xx_wol_set, 2601 2595 .get_wol = &vsc85xx_wol_get, ··· 2622 2614 .config_intr = &vsc85xx_config_intr, 2623 2615 .suspend = &genphy_suspend, 2624 2616 .resume = &genphy_resume, 2617 + .remove = &vsc85xx_remove, 2625 2618 .probe = &vsc8574_probe, 2626 2619 .set_wol = &vsc85xx_wol_set, 2627 2620 .get_wol = &vsc85xx_wol_get, ··· 2648 2639 .config_intr = &vsc85xx_config_intr, 2649 2640 .suspend = &genphy_suspend, 2650 2641 .resume = &genphy_resume, 2642 + .remove = &vsc85xx_remove, 2651 2643 .probe = &vsc8584_probe, 2652 2644 .get_tunable = &vsc85xx_get_tunable, 2653 2645 .set_tunable = &vsc85xx_set_tunable, ··· 2672 2662 .config_intr = &vsc85xx_config_intr, 2673 2663 .suspend = &genphy_suspend, 2674 2664 .resume = &genphy_resume, 2665 + .remove = &vsc85xx_remove, 2675 2666 .probe = &vsc8584_probe, 2676 2667 .get_tunable = &vsc85xx_get_tunable, 2677 2668 .set_tunable = &vsc85xx_set_tunable, ··· 2696 2685 .config_intr = &vsc85xx_config_intr, 2697 2686 .suspend = &genphy_suspend, 2698 2687 .resume = &genphy_resume, 2688 + .remove = &vsc85xx_remove, 2699 2689 .probe = &vsc8584_probe, 2700 2690 .get_tunable = &vsc85xx_get_tunable, 2701 2691 .set_tunable = &vsc85xx_set_tunable,
+37 -12
drivers/net/phy/mscc/mscc_ptp.c
··· 1194 1194 { 1195 1195 struct vsc8531_private *vsc8531 = 1196 1196 container_of(mii_ts, struct vsc8531_private, mii_ts); 1197 - struct skb_shared_hwtstamps *shhwtstamps = NULL; 1198 1197 struct vsc85xx_ptphdr *ptphdr; 1199 - struct timespec64 ts; 1200 1198 unsigned long ns; 1201 1199 1202 1200 if (!vsc8531->ptp->configured) ··· 1204 1206 type == PTP_CLASS_NONE) 1205 1207 return false; 1206 1208 1207 - vsc85xx_gettime(&vsc8531->ptp->caps, &ts); 1208 - 1209 1209 ptphdr = get_ptp_header_rx(skb, vsc8531->ptp->rx_filter); 1210 1210 if (!ptphdr) 1211 1211 return false; 1212 1212 1213 - shhwtstamps = skb_hwtstamps(skb); 1214 - memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps)); 1215 - 1216 1213 ns = ntohl(ptphdr->rsrvd2); 1217 1214 1218 - /* nsec is in reserved field */ 1219 - if (ts.tv_nsec < ns) 1220 - ts.tv_sec--; 1215 + VSC8531_SKB_CB(skb)->ns = ns; 1216 + skb_queue_tail(&vsc8531->rx_skbs_list, skb); 1221 1217 1222 - shhwtstamps->hwtstamp = ktime_set(ts.tv_sec, ns); 1223 - netif_rx(skb); 1218 + ptp_schedule_worker(vsc8531->ptp->ptp_clock, 0); 1224 1219 1225 1220 return true; 1221 + } 1222 + 1223 + static long vsc85xx_do_aux_work(struct ptp_clock_info *info) 1224 + { 1225 + struct vsc85xx_ptp *ptp = container_of(info, struct vsc85xx_ptp, caps); 1226 + struct skb_shared_hwtstamps *shhwtstamps = NULL; 1227 + struct phy_device *phydev = ptp->phydev; 1228 + struct vsc8531_private *priv = phydev->priv; 1229 + struct sk_buff_head received; 1230 + struct sk_buff *rx_skb; 1231 + struct timespec64 ts; 1232 + unsigned long flags; 1233 + 1234 + __skb_queue_head_init(&received); 1235 + spin_lock_irqsave(&priv->rx_skbs_list.lock, flags); 1236 + skb_queue_splice_tail_init(&priv->rx_skbs_list, &received); 1237 + spin_unlock_irqrestore(&priv->rx_skbs_list.lock, flags); 1238 + 1239 + vsc85xx_gettime(info, &ts); 1240 + while ((rx_skb = __skb_dequeue(&received)) != NULL) { 1241 + shhwtstamps = skb_hwtstamps(rx_skb); 1242 + memset(shhwtstamps, 0, sizeof(struct 
skb_shared_hwtstamps)); 1243 + 1244 + if (ts.tv_nsec < VSC8531_SKB_CB(rx_skb)->ns) 1245 + ts.tv_sec--; 1246 + 1247 + shhwtstamps->hwtstamp = ktime_set(ts.tv_sec, 1248 + VSC8531_SKB_CB(rx_skb)->ns); 1249 + netif_rx(rx_skb); 1250 + } 1251 + 1252 + return -1; 1226 1253 } 1227 1254 1228 1255 static const struct ptp_clock_info vsc85xx_clk_caps = { ··· 1263 1240 .adjfine = &vsc85xx_adjfine, 1264 1241 .gettime64 = &vsc85xx_gettime, 1265 1242 .settime64 = &vsc85xx_settime, 1243 + .do_aux_work = &vsc85xx_do_aux_work, 1266 1244 }; 1267 1245 1268 1246 static struct vsc8531_private *vsc8584_base_priv(struct phy_device *phydev) ··· 1591 1567 1592 1568 mutex_init(&vsc8531->phc_lock); 1593 1569 mutex_init(&vsc8531->ts_lock); 1570 + skb_queue_head_init(&vsc8531->rx_skbs_list); 1594 1571 1595 1572 /* Retrieve the shared load/save GPIO. Request it as non exclusive as 1596 1573 * the same GPIO can be requested by all the PHYs of the same package.
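The mscc_ptp hunk above moves RX timestamp completion out of the interrupt path into a PTP auxiliary worker, but keeps the same seconds/nanoseconds reconciliation per packet: the PHY delivers only a 32-bit nanoseconds value in the PTP header, so if the PHC's nanoseconds have already wrapped past it, the seconds count belongs to the next second. A minimal userspace sketch of that correction (the helper name is hypothetical, not a kernel API):

```c
#include <stdint.h>

/* Rebuild a full timestamp from a PHC reading (now_sec/now_nsec) and
 * the 32-bit nanoseconds value the PHY stored in the PTP header: if
 * the clock's nanoseconds are already smaller than the captured ones,
 * the seconds counter rolled over after capture, so step it back. */
static int64_t reconstruct_seconds(int64_t now_sec, long now_nsec,
                                   uint32_t captured_nsec)
{
    if (now_nsec < (long)captured_nsec)
        now_sec--;
    return now_sec;
}
```

With now = 100 s + 500 ns and a captured value of 900 ns, the wrap must have happened after capture, so the timestamp is rebuilt as 99 s + 900 ns.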
+11 -6
drivers/net/ppp/ppp_generic.c
··· 33 33 #include <linux/ppp_channel.h> 34 34 #include <linux/ppp-comp.h> 35 35 #include <linux/skbuff.h> 36 + #include <linux/rculist.h> 36 37 #include <linux/rtnetlink.h> 37 38 #include <linux/if_arp.h> 38 39 #include <linux/ip.h> ··· 1599 1598 if (ppp->flags & SC_MULTILINK) 1600 1599 return -EOPNOTSUPP; 1601 1600 1602 - if (list_empty(&ppp->channels)) 1601 + pch = list_first_or_null_rcu(&ppp->channels, struct channel, clist); 1602 + if (!pch) 1603 1603 return -ENODEV; 1604 1604 1605 - pch = list_first_entry(&ppp->channels, struct channel, clist); 1606 - chan = pch->chan; 1605 + chan = READ_ONCE(pch->chan); 1606 + if (!chan) 1607 + return -ENODEV; 1608 + 1607 1609 if (!chan->ops->fill_forward_path) 1608 1610 return -EOPNOTSUPP; 1609 1611 ··· 2998 2994 */ 2999 2995 down_write(&pch->chan_sem); 3000 2996 spin_lock_bh(&pch->downl); 3001 - pch->chan = NULL; 2997 + WRITE_ONCE(pch->chan, NULL); 3002 2998 spin_unlock_bh(&pch->downl); 3003 2999 up_write(&pch->chan_sem); 3004 3000 ppp_disconnect_channel(pch); ··· 3519 3515 hdrlen = pch->file.hdrlen + 2; /* for protocol bytes */ 3520 3516 if (hdrlen > ppp->dev->hard_header_len) 3521 3517 ppp->dev->hard_header_len = hdrlen; 3522 - list_add_tail(&pch->clist, &ppp->channels); 3518 + list_add_tail_rcu(&pch->clist, &ppp->channels); 3523 3519 ++ppp->n_channels; 3524 3520 pch->ppp = ppp; 3525 3521 refcount_inc(&ppp->file.refcnt); ··· 3549 3545 if (ppp) { 3550 3546 /* remove it from the ppp unit's list */ 3551 3547 ppp_lock(ppp); 3552 - list_del(&pch->clist); 3548 + list_del_rcu(&pch->clist); 3553 3549 if (--ppp->n_channels == 0) 3554 3550 wake_up_interruptible(&ppp->file.rwait); 3555 3551 ppp_unlock(ppp); 3552 + synchronize_net(); 3556 3553 if (refcount_dec_and_test(&ppp->file.refcnt)) 3557 3554 ppp_destroy_interface(ppp); 3558 3555 err = 0;
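The ppp_generic hunk converts the ->channels list to RCU so ppp_fill_forward_path() can walk it without taking the channel lock; the lookup then has two independent NULL cases to handle. A plain-C sketch of just that control flow (RCU primitives elided, struct names simplified — the kernel code uses list_first_or_null_rcu() and READ_ONCE() so the reads are safe against concurrent unregistration):

```c
#include <stddef.h>

struct chan { int id; };
struct channel_node {
    struct chan *chan;            /* cleared while a channel is torn down */
    struct channel_node *next;
};

/* Both the list head and the per-entry chan pointer can legitimately
 * be NULL, and each must be tested before dereferencing; either case
 * maps to the -ENODEV path in the driver. */
static struct chan *first_active_chan(const struct channel_node *head)
{
    if (!head)
        return NULL;    /* no channel attached */
    return head->chan;  /* may also be NULL mid-disconnect */
}
```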
+48 -15
drivers/net/pse-pd/pd692x0.c
··· 1041 1041 int pw_budget; 1042 1042 1043 1043 pw_budget = regulator_get_unclaimed_power_budget(supply); 1044 + if (!pw_budget) 1045 + /* Do nothing if no power budget */ 1046 + continue; 1047 + 1044 1048 /* Max power budget per manager */ 1045 1049 if (pw_budget > 6000000) 1046 1050 pw_budget = 6000000; ··· 1166 1162 return 0; 1167 1163 } 1168 1164 1165 + static void pd692x0_of_put_managers(struct pd692x0_priv *priv, 1166 + struct pd692x0_manager *manager, 1167 + int nmanagers) 1168 + { 1169 + int i, j; 1170 + 1171 + for (i = 0; i < nmanagers; i++) { 1172 + for (j = 0; j < manager[i].nports; j++) 1173 + of_node_put(manager[i].port_node[j]); 1174 + of_node_put(manager[i].node); 1175 + } 1176 + } 1177 + 1178 + static void pd692x0_managers_free_pw_budget(struct pd692x0_priv *priv) 1179 + { 1180 + int i; 1181 + 1182 + for (i = 0; i < PD692X0_MAX_MANAGERS; i++) { 1183 + struct regulator *supply; 1184 + 1185 + if (!priv->manager_reg[i] || !priv->manager_pw_budget[i]) 1186 + continue; 1187 + 1188 + supply = priv->manager_reg[i]->supply; 1189 + if (!supply) 1190 + continue; 1191 + 1192 + regulator_free_power_budget(supply, 1193 + priv->manager_pw_budget[i]); 1194 + } 1195 + } 1196 + 1169 1197 static int pd692x0_setup_pi_matrix(struct pse_controller_dev *pcdev) 1170 1198 { 1171 1199 struct pd692x0_manager *manager __free(kfree) = NULL; 1172 1200 struct pd692x0_priv *priv = to_pd692x0_priv(pcdev); 1173 1201 struct pd692x0_matrix port_matrix[PD692X0_MAX_PIS]; 1174 - int ret, i, j, nmanagers; 1202 + int ret, nmanagers; 1175 1203 1176 1204 /* Should we flash the port matrix */ 1177 1205 if (priv->fw_state != PD692X0_FW_OK && ··· 1221 1185 nmanagers = ret; 1222 1186 ret = pd692x0_register_managers_regulator(priv, manager, nmanagers); 1223 1187 if (ret) 1224 - goto out; 1188 + goto err_of_managers; 1225 1189 1226 1190 ret = pd692x0_configure_managers(priv, nmanagers); 1227 1191 if (ret) 1228 - goto out; 1192 + goto err_of_managers; 1229 1193 1230 1194 ret = 
pd692x0_set_ports_matrix(priv, manager, nmanagers, port_matrix); 1231 1195 if (ret) 1232 - goto out; 1196 + goto err_managers_req_pw; 1233 1197 1234 1198 ret = pd692x0_write_ports_matrix(priv, port_matrix); 1235 1199 if (ret) 1236 - goto out; 1200 + goto err_managers_req_pw; 1237 1201 1238 - out: 1239 - for (i = 0; i < nmanagers; i++) { 1240 - struct regulator *supply = priv->manager_reg[i]->supply; 1202 + pd692x0_of_put_managers(priv, manager, nmanagers); 1203 + return 0; 1241 1204 1242 - regulator_free_power_budget(supply, 1243 - priv->manager_pw_budget[i]); 1244 - 1245 - for (j = 0; j < manager[i].nports; j++) 1246 - of_node_put(manager[i].port_node[j]); 1247 - of_node_put(manager[i].node); 1248 - } 1205 + err_managers_req_pw: 1206 + pd692x0_managers_free_pw_budget(priv); 1207 + err_of_managers: 1208 + pd692x0_of_put_managers(priv, manager, nmanagers); 1249 1209 return ret; 1250 1210 } 1251 1211 ··· 1780 1748 { 1781 1749 struct pd692x0_priv *priv = i2c_get_clientdata(client); 1782 1750 1751 + pd692x0_managers_free_pw_budget(priv); 1783 1752 firmware_upload_unregister(priv->fwl); 1784 1753 } 1785 1754
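The pd692x0 change splits the old combined cleanup into pd692x0_of_put_managers() and pd692x0_managers_free_pw_budget(), and skips managers with no unclaimed budget so nothing is tracked or later freed for them. The budget selection itself reduces to a small clamp, sketched here with a hypothetical helper:

```c
/* Per-manager power budget as in the fixed loop: a zero unclaimed
 * budget is skipped outright, and anything above the 6 W per-manager
 * ceiling (6,000,000 uW) is clamped to it. */
static int manager_pw_budget_uw(int unclaimed_uw)
{
    if (!unclaimed_uw)
        return 0;               /* nothing to claim for this manager */
    if (unclaimed_uw > 6000000) /* max power budget per manager */
        return 6000000;
    return unclaimed_uw;
}
```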
+1 -1
drivers/net/usb/asix_devices.c
··· 676 676 priv->mdio->read = &asix_mdio_bus_read; 677 677 priv->mdio->write = &asix_mdio_bus_write; 678 678 priv->mdio->name = "Asix MDIO Bus"; 679 - priv->mdio->phy_mask = ~(BIT(priv->phy_addr) | BIT(AX_EMBD_PHY_ADDR)); 679 + priv->mdio->phy_mask = ~(BIT(priv->phy_addr & 0x1f) | BIT(AX_EMBD_PHY_ADDR)); 680 680 /* mii bus name is usb-<usb bus number>-<usb device number> */ 681 681 snprintf(priv->mdio->id, MII_BUS_ID_SIZE, "usb-%03d:%03d", 682 682 dev->udev->bus->busnum, dev->udev->devnum);
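The asix one-liner guards the shift used to build the MDIO scan mask: phy_addr is read from the device, and a bogus value of 32 or more would shift BIT() past the width of the 32-bit mask, which is undefined behaviour. A sketch of the masked computation (the embedded PHY address value below is assumed for illustration):

```c
#include <stdint.h>

#define AX_EMBD_PHY_ADDR 0x10  /* assumed embedded-PHY address */

/* Masking the address to 5 bits keeps the shift in range even if the
 * device reports garbage; the two cleared bits select which PHY
 * addresses the MDIO bus will probe. */
static uint32_t asix_phy_mask(unsigned int phy_addr)
{
    return ~((UINT32_C(1) << (phy_addr & 0x1f)) |
             (UINT32_C(1) << AX_EMBD_PHY_ADDR));
}
```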
+7
drivers/net/usb/cdc_ncm.c
··· 2087 2087 .driver_info = (unsigned long)&wwan_info, 2088 2088 }, 2089 2089 2090 + /* Intel modem (label from OEM reads Fibocom L850-GL) */ 2091 + { USB_DEVICE_AND_INTERFACE_INFO(0x8087, 0x095a, 2092 + USB_CLASS_COMM, 2093 + USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE), 2094 + .driver_info = (unsigned long)&wwan_info, 2095 + }, 2096 + 2090 2097 /* DisplayLink docking stations */ 2091 2098 { .match_flags = USB_DEVICE_ID_MATCH_INT_INFO 2092 2099 | USB_DEVICE_ID_MATCH_VENDOR,
+1 -1
drivers/pci/controller/pcie-xilinx.c
··· 400 400 if (val & XILINX_PCIE_RPIFR1_MSI_INTR) { 401 401 val = pcie_read(pcie, XILINX_PCIE_REG_RPIFR2) & 402 402 XILINX_PCIE_RPIFR2_MSG_DATA; 403 - domain = pcie->msi_domain->parent; 403 + domain = pcie->msi_domain; 404 404 } else { 405 405 val = (val & XILINX_PCIE_RPIFR1_INTR_MASK) >> 406 406 XILINX_PCIE_RPIFR1_INTR_SHIFT;
-3
drivers/pci/controller/vmd.c
··· 306 306 struct irq_domain *real_parent, 307 307 struct msi_domain_info *info) 308 308 { 309 - if (WARN_ON_ONCE(info->bus_token != DOMAIN_BUS_PCI_DEVICE_MSIX)) 310 - return false; 311 - 312 309 if (!msi_lib_init_dev_msi_info(dev, domain, real_parent, info)) 313 310 return false; 314 311
+1 -1
drivers/platform/x86/amd/hsmp/acpi.c
··· 504 504 505 505 dev_set_drvdata(dev, &hsmp_pdev->sock[sock_ind]); 506 506 507 - return ret; 507 + return 0; 508 508 } 509 509 510 510 static const struct bin_attribute hsmp_metric_tbl_attr = {
+5
drivers/platform/x86/amd/hsmp/hsmp.c
··· 356 356 if (!sock || !buf) 357 357 return -EINVAL; 358 358 359 + if (!sock->metric_tbl_addr) { 360 + dev_err(sock->dev, "Metrics table address not available\n"); 361 + return -ENOMEM; 362 + } 363 + 359 364 /* Do not support lseek(), also don't allow more than the size of metric table */ 360 365 if (size != sizeof(struct hsmp_metric_table)) { 361 366 dev_err(sock->dev, "Wrong buffer size\n");
+34 -20
drivers/platform/x86/amd/pmc/pmc-quirks.c
··· 28 28 .spurious_8042 = true, 29 29 }; 30 30 31 + static struct quirk_entry quirk_s2idle_spurious_8042 = { 32 + .s2idle_bug_mmio = FCH_PM_BASE + FCH_PM_SCRATCH, 33 + .spurious_8042 = true, 34 + }; 35 + 31 36 static const struct dmi_system_id fwbug_list[] = { 32 37 { 33 38 .ident = "L14 Gen2 AMD", 34 - .driver_data = &quirk_s2idle_bug, 39 + .driver_data = &quirk_s2idle_spurious_8042, 35 40 .matches = { 36 41 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 37 42 DMI_MATCH(DMI_PRODUCT_NAME, "20X5"), ··· 44 39 }, 45 40 { 46 41 .ident = "T14s Gen2 AMD", 47 - .driver_data = &quirk_s2idle_bug, 42 + .driver_data = &quirk_s2idle_spurious_8042, 48 43 .matches = { 49 44 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 50 45 DMI_MATCH(DMI_PRODUCT_NAME, "20XF"), ··· 52 47 }, 53 48 { 54 49 .ident = "X13 Gen2 AMD", 55 - .driver_data = &quirk_s2idle_bug, 50 + .driver_data = &quirk_s2idle_spurious_8042, 56 51 .matches = { 57 52 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 58 53 DMI_MATCH(DMI_PRODUCT_NAME, "20XH"), ··· 60 55 }, 61 56 { 62 57 .ident = "T14 Gen2 AMD", 63 - .driver_data = &quirk_s2idle_bug, 58 + .driver_data = &quirk_s2idle_spurious_8042, 64 59 .matches = { 65 60 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 66 61 DMI_MATCH(DMI_PRODUCT_NAME, "20XK"), ··· 68 63 }, 69 64 { 70 65 .ident = "T14 Gen1 AMD", 71 - .driver_data = &quirk_s2idle_bug, 66 + .driver_data = &quirk_s2idle_spurious_8042, 72 67 .matches = { 73 68 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 74 69 DMI_MATCH(DMI_PRODUCT_NAME, "20UD"), ··· 76 71 }, 77 72 { 78 73 .ident = "T14 Gen1 AMD", 79 - .driver_data = &quirk_s2idle_bug, 74 + .driver_data = &quirk_s2idle_spurious_8042, 80 75 .matches = { 81 76 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 82 77 DMI_MATCH(DMI_PRODUCT_NAME, "20UE"), ··· 84 79 }, 85 80 { 86 81 .ident = "T14s Gen1 AMD", 87 - .driver_data = &quirk_s2idle_bug, 82 + .driver_data = &quirk_s2idle_spurious_8042, 88 83 .matches = { 89 84 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 90 85 DMI_MATCH(DMI_PRODUCT_NAME, "20UH"), ··· 92 87 }, 93 88 
{ 94 89 .ident = "T14s Gen1 AMD", 95 - .driver_data = &quirk_s2idle_bug, 90 + .driver_data = &quirk_s2idle_spurious_8042, 96 91 .matches = { 97 92 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 98 93 DMI_MATCH(DMI_PRODUCT_NAME, "20UJ"), ··· 100 95 }, 101 96 { 102 97 .ident = "P14s Gen1 AMD", 103 - .driver_data = &quirk_s2idle_bug, 98 + .driver_data = &quirk_s2idle_spurious_8042, 104 99 .matches = { 105 100 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 106 101 DMI_MATCH(DMI_PRODUCT_NAME, "20Y1"), ··· 108 103 }, 109 104 { 110 105 .ident = "P14s Gen2 AMD", 111 - .driver_data = &quirk_s2idle_bug, 106 + .driver_data = &quirk_s2idle_spurious_8042, 112 107 .matches = { 113 108 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 114 109 DMI_MATCH(DMI_PRODUCT_NAME, "21A0"), ··· 116 111 }, 117 112 { 118 113 .ident = "P14s Gen2 AMD", 119 - .driver_data = &quirk_s2idle_bug, 114 + .driver_data = &quirk_s2idle_spurious_8042, 120 115 .matches = { 121 116 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 122 117 DMI_MATCH(DMI_PRODUCT_NAME, "21A1"), ··· 157 152 }, 158 153 { 159 154 .ident = "IdeaPad 1 14AMN7", 160 - .driver_data = &quirk_s2idle_bug, 155 + .driver_data = &quirk_s2idle_spurious_8042, 161 156 .matches = { 162 157 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 163 158 DMI_MATCH(DMI_PRODUCT_NAME, "82VF"), ··· 165 160 }, 166 161 { 167 162 .ident = "IdeaPad 1 15AMN7", 168 - .driver_data = &quirk_s2idle_bug, 163 + .driver_data = &quirk_s2idle_spurious_8042, 169 164 .matches = { 170 165 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 171 166 DMI_MATCH(DMI_PRODUCT_NAME, "82VG"), ··· 173 168 }, 174 169 { 175 170 .ident = "IdeaPad 1 15AMN7", 176 - .driver_data = &quirk_s2idle_bug, 171 + .driver_data = &quirk_s2idle_spurious_8042, 177 172 .matches = { 178 173 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 179 174 DMI_MATCH(DMI_PRODUCT_NAME, "82X5"), ··· 181 176 }, 182 177 { 183 178 .ident = "IdeaPad Slim 3 14AMN8", 184 - .driver_data = &quirk_s2idle_bug, 179 + .driver_data = &quirk_s2idle_spurious_8042, 185 180 .matches = { 186 181 
DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 187 182 DMI_MATCH(DMI_PRODUCT_NAME, "82XN"), ··· 189 184 }, 190 185 { 191 186 .ident = "IdeaPad Slim 3 15AMN8", 192 - .driver_data = &quirk_s2idle_bug, 187 + .driver_data = &quirk_s2idle_spurious_8042, 193 188 .matches = { 194 189 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 195 190 DMI_MATCH(DMI_PRODUCT_NAME, "82XQ"), ··· 198 193 /* https://gitlab.freedesktop.org/drm/amd/-/issues/4434 */ 199 194 { 200 195 .ident = "Lenovo Yoga 6 13ALC6", 201 - .driver_data = &quirk_s2idle_bug, 196 + .driver_data = &quirk_s2idle_spurious_8042, 202 197 .matches = { 203 198 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 204 199 DMI_MATCH(DMI_PRODUCT_NAME, "82ND"), ··· 207 202 /* https://gitlab.freedesktop.org/drm/amd/-/issues/2684 */ 208 203 { 209 204 .ident = "HP Laptop 15s-eq2xxx", 210 - .driver_data = &quirk_s2idle_bug, 205 + .driver_data = &quirk_s2idle_spurious_8042, 211 206 .matches = { 212 207 DMI_MATCH(DMI_SYS_VENDOR, "HP"), 213 208 DMI_MATCH(DMI_PRODUCT_NAME, "HP Laptop 15s-eq2xxx"), ··· 290 285 { 291 286 const struct dmi_system_id *dmi_id; 292 287 288 + /* 289 + * IRQ1 may cause an interrupt during resume even without a keyboard 290 + * press. 291 + * 292 + * Affects Renoir, Cezanne and Barcelo SoCs 293 + * 294 + * A solution is available in PMFW 64.66.0, but it must be activated by 295 + * SBIOS. If SBIOS is known to have the fix a quirk can be added for 296 + * a given system to avoid workaround. 297 + */ 293 298 if (dev->cpu_id == AMD_CPU_ID_CZN) 294 299 dev->disable_8042_wakeup = true; 295 300 ··· 310 295 if (dev->quirks->s2idle_bug_mmio) 311 296 pr_info("Using s2idle quirk to avoid %s platform firmware bug\n", 312 297 dmi_id->ident); 313 - if (dev->quirks->spurious_8042) 314 - dev->disable_8042_wakeup = true; 298 + dev->disable_8042_wakeup = dev->quirks->spurious_8042; 315 299 }
-13
drivers/platform/x86/amd/pmc/pmc.c
··· 530 530 static int amd_pmc_wa_irq1(struct amd_pmc_dev *pdev) 531 531 { 532 532 struct device *d; 533 - int rc; 534 - 535 - /* cezanne platform firmware has a fix in 64.66.0 */ 536 - if (pdev->cpu_id == AMD_CPU_ID_CZN) { 537 - if (!pdev->major) { 538 - rc = amd_pmc_get_smu_version(pdev); 539 - if (rc) 540 - return rc; 541 - } 542 - 543 - if (pdev->major > 64 || (pdev->major == 64 && pdev->minor > 65)) 544 - return 0; 545 - } 546 533 547 534 d = bus_find_device_by_name(&serio_bus, NULL, "serio0"); 548 535 if (!d)
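The pmc.c hunk deletes the Cezanne firmware-version gate because the quirk table now decides per machine whether the IRQ1 workaround applies. For reference, the removed gate reduced to this predicate (PMFW 64.66.0 and later contain the fix, so the workaround was skipped for them):

```c
/* True when Cezanne platform firmware already fixes the spurious IRQ1
 * on resume: major newer than 64, or 64 with minor newer than 65. */
static int czn_fw_has_irq1_fix(unsigned int major, unsigned int minor)
{
    return major > 64 || (major == 64 && minor > 65);
}
```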
+9 -10
drivers/platform/x86/dell/dell-smbios-base.c
··· 39 39 struct smbios_device { 40 40 struct list_head list; 41 41 struct device *device; 42 + int priority; 42 43 int (*call_fn)(struct calling_interface_buffer *arg); 43 44 }; 44 45 ··· 146 145 } 147 146 EXPORT_SYMBOL_GPL(dell_smbios_error); 148 147 149 - int dell_smbios_register_device(struct device *d, void *call_fn) 148 + int dell_smbios_register_device(struct device *d, int priority, void *call_fn) 150 149 { 151 150 struct smbios_device *priv; 152 151 ··· 155 154 return -ENOMEM; 156 155 get_device(d); 157 156 priv->device = d; 157 + priv->priority = priority; 158 158 priv->call_fn = call_fn; 159 159 mutex_lock(&smbios_mutex); 160 160 list_add_tail(&priv->list, &smbios_device_list); ··· 294 292 295 293 int dell_smbios_call(struct calling_interface_buffer *buffer) 296 294 { 297 - int (*call_fn)(struct calling_interface_buffer *) = NULL; 298 - struct device *selected_dev = NULL; 295 + struct smbios_device *selected = NULL; 299 296 struct smbios_device *priv; 300 297 int ret; 301 298 302 299 mutex_lock(&smbios_mutex); 303 300 list_for_each_entry(priv, &smbios_device_list, list) { 304 - if (!selected_dev || priv->device->id >= selected_dev->id) { 305 - dev_dbg(priv->device, "Trying device ID: %d\n", 306 - priv->device->id); 307 - call_fn = priv->call_fn; 308 - selected_dev = priv->device; 301 + if (!selected || priv->priority >= selected->priority) { 302 + dev_dbg(priv->device, "Trying device ID: %d\n", priv->priority); 303 + selected = priv; 309 304 } 310 305 } 311 306 312 - if (!selected_dev) { 307 + if (!selected) { 313 308 ret = -ENODEV; 314 309 pr_err("No dell-smbios drivers are loaded\n"); 315 310 goto out_smbios_call; 316 311 } 317 312 318 - ret = call_fn(buffer); 313 + ret = selected->call_fn(buffer); 319 314 320 315 out_smbios_call: 321 316 mutex_unlock(&smbios_mutex);
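The dell-smbios rework replaces the abuse of device->id with an explicit priority field passed at registration time. The selection loop in dell_smbios_call() then becomes a straightforward maximum scan, sketched here with simplified types:

```c
#include <stddef.h>

struct backend { int priority; const char *name; };

/* Scan all registered backends and keep the one with the highest
 * explicit priority; the >= means a later registration wins a tie,
 * so the WMI backend (priority 1) beats SMM (priority 0). */
static const struct backend *select_backend(const struct backend *b,
                                            size_t n)
{
    const struct backend *best = NULL;

    for (size_t i = 0; i < n; i++)
        if (!best || b[i].priority >= best->priority)
            best = &b[i];
    return best;      /* NULL when no backend is loaded: -ENODEV */
}
```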
+1 -2
drivers/platform/x86/dell/dell-smbios-smm.c
··· 125 125 if (ret) 126 126 goto fail_platform_device_add; 127 127 128 - ret = dell_smbios_register_device(&platform_device->dev, 129 - &dell_smbios_smm_call); 128 + ret = dell_smbios_register_device(&platform_device->dev, 0, &dell_smbios_smm_call); 130 129 if (ret) 131 130 goto fail_register; 132 131
+1 -3
drivers/platform/x86/dell/dell-smbios-wmi.c
··· 264 264 if (ret) 265 265 return ret; 266 266 267 - /* ID is used by dell-smbios to set priority of drivers */ 268 - wdev->dev.id = 1; 269 - ret = dell_smbios_register_device(&wdev->dev, &dell_smbios_wmi_call); 267 + ret = dell_smbios_register_device(&wdev->dev, 1, &dell_smbios_wmi_call); 270 268 if (ret) 271 269 return ret; 272 270
+1 -1
drivers/platform/x86/dell/dell-smbios.h
··· 64 64 struct calling_interface_token tokens[]; 65 65 } __packed; 66 66 67 - int dell_smbios_register_device(struct device *d, void *call_fn); 67 + int dell_smbios_register_device(struct device *d, int priority, void *call_fn); 68 68 void dell_smbios_unregister_device(struct device *d); 69 69 70 70 int dell_smbios_error(int value);
+2 -2
drivers/platform/x86/hp/hp-wmi.c
··· 92 92 "8A25" 93 93 }; 94 94 95 - /* DMI Board names of Victus 16-s1000 laptops */ 95 + /* DMI Board names of Victus 16-r1000 and Victus 16-s1000 laptops */ 96 96 static const char * const victus_s_thermal_profile_boards[] = { 97 - "8C9C" 97 + "8C99", "8C9C" 98 98 }; 99 99 100 100 enum hp_wmi_radio {
+5
drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
··· 192 192 static int write_eff_lat_ctrl(struct uncore_data *data, unsigned int val, enum uncore_index index) 193 193 { 194 194 struct tpmi_uncore_cluster_info *cluster_info; 195 + struct tpmi_uncore_struct *uncore_root; 195 196 u64 control; 196 197 197 198 cluster_info = container_of(data, struct tpmi_uncore_cluster_info, uncore_data); 199 + uncore_root = cluster_info->uncore_root; 200 + 201 + if (uncore_root->write_blocked) 202 + return -EPERM; 198 203 199 204 if (cluster_info->root_domain) 200 205 return -ENODATA;
+5 -8
drivers/regulator/pca9450-regulator.c
··· 40 40 struct device *dev; 41 41 struct regmap *regmap; 42 42 struct gpio_desc *sd_vsel_gpio; 43 - struct notifier_block restart_nb; 44 43 enum pca9450_chip_type type; 45 44 unsigned int rcnt; 46 45 int irq; ··· 1099 1100 return IRQ_HANDLED; 1100 1101 } 1101 1102 1102 - static int pca9450_i2c_restart_handler(struct notifier_block *nb, 1103 - unsigned long action, void *data) 1103 + static int pca9450_i2c_restart_handler(struct sys_off_data *data) 1104 1104 { 1105 - struct pca9450 *pca9450 = container_of(nb, struct pca9450, restart_nb); 1105 + struct pca9450 *pca9450 = data->cb_data; 1106 1106 struct i2c_client *i2c = container_of(pca9450->dev, struct i2c_client, dev); 1107 1107 1108 1108 dev_dbg(&i2c->dev, "Restarting device..\n"); ··· 1259 1261 pca9450->sd_vsel_fixed_low = 1260 1262 of_property_read_bool(ldo5->dev.of_node, "nxp,sd-vsel-fixed-low"); 1261 1263 1262 - pca9450->restart_nb.notifier_call = pca9450_i2c_restart_handler; 1263 - pca9450->restart_nb.priority = PCA9450_RESTART_HANDLER_PRIORITY; 1264 - 1265 - if (register_restart_handler(&pca9450->restart_nb)) 1264 + if (devm_register_sys_off_handler(&i2c->dev, SYS_OFF_MODE_RESTART, 1265 + PCA9450_RESTART_HANDLER_PRIORITY, 1266 + pca9450_i2c_restart_handler, pca9450)) 1266 1267 dev_warn(&i2c->dev, "Failed to register restart handler\n"); 1267 1268 1268 1269 dev_info(&i2c->dev, "%s probed.\n",
+6 -6
drivers/regulator/tps65219-regulator.c
··· 454 454 irq_type->irq_name, 455 455 irq_data); 456 456 if (error) 457 - return dev_err_probe(tps->dev, PTR_ERR(rdev), 458 - "Failed to request %s IRQ %d: %d\n", 459 - irq_type->irq_name, irq, error); 457 + return dev_err_probe(tps->dev, error, 458 + "Failed to request %s IRQ %d\n", 459 + irq_type->irq_name, irq); 460 460 } 461 461 462 462 for (i = 0; i < pmic->dev_irq_size; ++i) { ··· 477 477 irq_type->irq_name, 478 478 irq_data); 479 479 if (error) 480 - return dev_err_probe(tps->dev, PTR_ERR(rdev), 481 - "Failed to request %s IRQ %d: %d\n", 482 - irq_type->irq_name, irq, error); 480 + return dev_err_probe(tps->dev, error, 481 + "Failed to request %s IRQ %d\n", 482 + irq_type->irq_name, irq); 483 483 } 484 484 485 485 return 0;
+9 -2
drivers/s390/char/sclp.c
··· 77 77 /* The currently active SCLP command word. */ 78 78 static sclp_cmdw_t active_cmd; 79 79 80 + static inline struct sccb_header *sclpint_to_sccb(u32 sccb_int) 81 + { 82 + if (sccb_int) 83 + return __va(sccb_int); 84 + return NULL; 85 + } 86 + 80 87 static inline void sclp_trace(int prio, char *id, u32 a, u64 b, bool err) 81 88 { 82 89 struct sclp_trace_entry e; ··· 627 620 628 621 static bool ok_response(u32 sccb_int, sclp_cmdw_t cmd) 629 622 { 630 - struct sccb_header *sccb = (struct sccb_header *)__va(sccb_int); 623 + struct sccb_header *sccb = sclpint_to_sccb(sccb_int); 631 624 struct evbuf_header *evbuf; 632 625 u16 response; 633 626 ··· 666 659 667 660 /* INT: Interrupt received (a=intparm, b=cmd) */ 668 661 sclp_trace_sccb(0, "INT", param32, active_cmd, active_cmd, 669 - (struct sccb_header *)__va(finished_sccb), 662 + sclpint_to_sccb(finished_sccb), 670 663 !ok_response(finished_sccb, active_cmd)); 671 664 672 665 if (finished_sccb) {
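The sclp.c helper exists because __va(0) is not NULL: it yields a valid pointer to the start of the kernel's direct mapping, so a zero SCCB address must be filtered out before conversion. A userspace sketch with a plain cast standing in for __va():

```c
#include <stdint.h>
#include <stddef.h>

/* Convert an SCCB address from an interrupt parameter to a pointer,
 * mapping the "no SCCB" encoding (zero) to NULL instead of a bogus
 * pointer to address zero of the direct mapping. */
static void *sccb_int_to_ptr(uint32_t sccb_int)
{
    if (sccb_int)
        return (void *)(uintptr_t)sccb_int;
    return NULL;
}
```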
-2
drivers/scsi/fnic/fnic.h
··· 323 323 FNIC_IN_ETH_TRANS_FC_MODE, 324 324 }; 325 325 326 - struct mempool; 327 - 328 326 enum fnic_role_e { 329 327 FNIC_ROLE_FCP_INITIATOR = 0, 330 328 };
+2
drivers/scsi/qla4xxx/ql4_os.c
··· 6606 6606 6607 6607 ep = qla4xxx_ep_connect(ha->host, (struct sockaddr *)dst_addr, 0); 6608 6608 vfree(dst_addr); 6609 + if (IS_ERR(ep)) 6610 + return NULL; 6609 6611 return ep; 6610 6612 } 6611 6613
+3 -5
drivers/spi/spi-fsl-lpspi.c
··· 330 330 } 331 331 332 332 if (config.speed_hz > perclk_rate / 2) { 333 - dev_err(fsl_lpspi->dev, 334 - "per-clk should be at least two times of transfer speed"); 335 - return -EINVAL; 333 + div = 2; 334 + } else { 335 + div = DIV_ROUND_UP(perclk_rate, config.speed_hz); 336 336 } 337 - 338 - div = DIV_ROUND_UP(perclk_rate, config.speed_hz); 339 337 340 338 for (prescale = 0; prescale <= prescale_max; prescale++) { 341 339 scldiv = div / (1 << prescale) - 2;
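The spi-fsl-lpspi change stops rejecting transfer speeds above half the peripheral clock with -EINVAL and clamps the divider instead. The resulting divider selection is a two-branch computation, sketched here outside the driver:

```c
/* Pick the base clock divider: rates above perclk/2 are clamped to
 * the minimum hardware divider of 2; slower rates round the divider
 * up so the actual SCK never exceeds the requested speed. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static unsigned int lpspi_base_div(unsigned int perclk_hz,
                                   unsigned int speed_hz)
{
    if (speed_hz > perclk_hz / 2)
        return 2;
    return DIV_ROUND_UP(perclk_hz, speed_hz);
}
```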
+4
drivers/spi/spi-mem.c
··· 265 265 */ 266 266 bool spi_mem_supports_op(struct spi_mem *mem, const struct spi_mem_op *op) 267 267 { 268 + /* Make sure the operation frequency is correct before going further */ 269 + spi_mem_adjust_op_freq(mem, (struct spi_mem_op *)op); 270 + 268 271 if (spi_mem_check_op(op)) 269 272 return false; 270 273 ··· 580 577 * spi_mem_calc_op_duration() - Derives the theoretical length (in ns) of an 581 578 * operation. This helps finding the best variant 582 579 * among a list of possible choices. 580 + * @mem: the SPI memory 583 581 * @op: the operation to benchmark 584 582 * 585 583 * Some chips have per-op frequency limitations, PCBs usually have their own

+15 -7
drivers/spi/spi-qpic-snand.c
··· 210 210 struct qcom_nand_controller *snandc = nand_to_qcom_snand(nand); 211 211 struct qpic_ecc *qecc = snandc->qspi->ecc; 212 212 213 - if (section > 1) 214 - return -ERANGE; 213 + switch (section) { 214 + case 0: 215 + oobregion->offset = 0; 216 + oobregion->length = qecc->bytes * (qecc->steps - 1) + 217 + qecc->bbm_size; 218 + return 0; 219 + case 1: 220 + oobregion->offset = qecc->bytes * (qecc->steps - 1) + 221 + qecc->bbm_size + 222 + qecc->steps * 4; 223 + oobregion->length = mtd->oobsize - oobregion->offset; 224 + return 0; 225 + } 215 226 216 - oobregion->length = qecc->ecc_bytes_hw + qecc->spare_bytes; 217 - oobregion->offset = mtd->oobsize - oobregion->length; 218 - 219 - return 0; 227 + return -ERANGE; 220 228 } 221 229 222 230 static int qcom_spi_ooblayout_free(struct mtd_info *mtd, int section, ··· 1204 1196 u32 cfg0, cfg1, ecc_bch_cfg, ecc_buf_cfg; 1205 1197 1206 1198 cfg0 = (ecc_cfg->cfg0 & ~CW_PER_PAGE_MASK) | 1207 - FIELD_PREP(CW_PER_PAGE_MASK, num_cw - 1); 1199 + FIELD_PREP(CW_PER_PAGE_MASK, 0); 1208 1200 cfg1 = ecc_cfg->cfg1; 1209 1201 ecc_bch_cfg = ecc_cfg->ecc_bch_cfg; 1210 1202 ecc_buf_cfg = ecc_cfg->ecc_buf_cfg;
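The qpic-snand hunk redefines the ECC OOB layout as two explicit sections rather than one region anchored at the end of the OOB area. The offsets are pure arithmetic over the ECC geometry; a sketch with the qpic_ecc fields flattened into parameters (the geometry values in the example are illustrative, not taken from a real chip):

```c
struct oob_region { int offset; int length; };

/* Section 0 covers the ECC bytes of all but the last codeword plus
 * the bad-block-marker bytes; section 1 starts after a further 4 user
 * bytes per ECC step and runs to the end of the OOB area. Any other
 * section index is out of range. */
static int ecc_oob_region(int section, int oobsize, int ecc_bytes,
                          int steps, int bbm_size, struct oob_region *r)
{
    switch (section) {
    case 0:
        r->offset = 0;
        r->length = ecc_bytes * (steps - 1) + bbm_size;
        return 0;
    case 1:
        r->offset = ecc_bytes * (steps - 1) + bbm_size + steps * 4;
        r->length = oobsize - r->offset;
        return 0;
    }
    return -34; /* stands in for -ERANGE */
}
```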
+5 -5
drivers/spi/spi-st-ssc4.c
··· 378 378 pinctrl_pm_select_sleep_state(&pdev->dev); 379 379 } 380 380 381 - static int __maybe_unused spi_st_runtime_suspend(struct device *dev) 381 + static int spi_st_runtime_suspend(struct device *dev) 382 382 { 383 383 struct spi_controller *host = dev_get_drvdata(dev); 384 384 struct spi_st *spi_st = spi_controller_get_devdata(host); ··· 391 391 return 0; 392 392 } 393 393 394 - static int __maybe_unused spi_st_runtime_resume(struct device *dev) 394 + static int spi_st_runtime_resume(struct device *dev) 395 395 { 396 396 struct spi_controller *host = dev_get_drvdata(dev); 397 397 struct spi_st *spi_st = spi_controller_get_devdata(host); ··· 428 428 } 429 429 430 430 static const struct dev_pm_ops spi_st_pm = { 431 - SET_SYSTEM_SLEEP_PM_OPS(spi_st_suspend, spi_st_resume) 432 - SET_RUNTIME_PM_OPS(spi_st_runtime_suspend, spi_st_runtime_resume, NULL) 431 + SYSTEM_SLEEP_PM_OPS(spi_st_suspend, spi_st_resume) 432 + RUNTIME_PM_OPS(spi_st_runtime_suspend, spi_st_runtime_resume, NULL) 433 433 }; 434 434 435 435 static const struct of_device_id stm_spi_match[] = { ··· 441 441 static struct platform_driver spi_st_driver = { 442 442 .driver = { 443 443 .name = "spi-st", 444 - .pm = pm_sleep_ptr(&spi_st_pm), 444 + .pm = pm_ptr(&spi_st_pm), 445 445 .of_match_table = of_match_ptr(stm_spi_match), 446 446 }, 447 447 .probe = spi_st_probe,
+4 -4
drivers/tty/serial/8250/8250_rsa.c
··· 147 147 if (up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16) 148 148 serial_out(up, UART_RSA_FRR, 0); 149 149 } 150 - EXPORT_SYMBOL_GPL_FOR_MODULES(rsa_enable, "8250_base"); 150 + EXPORT_SYMBOL_FOR_MODULES(rsa_enable, "8250_base"); 151 151 152 152 /* 153 153 * Attempts to turn off the RSA FIFO and resets the RSA board back to 115kbps compat mode. It is ··· 179 179 up->port.uartclk = SERIAL_RSA_BAUD_BASE_LO * 16; 180 180 uart_port_unlock_irq(&up->port); 181 181 } 182 - EXPORT_SYMBOL_GPL_FOR_MODULES(rsa_disable, "8250_base"); 182 + EXPORT_SYMBOL_FOR_MODULES(rsa_disable, "8250_base"); 183 183 184 184 void rsa_autoconfig(struct uart_8250_port *up) 185 185 { ··· 192 192 if (__rsa_enable(up)) 193 193 up->port.type = PORT_RSA; 194 194 } 195 - EXPORT_SYMBOL_GPL_FOR_MODULES(rsa_autoconfig, "8250_base"); 195 + EXPORT_SYMBOL_FOR_MODULES(rsa_autoconfig, "8250_base"); 196 196 197 197 void rsa_reset(struct uart_8250_port *up) 198 198 { ··· 201 201 202 202 serial_out(up, UART_RSA_FRR, 0); 203 203 } 204 - EXPORT_SYMBOL_GPL_FOR_MODULES(rsa_reset, "8250_base"); 204 + EXPORT_SYMBOL_FOR_MODULES(rsa_reset, "8250_base"); 205 205 206 206 #ifdef CONFIG_SERIAL_8250_DEPRECATED_OPTIONS 207 207 #ifndef MODULE
+44 -32
drivers/ufs/core/ufshcd.c
··· 1303 1303 * 1304 1304 * Return: 0 upon success; -EBUSY upon timeout. 1305 1305 */ 1306 - static int ufshcd_wait_for_doorbell_clr(struct ufs_hba *hba, 1306 + static int ufshcd_wait_for_pending_cmds(struct ufs_hba *hba, 1307 1307 u64 wait_timeout_us) 1308 1308 { 1309 1309 int ret = 0; ··· 1431 1431 down_write(&hba->clk_scaling_lock); 1432 1432 1433 1433 if (!hba->clk_scaling.is_allowed || 1434 - ufshcd_wait_for_doorbell_clr(hba, timeout_us)) { 1434 + ufshcd_wait_for_pending_cmds(hba, timeout_us)) { 1435 1435 ret = -EBUSY; 1436 1436 up_write(&hba->clk_scaling_lock); 1437 1437 mutex_unlock(&hba->wb_mutex); ··· 3199 3199 } 3200 3200 3201 3201 /* 3202 - * Return: 0 upon success; < 0 upon failure. 3202 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3203 + * < 0 if another error occurred. 3203 3204 */ 3204 3205 static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba, 3205 3206 struct ufshcd_lrb *lrbp, int max_timeout) ··· 3276 3275 } 3277 3276 } 3278 3277 3279 - WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err); 3280 3278 return err; 3281 3279 } 3282 3280 ··· 3294 3294 } 3295 3295 3296 3296 /* 3297 - * Return: 0 upon success; < 0 upon failure. 3297 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3298 + * < 0 if another error occurred. 3298 3299 */ 3299 3300 static int ufshcd_issue_dev_cmd(struct ufs_hba *hba, struct ufshcd_lrb *lrbp, 3300 3301 const u32 tag, int timeout) ··· 3318 3317 * @cmd_type: specifies the type (NOP, Query...) 3319 3318 * @timeout: timeout in milliseconds 3320 3319 * 3321 - * Return: 0 upon success; < 0 upon failure. 3320 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3321 + * < 0 if another error occurred. 3322 3322 * 3323 3323 * NOTE: Since there is only one available tag for device management commands, 3324 3324 * it is expected you hold the hba->dev_cmd.lock mutex. 
··· 3365 3363 (*request)->upiu_req.selector = selector; 3366 3364 } 3367 3365 3366 + /* 3367 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3368 + * < 0 if another error occurred. 3369 + */ 3368 3370 static int ufshcd_query_flag_retry(struct ufs_hba *hba, 3369 3371 enum query_opcode opcode, enum flag_idn idn, u8 index, bool *flag_res) 3370 3372 { ··· 3389 3383 dev_err(hba->dev, 3390 3384 "%s: query flag, opcode %d, idn %d, failed with error %d after %d retries\n", 3391 3385 __func__, opcode, idn, ret, retries); 3392 - WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret); 3393 3386 return ret; 3394 3387 } 3395 3388 ··· 3400 3395 * @index: flag index to access 3401 3396 * @flag_res: the flag value after the query request completes 3402 3397 * 3403 - * Return: 0 for success; < 0 upon failure. 3398 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3399 + * < 0 if another error occurred. 3404 3400 */ 3405 3401 int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode, 3406 3402 enum flag_idn idn, u8 index, bool *flag_res) ··· 3457 3451 3458 3452 out_unlock: 3459 3453 ufshcd_dev_man_unlock(hba); 3460 - WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err); 3461 3454 return err; 3462 3455 } 3463 3456 ··· 3469 3464 * @selector: selector field 3470 3465 * @attr_val: the attribute value after the query request completes 3471 3466 * 3472 - * Return: 0 upon success; < 0 upon failure. 3473 - */ 3467 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3468 + * < 0 if another error occurred. 
3469 + */ 3474 3470 int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode, 3475 3471 enum attr_idn idn, u8 index, u8 selector, u32 *attr_val) 3476 3472 { ··· 3519 3513 3520 3514 out_unlock: 3521 3515 ufshcd_dev_man_unlock(hba); 3522 - WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err); 3523 3516 return err; 3524 3517 } 3525 3518 ··· 3533 3528 * @attr_val: the attribute value after the query request 3534 3529 * completes 3535 3530 * 3536 - * Return: 0 for success; < 0 upon failure. 3537 - */ 3531 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3532 + * < 0 if another error occurred. 3533 + */ 3538 3534 int ufshcd_query_attr_retry(struct ufs_hba *hba, 3539 3535 enum query_opcode opcode, enum attr_idn idn, u8 index, u8 selector, 3540 3536 u32 *attr_val) ··· 3557 3551 dev_err(hba->dev, 3558 3552 "%s: query attribute, idn %d, failed with error %d after %d retries\n", 3559 3553 __func__, idn, ret, QUERY_REQ_RETRIES); 3560 - WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret); 3561 3554 return ret; 3562 3555 } 3563 3556 3564 3557 /* 3565 - * Return: 0 if successful; < 0 upon failure. 3558 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3559 + * < 0 if another error occurred. 3566 3560 */ 3567 3561 static int __ufshcd_query_descriptor(struct ufs_hba *hba, 3568 3562 enum query_opcode opcode, enum desc_idn idn, u8 index, ··· 3621 3615 out_unlock: 3622 3616 hba->dev_cmd.query.descriptor = NULL; 3623 3617 ufshcd_dev_man_unlock(hba); 3624 - WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err); 3625 3618 return err; 3626 3619 } 3627 3620 ··· 3637 3632 * The buf_len parameter will contain, on return, the length parameter 3638 3633 * received on the response. 3639 3634 * 3640 - * Return: 0 for success; < 0 upon failure. 3635 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3636 + * < 0 if another error occurred. 
3641 3637 */ 3642 3638 int ufshcd_query_descriptor_retry(struct ufs_hba *hba, 3643 3639 enum query_opcode opcode, ··· 3656 3650 break; 3657 3651 } 3658 3652 3659 - WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err); 3660 3653 return err; 3661 3654 } 3662 3655 ··· 3668 3663 * @param_read_buf: pointer to buffer where parameter would be read 3669 3664 * @param_size: sizeof(param_read_buf) 3670 3665 * 3671 - * Return: 0 in case of success; < 0 upon failure. 3666 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3667 + * < 0 if another error occurred. 3672 3668 */ 3673 3669 int ufshcd_read_desc_param(struct ufs_hba *hba, 3674 3670 enum desc_idn desc_id, ··· 3736 3730 out: 3737 3731 if (is_kmalloc) 3738 3732 kfree(desc_buf); 3739 - WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret); 3740 3733 return ret; 3741 3734 } 3742 3735 ··· 4786 4781 * 4787 4782 * Set fDeviceInit flag and poll until device toggles it. 4788 4783 * 4789 - * Return: 0 upon success; < 0 upon failure. 4784 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 4785 + * < 0 if another error occurred. 4790 4786 */ 4791 4787 static int ufshcd_complete_dev_init(struct ufs_hba *hba) 4792 4788 { ··· 5141 5135 * not respond with NOP IN UPIU within timeout of %NOP_OUT_TIMEOUT 5142 5136 * and we retry sending NOP OUT for %NOP_OUT_RETRIES iterations. 5143 5137 * 5144 - * Return: 0 upon success; < 0 upon failure. 5138 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 5139 + * < 0 if another error occurred. 
5145 5140 */ 5146 5141 static int ufshcd_verify_dev_init(struct ufs_hba *hba) 5147 5142 { ··· 5566 5559 irqreturn_t retval = IRQ_NONE; 5567 5560 struct uic_command *cmd; 5568 5561 5569 - spin_lock(hba->host->host_lock); 5562 + guard(spinlock_irqsave)(hba->host->host_lock); 5570 5563 cmd = hba->active_uic_cmd; 5571 - if (WARN_ON_ONCE(!cmd)) 5564 + if (!cmd) 5572 5565 goto unlock; 5573 5566 5574 5567 if (ufshcd_is_auto_hibern8_error(hba, intr_status)) ··· 5593 5586 ufshcd_add_uic_command_trace(hba, cmd, UFS_CMD_COMP); 5594 5587 5595 5588 unlock: 5596 - spin_unlock(hba->host->host_lock); 5597 - 5598 5589 return retval; 5599 5590 } 5600 5591 ··· 5874 5869 * as the device is allowed to manage its own way of handling background 5875 5870 * operations. 5876 5871 * 5877 - * Return: zero on success, non-zero on failure. 5872 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 5873 + * < 0 if another error occurred. 5878 5874 */ 5879 5875 static int ufshcd_enable_auto_bkops(struct ufs_hba *hba) 5880 5876 { ··· 5914 5908 * host is idle so that BKOPS are managed effectively without any negative 5915 5909 * impacts. 5916 5910 * 5917 - * Return: zero on success, non-zero on failure. 5911 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 5912 + * < 0 if another error occurred. 5918 5913 */ 5919 5914 static int ufshcd_disable_auto_bkops(struct ufs_hba *hba) 5920 5915 { ··· 6065 6058 __func__, err); 6066 6059 } 6067 6060 6061 + /* 6062 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 6063 + * < 0 if another error occurred. 
6064 + */ 6068 6065 int ufshcd_read_device_lvl_exception_id(struct ufs_hba *hba, u64 *exception_id) 6069 6066 { 6070 6067 struct utp_upiu_query_v4_0 *upiu_resp; ··· 6931 6920 bool queue_eh_work = false; 6932 6921 irqreturn_t retval = IRQ_NONE; 6933 6922 6934 - spin_lock(hba->host->host_lock); 6923 + guard(spinlock_irqsave)(hba->host->host_lock); 6935 6924 hba->errors |= UFSHCD_ERROR_MASK & intr_status; 6936 6925 6937 6926 if (hba->errors & INT_FATAL_ERRORS) { ··· 6990 6979 */ 6991 6980 hba->errors = 0; 6992 6981 hba->uic_error = 0; 6993 - spin_unlock(hba->host->host_lock); 6982 + 6994 6983 return retval; 6995 6984 } 6996 6985 ··· 7465 7454 * @sg_list: Pointer to SG list when DATA IN/OUT UPIU is required in ARPMB operation 7466 7455 * @dir: DMA direction 7467 7456 * 7468 - * Return: zero on success, non-zero on failure. 7457 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 7458 + * < 0 if another error occurred. 7469 7459 */ 7470 7460 int ufshcd_advanced_rpmb_req_handler(struct ufs_hba *hba, struct utp_upiu_req *req_upiu, 7471 7461 struct utp_upiu_req *rsp_upiu, struct ufs_ehs *req_ehs,
+15 -24
drivers/ufs/host/ufs-qcom.c
··· 2070 2070 return IRQ_HANDLED; 2071 2071 } 2072 2072 2073 - static void ufs_qcom_irq_free(struct ufs_qcom_irq *uqi) 2074 - { 2075 - for (struct ufs_qcom_irq *q = uqi; q->irq; q++) 2076 - devm_free_irq(q->hba->dev, q->irq, q->hba); 2077 - 2078 - platform_device_msi_free_irqs_all(uqi->hba->dev); 2079 - devm_kfree(uqi->hba->dev, uqi); 2080 - } 2081 - 2082 - DEFINE_FREE(ufs_qcom_irq, struct ufs_qcom_irq *, if (_T) ufs_qcom_irq_free(_T)) 2083 - 2084 2073 static int ufs_qcom_config_esi(struct ufs_hba *hba) 2085 2074 { 2086 2075 struct ufs_qcom_host *host = ufshcd_get_variant(hba); ··· 2084 2095 */ 2085 2096 nr_irqs = hba->nr_hw_queues - hba->nr_queues[HCTX_TYPE_POLL]; 2086 2097 2087 - struct ufs_qcom_irq *qi __free(ufs_qcom_irq) = 2088 - devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL); 2089 - if (!qi) 2090 - return -ENOMEM; 2091 - /* Preset so __free() has a pointer to hba in all error paths */ 2092 - qi[0].hba = hba; 2093 - 2094 2098 ret = platform_device_msi_init_and_alloc_irqs(hba->dev, nr_irqs, 2095 2099 ufs_qcom_write_msi_msg); 2096 2100 if (ret) { 2097 - dev_err(hba->dev, "Failed to request Platform MSI %d\n", ret); 2098 - return ret; 2101 + dev_warn(hba->dev, "Platform MSI not supported or failed, continuing without ESI\n"); 2102 + return ret; /* Continue without ESI */ 2103 + } 2104 + 2105 + struct ufs_qcom_irq *qi = devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL); 2106 + 2107 + if (!qi) { 2108 + platform_device_msi_free_irqs_all(hba->dev); 2109 + return -ENOMEM; 2099 2110 } 2100 2111 2101 2112 for (int idx = 0; idx < nr_irqs; idx++) { ··· 2106 2117 ret = devm_request_irq(hba->dev, qi[idx].irq, ufs_qcom_mcq_esi_handler, 2107 2118 IRQF_SHARED, "qcom-mcq-esi", qi + idx); 2108 2119 if (ret) { 2109 - dev_err(hba->dev, "%s: Fail to request IRQ for %d, err = %d\n", 2120 + dev_err(hba->dev, "%s: Failed to request IRQ for %d, err = %d\n", 2110 2121 __func__, qi[idx].irq, ret); 2111 - qi[idx].irq = 0; 2122 + /* Free previously allocated IRQs */ 2123 + 
for (int j = 0; j < idx; j++) 2124 + devm_free_irq(hba->dev, qi[j].irq, qi + j); 2125 + platform_device_msi_free_irqs_all(hba->dev); 2126 + devm_kfree(hba->dev, qi); 2112 2127 return ret; 2113 2128 } 2114 2129 } 2115 - 2116 - retain_and_null_ptr(qi); 2117 2130 2118 2131 if (host->hw_ver.major >= 6) { 2119 2132 ufshcd_rmwl(hba, ESI_VEC_MASK, FIELD_PREP(ESI_VEC_MASK, MAX_ESI_VEC - 1),
+1
drivers/ufs/host/ufshcd-pci.c
··· 630 630 { PCI_VDEVICE(INTEL, 0xA847), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 631 631 { PCI_VDEVICE(INTEL, 0x7747), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 632 632 { PCI_VDEVICE(INTEL, 0xE447), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 633 + { PCI_VDEVICE(INTEL, 0x4D47), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 633 634 { } /* terminate list */ 634 635 }; 635 636
+2 -1
drivers/usb/chipidea/ci_hdrc_imx.c
··· 338 338 schedule_work(&ci->usb_phy->chg_work); 339 339 break; 340 340 case CI_HDRC_CONTROLLER_PULLUP_EVENT: 341 - if (ci->role == CI_ROLE_GADGET) 341 + if (ci->role == CI_ROLE_GADGET && 342 + ci->gadget.speed == USB_SPEED_HIGH) 342 343 imx_usbmisc_pullup(data->usbmisc_data, 343 344 ci->gadget.connected); 344 345 break;
+16 -7
drivers/usb/chipidea/usbmisc_imx.c
··· 1068 1068 unsigned long flags; 1069 1069 u32 val; 1070 1070 1071 + if (on) 1072 + return; 1073 + 1071 1074 spin_lock_irqsave(&usbmisc->lock, flags); 1072 1075 val = readl(usbmisc->base + MX7D_USBNC_USB_CTRL2); 1073 - if (!on) { 1074 - val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_MASK; 1075 - val |= MX7D_USBNC_USB_CTRL2_OPMODE(1); 1076 - val |= MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN; 1077 - } else { 1078 - val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN; 1079 - } 1076 + val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_MASK; 1077 + val |= MX7D_USBNC_USB_CTRL2_OPMODE(1); 1078 + val |= MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN; 1079 + writel(val, usbmisc->base + MX7D_USBNC_USB_CTRL2); 1080 + spin_unlock_irqrestore(&usbmisc->lock, flags); 1081 + 1082 + /* Last for at least 1 micro-frame to let host see disconnect signal */ 1083 + usleep_range(125, 150); 1084 + 1085 + spin_lock_irqsave(&usbmisc->lock, flags); 1086 + val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_MASK; 1087 + val |= MX7D_USBNC_USB_CTRL2_OPMODE(0); 1088 + val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN; 1080 1089 writel(val, usbmisc->base + MX7D_USBNC_USB_CTRL2); 1081 1090 spin_unlock_irqrestore(&usbmisc->lock, flags); 1082 1091 }
+16 -12
drivers/usb/core/hcd.c
··· 1636 1636 struct usb_hcd *hcd = bus_to_hcd(urb->dev->bus); 1637 1637 struct usb_anchor *anchor = urb->anchor; 1638 1638 int status = urb->unlinked; 1639 - unsigned long flags; 1640 1639 1641 1640 urb->hcpriv = NULL; 1642 1641 if (unlikely((urb->transfer_flags & URB_SHORT_NOT_OK) && ··· 1653 1654 /* pass ownership to the completion handler */ 1654 1655 urb->status = status; 1655 1656 /* 1656 - * Only collect coverage in the softirq context and disable interrupts 1657 - * to avoid scenarios with nested remote coverage collection sections 1658 - * that KCOV does not support. 1659 - * See the comment next to kcov_remote_start_usb_softirq() for details. 1657 + * This function can be called in task context inside another remote 1658 + * coverage collection section, but kcov doesn't support that kind of 1659 + * recursion yet. Only collect coverage in softirq context for now. 1660 1660 */ 1661 - flags = kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum); 1661 + kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum); 1662 1662 urb->complete(urb); 1663 - kcov_remote_stop_softirq(flags); 1663 + kcov_remote_stop_softirq(); 1664 1664 1665 1665 usb_anchor_resume_wakeups(anchor); 1666 1666 atomic_dec(&urb->use_count); ··· 1717 1719 * @urb: urb being returned to the USB device driver. 1718 1720 * @status: completion status code for the URB. 1719 1721 * 1720 - * Context: atomic. The completion callback is invoked in caller's context. 1721 - * For HCDs with HCD_BH flag set, the completion callback is invoked in BH 1722 - * context (except for URBs submitted to the root hub which always complete in 1723 - * caller's context). 1722 + * Context: atomic. The completion callback is invoked either in a work queue 1723 + * (BH) context or in the caller's context, depending on whether the HCD_BH 1724 + * flag is set in the @hcd structure, except that URBs submitted to the 1725 + * root hub always complete in BH context. 
1724 1726 * 1725 1727 * This hands the URB from HCD to its USB device driver, using its 1726 1728 * completion function. The HCD has freed all per-urb resources ··· 2164 2166 urb->complete = usb_ehset_completion; 2165 2167 urb->status = -EINPROGRESS; 2166 2168 urb->actual_length = 0; 2167 - urb->transfer_flags = URB_DIR_IN; 2169 + urb->transfer_flags = URB_DIR_IN | URB_NO_TRANSFER_DMA_MAP; 2168 2170 usb_get_urb(urb); 2169 2171 atomic_inc(&urb->use_count); 2170 2172 atomic_inc(&urb->dev->urbnum); ··· 2228 2230 2229 2231 /* Complete remaining DATA and STATUS stages using the same URB */ 2230 2232 urb->status = -EINPROGRESS; 2233 + urb->transfer_flags &= ~URB_NO_TRANSFER_DMA_MAP; 2231 2234 usb_get_urb(urb); 2232 2235 atomic_inc(&urb->use_count); 2233 2236 atomic_inc(&urb->dev->urbnum); 2237 + if (map_urb_for_dma(hcd, urb, GFP_KERNEL)) { 2238 + usb_put_urb(urb); 2239 + goto out1; 2240 + } 2241 + 2234 2242 retval = hcd->driver->submit_single_step_set_feature(hcd, urb, 0); 2235 2243 if (!retval && !wait_for_completion_timeout(&done, 2236 2244 msecs_to_jiffies(2000))) {
+1
drivers/usb/core/quirks.c
··· 371 371 { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM }, 372 372 373 373 /* SanDisk Corp. SanDisk 3.2Gen1 */ 374 + { USB_DEVICE(0x0781, 0x5596), .driver_info = USB_QUIRK_DELAY_INIT }, 374 375 { USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT }, 375 376 376 377 /* SanDisk Extreme 55AE */
+2
drivers/usb/dwc3/dwc3-pci.c
··· 41 41 #define PCI_DEVICE_ID_INTEL_TGPLP 0xa0ee 42 42 #define PCI_DEVICE_ID_INTEL_TGPH 0x43ee 43 43 #define PCI_DEVICE_ID_INTEL_JSP 0x4dee 44 + #define PCI_DEVICE_ID_INTEL_WCL 0x4d7e 44 45 #define PCI_DEVICE_ID_INTEL_ADL 0x460e 45 46 #define PCI_DEVICE_ID_INTEL_ADL_PCH 0x51ee 46 47 #define PCI_DEVICE_ID_INTEL_ADLN 0x465e ··· 432 431 { PCI_DEVICE_DATA(INTEL, TGPLP, &dwc3_pci_intel_swnode) }, 433 432 { PCI_DEVICE_DATA(INTEL, TGPH, &dwc3_pci_intel_swnode) }, 434 433 { PCI_DEVICE_DATA(INTEL, JSP, &dwc3_pci_intel_swnode) }, 434 + { PCI_DEVICE_DATA(INTEL, WCL, &dwc3_pci_intel_swnode) }, 435 435 { PCI_DEVICE_DATA(INTEL, ADL, &dwc3_pci_intel_swnode) }, 436 436 { PCI_DEVICE_DATA(INTEL, ADL_PCH, &dwc3_pci_intel_swnode) }, 437 437 { PCI_DEVICE_DATA(INTEL, ADLN, &dwc3_pci_intel_swnode) },
+16 -4
drivers/usb/dwc3/ep0.c
··· 288 288 dwc3_ep0_prepare_one_trb(dep, dwc->ep0_trb_addr, 8, 289 289 DWC3_TRBCTL_CONTROL_SETUP, false); 290 290 ret = dwc3_ep0_start_trans(dep); 291 - WARN_ON(ret < 0); 291 + if (ret < 0) 292 + dev_err(dwc->dev, "ep0 out start transfer failed: %d\n", ret); 293 + 292 294 for (i = 2; i < DWC3_ENDPOINTS_NUM; i++) { 293 295 struct dwc3_ep *dwc3_ep; 294 296 ··· 1063 1061 ret = dwc3_ep0_start_trans(dep); 1064 1062 } 1065 1063 1066 - WARN_ON(ret < 0); 1064 + if (ret < 0) 1065 + dev_err(dwc->dev, 1066 + "ep0 data phase start transfer failed: %d\n", ret); 1067 1067 } 1068 1068 1069 1069 static int dwc3_ep0_start_control_status(struct dwc3_ep *dep) ··· 1082 1078 1083 1079 static void __dwc3_ep0_do_control_status(struct dwc3 *dwc, struct dwc3_ep *dep) 1084 1080 { 1085 - WARN_ON(dwc3_ep0_start_control_status(dep)); 1081 + int ret; 1082 + 1083 + ret = dwc3_ep0_start_control_status(dep); 1084 + if (ret) 1085 + dev_err(dwc->dev, 1086 + "ep0 status phase start transfer failed: %d\n", ret); 1086 1087 } 1087 1088 1088 1089 static void dwc3_ep0_do_control_status(struct dwc3 *dwc, ··· 1130 1121 cmd |= DWC3_DEPCMD_PARAM(dep->resource_index); 1131 1122 memset(&params, 0, sizeof(params)); 1132 1123 ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params); 1133 - WARN_ON_ONCE(ret); 1124 + if (ret) 1125 + dev_err_ratelimited(dwc->dev, 1126 + "ep0 data phase end transfer failed: %d\n", ret); 1127 + 1134 1128 dep->resource_index = 0; 1135 1129 } 1136 1130
+17 -2
drivers/usb/dwc3/gadget.c
··· 1772 1772 dep->flags |= DWC3_EP_DELAY_STOP; 1773 1773 return 0; 1774 1774 } 1775 - WARN_ON_ONCE(ret); 1775 + 1776 + if (ret) 1777 + dev_err_ratelimited(dep->dwc->dev, 1778 + "end transfer failed: %d\n", ret); 1779 + 1776 1780 dep->resource_index = 0; 1777 1781 1778 1782 if (!interrupt) ··· 3781 3777 static void dwc3_gadget_endpoint_transfer_not_ready(struct dwc3_ep *dep, 3782 3778 const struct dwc3_event_depevt *event) 3783 3779 { 3780 + /* 3781 + * During a device-initiated disconnect, a late xferNotReady event can 3782 + * be generated after the End Transfer command resets the event filter, 3783 + * but before the controller is halted. Ignore it to prevent a new 3784 + * transfer from starting. 3785 + */ 3786 + if (!dep->dwc->connected) 3787 + return; 3788 + 3784 3789 dwc3_gadget_endpoint_frame_from_event(dep, event); 3785 3790 3786 3791 /* ··· 4052 4039 dep->flags &= ~DWC3_EP_STALL; 4053 4040 4054 4041 ret = dwc3_send_clear_stall_ep_cmd(dep); 4055 - WARN_ON_ONCE(ret); 4042 + if (ret) 4043 + dev_err_ratelimited(dwc->dev, 4044 + "failed to clear STALL on %s\n", dep->name); 4056 4045 } 4057 4046 } 4058 4047
+7 -2
drivers/usb/gadget/udc/tegra-xudc.c
··· 502 502 struct clk_bulk_data *clks; 503 503 504 504 bool device_mode; 505 + bool current_device_mode; 505 506 struct work_struct usb_role_sw_work; 506 507 507 508 struct phy **usb3_phy; ··· 716 715 717 716 phy_set_mode_ext(xudc->curr_utmi_phy, PHY_MODE_USB_OTG, 718 717 USB_ROLE_DEVICE); 718 + 719 + xudc->current_device_mode = true; 719 720 } 720 721 721 722 static void tegra_xudc_device_mode_off(struct tegra_xudc *xudc) ··· 727 724 int err; 728 725 729 726 dev_dbg(xudc->dev, "device mode off\n"); 727 + 728 + xudc->current_device_mode = false; 730 729 731 730 connected = !!(xudc_readl(xudc, PORTSC) & PORTSC_CCS); 732 731 ··· 4049 4044 4050 4045 spin_lock_irqsave(&xudc->lock, flags); 4051 4046 xudc->suspended = false; 4047 + if (xudc->device_mode != xudc->current_device_mode) 4048 + schedule_work(&xudc->usb_role_sw_work); 4052 4049 spin_unlock_irqrestore(&xudc->lock, flags); 4053 - 4054 - schedule_work(&xudc->usb_role_sw_work); 4055 4050 4056 4051 pm_runtime_enable(dev); 4057 4052
+1 -2
drivers/usb/host/xhci-hub.c
··· 704 704 if (!xhci->devs[i]) 705 705 continue; 706 706 707 - retval = xhci_disable_slot(xhci, i); 708 - xhci_free_virt_device(xhci, i); 707 + retval = xhci_disable_and_free_slot(xhci, i); 709 708 if (retval) 710 709 xhci_err(xhci, "Failed to disable slot %d, %d. Enter test mode anyway\n", 711 710 i, retval);
+11 -11
drivers/usb/host/xhci-mem.c
··· 865 865 * will be manipulated by the configure endpoint, allocate device, or update 866 866 * hub functions while this function is removing the TT entries from the list. 867 867 */ 868 - void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id) 868 + void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev, 869 + int slot_id) 869 870 { 870 - struct xhci_virt_device *dev; 871 871 int i; 872 872 int old_active_eps = 0; 873 873 874 874 /* Slot ID 0 is reserved */ 875 - if (slot_id == 0 || !xhci->devs[slot_id]) 875 + if (slot_id == 0 || !dev) 876 876 return; 877 877 878 - dev = xhci->devs[slot_id]; 879 - 880 - xhci->dcbaa->dev_context_ptrs[slot_id] = 0; 881 - if (!dev) 882 - return; 878 + /* If device ctx array still points to _this_ device, clear it */ 879 + if (dev->out_ctx && 880 + xhci->dcbaa->dev_context_ptrs[slot_id] == cpu_to_le64(dev->out_ctx->dma)) 881 + xhci->dcbaa->dev_context_ptrs[slot_id] = 0; 883 882 884 883 trace_xhci_free_virt_device(dev); 885 884 ··· 919 920 dev->udev->slot_id = 0; 920 921 if (dev->rhub_port && dev->rhub_port->slot_id == slot_id) 921 922 dev->rhub_port->slot_id = 0; 922 - kfree(xhci->devs[slot_id]); 923 - xhci->devs[slot_id] = NULL; 923 + if (xhci->devs[slot_id] == dev) 924 + xhci->devs[slot_id] = NULL; 925 + kfree(dev); 924 926 } 925 927 926 928 /* ··· 962 962 out: 963 963 /* we are now at a leaf device */ 964 964 xhci_debugfs_remove_slot(xhci, slot_id); 965 - xhci_free_virt_device(xhci, slot_id); 965 + xhci_free_virt_device(xhci, vdev, slot_id); 966 966 } 967 967 968 968 int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
+4 -3
drivers/usb/host/xhci-pci-renesas.c
··· 47 47 #define RENESAS_ROM_ERASE_MAGIC 0x5A65726F 48 48 #define RENESAS_ROM_WRITE_MAGIC 0x53524F4D 49 49 50 - #define RENESAS_RETRY 10000 51 - #define RENESAS_DELAY 10 50 + #define RENESAS_RETRY 50000 /* 50000 * RENESAS_DELAY ~= 500ms */ 51 + #define RENESAS_CHIP_ERASE_RETRY 500000 /* 500000 * RENESAS_DELAY ~= 5s */ 52 + #define RENESAS_DELAY 10 52 53 53 54 #define RENESAS_FW_NAME "renesas_usb_fw.mem" 54 55 ··· 408 407 /* sleep a bit while ROM is erased */ 409 408 msleep(20); 410 409 411 - for (i = 0; i < RENESAS_RETRY; i++) { 410 + for (i = 0; i < RENESAS_CHIP_ERASE_RETRY; i++) { 412 411 retval = pci_read_config_byte(pdev, RENESAS_ROM_STATUS, 413 412 &status); 414 413 status &= RENESAS_ROM_STATUS_ERASE;
+7 -2
drivers/usb/host/xhci-ring.c
··· 1592 1592 command->slot_id = 0; 1593 1593 } 1594 1594 1595 - static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id) 1595 + static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id, 1596 + u32 cmd_comp_code) 1596 1597 { 1597 1598 struct xhci_virt_device *virt_dev; 1598 1599 struct xhci_slot_ctx *slot_ctx; ··· 1608 1607 if (xhci->quirks & XHCI_EP_LIMIT_QUIRK) 1609 1608 /* Delete default control endpoint resources */ 1610 1609 xhci_free_device_endpoint_resources(xhci, virt_dev, true); 1610 + if (cmd_comp_code == COMP_SUCCESS) { 1611 + xhci->dcbaa->dev_context_ptrs[slot_id] = 0; 1612 + xhci->devs[slot_id] = NULL; 1613 + } 1611 1614 } 1612 1615 1613 1616 static void xhci_handle_cmd_config_ep(struct xhci_hcd *xhci, int slot_id) ··· 1861 1856 xhci_handle_cmd_enable_slot(slot_id, cmd, cmd_comp_code); 1862 1857 break; 1863 1858 case TRB_DISABLE_SLOT: 1864 - xhci_handle_cmd_disable_slot(xhci, slot_id); 1859 + xhci_handle_cmd_disable_slot(xhci, slot_id, cmd_comp_code); 1865 1860 break; 1866 1861 case TRB_CONFIG_EP: 1867 1862 if (!cmd->completion)
+16 -7
drivers/usb/host/xhci.c
··· 309 309 return -EINVAL; 310 310 311 311 iman = readl(&ir->ir_set->iman); 312 + iman &= ~IMAN_IP; 312 313 iman |= IMAN_IE; 313 314 writel(iman, &ir->ir_set->iman); 314 315 ··· 326 325 return -EINVAL; 327 326 328 327 iman = readl(&ir->ir_set->iman); 328 + iman &= ~IMAN_IP; 329 329 iman &= ~IMAN_IE; 330 330 writel(iman, &ir->ir_set->iman); 331 331 ··· 3934 3932 * Obtaining a new device slot to inform the xHCI host that 3935 3933 * the USB device has been reset. 3936 3934 */ 3937 - ret = xhci_disable_slot(xhci, udev->slot_id); 3938 - xhci_free_virt_device(xhci, udev->slot_id); 3935 + ret = xhci_disable_and_free_slot(xhci, udev->slot_id); 3939 3936 if (!ret) { 3940 3937 ret = xhci_alloc_dev(hcd, udev); 3941 3938 if (ret == 1) ··· 4091 4090 xhci_disable_slot(xhci, udev->slot_id); 4092 4091 4093 4092 spin_lock_irqsave(&xhci->lock, flags); 4094 - xhci_free_virt_device(xhci, udev->slot_id); 4093 + xhci_free_virt_device(xhci, virt_dev, udev->slot_id); 4095 4094 spin_unlock_irqrestore(&xhci->lock, flags); 4096 4095 4097 4096 } ··· 4138 4137 xhci_free_command(xhci, command); 4139 4138 4140 4139 return 0; 4140 + } 4141 + 4142 + int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id) 4143 + { 4144 + struct xhci_virt_device *vdev = xhci->devs[slot_id]; 4145 + int ret; 4146 + 4147 + ret = xhci_disable_slot(xhci, slot_id); 4148 + xhci_free_virt_device(xhci, vdev, slot_id); 4149 + return ret; 4141 4150 } 4142 4151 4143 4152 /* ··· 4256 4245 return 1; 4257 4246 4258 4247 disable_slot: 4259 - xhci_disable_slot(xhci, udev->slot_id); 4260 - xhci_free_virt_device(xhci, udev->slot_id); 4248 + xhci_disable_and_free_slot(xhci, udev->slot_id); 4261 4249 4262 4250 return 0; 4263 4251 } ··· 4392 4382 dev_warn(&udev->dev, "Device not responding to setup %s.\n", act); 4393 4383 4394 4384 mutex_unlock(&xhci->mutex); 4395 - ret = xhci_disable_slot(xhci, udev->slot_id); 4396 - xhci_free_virt_device(xhci, udev->slot_id); 4385 + ret = xhci_disable_and_free_slot(xhci, udev->slot_id); 
4397 4386 if (!ret) { 4398 4387 if (xhci_alloc_dev(hcd, udev) == 1) 4399 4388 xhci_setup_addressable_virt_dev(xhci, udev);
+2 -1
drivers/usb/host/xhci.h
··· 1791 1791 /* xHCI memory management */ 1792 1792 void xhci_mem_cleanup(struct xhci_hcd *xhci); 1793 1793 int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags); 1794 - void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id); 1794 + void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev, int slot_id); 1795 1795 int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id, struct usb_device *udev, gfp_t flags); 1796 1796 int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *udev); 1797 1797 void xhci_copy_ep0_dequeue_into_input_ctx(struct xhci_hcd *xhci, ··· 1888 1888 int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev, 1889 1889 struct usb_tt *tt, gfp_t mem_flags); 1890 1890 int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id); 1891 + int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id); 1891 1892 int xhci_ext_cap_init(struct xhci_hcd *xhci); 1892 1893 1893 1894 int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup);
+1 -1
drivers/usb/storage/realtek_cr.c
··· 252 252 return USB_STOR_TRANSPORT_ERROR; 253 253 } 254 254 255 - residue = bcs->Residue; 255 + residue = le32_to_cpu(bcs->Residue); 256 256 if (bcs->Tag != us->tag) 257 257 return USB_STOR_TRANSPORT_ERROR; 258 258
+29
drivers/usb/storage/unusual_devs.h
··· 934 934 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 935 935 US_FL_SANE_SENSE ), 936 936 937 + /* Added by Maël GUERIN <mael.guerin@murena.io> */ 938 + UNUSUAL_DEV( 0x0603, 0x8611, 0x0000, 0xffff, 939 + "Novatek", 940 + "NTK96550-based camera", 941 + USB_SC_SCSI, USB_PR_BULK, NULL, 942 + US_FL_BULK_IGNORE_TAG ), 943 + 937 944 /* 938 945 * Reported by Hanno Boeck <hanno@gmx.de> 939 946 * Taken from the Lycoris Kernel ··· 1500 1493 "External", 1501 1494 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1502 1495 US_FL_NO_WP_DETECT ), 1496 + 1497 + /* 1498 + * Reported by Zenm Chen <zenmchen@gmail.com> 1499 + * Ignore driver CD mode, otherwise usb_modeswitch may fail to switch 1500 + * the device into Wi-Fi mode. 1501 + */ 1502 + UNUSUAL_DEV( 0x0bda, 0x1a2b, 0x0000, 0xffff, 1503 + "Realtek", 1504 + "DISK", 1505 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1506 + US_FL_IGNORE_DEVICE ), 1507 + 1508 + /* 1509 + * Reported by Zenm Chen <zenmchen@gmail.com> 1510 + * Ignore driver CD mode, otherwise usb_modeswitch may fail to switch 1511 + * the device into Wi-Fi mode. 1512 + */ 1513 + UNUSUAL_DEV( 0x0bda, 0xa192, 0x0000, 0xffff, 1514 + "Realtek", 1515 + "DISK", 1516 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1517 + US_FL_IGNORE_DEVICE ), 1503 1518 1504 1519 UNUSUAL_DEV( 0x0d49, 0x7310, 0x0000, 0x9999, 1505 1520 "Maxtor",
+8 -4
drivers/usb/typec/tcpm/fusb302.c
··· 1485 1485 struct fusb302_chip *chip = dev_id; 1486 1486 unsigned long flags; 1487 1487 1488 + /* Disable our level triggered IRQ until our irq_work has cleared it */ 1489 + disable_irq_nosync(chip->gpio_int_n_irq); 1490 + 1488 1491 spin_lock_irqsave(&chip->irq_lock, flags); 1489 1492 if (chip->irq_suspended) 1490 1493 chip->irq_while_suspended = true; ··· 1630 1627 } 1631 1628 done: 1632 1629 mutex_unlock(&chip->lock); 1630 + enable_irq(chip->gpio_int_n_irq); 1633 1631 } 1634 1632 1635 1633 static int init_gpio(struct fusb302_chip *chip) ··· 1755 1751 goto destroy_workqueue; 1756 1752 } 1757 1753 1758 - ret = devm_request_threaded_irq(dev, chip->gpio_int_n_irq, 1759 - NULL, fusb302_irq_intn, 1760 - IRQF_ONESHOT | IRQF_TRIGGER_LOW, 1761 - "fsc_interrupt_int_n", chip); 1754 + ret = request_irq(chip->gpio_int_n_irq, fusb302_irq_intn, 1755 + IRQF_ONESHOT | IRQF_TRIGGER_LOW, 1756 + "fsc_interrupt_int_n", chip); 1762 1757 if (ret < 0) { 1763 1758 dev_err(dev, "cannot request IRQ for GPIO Int_N, ret=%d", ret); 1764 1759 goto tcpm_unregister_port; ··· 1782 1779 struct fusb302_chip *chip = i2c_get_clientdata(client); 1783 1780 1784 1781 disable_irq_wake(chip->gpio_int_n_irq); 1782 + free_irq(chip->gpio_int_n_irq, chip); 1785 1783 cancel_work_sync(&chip->irq_work); 1786 1784 cancel_delayed_work_sync(&chip->bc_lvl_handler); 1787 1785 tcpm_unregister_port(chip->tcpm_port);
+58
drivers/usb/typec/tcpm/maxim_contaminant.c
··· 188 188 if (ret < 0) 189 189 return ret; 190 190 191 + /* Disable low power mode */ 192 + ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCLPMODESEL, 193 + FIELD_PREP(CCLPMODESEL, 194 + LOW_POWER_MODE_DISABLE)); 195 + 191 196 /* Sleep to allow comparators settle */ 192 197 usleep_range(5000, 6000); 193 198 ret = regmap_update_bits(regmap, TCPC_TCPC_CTRL, TCPC_TCPC_CTRL_ORIENTATION, PLUG_ORNT_CC1); ··· 329 324 return 0; 330 325 } 331 326 327 + static int max_contaminant_enable_toggling(struct max_tcpci_chip *chip) 328 + { 329 + struct regmap *regmap = chip->data.regmap; 330 + int ret; 331 + 332 + /* Disable dry detection if enabled. */ 333 + ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCLPMODESEL, 334 + FIELD_PREP(CCLPMODESEL, 335 + LOW_POWER_MODE_DISABLE)); 336 + if (ret) 337 + return ret; 338 + 339 + ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL1, CCCONNDRY, 0); 340 + if (ret) 341 + return ret; 342 + 343 + ret = max_tcpci_write8(chip, TCPC_ROLE_CTRL, TCPC_ROLE_CTRL_DRP | 344 + FIELD_PREP(TCPC_ROLE_CTRL_CC1, 345 + TCPC_ROLE_CTRL_CC_RD) | 346 + FIELD_PREP(TCPC_ROLE_CTRL_CC2, 347 + TCPC_ROLE_CTRL_CC_RD)); 348 + if (ret) 349 + return ret; 350 + 351 + ret = regmap_update_bits(regmap, TCPC_TCPC_CTRL, 352 + TCPC_TCPC_CTRL_EN_LK4CONN_ALRT, 353 + TCPC_TCPC_CTRL_EN_LK4CONN_ALRT); 354 + if (ret) 355 + return ret; 356 + 357 + return max_tcpci_write8(chip, TCPC_COMMAND, TCPC_CMD_LOOK4CONNECTION); 358 + } 359 + 332 360 bool max_contaminant_is_contaminant(struct max_tcpci_chip *chip, bool disconnect_while_debounce, 333 361 bool *cc_handled) 334 362 { ··· 377 339 ret = max_tcpci_read8(chip, TCPC_POWER_CTRL, &pwr_cntl); 378 340 if (ret < 0) 379 341 return false; 342 + 343 + if (cc_status & TCPC_CC_STATUS_TOGGLING) { 344 + if (chip->contaminant_state == DETECTED) 345 + return true; 346 + return false; 347 + } 380 348 381 349 if (chip->contaminant_state == NOT_DETECTED || chip->contaminant_state == SINK) { 382 350 if (!disconnect_while_debounce) ··· 416 
372 max_contaminant_enable_dry_detection(chip); 417 373 return true; 418 374 } 375 + 376 + ret = max_contaminant_enable_toggling(chip); 377 + if (ret) 378 + dev_err(chip->dev, 379 + "Failed to enable toggling, ret=%d", 380 + ret); 419 381 } 420 382 } else if (chip->contaminant_state == DETECTED) { 421 383 if (!(cc_status & TCPC_CC_STATUS_TOGGLING)) { ··· 429 379 if (chip->contaminant_state == DETECTED) { 430 380 max_contaminant_enable_dry_detection(chip); 431 381 return true; 382 + } else { 383 + ret = max_contaminant_enable_toggling(chip); 384 + if (ret) { 385 + dev_err(chip->dev, 386 + "Failed to enable toggling, ret=%d", 387 + ret); 388 + return true; 389 + } 432 390 } 433 391 } 434 392 }
+1
drivers/usb/typec/tcpm/tcpci_maxim.h
··· 21 21 #define CCOVPDIS BIT(6) 22 22 #define SBURPCTRL BIT(5) 23 23 #define CCLPMODESEL GENMASK(4, 3) 24 + #define LOW_POWER_MODE_DISABLE 0 24 25 #define ULTRA_LOW_POWER_MODE 1 25 26 #define CCRPCTRL GENMASK(2, 0) 26 27 #define UA_1_SRC 1
-23
drivers/xen/xenbus/xenbus_xs.c
··· 718 718 return 0; 719 719 } 720 720 721 - /* 722 - * Certain older XenBus toolstack cannot handle reading values that are 723 - * not populated. Some Xen 3.4 installation are incapable of doing this 724 - * so if we are running on anything older than 4 do not attempt to read 725 - * control/platform-feature-xs_reset_watches. 726 - */ 727 - static bool xen_strict_xenbus_quirk(void) 728 - { 729 - #ifdef CONFIG_X86 730 - uint32_t eax, ebx, ecx, edx, base; 731 - 732 - base = xen_cpuid_base(); 733 - cpuid(base + 1, &eax, &ebx, &ecx, &edx); 734 - 735 - if ((eax >> 16) < 4) 736 - return true; 737 - #endif 738 - return false; 739 - 740 - } 741 721 static void xs_reset_watches(void) 742 722 { 743 723 int err; 744 724 745 725 if (!xen_hvm_domain() || xen_initial_domain()) 746 - return; 747 - 748 - if (xen_strict_xenbus_quirk()) 749 726 return; 750 727 751 728 if (!xenbus_read_unsigned("control",
+1 -1
fs/anon_inodes.c
··· 129 129 } 130 130 return inode; 131 131 } 132 - EXPORT_SYMBOL_GPL_FOR_MODULES(anon_inode_make_secure_inode, "kvm"); 132 + EXPORT_SYMBOL_FOR_MODULES(anon_inode_make_secure_inode, "kvm"); 133 133 134 134 static struct file *__anon_inode_getfile(const char *name, 135 135 const struct file_operations *fops,
+19 -5
fs/btrfs/extent_io.c
··· 1512 1512 1513 1513 /* 1514 1514 * Return 0 if we have submitted or queued the sector for submission. 1515 - * Return <0 for critical errors. 1515 + * Return <0 for critical errors, and the sector will have its dirty flag cleared. 1516 1516 * 1517 1517 * Caller should make sure filepos < i_size and handle filepos >= i_size case. 1518 1518 */ ··· 1535 1535 ASSERT(filepos < i_size); 1536 1536 1537 1537 em = btrfs_get_extent(inode, NULL, filepos, sectorsize); 1538 - if (IS_ERR(em)) 1538 + if (IS_ERR(em)) { 1539 + /* 1540 + * When submission failed, we should still clear the folio dirty. 1541 + * Or the folio will be written back again but without any 1542 + * ordered extent. 1543 + */ 1544 + btrfs_folio_clear_dirty(fs_info, folio, filepos, sectorsize); 1545 + btrfs_folio_set_writeback(fs_info, folio, filepos, sectorsize); 1546 + btrfs_folio_clear_writeback(fs_info, folio, filepos, sectorsize); 1539 1547 return PTR_ERR(em); 1548 + } 1540 1549 1541 1550 extent_offset = filepos - em->start; 1542 1551 em_end = btrfs_extent_map_end(em); ··· 1618 1609 folio_unlock(folio); 1619 1610 return 1; 1620 1611 } 1621 - if (ret < 0) 1612 + if (ret < 0) { 1613 + btrfs_folio_clear_dirty(fs_info, folio, start, len); 1614 + btrfs_folio_set_writeback(fs_info, folio, start, len); 1615 + btrfs_folio_clear_writeback(fs_info, folio, start, len); 1622 1616 return ret; 1617 + } 1623 1618 1624 1619 for (cur = start; cur < start + len; cur += fs_info->sectorsize) 1625 1620 set_bit((cur - folio_start) >> fs_info->sectorsize_bits, &range_bitmap); ··· 1679 1666 * Here we set writeback and clear for the range. If the full folio 1680 1667 * is no longer dirty then we clear the PAGECACHE_TAG_DIRTY tag. 1681 1668 * 1682 - * If we hit any error, the corresponding sector will still be dirty 1683 - * thus no need to clear PAGECACHE_TAG_DIRTY. 1669 + * If we hit any error, the corresponding sector will have its dirty 1670 + * flag cleared and writeback finished, thus no need to handle the error case. 
1684 1671 */ 1685 1672 if (!submitted_io && !error) { 1686 1673 btrfs_folio_set_writeback(fs_info, folio, start, len); ··· 1826 1813 xas_load(&xas); 1827 1814 xas_set_mark(&xas, PAGECACHE_TAG_WRITEBACK); 1828 1815 xas_clear_mark(&xas, PAGECACHE_TAG_DIRTY); 1816 + xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE); 1829 1817 xas_unlock_irqrestore(&xas, flags); 1830 1818 1831 1819 btrfs_set_header_flag(eb, BTRFS_HEADER_FLAG_WRITTEN);
+19 -10
fs/btrfs/inode.c
··· 4189 4189 return ret; 4190 4190 } 4191 4191 4192 + static void update_time_after_link_or_unlink(struct btrfs_inode *dir) 4193 + { 4194 + struct timespec64 now; 4195 + 4196 + /* 4197 + * If we are replaying a log tree, we do not want to update the mtime 4198 + * and ctime of the parent directory with the current time, since the 4199 + * log replay procedure is responsible for setting them to their correct 4200 + * values (the ones it had when the fsync was done). 4201 + */ 4202 + if (test_bit(BTRFS_FS_LOG_RECOVERING, &dir->root->fs_info->flags)) 4203 + return; 4204 + 4205 + now = inode_set_ctime_current(&dir->vfs_inode); 4206 + inode_set_mtime_to_ts(&dir->vfs_inode, now); 4207 + } 4208 + 4192 4209 /* 4193 4210 * unlink helper that gets used here in inode.c and in the tree logging 4194 4211 * recovery code. It remove a link in a directory with a given name, and ··· 4306 4289 inode_inc_iversion(&inode->vfs_inode); 4307 4290 inode_set_ctime_current(&inode->vfs_inode); 4308 4291 inode_inc_iversion(&dir->vfs_inode); 4309 - inode_set_mtime_to_ts(&dir->vfs_inode, inode_set_ctime_current(&dir->vfs_inode)); 4292 + update_time_after_link_or_unlink(dir); 4310 4293 4311 4294 return btrfs_update_inode(trans, dir); 4312 4295 } ··· 6700 6683 btrfs_i_size_write(parent_inode, parent_inode->vfs_inode.i_size + 6701 6684 name->len * 2); 6702 6685 inode_inc_iversion(&parent_inode->vfs_inode); 6703 - /* 6704 - * If we are replaying a log tree, we do not want to update the mtime 6705 - * and ctime of the parent directory with the current time, since the 6706 - * log replay procedure is responsible for setting them to their correct 6707 - * values (the ones it had when the fsync was done). 
6708 - */ 6709 - if (!test_bit(BTRFS_FS_LOG_RECOVERING, &root->fs_info->flags)) 6710 - inode_set_mtime_to_ts(&parent_inode->vfs_inode, 6711 - inode_set_ctime_current(&parent_inode->vfs_inode)); 6686 + update_time_after_link_or_unlink(parent_inode); 6712 6687 6713 6688 ret = btrfs_update_inode(trans, parent_inode); 6714 6689 if (ret)
+18 -1
fs/btrfs/subpage.c
··· 448 448 449 449 spin_lock_irqsave(&bfs->lock, flags); 450 450 bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); 451 + 452 + /* 453 + * Don't clear the TOWRITE tag when starting writeback on a still-dirty 454 + * folio. Doing so can cause WB_SYNC_ALL writepages() to overlook it, 455 + * assume writeback is complete, and exit too early — violating sync 456 + * ordering guarantees. 457 + */ 451 458 if (!folio_test_writeback(folio)) 452 - folio_start_writeback(folio); 459 + __folio_start_writeback(folio, true); 460 + if (!folio_test_dirty(folio)) { 461 + struct address_space *mapping = folio_mapping(folio); 462 + XA_STATE(xas, &mapping->i_pages, folio->index); 463 + unsigned long flags; 464 + 465 + xas_lock_irqsave(&xas, flags); 466 + xas_load(&xas); 467 + xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE); 468 + xas_unlock_irqrestore(&xas, flags); 469 + } 453 470 spin_unlock_irqrestore(&bfs->lock, flags); 454 471 } 455 472
+8 -5
fs/btrfs/super.c
··· 88 88 refcount_t refs; 89 89 }; 90 90 91 + static void btrfs_emit_options(struct btrfs_fs_info *info, 92 + struct btrfs_fs_context *old); 93 + 91 94 enum { 92 95 Opt_acl, 93 96 Opt_clear_cache, ··· 701 698 702 699 if (!test_bit(BTRFS_FS_STATE_REMOUNTING, &info->fs_state)) { 703 700 if (btrfs_raw_test_opt(*mount_opt, SPACE_CACHE)) { 704 - btrfs_info(info, "disk space caching is enabled"); 705 701 btrfs_warn(info, 706 702 "space cache v1 is being deprecated and will be removed in a future release, please use -o space_cache=v2"); 707 703 } 708 - if (btrfs_raw_test_opt(*mount_opt, FREE_SPACE_TREE)) 709 - btrfs_info(info, "using free-space-tree"); 710 704 } 711 705 712 706 return ret; ··· 979 979 btrfs_err(fs_info, "open_ctree failed: %d", ret); 980 980 return ret; 981 981 } 982 + 983 + btrfs_emit_options(fs_info, NULL); 982 984 983 985 inode = btrfs_iget(BTRFS_FIRST_FREE_OBJECTID, fs_info->fs_root); 984 986 if (IS_ERR(inode)) { ··· 1439 1437 { 1440 1438 btrfs_info_if_set(info, old, NODATASUM, "setting nodatasum"); 1441 1439 btrfs_info_if_set(info, old, DEGRADED, "allowing degraded mounts"); 1442 - btrfs_info_if_set(info, old, NODATASUM, "setting nodatasum"); 1440 + btrfs_info_if_set(info, old, NODATACOW, "setting nodatacow"); 1443 1441 btrfs_info_if_set(info, old, SSD, "enabling ssd optimizations"); 1444 1442 btrfs_info_if_set(info, old, SSD_SPREAD, "using spread ssd allocation scheme"); 1445 1443 btrfs_info_if_set(info, old, NOBARRIER, "turning off barriers"); ··· 1461 1459 btrfs_info_if_set(info, old, IGNOREMETACSUMS, "ignoring meta csums"); 1462 1460 btrfs_info_if_set(info, old, IGNORESUPERFLAGS, "ignoring unknown super block flags"); 1463 1461 1462 + btrfs_info_if_unset(info, old, NODATASUM, "setting datasum"); 1464 1463 btrfs_info_if_unset(info, old, NODATACOW, "setting datacow"); 1465 1464 btrfs_info_if_unset(info, old, SSD, "not using ssd optimizations"); 1466 1465 btrfs_info_if_unset(info, old, SSD_SPREAD, "not using spread ssd allocation scheme"); 1467 - 
btrfs_info_if_unset(info, old, NOBARRIER, "turning off barriers"); 1466 + btrfs_info_if_unset(info, old, NOBARRIER, "turning on barriers"); 1468 1467 btrfs_info_if_unset(info, old, NOTREELOG, "enabling tree log"); 1469 1468 btrfs_info_if_unset(info, old, SPACE_CACHE, "disabling disk space caching"); 1470 1469 btrfs_info_if_unset(info, old, FREE_SPACE_TREE, "disabling free space tree");
+99 -34
fs/btrfs/zoned.c
··· 17 17 #include "accessors.h" 18 18 #include "bio.h" 19 19 #include "transaction.h" 20 + #include "sysfs.h" 20 21 21 22 /* Maximum number of zones to report per blkdev_report_zones() call */ 22 23 #define BTRFS_REPORT_NR_ZONES 4096 ··· 42 41 43 42 /* Number of superblock log zones */ 44 43 #define BTRFS_NR_SB_LOG_ZONES 2 44 + 45 + /* Default number of max active zones when the device has no limits. */ 46 + #define BTRFS_DEFAULT_MAX_ACTIVE_ZONES 128 45 47 46 48 /* 47 49 * Minimum of active zones we need: ··· 420 416 if (!IS_ALIGNED(nr_sectors, zone_sectors)) 421 417 zone_info->nr_zones++; 422 418 423 - max_active_zones = bdev_max_active_zones(bdev); 419 + max_active_zones = min_not_zero(bdev_max_active_zones(bdev), 420 + bdev_max_open_zones(bdev)); 421 + if (!max_active_zones && zone_info->nr_zones > BTRFS_DEFAULT_MAX_ACTIVE_ZONES) 422 + max_active_zones = BTRFS_DEFAULT_MAX_ACTIVE_ZONES; 424 423 if (max_active_zones && max_active_zones < BTRFS_MIN_ACTIVE_ZONES) { 425 424 btrfs_err(fs_info, 426 425 "zoned: %s: max active zones %u is too small, need at least %u active zones", ··· 2175 2168 goto out_unlock; 2176 2169 } 2177 2170 2178 - /* No space left */ 2179 - if (btrfs_zoned_bg_is_full(block_group)) { 2180 - ret = false; 2181 - goto out_unlock; 2171 + if (block_group->flags & BTRFS_BLOCK_GROUP_DATA) { 2172 + /* The caller should check if the block group is full. */ 2173 + if (WARN_ON_ONCE(btrfs_zoned_bg_is_full(block_group))) { 2174 + ret = false; 2175 + goto out_unlock; 2176 + } 2177 + } else { 2178 + /* Since it is already written, it should have been active. 
*/ 2179 + WARN_ON_ONCE(block_group->meta_write_pointer != block_group->start); 2182 2180 } 2183 2181 2184 2182 for (i = 0; i < map->num_stripes; i++) { ··· 2242 2230 struct btrfs_fs_info *fs_info = block_group->fs_info; 2243 2231 const u64 end = block_group->start + block_group->length; 2244 2232 struct extent_buffer *eb; 2245 - unsigned long index, start = (block_group->start >> fs_info->sectorsize_bits); 2233 + unsigned long index, start = (block_group->start >> fs_info->nodesize_bits); 2246 2234 2247 2235 rcu_read_lock(); 2248 2236 xa_for_each_start(&fs_info->buffer_tree, index, eb, start) { ··· 2255 2243 rcu_read_lock(); 2256 2244 } 2257 2245 rcu_read_unlock(); 2246 + } 2247 + 2248 + static int call_zone_finish(struct btrfs_block_group *block_group, 2249 + struct btrfs_io_stripe *stripe) 2250 + { 2251 + struct btrfs_device *device = stripe->dev; 2252 + const u64 physical = stripe->physical; 2253 + struct btrfs_zoned_device_info *zinfo = device->zone_info; 2254 + int ret; 2255 + 2256 + if (!device->bdev) 2257 + return 0; 2258 + 2259 + if (zinfo->max_active_zones == 0) 2260 + return 0; 2261 + 2262 + if (btrfs_dev_is_sequential(device, physical)) { 2263 + unsigned int nofs_flags; 2264 + 2265 + nofs_flags = memalloc_nofs_save(); 2266 + ret = blkdev_zone_mgmt(device->bdev, REQ_OP_ZONE_FINISH, 2267 + physical >> SECTOR_SHIFT, 2268 + zinfo->zone_size >> SECTOR_SHIFT); 2269 + memalloc_nofs_restore(nofs_flags); 2270 + 2271 + if (ret) 2272 + return ret; 2273 + } 2274 + 2275 + if (!(block_group->flags & BTRFS_BLOCK_GROUP_DATA)) 2276 + zinfo->reserved_active_zones++; 2277 + btrfs_dev_clear_active_zone(device, physical); 2278 + 2279 + return 0; 2258 2280 } 2259 2281 2260 2282 static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_written) ··· 2375 2329 down_read(&dev_replace->rwsem); 2376 2330 map = block_group->physical_map; 2377 2331 for (i = 0; i < map->num_stripes; i++) { 2378 - struct btrfs_device *device = map->stripes[i].dev; 2379 - const u64 
physical = map->stripes[i].physical; 2380 - struct btrfs_zoned_device_info *zinfo = device->zone_info; 2381 - unsigned int nofs_flags; 2382 2332 2383 - if (!device->bdev) 2384 - continue; 2385 - 2386 - if (zinfo->max_active_zones == 0) 2387 - continue; 2388 - 2389 - nofs_flags = memalloc_nofs_save(); 2390 - ret = blkdev_zone_mgmt(device->bdev, REQ_OP_ZONE_FINISH, 2391 - physical >> SECTOR_SHIFT, 2392 - zinfo->zone_size >> SECTOR_SHIFT); 2393 - memalloc_nofs_restore(nofs_flags); 2394 - 2333 + ret = call_zone_finish(block_group, &map->stripes[i]); 2395 2334 if (ret) { 2396 2335 up_read(&dev_replace->rwsem); 2397 2336 return ret; 2398 2337 } 2399 - 2400 - if (!(block_group->flags & BTRFS_BLOCK_GROUP_DATA)) 2401 - zinfo->reserved_active_zones++; 2402 - btrfs_dev_clear_active_zone(device, physical); 2403 2338 } 2404 2339 up_read(&dev_replace->rwsem); 2405 2340 ··· 2531 2504 void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info) 2532 2505 { 2533 2506 struct btrfs_space_info *data_sinfo = fs_info->data_sinfo; 2534 - struct btrfs_space_info *space_info = data_sinfo->sub_group[0]; 2507 + struct btrfs_space_info *space_info = data_sinfo; 2535 2508 struct btrfs_trans_handle *trans; 2536 2509 struct btrfs_block_group *bg; 2537 2510 struct list_head *bg_list; 2538 2511 u64 alloc_flags; 2539 - bool initial = false; 2512 + bool first = true; 2540 2513 bool did_chunk_alloc = false; 2541 2514 int index; 2542 2515 int ret; ··· 2550 2523 if (sb_rdonly(fs_info->sb)) 2551 2524 return; 2552 2525 2553 - ASSERT(space_info->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC); 2554 2526 alloc_flags = btrfs_get_alloc_profile(fs_info, space_info->flags); 2555 2527 index = btrfs_bg_flags_to_raid_index(alloc_flags); 2556 2528 2557 - bg_list = &data_sinfo->block_groups[index]; 2529 + /* Scan the data space_info to find empty block groups. Take the second one. 
*/ 2558 2530 again: 2531 + bg_list = &space_info->block_groups[index]; 2559 2532 list_for_each_entry(bg, bg_list, list) { 2560 - if (bg->used > 0) 2533 + if (bg->alloc_offset != 0) 2561 2534 continue; 2562 2535 2563 - if (!initial) { 2564 - initial = true; 2536 + if (first) { 2537 + first = false; 2565 2538 continue; 2539 + } 2540 + 2541 + if (space_info == data_sinfo) { 2542 + /* Migrate the block group to the data relocation space_info. */ 2543 + struct btrfs_space_info *reloc_sinfo = data_sinfo->sub_group[0]; 2544 + int factor; 2545 + 2546 + ASSERT(reloc_sinfo->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC); 2547 + factor = btrfs_bg_type_to_factor(bg->flags); 2548 + 2549 + down_write(&space_info->groups_sem); 2550 + list_del_init(&bg->list); 2551 + /* We can assume this as we choose the second empty one. */ 2552 + ASSERT(!list_empty(&space_info->block_groups[index])); 2553 + up_write(&space_info->groups_sem); 2554 + 2555 + spin_lock(&space_info->lock); 2556 + space_info->total_bytes -= bg->length; 2557 + space_info->disk_total -= bg->length * factor; 2558 + /* There is no allocation ever happened. */ 2559 + ASSERT(bg->used == 0); 2560 + ASSERT(bg->zone_unusable == 0); 2561 + /* No super block in a block group on the zoned setup. */ 2562 + ASSERT(bg->bytes_super == 0); 2563 + spin_unlock(&space_info->lock); 2564 + 2565 + bg->space_info = reloc_sinfo; 2566 + if (reloc_sinfo->block_group_kobjs[index] == NULL) 2567 + btrfs_sysfs_add_block_group_type(bg); 2568 + 2569 + btrfs_add_bg_to_space_info(fs_info, bg); 2566 2570 } 2567 2571 2568 2572 fs_info->data_reloc_bg = bg->start; ··· 2610 2552 if (IS_ERR(trans)) 2611 2553 return; 2612 2554 2555 + /* Allocate new BG in the data relocation space_info. 
*/ 2556 + space_info = data_sinfo->sub_group[0]; 2557 + ASSERT(space_info->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC); 2613 2558 ret = btrfs_chunk_alloc(trans, space_info, alloc_flags, CHUNK_ALLOC_FORCE); 2614 2559 btrfs_end_transaction(trans); 2615 2560 if (ret == 1) { 2561 + /* 2562 + * We allocated a new block group in the data relocation space_info. We 2563 + * can take that one. 2564 + */ 2565 + first = false; 2616 2566 did_chunk_alloc = true; 2617 - bg_list = &space_info->block_groups[index]; 2618 2567 goto again; 2619 2568 } 2620 2569 }
+1 -1
fs/buffer.c
··· 157 157 */ 158 158 void end_buffer_read_sync(struct buffer_head *bh, int uptodate) 159 159 { 160 - __end_buffer_read_notouch(bh, uptodate); 161 160 put_bh(bh); 161 + __end_buffer_read_notouch(bh, uptodate); 162 162 } 163 163 EXPORT_SYMBOL(end_buffer_read_sync); 164 164
+1 -1
fs/coredump.c
··· 345 345 was_space = false; 346 346 err = cn_printf(cn, "%c", '\0'); 347 347 if (err) 348 - return err; 348 + return false; 349 349 (*argv)[(*argc)++] = cn->used; 350 350 } 351 351 }
+10 -1
fs/debugfs/inode.c
··· 183 183 struct debugfs_fs_info *sb_opts = sb->s_fs_info; 184 184 struct debugfs_fs_info *new_opts = fc->s_fs_info; 185 185 186 + if (!new_opts) 187 + return 0; 188 + 186 189 sync_filesystem(sb); 187 190 188 191 /* structure copy of new mount options to sb */ ··· 285 282 286 283 static int debugfs_get_tree(struct fs_context *fc) 287 284 { 285 + int err; 286 + 288 287 if (!(debugfs_allow & DEBUGFS_ALLOW_API)) 289 288 return -EPERM; 290 289 291 - return get_tree_single(fc, debugfs_fill_super); 290 + err = get_tree_single(fc, debugfs_fill_super); 291 + if (err) 292 + return err; 293 + 294 + return debugfs_reconfigure(fc); 292 295 } 293 296 294 297 static void debugfs_free_fc(struct fs_context *fc)
+20 -3
fs/ext4/fsmap.c
··· 393 393 /* Reserved GDT blocks */ 394 394 if (!ext4_has_feature_meta_bg(sb) || metagroup < first_meta_bg) { 395 395 len = le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks); 396 + 397 + /* 398 + * mkfs.ext4 can set s_reserved_gdt_blocks as 0 in some cases, 399 + * check for that. 400 + */ 401 + if (!len) 402 + return 0; 403 + 396 404 error = ext4_getfsmap_fill(meta_list, fsb, len, 397 405 EXT4_FMR_OWN_RESV_GDT); 398 406 if (error) ··· 534 526 ext4_group_t end_ag; 535 527 ext4_grpblk_t first_cluster; 536 528 ext4_grpblk_t last_cluster; 529 + struct ext4_fsmap irec; 537 530 int error = 0; 538 531 539 532 bofs = le32_to_cpu(sbi->s_es->s_first_data_block); ··· 618 609 goto err; 619 610 } 620 611 621 - /* Report any gaps at the end of the bg */ 612 + /* 613 + * The dummy record below will cause ext4_getfsmap_helper() to report 614 + * any allocated blocks at the end of the range. 615 + */ 616 + irec.fmr_device = 0; 617 + irec.fmr_physical = end_fsb + 1; 618 + irec.fmr_length = 0; 619 + irec.fmr_owner = EXT4_FMR_OWN_FREE; 620 + irec.fmr_flags = 0; 621 + 622 622 info->gfi_last = true; 623 - error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1, 624 - 0, info); 623 + error = ext4_getfsmap_helper(sb, info, &irec); 625 624 if (error) 626 625 goto err; 627 626
+2 -2
fs/ext4/indirect.c
··· 539 539 int indirect_blks; 540 540 int blocks_to_boundary = 0; 541 541 int depth; 542 - int count = 0; 542 + u64 count = 0; 543 543 ext4_fsblk_t first_block = 0; 544 544 545 545 trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags); ··· 588 588 count++; 589 589 /* Fill in size of a hole we found */ 590 590 map->m_pblk = 0; 591 - map->m_len = min_t(unsigned int, map->m_len, count); 591 + map->m_len = umin(map->m_len, count); 592 592 goto cleanup; 593 593 } 594 594
+2 -2
fs/ext4/inode.c
··· 146 146 */ 147 147 int ext4_inode_is_fast_symlink(struct inode *inode) 148 148 { 149 - if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) { 149 + if (!ext4_has_feature_ea_inode(inode->i_sb)) { 150 150 int ea_blocks = EXT4_I(inode)->i_file_acl ? 151 151 EXT4_CLUSTER_SIZE(inode->i_sb) >> 9 : 0; 152 152 ··· 3155 3155 folio_unlock(folio); 3156 3156 folio_put(folio); 3157 3157 /* 3158 - * block_write_begin may have instantiated a few blocks 3158 + * ext4_block_write_begin may have instantiated a few blocks 3159 3159 * outside i_size. Trim these off again. Don't need 3160 3160 * i_size_read because we hold inode lock. 3161 3161 */
-4
fs/ext4/namei.c
··· 2965 2965 struct inode *inode) 2966 2966 { 2967 2967 struct buffer_head *dir_block = NULL; 2968 - struct ext4_dir_entry_2 *de; 2969 2968 ext4_lblk_t block = 0; 2970 2969 int err; 2971 2970 ··· 2981 2982 dir_block = ext4_append(handle, inode, &block); 2982 2983 if (IS_ERR(dir_block)) 2983 2984 return PTR_ERR(dir_block); 2984 - de = (struct ext4_dir_entry_2 *)dir_block->b_data; 2985 2985 err = ext4_init_dirblock(handle, inode, dir_block, dir->i_ino, NULL, 0); 2986 - if (err) 2987 - goto out; 2988 2986 out: 2989 2987 brelse(dir_block); 2990 2988 return err;
+3 -2
fs/ext4/orphan.c
··· 589 589 } 590 590 oi->of_blocks = inode->i_size >> sb->s_blocksize_bits; 591 591 oi->of_csum_seed = EXT4_I(inode)->i_csum_seed; 592 - oi->of_binfo = kmalloc(oi->of_blocks*sizeof(struct ext4_orphan_block), 593 - GFP_KERNEL); 592 + oi->of_binfo = kmalloc_array(oi->of_blocks, 593 + sizeof(struct ext4_orphan_block), 594 + GFP_KERNEL); 594 595 if (!oi->of_binfo) { 595 596 ret = -ENOMEM; 596 597 goto out_put;
+1 -1
fs/ext4/page-io.c
··· 547 547 * first page of the bio. Otherwise it can deadlock. 548 548 */ 549 549 if (io->io_bio) 550 - gfp_flags = GFP_NOWAIT | __GFP_NOWARN; 550 + gfp_flags = GFP_NOWAIT; 551 551 retry_encrypt: 552 552 bounce_page = fscrypt_encrypt_pagecache_blocks(folio, 553 553 enc_bytes, 0, gfp_flags);
+8 -4
fs/ext4/super.c
··· 268 268 void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block) 269 269 { 270 270 struct buffer_head *bh = bdev_getblk(sb->s_bdev, block, 271 - sb->s_blocksize, GFP_NOWAIT | __GFP_NOWARN); 271 + sb->s_blocksize, GFP_NOWAIT); 272 272 273 273 if (likely(bh)) { 274 274 if (trylock_buffer(bh)) ··· 1998 1998 fc->fs_private = ctx; 1999 1999 fc->ops = &ext4_context_ops; 2000 2000 2001 + /* i_version is always enabled now */ 2002 + fc->sb_flags |= SB_I_VERSION; 2003 + 2001 2004 return 0; 2002 2005 } 2003 2006 ··· 2978 2975 SEQ_OPTS_PRINT("min_batch_time=%u", sbi->s_min_batch_time); 2979 2976 if (nodefs || sbi->s_max_batch_time != EXT4_DEF_MAX_BATCH_TIME) 2980 2977 SEQ_OPTS_PRINT("max_batch_time=%u", sbi->s_max_batch_time); 2978 + if (nodefs && sb->s_flags & SB_I_VERSION) 2979 + SEQ_OPTS_PUTS("i_version"); 2981 2980 if (nodefs || sbi->s_stripe) 2982 2981 SEQ_OPTS_PRINT("stripe=%lu", sbi->s_stripe); 2983 2982 if (nodefs || EXT4_MOUNT_DATA_FLAGS & ··· 5319 5314 sb->s_flags = (sb->s_flags & ~SB_POSIXACL) | 5320 5315 (test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0); 5321 5316 5322 - /* i_version is always enabled now */ 5323 - sb->s_flags |= SB_I_VERSION; 5324 - 5325 5317 /* HSM events are allowed by default. */ 5326 5318 sb->s_iflags |= SB_I_ALLOW_HSM; 5327 5319 ··· 5416 5414 err = ext4_load_and_init_journal(sb, es, ctx); 5417 5415 if (err) 5418 5416 goto failed_mount3a; 5417 + if (bdev_read_only(sb->s_bdev)) 5418 + needs_recovery = 0; 5419 5419 } else if (test_opt(sb, NOLOAD) && !sb_rdonly(sb) && 5420 5420 ext4_has_feature_journal_needs_recovery(sb)) { 5421 5421 ext4_msg(sb, KERN_ERR, "required journal recovery "
+1 -1
fs/fhandle.c
··· 402 402 if (retval) 403 403 return retval; 404 404 405 - CLASS(get_unused_fd, fd)(O_CLOEXEC); 405 + CLASS(get_unused_fd, fd)(open_flag); 406 406 if (fd < 0) 407 407 return fd; 408 408
+5 -4
fs/fs-writeback.c
··· 2608 2608 wakeup_bdi = inode_io_list_move_locked(inode, wb, 2609 2609 dirty_list); 2610 2610 2611 - spin_unlock(&wb->list_lock); 2612 - spin_unlock(&inode->i_lock); 2613 - trace_writeback_dirty_inode_enqueue(inode); 2614 - 2615 2611 /* 2616 2612 * If this is the first dirty inode for this bdi, 2617 2613 * we have to wake-up the corresponding bdi thread ··· 2617 2621 if (wakeup_bdi && 2618 2622 (wb->bdi->capabilities & BDI_CAP_WRITEBACK)) 2619 2623 wb_wakeup_delayed(wb); 2624 + 2625 + spin_unlock(&wb->list_lock); 2626 + spin_unlock(&inode->i_lock); 2627 + trace_writeback_dirty_inode_enqueue(inode); 2628 + 2620 2629 return; 2621 2630 } 2622 2631 }
-5
fs/fuse/inode.c
··· 289 289 } 290 290 } 291 291 292 - if (attr->blksize != 0) 293 - inode->i_blkbits = ilog2(attr->blksize); 294 - else 295 - inode->i_blkbits = inode->i_sb->s_blocksize_bits; 296 - 297 292 /* 298 293 * Don't set the sticky bit in i_mode, unless we want the VFS 299 294 * to check permissions. This prevents failures due to the
+7 -7
fs/iomap/direct-io.c
··· 363 363 if (iomap->flags & IOMAP_F_SHARED) 364 364 dio->flags |= IOMAP_DIO_COW; 365 365 366 - if (iomap->flags & IOMAP_F_NEW) { 366 + if (iomap->flags & IOMAP_F_NEW) 367 367 need_zeroout = true; 368 - } else if (iomap->type == IOMAP_MAPPED) { 369 - if (iomap_dio_can_use_fua(iomap, dio)) 370 - bio_opf |= REQ_FUA; 371 - else 372 - dio->flags &= ~IOMAP_DIO_WRITE_THROUGH; 373 - } 368 + else if (iomap->type == IOMAP_MAPPED && 369 + iomap_dio_can_use_fua(iomap, dio)) 370 + bio_opf |= REQ_FUA; 371 + 372 + if (!(bio_opf & REQ_FUA)) 373 + dio->flags &= ~IOMAP_DIO_WRITE_THROUGH; 374 374 375 375 /* 376 376 * We can only do deferred completion for pure overwrites that
+1
fs/jbd2/checkpoint.c
··· 285 285 retry: 286 286 if (batch_count) 287 287 __flush_batch(journal, &batch_count); 288 + cond_resched(); 288 289 spin_lock(&journal->j_list_lock); 289 290 goto restart; 290 291 }
+2 -2
fs/kernfs/inode.c
··· 142 142 struct kernfs_node *kn = kernfs_dentry_node(dentry); 143 143 struct kernfs_iattrs *attrs; 144 144 145 - attrs = kernfs_iattrs_noalloc(kn); 145 + attrs = kernfs_iattrs(kn); 146 146 if (!attrs) 147 - return -ENODATA; 147 + return -ENOMEM; 148 148 149 149 return simple_xattr_list(d_inode(dentry), &attrs->xattrs, buf, size); 150 150 }
+44 -32
fs/namespace.c
··· 1197 1197 1198 1198 if (!mnt_ns_attached(mnt)) { 1199 1199 for (struct mount *m = mnt; m; m = next_mnt(m, mnt)) 1200 - if (unlikely(mnt_ns_attached(m))) 1201 - m = skip_mnt_tree(m); 1202 - else 1203 - mnt_add_to_ns(n, m); 1200 + mnt_add_to_ns(n, m); 1204 1201 n->nr_mounts += n->pending_mounts; 1205 1202 n->pending_mounts = 0; 1206 1203 } ··· 2701 2704 lock_mnt_tree(child); 2702 2705 q = __lookup_mnt(&child->mnt_parent->mnt, 2703 2706 child->mnt_mountpoint); 2707 + commit_tree(child); 2704 2708 if (q) { 2705 2709 struct mountpoint *mp = root.mp; 2706 2710 struct mount *r = child; ··· 2711 2713 mp = shorter; 2712 2714 mnt_change_mountpoint(r, mp, q); 2713 2715 } 2714 - commit_tree(child); 2715 2716 } 2716 2717 unpin_mountpoint(&root); 2717 2718 unlock_mount_hash(); ··· 2859 2862 return attach_recursive_mnt(mnt, p, mp); 2860 2863 } 2861 2864 2865 + static int may_change_propagation(const struct mount *m) 2866 + { 2867 + struct mnt_namespace *ns = m->mnt_ns; 2868 + 2869 + // it must be mounted in some namespace 2870 + if (IS_ERR_OR_NULL(ns)) // is_mounted() 2871 + return -EINVAL; 2872 + // and the caller must be admin in userns of that namespace 2873 + if (!ns_capable(ns->user_ns, CAP_SYS_ADMIN)) 2874 + return -EPERM; 2875 + return 0; 2876 + } 2877 + 2862 2878 /* 2863 2879 * Sanity check the flags to change_mnt_propagation. 
2864 2880 */ ··· 2908 2898 return -EINVAL; 2909 2899 2910 2900 namespace_lock(); 2911 - if (!check_mnt(mnt)) { 2912 - err = -EINVAL; 2901 + err = may_change_propagation(mnt); 2902 + if (err) 2913 2903 goto out_unlock; 2914 - } 2904 + 2915 2905 if (type == MS_SHARED) { 2916 2906 err = invent_group_ids(mnt, recurse); 2917 2907 if (err) ··· 3357 3347 3358 3348 namespace_lock(); 3359 3349 3360 - err = -EINVAL; 3361 - /* To and From must be mounted */ 3362 - if (!is_mounted(&from->mnt)) 3350 + err = may_change_propagation(from); 3351 + if (err) 3363 3352 goto out; 3364 - if (!is_mounted(&to->mnt)) 3365 - goto out; 3366 - 3367 - err = -EPERM; 3368 - /* We should be allowed to modify mount namespaces of both mounts */ 3369 - if (!ns_capable(from->mnt_ns->user_ns, CAP_SYS_ADMIN)) 3370 - goto out; 3371 - if (!ns_capable(to->mnt_ns->user_ns, CAP_SYS_ADMIN)) 3353 + err = may_change_propagation(to); 3354 + if (err) 3372 3355 goto out; 3373 3356 3374 3357 err = -EINVAL; ··· 4554 4551 if (flags & MOVE_MOUNT_SET_GROUP) mflags |= MNT_TREE_PROPAGATION; 4555 4552 if (flags & MOVE_MOUNT_BENEATH) mflags |= MNT_TREE_BENEATH; 4556 4553 4557 - lflags = 0; 4558 - if (flags & MOVE_MOUNT_F_SYMLINKS) lflags |= LOOKUP_FOLLOW; 4559 - if (flags & MOVE_MOUNT_F_AUTOMOUNTS) lflags |= LOOKUP_AUTOMOUNT; 4560 4554 uflags = 0; 4561 - if (flags & MOVE_MOUNT_F_EMPTY_PATH) uflags = AT_EMPTY_PATH; 4562 - from_name = getname_maybe_null(from_pathname, uflags); 4563 - if (IS_ERR(from_name)) 4564 - return PTR_ERR(from_name); 4555 + if (flags & MOVE_MOUNT_T_EMPTY_PATH) 4556 + uflags = AT_EMPTY_PATH; 4565 4557 4566 - lflags = 0; 4567 - if (flags & MOVE_MOUNT_T_SYMLINKS) lflags |= LOOKUP_FOLLOW; 4568 - if (flags & MOVE_MOUNT_T_AUTOMOUNTS) lflags |= LOOKUP_AUTOMOUNT; 4569 - uflags = 0; 4570 - if (flags & MOVE_MOUNT_T_EMPTY_PATH) uflags = AT_EMPTY_PATH; 4571 4558 to_name = getname_maybe_null(to_pathname, uflags); 4572 4559 if (IS_ERR(to_name)) 4573 4560 return PTR_ERR(to_name); ··· 4570 4577 to_path = 
fd_file(f_to)->f_path; 4571 4578 path_get(&to_path); 4572 4579 } else { 4580 + lflags = 0; 4581 + if (flags & MOVE_MOUNT_T_SYMLINKS) 4582 + lflags |= LOOKUP_FOLLOW; 4583 + if (flags & MOVE_MOUNT_T_AUTOMOUNTS) 4584 + lflags |= LOOKUP_AUTOMOUNT; 4573 4585 ret = filename_lookup(to_dfd, to_name, lflags, &to_path, NULL); 4574 4586 if (ret) 4575 4587 return ret; 4576 4588 } 4589 + 4590 + uflags = 0; 4591 + if (flags & MOVE_MOUNT_F_EMPTY_PATH) 4592 + uflags = AT_EMPTY_PATH; 4593 + 4594 + from_name = getname_maybe_null(from_pathname, uflags); 4595 + if (IS_ERR(from_name)) 4596 + return PTR_ERR(from_name); 4577 4597 4578 4598 if (!from_name && from_dfd >= 0) { 4579 4599 CLASS(fd_raw, f_from)(from_dfd); ··· 4596 4590 return vfs_move_mount(&fd_file(f_from)->f_path, &to_path, mflags); 4597 4591 } 4598 4592 4593 + lflags = 0; 4594 + if (flags & MOVE_MOUNT_F_SYMLINKS) 4595 + lflags |= LOOKUP_FOLLOW; 4596 + if (flags & MOVE_MOUNT_F_AUTOMOUNTS) 4597 + lflags |= LOOKUP_AUTOMOUNT; 4599 4598 ret = filename_lookup(from_dfd, from_name, lflags, &from_path, NULL); 4600 4599 if (ret) 4601 4600 return ret; ··· 5187 5176 int ret; 5188 5177 struct mount_kattr kattr = {}; 5189 5178 5190 - kattr.kflags = MOUNT_KATTR_IDMAP_REPLACE; 5179 + if (flags & OPEN_TREE_CLONE) 5180 + kattr.kflags = MOUNT_KATTR_IDMAP_REPLACE; 5191 5181 if (flags & AT_RECURSIVE) 5192 5182 kattr.kflags |= MOUNT_KATTR_RECURSE; 5193 5183
+3 -1
fs/netfs/read_collect.c
··· 281 281 } else if (test_bit(NETFS_RREQ_SHORT_TRANSFER, &rreq->flags)) { 282 282 notes |= MADE_PROGRESS; 283 283 } else { 284 - if (!stream->failed) 284 + if (!stream->failed) { 285 285 stream->transferred += transferred; 286 + stream->transferred_valid = true; 287 + } 286 288 if (front->transferred < front->len) 287 289 set_bit(NETFS_RREQ_SHORT_TRANSFER, &rreq->flags); 288 290 notes |= MADE_PROGRESS;
+8 -2
fs/netfs/write_collect.c
··· 254 254 if (front->start + front->transferred > stream->collected_to) { 255 255 stream->collected_to = front->start + front->transferred; 256 256 stream->transferred = stream->collected_to - wreq->start; 257 + stream->transferred_valid = true; 257 258 notes |= MADE_PROGRESS; 258 259 } 259 260 if (test_bit(NETFS_SREQ_FAILED, &front->flags)) { ··· 357 356 { 358 357 struct netfs_inode *ictx = netfs_inode(wreq->inode); 359 358 size_t transferred; 359 + bool transferred_valid = false; 360 360 int s; 361 361 362 362 _enter("R=%x", wreq->debug_id); ··· 378 376 continue; 379 377 if (!list_empty(&stream->subrequests)) 380 378 return false; 381 - if (stream->transferred < transferred) 379 + if (stream->transferred_valid && 380 + stream->transferred < transferred) { 382 381 transferred = stream->transferred; 382 + transferred_valid = true; 383 + } 383 384 } 384 385 385 386 /* Okay, declare that all I/O is complete. */ 386 - wreq->transferred = transferred; 387 + if (transferred_valid) 388 + wreq->transferred = transferred; 387 389 trace_netfs_rreq(wreq, netfs_rreq_trace_write_done); 388 390 389 391 if (wreq->io_streams[1].active &&
+2 -2
fs/netfs/write_issue.c
··· 118 118 wreq->io_streams[0].prepare_write = ictx->ops->prepare_write; 119 119 wreq->io_streams[0].issue_write = ictx->ops->issue_write; 120 120 wreq->io_streams[0].collected_to = start; 121 - wreq->io_streams[0].transferred = LONG_MAX; 121 + wreq->io_streams[0].transferred = 0; 122 122 123 123 wreq->io_streams[1].stream_nr = 1; 124 124 wreq->io_streams[1].source = NETFS_WRITE_TO_CACHE; 125 125 wreq->io_streams[1].collected_to = start; 126 - wreq->io_streams[1].transferred = LONG_MAX; 126 + wreq->io_streams[1].transferred = 0; 127 127 if (fscache_resources_valid(&wreq->cache_resources)) { 128 128 wreq->io_streams[1].avail = true; 129 129 wreq->io_streams[1].active = true;
+5 -4
fs/nfs/pagelist.c
··· 253 253 nfs_page_clear_headlock(req); 254 254 } 255 255 256 - /* 257 - * nfs_page_group_sync_on_bit_locked 256 + /** 257 + * nfs_page_group_sync_on_bit_locked - Test if all requests have @bit set 258 + * @req: request in page group 259 + * @bit: PG_* bit that is used to sync page group 258 260 * 259 261 * must be called with page group lock held 260 262 */ 261 - static bool 262 - nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit) 263 + bool nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit) 263 264 { 264 265 struct nfs_page *head = req->wb_head; 265 266 struct nfs_page *tmp;
+10 -19
fs/nfs/write.c
··· 153 153 } 154 154 } 155 155 156 - static int 157 - nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode) 156 + static void nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode) 158 157 { 159 - int ret; 160 - 161 - if (!test_bit(PG_REMOVE, &req->wb_flags)) 162 - return 0; 163 - ret = nfs_page_group_lock(req); 164 - if (ret) 165 - return ret; 166 158 if (test_and_clear_bit(PG_REMOVE, &req->wb_flags)) 167 159 nfs_page_set_inode_ref(req, inode); 168 - nfs_page_group_unlock(req); 169 - return 0; 170 160 } 171 161 172 162 /** ··· 575 585 } 576 586 } 577 587 588 + ret = nfs_page_group_lock(head); 589 + if (ret < 0) 590 + goto out_unlock; 591 + 578 592 /* Ensure that nobody removed the request before we locked it */ 579 593 if (head != folio->private) { 594 + nfs_page_group_unlock(head); 580 595 nfs_unlock_and_release_request(head); 581 596 goto retry; 582 597 } 583 598 584 - ret = nfs_cancel_remove_inode(head, inode); 585 - if (ret < 0) 586 - goto out_unlock; 587 - 588 - ret = nfs_page_group_lock(head); 589 - if (ret < 0) 590 - goto out_unlock; 599 + nfs_cancel_remove_inode(head, inode); 591 600 592 601 /* lock each request in the page group */ 593 602 for (subreq = head->wb_this_page; ··· 775 786 { 776 787 struct nfs_inode *nfsi = NFS_I(nfs_page_to_inode(req)); 777 788 778 - if (nfs_page_group_sync_on_bit(req, PG_REMOVE)) { 789 + nfs_page_group_lock(req); 790 + if (nfs_page_group_sync_on_bit_locked(req, PG_REMOVE)) { 779 791 struct folio *folio = nfs_page_to_folio(req->wb_head); 780 792 struct address_space *mapping = folio->mapping; 781 793 ··· 788 798 } 789 799 spin_unlock(&mapping->i_private_lock); 790 800 } 801 + nfs_page_group_unlock(req); 791 802 792 803 if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags)) { 793 804 atomic_long_dec(&nfsi->nrequests);
+1 -1
fs/overlayfs/dir.c
··· 225 225 struct ovl_cattr *attr) 226 226 { 227 227 struct dentry *ret; 228 - inode_lock(workdir->d_inode); 228 + inode_lock_nested(workdir->d_inode, I_MUTEX_PARENT); 229 229 ret = ovl_create_real(ofs, workdir, 230 230 ovl_lookup_temp(ofs, workdir), attr); 231 231 inode_unlock(workdir->d_inode);
+2 -1
fs/overlayfs/util.c
··· 1552 1552 int ovl_parent_lock(struct dentry *parent, struct dentry *child) 1553 1553 { 1554 1554 inode_lock_nested(parent->d_inode, I_MUTEX_PARENT); 1555 - if (!child || child->d_parent == parent) 1555 + if (!child || 1556 + (!d_unhashed(child) && child->d_parent == parent)) 1556 1557 return 0; 1557 1558 1558 1559 inode_unlock(parent->d_inode);
+1 -1
fs/pidfs.c
··· 296 296 static long pidfd_info(struct file *file, unsigned int cmd, unsigned long arg) 297 297 { 298 298 struct pidfd_info __user *uinfo = (struct pidfd_info __user *)arg; 299 + struct task_struct *task __free(put_task) = NULL; 299 300 struct pid *pid = pidfd_pid(file); 300 301 size_t usize = _IOC_SIZE(cmd); 301 302 struct pidfd_info kinfo = {}; 302 303 struct pidfs_exit_info *exit_info; 303 304 struct user_namespace *user_ns; 304 - struct task_struct *task; 305 305 struct pidfs_attr *attr; 306 306 const struct cred *c; 307 307 __u64 mask;
+6 -4
fs/pnode.c
··· 111 111 return; 112 112 } 113 113 if (IS_MNT_SHARED(mnt)) { 114 - m = propagation_source(mnt); 114 + if (type == MS_SLAVE || !hlist_empty(&mnt->mnt_slave_list)) 115 + m = propagation_source(mnt); 115 116 if (list_empty(&mnt->mnt_share)) { 116 117 mnt_release_group_id(mnt); 117 118 } else { ··· 638 637 } 639 638 640 639 // now to_umount consists of all acceptable candidates 641 - // deal with reparenting of remaining overmounts on those 640 + // deal with reparenting of surviving overmounts on those 642 641 list_for_each_entry(m, &to_umount, mnt_list) { 643 - if (m->overmount) 644 - reparent(m->overmount); 642 + struct mount *over = m->overmount; 643 + if (over && !will_be_unmounted(over)) 644 + reparent(over); 645 645 } 646 646 647 647 // and fold them into the set
+1 -1
fs/smb/client/smb2ops.c
··· 4496 4496 for (int i = 1; i < num_rqst; i++) { 4497 4497 struct smb_rqst *old = &old_rq[i - 1]; 4498 4498 struct smb_rqst *new = &new_rq[i]; 4499 - struct folio_queue *buffer; 4499 + struct folio_queue *buffer = NULL; 4500 4500 size_t size = iov_iter_count(&old->rq_iter); 4501 4501 4502 4502 orig_len += smb_rqst_len(server, old);
+2 -1
fs/smb/server/connection.c
··· 504 504 { 505 505 mutex_lock(&init_lock); 506 506 ksmbd_tcp_destroy(); 507 - ksmbd_rdma_destroy(); 507 + ksmbd_rdma_stop_listening(); 508 508 stop_sessions(); 509 + ksmbd_rdma_destroy(); 509 510 mutex_unlock(&init_lock); 510 511 }
+6 -1
fs/smb/server/connection.h
··· 46 46 struct mutex srv_mutex; 47 47 int status; 48 48 unsigned int cli_cap; 49 - __be32 inet_addr; 49 + union { 50 + __be32 inet_addr; 51 + #if IS_ENABLED(CONFIG_IPV6) 52 + u8 inet6_addr[16]; 53 + #endif 54 + }; 50 55 char *request_buf; 51 56 struct ksmbd_transport *transport; 52 57 struct nls_table *local_nls;
+10 -3
fs/smb/server/oplock.c
··· 1102 1102 if (!atomic_inc_not_zero(&opinfo->refcount)) 1103 1103 continue; 1104 1104 1105 - if (ksmbd_conn_releasing(opinfo->conn)) 1105 + if (ksmbd_conn_releasing(opinfo->conn)) { 1106 + opinfo_put(opinfo); 1106 1107 continue; 1108 + } 1107 1109 1108 1110 oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL); 1109 1111 opinfo_put(opinfo); ··· 1141 1139 if (!atomic_inc_not_zero(&opinfo->refcount)) 1142 1140 continue; 1143 1141 1144 - if (ksmbd_conn_releasing(opinfo->conn)) 1142 + if (ksmbd_conn_releasing(opinfo->conn)) { 1143 + opinfo_put(opinfo); 1145 1144 continue; 1145 + } 1146 + 1146 1147 oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL); 1147 1148 opinfo_put(opinfo); 1148 1149 } ··· 1348 1343 if (!atomic_inc_not_zero(&brk_op->refcount)) 1349 1344 continue; 1350 1345 1351 - if (ksmbd_conn_releasing(brk_op->conn)) 1346 + if (ksmbd_conn_releasing(brk_op->conn)) { 1347 + opinfo_put(brk_op); 1352 1348 continue; 1349 + } 1353 1350 1354 1351 if (brk_op->is_lease && (brk_op->o_lease->state & 1355 1352 (~(SMB2_LEASE_READ_CACHING_LE |
+4 -1
fs/smb/server/transport_rdma.c
··· 2194 2194 return 0; 2195 2195 } 2196 2196 2197 - void ksmbd_rdma_destroy(void) 2197 + void ksmbd_rdma_stop_listening(void) 2198 2198 { 2199 2199 if (!smb_direct_listener.cm_id) 2200 2200 return; ··· 2203 2203 rdma_destroy_id(smb_direct_listener.cm_id); 2204 2204 2205 2205 smb_direct_listener.cm_id = NULL; 2206 + } 2206 2207 2208 + void ksmbd_rdma_destroy(void) 2209 + { 2207 2210 if (smb_direct_wq) { 2208 2211 destroy_workqueue(smb_direct_wq); 2209 2212 smb_direct_wq = NULL;
+3 -1
fs/smb/server/transport_rdma.h
··· 54 54 55 55 #ifdef CONFIG_SMB_SERVER_SMBDIRECT 56 56 int ksmbd_rdma_init(void); 57 + void ksmbd_rdma_stop_listening(void); 57 58 void ksmbd_rdma_destroy(void); 58 59 bool ksmbd_rdma_capable_netdev(struct net_device *netdev); 59 60 void init_smbd_max_io_size(unsigned int sz); 60 61 unsigned int get_smbd_max_read_write_size(void); 61 62 #else 62 63 static inline int ksmbd_rdma_init(void) { return 0; } 63 - static inline int ksmbd_rdma_destroy(void) { return 0; } 64 + static inline void ksmbd_rdma_stop_listening(void) { } 65 + static inline void ksmbd_rdma_destroy(void) { } 64 66 static inline bool ksmbd_rdma_capable_netdev(struct net_device *netdev) { return false; } 65 67 static inline void init_smbd_max_io_size(unsigned int sz) { } 66 68 static inline unsigned int get_smbd_max_read_write_size(void) { return 0; }
+23 -3
fs/smb/server/transport_tcp.c
··· 85 85 return NULL; 86 86 } 87 87 88 + #if IS_ENABLED(CONFIG_IPV6) 89 + if (client_sk->sk->sk_family == AF_INET6) 90 + memcpy(&conn->inet6_addr, &client_sk->sk->sk_v6_daddr, 16); 91 + else 92 + conn->inet_addr = inet_sk(client_sk->sk)->inet_daddr; 93 + #else 88 94 conn->inet_addr = inet_sk(client_sk->sk)->inet_daddr; 95 + #endif 89 96 conn->transport = KSMBD_TRANS(t); 90 97 KSMBD_TRANS(t)->conn = conn; 91 98 KSMBD_TRANS(t)->ops = &ksmbd_tcp_transport_ops; ··· 236 229 { 237 230 struct socket *client_sk = NULL; 238 231 struct interface *iface = (struct interface *)p; 239 - struct inet_sock *csk_inet; 240 232 struct ksmbd_conn *conn; 241 233 int ret; 242 234 ··· 258 252 /* 259 253 * Limits repeated connections from clients with the same IP. 260 254 */ 261 - csk_inet = inet_sk(client_sk->sk); 262 255 down_read(&conn_list_lock); 263 256 list_for_each_entry(conn, &conn_list, conns_list) 264 - if (csk_inet->inet_daddr == conn->inet_addr) { 257 + #if IS_ENABLED(CONFIG_IPV6) 258 + if (client_sk->sk->sk_family == AF_INET6) { 259 + if (memcmp(&client_sk->sk->sk_v6_daddr, 260 + &conn->inet6_addr, 16) == 0) { 261 + ret = -EAGAIN; 262 + break; 263 + } 264 + } else if (inet_sk(client_sk->sk)->inet_daddr == 265 + conn->inet_addr) { 265 266 ret = -EAGAIN; 266 267 break; 267 268 } 269 + #else 270 + if (inet_sk(client_sk->sk)->inet_daddr == 271 + conn->inet_addr) { 272 + ret = -EAGAIN; 273 + break; 274 + } 275 + #endif 268 276 up_read(&conn_list_lock); 269 277 if (ret == -EAGAIN) 270 278 continue;
+3
fs/splice.c
··· 739 739 sd.pos = kiocb.ki_pos; 740 740 if (ret <= 0) 741 741 break; 742 + WARN_ONCE(ret > sd.total_len - left, 743 + "Splice Exceeded! ret=%zd tot=%zu left=%zu\n", 744 + ret, sd.total_len, left); 742 745 743 746 sd.num_spliced += ret; 744 747 sd.total_len -= ret;
+7 -7
fs/squashfs/super.c
··· 187 187 unsigned short flags; 188 188 unsigned int fragments; 189 189 u64 lookup_table_start, xattr_id_table_start, next_table; 190 - int err; 190 + int err, devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE); 191 191 192 192 TRACE("Entered squashfs_fill_superblock\n"); 193 + 194 + if (!devblksize) { 195 + errorf(fc, "squashfs: unable to set blocksize\n"); 196 + return -EINVAL; 197 + } 193 198 194 199 sb->s_fs_info = kzalloc(sizeof(*msblk), GFP_KERNEL); 195 200 if (sb->s_fs_info == NULL) { ··· 206 201 207 202 msblk->panic_on_errors = (opts->errors == Opt_errors_panic); 208 203 209 - msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE); 210 - if (!msblk->devblksize) { 211 - errorf(fc, "squashfs: unable to set blocksize\n"); 212 - return -EINVAL; 213 - } 214 - 204 + msblk->devblksize = devblksize; 215 205 msblk->devblksize_log2 = ffz(~msblk->devblksize); 216 206 217 207 mutex_init(&msblk->meta_index_mutex);
+1
include/linux/blkdev.h
··· 656 656 QUEUE_FLAG_SQ_SCHED, /* single queue style io dispatch */ 657 657 QUEUE_FLAG_DISABLE_WBT_DEF, /* for sched to disable/enable wbt */ 658 658 QUEUE_FLAG_NO_ELV_SWITCH, /* can't switch elevator any more */ 659 + QUEUE_FLAG_QOS_ENABLED, /* qos is enabled */ 659 660 QUEUE_FLAG_MAX 660 661 }; 661 662
-8
include/linux/compiler.h
··· 288 288 #define __ADDRESSABLE(sym) \ 289 289 ___ADDRESSABLE(sym, __section(".discard.addressable")) 290 290 291 - #define __ADDRESSABLE_ASM(sym) \ 292 - .pushsection .discard.addressable,"aw"; \ 293 - .align ARCH_SEL(8,4); \ 294 - ARCH_SEL(.quad, .long) __stringify(sym); \ 295 - .popsection; 296 - 297 - #define __ADDRESSABLE_ASM_STR(sym) __stringify(__ADDRESSABLE_ASM(sym)) 298 - 299 291 /* 300 292 * This returns a constant expression while determining if an argument is 301 293 * a constant expression, most importantly without evaluating the argument.
+1
include/linux/cpuhotplug.h
··· 168 168 CPUHP_AP_QCOM_TIMER_STARTING, 169 169 CPUHP_AP_TEGRA_TIMER_STARTING, 170 170 CPUHP_AP_ARMADA_TIMER_STARTING, 171 + CPUHP_AP_LOONGARCH_ARCH_TIMER_STARTING, 171 172 CPUHP_AP_MIPS_GIC_TIMER_STARTING, 172 173 CPUHP_AP_ARC_TIMER_STARTING, 173 174 CPUHP_AP_REALTEK_TIMER_STARTING,
+1 -1
include/linux/export.h
··· 91 91 #define EXPORT_SYMBOL_NS(sym, ns) __EXPORT_SYMBOL(sym, "", ns) 92 92 #define EXPORT_SYMBOL_NS_GPL(sym, ns) __EXPORT_SYMBOL(sym, "GPL", ns) 93 93 94 - #define EXPORT_SYMBOL_GPL_FOR_MODULES(sym, mods) __EXPORT_SYMBOL(sym, "GPL", "module:" mods) 94 + #define EXPORT_SYMBOL_FOR_MODULES(sym, mods) __EXPORT_SYMBOL(sym, "GPL", "module:" mods) 95 95 96 96 #endif /* _LINUX_EXPORT_H */
+1 -6
include/linux/iosys-map.h
··· 264 264 */ 265 265 static inline void iosys_map_clear(struct iosys_map *map) 266 266 { 267 - if (map->is_iomem) { 268 - map->vaddr_iomem = NULL; 269 - map->is_iomem = false; 270 - } else { 271 - map->vaddr = NULL; 272 - } 267 + memset(map, 0, sizeof(*map)); 273 268 } 274 269 275 270 /**
+11 -9
include/linux/iov_iter.h
··· 160 160 161 161 do { 162 162 struct folio *folio = folioq_folio(folioq, slot); 163 - size_t part, remain, consumed; 163 + size_t part, remain = 0, consumed; 164 164 size_t fsize; 165 165 void *base; 166 166 ··· 168 168 break; 169 169 170 170 fsize = folioq_folio_size(folioq, slot); 171 - base = kmap_local_folio(folio, skip); 172 - part = umin(len, PAGE_SIZE - skip % PAGE_SIZE); 173 - remain = step(base, progress, part, priv, priv2); 174 - kunmap_local(base); 175 - consumed = part - remain; 176 - len -= consumed; 177 - progress += consumed; 178 - skip += consumed; 171 + if (skip < fsize) { 172 + base = kmap_local_folio(folio, skip); 173 + part = umin(len, PAGE_SIZE - skip % PAGE_SIZE); 174 + remain = step(base, progress, part, priv, priv2); 175 + kunmap_local(base); 176 + consumed = part - remain; 177 + len -= consumed; 178 + progress += consumed; 179 + skip += consumed; 180 + } 179 181 if (skip >= fsize) { 180 182 skip = 0; 181 183 slot++;
+9 -38
include/linux/kcov.h
··· 57 57 
 58 58 /* 
 59 59 * The softirq flavor of kcov_remote_*() functions is introduced as a temporary 
 60 - * workaround for KCOV's lack of nested remote coverage sections support. 
 61 - * 
 62 - * Adding support is tracked in https://bugzilla.kernel.org/show_bug.cgi?id=210337. 
 63 - * 
 64 - * kcov_remote_start_usb_softirq(): 
 65 - * 
 66 - * 1. Only collects coverage when called in the softirq context. This allows 
 67 - * avoiding nested remote coverage collection sections in the task context. 
 68 - * For example, USB/IP calls usb_hcd_giveback_urb() in the task context 
 69 - * within an existing remote coverage collection section. Thus, KCOV should 
 70 - * not attempt to start collecting coverage within the coverage collection 
 71 - * section in __usb_hcd_giveback_urb() in this case. 
 72 - * 
 73 - * 2. Disables interrupts for the duration of the coverage collection section. 
 74 - * This allows avoiding nested remote coverage collection sections in the 
 75 - * softirq context (a softirq might occur during the execution of a work in 
 76 - * the BH workqueue, which runs with in_serving_softirq() > 0). 
 77 - * For example, usb_giveback_urb_bh() runs in the BH workqueue with 
 78 - * interrupts enabled, so __usb_hcd_giveback_urb() might be interrupted in 
 79 - * the middle of its remote coverage collection section, and the interrupt 
 80 - * handler might invoke __usb_hcd_giveback_urb() again. 
 60 + * work around for kcov's lack of nested remote coverage sections support in 
 61 + * task context. Adding support for nested sections is tracked in: 
 62 + * https://bugzilla.kernel.org/show_bug.cgi?id=210337 
 81 63 */ 
 82 64 
 83 - static inline unsigned long kcov_remote_start_usb_softirq(u64 id) 
 65 + static inline void kcov_remote_start_usb_softirq(u64 id) 
 84 66 { 
 85 - unsigned long flags = 0; 
 86 - 
 87 - if (in_serving_softirq()) { 
 88 - local_irq_save(flags); 
 67 + if (in_serving_softirq() && !in_hardirq()) 
 89 68 kcov_remote_start_usb(id); 
 90 - } 
 91 - 
 92 - return flags; 
 93 69 } 
 94 70 
 95 - static inline void kcov_remote_stop_softirq(unsigned long flags) 
 71 + static inline void kcov_remote_stop_softirq(void) 
 96 72 { 
 97 - if (in_serving_softirq()) { 
 73 + if (in_serving_softirq() && !in_hardirq()) 
 98 74 kcov_remote_stop(); 
 99 - local_irq_restore(flags); 
 100 - } 
 101 75 } 
 102 76 
 103 77 #ifdef CONFIG_64BIT 
 ··· 105 131 } 
 106 132 static inline void kcov_remote_start_common(u64 id) {} 
 107 133 static inline void kcov_remote_start_usb(u64 id) {} 
 108 - static inline unsigned long kcov_remote_start_usb_softirq(u64 id) 
 109 - { 
 110 - return 0; 
 111 - } 
 112 - static inline void kcov_remote_stop_softirq(unsigned long flags) {} 
 134 + static inline void kcov_remote_start_usb_softirq(u64 id) {} 
 135 + static inline void kcov_remote_stop_softirq(void) {} 
 113 136 
 114 137 #endif /* CONFIG_KCOV */ 
 115 138 #endif /* _LINUX_KCOV_H */
+5
include/linux/migrate.h
··· 79 79 void folio_migrate_flags(struct folio *newfolio, struct folio *folio); 80 80 int folio_migrate_mapping(struct address_space *mapping, 81 81 struct folio *newfolio, struct folio *folio, int extra_count); 82 + int set_movable_ops(const struct movable_operations *ops, enum pagetype type); 82 83 83 84 #else 84 85 ··· 98 97 99 98 static inline int migrate_huge_page_move_mapping(struct address_space *mapping, 100 99 struct folio *dst, struct folio *src) 100 + { 101 + return -ENOSYS; 102 + } 103 + static inline int set_movable_ops(const struct movable_operations *ops, enum pagetype type) 101 104 { 102 105 return -ENOSYS; 103 106 }
+1
include/linux/netfs.h
··· 150 150 bool active; /* T if stream is active */ 151 151 bool need_retry; /* T if this stream needs retrying */ 152 152 bool failed; /* T if this stream failed */ 153 + bool transferred_valid; /* T is ->transferred is valid */ 153 154 }; 154 155 155 156 /*
+1
include/linux/nfs_page.h
··· 160 160 extern int nfs_page_group_lock(struct nfs_page *); 161 161 extern void nfs_page_group_unlock(struct nfs_page *); 162 162 extern bool nfs_page_group_sync_on_bit(struct nfs_page *, unsigned int); 163 + extern bool nfs_page_group_sync_on_bit_locked(struct nfs_page *, unsigned int); 163 164 extern int nfs_page_set_headlock(struct nfs_page *req); 164 165 extern void nfs_page_clear_headlock(struct nfs_page *req); 165 166 extern bool nfs_async_iocounter_wait(struct rpc_task *, struct nfs_lock_context *);
+2 -2
include/net/bluetooth/bluetooth.h
··· 647 647 #if IS_ENABLED(CONFIG_BT_LE) 648 648 int iso_init(void); 649 649 int iso_exit(void); 650 - bool iso_enabled(void); 650 + bool iso_inited(void); 651 651 #else 652 652 static inline int iso_init(void) 653 653 { ··· 659 659 return 0; 660 660 } 661 661 662 - static inline bool iso_enabled(void) 662 + static inline bool iso_inited(void) 663 663 { 664 664 return false; 665 665 }
+38 -6
include/net/bluetooth/hci_core.h
··· 129 129 struct list_head list; 
 130 130 unsigned int acl_num; 
 131 131 unsigned int sco_num; 
 132 - unsigned int iso_num; 
 132 + unsigned int cis_num; 
 133 + unsigned int bis_num; 
 134 + unsigned int pa_num; 
 133 135 unsigned int le_num; 
 134 136 unsigned int le_num_peripheral; 
 135 137 }; 
 ··· 1016 1014 h->sco_num++; 
 1017 1015 break; 
 1018 1016 case CIS_LINK: 
 1017 + h->cis_num++; 
 1018 + break; 
 1019 1019 case BIS_LINK: 
 1020 + h->bis_num++; 
 1021 + break; 
 1020 1022 case PA_LINK: 
 1021 - h->iso_num++; 
 1023 + h->pa_num++; 
 1022 1024 break; 
 1023 1025 } 
 1024 1026 } 
 ··· 1048 1042 h->sco_num--; 
 1049 1043 break; 
 1050 1044 case CIS_LINK: 
 1045 + h->cis_num--; 
 1046 + break; 
 1051 1047 case BIS_LINK: 
 1048 + h->bis_num--; 
 1049 + break; 
 1052 1050 case PA_LINK: 
 1053 - h->iso_num--; 
 1051 + h->pa_num--; 
 1054 1052 break; 
 1055 1053 } 
 1056 1054 } 
 ··· 1071 1061 case ESCO_LINK: 
 1072 1062 return h->sco_num; 
 1073 1063 case CIS_LINK: 
 1064 + return h->cis_num; 
 1074 1065 case BIS_LINK: 
 1066 + return h->bis_num; 
 1075 1067 case PA_LINK: 
 1076 - return h->iso_num; 
 1068 + return h->pa_num; 
 1077 1069 default: 
 1078 1070 return 0; 
 1079 1071 } 
 ··· 1085 1073 { 
 1086 1074 struct hci_conn_hash *c = &hdev->conn_hash; 
 1087 1075 
 1088 - return c->acl_num + c->sco_num + c->le_num + c->iso_num; 
 1076 + return c->acl_num + c->sco_num + c->le_num + c->cis_num + c->bis_num + 
 1077 + c->pa_num; 
 1078 + } 
 1079 + 
 1080 + static inline unsigned int hci_iso_count(struct hci_dev *hdev) 
 1081 + { 
 1082 + struct hci_conn_hash *c = &hdev->conn_hash; 
 1083 + 
 1084 + return c->cis_num + c->bis_num; 
 1089 1085 } 
 1090 1086 static inline bool hci_conn_valid(struct hci_dev *hdev, struct hci_conn *conn) 
 ··· 1935 1915 !hci_dev_test_flag(dev, HCI_RPA_EXPIRED)) 
 1936 1916 #define adv_rpa_valid(adv) (bacmp(&adv->random_addr, BDADDR_ANY) && \ 
 1937 1917 !adv->rpa_expired) 
 1918 + #define le_enabled(dev) (lmp_le_capable(dev) && \ 
 1919 + hci_dev_test_flag(dev, HCI_LE_ENABLED)) 
 1938 1920 #define scan_1m(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_1M) || \ 
 1940 1922 ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_1M)) 
 ··· 1954 1932 ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_CODED)) 
 1955 1933 
 1956 1934 #define ll_privacy_capable(dev) ((dev)->le_features[0] & HCI_LE_LL_PRIVACY) 
 1935 + #define ll_privacy_enabled(dev) (le_enabled(dev) && ll_privacy_capable(dev)) 
 1957 1936 
 1958 1937 #define privacy_mode_capable(dev) (ll_privacy_capable(dev) && \ 
 1959 1938 ((dev)->commands[39] & 0x04)) 
 ··· 2004 1981 
 2005 1982 /* CIS Master/Slave and BIS support */ 
 2006 1983 #define iso_capable(dev) (cis_capable(dev) || bis_capable(dev)) 
 1984 + #define iso_enabled(dev) (le_enabled(dev) && iso_capable(dev)) 
 2007 1985 #define cis_capable(dev) \ 
 2008 1986 (cis_central_capable(dev) || cis_peripheral_capable(dev)) 
 1987 + #define cis_enabled(dev) (le_enabled(dev) && cis_capable(dev)) 
 2009 1988 #define cis_central_capable(dev) \ 
 2010 1989 ((dev)->le_features[3] & HCI_LE_CIS_CENTRAL) 
 1990 + #define cis_central_enabled(dev) \ 
 1991 + (le_enabled(dev) && cis_central_capable(dev)) 
 2011 1992 #define cis_peripheral_capable(dev) \ 
 2012 1993 ((dev)->le_features[3] & HCI_LE_CIS_PERIPHERAL) 
 1994 + #define cis_peripheral_enabled(dev) \ 
 1995 + (le_enabled(dev) && cis_peripheral_capable(dev)) 
 2013 1996 #define bis_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_BROADCASTER) 
 1997 + #define bis_enabled(dev) (le_enabled(dev) && bis_capable(dev)) 
 2014 - #define sync_recv_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER) 
 1998 + #define sync_recv_capable(dev) \ 
 1999 + ((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER) 
 2000 + #define sync_recv_enabled(dev) (le_enabled(dev) && sync_recv_capable(dev)) 
 2015 2001 
 2016 2002 #define mws_transport_config_capable(dev) (((dev)->commands[30] & 0x08) && \ 
 2017 2003 (!hci_test_quirk((dev), HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG)))
+1
include/net/bond_3ad.h
··· 307 307 struct slave *slave); 308 308 int bond_3ad_set_carrier(struct bonding *bond); 309 309 void bond_3ad_update_lacp_rate(struct bonding *bond); 310 + void bond_3ad_update_lacp_active(struct bonding *bond); 310 311 void bond_3ad_update_ad_actor_settings(struct bonding *bond); 311 312 int bond_3ad_stats_fill(struct sk_buff *skb, struct bond_3ad_stats *stats); 312 313 size_t bond_3ad_stats_size(void);
+8 -3
include/net/sch_generic.h
··· 1038 1038 skb = __skb_dequeue(&sch->gso_skb); 1039 1039 if (skb) { 1040 1040 sch->q.qlen--; 1041 + qdisc_qstats_backlog_dec(sch, skb); 1041 1042 return skb; 1042 1043 } 1043 - if (direct) 1044 - return __qdisc_dequeue_head(&sch->q); 1045 - else 1044 + if (direct) { 1045 + skb = __qdisc_dequeue_head(&sch->q); 1046 + if (skb) 1047 + qdisc_qstats_backlog_dec(sch, skb); 1048 + return skb; 1049 + } else { 1046 1050 return sch->dequeue(sch); 1051 + } 1047 1052 } 1048 1053 1049 1054 static inline struct sk_buff *qdisc_dequeue_head(struct Qdisc *sch)
+3 -2
include/sound/cs35l56.h
··· 107 107 #define CS35L56_DSP1_PMEM_5114 0x3804FE8 108 108 109 109 #define CS35L63_DSP1_FW_VER CS35L56_DSP1_FW_VER 110 - #define CS35L63_DSP1_HALO_STATE 0x280396C 111 - #define CS35L63_DSP1_PM_CUR_STATE 0x28042C8 110 + #define CS35L63_DSP1_HALO_STATE 0x2803C04 111 + #define CS35L63_DSP1_PM_CUR_STATE 0x2804518 112 112 #define CS35L63_PROTECTION_STATUS 0x340009C 113 113 #define CS35L63_TRANSDUCER_ACTUAL_PS 0x34000F4 114 114 #define CS35L63_MAIN_RENDER_USER_MUTE 0x3400020 ··· 306 306 struct gpio_desc *reset_gpio; 307 307 struct cs35l56_spi_payload *spi_payload_buf; 308 308 const struct cs35l56_fw_reg *fw_reg; 309 + const struct cirrus_amp_cal_controls *calibration_controls; 309 310 }; 310 311 311 312 static inline bool cs35l56_is_otp_register(unsigned int reg)
+3 -3
include/sound/tas2781-tlv.h
··· 2 2 // 3 3 // ALSA SoC Texas Instruments TAS2781 Audio Smart Amplifier 4 4 // 5 - // Copyright (C) 2022 - 2024 Texas Instruments Incorporated 5 + // Copyright (C) 2022 - 2025 Texas Instruments Incorporated 6 6 // https://www.ti.com 7 7 // 8 8 // The TAS2781 driver implements a flexible and configurable ··· 15 15 #ifndef __TAS2781_TLV_H__ 16 16 #define __TAS2781_TLV_H__ 17 17 18 - static const __maybe_unused DECLARE_TLV_DB_SCALE(dvc_tlv, -10000, 50, 0); 19 - static const __maybe_unused DECLARE_TLV_DB_SCALE(amp_vol_tlv, 1100, 50, 0); 18 + static const __maybe_unused DECLARE_TLV_DB_SCALE(tas2781_dvc_tlv, -10000, 50, 0); 19 + static const __maybe_unused DECLARE_TLV_DB_SCALE(tas2781_amp_tlv, 1100, 50, 0); 20 20 21 21 #endif
+1
include/uapi/linux/pfrut.h
··· 89 89 __u32 hw_ver; 90 90 __u32 rt_ver; 91 91 __u8 platform_id[16]; 92 + __u32 svn_ver; 92 93 }; 93 94 94 95 enum pfru_dsm_status {
+1 -1
include/uapi/linux/raid/md_p.h
··· 173 173 #else 174 174 #error unspecified endianness 175 175 #endif 176 - __u32 resync_offset; /* 11 resync checkpoint sector count */ 176 + __u32 recovery_cp; /* 11 resync checkpoint sector count */ 177 177 /* There are only valid for minor_version > 90 */ 178 178 __u64 reshape_position; /* 12,13 next address in array-space for reshape */ 179 179 __u32 new_level; /* 14 new level we are reshaping to */
+3
io_uring/futex.c
··· 288 288 goto done_unlock; 289 289 } 290 290 291 + req->flags |= REQ_F_ASYNC_DATA; 291 292 req->async_data = ifd; 292 293 ifd->q = futex_q_init; 293 294 ifd->q.bitset = iof->futex_mask; ··· 310 309 if (ret < 0) 311 310 req_set_fail(req); 312 311 io_req_set_res(req, ret, 0); 312 + req->async_data = NULL; 313 + req->flags &= ~REQ_F_ASYNC_DATA; 313 314 kfree(ifd); 314 315 return IOU_COMPLETE; 315 316 }
+1
io_uring/io_uring.c
··· 2119 2119 req->file = NULL; 2120 2120 req->tctx = current->io_uring; 2121 2121 req->cancel_seq_set = false; 2122 + req->async_data = NULL; 2122 2123 2123 2124 if (unlikely(opcode >= IORING_OP_LAST)) { 2124 2125 req->opcode = 0;
+1
kernel/Kconfig.kexec
··· 97 97 config KEXEC_HANDOVER 98 98 bool "kexec handover" 99 99 depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE 100 + depends on !DEFERRED_STRUCT_PAGE_INIT 100 101 select MEMBLOCK_KHO_SCRATCH 101 102 select KEXEC_FILE 102 103 select DEBUG_FS
+5 -6
kernel/cgroup/cpuset.c
··· 280 280 { 281 281 if (!cpusets_insane_config() && 282 282 movable_only_nodes(nodes)) { 283 - static_branch_enable(&cpusets_insane_config_key); 283 + static_branch_enable_cpuslocked(&cpusets_insane_config_key); 284 284 pr_info("Unsupported (movable nodes only) cpuset configuration detected (nmask=%*pbl)!\n" 285 285 "Cpuset allocations might fail even with a lot of memory available.\n", 286 286 nodemask_pr_args(nodes)); ··· 1843 1843 if (is_partition_valid(cs)) 1844 1844 adding = cpumask_and(tmp->addmask, 1845 1845 xcpus, parent->effective_xcpus); 1846 - } else if (is_partition_invalid(cs) && 1846 + } else if (is_partition_invalid(cs) && !cpumask_empty(xcpus) && 1847 1847 cpumask_subset(xcpus, parent->effective_xcpus)) { 1848 1848 struct cgroup_subsys_state *css; 1849 1849 struct cpuset *child; ··· 3358 3358 else 3359 3359 return -EINVAL; 3360 3360 3361 - css_get(&cs->css); 3362 3361 cpus_read_lock(); 3363 3362 mutex_lock(&cpuset_mutex); 3364 3363 if (is_cpuset_online(cs)) 3365 3364 retval = update_prstate(cs, val); 3366 3365 mutex_unlock(&cpuset_mutex); 3367 3366 cpus_read_unlock(); 3368 - css_put(&cs->css); 3369 3367 return retval ?: nbytes; 3370 3368 } 3371 3369 ··· 3868 3870 partcmd = partcmd_invalidate; 3869 3871 /* 3870 3872 * On the other hand, an invalid partition root may be transitioned 3871 - * back to a regular one. 3873 + * back to a regular one with a non-empty effective xcpus. 3872 3874 */ 3873 - else if (is_partition_valid(parent) && is_partition_invalid(cs)) 3875 + else if (is_partition_valid(parent) && is_partition_invalid(cs) && 3876 + !cpumask_empty(cs->effective_xcpus)) 3874 3877 partcmd = partcmd_update; 3875 3878 3876 3879 if (partcmd >= 0) {
+3
kernel/cgroup/rstat.c
··· 479 479 if (!css_uses_rstat(css)) 480 480 return; 481 481 482 + if (!css->rstat_cpu) 483 + return; 484 + 482 485 css_rstat_flush(css); 483 486 484 487 /* sanity check */
+6
kernel/events/core.c
··· 2665 2665 2666 2666 static void perf_event_unthrottle(struct perf_event *event, bool start) 2667 2667 { 2668 + if (event->state != PERF_EVENT_STATE_ACTIVE) 2669 + return; 2670 + 2668 2671 event->hw.interrupts = 0; 2669 2672 if (start) 2670 2673 event->pmu->start(event, 0); ··· 2677 2674 2678 2675 static void perf_event_throttle(struct perf_event *event) 2679 2676 { 2677 + if (event->state != PERF_EVENT_STATE_ACTIVE) 2678 + return; 2679 + 2680 2680 event->hw.interrupts = MAX_INTERRUPTS; 2681 2681 event->pmu->stop(event, 0); 2682 2682 if (event == event->group_leader)
+25 -4
kernel/kexec_handover.c
··· 144 144 unsigned int order) 145 145 { 146 146 struct kho_mem_phys_bits *bits; 147 - struct kho_mem_phys *physxa; 147 + struct kho_mem_phys *physxa, *new_physxa; 148 148 const unsigned long pfn_high = pfn >> order; 149 149 150 150 might_sleep(); 151 151 152 - physxa = xa_load_or_alloc(&track->orders, order, sizeof(*physxa)); 153 - if (IS_ERR(physxa)) 154 - return PTR_ERR(physxa); 152 + physxa = xa_load(&track->orders, order); 153 + if (!physxa) { 154 + int err; 155 + 156 + new_physxa = kzalloc(sizeof(*physxa), GFP_KERNEL); 157 + if (!new_physxa) 158 + return -ENOMEM; 159 + 160 + xa_init(&new_physxa->phys_bits); 161 + physxa = xa_cmpxchg(&track->orders, order, NULL, new_physxa, 162 + GFP_KERNEL); 163 + 164 + err = xa_err(physxa); 165 + if (err || physxa) { 166 + xa_destroy(&new_physxa->phys_bits); 167 + kfree(new_physxa); 168 + 169 + if (err) 170 + return err; 171 + } else { 172 + physxa = new_physxa; 173 + } 174 + } 155 175 156 176 bits = xa_load_or_alloc(&physxa->phys_bits, pfn_high / PRESERVE_BITS, 157 177 sizeof(*bits)); ··· 564 544 err_free_scratch_desc: 565 545 memblock_free(kho_scratch, kho_scratch_cnt * sizeof(*kho_scratch)); 566 546 err_disable_kho: 547 + pr_warn("Failed to reserve scratch area, disabling kexec handover\n"); 567 548 kho_enable = false; 568 549 } 569 550
+4 -3
kernel/params.c
··· 513 513 int param_set_copystring(const char *val, const struct kernel_param *kp) 514 514 { 515 515 const struct kparam_string *kps = kp->str; 516 + const size_t len = strnlen(val, kps->maxlen); 516 517 517 - if (strnlen(val, kps->maxlen) == kps->maxlen) { 518 + if (len == kps->maxlen) { 518 519 pr_err("%s: string doesn't fit in %u chars.\n", 519 520 kp->name, kps->maxlen-1); 520 521 return -ENOSPC; 521 522 } 522 - strcpy(kps->string, val); 523 + memcpy(kps->string, val, len + 1); 523 524 return 0; 524 525 } 525 526 EXPORT_SYMBOL(param_set_copystring); ··· 842 841 dot = strchr(kp->name, '.'); 843 842 if (!dot) { 844 843 /* This happens for core_param() */ 845 - strcpy(modname, "kernel"); 844 + strscpy(modname, "kernel"); 846 845 name_len = 0; 847 846 } else { 848 847 name_len = dot - kp->name + 1;
+4
kernel/sched/ext.c
··· 5749 5749 __setscheduler_class(p->policy, p->prio); 5750 5750 struct sched_enq_and_set_ctx ctx; 5751 5751 5752 + if (!tryget_task_struct(p)) 5753 + continue; 5754 + 5752 5755 if (old_class != new_class && p->se.sched_delayed) 5753 5756 dequeue_task(task_rq(p), p, DEQUEUE_SLEEP | DEQUEUE_DELAYED); 5754 5757 ··· 5764 5761 sched_enq_and_set_task(&ctx); 5765 5762 5766 5763 check_class_changed(task_rq(p), p, old_class, p->prio); 5764 + put_task_struct(p); 5767 5765 } 5768 5766 scx_task_iter_stop(&sti); 5769 5767 percpu_up_write(&scx_fork_rwsem);
+5 -1
kernel/signal.c
··· 4067 4067 { 4068 4068 struct pid *pid; 4069 4069 enum pid_type type; 4070 + int ret; 4070 4071 4071 4072 /* Enforce flags be set to 0 until we add an extension. */ 4072 4073 if (flags & ~PIDFD_SEND_SIGNAL_FLAGS) ··· 4109 4108 } 4110 4109 } 4111 4110 4112 - return do_pidfd_send_signal(pid, sig, type, info, flags); 4111 + ret = do_pidfd_send_signal(pid, sig, type, info, flags); 4112 + put_pid(pid); 4113 + 4114 + return ret; 4113 4115 } 4114 4116 4115 4117 static int
+1
kernel/trace/fgraph.c
··· 1397 1397 ftrace_graph_active--; 1398 1398 gops->saved_func = NULL; 1399 1399 fgraph_lru_release_index(i); 1400 + unregister_pm_notifier(&ftrace_suspend_notifier); 1400 1401 } 1401 1402 return ret; 1402 1403 }
+10 -9
kernel/trace/ftrace.c
··· 4661 4661 } else { 4662 4662 iter->hash = alloc_and_copy_ftrace_hash(size_bits, hash); 4663 4663 } 4664 + } else { 4665 + if (hash) 4666 + iter->hash = alloc_and_copy_ftrace_hash(hash->size_bits, hash); 4667 + else 4668 + iter->hash = EMPTY_HASH; 4669 + } 4664 4670 4665 - if (!iter->hash) { 4666 - trace_parser_put(&iter->parser); 4667 - goto out_unlock; 4668 - } 4669 - } else 4670 - iter->hash = hash; 4671 + if (!iter->hash) { 4672 + trace_parser_put(&iter->parser); 4673 + goto out_unlock; 4674 + } 4671 4675 4672 4676 ret = 0; 4673 4677 ··· 6547 6543 ftrace_hash_move_and_update_ops(iter->ops, orig_hash, 6548 6544 iter->hash, filter_hash); 6549 6545 mutex_unlock(&ftrace_lock); 6550 - } else { 6551 - /* For read only, the hash is the ops hash */ 6552 - iter->hash = NULL; 6553 6546 } 6554 6547 6555 6548 mutex_unlock(&iter->ops->func_hash->regex_lock);
+1 -1
kernel/trace/ring_buffer.c
··· 7666 7666 rb_test_started = true; 7667 7667 7668 7668 set_current_state(TASK_INTERRUPTIBLE); 7669 - /* Just run for 10 seconds */; 7669 + /* Just run for 10 seconds */ 7670 7670 schedule_timeout(10 * HZ); 7671 7671 7672 7672 kthread_stop(rb_hammer);
+14 -8
kernel/trace/trace.c
··· 1816 1816 1817 1817 ret = get_user(ch, ubuf++); 1818 1818 if (ret) 1819 - return ret; 1819 + goto fail; 1820 1820 1821 1821 read++; 1822 1822 cnt--; ··· 1830 1830 while (cnt && isspace(ch)) { 1831 1831 ret = get_user(ch, ubuf++); 1832 1832 if (ret) 1833 - return ret; 1833 + goto fail; 1834 1834 read++; 1835 1835 cnt--; 1836 1836 } ··· 1848 1848 while (cnt && !isspace(ch) && ch) { 1849 1849 if (parser->idx < parser->size - 1) 1850 1850 parser->buffer[parser->idx++] = ch; 1851 - else 1852 - return -EINVAL; 1851 + else { 1852 + ret = -EINVAL; 1853 + goto fail; 1854 + } 1853 1855 1854 1856 ret = get_user(ch, ubuf++); 1855 1857 if (ret) 1856 - return ret; 1858 + goto fail; 1857 1859 read++; 1858 1860 cnt--; 1859 1861 } ··· 1870 1868 /* Make sure the parsed string always terminates with '\0'. */ 1871 1869 parser->buffer[parser->idx] = 0; 1872 1870 } else { 1873 - return -EINVAL; 1871 + ret = -EINVAL; 1872 + goto fail; 1874 1873 } 1875 1874 1876 1875 *ppos += read; 1877 1876 return read; 1877 + fail: 1878 + trace_parser_fail(parser); 1879 + return ret; 1878 1880 } 1879 1881 1880 1882 /* TODO add a seq_buf_to_buffer() */ ··· 10638 10632 ret = print_trace_line(&iter); 10639 10633 if (ret != TRACE_TYPE_NO_CONSUME) 10640 10634 trace_consume(&iter); 10635 + 10636 + trace_printk_seq(&iter.seq); 10641 10637 } 10642 10638 touch_nmi_watchdog(); 10643 - 10644 - trace_printk_seq(&iter.seq); 10645 10639 } 10646 10640 10647 10641 if (!cnt)
+8 -2
kernel/trace/trace.h
··· 1292 1292 */ 1293 1293 struct trace_parser { 1294 1294 bool cont; 1295 + bool fail; 1295 1296 char *buffer; 1296 1297 unsigned idx; 1297 1298 unsigned size; ··· 1300 1299 1301 1300 static inline bool trace_parser_loaded(struct trace_parser *parser) 1302 1301 { 1303 - return (parser->idx != 0); 1302 + return !parser->fail && parser->idx != 0; 1304 1303 } 1305 1304 1306 1305 static inline bool trace_parser_cont(struct trace_parser *parser) ··· 1312 1311 { 1313 1312 parser->cont = false; 1314 1313 parser->idx = 0; 1314 + } 1315 + 1316 + static inline void trace_parser_fail(struct trace_parser *parser) 1317 + { 1318 + parser->fail = true; 1315 1319 } 1316 1320 1317 1321 extern int trace_parser_get_init(struct trace_parser *parser, int size); ··· 2210 2204 static inline void sanitize_event_name(char *name) 2211 2205 { 2212 2206 while (*name++ != '\0') 2213 - if (*name == ':' || *name == '.') 2207 + if (*name == ':' || *name == '.' || *name == '*') 2214 2208 *name = '_'; 2215 2209 } 2216 2210
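The trace.h hunk extends `sanitize_event_name()` to also map `'*'` to `'_'`. A simplified variant of that character-mapping loop (note: the kernel's `while (*name++ != '\0')` form begins checking at the second character; this sketch checks every character):

```c
#include <assert.h>
#include <string.h>

/* Replace separator/glob characters that would confuse later parsing,
 * in the spirit of sanitize_event_name() after this change. */
static void sanitize_name(char *name)
{
	for (; *name; name++)
		if (*name == ':' || *name == '.' || *name == '*')
			*name = '_';
}
```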
+16 -6
kernel/trace/trace_functions_graph.c
··· 27 27 unsigned long enter_funcs[FTRACE_RETFUNC_DEPTH]; 28 28 }; 29 29 30 + struct fgraph_ent_args { 31 + struct ftrace_graph_ent_entry ent; 32 + /* Force the sizeof of args[] to have FTRACE_REGS_MAX_ARGS entries */ 33 + unsigned long args[FTRACE_REGS_MAX_ARGS]; 34 + }; 35 + 30 36 struct fgraph_data { 31 37 struct fgraph_cpu_data __percpu *cpu_data; 32 38 33 39 /* Place to preserve last processed entry. */ 34 40 union { 35 - struct ftrace_graph_ent_entry ent; 41 + struct fgraph_ent_args ent; 42 + /* TODO allow retaddr to have args */ 36 43 struct fgraph_retaddr_ent_entry rent; 37 - } ent; 44 + }; 38 45 struct ftrace_graph_ret_entry ret; 39 46 int failed; 40 47 int cpu; ··· 634 627 * Save current and next entries for later reference 635 628 * if the output fails. 636 629 */ 637 - if (unlikely(curr->ent.type == TRACE_GRAPH_RETADDR_ENT)) 638 - data->ent.rent = *(struct fgraph_retaddr_ent_entry *)curr; 639 - else 640 - data->ent.ent = *curr; 630 + if (unlikely(curr->ent.type == TRACE_GRAPH_RETADDR_ENT)) { 631 + data->rent = *(struct fgraph_retaddr_ent_entry *)curr; 632 + } else { 633 + int size = min((int)sizeof(data->ent), (int)iter->ent_size); 634 + 635 + memcpy(&data->ent, curr, size); 636 + } 641 637 /* 642 638 * If the next event is not a return type, then 643 639 * we only care about what type it is. Otherwise we can
+5 -5
lib/crypto/Kconfig
··· 140 140 config CRYPTO_LIB_SHA1 141 141 tristate 142 142 help 143 - The SHA-1 library functions. Select this if your module uses any of 144 - the functions from <crypto/sha1.h>. 143 + The SHA-1 and HMAC-SHA1 library functions. Select this if your module 144 + uses any of the functions from <crypto/sha1.h>. 145 145 146 146 config CRYPTO_LIB_SHA1_ARCH 147 147 bool ··· 157 157 config CRYPTO_LIB_SHA256 158 158 tristate 159 159 help 160 - Enable the SHA-256 library interface. This interface may be fulfilled 161 - by either the generic implementation or an arch-specific one, if one 162 - is available and enabled. 160 + The SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256 library functions. 161 + Select this if your module uses any of these functions from 162 + <crypto/sha2.h>. 163 163 164 164 config CRYPTO_LIB_SHA256_ARCH 165 165 bool
+4 -4
lib/crypto/Makefile
··· 100 100 libsha256-y += arm/sha256-ce.o arm/sha256-core.o 101 101 $(obj)/arm/sha256-core.S: $(src)/arm/sha256-armv4.pl 102 102 $(call cmd,perlasm) 103 - clean-files += arm/sha256-core.S 104 103 AFLAGS_arm/sha256-core.o += $(aflags-thumb2-y) 105 104 endif 106 105 ··· 107 108 libsha256-y += arm64/sha256-core.o 108 109 $(obj)/arm64/sha256-core.S: $(src)/arm64/sha2-armv8.pl 109 110 $(call cmd,perlasm_with_args) 110 - clean-files += arm64/sha256-core.S 111 111 libsha256-$(CONFIG_KERNEL_MODE_NEON) += arm64/sha256-ce.o 112 112 endif 113 113 ··· 130 132 libsha512-y += arm/sha512-core.o 131 133 $(obj)/arm/sha512-core.S: $(src)/arm/sha512-armv4.pl 132 134 $(call cmd,perlasm) 133 - clean-files += arm/sha512-core.S 134 135 AFLAGS_arm/sha512-core.o += $(aflags-thumb2-y) 135 136 endif 136 137 ··· 137 140 libsha512-y += arm64/sha512-core.o 138 141 $(obj)/arm64/sha512-core.S: $(src)/arm64/sha2-armv8.pl 139 142 $(call cmd,perlasm_with_args) 140 - clean-files += arm64/sha512-core.S 141 143 libsha512-$(CONFIG_KERNEL_MODE_NEON) += arm64/sha512-ce-core.o 142 144 endif 143 145 ··· 163 167 obj-$(CONFIG_RISCV) += riscv/ 164 168 obj-$(CONFIG_S390) += s390/ 165 169 obj-$(CONFIG_X86) += x86/ 170 + 171 + # clean-files must be defined unconditionally 172 + clean-files += arm/sha256-core.S arm/sha512-core.S 173 + clean-files += arm64/sha256-core.S arm64/sha512-core.S
+6
mm/balloon_compaction.c
··· 254 254 .putback_page = balloon_page_putback, 255 255 }; 256 256 257 + static int __init balloon_init(void) 258 + { 259 + return set_movable_ops(&balloon_mops, PGTY_offline); 260 + } 261 + core_initcall(balloon_init); 262 + 257 263 #endif /* CONFIG_BALLOON_COMPACTION */
+14 -1
mm/damon/core.c
··· 845 845 return NULL; 846 846 } 847 847 848 + static struct damos_filter *damos_nth_ops_filter(int n, struct damos *s) 849 + { 850 + struct damos_filter *filter; 851 + int i = 0; 852 + 853 + damos_for_each_ops_filter(filter, s) { 854 + if (i++ == n) 855 + return filter; 856 + } 857 + return NULL; 858 + } 859 + 848 860 static void damos_commit_filter_arg( 849 861 struct damos_filter *dst, struct damos_filter *src) 850 862 { ··· 883 871 { 884 872 dst->type = src->type; 885 873 dst->matching = src->matching; 874 + dst->allow = src->allow; 886 875 damos_commit_filter_arg(dst, src); 887 876 } 888 877 ··· 921 908 int i = 0, j = 0; 922 909 923 910 damos_for_each_ops_filter_safe(dst_filter, next, dst) { 924 - src_filter = damos_nth_filter(i++, src); 911 + src_filter = damos_nth_ops_filter(i++, src); 925 912 if (src_filter) 926 913 damos_commit_filter(dst_filter, src_filter); 927 914 else
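The new `damos_nth_ops_filter()` above is the classic "walk a list counting matches until the n-th" helper. A generic sketch of the same shape over a plain singly linked list (the node type and names here are illustrative, not DAMON's):

```c
#include <assert.h>
#include <stddef.h>

struct node {
	int val;
	struct node *next;
};

/* Return the n-th node (0-based), or NULL if the list is shorter. */
static struct node *nth_node(int n, struct node *head)
{
	struct node *cur;
	int i = 0;

	for (cur = head; cur; cur = cur->next)
		if (i++ == n)
			return cur;
	return NULL;
}
```

Returning NULL past the end is what lets the caller in `core.c` distinguish "commit an updated filter" from "destroy the leftover destination filter".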
+1 -1
mm/damon/sysfs-schemes.c
··· 2158 2158 { 2159 2159 damon_sysfs_access_pattern_rm_dirs(scheme->access_pattern); 2160 2160 kobject_put(&scheme->access_pattern->kobj); 2161 - kobject_put(&scheme->dests->kobj); 2162 2161 damos_sysfs_dests_rm_dirs(scheme->dests); 2162 + kobject_put(&scheme->dests->kobj); 2163 2163 damon_sysfs_quotas_rm_dirs(scheme->quotas); 2164 2164 kobject_put(&scheme->quotas->kobj); 2165 2165 kobject_put(&scheme->watermarks->kobj);
+7 -2
mm/debug_vm_pgtable.c
··· 990 990 991 991 /* Free page table entries */ 992 992 if (args->start_ptep) { 993 + pmd_clear(args->pmdp); 993 994 pte_free(args->mm, args->start_ptep); 994 995 mm_dec_nr_ptes(args->mm); 995 996 } 996 997 997 998 if (args->start_pmdp) { 999 + pud_clear(args->pudp); 998 1000 pmd_free(args->mm, args->start_pmdp); 999 1001 mm_dec_nr_pmds(args->mm); 1000 1002 } 1001 1003 1002 1004 if (args->start_pudp) { 1005 + p4d_clear(args->p4dp); 1003 1006 pud_free(args->mm, args->start_pudp); 1004 1007 mm_dec_nr_puds(args->mm); 1005 1008 } 1006 1009 1007 - if (args->start_p4dp) 1010 + if (args->start_p4dp) { 1011 + pgd_clear(args->pgdp); 1008 1012 p4d_free(args->mm, args->start_p4dp); 1013 + } 1009 1014 1010 1015 /* Free vma and mm struct */ 1011 1016 if (args->vma) 1012 1017 vm_area_free(args->vma); 1013 1018 1014 1019 if (args->mm) 1015 - mmdrop(args->mm); 1020 + mmput(args->mm); 1016 1021 } 1017 1022 1018 1023 static struct page * __init
+8
mm/memory-failure.c
··· 853 853 #define hwpoison_hugetlb_range NULL 854 854 #endif 855 855 856 + static int hwpoison_test_walk(unsigned long start, unsigned long end, 857 + struct mm_walk *walk) 858 + { 859 + /* We also want to consider pages mapped into VM_PFNMAP. */ 860 + return 0; 861 + } 862 + 856 863 static const struct mm_walk_ops hwpoison_walk_ops = { 857 864 .pmd_entry = hwpoison_pte_range, 858 865 .hugetlb_entry = hwpoison_hugetlb_range, 866 + .test_walk = hwpoison_test_walk, 859 867 .walk_lock = PGWALK_RDLOCK, 860 868 }; 861 869
+30 -8
mm/migrate.c
··· 43 43 #include <linux/sched/sysctl.h> 44 44 #include <linux/memory-tiers.h> 45 45 #include <linux/pagewalk.h> 46 - #include <linux/balloon_compaction.h> 47 - #include <linux/zsmalloc.h> 48 46 49 47 #include <asm/tlbflush.h> 50 48 ··· 50 52 51 53 #include "internal.h" 52 54 #include "swap.h" 55 + 56 + static const struct movable_operations *offline_movable_ops; 57 + static const struct movable_operations *zsmalloc_movable_ops; 58 + 59 + int set_movable_ops(const struct movable_operations *ops, enum pagetype type) 60 + { 61 + /* 62 + * We only allow for selected types and don't handle concurrent 63 + * registration attempts yet. 64 + */ 65 + switch (type) { 66 + case PGTY_offline: 67 + if (offline_movable_ops && ops) 68 + return -EBUSY; 69 + offline_movable_ops = ops; 70 + break; 71 + case PGTY_zsmalloc: 72 + if (zsmalloc_movable_ops && ops) 73 + return -EBUSY; 74 + zsmalloc_movable_ops = ops; 75 + break; 76 + default: 77 + return -EINVAL; 78 + } 79 + return 0; 80 + } 81 + EXPORT_SYMBOL_GPL(set_movable_ops); 53 82 54 83 static const struct movable_operations *page_movable_ops(struct page *page) 55 84 { ··· 87 62 * it as movable, the page type must be sticky until the page gets freed 88 63 * back to the buddy. 89 64 */ 90 - #ifdef CONFIG_BALLOON_COMPACTION 91 65 if (PageOffline(page)) 92 66 /* Only balloon compaction sets PageOffline pages movable. */ 93 - return &balloon_mops; 94 - #endif /* CONFIG_BALLOON_COMPACTION */ 95 - #if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) 67 + return offline_movable_ops; 96 68 if (PageZsmalloc(page)) 97 - return &zsmalloc_mops; 98 - #endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */ 69 + return zsmalloc_movable_ops; 70 + 99 71 return NULL; 100 72 } 101 73
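The migrate.c hunk replaces compile-time `#ifdef` dispatch with a small runtime registry: `set_movable_ops()` stores one ops pointer per page type, refuses to overwrite a live registration, and accepts NULL to unregister. A minimal model of that registry (all names and error constants below are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

#define MY_EBUSY  16
#define MY_EINVAL 22

enum slot_type { SLOT_OFFLINE, SLOT_ZSMALLOC, SLOT_MAX };

static const void *slots[SLOT_MAX];

/* Register ops for a type; NULL unregisters. Like set_movable_ops(),
 * concurrent registration is not handled. */
static int set_slot_ops(const void *ops, enum slot_type type)
{
	if (type >= SLOT_MAX)
		return -MY_EINVAL;
	if (slots[type] && ops)
		return -MY_EBUSY;	/* already registered */
	slots[type] = ops;
	return 0;
}
```

The payoff, as in the kernel change, is that lookup sites (`page_movable_ops()`) become branch-free reads of the registry instead of `#ifdef` blocks.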
+47 -35
mm/mremap.c
··· 323 323 } 324 324 #endif 325 325 326 + static inline bool uffd_supports_page_table_move(struct pagetable_move_control *pmc) 327 + { 328 + /* 329 + * If we are moving a VMA that has uffd-wp registered but with 330 + * remap events disabled (new VMA will not be registered with uffd), we 331 + * need to ensure that the uffd-wp state is cleared from all pgtables. 332 + * This means recursing into lower page tables in move_page_tables(). 333 + * 334 + * We might get called with VMAs reversed when recovering from a 335 + * failed page table move. In that case, the 336 + * "old"-but-actually-"originally new" VMA during recovery will not have 337 + * a uffd context. Recursing into lower page tables during the original 338 + * move but not during the recovery move will cause trouble, because we 339 + * run into already-existing page tables. So check both VMAs. 340 + */ 341 + return !vma_has_uffd_without_event_remap(pmc->old) && 342 + !vma_has_uffd_without_event_remap(pmc->new); 343 + } 344 + 326 345 #ifdef CONFIG_HAVE_MOVE_PMD 327 346 static bool move_normal_pmd(struct pagetable_move_control *pmc, 328 347 pmd_t *old_pmd, pmd_t *new_pmd) ··· 353 334 pmd_t pmd; 354 335 355 336 if (!arch_supports_page_table_move()) 337 + return false; 338 + if (!uffd_supports_page_table_move(pmc)) 356 339 return false; 357 340 /* 358 341 * The destination pmd shouldn't be established, free_pgtables() ··· 380 359 * this point, and verify that it really is empty. We'll see. 381 360 */ 382 361 if (WARN_ON_ONCE(!pmd_none(*new_pmd))) 383 - return false; 384 - 385 - /* If this pmd belongs to a uffd vma with remap events disabled, we need 386 - * to ensure that the uffd-wp state is cleared from all pgtables. This 387 - * means recursing into lower page tables in move_page_tables(), and we 388 - * can reuse the existing code if we simply treat the entry as "not 389 - * moved". 
390 - */ 391 - if (vma_has_uffd_without_event_remap(vma)) 392 362 return false; 393 363 394 364 /* ··· 430 418 431 419 if (!arch_supports_page_table_move()) 432 420 return false; 421 + if (!uffd_supports_page_table_move(pmc)) 422 + return false; 433 423 /* 434 424 * The destination pud shouldn't be established, free_pgtables() 435 425 * should have released it. 436 426 */ 437 427 if (WARN_ON_ONCE(!pud_none(*new_pud))) 438 - return false; 439 - 440 - /* If this pud belongs to a uffd vma with remap events disabled, we need 441 - * to ensure that the uffd-wp state is cleared from all pgtables. This 442 - * means recursing into lower page tables in move_page_tables(), and we 443 - * can reuse the existing code if we simply treat the entry as "not 444 - * moved". 445 - */ 446 - if (vma_has_uffd_without_event_remap(vma)) 447 428 return false; 448 429 449 430 /* ··· 1625 1620 1626 1621 static bool vma_multi_allowed(struct vm_area_struct *vma) 1627 1622 { 1628 - struct file *file; 1623 + struct file *file = vma->vm_file; 1629 1624 1630 1625 /* 1631 1626 * We can't support moving multiple uffd VMAs as notify requires ··· 1638 1633 * Custom get unmapped area might result in MREMAP_FIXED not 1639 1634 * being obeyed. 1640 1635 */ 1641 - file = vma->vm_file; 1642 - if (file && !vma_is_shmem(vma) && !is_vm_hugetlb_page(vma)) { 1643 - const struct file_operations *fop = file->f_op; 1636 + if (!file || !file->f_op->get_unmapped_area) 1637 + return true; 1638 + /* Known good. 
*/ 1639 + if (vma_is_shmem(vma)) 1640 + return true; 1641 + if (is_vm_hugetlb_page(vma)) 1642 + return true; 1643 + if (file->f_op->get_unmapped_area == thp_get_unmapped_area) 1644 + return true; 1644 1645 1645 - if (fop->get_unmapped_area) 1646 - return false; 1647 - } 1648 - 1649 - return true; 1646 + return false; 1650 1647 } 1651 1648 1652 1649 static int check_prep_vma(struct vma_remap_struct *vrm) ··· 1825 1818 unsigned long start = vrm->addr; 1826 1819 unsigned long end = vrm->addr + vrm->old_len; 1827 1820 unsigned long new_addr = vrm->new_addr; 1828 - bool allowed = true, seen_vma = false; 1829 1821 unsigned long target_addr = new_addr; 1830 1822 unsigned long res = -EFAULT; 1831 1823 unsigned long last_end; 1824 + bool seen_vma = false; 1825 + 1832 1826 VMA_ITERATOR(vmi, current->mm, start); 1833 1827 1834 1828 /* ··· 1842 1834 unsigned long addr = max(vma->vm_start, start); 1843 1835 unsigned long len = min(end, vma->vm_end) - addr; 1844 1836 unsigned long offset, res_vma; 1845 - 1846 - if (!allowed) 1847 - return -EFAULT; 1837 + bool multi_allowed; 1848 1838 1849 1839 /* No gap permitted at the start of the range. */ 1850 1840 if (!seen_vma && start < vma->vm_start) ··· 1871 1865 vrm->new_addr = target_addr + offset; 1872 1866 vrm->old_len = vrm->new_len = len; 1873 1867 1874 - allowed = vma_multi_allowed(vma); 1875 - if (seen_vma && !allowed) 1876 - return -EFAULT; 1868 + multi_allowed = vma_multi_allowed(vma); 1869 + if (!multi_allowed) { 1870 + /* This is not the first VMA, abort immediately. */ 1871 + if (seen_vma) 1872 + return -EFAULT; 1873 + /* This is the first, but there are more, abort. 
*/ 1874 + if (vma->vm_end < end) 1875 + return -EFAULT; 1876 + } 1877 1877 1878 1878 res_vma = check_prep_vma(vrm); 1879 1879 if (!res_vma) ··· 1888 1876 return res_vma; 1889 1877 1890 1878 if (!seen_vma) { 1891 - VM_WARN_ON_ONCE(allowed && res_vma != new_addr); 1879 + VM_WARN_ON_ONCE(multi_allowed && res_vma != new_addr); 1892 1880 res = res_vma; 1893 1881 } 1894 1882
+2 -2
mm/vmscan.c
··· 5772 5772 if (sysfs_create_group(mm_kobj, &lru_gen_attr_group)) 5773 5773 pr_err("lru_gen: failed to create sysfs group\n"); 5774 5774 5775 - debugfs_create_file_aux_num("lru_gen", 0644, NULL, NULL, 1, 5775 + debugfs_create_file_aux_num("lru_gen", 0644, NULL, NULL, false, 5776 5776 &lru_gen_rw_fops); 5777 - debugfs_create_file_aux_num("lru_gen_full", 0444, NULL, NULL, 0, 5777 + debugfs_create_file_aux_num("lru_gen_full", 0444, NULL, NULL, true, 5778 5778 &lru_gen_ro_fops); 5779 5779 5780 5780 return 0;
+10
mm/zsmalloc.c
··· 2246 2246 2247 2247 static int __init zs_init(void) 2248 2248 { 2249 + int rc __maybe_unused; 2250 + 2249 2251 #ifdef CONFIG_ZPOOL 2250 2252 zpool_register_driver(&zs_zpool_driver); 2253 + #endif 2254 + #ifdef CONFIG_COMPACTION 2255 + rc = set_movable_ops(&zsmalloc_mops, PGTY_zsmalloc); 2256 + if (rc) 2257 + return rc; 2251 2258 #endif 2252 2259 zs_stat_init(); 2253 2260 return 0; ··· 2264 2257 { 2265 2258 #ifdef CONFIG_ZPOOL 2266 2259 zpool_unregister_driver(&zs_zpool_driver); 2260 + #endif 2261 + #ifdef CONFIG_COMPACTION 2262 + set_movable_ops(NULL, PGTY_zsmalloc); 2267 2263 #endif 2268 2264 zs_stat_exit(); 2269 2265 }
+14 -3
net/bluetooth/hci_conn.c
··· 339 339 case BT_CODEC_TRANSPARENT: 340 340 if (!find_next_esco_param(conn, esco_param_msbc, 341 341 ARRAY_SIZE(esco_param_msbc))) 342 - return false; 342 + return -EINVAL; 343 + 343 344 param = &esco_param_msbc[conn->attempt - 1]; 344 345 cp.tx_coding_format.id = 0x03; 345 346 cp.rx_coding_format.id = 0x03; ··· 831 830 /* Check if ISO connection is a BIS and terminate advertising 832 831 * set and BIG if there are no other connections using it. 833 832 */ 834 - bis = hci_conn_hash_lookup_big(hdev, conn->iso_qos.bcast.big); 833 + bis = hci_conn_hash_lookup_big_state(hdev, 834 + conn->iso_qos.bcast.big, 835 + BT_CONNECTED, 836 + HCI_ROLE_MASTER); 837 + if (bis) 838 + return; 839 + 840 + bis = hci_conn_hash_lookup_big_state(hdev, 841 + conn->iso_qos.bcast.big, 842 + BT_CONNECT, 843 + HCI_ROLE_MASTER); 835 844 if (bis) 836 845 return; 837 846 ··· 2260 2249 * the start periodic advertising and create BIG commands have 2261 2250 * been queued 2262 2251 */ 2263 - hci_conn_hash_list_state(hdev, bis_mark_per_adv, PA_LINK, 2252 + hci_conn_hash_list_state(hdev, bis_mark_per_adv, BIS_LINK, 2264 2253 BT_BOUND, &data); 2265 2254 2266 2255 /* Queue start periodic advertising and create BIG */
+10 -5
net/bluetooth/hci_event.c
··· 6745 6745 qos->ucast.out.latency = 6746 6746 DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency), 6747 6747 1000); 6748 - qos->ucast.in.sdu = le16_to_cpu(ev->c_mtu); 6749 - qos->ucast.out.sdu = le16_to_cpu(ev->p_mtu); 6748 + qos->ucast.in.sdu = ev->c_bn ? le16_to_cpu(ev->c_mtu) : 0; 6749 + qos->ucast.out.sdu = ev->p_bn ? le16_to_cpu(ev->p_mtu) : 0; 6750 6750 qos->ucast.in.phy = ev->c_phy; 6751 6751 qos->ucast.out.phy = ev->p_phy; 6752 6752 break; ··· 6760 6760 qos->ucast.in.latency = 6761 6761 DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency), 6762 6762 1000); 6763 - qos->ucast.out.sdu = le16_to_cpu(ev->c_mtu); 6764 - qos->ucast.in.sdu = le16_to_cpu(ev->p_mtu); 6763 + qos->ucast.out.sdu = ev->c_bn ? le16_to_cpu(ev->c_mtu) : 0; 6764 + qos->ucast.in.sdu = ev->p_bn ? le16_to_cpu(ev->p_mtu) : 0; 6765 6765 qos->ucast.out.phy = ev->c_phy; 6766 6766 qos->ucast.in.phy = ev->p_phy; 6767 6767 break; ··· 6957 6957 continue; 6958 6958 } 6959 6959 6960 - if (ev->status != 0x42) 6960 + if (ev->status != 0x42) { 6961 6961 /* Mark PA sync as established */ 6962 6962 set_bit(HCI_CONN_PA_SYNC, &bis->flags); 6963 + /* Reset cleanup callback of PA Sync so it doesn't 6964 + * terminate the sync when deleting the connection. 6965 + */ 6966 + conn->cleanup = NULL; 6967 + } 6963 6968 6964 6969 bis->sync_handle = conn->sync_handle; 6965 6970 bis->iso_qos.bcast.big = ev->handle;
+16 -9
net/bluetooth/hci_sync.c
··· 3344 3344 * advertising data. This also applies to the case 3345 3345 * where BR/EDR was toggled during the AUTO_OFF phase. 3346 3346 */ 3347 - if (hci_dev_test_flag(hdev, HCI_ADVERTISING) || 3347 + if (hci_dev_test_flag(hdev, HCI_ADVERTISING) && 3348 3348 list_empty(&hdev->adv_instances)) { 3349 3349 if (ext_adv_capable(hdev)) { 3350 3350 err = hci_setup_ext_adv_instance_sync(hdev, 0x00); ··· 4531 4531 { 4532 4532 struct hci_cp_le_set_host_feature cp; 4533 4533 4534 - if (!cis_capable(hdev)) 4534 + if (!iso_capable(hdev)) 4535 4535 return 0; 4536 4536 4537 4537 memset(&cp, 0, sizeof(cp)); 4538 4538 4539 4539 /* Connected Isochronous Channels (Host Support) */ 4540 4540 cp.bit_number = 32; 4541 - cp.bit_value = 1; 4541 + cp.bit_value = iso_enabled(hdev) ? 0x01 : 0x00; 4542 4542 4543 4543 return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_HOST_FEATURE, 4544 4544 sizeof(cp), &cp, HCI_CMD_TIMEOUT); ··· 6985 6985 6986 6986 hci_dev_lock(hdev); 6987 6987 6988 - hci_dev_clear_flag(hdev, HCI_PA_SYNC); 6989 - 6990 6988 if (!hci_conn_valid(hdev, conn)) 6991 6989 clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags); 6992 6990 ··· 7045 7047 /* SID has not been set listen for HCI_EV_LE_EXT_ADV_REPORT to update 7046 7048 * it. 
7047 7049 */ 7048 - if (conn->sid == HCI_SID_INVALID) 7049 - __hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL, 7050 - HCI_EV_LE_EXT_ADV_REPORT, 7051 - conn->conn_timeout, NULL); 7050 + if (conn->sid == HCI_SID_INVALID) { 7051 + err = __hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL, 7052 + HCI_EV_LE_EXT_ADV_REPORT, 7053 + conn->conn_timeout, NULL); 7054 + if (err == -ETIMEDOUT) 7055 + goto done; 7056 + } 7052 7057 7053 7058 memset(&cp, 0, sizeof(cp)); 7054 7059 cp.options = qos->bcast.options; ··· 7080 7079 if (err == -ETIMEDOUT) 7081 7080 __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC_CANCEL, 7082 7081 0, NULL, HCI_CMD_TIMEOUT); 7082 + 7083 + done: 7084 + hci_dev_clear_flag(hdev, HCI_PA_SYNC); 7085 + 7086 + /* Update passive scan since HCI_PA_SYNC flag has been cleared */ 7087 + hci_update_passive_scan_sync(hdev); 7083 7088 7084 7089 return err; 7085 7090 }
+8 -8
net/bluetooth/iso.c
··· 1347 1347 bacpy(&sa->iso_bdaddr, &iso_pi(sk)->dst); 1348 1348 sa->iso_bdaddr_type = iso_pi(sk)->dst_type; 1349 1349 1350 - if (hcon && hcon->type == BIS_LINK) { 1350 + if (hcon && (hcon->type == BIS_LINK || hcon->type == PA_LINK)) { 1351 1351 sa->iso_bc->bc_sid = iso_pi(sk)->bc_sid; 1352 1352 sa->iso_bc->bc_num_bis = iso_pi(sk)->bc_num_bis; 1353 1353 memcpy(sa->iso_bc->bc_bis, iso_pi(sk)->bc_bis, ··· 2483 2483 .create = iso_sock_create, 2484 2484 }; 2485 2485 2486 - static bool iso_inited; 2486 + static bool inited; 2487 2487 2488 - bool iso_enabled(void) 2488 + bool iso_inited(void) 2489 2489 { 2490 - return iso_inited; 2490 + return inited; 2491 2491 } 2492 2492 2493 2493 int iso_init(void) ··· 2496 2496 2497 2497 BUILD_BUG_ON(sizeof(struct sockaddr_iso) > sizeof(struct sockaddr)); 2498 2498 2499 - if (iso_inited) 2499 + if (inited) 2500 2500 return -EALREADY; 2501 2501 2502 2502 err = proto_register(&iso_proto, 0); ··· 2524 2524 iso_debugfs = debugfs_create_file("iso", 0444, bt_debugfs, 2525 2525 NULL, &iso_debugfs_fops); 2526 2526 2527 - iso_inited = true; 2527 + inited = true; 2528 2528 2529 2529 return 0; 2530 2530 ··· 2535 2535 2536 2536 int iso_exit(void) 2537 2537 { 2538 - if (!iso_inited) 2538 + if (!inited) 2539 2539 return -EALREADY; 2540 2540 2541 2541 bt_procfs_cleanup(&init_net, "iso"); ··· 2549 2549 2550 2550 proto_unregister(&iso_proto); 2551 2551 2552 - iso_inited = false; 2552 + inited = false; 2553 2553 2554 2554 return 0; 2555 2555 }
+6 -6
net/bluetooth/mgmt.c
··· 922 922 if (hci_dev_test_flag(hdev, HCI_WIDEBAND_SPEECH_ENABLED)) 923 923 settings |= MGMT_SETTING_WIDEBAND_SPEECH; 924 924 925 - if (cis_central_capable(hdev)) 925 + if (cis_central_enabled(hdev)) 926 926 settings |= MGMT_SETTING_CIS_CENTRAL; 927 927 928 - if (cis_peripheral_capable(hdev)) 928 + if (cis_peripheral_enabled(hdev)) 929 929 settings |= MGMT_SETTING_CIS_PERIPHERAL; 930 930 931 - if (bis_capable(hdev)) 931 + if (bis_enabled(hdev)) 932 932 settings |= MGMT_SETTING_ISO_BROADCASTER; 933 933 934 - if (sync_recv_capable(hdev)) 934 + if (sync_recv_enabled(hdev)) 935 935 settings |= MGMT_SETTING_ISO_SYNC_RECEIVER; 936 936 937 - if (ll_privacy_capable(hdev)) 937 + if (ll_privacy_enabled(hdev)) 938 938 settings |= MGMT_SETTING_LL_PRIVACY; 939 939 940 940 return settings; ··· 4513 4513 } 4514 4514 4515 4515 if (IS_ENABLED(CONFIG_BT_LE)) { 4516 - flags = iso_enabled() ? BIT(0) : 0; 4516 + flags = iso_inited() ? BIT(0) : 0; 4517 4517 memcpy(rp->features[idx].uuid, iso_socket_uuid, 16); 4518 4518 rp->features[idx].flags = cpu_to_le32(flags); 4519 4519 idx++;
+16
net/bridge/br_multicast.c
··· 4818 4818 intvl_jiffies = BR_MULTICAST_QUERY_INTVL_MIN; 4819 4819 } 4820 4820 4821 + if (intvl_jiffies > BR_MULTICAST_QUERY_INTVL_MAX) { 4822 + br_info(brmctx->br, 4823 + "trying to set multicast query interval above maximum, setting to %lu (%ums)\n", 4824 + jiffies_to_clock_t(BR_MULTICAST_QUERY_INTVL_MAX), 4825 + jiffies_to_msecs(BR_MULTICAST_QUERY_INTVL_MAX)); 4826 + intvl_jiffies = BR_MULTICAST_QUERY_INTVL_MAX; 4827 + } 4828 + 4821 4829 brmctx->multicast_query_interval = intvl_jiffies; 4822 4830 } 4823 4831 ··· 4840 4832 jiffies_to_clock_t(BR_MULTICAST_STARTUP_QUERY_INTVL_MIN), 4841 4833 jiffies_to_msecs(BR_MULTICAST_STARTUP_QUERY_INTVL_MIN)); 4842 4834 intvl_jiffies = BR_MULTICAST_STARTUP_QUERY_INTVL_MIN; 4835 + } 4836 + 4837 + if (intvl_jiffies > BR_MULTICAST_STARTUP_QUERY_INTVL_MAX) { 4838 + br_info(brmctx->br, 4839 + "trying to set multicast startup query interval above maximum, setting to %lu (%ums)\n", 4840 + jiffies_to_clock_t(BR_MULTICAST_STARTUP_QUERY_INTVL_MAX), 4841 + jiffies_to_msecs(BR_MULTICAST_STARTUP_QUERY_INTVL_MAX)); 4842 + intvl_jiffies = BR_MULTICAST_STARTUP_QUERY_INTVL_MAX; 4843 4843 } 4844 4844 4845 4845 brmctx->multicast_startup_query_interval = intvl_jiffies;
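The br_multicast.c hunk completes a clamp: the query intervals were already raised to a minimum, and now they are also lowered to a 24-hour maximum, warning either way. The underlying pattern, stripped of the jiffies conversions and logging (bounds here are arbitrary examples):

```c
#include <assert.h>

/* Clamp a configured interval into [min, max], as the bridge now does
 * for multicast_query_interval and multicast_startup_query_interval. */
static unsigned long clamp_interval(unsigned long val,
				    unsigned long min,
				    unsigned long max)
{
	if (val < min)
		return min;	/* too small: raise to the minimum */
	if (val > max)
		return max;	/* too large: lower to the maximum */
	return val;
}
```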
+2
net/bridge/br_private.h
··· 31 31 #define BR_MULTICAST_DEFAULT_HASH_MAX 4096 32 32 #define BR_MULTICAST_QUERY_INTVL_MIN msecs_to_jiffies(1000) 33 33 #define BR_MULTICAST_STARTUP_QUERY_INTVL_MIN BR_MULTICAST_QUERY_INTVL_MIN 34 + #define BR_MULTICAST_QUERY_INTVL_MAX msecs_to_jiffies(86400000) /* 24 hours */ 35 + #define BR_MULTICAST_STARTUP_QUERY_INTVL_MAX BR_MULTICAST_QUERY_INTVL_MAX 34 36 35 37 #define BR_HWDOM_MAX BITS_PER_LONG 36 38
+12
net/core/dev.c
··· 3779 3779 features &= ~NETIF_F_TSO_MANGLEID; 3780 3780 } 3781 3781 3782 + /* NETIF_F_IPV6_CSUM does not support IPv6 extension headers, 3783 + * so neither does TSO that depends on it. 3784 + */ 3785 + if (features & NETIF_F_IPV6_CSUM && 3786 + (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6 || 3787 + (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && 3788 + vlan_get_protocol(skb) == htons(ETH_P_IPV6))) && 3789 + skb_transport_header_was_set(skb) && 3790 + skb_network_header_len(skb) != sizeof(struct ipv6hdr) && 3791 + !ipv6_has_hopopt_jumbo(skb)) 3792 + features &= ~(NETIF_F_IPV6_CSUM | NETIF_F_TSO6 | NETIF_F_GSO_UDP_L4); 3793 + 3782 3794 return features; 3783 3795 } 3784 3796
+7 -1
net/hsr/hsr_slave.c
··· 63 63 skb_push(skb, ETH_HLEN); 64 64 skb_reset_mac_header(skb); 65 65 if ((!hsr->prot_version && protocol == htons(ETH_P_PRP)) || 66 - protocol == htons(ETH_P_HSR)) 66 + protocol == htons(ETH_P_HSR)) { 67 + if (!pskb_may_pull(skb, ETH_HLEN + HSR_HLEN)) { 68 + kfree_skb(skb); 69 + goto finish_consume; 70 + } 71 + 67 72 skb_set_network_header(skb, ETH_HLEN + HSR_HLEN); 73 + } 68 74 skb_reset_mac_len(skb); 69 75 70 76 /* Only the frames received over the interlink port will assign a
+2 -4
net/ipv4/netfilter/nf_reject_ipv4.c
··· 247 247 if (!oth) 248 248 return; 249 249 250 - if ((hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) && 251 - nf_reject_fill_skb_dst(oldskb) < 0) 250 + if (!skb_dst(oldskb) && nf_reject_fill_skb_dst(oldskb) < 0) 252 251 return; 253 252 254 253 if (skb_rtable(oldskb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST)) ··· 320 321 if (iph->frag_off & htons(IP_OFFSET)) 321 322 return; 322 323 323 - if ((hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) && 324 - nf_reject_fill_skb_dst(skb_in) < 0) 324 + if (!skb_dst(skb_in) && nf_reject_fill_skb_dst(skb_in) < 0) 325 325 return; 326 326 327 327 if (skb_csum_unnecessary(skb_in) ||
+2 -3
net/ipv6/netfilter/nf_reject_ipv6.c
··· 293 293 fl6.fl6_sport = otcph->dest; 294 294 fl6.fl6_dport = otcph->source; 295 295 296 - if (hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) { 296 + if (!skb_dst(oldskb)) { 297 297 nf_ip6_route(net, &dst, flowi6_to_flowi(&fl6), false); 298 298 if (!dst) 299 299 return; ··· 397 397 if (hooknum == NF_INET_LOCAL_OUT && skb_in->dev == NULL) 398 398 skb_in->dev = net->loopback_dev; 399 399 400 - if ((hooknum == NF_INET_PRE_ROUTING || hooknum == NF_INET_INGRESS) && 401 - nf_reject6_fill_skb_dst(skb_in) < 0) 400 + if (!skb_dst(skb_in) && nf_reject6_fill_skb_dst(skb_in) < 0) 402 401 return; 403 402 404 403 icmpv6_send(skb_in, ICMPV6_DEST_UNREACH, code, 0);
+5 -1
net/ipv6/seg6_hmac.c
··· 35 35 #include <net/xfrm.h> 36 36 37 37 #include <crypto/hash.h> 38 + #include <crypto/utils.h> 38 39 #include <net/seg6.h> 39 40 #include <net/genetlink.h> 40 41 #include <net/seg6_hmac.h> ··· 281 280 if (seg6_hmac_compute(hinfo, srh, &ipv6_hdr(skb)->saddr, hmac_output)) 282 281 return false; 283 282 284 - if (memcmp(hmac_output, tlv->hmac, SEG6_HMAC_FIELD_LEN) != 0) 283 + if (crypto_memneq(hmac_output, tlv->hmac, SEG6_HMAC_FIELD_LEN)) 285 284 return false; 286 285 287 286 return true; ··· 304 303 { 305 304 struct seg6_pernet_data *sdata = seg6_pernet(net); 306 305 int err; 306 + 307 + if (!__hmac_get_algo(hinfo->alg_id)) 308 + return -EINVAL; 307 309 308 310 err = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node, 309 311 rht_params);
+4 -2
net/mptcp/options.c
··· 1118 1118 return hmac == mp_opt->ahmac; 1119 1119 } 1120 1120 1121 - /* Return false if a subflow has been reset, else return true */ 1121 + /* Return false in case of error (or subflow has been reset), 1122 + * else return true. 1123 + */ 1122 1124 bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb) 1123 1125 { 1124 1126 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); ··· 1224 1222 1225 1223 mpext = skb_ext_add(skb, SKB_EXT_MPTCP); 1226 1224 if (!mpext) 1227 - return true; 1225 + return false; 1228 1226 1229 1227 memset(mpext, 0, sizeof(*mpext)); 1230 1228
+12 -6
net/mptcp/pm.c
··· 274 274 add_timer); 275 275 struct mptcp_sock *msk = entry->sock; 276 276 struct sock *sk = (struct sock *)msk; 277 + unsigned int timeout; 277 278 278 279 pr_debug("msk=%p\n", msk); 279 280 ··· 292 291 goto out; 293 292 } 294 293 294 + timeout = mptcp_get_add_addr_timeout(sock_net(sk)); 295 + if (!timeout) 296 + goto out; 297 + 295 298 spin_lock_bh(&msk->pm.lock); 296 299 297 300 if (!mptcp_pm_should_add_signal_addr(msk)) { ··· 307 302 308 303 if (entry->retrans_times < ADD_ADDR_RETRANS_MAX) 309 304 sk_reset_timer(sk, timer, 310 - jiffies + mptcp_get_add_addr_timeout(sock_net(sk))); 305 + jiffies + timeout); 311 306 312 307 spin_unlock_bh(&msk->pm.lock); 313 308 ··· 349 344 struct mptcp_pm_add_entry *add_entry = NULL; 350 345 struct sock *sk = (struct sock *)msk; 351 346 struct net *net = sock_net(sk); 347 + unsigned int timeout; 352 348 353 349 lockdep_assert_held(&msk->pm.lock); 354 350 ··· 359 353 if (WARN_ON_ONCE(mptcp_pm_is_kernel(msk))) 360 354 return false; 361 355 362 - sk_reset_timer(sk, &add_entry->add_timer, 363 - jiffies + mptcp_get_add_addr_timeout(net)); 364 - return true; 356 + goto reset_timer; 365 357 } 366 358 367 359 add_entry = kmalloc(sizeof(*add_entry), GFP_ATOMIC); ··· 373 369 add_entry->retrans_times = 0; 374 370 375 371 timer_setup(&add_entry->add_timer, mptcp_pm_add_timer, 0); 376 - sk_reset_timer(sk, &add_entry->add_timer, 377 - jiffies + mptcp_get_add_addr_timeout(net)); 372 + reset_timer: 373 + timeout = mptcp_get_add_addr_timeout(net); 374 + if (timeout) 375 + sk_reset_timer(sk, &add_entry->add_timer, jiffies + timeout); 378 376 379 377 return true; 380 378 }
-1
net/mptcp/pm_kernel.c
··· 1085 1085 static void __reset_counters(struct pm_nl_pernet *pernet) 1086 1086 { 1087 1087 WRITE_ONCE(pernet->add_addr_signal_max, 0); 1088 - WRITE_ONCE(pernet->add_addr_accept_max, 0); 1089 1088 WRITE_ONCE(pernet->local_addr_max, 0); 1090 1089 pernet->addrs = 0; 1091 1090 }
+12 -2
net/sched/sch_cake.c
··· 1750 1750 ktime_t now = ktime_get(); 1751 1751 struct cake_tin_data *b; 1752 1752 struct cake_flow *flow; 1753 - u32 idx; 1753 + u32 idx, tin; 1754 1754 1755 1755 /* choose flow to insert into */ 1756 1756 idx = cake_classify(sch, &b, skb, q->flow_mode, &ret); ··· 1760 1760 __qdisc_drop(skb, to_free); 1761 1761 return ret; 1762 1762 } 1763 + tin = (u32)(b - q->tins); 1763 1764 idx--; 1764 1765 flow = &b->flows[idx]; 1765 1766 ··· 1928 1927 q->buffer_max_used = q->buffer_used; 1929 1928 1930 1929 if (q->buffer_used > q->buffer_limit) { 1930 + bool same_flow = false; 1931 1931 u32 dropped = 0; 1932 + u32 drop_id; 1932 1933 1933 1934 while (q->buffer_used > q->buffer_limit) { 1934 1935 dropped++; 1935 - cake_drop(sch, to_free); 1936 + drop_id = cake_drop(sch, to_free); 1937 + 1938 + if ((drop_id >> 16) == tin && 1939 + (drop_id & 0xFFFF) == idx) 1940 + same_flow = true; 1936 1941 } 1937 1942 b->drop_overlimit += dropped; 1943 + 1944 + if (same_flow) 1945 + return NET_XMIT_CN; 1938 1946 } 1939 1947 return NET_XMIT_SUCCESS; 1940 1948 }
+7 -5
net/sched/sch_codel.c
··· 101 101 static int codel_change(struct Qdisc *sch, struct nlattr *opt, 102 102 struct netlink_ext_ack *extack) 103 103 { 104 + unsigned int dropped_pkts = 0, dropped_bytes = 0; 104 105 struct codel_sched_data *q = qdisc_priv(sch); 105 106 struct nlattr *tb[TCA_CODEL_MAX + 1]; 106 - unsigned int qlen, dropped = 0; 107 107 int err; 108 108 109 109 err = nla_parse_nested_deprecated(tb, TCA_CODEL_MAX, opt, ··· 142 142 WRITE_ONCE(q->params.ecn, 143 143 !!nla_get_u32(tb[TCA_CODEL_ECN])); 144 144 145 - qlen = sch->q.qlen; 146 145 while (sch->q.qlen > sch->limit) { 147 146 struct sk_buff *skb = qdisc_dequeue_internal(sch, true); 148 147 149 - dropped += qdisc_pkt_len(skb); 150 - qdisc_qstats_backlog_dec(sch, skb); 148 + if (!skb) 149 + break; 150 + 151 + dropped_pkts++; 152 + dropped_bytes += qdisc_pkt_len(skb); 151 153 rtnl_qdisc_drop(skb, sch); 152 154 } 153 - qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen, dropped); 155 + qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); 154 156 155 157 sch_tree_unlock(sch); 156 158 return 0;
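This sch_codel change (and the matching fq, fq_codel, fq_pie, hhf and pie hunks in this series) converges on one drain pattern: count dropped packets and bytes explicitly, and stop as soon as the dequeue helper returns NULL, instead of reconstructing the counts from qlen/backlog deltas afterwards. A simplified model of the resulting loop, using toy types rather than kernel ones:

```c
#include <assert.h>
#include <stddef.h>

struct pkt {
	unsigned int len;
	struct pkt *next;
};

struct queue {
	struct pkt *head;
	unsigned int qlen;
	unsigned int limit;
};

struct pkt *toy_dequeue(struct queue *q)
{
	struct pkt *p = q->head;

	if (!p)
		return NULL;
	q->head = p->next;
	q->qlen--;
	return p;
}

/* Drain until the queue fits its limit, reporting exact drop counts. */
void drain_to_limit(struct queue *q, unsigned int *dropped_pkts,
		    unsigned int *dropped_bytes)
{
	*dropped_pkts = 0;
	*dropped_bytes = 0;

	while (q->qlen > q->limit) {
		struct pkt *p = toy_dequeue(q);

		if (!p)		/* the NULL check the patches add */
			break;
		(*dropped_pkts)++;
		*dropped_bytes += p->len;
	}
}
```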
+3 -2
net/sched/sch_dualpi2.c
··· 927 927 928 928 q->sch = sch; 929 929 dualpi2_reset_default(sch); 930 - hrtimer_setup(&q->pi2_timer, dualpi2_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED); 930 + hrtimer_setup(&q->pi2_timer, dualpi2_timer, CLOCK_MONOTONIC, 931 + HRTIMER_MODE_ABS_PINNED_SOFT); 931 932 932 933 if (opt && nla_len(opt)) { 933 934 err = dualpi2_change(sch, opt, extack); ··· 938 937 } 939 938 940 939 hrtimer_start(&q->pi2_timer, next_pi2_timeout(q), 941 - HRTIMER_MODE_ABS_PINNED); 940 + HRTIMER_MODE_ABS_PINNED_SOFT); 942 941 return 0; 943 942 } 944 943
+7 -5
net/sched/sch_fq.c
··· 1013 1013 static int fq_change(struct Qdisc *sch, struct nlattr *opt, 1014 1014 struct netlink_ext_ack *extack) 1015 1015 { 1016 + unsigned int dropped_pkts = 0, dropped_bytes = 0; 1016 1017 struct fq_sched_data *q = qdisc_priv(sch); 1017 1018 struct nlattr *tb[TCA_FQ_MAX + 1]; 1018 - int err, drop_count = 0; 1019 - unsigned drop_len = 0; 1020 1019 u32 fq_log; 1020 + int err; 1021 1021 1022 1022 err = nla_parse_nested_deprecated(tb, TCA_FQ_MAX, opt, fq_policy, 1023 1023 NULL); ··· 1135 1135 err = fq_resize(sch, fq_log); 1136 1136 sch_tree_lock(sch); 1137 1137 } 1138 + 1138 1139 while (sch->q.qlen > sch->limit) { 1139 1140 struct sk_buff *skb = qdisc_dequeue_internal(sch, false); 1140 1141 1141 1142 if (!skb) 1142 1143 break; 1143 - drop_len += qdisc_pkt_len(skb); 1144 + 1145 + dropped_pkts++; 1146 + dropped_bytes += qdisc_pkt_len(skb); 1144 1147 rtnl_kfree_skbs(skb, skb); 1145 - drop_count++; 1146 1148 } 1147 - qdisc_tree_reduce_backlog(sch, drop_count, drop_len); 1149 + qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); 1148 1150 1149 1151 sch_tree_unlock(sch); 1150 1152 return err;
+7 -5
net/sched/sch_fq_codel.c
··· 366 366 static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt, 367 367 struct netlink_ext_ack *extack) 368 368 { 369 + unsigned int dropped_pkts = 0, dropped_bytes = 0; 369 370 struct fq_codel_sched_data *q = qdisc_priv(sch); 370 371 struct nlattr *tb[TCA_FQ_CODEL_MAX + 1]; 371 372 u32 quantum = 0; ··· 444 443 q->memory_usage > q->memory_limit) { 445 444 struct sk_buff *skb = qdisc_dequeue_internal(sch, false); 446 445 447 - q->cstats.drop_len += qdisc_pkt_len(skb); 446 + if (!skb) 447 + break; 448 + 449 + dropped_pkts++; 450 + dropped_bytes += qdisc_pkt_len(skb); 448 451 rtnl_kfree_skbs(skb, skb); 449 - q->cstats.drop_count++; 450 452 } 451 - qdisc_tree_reduce_backlog(sch, q->cstats.drop_count, q->cstats.drop_len); 452 - q->cstats.drop_count = 0; 453 - q->cstats.drop_len = 0; 453 + qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); 454 454 455 455 sch_tree_unlock(sch); 456 456 return 0;
+7 -5
net/sched/sch_fq_pie.c
··· 287 287 static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt, 288 288 struct netlink_ext_ack *extack) 289 289 { 290 + unsigned int dropped_pkts = 0, dropped_bytes = 0; 290 291 struct fq_pie_sched_data *q = qdisc_priv(sch); 291 292 struct nlattr *tb[TCA_FQ_PIE_MAX + 1]; 292 - unsigned int len_dropped = 0; 293 - unsigned int num_dropped = 0; 294 293 int err; 295 294 296 295 err = nla_parse_nested(tb, TCA_FQ_PIE_MAX, opt, fq_pie_policy, extack); ··· 367 368 while (sch->q.qlen > sch->limit) { 368 369 struct sk_buff *skb = qdisc_dequeue_internal(sch, false); 369 370 370 - len_dropped += qdisc_pkt_len(skb); 371 - num_dropped += 1; 371 + if (!skb) 372 + break; 373 + 374 + dropped_pkts++; 375 + dropped_bytes += qdisc_pkt_len(skb); 372 376 rtnl_kfree_skbs(skb, skb); 373 377 } 374 - qdisc_tree_reduce_backlog(sch, num_dropped, len_dropped); 378 + qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); 375 379 376 380 sch_tree_unlock(sch); 377 381 return 0;
+7 -5
net/sched/sch_hhf.c
··· 508 508 static int hhf_change(struct Qdisc *sch, struct nlattr *opt, 509 509 struct netlink_ext_ack *extack) 510 510 { 511 + unsigned int dropped_pkts = 0, dropped_bytes = 0; 511 512 struct hhf_sched_data *q = qdisc_priv(sch); 512 513 struct nlattr *tb[TCA_HHF_MAX + 1]; 513 - unsigned int qlen, prev_backlog; 514 514 int err; 515 515 u64 non_hh_quantum; 516 516 u32 new_quantum = q->quantum; ··· 561 561 usecs_to_jiffies(us)); 562 562 } 563 563 564 - qlen = sch->q.qlen; 565 - prev_backlog = sch->qstats.backlog; 566 564 while (sch->q.qlen > sch->limit) { 567 565 struct sk_buff *skb = qdisc_dequeue_internal(sch, false); 568 566 567 + if (!skb) 568 + break; 569 + 570 + dropped_pkts++; 571 + dropped_bytes += qdisc_pkt_len(skb); 569 572 rtnl_kfree_skbs(skb, skb); 570 573 } 571 - qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen, 572 - prev_backlog - sch->qstats.backlog); 574 + qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); 573 575 574 576 sch_tree_unlock(sch); 575 577 return 0;
+1 -1
net/sched/sch_htb.c
··· 592 592 */ 593 593 static inline void htb_activate(struct htb_sched *q, struct htb_class *cl) 594 594 { 595 - WARN_ON(cl->level || !cl->leaf.q || !cl->leaf.q->q.qlen); 595 + WARN_ON(cl->level || !cl->leaf.q); 596 596 597 597 if (!cl->prio_activity) { 598 598 cl->prio_activity = 1 << cl->prio;
+7 -5
net/sched/sch_pie.c
··· 141 141 static int pie_change(struct Qdisc *sch, struct nlattr *opt, 142 142 struct netlink_ext_ack *extack) 143 143 { 144 + unsigned int dropped_pkts = 0, dropped_bytes = 0; 144 145 struct pie_sched_data *q = qdisc_priv(sch); 145 146 struct nlattr *tb[TCA_PIE_MAX + 1]; 146 - unsigned int qlen, dropped = 0; 147 147 int err; 148 148 149 149 err = nla_parse_nested_deprecated(tb, TCA_PIE_MAX, opt, pie_policy, ··· 193 193 nla_get_u32(tb[TCA_PIE_DQ_RATE_ESTIMATOR])); 194 194 195 195 /* Drop excess packets if new limit is lower */ 196 - qlen = sch->q.qlen; 197 196 while (sch->q.qlen > sch->limit) { 198 197 struct sk_buff *skb = qdisc_dequeue_internal(sch, true); 199 198 200 - dropped += qdisc_pkt_len(skb); 201 - qdisc_qstats_backlog_dec(sch, skb); 199 + if (!skb) 200 + break; 201 + 202 + dropped_pkts++; 203 + dropped_bytes += qdisc_pkt_len(skb); 202 204 rtnl_qdisc_drop(skb, sch); 203 205 } 204 - qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen, dropped); 206 + qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); 205 207 206 208 sch_tree_unlock(sch); 207 209 return 0;
+2 -1
net/smc/af_smc.c
··· 2568 2568 goto out_decl; 2569 2569 } 2570 2570 2571 - smc_listen_out_connected(new_smc); 2572 2571 SMC_STAT_SERV_SUCC_INC(sock_net(newclcsock->sk), ini); 2572 + /* smc_listen_out() will release smcsk */ 2573 + smc_listen_out_connected(new_smc); 2573 2574 goto out_free; 2574 2575 2575 2576 out_unlock:
+6 -1
net/tls/tls_sw.c
··· 1808 1808 return tls_decrypt_sg(sk, NULL, sgout, &darg); 1809 1809 } 1810 1810 1811 + /* All records returned from a recvmsg() call must have the same type. 1812 + * 0 is not a valid content type. Use it as "no type reported, yet". 1813 + */ 1811 1814 static int tls_record_content_type(struct msghdr *msg, struct tls_msg *tlm, 1812 1815 u8 *control) 1813 1816 { ··· 2054 2051 if (err < 0) 2055 2052 goto end; 2056 2053 2054 + /* process_rx_list() will set @control if it processed any records */ 2057 2055 copied = err; 2058 - if (len <= copied || (copied && control != TLS_RECORD_TYPE_DATA) || rx_more) 2056 + if (len <= copied || rx_more || 2057 + (control && control != TLS_RECORD_TYPE_DATA)) 2059 2058 goto end; 2060 2059 2061 2060 target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
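The tls_sw.c fix above relies on 0 being an invalid TLS content type, so `control == 0` can mean "no record type reported yet" and only a non-zero, non-DATA type ends the read early. A sketch of the loop-exit condition, with the TLS application_data content-type value (23) hard-coded for illustration:

```c
#include <assert.h>
#include <stddef.h>

#define TLS_RECORD_TYPE_DATA 23	/* TLS application_data content type */

/* 0 in @control means "no record type seen yet"; only a non-zero,
 * non-DATA type (e.g. an alert) stops further reading, alongside the
 * usual length and rx_more conditions. */
int tls_stop_reading(size_t len, size_t copied, int rx_more,
		     unsigned char control)
{
	return len <= copied || rx_more ||
	       (control && control != TLS_RECORD_TYPE_DATA);
}
```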
+18 -12
rust/kernel/alloc/allocator.rs
··· 43 43 /// For more details see [self]. 44 44 pub struct KVmalloc; 45 45 46 - /// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment. 47 - fn aligned_size(new_layout: Layout) -> usize { 48 - // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first. 49 - let layout = new_layout.pad_to_align(); 50 - 51 - // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()` 52 - // which together with the slab guarantees means the `krealloc` will return a properly aligned 53 - // object (see comments in `kmalloc()` for more information). 54 - layout.size() 55 - } 56 - 57 46 /// # Invariants 58 47 /// 59 48 /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`. ··· 77 88 old_layout: Layout, 78 89 flags: Flags, 79 90 ) -> Result<NonNull<[u8]>, AllocError> { 80 - let size = aligned_size(layout); 91 + let size = layout.size(); 81 92 let ptr = match ptr { 82 93 Some(ptr) => { 83 94 if old_layout.size() == 0 { ··· 112 123 } 113 124 } 114 125 126 + impl Kmalloc { 127 + /// Returns a [`Layout`] that makes [`Kmalloc`] fulfill the requested size and alignment of 128 + /// `layout`. 129 + pub fn aligned_layout(layout: Layout) -> Layout { 130 + // Note that `layout.size()` (after padding) is guaranteed to be a multiple of 131 + // `layout.align()` which together with the slab guarantees means that `Kmalloc` will return 132 + // a properly aligned object (see comments in `kmalloc()` for more information). 
133 + layout.pad_to_align() 134 + } 135 + } 136 + 115 137 // SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that 116 138 // - memory remains valid until it is explicitly freed, 117 139 // - passing a pointer to a valid memory allocation is OK, ··· 135 135 old_layout: Layout, 136 136 flags: Flags, 137 137 ) -> Result<NonNull<[u8]>, AllocError> { 138 + let layout = Kmalloc::aligned_layout(layout); 139 + 138 140 // SAFETY: `ReallocFunc::call` has the same safety requirements as `Allocator::realloc`. 139 141 unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags) } 140 142 } ··· 178 176 old_layout: Layout, 179 177 flags: Flags, 180 178 ) -> Result<NonNull<[u8]>, AllocError> { 179 + // `KVmalloc` may use the `Kmalloc` backend, hence we have to enforce a `Kmalloc` 180 + // compatible layout. 181 + let layout = Kmalloc::aligned_layout(layout); 182 + 181 183 // TODO: Support alignments larger than PAGE_SIZE. 182 184 if layout.align() > bindings::PAGE_SIZE { 183 185 pr_warn!("KVmalloc does not support alignments larger than PAGE_SIZE yet.\n");
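`Kmalloc::aligned_layout()` above is built on `Layout::pad_to_align()`, which rounds the size up to the next multiple of the alignment (always a power of two); together with the slab guarantees this keeps `kmalloc()` returns properly aligned even for layouts with size smaller than align. The same computation expressed in C, as an illustration:

```c
#include <assert.h>
#include <stddef.h>

/* C equivalent of Layout::pad_to_align(): round @size up to the next
 * multiple of @align, where @align is a power of two. */
size_t pad_to_align(size_t size, size_t align)
{
	return (size + align - 1) & ~(align - 1);
}
```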
+11
rust/kernel/alloc/allocator_test.rs
··· 22 22 pub type Vmalloc = Kmalloc; 23 23 pub type KVmalloc = Kmalloc; 24 24 25 + impl Cmalloc { 26 + /// Returns a [`Layout`] that makes [`Kmalloc`] fulfill the requested size and alignment of 27 + /// `layout`. 28 + pub fn aligned_layout(layout: Layout) -> Layout { 29 + // Note that `layout.size()` (after padding) is guaranteed to be a multiple of 30 + // `layout.align()` which together with the slab guarantees means that `Kmalloc` will return 31 + // a properly aligned object (see comments in `kmalloc()` for more information). 32 + layout.pad_to_align() 33 + } 34 + } 35 + 25 36 extern "C" { 26 37 #[link_name = "aligned_alloc"] 27 38 fn libc_aligned_alloc(align: usize, size: usize) -> *mut crate::ffi::c_void;
+184 -24
rust/kernel/device.rs
··· 15 15 16 16 pub mod property; 17 17 18 - /// A reference-counted device. 18 + /// The core representation of a device in the kernel's driver model. 19 19 /// 20 - /// This structure represents the Rust abstraction for a C `struct device`. This implementation 21 - /// abstracts the usage of an already existing C `struct device` within Rust code that we get 22 - /// passed from the C side. 20 + /// This structure represents the Rust abstraction for a C `struct device`. A [`Device`] can either 21 + /// exist as temporary reference (see also [`Device::from_raw`]), which is only valid within a 22 + /// certain scope or as [`ARef<Device>`], owning a dedicated reference count. 23 23 /// 24 - /// An instance of this abstraction can be obtained temporarily or permanent. 24 + /// # Device Types 25 25 /// 26 - /// A temporary one is bound to the lifetime of the C `struct device` pointer used for creation. 27 - /// A permanent instance is always reference-counted and hence not restricted by any lifetime 28 - /// boundaries. 26 + /// A [`Device`] can represent either a bus device or a class device. 29 27 /// 30 - /// For subsystems it is recommended to create a permanent instance to wrap into a subsystem 31 - /// specific device structure (e.g. `pci::Device`). This is useful for passing it to drivers in 32 - /// `T::probe()`, such that a driver can store the `ARef<Device>` (equivalent to storing a 33 - /// `struct device` pointer in a C driver) for arbitrary purposes, e.g. allocating DMA coherent 34 - /// memory. 28 + /// ## Bus Devices 29 + /// 30 + /// A bus device is a [`Device`] that is associated with a physical or virtual bus. Examples of 31 + /// buses include PCI, USB, I2C, and SPI. Devices attached to a bus are registered with a specific 32 + /// bus type, which facilitates matching devices with appropriate drivers based on IDs or other 33 + /// identifying information. Bus devices are visible in sysfs under `/sys/bus/<bus-name>/devices/`. 
34 + /// 35 + /// ## Class Devices 36 + /// 37 + /// A class device is a [`Device`] that is associated with a logical category of functionality 38 + /// rather than a physical bus. Examples of classes include block devices, network interfaces, sound 39 + /// cards, and input devices. Class devices are grouped under a common class and exposed to 40 + /// userspace via entries in `/sys/class/<class-name>/`. 41 + /// 42 + /// # Device Context 43 + /// 44 + /// [`Device`] references are generic over a [`DeviceContext`], which represents the type state of 45 + /// a [`Device`]. 46 + /// 47 + /// As the name indicates, this type state represents the context of the scope the [`Device`] 48 + /// reference is valid in. For instance, the [`Bound`] context guarantees that the [`Device`] is 49 + /// bound to a driver for the entire duration of the existence of a [`Device<Bound>`] reference. 50 + /// 51 + /// Other [`DeviceContext`] types besides [`Bound`] are [`Normal`], [`Core`] and [`CoreInternal`]. 52 + /// 53 + /// Unless selected otherwise [`Device`] defaults to the [`Normal`] [`DeviceContext`], which by 54 + /// itself has no additional requirements. 55 + /// 56 + /// It is always up to the caller of [`Device::from_raw`] to select the correct [`DeviceContext`] 57 + /// type for the corresponding scope the [`Device`] reference is created in. 58 + /// 59 + /// All [`DeviceContext`] types other than [`Normal`] are intended to be used with 60 + /// [bus devices](#bus-devices) only. 61 + /// 62 + /// # Implementing Bus Devices 63 + /// 64 + /// This section provides a guideline to implement bus specific devices, such as [`pci::Device`] or 65 + /// [`platform::Device`]. 66 + /// 67 + /// A bus specific device should be defined as follows. 
68 + /// 69 + /// ```ignore 70 + /// #[repr(transparent)] 71 + /// pub struct Device<Ctx: device::DeviceContext = device::Normal>( 72 + /// Opaque<bindings::bus_device_type>, 73 + /// PhantomData<Ctx>, 74 + /// ); 75 + /// ``` 76 + /// 77 + /// Since devices are reference counted, [`AlwaysRefCounted`] should be implemented for `Device` 78 + /// (i.e. `Device<Normal>`). Note that [`AlwaysRefCounted`] must not be implemented for any other 79 + /// [`DeviceContext`], since all other device context types are only valid within a certain scope. 80 + /// 81 + /// In order to be able to implement the [`DeviceContext`] dereference hierarchy, bus device 82 + /// implementations should call the [`impl_device_context_deref`] macro as shown below. 83 + /// 84 + /// ```ignore 85 + /// // SAFETY: `Device` is a transparent wrapper of a type that doesn't depend on `Device`'s 86 + /// // generic argument. 87 + /// kernel::impl_device_context_deref!(unsafe { Device }); 88 + /// ``` 89 + /// 90 + /// In order to convert from any [`Device<Ctx>`] to [`ARef<Device>`], bus devices can implement 91 + /// the following macro call. 92 + /// 93 + /// ```ignore 94 + /// kernel::impl_device_context_into_aref!(Device); 95 + /// ``` 96 + /// 97 + /// Bus devices should also implement the following [`AsRef`] implementation, such that users can 98 + /// easily derive a generic [`Device`] reference. 99 + /// 100 + /// ```ignore 101 + /// impl<Ctx: device::DeviceContext> AsRef<device::Device<Ctx>> for Device<Ctx> { 102 + /// fn as_ref(&self) -> &device::Device<Ctx> { 103 + /// ... 104 + /// } 105 + /// } 106 + /// ``` 107 + /// 108 + /// # Implementing Class Devices 109 + /// 110 + /// Class device implementations require less infrastructure and depend slightly more on the 111 + /// specific subsystem. 112 + /// 113 + /// An example implementation for a class device could look like this. 
114 + /// 115 + /// ```ignore 116 + /// #[repr(C)] 117 + /// pub struct Device<T: class::Driver> { 118 + /// dev: Opaque<bindings::class_device_type>, 119 + /// data: T::Data, 120 + /// } 121 + /// ``` 122 + /// 123 + /// This class device uses the sub-classing pattern to embed the driver's private data within the 124 + /// allocation of the class device. For this to be possible the class device is generic over the 125 + /// class specific `Driver` trait implementation. 126 + /// 127 + /// Just like any device, class devices are reference counted and should hence implement 128 + /// [`AlwaysRefCounted`] for `Device`. 129 + /// 130 + /// Class devices should also implement the following [`AsRef`] implementation, such that users can 131 + /// easily derive a generic [`Device`] reference. 132 + /// 133 + /// ```ignore 134 + /// impl<T: class::Driver> AsRef<device::Device> for Device<T> { 135 + /// fn as_ref(&self) -> &device::Device { 136 + /// ... 137 + /// } 138 + /// } 139 + /// ``` 140 + /// 141 + /// An example for a class device implementation is [`drm::Device`]. 35 142 /// 36 143 /// # Invariants 37 144 /// ··· 149 42 /// 150 43 /// `bindings::device::release` is valid to be called from any thread, hence `ARef<Device>` can be 151 44 /// dropped from any thread. 45 + /// 46 + /// [`AlwaysRefCounted`]: kernel::types::AlwaysRefCounted 47 + /// [`drm::Device`]: kernel::drm::Device 48 + /// [`impl_device_context_deref`]: kernel::impl_device_context_deref 49 + /// [`pci::Device`]: kernel::pci::Device 50 + /// [`platform::Device`]: kernel::platform::Device 152 51 #[repr(transparent)] 153 52 pub struct Device<Ctx: DeviceContext = Normal>(Opaque<bindings::device>, PhantomData<Ctx>); 154 53 ··· 424 311 // synchronization in `struct device`. 425 312 unsafe impl Sync for Device {} 426 313 427 - /// Marker trait for the context of a bus specific device. 314 + /// Marker trait for the context or scope of a bus specific device. 
428 315 /// 429 - /// Some functions of a bus specific device should only be called from a certain context, i.e. bus 430 - /// callbacks, such as `probe()`. 316 + /// [`DeviceContext`] is a marker trait for types representing the context of a bus specific 317 + /// [`Device`]. 431 318 /// 432 - /// This is the marker trait for structures representing the context of a bus specific device. 319 + /// The specific device context types are: [`CoreInternal`], [`Core`], [`Bound`] and [`Normal`]. 320 + /// 321 + /// [`DeviceContext`] types are hierarchical, which means that there is a strict hierarchy that 322 + /// defines which [`DeviceContext`] type can be derived from another. For instance, any 323 + /// [`Device<Core>`] can dereference to a [`Device<Bound>`]. 324 + /// 325 + /// The following enumeration illustrates the dereference hierarchy of [`DeviceContext`] types. 326 + /// 327 + /// - [`CoreInternal`] => [`Core`] => [`Bound`] => [`Normal`] 328 + /// 329 + /// Bus devices can automatically implement the dereference hierarchy by using 330 + /// [`impl_device_context_deref`]. 331 + /// 332 + /// Note that the guarantee for a [`Device`] reference to have a certain [`DeviceContext`] comes 333 + /// from the specific scope the [`Device`] reference is valid in. 334 + /// 335 + /// [`impl_device_context_deref`]: kernel::impl_device_context_deref 433 336 pub trait DeviceContext: private::Sealed {} 434 337 435 - /// The [`Normal`] context is the context of a bus specific device when it is not an argument of 436 - /// any bus callback. 338 + /// The [`Normal`] context is the default [`DeviceContext`] of any [`Device`]. 339 + /// 340 + /// The normal context does not indicate any specific context. Any `Device<Ctx>` is also a valid 341 + /// [`Device<Normal>`]. It is the only [`DeviceContext`] for which it is valid to implement 342 + /// [`AlwaysRefCounted`]. 
343 + /// 344 + /// [`AlwaysRefCounted`]: kernel::types::AlwaysRefCounted 437 345 pub struct Normal; 438 346 439 - /// The [`Core`] context is the context of a bus specific device when it is supplied as argument of 440 - /// any of the bus callbacks, such as `probe()`. 347 + /// The [`Core`] context is the context of a bus specific device when it appears as argument of 348 + /// any bus specific callback, such as `probe()`. 349 + /// 350 + /// The core context indicates that the [`Device<Core>`] reference's scope is limited to the bus 351 + /// callback it appears in. It is intended to be used for synchronization purposes. Bus device 352 + /// implementations can implement methods for [`Device<Core>`], such that they can only be called 353 + /// from bus callbacks. 441 354 pub struct Core; 442 355 443 - /// Semantically the same as [`Core`] but reserved for internal usage of the corresponding bus 356 + /// Semantically the same as [`Core`], but reserved for internal usage of the corresponding bus 444 357 /// abstraction. 358 + /// 359 + /// The internal core context is intended to be used in exactly the same way as the [`Core`] 360 + /// context, with the difference that this [`DeviceContext`] is internal to the corresponding bus 361 + /// abstraction. 362 + /// 363 + /// This context mainly exists to share generic [`Device`] infrastructure that should only be called 364 + /// from bus callbacks with bus abstractions, but without making them accessible for drivers. 445 365 pub struct CoreInternal; 446 366 447 - /// The [`Bound`] context is the context of a bus specific device reference when it is guaranteed to 448 - /// be bound for the duration of its lifetime. 367 + /// The [`Bound`] context is the [`DeviceContext`] of a bus specific device when it is guaranteed to 368 + /// be bound to a driver. 
369 + /// 370 + /// The bound context indicates that for the entire duration of the lifetime of a [`Device<Bound>`] 371 + /// reference, the [`Device`] is guaranteed to be bound to a driver. 372 + /// 373 + /// Some APIs, such as [`dma::CoherentAllocation`] or [`Devres`] rely on the [`Device`] to be bound, 374 + /// which can be proven with the [`Bound`] device context. 375 + /// 376 + /// Any abstraction that can guarantee a scope where the corresponding bus device is bound, should 377 + /// provide a [`Device<Bound>`] reference to its users for this scope. This allows users to benefit 378 + /// from optimizations for accessing device resources, see also [`Devres::access`]. 379 + /// 380 + /// [`Devres`]: kernel::devres::Devres 381 + /// [`Devres::access`]: kernel::devres::Devres::access 382 + /// [`dma::CoherentAllocation`]: kernel::dma::CoherentAllocation 449 383 pub struct Bound; 450 384 451 385 mod private {
+18 -9
rust/kernel/devres.rs
··· 115 115 /// Contains all the fields shared with [`Self::callback`]. 116 116 // TODO: Replace with `UnsafePinned`, once available. 117 117 // 118 - // Subsequently, the `drop_in_place()` in `Devres::drop` and the explicit `Send` and `Sync' 119 - // impls can be removed. 118 + // Subsequently, the `drop_in_place()` in `Devres::drop` and `Devres::new` as well as the 119 + // explicit `Send` and `Sync' impls can be removed. 120 120 #[pin] 121 121 inner: Opaque<Inner<T>>, 122 + _add_action: (), 122 123 } 123 124 124 125 impl<T: Send> Devres<T> { ··· 141 140 dev: dev.into(), 142 141 callback, 143 142 // INVARIANT: `inner` is properly initialized. 144 - inner <- { 143 + inner <- Opaque::pin_init(try_pin_init!(Inner { 144 + devm <- Completion::new(), 145 + revoke <- Completion::new(), 146 + data <- Revocable::new(data), 147 + })), 148 + // TODO: Replace with "initializer code blocks" [1] once available. 149 + // 150 + // [1] https://github.com/Rust-for-Linux/pin-init/pull/69 151 + _add_action: { 145 152 // SAFETY: `this` is a valid pointer to uninitialized memory. 146 153 let inner = unsafe { &raw mut (*this.as_ptr()).inner }; 147 154 ··· 161 152 // live at least as long as the returned `impl PinInit<Self, Error>`. 162 153 to_result(unsafe { 163 154 bindings::devm_add_action(dev.as_raw(), Some(callback), inner.cast()) 164 - })?; 155 + }).inspect_err(|_| { 156 + let inner = Opaque::cast_into(inner); 165 157 166 - Opaque::pin_init(try_pin_init!(Inner { 167 - devm <- Completion::new(), 168 - revoke <- Completion::new(), 169 - data <- Revocable::new(data), 170 - })) 158 + // SAFETY: `inner` is a valid pointer to an `Inner<T>` and valid for both reads 159 + // and writes. 160 + unsafe { core::ptr::drop_in_place(inner) }; 161 + })?; 171 162 }, 172 163 }) 173 164 }
+87 -2
rust/kernel/driver.rs
··· 2 2 3 3 //! Generic support for drivers of different buses (e.g., PCI, Platform, Amba, etc.). 4 4 //! 5 - //! Each bus / subsystem is expected to implement [`RegistrationOps`], which allows drivers to 6 - //! register using the [`Registration`] class. 5 + //! This documentation describes how to implement a bus specific driver API and how to align it with 6 + //! the design of (bus specific) devices. 7 + //! 8 + //! Note: Readers are expected to know the content of the documentation of [`Device`] and 9 + //! [`DeviceContext`]. 10 + //! 11 + //! # Driver Trait 12 + //! 13 + //! The main driver interface is defined by a bus specific driver trait. For instance: 14 + //! 15 + //! ```ignore 16 + //! pub trait Driver: Send { 17 + //! /// The type holding information about each device ID supported by the driver. 18 + //! type IdInfo: 'static; 19 + //! 20 + //! /// The table of OF device ids supported by the driver. 21 + //! const OF_ID_TABLE: Option<of::IdTable<Self::IdInfo>> = None; 22 + //! 23 + //! /// The table of ACPI device ids supported by the driver. 24 + //! const ACPI_ID_TABLE: Option<acpi::IdTable<Self::IdInfo>> = None; 25 + //! 26 + //! /// Driver probe. 27 + //! fn probe(dev: &Device<device::Core>, id_info: &Self::IdInfo) -> Result<Pin<KBox<Self>>>; 28 + //! 29 + //! /// Driver unbind (optional). 30 + //! fn unbind(dev: &Device<device::Core>, this: Pin<&Self>) { 31 + //! let _ = (dev, this); 32 + //! } 33 + //! } 34 + //! ``` 35 + //! 36 + //! For specific examples see [`auxiliary::Driver`], [`pci::Driver`] and [`platform::Driver`]. 37 + //! 38 + //! The `probe()` callback should return a `Result<Pin<KBox<Self>>>`, i.e. the driver's private 39 + //! data. The bus abstraction should store the pointer in the corresponding bus device. The generic 40 + //! [`Device`] infrastructure provides common helpers for this purpose on its 41 + //! [`Device<CoreInternal>`] implementation. 42 + //! 43 + //! 
All driver callbacks should provide a reference to the driver's private data. Once the driver 44 + //! is unbound from the device, the bus abstraction should take back the ownership of the driver's 45 + //! private data from the corresponding [`Device`] and [`drop`] it. 46 + //! 47 + //! All driver callbacks should provide a [`Device<Core>`] reference (see also [`device::Core`]). 48 + //! 49 + //! # Adapter 50 + //! 51 + //! The adapter implementation of a bus represents the abstraction layer between the C bus 52 + //! callbacks and the Rust bus callbacks. It therefore has to be generic over an implementation of 53 + //! the [driver trait](#driver-trait). 54 + //! 55 + //! ```ignore 56 + //! pub struct Adapter<T: Driver>; 57 + //! ``` 58 + //! 59 + //! There's a common [`Adapter`] trait that can be implemented to inherit common driver 60 + //! infrastructure, such as finding the ID info from an [`of::IdTable`] or [`acpi::IdTable`]. 61 + //! 62 + //! # Driver Registration 63 + //! 64 + //! In order to register C driver types (such as `struct platform_driver`) the [adapter](#adapter) 65 + //! should implement the [`RegistrationOps`] trait. 66 + //! 67 + //! This trait implementation can be used to create the actual registration with the common 68 + //! [`Registration`] type. 69 + //! 70 + //! Typically, bus abstractions want to provide a bus specific `module_bus_driver!` macro, which 71 + //! creates a kernel module with exactly one [`Registration`] for the bus specific adapter. 72 + //! 73 + //! The generic driver infrastructure provides a helper for this with the [`module_driver`] macro. 74 + //! 75 + //! # Device IDs 76 + //! 77 + //! Besides the common device ID types, such as [`of::DeviceId`] and [`acpi::DeviceId`], most buses 78 + //! may need to implement their own device ID types. 79 + //! 80 + //! For this purpose the generic infrastructure in [`device_id`] should be used. 81 + //! 82 + //! [`auxiliary::Driver`]: kernel::auxiliary::Driver 83 + //! 
[`Core`]: device::Core 84 + //! [`Device`]: device::Device 85 + //! [`Device<Core>`]: device::Device<device::Core> 86 + //! [`Device<CoreInternal>`]: device::Device<device::CoreInternal> 87 + //! [`DeviceContext`]: device::DeviceContext 88 + //! [`device_id`]: kernel::device_id 89 + //! [`module_driver`]: kernel::module_driver 90 + //! [`pci::Driver`]: kernel::pci::Driver 91 + //! [`platform::Driver`]: kernel::platform::Driver 7 92 8 93 use crate::error::{Error, Result}; 9 94 use crate::{acpi, device, of, str::CStr, try_pin_init, types::Opaque, ThisModule};
+25 -7
rust/kernel/drm/device.rs
··· 5 5 //! C header: [`include/linux/drm/drm_device.h`](srctree/include/linux/drm/drm_device.h) 6 6 7 7 use crate::{ 8 + alloc::allocator::Kmalloc, 8 9 bindings, device, drm, 9 10 drm::driver::AllocImpl, 10 11 error::from_err_ptr, ··· 13 12 prelude::*, 14 13 types::{ARef, AlwaysRefCounted, Opaque}, 15 14 }; 16 - use core::{mem, ops::Deref, ptr, ptr::NonNull}; 15 + use core::{alloc::Layout, mem, ops::Deref, ptr, ptr::NonNull}; 17 16 18 17 #[cfg(CONFIG_DRM_LEGACY)] 19 18 macro_rules! drm_legacy_fields { ··· 54 53 /// 55 54 /// `self.dev` is a valid instance of a `struct device`. 56 55 #[repr(C)] 57 - #[pin_data] 58 56 pub struct Device<T: drm::Driver> { 59 57 dev: Opaque<bindings::drm_device>, 60 - #[pin] 61 58 data: T::Data, 62 59 } 63 60 ··· 95 96 96 97 /// Create a new `drm::Device` for a `drm::Driver`. 97 98 pub fn new(dev: &device::Device, data: impl PinInit<T::Data, Error>) -> Result<ARef<Self>> { 99 + // `__drm_dev_alloc` uses `kmalloc()` to allocate memory, hence ensure a `kmalloc()` 100 + // compatible `Layout`. 101 + let layout = Kmalloc::aligned_layout(Layout::new::<Self>()); 102 + 98 103 // SAFETY: 99 104 // - `VTABLE`, as a `const` is pinned to the read-only section of the compilation, 100 105 // - `dev` is valid by its type invarants, ··· 106 103 bindings::__drm_dev_alloc( 107 104 dev.as_raw(), 108 105 &Self::VTABLE, 109 - mem::size_of::<Self>(), 106 + layout.size(), 110 107 mem::offset_of!(Self, dev), 111 108 ) 112 109 } ··· 120 117 // - `raw_data` is a valid pointer to uninitialized memory. 121 118 // - `raw_data` will not move until it is dropped. 122 119 unsafe { data.__pinned_init(raw_data) }.inspect_err(|_| { 123 - // SAFETY: `__drm_dev_alloc()` was successful, hence `raw_drm` must be valid and the 120 + // SAFETY: `raw_drm` is a valid pointer to `Self`, given that `__drm_dev_alloc` was 121 + // successful. 
122 + let drm_dev = unsafe { Self::into_drm_device(raw_drm) }; 123 + 124 + // SAFETY: `__drm_dev_alloc()` was successful, hence `drm_dev` must be valid and the 124 125 // refcount must be non-zero. 125 - unsafe { bindings::drm_dev_put(ptr::addr_of_mut!((*raw_drm.as_ptr()).dev).cast()) }; 126 + unsafe { bindings::drm_dev_put(drm_dev) }; 126 127 })?; 127 128 128 129 // SAFETY: The reference count is one, and now we take ownership of that reference as a ··· 145 138 // SAFETY: By the safety requirements of this function `ptr` is a valid pointer to a 146 139 // `struct drm_device` embedded in `Self`. 147 140 unsafe { crate::container_of!(Opaque::cast_from(ptr), Self, dev) }.cast_mut() 141 + } 142 + 143 + /// # Safety 144 + /// 145 + /// `ptr` must be a valid pointer to `Self`. 146 + unsafe fn into_drm_device(ptr: NonNull<Self>) -> *mut bindings::drm_device { 147 + // SAFETY: By the safety requirements of this function, `ptr` is a valid pointer to `Self`. 148 + unsafe { &raw mut (*ptr.as_ptr()).dev }.cast() 148 149 } 149 150 150 151 /// Not intended to be called externally, except via declare_drm_ioctls!() ··· 204 189 } 205 190 206 191 unsafe fn dec_ref(obj: NonNull<Self>) { 192 + // SAFETY: `obj` is a valid pointer to `Self`. 193 + let drm_dev = unsafe { Self::into_drm_device(obj) }; 194 + 207 195 // SAFETY: The safety requirements guarantee that the refcount is non-zero. 208 - unsafe { bindings::drm_dev_put(obj.cast().as_ptr()) }; 196 + unsafe { bindings::drm_dev_put(drm_dev) }; 209 197 } 210 198 } 211 199
+1 -1
rust/kernel/faux.rs
··· 4 4 //! 5 5 //! This module provides bindings for working with faux devices in kernel modules. 6 6 //! 7 - //! C header: [`include/linux/device/faux.h`] 7 + //! C header: [`include/linux/device/faux.h`](srctree/include/linux/device/faux.h) 8 8 9 9 use crate::{bindings, device, error::code::*, prelude::*}; 10 10 use core::ptr::{addr_of_mut, null, null_mut, NonNull};
+2 -2
sound/core/timer.c
··· 2139 2139 goto err_take_id; 2140 2140 } 2141 2141 2142 + utimer->id = utimer_id; 2143 + 2142 2144 utimer->name = kasprintf(GFP_KERNEL, "snd-utimer%d", utimer_id); 2143 2145 if (!utimer->name) { 2144 2146 err = -ENOMEM; 2145 2147 goto err_get_name; 2146 2148 } 2147 - 2148 - utimer->id = utimer_id; 2149 2149 2150 2150 tid.dev_sclass = SNDRV_TIMER_SCLASS_APPLICATION; 2151 2151 tid.dev_class = SNDRV_TIMER_CLASS_GLOBAL;
+22 -9
sound/hda/codecs/realtek/alc269.c
··· 510 510 hp_pin = 0x21; 511 511 512 512 alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */ 513 + 514 + /* 3k pull low control for Headset jack. */ 515 + /* NOTE: call this before clearing the pin, otherwise codec stalls */ 516 + /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly 517 + * when booting with headset plugged. So skip setting it for the codec alc257 518 + */ 519 + if (spec->en_3kpull_low) 520 + alc_update_coef_idx(codec, 0x46, 0, 3 << 12); 521 + 513 522 hp_pin_sense = snd_hda_jack_detect(codec, hp_pin); 514 523 515 524 if (hp_pin_sense) { ··· 528 519 AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); 529 520 530 521 msleep(75); 531 - 532 - /* 3k pull low control for Headset jack. */ 533 - /* NOTE: call this before clearing the pin, otherwise codec stalls */ 534 - /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly 535 - * when booting with headset plugged. So skip setting it for the codec alc257 536 - */ 537 - if (spec->en_3kpull_low) 538 - alc_update_coef_idx(codec, 0x46, 0, 3 << 12); 539 522 540 523 if (!spec->no_shutup_pins) 541 524 snd_hda_codec_write(codec, hp_pin, 0, ··· 3580 3579 ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE, 3581 3580 ALC294_FIXUP_ASUS_MIC, 3582 3581 ALC294_FIXUP_ASUS_HEADSET_MIC, 3582 + ALC294_FIXUP_ASUS_I2C_HEADSET_MIC, 3583 3583 ALC294_FIXUP_ASUS_SPK, 3584 3584 ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE, 3585 3585 ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE, ··· 4890 4888 }, 4891 4889 .chained = true, 4892 4890 .chain_id = ALC269_FIXUP_HEADSET_MIC 4891 + }, 4892 + [ALC294_FIXUP_ASUS_I2C_HEADSET_MIC] = { 4893 + .type = HDA_FIXUP_PINS, 4894 + .v.pins = (const struct hda_pintbl[]) { 4895 + { 0x19, 0x03a19020 }, /* use as headset mic */ 4896 + { } 4897 + }, 4898 + .chained = true, 4899 + .chain_id = ALC287_FIXUP_CS35L41_I2C_2 4893 4900 }, 4894 4901 [ALC294_FIXUP_ASUS_SPK] = { 4895 4902 .type = HDA_FIXUP_VERBS, ··· 6379 6368 SND_PCI_QUIRK(0x103c, 0x84e7, "HP 
Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), 6380 6369 SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360), 6381 6370 SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 6371 + SND_PCI_QUIRK(0x103c, 0x8548, "HP EliteBook x360 830 G6", ALC285_FIXUP_HP_GPIO_LED), 6372 + SND_PCI_QUIRK(0x103c, 0x854a, "HP EliteBook 830 G6", ALC285_FIXUP_HP_GPIO_LED), 6382 6373 SND_PCI_QUIRK(0x103c, 0x85c6, "HP Pavilion x360 Convertible 14-dy1xxx", ALC295_FIXUP_HP_MUTE_LED_COEFBIT11), 6383 6374 SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360), 6384 6375 SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), ··· 6741 6728 SND_PCI_QUIRK(0x1043, 0x1b13, "ASUS U41SV/GA403U", ALC285_FIXUP_ASUS_GA403U_HEADSET_MIC), 6742 6729 SND_PCI_QUIRK(0x1043, 0x1b93, "ASUS G614JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2), 6743 6730 SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 6744 - SND_PCI_QUIRK(0x1043, 0x1c03, "ASUS UM3406HA", ALC287_FIXUP_CS35L41_I2C_2), 6731 + SND_PCI_QUIRK(0x1043, 0x1c03, "ASUS UM3406HA", ALC294_FIXUP_ASUS_I2C_HEADSET_MIC), 6745 6732 SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 6746 6733 SND_PCI_QUIRK(0x1043, 0x1c33, "ASUS UX5304MA", ALC245_FIXUP_CS35L41_SPI_2), 6747 6734 SND_PCI_QUIRK(0x1043, 0x1c43, "ASUS UX8406MA", ALC245_FIXUP_CS35L41_SPI_2),
+2 -2
sound/hda/codecs/side-codecs/tas2781_hda_i2c.c
··· 267 267 static const struct snd_kcontrol_new tas2781_snd_controls[] = { 268 268 ACARD_SINGLE_RANGE_EXT_TLV("Speaker Analog Volume", TAS2781_AMP_LEVEL, 269 269 1, 0, 20, 0, tas2781_amp_getvol, 270 - tas2781_amp_putvol, amp_vol_tlv), 270 + tas2781_amp_putvol, tas2781_amp_tlv), 271 271 ACARD_SINGLE_BOOL_EXT("Speaker Force Firmware Load", 0, 272 272 tas2781_force_fwload_get, tas2781_force_fwload_put), 273 273 }; ··· 305 305 efi_char16_t efi_name[TAS2563_CAL_VAR_NAME_MAX]; 306 306 unsigned long max_size = TAS2563_CAL_DATA_SIZE; 307 307 unsigned char var8[TAS2563_CAL_VAR_NAME_MAX]; 308 - struct tasdevice_priv *p = h->hda_priv; 308 + struct tasdevice_priv *p = h->priv; 309 309 struct calidata *cd = &p->cali_data; 310 310 struct cali_reg *r = &cd->cali_reg_array; 311 311 unsigned int offset = 0;
+4 -2
sound/hda/codecs/side-codecs/tas2781_hda_spi.c
··· 494 494 495 495 static struct snd_kcontrol_new tas2781_snd_ctls[] = { 496 496 ACARD_SINGLE_RANGE_EXT_TLV(NULL, TAS2781_AMP_LEVEL, 1, 0, 20, 0, 497 - tas2781_amp_getvol, tas2781_amp_putvol, amp_vol_tlv), 497 + tas2781_amp_getvol, tas2781_amp_putvol, 498 + tas2781_amp_tlv), 498 499 ACARD_SINGLE_RANGE_EXT_TLV(NULL, TAS2781_DVC_LVL, 0, 0, 200, 1, 499 - tas2781_digital_getvol, tas2781_digital_putvol, dvc_tlv), 500 + tas2781_digital_getvol, tas2781_digital_putvol, 501 + tas2781_dvc_tlv), 500 502 ACARD_SINGLE_BOOL_EXT(NULL, 0, tas2781_force_fwload_get, 501 503 tas2781_force_fwload_put), 502 504 };
-69
sound/soc/codecs/cs35l56-sdw.c
··· 393 393 return 0; 394 394 } 395 395 396 - static int cs35l63_sdw_kick_divider(struct cs35l56_private *cs35l56, 397 - struct sdw_slave *peripheral) 398 - { 399 - unsigned int curr_scale_reg, next_scale_reg; 400 - int curr_scale, next_scale, ret; 401 - 402 - if (!cs35l56->base.init_done) 403 - return 0; 404 - 405 - if (peripheral->bus->params.curr_bank) { 406 - curr_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B1; 407 - next_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B0; 408 - } else { 409 - curr_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B0; 410 - next_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B1; 411 - } 412 - 413 - /* 414 - * Current clock scale value must be different to new value. 415 - * Modify current to guarantee this. If next still has the dummy 416 - * value we wrote when it was current, the core code has not set 417 - * a new scale so restore its original good value 418 - */ 419 - curr_scale = sdw_read_no_pm(peripheral, curr_scale_reg); 420 - if (curr_scale < 0) { 421 - dev_err(cs35l56->base.dev, "Failed to read current clock scale: %d\n", curr_scale); 422 - return curr_scale; 423 - } 424 - 425 - next_scale = sdw_read_no_pm(peripheral, next_scale_reg); 426 - if (next_scale < 0) { 427 - dev_err(cs35l56->base.dev, "Failed to read next clock scale: %d\n", next_scale); 428 - return next_scale; 429 - } 430 - 431 - if (next_scale == CS35L56_SDW_INVALID_BUS_SCALE) { 432 - next_scale = cs35l56->old_sdw_clock_scale; 433 - ret = sdw_write_no_pm(peripheral, next_scale_reg, next_scale); 434 - if (ret < 0) { 435 - dev_err(cs35l56->base.dev, "Failed to modify current clock scale: %d\n", 436 - ret); 437 - return ret; 438 - } 439 - } 440 - 441 - cs35l56->old_sdw_clock_scale = curr_scale; 442 - ret = sdw_write_no_pm(peripheral, curr_scale_reg, CS35L56_SDW_INVALID_BUS_SCALE); 443 - if (ret < 0) { 444 - dev_err(cs35l56->base.dev, "Failed to modify current clock scale: %d\n", ret); 445 - return ret; 446 - } 447 - 448 - dev_dbg(cs35l56->base.dev, "Next bus scale: %#x\n", next_scale); 449 - 450 - return 0; 
451 - } 452 - 453 - static int cs35l56_sdw_bus_config(struct sdw_slave *peripheral, 454 - struct sdw_bus_params *params) 455 - { 456 - struct cs35l56_private *cs35l56 = dev_get_drvdata(&peripheral->dev); 457 - 458 - if ((cs35l56->base.type == 0x63) && (cs35l56->base.rev < 0xa1)) 459 - return cs35l63_sdw_kick_divider(cs35l56, peripheral); 460 - 461 - return 0; 462 - } 463 - 464 396 static int __maybe_unused cs35l56_sdw_clk_stop(struct sdw_slave *peripheral, 465 397 enum sdw_clk_stop_mode mode, 466 398 enum sdw_clk_stop_type type) ··· 408 476 .read_prop = cs35l56_sdw_read_prop, 409 477 .interrupt_callback = cs35l56_sdw_interrupt, 410 478 .update_status = cs35l56_sdw_update_status, 411 - .bus_config = cs35l56_sdw_bus_config, 412 479 #ifdef DEBUG 413 480 .clk_stop = cs35l56_sdw_clk_stop, 414 481 #endif
+26 -3
sound/soc/codecs/cs35l56-shared.c
··· 838 838 }; 839 839 EXPORT_SYMBOL_NS_GPL(cs35l56_calibration_controls, "SND_SOC_CS35L56_SHARED"); 840 840 841 + static const struct cirrus_amp_cal_controls cs35l63_calibration_controls = { 842 + .alg_id = 0xbf210, 843 + .mem_region = WMFW_ADSP2_YM, 844 + .ambient = "CAL_AMBIENT", 845 + .calr = "CAL_R", 846 + .status = "CAL_STATUS", 847 + .checksum = "CAL_CHECKSUM", 848 + }; 849 + 841 850 int cs35l56_get_calibration(struct cs35l56_base *cs35l56_base) 842 851 { 843 852 u64 silicon_uid = 0; ··· 921 912 void cs35l56_log_tuning(struct cs35l56_base *cs35l56_base, struct cs_dsp *cs_dsp) 922 913 { 923 914 __be32 pid, sid, tid; 915 + unsigned int alg_id; 924 916 int ret; 917 + 918 + switch (cs35l56_base->type) { 919 + case 0x54: 920 + case 0x56: 921 + case 0x57: 922 + alg_id = 0x9f212; 923 + break; 924 + default: 925 + alg_id = 0xbf212; 926 + break; 927 + } 925 928 926 929 scoped_guard(mutex, &cs_dsp->pwr_lock) { 927 930 ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_PRJCT_ID", 928 - WMFW_ADSP2_XM, 0x9f212), 931 + WMFW_ADSP2_XM, alg_id), 929 932 0, &pid, sizeof(pid)); 930 933 if (!ret) 931 934 ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_CHNNL_ID", 932 - WMFW_ADSP2_XM, 0x9f212), 935 + WMFW_ADSP2_XM, alg_id), 933 936 0, &sid, sizeof(sid)); 934 937 if (!ret) 935 938 ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_SNPSHT_ID", 936 - WMFW_ADSP2_XM, 0x9f212), 939 + WMFW_ADSP2_XM, alg_id), 937 940 0, &tid, sizeof(tid)); 938 941 } 939 942 ··· 995 974 case 0x35A54: 996 975 case 0x35A56: 997 976 case 0x35A57: 977 + cs35l56_base->calibration_controls = &cs35l56_calibration_controls; 998 978 break; 999 979 case 0x35A630: 980 + cs35l56_base->calibration_controls = &cs35l63_calibration_controls; 1000 981 devid = devid >> 4; 1001 982 break; 1002 983 default:
+1 -1
sound/soc/codecs/cs35l56.c
··· 695 695 return ret; 696 696 697 697 ret = cs_amp_write_cal_coeffs(&cs35l56->dsp.cs_dsp, 698 - &cs35l56_calibration_controls, 698 + cs35l56->base.calibration_controls, 699 699 &cs35l56->base.cal_data); 700 700 701 701 wm_adsp_stop(&cs35l56->dsp);
-3
sound/soc/codecs/cs35l56.h
··· 20 20 #define CS35L56_SDW_GEN_INT_MASK_1 0xc1 21 21 #define CS35L56_SDW_INT_MASK_CODEC_IRQ BIT(0) 22 22 23 - #define CS35L56_SDW_INVALID_BUS_SCALE 0xf 24 - 25 23 #define CS35L56_RX_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE) 26 24 #define CS35L56_TX_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE \ 27 25 | SNDRV_PCM_FMTBIT_S32_LE) ··· 50 52 u8 asp_slot_count; 51 53 bool tdm_mode; 52 54 bool sysclk_set; 53 - u8 old_sdw_clock_scale; 54 55 u8 sdw_link_num; 55 56 u8 sdw_unique_id; 56 57 };
+1 -1
sound/soc/codecs/es8389.c
··· 636 636 regmap_write(es8389->regmap, ES8389_ANA_CTL1, 0x59); 637 637 regmap_write(es8389->regmap, ES8389_ADC_EN, 0x00); 638 638 regmap_write(es8389->regmap, ES8389_CLK_OFF1, 0x00); 639 - regmap_write(es8389->regmap, ES8389_RESET, 0x7E); 639 + regmap_write(es8389->regmap, ES8389_RESET, 0x3E); 640 640 regmap_update_bits(es8389->regmap, ES8389_DAC_INV, 0x80, 0x80); 641 641 usleep_range(8000, 8500); 642 642 regmap_update_bits(es8389->regmap, ES8389_DAC_INV, 0x80, 0x00);
+2 -2
sound/soc/codecs/tas2781-i2c.c
··· 910 910 static const struct snd_kcontrol_new tas2781_snd_controls[] = { 911 911 SOC_SINGLE_RANGE_EXT_TLV("Speaker Analog Volume", TAS2781_AMP_LEVEL, 912 912 1, 0, 20, 0, tas2781_amp_getvol, 913 - tas2781_amp_putvol, amp_vol_tlv), 913 + tas2781_amp_putvol, tas2781_amp_tlv), 914 914 SOC_SINGLE_RANGE_EXT_TLV("Speaker Digital Volume", TAS2781_DVC_LVL, 915 915 0, 0, 200, 1, tas2781_digital_getvol, 916 - tas2781_digital_putvol, dvc_tlv), 916 + tas2781_digital_putvol, tas2781_dvc_tlv), 917 917 }; 918 918 919 919 static const struct snd_kcontrol_new tas2781_cali_controls[] = {
+1 -1
sound/usb/stream.c
··· 349 349 u16 cs_len; 350 350 u8 cs_type; 351 351 352 - if (len < sizeof(*p)) 352 + if (len < sizeof(*cs_desc)) 353 353 break; 354 354 cs_len = le16_to_cpu(cs_desc->wLength); 355 355 if (len < cs_len)
+1 -1
sound/usb/validate.c
··· 285 285 /* UAC_VERSION_3, UAC3_EXTENDED_TERMINAL: not implemented yet */ 286 286 FUNC(UAC_VERSION_3, UAC3_MIXER_UNIT, validate_mixer_unit), 287 287 FUNC(UAC_VERSION_3, UAC3_SELECTOR_UNIT, validate_selector_unit), 288 - FUNC(UAC_VERSION_3, UAC_FEATURE_UNIT, validate_uac3_feature_unit), 288 + FUNC(UAC_VERSION_3, UAC3_FEATURE_UNIT, validate_uac3_feature_unit), 289 289 /* UAC_VERSION_3, UAC3_EFFECT_UNIT: not implemented yet */ 290 290 FUNC(UAC_VERSION_3, UAC3_PROCESSING_UNIT, validate_processing_unit), 291 291 FUNC(UAC_VERSION_3, UAC3_EXTENSION_UNIT, validate_processing_unit),
+2 -2
tools/bootconfig/main.c
··· 193 193 if (stat.st_size < BOOTCONFIG_FOOTER_SIZE) 194 194 return 0; 195 195 196 - if (lseek(fd, -BOOTCONFIG_MAGIC_LEN, SEEK_END) < 0) 196 + if (lseek(fd, -(off_t)BOOTCONFIG_MAGIC_LEN, SEEK_END) < 0) 197 197 return pr_errno("Failed to lseek for magic", -errno); 198 198 199 199 if (read(fd, magic, BOOTCONFIG_MAGIC_LEN) < 0) ··· 203 203 if (memcmp(magic, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN) != 0) 204 204 return 0; 205 205 206 - if (lseek(fd, -BOOTCONFIG_FOOTER_SIZE, SEEK_END) < 0) 206 + if (lseek(fd, -(off_t)BOOTCONFIG_FOOTER_SIZE, SEEK_END) < 0) 207 207 return pr_errno("Failed to lseek for size", -errno); 208 208 209 209 if (read(fd, &size, sizeof(uint32_t)) < 0)
+28
tools/include/linux/args.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef _LINUX_ARGS_H 4 + #define _LINUX_ARGS_H 5 + 6 + /* 7 + * How do these macros work? 8 + * 9 + * In __COUNT_ARGS() _0 to _12 are just placeholders from the start 10 + * in order to make sure _n is positioned over the correct number 11 + * from 12 to 0 (depending on X, which is a variadic argument list). 12 + * They serve no purpose other than occupying a position. Since each 13 + * macro parameter must have a distinct identifier, those identifiers 14 + * are as good as any. 15 + * 16 + * In COUNT_ARGS() we use actual integers, so __COUNT_ARGS() returns 17 + * that as _n. 18 + */ 19 + 20 + /* This counts to 15. Any more, it will return 16th argument. */ 21 + #define __COUNT_ARGS(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, _n, X...) _n 22 + #define COUNT_ARGS(X...) __COUNT_ARGS(, ##X, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0) 23 + 24 + /* Concatenate two parameters, but allow them to be expanded beforehand. */ 25 + #define __CONCAT(a, b) a ## b 26 + #define CONCATENATE(a, b) __CONCAT(a, b) 27 + 28 + #endif /* _LINUX_ARGS_H */
+23
tools/objtool/arch/loongarch/special.c
··· 27 27 struct table_info *next_table; 28 28 unsigned long tmp_insn_offset; 29 29 unsigned long tmp_rodata_offset; 30 + bool is_valid_list = false; 30 31 31 32 rsec = find_section_by_name(file->elf, ".rela.discard.tablejump_annotate"); 32 33 if (!rsec) ··· 36 35 INIT_LIST_HEAD(&table_list); 37 36 38 37 for_each_reloc(rsec, reloc) { 38 + if (reloc->sym->sec->rodata) 39 + continue; 40 + 41 + if (strcmp(insn->sec->name, reloc->sym->sec->name)) 42 + continue; 43 + 39 44 orig_table = malloc(sizeof(struct table_info)); 40 45 if (!orig_table) { 41 46 WARN("malloc failed"); ··· 56 49 57 50 if (reloc_idx(reloc) + 1 == sec_num_entries(rsec)) 58 51 break; 52 + 53 + if (strcmp(insn->sec->name, (reloc + 1)->sym->sec->name)) { 54 + list_for_each_entry(orig_table, &table_list, jump_info) { 55 + if (orig_table->insn_offset == insn->offset) { 56 + is_valid_list = true; 57 + break; 58 + } 59 + } 60 + 61 + if (!is_valid_list) { 62 + list_del_init(&table_list); 63 + continue; 64 + } 65 + 66 + break; 67 + } 59 68 } 60 69 61 70 list_for_each_entry(orig_table, &table_list, jump_info) {
+4 -3
tools/power/cpupower/man/cpupower-set.1
··· 81 81 .RE 82 82 83 83 .PP 84 - \-\-turbo\-boost, \-t 84 + \-\-turbo\-boost, \-\-boost, \-t 85 85 .RS 4 86 - This option is used to enable or disable the turbo boost feature on 87 - supported Intel and AMD processors. 86 + This option is used to enable or disable the boost feature on 87 + supported Intel and AMD processors, and other boost supported systems. 88 + (The --boost option is an alias for the --turbo-boost option) 88 89 89 90 This option takes as parameter either \fB1\fP to enable, or \fB0\fP to disable the feature. 90 91
+15 -1
tools/power/cpupower/utils/cpufreq-info.c
··· 128 128 /* ToDo: Make this more global */ 129 129 unsigned long pstates[MAX_HW_PSTATES] = {0,}; 130 130 131 - ret = cpufreq_has_boost_support(cpu, &support, &active, &b_states); 131 + ret = cpufreq_has_x86_boost_support(cpu, &support, &active, &b_states); 132 132 if (ret) { 133 133 printf(_("Error while evaluating Boost Capabilities" 134 134 " on CPU %d -- are you root?\n"), cpu); ··· 204 204 return 0; 205 205 } 206 206 207 + static int get_boost_mode_generic(unsigned int cpu) 208 + { 209 + bool active; 210 + 211 + if (!cpufreq_has_generic_boost_support(&active)) { 212 + printf(_(" boost state support:\n")); 213 + printf(_(" Active: %s\n"), active ? _("yes") : _("no")); 214 + } 215 + 216 + return 0; 217 + } 218 + 207 219 /* --boost / -b */ 208 220 209 221 static int get_boost_mode(unsigned int cpu) ··· 226 214 cpupower_cpu_info.vendor == X86_VENDOR_HYGON || 227 215 cpupower_cpu_info.vendor == X86_VENDOR_INTEL) 228 216 return get_boost_mode_x86(cpu); 217 + else 218 + get_boost_mode_generic(cpu); 229 219 230 220 freqs = cpufreq_get_boost_frequencies(cpu); 231 221 if (freqs) {
+3 -2
tools/power/cpupower/utils/cpupower-set.c
··· 21 21 {"epp", required_argument, NULL, 'e'}, 22 22 {"amd-pstate-mode", required_argument, NULL, 'm'}, 23 23 {"turbo-boost", required_argument, NULL, 't'}, 24 + {"boost", required_argument, NULL, 't'}, 24 25 { }, 25 26 }; 26 27 ··· 63 62 64 63 params.params = 0; 65 64 /* parameter parsing */ 66 - while ((ret = getopt_long(argc, argv, "b:e:m:", 67 - set_opts, NULL)) != -1) { 65 + while ((ret = getopt_long(argc, argv, "b:e:m:t:", 66 + set_opts, NULL)) != -1) { 68 67 switch (ret) { 69 68 case 'b': 70 69 if (params.perf_bias)
+7 -7
tools/power/cpupower/utils/helpers/helpers.h
··· 103 103 104 104 /* cpuid and cpuinfo helpers **************************/ 105 105 106 + int cpufreq_has_generic_boost_support(bool *active); 107 + int cpupower_set_turbo_boost(int turbo_boost); 108 + 106 109 /* X86 ONLY ****************************************/ 107 110 #if defined(__i386__) || defined(__x86_64__) 108 111 ··· 121 118 122 119 extern int cpupower_set_epp(unsigned int cpu, char *epp); 123 120 extern int cpupower_set_amd_pstate_mode(char *mode); 124 - extern int cpupower_set_turbo_boost(int turbo_boost); 125 121 126 122 /* Read/Write msr ****************************/ 127 123 ··· 141 139 142 140 /* AMD HW pstate decoding **************************/ 143 141 144 - extern int cpufreq_has_boost_support(unsigned int cpu, int *support, 145 - int *active, int * states); 142 + int cpufreq_has_x86_boost_support(unsigned int cpu, int *support, 143 + int *active, int *states); 146 144 147 145 /* AMD P-State stuff **************************/ 148 146 bool cpupower_amd_pstate_enabled(void); ··· 183 181 { return -1; }; 184 182 static inline int cpupower_set_amd_pstate_mode(char *mode) 185 183 { return -1; }; 186 - static inline int cpupower_set_turbo_boost(int turbo_boost) 187 - { return -1; }; 188 184 189 185 /* Read/Write msr ****************************/ 190 186 191 - static inline int cpufreq_has_boost_support(unsigned int cpu, int *support, 192 - int *active, int * states) 187 + static inline int cpufreq_has_x86_boost_support(unsigned int cpu, int *support, 188 + int *active, int *states) 193 189 { return -1; } 194 190 195 191 static inline bool cpupower_amd_pstate_enabled(void)
+54 -22
tools/power/cpupower/utils/helpers/misc.c
··· 8 8 #include "helpers/helpers.h" 9 9 #include "helpers/sysfs.h" 10 10 #include "cpufreq.h" 11 + #include "cpupower_intern.h" 11 12 12 13 #if defined(__i386__) || defined(__x86_64__) 13 14 14 - #include "cpupower_intern.h" 15 - 16 15 #define MSR_AMD_HWCR 0xc0010015 17 16 18 - int cpufreq_has_boost_support(unsigned int cpu, int *support, int *active, 19 - int *states) 17 + int cpufreq_has_x86_boost_support(unsigned int cpu, int *support, int *active, 18 + int *states) 20 19 { 21 20 int ret; 22 21 unsigned long long val; ··· 123 124 return 0; 124 125 } 125 126 126 - int cpupower_set_turbo_boost(int turbo_boost) 127 - { 128 - char path[SYSFS_PATH_MAX]; 129 - char linebuf[2] = {}; 130 - 131 - snprintf(path, sizeof(path), PATH_TO_CPU "cpufreq/boost"); 132 - 133 - if (!is_valid_path(path)) 134 - return -1; 135 - 136 - snprintf(linebuf, sizeof(linebuf), "%d", turbo_boost); 137 - 138 - if (cpupower_write_sysfs(path, linebuf, 2) <= 0) 139 - return -1; 140 - 141 - return 0; 142 - } 143 - 144 127 bool cpupower_amd_pstate_enabled(void) 145 128 { 146 129 char *driver = cpufreq_get_driver(0); ··· 140 159 } 141 160 142 161 #endif /* #if defined(__i386__) || defined(__x86_64__) */ 162 + 163 + int cpufreq_has_generic_boost_support(bool *active) 164 + { 165 + char path[SYSFS_PATH_MAX]; 166 + char linebuf[2] = {}; 167 + unsigned long val; 168 + char *endp; 169 + 170 + snprintf(path, sizeof(path), PATH_TO_CPU "cpufreq/boost"); 171 + 172 + if (!is_valid_path(path)) 173 + return -EACCES; 174 + 175 + if (cpupower_read_sysfs(path, linebuf, 2) <= 0) 176 + return -EINVAL; 177 + 178 + val = strtoul(linebuf, &endp, 0); 179 + if (endp == linebuf || errno == ERANGE) 180 + return -EINVAL; 181 + 182 + switch (val) { 183 + case 0: 184 + *active = false; 185 + break; 186 + case 1: 187 + *active = true; 188 + break; 189 + default: 190 + return -EINVAL; 191 + } 192 + 193 + return 0; 194 + } 143 195 144 196 /* get_cpustate 145 197 * ··· 272 258 ((unsigned int)(speed % 1000) / 100)); 273 259 } 274 
260 } 261 + } 262 + 263 + int cpupower_set_turbo_boost(int turbo_boost) 264 + { 265 + char path[SYSFS_PATH_MAX]; 266 + char linebuf[2] = {}; 267 + 268 + snprintf(path, sizeof(path), PATH_TO_CPU "cpufreq/boost"); 269 + 270 + if (!is_valid_path(path)) 271 + return -1; 272 + 273 + snprintf(linebuf, sizeof(linebuf), "%d", turbo_boost); 274 + 275 + if (cpupower_write_sysfs(path, linebuf, 2) <= 0) 276 + return -1; 277 + 278 + return 0; 275 279 }
-3
tools/testing/selftests/coredump/stackdump_test.c
··· 446 446 if (info.coredump_mask & PIDFD_COREDUMPED) 447 447 goto out; 448 448 449 - if (read(fd_coredump, &c, 1) < 1) 450 - goto out; 451 - 452 449 exit_code = EXIT_SUCCESS; 453 450 out: 454 451 if (fd_peer_pidfd >= 0)
+1
tools/testing/selftests/damon/Makefile
··· 4 4 TEST_GEN_FILES += access_memory access_memory_even 5 5 6 6 TEST_FILES = _damon_sysfs.py 7 + TEST_FILES += drgn_dump_damon_status.py 7 8 8 9 # functionality tests 9 10 TEST_PROGS += sysfs.sh
+2 -1
tools/testing/selftests/drivers/net/bonding/Makefile
··· 10 10 mode-2-recovery-updelay.sh \ 11 11 bond_options.sh \ 12 12 bond-eth-type-change.sh \ 13 - bond_macvlan_ipvlan.sh 13 + bond_macvlan_ipvlan.sh \ 14 + bond_passive_lacp.sh 14 15 15 16 TEST_FILES := \ 16 17 lag_lib.sh \
+105
tools/testing/selftests/drivers/net/bonding/bond_passive_lacp.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Test if a bond interface works with lacp_active=off. 5 + 6 + # shellcheck disable=SC2034 7 + REQUIRE_MZ=no 8 + NUM_NETIFS=0 9 + lib_dir=$(dirname "$0") 10 + # shellcheck disable=SC1091 11 + source "$lib_dir"/../../../net/forwarding/lib.sh 12 + 13 + # shellcheck disable=SC2317 14 + check_port_state() 15 + { 16 + local netns=$1 17 + local port=$2 18 + local state=$3 19 + 20 + ip -n "${netns}" -d -j link show "$port" | \ 21 + jq -e ".[].linkinfo.info_slave_data.ad_actor_oper_port_state_str | index(\"${state}\") != null" > /dev/null 22 + } 23 + 24 + check_pkt_count() 25 + { 26 + RET=0 27 + local ns="$1" 28 + local iface="$2" 29 + 30 + # wait 65s, one per 30s 31 + slowwait_for_counter 65 2 tc_rule_handle_stats_get \ 32 + "dev ${iface} egress" 101 ".packets" "-n ${ns}" &> /dev/null 33 + } 34 + 35 + setup() { 36 + setup_ns c_ns s_ns 37 + 38 + # shellcheck disable=SC2154 39 + ip -n "${c_ns}" link add eth0 type veth peer name eth0 netns "${s_ns}" 40 + ip -n "${c_ns}" link add eth1 type veth peer name eth1 netns "${s_ns}" 41 + 42 + # Add tc filter to count the pkts 43 + tc -n "${c_ns}" qdisc add dev eth0 clsact 44 + tc -n "${c_ns}" filter add dev eth0 egress handle 101 protocol 0x8809 matchall action pass 45 + tc -n "${s_ns}" qdisc add dev eth1 clsact 46 + tc -n "${s_ns}" filter add dev eth1 egress handle 101 protocol 0x8809 matchall action pass 47 + 48 + ip -n "${s_ns}" link add bond0 type bond mode 802.3ad lacp_active on lacp_rate fast 49 + ip -n "${s_ns}" link set eth0 master bond0 50 + ip -n "${s_ns}" link set eth1 master bond0 51 + 52 + ip -n "${c_ns}" link add bond0 type bond mode 802.3ad lacp_active off lacp_rate fast 53 + ip -n "${c_ns}" link set eth0 master bond0 54 + ip -n "${c_ns}" link set eth1 master bond0 55 + 56 + } 57 + 58 + trap cleanup_all_ns EXIT 59 + setup 60 + 61 + # The bond will send 2 lacpdu pkts during init time, let's wait at least 2s 62 + # after interface up 63 + ip -n 
"${c_ns}" link set bond0 up 64 + sleep 2 65 + 66 + # 1. The passive side shouldn't send LACPDU. 67 + check_pkt_count "${c_ns}" "eth0" && RET=1 68 + log_test "802.3ad lacp_active off" "init port" 69 + 70 + ip -n "${s_ns}" link set bond0 up 71 + # 2. The passive side should not have the 'active' flag. 72 + RET=0 73 + slowwait 2 check_port_state "${c_ns}" "eth0" "active" && RET=1 74 + log_test "802.3ad lacp_active off" "port state active" 75 + 76 + # 3. The active side should have the 'active' flag. 77 + RET=0 78 + slowwait 2 check_port_state "${s_ns}" "eth0" "active" || RET=1 79 + log_test "802.3ad lacp_active on" "port state active" 80 + 81 + # 4. Make sure the connection is not expired. 82 + RET=0 83 + slowwait 5 check_port_state "${s_ns}" "eth0" "distributing" 84 + slowwait 10 check_port_state "${s_ns}" "eth0" "expired" && RET=1 85 + log_test "bond 802.3ad lacp_active off" "port connection" 86 + 87 + # After testing, disconnect one port on each side to check the state. 88 + ip -n "${s_ns}" link set eth0 nomaster 89 + ip -n "${s_ns}" link set eth0 up 90 + ip -n "${c_ns}" link set eth1 nomaster 91 + ip -n "${c_ns}" link set eth1 up 92 + # Due to Periodic Machine and Rx Machine state change, the bond will still 93 + # send lacpdu pkts in a few seconds. sleep at lease 5s to make sure 94 + # negotiation finished 95 + sleep 5 96 + 97 + # 5. The active side should keep sending LACPDU. 98 + check_pkt_count "${s_ns}" "eth1" || RET=1 99 + log_test "bond 802.3ad lacp_active on" "port pkt after disconnect" 100 + 101 + # 6. The passive side shouldn't send LACPDU anymore. 102 + check_pkt_count "${c_ns}" "eth0" && RET=1 103 + log_test "bond 802.3ad lacp_active off" "port pkt after disconnect" 104 + 105 + exit "$EXIT_STATUS"
+1
tools/testing/selftests/drivers/net/bonding/config
··· 6 6 CONFIG_IPVLAN=y 7 7 CONFIG_NET_ACT_GACT=y 8 8 CONFIG_NET_CLS_FLOWER=y 9 + CONFIG_NET_CLS_MATCHALL=m 9 10 CONFIG_NET_SCH_INGRESS=y 10 11 CONFIG_NLMON=y 11 12 CONFIG_VETH=y
+261 -3
tools/testing/selftests/mm/mremap_test.c
··· 5 5 #define _GNU_SOURCE 6 6 7 7 #include <errno.h> 8 + #include <fcntl.h> 9 + #include <linux/userfaultfd.h> 8 10 #include <stdlib.h> 9 11 #include <stdio.h> 10 12 #include <string.h> 13 + #include <sys/ioctl.h> 11 14 #include <sys/mman.h> 15 + #include <syscall.h> 12 16 #include <time.h> 13 17 #include <stdbool.h> 14 18 ··· 172 168 173 169 if (first_val <= start && second_val >= end) { 174 170 success = true; 171 + fflush(maps_fp); 175 172 break; 176 173 } 177 174 } 178 175 179 176 return success; 177 + } 178 + 179 + /* Check if [ptr, ptr + size) mapped in /proc/self/maps. */ 180 + static bool is_ptr_mapped(FILE *maps_fp, void *ptr, unsigned long size) 181 + { 182 + unsigned long start = (unsigned long)ptr; 183 + unsigned long end = start + size; 184 + 185 + return is_range_mapped(maps_fp, start, end); 180 186 } 181 187 182 188 /* ··· 747 733 dont_unmap ? " [dontunnmap]" : ""); 748 734 } 749 735 736 + #ifdef __NR_userfaultfd 737 + static void mremap_move_multi_invalid_vmas(FILE *maps_fp, 738 + unsigned long page_size) 739 + { 740 + char *test_name = "mremap move multiple invalid vmas"; 741 + const size_t size = 10 * page_size; 742 + bool success = true; 743 + char *ptr, *tgt_ptr; 744 + int uffd, err, i; 745 + void *res; 746 + struct uffdio_api api = { 747 + .api = UFFD_API, 748 + .features = UFFD_EVENT_PAGEFAULT, 749 + }; 750 + 751 + uffd = syscall(__NR_userfaultfd, O_NONBLOCK); 752 + if (uffd == -1) { 753 + err = errno; 754 + perror("userfaultfd"); 755 + if (err == EPERM) { 756 + ksft_test_result_skip("%s - missing uffd", test_name); 757 + return; 758 + } 759 + success = false; 760 + goto out; 761 + } 762 + if (ioctl(uffd, UFFDIO_API, &api)) { 763 + perror("ioctl UFFDIO_API"); 764 + success = false; 765 + goto out_close_uffd; 766 + } 767 + 768 + ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, 769 + MAP_PRIVATE | MAP_ANON, -1, 0); 770 + if (ptr == MAP_FAILED) { 771 + perror("mmap"); 772 + success = false; 773 + goto out_close_uffd; 774 + } 775 + 776 + tgt_ptr = 
mmap(NULL, size, PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0); 777 + if (tgt_ptr == MAP_FAILED) { 778 + perror("mmap"); 779 + success = false; 780 + goto out_close_uffd; 781 + } 782 + if (munmap(tgt_ptr, size)) { 783 + perror("munmap"); 784 + success = false; 785 + goto out_unmap; 786 + } 787 + 788 + /* 789 + * Unmap so we end up with: 790 + * 791 + * 0 2 4 6 8 10 offset in buffer 792 + * |*| |*| |*| |*| |*| 793 + * |*| |*| |*| |*| |*| 794 + * 795 + * Additionally, register each with UFFD. 796 + */ 797 + for (i = 0; i < 10; i += 2) { 798 + void *unmap_ptr = &ptr[(i + 1) * page_size]; 799 + unsigned long start = (unsigned long)&ptr[i * page_size]; 800 + struct uffdio_register reg = { 801 + .range = { 802 + .start = start, 803 + .len = page_size, 804 + }, 805 + .mode = UFFDIO_REGISTER_MODE_MISSING, 806 + }; 807 + 808 + if (ioctl(uffd, UFFDIO_REGISTER, &reg) == -1) { 809 + perror("ioctl UFFDIO_REGISTER"); 810 + success = false; 811 + goto out_unmap; 812 + } 813 + if (munmap(unmap_ptr, page_size)) { 814 + perror("munmap"); 815 + success = false; 816 + goto out_unmap; 817 + } 818 + } 819 + 820 + /* 821 + * Now try to move the entire range which is invalid for multi VMA move. 822 + * 823 + * This will fail, and no VMA should be moved, as we check this ahead of 824 + * time. 825 + */ 826 + res = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr); 827 + err = errno; 828 + if (res != MAP_FAILED) { 829 + fprintf(stderr, "mremap() succeeded for multi VMA uffd armed\n"); 830 + success = false; 831 + goto out_unmap; 832 + } 833 + if (err != EFAULT) { 834 + errno = err; 835 + perror("mrmeap() unexpected error"); 836 + success = false; 837 + goto out_unmap; 838 + } 839 + if (is_ptr_mapped(maps_fp, tgt_ptr, page_size)) { 840 + fprintf(stderr, 841 + "Invalid uffd-armed VMA at start of multi range moved\n"); 842 + success = false; 843 + goto out_unmap; 844 + } 845 + 846 + /* 847 + * Now try to move a single VMA, this should succeed as not multi VMA 848 + * move. 
849 + */
 850 + res = mremap(ptr, page_size, page_size,
 851 + MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr);
 852 + if (res == MAP_FAILED) {
 853 + perror("mremap single invalid-multi VMA");
 854 + success = false;
 855 + goto out_unmap;
 856 + }
 857 + 
 858 + /*
 859 + * Unmap the VMA, and remap a non-uffd registered (therefore, multi VMA
 860 + * move valid) VMA at the start of ptr range.
 861 + */
 862 + if (munmap(tgt_ptr, page_size)) {
 863 + perror("munmap");
 864 + success = false;
 865 + goto out_unmap;
 866 + }
 867 + res = mmap(ptr, page_size, PROT_READ | PROT_WRITE,
 868 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
 869 + if (res == MAP_FAILED) {
 870 + perror("mmap");
 871 + success = false;
 872 + goto out_unmap;
 873 + }
 874 + 
 875 + /*
 876 + * Now try to move the entire range, we should succeed in moving the
 877 + * first VMA, but no others, and report a failure.
 878 + */
 879 + res = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr);
 880 + err = errno;
 881 + if (res != MAP_FAILED) {
 882 + fprintf(stderr, "mremap() succeeded for multi VMA uffd armed\n");
 883 + success = false;
 884 + goto out_unmap;
 885 + }
 886 + if (err != EFAULT) {
 887 + errno = err;
 888 + perror("mremap() unexpected error");
 889 + success = false;
 890 + goto out_unmap;
 891 + }
 892 + if (!is_ptr_mapped(maps_fp, tgt_ptr, page_size)) {
 893 + fprintf(stderr, "Valid VMA not moved\n");
 894 + success = false;
 895 + goto out_unmap;
 896 + }
 897 + 
 898 + /*
 899 + * Unmap the VMA, and map valid VMA at start of ptr range, and replace
 900 + * all existing multi-move invalid VMAs, except the last, with valid
 901 + * multi-move VMAs. 
902 + */
 903 + if (munmap(tgt_ptr, page_size)) {
 904 + perror("munmap");
 905 + success = false;
 906 + goto out_unmap;
 907 + }
 908 + if (munmap(ptr, size - 2 * page_size)) {
 909 + perror("munmap");
 910 + success = false;
 911 + goto out_unmap;
 912 + }
 913 + for (i = 0; i < 8; i += 2) {
 914 + res = mmap(&ptr[i * page_size], page_size,
 915 + PROT_READ | PROT_WRITE,
 916 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
 917 + if (res == MAP_FAILED) {
 918 + perror("mmap");
 919 + success = false;
 920 + goto out_unmap;
 921 + }
 922 + }
 923 + 
 924 + /*
 925 + * Now try to move the entire range, we should succeed in moving all but
 926 + * the last VMA, and report a failure.
 927 + */
 928 + res = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr);
 929 + err = errno;
 930 + if (res != MAP_FAILED) {
 931 + fprintf(stderr, "mremap() succeeded for multi VMA uffd armed\n");
 932 + success = false;
 933 + goto out_unmap;
 934 + }
 935 + if (err != EFAULT) {
 936 + errno = err;
 937 + perror("mremap() unexpected error");
 938 + success = false;
 939 + goto out_unmap;
 940 + }
 941 + 
 942 + for (i = 0; i < 10; i += 2) {
 943 + bool is_mapped = is_ptr_mapped(maps_fp,
 944 + &tgt_ptr[i * page_size], page_size);
 945 + 
 946 + if (i < 8 && !is_mapped) {
 947 + fprintf(stderr, "Valid VMA not moved at %d\n", i);
 948 + success = false;
 949 + goto out_unmap;
 950 + } else if (i == 8 && is_mapped) {
 951 + fprintf(stderr, "Invalid VMA moved at %d\n", i);
 952 + success = false;
 953 + goto out_unmap;
 954 + }
 955 + }
 956 + 
 957 + out_unmap:
 958 + if (munmap(tgt_ptr, size))
 959 + perror("munmap tgt");
 960 + if (munmap(ptr, size))
 961 + perror("munmap src");
 962 + out_close_uffd:
 963 + close(uffd);
 964 + out:
 965 + if (success)
 966 + ksft_test_result_pass("%s\n", test_name);
 967 + else
 968 + ksft_test_result_fail("%s\n", test_name);
 969 + }
 970 + #else
 971 + static void mremap_move_multi_invalid_vmas(FILE *maps_fp, unsigned long page_size)
 972 + {
 973 + char *test_name = "mremap move multiple invalid vmas";
 974 + 
975 + ksft_test_result_skip("%s - missing uffd", test_name); 976 + } 977 + #endif /* __NR_userfaultfd */ 978 + 750 979 /* Returns the time taken for the remap on success else returns -1. */ 751 980 static long long remap_region(struct config c, unsigned int threshold_mb, 752 981 char *rand_addr) ··· 1331 1074 char *rand_addr; 1332 1075 size_t rand_size; 1333 1076 int num_expand_tests = 2; 1334 - int num_misc_tests = 8; 1077 + int num_misc_tests = 9; 1335 1078 struct test test_cases[MAX_TEST] = {}; 1336 1079 struct test perf_test_cases[MAX_PERF_TEST]; 1337 1080 int page_size; ··· 1454 1197 mremap_expand_merge(maps_fp, page_size); 1455 1198 mremap_expand_merge_offset(maps_fp, page_size); 1456 1199 1457 - fclose(maps_fp); 1458 - 1459 1200 mremap_move_within_range(pattern_seed, rand_addr); 1460 1201 mremap_move_1mb_from_start(pattern_seed, rand_addr); 1461 1202 mremap_shrink_multiple_vmas(page_size, /* inplace= */true); ··· 1462 1207 mremap_move_multiple_vmas(pattern_seed, page_size, /* dontunmap= */ true); 1463 1208 mremap_move_multiple_vmas_split(pattern_seed, page_size, /* dontunmap= */ false); 1464 1209 mremap_move_multiple_vmas_split(pattern_seed, page_size, /* dontunmap= */ true); 1210 + mremap_move_multi_invalid_vmas(maps_fp, page_size); 1211 + 1212 + fclose(maps_fp); 1465 1213 1466 1214 if (run_perf_tests) { 1467 1215 ksft_print_msg("\n%s\n",
+64 -13
tools/testing/selftests/mount_setattr/mount_setattr_test.c
··· 107 107 #endif 108 108 #endif 109 109 110 + #ifndef __NR_open_tree_attr 111 + #if defined __alpha__ 112 + #define __NR_open_tree_attr 577 113 + #elif defined _MIPS_SIM 114 + #if _MIPS_SIM == _MIPS_SIM_ABI32 /* o32 */ 115 + #define __NR_open_tree_attr (467 + 4000) 116 + #endif 117 + #if _MIPS_SIM == _MIPS_SIM_NABI32 /* n32 */ 118 + #define __NR_open_tree_attr (467 + 6000) 119 + #endif 120 + #if _MIPS_SIM == _MIPS_SIM_ABI64 /* n64 */ 121 + #define __NR_open_tree_attr (467 + 5000) 122 + #endif 123 + #elif defined __ia64__ 124 + #define __NR_open_tree_attr (467 + 1024) 125 + #else 126 + #define __NR_open_tree_attr 467 127 + #endif 128 + #endif 129 + 110 130 #ifndef MOUNT_ATTR_IDMAP 111 131 #define MOUNT_ATTR_IDMAP 0x00100000 112 132 #endif ··· 139 119 struct mount_attr *attr, size_t size) 140 120 { 141 121 return syscall(__NR_mount_setattr, dfd, path, flags, attr, size); 122 + } 123 + 124 + static inline int sys_open_tree_attr(int dfd, const char *path, unsigned int flags, 125 + struct mount_attr *attr, size_t size) 126 + { 127 + return syscall(__NR_open_tree_attr, dfd, path, flags, attr, size); 142 128 } 143 129 144 130 static ssize_t write_nointr(int fd, const void *buf, size_t count) ··· 1248 1222 attr.userns_fd = get_userns_fd(0, 10000, 10000); 1249 1223 ASSERT_GE(attr.userns_fd, 0); 1250 1224 ASSERT_NE(sys_mount_setattr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0); 1225 + /* 1226 + * Make sure that open_tree_attr() without OPEN_TREE_CLONE is not a way 1227 + * to bypass this mount_setattr() restriction. 
1228 + */ 1229 + ASSERT_LT(sys_open_tree_attr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0); 1230 + 1251 1231 ASSERT_EQ(close(attr.userns_fd), 0); 1252 1232 ASSERT_EQ(close(open_tree_fd), 0); 1253 1233 } ··· 1287 1255 ASSERT_GE(attr.userns_fd, 0); 1288 1256 ASSERT_NE(sys_mount_setattr(open_tree_fd, "", AT_EMPTY_PATH, &attr, 1289 1257 sizeof(attr)), 0); 1258 + /* 1259 + * Make sure that open_tree_attr() without OPEN_TREE_CLONE is not a way 1260 + * to bypass this mount_setattr() restriction. 1261 + */ 1262 + ASSERT_LT(sys_open_tree_attr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0); 1263 + 1290 1264 ASSERT_EQ(close(attr.userns_fd), 0); 1291 1265 ASSERT_EQ(close(open_tree_fd), 0); 1292 1266 } ··· 1359 1321 ASSERT_EQ(close(open_tree_fd), 0); 1360 1322 } 1361 1323 1324 + static bool expected_uid_gid(int dfd, const char *path, int flags, 1325 + uid_t expected_uid, gid_t expected_gid) 1326 + { 1327 + int ret; 1328 + struct stat st; 1329 + 1330 + ret = fstatat(dfd, path, &st, flags); 1331 + if (ret < 0) 1332 + return false; 1333 + 1334 + return st.st_uid == expected_uid && st.st_gid == expected_gid; 1335 + } 1336 + 1362 1337 /** 1363 1338 * Validate that currently changing the idmapping of an idmapped mount fails. 1364 1339 */ ··· 1381 1330 struct mount_attr attr = { 1382 1331 .attr_set = MOUNT_ATTR_IDMAP, 1383 1332 }; 1333 + 1334 + ASSERT_TRUE(expected_uid_gid(-EBADF, "/mnt/D", 0, 0, 0)); 1384 1335 1385 1336 if (!mount_setattr_supported()) 1386 1337 SKIP(return, "mount_setattr syscall not supported"); ··· 1401 1348 AT_EMPTY_PATH, &attr, sizeof(attr)), 0); 1402 1349 ASSERT_EQ(close(attr.userns_fd), 0); 1403 1350 1351 + EXPECT_FALSE(expected_uid_gid(open_tree_fd, ".", 0, 0, 0)); 1352 + EXPECT_TRUE(expected_uid_gid(open_tree_fd, ".", 0, 10000, 10000)); 1353 + 1404 1354 /* Change idmapping on a detached mount that is already idmapped. 
*/ 1405 1355 attr.userns_fd = get_userns_fd(0, 20000, 10000); 1406 1356 ASSERT_GE(attr.userns_fd, 0); 1407 1357 ASSERT_NE(sys_mount_setattr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0); 1358 + /* 1359 + * Make sure that open_tree_attr() without OPEN_TREE_CLONE is not a way 1360 + * to bypass this mount_setattr() restriction. 1361 + */ 1362 + EXPECT_LT(sys_open_tree_attr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0); 1363 + EXPECT_FALSE(expected_uid_gid(open_tree_fd, ".", 0, 20000, 20000)); 1364 + EXPECT_TRUE(expected_uid_gid(open_tree_fd, ".", 0, 10000, 10000)); 1365 + 1408 1366 ASSERT_EQ(close(attr.userns_fd), 0); 1409 1367 ASSERT_EQ(close(open_tree_fd), 0); 1410 - } 1411 - 1412 - static bool expected_uid_gid(int dfd, const char *path, int flags, 1413 - uid_t expected_uid, gid_t expected_gid) 1414 - { 1415 - int ret; 1416 - struct stat st; 1417 - 1418 - ret = fstatat(dfd, path, &st, flags); 1419 - if (ret < 0) 1420 - return false; 1421 - 1422 - return st.st_uid == expected_uid && st.st_gid == expected_gid; 1423 1368 } 1424 1369 1425 1370 TEST_F(mount_setattr_idmapped, idmap_mount_tree_invalid)
+29
tools/testing/selftests/net/forwarding/router.sh
··· 18 18 # | 2001:db8:1::1/64 2001:db8:2::1/64 |
 19 19 # | |
 20 20 # +-----------------------------------------------------------------+
 21 + #
 22 + #shellcheck disable=SC2034 # SC doesn't see our uses of global variables
 21 23 
 22 24 ALL_TESTS="
 23 25 ping_ipv4
··· 29 27 ipv4_sip_equal_dip
 30 28 ipv6_sip_equal_dip
 31 29 ipv4_dip_link_local
 30 + ipv4_sip_link_local
 32 31 "
 33 32 
 34 33 NUM_NETIFS=4
··· 331 328 ip route del 169.254.1.0/24 dev $rp2
 332 329 ip neigh del 169.254.1.1 lladdr 00:11:22:33:44:55 dev $rp2
 333 330 tc filter del dev $rp2 egress protocol ip pref 1 handle 101 flower
 331 + }
 332 + 
 333 + ipv4_sip_link_local()
 334 + {
 335 + local sip=169.254.1.1
 336 + 
 337 + RET=0
 338 + 
 339 + # Disable rpfilter to prevent packets from being dropped because of it.
 340 + sysctl_set net.ipv4.conf.all.rp_filter 0
 341 + sysctl_set net.ipv4.conf."$rp1".rp_filter 0
 342 + 
 343 + tc filter add dev "$rp2" egress protocol ip pref 1 handle 101 \
 344 + flower src_ip "$sip" action pass
 345 + 
 346 + $MZ "$h1" -t udp "sp=54321,dp=12345" -c 5 -d 1msec -b "$rp1mac" \
 347 + -A "$sip" -B 198.51.100.2 -q
 348 + 
 349 + tc_check_packets "dev $rp2 egress" 101 5
 350 + check_err $? "Packets were dropped"
 351 + 
 352 + log_test "IPv4 source IP is link-local"
 353 + 
 354 + tc filter del dev "$rp2" egress protocol ip pref 1 handle 101 flower
 355 + sysctl_restore net.ipv4.conf."$rp1".rp_filter
 356 + sysctl_restore net.ipv4.conf.all.rp_filter
 334 357 }
 335 358 
 336 359 trap cleanup EXIT
+3 -2
tools/testing/selftests/net/mptcp/mptcp_connect.c
··· 183 183 struct addrinfo *hints, 184 184 struct addrinfo **res) 185 185 { 186 - again: 187 - int err = getaddrinfo(node, service, hints, res); 186 + int err; 188 187 188 + again: 189 + err = getaddrinfo(node, service, hints, res); 189 190 if (err) { 190 191 const char *errstr; 191 192
+3 -2
tools/testing/selftests/net/mptcp/mptcp_inq.c
··· 75 75 struct addrinfo *hints, 76 76 struct addrinfo **res) 77 77 { 78 - again: 79 - int err = getaddrinfo(node, service, hints, res); 78 + int err; 80 79 80 + again: 81 + err = getaddrinfo(node, service, hints, res); 81 82 if (err) { 82 83 const char *errstr; 83 84
+1
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 3842 3842 # remove and re-add 3843 3843 if reset_with_events "delete re-add signal" && 3844 3844 mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 3845 + ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=0 3845 3846 pm_nl_set_limits $ns1 0 3 3846 3847 pm_nl_set_limits $ns2 3 3 3847 3848 pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal
+3 -2
tools/testing/selftests/net/mptcp/mptcp_sockopt.c
··· 162 162 struct addrinfo *hints, 163 163 struct addrinfo **res) 164 164 { 165 - again: 166 - int err = getaddrinfo(node, service, hints, res); 165 + int err; 167 166 167 + again: 168 + err = getaddrinfo(node, service, hints, res); 168 169 if (err) { 169 170 const char *errstr; 170 171
+1
tools/testing/selftests/net/mptcp/pm_netlink.sh
··· 198 198 check "get_limits" "${default_limits}" "subflows above hard limit"
 199 199 
 200 200 set_limits 8 8
 201 + flush_endpoint # to make sure it doesn't affect the limits
 201 202 check "get_limits" "$(format_limits 8 8)" "set limits"
 202 203 
 203 204 flush_endpoint
+300 -12
tools/testing/selftests/net/tls.c
··· 181 181 return sendmsg(fd, &msg, flags); 182 182 } 183 183 184 - static int tls_recv_cmsg(struct __test_metadata *_metadata, 185 - int fd, unsigned char record_type, 186 - void *data, size_t len, int flags) 184 + static int __tls_recv_cmsg(struct __test_metadata *_metadata, 185 + int fd, unsigned char *ctype, 186 + void *data, size_t len, int flags) 187 187 { 188 188 char cbuf[CMSG_SPACE(sizeof(char))]; 189 189 struct cmsghdr *cmsg; 190 - unsigned char ctype; 191 190 struct msghdr msg; 192 191 struct iovec vec; 193 192 int n; ··· 205 206 EXPECT_NE(cmsg, NULL); 206 207 EXPECT_EQ(cmsg->cmsg_level, SOL_TLS); 207 208 EXPECT_EQ(cmsg->cmsg_type, TLS_GET_RECORD_TYPE); 208 - ctype = *((unsigned char *)CMSG_DATA(cmsg)); 209 + if (ctype) 210 + *ctype = *((unsigned char *)CMSG_DATA(cmsg)); 211 + 212 + return n; 213 + } 214 + 215 + static int tls_recv_cmsg(struct __test_metadata *_metadata, 216 + int fd, unsigned char record_type, 217 + void *data, size_t len, int flags) 218 + { 219 + unsigned char ctype; 220 + int n; 221 + 222 + n = __tls_recv_cmsg(_metadata, fd, &ctype, data, len, flags); 209 223 EXPECT_EQ(ctype, record_type); 210 224 211 225 return n; ··· 2176 2164 } 2177 2165 } 2178 2166 2167 + struct raw_rec { 2168 + unsigned int plain_len; 2169 + unsigned char plain_data[100]; 2170 + unsigned int cipher_len; 2171 + unsigned char cipher_data[128]; 2172 + }; 2173 + 2174 + /* TLS 1.2, AES_CCM, data, seqno:0, plaintext: 'Hello world' */ 2175 + static const struct raw_rec id0_data_l11 = { 2176 + .plain_len = 11, 2177 + .plain_data = { 2178 + 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f, 2179 + 0x72, 0x6c, 0x64, 2180 + }, 2181 + .cipher_len = 40, 2182 + .cipher_data = { 2183 + 0x17, 0x03, 0x03, 0x00, 0x23, 0x00, 0x00, 0x00, 2184 + 0x00, 0x00, 0x00, 0x00, 0x00, 0x26, 0xa2, 0x33, 2185 + 0xde, 0x8d, 0x94, 0xf0, 0x29, 0x6c, 0xb1, 0xaf, 2186 + 0x6a, 0x75, 0xb2, 0x93, 0xad, 0x45, 0xd5, 0xfd, 2187 + 0x03, 0x51, 0x57, 0x8f, 0xf9, 0xcc, 0x3b, 0x42, 2188 + }, 2189 + }; 2190 + 2191 
+ /* TLS 1.2, AES_CCM, ctrl, seqno:0, plaintext: '' */ 2192 + static const struct raw_rec id0_ctrl_l0 = { 2193 + .plain_len = 0, 2194 + .plain_data = { 2195 + }, 2196 + .cipher_len = 29, 2197 + .cipher_data = { 2198 + 0x16, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00, 2199 + 0x00, 0x00, 0x00, 0x00, 0x00, 0x13, 0x38, 0x7b, 2200 + 0xa6, 0x1c, 0xdd, 0xa7, 0x19, 0x33, 0xab, 0xae, 2201 + 0x88, 0xe1, 0xd2, 0x08, 0x4f, 2202 + }, 2203 + }; 2204 + 2205 + /* TLS 1.2, AES_CCM, data, seqno:0, plaintext: '' */ 2206 + static const struct raw_rec id0_data_l0 = { 2207 + .plain_len = 0, 2208 + .plain_data = { 2209 + }, 2210 + .cipher_len = 29, 2211 + .cipher_data = { 2212 + 0x17, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00, 2213 + 0x00, 0x00, 0x00, 0x00, 0x00, 0xc5, 0x37, 0x90, 2214 + 0x70, 0x45, 0x89, 0xfb, 0x5c, 0xc7, 0x89, 0x03, 2215 + 0x68, 0x80, 0xd3, 0xd8, 0xcc, 2216 + }, 2217 + }; 2218 + 2219 + /* TLS 1.2, AES_CCM, data, seqno:1, plaintext: 'Hello world' */ 2220 + static const struct raw_rec id1_data_l11 = { 2221 + .plain_len = 11, 2222 + .plain_data = { 2223 + 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f, 2224 + 0x72, 0x6c, 0x64, 2225 + }, 2226 + .cipher_len = 40, 2227 + .cipher_data = { 2228 + 0x17, 0x03, 0x03, 0x00, 0x23, 0x00, 0x00, 0x00, 2229 + 0x00, 0x00, 0x00, 0x00, 0x01, 0x3a, 0x1a, 0x9c, 2230 + 0xd0, 0xa8, 0x9a, 0xd6, 0x69, 0xd6, 0x1a, 0xe3, 2231 + 0xb5, 0x1f, 0x0d, 0x2c, 0xe2, 0x97, 0x46, 0xff, 2232 + 0x2b, 0xcc, 0x5a, 0xc4, 0xa3, 0xb9, 0xef, 0xba, 2233 + }, 2234 + }; 2235 + 2236 + /* TLS 1.2, AES_CCM, ctrl, seqno:1, plaintext: '' */ 2237 + static const struct raw_rec id1_ctrl_l0 = { 2238 + .plain_len = 0, 2239 + .plain_data = { 2240 + }, 2241 + .cipher_len = 29, 2242 + .cipher_data = { 2243 + 0x16, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00, 2244 + 0x00, 0x00, 0x00, 0x00, 0x01, 0x3e, 0xf0, 0xfe, 2245 + 0xee, 0xd9, 0xe2, 0x5d, 0xc7, 0x11, 0x4c, 0xe6, 2246 + 0xb4, 0x7e, 0xef, 0x40, 0x2b, 2247 + }, 2248 + }; 2249 + 2250 + /* TLS 1.2, AES_CCM, data, seqno:1, plaintext: '' 
*/ 2251 + static const struct raw_rec id1_data_l0 = { 2252 + .plain_len = 0, 2253 + .plain_data = { 2254 + }, 2255 + .cipher_len = 29, 2256 + .cipher_data = { 2257 + 0x17, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00, 2258 + 0x00, 0x00, 0x00, 0x00, 0x01, 0xce, 0xfc, 0x86, 2259 + 0xc8, 0xf0, 0x55, 0xf9, 0x47, 0x3f, 0x74, 0xdc, 2260 + 0xc9, 0xbf, 0xfe, 0x5b, 0xb1, 2261 + }, 2262 + }; 2263 + 2264 + /* TLS 1.2, AES_CCM, ctrl, seqno:2, plaintext: 'Hello world' */ 2265 + static const struct raw_rec id2_ctrl_l11 = { 2266 + .plain_len = 11, 2267 + .plain_data = { 2268 + 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f, 2269 + 0x72, 0x6c, 0x64, 2270 + }, 2271 + .cipher_len = 40, 2272 + .cipher_data = { 2273 + 0x16, 0x03, 0x03, 0x00, 0x23, 0x00, 0x00, 0x00, 2274 + 0x00, 0x00, 0x00, 0x00, 0x02, 0xe5, 0x3d, 0x19, 2275 + 0x3d, 0xca, 0xb8, 0x16, 0xb6, 0xff, 0x79, 0x87, 2276 + 0x2a, 0x04, 0x11, 0x3d, 0xf8, 0x64, 0x5f, 0x36, 2277 + 0x8b, 0xa8, 0xee, 0x4c, 0x6d, 0x62, 0xa5, 0x00, 2278 + }, 2279 + }; 2280 + 2281 + /* TLS 1.2, AES_CCM, data, seqno:2, plaintext: 'Hello world' */ 2282 + static const struct raw_rec id2_data_l11 = { 2283 + .plain_len = 11, 2284 + .plain_data = { 2285 + 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f, 2286 + 0x72, 0x6c, 0x64, 2287 + }, 2288 + .cipher_len = 40, 2289 + .cipher_data = { 2290 + 0x17, 0x03, 0x03, 0x00, 0x23, 0x00, 0x00, 0x00, 2291 + 0x00, 0x00, 0x00, 0x00, 0x02, 0xe5, 0x3d, 0x19, 2292 + 0x3d, 0xca, 0xb8, 0x16, 0xb6, 0xff, 0x79, 0x87, 2293 + 0x8e, 0xa1, 0xd0, 0xcd, 0x33, 0xb5, 0x86, 0x2b, 2294 + 0x17, 0xf1, 0x52, 0x2a, 0x55, 0x62, 0x65, 0x11, 2295 + }, 2296 + }; 2297 + 2298 + /* TLS 1.2, AES_CCM, ctrl, seqno:2, plaintext: '' */ 2299 + static const struct raw_rec id2_ctrl_l0 = { 2300 + .plain_len = 0, 2301 + .plain_data = { 2302 + }, 2303 + .cipher_len = 29, 2304 + .cipher_data = { 2305 + 0x16, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00, 2306 + 0x00, 0x00, 0x00, 0x00, 0x02, 0xdc, 0x5c, 0x0e, 2307 + 0x41, 0xdd, 0xba, 0xd3, 0xcc, 0xcf, 0x6d, 0xd9, 2308 + 
0x06, 0xdb, 0x79, 0xe5, 0x5d, 2309 + }, 2310 + }; 2311 + 2312 + /* TLS 1.2, AES_CCM, data, seqno:2, plaintext: '' */ 2313 + static const struct raw_rec id2_data_l0 = { 2314 + .plain_len = 0, 2315 + .plain_data = { 2316 + }, 2317 + .cipher_len = 29, 2318 + .cipher_data = { 2319 + 0x17, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00, 2320 + 0x00, 0x00, 0x00, 0x00, 0x02, 0xc3, 0xca, 0x26, 2321 + 0x22, 0xe4, 0x25, 0xfb, 0x5f, 0x6d, 0xbf, 0x83, 2322 + 0x30, 0x48, 0x69, 0x1a, 0x47, 2323 + }, 2324 + }; 2325 + 2326 + FIXTURE(zero_len) 2327 + { 2328 + int fd, cfd; 2329 + bool notls; 2330 + }; 2331 + 2332 + FIXTURE_VARIANT(zero_len) 2333 + { 2334 + const struct raw_rec *recs[4]; 2335 + ssize_t recv_ret[4]; 2336 + }; 2337 + 2338 + FIXTURE_VARIANT_ADD(zero_len, data_data_data) 2339 + { 2340 + .recs = { &id0_data_l11, &id1_data_l11, &id2_data_l11, }, 2341 + .recv_ret = { 33, -EAGAIN, }, 2342 + }; 2343 + 2344 + FIXTURE_VARIANT_ADD(zero_len, data_0ctrl_data) 2345 + { 2346 + .recs = { &id0_data_l11, &id1_ctrl_l0, &id2_data_l11, }, 2347 + .recv_ret = { 11, 0, 11, -EAGAIN, }, 2348 + }; 2349 + 2350 + FIXTURE_VARIANT_ADD(zero_len, 0data_0data_0data) 2351 + { 2352 + .recs = { &id0_data_l0, &id1_data_l0, &id2_data_l0, }, 2353 + .recv_ret = { -EAGAIN, }, 2354 + }; 2355 + 2356 + FIXTURE_VARIANT_ADD(zero_len, 0data_0data_ctrl) 2357 + { 2358 + .recs = { &id0_data_l0, &id1_data_l0, &id2_ctrl_l11, }, 2359 + .recv_ret = { 0, 11, -EAGAIN, }, 2360 + }; 2361 + 2362 + FIXTURE_VARIANT_ADD(zero_len, 0data_0data_0ctrl) 2363 + { 2364 + .recs = { &id0_data_l0, &id1_data_l0, &id2_ctrl_l0, }, 2365 + .recv_ret = { 0, 0, -EAGAIN, }, 2366 + }; 2367 + 2368 + FIXTURE_VARIANT_ADD(zero_len, 0ctrl_0ctrl_0ctrl) 2369 + { 2370 + .recs = { &id0_ctrl_l0, &id1_ctrl_l0, &id2_ctrl_l0, }, 2371 + .recv_ret = { 0, 0, 0, -EAGAIN, }, 2372 + }; 2373 + 2374 + FIXTURE_VARIANT_ADD(zero_len, 0data_0data_data) 2375 + { 2376 + .recs = { &id0_data_l0, &id1_data_l0, &id2_data_l11, }, 2377 + .recv_ret = { 11, -EAGAIN, }, 2378 + }; 2379 + 
2380 + FIXTURE_VARIANT_ADD(zero_len, data_0data_0data) 2381 + { 2382 + .recs = { &id0_data_l11, &id1_data_l0, &id2_data_l0, }, 2383 + .recv_ret = { 11, -EAGAIN, }, 2384 + }; 2385 + 2386 + FIXTURE_SETUP(zero_len) 2387 + { 2388 + struct tls_crypto_info_keys tls12; 2389 + int ret; 2390 + 2391 + tls_crypto_info_init(TLS_1_2_VERSION, TLS_CIPHER_AES_CCM_128, 2392 + &tls12, 0); 2393 + 2394 + ulp_sock_pair(_metadata, &self->fd, &self->cfd, &self->notls); 2395 + if (self->notls) 2396 + return; 2397 + 2398 + /* Don't install keys on fd, we'll send raw records */ 2399 + ret = setsockopt(self->cfd, SOL_TLS, TLS_RX, &tls12, tls12.len); 2400 + ASSERT_EQ(ret, 0); 2401 + } 2402 + 2403 + FIXTURE_TEARDOWN(zero_len) 2404 + { 2405 + close(self->fd); 2406 + close(self->cfd); 2407 + } 2408 + 2409 + TEST_F(zero_len, test) 2410 + { 2411 + const struct raw_rec *const *rec; 2412 + unsigned char buf[128]; 2413 + int rec_off; 2414 + int i; 2415 + 2416 + for (i = 0; i < 4 && variant->recs[i]; i++) 2417 + EXPECT_EQ(send(self->fd, variant->recs[i]->cipher_data, 2418 + variant->recs[i]->cipher_len, 0), 2419 + variant->recs[i]->cipher_len); 2420 + 2421 + rec = &variant->recs[0]; 2422 + rec_off = 0; 2423 + for (i = 0; i < 4; i++) { 2424 + int j, ret; 2425 + 2426 + ret = variant->recv_ret[i] >= 0 ? 
variant->recv_ret[i] : -1; 2427 + EXPECT_EQ(__tls_recv_cmsg(_metadata, self->cfd, NULL, 2428 + buf, sizeof(buf), MSG_DONTWAIT), ret); 2429 + if (ret == -1) 2430 + EXPECT_EQ(errno, -variant->recv_ret[i]); 2431 + if (variant->recv_ret[i] == -EAGAIN) 2432 + break; 2433 + 2434 + for (j = 0; j < ret; j++) { 2435 + while (rec_off == (*rec)->plain_len) { 2436 + rec++; 2437 + rec_off = 0; 2438 + } 2439 + EXPECT_EQ(buf[j], (*rec)->plain_data[rec_off]); 2440 + rec_off++; 2441 + } 2442 + } 2443 + }; 2444 + 2179 2445 FIXTURE(tls_err) 2180 2446 { 2181 2447 int fd, cfd; ··· 3038 2748 pid = fork(); 3039 2749 ASSERT_GE(pid, 0); 3040 2750 if (!pid) { 3041 - EXPECT_EQ(recv(cfd, buf, sizeof(buf), MSG_WAITALL), 3042 - sizeof(buf)); 2751 + EXPECT_EQ(recv(cfd, buf, sizeof(buf) / 2, MSG_WAITALL), 2752 + sizeof(buf) / 2); 3043 2753 exit(!__test_passed(_metadata)); 3044 2754 } 3045 2755 3046 - usleep(2000); 2756 + usleep(10000); 3047 2757 ASSERT_EQ(setsockopt(fd, SOL_TLS, TLS_TX, &tls, tls.len), 0); 3048 2758 ASSERT_EQ(setsockopt(cfd, SOL_TLS, TLS_RX, &tls, tls.len), 0); 3049 2759 3050 2760 EXPECT_EQ(send(fd, buf, sizeof(buf), 0), sizeof(buf)); 3051 - usleep(2000); 2761 + EXPECT_EQ(wait(&status), pid); 2762 + EXPECT_EQ(status, 0); 3052 2763 EXPECT_EQ(recv(cfd, buf2, sizeof(buf2), MSG_DONTWAIT), -1); 3053 2764 /* Don't check errno, the error will be different depending 3054 2765 * on what random bytes TLS interpreted as the record length. ··· 3057 2766 3058 2767 close(fd); 3059 2768 close(cfd); 3060 - 3061 - EXPECT_EQ(wait(&status), pid); 3062 - EXPECT_EQ(status, 0); 3063 2769 } 3064 2770 3065 2771 static void __attribute__((constructor)) fips_check(void) {
-1
tools/testing/selftests/sched_ext/hotplug.c
··· 6 6 #include <bpf/bpf.h> 7 7 #include <sched.h> 8 8 #include <scx/common.h> 9 - #include <sched.h> 10 9 #include <sys/wait.h> 11 10 #include <unistd.h> 12 11
+198
tools/testing/selftests/tc-testing/tc-tests/infra/qdiscs.json
··· 186 186 ] 187 187 }, 188 188 { 189 + "id": "34c0", 190 + "name": "Test TBF with HHF Backlog Accounting in gso_skb case against underflow", 191 + "category": [ 192 + "qdisc", 193 + "tbf", 194 + "hhf" 195 + ], 196 + "plugins": { 197 + "requires": [ 198 + "nsPlugin" 199 + ] 200 + }, 201 + "setup": [ 202 + "$IP link set dev $DUMMY up || true", 203 + "$IP addr add 10.10.11.10/24 dev $DUMMY || true", 204 + "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms", 205 + "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 hhf limit 1000", 206 + [ 207 + "ping -I $DUMMY -c2 10.10.11.11", 208 + 1 209 + ], 210 + "$TC qdisc change dev $DUMMY handle 2: parent 1:1 hhf limit 1" 211 + ], 212 + "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1", 213 + "expExitCode": "0", 214 + "verifyCmd": "$TC -s qdisc show dev $DUMMY", 215 + "matchPattern": "backlog 0b 0p", 216 + "matchCount": "1", 217 + "teardown": [ 218 + "$TC qdisc del dev $DUMMY handle 1: root" 219 + ] 220 + }, 221 + { 222 + "id": "fd68", 223 + "name": "Test TBF with CODEL Backlog Accounting in gso_skb case against underflow", 224 + "category": [ 225 + "qdisc", 226 + "tbf", 227 + "codel" 228 + ], 229 + "plugins": { 230 + "requires": [ 231 + "nsPlugin" 232 + ] 233 + }, 234 + "setup": [ 235 + "$IP link set dev $DUMMY up || true", 236 + "$IP addr add 10.10.11.10/24 dev $DUMMY || true", 237 + "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms", 238 + "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 codel limit 1000", 239 + [ 240 + "ping -I $DUMMY -c2 10.10.11.11", 241 + 1 242 + ], 243 + "$TC qdisc change dev $DUMMY handle 2: parent 1:1 codel limit 1" 244 + ], 245 + "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1", 246 + "expExitCode": "0", 247 + "verifyCmd": "$TC -s qdisc show dev $DUMMY", 248 + "matchPattern": "backlog 0b 0p", 249 + "matchCount": "1", 250 + "teardown": [ 251 + "$TC qdisc del dev $DUMMY handle 1: root" 252 + ] 253 + }, 254 + { 
255 + "id": "514e", 256 + "name": "Test TBF with PIE Backlog Accounting in gso_skb case against underflow", 257 + "category": [ 258 + "qdisc", 259 + "tbf", 260 + "pie" 261 + ], 262 + "plugins": { 263 + "requires": [ 264 + "nsPlugin" 265 + ] 266 + }, 267 + "setup": [ 268 + "$IP link set dev $DUMMY up || true", 269 + "$IP addr add 10.10.11.10/24 dev $DUMMY || true", 270 + "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms", 271 + "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 pie limit 1000", 272 + [ 273 + "ping -I $DUMMY -c2 10.10.11.11", 274 + 1 275 + ], 276 + "$TC qdisc change dev $DUMMY handle 2: parent 1:1 pie limit 1" 277 + ], 278 + "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1", 279 + "expExitCode": "0", 280 + "verifyCmd": "$TC -s qdisc show dev $DUMMY", 281 + "matchPattern": "backlog 0b 0p", 282 + "matchCount": "1", 283 + "teardown": [ 284 + "$TC qdisc del dev $DUMMY handle 1: root" 285 + ] 286 + }, 287 + { 288 + "id": "6c97", 289 + "name": "Test TBF with FQ Backlog Accounting in gso_skb case against underflow", 290 + "category": [ 291 + "qdisc", 292 + "tbf", 293 + "fq" 294 + ], 295 + "plugins": { 296 + "requires": [ 297 + "nsPlugin" 298 + ] 299 + }, 300 + "setup": [ 301 + "$IP link set dev $DUMMY up || true", 302 + "$IP addr add 10.10.11.10/24 dev $DUMMY || true", 303 + "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms", 304 + "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 fq limit 1000", 305 + [ 306 + "ping -I $DUMMY -c2 10.10.11.11", 307 + 1 308 + ], 309 + "$TC qdisc change dev $DUMMY handle 2: parent 1:1 fq limit 1" 310 + ], 311 + "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1", 312 + "expExitCode": "0", 313 + "verifyCmd": "$TC -s qdisc show dev $DUMMY", 314 + "matchPattern": "backlog 0b 0p", 315 + "matchCount": "1", 316 + "teardown": [ 317 + "$TC qdisc del dev $DUMMY handle 1: root" 318 + ] 319 + }, 320 + { 321 + "id": "5d0b", 322 + "name": "Test TBF with 
FQ_CODEL Backlog Accounting in gso_skb case against underflow", 323 + "category": [ 324 + "qdisc", 325 + "tbf", 326 + "fq_codel" 327 + ], 328 + "plugins": { 329 + "requires": [ 330 + "nsPlugin" 331 + ] 332 + }, 333 + "setup": [ 334 + "$IP link set dev $DUMMY up || true", 335 + "$IP addr add 10.10.11.10/24 dev $DUMMY || true", 336 + "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms", 337 + "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 fq_codel limit 1000", 338 + [ 339 + "ping -I $DUMMY -c2 10.10.11.11", 340 + 1 341 + ], 342 + "$TC qdisc change dev $DUMMY handle 2: parent 1:1 fq_codel limit 1" 343 + ], 344 + "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1", 345 + "expExitCode": "0", 346 + "verifyCmd": "$TC -s qdisc show dev $DUMMY", 347 + "matchPattern": "backlog 0b 0p", 348 + "matchCount": "1", 349 + "teardown": [ 350 + "$TC qdisc del dev $DUMMY handle 1: root" 351 + ] 352 + }, 353 + { 354 + "id": "21c3", 355 + "name": "Test TBF with FQ_PIE Backlog Accounting in gso_skb case against underflow", 356 + "category": [ 357 + "qdisc", 358 + "tbf", 359 + "fq_pie" 360 + ], 361 + "plugins": { 362 + "requires": [ 363 + "nsPlugin" 364 + ] 365 + }, 366 + "setup": [ 367 + "$IP link set dev $DUMMY up || true", 368 + "$IP addr add 10.10.11.10/24 dev $DUMMY || true", 369 + "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms", 370 + "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 fq_pie limit 1000", 371 + [ 372 + "ping -I $DUMMY -c2 10.10.11.11", 373 + 1 374 + ], 375 + "$TC qdisc change dev $DUMMY handle 2: parent 1:1 fq_pie limit 1" 376 + ], 377 + "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1", 378 + "expExitCode": "0", 379 + "verifyCmd": "$TC -s qdisc show dev $DUMMY", 380 + "matchPattern": "backlog 0b 0p", 381 + "matchCount": "1", 382 + "teardown": [ 383 + "$TC qdisc del dev $DUMMY handle 1: root" 384 + ] 385 + }, 386 + { 189 387 "id": "a4bb", 190 388 "name": "Test FQ_CODEL with HTB 
parent - force packet drop with empty queue", 191 389 "category": [
+2 -2
tools/testing/selftests/ublk/kublk.c
··· 1400 1400 1401 1401 if (!((1ULL << i) & features)) 1402 1402 continue; 1403 - if (i < sizeof(feat_map) / sizeof(feat_map[0])) 1403 + if (i < ARRAY_SIZE(feat_map)) 1404 1404 feat = feat_map[i]; 1405 1405 else 1406 1406 feat = "unknown"; ··· 1477 1477 printf("\tdefault: nr_queues=2(max 32), depth=128(max 1024), dev_id=-1(auto allocation)\n"); 1478 1478 printf("\tdefault: nthreads=nr_queues"); 1479 1479 1480 - for (i = 0; i < sizeof(tgt_ops_list) / sizeof(tgt_ops_list[0]); i++) { 1480 + for (i = 0; i < ARRAY_SIZE(tgt_ops_list); i++) { 1481 1481 const struct ublk_tgt_ops *ops = tgt_ops_list[i]; 1482 1482 1483 1483 if (ops->usage)
+4
tools/testing/shared/linux/idr.h
··· 1 + /* Avoid duplicate definitions due to system headers. */ 2 + #ifdef __CONCAT 3 + #undef __CONCAT 4 + #endif 1 5 #include "../../../../include/linux/idr.h"
+8
tools/tracing/latency/Makefile.config
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only
 2 2 
 3 + include $(srctree)/tools/scripts/utilities.mak
 4 + 
 3 5 STOP_ERROR :=
 6 + 
 7 + ifneq ($(NO_LIBTRACEEVENT),1)
 8 + ifeq ($(call get-executable,$(PKG_CONFIG)),)
 9 + $(error Error: $(PKG_CONFIG) needed by libtraceevent/libtracefs is missing on this system, please install it)
 10 + endif
 11 + endif
 4 12 
 5 13 define lib_setup
 6 14 $(eval LIB_INCLUDES += $(shell sh -c "$(PKG_CONFIG) --cflags lib$(1)"))
+8
tools/tracing/rtla/Makefile.config
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only
 2 2 
 3 + include $(srctree)/tools/scripts/utilities.mak
 4 + 
 3 5 STOP_ERROR :=
 4 6 
 5 7 LIBTRACEEVENT_MIN_VERSION = 1.5
 6 8 LIBTRACEFS_MIN_VERSION = 1.6
 9 + 
 10 + ifneq ($(NO_LIBTRACEEVENT),1)
 11 + ifeq ($(call get-executable,$(PKG_CONFIG)),)
 12 + $(error Error: $(PKG_CONFIG) needed by libtraceevent/libtracefs is missing on this system, please install it)
 13 + endif
 14 + endif
 7 15 
 8 16 define lib_setup
 9 17 $(eval LIB_INCLUDES += $(shell sh -c "$(PKG_CONFIG) --cflags lib$(1)"))