Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.17-rc4).

No conflicts.

Adjacent changes:

drivers/net/ethernet/intel/idpf/idpf_txrx.c
02614eee26fb ("idpf: do not linearize big TSO packets")
6c4e68480238 ("idpf: remove obsolete stashing code")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3911 -2278
+2
.mailmap
···
226 226 Douglas Gilbert <dougg@torque.net>
227 227 Drew Fustini <fustini@kernel.org> <drew@pdp7.com>
228 228 <duje@dujemihanovic.xyz> <duje.mihanovic@skole.hr>
229 + Easwar Hariharan <easwar.hariharan@linux.microsoft.com> <easwar.hariharan@intel.com>
230 + Easwar Hariharan <easwar.hariharan@linux.microsoft.com> <eahariha@linux.microsoft.com>
229 231 Ed L. Cashin <ecashin@coraid.com>
230 232 Elliot Berman <quic_eberman@quicinc.com> <eberman@codeaurora.org>
231 233 Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com>
+7
CREDITS
···
3222 3222 D: Starter of Linux1394 effort
3223 3223 S: ask per mail for current address
3224 3224
3225 + N: Boris Pismenny
3226 + E: borisp@mellanox.com
3227 + D: Kernel TLS implementation and offload support.
3228 +
3225 3229 N: Nicolas Pitre
3226 3230 E: nico@fluxnic.net
3227 3231 D: StrongARM SA1100 support integrator & hacker
···
4171 4167 S: 1513 Brewster Dr.
4172 4168 S: Carrollton, TX 75010
4173 4169 S: USA
4170 +
4171 + N: Dave Watson
4172 + D: Kernel TLS implementation.
4174 4173
4175 4174 N: Tim Waugh
4176 4175 E: tim@cyberelk.net
+2 -2
Documentation/admin-guide/cgroup-v2.rst
···
435 435 Controlling Controllers
436 436 -----------------------
437 437
438 - Availablity
439 - ~~~~~~~~~~~
438 + Availability
439 + ~~~~~~~~~~~~
440 440
441 441 A controller is available in a cgroup when it is supported by the kernel (i.e.,
442 442 compiled in, not disabled and not attached to a v1 hierarchy) and listed in the
+1 -1
Documentation/devicetree/bindings/regulator/infineon,ir38060.yaml
···
7 7 title: Infineon Buck Regulators with PMBUS interfaces
8 8
9 9 maintainers:
10 -   - Not Me.
10 +   - Guenter Roeck <linux@roeck-us.net>
11 11
12 12 allOf:
13 13   - $ref: regulator.yaml#
+2
Documentation/devicetree/bindings/vendor-prefixes.yaml
···
507 507   description: Espressif Systems Co. Ltd.
508 508 "^est,.*":
509 509   description: ESTeem Wireless Modems
510 + "^eswin,.*":
511 +   description: Beijing ESWIN Technology Group Co. Ltd.
510 512 "^ettus,.*":
511 513   description: NI Ettus Research
512 514 "^eukrea,.*":
+16 -9
Documentation/process/security-bugs.rst
···
8 8 disclosed as quickly as possible. Please report security bugs to the
9 9 Linux kernel security team.
10 10
11 - Contact
12 - -------
11 + The security team and maintainers almost always require additional
12 + information beyond what was initially provided in a report and rely on
13 + active and efficient collaboration with the reporter to perform further
14 + testing (e.g., verifying versions, configuration options, mitigations, or
15 + patches). Before contacting the security team, the reporter must ensure
16 + they are available to explain their findings, engage in discussions, and
17 + run additional tests. Reports where the reporter does not respond promptly
18 + or cannot effectively discuss their findings may be abandoned if the
19 + communication does not quickly improve.
20 +
21 + As it is with any bug, the more information provided the easier it
22 + will be to diagnose and fix. Please review the procedure outlined in
23 + 'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what
24 + information is helpful. Any exploit code is very helpful and will not
25 + be released without consent from the reporter unless it has already been
26 + made public.
13 27
14 28 The Linux kernel security team can be contacted by email at
15 29 <security@kernel.org>. This is a private list of security officers
···
32 18 that can speed up the process considerably. It is possible that the
33 19 security team will bring in extra help from area maintainers to
34 20 understand and fix the security vulnerability.
35 -
36 - As it is with any bug, the more information provided the easier it
37 - will be to diagnose and fix. Please review the procedure outlined in
38 - 'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what
39 - information is helpful. Any exploit code is very helpful and will not
40 - be released without consent from the reporter unless it has already been
41 - made public.
42 21
43 22 Please send plain text emails without attachments where possible.
44 23 It is much harder to have a context-quoted discussion about a complex
+2 -2
Documentation/userspace-api/iommufd.rst
···
43 43
44 44 - IOMMUFD_OBJ_HWPT_PAGING, representing an actual hardware I/O page table
45 45   (i.e. a single struct iommu_domain) managed by the iommu driver. "PAGING"
46 -   primarly indicates this type of HWPT should be linked to an IOAS. It also
46 +   primarily indicates this type of HWPT should be linked to an IOAS. It also
47 47   indicates that it is backed by an iommu_domain with __IOMMU_DOMAIN_PAGING
48 48   feature flag. This can be either an UNMANAGED stage-1 domain for a device
49 49   running in the user space, or a nesting parent stage-2 domain for mappings
···
76 76
77 77 * Security namespace for guest owned ID, e.g. guest-controlled cache tags
78 78 * Non-device-affiliated event reporting, e.g. invalidation queue errors
79 - * Access to a sharable nesting parent pagetable across physical IOMMUs
79 + * Access to a shareable nesting parent pagetable across physical IOMMUs
80 80 * Virtualization of various platforms IDs, e.g. RIDs and others
81 81 * Delivery of paravirtualized invalidation
82 82 * Direct assigned invalidation queues
+37 -8
MAINTAINERS
···
937 937 F: drivers/gpio/gpio-altera.c
938 938
939 939 ALTERA TRIPLE SPEED ETHERNET DRIVER
940 - M: Joyce Ooi <joyce.ooi@intel.com>
940 + M: Boon Khai Ng <boon.khai.ng@altera.com>
941 941 L: netdev@vger.kernel.org
942 942 S: Maintained
943 943 F: drivers/net/ethernet/altera/
···
4216 4216 BCACHEFS
4217 4217 M: Kent Overstreet <kent.overstreet@linux.dev>
4218 4218 L: linux-bcachefs@vger.kernel.org
4219 - S: Supported
4219 + S: Externally maintained
4220 4220 C: irc://irc.oftc.net/bcache
4221 4221 P: Documentation/filesystems/bcachefs/SubmittingPatches.rst
4222 4222 T: git https://evilpiepirate.org/git/bcachefs.git
···
8427 8427 F: drivers/gpu/drm/scheduler/
8428 8428 F: include/drm/gpu_scheduler.h
8429 8429
8430 + DRM GPUVM
8431 + M: Danilo Krummrich <dakr@kernel.org>
8432 + R: Matthew Brost <matthew.brost@intel.com>
8433 + R: Thomas Hellström <thomas.hellstrom@linux.intel.com>
8434 + R: Alice Ryhl <aliceryhl@google.com>
8435 + L: dri-devel@lists.freedesktop.org
8436 + S: Supported
8437 + T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
8438 + F: drivers/gpu/drm/drm_gpuvm.c
8439 + F: include/drm/drm_gpuvm.h
8440 +
8430 8441 DRM LOG
8431 8442 M: Jocelyn Falempe <jfalempe@redhat.com>
8432 8443 M: Javier Martinez Canillas <javierm@redhat.com>
···
10667 10656 F: block/partitions/efi.*
10668 10657
10669 10658 HABANALABS PCI DRIVER
10670 - M: Yaron Avizrat <yaron.avizrat@intel.com>
10659 + M: Koby Elbaz <koby.elbaz@intel.com>
10660 + M: Konstantin Sinyuk <konstantin.sinyuk@intel.com>
10671 10661 L: dri-devel@lists.freedesktop.org
10672 10662 S: Supported
10673 10663 C: irc://irc.oftc.net/dri-devel
···
11026 11014 F: drivers/perf/hisilicon/hns3_pmu.c
11027 11015
11028 11016 HISILICON I2C CONTROLLER DRIVER
11029 - M: Yicong Yang <yangyicong@hisilicon.com>
11017 + M: Devyn Liu <liudingyuan@h-partners.com>
11030 11018 L: linux-i2c@vger.kernel.org
11031 11019 S: Maintained
11032 11020 W: https://www.hisilicon.com
···
12294 12282 F: include/linux/net/intel/*/
12295 12283
12296 12284 INTEL ETHERNET PROTOCOL DRIVER FOR RDMA
12297 - M: Mustafa Ismail <mustafa.ismail@intel.com>
12298 12285 M: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
12299 12286 L: linux-rdma@vger.kernel.org
12300 12287 S: Supported
···
16070 16059 F: mm/migrate.c
16071 16060 F: mm/migrate_device.c
16072 16061
16062 + MEMORY MANAGEMENT - MGLRU (MULTI-GEN LRU)
16063 + M: Andrew Morton <akpm@linux-foundation.org>
16064 + M: Axel Rasmussen <axelrasmussen@google.com>
16065 + M: Yuanchu Xie <yuanchu@google.com>
16066 + R: Wei Xu <weixugc@google.com>
16067 + L: linux-mm@kvack.org
16068 + S: Maintained
16069 + W: http://www.linux-mm.org
16070 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
16071 + F: Documentation/admin-guide/mm/multigen_lru.rst
16072 + F: Documentation/mm/multigen_lru.rst
16073 + F: include/linux/mm_inline.h
16074 + F: include/linux/mmzone.h
16075 + F: mm/swap.c
16076 + F: mm/vmscan.c
16077 + F: mm/workingset.c
16078 +
16073 16079 MEMORY MANAGEMENT - MISC
16074 16080 M: Andrew Morton <akpm@linux-foundation.org>
16075 16081 M: David Hildenbrand <david@redhat.com>
···
16277 16249 W: http://www.linux-mm.org
16278 16250 T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
16279 16251 F: rust/helpers/mm.c
16252 + F: rust/helpers/page.c
16280 16253 F: rust/kernel/mm.rs
16281 16254 F: rust/kernel/mm/
16255 + F: rust/kernel/page.rs
16282 16256
16283 16257 MEMORY MAPPING
16284 16258 M: Andrew Morton <akpm@linux-foundation.org>
···
17849 17819 F: net/ipv6/tcp*.c
17850 17820
17851 17821 NETWORKING [TLS]
17852 - M: Boris Pismenny <borisp@nvidia.com>
17853 17822 M: John Fastabend <john.fastabend@gmail.com>
17854 17823 M: Jakub Kicinski <kuba@kernel.org>
17855 17824 L: netdev@vger.kernel.org
···
20886 20857 F: drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
20887 20858
20888 20859 QUALCOMM RMNET DRIVER
20889 - M: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
20890 - M: Sean Tranchetti <quic_stranche@quicinc.com>
20860 + M: Subash Abhinov Kasiviswanathan <subash.a.kasiviswanathan@oss.qualcomm.com>
20861 + M: Sean Tranchetti <sean.tranchetti@oss.qualcomm.com>
20891 20862 L: netdev@vger.kernel.org
20892 20863 S: Maintained
20893 20864 F: Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst
+1 -1
Makefile
···
2 2 VERSION = 6
3 3 PATCHLEVEL = 17
4 4 SUBLEVEL = 0
5 - EXTRAVERSION = -rc2
5 + EXTRAVERSION = -rc3
6 6 NAME = Baby Opossum Posse
7 7
8 8 # *DOCUMENTATION*
+4 -1
arch/mips/boot/dts/lantiq/danube_easy50712.dts
···
82 82 };
83 83 };
84 84
85 - etop@e180000 {
85 + ethernet@e180000 {
86 86 compatible = "lantiq,etop-xway";
87 87 reg = <0xe180000 0x40000>;
88 88 interrupt-parent = <&icu0>;
89 89 interrupts = <73 78>;
90 + interrupt-names = "tx", "rx";
90 91 phy-mode = "rmii";
91 92 mac-address = [ 00 11 22 33 44 55 ];
93 + lantiq,rx-burst-length = <4>;
94 + lantiq,tx-burst-length = <4>;
92 95 };
93 96
94 97 stp0: stp@e100bb0 {
+5 -5
arch/mips/lantiq/xway/sysctrl.c
···
497 497 ifccr = CGU_IFCCR_VR9;
498 498 pcicr = CGU_PCICR_VR9;
499 499 } else {
500 - clkdev_add_pmu("1e180000.etop", NULL, 1, 0, PMU_PPE);
500 + clkdev_add_pmu("1e180000.ethernet", NULL, 1, 0, PMU_PPE);
501 501 }
502 502
503 503 if (!of_machine_is_compatible("lantiq,ase"))
···
531 531 CLOCK_133M, CLOCK_133M);
532 532 clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0);
533 533 clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 0, PMU_USB0_P);
534 - clkdev_add_pmu("1e180000.etop", "ppe", 1, 0, PMU_PPE);
535 - clkdev_add_cgu("1e180000.etop", "ephycgu", CGU_EPHY);
536 - clkdev_add_pmu("1e180000.etop", "ephy", 1, 0, PMU_EPHY);
534 + clkdev_add_pmu("1e180000.ethernet", "ppe", 1, 0, PMU_PPE);
535 + clkdev_add_cgu("1e180000.ethernet", "ephycgu", CGU_EPHY);
536 + clkdev_add_pmu("1e180000.ethernet", "ephy", 1, 0, PMU_EPHY);
537 537 clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_ASE_SDIO);
538 538 clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
539 539 } else if (of_machine_is_compatible("lantiq,grx390")) {
···
592 592 clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0 | PMU_AHBM);
593 593 clkdev_add_pmu("1f203034.usb2-phy", "phy", 1, 0, PMU_USB1_P);
594 594 clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1 | PMU_AHBM);
595 - clkdev_add_pmu("1e180000.etop", "switch", 1, 0, PMU_SWITCH);
595 + clkdev_add_pmu("1e180000.ethernet", "switch", 1, 0, PMU_SWITCH);
596 596 clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
597 597 clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
598 598 clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
+3 -3
arch/powerpc/boot/Makefile
···
243 243 hostprogs := addnote hack-coff mktree
244 244
245 245 targets += $(patsubst $(obj)/%,%,$(obj-boot) wrapper.a) zImage.lds
246 - extra-y := $(obj)/wrapper.a $(obj-plat) $(obj)/empty.o \
246 + always-y := $(obj)/wrapper.a $(obj-plat) $(obj)/empty.o \
247 247 	$(obj)/zImage.lds $(obj)/zImage.coff.lds $(obj)/zImage.ps3.lds
248 248
249 249 dtstree := $(src)/dts
250 250
251 251 wrapper := $(src)/wrapper
252 - wrapperbits := $(extra-y) $(addprefix $(obj)/,addnote hack-coff mktree) \
252 + wrapperbits := $(always-y) $(addprefix $(obj)/,addnote hack-coff mktree) \
253 253 	$(wrapper) FORCE
254 254
255 255 #############
···
456 456 WRAPPER_BINDIR := /usr/sbin
457 457 INSTALL := install
458 458
459 - extra-installed := $(patsubst $(obj)/%, $(DESTDIR)$(WRAPPER_OBJDIR)/%, $(extra-y))
459 + extra-installed := $(patsubst $(obj)/%, $(DESTDIR)$(WRAPPER_OBJDIR)/%, $(always-y))
460 460 hostprogs-installed := $(patsubst %, $(DESTDIR)$(WRAPPER_BINDIR)/%, $(hostprogs))
461 461 wrapper-installed := $(DESTDIR)$(WRAPPER_BINDIR)/wrapper
462 462 dts-installed := $(patsubst $(dtstree)/%, $(DESTDIR)$(WRAPPER_DTSDIR)/%, $(wildcard $(dtstree)/*.dts))
+7 -7
arch/powerpc/boot/install.sh
···
19 19 set -e
20 20
21 21 # this should work for both the pSeries zImage and the iSeries vmlinux.sm
22 - image_name=`basename $2`
22 + image_name=$(basename "$2")
23 23
24 24
25 25 echo "Warning: '${INSTALLKERNEL}' command not available... Copying" \
26 26 	"directly to $4/$image_name-$1" >&2
27 27
28 - if [ -f $4/$image_name-$1 ]; then
29 - mv $4/$image_name-$1 $4/$image_name-$1.old
28 + if [ -f "$4"/"$image_name"-"$1" ]; then
29 + mv "$4"/"$image_name"-"$1" "$4"/"$image_name"-"$1".old
30 30 fi
31 31
32 - if [ -f $4/System.map-$1 ]; then
33 - mv $4/System.map-$1 $4/System-$1.old
32 + if [ -f "$4"/System.map-"$1" ]; then
33 + mv "$4"/System.map-"$1" "$4"/System-"$1".old
34 34 fi
35 35
36 - cat $2 > $4/$image_name-$1
37 - cp $3 $4/System.map-$1
36 + cat "$2" > "$4"/"$image_name"-"$1"
37 + cp "$3" "$4"/System.map-"$1"
+3 -1
arch/powerpc/kernel/Makefile
···
199 199
200 200 obj-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_init.o
201 201 obj64-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_entry_64.o
202 - extra-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_init_check
202 + ifdef KBUILD_BUILTIN
203 + always-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_init_check
204 + endif
203 205
204 206 obj-$(CONFIG_PPC64) += $(obj64-y)
205 207 obj-$(CONFIG_PPC32) += $(obj32-y)
+4 -4
arch/powerpc/kernel/kvm.c
···
632 632 #endif
633 633 }
634 634
635 - switch (inst_no_rt & ~KVM_MASK_RB) {
636 635 #ifdef CONFIG_PPC_BOOK3S_32
636 + switch (inst_no_rt & ~KVM_MASK_RB) {
637 637 case KVM_INST_MTSRIN:
638 638 if (features & KVM_MAGIC_FEAT_SR) {
639 639 u32 inst_rb = _inst & KVM_MASK_RB;
640 640 kvm_patch_ins_mtsrin(inst, inst_rt, inst_rb);
641 641 }
642 642 break;
643 - #endif
644 643 }
644 + #endif
645 645
646 - switch (_inst) {
647 646 #ifdef CONFIG_BOOKE
647 + switch (_inst) {
648 648 case KVM_INST_WRTEEI_0:
649 649 kvm_patch_ins_wrteei_0(inst);
650 650 break;
···
652 652 case KVM_INST_WRTEEI_1:
653 653 kvm_patch_ins_wrtee(inst, 0, 1);
654 654 break;
655 - #endif
656 655 }
656 + #endif
657 657 }
658 658
659 659 extern u32 kvm_template_start[];
+8 -8
arch/powerpc/kernel/prom_init_check.sh
···
15 15
16 16 has_renamed_memintrinsics()
17 17 {
18 - grep -q "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} && \
19 - 	! grep -q "^CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX=y" ${KCONFIG_CONFIG}
18 + grep -q "^CONFIG_KASAN=y$" "${KCONFIG_CONFIG}" && \
19 + 	! grep -q "^CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX=y" "${KCONFIG_CONFIG}"
20 20 }
21 21
22 22 if has_renamed_memintrinsics
···
42 42 {
43 43 file=$1
44 44 section=$2
45 - size=$(objdump -h -j $section $file 2>/dev/null | awk "\$2 == \"$section\" {print \$3}")
45 + size=$(objdump -h -j "$section" "$file" 2>/dev/null | awk "\$2 == \"$section\" {print \$3}")
46 46 size=${size:-0}
47 - if [ $size -ne 0 ]; then
47 + if [ "$size" -ne 0 ]; then
48 48 ERROR=1
49 49 echo "Error: Section $section not empty in prom_init.c" >&2
50 50 fi
51 51 }
52 52
53 - for UNDEF in $($NM -u $OBJ | awk '{print $2}')
53 + for UNDEF in $($NM -u "$OBJ" | awk '{print $2}')
54 54 do
55 55 # On 64-bit nm gives us the function descriptors, which have
56 56 # a leading . on the name, so strip it off here.
···
87 87 fi
88 88 done
89 89
90 - check_section $OBJ .data
91 - check_section $OBJ .bss
92 - check_section $OBJ .init.data
90 + check_section "$OBJ" .data
91 + check_section "$OBJ" .bss
92 + check_section "$OBJ" .init.data
93 93
94 94 exit $ERROR
+1 -4
arch/powerpc/kernel/setup_64.c
···
141 141 smt_enabled_at_boot = 0;
142 142 else {
143 143 int smt;
144 - int rc;
145 -
146 - rc = kstrtoint(smt_enabled_cmdline, 10, &smt);
147 - if (!rc)
144 + if (!kstrtoint(smt_enabled_cmdline, 10, &smt))
148 145 smt_enabled_at_boot =
149 146 min(threads_per_core, smt);
150 147 }
+1 -1
arch/powerpc/kvm/powerpc.c
···
69 69
70 70 /*
71 71  * Common checks before entering the guest world. Call with interrupts
72 -  * disabled.
72 +  * enabled.
73 73  *
74 74  * returns:
75 75  *
+1 -2
arch/powerpc/platforms/8xx/cpm1-ic.c
···
110 110
111 111 out_be32(&data->reg->cpic_cimr, 0);
112 112
113 - data->host = irq_domain_create_linear(of_fwnode_handle(dev->of_node),
114 - 	64, &cpm_pic_host_ops, data);
113 + data->host = irq_domain_create_linear(dev_fwnode(dev), 64, &cpm_pic_host_ops, data);
115 114 if (!data->host)
116 115 return -ENODEV;
117 116
+4 -9
arch/powerpc/platforms/Kconfig.cputype
···
122 122 	  If unsure, select Generic.
123 123
124 124 config POWERPC64_CPU
125 - 	bool "Generic (POWER5 and PowerPC 970 and above)"
126 - 	depends on PPC_BOOK3S_64 && !CPU_LITTLE_ENDIAN
125 + 	bool "Generic 64 bits powerpc"
126 + 	depends on PPC_BOOK3S_64
127 + 	select ARCH_HAS_FAST_MULTIPLIER if CPU_LITTLE_ENDIAN
127 128 	select PPC_64S_HASH_MMU
128 -
129 - config POWERPC64_CPU
130 - 	bool "Generic (POWER8 and above)"
131 - 	depends on PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
132 - 	select ARCH_HAS_FAST_MULTIPLIER
133 - 	select PPC_64S_HASH_MMU
134 - 	select PPC_HAS_LBARX_LHARX
129 + 	select PPC_HAS_LBARX_LHARX if CPU_LITTLE_ENDIAN
135 130
136 131 config POWERPC_CPU
137 132 	bool "Generic 32 bits powerpc"
+2 -3
arch/powerpc/sysdev/fsl_msi.c
···
412 412 }
413 413 platform_set_drvdata(dev, msi);
414 414
415 - msi->irqhost = irq_domain_create_linear(of_fwnode_handle(dev->dev.of_node),
416 - 	NR_MSI_IRQS_MAX, &fsl_msi_host_ops, msi);
417 -
415 + msi->irqhost = irq_domain_create_linear(dev_fwnode(&dev->dev), NR_MSI_IRQS_MAX,
416 + 	&fsl_msi_host_ops, msi);
418 417 if (msi->irqhost == NULL) {
419 418 dev_err(&dev->dev, "No memory for MSI irqhost\n");
420 419 err = -ENOMEM;
+3
arch/s390/boot/vmem.c
···
530 530 	lowcore_address + sizeof(struct lowcore),
531 531 	POPULATE_LOWCORE);
532 532 for_each_physmem_usable_range(i, &start, &end) {
533 + 	/* Do not map lowcore with identity mapping */
534 + 	if (!start)
535 + 		start = sizeof(struct lowcore);
533 536 	pgtable_populate((unsigned long)__identity_va(start),
534 537 	(unsigned long)__identity_va(end),
535 538 	POPULATE_IDENTITY);
+16 -17
arch/s390/configs/debug_defconfig
···
5 5 CONFIG_AUDIT=y
6 6 CONFIG_NO_HZ_IDLE=y
7 7 CONFIG_HIGH_RES_TIMERS=y
8 + CONFIG_POSIX_AUX_CLOCKS=y
8 9 CONFIG_BPF_SYSCALL=y
9 10 CONFIG_BPF_JIT=y
10 11 CONFIG_BPF_JIT_ALWAYS_ON=y
···
20 19 CONFIG_TASK_IO_ACCOUNTING=y
21 20 CONFIG_IKCONFIG=y
22 21 CONFIG_IKCONFIG_PROC=y
22 + CONFIG_SCHED_PROXY_EXEC=y
23 23 CONFIG_NUMA_BALANCING=y
24 24 CONFIG_MEMCG=y
25 25 CONFIG_BLK_CGROUP=y
···
44 42 CONFIG_KEXEC=y
45 43 CONFIG_KEXEC_FILE=y
46 44 CONFIG_KEXEC_SIG=y
45 + CONFIG_CRASH_DM_CRYPT=y
47 46 CONFIG_LIVEPATCH=y
48 47 CONFIG_MARCH_Z13=y
49 48 CONFIG_NR_CPUS=512
···
108 105 CONFIG_MEM_SOFT_DIRTY=y
109 106 CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
110 107 CONFIG_IDLE_PAGE_TRACKING=y
108 + CONFIG_ZONE_DEVICE=y
111 109 CONFIG_PERCPU_STATS=y
112 110 CONFIG_GUP_TEST=y
113 111 CONFIG_ANON_VMA_NAME=y
···
227 223 CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
228 224 CONFIG_NETFILTER_XT_TARGET_CT=m
229 225 CONFIG_NETFILTER_XT_TARGET_DSCP=m
226 + CONFIG_NETFILTER_XT_TARGET_HL=m
230 227 CONFIG_NETFILTER_XT_TARGET_HMARK=m
231 228 CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
232 229 CONFIG_NETFILTER_XT_TARGET_LOG=m
233 230 CONFIG_NETFILTER_XT_TARGET_MARK=m
231 + CONFIG_NETFILTER_XT_NAT=m
234 232 CONFIG_NETFILTER_XT_TARGET_NETMAP=m
235 233 CONFIG_NETFILTER_XT_TARGET_NFLOG=m
236 234 CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
237 235 CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
236 + CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
238 237 CONFIG_NETFILTER_XT_TARGET_TEE=m
239 238 CONFIG_NETFILTER_XT_TARGET_TPROXY=m
240 - CONFIG_NETFILTER_XT_TARGET_TRACE=m
241 239 CONFIG_NETFILTER_XT_TARGET_SECMARK=m
242 240 CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
243 241 CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
···
254 248 CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
255 249 CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
256 250 CONFIG_NETFILTER_XT_MATCH_CPU=m
251 + CONFIG_NETFILTER_XT_MATCH_DCCP=m
257 252 CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
258 253 CONFIG_NETFILTER_XT_MATCH_DSCP=m
259 254 CONFIG_NETFILTER_XT_MATCH_ESP=m
···
325 318 CONFIG_IP_NF_MATCH_ECN=m
326 319 CONFIG_IP_NF_MATCH_RPFILTER=m
327 320 CONFIG_IP_NF_MATCH_TTL=m
328 - CONFIG_IP_NF_FILTER=m
329 321 CONFIG_IP_NF_TARGET_REJECT=m
330 - CONFIG_IP_NF_NAT=m
331 - CONFIG_IP_NF_TARGET_MASQUERADE=m
332 - CONFIG_IP_NF_MANGLE=m
333 322 CONFIG_IP_NF_TARGET_ECN=m
334 - CONFIG_IP_NF_TARGET_TTL=m
335 - CONFIG_IP_NF_RAW=m
336 - CONFIG_IP_NF_SECURITY=m
337 - CONFIG_IP_NF_ARPFILTER=m
338 323 CONFIG_IP_NF_ARP_MANGLE=m
339 324 CONFIG_NFT_FIB_IPV6=m
340 325 CONFIG_IP6_NF_IPTABLES=m
···
339 340 CONFIG_IP6_NF_MATCH_MH=m
340 341 CONFIG_IP6_NF_MATCH_RPFILTER=m
341 342 CONFIG_IP6_NF_MATCH_RT=m
342 - CONFIG_IP6_NF_TARGET_HL=m
343 - CONFIG_IP6_NF_FILTER=m
344 343 CONFIG_IP6_NF_TARGET_REJECT=m
345 - CONFIG_IP6_NF_MANGLE=m
346 - CONFIG_IP6_NF_RAW=m
347 - CONFIG_IP6_NF_SECURITY=m
348 - CONFIG_IP6_NF_NAT=m
349 - CONFIG_IP6_NF_TARGET_MASQUERADE=m
350 344 CONFIG_NF_TABLES_BRIDGE=m
345 + CONFIG_IP_SCTP=m
351 346 CONFIG_RDS=m
352 347 CONFIG_RDS_RDMA=m
353 348 CONFIG_RDS_TCP=m
···
376 383 CONFIG_NET_SCH_INGRESS=m
377 384 CONFIG_NET_SCH_PLUG=m
378 385 CONFIG_NET_SCH_ETS=m
386 + CONFIG_NET_SCH_DUALPI2=m
379 387 CONFIG_NET_CLS_BASIC=m
380 388 CONFIG_NET_CLS_ROUTE4=m
381 389 CONFIG_NET_CLS_FW=m
···
498 504 CONFIG_NETDEVICES=y
499 505 CONFIG_BONDING=m
500 506 CONFIG_DUMMY=m
507 + CONFIG_OVPN=m
501 508 CONFIG_EQUALIZER=m
502 509 CONFIG_IFB=m
503 510 CONFIG_MACVLAN=m
···
636 641 CONFIG_VHOST_NET=m
637 642 CONFIG_VHOST_VSOCK=m
638 643 CONFIG_VHOST_VDPA=m
644 + CONFIG_DEV_DAX=m
639 645 CONFIG_EXT4_FS=y
640 646 CONFIG_EXT4_FS_POSIX_ACL=y
641 647 CONFIG_EXT4_FS_SECURITY=y
···
661 665 CONFIG_BCACHEFS_FS=y
662 666 CONFIG_BCACHEFS_QUOTA=y
663 667 CONFIG_BCACHEFS_POSIX_ACL=y
668 + CONFIG_FS_DAX=y
664 669 CONFIG_EXPORTFS_BLOCK_OPS=y
665 670 CONFIG_FS_ENCRYPTION=y
666 671 CONFIG_FS_VERITY=y
···
752 755 CONFIG_BUG_ON_DATA_CORRUPTION=y
753 756 CONFIG_CRYPTO_USER=m
754 757 CONFIG_CRYPTO_SELFTESTS=y
758 + CONFIG_CRYPTO_SELFTESTS_FULL=y
759 + CONFIG_CRYPTO_NULL=y
755 760 CONFIG_CRYPTO_PCRYPT=m
756 761 CONFIG_CRYPTO_CRYPTD=m
757 762 CONFIG_CRYPTO_BENCHMARK=m
···
782 783 CONFIG_CRYPTO_LRW=m
783 784 CONFIG_CRYPTO_PCBC=m
784 785 CONFIG_CRYPTO_AEGIS128=m
785 - CONFIG_CRYPTO_CHACHA20POLY1305=m
786 786 CONFIG_CRYPTO_GCM=y
787 787 CONFIG_CRYPTO_SEQIV=y
788 788 CONFIG_CRYPTO_MD4=m
···
820 822 CONFIG_CRYPTO_KRB5=m
821 823 CONFIG_CRYPTO_KRB5_SELFTESTS=y
822 824 CONFIG_CORDIC=m
825 + CONFIG_TRACE_MMIO_ACCESS=y
823 826 CONFIG_RANDOM32_SELFTEST=y
824 827 CONFIG_XZ_DEC_MICROLZMA=y
825 828 CONFIG_DMA_CMA=y
+15 -19
arch/s390/configs/defconfig
···
4 4 CONFIG_AUDIT=y
5 5 CONFIG_NO_HZ_IDLE=y
6 6 CONFIG_HIGH_RES_TIMERS=y
7 + CONFIG_POSIX_AUX_CLOCKS=y
7 8 CONFIG_BPF_SYSCALL=y
8 9 CONFIG_BPF_JIT=y
9 10 CONFIG_BPF_JIT_ALWAYS_ON=y
···
18 17 CONFIG_TASK_IO_ACCOUNTING=y
19 18 CONFIG_IKCONFIG=y
20 19 CONFIG_IKCONFIG_PROC=y
20 + CONFIG_SCHED_PROXY_EXEC=y
21 21 CONFIG_NUMA_BALANCING=y
22 22 CONFIG_MEMCG=y
23 23 CONFIG_BLK_CGROUP=y
···
42 40 CONFIG_KEXEC=y
43 41 CONFIG_KEXEC_FILE=y
44 42 CONFIG_KEXEC_SIG=y
43 + CONFIG_CRASH_DM_CRYPT=y
45 44 CONFIG_LIVEPATCH=y
46 45 CONFIG_MARCH_Z13=y
47 46 CONFIG_NR_CPUS=512
48 47 CONFIG_NUMA=y
49 - CONFIG_HZ_100=y
48 + CONFIG_HZ_1000=y
50 49 CONFIG_CERT_STORE=y
51 50 CONFIG_EXPOLINE=y
52 51 CONFIG_EXPOLINE_AUTO=y
···
100 97 CONFIG_MEM_SOFT_DIRTY=y
101 98 CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
102 99 CONFIG_IDLE_PAGE_TRACKING=y
100 + CONFIG_ZONE_DEVICE=y
103 101 CONFIG_PERCPU_STATS=y
104 102 CONFIG_ANON_VMA_NAME=y
105 103 CONFIG_USERFAULTFD=y
···
218 214 CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
219 215 CONFIG_NETFILTER_XT_TARGET_CT=m
220 216 CONFIG_NETFILTER_XT_TARGET_DSCP=m
217 + CONFIG_NETFILTER_XT_TARGET_HL=m
221 218 CONFIG_NETFILTER_XT_TARGET_HMARK=m
222 219 CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
223 220 CONFIG_NETFILTER_XT_TARGET_LOG=m
224 221 CONFIG_NETFILTER_XT_TARGET_MARK=m
222 + CONFIG_NETFILTER_XT_NAT=m
225 223 CONFIG_NETFILTER_XT_TARGET_NETMAP=m
226 224 CONFIG_NETFILTER_XT_TARGET_NFLOG=m
227 225 CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
228 226 CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
227 + CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
229 228 CONFIG_NETFILTER_XT_TARGET_TEE=m
230 229 CONFIG_NETFILTER_XT_TARGET_TPROXY=m
231 - CONFIG_NETFILTER_XT_TARGET_TRACE=m
232 230 CONFIG_NETFILTER_XT_TARGET_SECMARK=m
233 231 CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
234 232 CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
···
245 239 CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
246 240 CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
247 241 CONFIG_NETFILTER_XT_MATCH_CPU=m
242 + CONFIG_NETFILTER_XT_MATCH_DCCP=m
248 243 CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
249 244 CONFIG_NETFILTER_XT_MATCH_DSCP=m
250 245 CONFIG_NETFILTER_XT_MATCH_ESP=m
···
316 309 CONFIG_IP_NF_MATCH_ECN=m
317 310 CONFIG_IP_NF_MATCH_RPFILTER=m
318 311 CONFIG_IP_NF_MATCH_TTL=m
319 - CONFIG_IP_NF_FILTER=m
320 312 CONFIG_IP_NF_TARGET_REJECT=m
321 - CONFIG_IP_NF_NAT=m
322 - CONFIG_IP_NF_TARGET_MASQUERADE=m
323 - CONFIG_IP_NF_MANGLE=m
324 313 CONFIG_IP_NF_TARGET_ECN=m
325 - CONFIG_IP_NF_TARGET_TTL=m
326 - CONFIG_IP_NF_RAW=m
327 - CONFIG_IP_NF_SECURITY=m
328 - CONFIG_IP_NF_ARPFILTER=m
329 314 CONFIG_IP_NF_ARP_MANGLE=m
330 315 CONFIG_NFT_FIB_IPV6=m
331 316 CONFIG_IP6_NF_IPTABLES=m
···
330 331 CONFIG_IP6_NF_MATCH_MH=m
331 332 CONFIG_IP6_NF_MATCH_RPFILTER=m
332 333 CONFIG_IP6_NF_MATCH_RT=m
333 - CONFIG_IP6_NF_TARGET_HL=m
334 - CONFIG_IP6_NF_FILTER=m
335 334 CONFIG_IP6_NF_TARGET_REJECT=m
336 - CONFIG_IP6_NF_MANGLE=m
337 - CONFIG_IP6_NF_RAW=m
338 - CONFIG_IP6_NF_SECURITY=m
339 - CONFIG_IP6_NF_NAT=m
340 - CONFIG_IP6_NF_TARGET_MASQUERADE=m
341 335 CONFIG_NF_TABLES_BRIDGE=m
336 + CONFIG_IP_SCTP=m
342 337 CONFIG_RDS=m
343 338 CONFIG_RDS_RDMA=m
344 339 CONFIG_RDS_TCP=m
···
366 373 CONFIG_NET_SCH_INGRESS=m
367 374 CONFIG_NET_SCH_PLUG=m
368 375 CONFIG_NET_SCH_ETS=m
376 + CONFIG_NET_SCH_DUALPI2=m
369 377 CONFIG_NET_CLS_BASIC=m
370 378 CONFIG_NET_CLS_ROUTE4=m
371 379 CONFIG_NET_CLS_FW=m
···
488 494 CONFIG_NETDEVICES=y
489 495 CONFIG_BONDING=m
490 496 CONFIG_DUMMY=m
497 + CONFIG_OVPN=m
491 498 CONFIG_EQUALIZER=m
492 499 CONFIG_IFB=m
493 500 CONFIG_MACVLAN=m
···
626 631 CONFIG_VHOST_NET=m
627 632 CONFIG_VHOST_VSOCK=m
628 633 CONFIG_VHOST_VDPA=m
634 + CONFIG_DEV_DAX=m
629 635 CONFIG_EXT4_FS=y
630 636 CONFIG_EXT4_FS_POSIX_ACL=y
631 637 CONFIG_EXT4_FS_SECURITY=y
···
648 652 CONFIG_BCACHEFS_FS=m
649 653 CONFIG_BCACHEFS_QUOTA=y
650 654 CONFIG_BCACHEFS_POSIX_ACL=y
655 + CONFIG_FS_DAX=y
651 656 CONFIG_EXPORTFS_BLOCK_OPS=y
652 657 CONFIG_FS_ENCRYPTION=y
653 658 CONFIG_FS_VERITY=y
···
680 683 CONFIG_TMPFS_INODE64=y
681 684 CONFIG_TMPFS_QUOTA=y
682 685 CONFIG_HUGETLBFS=y
683 - CONFIG_CONFIGFS_FS=m
684 686 CONFIG_ECRYPT_FS=m
685 687 CONFIG_CRAMFS=m
686 688 CONFIG_SQUASHFS=m
···
737 741 CONFIG_CRYPTO_FIPS=y
738 742 CONFIG_CRYPTO_USER=m
739 743 CONFIG_CRYPTO_SELFTESTS=y
744 + CONFIG_CRYPTO_NULL=y
740 745 CONFIG_CRYPTO_PCRYPT=m
741 746 CONFIG_CRYPTO_CRYPTD=m
742 747 CONFIG_CRYPTO_BENCHMARK=m
···
766 769 CONFIG_CRYPTO_LRW=m
767 770 CONFIG_CRYPTO_PCBC=m
768 771 CONFIG_CRYPTO_AEGIS128=m
769 - CONFIG_CRYPTO_CHACHA20POLY1305=m
770 772 CONFIG_CRYPTO_GCM=y
771 773 CONFIG_CRYPTO_SEQIV=y
772 774 CONFIG_CRYPTO_MD4=m
+2 -1
arch/s390/configs/zfcpdump_defconfig
···
1 1 CONFIG_NO_HZ_IDLE=y
2 2 CONFIG_HIGH_RES_TIMERS=y
3 + CONFIG_POSIX_AUX_CLOCKS=y
3 4 CONFIG_BPF_SYSCALL=y
4 5 # CONFIG_CPU_ISOLATION is not set
5 6 # CONFIG_UTS_NS is not set
···
12 11 CONFIG_KEXEC=y
13 12 CONFIG_MARCH_Z13=y
14 13 CONFIG_NR_CPUS=2
15 - CONFIG_HZ_100=y
14 + CONFIG_HZ_1000=y
16 15 # CONFIG_CHSC_SCH is not set
17 16 # CONFIG_SCM_BUS is not set
18 17 # CONFIG_AP is not set
+12 -7
arch/s390/hypfs/hypfs_dbfs.c
···
6 6  * Author(s): Michael Holzheu <holzheu@linux.vnet.ibm.com>
7 7  */
8 8
9 + #include <linux/security.h>
9 10 #include <linux/slab.h>
10 11 #include "hypfs.h"
11 12
···
67 66 long rc;
68 67
69 68 mutex_lock(&df->lock);
70 - if (df->unlocked_ioctl)
71 - 	rc = df->unlocked_ioctl(file, cmd, arg);
72 - else
73 - 	rc = -ENOTTY;
69 + rc = df->unlocked_ioctl(file, cmd, arg);
74 70 mutex_unlock(&df->lock);
75 71 return rc;
76 72 }
77 73
78 - static const struct file_operations dbfs_ops = {
74 + static const struct file_operations dbfs_ops_ioctl = {
79 75 	.read = dbfs_read,
80 76 	.unlocked_ioctl = dbfs_ioctl,
81 77 };
82 78
79 + static const struct file_operations dbfs_ops = {
80 + 	.read = dbfs_read,
81 + };
82 +
83 83 void hypfs_dbfs_create_file(struct hypfs_dbfs_file *df)
84 84 {
85 - df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df,
86 - 	&dbfs_ops);
85 + const struct file_operations *fops = &dbfs_ops;
86 +
87 + if (df->unlocked_ioctl && !security_locked_down(LOCKDOWN_DEBUGFS))
88 + 	fops = &dbfs_ops_ioctl;
89 + df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df, fops);
87 90 mutex_init(&df->lock);
88 91 }
89 92
+3 -2
arch/x86/include/asm/xen/hypercall.h
···
94 94 #ifdef MODULE
95 95 #define __ADDRESSABLE_xen_hypercall
96 96 #else
97 - #define __ADDRESSABLE_xen_hypercall __ADDRESSABLE_ASM_STR(__SCK__xen_hypercall)
97 + #define __ADDRESSABLE_xen_hypercall \
98 + 	__stringify(.global STATIC_CALL_KEY(xen_hypercall);)
98 99 #endif
99 100
100 101 #define __HYPERCALL \
101 102 	__ADDRESSABLE_xen_hypercall \
102 - 	"call __SCT__xen_hypercall"
103 + 	__stringify(call STATIC_CALL_TRAMP(xen_hypercall))
103 104
104 105 #define __HYPERCALL_ENTRY(x) "a" (x)
+6 -2
arch/x86/kernel/cpu/amd.c
···
1326 1326
1327 1327 static __init int print_s5_reset_status_mmio(void)
1328 1328 {
1329 - 	unsigned long value;
1330 1329 	void __iomem *addr;
1330 + 	u32 value;
1331 1331 	int i;
1332 1332
1333 1333 	if (!cpu_feature_enabled(X86_FEATURE_ZEN))
···
1340 1340 	value = ioread32(addr);
1341 1341 	iounmap(addr);
1342 1342
1343 + 	/* Value with "all bits set" is an error response and should be ignored. */
1344 + 	if (value == U32_MAX)
1345 + 		return 0;
1346 +
1343 1347 	for (i = 0; i < ARRAY_SIZE(s5_reset_reason_txt); i++) {
1344 1348 		if (!(value & BIT(i)))
1345 1349 			continue;
1346 1350
1347 1351 		if (s5_reset_reason_txt[i]) {
1348 - 			pr_info("x86/amd: Previous system reset reason [0x%08lx]: %s\n",
1352 + 			pr_info("x86/amd: Previous system reset reason [0x%08x]: %s\n",
1349 1353 				value, s5_reset_reason_txt[i]);
1350 1354 		}
1351 1355 	}
+1 -3
arch/x86/kernel/cpu/bugs.c
···
1068 1068 	if (gds_mitigation == GDS_MITIGATION_AUTO) {
1069 1069 		if (should_mitigate_vuln(X86_BUG_GDS))
1070 1070 			gds_mitigation = GDS_MITIGATION_FULL;
1071 - 		else {
1071 + 		else
1072 1072 			gds_mitigation = GDS_MITIGATION_OFF;
1073 - 			return;
1074 - 		}
1075 1073
1076 1074 	/* No microcode */
+3
arch/x86/kernel/cpu/hygon.c
···
16 16 #include <asm/spec-ctrl.h>
17 17 #include <asm/delay.h>
18 18 #include <asm/msr.h>
19 + #include <asm/resctrl.h>
19 20
20 21 #include "cpu.h"
21 22
···
118 117 		x86_amd_ls_cfg_ssbd_mask = 1ULL << 10;
119 118 	}
120 119 }
120 +
121 + 	resctrl_cpu_detect(c);
121 122 }
122 123
123 124 static void early_init_hygon(struct cpuinfo_x86 *c)
+1 -1
block/blk-core.c
··· 557 557 sector_t maxsector = bdev_nr_sectors(bio->bi_bdev); 558 558 unsigned int nr_sectors = bio_sectors(bio); 559 559 560 - if (nr_sectors && 560 + if (nr_sectors && maxsector && 561 561 (nr_sectors > maxsector || 562 562 bio->bi_iter.bi_sector > maxsector - nr_sectors)) { 563 563 pr_info_ratelimited("%s: attempt to access beyond end of device\n"
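The blk-core.c hunk above hardens an overflow-safe bounds check: `nr_sectors > maxsector` is tested before computing `maxsector - nr_sectors`, so the subtraction can never wrap, and the new `maxsector &&` guard skips the warning entirely for zero-capacity devices. A standalone sketch of the same pattern (hypothetical helper name, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the bio bounds check: returns true when
 * [start, start + count) pokes past a device of `cap` sectors.
 * Mirrors the `nr_sectors && maxsector &&` guard above: empty bios
 * and zero-capacity devices are never flagged. */
static bool bio_beyond_end(uint64_t start, uint64_t count, uint64_t cap)
{
	if (!count || !cap)
		return false;
	/* count > cap is tested first, so cap - count cannot wrap */
	return count > cap || start > cap - count;
}
```

Comparing `start > cap - count` instead of `start + count > cap` is what keeps the check correct even for `start` values near `UINT64_MAX`.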
+1
block/blk-mq-debugfs.c
··· 95 95 QUEUE_FLAG_NAME(SQ_SCHED), 96 96 QUEUE_FLAG_NAME(DISABLE_WBT_DEF), 97 97 QUEUE_FLAG_NAME(NO_ELV_SWITCH), 98 + QUEUE_FLAG_NAME(QOS_ENABLED), 98 99 }; 99 100 #undef QUEUE_FLAG_NAME 100 101
+9 -4
block/blk-mq.c
··· 5033 5033 unsigned int memflags; 5034 5034 int i; 5035 5035 struct xarray elv_tbl, et_tbl; 5036 + bool queues_frozen = false; 5036 5037 5037 5038 lockdep_assert_held(&set->tag_list_lock); 5038 5039 ··· 5057 5056 blk_mq_sysfs_unregister_hctxs(q); 5058 5057 } 5059 5058 5060 - list_for_each_entry(q, &set->tag_list, tag_set_list) 5061 - blk_mq_freeze_queue_nomemsave(q); 5062 - 5063 5059 /* 5064 5060 * Switch IO scheduler to 'none', cleaning up the data associated 5065 5061 * with the previous scheduler. We will switch back once we are done ··· 5066 5068 if (blk_mq_elv_switch_none(q, &elv_tbl)) 5067 5069 goto switch_back; 5068 5070 5071 + list_for_each_entry(q, &set->tag_list, tag_set_list) 5072 + blk_mq_freeze_queue_nomemsave(q); 5073 + queues_frozen = true; 5069 5074 if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0) 5070 5075 goto switch_back; 5071 5076 ··· 5092 5091 } 5093 5092 switch_back: 5094 5093 /* The blk_mq_elv_switch_back unfreezes queue for us. */ 5095 - list_for_each_entry(q, &set->tag_list, tag_set_list) 5094 + list_for_each_entry(q, &set->tag_list, tag_set_list) { 5095 + /* switch_back expects queue to be frozen */ 5096 + if (!queues_frozen) 5097 + blk_mq_freeze_queue_nomemsave(q); 5096 5098 blk_mq_elv_switch_back(q, &elv_tbl, &et_tbl); 5099 + } 5097 5100 5098 5101 list_for_each_entry(q, &set->tag_list, tag_set_list) { 5099 5102 blk_mq_sysfs_register_hctxs(q);
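The blk-mq.c hunk above moves the queue freeze after the elevator switch and records whether it happened in `queues_frozen`, so the shared `switch_back:` error path can freeze any queues that were skipped before unfreezing them. A minimal userspace sketch of that "remember how far setup got" unwind pattern (all names hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/* All names hypothetical. The common exit path expects every queue
 * frozen; if we bail out before the freeze step, freeze on the way
 * out so the unwind code can stay uniform. */
static int frozen_count;

static void freeze_queue(void)   { frozen_count++; }
static void unfreeze_queue(void) { frozen_count--; }

static int update_nr_hw_queues(bool fail_early)
{
	bool queues_frozen = false;
	int ret = 0;

	if (fail_early) {           /* e.g. the elevator switch failed */
		ret = -1;
		goto switch_back;
	}

	freeze_queue();
	queues_frozen = true;
	/* ... reallocate tags, remap queues ... */

switch_back:
	/* the switch-back path expects the queue to be frozen */
	if (!queues_frozen)
		freeze_queue();
	unfreeze_queue();
	return ret;
}
```

Either way out, every freeze is balanced by exactly one unfreeze.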
+4 -4
block/blk-rq-qos.c
··· 2 2 3 3 #include "blk-rq-qos.h" 4 4 5 - __read_mostly DEFINE_STATIC_KEY_FALSE(block_rq_qos); 6 - 7 5 /* 8 6 * Increment 'v', if 'v' is below 'below'. Returns true if we succeeded, 9 7 * false if 'v' + 1 would be bigger than 'below'. ··· 317 319 struct rq_qos *rqos = q->rq_qos; 318 320 q->rq_qos = rqos->next; 319 321 rqos->ops->exit(rqos); 320 - static_branch_dec(&block_rq_qos); 321 322 } 323 + blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q); 322 324 mutex_unlock(&q->rq_qos_mutex); 323 325 } 324 326 ··· 344 346 goto ebusy; 345 347 rqos->next = q->rq_qos; 346 348 q->rq_qos = rqos; 347 - static_branch_inc(&block_rq_qos); 349 + blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q); 348 350 349 351 blk_mq_unfreeze_queue(q, memflags); 350 352 ··· 375 377 break; 376 378 } 377 379 } 380 + if (!q->rq_qos) 381 + blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q); 378 382 blk_mq_unfreeze_queue(q, memflags); 379 383 380 384 mutex_lock(&q->debugfs_mutex);
+31 -17
block/blk-rq-qos.h
··· 12 12 #include "blk-mq-debugfs.h" 13 13 14 14 struct blk_mq_debugfs_attr; 15 - extern struct static_key_false block_rq_qos; 16 15 17 16 enum rq_qos_id { 18 17 RQ_QOS_WBT, ··· 112 113 113 114 static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio) 114 115 { 115 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) 116 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 117 + q->rq_qos) 116 118 __rq_qos_cleanup(q->rq_qos, bio); 117 119 } 118 120 119 121 static inline void rq_qos_done(struct request_queue *q, struct request *rq) 120 122 { 121 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos && 122 - !blk_rq_is_passthrough(rq)) 123 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 124 + q->rq_qos && !blk_rq_is_passthrough(rq)) 123 125 __rq_qos_done(q->rq_qos, rq); 124 126 } 125 127 126 128 static inline void rq_qos_issue(struct request_queue *q, struct request *rq) 127 129 { 128 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) 130 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 131 + q->rq_qos) 129 132 __rq_qos_issue(q->rq_qos, rq); 130 133 } 131 134 132 135 static inline void rq_qos_requeue(struct request_queue *q, struct request *rq) 133 136 { 134 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) 137 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 138 + q->rq_qos) 135 139 __rq_qos_requeue(q->rq_qos, rq); 136 140 } 137 141 138 142 static inline void rq_qos_done_bio(struct bio *bio) 139 143 { 140 - if (static_branch_unlikely(&block_rq_qos) && 141 - bio->bi_bdev && (bio_flagged(bio, BIO_QOS_THROTTLED) || 142 - bio_flagged(bio, BIO_QOS_MERGED))) { 143 - struct request_queue *q = bdev_get_queue(bio->bi_bdev); 144 - if (q->rq_qos) 145 - __rq_qos_done_bio(q->rq_qos, bio); 146 - } 144 + struct request_queue *q; 145 + 146 + if (!bio->bi_bdev || (!bio_flagged(bio, BIO_QOS_THROTTLED) && 147 + !bio_flagged(bio, BIO_QOS_MERGED))) 148 + return; 149 + 
150 + q = bdev_get_queue(bio->bi_bdev); 151 + 152 + /* 153 + * If a bio has BIO_QOS_xxx set, it implicitly implies that 154 + * q->rq_qos is present. So, we skip re-checking q->rq_qos 155 + * here as an extra optimization and directly call 156 + * __rq_qos_done_bio(). 157 + */ 158 + __rq_qos_done_bio(q->rq_qos, bio); 147 159 } 148 160 149 161 static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio) 150 162 { 151 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) { 163 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 164 + q->rq_qos) { 152 165 bio_set_flag(bio, BIO_QOS_THROTTLED); 153 166 __rq_qos_throttle(q->rq_qos, bio); 154 167 } ··· 169 158 static inline void rq_qos_track(struct request_queue *q, struct request *rq, 170 159 struct bio *bio) 171 160 { 172 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) 161 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 162 + q->rq_qos) 173 163 __rq_qos_track(q->rq_qos, rq, bio); 174 164 } 175 165 176 166 static inline void rq_qos_merge(struct request_queue *q, struct request *rq, 177 167 struct bio *bio) 178 168 { 179 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) { 169 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 170 + q->rq_qos) { 180 171 bio_set_flag(bio, BIO_QOS_MERGED); 181 172 __rq_qos_merge(q->rq_qos, rq, bio); 182 173 } ··· 186 173 187 174 static inline void rq_qos_queue_depth_changed(struct request_queue *q) 188 175 { 189 - if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) 176 + if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 177 + q->rq_qos) 190 178 __rq_qos_queue_depth_changed(q->rq_qos); 191 179 } 192 180
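The blk-rq-qos.h changes above swap a global static key for a per-queue `QUEUE_FLAG_QOS_ENABLED` flag: each hot-path helper now tests the queue's own flag before dereferencing `q->rq_qos`. A userspace sketch of that gate (the struct and names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative gate, not the kernel's types: a cheap per-queue flag
 * test short-circuits the hot path, and q->rq_qos is re-checked so a
 * teardown that already cleared the pointer is still caught. */
enum { QOS_ENABLED = 1u << 0 };

struct queue {
	unsigned int flags;
	void *rq_qos;           /* opaque QoS chain, NULL when absent */
};

static int qos_calls;

static void __qos_hook(void *rq_qos)
{
	(void)rq_qos;
	qos_calls++;
}

static void rq_qos_hook(struct queue *q)
{
	if ((q->flags & QOS_ENABLED) && q->rq_qos)
		__qos_hook(q->rq_qos);
}

/* helper for exercising the gate with different queue states */
static int run_hook(unsigned int flags, void *rq_qos)
{
	struct queue q = { flags, rq_qos };
	int before = qos_calls;

	rq_qos_hook(&q);
	return qos_calls - before;
}
```

The hook only fires when both the flag and the pointer agree that QoS is active.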
+6 -6
block/blk-settings.c
··· 157 157 switch (bi->csum_type) { 158 158 case BLK_INTEGRITY_CSUM_NONE: 159 159 if (bi->pi_tuple_size) { 160 - pr_warn("pi_tuple_size must be 0 when checksum type \ 161 - is none\n"); 160 + pr_warn("pi_tuple_size must be 0 when checksum type is none\n"); 162 161 return -EINVAL; 163 162 } 164 163 break; 165 164 case BLK_INTEGRITY_CSUM_CRC: 166 165 case BLK_INTEGRITY_CSUM_IP: 167 166 if (bi->pi_tuple_size != sizeof(struct t10_pi_tuple)) { 168 - pr_warn("pi_tuple_size mismatch for T10 PI: expected \ 169 - %zu, got %u\n", 167 + pr_warn("pi_tuple_size mismatch for T10 PI: expected %zu, got %u\n", 170 168 sizeof(struct t10_pi_tuple), 171 169 bi->pi_tuple_size); 172 170 return -EINVAL; ··· 172 174 break; 173 175 case BLK_INTEGRITY_CSUM_CRC64: 174 176 if (bi->pi_tuple_size != sizeof(struct crc64_pi_tuple)) { 175 - pr_warn("pi_tuple_size mismatch for CRC64 PI: \ 176 - expected %zu, got %u\n", 177 + pr_warn("pi_tuple_size mismatch for CRC64 PI: expected %zu, got %u\n", 177 178 sizeof(struct crc64_pi_tuple), 178 179 bi->pi_tuple_size); 179 180 return -EINVAL; ··· 969 972 goto incompatible; 970 973 if (ti->csum_type != bi->csum_type) 971 974 goto incompatible; 975 + if (ti->pi_tuple_size != bi->pi_tuple_size) 976 + goto incompatible; 972 977 if ((ti->flags & BLK_INTEGRITY_REF_TAG) != 973 978 (bi->flags & BLK_INTEGRITY_REF_TAG)) 974 979 goto incompatible; ··· 979 980 ti->flags |= (bi->flags & BLK_INTEGRITY_DEVICE_CAPABLE) | 980 981 (bi->flags & BLK_INTEGRITY_REF_TAG); 981 982 ti->csum_type = bi->csum_type; 983 + ti->pi_tuple_size = bi->pi_tuple_size; 982 984 ti->metadata_size = bi->metadata_size; 983 985 ti->pi_offset = bi->pi_offset; 984 986 ti->interval_exp = bi->interval_exp;
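The blk-settings.c hunk fixes string literals that were continued with a backslash-newline: line splicing keeps the literal one token, but it also embeds the next line's leading indentation in the message, so the old `pr_warn()` printed stray whitespace mid-sentence. Adjacent string literals, by contrast, concatenate with nothing in between. A minimal demonstration:

```c
#include <assert.h>
#include <string.h>

/* Backslash-newline splices the source lines, so the literal keeps
 * the continuation line's leading indentation. */
static const char *broken_msg(void)
{
	return "pi_tuple_size must be 0 when checksum type \
		is none";
}

/* Adjacent string literals concatenate cleanly, which is why the
 * patch joins the message onto one line. */
static const char *fixed_msg(void)
{
	return "pi_tuple_size must be 0 when checksum type "
	       "is none";
}
```

The two functions return different strings purely because of the whitespace the splice drags into the literal.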
+1 -1
drivers/accel/habanalabs/gaudi2/gaudi2.c
··· 10437 10437 (u64 *)(lin_dma_pkts_arr), DEBUGFS_WRITE64); 10438 10438 WREG32(sob_addr, 0); 10439 10439 10440 - kfree(lin_dma_pkts_arr); 10440 + kvfree(lin_dma_pkts_arr); 10441 10441 10442 10442 return rc; 10443 10443 }
+7 -10
drivers/acpi/apei/einj-core.c
··· 315 315 memcpy_fromio(&v5param, p, v5param_size); 316 316 acpi5 = 1; 317 317 check_vendor_extension(pa_v5, &v5param); 318 - if (available_error_type & ACPI65_EINJV2_SUPP) { 318 + if (is_v2 && available_error_type & ACPI65_EINJV2_SUPP) { 319 319 len = v5param.einjv2_struct.length; 320 320 offset = offsetof(struct einjv2_extension_struct, component_arr); 321 321 max_nr_components = (len - offset) / ··· 540 540 struct set_error_type_with_address *v5param; 541 541 542 542 v5param = kmalloc(v5param_size, GFP_KERNEL); 543 + if (!v5param) 544 + return -ENOMEM; 545 + 543 546 memcpy_fromio(v5param, einj_param, v5param_size); 544 547 v5param->type = type; 545 548 if (type & ACPI5_VENDOR_BIT) { ··· 1094 1091 return rc; 1095 1092 } 1096 1093 1097 - static void __exit einj_remove(struct faux_device *fdev) 1094 + static void einj_remove(struct faux_device *fdev) 1098 1095 { 1099 1096 struct apei_exec_context ctx; 1100 1097 ··· 1117 1114 } 1118 1115 1119 1116 static struct faux_device *einj_dev; 1120 - /* 1121 - * einj_remove() lives in .exit.text. For drivers registered via 1122 - * platform_driver_probe() this is ok because they cannot get unbound at 1123 - * runtime. So mark the driver struct with __refdata to prevent modpost 1124 - * triggering a section mismatch warning. 1125 - */ 1126 - static struct faux_device_ops einj_device_ops __refdata = { 1117 + static struct faux_device_ops einj_device_ops = { 1127 1118 .probe = einj_probe, 1128 - .remove = __exit_p(einj_remove), 1119 + .remove = einj_remove, 1129 1120 }; 1130 1121 1131 1122 static int __init einj_init(void)
+1 -1
drivers/acpi/pfr_update.c
··· 329 329 if (type == PFRU_CODE_INJECT_TYPE) 330 330 return payload_hdr->rt_ver >= cap->code_rt_version; 331 331 332 - return payload_hdr->rt_ver >= cap->drv_rt_version; 332 + return payload_hdr->svn_ver >= cap->drv_svn; 333 333 } 334 334 335 335 static void print_update_debug_info(struct pfru_updated_result *result,
+14 -3
drivers/atm/atmtcp.c
··· 279 279 return NULL; 280 280 } 281 281 282 + static int atmtcp_c_pre_send(struct atm_vcc *vcc, struct sk_buff *skb) 283 + { 284 + struct atmtcp_hdr *hdr; 285 + 286 + if (skb->len < sizeof(struct atmtcp_hdr)) 287 + return -EINVAL; 288 + 289 + hdr = (struct atmtcp_hdr *)skb->data; 290 + if (hdr->length == ATMTCP_HDR_MAGIC) 291 + return -EINVAL; 292 + 293 + return 0; 294 + } 282 295 283 296 static int atmtcp_c_send(struct atm_vcc *vcc,struct sk_buff *skb) 284 297 { ··· 300 287 struct atm_vcc *out_vcc; 301 288 struct sk_buff *new_skb; 302 289 int result = 0; 303 - 304 - if (skb->len < sizeof(struct atmtcp_hdr)) 305 - goto done; 306 290 307 291 dev = vcc->dev_data; 308 292 hdr = (struct atmtcp_hdr *) skb->data; ··· 357 347 358 348 static const struct atmdev_ops atmtcp_c_dev_ops = { 359 349 .close = atmtcp_c_close, 350 + .pre_send = atmtcp_c_pre_send, 360 351 .send = atmtcp_c_send 361 352 }; 362 353
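The atmtcp.c change moves header validation into a new `pre_send` callback so a buffer is rejected before it is queued: either too short to hold the header, or carrying the reserved magic in its length field. A standalone sketch of that validator (the struct layout and magic value here are illustrative stand-ins, not the driver's definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative reserved value, not the driver's ATMTCP_HDR_MAGIC. */
#define HDR_MAGIC 0x50005u

/* Illustrative header layout. */
struct pkt_hdr {
	uint16_t vpi;
	uint16_t vci;
	uint32_t length;
};

/* Reject before queueing: truncated header, or a length field that
 * collides with the reserved control value. Returns 0 if sendable. */
static int pre_send(const unsigned char *buf, size_t len)
{
	struct pkt_hdr hdr;

	if (len < sizeof(hdr))
		return -1;              /* too short for the header */
	memcpy(&hdr, buf, sizeof(hdr));
	if (hdr.length == HDR_MAGIC)
		return -1;              /* reserved control value */
	return 0;
}

static unsigned char good_pkt[16];      /* zeroed: length 0, not magic */

/* builds a packet whose length field holds the magic value */
static int check_magic_pkt(void)
{
	unsigned char buf[16] = {0};
	struct pkt_hdr hdr = { 0, 0, HDR_MAGIC };

	memcpy(buf, &hdr, sizeof(hdr));
	return pre_send(buf, sizeof(buf));
}
```

Validating in a separate pre-send step keeps the send path free of error handling for malformed input.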
+2 -2
drivers/base/power/main.c
··· 675 675 idx = device_links_read_lock(); 676 676 677 677 /* Start processing the device's "async" consumers. */ 678 - list_for_each_entry_rcu(link, &dev->links.consumers, s_node) 678 + list_for_each_entry_rcu_locked(link, &dev->links.consumers, s_node) 679 679 if (READ_ONCE(link->status) != DL_STATE_DORMANT) 680 680 dpm_async_with_cleanup(link->consumer, func); 681 681 ··· 1330 1330 idx = device_links_read_lock(); 1331 1331 1332 1332 /* Start processing the device's "async" suppliers. */ 1333 - list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) 1333 + list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node) 1334 1334 if (READ_ONCE(link->status) != DL_STATE_DORMANT) 1335 1335 dpm_async_with_cleanup(link->supplier, func); 1336 1336
+21 -18
drivers/block/loop.c
··· 137 137 static int max_part; 138 138 static int part_shift; 139 139 140 - static loff_t get_size(loff_t offset, loff_t sizelimit, struct file *file) 140 + static loff_t lo_calculate_size(struct loop_device *lo, struct file *file) 141 141 { 142 + struct kstat stat; 142 143 loff_t loopsize; 144 + int ret; 143 145 144 - /* Compute loopsize in bytes */ 145 - loopsize = i_size_read(file->f_mapping->host); 146 - if (offset > 0) 147 - loopsize -= offset; 146 + /* 147 + * Get the accurate file size. This provides better results than 148 + * cached inode data, particularly for network filesystems where 149 + * metadata may be stale. 150 + */ 151 + ret = vfs_getattr_nosec(&file->f_path, &stat, STATX_SIZE, 0); 152 + if (ret) 153 + return 0; 154 + 155 + loopsize = stat.size; 156 + if (lo->lo_offset > 0) 157 + loopsize -= lo->lo_offset; 148 158 /* offset is beyond i_size, weird but possible */ 149 159 if (loopsize < 0) 150 160 return 0; 151 - 152 - if (sizelimit > 0 && sizelimit < loopsize) 153 - loopsize = sizelimit; 161 + if (lo->lo_sizelimit > 0 && lo->lo_sizelimit < loopsize) 162 + loopsize = lo->lo_sizelimit; 154 163 /* 155 164 * Unfortunately, if we want to do I/O on the device, 156 165 * the number of 512-byte sectors has to fit into a sector_t. 
157 166 */ 158 167 return loopsize >> 9; 159 - } 160 - 161 - static loff_t get_loop_size(struct loop_device *lo, struct file *file) 162 - { 163 - return get_size(lo->lo_offset, lo->lo_sizelimit, file); 164 168 } 165 169 166 170 /* ··· 573 569 error = -EINVAL; 574 570 575 571 /* size of the new backing store needs to be the same */ 576 - if (get_loop_size(lo, file) != get_loop_size(lo, old_file)) 572 + if (lo_calculate_size(lo, file) != lo_calculate_size(lo, old_file)) 577 573 goto out_err; 578 574 579 575 /* ··· 1067 1063 loop_update_dio(lo); 1068 1064 loop_sysfs_init(lo); 1069 1065 1070 - size = get_loop_size(lo, file); 1066 + size = lo_calculate_size(lo, file); 1071 1067 loop_set_size(lo, size); 1072 1068 1073 1069 /* Order wrt reading lo_state in loop_validate_file(). */ ··· 1259 1255 if (partscan) 1260 1256 clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state); 1261 1257 if (!err && size_changed) { 1262 - loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit, 1263 - lo->lo_backing_file); 1258 + loff_t new_size = lo_calculate_size(lo, lo->lo_backing_file); 1264 1259 loop_set_size(lo, new_size); 1265 1260 } 1266 1261 out_unlock: ··· 1402 1399 if (unlikely(lo->lo_state != Lo_bound)) 1403 1400 return -ENXIO; 1404 1401 1405 - size = get_loop_size(lo, lo->lo_backing_file); 1402 + size = lo_calculate_size(lo, lo->lo_backing_file); 1406 1403 loop_set_size(lo, size); 1407 1404 1408 1405 return 0;
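The loop.c refactor folds `get_size()` and `get_loop_size()` into `lo_calculate_size()`, which asks the filesystem for a fresh size via `vfs_getattr_nosec()` instead of trusting cached inode data. The arithmetic that follows is unchanged: subtract the offset, clamp to the size limit, convert bytes to 512-byte sectors. That arithmetic in isolation (hypothetical helper, not the kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* Standalone version of the size arithmetic above: byte size of the
 * backing file, minus the configured offset, clamped to the size
 * limit, then converted to 512-byte sectors. */
static int64_t loop_sectors(int64_t file_bytes, int64_t offset,
			    int64_t sizelimit)
{
	int64_t loopsize = file_bytes;

	if (offset > 0)
		loopsize -= offset;
	if (loopsize < 0)           /* offset beyond EOF: empty device */
		return 0;
	if (sizelimit > 0 && sizelimit < loopsize)
		loopsize = sizelimit;
	return loopsize >> 9;       /* bytes -> 512-byte sectors */
}
```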
+1 -2
drivers/cdx/controller/cdx_rpmsg.c
··· 129 129 130 130 chinfo.src = RPMSG_ADDR_ANY; 131 131 chinfo.dst = rpdev->dst; 132 - strscpy(chinfo.name, cdx_rpmsg_id_table[0].name, 133 - strlen(cdx_rpmsg_id_table[0].name)); 132 + strscpy(chinfo.name, cdx_rpmsg_id_table[0].name, sizeof(chinfo.name)); 134 133 135 134 cdx_mcdi->ept = rpmsg_create_ept(rpdev, cdx_rpmsg_cb, NULL, chinfo); 136 135 if (!cdx_mcdi->ept) {
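The cdx_rpmsg.c fix is a classic `strscpy()` size bug: the third argument is the destination capacity, so passing `strlen(src)` leaves no room for the terminator and silently truncates the last character. A userspace stand-in for `strscpy()` (simplified; the kernel version returns `-E2BIG` on truncation) makes the difference visible:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal stand-in for the kernel's strscpy(): copies at most
 * size - 1 bytes, always NUL-terminates, returns the number of bytes
 * copied or -1 on truncation. */
static long my_strscpy(char *dst, const char *src, size_t size)
{
	size_t len;

	if (!size)
		return -1;
	len = strlen(src);
	if (len >= size) {
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -1;          /* truncated */
	}
	memcpy(dst, src, len + 1);
	return (long)len;
}

static char namebuf[16];            /* stand-in for chinfo.name */
```

Passing `strlen(src)` as the bound truncates; passing `sizeof(dest)`, as the patch does, copies the whole name.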
+5
drivers/comedi/comedi_fops.c
··· 1587 1587 memset(&data[n], 0, (MIN_SAMPLES - n) * 1588 1588 sizeof(unsigned int)); 1589 1589 } 1590 + } else { 1591 + memset(data, 0, max_t(unsigned int, n, MIN_SAMPLES) * 1592 + sizeof(unsigned int)); 1590 1593 } 1591 1594 ret = parse_insn(dev, insns + i, data, file); 1592 1595 if (ret < 0) ··· 1673 1670 memset(&data[insn->n], 0, 1674 1671 (MIN_SAMPLES - insn->n) * sizeof(unsigned int)); 1675 1672 } 1673 + } else { 1674 + memset(data, 0, n_data * sizeof(unsigned int)); 1676 1675 } 1677 1676 ret = parse_insn(dev, insn, data, file); 1678 1677 if (ret < 0)
+14 -13
drivers/comedi/drivers.c
··· 620 620 unsigned int chan = CR_CHAN(insn->chanspec); 621 621 unsigned int base_chan = (chan < 32) ? 0 : chan; 622 622 unsigned int _data[2]; 623 + unsigned int i; 623 624 int ret; 624 - 625 - if (insn->n == 0) 626 - return 0; 627 625 628 626 memset(_data, 0, sizeof(_data)); 629 627 memset(&_insn, 0, sizeof(_insn)); ··· 633 635 if (insn->insn == INSN_WRITE) { 634 636 if (!(s->subdev_flags & SDF_WRITABLE)) 635 637 return -EINVAL; 636 - _data[0] = 1U << (chan - base_chan); /* mask */ 637 - _data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */ 638 + _data[0] = 1U << (chan - base_chan); /* mask */ 639 + } 640 + for (i = 0; i < insn->n; i++) { 641 + if (insn->insn == INSN_WRITE) 642 + _data[1] = data[i] ? _data[0] : 0; /* bits */ 643 + 644 + ret = s->insn_bits(dev, s, &_insn, _data); 645 + if (ret < 0) 646 + return ret; 647 + 648 + if (insn->insn == INSN_READ) 649 + data[i] = (_data[1] >> (chan - base_chan)) & 1; 638 650 } 639 651 640 - ret = s->insn_bits(dev, s, &_insn, _data); 641 - if (ret < 0) 642 - return ret; 643 - 644 - if (insn->insn == INSN_READ) 645 - data[0] = (_data[1] >> (chan - base_chan)) & 1; 646 - 647 - return 1; 652 + return insn->n; 648 653 } 649 654 650 655 static int __comedi_device_postconfig_async(struct comedi_device *dev,
+2 -1
drivers/comedi/drivers/pcl726.c
··· 328 328 * Hook up the external trigger source interrupt only if the 329 329 * user config option is valid and the board supports interrupts. 330 330 */ 331 - if (it->options[1] && (board->irq_mask & (1 << it->options[1]))) { 331 + if (it->options[1] > 0 && it->options[1] < 16 && 332 + (board->irq_mask & (1U << it->options[1]))) { 332 333 ret = request_irq(it->options[1], pcl726_interrupt, 0, 333 334 dev->board_name, dev); 334 335 if (ret == 0) {
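The pcl726.c hunk bounds-checks the user-supplied IRQ option before using it as a shift count: without the `> 0 && < 16` guard, a bad config value could make `1 << options[1]` shift by a negative or out-of-range amount, which is undefined behavior in C. A sketch of the guard (function and parameter names illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A user-supplied option is only trusted if it selects a bit inside
 * the 16-bit irq_mask, so 1U << option is always well defined. */
static bool irq_option_valid(int option, uint16_t irq_mask)
{
	if (option <= 0 || option >= 16)
		return false;
	return (irq_mask & (1U << option)) != 0;
}
```

Range-checking before shifting is the general fix whenever a shift count comes from untrusted input.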
+12 -17
drivers/cpuidle/governors/menu.c
··· 287 287 return 0; 288 288 } 289 289 290 - if (tick_nohz_tick_stopped()) { 291 - /* 292 - * If the tick is already stopped, the cost of possible short 293 - * idle duration misprediction is much higher, because the CPU 294 - * may be stuck in a shallow idle state for a long time as a 295 - * result of it. In that case say we might mispredict and use 296 - * the known time till the closest timer event for the idle 297 - * state selection. 298 - */ 299 - if (predicted_ns < TICK_NSEC) 300 - predicted_ns = data->next_timer_ns; 301 - } else if (latency_req > predicted_ns) { 302 - latency_req = predicted_ns; 303 - } 290 + /* 291 + * If the tick is already stopped, the cost of possible short idle 292 + * duration misprediction is much higher, because the CPU may be stuck 293 + * in a shallow idle state for a long time as a result of it. In that 294 + * case, say we might mispredict and use the known time till the closest 295 + * timer event for the idle state selection. 296 + */ 297 + if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC) 298 + predicted_ns = data->next_timer_ns; 304 299 305 300 /* 306 301 * Find the idle state with the lowest power while satisfying ··· 311 316 if (idx == -1) 312 317 idx = i; /* first enabled state */ 313 318 319 + if (s->exit_latency_ns > latency_req) 320 + break; 321 + 314 322 if (s->target_residency_ns > predicted_ns) { 315 323 /* 316 324 * Use a physical idle state, not busy polling, unless 317 325 * a timer is going to trigger soon enough. 318 326 */ 319 327 if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) && 320 - s->exit_latency_ns <= latency_req && 321 328 s->target_residency_ns <= data->next_timer_ns) { 322 329 predicted_ns = s->target_residency_ns; 323 330 idx = i; ··· 351 354 352 355 return idx; 353 356 } 354 - if (s->exit_latency_ns > latency_req) 355 - break; 356 357 357 358 idx = i; 358 359 }
+4 -4
drivers/fpga/zynq-fpga.c
··· 405 405 } 406 406 } 407 407 408 - priv->dma_nelms = 409 - dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0); 410 - if (priv->dma_nelms == 0) { 408 + err = dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0); 409 + if (err) { 411 410 dev_err(&mgr->dev, "Unable to DMA map (TO_DEVICE)\n"); 412 - return -ENOMEM; 411 + return err; 413 412 } 413 + priv->dma_nelms = sgt->nents; 414 414 415 415 /* enable clock */ 416 416 err = clk_enable(priv->clk);
+14
drivers/gpio/gpiolib-acpi-quirks.c
··· 344 344 .ignore_interrupt = "AMDI0030:00@8", 345 345 }, 346 346 }, 347 + { 348 + /* 349 + * Spurious wakeups from TP_ATTN# pin 350 + * Found in BIOS 5.35 351 + * https://gitlab.freedesktop.org/drm/amd/-/issues/4482 352 + */ 353 + .matches = { 354 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 355 + DMI_MATCH(DMI_PRODUCT_FAMILY, "ProArt PX13"), 356 + }, 357 + .driver_data = &(struct acpi_gpiolib_dmi_quirk) { 358 + .ignore_wake = "ASCP1A00:00@8", 359 + }, 360 + }, 347 361 {} /* Terminating entry */ 348 362 }; 349 363
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
··· 514 514 return false; 515 515 516 516 if (drm_gem_is_imported(obj)) { 517 - struct dma_buf *dma_buf = obj->dma_buf; 517 + struct dma_buf *dma_buf = obj->import_attach->dmabuf; 518 518 519 519 if (dma_buf->ops != &amdgpu_dmabuf_ops) 520 520 /* No XGMI with non AMD GPUs */
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
··· 317 317 */ 318 318 if (!vm->is_compute_context || !vm->process_info) 319 319 return 0; 320 - if (!drm_gem_is_imported(obj) || !dma_buf_is_dynamic(obj->dma_buf)) 320 + if (!drm_gem_is_imported(obj) || 321 + !dma_buf_is_dynamic(obj->import_attach->dmabuf)) 321 322 return 0; 322 323 mutex_lock_nested(&vm->process_info->lock, 1); 323 324 if (!WARN_ON(!vm->process_info->eviction_fence)) {
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1283 1283 struct drm_gem_object *obj = &bo->tbo.base; 1284 1284 1285 1285 if (drm_gem_is_imported(obj) && bo_va->is_xgmi) { 1286 - struct dma_buf *dma_buf = obj->dma_buf; 1286 + struct dma_buf *dma_buf = obj->import_attach->dmabuf; 1287 1287 struct drm_gem_object *gobj = dma_buf->priv; 1288 1288 struct amdgpu_bo *abo = gem_to_amdgpu_bo(gobj); 1289 1289
+3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 7792 7792 struct amdgpu_dm_connector *aconn = to_amdgpu_dm_connector(conn); 7793 7793 int ret; 7794 7794 7795 + if (WARN_ON(unlikely(!old_con_state || !new_con_state))) 7796 + return -EINVAL; 7797 + 7795 7798 trace_amdgpu_dm_connector_atomic_check(new_con_state); 7796 7799 7797 7800 if (conn->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
+19
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 299 299 irq_type = amdgpu_display_crtc_idx_to_irq_type(adev, acrtc->crtc_id); 300 300 301 301 if (enable) { 302 + struct dc *dc = adev->dm.dc; 303 + struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); 304 + struct psr_settings *psr = &acrtc_state->stream->link->psr_settings; 305 + struct replay_settings *pr = &acrtc_state->stream->link->replay_settings; 306 + bool sr_supported = (psr->psr_version != DC_PSR_VERSION_UNSUPPORTED) || 307 + pr->config.replay_supported; 308 + 309 + /* 310 + * IPS & self-refresh feature can cause vblank counter resets between 311 + * vblank disable and enable. 312 + * It may cause system stuck due to waiting for the vblank counter. 313 + * Call this function to estimate missed vblanks by using timestamps and 314 + * update the vblank counter in DRM. 315 + */ 316 + if (dc->caps.ips_support && 317 + dc->config.disable_ips != DMUB_IPS_DISABLE_ALL && 318 + sr_supported && vblank->config.disable_immediate) 319 + drm_crtc_vblank_restore(crtc); 320 + 302 321 /* vblank irq on -> Only need vupdate irq in vrr mode */ 303 322 if (amdgpu_dm_crtc_vrr_active(acrtc_state)) 304 323 rc = amdgpu_dm_crtc_set_vupdate_irq(crtc, true);
+1 -4
drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
··· 174 174 return object_id; 175 175 } 176 176 177 - if (tbl->ucNumberOfObjects <= i) { 178 - dm_error("Can't find connector id %d in connector table of size %d.\n", 179 - i, tbl->ucNumberOfObjects); 177 + if (tbl->ucNumberOfObjects <= i) 180 178 return object_id; 181 - } 182 179 183 180 id = le16_to_cpu(tbl->asObjects[i].usObjectID); 184 181 object_id = object_id_from_bios_object_id(id);
+1 -1
drivers/gpu/drm/amd/display/dc/bios/command_table.c
··· 993 993 allocation.sPCLKInput.usFbDiv = 994 994 cpu_to_le16((uint16_t)bp_params->feedback_divider); 995 995 allocation.sPCLKInput.ucFracFbDiv = 996 - (uint8_t)bp_params->fractional_feedback_divider; 996 + (uint8_t)(bp_params->fractional_feedback_divider / 100000); 997 997 allocation.sPCLKInput.ucPostDiv = 998 998 (uint8_t)bp_params->pixel_clock_post_divider; 999 999
+5 -9
drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c
··· 72 72 /* ClocksStateLow */ 73 73 { .display_clk_khz = 352000, .pixel_clk_khz = 330000}, 74 74 /* ClocksStateNominal */ 75 - { .display_clk_khz = 600000, .pixel_clk_khz = 400000 }, 75 + { .display_clk_khz = 625000, .pixel_clk_khz = 400000 }, 76 76 /* ClocksStatePerformance */ 77 - { .display_clk_khz = 600000, .pixel_clk_khz = 400000 } }; 77 + { .display_clk_khz = 625000, .pixel_clk_khz = 400000 } }; 78 78 79 79 int dentist_get_divider_from_did(int did) 80 80 { ··· 391 391 { 392 392 struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg; 393 393 394 - pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context); 395 - 396 394 dce110_fill_display_configs(context, pp_display_cfg); 397 395 398 396 if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0) ··· 403 405 { 404 406 struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base); 405 407 struct dm_pp_power_level_change_request level_change_req; 406 - int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz; 407 - 408 - /*TODO: W/A for dal3 linux, investigate why this works */ 409 - if (!clk_mgr_dce->dfs_bypass_active) 410 - patched_disp_clk = patched_disp_clk * 115 / 100; 408 + const int max_disp_clk = 409 + clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz; 410 + int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz); 411 411 412 412 level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context); 413 413 /* get max clock state from PPLIB */
+23 -17
drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
··· 120 120 const struct dc_state *context, 121 121 struct dm_pp_display_configuration *pp_display_cfg) 122 122 { 123 + struct dc *dc = context->clk_mgr->ctx->dc; 123 124 int j; 124 125 int num_cfgs = 0; 126 + 127 + pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context); 128 + pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz; 129 + pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0; 130 + pp_display_cfg->crtc_index = dc->res_pool->res_cap->num_timing_generator; 125 131 126 132 for (j = 0; j < context->stream_count; j++) { 127 133 int k; ··· 170 164 cfg->v_refresh /= stream->timing.h_total; 171 165 cfg->v_refresh = (cfg->v_refresh + stream->timing.v_total / 2) 172 166 / stream->timing.v_total; 167 + 168 + /* Find first CRTC index and calculate its line time. 169 + * This is necessary for DPM on SI GPUs. 170 + */ 171 + if (cfg->pipe_idx < pp_display_cfg->crtc_index) { 172 + const struct dc_crtc_timing *timing = 173 + &context->streams[0]->timing; 174 + 175 + pp_display_cfg->crtc_index = cfg->pipe_idx; 176 + pp_display_cfg->line_time_in_us = 177 + timing->h_total * 10000 / timing->pix_clk_100hz; 178 + } 179 + } 180 + 181 + if (!num_cfgs) { 182 + pp_display_cfg->crtc_index = 0; 183 + pp_display_cfg->line_time_in_us = 0; 173 184 } 174 185 175 186 pp_display_cfg->display_count = num_cfgs; ··· 246 223 pp_display_cfg->min_engine_clock_deep_sleep_khz 247 224 = context->bw_ctx.bw.dce.sclk_deep_sleep_khz; 248 225 249 - pp_display_cfg->avail_mclk_switch_time_us = 250 - dce110_get_min_vblank_time_us(context); 251 - /* TODO: dce11.2*/ 252 - pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0; 253 - 254 - pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz; 255 - 256 226 dce110_fill_display_configs(context, pp_display_cfg); 257 - 258 - /* TODO: is this still applicable?*/ 259 - if (pp_display_cfg->display_count == 1) { 260 - const struct dc_crtc_timing *timing = 261 - &context->streams[0]->timing; 262 - 263 - 
pp_display_cfg->crtc_index = 264 - pp_display_cfg->disp_configs[0].pipe_idx; 265 - pp_display_cfg->line_time_in_us = timing->h_total * 10000 / timing->pix_clk_100hz; 266 - } 267 227 268 228 if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0) 269 229 dm_pp_apply_display_requirements(dc->ctx, pp_display_cfg);
+9 -22
drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c
··· 83 83 static int dce60_get_dp_ref_freq_khz(struct clk_mgr *clk_mgr_base) 84 84 { 85 85 struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base); 86 - int dprefclk_wdivider; 87 - int dp_ref_clk_khz; 88 - int target_div; 86 + struct dc_context *ctx = clk_mgr_base->ctx; 87 + int dp_ref_clk_khz = 0; 89 88 90 - /* DCE6 has no DPREFCLK_CNTL to read DP Reference Clock source */ 91 - 92 - /* Read the mmDENTIST_DISPCLK_CNTL to get the currently 93 - * programmed DID DENTIST_DPREFCLK_WDIVIDER*/ 94 - REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DPREFCLK_WDIVIDER, &dprefclk_wdivider); 95 - 96 - /* Convert DENTIST_DPREFCLK_WDIVIDERto actual divider*/ 97 - target_div = dentist_get_divider_from_did(dprefclk_wdivider); 98 - 99 - /* Calculate the current DFS clock, in kHz.*/ 100 - dp_ref_clk_khz = (DENTIST_DIVIDER_RANGE_SCALE_FACTOR 101 - * clk_mgr->base.dentist_vco_freq_khz) / target_div; 89 + if (ASIC_REV_IS_TAHITI_P(ctx->asic_id.hw_internal_rev)) 90 + dp_ref_clk_khz = ctx->dc_bios->fw_info.default_display_engine_pll_frequency; 91 + else 92 + dp_ref_clk_khz = clk_mgr_base->clks.dispclk_khz; 102 93 103 94 return dce_adjust_dp_ref_freq_for_ss(clk_mgr, dp_ref_clk_khz); 104 95 } ··· 99 108 struct dc_state *context) 100 109 { 101 110 struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg; 102 - 103 - pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context); 104 111 105 112 dce110_fill_display_configs(context, pp_display_cfg); 106 113 ··· 112 123 { 113 124 struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base); 114 125 struct dm_pp_power_level_change_request level_change_req; 115 - int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz; 116 - 117 - /*TODO: W/A for dal3 linux, investigate why this works */ 118 - if (!clk_mgr_dce->dfs_bypass_active) 119 - patched_disp_clk = patched_disp_clk * 115 / 100; 126 + const int max_disp_clk = 127 + 
clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz; 128 + int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz); 120 129 121 130 level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context); 122 131 /* get max clock state from PPLIB */
+14 -1
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 217 217 connectors_num, 218 218 num_virtual_links); 219 219 220 - // condition loop on link_count to allow skipping invalid indices 220 + /* When getting the number of connectors, the VBIOS reports the number of valid indices, 221 + * but it doesn't say which indices are valid, and not every index has an actual connector. 222 + * So, if we don't find a connector on an index, that is not an error. 223 + * 224 + * - There is no guarantee that the first N indices will be valid 225 + * - VBIOS may report a higher amount of valid indices than there are actual connectors 226 + * - Some VBIOS have valid configurations for more connectors than there actually are 227 + * on the card. This may be because the manufacturer used the same VBIOS for different 228 + * variants of the same card. 229 + */ 221 230 for (i = 0; dc->link_count < connectors_num && i < MAX_LINKS; i++) { 231 + struct graphics_object_id connector_id = bios->funcs->get_connector_id(bios, i); 222 232 struct link_init_data link_init_params = {0}; 223 233 struct dc_link *link; 234 + 235 + if (connector_id.id == CONNECTOR_ID_UNKNOWN) 236 + continue; 224 237 225 238 DC_LOG_DC("BIOS object table - printing link object info for connector number: %d, link_index: %d", i, dc->link_count); 226 239
+3 -40
drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
··· 4 4 5 5 #include "dc.h" 6 6 #include "dc_dmub_srv.h" 7 - #include "dc_dp_types.h" 8 7 #include "dmub/dmub_srv.h" 9 8 #include "core_types.h" 10 9 #include "dmub_replay.h" ··· 43 44 /* 44 45 * Enable/Disable Replay. 45 46 */ 46 - static void dmub_replay_enable(struct dmub_replay *dmub, bool enable, bool wait, uint8_t panel_inst, 47 - struct dc_link *link) 47 + static void dmub_replay_enable(struct dmub_replay *dmub, bool enable, bool wait, uint8_t panel_inst) 48 48 { 49 49 union dmub_rb_cmd cmd; 50 50 struct dc_context *dc = dmub->ctx; 51 51 uint32_t retry_count; 52 52 enum replay_state state = REPLAY_STATE_0; 53 - struct pipe_ctx *pipe_ctx = NULL; 54 - struct resource_context *res_ctx = &link->ctx->dc->current_state->res_ctx; 55 - uint8_t i; 56 53 57 54 memset(&cmd, 0, sizeof(cmd)); 58 55 cmd.replay_enable.header.type = DMUB_CMD__REPLAY; 59 56 cmd.replay_enable.data.panel_inst = panel_inst; 60 57 61 58 cmd.replay_enable.header.sub_type = DMUB_CMD__REPLAY_ENABLE; 62 - if (enable) { 59 + if (enable) 63 60 cmd.replay_enable.data.enable = REPLAY_ENABLE; 64 - // hpo stream/link encoder assignments are not static, need to update everytime we try to enable replay 65 - if (link->cur_link_settings.link_rate >= LINK_RATE_UHBR10) { 66 - for (i = 0; i < MAX_PIPES; i++) { 67 - if (res_ctx && 68 - res_ctx->pipe_ctx[i].stream && 69 - res_ctx->pipe_ctx[i].stream->link && 70 - res_ctx->pipe_ctx[i].stream->link == link && 71 - res_ctx->pipe_ctx[i].stream->link->connector_signal == SIGNAL_TYPE_EDP) { 72 - pipe_ctx = &res_ctx->pipe_ctx[i]; 73 - //TODO: refactor for multi edp support 74 - break; 75 - } 76 - } 77 - 78 - if (!pipe_ctx) 79 - return; 80 - 81 - cmd.replay_enable.data.hpo_stream_enc_inst = pipe_ctx->stream_res.hpo_dp_stream_enc->inst; 82 - cmd.replay_enable.data.hpo_link_enc_inst = pipe_ctx->link_res.hpo_dp_link_enc->inst; 83 - } 84 - } else 61 + else 85 62 cmd.replay_enable.data.enable = REPLAY_DISABLE; 86 63 87 64 cmd.replay_enable.header.payload_bytes = sizeof(struct 
dmub_rb_cmd_replay_enable_data); ··· 149 174 copy_settings_data->digbe_inst = replay_context->digbe_inst; 150 175 copy_settings_data->digfe_inst = replay_context->digfe_inst; 151 176 152 - if (link->cur_link_settings.link_rate >= LINK_RATE_UHBR10) { 153 - if (pipe_ctx->stream_res.hpo_dp_stream_enc) 154 - copy_settings_data->hpo_stream_enc_inst = pipe_ctx->stream_res.hpo_dp_stream_enc->inst; 155 - else 156 - copy_settings_data->hpo_stream_enc_inst = 0; 157 - if (pipe_ctx->link_res.hpo_dp_link_enc) 158 - copy_settings_data->hpo_link_enc_inst = pipe_ctx->link_res.hpo_dp_link_enc->inst; 159 - else 160 - copy_settings_data->hpo_link_enc_inst = 0; 161 - } 162 - 163 177 if (pipe_ctx->plane_res.dpp) 164 178 copy_settings_data->dpp_inst = pipe_ctx->plane_res.dpp->inst; 165 179 else ··· 211 247 pCmd->header.type = DMUB_CMD__REPLAY; 212 248 pCmd->header.sub_type = DMUB_CMD__REPLAY_SET_COASTING_VTOTAL; 213 249 pCmd->header.payload_bytes = sizeof(struct dmub_cmd_replay_set_coasting_vtotal_data); 214 - pCmd->replay_set_coasting_vtotal_data.panel_inst = panel_inst; 215 250 pCmd->replay_set_coasting_vtotal_data.coasting_vtotal = (coasting_vtotal & 0xFFFF); 216 251 pCmd->replay_set_coasting_vtotal_data.coasting_vtotal_high = (coasting_vtotal & 0xFFFF0000) >> 16; 217 252
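The coasting-vtotal command at the end of this hunk splits a 32-bit value into two 16-bit payload fields. A self-contained sketch of that split and its inverse (the function and field names here are illustrative, not the DMUB structures):

```c
#include <assert.h>
#include <stdint.h>

/* Split a 32-bit value into low/high 16-bit halves, as the
 * coasting_vtotal / coasting_vtotal_high fields do. */
static void split_u32(uint32_t v, uint16_t *lo, uint16_t *hi)
{
	*lo = v & 0xFFFF;		/* low 16 bits */
	*hi = (v & 0xFFFF0000) >> 16;	/* high 16 bits */
}

/* Reassemble the original 32-bit value from the two halves. */
static uint32_t join_u32(uint16_t lo, uint16_t hi)
{
	return (uint32_t)hi << 16 | lo;
}
```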
+1 -1
drivers/gpu/drm/amd/display/dc/dce/dmub_replay.h
··· 19 19 void (*replay_get_state)(struct dmub_replay *dmub, enum replay_state *state, 20 20 uint8_t panel_inst); 21 21 void (*replay_enable)(struct dmub_replay *dmub, bool enable, bool wait, 22 - uint8_t panel_inst, struct dc_link *link); 22 + uint8_t panel_inst); 23 23 bool (*replay_copy_settings)(struct dmub_replay *dmub, struct dc_link *link, 24 24 struct replay_context *replay_context, uint8_t panel_inst); 25 25 void (*replay_set_power_opt)(struct dmub_replay *dmub, unsigned int power_opt,
-20
drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
··· 4048 4048 */ 4049 4049 uint8_t digbe_inst; 4050 4050 /** 4051 - * @hpo_stream_enc_inst: HPO stream encoder instance 4052 - */ 4053 - uint8_t hpo_stream_enc_inst; 4054 - /** 4055 - * @hpo_link_enc_inst: HPO link encoder instance 4056 - */ 4057 - uint8_t hpo_link_enc_inst; 4058 - /** 4059 4051 * AUX HW instance. 4060 4052 */ 4061 4053 uint8_t aux_inst; ··· 4151 4159 * This does not support HDMI/DP2 for now. 4152 4160 */ 4153 4161 uint8_t phy_rate; 4154 - /** 4155 - * @hpo_stream_enc_inst: HPO stream encoder instance 4156 - */ 4157 - uint8_t hpo_stream_enc_inst; 4158 - /** 4159 - * @hpo_link_enc_inst: HPO link encoder instance 4160 - */ 4161 - uint8_t hpo_link_enc_inst; 4162 - /** 4163 - * @pad: Align structure to 4 byte boundary. 4164 - */ 4165 - uint8_t pad[2]; 4166 4162 }; 4167 4163 4168 4164 /**
+3
drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
··· 260 260 return MOD_HDCP_STATUS_FAILURE; 261 261 } 262 262 263 + if (!display) 264 + return MOD_HDCP_STATUS_DISPLAY_NOT_FOUND; 265 + 263 266 hdcp_cmd = (struct ta_hdcp_shared_memory *)psp->hdcp_context.context.mem_context.shared_buf; 264 267 265 268 mutex_lock(&psp->hdcp_context.mutex);
+25 -5
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 1697 1697 uint32_t *min_power_limit) 1698 1698 { 1699 1699 struct smu_table_context *table_context = &smu->smu_table; 1700 + struct smu_14_0_2_powerplay_table *powerplay_table = 1701 + table_context->power_play_table; 1700 1702 PPTable_t *pptable = table_context->driver_pptable; 1701 1703 CustomSkuTable_t *skutable = &pptable->CustomSkuTable; 1702 - uint32_t power_limit; 1704 + uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0; 1703 1705 uint32_t msg_limit = pptable->SkuTable.MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC]; 1704 1706 1705 1707 if (smu_v14_0_get_current_power_limit(smu, &power_limit)) ··· 1714 1712 if (default_power_limit) 1715 1713 *default_power_limit = power_limit; 1716 1714 1717 - if (max_power_limit) 1718 - *max_power_limit = msg_limit; 1715 + if (powerplay_table) { 1716 + if (smu->od_enabled && 1717 + smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) { 1718 + od_percent_upper = pptable->SkuTable.OverDriveLimitsBasicMax.Ppt; 1719 + od_percent_lower = pptable->SkuTable.OverDriveLimitsBasicMin.Ppt; 1720 + } else if (smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) { 1721 + od_percent_upper = 0; 1722 + od_percent_lower = pptable->SkuTable.OverDriveLimitsBasicMin.Ppt; 1723 + } 1724 + } 1719 1725 1720 - if (min_power_limit) 1721 - *min_power_limit = 0; 1726 + dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n", 1727 + od_percent_upper, od_percent_lower, power_limit); 1728 + 1729 + if (max_power_limit) { 1730 + *max_power_limit = msg_limit * (100 + od_percent_upper); 1731 + *max_power_limit /= 100; 1732 + } 1733 + 1734 + if (min_power_limit) { 1735 + *min_power_limit = power_limit * (100 + od_percent_lower); 1736 + *min_power_limit /= 100; 1737 + } 1722 1738 1723 1739 return 0; 1724 1740 }
+2 -2
drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
··· 1474 1474 1475 1475 dp = devm_drm_bridge_alloc(dev, struct analogix_dp_device, bridge, 1476 1476 &analogix_dp_bridge_funcs); 1477 - if (!dp) 1478 - return ERR_PTR(-ENOMEM); 1477 + if (IS_ERR(dp)) 1478 + return ERR_CAST(dp); 1479 1479 1480 1480 dp->dev = &pdev->dev; 1481 1481 dp->dpms_mode = DRM_MODE_DPMS_OFF;
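This fix relies on the kernel's error-pointer convention: `devm_drm_bridge_alloc()` reports failure as an `ERR_PTR`, never as NULL, so the old `!dp` test could not catch it. A userspace re-creation of the convention, simplified from `include/linux/err.h`:

```c
#include <assert.h>
#include <errno.h>

/*
 * Kernel-style error pointers: a small negative errno is encoded in the
 * pointer value itself.  A failed allocator therefore returns a
 * non-NULL pointer that a plain NULL check (the bug fixed above) would
 * wrongly accept as success.
 */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	/* errors occupy the top MAX_ERRNO values of the address space */
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

`ERR_CAST()` in the patch is just the same pointer reinterpreted for a different return type, preserving the encoded errno.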
+2
drivers/gpu/drm/drm_gpuvm.c
··· 2432 2432 * 2433 2433 * The expected usage is: 2434 2434 * 2435 + * .. code-block:: c 2436 + * 2435 2437 * vm_bind { 2436 2438 * struct drm_exec exec; 2437 2439 *
+21 -1
drivers/gpu/drm/drm_panic_qr.rs
··· 381 381 len: usize, 382 382 } 383 383 384 + // On arm32 architecture, dividing an `u64` by a constant will generate a call 385 + // to `__aeabi_uldivmod` which is not present in the kernel. 386 + // So use the multiply by inverse method for this architecture. 387 + fn div10(val: u64) -> u64 { 388 + if cfg!(target_arch = "arm") { 389 + let val_h = val >> 32; 390 + let val_l = val & 0xFFFFFFFF; 391 + let b_h: u64 = 0x66666666; 392 + let b_l: u64 = 0x66666667; 393 + 394 + let tmp1 = val_h * b_l + ((val_l * b_l) >> 32); 395 + let tmp2 = val_l * b_h + (tmp1 & 0xffffffff); 396 + let tmp3 = val_h * b_h + (tmp1 >> 32) + (tmp2 >> 32); 397 + 398 + tmp3 >> 2 399 + } else { 400 + val / 10 401 + } 402 + } 403 + 384 404 impl DecFifo { 385 405 fn push(&mut self, data: u64, len: usize) { 386 406 let mut chunk = data; ··· 409 389 } 410 390 for i in 0..len { 411 391 self.decimals[i] = (chunk % 10) as u8; 412 - chunk /= 10; 392 + chunk = div10(chunk); 413 393 } 414 394 self.len += len; 415 395 }
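A C rendering of the `div10()` helper added above may make the trick easier to follow. On 32-bit arm a 64-bit division would emit a call to `__aeabi_uldivmod`, which the kernel does not provide, so the division is replaced by multiplication with a scaled reciprocal: `0x6666666666666667` approximates 2^66 / 10, the three `tmp` terms build the high 64 bits of the 128-bit product from 32x32->64 partial products, and the final shift by 2 removes the remaining scaling.

```c
#include <assert.h>
#include <stdint.h>

/* Divide by 10 without a 64-bit divide instruction or libcall. */
static uint64_t div10(uint64_t val)
{
	uint64_t val_h = val >> 32;
	uint64_t val_l = val & 0xFFFFFFFF;
	const uint64_t b_h = 0x66666666;	/* high word of 2^66/10 */
	const uint64_t b_l = 0x66666667;	/* low word of 2^66/10  */

	/* high 64 bits of val * 0x6666666666666667, via partial products */
	uint64_t tmp1 = val_h * b_l + ((val_l * b_l) >> 32);
	uint64_t tmp2 = val_l * b_h + (tmp1 & 0xffffffff);
	uint64_t tmp3 = val_h * b_h + (tmp1 >> 32) + (tmp2 >> 32);

	return tmp3 >> 2;
}
```

For the values exercised below this agrees with a plain `val / 10`.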
+12 -2
drivers/gpu/drm/hisilicon/hibmc/dp/dp_link.c
··· 325 325 return hibmc_dp_link_reduce_rate(dp); 326 326 } 327 327 328 + static void hibmc_dp_update_caps(struct hibmc_dp_dev *dp) 329 + { 330 + dp->link.cap.link_rate = dp->dpcd[DP_MAX_LINK_RATE]; 331 + if (dp->link.cap.link_rate > DP_LINK_BW_8_1 || !dp->link.cap.link_rate) 332 + dp->link.cap.link_rate = DP_LINK_BW_8_1; 333 + 334 + dp->link.cap.lanes = dp->dpcd[DP_MAX_LANE_COUNT] & DP_MAX_LANE_COUNT_MASK; 335 + if (dp->link.cap.lanes > HIBMC_DP_LANE_NUM_MAX) 336 + dp->link.cap.lanes = HIBMC_DP_LANE_NUM_MAX; 337 + } 338 + 328 339 int hibmc_dp_link_training(struct hibmc_dp_dev *dp) 329 340 { 330 341 struct hibmc_dp_link *link = &dp->link; ··· 345 334 if (ret) 346 335 drm_err(dp->dev, "dp aux read dpcd failed, ret: %d\n", ret); 347 336 348 - dp->link.cap.link_rate = dp->dpcd[DP_MAX_LINK_RATE]; 349 - dp->link.cap.lanes = 0x2; 337 + hibmc_dp_update_caps(dp); 350 338 351 339 ret = hibmc_dp_get_serdes_rate_cfg(dp); 352 340 if (ret < 0)
+13 -9
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
··· 32 32 33 33 DEFINE_DRM_GEM_FOPS(hibmc_fops); 34 34 35 - static const char *g_irqs_names_map[HIBMC_MAX_VECTORS] = { "vblank", "hpd" }; 35 + static const char *g_irqs_names_map[HIBMC_MAX_VECTORS] = { "hibmc-vblank", "hibmc-hpd" }; 36 36 37 37 static irqreturn_t hibmc_interrupt(int irq, void *arg) 38 38 { ··· 115 115 static int hibmc_kms_init(struct hibmc_drm_private *priv) 116 116 { 117 117 struct drm_device *dev = &priv->dev; 118 + struct drm_encoder *encoder; 119 + u32 clone_mask = 0; 118 120 int ret; 119 121 120 122 ret = drmm_mode_config_init(dev); ··· 155 153 drm_err(dev, "failed to init vdac: %d\n", ret); 156 154 return ret; 157 155 } 156 + 157 + drm_for_each_encoder(encoder, dev) 158 + clone_mask |= drm_encoder_mask(encoder); 159 + 160 + drm_for_each_encoder(encoder, dev) 161 + encoder->possible_clones = clone_mask; 158 162 159 163 return 0; 160 164 } ··· 285 277 static int hibmc_msi_init(struct drm_device *dev) 286 278 { 287 279 struct pci_dev *pdev = to_pci_dev(dev->dev); 288 - char name[32] = {0}; 289 280 int valid_irq_num; 290 281 int irq; 291 282 int ret; ··· 299 292 valid_irq_num = ret; 300 293 301 294 for (int i = 0; i < valid_irq_num; i++) { 302 - snprintf(name, ARRAY_SIZE(name) - 1, "%s-%s-%s", 303 - dev->driver->name, pci_name(pdev), g_irqs_names_map[i]); 304 - 305 295 irq = pci_irq_vector(pdev, i); 306 296 307 297 if (i) ··· 306 302 ret = devm_request_threaded_irq(&pdev->dev, irq, 307 303 hibmc_dp_interrupt, 308 304 hibmc_dp_hpd_isr, 309 - IRQF_SHARED, name, dev); 305 + IRQF_SHARED, g_irqs_names_map[i], dev); 310 306 else 311 307 ret = devm_request_irq(&pdev->dev, irq, hibmc_interrupt, 312 - IRQF_SHARED, name, dev); 308 + IRQF_SHARED, g_irqs_names_map[i], dev); 313 309 if (ret) { 314 310 drm_err(dev, "install irq failed: %d\n", ret); 315 311 return ret; ··· 327 323 328 324 ret = hibmc_hw_init(priv); 329 325 if (ret) 330 - goto err; 326 + return ret; 331 327 332 328 ret = drmm_vram_helper_init(dev, pci_resource_start(pdev, 0), 333 329 
pci_resource_len(pdev, 0)); 334 330 if (ret) { 335 331 drm_err(dev, "Error initializing VRAM MM; %d\n", ret); 336 - goto err; 332 + return ret; 337 333 } 338 334 339 335 ret = hibmc_kms_init(priv);
+1
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
··· 69 69 int hibmc_vdac_init(struct hibmc_drm_private *priv); 70 70 71 71 int hibmc_ddc_create(struct drm_device *drm_dev, struct hibmc_vdac *connector); 72 + void hibmc_ddc_del(struct hibmc_vdac *vdac); 72 73 73 74 int hibmc_dp_init(struct hibmc_drm_private *priv); 74 75
+5
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
··· 95 95 96 96 return i2c_bit_add_bus(&vdac->adapter); 97 97 } 98 + 99 + void hibmc_ddc_del(struct hibmc_vdac *vdac) 100 + { 101 + i2c_del_adapter(&vdac->adapter); 102 + }
+8 -3
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
··· 53 53 { 54 54 struct hibmc_vdac *vdac = to_hibmc_vdac(connector); 55 55 56 - i2c_del_adapter(&vdac->adapter); 56 + hibmc_ddc_del(vdac); 57 57 drm_connector_cleanup(connector); 58 58 } 59 59 ··· 110 110 ret = drmm_encoder_init(dev, encoder, NULL, DRM_MODE_ENCODER_DAC, NULL); 111 111 if (ret) { 112 112 drm_err(dev, "failed to init encoder: %d\n", ret); 113 - return ret; 113 + goto err; 114 114 } 115 115 116 116 drm_encoder_helper_add(encoder, &hibmc_encoder_helper_funcs); ··· 121 121 &vdac->adapter); 122 122 if (ret) { 123 123 drm_err(dev, "failed to init connector: %d\n", ret); 124 - return ret; 124 + goto err; 125 125 } 126 126 127 127 drm_connector_helper_add(connector, &hibmc_connector_helper_funcs); ··· 131 131 connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; 132 132 133 133 return 0; 134 + 135 + err: 136 + hibmc_ddc_del(vdac); 137 + 138 + return ret; 134 139 }
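The vdac change converts early returns into a single unwind path so the DDC adapter created first is released on every later failure. The shape of that idiom, with stand-in functions rather than the hibmc ones:

```c
#include <assert.h>

static int failing_step;	/* test knob: which step fails (0 = none) */
static int cleanups_run;	/* counts how often cleanup_a() ran */

static int init_a(void)		{ return failing_step == 1 ? -1 : 0; }
static int init_b(void)		{ return failing_step == 2 ? -1 : 0; }
static void cleanup_a(void)	{ cleanups_run++; }

/*
 * Kernel-style error ladder: before any resource is held, failures can
 * return directly; once init_a() has succeeded, every later failure
 * must go through the unwind label so cleanup_a() runs exactly once.
 */
static int setup(void)
{
	int ret = init_a();
	if (ret)
		return ret;	/* nothing acquired yet: plain return */

	ret = init_b();
	if (ret)
		goto err;	/* must undo init_a() */

	return 0;

err:
	cleanup_a();
	return ret;
}
```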
+4
drivers/gpu/drm/i915/display/intel_display_irq.c
··· 1506 1506 if (!(master_ctl & GEN11_GU_MISC_IRQ)) 1507 1507 return 0; 1508 1508 1509 + intel_display_rpm_assert_block(display); 1510 + 1509 1511 iir = intel_de_read(display, GEN11_GU_MISC_IIR); 1510 1512 if (likely(iir)) 1511 1513 intel_de_write(display, GEN11_GU_MISC_IIR, iir); 1514 + 1515 + intel_display_rpm_assert_unblock(display); 1512 1516 1513 1517 return iir; 1514 1518 }
+75 -18
drivers/gpu/drm/i915/display/intel_tc.c
··· 23 23 #include "intel_modeset_lock.h" 24 24 #include "intel_tc.h" 25 25 26 + #define DP_PIN_ASSIGNMENT_NONE 0x0 26 27 #define DP_PIN_ASSIGNMENT_C 0x3 27 28 #define DP_PIN_ASSIGNMENT_D 0x4 28 29 #define DP_PIN_ASSIGNMENT_E 0x5 ··· 67 66 enum tc_port_mode init_mode; 68 67 enum phy_fia phy_fia; 69 68 u8 phy_fia_idx; 69 + u8 max_lane_count; 70 70 }; 71 71 72 72 static enum intel_display_power_domain ··· 309 307 REG_FIELD_GET(TCSS_DDI_STATUS_PIN_ASSIGNMENT_MASK, val); 310 308 311 309 switch (pin_assignment) { 310 + case DP_PIN_ASSIGNMENT_NONE: 311 + return 0; 312 312 default: 313 313 MISSING_CASE(pin_assignment); 314 314 fallthrough; ··· 369 365 } 370 366 } 371 367 372 - int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port) 368 + static int get_max_lane_count(struct intel_tc_port *tc) 373 369 { 374 - struct intel_display *display = to_intel_display(dig_port); 375 - struct intel_tc_port *tc = to_tc_port(dig_port); 370 + struct intel_display *display = to_intel_display(tc->dig_port); 371 + struct intel_digital_port *dig_port = tc->dig_port; 376 372 377 - if (!intel_encoder_is_tc(&dig_port->base) || tc->mode != TC_PORT_DP_ALT) 373 + if (tc->mode != TC_PORT_DP_ALT) 378 374 return 4; 379 375 380 376 assert_tc_cold_blocked(tc); ··· 386 382 return mtl_tc_port_get_max_lane_count(dig_port); 387 383 388 384 return intel_tc_port_get_max_lane_count(dig_port); 385 + } 386 + 387 + static void read_pin_configuration(struct intel_tc_port *tc) 388 + { 389 + tc->max_lane_count = get_max_lane_count(tc); 390 + } 391 + 392 + int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port) 393 + { 394 + struct intel_display *display = to_intel_display(dig_port); 395 + struct intel_tc_port *tc = to_tc_port(dig_port); 396 + 397 + if (!intel_encoder_is_tc(&dig_port->base)) 398 + return 4; 399 + 400 + if (DISPLAY_VER(display) < 20) 401 + return get_max_lane_count(tc); 402 + 403 + return tc->max_lane_count; 389 404 } 390 405 391 406 void 
intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port, ··· 619 596 tc_cold_wref = __tc_cold_block(tc, &domain); 620 597 621 598 tc->mode = tc_phy_get_current_mode(tc); 622 - if (tc->mode != TC_PORT_DISCONNECTED) 599 + if (tc->mode != TC_PORT_DISCONNECTED) { 623 600 tc->lock_wakeref = tc_cold_block(tc); 601 + 602 + read_pin_configuration(tc); 603 + } 624 604 625 605 __tc_cold_unblock(tc, domain, tc_cold_wref); 626 606 } ··· 682 656 683 657 tc->lock_wakeref = tc_cold_block(tc); 684 658 685 - if (tc->mode == TC_PORT_TBT_ALT) 659 + if (tc->mode == TC_PORT_TBT_ALT) { 660 + read_pin_configuration(tc); 661 + 686 662 return true; 663 + } 687 664 688 665 if ((!tc_phy_is_ready(tc) || 689 666 !icl_tc_phy_take_ownership(tc, true)) && ··· 697 668 goto out_unblock_tc_cold; 698 669 } 699 670 671 + read_pin_configuration(tc); 700 672 701 673 if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes)) 702 674 goto out_release_phy; ··· 888 858 port_wakeref = intel_display_power_get(display, port_power_domain); 889 859 890 860 tc->mode = tc_phy_get_current_mode(tc); 891 - if (tc->mode != TC_PORT_DISCONNECTED) 861 + if (tc->mode != TC_PORT_DISCONNECTED) { 892 862 tc->lock_wakeref = tc_cold_block(tc); 863 + 864 + read_pin_configuration(tc); 865 + } 893 866 894 867 intel_display_power_put(display, port_power_domain, port_wakeref); 895 868 } ··· 906 873 907 874 if (tc->mode == TC_PORT_TBT_ALT) { 908 875 tc->lock_wakeref = tc_cold_block(tc); 876 + 877 + read_pin_configuration(tc); 878 + 909 879 return true; 910 880 } 911 881 ··· 929 893 } 930 894 931 895 tc->lock_wakeref = tc_cold_block(tc); 896 + 897 + read_pin_configuration(tc); 932 898 933 899 if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes)) 934 900 goto out_unblock_tc_cold; ··· 1162 1124 tc_cold_wref = __tc_cold_block(tc, &domain); 1163 1125 1164 1126 tc->mode = tc_phy_get_current_mode(tc); 1165 - if (tc->mode != TC_PORT_DISCONNECTED) 1127 + if (tc->mode != TC_PORT_DISCONNECTED) { 1166 1128 
tc->lock_wakeref = tc_cold_block(tc); 1129 + 1130 + read_pin_configuration(tc); 1131 + /* 1132 + * Set a valid lane count value for a DP-alt sink which got 1133 + * disconnected. The driver can only disable the output on this PHY. 1134 + */ 1135 + if (tc->max_lane_count == 0) 1136 + tc->max_lane_count = 4; 1137 + } 1167 1138 1168 1139 drm_WARN_ON(display->drm, 1169 1140 (tc->mode == TC_PORT_DP_ALT || tc->mode == TC_PORT_LEGACY) && ··· 1185 1138 { 1186 1139 tc->lock_wakeref = tc_cold_block(tc); 1187 1140 1188 - if (tc->mode == TC_PORT_TBT_ALT) 1141 + if (tc->mode == TC_PORT_TBT_ALT) { 1142 + read_pin_configuration(tc); 1143 + 1189 1144 return true; 1145 + } 1190 1146 1191 1147 if (!xelpdp_tc_phy_enable_tcss_power(tc, true)) 1192 1148 goto out_unblock_tccold; 1193 1149 1194 1150 xelpdp_tc_phy_take_ownership(tc, true); 1151 + 1152 + read_pin_configuration(tc); 1195 1153 1196 1154 if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes)) 1197 1155 goto out_release_phy; ··· 1278 1226 tc->phy_ops->get_hw_state(tc); 1279 1227 } 1280 1228 1281 - static bool tc_phy_is_ready_and_owned(struct intel_tc_port *tc, 1282 - bool phy_is_ready, bool phy_is_owned) 1229 + /* Is the PHY owned by display i.e. is it in legacy or DP-alt mode? 
*/ 1230 + static bool tc_phy_owned_by_display(struct intel_tc_port *tc, 1231 + bool phy_is_ready, bool phy_is_owned) 1283 1232 { 1284 1233 struct intel_display *display = to_intel_display(tc->dig_port); 1285 1234 1286 - drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready); 1235 + if (DISPLAY_VER(display) < 20) { 1236 + drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready); 1287 1237 1288 - return phy_is_ready && phy_is_owned; 1238 + return phy_is_ready && phy_is_owned; 1239 + } else { 1240 + return phy_is_owned; 1241 + } 1289 1242 } 1290 1243 1291 1244 static bool tc_phy_is_connected(struct intel_tc_port *tc, ··· 1301 1244 bool phy_is_owned = tc_phy_is_owned(tc); 1302 1245 bool is_connected; 1303 1246 1304 - if (tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned)) 1247 + if (tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned)) 1305 1248 is_connected = port_pll_type == ICL_PORT_DPLL_MG_PHY; 1306 1249 else 1307 1250 is_connected = port_pll_type == ICL_PORT_DPLL_DEFAULT; ··· 1409 1352 phy_is_ready = tc_phy_is_ready(tc); 1410 1353 phy_is_owned = tc_phy_is_owned(tc); 1411 1354 1412 - if (!tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned)) { 1355 + if (!tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned)) { 1413 1356 mode = get_tc_mode_in_phy_not_owned_state(tc, live_mode); 1414 1357 } else { 1415 1358 drm_WARN_ON(display->drm, live_mode == TC_PORT_TBT_ALT); ··· 1498 1441 intel_display_power_flush_work(display); 1499 1442 if (!intel_tc_cold_requires_aux_pw(dig_port)) { 1500 1443 enum intel_display_power_domain aux_domain; 1501 - bool aux_powered; 1502 1444 1503 1445 aux_domain = intel_aux_power_domain(dig_port); 1504 - aux_powered = intel_display_power_is_enabled(display, aux_domain); 1505 - drm_WARN_ON(display->drm, aux_powered); 1446 + if (intel_display_power_is_enabled(display, aux_domain)) 1447 + drm_dbg_kms(display->drm, "Port %s: AUX unexpectedly powered\n", 1448 + tc->port_name); 1506 1449 } 1507 1450 1508 1451 tc_phy_disconnect(tc);
+11 -9
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 634 634 static void icl_ctx_workarounds_init(struct intel_engine_cs *engine, 635 635 struct i915_wa_list *wal) 636 636 { 637 + struct drm_i915_private *i915 = engine->i915; 638 + 637 639 /* Wa_1406697149 (WaDisableBankHangMode:icl) */ 638 640 wa_write(wal, GEN8_L3CNTLREG, GEN8_ERRDETBCTRL); 639 641 ··· 671 669 672 670 /* Wa_1406306137:icl,ehl */ 673 671 wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN4, GEN11_DIS_PICK_2ND_EU); 672 + 673 + if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) { 674 + /* 675 + * Disable Repacking for Compression (masked R/W access) 676 + * before rendering compressed surfaces for display. 677 + */ 678 + wa_masked_en(wal, CACHE_MODE_0_GEN7, 679 + DISABLE_REPACKING_FOR_COMPRESSION); 680 + } 674 681 } 675 682 676 683 /* ··· 2315 2304 RING_PSMI_CTL(RENDER_RING_BASE), 2316 2305 GEN12_WAIT_FOR_EVENT_POWER_DOWN_DISABLE | 2317 2306 GEN8_RC_SEMA_IDLE_MSG_DISABLE); 2318 - } 2319 - 2320 - if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) { 2321 - /* 2322 - * "Disable Repacking for Compression (masked R/W access) 2323 - * before rendering compressed surfaces for display." 2324 - */ 2325 - wa_masked_en(wal, CACHE_MODE_0_GEN7, 2326 - DISABLE_REPACKING_FOR_COMPRESSION); 2327 2307 } 2328 2308 2329 2309 if (GRAPHICS_VER(i915) == 11) {
+3 -3
drivers/gpu/drm/nouveau/nouveau_exec.c
··· 60 60 * virtual address in the GPU's VA space there is no guarantee that the actual 61 61 * mappings are created in the GPU's MMU. If the given memory is swapped out 62 62 * at the time the bind operation is executed the kernel will stash the mapping 63 - * details into it's internal alloctor and create the actual MMU mappings once 63 + * details into it's internal allocator and create the actual MMU mappings once 64 64 * the memory is swapped back in. While this is transparent for userspace, it is 65 65 * guaranteed that all the backing memory is swapped back in and all the memory 66 66 * mappings, as requested by userspace previously, are actually mapped once the 67 67 * DRM_NOUVEAU_EXEC ioctl is called to submit an exec job. 68 68 * 69 69 * A VM_BIND job can be executed either synchronously or asynchronously. If 70 - * exectued asynchronously, userspace may provide a list of syncobjs this job 70 + * executed asynchronously, userspace may provide a list of syncobjs this job 71 71 * will wait for and/or a list of syncobj the kernel will signal once the 72 72 * VM_BIND job finished execution. If executed synchronously the ioctl will 73 73 * block until the bind job is finished. For synchronous jobs the kernel will ··· 82 82 * Since VM_BIND jobs update the GPU's VA space on job submit, EXEC jobs do have 83 83 * an up to date view of the VA space. However, the actual mappings might still 84 84 * be pending. Hence, EXEC jobs require to have the particular fences - of 85 - * the corresponding VM_BIND jobs they depent on - attached to them. 85 + * the corresponding VM_BIND jobs they depend on - attached to them. 86 86 */ 87 87 88 88 static int
+2 -1
drivers/gpu/drm/nouveau/nvif/vmm.c
··· 219 219 case RAW: args->type = NVIF_VMM_V0_TYPE_RAW; break; 220 220 default: 221 221 WARN_ON(1); 222 - return -EINVAL; 222 + ret = -EINVAL; 223 + goto done; 223 224 } 224 225 225 226 memcpy(args->data, argv, argc);
+2 -2
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
··· 325 325 326 326 rpc = r535_gsp_msgq_peek(gsp, sizeof(*rpc), info.retries); 327 327 if (IS_ERR_OR_NULL(rpc)) { 328 - kfree(buf); 328 + kvfree(buf); 329 329 return rpc; 330 330 } 331 331 ··· 334 334 335 335 rpc = r535_gsp_msgq_recv_one_elem(gsp, &info); 336 336 if (IS_ERR_OR_NULL(rpc)) { 337 - kfree(buf); 337 + kvfree(buf); 338 338 return rpc; 339 339 } 340 340
+2 -1
drivers/gpu/drm/nova/file.rs
··· 39 39 _ => return Err(EINVAL), 40 40 }; 41 41 42 - getparam.set_value(value); 42 + #[allow(clippy::useless_conversion)] 43 + getparam.set_value(value.into()); 43 44 44 45 Ok(0) 45 46 }
+1
drivers/gpu/drm/rockchip/Kconfig
··· 53 53 bool "Rockchip cdn DP" 54 54 depends on EXTCON=y || (EXTCON=m && DRM_ROCKCHIP=m) 55 55 select DRM_DISPLAY_HELPER 56 + select DRM_BRIDGE_CONNECTOR 56 57 select DRM_DISPLAY_DP_HELPER 57 58 help 58 59 This selects support for Rockchip SoC specific extensions
+5 -4
drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
··· 2579 2579 } 2580 2580 2581 2581 /* 2582 - * The window registers are only updated when config done is written. 2583 - * Until that they read back the old value. As we read-modify-write 2584 - * these registers mark them as non-volatile. This makes sure we read 2585 - * the new values from the regmap register cache. 2582 + * The window and video port registers are only updated when config 2583 + * done is written. Until that they read back the old value. As we 2584 + * read-modify-write these registers mark them as non-volatile. This 2585 + * makes sure we read the new values from the regmap register cache. 2586 2586 */ 2587 2587 static const struct regmap_range vop2_nonvolatile_range[] = { 2588 + regmap_reg_range(RK3568_VP0_CTRL_BASE, RK3588_VP3_CTRL_BASE + 255), 2588 2589 regmap_reg_range(0x1000, 0x23ff), 2589 2590 }; 2590 2591
+2 -1
drivers/gpu/drm/tests/drm_format_helper_test.c
··· 1033 1033 NULL : &result->dst_pitch; 1034 1034 1035 1035 drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); 1036 - buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32)); 1036 + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); 1037 1037 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 1038 1038 1039 1039 buf = dst.vaddr; /* restore original value of buf */ 1040 1040 memset(buf, 0, dst_size); 1041 1041 1042 1042 drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); 1043 + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); 1043 1044 KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); 1044 1045 } 1045 1046
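`le32buf_to_cpu()` here converts a buffer of little-endian 32-bit words to host order before comparison; the conversion added after the second blit keeps both `KUNIT_EXPECT_MEMEQ()` checks comparing like with like. Conceptually such a conversion assembles each word from bytes, making it a no-op on little-endian hosts and a byte swap on big-endian ones. A hedged sketch, not the KUnit helper itself:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Reinterpret a byte buffer of little-endian 32-bit words as host-order
 * values.  Assembling from bytes (rather than casting the buffer) gives
 * the same result regardless of the host's own endianness.
 */
static void le32buf_to_host(const uint8_t *le, size_t nwords, uint32_t *out)
{
	for (size_t w = 0; w < nwords; w++)
		out[w] = (uint32_t)le[4 * w]
		       | (uint32_t)le[4 * w + 1] << 8
		       | (uint32_t)le[4 * w + 2] << 16
		       | (uint32_t)le[4 * w + 3] << 24;
}
```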
+1 -1
drivers/gpu/drm/xe/xe_migrate.c
··· 408 408 409 409 /* Special layout, prepared below.. */ 410 410 vm = xe_vm_create(xe, XE_VM_FLAG_MIGRATION | 411 - XE_VM_FLAG_SET_TILE_ID(tile)); 411 + XE_VM_FLAG_SET_TILE_ID(tile), NULL); 412 412 if (IS_ERR(vm)) 413 413 return ERR_CAST(vm); 414 414
+1 -1
drivers/gpu/drm/xe/xe_pxp_submit.c
··· 101 101 xe_assert(xe, hwe); 102 102 103 103 /* PXP instructions must be issued from PPGTT */ 104 - vm = xe_vm_create(xe, XE_VM_FLAG_GSC); 104 + vm = xe_vm_create(xe, XE_VM_FLAG_GSC, NULL); 105 105 if (IS_ERR(vm)) 106 106 return PTR_ERR(vm); 107 107
+23 -25
drivers/gpu/drm/xe/xe_vm.c
··· 1640 1640 } 1641 1641 } 1642 1642 1643 - struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags) 1643 + struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef) 1644 1644 { 1645 1645 struct drm_gem_object *vm_resv_obj; 1646 1646 struct xe_vm *vm; ··· 1661 1661 vm->xe = xe; 1662 1662 1663 1663 vm->size = 1ull << xe->info.va_bits; 1664 - 1665 1664 vm->flags = flags; 1666 1665 1666 + if (xef) 1667 + vm->xef = xe_file_get(xef); 1667 1668 /** 1668 1669 * GSC VMs are kernel-owned, only used for PXP ops and can sometimes be 1669 1670 * manipulated under the PXP mutex. However, the PXP mutex can be taken ··· 1795 1794 if (number_tiles > 1) 1796 1795 vm->composite_fence_ctx = dma_fence_context_alloc(1); 1797 1796 1797 + if (xef && xe->info.has_asid) { 1798 + u32 asid; 1799 + 1800 + down_write(&xe->usm.lock); 1801 + err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm, 1802 + XA_LIMIT(1, XE_MAX_ASID - 1), 1803 + &xe->usm.next_asid, GFP_KERNEL); 1804 + up_write(&xe->usm.lock); 1805 + if (err < 0) 1806 + goto err_unlock_close; 1807 + 1808 + vm->usm.asid = asid; 1809 + } 1810 + 1798 1811 trace_xe_vm_create(vm); 1799 1812 1800 1813 return vm; ··· 1829 1814 for_each_tile(tile, xe, id) 1830 1815 xe_range_fence_tree_fini(&vm->rftree[id]); 1831 1816 ttm_lru_bulk_move_fini(&xe->ttm, &vm->lru_bulk_move); 1817 + if (vm->xef) 1818 + xe_file_put(vm->xef); 1832 1819 kfree(vm); 1833 1820 if (flags & XE_VM_FLAG_LR_MODE) 1834 1821 xe_pm_runtime_put(xe); ··· 2076 2059 struct xe_device *xe = to_xe_device(dev); 2077 2060 struct xe_file *xef = to_xe_file(file); 2078 2061 struct drm_xe_vm_create *args = data; 2079 - struct xe_tile *tile; 2080 2062 struct xe_vm *vm; 2081 - u32 id, asid; 2063 + u32 id; 2082 2064 int err; 2083 2065 u32 flags = 0; 2084 2066 ··· 2113 2097 if (args->flags & DRM_XE_VM_CREATE_FLAG_FAULT_MODE) 2114 2098 flags |= XE_VM_FLAG_FAULT_MODE; 2115 2099 2116 - vm = xe_vm_create(xe, flags); 2100 + vm = xe_vm_create(xe, flags, xef); 2117 2101 if 
(IS_ERR(vm)) 2118 2102 return PTR_ERR(vm); 2119 - 2120 - if (xe->info.has_asid) { 2121 - down_write(&xe->usm.lock); 2122 - err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm, 2123 - XA_LIMIT(1, XE_MAX_ASID - 1), 2124 - &xe->usm.next_asid, GFP_KERNEL); 2125 - up_write(&xe->usm.lock); 2126 - if (err < 0) 2127 - goto err_close_and_put; 2128 - 2129 - vm->usm.asid = asid; 2130 - } 2131 - 2132 - vm->xef = xe_file_get(xef); 2133 - 2134 - /* Record BO memory for VM pagetable created against client */ 2135 - for_each_tile(tile, xe, id) 2136 - if (vm->pt_root[id]) 2137 - xe_drm_client_add_bo(vm->xef->client, vm->pt_root[id]->bo); 2138 2103 2139 2104 #if IS_ENABLED(CONFIG_DRM_XE_DEBUG_MEM) 2140 2105 /* Warning: Security issue - never enable by default */ ··· 3418 3421 free_bind_ops: 3419 3422 if (args->num_binds > 1) 3420 3423 kvfree(*bind_ops); 3424 + *bind_ops = NULL; 3421 3425 return err; 3422 3426 } 3423 3427 ··· 3525 3527 struct xe_exec_queue *q = NULL; 3526 3528 u32 num_syncs, num_ufence = 0; 3527 3529 struct xe_sync_entry *syncs = NULL; 3528 - struct drm_xe_vm_bind_op *bind_ops; 3530 + struct drm_xe_vm_bind_op *bind_ops = NULL; 3529 3531 struct xe_vma_ops vops; 3530 3532 struct dma_fence *fence; 3531 3533 int err;
+1 -1
drivers/gpu/drm/xe/xe_vm.h
··· 26 26 struct xe_svm_range; 27 27 struct drm_exec; 28 28 29 - struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags); 29 + struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef); 30 30 31 31 struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id); 32 32 int xe_vma_cmp_vma_cb(const void *key, const struct rb_node *node);
+12 -8
drivers/i2c/busses/i2c-rtl9300.c
··· 143 143 return -EIO; 144 144 145 145 for (i = 0; i < len; i++) { 146 - if (i % 4 == 0) 147 - vals[i/4] = 0; 148 - vals[i/4] <<= 8; 149 - vals[i/4] |= buf[i]; 146 + unsigned int shift = (i % 4) * 8; 147 + unsigned int reg = i / 4; 148 + 149 + vals[reg] |= buf[i] << shift; 150 150 } 151 151 152 152 return regmap_bulk_write(i2c->regmap, i2c->reg_base + RTL9300_I2C_MST_DATA_WORD0, ··· 175 175 return ret; 176 176 177 177 ret = regmap_read_poll_timeout(i2c->regmap, i2c->reg_base + RTL9300_I2C_MST_CTRL1, 178 - val, !(val & RTL9300_I2C_MST_CTRL1_I2C_TRIG), 100, 2000); 178 + val, !(val & RTL9300_I2C_MST_CTRL1_I2C_TRIG), 100, 100000); 179 179 if (ret) 180 180 return ret; 181 181 ··· 281 281 ret = rtl9300_i2c_reg_addr_set(i2c, command, 1); 282 282 if (ret) 283 283 goto out_unlock; 284 - ret = rtl9300_i2c_config_xfer(i2c, chan, addr, data->block[0]); 284 + if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX) { 285 + ret = -EINVAL; 286 + goto out_unlock; 287 + } 288 + ret = rtl9300_i2c_config_xfer(i2c, chan, addr, data->block[0] + 1); 285 289 if (ret) 286 290 goto out_unlock; 287 291 if (read_write == I2C_SMBUS_WRITE) { 288 - ret = rtl9300_i2c_write(i2c, &data->block[1], data->block[0]); 292 + ret = rtl9300_i2c_write(i2c, &data->block[0], data->block[0] + 1); 289 293 if (ret) 290 294 goto out_unlock; 291 295 } 292 - len = data->block[0]; 296 + len = data->block[0] + 1; 293 297 break; 294 298 295 299 default:
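The rewritten packing loop places byte `i` of the message in word `i/4` at bit offset `(i % 4) * 8`, so the first byte occupies a word's least significant byte (the previous code accumulated bytes MSB-first). A standalone version of that packing (`pack_bytes` is an illustrative name, not the driver function):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Pack a byte stream into 32-bit register words, LSB-first within each
 * word: byte 0 lands in bits 7:0 of word 0, byte 1 in bits 15:8, and
 * so on.  Partial trailing words keep their unused high bytes zero.
 */
static void pack_bytes(const uint8_t *buf, size_t len, uint32_t *vals)
{
	memset(vals, 0, ((len + 3) / 4) * sizeof(*vals));

	for (size_t i = 0; i < len; i++)
		vals[i / 4] |= (uint32_t)buf[i] << ((i % 4) * 8);
}
```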
+1 -1
drivers/iio/accel/sca3300.c
··· 477 477 struct iio_dev *indio_dev = pf->indio_dev;
478 478 struct sca3300_data *data = iio_priv(indio_dev);
479 479 int bit, ret, val, i = 0;
480 - IIO_DECLARE_BUFFER_WITH_TS(s16, channels, SCA3300_SCAN_MAX);
480 + IIO_DECLARE_BUFFER_WITH_TS(s16, channels, SCA3300_SCAN_MAX) = { };
481 481 
482 482 iio_for_each_active_channel(indio_dev, bit) {
483 483 ret = sca3300_read_reg(data, indio_dev->channels[bit].address, &val);
+1 -1
drivers/iio/adc/Kconfig
··· 1300 1300 
1301 1301 config ROHM_BD79124
1302 1302 tristate "Rohm BD79124 ADC driver"
1303 - depends on I2C
1303 + depends on I2C && GPIOLIB
1304 1304 select REGMAP_I2C
1305 1305 select IIO_ADC_HELPER
1306 1306 help
+7 -7
drivers/iio/adc/ad7124.c
··· 849 849 static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan_spec *chan)
850 850 {
851 851 struct device *dev = &st->sd.spi->dev;
852 - struct ad7124_channel *ch = &st->channels[chan->channel];
852 + struct ad7124_channel *ch = &st->channels[chan->address];
853 853 int ret;
854 854 
855 855 if (ch->syscalib_mode == AD7124_SYSCALIB_ZERO_SCALE) {
··· 865 865 if (ret < 0)
866 866 return ret;
867 867 
868 - dev_dbg(dev, "offset for channel %d after zero-scale calibration: 0x%x\n",
869 - chan->channel, ch->cfg.calibration_offset);
868 + dev_dbg(dev, "offset for channel %lu after zero-scale calibration: 0x%x\n",
869 + chan->address, ch->cfg.calibration_offset);
870 870 } else {
871 871 ch->cfg.calibration_gain = st->gain_default;
872 872 
··· 880 880 if (ret < 0)
881 881 return ret;
882 882 
883 - dev_dbg(dev, "gain for channel %d after full-scale calibration: 0x%x\n",
884 - chan->channel, ch->cfg.calibration_gain);
883 + dev_dbg(dev, "gain for channel %lu after full-scale calibration: 0x%x\n",
884 + chan->address, ch->cfg.calibration_gain);
885 885 }
886 886 
887 887 return 0;
··· 924 924 {
925 925 struct ad7124_state *st = iio_priv(indio_dev);
926 926 
927 - st->channels[chan->channel].syscalib_mode = mode;
927 + st->channels[chan->address].syscalib_mode = mode;
928 928 
929 929 return 0;
930 930 }
··· 934 934 {
935 935 struct ad7124_state *st = iio_priv(indio_dev);
936 936 
937 - return st->channels[chan->channel].syscalib_mode;
937 + return st->channels[chan->address].syscalib_mode;
938 938 }
939 939 
940 940 static const struct iio_enum ad7124_syscalib_mode_enum = {
+75 -12
drivers/iio/adc/ad7173.c
··· 200 200 /*
201 201 * Following fields are used to compare equality. If you
202 202 * make adaptations in it, you most likely also have to adapt
203 - * ad7173_find_live_config(), too.
203 + * ad7173_is_setup_equal(), too.
204 204 */
205 205 struct_group(config_props,
206 206 bool bipolar;
··· 561 561 st->config_usage_counter = 0;
562 562 }
563 563 
564 - static struct ad7173_channel_config *
565 - ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg)
564 + /**
565 + * ad7173_is_setup_equal - Compare two channel setups
566 + * @cfg1: First channel configuration
567 + * @cfg2: Second channel configuration
568 + *
569 + * Compares all configuration options that affect the registers connected to
570 + * SETUP_SEL, namely CONFIGx, FILTERx, GAINx and OFFSETx.
571 + *
572 + * Returns: true if the setups are identical, false otherwise
573 + */
574 + static bool ad7173_is_setup_equal(const struct ad7173_channel_config *cfg1,
575 + const struct ad7173_channel_config *cfg2)
566 576 {
567 - struct ad7173_channel_config *cfg_aux;
568 - int i;
569 -
570 577 /*
571 578 * This is just to make sure that the comparison is adapted after
572 579 * struct ad7173_channel_config was changed.
··· 586 579 u8 ref_sel;
587 580 }));
588 581 
582 + return cfg1->bipolar == cfg2->bipolar &&
583 + cfg1->input_buf == cfg2->input_buf &&
584 + cfg1->odr == cfg2->odr &&
585 + cfg1->ref_sel == cfg2->ref_sel;
586 + }
587 + 
588 + static struct ad7173_channel_config *
589 + ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg)
590 + {
591 + struct ad7173_channel_config *cfg_aux;
592 + int i;
593 +
589 594 for (i = 0; i < st->num_channels; i++) {
590 595 cfg_aux = &st->channels[i].cfg;
591 596 
592 - if (cfg_aux->live &&
593 - cfg->bipolar == cfg_aux->bipolar &&
594 - cfg->input_buf == cfg_aux->input_buf &&
595 - cfg->odr == cfg_aux->odr &&
596 - cfg->ref_sel == cfg_aux->ref_sel)
597 + if (cfg_aux->live && ad7173_is_setup_equal(cfg, cfg_aux))
597 598 return cfg_aux;
598 599 }
599 600 return NULL;
··· 1243 1228 const unsigned long *scan_mask)
1244 1229 {
1245 1230 struct ad7173_state *st = iio_priv(indio_dev);
1246 - int i, ret;
1231 + int i, j, k, ret;
1247 1232 
1248 1233 for (i = 0; i < indio_dev->num_channels; i++) {
1249 1234 if (test_bit(i, scan_mask))
··· 1252 1237 ret = ad_sd_write_reg(&st->sd, AD7173_REG_CH(i), 2, 0);
1253 1238 if (ret < 0)
1254 1239 return ret;
1240 + }
1241 +
1242 + /*
1243 + * On some chips, there are more channels than setups, so if there were
1244 + * more unique setups requested than the number of available slots,
1245 + * ad7173_set_channel() will have written over some of the slots. We
1246 + * can detect this by making sure each assigned cfg_slot matches the
1247 + * requested configuration. If it doesn't, we know that the slot was
1248 + * overwritten by a different channel.
1249 + */
1250 + for_each_set_bit(i, scan_mask, indio_dev->num_channels) {
1251 + const struct ad7173_channel_config *cfg1, *cfg2;
1252 +
1253 + cfg1 = &st->channels[i].cfg;
1254 +
1255 + for_each_set_bit(j, scan_mask, indio_dev->num_channels) {
1256 + cfg2 = &st->channels[j].cfg;
1257 +
1258 + /*
1259 + * Only compare configs that are assigned to the same
1260 + * SETUP_SEL slot and don't compare channel to itself.
1261 + */
1262 + if (i == j || cfg1->cfg_slot != cfg2->cfg_slot)
1263 + continue;
1264 +
1265 + /*
1266 + * If we find two different configs trying to use the
1267 + * same SETUP_SEL slot, then we know that we
1268 + * have too many unique configurations requested for
1269 + * the available slots and at least one was overwritten.
1270 + */
1271 + if (!ad7173_is_setup_equal(cfg1, cfg2)) {
1272 + /*
1273 + * At this point, there isn't a way to tell
1274 + * which setups are actually programmed in the
1275 + * ADC anymore, so we could read them back to
1276 + * see, but it is simpler to just turn off all
1277 + * of the live flags so that everything gets
1278 + * reprogrammed on the next attempt to read a sample.
1279 + */
1280 + for (k = 0; k < st->num_channels; k++)
1281 + st->channels[k].cfg.live = false;
1282 +
1283 + dev_err(&st->sd.spi->dev,
1284 + "Too many unique channel configurations requested for scan\n");
1285 + return -EINVAL;
1286 + }
1287 + }
1255 1288 }
1256 1289 
1257 1290 return 0;
+1
drivers/iio/adc/ad7380.c
··· 873 873 .has_hardware_gain = true,
874 874 .available_scan_masks = ad7380_4_channel_scan_masks,
875 875 .timing_specs = &ad7380_4_timing,
876 + .max_conversion_rate_hz = 4 * MEGA,
876 877 };
877 878 
878 879 static const struct spi_offload_config ad7380_offload_config = {
+10 -23
drivers/iio/adc/rzg2l_adc.c
··· 89 89 struct completion completion;
90 90 struct mutex lock;
91 91 u16 last_val[RZG2L_ADC_MAX_CHANNELS];
92 - bool was_rpm_active;
93 92 };
94 93 
95 94 /**
··· 427 428 if (!indio_dev)
428 429 return -ENOMEM;
429 430 
431 + platform_set_drvdata(pdev, indio_dev);
432 +
430 433 adc = iio_priv(indio_dev);
431 434 
432 435 adc->hw_params = device_get_match_data(dev);
··· 460 459 ret = devm_pm_runtime_enable(dev);
461 460 if (ret)
462 461 return ret;
463 -
464 - platform_set_drvdata(pdev, indio_dev);
465 462 
466 463 ret = rzg2l_adc_hw_init(dev, adc);
467 464 if (ret)
··· 540 541 };
541 542 int ret;
542 543 
543 - if (pm_runtime_suspended(dev)) {
544 - adc->was_rpm_active = false;
545 - } else {
546 - ret = pm_runtime_force_suspend(dev);
547 - if (ret)
548 - return ret;
549 - adc->was_rpm_active = true;
550 - }
544 + ret = pm_runtime_force_suspend(dev);
545 + if (ret)
546 + return ret;
551 547 
552 548 ret = reset_control_bulk_assert(ARRAY_SIZE(resets), resets);
553 549 if (ret)
··· 551 557 return 0;
552 558 
553 559 rpm_restore:
554 - if (adc->was_rpm_active)
555 - pm_runtime_force_resume(dev);
556 -
560 + pm_runtime_force_resume(dev);
557 561 return ret;
558 562 }
559 563 
··· 569 577 if (ret)
570 578 return ret;
571 579 
572 - if (adc->was_rpm_active) {
573 - ret = pm_runtime_force_resume(dev);
574 - if (ret)
575 - goto resets_restore;
576 - }
580 + ret = pm_runtime_force_resume(dev);
581 + if (ret)
582 + goto resets_restore;
577 583 
578 584 ret = rzg2l_adc_hw_init(dev, adc);
579 585 if (ret)
··· 580 590 return 0;
581 591 
582 592 rpm_restore:
583 - if (adc->was_rpm_active) {
584 - pm_runtime_mark_last_busy(dev);
585 - pm_runtime_put_autosuspend(dev);
586 - }
593 + pm_runtime_force_suspend(dev);
587 594 resets_restore:
588 595 reset_control_bulk_assert(ARRAY_SIZE(resets), resets);
589 596 return ret;
+5 -1
drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
··· 32 32 goto exit;
33 33 
34 34 *temp = (s16)be16_to_cpup(raw);
35 + /*
36 + * Temperature data is invalid if both accel and gyro are off.
37 + * Return -EBUSY in this case.
38 + */
35 39 if (*temp == INV_ICM42600_DATA_INVALID)
36 - ret = -EINVAL;
40 + ret = -EBUSY;
37 41 
38 42 exit:
39 43 mutex_unlock(&st->lock);
+1 -1
drivers/iio/light/as73211.c
··· 639 639 struct {
640 640 __le16 chan[4];
641 641 aligned_s64 ts;
642 - } scan;
642 + } scan = { };
643 643 int data_result, ret;
644 644 
645 645 mutex_lock(&data->mutex);
+5 -4
drivers/iio/pressure/bmp280-core.c
··· 3213 3213 
3214 3214 /* Bring chip out of reset if there is an assigned GPIO line */
3215 3215 gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
3216 + if (IS_ERR(gpiod))
3217 + return dev_err_probe(dev, PTR_ERR(gpiod), "failed to get reset GPIO\n");
3218 +
3216 3219 /* Deassert the signal */
3217 - if (gpiod) {
3218 - dev_info(dev, "release reset\n");
3219 - gpiod_set_value(gpiod, 0);
3220 - }
3220 + dev_info(dev, "release reset\n");
3221 + gpiod_set_value(gpiod, 0);
3221 3222 
3222 3223 data->regmap = regmap;
3223 3224 
+10 -4
drivers/iio/proximity/isl29501.c
··· 938 938 struct iio_dev *indio_dev = pf->indio_dev;
939 939 struct isl29501_private *isl29501 = iio_priv(indio_dev);
940 940 const unsigned long *active_mask = indio_dev->active_scan_mask;
941 - u32 buffer[4] __aligned(8) = {}; /* 1x16-bit + naturally aligned ts */
941 + u32 value;
942 + struct {
943 + u16 data;
944 + aligned_s64 ts;
945 + } scan = { };
942 946 
943 - if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask))
944 - isl29501_register_read(isl29501, REG_DISTANCE, buffer);
947 + if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask)) {
948 + isl29501_register_read(isl29501, REG_DISTANCE, &value);
949 + scan.data = value;
950 + }
945 951 
946 - iio_push_to_buffers_with_timestamp(indio_dev, buffer, pf->timestamp);
952 + iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);
947 953 iio_trigger_notify_done(indio_dev->trig);
948 954 
949 955 return IRQ_HANDLED;
+16 -10
drivers/iio/temperature/maxim_thermocouple.c
··· 11 11 #include <linux/module.h>
12 12 #include <linux/err.h>
13 13 #include <linux/spi/spi.h>
14 + #include <linux/types.h>
14 15 #include <linux/iio/iio.h>
15 16 #include <linux/iio/sysfs.h>
16 17 #include <linux/iio/trigger.h>
··· 122 121 struct spi_device *spi;
123 122 const struct maxim_thermocouple_chip *chip;
124 123 char tc_type;
125 -
126 - u8 buffer[16] __aligned(IIO_DMA_MINALIGN);
124 + /* Buffer for reading up to 2 hardware channels. */
125 + struct {
126 + union {
127 + __be16 raw16;
128 + __be32 raw32;
129 + __be16 raw[2];
130 + };
131 + aligned_s64 timestamp;
132 + } buffer __aligned(IIO_DMA_MINALIGN);
127 133 };
128 134 
129 135 static int maxim_thermocouple_read(struct maxim_thermocouple_data *data,
··· 138 130 {
139 131 unsigned int storage_bytes = data->chip->read_size;
140 132 unsigned int shift = chan->scan_type.shift + (chan->address * 8);
141 - __be16 buf16;
142 - __be32 buf32;
143 133 int ret;
144 134 
145 135 switch (storage_bytes) {
146 136 case 2:
147 - ret = spi_read(data->spi, (void *)&buf16, storage_bytes);
148 - *val = be16_to_cpu(buf16);
137 + ret = spi_read(data->spi, &data->buffer.raw16, storage_bytes);
138 + *val = be16_to_cpu(data->buffer.raw16);
149 139 break;
150 140 case 4:
151 - ret = spi_read(data->spi, (void *)&buf32, storage_bytes);
152 - *val = be32_to_cpu(buf32);
141 + ret = spi_read(data->spi, &data->buffer.raw32, storage_bytes);
142 + *val = be32_to_cpu(data->buffer.raw32);
153 143 break;
154 144 default:
155 145 ret = -EINVAL;
··· 172 166 struct maxim_thermocouple_data *data = iio_priv(indio_dev);
173 167 int ret;
174 168 
175 - ret = spi_read(data->spi, data->buffer, data->chip->read_size);
169 + ret = spi_read(data->spi, data->buffer.raw, data->chip->read_size);
176 170 if (!ret) {
177 - iio_push_to_buffers_with_ts(indio_dev, data->buffer,
171 + iio_push_to_buffers_with_ts(indio_dev, &data->buffer,
178 172 sizeof(data->buffer),
179 173 iio_get_time_ns(indio_dev));
180 174 }
+2 -2
drivers/infiniband/core/umem_odp.c
··· 115 115 
116 116 out_free_map:
117 117 if (ib_uses_virt_dma(dev))
118 - kfree(map->pfn_list);
118 + kvfree(map->pfn_list);
119 119 else
120 120 hmm_dma_map_free(dev->dma_device, map);
121 121 return ret;
··· 287 287 mutex_unlock(&umem_odp->umem_mutex);
288 288 mmu_interval_notifier_remove(&umem_odp->notifier);
289 289 if (ib_uses_virt_dma(dev))
290 - kfree(umem_odp->map.pfn_list);
290 + kvfree(umem_odp->map.pfn_list);
291 291 else
292 292 hmm_dma_map_free(dev->dma_device, &umem_odp->map);
293 293 }
+2 -6
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 1921 1921 struct bnxt_re_srq *srq = container_of(ib_srq, struct bnxt_re_srq,
1922 1922 ib_srq);
1923 1923 struct bnxt_re_dev *rdev = srq->rdev;
1924 - int rc;
1925 1924 
1926 1925 switch (srq_attr_mask) {
1927 1926 case IB_SRQ_MAX_WR:
··· 1932 1933 return -EINVAL;
1933 1934 
1934 1935 srq->qplib_srq.threshold = srq_attr->srq_limit;
1935 - rc = bnxt_qplib_modify_srq(&rdev->qplib_res, &srq->qplib_srq);
1936 - if (rc) {
1937 - ibdev_err(&rdev->ibdev, "Modify HW SRQ failed!");
1938 - return rc;
1939 - }
1936 + bnxt_qplib_srq_arm_db(&srq->qplib_srq.dbinfo, srq->qplib_srq.threshold);
1937 +
1940 1938 /* On success, update the shadow */
1941 1939 srq->srq_limit = srq_attr->srq_limit;
1942 1940 /* No need to Build and send response back to udata */
+23
drivers/infiniband/hw/bnxt_re/main.c
··· 2017 2017 rdev->nqr = NULL;
2018 2018 }
2019 2019 
2020 + /* When DEL_GID fails, driver is not freeing GID ctx memory.
2021 + * To avoid the memory leak, free the memory during unload
2022 + */
2023 + static void bnxt_re_free_gid_ctx(struct bnxt_re_dev *rdev)
2024 + {
2025 + struct bnxt_qplib_sgid_tbl *sgid_tbl = &rdev->qplib_res.sgid_tbl;
2026 + struct bnxt_re_gid_ctx *ctx, **ctx_tbl;
2027 + int i;
2028 +
2029 + if (!sgid_tbl->active)
2030 + return;
2031 +
2032 + ctx_tbl = sgid_tbl->ctx;
2033 + for (i = 0; i < sgid_tbl->max; i++) {
2034 + if (sgid_tbl->hw_id[i] == 0xFFFF)
2035 + continue;
2036 +
2037 + ctx = ctx_tbl[i];
2038 + kfree(ctx);
2039 + }
2040 + }
2041 +
2020 2042 static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev, u8 op_type)
2021 2043 {
2022 2044 u8 type;
··· 2052 2030 if (test_and_clear_bit(BNXT_RE_FLAG_QOS_WORK_REG, &rdev->flags))
2053 2031 cancel_delayed_work_sync(&rdev->worker);
2054 2032 
2033 + bnxt_re_free_gid_ctx(rdev);
2055 2034 if (test_and_clear_bit(BNXT_RE_FLAG_RESOURCES_INITIALIZED,
2056 2035 &rdev->flags))
2057 2036 bnxt_re_cleanup_res(rdev);
+1 -29
drivers/infiniband/hw/bnxt_re/qplib_fp.c
··· 705 705 srq->dbinfo.db = srq->dpi->dbr;
706 706 srq->dbinfo.max_slot = 1;
707 707 srq->dbinfo.priv_db = res->dpi_tbl.priv_db;
708 - if (srq->threshold)
709 - bnxt_qplib_armen_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ_ARMENA);
710 - srq->arm_req = false;
708 + bnxt_qplib_armen_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ_ARMENA);
711 709 
712 710 return 0;
713 711 fail:
··· 713 715 kfree(srq->swq);
714 716 
715 717 return rc;
716 - }
717 -
718 - int bnxt_qplib_modify_srq(struct bnxt_qplib_res *res,
719 - struct bnxt_qplib_srq *srq)
720 - {
721 - struct bnxt_qplib_hwq *srq_hwq = &srq->hwq;
722 - u32 count;
723 -
724 - count = __bnxt_qplib_get_avail(srq_hwq);
725 - if (count > srq->threshold) {
726 - srq->arm_req = false;
727 - bnxt_qplib_srq_arm_db(&srq->dbinfo, srq->threshold);
728 - } else {
729 - /* Deferred arming */
730 - srq->arm_req = true;
731 - }
732 -
733 - return 0;
734 718 }
735 719 
736 720 int bnxt_qplib_query_srq(struct bnxt_qplib_res *res,
··· 756 776 struct bnxt_qplib_hwq *srq_hwq = &srq->hwq;
757 777 struct rq_wqe *srqe;
758 778 struct sq_sge *hw_sge;
759 - u32 count = 0;
760 779 int i, next;
761 780 
762 781 spin_lock(&srq_hwq->lock);
··· 787 808 
788 809 bnxt_qplib_hwq_incr_prod(&srq->dbinfo, srq_hwq, srq->dbinfo.max_slot);
789 810 
790 - spin_lock(&srq_hwq->lock);
791 - count = __bnxt_qplib_get_avail(srq_hwq);
792 - spin_unlock(&srq_hwq->lock);
793 811 /* Ring DB */
794 812 bnxt_qplib_ring_prod_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ);
795 - if (srq->arm_req == true && count > srq->threshold) {
796 - srq->arm_req = false;
797 - bnxt_qplib_srq_arm_db(&srq->dbinfo, srq->threshold);
798 - }
799 813 
800 814 return 0;
801 815 }
-2
drivers/infiniband/hw/bnxt_re/qplib_fp.h
··· 546 546 srqn_handler_t srq_handler);
547 547 int bnxt_qplib_create_srq(struct bnxt_qplib_res *res,
548 548 struct bnxt_qplib_srq *srq);
549 - int bnxt_qplib_modify_srq(struct bnxt_qplib_res *res,
550 - struct bnxt_qplib_srq *srq);
551 549 int bnxt_qplib_query_srq(struct bnxt_qplib_res *res,
552 550 struct bnxt_qplib_srq *srq);
553 551 void bnxt_qplib_destroy_srq(struct bnxt_qplib_res *res,
+2
drivers/infiniband/hw/bnxt_re/qplib_res.c
··· 121 121 pbl->pg_arr = vmalloc_array(pages, sizeof(void *));
122 122 if (!pbl->pg_arr)
123 123 return -ENOMEM;
124 + memset(pbl->pg_arr, 0, pages * sizeof(void *));
124 125 
125 126 pbl->pg_map_arr = vmalloc_array(pages, sizeof(dma_addr_t));
126 127 if (!pbl->pg_map_arr) {
··· 129 128 pbl->pg_arr = NULL;
130 129 return -ENOMEM;
131 130 }
131 + memset(pbl->pg_map_arr, 0, pages * sizeof(dma_addr_t));
132 132 pbl->pg_count = 0;
133 133 pbl->pg_size = sginfo->pgsize;
134 134 
+5 -1
drivers/infiniband/hw/erdma/erdma_verbs.c
··· 994 994 old_entry = xa_store(&dev->qp_xa, 1, qp, GFP_KERNEL);
995 995 if (xa_is_err(old_entry))
996 996 ret = xa_err(old_entry);
997 + else
998 + qp->ibqp.qp_num = 1;
997 999 } else {
998 1000 ret = xa_alloc_cyclic(&dev->qp_xa, &qp->ibqp.qp_num, qp,
999 1001 XA_LIMIT(1, dev->attrs.max_qp - 1),
··· 1033 1031 if (ret)
1034 1032 goto err_out_cmd;
1035 1033 } else {
1036 - init_kernel_qp(dev, qp, attrs);
1034 + ret = init_kernel_qp(dev, qp, attrs);
1035 + if (ret)
1036 + goto err_out_xa;
1037 1037 }
1038 1038 
1039 1039 qp->attrs.max_send_sge = attrs->cap.max_send_sge;
+3 -3
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 3043 3043 if (!hr_dev->is_vf)
3044 3044 hns_roce_free_link_table(hr_dev);
3045 3045 
3046 - if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP09)
3046 + if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09)
3047 3047 free_dip_entry(hr_dev);
3048 3048 }
3049 3049 
··· 5476 5476 return ret;
5477 5477 }
5478 5478 
5479 - static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 qpn,
5479 + static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 sccn,
5480 5480 void *buffer)
5481 5481 {
5482 5482 struct hns_roce_v2_scc_context *context;
··· 5488 5488 return PTR_ERR(mailbox);
5489 5489 
5490 5490 ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma, HNS_ROCE_CMD_QUERY_SCCC,
5491 - qpn);
5491 + sccn);
5492 5492 if (ret)
5493 5493 goto out;
5494 5494 
+8 -1
drivers/infiniband/hw/hns/hns_roce_restrack.c
··· 100 100 struct hns_roce_v2_qp_context qpc;
101 101 struct hns_roce_v2_scc_context sccc;
102 102 } context = {};
103 + u32 sccn = hr_qp->qpn;
103 104 int ret;
104 105 
105 106 if (!hr_dev->hw->query_qpc)
··· 117 116 !hr_dev->hw->query_sccc)
118 117 goto out;
119 118 
120 - ret = hr_dev->hw->query_sccc(hr_dev, hr_qp->qpn, &context.sccc);
119 + if (hr_qp->cong_type == CONG_TYPE_DIP) {
120 + if (!hr_qp->dip)
121 + goto out;
122 + sccn = hr_qp->dip->dip_idx;
123 + }
124 +
125 + ret = hr_dev->hw->query_sccc(hr_dev, sccn, &context.sccc);
121 126 if (ret)
122 127 ibdev_warn_ratelimited(&hr_dev->ib_dev,
123 128 "failed to query SCCC, ret = %d.\n",
+8 -21
drivers/infiniband/sw/rxe/rxe_net.c
··· 345 345 
346 346 static void rxe_skb_tx_dtor(struct sk_buff *skb)
347 347 {
348 - struct net_device *ndev = skb->dev;
349 - struct rxe_dev *rxe;
350 - unsigned int qp_index;
351 - struct rxe_qp *qp;
348 + struct rxe_qp *qp = skb->sk->sk_user_data;
352 349 int skb_out;
353 350 
354 - rxe = rxe_get_dev_from_net(ndev);
355 - if (!rxe && is_vlan_dev(ndev))
356 - rxe = rxe_get_dev_from_net(vlan_dev_real_dev(ndev));
357 - if (WARN_ON(!rxe))
358 - return;
359 -
360 - qp_index = (int)(uintptr_t)skb->sk->sk_user_data;
361 - if (!qp_index)
362 - return;
363 -
364 - qp = rxe_pool_get_index(&rxe->qp_pool, qp_index);
365 - if (!qp)
366 - goto put_dev;
367 -
368 351 skb_out = atomic_dec_return(&qp->skb_out);
369 - if (qp->need_req_skb && skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW)
352 + if (unlikely(qp->need_req_skb &&
353 + skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW))
370 354 rxe_sched_task(&qp->send_task);
371 355 
372 356 rxe_put(qp);
373 - put_dev:
374 - ib_device_put(&rxe->ib_dev);
375 357 sock_put(skb->sk);
376 358 }
377 359 
··· 365 383 sock_hold(sk);
366 384 skb->sk = sk;
367 385 skb->destructor = rxe_skb_tx_dtor;
386 + rxe_get(pkt->qp);
368 387 atomic_inc(&pkt->qp->skb_out);
369 388 
370 389 if (skb->protocol == htons(ETH_P_IP))
··· 388 405 sock_hold(sk);
389 406 skb->sk = sk;
390 407 skb->destructor = rxe_skb_tx_dtor;
408 + rxe_get(pkt->qp);
391 409 atomic_inc(&pkt->qp->skb_out);
392 410 
393 411 if (skb->protocol == htons(ETH_P_IP))
··· 480 496 rcu_read_unlock();
481 497 goto out;
482 498 }
499 +
500 + /* Add time stamp to skb. */
501 + skb->tstamp = ktime_get();
483 502 
484 503 skb_reserve(skb, hdr_len + LL_RESERVED_SPACE(ndev));
485 504 
+1 -1
drivers/infiniband/sw/rxe/rxe_qp.c
··· 244 244 err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk);
245 245 if (err < 0)
246 246 return err;
247 - qp->sk->sk->sk_user_data = (void *)(uintptr_t)qp->elem.index;
247 + qp->sk->sk->sk_user_data = qp;
248 248 
249 249 /* pick a source UDP port number for this QP based on
250 250 * the source QPN. this spreads traffic for different QPs
+2 -2
drivers/iommu/amd/init.c
··· 3638 3638 {
3639 3639 u32 seg = 0, bus, dev, fn;
3640 3640 char *hid, *uid, *p, *addr;
3641 - char acpiid[ACPIID_LEN] = {0};
3641 + char acpiid[ACPIID_LEN + 1] = { }; /* size with NULL terminator */
3642 3642 int i;
3643 3643 
3644 3644 addr = strchr(str, '@');
··· 3664 3664 /* We have the '@', make it the terminator to get just the acpiid */
3665 3665 *addr++ = 0;
3666 3666 
3667 - if (strlen(str) > ACPIID_LEN + 1)
3667 + if (strlen(str) > ACPIID_LEN)
3668 3668 goto not_found;
3669 3669 
3670 3670 if (sscanf(str, "=%s", acpiid) != 1)
+1 -1
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
··· 2997 2997 /* ATS is being switched off, invalidate the entire ATC */
2998 2998 arm_smmu_atc_inv_master(master, IOMMU_NO_PASID);
2999 2999 }
3000 - master->ats_enabled = state->ats_enabled;
3001 3000 
3002 3001 arm_smmu_remove_master_domain(master, state->old_domain, state->ssid);
3002 + master->ats_enabled = state->ats_enabled;
3003 3003 }
3004 3004 
3005 3005 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+5 -3
drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
··· 301 301 struct iommu_vevent_tegra241_cmdqv vevent_data;
302 302 int i;
303 303 
304 - for (i = 0; i < LVCMDQ_ERR_MAP_NUM_64; i++)
305 - vevent_data.lvcmdq_err_map[i] =
306 - readq_relaxed(REG_VINTF(vintf, LVCMDQ_ERR_MAP_64(i)));
304 + for (i = 0; i < LVCMDQ_ERR_MAP_NUM_64; i++) {
305 + u64 err = readq_relaxed(REG_VINTF(vintf, LVCMDQ_ERR_MAP_64(i)));
306 +
307 + vevent_data.lvcmdq_err_map[i] = cpu_to_le64(err);
308 + }
307 309 
308 310 iommufd_viommu_report_event(viommu, IOMMU_VEVENTQ_TYPE_TEGRA241_CMDQV,
309 311 &vevent_data, sizeof(vevent_data));
+2 -2
drivers/iommu/iommufd/viommu.c
··· 339 339 }
340 340 
341 341 *base_pa = (page_to_pfn(pages[0]) << PAGE_SHIFT) + offset;
342 - kfree(pages);
342 + kvfree(pages);
343 343 return access;
344 344 
345 345 out_unpin:
··· 349 349 out_destroy:
350 350 iommufd_access_destroy_internal(viommu->ictx, access);
351 351 out_free:
352 - kfree(pages);
352 + kvfree(pages);
353 353 return ERR_PTR(rc);
354 354 }
355 355 
+1 -1
drivers/iommu/riscv/iommu.c
··· 1283 1283 unsigned long *ptr;
1284 1284 
1285 1285 ptr = riscv_iommu_pte_fetch(domain, iova, &pte_size);
1286 - if (_io_pte_none(*ptr) || !_io_pte_present(*ptr))
1286 + if (!ptr)
1287 1287 return 0;
1288 1288 
1289 1289 return pfn_to_phys(__page_val_to_pfn(*ptr)) | (iova & (pte_size - 1));
+9 -6
drivers/iommu/virtio-iommu.c
··· 998 998 iommu_dma_get_resv_regions(dev, head);
999 999 }
1000 1000 
1001 - static const struct iommu_ops viommu_ops;
1002 - static struct virtio_driver virtio_iommu_drv;
1001 + static const struct bus_type *virtio_bus_type;
1003 1002 
1004 1003 static int viommu_match_node(struct device *dev, const void *data)
1005 1004 {
··· 1007 1008 
1008 1009 static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
1009 1010 {
1010 - struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
1011 - fwnode, viommu_match_node);
1011 + struct device *dev = bus_find_device(virtio_bus_type, NULL, fwnode,
1012 + viommu_match_node);
1013 +
1012 1014 put_device(dev);
1013 1015 
1014 1016 return dev ? dev_to_virtio(dev)->priv : NULL;
··· 1160 1160 if (!viommu)
1161 1161 return -ENOMEM;
1162 1162 
1163 + /* Borrow this for easy lookups later */
1164 + virtio_bus_type = dev->bus;
1165 +
1163 1166 spin_lock_init(&viommu->request_lock);
1164 1167 ida_init(&viommu->domain_ids);
1165 1168 viommu->dev = dev;
··· 1232 1229 if (ret)
1233 1230 goto err_free_vqs;
1234 1231 
1235 - iommu_device_register(&viommu->iommu, &viommu_ops, parent_dev);
1236 -
1237 1232 vdev->priv = viommu;
1233 +
1234 + iommu_device_register(&viommu->iommu, &viommu_ops, parent_dev);
1238 1235 
1239 1236 dev_info(dev, "input address: %u bits\n",
1240 1237 order_base_2(viommu->geometry.aperture_end));
+5 -7
drivers/isdn/hardware/mISDN/hfcpci.c
··· 39 39 
40 40 #include "hfc_pci.h"
41 41 
42 + static void hfcpci_softirq(struct timer_list *unused);
42 43 static const char *hfcpci_revision = "2.0";
43 44 
44 45 static int HFC_cnt;
45 46 static uint debug;
46 47 static uint poll, tics;
47 - static struct timer_list hfc_tl;
48 + static DEFINE_TIMER(hfc_tl, hfcpci_softirq);
48 49 static unsigned long hfc_jiffies;
49 50 
50 51 MODULE_AUTHOR("Karsten Keil");
··· 2306 2305 hfc_jiffies = jiffies + 1;
2307 2306 else
2308 2307 hfc_jiffies += tics;
2309 - hfc_tl.expires = hfc_jiffies;
2310 - add_timer(&hfc_tl);
2308 + mod_timer(&hfc_tl, hfc_jiffies);
2311 2309 }
2312 2310 
2313 2311 static int __init
··· 2332 2332 if (poll != HFCPCI_BTRANS_THRESHOLD) {
2333 2333 printk(KERN_INFO "%s: Using alternative poll value of %d\n",
2334 2334 __func__, poll);
2335 - timer_setup(&hfc_tl, hfcpci_softirq, 0);
2336 - hfc_tl.expires = jiffies + tics;
2337 - hfc_jiffies = hfc_tl.expires;
2338 - add_timer(&hfc_tl);
2335 + hfc_jiffies = jiffies + tics;
2336 + mod_timer(&hfc_tl, hfc_jiffies);
2339 2337 } else
2340 2338 tics = 0; /* indicate the use of controller's timer */
2341 2339 
+92 -30
drivers/md/md.c
··· 339 339 * so all the races disappear.
340 340 */
341 341 static bool create_on_open = true;
342 + static bool legacy_async_del_gendisk = true;
342 343 
343 344 /*
344 345 * We have a system wide 'event count' that is incremented
··· 878 877 export_rdev(rdev, mddev);
879 878 }
880 879 
881 - /* Call del_gendisk after release reconfig_mutex to avoid
882 - * deadlock (e.g. call del_gendisk under the lock and an
883 - * access to sysfs files waits the lock)
884 - * And MD_DELETED is only used for md raid which is set in
885 - * do_md_stop. dm raid only uses md_stop to stop. So dm raid
886 - * doesn't need to check MD_DELETED when getting reconfig lock
887 - */
888 - if (test_bit(MD_DELETED, &mddev->flags))
889 - del_gendisk(mddev->gendisk);
880 + if (!legacy_async_del_gendisk) {
881 + /*
882 + * Call del_gendisk after release reconfig_mutex to avoid
883 + * deadlock (e.g. call del_gendisk under the lock and an
884 + * access to sysfs files waits the lock)
885 + * And MD_DELETED is only used for md raid which is set in
886 + * do_md_stop. dm raid only uses md_stop to stop. So dm raid
887 + * doesn't need to check MD_DELETED when getting reconfig lock
888 + */
889 + if (test_bit(MD_DELETED, &mddev->flags))
890 + del_gendisk(mddev->gendisk);
891 + }
890 892 }
891 893 EXPORT_SYMBOL_GPL(mddev_unlock);
892 894 
··· 1423 1419 else {
1424 1420 if (sb->events_hi == sb->cp_events_hi &&
1425 1421 sb->events_lo == sb->cp_events_lo) {
1426 - mddev->resync_offset = sb->resync_offset;
1422 + mddev->resync_offset = sb->recovery_cp;
1427 1423 } else
1428 1424 mddev->resync_offset = 0;
1429 1425 }
··· 1551 1547 mddev->minor_version = sb->minor_version;
1552 1548 if (mddev->in_sync)
1553 1549 {
1554 - sb->resync_offset = mddev->resync_offset;
1550 + sb->recovery_cp = mddev->resync_offset;
1555 1551 sb->cp_events_hi = (mddev->events>>32);
1556 1552 sb->cp_events_lo = (u32)mddev->events;
1557 1553 if (mddev->resync_offset == MaxSector)
1558 1554 sb->state = (1<< MD_SB_CLEAN);
1559 1555 } else
1560 - sb->resync_offset = 0;
1556 + sb->recovery_cp = 0;
1561 1557 
1562 1558 sb->layout = mddev->layout;
1563 1559 sb->chunk_size = mddev->chunk_sectors << 9;
··· 4839 4835 static struct md_sysfs_entry md_metadata =
4840 4836 __ATTR_PREALLOC(metadata_version, S_IRUGO|S_IWUSR, metadata_show, metadata_store);
4841 4837 
4838 + static bool rdev_needs_recovery(struct md_rdev *rdev, sector_t sectors)
4839 + {
4840 + return rdev->raid_disk >= 0 &&
4841 + !test_bit(Journal, &rdev->flags) &&
4842 + !test_bit(Faulty, &rdev->flags) &&
4843 + !test_bit(In_sync, &rdev->flags) &&
4844 + rdev->recovery_offset < sectors;
4845 + }
4846 + 
4847 + static enum sync_action md_get_active_sync_action(struct mddev *mddev)
4848 + {
4849 + struct md_rdev *rdev;
4850 + bool is_recover = false;
4851 +
4852 + if (mddev->resync_offset < MaxSector)
4853 + return ACTION_RESYNC;
4854 +
4855 + if (mddev->reshape_position != MaxSector)
4856 + return ACTION_RESHAPE;
4857 +
4858 + rcu_read_lock();
4859 + rdev_for_each_rcu(rdev, mddev) {
4860 + if (rdev_needs_recovery(rdev, MaxSector)) {
4861 + is_recover = true;
4862 + break;
4863 + }
4864 + }
4865 + rcu_read_unlock();
4866 +
4867 + return is_recover ? ACTION_RECOVER : ACTION_IDLE;
4868 + }
4869 + 
4842 4870 enum sync_action md_sync_action(struct mddev *mddev)
4843 4871 {
4844 4872 unsigned long recovery = mddev->recovery;
4873 + enum sync_action active_action;
4845 4874 
4846 4875 /*
4847 4876 * frozen has the highest priority, means running sync_thread will be
··· 4898 4861 !test_bit(MD_RECOVERY_NEEDED, &recovery))
4899 4862 return ACTION_IDLE;
4900 4863 
4901 - if (test_bit(MD_RECOVERY_RESHAPE, &recovery) ||
4902 - mddev->reshape_position != MaxSector)
4864 + /*
4865 + * Check if any sync operation (resync/recover/reshape) is
4866 + * currently active. This ensures that only one sync operation
4867 + * can run at a time. Returns the type of active operation, or
4868 + * ACTION_IDLE if none are active.
4869 + */
4870 + active_action = md_get_active_sync_action(mddev);
4871 + if (active_action != ACTION_IDLE)
4872 + return active_action;
4873 +
4874 + if (test_bit(MD_RECOVERY_RESHAPE, &recovery))
4903 4875 return ACTION_RESHAPE;
4904 4876 
4905 4877 if (test_bit(MD_RECOVERY_RECOVER, &recovery))
··· 5864 5818 {
5865 5819 struct mddev *mddev = container_of(ko, struct mddev, kobj);
5866 5820 
5821 + if (legacy_async_del_gendisk) {
5822 + if (mddev->sysfs_state)
5823 + sysfs_put(mddev->sysfs_state);
5824 + if (mddev->sysfs_level)
5825 + sysfs_put(mddev->sysfs_level);
5826 + del_gendisk(mddev->gendisk);
5827 + }
5867 5828 put_disk(mddev->gendisk);
5868 5829 }
5869 5830 
··· 6073 6020 static int md_alloc_and_put(dev_t dev, char *name)
6074 6021 {
6075 6022 struct mddev *mddev = md_alloc(dev, name);
6023 +
6024 + if (legacy_async_del_gendisk)
6025 + pr_warn("md: async del_gendisk mode will be removed in future, please upgrade to mdadm-4.5+\n")
;
6076 6026 
6077 6027 if (IS_ERR(mddev))
6078 6028 return PTR_ERR(mddev);
··· 6487 6431 mddev->persistent = 0;
6488 6432 mddev->level = LEVEL_NONE;
6489 6433 mddev->clevel[0] = 0;
6490 - /* if UNTIL_STOP is set, it's cleared here */
6491 - mddev->hold_active = 0;
6492 - /* Don't clear MD_CLOSING, or mddev can be opened again. */
6493 - mddev->flags &= BIT_ULL_MASK(MD_CLOSING);
6434 +
6435 + /*
6436 + * For legacy_async_del_gendisk mode, it can stop the array in the
6437 + * middle of assembling it, then it still can access the array. So
6438 + * it needs to clear MD_CLOSING. If not legacy_async_del_gendisk,
6439 + * it can't open the array again after stopping it. So it doesn't
6440 + * clear MD_CLOSING.
6441 + */
6442 + if (legacy_async_del_gendisk && mddev->hold_active) {
6443 + clear_bit(MD_CLOSING, &mddev->flags);
6444 + } else {
6445 + /* if UNTIL_STOP is set, it's cleared here */
6446 + mddev->hold_active = 0;
6447 + /* Don't clear MD_CLOSING, or mddev can be opened again. */
6448 + mddev->flags &= BIT_ULL_MASK(MD_CLOSING);
6449 + }
6494 6450 mddev->sb_flags = 0;
6495 6451 mddev->ro = MD_RDWR;
6496 6452 mddev->metadata_type[0] = 0;
··· 6726 6658 
6727 6659 export_array(mddev);
6728 6660 md_clean(mddev);
6729 - set_bit(MD_DELETED, &mddev->flags);
6661 + if (!legacy_async_del_gendisk)
6662 + set_bit(MD_DELETED, &mddev->flags);
6730 6663 }
6731 6664 md_new_event();
6732 6665 sysfs_notify_dirent_safe(mddev->sysfs_state);
··· 9037 8968 start = MaxSector;
9038 8969 rcu_read_lock();
9039 8970 rdev_for_each_rcu(rdev, mddev)
9040 - if (rdev->raid_disk >= 0 &&
9041 - !test_bit(Journal, &rdev->flags) &&
9042 - !test_bit(Faulty, &rdev->flags) &&
9043 - !test_bit(In_sync, &rdev->flags) &&
9044 - rdev->recovery_offset < start)
8971 + if (rdev_needs_recovery(rdev, start))
9045 8972 start = rdev->recovery_offset;
9046 8973 rcu_read_unlock();
9047 8974 
··· 9396 9331 test_bit(MD_RECOVERY_RECOVER, &mddev->recovery)) {
9397 9332 rcu_read_lock();
9398 9333 rdev_for_each_rcu(rdev, mddev)
9399 - if (rdev->raid_disk >= 0 &&
9400 - mddev->delta_disks >= 0 &&
9401 - !test_bit(Journal, &rdev->flags) &&
9402 - !test_bit(Faulty, &rdev->flags) &&
9403 - !test_bit(In_sync, &rdev->flags) &&
9404 - rdev->recovery_offset < mddev->curr_resync)
9334 + if (mddev->delta_disks >= 0 &&
9335 + rdev_needs_recovery(rdev, mddev->curr_resync))
9405 9336 rdev->recovery_offset = mddev->curr_resync;
9406 9337 rcu_read_unlock();
9407 9338 }
··· 10453 10392 module_param(start_dirty_degraded, int, S_IRUGO|S_IWUSR);
10454 10393 module_param_call(new_array, add_named_array, NULL, NULL, S_IWUSR);
10455 10394 module_param(create_on_open, bool, S_IRUSR|S_IWUSR);
10395 + module_param(legacy_async_del_gendisk, bool, 0600);
10456 10396 
10457 10397 MODULE_LICENSE("GPL");
10458 10398 MODULE_DESCRIPTION("MD RAID framework");
-1
drivers/media/i2c/alvium-csi2.c
···
1841 1841 
1842 1842 } else {
1843 1843 alvium_set_stream_mipi(alvium, enable);
1844 - pm_runtime_mark_last_busy(&client->dev);
1845 1844 pm_runtime_put_autosuspend(&client->dev);
1846 1845 }
1847 1846 
+1 -6
drivers/media/i2c/ccs/ccs-core.c
···
787 787 rval = -EINVAL;
788 788 }
789 789 
790 - if (pm_status > 0) {
791 - pm_runtime_mark_last_busy(&client->dev);
790 + if (pm_status > 0)
792 791 pm_runtime_put_autosuspend(&client->dev);
793 - }
794 792 
795 793 return rval;
796 794 }
···
1912 1914 if (!enable) {
1913 1915 ccs_stop_streaming(sensor);
1914 1916 sensor->streaming = false;
1915 - pm_runtime_mark_last_busy(&client->dev);
1916 1917 pm_runtime_put_autosuspend(&client->dev);
1917 1918 
1918 1919 return 0;
···
1926 1929 rval = ccs_start_streaming(sensor);
1927 1930 if (rval < 0) {
1928 1931 sensor->streaming = false;
1929 - pm_runtime_mark_last_busy(&client->dev);
1930 1932 pm_runtime_put_autosuspend(&client->dev);
1931 1933 }
1932 1934 
···
2673 2677 return -ENODEV;
2674 2678 }
2675 2679 
2676 - pm_runtime_mark_last_busy(&client->dev);
2677 2680 pm_runtime_put_autosuspend(&client->dev);
2678 2681 
2679 2682 /*
-1
drivers/media/i2c/dw9768.c
···
374 374 
375 375 static int dw9768_close(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
376 376 {
377 - pm_runtime_mark_last_busy(sd->dev);
378 377 pm_runtime_put_autosuspend(sd->dev);
379 378 
380 379 return 0;
-3
drivers/media/i2c/gc0308.c
···
974 974 if (ret)
975 975 dev_err(gc0308->dev, "failed to set control: %d\n", ret);
976 976 
977 - pm_runtime_mark_last_busy(gc0308->dev);
978 977 pm_runtime_put_autosuspend(gc0308->dev);
979 978 
980 979 return ret;
···
1156 1157 return 0;
1157 1158 
1158 1159 disable_pm:
1159 - pm_runtime_mark_last_busy(gc0308->dev);
1160 1160 pm_runtime_put_autosuspend(gc0308->dev);
1161 1161 return ret;
1162 1162 }
1163 1163 
1164 1164 static int gc0308_stop_stream(struct gc0308 *gc0308)
1165 1165 {
1166 - pm_runtime_mark_last_busy(gc0308->dev);
1167 1166 pm_runtime_put_autosuspend(gc0308->dev);
1168 1167 return 0;
1169 1168 }
-3
drivers/media/i2c/gc2145.c
···
963 963 return 0;
964 964 
965 965 err_rpm_put:
966 - pm_runtime_mark_last_busy(&client->dev);
967 966 pm_runtime_put_autosuspend(&client->dev);
968 967 return ret;
969 968 }
···
984 985 if (ret)
985 986 dev_err(&client->dev, "%s failed to write regs\n", __func__);
986 987 
987 - pm_runtime_mark_last_busy(&client->dev);
988 988 pm_runtime_put_autosuspend(&client->dev);
989 989 
990 990 return ret;
···
1191 1193 break;
1192 1194 }
1193 1195 
1194 - pm_runtime_mark_last_busy(&client->dev);
1195 1196 pm_runtime_put_autosuspend(&client->dev);
1196 1197 
1197 1198 return ret;
-2
drivers/media/i2c/imx219.c
···
771 771 return 0;
772 772 
773 773 err_rpm_put:
774 - pm_runtime_mark_last_busy(&client->dev);
775 774 pm_runtime_put_autosuspend(&client->dev);
776 775 return ret;
777 776 }
···
792 793 __v4l2_ctrl_grab(imx219->vflip, false);
793 794 __v4l2_ctrl_grab(imx219->hflip, false);
794 795 
795 - pm_runtime_mark_last_busy(&client->dev);
796 796 pm_runtime_put_autosuspend(&client->dev);
797 797 
798 798 return ret;
-3
drivers/media/i2c/imx283.c
···
1143 1143 return 0;
1144 1144 
1145 1145 err_rpm_put:
1146 - pm_runtime_mark_last_busy(imx283->dev);
1147 1146 pm_runtime_put_autosuspend(imx283->dev);
1148 1147 
1149 1148 return ret;
···
1162 1163 if (ret)
1163 1164 dev_err(imx283->dev, "Failed to stop stream\n");
1164 1165 
1165 - pm_runtime_mark_last_busy(imx283->dev);
1166 1166 pm_runtime_put_autosuspend(imx283->dev);
1167 1167 
1168 1168 return ret;
···
1556 1558 * Decrease the PM usage count. The device will get suspended after the
1557 1559 * autosuspend delay, turning the power off.
1558 1560 */
1559 - pm_runtime_mark_last_busy(imx283->dev);
1560 1561 pm_runtime_put_autosuspend(imx283->dev);
1561 1562 
1562 1563 return 0;
-3
drivers/media/i2c/imx290.c
···
869 869 break;
870 870 }
871 871 
872 - pm_runtime_mark_last_busy(imx290->dev);
873 872 pm_runtime_put_autosuspend(imx290->dev);
874 873 
875 874 return ret;
···
1098 1099 }
1099 1100 } else {
1100 1101 imx290_stop_streaming(imx290);
1101 - pm_runtime_mark_last_busy(imx290->dev);
1102 1102 pm_runtime_put_autosuspend(imx290->dev);
1103 1103 }
1104 1104 
···
1292 1294 * will already be prevented even before the delay.
1293 1295 */
1294 1296 v4l2_i2c_subdev_init(&imx290->sd, client, &imx290_subdev_ops);
1295 - pm_runtime_mark_last_busy(imx290->dev);
1296 1297 pm_runtime_put_autosuspend(imx290->dev);
1297 1298 
1298 1299 imx290->sd.internal_ops = &imx290_internal_ops;
-1
drivers/media/i2c/imx296.c
···
604 604 if (!enable) {
605 605 ret = imx296_stream_off(sensor);
606 606 
607 - pm_runtime_mark_last_busy(sensor->dev);
608 607 pm_runtime_put_autosuspend(sensor->dev);
609 608 
610 609 goto unlock;
-1
drivers/media/i2c/imx415.c
···
952 952 if (!enable) {
953 953 ret = imx415_stream_off(sensor);
954 954 
955 - pm_runtime_mark_last_busy(sensor->dev);
956 955 pm_runtime_put_autosuspend(sensor->dev);
957 956 
958 957 goto unlock;
-6
drivers/media/i2c/mt9m114.c
···
974 974 return 0;
975 975 
976 976 error:
977 - pm_runtime_mark_last_busy(&sensor->client->dev);
978 977 pm_runtime_put_autosuspend(&sensor->client->dev);
979 978 
980 979 return ret;
···
987 988 
988 989 ret = mt9m114_set_state(sensor, MT9M114_SYS_STATE_ENTER_SUSPEND);
989 990 
990 - pm_runtime_mark_last_busy(&sensor->client->dev);
991 991 pm_runtime_put_autosuspend(&sensor->client->dev);
992 992 
993 993 return ret;
···
1044 1046 break;
1045 1047 }
1046 1048 
1047 - pm_runtime_mark_last_busy(&sensor->client->dev);
1048 1049 pm_runtime_put_autosuspend(&sensor->client->dev);
1049 1050 
1050 1051 return ret;
···
1110 1113 break;
1111 1114 }
1112 1115 
1113 - pm_runtime_mark_last_busy(&sensor->client->dev);
1114 1116 pm_runtime_put_autosuspend(&sensor->client->dev);
1115 1117 
1116 1118 return ret;
···
1561 1565 break;
1562 1566 }
1563 1567 
1564 - pm_runtime_mark_last_busy(&sensor->client->dev);
1565 1568 pm_runtime_put_autosuspend(&sensor->client->dev);
1566 1569 
1567 1570 return ret;
···
2467 2472 * Decrease the PM usage count. The device will get suspended after the
2468 2473 * autosuspend delay, turning the power off.
2469 2474 */
2470 - pm_runtime_mark_last_busy(dev);
2471 2475 pm_runtime_put_autosuspend(dev);
2472 2476 
2473 2477 return 0;
-3
drivers/media/i2c/ov4689.c
···
497 497 } else {
498 498 cci_write(ov4689->regmap, OV4689_REG_CTRL_MODE,
499 499 OV4689_MODE_SW_STANDBY, NULL);
500 - pm_runtime_mark_last_busy(dev);
501 500 pm_runtime_put_autosuspend(dev);
502 501 }
503 502 
···
701 702 break;
702 703 }
703 704 
704 - pm_runtime_mark_last_busy(dev);
705 705 pm_runtime_put_autosuspend(dev);
706 706 
707 707 return ret;
···
997 999 goto err_clean_subdev_pm;
998 1000 }
999 1001 
1000 - pm_runtime_mark_last_busy(dev);
1001 1002 pm_runtime_put_autosuspend(dev);
1002 1003 
1003 1004 return 0;
-4
drivers/media/i2c/ov5640.c
···
3341 3341 break;
3342 3342 }
3343 3343 
3344 - pm_runtime_mark_last_busy(&sensor->i2c_client->dev);
3345 3344 pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
3346 3345 
3347 3346 return 0;
···
3416 3417 break;
3417 3418 }
3418 3419 
3419 - pm_runtime_mark_last_busy(&sensor->i2c_client->dev);
3420 3420 pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
3421 3421 
3422 3422 return ret;
···
3752 3754 mutex_unlock(&sensor->lock);
3753 3755 
3754 3756 if (!enable || ret) {
3755 - pm_runtime_mark_last_busy(&sensor->i2c_client->dev);
3756 3757 pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
3757 3758 }
3758 3759 
···
3962 3965 
3963 3966 pm_runtime_set_autosuspend_delay(dev, 1000);
3964 3967 pm_runtime_use_autosuspend(dev);
3965 - pm_runtime_mark_last_busy(dev);
3966 3968 pm_runtime_put_autosuspend(dev);
3967 3969 
3968 3970 return 0;
-3
drivers/media/i2c/ov5645.c
···
808 808 break;
809 809 }
810 810 
811 - pm_runtime_mark_last_busy(ov5645->dev);
812 811 pm_runtime_put_autosuspend(ov5645->dev);
813 812 
814 813 return ret;
···
978 979 OV5645_SYSTEM_CTRL0_STOP);
979 980 
980 981 rpm_put:
981 - pm_runtime_mark_last_busy(ov5645->dev);
982 982 pm_runtime_put_autosuspend(ov5645->dev);
983 983 
984 984 return ret;
···
1194 1196 
1195 1197 pm_runtime_set_autosuspend_delay(dev, 1000);
1196 1198 pm_runtime_use_autosuspend(dev);
1197 - pm_runtime_mark_last_busy(dev);
1198 1199 pm_runtime_put_autosuspend(dev);
1199 1200 
1200 1201 return 0;
+1 -6
drivers/media/i2c/ov64a40.c
···
2990 2990 return 0;
2991 2991 
2992 2992 error_power_off:
2993 - pm_runtime_mark_last_busy(ov64a40->dev);
2994 2993 pm_runtime_put_autosuspend(ov64a40->dev);
2995 2994 
2996 2995 return ret;
···
2999 3000 struct v4l2_subdev_state *state)
3000 3001 {
3001 3002 cci_update_bits(ov64a40->cci, OV64A40_REG_SMIA, BIT(0), 0, NULL);
3002 - pm_runtime_mark_last_busy(ov64a40->dev);
3003 3003 pm_runtime_put_autosuspend(ov64a40->dev);
3004 3004 
3005 3005 __v4l2_ctrl_grab(ov64a40->link_freq, false);
···
3327 3329 break;
3328 3330 }
3329 3331 
3330 - if (pm_status > 0) {
3331 - pm_runtime_mark_last_busy(ov64a40->dev);
3332 + if (pm_status > 0)
3332 3333 pm_runtime_put_autosuspend(ov64a40->dev);
3333 - }
3334 3334 
3335 3335 return ret;
3336 3336 }
···
3618 3622 goto error_subdev_cleanup;
3619 3623 }
3620 3624 
3621 - pm_runtime_mark_last_busy(&client->dev);
3622 3625 pm_runtime_put_autosuspend(&client->dev);
3623 3626 
3624 3627 return 0;
-2
drivers/media/i2c/ov8858.c
···
1391 1391 }
1392 1392 } else {
1393 1393 ov8858_stop_stream(ov8858);
1394 - pm_runtime_mark_last_busy(&client->dev);
1395 1394 pm_runtime_put_autosuspend(&client->dev);
1396 1395 }
1397 1396 
···
1944 1945 goto err_power_off;
1945 1946 }
1946 1947 
1947 - pm_runtime_mark_last_busy(dev);
1948 1948 pm_runtime_put_autosuspend(dev);
1949 1949 
1950 1950 return 0;
-2
drivers/media/i2c/st-mipid02.c
···
465 465 if (ret)
466 466 goto error;
467 467 
468 - pm_runtime_mark_last_busy(&client->dev);
469 468 pm_runtime_put_autosuspend(&client->dev);
470 469 
471 470 error:
···
541 542 cci_write(bridge->regmap, MIPID02_DATA_LANE0_REG1, 0, &ret);
542 543 cci_write(bridge->regmap, MIPID02_DATA_LANE1_REG1, 0, &ret);
543 544 
544 - pm_runtime_mark_last_busy(&client->dev);
545 545 pm_runtime_put_autosuspend(&client->dev);
546 546 return ret;
547 547 }
-5
drivers/media/i2c/tc358746.c
···
816 816 return 0;
817 817 
818 818 err_out:
819 - pm_runtime_mark_last_busy(sd->dev);
820 819 pm_runtime_put_sync_autosuspend(sd->dev);
821 820 
822 821 return err;
···
837 838 if (err)
838 839 return err;
839 840 
840 - pm_runtime_mark_last_busy(sd->dev);
841 841 pm_runtime_put_sync_autosuspend(sd->dev);
842 842 
843 843 return v4l2_subdev_call(src, video, s_stream, 0);
···
1014 1016 err = tc358746_read(tc358746, reg->reg, &val);
1015 1017 reg->val = val;
1016 1018 
1017 - pm_runtime_mark_last_busy(sd->dev);
1018 1019 pm_runtime_put_sync_autosuspend(sd->dev);
1019 1020 
1020 1021 return err;
···
1029 1032 
1030 1033 tc358746_write(tc358746, (u32)reg->reg, (u32)reg->val);
1031 1034 
1032 - pm_runtime_mark_last_busy(sd->dev);
1033 1035 pm_runtime_put_sync_autosuspend(sd->dev);
1034 1036 
1035 1037 return 0;
···
1391 1395 }
1392 1396 
1393 1397 err = tc358746_read(tc358746, CHIPID_REG, &val);
1394 - pm_runtime_mark_last_busy(dev);
1395 1398 pm_runtime_put_sync_autosuspend(dev);
1396 1399 if (err)
1397 1400 return -ENODEV;
-4
drivers/media/i2c/thp7312.c
···
808 808 if (!enable) {
809 809 thp7312_stream_enable(thp7312, false);
810 810 
811 - pm_runtime_mark_last_busy(thp7312->dev);
812 811 pm_runtime_put_autosuspend(thp7312->dev);
813 812 
814 813 v4l2_subdev_unlock_state(sd_state);
···
838 839 goto finish_unlock;
839 840 
840 841 finish_pm:
841 - pm_runtime_mark_last_busy(thp7312->dev);
842 842 pm_runtime_put_autosuspend(thp7312->dev);
843 843 finish_unlock:
844 844 v4l2_subdev_unlock_state(sd_state);
···
1145 1147 break;
1146 1148 }
1147 1149 
1148 - pm_runtime_mark_last_busy(thp7312->dev);
1149 1150 pm_runtime_put_autosuspend(thp7312->dev);
1150 1151 
1151 1152 return ret;
···
2180 2183 * Decrease the PM usage count. The device will get suspended after the
2181 2184 * autosuspend delay, turning the power off.
2182 2185 */
2183 - pm_runtime_mark_last_busy(dev);
2184 2186 pm_runtime_put_autosuspend(dev);
2185 2187 
2186 2188 dev_info(dev, "THP7312 firmware version %02u.%02u\n",
-4
drivers/media/i2c/vd55g1.c
···
1104 1104 
1105 1105 vd55g1_grab_ctrls(sensor, false);
1106 1106 
1107 - pm_runtime_mark_last_busy(sensor->dev);
1108 1107 pm_runtime_put_autosuspend(sensor->dev);
1109 1108 
1110 1109 return ret;
···
1337 1338 break;
1338 1339 }
1339 1340 
1340 - pm_runtime_mark_last_busy(sensor->dev);
1341 1341 pm_runtime_put_autosuspend(sensor->dev);
1342 1342 
1343 1343 return ret;
···
1431 1433 break;
1432 1434 }
1433 1435 
1434 - pm_runtime_mark_last_busy(sensor->dev);
1435 1436 pm_runtime_put_autosuspend(sensor->dev);
1436 1437 
1437 1438 return ret;
···
1892 1895 pm_runtime_enable(dev);
1893 1896 pm_runtime_set_autosuspend_delay(dev, 4000);
1894 1897 pm_runtime_use_autosuspend(dev);
1895 - pm_runtime_mark_last_busy(dev);
1896 1898 pm_runtime_put_autosuspend(dev);
1897 1899 
1898 1900 ret = vd55g1_subdev_init(sensor);
-4
drivers/media/i2c/vd56g3.c
···
493 493 break;
494 494 }
495 495 
496 - pm_runtime_mark_last_busy(sensor->dev);
497 496 pm_runtime_put_autosuspend(sensor->dev);
498 497 
499 498 return ret;
···
576 577 break;
577 578 }
578 579 
579 - pm_runtime_mark_last_busy(sensor->dev);
580 580 pm_runtime_put_autosuspend(sensor->dev);
581 581 
582 582 return ret;
···
1019 1021 __v4l2_ctrl_grab(sensor->vflip_ctrl, false);
1020 1022 __v4l2_ctrl_grab(sensor->patgen_ctrl, false);
1021 1023 
1022 - pm_runtime_mark_last_busy(sensor->dev);
1023 1024 pm_runtime_put_autosuspend(sensor->dev);
1024 1025 
1025 1026 return ret;
···
1524 1527 }
1525 1528 
1526 1529 /* Sensor could now be powered off (after the autosuspend delay) */
1527 - pm_runtime_mark_last_busy(dev);
1528 1530 pm_runtime_put_autosuspend(dev);
1529 1531 
1530 1532 dev_dbg(dev, "Successfully probe %s sensor\n",
-4
drivers/media/i2c/video-i2c.c
···
288 288 return tmp;
289 289 
290 290 tmp = regmap_bulk_read(data->regmap, AMG88XX_REG_TTHL, &buf, 2);
291 - pm_runtime_mark_last_busy(regmap_get_device(data->regmap));
292 291 pm_runtime_put_autosuspend(regmap_get_device(data->regmap));
293 292 if (tmp)
294 293 return tmp;
···
526 527 return 0;
527 528 
528 529 error_rpm_put:
529 - pm_runtime_mark_last_busy(dev);
530 530 pm_runtime_put_autosuspend(dev);
531 531 error_del_list:
532 532 video_i2c_del_list(vq, VB2_BUF_STATE_QUEUED);
···
542 544 
543 545 kthread_stop(data->kthread_vid_cap);
544 546 data->kthread_vid_cap = NULL;
545 - pm_runtime_mark_last_busy(regmap_get_device(data->regmap));
546 547 pm_runtime_put_autosuspend(regmap_get_device(data->regmap));
547 548 
548 549 video_i2c_del_list(vq, VB2_BUF_STATE_ERROR);
···
850 853 if (ret < 0)
851 854 goto error_pm_disable;
852 855 
853 - pm_runtime_mark_last_busy(&client->dev);
854 856 pm_runtime_put_autosuspend(&client->dev);
855 857 
856 858 return 0;
-4
drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
···
451 451 if (q_status.report_queue_count == 0 &&
452 452 (q_status.instance_queue_count == 0 || dec_info.sequence_changed)) {
453 453 dev_dbg(inst->dev->dev, "%s: finishing job.\n", __func__);
454 - pm_runtime_mark_last_busy(inst->dev->dev);
455 454 pm_runtime_put_autosuspend(inst->dev->dev);
456 455 v4l2_m2m_job_finish(inst->v4l2_m2m_dev, m2m_ctx);
457 456 }
···
1363 1364 }
1364 1365 
1365 1366 }
1366 - pm_runtime_mark_last_busy(inst->dev->dev);
1367 1367 pm_runtime_put_autosuspend(inst->dev->dev);
1368 1368 return ret;
1369 1369 
···
1496 1498 else
1497 1499 streamoff_capture(q);
1498 1500 
1499 - pm_runtime_mark_last_busy(inst->dev->dev);
1500 1501 pm_runtime_put_autosuspend(inst->dev->dev);
1501 1502 }
1502 1503 
···
1659 1662 
1660 1663 finish_job_and_return:
1661 1664 dev_dbg(inst->dev->dev, "%s: leave and finish job", __func__);
1662 - pm_runtime_mark_last_busy(inst->dev->dev);
1663 1665 pm_runtime_put_autosuspend(inst->dev->dev);
1664 1666 v4l2_m2m_job_finish(inst->v4l2_m2m_dev, m2m_ctx);
1665 1667 }
-5
drivers/media/platform/chips-media/wave5/wave5-vpu-enc.c
···
1391 1391 if (ret)
1392 1392 goto return_buffers;
1393 1393 
1394 - pm_runtime_mark_last_busy(inst->dev->dev);
1395 1394 pm_runtime_put_autosuspend(inst->dev->dev);
1396 1395 return 0;
1397 1396 return_buffers:
1398 1397 wave5_return_bufs(q, VB2_BUF_STATE_QUEUED);
1399 - pm_runtime_mark_last_busy(inst->dev->dev);
1400 1398 pm_runtime_put_autosuspend(inst->dev->dev);
1401 1399 return ret;
1402 1400 }
···
1463 1465 else
1464 1466 streamoff_capture(inst, q);
1465 1467 
1466 - pm_runtime_mark_last_busy(inst->dev->dev);
1467 1468 pm_runtime_put_autosuspend(inst->dev->dev);
1468 1469 }
1469 1470 
···
1517 1520 break;
1518 1521 }
1519 1522 dev_dbg(inst->dev->dev, "%s: leave with active job", __func__);
1520 - pm_runtime_mark_last_busy(inst->dev->dev);
1521 1523 pm_runtime_put_autosuspend(inst->dev->dev);
1522 1524 return;
1523 1525 default:
···
1525 1529 break;
1526 1530 }
1527 1531 dev_dbg(inst->dev->dev, "%s: leave and finish job", __func__);
1528 - pm_runtime_mark_last_busy(inst->dev->dev);
1529 1532 pm_runtime_put_autosuspend(inst->dev->dev);
1530 1533 v4l2_m2m_job_finish(inst->v4l2_m2m_dev, m2m_ctx);
1531 1534 }
-2
drivers/media/platform/nvidia/tegra-vde/h264.c
···
585 585 return 0;
586 586 
587 587 put_runtime_pm:
588 - pm_runtime_mark_last_busy(dev);
589 588 pm_runtime_put_autosuspend(dev);
590 589 
591 590 unlock:
···
611 612 if (err)
612 613 dev_err(dev, "DEC end: Failed to assert HW reset: %d\n", err);
613 614 
614 - pm_runtime_mark_last_busy(dev);
615 615 pm_runtime_put_autosuspend(dev);
616 616 
617 617 mutex_unlock(&vde->lock);
-1
drivers/media/platform/qcom/iris/iris_hfi_queue.c
···
142 142 }
143 143 mutex_unlock(&core->lock);
144 144 
145 - pm_runtime_mark_last_busy(core->dev);
146 145 pm_runtime_put_autosuspend(core->dev);
147 146 
148 147 return 0;
-2
drivers/media/platform/raspberrypi/pisp_be/pisp_be.c
···
950 950 kfree(job);
951 951 }
952 952 
953 - pm_runtime_mark_last_busy(pispbe->dev);
954 953 pm_runtime_put_autosuspend(pispbe->dev);
955 954 
956 955 dev_dbg(pispbe->dev, "Nodes streaming now 0x%x\n",
···
1741 1742 if (ret)
1742 1743 goto disable_devs_err;
1743 1744 
1744 - pm_runtime_mark_last_busy(pispbe->dev);
1745 1745 pm_runtime_put_autosuspend(pispbe->dev);
1746 1746 
1747 1747 return 0;
+9 -8
drivers/media/platform/rockchip/rkvdec/rkvdec.c
···
765 765 {
766 766 struct rkvdec_dev *rkvdec = ctx->dev;
767 767 
768 - pm_runtime_mark_last_busy(rkvdec->dev);
769 768 pm_runtime_put_autosuspend(rkvdec->dev);
770 769 rkvdec_job_finish_no_pm(ctx, result);
771 770 }
···
1158 1159 return ret;
1159 1160 }
1160 1161 
1161 - if (iommu_get_domain_for_dev(&pdev->dev)) {
1162 - rkvdec->empty_domain = iommu_paging_domain_alloc(rkvdec->dev);
1163 -
1164 - if (!rkvdec->empty_domain)
1165 - dev_warn(rkvdec->dev, "cannot alloc new empty domain\n");
1166 - }
1167 -
1168 1162 vb2_dma_contig_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32));
1169 1163 
1170 1164 irq = platform_get_irq(pdev, 0);
···
1179 1187 ret = rkvdec_v4l2_init(rkvdec);
1180 1188 if (ret)
1181 1189 goto err_disable_runtime_pm;
1190 +
1191 + if (iommu_get_domain_for_dev(&pdev->dev)) {
1192 + rkvdec->empty_domain = iommu_paging_domain_alloc(rkvdec->dev);
1193 +
1194 + if (IS_ERR(rkvdec->empty_domain)) {
1195 + rkvdec->empty_domain = NULL;
1196 + dev_warn(rkvdec->dev, "cannot alloc new empty domain\n");
1197 + }
1198 + }
1182 1199 
1183 1200 return 0;
1184 1201 
-1
drivers/media/platform/verisilicon/hantro_drv.c
···
89 89 struct hantro_ctx *ctx,
90 90 enum vb2_buffer_state result)
91 91 {
92 - pm_runtime_mark_last_busy(vpu->dev);
93 92 pm_runtime_put_autosuspend(vpu->dev);
94 93 
95 94 clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);
+1 -3
drivers/media/rc/gpio-ir-recv.c
···
48 48 if (val >= 0)
49 49 ir_raw_event_store_edge(gpio_dev->rcdev, val == 1);
50 50 
51 - if (pmdev) {
52 - pm_runtime_mark_last_busy(pmdev);
51 + if (pmdev)
53 52 pm_runtime_put_autosuspend(pmdev);
54 - }
55 53 
56 54 return IRQ_HANDLED;
57 55 }
-1
drivers/memstick/core/memstick.c
···
555 555 */
556 556 void memstick_remove_host(struct memstick_host *host)
557 557 {
558 - host->removing = 1;
559 558 flush_workqueue(workqueue);
560 559 mutex_lock(&host->lock);
561 560 if (host->card)
+1
drivers/memstick/host/rtsx_usb_ms.c
···
812 812 int err;
813 813 
814 814 host->eject = true;
815 + msh->removing = true;
815 816 cancel_work_sync(&host->handle_req);
816 817 cancel_delayed_work_sync(&host->poll_card);
817 818 
+31 -2
drivers/mmc/host/sdhci-of-arasan.c
···
99 99 #define HIWORD_UPDATE(val, mask, shift) \
100 100 ((val) << (shift) | (mask) << ((shift) + 16))
101 101 
102 + #define CD_STABLE_TIMEOUT_US 1000000
103 + #define CD_STABLE_MAX_SLEEP_US 10
104 +
102 105 /**
103 106 * struct sdhci_arasan_soc_ctl_field - Field used in sdhci_arasan_soc_ctl_map
104 107 *
···
209 206 * 19MHz instead
210 207 */
211 208 #define SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN BIT(2)
209 + /* Enable CD stable check before power-up */
210 + #define SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE BIT(3)
212 211 };
213 212 
214 213 struct sdhci_arasan_of_data {
215 214 const struct sdhci_arasan_soc_ctl_map *soc_ctl_map;
216 215 const struct sdhci_pltfm_data *pdata;
217 216 const struct sdhci_arasan_clk_ops *clk_ops;
217 + u32 quirks;
218 218 };
219 219 
220 220 static const struct sdhci_arasan_soc_ctl_map rk3399_soc_ctl_map = {
···
520 514 return -EINVAL;
521 515 }
522 516 
517 + static void sdhci_arasan_set_power_and_bus_voltage(struct sdhci_host *host, unsigned char mode,
518 + unsigned short vdd)
519 + {
520 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
521 + struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host);
522 + u32 reg;
523 +
524 + /*
525 + * Ensure that the card detect logic has stabilized before powering up, this is
526 + * necessary after a host controller reset.
527 + */
528 + if (mode == MMC_POWER_UP && sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE)
529 + read_poll_timeout(sdhci_readl, reg, reg & SDHCI_CD_STABLE, CD_STABLE_MAX_SLEEP_US,
530 + CD_STABLE_TIMEOUT_US, false, host, SDHCI_PRESENT_STATE);
531 +
532 + sdhci_set_power_and_bus_voltage(host, mode, vdd);
533 + }
534 +
523 535 static const struct sdhci_ops sdhci_arasan_ops = {
524 536 .set_clock = sdhci_arasan_set_clock,
525 537 .get_max_clock = sdhci_pltfm_clk_get_max_clock,
···
545 521 .set_bus_width = sdhci_set_bus_width,
546 522 .reset = sdhci_arasan_reset,
547 523 .set_uhs_signaling = sdhci_set_uhs_signaling,
548 - .set_power = sdhci_set_power_and_bus_voltage,
524 + .set_power = sdhci_arasan_set_power_and_bus_voltage,
549 525 .hw_reset = sdhci_arasan_hw_reset,
550 526 };
···
594 570 .set_bus_width = sdhci_set_bus_width,
595 571 .reset = sdhci_arasan_reset,
596 572 .set_uhs_signaling = sdhci_set_uhs_signaling,
597 - .set_power = sdhci_set_power_and_bus_voltage,
573 + .set_power = sdhci_arasan_set_power_and_bus_voltage,
598 574 .irq = sdhci_arasan_cqhci_irq,
599 575 };
···
1471 1447 static struct sdhci_arasan_of_data sdhci_arasan_zynqmp_data = {
1472 1448 .pdata = &sdhci_arasan_zynqmp_pdata,
1473 1449 .clk_ops = &zynqmp_clk_ops,
1450 + .quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE,
1474 1451 };
1475 1452 
1476 1453 static const struct sdhci_arasan_clk_ops versal_clk_ops = {
···
1482 1457 static struct sdhci_arasan_of_data sdhci_arasan_versal_data = {
1483 1458 .pdata = &sdhci_arasan_zynqmp_pdata,
1484 1459 .clk_ops = &versal_clk_ops,
1460 + .quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE,
1485 1461 };
1486 1462 
1487 1463 static const struct sdhci_arasan_clk_ops versal_net_clk_ops = {
···
1493 1467 static struct sdhci_arasan_of_data sdhci_arasan_versal_net_data = {
1494 1468 .pdata = &sdhci_arasan_versal_net_pdata,
1495 1469 .clk_ops = &versal_net_clk_ops,
1470 + .quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE,
1496 1471 };
1497 1472 
1498 1473 static struct sdhci_arasan_of_data intel_keembay_emmc_data = {
···
1963 1936 
1964 1937 if (of_device_is_compatible(np, "rockchip,rk3399-sdhci-5.1"))
1965 1938 sdhci_arasan_update_clockmultiplier(host, 0x0);
1939 +
1940 + sdhci_arasan->quirks |= data->quirks;
1966 1941 
1967 1942 if (of_device_is_compatible(np, "intel,keembay-sdhci-5.1-emmc") ||
1968 1943 of_device_is_compatible(np, "intel,keembay-sdhci-5.1-sd") ||
+21 -16
drivers/mmc/host/sdhci-pci-gli.c
···
287 287 #define GLI_MAX_TUNING_LOOP 40
288 288 
289 289 /* Genesys Logic chipset */
290 + static void sdhci_gli_mask_replay_timer_timeout(struct pci_dev *pdev)
291 + {
292 + int aer;
293 + u32 value;
294 +
295 + /* mask the replay timer timeout of AER */
296 + aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
297 + if (aer) {
298 + pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value);
299 + value |= PCI_ERR_COR_REP_TIMER;
300 + pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value);
301 + }
302 + }
303 +
290 304 static inline void gl9750_wt_on(struct sdhci_host *host)
291 305 {
292 306 u32 wt_value;
···
621 607 {
622 608 struct sdhci_pci_slot *slot = sdhci_priv(host);
623 609 struct pci_dev *pdev;
624 - int aer;
625 610 u32 value;
626 611 
627 612 pdev = slot->chip->pdev;
···
639 626 pci_set_power_state(pdev, PCI_D0);
640 627 
641 628 /* mask the replay timer timeout of AER */
642 - aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
643 - if (aer) {
644 - pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value);
645 - value |= PCI_ERR_COR_REP_TIMER;
646 - pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value);
647 - }
629 + sdhci_gli_mask_replay_timer_timeout(pdev);
648 630 
649 631 gl9750_wt_off(host);
650 632 }
···
814 806 static void gl9755_hw_setting(struct sdhci_pci_slot *slot)
815 807 {
816 808 struct pci_dev *pdev = slot->chip->pdev;
817 - int aer;
818 809 u32 value;
819 810 
820 811 gl9755_wt_on(pdev);
···
848 841 pci_set_power_state(pdev, PCI_D0);
849 842 
850 843 /* mask the replay timer timeout of AER */
851 - aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
852 - if (aer) {
853 - pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value);
854 - value |= PCI_ERR_COR_REP_TIMER;
855 - pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value);
856 - }
844 + sdhci_gli_mask_replay_timer_timeout(pdev);
857 845 
858 846 gl9755_wt_off(pdev);
859 847 }
···
1753 1751 return ret;
1754 1752 }
1755 1753 
1756 - static void gli_set_gl9763e(struct sdhci_pci_slot *slot)
1754 + static void gl9763e_hw_setting(struct sdhci_pci_slot *slot)
1757 1755 {
1758 1756 struct pci_dev *pdev = slot->chip->pdev;
1759 1757 u32 value;
···
1781 1779 value &= ~GLI_9763E_HS400_RXDLY;
1782 1780 value |= FIELD_PREP(GLI_9763E_HS400_RXDLY, GLI_9763E_HS400_RXDLY_5);
1783 1781 pci_write_config_dword(pdev, PCIE_GLI_9763E_CLKRXDLY, value);
1782 +
1783 + /* mask the replay timer timeout of AER */
1784 + sdhci_gli_mask_replay_timer_timeout(pdev);
1784 1785 
1785 1786 pci_read_config_dword(pdev, PCIE_GLI_9763E_VHS, &value);
1786 1787 value &= ~GLI_9763E_VHS_REV;
···
1928 1923 gli_pcie_enable_msi(slot);
1929 1924 host->mmc_host_ops.hs400_enhanced_strobe =
1930 1925 gl9763e_hs400_enhanced_strobe;
1931 - gli_set_gl9763e(slot);
1926 + gl9763e_hw_setting(slot);
1932 1927 sdhci_enable_v4_mode(host);
1933 1928 
1934 1929 return 0;
+18
drivers/mmc/host/sdhci_am654.c
···
156 156 
157 157 #define SDHCI_AM654_QUIRK_FORCE_CDTEST BIT(0)
158 158 #define SDHCI_AM654_QUIRK_SUPPRESS_V1P8_ENA BIT(1)
159 + #define SDHCI_AM654_QUIRK_DISABLE_HS400 BIT(2)
159 160 };
160 161 
161 162 struct window {
···
766 765 {
767 766 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
768 767 struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
768 + struct device *dev = mmc_dev(host->mmc);
769 769 u32 ctl_cfg_2 = 0;
770 770 u32 mask;
771 771 u32 val;
···
821 819 ret = sdhci_am654_get_otap_delay(host, sdhci_am654);
822 820 if (ret)
823 821 goto err_cleanup_host;
822 +
823 + if (sdhci_am654->quirks & SDHCI_AM654_QUIRK_DISABLE_HS400 &&
824 + host->mmc->caps2 & (MMC_CAP2_HS400 | MMC_CAP2_HS400_ES)) {
825 + dev_info(dev, "HS400 mode not supported on this silicon revision, disabling it\n");
826 + host->mmc->caps2 &= ~(MMC_CAP2_HS400 | MMC_CAP2_HS400_ES);
827 + }
824 828 
825 829 ret = __sdhci_add_host(host);
826 830 if (ret)
···
890 882 
891 883 return 0;
892 884 }
885 +
886 + static const struct soc_device_attribute sdhci_am654_descope_hs400[] = {
887 + { .family = "AM62PX", .revision = "SR1.0" },
888 + { .family = "AM62PX", .revision = "SR1.1" },
889 + { /* sentinel */ }
890 + };
893 891 
894 892 static const struct of_device_id sdhci_am654_of_match[] = {
895 893 {
···
983 969 ret = mmc_of_parse(host->mmc);
984 970 if (ret)
985 971 return dev_err_probe(dev, ret, "parsing dt failed\n");
972 +
973 + soc = soc_device_match(sdhci_am654_descope_hs400);
974 + if (soc)
975 + sdhci_am654->quirks |= SDHCI_AM654_QUIRK_DISABLE_HS400;
986 976 
987 977 host->mmc_host_ops.start_signal_voltage_switch = sdhci_am654_start_signal_voltage_switch;
988 978 host->mmc_host_ops.execute_tuning = sdhci_am654_execute_tuning;
+1 -1
drivers/most/core.c
···
538 538 dev = bus_find_device_by_name(&mostbus, NULL, mdev);
539 539 if (!dev)
540 540 return NULL;
541 - put_device(dev);
542 541 iface = dev_get_drvdata(dev);
542 + put_device(dev);
543 543 list_for_each_entry_safe(c, tmp, &iface->p->channel_list, list) {
544 544 if (!strcmp(dev_name(&c->dev), mdev_ch))
545 545 return c;
+30 -6
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
8022 8022 }
8023 8023 rx_rings = min_t(int, rx_rings, hwr.grp);
8024 8024 hwr.cp = min_t(int, hwr.cp, bp->cp_nr_rings);
8025 - if (hwr.stat > bnxt_get_ulp_stat_ctxs(bp))
8025 + if (bnxt_ulp_registered(bp->edev) &&
8026 + hwr.stat > bnxt_get_ulp_stat_ctxs(bp))
8026 8027 hwr.stat -= bnxt_get_ulp_stat_ctxs(bp);
8027 8028 hwr.cp = min_t(int, hwr.cp, hwr.stat);
8028 8029 rc = bnxt_trim_rings(bp, &rx_rings, &hwr.tx, hwr.cp, sh);
···
8031 8030 hwr.rx = rx_rings << 1;
8032 8031 tx_cp = bnxt_num_tx_to_cp(bp, hwr.tx);
8033 8032 hwr.cp = sh ? max_t(int, tx_cp, rx_rings) : tx_cp + rx_rings;
8033 + if (hwr.tx != bp->tx_nr_rings) {
8034 + netdev_warn(bp->dev,
8035 + "Able to reserve only %d out of %d requested TX rings\n",
8036 + hwr.tx, bp->tx_nr_rings);
8037 + }
8034 8038 bp->tx_nr_rings = hwr.tx;
8035 8039 
8036 8040 /* If we cannot reserve all the RX rings, reset the RSS map only
···
12863 12857 return rc;
12864 12858 }
12865 12859 
12860 + static int bnxt_tx_nr_rings(struct bnxt *bp)
12861 + {
12862 + return bp->num_tc ? bp->tx_nr_rings_per_tc * bp->num_tc :
12863 + bp->tx_nr_rings_per_tc;
12864 + }
12865 +
12866 + static int bnxt_tx_nr_rings_per_tc(struct bnxt *bp)
12867 + {
12868 + return bp->num_tc ? bp->tx_nr_rings / bp->num_tc : bp->tx_nr_rings;
12869 + }
12870 +
12866 12871 static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
12867 12872 {
12868 12873 int rc = 0;
···
12891 12874 if (rc)
12892 12875 return rc;
12893 12876 
12877 + /* Make adjustments if reserved TX rings are less than requested */
12878 + bp->tx_nr_rings -= bp->tx_nr_rings_xdp;
12879 + bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp);
12880 + if (bp->tx_nr_rings_xdp) {
12881 + bp->tx_nr_rings_xdp = bp->tx_nr_rings_per_tc;
12882 + bp->tx_nr_rings += bp->tx_nr_rings_xdp;
12883 + }
12894 12884 rc = bnxt_alloc_mem(bp, irq_re_init);
12895 12885 if (rc) {
12896 12886 netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc);
···
16353 16329 bp->cp_nr_rings = min_t(int, bp->tx_nr_rings_per_tc, bp->rx_nr_rings);
16354 16330 bp->rx_nr_rings = bp->cp_nr_rings;
16355 16331 bp->tx_nr_rings_per_tc = bp->cp_nr_rings;
16356 - bp->tx_nr_rings = bp->tx_nr_rings_per_tc;
16332 + bp->tx_nr_rings = bnxt_tx_nr_rings(bp);
16357 16333 }
16358 16334 
16359 16335 static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
···
16385 16361 bnxt_trim_dflt_sh_rings(bp);
16386 16362 else
16387 16363 bp->cp_nr_rings = bp->tx_nr_rings_per_tc + bp->rx_nr_rings;
16388 - bp->tx_nr_rings = bp->tx_nr_rings_per_tc;
16364 + bp->tx_nr_rings = bnxt_tx_nr_rings(bp);
16389 16365 
16390 16366 avail_msix = bnxt_get_max_func_irqs(bp) - bp->cp_nr_rings;
16391 16367 if (avail_msix >= BNXT_MIN_ROCE_CP_RINGS) {
···
16398 16374 rc = __bnxt_reserve_rings(bp);
16399 16375 if (rc && rc != -ENODEV)
16400 16376 netdev_warn(bp->dev, "Unable to reserve tx rings\n");
16401 - bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
16377 + bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp);
16402 16378 if (sh)
16403 16379 bnxt_trim_dflt_sh_rings(bp);
···
16407 16383 rc = __bnxt_reserve_rings(bp);
16408 16384 if (rc && rc != -ENODEV)
16409 16385 netdev_warn(bp->dev, "2nd rings reservation failed.\n");
16410 - bp->tx_nr_rings_per_tc = 
bp->tx_nr_rings; 16386 + bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp); 16411 16387 } 16412 16388 if (BNXT_CHIP_TYPE_NITRO_A0(bp)) { 16413 16389 bp->rx_nr_rings++; ··· 16441 16417 if (rc) 16442 16418 goto init_dflt_ring_err; 16443 16419 16444 - bp->tx_nr_rings_per_tc = bp->tx_nr_rings; 16420 + bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp); 16445 16421 16446 16422 bnxt_set_dflt_rfs(bp); 16447 16423
+4 -7
drivers/net/ethernet/cadence/macb_main.c
··· 3091 3091 /* Add GEM_OCTTXH, GEM_OCTRXH */ 3092 3092 val = bp->macb_reg_readl(bp, offset + 4); 3093 3093 bp->ethtool_stats[i] += ((u64)val) << 32; 3094 - *(p++) += ((u64)val) << 32; 3094 + *p += ((u64)val) << 32; 3095 3095 } 3096 3096 } 3097 3097 ··· 5636 5636 5637 5637 if (dev) { 5638 5638 bp = netdev_priv(dev); 5639 + unregister_netdev(dev); 5639 5640 phy_exit(bp->sgmii_phy); 5640 5641 mdiobus_unregister(bp->mii_bus); 5641 5642 mdiobus_free(bp->mii_bus); 5642 5643 5643 - unregister_netdev(dev); 5644 + device_set_wakeup_enable(&bp->pdev->dev, 0); 5644 5645 cancel_work_sync(&bp->hresp_err_bh_work); 5645 5646 pm_runtime_disable(&pdev->dev); 5646 5647 pm_runtime_dont_use_autosuspend(&pdev->dev); 5647 - if (!pm_runtime_suspended(&pdev->dev)) { 5648 - macb_clks_disable(bp->pclk, bp->hclk, bp->tx_clk, 5649 - bp->rx_clk, bp->tsu_clk); 5650 - pm_runtime_set_suspended(&pdev->dev); 5651 - } 5648 + pm_runtime_set_suspended(&pdev->dev); 5652 5649 phylink_destroy(bp->phylink); 5653 5650 free_netdev(dev); 5654 5651 }
+1 -1
drivers/net/ethernet/dlink/dl2k.c
··· 1099 1099 dev->stats.rx_bytes += dr32(OctetRcvOk); 1100 1100 dev->stats.tx_bytes += dr32(OctetXmtOk); 1101 1101 1102 - dev->stats.multicast = dr32(McstFramesRcvdOk); 1102 + dev->stats.multicast += dr32(McstFramesRcvdOk); 1103 1103 dev->stats.collisions += dr32(SingleColFrames) 1104 1104 + dr32(MultiColFrames); 1105 1105
+1
drivers/net/ethernet/intel/ice/ice.h
··· 511 511 ICE_FLAG_LINK_LENIENT_MODE_ENA, 512 512 ICE_FLAG_PLUG_AUX_DEV, 513 513 ICE_FLAG_UNPLUG_AUX_DEV, 514 + ICE_FLAG_AUX_DEV_CREATED, 514 515 ICE_FLAG_MTU_CHANGED, 515 516 ICE_FLAG_GNSS, /* GNSS successfully initialized */ 516 517 ICE_FLAG_DPLL, /* SyncE/PTP dplls initialized */
+38 -11
drivers/net/ethernet/intel/ice/ice_adapter.c
··· 13 13 static DEFINE_XARRAY(ice_adapters); 14 14 static DEFINE_MUTEX(ice_adapters_mutex); 15 15 16 - static unsigned long ice_adapter_index(u64 dsn) 16 + #define ICE_ADAPTER_FIXED_INDEX BIT_ULL(63) 17 + 18 + #define ICE_ADAPTER_INDEX_E825C \ 19 + (ICE_DEV_ID_E825C_BACKPLANE | ICE_ADAPTER_FIXED_INDEX) 20 + 21 + static u64 ice_adapter_index(struct pci_dev *pdev) 17 22 { 23 + switch (pdev->device) { 24 + case ICE_DEV_ID_E825C_BACKPLANE: 25 + case ICE_DEV_ID_E825C_QSFP: 26 + case ICE_DEV_ID_E825C_SFP: 27 + case ICE_DEV_ID_E825C_SGMII: 28 + /* E825C devices have multiple NACs which are connected to the 29 + * same clock source, and which must share the same 30 + * ice_adapter structure. We can't use the serial number since 31 + * each NAC has its own NVM generated with its own unique 32 + * Device Serial Number. Instead, rely on the embedded nature 33 + * of the E825C devices, and use a fixed index. This relies on 34 + * the fact that all E825C physical functions in a given 35 + * system are part of the same overall device. 
36 + */ 37 + return ICE_ADAPTER_INDEX_E825C; 38 + default: 39 + return pci_get_dsn(pdev) & ~ICE_ADAPTER_FIXED_INDEX; 40 + } 41 + } 42 + 43 + static unsigned long ice_adapter_xa_index(struct pci_dev *pdev) 44 + { 45 + u64 index = ice_adapter_index(pdev); 46 + 18 47 #if BITS_PER_LONG == 64 19 - return dsn; 48 + return index; 20 49 #else 21 - return (u32)dsn ^ (u32)(dsn >> 32); 50 + return (u32)index ^ (u32)(index >> 32); 22 51 #endif 23 52 } 24 53 25 - static struct ice_adapter *ice_adapter_new(u64 dsn) 54 + static struct ice_adapter *ice_adapter_new(struct pci_dev *pdev) 26 55 { 27 56 struct ice_adapter *adapter; 28 57 ··· 59 30 if (!adapter) 60 31 return NULL; 61 32 62 - adapter->device_serial_number = dsn; 33 + adapter->index = ice_adapter_index(pdev); 63 34 spin_lock_init(&adapter->ptp_gltsyn_time_lock); 64 35 spin_lock_init(&adapter->txq_ctx_lock); 65 36 refcount_set(&adapter->refcount, 1); ··· 93 64 */ 94 65 struct ice_adapter *ice_adapter_get(struct pci_dev *pdev) 95 66 { 96 - u64 dsn = pci_get_dsn(pdev); 97 67 struct ice_adapter *adapter; 98 68 unsigned long index; 99 69 int err; 100 70 101 - index = ice_adapter_index(dsn); 71 + index = ice_adapter_xa_index(pdev); 102 72 scoped_guard(mutex, &ice_adapters_mutex) { 103 73 err = xa_insert(&ice_adapters, index, NULL, GFP_KERNEL); 104 74 if (err == -EBUSY) { 105 75 adapter = xa_load(&ice_adapters, index); 106 76 refcount_inc(&adapter->refcount); 107 - WARN_ON_ONCE(adapter->device_serial_number != dsn); 77 + WARN_ON_ONCE(adapter->index != ice_adapter_index(pdev)); 108 78 return adapter; 109 79 } 110 80 if (err) 111 81 return ERR_PTR(err); 112 82 113 - adapter = ice_adapter_new(dsn); 83 + adapter = ice_adapter_new(pdev); 114 84 if (!adapter) 115 85 return ERR_PTR(-ENOMEM); 116 86 xa_store(&ice_adapters, index, adapter, GFP_KERNEL); ··· 128 100 */ 129 101 void ice_adapter_put(struct pci_dev *pdev) 130 102 { 131 - u64 dsn = pci_get_dsn(pdev); 132 103 struct ice_adapter *adapter; 133 104 unsigned long index; 134 105 
135 - index = ice_adapter_index(dsn); 106 + index = ice_adapter_xa_index(pdev); 136 107 scoped_guard(mutex, &ice_adapters_mutex) { 137 108 adapter = xa_load(&ice_adapters, index); 138 109 if (WARN_ON(!adapter))
+2 -2
drivers/net/ethernet/intel/ice/ice_adapter.h
··· 33 33 * @txq_ctx_lock: Spinlock protecting access to the GLCOMM_QTX_CNTX_CTL register 34 34 * @ctrl_pf: Control PF of the adapter 35 35 * @ports: Ports list 36 - * @device_serial_number: DSN cached for collision detection on 32bit systems 36 + * @index: 64-bit index cached for collision detection on 32bit systems 37 37 */ 38 38 struct ice_adapter { 39 39 refcount_t refcount; ··· 44 44 45 45 struct ice_pf *ctrl_pf; 46 46 struct ice_port_list ports; 47 - u64 device_serial_number; 47 + u64 index; 48 48 }; 49 49 50 50 struct ice_adapter *ice_adapter_get(struct pci_dev *pdev);
+32 -12
drivers/net/ethernet/intel/ice/ice_ddp.c
··· 2377 2377 * The function will apply the new Tx topology from the package buffer 2378 2378 * if available. 2379 2379 * 2380 - * Return: zero when update was successful, negative values otherwise. 2380 + * Return: 2381 + * * 0 - Successfully applied topology configuration. 2382 + * * -EBUSY - Failed to acquire global configuration lock. 2383 + * * -EEXIST - Topology configuration has already been applied. 2384 + * * -EIO - Unable to apply topology configuration. 2385 + * * -ENODEV - Failed to re-initialize device after applying configuration. 2386 + * * Other negative error codes indicate unexpected failures. 2381 2387 */ 2382 2388 int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len) 2383 2389 { ··· 2416 2410 2417 2411 if (status) { 2418 2412 ice_debug(hw, ICE_DBG_INIT, "Get current topology is failed\n"); 2419 - return status; 2413 + return -EIO; 2420 2414 } 2421 2415 2422 2416 /* Is default topology already applied ? */ ··· 2503 2497 ICE_GLOBAL_CFG_LOCK_TIMEOUT); 2504 2498 if (status) { 2505 2499 ice_debug(hw, ICE_DBG_INIT, "Failed to acquire global lock\n"); 2506 - return status; 2500 + return -EBUSY; 2507 2501 } 2508 2502 2509 2503 /* Check if reset was triggered already. */ 2510 2504 reg = rd32(hw, GLGEN_RSTAT); 2511 2505 if (reg & GLGEN_RSTAT_DEVSTATE_M) { 2512 - /* Reset is in progress, re-init the HW again */ 2513 2506 ice_debug(hw, ICE_DBG_INIT, "Reset is in progress. 
Layer topology might be applied already\n"); 2514 2507 ice_check_reset(hw); 2515 - return 0; 2508 + /* Reset is in progress, re-init the HW again */ 2509 + goto reinit_hw; 2516 2510 } 2517 2511 2518 2512 /* Set new topology */ 2519 2513 status = ice_get_set_tx_topo(hw, new_topo, size, NULL, NULL, true); 2520 2514 if (status) { 2521 - ice_debug(hw, ICE_DBG_INIT, "Failed setting Tx topology\n"); 2522 - return status; 2515 + ice_debug(hw, ICE_DBG_INIT, "Failed to set Tx topology, status %pe\n", 2516 + ERR_PTR(status)); 2517 + /* only report -EIO here as the caller checks the error value 2518 + * and reports an informational error message informing that 2519 + * the driver failed to program Tx topology. 2520 + */ 2521 + status = -EIO; 2523 2522 } 2524 2523 2525 - /* New topology is updated, delay 1 second before issuing the CORER */ 2524 + /* Even if Tx topology config failed, we need to CORE reset here to 2525 + * clear the global configuration lock. Delay 1 second to allow 2526 + * hardware to settle then issue a CORER 2527 + */ 2526 2528 msleep(1000); 2527 2529 ice_reset(hw, ICE_RESET_CORER); 2528 - /* CORER will clear the global lock, so no explicit call 2529 - * required for release. 2530 - */ 2530 + ice_check_reset(hw); 2531 2531 2532 - return 0; 2532 + reinit_hw: 2533 + /* Since we triggered a CORER, re-initialize hardware */ 2534 + ice_deinit_hw(hw); 2535 + if (ice_init_hw(hw)) { 2536 + ice_debug(hw, ICE_DBG_INIT, "Failed to re-init hardware after setting Tx topology\n"); 2537 + return -ENODEV; 2538 + } 2539 + 2540 + return status; 2533 2541 }
+6 -4
drivers/net/ethernet/intel/ice/ice_idc.c
··· 336 336 mutex_lock(&pf->adev_mutex); 337 337 cdev->adev = adev; 338 338 mutex_unlock(&pf->adev_mutex); 339 + set_bit(ICE_FLAG_AUX_DEV_CREATED, pf->flags); 339 340 340 341 return 0; 341 342 } ··· 348 347 { 349 348 struct auxiliary_device *adev; 350 349 350 + if (!test_and_clear_bit(ICE_FLAG_AUX_DEV_CREATED, pf->flags)) 351 + return; 352 + 351 353 mutex_lock(&pf->adev_mutex); 352 354 adev = pf->cdev_info->adev; 353 355 pf->cdev_info->adev = NULL; 354 356 mutex_unlock(&pf->adev_mutex); 355 357 356 - if (adev) { 357 - auxiliary_device_delete(adev); 358 - auxiliary_device_uninit(adev); 359 - } 358 + auxiliary_device_delete(adev); 359 + auxiliary_device_uninit(adev); 360 360 } 361 361 362 362 /**
+11 -5
drivers/net/ethernet/intel/ice/ice_main.c
··· 4536 4536 dev_info(dev, "Tx scheduling layers switching feature disabled\n"); 4537 4537 else 4538 4538 dev_info(dev, "Tx scheduling layers switching feature enabled\n"); 4539 - /* if there was a change in topology ice_cfg_tx_topo triggered 4540 - * a CORER and we need to re-init hw 4539 + return 0; 4540 + } else if (err == -ENODEV) { 4541 + /* If we failed to re-initialize the device, we can no longer 4542 + * continue loading. 4541 4543 */ 4542 - ice_deinit_hw(hw); 4543 - err = ice_init_hw(hw); 4544 - 4544 + dev_warn(dev, "Failed to initialize hardware after applying Tx scheduling configuration.\n"); 4545 4545 return err; 4546 4546 } else if (err == -EIO) { 4547 4547 dev_info(dev, "DDP package does not support Tx scheduling layers switching feature - please update to the latest DDP package and try again\n"); 4548 + return 0; 4549 + } else if (err == -EEXIST) { 4550 + return 0; 4548 4551 } 4549 4552 4553 + /* Do not treat this as a fatal error. */ 4554 + dev_info(dev, "Failed to apply Tx scheduling configuration, err %pe\n", 4555 + ERR_PTR(err)); 4550 4556 return 0; 4551 4557 } 4552 4558
+1 -1
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 1352 1352 skb = ice_construct_skb(rx_ring, xdp); 1353 1353 /* exit if we failed to retrieve a buffer */ 1354 1354 if (!skb) { 1355 - rx_ring->ring_stats->rx_stats.alloc_page_failed++; 1355 + rx_ring->ring_stats->rx_stats.alloc_buf_failed++; 1356 1356 xdp_verdict = ICE_XDP_CONSUMED; 1357 1357 } 1358 1358 ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
+57 -4
drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
··· 180 180 } 181 181 182 182 /** 183 + * idpf_tx_singleq_dma_map_error - handle TX DMA map errors 184 + * @txq: queue to send buffer on 185 + * @skb: send buffer 186 + * @first: original first buffer info buffer for packet 187 + * @idx: starting point on ring to unwind 188 + */ 189 + static void idpf_tx_singleq_dma_map_error(struct idpf_tx_queue *txq, 190 + struct sk_buff *skb, 191 + struct idpf_tx_buf *first, u16 idx) 192 + { 193 + struct libeth_sq_napi_stats ss = { }; 194 + struct libeth_cq_pp cp = { 195 + .dev = txq->dev, 196 + .ss = &ss, 197 + }; 198 + 199 + u64_stats_update_begin(&txq->stats_sync); 200 + u64_stats_inc(&txq->q_stats.dma_map_errs); 201 + u64_stats_update_end(&txq->stats_sync); 202 + 203 + /* clear dma mappings for failed tx_buf map */ 204 + for (;;) { 205 + struct idpf_tx_buf *tx_buf; 206 + 207 + tx_buf = &txq->tx_buf[idx]; 208 + libeth_tx_complete(tx_buf, &cp); 209 + if (tx_buf == first) 210 + break; 211 + if (idx == 0) 212 + idx = txq->desc_count; 213 + idx--; 214 + } 215 + 216 + if (skb_is_gso(skb)) { 217 + union idpf_tx_flex_desc *tx_desc; 218 + 219 + /* If we failed a DMA mapping for a TSO packet, we will have 220 + * used one additional descriptor for a context 221 + * descriptor. Reset that here. 
222 + */ 223 + tx_desc = &txq->flex_tx[idx]; 224 + memset(tx_desc, 0, sizeof(*tx_desc)); 225 + if (idx == 0) 226 + idx = txq->desc_count; 227 + idx--; 228 + } 229 + 230 + /* Update tail in case netdev_xmit_more was previously true */ 231 + idpf_tx_buf_hw_update(txq, idx, false); 232 + } 233 + 234 + /** 183 235 * idpf_tx_singleq_map - Build the Tx base descriptor 184 236 * @tx_q: queue to send buffer on 185 237 * @first: first buffer info buffer to use ··· 271 219 for (frag = &skb_shinfo(skb)->frags[0];; frag++) { 272 220 unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED; 273 221 274 - if (dma_mapping_error(tx_q->dev, dma)) 275 - return idpf_tx_dma_map_error(tx_q, skb, first, i); 222 + if (unlikely(dma_mapping_error(tx_q->dev, dma))) 223 + return idpf_tx_singleq_dma_map_error(tx_q, skb, 224 + first, i); 276 225 277 226 /* record length, and DMA address */ 278 227 dma_unmap_len_set(tx_buf, len, size); ··· 415 362 { 416 363 struct idpf_tx_offload_params offload = { }; 417 364 struct idpf_tx_buf *first; 365 + u32 count, buf_count = 1; 418 366 int csum, tso, needed; 419 - unsigned int count; 420 367 __be16 protocol; 421 368 422 - count = idpf_tx_desc_count_required(tx_q, skb); 369 + count = idpf_tx_res_count_required(tx_q, skb, &buf_count); 423 370 if (unlikely(!count)) 424 371 return idpf_tx_drop_skb(tx_q, skb); 425 372
+281 -472
drivers/net/ethernet/intel/idpf/idpf_txrx.c
··· 8 8 #include "idpf_ptp.h" 9 9 #include "idpf_virtchnl.h" 10 10 11 - struct idpf_tx_stash { 12 - struct hlist_node hlist; 13 - struct libeth_sqe buf; 14 - }; 15 - 16 - #define idpf_tx_buf_compl_tag(buf) (*(u32 *)&(buf)->priv) 11 + #define idpf_tx_buf_next(buf) (*(u32 *)&(buf)->priv) 17 12 LIBETH_SQE_CHECK_PRIV(u32); 18 13 19 14 /** ··· 32 37 return false; 33 38 34 39 return true; 35 - } 36 - 37 - /** 38 - * idpf_buf_lifo_push - push a buffer pointer onto stack 39 - * @stack: pointer to stack struct 40 - * @buf: pointer to buf to push 41 - * 42 - * Returns 0 on success, negative on failure 43 - **/ 44 - static int idpf_buf_lifo_push(struct idpf_buf_lifo *stack, 45 - struct idpf_tx_stash *buf) 46 - { 47 - if (unlikely(stack->top == stack->size)) 48 - return -ENOSPC; 49 - 50 - stack->bufs[stack->top++] = buf; 51 - 52 - return 0; 53 - } 54 - 55 - /** 56 - * idpf_buf_lifo_pop - pop a buffer pointer from stack 57 - * @stack: pointer to stack struct 58 - **/ 59 - static struct idpf_tx_stash *idpf_buf_lifo_pop(struct idpf_buf_lifo *stack) 60 - { 61 - if (unlikely(!stack->top)) 62 - return NULL; 63 - 64 - return stack->bufs[--stack->top]; 65 40 } 66 41 67 42 /** ··· 62 97 static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq) 63 98 { 64 99 struct libeth_sq_napi_stats ss = { }; 65 - struct idpf_buf_lifo *buf_stack; 66 - struct idpf_tx_stash *stash; 67 100 struct libeth_cq_pp cp = { 68 101 .dev = txq->dev, 69 102 .ss = &ss, 70 103 }; 71 - struct hlist_node *tmp; 72 - u32 i, tag; 104 + u32 i; 73 105 74 106 /* Buffers already cleared, nothing to do */ 75 107 if (!txq->tx_buf) 76 108 return; 77 109 78 110 /* Free all the Tx buffer sk_buffs */ 79 - for (i = 0; i < txq->desc_count; i++) 111 + for (i = 0; i < txq->buf_pool_size; i++) 80 112 libeth_tx_complete(&txq->tx_buf[i], &cp); 81 113 82 114 kfree(txq->tx_buf); 83 115 txq->tx_buf = NULL; 84 - 85 - if (!idpf_queue_has(FLOW_SCH_EN, txq)) 86 - return; 87 - 88 - buf_stack = &txq->stash->buf_stack; 89 - if (!buf_stack->bufs) 
90 - return; 91 - 92 - /* 93 - * If a Tx timeout occurred, there are potentially still bufs in the 94 - * hash table, free them here. 95 - */ 96 - hash_for_each_safe(txq->stash->sched_buf_hash, tag, tmp, stash, 97 - hlist) { 98 - if (!stash) 99 - continue; 100 - 101 - libeth_tx_complete(&stash->buf, &cp); 102 - hash_del(&stash->hlist); 103 - idpf_buf_lifo_push(buf_stack, stash); 104 - } 105 - 106 - for (i = 0; i < buf_stack->size; i++) 107 - kfree(buf_stack->bufs[i]); 108 - 109 - kfree(buf_stack->bufs); 110 - buf_stack->bufs = NULL; 111 116 } 112 117 113 118 /** ··· 93 158 94 159 if (!txq->desc_ring) 95 160 return; 161 + 162 + if (txq->refillq) 163 + kfree(txq->refillq->ring); 96 164 97 165 dmam_free_coherent(txq->dev, txq->size, txq->desc_ring, txq->dma); 98 166 txq->desc_ring = NULL; ··· 153 215 */ 154 216 static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q) 155 217 { 156 - struct idpf_buf_lifo *buf_stack; 157 - int buf_size; 158 - int i; 159 - 160 218 /* Allocate book keeping buffers only. 
Buffers to be supplied to HW 161 219 * are allocated by kernel network stack and received as part of skb 162 220 */ 163 - buf_size = sizeof(struct idpf_tx_buf) * tx_q->desc_count; 164 - tx_q->tx_buf = kzalloc(buf_size, GFP_KERNEL); 221 + if (idpf_queue_has(FLOW_SCH_EN, tx_q)) 222 + tx_q->buf_pool_size = U16_MAX; 223 + else 224 + tx_q->buf_pool_size = tx_q->desc_count; 225 + tx_q->tx_buf = kcalloc(tx_q->buf_pool_size, sizeof(*tx_q->tx_buf), 226 + GFP_KERNEL); 165 227 if (!tx_q->tx_buf) 166 228 return -ENOMEM; 167 - 168 - if (!idpf_queue_has(FLOW_SCH_EN, tx_q)) 169 - return 0; 170 - 171 - buf_stack = &tx_q->stash->buf_stack; 172 - 173 - /* Initialize tx buf stack for out-of-order completions if 174 - * flow scheduling offload is enabled 175 - */ 176 - buf_stack->bufs = kcalloc(tx_q->desc_count, sizeof(*buf_stack->bufs), 177 - GFP_KERNEL); 178 - if (!buf_stack->bufs) 179 - return -ENOMEM; 180 - 181 - buf_stack->size = tx_q->desc_count; 182 - buf_stack->top = tx_q->desc_count; 183 - 184 - for (i = 0; i < tx_q->desc_count; i++) { 185 - buf_stack->bufs[i] = kzalloc(sizeof(*buf_stack->bufs[i]), 186 - GFP_KERNEL); 187 - if (!buf_stack->bufs[i]) 188 - return -ENOMEM; 189 - } 190 229 191 230 return 0; 192 231 } ··· 179 264 struct idpf_tx_queue *tx_q) 180 265 { 181 266 struct device *dev = tx_q->dev; 267 + struct idpf_sw_queue *refillq; 182 268 int err; 183 269 184 270 err = idpf_tx_buf_alloc_all(tx_q); ··· 202 286 tx_q->next_to_use = 0; 203 287 tx_q->next_to_clean = 0; 204 288 idpf_queue_set(GEN_CHK, tx_q); 289 + 290 + if (!idpf_queue_has(FLOW_SCH_EN, tx_q)) 291 + return 0; 292 + 293 + refillq = tx_q->refillq; 294 + refillq->desc_count = tx_q->buf_pool_size; 295 + refillq->ring = kcalloc(refillq->desc_count, sizeof(u32), 296 + GFP_KERNEL); 297 + if (!refillq->ring) { 298 + err = -ENOMEM; 299 + goto err_alloc; 300 + } 301 + 302 + for (unsigned int i = 0; i < refillq->desc_count; i++) 303 + refillq->ring[i] = 304 + FIELD_PREP(IDPF_RFL_BI_BUFID_M, i) | 305 + 
FIELD_PREP(IDPF_RFL_BI_GEN_M, 306 + idpf_queue_has(GEN_CHK, refillq)); 307 + 308 + /* Go ahead and flip the GEN bit since this counts as filling 309 + * up the ring, i.e. we already ring wrapped. 310 + */ 311 + idpf_queue_change(GEN_CHK, refillq); 312 + 313 + tx_q->last_re = tx_q->desc_count - IDPF_TX_SPLITQ_RE_MIN_GAP; 205 314 206 315 return 0; 207 316 ··· 278 337 for (i = 0; i < vport->num_txq_grp; i++) { 279 338 for (j = 0; j < vport->txq_grps[i].num_txq; j++) { 280 339 struct idpf_tx_queue *txq = vport->txq_grps[i].txqs[j]; 281 - u8 gen_bits = 0; 282 - u16 bufidx_mask; 283 340 284 341 err = idpf_tx_desc_alloc(vport, txq); 285 342 if (err) { ··· 286 347 i); 287 348 goto err_out; 288 349 } 289 - 290 - if (!idpf_is_queue_model_split(vport->txq_model)) 291 - continue; 292 - 293 - txq->compl_tag_cur_gen = 0; 294 - 295 - /* Determine the number of bits in the bufid 296 - * mask and add one to get the start of the 297 - * generation bits 298 - */ 299 - bufidx_mask = txq->desc_count - 1; 300 - while (bufidx_mask >> 1) { 301 - txq->compl_tag_gen_s++; 302 - bufidx_mask = bufidx_mask >> 1; 303 - } 304 - txq->compl_tag_gen_s++; 305 - 306 - gen_bits = IDPF_TX_SPLITQ_COMPL_TAG_WIDTH - 307 - txq->compl_tag_gen_s; 308 - txq->compl_tag_gen_max = GETMAXVAL(gen_bits); 309 - 310 - /* Set bufid mask based on location of first 311 - * gen bit; it cannot simply be the descriptor 312 - * ring size-1 since we can have size values 313 - * where not all of those bits are set. 
314 - */ 315 - txq->compl_tag_bufid_m = 316 - GETMAXVAL(txq->compl_tag_gen_s); 317 350 } 318 351 319 352 if (!idpf_is_queue_model_split(vport->txq_model)) ··· 534 623 } 535 624 536 625 /** 537 - * idpf_rx_post_buf_refill - Post buffer id to refill queue 626 + * idpf_post_buf_refill - Post buffer id to refill queue 538 627 * @refillq: refill queue to post to 539 628 * @buf_id: buffer id to post 540 629 */ 541 - static void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id) 630 + static void idpf_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id) 542 631 { 543 632 u32 nta = refillq->next_to_use; 544 633 545 634 /* store the buffer ID and the SW maintained GEN bit to the refillq */ 546 635 refillq->ring[nta] = 547 - FIELD_PREP(IDPF_RX_BI_BUFID_M, buf_id) | 548 - FIELD_PREP(IDPF_RX_BI_GEN_M, 636 + FIELD_PREP(IDPF_RFL_BI_BUFID_M, buf_id) | 637 + FIELD_PREP(IDPF_RFL_BI_GEN_M, 549 638 idpf_queue_has(GEN_CHK, refillq)); 550 639 551 640 if (unlikely(++nta == refillq->desc_count)) { ··· 926 1015 struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; 927 1016 928 1017 for (j = 0; j < txq_grp->num_txq; j++) { 1018 + if (flow_sch_en) { 1019 + kfree(txq_grp->txqs[j]->refillq); 1020 + txq_grp->txqs[j]->refillq = NULL; 1021 + } 1022 + 929 1023 kfree(txq_grp->txqs[j]); 930 1024 txq_grp->txqs[j] = NULL; 931 1025 } ··· 940 1024 941 1025 kfree(txq_grp->complq); 942 1026 txq_grp->complq = NULL; 943 - 944 - if (flow_sch_en) 945 - kfree(txq_grp->stashes); 946 1027 } 947 1028 kfree(vport->txq_grps); 948 1029 vport->txq_grps = NULL; ··· 1300 1387 for (i = 0; i < vport->num_txq_grp; i++) { 1301 1388 struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; 1302 1389 struct idpf_adapter *adapter = vport->adapter; 1303 - struct idpf_txq_stash *stashes; 1304 1390 int j; 1305 1391 1306 1392 tx_qgrp->vport = vport; ··· 1310 1398 GFP_KERNEL); 1311 1399 if (!tx_qgrp->txqs[j]) 1312 1400 goto err_alloc; 1313 - } 1314 - 1315 - if (split && flow_sch_en) { 1316 - stashes = 
kcalloc(num_txq, sizeof(*stashes), 1317 - GFP_KERNEL); 1318 - if (!stashes) 1319 - goto err_alloc; 1320 - 1321 - tx_qgrp->stashes = stashes; 1322 1401 } 1323 1402 1324 1403 for (j = 0; j < tx_qgrp->num_txq; j++) { ··· 1331 1428 if (!flow_sch_en) 1332 1429 continue; 1333 1430 1334 - if (split) { 1335 - q->stash = &stashes[j]; 1336 - hash_init(q->stash->sched_buf_hash); 1337 - } 1338 - 1339 1431 idpf_queue_set(FLOW_SCH_EN, q); 1432 + 1433 + q->refillq = kzalloc(sizeof(*q->refillq), GFP_KERNEL); 1434 + if (!q->refillq) 1435 + goto err_alloc; 1436 + 1437 + idpf_queue_set(GEN_CHK, q->refillq); 1438 + idpf_queue_set(RFL_GEN_CHK, q->refillq); 1340 1439 } 1341 1440 1342 1441 if (!split) ··· 1622 1717 spin_unlock_bh(&tx_tstamp_caps->status_lock); 1623 1718 } 1624 1719 1625 - /** 1626 - * idpf_tx_clean_stashed_bufs - clean bufs that were stored for 1627 - * out of order completions 1628 - * @txq: queue to clean 1629 - * @compl_tag: completion tag of packet to clean (from completion descriptor) 1630 - * @cleaned: pointer to stats struct to track cleaned packets/bytes 1631 - * @budget: Used to determine if we are in netpoll 1632 - */ 1633 - static void idpf_tx_clean_stashed_bufs(struct idpf_tx_queue *txq, 1634 - u16 compl_tag, 1635 - struct libeth_sq_napi_stats *cleaned, 1636 - int budget) 1637 - { 1638 - struct idpf_tx_stash *stash; 1639 - struct hlist_node *tmp_buf; 1640 - struct libeth_cq_pp cp = { 1641 - .dev = txq->dev, 1642 - .ss = cleaned, 1643 - .napi = budget, 1644 - }; 1645 - 1646 - /* Buffer completion */ 1647 - hash_for_each_possible_safe(txq->stash->sched_buf_hash, stash, tmp_buf, 1648 - hlist, compl_tag) { 1649 - if (unlikely(idpf_tx_buf_compl_tag(&stash->buf) != compl_tag)) 1650 - continue; 1651 - 1652 - hash_del(&stash->hlist); 1653 - 1654 - if (stash->buf.type == LIBETH_SQE_SKB && 1655 - (skb_shinfo(stash->buf.skb)->tx_flags & SKBTX_IN_PROGRESS)) 1656 - idpf_tx_read_tstamp(txq, stash->buf.skb); 1657 - 1658 - libeth_tx_complete(&stash->buf, &cp); 1659 - 1660 - 
/* Push shadow buf back onto stack */ 1661 - idpf_buf_lifo_push(&txq->stash->buf_stack, stash); 1662 - } 1663 - } 1664 - 1665 - /** 1666 - * idpf_stash_flow_sch_buffers - store buffer parameters info to be freed at a 1667 - * later time (only relevant for flow scheduling mode) 1668 - * @txq: Tx queue to clean 1669 - * @tx_buf: buffer to store 1670 - */ 1671 - static int idpf_stash_flow_sch_buffers(struct idpf_tx_queue *txq, 1672 - struct idpf_tx_buf *tx_buf) 1673 - { 1674 - struct idpf_tx_stash *stash; 1675 - 1676 - if (unlikely(tx_buf->type <= LIBETH_SQE_CTX)) 1677 - return 0; 1678 - 1679 - stash = idpf_buf_lifo_pop(&txq->stash->buf_stack); 1680 - if (unlikely(!stash)) { 1681 - net_err_ratelimited("%s: No out-of-order TX buffers left!\n", 1682 - netdev_name(txq->netdev)); 1683 - 1684 - return -ENOMEM; 1685 - } 1686 - 1687 - /* Store buffer params in shadow buffer */ 1688 - stash->buf.skb = tx_buf->skb; 1689 - stash->buf.bytes = tx_buf->bytes; 1690 - stash->buf.packets = tx_buf->packets; 1691 - stash->buf.type = tx_buf->type; 1692 - stash->buf.nr_frags = tx_buf->nr_frags; 1693 - dma_unmap_addr_set(&stash->buf, dma, dma_unmap_addr(tx_buf, dma)); 1694 - dma_unmap_len_set(&stash->buf, len, dma_unmap_len(tx_buf, len)); 1695 - idpf_tx_buf_compl_tag(&stash->buf) = idpf_tx_buf_compl_tag(tx_buf); 1696 - 1697 - /* Add buffer to buf_hash table to be freed later */ 1698 - hash_add(txq->stash->sched_buf_hash, &stash->hlist, 1699 - idpf_tx_buf_compl_tag(&stash->buf)); 1700 - 1701 - tx_buf->type = LIBETH_SQE_EMPTY; 1702 - 1703 - return 0; 1704 - } 1705 - 1706 1720 #define idpf_tx_splitq_clean_bump_ntc(txq, ntc, desc, buf) \ 1707 1721 do { \ 1708 1722 if (unlikely(++(ntc) == (txq)->desc_count)) { \ ··· 1649 1825 * Separate packet completion events will be reported on the completion queue, 1650 1826 * and the buffers will be cleaned separately. The stats are not updated from 1651 1827 * this function when using flow-based scheduling. 
1652 - * 1653 - * Furthermore, in flow scheduling mode, check to make sure there are enough 1654 - * reserve buffers to stash the packet. If there are not, return early, which 1655 - * will leave next_to_clean pointing to the packet that failed to be stashed. 1656 - * 1657 - * Return: false in the scenario above, true otherwise. 1658 1828 */ 1659 - static bool idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, 1829 + static void idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, 1660 1830 int napi_budget, 1661 1831 struct libeth_sq_napi_stats *cleaned, 1662 1832 bool descs_only) ··· 1664 1846 .napi = napi_budget, 1665 1847 }; 1666 1848 struct idpf_tx_buf *tx_buf; 1667 - bool clean_complete = true; 1849 + 1850 + if (descs_only) { 1851 + /* Bump ring index to mark as cleaned. */ 1852 + tx_q->next_to_clean = end; 1853 + return; 1854 + } 1668 1855 1669 1856 tx_desc = &tx_q->flex_tx[ntc]; 1670 1857 next_pending_desc = &tx_q->flex_tx[end]; ··· 1689 1866 break; 1690 1867 1691 1868 eop_idx = tx_buf->rs_idx; 1869 + libeth_tx_complete(tx_buf, &cp); 1692 1870 1693 - if (descs_only) { 1694 - if (IDPF_TX_BUF_RSV_UNUSED(tx_q) < tx_buf->nr_frags) { 1695 - clean_complete = false; 1696 - goto tx_splitq_clean_out; 1697 - } 1871 + /* unmap remaining buffers */ 1872 + while (ntc != eop_idx) { 1873 + idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, 1874 + tx_desc, tx_buf); 1698 1875 1699 - idpf_stash_flow_sch_buffers(tx_q, tx_buf); 1700 - 1701 - while (ntc != eop_idx) { 1702 - idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, 1703 - tx_desc, tx_buf); 1704 - idpf_stash_flow_sch_buffers(tx_q, tx_buf); 1705 - } 1706 - } else { 1876 + /* unmap any remaining paged data */ 1707 1877 libeth_tx_complete(tx_buf, &cp); 1708 - 1709 - /* unmap remaining buffers */ 1710 - while (ntc != eop_idx) { 1711 - idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, 1712 - tx_desc, tx_buf); 1713 - 1714 - /* unmap any remaining paged data */ 1715 - libeth_tx_complete(tx_buf, &cp); 1716 - } 1717 1878 } 1718 1879 1719 1880 
fetch_next_txq_desc: 1720 1881 idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, tx_desc, tx_buf); 1721 1882 } 1722 1883 1723 - tx_splitq_clean_out: 1724 1884 tx_q->next_to_clean = ntc; 1725 - 1726 - return clean_complete; 1727 1885 } 1728 1886 1729 - #define idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, buf) \ 1730 - do { \ 1731 - (buf)++; \ 1732 - (ntc)++; \ 1733 - if (unlikely((ntc) == (txq)->desc_count)) { \ 1734 - buf = (txq)->tx_buf; \ 1735 - ntc = 0; \ 1736 - } \ 1737 - } while (0) 1738 - 1739 1887 /** 1740 - * idpf_tx_clean_buf_ring - clean flow scheduling TX queue buffers 1888 + * idpf_tx_clean_bufs - clean flow scheduling TX queue buffers 1741 1889 * @txq: queue to clean 1742 - * @compl_tag: completion tag of packet to clean (from completion descriptor) 1890 + * @buf_id: packet's starting buffer ID, from completion descriptor 1743 1891 * @cleaned: pointer to stats struct to track cleaned packets/bytes 1744 1892 * @budget: Used to determine if we are in netpoll 1745 1893 * 1746 - * Cleans all buffers associated with the input completion tag either from the 1747 - * TX buffer ring or from the hash table if the buffers were previously 1748 - * stashed. Returns the byte/segment count for the cleaned packet associated 1749 - * this completion tag. 1894 + * Clean all buffers associated with the packet starting at buf_id. Returns the 1895 + * byte/segment count for the cleaned packet. 
1750 1896 */ 1751 - static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag, 1752 - struct libeth_sq_napi_stats *cleaned, 1753 - int budget) 1897 + static void idpf_tx_clean_bufs(struct idpf_tx_queue *txq, u32 buf_id, 1898 + struct libeth_sq_napi_stats *cleaned, 1899 + int budget) 1754 1900 { 1755 - u16 idx = compl_tag & txq->compl_tag_bufid_m; 1756 1901 struct idpf_tx_buf *tx_buf = NULL; 1757 1902 struct libeth_cq_pp cp = { 1758 1903 .dev = txq->dev, 1759 1904 .ss = cleaned, 1760 1905 .napi = budget, 1761 1906 }; 1762 - u16 ntc, orig_idx = idx; 1763 1907 1764 - tx_buf = &txq->tx_buf[idx]; 1765 - 1766 - if (unlikely(tx_buf->type <= LIBETH_SQE_CTX || 1767 - idpf_tx_buf_compl_tag(tx_buf) != compl_tag)) 1768 - return false; 1769 - 1908 + tx_buf = &txq->tx_buf[buf_id]; 1770 1909 if (tx_buf->type == LIBETH_SQE_SKB) { 1771 1910 if (skb_shinfo(tx_buf->skb)->tx_flags & SKBTX_IN_PROGRESS) 1772 1911 idpf_tx_read_tstamp(txq, tx_buf->skb); 1773 1912 1774 1913 libeth_tx_complete(tx_buf, &cp); 1914 + idpf_post_buf_refill(txq->refillq, buf_id); 1775 1915 } 1776 1916 1777 - idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf); 1917 + while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) { 1918 + buf_id = idpf_tx_buf_next(tx_buf); 1778 1919 1779 - while (idpf_tx_buf_compl_tag(tx_buf) == compl_tag) { 1920 + tx_buf = &txq->tx_buf[buf_id]; 1780 1921 libeth_tx_complete(tx_buf, &cp); 1781 - idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf); 1922 + idpf_post_buf_refill(txq->refillq, buf_id); 1782 1923 } 1783 - 1784 - /* 1785 - * It's possible the packet we just cleaned was an out of order 1786 - * completion, which means we can stash the buffers starting from 1787 - * the original next_to_clean and reuse the descriptors. We need 1788 - * to compare the descriptor ring next_to_clean packet's "first" buffer 1789 - * to the "first" buffer of the packet we just cleaned to determine if 1790 - * this is the case. 
However, next_to_clean can point to either a 1791 - * reserved buffer that corresponds to a context descriptor used for the 1792 - * next_to_clean packet (TSO packet) or the "first" buffer (single 1793 - * packet). The orig_idx from the packet we just cleaned will always 1794 - * point to the "first" buffer. If next_to_clean points to a reserved 1795 - * buffer, let's bump ntc once and start the comparison from there. 1796 - */ 1797 - ntc = txq->next_to_clean; 1798 - tx_buf = &txq->tx_buf[ntc]; 1799 - 1800 - if (tx_buf->type == LIBETH_SQE_CTX) 1801 - idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, tx_buf); 1802 - 1803 - /* 1804 - * If ntc still points to a different "first" buffer, clean the 1805 - * descriptor ring and stash all of the buffers for later cleaning. If 1806 - * we cannot stash all of the buffers, next_to_clean will point to the 1807 - * "first" buffer of the packet that could not be stashed and cleaning 1808 - * will start there next time. 1809 - */ 1810 - if (unlikely(tx_buf != &txq->tx_buf[orig_idx] && 1811 - !idpf_tx_splitq_clean(txq, orig_idx, budget, cleaned, 1812 - true))) 1813 - return true; 1814 - 1815 - /* 1816 - * Otherwise, update next_to_clean to reflect the cleaning that was 1817 - * done above. 1818 - */ 1819 - txq->next_to_clean = idx; 1820 - 1821 - return true; 1822 1924 } 1823 1925 1824 1926 /** ··· 1762 2014 struct libeth_sq_napi_stats *cleaned, 1763 2015 int budget) 1764 2016 { 1765 - u16 compl_tag; 2017 + /* RS completion contains queue head for queue based scheduling or 2018 + * completion tag for flow based scheduling. 
2019 + */ 2020 + u16 rs_compl_val = le16_to_cpu(desc->q_head_compl_tag.q_head); 1766 2021 1767 2022 if (!idpf_queue_has(FLOW_SCH_EN, txq)) { 1768 - u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head); 1769 - 1770 - idpf_tx_splitq_clean(txq, head, budget, cleaned, false); 2023 + idpf_tx_splitq_clean(txq, rs_compl_val, budget, cleaned, false); 1771 2024 return; 1772 2025 } 1773 2026 1774 - compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag); 1775 - 1776 - /* If we didn't clean anything on the ring, this packet must be 1777 - * in the hash table. Go clean it there. 1778 - */ 1779 - if (!idpf_tx_clean_buf_ring(txq, compl_tag, cleaned, budget)) 1780 - idpf_tx_clean_stashed_bufs(txq, compl_tag, cleaned, budget); 2027 + idpf_tx_clean_bufs(txq, rs_compl_val, cleaned, budget); 1781 2028 } 1782 2029 1783 2030 /** ··· 1889 2146 /* Update BQL */ 1890 2147 nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx); 1891 2148 1892 - dont_wake = !complq_ok || IDPF_TX_BUF_RSV_LOW(tx_q) || 1893 - np->state != __IDPF_VPORT_UP || 2149 + dont_wake = !complq_ok || np->state != __IDPF_VPORT_UP || 1894 2150 !netif_carrier_ok(tx_q->netdev); 1895 2151 /* Check if the TXQ needs to and can be restarted */ 1896 2152 __netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes, ··· 1946 2204 desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag); 1947 2205 } 1948 2206 1949 - /* Global conditions to tell whether the txq (and related resources) 1950 - * has room to allow the use of "size" descriptors. 
2207 + /** 2208 + * idpf_tx_splitq_has_room - check if enough Tx splitq resources are available 2209 + * @tx_q: the queue to be checked 2210 + * @descs_needed: number of descriptors required for this packet 2211 + * @bufs_needed: number of Tx buffers required for this packet 2212 + * 2213 + * Return: 0 if no room available, 1 otherwise 1951 2214 */ 1952 - static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 size) 2215 + static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 descs_needed, 2216 + u32 bufs_needed) 1953 2217 { 1954 - if (IDPF_DESC_UNUSED(tx_q) < size || 2218 + if (IDPF_DESC_UNUSED(tx_q) < descs_needed || 1955 2219 IDPF_TX_COMPLQ_PENDING(tx_q->txq_grp) > 1956 2220 IDPF_TX_COMPLQ_OVERFLOW_THRESH(tx_q->txq_grp->complq) || 1957 - IDPF_TX_BUF_RSV_LOW(tx_q)) 2221 + idpf_tx_splitq_get_free_bufs(tx_q->refillq) < bufs_needed) 1958 2222 return 0; 1959 2223 return 1; 1960 2224 } ··· 1969 2221 * idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions 1970 2222 * @tx_q: the queue to be checked 1971 2223 * @descs_needed: number of descriptors required for this packet 2224 + * @bufs_needed: number of buffers needed for this packet 1972 2225 * 1973 - * Returns 0 if stop is not needed 2226 + * Return: 0 if stop is not needed 1974 2227 */ 1975 2228 static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q, 1976 - unsigned int descs_needed) 2229 + u32 descs_needed, 2230 + u32 bufs_needed) 1977 2231 { 2232 + /* Since we have multiple resources to check for splitq, our 2233 + * start,stop_thrs becomes a boolean check instead of a count 2234 + * threshold. 
2235 + */ 1978 2236 if (netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx, 1979 - idpf_txq_has_room(tx_q, descs_needed), 2237 + idpf_txq_has_room(tx_q, descs_needed, 2238 + bufs_needed), 1980 2239 1, 1)) 1981 2240 return 0; 1982 2241 ··· 2025 2270 } 2026 2271 2027 2272 /** 2028 - * idpf_tx_desc_count_required - calculate number of Tx descriptors needed 2273 + * idpf_tx_res_count_required - get number of Tx resources needed for this pkt 2029 2274 * @txq: queue to send buffer on 2030 2275 * @skb: send buffer 2276 + * @bufs_needed: (output) number of buffers needed for this skb. 2031 2277 * 2032 - * Returns number of data descriptors needed for this skb. 2278 + * Return: number of data descriptors and buffers needed for this skb. 2033 2279 */ 2034 - unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq, 2035 - struct sk_buff *skb) 2280 + unsigned int idpf_tx_res_count_required(struct idpf_tx_queue *txq, 2281 + struct sk_buff *skb, 2282 + u32 *bufs_needed) 2036 2283 { 2037 2284 const struct skb_shared_info *shinfo; 2038 2285 unsigned int count = 0, i; ··· 2045 2288 return count; 2046 2289 2047 2290 shinfo = skb_shinfo(skb); 2291 + *bufs_needed += shinfo->nr_frags; 2048 2292 for (i = 0; i < shinfo->nr_frags; i++) { 2049 2293 unsigned int size; 2050 2294 ··· 2075 2317 } 2076 2318 2077 2319 /** 2078 - * idpf_tx_dma_map_error - handle TX DMA map errors 2079 - * @txq: queue to send buffer on 2080 - * @skb: send buffer 2081 - * @first: original first buffer info buffer for packet 2082 - * @idx: starting point on ring to unwind 2083 - */ 2084 - void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb, 2085 - struct idpf_tx_buf *first, u16 idx) 2086 - { 2087 - struct libeth_sq_napi_stats ss = { }; 2088 - struct libeth_cq_pp cp = { 2089 - .dev = txq->dev, 2090 - .ss = &ss, 2091 - }; 2092 - 2093 - u64_stats_update_begin(&txq->stats_sync); 2094 - u64_stats_inc(&txq->q_stats.dma_map_errs); 2095 - u64_stats_update_end(&txq->stats_sync); 2096 - 2097 - 
/* clear dma mappings for failed tx_buf map */ 2098 - for (;;) { 2099 - struct idpf_tx_buf *tx_buf; 2100 - 2101 - tx_buf = &txq->tx_buf[idx]; 2102 - libeth_tx_complete(tx_buf, &cp); 2103 - if (tx_buf == first) 2104 - break; 2105 - if (idx == 0) 2106 - idx = txq->desc_count; 2107 - idx--; 2108 - } 2109 - 2110 - if (skb_is_gso(skb)) { 2111 - union idpf_tx_flex_desc *tx_desc; 2112 - 2113 - /* If we failed a DMA mapping for a TSO packet, we will have 2114 - * used one additional descriptor for a context 2115 - * descriptor. Reset that here. 2116 - */ 2117 - tx_desc = &txq->flex_tx[idx]; 2118 - memset(tx_desc, 0, sizeof(*tx_desc)); 2119 - if (idx == 0) 2120 - idx = txq->desc_count; 2121 - idx--; 2122 - } 2123 - 2124 - /* Update tail in case netdev_xmit_more was previously true */ 2125 - idpf_tx_buf_hw_update(txq, idx, false); 2126 - } 2127 - 2128 - /** 2129 2320 * idpf_tx_splitq_bump_ntu - adjust NTU and generation 2130 2321 * @txq: the tx ring to wrap 2131 2322 * @ntu: ring index to bump ··· 2083 2376 { 2084 2377 ntu++; 2085 2378 2086 - if (ntu == txq->desc_count) { 2379 + if (ntu == txq->desc_count) 2087 2380 ntu = 0; 2088 - txq->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(txq); 2089 - } 2090 2381 2091 2382 return ntu; 2383 + } 2384 + 2385 + /** 2386 + * idpf_tx_get_free_buf_id - get a free buffer ID from the refill queue 2387 + * @refillq: refill queue to get buffer ID from 2388 + * @buf_id: return buffer ID 2389 + * 2390 + * Return: true if a buffer ID was found, false if not 2391 + */ 2392 + static bool idpf_tx_get_free_buf_id(struct idpf_sw_queue *refillq, 2393 + u32 *buf_id) 2394 + { 2395 + u32 ntc = refillq->next_to_clean; 2396 + u32 refill_desc; 2397 + 2398 + refill_desc = refillq->ring[ntc]; 2399 + 2400 + if (unlikely(idpf_queue_has(RFL_GEN_CHK, refillq) != 2401 + !!(refill_desc & IDPF_RFL_BI_GEN_M))) 2402 + return false; 2403 + 2404 + *buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc); 2405 + 2406 + if (unlikely(++ntc == refillq->desc_count)) { 2407 + 
idpf_queue_change(RFL_GEN_CHK, refillq); 2408 + ntc = 0; 2409 + } 2410 + 2411 + refillq->next_to_clean = ntc; 2412 + 2413 + return true; 2414 + } 2415 + 2416 + /** 2417 + * idpf_tx_splitq_pkt_err_unmap - Unmap buffers and bump tail in case of error 2418 + * @txq: Tx queue to unwind 2419 + * @params: pointer to splitq params struct 2420 + * @first: starting buffer for packet to unmap 2421 + */ 2422 + static void idpf_tx_splitq_pkt_err_unmap(struct idpf_tx_queue *txq, 2423 + struct idpf_tx_splitq_params *params, 2424 + struct idpf_tx_buf *first) 2425 + { 2426 + struct idpf_sw_queue *refillq = txq->refillq; 2427 + struct libeth_sq_napi_stats ss = { }; 2428 + struct idpf_tx_buf *tx_buf = first; 2429 + struct libeth_cq_pp cp = { 2430 + .dev = txq->dev, 2431 + .ss = &ss, 2432 + }; 2433 + 2434 + u64_stats_update_begin(&txq->stats_sync); 2435 + u64_stats_inc(&txq->q_stats.dma_map_errs); 2436 + u64_stats_update_end(&txq->stats_sync); 2437 + 2438 + libeth_tx_complete(tx_buf, &cp); 2439 + while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) { 2440 + tx_buf = &txq->tx_buf[idpf_tx_buf_next(tx_buf)]; 2441 + libeth_tx_complete(tx_buf, &cp); 2442 + } 2443 + 2444 + /* Update tail in case netdev_xmit_more was previously true. */ 2445 + idpf_tx_buf_hw_update(txq, params->prev_ntu, false); 2446 + 2447 + if (!refillq) 2448 + return; 2449 + 2450 + /* Restore refillq state to avoid leaking tags. 
*/ 2451 + if (params->prev_refill_gen != idpf_queue_has(RFL_GEN_CHK, refillq)) 2452 + idpf_queue_change(RFL_GEN_CHK, refillq); 2453 + refillq->next_to_clean = params->prev_refill_ntc; 2092 2454 } 2093 2455 2094 2456 /** ··· 2181 2405 struct netdev_queue *nq; 2182 2406 struct sk_buff *skb; 2183 2407 skb_frag_t *frag; 2408 + u32 next_buf_id; 2184 2409 u16 td_cmd = 0; 2185 2410 dma_addr_t dma; 2186 2411 ··· 2199 2422 tx_buf = first; 2200 2423 first->nr_frags = 0; 2201 2424 2202 - params->compl_tag = 2203 - (tx_q->compl_tag_cur_gen << tx_q->compl_tag_gen_s) | i; 2204 - 2205 2425 for (frag = &skb_shinfo(skb)->frags[0];; frag++) { 2206 2426 unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED; 2207 2427 2208 - if (dma_mapping_error(tx_q->dev, dma)) 2209 - return idpf_tx_dma_map_error(tx_q, skb, first, i); 2428 + if (unlikely(dma_mapping_error(tx_q->dev, dma))) { 2429 + idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL; 2430 + return idpf_tx_splitq_pkt_err_unmap(tx_q, params, 2431 + first); 2432 + } 2210 2433 2211 2434 first->nr_frags++; 2212 - idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag; 2213 2435 tx_buf->type = LIBETH_SQE_FRAG; 2214 2436 2215 2437 /* record length, and DMA address */ ··· 2264 2488 max_data); 2265 2489 2266 2490 if (unlikely(++i == tx_q->desc_count)) { 2267 - tx_buf = tx_q->tx_buf; 2268 2491 tx_desc = &tx_q->flex_tx[0]; 2269 2492 i = 0; 2270 - tx_q->compl_tag_cur_gen = 2271 - IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q); 2272 2493 } else { 2273 - tx_buf++; 2274 2494 tx_desc++; 2275 2495 } 2276 - 2277 - /* Since this packet has a buffer that is going to span 2278 - * multiple descriptors, it's going to leave holes in 2279 - * to the TX buffer ring. To ensure these holes do not 2280 - * cause issues in the cleaning routines, we will clear 2281 - * them of any stale data and assign them the same 2282 - * completion tag as the current packet. 
Then when the 2283 - * packet is being cleaned, the cleaning routines will 2284 - * simply pass over these holes and finish cleaning the 2285 - * rest of the packet. 2286 - */ 2287 - tx_buf->type = LIBETH_SQE_EMPTY; 2288 - idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag; 2289 2496 2290 2497 /* Adjust the DMA offset and the remaining size of the 2291 2498 * fragment. On the first iteration of this loop, ··· 2294 2535 idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size); 2295 2536 2296 2537 if (unlikely(++i == tx_q->desc_count)) { 2297 - tx_buf = tx_q->tx_buf; 2298 2538 tx_desc = &tx_q->flex_tx[0]; 2299 2539 i = 0; 2300 - tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q); 2301 2540 } else { 2302 - tx_buf++; 2303 2541 tx_desc++; 2304 2542 } 2543 + 2544 + if (idpf_queue_has(FLOW_SCH_EN, tx_q)) { 2545 + if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq, 2546 + &next_buf_id))) { 2547 + idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL; 2548 + return idpf_tx_splitq_pkt_err_unmap(tx_q, params, 2549 + first); 2550 + } 2551 + } else { 2552 + next_buf_id = i; 2553 + } 2554 + idpf_tx_buf_next(tx_buf) = next_buf_id; 2555 + tx_buf = &tx_q->tx_buf[next_buf_id]; 2305 2556 2306 2557 size = skb_frag_size(frag); 2307 2558 data_len -= size; ··· 2327 2558 2328 2559 /* write last descriptor with RS and EOP bits */ 2329 2560 first->rs_idx = i; 2561 + idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL; 2330 2562 td_cmd |= params->eop_cmd; 2331 2563 idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size); 2332 2564 i = idpf_tx_splitq_bump_ntu(tx_q, i); ··· 2431 2661 union idpf_flex_tx_ctx_desc *desc; 2432 2662 int i = txq->next_to_use; 2433 2663 2434 - txq->tx_buf[i].type = LIBETH_SQE_CTX; 2435 - 2436 2664 /* grab the next descriptor */ 2437 2665 desc = &txq->flex_ctx[i]; 2438 2666 txq->next_to_use = idpf_tx_splitq_bump_ntu(txq, i); ··· 2524 2756 #endif /* CONFIG_PTP_1588_CLOCK */ 2525 2757 2526 2758 /** 2759 + * idpf_tx_splitq_need_re - check whether RE bit needs to be set 2760 + * 
@tx_q: pointer to Tx queue 2761 + * 2762 + * Return: true if RE bit needs to be set, false otherwise 2763 + */ 2764 + static bool idpf_tx_splitq_need_re(struct idpf_tx_queue *tx_q) 2765 + { 2766 + int gap = tx_q->next_to_use - tx_q->last_re; 2767 + 2768 + gap += (gap < 0) ? tx_q->desc_count : 0; 2769 + 2770 + return gap >= IDPF_TX_SPLITQ_RE_MIN_GAP; 2771 + } 2772 + 2773 + /** 2527 2774 * idpf_tx_splitq_frame - Sends buffer on Tx ring using flex descriptors 2528 2775 * @skb: send buffer 2529 2776 * @tx_q: queue to send buffer on ··· 2548 2765 static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb, 2549 2766 struct idpf_tx_queue *tx_q) 2550 2767 { 2551 - struct idpf_tx_splitq_params tx_params = { }; 2768 + struct idpf_tx_splitq_params tx_params = { 2769 + .prev_ntu = tx_q->next_to_use, 2770 + }; 2552 2771 union idpf_flex_tx_ctx_desc *ctx_desc; 2553 2772 struct idpf_tx_buf *first; 2554 - unsigned int count; 2773 + u32 count, buf_count = 1; 2555 2774 int tso, idx; 2775 + u32 buf_id; 2556 2776 2557 - count = idpf_tx_desc_count_required(tx_q, skb); 2777 + count = idpf_tx_res_count_required(tx_q, skb, &buf_count); 2558 2778 if (unlikely(!count)) 2559 2779 return idpf_tx_drop_skb(tx_q, skb); 2560 2780 ··· 2567 2781 2568 2782 /* Check for splitq specific TX resources */ 2569 2783 count += (IDPF_TX_DESCS_PER_CACHE_LINE + tso); 2570 - if (idpf_tx_maybe_stop_splitq(tx_q, count)) { 2784 + if (idpf_tx_maybe_stop_splitq(tx_q, count, buf_count)) { 2571 2785 idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false); 2572 2786 2573 2787 return NETDEV_TX_BUSY; ··· 2599 2813 idpf_tx_set_tstamp_desc(ctx_desc, idx); 2600 2814 } 2601 2815 2602 - /* record the location of the first descriptor for this packet */ 2603 - first = &tx_q->tx_buf[tx_q->next_to_use]; 2816 + if (idpf_queue_has(FLOW_SCH_EN, tx_q)) { 2817 + struct idpf_sw_queue *refillq = tx_q->refillq; 2818 + 2819 + /* Save refillq state in case of a packet rollback. 
Otherwise, 2820 + * the tags will be leaked since they will be popped from the 2821 + * refillq but never reposted during cleaning. 2822 + */ 2823 + tx_params.prev_refill_gen = 2824 + idpf_queue_has(RFL_GEN_CHK, refillq); 2825 + tx_params.prev_refill_ntc = refillq->next_to_clean; 2826 + 2827 + if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq, 2828 + &buf_id))) { 2829 + if (tx_params.prev_refill_gen != 2830 + idpf_queue_has(RFL_GEN_CHK, refillq)) 2831 + idpf_queue_change(RFL_GEN_CHK, refillq); 2832 + refillq->next_to_clean = tx_params.prev_refill_ntc; 2833 + 2834 + tx_q->next_to_use = tx_params.prev_ntu; 2835 + return idpf_tx_drop_skb(tx_q, skb); 2836 + } 2837 + tx_params.compl_tag = buf_id; 2838 + 2839 + tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE; 2840 + tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP; 2841 + /* Set the RE bit to periodically "clean" the descriptor ring. 2842 + * MIN_GAP is set to MIN_RING size to ensure it will be set at 2843 + * least once each time around the ring. 
2844 + */ 2845 + if (idpf_tx_splitq_need_re(tx_q)) { 2846 + tx_params.eop_cmd |= IDPF_TXD_FLEX_FLOW_CMD_RE; 2847 + tx_q->txq_grp->num_completions_pending++; 2848 + tx_q->last_re = tx_q->next_to_use; 2849 + } 2850 + 2851 + if (skb->ip_summed == CHECKSUM_PARTIAL) 2852 + tx_params.offload.td_cmd |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN; 2853 + 2854 + } else { 2855 + buf_id = tx_q->next_to_use; 2856 + 2857 + tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2; 2858 + tx_params.eop_cmd = IDPF_TXD_LAST_DESC_CMD; 2859 + 2860 + if (skb->ip_summed == CHECKSUM_PARTIAL) 2861 + tx_params.offload.td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN; 2862 + } 2863 + 2864 + first = &tx_q->tx_buf[buf_id]; 2604 2865 first->skb = skb; 2605 2866 2606 2867 if (tso) { ··· 2657 2824 } else { 2658 2825 first->packets = 1; 2659 2826 first->bytes = max_t(unsigned int, skb->len, ETH_ZLEN); 2660 - } 2661 - 2662 - if (idpf_queue_has(FLOW_SCH_EN, tx_q)) { 2663 - tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE; 2664 - tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP; 2665 - /* Set the RE bit to catch any packets that may have not been 2666 - * stashed during RS completion cleaning. MIN_GAP is set to 2667 - * MIN_RING size to ensure it will be set at least once each 2668 - * time around the ring. 
2669 - */ 2670 - if (!(tx_q->next_to_use % IDPF_TX_SPLITQ_RE_MIN_GAP)) { 2671 - tx_params.eop_cmd |= IDPF_TXD_FLEX_FLOW_CMD_RE; 2672 - tx_q->txq_grp->num_completions_pending++; 2673 - } 2674 - 2675 - if (skb->ip_summed == CHECKSUM_PARTIAL) 2676 - tx_params.offload.td_cmd |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN; 2677 - 2678 - } else { 2679 - tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2; 2680 - tx_params.eop_cmd = IDPF_TXD_LAST_DESC_CMD; 2681 - 2682 - if (skb->ip_summed == CHECKSUM_PARTIAL) 2683 - tx_params.offload.td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN; 2684 2827 } 2685 2828 2686 2829 idpf_tx_splitq_map(tx_q, &tx_params, first); ··· 3196 3387 skip_data: 3197 3388 rx_buf->netmem = 0; 3198 3389 3199 - idpf_rx_post_buf_refill(refillq, buf_id); 3390 + idpf_post_buf_refill(refillq, buf_id); 3200 3391 IDPF_RX_BUMP_NTC(rxq, ntc); 3201 3392 3202 3393 /* skip if it is non EOP desc */ ··· 3304 3495 bool failure; 3305 3496 3306 3497 if (idpf_queue_has(RFL_GEN_CHK, refillq) != 3307 - !!(refill_desc & IDPF_RX_BI_GEN_M)) 3498 + !!(refill_desc & IDPF_RFL_BI_GEN_M)) 3308 3499 break; 3309 3500 3310 - buf_id = FIELD_GET(IDPF_RX_BI_BUFID_M, refill_desc); 3501 + buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc); 3311 3502 failure = idpf_rx_update_bufq_desc(bufq, buf_id, buf_desc); 3312 3503 if (failure) 3313 3504 break;
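The flow-scheduling rework above replaces the stash/hash-table cleanup with a refill queue of free buffer IDs, where each ring entry carries a generation bit that flips every time the consumer wraps (see idpf_tx_get_free_buf_id). A minimal sketch of that pop logic, using simplified toy_* names and constants rather than the kernel structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define GEN_BIT    (1u << 16)  /* mirrors IDPF_RFL_BI_GEN_M */
#define BUFID_MASK 0xffffu     /* mirrors IDPF_RFL_BI_BUFID_M */

/* Toy refill queue: a ring of buffer IDs plus a generation bit that the
 * consumer flips each time it wraps, so stale (not yet reposted)
 * entries are detected without a separate count. */
struct toy_refillq {
	uint32_t *ring;
	uint32_t desc_count;
	uint32_t next_to_clean;
	bool gen_chk;          /* expected generation of valid entries */
};

/* Pop one free buffer ID; returns false when the entry's generation bit
 * does not match the expected one, i.e. no valid ID is available. */
static bool toy_get_free_buf_id(struct toy_refillq *rq, uint32_t *buf_id)
{
	uint32_t ntc = rq->next_to_clean;
	uint32_t desc = rq->ring[ntc];

	if (rq->gen_chk != !!(desc & GEN_BIT))
		return false;

	*buf_id = desc & BUFID_MASK;

	if (++ntc == rq->desc_count) {
		rq->gen_chk = !rq->gen_chk;  /* wrapped: flip expectation */
		ntc = 0;
	}
	rq->next_to_clean = ntc;
	return true;
}
```

The kernel version additionally saves and restores next_to_clean and the generation flag on packet rollback (idpf_tx_splitq_pkt_err_unmap) so popped tags are not leaked.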
+33 -54
drivers/net/ethernet/intel/idpf/idpf_txrx.h
··· 108 108 */ 109 109 #define IDPF_TX_SPLITQ_RE_MIN_GAP 64 110 110 111 - #define IDPF_RX_BI_GEN_M BIT(16) 112 - #define IDPF_RX_BI_BUFID_M GENMASK(15, 0) 111 + #define IDPF_RFL_BI_GEN_M BIT(16) 112 + #define IDPF_RFL_BI_BUFID_M GENMASK(15, 0) 113 113 114 114 #define IDPF_RXD_EOF_SPLITQ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M 115 115 #define IDPF_RXD_EOF_SINGLEQ VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M ··· 117 117 #define IDPF_DESC_UNUSED(txq) \ 118 118 ((((txq)->next_to_clean > (txq)->next_to_use) ? 0 : (txq)->desc_count) + \ 119 119 (txq)->next_to_clean - (txq)->next_to_use - 1) 120 - 121 - #define IDPF_TX_BUF_RSV_UNUSED(txq) ((txq)->stash->buf_stack.top) 122 - #define IDPF_TX_BUF_RSV_LOW(txq) (IDPF_TX_BUF_RSV_UNUSED(txq) < \ 123 - (txq)->desc_count >> 2) 124 120 125 121 #define IDPF_TX_COMPLQ_OVERFLOW_THRESH(txcq) ((txcq)->desc_count >> 1) 126 122 /* Determine the absolute number of completions pending, i.e. the number of ··· 127 131 0 : U32_MAX) + \ 128 132 (txq)->num_completions_pending - (txq)->complq->num_completions) 129 133 130 - #define IDPF_TX_SPLITQ_COMPL_TAG_WIDTH 16 131 - /* Adjust the generation for the completion tag and wrap if necessary */ 132 - #define IDPF_TX_ADJ_COMPL_TAG_GEN(txq) \ 133 - ((++(txq)->compl_tag_cur_gen) >= (txq)->compl_tag_gen_max ? 
\ 134 - 0 : (txq)->compl_tag_cur_gen) 134 + #define IDPF_TXBUF_NULL U32_MAX 135 135 136 136 #define IDPF_TXD_LAST_DESC_CMD (IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS) 137 137 ··· 143 151 }; 144 152 145 153 #define idpf_tx_buf libeth_sqe 146 - 147 - /** 148 - * struct idpf_buf_lifo - LIFO for managing OOO completions 149 - * @top: Used to know how many buffers are left 150 - * @size: Total size of LIFO 151 - * @bufs: Backing array 152 - */ 153 - struct idpf_buf_lifo { 154 - u16 top; 155 - u16 size; 156 - struct idpf_tx_stash **bufs; 157 - }; 158 154 159 155 /** 160 156 * struct idpf_tx_offload_params - Offload parameters for a given packet ··· 176 196 * @compl_tag: Associated tag for completion 177 197 * @td_tag: Descriptor tunneling tag 178 198 * @offload: Offload parameters 199 + * @prev_ntu: stored TxQ next_to_use in case of rollback 200 + * @prev_refill_ntc: stored refillq next_to_clean in case of packet rollback 201 + * @prev_refill_gen: stored refillq generation bit in case of packet rollback 179 202 */ 180 203 struct idpf_tx_splitq_params { 181 204 enum idpf_tx_desc_dtype_value dtype; ··· 189 206 }; 190 207 191 208 struct idpf_tx_offload_params offload; 209 + 210 + u16 prev_ntu; 211 + u16 prev_refill_ntc; 212 + bool prev_refill_gen; 192 213 }; 193 214 194 215 enum idpf_tx_ctx_desc_eipt_offload { ··· 455 468 #define IDPF_DIM_DEFAULT_PROFILE_IX 1 456 469 457 470 /** 458 - * struct idpf_txq_stash - Tx buffer stash for Flow-based scheduling mode 459 - * @buf_stack: Stack of empty buffers to store buffer info for out of order 460 - * buffer completions. 
See struct idpf_buf_lifo 461 - * @sched_buf_hash: Hash table to store buffers 462 - */ 463 - struct idpf_txq_stash { 464 - struct idpf_buf_lifo buf_stack; 465 - DECLARE_HASHTABLE(sched_buf_hash, 12); 466 - } ____cacheline_aligned; 467 - 468 - /** 469 471 * struct idpf_rx_queue - software structure representing a receive queue 470 472 * @rx: universal receive descriptor array 471 473 * @single_buf: buffer descriptor array in singleq ··· 586 610 * @netdev: &net_device corresponding to this queue 587 611 * @next_to_use: Next descriptor to use 588 612 * @next_to_clean: Next descriptor to clean 613 + * @last_re: last descriptor index that RE bit was set 614 + * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather 589 615 * @cleaned_bytes: Splitq only, TXQ only: When a TX completion is received on 590 616 * the TX completion queue, it can be for any TXQ associated 591 617 * with that completion queue. This means we can clean up to ··· 598 620 * only once at the end of the cleaning routine. 
599 621 * @clean_budget: singleq only, queue cleaning budget 600 622 * @cleaned_pkts: Number of packets cleaned for the above said case 601 - * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather 602 - * @stash: Tx buffer stash for Flow-based scheduling mode 603 - * @compl_tag_bufid_m: Completion tag buffer id mask 604 - * @compl_tag_cur_gen: Used to keep track of current completion tag generation 605 - * @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset 623 + * @refillq: Pointer to refill queue 606 624 * @cached_tstamp_caps: Tx timestamp capabilities negotiated with the CP 607 625 * @tstamp_task: Work that handles Tx timestamp read 608 626 * @stats_sync: See struct u64_stats_sync ··· 607 633 * @size: Length of descriptor ring in bytes 608 634 * @dma: Physical address of ring 609 635 * @q_vector: Backreference to associated vector 636 + * @buf_pool_size: Total number of idpf_tx_buf 610 637 */ 611 638 struct idpf_tx_queue { 612 639 __cacheline_group_begin_aligned(read_mostly); ··· 629 654 u16 desc_count; 630 655 631 656 u16 tx_min_pkt_len; 632 - u16 compl_tag_gen_s; 633 657 634 658 struct net_device *netdev; 635 659 __cacheline_group_end_aligned(read_mostly); ··· 636 662 __cacheline_group_begin_aligned(read_write); 637 663 u16 next_to_use; 638 664 u16 next_to_clean; 665 + u16 last_re; 666 + u16 tx_max_bufs; 639 667 640 668 union { 641 669 u32 cleaned_bytes; ··· 645 669 }; 646 670 u16 cleaned_pkts; 647 671 648 - u16 tx_max_bufs; 649 - struct idpf_txq_stash *stash; 650 - 651 - u16 compl_tag_bufid_m; 652 - u16 compl_tag_cur_gen; 653 - u16 compl_tag_gen_max; 672 + struct idpf_sw_queue *refillq; 654 673 655 674 struct idpf_ptp_vport_tx_tstamp_caps *cached_tstamp_caps; 656 675 struct work_struct *tstamp_task; ··· 660 689 dma_addr_t dma; 661 690 662 691 struct idpf_q_vector *q_vector; 692 + u32 buf_pool_size; 663 693 __cacheline_group_end_aligned(cold); 664 694 }; 665 695 libeth_cacheline_set_assert(struct idpf_tx_queue, 64, 666 - 
112 + sizeof(struct u64_stats_sync), 667 - 24); 696 + 104 + sizeof(struct u64_stats_sync), 697 + 32); 668 698 669 699 /** 670 700 * struct idpf_buf_queue - software structure representing a buffer queue ··· 875 903 * @vport: Vport back pointer 876 904 * @num_txq: Number of TX queues associated 877 905 * @txqs: Array of TX queue pointers 878 - * @stashes: array of OOO stashes for the queues 879 906 * @complq: Associated completion queue pointer, split queue only 880 907 * @num_completions_pending: Total number of completions pending for the 881 908 * completion queue, acculumated for all TX queues ··· 889 918 890 919 u16 num_txq; 891 920 struct idpf_tx_queue *txqs[IDPF_LARGE_MAX_Q]; 892 - struct idpf_txq_stash *stashes; 893 921 894 922 struct idpf_compl_queue *complq; 895 923 ··· 981 1011 reg->dyn_ctl); 982 1012 } 983 1013 1014 + /** 1015 + * idpf_tx_splitq_get_free_bufs - get number of free buf_ids in refillq 1016 + * @refillq: pointer to refillq containing buf_ids 1017 + */ 1018 + static inline u32 idpf_tx_splitq_get_free_bufs(struct idpf_sw_queue *refillq) 1019 + { 1020 + return (refillq->next_to_use > refillq->next_to_clean ? 
1021 + 0 : refillq->desc_count) + 1022 + refillq->next_to_use - refillq->next_to_clean - 1; 1023 + } 1024 + 984 1025 int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget); 985 1026 void idpf_vport_init_num_qs(struct idpf_vport *vport, 986 1027 struct virtchnl2_create_vport *vport_msg); ··· 1019 1038 bool xmit_more); 1020 1039 unsigned int idpf_size_to_txd_count(unsigned int size); 1021 1040 netdev_tx_t idpf_tx_drop_skb(struct idpf_tx_queue *tx_q, struct sk_buff *skb); 1022 - void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb, 1023 - struct idpf_tx_buf *first, u16 ring_idx); 1024 - unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq, 1025 - struct sk_buff *skb); 1041 + unsigned int idpf_tx_res_count_required(struct idpf_tx_queue *txq, 1042 + struct sk_buff *skb, u32 *buf_count); 1026 1043 void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue); 1027 1044 netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb, 1028 1045 struct idpf_tx_queue *tx_q);
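Both IDPF_DESC_UNUSED and the new idpf_tx_splitq_get_free_bufs in this header compute free ring slots with the same head/tail arithmetic, keeping one slot permanently empty so that equal indices unambiguously mean "empty". A standalone sketch of that arithmetic (toy name, not the kernel macro):

```c
#include <assert.h>
#include <stdint.h>

/* Free entries in a ring of `count` slots with producer index `ntu`
 * (next_to_use) and consumer index `ntc` (next_to_clean).  One slot is
 * always kept empty so ntu == ntc means "empty" rather than "full" --
 * the same convention as IDPF_DESC_UNUSED and
 * idpf_tx_splitq_get_free_bufs in the diff above. */
static uint32_t ring_unused(uint32_t ntu, uint32_t ntc, uint32_t count)
{
	return (ntc > ntu ? 0 : count) + ntc - ntu - 1;
}
```

For the refill queue the producer/consumer roles are swapped (the cleaner posts IDs, the xmit path consumes them), which is why idpf_tx_splitq_get_free_bufs uses next_to_use and next_to_clean in the opposite positions.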
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c
··· 3125 3125 if (err) 3126 3126 return err; 3127 3127 3128 - combo_ver = le32_to_cpu(civd.combo_ver); 3128 + combo_ver = get_unaligned_le32(&civd.combo_ver); 3129 3129 3130 3130 orom->major = (u8)FIELD_GET(IXGBE_OROM_VER_MASK, combo_ver); 3131 3131 orom->patch = (u8)FIELD_GET(IXGBE_OROM_VER_PATCH_MASK, combo_ver);
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_type_e610.h
··· 932 932 __le32 combo_ver; /* Combo Image Version number */ 933 933 u8 combo_name_len; /* Length of the unicode combo image version string, max of 32 */ 934 934 __le16 combo_name[32]; /* Unicode string representing the Combo Image version */ 935 - }; 935 + } __packed; 936 936 937 937 /* Function specific capabilities */ 938 938 struct ixgbe_hw_func_caps {
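The two ixgbe changes go together: marking the struct __packed can leave combo_ver at an unaligned offset, so the value must be read with get_unaligned_le32() instead of le32_to_cpu() on a directly loaded u32. A byte-wise sketch of what such an unaligned little-endian load does (the toy_* name is an assumption of this sketch, not the kernel helper):

```c
#include <assert.h>
#include <stdint.h>

/* Byte-wise little-endian 32-bit load, safe at any alignment.  This is
 * the guarantee the kernel's get_unaligned_le32() provides; a plain
 * 32-bit load through a __packed struct member may fault or trap on
 * strict-alignment architectures. */
static uint32_t toy_get_unaligned_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
```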
+7
drivers/net/ethernet/marvell/octeontx2/af/cgx.c
··· 1978 1978 goto err_release_regions; 1979 1979 } 1980 1980 1981 + if (!is_cn20k(pdev) && 1982 + !is_cgx_mapped_to_nix(pdev->subsystem_device, cgx->cgx_id)) { 1983 + dev_notice(dev, "CGX %d not mapped to NIX, skipping probe\n", 1984 + cgx->cgx_id); 1985 + goto err_release_regions; 1986 + } 1987 + 1981 1988 cgx->lmac_count = cgx->mac_ops->get_nr_lmacs(cgx); 1982 1989 if (!cgx->lmac_count) { 1983 1990 dev_notice(dev, "CGX %d LMAC count is zero, skipping probe\n", cgx->cgx_id);
+14
drivers/net/ethernet/marvell/octeontx2/af/rvu.h
··· 783 783 return false; 784 784 } 785 785 786 + static inline bool is_cgx_mapped_to_nix(unsigned short id, u8 cgx_id) 787 + { 788 + /* On CNF10KA and CNF10KB silicons only two CGX blocks are connected 789 + * to NIX. 790 + */ 791 + if (id == PCI_SUBSYS_DEVID_CNF10K_A || id == PCI_SUBSYS_DEVID_CNF10K_B) 792 + return cgx_id <= 1; 793 + 794 + return !(cgx_id && !(id == PCI_SUBSYS_DEVID_96XX || 795 + id == PCI_SUBSYS_DEVID_98XX || 796 + id == PCI_SUBSYS_DEVID_CN10K_A || 797 + id == PCI_SUBSYS_DEVID_CN10K_B)); 798 + } 799 + 786 800 static inline bool is_rvu_npc_hash_extract_en(struct rvu *rvu) 787 801 { 788 802 u64 npc_const3;
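The return expression of is_cgx_mapped_to_nix(), `!(cgx_id && !(id == A || id == B || ...))`, reads more naturally after De Morgan as "CGX 0 is always mapped; higher CGX IDs are mapped only on the listed silicon IDs". A toy version with made-up device IDs (0x9600, 0x9800 are assumptions of this sketch) showing the equivalent form:

```c
#include <assert.h>
#include <stdbool.h>

/* Equivalent, flattened form of the predicate:
 *   !(cgx_id && !listed)  ==  (cgx_id == 0) || listed
 * Device IDs here are placeholders, not the real OcteonTX2 subsystem
 * IDs. */
static bool toy_mapped(unsigned short id, unsigned char cgx_id)
{
	bool id_listed = (id == 0x9600 || id == 0x9800);

	return cgx_id == 0 || id_listed;
}
```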
+3 -1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
··· 124 124 dev_stats->rx_ucast_frames; 125 125 126 126 dev_stats->tx_bytes = OTX2_GET_TX_STATS(TX_OCTS); 127 - dev_stats->tx_drops = OTX2_GET_TX_STATS(TX_DROP); 127 + dev_stats->tx_drops = OTX2_GET_TX_STATS(TX_DROP) + 128 + (unsigned long)atomic_long_read(&dev_stats->tx_discards); 129 + 128 130 dev_stats->tx_bcast_frames = OTX2_GET_TX_STATS(TX_BCAST); 129 131 dev_stats->tx_mcast_frames = OTX2_GET_TX_STATS(TX_MCAST); 130 132 dev_stats->tx_ucast_frames = OTX2_GET_TX_STATS(TX_UCAST);
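The otx2 stats change folds a software discard counter into the hardware TX_DROP total, because packets dropped in the xmit path (bad length, etc.) never reach the hardware counter; the counter is an atomic_long since several CPUs can transmit concurrently. A C11 sketch of the same pattern with toy names:

```c
#include <assert.h>
#include <stdatomic.h>

/* Software-side drop counter folded into the reported total, mirroring
 * how the diff adds atomic_long_read(&dev_stats->tx_discards) on top of
 * the hardware TX_DROP register value. */
struct toy_tx_stats {
	atomic_long sw_discards;   /* dropped before reaching hardware */
};

static void toy_count_discard(struct toy_tx_stats *s)
{
	atomic_fetch_add(&s->sw_discards, 1);
}

static long toy_total_tx_drops(struct toy_tx_stats *s, long hw_drops)
{
	return hw_drops + atomic_load(&s->sw_discards);
}
```

The same counter is added to the PF, VF, and representor xmit paths in the hunks below.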
+1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
··· 153 153 u64 tx_bcast_frames; 154 154 u64 tx_mcast_frames; 155 155 u64 tx_drops; 156 + atomic_long_t tx_discards; 156 157 }; 157 158 158 159 /* Driver counted stats */
+3
drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
··· 2220 2220 { 2221 2221 struct otx2_nic *pf = netdev_priv(netdev); 2222 2222 int qidx = skb_get_queue_mapping(skb); 2223 + struct otx2_dev_stats *dev_stats; 2223 2224 struct otx2_snd_queue *sq; 2224 2225 struct netdev_queue *txq; 2225 2226 int sq_idx; ··· 2233 2232 /* Check for minimum and maximum packet length */ 2234 2233 if (skb->len <= ETH_HLEN || 2235 2234 (!skb_shinfo(skb)->gso_size && skb->len > pf->tx_max_pktlen)) { 2235 + dev_stats = &pf->hw.dev_stats; 2236 + atomic_long_inc(&dev_stats->tx_discards); 2236 2237 dev_kfree_skb(skb); 2237 2238 return NETDEV_TX_OK; 2238 2239 }
+10
drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
··· 417 417 { 418 418 struct otx2_nic *vf = netdev_priv(netdev); 419 419 int qidx = skb_get_queue_mapping(skb); 420 + struct otx2_dev_stats *dev_stats; 420 421 struct otx2_snd_queue *sq; 421 422 struct netdev_queue *txq; 423 + 424 + /* Check for minimum and maximum packet length */ 425 + if (skb->len <= ETH_HLEN || 426 + (!skb_shinfo(skb)->gso_size && skb->len > vf->tx_max_pktlen)) { 427 + dev_stats = &vf->hw.dev_stats; 428 + atomic_long_inc(&dev_stats->tx_discards); 429 + dev_kfree_skb(skb); 430 + return NETDEV_TX_OK; 431 + } 422 432 423 433 sq = &vf->qset.sq[qidx]; 424 434 txq = netdev_get_tx_queue(netdev, qidx);
+12 -1
drivers/net/ethernet/marvell/octeontx2/nic/rep.c
··· 371 371 stats->rx_mcast_frames = rsp->rx.mcast; 372 372 stats->tx_bytes = rsp->tx.octs; 373 373 stats->tx_frames = rsp->tx.ucast + rsp->tx.bcast + rsp->tx.mcast; 374 - stats->tx_drops = rsp->tx.drop; 374 + stats->tx_drops = rsp->tx.drop + 375 + (unsigned long)atomic_long_read(&stats->tx_discards); 375 376 exit: 376 377 mutex_unlock(&priv->mbox.lock); 377 378 } ··· 419 418 struct otx2_nic *pf = rep->mdev; 420 419 struct otx2_snd_queue *sq; 421 420 struct netdev_queue *txq; 421 + struct rep_stats *stats; 422 + 423 + /* Check for minimum and maximum packet length */ 424 + if (skb->len <= ETH_HLEN || 425 + (!skb_shinfo(skb)->gso_size && skb->len > pf->tx_max_pktlen)) { 426 + stats = &rep->stats; 427 + atomic_long_inc(&stats->tx_discards); 428 + dev_kfree_skb(skb); 429 + return NETDEV_TX_OK; 430 + } 422 431 423 432 sq = &pf->qset.sq[rep->rep_id]; 424 433 txq = netdev_get_tx_queue(dev, 0);
+1
drivers/net/ethernet/marvell/octeontx2/nic/rep.h
··· 27 27 u64 tx_bytes; 28 28 u64 tx_frames; 29 29 u64 tx_drops; 30 + atomic_long_t tx_discards; 30 31 }; 31 32 32 33 struct rep_dev {
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/devlink.c
··· 160 160 if (err) 161 161 return err; 162 162 163 - mlx5_unload_one_devl_locked(dev, true); 163 + mlx5_sync_reset_unload_flow(dev, true); 164 164 err = mlx5_health_wait_pci_up(dev); 165 165 if (err) 166 166 NL_SET_ERR_MSG_MOD(extack, "FW activate aborted, PCI reads fail after reset");
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
··· 575 575 if (err) 576 576 return err; 577 577 } 578 - priv->dcbx.xoff = xoff; 579 578 580 579 /* Apply the settings */ 581 580 if (update_buffer) { ··· 582 583 if (err) 583 584 return err; 584 585 } 586 + 587 + priv->dcbx.xoff = xoff; 585 588 586 589 if (update_prio2buffer) 587 590 err = mlx5e_port_set_priority2buffer(priv->mdev, prio2buffer);
+12
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
··· 66 66 struct mlx5e_bufferx_reg buffer[MLX5E_MAX_NETWORK_BUFFER]; 67 67 }; 68 68 69 + #ifdef CONFIG_MLX5_CORE_EN_DCB 69 70 int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv, 70 71 u32 change, unsigned int mtu, 71 72 struct ieee_pfc *pfc, 72 73 u32 *buffer_size, 73 74 u8 *prio2buffer); 75 + #else 76 + static inline int 77 + mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv, 78 + u32 change, unsigned int mtu, 79 + void *pfc, 80 + u32 *buffer_size, 81 + u8 *prio2buffer) 82 + { 83 + return 0; 84 + } 85 + #endif 74 86 75 87 int mlx5e_port_query_buffer(struct mlx5e_priv *priv, 76 88 struct mlx5e_port_buffer *port_buffer);
+18 -1
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 49 49 #include "en.h" 50 50 #include "en/dim.h" 51 51 #include "en/txrx.h" 52 + #include "en/port_buffer.h" 52 53 #include "en_tc.h" 53 54 #include "en_rep.h" 54 55 #include "en_accel/ipsec.h" ··· 139 138 if (up) { 140 139 netdev_info(priv->netdev, "Link up\n"); 141 140 netif_carrier_on(priv->netdev); 141 + mlx5e_port_manual_buffer_config(priv, 0, priv->netdev->mtu, 142 + NULL, NULL, NULL); 142 143 } else { 143 144 netdev_info(priv->netdev, "Link down\n"); 144 145 netif_carrier_off(priv->netdev); ··· 3043 3040 struct mlx5e_params *params = &priv->channels.params; 3044 3041 struct net_device *netdev = priv->netdev; 3045 3042 struct mlx5_core_dev *mdev = priv->mdev; 3046 - u16 mtu; 3043 + u16 mtu, prev_mtu; 3047 3044 int err; 3045 + 3046 + mlx5e_query_mtu(mdev, params, &prev_mtu); 3048 3047 3049 3048 err = mlx5e_set_mtu(mdev, params, params->sw_mtu); 3050 3049 if (err) ··· 3056 3051 if (mtu != params->sw_mtu) 3057 3052 netdev_warn(netdev, "%s: VPort MTU %d is different than netdev mtu %d\n", 3058 3053 __func__, mtu, params->sw_mtu); 3054 + 3055 + if (mtu != prev_mtu && MLX5_BUFFER_SUPPORTED(mdev)) { 3056 + err = mlx5e_port_manual_buffer_config(priv, 0, mtu, 3057 + NULL, NULL, NULL); 3058 + if (err) { 3059 + netdev_warn(netdev, "%s: Failed to set Xon/Xoff values with MTU %d (err %d), setting back to previous MTU %d\n", 3060 + __func__, mtu, err, prev_mtu); 3061 + 3062 + mlx5e_set_mtu(mdev, params, prev_mtu); 3063 + return err; 3064 + } 3065 + } 3059 3066 3060 3067 params->sw_mtu = mtu; 3061 3068 return 0;
+7 -8
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 3734 3734 char *value = val.vstr; 3735 3735 u8 eswitch_mode; 3736 3736 3737 + eswitch_mode = mlx5_eswitch_mode(dev); 3738 + if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) { 3739 + NL_SET_ERR_MSG_FMT_MOD(extack, 3740 + "Changing fs mode is not supported when eswitch offloads enabled."); 3741 + return -EOPNOTSUPP; 3742 + } 3743 + 3737 3744 if (!strcmp(value, "dmfs")) 3738 3745 return 0; 3739 3746 ··· 3764 3757 NL_SET_ERR_MSG_MOD(extack, 3765 3758 "Bad parameter: supported values are [\"dmfs\", \"smfs\", \"hmfs\"]"); 3766 3759 return -EINVAL; 3767 - } 3768 - 3769 - eswitch_mode = mlx5_eswitch_mode(dev); 3770 - if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) { 3771 - NL_SET_ERR_MSG_FMT_MOD(extack, 3772 - "Moving to %s is not supported when eswitch offloads enabled.", 3773 - value); 3774 - return -EOPNOTSUPP; 3775 3760 } 3776 3761 3777 3762 return 0;
+76 -56
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
··· 6 6 #include "fw_reset.h"
7 7 #include "diag/fw_tracer.h"
8 8 #include "lib/tout.h"
9 + #include "sf/sf.h"
9 10
10 11 enum {
11 12 MLX5_FW_RESET_FLAGS_RESET_REQUESTED,
12 13 MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST,
13 14 MLX5_FW_RESET_FLAGS_PENDING_COMP,
14 15 MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS,
15 - MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED
16 + MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED,
17 + MLX5_FW_RESET_FLAGS_UNLOAD_EVENT,
16 18 };
17 19
18 20 struct mlx5_fw_reset {
··· 221 219 return mlx5_reg_mfrl_set(dev, MLX5_MFRL_REG_RESET_LEVEL0, 0, 0, false);
222 220 }
223 221
224 - static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev, bool unloaded)
222 + static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
225 223 {
226 224 struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
227 225 struct devlink *devlink = priv_to_devlink(dev);
··· 230 228 if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) {
231 229 complete(&fw_reset->done);
232 230 } else {
233 - if (!unloaded)
234 - mlx5_unload_one(dev, false);
231 + mlx5_sync_reset_unload_flow(dev, false);
235 232 if (mlx5_health_wait_pci_up(dev))
236 233 mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n");
237 234 else
··· 273 272
274 273 mlx5_sync_reset_clear_reset_requested(dev, false);
275 274 mlx5_enter_error_state(dev, true);
276 - mlx5_fw_reset_complete_reload(dev, false);
275 + mlx5_fw_reset_complete_reload(dev);
277 276 }
278 277
279 278 #define MLX5_RESET_POLL_INTERVAL (HZ / 10)
··· 426 425
427 426 if (!MLX5_CAP_GEN(dev, fast_teardown)) {
428 427 mlx5_core_warn(dev, "fast teardown is not supported by firmware\n");
428 + return false;
429 + }
430 +
431 + if (!mlx5_core_is_ecpf(dev) && !mlx5_sf_table_empty(dev)) {
432 + mlx5_core_warn(dev, "SFs should be removed before reset\n");
429 433 return false;
430 434 }
··· 592 586 return err;
593 587 }
594 588
595 - static void mlx5_sync_reset_now_event(struct work_struct *work)
589 + void mlx5_sync_reset_unload_flow(struct mlx5_core_dev *dev, bool locked)
596 590 {
597 - struct mlx5_fw_reset *fw_reset = container_of(work, struct mlx5_fw_reset,
598 - reset_now_work);
599 - struct mlx5_core_dev *dev = fw_reset->dev;
600 - int err;
601 -
602 - if (mlx5_sync_reset_clear_reset_requested(dev, false))
603 - return;
604 -
605 - mlx5_core_warn(dev, "Sync Reset now. Device is going to reset.\n");
606 -
607 - err = mlx5_cmd_fast_teardown_hca(dev);
608 - if (err) {
609 - mlx5_core_warn(dev, "Fast teardown failed, no reset done, err %d\n", err);
610 - goto done;
611 - }
612 -
613 - err = mlx5_sync_pci_reset(dev, fw_reset->reset_method);
614 - if (err) {
615 - mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, no reset done, err %d\n", err);
616 - set_bit(MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED, &fw_reset->reset_flags);
617 - }
618 -
619 - mlx5_enter_error_state(dev, true);
620 - done:
621 - fw_reset->ret = err;
622 - mlx5_fw_reset_complete_reload(dev, false);
623 - }
624 -
625 - static void mlx5_sync_reset_unload_event(struct work_struct *work)
626 - {
627 - struct mlx5_fw_reset *fw_reset;
628 - struct mlx5_core_dev *dev;
591 + struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
629 592 unsigned long timeout;
630 593 int poll_freq = 20;
631 594 bool reset_action;
632 595 u8 rst_state;
633 596 int err;
634 597
635 - fw_reset = container_of(work, struct mlx5_fw_reset, reset_unload_work);
636 - dev = fw_reset->dev;
637 -
638 - if (mlx5_sync_reset_clear_reset_requested(dev, false))
639 - return;
640 -
641 - mlx5_core_warn(dev, "Sync Reset Unload. Function is forced down.\n");
642 -
643 - err = mlx5_cmd_fast_teardown_hca(dev);
644 - if (err)
645 - mlx5_core_warn(dev, "Fast teardown failed, unloading, err %d\n", err);
646 - else
647 - mlx5_enter_error_state(dev, true);
648 -
649 - if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags))
598 + if (locked)
650 599 mlx5_unload_one_devl_locked(dev, false);
651 600 else
652 601 mlx5_unload_one(dev, false);
602 +
603 + if (!test_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags))
604 + return;
653 605
654 606 mlx5_set_fw_rst_ack(dev);
655 607 mlx5_core_warn(dev, "Sync Reset Unload done, device reset expected\n");
··· 636 672 goto done;
637 673 }
638 674
639 - mlx5_core_warn(dev, "Sync Reset, got reset action. rst_state = %u\n", rst_state);
675 + mlx5_core_warn(dev, "Sync Reset, got reset action. rst_state = %u\n",
676 + rst_state);
640 677 if (rst_state == MLX5_FW_RST_STATE_TOGGLE_REQ) {
641 678 err = mlx5_sync_pci_reset(dev, fw_reset->reset_method);
642 679 if (err) {
643 - mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, err %d\n", err);
680 + mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, err %d\n",
681 + err);
644 682 fw_reset->ret = err;
645 683 }
646 684 }
647 685
648 686 done:
649 - mlx5_fw_reset_complete_reload(dev, true);
687 + clear_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags);
688 + }
689 +
690 + static void mlx5_sync_reset_now_event(struct work_struct *work)
691 + {
692 + struct mlx5_fw_reset *fw_reset = container_of(work, struct mlx5_fw_reset,
693 + reset_now_work);
694 + struct mlx5_core_dev *dev = fw_reset->dev;
695 + int err;
696 +
697 + if (mlx5_sync_reset_clear_reset_requested(dev, false))
698 + return;
699 +
700 + mlx5_core_warn(dev, "Sync Reset now. Device is going to reset.\n");
701 +
702 + err = mlx5_cmd_fast_teardown_hca(dev);
703 + if (err) {
704 + mlx5_core_warn(dev, "Fast teardown failed, no reset done, err %d\n", err);
705 + goto done;
706 + }
707 +
708 + err = mlx5_sync_pci_reset(dev, fw_reset->reset_method);
709 + if (err) {
710 + mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, no reset done, err %d\n", err);
711 + set_bit(MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED, &fw_reset->reset_flags);
712 + }
713 +
714 + mlx5_enter_error_state(dev, true);
715 + done:
716 + fw_reset->ret = err;
717 + mlx5_fw_reset_complete_reload(dev);
718 + }
719 +
720 + static void mlx5_sync_reset_unload_event(struct work_struct *work)
721 + {
722 + struct mlx5_fw_reset *fw_reset;
723 + struct mlx5_core_dev *dev;
724 + int err;
725 +
726 + fw_reset = container_of(work, struct mlx5_fw_reset, reset_unload_work);
727 + dev = fw_reset->dev;
728 +
729 + if (mlx5_sync_reset_clear_reset_requested(dev, false))
730 + return;
731 +
732 + set_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags);
733 + mlx5_core_warn(dev, "Sync Reset Unload. Function is forced down.\n");
734 +
735 + err = mlx5_cmd_fast_teardown_hca(dev);
736 + if (err)
737 + mlx5_core_warn(dev, "Fast teardown failed, unloading, err %d\n", err);
738 + else
739 + mlx5_enter_error_state(dev, true);
740 +
741 + mlx5_fw_reset_complete_reload(dev);
650 742 }
651 743
652 744 static void mlx5_sync_reset_abort_event(struct work_struct *work)
+1
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
··· 12 12 int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev); 13 13 14 14 int mlx5_fw_reset_wait_reset_done(struct mlx5_core_dev *dev); 15 + void mlx5_sync_reset_unload_flow(struct mlx5_core_dev *dev, bool locked); 15 16 int mlx5_fw_reset_verify_fw_complete(struct mlx5_core_dev *dev, 16 17 struct netlink_ext_ack *extack); 17 18 void mlx5_fw_reset_events_start(struct mlx5_core_dev *dev);
+10
drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
··· 518 518 WARN_ON(!xa_empty(&table->function_ids)); 519 519 kfree(table); 520 520 } 521 + 522 + bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev) 523 + { 524 + struct mlx5_sf_table *table = dev->priv.sf_table; 525 + 526 + if (!table) 527 + return true; 528 + 529 + return xa_empty(&table->function_ids); 530 + }
+6
drivers/net/ethernet/mellanox/mlx5/core/sf/sf.h
··· 17 17 18 18 int mlx5_sf_table_init(struct mlx5_core_dev *dev); 19 19 void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev); 20 + bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev); 20 21 21 22 int mlx5_devlink_sf_port_new(struct devlink *devlink, 22 23 const struct devlink_port_new_attrs *add_attr, ··· 60 59 61 60 static inline void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev) 62 61 { 62 + } 63 + 64 + static inline bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev) 65 + { 66 + return true; 63 67 } 64 68 65 69 #endif
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
··· 117 117 mlx5hws_err(ctx, "No such stc_type: %d\n", stc_type); 118 118 pr_warn("HWS: Invalid stc_type: %d\n", stc_type); 119 119 ret = -EINVAL; 120 - goto unlock_and_out; 120 + goto free_shared_stc; 121 121 } 122 122 123 123 ret = mlx5hws_action_alloc_single_stc(ctx, &stc_attr, tbl_type,
+4 -2
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pat_arg.c
··· 279 279 return ret; 280 280 281 281 clean_pattern: 282 - mlx5hws_cmd_header_modify_pattern_destroy(ctx->mdev, *pattern_id); 282 + mlx5hws_cmd_header_modify_pattern_destroy(ctx->mdev, ptrn_id); 283 283 out_unlock: 284 284 mutex_unlock(&ctx->pattern_cache->lock); 285 285 return ret; ··· 527 527 u32 *nop_locations, __be64 *new_pat) 528 528 { 529 529 u16 prev_src_field = INVALID_FIELD, prev_dst_field = INVALID_FIELD; 530 - u16 src_field, dst_field; 531 530 u8 action_type; 532 531 bool dependent; 533 532 size_t i, j; ··· 538 539 return 0; 539 540 540 541 for (i = 0, j = 0; i < num_actions; i++, j++) { 542 + u16 src_field = INVALID_FIELD; 543 + u16 dst_field = INVALID_FIELD; 544 + 541 545 if (j >= max_actions) 542 546 return -EINVAL; 543 547
+1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c
··· 124 124 mlx5hws_err(pool->ctx, "Failed to create resource type: %d size %zu\n", 125 125 pool->type, pool->alloc_log_sz); 126 126 mlx5hws_buddy_cleanup(buddy); 127 + kfree(buddy); 127 128 return -ENOMEM; 128 129 } 129 130
+4
drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
··· 52 52 fbnic_bmc_rpc_init(fbd); 53 53 fbnic_rss_reinit(fbd, fbn); 54 54 55 + phylink_resume(fbn->phylink); 56 + 55 57 return 0; 56 58 time_stop: 57 59 fbnic_time_stop(fbn); ··· 85 83 static int fbnic_stop(struct net_device *netdev) 86 84 { 87 85 struct fbnic_net *fbn = netdev_priv(netdev); 86 + 87 + phylink_suspend(fbn->phylink, fbnic_bmc_present(fbn->fbd)); 88 88 89 89 fbnic_down(fbn); 90 90 fbnic_pcs_free_irq(fbn->fbd);
+6 -9
drivers/net/ethernet/meta/fbnic/fbnic_pci.c
··· 118 118 struct fbnic_dev *fbd = fbn->fbd; 119 119 120 120 schedule_delayed_work(&fbd->service_task, HZ); 121 - phylink_resume(fbn->phylink); 122 121 } 123 122 124 123 static void fbnic_service_task_stop(struct fbnic_net *fbn) 125 124 { 126 125 struct fbnic_dev *fbd = fbn->fbd; 127 126 128 - phylink_suspend(fbn->phylink, fbnic_bmc_present(fbd)); 129 127 cancel_delayed_work(&fbd->service_task); 130 128 } 131 129 ··· 442 444 443 445 /* Re-enable mailbox */ 444 446 err = fbnic_fw_request_mbx(fbd); 447 + devl_unlock(priv_to_devlink(fbd)); 445 448 if (err) 446 449 goto err_free_irqs; 447 - 448 - devl_unlock(priv_to_devlink(fbd)); 449 450 450 451 /* Only send log history if log buffer is empty to prevent duplicate 451 452 * log entries. ··· 462 465 463 466 rtnl_lock(); 464 467 465 - if (netif_running(netdev)) { 468 + if (netif_running(netdev)) 466 469 err = __fbnic_open(fbn); 467 - if (err) 468 - goto err_free_mbx; 469 - } 470 470 471 471 rtnl_unlock(); 472 + if (err) 473 + goto err_free_mbx; 472 474 473 475 return 0; 474 476 err_free_mbx: 475 477 fbnic_fw_log_disable(fbd); 476 478 477 - rtnl_unlock(); 479 + devl_lock(priv_to_devlink(fbd)); 478 480 fbnic_fw_free_mbx(fbd); 481 + devl_unlock(priv_to_devlink(fbd)); 479 482 err_free_irqs: 480 483 fbnic_free_irqs(fbd); 481 484 err_invalidate_uc_addr:
+11 -2
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
··· 49 49 writel(XGMAC_INT_DEFAULT_EN, ioaddr + XGMAC_INT_EN); 50 50 } 51 51 52 + static void dwxgmac2_update_caps(struct stmmac_priv *priv) 53 + { 54 + if (!priv->dma_cap.mbps_10_100) 55 + priv->hw->link.caps &= ~(MAC_10 | MAC_100); 56 + else if (!priv->dma_cap.half_duplex) 57 + priv->hw->link.caps &= ~(MAC_10HD | MAC_100HD); 58 + } 59 + 52 60 static void dwxgmac2_set_mac(void __iomem *ioaddr, bool enable) 53 61 { 54 62 u32 tx = readl(ioaddr + XGMAC_TX_CONFIG); ··· 1432 1424 1433 1425 const struct stmmac_ops dwxgmac210_ops = { 1434 1426 .core_init = dwxgmac2_core_init, 1427 + .update_caps = dwxgmac2_update_caps, 1435 1428 .set_mac = dwxgmac2_set_mac, 1436 1429 .rx_ipc = dwxgmac2_rx_ipc, 1437 1430 .rx_queue_enable = dwxgmac2_rx_queue_enable, ··· 1541 1532 mac->mcast_bits_log2 = ilog2(mac->multicast_filter_bins); 1542 1533 1543 1534 mac->link.caps = MAC_ASYM_PAUSE | MAC_SYM_PAUSE | 1544 - MAC_1000FD | MAC_2500FD | MAC_5000FD | 1545 - MAC_10000FD; 1535 + MAC_10 | MAC_100 | MAC_1000FD | 1536 + MAC_2500FD | MAC_5000FD | MAC_10000FD; 1546 1537 mac->link.duplex = 0; 1547 1538 mac->link.speed10 = XGMAC_CONFIG_SS_10_MII; 1548 1539 mac->link.speed100 = XGMAC_CONFIG_SS_100_MII;
+5 -4
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
··· 203 203 } 204 204 205 205 writel(value, ioaddr + XGMAC_MTL_RXQ_OPMODE(channel)); 206 - 207 - /* Enable MTL RX overflow */ 208 - value = readl(ioaddr + XGMAC_MTL_QINTEN(channel)); 209 - writel(value | XGMAC_RXOIE, ioaddr + XGMAC_MTL_QINTEN(channel)); 210 206 } 211 207 212 208 static void dwxgmac2_dma_tx_mode(struct stmmac_priv *priv, void __iomem *ioaddr, ··· 382 386 static int dwxgmac2_get_hw_feature(void __iomem *ioaddr, 383 387 struct dma_features *dma_cap) 384 388 { 389 + struct stmmac_priv *priv; 385 390 u32 hw_cap; 391 + 392 + priv = container_of(dma_cap, struct stmmac_priv, dma_cap); 386 393 387 394 /* MAC HW feature 0 */ 388 395 hw_cap = readl(ioaddr + XGMAC_HW_FEATURE0); ··· 409 410 dma_cap->vlhash = (hw_cap & XGMAC_HWFEAT_VLHASH) >> 4; 410 411 dma_cap->half_duplex = (hw_cap & XGMAC_HWFEAT_HDSEL) >> 3; 411 412 dma_cap->mbps_1000 = (hw_cap & XGMAC_HWFEAT_GMIISEL) >> 1; 413 + if (dma_cap->mbps_1000 && priv->synopsys_id >= DWXGMAC_CORE_2_20) 414 + dma_cap->mbps_10_100 = 1; 412 415 413 416 /* MAC HW feature 1 */ 414 417 hw_cap = readl(ioaddr + XGMAC_HW_FEATURE1);
+4 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2586 2586 struct netdev_queue *nq = netdev_get_tx_queue(priv->dev, queue); 2587 2587 struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; 2588 2588 struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue]; 2589 + bool csum = !priv->plat->tx_queues_cfg[queue].coe_unsupported; 2589 2590 struct xsk_buff_pool *pool = tx_q->xsk_pool; 2590 2591 unsigned int entry = tx_q->cur_tx; 2591 2592 struct dma_desc *tx_desc = NULL; ··· 2674 2673 } 2675 2674 2676 2675 stmmac_prepare_tx_desc(priv, tx_desc, 1, xdp_desc.len, 2677 - true, priv->mode, true, true, 2676 + csum, priv->mode, true, true, 2678 2677 xdp_desc.len); 2679 2678 2680 2679 stmmac_enable_dma_transmission(priv, priv->ioaddr, queue); ··· 4989 4988 { 4990 4989 struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue]; 4991 4990 struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; 4991 + bool csum = !priv->plat->tx_queues_cfg[queue].coe_unsupported; 4992 4992 unsigned int entry = tx_q->cur_tx; 4993 4993 struct dma_desc *tx_desc; 4994 4994 dma_addr_t dma_addr; ··· 5041 5039 stmmac_set_desc_addr(priv, tx_desc, dma_addr); 5042 5040 5043 5041 stmmac_prepare_tx_desc(priv, tx_desc, 1, xdpf->len, 5044 - true, priv->mode, true, true, 5042 + csum, priv->mode, true, true, 5045 5043 xdpf->len); 5046 5044 5047 5045 tx_q->tx_count_frames++;
+8 -9
drivers/net/hyperv/netvsc.c
··· 1812 1812 1813 1813 /* Enable NAPI handler before init callbacks */ 1814 1814 netif_napi_add(ndev, &net_device->chan_table[0].napi, netvsc_poll); 1815 + napi_enable(&net_device->chan_table[0].napi); 1816 + netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, 1817 + &net_device->chan_table[0].napi); 1818 + netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, 1819 + &net_device->chan_table[0].napi); 1815 1820 1816 1821 /* Open the channel */ 1817 1822 device->channel->next_request_id_callback = vmbus_next_request_id; ··· 1836 1831 /* Channel is opened */ 1837 1832 netdev_dbg(ndev, "hv_netvsc channel opened successfully\n"); 1838 1833 1839 - napi_enable(&net_device->chan_table[0].napi); 1840 - netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, 1841 - &net_device->chan_table[0].napi); 1842 - netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, 1843 - &net_device->chan_table[0].napi); 1844 - 1845 1834 /* Connect with the NetVsp */ 1846 1835 ret = netvsc_connect_vsp(device, net_device, device_info); 1847 1836 if (ret != 0) { ··· 1853 1854 1854 1855 close: 1855 1856 RCU_INIT_POINTER(net_device_ctx->nvdev, NULL); 1856 - netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, NULL); 1857 - netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, NULL); 1858 - napi_disable(&net_device->chan_table[0].napi); 1859 1857 1860 1858 /* Now, we can close the channel safely */ 1861 1859 vmbus_close(device->channel); 1862 1860 1863 1861 cleanup: 1862 + netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, NULL); 1863 + netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, NULL); 1864 + napi_disable(&net_device->chan_table[0].napi); 1864 1865 netif_napi_del(&net_device->chan_table[0].napi); 1865 1866 1866 1867 cleanup2:
+16 -7
drivers/net/hyperv/rndis_filter.c
··· 1252 1252 new_sc->rqstor_size = netvsc_rqstor_size(netvsc_ring_bytes); 1253 1253 new_sc->max_pkt_size = NETVSC_MAX_PKT_SIZE; 1254 1254 1255 + /* Enable napi before opening the vmbus channel to avoid races 1256 + * as the host placing data on the host->guest ring may be left 1257 + * out if napi was not enabled. 1258 + */ 1259 + napi_enable(&nvchan->napi); 1260 + netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX, 1261 + &nvchan->napi); 1262 + netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX, 1263 + &nvchan->napi); 1264 + 1255 1265 ret = vmbus_open(new_sc, netvsc_ring_bytes, 1256 1266 netvsc_ring_bytes, NULL, 0, 1257 1267 netvsc_channel_cb, nvchan); 1258 - if (ret == 0) { 1259 - napi_enable(&nvchan->napi); 1260 - netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX, 1261 - &nvchan->napi); 1262 - netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX, 1263 - &nvchan->napi); 1264 - } else { 1268 + if (ret != 0) { 1265 1269 netdev_notice(ndev, "sub channel open failed: %d\n", ret); 1270 + netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX, 1271 + NULL); 1272 + netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX, 1273 + NULL); 1274 + napi_disable(&nvchan->napi); 1266 1275 } 1267 1276 1268 1277 if (atomic_inc_return(&nvscdev->open_chn) == nvscdev->num_chn)
+4
drivers/net/phy/mscc/mscc.h
··· 484 484 void vsc85xx_link_change_notify(struct phy_device *phydev); 485 485 void vsc8584_config_ts_intr(struct phy_device *phydev); 486 486 int vsc8584_ptp_init(struct phy_device *phydev); 487 + void vsc8584_ptp_deinit(struct phy_device *phydev); 487 488 int vsc8584_ptp_probe_once(struct phy_device *phydev); 488 489 int vsc8584_ptp_probe(struct phy_device *phydev); 489 490 irqreturn_t vsc8584_handle_ts_interrupt(struct phy_device *phydev); ··· 498 497 static inline int vsc8584_ptp_init(struct phy_device *phydev) 499 498 { 500 499 return 0; 500 + } 501 + static inline void vsc8584_ptp_deinit(struct phy_device *phydev) 502 + { 501 503 } 502 504 static inline int vsc8584_ptp_probe_once(struct phy_device *phydev) 503 505 {
+1 -3
drivers/net/phy/mscc/mscc_main.c
··· 2359 2359 2360 2360 static void vsc85xx_remove(struct phy_device *phydev) 2361 2361 { 2362 - struct vsc8531_private *priv = phydev->priv; 2363 - 2364 - skb_queue_purge(&priv->rx_skbs_list); 2362 + vsc8584_ptp_deinit(phydev); 2365 2363 } 2366 2364 2367 2365 /* Microsemi VSC85xx PHYs */
+21 -13
drivers/net/phy/mscc/mscc_ptp.c
··· 1298 1298 1299 1299 static int __vsc8584_init_ptp(struct phy_device *phydev) 1300 1300 { 1301 - struct vsc8531_private *vsc8531 = phydev->priv; 1302 1301 static const u32 ltc_seq_e[] = { 0, 400000, 0, 0, 0 }; 1303 1302 static const u8 ltc_seq_a[] = { 8, 6, 5, 4, 2 }; 1304 1303 u32 val; ··· 1514 1515 1515 1516 vsc85xx_ts_eth_cmp1_sig(phydev); 1516 1517 1517 - vsc8531->mii_ts.rxtstamp = vsc85xx_rxtstamp; 1518 - vsc8531->mii_ts.txtstamp = vsc85xx_txtstamp; 1519 - vsc8531->mii_ts.hwtstamp = vsc85xx_hwtstamp; 1520 - vsc8531->mii_ts.ts_info = vsc85xx_ts_info; 1521 - phydev->mii_ts = &vsc8531->mii_ts; 1522 - 1523 - memcpy(&vsc8531->ptp->caps, &vsc85xx_clk_caps, sizeof(vsc85xx_clk_caps)); 1524 - 1525 - vsc8531->ptp->ptp_clock = ptp_clock_register(&vsc8531->ptp->caps, 1526 - &phydev->mdio.dev); 1527 - return PTR_ERR_OR_ZERO(vsc8531->ptp->ptp_clock); 1518 + return 0; 1528 1519 } 1529 1520 1530 1521 void vsc8584_config_ts_intr(struct phy_device *phydev) ··· 1539 1550 } 1540 1551 1541 1552 return 0; 1553 + } 1554 + 1555 + void vsc8584_ptp_deinit(struct phy_device *phydev) 1556 + { 1557 + struct vsc8531_private *vsc8531 = phydev->priv; 1558 + 1559 + if (vsc8531->ptp->ptp_clock) { 1560 + ptp_clock_unregister(vsc8531->ptp->ptp_clock); 1561 + skb_queue_purge(&vsc8531->rx_skbs_list); 1562 + } 1542 1563 } 1543 1564 1544 1565 irqreturn_t vsc8584_handle_ts_interrupt(struct phy_device *phydev) ··· 1611 1612 1612 1613 vsc8531->ptp->phydev = phydev; 1613 1614 1614 - return 0; 1615 + vsc8531->mii_ts.rxtstamp = vsc85xx_rxtstamp; 1616 + vsc8531->mii_ts.txtstamp = vsc85xx_txtstamp; 1617 + vsc8531->mii_ts.hwtstamp = vsc85xx_hwtstamp; 1618 + vsc8531->mii_ts.ts_info = vsc85xx_ts_info; 1619 + phydev->mii_ts = &vsc8531->mii_ts; 1620 + 1621 + memcpy(&vsc8531->ptp->caps, &vsc85xx_clk_caps, sizeof(vsc85xx_clk_caps)); 1622 + vsc8531->ptp->ptp_clock = ptp_clock_register(&vsc8531->ptp->caps, 1623 + &phydev->mdio.dev); 1624 + return PTR_ERR_OR_ZERO(vsc8531->ptp->ptp_clock); 1615 1625 } 1616 1626 
1617 1627 int vsc8584_ptp_probe_once(struct phy_device *phydev)
+3
drivers/net/usb/qmi_wwan.c
··· 1355 1355 {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ 1356 1356 {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */ 1357 1357 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */ 1358 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1034, 2)}, /* Telit LE910C4-WWX */ 1359 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1037, 4)}, /* Telit LE910C4-WWX */ 1360 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1038, 3)}, /* Telit LE910C4-WWX */ 1358 1361 {QMI_QUIRK_SET_DTR(0x1bc7, 0x103a, 0)}, /* Telit LE910C4-WWX */ 1359 1362 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ 1360 1363 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */
+4 -3
drivers/net/virtio_net.c
··· 5758 5758 disable_rx_mode_work(vi); 5759 5759 flush_work(&vi->rx_mode_work); 5760 5760 5761 - netif_tx_lock_bh(vi->dev); 5762 - netif_device_detach(vi->dev); 5763 - netif_tx_unlock_bh(vi->dev); 5764 5761 if (netif_running(vi->dev)) { 5765 5762 rtnl_lock(); 5766 5763 virtnet_close(vi->dev); 5767 5764 rtnl_unlock(); 5768 5765 } 5766 + 5767 + netif_tx_lock_bh(vi->dev); 5768 + netif_device_detach(vi->dev); 5769 + netif_tx_unlock_bh(vi->dev); 5769 5770 } 5770 5771 5771 5772 static int init_vqs(struct virtnet_info *vi);
+2 -2
drivers/of/device.c
··· 17 17 18 18 /** 19 19 * of_match_device - Tell if a struct device matches an of_device_id list 20 - * @matches: array of of device match structures to search in 21 - * @dev: the of device structure to match against 20 + * @matches: array of of_device_id match structures to search in 21 + * @dev: the OF device structure to match against 22 22 * 23 23 * Used by a driver to check whether an platform_device present in the 24 24 * system is in its list of supported devices.
+7 -2
drivers/of/dynamic.c
··· 935 935 return -ENOMEM; 936 936 937 937 ret = of_changeset_add_property(ocs, np, new_pp); 938 - if (ret) 938 + if (ret) { 939 939 __of_prop_free(new_pp); 940 + return ret; 941 + } 940 942 941 - return ret; 943 + new_pp->next = np->deadprops; 944 + np->deadprops = new_pp; 945 + 946 + return 0; 942 947 } 943 948 944 949 /**
+13 -4
drivers/of/of_reserved_mem.c
··· 25 25 #include <linux/memblock.h> 26 26 #include <linux/kmemleak.h> 27 27 #include <linux/cma.h> 28 + #include <linux/dma-map-ops.h> 28 29 29 30 #include "of_private.h" 30 31 ··· 176 175 base = dt_mem_next_cell(dt_root_addr_cells, &prop); 177 176 size = dt_mem_next_cell(dt_root_size_cells, &prop); 178 177 179 - if (size && 180 - early_init_dt_reserve_memory(base, size, nomap) == 0) 178 + if (size && early_init_dt_reserve_memory(base, size, nomap) == 0) { 179 + /* Architecture specific contiguous memory fixup. */ 180 + if (of_flat_dt_is_compatible(node, "shared-dma-pool") && 181 + of_get_flat_dt_prop(node, "reusable", NULL)) 182 + dma_contiguous_early_fixup(base, size); 181 183 pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n", 182 184 uname, &base, (unsigned long)(size / SZ_1M)); 183 - else 185 + } else { 184 186 pr_err("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n", 185 187 uname, &base, (unsigned long)(size / SZ_1M)); 188 + } 186 189 187 190 len -= t_len; 188 191 } ··· 477 472 uname, (unsigned long)(size / SZ_1M)); 478 473 return -ENOMEM; 479 474 } 480 - 475 + /* Architecture specific contiguous memory fixup. */ 476 + if (of_flat_dt_is_compatible(node, "shared-dma-pool") && 477 + of_get_flat_dt_prop(node, "reusable", NULL)) 478 + dma_contiguous_early_fixup(base, size); 481 479 /* Save region in the reserved_mem array */ 482 480 fdt_reserved_mem_save_node(node, uname, base, size); 483 481 return 0; ··· 779 771 return -EINVAL; 780 772 781 773 resource_set_range(res, rmem->base, rmem->size); 774 + res->flags = IORESOURCE_MEM; 782 775 res->name = rmem->name; 783 776 return 0; 784 777 }
+1
drivers/pinctrl/Kconfig
··· 539 539 tristate "STMicroelectronics STMFX GPIO expander pinctrl driver" 540 540 depends on I2C 541 541 depends on OF_GPIO 542 + depends on HAS_IOMEM 542 543 select GENERIC_PINCONF 543 544 select GPIOLIB_IRQCHIP 544 545 select MFD_STMFX
+4 -4
drivers/pinctrl/mediatek/pinctrl-airoha.c
··· 2696 2696 arg = 1; 2697 2697 break; 2698 2698 default: 2699 - return -EOPNOTSUPP; 2699 + return -ENOTSUPP; 2700 2700 } 2701 2701 2702 2702 *config = pinconf_to_config_packed(param, arg); ··· 2788 2788 break; 2789 2789 } 2790 2790 default: 2791 - return -EOPNOTSUPP; 2791 + return -ENOTSUPP; 2792 2792 } 2793 2793 } 2794 2794 ··· 2805 2805 if (airoha_pinconf_get(pctrl_dev, 2806 2806 airoha_pinctrl_groups[group].pins[i], 2807 2807 config)) 2808 - return -EOPNOTSUPP; 2808 + return -ENOTSUPP; 2809 2809 2810 2810 if (i && cur_config != *config) 2811 - return -EOPNOTSUPP; 2811 + return -ENOTSUPP; 2812 2812 2813 2813 cur_config = *config; 2814 2814 }
+1 -1
drivers/pinctrl/meson/pinctrl-amlogic-a4.c
··· 1093 1093 { .compatible = "amlogic,pinctrl-s6", .data = &s6_priv_data, }, 1094 1094 { /* sentinel */ } 1095 1095 }; 1096 - MODULE_DEVICE_TABLE(of, aml_pctl_dt_match); 1096 + MODULE_DEVICE_TABLE(of, aml_pctl_of_match); 1097 1097 1098 1098 static struct platform_driver aml_pctl_driver = { 1099 1099 .driver = {
+1 -1
drivers/platform/x86/amd/hsmp/acpi.c
··· 504 504 505 505 dev_set_drvdata(dev, &hsmp_pdev->sock[sock_ind]); 506 506 507 - return ret; 507 + return 0; 508 508 } 509 509 510 510 static const struct bin_attribute hsmp_metric_tbl_attr = {
+5
drivers/platform/x86/amd/hsmp/hsmp.c
··· 356 356 if (!sock || !buf) 357 357 return -EINVAL; 358 358 359 + if (!sock->metric_tbl_addr) { 360 + dev_err(sock->dev, "Metrics table address not available\n"); 361 + return -ENOMEM; 362 + } 363 + 359 364 /* Do not support lseek(), also don't allow more than the size of metric table */ 360 365 if (size != sizeof(struct hsmp_metric_table)) { 361 366 dev_err(sock->dev, "Wrong buffer size\n");
+34 -20
drivers/platform/x86/amd/pmc/pmc-quirks.c
··· 28 28 .spurious_8042 = true,
29 29 };
30 30
31 + static struct quirk_entry quirk_s2idle_spurious_8042 = {
32 + .s2idle_bug_mmio = FCH_PM_BASE + FCH_PM_SCRATCH,
33 + .spurious_8042 = true,
34 + };
35 +
31 36 static const struct dmi_system_id fwbug_list[] = {
32 37 {
33 38 .ident = "L14 Gen2 AMD",
34 - .driver_data = &quirk_s2idle_bug,
39 + .driver_data = &quirk_s2idle_spurious_8042,
35 40 .matches = {
36 41 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
37 42 DMI_MATCH(DMI_PRODUCT_NAME, "20X5"),
··· 44 39 },
45 40 {
46 41 .ident = "T14s Gen2 AMD",
47 - .driver_data = &quirk_s2idle_bug,
42 + .driver_data = &quirk_s2idle_spurious_8042,
48 43 .matches = {
49 44 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
50 45 DMI_MATCH(DMI_PRODUCT_NAME, "20XF"),
··· 52 47 },
53 48 {
54 49 .ident = "X13 Gen2 AMD",
55 - .driver_data = &quirk_s2idle_bug,
50 + .driver_data = &quirk_s2idle_spurious_8042,
56 51 .matches = {
57 52 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
58 53 DMI_MATCH(DMI_PRODUCT_NAME, "20XH"),
··· 60 55 },
61 56 {
62 57 .ident = "T14 Gen2 AMD",
63 - .driver_data = &quirk_s2idle_bug,
58 + .driver_data = &quirk_s2idle_spurious_8042,
64 59 .matches = {
65 60 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
66 61 DMI_MATCH(DMI_PRODUCT_NAME, "20XK"),
··· 68 63 },
69 64 {
70 65 .ident = "T14 Gen1 AMD",
71 - .driver_data = &quirk_s2idle_bug,
66 + .driver_data = &quirk_s2idle_spurious_8042,
72 67 .matches = {
73 68 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
74 69 DMI_MATCH(DMI_PRODUCT_NAME, "20UD"),
··· 76 71 },
77 72 {
78 73 .ident = "T14 Gen1 AMD",
79 - .driver_data = &quirk_s2idle_bug,
74 + .driver_data = &quirk_s2idle_spurious_8042,
80 75 .matches = {
81 76 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
82 77 DMI_MATCH(DMI_PRODUCT_NAME, "20UE"),
··· 84 79 },
85 80 {
86 81 .ident = "T14s Gen1 AMD",
87 - .driver_data = &quirk_s2idle_bug,
82 + .driver_data = &quirk_s2idle_spurious_8042,
88 83 .matches = {
89 84 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
90 85 DMI_MATCH(DMI_PRODUCT_NAME, "20UH"),
··· 92 87 },
93 88 {
94 89 .ident = "T14s Gen1 AMD",
95 - .driver_data = &quirk_s2idle_bug,
90 + .driver_data = &quirk_s2idle_spurious_8042,
96 91 .matches = {
97 92 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
98 93 DMI_MATCH(DMI_PRODUCT_NAME, "20UJ"),
··· 100 95 },
101 96 {
102 97 .ident = "P14s Gen1 AMD",
103 - .driver_data = &quirk_s2idle_bug,
98 + .driver_data = &quirk_s2idle_spurious_8042,
104 99 .matches = {
105 100 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
106 101 DMI_MATCH(DMI_PRODUCT_NAME, "20Y1"),
··· 108 103 },
109 104 {
110 105 .ident = "P14s Gen2 AMD",
111 - .driver_data = &quirk_s2idle_bug,
106 + .driver_data = &quirk_s2idle_spurious_8042,
112 107 .matches = {
113 108 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
114 109 DMI_MATCH(DMI_PRODUCT_NAME, "21A0"),
··· 116 111 },
117 112 {
118 113 .ident = "P14s Gen2 AMD",
119 - .driver_data = &quirk_s2idle_bug,
114 + .driver_data = &quirk_s2idle_spurious_8042,
120 115 .matches = {
121 116 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
122 117 DMI_MATCH(DMI_PRODUCT_NAME, "21A1"),
··· 157 152 },
158 153 {
159 154 .ident = "IdeaPad 1 14AMN7",
160 - .driver_data = &quirk_s2idle_bug,
155 + .driver_data = &quirk_s2idle_spurious_8042,
161 156 .matches = {
162 157 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
163 158 DMI_MATCH(DMI_PRODUCT_NAME, "82VF"),
··· 165 160 },
166 161 {
167 162 .ident = "IdeaPad 1 15AMN7",
168 - .driver_data = &quirk_s2idle_bug,
163 + .driver_data = &quirk_s2idle_spurious_8042,
169 164 .matches = {
170 165 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
171 166 DMI_MATCH(DMI_PRODUCT_NAME, "82VG"),
··· 173 168 },
174 169 {
175 170 .ident = "IdeaPad 1 15AMN7",
176 - .driver_data = &quirk_s2idle_bug,
171 + .driver_data = &quirk_s2idle_spurious_8042,
177 172 .matches = {
178 173 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
179 174 DMI_MATCH(DMI_PRODUCT_NAME, "82X5"),
··· 181 176 },
182 177 {
183 178 .ident = "IdeaPad Slim 3 14AMN8",
184 - .driver_data = &quirk_s2idle_bug,
179 + .driver_data = &quirk_s2idle_spurious_8042,
185 180 .matches = {
186 181 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
187 182 DMI_MATCH(DMI_PRODUCT_NAME, "82XN"),
··· 189 184 },
190 185 {
191 186 .ident = "IdeaPad Slim 3 15AMN8",
192 - .driver_data = &quirk_s2idle_bug,
187 + .driver_data = &quirk_s2idle_spurious_8042,
193 188 .matches = {
194 189 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
195 190 DMI_MATCH(DMI_PRODUCT_NAME, "82XQ"),
··· 198 193 /* https://gitlab.freedesktop.org/drm/amd/-/issues/4434 */
199 194 {
200 195 .ident = "Lenovo Yoga 6 13ALC6",
201 - .driver_data = &quirk_s2idle_bug,
196 + .driver_data = &quirk_s2idle_spurious_8042,
202 197 .matches = {
203 198 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
204 199 DMI_MATCH(DMI_PRODUCT_NAME, "82ND"),
··· 207 202 /* https://gitlab.freedesktop.org/drm/amd/-/issues/2684 */
208 203 {
209 204 .ident = "HP Laptop 15s-eq2xxx",
210 - .driver_data = &quirk_s2idle_bug,
205 + .driver_data = &quirk_s2idle_spurious_8042,
211 206 .matches = {
212 207 DMI_MATCH(DMI_SYS_VENDOR, "HP"),
213 208 DMI_MATCH(DMI_PRODUCT_NAME, "HP Laptop 15s-eq2xxx"),
··· 290 285 {
291 286 const struct dmi_system_id *dmi_id;
292 287
288 + /*
289 + * IRQ1 may cause an interrupt during resume even without a keyboard
290 + * press.
291 + *
292 + * Affects Renoir, Cezanne and Barcelo SoCs
293 + *
294 + * A solution is available in PMFW 64.66.0, but it must be activated by
295 + * SBIOS. If SBIOS is known to have the fix a quirk can be added for
296 + * a given system to avoid workaround.
297 + */
293 298 if (dev->cpu_id == AMD_CPU_ID_CZN)
294 299 dev->disable_8042_wakeup = true;
295 300
··· 310 295 if (dev->quirks->s2idle_bug_mmio)
311 296 pr_info("Using s2idle quirk to avoid %s platform firmware bug\n",
312 297 dmi_id->ident);
313 - if (dev->quirks->spurious_8042)
314 - dev->disable_8042_wakeup = true;
298 + dev->disable_8042_wakeup = dev->quirks->spurious_8042;
315 299 }
-13
drivers/platform/x86/amd/pmc/pmc.c
··· 530 530 static int amd_pmc_wa_irq1(struct amd_pmc_dev *pdev) 531 531 { 532 532 struct device *d; 533 - int rc; 534 - 535 - /* cezanne platform firmware has a fix in 64.66.0 */ 536 - if (pdev->cpu_id == AMD_CPU_ID_CZN) { 537 - if (!pdev->major) { 538 - rc = amd_pmc_get_smu_version(pdev); 539 - if (rc) 540 - return rc; 541 - } 542 - 543 - if (pdev->major > 64 || (pdev->major == 64 && pdev->minor > 65)) 544 - return 0; 545 - } 546 533 547 534 d = bus_find_device_by_name(&serio_bus, NULL, "serio0"); 548 535 if (!d)
+9 -10
drivers/platform/x86/dell/dell-smbios-base.c
··· 39 39 struct smbios_device { 40 40 struct list_head list; 41 41 struct device *device; 42 + int priority; 42 43 int (*call_fn)(struct calling_interface_buffer *arg); 43 44 }; 44 45 ··· 146 145 } 147 146 EXPORT_SYMBOL_GPL(dell_smbios_error); 148 147 149 - int dell_smbios_register_device(struct device *d, void *call_fn) 148 + int dell_smbios_register_device(struct device *d, int priority, void *call_fn) 150 149 { 151 150 struct smbios_device *priv; 152 151 ··· 155 154 return -ENOMEM; 156 155 get_device(d); 157 156 priv->device = d; 157 + priv->priority = priority; 158 158 priv->call_fn = call_fn; 159 159 mutex_lock(&smbios_mutex); 160 160 list_add_tail(&priv->list, &smbios_device_list); ··· 294 292 295 293 int dell_smbios_call(struct calling_interface_buffer *buffer) 296 294 { 297 - int (*call_fn)(struct calling_interface_buffer *) = NULL; 298 - struct device *selected_dev = NULL; 295 + struct smbios_device *selected = NULL; 299 296 struct smbios_device *priv; 300 297 int ret; 301 298 302 299 mutex_lock(&smbios_mutex); 303 300 list_for_each_entry(priv, &smbios_device_list, list) { 304 - if (!selected_dev || priv->device->id >= selected_dev->id) { 305 - dev_dbg(priv->device, "Trying device ID: %d\n", 306 - priv->device->id); 307 - call_fn = priv->call_fn; 308 - selected_dev = priv->device; 301 + if (!selected || priv->priority >= selected->priority) { 302 + dev_dbg(priv->device, "Trying device ID: %d\n", priv->priority); 303 + selected = priv; 309 304 } 310 305 } 311 306 312 - if (!selected_dev) { 307 + if (!selected) { 313 308 ret = -ENODEV; 314 309 pr_err("No dell-smbios drivers are loaded\n"); 315 310 goto out_smbios_call; 316 311 } 317 312 318 - ret = call_fn(buffer); 313 + ret = selected->call_fn(buffer); 319 314 320 315 out_smbios_call: 321 316 mutex_unlock(&smbios_mutex);
+1 -2
drivers/platform/x86/dell/dell-smbios-smm.c
··· 125 125 if (ret) 126 126 goto fail_platform_device_add; 127 127 128 - ret = dell_smbios_register_device(&platform_device->dev, 129 - &dell_smbios_smm_call); 128 + ret = dell_smbios_register_device(&platform_device->dev, 0, &dell_smbios_smm_call); 130 129 if (ret) 131 130 goto fail_register; 132 131
+1 -3
drivers/platform/x86/dell/dell-smbios-wmi.c
··· 264 264 if (ret) 265 265 return ret; 266 266 267 - /* ID is used by dell-smbios to set priority of drivers */ 268 - wdev->dev.id = 1; 269 - ret = dell_smbios_register_device(&wdev->dev, &dell_smbios_wmi_call); 267 + ret = dell_smbios_register_device(&wdev->dev, 1, &dell_smbios_wmi_call); 270 268 if (ret) 271 269 return ret; 272 270
+1 -1
drivers/platform/x86/dell/dell-smbios.h
··· 64 64 struct calling_interface_token tokens[]; 65 65 } __packed; 66 66 67 - int dell_smbios_register_device(struct device *d, void *call_fn); 67 + int dell_smbios_register_device(struct device *d, int priority, void *call_fn); 68 68 void dell_smbios_unregister_device(struct device *d); 69 69 70 70 int dell_smbios_error(int value);
+2 -2
drivers/platform/x86/hp/hp-wmi.c
··· 92 92 "8A25" 93 93 }; 94 94 95 - /* DMI Board names of Victus 16-s1000 laptops */ 95 + /* DMI Board names of Victus 16-r1000 and Victus 16-s1000 laptops */ 96 96 static const char * const victus_s_thermal_profile_boards[] = { 97 - "8C9C" 97 + "8C99", "8C9C" 98 98 }; 99 99 100 100 enum hp_wmi_radio {
+6
drivers/platform/x86/intel/int3472/discrete.c
··· 193 193 *con_id = "privacy-led"; 194 194 *gpio_flags = GPIO_ACTIVE_HIGH; 195 195 break; 196 + case INT3472_GPIO_TYPE_HOTPLUG_DETECT: 197 + *con_id = "hpd"; 198 + *gpio_flags = GPIO_ACTIVE_HIGH; 199 + break; 196 200 case INT3472_GPIO_TYPE_POWER_ENABLE: 197 201 *con_id = "avdd"; 198 202 *gpio_flags = GPIO_ACTIVE_HIGH; ··· 227 223 * 0x0b Power enable 228 224 * 0x0c Clock enable 229 225 * 0x0d Privacy LED 226 + * 0x13 Hotplug detect 230 227 * 231 228 * There are some known platform specific quirks where that does not quite 232 229 * hold up; for example where a pin with type 0x01 (Power down) is mapped to ··· 297 292 switch (type) { 298 293 case INT3472_GPIO_TYPE_RESET: 299 294 case INT3472_GPIO_TYPE_POWERDOWN: 295 + case INT3472_GPIO_TYPE_HOTPLUG_DETECT: 300 296 ret = skl_int3472_map_gpio_to_sensor(int3472, agpio, con_id, gpio_flags); 301 297 if (ret) 302 298 err_msg = "Failed to map GPIO pin to sensor\n";
+5
drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
··· 192 192 static int write_eff_lat_ctrl(struct uncore_data *data, unsigned int val, enum uncore_index index) 193 193 { 194 194 struct tpmi_uncore_cluster_info *cluster_info; 195 + struct tpmi_uncore_struct *uncore_root; 195 196 u64 control; 196 197 197 198 cluster_info = container_of(data, struct tpmi_uncore_cluster_info, uncore_data); 199 + uncore_root = cluster_info->uncore_root; 200 + 201 + if (uncore_root->write_blocked) 202 + return -EPERM; 198 203 199 204 if (cluster_info->root_domain) 200 205 return -ENODATA;
+5 -8
drivers/regulator/pca9450-regulator.c
··· 40 40 struct device *dev; 41 41 struct regmap *regmap; 42 42 struct gpio_desc *sd_vsel_gpio; 43 - struct notifier_block restart_nb; 44 43 enum pca9450_chip_type type; 45 44 unsigned int rcnt; 46 45 int irq; ··· 1099 1100 return IRQ_HANDLED; 1100 1101 } 1101 1102 1102 - static int pca9450_i2c_restart_handler(struct notifier_block *nb, 1103 - unsigned long action, void *data) 1103 + static int pca9450_i2c_restart_handler(struct sys_off_data *data) 1104 1104 { 1105 - struct pca9450 *pca9450 = container_of(nb, struct pca9450, restart_nb); 1105 + struct pca9450 *pca9450 = data->cb_data; 1106 1106 struct i2c_client *i2c = container_of(pca9450->dev, struct i2c_client, dev); 1107 1107 1108 1108 dev_dbg(&i2c->dev, "Restarting device..\n"); ··· 1259 1261 pca9450->sd_vsel_fixed_low = 1260 1262 of_property_read_bool(ldo5->dev.of_node, "nxp,sd-vsel-fixed-low"); 1261 1263 1262 - pca9450->restart_nb.notifier_call = pca9450_i2c_restart_handler; 1263 - pca9450->restart_nb.priority = PCA9450_RESTART_HANDLER_PRIORITY; 1264 - 1265 - if (register_restart_handler(&pca9450->restart_nb)) 1264 + if (devm_register_sys_off_handler(&i2c->dev, SYS_OFF_MODE_RESTART, 1265 + PCA9450_RESTART_HANDLER_PRIORITY, 1266 + pca9450_i2c_restart_handler, pca9450)) 1266 1267 dev_warn(&i2c->dev, "Failed to register restart handler\n"); 1267 1268 1268 1269 dev_info(&i2c->dev, "%s probed.\n",
+6 -6
drivers/regulator/tps65219-regulator.c
··· 454 454 irq_type->irq_name, 455 455 irq_data); 456 456 if (error) 457 - return dev_err_probe(tps->dev, PTR_ERR(rdev), 458 - "Failed to request %s IRQ %d: %d\n", 459 - irq_type->irq_name, irq, error); 457 + return dev_err_probe(tps->dev, error, 458 + "Failed to request %s IRQ %d\n", 459 + irq_type->irq_name, irq); 460 460 } 461 461 462 462 for (i = 0; i < pmic->dev_irq_size; ++i) { ··· 477 477 irq_type->irq_name, 478 478 irq_data); 479 479 if (error) 480 - return dev_err_probe(tps->dev, PTR_ERR(rdev), 481 - "Failed to request %s IRQ %d: %d\n", 482 - irq_type->irq_name, irq, error); 480 + return dev_err_probe(tps->dev, error, 481 + "Failed to request %s IRQ %d\n", 482 + irq_type->irq_name, irq); 483 483 } 484 484 485 485 return 0;
+9 -2
drivers/s390/char/sclp.c
··· 77 77 /* The currently active SCLP command word. */ 78 78 static sclp_cmdw_t active_cmd; 79 79 80 + static inline struct sccb_header *sclpint_to_sccb(u32 sccb_int) 81 + { 82 + if (sccb_int) 83 + return __va(sccb_int); 84 + return NULL; 85 + } 86 + 80 87 static inline void sclp_trace(int prio, char *id, u32 a, u64 b, bool err) 81 88 { 82 89 struct sclp_trace_entry e; ··· 627 620 628 621 static bool ok_response(u32 sccb_int, sclp_cmdw_t cmd) 629 622 { 630 - struct sccb_header *sccb = (struct sccb_header *)__va(sccb_int); 623 + struct sccb_header *sccb = sclpint_to_sccb(sccb_int); 631 624 struct evbuf_header *evbuf; 632 625 u16 response; 633 626 ··· 666 659 667 660 /* INT: Interrupt received (a=intparm, b=cmd) */ 668 661 sclp_trace_sccb(0, "INT", param32, active_cmd, active_cmd, 669 - (struct sccb_header *)__va(finished_sccb), 662 + sclpint_to_sccb(finished_sccb), 670 663 !ok_response(finished_sccb, active_cmd)); 671 664 672 665 if (finished_sccb) {
-2
drivers/scsi/fnic/fnic.h
··· 323 323 FNIC_IN_ETH_TRANS_FC_MODE, 324 324 }; 325 325 326 - struct mempool; 327 - 328 326 enum fnic_role_e { 329 327 FNIC_ROLE_FCP_INITIATOR = 0, 330 328 };
+2
drivers/scsi/qla4xxx/ql4_os.c
··· 6606 6606 6607 6607 ep = qla4xxx_ep_connect(ha->host, (struct sockaddr *)dst_addr, 0); 6608 6608 vfree(dst_addr); 6609 + if (IS_ERR(ep)) 6610 + return NULL; 6609 6611 return ep; 6610 6612 } 6611 6613
+3 -5
drivers/spi/spi-fsl-lpspi.c
··· 330 330 } 331 331 332 332 if (config.speed_hz > perclk_rate / 2) { 333 - dev_err(fsl_lpspi->dev, 334 - "per-clk should be at least two times of transfer speed"); 335 - return -EINVAL; 333 + div = 2; 334 + } else { 335 + div = DIV_ROUND_UP(perclk_rate, config.speed_hz); 336 336 } 337 - 338 - div = DIV_ROUND_UP(perclk_rate, config.speed_hz); 339 337 340 338 for (prescale = 0; prescale <= prescale_max; prescale++) { 341 339 scldiv = div / (1 << prescale) - 2;
+4
drivers/spi/spi-mem.c
··· 265 265 */ 266 266 bool spi_mem_supports_op(struct spi_mem *mem, const struct spi_mem_op *op) 267 267 { 268 + /* Make sure the operation frequency is correct before going further */ 269 + spi_mem_adjust_op_freq(mem, (struct spi_mem_op *)op); 270 + 268 271 if (spi_mem_check_op(op)) 269 272 return false; 270 273 ··· 580 577 * spi_mem_calc_op_duration() - Derives the theoretical length (in ns) of an 581 578 * operation. This helps finding the best variant 582 579 * among a list of possible choices. 580 + * @mem: the SPI memory 583 581 * @op: the operation to benchmark 584 582 * 585 583 * Some chips have per-op frequency limitations, PCBs usually have their own
+15 -7
drivers/spi/spi-qpic-snand.c
··· 210 210 struct qcom_nand_controller *snandc = nand_to_qcom_snand(nand); 211 211 struct qpic_ecc *qecc = snandc->qspi->ecc; 212 212 213 - if (section > 1) 214 - return -ERANGE; 213 + switch (section) { 214 + case 0: 215 + oobregion->offset = 0; 216 + oobregion->length = qecc->bytes * (qecc->steps - 1) + 217 + qecc->bbm_size; 218 + return 0; 219 + case 1: 220 + oobregion->offset = qecc->bytes * (qecc->steps - 1) + 221 + qecc->bbm_size + 222 + qecc->steps * 4; 223 + oobregion->length = mtd->oobsize - oobregion->offset; 224 + return 0; 225 + } 215 226 216 - oobregion->length = qecc->ecc_bytes_hw + qecc->spare_bytes; 217 - oobregion->offset = mtd->oobsize - oobregion->length; 218 - 219 - return 0; 227 + return -ERANGE; 220 228 } 221 229 222 230 static int qcom_spi_ooblayout_free(struct mtd_info *mtd, int section, ··· 1204 1196 u32 cfg0, cfg1, ecc_bch_cfg, ecc_buf_cfg; 1205 1197 1206 1198 cfg0 = (ecc_cfg->cfg0 & ~CW_PER_PAGE_MASK) | 1207 - FIELD_PREP(CW_PER_PAGE_MASK, num_cw - 1); 1199 + FIELD_PREP(CW_PER_PAGE_MASK, 0); 1208 1200 cfg1 = ecc_cfg->cfg1; 1209 1201 ecc_bch_cfg = ecc_cfg->ecc_bch_cfg; 1210 1202 ecc_buf_cfg = ecc_cfg->ecc_buf_cfg;
+5 -5
drivers/spi/spi-st-ssc4.c
··· 378 378 pinctrl_pm_select_sleep_state(&pdev->dev); 379 379 } 380 380 381 - static int __maybe_unused spi_st_runtime_suspend(struct device *dev) 381 + static int spi_st_runtime_suspend(struct device *dev) 382 382 { 383 383 struct spi_controller *host = dev_get_drvdata(dev); 384 384 struct spi_st *spi_st = spi_controller_get_devdata(host); ··· 391 391 return 0; 392 392 } 393 393 394 - static int __maybe_unused spi_st_runtime_resume(struct device *dev) 394 + static int spi_st_runtime_resume(struct device *dev) 395 395 { 396 396 struct spi_controller *host = dev_get_drvdata(dev); 397 397 struct spi_st *spi_st = spi_controller_get_devdata(host); ··· 428 428 } 429 429 430 430 static const struct dev_pm_ops spi_st_pm = { 431 - SET_SYSTEM_SLEEP_PM_OPS(spi_st_suspend, spi_st_resume) 432 - SET_RUNTIME_PM_OPS(spi_st_runtime_suspend, spi_st_runtime_resume, NULL) 431 + SYSTEM_SLEEP_PM_OPS(spi_st_suspend, spi_st_resume) 432 + RUNTIME_PM_OPS(spi_st_runtime_suspend, spi_st_runtime_resume, NULL) 433 433 }; 434 434 435 435 static const struct of_device_id stm_spi_match[] = { ··· 441 441 static struct platform_driver spi_st_driver = { 442 442 .driver = { 443 443 .name = "spi-st", 444 - .pm = pm_sleep_ptr(&spi_st_pm), 444 + .pm = pm_ptr(&spi_st_pm), 445 445 .of_match_table = of_match_ptr(stm_spi_match), 446 446 }, 447 447 .probe = spi_st_probe,
+44 -32
drivers/ufs/core/ufshcd.c
··· 1303 1303 * 1304 1304 * Return: 0 upon success; -EBUSY upon timeout. 1305 1305 */ 1306 - static int ufshcd_wait_for_doorbell_clr(struct ufs_hba *hba, 1306 + static int ufshcd_wait_for_pending_cmds(struct ufs_hba *hba, 1307 1307 u64 wait_timeout_us) 1308 1308 { 1309 1309 int ret = 0; ··· 1431 1431 down_write(&hba->clk_scaling_lock); 1432 1432 1433 1433 if (!hba->clk_scaling.is_allowed || 1434 - ufshcd_wait_for_doorbell_clr(hba, timeout_us)) { 1434 + ufshcd_wait_for_pending_cmds(hba, timeout_us)) { 1435 1435 ret = -EBUSY; 1436 1436 up_write(&hba->clk_scaling_lock); 1437 1437 mutex_unlock(&hba->wb_mutex); ··· 3199 3199 } 3200 3200 3201 3201 /* 3202 - * Return: 0 upon success; < 0 upon failure. 3202 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3203 + * < 0 if another error occurred. 3203 3204 */ 3204 3205 static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba, 3205 3206 struct ufshcd_lrb *lrbp, int max_timeout) ··· 3276 3275 } 3277 3276 } 3278 3277 3279 - WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err); 3280 3278 return err; 3281 3279 } 3282 3280 ··· 3294 3294 } 3295 3295 3296 3296 /* 3297 - * Return: 0 upon success; < 0 upon failure. 3297 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3298 + * < 0 if another error occurred. 3298 3299 */ 3299 3300 static int ufshcd_issue_dev_cmd(struct ufs_hba *hba, struct ufshcd_lrb *lrbp, 3300 3301 const u32 tag, int timeout) ··· 3318 3317 * @cmd_type: specifies the type (NOP, Query...) 3319 3318 * @timeout: timeout in milliseconds 3320 3319 * 3321 - * Return: 0 upon success; < 0 upon failure. 3320 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3321 + * < 0 if another error occurred. 3322 3322 * 3323 3323 * NOTE: Since there is only one available tag for device management commands, 3324 3324 * it is expected you hold the hba->dev_cmd.lock mutex. 
··· 3365 3363 (*request)->upiu_req.selector = selector; 3366 3364 } 3367 3365 3366 + /* 3367 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3368 + * < 0 if another error occurred. 3369 + */ 3368 3370 static int ufshcd_query_flag_retry(struct ufs_hba *hba, 3369 3371 enum query_opcode opcode, enum flag_idn idn, u8 index, bool *flag_res) 3370 3372 { ··· 3389 3383 dev_err(hba->dev, 3390 3384 "%s: query flag, opcode %d, idn %d, failed with error %d after %d retries\n", 3391 3385 __func__, opcode, idn, ret, retries); 3392 - WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret); 3393 3386 return ret; 3394 3387 } 3395 3388 ··· 3400 3395 * @index: flag index to access 3401 3396 * @flag_res: the flag value after the query request completes 3402 3397 * 3403 - * Return: 0 for success; < 0 upon failure. 3398 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3399 + * < 0 if another error occurred. 3404 3400 */ 3405 3401 int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode, 3406 3402 enum flag_idn idn, u8 index, bool *flag_res) ··· 3457 3451 3458 3452 out_unlock: 3459 3453 ufshcd_dev_man_unlock(hba); 3460 - WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err); 3461 3454 return err; 3462 3455 } 3463 3456 ··· 3469 3464 * @selector: selector field 3470 3465 * @attr_val: the attribute value after the query request completes 3471 3466 * 3472 - * Return: 0 upon success; < 0 upon failure. 3473 - */ 3467 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3468 + * < 0 if another error occurred. 
3469 + */ 3474 3470 int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode, 3475 3471 enum attr_idn idn, u8 index, u8 selector, u32 *attr_val) 3476 3472 { ··· 3519 3513 3520 3514 out_unlock: 3521 3515 ufshcd_dev_man_unlock(hba); 3522 - WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err); 3523 3516 return err; 3524 3517 } 3525 3518 ··· 3533 3528 * @attr_val: the attribute value after the query request 3534 3529 * completes 3535 3530 * 3536 - * Return: 0 for success; < 0 upon failure. 3537 - */ 3531 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3532 + * < 0 if another error occurred. 3533 + */ 3538 3534 int ufshcd_query_attr_retry(struct ufs_hba *hba, 3539 3535 enum query_opcode opcode, enum attr_idn idn, u8 index, u8 selector, 3540 3536 u32 *attr_val) ··· 3557 3551 dev_err(hba->dev, 3558 3552 "%s: query attribute, idn %d, failed with error %d after %d retries\n", 3559 3553 __func__, idn, ret, QUERY_REQ_RETRIES); 3560 - WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret); 3561 3554 return ret; 3562 3555 } 3563 3556 3564 3557 /* 3565 - * Return: 0 if successful; < 0 upon failure. 3558 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3559 + * < 0 if another error occurred. 3566 3560 */ 3567 3561 static int __ufshcd_query_descriptor(struct ufs_hba *hba, 3568 3562 enum query_opcode opcode, enum desc_idn idn, u8 index, ··· 3621 3615 out_unlock: 3622 3616 hba->dev_cmd.query.descriptor = NULL; 3623 3617 ufshcd_dev_man_unlock(hba); 3624 - WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err); 3625 3618 return err; 3626 3619 } 3627 3620 ··· 3637 3632 * The buf_len parameter will contain, on return, the length parameter 3638 3633 * received on the response. 3639 3634 * 3640 - * Return: 0 for success; < 0 upon failure. 3635 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3636 + * < 0 if another error occurred. 
3641 3637 */ 3642 3638 int ufshcd_query_descriptor_retry(struct ufs_hba *hba, 3643 3639 enum query_opcode opcode, ··· 3656 3650 break; 3657 3651 } 3658 3652 3659 - WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err); 3660 3653 return err; 3661 3654 } 3662 3655 ··· 3668 3663 * @param_read_buf: pointer to buffer where parameter would be read 3669 3664 * @param_size: sizeof(param_read_buf) 3670 3665 * 3671 - * Return: 0 in case of success; < 0 upon failure. 3666 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 3667 + * < 0 if another error occurred. 3672 3668 */ 3673 3669 int ufshcd_read_desc_param(struct ufs_hba *hba, 3674 3670 enum desc_idn desc_id, ··· 3736 3730 out: 3737 3731 if (is_kmalloc) 3738 3732 kfree(desc_buf); 3739 - WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret); 3740 3733 return ret; 3741 3734 } 3742 3735 ··· 4786 4781 * 4787 4782 * Set fDeviceInit flag and poll until device toggles it. 4788 4783 * 4789 - * Return: 0 upon success; < 0 upon failure. 4784 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 4785 + * < 0 if another error occurred. 4790 4786 */ 4791 4787 static int ufshcd_complete_dev_init(struct ufs_hba *hba) 4792 4788 { ··· 5141 5135 * not respond with NOP IN UPIU within timeout of %NOP_OUT_TIMEOUT 5142 5136 * and we retry sending NOP OUT for %NOP_OUT_RETRIES iterations. 5143 5137 * 5144 - * Return: 0 upon success; < 0 upon failure. 5138 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 5139 + * < 0 if another error occurred. 
5145 5140 */ 5146 5141 static int ufshcd_verify_dev_init(struct ufs_hba *hba) 5147 5142 { ··· 5566 5559 irqreturn_t retval = IRQ_NONE; 5567 5560 struct uic_command *cmd; 5568 5561 5569 - spin_lock(hba->host->host_lock); 5562 + guard(spinlock_irqsave)(hba->host->host_lock); 5570 5563 cmd = hba->active_uic_cmd; 5571 - if (WARN_ON_ONCE(!cmd)) 5564 + if (!cmd) 5572 5565 goto unlock; 5573 5566 5574 5567 if (ufshcd_is_auto_hibern8_error(hba, intr_status)) ··· 5593 5586 ufshcd_add_uic_command_trace(hba, cmd, UFS_CMD_COMP); 5594 5587 5595 5588 unlock: 5596 - spin_unlock(hba->host->host_lock); 5597 - 5598 5589 return retval; 5599 5590 } 5600 5591 ··· 5874 5869 * as the device is allowed to manage its own way of handling background 5875 5870 * operations. 5876 5871 * 5877 - * Return: zero on success, non-zero on failure. 5872 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 5873 + * < 0 if another error occurred. 5878 5874 */ 5879 5875 static int ufshcd_enable_auto_bkops(struct ufs_hba *hba) 5880 5876 { ··· 5914 5908 * host is idle so that BKOPS are managed effectively without any negative 5915 5909 * impacts. 5916 5910 * 5917 - * Return: zero on success, non-zero on failure. 5911 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 5912 + * < 0 if another error occurred. 5918 5913 */ 5919 5914 static int ufshcd_disable_auto_bkops(struct ufs_hba *hba) 5920 5915 { ··· 6065 6058 __func__, err); 6066 6059 } 6067 6060 6061 + /* 6062 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 6063 + * < 0 if another error occurred. 
6064 + */ 6068 6065 int ufshcd_read_device_lvl_exception_id(struct ufs_hba *hba, u64 *exception_id) 6069 6066 { 6070 6067 struct utp_upiu_query_v4_0 *upiu_resp; ··· 6931 6920 bool queue_eh_work = false; 6932 6921 irqreturn_t retval = IRQ_NONE; 6933 6922 6934 - spin_lock(hba->host->host_lock); 6923 + guard(spinlock_irqsave)(hba->host->host_lock); 6935 6924 hba->errors |= UFSHCD_ERROR_MASK & intr_status; 6936 6925 6937 6926 if (hba->errors & INT_FATAL_ERRORS) { ··· 6990 6979 */ 6991 6980 hba->errors = 0; 6992 6981 hba->uic_error = 0; 6993 - spin_unlock(hba->host->host_lock); 6982 + 6994 6983 return retval; 6995 6984 } 6996 6985 ··· 7465 7454 * @sg_list: Pointer to SG list when DATA IN/OUT UPIU is required in ARPMB operation 7466 7455 * @dir: DMA direction 7467 7456 * 7468 - * Return: zero on success, non-zero on failure. 7457 + * Return: 0 upon success; > 0 in case the UFS device reported an OCS error; 7458 + * < 0 if another error occurred. 7469 7459 */ 7470 7460 int ufshcd_advanced_rpmb_req_handler(struct ufs_hba *hba, struct utp_upiu_req *req_upiu, 7471 7461 struct utp_upiu_req *rsp_upiu, struct ufs_ehs *req_ehs,
+15 -24
drivers/ufs/host/ufs-qcom.c
··· 2070 2070 return IRQ_HANDLED; 2071 2071 } 2072 2072 2073 - static void ufs_qcom_irq_free(struct ufs_qcom_irq *uqi) 2074 - { 2075 - for (struct ufs_qcom_irq *q = uqi; q->irq; q++) 2076 - devm_free_irq(q->hba->dev, q->irq, q->hba); 2077 - 2078 - platform_device_msi_free_irqs_all(uqi->hba->dev); 2079 - devm_kfree(uqi->hba->dev, uqi); 2080 - } 2081 - 2082 - DEFINE_FREE(ufs_qcom_irq, struct ufs_qcom_irq *, if (_T) ufs_qcom_irq_free(_T)) 2083 - 2084 2073 static int ufs_qcom_config_esi(struct ufs_hba *hba) 2085 2074 { 2086 2075 struct ufs_qcom_host *host = ufshcd_get_variant(hba); ··· 2084 2095 */ 2085 2096 nr_irqs = hba->nr_hw_queues - hba->nr_queues[HCTX_TYPE_POLL]; 2086 2097 2087 - struct ufs_qcom_irq *qi __free(ufs_qcom_irq) = 2088 - devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL); 2089 - if (!qi) 2090 - return -ENOMEM; 2091 - /* Preset so __free() has a pointer to hba in all error paths */ 2092 - qi[0].hba = hba; 2093 - 2094 2098 ret = platform_device_msi_init_and_alloc_irqs(hba->dev, nr_irqs, 2095 2099 ufs_qcom_write_msi_msg); 2096 2100 if (ret) { 2097 - dev_err(hba->dev, "Failed to request Platform MSI %d\n", ret); 2098 - return ret; 2101 + dev_warn(hba->dev, "Platform MSI not supported or failed, continuing without ESI\n"); 2102 + return ret; /* Continue without ESI */ 2103 + } 2104 + 2105 + struct ufs_qcom_irq *qi = devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL); 2106 + 2107 + if (!qi) { 2108 + platform_device_msi_free_irqs_all(hba->dev); 2109 + return -ENOMEM; 2099 2110 } 2100 2111 2101 2112 for (int idx = 0; idx < nr_irqs; idx++) { ··· 2106 2117 ret = devm_request_irq(hba->dev, qi[idx].irq, ufs_qcom_mcq_esi_handler, 2107 2118 IRQF_SHARED, "qcom-mcq-esi", qi + idx); 2108 2119 if (ret) { 2109 - dev_err(hba->dev, "%s: Fail to request IRQ for %d, err = %d\n", 2120 + dev_err(hba->dev, "%s: Failed to request IRQ for %d, err = %d\n", 2110 2121 __func__, qi[idx].irq, ret); 2111 - qi[idx].irq = 0; 2122 + /* Free previously allocated IRQs */ 2123 + 
for (int j = 0; j < idx; j++) 2124 + devm_free_irq(hba->dev, qi[j].irq, qi + j); 2125 + platform_device_msi_free_irqs_all(hba->dev); 2126 + devm_kfree(hba->dev, qi); 2112 2127 return ret; 2113 2128 } 2114 2129 } 2115 - 2116 - retain_and_null_ptr(qi); 2117 2130 2118 2131 if (host->hw_ver.major >= 6) { 2119 2132 ufshcd_rmwl(hba, ESI_VEC_MASK, FIELD_PREP(ESI_VEC_MASK, MAX_ESI_VEC - 1),
+1
drivers/ufs/host/ufshcd-pci.c
··· 630 630 { PCI_VDEVICE(INTEL, 0xA847), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 631 631 { PCI_VDEVICE(INTEL, 0x7747), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 632 632 { PCI_VDEVICE(INTEL, 0xE447), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 633 + { PCI_VDEVICE(INTEL, 0x4D47), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 633 634 { } /* terminate list */ 634 635 }; 635 636
+2 -1
drivers/usb/chipidea/ci_hdrc_imx.c
··· 338 338 schedule_work(&ci->usb_phy->chg_work); 339 339 break; 340 340 case CI_HDRC_CONTROLLER_PULLUP_EVENT: 341 - if (ci->role == CI_ROLE_GADGET) 341 + if (ci->role == CI_ROLE_GADGET && 342 + ci->gadget.speed == USB_SPEED_HIGH) 342 343 imx_usbmisc_pullup(data->usbmisc_data, 343 344 ci->gadget.connected); 344 345 break;
+16 -7
drivers/usb/chipidea/usbmisc_imx.c
··· 1068 1068 unsigned long flags; 1069 1069 u32 val; 1070 1070 1071 + if (on) 1072 + return; 1073 + 1071 1074 spin_lock_irqsave(&usbmisc->lock, flags); 1072 1075 val = readl(usbmisc->base + MX7D_USBNC_USB_CTRL2); 1073 - if (!on) { 1074 - val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_MASK; 1075 - val |= MX7D_USBNC_USB_CTRL2_OPMODE(1); 1076 - val |= MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN; 1077 - } else { 1078 - val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN; 1079 - } 1076 + val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_MASK; 1077 + val |= MX7D_USBNC_USB_CTRL2_OPMODE(1); 1078 + val |= MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN; 1079 + writel(val, usbmisc->base + MX7D_USBNC_USB_CTRL2); 1080 + spin_unlock_irqrestore(&usbmisc->lock, flags); 1081 + 1082 + /* Last for at least 1 micro-frame to let host see disconnect signal */ 1083 + usleep_range(125, 150); 1084 + 1085 + spin_lock_irqsave(&usbmisc->lock, flags); 1086 + val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_MASK; 1087 + val |= MX7D_USBNC_USB_CTRL2_OPMODE(0); 1088 + val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN; 1080 1089 writel(val, usbmisc->base + MX7D_USBNC_USB_CTRL2); 1081 1090 spin_unlock_irqrestore(&usbmisc->lock, flags); 1082 1091 }
+16 -12
drivers/usb/core/hcd.c
··· 1636 1636 struct usb_hcd *hcd = bus_to_hcd(urb->dev->bus); 1637 1637 struct usb_anchor *anchor = urb->anchor; 1638 1638 int status = urb->unlinked; 1639 - unsigned long flags; 1640 1639 1641 1640 urb->hcpriv = NULL; 1642 1641 if (unlikely((urb->transfer_flags & URB_SHORT_NOT_OK) && ··· 1653 1654 /* pass ownership to the completion handler */ 1654 1655 urb->status = status; 1655 1656 /* 1656 - * Only collect coverage in the softirq context and disable interrupts 1657 - * to avoid scenarios with nested remote coverage collection sections 1658 - * that KCOV does not support. 1659 - * See the comment next to kcov_remote_start_usb_softirq() for details. 1657 + * This function can be called in task context inside another remote 1658 + * coverage collection section, but kcov doesn't support that kind of 1659 + * recursion yet. Only collect coverage in softirq context for now. 1660 1660 */ 1661 - flags = kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum); 1661 + kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum); 1662 1662 urb->complete(urb); 1663 - kcov_remote_stop_softirq(flags); 1663 + kcov_remote_stop_softirq(); 1664 1664 1665 1665 usb_anchor_resume_wakeups(anchor); 1666 1666 atomic_dec(&urb->use_count); ··· 1717 1719 * @urb: urb being returned to the USB device driver. 1718 1720 * @status: completion status code for the URB. 1719 1721 * 1720 - * Context: atomic. The completion callback is invoked in caller's context. 1721 - * For HCDs with HCD_BH flag set, the completion callback is invoked in BH 1722 - * context (except for URBs submitted to the root hub which always complete in 1723 - * caller's context). 1722 + * Context: atomic. The completion callback is invoked either in a work queue 1723 + * (BH) context or in the caller's context, depending on whether the HCD_BH 1724 + * flag is set in the @hcd structure, except that URBs submitted to the 1725 + * root hub always complete in BH context. 1724 1726 * 1725 1727 * This hands the URB from HCD to its USB device driver, using its 1726 1728 * completion function. The HCD has freed all per-urb resources ··· 2164 2166 urb->complete = usb_ehset_completion; 2165 2167 urb->status = -EINPROGRESS; 2166 2168 urb->actual_length = 0; 2167 - urb->transfer_flags = URB_DIR_IN; 2169 + urb->transfer_flags = URB_DIR_IN | URB_NO_TRANSFER_DMA_MAP; 2168 2170 usb_get_urb(urb); 2169 2171 atomic_inc(&urb->use_count); 2170 2172 atomic_inc(&urb->dev->urbnum); ··· 2228 2230 2229 2231 /* Complete remaining DATA and STATUS stages using the same URB */ 2230 2232 urb->status = -EINPROGRESS; 2233 + urb->transfer_flags &= ~URB_NO_TRANSFER_DMA_MAP; 2231 2234 usb_get_urb(urb); 2232 2235 atomic_inc(&urb->use_count); 2233 2236 atomic_inc(&urb->dev->urbnum); 2237 + if (map_urb_for_dma(hcd, urb, GFP_KERNEL)) { 2238 + usb_put_urb(urb); 2239 + goto out1; 2240 + } 2241 + 2234 2242 retval = hcd->driver->submit_single_step_set_feature(hcd, urb, 0); 2235 2243 if (!retval && !wait_for_completion_timeout(&done, 2236 2244 msecs_to_jiffies(2000))) {
+1
drivers/usb/core/quirks.c
··· 371 371 { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM }, 372 372 373 373 /* SanDisk Corp. SanDisk 3.2Gen1 */ 374 + { USB_DEVICE(0x0781, 0x5596), .driver_info = USB_QUIRK_DELAY_INIT }, 374 375 { USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT }, 375 376 376 377 /* SanDisk Extreme 55AE */
+2
drivers/usb/dwc3/dwc3-pci.c
··· 41 41 #define PCI_DEVICE_ID_INTEL_TGPLP 0xa0ee 42 42 #define PCI_DEVICE_ID_INTEL_TGPH 0x43ee 43 43 #define PCI_DEVICE_ID_INTEL_JSP 0x4dee 44 + #define PCI_DEVICE_ID_INTEL_WCL 0x4d7e 44 45 #define PCI_DEVICE_ID_INTEL_ADL 0x460e 45 46 #define PCI_DEVICE_ID_INTEL_ADL_PCH 0x51ee 46 47 #define PCI_DEVICE_ID_INTEL_ADLN 0x465e ··· 432 431 { PCI_DEVICE_DATA(INTEL, TGPLP, &dwc3_pci_intel_swnode) }, 433 432 { PCI_DEVICE_DATA(INTEL, TGPH, &dwc3_pci_intel_swnode) }, 434 433 { PCI_DEVICE_DATA(INTEL, JSP, &dwc3_pci_intel_swnode) }, 434 + { PCI_DEVICE_DATA(INTEL, WCL, &dwc3_pci_intel_swnode) }, 435 435 { PCI_DEVICE_DATA(INTEL, ADL, &dwc3_pci_intel_swnode) }, 436 436 { PCI_DEVICE_DATA(INTEL, ADL_PCH, &dwc3_pci_intel_swnode) }, 437 437 { PCI_DEVICE_DATA(INTEL, ADLN, &dwc3_pci_intel_swnode) },
+16 -4
drivers/usb/dwc3/ep0.c
··· 288 288 dwc3_ep0_prepare_one_trb(dep, dwc->ep0_trb_addr, 8, 289 289 DWC3_TRBCTL_CONTROL_SETUP, false); 290 290 ret = dwc3_ep0_start_trans(dep); 291 - WARN_ON(ret < 0); 291 + if (ret < 0) 292 + dev_err(dwc->dev, "ep0 out start transfer failed: %d\n", ret); 293 + 292 294 for (i = 2; i < DWC3_ENDPOINTS_NUM; i++) { 293 295 struct dwc3_ep *dwc3_ep; 294 296 ··· 1063 1061 ret = dwc3_ep0_start_trans(dep); 1064 1062 } 1065 1063 1066 - WARN_ON(ret < 0); 1064 + if (ret < 0) 1065 + dev_err(dwc->dev, 1066 + "ep0 data phase start transfer failed: %d\n", ret); 1067 1067 } 1068 1068 1069 1069 static int dwc3_ep0_start_control_status(struct dwc3_ep *dep) ··· 1082 1078 1083 1079 static void __dwc3_ep0_do_control_status(struct dwc3 *dwc, struct dwc3_ep *dep) 1084 1080 { 1085 - WARN_ON(dwc3_ep0_start_control_status(dep)); 1081 + int ret; 1082 + 1083 + ret = dwc3_ep0_start_control_status(dep); 1084 + if (ret) 1085 + dev_err(dwc->dev, 1086 + "ep0 status phase start transfer failed: %d\n", ret); 1086 1087 } 1087 1088 1088 1089 static void dwc3_ep0_do_control_status(struct dwc3 *dwc, ··· 1130 1121 cmd |= DWC3_DEPCMD_PARAM(dep->resource_index); 1131 1122 memset(&params, 0, sizeof(params)); 1132 1123 ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params); 1133 - WARN_ON_ONCE(ret); 1124 + if (ret) 1125 + dev_err_ratelimited(dwc->dev, 1126 + "ep0 data phase end transfer failed: %d\n", ret); 1127 + 1134 1128 dep->resource_index = 0; 1135 1129 } 1136 1130
+17 -2
drivers/usb/dwc3/gadget.c
··· 1772 1772 dep->flags |= DWC3_EP_DELAY_STOP; 1773 1773 return 0; 1774 1774 } 1775 - WARN_ON_ONCE(ret); 1775 + 1776 + if (ret) 1777 + dev_err_ratelimited(dep->dwc->dev, 1778 + "end transfer failed: %d\n", ret); 1779 + 1776 1780 dep->resource_index = 0; 1777 1781 1778 1782 if (!interrupt) ··· 3781 3777 static void dwc3_gadget_endpoint_transfer_not_ready(struct dwc3_ep *dep, 3782 3778 const struct dwc3_event_depevt *event) 3783 3779 { 3780 + /* 3781 + * During a device-initiated disconnect, a late xferNotReady event can 3782 + * be generated after the End Transfer command resets the event filter, 3783 + * but before the controller is halted. Ignore it to prevent a new 3784 + * transfer from starting. 3785 + */ 3786 + if (!dep->dwc->connected) 3787 + return; 3788 + 3784 3789 dwc3_gadget_endpoint_frame_from_event(dep, event); 3785 3790 3786 3791 /* ··· 4052 4039 dep->flags &= ~DWC3_EP_STALL; 4053 4040 4054 4041 ret = dwc3_send_clear_stall_ep_cmd(dep); 4055 - WARN_ON_ONCE(ret); 4042 + if (ret) 4043 + dev_err_ratelimited(dwc->dev, 4044 + "failed to clear STALL on %s\n", dep->name); 4056 4045 } 4057 4046 } 4058 4047
+7 -2
drivers/usb/gadget/udc/tegra-xudc.c
··· 502 502 struct clk_bulk_data *clks; 503 503 504 504 bool device_mode; 505 + bool current_device_mode; 505 506 struct work_struct usb_role_sw_work; 506 507 507 508 struct phy **usb3_phy; ··· 716 715 717 716 phy_set_mode_ext(xudc->curr_utmi_phy, PHY_MODE_USB_OTG, 718 717 USB_ROLE_DEVICE); 718 + 719 + xudc->current_device_mode = true; 719 720 } 720 721 721 722 static void tegra_xudc_device_mode_off(struct tegra_xudc *xudc) ··· 727 724 int err; 728 725 729 726 dev_dbg(xudc->dev, "device mode off\n"); 727 + 728 + xudc->current_device_mode = false; 730 729 731 730 connected = !!(xudc_readl(xudc, PORTSC) & PORTSC_CCS); 732 731 ··· 4049 4044 4050 4045 spin_lock_irqsave(&xudc->lock, flags); 4051 4046 xudc->suspended = false; 4047 + if (xudc->device_mode != xudc->current_device_mode) 4048 + schedule_work(&xudc->usb_role_sw_work); 4052 4049 spin_unlock_irqrestore(&xudc->lock, flags); 4053 - 4054 - schedule_work(&xudc->usb_role_sw_work); 4055 4050 4056 4051 pm_runtime_enable(dev); 4057 4052
+1 -2
drivers/usb/host/xhci-hub.c
··· 704 704 if (!xhci->devs[i]) 705 705 continue; 706 706 707 - retval = xhci_disable_slot(xhci, i); 708 - xhci_free_virt_device(xhci, i); 707 + retval = xhci_disable_and_free_slot(xhci, i); 709 708 if (retval) 710 709 xhci_err(xhci, "Failed to disable slot %d, %d. Enter test mode anyway\n", 711 710 i, retval);
+11 -11
drivers/usb/host/xhci-mem.c
··· 865 865 * will be manipulated by the configure endpoint, allocate device, or update 866 866 * hub functions while this function is removing the TT entries from the list. 867 867 */ 868 - void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id) 868 + void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev, 869 + int slot_id) 869 870 { 870 - struct xhci_virt_device *dev; 871 871 int i; 872 872 int old_active_eps = 0; 873 873 874 874 /* Slot ID 0 is reserved */ 875 - if (slot_id == 0 || !xhci->devs[slot_id]) 875 + if (slot_id == 0 || !dev) 876 876 return; 877 877 878 - dev = xhci->devs[slot_id]; 879 - 880 - xhci->dcbaa->dev_context_ptrs[slot_id] = 0; 881 - if (!dev) 882 - return; 878 + /* If device ctx array still points to _this_ device, clear it */ 879 + if (dev->out_ctx && 880 + xhci->dcbaa->dev_context_ptrs[slot_id] == cpu_to_le64(dev->out_ctx->dma)) 881 + xhci->dcbaa->dev_context_ptrs[slot_id] = 0; 883 882 884 883 trace_xhci_free_virt_device(dev); 885 884 ··· 919 920 dev->udev->slot_id = 0; 920 921 if (dev->rhub_port && dev->rhub_port->slot_id == slot_id) 921 922 dev->rhub_port->slot_id = 0; 922 - kfree(xhci->devs[slot_id]); 923 - xhci->devs[slot_id] = NULL; 923 + if (xhci->devs[slot_id] == dev) 924 + xhci->devs[slot_id] = NULL; 925 + kfree(dev); 924 926 } 925 927 926 928 /* ··· 962 962 out: 963 963 /* we are now at a leaf device */ 964 964 xhci_debugfs_remove_slot(xhci, slot_id); 965 - xhci_free_virt_device(xhci, slot_id); 965 + xhci_free_virt_device(xhci, vdev, slot_id); 966 966 } 967 967 968 968 int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
+4 -3
drivers/usb/host/xhci-pci-renesas.c
··· 47 47 #define RENESAS_ROM_ERASE_MAGIC 0x5A65726F 48 48 #define RENESAS_ROM_WRITE_MAGIC 0x53524F4D 49 49 50 - #define RENESAS_RETRY 10000 51 - #define RENESAS_DELAY 10 50 + #define RENESAS_RETRY 50000 /* 50000 * RENESAS_DELAY ~= 500ms */ 51 + #define RENESAS_CHIP_ERASE_RETRY 500000 /* 500000 * RENESAS_DELAY ~= 5s */ 52 + #define RENESAS_DELAY 10 52 53 53 54 #define RENESAS_FW_NAME "renesas_usb_fw.mem" 54 55 ··· 408 407 /* sleep a bit while ROM is erased */ 409 408 msleep(20); 410 409 411 - for (i = 0; i < RENESAS_RETRY; i++) { 410 + for (i = 0; i < RENESAS_CHIP_ERASE_RETRY; i++) { 412 411 retval = pci_read_config_byte(pdev, RENESAS_ROM_STATUS, 413 412 &status); 414 413 status &= RENESAS_ROM_STATUS_ERASE;
+7 -2
drivers/usb/host/xhci-ring.c
··· 1592 1592 command->slot_id = 0; 1593 1593 } 1594 1594 1595 - static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id) 1595 + static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id, 1596 + u32 cmd_comp_code) 1596 1597 { 1597 1598 struct xhci_virt_device *virt_dev; 1598 1599 struct xhci_slot_ctx *slot_ctx; ··· 1608 1607 if (xhci->quirks & XHCI_EP_LIMIT_QUIRK) 1609 1608 /* Delete default control endpoint resources */ 1610 1609 xhci_free_device_endpoint_resources(xhci, virt_dev, true); 1610 + if (cmd_comp_code == COMP_SUCCESS) { 1611 + xhci->dcbaa->dev_context_ptrs[slot_id] = 0; 1612 + xhci->devs[slot_id] = NULL; 1613 + } 1611 1614 } 1612 1615 1613 1616 static void xhci_handle_cmd_config_ep(struct xhci_hcd *xhci, int slot_id) ··· 1861 1856 xhci_handle_cmd_enable_slot(slot_id, cmd, cmd_comp_code); 1862 1857 break; 1863 1858 case TRB_DISABLE_SLOT: 1864 - xhci_handle_cmd_disable_slot(xhci, slot_id); 1859 + xhci_handle_cmd_disable_slot(xhci, slot_id, cmd_comp_code); 1865 1860 break; 1866 1861 case TRB_CONFIG_EP: 1867 1862 if (!cmd->completion)
+16 -7
drivers/usb/host/xhci.c
··· 309 309 return -EINVAL; 310 310 311 311 iman = readl(&ir->ir_set->iman); 312 + iman &= ~IMAN_IP; 312 313 iman |= IMAN_IE; 313 314 writel(iman, &ir->ir_set->iman); 314 315 ··· 326 325 return -EINVAL; 327 326 328 327 iman = readl(&ir->ir_set->iman); 328 + iman &= ~IMAN_IP; 329 329 iman &= ~IMAN_IE; 330 330 writel(iman, &ir->ir_set->iman); 331 331 ··· 3934 3932 * Obtaining a new device slot to inform the xHCI host that 3935 3933 * the USB device has been reset. 3936 3934 */ 3937 - ret = xhci_disable_slot(xhci, udev->slot_id); 3938 - xhci_free_virt_device(xhci, udev->slot_id); 3935 + ret = xhci_disable_and_free_slot(xhci, udev->slot_id); 3939 3936 if (!ret) { 3940 3937 ret = xhci_alloc_dev(hcd, udev); 3941 3938 if (ret == 1) ··· 4091 4090 xhci_disable_slot(xhci, udev->slot_id); 4092 4091 4093 4092 spin_lock_irqsave(&xhci->lock, flags); 4094 - xhci_free_virt_device(xhci, udev->slot_id); 4093 + xhci_free_virt_device(xhci, virt_dev, udev->slot_id); 4095 4094 spin_unlock_irqrestore(&xhci->lock, flags); 4096 4095 4097 4096 } ··· 4138 4137 xhci_free_command(xhci, command); 4139 4138 4140 4139 return 0; 4140 + } 4141 + 4142 + int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id) 4143 + { 4144 + struct xhci_virt_device *vdev = xhci->devs[slot_id]; 4145 + int ret; 4146 + 4147 + ret = xhci_disable_slot(xhci, slot_id); 4148 + xhci_free_virt_device(xhci, vdev, slot_id); 4149 + return ret; 4141 4150 } 4142 4151 4143 4152 /* ··· 4256 4245 return 1; 4257 4246 4258 4247 disable_slot: 4259 - xhci_disable_slot(xhci, udev->slot_id); 4260 - xhci_free_virt_device(xhci, udev->slot_id); 4248 + xhci_disable_and_free_slot(xhci, udev->slot_id); 4261 4249 4262 4250 return 0; 4263 4251 } ··· 4392 4382 dev_warn(&udev->dev, "Device not responding to setup %s.\n", act); 4393 4383 4394 4384 mutex_unlock(&xhci->mutex); 4395 - ret = xhci_disable_slot(xhci, udev->slot_id); 4396 - xhci_free_virt_device(xhci, udev->slot_id); 4385 + ret = xhci_disable_and_free_slot(xhci, udev->slot_id); 4397 4386 if (!ret) { 4398 4387 if (xhci_alloc_dev(hcd, udev) == 1) 4399 4388 xhci_setup_addressable_virt_dev(xhci, udev);
+2 -1
drivers/usb/host/xhci.h
··· 1791 1791 /* xHCI memory management */ 1792 1792 void xhci_mem_cleanup(struct xhci_hcd *xhci); 1793 1793 int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags); 1794 - void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id); 1794 + void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev, int slot_id); 1795 1795 int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id, struct usb_device *udev, gfp_t flags); 1796 1796 int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *udev); 1797 1797 void xhci_copy_ep0_dequeue_into_input_ctx(struct xhci_hcd *xhci, ··· 1888 1888 int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev, 1889 1889 struct usb_tt *tt, gfp_t mem_flags); 1890 1890 int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id); 1891 + int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id); 1891 1892 int xhci_ext_cap_init(struct xhci_hcd *xhci); 1892 1893 1893 1894 int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup);
+1 -1
drivers/usb/storage/realtek_cr.c
··· 252 252 return USB_STOR_TRANSPORT_ERROR; 253 253 } 254 254 255 - residue = bcs->Residue; 255 + residue = le32_to_cpu(bcs->Residue); 256 256 if (bcs->Tag != us->tag) 257 257 return USB_STOR_TRANSPORT_ERROR; 258 258
+29
drivers/usb/storage/unusual_devs.h
··· 934 934 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 935 935 US_FL_SANE_SENSE ), 936 936 937 + /* Added by Maël GUERIN <mael.guerin@murena.io> */ 938 + UNUSUAL_DEV( 0x0603, 0x8611, 0x0000, 0xffff, 939 + "Novatek", 940 + "NTK96550-based camera", 941 + USB_SC_SCSI, USB_PR_BULK, NULL, 942 + US_FL_BULK_IGNORE_TAG ), 943 + 937 944 /* 938 945 * Reported by Hanno Boeck <hanno@gmx.de> 939 946 * Taken from the Lycoris Kernel ··· 1500 1493 "External", 1501 1494 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1502 1495 US_FL_NO_WP_DETECT ), 1496 + 1497 + /* 1498 + * Reported by Zenm Chen <zenmchen@gmail.com> 1499 + * Ignore driver CD mode, otherwise usb_modeswitch may fail to switch 1500 + * the device into Wi-Fi mode. 1501 + */ 1502 + UNUSUAL_DEV( 0x0bda, 0x1a2b, 0x0000, 0xffff, 1503 + "Realtek", 1504 + "DISK", 1505 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1506 + US_FL_IGNORE_DEVICE ), 1507 + 1508 + /* 1509 + * Reported by Zenm Chen <zenmchen@gmail.com> 1510 + * Ignore driver CD mode, otherwise usb_modeswitch may fail to switch 1511 + * the device into Wi-Fi mode. 1512 + */ 1513 + UNUSUAL_DEV( 0x0bda, 0xa192, 0x0000, 0xffff, 1514 + "Realtek", 1515 + "DISK", 1516 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1517 + US_FL_IGNORE_DEVICE ), 1503 1518 1504 1519 UNUSUAL_DEV( 0x0d49, 0x7310, 0x0000, 0x9999, 1505 1520 "Maxtor",
+8 -4
drivers/usb/typec/tcpm/fusb302.c
··· 1485 1485 struct fusb302_chip *chip = dev_id; 1486 1486 unsigned long flags; 1487 1487 1488 + /* Disable our level triggered IRQ until our irq_work has cleared it */ 1489 + disable_irq_nosync(chip->gpio_int_n_irq); 1490 + 1488 1491 spin_lock_irqsave(&chip->irq_lock, flags); 1489 1492 if (chip->irq_suspended) 1490 1493 chip->irq_while_suspended = true; ··· 1630 1627 } 1631 1628 done: 1632 1629 mutex_unlock(&chip->lock); 1630 + enable_irq(chip->gpio_int_n_irq); 1633 1631 } 1634 1632 1635 1633 static int init_gpio(struct fusb302_chip *chip) ··· 1755 1751 goto destroy_workqueue; 1756 1752 } 1757 1753 1758 - ret = devm_request_threaded_irq(dev, chip->gpio_int_n_irq, 1759 - NULL, fusb302_irq_intn, 1760 - IRQF_ONESHOT | IRQF_TRIGGER_LOW, 1761 - "fsc_interrupt_int_n", chip); 1754 + ret = request_irq(chip->gpio_int_n_irq, fusb302_irq_intn, 1755 + IRQF_ONESHOT | IRQF_TRIGGER_LOW, 1756 + "fsc_interrupt_int_n", chip); 1762 1757 if (ret < 0) { 1763 1758 dev_err(dev, "cannot request IRQ for GPIO Int_N, ret=%d", ret); 1764 1759 goto tcpm_unregister_port; ··· 1782 1779 struct fusb302_chip *chip = i2c_get_clientdata(client); 1783 1780 1784 1781 disable_irq_wake(chip->gpio_int_n_irq); 1782 + free_irq(chip->gpio_int_n_irq, chip); 1785 1783 cancel_work_sync(&chip->irq_work); 1786 1784 cancel_delayed_work_sync(&chip->bc_lvl_handler); 1787 1785 tcpm_unregister_port(chip->tcpm_port);
+58
drivers/usb/typec/tcpm/maxim_contaminant.c
··· 188 188 if (ret < 0) 189 189 return ret; 190 190 191 + /* Disable low power mode */ 192 + ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCLPMODESEL, 193 + FIELD_PREP(CCLPMODESEL, 194 + LOW_POWER_MODE_DISABLE)); 195 + 191 196 /* Sleep to allow comparators settle */ 192 197 usleep_range(5000, 6000); 193 198 ret = regmap_update_bits(regmap, TCPC_TCPC_CTRL, TCPC_TCPC_CTRL_ORIENTATION, PLUG_ORNT_CC1); ··· 329 324 return 0; 330 325 } 331 326 327 + static int max_contaminant_enable_toggling(struct max_tcpci_chip *chip) 328 + { 329 + struct regmap *regmap = chip->data.regmap; 330 + int ret; 331 + 332 + /* Disable dry detection if enabled. */ 333 + ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCLPMODESEL, 334 + FIELD_PREP(CCLPMODESEL, 335 + LOW_POWER_MODE_DISABLE)); 336 + if (ret) 337 + return ret; 338 + 339 + ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL1, CCCONNDRY, 0); 340 + if (ret) 341 + return ret; 342 + 343 + ret = max_tcpci_write8(chip, TCPC_ROLE_CTRL, TCPC_ROLE_CTRL_DRP | 344 + FIELD_PREP(TCPC_ROLE_CTRL_CC1, 345 + TCPC_ROLE_CTRL_CC_RD) | 346 + FIELD_PREP(TCPC_ROLE_CTRL_CC2, 347 + TCPC_ROLE_CTRL_CC_RD)); 348 + if (ret) 349 + return ret; 350 + 351 + ret = regmap_update_bits(regmap, TCPC_TCPC_CTRL, 352 + TCPC_TCPC_CTRL_EN_LK4CONN_ALRT, 353 + TCPC_TCPC_CTRL_EN_LK4CONN_ALRT); 354 + if (ret) 355 + return ret; 356 + 357 + return max_tcpci_write8(chip, TCPC_COMMAND, TCPC_CMD_LOOK4CONNECTION); 358 + } 359 + 332 360 bool max_contaminant_is_contaminant(struct max_tcpci_chip *chip, bool disconnect_while_debounce, 333 361 bool *cc_handled) 334 362 { ··· 377 339 ret = max_tcpci_read8(chip, TCPC_POWER_CTRL, &pwr_cntl); 378 340 if (ret < 0) 379 341 return false; 342 + 343 + if (cc_status & TCPC_CC_STATUS_TOGGLING) { 344 + if (chip->contaminant_state == DETECTED) 345 + return true; 346 + return false; 347 + } 380 348 381 349 if (chip->contaminant_state == NOT_DETECTED || chip->contaminant_state == SINK) { 382 350 if (!disconnect_while_debounce) ··· 416 372 max_contaminant_enable_dry_detection(chip); 417 373 return true; 418 374 } 375 + 376 + ret = max_contaminant_enable_toggling(chip); 377 + if (ret) 378 + dev_err(chip->dev, 379 + "Failed to enable toggling, ret=%d", 380 + ret); 419 381 } 420 382 } else if (chip->contaminant_state == DETECTED) { 421 383 if (!(cc_status & TCPC_CC_STATUS_TOGGLING)) { ··· 429 379 if (chip->contaminant_state == DETECTED) { 430 380 max_contaminant_enable_dry_detection(chip); 431 381 return true; 382 + } else { 383 + ret = max_contaminant_enable_toggling(chip); 384 + if (ret) { 385 + dev_err(chip->dev, 386 + "Failed to enable toggling, ret=%d", 387 + ret); 388 + return true; 389 + } 432 390 } 433 391 } 434 392 }
+1
drivers/usb/typec/tcpm/tcpci_maxim.h
··· 21 21 #define CCOVPDIS BIT(6) 22 22 #define SBURPCTRL BIT(5) 23 23 #define CCLPMODESEL GENMASK(4, 3) 24 + #define LOW_POWER_MODE_DISABLE 0 24 25 #define ULTRA_LOW_POWER_MODE 1 25 26 #define CCRPCTRL GENMASK(2, 0) 26 27 #define UA_1_SRC 1
+7 -2
drivers/vhost/net.c
··· 99 99 atomic_t refcount; 100 100 wait_queue_head_t wait; 101 101 struct vhost_virtqueue *vq; 102 + struct rcu_head rcu; 102 103 }; 103 104 104 105 #define VHOST_NET_BATCH 64 ··· 251 250 252 251 static int vhost_net_ubuf_put(struct vhost_net_ubuf_ref *ubufs) 253 252 { 254 - int r = atomic_sub_return(1, &ubufs->refcount); 253 + int r; 254 + 255 + rcu_read_lock(); 256 + r = atomic_sub_return(1, &ubufs->refcount); 255 257 if (unlikely(!r)) 256 258 wake_up(&ubufs->wait); 259 + rcu_read_unlock(); 257 260 return r; 258 261 } 259 262 ··· 270 265 static void vhost_net_ubuf_put_wait_and_free(struct vhost_net_ubuf_ref *ubufs) 271 266 { 272 267 vhost_net_ubuf_put_and_wait(ubufs); 273 - kfree(ubufs); 268 + kfree_rcu(ubufs, rcu); 274 269 } 275 270 276 271 static void vhost_net_clear_ubuf_info(struct vhost_net *n)
+4
drivers/virtio/virtio_input.c
··· 360 360 { 361 361 struct virtio_input *vi = vdev->priv; 362 362 unsigned long flags; 363 + void *buf; 363 364 364 365 spin_lock_irqsave(&vi->lock, flags); 365 366 vi->ready = false; 366 367 spin_unlock_irqrestore(&vi->lock, flags); 367 368 369 + virtio_reset_device(vdev); 370 + while ((buf = virtqueue_detach_unused_buf(vi->sts)) != NULL) 371 + kfree(buf); 368 372 vdev->config->del_vqs(vdev); 369 373 return 0; 370 374 }
+2 -2
drivers/virtio/virtio_pci_legacy_dev.c
··· 140 140 * vp_legacy_queue_vector - set the MSIX vector for a specific virtqueue 141 141 * @ldev: the legacy virtio-pci device 142 142 * @index: queue index 143 - * @vector: the config vector 143 + * @vector: the queue vector 144 144 * 145 - * Returns the config vector read from the device 145 + * Returns the queue vector read from the device 146 146 */ 147 147 u16 vp_legacy_queue_vector(struct virtio_pci_legacy_device *ldev, 148 148 u16 index, u16 vector)
+2 -2
drivers/virtio/virtio_pci_modern_dev.c
··· 546 546 * vp_modern_queue_vector - set the MSIX vector for a specific virtqueue 547 547 * @mdev: the modern virtio-pci device 548 548 * @index: queue index 549 - * @vector: the config vector 549 + * @vector: the queue vector 550 550 * 551 - * Returns the config vector read from the device 551 + * Returns the queue vector read from the device 552 552 */ 553 553 u16 vp_modern_queue_vector(struct virtio_pci_modern_device *mdev, 554 554 u16 index, u16 vector)
-23
drivers/xen/xenbus/xenbus_xs.c
··· 718 718 return 0; 719 719 } 720 720 721 - /* 722 - * Certain older XenBus toolstack cannot handle reading values that are 723 - * not populated. Some Xen 3.4 installation are incapable of doing this 724 - * so if we are running on anything older than 4 do not attempt to read 725 - * control/platform-feature-xs_reset_watches. 726 - */ 727 - static bool xen_strict_xenbus_quirk(void) 728 - { 729 - #ifdef CONFIG_X86 730 - uint32_t eax, ebx, ecx, edx, base; 731 - 732 - base = xen_cpuid_base(); 733 - cpuid(base + 1, &eax, &ebx, &ecx, &edx); 734 - 735 - if ((eax >> 16) < 4) 736 - return true; 737 - #endif 738 - return false; 739 - 740 - } 741 721 static void xs_reset_watches(void) 742 722 { 743 723 int err; 744 724 745 725 if (!xen_hvm_domain() || xen_initial_domain()) 746 - return; 747 - 748 - if (xen_strict_xenbus_quirk()) 749 726 return; 750 727 751 728 if (!xenbus_read_unsigned("control",
+10 -1
fs/debugfs/inode.c
··· 183 183 struct debugfs_fs_info *sb_opts = sb->s_fs_info; 184 184 struct debugfs_fs_info *new_opts = fc->s_fs_info; 185 185 186 + if (!new_opts) 187 + return 0; 188 + 186 189 sync_filesystem(sb); 187 190 188 191 /* structure copy of new mount options to sb */ ··· 285 282 286 283 static int debugfs_get_tree(struct fs_context *fc) 287 284 { 285 + int err; 286 + 288 287 if (!(debugfs_allow & DEBUGFS_ALLOW_API)) 289 288 return -EPERM; 290 289 291 - return get_tree_single(fc, debugfs_fill_super); 290 + err = get_tree_single(fc, debugfs_fill_super); 291 + if (err) 292 + return err; 293 + 294 + return debugfs_reconfigure(fc); 292 295 } 293 296 294 297 static void debugfs_free_fc(struct fs_context *fc)
+5 -4
fs/nfs/pagelist.c
··· 253 253 nfs_page_clear_headlock(req); 254 254 } 255 255 256 - /* 257 - * nfs_page_group_sync_on_bit_locked 256 + /** 257 + * nfs_page_group_sync_on_bit_locked - Test if all requests have @bit set 258 + * @req: request in page group 259 + * @bit: PG_* bit that is used to sync page group 258 260 * 259 261 * must be called with page group lock held 260 262 */ 261 - static bool 262 - nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit) 263 + bool nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit) 263 264 { 264 265 struct nfs_page *head = req->wb_head; 265 266 struct nfs_page *tmp;
+10 -19
fs/nfs/write.c
··· 153 153 } 154 154 } 155 155 156 - static int 157 - nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode) 156 + static void nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode) 158 157 { 159 - int ret; 160 - 161 - if (!test_bit(PG_REMOVE, &req->wb_flags)) 162 - return 0; 163 - ret = nfs_page_group_lock(req); 164 - if (ret) 165 - return ret; 166 158 if (test_and_clear_bit(PG_REMOVE, &req->wb_flags)) 167 159 nfs_page_set_inode_ref(req, inode); 168 - nfs_page_group_unlock(req); 169 - return 0; 170 160 } 171 161 172 162 /** ··· 575 585 } 576 586 } 577 587 588 + ret = nfs_page_group_lock(head); 589 + if (ret < 0) 590 + goto out_unlock; 591 + 578 592 /* Ensure that nobody removed the request before we locked it */ 579 593 if (head != folio->private) { 594 + nfs_page_group_unlock(head); 580 595 nfs_unlock_and_release_request(head); 581 596 goto retry; 582 597 } 583 598 584 - ret = nfs_cancel_remove_inode(head, inode); 585 - if (ret < 0) 586 - goto out_unlock; 587 - 588 - ret = nfs_page_group_lock(head); 589 - if (ret < 0) 590 - goto out_unlock; 599 + nfs_cancel_remove_inode(head, inode); 591 600 592 601 /* lock each request in the page group */ 593 602 for (subreq = head->wb_this_page; ··· 775 786 { 776 787 struct nfs_inode *nfsi = NFS_I(nfs_page_to_inode(req)); 777 788 778 - if (nfs_page_group_sync_on_bit(req, PG_REMOVE)) { 789 + nfs_page_group_lock(req); 790 + if (nfs_page_group_sync_on_bit_locked(req, PG_REMOVE)) { 779 791 struct folio *folio = nfs_page_to_folio(req->wb_head); 780 792 struct address_space *mapping = folio->mapping; 781 793 ··· 788 798 } 789 799 spin_unlock(&mapping->i_private_lock); 790 800 } 801 + nfs_page_group_unlock(req); 791 802 792 803 if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags)) { 793 804 atomic_long_dec(&nfsi->nrequests);
+1 -1
fs/smb/client/smb2ops.c
··· 4496 4496 for (int i = 1; i < num_rqst; i++) { 4497 4497 struct smb_rqst *old = &old_rq[i - 1]; 4498 4498 struct smb_rqst *new = &new_rq[i]; 4499 - struct folio_queue *buffer; 4499 + struct folio_queue *buffer = NULL; 4500 4500 size_t size = iov_iter_count(&old->rq_iter); 4501 4501 4502 4502 orig_len += smb_rqst_len(server, old);
+7 -7
fs/squashfs/super.c
··· 187 187 unsigned short flags; 188 188 unsigned int fragments; 189 189 u64 lookup_table_start, xattr_id_table_start, next_table; 190 - int err; 190 + int err, devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE); 191 191 192 192 TRACE("Entered squashfs_fill_superblock\n"); 193 + 194 + if (!devblksize) { 195 + errorf(fc, "squashfs: unable to set blocksize\n"); 196 + return -EINVAL; 197 + } 193 198 194 199 sb->s_fs_info = kzalloc(sizeof(*msblk), GFP_KERNEL); 195 200 if (sb->s_fs_info == NULL) { ··· 206 201 207 202 msblk->panic_on_errors = (opts->errors == Opt_errors_panic); 208 203 209 - msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE); 210 - if (!msblk->devblksize) { 211 - errorf(fc, "squashfs: unable to set blocksize\n"); 212 - return -EINVAL; 213 - } 214 - 204 + msblk->devblksize = devblksize; 215 205 msblk->devblksize_log2 = ffz(~msblk->devblksize); 216 206 217 207 mutex_init(&msblk->meta_index_mutex);
+1
include/linux/atmdev.h
··· 185 185 int (*compat_ioctl)(struct atm_dev *dev,unsigned int cmd, 186 186 void __user *arg); 187 187 #endif 188 + int (*pre_send)(struct atm_vcc *vcc, struct sk_buff *skb); 188 189 int (*send)(struct atm_vcc *vcc,struct sk_buff *skb); 189 190 int (*send_bh)(struct atm_vcc *vcc, struct sk_buff *skb); 190 191 int (*send_oam)(struct atm_vcc *vcc,void *cell,int flags);
+1
include/linux/blkdev.h
··· 656 656 QUEUE_FLAG_SQ_SCHED, /* single queue style io dispatch */ 657 657 QUEUE_FLAG_DISABLE_WBT_DEF, /* for sched to disable/enable wbt */ 658 658 QUEUE_FLAG_NO_ELV_SWITCH, /* can't switch elevator any more */ 659 + QUEUE_FLAG_QOS_ENABLED, /* qos is enabled */ 659 660 QUEUE_FLAG_MAX 660 661 }; 661 662
-8
include/linux/compiler.h
··· 288 288 #define __ADDRESSABLE(sym) \ 289 289 ___ADDRESSABLE(sym, __section(".discard.addressable")) 290 290 291 - #define __ADDRESSABLE_ASM(sym) \ 292 - .pushsection .discard.addressable,"aw"; \ 293 - .align ARCH_SEL(8,4); \ 294 - ARCH_SEL(.quad, .long) __stringify(sym); \ 295 - .popsection; 296 - 297 - #define __ADDRESSABLE_ASM_STR(sym) __stringify(__ADDRESSABLE_ASM(sym)) 298 - 299 291 /* 300 292 * This returns a constant expression while determining if an argument is 301 293 * a constant expression, most importantly without evaluating the argument.
+3
include/linux/dma-map-ops.h
··· 153 153 { 154 154 __free_pages(page, get_order(size)); 155 155 } 156 + static inline void dma_contiguous_early_fixup(phys_addr_t base, unsigned long size) 157 + { 158 + } 156 159 #endif /* CONFIG_DMA_CMA*/ 157 160 158 161 #ifdef CONFIG_DMA_DECLARE_COHERENT
+1 -6
include/linux/iosys-map.h
··· 264 264 */ 265 265 static inline void iosys_map_clear(struct iosys_map *map) 266 266 { 267 - if (map->is_iomem) { 268 - map->vaddr_iomem = NULL; 269 - map->is_iomem = false; 270 - } else { 271 - map->vaddr = NULL; 272 - } 267 + memset(map, 0, sizeof(*map)); 273 268 } 274 269 275 270 /**
+11 -9
include/linux/iov_iter.h
··· 160 160 161 161 do { 162 162 struct folio *folio = folioq_folio(folioq, slot); 163 - size_t part, remain, consumed; 163 + size_t part, remain = 0, consumed; 164 164 size_t fsize; 165 165 void *base; 166 166 ··· 168 168 break; 169 169 170 170 fsize = folioq_folio_size(folioq, slot); 171 - base = kmap_local_folio(folio, skip); 172 - part = umin(len, PAGE_SIZE - skip % PAGE_SIZE); 173 - remain = step(base, progress, part, priv, priv2); 174 - kunmap_local(base); 175 - consumed = part - remain; 176 - len -= consumed; 177 - progress += consumed; 178 - skip += consumed; 171 + if (skip < fsize) { 172 + base = kmap_local_folio(folio, skip); 173 + part = umin(len, PAGE_SIZE - skip % PAGE_SIZE); 174 + remain = step(base, progress, part, priv, priv2); 175 + kunmap_local(base); 176 + consumed = part - remain; 177 + len -= consumed; 178 + progress += consumed; 179 + skip += consumed; 180 + } 179 181 if (skip >= fsize) { 180 182 skip = 0; 181 183 slot++;
+9 -38
include/linux/kcov.h
··· 57 57 58 58 /* 59 59 * The softirq flavor of kcov_remote_*() functions is introduced as a temporary 60 - * workaround for KCOV's lack of nested remote coverage sections support. 61 - * 62 - * Adding support is tracked in https://bugzilla.kernel.org/show_bug.cgi?id=210337. 63 - * 64 - * kcov_remote_start_usb_softirq(): 65 - * 66 - * 1. Only collects coverage when called in the softirq context. This allows 67 - * avoiding nested remote coverage collection sections in the task context. 68 - * For example, USB/IP calls usb_hcd_giveback_urb() in the task context 69 - * within an existing remote coverage collection section. Thus, KCOV should 70 - * not attempt to start collecting coverage within the coverage collection 71 - * section in __usb_hcd_giveback_urb() in this case. 72 - * 73 - * 2. Disables interrupts for the duration of the coverage collection section. 74 - * This allows avoiding nested remote coverage collection sections in the 75 - * softirq context (a softirq might occur during the execution of a work in 76 - * the BH workqueue, which runs with in_serving_softirq() > 0). 77 - * For example, usb_giveback_urb_bh() runs in the BH workqueue with 78 - * interrupts enabled, so __usb_hcd_giveback_urb() might be interrupted in 79 - * the middle of its remote coverage collection section, and the interrupt 80 - * handler might invoke __usb_hcd_giveback_urb() again. 60 + * work around for kcov's lack of nested remote coverage sections support in 61 + * task context. Adding support for nested sections is tracked in: 62 + * https://bugzilla.kernel.org/show_bug.cgi?id=210337 81 63 */ 82 64 83 - static inline unsigned long kcov_remote_start_usb_softirq(u64 id) 65 + static inline void kcov_remote_start_usb_softirq(u64 id) 84 66 { 85 - unsigned long flags = 0; 86 - 87 - if (in_serving_softirq()) { 88 - local_irq_save(flags); 67 + if (in_serving_softirq() && !in_hardirq()) 89 68 kcov_remote_start_usb(id); 90 - } 91 - 92 - return flags; 93 69 } 94 70 95 - static inline void kcov_remote_stop_softirq(unsigned long flags) 71 + static inline void kcov_remote_stop_softirq(void) 96 72 { 97 - if (in_serving_softirq()) { 73 + if (in_serving_softirq() && !in_hardirq()) 98 74 kcov_remote_stop(); 99 - local_irq_restore(flags); 100 - } 101 75 } 102 76 103 77 #ifdef CONFIG_64BIT ··· 105 131 } 106 132 static inline void kcov_remote_start_common(u64 id) {} 107 133 static inline void kcov_remote_start_usb(u64 id) {} 108 - static inline unsigned long kcov_remote_start_usb_softirq(u64 id) 109 - { 110 - return 0; 111 - } 112 - static inline void kcov_remote_stop_softirq(unsigned long flags) {} 134 + static inline void kcov_remote_start_usb_softirq(u64 id) {} 135 + static inline void kcov_remote_stop_softirq(void) {} 113 136 114 137 #endif /* CONFIG_KCOV */ 115 138 #endif /* _LINUX_KCOV_H */
+3 -2
include/linux/memblock.h
··· 40 40 * via a driver, and never indicated in the firmware-provided memory map as 41 41 * system RAM. This corresponds to IORESOURCE_SYSRAM_DRIVER_MANAGED in the 42 42 * kernel resource tree. 43 - * @MEMBLOCK_RSRV_NOINIT: memory region for which struct pages are 44 - * not initialized (only for reserved regions). 43 + * @MEMBLOCK_RSRV_NOINIT: reserved memory region for which struct pages are not 44 + * fully initialized. Users of this flag are responsible to properly initialize 45 + * struct pages of this region 45 46 * @MEMBLOCK_RSRV_KERN: memory region that is reserved for kernel use, 46 47 * either explictitly with memblock_reserve_kern() or via memblock 47 48 * allocation APIs. All memblock allocations set this flag.
+5
include/linux/migrate.h
··· 79 79 void folio_migrate_flags(struct folio *newfolio, struct folio *folio); 80 80 int folio_migrate_mapping(struct address_space *mapping, 81 81 struct folio *newfolio, struct folio *folio, int extra_count); 82 + int set_movable_ops(const struct movable_operations *ops, enum pagetype type); 82 83 83 84 #else 84 85 ··· 98 97 99 98 static inline int migrate_huge_page_move_mapping(struct address_space *mapping, 100 99 struct folio *dst, struct folio *src) 100 + { 101 + return -ENOSYS; 102 + } 103 + static inline int set_movable_ops(const struct movable_operations *ops, enum pagetype type) 101 104 { 102 105 return -ENOSYS; 103 106 }
+1
include/linux/nfs_page.h
··· 160 160 extern int nfs_page_group_lock(struct nfs_page *); 161 161 extern void nfs_page_group_unlock(struct nfs_page *); 162 162 extern bool nfs_page_group_sync_on_bit(struct nfs_page *, unsigned int); 163 + extern bool nfs_page_group_sync_on_bit_locked(struct nfs_page *, unsigned int); 163 164 extern int nfs_page_set_headlock(struct nfs_page *req); 164 165 extern void nfs_page_clear_headlock(struct nfs_page *req); 165 166 extern bool nfs_async_iocounter_wait(struct rpc_task *, struct nfs_lock_context *);
+1
include/linux/platform_data/x86/int3472.h
··· 27 27 #define INT3472_GPIO_TYPE_CLK_ENABLE 0x0c 28 28 #define INT3472_GPIO_TYPE_PRIVACY_LED 0x0d 29 29 #define INT3472_GPIO_TYPE_HANDSHAKE 0x12 30 + #define INT3472_GPIO_TYPE_HOTPLUG_DETECT 0x13 30 31 31 32 #define INT3472_PDEV_MAX_NAME_LEN 23 32 33 #define INT3472_MAX_SENSOR_GPIOS 3
+2
include/linux/skbuff.h
··· 4213 4213 struct iov_iter *to, int len, u32 *crcp); 4214 4214 int skb_copy_datagram_from_iter(struct sk_buff *skb, int offset, 4215 4215 struct iov_iter *from, int len); 4216 + int skb_copy_datagram_from_iter_full(struct sk_buff *skb, int offset, 4217 + struct iov_iter *from, int len); 4216 4218 int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *frm); 4217 4219 void skb_free_datagram(struct sock *sk, struct sk_buff *skb); 4218 4220 int skb_kill_datagram(struct sock *sk, struct sk_buff *skb, unsigned int flags);
-2
include/linux/virtio_config.h
··· 328 328 bool virtio_get_shm_region(struct virtio_device *vdev, 329 329 struct virtio_shm_region *region, u8 id) 330 330 { 331 - if (!region->len) 332 - return false; 333 331 if (!vdev->config->get_shm_region) 334 332 return false; 335 333 return vdev->config->get_shm_region(vdev, region, id);
+1 -1
include/net/bluetooth/hci_sync.h
··· 93 93 94 94 int hci_update_eir_sync(struct hci_dev *hdev); 95 95 int hci_update_class_sync(struct hci_dev *hdev); 96 - int hci_update_name_sync(struct hci_dev *hdev); 96 + int hci_update_name_sync(struct hci_dev *hdev, const u8 *name); 97 97 int hci_write_ssp_mode_sync(struct hci_dev *hdev, u8 mode); 98 98 99 99 int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+17 -1
include/net/rose.h
··· 8 8 #ifndef _ROSE_H 9 9 #define _ROSE_H 10 10 11 + #include <linux/refcount.h> 11 12 #include <linux/rose.h> 12 13 #include <net/ax25.h> 13 14 #include <net/sock.h> ··· 97 96 ax25_cb *ax25; 98 97 struct net_device *dev; 99 98 unsigned short count; 100 - unsigned short use; 99 + refcount_t use; 101 100 unsigned int number; 102 101 char restarted; 103 102 char dce_mode; ··· 151 150 }; 152 151 153 152 #define rose_sk(sk) ((struct rose_sock *)(sk)) 153 + 154 + static inline void rose_neigh_hold(struct rose_neigh *rose_neigh) 155 + { 156 + refcount_inc(&rose_neigh->use); 157 + } 158 + 159 + static inline void rose_neigh_put(struct rose_neigh *rose_neigh) 160 + { 161 + if (refcount_dec_and_test(&rose_neigh->use)) { 162 + if (rose_neigh->ax25) 163 + ax25_cb_put(rose_neigh->ax25); 164 + kfree(rose_neigh->digipeat); 165 + kfree(rose_neigh); 166 + } 167 + } 154 168 155 169 /* af_rose.c */ 156 170 extern ax25_address rose_callsign;
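The net/rose change above converts `rose_neigh.use` from a plain counter to `refcount_t`, with hold/put helpers that free the object on the final put. A userspace sketch of the same release-on-last-put pattern, using C11 atomics and a hypothetical `neigh` object rather than the kernel's refcount_t API:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical stand-in for struct rose_neigh: frees itself when the
 * last reference is dropped. */
struct neigh {
	atomic_int use;
	bool *freed;		/* lets a caller observe the free */
};

struct neigh *neigh_alloc(bool *freed)
{
	struct neigh *n = malloc(sizeof(*n));

	if (!n)
		return NULL;
	atomic_init(&n->use, 1);	/* caller holds the first reference */
	n->freed = freed;
	return n;
}

void neigh_hold(struct neigh *n)
{
	atomic_fetch_add(&n->use, 1);	/* like rose_neigh_hold() */
}

void neigh_put(struct neigh *n)
{
	/* Like rose_neigh_put(): release only on the 1 -> 0 transition. */
	if (atomic_fetch_sub(&n->use, 1) == 1) {
		*n->freed = true;
		free(n);
	}
}
```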
+3 -2
include/sound/cs35l56.h
··· 107 107 #define CS35L56_DSP1_PMEM_5114 0x3804FE8 108 108 109 109 #define CS35L63_DSP1_FW_VER CS35L56_DSP1_FW_VER 110 - #define CS35L63_DSP1_HALO_STATE 0x280396C 111 - #define CS35L63_DSP1_PM_CUR_STATE 0x28042C8 110 + #define CS35L63_DSP1_HALO_STATE 0x2803C04 111 + #define CS35L63_DSP1_PM_CUR_STATE 0x2804518 112 112 #define CS35L63_PROTECTION_STATUS 0x340009C 113 113 #define CS35L63_TRANSDUCER_ACTUAL_PS 0x34000F4 114 114 #define CS35L63_MAIN_RENDER_USER_MUTE 0x3400020 ··· 306 306 struct gpio_desc *reset_gpio; 307 307 struct cs35l56_spi_payload *spi_payload_buf; 308 308 const struct cs35l56_fw_reg *fw_reg; 309 + const struct cirrus_amp_cal_controls *calibration_controls; 309 310 }; 310 311 311 312 static inline bool cs35l56_is_otp_register(unsigned int reg)
+3 -3
include/sound/tas2781-tlv.h
··· 2 2 // 3 3 // ALSA SoC Texas Instruments TAS2781 Audio Smart Amplifier 4 4 // 5 - // Copyright (C) 2022 - 2024 Texas Instruments Incorporated 5 + // Copyright (C) 2022 - 2025 Texas Instruments Incorporated 6 6 // https://www.ti.com 7 7 // 8 8 // The TAS2781 driver implements a flexible and configurable ··· 15 15 #ifndef __TAS2781_TLV_H__ 16 16 #define __TAS2781_TLV_H__ 17 17 18 - static const __maybe_unused DECLARE_TLV_DB_SCALE(dvc_tlv, -10000, 50, 0); 19 - static const __maybe_unused DECLARE_TLV_DB_SCALE(amp_vol_tlv, 1100, 50, 0); 18 + static const __maybe_unused DECLARE_TLV_DB_SCALE(tas2781_dvc_tlv, -10000, 50, 0); 19 + static const __maybe_unused DECLARE_TLV_DB_SCALE(tas2781_amp_tlv, 1100, 50, 0); 20 20 21 21 #endif
+1
include/uapi/linux/pfrut.h
··· 89 89 __u32 hw_ver; 90 90 __u32 rt_ver; 91 91 __u8 platform_id[16]; 92 + __u32 svn_ver; 92 93 }; 93 94 94 95 enum pfru_dsm_status {
+1 -1
include/uapi/linux/raid/md_p.h
··· 173 173 #else 174 174 #error unspecified endianness 175 175 #endif 176 - __u32 resync_offset; /* 11 resync checkpoint sector count */ 176 + __u32 recovery_cp; /* 11 resync checkpoint sector count */ 177 177 /* There are only valid for minor_version > 90 */ 178 178 __u64 reshape_position; /* 12,13 next address in array-space for reshape */ 179 179 __u32 new_level; /* 14 new level we are reshaping to */
+2 -2
include/uapi/linux/vhost.h
··· 260 260 * When fork_owner is set to VHOST_FORK_OWNER_KTHREAD: 261 261 * - Vhost will create vhost workers as kernel threads. 262 262 */ 263 - #define VHOST_SET_FORK_FROM_OWNER _IOW(VHOST_VIRTIO, 0x83, __u8) 263 + #define VHOST_SET_FORK_FROM_OWNER _IOW(VHOST_VIRTIO, 0x84, __u8) 264 264 265 265 /** 266 266 * VHOST_GET_FORK_OWNER - Get the current fork_owner flag for the vhost device. ··· 268 268 * 269 269 * @return: An 8-bit value indicating the current thread mode. 270 270 */ 271 - #define VHOST_GET_FORK_FROM_OWNER _IOR(VHOST_VIRTIO, 0x84, __u8) 271 + #define VHOST_GET_FORK_FROM_OWNER _IOR(VHOST_VIRTIO, 0x85, __u8) 272 272 273 273 #endif
+3
io_uring/futex.c
··· 288 288 goto done_unlock; 289 289 } 290 290 291 + req->flags |= REQ_F_ASYNC_DATA; 291 292 req->async_data = ifd; 292 293 ifd->q = futex_q_init; 293 294 ifd->q.bitset = iof->futex_mask; ··· 310 309 if (ret < 0) 311 310 req_set_fail(req); 312 311 io_req_set_res(req, ret, 0); 312 + req->async_data = NULL; 313 + req->flags &= ~REQ_F_ASYNC_DATA; 313 314 kfree(ifd); 314 315 return IOU_COMPLETE; 315 316 }
+1
io_uring/io_uring.c
··· 2119 2119 req->file = NULL; 2120 2120 req->tctx = current->io_uring; 2121 2121 req->cancel_seq_set = false; 2122 + req->async_data = NULL; 2122 2123 2123 2124 if (unlikely(opcode >= IORING_OP_LAST)) { 2124 2125 req->opcode = 0;
+1
kernel/Kconfig.kexec
··· 97 97 config KEXEC_HANDOVER 98 98 bool "kexec handover" 99 99 depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE 100 + depends on !DEFERRED_STRUCT_PAGE_INIT 100 101 select MEMBLOCK_KHO_SCRATCH 101 102 select KEXEC_FILE 102 103 select DEBUG_FS
+5 -6
kernel/cgroup/cpuset.c
··· 280 280 { 281 281 if (!cpusets_insane_config() && 282 282 movable_only_nodes(nodes)) { 283 - static_branch_enable(&cpusets_insane_config_key); 283 + static_branch_enable_cpuslocked(&cpusets_insane_config_key); 284 284 pr_info("Unsupported (movable nodes only) cpuset configuration detected (nmask=%*pbl)!\n" 285 285 "Cpuset allocations might fail even with a lot of memory available.\n", 286 286 nodemask_pr_args(nodes)); ··· 1843 1843 if (is_partition_valid(cs)) 1844 1844 adding = cpumask_and(tmp->addmask, 1845 1845 xcpus, parent->effective_xcpus); 1846 - } else if (is_partition_invalid(cs) && 1846 + } else if (is_partition_invalid(cs) && !cpumask_empty(xcpus) && 1847 1847 cpumask_subset(xcpus, parent->effective_xcpus)) { 1848 1848 struct cgroup_subsys_state *css; 1849 1849 struct cpuset *child; ··· 3358 3358 else 3359 3359 return -EINVAL; 3360 3360 3361 - css_get(&cs->css); 3362 3361 cpus_read_lock(); 3363 3362 mutex_lock(&cpuset_mutex); 3364 3363 if (is_cpuset_online(cs)) 3365 3364 retval = update_prstate(cs, val); 3366 3365 mutex_unlock(&cpuset_mutex); 3367 3366 cpus_read_unlock(); 3368 - css_put(&cs->css); 3369 3367 return retval ?: nbytes; 3370 3368 } 3371 3369 ··· 3868 3870 partcmd = partcmd_invalidate; 3869 3871 /* 3870 3872 * On the other hand, an invalid partition root may be transitioned 3871 - * back to a regular one. 3873 + * back to a regular one with a non-empty effective xcpus. 3872 3874 */ 3873 - else if (is_partition_valid(parent) && is_partition_invalid(cs)) 3875 + else if (is_partition_valid(parent) && is_partition_invalid(cs) && 3876 + !cpumask_empty(cs->effective_xcpus)) 3874 3877 partcmd = partcmd_update; 3875 3878 3876 3879 if (partcmd >= 0) {
+3
kernel/cgroup/rstat.c
··· 479 479 if (!css_uses_rstat(css)) 480 480 return; 481 481 482 + if (!css->rstat_cpu) 483 + return; 484 + 482 485 css_rstat_flush(css); 483 486 484 487 /* sanity check */
-2
kernel/dma/contiguous.c
··· 483 483 pr_err("Reserved memory: unable to setup CMA region\n"); 484 484 return err; 485 485 } 486 - /* Architecture specific contiguous memory fixup. */ 487 - dma_contiguous_early_fixup(rmem->base, rmem->size); 488 486 489 487 if (default_cma) 490 488 dma_contiguous_default_area = cma;
+2 -2
kernel/dma/pool.c
··· 102 102 103 103 #ifdef CONFIG_DMA_DIRECT_REMAP 104 104 addr = dma_common_contiguous_remap(page, pool_size, 105 - pgprot_dmacoherent(PAGE_KERNEL), 106 - __builtin_return_address(0)); 105 + pgprot_decrypted(pgprot_dmacoherent(PAGE_KERNEL)), 106 + __builtin_return_address(0)); 107 107 if (!addr) 108 108 goto free_page; 109 109 #else
+6
kernel/events/core.c
··· 2665 2665 2666 2666 static void perf_event_unthrottle(struct perf_event *event, bool start) 2667 2667 { 2668 + if (event->state != PERF_EVENT_STATE_ACTIVE) 2669 + return; 2670 + 2668 2671 event->hw.interrupts = 0; 2669 2672 if (start) 2670 2673 event->pmu->start(event, 0); ··· 2677 2674 2678 2675 static void perf_event_throttle(struct perf_event *event) 2679 2676 { 2677 + if (event->state != PERF_EVENT_STATE_ACTIVE) 2678 + return; 2679 + 2680 2680 event->hw.interrupts = MAX_INTERRUPTS; 2681 2681 event->pmu->stop(event, 0); 2682 2682 if (event == event->group_leader)
+25 -4
kernel/kexec_handover.c
··· 144 144 unsigned int order) 145 145 { 146 146 struct kho_mem_phys_bits *bits; 147 - struct kho_mem_phys *physxa; 147 + struct kho_mem_phys *physxa, *new_physxa; 148 148 const unsigned long pfn_high = pfn >> order; 149 149 150 150 might_sleep(); 151 151 152 - physxa = xa_load_or_alloc(&track->orders, order, sizeof(*physxa)); 153 - if (IS_ERR(physxa)) 154 - return PTR_ERR(physxa); 152 + physxa = xa_load(&track->orders, order); 153 + if (!physxa) { 154 + int err; 155 + 156 + new_physxa = kzalloc(sizeof(*physxa), GFP_KERNEL); 157 + if (!new_physxa) 158 + return -ENOMEM; 159 + 160 + xa_init(&new_physxa->phys_bits); 161 + physxa = xa_cmpxchg(&track->orders, order, NULL, new_physxa, 162 + GFP_KERNEL); 163 + 164 + err = xa_err(physxa); 165 + if (err || physxa) { 166 + xa_destroy(&new_physxa->phys_bits); 167 + kfree(new_physxa); 168 + 169 + if (err) 170 + return err; 171 + } else { 172 + physxa = new_physxa; 173 + } 174 + } 155 175 156 176 bits = xa_load_or_alloc(&physxa->phys_bits, pfn_high / PRESERVE_BITS, 157 177 sizeof(*bits)); ··· 564 544 err_free_scratch_desc: 565 545 memblock_free(kho_scratch, kho_scratch_cnt * sizeof(*kho_scratch)); 566 546 err_disable_kho: 547 + pr_warn("Failed to reserve scratch area, disabling kexec handover\n"); 567 548 kho_enable = false; 568 549 } 569 550
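The kexec handover fix above replaces `xa_load_or_alloc()` with an explicit allocate-then-`xa_cmpxchg()` sequence: allocate a candidate, try to install it only if the slot is still empty, and free the candidate if another installer won the race. A minimal userspace sketch of that idiom with a single atomic pointer slot (illustrative names, C11 atomics rather than the XArray API):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* One slot standing in for an XArray entry. */
_Atomic(int *) slot;

int *slot_get_or_alloc(void)
{
	int *cur = atomic_load(&slot);

	if (cur)
		return cur;	/* fast path: already initialized */

	int *candidate = calloc(1, sizeof(*candidate));
	if (!candidate)
		return NULL;

	int *expected = NULL;
	/* Like xa_cmpxchg(..., NULL, new, ...): install only if still empty. */
	if (atomic_compare_exchange_strong(&slot, &expected, candidate))
		return candidate;	/* we installed it */

	/* Lost the race: drop our candidate, use the winner's object. */
	free(candidate);
	return expected;
}
```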
+4 -3
kernel/params.c
··· 513 513 int param_set_copystring(const char *val, const struct kernel_param *kp) 514 514 { 515 515 const struct kparam_string *kps = kp->str; 516 + const size_t len = strnlen(val, kps->maxlen); 516 517 517 - if (strnlen(val, kps->maxlen) == kps->maxlen) { 518 + if (len == kps->maxlen) { 518 519 pr_err("%s: string doesn't fit in %u chars.\n", 519 520 kp->name, kps->maxlen-1); 520 521 return -ENOSPC; 521 522 } 522 - strcpy(kps->string, val); 523 + memcpy(kps->string, val, len + 1); 523 524 return 0; 524 525 } 525 526 EXPORT_SYMBOL(param_set_copystring); ··· 842 841 dot = strchr(kp->name, '.'); 843 842 if (!dot) { 844 843 /* This happens for core_param() */ 845 - strcpy(modname, "kernel"); 844 + strscpy(modname, "kernel"); 846 845 name_len = 0; 847 846 } else { 848 847 name_len = dot - kp->name + 1;
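The params.c change above measures the string once with `strnlen()` and reuses that length for a bounds-checked `memcpy()`, instead of scanning the string a second time with `strcpy()`. A userspace sketch of the reworked check (hypothetical function name, same -ENOSPC contract):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the reworked param_set_copystring(): strnlen() bounds the
 * scan, and the measured length is reused for the copy so the string
 * is traversed only once. */
int set_copystring(char *dst, size_t maxlen, const char *val)
{
	const size_t len = strnlen(val, maxlen);

	if (len == maxlen)
		return -ENOSPC;	/* value plus NUL does not fit */

	memcpy(dst, val, len + 1);	/* len < maxlen, so the NUL fits */
	return 0;
}
```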
+4
kernel/sched/ext.c
··· 5749 5749 __setscheduler_class(p->policy, p->prio); 5750 5750 struct sched_enq_and_set_ctx ctx; 5751 5751 5752 + if (!tryget_task_struct(p)) 5753 + continue; 5754 + 5752 5755 if (old_class != new_class && p->se.sched_delayed) 5753 5756 dequeue_task(task_rq(p), p, DEQUEUE_SLEEP | DEQUEUE_DELAYED); 5754 5757 ··· 5764 5761 sched_enq_and_set_task(&ctx); 5765 5762 5766 5763 check_class_changed(task_rq(p), p, old_class, p->prio); 5764 + put_task_struct(p); 5767 5765 } 5768 5766 scx_task_iter_stop(&sti); 5769 5767 percpu_up_write(&scx_fork_rwsem);
+1
kernel/trace/fgraph.c
··· 1397 1397 ftrace_graph_active--; 1398 1398 gops->saved_func = NULL; 1399 1399 fgraph_lru_release_index(i); 1400 + unregister_pm_notifier(&ftrace_suspend_notifier); 1400 1401 } 1401 1402 return ret; 1402 1403 }
+10 -9
kernel/trace/ftrace.c
··· 4661 4661 } else { 4662 4662 iter->hash = alloc_and_copy_ftrace_hash(size_bits, hash); 4663 4663 } 4664 + } else { 4665 + if (hash) 4666 + iter->hash = alloc_and_copy_ftrace_hash(hash->size_bits, hash); 4667 + else 4668 + iter->hash = EMPTY_HASH; 4669 + } 4664 4670 4665 - if (!iter->hash) { 4666 - trace_parser_put(&iter->parser); 4667 - goto out_unlock; 4668 - } 4669 - } else 4670 - iter->hash = hash; 4671 + if (!iter->hash) { 4672 + trace_parser_put(&iter->parser); 4673 + goto out_unlock; 4674 + } 4671 4675 4672 4676 ret = 0; 4673 4677 ··· 6547 6543 ftrace_hash_move_and_update_ops(iter->ops, orig_hash, 6548 6544 iter->hash, filter_hash); 6549 6545 mutex_unlock(&ftrace_lock); 6550 - } else { 6551 - /* For read only, the hash is the ops hash */ 6552 - iter->hash = NULL; 6553 6546 } 6554 6547 6555 6548 mutex_unlock(&iter->ops->func_hash->regex_lock);
+1 -1
kernel/trace/ring_buffer.c
··· 7666 7666 rb_test_started = true; 7667 7667 7668 7668 set_current_state(TASK_INTERRUPTIBLE); 7669 - /* Just run for 10 seconds */; 7669 + /* Just run for 10 seconds */ 7670 7670 schedule_timeout(10 * HZ); 7671 7671 7672 7672 kthread_stop(rb_hammer);
+14 -8
kernel/trace/trace.c
··· 1816 1816 1817 1817 ret = get_user(ch, ubuf++); 1818 1818 if (ret) 1819 - return ret; 1819 + goto fail; 1820 1820 1821 1821 read++; 1822 1822 cnt--; ··· 1830 1830 while (cnt && isspace(ch)) { 1831 1831 ret = get_user(ch, ubuf++); 1832 1832 if (ret) 1833 - return ret; 1833 + goto fail; 1834 1834 read++; 1835 1835 cnt--; 1836 1836 } ··· 1848 1848 while (cnt && !isspace(ch) && ch) { 1849 1849 if (parser->idx < parser->size - 1) 1850 1850 parser->buffer[parser->idx++] = ch; 1851 - else 1852 - return -EINVAL; 1851 + else { 1852 + ret = -EINVAL; 1853 + goto fail; 1854 + } 1853 1855 1854 1856 ret = get_user(ch, ubuf++); 1855 1857 if (ret) 1856 - return ret; 1858 + goto fail; 1857 1859 read++; 1858 1860 cnt--; 1859 1861 } ··· 1870 1868 /* Make sure the parsed string always terminates with '\0'. */ 1871 1869 parser->buffer[parser->idx] = 0; 1872 1870 } else { 1873 - return -EINVAL; 1871 + ret = -EINVAL; 1872 + goto fail; 1874 1873 } 1875 1874 1876 1875 *ppos += read; 1877 1876 return read; 1877 + fail: 1878 + trace_parser_fail(parser); 1879 + return ret; 1878 1880 } 1879 1881 1880 1882 /* TODO add a seq_buf_to_buffer() */ ··· 10638 10632 ret = print_trace_line(&iter); 10639 10633 if (ret != TRACE_TYPE_NO_CONSUME) 10640 10634 trace_consume(&iter); 10635 + 10636 + trace_printk_seq(&iter.seq); 10641 10637 } 10642 10638 touch_nmi_watchdog(); 10643 - 10644 - trace_printk_seq(&iter.seq); 10645 10639 } 10646 10640 10647 10641 if (!cnt)
+7 -1
kernel/trace/trace.h
··· 1292 1292 */ 1293 1293 struct trace_parser { 1294 1294 bool cont; 1295 + bool fail; 1295 1296 char *buffer; 1296 1297 unsigned idx; 1297 1298 unsigned size; ··· 1300 1299 1301 1300 static inline bool trace_parser_loaded(struct trace_parser *parser) 1302 1301 { 1303 - return (parser->idx != 0); 1302 + return !parser->fail && parser->idx != 0; 1304 1303 } 1305 1304 1306 1305 static inline bool trace_parser_cont(struct trace_parser *parser) ··· 1312 1311 { 1313 1312 parser->cont = false; 1314 1313 parser->idx = 0; 1314 + } 1315 + 1316 + static inline void trace_parser_fail(struct trace_parser *parser) 1317 + { 1318 + parser->fail = true; 1315 1319 } 1316 1320 1317 1321 extern int trace_parser_get_init(struct trace_parser *parser, int size);
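The trace_parser change above adds a sticky `fail` flag so that `trace_parser_loaded()` rejects partially-read input once any read in the parse loop has failed. The invariant is small enough to model directly (illustrative struct, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of the trace_parser change: a sticky fail flag makes
 * the "loaded" check reject input after a mid-stream error, even if
 * some characters were already buffered. */
struct parser {
	bool fail;
	unsigned idx;	/* number of buffered characters */
};

bool parser_loaded(const struct parser *p)
{
	return !p->fail && p->idx != 0;
}

void parser_fail(struct parser *p)
{
	p->fail = true;
}
```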
+16 -6
kernel/trace/trace_functions_graph.c
··· 27 27 unsigned long enter_funcs[FTRACE_RETFUNC_DEPTH]; 28 28 }; 29 29 30 + struct fgraph_ent_args { 31 + struct ftrace_graph_ent_entry ent; 32 + /* Force the sizeof of args[] to have FTRACE_REGS_MAX_ARGS entries */ 33 + unsigned long args[FTRACE_REGS_MAX_ARGS]; 34 + }; 35 + 30 36 struct fgraph_data { 31 37 struct fgraph_cpu_data __percpu *cpu_data; 32 38 33 39 /* Place to preserve last processed entry. */ 34 40 union { 35 - struct ftrace_graph_ent_entry ent; 41 + struct fgraph_ent_args ent; 42 + /* TODO allow retaddr to have args */ 36 43 struct fgraph_retaddr_ent_entry rent; 37 - } ent; 44 + }; 38 45 struct ftrace_graph_ret_entry ret; 39 46 int failed; 40 47 int cpu; ··· 634 627 * Save current and next entries for later reference 635 628 * if the output fails. 636 629 */ 637 - if (unlikely(curr->ent.type == TRACE_GRAPH_RETADDR_ENT)) 638 - data->ent.rent = *(struct fgraph_retaddr_ent_entry *)curr; 639 - else 640 - data->ent.ent = *curr; 630 + if (unlikely(curr->ent.type == TRACE_GRAPH_RETADDR_ENT)) { 631 + data->rent = *(struct fgraph_retaddr_ent_entry *)curr; 632 + } else { 633 + int size = min((int)sizeof(data->ent), (int)iter->ent_size); 634 + 635 + memcpy(&data->ent, curr, size); 636 + } 641 637 /* 642 638 * If the next event is not a return type, then 643 639 * we only care about what type it is. Otherwise we can
+6
mm/balloon_compaction.c
··· 254 254 .putback_page = balloon_page_putback, 255 255 }; 256 256 257 + static int __init balloon_init(void) 258 + { 259 + return set_movable_ops(&balloon_mops, PGTY_offline); 260 + } 261 + core_initcall(balloon_init); 262 + 257 263 #endif /* CONFIG_BALLOON_COMPACTION */
+14 -1
mm/damon/core.c
··· 845 845 return NULL; 846 846 } 847 847 848 + static struct damos_filter *damos_nth_ops_filter(int n, struct damos *s) 849 + { 850 + struct damos_filter *filter; 851 + int i = 0; 852 + 853 + damos_for_each_ops_filter(filter, s) { 854 + if (i++ == n) 855 + return filter; 856 + } 857 + return NULL; 858 + } 859 + 848 860 static void damos_commit_filter_arg( 849 861 struct damos_filter *dst, struct damos_filter *src) 850 862 { ··· 883 871 { 884 872 dst->type = src->type; 885 873 dst->matching = src->matching; 874 + dst->allow = src->allow; 886 875 damos_commit_filter_arg(dst, src); 887 876 } 888 877 ··· 921 908 int i = 0, j = 0; 922 909 923 910 damos_for_each_ops_filter_safe(dst_filter, next, dst) { 924 - src_filter = damos_nth_filter(i++, src); 911 + src_filter = damos_nth_ops_filter(i++, src); 925 912 if (src_filter) 926 913 damos_commit_filter(dst_filter, src_filter); 927 914 else
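The damon fix above indexes into the ops-filter list with a dedicated `damos_nth_ops_filter()` instead of the generic `damos_nth_filter()`, so the nth source filter is matched against the nth destination filter of the same kind. The selection logic can be sketched over a plain singly-linked list (hypothetical `filter` type; `handled_by_ops` stands in for the ops-vs-core distinction):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical filter node: only some filters are handled by the ops layer. */
struct filter {
	int handled_by_ops;
	struct filter *next;
};

/* Return the nth filter that belongs to the ops-handled subset,
 * or NULL if there are fewer than n + 1 such filters. */
struct filter *nth_ops_filter(int n, struct filter *head)
{
	int i = 0;

	for (struct filter *f = head; f; f = f->next) {
		if (!f->handled_by_ops)
			continue;	/* skip core-handled filters */
		if (i++ == n)
			return f;
	}
	return NULL;
}
```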
+1 -1
mm/damon/sysfs-schemes.c
··· 2158 2158 { 2159 2159 damon_sysfs_access_pattern_rm_dirs(scheme->access_pattern); 2160 2160 kobject_put(&scheme->access_pattern->kobj); 2161 - kobject_put(&scheme->dests->kobj); 2162 2161 damos_sysfs_dests_rm_dirs(scheme->dests); 2162 + kobject_put(&scheme->dests->kobj); 2163 2163 damon_sysfs_quotas_rm_dirs(scheme->quotas); 2164 2164 kobject_put(&scheme->quotas->kobj); 2165 2165 kobject_put(&scheme->watermarks->kobj);
+7 -2
mm/debug_vm_pgtable.c
··· 990 990 991 991 /* Free page table entries */ 992 992 if (args->start_ptep) { 993 + pmd_clear(args->pmdp); 993 994 pte_free(args->mm, args->start_ptep); 994 995 mm_dec_nr_ptes(args->mm); 995 996 } 996 997 997 998 if (args->start_pmdp) { 999 + pud_clear(args->pudp); 998 1000 pmd_free(args->mm, args->start_pmdp); 999 1001 mm_dec_nr_pmds(args->mm); 1000 1002 } 1001 1003 1002 1004 if (args->start_pudp) { 1005 + p4d_clear(args->p4dp); 1003 1006 pud_free(args->mm, args->start_pudp); 1004 1007 mm_dec_nr_puds(args->mm); 1005 1008 } 1006 1009 1007 - if (args->start_p4dp) 1010 + if (args->start_p4dp) { 1011 + pgd_clear(args->pgdp); 1008 1012 p4d_free(args->mm, args->start_p4dp); 1013 + } 1009 1014 1010 1015 /* Free vma and mm struct */ 1011 1016 if (args->vma) 1012 1017 vm_area_free(args->vma); 1013 1018 1014 1019 if (args->mm) 1015 - mmdrop(args->mm); 1020 + mmput(args->mm); 1016 1021 } 1017 1022 1018 1023 static struct page * __init
+13 -6
mm/memblock.c
··· 780 780 } 781 781 782 782 if ((nr_pages << PAGE_SHIFT) > threshold_bytes) { 783 - mem_size_mb = memblock_phys_mem_size() >> 20; 783 + mem_size_mb = memblock_phys_mem_size() / SZ_1M; 784 784 pr_err("NUMA: no nodes coverage for %luMB of %luMB RAM\n", 785 - (nr_pages << PAGE_SHIFT) >> 20, mem_size_mb); 785 + (nr_pages << PAGE_SHIFT) / SZ_1M, mem_size_mb); 786 786 return false; 787 787 } 788 788 ··· 1091 1091 1092 1092 /** 1093 1093 * memblock_reserved_mark_noinit - Mark a reserved memory region with flag 1094 - * MEMBLOCK_RSRV_NOINIT which results in the struct pages not being initialized 1095 - * for this region. 1094 + * MEMBLOCK_RSRV_NOINIT 1095 + * 1096 1096 * @base: the base phys addr of the region 1097 1097 * @size: the size of the region 1098 1098 * 1099 - * struct pages will not be initialized for reserved memory regions marked with 1100 - * %MEMBLOCK_RSRV_NOINIT. 1099 + * The struct pages for the reserved regions marked %MEMBLOCK_RSRV_NOINIT will 1100 + * not be fully initialized to allow the caller optimize their initialization. 1101 + * 1102 + * When %CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, setting this flag 1103 + * completely bypasses the initialization of struct pages for such region. 1104 + * 1105 + * When %CONFIG_DEFERRED_STRUCT_PAGE_INIT is disabled, struct pages in this 1106 + * region will be initialized with default values but won't be marked as 1107 + * reserved. 1101 1108 * 1102 1109 * Return: 0 on success, -errno on failure. 1103 1110 */
+8
mm/memory-failure.c
··· 853 853 #define hwpoison_hugetlb_range NULL 854 854 #endif 855 855 856 + static int hwpoison_test_walk(unsigned long start, unsigned long end, 857 + struct mm_walk *walk) 858 + { 859 + /* We also want to consider pages mapped into VM_PFNMAP. */ 860 + return 0; 861 + } 862 + 856 863 static const struct mm_walk_ops hwpoison_walk_ops = { 857 864 .pmd_entry = hwpoison_pte_range, 858 865 .hugetlb_entry = hwpoison_hugetlb_range, 866 + .test_walk = hwpoison_test_walk, 859 867 .walk_lock = PGWALK_RDLOCK, 860 868 }; 861 869
+30 -8
mm/migrate.c
··· 43 43 #include <linux/sched/sysctl.h> 44 44 #include <linux/memory-tiers.h> 45 45 #include <linux/pagewalk.h> 46 - #include <linux/balloon_compaction.h> 47 - #include <linux/zsmalloc.h> 48 46 49 47 #include <asm/tlbflush.h> 50 48 ··· 50 52 51 53 #include "internal.h" 52 54 #include "swap.h" 55 + 56 + static const struct movable_operations *offline_movable_ops; 57 + static const struct movable_operations *zsmalloc_movable_ops; 58 + 59 + int set_movable_ops(const struct movable_operations *ops, enum pagetype type) 60 + { 61 + /* 62 + * We only allow for selected types and don't handle concurrent 63 + * registration attempts yet. 64 + */ 65 + switch (type) { 66 + case PGTY_offline: 67 + if (offline_movable_ops && ops) 68 + return -EBUSY; 69 + offline_movable_ops = ops; 70 + break; 71 + case PGTY_zsmalloc: 72 + if (zsmalloc_movable_ops && ops) 73 + return -EBUSY; 74 + zsmalloc_movable_ops = ops; 75 + break; 76 + default: 77 + return -EINVAL; 78 + } 79 + return 0; 80 + } 81 + EXPORT_SYMBOL_GPL(set_movable_ops); 53 82 54 83 static const struct movable_operations *page_movable_ops(struct page *page) 55 84 { ··· 87 62 * it as movable, the page type must be sticky until the page gets freed 88 63 * back to the buddy. 89 64 */ 90 - #ifdef CONFIG_BALLOON_COMPACTION 91 65 if (PageOffline(page)) 92 66 /* Only balloon compaction sets PageOffline pages movable. */ 93 - return &balloon_mops; 94 - #endif /* CONFIG_BALLOON_COMPACTION */ 95 - #if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) 67 + return offline_movable_ops; 96 68 if (PageZsmalloc(page)) 97 - return &zsmalloc_mops; 98 - #endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */ 69 + return zsmalloc_movable_ops; 70 + 99 71 return NULL; 100 72 } 101 73
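The migrate.c change above introduces a small registry: movable-ops users register per page type at init time, double registration fails with -EBUSY, and passing NULL unregisters. A userspace sketch of that registry contract (the PT_* names and dummy ops struct are illustrative, not the kernel's):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Illustrative page types and ops; mirrors the shape of set_movable_ops(). */
enum pagetype { PT_OFFLINE, PT_ZSMALLOC, PT_MAX };

struct movable_ops { int dummy; };

const struct movable_ops *registry[PT_MAX];

int set_ops(const struct movable_ops *ops, enum pagetype type)
{
	if (type >= PT_MAX)
		return -EINVAL;	/* only selected types are supported */
	if (registry[type] && ops)
		return -EBUSY;	/* slot already claimed */
	registry[type] = ops;	/* ops == NULL unregisters */
	return 0;
}
```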
+47 -35
mm/mremap.c
··· 323 323 } 324 324 #endif 325 325 326 + static inline bool uffd_supports_page_table_move(struct pagetable_move_control *pmc) 327 + { 328 + /* 329 + * If we are moving a VMA that has uffd-wp registered but with 330 + * remap events disabled (new VMA will not be registered with uffd), we 331 + * need to ensure that the uffd-wp state is cleared from all pgtables. 332 + * This means recursing into lower page tables in move_page_tables(). 333 + * 334 + * We might get called with VMAs reversed when recovering from a 335 + * failed page table move. In that case, the 336 + * "old"-but-actually-"originally new" VMA during recovery will not have 337 + * a uffd context. Recursing into lower page tables during the original 338 + * move but not during the recovery move will cause trouble, because we 339 + * run into already-existing page tables. So check both VMAs. 340 + */ 341 + return !vma_has_uffd_without_event_remap(pmc->old) && 342 + !vma_has_uffd_without_event_remap(pmc->new); 343 + } 344 + 326 345 #ifdef CONFIG_HAVE_MOVE_PMD 327 346 static bool move_normal_pmd(struct pagetable_move_control *pmc, 328 347 pmd_t *old_pmd, pmd_t *new_pmd) ··· 353 334 pmd_t pmd; 354 335 355 336 if (!arch_supports_page_table_move()) 337 + return false; 338 + if (!uffd_supports_page_table_move(pmc)) 356 339 return false; 357 340 /* 358 341 * The destination pmd shouldn't be established, free_pgtables() ··· 380 359 * this point, and verify that it really is empty. We'll see. 381 360 */ 382 361 if (WARN_ON_ONCE(!pmd_none(*new_pmd))) 383 - return false; 384 - 385 - /* If this pmd belongs to a uffd vma with remap events disabled, we need 386 - * to ensure that the uffd-wp state is cleared from all pgtables. This 387 - * means recursing into lower page tables in move_page_tables(), and we 388 - * can reuse the existing code if we simply treat the entry as "not 389 - * moved". 390 - */ 391 - if (vma_has_uffd_without_event_remap(vma)) 392 - return false; 393 363 394 364 /* ··· 430 418 431 419 if (!arch_supports_page_table_move()) 432 420 return false; 421 + if (!uffd_supports_page_table_move(pmc)) 422 + return false; 433 423 /* 434 424 * The destination pud shouldn't be established, free_pgtables() 435 425 * should have released it. 436 426 */ 437 427 if (WARN_ON_ONCE(!pud_none(*new_pud))) 438 - return false; 439 - 440 - /* If this pud belongs to a uffd vma with remap events disabled, we need 441 - * to ensure that the uffd-wp state is cleared from all pgtables. This 442 - * means recursing into lower page tables in move_page_tables(), and we 443 - * can reuse the existing code if we simply treat the entry as "not 444 - * moved". 445 - */ 446 - if (vma_has_uffd_without_event_remap(vma)) 447 428 return false; 448 429 449 430 /* ··· 1625 1620 1626 1621 static bool vma_multi_allowed(struct vm_area_struct *vma) 1627 1622 { 1628 - struct file *file; 1623 + struct file *file = vma->vm_file; 1629 1624 1630 1625 /* 1631 1626 * We can't support moving multiple uffd VMAs as notify requires ··· 1638 1633 * Custom get unmapped area might result in MREMAP_FIXED not 1639 1634 * being obeyed. 1640 1635 */ 1641 - file = vma->vm_file; 1642 - if (file && !vma_is_shmem(vma) && !is_vm_hugetlb_page(vma)) { 1643 - const struct file_operations *fop = file->f_op; 1636 + if (!file || !file->f_op->get_unmapped_area) 1637 + return true; 1638 + /* Known good. */ 1639 + if (vma_is_shmem(vma)) 1640 + return true; 1641 + if (is_vm_hugetlb_page(vma)) 1642 + return true; 1643 + if (file->f_op->get_unmapped_area == thp_get_unmapped_area) 1644 + return true; 1644 1645 1645 - if (fop->get_unmapped_area) 1646 - return false; 1647 - } 1648 - 1649 - return true; 1646 + return false; 1650 1647 } 1651 1648 1652 1649 static int check_prep_vma(struct vma_remap_struct *vrm) ··· 1825 1818 unsigned long start = vrm->addr; 1826 1819 unsigned long end = vrm->addr + vrm->old_len; 1827 1820 unsigned long new_addr = vrm->new_addr; 1828 - bool allowed = true, seen_vma = false; 1829 1821 unsigned long target_addr = new_addr; 1830 1822 unsigned long res = -EFAULT; 1831 1823 unsigned long last_end; 1824 + bool seen_vma = false; 1825 + 1832 1826 VMA_ITERATOR(vmi, current->mm, start); 1833 1827 1834 1828 /* ··· 1842 1834 unsigned long addr = max(vma->vm_start, start); 1843 1835 unsigned long len = min(end, vma->vm_end) - addr; 1844 1836 unsigned long offset, res_vma; 1845 - 1846 - if (!allowed) 1847 - return -EFAULT; 1837 + bool multi_allowed; 1848 1838 1849 1839 /* No gap permitted at the start of the range. */ 1850 1840 if (!seen_vma && start < vma->vm_start) ··· 1871 1865 vrm->new_addr = target_addr + offset; 1872 1866 vrm->old_len = vrm->new_len = len; 1873 1867 1874 - allowed = vma_multi_allowed(vma); 1875 - if (seen_vma && !allowed) 1876 - return -EFAULT; 1868 + multi_allowed = vma_multi_allowed(vma); 1869 + if (!multi_allowed) { 1870 + /* This is not the first VMA, abort immediately. */ 1871 + if (seen_vma) 1872 + return -EFAULT; 1873 + /* This is the first, but there are more, abort. */ 1874 + if (vma->vm_end < end) 1875 + return -EFAULT; 1876 + } 1877 1877 1878 1878 res_vma = check_prep_vma(vrm); 1879 1879 if (!res_vma) ··· 1888 1876 return res_vma; 1889 1877 1890 1878 if (!seen_vma) { 1891 - VM_WARN_ON_ONCE(allowed && res_vma != new_addr); 1879 + VM_WARN_ON_ONCE(multi_allowed && res_vma != new_addr); 1892 1880 res = res_vma; 1893 1881 } 1894 1882
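The remap_move() rework above tightens the multi-VMA gate: a VMA that cannot take part in a multi-VMA move is acceptable only when it is both the first VMA seen and covers the remainder of the range. The decision table is easy to state as a standalone predicate (illustrative, not the kernel function):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Sketch of the tightened gate: reject a non-multi-capable VMA unless
 * it is the first and only VMA in the requested range. */
int multi_vma_gate(bool multi_allowed, bool seen_vma, bool covers_end)
{
	if (!multi_allowed) {
		if (seen_vma)		/* not the first VMA: abort */
			return -EFAULT;
		if (!covers_end)	/* first, but more VMAs follow: abort */
			return -EFAULT;
	}
	return 0;
}
```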
+2 -2
mm/numa_emulation.c
··· 73 73 } 74 74 75 75 printk(KERN_INFO "Faking node %d at [mem %#018Lx-%#018Lx] (%LuMB)\n", 76 - nid, eb->start, eb->end - 1, (eb->end - eb->start) >> 20); 76 + nid, eb->start, eb->end - 1, (eb->end - eb->start) / SZ_1M); 77 77 return 0; 78 78 } 79 79 ··· 264 264 min_size = ALIGN(max(min_size, FAKE_NODE_MIN_SIZE), FAKE_NODE_MIN_SIZE); 265 265 if (size < min_size) { 266 266 pr_err("Fake node size %LuMB too small, increasing to %LuMB\n", 267 - size >> 20, min_size >> 20); 267 + size / SZ_1M, min_size / SZ_1M); 268 268 size = min_size; 269 269 } 270 270 size = ALIGN_DOWN(size, FAKE_NODE_MIN_SIZE);
+3 -3
mm/numa_memblks.c
··· 76 76 for (j = 0; j < cnt; j++) 77 77 numa_distance[i * cnt + j] = i == j ? 78 78 LOCAL_DISTANCE : REMOTE_DISTANCE; 79 - printk(KERN_DEBUG "NUMA: Initialized distance table, cnt=%d\n", cnt); 79 + pr_debug("NUMA: Initialized distance table, cnt=%d\n", cnt); 80 80 81 81 return 0; 82 82 } ··· 427 427 unsigned long pfn_align = node_map_pfn_alignment(); 428 428 429 429 if (pfn_align && pfn_align < PAGES_PER_SECTION) { 430 - unsigned long node_align_mb = PFN_PHYS(pfn_align) >> 20; 430 + unsigned long node_align_mb = PFN_PHYS(pfn_align) / SZ_1M; 431 431 432 - unsigned long sect_align_mb = PFN_PHYS(PAGES_PER_SECTION) >> 20; 432 + unsigned long sect_align_mb = PFN_PHYS(PAGES_PER_SECTION) / SZ_1M; 433 433 434 434 pr_warn("Node alignment %luMB < min %luMB, rejecting NUMA config\n", 435 435 node_align_mb, sect_align_mb);
+2 -2
mm/vmscan.c
··· 5772 5772 if (sysfs_create_group(mm_kobj, &lru_gen_attr_group)) 5773 5773 pr_err("lru_gen: failed to create sysfs group\n"); 5774 5774 5775 - debugfs_create_file_aux_num("lru_gen", 0644, NULL, NULL, 1, 5775 + debugfs_create_file_aux_num("lru_gen", 0644, NULL, NULL, false, 5776 5776 &lru_gen_rw_fops); 5777 - debugfs_create_file_aux_num("lru_gen_full", 0444, NULL, NULL, 0, 5777 + debugfs_create_file_aux_num("lru_gen_full", 0444, NULL, NULL, true, 5778 5778 &lru_gen_ro_fops); 5779 5779 5780 5780 return 0;
+10
mm/zsmalloc.c
··· 2246 2246 2247 2247 static int __init zs_init(void) 2248 2248 { 2249 + int rc __maybe_unused; 2250 + 2249 2251 #ifdef CONFIG_ZPOOL 2250 2252 zpool_register_driver(&zs_zpool_driver); 2253 + #endif 2254 + #ifdef CONFIG_COMPACTION 2255 + rc = set_movable_ops(&zsmalloc_mops, PGTY_zsmalloc); 2256 + if (rc) 2257 + return rc; 2251 2258 #endif 2252 2259 zs_stat_init(); 2253 2260 return 0; ··· 2264 2257 { 2265 2258 #ifdef CONFIG_ZPOOL 2266 2259 zpool_unregister_driver(&zs_zpool_driver); 2260 + #endif 2261 + #ifdef CONFIG_COMPACTION 2262 + set_movable_ops(NULL, PGTY_zsmalloc); 2267 2263 #endif 2268 2264 zs_stat_exit(); 2269 2265 }
+12 -3
net/atm/common.c
··· 635 635 636 636 skb->dev = NULL; /* for paths shared with net_device interfaces */ 637 637 if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) { 638 - atm_return_tx(vcc, skb); 639 - kfree_skb(skb); 640 638 error = -EFAULT; 641 - goto out; 639 + goto free_skb; 642 640 } 643 641 if (eff != size) 644 642 memset(skb->data + size, 0, eff-size); 643 + 644 + if (vcc->dev->ops->pre_send) { 645 + error = vcc->dev->ops->pre_send(vcc, skb); 646 + if (error) 647 + goto free_skb; 648 + } 649 + 645 650 error = vcc->dev->ops->send(vcc, skb); 646 651 error = error ? error : size; 647 652 out: 648 653 release_sock(sk); 649 654 return error; 655 + free_skb: 656 + atm_return_tx(vcc, skb); 657 + kfree_skb(skb); 658 + goto out; 650 659 } 651 660 652 661 __poll_t vcc_poll(struct file *file, struct socket *sock, poll_table *wait)
+41 -17
net/bluetooth/hci_conn.c
··· 149 149 150 150 hci_chan_list_flush(conn); 151 151 152 - hci_conn_hash_del(hdev, conn); 153 - 154 152 if (HCI_CONN_HANDLE_UNSET(conn->handle)) 155 153 ida_free(&hdev->unset_handle_ida, conn->handle); 156 154 ··· 1150 1152 disable_delayed_work_sync(&conn->auto_accept_work); 1151 1153 disable_delayed_work_sync(&conn->idle_work); 1152 1154 1153 - if (conn->type == ACL_LINK) { 1154 - /* Unacked frames */ 1155 - hdev->acl_cnt += conn->sent; 1156 - } else if (conn->type == LE_LINK) { 1157 - cancel_delayed_work(&conn->le_conn_timeout); 1155 + /* Remove the connection from the list so unacked logic can detect when 1156 + * a certain pool is not being utilized. 1157 + */ 1158 + hci_conn_hash_del(hdev, conn); 1158 1159 1159 - if (hdev->le_pkts) 1160 + /* Handle unacked frames: 1161 + * 1162 + * - In case there are no connections, or if restoring the buffers 1163 + * considered in transit would overflow, restore all buffers to the 1164 + * pool. 1165 + * - Otherwise restore just the buffers considered in transit for the 1166 + * hci_conn. 1167 + */ 1168 + switch (conn->type) { 1169 + case ACL_LINK: 1170 + if (!hci_conn_num(hdev, ACL_LINK) || 1171 + hdev->acl_cnt + conn->sent > hdev->acl_pkts) 1172 + hdev->acl_cnt = hdev->acl_pkts; 1161 1173 else 1162 1174 hdev->acl_cnt += conn->sent; 1163 - } else { 1164 - /* Unacked ISO frames */ 1165 - if (conn->type == CIS_LINK || 1166 - conn->type == BIS_LINK || 1167 - conn->type == PA_LINK) { 1168 - if (hdev->iso_pkts) 1169 - hdev->iso_cnt += conn->sent; 1170 - else if (hdev->le_pkts) 1175 + break; 1176 + case LE_LINK: 1177 + cancel_delayed_work(&conn->le_conn_timeout); 1178 + 1179 + if (hdev->le_pkts) { 1180 + if (!hci_conn_num(hdev, LE_LINK) || 1181 + hdev->le_cnt + conn->sent > hdev->le_pkts) 1182 + hdev->le_cnt = hdev->le_pkts; 1183 + else 1171 1184 hdev->le_cnt += conn->sent; 1185 + } else { 1186 + if ((!hci_conn_num(hdev, LE_LINK) && 1187 + !hci_conn_num(hdev, ACL_LINK)) || 1188 + hdev->acl_cnt
+ conn->sent > hdev->acl_pkts) 1189 + hdev->acl_cnt = hdev->acl_pkts; 1172 1190 else 1173 1191 hdev->acl_cnt += conn->sent; 1174 1192 } 1193 + break; 1194 + case CIS_LINK: 1195 + case BIS_LINK: 1196 + case PA_LINK: 1197 + if (!hci_iso_count(hdev) || 1198 + hdev->iso_cnt + conn->sent > hdev->iso_pkts) 1199 + hdev->iso_cnt = hdev->iso_pkts; 1200 + else 1201 + hdev->iso_cnt += conn->sent; 1202 + break; 1175 1203 } 1176 1204 1177 1205 skb_queue_purge(&conn->data_q);
+23 -2
net/bluetooth/hci_event.c
··· 2703 2703 if (!conn) 2704 2704 goto unlock; 2705 2705 2706 - if (status) { 2706 + if (status && status != HCI_ERROR_UNKNOWN_CONN_ID) { 2707 2707 mgmt_disconnect_failed(hdev, &conn->dst, conn->type, 2708 2708 conn->dst_type, status); 2709 2709 ··· 2717 2717 2718 2718 goto done; 2719 2719 } 2720 + 2721 + /* During suspend, mark connection as closed immediately 2722 + * since we might not receive HCI_EV_DISCONN_COMPLETE. 2723 + */ 2724 + if (hdev->suspended) 2725 + conn->state = BT_CLOSED; 2720 2726 2721 2727 mgmt_conn = test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags); 2722 2728 ··· 4404 4398 if (!conn) 4405 4399 continue; 4406 4400 4407 - conn->sent -= count; 4401 + /* Check that there really are enough packets outstanding before 4402 + * attempting to decrease the sent counter, otherwise it could 4403 + * underflow. 4404 + */ 4405 + if (conn->sent >= count) { 4406 + conn->sent -= count; 4407 + } else { 4408 + bt_dev_warn(hdev, "hcon %p sent %u < count %u", 4409 + conn, conn->sent, count); 4410 + conn->sent = 0; 4411 + } 4408 4412 4409 4413 for (i = 0; i < count; ++i) 4410 4414 hci_conn_tx_dequeue(conn); ··· 7024 7008 { 7025 7009 struct hci_evt_le_big_sync_lost *ev = data; 7026 7010 struct hci_conn *bis, *conn; 7011 + bool mgmt_conn; 7027 7012 7028 7013 bt_dev_dbg(hdev, "big handle 0x%2.2x", ev->handle); 7029 7014 ··· 7043 7026 while ((bis = hci_conn_hash_lookup_big_state(hdev, ev->handle, 7044 7027 BT_CONNECTED, 7045 7028 HCI_ROLE_SLAVE))) { 7029 + mgmt_conn = test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &bis->flags); 7030 + mgmt_device_disconnected(hdev, &bis->dst, bis->type, bis->dst_type, 7031 + ev->reason, mgmt_conn); 7032 + 7046 7033 clear_bit(HCI_CONN_BIG_SYNC, &bis->flags); 7047 7034 hci_disconn_cfm(bis, ev->reason); 7048 7035 hci_conn_del(bis);
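The `conn->sent` hunk above replaces a blind `conn->sent -= count` with a guarded decrement, so a controller reporting more completed packets than were sent cannot wrap the unsigned counter. A minimal userspace sketch of that guard (the `clamped_sub` helper name is illustrative, not from the patch):

```c
#include <assert.h>

/* Decrement `sent` by `count`, but clamp at zero instead of letting the
 * unsigned counter wrap, mirroring the guard added around conn->sent.
 * A real driver would also log a warning on the clamped path. */
static unsigned int clamped_sub(unsigned int sent, unsigned int count)
{
	if (sent >= count)
		return sent - count;
	return 0;
}
```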
+3 -3
net/bluetooth/hci_sync.c
··· 3481 3481 return hci_write_scan_enable_sync(hdev, scan); 3482 3482 } 3483 3483 3484 - int hci_update_name_sync(struct hci_dev *hdev) 3484 + int hci_update_name_sync(struct hci_dev *hdev, const u8 *name) 3485 3485 { 3486 3486 struct hci_cp_write_local_name cp; 3487 3487 3488 3488 memset(&cp, 0, sizeof(cp)); 3489 3489 3490 - memcpy(cp.name, hdev->dev_name, sizeof(cp.name)); 3490 + memcpy(cp.name, name, sizeof(cp.name)); 3491 3491 3492 3492 return __hci_cmd_sync_status(hdev, HCI_OP_WRITE_LOCAL_NAME, 3493 3493 sizeof(cp), &cp, ··· 3540 3540 hci_write_fast_connectable_sync(hdev, false); 3541 3541 hci_update_scan_sync(hdev); 3542 3542 hci_update_class_sync(hdev); 3543 - hci_update_name_sync(hdev); 3543 + hci_update_name_sync(hdev, hdev->dev_name); 3544 3544 hci_update_eir_sync(hdev); 3545 3545 } 3546 3546
+7 -2
net/bluetooth/mgmt.c
··· 3892 3892 3893 3893 static int set_name_sync(struct hci_dev *hdev, void *data) 3894 3894 { 3895 + struct mgmt_pending_cmd *cmd = data; 3896 + struct mgmt_cp_set_local_name *cp = cmd->param; 3897 + 3895 3898 if (lmp_bredr_capable(hdev)) { 3896 - hci_update_name_sync(hdev); 3899 + hci_update_name_sync(hdev, cp->name); 3897 3900 hci_update_eir_sync(hdev); 3898 3901 } 3899 3902 ··· 9708 9705 if (!mgmt_connected) 9709 9706 return; 9710 9707 9711 - if (link_type != ACL_LINK && link_type != LE_LINK) 9708 + if (link_type != ACL_LINK && 9709 + link_type != LE_LINK && 9710 + link_type != BIS_LINK) 9712 9711 return; 9713 9712 9714 9713 bacpy(&ev.addr.bdaddr, bdaddr);
+14
net/core/datagram.c
··· 618 618 } 619 619 EXPORT_SYMBOL(skb_copy_datagram_from_iter); 620 620 621 + int skb_copy_datagram_from_iter_full(struct sk_buff *skb, int offset, 622 + struct iov_iter *from, int len) 623 + { 624 + struct iov_iter_state state; 625 + int ret; 626 + 627 + iov_iter_save_state(from, &state); 628 + ret = skb_copy_datagram_from_iter(skb, offset, from, len); 629 + if (ret) 630 + iov_iter_restore(from, &state); 631 + return ret; 632 + } 633 + EXPORT_SYMBOL(skb_copy_datagram_from_iter_full); 634 + 621 635 int zerocopy_fill_skb_from_iter(struct sk_buff *skb, 622 636 struct iov_iter *from, size_t length) 623 637 {
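The new `skb_copy_datagram_from_iter_full()` wrapper gives callers all-or-nothing semantics: it saves the iterator state before the copy and restores it on failure, so a failed copy never leaves the iterator partially consumed. A toy sketch of the same save/attempt/restore pattern (the `cursor` type and helper names are stand-ins, not the kernel iov_iter API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy cursor over a buffer, standing in for an iov_iter. */
struct cursor {
	const char *buf;
	size_t pos, len;
};

/* Fallible copy; in real code this may consume input even on failure. */
static int copy_from_cursor(char *dst, struct cursor *c, size_t n)
{
	if (c->len - c->pos < n)
		return -1;
	memcpy(dst, c->buf + c->pos, n);
	c->pos += n;
	return 0;
}

/* All-or-nothing variant: save the cursor, attempt the copy, restore on
 * failure -- like iov_iter_save_state()/iov_iter_restore() in the hunk. */
static int copy_from_cursor_full(char *dst, struct cursor *c, size_t n)
{
	struct cursor saved = *c;
	int ret = copy_from_cursor(dst, c, n);

	if (ret)
		*c = saved;
	return ret;
}
```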
+4 -2
net/core/page_pool.c
··· 287 287 } 288 288 289 289 if (pool->mp_ops) { 290 - if (!pool->dma_map || !pool->dma_sync) 291 - return -EOPNOTSUPP; 290 + if (!pool->dma_map || !pool->dma_sync) { 291 + err = -EOPNOTSUPP; 292 + goto free_ptr_ring; 293 + } 292 294 293 295 if (WARN_ON(!is_kernel_rodata((unsigned long)pool->mp_ops))) { 294 296 err = -EFAULT;
+7 -3
net/ipv4/route.c
··· 2576 2576 !netif_is_l3_master(dev_out)) 2577 2577 return ERR_PTR(-EINVAL); 2578 2578 2579 - if (ipv4_is_lbcast(fl4->daddr)) 2579 + if (ipv4_is_lbcast(fl4->daddr)) { 2580 2580 type = RTN_BROADCAST; 2581 - else if (ipv4_is_multicast(fl4->daddr)) 2581 + 2582 + /* reset fi to prevent gateway resolution */ 2583 + fi = NULL; 2584 + } else if (ipv4_is_multicast(fl4->daddr)) { 2582 2585 type = RTN_MULTICAST; 2583 - else if (ipv4_is_zeronet(fl4->daddr)) 2586 + } else if (ipv4_is_zeronet(fl4->daddr)) { 2584 2587 return ERR_PTR(-EINVAL); 2588 + } 2585 2589 2586 2590 if (dev_out->flags & IFF_LOOPBACK) 2587 2591 flags |= RTCF_LOCAL;
+8 -17
net/l2tp/l2tp_ppp.c
··· 129 129 130 130 static const struct proto_ops pppol2tp_ops; 131 131 132 - /* Retrieves the pppol2tp socket associated to a session. 133 - * A reference is held on the returned socket, so this function must be paired 134 - * with sock_put(). 135 - */ 132 + /* Retrieves the pppol2tp socket associated to a session. */ 136 133 static struct sock *pppol2tp_session_get_sock(struct l2tp_session *session) 137 134 { 138 135 struct pppol2tp_session *ps = l2tp_session_priv(session); 139 - struct sock *sk; 140 136 141 - rcu_read_lock(); 142 - sk = rcu_dereference(ps->sk); 143 - if (sk) 144 - sock_hold(sk); 145 - rcu_read_unlock(); 146 - 147 - return sk; 137 + return rcu_dereference(ps->sk); 148 138 } 149 139 150 140 /* Helpers to obtain tunnel/session contexts from sockets. ··· 196 206 197 207 static void pppol2tp_recv(struct l2tp_session *session, struct sk_buff *skb, int data_len) 198 208 { 199 - struct pppol2tp_session *ps = l2tp_session_priv(session); 200 - struct sock *sk = NULL; 209 + struct sock *sk; 201 210 202 211 /* If the socket is bound, send it in to PPP's input queue. Otherwise 203 212 * queue it on the session socket. 
204 213 */ 205 214 rcu_read_lock(); 206 - sk = rcu_dereference(ps->sk); 215 + sk = pppol2tp_session_get_sock(session); 207 216 if (!sk) 208 217 goto no_sock; 209 218 ··· 499 510 struct l2tp_session *session = arg; 500 511 struct sock *sk; 501 512 513 + rcu_read_lock(); 502 514 sk = pppol2tp_session_get_sock(session); 503 515 if (sk) { 504 516 struct pppox_sock *po = pppox_sk(sk); 505 517 506 518 seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan)); 507 - sock_put(sk); 508 519 } 520 + rcu_read_unlock(); 509 521 } 510 522 511 523 static void pppol2tp_session_init(struct l2tp_session *session) ··· 1520 1530 port = ntohs(inet->inet_sport); 1521 1531 } 1522 1532 1533 + rcu_read_lock(); 1523 1534 sk = pppol2tp_session_get_sock(session); 1524 1535 if (sk) { 1525 1536 state = sk->sk_state; ··· 1556 1565 struct pppox_sock *po = pppox_sk(sk); 1557 1566 1558 1567 seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan)); 1559 - sock_put(sk); 1560 1568 } 1569 + rcu_read_unlock(); 1561 1570 } 1562 1571 1563 1572 static int pppol2tp_seq_show(struct seq_file *m, void *v)
+7 -6
net/rose/af_rose.c
··· 170 170 171 171 if (rose->neighbour == neigh) { 172 172 rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0); 173 - rose->neighbour->use--; 173 + rose_neigh_put(rose->neighbour); 174 174 rose->neighbour = NULL; 175 175 } 176 176 } ··· 212 212 if (rose->device == dev) { 213 213 rose_disconnect(sk, ENETUNREACH, ROSE_OUT_OF_ORDER, 0); 214 214 if (rose->neighbour) 215 - rose->neighbour->use--; 215 + rose_neigh_put(rose->neighbour); 216 216 netdev_put(rose->device, &rose->dev_tracker); 217 217 rose->device = NULL; 218 218 } ··· 655 655 break; 656 656 657 657 case ROSE_STATE_2: 658 - rose->neighbour->use--; 658 + rose_neigh_put(rose->neighbour); 659 659 release_sock(sk); 660 660 rose_disconnect(sk, 0, -1, -1); 661 661 lock_sock(sk); ··· 823 823 rose->lci = rose_new_lci(rose->neighbour); 824 824 if (!rose->lci) { 825 825 err = -ENETUNREACH; 826 + rose_neigh_put(rose->neighbour); 826 827 goto out_release; 827 828 } 828 829 ··· 835 834 dev = rose_dev_first(); 836 835 if (!dev) { 837 836 err = -ENETUNREACH; 837 + rose_neigh_put(rose->neighbour); 838 838 goto out_release; 839 839 } 840 840 841 841 user = ax25_findbyuid(current_euid()); 842 842 if (!user) { 843 843 err = -EINVAL; 844 + rose_neigh_put(rose->neighbour); 844 845 dev_put(dev); 845 846 goto out_release; 846 847 } ··· 876 873 sk->sk_state = TCP_SYN_SENT; 877 874 878 875 rose->state = ROSE_STATE_1; 879 - 880 - rose->neighbour->use++; 881 876 882 877 rose_write_internal(sk, ROSE_CALL_REQUEST); 883 878 rose_start_heartbeat(sk); ··· 1078 1077 GFP_ATOMIC); 1079 1078 make_rose->facilities = facilities; 1080 1079 1081 - make_rose->neighbour->use++; 1080 + rose_neigh_hold(make_rose->neighbour); 1082 1081 1083 1082 if (rose_sk(sk)->defer) { 1084 1083 make_rose->state = ROSE_STATE_5;
+6 -6
net/rose/rose_in.c
··· 56 56 case ROSE_CLEAR_REQUEST: 57 57 rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION); 58 58 rose_disconnect(sk, ECONNREFUSED, skb->data[3], skb->data[4]); 59 - rose->neighbour->use--; 59 + rose_neigh_put(rose->neighbour); 60 60 break; 61 61 62 62 default: ··· 79 79 case ROSE_CLEAR_REQUEST: 80 80 rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION); 81 81 rose_disconnect(sk, 0, skb->data[3], skb->data[4]); 82 - rose->neighbour->use--; 82 + rose_neigh_put(rose->neighbour); 83 83 break; 84 84 85 85 case ROSE_CLEAR_CONFIRMATION: 86 86 rose_disconnect(sk, 0, -1, -1); 87 - rose->neighbour->use--; 87 + rose_neigh_put(rose->neighbour); 88 88 break; 89 89 90 90 default: ··· 121 121 case ROSE_CLEAR_REQUEST: 122 122 rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION); 123 123 rose_disconnect(sk, 0, skb->data[3], skb->data[4]); 124 - rose->neighbour->use--; 124 + rose_neigh_put(rose->neighbour); 125 125 break; 126 126 127 127 case ROSE_RR: ··· 234 234 case ROSE_CLEAR_REQUEST: 235 235 rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION); 236 236 rose_disconnect(sk, 0, skb->data[3], skb->data[4]); 237 - rose->neighbour->use--; 237 + rose_neigh_put(rose->neighbour); 238 238 break; 239 239 240 240 default: ··· 254 254 if (frametype == ROSE_CLEAR_REQUEST) { 255 255 rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION); 256 256 rose_disconnect(sk, 0, skb->data[3], skb->data[4]); 257 - rose_sk(sk)->neighbour->use--; 257 + rose_neigh_put(rose_sk(sk)->neighbour); 258 258 } 259 259 260 260 return 0;
+38 -24
net/rose/rose_route.c
··· 93 93 rose_neigh->ax25 = NULL; 94 94 rose_neigh->dev = dev; 95 95 rose_neigh->count = 0; 96 - rose_neigh->use = 0; 97 96 rose_neigh->dce_mode = 0; 98 97 rose_neigh->loopback = 0; 99 98 rose_neigh->number = rose_neigh_no++; 100 99 rose_neigh->restarted = 0; 100 + refcount_set(&rose_neigh->use, 1); 101 101 102 102 skb_queue_head_init(&rose_neigh->queue); 103 103 ··· 178 178 } 179 179 } 180 180 rose_neigh->count++; 181 + rose_neigh_hold(rose_neigh); 181 182 182 183 goto out; 183 184 } ··· 188 187 rose_node->neighbour[rose_node->count] = rose_neigh; 189 188 rose_node->count++; 190 189 rose_neigh->count++; 190 + rose_neigh_hold(rose_neigh); 191 191 } 192 192 193 193 out: ··· 236 234 237 235 if ((s = rose_neigh_list) == rose_neigh) { 238 236 rose_neigh_list = rose_neigh->next; 239 - if (rose_neigh->ax25) 240 - ax25_cb_put(rose_neigh->ax25); 241 - kfree(rose_neigh->digipeat); 242 - kfree(rose_neigh); 243 237 return; 244 238 } 245 239 246 240 while (s != NULL && s->next != NULL) { 247 241 if (s->next == rose_neigh) { 248 242 s->next = rose_neigh->next; 249 - if (rose_neigh->ax25) 250 - ax25_cb_put(rose_neigh->ax25); 251 - kfree(rose_neigh->digipeat); 252 - kfree(rose_neigh); 253 243 return; 254 244 } 255 245 ··· 257 263 struct rose_route *s; 258 264 259 265 if (rose_route->neigh1 != NULL) 260 - rose_route->neigh1->use--; 266 + rose_neigh_put(rose_route->neigh1); 261 267 262 268 if (rose_route->neigh2 != NULL) 263 - rose_route->neigh2->use--; 269 + rose_neigh_put(rose_route->neigh2); 264 270 265 271 if ((s = rose_route_list) == rose_route) { 266 272 rose_route_list = rose_route->next; ··· 324 330 for (i = 0; i < rose_node->count; i++) { 325 331 if (rose_node->neighbour[i] == rose_neigh) { 326 332 rose_neigh->count--; 333 + rose_neigh_put(rose_neigh); 327 334 328 - if (rose_neigh->count == 0 && rose_neigh->use == 0) 335 + if (rose_neigh->count == 0) { 329 336 rose_remove_neigh(rose_neigh); 337 + rose_neigh_put(rose_neigh); 338 + } 330 339 331 340 rose_node->count--; 332 
341 ··· 378 381 sn->ax25 = NULL; 379 382 sn->dev = NULL; 380 383 sn->count = 0; 381 - sn->use = 0; 382 384 sn->dce_mode = 1; 383 385 sn->loopback = 1; 384 386 sn->number = rose_neigh_no++; 385 387 sn->restarted = 1; 388 + refcount_set(&sn->use, 1); 386 389 387 390 skb_queue_head_init(&sn->queue); 388 391 ··· 433 436 rose_node_list = rose_node; 434 437 435 438 rose_loopback_neigh->count++; 439 + rose_neigh_hold(rose_loopback_neigh); 436 440 437 441 out: 438 442 spin_unlock_bh(&rose_node_list_lock); ··· 465 467 rose_remove_node(rose_node); 466 468 467 469 rose_loopback_neigh->count--; 470 + rose_neigh_put(rose_loopback_neigh); 468 471 469 472 out: 470 473 spin_unlock_bh(&rose_node_list_lock); ··· 505 506 memmove(&t->neighbour[i], &t->neighbour[i + 1], 506 507 sizeof(t->neighbour[0]) * 507 508 (t->count - i)); 509 + rose_neigh_put(s); 508 510 } 509 511 510 512 if (t->count <= 0) ··· 513 513 } 514 514 515 515 rose_remove_neigh(s); 516 + rose_neigh_put(s); 516 517 } 517 518 spin_unlock_bh(&rose_neigh_list_lock); 518 519 spin_unlock_bh(&rose_node_list_lock); ··· 549 548 { 550 549 struct rose_neigh *s, *rose_neigh; 551 550 struct rose_node *t, *rose_node; 551 + int i; 552 552 553 553 spin_lock_bh(&rose_node_list_lock); 554 554 spin_lock_bh(&rose_neigh_list_lock); ··· 560 558 while (rose_node != NULL) { 561 559 t = rose_node; 562 560 rose_node = rose_node->next; 563 - if (!t->loopback) 561 + 562 + if (!t->loopback) { 563 + for (i = 0; i < t->count; i++) 564 + rose_neigh_put(t->neighbour[i]); 564 565 rose_remove_node(t); 566 + } 565 567 } 566 568 567 569 while (rose_neigh != NULL) { 568 570 s = rose_neigh; 569 571 rose_neigh = rose_neigh->next; 570 572 571 - if (s->use == 0 && !s->loopback) { 572 - s->count = 0; 573 + if (!s->loopback) { 573 574 rose_remove_neigh(s); 575 + rose_neigh_put(s); 574 576 } 575 577 } 576 578 ··· 690 684 for (i = 0; i < node->count; i++) { 691 685 if (node->neighbour[i]->restarted) { 692 686 res = node->neighbour[i]; 687 + 
rose_neigh_hold(node->neighbour[i]); 693 688 goto out; 694 689 } 695 690 } ··· 702 695 for (i = 0; i < node->count; i++) { 703 696 if (!rose_ftimer_running(node->neighbour[i])) { 704 697 res = node->neighbour[i]; 698 + rose_neigh_hold(node->neighbour[i]); 705 699 goto out; 706 700 } 707 701 failed = 1; ··· 792 784 } 793 785 794 786 if (rose_route->neigh1 == rose_neigh) { 795 - rose_route->neigh1->use--; 787 + rose_neigh_put(rose_route->neigh1); 796 788 rose_route->neigh1 = NULL; 797 789 rose_transmit_clear_request(rose_route->neigh2, rose_route->lci2, ROSE_OUT_OF_ORDER, 0); 798 790 } 799 791 800 792 if (rose_route->neigh2 == rose_neigh) { 801 - rose_route->neigh2->use--; 793 + rose_neigh_put(rose_route->neigh2); 802 794 rose_route->neigh2 = NULL; 803 795 rose_transmit_clear_request(rose_route->neigh1, rose_route->lci1, ROSE_OUT_OF_ORDER, 0); 804 796 } ··· 927 919 rose_clear_queues(sk); 928 920 rose->cause = ROSE_NETWORK_CONGESTION; 929 921 rose->diagnostic = 0; 930 - rose->neighbour->use--; 922 + rose_neigh_put(rose->neighbour); 931 923 rose->neighbour = NULL; 932 924 rose->lci = 0; 933 925 rose->state = ROSE_STATE_0; ··· 1052 1044 1053 1045 if ((new_lci = rose_new_lci(new_neigh)) == 0) { 1054 1046 rose_transmit_clear_request(rose_neigh, lci, ROSE_NETWORK_CONGESTION, 71); 1055 - goto out; 1047 + goto put_neigh; 1056 1048 } 1057 1049 1058 1050 if ((rose_route = kmalloc(sizeof(*rose_route), GFP_ATOMIC)) == NULL) { 1059 1051 rose_transmit_clear_request(rose_neigh, lci, ROSE_NETWORK_CONGESTION, 120); 1060 - goto out; 1052 + goto put_neigh; 1061 1053 } 1062 1054 1063 1055 rose_route->lci1 = lci; ··· 1070 1062 rose_route->lci2 = new_lci; 1071 1063 rose_route->neigh2 = new_neigh; 1072 1064 1073 - rose_route->neigh1->use++; 1074 - rose_route->neigh2->use++; 1065 + rose_neigh_hold(rose_route->neigh1); 1066 + rose_neigh_hold(rose_route->neigh2); 1075 1067 1076 1068 rose_route->next = rose_route_list; 1077 1069 rose_route_list = rose_route; ··· 1083 1075 
rose_transmit_link(skb, rose_route->neigh2); 1084 1076 res = 1; 1085 1077 1078 + put_neigh: 1079 + rose_neigh_put(new_neigh); 1086 1080 out: 1087 1081 spin_unlock_bh(&rose_route_list_lock); 1088 1082 spin_unlock_bh(&rose_neigh_list_lock); ··· 1200 1190 (rose_neigh->loopback) ? "RSLOOP-0" : ax2asc(buf, &rose_neigh->callsign), 1201 1191 rose_neigh->dev ? rose_neigh->dev->name : "???", 1202 1192 rose_neigh->count, 1203 - rose_neigh->use, 1193 + refcount_read(&rose_neigh->use) - rose_neigh->count - 1, 1204 1194 (rose_neigh->dce_mode) ? "DCE" : "DTE", 1205 1195 (rose_neigh->restarted) ? "yes" : "no", 1206 1196 ax25_display_timer(&rose_neigh->t0timer) / HZ, ··· 1305 1295 struct rose_neigh *s, *rose_neigh = rose_neigh_list; 1306 1296 struct rose_node *t, *rose_node = rose_node_list; 1307 1297 struct rose_route *u, *rose_route = rose_route_list; 1298 + int i; 1308 1299 1309 1300 while (rose_neigh != NULL) { 1310 1301 s = rose_neigh; 1311 1302 rose_neigh = rose_neigh->next; 1312 1303 1313 1304 rose_remove_neigh(s); 1305 + rose_neigh_put(s); 1314 1306 } 1315 1307 1316 1308 while (rose_node != NULL) { 1317 1309 t = rose_node; 1318 1310 rose_node = rose_node->next; 1319 1311 1312 + for (i = 0; i < t->count; i++) 1313 + rose_neigh_put(t->neighbour[i]); 1320 1314 rose_remove_node(t); 1321 1315 } 1322 1316
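The rose hunks above convert the bare `use` counter into a proper reference count with `rose_neigh_hold()`/`rose_neigh_put()`: the neighbour starts at one reference, every holder takes its own, and the object is freed only when the last reference drops. A userspace sketch of the pattern with a plain integer (a stand-in for `refcount_t`; all names here are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct neigh {
	int use;	/* refcount_t in the real patch */
	int *freed;	/* test hook: set when the object is released */
};

static struct neigh *neigh_alloc(int *freed)
{
	struct neigh *n = malloc(sizeof(*n));

	n->use = 1;	/* like refcount_set(&n->use, 1) */
	n->freed = freed;
	*freed = 0;
	return n;
}

static void neigh_hold(struct neigh *n)
{
	n->use++;	/* like rose_neigh_hold() */
}

static void neigh_put(struct neigh *n)
{
	/* like rose_neigh_put(): free on the final reference drop */
	if (--n->use == 0) {
		*n->freed = 1;
		free(n);
	}
}
```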
+1 -1
net/rose/rose_timer.c
··· 180 180 break; 181 181 182 182 case ROSE_STATE_2: /* T3 */ 183 - rose->neighbour->use--; 183 + rose_neigh_put(rose->neighbour); 184 184 rose_disconnect(sk, ETIMEDOUT, -1, -1); 185 185 break; 186 186
+2
net/sctp/ipv6.c
··· 547 547 { 548 548 addr->v6.sin6_family = AF_INET6; 549 549 addr->v6.sin6_port = 0; 550 + addr->v6.sin6_flowinfo = 0; 550 551 addr->v6.sin6_addr = sk->sk_v6_rcv_saddr; 552 + addr->v6.sin6_scope_id = 0; 551 553 } 552 554 553 555 /* Initialize sk->sk_rcv_saddr from sctp_addr. */
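The sctp hunk above adds explicit zeroing of `sin6_flowinfo` and `sin6_scope_id` when filling an address field by field, so stale stack bytes never reach userspace. A miniature illustration of the fix (the struct and helper below are stand-ins, not the kernel types):

```c
#include <assert.h>
#include <string.h>

struct toy_sin6 {
	unsigned short family;
	unsigned short port;
	unsigned int   flowinfo;
	unsigned int   scope_id;
};

/* Fill every member explicitly; before the fix, flowinfo and scope_id
 * were left holding whatever was on the stack. */
static void fill_addr(struct toy_sin6 *a)
{
	a->family = 10;		/* AF_INET6 */
	a->port = 0;
	a->flowinfo = 0;	/* the fix: previously uninitialized */
	a->scope_id = 0;	/* likewise */
}
```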
+5 -3
net/vmw_vsock/virtio_transport_common.c
··· 105 105 size_t len, 106 106 bool zcopy) 107 107 { 108 + struct msghdr *msg = info->msg; 109 + 108 110 if (zcopy) 109 - return __zerocopy_sg_from_iter(info->msg, NULL, skb, 110 - &info->msg->msg_iter, len, NULL); 111 + return __zerocopy_sg_from_iter(msg, NULL, skb, 112 + &msg->msg_iter, len, NULL); 111 113 112 114 virtio_vsock_skb_put(skb, len); 113 - return skb_copy_datagram_from_iter(skb, 0, &info->msg->msg_iter, len); 115 + return skb_copy_datagram_from_iter_full(skb, 0, &msg->msg_iter, len); 114 116 } 115 117 116 118 static void virtio_transport_init_hdr(struct sk_buff *skb,
+18 -12
rust/kernel/alloc/allocator.rs
··· 43 43 /// For more details see [self]. 44 44 pub struct KVmalloc; 45 45 46 - /// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment. 47 - fn aligned_size(new_layout: Layout) -> usize { 48 - // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first. 49 - let layout = new_layout.pad_to_align(); 50 - 51 - // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()` 52 - // which together with the slab guarantees means the `krealloc` will return a properly aligned 53 - // object (see comments in `kmalloc()` for more information). 54 - layout.size() 55 - } 56 - 57 46 /// # Invariants 58 47 /// 59 48 /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`. ··· 77 88 old_layout: Layout, 78 89 flags: Flags, 79 90 ) -> Result<NonNull<[u8]>, AllocError> { 80 - let size = aligned_size(layout); 91 + let size = layout.size(); 81 92 let ptr = match ptr { 82 93 Some(ptr) => { 83 94 if old_layout.size() == 0 { ··· 112 123 } 113 124 } 114 125 126 + impl Kmalloc { 127 + /// Returns a [`Layout`] that makes [`Kmalloc`] fulfill the requested size and alignment of 128 + /// `layout`. 129 + pub fn aligned_layout(layout: Layout) -> Layout { 130 + // Note that `layout.size()` (after padding) is guaranteed to be a multiple of 131 + // `layout.align()` which together with the slab guarantees means that `Kmalloc` will return 132 + // a properly aligned object (see comments in `kmalloc()` for more information). 
133 + layout.pad_to_align() 134 + } 135 + } 136 + 115 137 // SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that 116 138 // - memory remains valid until it is explicitly freed, 117 139 // - passing a pointer to a valid memory allocation is OK, ··· 135 135 old_layout: Layout, 136 136 flags: Flags, 137 137 ) -> Result<NonNull<[u8]>, AllocError> { 138 + let layout = Kmalloc::aligned_layout(layout); 139 + 138 140 // SAFETY: `ReallocFunc::call` has the same safety requirements as `Allocator::realloc`. 139 141 unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags) } 140 142 } ··· 178 176 old_layout: Layout, 179 177 flags: Flags, 180 178 ) -> Result<NonNull<[u8]>, AllocError> { 179 + // `KVmalloc` may use the `Kmalloc` backend, hence we have to enforce a `Kmalloc`- 180 + // compatible layout. 181 + let layout = Kmalloc::aligned_layout(layout); 182 + 181 183 // TODO: Support alignments larger than PAGE_SIZE. 182 184 if layout.align() > bindings::PAGE_SIZE { 183 185 pr_warn!("KVmalloc does not support alignments larger than PAGE_SIZE yet.\n");
+11
rust/kernel/alloc/allocator_test.rs
··· 22 22 pub type Vmalloc = Kmalloc; 23 23 pub type KVmalloc = Kmalloc; 24 24 25 + impl Cmalloc { 26 + /// Returns a [`Layout`] that makes [`Kmalloc`] fulfill the requested size and alignment of 27 + /// `layout`. 28 + pub fn aligned_layout(layout: Layout) -> Layout { 29 + // Note that `layout.size()` (after padding) is guaranteed to be a multiple of 30 + // `layout.align()` which together with the slab guarantees means that `Kmalloc` will return 31 + // a properly aligned object (see comments in `kmalloc()` for more information). 32 + layout.pad_to_align() 33 + } 34 + } 35 + 25 36 extern "C" { 26 37 #[link_name = "aligned_alloc"] 27 38 fn libc_aligned_alloc(align: usize, size: usize) -> *mut crate::ffi::c_void;
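The Rust allocator hunks above centralize the size/alignment padding in `Kmalloc::aligned_layout()`, which is `layout.pad_to_align()`: the size is rounded up to the next multiple of the (power-of-two) alignment, so a slab-style allocator returns properly aligned objects. The same computation in C, for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Round `size` up to the next multiple of `align` (align must be a
 * power of two) -- what Layout::pad_to_align() computes for the size. */
static size_t pad_to_align(size_t size, size_t align)
{
	return (size + align - 1) & ~(align - 1);
}
```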
+184 -24
rust/kernel/device.rs
··· 15 15 16 16 pub mod property; 17 17 18 - /// A reference-counted device. 18 + /// The core representation of a device in the kernel's driver model. 19 19 /// 20 - /// This structure represents the Rust abstraction for a C `struct device`. This implementation 21 - /// abstracts the usage of an already existing C `struct device` within Rust code that we get 22 - /// passed from the C side. 20 + /// This structure represents the Rust abstraction for a C `struct device`. A [`Device`] can either 21 + /// exist as a temporary reference (see also [`Device::from_raw`]), which is only valid within a 22 + /// certain scope, or as an [`ARef<Device>`], owning a dedicated reference count. 23 23 /// 24 - /// An instance of this abstraction can be obtained temporarily or permanent. 24 + /// # Device Types 25 25 /// 26 - /// A temporary one is bound to the lifetime of the C `struct device` pointer used for creation. 27 - /// A permanent instance is always reference-counted and hence not restricted by any lifetime 28 - /// boundaries. 26 + /// A [`Device`] can represent either a bus device or a class device. 29 27 /// 30 - /// For subsystems it is recommended to create a permanent instance to wrap into a subsystem 31 - /// specific device structure (e.g. `pci::Device`). This is useful for passing it to drivers in 32 - /// `T::probe()`, such that a driver can store the `ARef<Device>` (equivalent to storing a 33 - /// `struct device` pointer in a C driver) for arbitrary purposes, e.g. allocating DMA coherent 34 - /// memory. 28 + /// ## Bus Devices 29 + /// 30 + /// A bus device is a [`Device`] that is associated with a physical or virtual bus. Examples of 31 + /// buses include PCI, USB, I2C, and SPI. Devices attached to a bus are registered with a specific 32 + /// bus type, which facilitates matching devices with appropriate drivers based on IDs or other 33 + /// identifying information. Bus devices are visible in sysfs under `/sys/bus/<bus-name>/devices/`. 
34 + /// 35 + /// ## Class Devices 36 + /// 37 + /// A class device is a [`Device`] that is associated with a logical category of functionality 38 + /// rather than a physical bus. Examples of classes include block devices, network interfaces, sound 39 + /// cards, and input devices. Class devices are grouped under a common class and exposed to 40 + /// userspace via entries in `/sys/class/<class-name>/`. 41 + /// 42 + /// # Device Context 43 + /// 44 + /// [`Device`] references are generic over a [`DeviceContext`], which represents the type state of 45 + /// a [`Device`]. 46 + /// 47 + /// As the name indicates, this type state represents the context of the scope the [`Device`] 48 + /// reference is valid in. For instance, the [`Bound`] context guarantees that the [`Device`] is 49 + /// bound to a driver for the entire duration of the existence of a [`Device<Bound>`] reference. 50 + /// 51 + /// Other [`DeviceContext`] types besides [`Bound`] are [`Normal`], [`Core`] and [`CoreInternal`]. 52 + /// 53 + /// Unless selected otherwise [`Device`] defaults to the [`Normal`] [`DeviceContext`], which by 54 + /// itself has no additional requirements. 55 + /// 56 + /// It is always up to the caller of [`Device::from_raw`] to select the correct [`DeviceContext`] 57 + /// type for the corresponding scope the [`Device`] reference is created in. 58 + /// 59 + /// All [`DeviceContext`] types other than [`Normal`] are intended to be used with 60 + /// [bus devices](#bus-devices) only. 61 + /// 62 + /// # Implementing Bus Devices 63 + /// 64 + /// This section provides a guideline to implement bus specific devices, such as [`pci::Device`] or 65 + /// [`platform::Device`]. 66 + /// 67 + /// A bus specific device should be defined as follows. 
68 + /// 69 + /// ```ignore 70 + /// #[repr(transparent)] 71 + /// pub struct Device<Ctx: device::DeviceContext = device::Normal>( 72 + /// Opaque<bindings::bus_device_type>, 73 + /// PhantomData<Ctx>, 74 + /// ); 75 + /// ``` 76 + /// 77 + /// Since devices are reference counted, [`AlwaysRefCounted`] should be implemented for `Device` 78 + /// (i.e. `Device<Normal>`). Note that [`AlwaysRefCounted`] must not be implemented for any other 79 + /// [`DeviceContext`], since all other device context types are only valid within a certain scope. 80 + /// 81 + /// In order to be able to implement the [`DeviceContext`] dereference hierarchy, bus device 82 + /// implementations should call the [`impl_device_context_deref`] macro as shown below. 83 + /// 84 + /// ```ignore 85 + /// // SAFETY: `Device` is a transparent wrapper of a type that doesn't depend on `Device`'s 86 + /// // generic argument. 87 + /// kernel::impl_device_context_deref!(unsafe { Device }); 88 + /// ``` 89 + /// 90 + /// In order to convert from any [`Device<Ctx>`] to [`ARef<Device>`], bus devices can use 91 + /// the following macro call. 92 + /// 93 + /// ```ignore 94 + /// kernel::impl_device_context_into_aref!(Device); 95 + /// ``` 96 + /// 97 + /// Bus devices should also implement the following [`AsRef`] implementation, such that users can 98 + /// easily derive a generic [`Device`] reference. 99 + /// 100 + /// ```ignore 101 + /// impl<Ctx: device::DeviceContext> AsRef<device::Device<Ctx>> for Device<Ctx> { 102 + /// fn as_ref(&self) -> &device::Device<Ctx> { 103 + /// ... 104 + /// } 105 + /// } 106 + /// ``` 107 + /// 108 + /// # Implementing Class Devices 109 + /// 110 + /// Class device implementations require less infrastructure and depend slightly more on the 111 + /// specific subsystem. 112 + /// 113 + /// An example implementation for a class device could look like this. 
114 + /// 115 + /// ```ignore 116 + /// #[repr(C)] 117 + /// pub struct Device<T: class::Driver> { 118 + /// dev: Opaque<bindings::class_device_type>, 119 + /// data: T::Data, 120 + /// } 121 + /// ``` 122 + /// 123 + /// This class device uses the sub-classing pattern to embed the driver's private data within the 124 + /// allocation of the class device. For this to be possible the class device is generic over the 125 + /// class specific `Driver` trait implementation. 126 + /// 127 + /// Just like any device, class devices are reference counted and should hence implement 128 + /// [`AlwaysRefCounted`] for `Device`. 129 + /// 130 + /// Class devices should also implement the following [`AsRef`] implementation, such that users can 131 + /// easily derive a generic [`Device`] reference. 132 + /// 133 + /// ```ignore 134 + /// impl<T: class::Driver> AsRef<device::Device> for Device<T> { 135 + /// fn as_ref(&self) -> &device::Device { 136 + /// ... 137 + /// } 138 + /// } 139 + /// ``` 140 + /// 141 + /// An example for a class device implementation is [`drm::Device`]. 35 142 /// 36 143 /// # Invariants 37 144 /// ··· 149 42 /// 150 43 /// `bindings::device::release` is valid to be called from any thread, hence `ARef<Device>` can be 151 44 /// dropped from any thread. 45 + /// 46 + /// [`AlwaysRefCounted`]: kernel::types::AlwaysRefCounted 47 + /// [`drm::Device`]: kernel::drm::Device 48 + /// [`impl_device_context_deref`]: kernel::impl_device_context_deref 49 + /// [`pci::Device`]: kernel::pci::Device 50 + /// [`platform::Device`]: kernel::platform::Device 152 51 #[repr(transparent)] 153 52 pub struct Device<Ctx: DeviceContext = Normal>(Opaque<bindings::device>, PhantomData<Ctx>); 154 53 ··· 424 311 // synchronization in `struct device`. 425 312 unsafe impl Sync for Device {} 426 313 427 - /// Marker trait for the context of a bus specific device. 314 + /// Marker trait for the context or scope of a bus specific device. 
428 315 /// 429 - /// Some functions of a bus specific device should only be called from a certain context, i.e. bus 430 - /// callbacks, such as `probe()`. 316 + /// [`DeviceContext`] is a marker trait for types representing the context of a bus specific 317 + /// [`Device`]. 431 318 /// 432 - /// This is the marker trait for structures representing the context of a bus specific device. 319 + /// The specific device context types are: [`CoreInternal`], [`Core`], [`Bound`] and [`Normal`]. 320 + /// 321 + /// [`DeviceContext`] types are hierarchical, which means that there is a strict hierarchy that 322 + /// defines which [`DeviceContext`] type can be derived from another. For instance, any 323 + /// [`Device<Core>`] can dereference to a [`Device<Bound>`]. 324 + /// 325 + /// The following enumeration illustrates the dereference hierarchy of [`DeviceContext`] types. 326 + /// 327 + /// - [`CoreInternal`] => [`Core`] => [`Bound`] => [`Normal`] 328 + /// 329 + /// Bus devices can automatically implement the dereference hierarchy by using 330 + /// [`impl_device_context_deref`]. 331 + /// 332 + /// Note that the guarantee for a [`Device`] reference to have a certain [`DeviceContext`] comes 333 + /// from the specific scope the [`Device`] reference is valid in. 334 + /// 335 + /// [`impl_device_context_deref`]: kernel::impl_device_context_deref 433 336 pub trait DeviceContext: private::Sealed {} 434 337 435 - /// The [`Normal`] context is the context of a bus specific device when it is not an argument of 436 - /// any bus callback. 338 + /// The [`Normal`] context is the default [`DeviceContext`] of any [`Device`]. 339 + /// 340 + /// The normal context does not indicate any specific context. Any `Device<Ctx>` is also a valid 341 + /// [`Device<Normal>`]. It is the only [`DeviceContext`] for which it is valid to implement 342 + /// [`AlwaysRefCounted`].
343 + /// 344 + /// [`AlwaysRefCounted`]: kernel::types::AlwaysRefCounted 437 345 pub struct Normal; 438 346 439 - /// The [`Core`] context is the context of a bus specific device when it is supplied as argument of 440 - /// any of the bus callbacks, such as `probe()`. 347 + /// The [`Core`] context is the context of a bus specific device when it appears as argument of 348 + /// any bus specific callback, such as `probe()`. 349 + /// 350 + /// The core context indicates that the [`Device<Core>`] reference's scope is limited to the bus 351 + /// callback it appears in. It is intended to be used for synchronization purposes. Bus device 352 + /// implementations can implement methods for [`Device<Core>`], such that they can only be called 353 + /// from bus callbacks. 441 354 pub struct Core; 442 355 443 - /// Semantically the same as [`Core`] but reserved for internal usage of the corresponding bus 356 + /// Semantically the same as [`Core`], but reserved for internal usage of the corresponding bus 444 357 /// abstraction. 358 + /// 359 + /// The internal core context is intended to be used in exactly the same way as the [`Core`] 360 + /// context, with the difference that this [`DeviceContext`] is internal to the corresponding bus 361 + /// abstraction. 362 + /// 363 + /// This context mainly exists to share generic [`Device`] infrastructure that should only be called 364 + /// from bus callbacks with bus abstractions, but without making them accessible for drivers. 445 365 pub struct CoreInternal; 446 366 447 - /// The [`Bound`] context is the context of a bus specific device reference when it is guaranteed to 448 - /// be bound for the duration of its lifetime. 367 + /// The [`Bound`] context is the [`DeviceContext`] of a bus specific device when it is guaranteed to 368 + /// be bound to a driver. 
369 + /// 370 + /// The bound context indicates that for the entire duration of the lifetime of a [`Device<Bound>`] 371 + /// reference, the [`Device`] is guaranteed to be bound to a driver. 372 + /// 373 + /// Some APIs, such as [`dma::CoherentAllocation`] or [`Devres`] rely on the [`Device`] to be bound, 374 + /// which can be proven with the [`Bound`] device context. 375 + /// 376 + /// Any abstraction that can guarantee a scope where the corresponding bus device is bound, should 377 + /// provide a [`Device<Bound>`] reference to its users for this scope. This allows users to benefit 378 + /// from optimizations for accessing device resources, see also [`Devres::access`]. 379 + /// 380 + /// [`Devres`]: kernel::devres::Devres 381 + /// [`Devres::access`]: kernel::devres::Devres::access 382 + /// [`dma::CoherentAllocation`]: kernel::dma::CoherentAllocation 449 383 pub struct Bound; 450 384 451 385 mod private {
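The `DeviceContext` documentation added above describes a type-state pattern. As a loose userspace sketch (not the kernel crate: `Normal`, `Bound` and `Core` are stand-in marker types here, and a plain `u32` replaces `Opaque<bindings::device>`), the `Core` => `Bound` => `Normal` dereference hierarchy that `impl_device_context_deref!` generates can be written out by hand like this:

```rust
use std::marker::PhantomData;
use std::ops::Deref;

// Stand-in marker types for the kernel's DeviceContext states.
pub struct Normal;
pub struct Bound;
pub struct Core;

pub trait DeviceContext {}
impl DeviceContext for Normal {}
impl DeviceContext for Bound {}
impl DeviceContext for Core {}

// A u32 payload stands in for Opaque<bindings::device>.
#[repr(transparent)]
pub struct Device<Ctx: DeviceContext = Normal>(u32, PhantomData<Ctx>);

impl Device<Core> {
    // In the kernel, a Device<Core> reference would only be handed out
    // inside a bus callback such as probe().
    pub fn new(id: u32) -> Self {
        Device(id, PhantomData)
    }
}

impl<Ctx: DeviceContext> Device<Ctx> {
    pub fn id(&self) -> u32 {
        self.0
    }
}

// The Core => Bound dereference step. The cast is sound because Device is
// #[repr(transparent)] over a type that does not depend on the generic argument.
impl Deref for Device<Core> {
    type Target = Device<Bound>;
    fn deref(&self) -> &Device<Bound> {
        unsafe { &*(self as *const Device<Core> as *const Device<Bound>) }
    }
}

// The Bound => Normal dereference step.
impl Deref for Device<Bound> {
    type Target = Device<Normal>;
    fn deref(&self) -> &Device<Normal> {
        unsafe { &*(self as *const Device<Bound> as *const Device<Normal>) }
    }
}

// An API that requires proof the device is bound, in the spirit of Devres::access.
pub fn requires_bound(dev: &Device<Bound>) -> u32 {
    dev.id()
}
```

With this in place a `&Device<Core>` coerces to `&Device<Bound>` automatically at a call site such as `requires_bound(&dev)`, which is exactly the guarantee the doc text describes: a core-context reference is always usable where a bound-context reference is required.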
+18 -9
rust/kernel/devres.rs
··· 115 115 /// Contains all the fields shared with [`Self::callback`]. 116 116 // TODO: Replace with `UnsafePinned`, once available. 117 117 // 118 - // Subsequently, the `drop_in_place()` in `Devres::drop` and the explicit `Send` and `Sync' 119 - // impls can be removed. 118 + // Subsequently, the `drop_in_place()` in `Devres::drop` and `Devres::new` as well as the 119 + // explicit `Send` and `Sync' impls can be removed. 120 120 #[pin] 121 121 inner: Opaque<Inner<T>>, 122 + _add_action: (), 122 123 } 123 124 124 125 impl<T: Send> Devres<T> { ··· 141 140 dev: dev.into(), 142 141 callback, 143 142 // INVARIANT: `inner` is properly initialized. 144 - inner <- { 143 + inner <- Opaque::pin_init(try_pin_init!(Inner { 144 + devm <- Completion::new(), 145 + revoke <- Completion::new(), 146 + data <- Revocable::new(data), 147 + })), 148 + // TODO: Replace with "initializer code blocks" [1] once available. 149 + // 150 + // [1] https://github.com/Rust-for-Linux/pin-init/pull/69 151 + _add_action: { 145 152 // SAFETY: `this` is a valid pointer to uninitialized memory. 146 153 let inner = unsafe { &raw mut (*this.as_ptr()).inner }; 147 154 ··· 161 152 // live at least as long as the returned `impl PinInit<Self, Error>`. 162 153 to_result(unsafe { 163 154 bindings::devm_add_action(dev.as_raw(), Some(callback), inner.cast()) 164 - })?; 155 + }).inspect_err(|_| { 156 + let inner = Opaque::cast_into(inner); 165 157 166 - Opaque::pin_init(try_pin_init!(Inner { 167 - devm <- Completion::new(), 168 - revoke <- Completion::new(), 169 - data <- Revocable::new(data), 170 - })) 158 + // SAFETY: `inner` is a valid pointer to an `Inner<T>` and valid for both reads 159 + // and writes. 160 + unsafe { core::ptr::drop_in_place(inner) }; 161 + })?; 171 162 }, 172 163 }) 173 164 }
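The reworked `Devres::new` error path above registers the devm action after initializing `inner`, and tears the already-initialized state back down with `drop_in_place()` when registration fails. A minimal userspace sketch of that shape (hypothetical `register` stand-in for `devm_add_action()`, a `bool` flag standing in for the `drop_in_place()` cleanup, `-12` for `-ENOMEM`), using `Result::inspect_err` as the patch does:

```rust
// Stand-in for the Devres::new registration step: `fail` simulates
// devm_add_action() failing, and `cleaned_up` records that the
// already-initialized inner state was torn down on the error path.
fn register(fail: bool, cleaned_up: &mut bool) -> Result<(), i32> {
    let res: Result<(), i32> = if fail { Err(-12) } else { Ok(()) };

    // On failure, undo the initialization that already happened,
    // mirroring the drop_in_place() in the patched error path.
    res.inspect_err(|_| *cleaned_up = true)
}
```

The point of `inspect_err` here is that the cleanup runs only on the error path while the `Result` itself is passed through unchanged, so the `?` operator can still propagate the original error.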
+87 -2
rust/kernel/driver.rs
··· 2 2 3 3 //! Generic support for drivers of different buses (e.g., PCI, Platform, Amba, etc.). 4 4 //! 5 - //! Each bus / subsystem is expected to implement [`RegistrationOps`], which allows drivers to 6 - //! register using the [`Registration`] class. 5 + //! This documentation describes how to implement a bus specific driver API and how to align it with 6 + //! the design of (bus specific) devices. 7 + //! 8 + //! Note: Readers are expected to know the content of the documentation of [`Device`] and 9 + //! [`DeviceContext`]. 10 + //! 11 + //! # Driver Trait 12 + //! 13 + //! The main driver interface is defined by a bus specific driver trait. For instance: 14 + //! 15 + //! ```ignore 16 + //! pub trait Driver: Send { 17 + //! /// The type holding information about each device ID supported by the driver. 18 + //! type IdInfo: 'static; 19 + //! 20 + //! /// The table of OF device ids supported by the driver. 21 + //! const OF_ID_TABLE: Option<of::IdTable<Self::IdInfo>> = None; 22 + //! 23 + //! /// The table of ACPI device ids supported by the driver. 24 + //! const ACPI_ID_TABLE: Option<acpi::IdTable<Self::IdInfo>> = None; 25 + //! 26 + //! /// Driver probe. 27 + //! fn probe(dev: &Device<device::Core>, id_info: &Self::IdInfo) -> Result<Pin<KBox<Self>>>; 28 + //! 29 + //! /// Driver unbind (optional). 30 + //! fn unbind(dev: &Device<device::Core>, this: Pin<&Self>) { 31 + //! let _ = (dev, this); 32 + //! } 33 + //! } 34 + //! ``` 35 + //! 36 + //! For specific examples see [`auxiliary::Driver`], [`pci::Driver`] and [`platform::Driver`]. 37 + //! 38 + //! The `probe()` callback should return a `Result<Pin<KBox<Self>>>`, i.e. the driver's private 39 + //! data. The bus abstraction should store the pointer in the corresponding bus device. The generic 40 + //! [`Device`] infrastructure provides common helpers for this purpose on its 41 + //! [`Device<CoreInternal>`] implementation. 42 + //! 43 + //! 
All driver callbacks should provide a reference to the driver's private data. Once the driver 44 + //! is unbound from the device, the bus abstraction should take back the ownership of the driver's 45 + //! private data from the corresponding [`Device`] and [`drop`] it. 46 + //! 47 + //! All driver callbacks should provide a [`Device<Core>`] reference (see also [`device::Core`]). 48 + //! 49 + //! # Adapter 50 + //! 51 + //! The adapter implementation of a bus represents the abstraction layer between the C bus 52 + //! callbacks and the Rust bus callbacks. It therefore has to be generic over an implementation of 53 + //! the [driver trait](#driver-trait). 54 + //! 55 + //! ```ignore 56 + //! pub struct Adapter<T: Driver>; 57 + //! ``` 58 + //! 59 + //! There's a common [`Adapter`] trait that can be implemented to inherit common driver 60 + //! infrastructure, such as finding the ID info from an [`of::IdTable`] or [`acpi::IdTable`]. 61 + //! 62 + //! # Driver Registration 63 + //! 64 + //! In order to register C driver types (such as `struct platform_driver`) the [adapter](#adapter) 65 + //! should implement the [`RegistrationOps`] trait. 66 + //! 67 + //! This trait implementation can be used to create the actual registration with the common 68 + //! [`Registration`] type. 69 + //! 70 + //! Typically, bus abstractions want to provide a bus specific `module_bus_driver!` macro, which 71 + //! creates a kernel module with exactly one [`Registration`] for the bus specific adapter. 72 + //! 73 + //! The generic driver infrastructure provides a helper for this with the [`module_driver`] macro. 74 + //! 75 + //! # Device IDs 76 + //! 77 + //! Besides the common device ID types, such as [`of::DeviceId`] and [`acpi::DeviceId`], most buses 78 + //! may need to implement their own device ID types. 79 + //! 80 + //! For this purpose the generic infrastructure in [`device_id`] should be used. 81 + //! 82 + //! [`auxiliary::Driver`]: kernel::auxiliary::Driver 83 + //! 
[`Core`]: device::Core 84 + //! [`Device`]: device::Device 85 + //! [`Device<Core>`]: device::Device<device::Core> 86 + //! [`Device<CoreInternal>`]: device::Device<device::CoreInternal> 87 + //! [`DeviceContext`]: device::DeviceContext 88 + //! [`device_id`]: kernel::device_id 89 + //! [`module_driver`]: kernel::module_driver 90 + //! [`pci::Driver`]: kernel::pci::Driver 91 + //! [`platform::Driver`]: kernel::platform::Driver 7 92 8 93 use crate::error::{Error, Result}; 9 94 use crate::{acpi, device, of, str::CStr, try_pin_init, types::Opaque, ThisModule};
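The driver-trait and adapter pattern described in the new module docs can be sketched in standalone Rust (hypothetical names throughout: `IdInfo` is reduced to a `u32`, and the `extern "C"` callback layer plus registration are omitted):

```rust
use std::marker::PhantomData;

// Bus specific driver trait, reduced to the probe() essentials.
pub trait Driver {
    /// Per-device-ID info; a u32 stands in for a real ID table entry.
    type IdInfo: 'static;

    fn probe(id_info: &Self::IdInfo) -> Result<Box<Self>, i32>
    where
        Self: Sized;
}

// The adapter bridges the C bus callbacks to the Rust driver trait,
// so it is generic over the concrete Driver implementation.
pub struct Adapter<T: Driver>(PhantomData<T>);

impl<T: Driver> Adapter<T> {
    // In a real bus abstraction this would be invoked from the
    // extern "C" probe callback and the returned box stored in the device.
    pub fn probe_callback(id_info: &T::IdInfo) -> Result<Box<T>, i32> {
        T::probe(id_info)
    }
}

// A toy driver implementing the trait.
pub struct MyDriver {
    pub vendor: u32,
}

impl Driver for MyDriver {
    type IdInfo = u32;

    fn probe(id_info: &u32) -> Result<Box<Self>, i32> {
        Ok(Box::new(MyDriver { vendor: *id_info }))
    }
}
```

The boxed driver returned by `probe()` is the driver's private data; per the doc text, the bus abstraction stores that pointer in the bus device and drops it again when the device is unbound.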
+25 -7
rust/kernel/drm/device.rs
··· 5 5 //! C header: [`include/linux/drm/drm_device.h`](srctree/include/linux/drm/drm_device.h) 6 6 7 7 use crate::{ 8 + alloc::allocator::Kmalloc, 8 9 bindings, device, drm, 9 10 drm::driver::AllocImpl, 10 11 error::from_err_ptr, ··· 13 12 prelude::*, 14 13 types::{ARef, AlwaysRefCounted, Opaque}, 15 14 }; 16 - use core::{mem, ops::Deref, ptr, ptr::NonNull}; 15 + use core::{alloc::Layout, mem, ops::Deref, ptr, ptr::NonNull}; 17 16 18 17 #[cfg(CONFIG_DRM_LEGACY)] 19 18 macro_rules! drm_legacy_fields { ··· 54 53 /// 55 54 /// `self.dev` is a valid instance of a `struct device`. 56 55 #[repr(C)] 57 - #[pin_data] 58 56 pub struct Device<T: drm::Driver> { 59 57 dev: Opaque<bindings::drm_device>, 60 - #[pin] 61 58 data: T::Data, 62 59 } 63 60 ··· 95 96 96 97 /// Create a new `drm::Device` for a `drm::Driver`. 97 98 pub fn new(dev: &device::Device, data: impl PinInit<T::Data, Error>) -> Result<ARef<Self>> { 99 + // `__drm_dev_alloc` uses `kmalloc()` to allocate memory, hence ensure a `kmalloc()` 100 + // compatible `Layout`. 101 + let layout = Kmalloc::aligned_layout(Layout::new::<Self>()); 102 + 98 103 // SAFETY: 99 104 // - `VTABLE`, as a `const` is pinned to the read-only section of the compilation, 100 105 // - `dev` is valid by its type invarants, ··· 106 103 bindings::__drm_dev_alloc( 107 104 dev.as_raw(), 108 105 &Self::VTABLE, 109 - mem::size_of::<Self>(), 106 + layout.size(), 110 107 mem::offset_of!(Self, dev), 111 108 ) 112 109 } ··· 120 117 // - `raw_data` is a valid pointer to uninitialized memory. 121 118 // - `raw_data` will not move until it is dropped. 122 119 unsafe { data.__pinned_init(raw_data) }.inspect_err(|_| { 123 - // SAFETY: `__drm_dev_alloc()` was successful, hence `raw_drm` must be valid and the 120 + // SAFETY: `raw_drm` is a valid pointer to `Self`, given that `__drm_dev_alloc` was 121 + // successful. 
122 + let drm_dev = unsafe { Self::into_drm_device(raw_drm) }; 123 + 124 + // SAFETY: `__drm_dev_alloc()` was successful, hence `drm_dev` must be valid and the 124 125 // refcount must be non-zero. 125 - unsafe { bindings::drm_dev_put(ptr::addr_of_mut!((*raw_drm.as_ptr()).dev).cast()) }; 126 + unsafe { bindings::drm_dev_put(drm_dev) }; 126 127 })?; 127 128 128 129 // SAFETY: The reference count is one, and now we take ownership of that reference as a ··· 145 138 // SAFETY: By the safety requirements of this function `ptr` is a valid pointer to a 146 139 // `struct drm_device` embedded in `Self`. 147 140 unsafe { crate::container_of!(Opaque::cast_from(ptr), Self, dev) }.cast_mut() 141 + } 142 + 143 + /// # Safety 144 + /// 145 + /// `ptr` must be a valid pointer to `Self`. 146 + unsafe fn into_drm_device(ptr: NonNull<Self>) -> *mut bindings::drm_device { 147 + // SAFETY: By the safety requirements of this function, `ptr` is a valid pointer to `Self`. 148 + unsafe { &raw mut (*ptr.as_ptr()).dev }.cast() 148 149 } 149 150 150 151 /// Not intended to be called externally, except via declare_drm_ioctls!() ··· 204 189 } 205 190 206 191 unsafe fn dec_ref(obj: NonNull<Self>) { 192 + // SAFETY: `obj` is a valid pointer to `Self`. 193 + let drm_dev = unsafe { Self::into_drm_device(obj) }; 194 + 207 195 // SAFETY: The safety requirements guarantee that the refcount is non-zero. 208 - unsafe { bindings::drm_dev_put(obj.cast().as_ptr()) }; 196 + unsafe { bindings::drm_dev_put(drm_dev) }; 209 197 } 210 198 } 211 199
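The new `into_drm_device()` helper above derives a raw pointer to the embedded `dev` field from a pointer to the containing struct, without creating an intermediate reference. The same shape in standalone Rust (hypothetical `DeviceLike` with dummy field types; `addr_of_mut!` is the pre-`&raw mut` spelling of the field-projection the patch uses):

```rust
use std::ptr::{addr_of_mut, NonNull};

#[repr(C)]
pub struct DeviceLike {
    dev: u64,  // stands in for the embedded bindings::drm_device
    data: u32, // stands in for the driver's private data
}

/// # Safety
///
/// `ptr` must be a valid pointer to a `DeviceLike`.
pub unsafe fn into_dev(ptr: NonNull<DeviceLike>) -> *mut u64 {
    // Take the address of the field directly from the raw pointer,
    // as `&raw mut (*ptr.as_ptr()).dev` does in the patch.
    unsafe { addr_of_mut!((*ptr.as_ptr()).dev) }
}
```

Projecting through the raw pointer rather than `&mut (*ptr).dev` matters in the kernel context: it never asserts a unique reference over memory whose refcount and lifetime are managed by the C side.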
+1 -1
rust/kernel/faux.rs
··· 4 4 //! 5 5 //! This module provides bindings for working with faux devices in kernel modules. 6 6 //! 7 - //! C header: [`include/linux/device/faux.h`] 7 + //! C header: [`include/linux/device/faux.h`](srctree/include/linux/device/faux.h) 8 8 9 9 use crate::{bindings, device, error::code::*, prelude::*}; 10 10 use core::ptr::{addr_of_mut, null, null_mut, NonNull};
+2 -2
sound/core/timer.c
··· 2139 2139 goto err_take_id; 2140 2140 } 2141 2141 2142 + utimer->id = utimer_id; 2143 + 2142 2144 utimer->name = kasprintf(GFP_KERNEL, "snd-utimer%d", utimer_id); 2143 2145 if (!utimer->name) { 2144 2146 err = -ENOMEM; 2145 2147 goto err_get_name; 2146 2148 } 2147 - 2148 - utimer->id = utimer_id; 2149 2149 2150 2150 tid.dev_sclass = SNDRV_TIMER_SCLASS_APPLICATION; 2151 2151 tid.dev_class = SNDRV_TIMER_CLASS_GLOBAL;
+22 -9
sound/hda/codecs/realtek/alc269.c
··· 510 510 hp_pin = 0x21; 511 511 512 512 alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */ 513 + 514 + /* 3k pull low control for Headset jack. */ 515 + /* NOTE: call this before clearing the pin, otherwise codec stalls */ 516 + /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly 517 + * when booting with headset plugged. So skip setting it for the codec alc257 518 + */ 519 + if (spec->en_3kpull_low) 520 + alc_update_coef_idx(codec, 0x46, 0, 3 << 12); 521 + 513 522 hp_pin_sense = snd_hda_jack_detect(codec, hp_pin); 514 523 515 524 if (hp_pin_sense) { ··· 528 519 AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); 529 520 530 521 msleep(75); 531 - 532 - /* 3k pull low control for Headset jack. */ 533 - /* NOTE: call this before clearing the pin, otherwise codec stalls */ 534 - /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly 535 - * when booting with headset plugged. So skip setting it for the codec alc257 536 - */ 537 - if (spec->en_3kpull_low) 538 - alc_update_coef_idx(codec, 0x46, 0, 3 << 12); 539 522 540 523 if (!spec->no_shutup_pins) 541 524 snd_hda_codec_write(codec, hp_pin, 0, ··· 3580 3579 ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE, 3581 3580 ALC294_FIXUP_ASUS_MIC, 3582 3581 ALC294_FIXUP_ASUS_HEADSET_MIC, 3582 + ALC294_FIXUP_ASUS_I2C_HEADSET_MIC, 3583 3583 ALC294_FIXUP_ASUS_SPK, 3584 3584 ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE, 3585 3585 ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE, ··· 4890 4888 }, 4891 4889 .chained = true, 4892 4890 .chain_id = ALC269_FIXUP_HEADSET_MIC 4891 + }, 4892 + [ALC294_FIXUP_ASUS_I2C_HEADSET_MIC] = { 4893 + .type = HDA_FIXUP_PINS, 4894 + .v.pins = (const struct hda_pintbl[]) { 4895 + { 0x19, 0x03a19020 }, /* use as headset mic */ 4896 + { } 4897 + }, 4898 + .chained = true, 4899 + .chain_id = ALC287_FIXUP_CS35L41_I2C_2 4893 4900 }, 4894 4901 [ALC294_FIXUP_ASUS_SPK] = { 4895 4902 .type = HDA_FIXUP_VERBS, ··· 6379 6368 SND_PCI_QUIRK(0x103c, 0x84e7, "HP 
Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), 6380 6369 SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360), 6381 6370 SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 6371 + SND_PCI_QUIRK(0x103c, 0x8548, "HP EliteBook x360 830 G6", ALC285_FIXUP_HP_GPIO_LED), 6372 + SND_PCI_QUIRK(0x103c, 0x854a, "HP EliteBook 830 G6", ALC285_FIXUP_HP_GPIO_LED), 6382 6373 SND_PCI_QUIRK(0x103c, 0x85c6, "HP Pavilion x360 Convertible 14-dy1xxx", ALC295_FIXUP_HP_MUTE_LED_COEFBIT11), 6383 6374 SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360), 6384 6375 SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), ··· 6741 6728 SND_PCI_QUIRK(0x1043, 0x1b13, "ASUS U41SV/GA403U", ALC285_FIXUP_ASUS_GA403U_HEADSET_MIC), 6742 6729 SND_PCI_QUIRK(0x1043, 0x1b93, "ASUS G614JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2), 6743 6730 SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 6744 - SND_PCI_QUIRK(0x1043, 0x1c03, "ASUS UM3406HA", ALC287_FIXUP_CS35L41_I2C_2), 6731 + SND_PCI_QUIRK(0x1043, 0x1c03, "ASUS UM3406HA", ALC294_FIXUP_ASUS_I2C_HEADSET_MIC), 6745 6732 SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 6746 6733 SND_PCI_QUIRK(0x1043, 0x1c33, "ASUS UX5304MA", ALC245_FIXUP_CS35L41_SPI_2), 6747 6734 SND_PCI_QUIRK(0x1043, 0x1c43, "ASUS UX8406MA", ALC245_FIXUP_CS35L41_SPI_2),
+2 -2
sound/hda/codecs/side-codecs/tas2781_hda_i2c.c
··· 267 267 static const struct snd_kcontrol_new tas2781_snd_controls[] = { 268 268 ACARD_SINGLE_RANGE_EXT_TLV("Speaker Analog Volume", TAS2781_AMP_LEVEL, 269 269 1, 0, 20, 0, tas2781_amp_getvol, 270 - tas2781_amp_putvol, amp_vol_tlv), 270 + tas2781_amp_putvol, tas2781_amp_tlv), 271 271 ACARD_SINGLE_BOOL_EXT("Speaker Force Firmware Load", 0, 272 272 tas2781_force_fwload_get, tas2781_force_fwload_put), 273 273 }; ··· 305 305 efi_char16_t efi_name[TAS2563_CAL_VAR_NAME_MAX]; 306 306 unsigned long max_size = TAS2563_CAL_DATA_SIZE; 307 307 unsigned char var8[TAS2563_CAL_VAR_NAME_MAX]; 308 - struct tasdevice_priv *p = h->hda_priv; 308 + struct tasdevice_priv *p = h->priv; 309 309 struct calidata *cd = &p->cali_data; 310 310 struct cali_reg *r = &cd->cali_reg_array; 311 311 unsigned int offset = 0;
+4 -2
sound/hda/codecs/side-codecs/tas2781_hda_spi.c
··· 494 494 495 495 static struct snd_kcontrol_new tas2781_snd_ctls[] = { 496 496 ACARD_SINGLE_RANGE_EXT_TLV(NULL, TAS2781_AMP_LEVEL, 1, 0, 20, 0, 497 - tas2781_amp_getvol, tas2781_amp_putvol, amp_vol_tlv), 497 + tas2781_amp_getvol, tas2781_amp_putvol, 498 + tas2781_amp_tlv), 498 499 ACARD_SINGLE_RANGE_EXT_TLV(NULL, TAS2781_DVC_LVL, 0, 0, 200, 1, 499 - tas2781_digital_getvol, tas2781_digital_putvol, dvc_tlv), 500 + tas2781_digital_getvol, tas2781_digital_putvol, 501 + tas2781_dvc_tlv), 500 502 ACARD_SINGLE_BOOL_EXT(NULL, 0, tas2781_force_fwload_get, 501 503 tas2781_force_fwload_put), 502 504 };
-69
sound/soc/codecs/cs35l56-sdw.c
··· 393 393 return 0; 394 394 } 395 395 396 - static int cs35l63_sdw_kick_divider(struct cs35l56_private *cs35l56, 397 - struct sdw_slave *peripheral) 398 - { 399 - unsigned int curr_scale_reg, next_scale_reg; 400 - int curr_scale, next_scale, ret; 401 - 402 - if (!cs35l56->base.init_done) 403 - return 0; 404 - 405 - if (peripheral->bus->params.curr_bank) { 406 - curr_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B1; 407 - next_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B0; 408 - } else { 409 - curr_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B0; 410 - next_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B1; 411 - } 412 - 413 - /* 414 - * Current clock scale value must be different to new value. 415 - * Modify current to guarantee this. If next still has the dummy 416 - * value we wrote when it was current, the core code has not set 417 - * a new scale so restore its original good value 418 - */ 419 - curr_scale = sdw_read_no_pm(peripheral, curr_scale_reg); 420 - if (curr_scale < 0) { 421 - dev_err(cs35l56->base.dev, "Failed to read current clock scale: %d\n", curr_scale); 422 - return curr_scale; 423 - } 424 - 425 - next_scale = sdw_read_no_pm(peripheral, next_scale_reg); 426 - if (next_scale < 0) { 427 - dev_err(cs35l56->base.dev, "Failed to read next clock scale: %d\n", next_scale); 428 - return next_scale; 429 - } 430 - 431 - if (next_scale == CS35L56_SDW_INVALID_BUS_SCALE) { 432 - next_scale = cs35l56->old_sdw_clock_scale; 433 - ret = sdw_write_no_pm(peripheral, next_scale_reg, next_scale); 434 - if (ret < 0) { 435 - dev_err(cs35l56->base.dev, "Failed to modify current clock scale: %d\n", 436 - ret); 437 - return ret; 438 - } 439 - } 440 - 441 - cs35l56->old_sdw_clock_scale = curr_scale; 442 - ret = sdw_write_no_pm(peripheral, curr_scale_reg, CS35L56_SDW_INVALID_BUS_SCALE); 443 - if (ret < 0) { 444 - dev_err(cs35l56->base.dev, "Failed to modify current clock scale: %d\n", ret); 445 - return ret; 446 - } 447 - 448 - dev_dbg(cs35l56->base.dev, "Next bus scale: %#x\n", next_scale); 449 - 450 - return 0; 
451 - } 452 - 453 - static int cs35l56_sdw_bus_config(struct sdw_slave *peripheral, 454 - struct sdw_bus_params *params) 455 - { 456 - struct cs35l56_private *cs35l56 = dev_get_drvdata(&peripheral->dev); 457 - 458 - if ((cs35l56->base.type == 0x63) && (cs35l56->base.rev < 0xa1)) 459 - return cs35l63_sdw_kick_divider(cs35l56, peripheral); 460 - 461 - return 0; 462 - } 463 - 464 396 static int __maybe_unused cs35l56_sdw_clk_stop(struct sdw_slave *peripheral, 465 397 enum sdw_clk_stop_mode mode, 466 398 enum sdw_clk_stop_type type) ··· 408 476 .read_prop = cs35l56_sdw_read_prop, 409 477 .interrupt_callback = cs35l56_sdw_interrupt, 410 478 .update_status = cs35l56_sdw_update_status, 411 - .bus_config = cs35l56_sdw_bus_config, 412 479 #ifdef DEBUG 413 480 .clk_stop = cs35l56_sdw_clk_stop, 414 481 #endif
+26 -3
sound/soc/codecs/cs35l56-shared.c
··· 838 838 }; 839 839 EXPORT_SYMBOL_NS_GPL(cs35l56_calibration_controls, "SND_SOC_CS35L56_SHARED"); 840 840 841 + static const struct cirrus_amp_cal_controls cs35l63_calibration_controls = { 842 + .alg_id = 0xbf210, 843 + .mem_region = WMFW_ADSP2_YM, 844 + .ambient = "CAL_AMBIENT", 845 + .calr = "CAL_R", 846 + .status = "CAL_STATUS", 847 + .checksum = "CAL_CHECKSUM", 848 + }; 849 + 841 850 int cs35l56_get_calibration(struct cs35l56_base *cs35l56_base) 842 851 { 843 852 u64 silicon_uid = 0; ··· 921 912 void cs35l56_log_tuning(struct cs35l56_base *cs35l56_base, struct cs_dsp *cs_dsp) 922 913 { 923 914 __be32 pid, sid, tid; 915 + unsigned int alg_id; 924 916 int ret; 917 + 918 + switch (cs35l56_base->type) { 919 + case 0x54: 920 + case 0x56: 921 + case 0x57: 922 + alg_id = 0x9f212; 923 + break; 924 + default: 925 + alg_id = 0xbf212; 926 + break; 927 + } 925 928 926 929 scoped_guard(mutex, &cs_dsp->pwr_lock) { 927 930 ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_PRJCT_ID", 928 - WMFW_ADSP2_XM, 0x9f212), 931 + WMFW_ADSP2_XM, alg_id), 929 932 0, &pid, sizeof(pid)); 930 933 if (!ret) 931 934 ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_CHNNL_ID", 932 - WMFW_ADSP2_XM, 0x9f212), 935 + WMFW_ADSP2_XM, alg_id), 933 936 0, &sid, sizeof(sid)); 934 937 if (!ret) 935 938 ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_SNPSHT_ID", 936 - WMFW_ADSP2_XM, 0x9f212), 939 + WMFW_ADSP2_XM, alg_id), 937 940 0, &tid, sizeof(tid)); 938 941 } 939 942 ··· 995 974 case 0x35A54: 996 975 case 0x35A56: 997 976 case 0x35A57: 977 + cs35l56_base->calibration_controls = &cs35l56_calibration_controls; 998 978 break; 999 979 case 0x35A630: 980 + cs35l56_base->calibration_controls = &cs35l63_calibration_controls; 1000 981 devid = devid >> 4; 1001 982 break; 1002 983 default:
+1 -1
sound/soc/codecs/cs35l56.c
··· 695 695 return ret; 696 696 697 697 ret = cs_amp_write_cal_coeffs(&cs35l56->dsp.cs_dsp, 698 - &cs35l56_calibration_controls, 698 + cs35l56->base.calibration_controls, 699 699 &cs35l56->base.cal_data); 700 700 701 701 wm_adsp_stop(&cs35l56->dsp);
-3
sound/soc/codecs/cs35l56.h
··· 20 20 #define CS35L56_SDW_GEN_INT_MASK_1 0xc1 21 21 #define CS35L56_SDW_INT_MASK_CODEC_IRQ BIT(0) 22 22 23 - #define CS35L56_SDW_INVALID_BUS_SCALE 0xf 24 - 25 23 #define CS35L56_RX_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE) 26 24 #define CS35L56_TX_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE \ 27 25 | SNDRV_PCM_FMTBIT_S32_LE) ··· 50 52 u8 asp_slot_count; 51 53 bool tdm_mode; 52 54 bool sysclk_set; 53 - u8 old_sdw_clock_scale; 54 55 u8 sdw_link_num; 55 56 u8 sdw_unique_id; 56 57 };
+1 -1
sound/soc/codecs/es8389.c
··· 636 636 regmap_write(es8389->regmap, ES8389_ANA_CTL1, 0x59); 637 637 regmap_write(es8389->regmap, ES8389_ADC_EN, 0x00); 638 638 regmap_write(es8389->regmap, ES8389_CLK_OFF1, 0x00); 639 - regmap_write(es8389->regmap, ES8389_RESET, 0x7E); 639 + regmap_write(es8389->regmap, ES8389_RESET, 0x3E); 640 640 regmap_update_bits(es8389->regmap, ES8389_DAC_INV, 0x80, 0x80); 641 641 usleep_range(8000, 8500); 642 642 regmap_update_bits(es8389->regmap, ES8389_DAC_INV, 0x80, 0x00);
+2 -2
sound/soc/codecs/tas2781-i2c.c
··· 910 910 static const struct snd_kcontrol_new tas2781_snd_controls[] = { 911 911 SOC_SINGLE_RANGE_EXT_TLV("Speaker Analog Volume", TAS2781_AMP_LEVEL, 912 912 1, 0, 20, 0, tas2781_amp_getvol, 913 - tas2781_amp_putvol, amp_vol_tlv), 913 + tas2781_amp_putvol, tas2781_amp_tlv), 914 914 SOC_SINGLE_RANGE_EXT_TLV("Speaker Digital Volume", TAS2781_DVC_LVL, 915 915 0, 0, 200, 1, tas2781_digital_getvol, 916 - tas2781_digital_putvol, dvc_tlv), 916 + tas2781_digital_putvol, tas2781_dvc_tlv), 917 917 }; 918 918 919 919 static const struct snd_kcontrol_new tas2781_cali_controls[] = {
+1 -1
sound/usb/stream.c
··· 349 349 u16 cs_len; 350 350 u8 cs_type; 351 351 352 - if (len < sizeof(*p)) 352 + if (len < sizeof(*cs_desc)) 353 353 break; 354 354 cs_len = le16_to_cpu(cs_desc->wLength); 355 355 if (len < cs_len)
+1 -1
sound/usb/validate.c
··· 285 285 /* UAC_VERSION_3, UAC3_EXTENDED_TERMINAL: not implemented yet */ 286 286 FUNC(UAC_VERSION_3, UAC3_MIXER_UNIT, validate_mixer_unit), 287 287 FUNC(UAC_VERSION_3, UAC3_SELECTOR_UNIT, validate_selector_unit), 288 - FUNC(UAC_VERSION_3, UAC_FEATURE_UNIT, validate_uac3_feature_unit), 288 + FUNC(UAC_VERSION_3, UAC3_FEATURE_UNIT, validate_uac3_feature_unit), 289 289 /* UAC_VERSION_3, UAC3_EFFECT_UNIT: not implemented yet */ 290 290 FUNC(UAC_VERSION_3, UAC3_PROCESSING_UNIT, validate_processing_unit), 291 291 FUNC(UAC_VERSION_3, UAC3_EXTENSION_UNIT, validate_processing_unit),
+28
tools/arch/arm64/include/asm/cputype.h
··· 75 75 #define ARM_CPU_PART_CORTEX_A76 0xD0B 76 76 #define ARM_CPU_PART_NEOVERSE_N1 0xD0C 77 77 #define ARM_CPU_PART_CORTEX_A77 0xD0D 78 + #define ARM_CPU_PART_CORTEX_A76AE 0xD0E 78 79 #define ARM_CPU_PART_NEOVERSE_V1 0xD40 79 80 #define ARM_CPU_PART_CORTEX_A78 0xD41 80 81 #define ARM_CPU_PART_CORTEX_A78AE 0xD42 81 82 #define ARM_CPU_PART_CORTEX_X1 0xD44 82 83 #define ARM_CPU_PART_CORTEX_A510 0xD46 84 + #define ARM_CPU_PART_CORTEX_X1C 0xD4C 83 85 #define ARM_CPU_PART_CORTEX_A520 0xD80 84 86 #define ARM_CPU_PART_CORTEX_A710 0xD47 85 87 #define ARM_CPU_PART_CORTEX_A715 0xD4D ··· 121 119 #define QCOM_CPU_PART_KRYO 0x200 122 120 #define QCOM_CPU_PART_KRYO_2XX_GOLD 0x800 123 121 #define QCOM_CPU_PART_KRYO_2XX_SILVER 0x801 122 + #define QCOM_CPU_PART_KRYO_3XX_GOLD 0x802 124 123 #define QCOM_CPU_PART_KRYO_3XX_SILVER 0x803 125 124 #define QCOM_CPU_PART_KRYO_4XX_GOLD 0x804 126 125 #define QCOM_CPU_PART_KRYO_4XX_SILVER 0x805 126 + #define QCOM_CPU_PART_ORYON_X1 0x001 127 127 128 128 #define NVIDIA_CPU_PART_DENVER 0x003 129 129 #define NVIDIA_CPU_PART_CARMEL 0x004 ··· 133 129 #define FUJITSU_CPU_PART_A64FX 0x001 134 130 135 131 #define HISI_CPU_PART_TSV110 0xD01 132 + #define HISI_CPU_PART_HIP09 0xD02 136 133 #define HISI_CPU_PART_HIP12 0xD06 137 134 138 135 #define APPLE_CPU_PART_M1_ICESTORM 0x022 ··· 164 159 #define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76) 165 160 #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1) 166 161 #define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77) 162 + #define MIDR_CORTEX_A76AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76AE) 167 163 #define MIDR_NEOVERSE_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1) 168 164 #define MIDR_CORTEX_A78 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78) 169 165 #define MIDR_CORTEX_A78AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE) 170 166 #define MIDR_CORTEX_X1 
MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1) 171 167 #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510) 168 + #define MIDR_CORTEX_X1C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1C) 172 169 #define MIDR_CORTEX_A520 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A520) 173 170 #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710) 174 171 #define MIDR_CORTEX_A715 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A715) ··· 203 196 #define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO) 204 197 #define MIDR_QCOM_KRYO_2XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_GOLD) 205 198 #define MIDR_QCOM_KRYO_2XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_SILVER) 199 + #define MIDR_QCOM_KRYO_3XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_GOLD) 206 200 #define MIDR_QCOM_KRYO_3XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_SILVER) 207 201 #define MIDR_QCOM_KRYO_4XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_GOLD) 208 202 #define MIDR_QCOM_KRYO_4XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_SILVER) 203 + #define MIDR_QCOM_ORYON_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_ORYON_X1) 204 + 205 + /* 206 + * NOTES: 207 + * - Qualcomm Kryo 5XX Prime / Gold ID themselves as MIDR_CORTEX_A77 208 + * - Qualcomm Kryo 5XX Silver IDs itself as MIDR_QCOM_KRYO_4XX_SILVER 209 + * - Qualcomm Kryo 6XX Prime IDs itself as MIDR_CORTEX_X1 210 + * - Qualcomm Kryo 6XX Gold IDs itself as ARM_CPU_PART_CORTEX_A78 211 + * - Qualcomm Kryo 6XX Silver IDs itself as MIDR_CORTEX_A55 212 + */ 213 + 209 214 #define MIDR_NVIDIA_DENVER MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_DENVER) 210 215 #define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL) 211 216 #define MIDR_FUJITSU_A64FX MIDR_CPU_MODEL(ARM_CPU_IMP_FUJITSU, FUJITSU_CPU_PART_A64FX) 212 217 #define 
MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110) 218 + #define MIDR_HISI_HIP09 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP09) 213 219 #define MIDR_HISI_HIP12 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP12) 214 220 #define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM) 215 221 #define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM) ··· 310 290 { 311 291 return read_cpuid(MIDR_EL1); 312 292 } 293 + 294 + struct target_impl_cpu { 295 + u64 midr; 296 + u64 revidr; 297 + u64 aidr; 298 + }; 299 + 300 + bool cpu_errata_set_target_impl(u64 num, void *impl_cpus); 313 301 314 302 static inline u64 __attribute_const__ read_cpuid_mpidr(void) 315 303 {
-13
tools/arch/powerpc/include/uapi/asm/kvm.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 2 /* 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License, version 2, as 5 - * published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 11 - * 12 - * You should have received a copy of the GNU General Public License 13 - * along with this program; if not, write to the Free Software 14 - * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 15 - * 16 3 * Copyright IBM Corp. 2007 17 4 * 18 5 * Authors: Hollis Blanchard <hollisb@us.ibm.com>
+9 -1
tools/arch/x86/include/asm/cpufeatures.h
··· 218 218 #define X86_FEATURE_FLEXPRIORITY ( 8*32+ 1) /* "flexpriority" Intel FlexPriority */ 219 219 #define X86_FEATURE_EPT ( 8*32+ 2) /* "ept" Intel Extended Page Table */ 220 220 #define X86_FEATURE_VPID ( 8*32+ 3) /* "vpid" Intel Virtual Processor ID */ 221 + #define X86_FEATURE_COHERENCY_SFW_NO ( 8*32+ 4) /* SNP cache coherency software work around not needed */ 221 222 222 223 #define X86_FEATURE_VMMCALL ( 8*32+15) /* "vmmcall" Prefer VMMCALL to VMCALL */ 223 224 #define X86_FEATURE_XENPV ( 8*32+16) /* Xen paravirtual guest */ ··· 457 456 #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* No Nested Data Breakpoints */ 458 457 #define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */ 459 458 #define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* LFENCE always serializing / synchronizes RDTSC */ 459 + #define X86_FEATURE_VERW_CLEAR (20*32+ 5) /* The memory form of VERW mitigates TSA */ 460 460 #define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* Null Selector Clears Base */ 461 + 461 462 #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* Automatic IBRS */ 462 463 #define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* SMM_CTL MSR is not present */ 464 + 465 + #define X86_FEATURE_GP_ON_USER_CPUID (20*32+17) /* User CPUID faulting */ 463 466 464 467 #define X86_FEATURE_PREFETCHI (20*32+20) /* Prefetch Data/Instruction to Cache Level */ 465 468 #define X86_FEATURE_SBPB (20*32+27) /* Selective Branch Prediction Barrier */ ··· 492 487 #define X86_FEATURE_PREFER_YMM (21*32+ 8) /* Avoid ZMM registers due to downclocking */ 493 488 #define X86_FEATURE_APX (21*32+ 9) /* Advanced Performance Extensions */ 494 489 #define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+10) /* Use thunk for indirect branches in lower half of cacheline */ 490 + #define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */ 491 + #define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */ 492 + #define X86_FEATURE_CLEAR_CPU_BUF_VM 
(21*32+13) /* Clear CPU buffers using VERW before VMRUN */ 495 493 496 494 /* 497 495 * BUG word(s) ··· 550 542 #define X86_BUG_OLD_MICROCODE X86_BUG( 1*32+ 6) /* "old_microcode" CPU has old microcode, it is surely vulnerable to something */ 551 543 #define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */ 552 544 #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */ 553 - 545 + #define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */ 554 546 #endif /* _ASM_X86_CPUFEATURES_H */
+7
tools/arch/x86/include/asm/msr-index.h
··· 419 419 #define DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI (1UL << 12) 420 420 #define DEBUGCTLMSR_FREEZE_IN_SMM_BIT 14 421 421 #define DEBUGCTLMSR_FREEZE_IN_SMM (1UL << DEBUGCTLMSR_FREEZE_IN_SMM_BIT) 422 + #define DEBUGCTLMSR_RTM_DEBUG BIT(15) 422 423 423 424 #define MSR_PEBS_FRONTEND 0x000003f7 424 425 ··· 734 733 #define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301 735 734 #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302 736 735 736 + /* AMD Hardware Feedback Support MSRs */ 737 + #define MSR_AMD_WORKLOAD_CLASS_CONFIG 0xc0000500 738 + #define MSR_AMD_WORKLOAD_CLASS_ID 0xc0000501 739 + #define MSR_AMD_WORKLOAD_HRST 0xc0000502 740 + 737 741 /* AMD Last Branch Record MSRs */ 738 742 #define MSR_AMD64_LBR_SELECT 0xc000010e 739 743 ··· 837 831 #define MSR_K7_HWCR_SMMLOCK BIT_ULL(MSR_K7_HWCR_SMMLOCK_BIT) 838 832 #define MSR_K7_HWCR_IRPERF_EN_BIT 30 839 833 #define MSR_K7_HWCR_IRPERF_EN BIT_ULL(MSR_K7_HWCR_IRPERF_EN_BIT) 834 + #define MSR_K7_HWCR_CPUID_USER_DIS_BIT 35 840 835 #define MSR_K7_FID_VID_CTL 0xc0010041 841 836 #define MSR_K7_FID_VID_STATUS 0xc0010042 842 837 #define MSR_K7_HWCR_CPB_DIS_BIT 25
+7 -1
tools/arch/x86/include/uapi/asm/kvm.h
··· 965 965 struct kvm_tdx_capabilities { 966 966 __u64 supported_attrs; 967 967 __u64 supported_xfam; 968 - __u64 reserved[254]; 968 + 969 + __u64 kernel_tdvmcallinfo_1_r11; 970 + __u64 user_tdvmcallinfo_1_r11; 971 + __u64 kernel_tdvmcallinfo_1_r12; 972 + __u64 user_tdvmcallinfo_1_r12; 973 + 974 + __u64 reserved[250]; 969 975 970 976 /* Configurable CPUID bits for userspace */ 971 977 struct kvm_cpuid2 cpuid;
+28
tools/include/linux/args.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef _LINUX_ARGS_H 4 + #define _LINUX_ARGS_H 5 + 6 + /* 7 + * How do these macros work? 8 + * 9 + * In __COUNT_ARGS() _0 to _12 are just placeholders from the start 10 + * in order to make sure _n is positioned over the correct number 11 + * from 12 to 0 (depending on X, which is a variadic argument list). 12 + * They serve no purpose other than occupying a position. Since each 13 + * macro parameter must have a distinct identifier, those identifiers 14 + * are as good as any. 15 + * 16 + * In COUNT_ARGS() we use actual integers, so __COUNT_ARGS() returns 17 + * that as _n. 18 + */ 19 + 20 + /* This counts to 15. Any more, it will return 16th argument. */ 21 + #define __COUNT_ARGS(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, _n, X...) _n 22 + #define COUNT_ARGS(X...) __COUNT_ARGS(, ##X, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0) 23 + 24 + /* Concatenate two parameters, but allow them to be expanded beforehand. */ 25 + #define __CONCAT(a, b) a ## b 26 + #define CONCATENATE(a, b) __CONCAT(a, b) 27 + 28 + #endif /* _LINUX_ARGS_H */
+6 -23
tools/include/linux/bits.h
··· 2 2 #ifndef __LINUX_BITS_H 3 3 #define __LINUX_BITS_H 4 4 5 - #include <linux/const.h> 6 5 #include <vdso/bits.h> 7 6 #include <uapi/linux/bits.h> 8 - #include <asm/bitsperlong.h> 9 7 10 8 #define BIT_MASK(nr) (UL(1) << ((nr) % BITS_PER_LONG)) 11 9 #define BIT_WORD(nr) ((nr) / BITS_PER_LONG) ··· 48 50 (type_max(t) << (l) & \ 49 51 type_max(t) >> (BITS_PER_TYPE(t) - 1 - (h))))) 50 52 53 + #define GENMASK(h, l) GENMASK_TYPE(unsigned long, h, l) 54 + #define GENMASK_ULL(h, l) GENMASK_TYPE(unsigned long long, h, l) 55 + 51 56 #define GENMASK_U8(h, l) GENMASK_TYPE(u8, h, l) 52 57 #define GENMASK_U16(h, l) GENMASK_TYPE(u16, h, l) 53 58 #define GENMASK_U32(h, l) GENMASK_TYPE(u32, h, l) 54 59 #define GENMASK_U64(h, l) GENMASK_TYPE(u64, h, l) 60 + #define GENMASK_U128(h, l) GENMASK_TYPE(u128, h, l) 55 61 56 62 /* 57 63 * Fixed-type variants of BIT(), with additional checks like GENMASK_TYPE(). The ··· 81 79 * BUILD_BUG_ON_ZERO is not available in h files included from asm files, 82 80 * disable the input check if that is the case. 83 81 */ 84 - #define GENMASK_INPUT_CHECK(h, l) 0 82 + #define GENMASK(h, l) __GENMASK(h, l) 83 + #define GENMASK_ULL(h, l) __GENMASK_ULL(h, l) 85 84 86 85 #endif /* !defined(__ASSEMBLY__) */ 87 - 88 - #define GENMASK(h, l) \ 89 - (GENMASK_INPUT_CHECK(h, l) + __GENMASK(h, l)) 90 - #define GENMASK_ULL(h, l) \ 91 - (GENMASK_INPUT_CHECK(h, l) + __GENMASK_ULL(h, l)) 92 - 93 - #if !defined(__ASSEMBLY__) 94 - /* 95 - * Missing asm support 96 - * 97 - * __GENMASK_U128() depends on _BIT128() which would not work 98 - * in the asm code, as it shifts an 'unsigned __int128' data 99 - * type instead of direct representation of 128 bit constants 100 - * such as long and unsigned long. The fundamental problem is 101 - * that a 128 bit constant will get silently truncated by the 102 - * gcc compiler. 103 - */ 104 - #define GENMASK_U128(h, l) \ 105 - (GENMASK_INPUT_CHECK(h, l) + __GENMASK_U128(h, l)) 106 - #endif 107 86 108 87 #endif /* __LINUX_BITS_H */
+23
tools/include/linux/cfi_types.h
··· 41 41 SYM_TYPED_START(name, SYM_L_GLOBAL, SYM_A_ALIGN) 42 42 #endif 43 43 44 + #else /* __ASSEMBLY__ */ 45 + 46 + #ifdef CONFIG_CFI_CLANG 47 + #define DEFINE_CFI_TYPE(name, func) \ 48 + /* \ 49 + * Force a reference to the function so the compiler generates \ 50 + * __kcfi_typeid_<func>. \ 51 + */ \ 52 + __ADDRESSABLE(func); \ 53 + /* u32 name __ro_after_init = __kcfi_typeid_<func> */ \ 54 + extern u32 name; \ 55 + asm ( \ 56 + " .pushsection .data..ro_after_init,\"aw\",\%progbits \n" \ 57 + " .type " #name ",\%object \n" \ 58 + " .globl " #name " \n" \ 59 + " .p2align 2, 0x0 \n" \ 60 + #name ": \n" \ 61 + " .4byte __kcfi_typeid_" #func " \n" \ 62 + " .size " #name ", 4 \n" \ 63 + " .popsection \n" \ 64 + ); 65 + #endif 66 + 44 67 #endif /* __ASSEMBLY__ */ 45 68 #endif /* _LINUX_CFI_TYPES_H */
+7 -1
tools/include/uapi/asm-generic/unistd.h
··· 852 852 #define __NR_open_tree_attr 467 853 853 __SYSCALL(__NR_open_tree_attr, sys_open_tree_attr) 854 854 855 + /* fs/inode.c */ 856 + #define __NR_file_getattr 468 857 + __SYSCALL(__NR_file_getattr, sys_file_getattr) 858 + #define __NR_file_setattr 469 859 + __SYSCALL(__NR_file_setattr, sys_file_setattr) 860 + 855 861 #undef __NR_syscalls 856 - #define __NR_syscalls 468 862 + #define __NR_syscalls 470 857 863 858 864 /* 859 865 * 32 bit systems traditionally used different
+27
tools/include/uapi/linux/kvm.h
··· 178 178 #define KVM_EXIT_NOTIFY 37 179 179 #define KVM_EXIT_LOONGARCH_IOCSR 38 180 180 #define KVM_EXIT_MEMORY_FAULT 39 181 + #define KVM_EXIT_TDX 40 181 182 182 183 /* For KVM_EXIT_INTERNAL_ERROR */ 183 184 /* Emulate instruction failed. */ ··· 448 447 __u64 gpa; 449 448 __u64 size; 450 449 } memory_fault; 450 + /* KVM_EXIT_TDX */ 451 + struct { 452 + __u64 flags; 453 + __u64 nr; 454 + union { 455 + struct { 456 + __u64 ret; 457 + __u64 data[5]; 458 + } unknown; 459 + struct { 460 + __u64 ret; 461 + __u64 gpa; 462 + __u64 size; 463 + } get_quote; 464 + struct { 465 + __u64 ret; 466 + __u64 leaf; 467 + __u64 r11, r12, r13, r14; 468 + } get_tdvmcall_info; 469 + struct { 470 + __u64 ret; 471 + __u64 vector; 472 + } setup_event_notify; 473 + }; 474 + } tdx; 451 475 /* Fix the size of the union. */ 452 476 char padding[256]; 453 477 }; ··· 961 935 #define KVM_CAP_ARM_EL2 240 962 936 #define KVM_CAP_ARM_EL2_E2H0 241 963 937 #define KVM_CAP_RISCV_MP_STATE_RESET 242 938 + #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243 964 939 965 940 struct kvm_irq_routing_irqchip { 966 941 __u32 irqchip;
+2
tools/perf/arch/arm/entry/syscalls/syscall.tbl
··· 482 482 465 common listxattrat sys_listxattrat 483 483 466 common removexattrat sys_removexattrat 484 484 467 common open_tree_attr sys_open_tree_attr 485 + 468 common file_getattr sys_file_getattr 486 + 469 common file_setattr sys_file_setattr
+2
tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
··· 382 382 465 n64 listxattrat sys_listxattrat 383 383 466 n64 removexattrat sys_removexattrat 384 384 467 n64 open_tree_attr sys_open_tree_attr 385 + 468 n64 file_getattr sys_file_getattr 386 + 469 n64 file_setattr sys_file_setattr
+2
tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
··· 558 558 465 common listxattrat sys_listxattrat 559 559 466 common removexattrat sys_removexattrat 560 560 467 common open_tree_attr sys_open_tree_attr 561 + 468 common file_getattr sys_file_getattr 562 + 469 common file_setattr sys_file_setattr
+2
tools/perf/arch/s390/entry/syscalls/syscall.tbl
··· 470 470 465 common listxattrat sys_listxattrat sys_listxattrat 471 471 466 common removexattrat sys_removexattrat sys_removexattrat 472 472 467 common open_tree_attr sys_open_tree_attr sys_open_tree_attr 473 + 468 common file_getattr sys_file_getattr sys_file_getattr 474 + 469 common file_setattr sys_file_setattr sys_file_setattr
+2
tools/perf/arch/sh/entry/syscalls/syscall.tbl
··· 471 471 465 common listxattrat sys_listxattrat 472 472 466 common removexattrat sys_removexattrat 473 473 467 common open_tree_attr sys_open_tree_attr 474 + 468 common file_getattr sys_file_getattr 475 + 469 common file_setattr sys_file_setattr
+2
tools/perf/arch/sparc/entry/syscalls/syscall.tbl
··· 513 513 465 common listxattrat sys_listxattrat 514 514 466 common removexattrat sys_removexattrat 515 515 467 common open_tree_attr sys_open_tree_attr 516 + 468 common file_getattr sys_file_getattr 517 + 469 common file_setattr sys_file_setattr
+2
tools/perf/arch/x86/entry/syscalls/syscall_32.tbl
··· 473 473 465 i386 listxattrat sys_listxattrat 474 474 466 i386 removexattrat sys_removexattrat 475 475 467 i386 open_tree_attr sys_open_tree_attr 476 + 468 i386 file_getattr sys_file_getattr 477 + 469 i386 file_setattr sys_file_setattr
+2
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 391 391 465 common listxattrat sys_listxattrat 392 392 466 common removexattrat sys_removexattrat 393 393 467 common open_tree_attr sys_open_tree_attr 394 + 468 common file_getattr sys_file_getattr 395 + 469 common file_setattr sys_file_setattr 394 396 395 397 # 396 398 # Due to a historical design error, certain syscalls are numbered differently
+1
tools/perf/arch/x86/tests/topdown.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include "arch-tests.h" 3 3 #include "../util/topdown.h" 4 + #include "debug.h" 4 5 #include "evlist.h" 5 6 #include "parse-events.h" 6 7 #include "pmu.h"
+2
tools/perf/arch/xtensa/entry/syscalls/syscall.tbl
··· 438 438 465 common listxattrat sys_listxattrat 439 439 466 common removexattrat sys_removexattrat 440 440 467 common open_tree_attr sys_open_tree_attr 441 + 468 common file_getattr sys_file_getattr 442 + 469 common file_setattr sys_file_setattr
+1 -1
tools/perf/bench/inject-buildid.c
··· 85 85 if (typeflag == FTW_D || typeflag == FTW_SL) 86 86 return 0; 87 87 88 - if (filename__read_build_id(fpath, &bid) < 0) 88 + if (filename__read_build_id(fpath, &bid, /*block=*/true) < 0) 89 89 return 0; 90 90 91 91 dso->name = realpath(fpath, NULL);
+4 -4
tools/perf/builtin-buildid-cache.c
··· 180 180 struct nscookie nsc; 181 181 182 182 nsinfo__mountns_enter(nsi, &nsc); 183 - err = filename__read_build_id(filename, &bid); 183 + err = filename__read_build_id(filename, &bid, /*block=*/true); 184 184 nsinfo__mountns_exit(&nsc); 185 185 if (err < 0) { 186 186 pr_debug("Couldn't read a build-id in %s\n", filename); ··· 204 204 int err; 205 205 206 206 nsinfo__mountns_enter(nsi, &nsc); 207 - err = filename__read_build_id(filename, &bid); 207 + err = filename__read_build_id(filename, &bid, /*block=*/true); 208 208 nsinfo__mountns_exit(&nsc); 209 209 if (err < 0) { 210 210 pr_debug("Couldn't read a build-id in %s\n", filename); ··· 280 280 if (!dso__build_id_filename(dso, filename, sizeof(filename), false)) 281 281 return true; 282 282 283 - if (filename__read_build_id(filename, &bid) == -1) { 283 + if (filename__read_build_id(filename, &bid, /*block=*/true) == -1) { 284 284 if (errno == ENOENT) 285 285 return false; 286 286 ··· 309 309 int err; 310 310 311 311 nsinfo__mountns_enter(nsi, &nsc); 312 - err = filename__read_build_id(filename, &bid); 312 + err = filename__read_build_id(filename, &bid, /*block=*/true); 313 313 nsinfo__mountns_exit(&nsc); 314 314 if (err < 0) { 315 315 pr_debug("Couldn't read a build-id in %s\n", filename);
+2 -2
tools/perf/builtin-inject.c
··· 680 680 681 681 mutex_lock(dso__lock(dso)); 682 682 nsinfo__mountns_enter(dso__nsinfo(dso), &nsc); 683 - if (filename__read_build_id(dso__long_name(dso), &bid) > 0) 683 + if (filename__read_build_id(dso__long_name(dso), &bid, /*block=*/true) > 0) 684 684 dso__set_build_id(dso, &bid); 685 685 else if (dso__nsinfo(dso)) { 686 686 char *new_name = dso__filename_with_chroot(dso, dso__long_name(dso)); 687 687 688 - if (new_name && filename__read_build_id(new_name, &bid) > 0) 688 + if (new_name && filename__read_build_id(new_name, &bid, /*block=*/true) > 0) 689 689 dso__set_build_id(dso, &bid); 690 690 free(new_name); 691 691 }
+1 -1
tools/perf/tests/sdt.c
··· 31 31 struct build_id bid = { .size = 0, }; 32 32 int err; 33 33 34 - err = filename__read_build_id(filename, &bid); 34 + err = filename__read_build_id(filename, &bid, /*block=*/true); 35 35 if (err < 0) { 36 36 pr_debug("Failed to read build id of %s\n", filename); 37 37 return err;
+18
tools/perf/trace/beauty/include/uapi/linux/fcntl.h
··· 90 90 #define DN_ATTRIB 0x00000020 /* File changed attibutes */ 91 91 #define DN_MULTISHOT 0x80000000 /* Don't remove notifier */ 92 92 93 + /* Reserved kernel ranges [-100], [-10000, -40000]. */ 93 94 #define AT_FDCWD -100 /* Special value for dirfd used to 94 95 indicate openat should use the 95 96 current working directory. */ 96 97 98 + /* 99 + * The concept of process and threads in userland and the kernel is a confusing 100 + * one - within the kernel every thread is a 'task' with its own individual PID, 101 + * however from userland's point of view threads are grouped by a single PID, 102 + * which is that of the 'thread group leader', typically the first thread 103 + * spawned. 104 + * 105 + * To cut the Gideon knot, for internal kernel usage, we refer to 106 + * PIDFD_SELF_THREAD to refer to the current thread (or task from a kernel 107 + * perspective), and PIDFD_SELF_THREAD_GROUP to refer to the current thread 108 + * group leader... 109 + */ 110 + #define PIDFD_SELF_THREAD -10000 /* Current thread. */ 111 + #define PIDFD_SELF_THREAD_GROUP -10001 /* Current thread group leader. */ 112 + 113 + #define FD_PIDFS_ROOT -10002 /* Root of the pidfs filesystem */ 114 + #define FD_INVALID -10009 /* Invalid file descriptor: -10000 - EBADF = -10009 */ 97 115 98 116 /* Generic flags for the *at(2) family of syscalls. */ 99 117
+88
tools/perf/trace/beauty/include/uapi/linux/fs.h
··· 60 60 #define RENAME_EXCHANGE (1 << 1) /* Exchange source and dest */ 61 61 #define RENAME_WHITEOUT (1 << 2) /* Whiteout source */ 62 62 63 + /* 64 + * The root inode of procfs is guaranteed to always have the same inode number. 65 + * For programs that make heavy use of procfs, verifying that the root is a 66 + * real procfs root and using openat2(RESOLVE_{NO_{XDEV,MAGICLINKS},BENEATH}) 67 + * will allow you to make sure you are never tricked into operating on the 68 + * wrong procfs file. 69 + */ 70 + enum procfs_ino { 71 + PROCFS_ROOT_INO = 1, 72 + }; 73 + 63 74 struct file_clone_range { 64 75 __s64 src_fd; 65 76 __u64 src_offset; ··· 100 89 struct fs_sysfs_path { 101 90 __u8 len; 102 91 __u8 name[128]; 92 + }; 93 + 94 + /* Protection info capability flags */ 95 + #define LBMD_PI_CAP_INTEGRITY (1 << 0) 96 + #define LBMD_PI_CAP_REFTAG (1 << 1) 97 + 98 + /* Checksum types for Protection Information */ 99 + #define LBMD_PI_CSUM_NONE 0 100 + #define LBMD_PI_CSUM_IP 1 101 + #define LBMD_PI_CSUM_CRC16_T10DIF 2 102 + #define LBMD_PI_CSUM_CRC64_NVME 4 103 + 104 + /* sizeof first published struct */ 105 + #define LBMD_SIZE_VER0 16 106 + 107 + /* 108 + * Logical block metadata capability descriptor 109 + * If the device does not support metadata, all the fields will be zero. 110 + * Applications must check lbmd_flags to determine whether metadata is 111 + * supported or not. 
112 + */ 113 + struct logical_block_metadata_cap { 114 + /* Bitmask of logical block metadata capability flags */ 115 + __u32 lbmd_flags; 116 + /* 117 + * The amount of data described by each unit of logical block 118 + * metadata 119 + */ 120 + __u16 lbmd_interval; 121 + /* 122 + * Size in bytes of the logical block metadata associated with each 123 + * interval 124 + */ 125 + __u8 lbmd_size; 126 + /* 127 + * Size in bytes of the opaque block tag associated with each 128 + * interval 129 + */ 130 + __u8 lbmd_opaque_size; 131 + /* 132 + * Offset in bytes of the opaque block tag within the logical block 133 + * metadata 134 + */ 135 + __u8 lbmd_opaque_offset; 136 + /* Size in bytes of the T10 PI tuple associated with each interval */ 137 + __u8 lbmd_pi_size; 138 + /* Offset in bytes of T10 PI tuple within the logical block metadata */ 139 + __u8 lbmd_pi_offset; 140 + /* T10 PI guard tag type */ 141 + __u8 lbmd_guard_tag_type; 142 + /* Size in bytes of the T10 PI application tag */ 143 + __u8 lbmd_app_tag_size; 144 + /* Size in bytes of the T10 PI reference tag */ 145 + __u8 lbmd_ref_tag_size; 146 + /* Size in bytes of the T10 PI storage tag */ 147 + __u8 lbmd_storage_tag_size; 148 + __u8 pad; 103 149 }; 104 150 105 151 /* extent-same (dedupe) ioctls; these MUST match the btrfs ioctl definitions */ ··· 215 147 __u32 fsx_cowextsize; /* CoW extsize field value (get/set)*/ 216 148 unsigned char fsx_pad[8]; 217 149 }; 150 + 151 + /* 152 + * Variable size structure for file_[sg]et_attr(). 153 + * 154 + * Note. This is alternative to the structure 'struct file_kattr'/'struct fsxattr'. 155 + * As this structure is passed to/from userspace with its size, this can 156 + * be versioned based on the size. 
157 + */ 158 + struct file_attr { 159 + __u64 fa_xflags; /* xflags field value (get/set) */ 160 + __u32 fa_extsize; /* extsize field value (get/set)*/ 161 + __u32 fa_nextents; /* nextents field value (get) */ 162 + __u32 fa_projid; /* project identifier (get/set) */ 163 + __u32 fa_cowextsize; /* CoW extsize field value (get/set) */ 164 + }; 165 + 166 + #define FILE_ATTR_SIZE_VER0 24 167 + #define FILE_ATTR_SIZE_LATEST FILE_ATTR_SIZE_VER0 218 168 219 169 /* 220 170 * Flags for the fsx_xflags field ··· 333 247 * also /sys/kernel/debug/ for filesystems with debugfs exports 334 248 */ 335 249 #define FS_IOC_GETFSSYSFSPATH _IOR(0x15, 1, struct fs_sysfs_path) 250 + /* Get logical block metadata capability details */ 251 + #define FS_IOC_GETLBMD_CAP _IOWR(0x15, 2, struct logical_block_metadata_cap) 336 252 337 253 /* 338 254 * Inode flags (FS_IOC_GETFLAGS / FS_IOC_SETFLAGS)
+8 -1
tools/perf/trace/beauty/include/uapi/linux/prctl.h
··· 244 244 # define PR_MTE_TAG_MASK (0xffffUL << PR_MTE_TAG_SHIFT) 245 245 /* Unused; kept only for source compatibility */ 246 246 # define PR_MTE_TCF_SHIFT 1 247 + /* MTE tag check store only */ 248 + # define PR_MTE_STORE_ONLY (1UL << 19) 247 249 /* RISC-V pointer masking tag length */ 248 250 # define PR_PMLEN_SHIFT 24 249 251 # define PR_PMLEN_MASK (0x7fUL << PR_PMLEN_SHIFT) ··· 257 255 /* Dispatch syscalls to a userspace handler */ 258 256 #define PR_SET_SYSCALL_USER_DISPATCH 59 259 257 # define PR_SYS_DISPATCH_OFF 0 260 - # define PR_SYS_DISPATCH_ON 1 258 + /* Enable dispatch except for the specified range */ 259 + # define PR_SYS_DISPATCH_EXCLUSIVE_ON 1 260 + /* Enable dispatch for the specified range */ 261 + # define PR_SYS_DISPATCH_INCLUSIVE_ON 2 262 + /* Legacy name for backwards compatibility */ 263 + # define PR_SYS_DISPATCH_ON PR_SYS_DISPATCH_EXCLUSIVE_ON 261 264 /* The control values for the user space selector when dispatch is enabled */ 262 265 # define SYSCALL_DISPATCH_FILTER_ALLOW 0 263 266 # define SYSCALL_DISPATCH_FILTER_BLOCK 1
+35
tools/perf/trace/beauty/include/uapi/linux/vhost.h
··· 235 235 */ 236 236 #define VHOST_VDPA_GET_VRING_SIZE _IOWR(VHOST_VIRTIO, 0x82, \ 237 237 struct vhost_vring_state) 238 + 239 + /* Extended features manipulation */ 240 + #define VHOST_GET_FEATURES_ARRAY _IOR(VHOST_VIRTIO, 0x83, \ 241 + struct vhost_features_array) 242 + #define VHOST_SET_FEATURES_ARRAY _IOW(VHOST_VIRTIO, 0x83, \ 243 + struct vhost_features_array) 244 + 245 + /* fork_owner values for vhost */ 246 + #define VHOST_FORK_OWNER_KTHREAD 0 247 + #define VHOST_FORK_OWNER_TASK 1 248 + 249 + /** 250 + * VHOST_SET_FORK_FROM_OWNER - Set the fork_owner flag for the vhost device. 251 + * This ioctl must be called before VHOST_SET_OWNER. 252 + * Only available when CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y 253 + * 254 + * @param fork_owner: An 8-bit value that determines the vhost thread mode 255 + * 256 + * When fork_owner is set to VHOST_FORK_OWNER_TASK (default value): 257 + * - Vhost will create vhost workers as tasks forked from the owner, 258 + * inheriting all of the owner's attributes. 259 + * 260 + * When fork_owner is set to VHOST_FORK_OWNER_KTHREAD: 261 + * - Vhost will create vhost workers as kernel threads. 262 + */ 263 + #define VHOST_SET_FORK_FROM_OWNER _IOW(VHOST_VIRTIO, 0x84, __u8) 264 + 265 + /** 266 + * VHOST_GET_FORK_OWNER - Get the current fork_owner flag for the vhost device. 267 + * Only available when CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y 268 + * 269 + * @return: An 8-bit value indicating the current thread mode. 270 + */ 271 + #define VHOST_GET_FORK_FROM_OWNER _IOR(VHOST_VIRTIO, 0x85, __u8) 272 + 238 273 #endif
+2 -2
tools/perf/util/build-id.c
··· 115 115 struct build_id bid = { .size = 0, }; 116 116 int ret; 117 117 118 - ret = filename__read_build_id(pathname, &bid); 118 + ret = filename__read_build_id(pathname, &bid, /*block=*/true); 119 119 if (ret < 0) 120 120 return ret; 121 121 ··· 841 841 int ret; 842 842 843 843 nsinfo__mountns_enter(nsi, &nsc); 844 - ret = filename__read_build_id(filename, bid); 844 + ret = filename__read_build_id(filename, bid, /*block=*/true); 845 845 nsinfo__mountns_exit(&nsc); 846 846 847 847 return ret;
+6 -2
tools/perf/util/debuginfo.c
··· 110 110 if (!dso) 111 111 goto out; 112 112 113 - /* Set the build id for DSO_BINARY_TYPE__BUILDID_DEBUGINFO */ 114 - if (is_regular_file(path) && filename__read_build_id(path, &bid) > 0) 113 + /* 114 + * Set the build id for DSO_BINARY_TYPE__BUILDID_DEBUGINFO. Don't block 115 + * in case the path isn't for a regular file. 116 + */ 117 + assert(!dso__has_build_id(dso)); 118 + if (filename__read_build_id(path, &bid, /*block=*/false) > 0) 115 119 dso__set_build_id(dso, &bid); 116 120 117 121 for (type = distro_dwarf_types;
+2 -2
tools/perf/util/dsos.c
··· 81 81 return 0; 82 82 } 83 83 nsinfo__mountns_enter(dso__nsinfo(dso), &nsc); 84 - if (filename__read_build_id(dso__long_name(dso), &bid) > 0) { 84 + if (filename__read_build_id(dso__long_name(dso), &bid, /*block=*/true) > 0) { 85 85 dso__set_build_id(dso, &bid); 86 86 args->have_build_id = true; 87 87 } else if (errno == ENOENT && dso__nsinfo(dso)) { 88 88 char *new_name = dso__filename_with_chroot(dso, dso__long_name(dso)); 89 89 90 - if (new_name && filename__read_build_id(new_name, &bid) > 0) { 90 + if (new_name && filename__read_build_id(new_name, &bid, /*block=*/true) > 0) { 91 91 dso__set_build_id(dso, &bid); 92 92 args->have_build_id = true; 93 93 }
+5 -4
tools/perf/util/symbol-elf.c
··· 902 902 903 903 #else // HAVE_LIBBFD_BUILDID_SUPPORT 904 904 905 - static int read_build_id(const char *filename, struct build_id *bid) 905 + static int read_build_id(const char *filename, struct build_id *bid, bool block) 906 906 { 907 907 size_t size = sizeof(bid->data); 908 908 int fd, err = -1; ··· 911 911 if (size < BUILD_ID_SIZE) 912 912 goto out; 913 913 914 - fd = open(filename, O_RDONLY); 914 + fd = open(filename, block ? O_RDONLY : (O_RDONLY | O_NONBLOCK)); 915 915 if (fd < 0) 916 916 goto out; 917 917 ··· 934 934 935 935 #endif // HAVE_LIBBFD_BUILDID_SUPPORT 936 936 937 - int filename__read_build_id(const char *filename, struct build_id *bid) 937 + int filename__read_build_id(const char *filename, struct build_id *bid, bool block) 938 938 { 939 939 struct kmod_path m = { .name = NULL, }; 940 940 char path[PATH_MAX]; ··· 958 958 } 959 959 close(fd); 960 960 filename = path; 961 + block = true; 961 962 } 962 963 963 - err = read_build_id(filename, bid); 964 + err = read_build_id(filename, bid, block); 964 965 965 966 if (m.comp) 966 967 unlink(filename);
+29 -30
tools/perf/util/symbol-minimal.c
··· 4 4 5 5 #include <errno.h> 6 6 #include <unistd.h> 7 - #include <stdio.h> 8 7 #include <fcntl.h> 9 8 #include <string.h> 10 9 #include <stdlib.h> ··· 85 86 /* 86 87 * Just try PT_NOTE header otherwise fails 87 88 */ 88 - int filename__read_build_id(const char *filename, struct build_id *bid) 89 + int filename__read_build_id(const char *filename, struct build_id *bid, bool block) 89 90 { 90 - FILE *fp; 91 - int ret = -1; 91 + int fd, ret = -1; 92 92 bool need_swap = false, elf32; 93 - u8 e_ident[EI_NIDENT]; 94 - int i; 95 93 union { 96 94 struct { 97 95 Elf32_Ehdr ehdr32; ··· 99 103 Elf64_Phdr *phdr64; 100 104 }; 101 105 } hdrs; 102 - void *phdr; 103 - size_t phdr_size; 104 - void *buf = NULL; 105 - size_t buf_size = 0; 106 + void *phdr, *buf = NULL; 107 + ssize_t phdr_size, ehdr_size, buf_size = 0; 106 108 107 - fp = fopen(filename, "r"); 108 - if (fp == NULL) 109 + fd = open(filename, block ? O_RDONLY : (O_RDONLY | O_NONBLOCK)); 110 + if (fd < 0) 109 111 return -1; 110 112 111 - if (fread(e_ident, sizeof(e_ident), 1, fp) != 1) 113 + if (read(fd, hdrs.ehdr32.e_ident, EI_NIDENT) != EI_NIDENT) 112 114 goto out; 113 115 114 - if (memcmp(e_ident, ELFMAG, SELFMAG) || 115 - e_ident[EI_VERSION] != EV_CURRENT) 116 + if (memcmp(hdrs.ehdr32.e_ident, ELFMAG, SELFMAG) || 117 + hdrs.ehdr32.e_ident[EI_VERSION] != EV_CURRENT) 116 118 goto out; 117 119 118 - need_swap = check_need_swap(e_ident[EI_DATA]); 119 - elf32 = e_ident[EI_CLASS] == ELFCLASS32; 120 + need_swap = check_need_swap(hdrs.ehdr32.e_ident[EI_DATA]); 121 + elf32 = hdrs.ehdr32.e_ident[EI_CLASS] == ELFCLASS32; 122 + ehdr_size = (elf32 ? sizeof(hdrs.ehdr32) : sizeof(hdrs.ehdr64)) - EI_NIDENT; 120 123 121 - if (fread(elf32 ? (void *)&hdrs.ehdr32 : (void *)&hdrs.ehdr64, 122 - elf32 ? sizeof(hdrs.ehdr32) : sizeof(hdrs.ehdr64), 123 - 1, fp) != 1) 124 + if (read(fd, 125 + (elf32 ? 
(void *)&hdrs.ehdr32 : (void *)&hdrs.ehdr64) + EI_NIDENT, 126 + ehdr_size) != ehdr_size) 124 127 goto out; 125 128 126 129 if (need_swap) { ··· 133 138 hdrs.ehdr64.e_phnum = bswap_16(hdrs.ehdr64.e_phnum); 134 139 } 135 140 } 136 - phdr_size = elf32 ? hdrs.ehdr32.e_phentsize * hdrs.ehdr32.e_phnum 137 - : hdrs.ehdr64.e_phentsize * hdrs.ehdr64.e_phnum; 141 + if ((elf32 && hdrs.ehdr32.e_phentsize != sizeof(Elf32_Phdr)) || 142 + (!elf32 && hdrs.ehdr64.e_phentsize != sizeof(Elf64_Phdr))) 143 + goto out; 144 + 145 + phdr_size = elf32 ? sizeof(Elf32_Phdr) * hdrs.ehdr32.e_phnum 146 + : sizeof(Elf64_Phdr) * hdrs.ehdr64.e_phnum; 138 147 phdr = malloc(phdr_size); 139 148 if (phdr == NULL) 140 149 goto out; 141 150 142 - fseek(fp, elf32 ? hdrs.ehdr32.e_phoff : hdrs.ehdr64.e_phoff, SEEK_SET); 143 - if (fread(phdr, phdr_size, 1, fp) != 1) 151 + lseek(fd, elf32 ? hdrs.ehdr32.e_phoff : hdrs.ehdr64.e_phoff, SEEK_SET); 152 + if (read(fd, phdr, phdr_size) != phdr_size) 144 153 goto out_free; 145 154 146 155 if (elf32) ··· 152 153 else 153 154 hdrs.phdr64 = phdr; 154 155 155 - for (i = 0; i < elf32 ? hdrs.ehdr32.e_phnum : hdrs.ehdr64.e_phnum; i++) { 156 - size_t p_filesz; 156 + for (int i = 0; i < (elf32 ? hdrs.ehdr32.e_phnum : hdrs.ehdr64.e_phnum); i++) { 157 + ssize_t p_filesz; 157 158 158 159 if (need_swap) { 159 160 if (elf32) { ··· 179 180 goto out_free; 180 181 buf = tmp; 181 182 } 182 - fseek(fp, elf32 ? hdrs.phdr32[i].p_offset : hdrs.phdr64[i].p_offset, SEEK_SET); 183 - if (fread(buf, p_filesz, 1, fp) != 1) 183 + lseek(fd, elf32 ? 
hdrs.phdr32[i].p_offset : hdrs.phdr64[i].p_offset, SEEK_SET); 184 + if (read(fd, buf, p_filesz) != p_filesz) 184 185 goto out_free; 185 186 186 187 ret = read_build_id(buf, p_filesz, bid, need_swap); ··· 193 194 free(buf); 194 195 free(phdr); 195 196 out: 196 - fclose(fp); 197 + close(fd); 197 198 return ret; 198 199 } 199 200 ··· 323 324 if (ret >= 0) 324 325 RC_CHK_ACCESS(dso)->is_64_bit = ret; 325 326 326 - if (filename__read_build_id(ss->name, &bid) > 0) 327 + if (filename__read_build_id(ss->name, &bid, /*block=*/true) > 0) 327 328 dso__set_build_id(dso, &bid); 328 329 return 0; 329 330 }
+4 -4
tools/perf/util/symbol.c
··· 1869 1869 1870 1870 /* 1871 1871 * Read the build id if possible. This is required for 1872 - * DSO_BINARY_TYPE__BUILDID_DEBUGINFO to work 1872 + * DSO_BINARY_TYPE__BUILDID_DEBUGINFO to work. Don't block in case path 1873 + * isn't for a regular file. 1873 1874 */ 1874 - if (!dso__has_build_id(dso) && 1875 - is_regular_file(dso__long_name(dso))) { 1875 + if (!dso__has_build_id(dso)) { 1876 1876 struct build_id bid = { .size = 0, }; 1877 1877 1878 1878 __symbol__join_symfs(name, PATH_MAX, dso__long_name(dso)); 1879 - if (filename__read_build_id(name, &bid) > 0) 1879 + if (filename__read_build_id(name, &bid, /*block=*/false) > 0) 1880 1880 dso__set_build_id(dso, &bid); 1881 1881 } 1882 1882
+1 -1
tools/perf/util/symbol.h
··· 140 140 141 141 enum dso_type dso__type_fd(int fd); 142 142 143 - int filename__read_build_id(const char *filename, struct build_id *id); 143 + int filename__read_build_id(const char *filename, struct build_id *id, bool block); 144 144 int sysfs__read_build_id(const char *filename, struct build_id *bid); 145 145 int modules__parse(const char *filename, void *arg, 146 146 int (*process_module)(void *arg, const char *name,
+1 -1
tools/perf/util/synthetic-events.c
··· 401 401 nsi = nsinfo__new(event->pid); 402 402 nsinfo__mountns_enter(nsi, &nc); 403 403 404 - rc = filename__read_build_id(event->filename, &bid) > 0 ? 0 : -1; 404 + rc = filename__read_build_id(event->filename, &bid, /*block=*/false) > 0 ? 0 : -1; 405 405 406 406 nsinfo__mountns_exit(&nc); 407 407 nsinfo__put(nsi);
+4 -3
tools/power/cpupower/man/cpupower-set.1
··· 81 81 .RE 82 82 83 83 .PP 84 - \-\-turbo\-boost, \-t 84 + \-\-turbo\-boost, \-\-boost, \-t 85 85 .RS 4 86 - This option is used to enable or disable the turbo boost feature on 87 - supported Intel and AMD processors. 86 + This option is used to enable or disable the boost feature on 87 + supported Intel and AMD processors, and other boost supported systems. 88 + (The --boost option is an alias for the --turbo-boost option) 88 89 89 90 This option takes as parameter either \fB1\fP to enable, or \fB0\fP to disable the feature. 90 91
+15 -1
tools/power/cpupower/utils/cpufreq-info.c
··· 128 128 /* ToDo: Make this more global */ 129 129 unsigned long pstates[MAX_HW_PSTATES] = {0,}; 130 130 131 - ret = cpufreq_has_boost_support(cpu, &support, &active, &b_states); 131 + ret = cpufreq_has_x86_boost_support(cpu, &support, &active, &b_states); 132 132 if (ret) { 133 133 printf(_("Error while evaluating Boost Capabilities" 134 134 " on CPU %d -- are you root?\n"), cpu); ··· 204 204 return 0; 205 205 } 206 206 207 + static int get_boost_mode_generic(unsigned int cpu) 208 + { 209 + bool active; 210 + 211 + if (!cpufreq_has_generic_boost_support(&active)) { 212 + printf(_(" boost state support:\n")); 213 + printf(_(" Active: %s\n"), active ? _("yes") : _("no")); 214 + } 215 + 216 + return 0; 217 + } 218 + 207 219 /* --boost / -b */ 208 220 209 221 static int get_boost_mode(unsigned int cpu) ··· 226 214 cpupower_cpu_info.vendor == X86_VENDOR_HYGON || 227 215 cpupower_cpu_info.vendor == X86_VENDOR_INTEL) 228 216 return get_boost_mode_x86(cpu); 217 + else 218 + get_boost_mode_generic(cpu); 229 219 230 220 freqs = cpufreq_get_boost_frequencies(cpu); 231 221 if (freqs) {
+3 -2
tools/power/cpupower/utils/cpupower-set.c
··· 21 21 {"epp", required_argument, NULL, 'e'}, 22 22 {"amd-pstate-mode", required_argument, NULL, 'm'}, 23 23 {"turbo-boost", required_argument, NULL, 't'}, 24 + {"boost", required_argument, NULL, 't'}, 24 25 { }, 25 26 }; 26 27 ··· 63 62 64 63 params.params = 0; 65 64 /* parameter parsing */ 66 - while ((ret = getopt_long(argc, argv, "b:e:m:", 67 - set_opts, NULL)) != -1) { 65 + while ((ret = getopt_long(argc, argv, "b:e:m:t:", 66 + set_opts, NULL)) != -1) { 68 67 switch (ret) { 69 68 case 'b': 70 69 if (params.perf_bias)
+7 -7
tools/power/cpupower/utils/helpers/helpers.h
··· 103 103 104 104 /* cpuid and cpuinfo helpers **************************/ 105 105 106 + int cpufreq_has_generic_boost_support(bool *active); 107 + int cpupower_set_turbo_boost(int turbo_boost); 108 + 106 109 /* X86 ONLY ****************************************/ 107 110 #if defined(__i386__) || defined(__x86_64__) 108 111 ··· 121 118 122 119 extern int cpupower_set_epp(unsigned int cpu, char *epp); 123 120 extern int cpupower_set_amd_pstate_mode(char *mode); 124 - extern int cpupower_set_turbo_boost(int turbo_boost); 125 121 126 122 /* Read/Write msr ****************************/ 127 123 ··· 141 139 142 140 /* AMD HW pstate decoding **************************/ 143 141 144 - extern int cpufreq_has_boost_support(unsigned int cpu, int *support, 145 - int *active, int * states); 142 + int cpufreq_has_x86_boost_support(unsigned int cpu, int *support, 143 + int *active, int *states); 146 144 147 145 /* AMD P-State stuff **************************/ 148 146 bool cpupower_amd_pstate_enabled(void); ··· 183 181 { return -1; }; 184 182 static inline int cpupower_set_amd_pstate_mode(char *mode) 185 183 { return -1; }; 186 - static inline int cpupower_set_turbo_boost(int turbo_boost) 187 - { return -1; }; 188 184 189 185 /* Read/Write msr ****************************/ 190 186 191 - static inline int cpufreq_has_boost_support(unsigned int cpu, int *support, 192 - int *active, int * states) 187 + static inline int cpufreq_has_x86_boost_support(unsigned int cpu, int *support, 188 + int *active, int *states) 193 189 { return -1; } 194 190 195 191 static inline bool cpupower_amd_pstate_enabled(void)
+54 -22
tools/power/cpupower/utils/helpers/misc.c
··· 8 8 #include "helpers/helpers.h"
9 9 #include "helpers/sysfs.h"
10 10 #include "cpufreq.h"
11 + #include "cpupower_intern.h"
11 12
12 13 #if defined(__i386__) || defined(__x86_64__)
13 14
14 - #include "cpupower_intern.h"
15 -
16 15 #define MSR_AMD_HWCR 0xc0010015
17 16
18 - int cpufreq_has_boost_support(unsigned int cpu, int *support, int *active,
19 - int *states)
17 + int cpufreq_has_x86_boost_support(unsigned int cpu, int *support, int *active,
18 + int *states)
20 19 {
21 20 int ret;
22 21 unsigned long long val;
··· 123 124 return 0;
124 125 }
125 126
126 - int cpupower_set_turbo_boost(int turbo_boost)
127 - {
128 - char path[SYSFS_PATH_MAX];
129 - char linebuf[2] = {};
130 -
131 - snprintf(path, sizeof(path), PATH_TO_CPU "cpufreq/boost");
132 -
133 - if (!is_valid_path(path))
134 - return -1;
135 -
136 - snprintf(linebuf, sizeof(linebuf), "%d", turbo_boost);
137 -
138 - if (cpupower_write_sysfs(path, linebuf, 2) <= 0)
139 - return -1;
140 -
141 - return 0;
142 - }
143 -
144 127 bool cpupower_amd_pstate_enabled(void)
145 128 {
146 129 char *driver = cpufreq_get_driver(0);
··· 140 159 }
141 160
142 161 #endif /* #if defined(__i386__) || defined(__x86_64__) */
162 +
163 + int cpufreq_has_generic_boost_support(bool *active)
164 + {
165 + char path[SYSFS_PATH_MAX];
166 + char linebuf[2] = {};
167 + unsigned long val;
168 + char *endp;
169 +
170 + snprintf(path, sizeof(path), PATH_TO_CPU "cpufreq/boost");
171 +
172 + if (!is_valid_path(path))
173 + return -EACCES;
174 +
175 + if (cpupower_read_sysfs(path, linebuf, 2) <= 0)
176 + return -EINVAL;
177 +
178 + val = strtoul(linebuf, &endp, 0);
179 + if (endp == linebuf || errno == ERANGE)
180 + return -EINVAL;
181 +
182 + switch (val) {
183 + case 0:
184 + *active = false;
185 + break;
186 + case 1:
187 + *active = true;
188 + break;
189 + default:
190 + return -EINVAL;
191 + }
192 +
193 + return 0;
194 + }
143 195
144 196 /* get_cpustate
145 197 *
··· 272 258 ((unsigned int)(speed % 1000) / 100));
273 259 }
274 260 }
261 + }
262 +
263 + int cpupower_set_turbo_boost(int turbo_boost)
264 + {
265 + char path[SYSFS_PATH_MAX];
266 + char linebuf[2] = {};
267 +
268 + snprintf(path, sizeof(path), PATH_TO_CPU "cpufreq/boost");
269 +
270 + if (!is_valid_path(path))
271 + return -1;
272 +
273 + snprintf(linebuf, sizeof(linebuf), "%d", turbo_boost);
274 +
275 + if (cpupower_write_sysfs(path, linebuf, 2) <= 0)
276 + return -1;
277 +
278 + return 0;
279 + }
+2
tools/scripts/syscall.tbl
··· 408 408 465 common listxattrat sys_listxattrat 409 409 466 common removexattrat sys_removexattrat 410 410 467 common open_tree_attr sys_open_tree_attr 411 + 468 common file_getattr sys_file_getattr 412 + 469 common file_setattr sys_file_setattr
+1
tools/testing/selftests/damon/Makefile
··· 4 4 TEST_GEN_FILES += access_memory access_memory_even 5 5 6 6 TEST_FILES = _damon_sysfs.py 7 + TEST_FILES += drgn_dump_damon_status.py 7 8 8 9 # functionality tests 9 10 TEST_PROGS += sysfs.sh
+261 -3
tools/testing/selftests/mm/mremap_test.c
··· 5 5 #define _GNU_SOURCE
6 6
7 7 #include <errno.h>
8 + #include <fcntl.h>
9 + #include <linux/userfaultfd.h>
8 10 #include <stdlib.h>
9 11 #include <stdio.h>
10 12 #include <string.h>
13 + #include <sys/ioctl.h>
11 14 #include <sys/mman.h>
15 + #include <syscall.h>
12 16 #include <time.h>
13 17 #include <stdbool.h>
14 18
··· 172 168
173 169 if (first_val <= start && second_val >= end) {
174 170 success = true;
171 + fflush(maps_fp);
175 172 break;
176 173 }
177 174 }
178 175
179 176 return success;
177 + }
178 +
179 + /* Check if [ptr, ptr + size) mapped in /proc/self/maps. */
180 + static bool is_ptr_mapped(FILE *maps_fp, void *ptr, unsigned long size)
181 + {
182 + unsigned long start = (unsigned long)ptr;
183 + unsigned long end = start + size;
184 +
185 + return is_range_mapped(maps_fp, start, end);
180 186 }
181 187
182 188 /*
··· 747 733 dont_unmap ? " [dontunnmap]" : "");
748 734 }
749 735
736 + #ifdef __NR_userfaultfd
737 + static void mremap_move_multi_invalid_vmas(FILE *maps_fp,
738 + unsigned long page_size)
739 + {
740 + char *test_name = "mremap move multiple invalid vmas";
741 + const size_t size = 10 * page_size;
742 + bool success = true;
743 + char *ptr, *tgt_ptr;
744 + int uffd, err, i;
745 + void *res;
746 + struct uffdio_api api = {
747 + .api = UFFD_API,
748 + .features = UFFD_EVENT_PAGEFAULT,
749 + };
750 +
751 + uffd = syscall(__NR_userfaultfd, O_NONBLOCK);
752 + if (uffd == -1) {
753 + err = errno;
754 + perror("userfaultfd");
755 + if (err == EPERM) {
756 + ksft_test_result_skip("%s - missing uffd", test_name);
757 + return;
758 + }
759 + success = false;
760 + goto out;
761 + }
762 + if (ioctl(uffd, UFFDIO_API, &api)) {
763 + perror("ioctl UFFDIO_API");
764 + success = false;
765 + goto out_close_uffd;
766 + }
767 +
768 + ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
769 + MAP_PRIVATE | MAP_ANON, -1, 0);
770 + if (ptr == MAP_FAILED) {
771 + perror("mmap");
772 + success = false;
773 + goto out_close_uffd;
774 + }
775 +
776 + tgt_ptr = mmap(NULL, size, PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0);
777 + if (tgt_ptr == MAP_FAILED) {
778 + perror("mmap");
779 + success = false;
780 + goto out_close_uffd;
781 + }
782 + if (munmap(tgt_ptr, size)) {
783 + perror("munmap");
784 + success = false;
785 + goto out_unmap;
786 + }
787 +
788 + /*
789 + * Unmap so we end up with:
790 + *
791 + * 0 2 4 6 8 10 offset in buffer
792 + * |*| |*| |*| |*| |*|
793 + * |*| |*| |*| |*| |*|
794 + *
795 + * Additionally, register each with UFFD.
796 + */
797 + for (i = 0; i < 10; i += 2) {
798 + void *unmap_ptr = &ptr[(i + 1) * page_size];
799 + unsigned long start = (unsigned long)&ptr[i * page_size];
800 + struct uffdio_register reg = {
801 + .range = {
802 + .start = start,
803 + .len = page_size,
804 + },
805 + .mode = UFFDIO_REGISTER_MODE_MISSING,
806 + };
807 +
808 + if (ioctl(uffd, UFFDIO_REGISTER, &reg) == -1) {
809 + perror("ioctl UFFDIO_REGISTER");
810 + success = false;
811 + goto out_unmap;
812 + }
813 + if (munmap(unmap_ptr, page_size)) {
814 + perror("munmap");
815 + success = false;
816 + goto out_unmap;
817 + }
818 + }
819 +
820 + /*
821 + * Now try to move the entire range which is invalid for multi VMA move.
822 + *
823 + * This will fail, and no VMA should be moved, as we check this ahead of
824 + * time.
825 + */
826 + res = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr);
827 + err = errno;
828 + if (res != MAP_FAILED) {
829 + fprintf(stderr, "mremap() succeeded for multi VMA uffd armed\n");
830 + success = false;
831 + goto out_unmap;
832 + }
833 + if (err != EFAULT) {
834 + errno = err;
835 + perror("mrmeap() unexpected error");
836 + success = false;
837 + goto out_unmap;
838 + }
839 + if (is_ptr_mapped(maps_fp, tgt_ptr, page_size)) {
840 + fprintf(stderr,
841 + "Invalid uffd-armed VMA at start of multi range moved\n");
842 + success = false;
843 + goto out_unmap;
844 + }
845 +
846 + /*
847 + * Now try to move a single VMA, this should succeed as not multi VMA
848 + * move.
849 + */ 850 + res = mremap(ptr, page_size, page_size, 851 + MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr); 852 + if (res == MAP_FAILED) { 853 + perror("mremap single invalid-multi VMA"); 854 + success = false; 855 + goto out_unmap; 856 + } 857 + 858 + /* 859 + * Unmap the VMA, and remap a non-uffd registered (therefore, multi VMA 860 + * move valid) VMA at the start of ptr range. 861 + */ 862 + if (munmap(tgt_ptr, page_size)) { 863 + perror("munmap"); 864 + success = false; 865 + goto out_unmap; 866 + } 867 + res = mmap(ptr, page_size, PROT_READ | PROT_WRITE, 868 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 869 + if (res == MAP_FAILED) { 870 + perror("mmap"); 871 + success = false; 872 + goto out_unmap; 873 + } 874 + 875 + /* 876 + * Now try to move the entire range, we should succeed in moving the 877 + * first VMA, but no others, and report a failure. 878 + */ 879 + res = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr); 880 + err = errno; 881 + if (res != MAP_FAILED) { 882 + fprintf(stderr, "mremap() succeeded for multi VMA uffd armed\n"); 883 + success = false; 884 + goto out_unmap; 885 + } 886 + if (err != EFAULT) { 887 + errno = err; 888 + perror("mrmeap() unexpected error"); 889 + success = false; 890 + goto out_unmap; 891 + } 892 + if (!is_ptr_mapped(maps_fp, tgt_ptr, page_size)) { 893 + fprintf(stderr, "Valid VMA not moved\n"); 894 + success = false; 895 + goto out_unmap; 896 + } 897 + 898 + /* 899 + * Unmap the VMA, and map valid VMA at start of ptr range, and replace 900 + * all existing multi-move invalid VMAs, except the last, with valid 901 + * multi-move VMAs. 
902 + */ 903 + if (munmap(tgt_ptr, page_size)) { 904 + perror("munmap"); 905 + success = false; 906 + goto out_unmap; 907 + } 908 + if (munmap(ptr, size - 2 * page_size)) { 909 + perror("munmap"); 910 + success = false; 911 + goto out_unmap; 912 + } 913 + for (i = 0; i < 8; i += 2) { 914 + res = mmap(&ptr[i * page_size], page_size, 915 + PROT_READ | PROT_WRITE, 916 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 917 + if (res == MAP_FAILED) { 918 + perror("mmap"); 919 + success = false; 920 + goto out_unmap; 921 + } 922 + } 923 + 924 + /* 925 + * Now try to move the entire range, we should succeed in moving all but 926 + * the last VMA, and report a failure. 927 + */ 928 + res = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr); 929 + err = errno; 930 + if (res != MAP_FAILED) { 931 + fprintf(stderr, "mremap() succeeded for multi VMA uffd armed\n"); 932 + success = false; 933 + goto out_unmap; 934 + } 935 + if (err != EFAULT) { 936 + errno = err; 937 + perror("mrmeap() unexpected error"); 938 + success = false; 939 + goto out_unmap; 940 + } 941 + 942 + for (i = 0; i < 10; i += 2) { 943 + bool is_mapped = is_ptr_mapped(maps_fp, 944 + &tgt_ptr[i * page_size], page_size); 945 + 946 + if (i < 8 && !is_mapped) { 947 + fprintf(stderr, "Valid VMA not moved at %d\n", i); 948 + success = false; 949 + goto out_unmap; 950 + } else if (i == 8 && is_mapped) { 951 + fprintf(stderr, "Invalid VMA moved at %d\n", i); 952 + success = false; 953 + goto out_unmap; 954 + } 955 + } 956 + 957 + out_unmap: 958 + if (munmap(tgt_ptr, size)) 959 + perror("munmap tgt"); 960 + if (munmap(ptr, size)) 961 + perror("munmap src"); 962 + out_close_uffd: 963 + close(uffd); 964 + out: 965 + if (success) 966 + ksft_test_result_pass("%s\n", test_name); 967 + else 968 + ksft_test_result_fail("%s\n", test_name); 969 + } 970 + #else 971 + static void mremap_move_multi_invalid_vmas(FILE *maps_fp, unsigned long page_size) 972 + { 973 + char *test_name = "mremap move multiple invalid vmas"; 974 + 
975 + ksft_test_result_skip("%s - missing uffd", test_name); 976 + } 977 + #endif /* __NR_userfaultfd */ 978 + 750 979 /* Returns the time taken for the remap on success else returns -1. */ 751 980 static long long remap_region(struct config c, unsigned int threshold_mb, 752 981 char *rand_addr) ··· 1331 1074 char *rand_addr; 1332 1075 size_t rand_size; 1333 1076 int num_expand_tests = 2; 1334 - int num_misc_tests = 8; 1077 + int num_misc_tests = 9; 1335 1078 struct test test_cases[MAX_TEST] = {}; 1336 1079 struct test perf_test_cases[MAX_PERF_TEST]; 1337 1080 int page_size; ··· 1454 1197 mremap_expand_merge(maps_fp, page_size); 1455 1198 mremap_expand_merge_offset(maps_fp, page_size); 1456 1199 1457 - fclose(maps_fp); 1458 - 1459 1200 mremap_move_within_range(pattern_seed, rand_addr); 1460 1201 mremap_move_1mb_from_start(pattern_seed, rand_addr); 1461 1202 mremap_shrink_multiple_vmas(page_size, /* inplace= */true); ··· 1462 1207 mremap_move_multiple_vmas(pattern_seed, page_size, /* dontunmap= */ true); 1463 1208 mremap_move_multiple_vmas_split(pattern_seed, page_size, /* dontunmap= */ false); 1464 1209 mremap_move_multiple_vmas_split(pattern_seed, page_size, /* dontunmap= */ true); 1210 + mremap_move_multi_invalid_vmas(maps_fp, page_size); 1211 + 1212 + fclose(maps_fp); 1465 1213 1466 1214 if (run_perf_tests) { 1467 1215 ksft_print_msg("\n%s\n",
-1
tools/testing/selftests/sched_ext/hotplug.c
··· 6 6 #include <bpf/bpf.h> 7 7 #include <sched.h> 8 8 #include <scx/common.h> 9 - #include <sched.h> 10 9 #include <sys/wait.h> 11 10 #include <unistd.h> 12 11
+2 -2
tools/testing/selftests/ublk/kublk.c
··· 1400 1400 1401 1401 if (!((1ULL << i) & features)) 1402 1402 continue; 1403 - if (i < sizeof(feat_map) / sizeof(feat_map[0])) 1403 + if (i < ARRAY_SIZE(feat_map)) 1404 1404 feat = feat_map[i]; 1405 1405 else 1406 1406 feat = "unknown"; ··· 1477 1477 printf("\tdefault: nr_queues=2(max 32), depth=128(max 1024), dev_id=-1(auto allocation)\n"); 1478 1478 printf("\tdefault: nthreads=nr_queues"); 1479 1479 1480 - for (i = 0; i < sizeof(tgt_ops_list) / sizeof(tgt_ops_list[0]); i++) { 1480 + for (i = 0; i < ARRAY_SIZE(tgt_ops_list); i++) { 1481 1481 const struct ublk_tgt_ops *ops = tgt_ops_list[i]; 1482 1482 1483 1483 if (ops->usage)
+4
tools/testing/shared/linux/idr.h
··· 1 + /* Avoid duplicate definitions due to system headers. */ 2 + #ifdef __CONCAT 3 + #undef __CONCAT 4 + #endif 1 5 #include "../../../../include/linux/idr.h"
+8
tools/tracing/latency/Makefile.config
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 + include $(srctree)/tools/scripts/utilities.mak 4 + 3 5 STOP_ERROR := 6 + 7 + ifndef ($(NO_LIBTRACEEVENT),1) 8 + ifeq ($(call get-executable,$(PKG_CONFIG)),) 9 + $(error Error: $(PKG_CONFIG) needed by libtraceevent/libtracefs is missing on this system, please install it) 10 + endif 11 + endif 4 12 5 13 define lib_setup 6 14 $(eval LIB_INCLUDES += $(shell sh -c "$(PKG_CONFIG) --cflags lib$(1)"))
+8
tools/tracing/rtla/Makefile.config
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 + include $(srctree)/tools/scripts/utilities.mak 4 + 3 5 STOP_ERROR := 4 6 5 7 LIBTRACEEVENT_MIN_VERSION = 1.5 6 8 LIBTRACEFS_MIN_VERSION = 1.6 9 + 10 + ifndef ($(NO_LIBTRACEEVENT),1) 11 + ifeq ($(call get-executable,$(PKG_CONFIG)),) 12 + $(error Error: $(PKG_CONFIG) needed by libtraceevent/libtracefs is missing on this system, please install it) 13 + endif 14 + endif 7 15 8 16 define lib_setup 9 17 $(eval LIB_INCLUDES += $(shell sh -c "$(PKG_CONFIG) --cflags lib$(1)"))