Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge ra.kernel.org:/pub/scm/linux/kernel/git/netdev/net

Some of the devlink bits were tricky, but I think I got it right.

Signed-off-by: David S. Miller <davem@davemloft.net>

+1620 -784
+2
.mailmap
···
 Alexander Lobakin <alobakin@pm.me> <alobakin@dlink.ru>
 Alexander Lobakin <alobakin@pm.me> <alobakin@marvell.com>
 Alexander Lobakin <alobakin@pm.me> <bloodyreaper@yandex.ru>
+Alexander Mikhalitsyn <alexander@mihalicyn.com> <alexander.mikhalitsyn@virtuozzo.com>
+Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@canonical.com>
 Alexandre Belloni <alexandre.belloni@bootlin.com> <alexandre.belloni@free-electrons.com>
 Alexei Starovoitov <ast@kernel.org> <alexei.starovoitov@gmail.com>
 Alexei Starovoitov <ast@kernel.org> <ast@fb.com>
+92
Documentation/admin-guide/hw-vuln/cross-thread-rsb.rst
···
+
+.. SPDX-License-Identifier: GPL-2.0
+
+Cross-Thread Return Address Predictions
+=======================================
+
+Certain AMD and Hygon processors are subject to a cross-thread return address
+predictions vulnerability. When running in SMT mode and one sibling thread
+transitions out of C0 state, the other sibling thread could use return target
+predictions from the sibling thread that transitioned out of C0.
+
+The Spectre v2 mitigations protect the Linux kernel, as it fills the return
+address prediction entries with safe targets when context switching to the idle
+thread. However, KVM does allow a VMM to prevent exiting guest mode when
+transitioning out of C0. This could result in a guest-controlled return target
+being consumed by the sibling thread.
+
+Affected processors
+-------------------
+
+The following CPUs are vulnerable:
+
+- AMD Family 17h processors
+- Hygon Family 18h processors
+
+Related CVEs
+------------
+
+The following CVE entry is related to this issue:
+
+   ==============   =======================================
+   CVE-2022-27672   Cross-Thread Return Address Predictions
+   ==============   =======================================
+
+Problem
+-------
+
+Affected SMT-capable processors support 1T and 2T modes of execution when SMT
+is enabled. In 2T mode, both threads in a core are executing code. For the
+processor core to enter 1T mode, it is required that one of the threads
+requests to transition out of the C0 state. This can be communicated with the
+HLT instruction or with an MWAIT instruction that requests non-C0.
+When the thread re-enters the C0 state, the processor transitions back
+to 2T mode, assuming the other thread is also still in C0 state.
+
+In affected processors, the return address predictor (RAP) is partitioned
+depending on the SMT mode. For instance, in 2T mode each thread uses a private
+16-entry RAP, but in 1T mode, the active thread uses a 32-entry RAP. Upon
+transition between 1T/2T mode, the RAP contents are not modified but the RAP
+pointers (which control the next return target to use for predictions) may
+change. This behavior may result in return targets from one SMT thread being
+used by RET predictions in the sibling thread following a 1T/2T switch. In
+particular, a RET instruction executed immediately after a transition to 1T may
+use a return target from the thread that just became idle. In theory, this
+could lead to information disclosure if the return targets used do not come
+from trustworthy code.
+
+Attack scenarios
+----------------
+
+An attack can be mounted on affected processors by performing a series of CALL
+instructions with targeted return locations and then transitioning out of C0
+state.
+
+Mitigation mechanism
+--------------------
+
+Before entering idle state, the kernel context switches to the idle thread. The
+context switch fills the RAP entries (referred to as the RSB in Linux) with safe
+targets by performing a sequence of CALL instructions.
+
+Prevent a guest VM from directly putting the processor into an idle state by
+intercepting HLT and MWAIT instructions.
+
+Both mitigations are required to fully address this issue.
+
+Mitigation control on the kernel command line
+---------------------------------------------
+
+Use existing Spectre v2 mitigations that will fill the RSB on context switch.
+
+Mitigation control for KVM - module parameter
+---------------------------------------------
+
+By default, the KVM hypervisor mitigates this issue by intercepting guest
+attempts to transition out of C0. A VMM can use the KVM_CAP_X86_DISABLE_EXITS
+capability to override those interceptions, but since this is not common, the
+mitigation that covers this path is not enabled by default.
+
+The mitigation for the KVM_CAP_X86_DISABLE_EXITS capability can be turned on
+using the boolean module parameter mitigate_smt_rsb, e.g.:
+    kvm.mitigate_smt_rsb=1
+1
Documentation/admin-guide/hw-vuln/index.rst
···
    core-scheduling.rst
    l1d_flush.rst
    processor_mmio_stale_data.rst
+   cross-thread-rsb.rst
+8 -10
MAINTAINERS
···
 
 PCI ENDPOINT SUBSYSTEM
 M: Lorenzo Pieralisi <lpieralisi@kernel.org>
-R: Krzysztof Wilczyński <kw@linux.com>
+M: Krzysztof Wilczyński <kw@linux.com>
 R: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
 R: Kishon Vijay Abraham I <kishon@kernel.org>
 L: linux-pci@vger.kernel.org
···
 Q: https://patchwork.kernel.org/project/linux-pci/list/
 B: https://bugzilla.kernel.org
 C: irc://irc.oftc.net/linux-pci
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git
 F: Documentation/PCI/endpoint/*
 F: Documentation/misc-devices/pci-endpoint-test.rst
 F: drivers/misc/pci_endpoint_test.c
···
 Q: https://patchwork.kernel.org/project/linux-pci/list/
 B: https://bugzilla.kernel.org
 C: irc://irc.oftc.net/linux-pci
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git
 F: Documentation/driver-api/pci/p2pdma.rst
 F: drivers/pci/p2pdma.c
 F: include/linux/pci-p2pdma.h
···
 
 PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS
 M: Lorenzo Pieralisi <lpieralisi@kernel.org>
+M: Krzysztof Wilczyński <kw@linux.com>
 R: Rob Herring <robh@kernel.org>
-R: Krzysztof Wilczyński <kw@linux.com>
 L: linux-pci@vger.kernel.org
 S: Supported
 Q: https://patchwork.kernel.org/project/linux-pci/list/
 B: https://bugzilla.kernel.org
 C: irc://irc.oftc.net/linux-pci
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git
 F: Documentation/devicetree/bindings/pci/
 F: drivers/pci/controller/
 F: drivers/pci/pci-bridge-emul.c
···
 Q: https://patchwork.kernel.org/project/linux-pci/list/
 B: https://bugzilla.kernel.org
 C: irc://irc.oftc.net/linux-pci
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git
 F: Documentation/PCI/
 F: Documentation/devicetree/bindings/pci/
 F: arch/x86/kernel/early-quirks.c
···
 SUPERH
 M: Yoshinori Sato <ysato@users.sourceforge.jp>
 M: Rich Felker <dalias@libc.org>
+M: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
 L: linux-sh@vger.kernel.org
 S: Maintained
 Q: http://patchwork.kernel.org/project/linux-sh/list/
···
 F: drivers/platform/x86/system76_acpi.c
 
 SYSV FILESYSTEM
-M: Christoph Hellwig <hch@infradead.org>
-S: Maintained
+S: Orphan
 F: Documentation/filesystems/sysv-fs.rst
 F: fs/sysv/
 F: include/linux/sysv_fs.h
···
 T: git git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git
 
 UUID HELPERS
-M: Christoph Hellwig <hch@lst.de>
 R: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
 L: linux-kernel@vger.kernel.org
 S: Maintained
-T: git git://git.infradead.org/users/hch/uuid.git
 F: include/linux/uuid.h
 F: include/uapi/linux/uuid.h
 F: lib/test_uuid.c
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 2
 SUBLEVEL = 0
-EXTRAVERSION = -rc7
+EXTRAVERSION = -rc8
 NAME = Hurr durr I'ma ninja sloth
 
 # *DOCUMENTATION*
+1
arch/arm/boot/dts/rk3288.dtsi
···
     clock-names = "dp", "pclk";
     phys = <&edp_phy>;
     phy-names = "dp";
+    power-domains = <&power RK3288_PD_VIO>;
     resets = <&cru SRST_EDP>;
     reset-names = "dp";
     rockchip,grf = <&grf>;
+1 -1
arch/arm/boot/dts/stihxxx-b2120.dtsi
···
     tsin-num = <0>;
     serial-not-parallel;
     i2c-bus = <&ssc2>;
-    reset-gpios = <&pio15 4 GPIO_ACTIVE_HIGH>;
+    reset-gpios = <&pio15 4 GPIO_ACTIVE_LOW>;
     dvb-card = <STV0367_TDA18212_NIMA_1>;
 };
};
+2 -2
arch/arm64/boot/dts/amlogic/meson-axg.dtsi
···
 sd_emmc_b: sd@5000 {
     compatible = "amlogic,meson-axg-mmc";
     reg = <0x0 0x5000 0x0 0x800>;
-    interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>;
+    interrupts = <GIC_SPI 217 IRQ_TYPE_LEVEL_HIGH>;
     status = "disabled";
     clocks = <&clkc CLKID_SD_EMMC_B>,
              <&clkc CLKID_SD_EMMC_B_CLK0>,
···
 sd_emmc_c: mmc@7000 {
     compatible = "amlogic,meson-axg-mmc";
     reg = <0x0 0x7000 0x0 0x800>;
-    interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>;
+    interrupts = <GIC_SPI 218 IRQ_TYPE_LEVEL_HIGH>;
     status = "disabled";
     clocks = <&clkc CLKID_SD_EMMC_C>,
              <&clkc CLKID_SD_EMMC_C_CLK0>,
+3 -3
arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
···
 sd_emmc_a: sd@ffe03000 {
     compatible = "amlogic,meson-axg-mmc";
     reg = <0x0 0xffe03000 0x0 0x800>;
-    interrupts = <GIC_SPI 189 IRQ_TYPE_EDGE_RISING>;
+    interrupts = <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>;
     status = "disabled";
     clocks = <&clkc CLKID_SD_EMMC_A>,
              <&clkc CLKID_SD_EMMC_A_CLK0>,
···
 sd_emmc_b: sd@ffe05000 {
     compatible = "amlogic,meson-axg-mmc";
     reg = <0x0 0xffe05000 0x0 0x800>;
-    interrupts = <GIC_SPI 190 IRQ_TYPE_EDGE_RISING>;
+    interrupts = <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>;
     status = "disabled";
     clocks = <&clkc CLKID_SD_EMMC_B>,
              <&clkc CLKID_SD_EMMC_B_CLK0>,
···
 sd_emmc_c: mmc@ffe07000 {
     compatible = "amlogic,meson-axg-mmc";
     reg = <0x0 0xffe07000 0x0 0x800>;
-    interrupts = <GIC_SPI 191 IRQ_TYPE_EDGE_RISING>;
+    interrupts = <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>;
     status = "disabled";
     clocks = <&clkc CLKID_SD_EMMC_C>,
              <&clkc CLKID_SD_EMMC_C_CLK0>,
+3 -3
arch/arm64/boot/dts/amlogic/meson-gx.dtsi
···
 sd_emmc_a: mmc@70000 {
     compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
     reg = <0x0 0x70000 0x0 0x800>;
-    interrupts = <GIC_SPI 216 IRQ_TYPE_EDGE_RISING>;
+    interrupts = <GIC_SPI 216 IRQ_TYPE_LEVEL_HIGH>;
     status = "disabled";
 };
 
 sd_emmc_b: mmc@72000 {
     compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
     reg = <0x0 0x72000 0x0 0x800>;
-    interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>;
+    interrupts = <GIC_SPI 217 IRQ_TYPE_LEVEL_HIGH>;
     status = "disabled";
 };
 
 sd_emmc_c: mmc@74000 {
     compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
     reg = <0x0 0x74000 0x0 0x800>;
-    interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>;
+    interrupts = <GIC_SPI 218 IRQ_TYPE_LEVEL_HIGH>;
     status = "disabled";
 };
};
+2 -2
arch/arm64/boot/dts/mediatek/mt8195.dtsi
···
 };
 
 vdosys0: syscon@1c01a000 {
-    compatible = "mediatek,mt8195-mmsys", "syscon";
+    compatible = "mediatek,mt8195-vdosys0", "mediatek,mt8195-mmsys", "syscon";
     reg = <0 0x1c01a000 0 0x1000>;
     mboxes = <&gce0 0 CMDQ_THR_PRIO_4>;
     #clock-cells = <1>;
···
 };
 
 vdosys1: syscon@1c100000 {
-    compatible = "mediatek,mt8195-mmsys", "syscon";
+    compatible = "mediatek,mt8195-vdosys1", "syscon";
     reg = <0 0x1c100000 0 0x1000>;
     #clock-cells = <1>;
 };
-2
arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
···
     linux,default-trigger = "heartbeat";
     gpios = <&rk805 1 GPIO_ACTIVE_LOW>;
     default-state = "on";
-    mode = <0x23>;
 };
 
 user_led: led-1 {
···
     linux,default-trigger = "mmc1";
     gpios = <&rk805 0 GPIO_ACTIVE_LOW>;
     default-state = "off";
-    mode = <0x05>;
 };
 };
};
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-op1-opp.dtsi
···
     };
 };
 
-dmc_opp_table: dmc_opp_table {
+dmc_opp_table: opp-table-3 {
     compatible = "operating-points-v2";
 
     opp00 {
+7
arch/arm64/boot/dts/rockchip/rk3399-pinephone-pro.dts
···
     };
 };
 
+&cpu_alert0 {
+    temperature = <65000>;
+};
+&cpu_alert1 {
+    temperature = <68000>;
+};
+
 &cpu_l0 {
     cpu-supply = <&vdd_cpu_l>;
 };
+2 -4
arch/arm64/boot/dts/rockchip/rk3399.dtsi
···
     clocks = <&cru HCLK_M_CRYPTO0>, <&cru HCLK_S_CRYPTO0>, <&cru SCLK_CRYPTO0>;
     clock-names = "hclk_master", "hclk_slave", "sclk";
     resets = <&cru SRST_CRYPTO0>, <&cru SRST_CRYPTO0_S>, <&cru SRST_CRYPTO0_M>;
-    reset-names = "master", "lave", "crypto";
+    reset-names = "master", "slave", "crypto-rst";
 };
 
 crypto1: crypto@ff8b8000 {
···
     clocks = <&cru HCLK_M_CRYPTO1>, <&cru HCLK_S_CRYPTO1>, <&cru SCLK_CRYPTO1>;
     clock-names = "hclk_master", "hclk_slave", "sclk";
     resets = <&cru SRST_CRYPTO1>, <&cru SRST_CRYPTO1_S>, <&cru SRST_CRYPTO1_M>;
-    reset-names = "master", "slave", "crypto";
+    reset-names = "master", "slave", "crypto-rst";
 };
 
 i2c1: i2c@ff110000 {
···
 pcfg_input_pull_up: pcfg-input-pull-up {
     input-enable;
     bias-pull-up;
-    drive-strength = <2>;
 };
 
 pcfg_input_pull_down: pcfg-input-pull-down {
     input-enable;
     bias-pull-down;
-    drive-strength = <2>;
 };
 
 clock {
+11
arch/arm64/boot/dts/rockchip/rk3566-box-demo.dts
···
     };
 };
 
+&pmu_io_domains {
+    pmuio2-supply = <&vcc_3v3>;
+    vccio1-supply = <&vcc_3v3>;
+    vccio3-supply = <&vcc_3v3>;
+    vccio4-supply = <&vcca_1v8>;
+    vccio5-supply = <&vcc_3v3>;
+    vccio6-supply = <&vcca_1v8>;
+    vccio7-supply = <&vcc_3v3>;
+    status = "okay";
+};
+
 &pwm0 {
     status = "okay";
 };
+3 -2
arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
···
 };
 
 &i2s1_8ch {
+    pinctrl-names = "default";
+    pinctrl-0 = <&i2s1m0_sclktx &i2s1m0_lrcktx &i2s1m0_sdi0 &i2s1m0_sdo0>;
     rockchip,trcm-sync-tx-only;
     status = "okay";
 };
···
     disable-wp;
     pinctrl-names = "default";
     pinctrl-0 = <&sdmmc0_bus4 &sdmmc0_clk &sdmmc0_cmd &sdmmc0_det>;
-    sd-uhs-sdr104;
+    sd-uhs-sdr50;
     vmmc-supply = <&vcc3v3_sd>;
     vqmmc-supply = <&vccio_sd>;
     status = "okay";
 };
 
 &sdmmc2 {
-    supports-sdio;
     bus-width = <4>;
     disable-wp;
     cap-sd-highspeed;
+1
arch/arm64/boot/dts/rockchip/rk356x.dtsi
···
     clock-names = "aclk_mst", "aclk_slv",
                   "aclk_dbi", "pclk", "aux";
     device_type = "pci";
+    #interrupt-cells = <1>;
     interrupt-map-mask = <0 0 0 7>;
     interrupt-map = <0 0 0 1 &pcie_intc 0>,
                     <0 0 0 2 &pcie_intc 1>,
-1
arch/powerpc/Kconfig
···
     select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
     select ARCH_WANT_LD_ORPHAN_WARN
     select ARCH_WANTS_MODULES_DATA_IN_VMALLOC if PPC_BOOK3S_32 || PPC_8xx
-    select ARCH_WANTS_NO_INSTR
     select ARCH_WEAK_RELEASE_ACQUIRE
     select BINFMT_ELF
     select BUILDTIME_TABLE_SORT
+4 -2
arch/powerpc/kernel/interrupt.c
···
  */
 static notrace __always_inline bool prep_irq_for_enabled_exit(bool restartable)
 {
+    bool must_hard_disable = (exit_must_hard_disable() || !restartable);
+
     /* This must be done with RI=1 because tracing may touch vmaps */
     trace_hardirqs_on();
 
-    if (exit_must_hard_disable() || !restartable)
+    if (must_hard_disable)
         __hard_EE_RI_disable();
 
 #ifdef CONFIG_PPC64
     /* This pattern matches prep_irq_for_idle */
     if (unlikely(lazy_irq_pending_nocheck())) {
-        if (exit_must_hard_disable() || !restartable) {
+        if (must_hard_disable) {
             local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
             __hard_RI_enable();
         }
+1
arch/powerpc/kexec/file_load_64.c
···
 #include <asm/firmware.h>
 #include <asm/kexec_ranges.h>
 #include <asm/crashdump-ppc64.h>
+#include <asm/mmzone.h>
 #include <asm/prom.h>
 
 struct umem_info {
+4
arch/riscv/include/asm/pgtable.h
···
     page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
     return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
 }
+
+#define pmdp_collapse_flush pmdp_collapse_flush
+extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+                                 unsigned long address, pmd_t *pmdp);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 /*
+5 -3
arch/riscv/kernel/probes/kprobes.c
···
 
 int __kprobes arch_prepare_kprobe(struct kprobe *p)
 {
-    unsigned long probe_addr = (unsigned long)p->addr;
+    u16 *insn = (u16 *)p->addr;
 
-    if (probe_addr & 0x1)
+    if ((unsigned long)insn & 0x1)
         return -EILSEQ;
 
     if (!arch_check_kprobe(p))
         return -EILSEQ;
 
     /* copy instruction */
-    p->opcode = *p->addr;
+    p->opcode = (kprobe_opcode_t)(*insn++);
+    if (GET_INSN_LENGTH(p->opcode) == 4)
+        p->opcode |= (kprobe_opcode_t)(*insn) << 16;
 
     /* decode instruction */
     switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
+2 -1
arch/riscv/kernel/stacktrace.c
···
         fp = (unsigned long)__builtin_frame_address(0);
         sp = current_stack_pointer;
         pc = (unsigned long)walk_stackframe;
+        level = -1;
     } else {
         /* task blocked in __switch_to */
         fp = task->thread.s[0];
···
         unsigned long low, high;
         struct stackframe *frame;
 
-        if (unlikely(!__kernel_text_address(pc) || (level++ >= 1 && !fn(arg, pc))))
+        if (unlikely(!__kernel_text_address(pc) || (level++ >= 0 && !fn(arg, pc))))
             break;
 
         /* Validate frame pointer */
+3 -1
arch/riscv/mm/cacheflush.c
···
     if (PageHuge(page))
         page = compound_head(page);
 
-    if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+    if (!test_bit(PG_dcache_clean, &page->flags)) {
         flush_icache_all();
+        set_bit(PG_dcache_clean, &page->flags);
+    }
 }
 #endif /* CONFIG_MMU */
 
+20
arch/riscv/mm/pgtable.c
···
 }
 
 #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+                          unsigned long address, pmd_t *pmdp)
+{
+    pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
+
+    VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+    VM_BUG_ON(pmd_trans_huge(*pmdp));
+    /*
+     * When leaf PTE entries (regular pages) are collapsed into a leaf
+     * PMD entry (huge page), a valid non-leaf PTE is converted into a
+     * valid leaf PTE at the level 1 page table.  Since the sfence.vma
+     * forms that specify an address only apply to leaf PTEs, we need a
+     * global flush here.  collapse_huge_page() assumes these flushes are
+     * eager, so just do the fence here.
+     */
+    flush_tlb_mm(vma->vm_mm);
+    return pmd;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+1
arch/x86/include/asm/cpufeatures.h
···
 #define X86_BUG_MMIO_UNKNOWN    X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
 #define X86_BUG_RETBLEED        X86_BUG(27) /* CPU is affected by RETBleed */
 #define X86_BUG_EIBRS_PBRSB     X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
+#define X86_BUG_SMT_RSB         X86_BUG(29) /* CPU is vulnerable to Cross-Thread Return Address Predictions */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
+2
arch/x86/include/asm/intel-family.h
···
 #define INTEL_FAM6_METEORLAKE           0xAC
 #define INTEL_FAM6_METEORLAKE_L         0xAA
 
+#define INTEL_FAM6_LUNARLAKE_M          0xBD
+
 /* "Small Core" Processors (Atom/E-Core) */
 
 #define INTEL_FAM6_ATOM_BONNELL         0x1C /* Diamondville, Pineview */
+7 -2
arch/x86/kernel/cpu/common.c
···
 #define MMIO_SBDS       BIT(2)
 /* CPU is affected by RETbleed, speculating where you would not expect it */
 #define RETBLEED        BIT(3)
+/* CPU is affected by SMT (cross-thread) return predictions */
+#define SMT_RSB         BIT(4)
 
 static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
     VULNBL_INTEL_STEPPINGS(IVYBRIDGE,       X86_STEPPING_ANY,       SRBDS),
···
 
     VULNBL_AMD(0x15, RETBLEED),
     VULNBL_AMD(0x16, RETBLEED),
-    VULNBL_AMD(0x17, RETBLEED),
-    VULNBL_HYGON(0x18, RETBLEED),
+    VULNBL_AMD(0x17, RETBLEED | SMT_RSB),
+    VULNBL_HYGON(0x18, RETBLEED | SMT_RSB),
     {}
 };
 
···
         !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
         !(ia32_cap & ARCH_CAP_PBRSB_NO))
         setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+
+    if (cpu_matches(cpu_vuln_blacklist, SMT_RSB))
+        setup_force_cpu_bug(X86_BUG_SMT_RSB);
 
     if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
         return;
+1 -1
arch/x86/kernel/kprobes/core.c
···
         /* 1 byte conditional jump */
         p->ainsn.emulate_op = kprobe_emulate_jcc;
         p->ainsn.jcc.type = opcode & 0xf;
-        p->ainsn.rel32 = *(char *)insn->immediate.bytes;
+        p->ainsn.rel32 = insn->immediate.value;
         break;
     case 0x0f:
         opcode = insn->opcode.bytes[1];
+32 -11
arch/x86/kvm/x86.c
···
 bool __read_mostly eager_page_split = true;
 module_param(eager_page_split, bool, 0644);
 
+/* Enable/disable SMT_RSB bug mitigation */
+bool __read_mostly mitigate_smt_rsb;
+module_param(mitigate_smt_rsb, bool, 0444);
+
 /*
  * Restoring the host value for MSRs that are only consumed when running in
  * usermode, e.g. SYSCALL MSRs and TSC_AUX, can be deferred until the CPU
···
         r = KVM_CLOCK_VALID_FLAGS;
         break;
     case KVM_CAP_X86_DISABLE_EXITS:
-        r |= KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_PAUSE |
-             KVM_X86_DISABLE_EXITS_CSTATE;
-        if(kvm_can_mwait_in_guest())
-            r |= KVM_X86_DISABLE_EXITS_MWAIT;
+        r = KVM_X86_DISABLE_EXITS_PAUSE;
+
+        if (!mitigate_smt_rsb) {
+            r |= KVM_X86_DISABLE_EXITS_HLT |
+                 KVM_X86_DISABLE_EXITS_CSTATE;
+
+            if (kvm_can_mwait_in_guest())
+                r |= KVM_X86_DISABLE_EXITS_MWAIT;
+        }
         break;
     case KVM_CAP_X86_SMM:
         if (!IS_ENABLED(CONFIG_KVM_SMM))
···
         if (cap->args[0] & ~KVM_X86_DISABLE_VALID_EXITS)
             break;
 
-        if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
-            kvm_can_mwait_in_guest())
-            kvm->arch.mwait_in_guest = true;
-        if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
-            kvm->arch.hlt_in_guest = true;
         if (cap->args[0] & KVM_X86_DISABLE_EXITS_PAUSE)
             kvm->arch.pause_in_guest = true;
-        if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
-            kvm->arch.cstate_in_guest = true;
+
+#define SMT_RSB_MSG "This processor is affected by the Cross-Thread Return Predictions vulnerability. " \
+                    "KVM_CAP_X86_DISABLE_EXITS should only be used with SMT disabled or trusted guests."
+
+        if (!mitigate_smt_rsb) {
+            if (boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible() &&
+                (cap->args[0] & ~KVM_X86_DISABLE_EXITS_PAUSE))
+                pr_warn_once(SMT_RSB_MSG);
+
+            if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
+                kvm_can_mwait_in_guest())
+                kvm->arch.mwait_in_guest = true;
+            if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
+                kvm->arch.hlt_in_guest = true;
+            if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
+                kvm->arch.cstate_in_guest = true;
+        }
+
         r = 0;
         break;
     case KVM_CAP_MSR_PLATFORM_INFO:
···
 static int __init kvm_x86_init(void)
 {
     kvm_mmu_x86_module_init();
+    mitigate_smt_rsb &= boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible();
     return 0;
 }
 module_init(kvm_x86_init);
+1 -1
drivers/acpi/nfit/core.c
···
 
     mutex_lock(&acpi_desc->init_mutex);
     set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
-    cancel_delayed_work_sync(&acpi_desc->dwork);
     mutex_unlock(&acpi_desc->init_mutex);
+    cancel_delayed_work_sync(&acpi_desc->dwork);
 
     /*
      * Bounce the nvdimm bus lock to make sure any in-flight
+8 -10
drivers/clk/ingenic/jz4760-cgu.c
···
                unsigned long rate, unsigned long parent_rate,
                unsigned int *pm, unsigned int *pn, unsigned int *pod)
 {
-    unsigned int m, n, od, m_max = (1 << pll_info->m_bits) - 2;
+    unsigned int m, n, od, m_max = (1 << pll_info->m_bits) - 1;
 
     /* The frequency after the N divider must be between 1 and 50 MHz. */
     n = parent_rate / (1 * MHZ);
···
     /* The N divider must be >= 2. */
     n = clamp_val(n, 2, 1 << pll_info->n_bits);
 
-    for (;; n >>= 1) {
-        od = (unsigned int)-1;
+    rate /= MHZ;
+    parent_rate /= MHZ;
 
-        do {
-            m = (rate / MHZ) * (1 << ++od) * n / (parent_rate / MHZ);
-        } while ((m > m_max || m & 1) && (od < 4));
-
-        if (od < 4 && m >= 4 && m <= m_max)
-            break;
+    for (m = m_max; m >= m_max && n >= 2; n--) {
+        m = rate * n / parent_rate;
+        od = m & 1;
+        m <<= od;
     }
 
     *pm = m;
-    *pn = n;
+    *pn = n + 1;
     *pod = 1 << od;
 }
+4 -6
drivers/clk/microchip/clk-mpfs-ccc.c
···
 
     for (unsigned int i = 0; i < num_clks; i++) {
         struct mpfs_ccc_out_hw_clock *out_hw = &out_hws[i];
-        char *name = devm_kzalloc(dev, 23, GFP_KERNEL);
+        char *name = devm_kasprintf(dev, GFP_KERNEL, "%s_out%u", parent->name, i);
 
         if (!name)
             return -ENOMEM;
 
-        snprintf(name, 23, "%s_out%u", parent->name, i);
         out_hw->divider.hw.init = CLK_HW_INIT_HW(name, &parent->hw, &clk_divider_ops, 0);
         out_hw->divider.reg = data->pll_base[i / MPFS_CCC_OUTPUTS_PER_PLL] +
             out_hw->reg_offset;
···
 
     for (unsigned int i = 0; i < num_clks; i++) {
         struct mpfs_ccc_pll_hw_clock *pll_hw = &pll_hws[i];
-        char *name = devm_kzalloc(dev, 18, GFP_KERNEL);
 
-        if (!name)
+        pll_hw->name = devm_kasprintf(dev, GFP_KERNEL, "ccc%s_pll%u",
+                                      strchrnul(dev->of_node->full_name, '@'), i);
+        if (!pll_hw->name)
             return -ENOMEM;
 
         pll_hw->base = data->pll_base[i];
-        snprintf(name, 18, "ccc%s_pll%u", strchrnul(dev->of_node->full_name, '@'), i);
-        pll_hw->name = (const char *)name;
         pll_hw->hw.init = CLK_HW_INIT_PARENTS_DATA_FIXED_SIZE(pll_hw->name,
                                                               pll_hw->parents,
                                                               &mpfs_ccc_pll_ops, 0);
+19 -15
drivers/cpufreq/qcom-cpufreq-hw.c
···
     return lval * xo_rate;
 }
 
-/* Get the current frequency of the CPU (after throttling) */
-static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
-{
-    struct qcom_cpufreq_data *data;
-    struct cpufreq_policy *policy;
-
-    policy = cpufreq_cpu_get_raw(cpu);
-    if (!policy)
-        return 0;
-
-    data = policy->driver_data;
-
-    return qcom_lmh_get_throttle_freq(data) / HZ_PER_KHZ;
-}
-
 /* Get the frequency requested by the cpufreq core for the CPU */
 static unsigned int qcom_cpufreq_get_freq(unsigned int cpu)
 {
···
     index = min(index, LUT_MAX_ENTRIES - 1);
 
     return policy->freq_table[index].frequency;
+}
+
+static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
+{
+    struct qcom_cpufreq_data *data;
+    struct cpufreq_policy *policy;
+
+    policy = cpufreq_cpu_get_raw(cpu);
+    if (!policy)
+        return 0;
+
+    data = policy->driver_data;
+
+    if (data->throttle_irq >= 0)
+        return qcom_lmh_get_throttle_freq(data) / HZ_PER_KHZ;
+
+    return qcom_cpufreq_get_freq(cpu);
 }
 
 static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
···
         return -ENOMEM;
 
     qcom_cpufreq.soc_data = of_device_get_match_data(dev);
+    if (!qcom_cpufreq.soc_data)
+        return -ENODEV;
 
     clk_data = devm_kzalloc(dev, struct_size(clk_data, hws, num_domains), GFP_KERNEL);
     if (!clk_data)
+7 -5
drivers/cxl/core/region.c
···
     struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
     struct cxl_port *iter = cxled_to_port(cxled);
     struct cxl_ep *ep;
-    int rc;
+    int rc = 0;
 
     while (!is_cxl_root(to_cxl_port(iter->dev.parent)))
         iter = to_cxl_port(iter->dev.parent);
···
 
         cxl_rr = cxl_rr_load(iter, cxlr);
         cxld = cxl_rr->decoder;
-        rc = cxld->reset(cxld);
+        if (cxld->reset)
+            rc = cxld->reset(cxld);
         if (rc)
             return rc;
     }
···
          iter = ep->next, ep = cxl_ep_load(iter, cxlmd)) {
         cxl_rr = cxl_rr_load(iter, cxlr);
         cxld = cxl_rr->decoder;
-        cxld->reset(cxld);
+        if (cxld->reset)
+            cxld->reset(cxld);
     }
 
     cxled->cxld.reset(&cxled->cxld);
···
     int i, distance;
 
     /*
-     * Passthrough ports impose no distance requirements between
+     * Passthrough decoders impose no distance requirements between
      * peers
      */
-    if (port->nr_dports == 1)
+    if (cxl_rr->nr_targets == 1)
         distance = 0;
     else
         distance = p->nr_targets / cxl_rr->nr_targets;
+1 -1
drivers/dax/super.c
···
 /**
  * dax_holder() - obtain the holder of a dax device
  * @dax_dev: a dax_device instance
-
+ *
  * Return: the holder's data which represents the holder if registered,
  * otherwize NULL.
  */
+6 -3
drivers/firmware/efi/libstub/arm64.c
···
     const u8 *type1_family = efi_get_smbios_string(1, family);
 
     /*
-     * Ampere Altra machines crash in SetTime() if SetVirtualAddressMap()
-     * has not been called prior.
+     * Ampere eMAG, Altra, and Altra Max machines crash in SetTime() if
+     * SetVirtualAddressMap() has not been called prior.
      */
-    if (!type1_family || strcmp(type1_family, "Altra"))
+    if (!type1_family || (
+        strcmp(type1_family, "eMAG") &&
+        strcmp(type1_family, "Altra") &&
+        strcmp(type1_family, "Altra Max")))
         return false;
 
     efi_warn("Working around broken SetVirtualAddressMap()\n");
+1
drivers/gpio/Kconfig
··· 1531 1531 tristate "Mellanox BlueField 2 SoC GPIO" 1532 1532 depends on (MELLANOX_PLATFORM && ARM64 && ACPI) || (64BIT && COMPILE_TEST) 1533 1533 select GPIO_GENERIC 1534 + select GPIOLIB_IRQCHIP 1534 1535 help 1535 1536 Say Y here if you want GPIO support on Mellanox BlueField 2 SoC. 1536 1537
+23 -18
drivers/gpio/gpio-vf610.c
··· 30 30 31 31 struct vf610_gpio_port { 32 32 struct gpio_chip gc; 33 - struct irq_chip ic; 34 33 void __iomem *base; 35 34 void __iomem *gpio_base; 36 35 const struct fsl_gpio_soc_data *sdata; ··· 206 207 207 208 static void vf610_gpio_irq_mask(struct irq_data *d) 208 209 { 209 - struct vf610_gpio_port *port = 210 - gpiochip_get_data(irq_data_get_irq_chip_data(d)); 211 - void __iomem *pcr_base = port->base + PORT_PCR(d->hwirq); 210 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 211 + struct vf610_gpio_port *port = gpiochip_get_data(gc); 212 + irq_hw_number_t gpio_num = irqd_to_hwirq(d); 213 + void __iomem *pcr_base = port->base + PORT_PCR(gpio_num); 212 214 213 215 vf610_gpio_writel(0, pcr_base); 216 + gpiochip_disable_irq(gc, gpio_num); 214 217 } 215 218 216 219 static void vf610_gpio_irq_unmask(struct irq_data *d) 217 220 { 218 - struct vf610_gpio_port *port = 219 - gpiochip_get_data(irq_data_get_irq_chip_data(d)); 220 - void __iomem *pcr_base = port->base + PORT_PCR(d->hwirq); 221 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 222 + struct vf610_gpio_port *port = gpiochip_get_data(gc); 223 + irq_hw_number_t gpio_num = irqd_to_hwirq(d); 224 + void __iomem *pcr_base = port->base + PORT_PCR(gpio_num); 221 225 222 - vf610_gpio_writel(port->irqc[d->hwirq] << PORT_PCR_IRQC_OFFSET, 226 + gpiochip_enable_irq(gc, gpio_num); 227 + vf610_gpio_writel(port->irqc[gpio_num] << PORT_PCR_IRQC_OFFSET, 223 228 pcr_base); 224 229 } 225 230 ··· 240 237 return 0; 241 238 } 242 239 240 + static const struct irq_chip vf610_irqchip = { 241 + .name = "gpio-vf610", 242 + .irq_ack = vf610_gpio_irq_ack, 243 + .irq_mask = vf610_gpio_irq_mask, 244 + .irq_unmask = vf610_gpio_irq_unmask, 245 + .irq_set_type = vf610_gpio_irq_set_type, 246 + .irq_set_wake = vf610_gpio_irq_set_wake, 247 + .flags = IRQCHIP_IMMUTABLE, 248 + GPIOCHIP_IRQ_RESOURCE_HELPERS, 249 + }; 250 + 243 251 static void vf610_gpio_disable_clk(void *data) 244 252 { 245 253 clk_disable_unprepare(data);
··· 263 249 struct vf610_gpio_port *port; 264 250 struct gpio_chip *gc; 265 251 struct gpio_irq_chip *girq; 266 - struct irq_chip *ic; 267 252 int i; 268 253 int ret; 269 254 ··· 328 315 gc->direction_output = vf610_gpio_direction_output; 329 316 gc->set = vf610_gpio_set; 330 317 331 - ic = &port->ic; 332 - ic->name = "gpio-vf610"; 333 - ic->irq_ack = vf610_gpio_irq_ack; 334 - ic->irq_mask = vf610_gpio_irq_mask; 335 - ic->irq_unmask = vf610_gpio_irq_unmask; 336 - ic->irq_set_type = vf610_gpio_irq_set_type; 337 - ic->irq_set_wake = vf610_gpio_irq_set_wake; 338 - 339 318 /* Mask all GPIO interrupts */ 340 319 for (i = 0; i < gc->ngpio; i++) 341 320 vf610_gpio_writel(0, port->base + PORT_PCR(i)); ··· 336 331 vf610_gpio_writel(~0, port->base + PORT_ISFR); 337 332 338 333 girq = &gc->irq; 339 - girq->chip = ic; 334 + gpio_irq_chip_set_chip(girq, &vf610_irqchip); 340 335 girq->parent_handler = vf610_gpio_irq_handler; 341 336 girq->num_parents = 1; 342 337 girq->parents = devm_kcalloc(&pdev->dev, 1,
+12
drivers/gpio/gpiolib-acpi.c
··· 1637 1637 .ignore_wake = "ELAN0415:00@9", 1638 1638 }, 1639 1639 }, 1640 + { 1641 + /* 1642 + * Spurious wakeups from TP_ATTN# pin 1643 + * Found in BIOS 1.7.7 1644 + */ 1645 + .matches = { 1646 + DMI_MATCH(DMI_BOARD_NAME, "NH5xAx"), 1647 + }, 1648 + .driver_data = &(struct acpi_gpiolib_dmi_quirk) { 1649 + .ignore_wake = "SYNA1202:00@16", 1650 + }, 1651 + }, 1640 1652 {} /* Terminating entry */ 1641 1653 }; 1642 1654
-1
drivers/gpio/gpiolib-acpi.h
··· 14 14 15 15 #include <linux/gpio/consumer.h> 16 16 17 - struct acpi_device; 18 17 struct device; 19 18 struct fwnode_handle; 20 19
+2 -1
drivers/gpu/drm/Kconfig
··· 53 53 54 54 config DRM_USE_DYNAMIC_DEBUG 55 55 bool "use dynamic debug to implement drm.debug" 56 - default y 56 + default n 57 + depends on BROKEN 57 58 depends on DRM 58 59 depends on DYNAMIC_DEBUG || DYNAMIC_DEBUG_CORE 59 60 depends on JUMP_LABEL
+1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 243 243 244 244 #define AMDGPU_VCNFW_LOG_SIZE (32 * 1024) 245 245 extern int amdgpu_vcnfw_log; 246 + extern int amdgpu_sg_display; 246 247 247 248 #define AMDGPU_VM_MAX_NUM_CTX 4096 248 249 #define AMDGPU_SG_THRESHOLD (256*1024*1024)
+4 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 1220 1220 * next job actually sees the results from the previous one 1221 1221 * before we start executing on the same scheduler ring. 1222 1222 */ 1223 - if (!s_fence || s_fence->sched != sched) 1223 + if (!s_fence || s_fence->sched != sched) { 1224 + dma_fence_put(fence); 1224 1225 continue; 1226 + } 1225 1227 1226 1228 r = amdgpu_sync_fence(&p->gang_leader->explicit_sync, fence); 1229 + dma_fence_put(fence); 1227 1230 if (r) 1228 1231 return r; 1229 1232 }
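The amdgpu_cs hunk above fixes a dma_fence reference leak: the fence obtained at the top of each loop iteration was dropped on the normal path but not on the `continue` path. A minimal refcount model of the bug and the fix (types and names are hypothetical, not the real dma_fence API):

```c
#include <assert.h>

struct obj { int refs; };

static void obj_get(struct obj *o) { o->refs++; }
static void obj_put(struct obj *o) { o->refs--; }

/*
 * Visit each object, skipping non-matching ones. The reference taken at
 * the top of the body must be dropped on BOTH exit paths of the
 * iteration -- the skip (continue) path and the normal path.
 */
static void visit_all(struct obj *objs, int n, int match)
{
	for (int i = 0; i < n; i++) {
		struct obj *o = &objs[i];

		obj_get(o);
		if (i != match) {
			obj_put(o);	/* the fix: put before continue */
			continue;
		}
		/* ... use o ... */
		obj_put(o);
	}
}
```

After `visit_all()` every object's refcount is back where it started; without the put before `continue`, skipped objects would leak one reference per pass.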
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 4268 4268 } 4269 4269 adev->in_suspend = false; 4270 4270 4271 + if (adev->enable_mes) 4272 + amdgpu_mes_self_test(adev); 4273 + 4271 4274 if (amdgpu_acpi_smart_shift_update(dev, AMDGPU_SS_DEV_D0)) 4272 4275 DRM_WARN("smart shift update failed\n"); 4273 4276
+11
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 186 186 int amdgpu_smartshift_bias; 187 187 int amdgpu_use_xgmi_p2p = 1; 188 188 int amdgpu_vcnfw_log; 189 + int amdgpu_sg_display = -1; /* auto */ 189 190 190 191 static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work); 191 192 ··· 931 930 */ 932 931 MODULE_PARM_DESC(vcnfw_log, "Enable vcnfw log(0 = disable (default value), 1 = enable)"); 933 932 module_param_named(vcnfw_log, amdgpu_vcnfw_log, int, 0444); 933 + 934 + /** 935 + * DOC: sg_display (int) 936 + * Disable S/G (scatter/gather) display (i.e., display from system memory). 937 + * This option is only relevant on APUs. Set this option to 0 to disable 938 + * S/G display if you experience flickering or other issues under memory 939 + * pressure and report the issue. 940 + */ 941 + MODULE_PARM_DESC(sg_display, "S/G Display (-1 = auto (default), 0 = disable)"); 942 + module_param_named(sg_display, amdgpu_sg_display, int, 0444); 934 943 935 944 /** 936 945 * DOC: smu_pptable_id (int)
+7 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 618 618 if (!ring || !ring->fence_drv.initialized) 619 619 continue; 620 620 621 - if (!ring->no_scheduler) 621 + /* 622 + * Notice we check for sched.ops since there's some 623 + * override on the meaning of sched.ready by amdgpu. 624 + * The natural check would be sched.ready, which is 625 + * set as drm_sched_init() finishes... 626 + */ 627 + if (ring->sched.ops) 622 628 drm_sched_fini(&ring->sched); 623 629 624 630 for (j = 0; j <= ring->fence_drv.num_fences_mask; ++j)
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
··· 295 295 #define amdgpu_ring_parse_cs(r, p, job, ib) ((r)->funcs->parse_cs((p), (job), (ib))) 296 296 #define amdgpu_ring_patch_cs_in_place(r, p, job, ib) ((r)->funcs->patch_cs_in_place((p), (job), (ib))) 297 297 #define amdgpu_ring_test_ring(r) (r)->funcs->test_ring((r)) 298 - #define amdgpu_ring_test_ib(r, t) (r)->funcs->test_ib((r), (t)) 298 + #define amdgpu_ring_test_ib(r, t) ((r)->funcs->test_ib ? (r)->funcs->test_ib((r), (t)) : 0) 299 299 #define amdgpu_ring_get_rptr(r) (r)->funcs->get_rptr((r)) 300 300 #define amdgpu_ring_get_wptr(r) (r)->funcs->get_wptr((r)) 301 301 #define amdgpu_ring_set_wptr(r) (r)->funcs->set_wptr((r))
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
··· 974 974 trace_amdgpu_vm_update_ptes(params, frag_start, upd_end, 975 975 min(nptes, 32u), dst, incr, 976 976 upd_flags, 977 - vm->task_info.pid, 977 + vm->task_info.tgid, 978 978 vm->immediate.fence_context); 979 979 amdgpu_vm_pte_update_flags(params, to_amdgpu_bo_vm(pt), 980 980 cursor.level, pe_start, dst,
-1
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 6877 6877 .emit_gds_switch = gfx_v9_0_ring_emit_gds_switch, 6878 6878 .emit_hdp_flush = gfx_v9_0_ring_emit_hdp_flush, 6879 6879 .test_ring = gfx_v9_0_ring_test_ring, 6880 - .test_ib = gfx_v9_0_ring_test_ib, 6881 6880 .insert_nop = amdgpu_ring_insert_nop, 6882 6881 .pad_ib = amdgpu_ring_generic_pad_ib, 6883 6882 .emit_switch_buffer = gfx_v9_ring_emit_sb,
+1 -1
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 1344 1344 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1345 1345 1346 1346 /* it's only intended for use in mes_self_test case, not for s0ix and reset */ 1347 - if (!amdgpu_in_reset(adev) && !adev->in_s0ix && 1347 + if (!amdgpu_in_reset(adev) && !adev->in_s0ix && !adev->in_suspend && 1348 1348 (adev->ip_versions[GC_HWIP][0] != IP_VERSION(11, 0, 3))) 1349 1349 amdgpu_mes_self_test(adev); 1350 1350
+3 -1
drivers/gpu/drm/amd/amdgpu/soc21.c
··· 641 641 AMD_CG_SUPPORT_GFX_CGLS | 642 642 AMD_CG_SUPPORT_REPEATER_FGCG | 643 643 AMD_CG_SUPPORT_GFX_MGCG | 644 - AMD_CG_SUPPORT_HDP_SD; 644 + AMD_CG_SUPPORT_HDP_SD | 645 + AMD_CG_SUPPORT_ATHUB_MGCG | 646 + AMD_CG_SUPPORT_ATHUB_LS; 645 647 adev->pg_flags = AMD_PG_SUPPORT_VCN | 646 648 AMD_PG_SUPPORT_VCN_DPG | 647 649 AMD_PG_SUPPORT_JPEG;
+38 -15
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 1184 1184 1185 1185 memset(pa_config, 0, sizeof(*pa_config)); 1186 1186 1187 - logical_addr_low = min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18; 1188 - pt_base = amdgpu_gmc_pd_addr(adev->gart.bo); 1189 - 1190 - if (adev->apu_flags & AMD_APU_IS_RAVEN2) 1191 - /* 1192 - * Raven2 has a HW issue that it is unable to use the vram which 1193 - * is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the 1194 - * workaround that increase system aperture high address (add 1) 1195 - * to get rid of the VM fault and hardware hang. 1196 - */ 1197 - logical_addr_high = max((adev->gmc.fb_end >> 18) + 0x1, adev->gmc.agp_end >> 18); 1198 - else 1199 - logical_addr_high = max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18; 1200 - 1201 1187 agp_base = 0; 1202 1188 agp_bot = adev->gmc.agp_start >> 24; 1203 1189 agp_top = adev->gmc.agp_end >> 24; 1204 1190 1191 + /* AGP aperture is disabled */ 1192 + if (agp_bot == agp_top) { 1193 + logical_addr_low = adev->gmc.vram_start >> 18; 1194 + if (adev->apu_flags & AMD_APU_IS_RAVEN2) 1195 + /* 1196 + * Raven2 has a HW issue that it is unable to use the vram which 1197 + * is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the 1198 + * workaround that increase system aperture high address (add 1) 1199 + * to get rid of the VM fault and hardware hang. 1200 + */ 1201 + logical_addr_high = (adev->gmc.fb_end >> 18) + 0x1; 1202 + else 1203 + logical_addr_high = adev->gmc.vram_end >> 18; 1204 + } else { 1205 + logical_addr_low = min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18; 1206 + if (adev->apu_flags & AMD_APU_IS_RAVEN2) 1207 + /* 1208 + * Raven2 has a HW issue that it is unable to use the vram which 1209 + * is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the 1210 + * workaround that increase system aperture high address (add 1) 1211 + * to get rid of the VM fault and hardware hang. 
1212 + */ 1213 + logical_addr_high = max((adev->gmc.fb_end >> 18) + 0x1, adev->gmc.agp_end >> 18); 1214 + else 1215 + logical_addr_high = max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18; 1216 + } 1217 + 1218 + pt_base = amdgpu_gmc_pd_addr(adev->gart.bo); 1205 1219 1206 1220 page_table_start.high_part = (u32)(adev->gmc.gart_start >> 44) & 0xF; 1207 1221 page_table_start.low_part = (u32)(adev->gmc.gart_start >> 12); ··· 1517 1503 case IP_VERSION(3, 0, 1): 1518 1504 case IP_VERSION(3, 1, 2): 1519 1505 case IP_VERSION(3, 1, 3): 1506 + case IP_VERSION(3, 1, 4): 1507 + case IP_VERSION(3, 1, 5): 1520 1508 case IP_VERSION(3, 1, 6): 1521 1509 init_data.flags.gpu_vm_support = true; 1522 1510 break; ··· 1527 1511 } 1528 1512 break; 1529 1513 } 1514 + if (init_data.flags.gpu_vm_support && 1515 + (amdgpu_sg_display == 0)) 1516 + init_data.flags.gpu_vm_support = false; 1530 1517 1531 1518 if (init_data.flags.gpu_vm_support) 1532 1519 adev->mode_info.gpu_vm_support = true; ··· 9658 9639 * `dcn10_can_pipe_disable_cursor`). By now, all modified planes are in 9659 9640 * atomic state, so call drm helper to normalize zpos. 9660 9641 */ 9661 - drm_atomic_normalize_zpos(dev, state); 9642 + ret = drm_atomic_normalize_zpos(dev, state); 9643 + if (ret) { 9644 + drm_dbg(dev, "drm_atomic_normalize_zpos() failed\n"); 9645 + goto fail; 9646 + } 9662 9647 9663 9648 /* Remove exiting planes if they are modified */ 9664 9649 for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
+1 -1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
··· 3626 3626 (int)hubp->curs_attr.width || pos_cpy.x 3627 3627 <= (int)hubp->curs_attr.width + 3628 3628 pipe_ctx->plane_state->src_rect.x) { 3629 - pos_cpy.x = temp_x + viewport_width; 3629 + pos_cpy.x = 2 * viewport_width - temp_x; 3630 3630 } 3631 3631 } 3632 3632 } else {
+2
drivers/gpu/drm/amd/pm/amdgpu_pm.c
··· 1991 1991 case IP_VERSION(9, 4, 2): 1992 1992 case IP_VERSION(10, 3, 0): 1993 1993 case IP_VERSION(11, 0, 0): 1994 + case IP_VERSION(11, 0, 1): 1995 + case IP_VERSION(11, 0, 2): 1994 1996 *states = ATTR_STATE_SUPPORTED; 1995 1997 break; 1996 1998 default:
+3 -2
drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu13_driver_if_v13_0_0.h
··· 123 123 (1 << FEATURE_DS_FCLK_BIT) | \ 124 124 (1 << FEATURE_DS_LCLK_BIT) | \ 125 125 (1 << FEATURE_DS_DCFCLK_BIT) | \ 126 - (1 << FEATURE_DS_UCLK_BIT)) 126 + (1 << FEATURE_DS_UCLK_BIT) | \ 127 + (1ULL << FEATURE_DS_VCN_BIT)) 127 128 128 129 //For use with feature control messages 129 130 typedef enum { ··· 523 522 TEMP_HOTSPOT_M, 524 523 TEMP_MEM, 525 524 TEMP_VR_GFX, 526 - TEMP_VR_SOC, 527 525 TEMP_VR_MEM0, 528 526 TEMP_VR_MEM1, 527 + TEMP_VR_SOC, 529 528 TEMP_VR_U, 530 529 TEMP_LIQUID0, 531 530 TEMP_LIQUID1,
+15 -14
drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu13_driver_if_v13_0_7.h
··· 113 113 #define NUM_FEATURES 64 114 114 115 115 #define ALLOWED_FEATURE_CTRL_DEFAULT 0xFFFFFFFFFFFFFFFFULL 116 - #define ALLOWED_FEATURE_CTRL_SCPM (1 << FEATURE_DPM_GFXCLK_BIT) | \ 117 - (1 << FEATURE_DPM_GFX_POWER_OPTIMIZER_BIT) | \ 118 - (1 << FEATURE_DPM_UCLK_BIT) | \ 119 - (1 << FEATURE_DPM_FCLK_BIT) | \ 120 - (1 << FEATURE_DPM_SOCCLK_BIT) | \ 121 - (1 << FEATURE_DPM_MP0CLK_BIT) | \ 122 - (1 << FEATURE_DPM_LINK_BIT) | \ 123 - (1 << FEATURE_DPM_DCN_BIT) | \ 124 - (1 << FEATURE_DS_GFXCLK_BIT) | \ 125 - (1 << FEATURE_DS_SOCCLK_BIT) | \ 126 - (1 << FEATURE_DS_FCLK_BIT) | \ 127 - (1 << FEATURE_DS_LCLK_BIT) | \ 128 - (1 << FEATURE_DS_DCFCLK_BIT) | \ 129 - (1 << FEATURE_DS_UCLK_BIT) 116 + #define ALLOWED_FEATURE_CTRL_SCPM ((1 << FEATURE_DPM_GFXCLK_BIT) | \ 117 + (1 << FEATURE_DPM_GFX_POWER_OPTIMIZER_BIT) | \ 118 + (1 << FEATURE_DPM_UCLK_BIT) | \ 119 + (1 << FEATURE_DPM_FCLK_BIT) | \ 120 + (1 << FEATURE_DPM_SOCCLK_BIT) | \ 121 + (1 << FEATURE_DPM_MP0CLK_BIT) | \ 122 + (1 << FEATURE_DPM_LINK_BIT) | \ 123 + (1 << FEATURE_DPM_DCN_BIT) | \ 124 + (1 << FEATURE_DS_GFXCLK_BIT) | \ 125 + (1 << FEATURE_DS_SOCCLK_BIT) | \ 126 + (1 << FEATURE_DS_FCLK_BIT) | \ 127 + (1 << FEATURE_DS_LCLK_BIT) | \ 128 + (1 << FEATURE_DS_DCFCLK_BIT) | \ 129 + (1 << FEATURE_DS_UCLK_BIT) | \ 130 + (1ULL << FEATURE_DS_VCN_BIT)) 130 131 131 132 //For use with feature control messages 132 133 typedef enum {
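This hunk (like the smu13_driver_if_v13_0_0.h one above it) widens the FEATURE_DS_VCN_BIT shift to `1ULL`: with a plain `int` literal, shifting by a bit position of 32 or more is undefined behavior on 32-bit `int` and cannot produce a 64-bit mask. A small illustration (the bit value below is made up for demonstration; the real FEATURE_DS_VCN_BIT value is defined in the PMFW headers):

```c
#include <assert.h>
#include <stdint.h>

#define FEATURE_DS_VCN_BIT_EXAMPLE 40	/* illustrative, >= 32 */

/* 64-bit shift, well-defined for bit positions 0..63. */
static uint64_t feature_mask(unsigned int bit)
{
	return 1ULL << bit;
}
```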
+2 -2
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
··· 28 28 #define SMU13_DRIVER_IF_VERSION_INV 0xFFFFFFFF 29 29 #define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x04 30 30 #define SMU13_DRIVER_IF_VERSION_ALDE 0x08 31 - #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_0 0x34 31 + #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_0 0x37 32 32 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_4 0x07 33 33 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_5 0x04 34 34 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_10 0x32 35 - #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_7 0x35 35 + #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_7 0x37 36 36 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_10 0x1D 37 37 38 38 #define SMU13_MODE1_RESET_WAIT_TIME_IN_MS 500 //500ms
+6
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 407 407 struct amdgpu_device *adev = smu->adev; 408 408 int ret = 0; 409 409 410 + if (amdgpu_sriov_vf(smu->adev)) 411 + return 0; 412 + 410 413 ret = smu_v13_0_0_get_pptable_from_pmfw(smu, 411 414 &smu_table->power_play_table, 412 415 &smu_table->power_play_table_size); ··· 1259 1256 struct smu_13_0_0_powerplay_table *powerplay_table = 1260 1257 table_context->power_play_table; 1261 1258 PPTable_t *pptable = smu->smu_table.driver_pptable; 1259 + 1260 + if (amdgpu_sriov_vf(smu->adev)) 1261 + return 0; 1262 1262 1263 1263 if (!range) 1264 1264 return -EINVAL;
+1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 124 124 MSG_MAP(DFCstateControl, PPSMC_MSG_SetExternalClientDfCstateAllow, 0), 125 125 MSG_MAP(ArmD3, PPSMC_MSG_ArmD3, 0), 126 126 MSG_MAP(AllowGpo, PPSMC_MSG_SetGpoAllow, 0), 127 + MSG_MAP(GetPptLimit, PPSMC_MSG_GetPptLimit, 0), 127 128 }; 128 129 129 130 static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = {
+2 -2
drivers/gpu/drm/ast/ast_mode.c
··· 714 714 struct ast_plane *ast_primary_plane = &ast->primary_plane; 715 715 struct drm_plane *primary_plane = &ast_primary_plane->base; 716 716 void __iomem *vaddr = ast->vram; 717 - u64 offset = ast->vram_base; 717 + u64 offset = 0; /* with shmem, the primary plane is always at offset 0 */ 718 718 unsigned long cursor_size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE); 719 719 unsigned long size = ast->vram_fb_available - cursor_size; 720 720 int ret; ··· 972 972 return -ENOMEM; 973 973 974 974 vaddr = ast->vram + ast->vram_fb_available - size; 975 - offset = ast->vram_base + ast->vram_fb_available - size; 975 + offset = ast->vram_fb_available - size; 976 976 977 977 ret = ast_plane_init(dev, ast_cursor_plane, vaddr, offset, size, 978 978 0x01, &ast_cursor_plane_funcs,
+20 -13
drivers/gpu/drm/drm_client.c
··· 233 233 234 234 static void drm_client_buffer_delete(struct drm_client_buffer *buffer) 235 235 { 236 - struct drm_device *dev = buffer->client->dev; 237 - 238 236 if (buffer->gem) { 239 237 drm_gem_vunmap_unlocked(buffer->gem, &buffer->map); 240 238 drm_gem_object_put(buffer->gem); 241 239 } 242 240 243 - if (buffer->handle) 244 - drm_mode_destroy_dumb(dev, buffer->handle, buffer->client->file); 245 - 246 241 kfree(buffer); 247 242 } 248 243 249 244 static struct drm_client_buffer * 250 - drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format) 245 + drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, 246 + u32 format, u32 *handle) 251 247 { 252 248 const struct drm_format_info *info = drm_format_info(format); 253 249 struct drm_mode_create_dumb dumb_args = { }; ··· 265 269 if (ret) 266 270 goto err_delete; 267 271 268 - buffer->handle = dumb_args.handle; 269 - buffer->pitch = dumb_args.pitch; 270 - 271 272 obj = drm_gem_object_lookup(client->file, dumb_args.handle); 272 273 if (!obj) { 273 274 ret = -ENOENT; 274 275 goto err_delete; 275 276 } 276 277 278 + buffer->pitch = dumb_args.pitch; 277 279 buffer->gem = obj; 280 + *handle = dumb_args.handle; 278 281 279 282 return buffer; 280 283 ··· 360 365 } 361 366 362 367 static int drm_client_buffer_addfb(struct drm_client_buffer *buffer, 363 - u32 width, u32 height, u32 format) 368 + u32 width, u32 height, u32 format, 369 + u32 handle) 364 370 { 365 371 struct drm_client_dev *client = buffer->client; 366 372 struct drm_mode_fb_cmd fb_req = { }; ··· 373 377 fb_req.depth = info->depth; 374 378 fb_req.width = width; 375 379 fb_req.height = height; 376 - fb_req.handle = buffer->handle; 380 + fb_req.handle = handle; 377 381 fb_req.pitch = buffer->pitch; 378 382 379 383 ret = drm_mode_addfb(client->dev, &fb_req, client->file); ··· 410 414 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format) 411 415 {
412 416 struct drm_client_buffer *buffer; 417 + u32 handle; 413 418 int ret; 414 419 415 - buffer = drm_client_buffer_create(client, width, height, format); 420 + buffer = drm_client_buffer_create(client, width, height, format, 421 + &handle); 416 422 if (IS_ERR(buffer)) 417 423 return buffer; 418 424 419 - ret = drm_client_buffer_addfb(buffer, width, height, format); 425 + ret = drm_client_buffer_addfb(buffer, width, height, format, handle); 426 + 427 + /* 428 + * The handle is only needed for creating the framebuffer, destroy it 429 + * again to solve a circular dependency should anybody export the GEM 430 + * object as DMA-buf. The framebuffer and our buffer structure are still 431 + * holding references to the GEM object to prevent its destruction. 432 + */ 433 + drm_mode_destroy_dumb(client->dev, handle, client->file); 434 + 420 435 if (ret) { 421 436 drm_client_buffer_delete(buffer); 422 437 return ERR_PTR(ret);
+23 -10
drivers/gpu/drm/i915/display/intel_bios.c
··· 2466 2466 dvo_port); 2467 2467 } 2468 2468 2469 + static enum port 2470 + dsi_dvo_port_to_port(struct drm_i915_private *i915, u8 dvo_port) 2471 + { 2472 + switch (dvo_port) { 2473 + case DVO_PORT_MIPIA: 2474 + return PORT_A; 2475 + case DVO_PORT_MIPIC: 2476 + if (DISPLAY_VER(i915) >= 11) 2477 + return PORT_B; 2478 + else 2479 + return PORT_C; 2480 + default: 2481 + return PORT_NONE; 2482 + } 2483 + } 2484 + 2469 2485 static int parse_bdb_230_dp_max_link_rate(const int vbt_max_link_rate) 2470 2486 { 2471 2487 switch (vbt_max_link_rate) { ··· 3430 3414 3431 3415 dvo_port = child->dvo_port; 3432 3416 3433 - if (dvo_port == DVO_PORT_MIPIA || 3434 - (dvo_port == DVO_PORT_MIPIB && DISPLAY_VER(i915) >= 11) || 3435 - (dvo_port == DVO_PORT_MIPIC && DISPLAY_VER(i915) < 11)) { 3436 - if (port) 3437 - *port = dvo_port - DVO_PORT_MIPIA; 3438 - return true; 3439 - } else if (dvo_port == DVO_PORT_MIPIB || 3440 - dvo_port == DVO_PORT_MIPIC || 3441 - dvo_port == DVO_PORT_MIPID) { 3417 + if (dsi_dvo_port_to_port(i915, dvo_port) == PORT_NONE) { 3442 3418 drm_dbg_kms(&i915->drm, 3443 3419 "VBT has unsupported DSI port %c\n", 3444 3420 port_name(dvo_port - DVO_PORT_MIPIA)); 3421 + continue; 3445 3422 } 3423 + 3424 + if (port) 3425 + *port = dsi_dvo_port_to_port(i915, dvo_port); 3426 + return true; 3446 3427 } 3447 3428 3448 3429 return false; ··· 3524 3511 if (!(child->device_type & DEVICE_TYPE_MIPI_OUTPUT)) 3525 3512 continue; 3526 3513 3527 - if (child->dvo_port - DVO_PORT_MIPIA == encoder->port) { 3514 + if (dsi_dvo_port_to_port(i915, child->dvo_port) == encoder->port) { 3528 3515 if (!devdata->dsc) 3529 3516 return false; 3530 3517
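The intel_bios change above replaces open-coded `dvo_port` comparisons with one mapping helper that returns PORT_NONE for unsupported ports, so both call sites share the same translation. A simplified standalone version (the enum constants below are illustrative stand-ins, not the real i915 definitions):

```c
#include <assert.h>

enum port { PORT_NONE = -1, PORT_A, PORT_B, PORT_C };
enum dvo_port { DVO_PORT_MIPIA, DVO_PORT_MIPIB, DVO_PORT_MIPIC, DVO_PORT_MIPID };

/* Map a VBT DVO port to a display port; PORT_NONE means "unsupported". */
static enum port dsi_dvo_port_to_port(int display_ver, enum dvo_port dvo_port)
{
	switch (dvo_port) {
	case DVO_PORT_MIPIA:
		return PORT_A;
	case DVO_PORT_MIPIC:
		/* Display version 11+ routes the second DSI port to PORT_B */
		return display_ver >= 11 ? PORT_B : PORT_C;
	default:
		return PORT_NONE;	/* sentinel: callers skip these ports */
	}
}
```

Centralizing the mapping also fixes the second call site in the hunk, which previously computed the port as `child->dvo_port - DVO_PORT_MIPIA` without the version-dependent MIPIC handling.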
+12
drivers/gpu/drm/i915/display/intel_fbdev.c
··· 328 328 return ret; 329 329 } 330 330 331 + static int intelfb_dirty(struct drm_fb_helper *helper, struct drm_clip_rect *clip) 332 + { 333 + if (!(clip->x1 < clip->x2 && clip->y1 < clip->y2)) 334 + return 0; 335 + 336 + if (helper->fb->funcs->dirty) 337 + return helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, clip, 1); 338 + 339 + return 0; 340 + } 341 + 331 342 static const struct drm_fb_helper_funcs intel_fb_helper_funcs = { 332 343 .fb_probe = intelfb_create, 344 + .fb_dirty = intelfb_dirty, 333 345 }; 334 346 335 347 static void intel_fbdev_destroy(struct intel_fbdev *ifbdev)
+2 -1
drivers/gpu/drm/i915/display/skl_watermark.c
··· 1587 1587 skl_check_wm_level(&wm->wm[level], ddb); 1588 1588 1589 1589 if (icl_need_wm1_wa(i915, plane_id) && 1590 - level == 1 && wm->wm[0].enable) { 1590 + level == 1 && !wm->wm[level].enable && 1591 + wm->wm[0].enable) { 1591 1592 wm->wm[level].blocks = wm->wm[0].blocks; 1592 1593 wm->wm[level].lines = wm->wm[0].lines; 1593 1594 wm->wm[level].ignore_lines = wm->wm[0].ignore_lines;
+7 -7
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 3483 3483 eb.composite_fence : 3484 3484 &eb.requests[0]->fence); 3485 3485 3486 + if (unlikely(eb.gem_context->syncobj)) { 3487 + drm_syncobj_replace_fence(eb.gem_context->syncobj, 3488 + eb.composite_fence ? 3489 + eb.composite_fence : 3490 + &eb.requests[0]->fence); 3491 + } 3492 + 3486 3493 if (out_fence) { 3487 3494 if (err == 0) { 3488 3495 fd_install(out_fence_fd, out_fence->file); ··· 3499 3492 } else { 3500 3493 fput(out_fence->file); 3501 3494 } 3502 - } 3503 - 3504 - if (unlikely(eb.gem_context->syncobj)) { 3505 - drm_syncobj_replace_fence(eb.gem_context->syncobj, 3506 - eb.composite_fence ? 3507 - eb.composite_fence : 3508 - &eb.requests[0]->fence); 3509 3495 } 3510 3496 3511 3497 if (!out_fence && eb.composite_fence)
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_shmem.c
··· 579 579 mapping_set_gfp_mask(mapping, mask); 580 580 GEM_BUG_ON(!(mapping_gfp_mask(mapping) & __GFP_RECLAIM)); 581 581 582 - i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class, 0); 582 + i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class, flags); 583 583 obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE; 584 584 obj->write_domain = I915_GEM_DOMAIN_CPU; 585 585 obj->read_domains = I915_GEM_DOMAIN_CPU;
+7 -7
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 1355 1355 GAMT_CHKN_BIT_REG, 1356 1356 GAMT_CHKN_DISABLE_L3_COH_PIPE); 1357 1357 1358 + /* 1359 + * Wa_1408615072:icl,ehl (vsunit) 1360 + * Wa_1407596294:icl,ehl (hsunit) 1361 + */ 1362 + wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE, 1363 + VSUNIT_CLKGATE_DIS | HSUNIT_CLKGATE_DIS); 1364 + 1358 1365 /* Wa_1407352427:icl,ehl */ 1359 1366 wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2, 1360 1367 PSDUNIT_CLKGATE_DIS); ··· 2545 2538 /* WaEnable32PlaneMode:icl */ 2546 2539 wa_masked_en(wal, GEN9_CSFE_CHICKEN1_RCS, 2547 2540 GEN11_ENABLE_32_PLANE_MODE); 2548 - 2549 - /* 2550 - * Wa_1408615072:icl,ehl (vsunit) 2551 - * Wa_1407596294:icl,ehl (hsunit) 2552 - */ 2553 - wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE, 2554 - VSUNIT_CLKGATE_DIS | HSUNIT_CLKGATE_DIS); 2555 2541 2556 2542 /* 2557 2543 * Wa_1408767742:icl[a2..forever],ehl[all]
+1 -1
drivers/gpu/drm/vc4/vc4_crtc.c
··· 711 711 struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder); 712 712 713 713 if (vc4_encoder->type == VC4_ENCODER_TYPE_HDMI0) { 714 - vc4_state->hvs_load = max(mode->clock * mode->hdisplay / mode->htotal + 1000, 714 + vc4_state->hvs_load = max(mode->clock * mode->hdisplay / mode->htotal + 8000, 715 715 mode->clock * 9 / 10) * 1000; 716 716 } else { 717 717 vc4_state->hvs_load = mode->clock * 1000;
+9 -9
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 97 97 #define VC5_HDMI_GCP_WORD_1_GCP_SUBPACKET_BYTE_1_SHIFT 8 98 98 #define VC5_HDMI_GCP_WORD_1_GCP_SUBPACKET_BYTE_1_MASK VC4_MASK(15, 8) 99 99 100 + #define VC5_HDMI_GCP_WORD_1_GCP_SUBPACKET_BYTE_0_MASK VC4_MASK(7, 0) 101 + #define VC5_HDMI_GCP_WORD_1_GCP_SUBPACKET_BYTE_0_SET_AVMUTE BIT(0) 102 + #define VC5_HDMI_GCP_WORD_1_GCP_SUBPACKET_BYTE_0_CLEAR_AVMUTE BIT(4) 103 + 100 104 # define VC4_HD_M_SW_RST BIT(2) 101 105 # define VC4_HD_M_ENABLE BIT(0) 102 106 ··· 1310 1306 VC4_HDMI_VERTB_VBP)); 1311 1307 unsigned long flags; 1312 1308 unsigned char gcp; 1313 - bool gcp_en; 1314 1309 u32 reg; 1315 1310 int idx; 1316 1311 ··· 1344 1341 switch (vc4_state->output_bpc) { 1345 1342 case 12: 1346 1343 gcp = 6; 1347 - gcp_en = true; 1348 1344 break; 1349 1345 case 10: 1350 1346 gcp = 5; 1351 - gcp_en = true; 1352 1347 break; 1353 1348 case 8: 1354 1349 default: 1355 - gcp = 4; 1356 - gcp_en = false; 1350 + gcp = 0; 1357 1351 break; 1358 1352 } 1359 1353 ··· 1359 1359 * doesn't signal in GCP. 1360 1360 */ 1361 1361 if (vc4_state->output_format == VC4_HDMI_OUTPUT_YUV422) { 1362 - gcp = 4; 1363 - gcp_en = false; 1362 + gcp = 0; 1364 1363 } 1365 1364 1366 1365 reg = HDMI_READ(HDMI_DEEP_COLOR_CONFIG_1); ··· 1372 1373 reg = HDMI_READ(HDMI_GCP_WORD_1); 1373 1374 reg &= ~VC5_HDMI_GCP_WORD_1_GCP_SUBPACKET_BYTE_1_MASK; 1374 1375 reg |= VC4_SET_FIELD(gcp, VC5_HDMI_GCP_WORD_1_GCP_SUBPACKET_BYTE_1); 1376 + reg &= ~VC5_HDMI_GCP_WORD_1_GCP_SUBPACKET_BYTE_0_MASK; 1377 + reg |= VC5_HDMI_GCP_WORD_1_GCP_SUBPACKET_BYTE_0_CLEAR_AVMUTE; 1375 1378 HDMI_WRITE(HDMI_GCP_WORD_1, reg); 1376 1379 1377 1380 reg = HDMI_READ(HDMI_GCP_CONFIG); 1378 - reg &= ~VC5_HDMI_GCP_CONFIG_GCP_ENABLE; 1379 - reg |= gcp_en ? VC5_HDMI_GCP_CONFIG_GCP_ENABLE : 0; 1381 + reg |= VC5_HDMI_GCP_CONFIG_GCP_ENABLE; 1380 1382 HDMI_WRITE(HDMI_GCP_CONFIG, reg); 1381 1383 1382 1384 reg = HDMI_READ(HDMI_MISC_CONTROL);
+4 -2
drivers/gpu/drm/vc4/vc4_plane.c
··· 340 340 { 341 341 struct vc4_plane_state *vc4_state = to_vc4_plane_state(state); 342 342 struct drm_framebuffer *fb = state->fb; 343 - struct drm_gem_dma_object *bo = drm_fb_dma_get_gem_obj(fb, 0); 343 + struct drm_gem_dma_object *bo; 344 344 int num_planes = fb->format->num_planes; 345 345 struct drm_crtc_state *crtc_state; 346 346 u32 h_subsample = fb->format->hsub; ··· 359 359 if (ret) 360 360 return ret; 361 361 362 - for (i = 0; i < num_planes; i++) 362 + for (i = 0; i < num_planes; i++) { 363 + bo = drm_fb_dma_get_gem_obj(fb, i); 363 364 vc4_state->offsets[i] = bo->dma_addr + fb->offsets[i]; 365 + } 364 366 365 367 /* 366 368 * We don't support subpixel source positioning for scaling,
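The vc4_plane fix above moves the GEM object lookup inside the per-plane loop, so plane i's address is computed from its own backing object instead of reusing plane 0's. The pattern, reduced to a standalone sketch (struct layouts and names are hypothetical):

```c
#include <assert.h>

struct buf { unsigned long dma_addr; };

struct fb {
	int num_planes;
	struct buf bufs[4];
	unsigned long offsets[4];
};

/* Stand-in for the per-plane buffer lookup (drm_fb_dma_get_gem_obj). */
static struct buf *fb_get_buf(struct fb *fb, int plane)
{
	return &fb->bufs[plane];
}

/* Each plane's address comes from ITS buffer, looked up per iteration. */
static void compute_plane_addrs(struct fb *fb, unsigned long *out)
{
	for (int i = 0; i < fb->num_planes; i++) {
		struct buf *bo = fb_get_buf(fb, i);	/* per-plane, not plane 0 */

		out[i] = bo->dma_addr + fb->offsets[i];
	}
}
```

With the lookup hoisted above the loop (the old code), every plane would have been offset from plane 0's `dma_addr`, which is only correct for single-plane formats.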
+1 -4
drivers/gpu/drm/virtio/virtgpu_ioctl.c
··· 126 126 void __user *user_bo_handles = NULL; 127 127 struct virtio_gpu_object_array *buflist = NULL; 128 128 struct sync_file *sync_file; 129 - int in_fence_fd = exbuf->fence_fd; 130 129 int out_fence_fd = -1; 131 130 void *buf; 132 131 uint64_t fence_ctx; ··· 151 152 ring_idx = exbuf->ring_idx; 152 153 } 153 154 154 - exbuf->fence_fd = -1; 155 - 156 155 virtio_gpu_create_context(dev, file); 157 156 if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_IN) { 158 157 struct dma_fence *in_fence; 159 158 160 - in_fence = sync_file_get_fence(in_fence_fd); 159 + in_fence = sync_file_get_fence(exbuf->fence_fd); 161 160 162 161 if (!in_fence) 163 162 return -EINVAL;
+8 -4
drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
··· 462 462 return -ENOMEM; 463 463 } 464 464 465 + /* 466 + * vmw_bo_init will delete the *p_bo object if it fails 467 + */ 465 468 ret = vmw_bo_init(vmw, *p_bo, size, 466 469 placement, interruptible, pin, 467 470 bo_free); ··· 473 470 474 471 return ret; 475 472 out_error: 476 - kfree(*p_bo); 477 473 *p_bo = NULL; 478 474 return ret; 479 475 } ··· 598 596 ttm_bo_put(&vmw_bo->base); 599 597 } 600 598 599 + drm_gem_object_put(&vmw_bo->base.base); 601 600 return ret; 602 601 } 603 602 ··· 639 636 640 637 ret = vmw_user_bo_synccpu_grab(vbo, arg->flags); 641 638 vmw_bo_unreference(&vbo); 639 + drm_gem_object_put(&vbo->base.base); 642 640 if (unlikely(ret != 0)) { 643 641 if (ret == -ERESTARTSYS || ret == -EBUSY) 644 642 return -EBUSY; ··· 697 693 * struct vmw_buffer_object should be placed. 698 694 * Return: Zero on success, Negative error code on error. 699 695 * 700 - * The vmw buffer object pointer will be refcounted. 696 + * The vmw buffer object pointer will be refcounted (both ttm and gem) 701 697 */ 702 698 int vmw_user_bo_lookup(struct drm_file *filp, 703 699 uint32_t handle, ··· 714 710 715 711 *out = gem_to_vmw_bo(gobj); 716 712 ttm_bo_get(&(*out)->base); 717 - drm_gem_object_put(gobj); 718 713 719 714 return 0; 720 715 } ··· 794 791 ret = vmw_gem_object_create_with_handle(dev_priv, file_priv, 795 792 args->size, &args->handle, 796 793 &vbo); 797 - 794 + /* drop reference from allocate - handle holds it now */ 795 + drm_gem_object_put(&vbo->base.base); 798 796 return ret; 799 797 } 800 798
+2
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
··· 1160 1160 } 1161 1161 ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo, true, false); 1162 1162 ttm_bo_put(&vmw_bo->base); 1163 + drm_gem_object_put(&vmw_bo->base.base); 1163 1164 if (unlikely(ret != 0)) 1164 1165 return ret; 1165 1166 ··· 1215 1214 } 1216 1215 ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo, false, false); 1217 1216 ttm_bo_put(&vmw_bo->base); 1217 + drm_gem_object_put(&vmw_bo->base.base); 1218 1218 if (unlikely(ret != 0)) 1219 1219 return ret; 1220 1220
+4 -4
drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
··· 146 146 &vmw_sys_placement : 147 147 &vmw_vram_sys_placement, 148 148 true, false, &vmw_gem_destroy, p_vbo); 149 - 150 - (*p_vbo)->base.base.funcs = &vmw_gem_object_funcs; 151 149 if (ret != 0) 152 150 goto out_no_bo; 153 151 152 + (*p_vbo)->base.base.funcs = &vmw_gem_object_funcs; 153 + 154 154 ret = drm_gem_handle_create(filp, &(*p_vbo)->base.base, handle); 155 - /* drop reference from allocate - handle holds it now */ 156 - drm_gem_object_put(&(*p_vbo)->base.base); 157 155 out_no_bo: 158 156 return ret; 159 157 } ··· 178 180 rep->map_handle = drm_vma_node_offset_addr(&vbo->base.base.vma_node); 179 181 rep->cur_gmr_id = handle; 180 182 rep->cur_gmr_offset = 0; 183 + /* drop reference from allocate - handle holds it now */ 184 + drm_gem_object_put(&vbo->base.base); 181 185 out_no_bo: 182 186 return ret; 183 187 }
+3 -1
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 1815 1815 
1816 1816 err_out:
1817 1817 /* vmw_user_lookup_handle takes one ref so does new_fb */
1818 - if (bo)
1818 + if (bo) {
1819 1819 vmw_bo_unreference(&bo);
1820 + drm_gem_object_put(&bo->base.base);
1821 + }
1820 1822 if (surface)
1821 1823 vmw_surface_unreference(&surface);
1822 1824 
+1
drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
··· 458 458 ret = vmw_overlay_update_stream(dev_priv, buf, arg, true);
459 459 
460 460 vmw_bo_unreference(&buf);
461 + drm_gem_object_put(&buf->base.base);
461 462 
462 463 out_unlock:
463 464 mutex_unlock(&overlay->mutex);
+1
drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
··· 807 807 num_output_sig, tfile, shader_handle);
808 808 out_bad_arg:
809 809 vmw_bo_unreference(&buffer);
810 + drm_gem_object_put(&buffer->base.base);
810 811 return ret;
811 812 }
812 813 
+6 -4
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
··· 683 683 container_of(base, struct vmw_user_surface, prime.base);
684 684 struct vmw_resource *res = &user_srf->srf.res;
685 685 
686 - if (base->shareable && res && res->backup)
686 + if (res && res->backup)
687 687 drm_gem_object_put(&res->backup->base.base);
688 688 
689 689 *p_base = NULL;
··· 864 864 goto out_unlock;
865 865 }
866 866 vmw_bo_reference(res->backup);
867 - drm_gem_object_get(&res->backup->base.base);
867 + /*
868 + * We don't expose the handle to the userspace and surface
869 + * already holds a gem reference
870 + */
871 + drm_gem_handle_delete(file_priv, backup_handle);
868 872 }
869 873 
870 874 tmp = vmw_resource_reference(&srf->res);
··· 1572 1568 drm_vma_node_offset_addr(&res->backup->base.base.vma_node);
1573 1569 rep->buffer_size = res->backup->base.base.size;
1574 1570 rep->buffer_handle = backup_handle;
1575 - if (user_srf->prime.base.shareable)
1576 - drm_gem_object_get(&res->backup->base.base);
1577 1571 } else {
1578 1572 rep->buffer_map_handle = 0;
1579 1573 rep->buffer_size = 0;
+4 -4
drivers/infiniband/core/umem_dmabuf.c
··· 26 26 if (umem_dmabuf->sgt)
27 27 goto wait_fence;
28 28 
29 - sgt = dma_buf_map_attachment_unlocked(umem_dmabuf->attach,
30 - DMA_BIDIRECTIONAL);
29 + sgt = dma_buf_map_attachment(umem_dmabuf->attach,
30 + DMA_BIDIRECTIONAL);
31 31 if (IS_ERR(sgt))
32 32 return PTR_ERR(sgt);
33 33 
··· 103 103 umem_dmabuf->last_sg_trim = 0;
104 104 }
105 105 
106 - dma_buf_unmap_attachment_unlocked(umem_dmabuf->attach, umem_dmabuf->sgt,
107 - DMA_BIDIRECTIONAL);
106 + dma_buf_unmap_attachment(umem_dmabuf->attach, umem_dmabuf->sgt,
107 + DMA_BIDIRECTIONAL);
108 108 
109 109 umem_dmabuf->sgt = NULL;
110 110 }
+5 -2
drivers/infiniband/hw/hfi1/file_ops.c
··· 1318 1318 addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
1319 1319 if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
1320 1320 sizeof(tinfo.tidcnt)))
1321 - return -EFAULT;
1321 + ret = -EFAULT;
1322 1322 
1323 1323 addr = arg + offsetof(struct hfi1_tid_info, length);
1324 - if (copy_to_user((void __user *)addr, &tinfo.length,
1324 + if (!ret && copy_to_user((void __user *)addr, &tinfo.length,
1325 1325 sizeof(tinfo.length)))
1326 1326 ret = -EFAULT;
1327 + 
1328 + if (ret)
1329 + hfi1_user_exp_rcv_invalid(fd, &tinfo);
1327 1330 }
1328 1331 
1329 1332 return ret;
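The hfi1 hunk above replaces an early `return -EFAULT` with error accumulation in `ret`, so that the undo step (`hfi1_user_exp_rcv_invalid`) always runs before returning. A small sketch of that pattern under assumed names (`copy_out`, `undo`, `update` are illustrative, not the driver's functions):

```c
#include <assert.h>

/* Hypothetical sketch of the error-accumulation pattern the hunk applies:
 * instead of returning on the first failed copy, record the error, skip the
 * remaining copies, and run cleanup (undo()) before returning. */
static int undone;

static void undo(void)
{
	undone = 1;	/* stands in for hfi1_user_exp_rcv_invalid() */
}

static int copy_out(int fail)
{
	return fail ? -14 /* -EFAULT */ : 0;
}

static int update(int fail_first, int fail_second)
{
	int ret = 0;

	if (copy_out(fail_first))
		ret = -14;		/* was: return -EFAULT */
	if (!ret && copy_out(fail_second))
		ret = -14;
	if (ret)
		undo();			/* cleanup now always runs on error */
	return ret;
}
```

With the early return, a failure in the first copy would have skipped the cleanup entirely; accumulating into `ret` closes that leak.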
+2 -7
drivers/infiniband/hw/hfi1/user_exp_rcv.c
··· 160 160 static int pin_rcv_pages(struct hfi1_filedata *fd, struct tid_user_buf *tidbuf)
161 161 {
162 162 int pinned;
163 - unsigned int npages;
163 + unsigned int npages = tidbuf->npages;
164 164 unsigned long vaddr = tidbuf->vaddr;
165 165 struct page **pages = NULL;
166 166 struct hfi1_devdata *dd = fd->uctxt->dd;
167 - 
168 - /* Get the number of pages the user buffer spans */
169 - npages = num_user_pages(vaddr, tidbuf->length);
170 - if (!npages)
171 - return -EINVAL;
172 167 
173 168 if (npages > fd->uctxt->expected_count) {
174 169 dd_dev_err(dd, "Expected buffer too big\n");
··· 191 196 return pinned;
192 197 }
193 198 tidbuf->pages = pages;
194 - tidbuf->npages = npages;
195 199 fd->tid_n_pinned += pinned;
196 200 return pinned;
197 201 }
··· 268 274 mutex_init(&tidbuf->cover_mutex);
269 275 tidbuf->vaddr = tinfo->vaddr;
270 276 tidbuf->length = tinfo->length;
277 + tidbuf->npages = num_user_pages(tidbuf->vaddr, tidbuf->length);
271 278 tidbuf->psets = kcalloc(uctxt->expected_count, sizeof(*tidbuf->psets),
272 279 GFP_KERNEL);
273 280 if (!tidbuf->psets) {
+3
drivers/infiniband/hw/irdma/cm.c
··· 1722 1722 continue;
1723 1723 
1724 1724 idev = in_dev_get(ip_dev);
1725 + if (!idev)
1726 + continue;
1727 + 
1725 1728 in_dev_for_each_ifa_rtnl(ifa, idev) {
1726 1729 ibdev_dbg(&iwdev->ibdev,
1727 1730 "CM: Allocating child CM Listener forIP=%pI4, vlan_id=%d, MAC=%pM\n",
+1 -1
drivers/infiniband/hw/mana/qp.c
··· 289 289 
290 290 /* IB ports start with 1, MANA Ethernet ports start with 0 */
291 291 port = ucmd.port;
292 - if (ucmd.port > mc->num_ports)
292 + if (port < 1 || port > mc->num_ports)
293 293 return -EINVAL;
294 294 
295 295 if (attr->cap.max_send_wr > MAX_SEND_BUFFERS_PER_QUEUE) {
+4 -4
drivers/infiniband/hw/usnic/usnic_uiom.c
··· 276 276 size = pa_end - pa_start + PAGE_SIZE;
277 277 usnic_dbg("va 0x%lx pa %pa size 0x%zx flags 0x%x",
278 278 va_start, &pa_start, size, flags);
279 - err = iommu_map(pd->domain, va_start, pa_start,
280 - size, flags);
279 + err = iommu_map_atomic(pd->domain, va_start,
280 + pa_start, size, flags);
281 281 if (err) {
282 282 usnic_err("Failed to map va 0x%lx pa %pa size 0x%zx with err %d\n",
283 283 va_start, &pa_start, size, err);
··· 293 293 size = pa - pa_start + PAGE_SIZE;
294 294 usnic_dbg("va 0x%lx pa %pa size 0x%zx flags 0x%x\n",
295 295 va_start, &pa_start, size, flags);
296 - err = iommu_map(pd->domain, va_start, pa_start,
297 - size, flags);
296 + err = iommu_map_atomic(pd->domain, va_start,
297 + pa_start, size, flags);
298 298 if (err) {
299 299 usnic_err("Failed to map va 0x%lx pa %pa size 0x%zx with err %d\n",
300 300 va_start, &pa_start, size, err);
+8
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 2200 2200 rn->attach_mcast = ipoib_mcast_attach;
2201 2201 rn->detach_mcast = ipoib_mcast_detach;
2202 2202 rn->hca = hca;
2203 + 
2204 + rc = netif_set_real_num_tx_queues(dev, 1);
2205 + if (rc)
2206 + goto out;
2207 + 
2208 + rc = netif_set_real_num_rx_queues(dev, 1);
2209 + if (rc)
2210 + goto out;
2203 2211 }
2204 2212 
2205 2213 priv->rn_ops = dev->netdev_ops;
+1 -2
drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
··· 312 312 
313 313 if (srv_path->kobj.state_in_sysfs) {
314 314 sysfs_remove_group(&srv_path->kobj, &rtrs_srv_path_attr_group);
315 - kobject_del(&srv_path->kobj);
316 315 kobject_put(&srv_path->kobj);
316 + rtrs_srv_destroy_once_sysfs_root_folders(srv_path);
317 317 }
318 318 
319 - rtrs_srv_destroy_once_sysfs_root_folders(srv_path);
320 319 }
+3 -3
drivers/net/ethernet/broadcom/bgmac-bcma.c
··· 240 240 bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
241 241 bgmac->feature_flags |= BGMAC_FEAT_FLW_CTRL1;
242 242 bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_PHY;
243 - if (ci->pkg == BCMA_PKG_ID_BCM47188 ||
244 - ci->pkg == BCMA_PKG_ID_BCM47186) {
243 + if ((ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM47186) ||
244 + (ci->id == BCMA_CHIP_ID_BCM53572 && ci->pkg == BCMA_PKG_ID_BCM47188)) {
245 245 bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_RGMII;
246 246 bgmac->feature_flags |= BGMAC_FEAT_IOST_ATTACHED;
247 247 }
248 - if (ci->pkg == BCMA_PKG_ID_BCM5358)
248 + if (ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM5358)
249 249 bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_EPHYRMII;
250 250 break;
251 251 case BCMA_CHIP_ID_BCM53573:
+6 -2
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 9273 9273 netdev_err(bp->dev, "ring reservation/IRQ init failure rc: %d\n", rc);
9274 9274 return rc;
9275 9275 }
9276 - if (tcs && (bp->tx_nr_rings_per_tc * tcs != bp->tx_nr_rings)) {
9276 + if (tcs && (bp->tx_nr_rings_per_tc * tcs !=
9277 + bp->tx_nr_rings - bp->tx_nr_rings_xdp)) {
9277 9278 netdev_err(bp->dev, "tx ring reservation failure\n");
9278 9279 netdev_reset_tc(bp->dev);
9279 - bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
9280 + if (bp->tx_nr_rings_xdp)
9281 + bp->tx_nr_rings_per_tc = bp->tx_nr_rings_xdp;
9282 + else
9283 + bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
9280 9284 return -ENOMEM;
9281 9285 }
9282 9286 return 0;
+3 -1
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 2921 2921 struct i40e_pf *pf = vsi->back;
2922 2922 
2923 2923 if (i40e_enabled_xdp_vsi(vsi)) {
2924 - int frame_size = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
2924 + int frame_size = new_mtu + I40E_PACKET_HDR_PAD;
2925 2925 
2926 2926 if (frame_size > i40e_max_xdp_frame_size(vsi))
2927 2927 return -EINVAL;
··· 13167 13167 }
13168 13168 
13169 13169 br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
13170 + if (!br_spec)
13171 + return -EINVAL;
13170 13172 
13171 13173 nla_for_each_nested(attr, br_spec, rem) {
13172 13174 __u16 mode;
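The second i40e hunk above guards the result of `nlmsg_find_attr` before walking it, since the attribute may be absent and the lookup then returns NULL. A minimal sketch of that guard with hypothetical helper names (`find_attr`, `set_mode` are illustrative, not the netlink API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the NULL-attribute guard added above: a lookup
 * that may return NULL must be checked before its result is dereferenced. */
struct attr { int type; };

static struct attr *find_attr(struct attr *attrs, int n, int type)
{
	for (int i = 0; i < n; i++)
		if (attrs[i].type == type)
			return &attrs[i];
	return NULL;		/* attribute absent */
}

static int set_mode(struct attr *attrs, int n)
{
	/* 26 is just a placeholder tag, playing the role of IFLA_AF_SPEC */
	struct attr *spec = find_attr(attrs, n, 26);

	if (!spec)		/* the guard the patch adds */
		return -22;	/* -EINVAL */
	return spec->type;	/* safe to dereference now */
}
```

Without the guard, a message lacking the attribute would make the nested walk start from a NULL pointer.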
+2 -2
drivers/net/ethernet/intel/ice/ice_devlink.c
··· 927 927 {
928 928 int status;
929 929 
930 - if (node->tx_priority >= 8) {
930 + if (priority >= 8) {
931 931 NL_SET_ERR_MSG_MOD(extack, "Priority should be less than 8");
932 932 return -EINVAL;
933 933 }
··· 957 957 {
958 958 int status;
959 959 
960 - if (node->tx_weight > 200 || node->tx_weight < 1) {
960 + if (weight > 200 || weight < 1) {
961 961 NL_SET_ERR_MSG_MOD(extack, "Weight must be between 1 and 200");
962 962 return -EINVAL;
963 963 }
+26
drivers/net/ethernet/intel/ice/ice_main.c
··· 275 275 if (status && status != -EEXIST)
276 276 return status;
277 277 
278 + netdev_dbg(vsi->netdev, "set promisc filter bits for VSI %i: 0x%x\n",
279 + vsi->vsi_num, promisc_m);
278 280 return 0;
279 281 }
280 282 
··· 302 300 promisc_m, 0);
303 301 }
304 302 
303 + netdev_dbg(vsi->netdev, "clear promisc filter bits for VSI %i: 0x%x\n",
304 + vsi->vsi_num, promisc_m);
305 305 return status;
306 306 }
307 307 
··· 418 414 }
419 415 err = 0;
420 416 vlan_ops->dis_rx_filtering(vsi);
417 + 
418 + /* promiscuous mode implies allmulticast so
419 + * that VSIs that are in promiscuous mode are
420 + * subscribed to multicast packets coming to
421 + * the port
422 + */
423 + err = ice_set_promisc(vsi,
424 + ICE_MCAST_PROMISC_BITS);
425 + if (err)
426 + goto out_promisc;
421 427 }
422 428 } else {
423 429 /* Clear Rx filter to remove traffic from wire */
··· 443 429 if (vsi->netdev->features &
444 430 NETIF_F_HW_VLAN_CTAG_FILTER)
445 431 vlan_ops->ena_rx_filtering(vsi);
432 + }
433 + 
434 + /* disable allmulti here, but only if allmulti is not
435 + * still enabled for the netdev
436 + */
437 + if (!(vsi->current_netdev_flags & IFF_ALLMULTI)) {
438 + err = ice_clear_promisc(vsi,
439 + ICE_MCAST_PROMISC_BITS);
440 + if (err) {
441 + netdev_err(netdev, "Error %d clearing multicast promiscuous on VSI %i\n",
442 + err, vsi->vsi_num);
443 + }
446 444 }
447 445 }
448 446 }
+31 -13
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 598 598 }
599 599 
600 600 /**
601 - * ice_clean_xdp_irq_zc - AF_XDP ZC specific Tx cleaning routine
601 + * ice_clean_xdp_tx_buf - Free and unmap XDP Tx buffer
602 + * @xdp_ring: XDP Tx ring
603 + * @tx_buf: Tx buffer to clean
604 + */
605 + static void
606 + ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf)
607 + {
608 + page_frag_free(tx_buf->raw_buf);
609 + xdp_ring->xdp_tx_active--;
610 + dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma),
611 + dma_unmap_len(tx_buf, len), DMA_TO_DEVICE);
612 + dma_unmap_len_set(tx_buf, len, 0);
613 + }
614 + 
615 + /**
616 + * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ
602 617 * @xdp_ring: XDP Tx ring
603 618 */
604 619 static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
··· 622 607 struct ice_tx_desc *tx_desc;
623 608 u16 cnt = xdp_ring->count;
624 609 struct ice_tx_buf *tx_buf;
610 + u16 completed_frames = 0;
625 611 u16 xsk_frames = 0;
626 612 u16 last_rs;
627 613 int i;
628 614 
629 615 last_rs = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : cnt - 1;
630 616 tx_desc = ICE_TX_DESC(xdp_ring, last_rs);
631 - if (tx_desc->cmd_type_offset_bsz &
632 - cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)) {
617 + if ((tx_desc->cmd_type_offset_bsz &
618 + cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
633 619 if (last_rs >= ntc)
634 - xsk_frames = last_rs - ntc + 1;
620 + completed_frames = last_rs - ntc + 1;
635 621 else
636 - xsk_frames = last_rs + cnt - ntc + 1;
622 + completed_frames = last_rs + cnt - ntc + 1;
637 623 }
638 624 
639 - if (!xsk_frames)
625 + if (!completed_frames)
640 626 return;
641 627 
642 - if (likely(!xdp_ring->xdp_tx_active))
628 + if (likely(!xdp_ring->xdp_tx_active)) {
629 + xsk_frames = completed_frames;
643 630 goto skip;
631 + }
644 632 
645 633 ntc = xdp_ring->next_to_clean;
646 - for (i = 0; i < xsk_frames; i++) {
634 + for (i = 0; i < completed_frames; i++) {
647 635 tx_buf = &xdp_ring->tx_buf[ntc];
648 636 
649 - if (tx_buf->xdp) {
650 - xsk_buff_free(tx_buf->xdp);
651 - xdp_ring->xdp_tx_active--;
637 + if (tx_buf->raw_buf) {
638 + ice_clean_xdp_tx_buf(xdp_ring, tx_buf);
639 + tx_buf->raw_buf = NULL;
652 640 } else {
653 641 xsk_frames++;
654 642 }
655 643 
656 644 ntc++;
657 - if (ntc == cnt)
645 + if (ntc >= xdp_ring->count)
658 646 ntc = 0;
659 647 }
660 648 skip:
661 649 tx_desc->cmd_type_offset_bsz = 0;
662 - xdp_ring->next_to_clean += xsk_frames;
650 + xdp_ring->next_to_clean += completed_frames;
663 651 if (xdp_ring->next_to_clean >= cnt)
664 652 xdp_ring->next_to_clean -= cnt;
665 653 if (xsk_frames)
+38 -16
drivers/net/ethernet/intel/igb/igb_main.c
··· 2256 2256 }
2257 2257 }
2258 2258 
2259 + #ifdef CONFIG_IGB_HWMON
2260 + /**
2261 + * igb_set_i2c_bb - Init I2C interface
2262 + * @hw: pointer to hardware structure
2263 + **/
2264 + static void igb_set_i2c_bb(struct e1000_hw *hw)
2265 + {
2266 + u32 ctrl_ext;
2267 + s32 i2cctl;
2268 + 
2269 + ctrl_ext = rd32(E1000_CTRL_EXT);
2270 + ctrl_ext |= E1000_CTRL_I2C_ENA;
2271 + wr32(E1000_CTRL_EXT, ctrl_ext);
2272 + wrfl();
2273 + 
2274 + i2cctl = rd32(E1000_I2CPARAMS);
2275 + i2cctl |= E1000_I2CBB_EN
2276 + | E1000_I2C_CLK_OE_N
2277 + | E1000_I2C_DATA_OE_N;
2278 + wr32(E1000_I2CPARAMS, i2cctl);
2279 + wrfl();
2280 + }
2281 + #endif
2282 + 
2259 2283 void igb_reset(struct igb_adapter *adapter)
2260 2284 {
2261 2285 struct pci_dev *pdev = adapter->pdev;
··· 2424 2400 * interface.
2425 2401 */
2426 2402 if (adapter->ets)
2427 - mac->ops.init_thermal_sensor_thresh(hw);
2403 + igb_set_i2c_bb(hw);
2404 + mac->ops.init_thermal_sensor_thresh(hw);
2428 2405 }
2429 2406 }
2430 2407 #endif
··· 3166 3141 **/
3167 3142 static s32 igb_init_i2c(struct igb_adapter *adapter)
3168 3143 {
3169 - struct e1000_hw *hw = &adapter->hw;
3170 3144 s32 status = 0;
3171 - s32 i2cctl;
3172 3145 
3173 3146 /* I2C interface supported on i350 devices */
3174 3147 if (adapter->hw.mac.type != e1000_i350)
3175 3148 return 0;
3176 - 
3177 - i2cctl = rd32(E1000_I2CPARAMS);
3178 - i2cctl |= E1000_I2CBB_EN
3179 - | E1000_I2C_CLK_OUT | E1000_I2C_CLK_OE_N
3180 - | E1000_I2C_DATA_OUT | E1000_I2C_DATA_OE_N;
3181 - wr32(E1000_I2CPARAMS, i2cctl);
3182 - wrfl();
3183 3149 
3184 3150 /* Initialize the i2c bus which is controlled by the registers.
3185 3151 * This bus will use the i2c_algo_bit structure that implements
··· 3560 3544 adapter->ets = true;
3561 3545 else
3562 3546 adapter->ets = false;
3547 + /* Only enable I2C bit banging if an external thermal
3548 + * sensor is supported.
3549 + */
3550 + if (adapter->ets)
3551 + igb_set_i2c_bb(hw);
3552 + hw->mac.ops.init_thermal_sensor_thresh(hw);
3563 3553 if (igb_sysfs_init(adapter))
3564 3554 dev_err(&pdev->dev,
3565 3555 "failed to allocate sysfs resources\n");
··· 6836 6814 struct timespec64 ts;
6837 6815 u32 tsauxc;
6838 6816 
6839 - if (pin < 0 || pin >= IGB_N_PEROUT)
6817 + if (pin < 0 || pin >= IGB_N_SDP)
6840 6818 return;
6841 6819 
6842 6820 spin_lock(&adapter->tmreg_lock);
··· 6844 6822 if (hw->mac.type == e1000_82580 ||
6845 6823 hw->mac.type == e1000_i354 ||
6846 6824 hw->mac.type == e1000_i350) {
6847 - s64 ns = timespec64_to_ns(&adapter->perout[pin].period);
6825 + s64 ns = timespec64_to_ns(&adapter->perout[tsintr_tt].period);
6848 6826 u32 systiml, systimh, level_mask, level, rem;
6849 6827 u64 systim, now;
6850 6828 
··· 6892 6870 ts.tv_nsec = (u32)systim;
6893 6871 ts.tv_sec = ((u32)(systim >> 32)) & 0xFF;
6894 6872 } else {
6895 - ts = timespec64_add(adapter->perout[pin].start,
6896 - adapter->perout[pin].period);
6873 + ts = timespec64_add(adapter->perout[tsintr_tt].start,
6874 + adapter->perout[tsintr_tt].period);
6897 6875 }
6898 6876 
6899 6877 /* u32 conversion of tv_sec is safe until y2106 */
··· 6902 6880 tsauxc = rd32(E1000_TSAUXC);
6903 6881 tsauxc |= TSAUXC_EN_TT0;
6904 6882 wr32(E1000_TSAUXC, tsauxc);
6905 - adapter->perout[pin].start = ts;
6883 + adapter->perout[tsintr_tt].start = ts;
6906 6884 
6907 6885 spin_unlock(&adapter->tmreg_lock);
6908 6886 }
··· 6916 6894 struct ptp_clock_event event;
6917 6895 struct timespec64 ts;
6918 6896 
6919 - if (pin < 0 || pin >= IGB_N_EXTTS)
6897 + if (pin < 0 || pin >= IGB_N_SDP)
6920 6898 return;
6921 6899 
6922 6900 if (hw->mac.type == e1000_82580 ||
+2
drivers/net/ethernet/intel/ixgbe/ixgbe.h
··· 73 73 #define IXGBE_RXBUFFER_4K 4096
74 74 #define IXGBE_MAX_RXBUFFER 16384 /* largest size for a single descriptor */
75 75 
76 + #define IXGBE_PKT_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
77 + 
76 78 /* Attempt to maximize the headroom available for incoming frames. We
77 79 * use a 2K buffer for receives and need 1536/1534 to store the data for
78 80 * the frame. This leaves us with 512 bytes of room. From that we need
+17 -11
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 6778 6778 }
6779 6779 
6780 6780 /**
6781 + * ixgbe_max_xdp_frame_size - returns the maximum allowed frame size for XDP
6782 + * @adapter: device handle, pointer to adapter
6783 + */
6784 + static int ixgbe_max_xdp_frame_size(struct ixgbe_adapter *adapter)
6785 + {
6786 + if (PAGE_SIZE >= 8192 || adapter->flags2 & IXGBE_FLAG2_RX_LEGACY)
6787 + return IXGBE_RXBUFFER_2K;
6788 + else
6789 + return IXGBE_RXBUFFER_3K;
6790 + }
6791 + 
6792 + /**
6781 6793 * ixgbe_change_mtu - Change the Maximum Transfer Unit
6782 6794 * @netdev: network interface device structure
6783 6795 * @new_mtu: new value for maximum frame size
··· 6800 6788 {
6801 6789 struct ixgbe_adapter *adapter = netdev_priv(netdev);
6802 6790 
6803 - if (adapter->xdp_prog) {
6804 - int new_frame_size = new_mtu + ETH_HLEN + ETH_FCS_LEN +
6805 - VLAN_HLEN;
6806 - int i;
6791 + if (ixgbe_enabled_xdp_adapter(adapter)) {
6792 + int new_frame_size = new_mtu + IXGBE_PKT_HDR_PAD;
6807 6793 
6808 - for (i = 0; i < adapter->num_rx_queues; i++) {
6809 - struct ixgbe_ring *ring = adapter->rx_ring[i];
6810 - 
6811 - if (new_frame_size > ixgbe_rx_bufsz(ring)) {
6812 - e_warn(probe, "Requested MTU size is not supported with XDP\n");
6813 - return -EINVAL;
6814 - }
6794 + if (new_frame_size > ixgbe_max_xdp_frame_size(adapter)) {
6795 + e_warn(probe, "Requested MTU size is not supported with XDP\n");
6796 + return -EINVAL;
6815 6797 }
6816 6798 }
6817 6799 
+28 -17
drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
··· 130 130 };
131 131 };
132 132 
133 - static int nfp_ipsec_cfg_cmd_issue(struct nfp_net *nn, int type, int saidx,
134 - struct nfp_ipsec_cfg_mssg *msg)
133 + static int nfp_net_ipsec_cfg(struct nfp_net *nn, struct nfp_mbox_amsg_entry *entry)
135 134 {
135 + unsigned int offset = nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL;
136 + struct nfp_ipsec_cfg_mssg *msg = (struct nfp_ipsec_cfg_mssg *)entry->msg;
136 137 int i, msg_size, ret;
137 138 
138 - msg->cmd = type;
139 - msg->sa_idx = saidx;
140 - msg->rsp = 0;
141 - msg_size = ARRAY_SIZE(msg->raw);
142 - 
143 - for (i = 0; i < msg_size; i++)
144 - nn_writel(nn, NFP_NET_CFG_MBOX_VAL + 4 * i, msg->raw[i]);
145 - 
146 - ret = nfp_net_mbox_reconfig(nn, NFP_NET_CFG_MBOX_CMD_IPSEC);
147 - if (ret < 0)
139 + ret = nfp_net_mbox_lock(nn, sizeof(*msg));
140 + if (ret)
148 141 return ret;
142 + 
143 + msg_size = ARRAY_SIZE(msg->raw);
144 + for (i = 0; i < msg_size; i++)
145 + nn_writel(nn, offset + 4 * i, msg->raw[i]);
146 + 
147 + ret = nfp_net_mbox_reconfig(nn, entry->cmd);
148 + if (ret < 0) {
149 + nn_ctrl_bar_unlock(nn);
150 + return ret;
151 + }
149 152 
150 153 /* For now we always read the whole message response back */
151 154 for (i = 0; i < msg_size; i++)
152 - msg->raw[i] = nn_readl(nn, NFP_NET_CFG_MBOX_VAL + 4 * i);
155 + msg->raw[i] = nn_readl(nn, offset + 4 * i);
156 + 
157 + nn_ctrl_bar_unlock(nn);
153 158 
154 159 switch (msg->rsp) {
155 160 case NFP_IPSEC_CFG_MSSG_OK:
··· 492 487 }
493 488 
494 489 /* Allocate saidx and commit the SA */
495 - err = nfp_ipsec_cfg_cmd_issue(nn, NFP_IPSEC_CFG_MSSG_ADD_SA, saidx, &msg);
490 + msg.cmd = NFP_IPSEC_CFG_MSSG_ADD_SA;
491 + msg.sa_idx = saidx;
492 + err = nfp_net_sched_mbox_amsg_work(nn, NFP_NET_CFG_MBOX_CMD_IPSEC, &msg,
493 + sizeof(msg), nfp_net_ipsec_cfg);
496 494 if (err) {
497 495 xa_erase(&nn->xa_ipsec, saidx);
498 496 NL_SET_ERR_MSG_MOD(extack, "Failed to issue IPsec command");
··· 509 501 
510 502 static void nfp_net_xfrm_del_state(struct xfrm_state *x)
511 503 {
504 + struct nfp_ipsec_cfg_mssg msg = {
505 + .cmd = NFP_IPSEC_CFG_MSSG_INV_SA,
506 + .sa_idx = x->xso.offload_handle - 1,
507 + };
512 508 struct net_device *netdev = x->xso.dev;
513 - struct nfp_ipsec_cfg_mssg msg;
514 509 struct nfp_net *nn;
515 510 int err;
516 511 
517 512 nn = netdev_priv(netdev);
518 - err = nfp_ipsec_cfg_cmd_issue(nn, NFP_IPSEC_CFG_MSSG_INV_SA,
519 - x->xso.offload_handle - 1, &msg);
513 + err = nfp_net_sched_mbox_amsg_work(nn, NFP_NET_CFG_MBOX_CMD_IPSEC, &msg,
514 + sizeof(msg), nfp_net_ipsec_cfg);
520 515 if (err)
521 516 nn_warn(nn, "Failed to invalidate SA in hardware\n");
522 517 
+19 -6
drivers/net/ethernet/netronome/nfp/nfp_net.h
··· 617 617 * @vnic_no_name: For non-port PF vNIC make ndo_get_phys_port_name return
618 618 * -EOPNOTSUPP to keep backwards compatibility (set by app)
619 619 * @port: Pointer to nfp_port structure if vNIC is a port
620 - * @mc_lock: Protect mc_addrs list
621 - * @mc_addrs: List of mc addrs to add/del to HW
622 - * @mc_work: Work to update mc addrs
620 + * @mbox_amsg: Asynchronously processed message via mailbox
621 + * @mbox_amsg.lock: Protect message list
622 + * @mbox_amsg.list: List of message to process
623 + * @mbox_amsg.work: Work to process message asynchronously
623 624 * @app_priv: APP private data for this vNIC
624 625 */
625 626 struct nfp_net {
··· 722 721 
723 722 struct nfp_port *port;
724 723 
725 - spinlock_t mc_lock;
726 - struct list_head mc_addrs;
727 - struct work_struct mc_work;
724 + struct {
725 + spinlock_t lock;
726 + struct list_head list;
727 + struct work_struct work;
728 + } mbox_amsg;
728 729 
729 730 void *app_priv;
730 731 };
732 + 
733 + struct nfp_mbox_amsg_entry {
734 + struct list_head list;
735 + int (*cfg)(struct nfp_net *nn, struct nfp_mbox_amsg_entry *entry);
736 + u32 cmd;
737 + char msg[];
738 + };
739 + 
740 + int nfp_net_sched_mbox_amsg_work(struct nfp_net *nn, u32 cmd, const void *data, size_t len,
741 + int (*cb)(struct nfp_net *, struct nfp_mbox_amsg_entry *));
731 742 
732 743 /* Functions to read/write from/to a BAR
733 744 * Performs any endian conversion necessary.
+56 -54
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
··· 1334 1334 return err;
1335 1335 }
1336 1336 
1337 - struct nfp_mc_addr_entry {
1338 - u8 addr[ETH_ALEN];
1339 - u32 cmd;
1340 - struct list_head list;
1341 - };
1342 - 
1343 - static int nfp_net_mc_cfg(struct nfp_net *nn, const unsigned char *addr, const u32 cmd)
1337 + int nfp_net_sched_mbox_amsg_work(struct nfp_net *nn, u32 cmd, const void *data, size_t len,
1338 + int (*cb)(struct nfp_net *, struct nfp_mbox_amsg_entry *))
1344 1339 {
1340 + struct nfp_mbox_amsg_entry *entry;
1341 + 
1342 + entry = kmalloc(sizeof(*entry) + len, GFP_ATOMIC);
1343 + if (!entry)
1344 + return -ENOMEM;
1345 + 
1346 + memcpy(entry->msg, data, len);
1347 + entry->cmd = cmd;
1348 + entry->cfg = cb;
1349 + 
1350 + spin_lock_bh(&nn->mbox_amsg.lock);
1351 + list_add_tail(&entry->list, &nn->mbox_amsg.list);
1352 + spin_unlock_bh(&nn->mbox_amsg.lock);
1353 + 
1354 + schedule_work(&nn->mbox_amsg.work);
1355 + 
1356 + return 0;
1357 + }
1358 + 
1359 + static void nfp_net_mbox_amsg_work(struct work_struct *work)
1360 + {
1361 + struct nfp_net *nn = container_of(work, struct nfp_net, mbox_amsg.work);
1362 + struct nfp_mbox_amsg_entry *entry, *tmp;
1363 + struct list_head tmp_list;
1364 + 
1365 + INIT_LIST_HEAD(&tmp_list);
1366 + 
1367 + spin_lock_bh(&nn->mbox_amsg.lock);
1368 + list_splice_init(&nn->mbox_amsg.list, &tmp_list);
1369 + spin_unlock_bh(&nn->mbox_amsg.lock);
1370 + 
1371 + list_for_each_entry_safe(entry, tmp, &tmp_list, list) {
1372 + int err = entry->cfg(nn, entry);
1373 + 
1374 + if (err)
1375 + nn_err(nn, "Config cmd %d to HW failed %d.\n", entry->cmd, err);
1376 + 
1377 + list_del(&entry->list);
1378 + kfree(entry);
1379 + }
1380 + }
1381 + 
1382 + static int nfp_net_mc_cfg(struct nfp_net *nn, struct nfp_mbox_amsg_entry *entry)
1383 + {
1384 + unsigned char *addr = entry->msg;
1345 1385 int ret;
1346 1386 
1347 1387 ret = nfp_net_mbox_lock(nn, NFP_NET_CFG_MULTICAST_SZ);
··· 1393 1353 nn_writew(nn, nn->tlv_caps.mbox_off + NFP_NET_CFG_MULTICAST_MAC_LO,
1394 1354 get_unaligned_be16(addr + 4));
1395 1355 
1396 - return nfp_net_mbox_reconfig_and_unlock(nn, cmd);
1397 - }
1398 - 
1399 - static int nfp_net_mc_prep(struct nfp_net *nn, const unsigned char *addr, const u32 cmd)
1400 - {
1401 - struct nfp_mc_addr_entry *entry;
1402 - 
1403 - entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
1404 - if (!entry)
1405 - return -ENOMEM;
1406 - 
1407 - ether_addr_copy(entry->addr, addr);
1408 - entry->cmd = cmd;
1409 - spin_lock_bh(&nn->mc_lock);
1410 - list_add_tail(&entry->list, &nn->mc_addrs);
1411 - spin_unlock_bh(&nn->mc_lock);
1412 - 
1413 - schedule_work(&nn->mc_work);
1414 - 
1415 - return 0;
1356 + return nfp_net_mbox_reconfig_and_unlock(nn, entry->cmd);
1416 1357 }
1417 1358 
1418 1359 static int nfp_net_mc_sync(struct net_device *netdev, const unsigned char *addr)
··· 1406 1385 return -EINVAL;
1407 1386 }
1408 1387 
1409 - return nfp_net_mc_prep(nn, addr, NFP_NET_CFG_MBOX_CMD_MULTICAST_ADD);
1388 + return nfp_net_sched_mbox_amsg_work(nn, NFP_NET_CFG_MBOX_CMD_MULTICAST_ADD, addr,
1389 + NFP_NET_CFG_MULTICAST_SZ, nfp_net_mc_cfg);
1410 1390 }
1411 1391 
1412 1392 static int nfp_net_mc_unsync(struct net_device *netdev, const unsigned char *addr)
1413 1393 {
1414 1394 struct nfp_net *nn = netdev_priv(netdev);
1415 1395 
1416 - return nfp_net_mc_prep(nn, addr, NFP_NET_CFG_MBOX_CMD_MULTICAST_DEL);
1417 - }
1418 - 
1419 - static void nfp_net_mc_addr_config(struct work_struct *work)
1420 - {
1421 - struct nfp_net *nn = container_of(work, struct nfp_net, mc_work);
1422 - struct nfp_mc_addr_entry *entry, *tmp;
1423 - struct list_head tmp_list;
1424 - 
1425 - INIT_LIST_HEAD(&tmp_list);
1426 - 
1427 - spin_lock_bh(&nn->mc_lock);
1428 - list_splice_init(&nn->mc_addrs, &tmp_list);
1429 - spin_unlock_bh(&nn->mc_lock);
1430 - 
1431 - list_for_each_entry_safe(entry, tmp, &tmp_list, list) {
1432 - if (nfp_net_mc_cfg(nn, entry->addr, entry->cmd))
1433 - nn_err(nn, "Config mc address to HW failed.\n");
1434 - 
1435 - list_del(&entry->list);
1436 - kfree(entry);
1437 - }
1396 + return nfp_net_sched_mbox_amsg_work(nn, NFP_NET_CFG_MBOX_CMD_MULTICAST_DEL, addr,
1397 + NFP_NET_CFG_MULTICAST_SZ, nfp_net_mc_cfg);
1438 1398 }
1439 1399 
1440 1400 static void nfp_net_set_rx_mode(struct net_device *netdev)
··· 2688 2686 if (!nn->dp.netdev)
2689 2687 return 0;
2690 2688 
2691 - spin_lock_init(&nn->mc_lock);
2692 - INIT_LIST_HEAD(&nn->mc_addrs);
2693 - INIT_WORK(&nn->mc_work, nfp_net_mc_addr_config);
2689 + spin_lock_init(&nn->mbox_amsg.lock);
2690 + INIT_LIST_HEAD(&nn->mbox_amsg.list);
2691 + INIT_WORK(&nn->mbox_amsg.work, nfp_net_mbox_amsg_work);
2694 2692 
2695 2693 return register_netdev(nn->dp.netdev);
2696 2694 
··· 2711 2709 unregister_netdev(nn->dp.netdev);
2712 2710 nfp_net_ipsec_clean(nn);
2713 2711 nfp_ccm_mbox_clean(nn);
2714 - flush_work(&nn->mc_work);
2712 + flush_work(&nn->mbox_amsg.work);
2715 2713 nfp_net_reconfig_wait_posted(nn);
2716 2714 }
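The nfp changes above generalize a multicast-only deferred list into a generic queued message that carries its own handler callback, so one worker loop can service several mailbox message types (multicast and IPsec). A self-contained userspace sketch of that queue-plus-callback pattern, with invented names (`amsg_entry`, `sched_amsg`, `amsg_work`) and no locking, unlike the real driver, which takes `mbox_amsg.lock` around list operations:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the pattern the nfp change introduces: a queued
 * message that carries its own handler, drained by a single worker. */
struct amsg_entry {
	struct amsg_entry *next;
	int (*cfg)(struct amsg_entry *entry);
	unsigned int cmd;
	char msg[];			/* flexible payload, as in the driver */
};

static struct amsg_entry *head;
static struct amsg_entry **tail = &head;
static int handled;

static int sched_amsg(unsigned int cmd, const void *data, size_t len,
		      int (*cb)(struct amsg_entry *))
{
	struct amsg_entry *e = malloc(sizeof(*e) + len);

	if (!e)
		return -12;		/* -ENOMEM */
	memcpy(e->msg, data, len);
	e->cmd = cmd;
	e->cfg = cb;
	e->next = NULL;
	*tail = e;			/* append; the real code holds a lock here */
	tail = &e->next;
	return 0;
}

/* Worker: drain the list, dispatching each entry to its own handler. */
static void amsg_work(void)
{
	while (head) {
		struct amsg_entry *e = head;

		head = e->next;
		e->cfg(e);		/* per-message-type callback */
		free(e);
	}
	tail = &head;
}

static int count_cfg(struct amsg_entry *e)
{
	handled += e->cmd;		/* toy handler for the test below */
	return 0;
}
```

The callback field is what lets new message types (such as the IPsec commands above) reuse the same queue and worker without new infrastructure.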
-1
drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
··· 403 403 */
404 404 #define NFP_NET_CFG_MBOX_BASE 0x1800
405 405 #define NFP_NET_CFG_MBOX_VAL_MAX_SZ 0x1F8
406 - #define NFP_NET_CFG_MBOX_VAL 0x1808
407 406 #define NFP_NET_CFG_MBOX_SIMPLE_CMD 0x0
408 407 #define NFP_NET_CFG_MBOX_SIMPLE_RET 0x4
409 408 #define NFP_NET_CFG_MBOX_SIMPLE_VAL 0x8
+2 -1
drivers/net/ethernet/stmicro/stmmac/dwmac5.c
··· 541 541 return 0;
542 542 }
543 543 
544 - val |= PPSCMDx(index, 0x2);
545 544 val |= TRGTMODSELx(index, 0x2);
546 545 val |= PPSEN0;
546 + writel(val, ioaddr + MAC_PPS_CONTROL);
547 547 
548 548 writel(cfg->start.tv_sec, ioaddr + MAC_PPSx_TARGET_TIME_SEC(index));
549 549 
··· 568 568 writel(period - 1, ioaddr + MAC_PPSx_WIDTH(index));
569 569 
570 570 /* Finally, activate it */
571 + val |= PPSCMDx(index, 0x2);
571 572 writel(val, ioaddr + MAC_PPS_CONTROL);
572 573 return 0;
573 574 }
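The dwmac5 hunk above moves the `PPSCMDx` start command out of the initial control write: mode/enable bits and the target/period registers are programmed first, and the command bit is only set in the final write, so the PPS engine never starts on a half-programmed configuration. A toy sketch of that arm-last ordering, with all names and bit values invented for illustration (a fake latch stands in for the hardware):

```c
#include <assert.h>

/* Hypothetical register-ordering sketch of the dwmac5 fix above. */
static unsigned int ctrl, target, period;
static int armed_with_period;

static void writel_ctrl(unsigned int v)
{
	ctrl = v;
	/* the "hardware" latches a start command (bit 0x2) immediately */
	if (ctrl & 0x2)
		armed_with_period = (period != 0);
}

static void pps_program(unsigned int t, unsigned int p)
{
	unsigned int val = 0x10;	/* pretend enable + mode bits */

	writel_ctrl(val);		/* config write: no command bit yet */
	target = t;			/* timing registers programmed next */
	period = p;
	val |= 0x2;			/* start command, written last */
	writel_ctrl(val);
}
```

Had the command bit been in the first write, the fake hardware would have armed with `period == 0`, which is exactly the class of bug the patch avoids.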
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 559 559 dma_cfg->mixed_burst = of_property_read_bool(np, "snps,mixed-burst");
560 560 
561 561 plat->force_thresh_dma_mode = of_property_read_bool(np, "snps,force_thresh_dma_mode");
562 - if (plat->force_thresh_dma_mode) {
562 + if (plat->force_thresh_dma_mode && plat->force_sf_dma_mode) {
563 563 plat->force_sf_dma_mode = 0;
564 564 dev_warn(&pdev->dev,
565 565 "force_sf_dma_mode is ignored if force_thresh_dma_mode is set.\n");
+11 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 501 501 k3_udma_glue_disable_tx_chn(common->tx_chns[i].tx_chn);
502 502 }
503 503 
504 + reinit_completion(&common->tdown_complete);
504 505 k3_udma_glue_tdown_rx_chn(common->rx_chns.rx_chn, true);
506 + 
507 + if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ) {
508 + i = wait_for_completion_timeout(&common->tdown_complete, msecs_to_jiffies(1000));
509 + if (!i)
510 + dev_err(common->dev, "rx teardown timeout\n");
511 + }
512 + 
505 513 napi_disable(&common->napi_rx);
506 514 
507 515 for (i = 0; i < AM65_CPSW_MAX_RX_FLOWS; i++)
··· 729 721 
730 722 if (cppi5_desc_is_tdcm(desc_dma)) {
731 723 dev_dbg(dev, "%s RX tdown flow: %u\n", __func__, flow_idx);
724 + if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ)
725 + complete(&common->tdown_complete);
732 726 return 0;
733 727 }
734 728 
··· 2746 2736 };
2747 2737 
2748 2738 static const struct am65_cpsw_pdata am64x_cpswxg_pdata = {
2749 - .quirks = 0,
2739 + .quirks = AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ,
2750 2740 .ale_dev_id = "am64-cpswxg",
2751 2741 .fdqring_mode = K3_RINGACC_RING_MODE_RING,
2752 2742 };
+1
drivers/net/ethernet/ti/am65-cpsw-nuss.h
··· 91 91 };
92 92 
93 93 #define AM65_CPSW_QUIRK_I2027_NO_TX_CSUM BIT(0)
94 + #define AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ BIT(1)
94 95 
95 96 struct am65_cpsw_pdata {
96 97 u32 quirks;
+4 -4
drivers/net/usb/kalmia.c
··· 65 65 init_msg, init_msg_len, &act_len, KALMIA_USB_TIMEOUT);
66 66 if (status != 0) {
67 67 netdev_err(dev->net,
68 - "Error sending init packet. Status %i, length %i\n",
69 - status, act_len);
68 + "Error sending init packet. Status %i\n",
69 + status);
70 70 return status;
71 71 }
72 72 else if (act_len != init_msg_len) {
··· 83 83 
84 84 if (status != 0)
85 85 netdev_err(dev->net,
86 - "Error receiving init result. Status %i, length %i\n",
87 - status, act_len);
86 + "Error receiving init result. Status %i\n",
87 + status);
88 88 else if (act_len != expected_len)
89 89 netdev_err(dev->net, "Unexpected init result length: %i\n",
90 90 act_len);
+25 -25
drivers/net/vmxnet3/vmxnet3_drv.c
··· 1546 1546 rxd->len = rbi->len;
1547 1547 }
1548 1548 
1549 - #ifdef VMXNET3_RSS
1550 - if (rcd->rssType != VMXNET3_RCD_RSS_TYPE_NONE &&
1551 - (adapter->netdev->features & NETIF_F_RXHASH)) {
1552 - enum pkt_hash_types hash_type;
1553 - 
1554 - switch (rcd->rssType) {
1555 - case VMXNET3_RCD_RSS_TYPE_IPV4:
1556 - case VMXNET3_RCD_RSS_TYPE_IPV6:
1557 - hash_type = PKT_HASH_TYPE_L3;
1558 - break;
1559 - case VMXNET3_RCD_RSS_TYPE_TCPIPV4:
1560 - case VMXNET3_RCD_RSS_TYPE_TCPIPV6:
1561 - case VMXNET3_RCD_RSS_TYPE_UDPIPV4:
1562 - case VMXNET3_RCD_RSS_TYPE_UDPIPV6:
1563 - hash_type = PKT_HASH_TYPE_L4;
1564 - break;
1565 - default:
1566 - hash_type = PKT_HASH_TYPE_L3;
1567 - break;
1568 - }
1569 - skb_set_hash(ctx->skb,
1570 - le32_to_cpu(rcd->rssHash),
1571 - hash_type);
1572 - }
1573 - #endif
1574 1549 skb_record_rx_queue(ctx->skb, rq->qid);
1575 1550 skb_put(ctx->skb, rcd->len);
1576 1551 
··· 1628 1653 u32 mtu = adapter->netdev->mtu;
1629 1654 skb->len += skb->data_len;
1630 1655 
1656 + #ifdef VMXNET3_RSS
1657 + if (rcd->rssType != VMXNET3_RCD_RSS_TYPE_NONE &&
1658 + (adapter->netdev->features & NETIF_F_RXHASH)) {
1659 + enum pkt_hash_types hash_type;
1660 + 
1661 + switch (rcd->rssType) {
1662 + case VMXNET3_RCD_RSS_TYPE_IPV4:
1663 + case VMXNET3_RCD_RSS_TYPE_IPV6:
1664 + hash_type = PKT_HASH_TYPE_L3;
1665 + break;
1666 + case VMXNET3_RCD_RSS_TYPE_TCPIPV4:
1667 + case VMXNET3_RCD_RSS_TYPE_TCPIPV6:
1668 + case VMXNET3_RCD_RSS_TYPE_UDPIPV4:
1669 + case VMXNET3_RCD_RSS_TYPE_UDPIPV6:
1670 + hash_type = PKT_HASH_TYPE_L4;
1671 + break;
1672 + default:
1673 + hash_type = PKT_HASH_TYPE_L3;
1674 + break;
1675 + }
1676 + skb_set_hash(skb,
1677 + le32_to_cpu(rcd->rssHash),
1678 + hash_type);
1679 + }
1680 + #endif
1631 1681 vmxnet3_rx_csum(adapter, skb,
1632 1682 (union Vmxnet3_GenericDesc *)rcd);
1633 1683 skb->protocol = eth_type_trans(skb, adapter->netdev);
+19
drivers/nvdimm/Kconfig
··· 102 102 depends on ENCRYPTED_KEYS 103 103 depends on (LIBNVDIMM=ENCRYPTED_KEYS) || LIBNVDIMM=m 104 104 105 + config NVDIMM_KMSAN 106 + bool 107 + depends on KMSAN 108 + help 109 + KMSAN, and other memory debug facilities, increase the size of 110 + 'struct page' to contain extra metadata. This collides with 111 + the NVDIMM capability to store a potentially 112 + larger-than-"System RAM" size 'struct page' array in a 113 + reservation of persistent memory rather than limited / 114 + precious DRAM. However, that reservation needs to persist for 115 + the life of the given NVDIMM namespace. If you are using KMSAN 116 + to debug an issue unrelated to NVDIMMs or DAX then say N to this 117 + option. Otherwise, say Y but understand that any namespaces 118 + (with the page array stored in pmem) created with this build of 119 + the kernel will permanently reserve and strand excess 120 + capacity compared to the CONFIG_KMSAN=n case. 121 + 122 + Select N if unsure. 123 + 105 124 config NVDIMM_TEST_BUILD 106 125 tristate "Build the unit test core" 107 126 depends on m
+1 -1
drivers/nvdimm/nd.h
··· 652 652 struct nd_namespace_common *ndns); 653 653 #if IS_ENABLED(CONFIG_ND_CLAIM) 654 654 /* max struct page size independent of kernel config */ 655 - #define MAX_STRUCT_PAGE_SIZE 128 655 + #define MAX_STRUCT_PAGE_SIZE 64 656 656 int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap); 657 657 #else 658 658 static inline int nvdimm_setup_pfn(struct nd_pfn *nd_pfn,
+27 -15
drivers/nvdimm/pfn_devs.c
··· 13 13 #include "pfn.h" 14 14 #include "nd.h" 15 15 16 + static const bool page_struct_override = IS_ENABLED(CONFIG_NVDIMM_KMSAN); 17 + 16 18 static void nd_pfn_release(struct device *dev) 17 19 { 18 20 struct nd_region *nd_region = to_nd_region(dev->parent); ··· 760 758 return -ENXIO; 761 759 } 762 760 763 - /* 764 - * Note, we use 64 here for the standard size of struct page, 765 - * debugging options may cause it to be larger in which case the 766 - * implementation will limit the pfns advertised through 767 - * ->direct_access() to those that are included in the memmap. 768 - */ 769 761 start = nsio->res.start; 770 762 size = resource_size(&nsio->res); 771 763 npfns = PHYS_PFN(size - SZ_8K); ··· 778 782 } 779 783 end_trunc = start + size - ALIGN_DOWN(start + size, align); 780 784 if (nd_pfn->mode == PFN_MODE_PMEM) { 785 + unsigned long page_map_size = MAX_STRUCT_PAGE_SIZE * npfns; 786 + 781 787 /* 782 788 * The altmap should be padded out to the block size used 783 789 * when populating the vmemmap. This *should* be equal to 784 790 * PMD_SIZE for most architectures. 785 791 * 786 - * Also make sure size of struct page is less than 128. We 787 - * want to make sure we use large enough size here so that 788 - * we don't have a dynamic reserve space depending on 789 - * struct page size. But we also want to make sure we notice 790 - * when we end up adding new elements to struct page. 792 + * Also make sure size of struct page is less than 793 + * MAX_STRUCT_PAGE_SIZE. The goal here is compatibility in the 794 + * face of production kernel configurations that reduce the 795 + * 'struct page' size below MAX_STRUCT_PAGE_SIZE. For debug 796 + * kernel configurations that increase the 'struct page' size 797 + * above MAX_STRUCT_PAGE_SIZE, the page_struct_override allows 798 + * for continuing with the capacity that will be wasted when 799 + * reverting to a production kernel configuration. Otherwise, 800 + * those configurations are blocked by default. 
791 801 */ 792 - BUILD_BUG_ON(sizeof(struct page) > MAX_STRUCT_PAGE_SIZE); 793 - offset = ALIGN(start + SZ_8K + MAX_STRUCT_PAGE_SIZE * npfns, align) 794 - - start; 802 + if (sizeof(struct page) > MAX_STRUCT_PAGE_SIZE) { 803 + if (page_struct_override) 804 + page_map_size = sizeof(struct page) * npfns; 805 + else { 806 + dev_err(&nd_pfn->dev, 807 + "Memory debug options prevent using pmem for the page map\n"); 808 + return -EINVAL; 809 + } 810 + } 811 + offset = ALIGN(start + SZ_8K + page_map_size, align) - start; 795 812 } else if (nd_pfn->mode == PFN_MODE_RAM) 796 813 offset = ALIGN(start + SZ_8K, align) - start; 797 814 else ··· 827 818 pfn_sb->version_minor = cpu_to_le16(4); 828 819 pfn_sb->end_trunc = cpu_to_le32(end_trunc); 829 820 pfn_sb->align = cpu_to_le32(nd_pfn->align); 830 - pfn_sb->page_struct_size = cpu_to_le16(MAX_STRUCT_PAGE_SIZE); 821 + if (sizeof(struct page) > MAX_STRUCT_PAGE_SIZE && page_struct_override) 822 + pfn_sb->page_struct_size = cpu_to_le16(sizeof(struct page)); 823 + else 824 + pfn_sb->page_struct_size = cpu_to_le16(MAX_STRUCT_PAGE_SIZE); 831 825 pfn_sb->page_size = cpu_to_le32(PAGE_SIZE); 832 826 checksum = nd_sb_checksum((struct nd_gen_sb *) pfn_sb); 833 827 pfn_sb->checksum = cpu_to_le64(checksum);
+1 -1
drivers/nvme/host/auth.c
··· 45 45 int sess_key_len; 46 46 }; 47 47 48 - struct workqueue_struct *nvme_auth_wq; 48 + static struct workqueue_struct *nvme_auth_wq; 49 49 50 50 #define nvme_auth_flags_from_qid(qid) \ 51 51 (qid == 0) ? 0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED
+10 -10
drivers/nvme/host/pci.c
··· 2509 2509 { 2510 2510 int result = -ENOMEM; 2511 2511 struct pci_dev *pdev = to_pci_dev(dev->dev); 2512 - int dma_address_bits = 64; 2513 2512 2514 2513 if (pci_enable_device_mem(pdev)) 2515 2514 return result; 2516 2515 2517 2516 pci_set_master(pdev); 2518 - 2519 - if (dev->ctrl.quirks & NVME_QUIRK_DMA_ADDRESS_BITS_48) 2520 - dma_address_bits = 48; 2521 - if (dma_set_mask_and_coherent(dev->dev, DMA_BIT_MASK(dma_address_bits))) 2522 - goto disable; 2523 2517 2524 2518 if (readl(dev->bar + NVME_REG_CSTS) == -1) { 2525 2519 result = -ENODEV; ··· 2964 2970 2965 2971 dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, node); 2966 2972 if (!dev) 2967 - return NULL; 2973 + return ERR_PTR(-ENOMEM); 2968 2974 INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work); 2969 2975 mutex_init(&dev->shutdown_lock); 2970 2976 ··· 2992 2998 quirks); 2993 2999 if (ret) 2994 3000 goto out_put_device; 2995 - 3001 + 3002 + if (dev->ctrl.quirks & NVME_QUIRK_DMA_ADDRESS_BITS_48) 3003 + dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48)); 3004 + else 3005 + dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 2996 3006 dma_set_min_align_mask(&pdev->dev, NVME_CTRL_PAGE_SIZE - 1); 2997 3007 dma_set_max_seg_size(&pdev->dev, 0xffffffff); 2998 3008 ··· 3029 3031 int result = -ENOMEM; 3030 3032 3031 3033 dev = nvme_pci_alloc_dev(pdev, id); 3032 - if (!dev) 3033 - return -ENOMEM; 3034 + if (IS_ERR(dev)) 3035 + return PTR_ERR(dev); 3034 3036 3035 3037 result = nvme_dev_map(dev); 3036 3038 if (result) ··· 3421 3423 { PCI_DEVICE(0x10ec, 0x5762), /* ADATA SX6000LNP */ 3422 3424 .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN | 3423 3425 NVME_QUIRK_BOGUS_NID, }, 3426 + { PCI_DEVICE(0x10ec, 0x5763), /* ADATA SX6000PNP */ 3427 + .driver_data = NVME_QUIRK_BOGUS_NID, }, 3424 3428 { PCI_DEVICE(0x1cc1, 0x8201), /* ADATA SX8200PNP 512GB */ 3425 3429 .driver_data = NVME_QUIRK_NO_DEEPEST_PS | 3426 3430 NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+2 -1
drivers/of/of_reserved_mem.c
··· 48 48 err = memblock_mark_nomap(base, size); 49 49 if (err) 50 50 memblock_phys_free(base, size); 51 - kmemleak_ignore_phys(base); 52 51 } 52 + 53 + kmemleak_ignore_phys(base); 53 54 54 55 return err; 55 56 }
-7
drivers/pci/pci.c
··· 1665 1665 return i; 1666 1666 1667 1667 pci_save_ltr_state(dev); 1668 - pci_save_aspm_l1ss_state(dev); 1669 1668 pci_save_dpc_state(dev); 1670 1669 pci_save_aer_state(dev); 1671 1670 pci_save_ptm_state(dev); ··· 1771 1772 * LTR itself (in the PCIe capability). 1772 1773 */ 1773 1774 pci_restore_ltr_state(dev); 1774 - pci_restore_aspm_l1ss_state(dev); 1775 1775 1776 1776 pci_restore_pcie_state(dev); 1777 1777 pci_restore_pasid_state(dev); ··· 3462 3464 2 * sizeof(u16)); 3463 3465 if (error) 3464 3466 pci_err(dev, "unable to allocate suspend buffer for LTR\n"); 3465 - 3466 - error = pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_L1SS, 3467 - 2 * sizeof(u32)); 3468 - if (error) 3469 - pci_err(dev, "unable to allocate suspend buffer for ASPM-L1SS\n"); 3470 3467 3471 3468 pci_allocate_vc_save_buffers(dev); 3472 3469 }
-4
drivers/pci/pci.h
··· 566 566 void pcie_aspm_init_link_state(struct pci_dev *pdev); 567 567 void pcie_aspm_exit_link_state(struct pci_dev *pdev); 568 568 void pcie_aspm_powersave_config_link(struct pci_dev *pdev); 569 - void pci_save_aspm_l1ss_state(struct pci_dev *dev); 570 - void pci_restore_aspm_l1ss_state(struct pci_dev *dev); 571 569 #else 572 570 static inline void pcie_aspm_init_link_state(struct pci_dev *pdev) { } 573 571 static inline void pcie_aspm_exit_link_state(struct pci_dev *pdev) { } 574 572 static inline void pcie_aspm_powersave_config_link(struct pci_dev *pdev) { } 575 - static inline void pci_save_aspm_l1ss_state(struct pci_dev *dev) { } 576 - static inline void pci_restore_aspm_l1ss_state(struct pci_dev *dev) { } 577 573 #endif 578 574 579 575 #ifdef CONFIG_PCIE_ECRC
+33 -76
drivers/pci/pcie/aspm.c
··· 470 470 pci_write_config_dword(pdev, pos, val); 471 471 } 472 472 473 - static void aspm_program_l1ss(struct pci_dev *dev, u32 ctl1, u32 ctl2) 474 - { 475 - u16 l1ss = dev->l1ss; 476 - u32 l1_2_enable; 477 - 478 - /* 479 - * Per PCIe r6.0, sec 5.5.4, T_POWER_ON in PCI_L1SS_CTL2 must be 480 - * programmed prior to setting the L1.2 enable bits in PCI_L1SS_CTL1. 481 - */ 482 - pci_write_config_dword(dev, l1ss + PCI_L1SS_CTL2, ctl2); 483 - 484 - /* 485 - * In addition, Common_Mode_Restore_Time and LTR_L1.2_THRESHOLD in 486 - * PCI_L1SS_CTL1 must be programmed *before* setting the L1.2 487 - * enable bits, even though they're all in PCI_L1SS_CTL1. 488 - */ 489 - l1_2_enable = ctl1 & PCI_L1SS_CTL1_L1_2_MASK; 490 - ctl1 &= ~PCI_L1SS_CTL1_L1_2_MASK; 491 - 492 - pci_write_config_dword(dev, l1ss + PCI_L1SS_CTL1, ctl1); 493 - if (l1_2_enable) 494 - pci_write_config_dword(dev, l1ss + PCI_L1SS_CTL1, 495 - ctl1 | l1_2_enable); 496 - } 497 - 498 473 /* Calculate L1.2 PM substate timing parameters */ 499 474 static void aspm_calc_l1ss_info(struct pcie_link_state *link, 500 475 u32 parent_l1ss_cap, u32 child_l1ss_cap) ··· 479 504 u32 t_common_mode, t_power_on, l1_2_threshold, scale, value; 480 505 u32 ctl1 = 0, ctl2 = 0; 481 506 u32 pctl1, pctl2, cctl1, cctl2; 507 + u32 pl1_2_enables, cl1_2_enables; 482 508 483 509 if (!(link->aspm_support & ASPM_STATE_L1_2_MASK)) 484 510 return; ··· 528 552 ctl2 == pctl2 && ctl2 == cctl2) 529 553 return; 530 554 531 - pctl1 &= ~(PCI_L1SS_CTL1_CM_RESTORE_TIME | 532 - PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 533 - PCI_L1SS_CTL1_LTR_L12_TH_SCALE); 534 - pctl1 |= (ctl1 & (PCI_L1SS_CTL1_CM_RESTORE_TIME | 535 - PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 536 - PCI_L1SS_CTL1_LTR_L12_TH_SCALE)); 537 - aspm_program_l1ss(parent, pctl1, ctl2); 555 + /* Disable L1.2 while updating. See PCIe r5.0, sec 5.5.4, 7.8.3.3 */
556 + pl1_2_enables = pctl1 & PCI_L1SS_CTL1_L1_2_MASK; 557 + cl1_2_enables = cctl1 & PCI_L1SS_CTL1_L1_2_MASK; 538 558 539 - cctl1 &= ~(PCI_L1SS_CTL1_CM_RESTORE_TIME | 540 - PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 541 - PCI_L1SS_CTL1_LTR_L12_TH_SCALE); 542 - cctl1 |= (ctl1 & (PCI_L1SS_CTL1_CM_RESTORE_TIME | 543 - PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 544 - PCI_L1SS_CTL1_LTR_L12_TH_SCALE)); 545 - aspm_program_l1ss(child, cctl1, ctl2); 559 + if (pl1_2_enables || cl1_2_enables) { 560 + pci_clear_and_set_dword(child, child->l1ss + PCI_L1SS_CTL1, 561 + PCI_L1SS_CTL1_L1_2_MASK, 0); 562 + pci_clear_and_set_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 563 + PCI_L1SS_CTL1_L1_2_MASK, 0); 564 + } 565 + 566 + /* Program T_POWER_ON times in both ports */ 567 + pci_write_config_dword(parent, parent->l1ss + PCI_L1SS_CTL2, ctl2); 568 + pci_write_config_dword(child, child->l1ss + PCI_L1SS_CTL2, ctl2); 569 + 570 + /* Program Common_Mode_Restore_Time in upstream device */ 571 + pci_clear_and_set_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 572 + PCI_L1SS_CTL1_CM_RESTORE_TIME, ctl1); 573 + 574 + /* Program LTR_L1.2_THRESHOLD time in both ports */ 575 + pci_clear_and_set_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 576 + PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 577 + PCI_L1SS_CTL1_LTR_L12_TH_SCALE, ctl1); 578 + pci_clear_and_set_dword(child, child->l1ss + PCI_L1SS_CTL1, 579 + PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 580 + PCI_L1SS_CTL1_LTR_L12_TH_SCALE, ctl1); 581 + 582 + if (pl1_2_enables || cl1_2_enables) { 583 + pci_clear_and_set_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 0, 584 + pl1_2_enables); 585 + pci_clear_and_set_dword(child, child->l1ss + PCI_L1SS_CTL1, 0, 586 + cl1_2_enables); 587 + } 546 588 } 547 589 548 590 static void aspm_l1ss_init(struct pcie_link_state *link) ··· 749 755 PCI_L1SS_CTL1_L1SS_MASK, val); 750 756 pci_clear_and_set_dword(child, child->l1ss + PCI_L1SS_CTL1, 751 757 PCI_L1SS_CTL1_L1SS_MASK, val); 752 - } 753 -
754 - void pci_save_aspm_l1ss_state(struct pci_dev *dev) 755 - { 756 - struct pci_cap_saved_state *save_state; 757 - u16 l1ss = dev->l1ss; 758 - u32 *cap; 759 - 760 - if (!l1ss) 761 - return; 762 - 763 - save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_L1SS); 764 - if (!save_state) 765 - return; 766 - 767 - cap = (u32 *)&save_state->cap.data[0]; 768 - pci_read_config_dword(dev, l1ss + PCI_L1SS_CTL2, cap++); 769 - pci_read_config_dword(dev, l1ss + PCI_L1SS_CTL1, cap++); 770 - } 771 - 772 - void pci_restore_aspm_l1ss_state(struct pci_dev *dev) 773 - { 774 - struct pci_cap_saved_state *save_state; 775 - u32 *cap, ctl1, ctl2; 776 - u16 l1ss = dev->l1ss; 777 - 778 - if (!l1ss) 779 - return; 780 - 781 - save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_L1SS); 782 - if (!save_state) 783 - return; 784 - 785 - cap = (u32 *)&save_state->cap.data[0]; 786 - ctl2 = *cap++; 787 - ctl1 = *cap; 788 - aspm_program_l1ss(dev, ctl1, ctl2); 789 758 } 790 759 791 760 static void pcie_config_aspm_dev(struct pci_dev *pdev, u32 val)
+11 -2
drivers/pinctrl/aspeed/pinctrl-aspeed.c
··· 93 93 static int aspeed_sig_expr_disable(struct aspeed_pinmux_data *ctx, 94 94 const struct aspeed_sig_expr *expr) 95 95 { 96 + int ret; 97 + 96 98 pr_debug("Disabling signal %s for %s\n", expr->signal, 97 99 expr->function); 98 100 99 - return aspeed_sig_expr_set(ctx, expr, false); 101 + ret = aspeed_sig_expr_eval(ctx, expr, true); 102 + if (ret < 0) 103 + return ret; 104 + 105 + if (ret) 106 + return aspeed_sig_expr_set(ctx, expr, false); 107 + 108 + return 0; 100 109 } 101 110 102 111 /** ··· 123 114 int ret = 0; 124 115 125 116 if (!exprs) 126 - return true; 117 + return -EINVAL; 127 118 128 119 while (*exprs && !ret) { 129 120 ret = aspeed_sig_expr_disable(ctx, *exprs);
+13 -3
drivers/pinctrl/intel/pinctrl-intel.c
··· 1709 1709 EXPORT_SYMBOL_GPL(intel_pinctrl_get_soc_data); 1710 1710 1711 1711 #ifdef CONFIG_PM_SLEEP 1712 + static bool __intel_gpio_is_direct_irq(u32 value) 1713 + { 1714 + return (value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) && 1715 + (__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO); 1716 + } 1717 + 1712 1718 static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int pin) 1713 1719 { 1714 1720 const struct pin_desc *pd = pin_desc_get(pctrl->pctldev, pin); ··· 1748 1742 * See https://bugzilla.kernel.org/show_bug.cgi?id=214749. 1749 1743 */ 1750 1744 value = readl(intel_get_padcfg(pctrl, pin, PADCFG0)); 1751 - if ((value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) && 1752 - (__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO)) 1745 + if (__intel_gpio_is_direct_irq(value)) 1753 1746 return true; 1754 1747 1755 1748 return false; ··· 1878 1873 for (i = 0; i < pctrl->soc->npins; i++) { 1879 1874 const struct pinctrl_pin_desc *desc = &pctrl->soc->pins[i]; 1880 1875 1881 - if (!intel_pinctrl_should_save(pctrl, desc->number)) 1876 + if (!(intel_pinctrl_should_save(pctrl, desc->number) || 1877 + /* 1878 + * If the firmware mangled the register contents too much, 1879 + * check the saved value for the Direct IRQ mode. 1880 + */ 1881 + __intel_gpio_is_direct_irq(pads[i].padcfg0))) 1882 1882 continue; 1883 1883 1884 1884 intel_restore_padcfg(pctrl, desc->number, PADCFG0, pads[i].padcfg0);
+2 -2
drivers/pinctrl/mediatek/pinctrl-mt8195.c
··· 659 659 PIN_FIELD_BASE(10, 10, 4, 0x010, 0x10, 9, 3), 660 660 PIN_FIELD_BASE(11, 11, 4, 0x000, 0x10, 24, 3), 661 661 PIN_FIELD_BASE(12, 12, 4, 0x010, 0x10, 12, 3), 662 - PIN_FIELD_BASE(13, 13, 4, 0x010, 0x10, 27, 3), 662 + PIN_FIELD_BASE(13, 13, 4, 0x000, 0x10, 27, 3), 663 663 PIN_FIELD_BASE(14, 14, 4, 0x010, 0x10, 15, 3), 664 664 PIN_FIELD_BASE(15, 15, 4, 0x010, 0x10, 0, 3), 665 665 PIN_FIELD_BASE(16, 16, 4, 0x010, 0x10, 18, 3), ··· 708 708 PIN_FIELD_BASE(78, 78, 3, 0x000, 0x10, 15, 3), 709 709 PIN_FIELD_BASE(79, 79, 3, 0x000, 0x10, 18, 3), 710 710 PIN_FIELD_BASE(80, 80, 3, 0x000, 0x10, 21, 3), 711 - PIN_FIELD_BASE(81, 81, 3, 0x000, 0x10, 28, 3), 711 + PIN_FIELD_BASE(81, 81, 3, 0x000, 0x10, 24, 3), 712 712 PIN_FIELD_BASE(82, 82, 3, 0x000, 0x10, 27, 3), 713 713 PIN_FIELD_BASE(83, 83, 3, 0x010, 0x10, 0, 3), 714 714 PIN_FIELD_BASE(84, 84, 3, 0x010, 0x10, 3, 3),
+1
drivers/pinctrl/pinctrl-amd.c
··· 365 365 366 366 } else { 367 367 debounce_enable = " ∅"; 368 + time = 0; 368 369 } 369 370 snprintf(debounce_value, sizeof(debounce_value), "%u", time * unit); 370 371 seq_printf(s, "debounce %s (🕑 %sus)| ", debounce_enable, debounce_value);
+2
drivers/pinctrl/pinctrl-single.c
··· 372 372 if (!pcs->fmask) 373 373 return 0; 374 374 function = pinmux_generic_get_function(pctldev, fselector); 375 + if (!function) 376 + return -EINVAL; 375 377 func = function->data; 376 378 if (!func) 377 379 return -EINVAL;
+1 -1
drivers/pinctrl/qcom/pinctrl-sm8450-lpass-lpi.c
··· 105 105 static const char * const swr_tx_clk_groups[] = { "gpio0" }; 106 106 static const char * const swr_tx_data_groups[] = { "gpio1", "gpio2", "gpio14" }; 107 107 static const char * const swr_rx_clk_groups[] = { "gpio3" }; 108 - static const char * const swr_rx_data_groups[] = { "gpio4", "gpio5", "gpio15" }; 108 + static const char * const swr_rx_data_groups[] = { "gpio4", "gpio5" }; 109 109 static const char * const dmic1_clk_groups[] = { "gpio6" }; 110 110 static const char * const dmic1_data_groups[] = { "gpio7" }; 111 111 static const char * const dmic2_clk_groups[] = { "gpio8" };
+9
drivers/platform/x86/intel/vsec.c
··· 408 408 .quirks = VSEC_QUIRK_NO_DVSEC | VSEC_QUIRK_EARLY_HW, 409 409 }; 410 410 411 + /* MTL info */ 412 + static const struct intel_vsec_platform_info mtl_info = { 413 + .quirks = VSEC_QUIRK_NO_WATCHER | VSEC_QUIRK_NO_CRASHLOG, 414 + }; 415 + 411 416 #define PCI_DEVICE_ID_INTEL_VSEC_ADL 0x467d 412 417 #define PCI_DEVICE_ID_INTEL_VSEC_DG1 0x490e 418 + #define PCI_DEVICE_ID_INTEL_VSEC_MTL_M 0x7d0d 419 + #define PCI_DEVICE_ID_INTEL_VSEC_MTL_S 0xad0d 413 420 #define PCI_DEVICE_ID_INTEL_VSEC_OOBMSM 0x09a7 414 421 #define PCI_DEVICE_ID_INTEL_VSEC_RPL 0xa77d 415 422 #define PCI_DEVICE_ID_INTEL_VSEC_TGL 0x9a0d 416 423 static const struct pci_device_id intel_vsec_pci_ids[] = { 417 424 { PCI_DEVICE_DATA(INTEL, VSEC_ADL, &tgl_info) }, 418 425 { PCI_DEVICE_DATA(INTEL, VSEC_DG1, &dg1_info) }, 426 + { PCI_DEVICE_DATA(INTEL, VSEC_MTL_M, &mtl_info) }, 427 + { PCI_DEVICE_DATA(INTEL, VSEC_MTL_S, &mtl_info) }, 419 428 { PCI_DEVICE_DATA(INTEL, VSEC_OOBMSM, &(struct intel_vsec_platform_info) {}) }, 420 429 { PCI_DEVICE_DATA(INTEL, VSEC_RPL, &tgl_info) }, 421 430 { PCI_DEVICE_DATA(INTEL, VSEC_TGL, &tgl_info) },
+1 -1
drivers/spi/spi-dw-core.c
··· 366 366 * will be adjusted at the final stage of the IRQ-based SPI transfer 367 367 * execution so not to lose the leftover of the incoming data. 368 368 */ 369 - level = min_t(u16, dws->fifo_len / 2, dws->tx_len); 369 + level = min_t(unsigned int, dws->fifo_len / 2, dws->tx_len); 370 370 dw_writel(dws, DW_SPI_TXFTLR, level); 371 371 dw_writel(dws, DW_SPI_RXFTLR, level - 1); 372 372
+18 -5
drivers/spi/spi.c
··· 2220 2220 /*-------------------------------------------------------------------------*/ 2221 2221 2222 2222 #if defined(CONFIG_OF) 2223 + static void of_spi_parse_dt_cs_delay(struct device_node *nc, 2224 + struct spi_delay *delay, const char *prop) 2225 + { 2226 + u32 value; 2227 + 2228 + if (!of_property_read_u32(nc, prop, &value)) { 2229 + if (value > U16_MAX) { 2230 + delay->value = DIV_ROUND_UP(value, 1000); 2231 + delay->unit = SPI_DELAY_UNIT_USECS; 2232 + } else { 2233 + delay->value = value; 2234 + delay->unit = SPI_DELAY_UNIT_NSECS; 2235 + } 2236 + } 2237 + } 2238 + 2223 2239 static int of_spi_parse_dt(struct spi_controller *ctlr, struct spi_device *spi, 2224 2240 struct device_node *nc) 2225 2241 { 2226 2242 u32 value; 2227 - u16 cs_setup; 2228 2243 int rc; 2229 2244 2230 2245 /* Mode (clock phase/polarity/etc.) */ ··· 2325 2310 if (!of_property_read_u32(nc, "spi-max-frequency", &value)) 2326 2311 spi->max_speed_hz = value; 2327 2312 2328 - if (!of_property_read_u16(nc, "spi-cs-setup-delay-ns", &cs_setup)) { 2329 - spi->cs_setup.value = cs_setup; 2330 - spi->cs_setup.unit = SPI_DELAY_UNIT_NSECS; 2331 - } 2313 + /* Device CS delays */ 2314 + of_spi_parse_dt_cs_delay(nc, &spi->cs_setup, "spi-cs-setup-delay-ns"); 2332 2315 2333 2316 return 0; 2334 2317 }
+16 -6
drivers/spi/spidev.c
··· 90 90 /*-------------------------------------------------------------------------*/ 91 91 92 92 static ssize_t 93 + spidev_sync_unlocked(struct spi_device *spi, struct spi_message *message) 94 + { 95 + ssize_t status; 96 + 97 + status = spi_sync(spi, message); 98 + if (status == 0) 99 + status = message->actual_length; 100 + 101 + return status; 102 + } 103 + 104 + static ssize_t 93 105 spidev_sync(struct spidev_data *spidev, struct spi_message *message) 94 106 { 95 - int status; 107 + ssize_t status; 96 108 struct spi_device *spi; 97 109 98 110 mutex_lock(&spidev->spi_lock); ··· 113 101 if (spi == NULL) 114 102 status = -ESHUTDOWN; 115 103 else 116 - status = spi_sync(spi, message); 117 - 118 - if (status == 0) 119 - status = message->actual_length; 104 + status = spidev_sync_unlocked(spi, message); 120 105 121 106 mutex_unlock(&spidev->spi_lock); 107 + 122 108 return status; 123 109 } 124 110 ··· 304 294 spi_message_add_tail(k_tmp, &msg); 305 295 } 306 296 307 - status = spidev_sync(spidev, &msg); 297 + status = spidev_sync_unlocked(spidev->spi, &msg); 308 298 if (status < 0) 309 299 goto done; 310 300
+3
drivers/usb/core/quirks.c
··· 526 526 /* DJI CineSSD */ 527 527 { USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM }, 528 528 529 + /* Alcor Link AK9563 SC Reader used in 2022 Lenovo ThinkPads */ 530 + { USB_DEVICE(0x2ce3, 0x9563), .driver_info = USB_QUIRK_NO_LPM }, 531 + 529 532 /* DELL USB GEN2 */ 530 533 { USB_DEVICE(0x413c, 0xb062), .driver_info = USB_QUIRK_NO_LPM | USB_QUIRK_RESET_RESUME }, 531 534
+4
drivers/usb/gadget/function/u_ether.c
··· 798 798 net->max_mtu = GETHER_MAX_MTU_SIZE; 799 799 800 800 dev->gadget = g; 801 + SET_NETDEV_DEV(net, &g->dev); 801 802 SET_NETDEV_DEVTYPE(net, &gadget_type); 802 803 803 804 status = register_netdev(net); ··· 873 872 struct usb_gadget *g; 874 873 int status; 875 874 875 + if (!net->dev.parent) 876 + return -EINVAL; 876 877 dev = netdev_priv(net); 877 878 g = dev->gadget; 878 879 ··· 905 902 906 903 dev = netdev_priv(net); 907 904 dev->gadget = g; 905 + SET_NETDEV_DEV(net, &g->dev); 908 906 } 909 907 EXPORT_SYMBOL_GPL(gether_set_gadget); 910 908
+4 -4
drivers/usb/typec/altmodes/displayport.c
··· 535 535 /* FIXME: Port can only be DFP_U. */ 536 536 537 537 /* Make sure we have compatible pin configurations */ 538 - if (!(DP_CAP_DFP_D_PIN_ASSIGN(port->vdo) & 539 - DP_CAP_UFP_D_PIN_ASSIGN(alt->vdo)) && 540 - !(DP_CAP_UFP_D_PIN_ASSIGN(port->vdo) & 541 - DP_CAP_DFP_D_PIN_ASSIGN(alt->vdo))) 538 + if (!(DP_CAP_PIN_ASSIGN_DFP_D(port->vdo) & 539 + DP_CAP_PIN_ASSIGN_UFP_D(alt->vdo)) && 540 + !(DP_CAP_PIN_ASSIGN_UFP_D(port->vdo) & 541 + DP_CAP_PIN_ASSIGN_DFP_D(alt->vdo))) 542 542 return -ENODEV; 543 543 544 544 ret = sysfs_create_group(&alt->dev.kobj, &dp_altmode_group);
+9 -1
drivers/video/fbdev/core/fb_defio.c
··· 313 313 } 314 314 EXPORT_SYMBOL_GPL(fb_deferred_io_open); 315 315 316 - void fb_deferred_io_cleanup(struct fb_info *info) 316 + void fb_deferred_io_release(struct fb_info *info) 317 317 { 318 318 struct fb_deferred_io *fbdefio = info->fbdefio; 319 319 struct page *page; ··· 327 327 page = fb_deferred_io_page(info, i); 328 328 page->mapping = NULL; 329 329 } 330 + } 331 + EXPORT_SYMBOL_GPL(fb_deferred_io_release); 332 + 333 + void fb_deferred_io_cleanup(struct fb_info *info) 334 + { 335 + struct fb_deferred_io *fbdefio = info->fbdefio; 336 + 337 + fb_deferred_io_release(info); 330 338 331 339 kvfree(info->pagerefs); 332 340 mutex_destroy(&fbdefio->lock);
+4
drivers/video/fbdev/core/fbmem.c
··· 1454 1454 struct fb_info * const info = file->private_data; 1455 1455 1456 1456 lock_fb_info(info); 1457 + #if IS_ENABLED(CONFIG_FB_DEFERRED_IO) 1458 + if (info->fbdefio) 1459 + fb_deferred_io_release(info); 1460 + #endif 1457 1461 if (info->fbops->fb_release) 1458 1462 info->fbops->fb_release(info,1); 1459 1463 module_put(info->fbops->owner);
+42 -39
drivers/video/fbdev/nvidia/nvidia.c
··· 1197 1197 return nvidiafb_check_var(&info->var, info); 1198 1198 } 1199 1199 1200 - static u32 nvidia_get_chipset(struct fb_info *info) 1200 + static u32 nvidia_get_chipset(struct pci_dev *pci_dev, 1201 + volatile u32 __iomem *REGS) 1201 1202 { 1202 - struct nvidia_par *par = info->par; 1203 - u32 id = (par->pci_dev->vendor << 16) | par->pci_dev->device; 1203 + u32 id = (pci_dev->vendor << 16) | pci_dev->device; 1204 1204 1205 1205 printk(KERN_INFO PFX "Device ID: %x \n", id); 1206 1206 1207 1207 if ((id & 0xfff0) == 0x00f0 || 1208 1208 (id & 0xfff0) == 0x02e0) { 1209 1209 /* pci-e */ 1210 - id = NV_RD32(par->REGS, 0x1800); 1210 + id = NV_RD32(REGS, 0x1800); 1211 1211 1212 1212 if ((id & 0x0000ffff) == 0x000010DE) 1213 1213 id = 0x10DE0000 | (id >> 16); ··· 1220 1220 return id; 1221 1221 } 1222 1222 1223 - static u32 nvidia_get_arch(struct fb_info *info) 1223 + static u32 nvidia_get_arch(u32 Chipset) 1224 1224 { 1225 - struct nvidia_par *par = info->par; 1226 1225 u32 arch = 0; 1227 1226 1228 - switch (par->Chipset & 0x0ff0) { 1227 + switch (Chipset & 0x0ff0) { 1229 1228 case 0x0100: /* GeForce 256 */ 1230 1229 case 0x0110: /* GeForce2 MX */ 1231 1230 case 0x0150: /* GeForce2 */ ··· 1277 1278 struct fb_info *info; 1278 1279 unsigned short cmd; 1279 1280 int ret; 1281 + volatile u32 __iomem *REGS; 1282 + int Chipset; 1283 + u32 Architecture; 1280 1284 1281 1285 NVTRACE_ENTER(); 1282 1286 assert(pd != NULL); 1283 1287 1288 + if (pci_enable_device(pd)) { 1289 + printk(KERN_ERR PFX "cannot enable PCI device\n"); 1290 + return -ENODEV; 1291 + } 1292 + 1293 + /* enable IO and mem if not already done */ 1294 + pci_read_config_word(pd, PCI_COMMAND, &cmd); 1295 + cmd |= (PCI_COMMAND_IO | PCI_COMMAND_MEMORY); 1296 + pci_write_config_word(pd, PCI_COMMAND, cmd); 1297 + 1298 + nvidiafb_fix.mmio_start = pci_resource_start(pd, 0); 1299 + nvidiafb_fix.mmio_len = pci_resource_len(pd, 0); 1300 + 1301 + REGS = ioremap(nvidiafb_fix.mmio_start, nvidiafb_fix.mmio_len); 1302 + if (!REGS) {
1303 + printk(KERN_ERR PFX "cannot ioremap MMIO base\n"); 1304 + return -ENODEV; 1305 + } 1306 + 1307 + Chipset = nvidia_get_chipset(pd, REGS); 1308 + Architecture = nvidia_get_arch(Chipset); 1309 + if (Architecture == 0) { 1310 + printk(KERN_ERR PFX "unknown NV_ARCH\n"); 1311 + goto err_out; 1312 + } 1313 + 1284 1314 ret = aperture_remove_conflicting_pci_devices(pd, "nvidiafb"); 1285 1315 if (ret) 1286 - return ret; 1316 + goto err_out; 1287 1317 1288 1318 info = framebuffer_alloc(sizeof(struct nvidia_par), &pd->dev); 1289 - 1290 1319 if (!info) 1291 1320 goto err_out; 1292 1321 ··· 1324 1297 1325 1298 if (info->pixmap.addr == NULL) 1326 1299 goto err_out_kfree; 1327 - 1328 - if (pci_enable_device(pd)) { 1329 - printk(KERN_ERR PFX "cannot enable PCI device\n"); 1330 - goto err_out_enable; 1331 - } 1332 1300 1333 1301 if (pci_request_regions(pd, "nvidiafb")) { 1334 1302 printk(KERN_ERR PFX "cannot request PCI regions\n"); ··· 1340 1318 par->paneltweak = paneltweak; 1341 1319 par->reverse_i2c = reverse_i2c; 1342 1320 1343 - /* enable IO and mem if not already done */ 1344 - pci_read_config_word(pd, PCI_COMMAND, &cmd); 1345 - cmd |= (PCI_COMMAND_IO | PCI_COMMAND_MEMORY); 1346 - pci_write_config_word(pd, PCI_COMMAND, cmd); 1347 - 1348 - nvidiafb_fix.mmio_start = pci_resource_start(pd, 0); 1349 1321 nvidiafb_fix.smem_start = pci_resource_start(pd, 1); 1350 - nvidiafb_fix.mmio_len = pci_resource_len(pd, 0); 1351 - 1352 - par->REGS = ioremap(nvidiafb_fix.mmio_start, nvidiafb_fix.mmio_len); 1323 + par->REGS = REGS; 1353 1324 1354 - if (!par->REGS) { 1355 - printk(KERN_ERR PFX "cannot ioremap MMIO base\n"); 1356 - goto err_out_free_base0; 1357 - } 1358 - 1359 - par->Chipset = nvidia_get_chipset(info); 1360 - par->Architecture = nvidia_get_arch(info); 1361 - 1362 - if (par->Architecture == 0) { 1363 - printk(KERN_ERR PFX "unknown NV_ARCH\n"); 1364 - goto err_out_arch; 1365 - } 1325 + par->Chipset = Chipset; 1326 + par->Architecture = Architecture; 1366 1327 
1367 1328 sprintf(nvidiafb_fix.id, "NV%x", (pd->device & 0x0ff0) >> 4); 1368 1329 1369 1330 if (NVCommonSetup(info)) 1370 - goto err_out_arch; 1331 + goto err_out_free_base0; 1371 1332 1372 1333 par->FbAddress = nvidiafb_fix.smem_start; 1373 1334 par->FbMapSize = par->RamAmountKBytes * 1024; ··· 1406 1401 goto err_out_iounmap_fb; 1407 1402 } 1408 1403 1409 - 1410 1404 printk(KERN_INFO PFX 1411 1405 "PCI nVidia %s framebuffer (%dMB @ 0x%lX)\n", 1412 1406 info->fix.id, ··· 1419 1415 err_out_free_base1: 1420 1416 fb_destroy_modedb(info->monspecs.modedb); 1421 1417 nvidia_delete_i2c_busses(par); 1422 - err_out_arch: 1423 - iounmap(par->REGS); 1424 - err_out_free_base0: 1418 + err_out_free_base0: 1425 1419 pci_release_regions(pd); 1426 1420 err_out_enable: 1427 1421 kfree(info->pixmap.addr); 1428 1422 err_out_kfree: 1429 1423 framebuffer_release(info); 1430 1424 err_out: 1425 + iounmap(REGS); 1431 1426 return -ENODEV; 1432 1427 } 1433 1428
+4
fs/aio.c
··· 361 361 spin_lock(&mm->ioctx_lock); 362 362 rcu_read_lock(); 363 363 table = rcu_dereference(mm->ioctx_table); 364 + if (!table) 365 + goto out_unlock; 366 + 364 367 for (i = 0; i < table->nr; i++) { 365 368 struct kioctx *ctx; 366 369 ··· 377 374 } 378 375 } 379 376 377 + out_unlock: 380 378 rcu_read_unlock(); 381 379 spin_unlock(&mm->ioctx_lock); 382 380 return res;
+2
fs/btrfs/extent_io.c
··· 3826 3826 lockend = round_up(start + len, inode->root->fs_info->sectorsize); 3827 3827 prev_extent_end = lockstart; 3828 3828 3829 + btrfs_inode_lock(inode, BTRFS_ILOCK_SHARED); 3829 3830 lock_extent(&inode->io_tree, lockstart, lockend, &cached_state); 3830 3831 3831 3832 ret = fiemap_find_last_extent_offset(inode, path, &last_extent_end); ··· 4020 4019 4021 4020 out_unlock: 4022 4021 unlock_extent(&inode->io_tree, lockstart, lockend, &cached_state); 4022 + btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED); 4023 4023 out: 4024 4024 free_extent_state(delalloc_cached_state); 4025 4025 btrfs_free_backref_share_ctx(backref_ctx);
+17 -6
fs/btrfs/tree-log.c
··· 3576 3576 } 3577 3577 3578 3578 static int flush_dir_items_batch(struct btrfs_trans_handle *trans, 3579 - struct btrfs_root *log, 3579 + struct btrfs_inode *inode, 3580 3580 struct extent_buffer *src, 3581 3581 struct btrfs_path *dst_path, 3582 3582 int start_slot, 3583 3583 int count) 3584 3584 { 3585 + struct btrfs_root *log = inode->root->log_root; 3585 3586 char *ins_data = NULL; 3586 3587 struct btrfs_item_batch batch; 3587 3588 struct extent_buffer *dst; 3588 3589 unsigned long src_offset; 3589 3590 unsigned long dst_offset; 3591 + u64 last_index; 3590 3592 struct btrfs_key key; 3591 3593 u32 item_size; 3592 3594 int ret; ··· 3646 3644 src_offset = btrfs_item_ptr_offset(src, start_slot + count - 1); 3647 3645 copy_extent_buffer(dst, src, dst_offset, src_offset, batch.total_data_size); 3648 3646 btrfs_release_path(dst_path); 3647 + 3648 + last_index = batch.keys[count - 1].offset; 3649 + ASSERT(last_index > inode->last_dir_index_offset); 3650 + 3651 + /* 3652 + * If for some unexpected reason the last item's index is not greater 3653 + * than the last index we logged, warn and return an error to fallback 3654 + * to a transaction commit. 
3655 + */ 3656 + if (WARN_ON(last_index <= inode->last_dir_index_offset)) 3657 + ret = -EUCLEAN; 3658 + else 3659 + inode->last_dir_index_offset = last_index; 3649 3660 out: 3650 3661 kfree(ins_data); 3651 3662 ··· 3708 3693 } 3709 3694 3710 3695 di = btrfs_item_ptr(src, i, struct btrfs_dir_item); 3711 - ctx->last_dir_item_offset = key.offset; 3712 3696 3713 3697 /* 3714 3698 * Skip ranges of items that consist only of dir item keys created ··· 3770 3756 if (batch_size > 0) { 3771 3757 int ret; 3772 3758 3773 - ret = flush_dir_items_batch(trans, log, src, dst_path, 3759 + ret = flush_dir_items_batch(trans, inode, src, dst_path, 3774 3760 batch_start, batch_size); 3775 3761 if (ret < 0) 3776 3762 return ret; ··· 4058 4044 4059 4045 min_key = BTRFS_DIR_START_INDEX; 4060 4046 max_key = 0; 4061 - ctx->last_dir_item_offset = inode->last_dir_index_offset; 4062 4047 4063 4048 while (1) { 4064 4049 ret = log_dir_items(trans, inode, path, dst_path, ··· 4068 4055 break; 4069 4056 min_key = max_key + 1; 4070 4057 } 4071 - 4072 - inode->last_dir_index_offset = ctx->last_dir_item_offset; 4073 4058 4074 4059 return 0; 4075 4060 }
-2
fs/btrfs/tree-log.h
··· 24 24 bool logging_new_delayed_dentries; 25 25 /* Indicate if the inode being logged was logged before. */ 26 26 bool logged_before; 27 - /* Tracks the last logged dir item/index key offset. */ 28 - u64 last_dir_item_offset; 29 27 struct inode *inode; 30 28 struct list_head list; 31 29 /* Only used for fast fsyncs. */
+15 -1
fs/btrfs/volumes.c
··· 403 403 static void free_fs_devices(struct btrfs_fs_devices *fs_devices) 404 404 { 405 405 struct btrfs_device *device; 406 + 406 407 WARN_ON(fs_devices->opened); 407 408 while (!list_empty(&fs_devices->devices)) { 408 409 device = list_entry(fs_devices->devices.next, ··· 1182 1181 1183 1182 mutex_lock(&uuid_mutex); 1184 1183 close_fs_devices(fs_devices); 1185 - if (!fs_devices->opened) 1184 + if (!fs_devices->opened) { 1186 1185 list_splice_init(&fs_devices->seed_list, &list); 1186 + 1187 + /* 1188 + * If the struct btrfs_fs_devices is not assembled with any 1189 + * other device, it can be re-initialized during the next mount 1190 + * without needing the device-scan step. Therefore, it can be 1191 + * fully freed. 1192 + */ 1193 + if (fs_devices->num_devices == 1) { 1194 + list_del(&fs_devices->fs_list); 1195 + free_fs_devices(fs_devices); 1196 + } 1197 + } 1198 + 1187 1199 1188 1200 list_for_each_entry_safe(fs_devices, tmp, &list, seed_list) { 1189 1201 close_fs_devices(fs_devices);
+6
fs/ceph/mds_client.c
··· 3685 3685 break; 3686 3686 3687 3687 case CEPH_SESSION_FLUSHMSG: 3688 + /* flush cap releases */ 3689 + spin_lock(&session->s_cap_lock); 3690 + if (session->s_num_cap_releases) 3691 + ceph_flush_cap_releases(mdsc, session); 3692 + spin_unlock(&session->s_cap_lock); 3693 + 3688 3694 send_flushmsg_ack(mdsc, session, seq); 3689 3695 break; 3690 3696
+3 -2
fs/dax.c
··· 1271 1271 if (ret < 0) 1272 1272 goto out_unlock; 1273 1273 1274 - ret = copy_mc_to_kernel(daddr, saddr, length); 1275 - if (ret) 1274 + if (copy_mc_to_kernel(daddr, saddr, length) == 0) 1275 + ret = length; 1276 + else 1276 1277 ret = -EIO; 1277 1278 1278 1279 out_unlock:
+1 -1
fs/nfsd/nfs4state.c
··· 8182 8182 8183 8183 nfsd4_client_tracking_exit(net); 8184 8184 nfs4_state_destroy_net(net); 8185 - rhltable_destroy(&nfs4_file_rhltable); 8186 8185 #ifdef CONFIG_NFSD_V4_2_INTER_SSC 8187 8186 nfsd4_ssc_shutdown_umount(nn); 8188 8187 #endif ··· 8191 8192 nfs4_state_shutdown(void) 8192 8193 { 8193 8194 nfsd4_destroy_callback_queue(); 8195 + rhltable_destroy(&nfs4_file_rhltable); 8194 8196 } 8195 8197 8196 8198 static void
+1 -1
fs/squashfs/xattr_id.c
··· 76 76 /* Sanity check values */ 77 77 78 78 /* there is always at least one xattr id */ 79 - if (*xattr_ids <= 0) 79 + if (*xattr_ids == 0) 80 80 return ERR_PTR(-EINVAL); 81 81 82 82 len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
-5
include/drm/drm_client.h
··· 127 127 struct drm_client_dev *client; 128 128 129 129 /** 130 - * @handle: Buffer handle 131 - */ 132 - u32 handle; 133 - 134 - /** 135 130 * @pitch: Buffer pitch 136 131 */ 137 132 u32 pitch;
+1
include/linux/fb.h
··· 662 662 extern void fb_deferred_io_open(struct fb_info *info, 663 663 struct inode *inode, 664 664 struct file *file); 665 + extern void fb_deferred_io_release(struct fb_info *info); 665 666 extern void fb_deferred_io_cleanup(struct fb_info *info); 666 667 extern int fb_deferred_io_fsync(struct file *file, loff_t start, 667 668 loff_t end, int datasync);
+9 -3
include/linux/mm.h
··· 137 137 * define their own version of this macro in <asm/pgtable.h> 138 138 */ 139 139 #if BITS_PER_LONG == 64 140 - /* This function must be updated when the size of struct page grows above 80 140 + /* This function must be updated when the size of struct page grows above 96 141 141 * or reduces below 56. The idea that compiler optimizes out switch() 142 142 * statement, and only leaves move/store instructions. Also the compiler can 143 143 * combine write statements if they are both assignments and can be reordered, ··· 148 148 { 149 149 unsigned long *_pp = (void *)page; 150 150 151 - /* Check that struct page is either 56, 64, 72, or 80 bytes */ 151 + /* Check that struct page is either 56, 64, 72, 80, 88 or 96 bytes */ 152 152 BUILD_BUG_ON(sizeof(struct page) & 7); 153 153 BUILD_BUG_ON(sizeof(struct page) < 56); 154 - BUILD_BUG_ON(sizeof(struct page) > 80); 154 + BUILD_BUG_ON(sizeof(struct page) > 96); 155 155 156 156 switch (sizeof(struct page)) { 157 + case 96: 158 + _pp[11] = 0; 159 + fallthrough; 160 + case 88: 161 + _pp[10] = 0; 162 + fallthrough; 157 163 case 80: 158 164 _pp[9] = 0; 159 165 fallthrough;
-2
include/linux/netdevice.h
··· 2858 2858 int register_netdevice_notifier_net(struct net *net, struct notifier_block *nb); 2859 2859 int unregister_netdevice_notifier_net(struct net *net, 2860 2860 struct notifier_block *nb); 2861 - void move_netdevice_notifier_net(struct net *src_net, struct net *dst_net, 2862 - struct notifier_block *nb); 2863 2861 int register_netdevice_notifier_dev_net(struct net_device *dev, 2864 2862 struct notifier_block *nb, 2865 2863 struct netdev_net_notifier *nn);
+3 -2
include/linux/shrinker.h
··· 107 107 108 108 #ifdef CONFIG_SHRINKER_DEBUG 109 109 extern int shrinker_debugfs_add(struct shrinker *shrinker); 110 - extern void shrinker_debugfs_remove(struct shrinker *shrinker); 110 + extern struct dentry *shrinker_debugfs_remove(struct shrinker *shrinker); 111 111 extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker *shrinker, 112 112 const char *fmt, ...); 113 113 #else /* CONFIG_SHRINKER_DEBUG */ ··· 115 115 { 116 116 return 0; 117 117 } 118 - static inline void shrinker_debugfs_remove(struct shrinker *shrinker) 118 + static inline struct dentry *shrinker_debugfs_remove(struct shrinker *shrinker) 119 119 { 120 + return NULL; 120 121 } 121 122 static inline __printf(2, 3) 122 123 int shrinker_debugfs_rename(struct shrinker *shrinker, const char *fmt, ...)
+1
include/linux/trace_events.h
··· 270 270 const int align; 271 271 const int is_signed; 272 272 const int filter_type; 273 + const int len; 273 274 }; 274 275 int (*define_fields)(struct trace_event_call *); 275 276 };
+13
include/net/sock.h
··· 2411 2411 return false; 2412 2412 } 2413 2413 2414 + static inline struct sk_buff *skb_clone_and_charge_r(struct sk_buff *skb, struct sock *sk) 2415 + { 2416 + skb = skb_clone(skb, sk_gfp_mask(sk, GFP_ATOMIC)); 2417 + if (skb) { 2418 + if (sk_rmem_schedule(sk, skb, skb->truesize)) { 2419 + skb_set_owner_r(skb, sk); 2420 + return skb; 2421 + } 2422 + __kfree_skb(skb); 2423 + } 2424 + return NULL; 2425 + } 2426 + 2414 2427 static inline void skb_prepare_for_gro(struct sk_buff *skb) 2415 2428 { 2416 2429 if (skb->destructor != sock_wfree) {
+2 -1
include/trace/stages/stage4_event_fields.h
··· 26 26 #define __array(_type, _item, _len) { \ 27 27 .type = #_type"["__stringify(_len)"]", .name = #_item, \ 28 28 .size = sizeof(_type[_len]), .align = ALIGN_STRUCTFIELD(_type), \ 29 - .is_signed = is_signed_type(_type), .filter_type = FILTER_OTHER }, 29 + .is_signed = is_signed_type(_type), .filter_type = FILTER_OTHER,\ 30 + .len = _len }, 30 31 31 32 #undef __dynamic_array 32 33 #define __dynamic_array(_type, _item, _len) { \
+1
include/uapi/drm/virtgpu_drm.h
··· 64 64 __u32 pad; 65 65 }; 66 66 67 + /* fence_fd is modified on success if VIRTGPU_EXECBUF_FENCE_FD_OUT flag is set. */ 67 68 struct drm_virtgpu_execbuffer { 68 69 __u32 flags; 69 70 __u32 size;
+3 -2
kernel/locking/rtmutex.c
··· 901 901 * then we need to wake the new top waiter up to try 902 902 * to get the lock. 903 903 */ 904 - if (prerequeue_top_waiter != rt_mutex_top_waiter(lock)) 905 - wake_up_state(waiter->task, waiter->wake_state); 904 + top_waiter = rt_mutex_top_waiter(lock); 905 + if (prerequeue_top_waiter != top_waiter) 906 + wake_up_state(top_waiter->task, top_waiter->wake_state); 906 907 raw_spin_unlock_irq(&lock->wait_lock); 907 908 return 0; 908 909 }
+1
kernel/trace/trace.h
··· 1282 1282 int offset; 1283 1283 int size; 1284 1284 int is_signed; 1285 + int len; 1285 1286 }; 1286 1287 1287 1288 struct prog_entry;
+30 -9
kernel/trace/trace_events.c
··· 114 114 115 115 static int __trace_define_field(struct list_head *head, const char *type, 116 116 const char *name, int offset, int size, 117 - int is_signed, int filter_type) 117 + int is_signed, int filter_type, int len) 118 118 { 119 119 struct ftrace_event_field *field; 120 120 ··· 133 133 field->offset = offset; 134 134 field->size = size; 135 135 field->is_signed = is_signed; 136 + field->len = len; 136 137 137 138 list_add(&field->link, head); 138 139 ··· 151 150 152 151 head = trace_get_fields(call); 153 152 return __trace_define_field(head, type, name, offset, size, 154 - is_signed, filter_type); 153 + is_signed, filter_type, 0); 155 154 } 156 155 EXPORT_SYMBOL_GPL(trace_define_field); 156 + 157 + static int trace_define_field_ext(struct trace_event_call *call, const char *type, 158 + const char *name, int offset, int size, int is_signed, 159 + int filter_type, int len) 160 + { 161 + struct list_head *head; 162 + 163 + if (WARN_ON(!call->class)) 164 + return 0; 165 + 166 + head = trace_get_fields(call); 167 + return __trace_define_field(head, type, name, offset, size, 168 + is_signed, filter_type, len); 169 + } 157 170 158 171 #define __generic_field(type, item, filter_type) \ 159 172 ret = __trace_define_field(&ftrace_generic_fields, #type, \ 160 173 #item, 0, 0, is_signed_type(type), \ 161 - filter_type); \ 174 + filter_type, 0); \ 162 175 if (ret) \ 163 176 return ret; 164 177 ··· 181 166 "common_" #item, \ 182 167 offsetof(typeof(ent), item), \ 183 168 sizeof(ent.item), \ 184 - is_signed_type(type), FILTER_OTHER); \ 169 + is_signed_type(type), FILTER_OTHER, 0); \ 185 170 if (ret) \ 186 171 return ret; 187 172 ··· 1603 1588 seq_printf(m, "\tfield:%s %s;\toffset:%u;\tsize:%u;\tsigned:%d;\n", 1604 1589 field->type, field->name, field->offset, 1605 1590 field->size, !!field->is_signed); 1606 - else 1607 - seq_printf(m, "\tfield:%.*s %s%s;\toffset:%u;\tsize:%u;\tsigned:%d;\n", 1591 + else if (field->len) 1592 + seq_printf(m, "\tfield:%.*s 
%s[%d];\toffset:%u;\tsize:%u;\tsigned:%d;\n", 1608 1593 (int)(array_descriptor - field->type), 1609 1594 field->type, field->name, 1610 - array_descriptor, field->offset, 1595 + field->len, field->offset, 1611 1596 field->size, !!field->is_signed); 1597 + else 1598 + seq_printf(m, "\tfield:%.*s %s[];\toffset:%u;\tsize:%u;\tsigned:%d;\n", 1599 + (int)(array_descriptor - field->type), 1600 + field->type, field->name, 1601 + field->offset, field->size, !!field->is_signed); 1612 1602 1613 1603 return 0; 1614 1604 } ··· 2399 2379 } 2400 2380 2401 2381 offset = ALIGN(offset, field->align); 2402 - ret = trace_define_field(call, field->type, field->name, 2382 + ret = trace_define_field_ext(call, field->type, field->name, 2403 2383 offset, field->size, 2404 - field->is_signed, field->filter_type); 2384 + field->is_signed, field->filter_type, 2385 + field->len); 2405 2386 if (WARN_ON_ONCE(ret)) { 2406 2387 pr_err("error code is %d\n", ret); 2407 2388 break;
+2 -1
kernel/trace/trace_export.c
··· 111 111 #define __array(_type, _item, _len) { \ 112 112 .type = #_type"["__stringify(_len)"]", .name = #_item, \ 113 113 .size = sizeof(_type[_len]), .align = __alignof__(_type), \ 114 - is_signed_type(_type), .filter_type = FILTER_OTHER }, 114 + is_signed_type(_type), .filter_type = FILTER_OTHER, \ 115 + .len = _len }, 115 116 116 117 #undef __array_desc 117 118 #define __array_desc(_type, _container, _item, _len) __array(_type, _item, _len)
+20 -19
lib/parser.c
··· 11 11 #include <linux/slab.h> 12 12 #include <linux/string.h> 13 13 14 + /* 15 + * max size needed by different bases to express U64 16 + * HEX: "0xFFFFFFFFFFFFFFFF" --> 18 17 + * DEC: "18446744073709551615" --> 20 18 + * OCT: "01777777777777777777777" --> 23 19 + * pick the max one to define NUMBER_BUF_LEN 20 + */ 21 + #define NUMBER_BUF_LEN 24 22 + 14 23 /** 15 24 * match_one - Determines if a string matches a simple pattern 16 25 * @s: the string to examine for presence of the pattern ··· 138 129 static int match_number(substring_t *s, int *result, int base) 139 130 { 140 131 char *endp; 141 - char *buf; 132 + char buf[NUMBER_BUF_LEN]; 142 133 int ret; 143 134 long val; 144 135 145 - buf = match_strdup(s); 146 - if (!buf) 147 - return -ENOMEM; 148 - 136 + if (match_strlcpy(buf, s, NUMBER_BUF_LEN) >= NUMBER_BUF_LEN) 137 + return -ERANGE; 149 138 ret = 0; 150 139 val = simple_strtol(buf, &endp, base); 151 140 if (endp == buf) ··· 152 145 ret = -ERANGE; 153 146 else 154 147 *result = (int) val; 155 - kfree(buf); 156 148 return ret; 157 149 } 158 150 ··· 169 163 */ 170 164 static int match_u64int(substring_t *s, u64 *result, int base) 171 165 { 172 - char *buf; 166 + char buf[NUMBER_BUF_LEN]; 173 167 int ret; 174 168 u64 val; 175 169 176 - buf = match_strdup(s); 177 - if (!buf) 178 - return -ENOMEM; 179 - 170 + if (match_strlcpy(buf, s, NUMBER_BUF_LEN) >= NUMBER_BUF_LEN) 171 + return -ERANGE; 180 172 ret = kstrtoull(buf, base, &val); 181 173 if (!ret) 182 174 *result = val; 183 - kfree(buf); 184 175 return ret; 185 176 } 186 177 ··· 209 206 */ 210 207 int match_uint(substring_t *s, unsigned int *result) 211 208 { 212 - int err = -ENOMEM; 213 - char *buf = match_strdup(s); 209 + char buf[NUMBER_BUF_LEN]; 214 210 215 - if (buf) { 216 - err = kstrtouint(buf, 10, result); 217 - kfree(buf); 218 - } 219 - return err; 211 + if (match_strlcpy(buf, s, NUMBER_BUF_LEN) >= NUMBER_BUF_LEN) 212 + return -ERANGE; 213 + 214 + return kstrtouint(buf, 10, result); 220 215 } 221 
216 EXPORT_SYMBOL(match_uint); 222 217
+1 -1
mm/gup.c
··· 1914 1914 drain_allow = false; 1915 1915 } 1916 1916 1917 - if (!folio_isolate_lru(folio)) 1917 + if (folio_isolate_lru(folio)) 1918 1918 continue; 1919 1919 1920 1920 list_add_tail(&folio->lru, movable_page_list);
+3
mm/kasan/common.c
··· 246 246 247 247 static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip) 248 248 { 249 + if (!kasan_arch_is_ready()) 250 + return false; 251 + 249 252 if (ptr != page_address(virt_to_head_page(ptr))) { 250 253 kasan_report_invalid_free(ptr, ip, KASAN_REPORT_INVALID_FREE); 251 254 return true;
+6 -1
mm/kasan/generic.c
··· 191 191 192 192 bool kasan_byte_accessible(const void *addr) 193 193 { 194 - s8 shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(addr)); 194 + s8 shadow_byte; 195 + 196 + if (!kasan_arch_is_ready()) 197 + return true; 198 + 199 + shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(addr)); 195 200 196 201 return shadow_byte >= 0 && shadow_byte < KASAN_GRANULE_SIZE; 197 202 }
+12
mm/kasan/shadow.c
··· 291 291 unsigned long shadow_start, shadow_end; 292 292 int ret; 293 293 294 + if (!kasan_arch_is_ready()) 295 + return 0; 296 + 294 297 if (!is_vmalloc_or_module_addr((void *)addr)) 295 298 return 0; 296 299 ··· 462 459 unsigned long region_start, region_end; 463 460 unsigned long size; 464 461 462 + if (!kasan_arch_is_ready()) 463 + return; 464 + 465 465 region_start = ALIGN(start, KASAN_MEMORY_PER_SHADOW_PAGE); 466 466 region_end = ALIGN_DOWN(end, KASAN_MEMORY_PER_SHADOW_PAGE); 467 467 ··· 508 502 * with setting memory tags, so the KASAN_VMALLOC_INIT flag is ignored. 509 503 */ 510 504 505 + if (!kasan_arch_is_ready()) 506 + return (void *)start; 507 + 511 508 if (!is_vmalloc_or_module_addr(start)) 512 509 return (void *)start; 513 510 ··· 533 524 */ 534 525 void __kasan_poison_vmalloc(const void *start, unsigned long size) 535 526 { 527 + if (!kasan_arch_is_ready()) 528 + return; 529 + 536 530 if (!is_vmalloc_or_module_addr(start)) 537 531 return; 538 532
+5 -2
mm/ksm.c
··· 2629 2629 new_page = NULL; 2630 2630 } 2631 2631 if (new_page) { 2632 - copy_user_highpage(new_page, page, address, vma); 2633 - 2632 + if (copy_mc_user_highpage(new_page, page, address, vma)) { 2633 + put_page(new_page); 2634 + memory_failure_queue(page_to_pfn(page), 0); 2635 + return ERR_PTR(-EHWPOISON); 2636 + } 2634 2637 SetPageDirty(new_page); 2635 2638 __SetPageUptodate(new_page); 2636 2639 __SetPageLocked(new_page);
+1 -7
mm/memblock.c
··· 1640 1640 end = PFN_DOWN(base + size); 1641 1641 1642 1642 for (; cursor < end; cursor++) { 1643 - /* 1644 - * Reserved pages are always initialized by the end of 1645 - * memblock_free_all() (by memmap_init() and, if deferred 1646 - * initialization is enabled, memmap_init_reserved_pages()), so 1647 - * these pages can be released directly to the buddy allocator. 1648 - */ 1649 - __free_pages_core(pfn_to_page(cursor), 0); 1643 + memblock_free_pages(pfn_to_page(cursor), cursor, 0); 1650 1644 totalram_pages_inc(); 1651 1645 } 1652 1646 }
+3
mm/memory.c
··· 3840 3840 if (unlikely(!page)) { 3841 3841 ret = VM_FAULT_OOM; 3842 3842 goto out_page; 3843 + } else if (unlikely(PTR_ERR(page) == -EHWPOISON)) { 3844 + ret = VM_FAULT_HWPOISON; 3845 + goto out_page; 3843 3846 } 3844 3847 folio = page_folio(page); 3845 3848
+4 -1
mm/page_alloc.c
··· 5631 5631 */ 5632 5632 void __free_pages(struct page *page, unsigned int order) 5633 5633 { 5634 + /* get PageHead before we drop reference */ 5635 + int head = PageHead(page); 5636 + 5634 5637 if (put_page_testzero(page)) 5635 5638 free_the_page(page, order); 5636 - else if (!PageHead(page)) 5639 + else if (!head) 5637 5640 while (order-- > 0) 5638 5641 free_the_page(page + (1 << order), order); 5639 5642 }
+8 -5
mm/shrinker_debug.c
··· 246 246 } 247 247 EXPORT_SYMBOL(shrinker_debugfs_rename); 248 248 249 - void shrinker_debugfs_remove(struct shrinker *shrinker) 249 + struct dentry *shrinker_debugfs_remove(struct shrinker *shrinker) 250 250 { 251 + struct dentry *entry = shrinker->debugfs_entry; 252 + 251 253 lockdep_assert_held(&shrinker_rwsem); 252 254 253 255 kfree_const(shrinker->name); 254 256 shrinker->name = NULL; 255 257 256 - if (!shrinker->debugfs_entry) 257 - return; 258 + if (entry) { 259 + ida_free(&shrinker_debugfs_ida, shrinker->debugfs_id); 260 + shrinker->debugfs_entry = NULL; 261 + } 258 262 259 - debugfs_remove_recursive(shrinker->debugfs_entry); 260 - ida_free(&shrinker_debugfs_ida, shrinker->debugfs_id); 263 + return entry; 261 264 } 262 265 263 266 static int __init shrinker_debugfs_init(void)
+14 -6
mm/swapfile.c
··· 1764 1764 struct page *swapcache; 1765 1765 spinlock_t *ptl; 1766 1766 pte_t *pte, new_pte; 1767 + bool hwpoisoned = false; 1767 1768 int ret = 1; 1768 1769 1769 1770 swapcache = page; 1770 1771 page = ksm_might_need_to_copy(page, vma, addr); 1771 1772 if (unlikely(!page)) 1772 1773 return -ENOMEM; 1774 + else if (unlikely(PTR_ERR(page) == -EHWPOISON)) 1775 + hwpoisoned = true; 1773 1776 1774 1777 pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl); 1775 1778 if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) { ··· 1780 1777 goto out; 1781 1778 } 1782 1779 1783 - if (unlikely(!PageUptodate(page))) { 1784 - pte_t pteval; 1780 + if (unlikely(hwpoisoned || !PageUptodate(page))) { 1781 + swp_entry_t swp_entry; 1785 1782 1786 1783 dec_mm_counter(vma->vm_mm, MM_SWAPENTS); 1787 - pteval = swp_entry_to_pte(make_swapin_error_entry()); 1788 - set_pte_at(vma->vm_mm, addr, pte, pteval); 1789 - swap_free(entry); 1784 + if (hwpoisoned) { 1785 + swp_entry = make_hwpoison_entry(swapcache); 1786 + page = swapcache; 1787 + } else { 1788 + swp_entry = make_swapin_error_entry(); 1789 + } 1790 + new_pte = swp_entry_to_pte(swp_entry); 1790 1791 ret = 0; 1791 - goto out; 1792 + goto setpte; 1792 1793 } 1793 1794 1794 1795 /* See do_swap_page() */ ··· 1824 1817 new_pte = pte_mksoft_dirty(new_pte); 1825 1818 if (pte_swp_uffd_wp(*pte)) 1826 1819 new_pte = pte_mkuffd_wp(new_pte); 1820 + setpte: 1827 1821 set_pte_at(vma->vm_mm, addr, pte, new_pte); 1828 1822 swap_free(entry); 1829 1823 out:
+5 -1
mm/vmscan.c
··· 741 741 */ 742 742 void unregister_shrinker(struct shrinker *shrinker) 743 743 { 744 + struct dentry *debugfs_entry; 745 + 744 746 if (!(shrinker->flags & SHRINKER_REGISTERED)) 745 747 return; 746 748 ··· 751 749 shrinker->flags &= ~SHRINKER_REGISTERED; 752 750 if (shrinker->flags & SHRINKER_MEMCG_AWARE) 753 751 unregister_memcg_shrinker(shrinker); 754 - shrinker_debugfs_remove(shrinker); 752 + debugfs_entry = shrinker_debugfs_remove(shrinker); 755 753 up_write(&shrinker_rwsem); 754 + 755 + debugfs_remove_recursive(debugfs_entry); 756 756 757 757 kfree(shrinker->nr_deferred); 758 758 shrinker->nr_deferred = NULL;
+1
net/caif/caif_socket.c
··· 1011 1011 return; 1012 1012 } 1013 1013 sk_stream_kill_queues(&cf_sk->sk); 1014 + WARN_ON_ONCE(sk->sk_forward_alloc); 1014 1015 caif_free_client(&cf_sk->layer); 1015 1016 } 1016 1017
+1 -9
net/core/dev.c
··· 1870 1870 __register_netdevice_notifier_net(dst_net, nb, true); 1871 1871 } 1872 1872 1873 - void move_netdevice_notifier_net(struct net *src_net, struct net *dst_net, 1874 - struct notifier_block *nb) 1875 - { 1876 - rtnl_lock(); 1877 - __move_netdevice_notifier_net(src_net, dst_net, nb); 1878 - rtnl_unlock(); 1879 - } 1880 - 1881 1873 int register_netdevice_notifier_dev_net(struct net_device *dev, 1882 1874 struct notifier_block *nb, 1883 1875 struct netdev_net_notifier *nn) ··· 10374 10382 10375 10383 BUILD_BUG_ON(n > sizeof(*stats64) / sizeof(u64)); 10376 10384 for (i = 0; i < n; i++) 10377 - dst[i] = atomic_long_read(&src[i]); 10385 + dst[i] = (unsigned long)atomic_long_read(&src[i]); 10378 10386 /* zero out counters that only exist in rtnl_link_stats64 */ 10379 10387 memset((char *)stats64 + n * sizeof(u64), 0, 10380 10388 sizeof(*stats64) - n * sizeof(u64));
+9 -1
net/core/net_namespace.c
··· 304 304 } 305 305 EXPORT_SYMBOL_GPL(get_net_ns_by_id); 306 306 307 + /* init code that must occur even if setup_net() is not called. */ 308 + static __net_init void preinit_net(struct net *net) 309 + { 310 + ref_tracker_dir_init(&net->notrefcnt_tracker, 128); 311 + } 312 + 307 313 /* 308 314 * setup_net runs the initializers for the network namespace object. 309 315 */ ··· 322 316 323 317 refcount_set(&net->ns.count, 1); 324 318 ref_tracker_dir_init(&net->refcnt_tracker, 128); 325 - ref_tracker_dir_init(&net->notrefcnt_tracker, 128); 326 319 327 320 refcount_set(&net->passive, 1); 328 321 get_random_bytes(&net->hash_mix, sizeof(u32)); ··· 477 472 rv = -ENOMEM; 478 473 goto dec_ucounts; 479 474 } 475 + 476 + preinit_net(net); 480 477 refcount_set(&net->passive, 1); 481 478 net->ucounts = ucounts; 482 479 get_user_ns(user_ns); ··· 1125 1118 init_net.key_domain = &init_net_key_domain; 1126 1119 #endif 1127 1120 down_write(&pernet_ops_rwsem); 1121 + preinit_net(&init_net); 1128 1122 if (setup_net(&init_net, &init_user_ns)) 1129 1123 panic("Could not setup the initial network namespace"); 1130 1124
-1
net/core/stream.c
··· 209 209 sk_mem_reclaim_final(sk); 210 210 211 211 WARN_ON_ONCE(sk->sk_wmem_queued); 212 - WARN_ON_ONCE(sk->sk_forward_alloc); 213 212 214 213 /* It is _impossible_ for the backlog to contain anything 215 214 * when we get here. All user references to this socket
+2 -5
net/dccp/ipv6.c
··· 551 551 *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), NULL); 552 552 /* Clone pktoptions received with SYN, if we own the req */ 553 553 if (*own_req && ireq->pktopts) { 554 - newnp->pktoptions = skb_clone(ireq->pktopts, GFP_ATOMIC); 554 + newnp->pktoptions = skb_clone_and_charge_r(ireq->pktopts, newsk); 555 555 consume_skb(ireq->pktopts); 556 556 ireq->pktopts = NULL; 557 - if (newnp->pktoptions) 558 - skb_set_owner_r(newnp->pktoptions, newsk); 559 557 } 560 558 561 559 return newsk; ··· 613 615 --ANK (980728) 614 616 */ 615 617 if (np->rxopt.all) 616 - opt_skb = skb_clone(skb, GFP_ATOMIC); 618 + opt_skb = skb_clone_and_charge_r(skb, sk); 617 619 618 620 if (sk->sk_state == DCCP_OPEN) { /* Fast path */ 619 621 if (dccp_rcv_established(sk, skb, dccp_hdr(skb), skb->len)) ··· 677 679 np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb)); 678 680 if (ipv6_opt_accepted(sk, opt_skb, 679 681 &DCCP_SKB_CB(opt_skb)->header.h6)) { 680 - skb_set_owner_r(opt_skb, sk); 681 682 memmove(IP6CB(opt_skb), 682 683 &DCCP_SKB_CB(opt_skb)->header.h6, 683 684 sizeof(struct inet6_skb_parm));
-2
net/devlink/dev.c
··· 343 343 * reload process so the notifications are generated separately. 344 344 */ 345 345 devlink_notify_unregister(devlink); 346 - move_netdevice_notifier_net(curr_net, dest_net, 347 - &devlink->netdevice_nb); 348 346 write_pnet(&devlink->_net, dest_net); 349 347 devlink_notify_register(devlink); 350 348 }
+13
net/devlink/leftover.c
··· 3837 3837 return err; 3838 3838 } 3839 3839 3840 + static void devlink_param_notify(struct devlink *devlink, 3841 + unsigned int port_index, 3842 + struct devlink_param_item *param_item, 3843 + enum devlink_command cmd); 3844 + 3845 + struct devlink_info_req { 3846 + struct sk_buff *msg; 3847 + void (*version_cb)(const char *version_name, 3848 + enum devlink_info_version_type version_type, 3849 + void *version_cb_priv); 3850 + void *version_cb_priv; 3851 + }; 3852 + 3840 3853 static const struct devlink_param devlink_param_generic[] = { 3841 3854 { 3842 3855 .id = DEVLINK_PARAM_GENERIC_ID_INT_ERR_RESET,
+1 -1
net/ipv6/datagram.c
··· 51 51 fl6->flowi6_mark = sk->sk_mark; 52 52 fl6->fl6_dport = inet->inet_dport; 53 53 fl6->fl6_sport = inet->inet_sport; 54 - fl6->flowlabel = np->flow_label; 54 + fl6->flowlabel = ip6_make_flowinfo(np->tclass, np->flow_label); 55 55 fl6->flowi6_uid = sk->sk_uid; 56 56 57 57 if (!oif)
+4 -7
net/ipv6/tcp_ipv6.c
··· 272 272 fl6.flowi6_proto = IPPROTO_TCP; 273 273 fl6.daddr = sk->sk_v6_daddr; 274 274 fl6.saddr = saddr ? *saddr : np->saddr; 275 + fl6.flowlabel = ip6_make_flowinfo(np->tclass, np->flow_label); 275 276 fl6.flowi6_oif = sk->sk_bound_dev_if; 276 277 fl6.flowi6_mark = sk->sk_mark; 277 278 fl6.fl6_dport = usin->sin6_port; ··· 1388 1387 1389 1388 /* Clone pktoptions received with SYN, if we own the req */ 1390 1389 if (ireq->pktopts) { 1391 - newnp->pktoptions = skb_clone(ireq->pktopts, 1392 - sk_gfp_mask(sk, GFP_ATOMIC)); 1390 + newnp->pktoptions = skb_clone_and_charge_r(ireq->pktopts, newsk); 1393 1391 consume_skb(ireq->pktopts); 1394 1392 ireq->pktopts = NULL; 1395 - if (newnp->pktoptions) { 1393 + if (newnp->pktoptions) 1396 1394 tcp_v6_restore_cb(newnp->pktoptions); 1397 - skb_set_owner_r(newnp->pktoptions, newsk); 1398 - } 1399 1395 } 1400 1396 } else { 1401 1397 if (!req_unhash && found_dup_sk) { ··· 1464 1466 --ANK (980728) 1465 1467 */ 1466 1468 if (np->rxopt.all) 1467 - opt_skb = skb_clone(skb, sk_gfp_mask(sk, GFP_ATOMIC)); 1469 + opt_skb = skb_clone_and_charge_r(skb, sk); 1468 1470 1469 1471 reason = SKB_DROP_REASON_NOT_SPECIFIED; 1470 1472 if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */ ··· 1550 1552 if (np->repflow) 1551 1553 np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb)); 1552 1554 if (ipv6_opt_accepted(sk, opt_skb, &TCP_SKB_CB(opt_skb)->header.h6)) { 1553 - skb_set_owner_r(opt_skb, sk); 1554 1555 tcp_v6_restore_cb(opt_skb); 1555 1556 opt_skb = xchg(&np->pktoptions, opt_skb); 1556 1557 } else {
+1 -1
net/key/af_key.c
··· 1261 1261 const struct sadb_x_nat_t_type* n_type; 1262 1262 struct xfrm_encap_tmpl *natt; 1263 1263 1264 - x->encap = kmalloc(sizeof(*x->encap), GFP_KERNEL); 1264 + x->encap = kzalloc(sizeof(*x->encap), GFP_KERNEL); 1265 1265 if (!x->encap) { 1266 1266 err = -ENOMEM; 1267 1267 goto out;
+4
net/mpls/af_mpls.c
··· 1428 1428 free: 1429 1429 kfree(table); 1430 1430 out: 1431 + mdev->sysctl = NULL; 1431 1432 return -ENOBUFS; 1432 1433 } 1433 1434 ··· 1437 1436 { 1438 1437 struct net *net = dev_net(dev); 1439 1438 struct ctl_table *table; 1439 + 1440 + if (!mdev->sysctl) 1441 + return; 1440 1442 1441 1443 table = mdev->sysctl->ctl_table_arg; 1442 1444 unregister_net_sysctl_table(mdev->sysctl);
+3 -1
net/openvswitch/meter.c
··· 449 449 450 450 err = attach_meter(meter_tbl, meter); 451 451 if (err) 452 - goto exit_unlock; 452 + goto exit_free_old_meter; 453 453 454 454 ovs_unlock(); 455 455 ··· 472 472 genlmsg_end(reply, ovs_reply_header); 473 473 return genlmsg_reply(reply, info); 474 474 475 + exit_free_old_meter: 476 + ovs_meter_free(old_meter); 475 477 exit_unlock: 476 478 ovs_unlock(); 477 479 nlmsg_free(reply);
+3 -3
net/sched/act_ctinfo.c
··· 93 93 cp = rcu_dereference_bh(ca->params); 94 94 95 95 tcf_lastuse_update(&ca->tcf_tm); 96 - bstats_update(&ca->tcf_bstats, skb); 96 + tcf_action_update_bstats(&ca->common, skb); 97 97 action = READ_ONCE(ca->tcf_action); 98 98 99 99 wlen = skb_network_offset(skb); ··· 212 212 index = actparm->index; 213 213 err = tcf_idr_check_alloc(tn, &index, a, bind); 214 214 if (!err) { 215 - ret = tcf_idr_create(tn, index, est, a, 216 - &act_ctinfo_ops, bind, false, flags); 215 + ret = tcf_idr_create_from_flags(tn, index, est, a, 216 + &act_ctinfo_ops, bind, flags); 217 217 if (ret) { 218 218 tcf_idr_cleanup(tn, index); 219 219 return ret;
+1 -3
net/sctp/diag.c
··· 343 343 struct sctp_comm_param *commp = p; 344 344 struct sock *sk = ep->base.sk; 345 345 const struct inet_diag_req_v2 *r = commp->r; 346 - struct sctp_association *assoc = 347 - list_entry(ep->asocs.next, struct sctp_association, asocs); 348 346 349 347 /* find the ep only once through the transports by this condition */ 350 - if (tsp->asoc != assoc) 348 + if (!list_is_first(&tsp->asoc->asocs, &ep->asocs)) 351 349 return 0; 352 350 353 351 if (r->sdiag_family != AF_UNSPEC && sk->sk_family != r->sdiag_family)
+6 -3
net/socket.c
··· 982 982 static void sock_recv_mark(struct msghdr *msg, struct sock *sk, 983 983 struct sk_buff *skb) 984 984 { 985 - if (sock_flag(sk, SOCK_RCVMARK) && skb) 986 - put_cmsg(msg, SOL_SOCKET, SO_MARK, sizeof(__u32), 987 - &skb->mark); 985 + if (sock_flag(sk, SOCK_RCVMARK) && skb) { 986 + /* We must use a bounce buffer for CONFIG_HARDENED_USERCOPY=y */ 987 + __u32 mark = skb->mark; 988 + 989 + put_cmsg(msg, SOL_SOCKET, SO_MARK, sizeof(__u32), &mark); 990 + } 988 991 } 989 992 990 993 void __sock_recv_cmsgs(struct msghdr *msg, struct sock *sk,
+2
net/tipc/socket.c
··· 2617 2617 /* Send a 'SYN-' to destination */ 2618 2618 m.msg_name = dest; 2619 2619 m.msg_namelen = destlen; 2620 + iov_iter_kvec(&m.msg_iter, ITER_SOURCE, NULL, 0, 0); 2620 2621 2621 2622 /* If connect is in non-blocking case, set MSG_DONTWAIT to 2622 2623 * indicate send_msg() is never blocked. ··· 2780 2779 __skb_queue_head(&new_sk->sk_receive_queue, buf); 2781 2780 skb_set_owner_r(buf, new_sk); 2782 2781 } 2782 + iov_iter_kvec(&m.msg_iter, ITER_SOURCE, NULL, 0, 0); 2783 2783 __tipc_sendstream(new_sock, &m, 0); 2784 2784 release_sock(new_sk); 2785 2785 exit:
+1 -1
scripts/gdb/linux/cpus.py
··· 163 163 task_ptr_type = task_type.get_type().pointer() 164 164 165 165 if utils.is_target_arch("x86"): 166 - var_ptr = gdb.parse_and_eval("&current_task") 166 + var_ptr = gdb.parse_and_eval("&pcpu_hot.current_task") 167 167 return per_cpu(var_ptr, cpu).dereference() 168 168 elif utils.is_target_arch("aarch64"): 169 169 current_task_addr = gdb.parse_and_eval("$SP_EL0")
+1 -2
security/apparmor/policy_compat.c
··· 160 160 if (!table) 161 161 return NULL; 162 162 163 - /* zero init so skip the trap state (state == 0) */ 164 - for (state = 1; state < state_count; state++) { 163 + for (state = 0; state < state_count; state++) { 165 164 table[state * 2] = compute_fperms_user(dfa, state); 166 165 table[state * 2 + 1] = compute_fperms_other(dfa, state); 167 166 }
+9
sound/pci/hda/patch_realtek.c
···
9423 9423 SND_PCI_QUIRK(0x103c, 0x89c3, "Zbook Studio G9", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
9424 9424 SND_PCI_QUIRK(0x103c, 0x89c6, "Zbook Fury 17 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
9425 9425 SND_PCI_QUIRK(0x103c, 0x89ca, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
9426 + SND_PCI_QUIRK(0x103c, 0x89d3, "HP EliteBook 645 G9 (MB 89D2)", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
9426 9427 SND_PCI_QUIRK(0x103c, 0x8a78, "HP Dev One", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
9427 9428 SND_PCI_QUIRK(0x103c, 0x8aa0, "HP ProBook 440 G9 (MB 8A9E)", ALC236_FIXUP_HP_GPIO_LED),
9428 9429 SND_PCI_QUIRK(0x103c, 0x8aa3, "HP ProBook 450 G9 (MB 8AA1)", ALC236_FIXUP_HP_GPIO_LED),
···
9434 9433 SND_PCI_QUIRK(0x103c, 0x8ad2, "HP EliteBook 860 16 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
9435 9434 SND_PCI_QUIRK(0x103c, 0x8b5d, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
9436 9435 SND_PCI_QUIRK(0x103c, 0x8b5e, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
9436 + SND_PCI_QUIRK(0x103c, 0x8b7a, "HP", ALC236_FIXUP_HP_GPIO_LED),
9437 + SND_PCI_QUIRK(0x103c, 0x8b7d, "HP", ALC236_FIXUP_HP_GPIO_LED),
9438 + SND_PCI_QUIRK(0x103c, 0x8b8a, "HP", ALC236_FIXUP_HP_GPIO_LED),
9439 + SND_PCI_QUIRK(0x103c, 0x8b8b, "HP", ALC236_FIXUP_HP_GPIO_LED),
9440 + SND_PCI_QUIRK(0x103c, 0x8b8d, "HP", ALC236_FIXUP_HP_GPIO_LED),
9437 9441 SND_PCI_QUIRK(0x103c, 0x8b92, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
9438 9442 SND_PCI_QUIRK(0x103c, 0x8bf0, "HP", ALC236_FIXUP_HP_GPIO_LED),
9439 9443 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
···
9486 9480 SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
9487 9481 SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402", ALC245_FIXUP_CS35L41_SPI_2),
9488 9482 SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
9483 + SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2),
9489 9484 SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
9490 9485 SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS),
9491 9486 SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
···
9530 9523 SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP),
9531 9524 SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP),
9532 9525 SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
9526 + SND_PCI_QUIRK(0x144d, 0xca03, "Samsung Galaxy Book2 Pro 360 (NP930QED)", ALC298_FIXUP_SAMSUNG_AMP),
9533 9527 SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
9534 9528 SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
9535 9529 SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
···
9709 9701 SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
9710 9702 SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
9711 9703 SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
9704 + SND_PCI_QUIRK(0x1c6c, 0x1251, "Positivo N14KP6-TG", ALC288_FIXUP_DELL1_MIC_NO_PRESENCE),
9712 9705 SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS),
9713 9706 SND_PCI_QUIRK(0x1d05, 0x1096, "TongFang GMxMRxx", ALC269_FIXUP_NO_SHUTUP),
9714 9707 SND_PCI_QUIRK(0x1d05, 0x1100, "TongFang GKxNRxx", ALC269_FIXUP_NO_SHUTUP),
+5 -6
sound/pci/lx6464es/lx_core.c
···
493 493 dev_dbg(chip->card->dev,
494 494 "CMD_08_ASK_BUFFERS: needed %d, freed %d\n",
495 495 *r_needed, *r_freed);
496 - for (i = 0; i < MAX_STREAM_BUFFER; ++i) {
497 - for (i = 0; i != chip->rmh.stat_len; ++i)
498 - dev_dbg(chip->card->dev,
499 - " stat[%d]: %x, %x\n", i,
500 - chip->rmh.stat[i],
501 - chip->rmh.stat[i] & MASK_DATA_SIZE);
496 + for (i = 0; i < MAX_STREAM_BUFFER && i < chip->rmh.stat_len;
497 + ++i) {
498 + dev_dbg(chip->card->dev, " stat[%d]: %x, %x\n", i,
499 + chip->rmh.stat[i],
500 + chip->rmh.stat[i] & MASK_DATA_SIZE);
502 501 }
503 502 }
504 503
+4 -2
sound/soc/codecs/es8326.c
···
729 729 }
730 730 dev_dbg(component->dev, "jack-pol %x", es8326->jack_pol);
731 731
732 - ret = device_property_read_u8(component->dev, "everest,interrupt-src", &es8326->jack_pol);
732 + ret = device_property_read_u8(component->dev, "everest,interrupt-src",
733 + &es8326->interrupt_src);
733 734 if (ret != 0) {
734 735 dev_dbg(component->dev, "interrupt-src return %d", ret);
735 736 es8326->interrupt_src = ES8326_HP_DET_SRC_PIN9;
736 737 }
737 738 dev_dbg(component->dev, "interrupt-src %x", es8326->interrupt_src);
738 739
739 - ret = device_property_read_u8(component->dev, "everest,interrupt-clk", &es8326->jack_pol);
740 + ret = device_property_read_u8(component->dev, "everest,interrupt-clk",
741 + &es8326->interrupt_clk);
740 742 if (ret != 0) {
741 743 dev_dbg(component->dev, "interrupt-clk return %d", ret);
742 744 es8326->interrupt_clk = 0x45;
+1 -1
sound/soc/codecs/rt715-sdca-sdw.c
···
167 167 }
168 168
169 169 /* set the timeout values */
170 - prop->clk_stop_timeout = 20;
170 + prop->clk_stop_timeout = 200;
171 171
172 172 return 0;
173 173 }
+92 -43
sound/soc/codecs/tas5805m.c
···
154 154 #define TAS5805M_VOLUME_MIN 0
155 155
156 156 struct tas5805m_priv {
157 + struct i2c_client *i2c;
157 158 struct regulator *pvdd;
158 159 struct gpio_desc *gpio_pdn_n;
159 160
···
166 165 int vol[2];
167 166 bool is_powered;
168 167 bool is_muted;
168 +
169 + struct work_struct work;
170 + struct mutex lock;
169 171 };
170 172
171 173 static void set_dsp_scale(struct regmap *rm, int offset, int vol)
···
185 181 regmap_bulk_write(rm, offset, v, ARRAY_SIZE(v));
186 182 }
187 183
188 - static void tas5805m_refresh(struct snd_soc_component *component)
184 + static void tas5805m_refresh(struct tas5805m_priv *tas5805m)
189 185 {
190 - struct tas5805m_priv *tas5805m =
191 - snd_soc_component_get_drvdata(component);
192 186 struct regmap *rm = tas5805m->regmap;
193 187
194 - dev_dbg(component->dev, "refresh: is_muted=%d, vol=%d/%d\n",
188 + dev_dbg(&tas5805m->i2c->dev, "refresh: is_muted=%d, vol=%d/%d\n",
195 189 tas5805m->is_muted, tas5805m->vol[0], tas5805m->vol[1]);
196 190
197 191 regmap_write(rm, REG_PAGE, 0x00);
···
202 200 */
203 201 set_dsp_scale(rm, 0x24, tas5805m->vol[0]);
204 202 set_dsp_scale(rm, 0x28, tas5805m->vol[1]);
203 +
204 + regmap_write(rm, REG_PAGE, 0x00);
205 + regmap_write(rm, REG_BOOK, 0x00);
205 206
206 207 /* Set/clear digital soft-mute */
207 208 regmap_write(rm, REG_DEVICE_CTRL_2,
···
231 226 struct tas5805m_priv *tas5805m =
232 227 snd_soc_component_get_drvdata(component);
233 228
229 + mutex_lock(&tas5805m->lock);
234 230 ucontrol->value.integer.value[0] = tas5805m->vol[0];
235 231 ucontrol->value.integer.value[1] = tas5805m->vol[1];
232 + mutex_unlock(&tas5805m->lock);
233 +
236 234 return 0;
237 235 }
238 236
···
251 243 snd_soc_kcontrol_component(kcontrol);
252 244 struct tas5805m_priv *tas5805m =
253 245 snd_soc_component_get_drvdata(component);
246 + int ret = 0;
254 247
255 248 if (!(volume_is_valid(ucontrol->value.integer.value[0]) &&
256 249 volume_is_valid(ucontrol->value.integer.value[1])))
257 250 return -EINVAL;
258 251
252 + mutex_lock(&tas5805m->lock);
259 253 if (tas5805m->vol[0] != ucontrol->value.integer.value[0] ||
260 254 tas5805m->vol[1] != ucontrol->value.integer.value[1]) {
261 255 tas5805m->vol[0] = ucontrol->value.integer.value[0];
···
266 256 tas5805m->vol[0], tas5805m->vol[1],
267 257 tas5805m->is_powered);
268 258 if (tas5805m->is_powered)
269 - tas5805m_refresh(component);
270 - return 1;
259 + tas5805m_refresh(tas5805m);
260 + ret = 1;
271 261 }
262 + mutex_unlock(&tas5805m->lock);
272 263
273 - return 0;
264 + return ret;
274 265 }
275 266
276 267 static const struct snd_kcontrol_new tas5805m_snd_controls[] = {
···
305 294 struct snd_soc_component *component = dai->component;
306 295 struct tas5805m_priv *tas5805m =
307 296 snd_soc_component_get_drvdata(component);
308 - struct regmap *rm = tas5805m->regmap;
309 - unsigned int chan, global1, global2;
310 297
311 298 switch (cmd) {
312 299 case SNDRV_PCM_TRIGGER_START:
313 300 case SNDRV_PCM_TRIGGER_RESUME:
314 301 case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
315 - dev_dbg(component->dev, "DSP startup\n");
316 -
317 - /* We mustn't issue any I2C transactions until the I2S
318 - * clock is stable. Furthermore, we must allow a 5ms
319 - * delay after the first set of register writes to
320 - * allow the DSP to boot before configuring it.
321 - */
322 - usleep_range(5000, 10000);
323 - send_cfg(rm, dsp_cfg_preboot,
324 - ARRAY_SIZE(dsp_cfg_preboot));
325 - usleep_range(5000, 15000);
326 - send_cfg(rm, tas5805m->dsp_cfg_data,
327 - tas5805m->dsp_cfg_len);
328 -
329 - tas5805m->is_powered = true;
330 - tas5805m_refresh(component);
302 + dev_dbg(component->dev, "clock start\n");
303 + schedule_work(&tas5805m->work);
331 304 break;
332 305
333 306 case SNDRV_PCM_TRIGGER_STOP:
334 307 case SNDRV_PCM_TRIGGER_SUSPEND:
335 308 case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
336 - dev_dbg(component->dev, "DSP shutdown\n");
337 -
338 - tas5805m->is_powered = false;
339 -
340 - regmap_write(rm, REG_PAGE, 0x00);
341 - regmap_write(rm, REG_BOOK, 0x00);
342 -
343 - regmap_read(rm, REG_CHAN_FAULT, &chan);
344 - regmap_read(rm, REG_GLOBAL_FAULT1, &global1);
345 - regmap_read(rm, REG_GLOBAL_FAULT2, &global2);
346 -
347 - dev_dbg(component->dev,
348 - "fault regs: CHAN=%02x, GLOBAL1=%02x, GLOBAL2=%02x\n",
349 - chan, global1, global2);
350 -
351 - regmap_write(rm, REG_DEVICE_CTRL_2, DCTRL2_MODE_HIZ);
352 309 break;
353 310
354 311 default:
355 312 return -EINVAL;
313 + }
314 +
315 + return 0;
316 + }
317 +
318 + static void do_work(struct work_struct *work)
319 + {
320 + struct tas5805m_priv *tas5805m =
321 + container_of(work, struct tas5805m_priv, work);
322 + struct regmap *rm = tas5805m->regmap;
323 +
324 + dev_dbg(&tas5805m->i2c->dev, "DSP startup\n");
325 +
326 + mutex_lock(&tas5805m->lock);
327 + /* We mustn't issue any I2C transactions until the I2S
328 + * clock is stable. Furthermore, we must allow a 5ms
329 + * delay after the first set of register writes to
330 + * allow the DSP to boot before configuring it.
331 + */
332 + usleep_range(5000, 10000);
333 + send_cfg(rm, dsp_cfg_preboot, ARRAY_SIZE(dsp_cfg_preboot));
334 + usleep_range(5000, 15000);
335 + send_cfg(rm, tas5805m->dsp_cfg_data, tas5805m->dsp_cfg_len);
336 +
337 + tas5805m->is_powered = true;
338 + tas5805m_refresh(tas5805m);
339 + mutex_unlock(&tas5805m->lock);
340 + }
341 +
342 + static int tas5805m_dac_event(struct snd_soc_dapm_widget *w,
343 + struct snd_kcontrol *kcontrol, int event)
344 + {
345 + struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
346 + struct tas5805m_priv *tas5805m =
347 + snd_soc_component_get_drvdata(component);
348 + struct regmap *rm = tas5805m->regmap;
349 +
350 + if (event & SND_SOC_DAPM_PRE_PMD) {
351 + unsigned int chan, global1, global2;
352 +
353 + dev_dbg(component->dev, "DSP shutdown\n");
354 + cancel_work_sync(&tas5805m->work);
355 +
356 + mutex_lock(&tas5805m->lock);
357 + if (tas5805m->is_powered) {
358 + tas5805m->is_powered = false;
359 +
360 + regmap_write(rm, REG_PAGE, 0x00);
361 + regmap_write(rm, REG_BOOK, 0x00);
362 +
363 + regmap_read(rm, REG_CHAN_FAULT, &chan);
364 + regmap_read(rm, REG_GLOBAL_FAULT1, &global1);
365 + regmap_read(rm, REG_GLOBAL_FAULT2, &global2);
366 +
367 + dev_dbg(component->dev, "fault regs: CHAN=%02x, "
368 + "GLOBAL1=%02x, GLOBAL2=%02x\n",
369 + chan, global1, global2);
370 +
371 + regmap_write(rm, REG_DEVICE_CTRL_2, DCTRL2_MODE_HIZ);
372 + }
373 + mutex_unlock(&tas5805m->lock);
356 374 }
357 375
358 376 return 0;
···
394 354
395 355 static const struct snd_soc_dapm_widget tas5805m_dapm_widgets[] = {
396 356 SND_SOC_DAPM_AIF_IN("DAC IN", "Playback", 0, SND_SOC_NOPM, 0, 0),
397 - SND_SOC_DAPM_DAC("DAC", NULL, SND_SOC_NOPM, 0, 0),
357 + SND_SOC_DAPM_DAC_E("DAC", NULL, SND_SOC_NOPM, 0, 0,
358 + tas5805m_dac_event, SND_SOC_DAPM_PRE_PMD),
398 359 SND_SOC_DAPM_OUTPUT("OUT")
399 360 };
400 361
···
416 375 struct tas5805m_priv *tas5805m =
417 376 snd_soc_component_get_drvdata(component);
418 377
378 + mutex_lock(&tas5805m->lock);
419 379 dev_dbg(component->dev, "set mute=%d (is_powered=%d)\n",
420 380 mute, tas5805m->is_powered);
381 +
421 382 tas5805m->is_muted = mute;
422 383 if (tas5805m->is_powered)
423 - tas5805m_refresh(component);
384 + tas5805m_refresh(tas5805m);
385 + mutex_unlock(&tas5805m->lock);
424 386
425 387 return 0;
426 388 }
···
478 434 if (!tas5805m)
479 435 return -ENOMEM;
480 436
437 + tas5805m->i2c = i2c;
481 438 tas5805m->pvdd = devm_regulator_get(dev, "pvdd");
482 439 if (IS_ERR(tas5805m->pvdd)) {
483 440 dev_err(dev, "failed to get pvdd supply: %ld\n",
···
552 507 gpiod_set_value(tas5805m->gpio_pdn_n, 1);
553 508 usleep_range(10000, 15000);
554 509
510 + INIT_WORK(&tas5805m->work, do_work);
511 + mutex_init(&tas5805m->lock);
512 +
555 513 /* Don't register through devm. We need to be able to unregister
556 514 * the component prior to deasserting PDN#
557 515 */
···
575 527 struct device *dev = &i2c->dev;
576 528 struct tas5805m_priv *tas5805m = dev_get_drvdata(dev);
577 529
530 + cancel_work_sync(&tas5805m->work);
578 531 snd_soc_unregister_component(dev);
579 532 gpiod_set_value(tas5805m->gpio_pdn_n, 0);
580 533 usleep_range(10000, 15000);
+1
sound/soc/fsl/fsl_sai.c
···
1141 1141
1142 1142 sai->verid.version = val &
1143 1143 (FSL_SAI_VERID_MAJOR_MASK | FSL_SAI_VERID_MINOR_MASK);
1144 + sai->verid.version >>= FSL_SAI_VERID_MINOR_SHIFT;
1144 1145 sai->verid.feature = val & FSL_SAI_VERID_FEATURE_MASK;
1145 1146
1146 1147 ret = regmap_read(sai->regmap, FSL_SAI_PARAM, &val);
+6 -2
sound/soc/soc-topology.c
···
1401 1401
1402 1402 template.num_kcontrols = le32_to_cpu(w->num_kcontrols);
1403 1403 kc = devm_kcalloc(tplg->dev, le32_to_cpu(w->num_kcontrols), sizeof(*kc), GFP_KERNEL);
1404 - if (!kc)
1404 + if (!kc) {
1405 + ret = -ENOMEM;
1405 1406 goto hdr_err;
1407 + }
1406 1408
1407 1409 kcontrol_type = devm_kcalloc(tplg->dev, le32_to_cpu(w->num_kcontrols), sizeof(unsigned int),
1408 1410 GFP_KERNEL);
1409 - if (!kcontrol_type)
1411 + if (!kcontrol_type) {
1412 + ret = -ENOMEM;
1410 1413 goto hdr_err;
1414 + }
1411 1415
1412 1416 for (i = 0; i < le32_to_cpu(w->num_kcontrols); i++) {
1413 1417 control_hdr = (struct snd_soc_tplg_ctl_hdr *)tplg->pos;
+15 -21
sound/soc/sof/amd/acp.c
···
318 318 {
319 319 struct snd_sof_dev *sdev = context;
320 320 const struct sof_amd_acp_desc *desc = get_chip_info(sdev->pdata);
321 - unsigned int base = desc->dsp_intr_base;
322 321 unsigned int val, count = ACP_HW_SEM_RETRY_COUNT;
323 322
324 323 val = snd_sof_dsp_read(sdev, ACP_DSP_BAR, desc->ext_intr_stat);
···
327 328 return IRQ_HANDLED;
328 329 }
329 330
330 - val = snd_sof_dsp_read(sdev, ACP_DSP_BAR, base + DSP_SW_INTR_STAT_OFFSET);
331 - if (val & ACP_DSP_TO_HOST_IRQ) {
332 - while (snd_sof_dsp_read(sdev, ACP_DSP_BAR, desc->hw_semaphore_offset)) {
333 - /* Wait until acquired HW Semaphore lock or timeout */
334 - count--;
335 - if (!count) {
336 - dev_err(sdev->dev, "%s: Failed to acquire HW lock\n", __func__);
337 - return IRQ_NONE;
338 - }
331 + while (snd_sof_dsp_read(sdev, ACP_DSP_BAR, desc->hw_semaphore_offset)) {
332 + /* Wait until acquired HW Semaphore lock or timeout */
333 + count--;
334 + if (!count) {
335 + dev_err(sdev->dev, "%s: Failed to acquire HW lock\n", __func__);
336 + return IRQ_NONE;
339 337 }
340 -
341 - sof_ops(sdev)->irq_thread(irq, sdev);
342 - val |= ACP_DSP_TO_HOST_IRQ;
343 - snd_sof_dsp_write(sdev, ACP_DSP_BAR, base + DSP_SW_INTR_STAT_OFFSET, val);
344 -
345 - /* Unlock or Release HW Semaphore */
346 - snd_sof_dsp_write(sdev, ACP_DSP_BAR, desc->hw_semaphore_offset, 0x0);
347 -
348 - return IRQ_HANDLED;
349 338 }
350 339
351 - return IRQ_NONE;
340 + sof_ops(sdev)->irq_thread(irq, sdev);
341 + /* Unlock or Release HW Semaphore */
342 + snd_sof_dsp_write(sdev, ACP_DSP_BAR, desc->hw_semaphore_offset, 0x0);
343 +
344 + return IRQ_HANDLED;
352 345 };
353 346
354 347 static irqreturn_t acp_irq_handler(int irq, void *dev_id)
···
351 360 unsigned int val;
352 361
353 362 val = snd_sof_dsp_read(sdev, ACP_DSP_BAR, base + DSP_SW_INTR_STAT_OFFSET);
354 - if (val)
363 + if (val) {
364 + val |= ACP_DSP_TO_HOST_IRQ;
365 + snd_sof_dsp_write(sdev, ACP_DSP_BAR, base + DSP_SW_INTR_STAT_OFFSET, val);
355 366 return IRQ_WAKE_THREAD;
367 + }
356 368
357 369 return IRQ_NONE;
358 370 }
+3
sound/synth/emux/emux_nrpn.c
···
349 349 snd_emux_xg_control(struct snd_emux_port *port, struct snd_midi_channel *chan,
350 350 int param)
351 351 {
352 + if (param >= ARRAY_SIZE(chan->control))
353 + return -EINVAL;
354 +
352 355 return send_converted_effect(xg_effects, ARRAY_SIZE(xg_effects),
353 356 port, chan, param,
354 357 chan->control[param],
-4
tools/testing/memblock/internal.h
···
15 15
16 16 struct page {};
17 17
18 - void __free_pages_core(struct page *page, unsigned int order)
19 - {
20 - }
21 -
22 18 void memblock_free_pages(struct page *page, unsigned long pfn,
23 19 unsigned int order)
24 20 {
+127 -1
tools/testing/selftests/net/fib_rule_tests.sh
···
10 10
11 11 PAUSE_ON_FAIL=${PAUSE_ON_FAIL:=no}
12 12 IP="ip -netns testns"
13 + IP_PEER="ip -netns peerns"
13 14
14 15 RTABLE=100
16 + RTABLE_PEER=101
15 17 GW_IP4=192.51.100.2
16 18 SRC_IP=192.51.100.3
17 19 GW_IP6=2001:db8:1::2
···
22 20 DEV_ADDR=192.51.100.1
23 21 DEV_ADDR6=2001:db8:1::1
24 22 DEV=dummy0
25 - TESTS="fib_rule6 fib_rule4"
23 + TESTS="fib_rule6 fib_rule4 fib_rule6_connect fib_rule4_connect"
24 +
25 + SELFTEST_PATH=""
26 26
27 27 log_test()
28 28 {
···
56 52 echo "######################################################################"
57 53 }
58 54
55 + check_nettest()
56 + {
57 + if which nettest > /dev/null 2>&1; then
58 + return 0
59 + fi
60 +
61 + # Add the selftest directory to PATH if not already done
62 + if [ "${SELFTEST_PATH}" = "" ]; then
63 + SELFTEST_PATH="$(dirname $0)"
64 + PATH="${PATH}:${SELFTEST_PATH}"
65 +
66 + # Now retry with the new path
67 + if which nettest > /dev/null 2>&1; then
68 + return 0
69 + fi
70 +
71 + if [ "${ret}" -eq 0 ]; then
72 + ret="${ksft_skip}"
73 + fi
74 + echo "nettest not found (try 'make -C ${SELFTEST_PATH} nettest')"
75 + fi
76 +
77 + return 1
78 + }
79 +
59 80 setup()
60 81 {
61 82 set -e
···
99 70 {
100 71 $IP link del dev dummy0 &> /dev/null
101 72 ip netns del testns
73 + }
74 +
75 + setup_peer()
76 + {
77 + set -e
78 +
79 + ip netns add peerns
80 + $IP_PEER link set dev lo up
81 +
82 + ip link add name veth0 netns testns type veth \
83 + peer name veth1 netns peerns
84 + $IP link set dev veth0 up
85 + $IP_PEER link set dev veth1 up
86 +
87 + $IP address add 192.0.2.10 peer 192.0.2.11/32 dev veth0
88 + $IP_PEER address add 192.0.2.11 peer 192.0.2.10/32 dev veth1
89 +
90 + $IP address add 2001:db8::10 peer 2001:db8::11/128 dev veth0 nodad
91 + $IP_PEER address add 2001:db8::11 peer 2001:db8::10/128 dev veth1 nodad
92 +
93 + $IP_PEER address add 198.51.100.11/32 dev lo
94 + $IP route add table $RTABLE_PEER 198.51.100.11/32 via 192.0.2.11
95 +
96 + $IP_PEER address add 2001:db8::1:11/128 dev lo
97 + $IP route add table $RTABLE_PEER 2001:db8::1:11/128 via 2001:db8::11
98 +
99 + set +e
100 + }
101 +
102 + cleanup_peer()
103 + {
104 + $IP link del dev veth0
105 + ip netns del peerns
102 106 }
103 107
104 108 fib_check_iproute_support()
···
252 190 fi
253 191 }
254 192
193 + # Verify that the IPV6_TCLASS option of UDPv6 and TCPv6 sockets is properly
194 + # taken into account when connecting the socket and when sending packets.
195 + fib_rule6_connect_test()
196 + {
197 + local dsfield
198 +
199 + if ! check_nettest; then
200 + echo "SKIP: Could not run test without nettest tool"
201 + return
202 + fi
203 +
204 + setup_peer
205 + $IP -6 rule add dsfield 0x04 table $RTABLE_PEER
206 +
207 + # Combine the base DS Field value (0x04) with all possible ECN values
208 + # (Not-ECT: 0, ECT(1): 1, ECT(0): 2, CE: 3).
209 + # The ECN bits shouldn't influence the result of the test.
210 + for dsfield in 0x04 0x05 0x06 0x07; do
211 + nettest -q -6 -B -t 5 -N testns -O peerns -U -D \
212 + -Q "${dsfield}" -l 2001:db8::1:11 -r 2001:db8::1:11
213 + log_test $? 0 "rule6 dsfield udp connect (dsfield ${dsfield})"
214 +
215 + nettest -q -6 -B -t 5 -N testns -O peerns -Q "${dsfield}" \
216 + -l 2001:db8::1:11 -r 2001:db8::1:11
217 + log_test $? 0 "rule6 dsfield tcp connect (dsfield ${dsfield})"
218 + done
219 +
220 + $IP -6 rule del dsfield 0x04 table $RTABLE_PEER
221 + cleanup_peer
222 + }
223 +
255 224 fib_rule4_del()
256 225 {
257 226 $IP rule del $1
···
389 296 fi
390 297 }
391 298
299 + # Verify that the IP_TOS option of UDPv4 and TCPv4 sockets is properly taken
300 + # into account when connecting the socket and when sending packets.
301 + fib_rule4_connect_test()
302 + {
303 + local dsfield
304 +
305 + if ! check_nettest; then
306 + echo "SKIP: Could not run test without nettest tool"
307 + return
308 + fi
309 +
310 + setup_peer
311 + $IP -4 rule add dsfield 0x04 table $RTABLE_PEER
312 +
313 + # Combine the base DS Field value (0x04) with all possible ECN values
314 + # (Not-ECT: 0, ECT(1): 1, ECT(0): 2, CE: 3).
315 + # The ECN bits shouldn't influence the result of the test.
316 + for dsfield in 0x04 0x05 0x06 0x07; do
317 + nettest -q -B -t 5 -N testns -O peerns -D -U -Q "${dsfield}" \
318 + -l 198.51.100.11 -r 198.51.100.11
319 + log_test $? 0 "rule4 dsfield udp connect (dsfield ${dsfield})"
320 +
321 + nettest -q -B -t 5 -N testns -O peerns -Q "${dsfield}" \
322 + -l 198.51.100.11 -r 198.51.100.11
323 + log_test $? 0 "rule4 dsfield tcp connect (dsfield ${dsfield})"
324 + done
325 +
326 + $IP -4 rule del dsfield 0x04 table $RTABLE_PEER
327 + cleanup_peer
328 + }
329 +
392 330 run_fibrule_tests()
393 331 {
394 332 log_section "IPv4 fib rule"
···
469 345 case $t in
470 346 fib_rule6_test|fib_rule6) fib_rule6_test;;
471 347 fib_rule4_test|fib_rule4) fib_rule4_test;;
348 + fib_rule6_connect_test|fib_rule6_connect) fib_rule6_connect_test;;
349 + fib_rule4_connect_test|fib_rule4_connect) fib_rule4_connect_test;;
472 350
473 351 help) echo "Test names: $TESTS"; exit 0;;
474 352
+50 -1
tools/testing/selftests/net/nettest.c
···
87 87 int use_setsockopt;
88 88 int use_freebind;
89 89 int use_cmsg;
90 + uint8_t dsfield;
90 91 const char *dev;
91 92 const char *server_dev;
92 93 int ifindex;
···
579 578 }
580 579
581 580 return rc;
581 + }
582 +
583 + static int set_dsfield(int sd, int version, int dsfield)
584 + {
585 + if (!dsfield)
586 + return 0;
587 +
588 + switch (version) {
589 + case AF_INET:
590 + if (setsockopt(sd, SOL_IP, IP_TOS, &dsfield,
591 + sizeof(dsfield)) < 0) {
592 + log_err_errno("setsockopt(IP_TOS)");
593 + return -1;
594 + }
595 + break;
596 +
597 + case AF_INET6:
598 + if (setsockopt(sd, SOL_IPV6, IPV6_TCLASS, &dsfield,
599 + sizeof(dsfield)) < 0) {
600 + log_err_errno("setsockopt(IPV6_TCLASS)");
601 + return -1;
602 + }
603 + break;
604 +
605 + default:
606 + log_error("Invalid address family\n");
607 + return -1;
608 + }
609 +
610 + return 0;
582 611 }
583 612
584 613 static int str_to_uint(const char *str, int min, int max, unsigned int *value)
···
1348 1317 (char *)&one, sizeof(one)) < 0)
1349 1318 log_err_errno("Setting SO_BROADCAST error");
1350 1319
1320 + if (set_dsfield(sd, AF_INET, args->dsfield) != 0)
1321 + goto out_err;
1322 +
1351 1323 if (args->dev && bind_to_device(sd, args->dev) != 0)
1352 1324 goto out_err;
1353 1325 else if (args->use_setsockopt &&
···
1477 1443 goto err;
1478 1444
1479 1445 if (set_reuseport(sd) != 0)
1446 + goto err;
1447 +
1448 + if (set_dsfield(sd, args->version, args->dsfield) != 0)
1480 1449 goto err;
1481 1450
1482 1451 if (args->dev && bind_to_device(sd, args->dev) != 0)
···
1695 1658 if (set_reuseport(sd) != 0)
1696 1659 goto err;
1697 1660
1661 + if (set_dsfield(sd, args->version, args->dsfield) != 0)
1662 + goto err;
1663 +
1698 1664 if (args->dev && bind_to_device(sd, args->dev) != 0)
1699 1665 goto err;
1700 1666 else if (args->use_setsockopt &&
···
1902 1862 return client_status;
1903 1863 }
1904 1864
1905 - #define GETOPT_STR "sr:l:c:p:t:g:P:DRn:M:X:m:d:I:BN:O:SUCi6xL:0:1:2:3:Fbqf"
1865 + #define GETOPT_STR "sr:l:c:Q:p:t:g:P:DRn:M:X:m:d:I:BN:O:SUCi6xL:0:1:2:3:Fbqf"
1906 1866 #define OPT_FORCE_BIND_KEY_IFINDEX 1001
1907 1867 #define OPT_NO_BIND_KEY_IFINDEX 1002
···
1933 1893 " -D|R datagram (D) / raw (R) socket (default stream)\n"
1934 1894 " -l addr local address to bind to in server mode\n"
1935 1895 " -c addr local address to bind to in client mode\n"
1896 + " -Q dsfield DS Field value of the socket (the IP_TOS or\n"
1897 + " IPV6_TCLASS socket option)\n"
1936 1898 " -x configure XFRM policy on socket\n"
1937 1899 "\n"
1938 1900 " -d dev bind socket to given device name\n"
···
2012 1970 case 'c':
2013 1971 args.has_local_ip = 1;
2014 1972 args.client_local_addr_str = optarg;
1973 + break;
1974 + case 'Q':
1975 + if (str_to_uint(optarg, 0, 255, &tmp) != 0) {
1976 + fprintf(stderr, "Invalid DS Field\n");
1977 + return 1;
1978 + }
1979 + args.dsfield = tmp;
2015 1980 break;
2016 1981 case 'p':
2017 1982 if (str_to_uint(optarg, 1, 65535, &tmp) != 0) {