Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.14-rc5).

Conflicts:

drivers/net/ethernet/cadence/macb_main.c
fa52f15c745c ("net: cadence: macb: Synchronize stats calculations")
75696dd0fd72 ("net: cadence: macb: Convert to get_stats64")
https://lore.kernel.org/20250224125848.68ee63e5@canb.auug.org.au

Adjacent changes:

drivers/net/ethernet/intel/ice/ice_sriov.c
79990cf5e7ad ("ice: Fix deinitializing VF in error path")
a203163274a4 ("ice: simplify VF MSI-X managing")

net/ipv4/tcp.c
18912c520674 ("tcp: devmem: don't write truncated dmabuf CMSGs to userspace")
297d389e9e5b ("net: prefix devmem specific helpers")

net/mptcp/subflow.c
8668860b0ad3 ("mptcp: reset when MPTCP opts are dropped after join")
c3349a22c200 ("mptcp: consolidate subflow cleanup")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3228 -1627
+2 -1
Documentation/arch/powerpc/cxl.rst
··· 18 18 both access system memory directly and with the same effective 19 19 addresses. 20 20 21 + **This driver is deprecated and will be removed in a future release.** 21 22 22 23 Hardware overview 23 24 ================= ··· 454 453 455 454 A cxl sysfs class is added under /sys/class/cxl to facilitate 456 455 enumeration and tuning of the accelerators. Its layout is 457 - described in Documentation/ABI/testing/sysfs-class-cxl 456 + described in Documentation/ABI/obsolete/sysfs-class-cxl 458 457 459 458 460 459 Udev rules
+2 -2
Documentation/arch/x86/sva.rst
··· 25 25 mmu_notifier() support to keep the device TLB cache and the CPU cache in 26 26 sync. When an ATS lookup fails for a virtual address, the device should 27 27 use the PRI in order to request the virtual address to be paged into the 28 - CPU page tables. The device must use ATS again in order the fetch the 28 + CPU page tables. The device must use ATS again in order to fetch the 29 29 translation before use. 30 30 31 31 Shared Hardware Workqueues ··· 216 216 217 217 Single Root I/O Virtualization (SR-IOV) focuses on providing independent 218 218 hardware interfaces for virtualizing hardware. Hence, it's required to be 219 - almost fully functional interface to software supporting the traditional 219 + an almost fully functional interface to software supporting the traditional 220 220 BARs, space for interrupts via MSI-X, its own register layout. 221 221 Virtual Functions (VFs) are assisted by the Physical Function (PF) 222 222 driver.
+7 -1
Documentation/devicetree/bindings/arm/rockchip/pmu.yaml
··· 53 53 reg: 54 54 maxItems: 1 55 55 56 + power-controller: 57 + type: object 58 + 59 + reboot-mode: 60 + type: object 61 + 56 62 required: 57 63 - compatible 58 64 - reg 59 65 60 - additionalProperties: true 66 + additionalProperties: false 61 67 62 68 examples: 63 69 - |
+7 -1
Documentation/devicetree/bindings/mtd/cdns,hp-nfc.yaml
··· 33 33 clocks: 34 34 maxItems: 1 35 35 36 + clock-names: 37 + items: 38 + - const: nf_clk 39 + 36 40 dmas: 37 41 maxItems: 1 38 42 ··· 55 51 - reg-names 56 52 - interrupts 57 53 - clocks 54 + - clock-names 58 55 59 56 unevaluatedProperties: false 60 57 ··· 71 66 #address-cells = <1>; 72 67 #size-cells = <0>; 73 68 interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>; 74 - clocks = <&nf_clk>; 69 + clocks = <&clk>; 70 + clock-names = "nf_clk"; 75 71 cdns,board-delay-ps = <4830>; 76 72 77 73 nand@0 {
+8 -1
Documentation/networking/strparser.rst
··· 112 112 Callbacks 113 113 ========= 114 114 115 - There are six callbacks: 115 + There are seven callbacks: 116 116 117 117 :: 118 118 ··· 182 182 the length of the message. skb->len - offset may be greater 183 183 then full_len since strparser does not trim the skb. 184 184 185 + :: 186 + 187 + int (*read_sock)(struct strparser *strp, read_descriptor_t *desc, 188 + sk_read_actor_t recv_actor); 189 + 190 + The read_sock callback is used by strparser instead of 191 + sock->ops->read_sock, if provided. 185 192 :: 186 193 187 194 int (*read_sock_done)(struct strparser *strp, int err);
+3 -3
Documentation/userspace-api/landlock.rst
··· 8 8 ===================================== 9 9 10 10 :Author: Mickaël Salaün 11 - :Date: October 2024 11 + :Date: January 2025 12 12 13 13 The goal of Landlock is to enable restriction of ambient rights (e.g. global 14 14 filesystem or network access) for a set of processes. Because Landlock ··· 329 329 A sandboxed process can connect to a non-sandboxed process when its domain is 330 330 not scoped. If a process's domain is scoped, it can only connect to sockets 331 331 created by processes in the same scope. 332 - Moreover, If a process is scoped to send signal to a non-scoped process, it can 332 + Moreover, if a process is scoped to send signal to a non-scoped process, it can 333 333 only send signals to processes in the same scope. 334 334 335 335 A connected datagram socket behaves like a stream socket when its domain is 336 - scoped, meaning if the domain is scoped after the socket is connected , it can 336 + scoped, meaning if the domain is scoped after the socket is connected, it can 337 337 still :manpage:`send(2)` data just like a stream socket. However, in the same 338 338 scenario, a non-connected datagram socket cannot send data (with 339 339 :manpage:`sendto(2)`) outside its scope.
+9 -8
MAINTAINERS
··· 2210 2210 2211 2211 ARM/APPLE MACHINE SUPPORT 2212 2212 M: Sven Peter <sven@svenpeter.dev> 2213 + M: Janne Grunau <j@jannau.net> 2213 2214 R: Alyssa Rosenzweig <alyssa@rosenzweig.io> 2214 2215 L: asahi@lists.linux.dev 2215 2216 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) ··· 2285 2284 2286 2285 ARM/ASPEED MACHINE SUPPORT 2287 2286 M: Joel Stanley <joel@jms.id.au> 2288 - R: Andrew Jeffery <andrew@codeconstruct.com.au> 2287 + M: Andrew Jeffery <andrew@codeconstruct.com.au> 2289 2288 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2290 2289 L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers) 2291 2290 S: Supported ··· 2878 2877 2879 2878 ARM/NXP S32G/S32R DWMAC ETHERNET DRIVER 2880 2879 M: Jan Petrous <jan.petrous@oss.nxp.com> 2881 - L: NXP S32 Linux Team <s32@nxp.com> 2880 + R: s32@nxp.com 2882 2881 S: Maintained 2883 2882 F: Documentation/devicetree/bindings/net/nxp,s32-dwmac.yaml 2884 2883 F: drivers/net/ethernet/stmicro/stmmac/dwmac-s32.c ··· 5856 5855 5857 5856 CONFIGFS 5858 5857 M: Joel Becker <jlbec@evilplan.org> 5859 - M: Christoph Hellwig <hch@lst.de> 5860 5858 S: Supported 5861 5859 T: git git://git.infradead.org/users/hch/configfs.git 5862 5860 F: fs/configfs/ ··· 6878 6878 F: tools/testing/selftests/dma/ 6879 6879 6880 6880 DMA MAPPING HELPERS 6881 - M: Christoph Hellwig <hch@lst.de> 6882 6881 M: Marek Szyprowski <m.szyprowski@samsung.com> 6883 6882 R: Robin Murphy <robin.murphy@arm.com> 6884 6883 L: iommu@lists.linux.dev ··· 7424 7425 F: drivers/gpu/drm/panel/panel-novatek-nt36672a.c 7425 7426 7426 7427 DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS 7427 - M: Karol Herbst <kherbst@redhat.com> 7428 7428 M: Lyude Paul <lyude@redhat.com> 7429 7429 M: Danilo Krummrich <dakr@kernel.org> 7430 7430 L: dri-devel@lists.freedesktop.org ··· 15681 15683 15682 15684 MICROSOFT MANA RDMA DRIVER 15683 15685 M: Long Li <longli@microsoft.com> 15684 - M: Ajay Sharma <sharmaajay@microsoft.com> 15686 + M: Konstantin Taranov <kotaranov@microsoft.com> 15685 15687 L: linux-rdma@vger.kernel.org 15686 15688 S: Supported 15687 15689 F: drivers/infiniband/hw/mana/ ··· 19878 19880 F: tools/testing/selftests/net/rds/ 19879 19881 19880 19882 RDT - RESOURCE ALLOCATION 19881 - M: Fenghua Yu <fenghua.yu@intel.com> 19883 + M: Tony Luck <tony.luck@intel.com> 19882 19884 M: Reinette Chatre <reinette.chatre@intel.com> 19883 19885 L: linux-kernel@vger.kernel.org 19884 19886 S: Supported ··· 20329 20331 M: Paul Walmsley <paul.walmsley@sifive.com> 20330 20332 M: Palmer Dabbelt <palmer@dabbelt.com> 20331 20333 M: Albert Ou <aou@eecs.berkeley.edu> 20334 + R: Alexandre Ghiti <alex@ghiti.fr> 20332 20335 L: linux-riscv@lists.infradead.org 20333 20336 S: Supported 20334 20337 Q: https://patchwork.kernel.org/project/linux-riscv/list/ ··· 21923 21924 21924 21925 SOCKET TIMESTAMPING 21925 21926 M: Willem de Bruijn <willemdebruijn.kernel@gmail.com> 21927 + R: Jason Xing <kernelxing@tencent.com> 21926 21928 S: Maintained 21927 21929 F: Documentation/networking/timestamping.rst 21928 21930 F: include/linux/net_tstamp.h 21929 21931 F: include/uapi/linux/net_tstamp.h 21932 + F: tools/testing/selftests/bpf/*/net_timestamping* 21933 + F: tools/testing/selftests/net/*timestamp* 21930 21934 F: tools/testing/selftests/net/so_txtime.c 21931 21935 21932 21936 SOEKRIS NET48XX LED SUPPORT ··· 24072 24070 TRACING MMIO ACCESSES (MMIOTRACE) 24073 24071 M: Steven Rostedt <rostedt@goodmis.org> 24074 24072 M: Masami Hiramatsu <mhiramat@kernel.org> 24075 - R: Karol Herbst <karolherbst@gmail.com> 24076 24073 R: Pekka Paalanen <ppaalanen@gmail.com> 24077 24074 L: linux-kernel@vger.kernel.org 24078 24075 L: nouveau@lists.freedesktop.org
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 14 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 5 + EXTRAVERSION = -rc4 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
-1
arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
··· 226 226 }; 227 227 228 228 &uart5 { 229 - pinctrl-0 = <&uart5_xfer>; 230 229 rts-gpios = <&gpio0 RK_PB5 GPIO_ACTIVE_HIGH>; 231 230 status = "okay"; 232 231 };
+6
arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
··· 396 396 status = "okay"; 397 397 }; 398 398 399 + &uart5 { 400 + /delete-property/ dmas; 401 + /delete-property/ dma-names; 402 + pinctrl-0 = <&uart5_xfer>; 403 + }; 404 + 399 405 /* Mule UCAN */ 400 406 &usb_host0_ehci { 401 407 status = "okay";
+1 -2
arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts
··· 17 17 18 18 &gmac2io { 19 19 phy-handle = <&yt8531c>; 20 - tx_delay = <0x19>; 21 - rx_delay = <0x05>; 20 + phy-mode = "rgmii-id"; 22 21 status = "okay"; 23 22 24 23 mdio {
+1
arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus.dts
··· 15 15 16 16 &gmac2io { 17 17 phy-handle = <&rtl8211e>; 18 + phy-mode = "rgmii"; 18 19 tx_delay = <0x24>; 19 20 rx_delay = <0x18>; 20 21 status = "okay";
-1
arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus.dtsi
··· 109 109 assigned-clocks = <&cru SCLK_MAC2IO>, <&cru SCLK_MAC2IO_EXT>; 110 110 assigned-clock-parents = <&gmac_clk>, <&gmac_clk>; 111 111 clock_in_out = "input"; 112 - phy-mode = "rgmii"; 113 112 phy-supply = <&vcc_io>; 114 113 pinctrl-0 = <&rgmiim1_pins>; 115 114 pinctrl-names = "default";
+4 -4
arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi
··· 22 22 }; 23 23 24 24 /* EC turns on w/ pp900_usb_en */ 25 - pp900_usb: pp900-ap { 25 + pp900_usb: regulator-pp900-ap { 26 26 }; 27 27 28 28 /* EC turns on w/ pp900_pcie_en */ 29 - pp900_pcie: pp900-ap { 29 + pp900_pcie: regulator-pp900-ap { 30 30 }; 31 31 32 32 pp3000: regulator-pp3000 { ··· 126 126 }; 127 127 128 128 /* Always on; plain and simple */ 129 - pp3000_ap: pp3000_emmc: pp3000 { 129 + pp3000_ap: pp3000_emmc: regulator-pp3000 { 130 130 }; 131 131 132 132 pp1500_ap_io: regulator-pp1500-ap-io { ··· 160 160 }; 161 161 162 162 /* EC turns on w/ pp3300_usb_en_l */ 163 - pp3300_usb: pp3300 { 163 + pp3300_usb: regulator-pp3300 { 164 164 }; 165 165 166 166 /* gpio is shared with pp1800_pcie and pinctrl is set there */
+3 -3
arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi
··· 92 92 }; 93 93 94 94 /* EC turns on pp1800_s3_en */ 95 - pp1800_s3: pp1800 { 95 + pp1800_s3: regulator-pp1800 { 96 96 }; 97 97 98 98 /* pp3300 children, sorted by name */ ··· 109 109 }; 110 110 111 111 /* EC turns on pp3300_s0_en */ 112 - pp3300_s0: pp3300 { 112 + pp3300_s0: regulator-pp3300 { 113 113 }; 114 114 115 115 /* EC turns on pp3300_s3_en */ 116 - pp3300_s3: pp3300 { 116 + pp3300_s3: regulator-pp3300 { 117 117 }; 118 118 119 119 /*
+11 -11
arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
··· 189 189 }; 190 190 191 191 /* EC turns on w/ pp900_ddrpll_en */ 192 - pp900_ddrpll: pp900-ap { 192 + pp900_ddrpll: regulator-pp900-ap { 193 193 }; 194 194 195 195 /* EC turns on w/ pp900_pll_en */ 196 - pp900_pll: pp900-ap { 196 + pp900_pll: regulator-pp900-ap { 197 197 }; 198 198 199 199 /* EC turns on w/ pp900_pmu_en */ 200 - pp900_pmu: pp900-ap { 200 + pp900_pmu: regulator-pp900-ap { 201 201 }; 202 202 203 203 /* EC turns on w/ pp1800_s0_en_l */ 204 - pp1800_ap_io: pp1800_emmc: pp1800_nfc: pp1800_s0: pp1800 { 204 + pp1800_ap_io: pp1800_emmc: pp1800_nfc: pp1800_s0: regulator-pp1800 { 205 205 }; 206 206 207 207 /* EC turns on w/ pp1800_avdd_en_l */ 208 - pp1800_avdd: pp1800 { 208 + pp1800_avdd: regulator-pp1800 { 209 209 }; 210 210 211 211 /* EC turns on w/ pp1800_lid_en_l */ 212 - pp1800_lid: pp1800_mic: pp1800 { 212 + pp1800_lid: pp1800_mic: regulator-pp1800 { 213 213 }; 214 214 215 215 /* EC turns on w/ lpddr_pwr_en */ 216 - pp1800_lpddr: pp1800 { 216 + pp1800_lpddr: regulator-pp1800 { 217 217 }; 218 218 219 219 /* EC turns on w/ pp1800_pmu_en_l */ 220 - pp1800_pmu: pp1800 { 220 + pp1800_pmu: regulator-pp1800 { 221 221 }; 222 222 223 223 /* EC turns on w/ pp1800_usb_en_l */ 224 - pp1800_usb: pp1800 { 224 + pp1800_usb: regulator-pp1800 { 225 225 }; 226 226 227 227 pp3000_sd_slot: regulator-pp3000-sd-slot { ··· 259 259 }; 260 260 261 261 /* EC turns on w/ pp3300_trackpad_en_l */ 262 - pp3300_trackpad: pp3300-trackpad { 262 + pp3300_trackpad: regulator-pp3300-trackpad { 263 263 }; 264 264 265 265 /* EC turns on w/ usb_a_en */ 266 - pp5000_usb_a_vbus: pp5000 { 266 + pp5000_usb_a_vbus: regulator-pp5000 { 267 267 }; 268 268 269 269 ap_rtc_clk: ap-rtc-clk {
+11 -11
arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
··· 549 549 mmu600_pcie: iommu@fc900000 { 550 550 compatible = "arm,smmu-v3"; 551 551 reg = <0x0 0xfc900000 0x0 0x200000>; 552 - interrupts = <GIC_SPI 369 IRQ_TYPE_LEVEL_HIGH 0>, 553 - <GIC_SPI 371 IRQ_TYPE_LEVEL_HIGH 0>, 554 - <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH 0>, 555 - <GIC_SPI 367 IRQ_TYPE_LEVEL_HIGH 0>; 552 + interrupts = <GIC_SPI 369 IRQ_TYPE_EDGE_RISING 0>, 553 + <GIC_SPI 371 IRQ_TYPE_EDGE_RISING 0>, 554 + <GIC_SPI 374 IRQ_TYPE_EDGE_RISING 0>, 555 + <GIC_SPI 367 IRQ_TYPE_EDGE_RISING 0>; 556 556 interrupt-names = "eventq", "gerror", "priq", "cmdq-sync"; 557 557 #iommu-cells = <1>; 558 558 }; ··· 560 560 mmu600_php: iommu@fcb00000 { 561 561 compatible = "arm,smmu-v3"; 562 562 reg = <0x0 0xfcb00000 0x0 0x200000>; 563 - interrupts = <GIC_SPI 381 IRQ_TYPE_LEVEL_HIGH 0>, 564 - <GIC_SPI 383 IRQ_TYPE_LEVEL_HIGH 0>, 565 - <GIC_SPI 386 IRQ_TYPE_LEVEL_HIGH 0>, 566 - <GIC_SPI 379 IRQ_TYPE_LEVEL_HIGH 0>; 563 + interrupts = <GIC_SPI 381 IRQ_TYPE_EDGE_RISING 0>, 564 + <GIC_SPI 383 IRQ_TYPE_EDGE_RISING 0>, 565 + <GIC_SPI 386 IRQ_TYPE_EDGE_RISING 0>, 566 + <GIC_SPI 379 IRQ_TYPE_EDGE_RISING 0>; 567 567 interrupt-names = "eventq", "gerror", "priq", "cmdq-sync"; 568 568 #iommu-cells = <1>; 569 569 status = "disabled"; ··· 2668 2668 rockchip,hw-tshut-temp = <120000>; 2669 2669 rockchip,hw-tshut-mode = <0>; /* tshut mode 0:CRU 1:GPIO */ 2670 2670 rockchip,hw-tshut-polarity = <0>; /* tshut polarity 0:LOW 1:HIGH */ 2671 - pinctrl-0 = <&tsadc_gpio_func>; 2672 - pinctrl-1 = <&tsadc_shut>; 2673 - pinctrl-names = "gpio", "otpout"; 2671 + pinctrl-0 = <&tsadc_shut_org>; 2672 + pinctrl-1 = <&tsadc_gpio_func>; 2673 + pinctrl-names = "default", "sleep"; 2674 2674 #thermal-sensor-cells = <1>; 2675 2675 status = "disabled"; 2676 2676 };
+2 -2
arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5-genbook.dts
··· 113 113 compatible = "regulator-fixed"; 114 114 regulator-name = "vcc3v3_lcd"; 115 115 enable-active-high; 116 - gpio = <&gpio1 RK_PC4 GPIO_ACTIVE_HIGH>; 116 + gpio = <&gpio0 RK_PC4 GPIO_ACTIVE_HIGH>; 117 117 pinctrl-names = "default"; 118 118 pinctrl-0 = <&lcdpwr_en>; 119 119 vin-supply = <&vcc3v3_sys>; ··· 241 241 &pinctrl { 242 242 lcd { 243 243 lcdpwr_en: lcdpwr-en { 244 - rockchip,pins = <1 RK_PC4 RK_FUNC_GPIO &pcfg_pull_down>; 244 + rockchip,pins = <0 RK_PC4 RK_FUNC_GPIO &pcfg_pull_down>; 245 245 }; 246 246 247 247 bl_en: bl-en {
-1
arch/arm64/boot/dts/rockchip/rk3588-extra.dtsi
··· 213 213 interrupt-names = "sys", "pmc", "msg", "legacy", "err", 214 214 "dma0", "dma1", "dma2", "dma3"; 215 215 max-link-speed = <3>; 216 - iommus = <&mmu600_pcie 0x0000>; 217 216 num-lanes = <4>; 218 217 phys = <&pcie30phy>; 219 218 phy-names = "pcie-phy";
+4
arch/arm64/boot/dts/rockchip/rk3588-rock-5b-pcie-ep.dtso
··· 23 23 vpcie3v3-supply = <&vcc3v3_pcie30>; 24 24 status = "okay"; 25 25 }; 26 + 27 + &mmu600_pcie { 28 + status = "disabled"; 29 + };
+2
arch/arm64/configs/defconfig
··· 1551 1551 CONFIG_SL28CPLD_INTC=y 1552 1552 CONFIG_QCOM_PDC=y 1553 1553 CONFIG_QCOM_MPM=y 1554 + CONFIG_TI_SCI_INTR_IRQCHIP=y 1555 + CONFIG_TI_SCI_INTA_IRQCHIP=y 1554 1556 CONFIG_RESET_GPIO=m 1555 1557 CONFIG_RESET_IMX7=y 1556 1558 CONFIG_RESET_QCOM_AOSS=y
+1 -1
arch/riscv/include/asm/cmpxchg.h
··· 231 231 __arch_cmpxchg(".w", ".w" sc_sfx, ".w" cas_sfx, \ 232 232 sc_prepend, sc_append, \ 233 233 cas_prepend, cas_append, \ 234 - __ret, __ptr, (long), __old, __new); \ 234 + __ret, __ptr, (long)(int)(long), __old, __new); \ 235 235 break; \ 236 236 case 8: \ 237 237 __arch_cmpxchg(".d", ".d" sc_sfx, ".d" cas_sfx, \
+1 -1
arch/riscv/include/asm/futex.h
··· 93 93 _ASM_EXTABLE_UACCESS_ERR(1b, 3b, %[r]) \ 94 94 _ASM_EXTABLE_UACCESS_ERR(2b, 3b, %[r]) \ 95 95 : [r] "+r" (ret), [v] "=&r" (val), [u] "+m" (*uaddr), [t] "=&r" (tmp) 96 - : [ov] "Jr" (oldval), [nv] "Jr" (newval) 96 + : [ov] "Jr" ((long)(int)oldval), [nv] "Jr" (newval) 97 97 : "memory"); 98 98 __disable_user_access(); 99 99
+6 -6
arch/riscv/kernel/cacheinfo.c
··· 108 108 if (!np) 109 109 return -ENOENT; 110 110 111 - if (of_property_read_bool(np, "cache-size")) 111 + if (of_property_present(np, "cache-size")) 112 112 ci_leaf_init(this_leaf++, CACHE_TYPE_UNIFIED, level); 113 - if (of_property_read_bool(np, "i-cache-size")) 113 + if (of_property_present(np, "i-cache-size")) 114 114 ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level); 115 - if (of_property_read_bool(np, "d-cache-size")) 115 + if (of_property_present(np, "d-cache-size")) 116 116 ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level); 117 117 118 118 prev = np; ··· 125 125 break; 126 126 if (level <= levels) 127 127 break; 128 - if (of_property_read_bool(np, "cache-size")) 128 + if (of_property_present(np, "cache-size")) 129 129 ci_leaf_init(this_leaf++, CACHE_TYPE_UNIFIED, level); 130 - if (of_property_read_bool(np, "i-cache-size")) 130 + if (of_property_present(np, "i-cache-size")) 131 131 ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level); 132 - if (of_property_read_bool(np, "d-cache-size")) 132 + if (of_property_present(np, "d-cache-size")) 133 133 ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level); 134 134 levels = level; 135 135 }
+1 -1
arch/riscv/kernel/cpufeature.c
··· 479 479 if (bit < RISCV_ISA_EXT_BASE) 480 480 *this_hwcap |= isa2hwcap[bit]; 481 481 } 482 - } while (loop && memcmp(prev_resolved_isa, resolved_isa, sizeof(prev_resolved_isa))); 482 + } while (loop && !bitmap_equal(prev_resolved_isa, resolved_isa, RISCV_ISA_EXT_MAX)); 483 483 } 484 484 485 485 static void __init match_isa_ext(const char *name, const char *name_end, unsigned long *bitmap)
+1 -1
arch/riscv/kernel/setup.c
··· 322 322 323 323 riscv_init_cbo_blocksizes(); 324 324 riscv_fill_hwcap(); 325 - init_rt_signal_env(); 326 325 apply_boot_alternatives(); 326 + init_rt_signal_env(); 327 327 328 328 if (IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM) && 329 329 riscv_isa_extension_available(NULL, ZICBOM))
-6
arch/riscv/kernel/signal.c
··· 215 215 if (cal_all || riscv_v_vstate_query(task_pt_regs(current))) 216 216 total_context_size += riscv_v_sc_size; 217 217 } 218 - /* 219 - * Preserved a __riscv_ctx_hdr for END signal context header if an 220 - * extension uses __riscv_extra_ext_header 221 - */ 222 - if (total_context_size) 223 - total_context_size += sizeof(struct __riscv_ctx_hdr); 224 218 225 219 frame_size += total_context_size; 226 220
+1 -1
arch/s390/boot/startup.c
··· 86 86 : [reg1] "=&d" (reg1), 87 87 [reg2] "=&a" (reg2), 88 88 [rc] "+&d" (rc), 89 - [tmp] "=&d" (tmp), 89 + [tmp] "+&d" (tmp), 90 90 "+Q" (get_lowcore()->program_new_psw), 91 91 "=Q" (old) 92 92 : [psw_old] "a" (&old),
+2
arch/s390/configs/debug_defconfig
··· 469 469 CONFIG_MD=y 470 470 CONFIG_BLK_DEV_MD=y 471 471 # CONFIG_MD_BITMAP_FILE is not set 472 + CONFIG_MD_LINEAR=m 472 473 CONFIG_MD_CLUSTER=m 473 474 CONFIG_BCACHE=m 474 475 CONFIG_BLK_DEV_DM=y ··· 875 874 CONFIG_LATENCYTOP=y 876 875 CONFIG_BOOTTIME_TRACING=y 877 876 CONFIG_FUNCTION_GRAPH_RETVAL=y 877 + CONFIG_FUNCTION_GRAPH_RETADDR=y 878 878 CONFIG_FPROBE=y 879 879 CONFIG_FUNCTION_PROFILER=y 880 880 CONFIG_STACK_TRACER=y
+2
arch/s390/configs/defconfig
··· 459 459 CONFIG_MD=y 460 460 CONFIG_BLK_DEV_MD=y 461 461 # CONFIG_MD_BITMAP_FILE is not set 462 + CONFIG_MD_LINEAR=m 462 463 CONFIG_MD_CLUSTER=m 463 464 CONFIG_BCACHE=m 464 465 CONFIG_BLK_DEV_DM=y ··· 826 825 CONFIG_LATENCYTOP=y 827 826 CONFIG_BOOTTIME_TRACING=y 828 827 CONFIG_FUNCTION_GRAPH_RETVAL=y 828 + CONFIG_FUNCTION_GRAPH_RETADDR=y 829 829 CONFIG_FPROBE=y 830 830 CONFIG_FUNCTION_PROFILER=y 831 831 CONFIG_STACK_TRACER=y
+3 -1
arch/s390/purgatory/Makefile
··· 8 8 $(obj)/sha256.o: $(srctree)/lib/crypto/sha256.c FORCE 9 9 $(call if_changed_rule,cc_o_c) 10 10 11 - CFLAGS_sha256.o := -D__DISABLE_EXPORTS -D__NO_FORTIFY 11 + CFLAGS_sha256.o := -D__NO_FORTIFY 12 12 13 13 $(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S FORCE 14 14 $(call if_changed_rule,as_o_S) ··· 19 19 KBUILD_CFLAGS += -Os -m64 -msoft-float -fno-common 20 20 KBUILD_CFLAGS += -fno-stack-protector 21 21 KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING 22 + KBUILD_CFLAGS += -D__DISABLE_EXPORTS 22 23 KBUILD_CFLAGS += $(CLANG_FLAGS) 23 24 KBUILD_CFLAGS += $(call cc-option,-fno-PIE) 24 25 KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS)) 26 + KBUILD_AFLAGS += -D__DISABLE_EXPORTS 25 27 26 28 # Since we link purgatory with -r unresolved symbols are not checked, so we 27 29 # also link a purgatory.chk binary without -r to check for unresolved symbols.
+7 -13
arch/x86/events/intel/core.c
··· 397 397 METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_FETCH_LAT, 6), 398 398 METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_MEM_BOUND, 7), 399 399 400 + INTEL_EVENT_CONSTRAINT(0x20, 0xf), 401 + 402 + INTEL_UEVENT_CONSTRAINT(0x012a, 0xf), 403 + INTEL_UEVENT_CONSTRAINT(0x012b, 0xf), 400 404 INTEL_UEVENT_CONSTRAINT(0x0148, 0x4), 401 405 INTEL_UEVENT_CONSTRAINT(0x0175, 0x4), 402 406 403 407 INTEL_EVENT_CONSTRAINT(0x2e, 0x3ff), 404 408 INTEL_EVENT_CONSTRAINT(0x3c, 0x3ff), 405 - /* 406 - * Generally event codes < 0x90 are restricted to counters 0-3. 407 - * The 0x2E and 0x3C are exception, which has no restriction. 408 - */ 409 - INTEL_EVENT_CONSTRAINT_RANGE(0x01, 0x8f, 0xf), 410 409 411 - INTEL_UEVENT_CONSTRAINT(0x01a3, 0xf), 412 - INTEL_UEVENT_CONSTRAINT(0x02a3, 0xf), 413 410 INTEL_UEVENT_CONSTRAINT(0x08a3, 0x4), 414 411 INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4), 415 412 INTEL_UEVENT_CONSTRAINT(0x04a4, 0x1), 416 413 INTEL_UEVENT_CONSTRAINT(0x08a4, 0x1), 417 414 INTEL_UEVENT_CONSTRAINT(0x10a4, 0x1), 418 415 INTEL_UEVENT_CONSTRAINT(0x01b1, 0x8), 416 + INTEL_UEVENT_CONSTRAINT(0x01cd, 0x3fc), 419 417 INTEL_UEVENT_CONSTRAINT(0x02cd, 0x3), 420 - INTEL_EVENT_CONSTRAINT(0xce, 0x1), 421 418 422 419 INTEL_EVENT_CONSTRAINT_RANGE(0xd0, 0xdf, 0xf), 423 - /* 424 - * Generally event codes >= 0x90 are likely to have no restrictions. 425 - * The exception are defined as above. 426 - */ 427 - INTEL_EVENT_CONSTRAINT_RANGE(0x90, 0xfe, 0x3ff), 420 + 421 + INTEL_UEVENT_CONSTRAINT(0x00e0, 0xf), 428 422 429 423 EVENT_CONSTRAINT_END 430 424 };
+1 -1
arch/x86/events/intel/ds.c
··· 1199 1199 INTEL_FLAGS_UEVENT_CONSTRAINT(0x100, 0x100000000ULL), /* INST_RETIRED.PREC_DIST */ 1200 1200 INTEL_FLAGS_UEVENT_CONSTRAINT(0x0400, 0x800000000ULL), 1201 1201 1202 - INTEL_HYBRID_LDLAT_CONSTRAINT(0x1cd, 0x3ff), 1202 + INTEL_HYBRID_LDLAT_CONSTRAINT(0x1cd, 0x3fc), 1203 1203 INTEL_HYBRID_STLAT_CONSTRAINT(0x2cd, 0x3), 1204 1204 INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */ 1205 1205 INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_STORES */
+1
arch/x86/kernel/cpu/cpuid-deps.c
··· 45 45 { X86_FEATURE_AES, X86_FEATURE_XMM2 }, 46 46 { X86_FEATURE_SHA_NI, X86_FEATURE_XMM2 }, 47 47 { X86_FEATURE_GFNI, X86_FEATURE_XMM2 }, 48 + { X86_FEATURE_AVX_VNNI, X86_FEATURE_AVX }, 48 49 { X86_FEATURE_FMA, X86_FEATURE_AVX }, 49 50 { X86_FEATURE_VAES, X86_FEATURE_AVX }, 50 51 { X86_FEATURE_VPCLMULQDQ, X86_FEATURE_AVX },
+5 -2
block/blk-merge.c
··· 270 270 const struct bio_vec *bv, unsigned *nsegs, unsigned *bytes, 271 271 unsigned max_segs, unsigned max_bytes) 272 272 { 273 - unsigned max_len = min(max_bytes, UINT_MAX) - *bytes; 273 + unsigned max_len = max_bytes - *bytes; 274 274 unsigned len = min(bv->bv_len, max_len); 275 275 unsigned total_len = 0; 276 276 unsigned seg_size = 0; ··· 556 556 { 557 557 struct req_iterator iter = { 558 558 .bio = rq->bio, 559 - .iter = rq->bio->bi_iter, 560 559 }; 561 560 struct phys_vec vec; 562 561 int nsegs = 0; 562 + 563 + /* the internal flush request may not have bio attached */ 564 + if (iter.bio) 565 + iter.iter = iter.bio->bi_iter; 563 566 564 567 while (blk_map_iter_next(rq, &iter, &vec)) { 565 568 *last_sg = blk_next_sg(last_sg, sglist);
+1
drivers/accel/amdxdna/amdxdna_mailbox.c
··· 8 8 #include <linux/bitfield.h> 9 9 #include <linux/interrupt.h> 10 10 #include <linux/iopoll.h> 11 + #include <linux/slab.h> 11 12 #include <linux/xarray.h> 12 13 13 14 #define CREATE_TRACE_POINTS
+7 -1
drivers/acpi/platform_profile.c
··· 417 417 418 418 static umode_t profile_class_is_visible(struct kobject *kobj, struct attribute *attr, int idx) 419 419 { 420 - if (!class_find_device(&platform_profile_class, NULL, NULL, profile_class_registered)) 420 + struct device *dev; 421 + 422 + dev = class_find_device(&platform_profile_class, NULL, NULL, profile_class_registered); 423 + if (!dev) 421 424 return 0; 425 + 426 + put_device(dev); 427 + 422 428 return attr->mode; 423 429 } 424 430
-2
drivers/ata/libahci_platform.c
··· 651 651 * If no sub-node was found, keep this for device tree 652 652 * compatibility 653 653 */ 654 - hpriv->mask_port_map |= BIT(0); 655 - 656 654 rc = ahci_platform_get_phy(hpriv, 0, dev, dev->of_node); 657 655 if (rc) 658 656 goto err_out;
+4 -2
drivers/bluetooth/btusb.c
··· 2102 2102 return submit_or_queue_tx_urb(hdev, urb); 2103 2103 2104 2104 case HCI_SCODATA_PKT: 2105 - if (hci_conn_num(hdev, SCO_LINK) < 1) 2105 + if (!hci_dev_test_flag(hdev, HCI_USER_CHANNEL) && 2106 + hci_conn_num(hdev, SCO_LINK) < 1) 2106 2107 return -ENODEV; 2107 2108 2108 2109 urb = alloc_isoc_urb(hdev, skb); ··· 2577 2576 return submit_or_queue_tx_urb(hdev, urb); 2578 2577 2579 2578 case HCI_SCODATA_PKT: 2580 - if (hci_conn_num(hdev, SCO_LINK) < 1) 2579 + if (!hci_dev_test_flag(hdev, HCI_USER_CHANNEL) && 2580 + hci_conn_num(hdev, SCO_LINK) < 1) 2581 2581 return -ENODEV; 2582 2582 2583 2583 urb = alloc_isoc_urb(hdev, skb);
+14 -1
drivers/clocksource/jcore-pit.c
··· 114 114 pit->periodic_delta = DIV_ROUND_CLOSEST(NSEC_PER_SEC, HZ * buspd); 115 115 116 116 clockevents_config_and_register(&pit->ced, freq, 1, ULONG_MAX); 117 + enable_percpu_irq(pit->ced.irq, IRQ_TYPE_NONE); 118 + 119 + return 0; 120 + } 121 + 122 + static int jcore_pit_local_teardown(unsigned cpu) 123 + { 124 + struct jcore_pit *pit = this_cpu_ptr(jcore_pit_percpu); 125 + 126 + pr_info("Local J-Core PIT teardown on cpu %u\n", cpu); 127 + 128 + disable_percpu_irq(pit->ced.irq); 117 129 118 130 return 0; 119 131 } ··· 180 168 return -ENOMEM; 181 169 } 182 170 171 + irq_set_percpu_devid(pit_irq); 183 172 err = request_percpu_irq(pit_irq, jcore_timer_interrupt, 184 173 "jcore_pit", jcore_pit_percpu); 185 174 if (err) { ··· 250 237 251 238 cpuhp_setup_state(CPUHP_AP_JCORE_TIMER_STARTING, 252 239 "clockevents/jcore:starting", 253 - jcore_pit_local_init, NULL); 240 + jcore_pit_local_init, jcore_pit_local_teardown); 254 241 255 242 return 0; 256 243 }
+2 -2
drivers/edac/qcom_edac.c
··· 95 95 * Configure interrupt enable registers such that Tag, Data RAM related 96 96 * interrupts are propagated to interrupt controller for servicing 97 97 */ 98 - ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable, 98 + ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_0_enable, 99 99 TRP0_INTERRUPT_ENABLE, 100 100 TRP0_INTERRUPT_ENABLE); 101 101 if (ret) ··· 113 113 if (ret) 114 114 return ret; 115 115 116 - ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable, 116 + ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_0_enable, 117 117 DRP0_INTERRUPT_ENABLE, 118 118 DRP0_INTERRUPT_ENABLE); 119 119 if (ret)
+2 -2
drivers/firmware/arm_scmi/vendors/imx/imx-sm-misc.c
··· 254 254 if (num > max_num) 255 255 return -EINVAL; 256 256 257 - ret = ph->xops->xfer_get_init(ph, SCMI_IMX_MISC_CTRL_SET, sizeof(*in), 258 - 0, &t); 257 + ret = ph->xops->xfer_get_init(ph, SCMI_IMX_MISC_CTRL_SET, 258 + sizeof(*in) + num * sizeof(__le32), 0, &t); 259 259 if (ret) 260 260 return ret; 261 261
+6 -18
drivers/firmware/cirrus/cs_dsp.c
··· 1609 1609 goto out_fw; 1610 1610 } 1611 1611 1612 - ret = regmap_raw_write_async(regmap, reg, buf->buf, 1613 - le32_to_cpu(region->len)); 1612 + ret = regmap_raw_write(regmap, reg, buf->buf, 1613 + le32_to_cpu(region->len)); 1614 1614 if (ret != 0) { 1615 1615 cs_dsp_err(dsp, 1616 1616 "%s.%d: Failed to write %d bytes at %d in %s: %d\n", ··· 1625 1625 regions++; 1626 1626 } 1627 1627 1628 - ret = regmap_async_complete(regmap); 1629 - if (ret != 0) { 1630 - cs_dsp_err(dsp, "Failed to complete async write: %d\n", ret); 1631 - goto out_fw; 1632 - } 1633 - 1634 1628 if (pos > firmware->size) 1635 1629 cs_dsp_warn(dsp, "%s.%d: %zu bytes at end of file\n", 1636 1630 file, regions, pos - firmware->size); ··· 1632 1638 cs_dsp_debugfs_save_wmfwname(dsp, file); 1633 1639 1634 1640 out_fw: 1635 - regmap_async_complete(regmap); 1636 1641 cs_dsp_buf_free(&buf_list); 1637 1642 1638 1643 if (ret == -EOVERFLOW) ··· 2319 2326 cs_dsp_dbg(dsp, "%s.%d: Writing %d bytes at %x\n", 2320 2327 file, blocks, le32_to_cpu(blk->len), 2321 2328 reg); 2322 - ret = regmap_raw_write_async(regmap, reg, buf->buf, 2329 + ret = regmap_raw_write(regmap, reg, buf->buf, 2323 2330 le32_to_cpu(blk->len)); 2324 2331 if (ret != 0) { 2325 2332 cs_dsp_err(dsp, 2326 2333 "%s.%d: Failed to write to %x in %s: %d\n", ··· 2332 2339 blocks++; 2333 2340 } 2334 2341 2335 - ret = regmap_async_complete(regmap); 2336 - if (ret != 0) 2337 - cs_dsp_err(dsp, "Failed to complete async write: %d\n", ret); 2338 - 2339 2342 if (pos > firmware->size) 2340 2343 cs_dsp_warn(dsp, "%s.%d: %zu bytes at end of file\n", 2341 2344 file, blocks, pos - firmware->size); ··· 2339 2350 cs_dsp_debugfs_save_binname(dsp, file); 2340 2351 2341 2352 out_fw: 2342 - regmap_async_complete(regmap); 2343 2353 cs_dsp_buf_free(&buf_list); 2344 2354 2345 2355 if (ret == -EOVERFLOW) ··· 2549 2561 { 2550 2562 int ret; 2551 2563 2552 - ret = regmap_update_bits_async(dsp->regmap, dsp->base + ADSP2_CONTROL, 2553 - ADSP2_SYS_ENA, ADSP2_SYS_ENA); 2564 + ret = regmap_update_bits(dsp->regmap, dsp->base + ADSP2_CONTROL, 2565 + ADSP2_SYS_ENA, ADSP2_SYS_ENA); 2554 2566 if (ret != 0) 2555 2567 return ret; 2556 2568
+1
drivers/firmware/imx/Kconfig
··· 25 25 26 26 config IMX_SCMI_MISC_DRV 27 27 tristate "IMX SCMI MISC Protocol driver" 28 + depends on ARCH_MXC || COMPILE_TEST 28 29 default y if ARCH_MXC 29 30 help 30 31 The System Controller Management Interface firmware (SCMI FW) is
+4
drivers/gpio/gpio-vf610.c
··· 36 36 struct clk *clk_port; 37 37 struct clk *clk_gpio; 38 38 int irq; 39 + spinlock_t lock; /* protect gpio direction registers */ 39 40 }; 40 41 41 42 #define GPIO_PDOR 0x00 ··· 125 124 u32 val; 126 125 127 126 if (port->sdata->have_paddr) { 127 + guard(spinlock_irqsave)(&port->lock); 128 128 val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR); 129 129 val &= ~mask; 130 130 vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR); ··· 144 142 vf610_gpio_set(chip, gpio, value); 145 143 146 144 if (port->sdata->have_paddr) { 145 + guard(spinlock_irqsave)(&port->lock); 147 146 val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR); 148 147 val |= mask; 149 148 vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR); ··· 300 297 return -ENOMEM; 301 298 302 299 port->sdata = device_get_match_data(dev); 300 + spin_lock_init(&port->lock); 303 301 304 302 dual_base = port->sdata->have_dual_base; 305 303
+70 -30
drivers/gpio/gpiolib.c
··· 1057 1057 desc->gdev = gdev; 1058 1058 1059 1059 if (gc->get_direction && gpiochip_line_is_valid(gc, desc_index)) { 1060 - assign_bit(FLAG_IS_OUT, 1061 - &desc->flags, !gc->get_direction(gc, desc_index)); 1060 + ret = gc->get_direction(gc, desc_index); 1061 + if (ret < 0) 1062 + /* 1063 + * FIXME: Bail-out here once all GPIO drivers 1064 + * are updated to not return errors in 1065 + * situations that can be considered normal 1066 + * operation. 1067 + */ 1068 + dev_warn(&gdev->dev, 1069 + "%s: get_direction failed: %d\n", 1070 + __func__, ret); 1071 + 1072 + assign_bit(FLAG_IS_OUT, &desc->flags, !ret); 1062 1073 } else { 1063 1074 assign_bit(FLAG_IS_OUT, 1064 1075 &desc->flags, !gc->direction_input); ··· 2739 2728 if (guard.gc->direction_input) { 2740 2729 ret = guard.gc->direction_input(guard.gc, 2741 2730 gpio_chip_hwgpio(desc)); 2742 - } else if (guard.gc->get_direction && 2743 - (guard.gc->get_direction(guard.gc, 2744 - gpio_chip_hwgpio(desc)) != 1)) { 2745 - gpiod_warn(desc, 2746 - "%s: missing direction_input() operation and line is output\n", 2747 - __func__); 2748 - return -EIO; 2731 + } else if (guard.gc->get_direction) { 2732 + ret = guard.gc->get_direction(guard.gc, 2733 + gpio_chip_hwgpio(desc)); 2734 + if (ret < 0) 2735 + return ret; 2736 + 2737 + if (ret != GPIO_LINE_DIRECTION_IN) { 2738 + gpiod_warn(desc, 2739 + "%s: missing direction_input() operation and line is output\n", 2740 + __func__); 2741 + return -EIO; 2742 + } 2749 2743 } 2750 2744 if (ret == 0) { 2751 2745 clear_bit(FLAG_IS_OUT, &desc->flags); ··· 2787 2771 gpio_chip_hwgpio(desc), val); 2788 2772 } else { 2789 2773 /* Check that we are in output mode if we can */ 2790 - if (guard.gc->get_direction && 2791 - guard.gc->get_direction(guard.gc, gpio_chip_hwgpio(desc))) { 2792 - gpiod_warn(desc, 2793 - "%s: missing direction_output() operation\n", 2794 - __func__); 2795 - return -EIO; 2774 + if (guard.gc->get_direction) { 2775 + ret = guard.gc->get_direction(guard.gc, 2776 + 
gpio_chip_hwgpio(desc)); 2777 + if (ret < 0) 2778 + return ret; 2779 + 2780 + if (ret != GPIO_LINE_DIRECTION_OUT) { 2781 + gpiod_warn(desc, 2782 + "%s: missing direction_output() operation\n", 2783 + __func__); 2784 + return -EIO; 2785 + } 2796 2786 } 2797 2787 /* 2798 2788 * If we can't actively set the direction, we are some ··· 3151 3129 static int gpio_chip_get_multiple(struct gpio_chip *gc, 3152 3130 unsigned long *mask, unsigned long *bits) 3153 3131 { 3132 + lockdep_assert_held(&gc->gpiodev->srcu); 3133 + 3154 3134 if (gc->get_multiple) 3155 3135 return gc->get_multiple(gc, mask, bits); 3156 3136 if (gc->get) { ··· 3183 3159 struct gpio_array *array_info, 3184 3160 unsigned long *value_bitmap) 3185 3161 { 3162 + struct gpio_chip *gc; 3186 3163 int ret, i = 0; 3187 3164 3188 3165 /* ··· 3195 3170 array_size <= array_info->size && 3196 3171 (void *)array_info == desc_array + array_info->size) { 3197 3172 if (!can_sleep) 3198 - WARN_ON(array_info->chip->can_sleep); 3173 + WARN_ON(array_info->gdev->can_sleep); 3199 3174 3200 - ret = gpio_chip_get_multiple(array_info->chip, 3201 - array_info->get_mask, 3175 + guard(srcu)(&array_info->gdev->srcu); 3176 + gc = srcu_dereference(array_info->gdev->chip, 3177 + &array_info->gdev->srcu); 3178 + if (!gc) 3179 + return -ENODEV; 3180 + 3181 + ret = gpio_chip_get_multiple(gc, array_info->get_mask, 3202 3182 value_bitmap); 3203 3183 if (ret) 3204 3184 return ret; ··· 3484 3454 static void gpio_chip_set_multiple(struct gpio_chip *gc, 3485 3455 unsigned long *mask, unsigned long *bits) 3486 3456 { 3457 + lockdep_assert_held(&gc->gpiodev->srcu); 3458 + 3487 3459 if (gc->set_multiple) { 3488 3460 gc->set_multiple(gc, mask, bits); 3489 3461 } else { ··· 3503 3471 struct gpio_array *array_info, 3504 3472 unsigned long *value_bitmap) 3505 3473 { 3474 + struct gpio_chip *gc; 3506 3475 int i = 0; 3507 3476 3508 3477 /* ··· 3515 3482 array_size <= array_info->size && 3516 3483 (void *)array_info == desc_array + array_info->size) { 
3517 3484 if (!can_sleep) 3518 - WARN_ON(array_info->chip->can_sleep); 3485 + WARN_ON(array_info->gdev->can_sleep); 3486 + 3487 + guard(srcu)(&array_info->gdev->srcu); 3488 + gc = srcu_dereference(array_info->gdev->chip, 3489 + &array_info->gdev->srcu); 3490 + if (!gc) 3491 + return -ENODEV; 3519 3492 3520 3493 if (!raw && !bitmap_empty(array_info->invert_mask, array_size)) 3521 3494 bitmap_xor(value_bitmap, value_bitmap, 3522 3495 array_info->invert_mask, array_size); 3523 3496 3524 - gpio_chip_set_multiple(array_info->chip, array_info->set_mask, 3525 - value_bitmap); 3497 + gpio_chip_set_multiple(gc, array_info->set_mask, value_bitmap); 3526 3498 3527 3499 i = find_first_zero_bit(array_info->set_mask, array_size); 3528 3500 if (i == array_size) ··· 4789 4751 { 4790 4752 struct gpio_desc *desc; 4791 4753 struct gpio_descs *descs; 4754 + struct gpio_device *gdev; 4792 4755 struct gpio_array *array_info = NULL; 4793 - struct gpio_chip *gc; 4794 4756 int count, bitmap_size; 4757 + unsigned long dflags; 4795 4758 size_t descs_size; 4796 4759 4797 4760 count = gpiod_count(dev, con_id); ··· 4813 4774 4814 4775 descs->desc[descs->ndescs] = desc; 4815 4776 4816 - gc = gpiod_to_chip(desc); 4777 + gdev = gpiod_to_gpio_device(desc); 4817 4778 /* 4818 4779 * If pin hardware number of array member 0 is also 0, select 4819 4780 * its chip as a candidate for fast bitmap processing path. ··· 4821 4782 if (descs->ndescs == 0 && gpio_chip_hwgpio(desc) == 0) { 4822 4783 struct gpio_descs *array; 4823 4784 4824 - bitmap_size = BITS_TO_LONGS(gc->ngpio > count ? 4825 - gc->ngpio : count); 4785 + bitmap_size = BITS_TO_LONGS(gdev->ngpio > count ? 
4786 + gdev->ngpio : count); 4826 4787 4827 4788 array = krealloc(descs, descs_size + 4828 4789 struct_size(array_info, invert_mask, 3 * bitmap_size), ··· 4842 4803 4843 4804 array_info->desc = descs->desc; 4844 4805 array_info->size = count; 4845 - array_info->chip = gc; 4806 + array_info->gdev = gdev; 4846 4807 bitmap_set(array_info->get_mask, descs->ndescs, 4847 4808 count - descs->ndescs); 4848 4809 bitmap_set(array_info->set_mask, descs->ndescs, ··· 4855 4816 continue; 4856 4817 4857 4818 /* Unmark array members which don't belong to the 'fast' chip */ 4858 - if (array_info->chip != gc) { 4819 + if (array_info->gdev != gdev) { 4859 4820 __clear_bit(descs->ndescs, array_info->get_mask); 4860 4821 __clear_bit(descs->ndescs, array_info->set_mask); 4861 4822 } ··· 4878 4839 array_info->set_mask); 4879 4840 } 4880 4841 } else { 4842 + dflags = READ_ONCE(desc->flags); 4881 4843 /* Exclude open drain or open source from fast output */ 4882 - if (gpiochip_line_is_open_drain(gc, descs->ndescs) || 4883 - gpiochip_line_is_open_source(gc, descs->ndescs)) 4844 + if (test_bit(FLAG_OPEN_DRAIN, &dflags) || 4845 + test_bit(FLAG_OPEN_SOURCE, &dflags)) 4884 4846 __clear_bit(descs->ndescs, 4885 4847 array_info->set_mask); 4886 4848 /* Identify 'fast' pins which require invertion */ ··· 4893 4853 if (array_info) 4894 4854 dev_dbg(dev, 4895 4855 "GPIO array info: chip=%s, size=%d, get_mask=%lx, set_mask=%lx, invert_mask=%lx\n", 4896 - array_info->chip->label, array_info->size, 4856 + array_info->gdev->label, array_info->size, 4897 4857 *array_info->get_mask, *array_info->set_mask, 4898 4858 *array_info->invert_mask); 4899 4859 return descs;
+2 -2
drivers/gpio/gpiolib.h
··· 114 114 * 115 115 * @desc: Array of pointers to the GPIO descriptors 116 116 * @size: Number of elements in desc 117 - * @chip: Parent GPIO chip 117 + * @gdev: Parent GPIO device 118 118 * @get_mask: Get mask used in fastpath 119 119 * @set_mask: Set mask used in fastpath 120 120 * @invert_mask: Invert mask used in fastpath ··· 126 126 struct gpio_array { 127 127 struct gpio_desc **desc; 128 128 unsigned int size; 129 - struct gpio_chip *chip; 129 + struct gpio_device *gdev; 130 130 unsigned long *get_mask; 131 131 unsigned long *set_mask; 132 132 unsigned long invert_mask[];
+2 -2
drivers/gpu/drm/i915/display/icl_dsi.c
··· 809 809 /* select data lane width */ 810 810 tmp = intel_de_read(display, 811 811 TRANS_DDI_FUNC_CTL(display, dsi_trans)); 812 - tmp &= ~DDI_PORT_WIDTH_MASK; 813 - tmp |= DDI_PORT_WIDTH(intel_dsi->lane_count); 812 + tmp &= ~TRANS_DDI_PORT_WIDTH_MASK; 813 + tmp |= TRANS_DDI_PORT_WIDTH(intel_dsi->lane_count); 814 814 815 815 /* select input pipe */ 816 816 tmp &= ~TRANS_DDI_EDP_INPUT_MASK;
+3 -5
drivers/gpu/drm/i915/display/intel_ddi.c
··· 658 658 struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 659 659 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 660 660 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 661 - bool is_mst = intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST); 662 661 u32 ctl; 663 662 664 663 if (DISPLAY_VER(dev_priv) >= 11) ··· 677 678 TRANS_DDI_PORT_SYNC_MASTER_SELECT_MASK); 678 679 679 680 if (DISPLAY_VER(dev_priv) >= 12) { 680 - if (!intel_dp_mst_is_master_trans(crtc_state) || 681 - (!is_mst && intel_dp_is_uhbr(crtc_state))) { 681 + if (!intel_dp_mst_is_master_trans(crtc_state)) { 682 682 ctl &= ~(TGL_TRANS_DDI_PORT_MASK | 683 683 TRANS_DDI_MODE_SELECT_MASK); 684 684 } ··· 3132 3134 intel_dp_set_power(intel_dp, DP_SET_POWER_D3); 3133 3135 3134 3136 if (DISPLAY_VER(dev_priv) >= 12) { 3135 - if (is_mst) { 3137 + if (is_mst || intel_dp_is_uhbr(old_crtc_state)) { 3136 3138 enum transcoder cpu_transcoder = old_crtc_state->cpu_transcoder; 3137 3139 3138 3140 intel_de_rmw(dev_priv, ··· 3485 3487 intel_de_rmw(dev_priv, XELPDP_PORT_BUF_CTL1(dev_priv, port), 3486 3488 XELPDP_PORT_WIDTH_MASK | XELPDP_PORT_REVERSAL, port_buf); 3487 3489 3488 - buf_ctl |= DDI_PORT_WIDTH(lane_count); 3490 + buf_ctl |= DDI_PORT_WIDTH(crtc_state->lane_count); 3489 3491 3490 3492 if (DISPLAY_VER(dev_priv) >= 20) 3491 3493 buf_ctl |= XE2LPD_DDI_BUF_D2D_LINK_ENABLE;
+18
drivers/gpu/drm/i915/display/intel_display.c
··· 6628 6628 static int intel_joiner_add_affected_crtcs(struct intel_atomic_state *state) 6629 6629 { 6630 6630 struct drm_i915_private *i915 = to_i915(state->base.dev); 6631 + const struct intel_plane_state *plane_state; 6631 6632 struct intel_crtc_state *crtc_state; 6633 + struct intel_plane *plane; 6632 6634 struct intel_crtc *crtc; 6633 6635 u8 affected_pipes = 0; 6634 6636 u8 modeset_pipes = 0; 6635 6637 int i; 6636 6638 6639 + /* 6640 + * Any plane which is in use by the joiner needs its crtc. 6641 + * Pull those in first as this will not have happened yet 6642 + * if the plane remains disabled according to uapi. 6643 + */ 6644 + for_each_new_intel_plane_in_state(state, plane, plane_state, i) { 6645 + crtc = to_intel_crtc(plane_state->hw.crtc); 6646 + if (!crtc) 6647 + continue; 6648 + 6649 + crtc_state = intel_atomic_get_crtc_state(&state->base, crtc); 6650 + if (IS_ERR(crtc_state)) 6651 + return PTR_ERR(crtc_state); 6652 + } 6653 + 6654 + /* Now pull in all joined crtcs */ 6637 6655 for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) { 6638 6656 affected_pipes |= crtc_state->joiner_pipes; 6639 6657 if (intel_crtc_needs_modeset(crtc_state))
+2 -2
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 3449 3449 */ 3450 3450 ret = deregister_context(ce, ce->guc_id.id); 3451 3451 if (ret) { 3452 - spin_lock(&ce->guc_state.lock); 3452 + spin_lock_irqsave(&ce->guc_state.lock, flags); 3453 3453 set_context_registered(ce); 3454 3454 clr_context_destroyed(ce); 3455 - spin_unlock(&ce->guc_state.lock); 3455 + spin_unlock_irqrestore(&ce->guc_state.lock, flags); 3456 3456 /* 3457 3457 * As gt-pm is awake at function entry, intel_wakeref_put_async merely decrements 3458 3458 * the wakeref immediately but per function spec usage call this after unlock.
+1 -1
drivers/gpu/drm/i915/i915_reg.h
··· 3633 3633 #define DDI_BUF_IS_IDLE (1 << 7) 3634 3634 #define DDI_BUF_CTL_TC_PHY_OWNERSHIP REG_BIT(6) 3635 3635 #define DDI_A_4_LANES (1 << 4) 3636 - #define DDI_PORT_WIDTH(width) (((width) - 1) << 1) 3636 + #define DDI_PORT_WIDTH(width) (((width) == 3 ? 4 : ((width) - 1)) << 1) 3637 3637 #define DDI_PORT_WIDTH_MASK (7 << 1) 3638 3638 #define DDI_PORT_WIDTH_SHIFT 1 3639 3639 #define DDI_INIT_DISPLAY_DETECTED (1 << 0)
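The one-line `i915_reg.h` change alters how `DDI_PORT_WIDTH(width)` encodes a 3-lane link: instead of the uniform `(width - 1)` encoding, a width of 3 now programs the value 4 into the field, while widths 1, 2, and 4 keep the old encoding. A sketch of the new encoding as a plain function — the helper name is made up; only the macro body comes from the diff:

```c
#include <assert.h>

/* Hypothetical helper mirroring the fixed DDI_PORT_WIDTH(width) macro:
 * widths 1, 2 and 4 still encode as (width - 1), but a 3-lane link now
 * encodes as 4 before the 1-bit field shift. */
static unsigned int ddi_port_width(unsigned int width)
{
	return ((width == 3 ? 4 : width - 1) << 1);
}
```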
+4 -4
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
··· 813 813 } 814 814 815 815 ver = gmu_read(gmu, REG_A6XX_GMU_CORE_FW_VERSION); 816 - DRM_INFO("Loaded GMU firmware v%u.%u.%u\n", 817 - FIELD_GET(A6XX_GMU_CORE_FW_VERSION_MAJOR__MASK, ver), 818 - FIELD_GET(A6XX_GMU_CORE_FW_VERSION_MINOR__MASK, ver), 819 - FIELD_GET(A6XX_GMU_CORE_FW_VERSION_STEP__MASK, ver)); 816 + DRM_INFO_ONCE("Loaded GMU firmware v%u.%u.%u\n", 817 + FIELD_GET(A6XX_GMU_CORE_FW_VERSION_MAJOR__MASK, ver), 818 + FIELD_GET(A6XX_GMU_CORE_FW_VERSION_MINOR__MASK, ver), 819 + FIELD_GET(A6XX_GMU_CORE_FW_VERSION_STEP__MASK, ver)); 820 820 821 821 return 0; 822 822 }
+1 -1
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
··· 297 297 { 298 298 .name = "wb_2", .id = WB_2, 299 299 .base = 0x65000, .len = 0x2c8, 300 - .features = WB_SDM845_MASK, 300 + .features = WB_SM8250_MASK, 301 301 .format_list = wb2_formats_rgb, 302 302 .num_formats = ARRAY_SIZE(wb2_formats_rgb), 303 303 .clk_ctrl = DPU_CLK_CTRL_WB2,
+1 -1
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
··· 304 304 { 305 305 .name = "wb_2", .id = WB_2, 306 306 .base = 0x65000, .len = 0x2c8, 307 - .features = WB_SDM845_MASK, 307 + .features = WB_SM8250_MASK, 308 308 .format_list = wb2_formats_rgb, 309 309 .num_formats = ARRAY_SIZE(wb2_formats_rgb), 310 310 .clk_ctrl = DPU_CLK_CTRL_WB2,
-2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_3_sm6150.h
··· 116 116 .sblk = &sdm845_lm_sblk, 117 117 .pingpong = PINGPONG_0, 118 118 .dspp = DSPP_0, 119 - .lm_pair = LM_1, 120 119 }, { 121 120 .name = "lm_1", .id = LM_1, 122 121 .base = 0x45000, .len = 0x320, 123 122 .features = MIXER_QCM2290_MASK, 124 123 .sblk = &sdm845_lm_sblk, 125 124 .pingpong = PINGPONG_1, 126 - .lm_pair = LM_0, 127 125 }, { 128 126 .name = "lm_2", .id = LM_2, 129 127 .base = 0x46000, .len = 0x320,
+1 -1
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_4_sm6125.h
··· 144 144 { 145 145 .name = "wb_2", .id = WB_2, 146 146 .base = 0x65000, .len = 0x2c8, 147 - .features = WB_SDM845_MASK, 147 + .features = WB_SM8250_MASK, 148 148 .format_list = wb2_formats_rgb, 149 149 .num_formats = ARRAY_SIZE(wb2_formats_rgb), 150 150 .clk_ctrl = DPU_CLK_CTRL_WB2,
-2
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
··· 1228 1228 done: 1229 1229 kfree(states); 1230 1230 return ret; 1231 - 1232 - return 0; 1233 1231 } 1234 1232 1235 1233 static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
+3
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
··· 2281 2281 } 2282 2282 } 2283 2283 2284 + if (phys_enc->hw_pp && phys_enc->hw_pp->ops.setup_dither) 2285 + phys_enc->hw_pp->ops.setup_dither(phys_enc->hw_pp, NULL); 2286 + 2284 2287 /* reset the merge 3D HW block */ 2285 2288 if (phys_enc->hw_pp && phys_enc->hw_pp->merge_3d) { 2286 2289 phys_enc->hw_pp->merge_3d->ops.setup_3d_mode(phys_enc->hw_pp->merge_3d,
+2 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
··· 52 52 u32 slice_last_group_size; 53 53 u32 det_thresh_flatness; 54 54 bool is_cmd_mode = !(mode & DSC_MODE_VIDEO); 55 + bool input_10_bits = dsc->bits_per_component == 10; 55 56 56 57 DPU_REG_WRITE(c, DSC_COMMON_MODE, mode); 57 58 ··· 69 68 data |= (dsc->line_buf_depth << 3); 70 69 data |= (dsc->simple_422 << 2); 71 70 data |= (dsc->convert_rgb << 1); 72 - data |= dsc->bits_per_component; 71 + data |= input_10_bits; 73 72 74 73 DPU_REG_WRITE(c, DSC_ENC, data); 75 74
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
··· 272 272 273 273 if (cap & BIT(DPU_MDP_VSYNC_SEL)) 274 274 ops->setup_vsync_source = dpu_hw_setup_vsync_sel; 275 - else 275 + else if (!(cap & BIT(DPU_MDP_PERIPH_0_REMOVED))) 276 276 ops->setup_vsync_source = dpu_hw_setup_wd_timer; 277 277 278 278 ops->get_safe_status = dpu_hw_get_safe_status;
+3 -4
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
··· 1164 1164 unsigned int num_planes) 1165 1165 { 1166 1166 unsigned int i; 1167 - int ret; 1168 1167 1169 1168 for (i = 0; i < num_planes; i++) { 1170 1169 struct drm_plane_state *plane_state = states[i]; ··· 1172 1173 !plane_state->visible) 1173 1174 continue; 1174 1175 1175 - ret = dpu_plane_virtual_assign_resources(crtc, global_state, 1176 + int ret = dpu_plane_virtual_assign_resources(crtc, global_state, 1176 1177 state, plane_state); 1177 1178 if (ret) 1178 - break; 1179 + return ret; 1179 1180 } 1180 1181 1181 - return ret; 1182 + return 0; 1182 1183 } 1183 1184 1184 1185 static void dpu_plane_flush_csc(struct dpu_plane *pdpu, struct dpu_sw_pipe *pipe)
+6 -5
drivers/gpu/drm/msm/dp/dp_display.c
··· 930 930 return -EINVAL; 931 931 } 932 932 933 - if (mode->clock > DP_MAX_PIXEL_CLK_KHZ) 934 - return MODE_CLOCK_HIGH; 935 - 936 933 msm_dp_display = container_of(dp, struct msm_dp_display_private, msm_dp_display); 937 934 link_info = &msm_dp_display->panel->link_info; 938 935 939 - if (drm_mode_is_420_only(&dp->connector->display_info, mode) && 940 - msm_dp_display->panel->vsc_sdp_supported) 936 + if ((drm_mode_is_420_only(&dp->connector->display_info, mode) && 937 + msm_dp_display->panel->vsc_sdp_supported) || 938 + msm_dp_wide_bus_available(dp)) 941 939 mode_pclk_khz /= 2; 940 + 941 + if (mode_pclk_khz > DP_MAX_PIXEL_CLK_KHZ) 942 + return MODE_CLOCK_HIGH; 942 943 943 944 mode_bpp = dp->connector->display_info.bpc * num_components; 944 945 if (!mode_bpp)
+4 -1
drivers/gpu/drm/msm/dp/dp_drm.c
··· 257 257 return -EINVAL; 258 258 } 259 259 260 - if (mode->clock > DP_MAX_PIXEL_CLK_KHZ) 260 + if (msm_dp_wide_bus_available(dp)) 261 + mode_pclk_khz /= 2; 262 + 263 + if (mode_pclk_khz > DP_MAX_PIXEL_CLK_KHZ) 261 264 return MODE_CLOCK_HIGH; 262 265 263 266 /*
+36 -17
drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
··· 83 83 /* protects REG_DSI_7nm_PHY_CMN_CLK_CFG0 register */ 84 84 spinlock_t postdiv_lock; 85 85 86 + /* protects REG_DSI_7nm_PHY_CMN_CLK_CFG1 register */ 87 + spinlock_t pclk_mux_lock; 88 + 86 89 struct pll_7nm_cached_state cached_state; 87 90 88 91 struct dsi_pll_7nm *slave; ··· 375 372 ndelay(250); 376 373 } 377 374 378 - static void dsi_pll_disable_global_clk(struct dsi_pll_7nm *pll) 375 + static void dsi_pll_cmn_clk_cfg0_write(struct dsi_pll_7nm *pll, u32 val) 379 376 { 377 + unsigned long flags; 378 + 379 + spin_lock_irqsave(&pll->postdiv_lock, flags); 380 + writel(val, pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG0); 381 + spin_unlock_irqrestore(&pll->postdiv_lock, flags); 382 + } 383 + 384 + static void dsi_pll_cmn_clk_cfg1_update(struct dsi_pll_7nm *pll, u32 mask, 385 + u32 val) 386 + { 387 + unsigned long flags; 380 388 u32 data; 381 389 390 + spin_lock_irqsave(&pll->pclk_mux_lock, flags); 382 391 data = readl(pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1); 383 - writel(data & ~BIT(5), pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1); 392 + data &= ~mask; 393 + data |= val & mask; 394 + 395 + writel(data, pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1); 396 + spin_unlock_irqrestore(&pll->pclk_mux_lock, flags); 397 + } 398 + 399 + static void dsi_pll_disable_global_clk(struct dsi_pll_7nm *pll) 400 + { 401 + dsi_pll_cmn_clk_cfg1_update(pll, DSI_7nm_PHY_CMN_CLK_CFG1_CLK_EN, 0); 384 402 } 385 403 386 404 static void dsi_pll_enable_global_clk(struct dsi_pll_7nm *pll) 387 405 { 388 - u32 data; 406 + u32 cfg_1 = DSI_7nm_PHY_CMN_CLK_CFG1_CLK_EN | DSI_7nm_PHY_CMN_CLK_CFG1_CLK_EN_SEL; 389 407 390 408 writel(0x04, pll->phy->base + REG_DSI_7nm_PHY_CMN_CTRL_3); 391 - 392 - data = readl(pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1); 393 - writel(data | BIT(5) | BIT(4), pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1); 409 + dsi_pll_cmn_clk_cfg1_update(pll, cfg_1, cfg_1); 394 410 } 395 411 396 412 static void dsi_pll_phy_dig_reset(struct dsi_pll_7nm *pll) ··· 587 565 { 
588 566 struct dsi_pll_7nm *pll_7nm = to_pll_7nm(phy->vco_hw); 589 567 struct pll_7nm_cached_state *cached = &pll_7nm->cached_state; 590 - void __iomem *phy_base = pll_7nm->phy->base; 591 568 u32 val; 592 569 int ret; 593 570 ··· 595 574 val |= cached->pll_out_div; 596 575 writel(val, pll_7nm->phy->pll_base + REG_DSI_7nm_PHY_PLL_PLL_OUTDIV_RATE); 597 576 598 - writel(cached->bit_clk_div | (cached->pix_clk_div << 4), 599 - phy_base + REG_DSI_7nm_PHY_CMN_CLK_CFG0); 600 - 601 - val = readl(phy_base + REG_DSI_7nm_PHY_CMN_CLK_CFG1); 602 - val &= ~0x3; 603 - val |= cached->pll_mux; 604 - writel(val, phy_base + REG_DSI_7nm_PHY_CMN_CLK_CFG1); 577 + dsi_pll_cmn_clk_cfg0_write(pll_7nm, 578 + DSI_7nm_PHY_CMN_CLK_CFG0_DIV_CTRL_3_0(cached->bit_clk_div) | 579 + DSI_7nm_PHY_CMN_CLK_CFG0_DIV_CTRL_7_4(cached->pix_clk_div)); 580 + dsi_pll_cmn_clk_cfg1_update(pll_7nm, 0x3, cached->pll_mux); 605 581 606 582 ret = dsi_pll_7nm_vco_set_rate(phy->vco_hw, 607 583 pll_7nm->vco_current_rate, ··· 617 599 static int dsi_7nm_set_usecase(struct msm_dsi_phy *phy) 618 600 { 619 601 struct dsi_pll_7nm *pll_7nm = to_pll_7nm(phy->vco_hw); 620 - void __iomem *base = phy->base; 621 602 u32 data = 0x0; /* internal PLL */ 622 603 623 604 DBG("DSI PLL%d", pll_7nm->phy->id); ··· 635 618 } 636 619 637 620 /* set PLL src */ 638 - writel(data << 2, base + REG_DSI_7nm_PHY_CMN_CLK_CFG1); 621 + dsi_pll_cmn_clk_cfg1_update(pll_7nm, DSI_7nm_PHY_CMN_CLK_CFG1_BITCLK_SEL__MASK, 622 + DSI_7nm_PHY_CMN_CLK_CFG1_BITCLK_SEL(data)); 639 623 640 624 return 0; 641 625 } ··· 751 733 pll_by_2_bit, 752 734 }), 2, 0, pll_7nm->phy->base + 753 735 REG_DSI_7nm_PHY_CMN_CLK_CFG1, 754 - 0, 1, 0, NULL); 736 + 0, 1, 0, &pll_7nm->pclk_mux_lock); 755 737 if (IS_ERR(hw)) { 756 738 ret = PTR_ERR(hw); 757 739 goto fail; ··· 796 778 pll_7nm_list[phy->id] = pll_7nm; 797 779 798 780 spin_lock_init(&pll_7nm->postdiv_lock); 781 + spin_lock_init(&pll_7nm->pclk_mux_lock); 799 782 800 783 pll_7nm->phy = phy; 801 784
+4 -7
drivers/gpu/drm/msm/msm_drv.h
··· 537 537 static inline unsigned long timeout_to_jiffies(const ktime_t *timeout) 538 538 { 539 539 ktime_t now = ktime_get(); 540 - s64 remaining_jiffies; 541 540 542 - if (ktime_compare(*timeout, now) < 0) { 543 - remaining_jiffies = 0; 544 - } else { 545 - ktime_t rem = ktime_sub(*timeout, now); 546 - remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ); 547 - } 541 + if (ktime_compare(*timeout, now) <= 0) 542 + return 0; 548 543 544 + ktime_t rem = ktime_sub(*timeout, now); 545 + s64 remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ); 549 546 return clamp(remaining_jiffies, 1LL, (s64)INT_MAX); 550 547 } 551 548
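The `msm_drv.h` hunk restructures `timeout_to_jiffies()` to return early when the timeout has already expired — now also treating an exactly-equal ktime as expired (`<=` instead of `<`) — and to clamp the remainder to `[1, INT_MAX]` jiffies. A userspace sketch of the same arithmetic, assuming HZ=100 so one jiffy is 10,000,000 ns, with ktimes modeled as plain signed nanosecond counts rather than the kernel's `ktime_t`:

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_JIFFY 10000000LL	/* assumes HZ = 100 */
#define JIFFY_MAX      2147483647LL	/* INT_MAX */

static int64_t timeout_to_jiffies(int64_t timeout_ns, int64_t now_ns)
{
	if (timeout_ns <= now_ns)	/* already expired: no wait */
		return 0;

	int64_t rem = timeout_ns - now_ns;
	int64_t jiffies = rem / NSEC_PER_JIFFY;

	/* round sub-jiffy remainders up to one jiffy, cap at INT_MAX */
	if (jiffies < 1)
		jiffies = 1;
	if (jiffies > JIFFY_MAX)
		jiffies = JIFFY_MAX;
	return jiffies;
}
```

The lower clamp matters: a timeout a few nanoseconds in the future must still wait at least one jiffy rather than degenerate to a busy poll of zero.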
+9 -2
drivers/gpu/drm/msm/registers/display/dsi_phy_7nm.xml
··· 9 9 <reg32 offset="0x00004" name="REVISION_ID1"/> 10 10 <reg32 offset="0x00008" name="REVISION_ID2"/> 11 11 <reg32 offset="0x0000c" name="REVISION_ID3"/> 12 - <reg32 offset="0x00010" name="CLK_CFG0"/> 13 - <reg32 offset="0x00014" name="CLK_CFG1"/> 12 + <reg32 offset="0x00010" name="CLK_CFG0"> 13 + <bitfield name="DIV_CTRL_3_0" low="0" high="3" type="uint"/> 14 + <bitfield name="DIV_CTRL_7_4" low="4" high="7" type="uint"/> 15 + </reg32> 16 + <reg32 offset="0x00014" name="CLK_CFG1"> 17 + <bitfield name="CLK_EN" pos="5" type="boolean"/> 18 + <bitfield name="CLK_EN_SEL" pos="4" type="boolean"/> 19 + <bitfield name="BITCLK_SEL" low="2" high="3" type="uint"/> 20 + </reg32> 14 21 <reg32 offset="0x00018" name="GLBL_CTRL"/> 15 22 <reg32 offset="0x0001c" name="RBUF_CTRL"/> 16 23 <reg32 offset="0x00020" name="VREG_CTRL_0"/>
+7 -2
drivers/gpu/drm/nouveau/nouveau_svm.c
··· 590 590 unsigned long timeout = 591 591 jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT); 592 592 struct mm_struct *mm = svmm->notifier.mm; 593 + struct folio *folio; 593 594 struct page *page; 594 595 unsigned long start = args->p.addr; 595 596 unsigned long notifier_seq; ··· 617 616 ret = -EINVAL; 618 617 goto out; 619 618 } 619 + folio = page_folio(page); 620 620 621 621 mutex_lock(&svmm->mutex); 622 622 if (!mmu_interval_read_retry(&notifier->notifier, 623 623 notifier_seq)) 624 624 break; 625 625 mutex_unlock(&svmm->mutex); 626 + 627 + folio_unlock(folio); 628 + folio_put(folio); 626 629 } 627 630 628 631 /* Map the page on the GPU. */ ··· 642 637 ret = nvif_object_ioctl(&svmm->vmm->vmm.object, args, size, NULL); 643 638 mutex_unlock(&svmm->mutex); 644 639 645 - unlock_page(page); 646 - put_page(page); 640 + folio_unlock(folio); 641 + folio_put(folio); 647 642 648 643 out: 649 644 mmu_interval_notifier_remove(&notifier->notifier);
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
··· 75 75 .bootstrap_multiple_falcons = gp10b_pmu_acr_bootstrap_multiple_falcons, 76 76 }; 77 77 78 - #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC) 78 + #if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) 79 79 MODULE_FIRMWARE("nvidia/gp10b/pmu/desc.bin"); 80 80 MODULE_FIRMWARE("nvidia/gp10b/pmu/image.bin"); 81 81 MODULE_FIRMWARE("nvidia/gp10b/pmu/sig.bin");
+4 -4
drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
··· 109 109 if (jadard->desc->lp11_to_reset_delay_ms) 110 110 msleep(jadard->desc->lp11_to_reset_delay_ms); 111 111 112 - gpiod_set_value(jadard->reset, 1); 112 + gpiod_set_value(jadard->reset, 0); 113 113 msleep(5); 114 114 115 - gpiod_set_value(jadard->reset, 0); 115 + gpiod_set_value(jadard->reset, 1); 116 116 msleep(10); 117 117 118 - gpiod_set_value(jadard->reset, 1); 118 + gpiod_set_value(jadard->reset, 0); 119 119 msleep(130); 120 120 121 121 ret = jadard->desc->init(jadard); ··· 1130 1130 dsi->format = desc->format; 1131 1131 dsi->lanes = desc->lanes; 1132 1132 1133 - jadard->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); 1133 + jadard->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 1134 1134 if (IS_ERR(jadard->reset)) { 1135 1135 DRM_DEV_ERROR(&dsi->dev, "failed to get our reset GPIO\n"); 1136 1136 return PTR_ERR(jadard->reset);
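The jadard panel fix flips the logical values passed to `gpiod_set_value()` and requests the reset line as `GPIOD_OUT_HIGH`, which reads backwards until you recall that gpiod consumers speak *logical* levels: for a line annotated active-low in the device tree, logical 1 (asserted) drives the physical pin low. A tiny sketch of that mapping — the helper is illustrative, not a gpiolib function:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative mapping from a gpiod consumer's logical value to the
 * physical pin level, for active-high and active-low lines. */
static bool physical_level(bool logical, bool active_low)
{
	return active_low ? !logical : logical;
}
```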
+4 -2
drivers/gpu/drm/xe/xe_guc_ct.c
··· 1723 1723 drm_printf(p, "\tg2h outstanding: %d\n", 1724 1724 snapshot->g2h_outstanding); 1725 1725 1726 - if (snapshot->ctb) 1727 - xe_print_blob_ascii85(p, "CTB data", '\n', 1726 + if (snapshot->ctb) { 1727 + drm_printf(p, "[CTB].length: 0x%zx\n", snapshot->ctb_size); 1728 + xe_print_blob_ascii85(p, "[CTB].data", '\n', 1728 1729 snapshot->ctb, 0, snapshot->ctb_size); 1730 + } 1729 1731 } else { 1730 1732 drm_puts(p, "CT disabled\n"); 1731 1733 }
+2 -1
drivers/gpu/drm/xe/xe_guc_log.c
··· 208 208 drm_printf(p, "GuC timestamp: 0x%08llX [%llu]\n", snapshot->stamp, snapshot->stamp); 209 209 drm_printf(p, "Log level: %u\n", snapshot->level); 210 210 211 + drm_printf(p, "[LOG].length: 0x%zx\n", snapshot->size); 211 212 remain = snapshot->size; 212 213 for (i = 0; i < snapshot->num_chunks; i++) { 213 214 size_t size = min(GUC_LOG_CHUNK_SIZE, remain); 214 - const char *prefix = i ? NULL : "Log data"; 215 + const char *prefix = i ? NULL : "[LOG].data"; 215 216 char suffix = i == snapshot->num_chunks - 1 ? '\n' : 0; 216 217 217 218 xe_print_blob_ascii85(p, prefix, suffix, snapshot->copy[i], 0, size);
+1 -13
drivers/gpu/drm/xe/xe_irq.c
··· 757 757 758 758 xe_irq_postinstall(xe); 759 759 760 - err = devm_add_action_or_reset(xe->drm.dev, irq_uninstall, xe); 761 - if (err) 762 - goto free_irq_handler; 763 - 764 - return 0; 765 - 766 - free_irq_handler: 767 - if (xe_device_has_msix(xe)) 768 - xe_irq_msix_free(xe); 769 - else 770 - xe_irq_msi_free(xe); 771 - 772 - return err; 760 + return devm_add_action_or_reset(xe->drm.dev, irq_uninstall, xe); 773 761 } 774 762 775 763 static void xe_irq_msi_synchronize_irq(struct xe_device *xe)
+10 -5
drivers/i2c/i2c-core-base.c
··· 2506 2506 static int i2c_detect(struct i2c_adapter *adapter, struct i2c_driver *driver) 2507 2507 { 2508 2508 const unsigned short *address_list; 2509 - struct i2c_client temp_client; 2509 + struct i2c_client *temp_client; 2510 2510 int i, err = 0; 2511 2511 2512 2512 address_list = driver->address_list; ··· 2527 2527 return 0; 2528 2528 2529 2529 /* Set up a temporary client to help detect callback */ 2530 - memset(&temp_client, 0, sizeof(temp_client)); 2531 - temp_client.adapter = adapter; 2530 + temp_client = kzalloc(sizeof(*temp_client), GFP_KERNEL); 2531 + if (!temp_client) 2532 + return -ENOMEM; 2533 + 2534 + temp_client->adapter = adapter; 2532 2535 2533 2536 for (i = 0; address_list[i] != I2C_CLIENT_END; i += 1) { 2534 2537 dev_dbg(&adapter->dev, 2535 2538 "found normal entry for adapter %d, addr 0x%02x\n", 2536 2539 i2c_adapter_id(adapter), address_list[i]); 2537 - temp_client.addr = address_list[i]; 2538 - err = i2c_detect_address(&temp_client, driver); 2540 + temp_client->addr = address_list[i]; 2541 + err = i2c_detect_address(temp_client, driver); 2539 2542 if (unlikely(err)) 2540 2543 break; 2541 2544 } 2545 + 2546 + kfree(temp_client); 2542 2547 2543 2548 return err; 2544 2549 }
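The `i2c_detect()` change moves the temporary `struct i2c_client` off the stack and onto the heap — the struct is large enough to bloat the kernel stack frame — allocating it once with `kzalloc()`, reusing it for every probed address, and freeing it before returning. A userspace sketch of the same pattern, with `big_struct` and `probe_addresses` as hypothetical stand-ins for the kernel types:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for a struct too large to keep in a stack frame. */
struct big_struct {
	int adapter_id;
	unsigned short addr;
	char padding[1024];
};

static int probe_addresses(const unsigned short *list, unsigned short end)
{
	/* zero-allocated once, reused for each address, freed on exit */
	struct big_struct *tmp = calloc(1, sizeof(*tmp));
	int count = 0;

	if (!tmp)
		return -1;	/* -ENOMEM in the kernel version */

	for (int i = 0; list[i] != end; i++) {
		tmp->addr = list[i];
		count++;
	}
	free(tmp);
	return count;
}
```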
-1
drivers/infiniband/hw/bnxt_re/bnxt_re.h
··· 187 187 #define BNXT_RE_FLAG_ISSUE_ROCE_STATS 29 188 188 struct net_device *netdev; 189 189 struct auxiliary_device *adev; 190 - struct notifier_block nb; 191 190 unsigned int version, major, minor; 192 191 struct bnxt_qplib_chip_ctx *chip_ctx; 193 192 struct bnxt_en_dev *en_dev;
+2 -2
drivers/infiniband/hw/bnxt_re/hw_counters.c
··· 348 348 goto done; 349 349 } 350 350 bnxt_re_copy_err_stats(rdev, stats, err_s); 351 - if (_is_ext_stats_supported(rdev->dev_attr->dev_cap_flags) && 352 - !rdev->is_virtfn) { 351 + if (bnxt_ext_stats_supported(rdev->chip_ctx, rdev->dev_attr->dev_cap_flags, 352 + rdev->is_virtfn)) { 353 353 rc = bnxt_re_get_ext_stat(rdev, stats); 354 354 if (rc) { 355 355 clear_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS,
+2
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 1870 1870 srq->qplib_srq.threshold = srq_init_attr->attr.srq_limit; 1871 1871 srq->srq_limit = srq_init_attr->attr.srq_limit; 1872 1872 srq->qplib_srq.eventq_hw_ring_id = rdev->nqr->nq[0].ring_id; 1873 + srq->qplib_srq.sg_info.pgsize = PAGE_SIZE; 1874 + srq->qplib_srq.sg_info.pgshft = PAGE_SHIFT; 1873 1875 nq = &rdev->nqr->nq[0]; 1874 1876 1875 1877 if (udata) {
+11 -11
drivers/infiniband/hw/bnxt_re/main.c
··· 396 396 397 397 static void bnxt_re_async_notifier(void *handle, struct hwrm_async_event_cmpl *cmpl) 398 398 { 399 - struct bnxt_re_dev *rdev = (struct bnxt_re_dev *)handle; 399 + struct bnxt_re_en_dev_info *en_info = auxiliary_get_drvdata(handle); 400 400 struct bnxt_re_dcb_work *dcb_work; 401 + struct bnxt_re_dev *rdev; 401 402 u32 data1, data2; 402 403 u16 event_id; 404 + 405 + rdev = en_info->rdev; 406 + if (!rdev) 407 + return; 403 408 404 409 event_id = le16_to_cpu(cmpl->event_id); 405 410 data1 = le32_to_cpu(cmpl->event_data1); ··· 438 433 int indx; 439 434 440 435 rdev = en_info->rdev; 436 + if (!rdev) 437 + return; 441 438 rcfw = &rdev->rcfw; 442 439 443 440 if (reset) { ··· 468 461 int indx, rc; 469 462 470 463 rdev = en_info->rdev; 464 + if (!rdev) 465 + return; 471 466 msix_ent = rdev->nqr->msix_entries; 472 467 rcfw = &rdev->rcfw; 473 468 if (!ent) { ··· 1359 1350 return NULL; 1360 1351 } 1361 1352 /* Default values */ 1362 - rdev->nb.notifier_call = NULL; 1363 1353 rdev->netdev = en_dev->net; 1364 1354 rdev->en_dev = en_dev; 1365 1355 rdev->adev = adev; ··· 2353 2345 static void bnxt_re_remove_device(struct bnxt_re_dev *rdev, u8 op_type, 2354 2346 struct auxiliary_device *aux_dev) 2355 2347 { 2356 - if (rdev->nb.notifier_call) { 2357 - unregister_netdevice_notifier(&rdev->nb); 2358 - rdev->nb.notifier_call = NULL; 2359 - } else { 2360 - /* If notifier is null, we should have already done a 2361 - * clean up before coming here. 2362 - */ 2363 - return; 2364 - } 2365 2348 bnxt_re_setup_cc(rdev, false); 2366 2349 ib_unregister_device(&rdev->ibdev); 2367 2350 bnxt_re_dev_uninit(rdev, op_type); ··· 2432 2433 ibdev_info(&rdev->ibdev, "%s: L2 driver notified to stop en_state 0x%lx", 2433 2434 __func__, en_dev->en_state); 2434 2435 bnxt_re_remove_device(rdev, BNXT_RE_PRE_RECOVERY_REMOVE, adev); 2436 + bnxt_re_update_en_info_rdev(NULL, en_info, adev); 2435 2437 mutex_unlock(&bnxt_re_mutex); 2436 2438 2437 2439 return 0;
+8
drivers/infiniband/hw/bnxt_re/qplib_res.h
··· 547 547 CREQ_QUERY_FUNC_RESP_SB_EXT_STATS; 548 548 } 549 549 550 + static inline int bnxt_ext_stats_supported(struct bnxt_qplib_chip_ctx *ctx, 551 + u16 flags, bool virtfn) 552 + { 553 + /* ext stats supported if cap flag is set AND is a PF OR a Thor2 VF */ 554 + return (_is_ext_stats_supported(flags) && 555 + ((virtfn && bnxt_qplib_is_chip_gen_p7(ctx)) || (!virtfn))); 556 + } 557 + 550 558 static inline bool _is_hw_retx_supported(u16 dev_cap_flags) 551 559 { 552 560 return dev_cap_flags &
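The new `bnxt_ext_stats_supported()` helper encodes exactly the rule in its comment: extended stats require the capability flag, plus either a PF or a Thor2 (gen P7) VF. Reduced to a pure boolean sketch, with the names standing in for the kernel helpers (`is_gen_p7` for `bnxt_qplib_is_chip_gen_p7()`):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the bnxt_ext_stats_supported() predicate: the capability
 * flag must be set, and the function must be a PF, or a VF on a
 * gen-P7 (Thor2) chip. */
static bool ext_stats_supported(bool cap_flag, bool is_vf, bool is_gen_p7)
{
	return cap_flag && (!is_vf || is_gen_p7);
}
```

This is logically equivalent to the diff's `(virtfn && gen_p7) || !virtfn` form, just simplified.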
+48 -16
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 1286 1286 return tx_timeout; 1287 1287 } 1288 1288 1289 - static void hns_roce_wait_csq_done(struct hns_roce_dev *hr_dev, u16 opcode) 1289 + static void hns_roce_wait_csq_done(struct hns_roce_dev *hr_dev, u32 tx_timeout) 1290 1290 { 1291 - struct hns_roce_v2_priv *priv = hr_dev->priv; 1292 - u32 tx_timeout = hns_roce_cmdq_tx_timeout(opcode, priv->cmq.tx_timeout); 1293 1291 u32 timeout = 0; 1294 1292 1295 1293 do { ··· 1297 1299 } while (++timeout < tx_timeout); 1298 1300 } 1299 1301 1300 - static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev, 1301 - struct hns_roce_cmq_desc *desc, int num) 1302 + static int __hns_roce_cmq_send_one(struct hns_roce_dev *hr_dev, 1303 + struct hns_roce_cmq_desc *desc, 1304 + int num, u32 tx_timeout) 1302 1305 { 1303 1306 struct hns_roce_v2_priv *priv = hr_dev->priv; 1304 1307 struct hns_roce_v2_cmq_ring *csq = &priv->cmq.csq; ··· 1307 1308 u32 tail; 1308 1309 int ret; 1309 1310 int i; 1310 - 1311 - spin_lock_bh(&csq->lock); 1312 1311 1313 1312 tail = csq->head; 1314 1313 ··· 1321 1324 1322 1325 atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_CMDS_CNT]); 1323 1326 1324 - hns_roce_wait_csq_done(hr_dev, le16_to_cpu(desc->opcode)); 1327 + hns_roce_wait_csq_done(hr_dev, tx_timeout); 1325 1328 if (hns_roce_cmq_csq_done(hr_dev)) { 1326 1329 ret = 0; 1327 1330 for (i = 0; i < num; i++) { 1328 1331 /* check the result of hardware write back */ 1329 - desc[i] = csq->desc[tail++]; 1332 + desc_ret = le16_to_cpu(csq->desc[tail++].retval); 1330 1333 if (tail == csq->desc_num) 1331 1334 tail = 0; 1332 - 1333 - desc_ret = le16_to_cpu(desc[i].retval); 1334 1335 if (likely(desc_ret == CMD_EXEC_SUCCESS)) 1335 1336 continue; 1336 1337 1337 - dev_err_ratelimited(hr_dev->dev, 1338 - "Cmdq IO error, opcode = 0x%x, return = 0x%x.\n", 1339 - desc->opcode, desc_ret); 1340 1338 ret = hns_roce_cmd_err_convert_errno(desc_ret); 1341 1339 } 1342 1340 } else { ··· 1346 1354 ret = -EAGAIN; 1347 1355 } 1348 1356 1349 - spin_unlock_bh(&csq->lock); 1350 - 1351 1357 if (ret) 1352 1358 atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_CMDS_ERR_CNT]); 1359 + 1360 + return ret; 1361 + } 1362 + 1363 + static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev, 1364 + struct hns_roce_cmq_desc *desc, int num) 1365 + { 1366 + struct hns_roce_v2_priv *priv = hr_dev->priv; 1367 + struct hns_roce_v2_cmq_ring *csq = &priv->cmq.csq; 1368 + u16 opcode = le16_to_cpu(desc->opcode); 1369 + u32 tx_timeout = hns_roce_cmdq_tx_timeout(opcode, priv->cmq.tx_timeout); 1370 + u8 try_cnt = HNS_ROCE_OPC_POST_MB_TRY_CNT; 1371 + u32 rsv_tail; 1372 + int ret; 1373 + int i; 1374 + 1375 + while (try_cnt) { 1376 + try_cnt--; 1377 + 1378 + spin_lock_bh(&csq->lock); 1379 + rsv_tail = csq->head; 1380 + ret = __hns_roce_cmq_send_one(hr_dev, desc, num, tx_timeout); 1381 + if (opcode == HNS_ROCE_OPC_POST_MB && ret == -ETIME && 1382 + try_cnt) { 1383 + spin_unlock_bh(&csq->lock); 1384 + mdelay(HNS_ROCE_OPC_POST_MB_RETRY_GAP_MSEC); 1385 + continue; 1386 + } 1387 + 1388 + for (i = 0; i < num; i++) { 1389 + desc[i] = csq->desc[rsv_tail++]; 1390 + if (rsv_tail == csq->desc_num) 1391 + rsv_tail = 0; 1392 + } 1393 + spin_unlock_bh(&csq->lock); 1394 + break; 1395 + } 1396 + 1397 + if (ret) 1398 + dev_err_ratelimited(hr_dev->dev, 1399 + "Cmdq IO error, opcode = 0x%x, return = %d.\n", 1400 + opcode, ret); 1353 1401 1354 1402 return ret; 1355 1403 }
+2
drivers/infiniband/hw/hns/hns_roce_hw_v2.h
··· 230 230 }; 231 231 232 232 #define HNS_ROCE_OPC_POST_MB_TIMEOUT 35000 233 + #define HNS_ROCE_OPC_POST_MB_TRY_CNT 8 234 + #define HNS_ROCE_OPC_POST_MB_RETRY_GAP_MSEC 5 233 235 struct hns_roce_cmdq_tx_timeout_map { 234 236 u16 opcode; 235 237 u32 tx_timeout;
+1 -1
drivers/infiniband/hw/mana/main.c
··· 174 174 175 175 req.resource_type = GDMA_RESOURCE_DOORBELL_PAGE; 176 176 req.num_resources = 1; 177 - req.alignment = 1; 177 + req.alignment = PAGE_SIZE / MANA_PAGE_SIZE; 178 178 179 179 /* Have GDMA start searching from 0 */ 180 180 req.allocated_resources = 0;
+2 -1
drivers/infiniband/hw/mlx5/ah.c
··· 67 67 ah->av.tclass = grh->traffic_class; 68 68 } 69 69 70 - ah->av.stat_rate_sl = (rdma_ah_get_static_rate(ah_attr) << 4); 70 + ah->av.stat_rate_sl = 71 + (mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah_attr)) << 4); 71 72 72 73 if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) { 73 74 if (init_attr->xmit_slave)
+6 -2
drivers/infiniband/hw/mlx5/counters.c
··· 546 546 struct ib_qp *qp) 547 547 { 548 548 struct mlx5_ib_dev *dev = to_mdev(qp->device); 549 + bool new = false; 549 550 int err; 550 551 551 552 if (!counter->id) { ··· 561 560 return err; 562 561 counter->id = 563 562 MLX5_GET(alloc_q_counter_out, out, counter_set_id); 563 + new = true; 564 564 } 565 565 566 566 err = mlx5_ib_qp_set_counter(qp, counter); ··· 571 569 return 0; 572 570 573 571 fail_set_counter: 574 - mlx5_ib_counter_dealloc(counter); 575 - counter->id = 0; 572 + if (new) { 573 + mlx5_ib_counter_dealloc(counter); 574 + counter->id = 0; 575 + } 576 576 577 577 return err; 578 578 }
+14 -2
drivers/infiniband/hw/mlx5/mr.c
··· 1550 1550 1551 1551 dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv); 1552 1552 1553 - if (!umem_dmabuf->sgt) 1553 + if (!umem_dmabuf->sgt || !mr) 1554 1554 return; 1555 1555 1556 1556 mlx5r_umr_update_mr_pas(mr, MLX5_IB_UPD_XLT_ZAP); ··· 1935 1935 static void 1936 1936 mlx5_free_priv_descs(struct mlx5_ib_mr *mr) 1937 1937 { 1938 - if (!mr->umem && !mr->data_direct && mr->descs) { 1938 + if (!mr->umem && !mr->data_direct && 1939 + mr->ibmr.type != IB_MR_TYPE_DM && mr->descs) { 1939 1940 struct ib_device *device = mr->ibmr.device; 1940 1941 int size = mr->max_descs * mr->desc_size; 1941 1942 struct mlx5_ib_dev *dev = to_mdev(device); ··· 2023 2022 struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device); 2024 2023 struct mlx5_cache_ent *ent = mr->mmkey.cache_ent; 2025 2024 bool is_odp = is_odp_mr(mr); 2025 + bool is_odp_dma_buf = is_dmabuf_mr(mr) && 2026 + !to_ib_umem_dmabuf(mr->umem)->pinned; 2026 2027 int ret = 0; 2027 2028 2028 2029 if (is_odp) 2029 2030 mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex); 2031 + 2032 + if (is_odp_dma_buf) 2033 + dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv, NULL); 2030 2034 2031 2035 if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr)) { 2032 2036 ent = mr->mmkey.cache_ent; ··· 2058 2052 if (!ret) 2059 2053 to_ib_umem_odp(mr->umem)->private = NULL; 2060 2054 mutex_unlock(&to_ib_umem_odp(mr->umem)->umem_mutex); 2055 + } 2056 + 2057 + if (is_odp_dma_buf) { 2058 + if (!ret) 2059 + to_ib_umem_dmabuf(mr->umem)->private = NULL; 2060 + dma_resv_unlock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv); 2061 2061 } 2062 2062 2063 2063 return ret;
+1
drivers/infiniband/hw/mlx5/odp.c
··· 242 242 if (__xa_cmpxchg(&imr->implicit_children, idx, mr, NULL, GFP_KERNEL) != 243 243 mr) { 244 244 xa_unlock(&imr->implicit_children); 245 + mlx5r_deref_odp_mkey(&imr->mmkey); 245 246 return; 246 247 } 247 248
+6 -4
drivers/infiniband/hw/mlx5/qp.c
··· 3447 3447 return 0; 3448 3448 } 3449 3449 3450 - static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate) 3450 + int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate) 3451 3451 { 3452 3452 u32 stat_rate_support; 3453 3453 3454 - if (rate == IB_RATE_PORT_CURRENT) 3454 + if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS) 3455 3455 return 0; 3456 3456 3457 3457 if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_800_GBPS) ··· 3596 3596 sizeof(grh->dgid.raw)); 3597 3597 } 3598 3598 3599 - err = ib_rate_to_mlx5(dev, rdma_ah_get_static_rate(ah)); 3599 + err = mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah)); 3600 3600 if (err < 0) 3601 3601 return err; 3602 3602 MLX5_SET(ads, path, stat_rate, err); ··· 4579 4579 4580 4580 set_id = mlx5_ib_get_counters_id(dev, attr->port_num - 1); 4581 4581 MLX5_SET(dctc, dctc, counter_set_id, set_id); 4582 + 4583 + qp->port = attr->port_num; 4582 4584 } else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_RTR) { 4583 4585 struct mlx5_ib_modify_qp_resp resp = {}; 4584 4586 u32 out[MLX5_ST_SZ_DW(create_dct_out)] = {}; ··· 5076 5074 } 5077 5075 5078 5076 if (qp_attr_mask & IB_QP_PORT) 5079 - qp_attr->port_num = MLX5_GET(dctc, dctc, port); 5077 + qp_attr->port_num = mqp->port; 5080 5078 if (qp_attr_mask & IB_QP_MIN_RNR_TIMER) 5081 5079 qp_attr->min_rnr_timer = MLX5_GET(dctc, dctc, min_rnr_nak); 5082 5080 if (qp_attr_mask & IB_QP_AV) {
+1
drivers/infiniband/hw/mlx5/qp.h
··· 56 56 int mlx5_ib_qp_set_counter(struct ib_qp *qp, struct rdma_counter *counter); 57 57 int mlx5_ib_qp_event_init(void); 58 58 void mlx5_ib_qp_event_cleanup(void); 59 + int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate); 59 60 #endif /* _MLX5_IB_QP_H */
+56 -27
drivers/infiniband/hw/mlx5/umr.c
··· 231 231 ib_dealloc_pd(dev->umrc.pd); 232 232 } 233 233 234 - static int mlx5r_umr_recover(struct mlx5_ib_dev *dev) 235 - { 236 - struct umr_common *umrc = &dev->umrc; 237 - struct ib_qp_attr attr; 238 - int err; 239 - 240 - attr.qp_state = IB_QPS_RESET; 241 - err = ib_modify_qp(umrc->qp, &attr, IB_QP_STATE); 242 - if (err) { 243 - mlx5_ib_dbg(dev, "Couldn't modify UMR QP\n"); 244 - goto err; 245 - } 246 - 247 - err = mlx5r_umr_qp_rst2rts(dev, umrc->qp); 248 - if (err) 249 - goto err; 250 - 251 - umrc->state = MLX5_UMR_STATE_ACTIVE; 252 - return 0; 253 - 254 - err: 255 - umrc->state = MLX5_UMR_STATE_ERR; 256 - return err; 257 - } 258 234 259 235 static int mlx5r_umr_post_send(struct ib_qp *ibqp, u32 mkey, struct ib_cqe *cqe, 260 236 struct mlx5r_umr_wqe *wqe, bool with_data) ··· 275 299 out: 276 300 spin_unlock_irqrestore(&qp->sq.lock, flags); 277 301 302 + return err; 303 + } 304 + 305 + static int mlx5r_umr_recover(struct mlx5_ib_dev *dev, u32 mkey, 306 + struct mlx5r_umr_context *umr_context, 307 + struct mlx5r_umr_wqe *wqe, bool with_data) 308 + { 309 + struct umr_common *umrc = &dev->umrc; 310 + struct ib_qp_attr attr; 311 + int err; 312 + 313 + mutex_lock(&umrc->lock); 314 + /* Preventing any further WRs to be sent now */ 315 + if (umrc->state != MLX5_UMR_STATE_RECOVER) { 316 + mlx5_ib_warn(dev, "UMR recovery encountered an unexpected state=%d\n", 317 + umrc->state); 318 + umrc->state = MLX5_UMR_STATE_RECOVER; 319 + } 320 + mutex_unlock(&umrc->lock); 321 + 322 + /* Sending a final/barrier WR (the failed one) and wait for its completion. 323 + * This will ensure that all the previous WRs got a completion before 324 + * we set the QP state to RESET. 325 + */ 326 + err = mlx5r_umr_post_send(umrc->qp, mkey, &umr_context->cqe, wqe, 327 + with_data); 328 + if (err) { 329 + mlx5_ib_warn(dev, "UMR recovery post send failed, err %d\n", err); 330 + goto err; 331 + } 332 + 333 + /* Since the QP is in an error state, it will only receive 334 + * IB_WC_WR_FLUSH_ERR. However, as it serves only as a barrier 335 + * we don't care about its status. 336 + */ 337 + wait_for_completion(&umr_context->done); 338 + 339 + attr.qp_state = IB_QPS_RESET; 340 + err = ib_modify_qp(umrc->qp, &attr, IB_QP_STATE); 341 + if (err) { 342 + mlx5_ib_warn(dev, "Couldn't modify UMR QP to RESET, err=%d\n", err); 343 + goto err; 344 + } 345 + 346 + err = mlx5r_umr_qp_rst2rts(dev, umrc->qp); 347 + if (err) { 348 + mlx5_ib_warn(dev, "Couldn't modify UMR QP to RTS, err=%d\n", err); 349 + goto err; 350 + } 351 + 352 + umrc->state = MLX5_UMR_STATE_ACTIVE; 353 + return 0; 354 + 355 + err: 356 + umrc->state = MLX5_UMR_STATE_ERR; 278 357 return err; 279 358 } ··· 397 366 mlx5_ib_warn(dev, 398 367 "reg umr failed (%u). Trying to recover and resubmit the flushed WQEs, mkey = %u\n", 399 368 umr_context.status, mkey); 400 - mutex_lock(&umrc->lock); 401 - err = mlx5r_umr_recover(dev); 402 - mutex_unlock(&umrc->lock); 369 + err = mlx5r_umr_recover(dev, mkey, &umr_context, wqe, with_data); 403 370 if (err) 404 371 mlx5_ib_warn(dev, "couldn't recover UMR, err %d\n", 405 372 err);
+38 -11
drivers/irqchip/irq-gic-v3.c
··· 44 44 #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996 (1ULL << 0) 45 45 #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539 (1ULL << 1) 46 46 #define FLAGS_WORKAROUND_ASR_ERRATUM_8601001 (1ULL << 2) 47 + #define FLAGS_WORKAROUND_INSECURE (1ULL << 3) 47 48 48 49 #define GIC_IRQ_TYPE_PARTITION (GIC_IRQ_TYPE_LPI + 1) 49 50 ··· 83 82 #define GIC_ID_NR (1U << GICD_TYPER_ID_BITS(gic_data.rdists.gicd_typer)) 84 83 #define GIC_LINE_NR min(GICD_TYPER_SPIS(gic_data.rdists.gicd_typer), 1020U) 85 84 #define GIC_ESPI_NR GICD_TYPER_ESPIS(gic_data.rdists.gicd_typer) 85 + 86 + static bool nmi_support_forbidden; 86 87 87 88 /* 88 89 * There are 16 SGIs, though we only actually use 8 in Linux. The other 8 SGIs ··· 166 163 { 167 164 bool ds; 168 165 166 + cpus_have_group0 = gic_has_group0(); 167 + 169 168 ds = gic_dist_security_disabled(); 170 - if (!ds) { 171 - u32 val; 169 + if ((gic_data.flags & FLAGS_WORKAROUND_INSECURE) && !ds) { 170 + if (cpus_have_group0) { 171 + u32 val; 172 172 173 - val = readl_relaxed(gic_data.dist_base + GICD_CTLR); 174 - val |= GICD_CTLR_DS; 175 - writel_relaxed(val, gic_data.dist_base + GICD_CTLR); 173 + val = readl_relaxed(gic_data.dist_base + GICD_CTLR); 174 + val |= GICD_CTLR_DS; 175 + writel_relaxed(val, gic_data.dist_base + GICD_CTLR); 176 176 177 - ds = gic_dist_security_disabled(); 178 - if (ds) 179 - pr_warn("Broken GIC integration, security disabled"); 177 + ds = gic_dist_security_disabled(); 178 + if (ds) 179 + pr_warn("Broken GIC integration, security disabled\n"); 180 + } else { 181 + pr_warn("Broken GIC integration, pNMI forbidden\n"); 182 + nmi_support_forbidden = true; 183 + } 180 184 } 181 185 182 186 cpus_have_security_disabled = ds; 183 - cpus_have_group0 = gic_has_group0(); 184 187 185 188 /* 186 189 * How priority values are used by the GIC depends on two things: ··· 218 209 * be in the non-secure range, we program the non-secure values into 219 210 * the distributor to match the PMR values we want. 220 211 */ 221 - if (cpus_have_group0 & !cpus_have_security_disabled) { 212 + if (cpus_have_group0 && !cpus_have_security_disabled) { 222 213 dist_prio_irq = __gicv3_prio_to_ns(dist_prio_irq); 223 214 dist_prio_nmi = __gicv3_prio_to_ns(dist_prio_nmi); 224 215 } ··· 1931 1922 return true; 1932 1923 } 1933 1924 1925 + static bool gic_enable_quirk_rk3399(void *data) 1926 + { 1927 + struct gic_chip_data *d = data; 1928 + 1929 + if (of_machine_is_compatible("rockchip,rk3399")) { 1930 + d->flags |= FLAGS_WORKAROUND_INSECURE; 1931 + return true; 1932 + } 1933 + 1934 + return false; 1935 + } 1936 + 1934 1937 static bool rd_set_non_coherent(void *data) 1935 1938 { 1936 1939 struct gic_chip_data *d = data; ··· 2018 1997 .init = rd_set_non_coherent, 2019 1998 }, 2020 1999 { 2000 + .desc = "GICv3: Insecure RK3399 integration", 2001 + .iidr = 0x0000043b, 2002 + .mask = 0xff000fff, 2003 + .init = gic_enable_quirk_rk3399, 2004 + }, 2005 + { 2021 2006 } 2022 2007 }; 2023 2008 ··· 2031 2004 { 2032 2005 int i; 2033 2006 2034 - if (!gic_prio_masking_enabled()) 2007 + if (!gic_prio_masking_enabled() || nmi_support_forbidden) 2035 2008 return; 2036 2009 2037 2010 rdist_nmi_refs = kcalloc(gic_data.ppi_nr + SGI_NR,
+1 -1
drivers/irqchip/irq-jcore-aic.c
··· 38 38 static void handle_jcore_irq(struct irq_desc *desc) 39 39 { 40 40 if (irqd_is_per_cpu(irq_desc_get_irq_data(desc))) 41 - handle_percpu_irq(desc); 41 + handle_percpu_devid_irq(desc); 42 42 else 43 43 handle_simple_irq(desc); 44 44 }
+64 -3
drivers/irqchip/qcom-pdc.c
··· 21 21 #include <linux/types.h> 22 22 23 23 #define PDC_MAX_GPIO_IRQS 256 24 + #define PDC_DRV_OFFSET 0x10000 24 25 25 26 /* Valid only on HW version < 3.2 */ 26 27 #define IRQ_ENABLE_BANK 0x10 28 + #define IRQ_ENABLE_BANK_MAX (IRQ_ENABLE_BANK + BITS_TO_BYTES(PDC_MAX_GPIO_IRQS)) 27 29 #define IRQ_i_CFG 0x110 28 30 29 31 /* Valid only on HW version >= 3.2 */ ··· 48 46 49 47 static DEFINE_RAW_SPINLOCK(pdc_lock); 50 48 static void __iomem *pdc_base; 49 + static void __iomem *pdc_prev_base; 51 50 static struct pdc_pin_region *pdc_region; 52 51 static int pdc_region_cnt; 53 52 static unsigned int pdc_version; 53 + static bool pdc_x1e_quirk; 54 + 55 + static void pdc_base_reg_write(void __iomem *base, int reg, u32 i, u32 val) 56 + { 57 + writel_relaxed(val, base + reg + i * sizeof(u32)); 58 + } 54 59 55 60 static void pdc_reg_write(int reg, u32 i, u32 val) 56 61 { 57 - writel_relaxed(val, pdc_base + reg + i * sizeof(u32)); 62 + pdc_base_reg_write(pdc_base, reg, i, val); 58 63 } 59 64 60 65 static u32 pdc_reg_read(int reg, u32 i) 61 66 { 62 67 return readl_relaxed(pdc_base + reg + i * sizeof(u32)); 68 + } 69 + 70 + static void pdc_x1e_irq_enable_write(u32 bank, u32 enable) 71 + { 72 + void __iomem *base; 73 + 74 + /* Remap the write access to work around a hardware bug on X1E */ 75 + switch (bank) { 76 + case 0 ... 1: 77 + /* Use previous DRV (client) region and shift to bank 3-4 */ 78 + base = pdc_prev_base; 79 + bank += 3; 80 + break; 81 + case 2 ... 4: 82 + /* Use our own region and shift to bank 0-2 */ 83 + base = pdc_base; 84 + bank -= 2; 85 + break; 86 + case 5: 87 + /* No fixup required for bank 5 */ 88 + base = pdc_base; 89 + break; 90 + default: 91 + WARN_ON(1); 92 + return; 93 + } 94 + 95 + pdc_base_reg_write(base, IRQ_ENABLE_BANK, bank, enable); 63 96 } 64 97 65 98 static void __pdc_enable_intr(int pin_out, bool on) ··· 109 72 110 73 enable = pdc_reg_read(IRQ_ENABLE_BANK, index); 111 74 __assign_bit(mask, &enable, on); 112 - pdc_reg_write(IRQ_ENABLE_BANK, index, enable); 75 + 76 + if (pdc_x1e_quirk) 77 + pdc_x1e_irq_enable_write(index, enable); 78 + else 79 + pdc_reg_write(IRQ_ENABLE_BANK, index, enable); 113 80 } else { 114 81 enable = pdc_reg_read(IRQ_i_CFG, pin_out); 115 82 __assign_bit(IRQ_i_CFG_IRQ_ENABLE, &enable, on); ··· 365 324 if (res_size > resource_size(&res)) 366 325 pr_warn("%pOF: invalid reg size, please fix DT\n", node); 367 326 327 + /* 328 + * PDC has multiple DRV regions, each one provides the same set of 329 + * registers for a particular client in the system. Due to a hardware 330 + * bug on X1E, some writes to the IRQ_ENABLE_BANK register must be 331 + * issued inside the previous region. This region belongs to 332 + * a different client and is not described in the device tree. Map the 333 + * region with the expected offset to preserve support for old DTs. 334 + */ 335 + if (of_device_is_compatible(node, "qcom,x1e80100-pdc")) { 336 + pdc_prev_base = ioremap(res.start - PDC_DRV_OFFSET, IRQ_ENABLE_BANK_MAX); 337 + if (!pdc_prev_base) { 338 + pr_err("%pOF: unable to map previous PDC DRV region\n", node); 339 + return -ENXIO; 340 + } 341 + 342 + pdc_x1e_quirk = true; 343 + } 344 + 368 345 pdc_base = ioremap(res.start, res_size); 369 346 if (!pdc_base) { 370 347 pr_err("%pOF: unable to map PDC registers\n", node); 371 - return -ENXIO; 348 + ret = -ENXIO; 349 + goto fail; 372 350 } 373 351 374 352 pdc_version = pdc_reg_read(PDC_VERSION_REG, 0); ··· 423 363 fail: 424 364 kfree(pdc_region); 425 365 iounmap(pdc_base); 366 + iounmap(pdc_prev_base); 426 367 return ret; 427 368 }
+14 -11
drivers/md/dm-integrity.c
··· 3790 3790 break; 3791 3791 3792 3792 case STATUSTYPE_TABLE: { 3793 - __u64 watermark_percentage = (__u64)(ic->journal_entries - ic->free_sectors_threshold) * 100; 3794 - 3795 - watermark_percentage += ic->journal_entries / 2; 3796 - do_div(watermark_percentage, ic->journal_entries); 3797 - arg_count = 3; 3793 + arg_count = 1; /* buffer_sectors */ 3798 3794 arg_count += !!ic->meta_dev; 3799 3795 arg_count += ic->sectors_per_block != 1; 3800 3796 arg_count += !!(ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING)); 3801 3797 arg_count += ic->reset_recalculate_flag; 3802 3798 arg_count += ic->discard; 3803 - arg_count += ic->mode == 'J'; 3804 - arg_count += ic->mode == 'J'; 3805 - arg_count += ic->mode == 'B'; 3806 - arg_count += ic->mode == 'B'; 3799 + arg_count += ic->mode != 'I'; /* interleave_sectors */ 3800 + arg_count += ic->mode == 'J'; /* journal_sectors */ 3801 + arg_count += ic->mode == 'J'; /* journal_watermark */ 3802 + arg_count += ic->mode == 'J'; /* commit_time */ 3803 + arg_count += ic->mode == 'B'; /* sectors_per_bit */ 3804 + arg_count += ic->mode == 'B'; /* bitmap_flush_interval */ 3807 3805 arg_count += !!ic->internal_hash_alg.alg_string; 3808 3806 arg_count += !!ic->journal_crypt_alg.alg_string; 3809 3807 arg_count += !!ic->journal_mac_alg.alg_string; ··· 3820 3822 DMEMIT(" reset_recalculate"); 3821 3823 if (ic->discard) 3822 3824 DMEMIT(" allow_discards"); 3823 - DMEMIT(" journal_sectors:%u", ic->initial_sectors - SB_SECTORS); 3824 - DMEMIT(" interleave_sectors:%u", 1U << ic->sb->log2_interleave_sectors); 3825 + if (ic->mode != 'I') 3826 + DMEMIT(" interleave_sectors:%u", 1U << ic->sb->log2_interleave_sectors); 3825 3827 DMEMIT(" buffer_sectors:%u", 1U << ic->log2_buffer_sectors); 3826 3828 if (ic->mode == 'J') { 3829 + __u64 watermark_percentage = (__u64)(ic->journal_entries - ic->free_sectors_threshold) * 100; 3830 + 3831 + watermark_percentage += ic->journal_entries / 2; 3832 + do_div(watermark_percentage, ic->journal_entries); 3833 + DMEMIT(" journal_sectors:%u", ic->initial_sectors - SB_SECTORS); 3827 3834 DMEMIT(" journal_watermark:%u", (unsigned int)watermark_percentage); 3828 3835 DMEMIT(" commit_time:%u", ic->autocommit_msec); 3829 3836 }
+1
drivers/md/dm-vdo/dedupe.c
··· 2178 2178 2179 2179 vdo_set_dedupe_index_timeout_interval(vdo_dedupe_index_timeout_interval); 2180 2180 vdo_set_dedupe_index_min_timer_interval(vdo_dedupe_index_min_timer_interval); 2181 + spin_lock_init(&zones->lock); 2181 2182 2182 2183 /* 2183 2184 * Since we will save up the timeouts that would have been reported but were ratelimited,
+1 -3
drivers/md/raid0.c
··· 386 386 lim.io_opt = lim.io_min * mddev->raid_disks; 387 387 lim.features |= BLK_FEAT_ATOMIC_WRITES; 388 388 err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY); 389 - if (err) { 390 - queue_limits_cancel_update(mddev->gendisk->queue); 389 + if (err) 391 390 return err; 392 - } 393 391 return queue_limits_set(mddev->gendisk->queue, &lim); 394 392 } 395 393
+1 -3
drivers/md/raid1.c
··· 3219 3219 lim.max_write_zeroes_sectors = 0; 3220 3220 lim.features |= BLK_FEAT_ATOMIC_WRITES; 3221 3221 err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY); 3222 - if (err) { 3223 - queue_limits_cancel_update(mddev->gendisk->queue); 3222 + if (err) 3224 3223 return err; 3225 - } 3226 3224 return queue_limits_set(mddev->gendisk->queue, &lim); 3227 3225 } 3228 3226
+1 -3
drivers/md/raid10.c
··· 4020 4020 lim.io_opt = lim.io_min * raid10_nr_stripes(conf); 4021 4021 lim.features |= BLK_FEAT_ATOMIC_WRITES; 4022 4022 err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY); 4023 - if (err) { 4024 - queue_limits_cancel_update(mddev->gendisk->queue); 4023 + if (err) 4025 4024 return err; 4026 - } 4027 4025 return queue_limits_set(mddev->gendisk->queue, &lim); 4028 4026 } 4029 4027
+33 -11
drivers/mtd/nand/raw/cadence-nand-controller.c
··· 471 471 struct { 472 472 void __iomem *virt; 473 473 dma_addr_t dma; 474 + dma_addr_t iova_dma; 475 + u32 size; 474 476 } io; 475 477 476 478 int irq; ··· 1837 1835 } 1838 1836 1839 1837 if (dir == DMA_FROM_DEVICE) { 1840 - src_dma = cdns_ctrl->io.dma; 1838 + src_dma = cdns_ctrl->io.iova_dma; 1841 1839 dst_dma = buf_dma; 1842 1840 } else { 1843 1841 src_dma = buf_dma; 1844 - dst_dma = cdns_ctrl->io.dma; 1842 + dst_dma = cdns_ctrl->io.iova_dma; 1845 1843 } 1846 1844 1847 1845 tx = dmaengine_prep_dma_memcpy(cdns_ctrl->dmac, dst_dma, src_dma, len, ··· 1863 1861 dma_async_issue_pending(cdns_ctrl->dmac); 1864 1862 wait_for_completion(&finished); 1865 1863 1866 - dma_unmap_single(cdns_ctrl->dev, buf_dma, len, dir); 1864 + dma_unmap_single(dma_dev->dev, buf_dma, len, dir); 1867 1865 1868 1866 return 0; 1869 1867 1870 1868 err_unmap: 1871 - dma_unmap_single(cdns_ctrl->dev, buf_dma, len, dir); 1869 + dma_unmap_single(dma_dev->dev, buf_dma, len, dir); 1872 1870 1873 1871 err: 1874 1872 dev_dbg(cdns_ctrl->dev, "Fall back to CPU I/O\n"); ··· 2871 2869 static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl) 2872 2870 { 2873 2871 dma_cap_mask_t mask; 2872 + struct dma_device *dma_dev = cdns_ctrl->dmac->device; 2874 2873 int ret; 2875 2874 2876 2875 cdns_ctrl->cdma_desc = dma_alloc_coherent(cdns_ctrl->dev, ··· 2907 2904 dma_cap_set(DMA_MEMCPY, mask); 2908 2905 2909 2906 if (cdns_ctrl->caps1->has_dma) { 2910 - cdns_ctrl->dmac = dma_request_channel(mask, NULL, NULL); 2911 - if (!cdns_ctrl->dmac) { 2912 - dev_err(cdns_ctrl->dev, 2913 - "Unable to get a DMA channel\n"); 2914 - ret = -EBUSY; 2907 + cdns_ctrl->dmac = dma_request_chan_by_mask(&mask); 2908 + if (IS_ERR(cdns_ctrl->dmac)) { 2909 + ret = dev_err_probe(cdns_ctrl->dev, PTR_ERR(cdns_ctrl->dmac), 2910 + "%d: Failed to get a DMA channel\n", ret); 2915 2911 goto disable_irq; 2916 2912 } 2913 + } 2914 + 2915 + cdns_ctrl->io.iova_dma = dma_map_resource(dma_dev->dev, cdns_ctrl->io.dma, 2916 + cdns_ctrl->io.size, 2917 + DMA_BIDIRECTIONAL, 0); 2918 + 2919 + ret = dma_mapping_error(dma_dev->dev, cdns_ctrl->io.iova_dma); 2920 + if (ret) { 2921 + dev_err(cdns_ctrl->dev, "Failed to map I/O resource to DMA\n"); 2922 + goto dma_release_chnl; 2917 2923 } 2918 2924 2919 2925 nand_controller_init(&cdns_ctrl->controller); ··· 2935 2923 if (ret) { 2936 2924 dev_err(cdns_ctrl->dev, "Failed to register MTD: %d\n", 2937 2925 ret); 2938 - goto dma_release_chnl; 2926 + goto unmap_dma_resource; 2939 2927 } 2940 2928 2941 2929 kfree(cdns_ctrl->buf); 2942 2930 cdns_ctrl->buf = kzalloc(cdns_ctrl->buf_size, GFP_KERNEL); 2943 2931 if (!cdns_ctrl->buf) { 2944 2932 ret = -ENOMEM; 2945 - goto dma_release_chnl; 2933 + goto unmap_dma_resource; 2946 2934 } 2947 2935 2948 2936 return 0; 2937 + 2938 + unmap_dma_resource: 2939 + dma_unmap_resource(dma_dev->dev, cdns_ctrl->io.iova_dma, 2940 + cdns_ctrl->io.size, DMA_BIDIRECTIONAL, 0); 2949 2941 2950 2942 dma_release_chnl: 2951 2943 if (cdns_ctrl->dmac) ··· 2972 2956 static void cadence_nand_remove(struct cdns_nand_ctrl *cdns_ctrl) 2973 2957 { 2974 2958 cadence_nand_chips_cleanup(cdns_ctrl); 2959 + if (cdns_ctrl->dmac) 2960 + dma_unmap_resource(cdns_ctrl->dmac->device->dev, 2961 + cdns_ctrl->io.iova_dma, cdns_ctrl->io.size, 2962 + DMA_BIDIRECTIONAL, 0); 2975 2963 cadence_nand_irq_cleanup(cdns_ctrl->irq, cdns_ctrl); 2976 2964 kfree(cdns_ctrl->buf); 2977 2965 dma_free_coherent(cdns_ctrl->dev, sizeof(struct cadence_nand_cdma_desc), ··· 3040 3020 cdns_ctrl->io.virt = devm_platform_get_and_ioremap_resource(ofdev, 1, &res); 3041 3021 if (IS_ERR(cdns_ctrl->io.virt)) 3042 3022 return PTR_ERR(cdns_ctrl->io.virt); 3023 + 3043 3024 cdns_ctrl->io.dma = res->start; 3025 + cdns_ctrl->io.size = resource_size(res); 3044 3026 3045 3027 dt->clk = devm_clk_get(cdns_ctrl->dev, "nf_clk"); 3046 3028 if (IS_ERR(dt->clk))
+11 -11
drivers/mtd/nand/raw/qcom_nandc.c
··· 1881 1881 nandc->regs->addr0 = 0; 1882 1882 nandc->regs->addr1 = 0; 1883 1883 1884 - host->cfg0 = FIELD_PREP(CW_PER_PAGE_MASK, 0) | 1885 - FIELD_PREP(UD_SIZE_BYTES_MASK, 512) | 1886 - FIELD_PREP(NUM_ADDR_CYCLES_MASK, 5) | 1887 - FIELD_PREP(SPARE_SIZE_BYTES_MASK, 0); 1884 + nandc->regs->cfg0 = cpu_to_le32(FIELD_PREP(CW_PER_PAGE_MASK, 0) | 1885 + FIELD_PREP(UD_SIZE_BYTES_MASK, 512) | 1886 + FIELD_PREP(NUM_ADDR_CYCLES_MASK, 5) | 1887 + FIELD_PREP(SPARE_SIZE_BYTES_MASK, 0)); 1888 1888 1889 - host->cfg1 = FIELD_PREP(NAND_RECOVERY_CYCLES_MASK, 7) | 1890 - FIELD_PREP(BAD_BLOCK_BYTE_NUM_MASK, 17) | 1891 - FIELD_PREP(CS_ACTIVE_BSY, 0) | 1892 - FIELD_PREP(BAD_BLOCK_IN_SPARE_AREA, 1) | 1893 - FIELD_PREP(WR_RD_BSY_GAP_MASK, 2) | 1894 - FIELD_PREP(WIDE_FLASH, 0) | 1895 - FIELD_PREP(DEV0_CFG1_ECC_DISABLE, 1); 1889 + nandc->regs->cfg1 = cpu_to_le32(FIELD_PREP(NAND_RECOVERY_CYCLES_MASK, 7) | 1890 + FIELD_PREP(BAD_BLOCK_BYTE_NUM_MASK, 17) | 1891 + FIELD_PREP(CS_ACTIVE_BSY, 0) | 1892 + FIELD_PREP(BAD_BLOCK_IN_SPARE_AREA, 1) | 1893 + FIELD_PREP(WR_RD_BSY_GAP_MASK, 2) | 1894 + FIELD_PREP(WIDE_FLASH, 0) | 1895 + FIELD_PREP(DEV0_CFG1_ECC_DISABLE, 1)); 1896 1896 1897 1897 if (!nandc->props->qpic_version2) 1898 1898 nandc->regs->ecc_buf_cfg = cpu_to_le32(ECC_CFG_ECC_DISABLE);
+1 -1
drivers/mtd/spi-nor/sst.c
··· 174 174 int ret; 175 175 176 176 nor->program_opcode = op; 177 - ret = spi_nor_write_data(nor, to, 1, buf); 177 + ret = spi_nor_write_data(nor, to, len, buf); 178 178 if (ret < 0) 179 179 return ret; 180 180 WARN(ret != len, "While writing %zu byte written %i bytes\n", len, ret);
+6
drivers/net/dsa/realtek/Kconfig
··· 43 43 help 44 44 Select to enable support for Realtek RTL8366RB. 45 45 46 + config NET_DSA_REALTEK_RTL8366RB_LEDS 47 + bool "Support RTL8366RB LED control" 48 + depends on (LEDS_CLASS=y || LEDS_CLASS=NET_DSA_REALTEK_RTL8366RB) 49 + depends on NET_DSA_REALTEK_RTL8366RB 50 + default NET_DSA_REALTEK_RTL8366RB 51 + 46 52 endif
+3
drivers/net/dsa/realtek/Makefile
··· 12 12 13 13 obj-$(CONFIG_NET_DSA_REALTEK_RTL8366RB) += rtl8366.o 14 14 rtl8366-objs := rtl8366-core.o rtl8366rb.o 15 + ifdef CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS 16 + rtl8366-objs += rtl8366rb-leds.o 17 + endif 15 18 obj-$(CONFIG_NET_DSA_REALTEK_RTL8365MB) += rtl8365mb.o
+177
drivers/net/dsa/realtek/rtl8366rb-leds.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/bitops.h> 4 + #include <linux/regmap.h> 5 + #include <net/dsa.h> 6 + #include "rtl83xx.h" 7 + #include "rtl8366rb.h" 8 + 9 + static inline u32 rtl8366rb_led_group_port_mask(u8 led_group, u8 port) 10 + { 11 + switch (led_group) { 12 + case 0: 13 + return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 14 + case 1: 15 + return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 16 + case 2: 17 + return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 18 + case 3: 19 + return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 20 + default: 21 + return 0; 22 + } 23 + } 24 + 25 + static int rb8366rb_get_port_led(struct rtl8366rb_led *led) 26 + { 27 + struct realtek_priv *priv = led->priv; 28 + u8 led_group = led->led_group; 29 + u8 port_num = led->port_num; 30 + int ret; 31 + u32 val; 32 + 33 + ret = regmap_read(priv->map, RTL8366RB_LED_X_X_CTRL_REG(led_group), 34 + &val); 35 + if (ret) { 36 + dev_err(priv->dev, "error reading LED on port %d group %d\n", 37 + led_group, port_num); 38 + return ret; 39 + } 40 + 41 + return !!(val & rtl8366rb_led_group_port_mask(led_group, port_num)); 42 + } 43 + 44 + static int rb8366rb_set_port_led(struct rtl8366rb_led *led, bool enable) 45 + { 46 + struct realtek_priv *priv = led->priv; 47 + u8 led_group = led->led_group; 48 + u8 port_num = led->port_num; 49 + int ret; 50 + 51 + ret = regmap_update_bits(priv->map, 52 + RTL8366RB_LED_X_X_CTRL_REG(led_group), 53 + rtl8366rb_led_group_port_mask(led_group, 54 + port_num), 55 + enable ? 0xffff : 0); 56 + if (ret) { 57 + dev_err(priv->dev, "error updating LED on port %d group %d\n", 58 + led_group, port_num); 59 + return ret; 60 + } 61 + 62 + /* Change the LED group to manual controlled LEDs if required */ 63 + ret = rb8366rb_set_ledgroup_mode(priv, led_group, 64 + RTL8366RB_LEDGROUP_FORCE); 65 + 66 + if (ret) { 67 + dev_err(priv->dev, "error updating LED GROUP group %d\n", 68 + led_group); 69 + return ret; 70 + } 71 + 72 + return 0; 73 + } 74 + 75 + static int 76 + rtl8366rb_cled_brightness_set_blocking(struct led_classdev *ldev, 77 + enum led_brightness brightness) 78 + { 79 + struct rtl8366rb_led *led = container_of(ldev, struct rtl8366rb_led, 80 + cdev); 81 + 82 + return rb8366rb_set_port_led(led, brightness == LED_ON); 83 + } 84 + 85 + static int rtl8366rb_setup_led(struct realtek_priv *priv, struct dsa_port *dp, 86 + struct fwnode_handle *led_fwnode) 87 + { 88 + struct rtl8366rb *rb = priv->chip_data; 89 + struct led_init_data init_data = { }; 90 + enum led_default_state state; 91 + struct rtl8366rb_led *led; 92 + u32 led_group; 93 + int ret; 94 + 95 + ret = fwnode_property_read_u32(led_fwnode, "reg", &led_group); 96 + if (ret) 97 + return ret; 98 + 99 + if (led_group >= RTL8366RB_NUM_LEDGROUPS) { 100 + dev_warn(priv->dev, "Invalid LED reg %d defined for port %d", 101 + led_group, dp->index); 102 + return -EINVAL; 103 + } 104 + 105 + led = &rb->leds[dp->index][led_group]; 106 + led->port_num = dp->index; 107 + led->led_group = led_group; 108 + led->priv = priv; 109 + 110 + state = led_init_default_state_get(led_fwnode); 111 + switch (state) { 112 + case LEDS_DEFSTATE_ON: 113 + led->cdev.brightness = 1; 114 + rb8366rb_set_port_led(led, 1); 115 + break; 116 + case LEDS_DEFSTATE_KEEP: 117 + led->cdev.brightness = 118 + rb8366rb_get_port_led(led); 119 + break; 120 + case LEDS_DEFSTATE_OFF: 121 + default: 122 + led->cdev.brightness = 0; 123 + rb8366rb_set_port_led(led, 0); 124 + } 125 + 126 + led->cdev.max_brightness = 1; 127 + led->cdev.brightness_set_blocking = 128 + rtl8366rb_cled_brightness_set_blocking; 129 + init_data.fwnode = led_fwnode; 130 + init_data.devname_mandatory = true; 131 + 132 + init_data.devicename = kasprintf(GFP_KERNEL, "Realtek-%d:0%d:%d", 133 + dp->ds->index, dp->index, led_group); 134 + if (!init_data.devicename) 135 + return -ENOMEM; 136 + 137 + ret = devm_led_classdev_register_ext(priv->dev, &led->cdev, &init_data); 138 + if (ret) { 139 + dev_warn(priv->dev, "Failed to init LED %d for port %d", 140 + led_group, dp->index); 141 + return ret; 142 + } 143 + 144 + return 0; 145 + } 146 + 147 + int rtl8366rb_setup_leds(struct realtek_priv *priv) 148 + { 149 + struct dsa_switch *ds = &priv->ds; 150 + struct device_node *leds_np; 151 + struct dsa_port *dp; 152 + int ret = 0; 153 + 154 + dsa_switch_for_each_port(dp, ds) { 155 + if (!dp->dn) 156 + continue; 157 + 158 + leds_np = of_get_child_by_name(dp->dn, "leds"); 159 + if (!leds_np) { 160 + dev_dbg(priv->dev, "No leds defined for port %d", 161 + dp->index); 162 + continue; 163 + } 164 + 165 + for_each_child_of_node_scoped(leds_np, led_np) { 166 + ret = rtl8366rb_setup_led(priv, dp, 167 + of_fwnode_handle(led_np)); 168 + if (ret) 169 + break; 170 + } 171 + 172 + of_node_put(leds_np); 173 + if (ret) 174 + return ret; 175 + } 176 + return 0; 177 + }
+6 -252
drivers/net/dsa/realtek/rtl8366rb.c
··· 27 27 #include "realtek-smi.h" 28 28 #include "realtek-mdio.h" 29 29 #include "rtl83xx.h" 30 - 31 - #define RTL8366RB_PORT_NUM_CPU 5 32 - #define RTL8366RB_NUM_PORTS 6 33 - #define RTL8366RB_PHY_NO_MAX 4 34 - #define RTL8366RB_PHY_ADDR_MAX 31 30 + #include "rtl8366rb.h" 35 31 36 32 /* Switch Global Configuration register */ 37 33 #define RTL8366RB_SGCR 0x0000 ··· 172 176 */ 173 177 #define RTL8366RB_VLAN_INGRESS_CTRL2_REG 0x037f 174 178 175 - /* LED control registers */ 176 - /* The LED blink rate is global; it is used by all triggers in all groups. */ 177 - #define RTL8366RB_LED_BLINKRATE_REG 0x0430 178 - #define RTL8366RB_LED_BLINKRATE_MASK 0x0007 179 - #define RTL8366RB_LED_BLINKRATE_28MS 0x0000 180 - #define RTL8366RB_LED_BLINKRATE_56MS 0x0001 181 - #define RTL8366RB_LED_BLINKRATE_84MS 0x0002 182 - #define RTL8366RB_LED_BLINKRATE_111MS 0x0003 183 - #define RTL8366RB_LED_BLINKRATE_222MS 0x0004 184 - #define RTL8366RB_LED_BLINKRATE_446MS 0x0005 185 - 186 - /* LED trigger event for each group */ 187 - #define RTL8366RB_LED_CTRL_REG 0x0431 188 - #define RTL8366RB_LED_CTRL_OFFSET(led_group) \ 189 - (4 * (led_group)) 190 - #define RTL8366RB_LED_CTRL_MASK(led_group) \ 191 - (0xf << RTL8366RB_LED_CTRL_OFFSET(led_group)) 192 - 193 - /* The RTL8366RB_LED_X_X registers are used to manually set the LED state only 194 - * when the corresponding LED group in RTL8366RB_LED_CTRL_REG is 195 - * RTL8366RB_LEDGROUP_FORCE. Otherwise, it is ignored. 196 - */ 197 - #define RTL8366RB_LED_0_1_CTRL_REG 0x0432 198 - #define RTL8366RB_LED_2_3_CTRL_REG 0x0433 199 - #define RTL8366RB_LED_X_X_CTRL_REG(led_group) \ 200 - ((led_group) <= 1 ? 
\ 201 - RTL8366RB_LED_0_1_CTRL_REG : \ 202 - RTL8366RB_LED_2_3_CTRL_REG) 203 - #define RTL8366RB_LED_0_X_CTRL_MASK GENMASK(5, 0) 204 - #define RTL8366RB_LED_X_1_CTRL_MASK GENMASK(11, 6) 205 - #define RTL8366RB_LED_2_X_CTRL_MASK GENMASK(5, 0) 206 - #define RTL8366RB_LED_X_3_CTRL_MASK GENMASK(11, 6) 207 - 208 179 #define RTL8366RB_MIB_COUNT 33 209 180 #define RTL8366RB_GLOBAL_MIB_COUNT 1 210 181 #define RTL8366RB_MIB_COUNTER_PORT_OFFSET 0x0050 ··· 207 244 #define RTL8366RB_PORT_STATUS_AN_MASK 0x0080 208 245 209 246 #define RTL8366RB_NUM_VLANS 16 210 - #define RTL8366RB_NUM_LEDGROUPS 4 211 247 #define RTL8366RB_NUM_VIDS 4096 212 248 #define RTL8366RB_PRIORITYMAX 7 213 249 #define RTL8366RB_NUM_FIDS 8 ··· 312 350 #define RTL8366RB_GREEN_FEATURE_MSK 0x0007 313 351 #define RTL8366RB_GREEN_FEATURE_TX BIT(0) 314 352 #define RTL8366RB_GREEN_FEATURE_RX BIT(2) 315 - 316 - enum rtl8366_ledgroup_mode { 317 - RTL8366RB_LEDGROUP_OFF = 0x0, 318 - RTL8366RB_LEDGROUP_DUP_COL = 0x1, 319 - RTL8366RB_LEDGROUP_LINK_ACT = 0x2, 320 - RTL8366RB_LEDGROUP_SPD1000 = 0x3, 321 - RTL8366RB_LEDGROUP_SPD100 = 0x4, 322 - RTL8366RB_LEDGROUP_SPD10 = 0x5, 323 - RTL8366RB_LEDGROUP_SPD1000_ACT = 0x6, 324 - RTL8366RB_LEDGROUP_SPD100_ACT = 0x7, 325 - RTL8366RB_LEDGROUP_SPD10_ACT = 0x8, 326 - RTL8366RB_LEDGROUP_SPD100_10_ACT = 0x9, 327 - RTL8366RB_LEDGROUP_FIBER = 0xa, 328 - RTL8366RB_LEDGROUP_AN_FAULT = 0xb, 329 - RTL8366RB_LEDGROUP_LINK_RX = 0xc, 330 - RTL8366RB_LEDGROUP_LINK_TX = 0xd, 331 - RTL8366RB_LEDGROUP_MASTER = 0xe, 332 - RTL8366RB_LEDGROUP_FORCE = 0xf, 333 - 334 - __RTL8366RB_LEDGROUP_MODE_MAX 335 - }; 336 - 337 - struct rtl8366rb_led { 338 - u8 port_num; 339 - u8 led_group; 340 - struct realtek_priv *priv; 341 - struct led_classdev cdev; 342 - }; 343 - 344 - /** 345 - * struct rtl8366rb - RTL8366RB-specific data 346 - * @max_mtu: per-port max MTU setting 347 - * @pvid_enabled: if PVID is set for respective port 348 - * @leds: per-port and per-ledgroup led info 349 - */ 350 - struct rtl8366rb { 
351 - unsigned int max_mtu[RTL8366RB_NUM_PORTS]; 352 - bool pvid_enabled[RTL8366RB_NUM_PORTS]; 353 - struct rtl8366rb_led leds[RTL8366RB_NUM_PORTS][RTL8366RB_NUM_LEDGROUPS]; 354 - }; 355 353 356 354 static struct rtl8366_mib_counter rtl8366rb_mib_counters[] = { 357 355 { 0, 0, 4, "IfInOctets" }, ··· 753 831 return 0; 754 832 } 755 833 756 - static int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv, 757 - u8 led_group, 758 - enum rtl8366_ledgroup_mode mode) 834 + /* This code is used also with LEDs disabled */ 835 + int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv, 836 + u8 led_group, 837 + enum rtl8366_ledgroup_mode mode) 759 838 { 760 839 int ret; 761 840 u32 val; ··· 773 850 return 0; 774 851 } 775 852 776 - static inline u32 rtl8366rb_led_group_port_mask(u8 led_group, u8 port) 777 - { 778 - switch (led_group) { 779 - case 0: 780 - return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 781 - case 1: 782 - return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 783 - case 2: 784 - return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 785 - case 3: 786 - return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port)); 787 - default: 788 - return 0; 789 - } 790 - } 791 - 792 - static int rb8366rb_get_port_led(struct rtl8366rb_led *led) 793 - { 794 - struct realtek_priv *priv = led->priv; 795 - u8 led_group = led->led_group; 796 - u8 port_num = led->port_num; 797 - int ret; 798 - u32 val; 799 - 800 - ret = regmap_read(priv->map, RTL8366RB_LED_X_X_CTRL_REG(led_group), 801 - &val); 802 - if (ret) { 803 - dev_err(priv->dev, "error reading LED on port %d group %d\n", 804 - led_group, port_num); 805 - return ret; 806 - } 807 - 808 - return !!(val & rtl8366rb_led_group_port_mask(led_group, port_num)); 809 - } 810 - 811 - static int rb8366rb_set_port_led(struct rtl8366rb_led *led, bool enable) 812 - { 813 - struct realtek_priv *priv = led->priv; 814 - u8 led_group = led->led_group; 815 - u8 port_num = led->port_num; 816 - int ret; 817 - 818 - ret = 
regmap_update_bits(priv->map, 819 - RTL8366RB_LED_X_X_CTRL_REG(led_group), 820 - rtl8366rb_led_group_port_mask(led_group, 821 - port_num), 822 - enable ? 0xffff : 0); 823 - if (ret) { 824 - dev_err(priv->dev, "error updating LED on port %d group %d\n", 825 - led_group, port_num); 826 - return ret; 827 - } 828 - 829 - /* Change the LED group to manual controlled LEDs if required */ 830 - ret = rb8366rb_set_ledgroup_mode(priv, led_group, 831 - RTL8366RB_LEDGROUP_FORCE); 832 - 833 - if (ret) { 834 - dev_err(priv->dev, "error updating LED GROUP group %d\n", 835 - led_group); 836 - return ret; 837 - } 838 - 839 - return 0; 840 - } 841 - 842 - static int 843 - rtl8366rb_cled_brightness_set_blocking(struct led_classdev *ldev, 844 - enum led_brightness brightness) 845 - { 846 - struct rtl8366rb_led *led = container_of(ldev, struct rtl8366rb_led, 847 - cdev); 848 - 849 - return rb8366rb_set_port_led(led, brightness == LED_ON); 850 - } 851 - 852 - static int rtl8366rb_setup_led(struct realtek_priv *priv, struct dsa_port *dp, 853 - struct fwnode_handle *led_fwnode) 854 - { 855 - struct rtl8366rb *rb = priv->chip_data; 856 - struct led_init_data init_data = { }; 857 - enum led_default_state state; 858 - struct rtl8366rb_led *led; 859 - u32 led_group; 860 - int ret; 861 - 862 - ret = fwnode_property_read_u32(led_fwnode, "reg", &led_group); 863 - if (ret) 864 - return ret; 865 - 866 - if (led_group >= RTL8366RB_NUM_LEDGROUPS) { 867 - dev_warn(priv->dev, "Invalid LED reg %d defined for port %d", 868 - led_group, dp->index); 869 - return -EINVAL; 870 - } 871 - 872 - led = &rb->leds[dp->index][led_group]; 873 - led->port_num = dp->index; 874 - led->led_group = led_group; 875 - led->priv = priv; 876 - 877 - state = led_init_default_state_get(led_fwnode); 878 - switch (state) { 879 - case LEDS_DEFSTATE_ON: 880 - led->cdev.brightness = 1; 881 - rb8366rb_set_port_led(led, 1); 882 - break; 883 - case LEDS_DEFSTATE_KEEP: 884 - led->cdev.brightness = 885 - rb8366rb_get_port_led(led); 886 
- break; 887 - case LEDS_DEFSTATE_OFF: 888 - default: 889 - led->cdev.brightness = 0; 890 - rb8366rb_set_port_led(led, 0); 891 - } 892 - 893 - led->cdev.max_brightness = 1; 894 - led->cdev.brightness_set_blocking = 895 - rtl8366rb_cled_brightness_set_blocking; 896 - init_data.fwnode = led_fwnode; 897 - init_data.devname_mandatory = true; 898 - 899 - init_data.devicename = kasprintf(GFP_KERNEL, "Realtek-%d:0%d:%d", 900 - dp->ds->index, dp->index, led_group); 901 - if (!init_data.devicename) 902 - return -ENOMEM; 903 - 904 - ret = devm_led_classdev_register_ext(priv->dev, &led->cdev, &init_data); 905 - if (ret) { 906 - dev_warn(priv->dev, "Failed to init LED %d for port %d", 907 - led_group, dp->index); 908 - return ret; 909 - } 910 - 911 - return 0; 912 - } 913 - 853 + /* This code is used also with LEDs disabled */ 914 854 static int rtl8366rb_setup_all_leds_off(struct realtek_priv *priv) 915 855 { 916 856 int ret = 0; ··· 792 1006 } 793 1007 794 1008 return ret; 795 - } 796 - 797 - static int rtl8366rb_setup_leds(struct realtek_priv *priv) 798 - { 799 - struct dsa_switch *ds = &priv->ds; 800 - struct device_node *leds_np; 801 - struct dsa_port *dp; 802 - int ret = 0; 803 - 804 - dsa_switch_for_each_port(dp, ds) { 805 - if (!dp->dn) 806 - continue; 807 - 808 - leds_np = of_get_child_by_name(dp->dn, "leds"); 809 - if (!leds_np) { 810 - dev_dbg(priv->dev, "No leds defined for port %d", 811 - dp->index); 812 - continue; 813 - } 814 - 815 - for_each_child_of_node_scoped(leds_np, led_np) { 816 - ret = rtl8366rb_setup_led(priv, dp, 817 - of_fwnode_handle(led_np)); 818 - if (ret) 819 - break; 820 - } 821 - 822 - of_node_put(leds_np); 823 - if (ret) 824 - return ret; 825 - } 826 - return 0; 827 1009 } 828 1010 829 1011 static int rtl8366rb_setup(struct dsa_switch *ds)
+107
drivers/net/dsa/realtek/rtl8366rb.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ */ 2 + 3 + #ifndef _RTL8366RB_H 4 + #define _RTL8366RB_H 5 + 6 + #include "realtek.h" 7 + 8 + #define RTL8366RB_PORT_NUM_CPU 5 9 + #define RTL8366RB_NUM_PORTS 6 10 + #define RTL8366RB_PHY_NO_MAX 4 11 + #define RTL8366RB_NUM_LEDGROUPS 4 12 + #define RTL8366RB_PHY_ADDR_MAX 31 13 + 14 + /* LED control registers */ 15 + /* The LED blink rate is global; it is used by all triggers in all groups. */ 16 + #define RTL8366RB_LED_BLINKRATE_REG 0x0430 17 + #define RTL8366RB_LED_BLINKRATE_MASK 0x0007 18 + #define RTL8366RB_LED_BLINKRATE_28MS 0x0000 19 + #define RTL8366RB_LED_BLINKRATE_56MS 0x0001 20 + #define RTL8366RB_LED_BLINKRATE_84MS 0x0002 21 + #define RTL8366RB_LED_BLINKRATE_111MS 0x0003 22 + #define RTL8366RB_LED_BLINKRATE_222MS 0x0004 23 + #define RTL8366RB_LED_BLINKRATE_446MS 0x0005 24 + 25 + /* LED trigger event for each group */ 26 + #define RTL8366RB_LED_CTRL_REG 0x0431 27 + #define RTL8366RB_LED_CTRL_OFFSET(led_group) \ 28 + (4 * (led_group)) 29 + #define RTL8366RB_LED_CTRL_MASK(led_group) \ 30 + (0xf << RTL8366RB_LED_CTRL_OFFSET(led_group)) 31 + 32 + /* The RTL8366RB_LED_X_X registers are used to manually set the LED state only 33 + * when the corresponding LED group in RTL8366RB_LED_CTRL_REG is 34 + * RTL8366RB_LEDGROUP_FORCE. Otherwise, it is ignored. 35 + */ 36 + #define RTL8366RB_LED_0_1_CTRL_REG 0x0432 37 + #define RTL8366RB_LED_2_3_CTRL_REG 0x0433 38 + #define RTL8366RB_LED_X_X_CTRL_REG(led_group) \ 39 + ((led_group) <= 1 ? 
\ 40 + RTL8366RB_LED_0_1_CTRL_REG : \ 41 + RTL8366RB_LED_2_3_CTRL_REG) 42 + #define RTL8366RB_LED_0_X_CTRL_MASK GENMASK(5, 0) 43 + #define RTL8366RB_LED_X_1_CTRL_MASK GENMASK(11, 6) 44 + #define RTL8366RB_LED_2_X_CTRL_MASK GENMASK(5, 0) 45 + #define RTL8366RB_LED_X_3_CTRL_MASK GENMASK(11, 6) 46 + 47 + enum rtl8366_ledgroup_mode { 48 + RTL8366RB_LEDGROUP_OFF = 0x0, 49 + RTL8366RB_LEDGROUP_DUP_COL = 0x1, 50 + RTL8366RB_LEDGROUP_LINK_ACT = 0x2, 51 + RTL8366RB_LEDGROUP_SPD1000 = 0x3, 52 + RTL8366RB_LEDGROUP_SPD100 = 0x4, 53 + RTL8366RB_LEDGROUP_SPD10 = 0x5, 54 + RTL8366RB_LEDGROUP_SPD1000_ACT = 0x6, 55 + RTL8366RB_LEDGROUP_SPD100_ACT = 0x7, 56 + RTL8366RB_LEDGROUP_SPD10_ACT = 0x8, 57 + RTL8366RB_LEDGROUP_SPD100_10_ACT = 0x9, 58 + RTL8366RB_LEDGROUP_FIBER = 0xa, 59 + RTL8366RB_LEDGROUP_AN_FAULT = 0xb, 60 + RTL8366RB_LEDGROUP_LINK_RX = 0xc, 61 + RTL8366RB_LEDGROUP_LINK_TX = 0xd, 62 + RTL8366RB_LEDGROUP_MASTER = 0xe, 63 + RTL8366RB_LEDGROUP_FORCE = 0xf, 64 + 65 + __RTL8366RB_LEDGROUP_MODE_MAX 66 + }; 67 + 68 + #if IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS) 69 + 70 + struct rtl8366rb_led { 71 + u8 port_num; 72 + u8 led_group; 73 + struct realtek_priv *priv; 74 + struct led_classdev cdev; 75 + }; 76 + 77 + int rtl8366rb_setup_leds(struct realtek_priv *priv); 78 + 79 + #else 80 + 81 + static inline int rtl8366rb_setup_leds(struct realtek_priv *priv) 82 + { 83 + return 0; 84 + } 85 + 86 + #endif /* IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS) */ 87 + 88 + /** 89 + * struct rtl8366rb - RTL8366RB-specific data 90 + * @max_mtu: per-port max MTU setting 91 + * @pvid_enabled: if PVID is set for respective port 92 + * @leds: per-port and per-ledgroup led info 93 + */ 94 + struct rtl8366rb { 95 + unsigned int max_mtu[RTL8366RB_NUM_PORTS]; 96 + bool pvid_enabled[RTL8366RB_NUM_PORTS]; 97 + #if IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS) 98 + struct rtl8366rb_led leds[RTL8366RB_NUM_PORTS][RTL8366RB_NUM_LEDGROUPS]; 99 + #endif 100 + }; 101 + 102 + /* This code is used also with LEDs disabled */ 103 + int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv, 104 + u8 led_group, 105 + enum rtl8366_ledgroup_mode mode); 106 + 107 + #endif /* _RTL8366RB_H */
+2
drivers/net/ethernet/cadence/macb.h
··· 1277 1277 struct clk *rx_clk; 1278 1278 struct clk *tsu_clk; 1279 1279 struct net_device *dev; 1280 + /* Protects hw_stats and ethtool_stats */ 1281 + spinlock_t stats_lock; 1280 1282 union { 1281 1283 struct macb_stats macb; 1282 1284 struct gem_stats gem;
+10 -2
drivers/net/ethernet/cadence/macb_main.c
··· 1989 1989 1990 1990 if (status & MACB_BIT(ISR_ROVR)) { 1991 1991 /* We missed at least one packet */ 1992 + spin_lock(&bp->stats_lock); 1992 1993 if (macb_is_gem(bp)) 1993 1994 bp->hw_stats.gem.rx_overruns++; 1994 1995 else 1995 1996 bp->hw_stats.macb.rx_overruns++; 1997 + spin_unlock(&bp->stats_lock); 1996 1998 1997 1999 if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) 1998 2000 queue_writel(queue, ISR, MACB_BIT(ISR_ROVR)); ··· 3114 3112 { 3115 3113 struct gem_stats *hwstat = &bp->hw_stats.gem; 3116 3114 3115 + spin_lock_irq(&bp->stats_lock); 3117 3116 if (netif_running(bp->dev)) 3118 3117 gem_update_stats(bp); 3119 3118 ··· 3145 3142 nstat->tx_aborted_errors = hwstat->tx_excessive_collisions; 3146 3143 nstat->tx_carrier_errors = hwstat->tx_carrier_sense_errors; 3147 3144 nstat->tx_fifo_errors = hwstat->tx_underrun; 3145 + spin_unlock_irq(&bp->stats_lock); 3148 3146 } 3149 3147 3150 3148 static void gem_get_ethtool_stats(struct net_device *dev, 3151 3149 struct ethtool_stats *stats, u64 *data) 3152 3150 { 3153 - struct macb *bp; 3151 + struct macb *bp = netdev_priv(dev); 3154 3152 3155 - bp = netdev_priv(dev); 3153 + spin_lock_irq(&bp->stats_lock); 3156 3154 gem_update_stats(bp); 3157 3155 memcpy(data, &bp->ethtool_stats, sizeof(u64) 3158 3156 * (GEM_STATS_LEN + QUEUE_STATS_LEN * MACB_MAX_QUEUES)); 3157 + spin_unlock_irq(&bp->stats_lock); 3159 3158 } 3160 3159 3161 3160 static int gem_get_sset_count(struct net_device *dev, int sset) ··· 3210 3205 } 3211 3206 3212 3207 /* read stats from hardware */ 3208 + spin_lock_irq(&bp->stats_lock); 3213 3209 macb_update_stats(bp); 3214 3210 3215 3211 /* Convert HW stats into netdevice stats */ ··· 3244 3238 nstat->tx_carrier_errors = hwstat->tx_carrier_errors; 3245 3239 nstat->tx_fifo_errors = hwstat->tx_underruns; 3246 3240 /* Don't know about heartbeat or window errors... 
*/ 3241 + spin_unlock_irq(&bp->stats_lock); 3247 3242 } 3248 3243 3249 3244 static void macb_get_pause_stats(struct net_device *dev, ··· 5270 5263 } 5271 5264 } 5272 5265 spin_lock_init(&bp->lock); 5266 + spin_lock_init(&bp->stats_lock); 5273 5267 5274 5268 /* setup capabilities */ 5275 5269 macb_configure_caps(bp, macb_config);
+74 -29
drivers/net/ethernet/freescale/enetc/enetc.c
··· 167 167 return skb->csum_offset == offsetof(struct tcphdr, check); 168 168 } 169 169 170 + /** 171 + * enetc_unwind_tx_frame() - Unwind the DMA mappings of a multi-buffer Tx frame 172 + * @tx_ring: Pointer to the Tx ring on which the buffer descriptors are located 173 + * @count: Number of Tx buffer descriptors which need to be unmapped 174 + * @i: Index of the last successfully mapped Tx buffer descriptor 175 + */ 176 + static void enetc_unwind_tx_frame(struct enetc_bdr *tx_ring, int count, int i) 177 + { 178 + while (count--) { 179 + struct enetc_tx_swbd *tx_swbd = &tx_ring->tx_swbd[i]; 180 + 181 + enetc_free_tx_frame(tx_ring, tx_swbd); 182 + if (i == 0) 183 + i = tx_ring->bd_count; 184 + i--; 185 + } 186 + } 187 + 170 188 static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb) 171 189 { 172 190 bool do_vlan, do_onestep_tstamp = false, do_twostep_tstamp = false; ··· 297 279 } 298 280 299 281 if (do_onestep_tstamp) { 300 - u32 lo, hi, val; 301 - u64 sec, nsec; 282 + __be32 new_sec_l, new_nsec; 283 + u32 lo, hi, nsec, val; 284 + __be16 new_sec_h; 302 285 u8 *data; 286 + u64 sec; 303 287 304 288 lo = enetc_rd_hot(hw, ENETC_SICTR0); 305 289 hi = enetc_rd_hot(hw, ENETC_SICTR1); ··· 315 295 /* Update originTimestamp field of Sync packet 316 296 * - 48 bits seconds field 317 297 * - 32 bits nanseconds field 298 + * 299 + * In addition, the UDP checksum needs to be updated 300 + * by software after updating originTimestamp field, 301 + * otherwise the hardware will calculate the wrong 302 + * checksum when updating the correction field and 303 + * update it to the packet. 
318 304 */ 319 305 data = skb_mac_header(skb); 320 - *(__be16 *)(data + offset2) = 321 - htons((sec >> 32) & 0xffff); 322 - *(__be32 *)(data + offset2 + 2) = 323 - htonl(sec & 0xffffffff); 324 - *(__be32 *)(data + offset2 + 6) = htonl(nsec); 306 + new_sec_h = htons((sec >> 32) & 0xffff); 307 + new_sec_l = htonl(sec & 0xffffffff); 308 + new_nsec = htonl(nsec); 309 + if (udp) { 310 + struct udphdr *uh = udp_hdr(skb); 311 + __be32 old_sec_l, old_nsec; 312 + __be16 old_sec_h; 313 + 314 + old_sec_h = *(__be16 *)(data + offset2); 315 + inet_proto_csum_replace2(&uh->check, skb, old_sec_h, 316 + new_sec_h, false); 317 + 318 + old_sec_l = *(__be32 *)(data + offset2 + 2); 319 + inet_proto_csum_replace4(&uh->check, skb, old_sec_l, 320 + new_sec_l, false); 321 + 322 + old_nsec = *(__be32 *)(data + offset2 + 6); 323 + inet_proto_csum_replace4(&uh->check, skb, old_nsec, 324 + new_nsec, false); 325 + } 326 + 327 + *(__be16 *)(data + offset2) = new_sec_h; 328 + *(__be32 *)(data + offset2 + 2) = new_sec_l; 329 + *(__be32 *)(data + offset2 + 6) = new_nsec; 325 330 326 331 /* Configure single-step register */ 327 332 val = ENETC_PM0_SINGLE_STEP_EN; ··· 417 372 dma_err: 418 373 dev_err(tx_ring->dev, "DMA map error"); 419 374 420 - do { 421 - tx_swbd = &tx_ring->tx_swbd[i]; 422 - enetc_free_tx_frame(tx_ring, tx_swbd); 423 - if (i == 0) 424 - i = tx_ring->bd_count; 425 - i--; 426 - } while (count--); 375 + enetc_unwind_tx_frame(tx_ring, count, i); 427 376 428 377 return 0; 429 378 } 430 379 431 - static void enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb, 432 - struct enetc_tx_swbd *tx_swbd, 433 - union enetc_tx_bd *txbd, int *i, int hdr_len, 434 - int data_len) 380 + static int enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb, 381 + struct enetc_tx_swbd *tx_swbd, 382 + union enetc_tx_bd *txbd, int *i, int hdr_len, 383 + int data_len) 435 384 { 436 385 union enetc_tx_bd txbd_tmp; 437 386 u8 flags = 0, e_flags = 0; 438 387 dma_addr_t addr; 388 + 
int count = 1; 439 389 440 390 enetc_clear_tx_bd(&txbd_tmp); 441 391 addr = tx_ring->tso_headers_dma + *i * TSO_HEADER_SIZE; ··· 473 433 /* Write the BD */ 474 434 txbd_tmp.ext.e_flags = e_flags; 475 435 *txbd = txbd_tmp; 436 + count++; 476 437 } 438 + 439 + return count; 477 440 } 478 441 479 442 static int enetc_map_tx_tso_data(struct enetc_bdr *tx_ring, struct sk_buff *skb, ··· 833 790 834 791 /* compute the csum over the L4 header */ 835 792 csum = enetc_tso_hdr_csum(&tso, skb, hdr, hdr_len, &pos); 836 - enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd, &i, hdr_len, data_len); 793 + count += enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd, 794 + &i, hdr_len, data_len); 837 795 bd_data_num = 0; 838 - count++; 839 796 840 797 while (data_len > 0) { 841 798 int size; ··· 859 816 err = enetc_map_tx_tso_data(tx_ring, skb, tx_swbd, txbd, 860 817 tso.data, size, 861 818 size == data_len); 862 - if (err) 819 + if (err) { 820 + if (i == 0) 821 + i = tx_ring->bd_count; 822 + i--; 823 + 863 824 goto err_map_data; 825 + } 864 826 865 827 data_len -= size; 866 828 count++; ··· 894 846 dev_err(tx_ring->dev, "DMA map error"); 895 847 896 848 err_chained_bd: 897 - do { 898 - tx_swbd = &tx_ring->tx_swbd[i]; 899 - enetc_free_tx_frame(tx_ring, tx_swbd); 900 - if (i == 0) 901 - i = tx_ring->bd_count; 902 - i--; 903 - } while (count--); 849 + enetc_unwind_tx_frame(tx_ring, count, i); 904 850 905 851 return 0; 906 852 } ··· 1943 1901 enetc_xdp_drop(rx_ring, orig_i, i); 1944 1902 tx_ring->stats.xdp_tx_drops++; 1945 1903 } else { 1946 - tx_ring->stats.xdp_tx += xdp_tx_bd_cnt; 1904 + tx_ring->stats.xdp_tx++; 1947 1905 rx_ring->xdp.xdp_tx_in_flight += xdp_tx_bd_cnt; 1948 1906 xdp_tx_frm_cnt++; 1949 1907 /* The XDP_TX enqueue was successful, so we ··· 3270 3228 new_offloads |= ENETC_F_TX_TSTAMP; 3271 3229 break; 3272 3230 case HWTSTAMP_TX_ONESTEP_SYNC: 3231 + if (!enetc_si_is_pf(priv->si)) 3232 + return -EOPNOTSUPP; 3233 + 3273 3234 new_offloads &= ~ENETC_F_TX_TSTAMP_MASK; 3274 3235 
new_offloads |= ENETC_F_TX_ONESTEP_SYNC_TSTAMP; 3275 3236 break;
+1 -1
drivers/net/ethernet/freescale/enetc/enetc4_pf.c
··· 672 672 err_alloc_msix: 673 673 err_config_si: 674 674 err_clk_get: 675 - mutex_destroy(&priv->mm_lock); 676 675 free_netdev(ndev); 677 676 678 677 return err; ··· 683 684 struct net_device *ndev = si->ndev; 684 685 685 686 unregister_netdev(ndev); 687 + enetc4_link_deinit(priv); 686 688 enetc_free_msix(priv); 687 689 free_netdev(ndev); 688 690 }
+5 -2
drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
··· 832 832 static int enetc_get_ts_info(struct net_device *ndev, 833 833 struct kernel_ethtool_ts_info *info) 834 834 { 835 + struct enetc_ndev_priv *priv = netdev_priv(ndev); 835 836 int *phc_idx; 836 837 837 838 phc_idx = symbol_get(enetc_phc_index); ··· 853 852 SOF_TIMESTAMPING_TX_SOFTWARE; 854 853 855 854 info->tx_types = (1 << HWTSTAMP_TX_OFF) | 856 - (1 << HWTSTAMP_TX_ON) | 857 - (1 << HWTSTAMP_TX_ONESTEP_SYNC); 855 + (1 << HWTSTAMP_TX_ON); 856 + 857 + if (enetc_si_is_pf(priv->si)) 858 + info->tx_types |= (1 << HWTSTAMP_TX_ONESTEP_SYNC); 858 859 859 860 info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) | 860 861 (1 << HWTSTAMP_FILTER_ALL);
+2
drivers/net/ethernet/google/gve/gve_rx_dqo.c
··· 109 109 void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx) 110 110 { 111 111 int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); 112 + struct gve_rx_ring *rx = &priv->rx[idx]; 112 113 113 114 if (!gve_rx_was_added_to_block(priv, idx)) 114 115 return; 115 116 117 + page_pool_disable_direct_recycling(rx->dqo.page_pool); 116 118 gve_remove_napi(priv, ntfy_idx); 117 119 gve_rx_remove_from_block(priv, idx); 118 120 gve_rx_reset_ring_dqo(priv, idx);
+8 -4
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 2029 2029 static void iavf_finish_config(struct work_struct *work) 2030 2030 { 2031 2031 struct iavf_adapter *adapter; 2032 - bool netdev_released = false; 2032 + bool locks_released = false; 2033 2033 int pairs, err; 2034 2034 2035 2035 adapter = container_of(work, struct iavf_adapter, finish_config); ··· 2058 2058 netif_set_real_num_tx_queues(adapter->netdev, pairs); 2059 2059 2060 2060 if (adapter->netdev->reg_state != NETREG_REGISTERED) { 2061 + mutex_unlock(&adapter->crit_lock); 2061 2062 netdev_unlock(adapter->netdev); 2062 - netdev_released = true; 2063 + locks_released = true; 2063 2064 err = register_netdevice(adapter->netdev); 2064 2065 if (err) { 2065 2066 dev_err(&adapter->pdev->dev, "Unable to register netdev (%d)\n", 2066 2067 err); 2067 2068 2068 2069 /* go back and try again.*/ 2070 + mutex_lock(&adapter->crit_lock); 2069 2071 iavf_free_rss(adapter); 2070 2072 iavf_free_misc_irq(adapter); 2071 2073 iavf_reset_interrupt_capability(adapter); 2072 2074 iavf_change_state(adapter, 2073 2075 __IAVF_INIT_CONFIG_ADAPTER); 2076 + mutex_unlock(&adapter->crit_lock); 2074 2077 goto out; 2075 2078 } 2076 2079 } ··· 2089 2086 } 2090 2087 2091 2088 out: 2092 - mutex_unlock(&adapter->crit_lock); 2093 - if (!netdev_released) 2089 + if (!locks_released) { 2090 + mutex_unlock(&adapter->crit_lock); 2094 2091 netdev_unlock(adapter->netdev); 2092 + } 2095 2093 rtnl_unlock(); 2096 2094 } 2097 2095
+1 -2
drivers/net/ethernet/intel/ice/ice_eswitch.c
··· 38 38 if (ice_vsi_add_vlan_zero(uplink_vsi)) 39 39 goto err_vlan_zero; 40 40 41 - if (ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, true, 42 - ICE_FLTR_RX)) 41 + if (ice_set_dflt_vsi(uplink_vsi)) 43 42 goto err_def_rx; 44 43 45 44 if (ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, true,
+1 -4
drivers/net/ethernet/intel/ice/ice_sriov.c
··· 36 36 37 37 hash_for_each_safe(vfs->table, bkt, tmp, vf, entry) { 38 38 hash_del_rcu(&vf->entry); 39 + ice_deinitialize_vf_entry(vf); 39 40 ice_put_vf(vf); 40 41 } 41 42 } ··· 173 172 bit_idx = (hw->func_caps.vf_base_id + vf->vf_id) % 32; 174 173 wr32(hw, GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx)); 175 174 } 176 - 177 - /* clear malicious info since the VF is getting released */ 178 - if (!ice_is_feature_supported(pf, ICE_F_MBX_LIMIT)) 179 - list_del(&vf->mbx_info.list_entry); 180 175 181 176 mutex_unlock(&vf->cfg_lock); 182 177 }
+8
drivers/net/ethernet/intel/ice/ice_vf_lib.c
··· 1036 1036 mutex_init(&vf->cfg_lock); 1037 1037 } 1038 1038 1039 + void ice_deinitialize_vf_entry(struct ice_vf *vf) 1040 + { 1041 + struct ice_pf *pf = vf->pf; 1042 + 1043 + if (!ice_is_feature_supported(pf, ICE_F_MBX_LIMIT)) 1044 + list_del(&vf->mbx_info.list_entry); 1045 + } 1046 + 1039 1047 /** 1040 1048 * ice_dis_vf_qs - Disable the VF queues 1041 1049 * @vf: pointer to the VF structure
+1
drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
··· 24 24 #endif 25 25 26 26 void ice_initialize_vf_entry(struct ice_vf *vf); 27 + void ice_deinitialize_vf_entry(struct ice_vf *vf); 27 28 void ice_dis_vf_qs(struct ice_vf *vf); 28 29 int ice_check_vf_init(struct ice_vf *vf); 29 30 enum virtchnl_status_code ice_err_to_virt_err(int err);
+2 -1
drivers/net/ethernet/intel/idpf/idpf_txrx.c
··· 3013 3013 skb_shinfo(skb)->gso_size = rsc_seg_len; 3014 3014 3015 3015 skb_reset_network_header(skb); 3016 - len = skb->len - skb_transport_offset(skb); 3017 3016 3018 3017 if (ipv4) { 3019 3018 struct iphdr *ipv4h = ip_hdr(skb); ··· 3021 3022 3022 3023 /* Reset and set transport header offset in skb */ 3023 3024 skb_set_transport_header(skb, sizeof(struct iphdr)); 3025 + len = skb->len - skb_transport_offset(skb); 3024 3026 3025 3027 /* Compute the TCP pseudo header checksum*/ 3026 3028 tcp_hdr(skb)->check = ··· 3031 3031 3032 3032 skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; 3033 3033 skb_set_transport_header(skb, sizeof(struct ipv6hdr)); 3034 + len = skb->len - skb_transport_offset(skb); 3034 3035 tcp_hdr(skb)->check = 3035 3036 ~tcp_v6_check(len, &ipv6h->saddr, &ipv6h->daddr, 0); 3036 3037 }
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c
··· 1122 1122 * returns error (ENOENT), then no cage present. If no cage present then 1123 1123 * connection type is backplane or BASE-T. 1124 1124 */ 1125 - return ixgbe_aci_get_netlist_node(hw, cmd, NULL, NULL); 1125 + return !ixgbe_aci_get_netlist_node(hw, cmd, NULL, NULL); 1126 1126 } 1127 1127 1128 1128 /**
+1 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
··· 324 324 MVPP2_PRS_RI_VLAN_MASK), 325 325 /* Non IP flow, with vlan tag */ 326 326 MVPP2_DEF_FLOW(MVPP22_FLOW_ETHERNET, MVPP2_FL_NON_IP_TAG, 327 - MVPP22_CLS_HEK_OPT_VLAN, 327 + MVPP22_CLS_HEK_TAGGED, 328 328 0, 0), 329 329 }; 330 330
+7 -1
drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
··· 564 564 return err; 565 565 566 566 esw_qos_normalize_min_rate(parent->esw, parent, extack); 567 + trace_mlx5_esw_vport_qos_create(vport->dev, vport, 568 + vport->qos.sched_node->max_rate, 569 + vport->qos.sched_node->bw_share); 567 570 568 571 return 0; 569 572 } ··· 594 591 sched_node->vport = vport; 595 592 vport->qos.sched_node = sched_node; 596 593 err = esw_qos_vport_enable(vport, parent, extack); 597 - if (err) 594 + if (err) { 595 + __esw_qos_free_node(sched_node); 598 596 esw_qos_put(esw); 597 + vport->qos.sched_node = NULL; 598 + } 599 599 600 600 return err; 601 601 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
··· 572 572 pool->min_threshold = min_threshold * MLX5_EQ_REFS_PER_IRQ; 573 573 pool->max_threshold = max_threshold * MLX5_EQ_REFS_PER_IRQ; 574 574 mlx5_core_dbg(dev, "pool->name = %s, pool->size = %d, pool->start = %d", 575 - name, size, start); 575 + name ? name : "mlx5_pcif_pool", size, start); 576 576 return pool; 577 577 } 578 578
+14
drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
··· 515 515 return 0; 516 516 } 517 517 518 + /* Loongson's DWMAC device may take nearly two seconds to complete DMA reset */ 519 + static int loongson_dwmac_fix_reset(void *priv, void __iomem *ioaddr) 520 + { 521 + u32 value = readl(ioaddr + DMA_BUS_MODE); 522 + 523 + value |= DMA_BUS_MODE_SFT_RESET; 524 + writel(value, ioaddr + DMA_BUS_MODE); 525 + 526 + return readl_poll_timeout(ioaddr + DMA_BUS_MODE, value, 527 + !(value & DMA_BUS_MODE_SFT_RESET), 528 + 10000, 2000000); 529 + } 530 + 518 531 static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id *id) 519 532 { 520 533 struct plat_stmmacenet_data *plat; ··· 578 565 579 566 plat->bsp_priv = ld; 580 567 plat->setup = loongson_dwmac_setup; 568 + plat->fix_soc_reset = loongson_dwmac_fix_reset; 581 569 ld->dev = &pdev->dev; 582 570 ld->loongson_id = readl(res.addr + GMAC_VERSION) & 0xff; 583 571
+1
drivers/net/ethernet/ti/Kconfig
··· 99 99 select NET_DEVLINK 100 100 select TI_DAVINCI_MDIO 101 101 select PHYLINK 102 + select PAGE_POOL 102 103 select TI_K3_CPPI_DESC_POOL 103 104 imply PHY_TI_GMII_SEL 104 105 depends on TI_K3_AM65_CPTS || !TI_K3_AM65_CPTS
+1 -20
drivers/net/ethernet/ti/icssg/icss_iep.c
··· 474 474 static int icss_iep_perout_enable(struct icss_iep *iep, 475 475 struct ptp_perout_request *req, int on) 476 476 { 477 - int ret = 0; 478 - 479 - mutex_lock(&iep->ptp_clk_mutex); 480 - 481 - if (iep->pps_enabled) { 482 - ret = -EBUSY; 483 - goto exit; 484 - } 485 - 486 - if (iep->perout_enabled == !!on) 487 - goto exit; 488 - 489 - ret = icss_iep_perout_enable_hw(iep, req, on); 490 - if (!ret) 491 - iep->perout_enabled = !!on; 492 - 493 - exit: 494 - mutex_unlock(&iep->ptp_clk_mutex); 495 - 496 - return ret; 477 + return -EOPNOTSUPP; 497 478 } 498 479 499 480 static void icss_iep_cap_cmp_work(struct work_struct *work)
+16 -5
drivers/net/ipvlan/ipvlan_core.c
··· 416 416 417 417 static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb) 418 418 { 419 - const struct iphdr *ip4h = ip_hdr(skb); 420 419 struct net_device *dev = skb->dev; 421 420 struct net *net = dev_net(dev); 422 - struct rtable *rt; 423 421 int err, ret = NET_XMIT_DROP; 422 + const struct iphdr *ip4h; 423 + struct rtable *rt; 424 424 struct flowi4 fl4 = { 425 425 .flowi4_oif = dev->ifindex, 426 - .flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h)), 427 426 .flowi4_flags = FLOWI_FLAG_ANYSRC, 428 427 .flowi4_mark = skb->mark, 429 - .daddr = ip4h->daddr, 430 - .saddr = ip4h->saddr, 431 428 }; 429 + 430 + if (!pskb_network_may_pull(skb, sizeof(struct iphdr))) 431 + goto err; 432 + 433 + ip4h = ip_hdr(skb); 434 + fl4.daddr = ip4h->daddr; 435 + fl4.saddr = ip4h->saddr; 436 + fl4.flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h)); 432 437 433 438 rt = ip_route_output_flow(net, &fl4, NULL); 434 439 if (IS_ERR(rt)) ··· 492 487 { 493 488 struct net_device *dev = skb->dev; 494 489 int err, ret = NET_XMIT_DROP; 490 + 491 + if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr))) { 492 + DEV_STATS_INC(dev, tx_errors); 493 + kfree_skb(skb); 494 + return ret; 495 + } 495 496 496 497 err = ipvlan_route_v6_outbound(dev, skb); 497 498 if (unlikely(err)) {
+14
drivers/net/loopback.c
··· 244 244 return NETDEV_TX_OK; 245 245 } 246 246 247 + static int blackhole_neigh_output(struct neighbour *n, struct sk_buff *skb) 248 + { 249 + kfree_skb(skb); 250 + return 0; 251 + } 252 + 253 + static int blackhole_neigh_construct(struct net_device *dev, 254 + struct neighbour *n) 255 + { 256 + n->output = blackhole_neigh_output; 257 + return 0; 258 + } 259 + 247 260 static const struct net_device_ops blackhole_netdev_ops = { 248 261 .ndo_start_xmit = blackhole_netdev_xmit, 262 + .ndo_neigh_construct = blackhole_neigh_construct, 249 263 }; 250 264 251 265 /* This is a dst-dummy device used specifically for invalidated
+2
drivers/net/netdevsim/ethtool.c
··· 184 184 185 185 static void nsim_ethtool_ring_init(struct netdevsim *ns) 186 186 { 187 + ns->ethtool.ring.rx_pending = 512; 187 188 ns->ethtool.ring.rx_max_pending = 4096; 188 189 ns->ethtool.ring.rx_jumbo_max_pending = 4096; 189 190 ns->ethtool.ring.rx_mini_max_pending = 4096; 191 + ns->ethtool.ring.tx_pending = 512; 190 192 ns->ethtool.ring.tx_max_pending = 4096; 191 193 } 192 194
+1 -1
drivers/net/phy/qcom/qca807x.c
··· 774 774 control_dac &= ~QCA807X_CONTROL_DAC_MASK; 775 775 if (!priv->dac_full_amplitude) 776 776 control_dac |= QCA807X_CONTROL_DAC_DSP_AMPLITUDE; 777 - if (!priv->dac_full_amplitude) 777 + if (!priv->dac_full_bias_current) 778 778 control_dac |= QCA807X_CONTROL_DAC_DSP_BIAS_CURRENT; 779 779 if (!priv->dac_disable_bias_current_tweak) 780 780 control_dac |= QCA807X_CONTROL_DAC_BIAS_CURRENT_TWEAK;
+1 -3
drivers/net/usb/gl620a.c
··· 179 179 { 180 180 dev->hard_mtu = GL_RCV_BUF_SIZE; 181 181 dev->net->hard_header_len += 4; 182 - dev->in = usb_rcvbulkpipe(dev->udev, dev->driver_info->in); 183 - dev->out = usb_sndbulkpipe(dev->udev, dev->driver_info->out); 184 - return 0; 182 + return usbnet_get_endpoints(dev, intf); 185 183 } 186 184 187 185 static const struct driver_info genelink_info = {
+38 -17
drivers/nvme/host/apple.c
··· 1011 1011 ret = apple_rtkit_shutdown(anv->rtk); 1012 1012 if (ret) 1013 1013 goto out; 1014 + 1015 + writel(0, anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1014 1016 } 1015 1017 1016 - writel(0, anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1018 + /* 1019 + * Only do the soft-reset if the CPU is not running, which means either we 1020 + * or the previous stage shut it down cleanly. 1021 + */ 1022 + if (!(readl(anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL) & 1023 + APPLE_ANS_COPROC_CPU_CONTROL_RUN)) { 1017 1024 1018 - ret = reset_control_assert(anv->reset); 1019 - if (ret) 1020 - goto out; 1025 + ret = reset_control_assert(anv->reset); 1026 + if (ret) 1027 + goto out; 1021 1028 1022 - ret = apple_rtkit_reinit(anv->rtk); 1023 - if (ret) 1024 - goto out; 1029 + ret = apple_rtkit_reinit(anv->rtk); 1030 + if (ret) 1031 + goto out; 1025 1032 1026 - ret = reset_control_deassert(anv->reset); 1027 - if (ret) 1028 - goto out; 1033 + ret = reset_control_deassert(anv->reset); 1034 + if (ret) 1035 + goto out; 1029 1036 1030 - writel(APPLE_ANS_COPROC_CPU_CONTROL_RUN, 1031 - anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1032 - ret = apple_rtkit_boot(anv->rtk); 1037 + writel(APPLE_ANS_COPROC_CPU_CONTROL_RUN, 1038 + anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1039 + 1040 + ret = apple_rtkit_boot(anv->rtk); 1041 + } else { 1042 + ret = apple_rtkit_wake(anv->rtk); 1043 + } 1044 + 1033 1045 if (ret) { 1034 1046 dev_err(anv->dev, "ANS did not boot"); 1035 1047 goto out; ··· 1528 1516 1529 1517 return anv; 1530 1518 put_dev: 1519 + apple_nvme_detach_genpd(anv); 1531 1520 put_device(anv->dev); 1532 1521 return ERR_PTR(ret); 1533 1522 } ··· 1562 1549 nvme_uninit_ctrl(&anv->ctrl); 1563 1550 out_put_ctrl: 1564 1551 nvme_put_ctrl(&anv->ctrl); 1552 + apple_nvme_detach_genpd(anv); 1565 1553 return ret; 1566 1554 } ··· 1577 1563 apple_nvme_disable(anv, true); 1578 1564 nvme_uninit_ctrl(&anv->ctrl); 1579 1565 1580 - if (apple_rtkit_is_running(anv->rtk)) 1566 -
if (apple_rtkit_is_running(anv->rtk)) { 1581 1567 apple_rtkit_shutdown(anv->rtk); 1568 + 1569 + writel(0, anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1570 + } 1582 1571 1583 1572 apple_nvme_detach_genpd(anv); 1584 1573 } ··· 1591 1574 struct apple_nvme *anv = platform_get_drvdata(pdev); 1592 1575 1593 1576 apple_nvme_disable(anv, true); 1594 - if (apple_rtkit_is_running(anv->rtk)) 1577 + if (apple_rtkit_is_running(anv->rtk)) { 1595 1578 apple_rtkit_shutdown(anv->rtk); 1579 + 1580 + writel(0, anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1581 + } 1596 1582 } 1597 1583 1598 1584 static int apple_nvme_resume(struct device *dev) ··· 1612 1592 1613 1593 apple_nvme_disable(anv, true); 1614 1594 1615 - if (apple_rtkit_is_running(anv->rtk)) 1595 + if (apple_rtkit_is_running(anv->rtk)) { 1616 1596 ret = apple_rtkit_shutdown(anv->rtk); 1617 1597 1618 - writel(0, anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1598 + writel(0, anv->mmio_coproc + APPLE_ANS_COPROC_CPU_CONTROL); 1599 + } 1619 1600 1620 1601 return ret; 1621 1602 }
-2
drivers/nvme/host/core.c
··· 564 564 switch (new_state) { 565 565 case NVME_CTRL_LIVE: 566 566 switch (old_state) { 567 - case NVME_CTRL_NEW: 568 - case NVME_CTRL_RESETTING: 569 567 case NVME_CTRL_CONNECTING: 570 568 changed = true; 571 569 fallthrough;
+8 -63
drivers/nvme/host/fc.c
··· 781 781 static void 782 782 nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl) 783 783 { 784 - enum nvme_ctrl_state state; 785 - unsigned long flags; 786 - 787 784 dev_info(ctrl->ctrl.device, 788 785 "NVME-FC{%d}: controller connectivity lost. Awaiting " 789 786 "Reconnect", ctrl->cnum); 790 787 791 - spin_lock_irqsave(&ctrl->lock, flags); 792 788 set_bit(ASSOC_FAILED, &ctrl->flags); 793 - state = nvme_ctrl_state(&ctrl->ctrl); 794 - spin_unlock_irqrestore(&ctrl->lock, flags); 795 - 796 - switch (state) { 797 - case NVME_CTRL_NEW: 798 - case NVME_CTRL_LIVE: 799 - /* 800 - * Schedule a controller reset. The reset will terminate the 801 - * association and schedule the reconnect timer. Reconnects 802 - * will be attempted until either the ctlr_loss_tmo 803 - * (max_retries * connect_delay) expires or the remoteport's 804 - * dev_loss_tmo expires. 805 - */ 806 - if (nvme_reset_ctrl(&ctrl->ctrl)) { 807 - dev_warn(ctrl->ctrl.device, 808 - "NVME-FC{%d}: Couldn't schedule reset.\n", 809 - ctrl->cnum); 810 - nvme_delete_ctrl(&ctrl->ctrl); 811 - } 812 - break; 813 - 814 - case NVME_CTRL_CONNECTING: 815 - /* 816 - * The association has already been terminated and the 817 - * controller is attempting reconnects. No need to do anything 818 - * futher. Reconnects will be attempted until either the 819 - * ctlr_loss_tmo (max_retries * connect_delay) expires or the 820 - * remoteport's dev_loss_tmo expires. 821 - */ 822 - break; 823 - 824 - case NVME_CTRL_RESETTING: 825 - /* 826 - * Controller is already in the process of terminating the 827 - * association. No need to do anything further. The reconnect 828 - * step will kick in naturally after the association is 829 - * terminated.
830 - */ 831 - break; 832 - 833 - case NVME_CTRL_DELETING: 834 - case NVME_CTRL_DELETING_NOIO: 835 - default: 836 - /* no action to take - let it delete */ 837 - break; 838 - } 789 + nvme_reset_ctrl(&ctrl->ctrl); 839 790 } 840 791 841 792 /** ··· 3022 3071 struct nvmefc_ls_rcv_op *disls = NULL; 3023 3072 unsigned long flags; 3024 3073 int ret; 3025 - bool changed; 3026 3074 3027 3075 ++ctrl->ctrl.nr_reconnects; 3028 3076 ··· 3127 3177 else 3128 3178 ret = nvme_fc_recreate_io_queues(ctrl); 3129 3179 } 3130 - if (ret) 3131 - goto out_term_aen_ops; 3132 - 3133 - spin_lock_irqsave(&ctrl->lock, flags); 3134 - if (!test_bit(ASSOC_FAILED, &ctrl->flags)) 3135 - changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE); 3136 - else 3180 + if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags)) 3137 3181 ret = -EIO; 3138 - spin_unlock_irqrestore(&ctrl->lock, flags); 3139 - 3140 3182 if (ret) 3141 3183 goto out_term_aen_ops; 3184 + 3185 + if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE)) { 3186 + ret = -EIO; 3187 + goto out_term_aen_ops; 3188 + } 3142 3189 3143 3190 ctrl->ctrl.nr_reconnects = 0; 3144 - 3145 - if (changed) 3146 - nvme_start_ctrl(&ctrl->ctrl); 3191 + nvme_start_ctrl(&ctrl->ctrl); 3147 3192 3148 3193 return 0; /* Success */ 3149 3194
+1 -2
drivers/nvme/host/ioctl.c
··· 283 283 { 284 284 if (ns && nsid != ns->head->ns_id) { 285 285 dev_err(ctrl->device, 286 - "%s: nsid (%u) in cmd does not match nsid (%u)" 287 - "of namespace\n", 286 + "%s: nsid (%u) in cmd does not match nsid (%u) of namespace\n", 288 287 current->comm, nsid, ns->head->ns_id); 289 288 return false; 290 289 }
+2
drivers/nvme/host/pci.c
··· 3706 3706 .driver_data = NVME_QUIRK_BOGUS_NID, }, 3707 3707 { PCI_DEVICE(0x1cc1, 0x5350), /* ADATA XPG GAMMIX S50 */ 3708 3708 .driver_data = NVME_QUIRK_BOGUS_NID, }, 3709 + { PCI_DEVICE(0x1dbe, 0x5216), /* Acer/INNOGRIT FA100/5216 NVMe SSD */ 3710 + .driver_data = NVME_QUIRK_BOGUS_NID, }, 3709 3711 { PCI_DEVICE(0x1dbe, 0x5236), /* ADATA XPG GAMMIX S70 */ 3710 3712 .driver_data = NVME_QUIRK_BOGUS_NID, }, 3711 3713 { PCI_DEVICE(0x1e49, 0x0021), /* ZHITAI TiPro5000 NVMe SSD */
+48 -2
drivers/nvme/host/tcp.c
··· 763 763 return 0; 764 764 } 765 765 766 + static void nvme_tcp_handle_c2h_term(struct nvme_tcp_queue *queue, 767 + struct nvme_tcp_term_pdu *pdu) 768 + { 769 + u16 fes; 770 + const char *msg; 771 + u32 plen = le32_to_cpu(pdu->hdr.plen); 772 + 773 + static const char * const msg_table[] = { 774 + [NVME_TCP_FES_INVALID_PDU_HDR] = "Invalid PDU Header Field", 775 + [NVME_TCP_FES_PDU_SEQ_ERR] = "PDU Sequence Error", 776 + [NVME_TCP_FES_HDR_DIGEST_ERR] = "Header Digest Error", 777 + [NVME_TCP_FES_DATA_OUT_OF_RANGE] = "Data Transfer Out Of Range", 778 + [NVME_TCP_FES_R2T_LIMIT_EXCEEDED] = "R2T Limit Exceeded", 779 + [NVME_TCP_FES_UNSUPPORTED_PARAM] = "Unsupported Parameter", 780 + }; 781 + 782 + if (plen < NVME_TCP_MIN_C2HTERM_PLEN || 783 + plen > NVME_TCP_MAX_C2HTERM_PLEN) { 784 + dev_err(queue->ctrl->ctrl.device, 785 + "Received a malformed C2HTermReq PDU (plen = %u)\n", 786 + plen); 787 + return; 788 + } 789 + 790 + fes = le16_to_cpu(pdu->fes); 791 + if (fes && fes < ARRAY_SIZE(msg_table)) 792 + msg = msg_table[fes]; 793 + else 794 + msg = "Unknown"; 795 + 796 + dev_err(queue->ctrl->ctrl.device, 797 + "Received C2HTermReq (FES = %s)\n", msg); 798 + } 799 + 766 800 static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb, 767 801 unsigned int *offset, size_t *len) 768 802 { ··· 818 784 return 0; 819 785 820 786 hdr = queue->pdu; 787 + if (unlikely(hdr->type == nvme_tcp_c2h_term)) { 788 + /* 789 + * C2HTermReq never includes Header or Data digests. 790 + * Skip the checks.
791 + */ 792 + nvme_tcp_handle_c2h_term(queue, (void *)queue->pdu); 793 + return -EINVAL; 794 + } 795 + 821 796 if (queue->hdr_digest) { 822 797 ret = nvme_tcp_verify_hdgst(queue, queue->pdu, hdr->hlen); 823 798 if (unlikely(ret)) ··· 1492 1449 msg.msg_control = cbuf; 1493 1450 msg.msg_controllen = sizeof(cbuf); 1494 1451 } 1452 + msg.msg_flags = MSG_WAITALL; 1495 1453 ret = kernel_recvmsg(queue->sock, &msg, &iov, 1, 1496 1454 iov.iov_len, msg.msg_flags); 1497 - if (ret < 0) { 1455 + if (ret < sizeof(*icresp)) { 1498 1456 pr_warn("queue %d: failed to receive icresp, error %d\n", 1499 1457 nvme_tcp_queue_id(queue), ret); 1458 + if (ret >= 0) 1459 + ret = -ECONNRESET; 1500 1460 goto free_icresp; 1501 1461 } 1502 1462 ret = -ENOTCONN; ··· 1611 1565 ctrl->io_queues[HCTX_TYPE_POLL]; 1612 1566 } 1613 1567 1614 - /** 1568 + /* 1615 1569 * Track the number of queues assigned to each cpu using a global per-cpu 1616 1570 * counter and select the least used cpu from the mq_map. Our goal is to spread 1617 1571 * different controllers I/O threads across different cpu cores.
+19 -21
drivers/nvme/target/core.c
··· 606 606 goto out_dev_put; 607 607 } 608 608 609 + if (percpu_ref_init(&ns->ref, nvmet_destroy_namespace, 0, GFP_KERNEL)) 610 + goto out_pr_exit; 611 + 609 612 nvmet_ns_changed(subsys, ns->nsid); 610 613 ns->enabled = true; 611 614 xa_set_mark(&subsys->namespaces, ns->nsid, NVMET_NS_ENABLED); ··· 616 613 out_unlock: 617 614 mutex_unlock(&subsys->lock); 618 615 return ret; 616 + out_pr_exit: 617 + if (ns->pr.enable) 618 + nvmet_pr_exit_ns(ns); 619 619 out_dev_put: 620 620 list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) 621 621 pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid)); ··· 644 638 645 639 mutex_unlock(&subsys->lock); 646 640 641 + /* 642 + * Now that we removed the namespaces from the lookup list, we 643 + * can kill the per_cpu ref and wait for any remaining references 644 + * to be dropped, as well as a RCU grace period for anyone only 645 + * using the namepace under rcu_read_lock(). Note that we can't 646 + * use call_rcu here as we need to ensure the namespaces have 647 + * been fully destroyed before unloading the module. 648 + */ 649 + percpu_ref_kill(&ns->ref); 650 + synchronize_rcu(); 651 + wait_for_completion(&ns->disable_done); 652 + percpu_ref_exit(&ns->ref); 653 + 647 654 if (ns->pr.enable) 648 655 nvmet_pr_exit_ns(ns); 649 656 ··· 679 660 if (ns->nsid == subsys->max_nsid) 680 661 subsys->max_nsid = nvmet_max_nsid(subsys); 681 662 682 - mutex_unlock(&subsys->lock); 683 - 684 - /* 685 - * Now that we removed the namespaces from the lookup list, we 686 - * can kill the per_cpu ref and wait for any remaining references 687 - * to be dropped, as well as a RCU grace period for anyone only 688 - * using the namepace under rcu_read_lock(). Note that we can't 689 - * use call_rcu here as we need to ensure the namespaces have 690 - * been fully destroyed before unloading the module.
691 - */ 692 - percpu_ref_kill(&ns->ref); 693 - synchronize_rcu(); 694 - wait_for_completion(&ns->disable_done); 695 - percpu_ref_exit(&ns->ref); 696 - 697 - mutex_lock(&subsys->lock); 698 663 subsys->nr_namespaces--; 699 664 mutex_unlock(&subsys->lock); 700 665 ··· 708 705 ns->nsid = nsid; 709 706 ns->subsys = subsys; 710 707 711 - if (percpu_ref_init(&ns->ref, nvmet_destroy_namespace, 0, GFP_KERNEL)) 712 - goto out_free; 713 - 714 708 if (ns->nsid > subsys->max_nsid) 715 709 subsys->max_nsid = nsid; 716 710 ··· 730 730 return ns; 731 731 out_exit: 732 732 subsys->max_nsid = nvmet_max_nsid(subsys); 733 - percpu_ref_exit(&ns->ref); 734 - out_free: 735 733 kfree(ns); 736 734 out_unlock: 737 735 mutex_unlock(&subsys->lock);
+7 -7
drivers/nvme/target/nvmet.h
··· 784 784 785 785 static inline bool nvmet_cc_en(u32 cc) 786 786 { 787 - return (cc >> NVME_CC_EN_SHIFT) & 0x1; 787 + return (cc & NVME_CC_ENABLE) >> NVME_CC_EN_SHIFT; 788 788 } 789 789 790 790 static inline u8 nvmet_cc_css(u32 cc) 791 791 { 792 - return (cc >> NVME_CC_CSS_SHIFT) & 0x7; 792 + return (cc & NVME_CC_CSS_MASK) >> NVME_CC_CSS_SHIFT; 793 793 } 794 794 795 795 static inline u8 nvmet_cc_mps(u32 cc) 796 796 { 797 - return (cc >> NVME_CC_MPS_SHIFT) & 0xf; 797 + return (cc & NVME_CC_MPS_MASK) >> NVME_CC_MPS_SHIFT; 798 798 } 799 799 800 800 static inline u8 nvmet_cc_ams(u32 cc) 801 801 { 802 - return (cc >> NVME_CC_AMS_SHIFT) & 0x7; 802 + return (cc & NVME_CC_AMS_MASK) >> NVME_CC_AMS_SHIFT; 803 803 } 804 804 805 805 static inline u8 nvmet_cc_shn(u32 cc) 806 806 { 807 - return (cc >> NVME_CC_SHN_SHIFT) & 0x3; 807 + return (cc & NVME_CC_SHN_MASK) >> NVME_CC_SHN_SHIFT; 808 808 } 809 809 810 810 static inline u8 nvmet_cc_iosqes(u32 cc) 811 811 { 812 - return (cc >> NVME_CC_IOSQES_SHIFT) & 0xf; 812 + return (cc & NVME_CC_IOSQES_MASK) >> NVME_CC_IOSQES_SHIFT; 813 813 } 814 814 815 815 static inline u8 nvmet_cc_iocqes(u32 cc) 816 816 { 817 - return (cc >> NVME_CC_IOCQES_SHIFT) & 0xf; 817 + return (cc & NVME_CC_IOCQES_MASK) >> NVME_CC_IOCQES_SHIFT; 818 818 } 819 819 820 820 /* Convert a 32-bit number to a 16-bit 0's based number */
+29 -10
drivers/nvme/target/pci-epf.c
··· 46 46 /* 47 47 * BAR CC register and SQ polling intervals. 48 48 */ 49 - #define NVMET_PCI_EPF_CC_POLL_INTERVAL msecs_to_jiffies(5) 49 + #define NVMET_PCI_EPF_CC_POLL_INTERVAL msecs_to_jiffies(10) 50 50 #define NVMET_PCI_EPF_SQ_POLL_INTERVAL msecs_to_jiffies(5) 51 51 #define NVMET_PCI_EPF_SQ_POLL_IDLE msecs_to_jiffies(5000) 52 52 ··· 1694 1694 struct nvmet_pci_epf_ctrl *ctrl = 1695 1695 container_of(work, struct nvmet_pci_epf_ctrl, poll_sqs.work); 1696 1696 struct nvmet_pci_epf_queue *sq; 1697 + unsigned long limit = jiffies; 1697 1698 unsigned long last = 0; 1698 1699 int i, nr_sqs; 1699 1700 ··· 1707 1706 continue; 1708 1707 if (nvmet_pci_epf_process_sq(ctrl, sq)) 1709 1708 nr_sqs++; 1709 + } 1710 + 1711 + /* 1712 + * If we have been running for a while, reschedule to let other 1713 + * tasks run and to avoid RCU stalls. 1714 + */ 1715 + if (time_is_before_jiffies(limit + secs_to_jiffies(1))) { 1716 + cond_resched(); 1717 + limit = jiffies; 1718 + continue; 1710 1719 } 1711 1720 1712 1721 if (nr_sqs) { ··· 1833 1822 if (ctrl->io_sqes < sizeof(struct nvme_command)) { 1834 1823 dev_err(ctrl->dev, "Unsupported I/O SQES %zu (need %zu)\n", 1835 1824 ctrl->io_sqes, sizeof(struct nvme_command)); 1836 - return -EINVAL; 1825 + goto err; 1837 1826 } 1838 1827 1839 1828 ctrl->io_cqes = 1UL << nvmet_cc_iocqes(ctrl->cc); 1840 1829 if (ctrl->io_cqes < sizeof(struct nvme_completion)) { 1841 1830 dev_err(ctrl->dev, "Unsupported I/O CQES %zu (need %zu)\n", 1842 1831 ctrl->io_sqes, sizeof(struct nvme_completion)); 1843 - return -EINVAL; 1832 + goto err; 1844 1833 } 1845 1834 1846 1835 /* Create the admin queue.
*/ ··· 1855 1844 qsize, pci_addr, 0); 1856 1845 if (status != NVME_SC_SUCCESS) { 1857 1846 dev_err(ctrl->dev, "Failed to create admin completion queue\n"); 1858 - return -EINVAL; 1847 + goto err; 1859 1848 } 1860 1849 1861 1850 qsize = aqa & 0x00000fff; ··· 1865 1854 if (status != NVME_SC_SUCCESS) { 1866 1855 dev_err(ctrl->dev, "Failed to create admin submission queue\n"); 1867 1856 nvmet_pci_epf_delete_cq(ctrl->tctrl, 0); 1868 - return -EINVAL; 1857 + goto err; 1869 1858 } 1870 1859 1871 1860 ctrl->sq_ab = NVMET_PCI_EPF_SQ_AB; 1872 1861 ctrl->irq_vector_threshold = NVMET_PCI_EPF_IV_THRESHOLD; 1873 1862 ctrl->enabled = true; 1863 + ctrl->csts = NVME_CSTS_RDY; 1874 1864 1875 1865 /* Start polling the controller SQs. */ 1876 1866 schedule_delayed_work(&ctrl->poll_sqs, 0); 1877 1867 1878 1868 return 0; 1869 + 1870 + err: 1871 + ctrl->csts = 0; 1872 + return -EINVAL; 1879 1873 } 1880 1874 1881 1875 static void nvmet_pci_epf_disable_ctrl(struct nvmet_pci_epf_ctrl *ctrl) ··· 1905 1889 /* Delete the admin queue last.
*/ 1906 1890 nvmet_pci_epf_delete_sq(ctrl->tctrl, 0); 1907 1891 nvmet_pci_epf_delete_cq(ctrl->tctrl, 0); 1892 + 1893 + ctrl->csts &= ~NVME_CSTS_RDY; 1908 1894 } 1909 1895 1910 1896 static void nvmet_pci_epf_poll_cc_work(struct work_struct *work) ··· 1921 1903 1922 1904 old_cc = ctrl->cc; 1923 1905 new_cc = nvmet_pci_epf_bar_read32(ctrl, NVME_REG_CC); 1906 + if (new_cc == old_cc) 1907 + goto reschedule_work; 1908 + 1924 1909 ctrl->cc = new_cc; 1925 1910 1926 1911 if (nvmet_cc_en(new_cc) && !nvmet_cc_en(old_cc)) { 1927 1912 ret = nvmet_pci_epf_enable_ctrl(ctrl); 1928 1913 if (ret) 1929 - return; 1930 - ctrl->csts |= NVME_CSTS_RDY; 1914 + goto reschedule_work; 1931 1915 } 1932 1916 1933 - if (!nvmet_cc_en(new_cc) && nvmet_cc_en(old_cc)) { 1917 + if (!nvmet_cc_en(new_cc) && nvmet_cc_en(old_cc)) 1934 1918 nvmet_pci_epf_disable_ctrl(ctrl); 1935 - ctrl->csts &= ~NVME_CSTS_RDY; 1936 - } 1937 1919 1938 1920 if (nvmet_cc_shn(new_cc) && !nvmet_cc_shn(old_cc)) { 1939 1921 nvmet_pci_epf_disable_ctrl(ctrl); ··· 1946 1928 nvmet_update_cc(ctrl->tctrl, ctrl->cc); 1947 1929 nvmet_pci_epf_bar_write32(ctrl, NVME_REG_CSTS, ctrl->csts); 1948 1930 1931 + reschedule_work: 1949 1932 schedule_delayed_work(&ctrl->poll_cc, NVMET_PCI_EPF_CC_POLL_INTERVAL); 1950 1933 } 1951 1934
+23 -10
drivers/nvme/target/rdma.c
··· 996 996 nvmet_req_complete(&cmd->req, status); 997 997 } 998 998 999 + static bool nvmet_rdma_recv_not_live(struct nvmet_rdma_queue *queue, 1000 + struct nvmet_rdma_rsp *rsp) 1001 + { 1002 + unsigned long flags; 1003 + bool ret = true; 1004 + 1005 + spin_lock_irqsave(&queue->state_lock, flags); 1006 + /* 1007 + * recheck queue state is not live to prevent a race condition 1008 + * with RDMA_CM_EVENT_ESTABLISHED handler. 1009 + */ 1010 + if (queue->state == NVMET_RDMA_Q_LIVE) 1011 + ret = false; 1012 + else if (queue->state == NVMET_RDMA_Q_CONNECTING) 1013 + list_add_tail(&rsp->wait_list, &queue->rsp_wait_list); 1014 + else 1015 + nvmet_rdma_put_rsp(rsp); 1016 + spin_unlock_irqrestore(&queue->state_lock, flags); 1017 + return ret; 1018 + } 1019 + 999 1020 static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc) 1000 1021 { 1001 1022 struct nvmet_rdma_cmd *cmd = ··· 1059 1038 rsp->n_rdma = 0; 1060 1039 rsp->invalidate_rkey = 0; 1061 1040 1062 - if (unlikely(queue->state != NVMET_RDMA_Q_LIVE)) { 1063 - unsigned long flags; 1064 - 1065 - spin_lock_irqsave(&queue->state_lock, flags); 1066 - if (queue->state == NVMET_RDMA_Q_CONNECTING) 1067 - list_add_tail(&rsp->wait_list, &queue->rsp_wait_list); 1068 - else 1069 - nvmet_rdma_put_rsp(rsp); 1070 - spin_unlock_irqrestore(&queue->state_lock, flags); 1041 + if (unlikely(queue->state != NVMET_RDMA_Q_LIVE) && 1042 + nvmet_rdma_recv_not_live(queue, rsp)) 1071 1043 return; 1072 - } 1073 1044 1074 1045 nvmet_rdma_handle_command(queue, rsp); 1075 1046 }
+1
drivers/platform/cznic/Kconfig
··· 6 6 7 7 menuconfig CZNIC_PLATFORMS 8 8 bool "Platform support for CZ.NIC's Turris hardware" 9 + depends on ARCH_MVEBU || COMPILE_TEST 9 10 help 10 11 Say Y here to be able to choose driver support for CZ.NIC's Turris 11 12 devices. This option alone does not add any kernel code.
+15 -16
drivers/power/supply/axp20x_battery.c
··· 466 466 467 467 /* 468 468 * If a fault is detected it must also be cleared; if the 469 - * condition persists it should reappear (This is an 470 - * assumption, it's actually not documented). A restart was 471 - * not sufficient to clear the bit in testing despite the 472 - * register listed as POR. 469 + * condition persists it should reappear. A restart was not 470 + * sufficient to clear the bit in testing despite the register 471 + * listed as POR. 473 472 */ 474 473 case POWER_SUPPLY_PROP_HEALTH: 475 474 ret = regmap_read(axp20x_batt->regmap, AXP717_PMU_FAULT, ··· 479 480 switch (reg & AXP717_BATT_PMU_FAULT_MASK) { 480 481 case AXP717_BATT_UVLO_2_5V: 481 482 val->intval = POWER_SUPPLY_HEALTH_DEAD; 482 - regmap_update_bits(axp20x_batt->regmap, 483 - AXP717_PMU_FAULT, 484 - AXP717_BATT_UVLO_2_5V, 485 - AXP717_BATT_UVLO_2_5V); 483 + regmap_write_bits(axp20x_batt->regmap, 484 + AXP717_PMU_FAULT, 485 + AXP717_BATT_UVLO_2_5V, 486 + AXP717_BATT_UVLO_2_5V); 486 487 return 0; 487 488 488 489 case AXP717_BATT_OVER_TEMP: 489 490 val->intval = POWER_SUPPLY_HEALTH_HOT; 490 - regmap_update_bits(axp20x_batt->regmap, 491 - AXP717_PMU_FAULT, 492 - AXP717_BATT_OVER_TEMP, 493 - AXP717_BATT_OVER_TEMP); 491 + regmap_write_bits(axp20x_batt->regmap, 492 + AXP717_PMU_FAULT, 493 + AXP717_BATT_OVER_TEMP, 494 + AXP717_BATT_OVER_TEMP); 494 495 return 0; 495 496 496 497 case AXP717_BATT_UNDER_TEMP: 497 498 val->intval = POWER_SUPPLY_HEALTH_COLD; 498 - regmap_update_bits(axp20x_batt->regmap, 499 - AXP717_PMU_FAULT, 500 - AXP717_BATT_UNDER_TEMP, 501 - AXP717_BATT_UNDER_TEMP); 499 + regmap_write_bits(axp20x_batt->regmap, 500 + AXP717_PMU_FAULT, 501 + AXP717_BATT_UNDER_TEMP, 502 + AXP717_BATT_UNDER_TEMP); 502 503 return 0; 503 504 504 505 default:
+2 -2
drivers/power/supply/da9150-fg.c
··· 247 247 DA9150_QIF_SD_GAIN_SIZE); 248 248 da9150_fg_read_sync_end(fg); 249 249 250 - div = (u64) (sd_gain * shunt_val * 65536ULL); 250 + div = 65536ULL * sd_gain * shunt_val; 251 251 do_div(div, 1000000); 252 - res = (u64) (iavg * 1000000ULL); 252 + res = 1000000ULL * iavg; 253 253 do_div(res, div); 254 254 255 255 val->intval = (int) res;
+4 -4
drivers/power/supply/power_supply_core.c
··· 1592 1592 if (rc) 1593 1593 goto register_thermal_failed; 1594 1594 1595 - scoped_guard(rwsem_read, &psy->extensions_sem) { 1596 - rc = power_supply_create_triggers(psy); 1597 - if (rc) 1598 - goto create_triggers_failed; 1595 + rc = power_supply_create_triggers(psy); 1596 + if (rc) 1597 + goto create_triggers_failed; 1599 1598 1599 + scoped_guard(rwsem_read, &psy->extensions_sem) { 1600 1600 rc = power_supply_add_hwmon_sysfs(psy); 1601 1601 if (rc) 1602 1602 goto add_hwmon_sysfs_failed;
+7 -7
drivers/scsi/scsi_lib.c
··· 1669 1669 if (in_flight) 1670 1670 __set_bit(SCMD_STATE_INFLIGHT, &cmd->state); 1671 1671 1672 - /* 1673 - * Only clear the driver-private command data if the LLD does not supply 1674 - * a function to initialize that data. 1675 - */ 1676 - if (!shost->hostt->init_cmd_priv) 1677 - memset(cmd + 1, 0, shost->hostt->cmd_size); 1678 - 1679 1672 cmd->prot_op = SCSI_PROT_NORMAL; 1680 1673 if (blk_rq_bytes(req)) 1681 1674 cmd->sc_data_direction = rq_dma_dir(req); ··· 1834 1841 } 1835 1842 if (!scsi_host_queue_ready(q, shost, sdev, cmd)) 1836 1843 goto out_dec_target_busy; 1844 + 1845 + /* 1846 + * Only clear the driver-private command data if the LLD does not supply 1847 + * a function to initialize that data. 1848 + */ 1849 + if (shost->hostt->cmd_size && !shost->hostt->init_cmd_priv) 1850 + memset(cmd + 1, 0, shost->hostt->cmd_size); 1837 1851 1838 1852 if (!(req->rq_flags & RQF_DONTPREP)) { 1839 1853 ret = scsi_prepare_cmd(req);
+4 -1
drivers/soc/loongson/loongson2_guts.c
··· 114 114 if (of_property_read_string(root, "model", &machine)) 115 115 of_property_read_string_index(root, "compatible", 0, &machine); 116 116 of_node_put(root); 117 - if (machine) 117 + if (machine) { 118 118 soc_dev_attr.machine = devm_kstrdup(dev, machine, GFP_KERNEL); 119 + if (!soc_dev_attr.machine) 120 + return -ENOMEM; 121 + } 119 122 120 123 svr = loongson2_guts_get_svr(); 121 124 soc_die = loongson2_soc_die_match(svr, loongson2_soc_die);
+8 -27
drivers/tee/optee/supp.c
··· 80 80 struct optee *optee = tee_get_drvdata(ctx->teedev); 81 81 struct optee_supp *supp = &optee->supp; 82 82 struct optee_supp_req *req; 83 - bool interruptable; 84 83 u32 ret; 85 84 86 85 /* ··· 110 111 /* 111 112 * Wait for supplicant to process and return result, once we've 112 113 * returned from wait_for_completion(&req->c) successfully we have 113 - * exclusive access again. 114 + * exclusive access again. Allow the wait to be killable such that 115 + * the wait doesn't turn into an indefinite state if the supplicant 116 + * gets hung for some reason. 114 117 */ 115 - while (wait_for_completion_interruptible(&req->c)) { 118 + if (wait_for_completion_killable(&req->c)) { 116 119 mutex_lock(&supp->mutex); 117 - interruptable = !supp->ctx; 118 - if (interruptable) { 119 - /* 120 - * There's no supplicant available and since the 121 - * supp->mutex currently is held none can 122 - * become available until the mutex released 123 - * again. 124 - * 125 - * Interrupting an RPC to supplicant is only 126 - * allowed as a way of slightly improving the user 127 - * experience in case the supplicant hasn't been 128 - * started yet. During normal operation the supplicant 129 - * will serve all requests in a timely manner and 130 - * interrupting then wouldn't make sense. 131 - */ 132 - if (req->in_queue) { 133 - list_del(&req->link); 134 - req->in_queue = false; 135 - } 120 + if (req->in_queue) { 121 + list_del(&req->link); 122 + req->in_queue = false; 136 123 } 137 124 mutex_unlock(&supp->mutex); 138 - 139 - if (interruptable) { 140 - req->ret = TEEC_ERROR_COMMUNICATION; 141 - break; 142 - } 125 + req->ret = TEEC_ERROR_COMMUNICATION; 143 126 } 144 127 145 128 ret = req->ret;
+4 -2
drivers/ufs/core/ufs_bsg.c
··· 194 194 ufshcd_rpm_put_sync(hba); 195 195 kfree(buff); 196 196 bsg_reply->result = ret; 197 - job->reply_len = !rpmb ? sizeof(struct ufs_bsg_reply) : sizeof(struct ufs_rpmb_reply); 198 197 /* complete the job here only if no error */ 199 - if (ret == 0) 198 + if (ret == 0) { 199 + job->reply_len = rpmb ? sizeof(struct ufs_rpmb_reply) : 200 + sizeof(struct ufs_bsg_reply); 200 201 bsg_job_done(job, ret, bsg_reply->reply_payload_rcv_len); 202 + } 201 203 202 204 return ret; 203 205 }
+19 -19
drivers/ufs/core/ufshcd.c
··· 266 266 267 267 static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba) 268 268 { 269 - return hba->outstanding_reqs || ufshcd_has_pending_tasks(hba); 269 + return scsi_host_busy(hba->host) || ufshcd_has_pending_tasks(hba); 270 270 } 271 271 272 272 static const struct ufs_dev_quirk ufs_fixups[] = { ··· 628 628 const struct scsi_device *sdev_ufs = hba->ufs_device_wlun; 629 629 630 630 dev_err(hba->dev, "UFS Host state=%d\n", hba->ufshcd_state); 631 - dev_err(hba->dev, "outstanding reqs=0x%lx tasks=0x%lx\n", 632 - hba->outstanding_reqs, hba->outstanding_tasks); 631 + dev_err(hba->dev, "%d outstanding reqs, tasks=0x%lx\n", 632 + scsi_host_busy(hba->host), hba->outstanding_tasks); 633 633 dev_err(hba->dev, "saved_err=0x%x, saved_uic_err=0x%x\n", 634 634 hba->saved_err, hba->saved_uic_err); 635 635 dev_err(hba->dev, "Device power mode=%d, UIC link state=%d\n", ··· 8882 8882 dev_info(hba->dev, "%s() finished; outstanding_tasks = %#lx.\n", 8883 8883 __func__, hba->outstanding_tasks); 8884 8884 8885 - return hba->outstanding_reqs ? SCSI_EH_RESET_TIMER : SCSI_EH_DONE; 8885 + return scsi_host_busy(hba->host) ? SCSI_EH_RESET_TIMER : SCSI_EH_DONE; 8886 8886 } 8887 8887 8888 8888 static const struct attribute_group *ufshcd_driver_groups[] = { ··· 10431 10431 */ 10432 10432 spin_lock_init(&hba->clk_gating.lock); 10433 10433 10434 + /* 10435 + * Set the default power management level for runtime and system PM. 10436 + * Host controller drivers can override them in their 10437 + * 'ufs_hba_variant_ops::init' callback. 10438 + * 10439 + * Default power saving mode is to keep UFS link in Hibern8 state 10440 + * and UFS device in sleep state.
10441 + */ 10442 + hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10443 + UFS_SLEEP_PWR_MODE, 10444 + UIC_LINK_HIBERN8_STATE); 10445 + hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10446 + UFS_SLEEP_PWR_MODE, 10447 + UIC_LINK_HIBERN8_STATE); 10448 + 10434 10449 err = ufshcd_hba_init(hba); 10435 10450 if (err) 10436 10451 goto out_error; ··· 10558 10543 ufshcd_print_host_state(hba); 10559 10544 goto out_disable; 10560 10545 } 10561 - 10562 - /* 10563 - * Set the default power management level for runtime and system PM if 10564 - * not set by the host controller drivers. 10565 - * Default power saving mode is to keep UFS link in Hibern8 state 10566 - * and UFS device in sleep state. 10567 - */ 10568 - if (!hba->rpm_lvl) 10569 - hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10570 - UFS_SLEEP_PWR_MODE, 10571 - UIC_LINK_HIBERN8_STATE); 10572 - if (!hba->spm_lvl) 10573 - hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10574 - UFS_SLEEP_PWR_MODE, 10575 - UIC_LINK_HIBERN8_STATE); 10576 10546 10577 10547 INIT_DELAYED_WORK(&hba->rpm_dev_flush_recheck_work, ufshcd_rpm_dev_flush_recheck_work); 10578 10548 INIT_DELAYED_WORK(&hba->ufs_rtc_update_work, ufshcd_rtc_work);
+3
fs/afs/server.c
··· 163 163 rb_insert_color(&server->uuid_rb, &net->fs_servers); 164 164 hlist_add_head_rcu(&server->proc_link, &net->fs_proc); 165 165 166 + afs_get_cell(cell, afs_cell_trace_get_server); 167 + 166 168 added_dup: 167 169 write_seqlock(&net->fs_addr_lock); 168 170 estate = rcu_dereference_protected(server->endpoint_state, ··· 444 442 atomic_read(&server->active), afs_server_trace_free); 445 443 afs_put_endpoint_state(rcu_access_pointer(server->endpoint_state), 446 444 afs_estate_trace_put_server); 445 + afs_put_cell(server->cell, afs_cell_trace_put_server); 447 446 kfree(server); 448 447 } 449 448
+2 -2
fs/afs/server_list.c
··· 97 97 break; 98 98 if (j < slist->nr_servers) { 99 99 if (slist->servers[j].server == server) { 100 - afs_put_server(volume->cell->net, server, 101 - afs_server_trace_put_slist_isort); 100 + afs_unuse_server(volume->cell->net, server, 101 + afs_server_trace_put_slist_isort); 102 102 continue; 103 103 } 104 104
+5 -4
fs/bcachefs/btree_cache.c
··· 203 203 return NULL; 204 204 } 205 205 206 - bch2_btree_lock_init(&b->c, 0); 206 + bch2_btree_lock_init(&b->c, 0, GFP_KERNEL); 207 207 208 208 __bch2_btree_node_to_freelist(bc, b); 209 209 return b; ··· 795 795 } 796 796 797 797 b = __btree_node_mem_alloc(c, GFP_NOWAIT|__GFP_NOWARN); 798 - if (!b) { 798 + if (b) { 799 + bch2_btree_lock_init(&b->c, pcpu_read_locks ? SIX_LOCK_INIT_PCPU : 0, GFP_NOWAIT); 800 + } else { 799 801 mutex_unlock(&bc->lock); 800 802 bch2_trans_unlock(trans); 801 803 b = __btree_node_mem_alloc(c, GFP_KERNEL); 802 804 if (!b) 803 805 goto err; 806 + bch2_btree_lock_init(&b->c, pcpu_read_locks ? SIX_LOCK_INIT_PCPU : 0, GFP_KERNEL); 804 807 mutex_lock(&bc->lock); 805 808 } 806 - 807 - bch2_btree_lock_init(&b->c, pcpu_read_locks ? SIX_LOCK_INIT_PCPU : 0); 808 809 809 810 BUG_ON(!six_trylock_intent(&b->c.lock)); 810 811 BUG_ON(!six_trylock_write(&b->c.lock));
+1 -1
fs/bcachefs/btree_io.c
··· 996 996 } 997 997 got_good_key: 998 998 le16_add_cpu(&i->u64s, -next_good_key); 999 - memmove_u64s_down(k, bkey_p_next(k), (u64 *) vstruct_end(i) - (u64 *) k); 999 + memmove_u64s_down(k, (u64 *) k + next_good_key, (u64 *) vstruct_end(i) - (u64 *) k); 1000 1000 set_btree_node_need_rewrite(b); 1001 1001 } 1002 1002 fsck_err:
+1 -1
fs/bcachefs/btree_key_cache.c
··· 156 156 } 157 157 158 158 if (ck) { 159 - bch2_btree_lock_init(&ck->c, pcpu_readers ? SIX_LOCK_INIT_PCPU : 0); 159 + bch2_btree_lock_init(&ck->c, pcpu_readers ? SIX_LOCK_INIT_PCPU : 0, GFP_KERNEL); 160 160 ck->c.cached = true; 161 161 goto lock; 162 162 }
+3 -2
fs/bcachefs/btree_locking.c
··· 7 7 static struct lock_class_key bch2_btree_node_lock_key; 8 8 9 9 void bch2_btree_lock_init(struct btree_bkey_cached_common *b, 10 - enum six_lock_init_flags flags) 10 + enum six_lock_init_flags flags, 11 + gfp_t gfp) 11 12 { 12 - __six_lock_init(&b->lock, "b->c.lock", &bch2_btree_node_lock_key, flags); 13 + __six_lock_init(&b->lock, "b->c.lock", &bch2_btree_node_lock_key, flags, gfp); 13 14 lockdep_set_notrack_class(&b->lock); 14 15 } 15 16
+1 -1
fs/bcachefs/btree_locking.h
··· 13 13 #include "btree_iter.h" 14 14 #include "six.h" 15 15 16 - void bch2_btree_lock_init(struct btree_bkey_cached_common *, enum six_lock_init_flags); 16 + void bch2_btree_lock_init(struct btree_bkey_cached_common *, enum six_lock_init_flags, gfp_t gfp); 17 17 18 18 void bch2_trans_unlock_noassert(struct btree_trans *); 19 19 void bch2_trans_unlock_write(struct btree_trans *);
+1
fs/bcachefs/data_update.c
··· 340 340 struct printbuf buf = PRINTBUF; 341 341 342 342 prt_str(&buf, "about to insert invalid key in data update path"); 343 + prt_printf(&buf, "\nop.nonce: %u", m->op.nonce); 343 344 prt_str(&buf, "\nold: "); 344 345 bch2_bkey_val_to_text(&buf, c, old); 345 346 prt_str(&buf, "\nk: ");
-5
fs/bcachefs/dirent.h
··· 31 31 sizeof(u64)); 32 32 } 33 33 34 - static inline unsigned int dirent_occupied_size(const struct qstr *name) 35 - { 36 - return (BKEY_U64s + dirent_val_u64s(name->len)) * sizeof(u64); 37 - } 38 - 39 34 int bch2_dirent_read_target(struct btree_trans *, subvol_inum, 40 35 struct bkey_s_c_dirent, subvol_inum *); 41 36
+1 -1
fs/bcachefs/extents.h
··· 704 704 ptr1.unwritten == ptr2.unwritten && 705 705 ptr1.offset == ptr2.offset && 706 706 ptr1.dev == ptr2.dev && 707 - ptr1.dev == ptr2.dev); 707 + ptr1.gen == ptr2.gen); 708 708 } 709 709 710 710 void bch2_ptr_swab(struct bkey_s);
-11
fs/bcachefs/fs-common.c
··· 152 152 if (is_subdir_for_nlink(new_inode)) 153 153 dir_u->bi_nlink++; 154 154 dir_u->bi_mtime = dir_u->bi_ctime = now; 155 - dir_u->bi_size += dirent_occupied_size(name); 156 155 157 156 ret = bch2_inode_write(trans, &dir_iter, dir_u); 158 157 if (ret) ··· 220 221 } 221 222 222 223 dir_u->bi_mtime = dir_u->bi_ctime = now; 223 - dir_u->bi_size += dirent_occupied_size(name); 224 224 225 225 dir_hash = bch2_hash_info_init(c, dir_u); 226 226 ··· 322 324 323 325 dir_u->bi_mtime = dir_u->bi_ctime = inode_u->bi_ctime = now; 324 326 dir_u->bi_nlink -= is_subdir_for_nlink(inode_u); 325 - dir_u->bi_size -= dirent_occupied_size(name); 326 327 327 328 ret = bch2_hash_delete_at(trans, bch2_dirent_hash_desc, 328 329 &dir_hash, &dirent_iter, ··· 459 462 ret = -EXDEV; 460 463 goto err; 461 464 } 462 - 463 - if (mode == BCH_RENAME) { 464 - src_dir_u->bi_size -= dirent_occupied_size(src_name); 465 - dst_dir_u->bi_size += dirent_occupied_size(dst_name); 466 - } 467 - 468 - if (mode == BCH_RENAME_OVERWRITE) 469 - src_dir_u->bi_size -= dirent_occupied_size(src_name); 470 465 471 466 if (src_inode_u->bi_parent_subvol) 472 467 src_inode_u->bi_parent_subvol = dst_dir.subvol;
+1
fs/bcachefs/fs-io.c
··· 466 466 ret = bch2_truncate_folio(inode, iattr->ia_size); 467 467 if (unlikely(ret < 0)) 468 468 goto err; 469 + ret = 0; 469 470 470 471 truncate_setsize(&inode->v, iattr->ia_size); 471 472
-21
fs/bcachefs/fsck.c
··· 1978 1978 return ret; 1979 1979 } 1980 1980 1981 - static int check_dir_i_size_notnested(struct btree_trans *trans, struct inode_walker *w) 1982 - { 1983 - struct bch_fs *c = trans->c; 1984 - int ret = 0; 1985 - 1986 - darray_for_each(w->inodes, i) 1987 - if (fsck_err_on(i->inode.bi_size != i->i_size, 1988 - trans, inode_dir_wrong_nlink, 1989 - "directory %llu:%u with wrong i_size: got %llu, should be %llu", 1990 - w->last_pos.inode, i->snapshot, i->inode.bi_size, i->i_size)) { 1991 - i->inode.bi_size = i->i_size; 1992 - ret = bch2_fsck_write_inode(trans, &i->inode); 1993 - if (ret) 1994 - break; 1995 - } 1996 - fsck_err: 1997 - bch_err_fn(c, ret); 1998 - return ret; 1999 - } 2000 - 2001 1981 static int check_subdir_dirents_count(struct btree_trans *trans, struct inode_walker *w) 2002 1982 { 2003 1983 u32 restart_count = trans->restart_count; 2004 1984 return check_subdir_count_notnested(trans, w) ?: 2005 - check_dir_i_size_notnested(trans, w) ?: 2006 1985 trans_was_restarted(trans, restart_count); 2007 1986 } 2008 1987
+3 -1
fs/bcachefs/journal.c
··· 1194 1194 1195 1195 closure_sync(&cl); 1196 1196 1197 - if (ret && ret != -BCH_ERR_bucket_alloc_blocked) 1197 + if (ret && 1198 + ret != -BCH_ERR_bucket_alloc_blocked && 1199 + ret != -BCH_ERR_open_buckets_empty) 1198 1200 break; 1199 1201 } 1200 1202
+1 -4
fs/bcachefs/sb-downgrade.c
··· 90 90 BIT_ULL(BCH_RECOVERY_PASS_check_allocations), \ 91 91 BCH_FSCK_ERR_accounting_mismatch, \ 92 92 BCH_FSCK_ERR_accounting_key_replicas_nr_devs_0, \ 93 - BCH_FSCK_ERR_accounting_key_junk_at_end) \ 94 - x(directory_size, \ 95 - BIT_ULL(BCH_RECOVERY_PASS_check_dirents), \ 96 - BCH_FSCK_ERR_directory_size_mismatch) \ 93 + BCH_FSCK_ERR_accounting_key_junk_at_end) 97 94 98 95 #define DOWNGRADE_TABLE() \ 99 96 x(bucket_stripe_sectors, \
+3 -2
fs/bcachefs/six.c
··· 850 850 EXPORT_SYMBOL_GPL(six_lock_exit); 851 851 852 852 void __six_lock_init(struct six_lock *lock, const char *name, 853 - struct lock_class_key *key, enum six_lock_init_flags flags) 853 + struct lock_class_key *key, enum six_lock_init_flags flags, 854 + gfp_t gfp) 854 855 { 855 856 atomic_set(&lock->state, 0); 856 857 raw_spin_lock_init(&lock->wait_lock); ··· 874 873 * failure if they wish by checking lock->readers, but generally 875 874 * will not want to treat it as an error. 876 875 */ 877 - lock->readers = alloc_percpu(unsigned); 876 + lock->readers = alloc_percpu_gfp(unsigned, gfp); 878 877 } 879 878 #endif 880 879 }
+4 -3
fs/bcachefs/six.h
··· 164 164 }; 165 165 166 166 void __six_lock_init(struct six_lock *lock, const char *name, 167 - struct lock_class_key *key, enum six_lock_init_flags flags); 167 + struct lock_class_key *key, enum six_lock_init_flags flags, 168 + gfp_t gfp); 168 169 169 170 /** 170 171 * six_lock_init - initialize a six lock 171 172 * @lock: lock to initialize 172 173 * @flags: optional flags, i.e. SIX_LOCK_INIT_PCPU 173 174 */ 174 - #define six_lock_init(lock, flags) \ 175 + #define six_lock_init(lock, flags, gfp) \ 175 176 do { \ 176 177 static struct lock_class_key __key; \ 177 178 \ 178 - __six_lock_init((lock), #lock, &__key, flags); \ 179 + __six_lock_init((lock), #lock, &__key, flags, gfp); \ 179 180 } while (0) 180 181 181 182 /**
+59 -24
fs/btrfs/extent_map.c
··· 1128 1128 long nr_dropped = 0; 1129 1129 struct rb_node *node; 1130 1130 1131 + lockdep_assert_held_write(&tree->lock); 1132 + 1131 1133 /* 1132 1134 * Take the mmap lock so that we serialize with the inode logging phase 1133 1135 * of fsync because we may need to set the full sync flag on the inode, ··· 1141 1139 * to find new extents, which may not be there yet because ordered 1142 1140 * extents haven't completed yet. 1143 1141 * 1144 - * We also do a try lock because otherwise we could deadlock. This is 1145 - * because the shrinker for this filesystem may be invoked while we are 1146 - * in a path that is holding the mmap lock in write mode. For example in 1147 - * a reflink operation while COWing an extent buffer, when allocating 1148 - * pages for a new extent buffer and under memory pressure, the shrinker 1149 - * may be invoked, and therefore we would deadlock by attempting to read 1150 - * lock the mmap lock while we are holding already a write lock on it. 1142 + * We also do a try lock because we don't want to block for too long and 1143 + * we are holding the extent map tree's lock in write mode. 1151 1144 */ 1152 1145 if (!down_read_trylock(&inode->i_mmap_lock)) 1153 1146 return 0; 1154 - 1155 - /* 1156 - * We want to be fast so if the lock is busy we don't want to spend time 1157 - * waiting for it - either some task is about to do IO for the inode or 1158 - * we may have another task shrinking extent maps, here in this code, so 1159 - * skip this inode. 
1160 - */ 1161 - if (!write_trylock(&tree->lock)) { 1162 - up_read(&inode->i_mmap_lock); 1163 - return 0; 1164 - } 1165 1147 1166 1148 node = rb_first(&tree->root); 1167 1149 while (node) { ··· 1187 1201 break; 1188 1202 node = next; 1189 1203 } 1190 - write_unlock(&tree->lock); 1191 1204 up_read(&inode->i_mmap_lock); 1192 1205 1193 1206 return nr_dropped; 1207 + } 1208 + 1209 + static struct btrfs_inode *find_first_inode_to_shrink(struct btrfs_root *root, 1210 + u64 min_ino) 1211 + { 1212 + struct btrfs_inode *inode; 1213 + unsigned long from = min_ino; 1214 + 1215 + xa_lock(&root->inodes); 1216 + while (true) { 1217 + struct extent_map_tree *tree; 1218 + 1219 + inode = xa_find(&root->inodes, &from, ULONG_MAX, XA_PRESENT); 1220 + if (!inode) 1221 + break; 1222 + 1223 + tree = &inode->extent_tree; 1224 + 1225 + /* 1226 + * We want to be fast so if the lock is busy we don't want to 1227 + * spend time waiting for it (some task is about to do IO for 1228 + * the inode). 1229 + */ 1230 + if (!write_trylock(&tree->lock)) 1231 + goto next; 1232 + 1233 + /* 1234 + * Skip inode if it doesn't have loaded extent maps, so we avoid 1235 + * getting a reference and doing an iput later. This includes 1236 + * cases like files that were opened for things like stat(2), or 1237 + * files with all extent maps previously released through the 1238 + * release folio callback (btrfs_release_folio()) or released in 1239 + * a previous run, or directories which never have extent maps. 
1240 + */ 1241 + if (RB_EMPTY_ROOT(&tree->root)) { 1242 + write_unlock(&tree->lock); 1243 + goto next; 1244 + } 1245 + 1246 + if (igrab(&inode->vfs_inode)) 1247 + break; 1248 + 1249 + write_unlock(&tree->lock); 1250 + next: 1251 + from = btrfs_ino(inode) + 1; 1252 + cond_resched_lock(&root->inodes.xa_lock); 1253 + } 1254 + xa_unlock(&root->inodes); 1255 + 1256 + return inode; 1194 1257 } 1195 1258 1196 1259 static long btrfs_scan_root(struct btrfs_root *root, struct btrfs_em_shrink_ctx *ctx) ··· 1249 1214 long nr_dropped = 0; 1250 1215 u64 min_ino = fs_info->em_shrinker_last_ino + 1; 1251 1216 1252 - inode = btrfs_find_first_inode(root, min_ino); 1217 + inode = find_first_inode_to_shrink(root, min_ino); 1253 1218 while (inode) { 1254 1219 nr_dropped += btrfs_scan_inode(inode, ctx); 1220 + write_unlock(&inode->extent_tree.lock); 1255 1221 1256 1222 min_ino = btrfs_ino(inode) + 1; 1257 1223 fs_info->em_shrinker_last_ino = btrfs_ino(inode); 1258 - btrfs_add_delayed_iput(inode); 1224 + iput(&inode->vfs_inode); 1259 1225 1260 - if (ctx->scanned >= ctx->nr_to_scan || 1261 - btrfs_fs_closing(inode->root->fs_info)) 1226 + if (ctx->scanned >= ctx->nr_to_scan || btrfs_fs_closing(fs_info)) 1262 1227 break; 1263 1228 1264 1229 cond_resched(); 1265 1230 1266 - inode = btrfs_find_first_inode(root, min_ino); 1231 + inode = find_first_inode_to_shrink(root, min_ino); 1267 1232 } 1268 1233 1269 1234 if (inode) {
+8 -1
fs/btrfs/file.c
··· 1090 1090 u64 lockend; 1091 1091 size_t num_written = 0; 1092 1092 ssize_t ret; 1093 - loff_t old_isize = i_size_read(inode); 1093 + loff_t old_isize; 1094 1094 unsigned int ilock_flags = 0; 1095 1095 const bool nowait = (iocb->ki_flags & IOCB_NOWAIT); 1096 1096 unsigned int bdp_flags = (nowait ? BDP_ASYNC : 0); ··· 1102 1102 ret = btrfs_inode_lock(BTRFS_I(inode), ilock_flags); 1103 1103 if (ret < 0) 1104 1104 return ret; 1105 + 1106 + /* 1107 + * We can only trust the isize with inode lock held, or it can race with 1108 + * other buffered writes and cause incorrect call of 1109 + * pagecache_isize_extended() to overwrite existing data. 1110 + */ 1111 + old_isize = i_size_read(inode); 1105 1112 1106 1113 ret = generic_write_checks(iocb, i); 1107 1114 if (ret <= 0)
+1
fs/btrfs/tests/delayed-refs-tests.c
··· 1009 1009 if (!ret) 1010 1010 ret = select_delayed_refs_test(&trans); 1011 1011 1012 + kfree(transaction); 1012 1013 out_free_fs_info: 1013 1014 btrfs_free_dummy_fs_info(fs_info); 1014 1015 return ret;
+5 -1
fs/btrfs/volumes.c
··· 7200 7200 7201 7201 fs_devices = find_fsid(fsid, NULL); 7202 7202 if (!fs_devices) { 7203 - if (!btrfs_test_opt(fs_info, DEGRADED)) 7203 + if (!btrfs_test_opt(fs_info, DEGRADED)) { 7204 + btrfs_err(fs_info, 7205 + "failed to find fsid %pU when attempting to open seed devices", 7206 + fsid); 7204 7207 return ERR_PTR(-ENOENT); 7208 + } 7205 7209 7206 7210 fs_devices = alloc_fs_devices(fsid); 7207 7211 if (IS_ERR(fs_devices))
+6
fs/fuse/dev.c
··· 838 838 return 0; 839 839 } 840 840 841 + /* 842 + * Attempt to steal a page from the splice() pipe and move it into the 843 + * pagecache. If successful, the pointer in @pagep will be updated. The 844 + * folio that was originally in @pagep will lose a reference and the new 845 + * folio returned in @pagep will carry a reference. 846 + */ 841 847 static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep) 842 848 { 843 849 int err;
+1 -1
fs/fuse/dir.c
··· 1636 1636 goto out_err; 1637 1637 1638 1638 if (fc->cache_symlinks) 1639 - return page_get_link(dentry, inode, callback); 1639 + return page_get_link_raw(dentry, inode, callback); 1640 1640 1641 1641 err = -ECHILD; 1642 1642 if (!dentry)
+11 -2
fs/fuse/file.c
··· 955 955 fuse_invalidate_atime(inode); 956 956 } 957 957 958 - for (i = 0; i < ap->num_folios; i++) 958 + for (i = 0; i < ap->num_folios; i++) { 959 959 folio_end_read(ap->folios[i], !err); 960 + folio_put(ap->folios[i]); 961 + } 960 962 if (ia->ff) 961 963 fuse_file_put(ia->ff, false); 962 964 ··· 1050 1048 ap = &ia->ap; 1051 1049 1052 1050 while (ap->num_folios < cur_pages) { 1053 - folio = readahead_folio(rac); 1051 + /* 1052 + * This returns a folio with a ref held on it. 1053 + * The ref needs to be held until the request is 1054 + * completed, since the splice case (see 1055 + * fuse_try_move_page()) drops the ref after it's 1056 + * replaced in the page cache. 1057 + */ 1058 + folio = __readahead_folio(rac); 1054 1059 ap->folios[ap->num_folios] = folio; 1055 1060 ap->descs[ap->num_folios].length = folio_size(folio); 1056 1061 ap->num_folios++;
+3 -5
fs/iomap/direct-io.c
··· 427 427 bio_put(bio); 428 428 goto zero_tail; 429 429 } 430 - if (dio->flags & IOMAP_DIO_WRITE) { 430 + if (dio->flags & IOMAP_DIO_WRITE) 431 431 task_io_account_write(n); 432 - } else { 433 - if (dio->flags & IOMAP_DIO_DIRTY) 434 - bio_set_pages_dirty(bio); 435 - } 432 + else if (dio->flags & IOMAP_DIO_DIRTY) 433 + bio_set_pages_dirty(bio); 436 434 437 435 dio->size += n; 438 436 copied += n;
+19 -5
fs/namei.c
··· 5356 5356 EXPORT_SYMBOL(vfs_get_link); 5357 5357 5358 5358 /* get the link contents into pagecache */ 5359 - const char *page_get_link(struct dentry *dentry, struct inode *inode, 5360 - struct delayed_call *callback) 5359 + static char *__page_get_link(struct dentry *dentry, struct inode *inode, 5360 + struct delayed_call *callback) 5361 5361 { 5362 - char *kaddr; 5363 5362 struct page *page; 5364 5363 struct address_space *mapping = inode->i_mapping; 5365 5364 ··· 5377 5378 } 5378 5379 set_delayed_call(callback, page_put_link, page); 5379 5380 BUG_ON(mapping_gfp_mask(mapping) & __GFP_HIGHMEM); 5380 - kaddr = page_address(page); 5381 - nd_terminate_link(kaddr, inode->i_size, PAGE_SIZE - 1); 5381 + return page_address(page); 5382 + } 5383 + 5384 + const char *page_get_link_raw(struct dentry *dentry, struct inode *inode, 5385 + struct delayed_call *callback) 5386 + { 5387 + return __page_get_link(dentry, inode, callback); 5388 + } 5389 + EXPORT_SYMBOL_GPL(page_get_link_raw); 5390 + 5391 + const char *page_get_link(struct dentry *dentry, struct inode *inode, 5392 + struct delayed_call *callback) 5393 + { 5394 + char *kaddr = __page_get_link(dentry, inode, callback); 5395 + 5396 + if (!IS_ERR(kaddr)) 5397 + nd_terminate_link(kaddr, inode->i_size, PAGE_SIZE - 1); 5382 5398 return kaddr; 5383 5399 } 5384 5400
+37
fs/nfs/delegation.c
··· 781 781 } 782 782 783 783 /** 784 + * nfs4_inode_set_return_delegation_on_close - asynchronously return a delegation 785 + * @inode: inode to process 786 + * 787 + * This routine is called to request that the delegation be returned as soon 788 + * as the file is closed. If the file is already closed, the delegation is 789 + * immediately returned. 790 + */ 791 + void nfs4_inode_set_return_delegation_on_close(struct inode *inode) 792 + { 793 + struct nfs_delegation *delegation; 794 + struct nfs_delegation *ret = NULL; 795 + 796 + if (!inode) 797 + return; 798 + rcu_read_lock(); 799 + delegation = nfs4_get_valid_delegation(inode); 800 + if (!delegation) 801 + goto out; 802 + spin_lock(&delegation->lock); 803 + if (!delegation->inode) 804 + goto out_unlock; 805 + if (list_empty(&NFS_I(inode)->open_files) && 806 + !test_and_set_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) { 807 + /* Refcount matched in nfs_end_delegation_return() */ 808 + ret = nfs_get_delegation(delegation); 809 + } else 810 + set_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags); 811 + out_unlock: 812 + spin_unlock(&delegation->lock); 813 + if (ret) 814 + nfs_clear_verifier_delegated(inode); 815 + out: 816 + rcu_read_unlock(); 817 + nfs_end_delegation_return(inode, ret, 0); 818 + } 819 + 820 + /** 784 821 * nfs4_inode_return_delegation_on_close - asynchronously return a delegation 785 822 * @inode: inode to process 786 823 *
+1
fs/nfs/delegation.h
··· 49 49 unsigned long pagemod_limit, u32 deleg_type); 50 50 int nfs4_inode_return_delegation(struct inode *inode); 51 51 void nfs4_inode_return_delegation_on_close(struct inode *inode); 52 + void nfs4_inode_set_return_delegation_on_close(struct inode *inode); 52 53 int nfs_async_inode_return_delegation(struct inode *inode, const nfs4_stateid *stateid); 53 54 void nfs_inode_evict_delegation(struct inode *inode); 54 55
+23
fs/nfs/direct.c
··· 56 56 #include <linux/uaccess.h> 57 57 #include <linux/atomic.h> 58 58 59 + #include "delegation.h" 59 60 #include "internal.h" 60 61 #include "iostat.h" 61 62 #include "pnfs.h" ··· 129 128 dreq->max_count = req_start; 130 129 if (req_start < dreq->count) 131 130 dreq->count = req_start; 131 + } 132 + 133 + static void nfs_direct_file_adjust_size_locked(struct inode *inode, 134 + loff_t offset, size_t count) 135 + { 136 + loff_t newsize = offset + (loff_t)count; 137 + loff_t oldsize = i_size_read(inode); 138 + 139 + if (newsize > oldsize) { 140 + i_size_write(inode, newsize); 141 + NFS_I(inode)->cache_validity &= ~NFS_INO_INVALID_SIZE; 142 + trace_nfs_size_grow(inode, newsize); 143 + nfs_inc_stats(inode, NFSIOS_EXTENDWRITE); 144 + } 132 145 } 133 146 134 147 /** ··· 286 271 287 272 nfs_direct_count_bytes(dreq, hdr); 288 273 spin_unlock(&dreq->lock); 274 + 275 + nfs_update_delegated_atime(dreq->inode); 289 276 290 277 while (!list_empty(&hdr->pages)) { 291 278 struct nfs_page *req = nfs_list_entry(hdr->pages.next); ··· 758 741 struct nfs_direct_req *dreq = hdr->dreq; 759 742 struct nfs_commit_info cinfo; 760 743 struct nfs_page *req = nfs_list_entry(hdr->pages.next); 744 + struct inode *inode = dreq->inode; 761 745 int flags = NFS_ODIRECT_DONE; 762 746 763 747 trace_nfs_direct_write_completion(dreq); ··· 779 761 flags = dreq->flags; 780 762 } 781 763 spin_unlock(&dreq->lock); 764 + 765 + spin_lock(&inode->i_lock); 766 + nfs_direct_file_adjust_size_locked(inode, dreq->io_start, dreq->count); 767 + nfs_update_delegated_mtime_locked(dreq->inode); 768 + spin_unlock(&inode->i_lock); 782 769 783 770 while (!list_empty(&hdr->pages)) { 784 771
+7 -3
fs/nfs/nfs4proc.c
··· 133 133 if (err) 134 134 return NULL; 135 135 136 + label->lsmid = shim.id; 136 137 label->label = shim.context; 137 138 label->len = shim.len; 138 139 return label; ··· 146 145 if (label) { 147 146 shim.context = label->label; 148 147 shim.len = label->len; 149 - shim.id = LSM_ID_UNDEF; 148 + shim.id = label->lsmid; 150 149 security_release_secctx(&shim); 151 150 } 152 151 } ··· 3907 3906 3908 3907 static void nfs4_close_context(struct nfs_open_context *ctx, int is_sync) 3909 3908 { 3909 + struct dentry *dentry = ctx->dentry; 3910 3910 if (ctx->state == NULL) 3911 3911 return; 3912 + if (dentry->d_flags & DCACHE_NFSFS_RENAMED) 3913 + nfs4_inode_set_return_delegation_on_close(d_inode(dentry)); 3912 3914 if (is_sync) 3913 3915 nfs4_close_sync(ctx->state, _nfs4_ctx_to_openmode(ctx)); 3914 3916 else ··· 6273 6269 size_t buflen) 6274 6270 { 6275 6271 struct nfs_server *server = NFS_SERVER(inode); 6276 - struct nfs4_label label = {0, 0, buflen, buf}; 6272 + struct nfs4_label label = {0, 0, 0, buflen, buf}; 6277 6273 6278 6274 u32 bitmask[3] = { 0, 0, FATTR4_WORD2_SECURITY_LABEL }; 6279 6275 struct nfs_fattr fattr = { ··· 6378 6374 static int 6379 6375 nfs4_set_security_label(struct inode *inode, const void *buf, size_t buflen) 6380 6376 { 6381 - struct nfs4_label ilabel = {0, 0, buflen, (char *)buf }; 6377 + struct nfs4_label ilabel = {0, 0, 0, buflen, (char *)buf }; 6382 6378 struct nfs_fattr *fattr; 6383 6379 int status; 6384 6380
-1
fs/nsfs.c
··· 37 37 } 38 38 39 39 const struct dentry_operations ns_dentry_operations = { 40 - .d_delete = always_delete_dentry, 41 40 .d_dname = ns_dname, 42 41 .d_prune = stashed_dentry_prune, 43 42 };
+1 -1
fs/overlayfs/copy_up.c
··· 618 618 err = PTR_ERR(upper); 619 619 if (!IS_ERR(upper)) { 620 620 err = ovl_do_link(ofs, ovl_dentry_upper(c->dentry), udir, upper); 621 - dput(upper); 622 621 623 622 if (!err) { 624 623 /* Restore timestamps on parent (best effort) */ ··· 625 626 ovl_dentry_set_upper_alias(c->dentry); 626 627 ovl_dentry_update_reval(c->dentry, upper); 627 628 } 629 + dput(upper); 628 630 } 629 631 inode_unlock(udir); 630 632 if (err)
-1
fs/pidfs.c
··· 521 521 } 522 522 523 523 const struct dentry_operations pidfs_dentry_operations = { 524 - .d_delete = always_delete_dentry, 525 524 .d_dname = pidfs_dname, 526 525 .d_prune = stashed_dentry_prune, 527 526 };
+4
fs/smb/client/smb2ops.c
··· 4965 4965 next_buffer = (char *)cifs_buf_get(); 4966 4966 else 4967 4967 next_buffer = (char *)cifs_small_buf_get(); 4968 + if (!next_buffer) { 4969 + cifs_server_dbg(VFS, "No memory for (large) SMB response\n"); 4970 + return -1; 4971 + } 4968 4972 memcpy(next_buffer, buf + next_cmd, pdu_length - next_cmd); 4969 4973 } 4970 4974
+4 -2
include/linux/fs.h
··· 2975 2975 } else if (iocb->ki_flags & IOCB_DONTCACHE) { 2976 2976 struct address_space *mapping = iocb->ki_filp->f_mapping; 2977 2977 2978 - filemap_fdatawrite_range_kick(mapping, iocb->ki_pos, 2979 - iocb->ki_pos + count); 2978 + filemap_fdatawrite_range_kick(mapping, iocb->ki_pos - count, 2979 + iocb->ki_pos - 1); 2980 2980 } 2981 2981 2982 2982 return count; ··· 3452 3452 3453 3453 extern int readlink_copy(char __user *, int, const char *, int); 3454 3454 extern int page_readlink(struct dentry *, char __user *, int); 3455 + extern const char *page_get_link_raw(struct dentry *, struct inode *, 3456 + struct delayed_call *); 3455 3457 extern const char *page_get_link(struct dentry *, struct inode *, 3456 3458 struct delayed_call *); 3457 3459 extern void page_put_link(void *);
+4 -3
include/linux/mm_types.h
··· 875 875 */ 876 876 unsigned int nr_cpus_allowed; 877 877 /** 878 - * @max_nr_cid: Maximum number of concurrency IDs allocated. 878 + * @max_nr_cid: Maximum number of allowed concurrency 879 + * IDs allocated. 879 880 * 880 - * Track the highest number of concurrency IDs allocated for the 881 - * mm. 881 + * Track the highest number of allowed concurrency IDs 882 + * allocated for the mm. 882 883 */ 883 884 atomic_t max_nr_cid; 884 885 /**
+1
include/linux/nfs4.h
··· 47 47 struct nfs4_label { 48 48 uint32_t lfs; 49 49 uint32_t pi; 50 + u32 lsmid; 50 51 u32 len; 51 52 char *label; 52 53 };
+2
include/linux/nvme-tcp.h
··· 13 13 #define NVME_TCP_ADMIN_CCSZ SZ_8K 14 14 #define NVME_TCP_DIGEST_LENGTH 4 15 15 #define NVME_TCP_MIN_MAXH2CDATA 4096 16 + #define NVME_TCP_MIN_C2HTERM_PLEN 24 17 + #define NVME_TCP_MAX_C2HTERM_PLEN 152 16 18 17 19 enum nvme_tcp_pfv { 18 20 NVME_TCP_PFV_1_0 = 0x0,
+33 -7
include/linux/nvme.h
··· 199 199 #define NVME_NVM_IOSQES 6 200 200 #define NVME_NVM_IOCQES 4 201 201 202 + /* 203 + * Controller Configuration (CC) register (Offset 14h) 204 + */ 202 205 enum { 206 + /* Enable (EN): bit 0 */ 203 207 NVME_CC_ENABLE = 1 << 0, 204 208 NVME_CC_EN_SHIFT = 0, 209 + 210 + /* Bits 03:01 are reserved (NVMe Base Specification rev 2.1) */ 211 + 212 + /* I/O Command Set Selected (CSS): bits 06:04 */ 205 213 NVME_CC_CSS_SHIFT = 4, 206 - NVME_CC_MPS_SHIFT = 7, 207 - NVME_CC_AMS_SHIFT = 11, 208 - NVME_CC_SHN_SHIFT = 14, 209 - NVME_CC_IOSQES_SHIFT = 16, 210 - NVME_CC_IOCQES_SHIFT = 20, 214 + NVME_CC_CSS_MASK = 7 << NVME_CC_CSS_SHIFT, 211 215 NVME_CC_CSS_NVM = 0 << NVME_CC_CSS_SHIFT, 212 216 NVME_CC_CSS_CSI = 6 << NVME_CC_CSS_SHIFT, 213 - NVME_CC_CSS_MASK = 7 << NVME_CC_CSS_SHIFT, 217 + 218 + /* Memory Page Size (MPS): bits 10:07 */ 219 + NVME_CC_MPS_SHIFT = 7, 220 + NVME_CC_MPS_MASK = 0xf << NVME_CC_MPS_SHIFT, 221 + 222 + /* Arbitration Mechanism Selected (AMS): bits 13:11 */ 223 + NVME_CC_AMS_SHIFT = 11, 224 + NVME_CC_AMS_MASK = 7 << NVME_CC_AMS_SHIFT, 214 225 NVME_CC_AMS_RR = 0 << NVME_CC_AMS_SHIFT, 215 226 NVME_CC_AMS_WRRU = 1 << NVME_CC_AMS_SHIFT, 216 227 NVME_CC_AMS_VS = 7 << NVME_CC_AMS_SHIFT, 228 + 229 + /* Shutdown Notification (SHN): bits 15:14 */ 230 + NVME_CC_SHN_SHIFT = 14, 231 + NVME_CC_SHN_MASK = 3 << NVME_CC_SHN_SHIFT, 217 232 NVME_CC_SHN_NONE = 0 << NVME_CC_SHN_SHIFT, 218 233 NVME_CC_SHN_NORMAL = 1 << NVME_CC_SHN_SHIFT, 219 234 NVME_CC_SHN_ABRUPT = 2 << NVME_CC_SHN_SHIFT, 220 - NVME_CC_SHN_MASK = 3 << NVME_CC_SHN_SHIFT, 235 + 236 + /* I/O Submission Queue Entry Size (IOSQES): bits 19:16 */ 237 + NVME_CC_IOSQES_SHIFT = 16, 238 + NVME_CC_IOSQES_MASK = 0xf << NVME_CC_IOSQES_SHIFT, 221 239 NVME_CC_IOSQES = NVME_NVM_IOSQES << NVME_CC_IOSQES_SHIFT, 240 + 241 + /* I/O Completion Queue Entry Size (IOCQES): bits 23:20 */ 242 + NVME_CC_IOCQES_SHIFT = 20, 243 + NVME_CC_IOCQES_MASK = 0xf << NVME_CC_IOCQES_SHIFT, 222 244 NVME_CC_IOCQES = NVME_NVM_IOCQES << 
NVME_CC_IOCQES_SHIFT, 245 + 246 + /* Controller Ready Independent of Media Enable (CRIME): bit 24 */ 223 247 NVME_CC_CRIME = 1 << 24, 248 + 249 + /* Bits 25:31 are reserved (NVMe Base Specification rev 2.1) */ 224 250 }; 225 251 226 252 enum {
+2
include/linux/skmsg.h
··· 91 91 struct sk_psock_progs progs; 92 92 #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER) 93 93 struct strparser strp; 94 + u32 copied_seq; 95 + u32 ingress_bytes; 94 96 #endif 95 97 struct sk_buff_head ingress_skb; 96 98 struct list_head ingress_msg;
+2
include/linux/socket.h
··· 392 392 393 393 extern int move_addr_to_kernel(void __user *uaddr, int ulen, struct sockaddr_storage *kaddr); 394 394 extern int put_cmsg(struct msghdr*, int level, int type, int len, void *data); 395 + extern int put_cmsg_notrunc(struct msghdr *msg, int level, int type, int len, 396 + void *data); 395 397 396 398 struct timespec64; 397 399 struct __kernel_timespec;
+1 -2
include/linux/sunrpc/sched.h
··· 158 158 RPC_TASK_NEED_XMIT, 159 159 RPC_TASK_NEED_RECV, 160 160 RPC_TASK_MSG_PIN_WAIT, 161 - RPC_TASK_SIGNALLED, 162 161 }; 163 162 164 163 #define rpc_test_and_set_running(t) \ ··· 170 171 171 172 #define RPC_IS_ACTIVATED(t) test_bit(RPC_TASK_ACTIVE, &(t)->tk_runstate) 172 173 173 - #define RPC_SIGNALLED(t) test_bit(RPC_TASK_SIGNALLED, &(t)->tk_runstate) 174 + #define RPC_SIGNALLED(t) (READ_ONCE(task->tk_rpc_status) == -ERESTARTSYS) 174 175 175 176 /* 176 177 * Task priorities.
+1
include/net/sock.h
··· 1742 1742 struct sock *sk_alloc(struct net *net, int family, gfp_t priority, 1743 1743 struct proto *prot, int kern); 1744 1744 void sk_free(struct sock *sk); 1745 + void sk_net_refcnt_upgrade(struct sock *sk); 1745 1746 void sk_destruct(struct sock *sk); 1746 1747 struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority); 1747 1748 void sk_free_unlock_clone(struct sock *sk);
+2
include/net/strparser.h
··· 43 43 struct strp_callbacks { 44 44 int (*parse_msg)(struct strparser *strp, struct sk_buff *skb); 45 45 void (*rcv_msg)(struct strparser *strp, struct sk_buff *skb); 46 + int (*read_sock)(struct strparser *strp, read_descriptor_t *desc, 47 + sk_read_actor_t recv_actor); 46 48 int (*read_sock_done)(struct strparser *strp, int err); 47 49 void (*abort_parser)(struct strparser *strp, int err); 48 50 void (*lock)(struct strparser *strp);
+8
include/net/tcp.h
··· 745 745 /* Read 'sendfile()'-style from a TCP socket */ 746 746 int tcp_read_sock(struct sock *sk, read_descriptor_t *desc, 747 747 sk_read_actor_t recv_actor); 748 + int tcp_read_sock_noack(struct sock *sk, read_descriptor_t *desc, 749 + sk_read_actor_t recv_actor, bool noack, 750 + u32 *copied_seq); 748 751 int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor); 749 752 struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off); 750 753 void tcp_read_done(struct sock *sk, size_t len); ··· 2626 2623 #ifdef CONFIG_BPF_SYSCALL 2627 2624 int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore); 2628 2625 void tcp_bpf_clone(const struct sock *sk, struct sock *newsk); 2626 + #ifdef CONFIG_BPF_STREAM_PARSER 2627 + struct strparser; 2628 + int tcp_bpf_strp_read_sock(struct strparser *strp, read_descriptor_t *desc, 2629 + sk_read_actor_t recv_actor); 2630 + #endif /* CONFIG_BPF_STREAM_PARSER */ 2629 2631 #endif /* CONFIG_BPF_SYSCALL */ 2630 2632 2631 2633 #ifdef CONFIG_INET
+31
include/sound/cs35l56.h
··· 12 12 #include <linux/firmware/cirrus/cs_dsp.h> 13 13 #include <linux/regulator/consumer.h> 14 14 #include <linux/regmap.h> 15 + #include <linux/spi/spi.h> 15 16 #include <sound/cs-amp-lib.h> 16 17 17 18 #define CS35L56_DEVID 0x0000000 ··· 62 61 #define CS35L56_IRQ1_MASK_8 0x000E0AC 63 62 #define CS35L56_IRQ1_MASK_18 0x000E0D4 64 63 #define CS35L56_IRQ1_MASK_20 0x000E0DC 64 + #define CS35L56_DSP_MBOX_1_RAW 0x0011000 65 65 #define CS35L56_DSP_VIRTUAL1_MBOX_1 0x0011020 66 66 #define CS35L56_DSP_VIRTUAL1_MBOX_2 0x0011024 67 67 #define CS35L56_DSP_VIRTUAL1_MBOX_3 0x0011028 ··· 226 224 #define CS35L56_HALO_STATE_SHUTDOWN 1 227 225 #define CS35L56_HALO_STATE_BOOT_DONE 2 228 226 227 + #define CS35L56_MBOX_CMD_PING 0x0A000000 229 228 #define CS35L56_MBOX_CMD_AUDIO_PLAY 0x0B000001 230 229 #define CS35L56_MBOX_CMD_AUDIO_PAUSE 0x0B000002 231 230 #define CS35L56_MBOX_CMD_AUDIO_REINIT 0x0B000003 ··· 257 254 #define CS35L56_NUM_BULK_SUPPLIES 3 258 255 #define CS35L56_NUM_DSP_REGIONS 5 259 256 257 + /* Additional margin for SYSTEM_RESET to control port ready on SPI */ 258 + #define CS35L56_SPI_RESET_TO_PORT_READY_US (CS35L56_CONTROL_PORT_READY_US + 2500) 259 + 260 + struct cs35l56_spi_payload { 261 + __be32 addr; 262 + __be16 pad; 263 + __be32 value; 264 + } __packed; 265 + static_assert(sizeof(struct cs35l56_spi_payload) == 10); 266 + 260 267 struct cs35l56_base { 261 268 struct device *dev; 262 269 struct regmap *regmap; ··· 282 269 s8 cal_index; 283 270 struct cirrus_amp_cal_data cal_data; 284 271 struct gpio_desc *reset_gpio; 272 + struct cs35l56_spi_payload *spi_payload_buf; 285 273 }; 286 274 287 275 static inline bool cs35l56_is_otp_register(unsigned int reg) 288 276 { 289 277 return (reg >> 16) == 3; 278 + } 279 + 280 + static inline int cs35l56_init_config_for_spi(struct cs35l56_base *cs35l56, 281 + struct spi_device *spi) 282 + { 283 + cs35l56->spi_payload_buf = devm_kzalloc(&spi->dev, 284 + sizeof(*cs35l56->spi_payload_buf), 285 + GFP_KERNEL | GFP_DMA); 286 + if 
(!cs35l56->spi_payload_buf) 287 + return -ENOMEM; 288 + 289 + return 0; 290 + } 291 + 292 + static inline bool cs35l56_is_spi(struct cs35l56_base *cs35l56) 293 + { 294 + return IS_ENABLED(CONFIG_SPI_MASTER) && !!cs35l56->spi_payload_buf; 290 295 } 291 296 292 297 extern const struct regmap_config cs35l56_regmap_i2c;
+2
include/trace/events/afs.h
··· 174 174 EM(afs_cell_trace_get_queue_dns, "GET q-dns ") \ 175 175 EM(afs_cell_trace_get_queue_manage, "GET q-mng ") \ 176 176 EM(afs_cell_trace_get_queue_new, "GET q-new ") \ 177 + EM(afs_cell_trace_get_server, "GET server") \ 177 178 EM(afs_cell_trace_get_vol, "GET vol ") \ 178 179 EM(afs_cell_trace_insert, "INSERT ") \ 179 180 EM(afs_cell_trace_manage, "MANAGE ") \ ··· 183 182 EM(afs_cell_trace_put_destroy, "PUT destry") \ 184 183 EM(afs_cell_trace_put_queue_work, "PUT q-work") \ 185 184 EM(afs_cell_trace_put_queue_fail, "PUT q-fail") \ 185 + EM(afs_cell_trace_put_server, "PUT server") \ 186 186 EM(afs_cell_trace_put_vol, "PUT vol ") \ 187 187 EM(afs_cell_trace_see_source, "SEE source") \ 188 188 EM(afs_cell_trace_see_ws, "SEE ws ") \
+1 -2
include/trace/events/sunrpc.h
··· 360 360 { (1UL << RPC_TASK_ACTIVE), "ACTIVE" }, \ 361 361 { (1UL << RPC_TASK_NEED_XMIT), "NEED_XMIT" }, \ 362 362 { (1UL << RPC_TASK_NEED_RECV), "NEED_RECV" }, \ 363 - { (1UL << RPC_TASK_MSG_PIN_WAIT), "MSG_PIN_WAIT" }, \ 364 - { (1UL << RPC_TASK_SIGNALLED), "SIGNALLED" }) 363 + { (1UL << RPC_TASK_MSG_PIN_WAIT), "MSG_PIN_WAIT" }) 365 364 366 365 DECLARE_EVENT_CLASS(rpc_task_running, 367 366
+1 -1
include/uapi/linux/io_uring.h
··· 380 380 * result will be the number of buffers send, with 381 381 * the starting buffer ID in cqe->flags as per 382 382 * usual for provided buffer usage. The buffers 383 - * will be contigious from the starting buffer ID. 383 + * will be contiguous from the starting buffer ID. 384 384 */ 385 385 #define IORING_RECVSEND_POLL_FIRST (1U << 0) 386 386 #define IORING_RECV_MULTISHOT (1U << 1)
+6 -2
include/uapi/linux/landlock.h
··· 268 268 * ~~~~~~~~~~~~~~~~ 269 269 * 270 270 * These flags enable to restrict a sandboxed process to a set of network 271 - * actions. This is supported since the Landlock ABI version 4. 271 + * actions. 272 + * 273 + * This is supported since Landlock ABI version 4. 272 274 * 273 275 * The following access rights apply to TCP port numbers: 274 276 * ··· 293 291 * Setting a flag for a ruleset will isolate the Landlock domain to forbid 294 292 * connections to resources outside the domain. 295 293 * 294 + * This is supported since Landlock ABI version 6. 295 + * 296 296 * Scopes: 297 297 * 298 298 * - %LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET: Restrict a sandboxed process from 299 299 * connecting to an abstract UNIX socket created by a process outside the 300 - * related Landlock domain (e.g. a parent domain or a non-sandboxed process). 300 + * related Landlock domain (e.g., a parent domain or a non-sandboxed process). 301 301 * - %LANDLOCK_SCOPE_SIGNAL: Restrict a sandboxed process from sending a signal 302 302 * to another process outside the domain. 303 303 */
+18 -5
io_uring/io-wq.c
··· 64 64 65 65 union { 66 66 struct rcu_head rcu; 67 - struct work_struct work; 67 + struct delayed_work work; 68 68 }; 69 69 }; 70 70 ··· 770 770 } 771 771 } 772 772 773 + static void queue_create_worker_retry(struct io_worker *worker) 774 + { 775 + /* 776 + * We only bother retrying because there's a chance that the 777 + * failure to create a worker is due to some temporary condition 778 + * in the forking task (e.g. outstanding signal); give the task 779 + * some time to clear that condition. 780 + */ 781 + schedule_delayed_work(&worker->work, 782 + msecs_to_jiffies(worker->init_retries * 5)); 783 + } 784 + 773 785 static void create_worker_cont(struct callback_head *cb) 774 786 { 775 787 struct io_worker *worker; ··· 821 809 822 810 /* re-create attempts grab a new worker ref, drop the existing one */ 823 811 io_worker_release(worker); 824 - schedule_work(&worker->work); 812 + queue_create_worker_retry(worker); 825 813 } 826 814 827 815 static void io_workqueue_create(struct work_struct *work) 828 816 { 829 - struct io_worker *worker = container_of(work, struct io_worker, work); 817 + struct io_worker *worker = container_of(work, struct io_worker, 818 + work.work); 830 819 struct io_wq_acct *acct = io_wq_get_acct(worker); 831 820 832 821 if (!io_queue_worker_create(worker, acct, create_worker_cont)) ··· 868 855 kfree(worker); 869 856 goto fail; 870 857 } else { 871 - INIT_WORK(&worker->work, io_workqueue_create); 872 - schedule_work(&worker->work); 858 + INIT_DELAYED_WORK(&worker->work, io_workqueue_create); 859 + queue_create_worker_retry(worker); 873 860 } 874 861 875 862 return true;
+2
io_uring/io_uring.c
··· 2045 2045 req->opcode = 0; 2046 2046 return io_init_fail_req(req, -EINVAL); 2047 2047 } 2048 + opcode = array_index_nospec(opcode, IORING_OP_LAST); 2049 + 2048 2050 def = &io_issue_defs[opcode]; 2049 2051 if (unlikely(sqe_flags & ~SQE_COMMON_FLAGS)) { 2050 2052 /* enforce forwards compatibility on users */
-6
io_uring/rsrc.h
··· 4 4 5 5 #include <linux/lockdep.h> 6 6 7 - #define IO_NODE_ALLOC_CACHE_MAX 32 8 - 9 - #define IO_RSRC_TAG_TABLE_SHIFT (PAGE_SHIFT - 3) 10 - #define IO_RSRC_TAG_TABLE_MAX (1U << IO_RSRC_TAG_TABLE_SHIFT) 11 - #define IO_RSRC_TAG_TABLE_MASK (IO_RSRC_TAG_TABLE_MAX - 1) 12 - 13 7 enum { 14 8 IORING_RSRC_FILE = 0, 15 9 IORING_RSRC_BUFFER = 1,
+21 -9
io_uring/rw.c
··· 23 23 #include "poll.h" 24 24 #include "rw.h" 25 25 26 + static void io_complete_rw(struct kiocb *kiocb, long res); 27 + static void io_complete_rw_iopoll(struct kiocb *kiocb, long res); 28 + 26 29 struct io_rw { 27 30 /* NOTE: kiocb has the file as the first member, so don't do it here */ 28 31 struct kiocb kiocb; ··· 291 288 } 292 289 rw->kiocb.dio_complete = NULL; 293 290 rw->kiocb.ki_flags = 0; 291 + 292 + if (req->ctx->flags & IORING_SETUP_IOPOLL) 293 + rw->kiocb.ki_complete = io_complete_rw_iopoll; 294 + else 295 + rw->kiocb.ki_complete = io_complete_rw; 294 296 295 297 rw->addr = READ_ONCE(sqe->addr); 296 298 rw->len = READ_ONCE(sqe->len); ··· 571 563 smp_store_release(&req->iopoll_completed, 1); 572 564 } 573 565 574 - static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret) 566 + static inline void io_rw_done(struct io_kiocb *req, ssize_t ret) 575 567 { 568 + struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw); 569 + 576 570 /* IO was queued async, completion will happen later */ 577 571 if (ret == -EIOCBQUEUED) 578 572 return; ··· 596 586 } 597 587 } 598 588 599 - INDIRECT_CALL_2(kiocb->ki_complete, io_complete_rw_iopoll, 600 - io_complete_rw, kiocb, ret); 589 + if (req->ctx->flags & IORING_SETUP_IOPOLL) 590 + io_complete_rw_iopoll(&rw->kiocb, ret); 591 + else 592 + io_complete_rw(&rw->kiocb, ret); 601 593 } 602 594 603 595 static int kiocb_done(struct io_kiocb *req, ssize_t ret, ··· 610 598 611 599 if (ret >= 0 && req->flags & REQ_F_CUR_POS) 612 600 req->file->f_pos = rw->kiocb.ki_pos; 613 - if (ret >= 0 && (rw->kiocb.ki_complete == io_complete_rw)) { 601 + if (ret >= 0 && !(req->ctx->flags & IORING_SETUP_IOPOLL)) { 614 602 __io_complete_rw_common(req, ret); 615 603 /* 616 604 * Safe to call io_end from here as we're inline ··· 621 609 io_req_rw_cleanup(req, issue_flags); 622 610 return IOU_OK; 623 611 } else { 624 - io_rw_done(&rw->kiocb, ret); 612 + io_rw_done(req, ret); 625 613 } 626 614 627 615 return IOU_ISSUE_SKIP_COMPLETE; ··· 825 813 
if (ctx->flags & IORING_SETUP_IOPOLL) { 826 814 if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll) 827 815 return -EOPNOTSUPP; 828 - 829 816 kiocb->private = NULL; 830 817 kiocb->ki_flags |= IOCB_HIPRI; 831 - kiocb->ki_complete = io_complete_rw_iopoll; 832 818 req->iopoll_completed = 0; 833 819 if (ctx->flags & IORING_SETUP_HYBRID_IOPOLL) { 834 820 /* make sure every req only blocks once*/ ··· 836 826 } else { 837 827 if (kiocb->ki_flags & IOCB_HIPRI) 838 828 return -EINVAL; 839 - kiocb->ki_complete = io_complete_rw; 840 829 } 841 830 842 831 if (req->flags & REQ_F_HAS_METADATA) { ··· 913 904 } else if (ret == -EIOCBQUEUED) { 914 905 return IOU_ISSUE_SKIP_COMPLETE; 915 906 } else if (ret == req->cqe.res || ret <= 0 || !force_nonblock || 916 - (req->flags & REQ_F_NOWAIT) || !need_complete_io(req)) { 907 + (req->flags & REQ_F_NOWAIT) || !need_complete_io(req) || 908 + (issue_flags & IO_URING_F_MULTISHOT)) { 917 909 /* read all, failed, already did sync or don't want to retry */ 918 910 goto done; 919 911 } ··· 987 977 if (!io_file_can_poll(req)) 988 978 return -EBADFD; 989 979 980 + /* make it sync, multishot doesn't support async execution */ 981 + rw->kiocb.ki_complete = NULL; 990 982 ret = __io_read(req, issue_flags); 991 983 992 984 /*
+1 -1
kernel/bpf/arena.c
··· 39 39 */ 40 40 41 41 /* number of bytes addressable by LDX/STX insn with 16-bit 'off' field */ 42 - #define GUARD_SZ (1ull << sizeof_field(struct bpf_insn, off) * 8) 42 + #define GUARD_SZ round_up(1ull << sizeof_field(struct bpf_insn, off) * 8, PAGE_SIZE << 1) 43 43 #define KERN_VM_SZ (SZ_4G + GUARD_SZ) 44 44 45 45 struct bpf_arena {
+1 -1
kernel/bpf/bpf_cgrp_storage.c
··· 153 153 154 154 static void cgroup_storage_map_free(struct bpf_map *map) 155 155 { 156 - bpf_local_storage_map_free(map, &cgroup_cache, NULL); 156 + bpf_local_storage_map_free(map, &cgroup_cache, &bpf_cgrp_storage_busy); 157 157 } 158 158 159 159 /* *gfp_flags* is a hidden argument provided by the verifier */
+2
kernel/bpf/btf.c
··· 6507 6507 /* rxrpc */ 6508 6508 { "rxrpc_recvdata", 0x1 }, 6509 6509 { "rxrpc_resend", 0x10 }, 6510 + /* skb */ 6511 + {"kfree_skb", 0x1000}, 6510 6512 /* sunrpc */ 6511 6513 { "xs_stream_read_data", 0x1 }, 6512 6514 /* ... from xprt_cong_event event class */
-4
kernel/bpf/ringbuf.c
··· 268 268 /* allow writable mapping for the consumer_pos only */ 269 269 if (vma->vm_pgoff != 0 || vma->vm_end - vma->vm_start != PAGE_SIZE) 270 270 return -EPERM; 271 - } else { 272 - vm_flags_clear(vma, VM_MAYWRITE); 273 271 } 274 272 /* remap_vmalloc_range() checks size and offset constraints */ 275 273 return remap_vmalloc_range(vma, rb_map->rb, ··· 287 289 * position, and the ring buffer data itself. 288 290 */ 289 291 return -EPERM; 290 - } else { 291 - vm_flags_clear(vma, VM_MAYWRITE); 292 292 } 293 293 /* remap_vmalloc_range() checks size and offset constraints */ 294 294 return remap_vmalloc_range(vma, rb_map->rb, vma->vm_pgoff + RINGBUF_PGOFF);
+22 -21
kernel/bpf/syscall.c
··· 1035 1035 static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma) 1036 1036 { 1037 1037 struct bpf_map *map = filp->private_data; 1038 - int err; 1038 + int err = 0; 1039 1039 1040 1040 if (!map->ops->map_mmap || !IS_ERR_OR_NULL(map->record)) 1041 1041 return -ENOTSUPP; ··· 1059 1059 err = -EACCES; 1060 1060 goto out; 1061 1061 } 1062 + bpf_map_write_active_inc(map); 1062 1063 } 1064 + out: 1065 + mutex_unlock(&map->freeze_mutex); 1066 + if (err) 1067 + return err; 1063 1068 1064 1069 /* set default open/close callbacks */ 1065 1070 vma->vm_ops = &bpf_map_default_vmops; 1066 1071 vma->vm_private_data = map; 1067 1072 vm_flags_clear(vma, VM_MAYEXEC); 1073 + /* If mapping is read-only, then disallow potentially re-mapping with 1074 + * PROT_WRITE by dropping VM_MAYWRITE flag. This VM_MAYWRITE clearing 1075 + * means that as far as BPF map's memory-mapped VMAs are concerned, 1076 + * VM_WRITE and VM_MAYWRITE and equivalent, if one of them is set, 1077 + * both should be set, so we can forget about VM_MAYWRITE and always 1078 + * check just VM_WRITE 1079 + */ 1068 1080 if (!(vma->vm_flags & VM_WRITE)) 1069 - /* disallow re-mapping with PROT_WRITE */ 1070 1081 vm_flags_clear(vma, VM_MAYWRITE); 1071 1082 1072 1083 err = map->ops->map_mmap(map, vma); 1073 - if (err) 1074 - goto out; 1084 + if (err) { 1085 + if (vma->vm_flags & VM_WRITE) 1086 + bpf_map_write_active_dec(map); 1087 + } 1075 1088 1076 - if (vma->vm_flags & VM_MAYWRITE) 1077 - bpf_map_write_active_inc(map); 1078 - out: 1079 - mutex_unlock(&map->freeze_mutex); 1080 1089 return err; 1081 1090 } 1082 1091 ··· 1977 1968 return err; 1978 1969 } 1979 1970 1980 - #define MAP_LOOKUP_RETRIES 3 1981 - 1982 1971 int generic_map_lookup_batch(struct bpf_map *map, 1983 1972 const union bpf_attr *attr, 1984 1973 union bpf_attr __user *uattr) ··· 1986 1979 void __user *values = u64_to_user_ptr(attr->batch.values); 1987 1980 void __user *keys = u64_to_user_ptr(attr->batch.keys); 1988 1981 void *buf, 
*buf_prevkey, *prev_key, *key, *value; 1989 - int err, retry = MAP_LOOKUP_RETRIES; 1990 1982 u32 value_size, cp, max_count; 1983 + int err; 1991 1984 1992 1985 if (attr->batch.elem_flags & ~BPF_F_LOCK) 1993 1986 return -EINVAL; ··· 2033 2026 err = bpf_map_copy_value(map, key, value, 2034 2027 attr->batch.elem_flags); 2035 2028 2036 - if (err == -ENOENT) { 2037 - if (retry) { 2038 - retry--; 2039 - continue; 2040 - } 2041 - err = -EINTR; 2042 - break; 2043 - } 2029 + if (err == -ENOENT) 2030 + goto next_key; 2044 2031 2045 2032 if (err) 2046 2033 goto free_buf; ··· 2049 2048 goto free_buf; 2050 2049 } 2051 2050 2051 + cp++; 2052 + next_key: 2052 2053 if (!prev_key) 2053 2054 prev_key = buf_prevkey; 2054 2055 2055 2056 swap(prev_key, key); 2056 - retry = MAP_LOOKUP_RETRIES; 2057 - cp++; 2058 2057 cond_resched(); 2059 2058 } 2060 2059
+23 -8
kernel/bpf/verifier.c
··· 1501 1501 struct bpf_reference_state *s; 1502 1502 1503 1503 s = acquire_reference_state(env, insn_idx); 1504 + if (!s) 1505 + return -ENOMEM; 1504 1506 s->type = type; 1505 1507 s->id = id; 1506 1508 s->ptr = ptr; ··· 9151 9149 return 0; 9152 9150 } 9153 9151 9154 - /* Returns constant key value if possible, else negative error */ 9155 - static s64 get_constant_map_key(struct bpf_verifier_env *env, 9152 + /* Returns constant key value in `value` if possible, else negative error */ 9153 + static int get_constant_map_key(struct bpf_verifier_env *env, 9156 9154 struct bpf_reg_state *key, 9157 - u32 key_size) 9155 + u32 key_size, 9156 + s64 *value) 9158 9157 { 9159 9158 struct bpf_func_state *state = func(env, key); 9160 9159 struct bpf_reg_state *reg; ··· 9182 9179 /* First handle precisely tracked STACK_ZERO */ 9183 9180 for (i = off; i >= 0 && stype[i] == STACK_ZERO; i--) 9184 9181 zero_size++; 9185 - if (zero_size >= key_size) 9182 + if (zero_size >= key_size) { 9183 + *value = 0; 9186 9184 return 0; 9185 + } 9187 9186 9188 9187 /* Check that stack contains a scalar spill of expected size */ 9189 9188 if (!is_spilled_scalar_reg(&state->stack[spi])) ··· 9208 9203 if (err < 0) 9209 9204 return err; 9210 9205 9211 - return reg->var_off.value; 9206 + *value = reg->var_off.value; 9207 + return 0; 9212 9208 } 9209 + 9210 + static bool can_elide_value_nullness(enum bpf_map_type type); 9213 9211 9214 9212 static int check_func_arg(struct bpf_verifier_env *env, u32 arg, 9215 9213 struct bpf_call_arg_meta *meta, ··· 9362 9354 err = check_helper_mem_access(env, regno, key_size, BPF_READ, false, NULL); 9363 9355 if (err) 9364 9356 return err; 9365 - meta->const_map_key = get_constant_map_key(env, reg, key_size); 9366 - if (meta->const_map_key < 0 && meta->const_map_key != -EOPNOTSUPP) 9367 - return meta->const_map_key; 9357 + if (can_elide_value_nullness(meta->map_ptr->map_type)) { 9358 + err = get_constant_map_key(env, reg, key_size, &meta->const_map_key); 9359 + if (err 
< 0) { 9360 + meta->const_map_key = -1; 9361 + if (err == -EOPNOTSUPP) 9362 + err = 0; 9363 + else 9364 + return err; 9365 + } 9366 + } 9368 9367 break; 9369 9368 case ARG_PTR_TO_MAP_VALUE: 9370 9369 if (type_may_be_null(arg_type) && register_is_null(reg))
+11 -39
kernel/cgroup/dmem.c
··· 220 220 struct dmem_cgroup_pool_state *test_pool) 221 221 { 222 222 struct page_counter *climit; 223 - struct cgroup_subsys_state *css, *next_css; 223 + struct cgroup_subsys_state *css; 224 224 struct dmemcg_state *dmemcg_iter; 225 - struct dmem_cgroup_pool_state *pool, *parent_pool; 226 - bool found_descendant; 225 + struct dmem_cgroup_pool_state *pool, *found_pool; 227 226 228 227 climit = &limit_pool->cnt; 229 228 230 229 rcu_read_lock(); 231 - parent_pool = pool = limit_pool; 232 - css = &limit_pool->cs->css; 233 230 234 - /* 235 - * This logic is roughly equivalent to css_foreach_descendant_pre, 236 - * except we also track the parent pool to find out which pool we need 237 - * to calculate protection values for. 238 - * 239 - * We can stop the traversal once we find test_pool among the 240 - * descendants since we don't really care about any others. 241 - */ 242 - while (pool != test_pool) { 243 - next_css = css_next_child(NULL, css); 244 - if (next_css) { 245 - parent_pool = pool; 246 - } else { 247 - while (css != &limit_pool->cs->css) { 248 - next_css = css_next_child(css, css->parent); 249 - if (next_css) 250 - break; 251 - css = css->parent; 252 - parent_pool = pool_parent(parent_pool); 253 - } 254 - /* 255 - * We can only hit this when test_pool is not a 256 - * descendant of limit_pool. 
257 - */ 258 - if (WARN_ON_ONCE(css == &limit_pool->cs->css)) 259 - break; 260 - } 261 - css = next_css; 262 - 263 - found_descendant = false; 231 + css_for_each_descendant_pre(css, &limit_pool->cs->css) { 264 232 dmemcg_iter = container_of(css, struct dmemcg_state, css); 233 + found_pool = NULL; 265 234 266 235 list_for_each_entry_rcu(pool, &dmemcg_iter->pools, css_node) { 267 - if (pool_parent(pool) == parent_pool) { 268 - found_descendant = true; 236 + if (pool->region == limit_pool->region) { 237 + found_pool = pool; 269 238 break; 270 239 } 271 240 } 272 - if (!found_descendant) 241 + if (!found_pool) 273 242 continue; 274 243 275 244 page_counter_calculate_protection( 276 - climit, &pool->cnt, true); 245 + climit, &found_pool->cnt, true); 246 + 247 + if (found_pool == test_pool) 248 + break; 277 249 } 278 250 rcu_read_unlock(); 279 251 }
+1 -1
kernel/events/uprobes.c
··· 417 417 struct mm_struct *mm, short d) 418 418 { 419 419 pr_warn("ref_ctr %s failed for inode: 0x%lx offset: " 420 - "0x%llx ref_ctr_offset: 0x%llx of mm: 0x%pK\n", 420 + "0x%llx ref_ctr_offset: 0x%llx of mm: 0x%p\n", 421 421 d > 0 ? "increment" : "decrement", uprobe->inode->i_ino, 422 422 (unsigned long long) uprobe->offset, 423 423 (unsigned long long) uprobe->ref_ctr_offset, mm);
+8 -3
kernel/rseq.c
··· 507 507 return -EINVAL; 508 508 if (!access_ok(rseq, rseq_len)) 509 509 return -EFAULT; 510 - current->rseq = rseq; 511 - current->rseq_len = rseq_len; 512 - current->rseq_sig = sig; 513 510 #ifdef CONFIG_DEBUG_RSEQ 514 511 /* 515 512 * Initialize the in-kernel rseq fields copy for validation of ··· 518 521 get_user(rseq_kernel_fields(current)->mm_cid, &rseq->mm_cid)) 519 522 return -EFAULT; 520 523 #endif 524 + /* 525 + * Activate the registration by setting the rseq area address, length 526 + * and signature in the task struct. 527 + */ 528 + current->rseq = rseq; 529 + current->rseq_len = rseq_len; 530 + current->rseq_sig = sig; 531 + 521 532 /* 522 533 * If rseq was previously inactive, and has just been 523 534 * registered, ensure the cpu_id_start and cpu_id fields
+7 -4
kernel/sched/ext.c
··· 3117 3117 { 3118 3118 struct task_struct *prev = rq->curr; 3119 3119 struct task_struct *p; 3120 - bool prev_on_scx = prev->sched_class == &ext_sched_class; 3121 3120 bool keep_prev = rq->scx.flags & SCX_RQ_BAL_KEEP; 3122 3121 bool kick_idle = false; 3123 3122 ··· 3136 3137 * if pick_task_scx() is called without preceding balance_scx(). 3137 3138 */ 3138 3139 if (unlikely(rq->scx.flags & SCX_RQ_BAL_PENDING)) { 3139 - if (prev_on_scx) { 3140 + if (prev->scx.flags & SCX_TASK_QUEUED) { 3140 3141 keep_prev = true; 3141 3142 } else { 3142 3143 keep_prev = false; 3143 3144 kick_idle = true; 3144 3145 } 3145 - } else if (unlikely(keep_prev && !prev_on_scx)) { 3146 - /* only allowed during transitions */ 3146 + } else if (unlikely(keep_prev && 3147 + prev->sched_class != &ext_sched_class)) { 3148 + /* 3149 + * Can happen while enabling as SCX_RQ_BAL_PENDING assertion is 3150 + * conditional on scx_enabled() and may have been skipped. 3151 + */ 3147 3152 WARN_ON_ONCE(scx_ops_enable_state() == SCX_OPS_ENABLED); 3148 3153 keep_prev = false; 3149 3154 }
+22 -3
kernel/sched/sched.h
··· 3698 3698 { 3699 3699 struct cpumask *cidmask = mm_cidmask(mm); 3700 3700 struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid; 3701 - int cid = __this_cpu_read(pcpu_cid->recent_cid); 3701 + int cid, max_nr_cid, allowed_max_nr_cid; 3702 3702 3703 + /* 3704 + * After shrinking the number of threads or reducing the number 3705 + * of allowed cpus, reduce the value of max_nr_cid so expansion 3706 + * of cid allocation will preserve cache locality if the number 3707 + * of threads or allowed cpus increase again. 3708 + */ 3709 + max_nr_cid = atomic_read(&mm->max_nr_cid); 3710 + while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed), 3711 + atomic_read(&mm->mm_users))), 3712 + max_nr_cid > allowed_max_nr_cid) { 3713 + /* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */ 3714 + if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) { 3715 + max_nr_cid = allowed_max_nr_cid; 3716 + break; 3717 + } 3718 + } 3703 3719 /* Try to re-use recent cid. This improves cache locality. */ 3704 - if (!mm_cid_is_unset(cid) && !cpumask_test_and_set_cpu(cid, cidmask)) 3720 + cid = __this_cpu_read(pcpu_cid->recent_cid); 3721 + if (!mm_cid_is_unset(cid) && cid < max_nr_cid && 3722 + !cpumask_test_and_set_cpu(cid, cidmask)) 3705 3723 return cid; 3706 3724 /* 3707 3725 * Expand cid allocation if the maximum number of concurrency ··· 3727 3709 * and number of threads. Expanding cid allocation as much as 3728 3710 * possible improves cache locality. 3729 3711 */ 3730 - cid = atomic_read(&mm->max_nr_cid); 3712 + cid = max_nr_cid; 3731 3713 while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) { 3714 + /* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */ 3732 3715 if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1)) 3733 3716 continue; 3734 3717 if (!cpumask_test_and_set_cpu(cid, cidmask))
+5 -7
kernel/trace/fprobe.c
··· 403 403 lockdep_assert_held(&fprobe_mutex); 404 404 405 405 fprobe_graph_active--; 406 - if (!fprobe_graph_active) { 407 - /* Q: should we unregister it ? */ 406 + /* Q: should we unregister it ? */ 407 + if (!fprobe_graph_active) 408 408 unregister_ftrace_graph(&fprobe_graph_ops); 409 - return; 410 - } 411 409 412 - ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 1, 0); 410 + if (num) 411 + ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 1, 0); 413 412 } 414 413 415 414 static int symbols_cmp(const void *a, const void *b) ··· 678 679 } 679 680 del_fprobe_hash(fp); 680 681 681 - if (count) 682 - fprobe_graph_remove_ips(addrs, count); 682 + fprobe_graph_remove_ips(addrs, count); 683 683 684 684 kfree_rcu(hlist_array, rcu); 685 685 fp->hlist_array = NULL;
+25 -11
kernel/trace/ftrace.c
··· 3220 3220 * The filter_hash updates uses just the append_hash() function 3221 3221 * and the notrace_hash does not. 3222 3222 */ 3223 - static int append_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash) 3223 + static int append_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash, 3224 + int size_bits) 3224 3225 { 3225 3226 struct ftrace_func_entry *entry; 3226 3227 int size; 3227 3228 int i; 3228 3229 3229 - /* An empty hash does everything */ 3230 - if (ftrace_hash_empty(*hash)) 3231 - return 0; 3230 + if (*hash) { 3231 + /* An empty hash does everything */ 3232 + if (ftrace_hash_empty(*hash)) 3233 + return 0; 3234 + } else { 3235 + *hash = alloc_ftrace_hash(size_bits); 3236 + if (!*hash) 3237 + return -ENOMEM; 3238 + } 3232 3239 3233 3240 /* If new_hash has everything make hash have everything */ 3234 3241 if (ftrace_hash_empty(new_hash)) { ··· 3299 3292 /* Return a new hash that has a union of all @ops->filter_hash entries */ 3300 3293 static struct ftrace_hash *append_hashes(struct ftrace_ops *ops) 3301 3294 { 3302 - struct ftrace_hash *new_hash; 3295 + struct ftrace_hash *new_hash = NULL; 3303 3296 struct ftrace_ops *subops; 3297 + int size_bits; 3304 3298 int ret; 3305 3299 3306 - new_hash = alloc_ftrace_hash(ops->func_hash->filter_hash->size_bits); 3307 - if (!new_hash) 3308 - return NULL; 3300 + if (ops->func_hash->filter_hash) 3301 + size_bits = ops->func_hash->filter_hash->size_bits; 3302 + else 3303 + size_bits = FTRACE_HASH_DEFAULT_BITS; 3309 3304 3310 3305 list_for_each_entry(subops, &ops->subop_list, list) { 3311 - ret = append_hash(&new_hash, subops->func_hash->filter_hash); 3306 + ret = append_hash(&new_hash, subops->func_hash->filter_hash, size_bits); 3312 3307 if (ret < 0) { 3313 3308 free_ftrace_hash(new_hash); 3314 3309 return NULL; ··· 3319 3310 if (ftrace_hash_empty(new_hash)) 3320 3311 break; 3321 3312 } 3322 - return new_hash; 3313 + /* Can't return NULL as that means this failed */ 3314 + return new_hash ? 
: EMPTY_HASH; 3323 3315 } 3324 3316 3325 3317 /* Make @ops trace evenything except what all its subops do not trace */ ··· 3515 3505 filter_hash = alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->filter_hash); 3516 3506 if (!filter_hash) 3517 3507 return -ENOMEM; 3518 - ret = append_hash(&filter_hash, subops->func_hash->filter_hash); 3508 + ret = append_hash(&filter_hash, subops->func_hash->filter_hash, 3509 + size_bits); 3519 3510 if (ret < 0) { 3520 3511 free_ftrace_hash(filter_hash); 3521 3512 return ret; ··· 5717 5706 if (!entry) 5718 5707 return -ENOENT; 5719 5708 free_hash_entry(hash, entry); 5709 + return 0; 5710 + } else if (__ftrace_lookup_ip(hash, ip) != NULL) { 5711 + /* Already exists */ 5720 5712 return 0; 5721 5713 } 5722 5714
+9 -2
kernel/trace/trace_events.c
··· 1591 1591 return iter; 1592 1592 #endif 1593 1593 1594 + /* 1595 + * The iter is allocated in s_start() and passed via the 'v' 1596 + * parameter. To stop the iterator, NULL must be returned. But 1597 + * the return value is what the 'v' parameter in s_stop() receives 1598 + * and frees. Free iter here as it will no longer be used. 1599 + */ 1600 + kfree(iter); 1594 1601 return NULL; 1595 1602 } 1596 1603 ··· 1674 1667 } 1675 1668 #endif 1676 1669 1677 - static void s_stop(struct seq_file *m, void *p) 1670 + static void s_stop(struct seq_file *m, void *v) 1678 1671 { 1679 - kfree(p); 1672 + kfree(v); 1680 1673 t_stop(m, NULL); 1681 1674 } 1682 1675
+2 -4
kernel/trace/trace_functions.c
··· 216 216 217 217 parent_ip = function_get_true_parent_ip(parent_ip, fregs); 218 218 219 - trace_ctx = tracing_gen_ctx(); 219 + trace_ctx = tracing_gen_ctx_dec(); 220 220 221 221 data = this_cpu_ptr(tr->array_buffer.data); 222 222 if (!atomic_read(&data->disabled)) ··· 321 321 struct trace_array *tr = op->private; 322 322 struct trace_array_cpu *data; 323 323 unsigned int trace_ctx; 324 - unsigned long flags; 325 324 int bit; 326 325 327 326 if (unlikely(!tr->function_enabled)) ··· 346 347 if (is_repeat_check(tr, last_info, ip, parent_ip)) 347 348 goto out; 348 349 349 - local_save_flags(flags); 350 - trace_ctx = tracing_gen_ctx_flags(flags); 350 + trace_ctx = tracing_gen_ctx_dec(); 351 351 process_repeats(tr, ip, parent_ip, last_info, trace_ctx); 352 352 353 353 trace_function(tr, ip, parent_ip, trace_ctx);
+3 -1
kernel/workqueue.c
··· 2254 2254 * queues a new work item to a wq after destroy_workqueue(wq). 2255 2255 */ 2256 2256 if (unlikely(wq->flags & (__WQ_DESTROYING | __WQ_DRAINING) && 2257 - WARN_ON_ONCE(!is_chained_work(wq)))) 2257 + WARN_ONCE(!is_chained_work(wq), "workqueue: cannot queue %ps on wq %s\n", 2258 + work->func, wq->name))) { 2258 2259 return; 2260 + } 2259 2261 rcu_read_lock(); 2260 2262 retry: 2261 2263 /* pwq which will be used unless @work is executing elsewhere */
+1 -1
mm/filemap.c
··· 445 445 * filemap_fdatawrite_range_kick - start writeback on a range 446 446 * @mapping: target address_space 447 447 * @start: index to start writeback on 448 - * @end: last (non-inclusive) index for writeback 448 + * @end: last (inclusive) index for writeback 449 449 * 450 450 * This is a non-integrity writeback helper, to start writing back folios 451 451 * for the indicated range.
-2
mm/truncate.c
··· 548 548 549 549 VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); 550 550 551 - if (folio_test_dirty(folio)) 552 - return 0; 553 551 if (folio_mapped(folio)) 554 552 unmap_mapping_folio(folio); 555 553 BUG_ON(folio_mapped(folio));
+7 -2
net/bluetooth/l2cap_core.c
··· 632 632 test_bit(FLAG_HOLD_HCI_CONN, &chan->flags)) 633 633 hci_conn_hold(conn->hcon); 634 634 635 - list_add(&chan->list, &conn->chan_l); 635 + /* Append to the list since the order matters for ECRED */ 636 + list_add_tail(&chan->list, &conn->chan_l); 636 637 } 637 638 638 639 void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) ··· 3772 3771 struct l2cap_ecred_conn_rsp *rsp_flex = 3773 3772 container_of(&rsp->pdu.rsp, struct l2cap_ecred_conn_rsp, hdr); 3774 3773 3775 - if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags)) 3774 + /* Check if channel for outgoing connection or if it wasn't deferred 3775 + * since in those cases it must be skipped. 3776 + */ 3777 + if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags) || 3778 + !test_and_clear_bit(FLAG_DEFER_SETUP, &chan->flags)) 3776 3779 return; 3777 3780 3778 3781 /* Reset ident so only one response is sent */
+1 -4
net/bpf/test_run.c
··· 660 660 void __user *data_in = u64_to_user_ptr(kattr->test.data_in); 661 661 void *data; 662 662 663 - if (size < ETH_HLEN || size > PAGE_SIZE - headroom - tailroom) 663 + if (user_size < ETH_HLEN || user_size > PAGE_SIZE - headroom - tailroom) 664 664 return ERR_PTR(-EINVAL); 665 - 666 - if (user_size > size) 667 - return ERR_PTR(-EMSGSIZE); 668 665 669 666 size = SKB_DATA_ALIGN(size); 670 667 data = kzalloc(size + headroom + tailroom, GFP_USER);
+1 -12
net/core/bpf_sk_storage.c
··· 355 355 356 356 static bool bpf_sk_storage_tracing_allowed(const struct bpf_prog *prog) 357 357 { 358 - const struct btf *btf_vmlinux; 359 - const struct btf_type *t; 360 - const char *tname; 361 - u32 btf_id; 362 - 363 358 if (prog->aux->dst_prog) 364 359 return false; 365 360 ··· 369 374 return true; 370 375 case BPF_TRACE_FENTRY: 371 376 case BPF_TRACE_FEXIT: 372 - btf_vmlinux = bpf_get_btf_vmlinux(); 373 - if (IS_ERR_OR_NULL(btf_vmlinux)) 374 - return false; 375 - btf_id = prog->aux->attach_btf_id; 376 - t = btf_type_by_id(btf_vmlinux, btf_id); 377 - tname = btf_name_by_offset(btf_vmlinux, t->name_off); 378 - return !!strncmp(tname, "bpf_sk_storage", 377 + return !!strncmp(prog->aux->attach_func_name, "bpf_sk_storage", 379 378 strlen("bpf_sk_storage")); 380 379 default: 381 380 return false;
+4 -10
net/core/dev.c
··· 2142 2142 struct notifier_block *nb, 2143 2143 struct netdev_net_notifier *nn) 2144 2144 { 2145 - struct net *net = dev_net(dev); 2146 2145 int err; 2147 2146 2148 - /* rtnl_net_lock() assumes dev is not yet published by 2149 - * register_netdevice(). 2150 - */ 2151 - DEBUG_NET_WARN_ON_ONCE(!list_empty(&dev->dev_list)); 2152 - 2153 - rtnl_net_lock(net); 2154 - err = __register_netdevice_notifier_net(net, nb, false); 2147 + rtnl_net_dev_lock(dev); 2148 + err = __register_netdevice_notifier_net(dev_net(dev), nb, false); 2155 2149 if (!err) { 2156 2150 nn->nb = nb; 2157 2151 list_add(&nn->list, &dev->net_notifier_list); 2158 2152 } 2159 - rtnl_net_unlock(net); 2153 + rtnl_net_dev_unlock(dev); 2160 2154 2161 2155 return err; 2162 2156 } ··· 4759 4765 * we have to raise NET_RX_SOFTIRQ. 4760 4766 */ 4761 4767 if (!sd->in_net_rx_action) 4762 - __raise_softirq_irqoff(NET_RX_SOFTIRQ); 4768 + raise_softirq_irqoff(NET_RX_SOFTIRQ); 4763 4769 } 4764 4770 4765 4771 #ifdef CONFIG_RPS
+1
net/core/gro.c
··· 652 652 skb->pkt_type = PACKET_HOST; 653 653 654 654 skb->encapsulation = 0; 655 + skb->ip_summed = CHECKSUM_NONE; 655 656 skb_shinfo(skb)->gso_type = 0; 656 657 skb_shinfo(skb)->gso_size = 0; 657 658 if (unlikely(skb->slow_gro)) {
+10
net/core/scm.c
··· 282 282 } 283 283 EXPORT_SYMBOL(put_cmsg); 284 284 285 + int put_cmsg_notrunc(struct msghdr *msg, int level, int type, int len, 286 + void *data) 287 + { 288 + /* Don't produce truncated CMSGs */ 289 + if (!msg->msg_control || msg->msg_controllen < CMSG_LEN(len)) 290 + return -ETOOSMALL; 291 + 292 + return put_cmsg(msg, level, type, len, data); 293 + } 294 + 285 295 void put_cmsg_scm_timestamping64(struct msghdr *msg, struct scm_timestamping_internal *tss_internal) 286 296 { 287 297 struct scm_timestamping64 tss;
+1 -1
net/core/skbuff.c
··· 6148 6148 skb->offload_fwd_mark = 0; 6149 6149 skb->offload_l3_fwd_mark = 0; 6150 6150 #endif 6151 + ipvs_reset(skb); 6151 6152 6152 6153 if (!xnet) 6153 6154 return; 6154 6155 6155 - ipvs_reset(skb); 6156 6156 skb->mark = 0; 6157 6157 skb_clear_tstamp(skb); 6158 6158 }
+7
net/core/skmsg.c
··· 549 549 return num_sge; 550 550 } 551 551 552 + #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER) 553 + psock->ingress_bytes += len; 554 + #endif 552 555 copied = len; 553 556 msg->sg.start = 0; 554 557 msg->sg.size = copied; ··· 1147 1144 if (!ret) 1148 1145 sk_psock_set_state(psock, SK_PSOCK_RX_STRP_ENABLED); 1149 1146 1147 + if (sk_is_tcp(sk)) { 1148 + psock->strp.cb.read_sock = tcp_bpf_strp_read_sock; 1149 + psock->copied_seq = tcp_sk(sk)->copied_seq; 1150 + } 1150 1151 return ret; 1151 1152 } 1152 1153
+22 -5
net/core/sock.c
··· 2261 2261 get_net_track(net, &sk->ns_tracker, priority); 2262 2262 sock_inuse_add(net, 1); 2263 2263 } else { 2264 + net_passive_inc(net); 2264 2265 __netns_tracker_alloc(net, &sk->ns_tracker, 2265 2266 false, priority); 2266 2267 } ··· 2286 2285 static void __sk_destruct(struct rcu_head *head) 2287 2286 { 2288 2287 struct sock *sk = container_of(head, struct sock, sk_rcu); 2288 + struct net *net = sock_net(sk); 2289 2289 struct sk_filter *filter; 2290 2290 2291 2291 if (sk->sk_destruct) ··· 2318 2316 put_cred(sk->sk_peer_cred); 2319 2317 put_pid(sk->sk_peer_pid); 2320 2318 2321 - if (likely(sk->sk_net_refcnt)) 2322 - put_net_track(sock_net(sk), &sk->ns_tracker); 2323 - else 2324 - __netns_tracker_free(sock_net(sk), &sk->ns_tracker, false); 2325 - 2319 + if (likely(sk->sk_net_refcnt)) { 2320 + put_net_track(net, &sk->ns_tracker); 2321 + } else { 2322 + __netns_tracker_free(net, &sk->ns_tracker, false); 2323 + net_passive_dec(net); 2324 + } 2326 2325 sk_prot_free(sk->sk_prot_creator, sk); 2327 2326 } 2327 + 2328 + void sk_net_refcnt_upgrade(struct sock *sk) 2329 + { 2330 + struct net *net = sock_net(sk); 2331 + 2332 + WARN_ON_ONCE(sk->sk_net_refcnt); 2333 + __netns_tracker_free(net, &sk->ns_tracker, false); 2334 + net_passive_dec(net); 2335 + sk->sk_net_refcnt = 1; 2336 + get_net_track(net, &sk->ns_tracker, GFP_KERNEL); 2337 + sock_inuse_add(net, 1); 2338 + } 2339 + EXPORT_SYMBOL_GPL(sk_net_refcnt_upgrade); 2328 2340 2329 2341 void sk_destruct(struct sock *sk) 2330 2342 { ··· 2436 2420 * is not properly dismantling its kernel sockets at netns 2437 2421 * destroy time. 2438 2422 */ 2423 + net_passive_inc(sock_net(newsk)); 2439 2424 __netns_tracker_alloc(sock_net(newsk), &newsk->ns_tracker, 2440 2425 false, priority); 2441 2426 }
+4 -1
net/core/sock_map.c
··· 303 303 304 304 write_lock_bh(&sk->sk_callback_lock); 305 305 if (stream_parser && stream_verdict && !psock->saved_data_ready) { 306 - ret = sk_psock_init_strp(sk, psock); 306 + if (sk_is_tcp(sk)) 307 + ret = sk_psock_init_strp(sk, psock); 308 + else 309 + ret = -EOPNOTSUPP; 307 310 if (ret) { 308 311 write_unlock_bh(&sk->sk_callback_lock); 309 312 sk_psock_put(sk, psock);
+2 -1
net/core/sysctl_net_core.c
··· 34 34 static int min_rcvbuf = SOCK_MIN_RCVBUF; 35 35 static int max_skb_frags = MAX_SKB_FRAGS; 36 36 static int min_mem_pcpu_rsv = SK_MEMORY_PCPU_RESERVE; 37 + static int netdev_budget_usecs_min = 2 * USEC_PER_SEC / HZ; 37 38 38 39 static int net_msg_warn; /* Unused, but still a sysctl */ 39 40 ··· 588 587 .maxlen = sizeof(unsigned int), 589 588 .mode = 0644, 590 589 .proc_handler = proc_dointvec_minmax, 591 - .extra1 = SYSCTL_ZERO, 590 + .extra1 = &netdev_budget_usecs_min, 592 591 }, 593 592 { 594 593 .procname = "fb_tunnels_only_for_init_net",
+16
net/ethtool/common.c
··· 6 6 #include <linux/rtnetlink.h> 7 7 #include <linux/ptp_clock_kernel.h> 8 8 #include <linux/phy_link_topology.h> 9 + #include <net/netdev_queues.h> 9 10 10 11 #include "netlink.h" 11 12 #include "common.h" ··· 812 811 * mean the ops attached to a netdev later on are sane. 813 812 */ 814 813 return 0; 814 + } 815 + 816 + void ethtool_ringparam_get_cfg(struct net_device *dev, 817 + struct ethtool_ringparam *param, 818 + struct kernel_ethtool_ringparam *kparam, 819 + struct netlink_ext_ack *extack) 820 + { 821 + memset(param, 0, sizeof(*param)); 822 + memset(kparam, 0, sizeof(*kparam)); 823 + 824 + param->cmd = ETHTOOL_GRINGPARAM; 825 + dev->ethtool_ops->get_ringparam(dev, param, kparam, extack); 826 + 827 + /* Driver gives us current state, we want to return current config */ 828 + kparam->tcp_data_split = dev->cfg->hds_config; 815 829 } 816 830 817 831 static void ethtool_init_tsinfo(struct kernel_ethtool_ts_info *info)
+6
net/ethtool/common.h
··· 51 51 struct ethtool_channels channels, 52 52 struct genl_info *info); 53 53 int ethtool_check_rss_ctx_busy(struct net_device *dev, u32 rss_context); 54 + 55 + void ethtool_ringparam_get_cfg(struct net_device *dev, 56 + struct ethtool_ringparam *param, 57 + struct kernel_ethtool_ringparam *kparam, 58 + struct netlink_ext_ack *extack); 59 + 54 60 int __ethtool_get_ts_info(struct net_device *dev, struct kernel_ethtool_ts_info *info); 55 61 int ethtool_get_ts_info_by_phc(struct net_device *dev, 56 62 struct kernel_ethtool_ts_info *info,
+2 -2
net/ethtool/ioctl.c
··· 2065 2065 2066 2066 static int ethtool_set_ringparam(struct net_device *dev, void __user *useraddr) 2067 2067 { 2068 - struct ethtool_ringparam ringparam, max = { .cmd = ETHTOOL_GRINGPARAM }; 2069 2068 struct kernel_ethtool_ringparam kernel_ringparam; 2069 + struct ethtool_ringparam ringparam, max; 2070 2070 int ret; 2071 2071 2072 2072 if (!dev->ethtool_ops->set_ringparam || !dev->ethtool_ops->get_ringparam) ··· 2075 2075 if (copy_from_user(&ringparam, useraddr, sizeof(ringparam))) 2076 2076 return -EFAULT; 2077 2077 2078 - dev->ethtool_ops->get_ringparam(dev, &max, &kernel_ringparam, NULL); 2078 + ethtool_ringparam_get_cfg(dev, &max, &kernel_ringparam, NULL); 2079 2079 2080 2080 /* ensure new ring parameters are within the maximums */ 2081 2081 if (ringparam.rx_pending > max.rx_max_pending ||
+4 -5
net/ethtool/rings.c
··· 215 215 static int 216 216 ethnl_set_rings(struct ethnl_req_info *req_info, struct genl_info *info) 217 217 { 218 - struct kernel_ethtool_ringparam kernel_ringparam = {}; 219 - struct ethtool_ringparam ringparam = {}; 218 + struct kernel_ethtool_ringparam kernel_ringparam; 220 219 struct net_device *dev = req_info->dev; 220 + struct ethtool_ringparam ringparam; 221 221 struct nlattr **tb = info->attrs; 222 222 const struct nlattr *err_attr; 223 223 bool mod = false; 224 224 int ret; 225 225 226 - dev->ethtool_ops->get_ringparam(dev, &ringparam, 227 - &kernel_ringparam, info->extack); 228 - kernel_ringparam.tcp_data_split = dev->cfg->hds_config; 226 + ethtool_ringparam_get_cfg(dev, &ringparam, &kernel_ringparam, 227 + info->extack); 229 228 230 229 ethnl_update_u32(&ringparam.rx_pending, tb[ETHTOOL_A_RINGS_RX], &mod); 231 230 ethnl_update_u32(&ringparam.rx_mini_pending,
+34 -21
net/ipv4/tcp.c
··· 1573 1573 * or for 'peeking' the socket using this routine 1574 1574 * (although both would be easy to implement). 1575 1575 */ 1576 - int tcp_read_sock(struct sock *sk, read_descriptor_t *desc, 1577 - sk_read_actor_t recv_actor) 1576 + static int __tcp_read_sock(struct sock *sk, read_descriptor_t *desc, 1577 + sk_read_actor_t recv_actor, bool noack, 1578 + u32 *copied_seq) 1578 1579 { 1579 1580 struct sk_buff *skb; 1580 1581 struct tcp_sock *tp = tcp_sk(sk); 1581 - u32 seq = tp->copied_seq; 1582 + u32 seq = *copied_seq; 1582 1583 u32 offset; 1583 1584 int copied = 0; 1584 1585 ··· 1633 1632 tcp_eat_recv_skb(sk, skb); 1634 1633 if (!desc->count) 1635 1634 break; 1636 - WRITE_ONCE(tp->copied_seq, seq); 1635 + WRITE_ONCE(*copied_seq, seq); 1637 1636 } 1638 - WRITE_ONCE(tp->copied_seq, seq); 1637 + WRITE_ONCE(*copied_seq, seq); 1638 + 1639 + if (noack) 1640 + goto out; 1639 1641 1640 1642 tcp_rcv_space_adjust(sk); 1641 1643 ··· 1647 1643 tcp_recv_skb(sk, seq, &offset); 1648 1644 tcp_cleanup_rbuf(sk, copied); 1649 1645 } 1646 + out: 1650 1647 return copied; 1651 1648 } 1649 + 1650 + int tcp_read_sock(struct sock *sk, read_descriptor_t *desc, 1651 + sk_read_actor_t recv_actor) 1652 + { 1653 + return __tcp_read_sock(sk, desc, recv_actor, false, 1654 + &tcp_sk(sk)->copied_seq); 1655 + } 1652 1656 EXPORT_SYMBOL(tcp_read_sock); 1657 + 1658 + int tcp_read_sock_noack(struct sock *sk, read_descriptor_t *desc, 1659 + sk_read_actor_t recv_actor, bool noack, 1660 + u32 *copied_seq) 1661 + { 1662 + return __tcp_read_sock(sk, desc, recv_actor, noack, copied_seq); 1663 + } 1653 1664 1654 1665 int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor) 1655 1666 { ··· 2465 2446 */ 2466 2447 memset(&dmabuf_cmsg, 0, sizeof(dmabuf_cmsg)); 2467 2448 dmabuf_cmsg.frag_size = copy; 2468 - err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_LINEAR, 2469 - sizeof(dmabuf_cmsg), &dmabuf_cmsg); 2470 - if (err || msg->msg_flags & MSG_CTRUNC) { 2471 - msg->msg_flags &= ~MSG_CTRUNC; 2472 - if (!err) 2473 - err = -ETOOSMALL; 2449 + err = put_cmsg_notrunc(msg, SOL_SOCKET, 2450 + SO_DEVMEM_LINEAR, 2451 + sizeof(dmabuf_cmsg), 2452 + &dmabuf_cmsg); 2453 + if (err) 2474 2454 goto out; 2475 - } 2476 2455 2477 2456 sent += copy; 2478 2457 ··· 2529 2512 offset += copy; 2530 2513 remaining_len -= copy; 2531 2514 2532 - err = put_cmsg(msg, SOL_SOCKET, 2533 - SO_DEVMEM_DMABUF, 2534 - sizeof(dmabuf_cmsg), 2535 - &dmabuf_cmsg); 2536 - if (err || msg->msg_flags & MSG_CTRUNC) { 2537 - msg->msg_flags &= ~MSG_CTRUNC; 2538 - if (!err) 2539 - err = -ETOOSMALL; 2515 + err = put_cmsg_notrunc(msg, SOL_SOCKET, 2516 + SO_DEVMEM_DMABUF, 2517 + sizeof(dmabuf_cmsg), 2518 + &dmabuf_cmsg); 2519 + if (err) 2540 2520 goto out; 2541 - } 2542 2521 2543 2522 atomic_long_inc(&niov->pp_ref_count); 2544 2523 tcp_xa_pool.netmems[tcp_xa_pool.idx++] = skb_frag_netmem(frag);
+36
net/ipv4/tcp_bpf.c
··· 646 646 ops->sendmsg == tcp_sendmsg ? 0 : -ENOTSUPP; 647 647 } 648 648 649 + #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER) 650 + int tcp_bpf_strp_read_sock(struct strparser *strp, read_descriptor_t *desc, 651 + sk_read_actor_t recv_actor) 652 + { 653 + struct sock *sk = strp->sk; 654 + struct sk_psock *psock; 655 + struct tcp_sock *tp; 656 + int copied = 0; 657 + 658 + tp = tcp_sk(sk); 659 + rcu_read_lock(); 660 + psock = sk_psock(sk); 661 + if (WARN_ON_ONCE(!psock)) { 662 + desc->error = -EINVAL; 663 + goto out; 664 + } 665 + 666 + psock->ingress_bytes = 0; 667 + copied = tcp_read_sock_noack(sk, desc, recv_actor, true, 668 + &psock->copied_seq); 669 + if (copied < 0) 670 + goto out; 671 + /* recv_actor may redirect skb to another socket (SK_REDIRECT) or 672 + * just put skb into ingress queue of current socket (SK_PASS). 673 + * For SK_REDIRECT, we need to ack the frame immediately but for 674 + * SK_PASS, we want to delay the ack until tcp_bpf_recvmsg_parser(). 675 + */ 676 + tp->copied_seq = psock->copied_seq - psock->ingress_bytes; 677 + tcp_rcv_space_adjust(sk); 678 + __tcp_cleanup_rbuf(sk, copied - psock->ingress_bytes); 679 + out: 680 + rcu_read_unlock(); 681 + return copied; 682 + } 683 + #endif /* CONFIG_BPF_STREAM_PARSER */ 684 + 649 685 int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore) 650 686 { 651 687 int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
+4 -6
net/ipv4/tcp_minisocks.c
··· 817 817 818 818 /* In sequence, PAWS is OK. */ 819 819 820 - /* TODO: We probably should defer ts_recent change once 821 - * we take ownership of @req. 822 - */ 823 - if (tmp_opt.saw_tstamp && !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt)) 824 - WRITE_ONCE(req->ts_recent, tmp_opt.rcv_tsval); 825 - 826 820 if (TCP_SKB_CB(skb)->seq == tcp_rsk(req)->rcv_isn) { 827 821 /* Truncate SYN, it is out of window starting 828 822 at tcp_rsk(req)->rcv_isn + 1. */ ··· 864 870 req, &own_req); 865 871 if (!child) 866 872 goto listen_overflow; 873 + 874 + if (own_req && tmp_opt.saw_tstamp && 875 + !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt)) 876 + tcp_sk(child)->rx_opt.ts_recent = tmp_opt.rcv_tsval; 867 877 868 878 if (own_req && rsk_drop_req(req)) { 869 879 reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req);
+12 -2
net/ipv6/rpl_iptunnel.c
··· 262 262 { 263 263 struct dst_entry *orig_dst = skb_dst(skb); 264 264 struct dst_entry *dst = NULL; 265 + struct lwtunnel_state *lwtst; 265 266 struct rpl_lwt *rlwt; 266 267 int err; 267 268 268 - rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate); 269 + /* We cannot dereference "orig_dst" once ip6_route_input() or 270 + * skb_dst_drop() is called. However, in order to detect a dst loop, we 271 + * need the address of its lwtstate. So, save the address of lwtstate 272 + * now and use it later as a comparison. 273 + */ 274 + lwtst = orig_dst->lwtstate; 275 + 276 + rlwt = rpl_lwt_lwtunnel(lwtst); 269 277 270 278 local_bh_disable(); 271 279 dst = dst_cache_get(&rlwt->cache); ··· 288 280 if (!dst) { 289 281 ip6_route_input(skb); 290 282 dst = skb_dst(skb); 291 - if (!dst->error) { 283 + 284 + /* cache only if we don't create a dst reference loop */ 285 + if (!dst->error && lwtst != dst->lwtstate) { 292 286 local_bh_disable(); 293 287 dst_cache_set_ip6(&rlwt->cache, dst, 294 288 &ipv6_hdr(skb)->saddr);
+12 -2
net/ipv6/seg6_iptunnel.c
··· 472 472 { 473 473 struct dst_entry *orig_dst = skb_dst(skb); 474 474 struct dst_entry *dst = NULL; 475 + struct lwtunnel_state *lwtst; 475 476 struct seg6_lwt *slwt; 476 477 int err; 477 478 478 - slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate); 479 + /* We cannot dereference "orig_dst" once ip6_route_input() or 480 + * skb_dst_drop() is called. However, in order to detect a dst loop, we 481 + * need the address of its lwtstate. So, save the address of lwtstate 482 + * now and use it later as a comparison. 483 + */ 484 + lwtst = orig_dst->lwtstate; 485 + 486 + slwt = seg6_lwt_lwtunnel(lwtst); 479 487 480 488 local_bh_disable(); 481 489 dst = dst_cache_get(&slwt->cache); ··· 498 490 if (!dst) { 499 491 ip6_route_input(skb); 500 492 dst = skb_dst(skb); 501 - if (!dst->error) { 493 + 494 + /* cache only if we don't create a dst reference loop */ 495 + if (!dst->error && lwtst != dst->lwtstate) { 502 496 local_bh_disable(); 503 497 dst_cache_set_ip6(&slwt->cache, dst, 504 498 &ipv6_hdr(skb)->saddr);
-5
net/mptcp/pm_netlink.c
··· 1521 1521 if (mptcp_pm_is_userspace(msk)) 1522 1522 goto next; 1523 1523 1524 - if (list_empty(&msk->conn_list)) { 1525 - mptcp_pm_remove_anno_addr(msk, addr, false); 1526 - goto next; 1527 - } 1528 - 1529 1524 lock_sock(sk); 1530 1525 remove_subflow = mptcp_lookup_subflow_by_saddr(&msk->conn_list, addr); 1531 1526 mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
+2
net/mptcp/protocol.h
··· 1194 1194 pr_debug("TCP fallback already done (msk=%p)\n", msk); 1195 1195 return; 1196 1196 } 1197 + if (WARN_ON_ONCE(!READ_ONCE(msk->allow_infinite_fallback))) 1198 + return; 1197 1199 set_bit(MPTCP_FALLBACK_DONE, &msk->flags); 1198 1200 } 1199 1201
+2 -18
net/mptcp/subflow.c
··· 1139 1139 if (data_len == 0) { 1140 1140 pr_debug("infinite mapping received\n"); 1141 1141 MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX); 1142 - subflow->map_data_len = 0; 1143 1142 return MAPPING_INVALID; 1144 1143 } 1145 1144 ··· 1297 1298 mptcp_schedule_work(sk); 1298 1299 } 1299 1300 1300 - static bool subflow_can_fallback(struct mptcp_subflow_context *subflow) 1301 - { 1302 - struct mptcp_sock *msk = mptcp_sk(subflow->conn); 1303 - 1304 - if (subflow->mp_join) 1305 - return false; 1306 - else if (READ_ONCE(msk->csum_enabled)) 1307 - return !subflow->valid_csum_seen; 1308 - else 1309 - return READ_ONCE(msk->allow_infinite_fallback); 1310 - } 1311 - 1312 1301 static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk) 1313 1302 { 1314 1303 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); ··· 1392 1405 return true; 1393 1406 } 1394 1407 1395 - if (!subflow_can_fallback(subflow) && subflow->map_data_len) { 1408 + if (!READ_ONCE(msk->allow_infinite_fallback)) { 1396 1409 /* fatal protocol error, close the socket. 1397 1410 * subflow_error_report() will introduce the appropriate barriers 1398 1411 */ ··· 1771 1784 * needs it. 1772 1785 * Update ns_tracker to current stack trace and refcounted tracker. 1773 1786 */ 1774 - __netns_tracker_free(net, &sf->sk->ns_tracker, false); 1775 - sf->sk->sk_net_refcnt = 1; 1776 - get_net_track(net, &sf->sk->ns_tracker, GFP_KERNEL); 1777 - sock_inuse_add(net, 1); 1787 + sk_net_refcnt_upgrade(sf->sk); 1778 1788 err = tcp_set_ulp(sf->sk, "mptcp"); 1779 1789 if (err) 1780 1790 goto err_free;
-10
net/netlink/af_netlink.c
··· 796 796 797 797 sock_prot_inuse_add(sock_net(sk), &netlink_proto, -1); 798 798 799 - /* Because struct net might disappear soon, do not keep a pointer. */ 800 - if (!sk->sk_net_refcnt && sock_net(sk) != &init_net) { 801 - __netns_tracker_free(sock_net(sk), &sk->ns_tracker, false); 802 - /* Because of deferred_put_nlk_sk and use of work queue, 803 - * it is possible netns will be freed before this socket. 804 - */ 805 - sock_net_set(sk, &init_net); 806 - __netns_tracker_alloc(&init_net, &sk->ns_tracker, 807 - false, GFP_KERNEL); 808 - } 809 799 call_rcu(&nlk->rcu, deferred_put_nlk_sk); 810 800 return 0; 811 801 }
+2 -6
net/rds/tcp.c
··· 504 504 release_sock(sk); 505 505 return false; 506 506 } 507 - /* Update ns_tracker to current stack trace and refcounted tracker */ 508 - __netns_tracker_free(net, &sk->ns_tracker, false); 509 - 510 - sk->sk_net_refcnt = 1; 511 - netns_tracker_alloc(net, &sk->ns_tracker, GFP_KERNEL); 512 - sock_inuse_add(net, 1); 507 + sk_net_refcnt_upgrade(sk); 508 + put_net(net); 513 509 } 514 510 rtn = net_generic(net, rds_tcp_netid); 515 511 if (rtn->sndbuf_size > 0) {
-1
net/rxrpc/ar-internal.h
··· 360 360 u8 pmtud_jumbo; /* Max jumbo packets for the MTU */ 361 361 bool ackr_adv_pmtud; /* T if the peer advertises path-MTU */ 362 362 unsigned int ackr_max_data; /* Maximum data advertised by peer */ 363 - seqcount_t mtu_lock; /* Lockless MTU access management */ 364 363 unsigned int if_mtu; /* Local interface MTU (- hdrsize) for this peer */ 365 364 unsigned int max_data; /* Maximum packet data capacity for this peer */ 366 365 unsigned short hdrsize; /* header size (IP + UDP + RxRPC) */
-2
net/rxrpc/input.c
··· 810 810 if (max_mtu < peer->max_data) { 811 811 trace_rxrpc_pmtud_reduce(peer, sp->hdr.serial, max_mtu, 812 812 rxrpc_pmtud_reduce_ack); 813 - write_seqcount_begin(&peer->mtu_lock); 814 813 peer->max_data = max_mtu; 815 - write_seqcount_end(&peer->mtu_lock); 816 814 } 817 815 818 816 max_data = umin(max_mtu, peer->max_data);
+1 -8
net/rxrpc/peer_event.c
··· 130 130 peer->pmtud_bad = max_data + 1; 131 131 132 132 trace_rxrpc_pmtud_reduce(peer, 0, max_data, rxrpc_pmtud_reduce_icmp); 133 - write_seqcount_begin(&peer->mtu_lock); 134 133 peer->max_data = max_data; 135 - write_seqcount_end(&peer->mtu_lock); 136 134 } 137 135 } 138 136 ··· 406 408 } 407 409 408 410 max_data = umin(max_data, peer->ackr_max_data); 409 - if (max_data != peer->max_data) { 410 - preempt_disable(); 411 - write_seqcount_begin(&peer->mtu_lock); 411 + if (max_data != peer->max_data) 412 412 peer->max_data = max_data; 413 - write_seqcount_end(&peer->mtu_lock); 414 - preempt_enable(); 415 - } 416 413 417 414 jumbo = max_data + sizeof(struct rxrpc_jumbo_header); 418 415 jumbo /= RXRPC_JUMBO_SUBPKTLEN;
+2 -3
net/rxrpc/peer_object.c
··· 235 235 peer->service_conns = RB_ROOT; 236 236 seqlock_init(&peer->service_conn_lock); 237 237 spin_lock_init(&peer->lock); 238 - seqcount_init(&peer->mtu_lock); 239 238 peer->debug_id = atomic_inc_return(&rxrpc_debug_id); 240 239 peer->recent_srtt_us = UINT_MAX; 241 240 peer->cong_ssthresh = RXRPC_TX_MAX_WINDOW; ··· 324 325 hash_key = rxrpc_peer_hash_key(local, &peer->srx); 325 326 rxrpc_init_peer(local, peer, hash_key); 326 327 327 - spin_lock_bh(&rxnet->peer_hash_lock); 328 + spin_lock(&rxnet->peer_hash_lock); 328 329 hash_add_rcu(rxnet->peer_hash, &peer->hash_link, hash_key); 329 330 list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive_new); 330 - spin_unlock_bh(&rxnet->peer_hash_lock); 331 + spin_unlock(&rxnet->peer_hash_lock); 331 332 } 332 333 333 334 /*
+12
net/rxrpc/rxperf.c
··· 478 478 call->unmarshal++; 479 479 fallthrough; 480 480 case 2: 481 + ret = rxperf_extract_data(call, true); 482 + if (ret < 0) 483 + return ret; 484 + 485 + /* Deal with the terminal magic cookie. */ 486 + call->iov_len = 4; 487 + call->kvec[0].iov_len = call->iov_len; 488 + call->kvec[0].iov_base = call->tmp; 489 + iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len); 490 + call->unmarshal++; 491 + fallthrough; 492 + case 3: 481 493 ret = rxperf_extract_data(call, false); 482 494 if (ret < 0) 483 495 return ret;
+1 -4
net/smc/af_smc.c
··· 3337 3337 * which need net ref. 3338 3338 */ 3339 3339 sk = smc->clcsock->sk; 3340 - __netns_tracker_free(net, &sk->ns_tracker, false); 3341 - sk->sk_net_refcnt = 1; 3342 - get_net_track(net, &sk->ns_tracker, GFP_KERNEL); 3343 - sock_inuse_add(net, 1); 3340 + sk_net_refcnt_upgrade(sk); 3344 3341 return 0; 3345 3342 } 3346 3343
+9 -2
net/strparser/strparser.c
··· 347 347 struct socket *sock = strp->sk->sk_socket; 348 348 read_descriptor_t desc; 349 349 350 - if (unlikely(!sock || !sock->ops || !sock->ops->read_sock)) 350 + if (unlikely(!sock || !sock->ops)) 351 + return -EBUSY; 352 + 353 + if (unlikely(!strp->cb.read_sock && !sock->ops->read_sock)) 351 354 return -EBUSY; 352 355 353 356 desc.arg.data = strp; ··· 358 355 desc.count = 1; /* give more than one skb per call */ 359 356 360 357 /* sk should be locked here, so okay to do read_sock */ 361 - sock->ops->read_sock(strp->sk, &desc, strp_recv); 358 + if (strp->cb.read_sock) 359 + strp->cb.read_sock(strp, &desc, strp_recv); 360 + else 361 + sock->ops->read_sock(strp->sk, &desc, strp_recv); 362 362 363 363 desc.error = strp->cb.read_sock_done(strp, desc.error); 364 364 ··· 474 468 strp->cb.unlock = cb->unlock ? : strp_sock_unlock; 475 469 strp->cb.rcv_msg = cb->rcv_msg; 476 470 strp->cb.parse_msg = cb->parse_msg; 471 + strp->cb.read_sock = cb->read_sock; 477 472 strp->cb.read_sock_done = cb->read_sock_done ? : default_read_sock_done; 478 473 strp->cb.abort_parser = cb->abort_parser ? : strp_abort_strp; 479 474
+3 -7
net/sunrpc/cache.c
··· 1674 1674 } 1675 1675 } 1676 1676 1677 - #ifdef CONFIG_PROC_FS 1678 1677 static int create_cache_proc_entries(struct cache_detail *cd, struct net *net) 1679 1678 { 1680 1679 struct proc_dir_entry *p; 1681 1680 struct sunrpc_net *sn; 1681 + 1682 + if (!IS_ENABLED(CONFIG_PROC_FS)) 1683 + return 0; 1682 1684 1683 1685 sn = net_generic(net, sunrpc_net_id); 1684 1686 cd->procfs = proc_mkdir(cd->name, sn->proc_net_rpc); ··· 1709 1707 remove_cache_proc_entries(cd); 1710 1708 return -ENOMEM; 1711 1709 } 1712 - #else /* CONFIG_PROC_FS */ 1713 - static int create_cache_proc_entries(struct cache_detail *cd, struct net *net) 1714 - { 1715 - return 0; 1716 - } 1717 - #endif 1718 1710 1719 1711 void __init cache_initialize(void) 1720 1712 {
-2
net/sunrpc/sched.c
··· 864 864 if (!rpc_task_set_rpc_status(task, -ERESTARTSYS)) 865 865 return; 866 866 trace_rpc_task_signalled(task, task->tk_action); 867 - set_bit(RPC_TASK_SIGNALLED, &task->tk_runstate); 868 - smp_mb__after_atomic(); 869 867 queue = READ_ONCE(task->tk_waitqueue); 870 868 if (queue) 871 869 rpc_wake_up_queued_task(queue, task);
+1 -4
net/sunrpc/svcsock.c
··· 1541 1541 newlen = error; 1542 1542 1543 1543 if (protocol == IPPROTO_TCP) { 1544 - __netns_tracker_free(net, &sock->sk->ns_tracker, false); 1545 - sock->sk->sk_net_refcnt = 1; 1546 - get_net_track(net, &sock->sk->ns_tracker, GFP_KERNEL); 1547 - sock_inuse_add(net, 1); 1544 + sk_net_refcnt_upgrade(sock->sk); 1548 1545 if ((error = kernel_listen(sock, 64)) < 0) 1549 1546 goto bummer; 1550 1547 }
+11 -7
net/sunrpc/xprtsock.c
··· 1941 1941 goto out; 1942 1942 } 1943 1943 1944 - if (protocol == IPPROTO_TCP) { 1945 - __netns_tracker_free(xprt->xprt_net, &sock->sk->ns_tracker, false); 1946 - sock->sk->sk_net_refcnt = 1; 1947 - get_net_track(xprt->xprt_net, &sock->sk->ns_tracker, GFP_KERNEL); 1948 - sock_inuse_add(xprt->xprt_net, 1); 1949 - } 1944 + if (protocol == IPPROTO_TCP) 1945 + sk_net_refcnt_upgrade(sock->sk); 1950 1946 1951 1947 filp = sock_alloc_file(sock, O_NONBLOCK, NULL); 1952 1948 if (IS_ERR(filp)) ··· 2577 2581 struct sock_xprt *lower_transport = 2578 2582 container_of(lower_xprt, struct sock_xprt, xprt); 2579 2583 2580 - lower_transport->xprt_err = status ? -EACCES : 0; 2584 + switch (status) { 2585 + case 0: 2586 + case -EACCES: 2587 + case -ETIMEDOUT: 2588 + lower_transport->xprt_err = status; 2589 + break; 2590 + default: 2591 + lower_transport->xprt_err = -EACCES; 2592 + } 2581 2593 complete(&lower_transport->handshake_done); 2582 2594 xprt_put(lower_xprt); 2583 2595 }
+1
net/unix/af_unix.c
··· 2101 2101 goto out_sock_put; 2102 2102 } 2103 2103 2104 + sock_put(other); 2104 2105 goto lookup; 2105 2106 } 2106 2107
+1 -1
security/integrity/evm/evm_crypto.c
··· 180 180 } 181 181 182 182 /* 183 - * Dump large security xattr values as a continuous ascii hexademical string. 183 + * Dump large security xattr values as a continuous ascii hexadecimal string. 184 184 * (pr_debug is limited to 64 bytes.) 185 185 */ 186 186 static void dump_security_xattr_l(const char *prefix, const void *src,
+1 -1
security/integrity/evm/evm_main.c
··· 169 169 * and compare it against the stored security.evm xattr. 170 170 * 171 171 * For performance: 172 - * - use the previoulsy retrieved xattr value and length to calculate the 172 + * - use the previously retrieved xattr value and length to calculate the 173 173 * HMAC.) 174 174 * - cache the verification result in the iint, when available. 175 175 *
+3
security/integrity/ima/ima.h
··· 149 149 #define IMA_CHECK_BLACKLIST 0x40000000 150 150 #define IMA_VERITY_REQUIRED 0x80000000 151 151 152 + /* Exclude non-action flags which are not rule-specific. */ 153 + #define IMA_NONACTION_RULE_FLAGS (IMA_NONACTION_FLAGS & ~IMA_NEW_FILE) 154 + 152 155 #define IMA_DO_MASK (IMA_MEASURE | IMA_APPRAISE | IMA_AUDIT | \ 153 156 IMA_HASH | IMA_APPRAISE_SUBMASK) 154 157 #define IMA_DONE_MASK (IMA_MEASURED | IMA_APPRAISED | IMA_AUDITED | \
+8 -5
security/integrity/ima/ima_main.c
··· 269 269 mutex_lock(&iint->mutex); 270 270 271 271 if (test_and_clear_bit(IMA_CHANGE_ATTR, &iint->atomic_flags)) 272 - /* reset appraisal flags if ima_inode_post_setattr was called */ 272 + /* 273 + * Reset appraisal flags (action and non-action rule-specific) 274 + * if ima_inode_post_setattr was called. 275 + */ 273 276 iint->flags &= ~(IMA_APPRAISE | IMA_APPRAISED | 274 277 IMA_APPRAISE_SUBMASK | IMA_APPRAISED_SUBMASK | 275 - IMA_NONACTION_FLAGS); 278 + IMA_NONACTION_RULE_FLAGS); 276 279 277 280 /* 278 281 * Re-evaulate the file if either the xattr has changed or the ··· 1014 1011 } 1015 1012 1016 1013 /* 1017 - * Both LSM hooks and auxilary based buffer measurements are 1018 - * based on policy. To avoid code duplication, differentiate 1019 - * between the LSM hooks and auxilary buffer measurements, 1014 + * Both LSM hooks and auxiliary based buffer measurements are 1015 + * based on policy. To avoid code duplication, differentiate 1016 + * between the LSM hooks and auxiliary buffer measurements, 1020 1017 * retrieving the policy rule information only for the LSM hook 1021 1018 * buffer measurements. 1022 1019 */
+1 -2
security/landlock/net.c
··· 63 63 if (WARN_ON_ONCE(dom->num_layers < 1)) 64 64 return -EACCES; 65 65 66 - /* Checks if it's a (potential) TCP socket. */ 67 - if (sock->type != SOCK_STREAM) 66 + if (!sk_is_tcp(sock->sk)) 68 67 return 0; 69 68 70 69 /* Checks for minimal header length to safely read sa_family. */
+1 -1
security/landlock/ruleset.c
··· 124 124 return ERR_PTR(-ENOMEM); 125 125 RB_CLEAR_NODE(&new_rule->node); 126 126 if (is_object_pointer(id.type)) { 127 - /* This should be catched by insert_rule(). */ 127 + /* This should have been caught by insert_rule(). */ 128 128 WARN_ON_ONCE(!id.key.object); 129 129 landlock_get_object(id.key.object); 130 130 }
+3
sound/pci/hda/cs35l56_hda_spi.c
··· 22 22 return -ENOMEM; 23 23 24 24 cs35l56->base.dev = &spi->dev; 25 + ret = cs35l56_init_config_for_spi(&cs35l56->base, spi); 26 + if (ret) 27 + return ret; 25 28 26 29 #ifdef CS35L56_WAKE_HOLD_TIME_US 27 30 cs35l56->base.can_hibernate = true;
+1 -1
sound/pci/hda/patch_realtek.c
··· 10623 10623 SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC), 10624 10624 SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK), 10625 10625 SND_PCI_QUIRK(0x1043, 0x1433, "ASUS GX650PY/PZ/PV/PU/PYV/PZV/PIV/PVV", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC), 10626 + SND_PCI_QUIRK(0x1043, 0x1460, "Asus VivoBook 15", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 10626 10627 SND_PCI_QUIRK(0x1043, 0x1463, "Asus GA402X/GA402N", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC), 10627 10628 SND_PCI_QUIRK(0x1043, 0x1473, "ASUS GU604VI/VC/VE/VG/VJ/VQ/VU/VV/VY/VZ", ALC285_FIXUP_ASUS_HEADSET_MIC), 10628 10629 SND_PCI_QUIRK(0x1043, 0x1483, "ASUS GU603VQ/VU/VV/VJ/VI", ALC285_FIXUP_ASUS_HEADSET_MIC), ··· 10657 10656 SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE), 10658 10657 SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE), 10659 10658 SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW), 10660 - SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC), 10661 10659 SND_PCI_QUIRK(0x1043, 0x1a63, "ASUS UX3405MA", ALC245_FIXUP_CS35L41_SPI_2), 10662 10660 SND_PCI_QUIRK(0x1043, 0x1a83, "ASUS UM5302LA", ALC294_FIXUP_CS35L41_I2C_2), 10663 10661 SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
+80
sound/soc/codecs/cs35l56-shared.c
··· 10 10 #include <linux/gpio/consumer.h> 11 11 #include <linux/regmap.h> 12 12 #include <linux/regulator/consumer.h> 13 + #include <linux/spi/spi.h> 13 14 #include <linux/types.h> 14 15 #include <sound/cs-amp-lib.h> 15 16 ··· 304 303 } 305 304 EXPORT_SYMBOL_NS_GPL(cs35l56_wait_min_reset_pulse, "SND_SOC_CS35L56_SHARED"); 306 305 306 + static const struct { 307 + u32 addr; 308 + u32 value; 309 + } cs35l56_spi_system_reset_stages[] = { 310 + { .addr = CS35L56_DSP_VIRTUAL1_MBOX_1, .value = CS35L56_MBOX_CMD_SYSTEM_RESET }, 311 + /* The next write is necessary to delimit the soft reset */ 312 + { .addr = CS35L56_DSP_MBOX_1_RAW, .value = CS35L56_MBOX_CMD_PING }, 313 + }; 314 + 315 + static void cs35l56_spi_issue_bus_locked_reset(struct cs35l56_base *cs35l56_base, 316 + struct spi_device *spi) 317 + { 318 + struct cs35l56_spi_payload *buf = cs35l56_base->spi_payload_buf; 319 + struct spi_transfer t = { 320 + .tx_buf = buf, 321 + .len = sizeof(*buf), 322 + }; 323 + struct spi_message m; 324 + int i, ret; 325 + 326 + for (i = 0; i < ARRAY_SIZE(cs35l56_spi_system_reset_stages); i++) { 327 + buf->addr = cpu_to_be32(cs35l56_spi_system_reset_stages[i].addr); 328 + buf->value = cpu_to_be32(cs35l56_spi_system_reset_stages[i].value); 329 + spi_message_init_with_transfers(&m, &t, 1); 330 + ret = spi_sync_locked(spi, &m); 331 + if (ret) 332 + dev_warn(cs35l56_base->dev, "spi_sync failed: %d\n", ret); 333 + 334 + usleep_range(CS35L56_SPI_RESET_TO_PORT_READY_US, 335 + 2 * CS35L56_SPI_RESET_TO_PORT_READY_US); 336 + } 337 + } 338 + 339 + static void cs35l56_spi_system_reset(struct cs35l56_base *cs35l56_base) 340 + { 341 + struct spi_device *spi = to_spi_device(cs35l56_base->dev); 342 + unsigned int val; 343 + int read_ret, ret; 344 + 345 + /* 346 + * There must not be any other SPI bus activity while the amp is 347 + * soft-resetting. 348 + */ 349 + ret = spi_bus_lock(spi->controller); 350 + if (ret) { 351 + dev_warn(cs35l56_base->dev, "spi_bus_lock failed: %d\n", ret); 352 + return; 353 + } 354 + 355 + cs35l56_spi_issue_bus_locked_reset(cs35l56_base, spi); 356 + spi_bus_unlock(spi->controller); 357 + 358 + /* 359 + * Check firmware boot by testing for a response in MBOX_2. 360 + * HALO_STATE cannot be trusted yet because the reset sequence 361 + * can leave it with stale state. But MBOX is reset. 362 + * The regmap must remain in cache-only until the chip has 363 + * booted, so use a bypassed read. 364 + */ 365 + ret = read_poll_timeout(regmap_read_bypassed, read_ret, 366 + (val > 0) && (val < 0xffffffff), 367 + CS35L56_HALO_STATE_POLL_US, 368 + CS35L56_HALO_STATE_TIMEOUT_US, 369 + false, 370 + cs35l56_base->regmap, 371 + CS35L56_DSP_VIRTUAL1_MBOX_2, 372 + &val); 373 + if (ret) { 374 + dev_err(cs35l56_base->dev, "SPI reboot timed out(%d): MBOX2=%#x\n", 375 + read_ret, val); 376 + } 377 + } 378 + 307 379 static const struct reg_sequence cs35l56_system_reset_seq[] = { 308 380 REG_SEQ0(CS35L56_DSP1_HALO_STATE, 0), 309 381 REG_SEQ0(CS35L56_DSP_VIRTUAL1_MBOX_1, CS35L56_MBOX_CMD_SYSTEM_RESET), ··· 389 315 * accesses other than the controlled system reset sequence below. 390 316 */ 391 317 regcache_cache_only(cs35l56_base->regmap, true); 318 + 319 + if (cs35l56_is_spi(cs35l56_base)) { 320 + cs35l56_spi_system_reset(cs35l56_base); 321 + return; 322 + } 323 + 392 324 regmap_multi_reg_write_bypassed(cs35l56_base->regmap, 393 325 cs35l56_system_reset_seq, 394 326 ARRAY_SIZE(cs35l56_system_reset_seq));
+3
sound/soc/codecs/cs35l56-spi.c
··· 33 33 34 34 cs35l56->base.dev = &spi->dev; 35 35 cs35l56->base.can_hibernate = true; 36 + ret = cs35l56_init_config_for_spi(&cs35l56->base, spi); 37 + if (ret) 38 + return ret; 36 39 37 40 ret = cs35l56_common_probe(cs35l56); 38 41 if (ret != 0)
+4 -11
sound/soc/codecs/es8328.c
··· 233 233 234 234 /* Left Mixer */ 235 235 static const struct snd_kcontrol_new es8328_left_mixer_controls[] = { 236 - SOC_DAPM_SINGLE("Playback Switch", ES8328_DACCONTROL17, 7, 1, 0), 237 236 SOC_DAPM_SINGLE("Left Bypass Switch", ES8328_DACCONTROL17, 6, 1, 0), 238 237 SOC_DAPM_SINGLE("Right Playback Switch", ES8328_DACCONTROL18, 7, 1, 0), 239 238 SOC_DAPM_SINGLE("Right Bypass Switch", ES8328_DACCONTROL18, 6, 1, 0), ··· 242 243 static const struct snd_kcontrol_new es8328_right_mixer_controls[] = { 243 244 SOC_DAPM_SINGLE("Left Playback Switch", ES8328_DACCONTROL19, 7, 1, 0), 244 245 SOC_DAPM_SINGLE("Left Bypass Switch", ES8328_DACCONTROL19, 6, 1, 0), 245 - SOC_DAPM_SINGLE("Playback Switch", ES8328_DACCONTROL20, 7, 1, 0), 246 246 SOC_DAPM_SINGLE("Right Bypass Switch", ES8328_DACCONTROL20, 6, 1, 0), 247 247 }; 248 248 ··· 334 336 SND_SOC_DAPM_DAC("Left DAC", "Left Playback", ES8328_DACPOWER, 335 337 ES8328_DACPOWER_LDAC_OFF, 1), 336 338 337 - SND_SOC_DAPM_MIXER("Left Mixer", SND_SOC_NOPM, 0, 0, 339 + SND_SOC_DAPM_MIXER("Left Mixer", ES8328_DACCONTROL17, 7, 0, 338 340 &es8328_left_mixer_controls[0], 339 341 ARRAY_SIZE(es8328_left_mixer_controls)), 340 - SND_SOC_DAPM_MIXER("Right Mixer", SND_SOC_NOPM, 0, 0, 342 + SND_SOC_DAPM_MIXER("Right Mixer", ES8328_DACCONTROL20, 7, 0, 341 343 &es8328_right_mixer_controls[0], 342 344 ARRAY_SIZE(es8328_right_mixer_controls)), 343 345 ··· 416 418 { "Right Line Mux", "PGA", "Right PGA Mux" }, 417 419 { "Right Line Mux", "Differential", "Differential Mux" }, 418 420 419 - { "Left Out 1", NULL, "Left DAC" }, 420 - { "Right Out 1", NULL, "Right DAC" }, 421 - { "Left Out 2", NULL, "Left DAC" }, 422 - { "Right Out 2", NULL, "Right DAC" }, 423 - 424 - { "Left Mixer", "Playback Switch", "Left DAC" }, 421 + { "Left Mixer", NULL, "Left DAC" }, 425 422 { "Left Mixer", "Left Bypass Switch", "Left Line Mux" }, 426 423 { "Left Mixer", "Right Playback Switch", "Right DAC" }, 427 424 { "Left Mixer", "Right Bypass Switch", "Right Line Mux" }, 428 
425 429 426 { "Right Mixer", "Left Playback Switch", "Left DAC" }, 430 427 { "Right Mixer", "Left Bypass Switch", "Left Line Mux" }, 431 - { "Right Mixer", "Playback Switch", "Right DAC" }, 428 + { "Right Mixer", NULL, "Right DAC" }, 432 429 { "Right Mixer", "Right Bypass Switch", "Right Line Mux" }, 433 430 434 431 { "DAC DIG", NULL, "DAC STM" },
+9 -1
sound/soc/codecs/tas2764.c
··· 365 365 { 366 366 struct snd_soc_component *component = dai->component; 367 367 struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component); 368 - u8 tdm_rx_start_slot = 0, asi_cfg_0 = 0, asi_cfg_1 = 0; 368 + u8 tdm_rx_start_slot = 0, asi_cfg_0 = 0, asi_cfg_1 = 0, asi_cfg_4 = 0; 369 369 int ret; 370 370 371 371 switch (fmt & SND_SOC_DAIFMT_INV_MASK) { ··· 374 374 fallthrough; 375 375 case SND_SOC_DAIFMT_NB_NF: 376 376 asi_cfg_1 = TAS2764_TDM_CFG1_RX_RISING; 377 + asi_cfg_4 = TAS2764_TDM_CFG4_TX_FALLING; 377 378 break; 378 379 case SND_SOC_DAIFMT_IB_IF: 379 380 asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START; 380 381 fallthrough; 381 382 case SND_SOC_DAIFMT_IB_NF: 382 383 asi_cfg_1 = TAS2764_TDM_CFG1_RX_FALLING; 384 + asi_cfg_4 = TAS2764_TDM_CFG4_TX_RISING; 383 385 break; 384 386 } 385 387 386 388 ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG1, 387 389 TAS2764_TDM_CFG1_RX_MASK, 388 390 asi_cfg_1); 391 + if (ret < 0) 392 + return ret; 393 + 394 + ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG4, 395 + TAS2764_TDM_CFG4_TX_MASK, 396 + asi_cfg_4); 389 397 if (ret < 0) 390 398 return ret; 391 399
+7 -1
sound/soc/codecs/tas2764.h
··· 25 25 26 26 /* Power Control */ 27 27 #define TAS2764_PWR_CTRL TAS2764_REG(0X0, 0x02) 28 - #define TAS2764_PWR_CTRL_MASK GENMASK(1, 0) 28 + #define TAS2764_PWR_CTRL_MASK GENMASK(2, 0) 29 29 #define TAS2764_PWR_CTRL_ACTIVE 0x0 30 30 #define TAS2764_PWR_CTRL_MUTE BIT(0) 31 31 #define TAS2764_PWR_CTRL_SHUTDOWN BIT(1) ··· 78 78 #define TAS2764_TDM_CFG3_RXS_MASK GENMASK(7, 4) 79 79 #define TAS2764_TDM_CFG3_RXS_SHIFT 0x4 80 80 #define TAS2764_TDM_CFG3_MASK GENMASK(3, 0) 81 + 82 + /* TDM Configuration Reg4 */ 83 + #define TAS2764_TDM_CFG4 TAS2764_REG(0X0, 0x0d) 84 + #define TAS2764_TDM_CFG4_TX_MASK BIT(0) 85 + #define TAS2764_TDM_CFG4_TX_RISING 0x0 86 + #define TAS2764_TDM_CFG4_TX_FALLING BIT(0) 81 87 82 88 /* TDM Configuration Reg5 */ 83 89 #define TAS2764_TDM_CFG5 TAS2764_REG(0X0, 0x0e)
+1 -1
sound/soc/codecs/tas2770.c
··· 506 506 } 507 507 508 508 static DECLARE_TLV_DB_SCALE(tas2770_digital_tlv, 1100, 50, 0); 509 - static DECLARE_TLV_DB_SCALE(tas2770_playback_volume, -12750, 50, 0); 509 + static DECLARE_TLV_DB_SCALE(tas2770_playback_volume, -10050, 50, 0); 510 510 511 511 static const struct snd_kcontrol_new tas2770_snd_controls[] = { 512 512 SOC_SINGLE_TLV("Speaker Playback Volume", TAS2770_PLAY_CFG_REG2,
+3 -3
sound/soc/fsl/fsl_sai.c
··· 994 994 { 995 995 .name = "sai-tx", 996 996 .playback = { 997 - .stream_name = "CPU-Playback", 997 + .stream_name = "SAI-Playback", 998 998 .channels_min = 1, 999 999 .channels_max = 32, 1000 - .rate_min = 8000, 1000 + .rate_min = 8000, 1001 1001 .rate_max = 2822400, 1002 1002 .rates = SNDRV_PCM_RATE_KNOT, 1003 1003 .formats = FSL_SAI_FORMATS, ··· 1007 1007 { 1008 1008 .name = "sai-rx", 1009 1009 .capture = { 1010 - .stream_name = "CPU-Capture", 1010 + .stream_name = "SAI-Capture", 1011 1011 .channels_min = 1, 1012 1012 .channels_max = 32, 1013 1013 .rate_min = 8000,
+2 -2
sound/soc/fsl/imx-audmix.c
··· 119 119 static const char *name[][3] = { 120 120 {"HiFi-AUDMIX-FE-0", "HiFi-AUDMIX-FE-1", "HiFi-AUDMIX-FE-2"}, 121 121 {"sai-tx", "sai-tx", "sai-rx"}, 122 - {"AUDMIX-Playback-0", "AUDMIX-Playback-1", "CPU-Capture"}, 123 - {"CPU-Playback", "CPU-Playback", "AUDMIX-Capture-0"}, 122 + {"AUDMIX-Playback-0", "AUDMIX-Playback-1", "SAI-Capture"}, 123 + {"SAI-Playback", "SAI-Playback", "AUDMIX-Capture-0"}, 124 124 }; 125 125 126 126 static int imx_audmix_probe(struct platform_device *pdev)
+7
sound/soc/intel/boards/sof_sdw.c
··· 803 803 int *be_id, struct snd_soc_codec_conf **codec_conf) 804 804 { 805 805 struct device *dev = card->dev; 806 + struct snd_soc_acpi_mach *mach = dev_get_platdata(card->dev); 806 807 struct asoc_sdw_mc_private *ctx = snd_soc_card_get_drvdata(card); 808 + struct snd_soc_acpi_mach_params *mach_params = &mach->mach_params; 807 809 struct intel_mc_ctx *intel_ctx = (struct intel_mc_ctx *)ctx->private; 808 810 struct asoc_sdw_endpoint *sof_end; 809 811 int stream; ··· 902 900 903 901 codecs[j].name = sof_end->codec_name; 904 902 codecs[j].dai_name = sof_end->dai_info->dai_name; 903 + if (sof_end->dai_info->dai_type == SOC_SDW_DAI_TYPE_MIC && 904 + mach_params->dmic_num > 0) { 905 + dev_warn(dev, 906 + "Both SDW DMIC and PCH DMIC are present, if incorrect, please set kernel params snd_sof_intel_hda_generic dmic_num=0 to disable PCH DMIC\n"); 907 + } 905 908 j++; 906 909 } 907 910
+2 -16
sound/soc/sof/intel/hda.c
··· 1312 1312 /* report to machine driver if any DMICs are found */ 1313 1313 mach->mach_params.dmic_num = check_dmic_num(sdev); 1314 1314 1315 - if (sdw_mach_found) { 1316 - /* 1317 - * DMICs use up to 4 pins and are typically pin-muxed with SoundWire 1318 - * link 2 and 3, or link 1 and 2, thus we only try to enable dmics 1319 - * if all conditions are true: 1320 - * a) 2 or fewer links are used by SoundWire 1321 - * b) the NHLT table reports the presence of microphones 1322 - */ 1323 - if (hweight_long(mach->link_mask) <= 2) 1324 - dmic_fixup = true; 1325 - else 1326 - mach->mach_params.dmic_num = 0; 1327 - } else { 1328 - if (mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER) 1329 - dmic_fixup = true; 1330 - } 1315 + if (sdw_mach_found || mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER) 1316 + dmic_fixup = true; 1331 1317 1332 1318 if (tplg_fixup && 1333 1319 dmic_fixup &&
+1 -1
sound/usb/midi.c
··· 1145 1145 { 1146 1146 struct usbmidi_out_port *port = substream->runtime->private_data; 1147 1147 1148 - cancel_work_sync(&port->ep->work); 1148 + flush_work(&port->ep->work); 1149 1149 return substream_open(substream, 0, 0); 1150 1150 } 1151 1151
+1
sound/usb/quirks.c
··· 1868 1868 case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */ 1869 1869 subs->stream_offset_adj = 2; 1870 1870 break; 1871 + case USB_ID(0x2b73, 0x000a): /* Pioneer DJM-900NXS2 */ 1871 1872 case USB_ID(0x2b73, 0x0013): /* Pioneer DJM-450 */ 1872 1873 pioneer_djm_set_format_quirk(subs, 0x0082); 1873 1874 break;
-6
tools/arch/arm64/tools/Makefile
··· 13 13 MKDIR ?= mkdir 14 14 RM ?= rm 15 15 16 - ifeq ($(V),1) 17 - Q = 18 - else 19 - Q = @ 20 - endif 21 - 22 16 arm64_tools_dir = $(top_srcdir)/arch/arm64/tools 23 17 arm64_sysreg_tbl = $(arm64_tools_dir)/sysreg 24 18 arm64_gen_sysreg = $(arm64_tools_dir)/gen-sysreg.awk
-6
tools/bpf/Makefile
··· 27 27 srctree := $(patsubst %/,%,$(dir $(srctree))) 28 28 endif 29 29 30 - ifeq ($(V),1) 31 - Q = 32 - else 33 - Q = @ 34 - endif 35 - 36 30 FEATURE_USER = .bpf 37 31 FEATURE_TESTS = libbfd disassembler-four-args disassembler-init-styled 38 32 FEATURE_DISPLAY = libbfd
-6
tools/bpf/bpftool/Documentation/Makefile
··· 5 5 RM ?= rm -f 6 6 RMDIR ?= rmdir --ignore-fail-on-non-empty 7 7 8 - ifeq ($(V),1) 9 - Q = 10 - else 11 - Q = @ 12 - endif 13 - 14 8 prefix ?= /usr/local 15 9 mandir ?= $(prefix)/man 16 10 man8dir = $(mandir)/man8
-6
tools/bpf/bpftool/Makefile
··· 7 7 srctree := $(patsubst %/,%,$(dir $(srctree))) 8 8 endif 9 9 10 - ifeq ($(V),1) 11 - Q = 12 - else 13 - Q = @ 14 - endif 15 - 16 10 BPF_DIR = $(srctree)/tools/lib/bpf 17 11 18 12 ifneq ($(OUTPUT),)
-2
tools/bpf/resolve_btfids/Makefile
··· 5 5 srctree := $(abspath $(CURDIR)/../../../) 6 6 7 7 ifeq ($(V),1) 8 - Q = 9 8 msg = 10 9 else 11 - Q = @ 12 10 ifeq ($(silent),1) 13 11 msg = 14 12 else
+1 -4
tools/bpf/runqslower/Makefile
··· 26 26 VMLINUX_BTF_PATH := $(or $(VMLINUX_BTF),$(firstword \ 27 27 $(wildcard $(VMLINUX_BTF_PATHS)))) 28 28 29 - ifeq ($(V),1) 30 - Q = 31 - else 32 - Q = @ 29 + ifneq ($(V),1) 33 30 MAKEFLAGS += --no-print-directory 34 31 submake_extras := feature_display=0 35 32 endif
+1 -7
tools/build/Makefile
··· 17 17 18 18 export HOSTCC HOSTLD HOSTAR 19 19 20 - ifeq ($(V),1) 21 - Q = 22 - else 23 - Q = @ 24 - endif 25 - 26 - export Q srctree CC LD 20 + export srctree CC LD 27 21 28 22 MAKEFLAGS := --no-print-directory 29 23 build := -f $(srctree)/tools/build/Makefile.build dir=. obj
-13
tools/lib/bpf/Makefile
··· 53 53 54 54 # copy a bit from Linux kbuild 55 55 56 - ifeq ("$(origin V)", "command line") 57 - VERBOSE = $(V) 58 - endif 59 - ifndef VERBOSE 60 - VERBOSE = 0 61 - endif 62 - 63 56 INCLUDES = -I$(or $(OUTPUT),.) \ 64 57 -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi \ 65 58 -I$(srctree)/tools/arch/$(SRCARCH)/include ··· 88 95 89 96 # flags specific for shared library 90 97 SHLIB_FLAGS := -DSHARED -fPIC 91 - 92 - ifeq ($(VERBOSE),1) 93 - Q = 94 - else 95 - Q = @ 96 - endif 97 98 98 99 # Disable command line variables (CFLAGS) override from top 99 100 # level Makefile (perf), otherwise build Makefile will get
-13
tools/lib/perf/Makefile
··· 39 39 libdir_SQ = $(subst ','\'',$(libdir)) 40 40 libdir_relative_SQ = $(subst ','\'',$(libdir_relative)) 41 41 42 - ifeq ("$(origin V)", "command line") 43 - VERBOSE = $(V) 44 - endif 45 - ifndef VERBOSE 46 - VERBOSE = 0 47 - endif 48 - 49 - ifeq ($(VERBOSE),1) 50 - Q = 51 - else 52 - Q = @ 53 - endif 54 - 55 42 TEST_ARGS := $(if $(V),-v) 56 43 57 44 # Set compile option CFLAGS
-13
tools/lib/thermal/Makefile
··· 39 39 libdir_SQ = $(subst ','\'',$(libdir)) 40 40 libdir_relative_SQ = $(subst ','\'',$(libdir_relative)) 41 41 42 - ifeq ("$(origin V)", "command line") 43 - VERBOSE = $(V) 44 - endif 45 - ifndef VERBOSE 46 - VERBOSE = 0 47 - endif 48 - 49 - ifeq ($(VERBOSE),1) 50 - Q = 51 - else 52 - Q = @ 53 - endif 54 - 55 42 # Set compile option CFLAGS 56 43 ifdef EXTRA_CFLAGS 57 44 CFLAGS := $(EXTRA_CFLAGS)
-6
tools/objtool/Makefile
··· 46 46 AWK = awk 47 47 MKDIR = mkdir 48 48 49 - ifeq ($(V),1) 50 - Q = 51 - else 52 - Q = @ 53 - endif 54 - 55 49 BUILD_ORC := n 56 50 57 51 ifeq ($(SRCARCH),x86)
-41
tools/perf/Makefile.perf
··· 161 161 SOURCE := $(shell ln -sf $(srctree)/tools/perf $(OUTPUT)/source) 162 162 endif 163 163 164 - # Beautify output 165 - # --------------------------------------------------------------------------- 166 - # 167 - # Most of build commands in Kbuild start with "cmd_". You can optionally define 168 - # "quiet_cmd_*". If defined, the short log is printed. Otherwise, no log from 169 - # that command is printed by default. 170 - # 171 - # e.g.) 172 - # quiet_cmd_depmod = DEPMOD $(MODLIB) 173 - # cmd_depmod = $(srctree)/scripts/depmod.sh $(DEPMOD) $(KERNELRELEASE) 174 - # 175 - # A simple variant is to prefix commands with $(Q) - that's useful 176 - # for commands that shall be hidden in non-verbose mode. 177 - # 178 - # $(Q)$(MAKE) $(build)=scripts/basic 179 - # 180 - # To put more focus on warnings, be less verbose as default 181 - # Use 'make V=1' to see the full commands 182 - 183 - ifeq ($(V),1) 184 - quiet = 185 - Q = 186 - else 187 - quiet=quiet_ 188 - Q=@ 189 - endif 190 - 191 - # If the user is running make -s (silent mode), suppress echoing of commands 192 - # make-4.0 (and later) keep single letter options in the 1st word of MAKEFLAGS. 193 - ifeq ($(filter 3.%,$(MAKE_VERSION)),) 194 - short-opts := $(firstword -$(MAKEFLAGS)) 195 - else 196 - short-opts := $(filter-out --%,$(MAKEFLAGS)) 197 - endif 198 - 199 - ifneq ($(findstring s,$(short-opts)),) 200 - quiet=silent_ 201 - endif 202 - 203 - export quiet Q 204 - 205 164 # Do not use make's built-in rules 206 165 # (this improves performance and avoids hard-to-debug behaviour); 207 166 MAKEFLAGS += -r
+30
tools/scripts/Makefile.include
··· 136 136 NO_SUBDIR = : 137 137 endif 138 138 139 + # Beautify output 140 + # --------------------------------------------------------------------------- 141 + # 142 + # Most of build commands in Kbuild start with "cmd_". You can optionally define 143 + # "quiet_cmd_*". If defined, the short log is printed. Otherwise, no log from 144 + # that command is printed by default. 145 + # 146 + # e.g.) 147 + # quiet_cmd_depmod = DEPMOD $(MODLIB) 148 + # cmd_depmod = $(srctree)/scripts/depmod.sh $(DEPMOD) $(KERNELRELEASE) 149 + # 150 + # A simple variant is to prefix commands with $(Q) - that's useful 151 + # for commands that shall be hidden in non-verbose mode. 152 + # 153 + # $(Q)$(MAKE) $(build)=scripts/basic 154 + # 155 + # To put more focus on warnings, be less verbose as default 156 + # Use 'make V=1' to see the full commands 157 + 158 + ifeq ($(V),1) 159 + quiet = 160 + Q = 161 + else 162 + quiet = quiet_ 163 + Q = @ 164 + endif 165 + 139 166 # If the user is running make -s (silent mode), suppress echoing of commands 140 167 # make-4.0 (and later) keep single letter options in the 1st word of MAKEFLAGS. 141 168 ifeq ($(filter 3.%,$(MAKE_VERSION)),) ··· 173 146 174 147 ifneq ($(findstring s,$(short-opts)),) 175 148 silent=1 149 + quiet=silent_ 176 150 endif 151 + 152 + export quiet Q 177 153 178 154 # 179 155 # Define a callable command for descending to a new directory
+1 -1
tools/sound/dapm-graph
··· 10 10 11 11 STYLE_COMPONENT_ON="color=dodgerblue;style=bold" 12 12 STYLE_COMPONENT_OFF="color=gray40;style=filled;fillcolor=gray90" 13 - STYLE_NODE_ON="shape=box,style=bold,color=green4" 13 + STYLE_NODE_ON="shape=box,style=bold,color=green4,fillcolor=white" 14 14 STYLE_NODE_OFF="shape=box,style=filled,color=gray30,fillcolor=gray95" 15 15 16 16 # Print usage and exit
-6
tools/testing/selftests/bpf/Makefile.docs
··· 7 7 RM ?= rm -f 8 8 RMDIR ?= rmdir --ignore-fail-on-non-empty 9 9 10 - ifeq ($(V),1) 11 - Q = 12 - else 13 - Q = @ 14 - endif 15 - 16 10 prefix ?= /usr/local 17 11 mandir ?= $(prefix)/man 18 12 man2dir = $(mandir)/man2
+44 -18
tools/testing/selftests/bpf/map_tests/map_in_map_batch_ops.c
··· 120 120 121 121 static void fetch_and_validate(int outer_map_fd, 122 122 struct bpf_map_batch_opts *opts, 123 - __u32 batch_size, bool delete_entries) 123 + __u32 batch_size, bool delete_entries, 124 + bool has_holes) 124 125 { 125 - __u32 *fetched_keys, *fetched_values, total_fetched = 0; 126 - __u32 batch_key = 0, fetch_count, step_size; 127 - int err, max_entries = OUTER_MAP_ENTRIES; 126 + int err, max_entries = OUTER_MAP_ENTRIES - !!has_holes; 127 + __u32 *fetched_keys, *fetched_values, total_fetched = 0, i; 128 + __u32 batch_key = 0, fetch_count, step_size = batch_size; 128 129 __u32 value_size = sizeof(__u32); 129 130 130 131 /* Total entries needs to be fetched */ ··· 135 134 "Memory allocation failed for fetched_keys or fetched_values", 136 135 "error=%s\n", strerror(errno)); 137 136 138 - for (step_size = batch_size; 139 - step_size <= max_entries; 140 - step_size += batch_size) { 137 + /* hash map may not always return full batch */ 138 + for (i = 0; i < OUTER_MAP_ENTRIES; i++) { 141 139 fetch_count = step_size; 142 140 err = delete_entries 143 141 ? 
bpf_map_lookup_and_delete_batch(outer_map_fd, ··· 155 155 if (err && errno == ENOSPC) { 156 156 /* Fetch again with higher batch size */ 157 157 total_fetched = 0; 158 + step_size += batch_size; 158 159 continue; 159 160 } 160 161 ··· 185 184 } 186 185 187 186 static void _map_in_map_batch_ops(enum bpf_map_type outer_map_type, 188 - enum bpf_map_type inner_map_type) 187 + enum bpf_map_type inner_map_type, 188 + bool has_holes) 189 189 { 190 + __u32 max_entries = OUTER_MAP_ENTRIES - !!has_holes; 190 191 __u32 *outer_map_keys, *inner_map_fds; 191 - __u32 max_entries = OUTER_MAP_ENTRIES; 192 192 LIBBPF_OPTS(bpf_map_batch_opts, opts); 193 193 __u32 value_size = sizeof(__u32); 194 194 int batch_size[2] = {5, 10}; 195 195 __u32 map_index, op_index; 196 196 int outer_map_fd, ret; 197 197 198 - outer_map_keys = calloc(max_entries, value_size); 199 - inner_map_fds = calloc(max_entries, value_size); 198 + outer_map_keys = calloc(OUTER_MAP_ENTRIES, value_size); 199 + inner_map_fds = calloc(OUTER_MAP_ENTRIES, value_size); 200 200 CHECK((!outer_map_keys || !inner_map_fds), 201 201 "Memory allocation failed for outer_map_keys or inner_map_fds", 202 202 "error=%s\n", strerror(errno)); ··· 211 209 ((outer_map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS) 212 210 ? 9 : 1000) - map_index; 213 211 212 + /* This condition is only meaningful for array of maps. 213 + * 214 + * max_entries == OUTER_MAP_ENTRIES - 1 if it is true. Say 215 + * max_entries is short for n, then outer_map_keys looks like: 216 + * 217 + * [n, n-1, ... 2, 1] 218 + * 219 + * We change it to 220 + * 221 + * [n, n-1, ... 2, 0] 222 + * 223 + * So it will leave key 1 as a hole. It will serve to test the 224 + * correctness when batch on an array: a "non-exist" key might be 225 + * actually allocated and returned from key iteration. 
226 + */ 227 + if (has_holes) 228 + outer_map_keys[max_entries - 1]--; 229 + 214 230 /* batch operation - map_update */ 215 231 ret = bpf_map_update_batch(outer_map_fd, outer_map_keys, 216 232 inner_map_fds, &max_entries, &opts); ··· 239 219 /* batch operation - map_lookup */ 240 220 for (op_index = 0; op_index < 2; ++op_index) 241 221 fetch_and_validate(outer_map_fd, &opts, 242 - batch_size[op_index], false); 222 + batch_size[op_index], false, 223 + has_holes); 243 224 244 225 /* batch operation - map_lookup_delete */ 245 226 if (outer_map_type == BPF_MAP_TYPE_HASH_OF_MAPS) 246 227 fetch_and_validate(outer_map_fd, &opts, 247 - max_entries, true /*delete*/); 228 + max_entries, true /*delete*/, 229 + has_holes); 248 230 249 231 /* close all map fds */ 250 - for (map_index = 0; map_index < max_entries; map_index++) 232 + for (map_index = 0; map_index < OUTER_MAP_ENTRIES; map_index++) 251 233 close(inner_map_fds[map_index]); 252 234 close(outer_map_fd); 253 235 ··· 259 237 260 238 void test_map_in_map_batch_ops_array(void) 261 239 { 262 - _map_in_map_batch_ops(BPF_MAP_TYPE_ARRAY_OF_MAPS, BPF_MAP_TYPE_ARRAY); 240 + _map_in_map_batch_ops(BPF_MAP_TYPE_ARRAY_OF_MAPS, BPF_MAP_TYPE_ARRAY, false); 263 241 printf("%s:PASS with inner ARRAY map\n", __func__); 264 - _map_in_map_batch_ops(BPF_MAP_TYPE_ARRAY_OF_MAPS, BPF_MAP_TYPE_HASH); 242 + _map_in_map_batch_ops(BPF_MAP_TYPE_ARRAY_OF_MAPS, BPF_MAP_TYPE_HASH, false); 265 243 printf("%s:PASS with inner HASH map\n", __func__); 244 + _map_in_map_batch_ops(BPF_MAP_TYPE_ARRAY_OF_MAPS, BPF_MAP_TYPE_ARRAY, true); 245 + printf("%s:PASS with inner ARRAY map with holes\n", __func__); 246 + _map_in_map_batch_ops(BPF_MAP_TYPE_ARRAY_OF_MAPS, BPF_MAP_TYPE_HASH, true); 247 + printf("%s:PASS with inner HASH map with holes\n", __func__); 266 248 } 267 249 268 250 void test_map_in_map_batch_ops_hash(void) 269 251 { 270 - _map_in_map_batch_ops(BPF_MAP_TYPE_HASH_OF_MAPS, BPF_MAP_TYPE_ARRAY); 252 + _map_in_map_batch_ops(BPF_MAP_TYPE_HASH_OF_MAPS, 
BPF_MAP_TYPE_ARRAY, false); 271 253 printf("%s:PASS with inner ARRAY map\n", __func__); 272 - _map_in_map_batch_ops(BPF_MAP_TYPE_HASH_OF_MAPS, BPF_MAP_TYPE_HASH); 254 + _map_in_map_batch_ops(BPF_MAP_TYPE_HASH_OF_MAPS, BPF_MAP_TYPE_HASH, false); 273 255 printf("%s:PASS with inner HASH map\n", __func__); 274 256 }
+3 -56
tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
··· 526 526 if (!ASSERT_EQ(err, 1, "epoll_wait(fd)")) 527 527 goto out_close; 528 528 529 - n = recv(c1, &b, 1, SOCK_NONBLOCK); 530 - ASSERT_EQ(n, 0, "recv_timeout(fin)"); 529 + n = recv(c1, &b, 1, MSG_DONTWAIT); 530 + ASSERT_EQ(n, 0, "recv(fin)"); 531 531 out_close: 532 532 close(c1); 533 533 close(p1); ··· 535 535 test_sockmap_pass_prog__destroy(skel); 536 536 } 537 537 538 - static void test_sockmap_stream_pass(void) 539 - { 540 - int zero = 0, sent, recvd; 541 - int verdict, parser; 542 - int err, map; 543 - int c = -1, p = -1; 544 - struct test_sockmap_pass_prog *pass = NULL; 545 - char snd[256] = "0123456789"; 546 - char rcv[256] = "0"; 547 - 548 - pass = test_sockmap_pass_prog__open_and_load(); 549 - verdict = bpf_program__fd(pass->progs.prog_skb_verdict); 550 - parser = bpf_program__fd(pass->progs.prog_skb_parser); 551 - map = bpf_map__fd(pass->maps.sock_map_rx); 552 - 553 - err = bpf_prog_attach(parser, map, BPF_SK_SKB_STREAM_PARSER, 0); 554 - if (!ASSERT_OK(err, "bpf_prog_attach stream parser")) 555 - goto out; 556 - 557 - err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0); 558 - if (!ASSERT_OK(err, "bpf_prog_attach stream verdict")) 559 - goto out; 560 - 561 - err = create_pair(AF_INET, SOCK_STREAM, &c, &p); 562 - if (err) 563 - goto out; 564 - 565 - /* sk_data_ready of 'p' will be replaced by strparser handler */ 566 - err = bpf_map_update_elem(map, &zero, &p, BPF_NOEXIST); 567 - if (!ASSERT_OK(err, "bpf_map_update_elem(p)")) 568 - goto out_close; 569 - 570 - /* 571 - * as 'prog_skb_parser' return the original skb len and 572 - * 'prog_skb_verdict' return SK_PASS, the kernel will just 573 - * pass it through to original socket 'p' 574 - */ 575 - sent = xsend(c, snd, sizeof(snd), 0); 576 - ASSERT_EQ(sent, sizeof(snd), "xsend(c)"); 577 - 578 - recvd = recv_timeout(p, rcv, sizeof(rcv), SOCK_NONBLOCK, 579 - IO_TIMEOUT_SEC); 580 - ASSERT_EQ(recvd, sizeof(rcv), "recv_timeout(p)"); 581 - 582 - out_close: 583 - close(c); 584 - close(p); 585 - 586 
- out: 587 - test_sockmap_pass_prog__destroy(pass); 588 - } 589 538 590 539 static void test_sockmap_skb_verdict_fionread(bool pass_prog) 591 540 { ··· 581 632 ASSERT_EQ(avail, expected, "ioctl(FIONREAD)"); 582 633 /* On DROP test there will be no data to read */ 583 634 if (pass_prog) { 584 - recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC); 635 + recvd = recv_timeout(c1, &buf, sizeof(buf), MSG_DONTWAIT, IO_TIMEOUT_SEC); 585 636 ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)"); 586 637 } 587 638 ··· 1082 1133 test_sockmap_progs_query(BPF_SK_SKB_VERDICT); 1083 1134 if (test__start_subtest("sockmap skb_verdict shutdown")) 1084 1135 test_sockmap_skb_verdict_shutdown(); 1085 - if (test__start_subtest("sockmap stream parser and verdict pass")) 1086 - test_sockmap_stream_pass(); 1087 1136 if (test__start_subtest("sockmap skb_verdict fionread")) 1088 1137 test_sockmap_skb_verdict_fionread(true); 1089 1138 if (test__start_subtest("sockmap skb_verdict fionread on drop"))
+454
tools/testing/selftests/bpf/prog_tests/sockmap_strp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <error.h> 3 + #include <netinet/tcp.h> 4 + #include <test_progs.h> 5 + #include "sockmap_helpers.h" 6 + #include "test_skmsg_load_helpers.skel.h" 7 + #include "test_sockmap_strp.skel.h" 8 + 9 + #define STRP_PKT_HEAD_LEN 4 10 + #define STRP_PKT_BODY_LEN 6 11 + #define STRP_PKT_FULL_LEN (STRP_PKT_HEAD_LEN + STRP_PKT_BODY_LEN) 12 + 13 + static const char packet[STRP_PKT_FULL_LEN] = "head+body\0"; 14 + static const int test_packet_num = 100; 15 + 16 + /* Current implementation of tcp_bpf_recvmsg_parser() invokes data_ready 17 + * with sk held if an skb exists in sk_receive_queue. Then for the 18 + * data_ready implementation of strparser, it will delay the read 19 + * operation if sk is held and EAGAIN is returned. 20 + */ 21 + static int sockmap_strp_consume_pre_data(int p) 22 + { 23 + int recvd; 24 + bool retried = false; 25 + char rcv[10]; 26 + 27 + retry: 28 + errno = 0; 29 + recvd = recv_timeout(p, rcv, sizeof(rcv), 0, 1); 30 + if (recvd < 0 && errno == EAGAIN && retried == false) { 31 + /* On the first call, EAGAIN will certainly be returned. 32 + * A 1-second wait is enough for the workqueue to finish. 
33 + */ 34 + sleep(1); 35 + retried = true; 36 + goto retry; 37 + } 38 + 39 + if (!ASSERT_EQ(recvd, STRP_PKT_FULL_LEN, "recv error or truncated data") || 40 + !ASSERT_OK(memcmp(packet, rcv, STRP_PKT_FULL_LEN), 41 + "data mismatch")) 42 + return -1; 43 + return 0; 44 + } 45 + 46 + static struct test_sockmap_strp *sockmap_strp_init(int *out_map, bool pass, 47 + bool need_parser) 48 + { 49 + struct test_sockmap_strp *strp = NULL; 50 + int verdict, parser; 51 + int err; 52 + 53 + strp = test_sockmap_strp__open_and_load(); 54 + *out_map = bpf_map__fd(strp->maps.sock_map); 55 + 56 + if (need_parser) 57 + parser = bpf_program__fd(strp->progs.prog_skb_parser_partial); 58 + else 59 + parser = bpf_program__fd(strp->progs.prog_skb_parser); 60 + 61 + if (pass) 62 + verdict = bpf_program__fd(strp->progs.prog_skb_verdict_pass); 63 + else 64 + verdict = bpf_program__fd(strp->progs.prog_skb_verdict); 65 + 66 + err = bpf_prog_attach(parser, *out_map, BPF_SK_SKB_STREAM_PARSER, 0); 67 + if (!ASSERT_OK(err, "bpf_prog_attach stream parser")) 68 + goto err; 69 + 70 + err = bpf_prog_attach(verdict, *out_map, BPF_SK_SKB_STREAM_VERDICT, 0); 71 + if (!ASSERT_OK(err, "bpf_prog_attach stream verdict")) 72 + goto err; 73 + 74 + return strp; 75 + err: 76 + test_sockmap_strp__destroy(strp); 77 + return NULL; 78 + } 79 + 80 + /* Dispatch packets to different socket by packet size: 81 + * 82 + * ------ ------ 83 + * | pkt4 || pkt1 |... > remote socket 84 + * ------ ------ / ------ ------ 85 + * | pkt8 | pkt7 |... 86 + * ------ ------ \ ------ ------ 87 + * | pkt3 || pkt2 |... 
> local socket 88 + * ------ ------ 89 + */ 90 + static void test_sockmap_strp_dispatch_pkt(int family, int sotype) 91 + { 92 + int i, j, zero = 0, one = 1, recvd; 93 + int err, map; 94 + int c0 = -1, p0 = -1, c1 = -1, p1 = -1; 95 + struct test_sockmap_strp *strp = NULL; 96 + int test_cnt = 6; 97 + char rcv[10]; 98 + struct { 99 + char data[7]; 100 + int data_len; 101 + int send_cnt; 102 + int *receiver; 103 + } send_dir[2] = { 104 + /* data expected to deliver to local */ 105 + {"llllll", 6, 0, &p0}, 106 + /* data expected to deliver to remote */ 107 + {"rrrrr", 5, 0, &c1} 108 + }; 109 + 110 + strp = sockmap_strp_init(&map, false, false); 111 + if (!ASSERT_TRUE(strp, "sockmap_strp_init")) 112 + return; 113 + 114 + err = create_socket_pairs(family, sotype, &c0, &c1, &p0, &p1); 115 + if (!ASSERT_OK(err, "create_socket_pairs()")) 116 + goto out; 117 + 118 + err = bpf_map_update_elem(map, &zero, &p0, BPF_NOEXIST); 119 + if (!ASSERT_OK(err, "bpf_map_update_elem(p0)")) 120 + goto out_close; 121 + 122 + err = bpf_map_update_elem(map, &one, &p1, BPF_NOEXIST); 123 + if (!ASSERT_OK(err, "bpf_map_update_elem(p1)")) 124 + goto out_close; 125 + 126 + err = setsockopt(c1, IPPROTO_TCP, TCP_NODELAY, &zero, sizeof(zero)); 127 + if (!ASSERT_OK(err, "setsockopt(TCP_NODELAY)")) 128 + goto out_close; 129 + 130 + /* deliver data with data size greater than 5 to local */ 131 + strp->data->verdict_max_size = 5; 132 + 133 + for (i = 0; i < test_cnt; i++) { 134 + int d = i % 2; 135 + 136 + xsend(c0, send_dir[d].data, send_dir[d].data_len, 0); 137 + send_dir[d].send_cnt++; 138 + } 139 + 140 + for (i = 0; i < 2; i++) { 141 + for (j = 0; j < send_dir[i].send_cnt; j++) { 142 + int expected = send_dir[i].data_len; 143 + 144 + recvd = recv_timeout(*send_dir[i].receiver, rcv, 145 + expected, MSG_DONTWAIT, 146 + IO_TIMEOUT_SEC); 147 + if (!ASSERT_EQ(recvd, expected, "recv_timeout()")) 148 + goto out_close; 149 + if (!ASSERT_OK(memcmp(send_dir[i].data, rcv, recvd), 150 + "data mismatch")) 151 + 
goto out_close; 152 + } 153 + } 154 + out_close: 155 + close(c0); 156 + close(c1); 157 + close(p0); 158 + close(p1); 159 + out: 160 + test_sockmap_strp__destroy(strp); 161 + } 162 + 163 + /* We have multiple packets in one skb 164 + * ------------ ------------ ------------ 165 + * | packet1 | packet2 | ... 166 + * ------------ ------------ ------------ 167 + */ 168 + static void test_sockmap_strp_multiple_pkt(int family, int sotype) 169 + { 170 + int i, zero = 0; 171 + int sent, recvd, total; 172 + int err, map; 173 + int c = -1, p = -1; 174 + struct test_sockmap_strp *strp = NULL; 175 + char *snd = NULL, *rcv = NULL; 176 + 177 + strp = sockmap_strp_init(&map, true, true); 178 + if (!ASSERT_TRUE(strp, "sockmap_strp_init")) 179 + return; 180 + 181 + err = create_pair(family, sotype, &c, &p); 182 + if (err) 183 + goto out; 184 + 185 + err = bpf_map_update_elem(map, &zero, &p, BPF_NOEXIST); 186 + if (!ASSERT_OK(err, "bpf_map_update_elem(zero, p)")) 187 + goto out_close; 188 + 189 + /* construct multiple packets in one buffer */ 190 + total = test_packet_num * STRP_PKT_FULL_LEN; 191 + snd = malloc(total); 192 + rcv = malloc(total + 1); 193 + if (!ASSERT_TRUE(snd, "malloc(snd)") || 194 + !ASSERT_TRUE(rcv, "malloc(rcv)")) 195 + goto out_close; 196 + 197 + for (i = 0; i < test_packet_num; i++) { 198 + memcpy(snd + i * STRP_PKT_FULL_LEN, 199 + packet, STRP_PKT_FULL_LEN); 200 + } 201 + 202 + sent = xsend(c, snd, total, 0); 203 + if (!ASSERT_EQ(sent, total, "xsend(c)")) 204 + goto out_close; 205 + 206 + /* try to recv one more byte to avoid truncation check */ 207 + recvd = recv_timeout(p, rcv, total + 1, MSG_DONTWAIT, IO_TIMEOUT_SEC); 208 + if (!ASSERT_EQ(recvd, total, "recv(rcv)")) 209 + goto out_close; 210 + 211 + /* we sent TCP segment with multiple encapsulation 212 + * then check whether packets are handled correctly 213 + */ 214 + if (!ASSERT_OK(memcmp(snd, rcv, total), "data mismatch")) 215 + goto out_close; 216 + 217 + out_close: 218 + close(c); 219 + close(p); 220 
+ if (snd) 221 + free(snd); 222 + if (rcv) 223 + free(rcv); 224 + out: 225 + test_sockmap_strp__destroy(strp); 226 + } 227 + 228 + /* Test strparser with partial read */ 229 + static void test_sockmap_strp_partial_read(int family, int sotype) 230 + { 231 + int zero = 0, recvd, off; 232 + int err, map; 233 + int c = -1, p = -1; 234 + struct test_sockmap_strp *strp = NULL; 235 + char rcv[STRP_PKT_FULL_LEN + 1] = "0"; 236 + 237 + strp = sockmap_strp_init(&map, true, true); 238 + if (!ASSERT_TRUE(strp, "sockmap_strp_init")) 239 + return; 240 + 241 + err = create_pair(family, sotype, &c, &p); 242 + if (err) 243 + goto out; 244 + 245 + /* sk_data_ready of 'p' will be replaced by strparser handler */ 246 + err = bpf_map_update_elem(map, &zero, &p, BPF_NOEXIST); 247 + if (!ASSERT_OK(err, "bpf_map_update_elem(zero, p)")) 248 + goto out_close; 249 + 250 + /* 1.1 send partial head, 1 byte header left */ 251 + off = STRP_PKT_HEAD_LEN - 1; 252 + xsend(c, packet, off, 0); 253 + recvd = recv_timeout(p, rcv, sizeof(rcv), MSG_DONTWAIT, 1); 254 + if (!ASSERT_EQ(-1, recvd, "partial head sent, expected no data")) 255 + goto out_close; 256 + 257 + /* 1.2 send remaining head and body */ 258 + xsend(c, packet + off, STRP_PKT_FULL_LEN - off, 0); 259 + recvd = recv_timeout(p, rcv, sizeof(rcv), MSG_DONTWAIT, IO_TIMEOUT_SEC); 260 + if (!ASSERT_EQ(recvd, STRP_PKT_FULL_LEN, "expected full data")) 261 + goto out_close; 262 + 263 + /* 2.1 send partial head, 1 byte header left */ 264 + off = STRP_PKT_HEAD_LEN - 1; 265 + xsend(c, packet, off, 0); 266 + 267 + /* 2.2 send remaining head and partial body, 1 byte body left */ 268 + xsend(c, packet + off, STRP_PKT_FULL_LEN - off - 1, 0); 269 + off = STRP_PKT_FULL_LEN - 1; 270 + recvd = recv_timeout(p, rcv, sizeof(rcv), MSG_DONTWAIT, 1); 271 + if (!ASSERT_EQ(-1, recvd, "partial body sent, expected no data")) 272 + goto out_close; 273 + 274 + /* 2.3 send remaining body */ 275 + xsend(c, packet + off, STRP_PKT_FULL_LEN - off, 0); 276 + recvd = 
recv_timeout(p, rcv, sizeof(rcv), MSG_DONTWAIT, IO_TIMEOUT_SEC); 277 + if (!ASSERT_EQ(recvd, STRP_PKT_FULL_LEN, "expected full data")) 278 + goto out_close; 279 + 280 + out_close: 281 + close(c); 282 + close(p); 283 + 284 + out: 285 + test_sockmap_strp__destroy(strp); 286 + } 287 + 288 + /* Test simple socket read/write with strparser + FIONREAD */ 289 + static void test_sockmap_strp_pass(int family, int sotype, bool fionread) 290 + { 291 + int zero = 0, pkt_size = STRP_PKT_FULL_LEN, sent, recvd, avail; 292 + int err, map; 293 + int c = -1, p = -1; 294 + int test_cnt = 10, i; 295 + struct test_sockmap_strp *strp = NULL; 296 + char rcv[STRP_PKT_FULL_LEN + 1] = "0"; 297 + 298 + strp = sockmap_strp_init(&map, true, true); 299 + if (!ASSERT_TRUE(strp, "sockmap_strp_init")) 300 + return; 301 + 302 + err = create_pair(family, sotype, &c, &p); 303 + if (err) 304 + goto out; 305 + 306 + /* inject some data before bpf process, it should be read 307 + * correctly because we check sk_receive_queue in 308 + * tcp_bpf_recvmsg_parser(). 309 + */ 310 + sent = xsend(c, packet, pkt_size, 0); 311 + if (!ASSERT_EQ(sent, pkt_size, "xsend(pre-data)")) 312 + goto out_close; 313 + 314 + /* sk_data_ready of 'p' will be replaced by strparser handler */ 315 + err = bpf_map_update_elem(map, &zero, &p, BPF_NOEXIST); 316 + if (!ASSERT_OK(err, "bpf_map_update_elem(p)")) 317 + goto out_close; 318 + 319 + /* consume previous data we injected */ 320 + if (sockmap_strp_consume_pre_data(p)) 321 + goto out_close; 322 + 323 + /* Previously, we encountered issues such as deadlocks and 324 + * sequence errors that resulted in the inability to read 325 + * continuously. Therefore, we perform multiple iterations 326 + * of testing here. 
327 + */ 328 + for (i = 0; i < test_cnt; i++) { 329 + sent = xsend(c, packet, pkt_size, 0); 330 + if (!ASSERT_EQ(sent, pkt_size, "xsend(c)")) 331 + goto out_close; 332 + 333 + recvd = recv_timeout(p, rcv, sizeof(rcv), MSG_DONTWAIT, 334 + IO_TIMEOUT_SEC); 335 + if (!ASSERT_EQ(recvd, pkt_size, "recv_timeout(p)") || 336 + !ASSERT_OK(memcmp(packet, rcv, pkt_size), 337 + "memcmp, data mismatch")) 338 + goto out_close; 339 + } 340 + 341 + if (fionread) { 342 + sent = xsend(c, packet, pkt_size, 0); 343 + if (!ASSERT_EQ(sent, pkt_size, "second xsend(c)")) 344 + goto out_close; 345 + 346 + err = ioctl(p, FIONREAD, &avail); 347 + if (!ASSERT_OK(err, "ioctl(FIONREAD) error") || 348 + !ASSERT_EQ(avail, pkt_size, "ioctl(FIONREAD)")) 349 + goto out_close; 350 + 351 + recvd = recv_timeout(p, rcv, sizeof(rcv), MSG_DONTWAIT, 352 + IO_TIMEOUT_SEC); 353 + if (!ASSERT_EQ(recvd, pkt_size, "second recv_timeout(p)") || 354 + !ASSERT_OK(memcmp(packet, rcv, pkt_size), 355 + "second memcmp, data mismatch")) 356 + goto out_close; 357 + } 358 + 359 + out_close: 360 + close(c); 361 + close(p); 362 + 363 + out: 364 + test_sockmap_strp__destroy(strp); 365 + } 366 + 367 + /* Test strparser with verdict mode */ 368 + static void test_sockmap_strp_verdict(int family, int sotype) 369 + { 370 + int zero = 0, one = 1, sent, recvd, off; 371 + int err, map; 372 + int c0 = -1, p0 = -1, c1 = -1, p1 = -1; 373 + struct test_sockmap_strp *strp = NULL; 374 + char rcv[STRP_PKT_FULL_LEN + 1] = "0"; 375 + 376 + strp = sockmap_strp_init(&map, false, true); 377 + if (!ASSERT_TRUE(strp, "sockmap_strp_init")) 378 + return; 379 + 380 + /* We simulate a reverse proxy server. 381 + * When p0 receives data from c0, we forward it to c1. 382 + * From c1's perspective, it will consider this data 383 + * as being sent by p1. 
384 + */ 385 + err = create_socket_pairs(family, sotype, &c0, &c1, &p0, &p1); 386 + if (!ASSERT_OK(err, "create_socket_pairs()")) 387 + goto out; 388 + 389 + err = bpf_map_update_elem(map, &zero, &p0, BPF_NOEXIST); 390 + if (!ASSERT_OK(err, "bpf_map_update_elem(p0)")) 391 + goto out_close; 392 + 393 + err = bpf_map_update_elem(map, &one, &p1, BPF_NOEXIST); 394 + if (!ASSERT_OK(err, "bpf_map_update_elem(p1)")) 395 + goto out_close; 396 + 397 + sent = xsend(c0, packet, STRP_PKT_FULL_LEN, 0); 398 + if (!ASSERT_EQ(sent, STRP_PKT_FULL_LEN, "xsend(c0)")) 399 + goto out_close; 400 + 401 + recvd = recv_timeout(c1, rcv, sizeof(rcv), MSG_DONTWAIT, 402 + IO_TIMEOUT_SEC); 403 + if (!ASSERT_EQ(recvd, STRP_PKT_FULL_LEN, "recv_timeout(c1)") || 404 + !ASSERT_OK(memcmp(packet, rcv, STRP_PKT_FULL_LEN), 405 + "received data does not match the sent data")) 406 + goto out_close; 407 + 408 + /* send again to ensure the stream is functioning correctly. */ 409 + sent = xsend(c0, packet, STRP_PKT_FULL_LEN, 0); 410 + if (!ASSERT_EQ(sent, STRP_PKT_FULL_LEN, "second xsend(c0)")) 411 + goto out_close; 412 + 413 + /* partial read */ 414 + off = STRP_PKT_FULL_LEN / 2; 415 + recvd = recv_timeout(c1, rcv, off, MSG_DONTWAIT, 416 + IO_TIMEOUT_SEC); 417 + recvd += recv_timeout(c1, rcv + off, sizeof(rcv) - off, MSG_DONTWAIT, 418 + IO_TIMEOUT_SEC); 419 + 420 + if (!ASSERT_EQ(recvd, STRP_PKT_FULL_LEN, "partial recv_timeout(c1)") || 421 + !ASSERT_OK(memcmp(packet, rcv, STRP_PKT_FULL_LEN), 422 + "partial received data does not match the sent data")) 423 + goto out_close; 424 + 425 + out_close: 426 + close(c0); 427 + close(c1); 428 + close(p0); 429 + close(p1); 430 + out: 431 + test_sockmap_strp__destroy(strp); 432 + } 433 + 434 + void test_sockmap_strp(void) 435 + { 436 + if (test__start_subtest("sockmap strp tcp pass")) 437 + test_sockmap_strp_pass(AF_INET, SOCK_STREAM, false); 438 + if (test__start_subtest("sockmap strp tcp v6 pass")) 439 + test_sockmap_strp_pass(AF_INET6, SOCK_STREAM, false); 440 + if 
(test__start_subtest("sockmap strp tcp pass fionread")) 441 + test_sockmap_strp_pass(AF_INET, SOCK_STREAM, true); 442 + if (test__start_subtest("sockmap strp tcp v6 pass fionread")) 443 + test_sockmap_strp_pass(AF_INET6, SOCK_STREAM, true); 444 + if (test__start_subtest("sockmap strp tcp verdict")) 445 + test_sockmap_strp_verdict(AF_INET, SOCK_STREAM); 446 + if (test__start_subtest("sockmap strp tcp v6 verdict")) 447 + test_sockmap_strp_verdict(AF_INET6, SOCK_STREAM); 448 + if (test__start_subtest("sockmap strp tcp partial read")) 449 + test_sockmap_strp_partial_read(AF_INET, SOCK_STREAM); 450 + if (test__start_subtest("sockmap strp tcp multiple packets")) 451 + test_sockmap_strp_multiple_pkt(AF_INET, SOCK_STREAM); 452 + if (test__start_subtest("sockmap strp tcp dispatch")) 453 + test_sockmap_strp_dispatch_pkt(AF_INET, SOCK_STREAM); 454 + }
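The tests above rely on a simple fixed-length framing: each record is a size header plus body, and `test_sockmap_strp_multiple_pkt` packs several records into a single send. A userspace sketch of that framing (the 4+6 byte split and the `packet` contents are assumptions taken from the companion BPF program, not part of the diff):

```python
# Sketch of the fixed-length framing the strparser tests assume:
# a 4-byte header plus 6-byte body, several records packed into one
# send() that must come back intact and in order after reassembly.
STRP_PKT_HEAD_LEN = 4   # assumed from the companion BPF parser
STRP_PKT_BODY_LEN = 6
STRP_PKT_FULL_LEN = STRP_PKT_HEAD_LEN + STRP_PKT_BODY_LEN

def split_records(buf: bytes) -> list:
    """Split a reassembled byte stream into fixed-length records,
    as the stream parser's framing implies."""
    assert len(buf) % STRP_PKT_FULL_LEN == 0, "truncated stream"
    return [buf[i:i + STRP_PKT_FULL_LEN]
            for i in range(0, len(buf), STRP_PKT_FULL_LEN)]

packet = b"HEADbodyby"          # 10-byte stand-in for the test's packet
stream = packet * 3             # "multiple packets in one skb"
records = split_records(stream)
assert records == [packet, packet, packet]
```

The memcmp in `test_sockmap_strp_multiple_pkt` is the kernel-side equivalent of the final assertion: the concatenated records must survive the strparser path byte-for-byte.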
+2 -2
tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
··· 52 52 ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to cpumap entry prog_id"); 53 53 54 54 /* send a packet to trigger any potential bugs in there */ 55 - char data[10] = {}; 55 + char data[ETH_HLEN] = {}; 56 56 DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, 57 57 .data_in = &data, 58 - .data_size_in = 10, 58 + .data_size_in = sizeof(data), 59 59 .flags = BPF_F_TEST_XDP_LIVE_FRAMES, 60 60 .repeat = 1, 61 61 );
+4 -4
tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
··· 23 23 __u32 len = sizeof(info); 24 24 int err, dm_fd, dm_fd_redir, map_fd; 25 25 struct nstoken *nstoken = NULL; 26 - char data[10] = {}; 26 + char data[ETH_HLEN] = {}; 27 27 __u32 idx = 0; 28 28 29 29 SYS(out_close, "ip netns add %s", TEST_NS); ··· 58 58 /* send a packet to trigger any potential bugs in there */ 59 59 DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, 60 60 .data_in = &data, 61 - .data_size_in = 10, 61 + .data_size_in = sizeof(data), 62 62 .flags = BPF_F_TEST_XDP_LIVE_FRAMES, 63 63 .repeat = 1, 64 64 ); ··· 158 158 struct nstoken *nstoken = NULL; 159 159 __u32 len = sizeof(info); 160 160 int err, dm_fd, dm_fd_redir, map_fd, ifindex_dst; 161 - char data[10] = {}; 161 + char data[ETH_HLEN] = {}; 162 162 __u32 idx = 0; 163 163 164 164 SYS(out_close, "ip netns add %s", TEST_NS); ··· 208 208 /* send a packet to trigger any potential bugs in there */ 209 209 DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, 210 210 .data_in = &data, 211 - .data_size_in = 10, 211 + .data_size_in = sizeof(data), 212 212 .flags = BPF_F_TEST_XDP_LIVE_FRAMES, 213 213 .repeat = 1, 214 214 );
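The `char data[10]` to `char data[ETH_HLEN]` change in both XDP attach tests sizes the injected frame to a full Ethernet header rather than a magic number. A tiny sketch of why 14 is the right minimum (the constant breakdown is standard Ethernet, not something stated in the diff):

```python
# ETH_HLEN is the Ethernet header length: 6-byte destination MAC,
# 6-byte source MAC, 2-byte ethertype. A frame injected with
# BPF_F_TEST_XDP_LIVE_FRAMES should be at least this large.
ETH_HLEN = 6 + 6 + 2
data = bytes(ETH_HLEN)        # zeroed frame, like `char data[ETH_HLEN] = {}`
assert ETH_HLEN == 14
assert len(data) == ETH_HLEN
```

Using `sizeof(data)` for `.data_size_in` (as the fix does) keeps the buffer and the declared size from drifting apart.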
+53
tools/testing/selftests/bpf/progs/test_sockmap_strp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bpf.h> 3 + #include <bpf/bpf_helpers.h> 4 + #include <bpf/bpf_endian.h> 5 + int verdict_max_size = 10000; 6 + struct { 7 + __uint(type, BPF_MAP_TYPE_SOCKMAP); 8 + __uint(max_entries, 20); 9 + __type(key, int); 10 + __type(value, int); 11 + } sock_map SEC(".maps"); 12 + 13 + SEC("sk_skb/stream_verdict") 14 + int prog_skb_verdict(struct __sk_buff *skb) 15 + { 16 + __u32 one = 1; 17 + 18 + if (skb->len > verdict_max_size) 19 + return SK_PASS; 20 + 21 + return bpf_sk_redirect_map(skb, &sock_map, one, 0); 22 + } 23 + 24 + SEC("sk_skb/stream_verdict") 25 + int prog_skb_verdict_pass(struct __sk_buff *skb) 26 + { 27 + return SK_PASS; 28 + } 29 + 30 + SEC("sk_skb/stream_parser") 31 + int prog_skb_parser(struct __sk_buff *skb) 32 + { 33 + return skb->len; 34 + } 35 + 36 + SEC("sk_skb/stream_parser") 37 + int prog_skb_parser_partial(struct __sk_buff *skb) 38 + { 39 + /* by agreement with the test program, each packet has a 40 + * 4-byte size header and a 6-byte body. 41 + */ 42 + if (skb->len < 4) { 43 + /* need more data to determine the full length */ 44 + return 0; 45 + } 46 + /* return the full length decoded from the header. 47 + * the return value may be larger than skb->len, which 48 + * means the framework must wait for the body to arrive. 49 + */ 50 + return 10; 51 + } 52 + 53 + char _license[] SEC("license") = "GPL";
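The return-value contract `prog_skb_parser_partial` implements is the core of the strparser API: given the bytes seen so far, return 0 for "cannot frame yet" or the full record length, which may exceed what has arrived (the framework then waits for the body). A minimal sketch of that contract (constants mirror the 4+6 byte framing in the comment above):

```python
# Stream-parser return contract, as prog_skb_parser_partial applies it:
#   0          -> header incomplete, need more data before a verdict
#   full_len   -> record length; may be > bytes currently queued,
#                 in which case the framework waits for the body.
HEAD_LEN, FULL_LEN = 4, 10   # 4-byte header, 6-byte body

def parse(seen_len: int) -> int:
    if seen_len < HEAD_LEN:
        return 0         # cannot determine the record length yet
    return FULL_LEN      # header present: announce the full length

assert parse(3) == 0      # partial header -> no verdict yet
assert parse(4) == 10     # header done -> wait for 6 more body bytes
assert parse(10) == 10    # whole record present
```

This is exactly what the `test_sockmap_strp_partial_read` cases exercise: sending `HEAD_LEN - 1` bytes must produce no readable data on the peer, and completing the record must deliver all 10 bytes at once.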
+15
tools/testing/selftests/bpf/progs/verifier_array_access.c
··· 713 713 return val->index; 714 714 } 715 715 716 + SEC("socket") 717 + __description("doesn't reject UINT64_MAX as s64 for irrelevant maps") 718 + __success __retval(42) 719 + unsigned int doesnt_reject_irrelevant_maps(void) 720 + { 721 + __u64 key = 0xFFFFFFFFFFFFFFFF; 722 + struct test_val *val; 723 + 724 + val = bpf_map_lookup_elem(&map_hash_48b, &key); 725 + if (val) 726 + return val->index; 727 + 728 + return 42; 729 + } 730 + 716 731 char _license[] SEC("license") = "GPL";
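The new verifier test feeds `0xFFFFFFFFFFFFFFFF` as a hash-map key. Reinterpreted as a signed 64-bit value that bit pattern is -1, and the test's point is that this must be irrelevant for map types whose keys are not array indices. A sketch of the reinterpretation (pure two's-complement arithmetic, nothing verifier-specific):

```python
# Reinterpret an unsigned 64-bit pattern as signed (two's complement),
# showing why UINT64_MAX "looks like" -1 when viewed as s64 - the
# property the verifier must not use to reject a hash-map lookup.
def u64_as_s64(v: int) -> int:
    v &= (1 << 64) - 1
    return v - (1 << 64) if v >= (1 << 63) else v

assert u64_as_s64(0xFFFFFFFFFFFFFFFF) == -1
assert u64_as_s64(42) == 42
```

For `BPF_MAP_TYPE_HASH` the key is opaque bytes, so any 64-bit pattern, including the all-ones one, is a valid key and the lookup must be allowed.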
+142 -3
tools/testing/selftests/drivers/net/hds.py
··· 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 4 import errno 5 + import os 5 6 from lib.py import ksft_run, ksft_exit, ksft_eq, ksft_raises, KsftSkipEx 6 - from lib.py import EthtoolFamily, NlError 7 + from lib.py import CmdExitFailure, EthtoolFamily, NlError 7 8 from lib.py import NetDrvEnv 9 + from lib.py import defer, ethtool, ip 8 10 9 - def get_hds(cfg, netnl) -> None: 11 + 12 + def _get_hds_mode(cfg, netnl) -> str: 10 13 try: 11 14 rings = netnl.rings_get({'header': {'dev-index': cfg.ifindex}}) 12 15 except NlError as e: 13 16 raise KsftSkipEx('ring-get not supported by device') 14 17 if 'tcp-data-split' not in rings: 15 18 raise KsftSkipEx('tcp-data-split not supported by device') 19 + return rings['tcp-data-split'] 20 + 21 + 22 + def _xdp_onoff(cfg): 23 + test_dir = os.path.dirname(os.path.realpath(__file__)) 24 + prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o" 25 + ip("link set dev %s xdp obj %s sec xdp" % 26 + (cfg.ifname, prog)) 27 + ip("link set dev %s xdp off" % cfg.ifname) 28 + 29 + 30 + def _ioctl_ringparam_modify(cfg, netnl) -> None: 31 + """ 32 + Helper for performing a hopefully unimportant IOCTL SET. 33 + IOCTL does not support HDS, so it should not affect the HDS config. 
34 + """ 35 + try: 36 + rings = netnl.rings_get({'header': {'dev-index': cfg.ifindex}}) 37 + except NlError as e: 38 + raise KsftSkipEx('ring-get not supported by device') 39 + 40 + if 'tx' not in rings: 41 + raise KsftSkipEx('setting Tx ring size not supported') 42 + 43 + try: 44 + ethtool(f"--disable-netlink -G {cfg.ifname} tx {rings['tx'] // 2}") 45 + except CmdExitFailure as e: 46 + ethtool(f"--disable-netlink -G {cfg.ifname} tx {rings['tx'] * 2}") 47 + defer(ethtool, f"-G {cfg.ifname} tx {rings['tx']}") 48 + 49 + 50 + def get_hds(cfg, netnl) -> None: 51 + _get_hds_mode(cfg, netnl) 52 + 16 53 17 54 def get_hds_thresh(cfg, netnl) -> None: 18 55 try: ··· 141 104 netnl.rings_set({'header': {'dev-index': cfg.ifindex}, 'hds-thresh': hds_gt}) 142 105 ksft_eq(e.exception.nl_msg.error, -errno.EINVAL) 143 106 107 + 108 + def set_xdp(cfg, netnl) -> None: 109 + """ 110 + Enable single-buffer XDP on the device. 111 + When HDS is in "auto" / UNKNOWN mode, XDP installation should work. 112 + """ 113 + mode = _get_hds_mode(cfg, netnl) 114 + if mode == 'enabled': 115 + netnl.rings_set({'header': {'dev-index': cfg.ifindex}, 116 + 'tcp-data-split': 'unknown'}) 117 + 118 + _xdp_onoff(cfg) 119 + 120 + 121 + def enabled_set_xdp(cfg, netnl) -> None: 122 + """ 123 + Enable single-buffer XDP on the device. 124 + When HDS is in "enabled" mode, XDP installation should not work. 125 + """ 126 + _get_hds_mode(cfg, netnl) 127 + netnl.rings_set({'header': {'dev-index': cfg.ifindex}, 128 + 'tcp-data-split': 'enabled'}) 129 + 130 + defer(netnl.rings_set, {'header': {'dev-index': cfg.ifindex}, 131 + 'tcp-data-split': 'unknown'}) 132 + 133 + with ksft_raises(CmdExitFailure) as e: 134 + _xdp_onoff(cfg) 135 + 136 +
166 + def ioctl(cfg, netnl) -> None: 167 + mode1 = _get_hds_mode(cfg, netnl) 168 + _ioctl_ringparam_modify(cfg, netnl) 169 + mode2 = _get_hds_mode(cfg, netnl) 170 + 171 + ksft_eq(mode1, mode2) 172 + 173 + 174 + def ioctl_set_xdp(cfg, netnl) -> None: 175 + """ 176 + Like set_xdp(), but we perturb the settings via the legacy ioctl. 177 + """ 178 + mode = _get_hds_mode(cfg, netnl) 179 + if mode == 'enabled': 180 + netnl.rings_set({'header': {'dev-index': cfg.ifindex}, 181 + 'tcp-data-split': 'unknown'}) 182 + 183 + _ioctl_ringparam_modify(cfg, netnl) 184 + 185 + _xdp_onoff(cfg) 186 + 187 + 188 + def ioctl_enabled_set_xdp(cfg, netnl) -> None: 189 + """ 190 + Enable single-buffer XDP on the device. 191 + When HDS is in "enabled" mode, XDP installation should not work.
192 + """ 193 + _get_hds_mode(cfg, netnl) # Trigger skip if not supported 194 + 195 + netnl.rings_set({'header': {'dev-index': cfg.ifindex}, 196 + 'tcp-data-split': 'enabled'}) 197 + defer(netnl.rings_set, {'header': {'dev-index': cfg.ifindex}, 198 + 'tcp-data-split': 'unknown'}) 199 + 200 + with ksft_raises(CmdExitFailure) as e: 201 + _xdp_onoff(cfg) 202 + 203 + 144 204 def main() -> None: 145 205 with NetDrvEnv(__file__, queue_count=3) as cfg: 146 206 ksft_run([get_hds, ··· 246 112 set_hds_enable, 247 113 set_hds_thresh_zero, 248 114 set_hds_thresh_max, 249 - set_hds_thresh_gt], 115 + set_hds_thresh_gt, 116 + set_xdp, 117 + enabled_set_xdp, 118 + ioctl, 119 + ioctl_set_xdp, 120 + ioctl_enabled_set_xdp], 250 121 args=(cfg, EthtoolFamily())) 251 122 ksft_exit() 252 123
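`_ioctl_ringparam_modify` follows a perturb-then-restore pattern worth calling out: try one direction (halve the Tx ring), fall back to the other (double it) if the driver rejects the first, and register a deferred callback that always restores the original value. A standalone sketch of that shape (the callable/exception names are stand-ins, not the selftest library's API):

```python
# Perturb-then-restore: attempt one setting, fall back to the other on
# failure, and queue a cleanup that restores the original regardless.
def perturb_and_restore(ring_tx: int, apply, deferred: list):
    try:
        apply(ring_tx // 2)          # preferred perturbation
    except RuntimeError:             # stand-in for CmdExitFailure
        apply(ring_tx * 2)           # fallback perturbation
    deferred.append(lambda: apply(ring_tx))  # restore on cleanup

applied, cleanups = [], []
perturb_and_restore(512, applied.append, cleanups)
cleanups[-1]()                       # run the deferred restore
assert applied == [256, 512]
```

Registering the restore *after* a perturbation succeeded mirrors the test's `defer()` call: cleanup only makes sense once a change was actually applied.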
+3 -4
tools/testing/selftests/drivers/net/queues.py
··· 81 81 82 82 netnl = EthtoolFamily() 83 83 channels = netnl.channels_get({'header': {'dev-index': cfg.ifindex}}) 84 - if channels['combined-count'] == 0: 85 - rx_type = 'rx' 86 - else: 87 - rx_type = 'combined' 84 + rx_type = 'rx' 85 + if channels.get('combined-count', 0) > 0: 86 + rx_type = 'combined' 88 87 89 88 expected = curr_queues - 1 90 89 cmd(f"ethtool -L {cfg.dev['ifname']} {rx_type} {expected}", timeout=10)
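The queues.py fix above guards against drivers that report no `combined-count` attribute at all: `dict.get()` with a default avoids a `KeyError` while keeping the old behaviour when the key is present. Extracted into a testable helper (the function name is illustrative):

```python
# Choose the ethtool channel parameter to resize: 'combined' only when
# the driver actually reports combined channels, else plain 'rx'.
def pick_rx_type(channels: dict) -> str:
    rx_type = 'rx'
    if channels.get('combined-count', 0) > 0:
        rx_type = 'combined'
    return rx_type

assert pick_rx_type({}) == 'rx'                      # attribute missing
assert pick_rx_type({'combined-count': 0}) == 'rx'   # present but zero
assert pick_rx_type({'combined-count': 4}) == 'combined'
```

The original `channels['combined-count'] == 0` branch would have raised on netlink replies that omit the attribute entirely, which is exactly the case the patch addresses.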
+54
tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe.tc
··· 7 7 echo > dynamic_events 8 8 9 9 PLACE=$FUNCTION_FORK 10 + PLACE2="kmem_cache_free" 11 + PLACE3="schedule_timeout" 10 12 11 13 echo "f:myevent1 $PLACE" >> dynamic_events 14 + 15 + # Make sure the event is attached and is the only one 16 + grep -q $PLACE enabled_functions 17 + cnt=`cat enabled_functions | wc -l` 18 + if [ $cnt -ne 1 ]; then 19 + exit_fail 20 + fi 21 + 12 22 echo "f:myevent2 $PLACE%return" >> dynamic_events 23 + 24 + # It should still be the only attached function 25 + cnt=`cat enabled_functions | wc -l` 26 + if [ $cnt -ne 1 ]; then 27 + exit_fail 28 + fi 29 + 30 + # add another event 31 + echo "f:myevent3 $PLACE2" >> dynamic_events 32 + 33 + grep -q $PLACE2 enabled_functions 34 + cnt=`cat enabled_functions | wc -l` 35 + if [ $cnt -ne 2 ]; then 36 + exit_fail 37 + fi 13 38 14 39 grep -q myevent1 dynamic_events 15 40 grep -q myevent2 dynamic_events 41 + grep -q myevent3 dynamic_events 16 42 test -d events/fprobes/myevent1 17 43 test -d events/fprobes/myevent2 18 44 ··· 47 21 grep -q myevent1 dynamic_events 48 22 ! grep -q myevent2 dynamic_events 49 23 24 + # should still have 2 left 25 + cnt=`cat enabled_functions | wc -l` 26 + if [ $cnt -ne 2 ]; then 27 + exit_fail 28 + fi 29 + 50 30 echo > dynamic_events 31 + 32 + # Should have none left 33 + cnt=`cat enabled_functions | wc -l` 34 + if [ $cnt -ne 0 ]; then 35 + exit_fail 36 + fi 37 + 38 + echo "f:myevent4 $PLACE" >> dynamic_events 39 + 40 + # Should only have one enabled 41 + cnt=`cat enabled_functions | wc -l` 42 + if [ $cnt -ne 1 ]; then 43 + exit_fail 44 + fi 45 + 46 + echo > dynamic_events 47 + 48 + # Should have none left 49 + cnt=`cat enabled_functions | wc -l` 50 + if [ $cnt -ne 0 ]; then 51 + exit_fail 52 + fi 51 53 52 54 clear_trace
-2
tools/testing/selftests/hid/Makefile
··· 43 43 # $3 - target (assumed to be file); only file name will be emitted; 44 44 # $4 - optional extra arg, emitted as-is, if provided. 45 45 ifeq ($(V),1) 46 - Q = 47 46 msg = 48 47 else 49 - Q = @ 50 48 msg = @printf ' %-8s%s %s%s\n' "$(1)" "$(if $(2), [$(2)])" "$(notdir $(3))" "$(if $(4), $(4))"; 51 49 MAKEFLAGS += --no-print-directory 52 50 submake_extras := feature_display=0
+2
tools/testing/selftests/landlock/.gitignore
··· 1 1 /*_test 2 + /sandbox-and-launch 2 3 /true 4 + /wait-pipe
+1
tools/testing/selftests/landlock/common.h
··· 207 207 struct protocol_variant { 208 208 int domain; 209 209 int type; 210 + int protocol; 210 211 }; 211 212 212 213 struct service_fixture {
+3
tools/testing/selftests/landlock/config
··· 1 + CONFIG_AF_UNIX_OOB=y 1 2 CONFIG_CGROUPS=y 2 3 CONFIG_CGROUP_SCHED=y 3 4 CONFIG_INET=y 4 5 CONFIG_IPV6=y 5 6 CONFIG_KEYS=y 7 + CONFIG_MPTCP=y 8 + CONFIG_MPTCP_IPV6=y 6 9 CONFIG_NET=y 7 10 CONFIG_NET_NS=y 8 11 CONFIG_OVERLAY_FS=y
+110 -14
tools/testing/selftests/landlock/net_test.c
··· 85 85 clear_ambient_cap(_metadata, CAP_NET_ADMIN); 86 86 } 87 87 88 + static bool prot_is_tcp(const struct protocol_variant *const prot) 89 + { 90 + return (prot->domain == AF_INET || prot->domain == AF_INET6) && 91 + prot->type == SOCK_STREAM && 92 + (prot->protocol == IPPROTO_TCP || prot->protocol == IPPROTO_IP); 93 + } 94 + 88 95 static bool is_restricted(const struct protocol_variant *const prot, 89 96 const enum sandbox_type sandbox) 90 97 { 91 - switch (prot->domain) { 92 - case AF_INET: 93 - case AF_INET6: 94 - switch (prot->type) { 95 - case SOCK_STREAM: 96 - return sandbox == TCP_SANDBOX; 97 - } 98 - break; 99 - } 98 + if (sandbox == TCP_SANDBOX) 99 + return prot_is_tcp(prot); 100 100 return false; 101 101 } 102 102 ··· 105 105 int ret; 106 106 107 107 ret = socket(srv->protocol.domain, srv->protocol.type | SOCK_CLOEXEC, 108 - 0); 108 + srv->protocol.protocol); 109 109 if (ret < 0) 110 110 return -errno; 111 111 return ret; ··· 290 290 } 291 291 292 292 /* clang-format off */ 293 - FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_tcp) { 293 + FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_tcp1) { 294 294 /* clang-format on */ 295 295 .sandbox = NO_SANDBOX, 296 296 .prot = { 297 297 .domain = AF_INET, 298 298 .type = SOCK_STREAM, 299 + /* IPPROTO_IP == 0 */ 300 + .protocol = IPPROTO_IP, 299 301 }, 300 302 }; 301 303 302 304 /* clang-format off */ 303 - FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_tcp) { 305 + FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_tcp2) { 306 + /* clang-format on */ 307 + .sandbox = NO_SANDBOX, 308 + .prot = { 309 + .domain = AF_INET, 310 + .type = SOCK_STREAM, 311 + .protocol = IPPROTO_TCP, 312 + }, 313 + }; 314 + 315 + /* clang-format off */ 316 + FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_mptcp) { 317 + /* clang-format on */ 318 + .sandbox = NO_SANDBOX, 319 + .prot = { 320 + .domain = AF_INET, 321 + .type = SOCK_STREAM, 322 + .protocol = IPPROTO_MPTCP, 323 + }, 324 + }; 325 + 326 + /* clang-format off 
*/ 327 + FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_tcp1) { 304 328 /* clang-format on */ 305 329 .sandbox = NO_SANDBOX, 306 330 .prot = { 307 331 .domain = AF_INET6, 308 332 .type = SOCK_STREAM, 333 + /* IPPROTO_IP == 0 */ 334 + .protocol = IPPROTO_IP, 335 + }, 336 + }; 337 + 338 + /* clang-format off */ 339 + FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_tcp2) { 340 + /* clang-format on */ 341 + .sandbox = NO_SANDBOX, 342 + .prot = { 343 + .domain = AF_INET6, 344 + .type = SOCK_STREAM, 345 + .protocol = IPPROTO_TCP, 346 + }, 347 + }; 348 + 349 + /* clang-format off */ 350 + FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_mptcp) { 351 + /* clang-format on */ 352 + .sandbox = NO_SANDBOX, 353 + .prot = { 354 + .domain = AF_INET6, 355 + .type = SOCK_STREAM, 356 + .protocol = IPPROTO_MPTCP, 309 357 }, 310 358 }; 311 359 ··· 398 350 }; 399 351 400 352 /* clang-format off */ 401 - FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_tcp) { 353 + FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_tcp1) { 402 354 /* clang-format on */ 403 355 .sandbox = TCP_SANDBOX, 404 356 .prot = { 405 357 .domain = AF_INET, 406 358 .type = SOCK_STREAM, 359 + /* IPPROTO_IP == 0 */ 360 + .protocol = IPPROTO_IP, 407 361 }, 408 362 }; 409 363 410 364 /* clang-format off */ 411 - FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_tcp) { 365 + FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_tcp2) { 366 + /* clang-format on */ 367 + .sandbox = TCP_SANDBOX, 368 + .prot = { 369 + .domain = AF_INET, 370 + .type = SOCK_STREAM, 371 + .protocol = IPPROTO_TCP, 372 + }, 373 + }; 374 + 375 + /* clang-format off */ 376 + FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_mptcp) { 377 + /* clang-format on */ 378 + .sandbox = TCP_SANDBOX, 379 + .prot = { 380 + .domain = AF_INET, 381 + .type = SOCK_STREAM, 382 + .protocol = IPPROTO_MPTCP, 383 + }, 384 + }; 385 + 386 + /* clang-format off */ 387 + FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_tcp1) { 412 388 /* clang-format on 
*/ 413 389 .sandbox = TCP_SANDBOX, 414 390 .prot = { 415 391 .domain = AF_INET6, 416 392 .type = SOCK_STREAM, 393 + /* IPPROTO_IP == 0 */ 394 + .protocol = IPPROTO_IP, 395 + }, 396 + }; 397 + 398 + /* clang-format off */ 399 + FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_tcp2) { 400 + /* clang-format on */ 401 + .sandbox = TCP_SANDBOX, 402 + .prot = { 403 + .domain = AF_INET6, 404 + .type = SOCK_STREAM, 405 + .protocol = IPPROTO_TCP, 406 + }, 407 + }; 408 + 409 + /* clang-format off */ 410 + FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_mptcp) { 411 + /* clang-format on */ 412 + .sandbox = TCP_SANDBOX, 413 + .prot = { 414 + .domain = AF_INET6, 415 + .type = SOCK_STREAM, 416 + .protocol = IPPROTO_MPTCP, 417 417 }, 418 418 }; 419 419
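The new `prot_is_tcp()` helper encodes a subtlety the added fixture variants exercise: a stream socket over IPv4/IPv6 counts as TCP whether it was created with an explicit `IPPROTO_TCP` or with the default protocol 0 (`IPPROTO_IP`), while `IPPROTO_MPTCP` deliberately does not match and so is not restricted by the TCP sandbox. A Python rendering of the same predicate (constant values are the usual Linux ones):

```python
# Mirror of prot_is_tcp(): SOCK_STREAM over AF_INET/AF_INET6 with the
# default protocol (0 == IPPROTO_IP) or explicit IPPROTO_TCP is "TCP";
# IPPROTO_MPTCP intentionally falls outside the TCP sandbox.
AF_INET, AF_INET6 = 2, 10
SOCK_STREAM = 1
IPPROTO_IP, IPPROTO_TCP, IPPROTO_MPTCP = 0, 6, 262

def prot_is_tcp(domain, type_, protocol) -> bool:
    return (domain in (AF_INET, AF_INET6) and
            type_ == SOCK_STREAM and
            protocol in (IPPROTO_IP, IPPROTO_TCP))

assert prot_is_tcp(AF_INET, SOCK_STREAM, IPPROTO_IP)      # socket(..., 0)
assert prot_is_tcp(AF_INET6, SOCK_STREAM, IPPROTO_TCP)
assert not prot_is_tcp(AF_INET, SOCK_STREAM, IPPROTO_MPTCP)
```

This is also why `create_socket_variant()` now passes `srv->protocol.protocol` instead of a hard-coded 0: the fixtures need to create sockets with all three protocol values.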
+3
tools/testing/selftests/net/lib/Makefile
··· 9 9 TEST_FILES += ../../../../net/ynl 10 10 11 11 TEST_GEN_FILES += csum 12 + TEST_GEN_FILES += $(patsubst %.c,%.o,$(wildcard *.bpf.c)) 12 13 13 14 TEST_INCLUDES := $(wildcard py/*.py sh/*.sh) 14 15 15 16 include ../../lib.mk 17 + 18 + include ../bpf.mk
+13
tools/testing/selftests/net/lib/xdp_dummy.bpf.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #define KBUILD_MODNAME "xdp_dummy" 4 + #include <linux/bpf.h> 5 + #include <bpf/bpf_helpers.h> 6 + 7 + SEC("xdp") 8 + int xdp_dummy_prog(struct xdp_md *ctx) 9 + { 10 + return XDP_PASS; 11 + } 12 + 13 + char _license[] SEC("license") = "GPL";
+3 -3
tools/testing/selftests/rseq/rseq-riscv-bits.h
··· 243 243 #ifdef RSEQ_COMPARE_TWICE 244 244 RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]") 245 245 #endif 246 - RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, 3) 246 + RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, inc, 3) 247 247 RSEQ_INJECT_ASM(4) 248 248 RSEQ_ASM_DEFINE_ABORT(4, abort) 249 249 : /* gcc asm goto does not allow outputs */ ··· 251 251 [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD), 252 252 [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), 253 253 [ptr] "r" (ptr), 254 - [off] "er" (off), 255 - [inc] "er" (inc) 254 + [off] "r" (off), 255 + [inc] "r" (inc) 256 256 RSEQ_INJECT_INPUT 257 257 : "memory", RSEQ_ASM_TMP_REG_1 258 258 RSEQ_INJECT_CLOBBER
+1 -1
tools/testing/selftests/rseq/rseq-riscv.h
··· 158 158 "bnez " RSEQ_ASM_TMP_REG_1 ", 222b\n" \ 159 159 "333:\n" 160 160 161 - #define RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, post_commit_label) \ 161 + #define RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, inc, post_commit_label) \ 162 162 "mv " RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(ptr) "]\n" \ 163 163 RSEQ_ASM_OP_R_ADD(off) \ 164 164 REG_L RSEQ_ASM_TMP_REG_1 ", 0(" RSEQ_ASM_TMP_REG_1 ")\n" \
-13
tools/thermal/lib/Makefile
··· 39 39 libdir_SQ = $(subst ','\'',$(libdir)) 40 40 libdir_relative_SQ = $(subst ','\'',$(libdir_relative)) 41 41 42 - ifeq ("$(origin V)", "command line") 43 - VERBOSE = $(V) 44 - endif 45 - ifndef VERBOSE 46 - VERBOSE = 0 47 - endif 48 - 49 - ifeq ($(VERBOSE),1) 50 - Q = 51 - else 52 - Q = @ 53 - endif 54 - 55 42 # Set compile option CFLAGS 56 43 ifdef EXTRA_CFLAGS 57 44 CFLAGS := $(EXTRA_CFLAGS)
-6
tools/tracing/latency/Makefile
··· 37 37 FEATURE_DISPLAY := libtraceevent 38 38 FEATURE_DISPLAY += libtracefs 39 39 40 - ifeq ($(V),1) 41 - Q = 42 - else 43 - Q = @ 44 - endif 45 - 46 40 all: $(LATENCY-COLLECTOR) 47 41 48 42 include $(srctree)/tools/build/Makefile.include
-6
tools/tracing/rtla/Makefile
··· 37 37 FEATURE_DISPLAY += libtracefs 38 38 FEATURE_DISPLAY += libcpupower 39 39 40 - ifeq ($(V),1) 41 - Q = 42 - else 43 - Q = @ 44 - endif 45 - 46 40 all: $(RTLA) 47 41 48 42 include $(srctree)/tools/build/Makefile.include
-6
tools/verification/rv/Makefile
··· 35 35 FEATURE_DISPLAY := libtraceevent 36 36 FEATURE_DISPLAY += libtracefs 37 37 38 - ifeq ($(V),1) 39 - Q = 40 - else 41 - Q = @ 42 - endif 43 - 44 38 all: $(RV) 45 39 46 40 include $(srctree)/tools/build/Makefile.include