Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'kvm-riscv-6.18-1' of https://github.com/kvm-riscv/linux into HEAD

KVM/riscv changes for 6.18

- Added SBI FWFT extension for Guest/VM with misaligned
delegation and pointer masking PMLEN features
- Added ONE_REG interface for SBI FWFT extension
- Added Zicbop and bfloat16 extensions for Guest/VM
- Enabled more common KVM selftests for RISC-V such as
access_tracking_perf_test, dirty_log_perf_test,
memslot_modification_stress_test, memslot_perf_test,
mmu_stress_test, and rseq_test
- Added SBI v3.0 PMU enhancements in KVM and perf driver

+5814 -1482
+1
.mailmap
··· 589 589 Nikolay Aleksandrov <razor@blackwall.org> <nikolay@cumulusnetworks.com> 590 590 Nikolay Aleksandrov <razor@blackwall.org> <nikolay@nvidia.com> 591 591 Nikolay Aleksandrov <razor@blackwall.org> <nikolay@isovalent.com> 592 + Nobuhiro Iwamatsu <nobuhiro.iwamatsu.x90@mail.toshiba> <nobuhiro1.iwamatsu@toshiba.co.jp> 592 593 Odelu Kukatla <quic_okukatla@quicinc.com> <okukatla@codeaurora.org> 593 594 Oleksandr Natalenko <oleksandr@natalenko.name> <oleksandr@redhat.com> 594 595 Oleksij Rempel <linux@rempel-privat.de> <bug-track@fisher-privat.net>
+5
Documentation/devicetree/bindings/spi/spi-fsl-lpspi.yaml
··· 20 20 - enum: 21 21 - fsl,imx7ulp-spi 22 22 - fsl,imx8qxp-spi 23 + - nxp,s32g2-lpspi 23 24 - items: 24 25 - enum: 25 26 - fsl,imx8ulp-spi ··· 28 27 - fsl,imx94-spi 29 28 - fsl,imx95-spi 30 29 - const: fsl,imx7ulp-spi 30 + - items: 31 + - const: nxp,s32g3-lpspi 32 + - const: nxp,s32g2-lpspi 33 + 31 34 reg: 32 35 maxItems: 1 33 36
+2 -3
Documentation/networking/napi.rst
··· 433 433 434 434 Threaded NAPI is an operating mode that uses dedicated kernel 435 435 threads rather than software IRQ context for NAPI processing. 436 - The configuration is per netdevice and will affect all 437 - NAPI instances of that device. Each NAPI instance will spawn a separate 438 - thread (called ``napi/${ifc-name}-${napi-id}``). 436 + Each threaded NAPI instance will spawn a separate thread 437 + (called ``napi/${ifc-name}-${napi-id}``). 439 438 440 439 It is recommended to pin each kernel thread to a single CPU, the same 441 440 CPU as the CPU which services the interrupt. Note that the mapping
+25 -4
Documentation/sound/alsa-configuration.rst
··· 2253 2253 Default: 0x0000 2254 2254 ignore_ctl_error 2255 2255 Ignore any USB-controller regarding mixer interface (default: no) 2256 + ``ignore_ctl_error=1`` may help when you get an error when accessing 2257 + the mixer element, such as URB error -22. This happens on some 2258 + buggy USB devices or controllers. This workaround corresponds to 2259 + the ``quirk_flags`` bit 14, too. 2256 2260 autoclock 2257 2261 Enable auto-clock selection for UAC2 devices (default: yes) 2262 + lowlatency 2263 + Enable low latency playback mode (default: yes). 2264 + It can be disabled to switch back to the old mode if you face a regression. 2258 2265 quirk_alias 2259 2266 Quirk alias list, pass strings like ``0123abcd:5678beef``, which 2260 2267 applies the existing quirk for the device 5678:beef to a new ··· 2291 2284 The driver prints a message like "Found post-registration device 2292 2285 assignment: 1234abcd:04" for such a device, so that user can 2293 2286 notice the need. 2287 + skip_validation 2288 + Skip unit descriptor validation (default: no). 2289 + The option ignores the validation errors and prints a hexdump 2290 + of the unit descriptor instead of a driver probe error, so that 2291 + its details can be checked. 2294 2292 quirk_flags 2295 2293 Contains the bit flags for various device specific workarounds. 2296 2294 Applied to the corresponding card index. ··· 2319 2307 * bit 16: Set up the interface at first like UAC1 2320 2308 * bit 17: Apply the generic implicit feedback sync mode 2321 2309 * bit 18: Don't apply implicit feedback sync mode 2310 + * bit 19: Don't close the interface while setting the sample rate 2311 + * bit 20: Force an interface reset whenever stopping & restarting 2312 + a stream 2313 + * bit 21: Do not set PCM rate (frequency) when only one rate is 2314 + available for the given endpoint. 
2315 + * bit 22: Set the fixed resolution 16 for Mic Capture Volume 2316 + * bit 23: Set the fixed resolution 384 for Mic Capture Volume 2317 + * bit 24: Set minimum volume control value as mute for devices 2318 + where the lowest playback value represents muted state instead 2319 + of minimum audible volume 2322 2320 2323 2321 This module supports multiple devices, autoprobe and hotplugging. 2324 2322 ··· 2336 2314 Don't put the value over 20. Changing via sysfs has no sanity 2337 2315 check. 2338 2316 2339 - NB: ``ignore_ctl_error=1`` may help when you get an error at accessing 2340 - the mixer element such as URB error -22. This happens on some 2341 - buggy USB device or the controller. This workaround corresponds to 2342 - the ``quirk_flags`` bit 14, too. 2317 + NB: ``ignore_ctl_error=1`` just provides a quick way to work around the 2318 + issues. If you have a buggy device that requires these quirks, please 2319 + report it upstream. 2343 2320 2344 2321 NB: ``quirk_alias`` option is provided only for testing / development. 2345 2322 If you want to have a proper support, contact to upstream for
+11 -3
MAINTAINERS
··· 3526 3526 F: arch/arm/boot/dts/nspire/ 3527 3527 3528 3528 ARM/TOSHIBA VISCONTI ARCHITECTURE 3529 - M: Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp> 3529 + M: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.x90@mail.toshiba> 3530 3530 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 3531 3531 S: Supported 3532 3532 T: git git://git.kernel.org/pub/scm/linux/kernel/git/iwamatsu/linux-visconti.git ··· 3667 3667 F: drivers/virt/coco/pkvm-guest/ 3668 3668 F: tools/testing/selftests/arm64/ 3669 3669 X: arch/arm64/boot/dts/ 3670 + X: arch/arm64/configs/defconfig 3670 3671 3671 3672 ARROW SPEEDCHIPS XRS7000 SERIES ETHERNET SWITCH DRIVER 3672 3673 M: George McCollister <george.mccollister@gmail.com> ··· 7821 7820 Q: https://gitlab.freedesktop.org/drm/nouveau/-/merge_requests 7822 7821 B: https://gitlab.freedesktop.org/drm/nouveau/-/issues 7823 7822 C: irc://irc.oftc.net/nouveau 7824 - T: git https://gitlab.freedesktop.org/drm/nouveau.git 7823 + T: git https://gitlab.freedesktop.org/drm/misc/kernel.git 7825 7824 F: drivers/gpu/drm/nouveau/ 7826 7825 F: include/uapi/drm/nouveau_drm.h 7827 7826 ··· 10389 10388 F: drivers/input/touchscreen/goodix* 10390 10389 10391 10390 GOOGLE ETHERNET DRIVERS 10392 - M: Jeroen de Borst <jeroendb@google.com> 10391 + M: Joshua Washington <joshwash@google.com> 10393 10392 M: Harshitha Ramamurthy <hramamurthy@google.com> 10394 10393 L: netdev@vger.kernel.org 10395 10394 S: Maintained ··· 17851 17850 NETWORKING [TLS] 17852 17851 M: John Fastabend <john.fastabend@gmail.com> 17853 17852 M: Jakub Kicinski <kuba@kernel.org> 17853 + M: Sabrina Dubroca <sd@queasysnail.net> 17854 17854 L: netdev@vger.kernel.org 17855 17855 S: Maintained 17856 17856 F: include/net/tls.h ··· 24253 24251 S: Maintained 24254 24252 F: Documentation/devicetree/bindings/input/allwinner,sun4i-a10-lradc-keys.yaml 24255 24253 F: drivers/input/keyboard/sun4i-lradc-keys.c 24254 + 24255 + SUNDANCE NETWORK DRIVER 24256 + M: Denis Kirjanov <dkirjanov@suse.de> 
24257 + L: netdev@vger.kernel.org 24258 + S: Maintained 24259 + F: drivers/net/ethernet/dlink/sundance.c 24256 24260 24257 24261 SUNPLUS ETHERNET DRIVER 24258 24262 M: Wells Lu <wellslutw@gmail.com>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 17 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+2
arch/arm/boot/dts/microchip/at91-sama7d65_curiosity.dts
··· 387 387 388 388 &sdmmc1 { 389 389 bus-width = <4>; 390 + no-1-8-v; 391 + sdhci-caps-mask = <0x0 0x00200000>; 390 392 pinctrl-names = "default"; 391 393 pinctrl-0 = <&pinctrl_sdmmc1_default>; 392 394 status = "okay";
+1 -1
arch/arm/boot/dts/rockchip/rk3128-xpi-3128.dts
··· 272 272 phy-mode = "rmii"; 273 273 phy-handle = <&phy0>; 274 274 assigned-clocks = <&cru SCLK_MAC_SRC>; 275 - assigned-clock-rates= <50000000>; 275 + assigned-clock-rates = <50000000>; 276 276 pinctrl-names = "default"; 277 277 pinctrl-0 = <&rmii_pins>; 278 278 status = "okay";
+3 -3
arch/arm/boot/dts/rockchip/rv1109-relfor-saib.dts
··· 250 250 &i2s0 { 251 251 /delete-property/ pinctrl-0; 252 252 rockchip,trcm-sync-rx-only; 253 - pinctrl-0 = <&i2s0m0_sclk_rx>, 254 - <&i2s0m0_lrck_rx>, 255 - <&i2s0m0_sdi0>; 253 + pinctrl-0 = <&i2s0m0_sclk_rx>, 254 + <&i2s0m0_lrck_rx>, 255 + <&i2s0m0_sdi0>; 256 256 pinctrl-names = "default"; 257 257 status = "okay"; 258 258 };
+4
arch/arm/mach-at91/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 + config ARCH_MICROCHIP 3 + bool 4 + 2 5 menuconfig ARCH_AT91 3 6 bool "AT91/Microchip SoCs" 4 7 depends on (CPU_LITTLE_ENDIAN && (ARCH_MULTI_V4T || ARCH_MULTI_V5)) || \ ··· 11 8 select GPIOLIB 12 9 select PINCTRL 13 10 select SOC_BUS 11 + select ARCH_MICROCHIP 14 12 15 13 if ARCH_AT91 16 14 config SOC_SAMV7
+3
arch/arm64/boot/dts/axiado/ax3000-evk.dts
··· 14 14 #size-cells = <2>; 15 15 16 16 aliases { 17 + serial0 = &uart0; 18 + serial1 = &uart1; 19 + serial2 = &uart2; 17 20 serial3 = &uart3; 18 21 }; 19 22
+1
arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts
··· 555 555 pinctrl-2 = <&pinctrl_usdhc2_200mhz>, <&pinctrl_usdhc2_gpio>; 556 556 cd-gpios = <&gpio2 12 GPIO_ACTIVE_LOW>; 557 557 vmmc-supply = <&reg_usdhc2_vmmc>; 558 + vqmmc-supply = <&ldo5>; 558 559 bus-width = <4>; 559 560 status = "okay"; 560 561 };
+1
arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi
··· 609 609 pinctrl-2 = <&pinctrl_usdhc2_200mhz>, <&pinctrl_usdhc2_gpio>; 610 610 cd-gpios = <&gpio2 12 GPIO_ACTIVE_LOW>; 611 611 vmmc-supply = <&reg_usdhc2_vmmc>; 612 + vqmmc-supply = <&ldo5>; 612 613 bus-width = <4>; 613 614 status = "okay"; 614 615 };
+7 -6
arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mp-ras314.dts
··· 467 467 status = "okay"; 468 468 }; 469 469 470 + &reg_usdhc2_vqmmc { 471 + status = "okay"; 472 + }; 473 + 470 474 &sai5 { 471 475 pinctrl-names = "default"; 472 476 pinctrl-0 = <&pinctrl_sai5>; ··· 880 876 <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0 0x1d2>, 881 877 <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d2>, 882 878 <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d2>, 883 - <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d2>, 884 - <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0>; 879 + <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d2>; 885 880 }; 886 881 887 882 pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp { ··· 889 886 <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0 0x1d4>, 890 887 <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d4>, 891 888 <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d4>, 892 - <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4>, 893 - <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0>; 889 + <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4>; 894 890 }; 895 891 896 892 pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp { ··· 898 896 <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0 0x1d4>, 899 897 <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d4>, 900 898 <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d4>, 901 - <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4>, 902 - <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0>; 899 + <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4>; 903 900 }; 904 901 905 902 pinctrl_usdhc2_gpio: usdhc2-gpiogrp {
+7 -6
arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
··· 604 604 status = "okay"; 605 605 }; 606 606 607 + &reg_usdhc2_vqmmc { 608 + status = "okay"; 609 + }; 610 + 607 611 &sai3 { 608 612 pinctrl-names = "default"; 609 613 pinctrl-0 = <&pinctrl_sai3>; ··· 987 983 <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0 0x1d2>, 988 984 <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d2>, 989 985 <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d2>, 990 - <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d2>, 991 - <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0>; 986 + <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d2>; 992 987 }; 993 988 994 989 pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp { ··· 996 993 <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0 0x1d4>, 997 994 <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d4>, 998 995 <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d4>, 999 - <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4>, 1000 - <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0>; 996 + <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4>; 1001 997 }; 1002 998 1003 999 pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp { ··· 1005 1003 <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0 0x1d4>, 1006 1004 <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d4>, 1007 1005 <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d4>, 1008 - <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4>, 1009 - <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0>; 1006 + <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4>; 1010 1007 }; 1011 1008 1012 1009 pinctrl_usdhc2_gpio: usdhc2-gpiogrp {
+22 -9
arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql.dtsi
··· 16 16 reg = <0x0 0x40000000 0 0x80000000>; 17 17 }; 18 18 19 - /* identical to buck4_reg, but should never change */ 20 - reg_vcc3v3: regulator-vcc3v3 { 21 - compatible = "regulator-fixed"; 22 - regulator-name = "VCC3V3"; 23 - regulator-min-microvolt = <3300000>; 19 + reg_usdhc2_vqmmc: regulator-usdhc2-vqmmc { 20 + compatible = "regulator-gpio"; 21 + pinctrl-names = "default"; 22 + pinctrl-0 = <&pinctrl_reg_usdhc2_vqmmc>; 23 + regulator-name = "V_SD2"; 24 + regulator-min-microvolt = <1800000>; 24 25 regulator-max-microvolt = <3300000>; 25 - regulator-always-on; 26 + gpios = <&gpio1 4 GPIO_ACTIVE_HIGH>; 27 + states = <1800000 0x1>, 28 + <3300000 0x0>; 29 + vin-supply = <&ldo5_reg>; 30 + status = "disabled"; 26 31 }; 27 32 }; 28 33 ··· 178 173 read-only; 179 174 reg = <0x53>; 180 175 pagesize = <16>; 181 - vcc-supply = <&reg_vcc3v3>; 176 + vcc-supply = <&buck4_reg>; 182 177 }; 183 178 184 179 m24c64: eeprom@57 { 185 180 compatible = "atmel,24c64"; 186 181 reg = <0x57>; 187 182 pagesize = <32>; 188 - vcc-supply = <&reg_vcc3v3>; 183 + vcc-supply = <&buck4_reg>; 189 184 }; 185 + }; 186 + 187 + &usdhc2 { 188 + vqmmc-supply = <&reg_usdhc2_vqmmc>; 190 189 }; 191 190 192 191 &usdhc3 { ··· 202 193 non-removable; 203 194 no-sd; 204 195 no-sdio; 205 - vmmc-supply = <&reg_vcc3v3>; 196 + vmmc-supply = <&buck4_reg>; 206 197 vqmmc-supply = <&buck5_reg>; 207 198 status = "okay"; 208 199 }; ··· 240 231 241 232 pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp { 242 233 fsl,pins = <MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x10>; 234 + }; 235 + 236 + pinctrl_reg_usdhc2_vqmmc: regusdhc2vqmmcgrp { 237 + fsl,pins = <MX8MP_IOMUXC_GPIO1_IO04__GPIO1_IO04 0xc0>; 243 238 }; 244 239 245 240 pinctrl_usdhc3: usdhc3grp {
+5 -5
arch/arm64/boot/dts/freescale/imx95-19x19-evk.dts
··· 80 80 flexcan1_phy: can-phy0 { 81 81 compatible = "nxp,tjr1443"; 82 82 #phy-cells = <0>; 83 - max-bitrate = <1000000>; 83 + max-bitrate = <8000000>; 84 84 enable-gpios = <&i2c6_pcal6416 6 GPIO_ACTIVE_HIGH>; 85 - standby-gpios = <&i2c6_pcal6416 5 GPIO_ACTIVE_HIGH>; 85 + standby-gpios = <&i2c6_pcal6416 5 GPIO_ACTIVE_LOW>; 86 86 }; 87 87 88 88 flexcan2_phy: can-phy1 { 89 89 compatible = "nxp,tjr1443"; 90 90 #phy-cells = <0>; 91 - max-bitrate = <1000000>; 92 - enable-gpios = <&i2c6_pcal6416 4 GPIO_ACTIVE_HIGH>; 93 - standby-gpios = <&i2c6_pcal6416 3 GPIO_ACTIVE_HIGH>; 91 + max-bitrate = <8000000>; 92 + enable-gpios = <&i2c4_gpio_expander_21 4 GPIO_ACTIVE_HIGH>; 93 + standby-gpios = <&i2c4_gpio_expander_21 3 GPIO_ACTIVE_LOW>; 94 94 }; 95 95 96 96 reg_vref_1v8: regulator-1p8v {
+1 -1
arch/arm64/boot/dts/freescale/imx95.dtsi
··· 1843 1843 <GIC_SPI 294 IRQ_TYPE_LEVEL_HIGH>; 1844 1844 clocks = <&scmi_clk IMX95_CLK_VPU>, 1845 1845 <&vpu_blk_ctrl IMX95_CLK_VPUBLK_JPEG_ENC>; 1846 - assigned-clocks = <&vpu_blk_ctrl IMX95_CLK_VPUBLK_JPEG_DEC>; 1846 + assigned-clocks = <&vpu_blk_ctrl IMX95_CLK_VPUBLK_JPEG_ENC>; 1847 1847 assigned-clock-parents = <&scmi_clk IMX95_CLK_VPUJPEG>; 1848 1848 power-domains = <&scmi_devpd IMX95_PD_VPU>; 1849 1849 };
+4 -4
arch/arm64/boot/dts/rockchip/px30-pp1516.dtsi
··· 72 72 }; 73 73 74 74 vcc_cam_avdd: regulator-vcc-cam-avdd { 75 - compatible = "regulator-fixed"; 75 + compatible = "regulator-fixed"; 76 76 regulator-name = "vcc_cam_avdd"; 77 77 gpio = <&gpio3 RK_PC0 GPIO_ACTIVE_LOW>; 78 78 pinctrl-names = "default"; ··· 83 83 }; 84 84 85 85 vcc_cam_dovdd: regulator-vcc-cam-dovdd { 86 - compatible = "regulator-fixed"; 86 + compatible = "regulator-fixed"; 87 87 regulator-name = "vcc_cam_dovdd"; 88 88 gpio = <&gpio3 RK_PC1 GPIO_ACTIVE_LOW>; 89 89 pinctrl-names = "default"; ··· 94 94 }; 95 95 96 96 vcc_cam_dvdd: regulator-vcc-cam-dvdd { 97 - compatible = "regulator-fixed"; 97 + compatible = "regulator-fixed"; 98 98 regulator-name = "vcc_cam_dvdd"; 99 99 gpio = <&gpio3 RK_PC5 GPIO_ACTIVE_HIGH>; 100 100 enable-active-high; ··· 106 106 }; 107 107 108 108 vcc_lens_afvdd: regulator-vcc-lens-afvdd { 109 - compatible = "regulator-fixed"; 109 + compatible = "regulator-fixed"; 110 110 regulator-name = "vcc_lens_afvdd"; 111 111 gpio = <&gpio3 RK_PB2 GPIO_ACTIVE_LOW>; 112 112 pinctrl-names = "default";
+3 -3
arch/arm64/boot/dts/rockchip/px30-ringneck-haikou-video-demo.dtso
··· 26 26 }; 27 27 28 28 cam_afvdd_2v8: regulator-cam-afvdd-2v8 { 29 - compatible = "regulator-fixed"; 29 + compatible = "regulator-fixed"; 30 30 gpio = <&pca9670 2 GPIO_ACTIVE_LOW>; 31 31 regulator-max-microvolt = <2800000>; 32 32 regulator-min-microvolt = <2800000>; ··· 35 35 }; 36 36 37 37 cam_avdd_2v8: regulator-cam-avdd-2v8 { 38 - compatible = "regulator-fixed"; 38 + compatible = "regulator-fixed"; 39 39 gpio = <&pca9670 4 GPIO_ACTIVE_LOW>; 40 40 regulator-max-microvolt = <2800000>; 41 41 regulator-min-microvolt = <2800000>; ··· 44 44 }; 45 45 46 46 cam_dovdd_1v8: regulator-cam-dovdd-1v8 { 47 - compatible = "regulator-fixed"; 47 + compatible = "regulator-fixed"; 48 48 gpio = <&pca9670 3 GPIO_ACTIVE_LOW>; 49 49 regulator-max-microvolt = <1800000>; 50 50 regulator-min-microvolt = <1800000>;
+1 -1
arch/arm64/boot/dts/rockchip/rk3308-sakurapi-rk3308b.dts
··· 260 260 status = "okay"; 261 261 }; 262 262 263 - &usb_host_ohci{ 263 + &usb_host_ohci { 264 264 status = "okay"; 265 265 };
+1 -1
arch/arm64/boot/dts/rockchip/rk3368-lba3368.dts
··· 609 609 610 610 bluetooth { 611 611 compatible = "brcm,bcm4345c5"; 612 - interrupts-extended = <&gpio3 RK_PA7 GPIO_ACTIVE_HIGH>; 612 + interrupts-extended = <&gpio3 RK_PA7 IRQ_TYPE_LEVEL_HIGH>; 613 613 interrupt-names = "host-wakeup"; 614 614 clocks = <&rk808 RK808_CLKOUT1>; 615 615 clock-names = "lpo";
+1
arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
··· 959 959 reg = <0>; 960 960 m25p,fast-read; 961 961 spi-max-frequency = <10000000>; 962 + vcc-supply = <&vcc_3v0>; 962 963 }; 963 964 }; 964 965
+1
arch/arm64/boot/dts/rockchip/rk3399-pinephone-pro.dts
··· 754 754 compatible = "jedec,spi-nor"; 755 755 reg = <0>; 756 756 spi-max-frequency = <10000000>; 757 + vcc-supply = <&vcc_1v8>; 757 758 }; 758 759 }; 759 760
+3 -3
arch/arm64/boot/dts/rockchip/rk3399-puma-haikou-video-demo.dtso
··· 26 26 }; 27 27 28 28 cam_afvdd_2v8: regulator-cam-afvdd-2v8 { 29 - compatible = "regulator-fixed"; 29 + compatible = "regulator-fixed"; 30 30 gpio = <&pca9670 2 GPIO_ACTIVE_LOW>; 31 31 regulator-max-microvolt = <2800000>; 32 32 regulator-min-microvolt = <2800000>; ··· 35 35 }; 36 36 37 37 cam_avdd_2v8: regulator-cam-avdd-2v8 { 38 - compatible = "regulator-fixed"; 38 + compatible = "regulator-fixed"; 39 39 gpio = <&pca9670 4 GPIO_ACTIVE_LOW>; 40 40 regulator-max-microvolt = <2800000>; 41 41 regulator-min-microvolt = <2800000>; ··· 44 44 }; 45 45 46 46 cam_dovdd_1v8: regulator-cam-dovdd-1v8 { 47 - compatible = "regulator-fixed"; 47 + compatible = "regulator-fixed"; 48 48 gpio = <&pca9670 3 GPIO_ACTIVE_LOW>; 49 49 regulator-max-microvolt = <1800000>; 50 50 regulator-min-microvolt = <1800000>;
+2 -2
arch/arm64/boot/dts/rockchip/rk3566-bigtreetech-cb2.dtsi
··· 53 53 gpios = <&gpio4 RK_PA1 GPIO_ACTIVE_LOW>; 54 54 linux,default-trigger = "default-on"; 55 55 pinctrl-names = "default"; 56 - pinctrl-0 =<&blue_led>; 56 + pinctrl-0 = <&blue_led>; 57 57 }; 58 58 59 59 led-1 { ··· 62 62 gpios = <&gpio0 RK_PB7 GPIO_ACTIVE_LOW>; 63 63 linux,default-trigger = "heartbeat"; 64 64 pinctrl-names = "default"; 65 - pinctrl-0 =<&heartbeat_led>; 65 + pinctrl-0 = <&heartbeat_led>; 66 66 }; 67 67 }; 68 68
+1 -4
arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts
··· 302 302 &eth1m0_tx_bus2 303 303 &eth1m0_rx_bus2 304 304 &eth1m0_rgmii_clk 305 - &eth1m0_rgmii_bus 306 - &ethm0_clk1_25m_out>; 305 + &eth1m0_rgmii_bus>; 307 306 status = "okay"; 308 307 }; 309 308 ··· 783 784 rgmii_phy0: phy@1 { 784 785 compatible = "ethernet-phy-ieee802.3-c22"; 785 786 reg = <0x1>; 786 - clocks = <&cru REFCLKO25M_GMAC0_OUT>; 787 787 pinctrl-names = "default"; 788 788 pinctrl-0 = <&gmac0_rst>; 789 789 reset-assert-us = <20000>; ··· 795 797 rgmii_phy1: phy@1 { 796 798 compatible = "ethernet-phy-ieee802.3-c22"; 797 799 reg = <0x1>; 798 - clocks = <&cru REFCLKO25M_GMAC1_OUT>; 799 800 pinctrl-names = "default"; 800 801 pinctrl-0 = <&gmac1_rst>; 801 802 reset-assert-us = <20000>;
+1
arch/arm64/boot/dts/rockchip/rk3582-radxa-e52c.dts
··· 250 250 compatible = "belling,bl24c16a", "atmel,24c16"; 251 251 reg = <0x50>; 252 252 pagesize = <16>; 253 + read-only; 253 254 vcc-supply = <&vcc_3v3_pmu>; 254 255 }; 255 256 };
+1 -1
arch/arm64/boot/dts/rockchip/rk3588-orangepi-5-plus.dts
··· 77 77 pinctrl-names = "default"; 78 78 pinctrl-0 = <&hp_detect>; 79 79 simple-audio-card,aux-devs = <&speaker_amp>, <&headphone_amp>; 80 - simple-audio-card,hp-det-gpios = <&gpio1 RK_PD3 GPIO_ACTIVE_LOW>; 80 + simple-audio-card,hp-det-gpios = <&gpio1 RK_PD3 GPIO_ACTIVE_HIGH>; 81 81 simple-audio-card,widgets = 82 82 "Microphone", "Onboard Microphone", 83 83 "Microphone", "Microphone Jack",
+2
arch/arm64/boot/dts/rockchip/rk3588-orangepi-5.dtsi
··· 365 365 max-frequency = <200000000>; 366 366 mmc-hs400-1_8v; 367 367 mmc-hs400-enhanced-strobe; 368 + vmmc-supply = <&vcc_3v3_s3>; 369 + vqmmc-supply = <&vcc_1v8_s3>; 368 370 status = "okay"; 369 371 }; 370 372
+35
arch/arm64/boot/dts/rockchip/rk3588-rock-5t.dts
··· 68 68 status = "okay"; 69 69 }; 70 70 71 + &pcie30phy { 72 + data-lanes = <1 1 2 2>; 73 + }; 74 + 75 + &pcie3x2 { 76 + pinctrl-names = "default"; 77 + pinctrl-0 = <&pcie3x2_rst>; 78 + reset-gpios = <&gpio4 RK_PB0 GPIO_ACTIVE_HIGH>; 79 + vpcie3v3-supply = <&vcc3v3_pcie30>; 80 + status = "okay"; 81 + }; 82 + 83 + &pcie3x4 { 84 + num-lanes = <2>; 85 + }; 86 + 71 87 &pinctrl { 72 88 hdmirx { 73 89 hdmirx_hpd: hdmirx-5v-detection { ··· 106 90 }; 107 91 }; 108 92 93 + pcie3 { 94 + pcie3x2_rst: pcie3x2-rst { 95 + rockchip,pins = <4 RK_PB0 RK_FUNC_GPIO &pcfg_pull_none>; 96 + }; 97 + }; 98 + 109 99 sound { 110 100 hp_detect: hp-detect { 111 101 rockchip,pins = <4 RK_PC3 RK_FUNC_GPIO &pcfg_pull_none>; 102 + }; 103 + }; 104 + 105 + usb { 106 + vcc5v0_host_en: vcc5v0-host-en { 107 + rockchip,pins = <1 RK_PA1 RK_FUNC_GPIO &pcfg_pull_none>; 112 108 }; 113 109 }; 114 110 }; ··· 130 102 pinctrl-names = "default"; 131 103 pinctrl-0 = <&pcie2_0_vcc3v3_en>; 132 104 status = "okay"; 105 + }; 106 + 107 + &vcc5v0_host { 108 + enable-active-high; 109 + gpio = <&gpio1 RK_PA1 GPIO_ACTIVE_HIGH>; 110 + pinctrl-names = "default"; 111 + pinctrl-0 = <&vcc5v0_host_en>; 133 112 };
+2 -2
arch/arm64/boot/dts/rockchip/rk3588j.dtsi
··· 28 28 compatible = "operating-points-v2"; 29 29 opp-shared; 30 30 31 - opp-1200000000{ 31 + opp-1200000000 { 32 32 opp-hz = /bits/ 64 <1200000000>; 33 33 opp-microvolt = <750000 750000 950000>; 34 34 clock-latency-ns = <40000>; ··· 49 49 compatible = "operating-points-v2"; 50 50 opp-shared; 51 51 52 - opp-1200000000{ 52 + opp-1200000000 { 53 53 opp-hz = /bits/ 64 <1200000000>; 54 54 opp-microvolt = <750000 750000 950000>; 55 55 clock-latency-ns = <40000>;
+2 -2
arch/arm64/boot/dts/rockchip/rk3588s-roc-pc.dts
··· 320 320 &i2c3 { 321 321 status = "okay"; 322 322 323 - es8388: audio-codec@10 { 323 + es8388: audio-codec@11 { 324 324 compatible = "everest,es8388", "everest,es8328"; 325 - reg = <0x10>; 325 + reg = <0x11>; 326 326 clocks = <&cru I2S1_8CH_MCLKOUT>; 327 327 AVDD-supply = <&vcc_3v3_s0>; 328 328 DVDD-supply = <&vcc_1v8_s0>;
+1
arch/arm64/include/asm/module.h
··· 19 19 20 20 /* for CONFIG_DYNAMIC_FTRACE */ 21 21 struct plt_entry *ftrace_trampolines; 22 + struct plt_entry *init_ftrace_trampolines; 22 23 }; 23 24 24 25 u64 module_emit_plt_entry(struct module *mod, Elf64_Shdr *sechdrs,
+1
arch/arm64/include/asm/module.lds.h
··· 2 2 .plt 0 : { BYTE(0) } 3 3 .init.plt 0 : { BYTE(0) } 4 4 .text.ftrace_trampoline 0 : { BYTE(0) } 5 + .init.text.ftrace_trampoline 0 : { BYTE(0) } 5 6 6 7 #ifdef CONFIG_KASAN_SW_TAGS 7 8 /*
+5
arch/arm64/include/uapi/asm/bitsperlong.h
··· 17 17 #ifndef __ASM_BITSPERLONG_H 18 18 #define __ASM_BITSPERLONG_H 19 19 20 + #if defined(__KERNEL__) && !defined(__aarch64__) 21 + /* Used by the compat vDSO */ 22 + #define __BITS_PER_LONG 32 23 + #else 20 24 #define __BITS_PER_LONG 64 25 + #endif 21 26 22 27 #include <asm-generic/bitsperlong.h> 23 28
+10 -3
arch/arm64/kernel/ftrace.c
··· 258 258 return ftrace_modify_code(pc, 0, new, false); 259 259 } 260 260 261 - static struct plt_entry *get_ftrace_plt(struct module *mod) 261 + static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr) 262 262 { 263 263 #ifdef CONFIG_MODULES 264 - struct plt_entry *plt = mod->arch.ftrace_trampolines; 264 + struct plt_entry *plt = NULL; 265 + 266 + if (within_module_mem_type(addr, mod, MOD_INIT_TEXT)) 267 + plt = mod->arch.init_ftrace_trampolines; 268 + else if (within_module_mem_type(addr, mod, MOD_TEXT)) 269 + plt = mod->arch.ftrace_trampolines; 270 + else 271 + return NULL; 265 272 266 273 return &plt[FTRACE_PLT_IDX]; 267 274 #else ··· 339 332 if (WARN_ON(!mod)) 340 333 return false; 341 334 342 - plt = get_ftrace_plt(mod); 335 + plt = get_ftrace_plt(mod, pc); 343 336 if (!plt) { 344 337 pr_err("ftrace: no module PLT for %ps\n", (void *)*addr); 345 338 return false;
+11 -1
arch/arm64/kernel/module-plts.c
··· 283 283 unsigned long core_plts = 0; 284 284 unsigned long init_plts = 0; 285 285 Elf64_Sym *syms = NULL; 286 - Elf_Shdr *pltsec, *tramp = NULL; 286 + Elf_Shdr *pltsec, *tramp = NULL, *init_tramp = NULL; 287 287 int i; 288 288 289 289 /* ··· 298 298 else if (!strcmp(secstrings + sechdrs[i].sh_name, 299 299 ".text.ftrace_trampoline")) 300 300 tramp = sechdrs + i; 301 + else if (!strcmp(secstrings + sechdrs[i].sh_name, 302 + ".init.text.ftrace_trampoline")) 303 + init_tramp = sechdrs + i; 301 304 else if (sechdrs[i].sh_type == SHT_SYMTAB) 302 305 syms = (Elf64_Sym *)sechdrs[i].sh_addr; 303 306 } ··· 364 361 tramp->sh_flags = SHF_EXECINSTR | SHF_ALLOC; 365 362 tramp->sh_addralign = __alignof__(struct plt_entry); 366 363 tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry); 364 + } 365 + 366 + if (init_tramp) { 367 + init_tramp->sh_type = SHT_NOBITS; 368 + init_tramp->sh_flags = SHF_EXECINSTR | SHF_ALLOC; 369 + init_tramp->sh_addralign = __alignof__(struct plt_entry); 370 + init_tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry); 367 371 } 368 372 369 373 return 0;
+11
arch/arm64/kernel/module.c
··· 466 466 __init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR); 467 467 468 468 mod->arch.ftrace_trampolines = plts; 469 + 470 + s = find_section(hdr, sechdrs, ".init.text.ftrace_trampoline"); 471 + if (!s) 472 + return -ENOEXEC; 473 + 474 + plts = (void *)s->sh_addr; 475 + 476 + __init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR); 477 + 478 + mod->arch.init_ftrace_trampolines = plts; 479 + 469 480 #endif 470 481 return 0; 471 482 }
+1
arch/mips/configs/mtx1_defconfig
··· 273 273 CONFIG_ULI526X=m 274 274 CONFIG_PCMCIA_XIRCOM=m 275 275 CONFIG_DL2K=m 276 + CONFIG_SUNDANCE=m 276 277 CONFIG_PCMCIA_FMVJ18X=m 277 278 CONFIG_E100=m 278 279 CONFIG_E1000=m
+1
arch/powerpc/configs/ppc6xx_defconfig
··· 433 433 CONFIG_ULI526X=m 434 434 CONFIG_PCMCIA_XIRCOM=m 435 435 CONFIG_DL2K=m 436 + CONFIG_SUNDANCE=m 436 437 CONFIG_S2IO=m 437 438 CONFIG_FEC_MPC52xx=m 438 439 CONFIG_GIANFAR=m
+1 -1
arch/riscv/Kconfig
··· 65 65 select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE 66 66 select ARCH_SUPPORTS_HUGETLBFS if MMU 67 67 # LLD >= 14: https://github.com/llvm/llvm-project/issues/50505 68 - select ARCH_SUPPORTS_LTO_CLANG if LLD_VERSION >= 140000 68 + select ARCH_SUPPORTS_LTO_CLANG if LLD_VERSION >= 140000 && CMODEL_MEDANY 69 69 select ARCH_SUPPORTS_LTO_CLANG_THIN if LLD_VERSION >= 140000 70 70 select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS if 64BIT && MMU 71 71 select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU
+1 -1
arch/riscv/include/asm/asm.h
··· 91 91 #endif 92 92 93 93 .macro asm_per_cpu dst sym tmp 94 - REG_L \tmp, TASK_TI_CPU_NUM(tp) 94 + lw \tmp, TASK_TI_CPU_NUM(tp) 95 95 slli \tmp, \tmp, PER_CPU_OFFSET_SHIFT 96 96 la \dst, __per_cpu_offset 97 97 add \dst, \dst, \tmp
+4
arch/riscv/include/asm/kvm_host.h
··· 21 21 #include <asm/kvm_vcpu_fp.h> 22 22 #include <asm/kvm_vcpu_insn.h> 23 23 #include <asm/kvm_vcpu_sbi.h> 24 + #include <asm/kvm_vcpu_sbi_fwft.h> 24 25 #include <asm/kvm_vcpu_timer.h> 25 26 #include <asm/kvm_vcpu_pmu.h> 26 27 ··· 263 262 264 263 /* Performance monitoring context */ 265 264 struct kvm_pmu pmu_context; 265 + 266 + /* Firmware feature SBI extension context */ 267 + struct kvm_sbi_fwft fwft_context; 266 268 267 269 /* 'static' configurations which are set only once */ 268 270 struct kvm_vcpu_config cfg;
+3
arch/riscv/include/asm/kvm_vcpu_pmu.h
··· 98 98 int kvm_riscv_vcpu_pmu_snapshot_set_shmem(struct kvm_vcpu *vcpu, unsigned long saddr_low, 99 99 unsigned long saddr_high, unsigned long flags, 100 100 struct kvm_vcpu_sbi_return *retdata); 101 + int kvm_riscv_vcpu_pmu_event_info(struct kvm_vcpu *vcpu, unsigned long saddr_low, 102 + unsigned long saddr_high, unsigned long num_events, 103 + unsigned long flags, struct kvm_vcpu_sbi_return *retdata); 101 104 void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu); 102 105 void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu); 103 106
+14 -11
arch/riscv/include/asm/kvm_vcpu_sbi.h
··· 11 11 12 12 #define KVM_SBI_IMPID 3 13 13 14 - #define KVM_SBI_VERSION_MAJOR 2 14 + #define KVM_SBI_VERSION_MAJOR 3 15 15 #define KVM_SBI_VERSION_MINOR 0 16 16 17 17 enum kvm_riscv_sbi_ext_status { ··· 59 59 void (*deinit)(struct kvm_vcpu *vcpu); 60 60 61 61 void (*reset)(struct kvm_vcpu *vcpu); 62 + 63 + unsigned long state_reg_subtype; 64 + unsigned long (*get_state_reg_count)(struct kvm_vcpu *vcpu); 65 + int (*get_state_reg_id)(struct kvm_vcpu *vcpu, int index, u64 *reg_id); 66 + int (*get_state_reg)(struct kvm_vcpu *vcpu, unsigned long reg_num, 67 + unsigned long reg_size, void *reg_val); 68 + int (*set_state_reg)(struct kvm_vcpu *vcpu, unsigned long reg_num, 69 + unsigned long reg_size, const void *reg_val); 62 70 }; 63 71 64 72 void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run); ··· 77 69 unsigned long pc, unsigned long a1); 78 70 void kvm_riscv_vcpu_sbi_load_reset_state(struct kvm_vcpu *vcpu); 79 71 int kvm_riscv_vcpu_sbi_return(struct kvm_vcpu *vcpu, struct kvm_run *run); 72 + int kvm_riscv_vcpu_reg_indices_sbi_ext(struct kvm_vcpu *vcpu, u64 __user *uindices); 80 73 int kvm_riscv_vcpu_set_reg_sbi_ext(struct kvm_vcpu *vcpu, 81 74 const struct kvm_one_reg *reg); 82 75 int kvm_riscv_vcpu_get_reg_sbi_ext(struct kvm_vcpu *vcpu, 83 76 const struct kvm_one_reg *reg); 84 - int kvm_riscv_vcpu_set_reg_sbi(struct kvm_vcpu *vcpu, 85 - const struct kvm_one_reg *reg); 86 - int kvm_riscv_vcpu_get_reg_sbi(struct kvm_vcpu *vcpu, 87 - const struct kvm_one_reg *reg); 77 + int kvm_riscv_vcpu_reg_indices_sbi(struct kvm_vcpu *vcpu, u64 __user *uindices); 78 + int kvm_riscv_vcpu_set_reg_sbi(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); 79 + int kvm_riscv_vcpu_get_reg_sbi(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); 88 80 const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext( 89 81 struct kvm_vcpu *vcpu, unsigned long extid); 90 - bool riscv_vcpu_supports_sbi_ext(struct kvm_vcpu *vcpu, int idx); 91 82 int 
kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run); 92 83 void kvm_riscv_vcpu_sbi_init(struct kvm_vcpu *vcpu); 93 84 void kvm_riscv_vcpu_sbi_deinit(struct kvm_vcpu *vcpu); 94 85 void kvm_riscv_vcpu_sbi_reset(struct kvm_vcpu *vcpu); 95 - 96 - int kvm_riscv_vcpu_get_reg_sbi_sta(struct kvm_vcpu *vcpu, unsigned long reg_num, 97 - unsigned long *reg_val); 98 - int kvm_riscv_vcpu_set_reg_sbi_sta(struct kvm_vcpu *vcpu, unsigned long reg_num, 99 - unsigned long reg_val); 100 86 101 87 #ifdef CONFIG_RISCV_SBI_V01 102 88 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01; ··· 104 102 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_dbcn; 105 103 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_susp; 106 104 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_sta; 105 + extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_fwft; 107 106 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_experimental; 108 107 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_vendor; 109 108
+34
arch/riscv/include/asm/kvm_vcpu_sbi_fwft.h
···
1 + /* SPDX-License-Identifier: GPL-2.0-only */
2 + /*
3 + * Copyright (c) 2025 Rivos Inc.
4 + *
5 + * Authors:
6 + * Clément Léger <cleger@rivosinc.com>
7 + */
8 +
9 + #ifndef __KVM_VCPU_RISCV_FWFT_H
10 + #define __KVM_VCPU_RISCV_FWFT_H
11 +
12 + #include <asm/sbi.h>
13 +
14 + struct kvm_sbi_fwft_feature;
15 +
16 + struct kvm_sbi_fwft_config {
17 + const struct kvm_sbi_fwft_feature *feature;
18 + bool supported;
19 + bool enabled;
20 + unsigned long flags;
21 + };
22 +
23 + /* FWFT data structure per vcpu */
24 + struct kvm_sbi_fwft {
25 + struct kvm_sbi_fwft_config *configs;
26 + #ifndef CONFIG_32BIT
27 + bool have_vs_pmlen_7;
28 + bool have_vs_pmlen_16;
29 + #endif
30 + };
31 +
32 + #define vcpu_to_fwft(vcpu) (&(vcpu)->arch.fwft_context)
33 +
34 + #endif /* !__KVM_VCPU_RISCV_FWFT_H */
+13
arch/riscv/include/asm/sbi.h
···
136 136 SBI_EXT_PMU_COUNTER_FW_READ,
137 137 SBI_EXT_PMU_COUNTER_FW_READ_HI,
138 138 SBI_EXT_PMU_SNAPSHOT_SET_SHMEM,
139 + SBI_EXT_PMU_EVENT_GET_INFO,
139 140 };
140 141
141 142 union sbi_pmu_ctr_info {
···
160 159 u64 reserved[447];
161 160 };
162 161
162 + struct riscv_pmu_event_info {
163 + u32 event_idx;
164 + u32 output;
165 + u64 event_data;
166 + };
167 +
168 + #define RISCV_PMU_EVENT_INFO_OUTPUT_MASK 0x01
169 +
163 170 #define RISCV_PMU_RAW_EVENT_MASK GENMASK_ULL(47, 0)
164 171 #define RISCV_PMU_PLAT_FW_EVENT_MASK GENMASK_ULL(61, 0)
172 + /* SBI v3.0 allows extended hpmeventX width value */
173 + #define RISCV_PMU_RAW_EVENT_V2_MASK GENMASK_ULL(55, 0)
165 174 #define RISCV_PMU_RAW_EVENT_IDX 0x20000
175 + #define RISCV_PMU_RAW_EVENT_V2_IDX 0x30000
166 176 #define RISCV_PLAT_FW_EVENT 0xFFFF
167 177
168 178 /** General pmu event codes specified in SBI PMU extension */
···
231 219 SBI_PMU_EVENT_TYPE_HW = 0x0,
232 220 SBI_PMU_EVENT_TYPE_CACHE = 0x1,
233 221 SBI_PMU_EVENT_TYPE_RAW = 0x2,
222 + SBI_PMU_EVENT_TYPE_RAW_V2 = 0x3,
234 223 SBI_PMU_EVENT_TYPE_FW = 0xf,
235 224 };
236 225
+4 -4
arch/riscv/include/asm/uaccess.h
···
209 209 err = 0; \
210 210 break; \
211 211 __gu_failed: \
212 - x = 0; \
212 + x = (__typeof__(x))0; \
213 213 err = -EFAULT; \
214 214 } while (0)
215 215
···
311 311 do { \
312 312 if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && \
313 313 !IS_ALIGNED((uintptr_t)__gu_ptr, sizeof(*__gu_ptr))) { \
314 - __inttype(x) ___val = (__inttype(x))x; \
314 + __typeof__(*(__gu_ptr)) ___val = (x); \
315 315 if (__asm_copy_to_user_sum_enabled(__gu_ptr, &(___val), sizeof(*__gu_ptr))) \
316 316 goto label; \
317 317 break; \
···
438 438 }
439 439
440 440 #define __get_kernel_nofault(dst, src, type, err_label) \
441 - __get_user_nocheck(*((type *)(dst)), (type *)(src), err_label)
441 + __get_user_nocheck(*((type *)(dst)), (__force __user type *)(src), err_label)
442 442
443 443 #define __put_kernel_nofault(dst, src, type, err_label) \
444 - __put_user_nocheck(*((type *)(src)), (type *)(dst), err_label)
444 + __put_user_nocheck(*((type *)(src)), (__force __user type *)(dst), err_label)
445 445
446 446 static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
447 447 {
+21
arch/riscv/include/uapi/asm/kvm.h
···
56 56 unsigned long mimpid;
57 57 unsigned long zicboz_block_size;
58 58 unsigned long satp_mode;
59 + unsigned long zicbop_block_size;
59 60 };
60 61
61 62 /* CORE registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
···
186 185 KVM_RISCV_ISA_EXT_ZICCRSE,
187 186 KVM_RISCV_ISA_EXT_ZAAMO,
188 187 KVM_RISCV_ISA_EXT_ZALRSC,
188 + KVM_RISCV_ISA_EXT_ZICBOP,
189 + KVM_RISCV_ISA_EXT_ZFBFMIN,
190 + KVM_RISCV_ISA_EXT_ZVFBFMIN,
191 + KVM_RISCV_ISA_EXT_ZVFBFWMA,
189 192 KVM_RISCV_ISA_EXT_MAX,
190 193 };
191 194
···
210 205 KVM_RISCV_SBI_EXT_DBCN,
211 206 KVM_RISCV_SBI_EXT_STA,
212 207 KVM_RISCV_SBI_EXT_SUSP,
208 + KVM_RISCV_SBI_EXT_FWFT,
213 209 KVM_RISCV_SBI_EXT_MAX,
214 210 };
215 211
···
218 212 struct kvm_riscv_sbi_sta {
219 213 unsigned long shmem_lo;
220 214 unsigned long shmem_hi;
215 + };
216 +
217 + struct kvm_riscv_sbi_fwft_feature {
218 + unsigned long enable;
219 + unsigned long flags;
220 + unsigned long value;
221 + };
222 +
223 + /* SBI FWFT extension registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
224 + struct kvm_riscv_sbi_fwft {
225 + struct kvm_riscv_sbi_fwft_feature misaligned_deleg;
226 + struct kvm_riscv_sbi_fwft_feature pointer_masking;
221 227 };
222 228
223 229 /* Possible states for kvm_riscv_timer */
···
315 297 #define KVM_REG_RISCV_SBI_STA (0x0 << KVM_REG_RISCV_SUBTYPE_SHIFT)
316 298 #define KVM_REG_RISCV_SBI_STA_REG(name) \
317 299 (offsetof(struct kvm_riscv_sbi_sta, name) / sizeof(unsigned long))
300 + #define KVM_REG_RISCV_SBI_FWFT (0x1 << KVM_REG_RISCV_SUBTYPE_SHIFT)
301 + #define KVM_REG_RISCV_SBI_FWFT_REG(name) \
302 + (offsetof(struct kvm_riscv_sbi_fwft, name) / sizeof(unsigned long))
318 303
319 304 /* Device Control API: RISC-V AIA */
320 305 #define KVM_DEV_RISCV_APLIC_ALIGN 0x1000
+1 -1
arch/riscv/kernel/entry.S
···
46 46 * a0 = &new_vmalloc[BIT_WORD(cpu)]
47 47 * a1 = BIT_MASK(cpu)
48 48 */
49 - REG_L a2, TASK_TI_CPU(tp)
49 + lw a2, TASK_TI_CPU(tp)
50 50 /*
51 51 * Compute the new_vmalloc element position:
52 52 * (cpu / 64) * 8 = (cpu >> 6) << 3
+2 -2
arch/riscv/kernel/kexec_elf.c
···
28 28 int i;
29 29 int ret = 0;
30 30 size_t size;
31 - struct kexec_buf kbuf;
31 + struct kexec_buf kbuf = {};
32 32 const struct elf_phdr *phdr;
33 33
34 34 kbuf.image = image;
···
66 66 {
67 67 int i;
68 68 int ret;
69 - struct kexec_buf kbuf;
69 + struct kexec_buf kbuf = {};
70 70 const struct elf_phdr *phdr;
71 71 unsigned long lowest_paddr = ULONG_MAX;
72 72 unsigned long lowest_vaddr = ULONG_MAX;
+1 -1
arch/riscv/kernel/kexec_image.c
···
41 41 struct riscv_image_header *h;
42 42 u64 flags;
43 43 bool be_image, be_kernel;
44 - struct kexec_buf kbuf;
44 + struct kexec_buf kbuf = {};
45 45 int ret;
46 46
47 47 /* Check Image header */
+1 -1
arch/riscv/kernel/machine_kexec_file.c
···
261 261 int ret;
262 262 void *fdt;
263 263 unsigned long initrd_pbase = 0UL;
264 - struct kexec_buf kbuf;
264 + struct kexec_buf kbuf = {};
265 265 char *modified_cmdline = NULL;
266 266
267 267 kbuf.image = image;
+1
arch/riscv/kvm/Makefile
···
27 27 kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o
28 28 kvm-y += vcpu_sbi.o
29 29 kvm-y += vcpu_sbi_base.o
30 + kvm-y += vcpu_sbi_fwft.o
30 31 kvm-y += vcpu_sbi_hsm.o
31 32 kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_sbi_pmu.o
32 33 kvm-y += vcpu_sbi_replace.o
+24 -3
arch/riscv/kvm/gstage.c
···
321 321 if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV57X4) {
322 322 kvm_riscv_gstage_mode = HGATP_MODE_SV57X4;
323 323 kvm_riscv_gstage_pgd_levels = 5;
324 - goto skip_sv48x4_test;
324 + goto done;
325 325 }
326 326
327 327 /* Try Sv48x4 G-stage mode */
···
329 329 if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV48X4) {
330 330 kvm_riscv_gstage_mode = HGATP_MODE_SV48X4;
331 331 kvm_riscv_gstage_pgd_levels = 4;
332 + goto done;
332 333 }
333 - skip_sv48x4_test:
334 334
335 + /* Try Sv39x4 G-stage mode */
336 + csr_write(CSR_HGATP, HGATP_MODE_SV39X4 << HGATP_MODE_SHIFT);
337 + if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV39X4) {
338 + kvm_riscv_gstage_mode = HGATP_MODE_SV39X4;
339 + kvm_riscv_gstage_pgd_levels = 3;
340 + goto done;
341 + }
342 + #else /* CONFIG_32BIT */
343 + /* Try Sv32x4 G-stage mode */
344 + csr_write(CSR_HGATP, HGATP_MODE_SV32X4 << HGATP_MODE_SHIFT);
345 + if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV32X4) {
346 + kvm_riscv_gstage_mode = HGATP_MODE_SV32X4;
347 + kvm_riscv_gstage_pgd_levels = 2;
348 + goto done;
349 + }
350 + #endif
351 +
352 + /* KVM depends on !HGATP_MODE_OFF */
353 + kvm_riscv_gstage_mode = HGATP_MODE_OFF;
354 + kvm_riscv_gstage_pgd_levels = 0;
355 +
356 + done:
335 357 csr_write(CSR_HGATP, 0);
336 358 kvm_riscv_local_hfence_gvma_all();
337 - #endif
338 359 }
+17 -16
arch/riscv/kvm/main.c
···
93 93 return rc;
94 94
95 95 kvm_riscv_gstage_mode_detect();
96 + switch (kvm_riscv_gstage_mode) {
97 + case HGATP_MODE_SV32X4:
98 + str = "Sv32x4";
99 + break;
100 + case HGATP_MODE_SV39X4:
101 + str = "Sv39x4";
102 + break;
103 + case HGATP_MODE_SV48X4:
104 + str = "Sv48x4";
105 + break;
106 + case HGATP_MODE_SV57X4:
107 + str = "Sv57x4";
108 + break;
109 + default:
110 + kvm_riscv_nacl_exit();
111 + return -ENODEV;
112 + }
96 113
97 114 kvm_riscv_gstage_vmid_detect();
98 115
···
152 135 (rc) ? slist : "no features");
153 136 }
154 137
155 - switch (kvm_riscv_gstage_mode) {
156 - case HGATP_MODE_SV32X4:
157 - str = "Sv32x4";
158 - break;
159 - case HGATP_MODE_SV39X4:
160 - str = "Sv39x4";
161 - break;
162 - case HGATP_MODE_SV48X4:
163 - str = "Sv48x4";
164 - break;
165 - case HGATP_MODE_SV57X4:
166 - str = "Sv57x4";
167 - break;
168 - default:
169 - return -ENODEV;
170 - }
171 138 kvm_info("using %s G-stage page table format\n", str);
172 139
173 140 kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
+2 -1
arch/riscv/kvm/vcpu.c
···
133 133
134 134 /* Mark this VCPU never ran */
135 135 vcpu->arch.ran_atleast_once = false;
136 +
137 + vcpu->arch.cfg.hedeleg = KVM_HEDELEG_DEFAULT;
136 138 vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
137 139 bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
138 140
···
572 570 cfg->hstateen0 |= SMSTATEEN0_SSTATEEN0;
573 571 }
574 572
575 - cfg->hedeleg = KVM_HEDELEG_DEFAULT;
576 573 if (vcpu->guest_debug)
577 574 cfg->hedeleg &= ~BIT(EXC_BREAKPOINT);
578 575 }
+32 -63
arch/riscv/kvm/vcpu_onereg.c
···
65 65 KVM_ISA_EXT_ARR(ZCF),
66 66 KVM_ISA_EXT_ARR(ZCMOP),
67 67 KVM_ISA_EXT_ARR(ZFA),
68 + KVM_ISA_EXT_ARR(ZFBFMIN),
68 69 KVM_ISA_EXT_ARR(ZFH),
69 70 KVM_ISA_EXT_ARR(ZFHMIN),
70 71 KVM_ISA_EXT_ARR(ZICBOM),
72 + KVM_ISA_EXT_ARR(ZICBOP),
71 73 KVM_ISA_EXT_ARR(ZICBOZ),
72 74 KVM_ISA_EXT_ARR(ZICCRSE),
73 75 KVM_ISA_EXT_ARR(ZICNTR),
···
90 88 KVM_ISA_EXT_ARR(ZTSO),
91 89 KVM_ISA_EXT_ARR(ZVBB),
92 90 KVM_ISA_EXT_ARR(ZVBC),
91 + KVM_ISA_EXT_ARR(ZVFBFMIN),
92 + KVM_ISA_EXT_ARR(ZVFBFWMA),
93 93 KVM_ISA_EXT_ARR(ZVFH),
94 94 KVM_ISA_EXT_ARR(ZVFHMIN),
95 95 KVM_ISA_EXT_ARR(ZVKB),
···
177 173 case KVM_RISCV_ISA_EXT_C:
178 174 case KVM_RISCV_ISA_EXT_I:
179 175 case KVM_RISCV_ISA_EXT_M:
180 - case KVM_RISCV_ISA_EXT_SMNPM:
181 176 /* There is not architectural config bit to disable sscofpmf completely */
182 177 case KVM_RISCV_ISA_EXT_SSCOFPMF:
183 178 case KVM_RISCV_ISA_EXT_SSNPM:
···
202 199 case KVM_RISCV_ISA_EXT_ZCF:
203 200 case KVM_RISCV_ISA_EXT_ZCMOP:
204 201 case KVM_RISCV_ISA_EXT_ZFA:
202 + case KVM_RISCV_ISA_EXT_ZFBFMIN:
205 203 case KVM_RISCV_ISA_EXT_ZFH:
206 204 case KVM_RISCV_ISA_EXT_ZFHMIN:
205 + case KVM_RISCV_ISA_EXT_ZICBOP:
207 206 case KVM_RISCV_ISA_EXT_ZICCRSE:
208 207 case KVM_RISCV_ISA_EXT_ZICNTR:
209 208 case KVM_RISCV_ISA_EXT_ZICOND:
···
225 220 case KVM_RISCV_ISA_EXT_ZTSO:
226 221 case KVM_RISCV_ISA_EXT_ZVBB:
227 222 case KVM_RISCV_ISA_EXT_ZVBC:
223 + case KVM_RISCV_ISA_EXT_ZVFBFMIN:
224 + case KVM_RISCV_ISA_EXT_ZVFBFWMA:
228 225 case KVM_RISCV_ISA_EXT_ZVFH:
229 226 case KVM_RISCV_ISA_EXT_ZVFHMIN:
230 227 case KVM_RISCV_ISA_EXT_ZVKB:
···
284 277 reg_val = vcpu->arch.isa[0] & KVM_RISCV_BASE_ISA_MASK;
285 278 break;
286 279 case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size):
287 - if (!riscv_isa_extension_available(vcpu->arch.isa, ZICBOM))
280 + if (!riscv_isa_extension_available(NULL, ZICBOM))
288 281 return -ENOENT;
289 282 reg_val = riscv_cbom_block_size;
290 283 break;
291 284 case KVM_REG_RISCV_CONFIG_REG(zicboz_block_size):
292 - if (!riscv_isa_extension_available(vcpu->arch.isa, ZICBOZ))
285 + if (!riscv_isa_extension_available(NULL, ZICBOZ))
293 286 return -ENOENT;
294 287 reg_val = riscv_cboz_block_size;
288 + break;
289 + case KVM_REG_RISCV_CONFIG_REG(zicbop_block_size):
290 + if (!riscv_isa_extension_available(NULL, ZICBOP))
291 + return -ENOENT;
292 + reg_val = riscv_cbop_block_size;
295 293 break;
296 294 case KVM_REG_RISCV_CONFIG_REG(mvendorid):
297 295 reg_val = vcpu->arch.mvendorid;
···
378 366 }
379 367 break;
380 368 case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size):
381 - if (!riscv_isa_extension_available(vcpu->arch.isa, ZICBOM))
369 + if (!riscv_isa_extension_available(NULL, ZICBOM))
382 370 return -ENOENT;
383 371 if (reg_val != riscv_cbom_block_size)
384 372 return -EINVAL;
385 373 break;
386 374 case KVM_REG_RISCV_CONFIG_REG(zicboz_block_size):
387 - if (!riscv_isa_extension_available(vcpu->arch.isa, ZICBOZ))
375 + if (!riscv_isa_extension_available(NULL, ZICBOZ))
388 376 return -ENOENT;
389 377 if (reg_val != riscv_cboz_block_size)
378 + return -EINVAL;
379 + break;
380 + case KVM_REG_RISCV_CONFIG_REG(zicbop_block_size):
381 + if (!riscv_isa_extension_available(NULL, ZICBOP))
382 + return -ENOENT;
383 + if (reg_val != riscv_cbop_block_size)
390 384 return -EINVAL;
391 385 break;
392 386 case KVM_REG_RISCV_CONFIG_REG(mvendorid):
···
835 817 * was not available.
836 818 */
837 819 if (i == KVM_REG_RISCV_CONFIG_REG(zicbom_block_size) &&
838 - !riscv_isa_extension_available(vcpu->arch.isa, ZICBOM))
820 + !riscv_isa_extension_available(NULL, ZICBOM))
839 821 continue;
840 822 else if (i == KVM_REG_RISCV_CONFIG_REG(zicboz_block_size) &&
841 - !riscv_isa_extension_available(vcpu->arch.isa, ZICBOZ))
823 + !riscv_isa_extension_available(NULL, ZICBOZ))
824 + continue;
825 + else if (i == KVM_REG_RISCV_CONFIG_REG(zicbop_block_size) &&
826 + !riscv_isa_extension_available(NULL, ZICBOP))
842 827 continue;
843 828
844 829 size = IS_ENABLED(CONFIG_32BIT) ? KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64;
···
1082 1061 return copy_isa_ext_reg_indices(vcpu, NULL);
1083 1062 }
1084 1063
1085 - static int copy_sbi_ext_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
1086 - {
1087 - unsigned int n = 0;
1088 -
1089 - for (int i = 0; i < KVM_RISCV_SBI_EXT_MAX; i++) {
1090 - u64 size = IS_ENABLED(CONFIG_32BIT) ?
1091 - KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64;
1092 - u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_SBI_EXT |
1093 - KVM_REG_RISCV_SBI_SINGLE | i;
1094 -
1095 - if (!riscv_vcpu_supports_sbi_ext(vcpu, i))
1096 - continue;
1097 -
1098 - if (uindices) {
1099 - if (put_user(reg, uindices))
1100 - return -EFAULT;
1101 - uindices++;
1102 - }
1103 -
1104 - n++;
1105 - }
1106 -
1107 - return n;
1108 - }
1109 -
1110 1064 static unsigned long num_sbi_ext_regs(struct kvm_vcpu *vcpu)
1111 1065 {
1112 - return copy_sbi_ext_reg_indices(vcpu, NULL);
1113 - }
1114 -
1115 - static int copy_sbi_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
1116 - {
1117 - struct kvm_vcpu_sbi_context *scontext = &vcpu->arch.sbi_context;
1118 - int total = 0;
1119 -
1120 - if (scontext->ext_status[KVM_RISCV_SBI_EXT_STA] == KVM_RISCV_SBI_EXT_STATUS_ENABLED) {
1121 - u64 size = IS_ENABLED(CONFIG_32BIT) ? KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64;
1122 - int n = sizeof(struct kvm_riscv_sbi_sta) / sizeof(unsigned long);
1123 -
1124 - for (int i = 0; i < n; i++) {
1125 - u64 reg = KVM_REG_RISCV | size |
1126 - KVM_REG_RISCV_SBI_STATE |
1127 - KVM_REG_RISCV_SBI_STA | i;
1128 -
1129 - if (uindices) {
1130 - if (put_user(reg, uindices))
1131 - return -EFAULT;
1132 - uindices++;
1133 - }
1134 - }
1135 -
1136 - total += n;
1137 - }
1138 -
1139 - return total;
1066 + return kvm_riscv_vcpu_reg_indices_sbi_ext(vcpu, NULL);
1140 1067 }
1141 1068
1142 1069 static inline unsigned long num_sbi_regs(struct kvm_vcpu *vcpu)
1143 1070 {
1144 - return copy_sbi_reg_indices(vcpu, NULL);
1071 + return kvm_riscv_vcpu_reg_indices_sbi(vcpu, NULL);
1145 1072 }
1146 1073
1147 1074 static inline unsigned long num_vector_regs(const struct kvm_vcpu *vcpu)
···
1212 1243 return ret;
1213 1244 uindices += ret;
1214 1245
1215 - ret = copy_sbi_ext_reg_indices(vcpu, uindices);
1246 + ret = kvm_riscv_vcpu_reg_indices_sbi_ext(vcpu, uindices);
1216 1247 if (ret < 0)
1217 1248 return ret;
1218 1249 uindices += ret;
1219 1250
1220 - ret = copy_sbi_reg_indices(vcpu, uindices);
1251 + ret = kvm_riscv_vcpu_reg_indices_sbi(vcpu, uindices);
1221 1252 if (ret < 0)
1222 1253 return ret;
1223 1254 uindices += ret;
+65 -9
arch/riscv/kvm/vcpu_pmu.c
···
60 60 type = PERF_TYPE_HW_CACHE;
61 61 break;
62 62 case SBI_PMU_EVENT_TYPE_RAW:
63 + case SBI_PMU_EVENT_TYPE_RAW_V2:
63 64 case SBI_PMU_EVENT_TYPE_FW:
64 65 type = PERF_TYPE_RAW;
65 66 break;
···
128 127 break;
129 128 case SBI_PMU_EVENT_TYPE_RAW:
130 129 config = evt_data & RISCV_PMU_RAW_EVENT_MASK;
131 + break;
132 + case SBI_PMU_EVENT_TYPE_RAW_V2:
133 + config = evt_data & RISCV_PMU_RAW_EVENT_V2_MASK;
131 134 break;
132 135 case SBI_PMU_EVENT_TYPE_FW:
133 136 if (ecode < SBI_PMU_FW_MAX)
···
409 405 int snapshot_area_size = sizeof(struct riscv_pmu_snapshot_data);
410 406 int sbiret = 0;
411 407 gpa_t saddr;
412 - unsigned long hva;
413 - bool writable;
414 408
415 409 if (!kvpmu || flags) {
416 410 sbiret = SBI_ERR_INVALID_PARAM;
···
430 428 goto out;
431 429 }
432 430
433 - hva = kvm_vcpu_gfn_to_hva_prot(vcpu, saddr >> PAGE_SHIFT, &writable);
434 - if (kvm_is_error_hva(hva) || !writable) {
435 - sbiret = SBI_ERR_INVALID_ADDRESS;
436 - goto out;
437 - }
438 -
439 431 kvpmu->sdata = kzalloc(snapshot_area_size, GFP_ATOMIC);
440 432 if (!kvpmu->sdata)
441 433 return -ENOMEM;
442 434
435 + /* No need to check writable slot explicitly as kvm_vcpu_write_guest does it internally */
443 436 if (kvm_vcpu_write_guest(vcpu, saddr, kvpmu->sdata, snapshot_area_size)) {
444 437 kfree(kvpmu->sdata);
445 - sbiret = SBI_ERR_FAILURE;
438 + sbiret = SBI_ERR_INVALID_ADDRESS;
446 439 goto out;
447 440 }
448 441
···
445 448
446 449 out:
447 450 retdata->err_val = sbiret;
451 +
452 + return 0;
453 + }
454 +
455 + int kvm_riscv_vcpu_pmu_event_info(struct kvm_vcpu *vcpu, unsigned long saddr_low,
456 + unsigned long saddr_high, unsigned long num_events,
457 + unsigned long flags, struct kvm_vcpu_sbi_return *retdata)
458 + {
459 + struct riscv_pmu_event_info *einfo = NULL;
460 + int shmem_size = num_events * sizeof(*einfo);
461 + gpa_t shmem;
462 + u32 eidx, etype;
463 + u64 econfig;
464 + int ret;
465 +
466 + if (flags != 0 || (saddr_low & (SZ_16 - 1) || num_events == 0)) {
467 + ret = SBI_ERR_INVALID_PARAM;
468 + goto out;
469 + }
470 +
471 + shmem = saddr_low;
472 + if (saddr_high != 0) {
473 + if (IS_ENABLED(CONFIG_32BIT)) {
474 + shmem |= ((gpa_t)saddr_high << 32);
475 + } else {
476 + ret = SBI_ERR_INVALID_ADDRESS;
477 + goto out;
478 + }
479 + }
480 +
481 + einfo = kzalloc(shmem_size, GFP_KERNEL);
482 + if (!einfo)
483 + return -ENOMEM;
484 +
485 + ret = kvm_vcpu_read_guest(vcpu, shmem, einfo, shmem_size);
486 + if (ret) {
487 + ret = SBI_ERR_FAILURE;
488 + goto free_mem;
489 + }
490 +
491 + for (int i = 0; i < num_events; i++) {
492 + eidx = einfo[i].event_idx;
493 + etype = kvm_pmu_get_perf_event_type(eidx);
494 + econfig = kvm_pmu_get_perf_event_config(eidx, einfo[i].event_data);
495 + ret = riscv_pmu_get_event_info(etype, econfig, NULL);
496 + einfo[i].output = (ret > 0) ? 1 : 0;
497 + }
498 +
499 + ret = kvm_vcpu_write_guest(vcpu, shmem, einfo, shmem_size);
500 + if (ret) {
501 + ret = SBI_ERR_INVALID_ADDRESS;
502 + goto free_mem;
503 + }
504 +
505 + ret = 0;
506 + free_mem:
507 + kfree(einfo);
508 + out:
509 + retdata->err_val = ret;
448 510
449 511 return 0;
450 512 }
+163 -35
arch/riscv/kvm/vcpu_sbi.c
···
79 79 .ext_ptr = &vcpu_sbi_ext_sta,
80 80 },
81 81 {
82 + .ext_idx = KVM_RISCV_SBI_EXT_FWFT,
83 + .ext_ptr = &vcpu_sbi_ext_fwft,
84 + },
85 + {
82 86 .ext_idx = KVM_RISCV_SBI_EXT_EXPERIMENTAL,
83 87 .ext_ptr = &vcpu_sbi_ext_experimental,
84 88 },
···
110 106 return sext;
111 107 }
112 108
113 - bool riscv_vcpu_supports_sbi_ext(struct kvm_vcpu *vcpu, int idx)
109 + static bool riscv_vcpu_supports_sbi_ext(struct kvm_vcpu *vcpu, int idx)
114 110 {
115 111 struct kvm_vcpu_sbi_context *scontext = &vcpu->arch.sbi_context;
116 112 const struct kvm_riscv_sbi_extension_entry *sext;
···
288 284 return 0;
289 285 }
290 286
287 + int kvm_riscv_vcpu_reg_indices_sbi_ext(struct kvm_vcpu *vcpu, u64 __user *uindices)
288 + {
289 + unsigned int n = 0;
290 +
291 + for (int i = 0; i < KVM_RISCV_SBI_EXT_MAX; i++) {
292 + u64 size = IS_ENABLED(CONFIG_32BIT) ?
293 + KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64;
294 + u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_SBI_EXT |
295 + KVM_REG_RISCV_SBI_SINGLE | i;
296 +
297 + if (!riscv_vcpu_supports_sbi_ext(vcpu, i))
298 + continue;
299 +
300 + if (uindices) {
301 + if (put_user(reg, uindices))
302 + return -EFAULT;
303 + uindices++;
304 + }
305 +
306 + n++;
307 + }
308 +
309 + return n;
310 + }
311 +
291 312 int kvm_riscv_vcpu_set_reg_sbi_ext(struct kvm_vcpu *vcpu,
292 313 const struct kvm_one_reg *reg)
293 314 {
···
389 360 return 0;
390 361 }
391 362
392 - int kvm_riscv_vcpu_set_reg_sbi(struct kvm_vcpu *vcpu,
393 - const struct kvm_one_reg *reg)
363 + int kvm_riscv_vcpu_reg_indices_sbi(struct kvm_vcpu *vcpu, u64 __user *uindices)
394 364 {
395 - unsigned long __user *uaddr =
396 - (unsigned long __user *)(unsigned long)reg->addr;
397 - unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
398 - KVM_REG_SIZE_MASK |
399 - KVM_REG_RISCV_SBI_STATE);
400 - unsigned long reg_subtype, reg_val;
365 + struct kvm_vcpu_sbi_context *scontext = &vcpu->arch.sbi_context;
366 + const struct kvm_riscv_sbi_extension_entry *entry;
367 + const struct kvm_vcpu_sbi_extension *ext;
368 + unsigned long state_reg_count;
369 + int i, j, rc, count = 0;
370 + u64 reg;
401 371
402 - if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
403 - return -EINVAL;
372 + for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) {
373 + entry = &sbi_ext[i];
374 + ext = entry->ext_ptr;
404 375
405 - if (copy_from_user(&reg_val, uaddr, KVM_REG_SIZE(reg->id)))
406 - return -EFAULT;
376 + if (!ext->get_state_reg_count ||
377 + scontext->ext_status[entry->ext_idx] != KVM_RISCV_SBI_EXT_STATUS_ENABLED)
378 + continue;
407 379
408 - reg_subtype = reg_num & KVM_REG_RISCV_SUBTYPE_MASK;
409 - reg_num &= ~KVM_REG_RISCV_SUBTYPE_MASK;
380 + state_reg_count = ext->get_state_reg_count(vcpu);
381 + if (!uindices)
382 + goto skip_put_user;
410 383
411 - switch (reg_subtype) {
412 - case KVM_REG_RISCV_SBI_STA:
413 - return kvm_riscv_vcpu_set_reg_sbi_sta(vcpu, reg_num, reg_val);
414 - default:
415 - return -EINVAL;
384 + for (j = 0; j < state_reg_count; j++) {
385 + if (ext->get_state_reg_id) {
386 + rc = ext->get_state_reg_id(vcpu, j, &reg);
387 + if (rc)
388 + return rc;
389 + } else {
390 + reg = KVM_REG_RISCV |
391 + (IS_ENABLED(CONFIG_32BIT) ?
392 + KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64) |
393 + KVM_REG_RISCV_SBI_STATE |
394 + ext->state_reg_subtype | j;
395 + }
396 +
397 + if (put_user(reg, uindices))
398 + return -EFAULT;
399 + uindices++;
400 + }
401 +
402 + skip_put_user:
403 + count += state_reg_count;
416 404 }
417 405
418 - return 0;
406 + return count;
419 407 }
420 408
421 - int kvm_riscv_vcpu_get_reg_sbi(struct kvm_vcpu *vcpu,
422 - const struct kvm_one_reg *reg)
409 + static const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext_withstate(struct kvm_vcpu *vcpu,
410 + unsigned long subtype)
411 + {
412 + struct kvm_vcpu_sbi_context *scontext = &vcpu->arch.sbi_context;
413 + const struct kvm_riscv_sbi_extension_entry *entry;
414 + const struct kvm_vcpu_sbi_extension *ext;
415 + int i;
416 +
417 + for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) {
418 + entry = &sbi_ext[i];
419 + ext = entry->ext_ptr;
420 +
421 + if (ext->get_state_reg_count &&
422 + ext->state_reg_subtype == subtype &&
423 + scontext->ext_status[entry->ext_idx] == KVM_RISCV_SBI_EXT_STATUS_ENABLED)
424 + return ext;
425 + }
426 +
427 + return NULL;
428 + }
429 +
430 + int kvm_riscv_vcpu_set_reg_sbi(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
423 431 {
424 432 unsigned long __user *uaddr =
425 433 (unsigned long __user *)(unsigned long)reg->addr;
426 434 unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
427 435 KVM_REG_SIZE_MASK |
428 436 KVM_REG_RISCV_SBI_STATE);
429 - unsigned long reg_subtype, reg_val;
430 - int ret;
437 + const struct kvm_vcpu_sbi_extension *ext;
438 + unsigned long reg_subtype;
439 + void *reg_val;
440 + u64 data64;
441 + u32 data32;
442 + u16 data16;
443 + u8 data8;
431 444
432 - if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
433 - return -EINVAL;
434 -
435 - reg_subtype = reg_num & KVM_REG_RISCV_SUBTYPE_MASK;
436 - reg_num &= ~KVM_REG_RISCV_SUBTYPE_MASK;
437 -
438 - switch (reg_subtype) {
439 - case KVM_REG_RISCV_SBI_STA:
440 - ret = kvm_riscv_vcpu_get_reg_sbi_sta(vcpu, reg_num, &reg_val);
445 + switch (KVM_REG_SIZE(reg->id)) {
446 + case 1:
447 + reg_val = &data8;
448 + break;
449 + case 2:
450 + reg_val = &data16;
451 + break;
452 + case 4:
453 + reg_val = &data32;
454 + break;
455 + case 8:
456 + reg_val = &data64;
441 457 break;
442 458 default:
443 459 return -EINVAL;
444 460 }
445 461
462 + if (copy_from_user(reg_val, uaddr, KVM_REG_SIZE(reg->id)))
463 + return -EFAULT;
464 +
465 + reg_subtype = reg_num & KVM_REG_RISCV_SUBTYPE_MASK;
466 + reg_num &= ~KVM_REG_RISCV_SUBTYPE_MASK;
467 +
468 + ext = kvm_vcpu_sbi_find_ext_withstate(vcpu, reg_subtype);
469 + if (!ext || !ext->set_state_reg)
470 + return -EINVAL;
471 +
472 + return ext->set_state_reg(vcpu, reg_num, KVM_REG_SIZE(reg->id), reg_val);
473 + }
474 +
475 + int kvm_riscv_vcpu_get_reg_sbi(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
476 + {
477 + unsigned long __user *uaddr =
478 + (unsigned long __user *)(unsigned long)reg->addr;
479 + unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
480 + KVM_REG_SIZE_MASK |
481 + KVM_REG_RISCV_SBI_STATE);
482 + const struct kvm_vcpu_sbi_extension *ext;
483 + unsigned long reg_subtype;
484 + void *reg_val;
485 + u64 data64;
486 + u32 data32;
487 + u16 data16;
488 + u8 data8;
489 + int ret;
490 +
491 + switch (KVM_REG_SIZE(reg->id)) {
492 + case 1:
493 + reg_val = &data8;
494 + break;
495 + case 2:
496 + reg_val = &data16;
497 + break;
498 + case 4:
499 + reg_val = &data32;
500 + break;
501 + case 8:
502 + reg_val = &data64;
503 + break;
504 + default:
505 + return -EINVAL;
506 + }
507 +
508 + reg_subtype = reg_num & KVM_REG_RISCV_SUBTYPE_MASK;
509 + reg_num &= ~KVM_REG_RISCV_SUBTYPE_MASK;
510 +
511 + ext = kvm_vcpu_sbi_find_ext_withstate(vcpu, reg_subtype);
512 + if (!ext || !ext->get_state_reg)
513 + return -EINVAL;
514 +
515 + ret = ext->get_state_reg(vcpu, reg_num, KVM_REG_SIZE(reg->id), reg_val);
446 516 if (ret)
447 517 return ret;
448 518
449 - if (copy_to_user(uaddr, &reg_val, KVM_REG_SIZE(reg->id)))
519 + if (copy_to_user(uaddr, reg_val, KVM_REG_SIZE(reg->id)))
450 520 return -EFAULT;
451 521
452 522 return 0;
+544
arch/riscv/kvm/vcpu_sbi_fwft.c
···
1 + // SPDX-License-Identifier: GPL-2.0
2 + /*
3 + * Copyright (c) 2025 Rivos Inc.
4 + *
5 + * Authors:
6 + * Clément Léger <cleger@rivosinc.com>
7 + */
8 +
9 + #include <linux/errno.h>
10 + #include <linux/err.h>
11 + #include <linux/kvm_host.h>
12 + #include <asm/cpufeature.h>
13 + #include <asm/sbi.h>
14 + #include <asm/kvm_vcpu_sbi.h>
15 + #include <asm/kvm_vcpu_sbi_fwft.h>
16 +
17 + #define MIS_DELEG (BIT_ULL(EXC_LOAD_MISALIGNED) | BIT_ULL(EXC_STORE_MISALIGNED))
18 +
19 + struct kvm_sbi_fwft_feature {
20 + /**
21 + * @id: Feature ID
22 + */
23 + enum sbi_fwft_feature_t id;
24 +
25 + /**
26 + * @first_reg_num: ONE_REG index of the first ONE_REG register
27 + */
28 + unsigned long first_reg_num;
29 +
30 + /**
31 + * @supported: Check if the feature is supported on the vcpu
32 + *
33 + * This callback is optional, if not provided the feature is assumed to
34 + * be supported
35 + */
36 + bool (*supported)(struct kvm_vcpu *vcpu);
37 +
38 + /**
39 + * @reset: Reset the feature value irrespective whether feature is supported or not
40 + *
41 + * This callback is mandatory
42 + */
43 + void (*reset)(struct kvm_vcpu *vcpu);
44 +
45 + /**
46 + * @set: Set the feature value
47 + *
48 + * Return SBI_SUCCESS on success or an SBI error (SBI_ERR_*)
49 + *
50 + * This callback is mandatory
51 + */
52 + long (*set)(struct kvm_vcpu *vcpu, struct kvm_sbi_fwft_config *conf,
53 + bool one_reg_access, unsigned long value);
54 +
55 + /**
56 + * @get: Get the feature current value
57 + *
58 + * Return SBI_SUCCESS on success or an SBI error (SBI_ERR_*)
59 + *
60 + * This callback is mandatory
61 + */
62 + long (*get)(struct kvm_vcpu *vcpu, struct kvm_sbi_fwft_config *conf,
63 + bool one_reg_access, unsigned long *value);
64 + };
65 +
66 + static const enum sbi_fwft_feature_t kvm_fwft_defined_features[] = {
67 + SBI_FWFT_MISALIGNED_EXC_DELEG,
68 + SBI_FWFT_LANDING_PAD,
69 + SBI_FWFT_SHADOW_STACK,
70 + SBI_FWFT_DOUBLE_TRAP,
71 + SBI_FWFT_PTE_AD_HW_UPDATING,
72 + SBI_FWFT_POINTER_MASKING_PMLEN,
73 + };
74 +
75 + static bool kvm_fwft_is_defined_feature(enum sbi_fwft_feature_t feature)
76 + {
77 + int i;
78 +
79 + for (i = 0; i < ARRAY_SIZE(kvm_fwft_defined_features); i++) {
80 + if (kvm_fwft_defined_features[i] == feature)
81 + return true;
82 + }
83 +
84 + return false;
85 + }
86 +
87 + static bool kvm_sbi_fwft_misaligned_delegation_supported(struct kvm_vcpu *vcpu)
88 + {
89 + return misaligned_traps_can_delegate();
90 + }
91 +
92 + static void kvm_sbi_fwft_reset_misaligned_delegation(struct kvm_vcpu *vcpu)
93 + {
94 + struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
95 +
96 + cfg->hedeleg &= ~MIS_DELEG;
97 + }
98 +
99 + static long kvm_sbi_fwft_set_misaligned_delegation(struct kvm_vcpu *vcpu,
100 + struct kvm_sbi_fwft_config *conf,
101 + bool one_reg_access, unsigned long value)
102 + {
103 + struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
104 +
105 + if (value == 1) {
106 + cfg->hedeleg |= MIS_DELEG;
107 + if (!one_reg_access)
108 + csr_set(CSR_HEDELEG, MIS_DELEG);
109 + } else if (value == 0) {
110 + cfg->hedeleg &= ~MIS_DELEG;
111 + if (!one_reg_access)
112 + csr_clear(CSR_HEDELEG, MIS_DELEG);
113 + } else {
114 + return SBI_ERR_INVALID_PARAM;
115 + }
116 +
117 + return SBI_SUCCESS;
118 + }
119 +
120 + static long kvm_sbi_fwft_get_misaligned_delegation(struct kvm_vcpu *vcpu,
121 + struct kvm_sbi_fwft_config *conf,
122 + bool one_reg_access, unsigned long *value)
123 + {
124 + struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
125 +
126 + *value = (cfg->hedeleg & MIS_DELEG) == MIS_DELEG;
127 + return SBI_SUCCESS;
128 + }
129 +
130 + #ifndef CONFIG_32BIT
131 +
132 + static bool try_to_set_pmm(unsigned long value)
133 + {
134 + csr_set(CSR_HENVCFG, value);
135 + return (csr_read_clear(CSR_HENVCFG, ENVCFG_PMM) & ENVCFG_PMM) == value;
136 + }
137 +
138 + static bool kvm_sbi_fwft_pointer_masking_pmlen_supported(struct kvm_vcpu *vcpu)
139 + {
140 + struct kvm_sbi_fwft *fwft = vcpu_to_fwft(vcpu);
141 +
142 + if (!riscv_isa_extension_available(vcpu->arch.isa, SMNPM))
143 + return false;
144 +
145 + fwft->have_vs_pmlen_7 = try_to_set_pmm(ENVCFG_PMM_PMLEN_7);
146 + fwft->have_vs_pmlen_16 = try_to_set_pmm(ENVCFG_PMM_PMLEN_16);
147 +
148 + return fwft->have_vs_pmlen_7 || fwft->have_vs_pmlen_16;
149 + }
150 +
151 + static void kvm_sbi_fwft_reset_pointer_masking_pmlen(struct kvm_vcpu *vcpu)
152 + {
153 + vcpu->arch.cfg.henvcfg &= ~ENVCFG_PMM;
154 + }
155 +
156 + static long kvm_sbi_fwft_set_pointer_masking_pmlen(struct kvm_vcpu *vcpu,
157 + struct kvm_sbi_fwft_config *conf,
158 + bool one_reg_access, unsigned long value)
159 + {
160 + struct kvm_sbi_fwft *fwft = vcpu_to_fwft(vcpu);
161 + unsigned long pmm;
162 +
163 + switch (value) {
164 + case 0:
165 + pmm = ENVCFG_PMM_PMLEN_0;
166 + break;
167 + case 7:
168 + if (!fwft->have_vs_pmlen_7)
169 + return SBI_ERR_INVALID_PARAM;
170 + pmm = ENVCFG_PMM_PMLEN_7;
171 + break;
172 + case 16:
173 + if (!fwft->have_vs_pmlen_16)
174 + return SBI_ERR_INVALID_PARAM;
175 + pmm = ENVCFG_PMM_PMLEN_16;
176 + break;
177 + default:
178 + return SBI_ERR_INVALID_PARAM;
179 + }
180 +
181 + vcpu->arch.cfg.henvcfg &= ~ENVCFG_PMM;
182 + vcpu->arch.cfg.henvcfg |= pmm;
183 +
184 + /*
185 + * Instead of waiting for vcpu_load/put() to update HENVCFG CSR,
186 + * update here so that VCPU see's pointer masking mode change
187 + * immediately.
188 + */
189 + if (!one_reg_access)
190 + csr_write(CSR_HENVCFG, vcpu->arch.cfg.henvcfg);
191 +
192 + return SBI_SUCCESS;
193 + }
194 +
195 + static long kvm_sbi_fwft_get_pointer_masking_pmlen(struct kvm_vcpu *vcpu,
196 + struct kvm_sbi_fwft_config *conf,
197 + bool one_reg_access, unsigned long *value)
198 + {
199 + switch (vcpu->arch.cfg.henvcfg & ENVCFG_PMM) {
200 + case ENVCFG_PMM_PMLEN_0:
201 + *value = 0;
202 + break;
203 + case ENVCFG_PMM_PMLEN_7:
204 + *value = 7;
205 + break;
206 + case ENVCFG_PMM_PMLEN_16:
207 + *value = 16;
208 + break;
209 + default:
210 + return SBI_ERR_FAILURE;
211 + }
212 +
213 + return SBI_SUCCESS;
214 + }
215 +
216 + #endif
217 +
218 + static const struct kvm_sbi_fwft_feature features[] = {
219 + {
220 + .id = SBI_FWFT_MISALIGNED_EXC_DELEG,
221 + .first_reg_num = offsetof(struct kvm_riscv_sbi_fwft, misaligned_deleg.enable) /
222 + sizeof(unsigned long),
223 + .supported = kvm_sbi_fwft_misaligned_delegation_supported,
224 + .reset = kvm_sbi_fwft_reset_misaligned_delegation,
225 + .set = kvm_sbi_fwft_set_misaligned_delegation,
226 + .get = kvm_sbi_fwft_get_misaligned_delegation,
227 + },
228 + #ifndef CONFIG_32BIT
229 + {
230 + .id = SBI_FWFT_POINTER_MASKING_PMLEN,
231 + .first_reg_num = offsetof(struct kvm_riscv_sbi_fwft, pointer_masking.enable) /
232 + sizeof(unsigned long),
233 + .supported = kvm_sbi_fwft_pointer_masking_pmlen_supported,
234 + .reset = kvm_sbi_fwft_reset_pointer_masking_pmlen,
235 + .set = kvm_sbi_fwft_set_pointer_masking_pmlen,
236 + .get = kvm_sbi_fwft_get_pointer_masking_pmlen,
237 + },
238 + #endif
239 + };
240 +
241 + static const struct kvm_sbi_fwft_feature *kvm_sbi_fwft_regnum_to_feature(unsigned long reg_num)
242 + {
243 + const struct kvm_sbi_fwft_feature *feature;
244 + int i;
245 +
246 + for (i = 0; i < ARRAY_SIZE(features); i++) {
247 + feature = &features[i];
248 + if (feature->first_reg_num <= reg_num && reg_num < (feature->first_reg_num + 3))
249 + return feature;
250 + }
251 +
252 + return NULL;
253 + }
254 +
255 + static struct kvm_sbi_fwft_config *
256 + kvm_sbi_fwft_get_config(struct kvm_vcpu *vcpu, enum sbi_fwft_feature_t feature)
257 + {
258 + int i;
259 + struct kvm_sbi_fwft *fwft = vcpu_to_fwft(vcpu);
260 +
261 + for (i = 0; i < ARRAY_SIZE(features); i++) {
262 + if (fwft->configs[i].feature->id == feature)
263 + return &fwft->configs[i];
264 + }
265 +
266 + return NULL;
267 + }
268 +
269 + static int kvm_fwft_get_feature(struct kvm_vcpu *vcpu, u32 feature,
270 + struct kvm_sbi_fwft_config **conf)
271 + {
272 + struct kvm_sbi_fwft_config *tconf;
273 +
274 + tconf = kvm_sbi_fwft_get_config(vcpu, feature);
275 + if (!tconf) {
276 + if (kvm_fwft_is_defined_feature(feature))
277 + return SBI_ERR_NOT_SUPPORTED;
278 +
279 + return SBI_ERR_DENIED;
280 + }
281 +
282 + if (!tconf->supported || !tconf->enabled)
283 + return SBI_ERR_NOT_SUPPORTED;
284 +
285 + *conf = tconf;
286 +
287 + return SBI_SUCCESS;
288 + }
289 +
290 + static int kvm_sbi_fwft_set(struct kvm_vcpu *vcpu, u32 feature,
291 + unsigned long value, unsigned long flags)
292 + {
293 + int ret;
294 + struct kvm_sbi_fwft_config *conf;
295 +
296 + ret = kvm_fwft_get_feature(vcpu, feature, &conf);
297 + if (ret)
298 + return ret;
299 +
300 + if ((flags & ~SBI_FWFT_SET_FLAG_LOCK) != 0)
301 + return SBI_ERR_INVALID_PARAM;
302 +
303 + if (conf->flags & SBI_FWFT_SET_FLAG_LOCK)
304 + return SBI_ERR_DENIED_LOCKED;
305 +
306 + conf->flags = flags;
307 +
308 + return conf->feature->set(vcpu, conf, false, value);
309 + }
310 +
311 + static int kvm_sbi_fwft_get(struct kvm_vcpu *vcpu, unsigned long feature,
312 + unsigned long *value)
313 + {
314 + int ret;
315 + struct kvm_sbi_fwft_config *conf;
316 +
317 + ret = kvm_fwft_get_feature(vcpu, feature, &conf);
318 + if (ret)
319 + return ret;
320 +
321 + return conf->feature->get(vcpu, conf, false, value);
322 + }
323 +
324 + static int kvm_sbi_ext_fwft_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
325 + struct kvm_vcpu_sbi_return *retdata)
326 + {
327 + int
ret; 328 + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; 329 + unsigned long funcid = cp->a6; 330 + 331 + switch (funcid) { 332 + case SBI_EXT_FWFT_SET: 333 + ret = kvm_sbi_fwft_set(vcpu, cp->a0, cp->a1, cp->a2); 334 + break; 335 + case SBI_EXT_FWFT_GET: 336 + ret = kvm_sbi_fwft_get(vcpu, cp->a0, &retdata->out_val); 337 + break; 338 + default: 339 + ret = SBI_ERR_NOT_SUPPORTED; 340 + break; 341 + } 342 + 343 + retdata->err_val = ret; 344 + 345 + return 0; 346 + } 347 + 348 + static int kvm_sbi_ext_fwft_init(struct kvm_vcpu *vcpu) 349 + { 350 + struct kvm_sbi_fwft *fwft = vcpu_to_fwft(vcpu); 351 + const struct kvm_sbi_fwft_feature *feature; 352 + struct kvm_sbi_fwft_config *conf; 353 + int i; 354 + 355 + fwft->configs = kcalloc(ARRAY_SIZE(features), sizeof(struct kvm_sbi_fwft_config), 356 + GFP_KERNEL); 357 + if (!fwft->configs) 358 + return -ENOMEM; 359 + 360 + for (i = 0; i < ARRAY_SIZE(features); i++) { 361 + feature = &features[i]; 362 + conf = &fwft->configs[i]; 363 + if (feature->supported) 364 + conf->supported = feature->supported(vcpu); 365 + else 366 + conf->supported = true; 367 + 368 + conf->enabled = conf->supported; 369 + conf->feature = feature; 370 + } 371 + 372 + return 0; 373 + } 374 + 375 + static void kvm_sbi_ext_fwft_deinit(struct kvm_vcpu *vcpu) 376 + { 377 + struct kvm_sbi_fwft *fwft = vcpu_to_fwft(vcpu); 378 + 379 + kfree(fwft->configs); 380 + } 381 + 382 + static void kvm_sbi_ext_fwft_reset(struct kvm_vcpu *vcpu) 383 + { 384 + struct kvm_sbi_fwft *fwft = vcpu_to_fwft(vcpu); 385 + const struct kvm_sbi_fwft_feature *feature; 386 + int i; 387 + 388 + for (i = 0; i < ARRAY_SIZE(features); i++) { 389 + fwft->configs[i].flags = 0; 390 + feature = &features[i]; 391 + if (feature->reset) 392 + feature->reset(vcpu); 393 + } 394 + } 395 + 396 + static unsigned long kvm_sbi_ext_fwft_get_reg_count(struct kvm_vcpu *vcpu) 397 + { 398 + unsigned long max_reg_count = sizeof(struct kvm_riscv_sbi_fwft) / sizeof(unsigned long); 399 + const struct 
kvm_sbi_fwft_feature *feature; 400 + struct kvm_sbi_fwft_config *conf; 401 + unsigned long reg, ret = 0; 402 + 403 + for (reg = 0; reg < max_reg_count; reg++) { 404 + feature = kvm_sbi_fwft_regnum_to_feature(reg); 405 + if (!feature) 406 + continue; 407 + 408 + conf = kvm_sbi_fwft_get_config(vcpu, feature->id); 409 + if (!conf || !conf->supported) 410 + continue; 411 + 412 + ret++; 413 + } 414 + 415 + return ret; 416 + } 417 + 418 + static int kvm_sbi_ext_fwft_get_reg_id(struct kvm_vcpu *vcpu, int index, u64 *reg_id) 419 + { 420 + int reg, max_reg_count = sizeof(struct kvm_riscv_sbi_fwft) / sizeof(unsigned long); 421 + const struct kvm_sbi_fwft_feature *feature; 422 + struct kvm_sbi_fwft_config *conf; 423 + int idx = 0; 424 + 425 + for (reg = 0; reg < max_reg_count; reg++) { 426 + feature = kvm_sbi_fwft_regnum_to_feature(reg); 427 + if (!feature) 428 + continue; 429 + 430 + conf = kvm_sbi_fwft_get_config(vcpu, feature->id); 431 + if (!conf || !conf->supported) 432 + continue; 433 + 434 + if (index == idx) { 435 + *reg_id = KVM_REG_RISCV | 436 + (IS_ENABLED(CONFIG_32BIT) ? 
437 + KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64) | 438 + KVM_REG_RISCV_SBI_STATE | 439 + KVM_REG_RISCV_SBI_FWFT | reg; 440 + return 0; 441 + } 442 + 443 + idx++; 444 + } 445 + 446 + return -ENOENT; 447 + } 448 + 449 + static int kvm_sbi_ext_fwft_get_reg(struct kvm_vcpu *vcpu, unsigned long reg_num, 450 + unsigned long reg_size, void *reg_val) 451 + { 452 + const struct kvm_sbi_fwft_feature *feature; 453 + struct kvm_sbi_fwft_config *conf; 454 + unsigned long *value; 455 + int ret = 0; 456 + 457 + if (reg_size != sizeof(unsigned long)) 458 + return -EINVAL; 459 + value = reg_val; 460 + 461 + feature = kvm_sbi_fwft_regnum_to_feature(reg_num); 462 + if (!feature) 463 + return -ENOENT; 464 + 465 + conf = kvm_sbi_fwft_get_config(vcpu, feature->id); 466 + if (!conf || !conf->supported) 467 + return -ENOENT; 468 + 469 + switch (reg_num - feature->first_reg_num) { 470 + case 0: 471 + *value = conf->enabled; 472 + break; 473 + case 1: 474 + *value = conf->flags; 475 + break; 476 + case 2: 477 + ret = conf->feature->get(vcpu, conf, true, value); 478 + break; 479 + default: 480 + return -ENOENT; 481 + } 482 + 483 + return sbi_err_map_linux_errno(ret); 484 + } 485 + 486 + static int kvm_sbi_ext_fwft_set_reg(struct kvm_vcpu *vcpu, unsigned long reg_num, 487 + unsigned long reg_size, const void *reg_val) 488 + { 489 + const struct kvm_sbi_fwft_feature *feature; 490 + struct kvm_sbi_fwft_config *conf; 491 + unsigned long value; 492 + int ret = 0; 493 + 494 + if (reg_size != sizeof(unsigned long)) 495 + return -EINVAL; 496 + value = *(const unsigned long *)reg_val; 497 + 498 + feature = kvm_sbi_fwft_regnum_to_feature(reg_num); 499 + if (!feature) 500 + return -ENOENT; 501 + 502 + conf = kvm_sbi_fwft_get_config(vcpu, feature->id); 503 + if (!conf || !conf->supported) 504 + return -ENOENT; 505 + 506 + switch (reg_num - feature->first_reg_num) { 507 + case 0: 508 + switch (value) { 509 + case 0: 510 + conf->enabled = false; 511 + break; 512 + case 1: 513 + conf->enabled = true; 514 + 
break; 515 + default: 516 + return -EINVAL; 517 + } 518 + break; 519 + case 1: 520 + conf->flags = value & SBI_FWFT_SET_FLAG_LOCK; 521 + break; 522 + case 2: 523 + ret = conf->feature->set(vcpu, conf, true, value); 524 + break; 525 + default: 526 + return -ENOENT; 527 + } 528 + 529 + return sbi_err_map_linux_errno(ret); 530 + } 531 + 532 + const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_fwft = { 533 + .extid_start = SBI_EXT_FWFT, 534 + .extid_end = SBI_EXT_FWFT, 535 + .handler = kvm_sbi_ext_fwft_handler, 536 + .init = kvm_sbi_ext_fwft_init, 537 + .deinit = kvm_sbi_ext_fwft_deinit, 538 + .reset = kvm_sbi_ext_fwft_reset, 539 + .state_reg_subtype = KVM_REG_RISCV_SBI_FWFT, 540 + .get_state_reg_count = kvm_sbi_ext_fwft_get_reg_count, 541 + .get_state_reg_id = kvm_sbi_ext_fwft_get_reg_id, 542 + .get_state_reg = kvm_sbi_ext_fwft_get_reg, 543 + .set_state_reg = kvm_sbi_ext_fwft_set_reg, 544 + };
+3
arch/riscv/kvm/vcpu_sbi_pmu.c
···
 	case SBI_EXT_PMU_SNAPSHOT_SET_SHMEM:
 		ret = kvm_riscv_vcpu_pmu_snapshot_set_shmem(vcpu, cp->a0, cp->a1, cp->a2, retdata);
 		break;
+	case SBI_EXT_PMU_EVENT_GET_INFO:
+		ret = kvm_riscv_vcpu_pmu_event_info(vcpu, cp->a0, cp->a1, cp->a2, cp->a3, retdata);
+		break;
 	default:
 		retdata->err_val = SBI_ERR_NOT_SUPPORTED;
 	}
+44 -30
arch/riscv/kvm/vcpu_sbi_sta.c
···
 	unsigned long shmem_phys_hi = cp->a1;
 	u32 flags = cp->a2;
 	struct sbi_sta_struct zero_sta = {0};
-	unsigned long hva;
-	bool writable;
 	gpa_t shmem;
 	int ret;
 
···
 		return SBI_ERR_INVALID_ADDRESS;
 	}
 
-	hva = kvm_vcpu_gfn_to_hva_prot(vcpu, shmem >> PAGE_SHIFT, &writable);
-	if (kvm_is_error_hva(hva) || !writable)
-		return SBI_ERR_INVALID_ADDRESS;
-
+	/* No need to check writable slot explicitly as kvm_vcpu_write_guest does it internally */
 	ret = kvm_vcpu_write_guest(vcpu, shmem, &zero_sta, sizeof(zero_sta));
 	if (ret)
-		return SBI_ERR_FAILURE;
+		return SBI_ERR_INVALID_ADDRESS;
 
 	vcpu->arch.sta.shmem = shmem;
 	vcpu->arch.sta.last_steal = current->sched_info.run_delay;
···
 	return !!sched_info_on();
 }
 
-const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_sta = {
-	.extid_start = SBI_EXT_STA,
-	.extid_end = SBI_EXT_STA,
-	.handler = kvm_sbi_ext_sta_handler,
-	.probe = kvm_sbi_ext_sta_probe,
-	.reset = kvm_riscv_vcpu_sbi_sta_reset,
-};
-
-int kvm_riscv_vcpu_get_reg_sbi_sta(struct kvm_vcpu *vcpu,
-				   unsigned long reg_num,
-				   unsigned long *reg_val)
+static unsigned long kvm_sbi_ext_sta_get_state_reg_count(struct kvm_vcpu *vcpu)
 {
+	return sizeof(struct kvm_riscv_sbi_sta) / sizeof(unsigned long);
+}
+
+static int kvm_sbi_ext_sta_get_reg(struct kvm_vcpu *vcpu, unsigned long reg_num,
+				   unsigned long reg_size, void *reg_val)
+{
+	unsigned long *value;
+
+	if (reg_size != sizeof(unsigned long))
+		return -EINVAL;
+	value = reg_val;
+
 	switch (reg_num) {
 	case KVM_REG_RISCV_SBI_STA_REG(shmem_lo):
-		*reg_val = (unsigned long)vcpu->arch.sta.shmem;
+		*value = (unsigned long)vcpu->arch.sta.shmem;
 		break;
 	case KVM_REG_RISCV_SBI_STA_REG(shmem_hi):
 		if (IS_ENABLED(CONFIG_32BIT))
-			*reg_val = upper_32_bits(vcpu->arch.sta.shmem);
+			*value = upper_32_bits(vcpu->arch.sta.shmem);
 		else
-			*reg_val = 0;
+			*value = 0;
 		break;
 	default:
-		return -EINVAL;
+		return -ENOENT;
 	}
 
 	return 0;
 }
 
-int kvm_riscv_vcpu_set_reg_sbi_sta(struct kvm_vcpu *vcpu,
-				   unsigned long reg_num,
-				   unsigned long reg_val)
+static int kvm_sbi_ext_sta_set_reg(struct kvm_vcpu *vcpu, unsigned long reg_num,
+				   unsigned long reg_size, const void *reg_val)
 {
+	unsigned long value;
+
+	if (reg_size != sizeof(unsigned long))
+		return -EINVAL;
+	value = *(const unsigned long *)reg_val;
+
 	switch (reg_num) {
 	case KVM_REG_RISCV_SBI_STA_REG(shmem_lo):
 		if (IS_ENABLED(CONFIG_32BIT)) {
 			gpa_t hi = upper_32_bits(vcpu->arch.sta.shmem);
 
-			vcpu->arch.sta.shmem = reg_val;
+			vcpu->arch.sta.shmem = value;
 			vcpu->arch.sta.shmem |= hi << 32;
 		} else {
-			vcpu->arch.sta.shmem = reg_val;
+			vcpu->arch.sta.shmem = value;
 		}
 		break;
 	case KVM_REG_RISCV_SBI_STA_REG(shmem_hi):
 		if (IS_ENABLED(CONFIG_32BIT)) {
 			gpa_t lo = lower_32_bits(vcpu->arch.sta.shmem);
 
-			vcpu->arch.sta.shmem = ((gpa_t)reg_val << 32);
+			vcpu->arch.sta.shmem = ((gpa_t)value << 32);
 			vcpu->arch.sta.shmem |= lo;
-		} else if (reg_val != 0) {
+		} else if (value != 0) {
 			return -EINVAL;
 		}
 		break;
 	default:
-		return -EINVAL;
+		return -ENOENT;
 	}
 
 	return 0;
 }
+
+const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_sta = {
+	.extid_start = SBI_EXT_STA,
+	.extid_end = SBI_EXT_STA,
+	.handler = kvm_sbi_ext_sta_handler,
+	.probe = kvm_sbi_ext_sta_probe,
+	.reset = kvm_riscv_vcpu_sbi_sta_reset,
+	.state_reg_subtype = KVM_REG_RISCV_SBI_STA,
+	.get_state_reg_count = kvm_sbi_ext_sta_get_state_reg_count,
+	.get_state_reg = kvm_sbi_ext_sta_get_reg,
+	.set_state_reg = kvm_sbi_ext_sta_set_reg,
+};
+3 -5
arch/riscv/kvm/vmid.c
···
 #include <linux/smp.h>
 #include <linux/kvm_host.h>
 #include <asm/csr.h>
+#include <asm/kvm_mmu.h>
 #include <asm/kvm_tlb.h>
 #include <asm/kvm_vmid.h>
 
···
 
 void __init kvm_riscv_gstage_vmid_detect(void)
 {
-	unsigned long old;
-
 	/* Figure-out number of VMID bits in HW */
-	old = csr_read(CSR_HGATP);
-	csr_write(CSR_HGATP, old | HGATP_VMID);
+	csr_write(CSR_HGATP, (kvm_riscv_gstage_mode << HGATP_MODE_SHIFT) | HGATP_VMID);
 	vmid_bits = csr_read(CSR_HGATP);
 	vmid_bits = (vmid_bits & HGATP_VMID) >> HGATP_VMID_SHIFT;
 	vmid_bits = fls_long(vmid_bits);
-	csr_write(CSR_HGATP, old);
+	csr_write(CSR_HGATP, 0);
 
 	/* We polluted local TLB so flush all guest TLB */
 	kvm_riscv_local_hfence_gvma_all();
+2 -2
arch/riscv/net/bpf_jit_comp64.c
···
 	emit_mv(rd, rs, ctx);
 #ifdef CONFIG_SMP
 	/* Load current CPU number in T1 */
-	emit_ld(RV_REG_T1, offsetof(struct thread_info, cpu),
+	emit_lw(RV_REG_T1, offsetof(struct thread_info, cpu),
 		RV_REG_TP, ctx);
 	/* Load address of __per_cpu_offset array in T2 */
 	emit_addr(RV_REG_T2, (u64)&__per_cpu_offset, extra_pass, ctx);
···
 	 */
 	if (insn->src_reg == 0 && insn->imm == BPF_FUNC_get_smp_processor_id) {
 		/* Load current CPU number in R0 */
-		emit_ld(bpf_to_rv_reg(BPF_REG_0, ctx), offsetof(struct thread_info, cpu),
+		emit_lw(bpf_to_rv_reg(BPF_REG_0, ctx), offsetof(struct thread_info, cpu),
 			RV_REG_TP, ctx);
 		break;
 	}
+3
arch/x86/include/asm/pgtable_64_types.h
···
 #define pgtable_l5_enabled() cpu_feature_enabled(X86_FEATURE_LA57)
 #endif /* USE_EARLY_PGTABLE_L5 */
 
+#define ARCH_PAGE_TABLE_SYNC_MASK \
+	(pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
+
 extern unsigned int pgdir_shift;
 extern unsigned int ptrs_per_p4d;
+18
arch/x86/mm/init_64.c
···
 }
 
 /*
+ * Make kernel mappings visible in all page tables in the system.
+ * This is necessary except when the init task populates kernel mappings
+ * during the boot process. In that case, all processes originating from
+ * the init task copy the kernel mappings, so there is no issue.
+ * Otherwise, missing synchronization could lead to kernel crashes due
+ * to missing page table entries for certain kernel mappings.
+ *
+ * Synchronization is performed at the top level, which is the PGD in
+ * 5-level paging systems. In 4-level paging systems, however,
+ * pgd_populate() is a no-op, so synchronization is done at the P4D level.
+ * sync_global_pgds() handles this difference between paging levels.
+ */
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+{
+	sync_global_pgds(start, end);
+}
+
+/*
  * NOTE: This function is marked __ref because it calls __init function
  * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
  */
+39
crypto/sha1.c
···
 	return 0;
 }
 
+static int __crypto_sha1_export_core(const struct sha1_ctx *ctx, void *out)
+{
+	memcpy(out, ctx, offsetof(struct sha1_ctx, buf));
+	return 0;
+}
+
+static int __crypto_sha1_import_core(struct sha1_ctx *ctx, const void *in)
+{
+	memcpy(ctx, in, offsetof(struct sha1_ctx, buf));
+	return 0;
+}
+
 const u8 sha1_zero_message_hash[SHA1_DIGEST_SIZE] = {
 	0xda, 0x39, 0xa3, 0xee, 0x5e, 0x6b, 0x4b, 0x0d,
 	0x32, 0x55, 0xbf, 0xef, 0x95, 0x60, 0x18, 0x90,
···
 static int crypto_sha1_import(struct shash_desc *desc, const void *in)
 {
 	return __crypto_sha1_import(SHA1_CTX(desc), in);
+}
+
+static int crypto_sha1_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha1_export_core(SHA1_CTX(desc), out);
+}
+
+static int crypto_sha1_import_core(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha1_import_core(SHA1_CTX(desc), in);
 }
 
 #define HMAC_SHA1_KEY(tfm) ((struct hmac_sha1_key *)crypto_shash_ctx(tfm))
···
 	return __crypto_sha1_import(&ctx->sha_ctx, in);
 }
 
+static int crypto_hmac_sha1_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha1_export_core(&HMAC_SHA1_CTX(desc)->sha_ctx, out);
+}
+
+static int crypto_hmac_sha1_import_core(struct shash_desc *desc, const void *in)
+{
+	struct hmac_sha1_ctx *ctx = HMAC_SHA1_CTX(desc);
+
+	ctx->ostate = HMAC_SHA1_KEY(desc->tfm)->ostate;
+	return __crypto_sha1_import_core(&ctx->sha_ctx, in);
+}
+
 static struct shash_alg algs[] = {
 	{
 		.base.cra_name = "sha1",
···
 		.digest = crypto_sha1_digest,
 		.export = crypto_sha1_export,
 		.import = crypto_sha1_import,
+		.export_core = crypto_sha1_export_core,
+		.import_core = crypto_sha1_import_core,
 		.descsize = sizeof(struct sha1_ctx),
 		.statesize = SHA1_SHASH_STATE_SIZE,
 	},
···
 		.digest = crypto_hmac_sha1_digest,
 		.export = crypto_hmac_sha1_export,
 		.import = crypto_hmac_sha1_import,
+		.export_core = crypto_hmac_sha1_export_core,
+		.import_core = crypto_hmac_sha1_import_core,
 		.descsize = sizeof(struct hmac_sha1_ctx),
 		.statesize = SHA1_SHASH_STATE_SIZE,
 	},
+71
crypto/sha256.c
···
 	return 0;
 }
 
+static int __crypto_sha256_export_core(const struct __sha256_ctx *ctx,
+				       void *out)
+{
+	memcpy(out, ctx, offsetof(struct __sha256_ctx, buf));
+	return 0;
+}
+
+static int __crypto_sha256_import_core(struct __sha256_ctx *ctx, const void *in)
+{
+	memcpy(ctx, in, offsetof(struct __sha256_ctx, buf));
+	return 0;
+}
+
 /* SHA-224 */
 
 const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] = {
···
 	return __crypto_sha256_import(&SHA224_CTX(desc)->ctx, in);
 }
 
+static int crypto_sha224_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export_core(&SHA224_CTX(desc)->ctx, out);
+}
+
+static int crypto_sha224_import_core(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha256_import_core(&SHA224_CTX(desc)->ctx, in);
+}
+
 /* SHA-256 */
 
 const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] = {
···
 static int crypto_sha256_import(struct shash_desc *desc, const void *in)
 {
 	return __crypto_sha256_import(&SHA256_CTX(desc)->ctx, in);
+}
+
+static int crypto_sha256_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export_core(&SHA256_CTX(desc)->ctx, out);
+}
+
+static int crypto_sha256_import_core(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha256_import_core(&SHA256_CTX(desc)->ctx, in);
 }
 
 /* HMAC-SHA224 */
···
 	return __crypto_sha256_import(&ctx->ctx.sha_ctx, in);
 }
 
+static int crypto_hmac_sha224_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export_core(&HMAC_SHA224_CTX(desc)->ctx.sha_ctx,
+					   out);
+}
+
+static int crypto_hmac_sha224_import_core(struct shash_desc *desc,
+					  const void *in)
+{
+	struct hmac_sha224_ctx *ctx = HMAC_SHA224_CTX(desc);
+
+	ctx->ctx.ostate = HMAC_SHA224_KEY(desc->tfm)->key.ostate;
+	return __crypto_sha256_import_core(&ctx->ctx.sha_ctx, in);
+}
+
 /* HMAC-SHA256 */
 
 #define HMAC_SHA256_KEY(tfm) ((struct hmac_sha256_key *)crypto_shash_ctx(tfm))
···
 	return __crypto_sha256_import(&ctx->ctx.sha_ctx, in);
 }
 
+static int crypto_hmac_sha256_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export_core(&HMAC_SHA256_CTX(desc)->ctx.sha_ctx,
+					   out);
+}
+
+static int crypto_hmac_sha256_import_core(struct shash_desc *desc,
+					  const void *in)
+{
+	struct hmac_sha256_ctx *ctx = HMAC_SHA256_CTX(desc);
+
+	ctx->ctx.ostate = HMAC_SHA256_KEY(desc->tfm)->key.ostate;
+	return __crypto_sha256_import_core(&ctx->ctx.sha_ctx, in);
+}
+
 /* Algorithm definitions */
 
 static struct shash_alg algs[] = {
···
 		.digest = crypto_sha224_digest,
 		.export = crypto_sha224_export,
 		.import = crypto_sha224_import,
+		.export_core = crypto_sha224_export_core,
+		.import_core = crypto_sha224_import_core,
 		.descsize = sizeof(struct sha224_ctx),
 		.statesize = SHA256_SHASH_STATE_SIZE,
 	},
···
 		.digest = crypto_sha256_digest,
 		.export = crypto_sha256_export,
 		.import = crypto_sha256_import,
+		.export_core = crypto_sha256_export_core,
+		.import_core = crypto_sha256_import_core,
 		.descsize = sizeof(struct sha256_ctx),
 		.statesize = SHA256_SHASH_STATE_SIZE,
 	},
···
 		.digest = crypto_hmac_sha224_digest,
 		.export = crypto_hmac_sha224_export,
 		.import = crypto_hmac_sha224_import,
+		.export_core = crypto_hmac_sha224_export_core,
+		.import_core = crypto_hmac_sha224_import_core,
 		.descsize = sizeof(struct hmac_sha224_ctx),
 		.statesize = SHA256_SHASH_STATE_SIZE,
 	},
···
 		.digest = crypto_hmac_sha256_digest,
 		.export = crypto_hmac_sha256_export,
 		.import = crypto_hmac_sha256_import,
+		.export_core = crypto_hmac_sha256_export_core,
+		.import_core = crypto_hmac_sha256_import_core,
 		.descsize = sizeof(struct hmac_sha256_ctx),
 		.statesize = SHA256_SHASH_STATE_SIZE,
 	},
+71
crypto/sha512.c
···
 	return 0;
 }
 
+static int __crypto_sha512_export_core(const struct __sha512_ctx *ctx,
+				       void *out)
+{
+	memcpy(out, ctx, offsetof(struct __sha512_ctx, buf));
+	return 0;
+}
+
+static int __crypto_sha512_import_core(struct __sha512_ctx *ctx, const void *in)
+{
+	memcpy(ctx, in, offsetof(struct __sha512_ctx, buf));
+	return 0;
+}
+
 /* SHA-384 */
 
 const u8 sha384_zero_message_hash[SHA384_DIGEST_SIZE] = {
···
 static int crypto_sha384_import(struct shash_desc *desc, const void *in)
 {
 	return __crypto_sha512_import(&SHA384_CTX(desc)->ctx, in);
+}
+
+static int crypto_sha384_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha512_export_core(&SHA384_CTX(desc)->ctx, out);
+}
+
+static int crypto_sha384_import_core(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha512_import_core(&SHA384_CTX(desc)->ctx, in);
 }
 
 /* SHA-512 */
···
 	return __crypto_sha512_import(&SHA512_CTX(desc)->ctx, in);
 }
 
+static int crypto_sha512_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha512_export_core(&SHA512_CTX(desc)->ctx, out);
+}
+
+static int crypto_sha512_import_core(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha512_import_core(&SHA512_CTX(desc)->ctx, in);
+}
+
 /* HMAC-SHA384 */
 
 #define HMAC_SHA384_KEY(tfm) ((struct hmac_sha384_key *)crypto_shash_ctx(tfm))
···
 
 	ctx->ctx.ostate = HMAC_SHA384_KEY(desc->tfm)->key.ostate;
 	return __crypto_sha512_import(&ctx->ctx.sha_ctx, in);
+}
+
+static int crypto_hmac_sha384_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha512_export_core(&HMAC_SHA384_CTX(desc)->ctx.sha_ctx,
+					   out);
+}
+
+static int crypto_hmac_sha384_import_core(struct shash_desc *desc,
+					  const void *in)
+{
+	struct hmac_sha384_ctx *ctx = HMAC_SHA384_CTX(desc);
+
+	ctx->ctx.ostate = HMAC_SHA384_KEY(desc->tfm)->key.ostate;
+	return __crypto_sha512_import_core(&ctx->ctx.sha_ctx, in);
 }
 
 /* HMAC-SHA512 */
···
 	return __crypto_sha512_import(&ctx->ctx.sha_ctx, in);
 }
 
+static int crypto_hmac_sha512_export_core(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha512_export_core(&HMAC_SHA512_CTX(desc)->ctx.sha_ctx,
+					   out);
+}
+
+static int crypto_hmac_sha512_import_core(struct shash_desc *desc,
+					  const void *in)
+{
+	struct hmac_sha512_ctx *ctx = HMAC_SHA512_CTX(desc);
+
+	ctx->ctx.ostate = HMAC_SHA512_KEY(desc->tfm)->key.ostate;
+	return __crypto_sha512_import_core(&ctx->ctx.sha_ctx, in);
+}
+
 /* Algorithm definitions */
 
 static struct shash_alg algs[] = {
···
 		.digest = crypto_sha384_digest,
 		.export = crypto_sha384_export,
 		.import = crypto_sha384_import,
+		.export_core = crypto_sha384_export_core,
+		.import_core = crypto_sha384_import_core,
 		.descsize = sizeof(struct sha384_ctx),
 		.statesize = SHA512_SHASH_STATE_SIZE,
 	},
···
 		.digest = crypto_sha512_digest,
 		.export = crypto_sha512_export,
 		.import = crypto_sha512_import,
+		.export_core = crypto_sha512_export_core,
+		.import_core = crypto_sha512_import_core,
 		.descsize = sizeof(struct sha512_ctx),
 		.statesize = SHA512_SHASH_STATE_SIZE,
 	},
···
 		.digest = crypto_hmac_sha384_digest,
 		.export = crypto_hmac_sha384_export,
 		.import = crypto_hmac_sha384_import,
+		.export_core = crypto_hmac_sha384_export_core,
+		.import_core = crypto_hmac_sha384_import_core,
 		.descsize = sizeof(struct hmac_sha384_ctx),
 		.statesize = SHA512_SHASH_STATE_SIZE,
 	},
···
 		.digest = crypto_hmac_sha512_digest,
 		.export = crypto_hmac_sha512_export,
 		.import = crypto_hmac_sha512_import,
+		.export_core = crypto_hmac_sha512_export_core,
+		.import_core = crypto_hmac_sha512_import_core,
 		.descsize = sizeof(struct hmac_sha512_ctx),
 		.statesize = SHA512_SHASH_STATE_SIZE,
 	},
+1 -1
drivers/accel/ivpu/ivpu_drv.c
···
 static void ivpu_dev_fini(struct ivpu_device *vdev)
 {
 	ivpu_jobs_abort_all(vdev);
-	ivpu_pm_cancel_recovery(vdev);
+	ivpu_pm_disable_recovery(vdev);
 	ivpu_pm_disable(vdev);
 	ivpu_prepare_for_reset(vdev);
 	ivpu_shutdown(vdev);
+2 -2
drivers/accel/ivpu/ivpu_pm.c
···
 	ivpu_dbg(vdev, PM, "Autosuspend delay = %d\n", delay);
 }
 
-void ivpu_pm_cancel_recovery(struct ivpu_device *vdev)
+void ivpu_pm_disable_recovery(struct ivpu_device *vdev)
 {
 	drm_WARN_ON(&vdev->drm, delayed_work_pending(&vdev->pm->job_timeout_work));
-	cancel_work_sync(&vdev->pm->recovery_work);
+	disable_work_sync(&vdev->pm->recovery_work);
 }
 
 void ivpu_pm_enable(struct ivpu_device *vdev)
+1 -1
drivers/accel/ivpu/ivpu_pm.h
···
 void ivpu_pm_init(struct ivpu_device *vdev);
 void ivpu_pm_enable(struct ivpu_device *vdev);
 void ivpu_pm_disable(struct ivpu_device *vdev);
-void ivpu_pm_cancel_recovery(struct ivpu_device *vdev);
+void ivpu_pm_disable_recovery(struct ivpu_device *vdev);
 
 int ivpu_pm_suspend_cb(struct device *dev);
 int ivpu_pm_resume_cb(struct device *dev);
+3 -1
drivers/acpi/arm64/iort.c
···
 
 	new_sids = krealloc_array(sids, count + new_count,
 				  sizeof(*new_sids), GFP_KERNEL);
-	if (!new_sids)
+	if (!new_sids) {
+		kfree(sids);
 		return NULL;
+	}
 
 	for (i = count; i < total_count; i++)
 		new_sids[i] = id_start++;
+2 -2
drivers/acpi/riscv/cppc.c
···
 
 		*val = data.ret.value;
 
-		return (data.ret.error) ? sbi_err_map_linux_errno(data.ret.error) : 0;
+		return data.ret.error;
 	}
 
 	return -EINVAL;
···
 
 		smp_call_function_single(cpu, cppc_ffh_csr_write, &data, 1);
 
-		return (data.ret.error) ? sbi_err_map_linux_errno(data.ret.error) : 0;
+		return data.ret.error;
 	}
 
 	return -EINVAL;
+41 -16
drivers/bluetooth/hci_vhci.c
···
 	.write = force_devcd_write,
 };
 
+static void vhci_debugfs_init(struct vhci_data *data)
+{
+	struct hci_dev *hdev = data->hdev;
+
+	debugfs_create_file("force_suspend", 0644, hdev->debugfs, data,
+			    &force_suspend_fops);
+
+	debugfs_create_file("force_wakeup", 0644, hdev->debugfs, data,
+			    &force_wakeup_fops);
+
+	if (IS_ENABLED(CONFIG_BT_MSFTEXT))
+		debugfs_create_file("msft_opcode", 0644, hdev->debugfs, data,
+				    &msft_opcode_fops);
+
+	if (IS_ENABLED(CONFIG_BT_AOSPEXT))
+		debugfs_create_file("aosp_capable", 0644, hdev->debugfs, data,
+				    &aosp_capable_fops);
+
+	debugfs_create_file("force_devcoredump", 0644, hdev->debugfs, data,
+			    &force_devcoredump_fops);
+}
+
 static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
 {
 	struct hci_dev *hdev;
···
 		return -EBUSY;
 	}
 
-	debugfs_create_file("force_suspend", 0644, hdev->debugfs, data,
-			    &force_suspend_fops);
-
-	debugfs_create_file("force_wakeup", 0644, hdev->debugfs, data,
-			    &force_wakeup_fops);
-
-	if (IS_ENABLED(CONFIG_BT_MSFTEXT))
-		debugfs_create_file("msft_opcode", 0644, hdev->debugfs, data,
-				    &msft_opcode_fops);
-
-	if (IS_ENABLED(CONFIG_BT_AOSPEXT))
-		debugfs_create_file("aosp_capable", 0644, hdev->debugfs, data,
-				    &aosp_capable_fops);
-
-	debugfs_create_file("force_devcoredump", 0644, hdev->debugfs, data,
-			    &force_devcoredump_fops);
+	if (!IS_ERR_OR_NULL(hdev->debugfs))
+		vhci_debugfs_init(data);
 
 	hci_skb_pkt_type(skb) = HCI_VENDOR_PKT;
···
 	return 0;
 }
 
+static void vhci_debugfs_remove(struct hci_dev *hdev)
+{
+	debugfs_lookup_and_remove("force_suspend", hdev->debugfs);
+
+	debugfs_lookup_and_remove("force_wakeup", hdev->debugfs);
+
+	if (IS_ENABLED(CONFIG_BT_MSFTEXT))
+		debugfs_lookup_and_remove("msft_opcode", hdev->debugfs);
+
+	if (IS_ENABLED(CONFIG_BT_AOSPEXT))
+		debugfs_lookup_and_remove("aosp_capable", hdev->debugfs);
+
+	debugfs_lookup_and_remove("force_devcoredump", hdev->debugfs);
+}
+
 static int vhci_release(struct inode *inode, struct file *file)
 {
 	struct vhci_data *data = file->private_data;
···
 	hdev = data->hdev;
 
 	if (hdev) {
+		if (!IS_ERR_OR_NULL(hdev->debugfs))
+			vhci_debugfs_remove(hdev);
 		hci_unregister_dev(hdev);
 		hci_free_dev(hdev);
 	}
-1
drivers/edac/altera_edac.c
···
 
 	ptemp = dma_alloc_coherent(mci->pdev, 16, &dma_handle, GFP_KERNEL);
 	if (!ptemp) {
-		dma_free_coherent(mci->pdev, 16, ptemp, dma_handle);
 		edac_printk(KERN_ERR, EDAC_MC,
 			    "Inject: Buffer Allocation error\n");
 		return -ENOMEM;
+3 -3
drivers/gpio/Kconfig
···
 # GPIO infrastructure and drivers
 #
 
+config GPIOLIB_LEGACY
+	def_bool y
+
 menuconfig GPIOLIB
 	bool "GPIO Support"
 	help
···
 	  one or more of the GPIO drivers below.
 
 	  If unsure, say N.
-
-config GPIOLIB_LEGACY
-	def_bool y
 
 if GPIOLIB
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
··· 448 448 psp->cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL); 449 449 if (!psp->cmd) { 450 450 dev_err(adev->dev, "Failed to allocate memory to command buffer!\n"); 451 - ret = -ENOMEM; 451 + return -ENOMEM; 452 452 } 453 453 454 454 adev->psp.xgmi_context.supports_extended_data =
-5
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
··· 1462 1462 1463 1463 static void dce_v10_0_audio_fini(struct amdgpu_device *adev) 1464 1464 { 1465 - int i; 1466 - 1467 1465 if (!amdgpu_audio) 1468 1466 return; 1469 1467 1470 1468 if (!adev->mode_info.audio.enabled) 1471 1469 return; 1472 - 1473 - for (i = 0; i < adev->mode_info.audio.num_pins; i++) 1474 - dce_v10_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); 1475 1470 1476 1471 adev->mode_info.audio.enabled = false; 1477 1472 }
-5
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
··· 1511 1511 1512 1512 static void dce_v11_0_audio_fini(struct amdgpu_device *adev) 1513 1513 { 1514 - int i; 1515 - 1516 1514 if (!amdgpu_audio) 1517 1515 return; 1518 1516 1519 1517 if (!adev->mode_info.audio.enabled) 1520 1518 return; 1521 - 1522 - for (i = 0; i < adev->mode_info.audio.num_pins; i++) 1523 - dce_v11_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); 1524 1519 1525 1520 adev->mode_info.audio.enabled = false; 1526 1521 }
-5
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
··· 1451 1451 1452 1452 static void dce_v6_0_audio_fini(struct amdgpu_device *adev) 1453 1453 { 1454 - int i; 1455 - 1456 1454 if (!amdgpu_audio) 1457 1455 return; 1458 1456 1459 1457 if (!adev->mode_info.audio.enabled) 1460 1458 return; 1461 - 1462 - for (i = 0; i < adev->mode_info.audio.num_pins; i++) 1463 - dce_v6_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); 1464 1459 1465 1460 adev->mode_info.audio.enabled = false; 1466 1461 }
-5
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
··· 1443 1443 1444 1444 static void dce_v8_0_audio_fini(struct amdgpu_device *adev) 1445 1445 { 1446 - int i; 1447 - 1448 1446 if (!amdgpu_audio) 1449 1447 return; 1450 1448 1451 1449 if (!adev->mode_info.audio.enabled) 1452 1450 return; 1453 - 1454 - for (i = 0; i < adev->mode_info.audio.num_pins; i++) 1455 - dce_v8_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); 1456 1451 1457 1452 adev->mode_info.audio.enabled = false; 1458 1453 }
+3 -2
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 641 641 break; 642 642 case MES_MISC_OP_CHANGE_CONFIG: 643 643 if ((mes->adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) < 0x63) { 644 - dev_err(mes->adev->dev, "MES FW version must be larger than 0x63 to support limit single process feature.\n"); 645 - return -EINVAL; 644 + dev_warn_once(mes->adev->dev, 645 + "MES FW version must be larger than 0x63 to support limit single process feature.\n"); 646 + return 0; 646 647 } 647 648 misc_pkt.opcode = MESAPI_MISC__CHANGE_CONFIG; 648 649 misc_pkt.change_config.opcode =
+3 -3
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
··· 1377 1377 1378 1378 switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) { 1379 1379 case IP_VERSION(6, 0, 0): 1380 - if ((adev->sdma.instance[0].fw_version >= 24) && !adev->sdma.disable_uq) 1380 + if ((adev->sdma.instance[0].fw_version >= 27) && !adev->sdma.disable_uq) 1381 1381 adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1382 1382 break; 1383 1383 case IP_VERSION(6, 0, 1): ··· 1385 1385 adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1386 1386 break; 1387 1387 case IP_VERSION(6, 0, 2): 1388 - if ((adev->sdma.instance[0].fw_version >= 21) && !adev->sdma.disable_uq) 1388 + if ((adev->sdma.instance[0].fw_version >= 23) && !adev->sdma.disable_uq) 1389 1389 adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1390 1390 break; 1391 1391 case IP_VERSION(6, 0, 3): 1392 - if ((adev->sdma.instance[0].fw_version >= 25) && !adev->sdma.disable_uq) 1392 + if ((adev->sdma.instance[0].fw_version >= 27) && !adev->sdma.disable_uq) 1393 1393 adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1394 1394 break; 1395 1395 case IP_VERSION(6, 1, 0):
+1 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 8381 8381 drm_add_modes_noedid(connector, 1920, 1080); 8382 8382 } else { 8383 8383 amdgpu_dm_connector_ddc_get_modes(connector, drm_edid); 8384 - if (encoder && (connector->connector_type != DRM_MODE_CONNECTOR_eDP) && 8385 - (connector->connector_type != DRM_MODE_CONNECTOR_LVDS)) 8384 + if (encoder) 8386 8385 amdgpu_dm_connector_add_common_modes(encoder, connector); 8387 8386 amdgpu_dm_connector_add_freesync_modes(connector, drm_edid); 8388 8387 }
+9
drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
··· 520 520 REG_UPDATE(DPP_CONTROL, DPP_CLOCK_ENABLE, 0); 521 521 } 522 522 523 + void dpp_force_disable_cursor(struct dpp *dpp_base) 524 + { 525 + struct dcn10_dpp *dpp = TO_DCN10_DPP(dpp_base); 526 + 527 + /* Force disable cursor */ 528 + REG_UPDATE(CURSOR0_CONTROL, CUR0_ENABLE, 0); 529 + dpp_base->pos.cur0_ctl.bits.cur0_enable = 0; 530 + } 531 + 523 532 static const struct dpp_funcs dcn10_dpp_funcs = { 524 533 .dpp_read_state = dpp_read_state, 525 534 .dpp_reset = dpp_reset,
+2
drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.h
··· 1525 1525 1526 1526 void dpp1_cm_get_gamut_remap(struct dpp *dpp_base, 1527 1527 struct dpp_grph_csc_adjustment *adjust); 1528 + void dpp_force_disable_cursor(struct dpp *dpp_base); 1529 + 1528 1530 #endif
+1
drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
··· 1494 1494 .dpp_dppclk_control = dpp1_dppclk_control, 1495 1495 .dpp_set_hdr_multiplier = dpp3_set_hdr_multiplier, 1496 1496 .dpp_get_gamut_remap = dpp3_cm_get_gamut_remap, 1497 + .dpp_force_disable_cursor = dpp_force_disable_cursor, 1497 1498 }; 1498 1499 1499 1500
+72
drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
··· 528 528 529 529 apply_symclk_on_tx_off_wa(link); 530 530 } 531 + 532 + /** 533 + * dcn314_dpp_pg_control - DPP power gate control. 534 + * 535 + * @hws: dce_hwseq reference. 536 + * @dpp_inst: DPP instance reference. 537 + * @power_on: true if we want to enable power gate, false otherwise. 538 + * 539 + * Enable or disable power gate in the specific DPP instance. 540 + * If power gating is disabled, will force disable cursor in the DPP instance. 541 + */ 542 + void dcn314_dpp_pg_control( 543 + struct dce_hwseq *hws, 544 + unsigned int dpp_inst, 545 + bool power_on) 546 + { 547 + uint32_t power_gate = power_on ? 0 : 1; 548 + uint32_t pwr_status = power_on ? 0 : 2; 549 + 550 + 551 + if (hws->ctx->dc->debug.disable_dpp_power_gate) { 552 + /* Workaround for DCN314 with disabled power gating */ 553 + if (!power_on) { 554 + 555 + /* Force disable cursor if power gating is disabled */ 556 + struct dpp *dpp = hws->ctx->dc->res_pool->dpps[dpp_inst]; 557 + if (dpp && dpp->funcs->dpp_force_disable_cursor) 558 + dpp->funcs->dpp_force_disable_cursor(dpp); 559 + } 560 + return; 561 + } 562 + if (REG(DOMAIN1_PG_CONFIG) == 0) 563 + return; 564 + 565 + switch (dpp_inst) { 566 + case 0: /* DPP0 */ 567 + REG_UPDATE(DOMAIN1_PG_CONFIG, 568 + DOMAIN1_POWER_GATE, power_gate); 569 + 570 + REG_WAIT(DOMAIN1_PG_STATUS, 571 + DOMAIN1_PGFSM_PWR_STATUS, pwr_status, 572 + 1, 1000); 573 + break; 574 + case 1: /* DPP1 */ 575 + REG_UPDATE(DOMAIN3_PG_CONFIG, 576 + DOMAIN3_POWER_GATE, power_gate); 577 + 578 + REG_WAIT(DOMAIN3_PG_STATUS, 579 + DOMAIN3_PGFSM_PWR_STATUS, pwr_status, 580 + 1, 1000); 581 + break; 582 + case 2: /* DPP2 */ 583 + REG_UPDATE(DOMAIN5_PG_CONFIG, 584 + DOMAIN5_POWER_GATE, power_gate); 585 + 586 + REG_WAIT(DOMAIN5_PG_STATUS, 587 + DOMAIN5_PGFSM_PWR_STATUS, pwr_status, 588 + 1, 1000); 589 + break; 590 + case 3: /* DPP3 */ 591 + REG_UPDATE(DOMAIN7_PG_CONFIG, 592 + DOMAIN7_POWER_GATE, power_gate); 593 + 594 + REG_WAIT(DOMAIN7_PG_STATUS, 595 + DOMAIN7_PGFSM_PWR_STATUS, 
pwr_status, 596 + 1, 1000); 597 + break; 598 + default: 599 + BREAK_TO_DEBUGGER(); 600 + break; 601 + } 602 + }
+2
drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.h
··· 47 47 48 48 void dcn314_disable_link_output(struct dc_link *link, const struct link_resource *link_res, enum signal_type signal); 49 49 50 + void dcn314_dpp_pg_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool power_on); 51 + 50 52 #endif /* __DC_HWSS_DCN314_H__ */
+1
drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c
··· 141 141 .enable_power_gating_plane = dcn314_enable_power_gating_plane, 142 142 .dpp_root_clock_control = dcn314_dpp_root_clock_control, 143 143 .hubp_pg_control = dcn31_hubp_pg_control, 144 + .dpp_pg_control = dcn314_dpp_pg_control, 144 145 .program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree, 145 146 .update_odm = dcn314_update_odm, 146 147 .dsc_pg_control = dcn314_dsc_pg_control,
+3
drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h
··· 349 349 struct dpp *dpp_base, 350 350 enum dc_color_space color_space, 351 351 struct dc_csc_transform cursor_csc_color_matrix); 352 + 353 + void (*dpp_force_disable_cursor)(struct dpp *dpp_base); 354 + 352 355 }; 353 356 354 357
+11
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 393 393 gpiod_set_value_cansleep(pdata->enable_gpio, 1); 394 394 395 395 /* 396 + * After EN is deasserted and an external clock is detected, the bridge 397 + * will sample GPIO3:1 to determine its frequency. The driver will 398 + * overwrite this setting in ti_sn_bridge_set_refclk_freq(). But this is 399 + * racy. Thus we have to wait a couple of us. According to the datasheet 400 + * the GPIO lines has to be stable at least 5 us (td5) but it seems that 401 + * is not enough and the refclk frequency value is still lost or 402 + * overwritten by the bridge itself. Waiting for 20us seems to work. 403 + */ 404 + usleep_range(20, 30); 405 + 406 + /* 396 407 * If we have a reference clock we can enable communication w/ the 397 408 * panel (including the aux channel) w/out any need for an input clock 398 409 * so we can do it in resume which lets us read the EDID before
+6 -1
drivers/gpu/drm/nouveau/gv100_fence.c
··· 18 18 struct nvif_push *push = &chan->chan.push; 19 19 int ret; 20 20 21 - ret = PUSH_WAIT(push, 8); 21 + ret = PUSH_WAIT(push, 13); 22 22 if (ret) 23 23 return ret; 24 24 ··· 31 31 NVDEF(NVC36F, SEM_EXECUTE, RELEASE_WFI, EN) | 32 32 NVDEF(NVC36F, SEM_EXECUTE, PAYLOAD_SIZE, 32BIT) | 33 33 NVDEF(NVC36F, SEM_EXECUTE, RELEASE_TIMESTAMP, DIS)); 34 + 35 + PUSH_MTHD(push, NVC36F, MEM_OP_A, 0, 36 + MEM_OP_B, 0, 37 + MEM_OP_C, NVDEF(NVC36F, MEM_OP_C, MEMBAR_TYPE, SYS_MEMBAR), 38 + MEM_OP_D, NVDEF(NVC36F, MEM_OP_D, OPERATION, MEMBAR)); 34 39 35 40 PUSH_MTHD(push, NVC36F, NON_STALL_INTERRUPT, 0); 36 41
+85
drivers/gpu/drm/nouveau/include/nvhw/class/clc36f.h
··· 7 7 8 8 #define NVC36F_NON_STALL_INTERRUPT (0x00000020) 9 9 #define NVC36F_NON_STALL_INTERRUPT_HANDLE 31:0 10 + // NOTE - MEM_OP_A and MEM_OP_B have been replaced in gp100 with methods for 11 + // specifying the page address for a targeted TLB invalidate and the uTLB for 12 + // a targeted REPLAY_CANCEL for UVM. 13 + // The previous MEM_OP_A/B functionality is in MEM_OP_C/D, with slightly 14 + // rearranged fields. 15 + #define NVC36F_MEM_OP_A (0x00000028) 16 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_TARGET_CLIENT_UNIT_ID 5:0 // only relevant for REPLAY_CANCEL_TARGETED 17 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_INVALIDATION_SIZE 5:0 // Used to specify size of invalidate, used for invalidates which are not of the REPLAY_CANCEL_TARGETED type 18 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_TARGET_GPC_ID 10:6 // only relevant for REPLAY_CANCEL_TARGETED 19 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_MMU_ENGINE_ID 6:0 // only relevant for REPLAY_CANCEL_VA_GLOBAL 20 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR 11:11 21 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR_EN 0x00000001 22 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR_DIS 0x00000000 23 + #define NVC36F_MEM_OP_A_TLB_INVALIDATE_TARGET_ADDR_LO 31:12 24 + #define NVC36F_MEM_OP_B (0x0000002c) 25 + #define NVC36F_MEM_OP_B_TLB_INVALIDATE_TARGET_ADDR_HI 31:0 26 + #define NVC36F_MEM_OP_C (0x00000030) 27 + #define NVC36F_MEM_OP_C_MEMBAR_TYPE 2:0 28 + #define NVC36F_MEM_OP_C_MEMBAR_TYPE_SYS_MEMBAR 0x00000000 29 + #define NVC36F_MEM_OP_C_MEMBAR_TYPE_MEMBAR 0x00000001 30 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB 0:0 31 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ONE 0x00000000 32 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ALL 0x00000001 // Probably nonsensical for MMU_TLB_INVALIDATE_TARGETED 33 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC 1:1 34 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC_ENABLE 0x00000000 35 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC_DISABLE 0x00000001 36 + #define 
NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY 4:2 // only relevant if GPC ENABLE 37 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_NONE 0x00000000 38 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_START 0x00000001 39 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_START_ACK_ALL 0x00000002 40 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_TARGETED 0x00000003 41 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_GLOBAL 0x00000004 42 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_VA_GLOBAL 0x00000005 43 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE 6:5 // only relevant if GPC ENABLE 44 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_NONE 0x00000000 45 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_GLOBALLY 0x00000001 46 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_INTRANODE 0x00000002 47 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE 9:7 //only relevant for REPLAY_CANCEL_VA_GLOBAL 48 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_READ 0 49 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_WRITE 1 50 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_STRONG 2 51 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_RSVRVD 3 52 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_WEAK 4 53 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_ALL 5 54 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_WRITE_AND_ATOMIC 6 55 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ALL 7 56 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL 9:7 // Invalidate affects this level and all below 57 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_ALL 0x00000000 // Invalidate tlb caches at all levels of the page table 58 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_PTE_ONLY 0x00000001 59 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE0 0x00000002 60 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE1 0x00000003 61 
+ #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE2 0x00000004 62 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE3 0x00000005 63 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE4 0x00000006 64 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE5 0x00000007 65 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE 11:10 // only relevant if PDB_ONE 66 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_VID_MEM 0x00000000 67 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_SYS_MEM_COHERENT 0x00000002 68 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_SYS_MEM_NONCOHERENT 0x00000003 69 + #define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ADDR_LO 31:12 // only relevant if PDB_ONE 70 + #define NVC36F_MEM_OP_C_ACCESS_COUNTER_CLR_TARGETED_NOTIFY_TAG 19:0 71 + // MEM_OP_D MUST be preceded by MEM_OPs A-C. 72 + #define NVC36F_MEM_OP_D (0x00000034) 73 + #define NVC36F_MEM_OP_D_TLB_INVALIDATE_PDB_ADDR_HI 26:0 // only relevant if PDB_ONE 74 + #define NVC36F_MEM_OP_D_OPERATION 31:27 75 + #define NVC36F_MEM_OP_D_OPERATION_MEMBAR 0x00000005 76 + #define NVC36F_MEM_OP_D_OPERATION_MMU_TLB_INVALIDATE 0x00000009 77 + #define NVC36F_MEM_OP_D_OPERATION_MMU_TLB_INVALIDATE_TARGETED 0x0000000a 78 + #define NVC36F_MEM_OP_D_OPERATION_L2_PEERMEM_INVALIDATE 0x0000000d 79 + #define NVC36F_MEM_OP_D_OPERATION_L2_SYSMEM_INVALIDATE 0x0000000e 80 + // CLEAN_LINES is an alias for Tegra/GPU IP usage 81 + #define NVC36F_MEM_OP_B_OPERATION_L2_INVALIDATE_CLEAN_LINES 0x0000000e 82 + #define NVC36F_MEM_OP_D_OPERATION_L2_CLEAN_COMPTAGS 0x0000000f 83 + #define NVC36F_MEM_OP_D_OPERATION_L2_FLUSH_DIRTY 0x00000010 84 + #define NVC36F_MEM_OP_D_OPERATION_L2_WAIT_FOR_SYS_PENDING_READS 0x00000015 85 + #define NVC36F_MEM_OP_D_OPERATION_ACCESS_COUNTER_CLR 0x00000016 86 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE 1:0 87 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_MIMC 0x00000000 88 + #define 
NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_MOMC 0x00000001 89 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_ALL 0x00000002 90 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_TARGETED 0x00000003 91 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE 2:2 92 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE_MIMC 0x00000000 93 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE_MOMC 0x00000001 94 + #define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_BANK 6:3 10 95 #define NVC36F_SEM_ADDR_LO (0x0000005c) 11 96 #define NVC36F_SEM_ADDR_LO_OFFSET 31:2 12 97 #define NVC36F_SEM_ADDR_HI (0x00000060)
+2
drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c
··· 350 350 nvkm_chid_unref(&fifo->chid); 351 351 352 352 nvkm_event_fini(&fifo->nonstall.event); 353 + if (fifo->func->nonstall_dtor) 354 + fifo->func->nonstall_dtor(fifo); 353 355 mutex_destroy(&fifo->mutex); 354 356 355 357 if (fifo->func->dtor)
+15 -8
drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c
··· 517 517 static void 518 518 ga100_fifo_nonstall_block(struct nvkm_event *event, int type, int index) 519 519 { 520 - struct nvkm_fifo *fifo = container_of(event, typeof(*fifo), nonstall.event); 521 - struct nvkm_runl *runl = nvkm_runl_get(fifo, index, 0); 522 - 523 - nvkm_inth_block(&runl->nonstall.inth); 524 520 } 525 521 526 522 static void 527 523 ga100_fifo_nonstall_allow(struct nvkm_event *event, int type, int index) 528 524 { 529 - struct nvkm_fifo *fifo = container_of(event, typeof(*fifo), nonstall.event); 530 - struct nvkm_runl *runl = nvkm_runl_get(fifo, index, 0); 531 - 532 - nvkm_inth_allow(&runl->nonstall.inth); 533 525 } 534 526 535 527 const struct nvkm_event_func ··· 556 564 if (ret) 557 565 return ret; 558 566 567 + nvkm_inth_allow(&runl->nonstall.inth); 568 + 559 569 nr = max(nr, runl->id + 1); 560 570 } 561 571 562 572 return nr; 573 + } 574 + 575 + void 576 + ga100_fifo_nonstall_dtor(struct nvkm_fifo *fifo) 577 + { 578 + struct nvkm_runl *runl; 579 + 580 + nvkm_runl_foreach(runl, fifo) { 581 + if (runl->nonstall.vector < 0) 582 + continue; 583 + nvkm_inth_block(&runl->nonstall.inth); 584 + } 563 585 } 564 586 565 587 int ··· 605 599 .runl_ctor = ga100_fifo_runl_ctor, 606 600 .mmu_fault = &tu102_fifo_mmu_fault, 607 601 .nonstall_ctor = ga100_fifo_nonstall_ctor, 602 + .nonstall_dtor = ga100_fifo_nonstall_dtor, 608 603 .nonstall = &ga100_fifo_nonstall, 609 604 .runl = &ga100_runl, 610 605 .runq = &ga100_runq,
+1
drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c
··· 30 30 .runl_ctor = ga100_fifo_runl_ctor, 31 31 .mmu_fault = &tu102_fifo_mmu_fault, 32 32 .nonstall_ctor = ga100_fifo_nonstall_ctor, 33 + .nonstall_dtor = ga100_fifo_nonstall_dtor, 33 34 .nonstall = &ga100_fifo_nonstall, 34 35 .runl = &ga100_runl, 35 36 .runq = &ga100_runq,
+2
drivers/gpu/drm/nouveau/nvkm/engine/fifo/priv.h
··· 41 41 void (*start)(struct nvkm_fifo *, unsigned long *); 42 42 43 43 int (*nonstall_ctor)(struct nvkm_fifo *); 44 + void (*nonstall_dtor)(struct nvkm_fifo *); 44 45 const struct nvkm_event_func *nonstall; 45 46 46 47 const struct nvkm_runl_func *runl; ··· 201 200 202 201 int ga100_fifo_runl_ctor(struct nvkm_fifo *); 203 202 int ga100_fifo_nonstall_ctor(struct nvkm_fifo *); 203 + void ga100_fifo_nonstall_dtor(struct nvkm_fifo *); 204 204 extern const struct nvkm_event_func ga100_fifo_nonstall; 205 205 extern const struct nvkm_runl_func ga100_runl; 206 206 extern const struct nvkm_runq_func ga100_runq;
+1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/fifo.c
··· 601 601 rm->chan.func = &r535_chan; 602 602 rm->nonstall = &ga100_fifo_nonstall; 603 603 rm->nonstall_ctor = ga100_fifo_nonstall_ctor; 604 + rm->nonstall_dtor = ga100_fifo_nonstall_dtor; 604 605 605 606 return nvkm_fifo_new_(rm, device, type, inst, pfifo); 606 607 }
+7 -4
drivers/gpu/drm/scheduler/sched_entity.c
··· 391 391 * Add a callback to the current dependency of the entity to wake up the 392 392 * scheduler when the entity becomes available. 393 393 */ 394 - static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity) 394 + static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity, 395 + struct drm_sched_job *sched_job) 395 396 { 396 397 struct drm_gpu_scheduler *sched = entity->rq->sched; 397 398 struct dma_fence *fence = entity->dependency; ··· 421 420 dma_fence_put(entity->dependency); 422 421 entity->dependency = fence; 423 422 } 423 + 424 + if (trace_drm_sched_job_unschedulable_enabled() && 425 + !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &entity->dependency->flags)) 426 + trace_drm_sched_job_unschedulable(sched_job, entity->dependency); 424 427 425 428 if (!dma_fence_add_callback(entity->dependency, &entity->cb, 426 429 drm_sched_entity_wakeup)) ··· 466 461 467 462 while ((entity->dependency = 468 463 drm_sched_job_dependency(sched_job, entity))) { 469 - if (drm_sched_entity_add_dependency_cb(entity)) { 470 - trace_drm_sched_job_unschedulable(sched_job, entity->dependency); 464 + if (drm_sched_entity_add_dependency_cb(entity, sched_job)) 471 465 return NULL; 472 - } 473 466 } 474 467 475 468 /* skip jobs from entity that marked guilty */
+1 -2
drivers/gpu/drm/xe/xe_bo.c
··· 819 819 return ret; 820 820 } 821 821 822 - tt_has_data = ttm && (ttm_tt_is_populated(ttm) || 823 - (ttm->page_flags & TTM_TT_FLAG_SWAPPED)); 822 + tt_has_data = ttm && (ttm_tt_is_populated(ttm) || ttm_tt_is_swapped(ttm)); 824 823 825 824 move_lacks_source = !old_mem || (handle_system_ccs ? (!bo->ccs_cleared) : 826 825 (!mem_type_is_vram(old_mem_type) && !tt_has_data));
+5 -4
drivers/hwmon/ina238.c
··· 379 379 regval = clamp_val(val, -163, 163); 380 380 regval = (regval * 1000 * 4) / 381 381 (INA238_SHUNT_VOLTAGE_LSB * data->gain); 382 - regval = clamp_val(regval, S16_MIN, S16_MAX); 382 + regval = clamp_val(regval, S16_MIN, S16_MAX) & 0xffff; 383 383 384 384 switch (attr) { 385 385 case hwmon_in_max: ··· 517 517 * Unsigned postive values. Compared against the 24-bit power register, 518 518 * lower 8-bits are truncated. Same conversion to/from uW as POWER 519 519 * register. 520 + * The first clamp_val() is to establish a baseline to avoid overflows. 520 521 */ 521 - regval = clamp_val(val, 0, LONG_MAX); 522 - regval = div_u64(val * 4 * 100 * data->rshunt, data->config->power_calculate_factor * 522 + regval = clamp_val(val, 0, LONG_MAX / 2); 523 + regval = div_u64(regval * 4 * 100 * data->rshunt, data->config->power_calculate_factor * 523 524 1000ULL * INA238_FIXED_SHUNT * data->gain); 524 525 regval = clamp_val(regval >> 8, 0, U16_MAX); 525 526 ··· 573 572 return -EOPNOTSUPP; 574 573 575 574 /* Signed */ 576 - regval = clamp_val(val, -40000, 125000); 575 + val = clamp_val(val, -40000, 125000); 577 576 regval = div_s64(val * 10000, data->config->temp_lsb) << data->config->temp_shift; 578 577 regval = clamp_val(regval, S16_MIN, S16_MAX) & (0xffff << data->config->temp_shift); 579 578
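The two ina238 clamp fixes above share a pattern worth spelling out: a `clamp_val()` result must be assigned back to the variable actually used afterwards (a dead store was the bug in the temperature path), and wide multiplications need a baseline clamp *before* the multiply so the 64-bit intermediate cannot wrap. A minimal user-space sketch, with toy constants rather than the driver's real scaling factors:

```c
#include <assert.h>

/* Illustrative clamp helper (the kernel's clamp_val() macro, spelled out). */
static long clamp_long(long v, long lo, long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* Temperature limit: millidegrees C -> toy register units (12.5 m°C LSB).
 * The fix assigns the clamp back to 'val'; storing it into an unrelated
 * variable made the clamp a dead store. */
static long temp_to_reg(long val)
{
	val = clamp_long(val, -40000, 125000);
	return val * 10 / 125;
}
```

The same idea motivates clamping to `LONG_MAX / 2` before the `div_u64()` multiply in the power path: it establishes a baseline so the product stays representable.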
+2 -3
drivers/hwmon/mlxreg-fan.c
··· 561 561 if (!pwm->connected) 562 562 continue; 563 563 pwm->fan = fan; 564 + /* Set minimal PWM speed. */ 565 + pwm->last_hwmon_state = MLXREG_FAN_PWM_DUTY2STATE(MLXREG_FAN_MIN_DUTY); 564 566 pwm->cdev = devm_thermal_of_cooling_device_register(dev, NULL, mlxreg_fan_name[i], 565 567 pwm, &mlxreg_fan_cooling_ops); 566 568 if (IS_ERR(pwm->cdev)) { 567 569 dev_err(dev, "Failed to register cooling device\n"); 568 570 return PTR_ERR(pwm->cdev); 569 571 } 570 - 571 - /* Set minimal PWM speed. */ 572 - pwm->last_hwmon_state = MLXREG_FAN_PWM_DUTY2STATE(MLXREG_FAN_MIN_DUTY); 573 572 } 574 573 575 574 return 0;
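The mlxreg-fan reorder above fixes an initialization race: a cooling device may have its ops invoked as soon as it is registered, so state those ops read (`last_hwmon_state`) must be set first. A toy model of that ordering, with illustrative names standing in for the thermal core:

```c
#include <assert.h>

/* Hypothetical stand-in for the driver's per-PWM state. */
struct pwm_state {
	int last_state;
};

static int observed;

static void get_cur_state(struct pwm_state *p)
{
	observed = p->last_state;
}

/* Toy registration that, like the real core, may call an op immediately. */
static void register_cooling(struct pwm_state *p,
			     void (*op)(struct pwm_state *))
{
	op(p);
}

static int demo(void)
{
	struct pwm_state p = { 0 };

	p.last_state = 3;	/* set minimal state BEFORE registering */
	register_cooling(&p, get_cur_state);
	return observed;
}
```

With the assignment after registration, `observed` would read the uninitialized zero instead.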
+1 -1
drivers/i2c/busses/i2c-i801.c
··· 1052 1052 { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_P_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1053 1053 { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_SOC_S_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1054 1054 { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_PCH_S_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1055 - { PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1055 + { PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS, FEATURES_ICH5) }, 1056 1056 { PCI_DEVICE_DATA(INTEL, ARROW_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1057 1057 { PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) }, 1058 1058 { PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_P_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+8 -14
drivers/i2c/busses/i2c-rtl9300.c
··· 99 99 { 100 100 u32 val, mask; 101 101 102 + if (len < 1 || len > 16) 103 + return -EINVAL; 104 + 102 105 val = chan->bus_freq << RTL9300_I2C_MST_CTRL2_SCL_FREQ_OFS; 103 106 mask = RTL9300_I2C_MST_CTRL2_SCL_FREQ_MASK; 104 107 ··· 225 222 } 226 223 227 224 switch (size) { 228 - case I2C_SMBUS_QUICK: 229 - ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0); 230 - if (ret) 231 - goto out_unlock; 232 - ret = rtl9300_i2c_reg_addr_set(i2c, 0, 0); 233 - if (ret) 234 - goto out_unlock; 235 - break; 236 - 237 225 case I2C_SMBUS_BYTE: 238 226 if (read_write == I2C_SMBUS_WRITE) { 239 227 ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0); ··· 306 312 307 313 static u32 rtl9300_i2c_func(struct i2c_adapter *a) 308 314 { 309 - return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE | 310 - I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA | 311 - I2C_FUNC_SMBUS_BLOCK_DATA; 315 + return I2C_FUNC_SMBUS_BYTE | I2C_FUNC_SMBUS_BYTE_DATA | 316 + I2C_FUNC_SMBUS_WORD_DATA | I2C_FUNC_SMBUS_BLOCK_DATA | 317 + I2C_FUNC_SMBUS_I2C_BLOCK; 312 318 } 313 319 314 320 static const struct i2c_algorithm rtl9300_i2c_algo = { ··· 317 323 }; 318 324 319 325 static struct i2c_adapter_quirks rtl9300_i2c_quirks = { 320 - .flags = I2C_AQ_NO_CLK_STRETCH, 326 + .flags = I2C_AQ_NO_CLK_STRETCH | I2C_AQ_NO_ZERO_LEN, 321 327 .max_read_len = 16, 322 328 .max_write_len = 16, 323 329 }; ··· 347 353 348 354 platform_set_drvdata(pdev, i2c); 349 355 350 - if (device_get_child_node_count(dev) >= RTL9300_I2C_MUX_NCHAN) 356 + if (device_get_child_node_count(dev) > RTL9300_I2C_MUX_NCHAN) 351 357 return dev_err_probe(dev, -EINVAL, "Too many channels\n"); 352 358 353 359 device_for_each_child_node(dev, child) {
+3 -3
drivers/isdn/mISDN/dsp_hwec.c
··· 51 51 goto _do; 52 52 53 53 { 54 - char *dup, *tok, *name, *val; 54 + char *dup, *next, *tok, *name, *val; 55 55 int tmp; 56 56 57 - dup = kstrdup(arg, GFP_ATOMIC); 57 + dup = next = kstrdup(arg, GFP_ATOMIC); 58 58 if (!dup) 59 59 return; 60 60 61 - while ((tok = strsep(&dup, ","))) { 61 + while ((tok = strsep(&next, ","))) { 62 62 if (!strlen(tok)) 63 63 continue; 64 64 name = strsep(&tok, "=");
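The dsp_hwec change above fixes a classic `strsep()` pitfall: `strsep()` advances the pointer it is handed, so by the end of the loop the variable passed to it no longer points at the allocation. Walking a separate cursor (`next`) keeps `dup` valid for freeing. A user-space sketch of the pattern, with `strdup()`/`free()` standing in for the kernel's `kstrdup()`/`kfree()`:

```c
#define _DEFAULT_SOURCE		/* for strsep()/strdup() under strict -std */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Count non-empty comma-separated tokens in 'arg'. */
static int count_options(const char *arg)
{
	char *dup, *next, *tok;
	int n = 0;

	dup = next = strdup(arg);
	if (!dup)
		return -1;

	while ((tok = strsep(&next, ",")))	/* strsep() mutates 'next' */
		if (strlen(tok))
			n++;

	free(dup);	/* 'next' is NULL here; freeing it would leak 'dup' */
	return n;
}
```

Passing `&dup` to `strsep()` directly, as the old code effectively did, leaves nothing valid to free afterwards.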
+5
drivers/md/md.c
··· 9125 9125 } 9126 9126 9127 9127 action = md_sync_action(mddev); 9128 + if (action == ACTION_FROZEN || action == ACTION_IDLE) { 9129 + set_bit(MD_RECOVERY_INTR, &mddev->recovery); 9130 + goto skip; 9131 + } 9132 + 9128 9133 desc = md_sync_action_name(action); 9129 9134 mddev->last_sync_action = action; 9130 9135
+1 -1
drivers/md/raid1.c
··· 1225 1225 int i = 0; 1226 1226 struct bio *behind_bio = NULL; 1227 1227 1228 - behind_bio = bio_alloc_bioset(NULL, vcnt, 0, GFP_NOIO, 1228 + behind_bio = bio_alloc_bioset(NULL, vcnt, bio->bi_opf, GFP_NOIO, 1229 1229 &r1_bio->mddev->bio_set); 1230 1230 1231 1231 /* discard op, we don't support writezero/writesame yet */
+13 -4
drivers/net/dsa/mv88e6xxx/leds.c
··· 779 779 continue; 780 780 if (led_num > 1) { 781 781 dev_err(dev, "invalid LED specified port %d\n", port); 782 - return -EINVAL; 782 + ret = -EINVAL; 783 + goto err_put_led; 783 784 } 784 785 785 786 if (led_num == 0) ··· 824 823 init_data.devname_mandatory = true; 825 824 init_data.devicename = kasprintf(GFP_KERNEL, "%s:0%d:0%d", chip->info->name, 826 825 port, led_num); 827 - if (!init_data.devicename) 828 - return -ENOMEM; 826 + if (!init_data.devicename) { 827 + ret = -ENOMEM; 828 + goto err_put_led; 829 + } 829 830 830 831 ret = devm_led_classdev_register_ext(dev, l, &init_data); 831 832 kfree(init_data.devicename); 832 833 833 834 if (ret) { 834 835 dev_err(dev, "Failed to init LED %d for port %d", led_num, port); 835 - return ret; 836 + goto err_put_led; 836 837 } 837 838 } 838 839 840 + fwnode_handle_put(leds); 839 841 return 0; 842 + 843 + err_put_led: 844 + fwnode_handle_put(led); 845 + fwnode_handle_put(leds); 846 + return ret; 840 847 }
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 4397 4397 for (i = 0; i < bp->rx_agg_ring_size; i++) { 4398 4398 if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_KERNEL)) { 4399 4399 netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d pages only\n", 4400 - ring_nr, i, bp->rx_ring_size); 4400 + ring_nr, i, bp->rx_agg_ring_size); 4401 4401 break; 4402 4402 } 4403 4403 prod = NEXT_RX_AGG(prod);
+16 -12
drivers/net/ethernet/cadence/macb_main.c
··· 1223 1223 { 1224 1224 struct macb *bp = queue->bp; 1225 1225 u16 queue_index = queue - bp->queues; 1226 + unsigned long flags; 1226 1227 unsigned int tail; 1227 1228 unsigned int head; 1228 1229 int packets = 0; 1229 1230 u32 bytes = 0; 1230 1231 1231 - spin_lock(&queue->tx_ptr_lock); 1232 + spin_lock_irqsave(&queue->tx_ptr_lock, flags); 1232 1233 head = queue->tx_head; 1233 1234 for (tail = queue->tx_tail; tail != head && packets < budget; tail++) { 1234 1235 struct macb_tx_skb *tx_skb; ··· 1292 1291 CIRC_CNT(queue->tx_head, queue->tx_tail, 1293 1292 bp->tx_ring_size) <= MACB_TX_WAKEUP_THRESH(bp)) 1294 1293 netif_wake_subqueue(bp->dev, queue_index); 1295 - spin_unlock(&queue->tx_ptr_lock); 1294 + spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); 1296 1295 1297 1296 return packets; 1298 1297 } ··· 1708 1707 { 1709 1708 struct macb *bp = queue->bp; 1710 1709 unsigned int head_idx, tbqp; 1710 + unsigned long flags; 1711 1711 1712 - spin_lock(&queue->tx_ptr_lock); 1712 + spin_lock_irqsave(&queue->tx_ptr_lock, flags); 1713 1713 1714 1714 if (queue->tx_head == queue->tx_tail) 1715 1715 goto out_tx_ptr_unlock; ··· 1722 1720 if (tbqp == head_idx) 1723 1721 goto out_tx_ptr_unlock; 1724 1722 1725 - spin_lock_irq(&bp->lock); 1723 + spin_lock(&bp->lock); 1726 1724 macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART)); 1727 - spin_unlock_irq(&bp->lock); 1725 + spin_unlock(&bp->lock); 1728 1726 1729 1727 out_tx_ptr_unlock: 1730 - spin_unlock(&queue->tx_ptr_lock); 1728 + spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); 1731 1729 } 1732 1730 1733 1731 static bool macb_tx_complete_pending(struct macb_queue *queue) 1734 1732 { 1735 1733 bool retval = false; 1734 + unsigned long flags; 1736 1735 1737 - spin_lock(&queue->tx_ptr_lock); 1736 + spin_lock_irqsave(&queue->tx_ptr_lock, flags); 1738 1737 if (queue->tx_head != queue->tx_tail) { 1739 1738 /* Make hw descriptor updates visible to CPU */ 1740 1739 rmb(); ··· 1743 1740 if (macb_tx_desc(queue, 
queue->tx_tail)->ctrl & MACB_BIT(TX_USED)) 1744 1741 retval = true; 1745 1742 } 1746 - spin_unlock(&queue->tx_ptr_lock); 1743 + spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); 1747 1744 return retval; 1748 1745 } 1749 1746 ··· 2311 2308 struct macb_queue *queue = &bp->queues[queue_index]; 2312 2309 unsigned int desc_cnt, nr_frags, frag_size, f; 2313 2310 unsigned int hdrlen; 2311 + unsigned long flags; 2314 2312 bool is_lso; 2315 2313 netdev_tx_t ret = NETDEV_TX_OK; 2316 2314 ··· 2372 2368 desc_cnt += DIV_ROUND_UP(frag_size, bp->max_tx_length); 2373 2369 } 2374 2370 2375 - spin_lock_bh(&queue->tx_ptr_lock); 2371 + spin_lock_irqsave(&queue->tx_ptr_lock, flags); 2376 2372 2377 2373 /* This is a hard error, log it. */ 2378 2374 if (CIRC_SPACE(queue->tx_head, queue->tx_tail, ··· 2396 2392 netdev_tx_sent_queue(netdev_get_tx_queue(bp->dev, queue_index), 2397 2393 skb->len); 2398 2394 2399 - spin_lock_irq(&bp->lock); 2395 + spin_lock(&bp->lock); 2400 2396 macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART)); 2401 - spin_unlock_irq(&bp->lock); 2397 + spin_unlock(&bp->lock); 2402 2398 2403 2399 if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1) 2404 2400 netif_stop_subqueue(dev, queue_index); 2405 2401 2406 2402 unlock: 2407 - spin_unlock_bh(&queue->tx_ptr_lock); 2403 + spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); 2408 2404 2409 2405 return ret; 2410 2406 }
+12 -8
drivers/net/ethernet/cavium/thunder/thunder_bgx.c
··· 1493 1493 * this cortina phy, for which there is no driver 1494 1494 * support, ignore it. 1495 1495 */ 1496 - if (phy_np && 1497 - !of_device_is_compatible(phy_np, "cortina,cs4223-slice")) { 1498 - /* Wait until the phy drivers are available */ 1499 - pd = of_phy_find_device(phy_np); 1500 - if (!pd) 1501 - goto defer; 1502 - bgx->lmac[lmac].phydev = pd; 1496 + if (phy_np) { 1497 + if (!of_device_is_compatible(phy_np, "cortina,cs4223-slice")) { 1498 + /* Wait until the phy drivers are available */ 1499 + pd = of_phy_find_device(phy_np); 1500 + if (!pd) { 1501 + of_node_put(phy_np); 1502 + goto defer; 1503 + } 1504 + bgx->lmac[lmac].phydev = pd; 1505 + } 1506 + of_node_put(phy_np); 1503 1507 } 1504 1508 1505 1509 lmac++; ··· 1519 1515 * for phy devices we may have already found. 1520 1516 */ 1521 1517 while (lmac) { 1518 + lmac--; 1522 1519 if (bgx->lmac[lmac].phydev) { 1523 1520 put_device(&bgx->lmac[lmac].phydev->mdio.dev); 1524 1521 bgx->lmac[lmac].phydev = NULL; 1525 1522 } 1526 - lmac--; 1527 1523 } 1528 1524 of_node_put(node); 1529 1525 return -EPROBE_DEFER;
+20
drivers/net/ethernet/dlink/Kconfig
··· 32 32 To compile this driver as a module, choose M here: the 33 33 module will be called dl2k. 34 34 35 + config SUNDANCE 36 + tristate "Sundance Alta support" 37 + depends on PCI 38 + select CRC32 39 + select MII 40 + help 41 + This driver is for the Sundance "Alta" chip. 42 + More specific information and updates are available from 43 + <http://www.scyld.com/network/sundance.html>. 44 + 45 + config SUNDANCE_MMIO 46 + bool "Use MMIO instead of PIO" 47 + depends on SUNDANCE 48 + help 49 + Enable memory-mapped I/O for interaction with Sundance NIC registers. 50 + Do NOT enable this by default, PIO (enabled when MMIO is disabled) 51 + is known to solve bugs on certain chips. 52 + 53 + If unsure, say N. 54 + 35 55 endif # NET_VENDOR_DLINK
+1
drivers/net/ethernet/dlink/Makefile
··· 4 4 # 5 5 6 6 obj-$(CONFIG_DL2K) += dl2k.o 7 + obj-$(CONFIG_SUNDANCE) += sundance.o
+1990
drivers/net/ethernet/dlink/sundance.c
··· 1 + /* sundance.c: A Linux device driver for the Sundance ST201 "Alta". */ 2 + /* 3 + Written 1999-2000 by Donald Becker. 4 + 5 + This software may be used and distributed according to the terms of 6 + the GNU General Public License (GPL), incorporated herein by reference. 7 + Drivers based on or derived from this code fall under the GPL and must 8 + retain the authorship, copyright and license notice. This file is not 9 + a complete program and may only be used when the entire operating 10 + system is licensed under the GPL. 11 + 12 + The author may be reached as becker@scyld.com, or C/O 13 + Scyld Computing Corporation 14 + 410 Severn Ave., Suite 210 15 + Annapolis MD 21403 16 + 17 + Support and updates available at 18 + http://www.scyld.com/network/sundance.html 19 + [link no longer provides useful info -jgarzik] 20 + Archives of the mailing list are still available at 21 + https://www.beowulf.org/pipermail/netdrivers/ 22 + 23 + */ 24 + 25 + #define DRV_NAME "sundance" 26 + 27 + /* The user-configurable values. 28 + These may be modified when a driver module is loaded.*/ 29 + static int debug = 1; /* 1 normal messages, 0 quiet .. 7 verbose. */ 30 + /* Maximum number of multicast addresses to filter (vs. rx-all-multicast). 31 + Typical is a 64 element hash table based on the Ethernet CRC. */ 32 + static const int multicast_filter_limit = 32; 33 + 34 + /* Set the copy breakpoint for the copy-only-tiny-frames scheme. 35 + Setting to > 1518 effectively disables this feature. 36 + This chip can receive into offset buffers, so the Alpha does not 37 + need a copy-align. */ 38 + static int rx_copybreak; 39 + static int flowctrl=1; 40 + 41 + /* media[] specifies the media type the NIC operates at. 42 + autosense Autosensing active media. 43 + 10mbps_hd 10Mbps half duplex. 44 + 10mbps_fd 10Mbps full duplex. 45 + 100mbps_hd 100Mbps half duplex. 46 + 100mbps_fd 100Mbps full duplex. 47 + 0 Autosensing active media. 48 + 1 10Mbps half duplex. 49 + 2 10Mbps full duplex. 
50 + 3 100Mbps half duplex. 51 + 4 100Mbps full duplex. 52 + */ 53 + #define MAX_UNITS 8 54 + static char *media[MAX_UNITS]; 55 + 56 + 57 + /* Operational parameters that are set at compile time. */ 58 + 59 + /* Keep the ring sizes a power of two for compile efficiency. 60 + The compiler will convert <unsigned>'%'<2^N> into a bit mask. 61 + Making the Tx ring too large decreases the effectiveness of channel 62 + bonding and packet priority, and more than 128 requires modifying the 63 + Tx error recovery. 64 + Large receive rings merely waste memory. */ 65 + #define TX_RING_SIZE 32 66 + #define TX_QUEUE_LEN (TX_RING_SIZE - 1) /* Limit ring entries actually used. */ 67 + #define RX_RING_SIZE 64 68 + #define RX_BUDGET 32 69 + #define TX_TOTAL_SIZE TX_RING_SIZE*sizeof(struct netdev_desc) 70 + #define RX_TOTAL_SIZE RX_RING_SIZE*sizeof(struct netdev_desc) 71 + 72 + /* Operational parameters that usually are not changed. */ 73 + /* Time in jiffies before concluding the transmitter is hung. */ 74 + #define TX_TIMEOUT (4*HZ) 75 + #define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/ 76 + 77 + /* Include files, designed to support most kernel versions 2.0.0 and later. */ 78 + #include <linux/module.h> 79 + #include <linux/kernel.h> 80 + #include <linux/string.h> 81 + #include <linux/timer.h> 82 + #include <linux/errno.h> 83 + #include <linux/ioport.h> 84 + #include <linux/interrupt.h> 85 + #include <linux/pci.h> 86 + #include <linux/netdevice.h> 87 + #include <linux/etherdevice.h> 88 + #include <linux/skbuff.h> 89 + #include <linux/init.h> 90 + #include <linux/bitops.h> 91 + #include <linux/uaccess.h> 92 + #include <asm/processor.h> /* Processor type for cache alignment. 
*/ 93 + #include <asm/io.h> 94 + #include <linux/delay.h> 95 + #include <linux/spinlock.h> 96 + #include <linux/dma-mapping.h> 97 + #include <linux/crc32.h> 98 + #include <linux/ethtool.h> 99 + #include <linux/mii.h> 100 + 101 + MODULE_AUTHOR("Donald Becker <becker@scyld.com>"); 102 + MODULE_DESCRIPTION("Sundance Alta Ethernet driver"); 103 + MODULE_LICENSE("GPL"); 104 + 105 + module_param(debug, int, 0); 106 + module_param(rx_copybreak, int, 0); 107 + module_param_array(media, charp, NULL, 0); 108 + module_param(flowctrl, int, 0); 109 + MODULE_PARM_DESC(debug, "Sundance Alta debug level (0-5)"); 110 + MODULE_PARM_DESC(rx_copybreak, "Sundance Alta copy breakpoint for copy-only-tiny-frames"); 111 + MODULE_PARM_DESC(flowctrl, "Sundance Alta flow control [0|1]"); 112 + 113 + /* 114 + Theory of Operation 115 + 116 + I. Board Compatibility 117 + 118 + This driver is designed for the Sundance Technologies "Alta" ST201 chip. 119 + 120 + II. Board-specific settings 121 + 122 + III. Driver operation 123 + 124 + IIIa. Ring buffers 125 + 126 + This driver uses two statically allocated fixed-size descriptor lists 127 + formed into rings by a branch from the final descriptor to the beginning of 128 + the list. The ring sizes are set at compile time by RX/TX_RING_SIZE. 129 + Some chips explicitly use only 2^N sized rings, while others use a 130 + 'next descriptor' pointer that the driver forms into rings. 131 + 132 + IIIb/c. Transmit/Receive Structure 133 + 134 + This driver uses a zero-copy receive and transmit scheme. 135 + The driver allocates full frame size skbuffs for the Rx ring buffers at 136 + open() time and passes the skb->data field to the chip as receive data 137 + buffers. When an incoming frame is less than RX_COPYBREAK bytes long, 138 + a fresh skbuff is allocated and the frame is copied to the new skbuff. 139 + When the incoming frame is larger, the skbuff is passed directly up the 140 + protocol stack. 
Buffers consumed this way are replaced by newly allocated 141 + skbuffs in a later phase of receives. 142 + 143 + The RX_COPYBREAK value is chosen to trade-off the memory wasted by 144 + using a full-sized skbuff for small frames vs. the copying costs of larger 145 + frames. New boards are typically used in generously configured machines 146 + and the underfilled buffers have negligible impact compared to the benefit of 147 + a single allocation size, so the default value of zero results in never 148 + copying packets. When copying is done, the cost is usually mitigated by using 149 + a combined copy/checksum routine. Copying also preloads the cache, which is 150 + most useful with small frames. 151 + 152 + A subtle aspect of the operation is that the IP header at offset 14 in an 153 + ethernet frame isn't longword aligned for further processing. 154 + Unaligned buffers are permitted by the Sundance hardware, so 155 + frames are received into the skbuff at an offset of "+2", 16-byte aligning 156 + the IP header. 157 + 158 + IIId. Synchronization 159 + 160 + The driver runs as two independent, single-threaded flows of control. One 161 + is the send-packet routine, which enforces single-threaded use by the 162 + dev->tbusy flag. The other thread is the interrupt handler, which is single 163 + threaded by the hardware and interrupt handling software. 164 + 165 + The send packet thread has partial control over the Tx ring and 'dev->tbusy' 166 + flag. It sets the tbusy flag whenever it's queuing a Tx packet. If the next 167 + queue slot is empty, it clears the tbusy flag when finished otherwise it sets 168 + the 'lp->tx_full' flag. 169 + 170 + The interrupt handler has exclusive control over the Rx ring and records stats 171 + from the Tx ring. After reaping the stats, it marks the Tx queue entry as 172 + empty by incrementing the dirty_tx mark. Iff the 'lp->tx_full' flag is set, it 173 + clears both the tx_full and tbusy flags. 174 + 175 + IV. Notes 176 + 177 + IVb. 
References 178 + 179 + The Sundance ST201 datasheet, preliminary version. 180 + The Kendin KS8723 datasheet, preliminary version. 181 + The ICplus IP100 datasheet, preliminary version. 182 + http://www.scyld.com/expert/100mbps.html 183 + http://www.scyld.com/expert/NWay.html 184 + 185 + IVc. Errata 186 + 187 + */ 188 + 189 + /* Work-around for Kendin chip bugs. */ 190 + #ifndef CONFIG_SUNDANCE_MMIO 191 + #define USE_IO_OPS 1 192 + #endif 193 + 194 + static const struct pci_device_id sundance_pci_tbl[] = { 195 + { 0x1186, 0x1002, 0x1186, 0x1002, 0, 0, 0 }, 196 + { 0x1186, 0x1002, 0x1186, 0x1003, 0, 0, 1 }, 197 + { 0x1186, 0x1002, 0x1186, 0x1012, 0, 0, 2 }, 198 + { 0x1186, 0x1002, 0x1186, 0x1040, 0, 0, 3 }, 199 + { 0x1186, 0x1002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 4 }, 200 + { 0x13F0, 0x0201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5 }, 201 + { 0x13F0, 0x0200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 6 }, 202 + { } 203 + }; 204 + MODULE_DEVICE_TABLE(pci, sundance_pci_tbl); 205 + 206 + enum { 207 + netdev_io_size = 128 208 + }; 209 + 210 + struct pci_id_info { 211 + const char *name; 212 + }; 213 + static const struct pci_id_info pci_id_tbl[] = { 214 + {"D-Link DFE-550TX FAST Ethernet Adapter"}, 215 + {"D-Link DFE-550FX 100Mbps Fiber-optics Adapter"}, 216 + {"D-Link DFE-580TX 4 port Server Adapter"}, 217 + {"D-Link DFE-530TXS FAST Ethernet Adapter"}, 218 + {"D-Link DL10050-based FAST Ethernet Adapter"}, 219 + {"Sundance Technology Alta"}, 220 + {"IC Plus Corporation IP100A FAST Ethernet Adapter"}, 221 + { } /* terminate list. */ 222 + }; 223 + 224 + /* This driver was written to use PCI memory space, however x86-oriented 225 + hardware often uses I/O space accesses. */ 226 + 227 + /* Offsets to the device registers. 228 + Unlike software-only systems, device drivers interact with complex hardware. 229 + It's not useful to define symbolic names for every register bit in the 230 + device. 
The name can only partially document the semantics and make 231 + the driver longer and more difficult to read. 232 + In general, only the important configuration values or bits changed 233 + multiple times should be defined symbolically. 234 + */ 235 + enum alta_offsets { 236 + DMACtrl = 0x00, 237 + TxListPtr = 0x04, 238 + TxDMABurstThresh = 0x08, 239 + TxDMAUrgentThresh = 0x09, 240 + TxDMAPollPeriod = 0x0a, 241 + RxDMAStatus = 0x0c, 242 + RxListPtr = 0x10, 243 + DebugCtrl0 = 0x1a, 244 + DebugCtrl1 = 0x1c, 245 + RxDMABurstThresh = 0x14, 246 + RxDMAUrgentThresh = 0x15, 247 + RxDMAPollPeriod = 0x16, 248 + LEDCtrl = 0x1a, 249 + ASICCtrl = 0x30, 250 + EEData = 0x34, 251 + EECtrl = 0x36, 252 + FlashAddr = 0x40, 253 + FlashData = 0x44, 254 + WakeEvent = 0x45, 255 + TxStatus = 0x46, 256 + TxFrameId = 0x47, 257 + DownCounter = 0x18, 258 + IntrClear = 0x4a, 259 + IntrEnable = 0x4c, 260 + IntrStatus = 0x4e, 261 + MACCtrl0 = 0x50, 262 + MACCtrl1 = 0x52, 263 + StationAddr = 0x54, 264 + MaxFrameSize = 0x5A, 265 + RxMode = 0x5c, 266 + MIICtrl = 0x5e, 267 + MulticastFilter0 = 0x60, 268 + MulticastFilter1 = 0x64, 269 + RxOctetsLow = 0x68, 270 + RxOctetsHigh = 0x6a, 271 + TxOctetsLow = 0x6c, 272 + TxOctetsHigh = 0x6e, 273 + TxFramesOK = 0x70, 274 + RxFramesOK = 0x72, 275 + StatsCarrierError = 0x74, 276 + StatsLateColl = 0x75, 277 + StatsMultiColl = 0x76, 278 + StatsOneColl = 0x77, 279 + StatsTxDefer = 0x78, 280 + RxMissed = 0x79, 281 + StatsTxXSDefer = 0x7a, 282 + StatsTxAbort = 0x7b, 283 + StatsBcastTx = 0x7c, 284 + StatsBcastRx = 0x7d, 285 + StatsMcastTx = 0x7e, 286 + StatsMcastRx = 0x7f, 287 + /* Aliased and bogus values! 
*/ 288 + RxStatus = 0x0c, 289 + }; 290 + 291 + #define ASIC_HI_WORD(x) ((x) + 2) 292 + 293 + enum ASICCtrl_HiWord_bit { 294 + GlobalReset = 0x0001, 295 + RxReset = 0x0002, 296 + TxReset = 0x0004, 297 + DMAReset = 0x0008, 298 + FIFOReset = 0x0010, 299 + NetworkReset = 0x0020, 300 + HostReset = 0x0040, 301 + ResetBusy = 0x0400, 302 + }; 303 + 304 + /* Bits in the interrupt status/mask registers. */ 305 + enum intr_status_bits { 306 + IntrSummary=0x0001, IntrPCIErr=0x0002, IntrMACCtrl=0x0008, 307 + IntrTxDone=0x0004, IntrRxDone=0x0010, IntrRxStart=0x0020, 308 + IntrDrvRqst=0x0040, 309 + StatsMax=0x0080, LinkChange=0x0100, 310 + IntrTxDMADone=0x0200, IntrRxDMADone=0x0400, 311 + }; 312 + 313 + /* Bits in the RxMode register. */ 314 + enum rx_mode_bits { 315 + AcceptAllIPMulti=0x20, AcceptMultiHash=0x10, AcceptAll=0x08, 316 + AcceptBroadcast=0x04, AcceptMulticast=0x02, AcceptMyPhys=0x01, 317 + }; 318 + /* Bits in MACCtrl. */ 319 + enum mac_ctrl0_bits { 320 + EnbFullDuplex=0x20, EnbRcvLargeFrame=0x40, 321 + EnbFlowCtrl=0x100, EnbPassRxCRC=0x200, 322 + }; 323 + enum mac_ctrl1_bits { 324 + StatsEnable=0x0020, StatsDisable=0x0040, StatsEnabled=0x0080, 325 + TxEnable=0x0100, TxDisable=0x0200, TxEnabled=0x0400, 326 + RxEnable=0x0800, RxDisable=0x1000, RxEnabled=0x2000, 327 + }; 328 + 329 + /* Bits in WakeEvent register. */ 330 + enum wake_event_bits { 331 + WakePktEnable = 0x01, 332 + MagicPktEnable = 0x02, 333 + LinkEventEnable = 0x04, 334 + WolEnable = 0x80, 335 + }; 336 + 337 + /* The Rx and Tx buffer descriptors. */ 338 + /* Note that using only 32 bit fields simplifies conversion to big-endian 339 + architectures. 
*/ 340 + struct netdev_desc { 341 + __le32 next_desc; 342 + __le32 status; 343 + struct desc_frag { __le32 addr, length; } frag; 344 + }; 345 + 346 + /* Bits in netdev_desc.status */ 347 + enum desc_status_bits { 348 + DescOwn=0x8000, 349 + DescEndPacket=0x4000, 350 + DescEndRing=0x2000, 351 + LastFrag=0x80000000, 352 + DescIntrOnTx=0x8000, 353 + DescIntrOnDMADone=0x80000000, 354 + DisableAlign = 0x00000001, 355 + }; 356 + 357 + #define PRIV_ALIGN 15 /* Required alignment mask */ 358 + /* Use __attribute__((aligned (L1_CACHE_BYTES))) to maintain alignment 359 + within the structure. */ 360 + #define MII_CNT 4 361 + struct netdev_private { 362 + /* Descriptor rings first for alignment. */ 363 + struct netdev_desc *rx_ring; 364 + struct netdev_desc *tx_ring; 365 + struct sk_buff* rx_skbuff[RX_RING_SIZE]; 366 + struct sk_buff* tx_skbuff[TX_RING_SIZE]; 367 + dma_addr_t tx_ring_dma; 368 + dma_addr_t rx_ring_dma; 369 + struct timer_list timer; /* Media monitoring timer. */ 370 + struct net_device *ndev; /* backpointer */ 371 + /* ethtool extra stats */ 372 + struct { 373 + u64 tx_multiple_collisions; 374 + u64 tx_single_collisions; 375 + u64 tx_late_collisions; 376 + u64 tx_deferred; 377 + u64 tx_deferred_excessive; 378 + u64 tx_aborted; 379 + u64 tx_bcasts; 380 + u64 rx_bcasts; 381 + u64 tx_mcasts; 382 + u64 rx_mcasts; 383 + } xstats; 384 + /* Frequently used values: keep some adjacent for cache effect. */ 385 + spinlock_t lock; 386 + int msg_enable; 387 + int chip_id; 388 + unsigned int cur_rx, dirty_rx; /* Producer/consumer ring indices */ 389 + unsigned int rx_buf_sz; /* Based on MTU+slack. */ 390 + struct netdev_desc *last_tx; /* Last Tx descriptor used. */ 391 + unsigned int cur_tx, dirty_tx; 392 + /* These values are keep track of the transceiver/media in use. */ 393 + unsigned int flowctrl:1; 394 + unsigned int default_port:4; /* Last dev->if_port value. 
*/ 395 + unsigned int an_enable:1; 396 + unsigned int speed; 397 + unsigned int wol_enabled:1; /* Wake on LAN enabled */ 398 + struct tasklet_struct rx_tasklet; 399 + struct tasklet_struct tx_tasklet; 400 + int budget; 401 + int cur_task; 402 + /* Multicast and receive mode. */ 403 + spinlock_t mcastlock; /* SMP lock multicast updates. */ 404 + u16 mcast_filter[4]; 405 + /* MII transceiver section. */ 406 + struct mii_if_info mii_if; 407 + int mii_preamble_required; 408 + unsigned char phys[MII_CNT]; /* MII device addresses, only first one used. */ 409 + struct pci_dev *pci_dev; 410 + void __iomem *base; 411 + spinlock_t statlock; 412 + }; 413 + 414 + /* The station address location in the EEPROM. */ 415 + #define EEPROM_SA_OFFSET 0x10 416 + #define DEFAULT_INTR (IntrRxDMADone | IntrPCIErr | \ 417 + IntrDrvRqst | IntrTxDone | StatsMax | \ 418 + LinkChange) 419 + 420 + static int change_mtu(struct net_device *dev, int new_mtu); 421 + static int eeprom_read(void __iomem *ioaddr, int location); 422 + static int mdio_read(struct net_device *dev, int phy_id, int location); 423 + static void mdio_write(struct net_device *dev, int phy_id, int location, int value); 424 + static int mdio_wait_link(struct net_device *dev, int wait); 425 + static int netdev_open(struct net_device *dev); 426 + static void check_duplex(struct net_device *dev); 427 + static void netdev_timer(struct timer_list *t); 428 + static void tx_timeout(struct net_device *dev, unsigned int txqueue); 429 + static void init_ring(struct net_device *dev); 430 + static netdev_tx_t start_tx(struct sk_buff *skb, struct net_device *dev); 431 + static int reset_tx (struct net_device *dev); 432 + static irqreturn_t intr_handler(int irq, void *dev_instance); 433 + static void rx_poll(struct tasklet_struct *t); 434 + static void tx_poll(struct tasklet_struct *t); 435 + static void refill_rx (struct net_device *dev); 436 + static void netdev_error(struct net_device *dev, int intr_status); 437 + static void 
netdev_error(struct net_device *dev, int intr_status); 438 + static void set_rx_mode(struct net_device *dev); 439 + static int __set_mac_addr(struct net_device *dev); 440 + static int sundance_set_mac_addr(struct net_device *dev, void *data); 441 + static struct net_device_stats *get_stats(struct net_device *dev); 442 + static int netdev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); 443 + static int netdev_close(struct net_device *dev); 444 + static const struct ethtool_ops ethtool_ops; 445 + 446 + static void sundance_reset(struct net_device *dev, unsigned long reset_cmd) 447 + { 448 + struct netdev_private *np = netdev_priv(dev); 449 + void __iomem *ioaddr = np->base + ASICCtrl; 450 + int countdown; 451 + 452 + /* ST201 documentation states ASICCtrl is a 32bit register */ 453 + iowrite32 (reset_cmd | ioread32 (ioaddr), ioaddr); 454 + /* ST201 documentation states reset can take up to 1 ms */ 455 + countdown = 10 + 1; 456 + while (ioread32 (ioaddr) & (ResetBusy << 16)) { 457 + if (--countdown == 0) { 458 + printk(KERN_WARNING "%s : reset not completed !!\n", dev->name); 459 + break; 460 + } 461 + udelay(100); 462 + } 463 + } 464 + 465 + #ifdef CONFIG_NET_POLL_CONTROLLER 466 + static void sundance_poll_controller(struct net_device *dev) 467 + { 468 + struct netdev_private *np = netdev_priv(dev); 469 + 470 + disable_irq(np->pci_dev->irq); 471 + intr_handler(np->pci_dev->irq, dev); 472 + enable_irq(np->pci_dev->irq); 473 + } 474 + #endif 475 + 476 + static const struct net_device_ops netdev_ops = { 477 + .ndo_open = netdev_open, 478 + .ndo_stop = netdev_close, 479 + .ndo_start_xmit = start_tx, 480 + .ndo_get_stats = get_stats, 481 + .ndo_set_rx_mode = set_rx_mode, 482 + .ndo_eth_ioctl = netdev_ioctl, 483 + .ndo_tx_timeout = tx_timeout, 484 + .ndo_change_mtu = change_mtu, 485 + .ndo_set_mac_address = sundance_set_mac_addr, 486 + .ndo_validate_addr = eth_validate_addr, 487 + #ifdef CONFIG_NET_POLL_CONTROLLER 488 + .ndo_poll_controller = 
sundance_poll_controller, 489 + #endif 490 + }; 491 + 492 + static int sundance_probe1(struct pci_dev *pdev, 493 + const struct pci_device_id *ent) 494 + { 495 + struct net_device *dev; 496 + struct netdev_private *np; 497 + static int card_idx; 498 + int chip_idx = ent->driver_data; 499 + int irq; 500 + int i; 501 + void __iomem *ioaddr; 502 + u16 mii_ctl; 503 + void *ring_space; 504 + dma_addr_t ring_dma; 505 + #ifdef USE_IO_OPS 506 + int bar = 0; 507 + #else 508 + int bar = 1; 509 + #endif 510 + int phy, phy_end, phy_idx = 0; 511 + __le16 addr[ETH_ALEN / 2]; 512 + 513 + if (pci_enable_device(pdev)) 514 + return -EIO; 515 + pci_set_master(pdev); 516 + 517 + irq = pdev->irq; 518 + 519 + dev = alloc_etherdev(sizeof(*np)); 520 + if (!dev) 521 + return -ENOMEM; 522 + SET_NETDEV_DEV(dev, &pdev->dev); 523 + 524 + if (pci_request_regions(pdev, DRV_NAME)) 525 + goto err_out_netdev; 526 + 527 + ioaddr = pci_iomap(pdev, bar, netdev_io_size); 528 + if (!ioaddr) 529 + goto err_out_res; 530 + 531 + for (i = 0; i < 3; i++) 532 + addr[i] = 533 + cpu_to_le16(eeprom_read(ioaddr, i + EEPROM_SA_OFFSET)); 534 + eth_hw_addr_set(dev, (u8 *)addr); 535 + 536 + np = netdev_priv(dev); 537 + np->ndev = dev; 538 + np->base = ioaddr; 539 + np->pci_dev = pdev; 540 + np->chip_id = chip_idx; 541 + np->msg_enable = (1 << debug) - 1; 542 + spin_lock_init(&np->lock); 543 + spin_lock_init(&np->statlock); 544 + tasklet_setup(&np->rx_tasklet, rx_poll); 545 + tasklet_setup(&np->tx_tasklet, tx_poll); 546 + 547 + ring_space = dma_alloc_coherent(&pdev->dev, TX_TOTAL_SIZE, 548 + &ring_dma, GFP_KERNEL); 549 + if (!ring_space) 550 + goto err_out_cleardev; 551 + np->tx_ring = (struct netdev_desc *)ring_space; 552 + np->tx_ring_dma = ring_dma; 553 + 554 + ring_space = dma_alloc_coherent(&pdev->dev, RX_TOTAL_SIZE, 555 + &ring_dma, GFP_KERNEL); 556 + if (!ring_space) 557 + goto err_out_unmap_tx; 558 + np->rx_ring = (struct netdev_desc *)ring_space; 559 + np->rx_ring_dma = ring_dma; 560 + 561 + np->mii_if.dev = 
dev; 562 + np->mii_if.mdio_read = mdio_read; 563 + np->mii_if.mdio_write = mdio_write; 564 + np->mii_if.phy_id_mask = 0x1f; 565 + np->mii_if.reg_num_mask = 0x1f; 566 + 567 + /* The chip-specific entries in the device structure. */ 568 + dev->netdev_ops = &netdev_ops; 569 + dev->ethtool_ops = &ethtool_ops; 570 + dev->watchdog_timeo = TX_TIMEOUT; 571 + 572 + /* MTU range: 68 - 8191 */ 573 + dev->min_mtu = ETH_MIN_MTU; 574 + dev->max_mtu = 8191; 575 + 576 + pci_set_drvdata(pdev, dev); 577 + 578 + i = register_netdev(dev); 579 + if (i) 580 + goto err_out_unmap_rx; 581 + 582 + printk(KERN_INFO "%s: %s at %p, %pM, IRQ %d.\n", 583 + dev->name, pci_id_tbl[chip_idx].name, ioaddr, 584 + dev->dev_addr, irq); 585 + 586 + np->phys[0] = 1; /* Default setting */ 587 + np->mii_preamble_required++; 588 + 589 + /* 590 + * It seems some phys doesn't deal well with address 0 being accessed 591 + * first 592 + */ 593 + if (sundance_pci_tbl[np->chip_id].device == 0x0200) { 594 + phy = 0; 595 + phy_end = 31; 596 + } else { 597 + phy = 1; 598 + phy_end = 32; /* wraps to zero, due to 'phy & 0x1f' */ 599 + } 600 + for (; phy <= phy_end && phy_idx < MII_CNT; phy++) { 601 + int phyx = phy & 0x1f; 602 + int mii_status = mdio_read(dev, phyx, MII_BMSR); 603 + if (mii_status != 0xffff && mii_status != 0x0000) { 604 + np->phys[phy_idx++] = phyx; 605 + np->mii_if.advertising = mdio_read(dev, phyx, MII_ADVERTISE); 606 + if ((mii_status & 0x0040) == 0) 607 + np->mii_preamble_required++; 608 + printk(KERN_INFO "%s: MII PHY found at address %d, status " 609 + "0x%4.4x advertising %4.4x.\n", 610 + dev->name, phyx, mii_status, np->mii_if.advertising); 611 + } 612 + } 613 + np->mii_preamble_required--; 614 + 615 + if (phy_idx == 0) { 616 + printk(KERN_INFO "%s: No MII transceiver found, aborting. 
ASIC status %x\n", 617 + dev->name, ioread32(ioaddr + ASICCtrl)); 618 + goto err_out_unregister; 619 + } 620 + 621 + np->mii_if.phy_id = np->phys[0]; 622 + 623 + /* Parse override configuration */ 624 + np->an_enable = 1; 625 + if (card_idx < MAX_UNITS) { 626 + if (media[card_idx] != NULL) { 627 + np->an_enable = 0; 628 + if (strcmp (media[card_idx], "100mbps_fd") == 0 || 629 + strcmp (media[card_idx], "4") == 0) { 630 + np->speed = 100; 631 + np->mii_if.full_duplex = 1; 632 + } else if (strcmp (media[card_idx], "100mbps_hd") == 0 || 633 + strcmp (media[card_idx], "3") == 0) { 634 + np->speed = 100; 635 + np->mii_if.full_duplex = 0; 636 + } else if (strcmp (media[card_idx], "10mbps_fd") == 0 || 637 + strcmp (media[card_idx], "2") == 0) { 638 + np->speed = 10; 639 + np->mii_if.full_duplex = 1; 640 + } else if (strcmp (media[card_idx], "10mbps_hd") == 0 || 641 + strcmp (media[card_idx], "1") == 0) { 642 + np->speed = 10; 643 + np->mii_if.full_duplex = 0; 644 + } else { 645 + np->an_enable = 1; 646 + } 647 + } 648 + if (flowctrl == 1) 649 + np->flowctrl = 1; 650 + } 651 + 652 + /* Fibre PHY? */ 653 + if (ioread32 (ioaddr + ASICCtrl) & 0x80) { 654 + /* Default 100Mbps Full */ 655 + if (np->an_enable) { 656 + np->speed = 100; 657 + np->mii_if.full_duplex = 1; 658 + np->an_enable = 0; 659 + } 660 + } 661 + /* Reset PHY */ 662 + mdio_write (dev, np->phys[0], MII_BMCR, BMCR_RESET); 663 + mdelay (300); 664 + /* If flow control enabled, we need to advertise it.*/ 665 + if (np->flowctrl) 666 + mdio_write (dev, np->phys[0], MII_ADVERTISE, np->mii_if.advertising | 0x0400); 667 + mdio_write (dev, np->phys[0], MII_BMCR, BMCR_ANENABLE|BMCR_ANRESTART); 668 + /* Force media type */ 669 + if (!np->an_enable) { 670 + mii_ctl = 0; 671 + mii_ctl |= (np->speed == 100) ? BMCR_SPEED100 : 0; 672 + mii_ctl |= (np->mii_if.full_duplex) ? 
BMCR_FULLDPLX : 0; 673 + mdio_write (dev, np->phys[0], MII_BMCR, mii_ctl); 674 + printk (KERN_INFO "Override speed=%d, %s duplex\n", 675 + np->speed, np->mii_if.full_duplex ? "Full" : "Half"); 676 + 677 + } 678 + 679 + /* Perhaps move the reset here? */ 680 + /* Reset the chip to erase previous misconfiguration. */ 681 + if (netif_msg_hw(np)) 682 + printk("ASIC Control is %x.\n", ioread32(ioaddr + ASICCtrl)); 683 + sundance_reset(dev, 0x00ff << 16); 684 + if (netif_msg_hw(np)) 685 + printk("ASIC Control is now %x.\n", ioread32(ioaddr + ASICCtrl)); 686 + 687 + card_idx++; 688 + return 0; 689 + 690 + err_out_unregister: 691 + unregister_netdev(dev); 692 + err_out_unmap_rx: 693 + dma_free_coherent(&pdev->dev, RX_TOTAL_SIZE, 694 + np->rx_ring, np->rx_ring_dma); 695 + err_out_unmap_tx: 696 + dma_free_coherent(&pdev->dev, TX_TOTAL_SIZE, 697 + np->tx_ring, np->tx_ring_dma); 698 + err_out_cleardev: 699 + pci_iounmap(pdev, ioaddr); 700 + err_out_res: 701 + pci_release_regions(pdev); 702 + err_out_netdev: 703 + free_netdev (dev); 704 + return -ENODEV; 705 + } 706 + 707 + static int change_mtu(struct net_device *dev, int new_mtu) 708 + { 709 + if (netif_running(dev)) 710 + return -EBUSY; 711 + WRITE_ONCE(dev->mtu, new_mtu); 712 + return 0; 713 + } 714 + 715 + #define eeprom_delay(ee_addr) ioread32(ee_addr) 716 + /* Read the EEPROM and MII Management Data I/O (MDIO) interfaces. */ 717 + static int eeprom_read(void __iomem *ioaddr, int location) 718 + { 719 + int boguscnt = 10000; /* Typical 1900 ticks. */ 720 + iowrite16(0x0200 | (location & 0xff), ioaddr + EECtrl); 721 + do { 722 + eeprom_delay(ioaddr + EECtrl); 723 + if (! (ioread16(ioaddr + EECtrl) & 0x8000)) { 724 + return ioread16(ioaddr + EEData); 725 + } 726 + } while (--boguscnt > 0); 727 + return 0; 728 + } 729 + 730 + /* MII transceiver control section. 731 + Read and write the MII registers using software-generated serial 732 + MDIO protocol. See the MII specifications or DP83840A data sheet 733 + for details. 
734 + 735 + The maximum data clock rate is 2.5 Mhz. The minimum timing is usually 736 + met by back-to-back 33Mhz PCI cycles. */ 737 + #define mdio_delay() ioread8(mdio_addr) 738 + 739 + enum mii_reg_bits { 740 + MDIO_ShiftClk=0x0001, MDIO_Data=0x0002, MDIO_EnbOutput=0x0004, 741 + }; 742 + #define MDIO_EnbIn (0) 743 + #define MDIO_WRITE0 (MDIO_EnbOutput) 744 + #define MDIO_WRITE1 (MDIO_Data | MDIO_EnbOutput) 745 + 746 + /* Generate the preamble required for initial synchronization and 747 + a few older transceivers. */ 748 + static void mdio_sync(void __iomem *mdio_addr) 749 + { 750 + int bits = 32; 751 + 752 + /* Establish sync by sending at least 32 logic ones. */ 753 + while (--bits >= 0) { 754 + iowrite8(MDIO_WRITE1, mdio_addr); 755 + mdio_delay(); 756 + iowrite8(MDIO_WRITE1 | MDIO_ShiftClk, mdio_addr); 757 + mdio_delay(); 758 + } 759 + } 760 + 761 + static int mdio_read(struct net_device *dev, int phy_id, int location) 762 + { 763 + struct netdev_private *np = netdev_priv(dev); 764 + void __iomem *mdio_addr = np->base + MIICtrl; 765 + int mii_cmd = (0xf6 << 10) | (phy_id << 5) | location; 766 + int i, retval = 0; 767 + 768 + if (np->mii_preamble_required) 769 + mdio_sync(mdio_addr); 770 + 771 + /* Shift the read command bits out. */ 772 + for (i = 15; i >= 0; i--) { 773 + int dataval = (mii_cmd & (1 << i)) ? MDIO_WRITE1 : MDIO_WRITE0; 774 + 775 + iowrite8(dataval, mdio_addr); 776 + mdio_delay(); 777 + iowrite8(dataval | MDIO_ShiftClk, mdio_addr); 778 + mdio_delay(); 779 + } 780 + /* Read the two transition, 16 data, and wire-idle bits. */ 781 + for (i = 19; i > 0; i--) { 782 + iowrite8(MDIO_EnbIn, mdio_addr); 783 + mdio_delay(); 784 + retval = (retval << 1) | ((ioread8(mdio_addr) & MDIO_Data) ? 
			1 : 0);
		iowrite8(MDIO_EnbIn | MDIO_ShiftClk, mdio_addr);
		mdio_delay();
	}
	return (retval >> 1) & 0xffff;
}

static void mdio_write(struct net_device *dev, int phy_id, int location, int value)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *mdio_addr = np->base + MIICtrl;
	int mii_cmd = (0x5002 << 16) | (phy_id << 23) | (location << 18) | value;
	int i;

	if (np->mii_preamble_required)
		mdio_sync(mdio_addr);

	/* Shift the command bits out. */
	for (i = 31; i >= 0; i--) {
		int dataval = (mii_cmd & (1 << i)) ? MDIO_WRITE1 : MDIO_WRITE0;

		iowrite8(dataval, mdio_addr);
		mdio_delay();
		iowrite8(dataval | MDIO_ShiftClk, mdio_addr);
		mdio_delay();
	}
	/* Clear out extra bits. */
	for (i = 2; i > 0; i--) {
		iowrite8(MDIO_EnbIn, mdio_addr);
		mdio_delay();
		iowrite8(MDIO_EnbIn | MDIO_ShiftClk, mdio_addr);
		mdio_delay();
	}
}

static int mdio_wait_link(struct net_device *dev, int wait)
{
	int bmsr;
	int phy_id;
	struct netdev_private *np;

	np = netdev_priv(dev);
	phy_id = np->phys[0];

	do {
		bmsr = mdio_read(dev, phy_id, MII_BMSR);
		if (bmsr & 0x0004)
			return 0;
		mdelay(1);
	} while (--wait > 0);
	return -1;
}

static int netdev_open(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	const int irq = np->pci_dev->irq;
	unsigned long flags;
	int i;

	sundance_reset(dev, 0x00ff << 16);

	i = request_irq(irq, intr_handler, IRQF_SHARED, dev->name, dev);
	if (i)
		return i;

	if (netif_msg_ifup(np))
		printk(KERN_DEBUG "%s: netdev_open() irq %d\n", dev->name, irq);

	init_ring(dev);

	iowrite32(np->rx_ring_dma, ioaddr + RxListPtr);
	/* The Tx list pointer is written as packets are queued. */

	/* Initialize other registers. */
	__set_mac_addr(dev);
#if IS_ENABLED(CONFIG_VLAN_8021Q)
	iowrite16(dev->mtu + 18, ioaddr + MaxFrameSize);
#else
	iowrite16(dev->mtu + 14, ioaddr + MaxFrameSize);
#endif
	if (dev->mtu > 2047)
		iowrite32(ioread32(ioaddr + ASICCtrl) | 0x0C, ioaddr + ASICCtrl);

	/* Configure the PCI bus bursts and FIFO thresholds. */

	if (dev->if_port == 0)
		dev->if_port = np->default_port;

	spin_lock_init(&np->mcastlock);

	set_rx_mode(dev);
	iowrite16(0, ioaddr + IntrEnable);
	iowrite16(0, ioaddr + DownCounter);
	/* Set the chip to poll every N*320nsec. */
	iowrite8(100, ioaddr + RxDMAPollPeriod);
	iowrite8(127, ioaddr + TxDMAPollPeriod);
	/* Fix DFE-580TX packet drop issue */
	if (np->pci_dev->revision >= 0x14)
		iowrite8(0x01, ioaddr + DebugCtrl1);
	netif_start_queue(dev);

	spin_lock_irqsave(&np->lock, flags);
	reset_tx(dev);
	spin_unlock_irqrestore(&np->lock, flags);

	iowrite16(StatsEnable | RxEnable | TxEnable, ioaddr + MACCtrl1);

	/* Disable WoL */
	iowrite8(ioread8(ioaddr + WakeEvent) | 0x00, ioaddr + WakeEvent);
	np->wol_enabled = 0;

	if (netif_msg_ifup(np))
		printk(KERN_DEBUG "%s: Done netdev_open(), status: Rx %x Tx %x "
		       "MAC Control %x, %4.4x %4.4x.\n",
		       dev->name, ioread32(ioaddr + RxStatus), ioread8(ioaddr + TxStatus),
		       ioread32(ioaddr + MACCtrl0),
		       ioread16(ioaddr + MACCtrl1), ioread16(ioaddr + MACCtrl0));

	/* Set the timer to check for link beat. */
	timer_setup(&np->timer, netdev_timer, 0);
	np->timer.expires = jiffies + 3*HZ;
	add_timer(&np->timer);

	/* Enable interrupts by setting the interrupt mask. */
	iowrite16(DEFAULT_INTR, ioaddr + IntrEnable);

	return 0;
}

static void check_duplex(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	int mii_lpa = mdio_read(dev, np->phys[0], MII_LPA);
	int negotiated = mii_lpa & np->mii_if.advertising;
	int duplex;

	/* Force media */
	if (!np->an_enable || mii_lpa == 0xffff) {
		if (np->mii_if.full_duplex)
			iowrite16(ioread16(ioaddr + MACCtrl0) | EnbFullDuplex,
				  ioaddr + MACCtrl0);
		return;
	}

	/* Autonegotiation */
	duplex = (negotiated & 0x0100) || (negotiated & 0x01C0) == 0x0040;
	if (np->mii_if.full_duplex != duplex) {
		np->mii_if.full_duplex = duplex;
		if (netif_msg_link(np))
			printk(KERN_INFO "%s: Setting %s-duplex based on MII #%d "
			       "negotiated capability %4.4x.\n", dev->name,
			       duplex ? "full" : "half", np->phys[0], negotiated);
		iowrite16(ioread16(ioaddr + MACCtrl0) | (duplex ? 0x20 : 0),
			  ioaddr + MACCtrl0);
	}
}

static void netdev_timer(struct timer_list *t)
{
	struct netdev_private *np = timer_container_of(np, t, timer);
	struct net_device *dev = np->mii_if.dev;
	void __iomem *ioaddr = np->base;
	int next_tick = 10*HZ;

	if (netif_msg_timer(np)) {
		printk(KERN_DEBUG "%s: Media selection timer tick, intr status %4.4x, "
		       "Tx %x Rx %x.\n",
		       dev->name, ioread16(ioaddr + IntrEnable),
		       ioread8(ioaddr + TxStatus), ioread32(ioaddr + RxStatus));
	}
	check_duplex(dev);
	np->timer.expires = jiffies + next_tick;
	add_timer(&np->timer);
}

static void tx_timeout(struct net_device *dev, unsigned int txqueue)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	unsigned long flag;

	netif_stop_queue(dev);
	tasklet_disable_in_atomic(&np->tx_tasklet);
	iowrite16(0, ioaddr + IntrEnable);
	printk(KERN_WARNING "%s: Transmit timed out, TxStatus %2.2x "
	       "TxFrameId %2.2x, resetting...\n",
	       dev->name, ioread8(ioaddr + TxStatus),
	       ioread8(ioaddr + TxFrameId));

	{
		int i;

		for (i = 0; i < TX_RING_SIZE; i++) {
			printk(KERN_DEBUG "%02x %08llx %08x %08x(%02x) %08x %08x\n", i,
			       (unsigned long long)(np->tx_ring_dma + i*sizeof(*np->tx_ring)),
			       le32_to_cpu(np->tx_ring[i].next_desc),
			       le32_to_cpu(np->tx_ring[i].status),
			       (le32_to_cpu(np->tx_ring[i].status) >> 2) & 0xff,
			       le32_to_cpu(np->tx_ring[i].frag.addr),
			       le32_to_cpu(np->tx_ring[i].frag.length));
		}
		printk(KERN_DEBUG "TxListPtr=%08x netif_queue_stopped=%d\n",
		       ioread32(np->base + TxListPtr),
		       netif_queue_stopped(dev));
		printk(KERN_DEBUG "cur_tx=%d(%02x) dirty_tx=%d(%02x)\n",
		       np->cur_tx, np->cur_tx % TX_RING_SIZE,
		       np->dirty_tx, np->dirty_tx % TX_RING_SIZE);
		printk(KERN_DEBUG "cur_rx=%d dirty_rx=%d\n", np->cur_rx, np->dirty_rx);
		printk(KERN_DEBUG "cur_task=%d\n", np->cur_task);
	}
	spin_lock_irqsave(&np->lock, flag);

	/* Stop and restart the chip's Tx processes. */
	reset_tx(dev);
	spin_unlock_irqrestore(&np->lock, flag);

	dev->if_port = 0;

	netif_trans_update(dev); /* prevent tx timeout */
	dev->stats.tx_errors++;
	if (np->cur_tx - np->dirty_tx < TX_QUEUE_LEN - 4)
		netif_wake_queue(dev);
	iowrite16(DEFAULT_INTR, ioaddr + IntrEnable);
	tasklet_enable(&np->tx_tasklet);
}

/* Initialize the Rx and Tx rings, along with various 'dev' bits. */
static void init_ring(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	int i;

	np->cur_rx = np->cur_tx = 0;
	np->dirty_rx = np->dirty_tx = 0;
	np->cur_task = 0;

	np->rx_buf_sz = (dev->mtu <= 1520 ? PKT_BUF_SZ : dev->mtu + 16);

	/* Initialize all Rx descriptors. */
	for (i = 0; i < RX_RING_SIZE; i++) {
		np->rx_ring[i].next_desc = cpu_to_le32(np->rx_ring_dma +
			((i + 1) % RX_RING_SIZE)*sizeof(*np->rx_ring));
		np->rx_ring[i].status = 0;
		np->rx_ring[i].frag.length = 0;
		np->rx_skbuff[i] = NULL;
	}

	/* Fill in the Rx buffers.  Handle allocation failure gracefully. */
	for (i = 0; i < RX_RING_SIZE; i++) {
		struct sk_buff *skb = netdev_alloc_skb(dev, np->rx_buf_sz + 2);
		dma_addr_t addr;

		np->rx_skbuff[i] = skb;
		if (skb == NULL)
			break;
		skb_reserve(skb, 2);	/* 16 byte align the IP header. */
		addr = dma_map_single(&np->pci_dev->dev, skb->data,
				      np->rx_buf_sz, DMA_FROM_DEVICE);
		if (dma_mapping_error(&np->pci_dev->dev, addr)) {
			dev_kfree_skb(skb);
			np->rx_skbuff[i] = NULL;
			break;
		}
		np->rx_ring[i].frag.addr = cpu_to_le32(addr);
		np->rx_ring[i].frag.length = cpu_to_le32(np->rx_buf_sz | LastFrag);
	}
	np->dirty_rx = (unsigned int)(i - RX_RING_SIZE);

	for (i = 0; i < TX_RING_SIZE; i++) {
		np->tx_skbuff[i] = NULL;
		np->tx_ring[i].status = 0;
	}
}

static void tx_poll(struct tasklet_struct *t)
{
	struct netdev_private *np = from_tasklet(np, t, tx_tasklet);
	unsigned head = np->cur_task % TX_RING_SIZE;
	struct netdev_desc *txdesc =
		&np->tx_ring[(np->cur_tx - 1) % TX_RING_SIZE];

	/* Chain the next pointer */
	for (; np->cur_tx - np->cur_task > 0; np->cur_task++) {
		int entry = np->cur_task % TX_RING_SIZE;

		txdesc = &np->tx_ring[entry];
		if (np->last_tx) {
			np->last_tx->next_desc = cpu_to_le32(np->tx_ring_dma +
				entry*sizeof(struct netdev_desc));
		}
		np->last_tx = txdesc;
	}
	/* Indicate the latest descriptor of tx ring */
	txdesc->status |= cpu_to_le32(DescIntrOnTx);

	if (ioread32(np->base + TxListPtr) == 0)
		iowrite32(np->tx_ring_dma + head * sizeof(struct netdev_desc),
			  np->base + TxListPtr);
}

static netdev_tx_t
start_tx(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	struct netdev_desc *txdesc;
	dma_addr_t addr;
	unsigned entry;

	/* Calculate the next Tx descriptor entry. */
	entry = np->cur_tx % TX_RING_SIZE;
	np->tx_skbuff[entry] = skb;
	txdesc = &np->tx_ring[entry];

	addr = dma_map_single(&np->pci_dev->dev, skb->data, skb->len,
			      DMA_TO_DEVICE);
	if (dma_mapping_error(&np->pci_dev->dev, addr))
		goto drop_frame;

	txdesc->next_desc = 0;
	txdesc->status = cpu_to_le32((entry << 2) | DisableAlign);
	txdesc->frag.addr = cpu_to_le32(addr);
	txdesc->frag.length = cpu_to_le32(skb->len | LastFrag);

	/* Increment cur_tx before tasklet_schedule() */
	np->cur_tx++;
	mb();
	/* Schedule a tx_poll() task */
	tasklet_schedule(&np->tx_tasklet);

	/* On some architectures: explicitly flush cache lines here. */
	if (np->cur_tx - np->dirty_tx < TX_QUEUE_LEN - 1 &&
	    !netif_queue_stopped(dev)) {
		/* do nothing */
	} else {
		netif_stop_queue(dev);
	}
	if (netif_msg_tx_queued(np)) {
		printk(KERN_DEBUG "%s: Transmit frame #%d queued in slot %d.\n",
		       dev->name, np->cur_tx, entry);
	}
	return NETDEV_TX_OK;

drop_frame:
	dev_kfree_skb_any(skb);
	np->tx_skbuff[entry] = NULL;
	dev->stats.tx_dropped++;
	return NETDEV_TX_OK;
}

/* Reset hardware tx and free all of tx buffers */
static int
reset_tx(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	struct sk_buff *skb;
	int i;

	/* Reset tx logic, TxListPtr will be cleaned */
	iowrite16(TxDisable, ioaddr + MACCtrl1);
	sundance_reset(dev, (NetworkReset|FIFOReset|DMAReset|TxReset) << 16);

	/* Free all Tx skbuffs. */
	for (i = 0; i < TX_RING_SIZE; i++) {
		np->tx_ring[i].next_desc = 0;

		skb = np->tx_skbuff[i];
		if (skb) {
			dma_unmap_single(&np->pci_dev->dev,
					 le32_to_cpu(np->tx_ring[i].frag.addr),
					 skb->len, DMA_TO_DEVICE);
			dev_kfree_skb_any(skb);
			np->tx_skbuff[i] = NULL;
			dev->stats.tx_dropped++;
		}
	}
	np->cur_tx = np->dirty_tx = 0;
	np->cur_task = 0;

	np->last_tx = NULL;
	iowrite8(127, ioaddr + TxDMAPollPeriod);

	iowrite16(StatsEnable | RxEnable | TxEnable, ioaddr + MACCtrl1);
	return 0;
}

/* The interrupt handler cleans up after the Tx thread
   and schedules the Rx tasklet to do the Rx work. */
static irqreturn_t intr_handler(int irq, void *dev_instance)
{
	struct net_device *dev = (struct net_device *)dev_instance;
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	int hw_frame_id;
	int tx_cnt;
	int tx_status;
	int handled = 0;
	int i;

	do {
		int intr_status = ioread16(ioaddr + IntrStatus);
		iowrite16(intr_status, ioaddr + IntrStatus);

		if (netif_msg_intr(np))
			printk(KERN_DEBUG "%s: Interrupt, status %4.4x.\n",
			       dev->name, intr_status);

		if (!(intr_status & DEFAULT_INTR))
			break;

		handled = 1;

		if (intr_status & (IntrRxDMADone)) {
			iowrite16(DEFAULT_INTR & ~(IntrRxDone|IntrRxDMADone),
				  ioaddr + IntrEnable);
			if (np->budget < 0)
				np->budget = RX_BUDGET;
			tasklet_schedule(&np->rx_tasklet);
		}
		if (intr_status & (IntrTxDone | IntrDrvRqst)) {
			tx_status = ioread16(ioaddr + TxStatus);
			for (tx_cnt = 32; tx_status & 0x80; --tx_cnt) {
				if (netif_msg_tx_done(np))
					printk("%s: Transmit status is %2.2x.\n",
					       dev->name, tx_status);
				if (tx_status & 0x1e) {
					if (netif_msg_tx_err(np))
						printk("%s: Transmit error status %4.4x.\n",
						       dev->name, tx_status);
					dev->stats.tx_errors++;
					if (tx_status & 0x10)
						dev->stats.tx_fifo_errors++;
					if (tx_status & 0x08)
						dev->stats.collisions++;
					if (tx_status & 0x04)
						dev->stats.tx_fifo_errors++;
					if (tx_status & 0x02)
						dev->stats.tx_window_errors++;

					/*
					** This reset has been verified on
					** DFE-580TX boards ! phdm@macqel.be.
					*/
					if (tx_status & 0x10) {	/* TxUnderrun */
						/* Restart Tx FIFO and transmitter */
						sundance_reset(dev, (NetworkReset|FIFOReset|TxReset) << 16);
						/* No need to reset the Tx pointer here */
					}
					/* Restart the Tx. Need to make sure tx enabled */
					i = 10;
					do {
						iowrite16(ioread16(ioaddr + MACCtrl1) | TxEnable, ioaddr + MACCtrl1);
						if (ioread16(ioaddr + MACCtrl1) & TxEnabled)
							break;
						mdelay(1);
					} while (--i);
				}
				/* Yup, this is a documentation bug.  It cost me *hours*. */
				iowrite16(0, ioaddr + TxStatus);
				if (tx_cnt < 0) {
					iowrite32(5000, ioaddr + DownCounter);
					break;
				}
				tx_status = ioread16(ioaddr + TxStatus);
			}
			hw_frame_id = (tx_status >> 8) & 0xff;
		} else {
			hw_frame_id = ioread8(ioaddr + TxFrameId);
		}

		if (np->pci_dev->revision >= 0x14) {
			spin_lock(&np->lock);
			for (; np->cur_tx - np->dirty_tx > 0; np->dirty_tx++) {
				int entry = np->dirty_tx % TX_RING_SIZE;
				struct sk_buff *skb;
				int sw_frame_id;

				sw_frame_id = (le32_to_cpu(
					np->tx_ring[entry].status) >> 2) & 0xff;
				if (sw_frame_id == hw_frame_id &&
				    !(le32_to_cpu(np->tx_ring[entry].status)
				      & 0x00010000))
					break;
				if (sw_frame_id == (hw_frame_id + 1) %
				    TX_RING_SIZE)
					break;
				skb = np->tx_skbuff[entry];
				/* Free the original skb. */
				dma_unmap_single(&np->pci_dev->dev,
					le32_to_cpu(np->tx_ring[entry].frag.addr),
					skb->len, DMA_TO_DEVICE);
				dev_consume_skb_irq(np->tx_skbuff[entry]);
				np->tx_skbuff[entry] = NULL;
				np->tx_ring[entry].frag.addr = 0;
				np->tx_ring[entry].frag.length = 0;
			}
			spin_unlock(&np->lock);
		} else {
			spin_lock(&np->lock);
			for (; np->cur_tx - np->dirty_tx > 0; np->dirty_tx++) {
				int entry = np->dirty_tx % TX_RING_SIZE;
				struct sk_buff *skb;

				if (!(le32_to_cpu(np->tx_ring[entry].status)
				      & 0x00010000))
					break;
				skb = np->tx_skbuff[entry];
				/* Free the original skb. */
				dma_unmap_single(&np->pci_dev->dev,
					le32_to_cpu(np->tx_ring[entry].frag.addr),
					skb->len, DMA_TO_DEVICE);
				dev_consume_skb_irq(np->tx_skbuff[entry]);
				np->tx_skbuff[entry] = NULL;
				np->tx_ring[entry].frag.addr = 0;
				np->tx_ring[entry].frag.length = 0;
			}
			spin_unlock(&np->lock);
		}

		if (netif_queue_stopped(dev) &&
		    np->cur_tx - np->dirty_tx < TX_QUEUE_LEN - 4) {
			/* The ring is no longer full, clear busy flag. */
			netif_wake_queue(dev);
		}
		/* Abnormal error summary/uncommon events handlers. */
		if (intr_status & (IntrPCIErr | LinkChange | StatsMax))
			netdev_error(dev, intr_status);
	} while (0);
	if (netif_msg_intr(np))
		printk(KERN_DEBUG "%s: exiting interrupt, status=%#4.4x.\n",
		       dev->name, ioread16(ioaddr + IntrStatus));
	return IRQ_RETVAL(handled);
}

static void rx_poll(struct tasklet_struct *t)
{
	struct netdev_private *np = from_tasklet(np, t, rx_tasklet);
	struct net_device *dev = np->ndev;
	int entry = np->cur_rx % RX_RING_SIZE;
	int boguscnt = np->budget;
	void __iomem *ioaddr = np->base;
	int received = 0;

	/* If EOP is set on the next entry, it's a new packet. Send it up. */
	while (1) {
		struct netdev_desc *desc = &(np->rx_ring[entry]);
		u32 frame_status = le32_to_cpu(desc->status);
		int pkt_len;

		if (--boguscnt < 0)
			goto not_done;
		if (!(frame_status & DescOwn))
			break;
		pkt_len = frame_status & 0x1fff;	/* Chip omits the CRC. */
		if (netif_msg_rx_status(np))
			printk(KERN_DEBUG "  netdev_rx() status was %8.8x.\n",
			       frame_status);
		if (frame_status & 0x001f4000) {
			/* There was an error. */
			if (netif_msg_rx_err(np))
				printk(KERN_DEBUG "  netdev_rx() Rx error was %8.8x.\n",
				       frame_status);
			dev->stats.rx_errors++;
			if (frame_status & 0x00100000)
				dev->stats.rx_length_errors++;
			if (frame_status & 0x00010000)
				dev->stats.rx_fifo_errors++;
			if (frame_status & 0x00060000)
				dev->stats.rx_frame_errors++;
			if (frame_status & 0x00080000)
				dev->stats.rx_crc_errors++;
			if (frame_status & 0x00100000) {
				printk(KERN_WARNING "%s: Oversized Ethernet frame,"
				       " status %8.8x.\n",
				       dev->name, frame_status);
			}
		} else {
			struct sk_buff *skb;
#ifndef final_version
			if (netif_msg_rx_status(np))
				printk(KERN_DEBUG "  netdev_rx() normal Rx pkt length %d"
				       ", bogus_cnt %d.\n",
				       pkt_len, boguscnt);
#endif
			/* Check if the packet is long enough to accept without copying
			   to a minimally-sized skbuff. */
			if (pkt_len < rx_copybreak &&
			    (skb = netdev_alloc_skb(dev, pkt_len + 2)) != NULL) {
				skb_reserve(skb, 2);	/* 16 byte align the IP header */
				dma_sync_single_for_cpu(&np->pci_dev->dev,
							le32_to_cpu(desc->frag.addr),
							np->rx_buf_sz, DMA_FROM_DEVICE);
				skb_copy_to_linear_data(skb, np->rx_skbuff[entry]->data, pkt_len);
				dma_sync_single_for_device(&np->pci_dev->dev,
							   le32_to_cpu(desc->frag.addr),
							   np->rx_buf_sz, DMA_FROM_DEVICE);
				skb_put(skb, pkt_len);
			} else {
				dma_unmap_single(&np->pci_dev->dev,
						 le32_to_cpu(desc->frag.addr),
						 np->rx_buf_sz, DMA_FROM_DEVICE);
				skb_put(skb = np->rx_skbuff[entry], pkt_len);
				np->rx_skbuff[entry] = NULL;
			}
			skb->protocol = eth_type_trans(skb, dev);
			/* Note: checksum -> skb->ip_summed = CHECKSUM_UNNECESSARY; */
			netif_rx(skb);
		}
		entry = (entry + 1) % RX_RING_SIZE;
		received++;
	}
	np->cur_rx = entry;
	refill_rx(dev);
	np->budget -= received;
	iowrite16(DEFAULT_INTR, ioaddr + IntrEnable);
	return;

not_done:
	np->cur_rx = entry;
	refill_rx(dev);
	if (!received)
		received = 1;
	np->budget -= received;
	if (np->budget <= 0)
		np->budget = RX_BUDGET;
	tasklet_schedule(&np->rx_tasklet);
}

static void refill_rx(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	int entry;

	/* Refill the Rx ring buffers. */
	for (; (np->cur_rx - np->dirty_rx + RX_RING_SIZE) % RX_RING_SIZE > 0;
	     np->dirty_rx = (np->dirty_rx + 1) % RX_RING_SIZE) {
		struct sk_buff *skb;
		dma_addr_t addr;

		entry = np->dirty_rx % RX_RING_SIZE;
		if (np->rx_skbuff[entry] == NULL) {
			skb = netdev_alloc_skb(dev, np->rx_buf_sz + 2);
			np->rx_skbuff[entry] = skb;
			if (skb == NULL)
				break;		/* Better luck next round. */
			skb_reserve(skb, 2);	/* Align IP on 16 byte boundaries */
			addr = dma_map_single(&np->pci_dev->dev, skb->data,
					      np->rx_buf_sz, DMA_FROM_DEVICE);
			if (dma_mapping_error(&np->pci_dev->dev, addr)) {
				dev_kfree_skb_irq(skb);
				np->rx_skbuff[entry] = NULL;
				break;
			}

			np->rx_ring[entry].frag.addr = cpu_to_le32(addr);
		}
		/* Perhaps we need not reset this field. */
		np->rx_ring[entry].frag.length =
			cpu_to_le32(np->rx_buf_sz | LastFrag);
		np->rx_ring[entry].status = 0;
	}
}

static void netdev_error(struct net_device *dev, int intr_status)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	u16 mii_ctl, mii_advertise, mii_lpa;
	int speed;

	if (intr_status & LinkChange) {
		if (mdio_wait_link(dev, 10) == 0) {
			printk(KERN_INFO "%s: Link up\n", dev->name);
			if (np->an_enable) {
				mii_advertise = mdio_read(dev, np->phys[0],
							  MII_ADVERTISE);
				mii_lpa = mdio_read(dev, np->phys[0], MII_LPA);
				mii_advertise &= mii_lpa;
				printk(KERN_INFO "%s: Link changed: ",
				       dev->name);
				if (mii_advertise & ADVERTISE_100FULL) {
					np->speed = 100;
					printk("100Mbps, full duplex\n");
				} else if (mii_advertise & ADVERTISE_100HALF) {
					np->speed = 100;
					printk("100Mbps, half duplex\n");
				} else if (mii_advertise & ADVERTISE_10FULL) {
					np->speed = 10;
					printk("10Mbps, full duplex\n");
				} else if (mii_advertise & ADVERTISE_10HALF) {
					np->speed = 10;
					printk("10Mbps, half duplex\n");
				} else
					printk("\n");

			} else {
				mii_ctl = mdio_read(dev, np->phys[0], MII_BMCR);
				speed = (mii_ctl & BMCR_SPEED100) ? 100 : 10;
				np->speed = speed;
				printk(KERN_INFO "%s: Link changed: %dMbps, ",
				       dev->name, speed);
				printk("%s duplex.\n",
				       (mii_ctl & BMCR_FULLDPLX) ?
				       "full" : "half");
			}
			check_duplex(dev);
			if (np->flowctrl && np->mii_if.full_duplex) {
				iowrite16(ioread16(ioaddr + MulticastFilter1+2) | 0x0200,
					  ioaddr + MulticastFilter1+2);
				iowrite16(ioread16(ioaddr + MACCtrl0) | EnbFlowCtrl,
					  ioaddr + MACCtrl0);
			}
			netif_carrier_on(dev);
		} else {
			printk(KERN_INFO "%s: Link down\n", dev->name);
			netif_carrier_off(dev);
		}
	}
	if (intr_status & StatsMax)
		get_stats(dev);
	if (intr_status & IntrPCIErr) {
		printk(KERN_ERR "%s: Something Wicked happened! %4.4x.\n",
		       dev->name, intr_status);
		/* We must do a global reset of DMA to continue. */
	}
}

static struct net_device_stats *get_stats(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	unsigned long flags;
	u8 late_coll, single_coll, mult_coll;

	spin_lock_irqsave(&np->statlock, flags);
	/* The chip only needs to report silently dropped frames. */
	dev->stats.rx_missed_errors += ioread8(ioaddr + RxMissed);
	dev->stats.tx_packets += ioread16(ioaddr + TxFramesOK);
	dev->stats.rx_packets += ioread16(ioaddr + RxFramesOK);
	dev->stats.tx_carrier_errors += ioread8(ioaddr + StatsCarrierError);

	mult_coll = ioread8(ioaddr + StatsMultiColl);
	np->xstats.tx_multiple_collisions += mult_coll;
	single_coll = ioread8(ioaddr + StatsOneColl);
	np->xstats.tx_single_collisions += single_coll;
	late_coll = ioread8(ioaddr + StatsLateColl);
	np->xstats.tx_late_collisions += late_coll;
	dev->stats.collisions += mult_coll + single_coll + late_coll;

	np->xstats.tx_deferred += ioread8(ioaddr + StatsTxDefer);
	np->xstats.tx_deferred_excessive += ioread8(ioaddr + StatsTxXSDefer);
	np->xstats.tx_aborted += ioread8(ioaddr + StatsTxAbort);
	np->xstats.tx_bcasts += ioread8(ioaddr + StatsBcastTx);
	np->xstats.rx_bcasts += ioread8(ioaddr + StatsBcastRx);
	np->xstats.tx_mcasts += ioread8(ioaddr + StatsMcastTx);
	np->xstats.rx_mcasts += ioread8(ioaddr + StatsMcastRx);

	dev->stats.tx_bytes += ioread16(ioaddr + TxOctetsLow);
	dev->stats.tx_bytes += ioread16(ioaddr + TxOctetsHigh) << 16;
	dev->stats.rx_bytes += ioread16(ioaddr + RxOctetsLow);
	dev->stats.rx_bytes += ioread16(ioaddr + RxOctetsHigh) << 16;

	spin_unlock_irqrestore(&np->statlock, flags);

	return &dev->stats;
}

static void set_rx_mode(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	u16 mc_filter[4];	/* Multicast hash filter */
	u32 rx_mode;
	int i;

	if (dev->flags & IFF_PROMISC) {
		/* Set promiscuous. */
		memset(mc_filter, 0xff, sizeof(mc_filter));
		rx_mode = AcceptBroadcast | AcceptMulticast | AcceptAll | AcceptMyPhys;
	} else if ((netdev_mc_count(dev) > multicast_filter_limit) ||
		   (dev->flags & IFF_ALLMULTI)) {
		/* Too many to match, or accept all multicasts. */
		memset(mc_filter, 0xff, sizeof(mc_filter));
		rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
	} else if (!netdev_mc_empty(dev)) {
		struct netdev_hw_addr *ha;
		int bit;
		int index;
		int crc;

		memset(mc_filter, 0, sizeof(mc_filter));
		netdev_for_each_mc_addr(ha, dev) {
			crc = ether_crc_le(ETH_ALEN, ha->addr);
			for (index = 0, bit = 0; bit < 6; bit++, crc <<= 1)
				if (crc & 0x80000000)
					index |= 1 << bit;
			mc_filter[index/16] |= (1 << (index % 16));
		}
		rx_mode = AcceptBroadcast | AcceptMultiHash | AcceptMyPhys;
	} else {
		iowrite8(AcceptBroadcast | AcceptMyPhys, ioaddr + RxMode);
		return;
	}
	if (np->mii_if.full_duplex && np->flowctrl)
		mc_filter[3] |= 0x0200;

	for (i = 0; i < 4; i++)
		iowrite16(mc_filter[i], ioaddr + MulticastFilter0 + i*2);
	iowrite8(rx_mode, ioaddr + RxMode);
}

static int __set_mac_addr(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	u16 addr16;

	addr16 = (dev->dev_addr[0] | (dev->dev_addr[1] << 8));
	iowrite16(addr16, np->base + StationAddr);
	addr16 = (dev->dev_addr[2] | (dev->dev_addr[3] << 8));
	iowrite16(addr16, np->base + StationAddr+2);
	addr16 = (dev->dev_addr[4] | (dev->dev_addr[5] << 8));
	iowrite16(addr16, np->base + StationAddr+4);
	return 0;
}

/* Invoked with rtnl_lock held */
static int sundance_set_mac_addr(struct net_device *dev, void *data)
{
	const struct sockaddr *addr = data;

	if (!is_valid_ether_addr(addr->sa_data))
		return -EADDRNOTAVAIL;
	eth_hw_addr_set(dev, addr->sa_data);
	__set_mac_addr(dev);

	return 0;
}

static const struct {
	const char name[ETH_GSTRING_LEN];
} sundance_stats[] = {
	{ "tx_multiple_collisions" },
	{ "tx_single_collisions" },
	{ "tx_late_collisions" },
	{ "tx_deferred" },
	{ "tx_deferred_excessive" },
	{ "tx_aborted" },
	{ "tx_bcasts" },
	{ "rx_bcasts" },
	{ "tx_mcasts" },
	{ "rx_mcasts" },
};

static int check_if_running(struct net_device *dev)
{
	if (!netif_running(dev))
		return -EINVAL;
	return 0;
}

static void get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
{
	struct netdev_private *np = netdev_priv(dev);

	strscpy(info->driver, DRV_NAME, sizeof(info->driver));
	strscpy(info->bus_info, pci_name(np->pci_dev), sizeof(info->bus_info));
}

static int get_link_ksettings(struct net_device *dev,
			      struct ethtool_link_ksettings *cmd)
{
	struct netdev_private *np = netdev_priv(dev);

	spin_lock_irq(&np->lock);
	mii_ethtool_get_link_ksettings(&np->mii_if, cmd);
	spin_unlock_irq(&np->lock);
	return 0;
}

static int set_link_ksettings(struct net_device *dev,
			      const struct ethtool_link_ksettings *cmd)
{
	struct netdev_private *np = netdev_priv(dev);
	int res;

	spin_lock_irq(&np->lock);
	res = mii_ethtool_set_link_ksettings(&np->mii_if, cmd);
	spin_unlock_irq(&np->lock);
	return res;
}

static int nway_reset(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);

	return mii_nway_restart(&np->mii_if);
}

static u32 get_link(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);

	return mii_link_ok(&np->mii_if);
}

static u32 get_msglevel(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);

	return np->msg_enable;
}

static void set_msglevel(struct net_device *dev, u32 val)
{
	struct netdev_private *np = netdev_priv(dev);

	np->msg_enable = val;
}

static void get_strings(struct net_device *dev, u32 stringset,
			u8 *data)
{
	if (stringset == ETH_SS_STATS)
		memcpy(data, sundance_stats, sizeof(sundance_stats));
}

static int get_sset_count(struct net_device *dev, int sset)
{
	switch (sset) {
	case ETH_SS_STATS:
		return ARRAY_SIZE(sundance_stats);
	default:
		return -EOPNOTSUPP;
	}
}

static void get_ethtool_stats(struct net_device *dev,
			      struct ethtool_stats *stats, u64 *data)
{
	struct netdev_private *np = netdev_priv(dev);
	int i = 0;

	get_stats(dev);
	data[i++] = np->xstats.tx_multiple_collisions;
	data[i++] = np->xstats.tx_single_collisions;
	data[i++] = np->xstats.tx_late_collisions;
	data[i++] = np->xstats.tx_deferred;
	data[i++] = np->xstats.tx_deferred_excessive;
	data[i++] = np->xstats.tx_aborted;
	data[i++] = np->xstats.tx_bcasts;
	data[i++] = np->xstats.rx_bcasts;
	data[i++] = np->xstats.tx_mcasts;
	data[i++] = np->xstats.rx_mcasts;
}

#ifdef CONFIG_PM

static void sundance_get_wol(struct net_device *dev,
			     struct ethtool_wolinfo *wol)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	u8 wol_bits;

	wol->wolopts = 0;

	wol->supported = (WAKE_PHY | WAKE_MAGIC);
	if (!np->wol_enabled)
		return;

	wol_bits = ioread8(ioaddr + WakeEvent);
	if (wol_bits & MagicPktEnable)
		wol->wolopts |= WAKE_MAGIC;
	if (wol_bits & LinkEventEnable)
		wol->wolopts |= WAKE_PHY;
}

static int sundance_set_wol(struct net_device *dev,
			    struct ethtool_wolinfo *wol)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	u8 wol_bits;

	if (!device_can_wakeup(&np->pci_dev->dev))
		return -EOPNOTSUPP;

	np->wol_enabled = !!(wol->wolopts);
	wol_bits = ioread8(ioaddr + WakeEvent);
	wol_bits &= ~(WakePktEnable | MagicPktEnable |
		      LinkEventEnable | WolEnable);

	if (np->wol_enabled) {
		if (wol->wolopts & WAKE_MAGIC)
			wol_bits |= (MagicPktEnable | WolEnable);
		if (wol->wolopts & WAKE_PHY)
			wol_bits |= (LinkEventEnable | WolEnable);
	}
	iowrite8(wol_bits, ioaddr + WakeEvent);

	device_set_wakeup_enable(&np->pci_dev->dev, np->wol_enabled);

	return 0;
}
#else
#define sundance_get_wol NULL
#define sundance_set_wol NULL
#endif /* CONFIG_PM */

static const struct ethtool_ops ethtool_ops = {
	.begin = check_if_running,
	.get_drvinfo = get_drvinfo,
	.nway_reset = nway_reset,
	.get_link = get_link,
	.get_wol = sundance_get_wol,
	.set_wol = sundance_set_wol,
	.get_msglevel = get_msglevel,
	.set_msglevel = set_msglevel,
	.get_strings = get_strings,
	.get_sset_count = get_sset_count,
	.get_ethtool_stats = get_ethtool_stats,
	.get_link_ksettings = get_link_ksettings,
	.set_link_ksettings = set_link_ksettings,
};

static int netdev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
	struct netdev_private *np = netdev_priv(dev);
	int rc;

	if (!netif_running(dev))
		return -EINVAL;

	spin_lock_irq(&np->lock);
	rc = generic_mii_ioctl(&np->mii_if, if_mii(rq), cmd, NULL);
	spin_unlock_irq(&np->lock);

	return rc;
}

static int netdev_close(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->base;
	struct sk_buff *skb;
	int i;

	/* Wait for and kill the tasklets. */
	tasklet_kill(&np->rx_tasklet);
	tasklet_kill(&np->tx_tasklet);
	np->cur_tx = 0;
	np->dirty_tx = 0;
	np->cur_task = 0;
	np->last_tx = NULL;

	netif_stop_queue(dev);

	if (netif_msg_ifdown(np)) {
		printk(KERN_DEBUG "%s: Shutting down ethercard, status was Tx %2.2x "
		       "Rx %4.4x Int %2.2x.\n",
		       dev->name, ioread8(ioaddr + TxStatus),
		       ioread32(ioaddr + RxStatus), ioread16(ioaddr + IntrStatus));
		printk(KERN_DEBUG "%s: Queue pointers were Tx %d / %d, Rx %d / %d.\n",
		       dev->name, np->cur_tx, np->dirty_tx, np->cur_rx, np->dirty_rx);
	}

	/* Disable interrupts by clearing the interrupt mask. */
	iowrite16(0x0000, ioaddr + IntrEnable);

	/* Disable Rx and Tx DMA so resources can be released safely. */
	iowrite32(0x500, ioaddr + DMACtrl);

	/* Stop the chip's Tx and Rx processes. */
	iowrite16(TxDisable | RxDisable | StatsDisable, ioaddr + MACCtrl1);

	for (i = 2000; i > 0; i--) {
		if ((ioread32(ioaddr + DMACtrl) & 0xc000) == 0)
			break;
		mdelay(1);
	}

	iowrite16(GlobalReset | DMAReset | FIFOReset | NetworkReset,
		  ioaddr + ASIC_HI_WORD(ASICCtrl));

	for (i = 2000; i > 0; i--) {
		if ((ioread16(ioaddr + ASIC_HI_WORD(ASICCtrl)) & ResetBusy) == 0)
			break;
		mdelay(1);
	}

#ifdef __i386__
	if (netif_msg_hw(np)) {
		printk(KERN_DEBUG "  Tx ring at %8.8x:\n",
		       (int)(np->tx_ring_dma));
		for (i = 0; i < TX_RING_SIZE; i++)
			printk(KERN_DEBUG "  #%d desc. %4.4x %8.8x %8.8x.\n",
			       i, np->tx_ring[i].status, np->tx_ring[i].frag.addr,
			       np->tx_ring[i].frag.length);
		printk(KERN_DEBUG "  Rx ring %8.8x:\n",
		       (int)(np->rx_ring_dma));
		for (i = 0; i < /*RX_RING_SIZE*/ 4; i++) {
			printk(KERN_DEBUG "  #%d desc. %4.4x %4.4x %8.8x\n",
			       i, np->rx_ring[i].status, np->rx_ring[i].frag.addr,
			       np->rx_ring[i].frag.length);
		}
	}
#endif /* __i386__ debugging only */

	free_irq(np->pci_dev->irq, dev);

	timer_delete_sync(&np->timer);

	/* Free all the skbuffs in the Rx queue.
*/ 1893 + for (i = 0; i < RX_RING_SIZE; i++) { 1894 + np->rx_ring[i].status = 0; 1895 + skb = np->rx_skbuff[i]; 1896 + if (skb) { 1897 + dma_unmap_single(&np->pci_dev->dev, 1898 + le32_to_cpu(np->rx_ring[i].frag.addr), 1899 + np->rx_buf_sz, DMA_FROM_DEVICE); 1900 + dev_kfree_skb(skb); 1901 + np->rx_skbuff[i] = NULL; 1902 + } 1903 + np->rx_ring[i].frag.addr = cpu_to_le32(0xBADF00D0); /* poison */ 1904 + } 1905 + for (i = 0; i < TX_RING_SIZE; i++) { 1906 + np->tx_ring[i].next_desc = 0; 1907 + skb = np->tx_skbuff[i]; 1908 + if (skb) { 1909 + dma_unmap_single(&np->pci_dev->dev, 1910 + le32_to_cpu(np->tx_ring[i].frag.addr), 1911 + skb->len, DMA_TO_DEVICE); 1912 + dev_kfree_skb(skb); 1913 + np->tx_skbuff[i] = NULL; 1914 + } 1915 + } 1916 + 1917 + return 0; 1918 + } 1919 + 1920 + static void sundance_remove1(struct pci_dev *pdev) 1921 + { 1922 + struct net_device *dev = pci_get_drvdata(pdev); 1923 + 1924 + if (dev) { 1925 + struct netdev_private *np = netdev_priv(dev); 1926 + unregister_netdev(dev); 1927 + dma_free_coherent(&pdev->dev, RX_TOTAL_SIZE, 1928 + np->rx_ring, np->rx_ring_dma); 1929 + dma_free_coherent(&pdev->dev, TX_TOTAL_SIZE, 1930 + np->tx_ring, np->tx_ring_dma); 1931 + pci_iounmap(pdev, np->base); 1932 + pci_release_regions(pdev); 1933 + free_netdev(dev); 1934 + } 1935 + } 1936 + 1937 + static int __maybe_unused sundance_suspend(struct device *dev_d) 1938 + { 1939 + struct net_device *dev = dev_get_drvdata(dev_d); 1940 + struct netdev_private *np = netdev_priv(dev); 1941 + void __iomem *ioaddr = np->base; 1942 + 1943 + if (!netif_running(dev)) 1944 + return 0; 1945 + 1946 + netdev_close(dev); 1947 + netif_device_detach(dev); 1948 + 1949 + if (np->wol_enabled) { 1950 + iowrite8(AcceptBroadcast | AcceptMyPhys, ioaddr + RxMode); 1951 + iowrite16(RxEnable, ioaddr + MACCtrl1); 1952 + } 1953 + 1954 + device_set_wakeup_enable(dev_d, np->wol_enabled); 1955 + 1956 + return 0; 1957 + } 1958 + 1959 + static int __maybe_unused sundance_resume(struct device *dev_d) 1960 
+ { 1961 + struct net_device *dev = dev_get_drvdata(dev_d); 1962 + int err = 0; 1963 + 1964 + if (!netif_running(dev)) 1965 + return 0; 1966 + 1967 + err = netdev_open(dev); 1968 + if (err) { 1969 + printk(KERN_ERR "%s: Can't resume interface!\n", 1970 + dev->name); 1971 + goto out; 1972 + } 1973 + 1974 + netif_device_attach(dev); 1975 + 1976 + out: 1977 + return err; 1978 + } 1979 + 1980 + static SIMPLE_DEV_PM_OPS(sundance_pm_ops, sundance_suspend, sundance_resume); 1981 + 1982 + static struct pci_driver sundance_driver = { 1983 + .name = DRV_NAME, 1984 + .id_table = sundance_pci_tbl, 1985 + .probe = sundance_probe1, 1986 + .remove = sundance_remove1, 1987 + .driver.pm = &sundance_pm_ops, 1988 + }; 1989 + 1990 + module_pci_driver(sundance_driver);
+7 -3
drivers/net/ethernet/intel/e1000e/ethtool.c
··· 549 549 { 550 550 struct e1000_adapter *adapter = netdev_priv(netdev); 551 551 struct e1000_hw *hw = &adapter->hw; 552 + size_t total_len, max_len; 552 553 u16 *eeprom_buff; 553 - void *ptr; 554 - int max_len; 554 + int ret_val = 0; 555 555 int first_word; 556 556 int last_word; 557 - int ret_val = 0; 557 + void *ptr; 558 558 u16 i; 559 559 560 560 if (eeprom->len == 0) ··· 568 568 return -EINVAL; 569 569 570 570 max_len = hw->nvm.word_size * 2; 571 + 572 + if (check_add_overflow(eeprom->offset, eeprom->len, &total_len) || 573 + total_len > max_len) 574 + return -EFBIG; 571 575 572 576 first_word = eeprom->offset >> 1; 573 577 last_word = (eeprom->offset + eeprom->len - 1) >> 1;
+2 -2
drivers/net/ethernet/intel/i40e/i40e_client.c
··· 359 359 if (i40e_client_get_params(vsi, &cdev->lan_info.params)) 360 360 goto free_cdev; 361 361 362 - mac = list_first_entry(&cdev->lan_info.netdev->dev_addrs.list, 363 - struct netdev_hw_addr, list); 362 + mac = list_first_entry_or_null(&cdev->lan_info.netdev->dev_addrs.list, 363 + struct netdev_hw_addr, list); 364 364 if (mac) 365 365 ether_addr_copy(cdev->lan_info.lanmac, mac->addr); 366 366 else
+19 -104
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
··· 40 40 * setup, adding or removing filters, or other things. Many of 41 41 * these will be useful for some forms of unit testing. 42 42 **************************************************************/ 43 - static char i40e_dbg_command_buf[256] = ""; 44 - 45 - /** 46 - * i40e_dbg_command_read - read for command datum 47 - * @filp: the opened file 48 - * @buffer: where to write the data for the user to read 49 - * @count: the size of the user's buffer 50 - * @ppos: file position offset 51 - **/ 52 - static ssize_t i40e_dbg_command_read(struct file *filp, char __user *buffer, 53 - size_t count, loff_t *ppos) 54 - { 55 - struct i40e_pf *pf = filp->private_data; 56 - struct i40e_vsi *main_vsi; 57 - int bytes_not_copied; 58 - int buf_size = 256; 59 - char *buf; 60 - int len; 61 - 62 - /* don't allow partial reads */ 63 - if (*ppos != 0) 64 - return 0; 65 - if (count < buf_size) 66 - return -ENOSPC; 67 - 68 - buf = kzalloc(buf_size, GFP_KERNEL); 69 - if (!buf) 70 - return -ENOSPC; 71 - 72 - main_vsi = i40e_pf_get_main_vsi(pf); 73 - len = snprintf(buf, buf_size, "%s: %s\n", main_vsi->netdev->name, 74 - i40e_dbg_command_buf); 75 - 76 - bytes_not_copied = copy_to_user(buffer, buf, len); 77 - kfree(buf); 78 - 79 - if (bytes_not_copied) 80 - return -EFAULT; 81 - 82 - *ppos = len; 83 - return len; 84 - } 85 43 86 44 static char *i40e_filter_state_string[] = { 87 45 "INVALID", ··· 1579 1621 static const struct file_operations i40e_dbg_command_fops = { 1580 1622 .owner = THIS_MODULE, 1581 1623 .open = simple_open, 1582 - .read = i40e_dbg_command_read, 1583 1624 .write = i40e_dbg_command_write, 1584 1625 }; 1585 1626 ··· 1587 1630 * The netdev_ops entry in debugfs is for giving the driver commands 1588 1631 * to be executed from the netdev operations. 
1589 1632 **************************************************************/ 1590 - static char i40e_dbg_netdev_ops_buf[256] = ""; 1591 - 1592 - /** 1593 - * i40e_dbg_netdev_ops_read - read for netdev_ops datum 1594 - * @filp: the opened file 1595 - * @buffer: where to write the data for the user to read 1596 - * @count: the size of the user's buffer 1597 - * @ppos: file position offset 1598 - **/ 1599 - static ssize_t i40e_dbg_netdev_ops_read(struct file *filp, char __user *buffer, 1600 - size_t count, loff_t *ppos) 1601 - { 1602 - struct i40e_pf *pf = filp->private_data; 1603 - struct i40e_vsi *main_vsi; 1604 - int bytes_not_copied; 1605 - int buf_size = 256; 1606 - char *buf; 1607 - int len; 1608 - 1609 - /* don't allow partal reads */ 1610 - if (*ppos != 0) 1611 - return 0; 1612 - if (count < buf_size) 1613 - return -ENOSPC; 1614 - 1615 - buf = kzalloc(buf_size, GFP_KERNEL); 1616 - if (!buf) 1617 - return -ENOSPC; 1618 - 1619 - main_vsi = i40e_pf_get_main_vsi(pf); 1620 - len = snprintf(buf, buf_size, "%s: %s\n", main_vsi->netdev->name, 1621 - i40e_dbg_netdev_ops_buf); 1622 - 1623 - bytes_not_copied = copy_to_user(buffer, buf, len); 1624 - kfree(buf); 1625 - 1626 - if (bytes_not_copied) 1627 - return -EFAULT; 1628 - 1629 - *ppos = len; 1630 - return len; 1631 - } 1632 1633 1633 1634 /** 1634 1635 * i40e_dbg_netdev_ops_write - write into netdev_ops datum ··· 1600 1685 size_t count, loff_t *ppos) 1601 1686 { 1602 1687 struct i40e_pf *pf = filp->private_data; 1688 + char *cmd_buf, *buf_tmp; 1603 1689 int bytes_not_copied; 1604 1690 struct i40e_vsi *vsi; 1605 - char *buf_tmp; 1606 1691 int vsi_seid; 1607 1692 int i, cnt; 1608 1693 1609 1694 /* don't allow partial writes */ 1610 1695 if (*ppos != 0) 1611 1696 return 0; 1612 - if (count >= sizeof(i40e_dbg_netdev_ops_buf)) 1613 - return -ENOSPC; 1614 1697 1615 - memset(i40e_dbg_netdev_ops_buf, 0, sizeof(i40e_dbg_netdev_ops_buf)); 1616 - bytes_not_copied = copy_from_user(i40e_dbg_netdev_ops_buf, 1617 - buffer, count); 1618 
- if (bytes_not_copied) 1698 + cmd_buf = kzalloc(count + 1, GFP_KERNEL); 1699 + if (!cmd_buf) 1700 + return count; 1701 + bytes_not_copied = copy_from_user(cmd_buf, buffer, count); 1702 + if (bytes_not_copied) { 1703 + kfree(cmd_buf); 1619 1704 return -EFAULT; 1620 - i40e_dbg_netdev_ops_buf[count] = '\0'; 1705 + } 1706 + cmd_buf[count] = '\0'; 1621 1707 1622 - buf_tmp = strchr(i40e_dbg_netdev_ops_buf, '\n'); 1708 + buf_tmp = strchr(cmd_buf, '\n'); 1623 1709 if (buf_tmp) { 1624 1710 *buf_tmp = '\0'; 1625 - count = buf_tmp - i40e_dbg_netdev_ops_buf + 1; 1711 + count = buf_tmp - cmd_buf + 1; 1626 1712 } 1627 1713 1628 - if (strncmp(i40e_dbg_netdev_ops_buf, "change_mtu", 10) == 0) { 1714 + if (strncmp(cmd_buf, "change_mtu", 10) == 0) { 1629 1715 int mtu; 1630 1716 1631 - cnt = sscanf(&i40e_dbg_netdev_ops_buf[11], "%i %i", 1717 + cnt = sscanf(&cmd_buf[11], "%i %i", 1632 1718 &vsi_seid, &mtu); 1633 1719 if (cnt != 2) { 1634 1720 dev_info(&pf->pdev->dev, "change_mtu <vsi_seid> <mtu>\n"); ··· 1651 1735 dev_info(&pf->pdev->dev, "Could not acquire RTNL - please try again\n"); 1652 1736 } 1653 1737 1654 - } else if (strncmp(i40e_dbg_netdev_ops_buf, "set_rx_mode", 11) == 0) { 1655 - cnt = sscanf(&i40e_dbg_netdev_ops_buf[11], "%i", &vsi_seid); 1738 + } else if (strncmp(cmd_buf, "set_rx_mode", 11) == 0) { 1739 + cnt = sscanf(&cmd_buf[11], "%i", &vsi_seid); 1656 1740 if (cnt != 1) { 1657 1741 dev_info(&pf->pdev->dev, "set_rx_mode <vsi_seid>\n"); 1658 1742 goto netdev_ops_write_done; ··· 1672 1756 dev_info(&pf->pdev->dev, "Could not acquire RTNL - please try again\n"); 1673 1757 } 1674 1758 1675 - } else if (strncmp(i40e_dbg_netdev_ops_buf, "napi", 4) == 0) { 1676 - cnt = sscanf(&i40e_dbg_netdev_ops_buf[4], "%i", &vsi_seid); 1759 + } else if (strncmp(cmd_buf, "napi", 4) == 0) { 1760 + cnt = sscanf(&cmd_buf[4], "%i", &vsi_seid); 1677 1761 if (cnt != 1) { 1678 1762 dev_info(&pf->pdev->dev, "napi <vsi_seid>\n"); 1679 1763 goto netdev_ops_write_done; ··· 1691 1775 
dev_info(&pf->pdev->dev, "napi called\n"); 1692 1776 } 1693 1777 } else { 1694 - dev_info(&pf->pdev->dev, "unknown command '%s'\n", 1695 - i40e_dbg_netdev_ops_buf); 1778 + dev_info(&pf->pdev->dev, "unknown command '%s'\n", cmd_buf); 1696 1779 dev_info(&pf->pdev->dev, "available commands\n"); 1697 1780 dev_info(&pf->pdev->dev, " change_mtu <vsi_seid> <mtu>\n"); 1698 1781 dev_info(&pf->pdev->dev, " set_rx_mode <vsi_seid>\n"); 1699 1782 dev_info(&pf->pdev->dev, " napi <vsi_seid>\n"); 1700 1783 } 1701 1784 netdev_ops_write_done: 1785 + kfree(cmd_buf); 1702 1786 return count; 1703 1787 } 1704 1788 1705 1789 static const struct file_operations i40e_dbg_netdev_ops_fops = { 1706 1790 .owner = THIS_MODULE, 1707 1791 .open = simple_open, 1708 - .read = i40e_dbg_netdev_ops_read, 1709 1792 .write = i40e_dbg_netdev_ops_write, 1710 1793 }; 1711 1794
+7 -5
drivers/net/ethernet/intel/ice/ice_main.c
··· 3176 3176 hw = &pf->hw; 3177 3177 tx = &pf->ptp.port.tx; 3178 3178 spin_lock_irqsave(&tx->lock, flags); 3179 - ice_ptp_complete_tx_single_tstamp(tx); 3179 + if (tx->init) { 3180 + ice_ptp_complete_tx_single_tstamp(tx); 3180 3181 3181 - idx = find_next_bit_wrap(tx->in_use, tx->len, 3182 - tx->last_ll_ts_idx_read + 1); 3183 - if (idx != tx->len) 3184 - ice_ptp_req_tx_single_tstamp(tx, idx); 3182 + idx = find_next_bit_wrap(tx->in_use, tx->len, 3183 + tx->last_ll_ts_idx_read + 1); 3184 + if (idx != tx->len) 3185 + ice_ptp_req_tx_single_tstamp(tx, idx); 3186 + } 3185 3187 spin_unlock_irqrestore(&tx->lock, flags); 3186 3188 3187 3189 val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+8 -5
drivers/net/ethernet/intel/ice/ice_ptp.c
··· 2701 2701 */ 2702 2702 if (hw->dev_caps.ts_dev_info.ts_ll_int_read) { 2703 2703 struct ice_ptp_tx *tx = &pf->ptp.port.tx; 2704 - u8 idx; 2704 + u8 idx, last; 2705 2705 2706 2706 if (!ice_pf_state_is_nominal(pf)) 2707 2707 return IRQ_HANDLED; 2708 2708 2709 2709 spin_lock(&tx->lock); 2710 - idx = find_next_bit_wrap(tx->in_use, tx->len, 2711 - tx->last_ll_ts_idx_read + 1); 2712 - if (idx != tx->len) 2713 - ice_ptp_req_tx_single_tstamp(tx, idx); 2710 + if (tx->init) { 2711 + last = tx->last_ll_ts_idx_read + 1; 2712 + idx = find_next_bit_wrap(tx->in_use, tx->len, 2713 + last); 2714 + if (idx != tx->len) 2715 + ice_ptp_req_tx_single_tstamp(tx, idx); 2716 + } 2714 2717 spin_unlock(&tx->lock); 2715 2718 2716 2719 return IRQ_HANDLED;
+2 -2
drivers/net/ethernet/intel/idpf/idpf_idc.c
··· 247 247 if (!adev) 248 248 return; 249 249 250 + ida_free(&idpf_idc_ida, adev->id); 251 + 250 252 auxiliary_device_delete(adev); 251 253 auxiliary_device_uninit(adev); 252 - 253 - ida_free(&idpf_idc_ida, adev->id); 254 254 } 255 255 256 256 /**
+6 -3
drivers/net/ethernet/intel/idpf/idpf_lib.c
··· 2344 2344 struct idpf_netdev_priv *np = netdev_priv(netdev); 2345 2345 struct idpf_vport_config *vport_config; 2346 2346 struct sockaddr *addr = p; 2347 + u8 old_mac_addr[ETH_ALEN]; 2347 2348 struct idpf_vport *vport; 2348 2349 int err = 0; 2349 2350 ··· 2368 2367 if (ether_addr_equal(netdev->dev_addr, addr->sa_data)) 2369 2368 goto unlock_mutex; 2370 2369 2370 + ether_addr_copy(old_mac_addr, vport->default_mac_addr); 2371 + ether_addr_copy(vport->default_mac_addr, addr->sa_data); 2371 2372 vport_config = vport->adapter->vport_config[vport->idx]; 2372 2373 err = idpf_add_mac_filter(vport, np, addr->sa_data, false); 2373 2374 if (err) { 2374 2375 __idpf_del_mac_filter(vport_config, addr->sa_data); 2376 + ether_addr_copy(vport->default_mac_addr, netdev->dev_addr); 2375 2377 goto unlock_mutex; 2376 2378 } 2377 2379 2378 - if (is_valid_ether_addr(vport->default_mac_addr)) 2379 - idpf_del_mac_filter(vport, np, vport->default_mac_addr, false); 2380 + if (is_valid_ether_addr(old_mac_addr)) 2381 + __idpf_del_mac_filter(vport_config, old_mac_addr); 2380 2382 2381 - ether_addr_copy(vport->default_mac_addr, addr->sa_data); 2382 2383 eth_hw_addr_set(netdev, addr->sa_data); 2383 2384 2384 2385 unlock_mutex:
+12
drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
··· 3765 3765 return le32_to_cpu(vport_msg->vport_id); 3766 3766 } 3767 3767 3768 + static void idpf_set_mac_type(struct idpf_vport *vport, 3769 + struct virtchnl2_mac_addr *mac_addr) 3770 + { 3771 + bool is_primary; 3772 + 3773 + is_primary = ether_addr_equal(vport->default_mac_addr, mac_addr->addr); 3774 + mac_addr->type = is_primary ? VIRTCHNL2_MAC_ADDR_PRIMARY : 3775 + VIRTCHNL2_MAC_ADDR_EXTRA; 3776 + } 3777 + 3768 3778 /** 3769 3779 * idpf_mac_filter_async_handler - Async callback for mac filters 3770 3780 * @adapter: private data struct ··· 3904 3894 list) { 3905 3895 if (add && f->add) { 3906 3896 ether_addr_copy(mac_addr[i].addr, f->macaddr); 3897 + idpf_set_mac_type(vport, &mac_addr[i]); 3907 3898 i++; 3908 3899 f->add = false; 3909 3900 if (i == total_filters) ··· 3912 3901 } 3913 3902 if (!add && f->remove) { 3914 3903 ether_addr_copy(mac_addr[i].addr, f->macaddr); 3904 + idpf_set_mac_type(vport, &mac_addr[i]); 3915 3905 i++; 3916 3906 f->remove = false; 3917 3907 if (i == total_filters)
+2 -2
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
··· 3571 3571 3572 3572 for (i = 0; i < ARRAY_SIZE(ixgbe_ls_map); ++i) { 3573 3573 if (hw->phy.eee_speeds_supported & ixgbe_ls_map[i].mac_speed) 3574 - linkmode_set_bit(ixgbe_lp_map[i].link_mode, 3574 + linkmode_set_bit(ixgbe_ls_map[i].link_mode, 3575 3575 edata->supported); 3576 3576 } 3577 3577 3578 3578 for (i = 0; i < ARRAY_SIZE(ixgbe_ls_map); ++i) { 3579 3579 if (hw->phy.eee_speeds_advertised & ixgbe_ls_map[i].mac_speed) 3580 - linkmode_set_bit(ixgbe_lp_map[i].link_mode, 3580 + linkmode_set_bit(ixgbe_ls_map[i].link_mode, 3581 3581 edata->advertised); 3582 3582 } 3583 3583
+9 -1
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 1761 1761 bool gso = false; 1762 1762 int tx_num; 1763 1763 1764 + if (skb_vlan_tag_present(skb) && 1765 + !eth_proto_is_802_3(eth_hdr(skb)->h_proto)) { 1766 + skb = __vlan_hwaccel_push_inside(skb); 1767 + if (!skb) 1768 + goto dropped; 1769 + } 1770 + 1764 1771 /* normally we can rely on the stack not calling this more than once, 1765 1772 * however we have 2 queues running on the same ring so we need to lock 1766 1773 * the ring access ··· 1813 1806 1814 1807 drop: 1815 1808 spin_unlock(&eth->page_lock); 1816 - stats->tx_dropped++; 1817 1809 dev_kfree_skb_any(skb); 1810 + dropped: 1811 + stats->tx_dropped++; 1818 1812 return NETDEV_TX_OK; 1819 1813 } 1820 1814
+3 -1
drivers/net/ethernet/mellanox/mlx4/en_rx.c
··· 267 267 pp.dma_dir = priv->dma_dir; 268 268 269 269 ring->pp = page_pool_create(&pp); 270 - if (!ring->pp) 270 + if (IS_ERR(ring->pp)) { 271 + err = PTR_ERR(ring->pp); 271 272 goto err_ring; 273 + } 272 274 273 275 if (xdp_rxq_info_reg(&ring->xdp_rxq, priv->dev, queue_index, 0) < 0) 274 276 goto err_pp;
+5 -2
drivers/net/ethernet/microchip/lan865x/lan865x.c
··· 423 423 free_netdev(priv->netdev); 424 424 } 425 425 426 - static const struct spi_device_id spidev_spi_ids[] = { 426 + static const struct spi_device_id lan865x_ids[] = { 427 427 { .name = "lan8650" }, 428 + { .name = "lan8651" }, 428 429 {}, 429 430 }; 431 + MODULE_DEVICE_TABLE(spi, lan865x_ids); 430 432 431 433 static const struct of_device_id lan865x_dt_ids[] = { 432 434 { .compatible = "microchip,lan8650" }, 435 + { .compatible = "microchip,lan8651" }, 433 436 { /* Sentinel */ } 434 437 }; 435 438 MODULE_DEVICE_TABLE(of, lan865x_dt_ids); ··· 444 441 }, 445 442 .probe = lan865x_probe, 446 443 .remove = lan865x_remove, 447 - .id_table = spidev_spi_ids, 444 + .id_table = lan865x_ids, 448 445 }; 449 446 module_spi_driver(lan865x_driver); 450 447
+2 -1
drivers/net/ethernet/oa_tc6.c
··· 1249 1249 1250 1250 /* Set the SPI controller to pump at realtime priority */ 1251 1251 tc6->spi->rt = true; 1252 - spi_setup(tc6->spi); 1252 + if (spi_setup(tc6->spi) < 0) 1253 + return NULL; 1253 1254 1254 1255 tc6->spi_ctrl_tx_buf = devm_kzalloc(&tc6->spi->dev, 1255 1256 OA_TC6_CTRL_SPI_BUF_SIZE,
+1 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 1522 1522 } 1523 1523 } 1524 1524 1525 - if (single_port) { 1525 + if (single_port && num_tx) { 1526 1526 netif_txq = netdev_get_tx_queue(ndev, chn); 1527 1527 netdev_tx_completed_queue(netif_txq, num_tx, total_bytes); 1528 1528 am65_cpsw_nuss_tx_wake(tx_chn, ndev, netif_txq);
+10
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 1168 1168 &meta_max_len); 1169 1169 dma_unmap_single(lp->dev, skbuf_dma->dma_address, lp->max_frm_size, 1170 1170 DMA_FROM_DEVICE); 1171 + 1172 + if (IS_ERR(app_metadata)) { 1173 + if (net_ratelimit()) 1174 + netdev_err(lp->ndev, "Failed to get RX metadata pointer\n"); 1175 + dev_kfree_skb_any(skb); 1176 + lp->ndev->stats.rx_dropped++; 1177 + goto rx_submit; 1178 + } 1179 + 1171 1180 /* TODO: Derive app word index programmatically */ 1172 1181 rx_len = (app_metadata[LEN_APP] & 0xFFFF); 1173 1182 skb_put(skb, rx_len); ··· 1189 1180 u64_stats_add(&lp->rx_bytes, rx_len); 1190 1181 u64_stats_update_end(&lp->rx_stat_sync); 1191 1182 1183 + rx_submit: 1192 1184 for (i = 0; i < CIRC_SPACE(lp->rx_ring_head, lp->rx_ring_tail, 1193 1185 RX_BUF_NUM_DEFAULT); i++) 1194 1186 axienet_rx_submit_desc(lp->ndev);
+1 -1
drivers/net/ethernet/xircom/xirc2ps_cs.c
··· 1576 1576 msleep(40); /* wait 40 msec to let it complete */ 1577 1577 } 1578 1578 if (full_duplex) 1579 - PutByte(XIRCREG1_ECR, GetByte(XIRCREG1_ECR | FullDuplex)); 1579 + PutByte(XIRCREG1_ECR, GetByte(XIRCREG1_ECR) | FullDuplex); 1580 1580 } else { /* No MII */ 1581 1581 SelectPage(0); 1582 1582 value = GetByte(XIRCREG_ESR); /* read the ESR */
+4 -4
drivers/net/macsec.c
··· 1844 1844 1845 1845 if (tb_sa[MACSEC_SA_ATTR_PN]) { 1846 1846 spin_lock_bh(&rx_sa->lock); 1847 - rx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]); 1847 + rx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]); 1848 1848 spin_unlock_bh(&rx_sa->lock); 1849 1849 } 1850 1850 ··· 2086 2086 } 2087 2087 2088 2088 spin_lock_bh(&tx_sa->lock); 2089 - tx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]); 2089 + tx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]); 2090 2090 spin_unlock_bh(&tx_sa->lock); 2091 2091 2092 2092 if (tb_sa[MACSEC_SA_ATTR_ACTIVE]) ··· 2398 2398 2399 2399 spin_lock_bh(&tx_sa->lock); 2400 2400 prev_pn = tx_sa->next_pn_halves; 2401 - tx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]); 2401 + tx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]); 2402 2402 spin_unlock_bh(&tx_sa->lock); 2403 2403 } 2404 2404 ··· 2496 2496 2497 2497 spin_lock_bh(&rx_sa->lock); 2498 2498 prev_pn = rx_sa->next_pn_halves; 2499 - rx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]); 2499 + rx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]); 2500 2500 spin_unlock_bh(&rx_sa->lock); 2501 2501 } 2502 2502
+1
drivers/net/mctp/mctp-usb.c
··· 183 183 struct mctp_usb_hdr *hdr; 184 184 u8 pkt_len; /* length of MCTP packet, no USB header */ 185 185 186 + skb_reset_mac_header(skb); 186 187 hdr = skb_pull_data(skb, sizeof(*hdr)); 187 188 if (!hdr) 188 189 break;
+1 -1
drivers/net/pcs/pcs-rzn1-miic.c
··· 19 19 #define MIIC_PRCMD 0x0 20 20 #define MIIC_ESID_CODE 0x4 21 21 22 - #define MIIC_MODCTRL 0x20 22 + #define MIIC_MODCTRL 0x8 23 23 #define MIIC_MODCTRL_SW_MODE GENMASK(4, 0) 24 24 25 25 #define MIIC_CONVCTRL(port) (0x100 + (port) * 4)
+8 -10
drivers/net/phy/mscc/mscc_ptp.c
··· 456 456 *p++ = (reg >> 24) & 0xff; 457 457 } 458 458 459 - len = skb_queue_len(&ptp->tx_queue); 459 + len = skb_queue_len_lockless(&ptp->tx_queue); 460 460 if (len < 1) 461 461 return; 462 462 463 463 while (len--) { 464 - skb = __skb_dequeue(&ptp->tx_queue); 464 + skb = skb_dequeue(&ptp->tx_queue); 465 465 if (!skb) 466 466 return; 467 467 ··· 486 486 * packet in the FIFO right now, reschedule it for later 487 487 * packets. 488 488 */ 489 - __skb_queue_tail(&ptp->tx_queue, skb); 489 + skb_queue_tail(&ptp->tx_queue, skb); 490 490 } 491 491 } 492 492 ··· 1068 1068 case HWTSTAMP_TX_ON: 1069 1069 break; 1070 1070 case HWTSTAMP_TX_OFF: 1071 + skb_queue_purge(&vsc8531->ptp->tx_queue); 1071 1072 break; 1072 1073 default: 1073 1074 return -ERANGE; ··· 1092 1091 vsc8531->ptp->rx_filter = cfg->rx_filter; 1093 1092 1094 1093 mutex_lock(&vsc8531->ts_lock); 1095 - 1096 - __skb_queue_purge(&vsc8531->ptp->tx_queue); 1097 - __skb_queue_head_init(&vsc8531->ptp->tx_queue); 1098 1094 1099 1095 /* Disable predictor while configuring the 1588 block */ 1100 1096 val = vsc85xx_ts_read_csr(phydev, PROCESSOR, ··· 1178 1180 1179 1181 skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; 1180 1182 1181 - mutex_lock(&vsc8531->ts_lock); 1182 - __skb_queue_tail(&vsc8531->ptp->tx_queue, skb); 1183 - mutex_unlock(&vsc8531->ts_lock); 1183 + skb_queue_tail(&vsc8531->ptp->tx_queue, skb); 1184 1184 return; 1185 1185 1186 1186 out: ··· 1544 1548 if (vsc8531->ptp->ptp_clock) { 1545 1549 ptp_clock_unregister(vsc8531->ptp->ptp_clock); 1546 1550 skb_queue_purge(&vsc8531->rx_skbs_list); 1551 + skb_queue_purge(&vsc8531->ptp->tx_queue); 1547 1552 } 1548 1553 } 1549 1554 ··· 1568 1571 if (rc & VSC85XX_1588_INT_FIFO_ADD) { 1569 1572 vsc85xx_get_tx_ts(priv->ptp); 1570 1573 } else if (rc & VSC85XX_1588_INT_FIFO_OVERFLOW) { 1571 - __skb_queue_purge(&priv->ptp->tx_queue); 1574 + skb_queue_purge(&priv->ptp->tx_queue); 1572 1575 vsc85xx_ts_reset_fifo(phydev); 1573 1576 } 1574 1577 ··· 1588 1591 
mutex_init(&vsc8531->phc_lock); 1589 1592 mutex_init(&vsc8531->ts_lock); 1590 1593 skb_queue_head_init(&vsc8531->rx_skbs_list); 1594 + skb_queue_head_init(&vsc8531->ptp->tx_queue); 1591 1595 1592 1596 /* Retrieve the shared load/save GPIO. Request it as non exclusive as 1593 1597 * the same GPIO can be requested by all the PHYs of the same package.
+65 -38
drivers/net/phy/phylink.c
··· 1016 1016 pl->pcs->ops->pcs_an_restart(pl->pcs); 1017 1017 } 1018 1018 1019 + enum inband_type { 1020 + INBAND_NONE, 1021 + INBAND_CISCO_SGMII, 1022 + INBAND_BASEX, 1023 + }; 1024 + 1025 + static enum inband_type phylink_get_inband_type(phy_interface_t interface) 1026 + { 1027 + switch (interface) { 1028 + case PHY_INTERFACE_MODE_SGMII: 1029 + case PHY_INTERFACE_MODE_QSGMII: 1030 + case PHY_INTERFACE_MODE_QUSGMII: 1031 + case PHY_INTERFACE_MODE_USXGMII: 1032 + case PHY_INTERFACE_MODE_10G_QXGMII: 1033 + /* These protocols are designed for use with a PHY which 1034 + * communicates its negotiation result back to the MAC via 1035 + * inband communication. Note: there exist PHYs that run 1036 + * with SGMII but do not send the inband data. 1037 + */ 1038 + return INBAND_CISCO_SGMII; 1039 + 1040 + case PHY_INTERFACE_MODE_1000BASEX: 1041 + case PHY_INTERFACE_MODE_2500BASEX: 1042 + /* 1000base-X is designed for use media-side for Fibre 1043 + * connections, and thus the Autoneg bit needs to be 1044 + * taken into account. We also do this for 2500base-X 1045 + * as well, but drivers may not support this, so may 1046 + * need to override this. 
1047 + */ 1048 + return INBAND_BASEX; 1049 + 1050 + default: 1051 + return INBAND_NONE; 1052 + } 1053 + } 1054 + 1019 1055 /** 1020 1056 * phylink_pcs_neg_mode() - helper to determine PCS inband mode 1021 1057 * @pl: a pointer to a &struct phylink returned from phylink_create() ··· 1079 1043 unsigned int pcs_ib_caps = 0; 1080 1044 unsigned int phy_ib_caps = 0; 1081 1045 unsigned int neg_mode, mode; 1082 - enum { 1083 - INBAND_CISCO_SGMII, 1084 - INBAND_BASEX, 1085 - } type; 1046 + enum inband_type type; 1047 + 1048 + type = phylink_get_inband_type(interface); 1049 + if (type == INBAND_NONE) { 1050 + pl->pcs_neg_mode = PHYLINK_PCS_NEG_NONE; 1051 + pl->act_link_an_mode = pl->req_link_an_mode; 1052 + return; 1053 + } 1086 1054 1087 1055 mode = pl->req_link_an_mode; 1088 1056 1089 1057 pl->phy_ib_mode = 0; 1090 - 1091 - switch (interface) { 1092 - case PHY_INTERFACE_MODE_SGMII: 1093 - case PHY_INTERFACE_MODE_QSGMII: 1094 - case PHY_INTERFACE_MODE_QUSGMII: 1095 - case PHY_INTERFACE_MODE_USXGMII: 1096 - case PHY_INTERFACE_MODE_10G_QXGMII: 1097 - /* These protocols are designed for use with a PHY which 1098 - * communicates its negotiation result back to the MAC via 1099 - * inband communication. Note: there exist PHYs that run 1100 - * with SGMII but do not send the inband data. 1101 - */ 1102 - type = INBAND_CISCO_SGMII; 1103 - break; 1104 - 1105 - case PHY_INTERFACE_MODE_1000BASEX: 1106 - case PHY_INTERFACE_MODE_2500BASEX: 1107 - /* 1000base-X is designed for use media-side for Fibre 1108 - * connections, and thus the Autoneg bit needs to be 1109 - * taken into account. We also do this for 2500base-X 1110 - * as well, but drivers may not support this, so may 1111 - * need to override this. 
1112 - */ 1113 - type = INBAND_BASEX; 1114 - break; 1115 - 1116 - default: 1117 - pl->pcs_neg_mode = PHYLINK_PCS_NEG_NONE; 1118 - pl->act_link_an_mode = mode; 1119 - return; 1120 - } 1121 1058 1122 1059 if (pcs) 1123 1060 pcs_ib_caps = phylink_pcs_inband_caps(pcs, interface); ··· 2141 2132 __ETHTOOL_LINK_MODE_MASK_NBITS, pl->supported, 2142 2133 __ETHTOOL_LINK_MODE_MASK_NBITS, phy->advertising); 2143 2134 2144 - if (phy_interrupt_is_valid(phy)) 2145 - phy_request_interrupt(phy); 2146 - 2147 2135 if (pl->config->mac_managed_pm) 2148 2136 phy->mac_managed_pm = true; 2149 2137 ··· 2156 2150 if (ret == -EOPNOTSUPP) 2157 2151 ret = 0; 2158 2152 } 2153 + 2154 + if (ret == 0 && phy_interrupt_is_valid(phy)) 2155 + phy_request_interrupt(phy); 2159 2156 2160 2157 return ret; 2161 2158 } ··· 3634 3625 { 3635 3626 __ETHTOOL_DECLARE_LINK_MODE_MASK(support); 3636 3627 struct phylink_link_state config; 3628 + enum inband_type inband_type; 3637 3629 phy_interface_t interface; 3638 3630 int ret; 3639 3631 ··· 3680 3670 3681 3671 phylink_dbg(pl, "optical SFP: chosen %s interface\n", 3682 3672 phy_modes(interface)); 3673 + 3674 + inband_type = phylink_get_inband_type(interface); 3675 + if (inband_type == INBAND_NONE) { 3676 + /* If this is the sole interface, and there is no inband 3677 + * support, clear the advertising mask and Autoneg bit in 3678 + * the support mask. Otherwise, just clear the Autoneg bit 3679 + * in the advertising mask. 3680 + */ 3681 + if (phy_interface_weight(pl->sfp_interfaces) == 1) { 3682 + linkmode_clear_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, 3683 + pl->sfp_support); 3684 + linkmode_zero(config.advertising); 3685 + } else { 3686 + linkmode_clear_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, 3687 + config.advertising); 3688 + } 3689 + } 3683 3690 3684 3691 if (!phylink_validate_pcs_inband_autoneg(pl, interface, 3685 3692 config.advertising)) {
+3
drivers/net/phy/sfp.c
··· 492 492 SFP_QUIRK("ALCATELLUCENT", "3FE46541AA", sfp_quirk_2500basex, 493 493 sfp_fixup_nokia), 494 494 495 + // FLYPRO SFP-10GT-CS-30M uses Rollball protocol to talk to the PHY. 496 + SFP_QUIRK_F("FLYPRO", "SFP-10GT-CS-30M", sfp_fixup_rollball), 497 + 495 498 // Fiberstore SFP-10G-T doesn't identify as copper, uses the Rollball 496 499 // protocol to talk to the PHY and needs 4 sec wait before probing the 497 500 // PHY.
+3 -3
drivers/net/ppp/ppp_generic.c
··· 1744 1744 */ 1745 1745 if (net_ratelimit()) 1746 1746 netdev_err(ppp->dev, "ppp: compressor dropped pkt\n"); 1747 - kfree_skb(skb); 1748 1747 consume_skb(new_skb); 1749 1748 new_skb = NULL; 1750 1749 } ··· 1844 1845 "down - pkt dropped.\n"); 1845 1846 goto drop; 1846 1847 } 1847 - skb = pad_compress_skb(ppp, skb); 1848 - if (!skb) 1848 + new_skb = pad_compress_skb(ppp, skb); 1849 + if (!new_skb) 1849 1850 goto drop; 1851 + skb = new_skb; 1850 1852 } 1851 1853 1852 1854 /*
+12 -6
drivers/net/vxlan/vxlan_core.c
··· 1445 1445 if (READ_ONCE(f->updated) != now) 1446 1446 WRITE_ONCE(f->updated, now); 1447 1447 1448 + /* Don't override an fdb with nexthop with a learnt entry */ 1449 + if (rcu_access_pointer(f->nh)) 1450 + return SKB_DROP_REASON_VXLAN_ENTRY_EXISTS; 1451 + 1448 1452 if (likely(vxlan_addr_equal(&rdst->remote_ip, src_ip) && 1449 1453 rdst->remote_ifindex == ifindex)) 1450 1454 return SKB_NOT_DROPPED_YET; 1451 1455 1452 1456 /* Don't migrate static entries, drop packets */ 1453 1457 if (f->state & (NUD_PERMANENT | NUD_NOARP)) 1454 - return SKB_DROP_REASON_VXLAN_ENTRY_EXISTS; 1455 - 1456 - /* Don't override an fdb with nexthop with a learnt entry */ 1457 - if (rcu_access_pointer(f->nh)) 1458 1458 return SKB_DROP_REASON_VXLAN_ENTRY_EXISTS; 1459 1459 1460 1460 if (net_ratelimit()) ··· 1877 1877 n = neigh_lookup(&arp_tbl, &tip, dev); 1878 1878 1879 1879 if (n) { 1880 + struct vxlan_rdst *rdst = NULL; 1880 1881 struct vxlan_fdb *f; 1881 1882 struct sk_buff *reply; 1882 1883 ··· 1888 1887 1889 1888 rcu_read_lock(); 1890 1889 f = vxlan_find_mac_tx(vxlan, n->ha, vni); 1891 - if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) { 1890 + if (f) 1891 + rdst = first_remote_rcu(f); 1892 + if (rdst && vxlan_addr_any(&rdst->remote_ip)) { 1892 1893 /* bridge-local neighbor */ 1893 1894 neigh_release(n); 1894 1895 rcu_read_unlock(); ··· 2047 2044 n = neigh_lookup(ipv6_stub->nd_tbl, &msg->target, dev); 2048 2045 2049 2046 if (n) { 2047 + struct vxlan_rdst *rdst = NULL; 2050 2048 struct vxlan_fdb *f; 2051 2049 struct sk_buff *reply; 2052 2050 ··· 2057 2053 } 2058 2054 2059 2055 f = vxlan_find_mac_tx(vxlan, n->ha, vni); 2060 - if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) { 2056 + if (f) 2057 + rdst = first_remote_rcu(f); 2058 + if (rdst && vxlan_addr_any(&rdst->remote_ip)) { 2061 2059 /* bridge-local neighbor */ 2062 2060 neigh_release(n); 2063 2061 goto out;
+1 -3
drivers/net/vxlan/vxlan_private.h
··· 61 61 return &vn->sock_list[hash_32(ntohs(port), PORT_HASH_BITS)]; 62 62 } 63 63 64 - /* First remote destination for a forwarding entry. 65 - * Guaranteed to be non-NULL because remotes are never deleted. 66 - */ 64 + /* First remote destination for a forwarding entry. */ 67 65 static inline struct vxlan_rdst *first_remote_rcu(struct vxlan_fdb *fdb) 68 66 { 69 67 if (rcu_access_pointer(fdb->nh))
+2
drivers/net/wireless/ath/ath11k/core.h
··· 411 411 bool do_not_send_tmpl; 412 412 struct ath11k_arp_ns_offload arp_ns_offload; 413 413 struct ath11k_rekey_data rekey_data; 414 + u32 num_stations; 415 + bool reinstall_group_keys; 414 416 415 417 struct ath11k_reg_tpc_power_info reg_tpc_info; 416 418
+102 -9
drivers/net/wireless/ath/ath11k/mac.c
··· 4317 4317 return first_errno; 4318 4318 } 4319 4319 4320 + static int ath11k_set_group_keys(struct ath11k_vif *arvif) 4321 + { 4322 + struct ath11k *ar = arvif->ar; 4323 + struct ath11k_base *ab = ar->ab; 4324 + const u8 *addr = arvif->bssid; 4325 + int i, ret, first_errno = 0; 4326 + struct ath11k_peer *peer; 4327 + 4328 + spin_lock_bh(&ab->base_lock); 4329 + peer = ath11k_peer_find(ab, arvif->vdev_id, addr); 4330 + spin_unlock_bh(&ab->base_lock); 4331 + 4332 + if (!peer) 4333 + return -ENOENT; 4334 + 4335 + for (i = 0; i < ARRAY_SIZE(peer->keys); i++) { 4336 + struct ieee80211_key_conf *key = peer->keys[i]; 4337 + 4338 + if (!key || (key->flags & IEEE80211_KEY_FLAG_PAIRWISE)) 4339 + continue; 4340 + 4341 + ret = ath11k_install_key(arvif, key, SET_KEY, addr, 4342 + WMI_KEY_GROUP); 4343 + if (ret < 0 && first_errno == 0) 4344 + first_errno = ret; 4345 + 4346 + if (ret < 0) 4347 + ath11k_warn(ab, "failed to set group key of idx %d for vdev %d: %d\n", 4348 + i, arvif->vdev_id, ret); 4349 + } 4350 + 4351 + return first_errno; 4352 + } 4353 + 4320 4354 static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, 4321 4355 struct ieee80211_vif *vif, struct ieee80211_sta *sta, 4322 4356 struct ieee80211_key_conf *key) ··· 4360 4326 struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif); 4361 4327 struct ath11k_peer *peer; 4362 4328 struct ath11k_sta *arsta; 4329 + bool is_ap_with_no_sta; 4363 4330 const u8 *peer_addr; 4364 4331 int ret = 0; 4365 4332 u32 flags = 0; ··· 4421 4386 else 4422 4387 flags |= WMI_KEY_GROUP; 4423 4388 4424 - ret = ath11k_install_key(arvif, key, cmd, peer_addr, flags); 4425 - if (ret) { 4426 - ath11k_warn(ab, "ath11k_install_key failed (%d)\n", ret); 4427 - goto exit; 4428 - } 4389 + ath11k_dbg(ar->ab, ATH11K_DBG_MAC, 4390 + "%s for peer %pM on vdev %d flags 0x%X, type = %d, num_sta %d\n", 4391 + cmd == SET_KEY ? "SET_KEY" : "DEL_KEY", peer_addr, arvif->vdev_id, 4392 + flags, arvif->vdev_type, arvif->num_stations); 4429 4393 4430 - ret = ath11k_dp_peer_rx_pn_replay_config(arvif, peer_addr, cmd, key); 4431 - if (ret) { 4432 - ath11k_warn(ab, "failed to offload PN replay detection %d\n", ret); 4433 - goto exit; 4394 + /* Allow group key clearing only in AP mode when no stations are 4395 + * associated. There is a known race condition in firmware where 4396 + * group addressed packets may be dropped if the key is cleared 4397 + * and immediately set again during rekey. 4398 + * 4399 + * During GTK rekey, mac80211 issues a clear key (if the old key 4400 + * exists) followed by an install key operation for same key 4401 + * index. This causes ath11k to send two WMI commands in quick 4402 + * succession: one to clear the old key and another to install the 4403 + * new key in the same slot. 4404 + * 4405 + * Under certain conditions—especially under high load or time 4406 + * sensitive scenarios, firmware may process these commands 4407 + * asynchronously in a way that firmware assumes the key is 4408 + * cleared whereas hardware has a valid key. This inconsistency 4409 + * between hardware and firmware leads to group addressed packet 4410 + * drops after rekey. 4411 + * Only setting the same key again can restore a valid key in 4412 + * firmware and allow packets to be transmitted. 4413 + * 4414 + * There is a use case where an AP can transition from Secure mode 4415 + * to open mode without a vdev restart by just deleting all 4416 + * associated peers and clearing key, Hence allow clear key for 4417 + * that case alone. Mark arvif->reinstall_group_keys in such cases 4418 + * and reinstall the same key when the first peer is added, 4419 + * allowing firmware to recover from the race if it had occurred. 4420 + */ 4421 + 4422 + is_ap_with_no_sta = (vif->type == NL80211_IFTYPE_AP && 4423 + !arvif->num_stations); 4424 + if ((flags & WMI_KEY_PAIRWISE) || cmd == SET_KEY || is_ap_with_no_sta) { 4425 + ret = ath11k_install_key(arvif, key, cmd, peer_addr, flags); 4426 + if (ret) { 4427 + ath11k_warn(ab, "ath11k_install_key failed (%d)\n", ret); 4428 + goto exit; 4429 + } 4430 + 4431 + ret = ath11k_dp_peer_rx_pn_replay_config(arvif, peer_addr, cmd, key); 4432 + if (ret) { 4433 + ath11k_warn(ab, "failed to offload PN replay detection %d\n", 4434 + ret); 4435 + goto exit; 4436 + } 4437 + 4438 + if ((flags & WMI_KEY_GROUP) && cmd == SET_KEY && is_ap_with_no_sta) 4439 + arvif->reinstall_group_keys = true; 4434 4440 } 4435 4441 4436 4442 spin_lock_bh(&ab->base_lock); ··· 5070 4994 return -ENOBUFS; 5071 4995 5072 4996 ar->num_stations++; 4997 + arvif->num_stations++; 5073 4998 5074 4999 return 0; 5075 5000 } ··· 5086 5009 return; 5087 5010 5088 5011 ar->num_stations--; 5012 + arvif->num_stations--; 5089 5013 } 5090 5014 5091 5015 static u32 ath11k_mac_ieee80211_sta_bw_to_wmi(struct ath11k *ar, ··· 9616 9538 ath11k_warn(ab, "refusing to associate station: too many connected already (%d)\n", 9617 9539 ar->max_num_stations); 9618 9540 goto exit; 9541 + } 9542 + 9543 + /* Driver allows the DEL KEY followed by SET KEY sequence for 9544 + * group keys for only when there is no clients associated, if at 9545 + * all firmware has entered the race during that window, 9546 + * reinstalling the same key when the first sta connects will allow 9547 + * firmware to recover from the race. 9548 + */ 9549 + if (arvif->num_stations == 1 && arvif->reinstall_group_keys) { 9550 + ath11k_dbg(ab, ATH11K_DBG_MAC, "set group keys on 1st station add for vdev %d\n", 9551 + arvif->vdev_id); 9552 + ret = ath11k_set_group_keys(arvif); 9553 + if (ret) 9554 + goto dec_num_station; 9555 + arvif->reinstall_group_keys = false; 9619 9556 } 9620 9557 9621 9558 arsta->rx_stats = kzalloc(sizeof(*arsta->rx_stats), GFP_KERNEL);
+1
drivers/net/wireless/ath/ath12k/wmi.c
··· 2423 2423 2424 2424 eml_cap = arg->ml.eml_cap; 2425 2425 if (u16_get_bits(eml_cap, IEEE80211_EML_CAP_EMLSR_SUPP)) { 2426 + ml_params->flags |= cpu_to_le32(ATH12K_WMI_FLAG_MLO_EMLSR_SUPPORT); 2426 2427 /* Padding delay */ 2427 2428 eml_pad_delay = ieee80211_emlsr_pad_delay_in_us(eml_cap); 2428 2429 ml_params->emlsr_padding_delay_us = cpu_to_le32(eml_pad_delay);
+2 -4
drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
··· 393 393 if (!cfg->btcoex) 394 394 return; 395 395 396 - if (cfg->btcoex->timer_on) { 397 - cfg->btcoex->timer_on = false; 398 - timer_shutdown_sync(&cfg->btcoex->timer); 399 - } 396 + timer_shutdown_sync(&cfg->btcoex->timer); 397 + cfg->btcoex->timer_on = false; 400 398 401 399 cancel_work_sync(&cfg->btcoex->work); 402 400
+24 -1
drivers/net/wireless/intel/iwlwifi/fw/acpi.c
··· 169 169 170 170 BUILD_BUG_ON(ARRAY_SIZE(acpi_dsm_size) != DSM_FUNC_NUM_FUNCS); 171 171 172 - if (WARN_ON(func >= ARRAY_SIZE(acpi_dsm_size))) 172 + if (WARN_ON(func >= ARRAY_SIZE(acpi_dsm_size) || !func)) 173 173 return -EINVAL; 174 174 175 175 expected_size = acpi_dsm_size[func]; ··· 177 177 /* Currently all ACPI DSMs are either 8-bit or 32-bit */ 178 178 if (expected_size != sizeof(u8) && expected_size != sizeof(u32)) 179 179 return -EOPNOTSUPP; 180 + 181 + if (!fwrt->acpi_dsm_funcs_valid) { 182 + ret = iwl_acpi_get_dsm_integer(fwrt->dev, ACPI_DSM_REV, 183 + DSM_FUNC_QUERY, 184 + &iwl_guid, &tmp, 185 + acpi_dsm_size[DSM_FUNC_QUERY]); 186 + if (ret) { 187 + /* always indicate BIT(0) to avoid re-reading */ 188 + fwrt->acpi_dsm_funcs_valid = BIT(0); 189 + return ret; 190 + } 191 + 192 + IWL_DEBUG_RADIO(fwrt, "ACPI DSM validity bitmap 0x%x\n", 193 + (u32)tmp); 194 + /* always indicate BIT(0) to avoid re-reading */ 195 + fwrt->acpi_dsm_funcs_valid = tmp | BIT(0); 196 + } 197 + 198 + if (!(fwrt->acpi_dsm_funcs_valid & BIT(func))) { 199 + IWL_DEBUG_RADIO(fwrt, "ACPI DSM %d not indicated as valid\n", 200 + func); 201 + return -ENODATA; 202 + } 180 203 181 204 ret = iwl_acpi_get_dsm_integer(fwrt->dev, ACPI_DSM_REV, func, 182 205 &iwl_guid, &tmp, expected_size);
+8
drivers/net/wireless/intel/iwlwifi/fw/runtime.h
··· 113 113 * @phy_filters: specific phy filters as read from WPFC BIOS table 114 114 * @ppag_bios_rev: PPAG BIOS revision 115 115 * @ppag_bios_source: see &enum bios_source 116 + * @acpi_dsm_funcs_valid: bitmap indicating which DSM values are valid, 117 + * zero (default initialization) means it hasn't been read yet, 118 + * and BIT(0) is set when it has since function 0 also has this 119 + * bitmap and is always supported 116 120 */ 117 121 struct iwl_fw_runtime { 118 122 struct iwl_trans *trans; ··· 193 189 bool uats_valid; 194 190 u8 uefi_tables_lock_status; 195 191 struct iwl_phy_specific_cfg phy_filters; 192 + 193 + #ifdef CONFIG_ACPI 194 + u32 acpi_dsm_funcs_valid; 195 + #endif 196 196 }; 197 197 198 198 void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans,
+6
drivers/net/wireless/intel/iwlwifi/fw/uefi.c
··· 747 747 goto out; 748 748 } 749 749 750 + if (!(data->functions[DSM_FUNC_QUERY] & BIT(func))) { 751 + IWL_DEBUG_RADIO(fwrt, "DSM func %d not in 0x%x\n", 752 + func, data->functions[DSM_FUNC_QUERY]); 753 + goto out; 754 + } 755 + 750 756 *value = data->functions[func]; 751 757 752 758 IWL_DEBUG_RADIO(fwrt,
+17 -5
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 673 673 674 674 IWL_DEV_INFO(iwl6005_n_cfg, iwl6005_2agn_sff_name, 675 675 DEVICE(0x0082), SUBDEV_MASKED(0xC000, 0xF000)), 676 + IWL_DEV_INFO(iwl6005_n_cfg, iwl6005_2agn_sff_name, 677 + DEVICE(0x0085), SUBDEV_MASKED(0xC000, 0xF000)), 676 678 IWL_DEV_INFO(iwl6005_n_cfg, iwl6005_2agn_d_name, 677 679 DEVICE(0x0082), SUBDEV(0x4820)), 678 680 IWL_DEV_INFO(iwl6005_n_cfg, iwl6005_2agn_mow1_name, ··· 731 729 DEVICE(0x0083), SUBDEV_MASKED(0x5, 0xF)), 732 730 IWL_DEV_INFO(iwl1000_bg_cfg, iwl1000_bg_name, 733 731 DEVICE(0x0083), SUBDEV_MASKED(0x6, 0xF)), 732 + IWL_DEV_INFO(iwl1000_bgn_cfg, iwl1000_bgn_name, 733 + DEVICE(0x0084), SUBDEV_MASKED(0x5, 0xF)), 734 734 IWL_DEV_INFO(iwl1000_bg_cfg, iwl1000_bg_name, 735 - DEVICE(0x0084), SUBDEV(0x1216)), 736 - IWL_DEV_INFO(iwl1000_bg_cfg, iwl1000_bg_name, 737 - DEVICE(0x0084), SUBDEV(0x1316)), 735 + DEVICE(0x0084), SUBDEV_MASKED(0x6, 0xF)), 738 736 739 737 /* 100 Series WiFi */ 740 738 IWL_DEV_INFO(iwl100_bgn_cfg, iwl100_bgn_name, ··· 966 964 DEVICE(0x24F3), SUBDEV(0x0004)), 967 965 IWL_DEV_INFO(iwl8260_cfg, iwl8260_2n_name, 968 966 DEVICE(0x24F3), SUBDEV(0x0044)), 967 + IWL_DEV_INFO(iwl8260_cfg, iwl8260_2ac_name, 968 + DEVICE(0x24F4)), 969 + IWL_DEV_INFO(iwl8260_cfg, iwl4165_2ac_name, 970 + DEVICE(0x24F5)), 971 + IWL_DEV_INFO(iwl8260_cfg, iwl4165_2ac_name, 972 + DEVICE(0x24F6)), 969 973 IWL_DEV_INFO(iwl8265_cfg, iwl8265_2ac_name, 970 974 DEVICE(0x24FD)), 971 975 IWL_DEV_INFO(iwl8265_cfg, iwl8275_2ac_name, ··· 1230 1222 * Note: MAC (bits 0:7) will be cleared upon suspend even with wowlan, 1231 1223 * but not bits [15:8]. So if we have bits set in lower word, assume 1232 1224 * the device is alive. 1225 + * Alternatively, if the scratch value is 0xFFFFFFFF, then we no longer 1226 + * have access to the device and consider it powered off. 1233 1227 * For older devices, just try silently to grab the NIC. 1234 1228 */ 1235 1229 if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) { 1236 - if (!(iwl_read32(trans, CSR_FUNC_SCRATCH) & 1237 - CSR_FUNC_SCRATCH_POWER_OFF_MASK)) 1230 + u32 scratch = iwl_read32(trans, CSR_FUNC_SCRATCH); 1231 + 1232 + if (!(scratch & CSR_FUNC_SCRATCH_POWER_OFF_MASK) || 1233 + scratch == ~0U) 1238 1234 device_was_powered_off = true; 1239 1235 } else { 1240 1236 /*
+2 -1
drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/tx.c
··· 2092 2092 break; 2093 2093 } 2094 2094 2095 - if (trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_AX210) 2095 + if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_9000 && 2096 + trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_AX210) 2096 2097 len = DIV_ROUND_UP(len, 4); 2097 2098 2098 2099 if (WARN_ON(len > 0xFFF || write_ptr >= TFD_QUEUE_SIZE_MAX))
+6 -3
drivers/net/wireless/marvell/libertas/cfg.c
··· 1151 1151 /* add SSID TLV */ 1152 1152 rcu_read_lock(); 1153 1153 ssid_eid = ieee80211_bss_get_ie(bss, WLAN_EID_SSID); 1154 - if (ssid_eid) 1155 - pos += lbs_add_ssid_tlv(pos, ssid_eid + 2, ssid_eid[1]); 1156 - else 1154 + if (ssid_eid) { 1155 + u32 ssid_len = min(ssid_eid[1], IEEE80211_MAX_SSID_LEN); 1156 + 1157 + pos += lbs_add_ssid_tlv(pos, ssid_eid + 2, ssid_len); 1158 + } else { 1157 1159 lbs_deb_assoc("no SSID\n"); 1160 + } 1158 1161 rcu_read_unlock(); 1159 1162 1160 1163 /* add DS param TLV */
+3 -2
drivers/net/wireless/marvell/mwifiex/cfg80211.c
··· 4673 4673 * additional active scan request for hidden SSIDs on passive channels. 4674 4674 */ 4675 4675 adapter->num_in_chan_stats = 2 * (n_channels_bg + n_channels_a); 4676 - adapter->chan_stats = vmalloc(array_size(sizeof(*adapter->chan_stats), 4677 - adapter->num_in_chan_stats)); 4676 + adapter->chan_stats = kcalloc(adapter->num_in_chan_stats, 4677 + sizeof(*adapter->chan_stats), 4678 + GFP_KERNEL); 4678 4679 4679 4680 if (!adapter->chan_stats) 4680 4681 return -ENOMEM;
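The mwifiex hunk above swaps an open-coded vmalloc(array_size(...)) for kcalloc(), whose internal count * size multiplication is overflow-checked (and which matches the kfree() used on the error paths in main.c below). The userspace analogue of that property is calloc(); a minimal sketch of the difference, assuming glibc behavior (the function name is illustrative, not from the driver):

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate an array of n elements of sz bytes each.  calloc() verifies
 * that n * sz does not overflow size_t and fails with NULL if it would,
 * whereas a bare malloc(n * sz) silently wraps and under-allocates. */
static void *alloc_array(size_t n, size_t sz)
{
    return calloc(n, sz);
}
```

The kernel-side kcalloc()/array_size() helpers give the same guarantee in-tree, which is why converting multiply-then-allocate call sites is a common hardening change.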
+2 -2
drivers/net/wireless/marvell/mwifiex/main.c
··· 642 642 goto done; 643 643 644 644 err_add_intf: 645 - vfree(adapter->chan_stats); 645 + kfree(adapter->chan_stats); 646 646 err_init_chan_scan: 647 647 wiphy_unregister(adapter->wiphy); 648 648 wiphy_free(adapter->wiphy); ··· 1485 1485 wiphy_free(adapter->wiphy); 1486 1486 adapter->wiphy = NULL; 1487 1487 1488 - vfree(adapter->chan_stats); 1488 + kfree(adapter->chan_stats); 1489 1489 mwifiex_free_cmd_buffers(adapter); 1490 1490 } 1491 1491
+42 -1
drivers/net/wireless/mediatek/mt76/mac80211.c
··· 818 818 } 819 819 EXPORT_SYMBOL_GPL(mt76_free_device); 820 820 821 + static void mt76_reset_phy(struct mt76_phy *phy) 822 + { 823 + if (!phy) 824 + return; 825 + 826 + INIT_LIST_HEAD(&phy->tx_list); 827 + } 828 + 829 + void mt76_reset_device(struct mt76_dev *dev) 830 + { 831 + int i; 832 + 833 + rcu_read_lock(); 834 + for (i = 0; i < ARRAY_SIZE(dev->wcid); i++) { 835 + struct mt76_wcid *wcid; 836 + 837 + wcid = rcu_dereference(dev->wcid[i]); 838 + if (!wcid) 839 + continue; 840 + 841 + wcid->sta = 0; 842 + mt76_wcid_cleanup(dev, wcid); 843 + rcu_assign_pointer(dev->wcid[i], NULL); 844 + } 845 + rcu_read_unlock(); 846 + 847 + INIT_LIST_HEAD(&dev->wcid_list); 848 + INIT_LIST_HEAD(&dev->sta_poll_list); 849 + dev->vif_mask = 0; 850 + memset(dev->wcid_mask, 0, sizeof(dev->wcid_mask)); 851 + 852 + mt76_reset_phy(&dev->phy); 853 + for (i = 0; i < ARRAY_SIZE(dev->phys); i++) 854 + mt76_reset_phy(dev->phys[i]); 855 + } 856 + EXPORT_SYMBOL_GPL(mt76_reset_device); 857 + 821 858 struct mt76_phy *mt76_vif_phy(struct ieee80211_hw *hw, 822 859 struct ieee80211_vif *vif) 823 860 { ··· 1716 1679 skb_queue_splice_tail_init(&wcid->tx_pending, &list); 1717 1680 spin_unlock(&wcid->tx_pending.lock); 1718 1681 1682 + spin_lock(&wcid->tx_offchannel.lock); 1683 + skb_queue_splice_tail_init(&wcid->tx_offchannel, &list); 1684 + spin_unlock(&wcid->tx_offchannel.lock); 1685 + 1719 1686 spin_unlock_bh(&phy->tx_lock); 1720 1687 1721 1688 while ((skb = __skb_dequeue(&list)) != NULL) { ··· 1731 1690 1732 1691 void mt76_wcid_add_poll(struct mt76_dev *dev, struct mt76_wcid *wcid) 1733 1692 { 1734 - if (test_bit(MT76_MCU_RESET, &dev->phy.state)) 1693 + if (test_bit(MT76_MCU_RESET, &dev->phy.state) || !wcid->sta) 1735 1694 return; 1736 1695 1737 1696 spin_lock_bh(&dev->sta_poll_lock);
+1
drivers/net/wireless/mediatek/mt76/mt76.h
··· 1243 1243 struct ieee80211_rate *rates, int n_rates); 1244 1244 void mt76_unregister_device(struct mt76_dev *dev); 1245 1245 void mt76_free_device(struct mt76_dev *dev); 1246 + void mt76_reset_device(struct mt76_dev *dev); 1246 1247 void mt76_unregister_phy(struct mt76_phy *phy); 1247 1248 1248 1249 struct mt76_phy *mt76_alloc_radio_phy(struct mt76_dev *dev, unsigned int size,
+5 -7
drivers/net/wireless/mediatek/mt76/mt7915/mac.c
··· 1460 1460 if (i == 10) 1461 1461 dev_err(dev->mt76.dev, "chip full reset failed\n"); 1462 1462 1463 - spin_lock_bh(&dev->mt76.sta_poll_lock); 1464 - while (!list_empty(&dev->mt76.sta_poll_list)) 1465 - list_del_init(dev->mt76.sta_poll_list.next); 1466 - spin_unlock_bh(&dev->mt76.sta_poll_lock); 1467 - 1468 - memset(dev->mt76.wcid_mask, 0, sizeof(dev->mt76.wcid_mask)); 1469 - dev->mt76.vif_mask = 0; 1470 1463 dev->phy.omac_mask = 0; 1471 1464 if (phy2) 1472 1465 phy2->omac_mask = 0; 1466 + 1467 + mt76_reset_device(&dev->mt76); 1468 + 1469 + INIT_LIST_HEAD(&dev->sta_rc_list); 1470 + INIT_LIST_HEAD(&dev->twt_list); 1473 1471 1474 1472 i = mt76_wcid_alloc(dev->mt76.wcid_mask, MT7915_WTBL_STA); 1475 1473 dev->mt76.global_wcid.idx = i;
+1 -4
drivers/net/wireless/mediatek/mt76/mt7921/main.c
··· 1459 1459 if (vif->type != NL80211_IFTYPE_STATION || !vif->cfg.assoc) 1460 1460 return -EOPNOTSUPP; 1461 1461 1462 - /* Avoid beacon loss due to the CAC(Channel Availability Check) time 1463 - * of the AP. 1464 - */ 1465 1462 if (!cfg80211_chandef_usable(hw->wiphy, &chsw->chandef, 1466 - IEEE80211_CHAN_RADAR)) 1463 + IEEE80211_CHAN_DISABLED)) 1467 1464 return -EOPNOTSUPP; 1468 1465 1469 1466 return 0;
+1 -1
drivers/net/wireless/mediatek/mt76/mt7925/mac.c
··· 1449 1449 sta = wcid_to_sta(wcid); 1450 1450 1451 1451 if (sta && likely(e->skb->protocol != cpu_to_be16(ETH_P_PAE))) 1452 - mt76_connac2_tx_check_aggr(sta, txwi); 1452 + mt7925_tx_check_aggr(sta, e->skb, wcid); 1453 1453 1454 1454 skb_pull(e->skb, headroom); 1455 1455 mt76_tx_complete_skb(mdev, e->wcid, e->skb);
+6 -1
drivers/net/wireless/mediatek/mt76/mt7925/main.c
··· 1191 1191 struct mt792x_bss_conf *mconf; 1192 1192 struct mt792x_link_sta *mlink; 1193 1193 1194 + if (vif->type == NL80211_IFTYPE_AP) 1195 + break; 1196 + 1194 1197 link_sta = mt792x_sta_to_link_sta(vif, sta, link_id); 1195 1198 if (!link_sta) 1196 1199 continue; ··· 2072 2069 GFP_KERNEL); 2073 2070 mlink = devm_kzalloc(dev->mt76.dev, sizeof(*mlink), 2074 2071 GFP_KERNEL); 2075 - if (!mconf || !mlink) 2072 + if (!mconf || !mlink) { 2073 + mt792x_mutex_release(dev); 2076 2074 return -ENOMEM; 2075 + } 2077 2076 } 2078 2077 2079 2078 mconfs[link_id] = mconf;
+8 -4
drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
··· 1834 1834 struct tlv *tlv; 1835 1835 u16 eml_cap; 1836 1836 1837 + if (!ieee80211_vif_is_mld(vif)) 1838 + return; 1839 + 1837 1840 tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_EHT_MLD, sizeof(*eht_mld)); 1838 1841 eht_mld = (struct sta_rec_eht_mld *)tlv; 1839 1842 eht_mld->mld_type = 0xff; 1840 - 1841 - if (!ieee80211_vif_is_mld(vif)) 1842 - return; 1843 1843 1844 1844 ext_capa = cfg80211_get_iftype_ext_capa(wiphy, 1845 1845 ieee80211_vif_type_p2p(vif)); ··· 1912 1912 struct mt76_dev *dev = phy->dev; 1913 1913 struct mt792x_bss_conf *mconf; 1914 1914 struct sk_buff *skb; 1915 + int conn_state; 1915 1916 1916 1917 mconf = mt792x_vif_to_link(mvif, info->wcid->link_id); 1917 1918 ··· 1921 1920 if (IS_ERR(skb)) 1922 1921 return PTR_ERR(skb); 1923 1922 1923 + conn_state = info->enable ? CONN_STATE_PORT_SECURE : 1924 + CONN_STATE_DISCONNECT; 1925 + 1924 1926 if (info->enable && info->link_sta) { 1925 1927 mt76_connac_mcu_sta_basic_tlv(dev, skb, info->link_conf, 1926 1928 info->link_sta, 1927 - info->enable, info->newly); 1929 + conn_state, info->newly); 1928 1930 mt7925_mcu_sta_phy_tlv(skb, info->vif, info->link_sta); 1929 1931 mt7925_mcu_sta_ht_tlv(skb, info->link_sta); 1930 1932 mt7925_mcu_sta_vht_tlv(skb, info->link_sta);
+38 -22
drivers/net/wireless/mediatek/mt76/mt7996/mac.c
··· 62 62 int i; 63 63 64 64 wcid = mt76_wcid_ptr(dev, idx); 65 - if (!wcid) 65 + if (!wcid || !wcid->sta) 66 66 return NULL; 67 67 68 68 if (!mt7996_band_valid(dev, band_idx)) ··· 903 903 IEEE80211_TX_CTRL_MLO_LINK); 904 904 905 905 mvif = vif ? (struct mt7996_vif *)vif->drv_priv : NULL; 906 - if (mvif) 907 - mlink = rcu_dereference(mvif->mt76.link[link_id]); 906 + if (mvif) { 907 + if (wcid->offchannel) 908 + mlink = rcu_dereference(mvif->mt76.offchannel_link); 909 + if (!mlink) 910 + mlink = rcu_dereference(mvif->mt76.link[link_id]); 911 + } 908 912 909 913 if (mlink) { 910 914 omac_idx = mlink->omac_idx; ··· 1247 1243 idx = FIELD_GET(MT_TXFREE_INFO_WLAN_ID, info); 1248 1244 wcid = mt76_wcid_ptr(dev, idx); 1249 1245 sta = wcid_to_sta(wcid); 1250 - if (!sta) 1246 + if (!sta) { 1247 + link_sta = NULL; 1251 1248 goto next; 1249 + } 1252 1250 1253 1251 link_sta = rcu_dereference(sta->link[wcid->link_id]); 1254 1252 if (!link_sta) ··· 1700 1694 static void 1701 1695 mt7996_update_vif_beacon(void *priv, u8 *mac, struct ieee80211_vif *vif) 1702 1696 { 1703 - struct ieee80211_hw *hw = priv; 1697 + struct ieee80211_bss_conf *link_conf; 1698 + struct mt7996_phy *phy = priv; 1699 + struct mt7996_dev *dev = phy->dev; 1700 + unsigned int link_id; 1701 + 1704 1702 1705 1703 switch (vif->type) { 1706 1704 case NL80211_IFTYPE_MESH_POINT: 1707 1705 case NL80211_IFTYPE_ADHOC: 1708 1706 case NL80211_IFTYPE_AP: 1709 - mt7996_mcu_add_beacon(hw, vif, &vif->bss_conf); 1710 1707 break; 1711 1708 default: 1712 - break; 1709 + return; 1713 1710 } 1711 + 1712 + for_each_vif_active_link(vif, link_conf, link_id) { 1713 + struct mt7996_vif_link *link; 1714 + 1715 + link = mt7996_vif_link(dev, vif, link_id); 1716 + if (!link || link->phy != phy) 1717 + continue; 1718 + 1719 + mt7996_mcu_add_beacon(dev->mt76.hw, vif, link_conf); 1720 + } 1721 + } 1722 + 1723 + void mt7996_mac_update_beacons(struct mt7996_phy *phy) 1724 + { 1725 + ieee80211_iterate_active_interfaces(phy->mt76->hw, 1726 + IEEE80211_IFACE_ITER_RESUME_ALL, 1727 + mt7996_update_vif_beacon, phy); 1714 1728 } 1715 1729 1716 1730 static void ··· 1738 1712 { 1739 1713 struct mt76_phy *phy2, *phy3; 1740 1714 1741 - ieee80211_iterate_active_interfaces(dev->mt76.hw, 1742 - IEEE80211_IFACE_ITER_RESUME_ALL, 1743 - mt7996_update_vif_beacon, dev->mt76.hw); 1715 + mt7996_mac_update_beacons(&dev->phy); 1744 1716 1745 1717 phy2 = dev->mt76.phys[MT_BAND1]; 1746 - if (!phy2) 1747 - return; 1748 - 1749 - ieee80211_iterate_active_interfaces(phy2->hw, 1750 - IEEE80211_IFACE_ITER_RESUME_ALL, 1751 - mt7996_update_vif_beacon, phy2->hw); 1718 + if (phy2) 1719 + mt7996_mac_update_beacons(phy2->priv); 1752 1720 1753 1721 phy3 = dev->mt76.phys[MT_BAND2]; 1754 - if (!phy3) 1755 - return; 1756 - 1757 - ieee80211_iterate_active_interfaces(phy3->hw, 1758 - IEEE80211_IFACE_ITER_RESUME_ALL, 1759 - mt7996_update_vif_beacon, phy3->hw); 1722 + if (phy3) 1723 + mt7996_mac_update_beacons(phy3->priv); 1760 1724 } 1761 1725 1762 1726 void mt7996_tx_token_put(struct mt7996_dev *dev)
+5
drivers/net/wireless/mediatek/mt76/mt7996/main.c
··· 516 516 struct mt7996_phy *phy = mphy->priv; 517 517 int ret; 518 518 519 + if (mphy->offchannel) 520 + mt7996_mac_update_beacons(phy); 521 + 519 522 ret = mt7996_mcu_set_chan_info(phy, UNI_CHANNEL_SWITCH); 520 523 if (ret) 521 524 goto out; ··· 536 533 537 534 mt7996_mac_reset_counters(phy); 538 535 phy->noise = 0; 536 + if (!mphy->offchannel) 537 + mt7996_mac_update_beacons(phy); 539 538 540 539 out: 541 540 ieee80211_queue_delayed_work(mphy->hw, &mphy->mac_work,
+10 -5
drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
··· 1879 1879 int mt7996_mcu_set_fixed_rate_ctrl(struct mt7996_dev *dev, 1880 1880 void *data, u16 version) 1881 1881 { 1882 + struct uni_header hdr = {}; 1882 1883 struct ra_fixed_rate *req; 1883 - struct uni_header hdr; 1884 1884 struct sk_buff *skb; 1885 1885 struct tlv *tlv; 1886 1886 int len; ··· 2755 2755 struct ieee80211_bss_conf *link_conf) 2756 2756 { 2757 2757 struct mt7996_dev *dev = mt7996_hw_dev(hw); 2758 - struct mt76_vif_link *mlink = mt76_vif_conf_link(&dev->mt76, vif, link_conf); 2758 + struct mt7996_vif_link *link = mt7996_vif_conf_link(dev, vif, link_conf); 2759 + struct mt76_vif_link *mlink = link ? &link->mt76 : NULL; 2759 2760 struct ieee80211_mutable_offsets offs; 2760 2761 struct ieee80211_tx_info *info; 2761 2762 struct sk_buff *skb, *rskb; 2762 2763 struct tlv *tlv; 2763 2764 struct bss_bcn_content_tlv *bcn; 2764 2765 int len, extra_len = 0; 2766 + bool enabled = link_conf->enable_beacon; 2765 2767 2766 2768 if (link_conf->nontransmitted) 2767 2769 return 0; ··· 2771 2769 if (!mlink) 2772 2770 return -EINVAL; 2772 + if (link->phy && link->phy->mt76->offchannel) 2773 + enabled = false; 2774 + 2774 2775 rskb = __mt7996_mcu_alloc_bss_req(&dev->mt76, mlink, 2775 2776 MT7996_MAX_BSS_OFFLOAD_SIZE); 2776 2777 if (IS_ERR(rskb)) 2777 2778 return PTR_ERR(rskb); 2778 2779 2779 2780 skb = ieee80211_beacon_get_template(hw, vif, &offs, link_conf->link_id); 2780 - if (link_conf->enable_beacon && !skb) { 2781 + if (enabled && !skb) { 2781 2782 dev_kfree_skb(rskb); 2782 2783 return -EINVAL; 2783 2784 } ··· 2799 2794 len = ALIGN(sizeof(*bcn) + MT_TXD_SIZE + extra_len, 4); 2800 2795 tlv = mt7996_mcu_add_uni_tlv(rskb, UNI_BSS_INFO_BCN_CONTENT, len); 2801 2796 bcn = (struct bss_bcn_content_tlv *)tlv; 2802 - bcn->enable = link_conf->enable_beacon; 2797 + bcn->enable = enabled; 2803 2798 if (!bcn->enable) 2804 2799 goto out; ··· 3377 3372 { 3378 3373 struct { 3379 3374 u8 __rsv[4]; 3380 - } __packed hdr; 3375 + } __packed hdr = {}; 3381 3376 struct hdr_trans_blacklist *req_blacklist; 3382 3377 struct hdr_trans_en *req_en; 3383 3378 struct sk_buff *skb;
+1
drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
··· 732 732 struct sk_buff *skb, struct mt76_wcid *wcid, 733 733 struct ieee80211_key_conf *key, int pid, 734 734 enum mt76_txq_id qid, u32 changed); 735 + void mt7996_mac_update_beacons(struct mt7996_phy *phy); 735 736 void mt7996_mac_set_coverage_class(struct mt7996_phy *phy); 736 737 void mt7996_mac_work(struct work_struct *work); 737 738 void mt7996_mac_reset_work(struct work_struct *work);
+6 -6
drivers/net/wireless/mediatek/mt76/tx.c
··· 332 332 struct mt76_wcid *wcid, struct sk_buff *skb) 333 333 { 334 334 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 335 + struct ieee80211_hdr *hdr = (void *)skb->data; 335 336 struct sk_buff_head *head; 336 337 337 338 if (mt76_testmode_enabled(phy)) { ··· 350 349 info->hw_queue |= FIELD_PREP(MT_TX_HW_QUEUE_PHY, phy->band_idx); 351 350 352 351 if ((info->flags & IEEE80211_TX_CTL_TX_OFFCHAN) || 353 - (info->control.flags & IEEE80211_TX_CTRL_DONT_USE_RATE_MASK)) 352 + ((info->control.flags & IEEE80211_TX_CTRL_DONT_USE_RATE_MASK) && 353 + ieee80211_is_probe_req(hdr->frame_control))) 354 354 head = &wcid->tx_offchannel; 355 355 else 356 356 head = &wcid->tx_pending; ··· 646 644 static void mt76_txq_schedule_pending(struct mt76_phy *phy) 647 645 { 648 646 LIST_HEAD(tx_list); 647 + int ret = 0; 649 648 650 649 if (list_empty(&phy->tx_list)) 651 650 return; ··· 658 655 list_splice_init(&phy->tx_list, &tx_list); 659 656 while (!list_empty(&tx_list)) { 660 657 struct mt76_wcid *wcid; 661 - int ret; 662 658 663 659 wcid = list_first_entry(&tx_list, struct mt76_wcid, tx_list); 664 660 list_del_init(&wcid->tx_list); 665 661 666 662 spin_unlock(&phy->tx_lock); 667 - ret = mt76_txq_schedule_pending_wcid(phy, wcid, &wcid->tx_offchannel); 663 + if (ret >= 0) 664 + ret = mt76_txq_schedule_pending_wcid(phy, wcid, &wcid->tx_offchannel); 668 665 if (ret >= 0 && !phy->offchannel) 669 666 ret = mt76_txq_schedule_pending_wcid(phy, wcid, &wcid->tx_pending); 670 667 spin_lock(&phy->tx_lock); ··· 673 670 !skb_queue_empty(&wcid->tx_offchannel) && 674 671 list_empty(&wcid->tx_list)) 675 672 list_add_tail(&wcid->tx_list, &phy->tx_list); 676 - 677 - if (ret < 0) 678 - break; 679 673 } 680 674 spin_unlock(&phy->tx_lock); 681 675
+28 -11
drivers/net/wireless/microchip/wilc1000/wlan_cfg.c
··· 41 41 }; 42 42 43 43 static const struct wilc_cfg_str g_cfg_str[] = { 44 - {WID_FIRMWARE_VERSION, NULL}, 45 - {WID_MAC_ADDR, NULL}, 46 - {WID_ASSOC_RES_INFO, NULL}, 47 - {WID_NIL, NULL} 44 + {WID_FIRMWARE_VERSION, 0, NULL}, 45 + {WID_MAC_ADDR, 0, NULL}, 46 + {WID_ASSOC_RES_INFO, 0, NULL}, 47 + {WID_NIL, 0, NULL} 48 48 }; 49 49 50 50 #define WILC_RESP_MSG_TYPE_CONFIG_REPLY 'R' ··· 147 147 148 148 switch (FIELD_GET(WILC_WID_TYPE, wid)) { 149 149 case WID_CHAR: 150 + len = 3; 151 + if (len + 2 > size) 152 + return; 153 + 150 154 while (cfg->b[i].id != WID_NIL && cfg->b[i].id != wid) 151 155 i++; 152 156 153 157 if (cfg->b[i].id == wid) 154 158 cfg->b[i].val = info[4]; 155 159 156 - len = 3; 157 160 break; 158 161 159 162 case WID_SHORT: 163 + len = 4; 164 + if (len + 2 > size) 165 + return; 166 + 160 167 while (cfg->hw[i].id != WID_NIL && cfg->hw[i].id != wid) 161 168 i++; 162 169 163 170 if (cfg->hw[i].id == wid) 164 171 cfg->hw[i].val = get_unaligned_le16(&info[4]); 165 172 166 - len = 4; 167 173 break; 168 174 169 175 case WID_INT: 176 + len = 6; 177 + if (len + 2 > size) 178 + return; 179 + 170 180 while (cfg->w[i].id != WID_NIL && cfg->w[i].id != wid) 171 181 i++; 172 182 173 183 if (cfg->w[i].id == wid) 174 184 cfg->w[i].val = get_unaligned_le32(&info[4]); 175 185 176 - len = 6; 177 186 break; 178 187 179 188 case WID_STR: 189 + len = 2 + get_unaligned_le16(&info[2]); 190 + 180 191 while (cfg->s[i].id != WID_NIL && cfg->s[i].id != wid) 181 192 i++; 182 193 183 - if (cfg->s[i].id == wid) 184 - memcpy(cfg->s[i].str, &info[2], 185 - get_unaligned_le16(&info[2]) + 2); 194 + if (cfg->s[i].id == wid) { 195 + if (len > cfg->s[i].len || (len + 2 > size)) 196 + return; 186 - 187 - len = 2 + get_unaligned_le16(&info[2]); 198 + memcpy(cfg->s[i].str, &info[2], 199 + len); 200 + } 201 + 188 202 break; 189 203 190 204 default: ··· 398 384 /* store the string cfg parameters */ 399 385 wl->cfg.s[i].id = WID_FIRMWARE_VERSION; 400 386 wl->cfg.s[i].str = str_vals->firmware_version; 387 + wl->cfg.s[i].len = sizeof(str_vals->firmware_version); 401 388 i++; 402 389 wl->cfg.s[i].id = WID_MAC_ADDR; 403 390 wl->cfg.s[i].str = str_vals->mac_address; 391 + wl->cfg.s[i].len = sizeof(str_vals->mac_address); 404 392 i++; 405 393 wl->cfg.s[i].id = WID_ASSOC_RES_INFO; 406 394 wl->cfg.s[i].str = str_vals->assoc_rsp; 395 + wl->cfg.s[i].len = sizeof(str_vals->assoc_rsp); 407 396 i++; 408 397 wl->cfg.s[i].id = WID_NIL; 409 398 wl->cfg.s[i].str = NULL;
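The wlan_cfg.c hunk above records a capacity for each string slot (cfg->s[i].len) and validates every claimed fragment length against both the received frame size and that capacity before the memcpy(). A standalone sketch of the same guard, with a simplified layout (a little-endian 16-bit length prefix followed by the payload; names and the exact field offsets are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy a length-prefixed payload (buf[0..1] = LE16 length, buf[2..] =
 * bytes) into dst.  Reject frames whose claimed length runs past the
 * end of the received frame or past the destination buffer. */
static int copy_len_prefixed(unsigned char *dst, size_t dst_len,
                             const unsigned char *buf, size_t frame_len)
{
    size_t len;

    if (frame_len < 2)
        return -1;
    len = (size_t)buf[0] | ((size_t)buf[1] << 8);
    if (len > dst_len || len + 2 > frame_len)
        return -1;   /* claimed length is a lie: drop the frame */
    memcpy(dst, &buf[2], len);
    return (int)len;
}
```

The key point mirrored from the patch: the length field is device-controlled input, so both bounds (source and destination) must be checked before copying, not derived again inside the memcpy().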
+3 -2
drivers/net/wireless/microchip/wilc1000/wlan_cfg.h
··· 24 24 25 25 struct wilc_cfg_str { 26 26 u16 id; 27 + u16 len; 27 28 u8 *str; 28 29 }; 29 30 30 31 struct wilc_cfg_str_vals { 31 - u8 mac_address[7]; 32 - u8 firmware_version[129]; 32 + u8 mac_address[8]; 33 + u8 firmware_version[130]; 33 34 u8 assoc_rsp[WILC_MAX_ASSOC_RESP_FRAME_SIZE]; 34 35 }; 35 36
+2 -2
drivers/net/wireless/ralink/rt2x00/Kconfig
··· 66 66 select RT2X00_LIB_PCI 67 67 select RT2X00_LIB_FIRMWARE 68 68 select RT2X00_LIB_CRYPTO 69 - select CRC_CCITT 70 69 select EEPROM_93CX6 71 70 help 72 71 This adds support for rt27xx/rt28xx/rt30xx wireless chipset family. ··· 141 142 select RT2X00_LIB_USB 142 143 select RT2X00_LIB_FIRMWARE 143 144 select RT2X00_LIB_CRYPTO 144 - select CRC_CCITT 145 145 help 146 146 This adds support for rt27xx/rt28xx/rt30xx wireless chipset family. 147 147 Supported chips: RT2770, RT2870 & RT3070, RT3071 & RT3072 ··· 215 217 216 218 config RT2800_LIB 217 219 tristate 220 + select CRC_CCITT 218 221 219 222 config RT2800_LIB_MMIO 220 223 tristate ··· 224 225 225 226 config RT2X00_LIB_MMIO 226 227 tristate 228 + select RT2X00_LIB 227 229 228 230 config RT2X00_LIB_PCI 229 231 tristate
+1 -1
drivers/net/wireless/st/cw1200/sta.c
··· 1291 1291 rcu_read_lock(); 1292 1292 ssidie = ieee80211_bss_get_ie(bss, WLAN_EID_SSID); 1293 1293 if (ssidie) { 1294 - join.ssid_len = ssidie[1]; 1294 + join.ssid_len = min(ssidie[1], IEEE80211_MAX_SSID_LEN); 1295 1295 memcpy(join.ssid, &ssidie[2], join.ssid_len); 1296 1296 } 1297 1297 rcu_read_unlock();
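The libertas and cw1200 hunks above fix the same bug class: the one-byte length field of an SSID element arrives in an unauthenticated frame, so it must be clamped to the 32-byte SSID buffer before copying. A minimal userspace sketch of the clamp (the constant mirrors IEEE80211_MAX_SSID_LEN; the function name is illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_SSID_LEN 32   /* mirrors IEEE80211_MAX_SSID_LEN */

/* Copy an SSID element (ie[0] = element ID, ie[1] = length,
 * ie[2..] = payload) into a fixed 32-byte buffer, clamping the
 * attacker-controlled length byte so memcpy() cannot overrun dst. */
static size_t copy_ssid(unsigned char dst[MAX_SSID_LEN],
                        const unsigned char *ie)
{
    size_t len = ie[1] < MAX_SSID_LEN ? ie[1] : MAX_SSID_LEN;

    memcpy(dst, &ie[2], len);
    return len;
}
```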
+11 -7
drivers/nvme/host/core.c
··· 903 903 u32 upper, lower; 904 904 u64 ref48; 905 905 906 + /* only type1 and type 2 PI formats have a reftag */ 907 + switch (ns->head->pi_type) { 908 + case NVME_NS_DPS_PI_TYPE1: 909 + case NVME_NS_DPS_PI_TYPE2: 910 + break; 911 + default: 912 + return; 913 + } 914 + 906 915 /* both rw and write zeroes share the same reftag format */ 907 916 switch (ns->head->guard_type) { 908 917 case NVME_NVM_NS_16B_GUARD: ··· 951 942 952 943 if (nvme_ns_has_pi(ns->head)) { 953 944 cmnd->write_zeroes.control |= cpu_to_le16(NVME_RW_PRINFO_PRACT); 954 - 955 - switch (ns->head->pi_type) { 956 - case NVME_NS_DPS_PI_TYPE1: 957 - case NVME_NS_DPS_PI_TYPE2: 958 - nvme_set_ref_tag(ns, cmnd, req); 959 - break; 960 - } 945 + nvme_set_ref_tag(ns, cmnd, req); 961 946 } 962 947 963 948 return BLK_STS_OK; ··· 1042 1039 if (WARN_ON_ONCE(!nvme_ns_has_pi(ns->head))) 1043 1040 return BLK_STS_NOTSUPP; 1044 1041 control |= NVME_RW_PRINFO_PRACT; 1042 + nvme_set_ref_tag(ns, cmnd, req); 1045 1043 } 1046 1044 1047 1045 if (bio_integrity_flagged(req->bio, BIP_CHECK_GUARD))
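The nvme hunk moves the PI-type switch into `nvme_set_ref_tag()` itself, so the callers (read/write, write-zeroes, and the new PRACT path) no longer each repeat the check. A sketch of that "guard in the callee" shape (the enum and counter here are illustrative, not the driver's actual types):

```c
#include <assert.h>

enum pi_type { PI_TYPE0, PI_TYPE1, PI_TYPE2, PI_TYPE3 };

static int ref_tags_set;	/* counts how often the tag was really set */

/* Put the "only Type 1/2 PI formats carry a reftag" rule inside the
 * helper, so every call site is covered without duplicating the
 * switch at each one. */
static void set_ref_tag(enum pi_type t)
{
	switch (t) {
	case PI_TYPE1:
	case PI_TYPE2:
		break;
	default:
		return;		/* no reftag for this PI format */
	}
	ref_tags_set++;		/* stand-in for the real reftag encoding */
}
```

The design choice mirrors the diff: centralizing the precondition also covers the new `nvme_set_ref_tag()` call added on the PRACT path.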
+4 -1
drivers/of/of_numa.c
··· 59 59 r = -EINVAL; 60 60 } 61 61 62 - for (i = 0; !r && !of_address_to_resource(np, i, &rsrc); i++) 62 + for (i = 0; !r && !of_address_to_resource(np, i, &rsrc); i++) { 63 63 r = numa_add_memblk(nid, rsrc.start, rsrc.end + 1); 64 + if (!r) 65 + node_set(nid, numa_nodes_parsed); 66 + } 64 67 65 68 if (!i || r) { 66 69 of_node_put(np);
-3
drivers/pcmcia/Kconfig
··· 250 250 config PCCARD_NONSTATIC 251 251 bool 252 252 253 - config PCCARD_IODYN 254 - bool 255 - 256 253 endif # PCCARD
-1
drivers/pcmcia/Makefile
··· 12 12 13 13 pcmcia_rsrc-y += rsrc_mgr.o 14 14 pcmcia_rsrc-$(CONFIG_PCCARD_NONSTATIC) += rsrc_nonstatic.o 15 - pcmcia_rsrc-$(CONFIG_PCCARD_IODYN) += rsrc_iodyn.o 16 15 obj-$(CONFIG_PCCARD) += pcmcia_rsrc.o 17 16 18 17
-17
drivers/pcmcia/cs.c
··· 229 229 EXPORT_SYMBOL(pcmcia_unregister_socket); 230 230 231 231 232 - struct pcmcia_socket *pcmcia_get_socket_by_nr(unsigned int nr) 233 - { 234 - struct pcmcia_socket *s; 235 - 236 - down_read(&pcmcia_socket_list_rwsem); 237 - list_for_each_entry(s, &pcmcia_socket_list, socket_list) 238 - if (s->sock == nr) { 239 - up_read(&pcmcia_socket_list_rwsem); 240 - return s; 241 - } 242 - up_read(&pcmcia_socket_list_rwsem); 243 - 244 - return NULL; 245 - 246 - } 247 - EXPORT_SYMBOL(pcmcia_get_socket_by_nr); 248 - 249 232 static int socket_reset(struct pcmcia_socket *skt) 250 233 { 251 234 int status, i;
-1
drivers/pcmcia/cs_internal.h
··· 116 116 extern const struct class pcmcia_socket_class; 117 117 118 118 int pccard_register_pcmcia(struct pcmcia_socket *s, struct pcmcia_callback *c); 119 - struct pcmcia_socket *pcmcia_get_socket_by_nr(unsigned int nr); 120 119 121 120 void pcmcia_parse_uevents(struct pcmcia_socket *socket, unsigned int events); 122 121 #define PCMCIA_UEVENT_EJECT 0x0001
+1 -1
drivers/pcmcia/ds.c
··· 1308 1308 * physically present, even if the call to this function returns 1309 1309 * non-NULL. Furthermore, the device driver most likely is unbound 1310 1310 * almost immediately, so the timeframe where pcmcia_dev_present 1311 - * returns NULL is probably really really small. 1311 + * returns NULL is probably really, really small. 1312 1312 */ 1313 1313 struct pcmcia_device *pcmcia_dev_present(struct pcmcia_device *_p_dev) 1314 1314 {
+9 -1
drivers/pcmcia/omap_cf.c
··· 215 215 return -EINVAL; 216 216 217 217 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 218 + if (!res) 219 + return -EINVAL; 218 220 219 221 cf = kzalloc(sizeof *cf, GFP_KERNEL); 220 222 if (!cf) ··· 304 302 kfree(cf); 305 303 } 306 304 307 - static struct platform_driver omap_cf_driver = { 305 + /* 306 + * omap_cf_remove() lives in .exit.text. For drivers registered via 307 + * platform_driver_probe() this is ok because they cannot get unbound at 308 + * runtime. So mark the driver struct with __refdata to prevent modpost 309 + * triggering a section mismatch warning. 310 + */ 311 + static struct platform_driver omap_cf_driver __refdata = { 308 312 .driver = { 309 313 .name = driver_name, 310 314 },
-168
drivers/pcmcia/rsrc_iodyn.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * rsrc_iodyn.c -- Resource management routines for MEM-static sockets. 4 - * 5 - * The initial developer of the original code is David A. Hinds 6 - * <dahinds@users.sourceforge.net>. Portions created by David A. Hinds 7 - * are Copyright (C) 1999 David A. Hinds. All Rights Reserved. 8 - * 9 - * (C) 1999 David A. Hinds 10 - */ 11 - 12 - #include <linux/slab.h> 13 - #include <linux/module.h> 14 - #include <linux/kernel.h> 15 - 16 - #include <pcmcia/ss.h> 17 - #include <pcmcia/cistpl.h> 18 - #include "cs_internal.h" 19 - 20 - 21 - struct pcmcia_align_data { 22 - unsigned long mask; 23 - unsigned long offset; 24 - }; 25 - 26 - static resource_size_t pcmcia_align(void *align_data, 27 - const struct resource *res, 28 - resource_size_t size, resource_size_t align) 29 - { 30 - struct pcmcia_align_data *data = align_data; 31 - resource_size_t start; 32 - 33 - start = (res->start & ~data->mask) + data->offset; 34 - if (start < res->start) 35 - start += data->mask + 1; 36 - 37 - #ifdef CONFIG_X86 38 - if (res->flags & IORESOURCE_IO) { 39 - if (start & 0x300) 40 - start = (start + 0x3ff) & ~0x3ff; 41 - } 42 - #endif 43 - 44 - #ifdef CONFIG_M68K 45 - if (res->flags & IORESOURCE_IO) { 46 - if ((res->start + size - 1) >= 1024) 47 - start = res->end; 48 - } 49 - #endif 50 - 51 - return start; 52 - } 53 - 54 - 55 - static struct resource *__iodyn_find_io_region(struct pcmcia_socket *s, 56 - unsigned long base, int num, 57 - unsigned long align) 58 - { 59 - struct resource *res = pcmcia_make_resource(0, num, IORESOURCE_IO, 60 - dev_name(&s->dev)); 61 - struct pcmcia_align_data data; 62 - unsigned long min = base; 63 - int ret; 64 - 65 - data.mask = align - 1; 66 - data.offset = base & data.mask; 67 - 68 - #ifdef CONFIG_PCI 69 - if (s->cb_dev) { 70 - ret = pci_bus_alloc_resource(s->cb_dev->bus, res, num, 1, 71 - min, 0, pcmcia_align, &data); 72 - } else 73 - #endif 74 - ret = allocate_resource(&ioport_resource, res, num, 
min, ~0UL, 75 - 1, pcmcia_align, &data); 76 - 77 - if (ret != 0) { 78 - kfree(res); 79 - res = NULL; 80 - } 81 - return res; 82 - } 83 - 84 - static int iodyn_find_io(struct pcmcia_socket *s, unsigned int attr, 85 - unsigned int *base, unsigned int num, 86 - unsigned int align, struct resource **parent) 87 - { 88 - int i, ret = 0; 89 - 90 - /* Check for an already-allocated window that must conflict with 91 - * what was asked for. It is a hack because it does not catch all 92 - * potential conflicts, just the most obvious ones. 93 - */ 94 - for (i = 0; i < MAX_IO_WIN; i++) { 95 - if (!s->io[i].res) 96 - continue; 97 - 98 - if (!*base) 99 - continue; 100 - 101 - if ((s->io[i].res->start & (align-1)) == *base) 102 - return -EBUSY; 103 - } 104 - 105 - for (i = 0; i < MAX_IO_WIN; i++) { 106 - struct resource *res = s->io[i].res; 107 - unsigned int try; 108 - 109 - if (res && (res->flags & IORESOURCE_BITS) != 110 - (attr & IORESOURCE_BITS)) 111 - continue; 112 - 113 - if (!res) { 114 - if (align == 0) 115 - align = 0x10000; 116 - 117 - res = s->io[i].res = __iodyn_find_io_region(s, *base, 118 - num, align); 119 - if (!res) 120 - return -EINVAL; 121 - 122 - *base = res->start; 123 - s->io[i].res->flags = 124 - ((res->flags & ~IORESOURCE_BITS) | 125 - (attr & IORESOURCE_BITS)); 126 - s->io[i].InUse = num; 127 - *parent = res; 128 - return 0; 129 - } 130 - 131 - /* Try to extend top of window */ 132 - try = res->end + 1; 133 - if ((*base == 0) || (*base == try)) { 134 - if (adjust_resource(s->io[i].res, res->start, 135 - resource_size(res) + num)) 136 - continue; 137 - *base = try; 138 - s->io[i].InUse += num; 139 - *parent = res; 140 - return 0; 141 - } 142 - 143 - /* Try to extend bottom of window */ 144 - try = res->start - num; 145 - if ((*base == 0) || (*base == try)) { 146 - if (adjust_resource(s->io[i].res, 147 - res->start - num, 148 - resource_size(res) + num)) 149 - continue; 150 - *base = try; 151 - s->io[i].InUse += num; 152 - *parent = res; 153 - return 0; 154 
- } 155 - } 156 - 157 - return -EINVAL; 158 - } 159 - 160 - 161 - struct pccard_resource_ops pccard_iodyn_ops = { 162 - .validate_mem = NULL, 163 - .find_io = iodyn_find_io, 164 - .find_mem = NULL, 165 - .init = static_init, 166 - .exit = NULL, 167 - }; 168 - EXPORT_SYMBOL(pccard_iodyn_ops);
+3 -1
drivers/pcmcia/rsrc_nonstatic.c
··· 375 375 376 376 if (validate && !s->fake_cis) { 377 377 /* move it to the validated data set */ 378 - add_interval(&s_data->mem_db_valid, base, size); 378 + ret = add_interval(&s_data->mem_db_valid, base, size); 379 + if (ret) 380 + return ret; 379 381 sub_interval(&s_data->mem_db, base, size); 380 382 } 381 383
+3 -2
drivers/pcmcia/socket_sysfs.c
··· 10 10 #include <linux/init.h> 11 11 #include <linux/kernel.h> 12 12 #include <linux/string.h> 13 + #include <linux/string_choices.h> 13 14 #include <linux/major.h> 14 15 #include <linux/errno.h> 15 16 #include <linux/mm.h> ··· 99 98 char *buf) 100 99 { 101 100 struct pcmcia_socket *s = to_socket(dev); 102 - return sysfs_emit(buf, "%s\n", s->state & SOCKET_SUSPEND ? "off" : "on"); 101 + return sysfs_emit(buf, "%s\n", str_off_on(s->state & SOCKET_SUSPEND)); 103 102 } 104 103 105 104 static ssize_t pccard_store_card_pm_state(struct device *dev, ··· 178 177 struct device_attribute *attr, char *buf) 179 178 { 180 179 struct pcmcia_socket *s = to_socket(dev); 181 - return sysfs_emit(buf, "%s\n", s->resource_setup_done ? "yes" : "no"); 180 + return sysfs_emit(buf, "%s\n", str_yes_no(s->resource_setup_done)); 182 181 } 183 182 184 183 static ssize_t pccard_store_resource(struct device *dev,
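The socket_sysfs hunk replaces open-coded ternaries with the `<linux/string_choices.h>` helpers. Their semantics, as implied by the substitution above (`str_X_Y()` yields `"X"` when the condition is true), amount to this sketch, which mirrors the behavior rather than the kernel header verbatim:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* str_X_Y() returns "X" when the condition is true, "Y" otherwise,
 * matching the ternaries the hunk replaces:
 *   s->state & SOCKET_SUSPEND ? "off" : "on"  ->  str_off_on(...)
 *   s->resource_setup_done ? "yes" : "no"     ->  str_yes_no(...) */
static inline const char *str_off_on(bool v) { return v ? "off" : "on"; }
static inline const char *str_yes_no(bool v) { return v ? "yes" : "no"; }
```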
+141 -50
drivers/perf/riscv_pmu_sbi.c
··· 59 59 #define PERF_EVENT_FLAG_USER_ACCESS BIT(SYSCTL_USER_ACCESS) 60 60 #define PERF_EVENT_FLAG_LEGACY BIT(SYSCTL_LEGACY) 61 61 62 - PMU_FORMAT_ATTR(event, "config:0-47"); 62 + PMU_FORMAT_ATTR(event, "config:0-55"); 63 63 PMU_FORMAT_ATTR(firmware, "config:62-63"); 64 64 65 65 static bool sbi_v2_available; 66 + static bool sbi_v3_available; 66 67 static DEFINE_STATIC_KEY_FALSE(sbi_pmu_snapshot_available); 67 68 #define sbi_pmu_snapshot_available() \ 68 69 static_branch_unlikely(&sbi_pmu_snapshot_available) ··· 100 99 /* Cache the available counters in a bitmask */ 101 100 static unsigned long cmask; 102 101 102 + static int pmu_event_find_cache(u64 config); 103 103 struct sbi_pmu_event_data { 104 104 union { 105 105 union { ··· 300 298 }, 301 299 }; 302 300 301 + static int pmu_sbi_check_event_info(void) 302 + { 303 + int num_events = ARRAY_SIZE(pmu_hw_event_map) + PERF_COUNT_HW_CACHE_MAX * 304 + PERF_COUNT_HW_CACHE_OP_MAX * PERF_COUNT_HW_CACHE_RESULT_MAX; 305 + struct riscv_pmu_event_info *event_info_shmem; 306 + phys_addr_t base_addr; 307 + int i, j, k, result = 0, count = 0; 308 + struct sbiret ret; 309 + 310 + event_info_shmem = kcalloc(num_events, sizeof(*event_info_shmem), GFP_KERNEL); 311 + if (!event_info_shmem) 312 + return -ENOMEM; 313 + 314 + for (i = 0; i < ARRAY_SIZE(pmu_hw_event_map); i++) 315 + event_info_shmem[count++].event_idx = pmu_hw_event_map[i].event_idx; 316 + 317 + for (i = 0; i < ARRAY_SIZE(pmu_cache_event_map); i++) { 318 + for (j = 0; j < ARRAY_SIZE(pmu_cache_event_map[i]); j++) { 319 + for (k = 0; k < ARRAY_SIZE(pmu_cache_event_map[i][j]); k++) 320 + event_info_shmem[count++].event_idx = 321 + pmu_cache_event_map[i][j][k].event_idx; 322 + } 323 + } 324 + 325 + base_addr = __pa(event_info_shmem); 326 + if (IS_ENABLED(CONFIG_32BIT)) 327 + ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_EVENT_GET_INFO, lower_32_bits(base_addr), 328 + upper_32_bits(base_addr), count, 0, 0, 0); 329 + else 330 + ret = sbi_ecall(SBI_EXT_PMU, 
SBI_EXT_PMU_EVENT_GET_INFO, base_addr, 0, 331 + count, 0, 0, 0); 332 + if (ret.error) { 333 + result = -EOPNOTSUPP; 334 + goto free_mem; 335 + } 336 + 337 + for (i = 0; i < ARRAY_SIZE(pmu_hw_event_map); i++) { 338 + if (!(event_info_shmem[i].output & RISCV_PMU_EVENT_INFO_OUTPUT_MASK)) 339 + pmu_hw_event_map[i].event_idx = -ENOENT; 340 + } 341 + 342 + count = ARRAY_SIZE(pmu_hw_event_map); 343 + 344 + for (i = 0; i < ARRAY_SIZE(pmu_cache_event_map); i++) { 345 + for (j = 0; j < ARRAY_SIZE(pmu_cache_event_map[i]); j++) { 346 + for (k = 0; k < ARRAY_SIZE(pmu_cache_event_map[i][j]); k++) { 347 + if (!(event_info_shmem[count].output & 348 + RISCV_PMU_EVENT_INFO_OUTPUT_MASK)) 349 + pmu_cache_event_map[i][j][k].event_idx = -ENOENT; 350 + count++; 351 + } 352 + } 353 + } 354 + 355 + free_mem: 356 + kfree(event_info_shmem); 357 + 358 + return result; 359 + } 360 + 303 361 static void pmu_sbi_check_event(struct sbi_pmu_event_data *edata) 304 362 { 305 363 struct sbiret ret; ··· 377 315 378 316 static void pmu_sbi_check_std_events(struct work_struct *work) 379 317 { 318 + int ret; 319 + 320 + if (sbi_v3_available) { 321 + ret = pmu_sbi_check_event_info(); 322 + if (ret) 323 + pr_err("pmu_sbi_check_event_info failed with error %d\n", ret); 324 + return; 325 + } 326 + 380 327 for (int i = 0; i < ARRAY_SIZE(pmu_hw_event_map); i++) 381 328 pmu_sbi_check_event(&pmu_hw_event_map[i]); 382 329 ··· 412 341 413 342 return (info->type == SBI_PMU_CTR_TYPE_FW) ? 
true : false; 414 343 } 344 + 345 + int riscv_pmu_get_event_info(u32 type, u64 config, u64 *econfig) 346 + { 347 + int ret = -ENOENT; 348 + 349 + switch (type) { 350 + case PERF_TYPE_HARDWARE: 351 + if (config >= PERF_COUNT_HW_MAX) 352 + return -EINVAL; 353 + ret = pmu_hw_event_map[config].event_idx; 354 + break; 355 + case PERF_TYPE_HW_CACHE: 356 + ret = pmu_event_find_cache(config); 357 + break; 358 + case PERF_TYPE_RAW: 359 + /* 360 + * As per SBI v0.3 specification, 361 + * -- the upper 16 bits must be unused for a hardware raw event. 362 + * As per SBI v2.0 specification, 363 + * -- the upper 8 bits must be unused for a hardware raw event. 364 + * Bits 63:62 are used to distinguish between raw events 365 + * 00 - Hardware raw event 366 + * 10 - SBI firmware events 367 + * 11 - Risc-V platform specific firmware event 368 + */ 369 + switch (config >> 62) { 370 + case 0: 371 + if (sbi_v3_available) { 372 + /* Return error any bits [56-63] is set as it is not allowed by the spec */ 373 + if (!(config & ~RISCV_PMU_RAW_EVENT_V2_MASK)) { 374 + if (econfig) 375 + *econfig = config & RISCV_PMU_RAW_EVENT_V2_MASK; 376 + ret = RISCV_PMU_RAW_EVENT_V2_IDX; 377 + } 378 + /* Return error any bits [48-63] is set as it is not allowed by the spec */ 379 + } else if (!(config & ~RISCV_PMU_RAW_EVENT_MASK)) { 380 + if (econfig) 381 + *econfig = config & RISCV_PMU_RAW_EVENT_MASK; 382 + ret = RISCV_PMU_RAW_EVENT_IDX; 383 + } 384 + break; 385 + case 2: 386 + ret = (config & 0xFFFF) | (SBI_PMU_EVENT_TYPE_FW << 16); 387 + break; 388 + case 3: 389 + /* 390 + * For Risc-V platform specific firmware events 391 + * Event code - 0xFFFF 392 + * Event data - raw event encoding 393 + */ 394 + ret = SBI_PMU_EVENT_TYPE_FW << 16 | RISCV_PLAT_FW_EVENT; 395 + if (econfig) 396 + *econfig = config & RISCV_PMU_PLAT_FW_EVENT_MASK; 397 + break; 398 + default: 399 + break; 400 + } 401 + break; 402 + default: 403 + break; 404 + } 405 + 406 + return ret; 407 + } 408 + 
EXPORT_SYMBOL_GPL(riscv_pmu_get_event_info); 415 409 416 410 /* 417 411 * Returns the counter width of a programmable counter and number of hardware ··· 643 507 { 644 508 u32 type = event->attr.type; 645 509 u64 config = event->attr.config; 646 - int ret = -ENOENT; 647 510 648 511 /* 649 512 * Ensure we are finished checking standard hardware events for ··· 650 515 */ 651 516 flush_work(&check_std_events_work); 652 517 653 - switch (type) { 654 - case PERF_TYPE_HARDWARE: 655 - if (config >= PERF_COUNT_HW_MAX) 656 - return -EINVAL; 657 - ret = pmu_hw_event_map[event->attr.config].event_idx; 658 - break; 659 - case PERF_TYPE_HW_CACHE: 660 - ret = pmu_event_find_cache(config); 661 - break; 662 - case PERF_TYPE_RAW: 663 - /* 664 - * As per SBI specification, the upper 16 bits must be unused 665 - * for a hardware raw event. 666 - * Bits 63:62 are used to distinguish between raw events 667 - * 00 - Hardware raw event 668 - * 10 - SBI firmware events 669 - * 11 - Risc-V platform specific firmware event 670 - */ 671 - 672 - switch (config >> 62) { 673 - case 0: 674 - /* Return error any bits [48-63] is set as it is not allowed by the spec */ 675 - if (!(config & ~RISCV_PMU_RAW_EVENT_MASK)) { 676 - *econfig = config & RISCV_PMU_RAW_EVENT_MASK; 677 - ret = RISCV_PMU_RAW_EVENT_IDX; 678 - } 679 - break; 680 - case 2: 681 - ret = (config & 0xFFFF) | (SBI_PMU_EVENT_TYPE_FW << 16); 682 - break; 683 - case 3: 684 - /* 685 - * For Risc-V platform specific firmware events 686 - * Event code - 0xFFFF 687 - * Event data - raw event encoding 688 - */ 689 - ret = SBI_PMU_EVENT_TYPE_FW << 16 | RISCV_PLAT_FW_EVENT; 690 - *econfig = config & RISCV_PMU_PLAT_FW_EVENT_MASK; 691 - break; 692 - default: 693 - break; 694 - } 695 - break; 696 - default: 697 - break; 698 - } 699 - 700 - return ret; 518 + return riscv_pmu_get_event_info(type, config, econfig); 701 519 } 702 520 703 521 static void pmu_sbi_snapshot_free(struct riscv_pmu *pmu) ··· 1539 1451 1540 1452 if (sbi_spec_version >= 
sbi_mk_version(2, 0)) 1541 1453 sbi_v2_available = true; 1454 + 1455 + if (sbi_spec_version >= sbi_mk_version(3, 0)) 1456 + sbi_v3_available = true; 1542 1457 1543 1458 ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_RISCV_STARTING, 1544 1459 "perf/riscv/pmu:starting",
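`riscv_pmu_get_event_info()` above dispatches on config bits 63:62 (00 hardware raw event, 10 SBI firmware event, 11 platform-specific firmware event) and, for firmware events, takes the event code from the low 16 bits. A small sketch of just that decoding (helper names are hypothetical; the full event-index packing uses the kernel's `RISCV_PMU_RAW_EVENT_*` masks):

```c
#include <assert.h>
#include <stdint.h>

/* Bits 63:62 of a PERF_TYPE_RAW config select the event class. */
static unsigned int raw_event_class(uint64_t config)
{
	return (unsigned int)(config >> 62);
}

/* For an SBI firmware event (class 2), the low 16 bits carry the
 * event code, per the `config & 0xFFFF` in the driver. */
static uint64_t fw_event_code(uint64_t config)
{
	return config & 0xFFFF;
}
```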
+12 -59
drivers/platform/x86/acer-wmi.c
··· 129 129 enum acer_wmi_gaming_misc_setting { 130 130 ACER_WMID_MISC_SETTING_OC_1 = 0x0005, 131 131 ACER_WMID_MISC_SETTING_OC_2 = 0x0007, 132 + /* Unreliable on some models */ 132 133 ACER_WMID_MISC_SETTING_SUPPORTED_PROFILES = 0x000A, 133 134 ACER_WMID_MISC_SETTING_PLATFORM_PROFILE = 0x000B, 134 135 }; ··· 794 793 * returning from turbo mode when the mode key is in toggle mode. 795 794 */ 796 795 static int last_non_turbo_profile = INT_MIN; 797 - 798 - /* The most performant supported profile */ 799 - static int acer_predator_v4_max_perf; 800 796 801 797 enum acer_predator_v4_thermal_profile { 802 798 ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET = 0x00, ··· 2012 2014 if (err) 2013 2015 return err; 2014 2016 2015 - if (tp != acer_predator_v4_max_perf) 2017 + if (tp != ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO) 2016 2018 last_non_turbo_profile = tp; 2017 2019 2018 2020 return 0; ··· 2021 2023 static int 2022 2024 acer_predator_v4_platform_profile_probe(void *drvdata, unsigned long *choices) 2023 2025 { 2024 - unsigned long supported_profiles; 2025 - int err; 2026 + set_bit(PLATFORM_PROFILE_PERFORMANCE, choices); 2027 + set_bit(PLATFORM_PROFILE_BALANCED_PERFORMANCE, choices); 2028 + set_bit(PLATFORM_PROFILE_BALANCED, choices); 2029 + set_bit(PLATFORM_PROFILE_QUIET, choices); 2030 + set_bit(PLATFORM_PROFILE_LOW_POWER, choices); 2026 2031 2027 - err = WMID_gaming_get_misc_setting(ACER_WMID_MISC_SETTING_SUPPORTED_PROFILES, 2028 - (u8 *)&supported_profiles); 2029 - if (err) 2030 - return err; 2031 - 2032 - /* Iterate through supported profiles in order of increasing performance */ 2033 - if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_ECO, &supported_profiles)) { 2034 - set_bit(PLATFORM_PROFILE_LOW_POWER, choices); 2035 - acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_ECO; 2036 - last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_ECO; 2037 - } 2038 - 2039 - if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET, &supported_profiles)) { 2040 - 
set_bit(PLATFORM_PROFILE_QUIET, choices); 2041 - acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET; 2042 - last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET; 2043 - } 2044 - 2045 - if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED, &supported_profiles)) { 2046 - set_bit(PLATFORM_PROFILE_BALANCED, choices); 2047 - acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED; 2048 - last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED; 2049 - } 2050 - 2051 - if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE, &supported_profiles)) { 2052 - set_bit(PLATFORM_PROFILE_BALANCED_PERFORMANCE, choices); 2053 - acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE; 2054 - 2055 - /* We only use this profile as a fallback option in case no prior 2056 - * profile is supported. 2057 - */ 2058 - if (last_non_turbo_profile < 0) 2059 - last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE; 2060 - } 2061 - 2062 - if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO, &supported_profiles)) { 2063 - set_bit(PLATFORM_PROFILE_PERFORMANCE, choices); 2064 - acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO; 2065 - 2066 - /* We need to handle the hypothetical case where only the turbo profile 2067 - * is supported. In this case the turbo toggle will essentially be a 2068 - * no-op. 
2069 - */ 2070 - if (last_non_turbo_profile < 0) 2071 - last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO; 2072 - } 2032 + /* Set default non-turbo profile */ 2033 + last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED; 2073 2034 2074 2035 return 0; 2075 2036 } ··· 2065 2108 if (cycle_gaming_thermal_profile) { 2066 2109 platform_profile_cycle(); 2067 2110 } else { 2068 - /* Do nothing if no suitable platform profiles where found */ 2069 - if (last_non_turbo_profile < 0) 2070 - return 0; 2071 - 2072 2111 err = WMID_gaming_get_misc_setting( 2073 2112 ACER_WMID_MISC_SETTING_PLATFORM_PROFILE, &current_tp); 2074 2113 if (err) 2075 2114 return err; 2076 2115 2077 - if (current_tp == acer_predator_v4_max_perf) 2116 + if (current_tp == ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO) 2078 2117 tp = last_non_turbo_profile; 2079 2118 else 2080 - tp = acer_predator_v4_max_perf; 2119 + tp = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO; 2081 2120 2082 2121 err = WMID_gaming_set_misc_setting( 2083 2122 ACER_WMID_MISC_SETTING_PLATFORM_PROFILE, tp); ··· 2081 2128 return err; 2082 2129 2083 2130 /* Store last profile for toggle */ 2084 - if (current_tp != acer_predator_v4_max_perf) 2131 + if (current_tp != ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO) 2085 2132 last_non_turbo_profile = current_tp; 2086 2133 2087 2134 platform_profile_notify(platform_profile_device);
+10 -4
drivers/platform/x86/amd/hfi/hfi.c
··· 385 385 amd_hfi_data->pcct_entry = pcct_entry; 386 386 pcct_ext = (struct acpi_pcct_ext_pcc_slave *)pcct_entry; 387 387 388 - if (pcct_ext->length <= 0) 389 - return -EINVAL; 388 + if (pcct_ext->length <= 0) { 389 + ret = -EINVAL; 390 + goto out; 391 + } 390 392 391 393 amd_hfi_data->shmem = devm_kzalloc(amd_hfi_data->dev, pcct_ext->length, GFP_KERNEL); 392 - if (!amd_hfi_data->shmem) 393 - return -ENOMEM; 394 + if (!amd_hfi_data->shmem) { 395 + ret = -ENOMEM; 396 + goto out; 397 + } 394 398 395 399 pcc_chan->shmem_base_addr = pcct_ext->base_address; 396 400 pcc_chan->shmem_size = pcct_ext->length; ··· 402 398 /* parse the shared memory info from the PCCT table */ 403 399 ret = amd_hfi_fill_metadata(amd_hfi_data); 404 400 401 + out: 402 + /* Don't leak any ACPI memory */ 405 403 acpi_put_table(pcct_tbl); 406 404 407 405 return ret;
+14
drivers/platform/x86/amd/pmc/pmc-quirks.c
··· 248 248 DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"), 249 249 } 250 250 }, 251 + { 252 + .ident = "TUXEDO InfinityBook Pro 14/15 AMD Gen10", 253 + .driver_data = &quirk_spurious_8042, 254 + .matches = { 255 + DMI_MATCH(DMI_BOARD_NAME, "XxHP4NAx"), 256 + } 257 + }, 258 + { 259 + .ident = "TUXEDO InfinityBook Pro 14/15 AMD Gen10", 260 + .driver_data = &quirk_spurious_8042, 261 + .matches = { 262 + DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"), 263 + } 264 + }, 251 265 {} 252 266 }; 253 267
+22 -6
drivers/platform/x86/asus-nb-wmi.c
··· 147 147 }; 148 148 149 149 static struct quirk_entry quirk_asus_zenbook_duo_kbd = { 150 - .ignore_key_wlan = true, 150 + .key_wlan_event = ASUS_WMI_KEY_IGNORE, 151 + }; 152 + 153 + static struct quirk_entry quirk_asus_z13 = { 154 + .key_wlan_event = ASUS_WMI_KEY_ARMOURY, 155 + .tablet_switch_mode = asus_wmi_kbd_dock_devid, 151 156 }; 152 157 153 158 static int dmi_matched(const struct dmi_system_id *dmi) ··· 544 539 }, 545 540 .driver_data = &quirk_asus_zenbook_duo_kbd, 546 541 }, 542 + { 543 + .callback = dmi_matched, 544 + .ident = "ASUS ROG Z13", 545 + .matches = { 546 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 547 + DMI_MATCH(DMI_PRODUCT_NAME, "ROG Flow Z13"), 548 + }, 549 + .driver_data = &quirk_asus_z13, 550 + }, 547 551 {}, 548 552 }; 549 553 ··· 632 618 { KE_KEY, 0x93, { KEY_SWITCHVIDEOMODE } }, /* SDSP LCD + CRT + TV + DVI */ 633 619 { KE_KEY, 0x95, { KEY_MEDIA } }, 634 620 { KE_KEY, 0x99, { KEY_PHONE } }, /* Conflicts with fan mode switch */ 621 + { KE_KEY, 0X9D, { KEY_FN_F } }, 635 622 { KE_KEY, 0xA0, { KEY_SWITCHVIDEOMODE } }, /* SDSP HDMI only */ 636 623 { KE_KEY, 0xA1, { KEY_SWITCHVIDEOMODE } }, /* SDSP LCD + HDMI */ 637 624 { KE_KEY, 0xA2, { KEY_SWITCHVIDEOMODE } }, /* SDSP CRT + HDMI */ ··· 647 632 { KE_IGNORE, 0xC0, }, /* External display connect/disconnect notification */ 648 633 { KE_KEY, 0xC4, { KEY_KBDILLUMUP } }, 649 634 { KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } }, 635 + { KE_KEY, 0xCA, { KEY_F13 } }, /* Noise cancelling on Expertbook B9 */ 636 + { KE_KEY, 0xCB, { KEY_F14 } }, /* Fn+noise-cancel */ 650 637 { KE_IGNORE, 0xC6, }, /* Ambient Light Sensor notification */ 651 638 { KE_IGNORE, 0xCF, }, /* AC mode */ 652 639 { KE_KEY, 0xFA, { KEY_PROG2 } }, /* Lid flip action */ 653 640 { KE_KEY, 0xBD, { KEY_PROG2 } }, /* Lid flip action on ROG xflow laptops */ 641 + { KE_KEY, ASUS_WMI_KEY_ARMOURY, { KEY_PROG3 } }, 654 642 { KE_END, 0}, 655 643 }; 656 644 ··· 673 655 if (atkbd_reports_vol_keys) 674 656 *code = ASUS_WMI_KEY_IGNORE; 675 657 
break; 676 - case 0x5D: /* Wireless console Toggle */ 677 - case 0x5E: /* Wireless console Enable */ 678 - case 0x5F: /* Wireless console Disable */ 679 - if (quirks->ignore_key_wlan) 680 - *code = ASUS_WMI_KEY_IGNORE; 658 + case 0x5F: /* Wireless console Disable / Special Key */ 659 + if (quirks->key_wlan_event) 660 + *code = quirks->key_wlan_event; 681 661 break; 682 662 } 683 663 }
+8 -1
drivers/platform/x86/asus-wmi.c
··· 5088 5088 5089 5089 asus_s2idle_check_register(); 5090 5090 5091 - return asus_wmi_add(pdev); 5091 + ret = asus_wmi_add(pdev); 5092 + if (ret) 5093 + asus_s2idle_check_unregister(); 5094 + 5095 + return ret; 5092 5096 } 5093 5097 5094 5098 static bool used; 5099 + static DEFINE_MUTEX(register_mutex); 5095 5100 5096 5101 int __init_or_module asus_wmi_register_driver(struct asus_wmi_driver *driver) 5097 5102 { 5098 5103 struct platform_driver *platform_driver; 5099 5104 struct platform_device *platform_device; 5100 5105 5106 + guard(mutex)(&register_mutex); 5101 5107 if (used) 5102 5108 return -EBUSY; 5103 5109 ··· 5126 5120 5127 5121 void asus_wmi_unregister_driver(struct asus_wmi_driver *driver) 5128 5122 { 5123 + guard(mutex)(&register_mutex); 5129 5124 asus_s2idle_check_unregister(); 5130 5125 5131 5126 platform_device_unregister(driver->platform_device);
+2 -1
drivers/platform/x86/asus-wmi.h
··· 18 18 #include <linux/i8042.h> 19 19 20 20 #define ASUS_WMI_KEY_IGNORE (-1) 21 + #define ASUS_WMI_KEY_ARMOURY 0xffff01 21 22 #define ASUS_WMI_BRN_DOWN 0x2e 22 23 #define ASUS_WMI_BRN_UP 0x2f 23 24 ··· 41 40 bool wmi_force_als_set; 42 41 bool wmi_ignore_fan; 43 42 bool filter_i8042_e1_extended_codes; 44 - bool ignore_key_wlan; 43 + int key_wlan_event; 45 44 enum asus_wmi_tablet_switch_mode tablet_switch_mode; 46 45 int wapf; 47 46 /*
+4
drivers/platform/x86/hp/hp-wmi.c
··· 122 122 HPWMI_BATTERY_CHARGE_PERIOD = 0x10, 123 123 HPWMI_SANITIZATION_MODE = 0x17, 124 124 HPWMI_CAMERA_TOGGLE = 0x1A, 125 + HPWMI_FN_P_HOTKEY = 0x1B, 125 126 HPWMI_OMEN_KEY = 0x1D, 126 127 HPWMI_SMART_EXPERIENCE_APP = 0x21, 127 128 }; ··· 981 980 if (!sparse_keymap_report_event(hp_wmi_input_dev, 982 981 key_code, 1, true)) 983 982 pr_info("Unknown key code - 0x%x\n", key_code); 983 + break; 984 + case HPWMI_FN_P_HOTKEY: 985 + platform_profile_cycle(); 984 986 break; 985 987 case HPWMI_OMEN_KEY: 986 988 if (event_data) /* Only should be true for HP Omen */
+1
drivers/platform/x86/intel/pmc/core.c
··· 1625 1625 X86_MATCH_VFM(INTEL_RAPTORLAKE_P, &tgl_l_pmc_dev), 1626 1626 X86_MATCH_VFM(INTEL_RAPTORLAKE, &adl_pmc_dev), 1627 1627 X86_MATCH_VFM(INTEL_RAPTORLAKE_S, &adl_pmc_dev), 1628 + X86_MATCH_VFM(INTEL_BARTLETTLAKE, &adl_pmc_dev), 1628 1629 X86_MATCH_VFM(INTEL_METEORLAKE_L, &mtl_pmc_dev), 1629 1630 X86_MATCH_VFM(INTEL_ARROWLAKE, &arl_pmc_dev), 1630 1631 X86_MATCH_VFM(INTEL_ARROWLAKE_H, &arl_h_pmc_dev),
+1 -1
drivers/platform/x86/intel/tpmi_power_domains.c
··· 178 178 179 179 info->punit_thread_id = FIELD_GET(LP_ID_MASK, data); 180 180 info->punit_core_id = FIELD_GET(MODULE_ID_MASK, data); 181 - info->pkg_id = topology_physical_package_id(cpu); 181 + info->pkg_id = topology_logical_package_id(cpu); 182 182 info->linux_cpu = cpu; 183 183 184 184 return 0;
+1 -2
drivers/ptp/ptp_ocp.c
··· 4557 4557 ptp_ocp_debugfs_remove_device(bp); 4558 4558 ptp_ocp_detach_sysfs(bp); 4559 4559 ptp_ocp_attr_group_del(bp); 4560 - if (timer_pending(&bp->watchdog)) 4561 - timer_delete_sync(&bp->watchdog); 4560 + timer_delete_sync(&bp->watchdog); 4562 4561 if (bp->ts0) 4563 4562 ptp_ocp_unregister_ext(bp->ts0); 4564 4563 if (bp->ts1)
+6 -4
drivers/scsi/lpfc/lpfc_nvmet.c
··· 1243 1243 struct lpfc_nvmet_tgtport *tgtp; 1244 1244 struct lpfc_async_xchg_ctx *ctxp = 1245 1245 container_of(rsp, struct lpfc_async_xchg_ctx, hdlrctx.fcp_req); 1246 - struct rqb_dmabuf *nvmebuf = ctxp->rqb_buffer; 1246 + struct rqb_dmabuf *nvmebuf; 1247 1247 struct lpfc_hba *phba = ctxp->phba; 1248 1248 unsigned long iflag; 1249 1249 ··· 1251 1251 lpfc_nvmeio_data(phba, "NVMET DEFERRCV: xri x%x sz %d CPU %02x\n", 1252 1252 ctxp->oxid, ctxp->size, raw_smp_processor_id()); 1253 1253 1254 + spin_lock_irqsave(&ctxp->ctxlock, iflag); 1255 + nvmebuf = ctxp->rqb_buffer; 1254 1256 if (!nvmebuf) { 1257 + spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 1255 1258 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR, 1256 1259 "6425 Defer rcv: no buffer oxid x%x: " 1257 1260 "flg %x ste %x\n", 1258 1261 ctxp->oxid, ctxp->flag, ctxp->state); 1259 1262 return; 1260 1263 } 1264 + ctxp->rqb_buffer = NULL; 1265 + spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 1261 1266 1262 1267 tgtp = phba->targetport->private; 1263 1268 if (tgtp) ··· 1270 1265 1271 1266 /* Free the nvmebuf since a new buffer already replaced it */ 1272 1267 nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf); 1273 - spin_lock_irqsave(&ctxp->ctxlock, iflag); 1274 - ctxp->rqb_buffer = NULL; 1275 - spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 1276 1268 } 1277 1269 1278 1270 /**
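The lpfc fix widens the `ctxlock` critical section so `ctxp->rqb_buffer` is loaded and cleared in one step, closing the window where a racing path could free the buffer twice. The underlying take-under-lock pattern, sketched with pthreads as a user-space analogue (not the driver's locking API):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct ctx {
	pthread_mutex_t lock;
	void *buf;
};

/* Read and clear the buffer pointer inside a single critical section:
 * whichever caller observes a non-NULL pointer owns the buffer, and a
 * racing caller sees NULL instead of freeing it a second time. */
static void *ctx_take_buf(struct ctx *c)
{
	void *buf;

	pthread_mutex_lock(&c->lock);
	buf = c->buf;
	c->buf = NULL;
	pthread_mutex_unlock(&c->lock);
	return buf;
}
```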
+10 -6
drivers/scsi/sr.c
··· 475 475 476 476 static int sr_revalidate_disk(struct scsi_cd *cd) 477 477 { 478 + struct request_queue *q = cd->device->request_queue; 478 479 struct scsi_sense_hdr sshdr; 480 + struct queue_limits lim; 481 + int sector_size; 479 482 480 483 /* if the unit is not ready, nothing more to do */ 481 484 if (scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES, &sshdr)) 482 485 return 0; 483 486 sr_cd_check(&cd->cdi); 484 - return get_sectorsize(cd); 487 + sector_size = get_sectorsize(cd); 488 + 489 + lim = queue_limits_start_update(q); 490 + lim.logical_block_size = sector_size; 491 + lim.features |= BLK_FEAT_ROTATIONAL; 492 + return queue_limits_commit_update_frozen(q, &lim); 485 493 } 486 494 487 495 static int sr_block_open(struct gendisk *disk, blk_mode_t mode) ··· 729 721 730 722 static int get_sectorsize(struct scsi_cd *cd) 731 723 { 732 - struct request_queue *q = cd->device->request_queue; 733 724 static const u8 cmd[10] = { READ_CAPACITY }; 734 725 unsigned char buffer[8] = { }; 735 - struct queue_limits lim; 736 726 int err; 737 727 int sector_size; 738 728 struct scsi_failure failure_defs[] = { ··· 801 795 set_capacity(cd->disk, cd->capacity); 802 796 } 803 797 804 - lim = queue_limits_start_update(q); 805 - lim.logical_block_size = sector_size; 806 - return queue_limits_commit_update_frozen(q, &lim); 798 + return sector_size; 807 799 } 808 800 809 801 static int get_capabilities(struct scsi_cd *cd)
+7 -5
drivers/soc/qcom/mdt_loader.c
··· 39 39 if (phend > fw->size) 40 40 return false; 41 41 42 - if (ehdr->e_shentsize != sizeof(struct elf32_shdr)) 43 - return false; 42 + if (ehdr->e_shentsize || ehdr->e_shnum) { 43 + if (ehdr->e_shentsize != sizeof(struct elf32_shdr)) 44 + return false; 44 45 45 - shend = size_add(size_mul(sizeof(struct elf32_shdr), ehdr->e_shnum), ehdr->e_shoff); 46 - if (shend > fw->size) 47 - return false; 46 + shend = size_add(size_mul(sizeof(struct elf32_shdr), ehdr->e_shnum), ehdr->e_shoff); 47 + if (shend > fw->size) 48 + return false; 49 + } 48 50 49 51 return true; 50 52 }
+33
drivers/spi/spi-cadence-quadspi.c
··· 108 108 109 109 bool is_jh7110; /* Flag for StarFive JH7110 SoC */ 110 110 bool disable_stig_mode; 111 + refcount_t refcount; 112 + refcount_t inflight_ops; 111 113 112 114 const struct cqspi_driver_platdata *ddata; 113 115 }; ··· 737 735 u8 *rxbuf_end = rxbuf + n_rx; 738 736 int ret = 0; 739 737 738 + if (!refcount_read(&cqspi->refcount)) 739 + return -ENODEV; 740 + 740 741 writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR); 741 742 writel(remaining, reg_base + CQSPI_REG_INDIRECTRDBYTES); 742 743 ··· 1075 1070 unsigned int remaining = n_tx; 1076 1071 unsigned int write_bytes; 1077 1072 int ret; 1073 + 1074 + if (!refcount_read(&cqspi->refcount)) 1075 + return -ENODEV; 1078 1076 1079 1077 writel(to_addr, reg_base + CQSPI_REG_INDIRECTWRSTARTADDR); 1080 1078 writel(remaining, reg_base + CQSPI_REG_INDIRECTWRBYTES); ··· 1469 1461 struct cqspi_st *cqspi = spi_controller_get_devdata(mem->spi->controller); 1470 1462 struct device *dev = &cqspi->pdev->dev; 1463 + 1464 + if (refcount_read(&cqspi->inflight_ops) == 0) 1465 + return -ENODEV; 1466 + 1472 1467 ret = pm_runtime_resume_and_get(dev); 1473 1468 if (ret) { 1474 1469 dev_err(&mem->spi->dev, "resume failed with %d\n", ret); 1475 1470 return ret; 1471 + } 1472 + 1473 + if (!refcount_read(&cqspi->refcount)) 1474 + return -EBUSY; 1475 + 1476 + refcount_inc(&cqspi->inflight_ops); 1477 + 1478 + if (!refcount_read(&cqspi->refcount)) { 1479 + if (refcount_read(&cqspi->inflight_ops)) 1480 + refcount_dec(&cqspi->inflight_ops); 1481 + return -EBUSY; 1476 1482 } 1477 1483 1478 1484 ret = cqspi_mem_process(mem, op); ··· 1495 1473 1496 1474 if (ret) 1497 1475 dev_err(&mem->spi->dev, "operation failed with %d\n", ret); 1476 + 1477 + if (refcount_read(&cqspi->inflight_ops) > 1) 1478 + refcount_dec(&cqspi->inflight_ops); 1498 1479 1499 1480 return ret; 1500 1481 } ··· 1950 1925 } 1951 1926 } 1952 1927 1928 + refcount_set(&cqspi->refcount, 1); 1929 + refcount_set(&cqspi->inflight_ops, 1); 1930 + 1953 1931 ret = devm_request_irq(dev, irq, cqspi_irq_handler, 0, 1954 1932 pdev->name, cqspi); 1955 1933 if (ret) { ··· 2014 1986 static void cqspi_remove(struct platform_device *pdev) 2015 1987 { 2016 1988 struct cqspi_st *cqspi = platform_get_drvdata(pdev); 1989 + 1990 + refcount_set(&cqspi->refcount, 0); 1991 + 1992 + if (!refcount_dec_and_test(&cqspi->inflight_ops)) 1993 + cqspi_wait_idle(cqspi); 2017 1994 2018 1995 spi_unregister_controller(cqspi->host); 2019 1996 cqspi_controller_enable(cqspi, 0);
+28 -19
drivers/spi/spi-fsl-lpspi.c
··· 3 3 // Freescale i.MX7ULP LPSPI driver 4 4 // 5 5 // Copyright 2016 Freescale Semiconductor, Inc. 6 - // Copyright 2018 NXP Semiconductors 6 + // Copyright 2018, 2023, 2025 NXP 7 7 8 + #include <linux/bitfield.h> 8 9 #include <linux/clk.h> 9 10 #include <linux/completion.h> 10 11 #include <linux/delay.h> ··· 71 70 #define DER_TDDE BIT(0) 72 71 #define CFGR1_PCSCFG BIT(27) 73 72 #define CFGR1_PINCFG (BIT(24)|BIT(25)) 74 - #define CFGR1_PCSPOL BIT(8) 73 + #define CFGR1_PCSPOL_MASK GENMASK(11, 8) 75 74 #define CFGR1_NOSTALL BIT(3) 76 75 #define CFGR1_HOST BIT(0) 77 76 #define FSR_TXCOUNT (0xFF) ··· 83 82 #define TCR_RXMSK BIT(19) 84 83 #define TCR_TXMSK BIT(18) 85 84 85 + #define SR_CLEAR_MASK GENMASK(13, 8) 86 + 86 87 struct fsl_lpspi_devtype_data { 87 - u8 prescale_max; 88 + u8 prescale_max : 3; /* 0 == no limit */ 89 + bool query_hw_for_num_cs : 1; 88 90 }; 89 91 90 92 struct lpspi_config { ··· 133 129 }; 134 130 135 131 /* 136 - * ERR051608 fixed or not: 137 - * https://www.nxp.com/docs/en/errata/i.MX93_1P87f.pdf 132 + * Devices with ERR051608 have a max TCR_PRESCALE value of 1, otherwise there is 133 + * no prescale limit: https://www.nxp.com/docs/en/errata/i.MX93_1P87f.pdf 138 134 */ 139 - static struct fsl_lpspi_devtype_data imx93_lpspi_devtype_data = { 135 + static const struct fsl_lpspi_devtype_data imx93_lpspi_devtype_data = { 140 136 .prescale_max = 1, 137 + .query_hw_for_num_cs = true, 141 138 }; 142 139 143 - static struct fsl_lpspi_devtype_data imx7ulp_lpspi_devtype_data = { 144 - .prescale_max = 7, 140 + static const struct fsl_lpspi_devtype_data imx7ulp_lpspi_devtype_data = { 141 + /* All defaults */ 142 + }; 143 + 144 + static const struct fsl_lpspi_devtype_data s32g_lpspi_devtype_data = { 145 + .query_hw_for_num_cs = true, 145 146 }; 146 147 147 148 static const struct of_device_id fsl_lpspi_dt_ids[] = { 148 149 { .compatible = "fsl,imx7ulp-spi", .data = &imx7ulp_lpspi_devtype_data,}, 149 150 { .compatible = "fsl,imx93-spi", .data = &imx93_lpspi_devtype_data,}, 151 + { .compatible = "nxp,s32g2-lpspi", .data = &s32g_lpspi_devtype_data,}, 150 152 { /* sentinel */ } 151 153 }; 152 154 MODULE_DEVICE_TABLE(of, fsl_lpspi_dt_ids); ··· 331 321 int scldiv; 332 322 333 323 perclk_rate = clk_get_rate(fsl_lpspi->clk_per); 334 - prescale_max = fsl_lpspi->devtype_data->prescale_max; 324 + prescale_max = fsl_lpspi->devtype_data->prescale_max ?: 7; 335 325 336 326 if (!config.speed_hz) { 337 327 dev_err(fsl_lpspi->dev, ··· 433 423 else 434 424 temp = CFGR1_PINCFG; 435 425 if (fsl_lpspi->config.mode & SPI_CS_HIGH) 436 - temp |= CFGR1_PCSPOL; 426 + temp |= FIELD_PREP(CFGR1_PCSPOL_MASK, 427 + BIT(fsl_lpspi->config.chip_select)); 428 + 437 429 writel(temp, fsl_lpspi->base + IMX7ULP_CFGR1); 438 430 439 431 temp = readl(fsl_lpspi->base + IMX7ULP_CR); ··· 544 532 fsl_lpspi_intctrl(fsl_lpspi, 0); 545 533 } 546 534 547 - /* W1C for all flags in SR */ 548 - temp = 0x3F << 8; 549 - writel(temp, fsl_lpspi->base + IMX7ULP_SR); 550 - 551 535 /* Clear FIFO and disable module */ 552 536 temp = CR_RRF | CR_RTF; 553 537 writel(temp, fsl_lpspi->base + IMX7ULP_CR); 538 + 539 + /* W1C for all flags in SR */ 540 + writel(SR_CLEAR_MASK, fsl_lpspi->base + IMX7ULP_SR); 554 541 555 542 return 0; 556 543 } ··· 741 730 fsl_lpspi_write_tx_fifo(fsl_lpspi); 742 731 743 732 ret = fsl_lpspi_wait_for_completion(controller); 744 - if (ret) 745 - return ret; 746 733 747 734 fsl_lpspi_reset(fsl_lpspi); 748 735 749 - return 0; 736 + return ret; 750 737 } 751 738 752 739 static int fsl_lpspi_transfer_one(struct spi_controller *controller, ··· 794 785 if (temp_SR & SR_MBF || 795 786 readl(fsl_lpspi->base + IMX7ULP_FSR) & FSR_TXCOUNT) { 796 787 writel(SR_FCF, fsl_lpspi->base + IMX7ULP_SR); 797 - fsl_lpspi_intctrl(fsl_lpspi, IER_FCIE); 788 + fsl_lpspi_intctrl(fsl_lpspi, IER_FCIE | (temp_IER & IER_TDIE)); 798 789 return IRQ_HANDLED; 799 790 } 800 791 ··· 939 930 fsl_lpspi->rxfifosize = 1 << ((temp >> 8) & 0x0f); 940 931 if (of_property_read_u32((&pdev->dev)->of_node, "num-cs", 941 932 &num_cs)) { 942 - if (of_device_is_compatible(pdev->dev.of_node, "fsl,imx93-spi")) 933 + if (devtype_data->query_hw_for_num_cs) 943 934 num_cs = ((temp >> 16) & 0xf); 944 935 else 945 936 num_cs = 1;
-12
drivers/spi/spi-microchip-core-qspi.c
··· 531 531 532 532 static bool mchp_coreqspi_supports_op(struct spi_mem *mem, const struct spi_mem_op *op) 533 533 { 534 - struct mchp_coreqspi *qspi = spi_controller_get_devdata(mem->spi->controller); 535 - unsigned long clk_hz; 536 - u32 baud_rate_val; 537 - 538 534 if (!spi_mem_default_supports_op(mem, op)) 539 535 return false; 540 536 ··· 552 556 if (op->data.dir == SPI_MEM_DATA_OUT) 553 557 return false; 554 558 } 555 - 556 - clk_hz = clk_get_rate(qspi->clk); 557 - if (!clk_hz) 558 - return false; 559 - 560 - baud_rate_val = DIV_ROUND_UP(clk_hz, 2 * op->max_freq); 561 - if (baud_rate_val > MAX_DIVIDER || baud_rate_val < MIN_DIVIDER) 562 - return false; 563 559 564 560 return true; 565 561 }
+4 -2
drivers/spi/spi-qpic-snand.c
··· 1615 1615 ret = spi_register_controller(ctlr); 1616 1616 if (ret) { 1617 1617 dev_err(&pdev->dev, "spi_register_controller failed.\n"); 1618 - goto err_spi_init; 1618 + goto err_register_controller; 1619 1619 } 1620 1620 1621 1621 return 0; 1622 1622 1623 + err_register_controller: 1624 + nand_ecc_unregister_on_host_hw_engine(&snandc->qspi->ecc_eng); 1623 1625 err_spi_init: 1624 1626 qcom_nandc_unalloc(snandc); 1625 1627 err_snand_alloc: ··· 1643 1641 struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1644 1642 1645 1643 spi_unregister_controller(ctlr); 1646 - 1644 + nand_ecc_unregister_on_host_hw_engine(&snandc->qspi->ecc_eng); 1647 1645 qcom_nandc_unalloc(snandc); 1648 1646 1649 1647 clk_disable_unprepare(snandc->aon_clk);
+2 -2
drivers/tee/optee/ffa_abi.c
··· 657 657 * with a matching configuration. 658 658 */ 659 659 660 - static bool optee_ffa_api_is_compatbile(struct ffa_device *ffa_dev, 660 + static bool optee_ffa_api_is_compatible(struct ffa_device *ffa_dev, 661 661 const struct ffa_ops *ops) 662 662 { 663 663 const struct ffa_msg_ops *msg_ops = ops->msg_ops; ··· 908 908 ffa_ops = ffa_dev->ops; 909 909 notif_ops = ffa_ops->notifier_ops; 910 910 911 - if (!optee_ffa_api_is_compatbile(ffa_dev, ffa_ops)) 911 + if (!optee_ffa_api_is_compatible(ffa_dev, ffa_ops)) 912 912 return -EINVAL; 913 913 914 914 if (!optee_ffa_exchange_caps(ffa_dev, ffa_ops, &sec_caps,
+10 -4
drivers/tee/tee_shm.c
··· 230 230 pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL); 231 231 if (!pages) { 232 232 rc = -ENOMEM; 233 - goto err; 233 + goto err_pages; 234 234 } 235 235 236 236 for (i = 0; i < nr_pages; i++) ··· 243 243 rc = shm_register(shm->ctx, shm, pages, nr_pages, 244 244 (unsigned long)shm->kaddr); 245 245 if (rc) 246 - goto err; 246 + goto err_kfree; 247 247 } 248 248 249 249 return 0; 250 - err: 250 + err_kfree: 251 + kfree(pages); 252 + err_pages: 251 253 free_pages_exact(shm->kaddr, shm->size); 252 254 shm->kaddr = NULL; 253 255 return rc; ··· 562 560 */ 563 561 void tee_shm_put(struct tee_shm *shm) 564 562 { 565 - struct tee_device *teedev = shm->ctx->teedev; 563 + struct tee_device *teedev; 566 564 bool do_release = false; 567 565 566 + if (!shm || !shm->ctx || !shm->ctx->teedev) 567 + return; 568 + 569 + teedev = shm->ctx->teedev; 568 570 mutex_lock(&teedev->mutex); 569 571 if (refcount_dec_and_test(&shm->refcount)) { 570 572 /*
+1 -1
fs/btrfs/btrfs_inode.h
··· 248 248 u64 new_delalloc_bytes; 249 249 /* 250 250 * The offset of the last dir index key that was logged. 251 - * This is used only for directories. 251 + * This is used only for directories. Protected by 'log_mutex'. 252 252 */ 253 253 u64 last_dir_index_offset; 254 254 };
+25 -25
fs/btrfs/inode.c
··· 6805 6805 struct fscrypt_name fname; 6806 6806 u64 index; 6807 6807 int ret; 6808 - int drop_inode = 0; 6809 6808 6810 6809 /* do not allow sys_link's with other subvols of the same device */ 6811 6810 if (btrfs_root_id(root) != btrfs_root_id(BTRFS_I(inode)->root)) ··· 6836 6837 6837 6838 /* There are several dir indexes for this inode, clear the cache. */ 6838 6839 BTRFS_I(inode)->dir_index = 0ULL; 6839 - inc_nlink(inode); 6840 6840 inode_inc_iversion(inode); 6841 6841 inode_set_ctime_current(inode); 6842 - ihold(inode); 6843 6842 set_bit(BTRFS_INODE_COPY_EVERYTHING, &BTRFS_I(inode)->runtime_flags); 6844 6843 6845 6844 ret = btrfs_add_link(trans, BTRFS_I(dir), BTRFS_I(inode), 6846 6845 &fname.disk_name, 1, index); 6846 + if (ret) 6847 + goto fail; 6847 6848 6849 + /* Link added now we update the inode item with the new link count. */ 6850 + inc_nlink(inode); 6851 + ret = btrfs_update_inode(trans, BTRFS_I(inode)); 6848 6852 if (ret) { 6849 - drop_inode = 1; 6850 - } else { 6851 - struct dentry *parent = dentry->d_parent; 6852 - 6853 - ret = btrfs_update_inode(trans, BTRFS_I(inode)); 6854 - if (ret) 6855 - goto fail; 6856 - if (inode->i_nlink == 1) { 6857 - /* 6858 - * If new hard link count is 1, it's a file created 6859 - * with open(2) O_TMPFILE flag. 6860 - */ 6861 - ret = btrfs_orphan_del(trans, BTRFS_I(inode)); 6862 - if (ret) 6863 - goto fail; 6864 - } 6865 - d_instantiate(dentry, inode); 6866 - btrfs_log_new_name(trans, old_dentry, NULL, 0, parent); 6853 + btrfs_abort_transaction(trans, ret); 6854 + goto fail; 6867 6855 } 6856 + 6857 + if (inode->i_nlink == 1) { 6858 + /* 6859 + * If the new hard link count is 1, it's a file created with the 6860 + * open(2) O_TMPFILE flag. 6861 + */ 6862 + ret = btrfs_orphan_del(trans, BTRFS_I(inode)); 6863 + if (ret) { 6864 + btrfs_abort_transaction(trans, ret); 6865 + goto fail; 6866 + } 6867 + } 6868 + 6869 + /* Grab reference for the new dentry passed to d_instantiate(). */ 6870 + ihold(inode); 6871 + d_instantiate(dentry, inode); 6872 + btrfs_log_new_name(trans, old_dentry, NULL, 0, dentry->d_parent); 6868 6873 6869 6874 fail: 6870 6875 fscrypt_free_filename(&fname); 6871 6876 if (trans) 6872 6877 btrfs_end_transaction(trans); 6873 - if (drop_inode) { 6874 - inode_dec_link_count(inode); 6875 - iput(inode); 6876 - } 6877 6878 btrfs_btree_balance_dirty(fs_info); 6878 6879 return ret; 6879 6880 } ··· 7829 7830 ei->last_sub_trans = 0; 7830 7831 ei->logged_trans = 0; 7831 7832 ei->delalloc_bytes = 0; 7833 + /* new_delalloc_bytes and last_dir_index_offset are in a union. */ 7832 7834 ei->new_delalloc_bytes = 0; 7833 7835 ei->defrag_bytes = 0; 7834 7836 ei->disk_i_size = 0;
+53 -25
fs/btrfs/tree-log.c
··· 3340 3340 return 0; 3341 3341 } 3342 3342 3343 + static bool mark_inode_as_not_logged(const struct btrfs_trans_handle *trans, 3344 + struct btrfs_inode *inode) 3345 + { 3346 + bool ret = false; 3347 + 3348 + /* 3349 + * Do this only if ->logged_trans is still 0 to prevent races with 3350 + * concurrent logging as we may see the inode not logged when 3351 + * inode_logged() is called but it gets logged after inode_logged() did 3352 + * not find it in the log tree and we end up setting ->logged_trans to a 3353 + * value less than trans->transid after the concurrent logging task has 3354 + * set it to trans->transid. As a consequence, subsequent rename, unlink 3355 + * and link operations may end up not logging new names and removing old 3356 + * names from the log. 3357 + */ 3358 + spin_lock(&inode->lock); 3359 + if (inode->logged_trans == 0) 3360 + inode->logged_trans = trans->transid - 1; 3361 + else if (inode->logged_trans == trans->transid) 3362 + ret = true; 3363 + spin_unlock(&inode->lock); 3364 + 3365 + return ret; 3366 + } 3367 + 3343 3368 /* 3344 3369 * Check if an inode was logged in the current transaction. This correctly deals 3345 3370 * with the case where the inode was logged but has a logged_trans of 0, which ··· 3382 3357 struct btrfs_key key; 3383 3358 int ret; 3384 3359 3385 - if (inode->logged_trans == trans->transid) 3360 + /* 3361 + * Quick lockless call, since once ->logged_trans is set to the current 3362 + * transaction, we never set it to a lower value anywhere else. 3363 + */ 3364 + if (data_race(inode->logged_trans) == trans->transid) 3386 3365 return 1; 3387 3366 3388 3367 /* 3389 - * If logged_trans is not 0, then we know the inode logged was not logged 3390 - * in this transaction, so we can return false right away. 3368 + * If logged_trans is not 0 and not trans->transid, then we know the 3369 + * inode was not logged in this transaction, so we can return false 3370 + * right away. We take the lock to avoid a race caused by load/store 3371 + * tearing with a concurrent btrfs_log_inode() call or a concurrent task 3372 + * in this function further below - an update to trans->transid can be 3373 + * teared into two 32 bits updates for example, in which case we could 3374 + * see a positive value that is not trans->transid and assume the inode 3375 + * was not logged when it was. 3391 3376 */ 3392 - if (inode->logged_trans > 0) 3377 + spin_lock(&inode->lock); 3378 + if (inode->logged_trans == trans->transid) { 3379 + spin_unlock(&inode->lock); 3380 + return 1; 3381 + } else if (inode->logged_trans > 0) { 3382 + spin_unlock(&inode->lock); 3393 3383 return 0; 3384 + } 3385 + spin_unlock(&inode->lock); 3394 3386 3395 3387 /* 3396 3388 * If no log tree was created for this root in this transaction, then ··· 3416 3374 * transaction's ID, to avoid the search below in a future call in case 3417 3375 * a log tree gets created after this. 3418 3376 */ 3419 - if (!test_bit(BTRFS_ROOT_HAS_LOG_TREE, &inode->root->state)) { 3420 - inode->logged_trans = trans->transid - 1; 3421 - return 0; 3422 - } 3377 + if (!test_bit(BTRFS_ROOT_HAS_LOG_TREE, &inode->root->state)) 3378 + return mark_inode_as_not_logged(trans, inode); 3423 3379 3424 3380 /* 3425 3381 * We have a log tree and the inode's logged_trans is 0. We can't tell ··· 3471 3431 * Set logged_trans to a value greater than 0 and less then the 3472 3432 * current transaction to avoid doing the search in future calls. 3473 3433 */ 3474 - inode->logged_trans = trans->transid - 1; 3475 - return 0; 3434 + return mark_inode_as_not_logged(trans, inode); 3476 3435 } 3477 3436 3478 3437 /* ··· 3479 3440 * the current transacion's ID, to avoid future tree searches as long as 3480 3441 * the inode is not evicted again. 3481 3442 */ 3443 + spin_lock(&inode->lock); 3482 3444 inode->logged_trans = trans->transid; 3483 - 3484 - /* 3485 - * If it's a directory, then we must set last_dir_index_offset to the 3486 - * maximum possible value, so that the next attempt to log the inode does 3487 - * not skip checking if dir index keys found in modified subvolume tree 3488 - * leaves have been logged before, otherwise it would result in attempts 3489 - * to insert duplicate dir index keys in the log tree. This must be done 3490 - * because last_dir_index_offset is an in-memory only field, not persisted 3491 - * in the inode item or any other on-disk structure, so its value is lost 3492 - * once the inode is evicted. 3493 - */ 3494 - if (S_ISDIR(inode->vfs_inode.i_mode)) 3495 - inode->last_dir_index_offset = (u64)-1; 3445 + spin_unlock(&inode->lock); 3496 3446 3497 3447 return 1; 3498 3448 } ··· 4073 4045 4074 4046 /* 4075 4047 * If the inode was logged before and it was evicted, then its 4076 - * last_dir_index_offset is (u64)-1, so we don't the value of the last index 4048 + * last_dir_index_offset is 0, so we don't know the value of the last index 4077 4049 * key offset. If that's the case, search for it and update the inode. This 4078 4050 * is to avoid lookups in the log tree every time we try to insert a dir index 4079 4051 * key from a leaf changed in the current transaction, and to allow us to always ··· 4089 4061 4090 4062 lockdep_assert_held(&inode->log_mutex); 4091 4063 4092 - if (inode->last_dir_index_offset != (u64)-1) 4064 + if (inode->last_dir_index_offset != 0) 4093 4065 return 0; 4094 4066 4095 4067 if (!ctx->logged_before) {
+3
fs/ocfs2/inode.c
··· 1281 1281 * the journal is flushed before journal shutdown. Thus it is safe to 1282 1282 * have inodes get cleaned up after journal shutdown. 1283 1283 */ 1284 + if (!osb->journal) 1285 + return; 1286 + 1284 1287 jbd2_journal_release_jbd_inode(osb->journal->j_journal, 1285 1288 &oi->ip_jinode); 1286 1289 }
+21 -17
fs/proc/generic.c
··· 367 367 .setattr = proc_notify_change, 368 368 }; 369 369 370 + static void pde_set_flags(struct proc_dir_entry *pde) 371 + { 372 + const struct proc_ops *proc_ops = pde->proc_ops; 373 + 374 + if (!proc_ops) 375 + return; 376 + 377 + if (proc_ops->proc_flags & PROC_ENTRY_PERMANENT) 378 + pde->flags |= PROC_ENTRY_PERMANENT; 379 + if (proc_ops->proc_read_iter) 380 + pde->flags |= PROC_ENTRY_proc_read_iter; 381 + #ifdef CONFIG_COMPAT 382 + if (proc_ops->proc_compat_ioctl) 383 + pde->flags |= PROC_ENTRY_proc_compat_ioctl; 384 + #endif 385 + if (proc_ops->proc_lseek) 386 + pde->flags |= PROC_ENTRY_proc_lseek; 387 + } 388 + 370 389 /* returns the registered entry, or frees dp and returns NULL on failure */ 371 390 struct proc_dir_entry *proc_register(struct proc_dir_entry *dir, 372 391 struct proc_dir_entry *dp) 373 392 { 374 393 if (proc_alloc_inum(&dp->low_ino)) 375 394 goto out_free_entry; 395 + 396 + pde_set_flags(dp); 376 397 377 398 write_lock(&proc_subdir_lock); 378 399 dp->parent = dir; ··· 582 561 return p; 583 562 } 584 563 585 - static void pde_set_flags(struct proc_dir_entry *pde) 586 - { 587 - if (pde->proc_ops->proc_flags & PROC_ENTRY_PERMANENT) 588 - pde->flags |= PROC_ENTRY_PERMANENT; 589 - if (pde->proc_ops->proc_read_iter) 590 - pde->flags |= PROC_ENTRY_proc_read_iter; 591 - #ifdef CONFIG_COMPAT 592 - if (pde->proc_ops->proc_compat_ioctl) 593 - pde->flags |= PROC_ENTRY_proc_compat_ioctl; 594 - #endif 595 - if (pde->proc_ops->proc_lseek) 596 - pde->flags |= PROC_ENTRY_proc_lseek; 597 - } 598 - 599 564 struct proc_dir_entry *proc_create_data(const char *name, umode_t mode, 600 565 struct proc_dir_entry *parent, 601 566 const struct proc_ops *proc_ops, void *data) ··· 592 585 if (!p) 593 586 return NULL; 594 587 p->proc_ops = proc_ops; 595 - pde_set_flags(p); 596 588 return proc_register(parent, p); 597 589 } 598 590 EXPORT_SYMBOL(proc_create_data); ··· 642 636 p->proc_ops = &proc_seq_ops; 643 637 p->seq_ops = ops; 644 638 p->state_size = state_size; 645 - pde_set_flags(p); 646 639 return proc_register(parent, p); 647 640 } 648 641 EXPORT_SYMBOL(proc_create_seq_private); ··· 672 667 return NULL; 673 668 p->proc_ops = &proc_single_ops; 674 669 p->single_show = show; 675 - pde_set_flags(p); 676 670 return proc_register(parent, p); 677 671 } 678 672 EXPORT_SYMBOL(proc_create_single_data);
+24 -7
fs/smb/client/cifs_debug.c
··· 304 304 list_for_each(tmp1, &ses->tcon_list) { 305 305 tcon = list_entry(tmp1, struct cifs_tcon, tcon_list); 306 306 cfids = tcon->cfids; 307 + if (!cfids) 308 + continue; 307 309 spin_lock(&cfids->cfid_list_lock); /* check lock ordering */ 308 310 seq_printf(m, "Num entries: %d\n", cfids->num_entries); 309 311 list_for_each_entry(cfid, &cfids->entries, entry) { ··· 321 319 seq_printf(m, "\n"); 322 320 } 323 321 spin_unlock(&cfids->cfid_list_lock); 324 - 325 - 326 322 } 327 323 } 328 324 } ··· 344 344 return "Pattern_V1"; 345 345 default: 346 346 return "invalid"; 347 + } 348 + } 349 + 350 + static __always_inline const char *cipher_alg_str(__le16 cipher) 351 + { 352 + switch (cipher) { 353 + case SMB2_ENCRYPTION_AES128_CCM: 354 + return "AES128-CCM"; 355 + case SMB2_ENCRYPTION_AES128_GCM: 356 + return "AES128-GCM"; 357 + case SMB2_ENCRYPTION_AES256_CCM: 358 + return "AES256-CCM"; 359 + case SMB2_ENCRYPTION_AES256_GCM: 360 + return "AES256-GCM"; 361 + default: 362 + return "UNKNOWN"; 347 363 } 348 364 } 349 365 ··· 555 539 else 556 540 seq_puts(m, "disabled (not supported by this server)"); 557 541 542 + /* Show negotiated encryption cipher, even if not required */ 543 + seq_puts(m, "\nEncryption: "); 544 + if (server->cipher_type) 545 + seq_printf(m, "Negotiated cipher (%s)", cipher_alg_str(server->cipher_type)); 546 + 558 547 seq_printf(m, "\n\n\tSessions: "); 559 548 i = 0; 560 549 list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) { ··· 597 576 598 577 /* dump session id helpful for use with network trace */ 599 578 seq_printf(m, " SessionId: 0x%llx", ses->Suid); 600 - if (ses->session_flags & SMB2_SESSION_FLAG_ENCRYPT_DATA) { 579 + if (ses->session_flags & SMB2_SESSION_FLAG_ENCRYPT_DATA) 601 580 seq_puts(m, " encrypted"); 602 - /* can help in debugging to show encryption type */ 603 - if (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM) 604 - seq_puts(m, "(gcm256)"); 605 - } 606 581 if (ses->sign) 607 582 seq_puts(m, " signed"); 608 583
+3
fs/smb/client/cifs_unicode.c
··· 629 629 int len; 630 630 __le16 *dst; 631 631 632 + if (!src) 633 + return NULL; 634 + 632 635 len = cifs_local_to_utf16_bytes(src, maxlen, cp); 633 636 len += 2; /* NULL */ 634 637 dst = kmalloc(len, GFP_KERNEL);
+1 -1
fs/smb/client/reparse.c
··· 278 278 } 279 279 280 280 /* 281 - * For absolute symlinks it is not possible to determinate 281 + * For absolute symlinks it is not possible to determine 282 282 * if it should point to directory or file. 283 283 */ 284 284 if (symname[0] == '/') {
+2 -2
fs/smb/client/smb1ops.c
··· 1005 1005 rc = -EOPNOTSUPP; 1006 1006 } 1007 1007 1008 - /* Fallback to SMB_COM_SETATTR command when absolutelty needed. */ 1008 + /* Fallback to SMB_COM_SETATTR command when absolutely needed. */ 1009 1009 if (rc == -EOPNOTSUPP) { 1010 1010 cifs_dbg(FYI, "calling SetInformation since SetPathInfo for attrs/times not supported by this server\n"); 1011 1011 rc = SMBSetInformation(xid, tcon, full_path, ··· 1039 1039 cifsFileInfo_put(open_file); 1040 1040 1041 1041 /* 1042 - * Setting the read-only bit is not honered on non-NT servers when done 1042 + * Setting the read-only bit is not honored on non-NT servers when done 1043 1043 * via open-semantics. So for setting it, use SMB_COM_SETATTR command. 1044 1044 * This command works only after the file is closed, so use it only when 1045 1045 * operation was called without the filehandle.
+15 -4
fs/smb/client/smb2misc.c
··· 614 614 struct cifs_tcon *tcon; 615 615 struct cifs_pending_open *open; 616 616 617 + /* Trace receipt of lease break request from server */ 618 + trace_smb3_lease_break_enter(le32_to_cpu(rsp->CurrentLeaseState), 619 + le32_to_cpu(rsp->Flags), 620 + le16_to_cpu(rsp->Epoch), 621 + le32_to_cpu(rsp->hdr.Id.SyncId.TreeId), 622 + le64_to_cpu(rsp->hdr.SessionId), 623 + *((u64 *)rsp->LeaseKey), 624 + *((u64 *)&rsp->LeaseKey[8])); 625 + 617 626 cifs_dbg(FYI, "Checking for lease break\n"); 618 627 619 628 /* If server is a channel, select the primary channel */ ··· 669 660 spin_unlock(&cifs_tcp_ses_lock); 670 661 cifs_dbg(FYI, "Can not process lease break - no lease matched\n"); 671 662 trace_smb3_lease_not_found(le32_to_cpu(rsp->CurrentLeaseState), 672 - le32_to_cpu(rsp->hdr.Id.SyncId.TreeId), 673 - le64_to_cpu(rsp->hdr.SessionId), 674 - *((u64 *)rsp->LeaseKey), 675 - *((u64 *)&rsp->LeaseKey[8])); 663 + le32_to_cpu(rsp->Flags), 664 + le16_to_cpu(rsp->Epoch), 665 + le32_to_cpu(rsp->hdr.Id.SyncId.TreeId), 666 + le64_to_cpu(rsp->hdr.SessionId), 667 + *((u64 *)rsp->LeaseKey), 668 + *((u64 *)&rsp->LeaseKey[8])); 676 669 677 670 return false; 678 671 }
+2 -2
fs/smb/client/smb2pdu.c
··· 6192 6192 please_key_high = (__u64 *)(lease_key+8); 6193 6193 if (rc) { 6194 6194 cifs_stats_fail_inc(tcon, SMB2_OPLOCK_BREAK_HE); 6195 - trace_smb3_lease_err(le32_to_cpu(lease_state), tcon->tid, 6195 + trace_smb3_lease_ack_err(le32_to_cpu(lease_state), tcon->tid, 6196 6196 ses->Suid, *please_key_low, *please_key_high, rc); 6197 6197 cifs_dbg(FYI, "Send error in Lease Break = %d\n", rc); 6198 6198 } else 6199 - trace_smb3_lease_done(le32_to_cpu(lease_state), tcon->tid, 6199 + trace_smb3_lease_ack_done(le32_to_cpu(lease_state), tcon->tid, 6200 6200 ses->Suid, *please_key_low, *please_key_high); 6201 6201 6202 6202 return rc;
+49 -3
fs/smb/client/trace.h
··· 1171 1171 __u64 lease_key_high), \ 1172 1172 TP_ARGS(lease_state, tid, sesid, lease_key_low, lease_key_high)) 1173 1173 1174 - DEFINE_SMB3_LEASE_DONE_EVENT(lease_done); 1175 - DEFINE_SMB3_LEASE_DONE_EVENT(lease_not_found); 1174 + DEFINE_SMB3_LEASE_DONE_EVENT(lease_ack_done); 1175 + /* Tracepoint when a lease break request is received/entered (includes epoch and flags) */ 1176 + DECLARE_EVENT_CLASS(smb3_lease_enter_class, 1177 + TP_PROTO(__u32 lease_state, 1178 + __u32 flags, 1179 + __u16 epoch, 1180 + __u32 tid, 1181 + __u64 sesid, 1182 + __u64 lease_key_low, 1183 + __u64 lease_key_high), 1184 + TP_ARGS(lease_state, flags, epoch, tid, sesid, lease_key_low, lease_key_high), 1185 + TP_STRUCT__entry( 1186 + __field(__u32, lease_state) 1187 + __field(__u32, flags) 1188 + __field(__u16, epoch) 1189 + __field(__u32, tid) 1190 + __field(__u64, sesid) 1191 + __field(__u64, lease_key_low) 1192 + __field(__u64, lease_key_high) 1193 + ), 1194 + TP_fast_assign( 1195 + __entry->lease_state = lease_state; 1196 + __entry->flags = flags; 1197 + __entry->epoch = epoch; 1198 + __entry->tid = tid; 1199 + __entry->sesid = sesid; 1200 + __entry->lease_key_low = lease_key_low; 1201 + __entry->lease_key_high = lease_key_high; 1202 + ), 1203 + TP_printk("sid=0x%llx tid=0x%x lease_key=0x%llx%llx lease_state=0x%x flags=0x%x epoch=%u", 1204 + __entry->sesid, __entry->tid, __entry->lease_key_high, 1205 + __entry->lease_key_low, __entry->lease_state, __entry->flags, __entry->epoch) 1206 + ) 1207 + 1208 + #define DEFINE_SMB3_LEASE_ENTER_EVENT(name) \ 1209 + DEFINE_EVENT(smb3_lease_enter_class, smb3_##name, \ 1210 + TP_PROTO(__u32 lease_state, \ 1211 + __u32 flags, \ 1212 + __u16 epoch, \ 1213 + __u32 tid, \ 1214 + __u64 sesid, \ 1215 + __u64 lease_key_low, \ 1216 + __u64 lease_key_high), \ 1217 + TP_ARGS(lease_state, flags, epoch, tid, sesid, lease_key_low, lease_key_high)) 1218 + 1219 + DEFINE_SMB3_LEASE_ENTER_EVENT(lease_break_enter); 1220 + /* Lease not found: reuse lease_enter payload (includes epoch and flags) */ 1221 + DEFINE_SMB3_LEASE_ENTER_EVENT(lease_not_found); 1176 1222 1177 1223 DECLARE_EVENT_CLASS(smb3_lease_err_class, 1178 1224 TP_PROTO(__u32 lease_state, ··· 1259 1213 int rc), \ 1260 1214 TP_ARGS(lease_state, tid, sesid, lease_key_low, lease_key_high, rc)) 1261 1215 1262 - DEFINE_SMB3_LEASE_ERR_EVENT(lease_err); 1216 + DEFINE_SMB3_LEASE_ERR_EVENT(lease_ack_err); 1263 1217 1264 1218 DECLARE_EVENT_CLASS(smb3_connect_class, 1265 1219 TP_PROTO(char *hostname,
+14 -11
fs/smb/server/smb2pdu.c
··· 2951 2951 } 2952 2952 2953 2953 ksmbd_debug(SMB, "converted name = %s\n", name); 2954 - if (strchr(name, ':')) { 2955 - if (!test_share_config_flag(work->tcon->share_conf, 2956 - KSMBD_SHARE_FLAG_STREAMS)) { 2957 - rc = -EBADF; 2958 - goto err_out2; 2959 - } 2960 - rc = parse_stream_name(name, &stream_name, &s_type); 2961 - if (rc < 0) 2962 - goto err_out2; 2963 - } 2964 2954 2965 2955 if (posix_ctxt == false) { 2956 + if (strchr(name, ':')) { 2957 + if (!test_share_config_flag(work->tcon->share_conf, 2958 + KSMBD_SHARE_FLAG_STREAMS)) { 2959 + rc = -EBADF; 2960 + goto err_out2; 2961 + } 2962 + rc = parse_stream_name(name, &stream_name, &s_type); 2963 + if (rc < 0) 2964 + goto err_out2; 2965 + } 2966 + 2966 2967 rc = ksmbd_validate_filename(name); 2967 2968 if (rc < 0) 2968 2969 goto err_out2; ··· 3443 3442 3444 3443 fp->attrib_only = !(req->DesiredAccess & ~(FILE_READ_ATTRIBUTES_LE | 3445 3444 FILE_WRITE_ATTRIBUTES_LE | FILE_SYNCHRONIZE_LE)); 3445 + 3446 + fp->is_posix_ctxt = posix_ctxt; 3446 3447 3447 3448 /* fp should be searchable through ksmbd_inode.m_fp_list 3448 3449 * after daccess, saccess, attrib_only, and stream are ··· 5991 5988 if (IS_ERR(new_name)) 5992 5989 return PTR_ERR(new_name); 5993 5990 5994 - if (strchr(new_name, ':')) { 5991 + if (fp->is_posix_ctxt == false && strchr(new_name, ':')) { 5995 5992 int s_type; 5996 5993 char *xattr_stream_name, *stream_name = NULL; 5997 5994 size_t xattr_stream_size;
+2
fs/smb/server/vfs_cache.h
··· 112 112 bool is_durable; 113 113 bool is_persistent; 114 114 bool is_resilient; 115 + 116 + bool is_posix_ctxt; 115 117 }; 116 118 117 119 static inline void set_ctx_actor(struct dir_context *ctx,
+2 -1
include/linux/kexec.h
··· 460 460 461 461 /* List of defined/legal kexec file flags */ 462 462 #define KEXEC_FILE_FLAGS (KEXEC_FILE_UNLOAD | KEXEC_FILE_ON_CRASH | \ 463 - KEXEC_FILE_NO_INITRAMFS | KEXEC_FILE_DEBUG) 463 + KEXEC_FILE_NO_INITRAMFS | KEXEC_FILE_DEBUG | \ 464 + KEXEC_FILE_NO_CMA) 464 465 465 466 /* flag to track if kexec reboot is in progress */ 466 467 extern bool kexec_in_progress;
+1
include/linux/perf/riscv_pmu.h
··· 89 89 struct riscv_pmu *riscv_pmu_alloc(void); 90 90 #ifdef CONFIG_RISCV_PMU_SBI 91 91 int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr); 92 + int riscv_pmu_get_event_info(u32 type, u64 config, u64 *econfig); 92 93 #endif 93 94 94 95 #endif /* CONFIG_RISCV_PMU */
+29
include/linux/pgalloc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _LINUX_PGALLOC_H 3 + #define _LINUX_PGALLOC_H 4 + 5 + #include <linux/pgtable.h> 6 + #include <asm/pgalloc.h> 7 + 8 + /* 9 + * {pgd,p4d}_populate_kernel() are defined as macros to allow 10 + * compile-time optimization based on the configured page table levels. 11 + * Without this, linking may fail because callers (e.g., KASAN) may rely 12 + * on calls to these functions being optimized away when passing symbols 13 + * that exist only for certain page table levels. 14 + */ 15 + #define pgd_populate_kernel(addr, pgd, p4d) \ 16 + do { \ 17 + pgd_populate(&init_mm, pgd, p4d); \ 18 + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED) \ 19 + arch_sync_kernel_mappings(addr, addr); \ 20 + } while (0) 21 + 22 + #define p4d_populate_kernel(addr, p4d, pud) \ 23 + do { \ 24 + p4d_populate(&init_mm, p4d, pud); \ 25 + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED) \ 26 + arch_sync_kernel_mappings(addr, addr); \ 27 + } while (0) 28 + 29 + #endif /* _LINUX_PGALLOC_H */
+21 -4
include/linux/pgtable.h
··· 1467 1467 } 1468 1468 #endif 1469 1469 1470 + /* 1471 + * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values 1472 + * and let generic vmalloc, ioremap and page table update code know when 1473 + * arch_sync_kernel_mappings() needs to be called. 1474 + */ 1475 + #ifndef ARCH_PAGE_TABLE_SYNC_MASK 1476 + #define ARCH_PAGE_TABLE_SYNC_MASK 0 1477 + #endif 1478 + 1479 + /* 1480 + * There is no default implementation for arch_sync_kernel_mappings(). It is 1481 + * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK 1482 + * is 0. 1483 + */ 1484 + void arch_sync_kernel_mappings(unsigned long start, unsigned long end); 1485 + 1470 1486 #endif /* CONFIG_MMU */ 1471 1487 1472 1488 /* ··· 1954 1938 /* 1955 1939 * Page Table Modification bits for pgtbl_mod_mask. 1956 1940 * 1957 - * These are used by the p?d_alloc_track*() set of functions an in the generic 1958 - * vmalloc/ioremap code to track at which page-table levels entries have been 1959 - * modified. Based on that the code can better decide when vmalloc and ioremap 1960 - * mapping changes need to be synchronized to other page-tables in the system. 1941 + * These are used by the p?d_alloc_track*() and p*d_populate_kernel() 1942 + * functions in the generic vmalloc, ioremap and page table update code 1943 + * to track at which page-table levels entries have been modified. 1944 + * Based on that the code can better decide when page table changes need 1945 + * to be synchronized to other page-tables in the system. 1961 1946 */ 1962 1947 #define __PGTBL_PGD_MODIFIED 0 1963 1948 #define __PGTBL_P4D_MODIFIED 1
+5
include/linux/phy.h
··· 169 169 return bitmap_empty(intf, PHY_INTERFACE_MODE_MAX); 170 170 } 171 171 172 + static inline unsigned int phy_interface_weight(const unsigned long *intf) 173 + { 174 + return bitmap_weight(intf, PHY_INTERFACE_MODE_MAX); 175 + } 176 + 172 177 static inline void phy_interface_and(unsigned long *dst, const unsigned long *a, 173 178 const unsigned long *b) 174 179 {
+8 -1
include/linux/timekeeper_internal.h
··· 76 76 * @cs_was_changed_seq: The sequence number of clocksource change events 77 77 * @clock_valid: Indicator for valid clock 78 78 * @monotonic_to_boot: CLOCK_MONOTONIC to CLOCK_BOOTTIME offset 79 + * @monotonic_to_aux: CLOCK_MONOTONIC to CLOCK_AUX offset 79 80 * @cycle_interval: Number of clock cycles in one NTP interval 80 81 * @xtime_interval: Number of clock shifted nano seconds in one NTP 81 82 * interval. ··· 117 116 * 118 117 * @offs_aux is used by the auxiliary timekeepers which do not utilize any 119 118 * of the regular timekeeper offset fields. 119 + * 120 + * @monotonic_to_aux is a timespec64 representation of @offs_aux to 121 + * accelerate the VDSO update for CLOCK_AUX. 120 122 * 121 123 * The cacheline ordering of the structure is optimized for in kernel usage of 122 124 * the ktime_get() and ktime_get_ts64() family of time accessors. Struct ··· 163 159 u8 cs_was_changed_seq; 164 160 u8 clock_valid; 165 161 166 - struct timespec64 monotonic_to_boot; 162 + union { 163 + struct timespec64 monotonic_to_boot; 164 + struct timespec64 monotonic_to_aux; 165 + }; 167 166 168 167 u64 cycle_interval; 169 168 u64 xtime_interval;
-16
include/linux/vmalloc.h
··· 220 220 struct page **pages, unsigned int page_shift); 221 221 222 222 /* 223 - * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values 224 - * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings() 225 - * needs to be called. 226 - */ 227 - #ifndef ARCH_PAGE_TABLE_SYNC_MASK 228 - #define ARCH_PAGE_TABLE_SYNC_MASK 0 229 - #endif 230 - 231 - /* 232 - * There is no default implementation for arch_sync_kernel_mappings(). It is 233 - * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK 234 - * is 0. 235 - */ 236 - void arch_sync_kernel_mappings(unsigned long start, unsigned long end); 237 - 238 - /* 239 223 * Lowlevel-APIs (not for driver use!) 240 224 */ 241 225
+13 -4
include/net/sock.h
··· 285 285 * @sk_ack_backlog: current listen backlog 286 286 * @sk_max_ack_backlog: listen backlog set in listen() 287 287 * @sk_uid: user id of owner 288 + * @sk_ino: inode number (zero if orphaned) 288 289 * @sk_prefer_busy_poll: prefer busypolling over softirq processing 289 290 * @sk_busy_poll_budget: napi processing budget when busypolling 290 291 * @sk_priority: %SO_PRIORITY setting ··· 519 518 u32 sk_ack_backlog; 520 519 u32 sk_max_ack_backlog; 521 520 kuid_t sk_uid; 521 + unsigned long sk_ino; 522 522 spinlock_t sk_peer_lock; 523 523 int sk_bind_phc; 524 524 struct pid *sk_peer_pid; ··· 2058 2056 static inline void sk_set_socket(struct sock *sk, struct socket *sock) 2059 2057 { 2060 2058 sk->sk_socket = sock; 2059 + if (sock) { 2060 + WRITE_ONCE(sk->sk_uid, SOCK_INODE(sock)->i_uid); 2061 + WRITE_ONCE(sk->sk_ino, SOCK_INODE(sock)->i_ino); 2062 + } 2061 2063 } 2062 2064 2063 2065 static inline wait_queue_head_t *sk_sleep(struct sock *sk) ··· 2083 2077 sk_set_socket(sk, NULL); 2084 2078 sk->sk_wq = NULL; 2085 2079 /* Note: sk_uid is unchanged. */ 2080 + WRITE_ONCE(sk->sk_ino, 0); 2086 2081 write_unlock_bh(&sk->sk_callback_lock); 2087 2082 } 2088 2083 ··· 2094 2087 rcu_assign_pointer(sk->sk_wq, &parent->wq); 2095 2088 parent->sk = sk; 2096 2089 sk_set_socket(sk, parent); 2097 - WRITE_ONCE(sk->sk_uid, SOCK_INODE(parent)->i_uid); 2098 2090 security_sock_graft(sk, parent); 2099 2091 write_unlock_bh(&sk->sk_callback_lock); 2092 + } 2093 + 2094 + static inline unsigned long sock_i_ino(const struct sock *sk) 2095 + { 2096 + /* Paired with WRITE_ONCE() in sock_graft() and sock_orphan() */ 2097 + return READ_ONCE(sk->sk_ino); 2100 2098 } 2101 2099 2102 2100 static inline kuid_t sk_uid(const struct sock *sk) ··· 2109 2097 /* Paired with WRITE_ONCE() in sockfs_setattr() */ 2110 2098 return READ_ONCE(sk->sk_uid); 2111 2099 } 2112 - 2113 - unsigned long __sock_i_ino(struct sock *sk); 2114 - unsigned long sock_i_ino(struct sock *sk); 2115 2100 2116 2101 static inline kuid_t sock_net_uid(const struct net *net, const struct sock *sk) {
+1 -7
include/pcmcia/ss.h
··· 227 227 228 228 229 229 /* socket drivers must define the resource operations type they use. There 230 - * are three options: 230 + * are two options: 231 231 * - pccard_static_ops iomem and ioport areas are assigned statically 232 - * - pccard_iodyn_ops iomem areas is assigned statically, ioport 233 - * areas dynamically 234 - * If this option is selected, use 235 - * "select PCCARD_IODYN" in Kconfig. 236 232 * - pccard_nonstatic_ops iomem and ioport areas are assigned dynamically. 237 233 * If this option is selected, use 238 234 * "select PCCARD_NONSTATIC" in Kconfig. ··· 236 240 */ 237 241 extern struct pccard_resource_ops pccard_static_ops; 238 242 #if defined(CONFIG_PCMCIA) || defined(CONFIG_PCMCIA_MODULE) 239 - extern struct pccard_resource_ops pccard_iodyn_ops; 240 243 extern struct pccard_resource_ops pccard_nonstatic_ops; 241 244 #else 242 245 /* If PCMCIA is not used, but only CARDBUS, these functions are not used 243 246 * at all. Therefore, do not use the large (240K!) rsrc_nonstatic module 244 247 */ 245 - #define pccard_iodyn_ops pccard_static_ops 246 248 #define pccard_nonstatic_ops pccard_static_ops 247 249 #endif 248 250
+2
include/uapi/linux/netfilter/nf_tables.h
··· 1784 1784 * enum nft_device_attributes - nf_tables device netlink attributes 1785 1785 * 1786 1786 * @NFTA_DEVICE_NAME: name of this device (NLA_STRING) 1787 + * @NFTA_DEVICE_PREFIX: device name prefix, a simple wildcard (NLA_STRING) 1787 1788 */ 1788 1789 enum nft_devices_attributes { 1789 1790 NFTA_DEVICE_UNSPEC, 1790 1791 NFTA_DEVICE_NAME, 1792 + NFTA_DEVICE_PREFIX, 1791 1793 __NFTA_DEVICE_MAX 1792 1794 }; 1793 1795 #define NFTA_DEVICE_MAX (__NFTA_DEVICE_MAX - 1)
+3
init/Kconfig
··· 146 146 config RUSTC_HAS_FILE_WITH_NUL 147 147 def_bool RUSTC_VERSION >= 108900 148 148 149 + config RUSTC_HAS_FILE_AS_C_STR 150 + def_bool RUSTC_VERSION >= 109100 151 + 149 152 config PAHOLE_VERSION 150 153 int 151 154 default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+1 -1
kernel/auditfilter.c
··· 1326 1326 1327 1327 /* handle trailing slashes */ 1328 1328 pathlen -= parentlen; 1329 - while (p[pathlen - 1] == '/') 1329 + while (pathlen > 0 && p[pathlen - 1] == '/') 1330 1330 pathlen--; 1331 1331 1332 1332 if (pathlen != dlen)
+1
kernel/events/core.c
··· 10330 10330 ret = 1; 10331 10331 event->pending_kill = POLL_HUP; 10332 10332 perf_event_disable_inatomic(event); 10333 + event->pmu->stop(event, 0); 10333 10334 } 10334 10335 10335 10336 if (event->attr.sigtrap) {
+1 -1
kernel/fork.c
··· 689 689 mm_pasid_drop(mm); 690 690 mm_destroy_cid(mm); 691 691 percpu_counter_destroy_many(mm->rss_stat, NR_MM_COUNTERS); 692 - futex_hash_free(mm); 693 692 694 693 free_mm(mm); 695 694 } ··· 1137 1138 if (mm->binfmt) 1138 1139 module_put(mm->binfmt->module); 1139 1140 lru_gen_del_mm(mm); 1141 + futex_hash_free(mm); 1140 1142 mmdrop(mm); 1141 1143 } 1142 1144
+12 -4
kernel/futex/core.c
··· 1722 1722 RCU_INIT_POINTER(mm->futex_phash, NULL); 1723 1723 mm->futex_phash_new = NULL; 1724 1724 /* futex-ref */ 1725 + mm->futex_ref = NULL; 1725 1726 atomic_long_set(&mm->futex_atomic, 0); 1726 1727 mm->futex_batches = get_state_synchronize_rcu(); 1727 - mm->futex_ref = alloc_percpu(unsigned int); 1728 - if (!mm->futex_ref) 1729 - return -ENOMEM; 1730 - this_cpu_inc(*mm->futex_ref); /* 0 -> 1 */ 1731 1728 return 0; 1732 1729 } 1733 1730 ··· 1796 1799 return -EBUSY; 1797 1800 return 0; 1798 1801 } 1802 + } 1803 + 1804 + if (!mm->futex_ref) { 1805 + /* 1806 + * This will always be allocated by the first thread and 1807 + * therefore requires no locking. 1808 + */ 1809 + mm->futex_ref = alloc_percpu(unsigned int); 1810 + if (!mm->futex_ref) 1811 + return -ENOMEM; 1812 + this_cpu_inc(*mm->futex_ref); /* 0 -> 1 */ 1799 1813 } 1800 1814 1801 1815 fph = kvzalloc(struct_size(fph, queues, hash_slots),
+2
kernel/sched/topology.c
··· 2201 2201 goto unlock; 2202 2202 2203 2203 hop_masks = bsearch(&k, k.masks, sched_domains_numa_levels, sizeof(k.masks[0]), hop_cmp); 2204 + if (!hop_masks) 2205 + goto unlock; 2204 2206 hop = hop_masks - k.masks; 2205 2207 2206 2208 ret = hop ?
+8 -2
kernel/time/timekeeping.c
··· 83 83 } 84 84 #endif 85 85 86 + static inline void tk_update_aux_offs(struct timekeeper *tk, ktime_t offs) 87 + { 88 + tk->offs_aux = offs; 89 + tk->monotonic_to_aux = ktime_to_timespec64(offs); 90 + } 91 + 86 92 /* flag for if timekeeping is suspended */ 87 93 int __read_mostly timekeeping_suspended; 88 94 ··· 1512 1506 timekeeping_restore_shadow(tkd); 1513 1507 return -EINVAL; 1514 1508 } 1515 - tks->offs_aux = offs; 1509 + tk_update_aux_offs(tks, offs); 1516 1510 } 1517 1511 1518 1512 timekeeping_update_from_shadow(tkd, TK_UPDATE_ALL); ··· 2943 2937 * xtime ("realtime") is not applicable for auxiliary clocks and 2944 2938 * kept in sync with "monotonic". 2945 2939 */ 2946 - aux_tks->offs_aux = ktime_sub(timespec64_to_ktime(*tnew), tnow); 2940 + tk_update_aux_offs(aux_tks, ktime_sub(timespec64_to_ktime(*tnew), tnow)); 2947 2941 2948 2942 timekeeping_update_from_shadow(aux_tkd, TK_UPDATE_ALL); 2949 2943 return 0;
+2 -2
kernel/time/vsyscall.c
··· 159 159 if (clock_mode != VDSO_CLOCKMODE_NONE) { 160 160 fill_clock_configuration(vc, &tk->tkr_mono); 161 161 162 - vdso_ts->sec = tk->xtime_sec; 162 + vdso_ts->sec = tk->xtime_sec + tk->monotonic_to_aux.tv_sec; 163 163 164 164 nsec = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift; 165 - nsec += tk->offs_aux; 165 + nsec += tk->monotonic_to_aux.tv_nsec; 166 166 vdso_ts->sec += __iter_div_u64_rem(nsec, NSEC_PER_SEC, &nsec); 167 167 nsec = nsec << tk->tkr_mono.shift; 168 168 vdso_ts->nsec = nsec;
+2 -2
mm/damon/core.c
··· 2073 2073 2074 2074 if (quota->ms) { 2075 2075 if (quota->total_charged_ns) 2076 - throughput = quota->total_charged_sz * 1000000 / 2077 - quota->total_charged_ns; 2076 + throughput = mult_frac(quota->total_charged_sz, 1000000, 2077 + quota->total_charged_ns); 2078 2078 else 2079 2079 throughput = PAGE_SIZE * 1024; 2080 2080 esz = min(throughput * quota->ms, esz);
+6 -6
mm/kasan/init.c
··· 13 13 #include <linux/mm.h> 14 14 #include <linux/pfn.h> 15 15 #include <linux/slab.h> 16 + #include <linux/pgalloc.h> 16 17 17 18 #include <asm/page.h> 18 - #include <asm/pgalloc.h> 19 19 20 20 #include "kasan.h" 21 21 ··· 191 191 pud_t *pud; 192 192 pmd_t *pmd; 193 193 194 - p4d_populate(&init_mm, p4d, 194 + p4d_populate_kernel(addr, p4d, 195 195 lm_alias(kasan_early_shadow_pud)); 196 196 pud = pud_offset(p4d, addr); 197 197 pud_populate(&init_mm, pud, ··· 212 212 } else { 213 213 p = early_alloc(PAGE_SIZE, NUMA_NO_NODE); 214 214 pud_init(p); 215 - p4d_populate(&init_mm, p4d, p); 215 + p4d_populate_kernel(addr, p4d, p); 216 216 } 217 217 } 218 218 zero_pud_populate(p4d, addr, next); ··· 251 251 * puds,pmds, so pgd_populate(), pud_populate() 252 252 * is noops. 253 253 */ 254 - pgd_populate(&init_mm, pgd, 254 + pgd_populate_kernel(addr, pgd, 255 255 lm_alias(kasan_early_shadow_p4d)); 256 256 p4d = p4d_offset(pgd, addr); 257 - p4d_populate(&init_mm, p4d, 257 + p4d_populate_kernel(addr, p4d, 258 258 lm_alias(kasan_early_shadow_pud)); 259 259 pud = pud_offset(p4d, addr); 260 260 pud_populate(&init_mm, pud, ··· 273 273 if (!p) 274 274 return -ENOMEM; 275 275 } else { 276 - pgd_populate(&init_mm, pgd, 276 + pgd_populate_kernel(addr, pgd, 277 277 early_alloc(PAGE_SIZE, NUMA_NO_NODE)); 278 278 } 279 279 }
+2
mm/kasan/kasan_test_c.c
··· 1578 1578 1579 1579 ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO); 1580 1580 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); 1581 + OPTIMIZER_HIDE_VAR(ptr); 1581 1582 1582 1583 src = kmalloc(KASAN_GRANULE_SIZE, GFP_KERNEL | __GFP_ZERO); 1583 1584 strscpy(src, "f0cacc1a0000000", KASAN_GRANULE_SIZE); 1585 + OPTIMIZER_HIDE_VAR(src); 1584 1586 1585 1587 /* 1586 1588 * Make sure that strscpy() does not trigger KASAN if it overreads into
+14 -8
mm/kasan/shadow.c
··· 305 305 pte_t pte; 306 306 int index; 307 307 308 - if (likely(!pte_none(ptep_get(ptep)))) 309 - return 0; 308 + arch_leave_lazy_mmu_mode(); 310 309 311 310 index = PFN_DOWN(addr - data->start); 312 311 page = data->pages[index]; ··· 318 319 data->pages[index] = NULL; 319 320 } 320 321 spin_unlock(&init_mm.page_table_lock); 322 + 323 + arch_enter_lazy_mmu_mode(); 321 324 322 325 return 0; 323 326 } ··· 462 461 static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr, 463 462 void *unused) 464 463 { 465 - unsigned long page; 464 + pte_t pte; 465 + int none; 466 466 467 - page = (unsigned long)__va(pte_pfn(ptep_get(ptep)) << PAGE_SHIFT); 467 + arch_leave_lazy_mmu_mode(); 468 468 469 469 spin_lock(&init_mm.page_table_lock); 470 - 471 - if (likely(!pte_none(ptep_get(ptep)))) { 470 + pte = ptep_get(ptep); 471 + none = pte_none(pte); 472 + if (likely(!none)) 472 473 pte_clear(&init_mm, addr, ptep); 473 - free_page(page); 474 - } 475 474 spin_unlock(&init_mm.page_table_lock); 475 + 476 + if (likely(!none)) 477 + __free_page(pfn_to_page(pte_pfn(pte))); 478 + 479 + arch_enter_lazy_mmu_mode(); 476 480 477 481 return 0; 478 482 }
+20 -7
mm/kmemleak.c
··· 437 437 else if (untagged_objp == untagged_ptr || alias) 438 438 return object; 439 439 else { 440 + /* 441 + * Printk deferring due to the kmemleak_lock held. 442 + * This is done to avoid deadlock. 443 + */ 444 + printk_deferred_enter(); 440 445 kmemleak_warn("Found object by alias at 0x%08lx\n", 441 446 ptr); 442 447 dump_object_info(object); 448 + printk_deferred_exit(); 443 449 break; 444 450 } 445 451 } ··· 742 736 else if (untagged_objp + parent->size <= untagged_ptr) 743 737 link = &parent->rb_node.rb_right; 744 738 else { 739 + /* 740 + * Printk deferring due to the kmemleak_lock held. 741 + * This is done to avoid deadlock. 742 + */ 743 + printk_deferred_enter(); 745 744 kmemleak_stop("Cannot insert 0x%lx into the object search tree (overlaps existing)\n", 746 745 ptr); 747 746 /* ··· 754 743 * be freed while the kmemleak_lock is held. 755 744 */ 756 745 dump_object_info(parent); 746 + printk_deferred_exit(); 757 747 return -EEXIST; 758 748 } 759 749 } ··· 868 856 869 857 raw_spin_lock_irqsave(&kmemleak_lock, flags); 870 858 object = __find_and_remove_object(ptr, 1, objflags); 871 - if (!object) { 872 - #ifdef DEBUG 873 - kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n", 874 - ptr, size); 875 - #endif 859 + if (!object) 876 860 goto unlock; 877 - } 878 861 879 862 /* 880 863 * Create one or two objects that may result from the memory block ··· 889 882 890 883 unlock: 891 884 raw_spin_unlock_irqrestore(&kmemleak_lock, flags); 892 - if (object) 885 + if (object) { 893 886 __delete_object(object); 887 + } else { 888 + #ifdef DEBUG 889 + kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n", 890 + ptr, size); 891 + #endif 892 + } 894 893 895 894 out: 896 895 if (object_l)
+3 -3
mm/percpu.c
··· 3108 3108 #endif /* BUILD_EMBED_FIRST_CHUNK */ 3109 3109 3110 3110 #ifdef BUILD_PAGE_FIRST_CHUNK 3111 - #include <asm/pgalloc.h> 3111 + #include <linux/pgalloc.h> 3112 3112 3113 3113 #ifndef P4D_TABLE_SIZE 3114 3114 #define P4D_TABLE_SIZE PAGE_SIZE ··· 3134 3134 3135 3135 if (pgd_none(*pgd)) { 3136 3136 p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE); 3137 - pgd_populate(&init_mm, pgd, p4d); 3137 + pgd_populate_kernel(addr, pgd, p4d); 3138 3138 } 3139 3139 3140 3140 p4d = p4d_offset(pgd, addr); 3141 3141 if (p4d_none(*p4d)) { 3142 3142 pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE); 3143 - p4d_populate(&init_mm, p4d, pud); 3143 + p4d_populate_kernel(addr, p4d, pud); 3144 3144 } 3145 3145 3146 3146 pud = pud_offset(p4d, addr);
+26 -11
mm/slub.c
··· 962 962 } 963 963 964 964 #ifdef CONFIG_STACKDEPOT 965 - static noinline depot_stack_handle_t set_track_prepare(void) 965 + static noinline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags) 966 966 { 967 967 depot_stack_handle_t handle; 968 968 unsigned long entries[TRACK_ADDRS_COUNT]; 969 969 unsigned int nr_entries; 970 970 971 971 nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3); 972 - handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT); 972 + handle = stack_depot_save(entries, nr_entries, gfp_flags); 973 973 974 974 return handle; 975 975 } 976 976 #else 977 - static inline depot_stack_handle_t set_track_prepare(void) 977 + static inline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags) 978 978 { 979 979 return 0; 980 980 } ··· 996 996 } 997 997 998 998 static __always_inline void set_track(struct kmem_cache *s, void *object, 999 - enum track_item alloc, unsigned long addr) 999 + enum track_item alloc, unsigned long addr, gfp_t gfp_flags) 1000 1000 { 1001 - depot_stack_handle_t handle = set_track_prepare(); 1001 + depot_stack_handle_t handle = set_track_prepare(gfp_flags); 1002 1002 1003 1003 set_track_update(s, object, alloc, addr, handle); 1004 1004 } ··· 1140 1140 return; 1141 1141 1142 1142 slab_bug(s, reason); 1143 - print_trailer(s, slab, object); 1143 + if (!object || !check_valid_pointer(s, slab, object)) { 1144 + print_slab_info(slab); 1145 + pr_err("Invalid pointer 0x%p\n", object); 1146 + } else { 1147 + print_trailer(s, slab, object); 1148 + } 1144 1149 add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); 1145 1150 1146 1151 WARN_ON(1); ··· 1926 1921 static inline void slab_pad_check(struct kmem_cache *s, struct slab *slab) {} 1927 1922 static inline int check_object(struct kmem_cache *s, struct slab *slab, 1928 1923 void *object, u8 val) { return 1; } 1929 - static inline depot_stack_handle_t set_track_prepare(void) { return 0; } 1924 + static inline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags) { return 0; } 1930 1925 static inline void set_track(struct kmem_cache *s, void *object, 1931 - enum track_item alloc, unsigned long addr) {} 1926 + enum track_item alloc, unsigned long addr, gfp_t gfp_flags) {} 1932 1927 static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n, 1933 1928 struct slab *slab) {} 1934 1929 static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, ··· 3881 3876 * For debug caches here we had to go through 3882 3877 * alloc_single_from_partial() so just store the 3883 3878 * tracking info and return the object. 3879 + * 3880 + * Due to disabled preemption we need to disallow 3881 + * blocking. The flags are further adjusted by 3882 + * gfp_nested_mask() in stack_depot itself. 3884 3883 */ 3885 3884 if (s->flags & SLAB_STORE_USER) 3886 - set_track(s, freelist, TRACK_ALLOC, addr); 3885 + set_track(s, freelist, TRACK_ALLOC, addr, 3886 + gfpflags & ~(__GFP_DIRECT_RECLAIM)); 3887 3887 3888 3888 return freelist; 3889 3889 } ··· 3920 3910 goto new_objects; 3921 3911 3922 3912 if (s->flags & SLAB_STORE_USER) 3923 - set_track(s, freelist, TRACK_ALLOC, addr); 3913 + set_track(s, freelist, TRACK_ALLOC, addr, 3914 + gfpflags & ~(__GFP_DIRECT_RECLAIM)); 3924 3915 3925 3916 return freelist; 3926 3917 } ··· 4432 4421 unsigned long flags; 4433 4422 depot_stack_handle_t handle = 0; 4434 4423 4424 + /* 4425 + * We cannot use GFP_NOWAIT as there are callsites where waking up 4426 + * kswapd could deadlock 4427 + */ 4435 4428 if (s->flags & SLAB_STORE_USER) 4436 - handle = set_track_prepare(); 4429 + handle = set_track_prepare(__GFP_NOWARN); 4437 4430 4438 4431 spin_lock_irqsave(&n->list_lock, flags); 4439 4432
+3 -8
mm/sparse-vmemmap.c
··· 27 27 #include <linux/spinlock.h> 28 28 #include <linux/vmalloc.h> 29 29 #include <linux/sched.h> 30 + #include <linux/pgalloc.h> 30 31 31 32 #include <asm/dma.h> 32 - #include <asm/pgalloc.h> 33 33 #include <asm/tlbflush.h> 34 34 35 35 #include "hugetlb_vmemmap.h" ··· 229 229 if (!p) 230 230 return NULL; 231 231 pud_init(p); 232 - p4d_populate(&init_mm, p4d, p); 232 + p4d_populate_kernel(addr, p4d, p); 233 233 } 234 234 return p4d; 235 235 } ··· 241 241 void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node); 242 242 if (!p) 243 243 return NULL; 244 - pgd_populate(&init_mm, pgd, p); 244 + pgd_populate_kernel(addr, pgd, p); 245 245 } 246 246 return pgd; 247 247 } ··· 577 577 578 578 if (r < 0) 579 579 return NULL; 580 - 581 - if (system_state == SYSTEM_BOOTING) 582 - memmap_boot_pages_add(DIV_ROUND_UP(end - start, PAGE_SIZE)); 583 - else 584 - memmap_pages_add(DIV_ROUND_UP(end - start, PAGE_SIZE)); 585 580 586 581 return pfn_to_page(pfn); 587 582 }
+9 -6
mm/sparse.c
··· 454 454 */ 455 455 sparsemap_buf = memmap_alloc(size, section_map_size(), addr, nid, true); 456 456 sparsemap_buf_end = sparsemap_buf + size; 457 - #ifndef CONFIG_SPARSEMEM_VMEMMAP 458 - memmap_boot_pages_add(DIV_ROUND_UP(size, PAGE_SIZE)); 459 - #endif 460 457 } 461 458 462 459 static void __init sparse_buffer_fini(void) ··· 564 567 sparse_buffer_fini(); 565 568 goto failed; 566 569 } 570 + memmap_boot_pages_add(DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page), 571 + PAGE_SIZE)); 567 572 sparse_init_early_section(nid, map, pnum, 0); 568 573 } 569 574 } ··· 679 680 unsigned long start = (unsigned long) pfn_to_page(pfn); 680 681 unsigned long end = start + nr_pages * sizeof(struct page); 681 682 682 - memmap_pages_add(-1L * (DIV_ROUND_UP(end - start, PAGE_SIZE))); 683 683 vmemmap_free(start, end, altmap); 684 684 } 685 685 static void free_map_bootmem(struct page *memmap) ··· 854 856 * The memmap of early sections is always fully populated. See 855 857 * section_activate() and pfn_valid() . 856 858 */ 857 - if (!section_is_early) 859 + if (!section_is_early) { 860 + memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE))); 858 861 depopulate_section_memmap(pfn, nr_pages, altmap); 859 - else if (memmap) 862 + } else if (memmap) { 863 + memmap_boot_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), 864 + PAGE_SIZE))); 860 865 free_map_bootmem(memmap); 866 + } 861 867 862 868 if (empty) 863 869 ms->section_mem_map = (unsigned long)NULL; ··· 906 904 section_deactivate(pfn, nr_pages, altmap); 907 905 return ERR_PTR(-ENOMEM); 908 906 } 907 + memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)); 909 908 910 909 return memmap; 911 910 }
+7 -2
mm/userfaultfd.c
··· 1453 1453 folio_unlock(src_folio); 1454 1454 folio_put(src_folio); 1455 1455 } 1456 - if (dst_pte) 1457 - pte_unmap(dst_pte); 1456 + /* 1457 + * Unmap in reverse order (LIFO) to maintain proper kmap_local 1458 + * index ordering when CONFIG_HIGHPTE is enabled. We mapped dst_pte 1459 + * first, then src_pte, so we must unmap src_pte first, then dst_pte. 1460 + */ 1458 1461 if (src_pte) 1459 1462 pte_unmap(src_pte); 1463 + if (dst_pte) 1464 + pte_unmap(dst_pte); 1460 1465 mmu_notifier_invalidate_range_end(&range); 1461 1466 if (si) 1462 1467 put_swap_device(si);
+4 -2
net/atm/resources.c
··· 112 112 113 113 if (atm_proc_dev_register(dev) < 0) { 114 114 pr_err("atm_proc_dev_register failed for dev %s\n", type); 115 - goto out_fail; 115 + mutex_unlock(&atm_dev_mutex); 116 + kfree(dev); 117 + return NULL; 116 118 } 117 119 118 120 if (atm_register_sysfs(dev, parent) < 0) { ··· 130 128 return dev; 131 129 132 130 out_fail: 133 - kfree(dev); 131 + put_device(&dev->class_dev); 134 132 dev = NULL; 135 133 goto out; 136 134 }
+4
net/ax25/ax25_in.c
··· 433 433 int ax25_kiss_rcv(struct sk_buff *skb, struct net_device *dev, 434 434 struct packet_type *ptype, struct net_device *orig_dev) 435 435 { 436 + skb = skb_share_check(skb, GFP_ATOMIC); 437 + if (!skb) 438 + return NET_RX_DROP; 439 + 436 440 skb_orphan(skb); 437 441 438 442 if (!net_eq(dev_net(dev), &init_net)) {
+6 -1
net/batman-adv/network-coding.c
··· 1687 1687 1688 1688 coding_len = ntohs(coded_packet_tmp.coded_len); 1689 1689 1690 - if (coding_len > skb->len) 1690 + /* ensure dst buffer is large enough (payload only) */ 1691 + if (coding_len + h_size > skb->len) 1692 + return NULL; 1693 + 1694 + /* ensure src buffer is large enough (payload only) */ 1695 + if (coding_len + h_size > nc_packet->skb->len) 1691 1696 return NULL; 1692 1697 1693 1698 /* Here the magic is reversed:
+3
net/bluetooth/l2cap_sock.c
··· 1422 1422 if (!sk) 1423 1423 return 0; 1424 1424 1425 + lock_sock_nested(sk, L2CAP_NESTING_PARENT); 1425 1426 l2cap_sock_cleanup_listen(sk); 1427 + release_sock(sk); 1428 + 1426 1429 bt_sock_unlink(&l2cap_sk_list, sk); 1427 1430 1428 1431 err = l2cap_sock_shutdown(sock, SHUT_RDWR);
-3
net/bridge/br_netfilter_hooks.c
··· 626 626 break; 627 627 } 628 628 629 - ct = container_of(nfct, struct nf_conn, ct_general); 630 - WARN_ON_ONCE(!nf_ct_is_confirmed(ct)); 631 - 632 629 return ret; 633 630 } 634 631 #endif
+2
net/core/gen_estimator.c
··· 90 90 rate = (b_packets - est->last_packets) << (10 - est->intvl_log); 91 91 rate = (rate >> est->ewma_log) - (est->avpps >> est->ewma_log); 92 92 93 + preempt_disable_nested(); 93 94 write_seqcount_begin(&est->seq); 94 95 est->avbps += brate; 95 96 est->avpps += rate; 96 97 write_seqcount_end(&est->seq); 98 + preempt_enable_nested(); 97 99 98 100 est->last_bytes = b_bytes; 99 101 est->last_packets = b_packets;
-22
net/core/sock.c
··· 2780 2780 EXPORT_SYMBOL(sock_pfree); 2781 2781 #endif /* CONFIG_INET */ 2782 2782 2783 - unsigned long __sock_i_ino(struct sock *sk) 2784 - { 2785 - unsigned long ino; 2786 - 2787 - read_lock(&sk->sk_callback_lock); 2788 - ino = sk->sk_socket ? SOCK_INODE(sk->sk_socket)->i_ino : 0; 2789 - read_unlock(&sk->sk_callback_lock); 2790 - return ino; 2791 - } 2792 - EXPORT_SYMBOL(__sock_i_ino); 2793 - 2794 - unsigned long sock_i_ino(struct sock *sk) 2795 - { 2796 - unsigned long ino; 2797 - 2798 - local_bh_disable(); 2799 - ino = __sock_i_ino(sk); 2800 - local_bh_enable(); 2801 - return ino; 2802 - } 2803 - EXPORT_SYMBOL(sock_i_ino); 2804 - 2805 2783 /* 2806 2784 * Allocate a skb from the socket's send buffer. 2807 2785 */
+3 -4
net/ipv4/devinet.c
··· 340 340 341 341 static int __init inet_blackhole_dev_init(void) 342 342 { 343 - int err = 0; 343 + struct in_device *in_dev; 344 344 345 345 rtnl_lock(); 346 - if (!inetdev_init(blackhole_netdev)) 347 - err = -ENOMEM; 346 + in_dev = inetdev_init(blackhole_netdev); 348 347 rtnl_unlock(); 349 348 350 - return err; 349 + return PTR_ERR_OR_ZERO(in_dev); 351 350 } 352 351 late_initcall(inet_blackhole_dev_init); 353 352
+4 -2
net/ipv4/icmp.c
··· 799 799 struct sk_buff *cloned_skb = NULL; 800 800 struct ip_options opts = { 0 }; 801 801 enum ip_conntrack_info ctinfo; 802 + enum ip_conntrack_dir dir; 802 803 struct nf_conn *ct; 803 804 __be32 orig_ip; 804 805 805 806 ct = nf_ct_get(skb_in, &ctinfo); 806 - if (!ct || !(ct->status & IPS_SRC_NAT)) { 807 + if (!ct || !(READ_ONCE(ct->status) & IPS_NAT_MASK)) { 807 808 __icmp_send(skb_in, type, code, info, &opts); 808 809 return; 809 810 } ··· 819 818 goto out; 820 819 821 820 orig_ip = ip_hdr(skb_in)->saddr; 822 - ip_hdr(skb_in)->saddr = ct->tuplehash[0].tuple.src.u3.ip; 821 + dir = CTINFO2DIR(ctinfo); 822 + ip_hdr(skb_in)->saddr = ct->tuplehash[dir].tuple.src.u3.ip; 823 823 __icmp_send(skb_in, type, code, info, &opts); 824 824 ip_hdr(skb_in)->saddr = orig_ip; 825 825 out:
+2 -4
net/ipv6/exthdrs.c
··· 494 494 495 495 idev = __in6_dev_get(skb->dev); 496 496 497 - accept_rpl_seg = net->ipv6.devconf_all->rpl_seg_enabled; 498 - if (accept_rpl_seg > idev->cnf.rpl_seg_enabled) 499 - accept_rpl_seg = idev->cnf.rpl_seg_enabled; 500 - 497 + accept_rpl_seg = min(READ_ONCE(net->ipv6.devconf_all->rpl_seg_enabled), 498 + READ_ONCE(idev->cnf.rpl_seg_enabled)); 501 499 if (!accept_rpl_seg) { 502 500 kfree_skb(skb); 503 501 return -1;
+4 -2
net/ipv6/ip6_icmp.c
··· 54 54 struct inet6_skb_parm parm = { 0 }; 55 55 struct sk_buff *cloned_skb = NULL; 56 56 enum ip_conntrack_info ctinfo; 57 + enum ip_conntrack_dir dir; 57 58 struct in6_addr orig_ip; 58 59 struct nf_conn *ct; 59 60 60 61 ct = nf_ct_get(skb_in, &ctinfo); 61 - if (!ct || !(ct->status & IPS_SRC_NAT)) { 62 + if (!ct || !(READ_ONCE(ct->status) & IPS_NAT_MASK)) { 62 63 __icmpv6_send(skb_in, type, code, info, &parm); 63 64 return; 64 65 } ··· 74 73 goto out; 75 74 76 75 orig_ip = ipv6_hdr(skb_in)->saddr; 77 - ipv6_hdr(skb_in)->saddr = ct->tuplehash[0].tuple.src.u3.in6; 76 + dir = CTINFO2DIR(ctinfo); 77 + ipv6_hdr(skb_in)->saddr = ct->tuplehash[dir].tuple.src.u3.in6; 78 78 __icmpv6_send(skb_in, type, code, info, &parm); 79 79 ipv6_hdr(skb_in)->saddr = orig_ip; 80 80 out:
+15 -17
net/ipv6/tcp_ipv6.c
··· 1431 1431 ireq = inet_rsk(req); 1432 1432 1433 1433 if (sk_acceptq_is_full(sk)) 1434 - goto out_overflow; 1434 + goto exit_overflow; 1435 1435 1436 1436 if (!dst) { 1437 1437 dst = inet6_csk_route_req(sk, &fl6, req, IPPROTO_TCP); 1438 1438 if (!dst) 1439 - goto out; 1439 + goto exit; 1440 1440 } 1441 1441 1442 1442 newsk = tcp_create_openreq_child(sk, req, skb); 1443 1443 if (!newsk) 1444 - goto out_nonewsk; 1444 + goto exit_nonewsk; 1445 1445 1446 1446 /* 1447 1447 * No need to charge this sock to the relevant IPv6 refcnt debug socks ··· 1525 1525 const union tcp_md5_addr *addr; 1526 1526 1527 1527 addr = (union tcp_md5_addr *)&newsk->sk_v6_daddr; 1528 - if (tcp_md5_key_copy(newsk, addr, AF_INET6, 128, l3index, key)) { 1529 - inet_csk_prepare_forced_close(newsk); 1530 - tcp_done(newsk); 1531 - goto out; 1532 - } 1528 + if (tcp_md5_key_copy(newsk, addr, AF_INET6, 128, l3index, key)) 1529 + goto put_and_exit; 1533 1530 } 1534 1531 } 1535 1532 #endif 1536 1533 #ifdef CONFIG_TCP_AO 1537 1534 /* Copy over tcp_ao_info if any */ 1538 1535 if (tcp_ao_copy_all_matching(sk, newsk, req, skb, AF_INET6)) 1539 - goto out; /* OOM */ 1536 + goto put_and_exit; /* OOM */ 1540 1537 #endif 1541 1538 1542 - if (__inet_inherit_port(sk, newsk) < 0) { 1543 - inet_csk_prepare_forced_close(newsk); 1544 - tcp_done(newsk); 1545 - goto out; 1546 - } 1539 + if (__inet_inherit_port(sk, newsk) < 0) 1540 + goto put_and_exit; 1547 1541 *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), 1548 1542 &found_dup_sk); 1549 1543 if (*own_req) { ··· 1564 1570 1565 1571 return newsk; 1566 1572 1567 - out_overflow: 1573 + exit_overflow: 1568 1574 __NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS); 1569 - out_nonewsk: 1575 + exit_nonewsk: 1570 1576 dst_release(dst); 1571 - out: 1577 + exit: 1572 1578 tcp_listendrop(sk); 1573 1579 return NULL; 1580 + put_and_exit: 1581 + inet_csk_prepare_forced_close(newsk); 1582 + tcp_done(newsk); 1583 + goto exit; 1574 1584 } 1575 1585 1576 1586 
INDIRECT_CALLABLE_DECLARE(struct dst_entry *ipv4_dst_check(struct dst_entry *,
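The tcp_ipv6.c hunk above replaces two duplicated `inet_csk_prepare_forced_close()`/`tcp_done()` sequences with a single `put_and_exit` label. A minimal standalone sketch of that centralized-cleanup idiom follows; the struct and function names here are illustrative, not the kernel's:

```c
#include <stdlib.h>

/* Illustrative stand-in for the socket being set up. */
struct conn {
	int ready;
};

/*
 * Centralized error-path idiom: every failure after the object is
 * allocated jumps to one "put_and_exit" teardown site instead of
 * duplicating the cleanup at each call site.
 */
static struct conn *conn_create(int fail_key_copy, int fail_inherit_port)
{
	struct conn *c = malloc(sizeof(*c));

	if (!c)
		goto exit;

	if (fail_key_copy)		/* e.g. the key-copy step failed */
		goto put_and_exit;

	if (fail_inherit_port)		/* e.g. port inheritance failed */
		goto put_and_exit;

	c->ready = 1;
	return c;

put_and_exit:
	free(c);	/* single teardown site for all late failures */
exit:
	return NULL;
}
```

The payoff is the same as in the hunk: adding a new late failure is one `goto`, and the teardown order lives in exactly one place.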
+1 -1
net/mac80211/driver-ops.h
··· 1416 1416 struct ieee80211_sub_if_data *sdata, 1417 1417 struct cfg80211_ftm_responder_stats *ftm_stats) 1418 1418 { 1419 - u32 ret = -EOPNOTSUPP; 1419 + int ret = -EOPNOTSUPP; 1420 1420 1421 1421 might_sleep(); 1422 1422 lockdep_assert_wiphy(local->hw.wiphy);
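The one-line `u32` to `int` fix above matters because a negative errno stored in an unsigned type becomes a huge positive value, so callers checking for a negative return never see the failure. A small sketch of that bug class (the errno constant is simplified; this is not mac80211 code):

```c
/* Simplified errno-style constant, as in the kernel. */
#define EOPNOTSUPP 95

/* Buggy variant: the negative errno is stored in an unsigned type and
 * wraps to a huge positive value, so it can never test as negative. */
static int sign_lost(void)
{
	unsigned int ret = -EOPNOTSUPP;

	return ret > 0;	/* 1: the "error" now looks like a positive value */
}

/* Fixed variant, matching the hunk: a signed int keeps the sign, so
 * the usual "ret < 0" error check works. */
static int sign_kept(void)
{
	int ret = -EOPNOTSUPP;

	return ret < 0;	/* 1: the error is still negative */
}
```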
+6 -1
net/mac80211/main.c
··· 1111 1111 int result, i; 1112 1112 enum nl80211_band band; 1113 1113 int channels, max_bitrates; 1114 - bool supp_ht, supp_vht, supp_he, supp_eht; 1114 + bool supp_ht, supp_vht, supp_he, supp_eht, supp_s1g; 1115 1115 struct cfg80211_chan_def dflt_chandef = {}; 1116 1116 1117 1117 if (ieee80211_hw_check(hw, QUEUE_CONTROL) && ··· 1227 1227 supp_vht = false; 1228 1228 supp_he = false; 1229 1229 supp_eht = false; 1230 + supp_s1g = false; 1230 1231 for (band = 0; band < NUM_NL80211_BANDS; band++) { 1231 1232 const struct ieee80211_sband_iftype_data *iftd; 1232 1233 struct ieee80211_supported_band *sband; ··· 1275 1274 max_bitrates = sband->n_bitrates; 1276 1275 supp_ht = supp_ht || sband->ht_cap.ht_supported; 1277 1276 supp_vht = supp_vht || sband->vht_cap.vht_supported; 1277 + supp_s1g = supp_s1g || sband->s1g_cap.s1g; 1278 1278 1279 1279 for_each_sband_iftype_data(sband, i, iftd) { 1280 1280 u8 he_40_mhz_cap; ··· 1407 1405 if (supp_vht) 1408 1406 local->scan_ies_len += 1409 1407 2 + sizeof(struct ieee80211_vht_cap); 1408 + 1409 + if (supp_s1g) 1410 + local->scan_ies_len += 2 + sizeof(struct ieee80211_s1g_cap); 1410 1411 1411 1412 /* 1412 1413 * HE cap element is variable in size - set len to allow max size */
+8
net/mac80211/mlme.c
··· 1189 1189 "required MCSes not supported, disabling EHT\n"); 1190 1190 } 1191 1191 1192 + if (conn->mode >= IEEE80211_CONN_MODE_EHT && 1193 + channel->band != NL80211_BAND_2GHZ && 1194 + conn->bw_limit == IEEE80211_CONN_BW_LIMIT_40) { 1195 + conn->mode = IEEE80211_CONN_MODE_HE; 1196 + link_id_info(sdata, link_id, 1197 + "required bandwidth not supported, disabling EHT\n"); 1198 + } 1199 + 1192 1200 /* the mode can only decrease, so this must terminate */ 1193 1201 if (ap_mode != conn->mode) { 1194 1202 kfree(elems);
+25 -5
net/mac80211/tests/chan-mode.c
··· 2 2 /* 3 3 * KUnit tests for channel mode functions 4 4 * 5 - * Copyright (C) 2024 Intel Corporation 5 + * Copyright (C) 2024-2025 Intel Corporation 6 6 */ 7 7 #include <net/cfg80211.h> 8 8 #include <kunit/test.h> ··· 28 28 u8 vht_basic_mcs_1_4, vht_basic_mcs_5_8; 29 29 u8 he_basic_mcs_1_4, he_basic_mcs_5_8; 30 30 u8 eht_mcs7_min_nss; 31 + u16 eht_disabled_subchannels; 32 + u8 eht_bw; 33 + enum ieee80211_conn_bw_limit conn_bw_limit; 34 + enum ieee80211_conn_bw_limit expected_bw_limit; 31 35 int error; 32 36 } determine_chan_mode_cases[] = { 33 37 { ··· 132 128 .conn_mode = IEEE80211_CONN_MODE_EHT, 133 129 .eht_mcs7_min_nss = 0x15, 134 130 .error = EINVAL, 131 + }, { 132 + .desc = "80 MHz EHT is downgraded to 40 MHz HE due to puncturing", 133 + .conn_mode = IEEE80211_CONN_MODE_EHT, 134 + .expected_mode = IEEE80211_CONN_MODE_HE, 135 + .conn_bw_limit = IEEE80211_CONN_BW_LIMIT_80, 136 + .expected_bw_limit = IEEE80211_CONN_BW_LIMIT_40, 137 + .eht_disabled_subchannels = 0x08, 138 + .eht_bw = IEEE80211_EHT_OPER_CHAN_WIDTH_80MHZ, 135 139 } 136 140 }; 137 141 KUNIT_ARRAY_PARAM_DESC(determine_chan_mode, determine_chan_mode_cases, desc) ··· 150 138 struct t_sdata *t_sdata = T_SDATA(test); 151 139 struct ieee80211_conn_settings conn = { 152 140 .mode = params->conn_mode, 153 - .bw_limit = IEEE80211_CONN_BW_LIMIT_20, 141 + .bw_limit = params->conn_bw_limit, 154 142 }; 155 143 struct cfg80211_bss cbss = { 156 144 .channel = &t_sdata->band_5ghz.channels[0], ··· 203 191 0x7f, 0x01, 0x00, 0x88, 0x88, 0x88, 0x00, 0x00, 204 192 0x00, 205 193 /* EHT Operation */ 206 - WLAN_EID_EXTENSION, 0x09, WLAN_EID_EXT_EHT_OPERATION, 207 - 0x01, params->eht_mcs7_min_nss ? params->eht_mcs7_min_nss : 0x11, 208 - 0x00, 0x00, 0x00, 0x00, 0x24, 0x00, 194 + WLAN_EID_EXTENSION, 0x0b, WLAN_EID_EXT_EHT_OPERATION, 195 + 0x03, params->eht_mcs7_min_nss ? params->eht_mcs7_min_nss : 0x11, 196 + 0x00, 0x00, 0x00, params->eht_bw, 197 + params->eht_bw == IEEE80211_EHT_OPER_CHAN_WIDTH_80MHZ ? 
42 : 36, 198 + 0x00, 199 + u16_get_bits(params->eht_disabled_subchannels, 0xff), 200 + u16_get_bits(params->eht_disabled_subchannels, 0xff00), 209 201 }; 210 202 struct ieee80211_chan_req chanreq = {}; 211 203 struct cfg80211_chan_def ap_chandef = {}; 212 204 struct ieee802_11_elems *elems; 205 + 206 + /* To force EHT downgrade to HE on punctured 80 MHz downgraded to 40 MHz */ 207 + set_bit(IEEE80211_HW_DISALLOW_PUNCTURING, t_sdata->local.hw.flags); 213 208 214 209 if (params->strict) 215 210 set_bit(IEEE80211_HW_STRICT, t_sdata->local.hw.flags); ··· 256 237 } else { 257 238 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, elems); 258 239 KUNIT_ASSERT_EQ(test, conn.mode, params->expected_mode); 240 + KUNIT_ASSERT_EQ(test, conn.bw_limit, params->expected_bw_limit); 259 241 } 260 242 } 261 243
+1 -1
net/mctp/af_mctp.c
··· 425 425 return 0; 426 426 } 427 427 428 - return -EINVAL; 428 + return -ENOPROTOOPT; 429 429 } 430 430 431 431 /* helpers for reading/writing the tag ioc, handling compatibility across the
+19 -16
net/mctp/route.c
··· 378 378 static void mctp_flow_prepare_output(struct sk_buff *skb, struct mctp_dev *dev) {} 379 379 #endif 380 380 381 + /* takes ownership of skb, both in success and failure cases */ 381 382 static int mctp_frag_queue(struct mctp_sk_key *key, struct sk_buff *skb) 382 383 { 383 384 struct mctp_hdr *hdr = mctp_hdr(skb); ··· 388 387 & MCTP_HDR_SEQ_MASK; 389 388 390 389 if (!key->reasm_head) { 391 - /* Since we're manipulating the shared frag_list, ensure it isn't 392 - * shared with any other SKBs. 390 + /* Since we're manipulating the shared frag_list, ensure it 391 + * isn't shared with any other SKBs. In the cloned case, 392 + * this will free the skb; callers can no longer access it 393 + * safely. 393 394 */ 394 395 key->reasm_head = skb_unshare(skb, GFP_ATOMIC); 395 396 if (!key->reasm_head) ··· 405 402 exp_seq = (key->last_seq + 1) & MCTP_HDR_SEQ_MASK; 406 403 407 404 if (this_seq != exp_seq) 408 - return -EINVAL; 405 + goto err_free; 409 406 410 407 if (key->reasm_head->len + skb->len > mctp_message_maxlen) 411 - return -EINVAL; 408 + goto err_free; 412 409 413 410 skb->next = NULL; 414 411 skb->sk = NULL; ··· 422 419 key->reasm_head->truesize += skb->truesize; 423 420 424 421 return 0; 422 + 423 + err_free: 424 + kfree_skb(skb); 425 + return -EINVAL; 425 426 } 426 427 427 428 static int mctp_dst_input(struct mctp_dst *dst, struct sk_buff *skb) ··· 539 532 * key isn't observable yet 540 533 */ 541 534 mctp_frag_queue(key, skb); 535 + skb = NULL; 542 536 543 537 /* if the key_add fails, we've raced with another 544 538 * SOM packet with the same src, dest and tag. There's 545 539 * no way to distinguish future packets, so all we 546 - * can do is drop; we'll free the skb on exit from 547 - * this function. 540 + * can do is drop. 
548 541 */ 549 542 rc = mctp_key_add(key, msk); 550 - if (!rc) { 543 + if (!rc) 551 544 trace_mctp_key_acquire(key); 552 - skb = NULL; 553 - } 554 545 555 546 /* we don't need to release key->lock on exit, so 556 547 * clean up here and suppress the unlock via ··· 566 561 key = NULL; 567 562 } else { 568 563 rc = mctp_frag_queue(key, skb); 569 - if (!rc) 570 - skb = NULL; 564 + skb = NULL; 571 565 } 572 566 } 573 567 ··· 576 572 */ 577 573 578 574 /* we need to be continuing an existing reassembly... */ 579 - if (!key->reasm_head) 575 + if (!key->reasm_head) { 580 576 rc = -EINVAL; 581 - else 577 + } else { 582 578 rc = mctp_frag_queue(key, skb); 579 + skb = NULL; 580 + } 583 581 584 582 if (rc) 585 583 goto out_unlock; 586 - 587 - /* we've queued; the queue owns the skb now */ 588 - skb = NULL; 589 584 590 585 /* end of message? deliver to socket, and we're done with 591 586 * the reassembly/response key
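The mctp route.c hunk pins down the contract that `mctp_frag_queue()` consumes the skb in both the success and failure cases, so callers just NULL their pointer afterwards. A standalone sketch of that ownership rule, with an illustrative buffer type standing in for `struct sk_buff`:

```c
#include <stdlib.h>

/* Illustrative fragment buffer standing in for struct sk_buff. */
struct frag {
	size_t len;
	struct frag *next;
};

struct reasm_key {
	struct frag *head;
	size_t total;
	size_t max_len;
};

/*
 * Takes ownership of @f in both success and failure cases, as the
 * comment added in the hunk requires: the caller must not touch @f
 * after this returns, whatever the return value.
 */
static int frag_queue(struct reasm_key *key, struct frag *f)
{
	if (key->total + f->len > key->max_len) {
		free(f);		/* consumed even on error */
		return -1;
	}

	f->next = key->head;		/* queued: the key owns it now */
	key->head = f;
	key->total += f->len;
	return 0;
}
```

Making the callee consume the buffer unconditionally removes exactly the kind of caller-side `if (!rc) skb = NULL;` bookkeeping the hunk deletes.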
-1
net/mptcp/protocol.c
··· 3554 3554 write_lock_bh(&sk->sk_callback_lock); 3555 3555 rcu_assign_pointer(sk->sk_wq, &parent->wq); 3556 3556 sk_set_socket(sk, parent); 3557 - WRITE_ONCE(sk->sk_uid, SOCK_INODE(parent)->i_uid); 3558 3557 write_unlock_bh(&sk->sk_callback_lock); 3559 3558 } 3560 3559
+2 -2
net/netfilter/nf_conntrack_helper.c
··· 368 368 (cur->tuple.src.l3num == NFPROTO_UNSPEC || 369 369 cur->tuple.src.l3num == me->tuple.src.l3num) && 370 370 cur->tuple.dst.protonum == me->tuple.dst.protonum) { 371 - ret = -EEXIST; 371 + ret = -EBUSY; 372 372 goto out; 373 373 } 374 374 } ··· 379 379 hlist_for_each_entry(cur, &nf_ct_helper_hash[h], hnode) { 380 380 if (nf_ct_tuple_src_mask_cmp(&cur->tuple, &me->tuple, 381 381 &mask)) { 382 - ret = -EEXIST; 382 + ret = -EBUSY; 383 383 goto out; 384 384 } 385 385 }
+31 -11
net/netfilter/nf_tables_api.c
··· 1959 1959 return -ENOSPC; 1960 1960 } 1961 1961 1962 + static bool hook_is_prefix(struct nft_hook *hook) 1963 + { 1964 + return strlen(hook->ifname) >= hook->ifnamelen; 1965 + } 1966 + 1967 + static int nft_nla_put_hook_dev(struct sk_buff *skb, struct nft_hook *hook) 1968 + { 1969 + int attr = hook_is_prefix(hook) ? NFTA_DEVICE_PREFIX : NFTA_DEVICE_NAME; 1970 + 1971 + return nla_put_string(skb, attr, hook->ifname); 1972 + } 1973 + 1962 1974 static int nft_dump_basechain_hook(struct sk_buff *skb, 1963 1975 const struct net *net, int family, 1964 1976 const struct nft_base_chain *basechain, ··· 2002 1990 if (!first) 2003 1991 first = hook; 2004 1992 2005 - if (nla_put(skb, NFTA_DEVICE_NAME, 2006 - hook->ifnamelen, hook->ifname)) 1993 + if (nft_nla_put_hook_dev(skb, hook)) 2007 1994 goto nla_put_failure; 2008 1995 n++; 2009 1996 } 2010 1997 nla_nest_end(skb, nest_devs); 2011 1998 2012 1999 if (n == 1 && 2013 - nla_put(skb, NFTA_HOOK_DEV, 2014 - first->ifnamelen, first->ifname)) 2000 + !hook_is_prefix(first) && 2001 + nla_put_string(skb, NFTA_HOOK_DEV, first->ifname)) 2015 2002 goto nla_put_failure; 2016 2003 } 2017 2004 nla_nest_end(skb, nest); ··· 2321 2310 } 2322 2311 2323 2312 static struct nft_hook *nft_netdev_hook_alloc(struct net *net, 2324 - const struct nlattr *attr) 2313 + const struct nlattr *attr, 2314 + bool prefix) 2325 2315 { 2326 2316 struct nf_hook_ops *ops; 2327 2317 struct net_device *dev; ··· 2339 2327 if (err < 0) 2340 2328 goto err_hook_free; 2341 2329 2342 - hook->ifnamelen = nla_len(attr); 2330 + /* include the terminating NUL-char when comparing non-prefixes */ 2331 + hook->ifnamelen = strlen(hook->ifname) + !prefix; 2343 2332 2344 2333 /* nf_tables_netdev_event() is called under rtnl_mutex, this is 2345 2334 * indirectly serializing all the other holders of the commit_mutex with ··· 2387 2374 struct nft_hook *hook, *next; 2388 2375 const struct nlattr *tmp; 2389 2376 int rem, n = 0, err; 2377 + bool prefix; 2390 2378 2391 2379 
nla_for_each_nested(tmp, attr, rem) { 2392 - if (nla_type(tmp) != NFTA_DEVICE_NAME) { 2380 + switch (nla_type(tmp)) { 2381 + case NFTA_DEVICE_NAME: 2382 + prefix = false; 2383 + break; 2384 + case NFTA_DEVICE_PREFIX: 2385 + prefix = true; 2386 + break; 2387 + default: 2393 2388 err = -EINVAL; 2394 2389 goto err_hook; 2395 2390 } 2396 2391 2397 - hook = nft_netdev_hook_alloc(net, tmp); 2392 + hook = nft_netdev_hook_alloc(net, tmp, prefix); 2398 2393 if (IS_ERR(hook)) { 2399 2394 NL_SET_BAD_ATTR(extack, tmp); 2400 2395 err = PTR_ERR(hook); ··· 2448 2427 int err; 2449 2428 2450 2429 if (tb[NFTA_HOOK_DEV]) { 2451 - hook = nft_netdev_hook_alloc(net, tb[NFTA_HOOK_DEV]); 2430 + hook = nft_netdev_hook_alloc(net, tb[NFTA_HOOK_DEV], false); 2452 2431 if (IS_ERR(hook)) { 2453 2432 NL_SET_BAD_ATTR(extack, tb[NFTA_HOOK_DEV]); 2454 2433 return PTR_ERR(hook); ··· 9479 9458 9480 9459 list_for_each_entry_rcu(hook, hook_list, list, 9481 9460 lockdep_commit_lock_is_held(net)) { 9482 - if (nla_put(skb, NFTA_DEVICE_NAME, 9483 - hook->ifnamelen, hook->ifname)) 9461 + if (nft_nla_put_hook_dev(skb, hook)) 9484 9462 goto nla_put_failure; 9485 9463 } 9486 9464 nla_nest_end(skb, nest_devs);
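The nf_tables hunk distinguishes exact device names from prefixes purely by the stored length: an exact name's `ifnamelen` includes the terminating NUL, a prefix's does not. A self-contained sketch of that convention, with illustrative struct fields and buffer sizes:

```c
#include <string.h>
#include <stdbool.h>

/* Illustrative hook entry: ifnamelen covers the trailing NUL for exact
 * names but not for prefixes, mirroring the hunk's convention. */
struct hook {
	char ifname[16];
	unsigned char ifnamelen;
};

static void hook_set(struct hook *h, const char *name, bool prefix)
{
	strncpy(h->ifname, name, sizeof(h->ifname) - 1);
	h->ifname[sizeof(h->ifname) - 1] = '\0';
	/* include the terminating NUL-char when comparing non-prefixes */
	h->ifnamelen = (unsigned char)(strlen(h->ifname) + !prefix);
}

static bool hook_is_prefix(const struct hook *h)
{
	return strlen(h->ifname) >= h->ifnamelen;
}

/* One comparison serves both modes: with the NUL counted, matching
 * ifnamelen bytes is an exact match; without it, a pure prefix match. */
static bool hook_matches(const struct hook *h, const char *dev)
{
	return strncmp(dev, h->ifname, h->ifnamelen) == 0;
}
```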
+1 -1
net/netlink/diag.c
··· 168 168 NETLINK_CB(cb->skb).portid, 169 169 cb->nlh->nlmsg_seq, 170 170 NLM_F_MULTI, 171 - __sock_i_ino(sk)) < 0) { 171 + sock_i_ino(sk)) < 0) { 172 172 ret = 1; 173 173 break; 174 174 }
-2
net/smc/smc_clc.c
··· 426 426 { 427 427 struct smc_clc_msg_hdr *hdr = &dclc->hdr; 428 428 429 - if (hdr->typev1 != SMC_TYPE_R && hdr->typev1 != SMC_TYPE_D) 430 - return false; 431 429 if (hdr->version == SMC_V1) { 432 430 if (ntohs(hdr->length) != sizeof(struct smc_clc_msg_decline)) 433 431 return false;
+3
net/smc/smc_ib.c
··· 742 742 unsigned int i; 743 743 bool ret = false; 744 744 745 + if (!lnk->smcibdev->ibdev->dma_device) 746 + return ret; 747 + 745 748 /* for now there is just one DMA address */ 746 749 for_each_sg(buf_slot->sgt[lnk->link_idx].sgl, sg, 747 750 buf_slot->sgt[lnk->link_idx].nents, i) {
+2 -1
net/wireless/scan.c
··· 1916 1916 */ 1917 1917 1918 1918 f = rcu_access_pointer(new->pub.beacon_ies); 1919 - kfree_rcu((struct cfg80211_bss_ies *)f, rcu_head); 1919 + if (!new->pub.hidden_beacon_bss) 1920 + kfree_rcu((struct cfg80211_bss_ies *)f, rcu_head); 1920 1921 return false; 1921 1922 } 1922 1923
+4 -1
net/wireless/sme.c
··· 900 900 if (!wdev->u.client.ssid_len) { 901 901 rcu_read_lock(); 902 902 for_each_valid_link(cr, link) { 903 + u32 ssid_len; 904 + 903 905 ssid = ieee80211_bss_get_elem(cr->links[link].bss, 904 906 WLAN_EID_SSID); 905 907 906 908 if (!ssid || !ssid->datalen) 907 909 continue; 908 910 909 - memcpy(wdev->u.client.ssid, ssid->data, ssid->datalen); 911 + ssid_len = min(ssid->datalen, IEEE80211_MAX_SSID_LEN); 912 + memcpy(wdev->u.client.ssid, ssid->data, ssid_len); 910 913 wdev->u.client.ssid_len = ssid->datalen; 911 914 break; 912 915 }
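The sme.c fix above clamps an attacker-controlled element length to the destination buffer before the `memcpy()`. A standalone sketch of that bounded-copy pattern; note this sketch also stores the clamped length, and the constant stands in for `IEEE80211_MAX_SSID_LEN`:

```c
#include <string.h>

#define MAX_SSID_LEN 32		/* stands in for IEEE80211_MAX_SSID_LEN */

struct client {
	unsigned char ssid[MAX_SSID_LEN];
	unsigned int ssid_len;
};

/*
 * Copy an SSID reported by a peer into a fixed-size buffer. The
 * element length comes off the air, so it is clamped to the buffer
 * size before the memcpy, as in the hunk.
 */
static void set_ssid(struct client *c, const unsigned char *data,
		     unsigned int datalen)
{
	unsigned int n = datalen < MAX_SSID_LEN ? datalen : MAX_SSID_LEN;

	memcpy(c->ssid, data, n);
	c->ssid_len = n;
}
```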
+10 -5
rust/kernel/lib.rs
··· 296 296 297 297 /// Gets the C string file name of a [`Location`]. 298 298 /// 299 - /// If `file_with_nul()` is not available, returns a string that warns about it. 299 + /// If `Location::file_as_c_str()` is not available, returns a string that warns about it. 300 300 /// 301 301 /// [`Location`]: core::panic::Location 302 302 /// ··· 310 310 /// let caller = core::panic::Location::caller(); 311 311 /// 312 312 /// // Output: 313 - /// // - A path like "rust/kernel/example.rs" if file_with_nul() is available. 314 - /// // - "<Location::file_with_nul() not supported>" otherwise. 313 + /// // - A path like "rust/kernel/example.rs" if `file_as_c_str()` is available. 314 + /// // - "<Location::file_as_c_str() not supported>" otherwise. 315 315 /// let caller_file = file_from_location(caller); 316 316 /// 317 317 /// // Prints out the message with caller's file name. ··· 326 326 /// ``` 327 327 #[inline] 328 328 pub fn file_from_location<'a>(loc: &'a core::panic::Location<'a>) -> &'a core::ffi::CStr { 329 - #[cfg(CONFIG_RUSTC_HAS_FILE_WITH_NUL)] 329 + #[cfg(CONFIG_RUSTC_HAS_FILE_AS_C_STR)] 330 + { 331 + loc.file_as_c_str() 332 + } 333 + 334 + #[cfg(all(CONFIG_RUSTC_HAS_FILE_WITH_NUL, not(CONFIG_RUSTC_HAS_FILE_AS_C_STR)))] 330 335 { 331 336 loc.file_with_nul() 332 337 } ··· 339 334 #[cfg(not(CONFIG_RUSTC_HAS_FILE_WITH_NUL))] 340 335 { 341 336 let _ = loc; 342 - c"<Location::file_with_nul() not supported>" 337 + c"<Location::file_as_c_str() not supported>" 343 338 } 344 339 }
+1
rust/kernel/mm/virt.rs
··· 209 209 /// 210 210 /// For the duration of 'a, the referenced vma must be undergoing initialization in an 211 211 /// `f_ops->mmap()` hook. 212 + #[repr(transparent)] 212 213 pub struct VmaNew { 213 214 vma: VmaRef, 214 215 }
+8 -4
scripts/Makefile.kasan
··· 86 86 hwasan-use-short-granules=0 \ 87 87 hwasan-inline-all-checks=0 88 88 89 - # Instrument memcpy/memset/memmove calls by using instrumented __hwasan_mem*(). 90 - ifeq ($(call clang-min-version, 150000)$(call gcc-min-version, 130000),y) 91 - kasan_params += hwasan-kernel-mem-intrinsic-prefix=1 92 - endif 89 + # Instrument memcpy/memset/memmove calls by using instrumented __(hw)asan_mem*(). 90 + ifdef CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX 91 + ifdef CONFIG_CC_IS_GCC 92 + kasan_params += asan-kernel-mem-intrinsic-prefix=1 93 + else 94 + kasan_params += hwasan-kernel-mem-intrinsic-prefix=1 95 + endif 96 + endif # CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX 93 97 94 98 endif # CONFIG_KASAN_SW_TAGS 95 99
+10 -2
scripts/generate_rust_target.rs
··· 225 225 ts.push("features", features); 226 226 ts.push("llvm-target", "x86_64-linux-gnu"); 227 227 ts.push("supported-sanitizers", ["kcfi", "kernel-address"]); 228 - ts.push("target-pointer-width", "64"); 228 + if cfg.rustc_version_atleast(1, 91, 0) { 229 + ts.push("target-pointer-width", 64); 230 + } else { 231 + ts.push("target-pointer-width", "64"); 232 + } 229 233 } else if cfg.has("X86_32") { 230 234 // This only works on UML, as i386 otherwise needs regparm support in rustc 231 235 if !cfg.has("UML") { ··· 249 245 } 250 246 ts.push("features", features); 251 247 ts.push("llvm-target", "i386-unknown-linux-gnu"); 252 - ts.push("target-pointer-width", "32"); 248 + if cfg.rustc_version_atleast(1, 91, 0) { 249 + ts.push("target-pointer-width", 32); 250 + } else { 251 + ts.push("target-pointer-width", "32"); 252 + } 253 253 } else if cfg.has("LOONGARCH") { 254 254 panic!("loongarch uses the builtin rustc loongarch64-unknown-none-softfloat target"); 255 255 } else {
+1 -1
sound/firewire/motu/motu-hwdep.c
··· 111 111 events = 0; 112 112 spin_unlock_irq(&motu->lock); 113 113 114 - return events | EPOLLOUT; 114 + return events; 115 115 } 116 116 117 117 static int hwdep_get_info(struct snd_motu *motu, void __user *arg)
+1
sound/hda/codecs/hdmi/hdmi.c
··· 1582 1582 static const struct snd_pci_quirk force_connect_list[] = { 1583 1583 SND_PCI_QUIRK(0x103c, 0x83e2, "HP EliteDesk 800 G4", 1), 1584 1584 SND_PCI_QUIRK(0x103c, 0x83ef, "HP MP9 G4 Retail System AMS", 1), 1585 + SND_PCI_QUIRK(0x103c, 0x845a, "HP EliteDesk 800 G4 DM 65W", 1), 1585 1586 SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1), 1586 1587 SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1), 1587 1588 SND_PCI_QUIRK(0x103c, 0x8711, "HP", 1),
+17
sound/hda/codecs/hdmi/nvhdmi.c
··· 198 198 HDA_CODEC_ID_MODEL(0x10de0098, "GPU 98 HDMI/DP", MODEL_GENERIC), 199 199 HDA_CODEC_ID_MODEL(0x10de0099, "GPU 99 HDMI/DP", MODEL_GENERIC), 200 200 HDA_CODEC_ID_MODEL(0x10de009a, "GPU 9a HDMI/DP", MODEL_GENERIC), 201 + HDA_CODEC_ID_MODEL(0x10de009b, "GPU 9b HDMI/DP", MODEL_GENERIC), 202 + HDA_CODEC_ID_MODEL(0x10de009c, "GPU 9c HDMI/DP", MODEL_GENERIC), 201 203 HDA_CODEC_ID_MODEL(0x10de009d, "GPU 9d HDMI/DP", MODEL_GENERIC), 202 204 HDA_CODEC_ID_MODEL(0x10de009e, "GPU 9e HDMI/DP", MODEL_GENERIC), 203 205 HDA_CODEC_ID_MODEL(0x10de009f, "GPU 9f HDMI/DP", MODEL_GENERIC), 204 206 HDA_CODEC_ID_MODEL(0x10de00a0, "GPU a0 HDMI/DP", MODEL_GENERIC), 207 + HDA_CODEC_ID_MODEL(0x10de00a1, "GPU a1 HDMI/DP", MODEL_GENERIC), 205 208 HDA_CODEC_ID_MODEL(0x10de00a3, "GPU a3 HDMI/DP", MODEL_GENERIC), 206 209 HDA_CODEC_ID_MODEL(0x10de00a4, "GPU a4 HDMI/DP", MODEL_GENERIC), 207 210 HDA_CODEC_ID_MODEL(0x10de00a5, "GPU a5 HDMI/DP", MODEL_GENERIC), 208 211 HDA_CODEC_ID_MODEL(0x10de00a6, "GPU a6 HDMI/DP", MODEL_GENERIC), 209 212 HDA_CODEC_ID_MODEL(0x10de00a7, "GPU a7 HDMI/DP", MODEL_GENERIC), 213 + HDA_CODEC_ID_MODEL(0x10de00a8, "GPU a8 HDMI/DP", MODEL_GENERIC), 214 + HDA_CODEC_ID_MODEL(0x10de00a9, "GPU a9 HDMI/DP", MODEL_GENERIC), 215 + HDA_CODEC_ID_MODEL(0x10de00aa, "GPU aa HDMI/DP", MODEL_GENERIC), 216 + HDA_CODEC_ID_MODEL(0x10de00ab, "GPU ab HDMI/DP", MODEL_GENERIC), 217 + HDA_CODEC_ID_MODEL(0x10de00ad, "GPU ad HDMI/DP", MODEL_GENERIC), 218 + HDA_CODEC_ID_MODEL(0x10de00ae, "GPU ae HDMI/DP", MODEL_GENERIC), 219 + HDA_CODEC_ID_MODEL(0x10de00af, "GPU af HDMI/DP", MODEL_GENERIC), 220 + HDA_CODEC_ID_MODEL(0x10de00b0, "GPU b0 HDMI/DP", MODEL_GENERIC), 221 + HDA_CODEC_ID_MODEL(0x10de00b1, "GPU b1 HDMI/DP", MODEL_GENERIC), 222 + HDA_CODEC_ID_MODEL(0x10de00c0, "GPU c0 HDMI/DP", MODEL_GENERIC), 223 + HDA_CODEC_ID_MODEL(0x10de00c1, "GPU c1 HDMI/DP", MODEL_GENERIC), 224 + HDA_CODEC_ID_MODEL(0x10de00c3, "GPU c3 HDMI/DP", MODEL_GENERIC), 225 + HDA_CODEC_ID_MODEL(0x10de00c4, "GPU c4 HDMI/DP", 
MODEL_GENERIC), 226 + HDA_CODEC_ID_MODEL(0x10de00c5, "GPU c5 HDMI/DP", MODEL_GENERIC), 210 227 {} /* terminator */ 211 228 }; 212 229 MODULE_DEVICE_TABLE(hdaudio, snd_hda_id_nvhdmi);
+2
sound/hda/codecs/hdmi/tegrahdmi.c
··· 299 299 HDA_CODEC_ID_MODEL(0x10de002f, "Tegra194 HDMI/DP2", MODEL_TEGRA), 300 300 HDA_CODEC_ID_MODEL(0x10de0030, "Tegra194 HDMI/DP3", MODEL_TEGRA), 301 301 HDA_CODEC_ID_MODEL(0x10de0031, "Tegra234 HDMI/DP", MODEL_TEGRA234), 302 + HDA_CODEC_ID_MODEL(0x10de0033, "SoC 33 HDMI/DP", MODEL_TEGRA234), 302 303 HDA_CODEC_ID_MODEL(0x10de0034, "Tegra264 HDMI/DP", MODEL_TEGRA234), 304 + HDA_CODEC_ID_MODEL(0x10de0035, "SoC 35 HDMI/DP", MODEL_TEGRA234), 303 305 {} /* terminator */ 304 306 }; 305 307 MODULE_DEVICE_TABLE(hdaudio, snd_hda_id_tegrahdmi);
+2
sound/hda/codecs/realtek/alc269.c
··· 7147 7147 SND_PCI_QUIRK(0x1d05, 0x121b, "TongFang GMxAGxx", ALC269_FIXUP_NO_SHUTUP), 7148 7148 SND_PCI_QUIRK(0x1d05, 0x1387, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC), 7149 7149 SND_PCI_QUIRK(0x1d05, 0x1409, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC), 7150 + SND_PCI_QUIRK(0x1d05, 0x300f, "TongFang X6AR5xxY", ALC2XX_FIXUP_HEADSET_MIC), 7151 + SND_PCI_QUIRK(0x1d05, 0x3019, "TongFang X6FR5xxY", ALC2XX_FIXUP_HEADSET_MIC), 7150 7152 SND_PCI_QUIRK(0x1d17, 0x3288, "Haier Boyue G42", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS), 7151 7153 SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC), 7152 7154 SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+6 -3
sound/hda/codecs/side-codecs/tas2781_hda_i2c.c
··· 300 300 { 301 301 efi_guid_t efi_guid = tasdev_fct_efi_guid[LENOVO]; 302 302 char *vars[TASDEV_CALIB_N] = { 303 - "R0_%d", "InvR0_%d", "R0_Low_%d", "Power_%d", "TLim_%d" 303 + "R0_%d", "R0_Low_%d", "InvR0_%d", "Power_%d", "TLim_%d" 304 304 }; 305 305 efi_char16_t efi_name[TAS2563_CAL_VAR_NAME_MAX]; 306 306 unsigned long max_size = TAS2563_CAL_DATA_SIZE; ··· 310 310 struct cali_reg *r = &cd->cali_reg_array; 311 311 unsigned int offset = 0; 312 312 unsigned char *data; 313 + __be32 bedata; 313 314 efi_status_t status; 314 315 unsigned int attr; 315 316 int ret, i, j, k; ··· 328 327 data[offset] = i; 329 328 offset++; 330 329 for (j = 0; j < TASDEV_CALIB_N; ++j) { 331 - ret = snprintf(var8, sizeof(var8), vars[j], i); 332 - 330 + /* EFI name for calibration started with 1, not 0 */ 331 + ret = snprintf(var8, sizeof(var8), vars[j], i + 1); 333 332 if (ret < 0 || ret >= sizeof(var8) - 1) { 334 333 dev_err(p->dev, "%s: Read %s failed\n", 335 334 __func__, var8); ··· 352 351 i, j, status); 353 352 return -EINVAL; 354 353 } 354 + bedata = cpu_to_be32(*(uint32_t *)&data[offset]); 355 + memcpy(&data[offset], &bedata, sizeof(bedata)); 355 356 offset += TAS2563_CAL_DATA_SIZE; 356 357 } 357 358 }
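The tas2781 hunk rewrites each 32-bit calibration word in big-endian byte order via the kernel's `cpu_to_be32()` plus a `memcpy()`. A portable userspace sketch of the same in-place fixup, using shifts so the result is correct on any host endianness:

```c
#include <stdint.h>
#include <string.h>

/*
 * Read a native-endian 32-bit word from the buffer and store it back
 * in big-endian byte order, mirroring the hunk's cpu_to_be32()/memcpy()
 * pair. On a big-endian host this is a no-op rewrite.
 */
static void fixup_be32(uint8_t *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));	/* native-endian load */
	p[0] = (uint8_t)(v >> 24);	/* most significant byte first */
	p[1] = (uint8_t)(v >> 16);
	p[2] = (uint8_t)(v >> 8);
	p[3] = (uint8_t)v;
}
```

The `memcpy()` load avoids the aliasing and alignment pitfalls of casting `&data[offset]` to `uint32_t *` directly, which is also why the kernel hunk copies through a temporary.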
+23 -3
sound/hda/core/intel-dsp-config.c
··· 116 116 .flags = FLAG_SST, 117 117 .device = PCI_DEVICE_ID_INTEL_HDA_FCL, 118 118 }, 119 + #else /* AVS disabled; force to legacy as SOF doesn't work for SKL or KBL */ 120 + { 121 + .device = PCI_DEVICE_ID_INTEL_HDA_SKL_LP, 122 + }, 123 + { 124 + .device = PCI_DEVICE_ID_INTEL_HDA_KBL_LP, 125 + }, 119 126 #endif 120 127 #if IS_ENABLED(CONFIG_SND_SOC_SOF_APOLLOLAKE) 121 128 { ··· 174 167 175 168 /* 176 169 * CoffeeLake, CannonLake, CometLake, IceLake, TigerLake, AlderLake, 177 - * RaptorLake use legacy HDAudio driver except for Google Chromebooks 178 - * and when DMICs are present. Two cases are required since Coreboot 179 - * does not expose NHLT tables. 170 + * RaptorLake, MeteorLake use legacy HDAudio driver except for Google 171 + * Chromebooks and when DMICs are present. Two cases are required since 172 + * Coreboot does not expose NHLT tables. 180 173 * 181 174 * When the Chromebook quirk is not present, it's based on information 182 175 * that no such device exists. When the quirk is present, it could be ··· 523 516 /* Meteor Lake */ 524 517 #if IS_ENABLED(CONFIG_SND_SOC_SOF_METEORLAKE) 525 518 /* Meteorlake-P */ 519 + { 520 + .flags = FLAG_SOF, 521 + .device = PCI_DEVICE_ID_INTEL_HDA_MTL, 522 + .dmi_table = (const struct dmi_system_id []) { 523 + { 524 + .ident = "Google Chromebooks", 525 + .matches = { 526 + DMI_MATCH(DMI_SYS_VENDOR, "Google"), 527 + } 528 + }, 529 + {} 530 + } 531 + }, 526 532 { 527 533 .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, 528 534 .device = PCI_DEVICE_ID_INTEL_HDA_MTL,
+1 -1
sound/soc/codecs/idt821034.c
··· 1067 1067 1068 1068 ret = idt821034_set_slic_conf(idt821034, ch, slic_conf); 1069 1069 if (ret) { 1070 - dev_err(&idt821034->spi->dev, "dir in gpio %d (%u, 0x%x) failed (%d)\n", 1070 + dev_err(&idt821034->spi->dev, "dir out gpio %d (%u, 0x%x) failed (%d)\n", 1071 1071 offset, ch, mask, ret); 1072 1072 } 1073 1073
+1 -1
sound/soc/renesas/rcar/core.c
··· 597 597 598 598 dev_dbg(dev, "%s is connected to io (%s)\n", 599 599 rsnd_mod_name(mod), 600 - snd_pcm_direction_name(io->substream->stream)); 600 + rsnd_io_is_play(io) ? "Playback" : "Capture"); 601 601 602 602 return 0; 603 603 }
+15 -10
sound/soc/soc-core.c
··· 369 369 *snd_soc_lookup_component_nolocked(struct device *dev, const char *driver_name) 370 370 { 371 371 struct snd_soc_component *component; 372 - struct snd_soc_component *found_component; 373 372 374 - found_component = NULL; 375 373 for_each_component(component) { 376 - if ((dev == component->dev) && 377 - (!driver_name || 378 - (driver_name == component->driver->name) || 379 - (strcmp(component->driver->name, driver_name) == 0))) { 380 - found_component = component; 381 - break; 382 - } 374 + if (dev != component->dev) 375 + continue; 376 + 377 + if (!driver_name) 378 + return component; 379 + 380 + if (!component->driver->name) 381 + continue; 382 + 383 + if (component->driver->name == driver_name) 384 + return component; 385 + 386 + if (strcmp(component->driver->name, driver_name) == 0) 387 + return component; 383 388 } 384 389 385 - return found_component; 390 + return NULL; 386 391 } 387 392 EXPORT_SYMBOL_GPL(snd_soc_lookup_component_nolocked); 388 393
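The soc-core hunk flattens a nested match condition into guard clauses, which also lets it skip components whose driver has no name before calling `strcmp()`. A standalone sketch of that early-continue lookup style, with an illustrative component table:

```c
#include <stddef.h>
#include <string.h>

struct component {
	const void *dev;
	const char *driver_name;	/* may be NULL */
};

/*
 * Guard-clause lookup as in the hunk: each disqualifying test
 * continues to the next entry, and the first survivor is returned.
 */
static const struct component *
lookup_component(const struct component *tbl, size_t n,
		 const void *dev, const char *driver_name)
{
	for (size_t i = 0; i < n; i++) {
		const struct component *c = &tbl[i];

		if (dev != c->dev)
			continue;
		if (!driver_name)
			return c;	/* any driver on this device */
		if (!c->driver_name)
			continue;	/* avoid strcmp() on NULL */
		if (c->driver_name == driver_name ||
		    strcmp(c->driver_name, driver_name) == 0)
			return c;
	}
	return NULL;
}
```

Compared with the original accumulate-then-break form, each condition reads in isolation and there is no `found_component` temporary to track.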
+1
sound/soc/sof/intel/ptl.c
··· 143 143 .read_sdw_lcount = hda_sdw_check_lcount_ext, 144 144 .check_sdw_irq = lnl_dsp_check_sdw_irq, 145 145 .check_sdw_wakeen_irq = lnl_sdw_check_wakeen_irq, 146 + .sdw_process_wakeen = hda_sdw_process_wakeen_common, 146 147 .check_ipc_irq = mtl_dsp_check_ipc_irq, 147 148 .cl_init = mtl_dsp_cl_init, 148 149 .power_down_dsp = mtl_power_down_dsp,
+8 -4
sound/usb/format.c
··· 327 327 max_rate = combine_quad(&fmt[6]); 328 328 329 329 switch (max_rate) { 330 + case 192000: 331 + if (rate == 176400 || rate == 192000) 332 + return true; 333 + fallthrough; 334 + case 96000: 335 + if (rate == 88200 || rate == 96000) 336 + return true; 337 + fallthrough; 330 338 case 48000: 331 339 return (rate == 44100 || rate == 48000); 332 - case 96000: 333 - return (rate == 88200 || rate == 96000); 334 - case 192000: 335 - return (rate == 176400 || rate == 192000); 336 340 default: 337 341 usb_audio_info(chip, 338 342 "%u:%d : unexpected max rate: %u\n",
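The format.c hunk restructures the rate check so that a device advertising a 192 kHz maximum also accepts every lower standard rate, via deliberate switch fallthrough. A self-contained sketch of that cascade; unknown max rates simply return false here, whereas the driver logs them and may decide differently:

```c
#include <stdbool.h>

/*
 * Cascading rate check in the style of the format.c hunk: each case
 * accepts its own pair of rates and then falls through to the next
 * lower tier, so a higher advertised max implies all lower rates.
 */
static bool rate_supported(unsigned int max_rate, unsigned int rate)
{
	switch (max_rate) {
	case 192000:
		if (rate == 176400 || rate == 192000)
			return true;
		/* fall through */
	case 96000:
		if (rate == 88200 || rate == 96000)
			return true;
		/* fall through */
	case 48000:
		return rate == 44100 || rate == 48000;
	default:
		return false;	/* unexpected max rate */
	}
}
```

The original three independent cases rejected, say, 44100 on a 192 kHz-max device; the fallthrough form fixes that without duplicating the rate lists.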
+3 -5
sound/usb/mixer_quirks.c
··· 4608 4608 if (unitid == 7 && cval->control == UAC_FU_VOLUME) 4609 4609 snd_dragonfly_quirk_db_scale(mixer, cval, kctl); 4610 4610 break; 4611 + } 4612 + 4611 4613 /* lowest playback value is muted on some devices */ 4612 - case USB_ID(0x0d8c, 0x000c): /* C-Media */ 4613 - case USB_ID(0x0d8c, 0x0014): /* C-Media */ 4614 - case USB_ID(0x19f7, 0x0003): /* RODE NT-USB */ 4614 + if (mixer->chip->quirk_flags & QUIRK_FLAG_MIXER_MIN_MUTE) 4615 4615 if (strstr(kctl->id.name, "Playback")) 4616 4616 cval->min_mute = 1; 4617 - break; 4618 - } 4619 4617 4620 4618 /* ALSA-ify some Plantronics headset control names */ 4621 4619 if (USB_ID_VENDOR(mixer->chip->usb_id) == 0x047f &&
+20 -2
sound/usb/quirks.c
··· 2199 2199 QUIRK_FLAG_SET_IFACE_FIRST), 2200 2200 DEVICE_FLG(0x0556, 0x0014, /* Phoenix Audio TMX320VC */ 2201 2201 QUIRK_FLAG_GET_SAMPLE_RATE), 2202 + DEVICE_FLG(0x0572, 0x1b08, /* Conexant Systems (Rockwell), Inc. */ 2203 + QUIRK_FLAG_MIXER_MIN_MUTE), 2204 + DEVICE_FLG(0x0572, 0x1b09, /* Conexant Systems (Rockwell), Inc. */ 2205 + QUIRK_FLAG_MIXER_MIN_MUTE), 2202 2206 DEVICE_FLG(0x05a3, 0x9420, /* ELP HD USB Camera */ 2203 2207 QUIRK_FLAG_GET_SAMPLE_RATE), 2204 2208 DEVICE_FLG(0x05a7, 0x1020, /* Bose Companion 5 */ ··· 2245 2241 QUIRK_FLAG_CTL_MSG_DELAY_1M), 2246 2242 DEVICE_FLG(0x0b0e, 0x0349, /* Jabra 550a */ 2247 2243 QUIRK_FLAG_CTL_MSG_DELAY_1M), 2244 + DEVICE_FLG(0x0bda, 0x498a, /* Realtek Semiconductor Corp. */ 2245 + QUIRK_FLAG_MIXER_MIN_MUTE), 2248 2246 DEVICE_FLG(0x0c45, 0x6340, /* Sonix HD USB Camera */ 2249 2247 QUIRK_FLAG_GET_SAMPLE_RATE), 2250 2248 DEVICE_FLG(0x0c45, 0x636b, /* Microdia JP001 USB Camera */ 2251 2249 QUIRK_FLAG_GET_SAMPLE_RATE), 2252 - DEVICE_FLG(0x0d8c, 0x0014, /* USB Audio Device */ 2253 - QUIRK_FLAG_CTL_MSG_DELAY_1M), 2250 + DEVICE_FLG(0x0d8c, 0x000c, /* C-Media */ 2251 + QUIRK_FLAG_MIXER_MIN_MUTE), 2252 + DEVICE_FLG(0x0d8c, 0x0014, /* C-Media */ 2253 + QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIXER_MIN_MUTE), 2254 2254 DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */ 2255 2255 QUIRK_FLAG_FIXED_RATE), 2256 2256 DEVICE_FLG(0x0ecb, 0x2069, /* JBL Quantum810 Wireless */ ··· 2263 2255 QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER), 2264 2256 DEVICE_FLG(0x1101, 0x0003, /* Audioengine D1 */ 2265 2257 QUIRK_FLAG_GET_SAMPLE_RATE), 2258 + DEVICE_FLG(0x12d1, 0x3a07, /* Huawei Technologies Co., Ltd. 
*/ 2259 + QUIRK_FLAG_MIXER_MIN_MUTE), 2266 2260 DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */ 2267 2261 QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16), 2268 2262 DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */ ··· 2303 2293 QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY), 2304 2294 DEVICE_FLG(0x1901, 0x0191, /* GE B850V3 CP2114 audio interface */ 2305 2295 QUIRK_FLAG_GET_SAMPLE_RATE), 2296 + DEVICE_FLG(0x19f7, 0x0003, /* RODE NT-USB */ 2297 + QUIRK_FLAG_MIXER_MIN_MUTE), 2306 2298 DEVICE_FLG(0x19f7, 0x0035, /* RODE NT-USB+ */ 2307 2299 QUIRK_FLAG_GET_SAMPLE_RATE), 2308 2300 DEVICE_FLG(0x1bcf, 0x2281, /* HD Webcam */ ··· 2355 2343 QUIRK_FLAG_IGNORE_CTL_ERROR), 2356 2344 DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */ 2357 2345 QUIRK_FLAG_GET_SAMPLE_RATE), 2346 + DEVICE_FLG(0x2a70, 0x1881, /* OnePlus Technology (Shenzhen) Co., Ltd. BE02T */ 2347 + QUIRK_FLAG_MIXER_MIN_MUTE), 2358 2348 DEVICE_FLG(0x2b53, 0x0023, /* Fiero SC-01 (firmware v1.0.0 @ 48 kHz) */ 2359 2349 QUIRK_FLAG_GENERIC_IMPLICIT_FB), 2360 2350 DEVICE_FLG(0x2b53, 0x0024, /* Fiero SC-01 (firmware v1.0.0 @ 96 kHz) */ ··· 2367 2353 QUIRK_FLAG_CTL_MSG_DELAY_1M), 2368 2354 DEVICE_FLG(0x2d95, 0x8021, /* VIVO USB-C-XE710 HEADSET */ 2369 2355 QUIRK_FLAG_CTL_MSG_DELAY_1M), 2356 + DEVICE_FLG(0x2d99, 0x0026, /* HECATE G2 GAMING HEADSET */ 2357 + QUIRK_FLAG_MIXER_MIN_MUTE), 2370 2358 DEVICE_FLG(0x2fc6, 0xf0b7, /* iBasso DC07 Pro */ 2371 2359 QUIRK_FLAG_CTL_MSG_DELAY_1M), 2372 2360 DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */ 2373 2361 QUIRK_FLAG_IGNORE_CTL_ERROR), 2362 + DEVICE_FLG(0x339b, 0x3a07, /* Synaptics HONOR USB-C HEADSET */ 2363 + QUIRK_FLAG_MIXER_MIN_MUTE), 2374 2364 DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */ 2375 2365 QUIRK_FLAG_GET_SAMPLE_RATE), 2376 2366 DEVICE_FLG(0x534d, 0x0021, /* MacroSilicon MS2100/MS2106 */
+4
sound/usb/usbaudio.h
··· 196 196 * for the given endpoint. 197 197 * QUIRK_FLAG_MIC_RES_16 and QUIRK_FLAG_MIC_RES_384 198 198 * Set the fixed resolution for Mic Capture Volume (mostly for webcams) 199 + * QUIRK_FLAG_MIXER_MIN_MUTE 200 + * Set minimum volume control value as mute for devices where the lowest 201 + * playback value represents muted state instead of minimum audible volume 199 202 */ 200 203 201 204 #define QUIRK_FLAG_GET_SAMPLE_RATE (1U << 0) ··· 225 222 #define QUIRK_FLAG_FIXED_RATE (1U << 21) 226 223 #define QUIRK_FLAG_MIC_RES_16 (1U << 22) 227 224 #define QUIRK_FLAG_MIC_RES_384 (1U << 23) 225 + #define QUIRK_FLAG_MIXER_MIN_MUTE (1U << 24) 228 226 229 227 #endif /* __USBAUDIO_H */
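The usbaudio.h hunk adds `QUIRK_FLAG_MIXER_MIN_MUTE` as bit 24 of the per-device quirk mask. A small sketch of that bitflag pattern: device table entries OR flags together, and code paths test them with one AND. `QUIRK_FLAG_GET_SAMPLE_RATE` and bit 24 match the header; the middle flag's value is illustrative:

```c
#include <stdbool.h>

/* Each quirk is one bit in a 32-bit mask, as in the header. */
#define QUIRK_FLAG_GET_SAMPLE_RATE	(1U << 0)
#define QUIRK_FLAG_CTL_MSG_DELAY_1M	(1U << 1)	/* illustrative value */
#define QUIRK_FLAG_MIXER_MIN_MUTE	(1U << 24)

struct chip {
	unsigned int quirk_flags;
};

/* Replaces per-device ID switch cases with a single flag test, as the
 * mixer_quirks.c hunk does. */
static bool mixer_min_is_mute(const struct chip *chip)
{
	return (chip->quirk_flags & QUIRK_FLAG_MIXER_MIN_MUTE) != 0;
}
```

Moving the device list into flag-table entries (as the quirks.c hunk does) means new affected devices are one-line additions instead of new `case` labels.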
+1 -1
tools/gpio/Makefile
··· 77 77 78 78 clean: 79 79 rm -f $(ALL_PROGRAMS) 80 - rm -f $(OUTPUT)include/linux/gpio.h 80 + rm -rf $(OUTPUT)include 81 81 find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete -o -name '\.*.cmd' -delete 82 82 83 83 install: $(ALL_PROGRAMS)
+1 -1
tools/net/ynl/pyynl/ynl_gen_c.py
··· 830 830 'ynl_attr_for_each_nested(attr2, attr) {', 831 831 '\tif (ynl_attr_validate(yarg, attr2))', 832 832 '\t\treturn YNL_PARSE_CB_ERROR;', 833 - f'\t{var}->_count.{self.c_name}++;', 833 + f'\tn_{self.c_name}++;', 834 834 '}'] 835 835 return get_lines, None, local_vars 836 836
+2 -2
tools/perf/tests/pe-file-parsing.c
··· 37 37 size_t idx; 38 38 39 39 scnprintf(filename, PATH_MAX, "%s/pe-file.exe", d); 40 - ret = filename__read_build_id(filename, &bid); 40 + ret = filename__read_build_id(filename, &bid, /*block=*/true); 41 41 TEST_ASSERT_VAL("Failed to read build_id", 42 42 ret == sizeof(expect_build_id)); 43 43 TEST_ASSERT_VAL("Wrong build_id", !memcmp(bid.data, expect_build_id, ··· 49 49 !strcmp(debuglink, expect_debuglink)); 50 50 51 51 scnprintf(debugfile, PATH_MAX, "%s/%s", d, debuglink); 52 - ret = filename__read_build_id(debugfile, &bid); 52 + ret = filename__read_build_id(debugfile, &bid, /*block=*/true); 53 53 TEST_ASSERT_VAL("Failed to read debug file build_id", 54 54 ret == sizeof(expect_build_id)); 55 55 TEST_ASSERT_VAL("Wrong build_id", !memcmp(bid.data, expect_build_id,
+1 -1
tools/perf/tests/shell/test_bpf_metadata.sh
··· 61 61 /perf_version/ { 62 62 if (entry) print $NF; 63 63 } 64 - ' | egrep "$VERS" > /dev/null 64 + ' | grep -qF "$VERS" 65 65 then 66 66 echo "Basic BPF metadata test [Failed invalid output]" 67 67 err=1
+27 -12
tools/perf/util/bpf-event.c
··· 657 657 info_node->info_linear = info_linear; 658 658 info_node->metadata = NULL; 659 659 if (!perf_env__insert_bpf_prog_info(env, info_node)) { 660 - free(info_linear); 660 + /* 661 + * Insert failed, likely because of a duplicate event 662 + * made by the sideband thread. Ignore synthesizing the 663 + * metadata. 664 + */ 661 665 free(info_node); 666 + goto out; 662 667 } 668 + /* info_linear is now owned by info_node and shouldn't be freed below. */ 663 669 info_linear = NULL; 664 670 665 671 /* ··· 833 827 return err; 834 828 } 835 829 836 - static void perf_env__add_bpf_info(struct perf_env *env, u32 id) 830 + static int perf_env__add_bpf_info(struct perf_env *env, u32 id) 837 831 { 838 832 struct bpf_prog_info_node *info_node; 839 833 struct perf_bpil *info_linear; 840 834 struct btf *btf = NULL; 841 835 u64 arrays; 842 836 u32 btf_id; 843 - int fd; 837 + int fd, err = 0; 844 838 845 839 fd = bpf_prog_get_fd_by_id(id); 846 840 if (fd < 0) 847 - return; 841 + return -EINVAL; 848 842 849 843 arrays = 1UL << PERF_BPIL_JITED_KSYMS; 850 844 arrays |= 1UL << PERF_BPIL_JITED_FUNC_LENS; ··· 858 852 info_linear = get_bpf_prog_info_linear(fd, arrays); 859 853 if (IS_ERR_OR_NULL(info_linear)) { 860 854 pr_debug("%s: failed to get BPF program info. 
aborting\n", __func__); 855 + err = PTR_ERR(info_linear); 861 856 goto out; 862 857 } 863 858 ··· 869 862 info_node->info_linear = info_linear; 870 863 info_node->metadata = bpf_metadata_create(&info_linear->info); 871 864 if (!perf_env__insert_bpf_prog_info(env, info_node)) { 865 + pr_debug("%s: duplicate add bpf info request for id %u\n", 866 + __func__, btf_id); 872 867 free(info_linear); 873 868 free(info_node); 869 + goto out; 874 870 } 875 - } else 871 + } else { 876 872 free(info_linear); 873 + err = -ENOMEM; 874 + goto out; 875 + } 877 876 878 877 if (btf_id == 0) 879 878 goto out; 880 879 881 880 btf = btf__load_from_kernel_by_id(btf_id); 882 - if (libbpf_get_error(btf)) { 883 - pr_debug("%s: failed to get BTF of id %u, aborting\n", 884 - __func__, btf_id); 885 - goto out; 881 + if (!btf) { 882 + err = -errno; 883 + pr_debug("%s: failed to get BTF of id %u %d\n", __func__, btf_id, err); 884 + } else { 885 + perf_env__fetch_btf(env, btf_id, btf); 886 886 } 887 - perf_env__fetch_btf(env, btf_id, btf); 888 887 889 888 out: 890 889 btf__free(btf); 891 890 close(fd); 891 + return err; 892 892 } 893 893 894 894 static int bpf_event__sb_cb(union perf_event *event, void *data) 895 895 { 896 896 struct perf_env *env = data; 897 + int ret = 0; 897 898 898 899 if (event->header.type != PERF_RECORD_BPF_EVENT) 899 900 return -1; 900 901 901 902 switch (event->bpf.type) { 902 903 case PERF_BPF_EVENT_PROG_LOAD: 903 - perf_env__add_bpf_info(env, event->bpf.id); 904 + ret = perf_env__add_bpf_info(env, event->bpf.id); 904 905 905 906 case PERF_BPF_EVENT_PROG_UNLOAD: 906 907 /* ··· 922 907 break; 923 908 } 924 909 925 - return 0; 910 + return ret; 926 911 } 927 912 928 913 int evlist__add_bpf_sb_event(struct evlist *evlist, struct perf_env *env)
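The ownership comment added above ("info_linear is now owned by info_node and shouldn't be freed below") reflects a common contract: on successful insertion the table takes ownership of the node and its payload; on a duplicate-key failure the caller still owns what it allocated and frees it itself. A hypothetical miniature of that contract (types and names are illustrative, not from the tree):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* One-entry "table" for the sketch. */
struct node {
	int key;
};

static struct node *slot;

/* On success the table owns n; on duplicate failure the caller keeps
 * ownership and is responsible for freeing n and its payload. */
static bool table_insert(struct node *n)
{
	if (slot && slot->key == n->key)
		return false;	/* duplicate: caller keeps ownership */
	slot = n;
	return true;		/* table now owns n */
}
```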
+39 -22
tools/perf/util/bpf-utils.c
··· 20 20 */ 21 21 }; 22 22 23 - static struct bpil_array_desc bpil_array_desc[] = { 23 + static const struct bpil_array_desc bpil_array_desc[] = { 24 24 [PERF_BPIL_JITED_INSNS] = { 25 25 offsetof(struct bpf_prog_info, jited_prog_insns), 26 26 offsetof(struct bpf_prog_info, jited_prog_len), ··· 115 115 __u32 info_len = sizeof(info); 116 116 __u32 data_len = 0; 117 117 int i, err; 118 - void *ptr; 118 + __u8 *ptr; 119 119 120 120 if (arrays >> PERF_BPIL_LAST_ARRAY) 121 121 return ERR_PTR(-EINVAL); ··· 126 126 pr_debug("can't get prog info: %s", strerror(errno)); 127 127 return ERR_PTR(-EFAULT); 128 128 } 129 + if (info.type >= __MAX_BPF_PROG_TYPE) 130 + pr_debug("%s:%d: unexpected program type %u\n", __func__, __LINE__, info.type); 129 131 130 132 /* step 2: calculate total size of all arrays */ 131 133 for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) { 134 + const struct bpil_array_desc *desc = &bpil_array_desc[i]; 132 135 bool include_array = (arrays & (1UL << i)) > 0; 133 - struct bpil_array_desc *desc; 134 136 __u32 count, size; 135 - 136 - desc = bpil_array_desc + i; 137 137 138 138 /* kernel is too old to support this field */ 139 139 if (info_len < desc->array_offset + sizeof(__u32) || ··· 163 163 ptr = info_linear->data; 164 164 165 165 for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) { 166 - struct bpil_array_desc *desc; 166 + const struct bpil_array_desc *desc = &bpil_array_desc[i]; 167 167 __u32 count, size; 168 168 169 169 if ((arrays & (1UL << i)) == 0) 170 170 continue; 171 171 172 - desc = bpil_array_desc + i; 173 172 count = bpf_prog_info_read_offset_u32(&info, desc->count_offset); 174 173 size = bpf_prog_info_read_offset_u32(&info, desc->size_offset); 175 174 bpf_prog_info_set_offset_u32(&info_linear->info, 176 175 desc->count_offset, count); 177 176 bpf_prog_info_set_offset_u32(&info_linear->info, 178 177 desc->size_offset, size); 178 + assert(ptr >= info_linear->data); 179 + assert(ptr < &info_linear->data[data_len]); 
179 180 bpf_prog_info_set_offset_u64(&info_linear->info, 180 181 desc->array_offset, 181 182 ptr_to_u64(ptr)); ··· 190 189 free(info_linear); 191 190 return ERR_PTR(-EFAULT); 192 191 } 192 + if (info_linear->info.type >= __MAX_BPF_PROG_TYPE) { 193 + pr_debug("%s:%d: unexpected program type %u\n", 194 + __func__, __LINE__, info_linear->info.type); 195 + } 193 196 194 197 /* step 6: verify the data */ 198 + ptr = info_linear->data; 195 199 for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) { 196 - struct bpil_array_desc *desc; 197 - __u32 v1, v2; 200 + const struct bpil_array_desc *desc = &bpil_array_desc[i]; 201 + __u32 count1, count2, size1, size2; 202 + __u64 ptr2; 198 203 199 204 if ((arrays & (1UL << i)) == 0) 200 205 continue; 201 206 202 - desc = bpil_array_desc + i; 203 - v1 = bpf_prog_info_read_offset_u32(&info, desc->count_offset); 204 - v2 = bpf_prog_info_read_offset_u32(&info_linear->info, 207 + count1 = bpf_prog_info_read_offset_u32(&info, desc->count_offset); 208 + count2 = bpf_prog_info_read_offset_u32(&info_linear->info, 205 209 desc->count_offset); 206 - if (v1 != v2) 207 - pr_warning("%s: mismatch in element count\n", __func__); 210 + if (count1 != count2) { 211 + pr_warning("%s: mismatch in element count %u vs %u\n", __func__, count1, count2); 212 + free(info_linear); 213 + return ERR_PTR(-ERANGE); 214 + } 208 215 209 - v1 = bpf_prog_info_read_offset_u32(&info, desc->size_offset); 210 - v2 = bpf_prog_info_read_offset_u32(&info_linear->info, 216 + size1 = bpf_prog_info_read_offset_u32(&info, desc->size_offset); 217 + size2 = bpf_prog_info_read_offset_u32(&info_linear->info, 211 218 desc->size_offset); 212 - if (v1 != v2) 213 - pr_warning("%s: mismatch in rec size\n", __func__); 219 + if (size1 != size2) { 220 + pr_warning("%s: mismatch in rec size %u vs %u\n", __func__, size1, size2); 221 + free(info_linear); 222 + return ERR_PTR(-ERANGE); 223 + } 224 + ptr2 = bpf_prog_info_read_offset_u64(&info_linear->info, desc->array_offset); 225 + 
if (ptr_to_u64(ptr) != ptr2) { 226 + pr_warning("%s: mismatch in array %p vs %llx\n", __func__, ptr, ptr2); 227 + free(info_linear); 228 + return ERR_PTR(-ERANGE); 229 + } 230 + ptr += roundup(count1 * size1, sizeof(__u64)); 214 231 } 215 232 216 233 /* step 7: update info_len and data_len */ ··· 243 224 int i; 244 225 245 226 for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) { 246 - struct bpil_array_desc *desc; 227 + const struct bpil_array_desc *desc = &bpil_array_desc[i]; 247 228 __u64 addr, offs; 248 229 249 230 if ((info_linear->arrays & (1UL << i)) == 0) 250 231 continue; 251 232 252 - desc = bpil_array_desc + i; 253 233 addr = bpf_prog_info_read_offset_u64(&info_linear->info, 254 234 desc->array_offset); 255 235 offs = addr - ptr_to_u64(info_linear->data); ··· 262 244 int i; 263 245 264 246 for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) { 265 - struct bpil_array_desc *desc; 247 + const struct bpil_array_desc *desc = &bpil_array_desc[i]; 266 248 __u64 addr, offs; 267 249 268 250 if ((info_linear->arrays & (1UL << i)) == 0) 269 251 continue; 270 252 271 - desc = bpil_array_desc + i; 272 253 offs = bpf_prog_info_read_offset_u64(&info_linear->info, 273 254 desc->array_offset); 274 255 addr = offs + ptr_to_u64(info_linear->data);
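The strengthened verification loop above re-derives each array pointer by walking `ptr` forward through the linearized buffer, which only works because every variable-length array is padded to an 8-byte boundary so the next one starts aligned. A minimal sketch of that layout rule (helper names are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Round a byte count up to the next 8-byte boundary, matching the
 * roundup(count * size, sizeof(__u64)) step in the verification loop. */
static size_t roundup8(size_t x)
{
	return (x + 7) & ~(size_t)7;
}

/* Offset of the array following a (count, size) array, relative to the
 * start of the current one. */
static size_t next_array_offset(uint32_t count, uint32_t size)
{
	return roundup8((size_t)count * size);
}
```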
+7 -3
tools/perf/util/symbol-elf.c
··· 873 873 874 874 #ifdef HAVE_LIBBFD_BUILDID_SUPPORT 875 875 876 - static int read_build_id(const char *filename, struct build_id *bid) 876 + static int read_build_id(const char *filename, struct build_id *bid, bool block) 877 877 { 878 878 size_t size = sizeof(bid->data); 879 - int err = -1; 879 + int err = -1, fd; 880 880 bfd *abfd; 881 881 882 - abfd = bfd_openr(filename, NULL); 882 + fd = open(filename, block ? O_RDONLY : (O_RDONLY | O_NONBLOCK)); 883 + if (fd < 0) 884 + return -1; 885 + 886 + abfd = bfd_fdopenr(filename, /*target=*/NULL, fd); 883 887 if (!abfd) 884 888 return -1; 885 889
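The switch above from `bfd_openr()` to `open()` plus `bfd_fdopenr()` exists so the caller can apply `O_NONBLOCK` when `block` is false. A standalone sketch of that open pattern (the helper name is illustrative, not from the tree):

```c
#include <assert.h>
#include <fcntl.h>

/* Open the file ourselves so O_NONBLOCK can be applied when the caller
 * must not block (e.g. the path is really a FIFO); the fd can then be
 * handed to a parser such as bfd_fdopenr().  On regular files
 * O_NONBLOCK has no effect, so the blocking case behaves the same. */
static int open_maybe_nonblock(const char *path, int block)
{
	return open(path, block ? O_RDONLY : (O_RDONLY | O_NONBLOCK));
}
```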
+2 -2
tools/testing/selftests/drivers/net/hw/csum.py
··· 17 17 ip_args = f"-{ipver} -S {cfg.remote_addr_v[ipver]} -D {cfg.addr_v[ipver]}" 18 18 19 19 rx_cmd = f"{cfg.bin_local} -i {cfg.ifname} -n 100 {ip_args} -r 1 -R {extra_args}" 20 - tx_cmd = f"{cfg.bin_remote} -i {cfg.ifname} -n 100 {ip_args} -r 1 -T {extra_args}" 20 + tx_cmd = f"{cfg.bin_remote} -i {cfg.remote_ifname} -n 100 {ip_args} -r 1 -T {extra_args}" 21 21 22 22 with bkg(rx_cmd, exit_wait=True): 23 23 wait_port_listen(34000, proto="udp") ··· 37 37 if extra_args != "-U -Z": 38 38 extra_args += " -r 1" 39 39 40 - rx_cmd = f"{cfg.bin_remote} -i {cfg.ifname} -L 1 -n 100 {ip_args} -R {extra_args}" 40 + rx_cmd = f"{cfg.bin_remote} -i {cfg.remote_ifname} -L 1 -n 100 {ip_args} -R {extra_args}" 41 41 tx_cmd = f"{cfg.bin_local} -i {cfg.ifname} -L 1 -n 100 {ip_args} -T {extra_args}" 42 42 43 43 with bkg(rx_cmd, host=cfg.remote, exit_wait=True):
+6
tools/testing/selftests/kvm/Makefile.kvm
··· 198 198 TEST_GEN_PROGS_riscv = $(TEST_GEN_PROGS_COMMON) 199 199 TEST_GEN_PROGS_riscv += riscv/sbi_pmu_test 200 200 TEST_GEN_PROGS_riscv += riscv/ebreak_test 201 + TEST_GEN_PROGS_riscv += access_tracking_perf_test 201 202 TEST_GEN_PROGS_riscv += arch_timer 202 203 TEST_GEN_PROGS_riscv += coalesced_io_test 204 + TEST_GEN_PROGS_riscv += dirty_log_perf_test 203 205 TEST_GEN_PROGS_riscv += get-reg-list 206 + TEST_GEN_PROGS_riscv += memslot_modification_stress_test 207 + TEST_GEN_PROGS_riscv += memslot_perf_test 208 + TEST_GEN_PROGS_riscv += mmu_stress_test 209 + TEST_GEN_PROGS_riscv += rseq_test 204 210 TEST_GEN_PROGS_riscv += steal_time 205 211 206 212 TEST_GEN_PROGS_loongarch += coalesced_io_test
+1
tools/testing/selftests/kvm/access_tracking_perf_test.c
··· 50 50 #include "memstress.h" 51 51 #include "guest_modes.h" 52 52 #include "processor.h" 53 + #include "ucall_common.h" 53 54 54 55 #include "cgroup_util.h" 55 56 #include "lru_gen_util.h"
+1
tools/testing/selftests/kvm/include/riscv/processor.h
··· 9 9 10 10 #include <linux/stringify.h> 11 11 #include <asm/csr.h> 12 + #include <asm/vdso/processor.h> 12 13 #include "kvm_util.h" 13 14 14 15 #define INSN_OPCODE_MASK 0x007c
+1
tools/testing/selftests/kvm/memslot_modification_stress_test.c
··· 22 22 #include "processor.h" 23 23 #include "test_util.h" 24 24 #include "guest_modes.h" 25 + #include "ucall_common.h" 25 26 26 27 #define DUMMY_MEMSLOT_INDEX 7 27 28
+1
tools/testing/selftests/kvm/memslot_perf_test.c
··· 25 25 #include <test_util.h> 26 26 #include <kvm_util.h> 27 27 #include <processor.h> 28 + #include <ucall_common.h> 28 29 29 30 #define MEM_EXTRA_SIZE SZ_64K 30 31
+60
tools/testing/selftests/kvm/riscv/get-reg-list.c
··· 80 80 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZCF: 81 81 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZCMOP: 82 82 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZFA: 83 + case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZFBFMIN: 83 84 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZFH: 84 85 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZFHMIN: 85 86 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZICBOM: 87 + case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZICBOP: 86 88 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZICBOZ: 87 89 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZICCRSE: 88 90 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZICNTR: ··· 105 103 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZTSO: 106 104 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZVBB: 107 105 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZVBC: 106 + case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZVFBFMIN: 107 + case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZVFBFWMA: 108 108 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZVFH: 109 109 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZVFHMIN: 110 110 case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZVKB: ··· 132 128 case KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_DBCN: 133 129 case KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_SUSP: 134 130 case KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_STA: 131 + case KVM_REG_RISCV_SBI_EXT | 
KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_FWFT: 135 132 case KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_EXPERIMENTAL: 136 133 case KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_VENDOR: 137 134 return true; ··· 260 255 return "KVM_REG_RISCV_CONFIG_REG(zicbom_block_size)"; 261 256 case KVM_REG_RISCV_CONFIG_REG(zicboz_block_size): 262 257 return "KVM_REG_RISCV_CONFIG_REG(zicboz_block_size)"; 258 + case KVM_REG_RISCV_CONFIG_REG(zicbop_block_size): 259 + return "KVM_REG_RISCV_CONFIG_REG(zicbop_block_size)"; 263 260 case KVM_REG_RISCV_CONFIG_REG(mvendorid): 264 261 return "KVM_REG_RISCV_CONFIG_REG(mvendorid)"; 265 262 case KVM_REG_RISCV_CONFIG_REG(marchid): ··· 539 532 KVM_ISA_EXT_ARR(ZCF), 540 533 KVM_ISA_EXT_ARR(ZCMOP), 541 534 KVM_ISA_EXT_ARR(ZFA), 535 + KVM_ISA_EXT_ARR(ZFBFMIN), 542 536 KVM_ISA_EXT_ARR(ZFH), 543 537 KVM_ISA_EXT_ARR(ZFHMIN), 544 538 KVM_ISA_EXT_ARR(ZICBOM), 539 + KVM_ISA_EXT_ARR(ZICBOP), 545 540 KVM_ISA_EXT_ARR(ZICBOZ), 546 541 KVM_ISA_EXT_ARR(ZICCRSE), 547 542 KVM_ISA_EXT_ARR(ZICNTR), ··· 564 555 KVM_ISA_EXT_ARR(ZTSO), 565 556 KVM_ISA_EXT_ARR(ZVBB), 566 557 KVM_ISA_EXT_ARR(ZVBC), 558 + KVM_ISA_EXT_ARR(ZVFBFMIN), 559 + KVM_ISA_EXT_ARR(ZVFBFWMA), 567 560 KVM_ISA_EXT_ARR(ZVFH), 568 561 KVM_ISA_EXT_ARR(ZVFHMIN), 569 562 KVM_ISA_EXT_ARR(ZVKB), ··· 638 627 KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_DBCN), 639 628 KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_SUSP), 640 629 KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_STA), 630 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_FWFT), 641 631 KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_EXPERIMENTAL), 642 632 KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_VENDOR), 643 633 }; ··· 695 683 return strdup_printf("KVM_REG_RISCV_SBI_STA | %lld /* UNKNOWN */", reg_off); 696 684 } 697 685 686 + static const char *sbi_fwft_id_to_str(__u64 reg_off) 687 + { 688 + switch (reg_off) { 689 + case 0: return "KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(misaligned_deleg.enable)"; 690 + case 1: return "KVM_REG_RISCV_SBI_FWFT | 
KVM_REG_RISCV_SBI_FWFT_REG(misaligned_deleg.flags)"; 691 + case 2: return "KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(misaligned_deleg.value)"; 692 + case 3: return "KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(pointer_masking.enable)"; 693 + case 4: return "KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(pointer_masking.flags)"; 694 + case 5: return "KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(pointer_masking.value)"; 695 + } 696 + return strdup_printf("KVM_REG_RISCV_SBI_FWFT | %lld /* UNKNOWN */", reg_off); 697 + } 698 + 698 699 static const char *sbi_id_to_str(const char *prefix, __u64 id) 699 700 { 700 701 __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_SBI_STATE); ··· 720 695 switch (reg_subtype) { 721 696 case KVM_REG_RISCV_SBI_STA: 722 697 return sbi_sta_id_to_str(reg_off); 698 + case KVM_REG_RISCV_SBI_FWFT: 699 + return sbi_fwft_id_to_str(reg_off); 723 700 } 724 701 725 702 return strdup_printf("%lld | %lld /* UNKNOWN */", reg_subtype, reg_off); ··· 807 780 */ 808 781 static __u64 base_regs[] = { 809 782 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(isa), 783 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(zicbom_block_size), 810 784 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(mvendorid), 811 785 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(marchid), 812 786 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(mimpid), 787 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(zicboz_block_size), 813 788 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(satp_mode), 789 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(zicbop_block_size), 814 790 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.pc), 815 
791 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.ra), 816 792 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.sp), ··· 889 859 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_STATE | KVM_REG_RISCV_SBI_STA | KVM_REG_RISCV_SBI_STA_REG(shmem_hi), 890 860 }; 891 861 862 + static __u64 sbi_fwft_regs[] = { 863 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_FWFT, 864 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_STATE | KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(misaligned_deleg.enable), 865 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_STATE | KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(misaligned_deleg.flags), 866 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_STATE | KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(misaligned_deleg.value), 867 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_STATE | KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(pointer_masking.enable), 868 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_STATE | KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(pointer_masking.flags), 869 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_STATE | KVM_REG_RISCV_SBI_FWFT | KVM_REG_RISCV_SBI_FWFT_REG(pointer_masking.value), 870 + }; 871 + 892 872 static __u64 zicbom_regs[] = { 893 873 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(zicbom_block_size), 894 874 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZICBOM, 875 + }; 876 + 877 + static __u64 zicbop_regs[] = { 878 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(zicbop_block_size), 879 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZICBOP, 895 880 }; 896 881 897 882 static __u64 
zicboz_regs[] = { ··· 1055 1010 #define SUBLIST_SBI_STA \ 1056 1011 {"sbi-sta", .feature_type = VCPU_FEATURE_SBI_EXT, .feature = KVM_RISCV_SBI_EXT_STA, \ 1057 1012 .regs = sbi_sta_regs, .regs_n = ARRAY_SIZE(sbi_sta_regs),} 1013 + #define SUBLIST_SBI_FWFT \ 1014 + {"sbi-fwft", .feature_type = VCPU_FEATURE_SBI_EXT, .feature = KVM_RISCV_SBI_EXT_FWFT, \ 1015 + .regs = sbi_fwft_regs, .regs_n = ARRAY_SIZE(sbi_fwft_regs),} 1058 1016 #define SUBLIST_ZICBOM \ 1059 1017 {"zicbom", .feature = KVM_RISCV_ISA_EXT_ZICBOM, .regs = zicbom_regs, .regs_n = ARRAY_SIZE(zicbom_regs),} 1018 + #define SUBLIST_ZICBOP \ 1019 + {"zicbop", .feature = KVM_RISCV_ISA_EXT_ZICBOP, .regs = zicbop_regs, .regs_n = ARRAY_SIZE(zicbop_regs),} 1060 1020 #define SUBLIST_ZICBOZ \ 1061 1021 {"zicboz", .feature = KVM_RISCV_ISA_EXT_ZICBOZ, .regs = zicboz_regs, .regs_n = ARRAY_SIZE(zicboz_regs),} 1062 1022 #define SUBLIST_AIA \ ··· 1142 1092 KVM_SBI_EXT_SIMPLE_CONFIG(pmu, PMU); 1143 1093 KVM_SBI_EXT_SIMPLE_CONFIG(dbcn, DBCN); 1144 1094 KVM_SBI_EXT_SIMPLE_CONFIG(susp, SUSP); 1095 + KVM_SBI_EXT_SUBLIST_CONFIG(fwft, FWFT); 1145 1096 1146 1097 KVM_ISA_EXT_SUBLIST_CONFIG(aia, AIA); 1147 1098 KVM_ISA_EXT_SUBLIST_CONFIG(fp_f, FP_F); ··· 1178 1127 KVM_ISA_EXT_SIMPLE_CONFIG(zcf, ZCF); 1179 1128 KVM_ISA_EXT_SIMPLE_CONFIG(zcmop, ZCMOP); 1180 1129 KVM_ISA_EXT_SIMPLE_CONFIG(zfa, ZFA); 1130 + KVM_ISA_EXT_SIMPLE_CONFIG(zfbfmin, ZFBFMIN); 1181 1131 KVM_ISA_EXT_SIMPLE_CONFIG(zfh, ZFH); 1182 1132 KVM_ISA_EXT_SIMPLE_CONFIG(zfhmin, ZFHMIN); 1183 1133 KVM_ISA_EXT_SUBLIST_CONFIG(zicbom, ZICBOM); 1134 + KVM_ISA_EXT_SUBLIST_CONFIG(zicbop, ZICBOP); 1184 1135 KVM_ISA_EXT_SUBLIST_CONFIG(zicboz, ZICBOZ); 1185 1136 KVM_ISA_EXT_SIMPLE_CONFIG(ziccrse, ZICCRSE); 1186 1137 KVM_ISA_EXT_SIMPLE_CONFIG(zicntr, ZICNTR); ··· 1203 1150 KVM_ISA_EXT_SIMPLE_CONFIG(ztso, ZTSO); 1204 1151 KVM_ISA_EXT_SIMPLE_CONFIG(zvbb, ZVBB); 1205 1152 KVM_ISA_EXT_SIMPLE_CONFIG(zvbc, ZVBC); 1153 + KVM_ISA_EXT_SIMPLE_CONFIG(zvfbfmin, ZVFBFMIN); 1154 + 
KVM_ISA_EXT_SIMPLE_CONFIG(zvfbfwma, ZVFBFWMA); 1206 1155 KVM_ISA_EXT_SIMPLE_CONFIG(zvfh, ZVFH); 1207 1156 KVM_ISA_EXT_SIMPLE_CONFIG(zvfhmin, ZVFHMIN); 1208 1157 KVM_ISA_EXT_SIMPLE_CONFIG(zvkb, ZVKB); ··· 1222 1167 &config_sbi_pmu, 1223 1168 &config_sbi_dbcn, 1224 1169 &config_sbi_susp, 1170 + &config_sbi_fwft, 1225 1171 &config_aia, 1226 1172 &config_fp_f, 1227 1173 &config_fp_d, ··· 1257 1201 &config_zcf, 1258 1202 &config_zcmop, 1259 1203 &config_zfa, 1204 + &config_zfbfmin, 1260 1205 &config_zfh, 1261 1206 &config_zfhmin, 1262 1207 &config_zicbom, 1208 + &config_zicbop, 1263 1209 &config_zicboz, 1264 1210 &config_ziccrse, 1265 1211 &config_zicntr, ··· 1282 1224 &config_ztso, 1283 1225 &config_zvbb, 1284 1226 &config_zvbc, 1227 + &config_zvfbfmin, 1228 + &config_zvfbfwma, 1285 1229 &config_zvfh, 1286 1230 &config_zvfhmin, 1287 1231 &config_zvkb,
+2 -2
tools/testing/selftests/mm/cow.c
··· 1554 1554 } 1555 1555 1556 1556 /* Read from the page to populate the shared zeropage. */ 1557 - FORCE_READ(mem); 1558 - FORCE_READ(smem); 1557 + FORCE_READ(*mem); 1558 + FORCE_READ(*smem); 1559 1559 1560 1560 fn(mem, smem, pagesize); 1561 1561 munmap:
+1 -1
tools/testing/selftests/mm/guard-regions.c
··· 145 145 if (write) 146 146 *ptr = 'x'; 147 147 else 148 - FORCE_READ(ptr); 148 + FORCE_READ(*ptr); 149 149 } 150 150 151 151 signal_jump_set = false;
+3 -1
tools/testing/selftests/mm/hugetlb-madvise.c
··· 50 50 unsigned long i; 51 51 52 52 for (i = 0; i < nr_pages; i++) { 53 + unsigned long *addr2 = 54 + ((unsigned long *)(addr + (i * huge_page_size))); 53 55 /* Prevent the compiler from optimizing out the entire loop: */ 54 - FORCE_READ(((unsigned long *)(addr + (i * huge_page_size)))); 56 + FORCE_READ(*addr2); 55 57 } 56 58 } 57 59
+1 -1
tools/testing/selftests/mm/migration.c
··· 110 110 * the memory access actually happens and prevents the compiler 111 111 * from optimizing away this entire loop. 112 112 */ 113 - FORCE_READ((uint64_t *)ptr); 113 + FORCE_READ(*(uint64_t *)ptr); 114 114 } 115 115 116 116 return NULL;
+1 -1
tools/testing/selftests/mm/pagemap_ioctl.c
··· 1525 1525 1526 1526 ret = madvise(mem, hpage_size, MADV_HUGEPAGE); 1527 1527 if (!ret) { 1528 - FORCE_READ(mem); 1528 + FORCE_READ(*mem); 1529 1529 1530 1530 ret = pagemap_ioctl(mem, hpage_size, &vec, 1, 0, 1531 1531 0, PAGE_IS_PFNZERO, 0, 0, PAGE_IS_PFNZERO);
+5 -2
tools/testing/selftests/mm/split_huge_page_test.c
··· 439 439 } 440 440 madvise(*addr, fd_size, MADV_HUGEPAGE); 441 441 442 - for (size_t i = 0; i < fd_size; i++) 443 - FORCE_READ((*addr + i)); 442 + for (size_t i = 0; i < fd_size; i++) { 443 + char *addr2 = *addr + i; 444 + 445 + FORCE_READ(*addr2); 446 + } 444 447 445 448 if (!check_huge_file(*addr, fd_size / pmd_pagesize, pmd_pagesize)) { 446 449 ksft_print_msg("No large pagecache folio generated, please provide a filesystem supporting large folio\n");
+1 -1
tools/testing/selftests/mm/vm_util.h
··· 23 23 * anything with it in order to trigger a read page fault. We therefore must use 24 24 * volatile to stop the compiler from optimising this away. 25 25 */ 26 - #define FORCE_READ(x) (*(volatile typeof(x) *)x) 26 + #define FORCE_READ(x) (*(const volatile typeof(x) *)&(x)) 27 27 28 28 extern unsigned int __page_size; 29 29 extern unsigned int __page_shift;
+1
tools/testing/selftests/net/Makefile
··· 99 99 TEST_GEN_PROGS += bind_timewait 100 100 TEST_PROGS += test_vxlan_mdb.sh 101 101 TEST_PROGS += test_bridge_neigh_suppress.sh 102 + TEST_PROGS += test_vxlan_nh.sh 102 103 TEST_PROGS += test_vxlan_nolocalbypass.sh 103 104 TEST_PROGS += test_bridge_backup_port.sh 104 105 TEST_PROGS += test_neigh.sh
+2 -2
tools/testing/selftests/net/bind_bhash.c
··· 75 75 int *array = (int *)arg; 76 76 77 77 for (i = 0; i < MAX_CONNECTIONS; i++) { 78 - sock_fd = bind_socket(SO_REUSEADDR | SO_REUSEPORT, setup_addr); 78 + sock_fd = bind_socket(SO_REUSEPORT, setup_addr); 79 79 if (sock_fd < 0) { 80 80 ret = sock_fd; 81 81 pthread_exit(&ret); ··· 103 103 104 104 setup_addr = use_v6 ? setup_addr_v6 : setup_addr_v4; 105 105 106 - listener_fd = bind_socket(SO_REUSEADDR | SO_REUSEPORT, setup_addr); 106 + listener_fd = bind_socket(SO_REUSEPORT, setup_addr); 107 107 if (listen(listener_fd, 100) < 0) { 108 108 perror("listen failed"); 109 109 return -1;
+1 -1
tools/testing/selftests/net/netfilter/conntrack_clash.sh
··· 99 99 local entries 100 100 local cre 101 101 102 - if ! ip netns exec "$ns" ./udpclash $daddr $dport;then 102 + if ! ip netns exec "$ns" timeout 30 ./udpclash $daddr $dport;then 103 103 echo "INFO: did not receive expected number of replies for $daddr:$dport" 104 104 ip netns exec "$ctns" conntrack -S 105 105 # don't fail: check if clash resolution triggered after all.
+3 -2
tools/testing/selftests/net/netfilter/conntrack_resize.sh
··· 187 187 [ -x udpclash ] || return 188 188 189 189 while [ $now -lt $end ]; do 190 - ip netns exec "$ns" ./udpclash 127.0.0.1 $((RANDOM%65536)) > /dev/null 2>&1 190 + ip netns exec "$ns" timeout 30 ./udpclash 127.0.0.1 $((RANDOM%65536)) > /dev/null 2>&1 191 191 192 192 now=$(date +%s) 193 193 done ··· 277 277 insert_flood() 278 278 { 279 279 local n="$1" 280 + local timeout="$2" 280 281 local r=0 281 282 282 283 r=$((RANDOM%$insert_count)) ··· 303 302 read tainted_then < /proc/sys/kernel/tainted 304 303 305 304 for n in "$nsclient1" "$nsclient2";do 306 - insert_flood "$n" & 305 + insert_flood "$n" "$timeout" & 307 306 done 308 307 309 308 # resize table constantly while flood/insert/dump/flushs
+76 -37
tools/testing/selftests/net/netfilter/nft_flowtable.sh
··· 20 20 SOCAT_TIMEOUT=60 21 21 22 22 nsin="" 23 + nsin_small="" 23 24 ns1out="" 24 25 ns2out="" 25 26 ··· 37 36 38 37 cleanup_all_ns 39 38 40 - rm -f "$nsin" "$ns1out" "$ns2out" 39 + rm -f "$nsin" "$nsin_small" "$ns1out" "$ns2out" 41 40 42 41 [ "$log_netns" -eq 0 ] && sysctl -q net.netfilter.nf_log_all_netns="$log_netns" 43 42 } ··· 73 72 rmtu=2000 74 73 75 74 filesize=$((2 * 1024 * 1024)) 75 + filesize_small=$((filesize / 16)) 76 76 77 77 usage(){ 78 78 echo "nft_flowtable.sh [OPTIONS]" ··· 91 89 o) omtu=$OPTARG;; 92 90 l) lmtu=$OPTARG;; 93 91 r) rmtu=$OPTARG;; 94 - s) filesize=$OPTARG;; 92 + s) 93 + filesize=$OPTARG 94 + filesize_small=$((OPTARG / 16)) 95 + ;; 95 96 *) usage;; 96 97 esac 97 98 done ··· 220 215 fi 221 216 222 217 nsin=$(mktemp) 218 + nsin_small=$(mktemp) 223 219 ns1out=$(mktemp) 224 220 ns2out=$(mktemp) 225 221 ··· 271 265 check_dscp() 272 266 { 273 267 local what=$1 268 + local pmtud="$2" 274 269 local ok=1 275 270 276 271 local counter ··· 284 277 local pc4z=${counter%*bytes*} 285 278 local pc4z=${pc4z#*packets} 286 279 280 + local failmsg="FAIL: pmtu $pmtu: $what counters do not match, expected" 281 + 287 282 case "$what" in 288 283 "dscp_none") 289 284 if [ "$pc4" -gt 0 ] || [ "$pc4z" -eq 0 ]; then 290 - echo "FAIL: dscp counters do not match, expected dscp3 == 0, dscp0 > 0, but got $pc4,$pc4z" 1>&2 285 + echo "$failmsg dscp3 == 0, dscp0 > 0, but got $pc4,$pc4z" 1>&2 291 286 ret=1 292 287 ok=0 293 288 fi 294 289 ;; 295 290 "dscp_fwd") 296 291 if [ "$pc4" -eq 0 ] || [ "$pc4z" -eq 0 ]; then 297 - echo "FAIL: dscp counters do not match, expected dscp3 and dscp0 > 0 but got $pc4,$pc4z" 1>&2 292 + echo "$failmsg dscp3 and dscp0 > 0 but got $pc4,$pc4z" 1>&2 298 293 ret=1 299 294 ok=0 300 295 fi 301 296 ;; 302 297 "dscp_ingress") 303 298 if [ "$pc4" -eq 0 ] || [ "$pc4z" -gt 0 ]; then 304 - echo "FAIL: dscp counters do not match, expected dscp3 > 0, dscp0 == 0 but got $pc4,$pc4z" 1>&2 299 + echo "$failmsg dscp3 > 0, dscp0 == 0 but got $pc4,$pc4z" 
1>&2 305 300 ret=1 306 301 ok=0 307 302 fi 308 303 ;; 309 304 "dscp_egress") 310 305 if [ "$pc4" -eq 0 ] || [ "$pc4z" -gt 0 ]; then 311 - echo "FAIL: dscp counters do not match, expected dscp3 > 0, dscp0 == 0 but got $pc4,$pc4z" 1>&2 306 + echo "$failmsg dscp3 > 0, dscp0 == 0 but got $pc4,$pc4z" 1>&2 312 307 ret=1 313 308 ok=0 314 309 fi 315 310 ;; 316 311 *) 317 - echo "FAIL: Unknown DSCP check" 1>&2 312 + echo "$failmsg: Unknown DSCP check" 1>&2 318 313 ret=1 319 314 ok=0 320 315 esac ··· 328 319 329 320 check_transfer() 330 321 { 331 - in=$1 332 - out=$2 333 - what=$3 322 + local in=$1 323 + local out=$2 324 + local what=$3 334 325 335 326 if ! cmp "$in" "$out" > /dev/null 2>&1; then 336 327 echo "FAIL: file mismatch for $what" 1>&2 ··· 351 342 { 352 343 local nsa=$1 353 344 local nsb=$2 354 - local dstip=$3 355 - local dstport=$4 345 + local pmtu=$3 346 + local dstip=$4 347 + local dstport=$5 356 348 local lret=0 349 + local socatc 350 + local socatl 351 + local infile="$nsin" 357 352 358 - timeout "$SOCAT_TIMEOUT" ip netns exec "$nsb" socat -4 TCP-LISTEN:12345,reuseaddr STDIO < "$nsin" > "$ns2out" & 353 + if [ $pmtu -eq 0 ]; then 354 + infile="$nsin_small" 355 + fi 356 + 357 + timeout "$SOCAT_TIMEOUT" ip netns exec "$nsb" socat -4 TCP-LISTEN:12345,reuseaddr STDIO < "$infile" > "$ns2out" & 359 358 lpid=$! 360 359 361 360 busywait 1000 listener_ready 362 361 363 - timeout "$SOCAT_TIMEOUT" ip netns exec "$nsa" socat -4 TCP:"$dstip":"$dstport" STDIO < "$nsin" > "$ns1out" 362 + timeout "$SOCAT_TIMEOUT" ip netns exec "$nsa" socat -4 TCP:"$dstip":"$dstport" STDIO < "$infile" > "$ns1out" 363 + socatc=$? 364 364 365 365 wait $lpid 366 + socatl=$? 366 367 367 - if ! check_transfer "$nsin" "$ns2out" "ns1 -> ns2"; then 368 + if [ $socatl -ne 0 ] || [ $socatc -ne 0 ];then 369 + rc=1 370 + fi 371 + 372 + if ! check_transfer "$infile" "$ns2out" "ns1 -> ns2"; then 368 373 lret=1 369 374 ret=1 370 375 fi 371 376 372 - if ! 
check_transfer "$nsin" "$ns1out" "ns1 <- ns2"; then 377 + if ! check_transfer "$infile" "$ns1out" "ns1 <- ns2"; then 373 378 lret=1 374 379 ret=1 375 380 fi ··· 393 370 394 371 test_tcp_forwarding() 395 372 { 396 - test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345 373 + local pmtu="$3" 374 + 375 + test_tcp_forwarding_ip "$1" "$2" "$pmtu" 10.0.2.99 12345 397 376 398 377 return $? 399 378 } 400 379 401 380 test_tcp_forwarding_set_dscp() 402 381 { 403 - check_dscp "dscp_none" 382 + local pmtu="$3" 404 383 405 384 ip netns exec "$nsr1" nft -f - <<EOF 406 385 table netdev dscpmangle { ··· 413 388 } 414 389 EOF 415 390 if [ $? -eq 0 ]; then 416 - test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345 417 - check_dscp "dscp_ingress" 391 + test_tcp_forwarding_ip "$1" "$2" "$3" 10.0.2.99 12345 392 + check_dscp "dscp_ingress" "$pmtu" 418 393 419 394 ip netns exec "$nsr1" nft delete table netdev dscpmangle 420 395 else ··· 430 405 } 431 406 EOF 432 407 if [ $? -eq 0 ]; then 433 - test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345 434 - check_dscp "dscp_egress" 408 + test_tcp_forwarding_ip "$1" "$2" "$pmtu" 10.0.2.99 12345 409 + check_dscp "dscp_egress" "$pmtu" 435 410 436 - ip netns exec "$nsr1" nft flush table netdev dscpmangle 411 + ip netns exec "$nsr1" nft delete table netdev dscpmangle 437 412 else 438 413 echo "SKIP: Could not load netdev:egress for veth1" 439 414 fi ··· 441 416 # partial. If flowtable really works, then both dscp-is-0 and dscp-is-cs3 442 417 # counters should have seen packets (before and after ft offload kicks in). 
443 418 ip netns exec "$nsr1" nft -a insert rule inet filter forward ip dscp set cs3 444 - test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345 445 - check_dscp "dscp_fwd" 419 + test_tcp_forwarding_ip "$1" "$2" "$pmtu" 10.0.2.99 12345 420 + check_dscp "dscp_fwd" "$pmtu" 446 421 } 447 422 448 423 test_tcp_forwarding_nat() 449 424 { 425 + local nsa="$1" 426 + local nsb="$2" 427 + local pmtu="$3" 428 + local what="$4" 450 429 local lret 451 - local pmtu 452 430 453 - test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345 431 + [ "$pmtu" -eq 0 ] && what="$what (pmtu disabled)" 432 + 433 + test_tcp_forwarding_ip "$nsa" "$nsb" "$pmtu" 10.0.2.99 12345 454 434 lret=$? 455 - 456 - pmtu=$3 457 - what=$4 458 435 459 436 if [ "$lret" -eq 0 ] ; then 460 437 if [ "$pmtu" -eq 1 ] ;then 461 - check_counters "flow offload for ns1/ns2 with masquerade and pmtu discovery $what" 438 + check_counters "flow offload for ns1/ns2 with masquerade $what" 462 439 else 463 440 echo "PASS: flow offload for ns1/ns2 with masquerade $what" 464 441 fi 465 442 466 - test_tcp_forwarding_ip "$1" "$2" 10.6.6.6 1666 443 + test_tcp_forwarding_ip "$1" "$2" "$pmtu" 10.6.6.6 1666 467 444 lret=$? 468 445 if [ "$pmtu" -eq 1 ] ;then 469 - check_counters "flow offload for ns1/ns2 with dnat and pmtu discovery $what" 446 + check_counters "flow offload for ns1/ns2 with dnat $what" 470 447 elif [ "$lret" -eq 0 ] ; then 471 448 echo "PASS: flow offload for ns1/ns2 with dnat $what" 472 449 fi 450 + else 451 + echo "FAIL: flow offload for ns1/ns2 with dnat $what" 473 452 fi 474 453 475 454 return $lret 476 455 } 477 456 478 457 make_file "$nsin" "$filesize" 458 + make_file "$nsin_small" "$filesize_small" 479 459 480 460 # First test: 481 461 # No PMTU discovery, nsr1 is expected to fragment packets from ns1 to ns2 as needed. 482 462 # Due to MTU mismatch in both directions, all packets (except small packets like pure 483 463 # acks) have to be handled by normal forwarding path. 
Therefore, packet counters 484 464 # are not checked. 485 - if test_tcp_forwarding "$ns1" "$ns2"; then 465 + if test_tcp_forwarding "$ns1" "$ns2" 0; then 486 466 echo "PASS: flow offloaded for ns1/ns2" 487 467 else 488 468 echo "FAIL: flow offload for ns1/ns2:" 1>&2 ··· 519 489 } 520 490 EOF 521 491 492 + check_dscp "dscp_none" "0" 522 493 if ! test_tcp_forwarding_set_dscp "$ns1" "$ns2" 0 ""; then 523 - echo "FAIL: flow offload for ns1/ns2 with dscp update" 1>&2 494 + echo "FAIL: flow offload for ns1/ns2 with dscp update and no pmtu discovery" 1>&2 524 495 exit 0 525 496 fi 526 497 ··· 543 512 # are lower than file size and packets were forwarded via flowtable layer. 544 513 # For earlier tests (large mtus), packets cannot be handled via flowtable 545 514 # (except pure acks and other small packets). 515 + ip netns exec "$nsr1" nft reset counters table inet filter >/dev/null 516 + ip netns exec "$ns2" nft reset counters table inet filter >/dev/null 517 + 518 + if ! test_tcp_forwarding_set_dscp "$ns1" "$ns2" 1 ""; then 519 + echo "FAIL: flow offload for ns1/ns2 with dscp update and pmtu discovery" 1>&2 520 + exit 0 521 + fi 522 + 546 523 ip netns exec "$nsr1" nft reset counters table inet filter >/dev/null 547 524 548 525 if ! test_tcp_forwarding_nat "$ns1" "$ns2" 1 ""; then ··· 683 644 ip -net "$ns2" route add default via 10.0.2.1 684 645 ip -net "$ns2" route add default via dead:2::1 685 646 686 - if test_tcp_forwarding "$ns1" "$ns2"; then 647 + if test_tcp_forwarding "$ns1" "$ns2" 1; then 687 648 check_counters "ipsec tunnel mode for ns1/ns2" 688 649 else 689 650 echo "FAIL: ipsec tunnel mode for ns1/ns2" ··· 707 668 fi 708 669 709 670 echo "re-run with random mtus and file size: -o $o -l $l -r $r -s $filesize" 710 - $0 -o "$o" -l "$l" -r "$r" -s "$filesize" 671 + $0 -o "$o" -l "$l" -r "$r" -s "$filesize" || ret=1 711 672 fi 712 673 713 674 exit $ret
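The reworked test_tcp_forwarding_ip() above now records the exit status of the background socat listener (`socatl=$?` after `wait $lpid`) and of the foreground client (`socatc=$?`) instead of only comparing the transferred files. A minimal sketch of that pattern, with illustrative names (run_transfer, client_rc, listener_rc) standing in for the selftest's socat pipeline:

```shell
#!/bin/sh
# Sketch of the exit-status capture the patch adds: start the
# "listener" in the background, remember its PID, run the "client"
# in the foreground, then reap the listener with wait(1). The exit
# status of `wait <pid>` is the background job's exit status, so
# neither side's failure is silently dropped. All names here are
# illustrative, not taken from the selftest.
run_transfer() {
	infile=$1
	outfile=$2

	# background listener (cat stands in for socat TCP-LISTEN)
	cat "$infile" > "$outfile" &
	lpid=$!

	# foreground client work would go here; capture its status
	true
	client_rc=$?

	# read $? immediately after wait; any command in between
	# would clobber it
	wait "$lpid"
	listener_rc=$?

	[ "$client_rc" -eq 0 ] && [ "$listener_rc" -eq 0 ]
}
```

The key detail is reading `$?` on the very next line after `wait $lpid`, which is exactly what the patched function does before falling through to the file comparison.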
+1 -1
tools/testing/selftests/net/netfilter/udpclash.c
··· 29 29 int sockfd; 30 30 }; 31 31 32 - static int wait = 1; 32 + static volatile int wait = 1; 33 33 34 34 static void *thread_main(void *varg) 35 35 {
+223
tools/testing/selftests/net/test_vxlan_nh.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + source lib.sh 5 + TESTS=" 6 + basic_tx_ipv4 7 + basic_tx_ipv6 8 + learning 9 + proxy_ipv4 10 + proxy_ipv6 11 + " 12 + VERBOSE=0 13 + 14 + ################################################################################ 15 + # Utilities 16 + 17 + run_cmd() 18 + { 19 + local cmd="$1" 20 + local out 21 + local stderr="2>/dev/null" 22 + 23 + if [ "$VERBOSE" = "1" ]; then 24 + echo "COMMAND: $cmd" 25 + stderr= 26 + fi 27 + 28 + out=$(eval "$cmd" "$stderr") 29 + rc=$? 30 + if [ "$VERBOSE" -eq 1 ] && [ -n "$out" ]; then 31 + echo " $out" 32 + fi 33 + 34 + return $rc 35 + } 36 + 37 + ################################################################################ 38 + # Cleanup 39 + 40 + exit_cleanup_all() 41 + { 42 + cleanup_all_ns 43 + exit "${EXIT_STATUS}" 44 + } 45 + 46 + ################################################################################ 47 + # Tests 48 + 49 + nh_stats_get() 50 + { 51 + ip -n "$ns1" -s -j nexthop show id 10 | jq ".[][\"group_stats\"][][\"packets\"]" 52 + } 53 + 54 + tc_stats_get() 55 + { 56 + tc_rule_handle_stats_get "dev dummy1 egress" 101 ".packets" "-n $ns1" 57 + } 58 + 59 + basic_tx_common() 60 + { 61 + local af_str=$1; shift 62 + local proto=$1; shift 63 + local local_addr=$1; shift 64 + local plen=$1; shift 65 + local remote_addr=$1; shift 66 + 67 + RET=0 68 + 69 + # Test basic Tx functionality. Check that stats are incremented on 70 + # both the FDB nexthop group and the egress device. 
71 + 72 + run_cmd "ip -n $ns1 link add name dummy1 up type dummy" 73 + run_cmd "ip -n $ns1 route add $remote_addr/$plen dev dummy1" 74 + run_cmd "tc -n $ns1 qdisc add dev dummy1 clsact" 75 + run_cmd "tc -n $ns1 filter add dev dummy1 egress proto $proto pref 1 handle 101 flower ip_proto udp dst_ip $remote_addr dst_port 4789 action pass" 76 + 77 + run_cmd "ip -n $ns1 address add $local_addr/$plen dev lo" 78 + 79 + run_cmd "ip -n $ns1 nexthop add id 1 via $remote_addr fdb" 80 + run_cmd "ip -n $ns1 nexthop add id 10 group 1 fdb" 81 + 82 + run_cmd "ip -n $ns1 link add name vx0 up type vxlan id 10010 local $local_addr dstport 4789" 83 + run_cmd "bridge -n $ns1 fdb add 00:11:22:33:44:55 dev vx0 self static nhid 10" 84 + 85 + run_cmd "ip netns exec $ns1 mausezahn vx0 -a own -b 00:11:22:33:44:55 -c 1 -q" 86 + 87 + busywait "$BUSYWAIT_TIMEOUT" until_counter_is "== 1" nh_stats_get > /dev/null 88 + check_err $? "FDB nexthop group stats did not increase" 89 + 90 + busywait "$BUSYWAIT_TIMEOUT" until_counter_is "== 1" tc_stats_get > /dev/null 91 + check_err $? "tc filter stats did not increase" 92 + 93 + log_test "VXLAN FDB nexthop: $af_str basic Tx" 94 + } 95 + 96 + basic_tx_ipv4() 97 + { 98 + basic_tx_common "IPv4" ipv4 192.0.2.1 32 192.0.2.2 99 + } 100 + 101 + basic_tx_ipv6() 102 + { 103 + basic_tx_common "IPv6" ipv6 2001:db8:1::1 128 2001:db8:1::2 104 + } 105 + 106 + learning() 107 + { 108 + RET=0 109 + 110 + # When learning is enabled on the VXLAN device, an incoming packet 111 + # might try to refresh an FDB entry that points to an FDB nexthop group 112 + # instead of an ordinary remote destination. Check that the kernel does 113 + # not crash in this situation. 
114 + 115 + run_cmd "ip -n $ns1 address add 192.0.2.1/32 dev lo" 116 + run_cmd "ip -n $ns1 address add 192.0.2.2/32 dev lo" 117 + 118 + run_cmd "ip -n $ns1 nexthop add id 1 via 192.0.2.3 fdb" 119 + run_cmd "ip -n $ns1 nexthop add id 10 group 1 fdb" 120 + 121 + run_cmd "ip -n $ns1 link add name vx0 up type vxlan id 10010 local 192.0.2.1 dstport 12345 localbypass" 122 + run_cmd "ip -n $ns1 link add name vx1 up type vxlan id 10020 local 192.0.2.2 dstport 54321 learning" 123 + 124 + run_cmd "bridge -n $ns1 fdb add 00:11:22:33:44:55 dev vx0 self static dst 192.0.2.2 port 54321 vni 10020" 125 + run_cmd "bridge -n $ns1 fdb add 00:aa:bb:cc:dd:ee dev vx1 self static nhid 10" 126 + 127 + run_cmd "ip netns exec $ns1 mausezahn vx0 -a 00:aa:bb:cc:dd:ee -b 00:11:22:33:44:55 -c 1 -q" 128 + 129 + log_test "VXLAN FDB nexthop: learning" 130 + } 131 + 132 + proxy_common() 133 + { 134 + local af_str=$1; shift 135 + local local_addr=$1; shift 136 + local plen=$1; shift 137 + local remote_addr=$1; shift 138 + local neigh_addr=$1; shift 139 + local ping_cmd=$1; shift 140 + 141 + RET=0 142 + 143 + # When the "proxy" option is enabled on the VXLAN device, the device 144 + # will suppress ARP requests and IPv6 Neighbor Solicitation messages if 145 + # it is able to reply on behalf of the remote host. That is, if a 146 + # matching and valid neighbor entry is configured on the VXLAN device 147 + # whose MAC address is not behind the "any" remote (0.0.0.0 / ::). The 148 + # FDB entry for the neighbor's MAC address might point to an FDB 149 + # nexthop group instead of an ordinary remote destination. Check that 150 + # the kernel does not crash in this situation. 
151 + 152 + run_cmd "ip -n $ns1 address add $local_addr/$plen dev lo" 153 + 154 + run_cmd "ip -n $ns1 nexthop add id 1 via $remote_addr fdb" 155 + run_cmd "ip -n $ns1 nexthop add id 10 group 1 fdb" 156 + 157 + run_cmd "ip -n $ns1 link add name vx0 up type vxlan id 10010 local $local_addr dstport 4789 proxy" 158 + 159 + run_cmd "ip -n $ns1 neigh add $neigh_addr lladdr 00:11:22:33:44:55 nud perm dev vx0" 160 + 161 + run_cmd "bridge -n $ns1 fdb add 00:11:22:33:44:55 dev vx0 self static nhid 10" 162 + 163 + run_cmd "ip netns exec $ns1 $ping_cmd" 164 + 165 + log_test "VXLAN FDB nexthop: $af_str proxy" 166 + } 167 + 168 + proxy_ipv4() 169 + { 170 + proxy_common "IPv4" 192.0.2.1 32 192.0.2.2 192.0.2.3 \ 171 + "arping -b -c 1 -s 192.0.2.1 -I vx0 192.0.2.3" 172 + } 173 + 174 + proxy_ipv6() 175 + { 176 + proxy_common "IPv6" 2001:db8:1::1 128 2001:db8:1::2 2001:db8:1::3 \ 177 + "ndisc6 -r 1 -s 2001:db8:1::1 -w 1 2001:db8:1::3 vx0" 178 + } 179 + 180 + ################################################################################ 181 + # Usage 182 + 183 + usage() 184 + { 185 + cat <<EOF 186 + usage: ${0##*/} OPTS 187 + 188 + -t <test> Test(s) to run (default: all) 189 + (options: $TESTS) 190 + -p Pause on fail 191 + -v Verbose mode (show commands and output) 192 + EOF 193 + } 194 + 195 + ################################################################################ 196 + # Main 197 + 198 + while getopts ":t:pvh" opt; do 199 + case $opt in 200 + t) TESTS=$OPTARG;; 201 + p) PAUSE_ON_FAIL=yes;; 202 + v) VERBOSE=$((VERBOSE + 1));; 203 + h) usage; exit 0;; 204 + *) usage; exit 1;; 205 + esac 206 + done 207 + 208 + require_command mausezahn 209 + require_command arping 210 + require_command ndisc6 211 + require_command jq 212 + 213 + if ! 
ip nexthop help 2>&1 | grep -q "stats"; then 214 + echo "SKIP: iproute2 ip too old, missing nexthop stats support" 215 + exit "$ksft_skip" 216 + fi 217 + 218 + trap exit_cleanup_all EXIT 219 + 220 + for t in $TESTS 221 + do 222 + setup_ns ns1; $t; cleanup_all_ns; 223 + done
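The new test_vxlan_nh.sh polls the nexthop-group and tc counters through the `busywait`/`until_counter_is` helpers from the selftest library rather than sleeping for a fixed interval. A generic sketch of that polling idiom, assuming nothing from lib.sh (the helper name poll_until is ours, not the real helper):

```shell
#!/bin/sh
# Poll a condition command until it succeeds or a timeout expires,
# in the style of busywait() from the net selftest library. The
# timeout is in milliseconds; we retry roughly every 10ms.
# poll_until is an illustrative name only.
poll_until() {
	timeout_ms=$1; shift
	elapsed=0

	while ! "$@"; do
		if [ "$elapsed" -ge "$timeout_ms" ]; then
			return 1	# condition never held
		fi
		sleep 0.01
		elapsed=$((elapsed + 10))
	done
	return 0	# condition holds
}
```

Returning as soon as the condition holds keeps the passing case fast while still giving asynchronous updates (such as the FDB nexthop group stats checked above) time to land.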
+1 -2
tools/testing/selftests/rseq/rseq-riscv.h
··· 8 8 * exception when executed in all modes. 9 9 */ 10 10 #include <endian.h> 11 + #include <asm/fence.h> 11 12 12 13 #if defined(__BYTE_ORDER) ? (__BYTE_ORDER == __LITTLE_ENDIAN) : defined(__LITTLE_ENDIAN) 13 14 #define RSEQ_SIG 0xf1401073 /* csrr mhartid, x0 */ ··· 25 24 #define REG_L __REG_SEL("ld ", "lw ") 26 25 #define REG_S __REG_SEL("sd ", "sw ") 27 26 28 - #define RISCV_FENCE(p, s) \ 29 - __asm__ __volatile__ ("fence " #p "," #s : : : "memory") 30 27 #define rseq_smp_mb() RISCV_FENCE(rw, rw) 31 28 #define rseq_smp_rmb() RISCV_FENCE(r, r) 32 29 #define rseq_smp_wmb() RISCV_FENCE(w, w)