
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.18-rc7).

No conflicts, adjacent changes:

tools/testing/selftests/net/af_unix/Makefile
e1bb28bf13f4 ("selftest: af_unix: Add test for SO_PEEK_OFF.")
45a1cd8346ca ("selftests: af_unix: Add tests for ECONNRESET and EOF semantics")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
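Here, "adjacent changes" means the two listed commits touched neighbouring lines of the same Makefile without overlapping, so git resolved the cross-merge automatically. A hypothetical sketch of how such a merge comes out (the test program names below are invented for illustration, not taken from the actual commits):

···
 TEST_GEN_PROGS := diag_uid msg_oob scm_pidfd scm_rights
+TEST_GEN_PROGS += so_peek_off      # from e1bb28bf13f4 (name assumed)
+TEST_GEN_PROGS += econnreset_eof   # from 45a1cd8346ca (name assumed)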

+2890 -1058
+3 -1
.mailmap
···
 David Brownell <david-b@pacbell.net>
 David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org>
 David Heidelberg <david@ixit.cz> <d.okias@gmail.com>
+David Hildenbrand <david@kernel.org> <david@redhat.com>
 David Rheinsberg <david@readahead.eu> <dh.herrmann@gmail.com>
 David Rheinsberg <david@readahead.eu> <dh.herrmann@googlemail.com>
 David Rheinsberg <david@readahead.eu> <david.rheinsberg@gmail.com>
···
 Kenneth Westfield <quic_kwestfie@quicinc.com> <kwestfie@codeaurora.org>
 Kiran Gunda <quic_kgunda@quicinc.com> <kgunda@codeaurora.org>
 Kirill Tkhai <tkhai@ya.ru> <ktkhai@virtuozzo.com>
-Kirill A. Shutemov <kas@kernel.org> <kirill.shutemov@linux.intel.com>
+Kiryl Shutsemau <kas@kernel.org> <kirill.shutemov@linux.intel.com>
 Kishon Vijay Abraham I <kishon@kernel.org> <kishon@ti.com>
 Konrad Dybcio <konradybcio@kernel.org> <konrad.dybcio@linaro.org>
 Konrad Dybcio <konradybcio@kernel.org> <konrad.dybcio@somainline.org>
···
 Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com>
 Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski@samsung.com>
 Krzysztof Kozlowski <krzk@kernel.org> <krzysztof.kozlowski@canonical.com>
+Krzysztof Kozlowski <krzk@kernel.org> <krzysztof.kozlowski@linaro.org>
 Krzysztof Wilczyński <kwilczynski@kernel.org> <krzysztof.wilczynski@linux.com>
 Krzysztof Wilczyński <kwilczynski@kernel.org> <kw@linux.com>
 Kshitiz Godara <quic_kgodara@quicinc.com> <kgodara@codeaurora.org>
+4 -5
Documentation/sound/codecs/cs35l56.rst
···
 
 The format of the firmware file names is:
 
-SoundWire (except CS35L56 Rev B0):
+SoundWire:
   cs35lxx-b0-dsp1-misc-SSID[-spkidX]-l?u?
 
-SoundWire CS35L56 Rev B0:
+SoundWire CS35L56 Rev B0 firmware released before kernel version 6.16:
   cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN
 
 Non-SoundWire (HDA and I2S):
···
 * spkidX is an optional part, used for laptops that have firmware
   configurations for different makes and models of internal speakers.
 
-The CS35L56 Rev B0 continues to use the old filename scheme because a
-large number of firmware files have already been published with these
-names.
+Early firmware for CS35L56 Rev B0 used the ALSA prefix (ampN) as the
+filename qualifier. Support for the l?u? qualifier was added in kernel 6.16.
 
 Sound Open Firmware and ALSA topology files
 -------------------------------------------
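To make the two qualifier schemes concrete: a post-6.16 SoundWire file could look like cs35l56-b0-dsp1-misc-103c8c52-spkid0-l0u0.wmfw, while an early Rev B0 file published under the old scheme could look like cs35l56-b0-dsp1-misc-103c8c52-spkid0-amp1.wmfw. The SSID, speaker id, amp index, and .wmfw extension in these names are assumptions for illustration, not filenames from this patch.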
+22 -19
MAINTAINERS
···
 M:	Jens Axboe <axboe@kernel.dk>
 L:	linux-block@vger.kernel.org
 S:	Maintained
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux.git
 F:	Documentation/ABI/stable/sysfs-block
 F:	Documentation/block/
 F:	block/
···
 L:	bridge@lists.linux.dev
 L:	netdev@vger.kernel.org
 S:	Maintained
-W:	http://www.linuxfoundation.org/en/Net:Bridge
 F:	include/linux/if_bridge.h
 F:	include/uapi/linux/if_bridge.h
 F:	include/linux/netfilter_bridge/
···
 HUGETLB SUBSYSTEM
 M:	Muchun Song <muchun.song@linux.dev>
 M:	Oscar Salvador <osalvador@suse.de>
-R:	David Hildenbrand <david@redhat.com>
+R:	David Hildenbrand <david@kernel.org>
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	Documentation/ABI/testing/sysfs-kernel-mm-hugepages
···
 M:	Christian Borntraeger <borntraeger@linux.ibm.com>
 M:	Janosch Frank <frankja@linux.ibm.com>
 M:	Claudio Imbrenda <imbrenda@linux.ibm.com>
-R:	David Hildenbrand <david@redhat.com>
+R:	David Hildenbrand <david@kernel.org>
 L:	kvm@vger.kernel.org
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git
···
 M:	Krzysztof Kozlowski <krzk@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
-B:	mailto:krzysztof.kozlowski@linaro.org
+B:	mailto:krzk@kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux-mem-ctrl.git
 F:	Documentation/devicetree/bindings/memory-controllers/
 F:	drivers/memory/
···
 F:	drivers/devfreq/tegra30-devfreq.c
 
 MEMORY HOT(UN)PLUG
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 M:	Oscar Salvador <osalvador@suse.de>
 L:	linux-mm@kvack.org
 S:	Maintained
···
 
 MEMORY MANAGEMENT - CORE
 M:	Andrew Morton <akpm@linux-foundation.org>
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Vlastimil Babka <vbabka@suse.cz>
···
 
 MEMORY MANAGEMENT - GUP (GET USER PAGES)
 M:	Andrew Morton <akpm@linux-foundation.org>
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 R:	Jason Gunthorpe <jgg@nvidia.com>
 R:	John Hubbard <jhubbard@nvidia.com>
 R:	Peter Xu <peterx@redhat.com>
···
 
 MEMORY MANAGEMENT - KSM (Kernel Samepage Merging)
 M:	Andrew Morton <akpm@linux-foundation.org>
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 R:	Xu Xin <xu.xin16@zte.com.cn>
 R:	Chengming Zhou <chengming.zhou@linux.dev>
 L:	linux-mm@kvack.org
···
 
 MEMORY MANAGEMENT - MEMORY POLICY AND MIGRATION
 M:	Andrew Morton <akpm@linux-foundation.org>
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 R:	Zi Yan <ziy@nvidia.com>
 R:	Matthew Brost <matthew.brost@intel.com>
 R:	Joshua Hahn <joshua.hahnjy@gmail.com>
···
 
 MEMORY MANAGEMENT - MISC
 M:	Andrew Morton <akpm@linux-foundation.org>
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Vlastimil Babka <vbabka@suse.cz>
···
 MEMORY MANAGEMENT - RECLAIM
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Johannes Weiner <hannes@cmpxchg.org>
-R:	David Hildenbrand <david@redhat.com>
+R:	David Hildenbrand <david@kernel.org>
 R:	Michal Hocko <mhocko@kernel.org>
 R:	Qi Zheng <zhengqi.arch@bytedance.com>
 R:	Shakeel Butt <shakeel.butt@linux.dev>
···
 
 MEMORY MANAGEMENT - RMAP (REVERSE MAPPING)
 M:	Andrew Morton <akpm@linux-foundation.org>
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Rik van Riel <riel@surriel.com>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
···
 
 MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE)
 M:	Andrew Morton <akpm@linux-foundation.org>
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Zi Yan <ziy@nvidia.com>
 R:	Baolin Wang <baolin.wang@linux.alibaba.com>
···
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Liam R. Howlett <Liam.Howlett@oracle.com>
 M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 R:	Vlastimil Babka <vbabka@suse.cz>
 R:	Jann Horn <jannh@google.com>
 L:	linux-mm@kvack.org
···
 F:	arch/arm/*omap*/*clock*
 
 OMAP DEVICE TREE SUPPORT
+M:	Aaro Koskinen <aaro.koskinen@iki.fi>
+M:	Andreas Kemnade <andreas@kemnade.info>
+M:	Kevin Hilman <khilman@baylibre.com>
+M:	Roger Quadros <rogerq@kernel.org>
 M:	Tony Lindgren <tony@atomide.com>
 L:	linux-omap@vger.kernel.org
 L:	devicetree@vger.kernel.org
···
 F:	drivers/i2c/busses/i2c-qcom-cci.c
 
 QUALCOMM INTERCONNECT BWMON DRIVER
-M:	Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
+M:	Krzysztof Kozlowski <krzk@kernel.org>
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/interconnect/qcom,msm8998-bwmon.yaml
···
 
 VIRTIO BALLOON
 M:	"Michael S. Tsirkin" <mst@redhat.com>
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 L:	virtualization@lists.linux.dev
 S:	Maintained
 F:	drivers/virtio/virtio_balloon.c
···
 F:	include/uapi/linux/virtio_iommu.h
 
 VIRTIO MEM DRIVER
-M:	David Hildenbrand <david@redhat.com>
+M:	David Hildenbrand <david@kernel.org>
 L:	virtualization@lists.linux.dev
 S:	Maintained
 W:	https://virtio-mem.gitlab.io/
···
 F:	arch/x86/kernel/unwind_*.c
 
 X86 TRUST DOMAIN EXTENSIONS (TDX)
-M:	Kirill A. Shutemov <kas@kernel.org>
+M:	Kiryl Shutsemau <kas@kernel.org>
 R:	Dave Hansen <dave.hansen@linux.intel.com>
 R:	Rick Edgecombe <rick.p.edgecombe@intel.com>
 L:	x86@kernel.org
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 18
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc6
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
+14
arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-fuji-data64.dts
···
 	max-frequency = <25000000>;
 	bus-width = <4>;
 };
+
+/*
+ * FIXME: rgmii delay is introduced by MAC (configured in u-boot now)
+ * instead of PCB on fuji board, so the "phy-mode" should be updated to
+ * "rgmii-[tx|rx]id" when the aspeed-mac driver can handle the delay
+ * properly.
+ */
+&mac3 {
+	status = "okay";
+	phy-mode = "rgmii";
+	phy-handle = <&ethphy3>;
+	pinctrl-names = "default";
+	pinctrl-0 = <&pinctrl_rgmii4_default>;
+};
+2 -2
arch/arm/boot/dts/broadcom/bcm47189-luxul-xap-1440.dts
···
 	mdio {
 		/delete-node/ switch@1e;
 
-		bcm54210e: ethernet-phy@0 {
-			reg = <0>;
+		bcm54210e: ethernet-phy@25 {
+			reg = <25>;
 		};
 	};
 };
+2 -2
arch/arm/boot/dts/nxp/imx/imx51-zii-rdu1.dts
···
 	pinctrl-0 = <&pinctrl_audmux>;
 	status = "okay";
 
-	ssi2 {
+	mux-ssi2 {
 		fsl,audmux-port = <1>;
 		fsl,port-config = <
 			(IMX_AUDMUX_V2_PTCR_SYN |
···
 		>;
 	};
 
-	aud3 {
+	mux-aud3 {
 		fsl,audmux-port = <2>;
 		fsl,port-config = <
 			IMX_AUDMUX_V2_PTCR_SYN
+1 -1
arch/arm/boot/dts/nxp/imx/imx6ull-engicam-microgea-rmm.dts
···
 	interrupt-parent = <&gpio2>;
 	interrupts = <8 IRQ_TYPE_EDGE_FALLING>;
 	reset-gpios = <&gpio2 14 GPIO_ACTIVE_LOW>;
-	report-rate-hz = <6>;
+	report-rate-hz = <60>;
 	/* settings valid only for Hycon touchscreen */
 	touchscreen-size-x = <1280>;
 	touchscreen-size-y = <800>;
+10
arch/arm64/boot/dts/broadcom/bcm2712-rpi-5-b.dts
···
 
 #include "bcm2712-rpi-5-b-ovl-rp1.dts"
 
+/ {
+	aliases {
+		ethernet0 = &rp1_eth;
+	};
+};
+
 &pcie2 {
 	#include "rp1-nexus.dtsi"
 };
 
 &rp1_eth {
+	assigned-clocks = <&rp1_clocks RP1_CLK_ETH_TSU>,
+			  <&rp1_clocks RP1_CLK_ETH>;
+	assigned-clock-rates = <50000000>,
+			       <125000000>;
 	status = "okay";
 	phy-mode = "rgmii-id";
 	phy-handle = <&phy1>;
-2
arch/arm64/boot/dts/freescale/imx8-ss-img.dtsi
···
 	power-domains = <&pd IMX_SC_R_CSI_0>;
 	fsl,channel = <0>;
 	fsl,num-irqs = <32>;
-	status = "disabled";
 };
 
 gpio0_mipi_csi0: gpio@58222000 {
···
 	power-domains = <&pd IMX_SC_R_CSI_1>;
 	fsl,channel = <0>;
 	fsl,num-irqs = <32>;
-	status = "disabled";
 };
 
 gpio0_mipi_csi1: gpio@58242000 {
+19 -5
arch/arm64/boot/dts/freescale/imx8mp-kontron-bl-osm-s.dts
···
 	ethernet1 = &eqos;
 };
 
-extcon_usbc: usbc {
-	compatible = "linux,extcon-usb-gpio";
+connector {
+	compatible = "gpio-usb-b-connector", "usb-b-connector";
+	id-gpios = <&gpio1 10 GPIO_ACTIVE_HIGH>;
+	label = "Type-C";
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_usb1_id>;
-	id-gpios = <&gpio1 10 GPIO_ACTIVE_HIGH>;
+	type = "micro";
+	vbus-supply = <&reg_usb1_vbus>;
+
+	port {
+		usb_dr_connector: endpoint {
+			remote-endpoint = <&usb3_dwc>;
+		};
+	};
 };
 
 leds {
···
 	hnp-disable;
 	srp-disable;
 	dr_mode = "otg";
-	extcon = <&extcon_usbc>;
 	usb-role-switch;
+	role-switch-default-mode = "peripheral";
 	status = "okay";
+
+	port {
+		usb3_dwc: endpoint {
+			remote-endpoint = <&usb_dr_connector>;
+		};
+	};
 };
 
 &usb_dwc3_1 {
···
 };
 
 &usb3_phy0 {
-	vbus-supply = <&reg_usb1_vbus>;
 	status = "okay";
 };
+2 -1
arch/arm64/boot/dts/freescale/imx95.dtsi
···
 	assigned-clock-rates = <3600000000>, <100000000>, <10000000>;
 	assigned-clock-parents = <0>, <0>,
 				 <&scmi_clk IMX95_CLK_SYSPLL1_PFD1_DIV2>;
-	msi-map = <0x0 &its 0x98 0x1>;
+	msi-map = <0x0 &its 0x10 0x1>;
 	power-domains = <&scmi_devpd IMX95_PD_HSIO_TOP>;
 	status = "disabled";
 };
···
 	assigned-clock-rates = <3600000000>, <100000000>, <10000000>;
 	assigned-clock-parents = <0>, <0>,
 				 <&scmi_clk IMX95_CLK_SYSPLL1_PFD1_DIV2>;
+	msi-map = <0x0 &its 0x98 0x1>;
 	power-domains = <&scmi_devpd IMX95_PD_HSIO_TOP>;
 	status = "disabled";
 };
+1
arch/arm64/boot/dts/nvidia/tegra194-p3668.dtsi
···
 			interrupt-parent = <&gpio>;
 			interrupts = <TEGRA194_MAIN_GPIO(G, 4) IRQ_TYPE_LEVEL_LOW>;
 			#phy-cells = <0>;
+			wakeup-source;
 		};
 	};
 };
-1
arch/arm64/boot/dts/rockchip/rk3328.dtsi
···
 	pinctrl-2 = <&otp_pin>;
 	resets = <&cru SRST_TSADC>;
 	reset-names = "tsadc-apb";
-	rockchip,grf = <&grf>;
 	rockchip,hw-tshut-temp = <100000>;
 	#thermal-sensor-cells = <1>;
 	status = "disabled";
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-op1.dtsi
···
  * Copyright (c) 2016-2017 Fuzhou Rockchip Electronics Co., Ltd
  */
 
-#include "rk3399.dtsi"
+#include "rk3399-base.dtsi"
 
 / {
 	cluster0_opp: opp-table-0 {
+5 -5
arch/arm64/boot/dts/rockchip/rk3399-puma-haikou-video-demo.dtso
···
 
 cam_dovdd_1v8: regulator-cam-dovdd-1v8 {
 	compatible = "regulator-fixed";
-	gpio = <&pca9670 3 GPIO_ACTIVE_LOW>;
-	regulator-max-microvolt = <1800000>;
-	regulator-min-microvolt = <1800000>;
-	regulator-name = "cam-dovdd-1v8";
-	vin-supply = <&vcc1v8_video>;
+	gpio = <&pca9670 3 GPIO_ACTIVE_LOW>;
+	regulator-max-microvolt = <1800000>;
+	regulator-min-microvolt = <1800000>;
+	regulator-name = "cam-dovdd-1v8";
+	vin-supply = <&vcc1v8_video>;
 };
 
 cam_dvdd_1v2: regulator-cam-dvdd-1v2 {
+3 -3
arch/arm64/boot/dts/rockchip/rk3566-bigtreetech-cb2.dtsi
···
 	compatible = "regulator-fixed";
 	regulator-name = "vcc3v3_pcie";
 	enable-active-high;
-	gpios = <&gpio0 RK_PB1 GPIO_ACTIVE_HIGH>;
+	gpios = <&gpio4 RK_PB1 GPIO_ACTIVE_HIGH>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pcie_drv>;
 	regulator-always-on;
···
 vcc5v0_usb2b: regulator-vcc5v0-usb2b {
 	compatible = "regulator-fixed";
 	enable-active-high;
-	gpio = <&gpio0 RK_PC4 GPIO_ACTIVE_HIGH>;
+	gpio = <&gpio4 RK_PC4 GPIO_ACTIVE_HIGH>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&vcc5v0_usb2b_en>;
 	regulator-name = "vcc5v0_usb2b";
···
 vcc5v0_usb2t: regulator-vcc5v0-usb2t {
 	compatible = "regulator-fixed";
 	enable-active-high;
-	gpios = <&gpio0 RK_PD5 GPIO_ACTIVE_HIGH>;
+	gpios = <&gpio3 RK_PD5 GPIO_ACTIVE_HIGH>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&vcc5v0_usb2t_en>;
 	regulator-name = "vcc5v0_usb2t";
+1 -1
arch/arm64/boot/dts/rockchip/rk3566-pinetab2.dtsi
···
 	vccio1-supply = <&vccio_acodec>;
 	vccio2-supply = <&vcc_1v8>;
 	vccio3-supply = <&vccio_sd>;
-	vccio4-supply = <&vcc_1v8>;
+	vccio4-supply = <&vcca1v8_pmu>;
 	vccio5-supply = <&vcc_1v8>;
 	vccio6-supply = <&vcc1v8_dvp>;
 	vccio7-supply = <&vcc_3v3>;
+2
arch/arm64/boot/dts/rockchip/rk3568-odroid-m1.dts
···
 };
 
 &i2s1_8ch {
+	pinctrl-names = "default";
+	pinctrl-0 = <&i2s1m0_sclktx &i2s1m0_lrcktx &i2s1m0_sdi0 &i2s1m0_sdo0>;
 	rockchip,trcm-sync-tx-only;
 	status = "okay";
 };
-14
arch/arm64/boot/dts/rockchip/rk3576.dtsi
···
 		opp-microvolt = <900000 900000 950000>;
 		clock-latency-ns = <40000>;
 	};
-
-	opp-2208000000 {
-		opp-hz = /bits/ 64 <2208000000>;
-		opp-microvolt = <950000 950000 950000>;
-		clock-latency-ns = <40000>;
-	};
 };
 
 cluster1_opp_table: opp-table-cluster1 {
···
 	opp-2208000000 {
 		opp-hz = /bits/ 64 <2208000000>;
 		opp-microvolt = <925000 925000 950000>;
-		clock-latency-ns = <40000>;
-	};
-
-	opp-2304000000 {
-		opp-hz = /bits/ 64 <2304000000>;
-		opp-microvolt = <950000 950000 950000>;
 		clock-latency-ns = <40000>;
 	};
 };
···
 	interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&i2c9m0_xfer>;
-	resets = <&cru SRST_I2C9>, <&cru SRST_P_I2C9>;
-	reset-names = "i2c", "apb";
 	#address-cells = <1>;
 	#size-cells = <0>;
 	status = "disabled";
+1 -1
arch/arm64/boot/dts/rockchip/rk3588-opp.dtsi
···
 	};
 };
 
-gpu_opp_table: opp-table {
+gpu_opp_table: opp-table-gpu {
 	compatible = "operating-points-v2";
 
 	opp-300000000 {
+1 -3
arch/arm64/boot/dts/rockchip/rk3588-tiger.dtsi
···
 	cap-mmc-highspeed;
 	mmc-ddr-1_8v;
 	mmc-hs200-1_8v;
-	mmc-hs400-1_8v;
-	mmc-hs400-enhanced-strobe;
 	mmc-pwrseq = <&emmc_pwrseq>;
 	no-sdio;
 	no-sd;
 	non-removable;
 	pinctrl-names = "default";
-	pinctrl-0 = <&emmc_bus8 &emmc_cmd &emmc_clk &emmc_data_strobe>;
+	pinctrl-0 = <&emmc_bus8 &emmc_cmd &emmc_clk>;
 	vmmc-supply = <&vcc_3v3_s3>;
 	vqmmc-supply = <&vcc_1v8_s3>;
 	status = "okay";
+1 -1
arch/arm64/boot/dts/rockchip/rk3588j.dtsi
···
 	};
 };
 
-gpu_opp_table: opp-table {
+gpu_opp_table: opp-table-gpu {
 	compatible = "operating-points-v2";
 
 	opp-300000000 {
+2 -2
arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts
···
 	gpios = <&gpio0 RK_PC5 GPIO_ACTIVE_HIGH>;
 	regulator-name = "vcc3v3_pcie20";
 	regulator-boot-on;
-	regulator-min-microvolt = <1800000>;
-	regulator-max-microvolt = <1800000>;
+	regulator-min-microvolt = <3300000>;
+	regulator-max-microvolt = <3300000>;
 	startup-delay-us = <50000>;
 	vin-supply = <&vcc5v0_sys>;
 };
+1 -1
arch/arm64/configs/defconfig
···
 CONFIG_COMMON_CLK_VC3=y
 CONFIG_COMMON_CLK_VC5=y
 CONFIG_COMMON_CLK_BD718XX=m
-CONFIG_CLK_RASPBERRYPI=m
+CONFIG_CLK_RASPBERRYPI=y
 CONFIG_CLK_IMX8MM=y
 CONFIG_CLK_IMX8MN=y
 CONFIG_CLK_IMX8MP=y
+2 -2
arch/arm64/include/asm/page.h
···
 			unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
 
-void tag_clear_highpage(struct page *to);
-#define __HAVE_ARCH_TAG_CLEAR_HIGHPAGE
+bool tag_clear_highpages(struct page *to, int numpages);
+#define __HAVE_ARCH_TAG_CLEAR_HIGHPAGES
 
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
+1 -1
arch/arm64/kvm/arm.c
···
 	kvm_timer_vcpu_load(vcpu);
 	kvm_vgic_load(vcpu);
 	kvm_vcpu_load_debug(vcpu);
+	kvm_vcpu_load_fgt(vcpu);
 	if (has_vhe())
 		kvm_vcpu_load_vhe(vcpu);
 	kvm_arch_vcpu_load_fp(vcpu);
···
 		vcpu->arch.hcr_el2 |= HCR_TWI;
 
 	vcpu_set_pauth_traps(vcpu);
-	kvm_vcpu_load_fgt(vcpu);
 
 	if (is_protected_kvm_enabled()) {
 		kvm_call_hyp_nvhe(__pkvm_vcpu_load,
+5 -1
arch/arm64/kvm/sys_regs.c
···
 
 	guard(mutex)(&kvm->arch.config_lock);
 
-	if (!irqchip_in_kernel(kvm)) {
+	/*
+	 * This hacks into the ID registers, so only perform it when the
+	 * first vcpu runs, or the kvm_set_vm_id_reg() helper will scream.
+	 */
+	if (!irqchip_in_kernel(kvm) && !kvm_vm_has_ran_once(kvm)) {
 		u64 val;
 
 		val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64PFR0_EL1) & ~ID_AA64PFR0_EL1_GIC;
+11 -10
arch/arm64/mm/fault.c
···
 	return vma_alloc_folio(flags, 0, vma, vaddr);
 }
 
-void tag_clear_highpage(struct page *page)
+bool tag_clear_highpages(struct page *page, int numpages)
 {
 	/*
 	 * Check if MTE is supported and fall back to clear_highpage().
 	 * get_huge_zero_folio() unconditionally passes __GFP_ZEROTAGS and
-	 * post_alloc_hook() will invoke tag_clear_highpage().
+	 * post_alloc_hook() will invoke tag_clear_highpages().
 	 */
-	if (!system_supports_mte()) {
-		clear_highpage(page);
-		return;
-	}
+	if (!system_supports_mte())
+		return false;
 
-	/* Newly allocated page, shouldn't have been tagged yet */
-	WARN_ON_ONCE(!try_page_mte_tagging(page));
-	mte_zero_clear_page_tags(page_address(page));
-	set_page_mte_tagged(page);
+	/* Newly allocated pages, shouldn't have been tagged yet */
+	for (int i = 0; i < numpages; i++, page++) {
+		WARN_ON_ONCE(!try_page_mte_tagging(page));
+		mte_zero_clear_page_tags(page_address(page));
+		set_page_mte_tagged(page);
+	}
+	return true;
 }
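The rework above changes the helper's contract: it now handles a run of pages and returns false instead of clearing the pages itself when MTE is absent, leaving the plain clearing to the caller. A minimal sketch of the caller pattern this implies (the wrapper name is hypothetical; this is not the actual mm/ hunk from this merge):

/* Falls back to plain clearing when tag_clear_highpages() declines. */
static void clear_pages_maybe_tagged(struct page *page, int numpages)
{
	if (tag_clear_highpages(page, numpages))
		return;		/* zeroed and MTE-tagged in one pass */

	for (int i = 0; i < numpages; i++)
		clear_highpage(page + i);
}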
+1
arch/powerpc/Kconfig
···
 	select ARCH_HAS_DMA_OPS			if PPC64
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
+	select ARCH_HAS_GIGANTIC_PAGE		if ARCH_SUPPORTS_HUGETLBFS
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_KERNEL_FPU_SUPPORT	if PPC64 && PPC_FPU
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
-1
arch/powerpc/platforms/Kconfig.cputype
···
 config PPC_RADIX_MMU
 	bool "Radix MMU Support"
 	depends on PPC_BOOK3S_64
-	select ARCH_HAS_GIGANTIC_PAGE
 	default y
 	help
 	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
+5 -7
arch/s390/include/asm/pgtable.h
···
 #define IPTE_NODAT	0x400
 #define IPTE_GUEST_ASCE	0x800
 
-static __always_inline void __ptep_rdp(unsigned long addr, pte_t *ptep,
-				       unsigned long opt, unsigned long asce,
-				       int local)
+static __always_inline void __ptep_rdp(unsigned long addr, pte_t *ptep, int local)
 {
 	unsigned long pto;
 
 	pto = __pa(ptep) & ~(PTRS_PER_PTE * sizeof(pte_t) - 1);
-	asm volatile(".insn rrf,0xb98b0000,%[r1],%[r2],%[asce],%[m4]"
+	asm volatile(".insn rrf,0xb98b0000,%[r1],%[r2],%%r0,%[m4]"
 		     : "+m" (*ptep)
-		     : [r1] "a" (pto), [r2] "a" ((addr & PAGE_MASK) | opt),
-		       [asce] "a" (asce), [m4] "i" (local));
+		     : [r1] "a" (pto), [r2] "a" (addr & PAGE_MASK),
+		       [m4] "i" (local));
 }
 
 static __always_inline void __ptep_ipte(unsigned long address, pte_t *ptep,
···
 	 * A local RDP can be used to do the flush.
 	 */
 	if (cpu_has_rdp() && !(pte_val(*ptep) & _PAGE_PROTECT))
-		__ptep_rdp(address, ptep, 0, 0, 1);
+		__ptep_rdp(address, ptep, 1);
 }
 #define flush_tlb_fix_spurious_fault flush_tlb_fix_spurious_fault
+2 -2
arch/s390/mm/pgtable.c
···
 	preempt_disable();
 	atomic_inc(&mm->context.flush_count);
 	if (cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id())))
-		__ptep_rdp(addr, ptep, 0, 0, 1);
+		__ptep_rdp(addr, ptep, 1);
 	else
-		__ptep_rdp(addr, ptep, 0, 0, 0);
+		__ptep_rdp(addr, ptep, 0);
 	/*
 	 * PTE is not invalidated by RDP, only _PAGE_PROTECT is cleared. That
 	 * means it is still valid and active, and must not be changed according
+5 -5
arch/x86/events/core.c
···
 		return;
 	}
 
-	if (perf_callchain_store(entry, regs->ip))
-		return;
-
-	if (perf_hw_regs(regs))
+	if (perf_hw_regs(regs)) {
+		if (perf_callchain_store(entry, regs->ip))
+			return;
 		unwind_start(&state, current, regs, NULL);
-	else
+	} else {
 		unwind_start(&state, current, NULL, (void *)regs->sp);
+	}
 
 	for (; !unwind_done(&state); unwind_next_frame(&state)) {
 		addr = unwind_get_return_address(&state);
+5
arch/x86/include/asm/ftrace.h
···
 	return &arch_ftrace_regs(fregs)->regs;
 }
 
+#define arch_ftrace_partial_regs(regs) do {	\
+	regs->flags &= ~X86_EFLAGS_FIXED;	\
+	regs->cs = __KERNEL_CS;			\
+} while (0)
+
 #define arch_ftrace_fill_perf_regs(fregs, _regs) do {	\
 	(_regs)->ip = arch_ftrace_regs(fregs)->regs.ip;	\
 	(_regs)->sp = arch_ftrace_regs(fregs)->regs.sp;	\
+1 -1
arch/x86/kernel/acpi/cppc.c
···
 		break;
 	}
 
-	for_each_present_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		u32 tmp;
 		int ret;
+7
arch/x86/kernel/cpu/amd.c
···
 
 static const struct x86_cpu_id zen5_rdseed_microcode[] = {
 	ZEN_MODEL_STEP_UCODE(0x1a, 0x02, 0x1, 0x0b00215a),
+	ZEN_MODEL_STEP_UCODE(0x1a, 0x08, 0x1, 0x0b008121),
 	ZEN_MODEL_STEP_UCODE(0x1a, 0x11, 0x0, 0x0b101054),
+	ZEN_MODEL_STEP_UCODE(0x1a, 0x24, 0x0, 0x0b204037),
+	ZEN_MODEL_STEP_UCODE(0x1a, 0x44, 0x0, 0x0b404035),
+	ZEN_MODEL_STEP_UCODE(0x1a, 0x44, 0x1, 0x0b404108),
+	ZEN_MODEL_STEP_UCODE(0x1a, 0x60, 0x0, 0x0b600037),
+	ZEN_MODEL_STEP_UCODE(0x1a, 0x68, 0x0, 0x0b608038),
+	ZEN_MODEL_STEP_UCODE(0x1a, 0x70, 0x0, 0x0b700037),
 	{},
 };
+1
arch/x86/kernel/cpu/microcode/amd.c
···
 	case 0xb1010: return cur_rev <= 0xb101046; break;
 	case 0xb2040: return cur_rev <= 0xb204031; break;
 	case 0xb4040: return cur_rev <= 0xb404031; break;
+	case 0xb4041: return cur_rev <= 0xb404101; break;
 	case 0xb6000: return cur_rev <= 0xb600031; break;
 	case 0xb6080: return cur_rev <= 0xb608031; break;
 	case 0xb7000: return cur_rev <= 0xb700031; break;
+7 -1
arch/x86/kernel/ftrace_64.S
···
 	UNWIND_HINT_UNDEFINED
 	ANNOTATE_NOENDBR
 
+	/* Restore return_to_handler value that got eaten by previous ret instruction. */
+	subq $8, %rsp
+	UNWIND_HINT_FUNC
+
 	/* Save ftrace_regs for function exit context */
 	subq $(FRAME_SIZE), %rsp
 
 	movq %rax, RAX(%rsp)
 	movq %rdx, RDX(%rsp)
 	movq %rbp, RBP(%rsp)
+	movq %rsp, RSP(%rsp)
 	movq %rsp, %rdi
 
 	call ftrace_return_to_handler
···
 	movq RDX(%rsp), %rdx
 	movq RAX(%rsp), %rax
 
-	addq $(FRAME_SIZE), %rsp
+	addq $(FRAME_SIZE) + 8, %rsp
+
 	/*
 	 * Jump back to the old return address. This cannot be JMP_NOSPEC rdi
 	 * since IBT would demand that contain ENDBR, which simply isn't so for
+8 -1
arch/x86/kvm/svm/svm.c
···
 
 static void svm_recalc_lbr_msr_intercepts(struct kvm_vcpu *vcpu)
 {
-	bool intercept = !(to_svm(vcpu)->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK);
+	struct vcpu_svm *svm = to_svm(vcpu);
+	bool intercept = !(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK);
+
+	if (intercept == svm->lbr_msrs_intercepted)
+		return;
 
 	svm_set_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHFROMIP, MSR_TYPE_RW, intercept);
 	svm_set_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHTOIP, MSR_TYPE_RW, intercept);
···
 
 	if (sev_es_guest(vcpu->kvm))
 		svm_set_intercept_for_msr(vcpu, MSR_IA32_DEBUGCTLMSR, MSR_TYPE_RW, intercept);
+
+	svm->lbr_msrs_intercepted = intercept;
 }
 
 void svm_vcpu_free_msrpm(void *msrpm)
···
 	}
 
 	svm->x2avic_msrs_intercepted = true;
+	svm->lbr_msrs_intercepted = true;
 
 	svm->vmcb01.ptr = page_address(vmcb01_page);
 	svm->vmcb01.pa = __sme_set(page_to_pfn(vmcb01_page) << PAGE_SHIFT);
+1
arch/x86/kvm/svm/svm.h
···
 	bool guest_state_loaded;
 
 	bool x2avic_msrs_intercepted;
+	bool lbr_msrs_intercepted;
 
 	/* Guest GIF value, used when vGIF is not enabled */
 	bool guest_gif;
+1 -1
block/bdev.c
···
 
 EXPORT_SYMBOL(sb_set_blocksize);
 
-int sb_min_blocksize(struct super_block *sb, int size)
+int __must_check sb_min_blocksize(struct super_block *sb, int size)
 {
 	int minsize = bdev_logical_block_size(sb->s_bdev);
 	if (size < minsize)
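sb_min_blocksize() returns the blocksize it actually set, or 0 on failure, so the added __must_check forces filesystems to handle the failure path. A hedged sketch of the expected caller pattern (the error message and return code are illustrative, not taken from this merge):

	int blocksize = sb_min_blocksize(sb, 512);

	if (!blocksize) {
		pr_err("unable to set blocksize\n");
		return -EINVAL;
	}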
+37 -14
drivers/acpi/acpi_mrrm.c
···
 
 static __init int add_boot_memory_ranges(void)
 {
-	struct kobject *pkobj, *kobj;
+	struct kobject *pkobj, *kobj, **kobjs;
 	int ret = -EINVAL;
-	char *name;
+	char name[16];
+	int i;
 
 	pkobj = kobject_create_and_add("memory_ranges", acpi_kobj);
+	if (!pkobj)
+		return -ENOMEM;
 
-	for (int i = 0; i < mrrm_mem_entry_num; i++) {
-		name = kasprintf(GFP_KERNEL, "range%d", i);
-		if (!name) {
-			ret = -ENOMEM;
-			break;
-		}
-
-		kobj = kobject_create_and_add(name, pkobj);
-
-		ret = sysfs_create_groups(kobj, memory_range_groups);
-		if (ret)
-			return ret;
+	kobjs = kcalloc(mrrm_mem_entry_num, sizeof(*kobjs), GFP_KERNEL);
+	if (!kobjs) {
+		kobject_put(pkobj);
+		return -ENOMEM;
 	}
 
+	for (i = 0; i < mrrm_mem_entry_num; i++) {
+		scnprintf(name, sizeof(name), "range%d", i);
+		kobj = kobject_create_and_add(name, pkobj);
+		if (!kobj) {
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+
+		ret = sysfs_create_groups(kobj, memory_range_groups);
+		if (ret) {
+			kobject_put(kobj);
+			goto cleanup;
+		}
+		kobjs[i] = kobj;
+	}
+
+	kfree(kobjs);
+	return 0;
+
+cleanup:
+	for (int j = 0; j < i; j++) {
+		if (kobjs[j]) {
+			sysfs_remove_groups(kobjs[j], memory_range_groups);
+			kobject_put(kobjs[j]);
+		}
+	}
+	kfree(kobjs);
+	kobject_put(pkobj);
 	return ret;
 }
+3 -3
drivers/acpi/cppc_acpi.c
···
 	if (acpi_disabled)
 		return false;
 
-	for_each_present_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
 		if (!cpc_ptr)
 			return false;
···
 	struct cpc_desc *cpc_ptr;
 	int cpu;
 
-	for_each_present_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
 		desired_reg = &cpc_ptr->cpc_regs[DESIRED_PERF];
 		if (!CPC_IN_SYSTEM_MEMORY(desired_reg) &&
···
 {
 	int cpu;
 
-	for_each_present_cpu(cpu) {
+	for_each_online_cpu(cpu) {
 		struct cpc_register_resource *ref_perf_reg;
 		struct cpc_desc *cpc_desc;
+25 -21
drivers/acpi/numa/hmat.c
···
 	}
 }
 
-static void hmat_register_target(struct memory_target *target)
+static void hmat_hotplug_target(struct memory_target *target)
 {
 	int nid = pxm_to_node(target->memory_pxm);
 
+	/*
+	 * Skip offline nodes. This can happen when memory marked EFI_MEMORY_SP,
+	 * "specific purpose", is applied to all the memory in a proximity
+	 * domain leading to the node being marked offline / unplugged, or if
+	 * memory-only "hotplug" node is offline.
+	 */
+	if (nid == NUMA_NO_NODE || !node_online(nid))
+		return;
+
+	guard(mutex)(&target_lock);
+	if (target->registered)
+		return;
+
+	hmat_register_target_initiators(target);
+	hmat_register_target_cache(target);
+	hmat_register_target_perf(target, ACCESS_COORDINATE_LOCAL);
+	hmat_register_target_perf(target, ACCESS_COORDINATE_CPU);
+	target->registered = true;
+}
+
+static void hmat_register_target(struct memory_target *target)
+{
 	/*
 	 * Devices may belong to either an offline or online
 	 * node, so unconditionally add them.
···
 	}
 	mutex_unlock(&target_lock);
 
-	/*
-	 * Skip offline nodes. This can happen when memory
-	 * marked EFI_MEMORY_SP, "specific purpose", is applied
-	 * to all the memory in a proximity domain leading to
-	 * the node being marked offline / unplugged, or if
-	 * memory-only "hotplug" node is offline.
-	 */
-	if (nid == NUMA_NO_NODE || !node_online(nid))
-		return;
-
-	mutex_lock(&target_lock);
-	if (!target->registered) {
-		hmat_register_target_initiators(target);
-		hmat_register_target_cache(target);
-		hmat_register_target_perf(target, ACCESS_COORDINATE_LOCAL);
-		hmat_register_target_perf(target, ACCESS_COORDINATE_CPU);
-		target->registered = true;
-	}
-	mutex_unlock(&target_lock);
+	hmat_hotplug_target(target);
 }
 
 static void hmat_register_targets(void)
···
 	if (!target)
 		return NOTIFY_OK;
 
-	hmat_register_target(target);
+	hmat_hotplug_target(target);
 	return NOTIFY_OK;
 }
+1 -1
drivers/acpi/numa/srat.c
···
 	struct acpi_srat_generic_affinity *p =
 		(struct acpi_srat_generic_affinity *)header;
 
-	if (p->device_handle_type == 0) {
+	if (p->device_handle_type == 1) {
 		/*
 		 * For pci devices this may be the only place they
 		 * are assigned a proximity domain
+4 -5
drivers/cpufreq/intel_pstate.c
···
 {
 	u64 misc_en;
 
-	if (!cpu_feature_enabled(X86_FEATURE_IDA))
-		return true;
-
 	rdmsrq(MSR_IA32_MISC_ENABLE, misc_en);
 
 	return !!(misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE);
···
 	u32 vid;
 
 	val = (u64)pstate << 8;
-	if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled))
+	if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled) &&
+	    cpu_feature_enabled(X86_FEATURE_IDA))
 		val |= (u64)1 << 32;
 
 	vid_fp = cpudata->vid.min + mul_fp(
···
 	u64 val;
 
 	val = (u64)pstate << 8;
-	if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled))
+	if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled) &&
+	    cpu_feature_enabled(X86_FEATURE_IDA))
 		val |= (u64)1 << 32;
 
 	return val;
+2
drivers/crypto/hisilicon/qm.c
···
 	pdev = container_of(dev, struct pci_dev, dev);
 	if (pci_physfn(pdev) != qm->pdev) {
 		pci_err(qm->pdev, "the pdev input does not match the pf!\n");
+		put_device(dev);
 		return -EINVAL;
 	}
 
 	*fun_index = pdev->devfn;
+	put_device(dev);
 
 	return 0;
 }
+2
drivers/cxl/core/region.c
···
 	if (validate_region_offset(cxlr, offset))
 		return -EINVAL;
 
+	offset -= cxlr->params.cache_size;
 	rc = region_offset_to_dpa_result(cxlr, offset, &result);
 	if (rc || !result.cxlmd || result.dpa == ULLONG_MAX) {
 		dev_dbg(&cxlr->dev,
···
 	if (validate_region_offset(cxlr, offset))
 		return -EINVAL;
 
+	offset -= cxlr->params.cache_size;
 	rc = region_offset_to_dpa_result(cxlr, offset, &result);
 	if (rc || !result.cxlmd || result.dpa == ULLONG_MAX) {
 		dev_dbg(&cxlr->dev,
+17 -5
drivers/edac/altera_edac.c
···
 	if (ret)
 		return ret;
 
-	/* Verify OCRAM has been initialized */
+	/*
+	 * Verify that OCRAM has been initialized.
+	 * During a warm reset, OCRAM contents are retained, but the control
+	 * and status registers are reset to their default values. Therefore,
+	 * ECC must be explicitly re-enabled in the control register.
+	 * Error condition: if INITCOMPLETEA is clear and ECC_EN is already set.
+	 */
 	if (!ecc_test_bits(ALTR_A10_ECC_INITCOMPLETEA,
-			   (base + ALTR_A10_ECC_INITSTAT_OFST)))
-		return -ENODEV;
+			   (base + ALTR_A10_ECC_INITSTAT_OFST))) {
+		if (!ecc_test_bits(ALTR_A10_ECC_EN,
+				   (base + ALTR_A10_ECC_CTRL_OFST)))
+			ecc_set_bits(ALTR_A10_ECC_EN,
+				     (base + ALTR_A10_ECC_CTRL_OFST));
+		else
+			return -ENODEV;
+	}
 
 	/* Enable IRQ on Single Bit Error */
 	writel(ALTR_A10_ECC_SERRINTEN, (base + ALTR_A10_ECC_ERRINTENS_OFST));
···
 	.ue_set_mask = ALTR_A10_ECC_TDERRA,
 	.set_err_ofst = ALTR_A10_ECC_INTTEST_OFST,
 	.ecc_irq_handler = altr_edac_a10_ecc_irq,
-	.inject_fops = &altr_edac_a10_device_inject2_fops,
+	.inject_fops = &altr_edac_a10_device_inject_fops,
 };
 
 #endif	/* CONFIG_EDAC_ALTERA_ETHERNET */
···
 	.ue_set_mask = ALTR_A10_ECC_TDERRA,
 	.set_err_ofst = ALTR_A10_ECC_INTTEST_OFST,
 	.ecc_irq_handler = altr_edac_a10_ecc_irq,
-	.inject_fops = &altr_edac_a10_device_inject_fops,
+	.inject_fops = &altr_edac_a10_device_inject_fops,
 };
 
 #endif	/* CONFIG_EDAC_ALTERA_USB */
+13 -11
drivers/edac/versalnet_edac.c
···
 	length = result[MSG_ERR_LENGTH];
 	offset = result[MSG_ERR_OFFSET];
 
+	/*
+	 * The data can come in two stretches. Construct the regs from two
+	 * messages. The offset indicates the offset from which the data is to
+	 * be taken.
+	 */
+	for (i = 0 ; i < length; i++) {
+		k = offset + i;
+		j = ERROR_DATA + i;
+		mc_priv->regs[k] = result[j];
+	}
+
 	if (result[TOTAL_ERR_LENGTH] > length) {
 		if (!mc_priv->part_len)
 			mc_priv->part_len = length;
 		else
 			mc_priv->part_len += length;
-		/*
-		 * The data can come in 2 stretches. Construct the regs from 2
-		 * messages the offset indicates the offset from which the data is to
-		 * be taken
-		 */
-		for (i = 0 ; i < length; i++) {
-			k = offset + i;
-			j = ERROR_DATA + i;
-			mc_priv->regs[k] = result[j];
-		}
+
 		if (mc_priv->part_len < result[TOTAL_ERR_LENGTH])
 			return 0;
 		mc_priv->part_len = 0;
···
 	/* Convert to bytes */
 	length = result[TOTAL_ERR_LENGTH] * 4;
 	log_non_standard_event(sec_type, &amd_versalnet_guid, mc_priv->message,
-			       sec_sev, (void *)&result[ERROR_DATA], length);
+			       sec_sev, (void *)&mc_priv->regs, length);
 
 	return 0;
 }
+2
drivers/firewire/core-card.c
···
 	INIT_LIST_HEAD(&card->transactions.list);
 	spin_lock_init(&card->transactions.lock);
 
+	spin_lock_init(&card->topology_map.lock);
+
 	card->split_timeout.hi = DEFAULT_SPLIT_TIMEOUT / 8000;
 	card->split_timeout.lo = (DEFAULT_SPLIT_TIMEOUT % 8000) << 19;
 	card->split_timeout.cycles = DEFAULT_SPLIT_TIMEOUT;
+2 -1
drivers/firewire/core-topology.c
···
 			const u32 *self_ids, int self_id_count)
 {
 	__be32 *map = buffer;
+	u32 next_generation = be32_to_cpu(buffer[1]) + 1;
 	int node_count = (root_node_id & 0x3f) + 1;
 
 	memset(map, 0, buffer_size);
 
 	*map++ = cpu_to_be32((self_id_count + 2) << 16);
-	*map++ = cpu_to_be32(be32_to_cpu(buffer[1]) + 1);
+	*map++ = cpu_to_be32(next_generation);
 	*map++ = cpu_to_be32((node_count << 16) | self_id_count);
 
 	while (self_id_count--)
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
···
 		r = amdgpu_xcp_select_scheds(adev, hw_ip, hw_prio, fpriv,
 					     &num_scheds, &scheds);
 		if (r)
-			goto cleanup_entity;
+			goto error_free_entity;
 	}
 
 	/* disable load balance if the hw engine retains context among dependent jobs */
+12
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
···
 	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 
+	/*
+	 * Disable peer-to-peer access for DCC-enabled VRAM surfaces on GFX12+.
+	 * Such buffers cannot be safely accessed over P2P due to device-local
+	 * compression metadata. Fallback to system-memory path instead:
+	 * - Device supports GFX12 (GC 12.x or newer)
+	 * - BO was created with the AMDGPU_GEM_CREATE_GFX12_DCC flag
+	 */
+	if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(12, 0, 0) &&
+	    bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC)
+		attach->peer2peer = false;
+
 	if (!amdgpu_dmabuf_is_xgmi_accessible(attach_adev, bo) &&
 	    pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0)
 		attach->peer2peer = false;
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_isp.c
···
 	if (ret)
 		return ret;
 
+	/* Ensure *bo is NULL so a new BO will be created */
+	*bo = NULL;
 	ret = amdgpu_bo_create_kernel(adev,
 				      size,
 				      ISP_MC_ADDR_ALIGN,
+3 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
···
 {
 	struct amdgpu_userq_fence *userq_fence, *tmp;
 	struct dma_fence *fence;
+	unsigned long flags;
 	u64 rptr;
 	int i;
 
 	if (!fence_drv)
 		return;
 
+	spin_lock_irqsave(&fence_drv->fence_list_lock, flags);
 	rptr = amdgpu_userq_fence_read(fence_drv);
 
-	spin_lock(&fence_drv->fence_list_lock);
 	list_for_each_entry_safe(userq_fence, tmp, &fence_drv->fences, link) {
 		fence = &userq_fence->base;
 
···
 		list_del(&userq_fence->link);
 		dma_fence_put(fence);
 	}
-	spin_unlock(&fence_drv->fence_list_lock);
+	spin_unlock_irqrestore(&fence_drv->fence_list_lock, flags);
 }
 
 void amdgpu_userq_fence_driver_destroy(struct kref *ref)
+1
drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
···
 	.get_rptr = jpeg_v5_0_1_dec_ring_get_rptr,
 	.get_wptr = jpeg_v5_0_1_dec_ring_get_wptr,
 	.set_wptr = jpeg_v5_0_1_dec_ring_set_wptr,
+	.parse_cs = amdgpu_jpeg_dec_parse_cs,
 	.emit_frame_size =
 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+6 -6
drivers/gpu/drm/amd/amdkfd/kfd_queue.c
···
 		goto out_err_unreserve;
 	}
 
-	if (properties->ctx_save_restore_area_size != topo_dev->node_props.cwsr_size) {
-		pr_debug("queue cwsr size 0x%x not equal to node cwsr size 0x%x\n",
+	if (properties->ctx_save_restore_area_size < topo_dev->node_props.cwsr_size) {
+		pr_debug("queue cwsr size 0x%x not sufficient for node cwsr size 0x%x\n",
 			 properties->ctx_save_restore_area_size,
 			 topo_dev->node_props.cwsr_size);
 		err = -EINVAL;
 		goto out_err_unreserve;
 	}
 
-	total_cwsr_size = (topo_dev->node_props.cwsr_size + topo_dev->node_props.debug_memory_size)
-		* NUM_XCC(pdd->dev->xcc_mask);
+	total_cwsr_size = (properties->ctx_save_restore_area_size +
+			   topo_dev->node_props.debug_memory_size) * NUM_XCC(pdd->dev->xcc_mask);
 	total_cwsr_size = ALIGN(total_cwsr_size, PAGE_SIZE);
 
 	err = kfd_queue_buffer_get(vm, (void *)properties->ctx_save_restore_area_address,
···
 	topo_dev = kfd_topology_device_by_id(pdd->dev->id);
 	if (!topo_dev)
 		return -EINVAL;
-	total_cwsr_size = (topo_dev->node_props.cwsr_size + topo_dev->node_props.debug_memory_size)
-		* NUM_XCC(pdd->dev->xcc_mask);
+	total_cwsr_size = (properties->ctx_save_restore_area_size +
+			   topo_dev->node_props.debug_memory_size) * NUM_XCC(pdd->dev->xcc_mask);
 	total_cwsr_size = ALIGN(total_cwsr_size, PAGE_SIZE);
 
 	kfd_queue_buffer_svm_put(pdd, properties->ctx_save_restore_area_address, total_cwsr_size);
+2
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
···
 		svm_range_apply_attrs(p, prange, nattr, attrs, &update_mapping);
 		/* TODO: unmap ranges from GPU that lost access */
 	}
+	update_mapping |= !p->xnack_enabled && !list_empty(&remap_list);
+
 	list_for_each_entry_safe(prange, next, &remove_list, update_list) {
 		pr_debug("unlink old 0x%p prange 0x%p [0x%lx 0x%lx]\n",
 			 prange->svms, prange, prange->start,
+11
drivers/gpu/drm/amd/display/modules/freesync/freesync.c
···
 		update_v_total_for_static_ramp(
 			core_freesync, stream, in_out_vrr);
 	}
+
+	/*
+	 * If VRR is inactive, set vtotal min and max to nominal vtotal
+	 */
+	if (in_out_vrr->state == VRR_STATE_INACTIVE) {
+		in_out_vrr->adjust.v_total_min =
+			mod_freesync_calc_v_total_from_refresh(stream,
+				in_out_vrr->max_refresh_in_uhz);
+		in_out_vrr->adjust.v_total_max = in_out_vrr->adjust.v_total_min;
+		return;
+	}
 }
 
 unsigned long long mod_freesync_calc_nominal_field_rate(
+2 -2
drivers/gpu/drm/clients/drm_client_setup.c
···
 static char drm_client_default[16] = CONFIG_DRM_CLIENT_DEFAULT;
 module_param_string(active, drm_client_default, sizeof(drm_client_default), 0444);
 MODULE_PARM_DESC(active,
-		 "Choose which drm client to start, default is"
-		 CONFIG_DRM_CLIENT_DEFAULT "]");
+		 "Choose which drm client to start, default is "
+		 CONFIG_DRM_CLIENT_DEFAULT);
 
 /**
  * drm_client_setup() - Setup in-kernel DRM clients
+6 -1
drivers/gpu/drm/i915/display/intel_psr.c
···
 	struct intel_display *display = to_intel_display(intel_dp);
 	int ret;
 
+	/* TODO: Enable Panel Replay on MST once it's properly implemented. */
+	if (intel_dp->mst_detect == DRM_DP_MST)
+		return;
+
 	ret = drm_dp_dpcd_read_data(&intel_dp->aux, DP_PANEL_REPLAY_CAP_SUPPORT,
 				    &intel_dp->pr_dpcd, sizeof(intel_dp->pr_dpcd));
 	if (ret < 0)
···
 {
 	struct intel_display *display = to_intel_display(intel_dp);
 	u32 current_dc_state = intel_display_power_get_current_dc_state(display);
-	struct drm_vblank_crtc *vblank = &display->drm->vblank[intel_dp->psr.pipe];
+	struct intel_crtc *crtc = intel_crtc_for_pipe(display, intel_dp->psr.pipe);
+	struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(&crtc->base);
 
 	return (current_dc_state != DC_STATE_EN_UPTO_DC5 &&
 		current_dc_state != DC_STATE_EN_UPTO_DC6) ||
+18
drivers/gpu/drm/panthor/panthor_gem.c
···
 
 	panthor_gem_debugfs_set_usage_flags(bo, 0);
 
+	/* If this is a write-combine mapping, we query the sgt to force a CPU
+	 * cache flush (dma_map_sgtable() is called when the sgt is created).
+	 * This ensures the zero-ing is visible to any uncached mapping created
+	 * by vmap/mmap.
+	 * FIXME: Ideally this should be done when pages are allocated, not at
+	 * BO creation time.
+	 */
+	if (shmem->map_wc) {
+		struct sg_table *sgt;
+
+		sgt = drm_gem_shmem_get_pages_sgt(shmem);
+		if (IS_ERR(sgt)) {
+			ret = PTR_ERR(sgt);
+			goto out_put_gem;
+		}
+	}
+
 	/*
 	 * Allocate an id of idr table where the obj is registered
 	 * and handle has the id what user can see.
···
 	if (!ret)
 		*size = bo->base.base.size;
 
+out_put_gem:
 	/* drop reference from allocate - handle holds it now. */
 	drm_gem_object_put(&shmem->base);
+15 -1
drivers/gpu/drm/vmwgfx/vmwgfx_cursor_plane.c
···
 	if (vmw->has_mob) {
 		if ((vmw->capabilities2 & SVGA_CAP2_CURSOR_MOB) != 0)
 			return VMW_CURSOR_UPDATE_MOB;
+		else
+			return VMW_CURSOR_UPDATE_GB_ONLY;
 	}
-
+	drm_warn_once(&vmw->drm, "Unknown Cursor Type!\n");
 	return VMW_CURSOR_UPDATE_NONE;
 }
···
 {
 	switch (update_type) {
 	case VMW_CURSOR_UPDATE_LEGACY:
+	case VMW_CURSOR_UPDATE_GB_ONLY:
 	case VMW_CURSOR_UPDATE_NONE:
 		return 0;
 	case VMW_CURSOR_UPDATE_MOB:
···
 		if (!surface || vps->cursor.legacy.id == surface->snooper.id)
 			vps->cursor.update_type = VMW_CURSOR_UPDATE_NONE;
 		break;
+	case VMW_CURSOR_UPDATE_GB_ONLY:
 	case VMW_CURSOR_UPDATE_MOB: {
 		bo = vmw_user_object_buffer(&vps->uo);
 		if (bo) {
···
 vmw_cursor_plane_atomic_update(struct drm_plane *plane,
 			       struct drm_atomic_state *state)
 {
+	struct vmw_bo *bo;
 	struct drm_plane_state *new_state =
 		drm_atomic_get_new_plane_state(state, plane);
 	struct drm_plane_state *old_state =
···
 		break;
 	case VMW_CURSOR_UPDATE_MOB:
 		vmw_cursor_update_mob(dev_priv, vps);
+		break;
+	case VMW_CURSOR_UPDATE_GB_ONLY:
+		bo = vmw_user_object_buffer(&vps->uo);
+		if (bo)
+			vmw_send_define_cursor_cmd(dev_priv, bo->map.virtual,
+						   vps->base.crtc_w,
+						   vps->base.crtc_h,
+						   vps->base.hotspot_x,
+						   vps->base.hotspot_y);
 		break;
 	case VMW_CURSOR_UPDATE_NONE:
 		/* do nothing */
+1
drivers/gpu/drm/vmwgfx/vmwgfx_cursor_plane.h
···
 enum vmw_cursor_update_type {
 	VMW_CURSOR_UPDATE_NONE = 0,
 	VMW_CURSOR_UPDATE_LEGACY,
+	VMW_CURSOR_UPDATE_GB_ONLY,
 	VMW_CURSOR_UPDATE_MOB,
 };
+5
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
···
 
 	cmd_id = header->id;
+	if (header->size > SVGA_CMD_MAX_DATASIZE) {
+		VMW_DEBUG_USER("SVGA3D command: %d is too big.\n",
+			       cmd_id + SVGA_3D_CMD_BASE);
+		return -E2BIG;
+	}
 	*size = header->size + sizeof(SVGA3dCmdHeader);
 
 	cmd_id -= SVGA_3D_CMD_BASE;
+5 -7
drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
···
 
 /**
  * struct vmw_bo_dirty - Dirty information for buffer objects
+ * @ref_count: Reference count for this structure. Must be first member!
  * @start: First currently dirty bit
  * @end: Last currently dirty bit + 1
  * @method: The currently used dirty method
  * @change_count: Number of consecutive method change triggers
- * @ref_count: Reference count for this structure
  * @bitmap_size: The size of the bitmap in bits. Typically equal to the
  * nuber of pages in the bo.
  * @bitmap: A bitmap where each bit represents a page. A set bit means a
  * dirty page.
  */
 struct vmw_bo_dirty {
+	struct kref ref_count;
 	unsigned long start;
 	unsigned long end;
 	enum vmw_bo_dirty_method method;
 	unsigned int change_count;
-	unsigned int ref_count;
 	unsigned long bitmap_size;
 	unsigned long bitmap[];
 };
···
 	int ret;
 
 	if (dirty) {
-		dirty->ref_count++;
+		kref_get(&dirty->ref_count);
 		return 0;
 	}
···
 	dirty->bitmap_size = num_pages;
 	dirty->start = dirty->bitmap_size;
 	dirty->end = 0;
-	dirty->ref_count = 1;
+	kref_init(&dirty->ref_count);
 	if (num_pages < PAGE_SIZE / sizeof(pte_t)) {
 		dirty->method = VMW_BO_DIRTY_PAGETABLE;
 	} else {
···
 {
 	struct vmw_bo_dirty *dirty = vbo->dirty;
 
-	if (dirty && --dirty->ref_count == 0) {
-		kvfree(dirty);
+	if (dirty && kref_put(&dirty->ref_count, (void *)kvfree))
 		vbo->dirty = NULL;
-	}
 }
+1
drivers/gpu/drm/xe/regs/xe_gt_regs.h
···
 
 #define XEHP_SLICE_COMMON_ECO_CHICKEN1		XE_REG_MCR(0x731c, XE_REG_OPTION_MASKED)
 #define   MSC_MSAA_REODER_BUF_BYPASS_DISABLE	REG_BIT(14)
+#define   FAST_CLEAR_VALIGN_FIX			REG_BIT(13)
 
 #define XE2LPM_CCCHKNREG1			XE_REG(0x82a8)
+11
drivers/gpu/drm/xe/xe_wa.c
···
 	},
 	{ XE_RTP_NAME("14023061436"),
 	  XE_RTP_RULES(GRAPHICS_VERSION_RANGE(3000, 3001),
+		       FUNC(xe_rtp_match_first_render_or_compute), OR,
+		       GRAPHICS_VERSION_RANGE(3003, 3005),
 		       FUNC(xe_rtp_match_first_render_or_compute)),
 	  XE_RTP_ACTIONS(SET(TDL_CHICKEN, QID_WAIT_FOR_THREAD_NOT_RUN_DISABLE))
 	},
···
 	{ XE_RTP_NAME("22021007897"),
 	  XE_RTP_RULES(GRAPHICS_VERSION_RANGE(3000, 3003), ENGINE_CLASS(RENDER)),
 	  XE_RTP_ACTIONS(SET(COMMON_SLICE_CHICKEN4, SBE_PUSH_CONSTANT_BEHIND_FIX_ENABLE))
 	},
+	{ XE_RTP_NAME("14024681466"),
+	  XE_RTP_RULES(GRAPHICS_VERSION_RANGE(3000, 3005), ENGINE_CLASS(RENDER)),
+	  XE_RTP_ACTIONS(SET(XEHP_SLICE_COMMON_ECO_CHICKEN1, FAST_CLEAR_VALIGN_FIX))
+	},
+	{ XE_RTP_NAME("15016589081"),
+	  XE_RTP_RULES(GRAPHICS_VERSION(3000), GRAPHICS_STEP(A0, B0),
+		       ENGINE_CLASS(RENDER)),
+	  XE_RTP_ACTIONS(SET(CHICKEN_RASTER_1, DIS_CLIP_NEGATIVE_BOUNDING_BOX))
+	},
 };
+2
drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
···
 	if (rc)
 		goto cleanup;
 
+	mp2_ops->stop(privdata, cl_data->sensor_idx[i]);
+	amd_sfh_wait_for_response(privdata, cl_data->sensor_idx[i], DISABLE_SENSOR);
 	writel(0, privdata->mmio + amd_get_p2c_val(privdata, 0));
 	mp2_ops->start(privdata, info);
 	status = amd_sfh_wait_for_response
+1
drivers/hid/hid-apple.c
···
 
 static const struct apple_non_apple_keyboard non_apple_keyboards[] = {
 	{ "SONiX USB DEVICE" },
+	{ "SONiX AK870 PRO" },
 	{ "Keychron" },
 	{ "AONE" },
 	{ "GANSS" },
+2 -3
drivers/hid/hid-corsair-void.c
···
 
 	if (IS_ERR(new_supply)) {
 		hid_err(drvdata->hid_dev,
-			"failed to register battery '%s' (reason: %ld)\n",
-			drvdata->battery_desc.name,
-			PTR_ERR(new_supply));
+			"failed to register battery '%s' (reason: %pe)\n",
+			drvdata->battery_desc.name, new_supply);
 		return;
 	}
+4 -2
drivers/hid/hid-elecom.c
···
 	 */
 	mouse_button_fixup(hdev, rdesc, *rsize, 20, 28, 22, 14, 8);
 	break;
-case USB_DEVICE_ID_ELECOM_M_XT3URBK:
+case USB_DEVICE_ID_ELECOM_M_XT3URBK_00FB:
+case USB_DEVICE_ID_ELECOM_M_XT3URBK_018F:
 case USB_DEVICE_ID_ELECOM_M_XT3DRBK:
 case USB_DEVICE_ID_ELECOM_M_XT4DRBK:
 	/*
···
 static const struct hid_device_id elecom_devices[] = {
 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_BM084) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XGL20DLBK) },
-	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK) },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_00FB) },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_018F) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3DRBK) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT4DRBK) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) },
+5 -3
drivers/hid/hid-ids.h
···
 #define USB_VENDOR_ID_ELECOM		0x056e
 #define USB_DEVICE_ID_ELECOM_BM084	0x0061
 #define USB_DEVICE_ID_ELECOM_M_XGL20DLBK	0x00e6
-#define USB_DEVICE_ID_ELECOM_M_XT3URBK	0x00fb
+#define USB_DEVICE_ID_ELECOM_M_XT3URBK_00FB	0x00fb
+#define USB_DEVICE_ID_ELECOM_M_XT3URBK_018F	0x018f
 #define USB_DEVICE_ID_ELECOM_M_XT3DRBK	0x00fc
 #define USB_DEVICE_ID_ELECOM_M_XT4DRBK	0x00fd
 #define USB_DEVICE_ID_ELECOM_M_DT1URBK	0x00fe
···
 #define USB_DEVICE_ID_ITE_LENOVO_YOGA2		0x8350
 #define I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720	0x837a
 #define USB_DEVICE_ID_ITE_LENOVO_YOGA900	0x8396
+#define I2C_DEVICE_ID_ITE_LENOVO_YOGA_SLIM_7X_KEYBOARD	0x8987
 #define USB_DEVICE_ID_ITE8595		0x8595
 #define USB_DEVICE_ID_ITE_MEDION_E1239T	0xce50
···
 #define USB_VENDOR_ID_SIGNOTEC			0x2133
 #define USB_DEVICE_ID_SIGNOTEC_VIEWSONIC_PD1011	0x0018
 
-#define USB_VENDOR_ID_SMARTLINKTECHNOLOGY	0x4c4a
-#define USB_DEVICE_ID_SMARTLINKTECHNOLOGY_4155	0x4155
+#define USB_VENDOR_ID_JIELI_SDK_DEFAULT		0x4c4a
+#define USB_DEVICE_ID_JIELI_SDK_4155		0x4155
 
 #endif
+3 -2
drivers/hid/hid-input.c
··· 399 399 { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_CHROMEBOOK_TROGDOR_POMPOM), 400 400 HID_BATTERY_QUIRK_AVOID_QUERY }, 401 401 /* 402 - * Elan I2C-HID touchscreens seem to all report a non present battery, 403 - * set HID_BATTERY_QUIRK_IGNORE for all Elan I2C-HID devices. 402 + * Elan HID touchscreens seem to all report a non present battery, 403 + * set HID_BATTERY_QUIRK_IGNORE for all Elan I2C and USB HID devices. 404 404 */ 405 405 { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_IGNORE }, 406 + { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_IGNORE }, 406 407 {} 407 408 }; 408 409
+17
drivers/hid/hid-lenovo.c
··· 148 148 0x81, 0x01, /* Input (Const,Array,Abs,No Wrap,Linear,Preferred State,No Null Position) */ 149 149 }; 150 150 151 + static const __u8 lenovo_yoga7x_kbd_need_fixup_collection[] = { 152 + 0x15, 0x00, // Logical Minimum (0) 153 + 0x25, 0x65, // Logical Maximum (101) 154 + 0x05, 0x07, // Usage Page (Keyboard) 155 + 0x19, 0x00, // Usage Minimum (0) 156 + 0x29, 0xDD, // Usage Maximum (221) 157 + }; 158 + 151 159 static const __u8 *lenovo_report_fixup(struct hid_device *hdev, __u8 *rdesc, 152 160 unsigned int *rsize) 153 161 { ··· 183 175 rdesc[256] = 0x01; /* report count = 0x01 */ 184 176 rdesc[258] = 0x00; /* input = 0x00 */ 185 177 rdesc[260] = 0x01; /* report count (2) = 0x01 */ 178 + } 179 + break; 180 + case I2C_DEVICE_ID_ITE_LENOVO_YOGA_SLIM_7X_KEYBOARD: 181 + if (*rsize == 176 && 182 + memcmp(&rdesc[52], lenovo_yoga7x_kbd_need_fixup_collection, 183 + sizeof(lenovo_yoga7x_kbd_need_fixup_collection)) == 0) { 184 + rdesc[55] = rdesc[61]; // logical maximum = usage maximum 186 185 } 187 186 break; 188 187 } ··· 1553 1538 USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X12_TAB) }, 1554 1539 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 1555 1540 USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X12_TAB2) }, 1541 + { HID_DEVICE(BUS_I2C, HID_GROUP_GENERIC, 1542 + USB_VENDOR_ID_ITE, I2C_DEVICE_ID_ITE_LENOVO_YOGA_SLIM_7X_KEYBOARD) }, 1556 1543 { } 1557 1544 }; 1558 1545
+2 -5
drivers/hid/hid-ntrig.c
··· 142 142 int ret; 143 143 char buf[20]; 144 144 struct usb_device *usb_dev = hid_to_usb_dev(hdev); 145 - unsigned char *data = kmalloc(8, GFP_KERNEL); 145 + unsigned char *data __free(kfree) = kmalloc(8, GFP_KERNEL); 146 146 147 147 if (!hid_is_usb(hdev)) 148 148 return; 149 149 150 150 if (!data) 151 - goto err_free; 151 + return; 152 152 153 153 ret = usb_control_msg(usb_dev, usb_rcvctrlpipe(usb_dev, 0), 154 154 USB_REQ_CLEAR_FEATURE, ··· 163 163 hid_info(hdev, "Firmware version: %s (%02x%02x %02x%02x)\n", 164 164 buf, data[2], data[3], data[4], data[5]); 165 165 } 166 - 167 - err_free: 168 - kfree(data); 169 166 } 170 167 171 168 static ssize_t show_phys_width(struct device *dev,
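The conversion above uses scope-based cleanup from <linux/cleanup.h>: __free(kfree) arranges for kfree() to run automatically when the variable leaves scope, which is why the err_free label disappears. A minimal sketch under the same assumptions (function name hypothetical):

        #include <linux/cleanup.h>
        #include <linux/slab.h>

        static int read_fw_version(void)
        {
                u8 *data __free(kfree) = kmalloc(8, GFP_KERNEL);

                if (!data)
                        return -ENOMEM;         /* nothing to unwind */
                /* ... fill and parse data ... */
                return 0;                       /* kfree(data) runs here automatically */
        }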
+2
drivers/hid/hid-playstation.c
··· 1942 1942 "Failed to retrieve DualShock4 calibration info: %d\n", 1943 1943 ret); 1944 1944 ret = -EILSEQ; 1945 + kfree(buf); 1945 1946 goto transfer_failed; 1946 1947 } else { 1947 1948 break; ··· 1960 1959 1961 1960 if (ret) { 1962 1961 hid_warn(hdev, "Failed to retrieve DualShock4 calibration info: %d\n", ret); 1962 + kfree(buf); 1963 1963 goto transfer_failed; 1964 1964 } 1965 1965 }
+14 -2
drivers/hid/hid-quirks.c
··· 410 410 #if IS_ENABLED(CONFIG_HID_ELECOM) 411 411 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_BM084) }, 412 412 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XGL20DLBK) }, 413 - { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK) }, 413 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_00FB) }, 414 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_018F) }, 414 415 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3DRBK) }, 415 416 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT4DRBK) }, 416 417 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, ··· 916 915 #endif 917 916 { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) }, 918 917 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473) }, 919 - { HID_USB_DEVICE(USB_VENDOR_ID_SMARTLINKTECHNOLOGY, USB_DEVICE_ID_SMARTLINKTECHNOLOGY_4155) }, 920 918 { } 921 919 }; 922 920 ··· 1063 1063 if (!strncmp(hdev->name, elan_acpi_id[i].id, 1064 1064 strlen(elan_acpi_id[i].id))) 1065 1065 return true; 1066 + break; 1067 + case USB_VENDOR_ID_JIELI_SDK_DEFAULT: 1068 + /* 1069 + * Multiple USB devices with identical IDs (mic & touchscreen). 1070 + * The touch screen requires hid core processing, but the 1071 + * microphone does not. They can be distinguished by manufacturer 1072 + * and serial number. 1073 + */ 1074 + if (hdev->product == USB_DEVICE_ID_JIELI_SDK_4155 && 1075 + strncmp(hdev->name, "SmartlinkTechnology", 19) == 0 && 1076 + strncmp(hdev->uniq, "20201111000001", 14) == 0) 1077 + return true; 1066 1078 break; 1067 1079 } 1068 1080
+3 -1
drivers/hid/hid-uclogic-params.c
··· 1369 1369 event_hook->hdev = hdev; 1370 1370 event_hook->size = ARRAY_SIZE(reconnect_event); 1371 1371 event_hook->event = kmemdup(reconnect_event, event_hook->size, GFP_KERNEL); 1372 - if (!event_hook->event) 1372 + if (!event_hook->event) { 1373 + kfree(event_hook); 1373 1374 return -ENOMEM; 1375 + } 1374 1376 1375 1377 list_add_tail(&event_hook->list, &p->event_hooks->list); 1376 1378
+2 -2
drivers/hid/usbhid/hid-pidff.c
··· 806 806 807 807 static int pidff_needs_playback(struct pidff_device *pidff, int effect_id, int n) 808 808 { 809 - return pidff->effect[effect_id].is_infinite || 810 - pidff->effect[effect_id].loop_count != n; 809 + return !pidff->effect[effect_id].is_infinite || 810 + pidff->effect[effect_id].loop_count != n; 811 811 } 812 812 813 813 /*
+26 -28
drivers/hwmon/gpd-fan.c
··· 12 12 * Copyright (c) 2024 Cryolitia PukNgae 13 13 */ 14 14 15 - #include <linux/acpi.h> 16 15 #include <linux/dmi.h> 17 16 #include <linux/hwmon.h> 17 + #include <linux/io.h> 18 18 #include <linux/ioport.h> 19 19 #include <linux/kernel.h> 20 20 #include <linux/module.h> ··· 276 276 return (u16)high << 8 | low; 277 277 } 278 278 279 - static void gpd_win4_init_ec(void) 280 - { 281 - u8 chip_id, chip_ver; 282 - 283 - gpd_ecram_read(0x2000, &chip_id); 284 - 285 - if (chip_id == 0x55) { 286 - gpd_ecram_read(0x1060, &chip_ver); 287 - gpd_ecram_write(0x1060, chip_ver | 0x80); 288 - } 289 - } 290 - 291 - static int gpd_win4_read_rpm(void) 292 - { 293 - int ret; 294 - 295 - ret = gpd_generic_read_rpm(); 296 - 297 - if (ret == 0) 298 - // Re-init EC when speed is 0 299 - gpd_win4_init_ec(); 300 - 301 - return ret; 302 - } 303 - 304 279 static int gpd_wm2_read_rpm(void) 305 280 { 306 281 for (u16 pwm_ctr_offset = GPD_PWM_CTR_OFFSET; ··· 295 320 static int gpd_read_rpm(void) 296 321 { 297 322 switch (gpd_driver_priv.drvdata->board) { 323 + case win4_6800u: 298 324 case win_mini: 299 325 case duo: 300 326 return gpd_generic_read_rpm(); 301 - case win4_6800u: 302 - return gpd_win4_read_rpm(); 303 327 case win_max_2: 304 328 return gpd_wm2_read_rpm(); 305 329 } ··· 581 607 .info = gpd_fan_hwmon_channel_info 582 608 }; 583 609 610 + static void gpd_win4_init_ec(void) 611 + { 612 + u8 chip_id, chip_ver; 613 + 614 + gpd_ecram_read(0x2000, &chip_id); 615 + 616 + if (chip_id == 0x55) { 617 + gpd_ecram_read(0x1060, &chip_ver); 618 + gpd_ecram_write(0x1060, chip_ver | 0x80); 619 + } 620 + } 621 + 622 + static void gpd_init_ec(void) 623 + { 624 + // The buggy firmware won't initialize EC properly on boot. 625 + // Before its initialization, reading RPM will always return 0, 626 + // and writing PWM will have no effect. 627 + // Initialize it manually on driver load. 628 + if (gpd_driver_priv.drvdata->board == win4_6800u) 629 + gpd_win4_init_ec(); 630 + } 631 + 584 632 static int gpd_fan_probe(struct platform_device *pdev) 585 633 { 586 634 struct device *dev = &pdev->dev; ··· 629 633 if (IS_ERR(hwdev)) 630 634 return dev_err_probe(dev, PTR_ERR(hwdev), 631 635 "Failed to register hwmon device\n"); 636 + 637 + gpd_init_ec(); 632 638 633 639 return 0; 634 640 }
+2 -1
drivers/irqchip/irq-riscv-intc.c
··· 166 166 static const struct irq_domain_ops riscv_intc_domain_ops = { 167 167 .map = riscv_intc_domain_map, 168 168 .xlate = irq_domain_xlate_onecell, 169 - .alloc = riscv_intc_domain_alloc 169 + .alloc = riscv_intc_domain_alloc, 170 + .free = irq_domain_free_irqs_top, 170 171 }; 171 172 172 173 static struct fwnode_handle *riscv_intc_hwnode(void)
+2 -2
drivers/memory/tegra/tegra210.c
··· 1015 1015 }, 1016 1016 }, 1017 1017 }, { 1018 - .id = TEGRA210_MC_SESRD, 1018 + .id = TEGRA210_MC_SESWR, 1019 1019 .name = "seswr", 1020 1020 .swgroup = TEGRA_SWGROUP_SE, 1021 1021 .regs = { ··· 1079 1079 }, 1080 1080 }, 1081 1081 }, { 1082 - .id = TEGRA210_MC_ETRR, 1082 + .id = TEGRA210_MC_ETRW, 1083 1083 .name = "etrw", 1084 1084 .swgroup = TEGRA_SWGROUP_ETR, 1085 1085 .regs = {
+1 -1
drivers/mmc/host/Kconfig
··· 950 950 config MMC_WMT 951 951 tristate "Wondermedia SD/MMC Host Controller support" 952 952 depends on ARCH_VT8500 || COMPILE_TEST 953 - default y 953 + default ARCH_VT8500 954 954 help 955 955 This selects support for the SD/MMC Host Controller on 956 956 Wondermedia WM8505/WM8650 based SoCs.
+2 -2
drivers/mmc/host/dw_mmc-rockchip.c
··· 42 42 */ 43 43 static int rockchip_mmc_get_internal_phase(struct dw_mci *host, bool sample) 44 44 { 45 - unsigned long rate = clk_get_rate(host->ciu_clk); 45 + unsigned long rate = clk_get_rate(host->ciu_clk) / RK3288_CLKGEN_DIV; 46 46 u32 raw_value; 47 47 u16 degrees; 48 48 u32 delay_num = 0; ··· 85 85 86 86 static int rockchip_mmc_set_internal_phase(struct dw_mci *host, bool sample, int degrees) 87 87 { 88 - unsigned long rate = clk_get_rate(host->ciu_clk); 88 + unsigned long rate = clk_get_rate(host->ciu_clk) / RK3288_CLKGEN_DIV; 89 89 u8 nineties, remainder; 90 90 u8 delay_num; 91 91 u32 raw_value;
+18 -38
drivers/mmc/host/pxamci.c
··· 652 652 host->clkrt = CLKRT_OFF; 653 653 654 654 host->clk = devm_clk_get(dev, NULL); 655 - if (IS_ERR(host->clk)) { 656 - host->clk = NULL; 657 - return PTR_ERR(host->clk); 658 - } 655 + if (IS_ERR(host->clk)) 656 + return dev_err_probe(dev, PTR_ERR(host->clk), 657 + "Failed to acquire clock\n"); 659 658 660 659 host->clkrate = clk_get_rate(host->clk); 661 660 ··· 702 703 703 704 platform_set_drvdata(pdev, mmc); 704 705 705 - host->dma_chan_rx = dma_request_chan(dev, "rx"); 706 - if (IS_ERR(host->dma_chan_rx)) { 707 - host->dma_chan_rx = NULL; 706 + host->dma_chan_rx = devm_dma_request_chan(dev, "rx"); 707 + if (IS_ERR(host->dma_chan_rx)) 708 708 return dev_err_probe(dev, PTR_ERR(host->dma_chan_rx), 709 709 "unable to request rx dma channel\n"); 710 - } 711 710 712 - host->dma_chan_tx = dma_request_chan(dev, "tx"); 713 - if (IS_ERR(host->dma_chan_tx)) { 714 - dev_err(dev, "unable to request tx dma channel\n"); 715 - ret = PTR_ERR(host->dma_chan_tx); 716 - host->dma_chan_tx = NULL; 717 - goto out; 718 - } 711 + 712 + host->dma_chan_tx = devm_dma_request_chan(dev, "tx"); 713 + if (IS_ERR(host->dma_chan_tx)) 714 + return dev_err_probe(dev, PTR_ERR(host->dma_chan_tx), 715 + "unable to request tx dma channel\n"); 719 716 720 717 if (host->pdata) { 721 718 host->detect_delay_ms = host->pdata->detect_delay_ms; 722 719 723 720 host->power = devm_gpiod_get_optional(dev, "power", GPIOD_OUT_LOW); 724 - if (IS_ERR(host->power)) { 725 - ret = PTR_ERR(host->power); 726 - dev_err(dev, "Failed requesting gpio_power\n"); 727 - goto out; 728 - } 721 + if (IS_ERR(host->power)) 722 + return dev_err_probe(dev, PTR_ERR(host->power), 723 + "Failed requesting gpio_power\n"); 729 724 730 725 /* FIXME: should we pass detection delay to debounce? */ 731 726 ret = mmc_gpiod_request_cd(mmc, "cd", 0, false, 0); 732 - if (ret && ret != -ENOENT) { 733 - dev_err(dev, "Failed requesting gpio_cd\n"); 734 - goto out; 735 - } 727 + if (ret && ret != -ENOENT) 728 + return dev_err_probe(dev, ret, "Failed requesting gpio_cd\n"); 736 729 737 730 if (!host->pdata->gpio_card_ro_invert) 738 731 mmc->caps2 |= MMC_CAP2_RO_ACTIVE_HIGH; 739 732 740 733 ret = mmc_gpiod_request_ro(mmc, "wp", 0, 0); 741 - if (ret && ret != -ENOENT) { 742 - dev_err(dev, "Failed requesting gpio_ro\n"); 743 - goto out; 744 - } 734 + if (ret && ret != -ENOENT) 735 + return dev_err_probe(dev, ret, "Failed requesting gpio_ro\n"); 736 + 745 737 if (!ret) 746 738 host->use_ro_gpio = true; 747 739 ··· 749 759 if (ret) { 750 760 if (host->pdata && host->pdata->exit) 751 761 host->pdata->exit(dev, mmc); 752 - goto out; 753 762 } 754 763 755 - return 0; 756 - 757 - out: 758 - if (host->dma_chan_rx) 759 - dma_release_channel(host->dma_chan_rx); 760 - if (host->dma_chan_tx) 761 - dma_release_channel(host->dma_chan_tx); 762 764 return ret; 763 765 } 764 766 ··· 773 791 774 792 dmaengine_terminate_all(host->dma_chan_rx); 775 793 dmaengine_terminate_all(host->dma_chan_tx); 776 - dma_release_channel(host->dma_chan_rx); 777 - dma_release_channel(host->dma_chan_tx); 778 794 } 779 795 } 780 796
+1 -1
drivers/mmc/host/sdhci-of-dwcmshc.c
··· 94 94 #define DLL_TXCLK_TAPNUM_DEFAULT 0x10 95 95 #define DLL_TXCLK_TAPNUM_90_DEGREES 0xA 96 96 #define DLL_TXCLK_TAPNUM_FROM_SW BIT(24) 97 - #define DLL_STRBIN_TAPNUM_DEFAULT 0x8 97 + #define DLL_STRBIN_TAPNUM_DEFAULT 0x4 98 98 #define DLL_STRBIN_TAPNUM_FROM_SW BIT(24) 99 99 #define DLL_STRBIN_DELAY_NUM_SEL BIT(26) 100 100 #define DLL_STRBIN_DELAY_NUM_OFFSET 16
+4 -2
drivers/mtd/mtdchar.c
··· 599 599 uint8_t *datbuf = NULL, *oobbuf = NULL; 600 600 size_t datbuf_len, oobbuf_len; 601 601 int ret = 0; 602 + u64 end; 602 603 603 604 if (copy_from_user(&req, argp, sizeof(req))) 604 605 return -EFAULT; ··· 619 618 req.len &= 0xffffffff; 620 619 req.ooblen &= 0xffffffff; 621 620 622 - if (req.start + req.len > mtd->size) 621 + if (check_add_overflow(req.start, req.len, &end) || end > mtd->size) 623 622 return -EINVAL; 624 623 625 624 datbuf_len = min_t(size_t, req.len, mtd->erasesize); ··· 699 698 size_t datbuf_len, oobbuf_len; 700 699 size_t orig_len, orig_ooblen; 701 700 int ret = 0; 701 + u64 end; 702 702 703 703 if (copy_from_user(&req, argp, sizeof(req))) 704 704 return -EFAULT; ··· 726 724 req.len &= 0xffffffff; 727 725 req.ooblen &= 0xffffffff; 728 726 729 - if (req.start + req.len > mtd->size) { 727 + if (check_add_overflow(req.start, req.len, &end) || end > mtd->size) { 730 728 ret = -EINVAL; 731 729 goto out; 732 730 }
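Both hunks above replace an overflow-prone "req.start + req.len > mtd->size" comparison with check_add_overflow() from <linux/overflow.h>. A worked sketch with hypothetical values:

        #include <linux/overflow.h>

        static int check_range(u64 start, u64 len, u64 size)
        {
                u64 end;

                /* With start = 0xfffffffffffffff0 and len = 0x20, the plain
                 * sum wraps to 0x10 and would pass a naive "end > size" test;
                 * check_add_overflow() returns true on the wrap instead. */
                if (check_add_overflow(start, len, &end) || end > size)
                        return -EINVAL;
                return 0;
        }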
+1 -1
drivers/mtd/nand/Kconfig
··· 63 63 64 64 config MTD_NAND_ECC_REALTEK 65 65 tristate "Realtek RTL93xx hardware ECC engine" 66 - depends on HAS_IOMEM 66 + depends on HAS_IOMEM && HAS_DMA 67 67 depends on MACH_REALTEK_RTL || COMPILE_TEST 68 68 select MTD_NAND_ECC 69 69 help
+3 -3
drivers/mtd/nand/ecc-realtek.c
··· 380 380 nand_ecc_cleanup_req_tweaking(&ctx->req_ctx); 381 381 } 382 382 383 - static struct nand_ecc_engine_ops rtl_ecc_engine_ops = { 383 + static const struct nand_ecc_engine_ops rtl_ecc_engine_ops = { 384 384 .init_ctx = rtl_ecc_init_ctx, 385 385 .cleanup_ctx = rtl_ecc_cleanup_ctx, 386 386 .prepare_io_req = rtl_ecc_prepare_io_req, ··· 418 418 419 419 rtlc->buf = dma_alloc_noncoherent(dev, RTL_ECC_DMA_SIZE, &rtlc->buf_dma, 420 420 DMA_BIDIRECTIONAL, GFP_KERNEL); 421 - if (IS_ERR(rtlc->buf)) 422 - return PTR_ERR(rtlc->buf); 421 + if (!rtlc->buf) 422 + return -ENOMEM; 423 423 424 424 rtlc->dev = dev; 425 425 rtlc->engine.dev = dev;
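The second hunk above fixes a return-convention mix-up: dma_alloc_noncoherent() reports failure with NULL rather than an ERR_PTR, so IS_ERR() on its result can never trigger. The convention in sketch form:

        buf = dma_alloc_noncoherent(dev, size, &dma_handle,
                                    DMA_BIDIRECTIONAL, GFP_KERNEL);
        if (!buf)               /* NULL on failure; IS_ERR(buf) would miss it */
                return -ENOMEM;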
+1 -1
drivers/mtd/nand/onenand/onenand_samsung.c
··· 906 906 err = devm_request_irq(&pdev->dev, r->start, 907 907 s5pc110_onenand_irq, 908 908 IRQF_SHARED, "onenand", 909 - &onenand); 909 + onenand); 910 910 if (err) { 911 911 dev_err(&pdev->dev, "failed to get irq\n"); 912 912 return err;
+2 -1
drivers/mtd/nand/raw/cadence-nand-controller.c
··· 2871 2871 static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl) 2872 2872 { 2873 2873 dma_cap_mask_t mask; 2874 - struct dma_device *dma_dev = cdns_ctrl->dmac->device; 2874 + struct dma_device *dma_dev; 2875 2875 int ret; 2876 2876 2877 2877 cdns_ctrl->cdma_desc = dma_alloc_coherent(cdns_ctrl->dev, ··· 2915 2915 } 2916 2916 } 2917 2917 2918 + dma_dev = cdns_ctrl->dmac->device; 2918 2919 cdns_ctrl->io.iova_dma = dma_map_resource(dma_dev->dev, cdns_ctrl->io.dma, 2919 2920 cdns_ctrl->io.size, 2920 2921 DMA_BIDIRECTIONAL, 0);
+1 -1
drivers/mtd/nand/spi/fmsh.c
··· 58 58 SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 59 59 &write_cache_variants, 60 60 &update_cache_variants), 61 - SPINAND_HAS_QE_BIT, 61 + 0, 62 62 SPINAND_ECCINFO(&fm25s01a_ooblayout, NULL)), 63 63 }; 64 64
+12 -2
drivers/net/dsa/hirschmann/hellcreek_ptp.c
··· 376 376 hellcreek_set_brightness(hellcreek, STATUS_OUT_IS_GM, 1); 377 377 378 378 /* Register both leds */ 379 - led_classdev_register(hellcreek->dev, &hellcreek->led_sync_good); 380 - led_classdev_register(hellcreek->dev, &hellcreek->led_is_gm); 379 + ret = led_classdev_register(hellcreek->dev, &hellcreek->led_sync_good); 380 + if (ret) { 381 + dev_err(hellcreek->dev, "Failed to register sync_good LED\n"); 382 + goto out; 383 + } 384 + 385 + ret = led_classdev_register(hellcreek->dev, &hellcreek->led_is_gm); 386 + if (ret) { 387 + dev_err(hellcreek->dev, "Failed to register is_gm LED\n"); 388 + led_classdev_unregister(&hellcreek->led_sync_good); 389 + goto out; 390 + } 381 391 382 392 ret = 0; 383 393
+1
drivers/net/dsa/microchip/lan937x_main.c
··· 540 540 ksz_pread16(dev, port, reg, &data16); 541 541 542 542 /* Update tune Adjust */ 543 + data16 &= ~PORT_TUNE_ADJ; 543 544 data16 |= FIELD_PREP(PORT_TUNE_ADJ, val); 544 545 ksz_pwrite16(dev, port, reg, data16); 545 546
+1 -1
drivers/net/ethernet/airoha/airoha_ppe.c
··· 308 308 if (!airoha_is_valid_gdm_port(eth, port)) 309 309 return -EINVAL; 310 310 311 - if (dsa_port >= 0) 311 + if (dsa_port >= 0 || eth->ports[1]) 312 312 pse_port = port->id == 4 ? FE_PSE_PORT_GDM4 313 313 : port->id; 314 314 else
+4 -3
drivers/net/ethernet/emulex/benet/be_main.c
··· 1296 1296 (adapter->bmc_filt_mask & BMC_FILT_MULTICAST) 1297 1297 1298 1298 static bool be_send_pkt_to_bmc(struct be_adapter *adapter, 1299 - struct sk_buff **skb) 1299 + struct sk_buff **skb, 1300 + struct be_wrb_params *wrb_params) 1300 1301 { 1301 1302 struct ethhdr *eh = (struct ethhdr *)(*skb)->data; 1302 1303 bool os2bmc = false; ··· 1361 1360 * to BMC, asic expects the vlan to be inline in the packet. 1362 1361 */ 1363 1362 if (os2bmc) 1364 - *skb = be_insert_vlan_in_pkt(adapter, *skb, NULL); 1363 + *skb = be_insert_vlan_in_pkt(adapter, *skb, wrb_params); 1365 1364 1366 1365 return os2bmc; 1367 1366 } ··· 1388 1387 /* if os2bmc is enabled and if the pkt is destined to bmc, 1389 1388 * enqueue the pkt a 2nd time with mgmt bit set. 1390 1389 */ 1391 - if (be_send_pkt_to_bmc(adapter, &skb)) { 1390 + if (be_send_pkt_to_bmc(adapter, &skb, &wrb_params)) { 1392 1391 BE_WRB_F_SET(wrb_params.features, OS2BMC, 1); 1393 1392 wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params); 1394 1393 if (unlikely(!wrb_cnt))
+19 -3
drivers/net/ethernet/intel/ice/ice_ptp.c
··· 3253 3253 3254 3254 err = ice_ptp_init_port(pf, &ptp->port); 3255 3255 if (err) 3256 - goto err_exit; 3256 + goto err_clean_pf; 3257 3257 3258 3258 /* Start the PHY timestamping block */ 3259 3259 ice_ptp_reset_phy_timestamping(pf); ··· 3270 3270 dev_info(ice_pf_to_dev(pf), "PTP init successful\n"); 3271 3271 return; 3272 3272 3273 + err_clean_pf: 3274 + mutex_destroy(&ptp->port.ps_lock); 3275 + ice_ptp_cleanup_pf(pf); 3273 3276 err_exit: 3274 3277 /* If we registered a PTP clock, release it */ 3275 3278 if (pf->ptp.clock) { 3276 3279 ptp_clock_unregister(ptp->clock); 3277 3280 pf->ptp.clock = NULL; 3278 3281 } 3279 - ptp->state = ICE_PTP_ERROR; 3282 + /* Keep ICE_PTP_UNINIT state to avoid ambiguity at driver unload 3283 + * and to avoid duplicated resources release. 3284 + */ 3285 + ptp->state = ICE_PTP_UNINIT; 3280 3286 dev_err(ice_pf_to_dev(pf), "PTP failed %d\n", err); 3281 3287 } 3282 3288 ··· 3295 3289 */ 3296 3290 void ice_ptp_release(struct ice_pf *pf) 3297 3291 { 3298 - if (pf->ptp.state != ICE_PTP_READY) 3292 + if (pf->ptp.state == ICE_PTP_UNINIT) 3299 3293 return; 3294 + 3295 + if (pf->ptp.state != ICE_PTP_READY) { 3296 + mutex_destroy(&pf->ptp.port.ps_lock); 3297 + ice_ptp_cleanup_pf(pf); 3298 + if (pf->ptp.clock) { 3299 + ptp_clock_unregister(pf->ptp.clock); 3300 + pf->ptp.clock = NULL; 3301 + } 3302 + return; 3303 + } 3300 3304 3301 3305 pf->ptp.state = ICE_PTP_UNINIT; 3302 3306
+2
drivers/net/ethernet/intel/idpf/idpf_main.c
··· 141 141 destroy_workqueue(adapter->vc_event_wq); 142 142 143 143 for (i = 0; i < adapter->max_vports; i++) { 144 + if (!adapter->vport_config[i]) 145 + continue; 144 146 kfree(adapter->vport_config[i]->user_config.q_coalesce); 145 147 kfree(adapter->vport_config[i]); 146 148 adapter->vport_config[i] = NULL;
+2 -4
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
··· 324 324 free_irq(irq->map.virq, &irq->nh); 325 325 err_req_irq: 326 326 #ifdef CONFIG_RFS_ACCEL 327 - if (i && rmap && *rmap) { 328 - free_irq_cpu_rmap(*rmap); 329 - *rmap = NULL; 330 - } 327 + if (i && rmap && *rmap) 328 + irq_cpu_rmap_remove(*rmap, irq->map.virq); 331 329 err_irq_rmap: 332 330 #endif 333 331 if (i && pci_msix_can_alloc_dyn(dev->pdev))
+2
drivers/net/ethernet/mellanox/mlxsw/core_linecards.c
··· 601 601 err = devlink_info_version_fixed_put(req, 602 602 DEVLINK_INFO_VERSION_GENERIC_FW_PSID, 603 603 info->psid); 604 + if (err) 605 + goto unlock; 604 606 605 607 sprintf(buf, "%u.%u.%u", info->fw_major, info->fw_minor, 606 608 info->fw_sub_minor);
+4 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
··· 830 830 return -EINVAL; 831 831 832 832 rule = mlxsw_sp_acl_rule_lookup(mlxsw_sp, ruleset, f->cookie); 833 - if (!rule) 834 - return -EINVAL; 833 + if (!rule) { 834 + err = -EINVAL; 835 + goto err_rule_get_stats; 836 + } 835 837 836 838 err = mlxsw_sp_acl_rule_get_stats(mlxsw_sp, rule, &packets, &bytes, 837 839 &drops, &lastuse, &used_hw_stats);
+3 -2
drivers/net/ethernet/qlogic/qede/qede_fp.c
··· 4 4 * Copyright (c) 2019-2020 Marvell International Ltd. 5 5 */ 6 6 7 + #include <linux/array_size.h> 7 8 #include <linux/netdevice.h> 8 9 #include <linux/etherdevice.h> 9 10 #include <linux/skbuff.h> ··· 961 960 { 962 961 int i; 963 962 964 - for (i = 0; cqe->len_list[i]; i++) 963 + for (i = 0; cqe->len_list[i] && i < ARRAY_SIZE(cqe->len_list); i++) 965 964 qede_fill_frag_skb(edev, rxq, cqe->tpa_agg_index, 966 965 le16_to_cpu(cqe->len_list[i])); 967 966 ··· 986 985 dma_unmap_page(rxq->dev, tpa_info->buffer.mapping, 987 986 PAGE_SIZE, rxq->data_direction); 988 987 989 - for (i = 0; cqe->len_list[i]; i++) 988 + for (i = 0; cqe->len_list[i] && i < ARRAY_SIZE(cqe->len_list); i++) 990 989 qede_fill_frag_skb(edev, rxq, cqe->tpa_agg_index, 991 990 le16_to_cpu(cqe->len_list[i])); 992 991 if (unlikely(i > 1))
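The added "i < ARRAY_SIZE(cqe->len_list)" bound above caps a sentinel-terminated scan of device-written data so a missing terminator cannot walk past the array. The general idiom, with the bound tested before the element is read (consume_frag() is hypothetical):

        for (i = 0; i < ARRAY_SIZE(cqe->len_list) && cqe->len_list[i]; i++)
                consume_frag(le16_to_cpu(cqe->len_list[i]));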
+34 -11
drivers/net/ethernet/toshiba/ps3_gelic_net.c
··· 260 260 if (atomic_dec_if_positive(&card->users) == 0) { 261 261 pr_debug("%s: real do\n", __func__); 262 262 napi_disable(&card->napi); 263 + timer_delete_sync(&card->rx_oom_timer); 263 264 /* 264 265 * Disable irq. Wireless interrupts will 265 266 * be disabled later if any ··· 971 970 * gelic_card_decode_one_descr - processes an rx descriptor 972 971 * @card: card structure 973 972 * 974 - * returns 1 if a packet has been sent to the stack, otherwise 0 973 + * returns 1 if a packet has been sent to the stack, -ENOMEM on skb alloc 974 + * failure, otherwise 0 975 975 * 976 976 * processes an rx descriptor by iommu-unmapping the data buffer and passing 977 977 * the packet up to the stack ··· 983 981 struct gelic_descr_chain *chain = &card->rx_chain; 984 982 struct gelic_descr *descr = chain->head; 985 983 struct net_device *netdev = NULL; 986 - int dmac_chain_ended; 984 + int dmac_chain_ended = 0; 985 + int prepare_rx_ret; 987 986 988 987 status = gelic_descr_get_status(descr); 989 988 990 989 if (status == GELIC_DESCR_DMA_CARDOWNED) 991 990 return 0; 992 991 993 - if (status == GELIC_DESCR_DMA_NOT_IN_USE) { 992 + if (status == GELIC_DESCR_DMA_NOT_IN_USE || !descr->skb) { 994 993 dev_dbg(ctodev(card), "dormant descr? %p\n", descr); 995 - return 0; 994 + dmac_chain_ended = 1; 995 + goto refill; 996 996 } 997 997 998 998 /* netdevice select */ ··· 1052 1048 refill: 1053 1049 1054 1050 /* is the current descriptor terminated with next_descr == NULL? */ 1055 - dmac_chain_ended = 1056 - be32_to_cpu(descr->hw_regs.dmac_cmd_status) & 1057 - GELIC_DESCR_RX_DMA_CHAIN_END; 1051 + if (!dmac_chain_ended) 1052 + dmac_chain_ended = 1053 + be32_to_cpu(descr->hw_regs.dmac_cmd_status) & 1054 + GELIC_DESCR_RX_DMA_CHAIN_END; 1058 1055 /* 1059 1056 * So that always DMAC can see the end 1060 1057 * of the descriptor chain to avoid ··· 1067 1062 gelic_descr_set_status(descr, GELIC_DESCR_DMA_NOT_IN_USE); 1068 1063 1069 1064 /* 1070 - * this call can fail, but for now, just leave this 1071 - * descriptor without skb 1065 + * this call can fail, propagate the error 1072 1066 */ 1073 - gelic_descr_prepare_rx(card, descr); 1067 + prepare_rx_ret = gelic_descr_prepare_rx(card, descr); 1068 + if (prepare_rx_ret) 1069 + return prepare_rx_ret; 1074 1070 1075 1071 chain->tail = descr; 1076 1072 chain->head = descr->next; ··· 1093 1087 return 1; 1094 1088 } 1095 1089 1090 + static void gelic_rx_oom_timer(struct timer_list *t) 1091 + { 1092 + struct gelic_card *card = timer_container_of(card, t, rx_oom_timer); 1093 + 1094 + napi_schedule(&card->napi); 1095 + } 1096 + 1096 1097 /** 1097 1098 * gelic_net_poll - NAPI poll function called by the stack to return packets 1098 1099 * @napi: napi structure ··· 1112 1099 { 1113 1100 struct gelic_card *card = container_of(napi, struct gelic_card, napi); 1114 1101 int packets_done = 0; 1102 + int work_result = 0; 1115 1103 1116 1104 while (packets_done < budget) { 1117 - if (!gelic_card_decode_one_descr(card)) 1105 + work_result = gelic_card_decode_one_descr(card); 1106 + if (work_result != 1) 1118 1107 break; 1119 1108 1120 1109 packets_done++; 1110 + } 1111 + 1112 + if (work_result == -ENOMEM) { 1113 + napi_complete_done(napi, packets_done); 1114 + mod_timer(&card->rx_oom_timer, jiffies + 1); 1115 + return packets_done; 1121 1116 } 1122 1117 1123 1118 if (packets_done < budget) { ··· 1596 1575 atomic_set(&card->tx_timeout_task_counter, 0); 1597 1576 mutex_init(&card->updown_lock); 1598 1577 atomic_set(&card->users, 0); 1578 + 1579 + timer_setup(&card->rx_oom_timer, gelic_rx_oom_timer, 0); 1599 1580 1600 1581 return card; 1601 1582 }
+1
drivers/net/ethernet/toshiba/ps3_gelic_net.h
··· 268 268 struct gelic_card { 269 269 struct napi_struct napi; 270 270 struct net_device *netdev[GELIC_PORT_MAX]; 271 + struct timer_list rx_oom_timer; 271 272 /* 272 273 * hypervisor requires irq_status should be 273 274 * 8 bytes aligned, but u64 member is
+3
drivers/net/phy/phylink.c
··· 640 640 641 641 static void phylink_fill_fixedlink_supported(unsigned long *supported) 642 642 { 643 + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, supported); 644 + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, supported); 645 + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, supported); 643 646 linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, supported); 644 647 linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, supported); 645 648 linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, supported);
+20 -18
drivers/net/veth.c
··· 392 392 } 393 393 /* Restore Eth hdr pulled by dev_forward_skb/eth_type_trans */ 394 394 __skb_push(skb, ETH_HLEN); 395 - /* Depend on prior success packets started NAPI consumer via 396 - * __veth_xdp_flush(). Cancel TXQ stop if consumer stopped, 397 - * paired with empty check in veth_poll(). 398 - */ 399 395 netif_tx_stop_queue(txq); 400 - smp_mb__after_atomic(); 401 - if (unlikely(__ptr_ring_empty(&rq->xdp_ring))) 402 - netif_tx_wake_queue(txq); 396 + /* Makes sure NAPI peer consumer runs. Consumer is responsible 397 + * for starting txq again, until then ndo_start_xmit (this 398 + * function) will not be invoked by the netstack again. 399 + */ 400 + __veth_xdp_flush(rq); 403 401 break; 404 402 case NET_RX_DROP: /* same as NET_XMIT_DROP */ 405 403 drop: ··· 898 900 struct veth_xdp_tx_bq *bq, 899 901 struct veth_stats *stats) 900 902 { 901 - struct veth_priv *priv = netdev_priv(rq->dev); 902 - int queue_idx = rq->xdp_rxq.queue_index; 903 - struct netdev_queue *peer_txq; 904 - struct net_device *peer_dev; 905 903 int i, done = 0, n_xdpf = 0; 906 904 void *xdpf[VETH_XDP_BATCH]; 907 - 908 - /* NAPI functions as RCU section */ 909 - peer_dev = rcu_dereference_check(priv->peer, rcu_read_lock_bh_held()); 910 - peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL; 911 905 912 906 for (i = 0; i < budget; i++) { 913 907 void *ptr = __ptr_ring_consume(&rq->xdp_ring); ··· 949 959 rq->stats.vs.xdp_packets += done; 950 960 u64_stats_update_end(&rq->stats.syncp); 951 961 952 - if (peer_txq && unlikely(netif_tx_queue_stopped(peer_txq))) 953 - netif_tx_wake_queue(peer_txq); 954 - 955 962 return done; 956 963 } 957 964 ··· 956 969 { 957 970 struct veth_rq *rq = 958 971 container_of(napi, struct veth_rq, xdp_napi); 972 + struct veth_priv *priv = netdev_priv(rq->dev); 973 + int queue_idx = rq->xdp_rxq.queue_index; 974 + struct netdev_queue *peer_txq; 959 975 struct veth_stats stats = {}; 976 + struct net_device *peer_dev; 960 977 struct veth_xdp_tx_bq bq; 961 978 int done; 962 979 963 980 bq.count = 0; 981 + 982 + /* NAPI functions as RCU section */ 983 + peer_dev = rcu_dereference_check(priv->peer, rcu_read_lock_bh_held()); 984 + peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL; 964 985 965 986 xdp_set_return_frame_no_direct(); 966 987 done = veth_xdp_rcv(rq, budget, &bq, &stats); ··· 990 995 if (stats.xdp_tx > 0) 991 996 veth_xdp_flush(rq, &bq); 992 997 xdp_clear_return_frame_no_direct(); 998 + 999 + /* Release backpressure per NAPI poll */ 1000 + smp_rmb(); /* Paired with netif_tx_stop_queue set_bit */ 1001 + if (peer_txq && netif_tx_queue_stopped(peer_txq)) { 1002 + txq_trans_cond_update(peer_txq); 1003 + netif_tx_wake_queue(peer_txq); 1004 + } 993 1005 994 1006 return done; 995 1007 }
+7
drivers/net/wireless/realtek/rtw89/fw.c
··· 7694 7694 INIT_LIST_HEAD(&list); 7695 7695 7696 7696 list_for_each_entry_safe(ch_info, tmp, &scan_info->chan_list, list) { 7697 + /* The operating channel (tx_null == true) should 7698 + * not be last in the list, to avoid breaking 7699 + * RTL8851BU and RTL8832BU. 7700 + */ 7701 + if (list_len + 1 == RTW89_SCAN_LIST_LIMIT_AX && ch_info->tx_null) 7702 + break; 7703 + 7697 7704 list_move_tail(&ch_info->list, &list); 7698 7705 7699 7706 list_len++;
+2
drivers/pci/pci.h
··· 958 958 void pci_restore_aspm_l1ss_state(struct pci_dev *dev); 959 959 960 960 #ifdef CONFIG_PCIEASPM 961 + void pcie_aspm_remove_cap(struct pci_dev *pdev, u32 lnkcap); 961 962 void pcie_aspm_init_link_state(struct pci_dev *pdev); 962 963 void pcie_aspm_exit_link_state(struct pci_dev *pdev); 963 964 void pcie_aspm_pm_state_change(struct pci_dev *pdev, bool locked); ··· 966 965 void pci_configure_ltr(struct pci_dev *pdev); 967 966 void pci_bridge_reconfigure_ltr(struct pci_dev *pdev); 968 967 #else 968 + static inline void pcie_aspm_remove_cap(struct pci_dev *pdev, u32 lnkcap) { } 969 969 static inline void pcie_aspm_init_link_state(struct pci_dev *pdev) { } 970 970 static inline void pcie_aspm_exit_link_state(struct pci_dev *pdev) { } 971 971 static inline void pcie_aspm_pm_state_change(struct pci_dev *pdev, bool locked) { }
+17 -8
drivers/pci/pcie/aspm.c
··· 814 814 static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist) 815 815 { 816 816 struct pci_dev *child = link->downstream, *parent = link->pdev; 817 - u32 parent_lnkcap, child_lnkcap; 818 817 u16 parent_lnkctl, child_lnkctl; 819 818 struct pci_bus *linkbus = parent->subordinate; 820 819 ··· 828 829 * If ASPM not supported, don't mess with the clocks and link, 829 830 * bail out now. 830 831 */ 831 - pcie_capability_read_dword(parent, PCI_EXP_LNKCAP, &parent_lnkcap); 832 - pcie_capability_read_dword(child, PCI_EXP_LNKCAP, &child_lnkcap); 833 - if (!(parent_lnkcap & child_lnkcap & PCI_EXP_LNKCAP_ASPMS)) 832 + if (!(parent->aspm_l0s_support && child->aspm_l0s_support) && 833 + !(parent->aspm_l1_support && child->aspm_l1_support)) 834 834 return; 835 835 836 836 /* Configure common clock before checking latencies */ ··· 841 843 * read-only Link Capabilities may change depending on common clock 842 844 * configuration (PCIe r5.0, sec 7.5.3.6). 843 845 */ 844 - pcie_capability_read_dword(parent, PCI_EXP_LNKCAP, &parent_lnkcap); 845 - pcie_capability_read_dword(child, PCI_EXP_LNKCAP, &child_lnkcap); 846 846 pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &parent_lnkctl); 847 847 pcie_capability_read_word(child, PCI_EXP_LNKCTL, &child_lnkctl); 848 848 ··· 860 864 * given link unless components on both sides of the link each 861 865 * support L0s. 862 866 */ 863 - if (parent_lnkcap & child_lnkcap & PCI_EXP_LNKCAP_ASPM_L0S) 867 + if (parent->aspm_l0s_support && child->aspm_l0s_support) 864 868 link->aspm_support |= PCIE_LINK_STATE_L0S; 865 869 866 870 if (child_lnkctl & PCI_EXP_LNKCTL_ASPM_L0S) ··· 869 873 link->aspm_enabled |= PCIE_LINK_STATE_L0S_DW; 870 874 871 875 /* Setup L1 state */ 872 - if (parent_lnkcap & child_lnkcap & PCI_EXP_LNKCAP_ASPM_L1) 876 + if (parent->aspm_l1_support && child->aspm_l1_support) 873 877 link->aspm_support |= PCIE_LINK_STATE_L1; 874 878 875 879 if (parent_lnkctl & child_lnkctl & PCI_EXP_LNKCTL_ASPM_L1) ··· 1525 1529 return __pci_enable_link_state(pdev, state, true); 1526 1530 } 1527 1531 EXPORT_SYMBOL(pci_enable_link_state_locked); 1532 + 1533 + void pcie_aspm_remove_cap(struct pci_dev *pdev, u32 lnkcap) 1534 + { 1535 + if (lnkcap & PCI_EXP_LNKCAP_ASPM_L0S) 1536 + pdev->aspm_l0s_support = 0; 1537 + if (lnkcap & PCI_EXP_LNKCAP_ASPM_L1) 1538 + pdev->aspm_l1_support = 0; 1539 + 1540 + pci_info(pdev, "ASPM: Link Capabilities%s%s treated as unsupported to avoid device defect\n", 1541 + lnkcap & PCI_EXP_LNKCAP_ASPM_L0S ? " L0s" : "", 1542 + lnkcap & PCI_EXP_LNKCAP_ASPM_L1 ? " L1" : ""); 1543 + 1544 + } 1528 1545 1529 1546 static int pcie_aspm_set_policy(const char *val, 1530 1547 const struct kernel_param *kp)
+7
drivers/pci/probe.c
··· 1656 1656 if (reg32 & PCI_EXP_LNKCAP_DLLLARC) 1657 1657 pdev->link_active_reporting = 1; 1658 1658 1659 + #ifdef CONFIG_PCIEASPM 1660 + if (reg32 & PCI_EXP_LNKCAP_ASPM_L0S) 1661 + pdev->aspm_l0s_support = 1; 1662 + if (reg32 & PCI_EXP_LNKCAP_ASPM_L1) 1663 + pdev->aspm_l1_support = 1; 1664 + #endif 1665 + 1659 1666 parent = pci_upstream_bridge(pdev); 1660 1667 if (!parent) 1661 1668 return;
+21 -19
drivers/pci/quirks.c
··· 2494 2494 */ 2495 2495 static void quirk_disable_aspm_l0s(struct pci_dev *dev) 2496 2496 { 2497 - pci_info(dev, "Disabling L0s\n"); 2498 - pci_disable_link_state(dev, PCIE_LINK_STATE_L0S); 2497 + pcie_aspm_remove_cap(dev, PCI_EXP_LNKCAP_ASPM_L0S); 2499 2498 } 2500 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10a7, quirk_disable_aspm_l0s); 2501 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10a9, quirk_disable_aspm_l0s); 2502 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10b6, quirk_disable_aspm_l0s); 2503 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10c6, quirk_disable_aspm_l0s); 2504 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10c7, quirk_disable_aspm_l0s); 2505 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10c8, quirk_disable_aspm_l0s); 2506 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10d6, quirk_disable_aspm_l0s); 2507 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10db, quirk_disable_aspm_l0s); 2508 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10dd, quirk_disable_aspm_l0s); 2509 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10e1, quirk_disable_aspm_l0s); 2510 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10ec, quirk_disable_aspm_l0s); 2511 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f1, quirk_disable_aspm_l0s); 2512 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f4, quirk_disable_aspm_l0s); 2513 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1508, quirk_disable_aspm_l0s); 2499 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10a7, quirk_disable_aspm_l0s); 2500 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10a9, quirk_disable_aspm_l0s); 2501 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10b6, quirk_disable_aspm_l0s); 2502 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10c6, quirk_disable_aspm_l0s); 2503 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10c7, quirk_disable_aspm_l0s); 2504 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10c8, quirk_disable_aspm_l0s); 2505 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10d6, quirk_disable_aspm_l0s); 2506 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10db, quirk_disable_aspm_l0s); 2507 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10dd, quirk_disable_aspm_l0s); 2508 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10e1, quirk_disable_aspm_l0s); 2509 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10ec, quirk_disable_aspm_l0s); 2510 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10f1, quirk_disable_aspm_l0s); 2511 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10f4, quirk_disable_aspm_l0s); 2512 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1508, quirk_disable_aspm_l0s); 2514 2513 2515 2514 static void quirk_disable_aspm_l0s_l1(struct pci_dev *dev) 2516 2515 { 2517 - pci_info(dev, "Disabling ASPM L0s/L1\n"); 2518 - pci_disable_link_state(dev, PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1); 2516 + pcie_aspm_remove_cap(dev, 2517 + PCI_EXP_LNKCAP_ASPM_L0S | PCI_EXP_LNKCAP_ASPM_L1); 2519 2518 } 2520 2519 2521 2520 /* ··· 2522 2523 * upstream PCIe root port when ASPM is enabled. At least L0s mode is affected; 2523 2524 * disable both L0s and L1 for now to be safe. 2524 2525 */ 2525 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASMEDIA, 0x1080, quirk_disable_aspm_l0s_l1); 2526 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ASMEDIA, 0x1080, quirk_disable_aspm_l0s_l1); 2527 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_FREESCALE, 0x0451, quirk_disable_aspm_l0s_l1); 2528 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_PASEMI, 0xa002, quirk_disable_aspm_l0s_l1); 2529 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_HUAWEI, 0x1105, quirk_disable_aspm_l0s_l1); 2526 2530 2527 2531 /* 2528 2532 * Some Pericom PCIe-to-PCI bridges in reverse mode need the PCIe Retrain
+11 -2
drivers/pmdomain/arm/scmi_pm_domain.c
··· 41 41 42 42 static int scmi_pm_domain_probe(struct scmi_device *sdev) 43 43 { 44 - int num_domains, i; 44 + int num_domains, i, ret; 45 45 struct device *dev = &sdev->dev; 46 46 struct device_node *np = dev->of_node; 47 47 struct scmi_pm_domain *scmi_pd; ··· 108 108 scmi_pd_data->domains = domains; 109 109 scmi_pd_data->num_domains = num_domains; 110 110 111 + ret = of_genpd_add_provider_onecell(np, scmi_pd_data); 112 + if (ret) 113 + goto err_rm_genpds; 114 + 111 115 dev_set_drvdata(dev, scmi_pd_data); 112 116 113 - return of_genpd_add_provider_onecell(np, scmi_pd_data); 117 + return 0; 118 + err_rm_genpds: 119 + for (i = num_domains - 1; i >= 0; i--) 120 + pm_genpd_remove(domains[i]); 121 + 122 + return ret; 114 123 } 115 124 116 125 static void scmi_pm_domain_remove(struct scmi_device *sdev)
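The err_rm_genpds label above is the usual reverse-order unwind for a multi-step registration that fails partway. The shape of the pattern, with hypothetical helpers:

        for (i = 0; i < n; i++) {
                ret = register_one(&items[i]);
                if (ret)
                        goto unwind;
        }
        return 0;

        unwind:
        while (--i >= 0)
                unregister_one(&items[i]);      /* undo newest first */
        return ret;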
+2
drivers/pmdomain/imx/gpc.c
··· 536 536 return; 537 537 } 538 538 } 539 + 540 + of_node_put(pgc_node); 539 541 } 540 542 541 543 static struct platform_driver imx_gpc_driver = {
+14 -15
drivers/pmdomain/samsung/exynos-pm-domains.c
··· 92 92 { }, 93 93 }; 94 94 95 - static const char *exynos_get_domain_name(struct device_node *node) 95 + static const char *exynos_get_domain_name(struct device *dev, 96 + struct device_node *node) 96 97 { 97 98 const char *name; 98 99 99 100 if (of_property_read_string(node, "label", &name) < 0) 100 101 name = kbasename(node->full_name); 101 - return kstrdup_const(name, GFP_KERNEL); 102 + return devm_kstrdup_const(dev, name, GFP_KERNEL); 102 103 } 103 104 104 105 static int exynos_pd_probe(struct platform_device *pdev) ··· 116 115 if (!pd) 117 116 return -ENOMEM; 118 117 119 - pd->pd.name = exynos_get_domain_name(np); 118 + pd->pd.name = exynos_get_domain_name(dev, np); 120 119 if (!pd->pd.name) 121 120 return -ENOMEM; 122 121 123 122 pd->base = of_iomap(np, 0); 124 - if (!pd->base) { 125 - kfree_const(pd->pd.name); 123 + if (!pd->base) 126 124 return -ENODEV; 127 - } 128 125 129 126 pd->pd.power_off = exynos_pd_power_off; 130 127 pd->pd.power_on = exynos_pd_power_on; 131 128 pd->local_pwr_cfg = pm_domain_cfg->local_pwr_cfg; 129 + 130 + /* 131 + * Some Samsung platforms with bootloaders turning on the splash-screen 132 + * and handing it over to the kernel, requires the power-domains to be 133 + * reset during boot. 134 + */ 135 + if (IS_ENABLED(CONFIG_ARM) && 136 + of_device_is_compatible(np, "samsung,exynos4210-pd")) 137 + exynos_pd_power_off(&pd->pd); 132 138 133 139 on = readl_relaxed(pd->base + 0x4) & pd->local_pwr_cfg; 134 140 ··· 154 146 pr_info("%pOF has as child subdomain: %pOF.\n", 155 147 parent.np, child.np); 156 148 } 157 - 158 - /* 159 - * Some Samsung platforms with bootloaders turning on the splash-screen 160 - * and handing it over to the kernel, requires the power-domains to be 161 - * reset during boot. As a temporary hack to manage this, let's enforce 162 - * a sync_state. 163 - */ 164 - if (!ret) 165 - of_genpd_sync_state(np); 166 149 167 150 pm_runtime_enable(dev); 168 151 return ret;
+2 -2
drivers/pwm/pwm-adp5585.c
··· 190 190 return 0; 191 191 } 192 192 193 - static const struct adp5585_pwm_chip adp5589_pwm_chip_info = { 193 + static const struct adp5585_pwm_chip adp5585_pwm_chip_info = { 194 194 .pwm_cfg = ADP5585_PWM_CFG, 195 195 .pwm_offt_low = ADP5585_PWM_OFFT_LOW, 196 196 .pwm_ont_low = ADP5585_PWM_ONT_LOW, 197 197 }; 198 198 199 - static const struct adp5585_pwm_chip adp5585_pwm_chip_info = { 199 + static const struct adp5585_pwm_chip adp5589_pwm_chip_info = { 200 200 .pwm_cfg = ADP5589_PWM_CFG, 201 201 .pwm_offt_low = ADP5589_PWM_OFFT_LOW, 202 202 .pwm_ont_low = ADP5589_PWM_ONT_LOW,
+1
drivers/regulator/fixed.c
··· 334 334 ret = dev_err_probe(&pdev->dev, PTR_ERR(drvdata->dev), 335 335 "Failed to register regulator: %ld\n", 336 336 PTR_ERR(drvdata->dev)); 337 + gpiod_put(cfg.ena_gpiod); 337 338 return ret; 338 339 } 339 340
+2 -2
drivers/reset/reset-imx8mp-audiomix.c
··· 14 14 #include <linux/reset-controller.h> 15 15 16 16 #define IMX8MP_AUDIOMIX_EARC_RESET_OFFSET 0x200 17 - #define IMX8MP_AUDIOMIX_EARC_RESET_MASK BIT(1) 18 - #define IMX8MP_AUDIOMIX_EARC_PHY_RESET_MASK BIT(2) 17 + #define IMX8MP_AUDIOMIX_EARC_RESET_MASK BIT(0) 18 + #define IMX8MP_AUDIOMIX_EARC_PHY_RESET_MASK BIT(1) 19 19 20 20 #define IMX8MP_AUDIOMIX_DSP_RUNSTALL_OFFSET 0x108 21 21 #define IMX8MP_AUDIOMIX_DSP_RUNSTALL_MASK BIT(5)
-1
drivers/s390/net/ctcm_mpc.c
··· 701 701 702 702 grp->sweep_req_pend_num--; 703 703 ctcmpc_send_sweep_resp(ch); 704 - kfree(mpcginfo); 705 704 return; 706 705 } 707 706
+11 -4
drivers/spi/spi-imx.c
··· 519 519 { 520 520 u32 reg; 521 521 522 - reg = readl(spi_imx->base + MX51_ECSPI_CTRL); 523 - reg |= MX51_ECSPI_CTRL_XCH; 524 - writel(reg, spi_imx->base + MX51_ECSPI_CTRL); 522 + if (spi_imx->usedma) { 523 + reg = readl(spi_imx->base + MX51_ECSPI_DMA); 524 + reg |= MX51_ECSPI_DMA_TEDEN | MX51_ECSPI_DMA_RXDEN; 525 + writel(reg, spi_imx->base + MX51_ECSPI_DMA); 526 + } else { 527 + reg = readl(spi_imx->base + MX51_ECSPI_CTRL); 528 + reg |= MX51_ECSPI_CTRL_XCH; 529 + writel(reg, spi_imx->base + MX51_ECSPI_CTRL); 530 + } 525 531 } 526 532 527 533 static void mx51_ecspi_disable(struct spi_imx_data *spi_imx) ··· 765 759 writel(MX51_ECSPI_DMA_RX_WML(spi_imx->wml - 1) | 766 760 MX51_ECSPI_DMA_TX_WML(tx_wml) | 767 761 MX51_ECSPI_DMA_RXT_WML(spi_imx->wml) | 768 - MX51_ECSPI_DMA_TEDEN | MX51_ECSPI_DMA_RXDEN | 769 762 MX51_ECSPI_DMA_RXTDEN, spi_imx->base + MX51_ECSPI_DMA); 770 763 } 771 764 ··· 1524 1519 dmaengine_submit(desc_tx); 1525 1520 reinit_completion(&spi_imx->dma_tx_completion); 1526 1521 dma_async_issue_pending(controller->dma_tx); 1522 + 1523 + spi_imx->devtype_data->trigger(spi_imx); 1527 1524 1528 1525 transfer_timeout = spi_imx_calculate_timeout(spi_imx, transfer->len); 1529 1526
+1 -1
drivers/spi/spi-xilinx.c
··· 300 300 301 301 /* Read out all the data from the Rx FIFO */ 302 302 rx_words = n_words; 303 - stalled = 10; 303 + stalled = 32; 304 304 while (rx_words) { 305 305 if (rx_words == n_words && !(stalled--) && 306 306 !(sr & XSPI_SR_TX_EMPTY_MASK) &&
+12
drivers/spi/spi.c
··· 2851 2851 acpi_set_modalias(adev, acpi_device_hid(adev), spi->modalias, 2852 2852 sizeof(spi->modalias)); 2853 2853 2854 + /* 2855 + * This gets re-tried in spi_probe() for -EPROBE_DEFER handling in case 2856 + * the GPIO controller does not have a driver yet. This needs to be done 2857 + * here too, because this call sets the GPIO direction and/or bias. 2858 + * Setting these needs to be done even if there is no driver, in which 2859 + * case spi_probe() will never get called. 2860 + * TODO: ideally the setup of the GPIO should be handled in a generic 2861 + * manner in the ACPI/gpiolib core code. 2862 + */ 2863 + if (spi->irq < 0) 2864 + spi->irq = acpi_dev_gpio_irq_get(adev, 0); 2865 + 2854 2866 acpi_device_set_enumerated(adev); 2855 2867 2856 2868 adev->power.flags.ignore_parent = true;
+66 -12
fs/afs/cell.c
··· 229 229 * @name: The name of the cell. 230 230 * @namesz: The strlen of the cell name. 231 231 * @vllist: A colon/comma separated list of numeric IP addresses or NULL. 232 - * @excl: T if an error should be given if the cell name already exists. 232 + * @reason: The reason we're doing the lookup 233 233 * @trace: The reason to be logged if the lookup is successful. 234 234 * 235 235 * Look up a cell record by name and query the DNS for VL server addresses if ··· 239 239 */ 240 240 struct afs_cell *afs_lookup_cell(struct afs_net *net, 241 241 const char *name, unsigned int namesz, 242 - const char *vllist, bool excl, 242 + const char *vllist, 243 + enum afs_lookup_cell_for reason, 243 244 enum afs_cell_trace trace) 244 245 { 245 246 struct afs_cell *cell, *candidate, *cursor; ··· 248 247 enum afs_cell_state state; 249 248 int ret, n; 250 249 251 - _enter("%s,%s", name, vllist); 250 + _enter("%s,%s,%u", name, vllist, reason); 252 251 253 - if (!excl) { 252 + if (reason != AFS_LOOKUP_CELL_PRELOAD) { 254 253 cell = afs_find_cell(net, name, namesz, trace); 255 - if (!IS_ERR(cell)) 254 + if (!IS_ERR(cell)) { 255 + if (reason == AFS_LOOKUP_CELL_DYNROOT) 256 + goto no_wait; 257 + if (cell->state == AFS_CELL_SETTING_UP || 258 + cell->state == AFS_CELL_UNLOOKED) 259 + goto lookup_cell; 256 260 goto wait_for_cell; 261 + } 257 262 } 258 263 259 264 /* Assume we're probably going to create a cell and preallocate and ··· 305 298 rb_insert_color(&cell->net_node, &net->cells); 306 299 up_write(&net->cells_lock); 307 300 308 - afs_queue_cell(cell, afs_cell_trace_queue_new); 301 + lookup_cell: 302 + if (reason != AFS_LOOKUP_CELL_PRELOAD && 303 + reason != AFS_LOOKUP_CELL_ROOTCELL) { 304 + set_bit(AFS_CELL_FL_DO_LOOKUP, &cell->flags); 305 + afs_queue_cell(cell, afs_cell_trace_queue_new); 306 + } 309 307 310 308 wait_for_cell: 311 - _debug("wait_for_cell"); 312 309 state = smp_load_acquire(&cell->state); /* vs error */ 313 - if (state != AFS_CELL_ACTIVE && 314 - state != AFS_CELL_DEAD) { 310 + switch (state) { 311 + case AFS_CELL_ACTIVE: 312 + case AFS_CELL_DEAD: 313 + break; 314 + case AFS_CELL_UNLOOKED: 315 + default: 316 + if (reason == AFS_LOOKUP_CELL_PRELOAD || 317 + reason == AFS_LOOKUP_CELL_ROOTCELL) 318 + break; 319 + _debug("wait_for_cell"); 315 320 afs_see_cell(cell, afs_cell_trace_wait); 316 321 wait_var_event(&cell->state, 317 322 ({ 318 323 state = smp_load_acquire(&cell->state); /* vs error */ 319 324 state == AFS_CELL_ACTIVE || state == AFS_CELL_DEAD; 320 325 })); 326 + _debug("waited_for_cell %d %d", cell->state, cell->error); 321 327 } 322 328 329 + no_wait: 323 330 /* Check the state obtained from the wait check. */ 331 + state = smp_load_acquire(&cell->state); /* vs error */ 324 332 if (state == AFS_CELL_DEAD) { 325 333 ret = cell->error; 326 334 goto error; 335 + } 336 + if (state == AFS_CELL_ACTIVE) { 337 + switch (cell->dns_status) { 338 + case DNS_LOOKUP_NOT_DONE: 339 + if (cell->dns_source == DNS_RECORD_FROM_CONFIG) { 340 + ret = 0; 341 + break; 342 + } 343 + fallthrough; 344 + default: 345 + ret = -EIO; 346 + goto error; 347 + case DNS_LOOKUP_GOOD: 348 + case DNS_LOOKUP_GOOD_WITH_BAD: 349 + ret = 0; 350 + break; 351 + case DNS_LOOKUP_GOT_NOT_FOUND: 352 + ret = -ENOENT; 353 + goto error; 354 + case DNS_LOOKUP_BAD: 355 + ret = -EREMOTEIO; 356 + goto error; 357 + case DNS_LOOKUP_GOT_LOCAL_FAILURE: 358 + case DNS_LOOKUP_GOT_TEMP_FAILURE: 359 + case DNS_LOOKUP_GOT_NS_FAILURE: 360 + ret = -EDESTADDRREQ; 361 + goto error; 362 + } 327 363 } 328 364 329 365 _leave(" = %p [cell]", cell); ··· 375 325 cell_already_exists: 376 326 _debug("cell exists"); 377 327 cell = cursor; 378 - if (excl) { 328 + if (reason == AFS_LOOKUP_CELL_PRELOAD) { 379 329 ret = -EEXIST; 380 330 } else { 381 331 afs_use_cell(cursor, trace); ··· 434 384 return -EINVAL; 435 385 436 386 /* allocate a cell record for the root/workstation cell */ 437 - new_root = afs_lookup_cell(net, rootcell, len, vllist, false, 387 + new_root = afs_lookup_cell(net, rootcell, len, vllist, 388 + AFS_LOOKUP_CELL_ROOTCELL, 438 389 afs_cell_trace_use_lookup_ws); 439 390 if (IS_ERR(new_root)) { 440 391 _leave(" = %ld", PTR_ERR(new_root)); ··· 828 777 switch (cell->state) { 829 778 case AFS_CELL_SETTING_UP: 830 779 goto set_up_cell; 780 + case AFS_CELL_UNLOOKED: 831 781 case AFS_CELL_ACTIVE: 832 782 goto cell_is_active; 833 783 case AFS_CELL_REMOVING: ··· 849 797 goto remove_cell; 850 798 } 851 799 852 - afs_set_cell_state(cell, AFS_CELL_ACTIVE); 800 + afs_set_cell_state(cell, AFS_CELL_UNLOOKED); 853 801 854 802 cell_is_active: 855 803 if (afs_has_cell_expired(cell, &next_manage)) ··· 859 807 ret = afs_update_cell(cell); 860 808 if (ret < 0) 861 809 cell->error = ret; 810 + if (cell->state == AFS_CELL_UNLOOKED) 811 + afs_set_cell_state(cell, AFS_CELL_ACTIVE); 862 812 863 813 if (next_manage < TIME64_MAX && cell->net->live) {
+2 -1
fs/afs/dynroot.c
··· 108 108 dotted = true; 109 109 } 110 110 111 - cell = afs_lookup_cell(net, name, len, NULL, false, 111 + cell = afs_lookup_cell(net, name, len, NULL, 112 + AFS_LOOKUP_CELL_DYNROOT, 112 113 afs_cell_trace_use_lookup_dynroot); 113 114 if (IS_ERR(cell)) { 114 115 ret = PTR_ERR(cell);
+11 -1
fs/afs/internal.h
··· 343 343 344 344 enum afs_cell_state { 345 345 AFS_CELL_SETTING_UP, 346 + AFS_CELL_UNLOOKED, 346 347 AFS_CELL_ACTIVE, 347 348 AFS_CELL_REMOVING, 348 349 AFS_CELL_DEAD, ··· 1050 1049 extern int afs_cell_init(struct afs_net *, const char *); 1051 1050 extern struct afs_cell *afs_find_cell(struct afs_net *, const char *, unsigned, 1052 1051 enum afs_cell_trace); 1052 + enum afs_lookup_cell_for { 1053 + AFS_LOOKUP_CELL_DYNROOT, 1054 + AFS_LOOKUP_CELL_MOUNTPOINT, 1055 + AFS_LOOKUP_CELL_DIRECT_MOUNT, 1056 + AFS_LOOKUP_CELL_PRELOAD, 1057 + AFS_LOOKUP_CELL_ROOTCELL, 1058 + AFS_LOOKUP_CELL_ALIAS_CHECK, 1059 + }; 1053 1060 struct afs_cell *afs_lookup_cell(struct afs_net *net, 1054 1061 const char *name, unsigned int namesz, 1055 - const char *vllist, bool excl, 1062 + const char *vllist, 1063 + enum afs_lookup_cell_for reason, 1056 1064 enum afs_cell_trace trace); 1057 1065 extern struct afs_cell *afs_use_cell(struct afs_cell *, enum afs_cell_trace); 1058 1066 void afs_unuse_cell(struct afs_cell *cell, enum afs_cell_trace reason);
+2 -1
fs/afs/mntpt.c
··· 107 107 if (size > AFS_MAXCELLNAME) 108 108 return -ENAMETOOLONG; 109 109 110 - cell = afs_lookup_cell(ctx->net, p, size, NULL, false, 110 + cell = afs_lookup_cell(ctx->net, p, size, NULL, 111 + AFS_LOOKUP_CELL_MOUNTPOINT, 111 112 afs_cell_trace_use_lookup_mntpt); 112 113 if (IS_ERR(cell)) { 113 114 pr_err("kAFS: unable to lookup cell '%pd'\n", mntpt);
+2 -1
fs/afs/proc.c
··· 122 122 if (strcmp(buf, "add") == 0) { 123 123 struct afs_cell *cell; 124 124 125 - cell = afs_lookup_cell(net, name, strlen(name), args, true, 125 + cell = afs_lookup_cell(net, name, strlen(name), args, 126 + AFS_LOOKUP_CELL_PRELOAD, 126 127 afs_cell_trace_use_lookup_add); 127 128 if (IS_ERR(cell)) { 128 129 ret = PTR_ERR(cell);
+1 -1
fs/afs/super.c
··· 290 290 /* lookup the cell record */ 291 291 if (cellname) { 292 292 cell = afs_lookup_cell(ctx->net, cellname, cellnamesz, 293 - NULL, false, 293 + NULL, AFS_LOOKUP_CELL_DIRECT_MOUNT, 294 294 afs_cell_trace_use_lookup_mount); 295 295 if (IS_ERR(cell)) { 296 296 pr_err("kAFS: unable to lookup cell '%*.*s'\n",
+2 -1
fs/afs/vl_alias.c
··· 269 269 if (!name_len || name_len > AFS_MAXCELLNAME) 270 270 master = ERR_PTR(-EOPNOTSUPP); 271 271 else 272 - master = afs_lookup_cell(cell->net, cell_name, name_len, NULL, false, 272 + master = afs_lookup_cell(cell->net, cell_name, name_len, NULL, 273 + AFS_LOOKUP_CELL_ALIAS_CHECK, 273 274 afs_cell_trace_use_lookup_canonical); 274 275 kfree(cell_name); 275 276 if (IS_ERR(master))
+18 -1
fs/bfs/inode.c
··· 61 61 off = (ino - BFS_ROOT_INO) % BFS_INODES_PER_BLOCK; 62 62 di = (struct bfs_inode *)bh->b_data + off; 63 63 64 - inode->i_mode = 0x0000FFFF & le32_to_cpu(di->i_mode); 64 + /* 65 + * https://martin.hinner.info/fs/bfs/bfs-structure.html explains that 66 + * BFS in SCO UnixWare environment used only lower 9 bits of di->i_mode 67 + * value. This means that, although bfs_write_inode() saves whole 68 + * inode->i_mode bits (which include S_IFMT bits and S_IS{UID,GID,VTX} 69 + * bits), middle 7 bits of di->i_mode value can be garbage when these 70 + * bits were not saved by bfs_write_inode(). 71 + * Since we can't tell whether middle 7 bits are garbage, use only 72 + * lower 12 bits (i.e. tolerate S_IS{UID,GID,VTX} bits possibly being 73 + * garbage) and reconstruct S_IFMT bits for Linux environment from 74 + * di->i_vtype value. 75 + */ 76 + inode->i_mode = 0x00000FFF & le32_to_cpu(di->i_mode); 65 77 if (le32_to_cpu(di->i_vtype) == BFS_VDIR) { 66 78 inode->i_mode |= S_IFDIR; 67 79 inode->i_op = &bfs_dir_inops; ··· 83 71 inode->i_op = &bfs_file_inops; 84 72 inode->i_fop = &bfs_file_operations; 85 73 inode->i_mapping->a_ops = &bfs_aops; 74 + } else { 75 + brelse(bh); 76 + printf("Unknown vtype=%u %s:%08lx\n", 77 + le32_to_cpu(di->i_vtype), inode->i_sb->s_id, ino); 78 + goto error; 86 79 } 87 80 88 81 BFS_I(inode)->i_sblock = le32_to_cpu(di->i_sblock);
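The mask chosen in the comment above can be verified by hand with a hypothetical on-disk value whose middle bits are stale:

        /* 0x0000F1ED & 0x00000FFF == 0x1ED == 0755 (octal): the low twelve
         * bits (rwx permissions plus setuid/setgid/sticky) survive, while
         * the stale S_IFMT range is dropped and rebuilt from di->i_vtype. */
        umode_t mode = 0x0000F1ED & 0x00000FFF;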
+3 -1
fs/binfmt_misc.c
··· 837 837 inode_unlock(d_inode(root)); 838 838 839 839 if (err) { 840 - if (f) 840 + if (f) { 841 + exe_file_allow_write_access(f); 841 842 filp_close(f, NULL); 843 + } 842 844 kfree(e); 843 845 return err; 844 846 }
+1
fs/efivarfs/super.c
··· 533 533 .init_fs_context = efivarfs_init_fs_context, 534 534 .kill_sb = efivarfs_kill_sb, 535 535 .parameters = efivarfs_parameters, 536 + .fs_flags = FS_POWER_FREEZE, 536 537 }; 537 538 538 539 static __init int efivarfs_init(void)
+4 -1
fs/exfat/super.c
··· 433 433 struct exfat_sb_info *sbi = EXFAT_SB(sb); 434 434 435 435 /* set block size to read super block */ 436 - sb_min_blocksize(sb, 512); 436 + if (!sb_min_blocksize(sb, 512)) { 437 + exfat_err(sb, "unable to set blocksize"); 438 + return -EINVAL; 439 + } 437 440 438 441 /* read boot sector */ 439 442 sbi->boot_bh = sb_bread(sb, 0);
+5 -1
fs/fat/inode.c
··· 1595 1595 1596 1596 setup(sb); /* flavour-specific stuff that needs options */ 1597 1597 1598 + error = -EINVAL; 1599 + if (!sb_min_blocksize(sb, 512)) { 1600 + fat_msg(sb, KERN_ERR, "unable to set blocksize"); 1601 + goto out_fail; 1602 + } 1598 1603 error = -EIO; 1599 - sb_min_blocksize(sb, 512); 1600 1604 bh = sb_bread(sb, 0); 1601 1605 if (bh == NULL) { 1602 1606 fat_msg(sb, KERN_ERR, "unable to read boot sector");
+1 -1
fs/fuse/virtio_fs.c
··· 373 373 374 374 sprintf(buff, "%d", i); 375 375 fsvq->kobj = kobject_create_and_add(buff, fs->mqs_kobj); 376 - if (!fs->mqs_kobj) { 376 + if (!fsvq->kobj) { 377 377 ret = -ENOMEM; 378 378 goto out_del; 379 379 }
+18 -11
fs/hostfs/hostfs_kern.c
··· 979 979 { 980 980 struct hostfs_fs_info *fsi = fc->s_fs_info; 981 981 struct fs_parse_result result; 982 - char *host_root; 982 + char *host_root, *tmp_root; 983 983 int opt; 984 984 985 985 opt = fs_parse(fc, hostfs_param_specs, param, &result); ··· 990 990 case Opt_hostfs: 991 991 host_root = param->string; 992 992 if (!*host_root) 993 - host_root = ""; 994 - fsi->host_root_path = 995 - kasprintf(GFP_KERNEL, "%s/%s", root_ino, host_root); 996 - if (fsi->host_root_path == NULL) 993 + break; 994 + tmp_root = kasprintf(GFP_KERNEL, "%s%s", 995 + fsi->host_root_path, host_root); 996 + if (!tmp_root) 997 997 return -ENOMEM; 998 + kfree(fsi->host_root_path); 999 + fsi->host_root_path = tmp_root; 998 1000 break; 999 1001 } 1000 1002 ··· 1006 1004 static int hostfs_parse_monolithic(struct fs_context *fc, void *data) 1007 1005 { 1008 1006 struct hostfs_fs_info *fsi = fc->s_fs_info; 1009 - char *host_root = (char *)data; 1007 + char *tmp_root, *host_root = (char *)data; 1010 1008 1011 1009 /* NULL is printed as '(null)' by printf(): avoid that. */ 1012 1010 if (host_root == NULL) 1013 - host_root = ""; 1011 + return 0; 1014 1012 1015 - fsi->host_root_path = 1016 - kasprintf(GFP_KERNEL, "%s/%s", root_ino, host_root); 1017 - if (fsi->host_root_path == NULL) 1013 + tmp_root = kasprintf(GFP_KERNEL, "%s%s", fsi->host_root_path, host_root); 1014 + if (!tmp_root) 1018 1015 return -ENOMEM; 1019 - 1016 + kfree(fsi->host_root_path); 1017 + fsi->host_root_path = tmp_root; 1020 1018 return 0; 1021 1019 } 1022 1020 ··· 1051 1049 if (!fsi) 1052 1050 return -ENOMEM; 1053 1051 1052 + fsi->host_root_path = kasprintf(GFP_KERNEL, "%s/", root_ino); 1053 + if (!fsi->host_root_path) { 1054 + kfree(fsi); 1055 + return -ENOMEM; 1056 + } 1054 1057 fc->s_fs_info = fsi; 1055 1058 fc->ops = &hostfs_context_ops; 1056 1059 return 0;
+12
fs/inode.c
··· 1967 1967 } 1968 1968 EXPORT_SYMBOL(iput); 1969 1969 1970 + /** 1971 + * iput_not_last - put an inode assuming this is not the last reference 1972 + * @inode: inode to put 1973 + */ 1974 + void iput_not_last(struct inode *inode) 1975 + { 1976 + VFS_BUG_ON_INODE(atomic_read(&inode->i_count) < 2, inode); 1977 + 1978 + WARN_ON(atomic_sub_return(1, &inode->i_count) == 0); 1979 + } 1980 + EXPORT_SYMBOL(iput_not_last); 1981 + 1970 1982 #ifdef CONFIG_BLOCK 1971 1983 /** 1972 1984 * bmap - find a block number in a file
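iput_not_last() lets a caller drop a reference it knows cannot be the last one, skipping all teardown logic. A minimal userspace model of that contract with C11 atomics; drop_ref_not_last() is an illustrative name, not a kernel interface:

#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

struct obj {
	atomic_int refcount;
};

/* Drop one reference under the caller's guarantee that it is not the
 * last one, so no free/teardown path needs to run here (the kernel
 * helper makes the same promise and only warns if the count hits 0). */
static void drop_ref_not_last(struct obj *o)
{
	int before = atomic_fetch_sub(&o->refcount, 1);

	assert(before >= 2);	/* mirrors VFS_BUG_ON_INODE() + WARN_ON() */
	(void)before;		/* keep -DNDEBUG builds warning-free */
}

int main(void)
{
	struct obj o = { .refcount = 3 };

	drop_ref_not_last(&o);
	printf("refcount now %d\n", atomic_load(&o.refcount));	/* prints 2 */
	return 0;
}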
+5
fs/isofs/inode.c
··· 610 610 goto out_freesbi; 611 611 } 612 612 opt->blocksize = sb_min_blocksize(s, opt->blocksize); 613 + if (!opt->blocksize) { 614 + printk(KERN_ERR 615 + "ISOFS: unable to set blocksize\n"); 616 + goto out_freesbi; 617 + } 613 618 614 619 sbi->s_high_sierra = 0; /* default is iso9660 */ 615 620 sbi->s_session = opt->session;
+18 -28
fs/namespace.c
··· 132 132 */ 133 133 __cacheline_aligned_in_smp DEFINE_SEQLOCK(mount_lock); 134 134 135 - static inline struct mnt_namespace *node_to_mnt_ns(const struct rb_node *node) 136 - { 137 - struct ns_common *ns; 138 - 139 - if (!node) 140 - return NULL; 141 - ns = rb_entry(node, struct ns_common, ns_tree_node); 142 - return container_of(ns, struct mnt_namespace, ns); 143 - } 144 - 145 135 static void mnt_ns_release(struct mnt_namespace *ns) 146 136 { 147 137 /* keep alive for {list,stat}mount() */ ··· 141 151 kfree(ns); 142 152 } 143 153 } 144 - DEFINE_FREE(mnt_ns_release, struct mnt_namespace *, if (_T) mnt_ns_release(_T)) 154 + DEFINE_FREE(mnt_ns_release, struct mnt_namespace *, 155 + if (!IS_ERR(_T)) mnt_ns_release(_T)) 145 156 146 157 static void mnt_ns_release_rcu(struct rcu_head *rcu) 147 158 { ··· 5445 5454 ret = statmount_sb_source(s, seq); 5446 5455 break; 5447 5456 case STATMOUNT_MNT_UIDMAP: 5448 - sm->mnt_uidmap = start; 5457 + offp = &sm->mnt_uidmap; 5449 5458 ret = statmount_mnt_uidmap(s, seq); 5450 5459 break; 5451 5460 case STATMOUNT_MNT_GIDMAP: 5452 - sm->mnt_gidmap = start; 5461 + offp = &sm->mnt_gidmap; 5453 5462 ret = statmount_mnt_gidmap(s, seq); 5454 5463 break; 5455 5464 default: ··· 5727 5736 ret = copy_struct_from_user(kreq, sizeof(*kreq), req, usize); 5728 5737 if (ret) 5729 5738 return ret; 5730 - if (kreq->spare != 0) 5739 + if (kreq->mnt_ns_fd != 0 && kreq->mnt_ns_id) 5731 5740 return -EINVAL; 5732 5741 /* The first valid unique mount id is MNT_UNIQUE_ID_OFFSET + 1. */ 5733 5742 if (kreq->mnt_id <= MNT_UNIQUE_ID_OFFSET) ··· 5744 5753 { 5745 5754 struct mnt_namespace *mnt_ns; 5746 5755 5747 - if (kreq->mnt_ns_id && kreq->spare) 5748 - return ERR_PTR(-EINVAL); 5749 - 5750 - if (kreq->mnt_ns_id) 5751 - return lookup_mnt_ns(kreq->mnt_ns_id); 5752 - 5753 - if (kreq->spare) { 5756 + if (kreq->mnt_ns_id) { 5757 + mnt_ns = lookup_mnt_ns(kreq->mnt_ns_id); 5758 + } else if (kreq->mnt_ns_fd) { 5754 5759 struct ns_common *ns; 5755 5760 5756 - CLASS(fd, f)(kreq->spare); 5761 + CLASS(fd, f)(kreq->mnt_ns_fd); 5757 5762 if (fd_empty(f)) 5758 5763 return ERR_PTR(-EBADF); 5759 5764 ··· 5764 5777 } else { 5765 5778 mnt_ns = current->nsproxy->mnt_ns; 5766 5779 } 5780 + if (!mnt_ns) 5781 + return ERR_PTR(-ENOENT); 5767 5782 5768 5783 refcount_inc(&mnt_ns->passive); 5769 5784 return mnt_ns; ··· 5790 5801 return ret; 5791 5802 5792 5803 ns = grab_requested_mnt_ns(&kreq); 5793 - if (!ns) 5794 - return -ENOENT; 5804 + if (IS_ERR(ns)) 5805 + return PTR_ERR(ns); 5795 5806 5796 5807 if (kreq.mnt_ns_id && (ns != current->nsproxy->mnt_ns) && 5797 5808 !ns_capable_noaudit(ns->user_ns, CAP_SYS_ADMIN)) ··· 5901 5912 static inline int prepare_klistmount(struct klistmount *kls, struct mnt_id_req *kreq, 5902 5913 size_t nr_mnt_ids) 5903 5914 { 5904 - 5905 5915 u64 last_mnt_id = kreq->param; 5916 + struct mnt_namespace *ns; 5906 5917 5907 5918 /* The first valid unique mount id is MNT_UNIQUE_ID_OFFSET + 1. */ 5908 5919 if (last_mnt_id != 0 && last_mnt_id <= MNT_UNIQUE_ID_OFFSET) ··· 5916 5927 if (!kls->kmnt_ids) 5917 5928 return -ENOMEM; 5918 5929 5919 - kls->ns = grab_requested_mnt_ns(kreq); 5920 - if (!kls->ns) 5921 - return -ENOENT; 5930 + ns = grab_requested_mnt_ns(kreq); 5931 + if (IS_ERR(ns)) 5932 + return PTR_ERR(ns); 5933 + kls->ns = ns; 5922 5934 5923 5935 kls->mnt_parent_id = kreq->mnt_id; 5924 5936 return 0;
+8
fs/nfs/client.c
··· 338 338 /* Match the xprt security policy */ 339 339 if (clp->cl_xprtsec.policy != data->xprtsec.policy) 340 340 continue; 341 + if (clp->cl_xprtsec.policy == RPC_XPRTSEC_TLS_X509) { 342 + if (clp->cl_xprtsec.cert_serial != 343 + data->xprtsec.cert_serial) 344 + continue; 345 + if (clp->cl_xprtsec.privkey_serial != 346 + data->xprtsec.privkey_serial) 347 + continue; 348 + } 341 349 342 350 refcount_inc(&clp->cl_count); 343 351 return clp;
+4 -3
fs/nfs/dir.c
··· 2268 2268 return -ENAMETOOLONG; 2269 2269 2270 2270 if (open_flags & O_CREAT) { 2271 - file->f_mode |= FMODE_CREATED; 2272 2271 error = nfs_do_create(dir, dentry, mode, open_flags); 2273 - if (error) 2272 + if (!error) { 2273 + file->f_mode |= FMODE_CREATED; 2274 + return finish_open(file, dentry, NULL); 2275 + } else if (error != -EEXIST || open_flags & O_EXCL) 2274 2276 return error; 2275 - return finish_open(file, dentry, NULL); 2276 2277 } 2277 2278 if (d_in_lookup(dentry)) { 2278 2279 /* The only flags nfs_lookup considers are
+12 -6
fs/nfs/inode.c
··· 718 718 struct nfs_fattr *fattr; 719 719 loff_t oldsize = i_size_read(inode); 720 720 int error = 0; 721 + kuid_t task_uid = current_fsuid(); 722 + kuid_t owner_uid = inode->i_uid; 721 723 722 724 nfs_inc_stats(inode, NFSIOS_VFSSETATTR); 723 725 ··· 741 739 if (nfs_have_delegated_mtime(inode) && attr->ia_valid & ATTR_MTIME) { 742 740 spin_lock(&inode->i_lock); 743 741 if (attr->ia_valid & ATTR_MTIME_SET) { 744 - nfs_set_timestamps_to_ts(inode, attr); 745 - attr->ia_valid &= ~(ATTR_MTIME|ATTR_MTIME_SET| 742 + if (uid_eq(task_uid, owner_uid)) { 743 + nfs_set_timestamps_to_ts(inode, attr); 744 + attr->ia_valid &= ~(ATTR_MTIME|ATTR_MTIME_SET| 746 745 ATTR_ATIME|ATTR_ATIME_SET); 746 + } 747 747 } else { 748 748 nfs_update_timestamps(inode, attr->ia_valid); 749 749 attr->ia_valid &= ~(ATTR_MTIME|ATTR_ATIME); ··· 755 751 attr->ia_valid & ATTR_ATIME && 756 752 !(attr->ia_valid & ATTR_MTIME)) { 757 753 if (attr->ia_valid & ATTR_ATIME_SET) { 758 - spin_lock(&inode->i_lock); 759 - nfs_set_timestamps_to_ts(inode, attr); 760 - spin_unlock(&inode->i_lock); 761 - attr->ia_valid &= ~(ATTR_ATIME|ATTR_ATIME_SET); 754 + if (uid_eq(task_uid, owner_uid)) { 755 + spin_lock(&inode->i_lock); 756 + nfs_set_timestamps_to_ts(inode, attr); 757 + spin_unlock(&inode->i_lock); 758 + attr->ia_valid &= ~(ATTR_ATIME|ATTR_ATIME_SET); 759 + } 762 760 } else { 763 761 nfs_update_delegated_atime(inode); 764 762 attr->ia_valid &= ~ATTR_ATIME;
+121 -108
fs/nfs/localio.c
··· 42 42 /* Begin mostly DIO-specific members */ 43 43 size_t end_len; 44 44 short int end_iter_index; 45 - short int n_iters; 45 + atomic_t n_iters; 46 46 bool iter_is_dio_aligned[NFSLOCAL_MAX_IOS]; 47 - loff_t offset[NFSLOCAL_MAX_IOS] ____cacheline_aligned; 48 - struct iov_iter iters[NFSLOCAL_MAX_IOS]; 47 + struct iov_iter iters[NFSLOCAL_MAX_IOS] ____cacheline_aligned; 49 48 /* End mostly DIO-specific members */ 50 49 }; 51 50 ··· 313 314 init_sync_kiocb(&iocb->kiocb, file); 314 315 315 316 iocb->hdr = hdr; 317 + iocb->kiocb.ki_pos = hdr->args.offset; 316 318 iocb->kiocb.ki_flags &= ~IOCB_APPEND; 319 + iocb->kiocb.ki_complete = NULL; 317 320 iocb->aio_complete_work = NULL; 318 321 319 322 iocb->end_iter_index = -1; ··· 389 388 return true; 390 389 } 391 390 391 + static void 392 + nfs_local_iter_setup(struct iov_iter *iter, int rw, struct bio_vec *bvec, 393 + unsigned int nvecs, unsigned long total, 394 + size_t start, size_t len) 395 + { 396 + iov_iter_bvec(iter, rw, bvec, nvecs, total); 397 + if (start) 398 + iov_iter_advance(iter, start); 399 + iov_iter_truncate(iter, len); 400 + } 401 + 392 402 /* 393 403 * Setup as many as 3 iov_iter based on extents described by @local_dio. 394 404 * Returns the number of iov_iter that were setup. 395 405 */ 396 406 static int 397 407 nfs_local_iters_setup_dio(struct nfs_local_kiocb *iocb, int rw, 398 - unsigned int nvecs, size_t len, 408 + unsigned int nvecs, unsigned long total, 399 409 struct nfs_local_dio *local_dio) 400 410 { 401 411 int n_iters = 0; ··· 414 402 415 403 /* Setup misaligned start? */ 416 404 if (local_dio->start_len) { 417 - iov_iter_bvec(&iters[n_iters], rw, iocb->bvec, nvecs, len); 418 - iters[n_iters].count = local_dio->start_len; 419 - iocb->offset[n_iters] = iocb->hdr->args.offset; 420 - iocb->iter_is_dio_aligned[n_iters] = false; 405 + nfs_local_iter_setup(&iters[n_iters], rw, iocb->bvec, 406 + nvecs, total, 0, local_dio->start_len); 421 407 ++n_iters; 422 408 } 423 409 424 - /* Setup misaligned end? 425 - * If so, the end is purposely setup to be issued using buffered IO 426 - * before the middle (which will use DIO, if DIO-aligned, with AIO). 427 - * This creates problems if/when the end results in a partial write. 428 - * So must save index and length of end to handle this corner case. 410 + /* 411 + * Setup DIO-aligned middle, if there is no misaligned end (below) 412 + * then AIO completion is used, see nfs_local_call_{read,write} 429 413 */ 430 - if (local_dio->end_len) { 431 - iov_iter_bvec(&iters[n_iters], rw, iocb->bvec, nvecs, len); 432 - iocb->offset[n_iters] = local_dio->end_offset; 433 - iov_iter_advance(&iters[n_iters], 434 - local_dio->start_len + local_dio->middle_len); 435 - iocb->iter_is_dio_aligned[n_iters] = false; 436 - /* Save index and length of end */ 437 - iocb->end_iter_index = n_iters; 438 - iocb->end_len = local_dio->end_len; 439 - ++n_iters; 440 - } 441 - 442 - /* Setup DIO-aligned middle to be issued last, to allow for 443 - * DIO with AIO completion (see nfs_local_call_{read,write}). 
444 - */ 445 - iov_iter_bvec(&iters[n_iters], rw, iocb->bvec, nvecs, len); 446 - if (local_dio->start_len) 447 - iov_iter_advance(&iters[n_iters], local_dio->start_len); 448 - iters[n_iters].count -= local_dio->end_len; 449 - iocb->offset[n_iters] = local_dio->middle_offset; 414 + nfs_local_iter_setup(&iters[n_iters], rw, iocb->bvec, nvecs, 415 + total, local_dio->start_len, local_dio->middle_len); 450 416 451 417 iocb->iter_is_dio_aligned[n_iters] = 452 418 nfs_iov_iter_aligned_bvec(&iters[n_iters], ··· 432 442 433 443 if (unlikely(!iocb->iter_is_dio_aligned[n_iters])) { 434 444 trace_nfs_local_dio_misaligned(iocb->hdr->inode, 435 - iocb->hdr->args.offset, len, local_dio); 445 + local_dio->start_len, local_dio->middle_len, local_dio); 436 446 return 0; /* no DIO-aligned IO possible */ 437 447 } 448 + iocb->end_iter_index = n_iters; 438 449 ++n_iters; 439 450 440 - iocb->n_iters = n_iters; 451 + /* Setup misaligned end? */ 452 + if (local_dio->end_len) { 453 + nfs_local_iter_setup(&iters[n_iters], rw, iocb->bvec, 454 + nvecs, total, local_dio->start_len + 455 + local_dio->middle_len, local_dio->end_len); 456 + iocb->end_iter_index = n_iters; 457 + ++n_iters; 458 + } 459 + 460 + atomic_set(&iocb->n_iters, n_iters); 441 461 return n_iters; 442 462 } 443 463 ··· 473 473 } 474 474 len = hdr->args.count - total; 475 475 476 + /* 477 + * For each iocb, iocb->n_iters is always at least 1 and we always 478 + * end io after first nfs_local_pgio_done call unless misaligned DIO. 479 + */ 480 + atomic_set(&iocb->n_iters, 1); 481 + 476 482 if (test_bit(NFS_IOHDR_ODIRECT, &hdr->flags)) { 477 483 struct nfs_local_dio local_dio; 478 484 479 485 if (nfs_is_local_dio_possible(iocb, rw, len, &local_dio) && 480 - nfs_local_iters_setup_dio(iocb, rw, v, len, &local_dio) != 0) 486 + nfs_local_iters_setup_dio(iocb, rw, v, len, &local_dio) != 0) { 487 + /* Ensure DIO WRITE's IO on stable storage upon completion */ 488 + if (rw == ITER_SOURCE) 489 + iocb->kiocb.ki_flags |= IOCB_DSYNC|IOCB_SYNC; 481 490 return; /* is DIO-aligned */ 491 + } 482 492 } 483 493 484 494 /* Use buffered IO */ 485 - iocb->offset[0] = hdr->args.offset; 486 495 iov_iter_bvec(&iocb->iters[0], rw, iocb->bvec, v, len); 487 - iocb->n_iters = 1; 488 496 } 489 497 490 498 static void ··· 512 504 hdr->task.tk_start = ktime_get(); 513 505 } 514 506 515 - static void 516 - nfs_local_pgio_done(struct nfs_pgio_header *hdr, long status) 507 + static bool 508 + nfs_local_pgio_done(struct nfs_local_kiocb *iocb, long status, bool force) 517 509 { 510 + struct nfs_pgio_header *hdr = iocb->hdr; 511 + 518 512 /* Must handle partial completions */ 519 513 if (status >= 0) { 520 514 hdr->res.count += status; ··· 527 517 hdr->res.op_status = nfs_localio_errno_to_nfs4_stat(status); 528 518 hdr->task.tk_status = status; 529 519 } 520 + 521 + if (force) 522 + return true; 523 + 524 + BUG_ON(atomic_read(&iocb->n_iters) <= 0); 525 + return atomic_dec_and_test(&iocb->n_iters); 530 526 } 531 527 532 528 static void ··· 563 547 queue_work(nfsiod_workqueue, &iocb->work); 564 548 } 565 549 566 - static void 567 - nfs_local_read_done(struct nfs_local_kiocb *iocb, long status) 550 + static void nfs_local_read_done(struct nfs_local_kiocb *iocb) 568 551 { 569 552 struct nfs_pgio_header *hdr = iocb->hdr; 570 553 struct file *filp = iocb->kiocb.ki_filp; 554 + long status = hdr->task.tk_status; 571 555 572 556 if ((iocb->kiocb.ki_flags & IOCB_DIRECT) && status == -EINVAL) { 573 557 /* Underlying FS will return -EINVAL if misaligned DIO is attempted. 
*/ ··· 580 564 */ 581 565 hdr->res.replen = 0; 582 566 583 - if (hdr->res.count != hdr->args.count || 584 - hdr->args.offset + hdr->res.count >= i_size_read(file_inode(filp))) 567 + /* nfs_readpage_result() handles short read */ 568 + 569 + if (hdr->args.offset + hdr->res.count >= i_size_read(file_inode(filp))) 585 570 hdr->res.eof = true; 586 571 587 572 dprintk("%s: read %ld bytes eof %d.\n", __func__, 588 573 status > 0 ? status : 0, hdr->res.eof); 574 + } 575 + 576 + static inline void nfs_local_read_iocb_done(struct nfs_local_kiocb *iocb) 577 + { 578 + nfs_local_read_done(iocb); 579 + nfs_local_pgio_release(iocb); 589 580 } 590 581 591 582 static void nfs_local_read_aio_complete_work(struct work_struct *work) ··· 600 577 struct nfs_local_kiocb *iocb = 601 578 container_of(work, struct nfs_local_kiocb, work); 602 579 603 - nfs_local_pgio_release(iocb); 580 + nfs_local_read_iocb_done(iocb); 604 581 } 605 582 606 583 static void nfs_local_read_aio_complete(struct kiocb *kiocb, long ret) ··· 608 585 struct nfs_local_kiocb *iocb = 609 586 container_of(kiocb, struct nfs_local_kiocb, kiocb); 610 587 611 - nfs_local_pgio_done(iocb->hdr, ret); 612 - nfs_local_read_done(iocb, ret); 588 + /* AIO completion of DIO read should always be last to complete */ 589 + if (unlikely(!nfs_local_pgio_done(iocb, ret, false))) 590 + return; 591 + 613 592 nfs_local_pgio_aio_complete(iocb); /* Calls nfs_local_read_aio_complete_work */ 614 593 } 615 594 ··· 621 596 container_of(work, struct nfs_local_kiocb, work); 622 597 struct file *filp = iocb->kiocb.ki_filp; 623 598 const struct cred *save_cred; 599 + bool force_done = false; 624 600 ssize_t status; 601 + int n_iters; 625 602 626 603 save_cred = override_creds(filp->f_cred); 627 604 628 - for (int i = 0; i < iocb->n_iters ; i++) { 605 + n_iters = atomic_read(&iocb->n_iters); 606 + for (int i = 0; i < n_iters ; i++) { 629 607 if (iocb->iter_is_dio_aligned[i]) { 630 608 iocb->kiocb.ki_flags |= IOCB_DIRECT; 631 - iocb->kiocb.ki_complete = nfs_local_read_aio_complete; 632 - iocb->aio_complete_work = nfs_local_read_aio_complete_work; 633 - } 609 + /* Only use AIO completion if DIO-aligned segment is last */ 610 + if (i == iocb->end_iter_index) { 611 + iocb->kiocb.ki_complete = nfs_local_read_aio_complete; 612 + iocb->aio_complete_work = nfs_local_read_aio_complete_work; 613 + } 614 + } else 615 + iocb->kiocb.ki_flags &= ~IOCB_DIRECT; 634 616 635 - iocb->kiocb.ki_pos = iocb->offset[i]; 636 617 status = filp->f_op->read_iter(&iocb->kiocb, &iocb->iters[i]); 637 618 if (status != -EIOCBQUEUED) { 638 - nfs_local_pgio_done(iocb->hdr, status); 639 - if (iocb->hdr->task.tk_status) 619 + if (unlikely(status >= 0 && status < iocb->iters[i].count)) 620 + force_done = true; /* Partial read */ 621 + if (nfs_local_pgio_done(iocb, status, force_done)) { 622 + nfs_local_read_iocb_done(iocb); 640 623 break; 624 + } 641 625 } 642 626 } 643 627 644 628 revert_creds(save_cred); 645 - 646 - if (status != -EIOCBQUEUED) { 647 - nfs_local_read_done(iocb, status); 648 - nfs_local_pgio_release(iocb); 649 - } 650 629 } 651 630 652 631 static int ··· 765 736 fattr->du.nfs3.used = stat.blocks << 9; 766 737 } 767 738 768 - static void 769 - nfs_local_write_done(struct nfs_local_kiocb *iocb, long status) 739 + static void nfs_local_write_done(struct nfs_local_kiocb *iocb) 770 740 { 771 741 struct nfs_pgio_header *hdr = iocb->hdr; 772 - struct inode *inode = hdr->inode; 742 + long status = hdr->task.tk_status; 773 743 774 744 dprintk("%s: wrote %ld bytes.\n", __func__, status > 0 ? 
status : 0); 775 745 ··· 787 759 nfs_set_pgio_error(hdr, -ENOSPC, hdr->args.offset); 788 760 status = -ENOSPC; 789 761 /* record -ENOSPC in terms of nfs_local_pgio_done */ 790 - nfs_local_pgio_done(hdr, status); 762 + (void) nfs_local_pgio_done(iocb, status, true); 791 763 } 792 764 if (hdr->task.tk_status < 0) 793 - nfs_reset_boot_verifier(inode); 765 + nfs_reset_boot_verifier(hdr->inode); 766 + } 767 + 768 + static inline void nfs_local_write_iocb_done(struct nfs_local_kiocb *iocb) 769 + { 770 + nfs_local_write_done(iocb); 771 + nfs_local_vfs_getattr(iocb); 772 + nfs_local_pgio_release(iocb); 794 773 } 795 774 796 775 static void nfs_local_write_aio_complete_work(struct work_struct *work) ··· 805 770 struct nfs_local_kiocb *iocb = 806 771 container_of(work, struct nfs_local_kiocb, work); 807 772 808 - nfs_local_vfs_getattr(iocb); 809 - nfs_local_pgio_release(iocb); 773 + nfs_local_write_iocb_done(iocb); 810 774 } 811 775 812 776 static void nfs_local_write_aio_complete(struct kiocb *kiocb, long ret) ··· 813 779 struct nfs_local_kiocb *iocb = 814 780 container_of(kiocb, struct nfs_local_kiocb, kiocb); 815 781 816 - nfs_local_pgio_done(iocb->hdr, ret); 817 - nfs_local_write_done(iocb, ret); 782 + /* AIO completion of DIO write should always be last to complete */ 783 + if (unlikely(!nfs_local_pgio_done(iocb, ret, false))) 784 + return; 785 + 818 786 nfs_local_pgio_aio_complete(iocb); /* Calls nfs_local_write_aio_complete_work */ 819 787 } 820 788 ··· 827 791 struct file *filp = iocb->kiocb.ki_filp; 828 792 unsigned long old_flags = current->flags; 829 793 const struct cred *save_cred; 794 + bool force_done = false; 830 795 ssize_t status; 796 + int n_iters; 831 797 832 798 current->flags |= PF_LOCAL_THROTTLE | PF_MEMALLOC_NOIO; 833 799 save_cred = override_creds(filp->f_cred); 834 800 835 801 file_start_write(filp); 836 - for (int i = 0; i < iocb->n_iters ; i++) { 802 + n_iters = atomic_read(&iocb->n_iters); 803 + for (int i = 0; i < n_iters ; i++) { 837 804 if (iocb->iter_is_dio_aligned[i]) { 838 805 iocb->kiocb.ki_flags |= IOCB_DIRECT; 839 - iocb->kiocb.ki_complete = nfs_local_write_aio_complete; 840 - iocb->aio_complete_work = nfs_local_write_aio_complete_work; 841 - } 842 - retry: 843 - iocb->kiocb.ki_pos = iocb->offset[i]; 806 + /* Only use AIO completion if DIO-aligned segment is last */ 807 + if (i == iocb->end_iter_index) { 808 + iocb->kiocb.ki_complete = nfs_local_write_aio_complete; 809 + iocb->aio_complete_work = nfs_local_write_aio_complete_work; 810 + } 811 + } else 812 + iocb->kiocb.ki_flags &= ~IOCB_DIRECT; 813 + 844 814 status = filp->f_op->write_iter(&iocb->kiocb, &iocb->iters[i]); 845 815 if (status != -EIOCBQUEUED) { 846 - if (unlikely(status >= 0 && status < iocb->iters[i].count)) { 847 - /* partial write */ 848 - if (i == iocb->end_iter_index) { 849 - /* Must not account partial end, otherwise, due 850 - * to end being issued before middle: the partial 851 - * write accounting in nfs_local_write_done() 852 - * would incorrectly advance hdr->args.offset 853 - */ 854 - status = 0; 855 - } else { 856 - /* Partial write at start or buffered middle, 857 - * exit early. 858 - */ 859 - nfs_local_pgio_done(iocb->hdr, status); 860 - break; 861 - } 862 - } else if (unlikely(status == -ENOTBLK && 863 - (iocb->kiocb.ki_flags & IOCB_DIRECT))) { 864 - /* VFS will return -ENOTBLK if DIO WRITE fails to 865 - * invalidate the page cache. Retry using buffered IO. 
866 - */ 867 - iocb->kiocb.ki_flags &= ~IOCB_DIRECT; 868 - iocb->kiocb.ki_complete = NULL; 869 - iocb->aio_complete_work = NULL; 870 - goto retry; 871 - } 872 - nfs_local_pgio_done(iocb->hdr, status); 873 - if (iocb->hdr->task.tk_status) 816 + if (unlikely(status >= 0 && status < iocb->iters[i].count)) 817 + force_done = true; /* Partial write */ 818 + if (nfs_local_pgio_done(iocb, status, force_done)) { 819 + nfs_local_write_iocb_done(iocb); 874 820 break; 821 + } 875 822 } 876 823 } 877 824 file_end_write(filp); 878 825 879 826 revert_creds(save_cred); 880 827 current->flags = old_flags; 881 - 882 - if (status != -EIOCBQUEUED) { 883 - nfs_local_write_done(iocb, status); 884 - nfs_local_vfs_getattr(iocb); 885 - nfs_local_pgio_release(iocb); 886 - } 887 828 } 888 829 889 830 static int
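The head/middle/tail split that nfs_local_iters_setup_dio() builds its iov_iters around reduces to rounding the start of the request up and its end down to the DIO alignment; whatever remains on either side goes through buffered IO. A standalone sketch of that arithmetic (the start_len/middle_len/end_len names echo struct nfs_local_dio, but this is not kernel code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t offset = 1000, len = 10000, align = 4096;	/* sample request */

	uint64_t mid_start = (offset + align - 1) & ~(align - 1);	/* round up */
	uint64_t end = offset + len;
	uint64_t mid_end = end & ~(align - 1);				/* round down */
	uint64_t start_len = len, middle_len = 0, end_len = 0;

	if (mid_start < mid_end) {		/* an aligned middle exists */
		start_len  = mid_start - offset;	/* buffered head */
		middle_len = mid_end - mid_start;	/* O_DIRECT middle */
		end_len    = end - mid_end;		/* buffered tail */
	}	/* else: nothing is aligned, fall back to buffered IO */

	printf("head=%llu middle=%llu tail=%llu\n",
	       (unsigned long long)start_len,
	       (unsigned long long)middle_len,
	       (unsigned long long)end_len);	/* head=3096 middle=4096 tail=2808 */
	return 0;
}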
+12 -2
fs/nfs/nfs3client.c
··· 2 2 #include <linux/nfs_fs.h> 3 3 #include <linux/nfs_mount.h> 4 4 #include <linux/sunrpc/addr.h> 5 + #include <net/handshake.h> 5 6 #include "internal.h" 6 7 #include "nfs3_fs.h" 7 8 #include "netns.h" ··· 99 98 .net = mds_clp->cl_net, 100 99 .timeparms = &ds_timeout, 101 100 .cred = mds_srv->cred, 102 - .xprtsec = mds_clp->cl_xprtsec, 101 + .xprtsec = { 102 + .policy = RPC_XPRTSEC_NONE, 103 + .cert_serial = TLS_NO_CERT, 104 + .privkey_serial = TLS_NO_PRIVKEY, 105 + }, 103 106 .connect_timeout = connect_timeout, 104 107 .reconnect_timeout = connect_timeout, 105 108 }; ··· 116 111 cl_init.hostname = buf; 117 112 118 113 switch (ds_proto) { 114 + case XPRT_TRANSPORT_TCP_TLS: 115 + if (mds_clp->cl_xprtsec.policy != RPC_XPRTSEC_NONE) 116 + cl_init.xprtsec = mds_clp->cl_xprtsec; 117 + else 118 + ds_proto = XPRT_TRANSPORT_TCP; 119 + fallthrough; 119 120 case XPRT_TRANSPORT_RDMA: 120 121 case XPRT_TRANSPORT_TCP: 121 - case XPRT_TRANSPORT_TCP_TLS: 122 122 if (mds_clp->cl_nconnect > 1) 123 123 cl_init.nconnect = mds_clp->cl_nconnect; 124 124 }
+12 -2
fs/nfs/nfs4client.c
··· 11 11 #include <linux/sunrpc/xprt.h> 12 12 #include <linux/sunrpc/bc_xprt.h> 13 13 #include <linux/sunrpc/rpc_pipe_fs.h> 14 + #include <net/handshake.h> 14 15 #include "internal.h" 15 16 #include "callback.h" 16 17 #include "delegation.h" ··· 984 983 .net = mds_clp->cl_net, 985 984 .timeparms = &ds_timeout, 986 985 .cred = mds_srv->cred, 987 - .xprtsec = mds_srv->nfs_client->cl_xprtsec, 986 + .xprtsec = { 987 + .policy = RPC_XPRTSEC_NONE, 988 + .cert_serial = TLS_NO_CERT, 989 + .privkey_serial = TLS_NO_PRIVKEY, 990 + }, 988 991 }; 989 992 char buf[INET6_ADDRSTRLEN + 1]; 990 993 ··· 997 992 cl_init.hostname = buf; 998 993 999 994 switch (ds_proto) { 995 + case XPRT_TRANSPORT_TCP_TLS: 996 + if (mds_srv->nfs_client->cl_xprtsec.policy != RPC_XPRTSEC_NONE) 997 + cl_init.xprtsec = mds_srv->nfs_client->cl_xprtsec; 998 + else 999 + ds_proto = XPRT_TRANSPORT_TCP; 1000 + fallthrough; 1000 1001 case XPRT_TRANSPORT_RDMA: 1001 1002 case XPRT_TRANSPORT_TCP: 1002 - case XPRT_TRANSPORT_TCP_TLS: 1003 1003 if (mds_clp->cl_nconnect > 1) { 1004 1004 cl_init.nconnect = mds_clp->cl_nconnect; 1005 1005 cl_init.max_connect = NFS_MAX_TRANSPORTS;
+6 -3
fs/nfs/nfs4proc.c
··· 4715 4715 }; 4716 4716 unsigned short task_flags = 0; 4717 4717 4718 - if (NFS_SERVER(inode)->flags & NFS_MOUNT_SOFTREVAL) 4718 + if (server->flags & NFS_MOUNT_SOFTREVAL) 4719 4719 task_flags |= RPC_TASK_TIMEOUT; 4720 + if (server->caps & NFS_CAP_MOVEABLE) 4721 + task_flags |= RPC_TASK_MOVEABLE; 4720 4722 4721 4723 args.bitmask = nfs4_bitmask(server, fattr->label); 4722 4724 4723 4725 nfs_fattr_init(fattr); 4726 + nfs4_init_sequence(&args.seq_args, &res.seq_res, 0, 0); 4724 4727 4725 4728 dprintk("NFS call lookupp ino=0x%lx\n", inode->i_ino); 4726 - status = nfs4_call_sync(clnt, server, &msg, &args.seq_args, 4727 - &res.seq_res, task_flags); 4729 + status = nfs4_do_call_sync(clnt, server, &msg, &args.seq_args, 4730 + &res.seq_res, task_flags); 4728 4731 dprintk("NFS reply lookupp: %d\n", status); 4729 4732 return status; 4730 4733 }
+35 -31
fs/nfs/pnfs_nfs.c
··· 809 809 unsigned int retrans) 810 810 { 811 811 struct nfs_client *clp = ERR_PTR(-EIO); 812 + struct nfs_client *mds_clp = mds_srv->nfs_client; 813 + enum xprtsec_policies xprtsec_policy = mds_clp->cl_xprtsec.policy; 812 814 struct nfs4_pnfs_ds_addr *da; 813 815 unsigned long connect_timeout = timeo * (retrans + 1) * HZ / 10; 816 + int ds_proto; 814 817 int status = 0; 815 818 816 819 dprintk("--> %s DS %s\n", __func__, ds->ds_remotestr); ··· 837 834 .xprtsec = clp->cl_xprtsec, 838 835 }; 839 836 840 - if (da->da_transport != clp->cl_proto && 841 - clp->cl_proto != XPRT_TRANSPORT_TCP_TLS) 842 - continue; 843 - if (da->da_transport == XPRT_TRANSPORT_TCP && 844 - mds_srv->nfs_client->cl_proto == XPRT_TRANSPORT_TCP_TLS) 837 + if (xprt_args.ident == XPRT_TRANSPORT_TCP && 838 + clp->cl_proto == XPRT_TRANSPORT_TCP_TLS) 845 839 xprt_args.ident = XPRT_TRANSPORT_TCP_TLS; 846 840 847 - if (da->da_addr.ss_family != clp->cl_addr.ss_family) 841 + if (xprt_args.ident != clp->cl_proto) 842 + continue; 843 + if (xprt_args.dstaddr->sa_family != 844 + clp->cl_addr.ss_family) 848 845 continue; 849 846 /* Add this address as an alias */ 850 847 rpc_clnt_add_xprt(clp->cl_rpcclient, &xprt_args, 851 - rpc_clnt_test_and_add_xprt, NULL); 848 + rpc_clnt_test_and_add_xprt, NULL); 852 849 continue; 853 850 } 854 - if (da->da_transport == XPRT_TRANSPORT_TCP && 855 - mds_srv->nfs_client->cl_proto == XPRT_TRANSPORT_TCP_TLS) 856 - da->da_transport = XPRT_TRANSPORT_TCP_TLS; 857 - clp = get_v3_ds_connect(mds_srv, 858 - &da->da_addr, 859 - da->da_addrlen, da->da_transport, 860 - timeo, retrans); 851 + 852 + ds_proto = da->da_transport; 853 + if (ds_proto == XPRT_TRANSPORT_TCP && 854 + xprtsec_policy != RPC_XPRTSEC_NONE) 855 + ds_proto = XPRT_TRANSPORT_TCP_TLS; 856 + 857 + clp = get_v3_ds_connect(mds_srv, &da->da_addr, da->da_addrlen, 858 + ds_proto, timeo, retrans); 861 859 if (IS_ERR(clp)) 862 860 continue; 863 861 clp->cl_rpcclient->cl_softerr = 0; ··· 884 880 u32 minor_version) 885 881 { 886 882 struct nfs_client *clp = ERR_PTR(-EIO); 883 + struct nfs_client *mds_clp = mds_srv->nfs_client; 884 + enum xprtsec_policies xprtsec_policy = mds_clp->cl_xprtsec.policy; 887 885 struct nfs4_pnfs_ds_addr *da; 886 + int ds_proto; 888 887 int status = 0; 889 888 890 889 dprintk("--> %s DS %s\n", __func__, ds->ds_remotestr); ··· 915 908 .data = &xprtdata, 916 909 }; 917 910 918 - if (da->da_transport != clp->cl_proto && 919 - clp->cl_proto != XPRT_TRANSPORT_TCP_TLS) 920 - continue; 921 - if (da->da_transport == XPRT_TRANSPORT_TCP && 922 - mds_srv->nfs_client->cl_proto == 923 - XPRT_TRANSPORT_TCP_TLS) { 911 + if (xprt_args.ident == XPRT_TRANSPORT_TCP && 912 + clp->cl_proto == XPRT_TRANSPORT_TCP_TLS) { 924 913 struct sockaddr *addr = 925 914 (struct sockaddr *)&da->da_addr; 926 915 struct sockaddr_in *sin = ··· 947 944 xprt_args.ident = XPRT_TRANSPORT_TCP_TLS; 948 945 xprt_args.servername = servername; 949 946 } 950 - if (da->da_addr.ss_family != clp->cl_addr.ss_family) 947 + if (xprt_args.ident != clp->cl_proto) 948 + continue; 949 + if (xprt_args.dstaddr->sa_family != 950 + clp->cl_addr.ss_family) 951 951 continue; 952 952 953 953 /** ··· 964 958 if (xprtdata.cred) 965 959 put_cred(xprtdata.cred); 966 960 } else { 967 - if (da->da_transport == XPRT_TRANSPORT_TCP && 968 - mds_srv->nfs_client->cl_proto == 969 - XPRT_TRANSPORT_TCP_TLS) 970 - da->da_transport = XPRT_TRANSPORT_TCP_TLS; 971 - clp = nfs4_set_ds_client(mds_srv, 972 - &da->da_addr, 973 - da->da_addrlen, 974 - da->da_transport, timeo, 975 - retrans, minor_version); 961 + 
ds_proto = da->da_transport; 962 + if (ds_proto == XPRT_TRANSPORT_TCP && 963 + xprtsec_policy != RPC_XPRTSEC_NONE) 964 + ds_proto = XPRT_TRANSPORT_TCP_TLS; 965 + 966 + clp = nfs4_set_ds_client(mds_srv, &da->da_addr, 967 + da->da_addrlen, ds_proto, 968 + timeo, retrans, minor_version); 976 969 if (IS_ERR(clp)) 977 970 continue; 978 971 ··· 982 977 clp = ERR_PTR(-EIO); 983 978 continue; 984 979 } 985 - 986 980 } 987 981 } 988 982
+1
fs/nfs/sysfs.c
··· 189 189 return p; 190 190 191 191 kobject_put(&p->kobject); 192 + kobject_put(&p->nfs_net_kobj); 192 193 } 193 194 return NULL; 194 195 }
+3 -1
fs/smb/client/fs_context.c
··· 1435 1435 cifs_errorf(fc, "Unknown error parsing devname\n"); 1436 1436 goto cifs_parse_mount_err; 1437 1437 } 1438 + kfree(ctx->source); 1438 1439 ctx->source = smb3_fs_context_fullpath(ctx, '/'); 1439 1440 if (IS_ERR(ctx->source)) { 1440 1441 ctx->source = NULL; 1441 1442 cifs_errorf(fc, "OOM when copying UNC string\n"); 1442 1443 goto cifs_parse_mount_err; 1443 1444 } 1445 + kfree(fc->source); 1444 1446 fc->source = kstrdup(ctx->source, GFP_KERNEL); 1445 1447 if (fc->source == NULL) { 1446 1448 cifs_errorf(fc, "OOM when copying UNC string\n"); ··· 1470 1468 break; 1471 1469 } 1472 1470 1473 - if (strnlen(param->string, CIFS_MAX_USERNAME_LEN) > 1471 + if (strnlen(param->string, CIFS_MAX_USERNAME_LEN) == 1474 1472 CIFS_MAX_USERNAME_LEN) { 1475 1473 pr_warn("username too long\n"); 1476 1474 goto cifs_parse_mount_err;
+3
fs/smb/client/smbdirect.c
··· 290 290 break; 291 291 292 292 case SMBDIRECT_SOCKET_CREATED: 293 + sc->status = SMBDIRECT_SOCKET_DISCONNECTED; 294 + break; 295 + 293 296 case SMBDIRECT_SOCKET_CONNECTED: 294 297 sc->status = SMBDIRECT_SOCKET_ERROR; 295 298 break;
+1 -1
fs/smb/client/transport.c
··· 830 830 if (!server || server->terminate) 831 831 continue; 832 832 833 - if (CIFS_CHAN_NEEDS_RECONNECT(ses, i)) 833 + if (CIFS_CHAN_NEEDS_RECONNECT(ses, cur)) 834 834 continue; 835 835 836 836 /*
+10 -3
fs/super.c
··· 1183 1183 1184 1184 static const char *filesystems_freeze_ptr = "filesystems_freeze"; 1185 1185 1186 - static void filesystems_freeze_callback(struct super_block *sb, void *unused) 1186 + static void filesystems_freeze_callback(struct super_block *sb, void *freeze_all_ptr) 1187 1187 { 1188 1188 if (!sb->s_op->freeze_fs && !sb->s_op->freeze_super) 1189 + return; 1190 + 1191 + if (freeze_all_ptr && !(sb->s_type->fs_flags & FS_POWER_FREEZE)) 1189 1192 return; 1190 1193 1191 1194 if (!get_active_super(sb)) ··· 1204 1201 deactivate_super(sb); 1205 1202 } 1206 1203 1207 - void filesystems_freeze(void) 1204 + void filesystems_freeze(bool freeze_all) 1208 1205 { 1209 - __iterate_supers(filesystems_freeze_callback, NULL, 1206 + void *freeze_all_ptr = NULL; 1207 + 1208 + if (freeze_all) 1209 + freeze_all_ptr = &freeze_all; 1210 + __iterate_supers(filesystems_freeze_callback, freeze_all_ptr, 1210 1211 SUPER_ITER_UNLOCKED | SUPER_ITER_REVERSE); 1211 1212 } 1212 1213
+4 -1
fs/xfs/xfs_super.c
··· 1693 1693 if (error) 1694 1694 return error; 1695 1695 1696 - sb_min_blocksize(sb, BBSIZE); 1696 + if (!sb_min_blocksize(sb, BBSIZE)) { 1697 + xfs_err(mp, "unable to set blocksize"); 1698 + return -EINVAL; 1699 + } 1697 1700 sb->s_xattr = xfs_xattr_handlers; 1698 1701 sb->s_export_op = &xfs_export_operations; 1699 1702 #ifdef CONFIG_XFS_QUOTA
+1 -1
include/linux/entry-virt.h
··· 32 32 */ 33 33 static inline int arch_xfer_to_guest_mode_handle_work(unsigned long ti_work); 34 34 35 - #ifndef arch_xfer_to_guest_mode_work 35 + #ifndef arch_xfer_to_guest_mode_handle_work 36 36 static inline int arch_xfer_to_guest_mode_handle_work(unsigned long ti_work) 37 37 { 38 38 return 0;
+20
include/linux/filter.h
··· 901 901 cb->data_end = skb->data + skb_headlen(skb); 902 902 } 903 903 904 + static inline int bpf_prog_run_data_pointers( 905 + const struct bpf_prog *prog, 906 + struct sk_buff *skb) 907 + { 908 + struct bpf_skb_data_end *cb = (struct bpf_skb_data_end *)skb->cb; 909 + void *save_data_meta, *save_data_end; 910 + int res; 911 + 912 + save_data_meta = cb->data_meta; 913 + save_data_end = cb->data_end; 914 + 915 + bpf_compute_data_pointers(skb); 916 + res = bpf_prog_run(prog, skb); 917 + 918 + cb->data_meta = save_data_meta; 919 + cb->data_end = save_data_end; 920 + 921 + return res; 922 + } 923 + 904 924 /* Similar to bpf_compute_data_pointers(), except that save orginal 905 925 * data in cb->data and cb->meta_data for restore. 906 926 */
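bpf_prog_run_data_pointers() above is a snapshot/run/restore wrapper around scratch state in skb->cb. The same shape as a standalone sketch; struct ctx and run_with_scratch() are illustrative, not a kernel or libbpf API:

#include <stdio.h>

struct ctx {
	void *data_meta;
	void *data_end;
};

static int callback(struct ctx *c)
{
	c->data_meta = 0;	/* the callee may clobber the scratch fields... */
	c->data_end = 0;
	return 42;
}

/* ...so the wrapper snapshots them first and puts them back afterwards. */
static int run_with_scratch(struct ctx *c, int (*fn)(struct ctx *))
{
	void *save_meta = c->data_meta, *save_end = c->data_end;
	int res = fn(c);

	c->data_meta = save_meta;
	c->data_end = save_end;
	return res;
}

int main(void)
{
	int a, b;
	struct ctx c = { &a, &b };
	int res = run_with_scratch(&c, callback);

	printf("res=%d restored=%d\n", res,
	       c.data_meta == &a && c.data_end == &b);	/* res=42 restored=1 */
	return 0;
}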
+5 -3
include/linux/fs.h
··· 2689 2689 #define FS_ALLOW_IDMAP 32 /* FS has been updated to handle vfs idmappings. */ 2690 2690 #define FS_MGTIME 64 /* FS uses multigrain timestamps */ 2691 2691 #define FS_LBS 128 /* FS supports LBS */ 2692 + #define FS_POWER_FREEZE 256 /* Always freeze on suspend/hibernate */ 2692 2693 #define FS_RENAME_DOES_D_MOVE 32768 /* FS will handle d_move() during rename() internally. */ 2693 2694 int (*init_fs_context)(struct fs_context *); 2694 2695 const struct fs_parameter_spec *parameters; ··· 2824 2823 2825 2824 extern void ihold(struct inode * inode); 2826 2825 extern void iput(struct inode *); 2826 + void iput_not_last(struct inode *); 2827 2827 int inode_update_timestamps(struct inode *inode, int flags); 2828 2828 int generic_update_time(struct inode *, int); 2829 2829 ··· 3425 3423 extern void inode_sb_list_add(struct inode *inode); 3426 3424 extern void inode_add_lru(struct inode *inode); 3427 3425 3428 - extern int sb_set_blocksize(struct super_block *, int); 3429 - extern int sb_min_blocksize(struct super_block *, int); 3426 + int sb_set_blocksize(struct super_block *sb, int size); 3427 + int __must_check sb_min_blocksize(struct super_block *sb, int size); 3430 3428 3431 3429 int generic_file_mmap(struct file *, struct vm_area_struct *); 3432 3430 int generic_file_mmap_prepare(struct vm_area_desc *desc); ··· 3608 3606 extern void iterate_supers(void (*f)(struct super_block *, void *), void *arg); 3609 3607 extern void iterate_supers_type(struct file_system_type *, 3610 3608 void (*)(struct super_block *, void *), void *); 3611 - void filesystems_freeze(void); 3609 + void filesystems_freeze(bool freeze_all); 3612 3610 void filesystems_thaw(void); 3613 3611 3614 3612 extern int dcache_dir_open(struct inode *, struct file *);
+9 -1
include/linux/ftrace.h
··· 193 193 #if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) || \ 194 194 defined(CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS) 195 195 196 + #ifndef arch_ftrace_partial_regs 197 + #define arch_ftrace_partial_regs(regs) do {} while (0) 198 + #endif 199 + 196 200 static __always_inline struct pt_regs * 197 201 ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs) 198 202 { ··· 206 202 * Since arch_ftrace_get_regs() will check some members and may return 207 203 * NULL, we can not use it. 208 204 */ 209 - return &arch_ftrace_regs(fregs)->regs; 205 + regs = &arch_ftrace_regs(fregs)->regs; 206 + 207 + /* Allow arch specific updates to regs. */ 208 + arch_ftrace_partial_regs(regs); 209 + return regs; 210 210 } 211 211 212 212 #endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */
+4 -2
include/linux/highmem.h
··· 249 249 kunmap_local(kaddr); 250 250 } 251 251 252 - #ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGE 252 + #ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGES 253 253 254 - static inline void tag_clear_highpage(struct page *page) 254 + /* Return false to let people know we did not initialize the pages */ 255 + static inline bool tag_clear_highpages(struct page *page, int numpages) 255 256 { 257 + return false; 256 258 } 257 259 258 260 #endif
+10 -3
include/linux/mm.h
··· 2074 2074 return folio_large_nr_pages(folio); 2075 2075 } 2076 2076 2077 - #if !defined(CONFIG_ARCH_HAS_GIGANTIC_PAGE) 2077 + #if !defined(CONFIG_HAVE_GIGANTIC_FOLIOS) 2078 2078 /* 2079 2079 * We don't expect any folios that exceed buddy sizes (and consequently 2080 2080 * memory sections). ··· 2087 2087 * pages are guaranteed to be contiguous. 2088 2088 */ 2089 2089 #define MAX_FOLIO_ORDER PFN_SECTION_SHIFT 2090 - #else 2090 + #elif defined(CONFIG_HUGETLB_PAGE) 2091 2091 /* 2092 2092 * There is no real limit on the folio size. We limit them to the maximum we 2093 - * currently expect (e.g., hugetlb, dax). 2093 + * currently expect (see CONFIG_HAVE_GIGANTIC_FOLIOS): with hugetlb, we expect 2094 + * no folios larger than 16 GiB on 64bit and 1 GiB on 32bit. 2095 + */ 2096 + #define MAX_FOLIO_ORDER get_order(IS_ENABLED(CONFIG_64BIT) ? SZ_16G : SZ_1G) 2097 + #else 2098 + /* 2099 + * Without hugetlb, gigantic folios that are bigger than a single PUD are 2100 + * currently impossible. 2094 2101 */ 2095 2102 #define MAX_FOLIO_ORDER PUD_ORDER 2096 2103 #endif
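For the limits above, a quick check of what MAX_FOLIO_ORDER evaluates to: with 4 KiB pages, 16 GiB is 2^22 pages and 1 GiB is 2^18. A sketch using a userspace stand-in for the kernel's get_order():

#include <stdio.h>

#define PAGE_SHIFT 12	/* assuming 4 KiB base pages */

/* Userspace stand-in for the kernel's get_order(). */
static int get_order(unsigned long long size)
{
	unsigned long long pages =
		(size + (1ULL << PAGE_SHIFT) - 1) >> PAGE_SHIFT;
	int order = 0;

	while ((1ULL << order) < pages)
		order++;
	return order;
}

int main(void)
{
	/* 16 GiB / 4 KiB = 4M pages -> order 22; 1 GiB -> order 18. */
	printf("MAX_FOLIO_ORDER: 64bit=%d 32bit=%d\n",
	       get_order(16ULL << 30), get_order(1ULL << 30));
	return 0;
}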
+2
include/linux/pci.h
··· 412 412 u16 l1ss; /* L1SS Capability pointer */ 413 413 #ifdef CONFIG_PCIEASPM 414 414 struct pcie_link_state *link_state; /* ASPM link state */ 415 + unsigned int aspm_l0s_support:1; /* ASPM L0s support */ 416 + unsigned int aspm_l1_support:1; /* ASPM L1 support */ 415 417 unsigned int ltr_path:1; /* Latency Tolerance Reporting 416 418 supported from root to here */ 417 419 #endif
+2 -1
include/net/xfrm.h
··· 536 536 537 537 static inline const struct xfrm_mode *xfrm_ip2inner_mode(struct xfrm_state *x, int ipproto) 538 538 { 539 - if ((ipproto == IPPROTO_IPIP && x->props.family == AF_INET) || 539 + if ((x->sel.family != AF_UNSPEC) || 540 + (ipproto == IPPROTO_IPIP && x->props.family == AF_INET) || 540 541 (ipproto == IPPROTO_IPV6 && x->props.family == AF_INET6)) 541 542 return &x->inner_mode; 542 543 else
+3
include/uapi/linux/io_uring/query.h
··· 36 36 __u64 enter_flags; 37 37 /* Bitmask of all supported IOSQE_* flags */ 38 38 __u64 sqe_flags; 39 + /* The number of available query opcodes */ 40 + __u32 nr_query_opcodes; 41 + __u32 __pad; 39 42 }; 40 43 41 44 #endif
+1 -1
include/uapi/linux/mount.h
··· 197 197 */ 198 198 struct mnt_id_req { 199 199 __u32 size; 200 - __u32 spare; 200 + __u32 mnt_ns_fd; 201 201 __u64 mnt_id; 202 202 __u64 param; 203 203 __u64 mnt_ns_id;
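A hedged userspace sketch of filling the renamed field: a caller sets either mnt_ns_id or mnt_ns_fd, never both, since the fs/namespace.c hunk above now rejects the combination with -EINVAL. The listmount() syscall number and the LSMT_ROOT value are assumptions here, and the struct is mirrored locally rather than taken from installed headers:

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Local mirror of the uapi struct above (mnt_ns_fd was 'spare'). */
struct mnt_id_req {
	uint32_t size;
	uint32_t mnt_ns_fd;
	uint64_t mnt_id;
	uint64_t param;
	uint64_t mnt_ns_id;
};

#define LSMT_ROOT 0xffffffffffffffffULL	/* root mount of the mount ns */
#define NR_listmount 458		/* assumed x86-64 syscall number */

int main(void)
{
	struct mnt_id_req req = {
		.size = sizeof(req),
		.mnt_id = LSMT_ROOT,
		/* Fill in either .mnt_ns_id or .mnt_ns_fd, never both:
		 * the kernel now answers that combination with -EINVAL. */
	};
	uint64_t ids[64];
	long n = syscall(NR_listmount, &req, ids, 64, 0);

	if (n < 0)
		perror("listmount");
	else
		printf("%ld mounts under the root mount\n", n);
	return 0;
}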
+14 -9
include/uapi/linux/tee.h
··· 249 249 * @cancel_id: [in] Cancellation id, a unique value to identify this request 250 250 * @session: [out] Session id 251 251 * @ret: [out] return value 252 - * @ret_origin [out] origin of the return value 253 - * @num_params [in] number of parameters following this struct 252 + * @ret_origin: [out] origin of the return value 253 + * @num_params: [in] number of &struct tee_ioctl_param entries in @params 254 + * @params: array of ioctl parameters 254 255 */ 255 256 struct tee_ioctl_open_session_arg { 256 257 __u8 uuid[TEE_IOCTL_UUID_LEN]; ··· 277 276 struct tee_ioctl_buf_data) 278 277 279 278 /** 280 - * struct tee_ioctl_invoke_func_arg - Invokes a function in a Trusted 281 - * Application 279 + * struct tee_ioctl_invoke_arg - Invokes a function in a Trusted Application 282 280 * @func: [in] Trusted Application function, specific to the TA 283 281 * @session: [in] Session id 284 282 * @cancel_id: [in] Cancellation id, a unique value to identify this request 285 283 * @ret: [out] return value 286 - * @ret_origin [out] origin of the return value 287 - * @num_params [in] number of parameters following this struct 284 + * @ret_origin: [out] origin of the return value 285 + * @num_params: [in] number of parameters following this struct 286 + * @params: array of ioctl parameters 288 287 */ 289 288 struct tee_ioctl_invoke_arg { 290 289 __u32 func; ··· 339 338 /** 340 339 * struct tee_iocl_supp_recv_arg - Receive a request for a supplicant function 341 340 * @func: [in] supplicant function 342 - * @num_params [in/out] number of parameters following this struct 341 + * @num_params: [in/out] number of &struct tee_ioctl_param entries in @params 342 + * @params: array of ioctl parameters 343 343 * 344 344 * @num_params is the number of params that tee-supplicant has room to 345 345 * receive when input, @num_params is the number of actual params ··· 365 363 /** 366 364 * struct tee_iocl_supp_send_arg - Send a response to a received request 367 365 * @ret: [out] return value 368 - * @num_params [in] number of parameters following this struct 366 + * @num_params: [in] number of &struct tee_ioctl_param entries in @params 367 + * @params: array of ioctl parameters 369 368 */ 370 369 struct tee_iocl_supp_send_arg { 371 370 __u32 ret; ··· 457 454 */ 458 455 459 456 /** 460 - * struct tee_ioctl_invoke_func_arg - Invokes an object in a Trusted Application 457 + * struct tee_ioctl_object_invoke_arg - Invokes an object in a 458 + * Trusted Application 461 459 * @id: [in] Object id 462 460 * @op: [in] Object operation, specific to the object 463 461 * @ret: [out] return value 464 462 * @num_params: [in] number of parameters following this struct 463 + * @params: array of ioctl parameters 465 464 */ 466 465 struct tee_ioctl_object_invoke_arg { 467 466 __u64 id;
+2
io_uring/query.c
··· 20 20 e->ring_setup_flags = IORING_SETUP_FLAGS; 21 21 e->enter_flags = IORING_ENTER_FLAGS; 22 22 e->sqe_flags = SQE_VALID_FLAGS; 23 + e->nr_query_opcodes = __IO_URING_QUERY_MAX; 24 + e->__pad = 0; 23 25 return sizeof(*e); 24 26 } 25 27
+9 -7
io_uring/rsrc.c
··· 943 943 struct req_iterator rq_iter; 944 944 struct io_mapped_ubuf *imu; 945 945 struct io_rsrc_node *node; 946 - struct bio_vec bv, *bvec; 947 - u16 nr_bvecs; 946 + struct bio_vec bv; 947 + unsigned int nr_bvecs = 0; 948 948 int ret = 0; 949 949 950 950 io_ring_submit_lock(ctx, issue_flags); ··· 965 965 goto unlock; 966 966 } 967 967 968 - nr_bvecs = blk_rq_nr_phys_segments(rq); 969 - imu = io_alloc_imu(ctx, nr_bvecs); 968 + /* 969 + * blk_rq_nr_phys_segments() may overestimate the number of bvecs 970 + * but avoids needing to iterate over the bvecs 971 + */ 972 + imu = io_alloc_imu(ctx, blk_rq_nr_phys_segments(rq)); 970 973 if (!imu) { 971 974 kfree(node); 972 975 ret = -ENOMEM; ··· 980 977 imu->len = blk_rq_bytes(rq); 981 978 imu->acct_pages = 0; 982 979 imu->folio_shift = PAGE_SHIFT; 983 - imu->nr_bvecs = nr_bvecs; 984 980 refcount_set(&imu->refs, 1); 985 981 imu->release = release; 986 982 imu->priv = rq; 987 983 imu->is_kbuf = true; 988 984 imu->dir = 1 << rq_data_dir(rq); 989 985 990 - bvec = imu->bvec; 991 986 rq_for_each_bvec(bv, rq, rq_iter) 992 - *bvec++ = bv; 987 + imu->bvec[nr_bvecs++] = bv; 988 + imu->nr_bvecs = nr_bvecs; 993 989 994 990 node->buf = imu; 995 991 data->nodes[index] = node;
+3
io_uring/rw.c
··· 463 463 464 464 void io_readv_writev_cleanup(struct io_kiocb *req) 465 465 { 466 + struct io_async_rw *rw = req->async_data; 467 + 466 468 lockdep_assert_held(&req->ctx->uring_lock); 469 + io_vec_free(&rw->vec); 467 470 io_rw_recycle(req, 0); 468 471 } 469 472
+15 -11
kernel/bpf/helpers.c
··· 4167 4167 } 4168 4168 4169 4169 /** 4170 - * bpf_task_work_schedule_signal - Schedule BPF callback using task_work_add with TWA_SIGNAL mode 4170 + * bpf_task_work_schedule_signal_impl - Schedule BPF callback using task_work_add with TWA_SIGNAL 4171 + * mode 4171 4172 * @task: Task struct for which callback should be scheduled 4172 4173 * @tw: Pointer to struct bpf_task_work in BPF map value for internal bookkeeping 4173 4174 * @map__map: bpf_map that embeds struct bpf_task_work in the values ··· 4177 4176 * 4178 4177 * Return: 0 if task work has been scheduled successfully, negative error code otherwise 4179 4178 */ 4180 - __bpf_kfunc int bpf_task_work_schedule_signal(struct task_struct *task, struct bpf_task_work *tw, 4181 - void *map__map, bpf_task_work_callback_t callback, 4182 - void *aux__prog) 4179 + __bpf_kfunc int bpf_task_work_schedule_signal_impl(struct task_struct *task, 4180 + struct bpf_task_work *tw, void *map__map, 4181 + bpf_task_work_callback_t callback, 4182 + void *aux__prog) 4183 4183 { 4184 4184 return bpf_task_work_schedule(task, tw, map__map, callback, aux__prog, TWA_SIGNAL); 4185 4185 } 4186 4186 4187 4187 /** 4188 - * bpf_task_work_schedule_resume - Schedule BPF callback using task_work_add with TWA_RESUME mode 4188 + * bpf_task_work_schedule_resume_impl - Schedule BPF callback using task_work_add with TWA_RESUME 4189 + * mode 4189 4190 * @task: Task struct for which callback should be scheduled 4190 4191 * @tw: Pointer to struct bpf_task_work in BPF map value for internal bookkeeping 4191 4192 * @map__map: bpf_map that embeds struct bpf_task_work in the values ··· 4196 4193 * 4197 4194 * Return: 0 if task work has been scheduled successfully, negative error code otherwise 4198 4195 */ 4199 - __bpf_kfunc int bpf_task_work_schedule_resume(struct task_struct *task, struct bpf_task_work *tw, 4200 - void *map__map, bpf_task_work_callback_t callback, 4201 - void *aux__prog) 4196 + __bpf_kfunc int bpf_task_work_schedule_resume_impl(struct task_struct *task, 4197 + struct bpf_task_work *tw, void *map__map, 4198 + bpf_task_work_callback_t callback, 4199 + void *aux__prog) 4202 4200 { 4203 4201 return bpf_task_work_schedule(task, tw, map__map, callback, aux__prog, TWA_RESUME); 4204 4202 } ··· 4378 4374 #if defined(CONFIG_BPF_LSM) && defined(CONFIG_CGROUPS) 4379 4375 BTF_ID_FLAGS(func, bpf_cgroup_read_xattr, KF_RCU) 4380 4376 #endif 4381 - BTF_ID_FLAGS(func, bpf_stream_vprintk, KF_TRUSTED_ARGS) 4382 - BTF_ID_FLAGS(func, bpf_task_work_schedule_signal, KF_TRUSTED_ARGS) 4383 - BTF_ID_FLAGS(func, bpf_task_work_schedule_resume, KF_TRUSTED_ARGS) 4377 + BTF_ID_FLAGS(func, bpf_stream_vprintk_impl, KF_TRUSTED_ARGS) 4378 + BTF_ID_FLAGS(func, bpf_task_work_schedule_signal_impl, KF_TRUSTED_ARGS) 4379 + BTF_ID_FLAGS(func, bpf_task_work_schedule_resume_impl, KF_TRUSTED_ARGS) 4384 4380 BTF_KFUNCS_END(common_btf_ids) 4385 4381 4386 4382 static const struct btf_kfunc_id_set common_kfunc_set = {
+2 -1
kernel/bpf/stream.c
··· 355 355 * Avoid using enum bpf_stream_id so that kfunc users don't have to pull in the 356 356 * enum in headers. 357 357 */ 358 - __bpf_kfunc int bpf_stream_vprintk(int stream_id, const char *fmt__str, const void *args, u32 len__sz, void *aux__prog) 358 + __bpf_kfunc int bpf_stream_vprintk_impl(int stream_id, const char *fmt__str, const void *args, 359 + u32 len__sz, void *aux__prog) 359 360 { 360 361 struct bpf_bprintf_data data = { 361 362 .get_bin_args = true,
-5
kernel/bpf/trampoline.c
··· 479 479 * BPF_TRAMP_F_SHARE_IPMODIFY is set, we can generate the 480 480 * trampoline again, and retry register. 481 481 */ 482 - /* reset fops->func and fops->trampoline for re-register */ 483 - tr->fops->func = NULL; 484 - tr->fops->trampoline = 0; 485 - 486 - /* free im memory and reallocate later */ 487 482 bpf_tramp_image_free(im); 488 483 goto again; 489 484 }
+10 -8
kernel/bpf/verifier.c
··· 8866 8866 struct bpf_verifier_state *cur) 8867 8867 { 8868 8868 struct bpf_func_state *fold, *fcur; 8869 - int i, fr; 8869 + int i, fr, num_slots; 8870 8870 8871 8871 reset_idmap_scratch(env); 8872 8872 for (fr = old->curframe; fr >= 0; fr--) { ··· 8879 8879 &fcur->regs[i], 8880 8880 &env->idmap_scratch); 8881 8881 8882 - for (i = 0; i < fold->allocated_stack / BPF_REG_SIZE; i++) { 8882 + num_slots = min(fold->allocated_stack / BPF_REG_SIZE, 8883 + fcur->allocated_stack / BPF_REG_SIZE); 8884 + for (i = 0; i < num_slots; i++) { 8883 8885 if (!is_spilled_reg(&fold->stack[i]) || 8884 8886 !is_spilled_reg(&fcur->stack[i])) 8885 8887 continue; ··· 12261 12259 KF_bpf_res_spin_lock_irqsave, 12262 12260 KF_bpf_res_spin_unlock_irqrestore, 12263 12261 KF___bpf_trap, 12264 - KF_bpf_task_work_schedule_signal, 12265 - KF_bpf_task_work_schedule_resume, 12262 + KF_bpf_task_work_schedule_signal_impl, 12263 + KF_bpf_task_work_schedule_resume_impl, 12266 12264 }; 12267 12265 12268 12266 BTF_ID_LIST(special_kfunc_list) ··· 12333 12331 BTF_ID(func, bpf_res_spin_lock_irqsave) 12334 12332 BTF_ID(func, bpf_res_spin_unlock_irqrestore) 12335 12333 BTF_ID(func, __bpf_trap) 12336 - BTF_ID(func, bpf_task_work_schedule_signal) 12337 - BTF_ID(func, bpf_task_work_schedule_resume) 12334 + BTF_ID(func, bpf_task_work_schedule_signal_impl) 12335 + BTF_ID(func, bpf_task_work_schedule_resume_impl) 12338 12336 12339 12337 static bool is_task_work_add_kfunc(u32 func_id) 12340 12338 { 12341 - return func_id == special_kfunc_list[KF_bpf_task_work_schedule_signal] || 12342 - func_id == special_kfunc_list[KF_bpf_task_work_schedule_resume]; 12339 + return func_id == special_kfunc_list[KF_bpf_task_work_schedule_signal_impl] || 12340 + func_id == special_kfunc_list[KF_bpf_task_work_schedule_resume_impl]; 12343 12341 } 12344 12342 12345 12343 static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
+1 -1
kernel/crash_core.c
··· 373 373 old_res->start = 0; 374 374 old_res->end = 0; 375 375 } else { 376 - crashk_res.end = ram_res->start - 1; 376 + old_res->end = ram_res->start - 1; 377 377 } 378 378 379 379 crash_free_reserved_phys_range(ram_res->start, ram_res->end);
+3 -6
kernel/power/hibernate.c
··· 821 821 goto Restore; 822 822 823 823 ksys_sync_helper(); 824 - if (filesystem_freeze_enabled) 825 - filesystems_freeze(); 824 + filesystems_freeze(filesystem_freeze_enabled); 826 825 827 826 error = freeze_processes(); 828 827 if (error) ··· 927 928 if (error) 928 929 goto restore; 929 930 930 - if (filesystem_freeze_enabled) 931 - filesystems_freeze(); 931 + filesystems_freeze(filesystem_freeze_enabled); 932 932 933 933 error = freeze_processes(); 934 934 if (error) ··· 1077 1079 if (error) 1078 1080 goto Restore; 1079 1081 1080 - if (filesystem_freeze_enabled) 1081 - filesystems_freeze(); 1082 + filesystems_freeze(filesystem_freeze_enabled); 1082 1083 1083 1084 pm_pr_dbg("Preparing processes for hibernation restore.\n"); 1084 1085 error = freeze_processes();
+1 -2
kernel/power/suspend.c
··· 375 375 if (error) 376 376 goto Restore; 377 377 378 - if (filesystem_freeze_enabled) 379 - filesystems_freeze(); 378 + filesystems_freeze(filesystem_freeze_enabled); 380 379 trace_suspend_resume(TPS("freeze_processes"), 0, true); 381 380 error = suspend_freeze_processes(); 382 381 trace_suspend_resume(TPS("freeze_processes"), 0, false);
+13 -9
kernel/power/swap.c
··· 635 635 }; 636 636 637 637 /* Indicates the image size after compression */ 638 - static atomic_t compressed_size = ATOMIC_INIT(0); 638 + static atomic64_t compressed_size = ATOMIC_INIT(0); 639 639 640 640 /* 641 641 * Compression function that runs in its own thread. ··· 664 664 d->ret = crypto_acomp_compress(d->cr); 665 665 d->cmp_len = d->cr->dlen; 666 666 667 - atomic_set(&compressed_size, atomic_read(&compressed_size) + d->cmp_len); 667 + atomic64_add(d->cmp_len, &compressed_size); 668 668 atomic_set_release(&d->stop, 1); 669 669 wake_up(&d->done); 670 670 } ··· 689 689 ktime_t start; 690 690 ktime_t stop; 691 691 size_t off; 692 - unsigned thr, run_threads, nr_threads; 692 + unsigned int thr, run_threads, nr_threads; 693 693 unsigned char *page = NULL; 694 694 struct cmp_data *data = NULL; 695 695 struct crc_data *crc = NULL; 696 696 697 697 hib_init_batch(&hb); 698 698 699 - atomic_set(&compressed_size, 0); 699 + atomic64_set(&compressed_size, 0); 700 700 701 701 /* 702 702 * We'll limit the number of threads for compression to limit memory ··· 877 877 stop = ktime_get(); 878 878 if (!ret) 879 879 ret = err2; 880 - if (!ret) 880 + if (!ret) { 881 + swsusp_show_speed(start, stop, nr_to_write, "Wrote"); 882 + pr_info("Image size after compression: %lld kbytes\n", 883 + (atomic64_read(&compressed_size) / 1024)); 881 884 pr_info("Image saving done\n"); 882 - swsusp_show_speed(start, stop, nr_to_write, "Wrote"); 883 - pr_info("Image size after compression: %d kbytes\n", 884 - (atomic_read(&compressed_size) / 1024)); 885 + } else { 886 + pr_err("Image saving failed: %d\n", ret); 887 + } 885 888 886 889 out_clean: 887 890 hib_finish_batch(&hb); ··· 902 899 } 903 900 vfree(data); 904 901 } 905 - if (page) free_page((unsigned long)page); 902 + if (page) 903 + free_page((unsigned long)page); 906 904 907 905 return ret; 908 906 }
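Two problems motivate the atomic64_t switch above: the old atomic_set(..., atomic_read(...) + len) sequence was a racy read-modify-write across compression threads, and a 32-bit counter cannot represent an image past 2 GiB in any case. The bound, in plain C:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* A 32-bit signed counter tops out just below 2 GiB... */
	printf("32-bit limit: %d bytes (%d MiB)\n",
	       INT32_MAX, INT32_MAX / (1 << 20));	/* 2147483647, 2047 MiB */

	/* ...while a 6 GiB hibernation image is unremarkable for 64 bits. */
	int64_t image = 6LL << 30;
	printf("6 GiB image: %lld bytes -> %lld kbytes\n",
	       (long long)image, (long long)(image / 1024));
	return 0;
}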
+13 -13
kernel/sched/ext.c
··· 25 25 * guarantee system safety. Maintain a dedicated task list which contains every 26 26 * task between its fork and eventual free. 27 27 */ 28 - static DEFINE_SPINLOCK(scx_tasks_lock); 28 + static DEFINE_RAW_SPINLOCK(scx_tasks_lock); 29 29 static LIST_HEAD(scx_tasks); 30 30 31 31 /* ops enable/disable */ ··· 476 476 BUILD_BUG_ON(__SCX_DSQ_ITER_ALL_FLAGS & 477 477 ((1U << __SCX_DSQ_LNODE_PRIV_SHIFT) - 1)); 478 478 479 - spin_lock_irq(&scx_tasks_lock); 479 + raw_spin_lock_irq(&scx_tasks_lock); 480 480 481 481 iter->cursor = (struct sched_ext_entity){ .flags = SCX_TASK_CURSOR }; 482 482 list_add(&iter->cursor.tasks_node, &scx_tasks); ··· 507 507 __scx_task_iter_rq_unlock(iter); 508 508 if (iter->list_locked) { 509 509 iter->list_locked = false; 510 - spin_unlock_irq(&scx_tasks_lock); 510 + raw_spin_unlock_irq(&scx_tasks_lock); 511 511 } 512 512 } 513 513 514 514 static void __scx_task_iter_maybe_relock(struct scx_task_iter *iter) 515 515 { 516 516 if (!iter->list_locked) { 517 - spin_lock_irq(&scx_tasks_lock); 517 + raw_spin_lock_irq(&scx_tasks_lock); 518 518 iter->list_locked = true; 519 519 } 520 520 } ··· 2940 2940 } 2941 2941 } 2942 2942 2943 - spin_lock_irq(&scx_tasks_lock); 2943 + raw_spin_lock_irq(&scx_tasks_lock); 2944 2944 list_add_tail(&p->scx.tasks_node, &scx_tasks); 2945 - spin_unlock_irq(&scx_tasks_lock); 2945 + raw_spin_unlock_irq(&scx_tasks_lock); 2946 2946 2947 2947 percpu_up_read(&scx_fork_rwsem); 2948 2948 } ··· 2966 2966 { 2967 2967 unsigned long flags; 2968 2968 2969 - spin_lock_irqsave(&scx_tasks_lock, flags); 2969 + raw_spin_lock_irqsave(&scx_tasks_lock, flags); 2970 2970 list_del_init(&p->scx.tasks_node); 2971 - spin_unlock_irqrestore(&scx_tasks_lock, flags); 2971 + raw_spin_unlock_irqrestore(&scx_tasks_lock, flags); 2972 2972 2973 2973 /* 2974 2974 * @p is off scx_tasks and wholly ours. scx_enable()'s READY -> ENABLED ··· 4276 4276 size_t avail, used; 4277 4277 bool idle; 4278 4278 4279 - rq_lock(rq, &rf); 4279 + rq_lock_irqsave(rq, &rf); 4280 4280 4281 4281 idle = list_empty(&rq->scx.runnable_list) && 4282 4282 rq->curr->sched_class == &idle_sched_class; ··· 4345 4345 list_for_each_entry(p, &rq->scx.runnable_list, scx.runnable_node) 4346 4346 scx_dump_task(&s, &dctx, p, ' '); 4347 4347 next: 4348 - rq_unlock(rq, &rf); 4348 + rq_unlock_irqrestore(rq, &rf); 4349 4349 } 4350 4350 4351 4351 dump_newline(&s); ··· 5321 5321 BUG_ON(!zalloc_cpumask_var_node(&rq->scx.cpus_to_kick_if_idle, GFP_KERNEL, n)); 5322 5322 BUG_ON(!zalloc_cpumask_var_node(&rq->scx.cpus_to_preempt, GFP_KERNEL, n)); 5323 5323 BUG_ON(!zalloc_cpumask_var_node(&rq->scx.cpus_to_wait, GFP_KERNEL, n)); 5324 - init_irq_work(&rq->scx.deferred_irq_work, deferred_irq_workfn); 5325 - init_irq_work(&rq->scx.kick_cpus_irq_work, kick_cpus_irq_workfn); 5324 + rq->scx.deferred_irq_work = IRQ_WORK_INIT_HARD(deferred_irq_workfn); 5325 + rq->scx.kick_cpus_irq_work = IRQ_WORK_INIT_HARD(kick_cpus_irq_workfn); 5326 5326 5327 5327 if (cpu_online(cpu)) 5328 5328 cpu_rq(cpu)->scx.flags |= SCX_RQ_ONLINE; ··· 6401 6401 6402 6402 guard(rcu)(); 6403 6403 6404 - sch = rcu_dereference(sch); 6404 + sch = rcu_dereference(scx_root); 6405 6405 if (unlikely(!sch)) 6406 6406 return; 6407 6407
+6 -6
kernel/time/posix-timers.c
··· 475 475 if (!kc->timer_create) 476 476 return -EOPNOTSUPP; 477 477 478 - new_timer = alloc_posix_timer(); 479 - if (unlikely(!new_timer)) 480 - return -EAGAIN; 481 - 482 - spin_lock_init(&new_timer->it_lock); 483 - 484 478 /* Special case for CRIU to restore timers with a given timer ID. */ 485 479 if (unlikely(current->signal->timer_create_restore_ids)) { 486 480 if (copy_from_user(&req_id, created_timer_id, sizeof(req_id))) ··· 483 489 if ((unsigned int)req_id > INT_MAX) 484 490 return -EINVAL; 485 491 } 492 + 493 + new_timer = alloc_posix_timer(); 494 + if (unlikely(!new_timer)) 495 + return -EAGAIN; 496 + 497 + spin_lock_init(&new_timer->it_lock); 486 498 487 499 /* 488 500 * Add the timer to the hash table. The timer is not yet valid
+45 -15
kernel/trace/ftrace.c
··· 1971 1971 */ 1972 1972 static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops, 1973 1973 struct ftrace_hash *old_hash, 1974 - struct ftrace_hash *new_hash) 1974 + struct ftrace_hash *new_hash, 1975 + bool update_target) 1975 1976 { 1976 1977 struct ftrace_page *pg; 1977 1978 struct dyn_ftrace *rec, *end = NULL; ··· 2007 2006 if (rec->flags & FTRACE_FL_DISABLED) 2008 2007 continue; 2009 2008 2010 - /* We need to update only differences of filter_hash */ 2009 + /* 2010 + * Unless we are updating the target of a direct function, 2011 + * we only need to update differences of filter_hash 2012 + */ 2011 2013 in_old = !!ftrace_lookup_ip(old_hash, rec->ip); 2012 2014 in_new = !!ftrace_lookup_ip(new_hash, rec->ip); 2013 - if (in_old == in_new) 2015 + if (!update_target && (in_old == in_new)) 2014 2016 continue; 2015 2017 2016 2018 if (in_new) { ··· 2024 2020 if (is_ipmodify) 2025 2021 goto rollback; 2026 2022 2027 - FTRACE_WARN_ON(rec->flags & FTRACE_FL_DIRECT); 2023 + /* 2024 + * If this is called by __modify_ftrace_direct() 2025 + * then it is only changing where the direct 2026 + * pointer is jumping to, and the record already 2027 + * points to a direct trampoline. If it isn't, 2028 + * then it is a bug to update ipmodify on a direct 2029 + * caller. 2030 + */ 2031 + FTRACE_WARN_ON(!update_target && 2032 + (rec->flags & FTRACE_FL_DIRECT)); 2028 2033 2029 2034 /* 2030 2035 * Another ops with IPMODIFY is already ··· 2089 2076 if (ftrace_hash_empty(hash)) 2090 2077 hash = NULL; 2091 2078 2092 - return __ftrace_hash_update_ipmodify(ops, EMPTY_HASH, hash); 2079 + return __ftrace_hash_update_ipmodify(ops, EMPTY_HASH, hash, false); 2093 2080 } 2094 2081 2095 2082 /* Disabling always succeeds */ ··· 2100 2087 if (ftrace_hash_empty(hash)) 2101 2088 hash = NULL; 2102 2089 2103 - __ftrace_hash_update_ipmodify(ops, hash, EMPTY_HASH); 2090 + __ftrace_hash_update_ipmodify(ops, hash, EMPTY_HASH, false); 2104 2091 } 2105 2092 2106 2093 static int ftrace_hash_ipmodify_update(struct ftrace_ops *ops, ··· 2114 2101 if (ftrace_hash_empty(new_hash)) 2115 2102 new_hash = NULL; 2116 2103 2117 - return __ftrace_hash_update_ipmodify(ops, old_hash, new_hash); 2104 + return __ftrace_hash_update_ipmodify(ops, old_hash, new_hash, false); 2118 2105 } 2119 2106 2120 2107 static void print_ip_ins(const char *fmt, const unsigned char *p) ··· 5966 5953 free_ftrace_hash(fhp); 5967 5954 } 5968 5955 5956 + static void reset_direct(struct ftrace_ops *ops, unsigned long addr) 5957 + { 5958 + struct ftrace_hash *hash = ops->func_hash->filter_hash; 5959 + 5960 + remove_direct_functions_hash(hash, addr); 5961 + 5962 + /* cleanup for possible another register call */ 5963 + ops->func = NULL; 5964 + ops->trampoline = 0; 5965 + } 5966 + 5969 5967 /** 5970 5968 * register_ftrace_direct - Call a custom trampoline directly 5971 5969 * for multiple functions registered in @ops ··· 6072 6048 ops->direct_call = addr; 6073 6049 6074 6050 err = register_ftrace_function_nolock(ops); 6051 + if (err) 6052 + reset_direct(ops, addr); 6075 6053 6076 6054 out_unlock: 6077 6055 mutex_unlock(&direct_mutex); ··· 6106 6080 int unregister_ftrace_direct(struct ftrace_ops *ops, unsigned long addr, 6107 6081 bool free_filters) 6108 6082 { 6109 - struct ftrace_hash *hash = ops->func_hash->filter_hash; 6110 6083 int err; 6111 6084 6112 6085 if (check_direct_multi(ops)) ··· 6115 6090 6116 6091 mutex_lock(&direct_mutex); 6117 6092 err = unregister_ftrace_function(ops); 6118 - remove_direct_functions_hash(hash, addr); 6093 + reset_direct(ops, addr); 
6119 6094 mutex_unlock(&direct_mutex); 6120 - 6121 - /* cleanup for possible another register call */ 6122 - ops->func = NULL; 6123 - ops->trampoline = 0; 6124 6095 6125 6096 if (free_filters) 6126 6097 ftrace_free_filter(ops); ··· 6127 6106 static int 6128 6107 __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr) 6129 6108 { 6130 - struct ftrace_hash *hash; 6109 + struct ftrace_hash *hash = ops->func_hash->filter_hash; 6131 6110 struct ftrace_func_entry *entry, *iter; 6132 6111 static struct ftrace_ops tmp_ops = { 6133 6112 .func = ftrace_stub, ··· 6148 6127 return err; 6149 6128 6150 6129 /* 6130 + * Call __ftrace_hash_update_ipmodify() here, so that we can call 6131 + * ops->ops_func for the ops. This is needed because the above 6132 + * register_ftrace_function_nolock() worked on tmp_ops. 6133 + */ 6134 + err = __ftrace_hash_update_ipmodify(ops, hash, hash, true); 6135 + if (err) 6136 + goto out; 6137 + 6138 + /* 6151 6139 * Now the ftrace_ops_list_func() is called to do the direct callers. 6152 6140 * We can safely change the direct functions attached to each entry. 6153 6141 */ 6154 6142 mutex_lock(&ftrace_lock); 6155 6143 6156 - hash = ops->func_hash->filter_hash; 6157 6144 size = 1 << hash->size_bits; 6158 6145 for (i = 0; i < size; i++) { 6159 6146 hlist_for_each_entry(iter, &hash->buckets[i], hlist) { ··· 6176 6147 6177 6148 mutex_unlock(&ftrace_lock); 6178 6149 6150 + out: 6179 6151 /* Removing the tmp_ops will add the updated direct callers to the functions */ 6180 6152 unregister_ftrace_function(&tmp_ops); 6181 6153
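Reviewer note: the ftrace.c hunks thread a new update_target flag through __ftrace_hash_update_ipmodify() so that __modify_ftrace_direct() can notify IPMODIFY users when only the jump target changes, and factor out reset_direct() so a failed register_ftrace_direct() no longer leaves a half-initialized ops behind. For context, a hedged sketch of how a caller drives this API; my_tramp and the traced ip are hypothetical:

    #include <linux/ftrace.h>

    static struct ftrace_ops direct_ops;
    extern void my_tramp(void);         /* assembly trampoline, hypothetical */

    static int attach_direct(unsigned long ip)
    {
            int err;

            err = ftrace_set_filter_ip(&direct_ops, ip, 0, 0);
            if (err)
                    return err;
            /* On failure this now runs reset_direct() internally, so
             * direct_ops can be reused for another register call. */
            return register_ftrace_direct(&direct_ops, (unsigned long)my_tramp);
    }

    static void detach_direct(void)
    {
            unregister_ftrace_direct(&direct_ops, (unsigned long)my_tramp,
                                     /* free_filters */ true);
    }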
+3
lib/test_kho.c
··· 301 301 phys_addr_t fdt_phys; 302 302 int err; 303 303 304 + if (!kho_is_enabled()) 305 + return 0; 306 + 304 307 err = kho_retrieve_subtree(KHO_TEST_FDT, &fdt_phys); 305 308 if (!err) 306 309 return kho_test_restore(fdt_phys);
+7
mm/Kconfig
··· 908 908 config PGTABLE_HAS_HUGE_LEAVES 909 909 def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE 910 910 911 + # 912 + # We can end up creating gigantic folio. 913 + # 914 + config HAVE_GIGANTIC_FOLIOS 915 + def_bool (HUGETLB_PAGE && ARCH_HAS_GIGANTIC_PAGE) || \ 916 + (ZONE_DEVICE && HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) 917 + 911 918 # TODO: Allow to be enabled without THP 912 919 config ARCH_SUPPORTS_HUGE_PFNMAP 913 920 def_bool n
+4 -2
mm/huge_memory.c
··· 3522 3522 /* order-1 is not supported for anonymous THP. */ 3523 3523 VM_WARN_ONCE(warns && new_order == 1, 3524 3524 "Cannot split to order-1 folio"); 3525 - return new_order != 1; 3525 + if (new_order == 1) 3526 + return false; 3526 3527 } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && 3527 3528 !mapping_large_folio_support(folio->mapping)) { 3528 3529 /* ··· 3554 3553 if (folio_test_anon(folio)) { 3555 3554 VM_WARN_ONCE(warns && new_order == 1, 3556 3555 "Cannot split to order-1 folio"); 3557 - return new_order != 1; 3556 + if (new_order == 1) 3557 + return false; 3558 3558 } else if (new_order) { 3559 3559 if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && 3560 3560 !mapping_large_folio_support(folio->mapping)) {
+2 -1
mm/memblock.c
··· 1826 1826 */ 1827 1827 unsigned long __init memblock_estimated_nr_free_pages(void) 1828 1828 { 1829 - return PHYS_PFN(memblock_phys_mem_size() - memblock_reserved_size()); 1829 + return PHYS_PFN(memblock_phys_mem_size() - 1830 + memblock_reserved_kern_size(MEMBLOCK_ALLOC_ANYWHERE, NUMA_NO_NODE)); 1830 1831 } 1831 1832 1832 1833 /* lowest address */
+2 -7
mm/page_alloc.c
··· 1822 1822 * If memory tags should be zeroed 1823 1823 * (which happens only when memory should be initialized as well). 1824 1824 */ 1825 - if (zero_tags) { 1826 - /* Initialize both memory and memory tags. */ 1827 - for (i = 0; i != 1 << order; ++i) 1828 - tag_clear_highpage(page + i); 1825 + if (zero_tags) 1826 + init = !tag_clear_highpages(page, 1 << order); 1829 1827 1830 - /* Take note that memory was initialized by the loop above. */ 1831 - init = false; 1832 - } 1833 1828 if (!should_skip_kasan_unpoison(gfp_flags) && 1834 1829 kasan_unpoison_pages(page, order, init)) { 1835 1830 /* Take note that memory was initialized by KASAN. */
+7 -8
mm/shmem.c
··· 131 131 #define SHMEM_SEEN_INODES 2 132 132 #define SHMEM_SEEN_HUGE 4 133 133 #define SHMEM_SEEN_INUMS 8 134 - #define SHMEM_SEEN_NOSWAP 16 135 - #define SHMEM_SEEN_QUOTA 32 134 + #define SHMEM_SEEN_QUOTA 16 136 135 }; 137 136 138 137 #ifdef CONFIG_TRANSPARENT_HUGEPAGE ··· 4679 4680 "Turning off swap in unprivileged tmpfs mounts unsupported"); 4680 4681 } 4681 4682 ctx->noswap = true; 4682 - ctx->seen |= SHMEM_SEEN_NOSWAP; 4683 4683 break; 4684 4684 case Opt_quota: 4685 4685 if (fc->user_ns != &init_user_ns) ··· 4828 4830 err = "Current inum too high to switch to 32-bit inums"; 4829 4831 goto out; 4830 4832 } 4831 - if ((ctx->seen & SHMEM_SEEN_NOSWAP) && ctx->noswap && !sbinfo->noswap) { 4833 + 4834 + /* 4835 + * "noswap" doesn't use fsparam_flag_no, i.e. there's no "swap" 4836 + * counterpart for (re-)enabling swap. 4837 + */ 4838 + if (ctx->noswap && !sbinfo->noswap) { 4832 4839 err = "Cannot disable swap on remount"; 4833 - goto out; 4834 - } 4835 - if (!(ctx->seen & SHMEM_SEEN_NOSWAP) && !ctx->noswap && sbinfo->noswap) { 4836 - err = "Cannot enable swap on remount if it was disabled on first mount"; 4837 4840 goto out; 4838 4841 } 4839 4842
+6 -2
mm/slub.c
··· 6336 6336 6337 6337 if (unlikely(!slab_free_hook(s, p[i], init, false))) { 6338 6338 p[i] = p[--size]; 6339 - if (!size) 6340 - goto flush_remote; 6341 6339 continue; 6342 6340 } 6343 6341 ··· 6349 6351 6350 6352 i++; 6351 6353 } 6354 + 6355 + if (!size) 6356 + goto flush_remote; 6352 6357 6353 6358 next_batch: 6354 6359 if (!local_trylock(&s->cpu_sheaves->lock)) ··· 6406 6405 size -= batch; 6407 6406 goto next_batch; 6408 6407 } 6408 + 6409 + if (remote_nr) 6410 + goto flush_remote; 6409 6411 6410 6412 return; 6411 6413
+13
mm/swap_state.c
··· 748 748 749 749 blk_start_plug(&plug); 750 750 for (addr = start; addr < end; ilx++, addr += PAGE_SIZE) { 751 + struct swap_info_struct *si = NULL; 752 + 751 753 if (!pte++) { 752 754 pte = pte_offset_map(vmf->pmd, addr); 753 755 if (!pte) ··· 763 761 continue; 764 762 pte_unmap(pte); 765 763 pte = NULL; 764 + /* 765 + * Readahead entry may come from a device that we are not 766 + * holding a reference to, try to grab a reference, or skip. 767 + */ 768 + if (swp_type(entry) != swp_type(targ_entry)) { 769 + si = get_swap_device(entry); 770 + if (!si) 771 + continue; 772 + } 766 773 folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, 767 774 &page_allocated, false); 775 + if (si) 776 + put_swap_device(si); 768 777 if (!folio) 769 778 continue; 770 779 if (page_allocated) {
+3
net/core/dev_ioctl.c
··· 443 443 struct ifreq ifrr; 444 444 int err; 445 445 446 + if (!kernel_cfg->ifr) 447 + return -EINVAL; 448 + 446 449 strscpy_pad(ifrr.ifr_name, dev->name, IFNAMSIZ); 447 450 ifrr.ifr_ifru = kernel_cfg->ifr->ifr_ifru; 448 451
+3 -1
net/devlink/rate.c
··· 828 828 if (!devlink_rate->parent) 829 829 continue; 830 830 831 - refcount_dec(&devlink_rate->parent->refcnt); 832 831 if (devlink_rate_is_leaf(devlink_rate)) 833 832 ops->rate_leaf_parent_set(devlink_rate, NULL, devlink_rate->priv, 834 833 NULL, NULL); 835 834 else if (devlink_rate_is_node(devlink_rate)) 836 835 ops->rate_node_parent_set(devlink_rate, NULL, devlink_rate->priv, 837 836 NULL, NULL); 837 + 838 + refcount_dec(&devlink_rate->parent->refcnt); 839 + devlink_rate->parent = NULL; 838 840 } 839 841 list_for_each_entry_safe(devlink_rate, tmp, &devlink->rate_list, list) { 840 842 if (devlink_rate_is_node(devlink_rate)) {
+4 -2
net/ipv4/esp4_offload.c
··· 122 122 struct sk_buff *skb, 123 123 netdev_features_t features) 124 124 { 125 - __be16 type = x->inner_mode.family == AF_INET6 ? htons(ETH_P_IPV6) 126 - : htons(ETH_P_IP); 125 + const struct xfrm_mode *inner_mode = xfrm_ip2inner_mode(x, 126 + XFRM_MODE_SKB_CB(skb)->protocol); 127 + __be16 type = inner_mode->family == AF_INET6 ? htons(ETH_P_IPV6) 128 + : htons(ETH_P_IP); 127 129 128 130 return skb_eth_gso_segment(skb, features, type); 129 131 }
+4 -2
net/ipv6/esp6_offload.c
··· 158 158 struct sk_buff *skb, 159 159 netdev_features_t features) 160 160 { 161 - __be16 type = x->inner_mode.family == AF_INET ? htons(ETH_P_IP) 162 - : htons(ETH_P_IPV6); 161 + const struct xfrm_mode *inner_mode = xfrm_ip2inner_mode(x, 162 + XFRM_MODE_SKB_CB(skb)->protocol); 163 + __be16 type = inner_mode->family == AF_INET ? htons(ETH_P_IP) 164 + : htons(ETH_P_IPV6); 163 165 164 166 return skb_eth_gso_segment(skb, features, type); 165 167 }
+3 -3
net/l2tp/l2tp_core.c
··· 1246 1246 else 1247 1247 l2tp_build_l2tpv3_header(session, __skb_push(skb, session->hdr_len)); 1248 1248 1249 - /* Reset skb netfilter state */ 1250 - memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); 1251 - IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | IPSKB_REROUTED); 1249 + /* Reset control buffer */ 1250 + memset(skb->cb, 0, sizeof(skb->cb)); 1251 + 1252 1252 nf_reset_ct(skb); 1253 1253 1254 1254 /* L2TP uses its own lockdep subclass to avoid lockdep splats caused by
+53 -1
net/mptcp/options.c
··· 838 838 839 839 opts->suboptions = 0; 840 840 841 + /* Force later mptcp_write_options(), but do not use any actual 842 + * option space. 843 + */ 841 844 if (unlikely(__mptcp_check_fallback(msk) && !mptcp_check_infinite_map(skb))) 842 - return false; 845 + return true; 843 846 844 847 if (unlikely(skb && TCP_SKB_CB(skb)->tcp_flags & TCPHDR_RST)) { 845 848 if (mptcp_established_options_fastclose(sk, &opt_size, remaining, opts) || ··· 1044 1041 WRITE_ONCE(msk->snd_una, new_snd_una); 1045 1042 } 1046 1043 1044 + static void rwin_update(struct mptcp_sock *msk, struct sock *ssk, 1045 + struct sk_buff *skb) 1046 + { 1047 + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); 1048 + struct tcp_sock *tp = tcp_sk(ssk); 1049 + u64 mptcp_rcv_wnd; 1050 + 1051 + /* Avoid touching extra cachelines if TCP is going to accept this 1052 + * skb without filling the TCP-level window even with a possibly 1053 + * outdated mptcp-level rwin. 1054 + */ 1055 + if (!skb->len || skb->len < tcp_receive_window(tp)) 1056 + return; 1057 + 1058 + mptcp_rcv_wnd = atomic64_read(&msk->rcv_wnd_sent); 1059 + if (!after64(mptcp_rcv_wnd, subflow->rcv_wnd_sent)) 1060 + return; 1061 + 1062 + /* Some other subflow grew the mptcp-level rwin since rcv_wup, 1063 + * resync. 1064 + */ 1065 + tp->rcv_wnd += mptcp_rcv_wnd - subflow->rcv_wnd_sent; 1066 + subflow->rcv_wnd_sent = mptcp_rcv_wnd; 1067 + } 1068 + 1047 1069 static void ack_update_msk(struct mptcp_sock *msk, 1048 1070 struct sock *ssk, 1049 1071 struct mptcp_options_received *mp_opt) ··· 1236 1208 */ 1237 1209 if (mp_opt.use_ack) 1238 1210 ack_update_msk(msk, sk, &mp_opt); 1211 + rwin_update(msk, sk, skb); 1239 1212 1240 1213 /* Zero-data-length packets are dropped by the caller and not 1241 1214 * propagated to the MPTCP layer, so the skb extension does not ··· 1323 1294 1324 1295 if (rcv_wnd_new != rcv_wnd_old) { 1325 1296 raise_win: 1297 + /* The msk-level rcv wnd is after the tcp level one, 1298 + * sync the latter. 1299 + */ 1300 + rcv_wnd_new = rcv_wnd_old; 1326 1301 win = rcv_wnd_old - ack_seq; 1327 1302 tp->rcv_wnd = min_t(u64, win, U32_MAX); 1328 1303 new_win = tp->rcv_wnd; ··· 1349 1316 } 1350 1317 1351 1318 update_wspace: 1319 + WRITE_ONCE(msk->old_wspace, tp->rcv_wnd); 1320 + subflow->rcv_wnd_sent = rcv_wnd_new; 1321 + } 1322 + 1323 + static void mptcp_track_rwin(struct tcp_sock *tp) 1324 + { 1325 + const struct sock *ssk = (const struct sock *)tp; 1326 + struct mptcp_subflow_context *subflow; 1327 + struct mptcp_sock *msk; 1328 + 1329 + if (!ssk) 1330 + return; 1331 + 1332 + subflow = mptcp_subflow_ctx(ssk); 1333 + msk = mptcp_sk(subflow->conn); 1352 1334 WRITE_ONCE(msk->old_wspace, tp->rcv_wnd); 1353 1335 } 1354 1336 ··· 1658 1610 TCPOLEN_MPTCP_RST, 1659 1611 opts->reset_transient, 1660 1612 opts->reset_reason); 1613 + return; 1614 + } else if (unlikely(!opts->suboptions)) { 1615 + /* Fallback to TCP */ 1616 + mptcp_track_rwin(tp); 1661 1617 return; 1662 1618 } 1663 1619
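Reviewer note: the resync in rwin_update() is plain offset arithmetic: whatever the msk-level right edge advanced since this subflow last announced its window is added to the subflow's TCP-level window. With invented numbers, wrapped as a sketch:

    static void rwin_update_example(void)
    {
            u64 rcv_wnd_sent  = 1000;   /* right edge this subflow last sent */
            u64 mptcp_rcv_wnd = 1600;   /* current msk-level right edge */
            u32 tcp_rcv_wnd   = 200;    /* subflow's current TCP window */

            /* Another subflow grew the msk window by 600 bytes since rcv_wup: */
            tcp_rcv_wnd += mptcp_rcv_wnd - rcv_wnd_sent;    /* 200 + 600 = 800 */
            rcv_wnd_sent = mptcp_rcv_wnd;                   /* back in sync */
    }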
+13 -7
net/mptcp/pm.c
··· 18 18 u8 retrans_times; 19 19 struct timer_list add_timer; 20 20 struct mptcp_sock *sock; 21 + struct rcu_head rcu; 21 22 }; 22 23 23 24 static DEFINE_SPINLOCK(mptcp_pm_list_lock); ··· 156 155 157 156 entry = mptcp_pm_del_add_timer(msk, addr, false); 158 157 ret = entry; 159 - kfree(entry); 158 + kfree_rcu(entry, rcu); 160 159 161 160 return ret; 162 161 } ··· 346 345 { 347 346 struct mptcp_pm_add_entry *entry; 348 347 struct sock *sk = (struct sock *)msk; 349 - struct timer_list *add_timer = NULL; 348 + bool stop_timer = false; 349 + 350 + rcu_read_lock(); 350 351 351 352 spin_lock_bh(&msk->pm.lock); 352 353 entry = mptcp_lookup_anno_list_by_saddr(msk, addr); 353 354 if (entry && (!check_id || entry->addr.id == addr->id)) { 354 355 entry->retrans_times = ADD_ADDR_RETRANS_MAX; 355 - add_timer = &entry->add_timer; 356 + stop_timer = true; 356 357 } 357 358 if (!check_id && entry) 358 359 list_del(&entry->list); 359 360 spin_unlock_bh(&msk->pm.lock); 360 361 361 - /* no lock, because sk_stop_timer_sync() is calling timer_delete_sync() */ 362 - if (add_timer) 363 - sk_stop_timer_sync(sk, add_timer); 362 + /* Note: entry might have been removed by another thread. 363 + * We hold rcu_read_lock() to ensure it is not freed under us. 364 + */ 365 + if (stop_timer) 366 + sk_stop_timer_sync(sk, &entry->add_timer); 364 367 368 + rcu_read_unlock(); 365 369 return entry; 366 370 } 367 371 ··· 421 415 422 416 list_for_each_entry_safe(entry, tmp, &free_list, list) { 423 417 sk_stop_timer_sync(sk, &entry->add_timer); 424 - kfree(entry); 418 + kfree_rcu(entry, rcu); 425 419 } 426 420 } 427 421
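Reviewer note: entries are now freed with kfree_rcu() so that mptcp_pm_del_add_timer() can drop pm.lock and still call sk_stop_timer_sync() on an entry a racing thread may have unlinked. The generic shape of that pattern, as a sketch; struct foo and foo_lookup() are hypothetical stand-ins:

    struct foo {
            struct list_head list;
            struct timer_list timer;
            struct rcu_head rcu;        /* freed via kfree_rcu() */
    };

    struct foo *foo_lookup(void);       /* hypothetical list search */

    static void foo_stop_timer(spinlock_t *lock)
    {
            struct foo *f;
            bool stop = false;

            rcu_read_lock();            /* keeps *f alive if unlinked */
            spin_lock_bh(lock);
            f = foo_lookup();
            if (f)
                    stop = true;
            spin_unlock_bh(lock);

            /* A racing thread may already have list_del()'d f, but it is
             * only freed via kfree_rcu(), so the timer is still valid. */
            if (stop)
                    timer_delete_sync(&f->timer);
            rcu_read_unlock();
    }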
+1 -1
net/mptcp/pm_kernel.c
··· 680 680 681 681 void mptcp_pm_nl_rm_addr(struct mptcp_sock *msk, u8 rm_id) 682 682 { 683 - if (rm_id && WARN_ON_ONCE(msk->pm.add_addr_accepted == 0)) { 683 + if (rm_id && !WARN_ON_ONCE(msk->pm.add_addr_accepted == 0)) { 684 684 u8 limit_add_addr_accepted = 685 685 mptcp_pm_get_limit_add_addr_accepted(msk); 686 686
+57 -27
net/mptcp/protocol.c
··· 61 61 62 62 static const struct proto_ops *mptcp_fallback_tcp_ops(const struct sock *sk) 63 63 { 64 + unsigned short family = READ_ONCE(sk->sk_family); 65 + 64 66 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 65 - if (sk->sk_prot == &tcpv6_prot) 67 + if (family == AF_INET6) 66 68 return &inet6_stream_ops; 67 69 #endif 68 - WARN_ON_ONCE(sk->sk_prot != &tcp_prot); 70 + WARN_ON_ONCE(family != AF_INET); 69 71 return &inet_stream_ops; 70 72 } 71 73 ··· 77 75 78 76 if (__mptcp_check_fallback(msk)) 79 77 return true; 78 + 79 + /* The caller possibly is not holding the msk socket lock, but 80 + * in the fallback case only the current subflow is touching 81 + * the OoO queue. 82 + */ 83 + if (!RB_EMPTY_ROOT(&msk->out_of_order_queue)) 84 + return false; 80 85 81 86 spin_lock_bh(&msk->fallback_lock); 82 87 if (!msk->allow_infinite_fallback) { ··· 944 935 945 936 bool mptcp_schedule_work(struct sock *sk) 946 937 { 947 - if (inet_sk_state_load(sk) != TCP_CLOSE && 948 - schedule_work(&mptcp_sk(sk)->work)) { 949 - /* each subflow already holds a reference to the sk, and the 950 - * workqueue is invoked by a subflow, so sk can't go away here. 951 - */ 952 - sock_hold(sk); 938 + if (inet_sk_state_load(sk) == TCP_CLOSE) 939 + return false; 940 + 941 + /* Get a reference on this socket, mptcp_worker() will release it. 942 + * As mptcp_worker() might complete before us, we can not avoid 943 + * a sock_hold()/sock_put() if schedule_work() returns false. 944 + */ 945 + sock_hold(sk); 946 + 947 + if (schedule_work(&mptcp_sk(sk)->work)) 953 948 return true; 954 - } 949 + 950 + sock_put(sk); 955 951 return false; 956 952 } 957 953 ··· 2412 2398 2413 2399 /* flags for __mptcp_close_ssk() */ 2414 2400 #define MPTCP_CF_PUSH BIT(1) 2415 - #define MPTCP_CF_FASTCLOSE BIT(2) 2416 2401 2417 2402 /* be sure to send a reset only if the caller asked for it, also 2418 2403 * clean completely the subflow status when the subflow reaches ··· 2422 2409 unsigned int flags) 2423 2410 { 2424 2411 if (((1 << ssk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) || 2425 - (flags & MPTCP_CF_FASTCLOSE)) { 2412 + subflow->send_fastclose) { 2426 2413 /* The MPTCP code never wait on the subflow sockets, TCP-level 2427 2414 * disconnect should never fail 2428 2415 */ ··· 2469 2456 2470 2457 lock_sock_nested(ssk, SINGLE_DEPTH_NESTING); 2471 2458 2472 - if ((flags & MPTCP_CF_FASTCLOSE) && !__mptcp_check_fallback(msk)) { 2473 - /* be sure to force the tcp_close path 2474 - * to generate the egress reset 2475 - */ 2476 - ssk->sk_lingertime = 0; 2477 - sock_set_flag(ssk, SOCK_LINGER); 2478 - subflow->send_fastclose = 1; 2479 - } 2459 + if (subflow->send_fastclose && ssk->sk_state != TCP_CLOSE) 2460 + tcp_set_state(ssk, TCP_CLOSE); 2480 2461 2481 2462 need_push = (flags & MPTCP_CF_PUSH) && __mptcp_retransmit_pending_data(sk); 2482 2463 if (!dispose_it) { ··· 2566 2559 2567 2560 if (ssk_state != TCP_CLOSE && 2568 2561 (ssk_state != TCP_CLOSE_WAIT || 2569 - inet_sk_state_load(sk) != TCP_ESTABLISHED)) 2562 + inet_sk_state_load(sk) != TCP_ESTABLISHED || 2563 + __mptcp_check_fallback(msk))) 2570 2564 continue; 2571 2565 2572 2566 /* 'subflow_data_ready' will re-sched once rx queue is empty */ ··· 2775 2767 struct mptcp_sock *msk = mptcp_sk(sk); 2776 2768 2777 2769 mptcp_set_state(sk, TCP_CLOSE); 2778 - mptcp_for_each_subflow_safe(msk, subflow, tmp) 2779 - __mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow), 2780 - subflow, MPTCP_CF_FASTCLOSE); 2770 + 2771 + /* Explicitly send the fastclose reset as need */ 2772 + if (__mptcp_check_fallback(msk)) 2773 + return; 2774 
+ 2775 + mptcp_for_each_subflow_safe(msk, subflow, tmp) { 2776 + struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 2777 + 2778 + lock_sock(ssk); 2779 + 2780 + /* Some subflow socket states don't allow/need a reset.*/ 2781 + if ((1 << ssk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) 2782 + goto unlock; 2783 + 2784 + subflow->send_fastclose = 1; 2785 + tcp_send_active_reset(ssk, ssk->sk_allocation, 2786 + SK_RST_REASON_TCP_ABORT_ON_CLOSE); 2787 + unlock: 2788 + release_sock(ssk); 2789 + } 2781 2790 } 2782 2791 2783 2792 static void mptcp_worker(struct work_struct *work) ··· 2821 2796 __mptcp_close_subflow(sk); 2822 2797 2823 2798 if (mptcp_close_tout_expired(sk)) { 2799 + struct mptcp_subflow_context *subflow, *tmp; 2800 + 2824 2801 mptcp_do_fastclose(sk); 2802 + mptcp_for_each_subflow_safe(msk, subflow, tmp) 2803 + __mptcp_close_ssk(sk, subflow->tcp_sock, subflow, 0); 2825 2804 mptcp_close_wake_up(sk); 2826 2805 } 2827 2806 ··· 3250 3221 /* msk->subflow is still intact, the following will not free the first 3251 3222 * subflow 3252 3223 */ 3253 - mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE); 3224 + mptcp_do_fastclose(sk); 3225 + mptcp_destroy_common(msk); 3254 3226 3255 3227 /* The first subflow is already in TCP_CLOSE status, the following 3256 3228 * can't overlap with a fallback anymore ··· 3430 3400 msk->rcvq_space.space = TCP_INIT_CWND * TCP_MSS_DEFAULT; 3431 3401 } 3432 3402 3433 - void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags) 3403 + void mptcp_destroy_common(struct mptcp_sock *msk) 3434 3404 { 3435 3405 struct mptcp_subflow_context *subflow, *tmp; 3436 3406 struct sock *sk = (struct sock *)msk; ··· 3439 3409 3440 3410 /* join list will be eventually flushed (with rst) at sock lock release time */ 3441 3411 mptcp_for_each_subflow_safe(msk, subflow, tmp) 3442 - __mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow), subflow, flags); 3412 + __mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow), subflow, 0); 3443 3413 3444 3414 __skb_queue_purge(&sk->sk_receive_queue); 3445 3415 skb_rbtree_purge(&msk->out_of_order_queue); ··· 3457 3427 3458 3428 /* allow the following to close even the initial subflow */ 3459 3429 msk->free_first = 1; 3460 - mptcp_destroy_common(msk, 0); 3430 + mptcp_destroy_common(msk); 3461 3431 sk_sockets_allocated_dec(sk); 3462 3432 } 3463 3433
+2 -1
net/mptcp/protocol.h
··· 509 509 u64 remote_key; 510 510 u64 idsn; 511 511 u64 map_seq; 512 + u64 rcv_wnd_sent; 512 513 u32 snd_isn; 513 514 u32 token; 514 515 u32 rel_write_seq; ··· 977 976 local_bh_enable(); 978 977 } 979 978 980 - void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags); 979 + void mptcp_destroy_common(struct mptcp_sock *msk); 981 980 982 981 #define MPTCP_TOKEN_MAX_RETRIES 4 983 982
+8
net/mptcp/subflow.c
··· 2144 2144 tcp_prot_override = tcp_prot; 2145 2145 tcp_prot_override.release_cb = tcp_release_cb_override; 2146 2146 tcp_prot_override.diag_destroy = tcp_abort_override; 2147 + #ifdef CONFIG_BPF_SYSCALL 2148 + /* Disable sockmap processing for subflows */ 2149 + tcp_prot_override.psock_update_sk_prot = NULL; 2150 + #endif 2147 2151 2148 2152 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 2149 2153 /* In struct mptcp_subflow_request_sock, we assume the TCP request sock ··· 2184 2180 tcpv6_prot_override = tcpv6_prot; 2185 2181 tcpv6_prot_override.release_cb = tcp_release_cb_override; 2186 2182 tcpv6_prot_override.diag_destroy = tcp_abort_override; 2183 + #ifdef CONFIG_BPF_SYSCALL 2184 + /* Disable sockmap processing for subflows */ 2185 + tcpv6_prot_override.psock_update_sk_prot = NULL; 2186 + #endif 2187 2187 #endif 2188 2188 2189 2189 mptcp_diag_subflow_init(&subflow_ulp_ops);
+1 -67
net/openvswitch/actions.c
··· 572 572 return 0; 573 573 } 574 574 575 - static int set_nsh(struct sk_buff *skb, struct sw_flow_key *flow_key, 576 - const struct nlattr *a) 577 - { 578 - struct nshhdr *nh; 579 - size_t length; 580 - int err; 581 - u8 flags; 582 - u8 ttl; 583 - int i; 584 - 585 - struct ovs_key_nsh key; 586 - struct ovs_key_nsh mask; 587 - 588 - err = nsh_key_from_nlattr(a, &key, &mask); 589 - if (err) 590 - return err; 591 - 592 - /* Make sure the NSH base header is there */ 593 - if (!pskb_may_pull(skb, skb_network_offset(skb) + NSH_BASE_HDR_LEN)) 594 - return -ENOMEM; 595 - 596 - nh = nsh_hdr(skb); 597 - length = nsh_hdr_len(nh); 598 - 599 - /* Make sure the whole NSH header is there */ 600 - err = skb_ensure_writable(skb, skb_network_offset(skb) + 601 - length); 602 - if (unlikely(err)) 603 - return err; 604 - 605 - nh = nsh_hdr(skb); 606 - skb_postpull_rcsum(skb, nh, length); 607 - flags = nsh_get_flags(nh); 608 - flags = OVS_MASKED(flags, key.base.flags, mask.base.flags); 609 - flow_key->nsh.base.flags = flags; 610 - ttl = nsh_get_ttl(nh); 611 - ttl = OVS_MASKED(ttl, key.base.ttl, mask.base.ttl); 612 - flow_key->nsh.base.ttl = ttl; 613 - nsh_set_flags_and_ttl(nh, flags, ttl); 614 - nh->path_hdr = OVS_MASKED(nh->path_hdr, key.base.path_hdr, 615 - mask.base.path_hdr); 616 - flow_key->nsh.base.path_hdr = nh->path_hdr; 617 - switch (nh->mdtype) { 618 - case NSH_M_TYPE1: 619 - for (i = 0; i < NSH_MD1_CONTEXT_SIZE; i++) { 620 - nh->md1.context[i] = 621 - OVS_MASKED(nh->md1.context[i], key.context[i], 622 - mask.context[i]); 623 - } 624 - memcpy(flow_key->nsh.context, nh->md1.context, 625 - sizeof(nh->md1.context)); 626 - break; 627 - case NSH_M_TYPE2: 628 - memset(flow_key->nsh.context, 0, 629 - sizeof(flow_key->nsh.context)); 630 - break; 631 - default: 632 - return -EINVAL; 633 - } 634 - skb_postpush_rcsum(skb, nh, length); 635 - return 0; 636 - } 637 - 638 575 /* Must follow skb_ensure_writable() since that can move the skb data. */ 639 576 static void set_tp_port(struct sk_buff *skb, __be16 *port, 640 577 __be16 new_port, __sum16 *check) ··· 1067 1130 get_mask(a, struct ovs_key_ethernet *)); 1068 1131 break; 1069 1132 1070 - case OVS_KEY_ATTR_NSH: 1071 - err = set_nsh(skb, flow_key, a); 1072 - break; 1073 - 1074 1133 case OVS_KEY_ATTR_IPV4: 1075 1134 err = set_ipv4(skb, flow_key, nla_data(a), 1076 1135 get_mask(a, struct ovs_key_ipv4 *)); ··· 1103 1170 case OVS_KEY_ATTR_CT_LABELS: 1104 1171 case OVS_KEY_ATTR_CT_ORIG_TUPLE_IPV4: 1105 1172 case OVS_KEY_ATTR_CT_ORIG_TUPLE_IPV6: 1173 + case OVS_KEY_ATTR_NSH: 1106 1174 err = -EINVAL; 1107 1175 break; 1108 1176 }
+8 -56
net/openvswitch/flow_netlink.c
··· 1305 1305 return 0; 1306 1306 } 1307 1307 1308 + /* 1309 + * Constructs NSH header 'nh' from attributes of OVS_ACTION_ATTR_PUSH_NSH, 1310 + * where 'nh' points to a memory block of 'size' bytes. It's assumed that 1311 + * attributes were previously validated with validate_push_nsh(). 1312 + */ 1308 1313 int nsh_hdr_from_nlattr(const struct nlattr *attr, 1309 1314 struct nshhdr *nh, size_t size) 1310 1315 { ··· 1319 1314 u8 ttl = 0; 1320 1315 int mdlen = 0; 1321 1316 1322 - /* validate_nsh has check this, so we needn't do duplicate check here 1323 - */ 1324 1317 if (size < NSH_BASE_HDR_LEN) 1325 1318 return -ENOBUFS; 1326 1319 ··· 1358 1355 /* nsh header length = NSH_BASE_HDR_LEN + mdlen */ 1359 1356 nh->ver_flags_ttl_len = 0; 1360 1357 nsh_set_flags_ttl_len(nh, flags, ttl, NSH_BASE_HDR_LEN + mdlen); 1361 - 1362 - return 0; 1363 - } 1364 - 1365 - int nsh_key_from_nlattr(const struct nlattr *attr, 1366 - struct ovs_key_nsh *nsh, struct ovs_key_nsh *nsh_mask) 1367 - { 1368 - struct nlattr *a; 1369 - int rem; 1370 - 1371 - /* validate_nsh has check this, so we needn't do duplicate check here 1372 - */ 1373 - nla_for_each_nested(a, attr, rem) { 1374 - int type = nla_type(a); 1375 - 1376 - switch (type) { 1377 - case OVS_NSH_KEY_ATTR_BASE: { 1378 - const struct ovs_nsh_key_base *base = nla_data(a); 1379 - const struct ovs_nsh_key_base *base_mask = base + 1; 1380 - 1381 - nsh->base = *base; 1382 - nsh_mask->base = *base_mask; 1383 - break; 1384 - } 1385 - case OVS_NSH_KEY_ATTR_MD1: { 1386 - const struct ovs_nsh_key_md1 *md1 = nla_data(a); 1387 - const struct ovs_nsh_key_md1 *md1_mask = md1 + 1; 1388 - 1389 - memcpy(nsh->context, md1->context, sizeof(*md1)); 1390 - memcpy(nsh_mask->context, md1_mask->context, 1391 - sizeof(*md1_mask)); 1392 - break; 1393 - } 1394 - case OVS_NSH_KEY_ATTR_MD2: 1395 - /* Not supported yet */ 1396 - return -ENOTSUPP; 1397 - default: 1398 - return -EINVAL; 1399 - } 1400 - } 1401 1358 1402 1359 return 0; 1403 1360 } ··· 2802 2839 return err; 2803 2840 } 2804 2841 2805 - static bool validate_nsh(const struct nlattr *attr, bool is_mask, 2806 - bool is_push_nsh, bool log) 2842 + static bool validate_push_nsh(const struct nlattr *attr, bool log) 2807 2843 { 2808 2844 struct sw_flow_match match; 2809 2845 struct sw_flow_key key; 2810 - int ret = 0; 2811 2846 2812 2847 ovs_match_init(&match, &key, true, NULL); 2813 - ret = nsh_key_put_from_nlattr(attr, &match, is_mask, 2814 - is_push_nsh, log); 2815 - return !ret; 2848 + return !nsh_key_put_from_nlattr(attr, &match, false, true, log); 2816 2849 } 2817 2850 2818 2851 /* Return false if there are any non-masked bits set. ··· 2954 2995 flow_key->ip.proto != IPPROTO_SCTP) 2955 2996 return -EINVAL; 2956 2997 2957 - break; 2958 - 2959 - case OVS_KEY_ATTR_NSH: 2960 - if (eth_type != htons(ETH_P_NSH)) 2961 - return -EINVAL; 2962 - if (!validate_nsh(nla_data(a), masked, false, log)) 2963 - return -EINVAL; 2964 2998 break; 2965 2999 2966 3000 default: ··· 3389 3437 return -EINVAL; 3390 3438 } 3391 3439 mac_proto = MAC_PROTO_NONE; 3392 - if (!validate_nsh(nla_data(a), false, true, true)) 3440 + if (!validate_push_nsh(nla_data(a), log)) 3393 3441 return -EINVAL; 3394 3442 break; 3395 3443
-2
net/openvswitch/flow_netlink.h
··· 65 65 void ovs_nla_free_flow_actions(struct sw_flow_actions *); 66 66 void ovs_nla_free_flow_actions_rcu(struct sw_flow_actions *); 67 67 68 - int nsh_key_from_nlattr(const struct nlattr *attr, struct ovs_key_nsh *nsh, 69 - struct ovs_key_nsh *nsh_mask); 70 68 int nsh_hdr_from_nlattr(const struct nlattr *attr, struct nshhdr *nh, 71 69 size_t size); 72 70
+2 -4
net/sched/act_bpf.c
··· 47 47 filter = rcu_dereference(prog->filter); 48 48 if (at_ingress) { 49 49 __skb_push(skb, skb->mac_len); 50 - bpf_compute_data_pointers(skb); 51 - filter_res = bpf_prog_run(filter, skb); 50 + filter_res = bpf_prog_run_data_pointers(filter, skb); 52 51 __skb_pull(skb, skb->mac_len); 53 52 } else { 54 - bpf_compute_data_pointers(skb); 55 - filter_res = bpf_prog_run(filter, skb); 53 + filter_res = bpf_prog_run_data_pointers(filter, skb); 56 54 } 57 55 if (unlikely(!skb->tstamp && skb->tstamp_type)) 58 56 skb->tstamp_type = SKB_CLOCK_REALTIME;
+2 -4
net/sched/cls_bpf.c
··· 97 97 } else if (at_ingress) { 98 98 /* It is safe to push/pull even if skb_shared() */ 99 99 __skb_push(skb, skb->mac_len); 100 - bpf_compute_data_pointers(skb); 101 - filter_res = bpf_prog_run(prog->filter, skb); 100 + filter_res = bpf_prog_run_data_pointers(prog->filter, skb); 102 101 __skb_pull(skb, skb->mac_len); 103 102 } else { 104 - bpf_compute_data_pointers(skb); 105 - filter_res = bpf_prog_run(prog->filter, skb); 103 + filter_res = bpf_prog_run_data_pointers(prog->filter, skb); 106 104 } 107 105 if (unlikely(!skb->tstamp && skb->tstamp_type)) 108 106 skb->tstamp_type = SKB_CLOCK_REALTIME;
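Reviewer note: both act_bpf and cls_bpf now funnel through a single bpf_prog_run_data_pointers() helper instead of the open-coded bpf_compute_data_pointers() plus bpf_prog_run() pair. Judging only from the call sites, the minimum such a helper must do is sketched below; the real definition presumably also saves and restores the qdisc cb fields that bpf_compute_data_pointers() overwrites, which would be the point of consolidating it:

    /* Rough sketch only, not the in-tree definition. */
    static inline u32 bpf_prog_run_data_pointers(const struct bpf_prog *prog,
                                                 struct sk_buff *skb)
    {
            bpf_compute_data_pointers(skb);
            return bpf_prog_run(prog, skb);
    }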
+1 -2
net/unix/af_unix.c
··· 2938 2938 2939 2939 u = unix_sk(sk); 2940 2940 2941 + redo: 2941 2942 /* Lock the socket to prevent queue disordering 2942 2943 * while sleeps in memcpy_tomsg 2943 2944 */ ··· 2950 2949 struct sk_buff *skb, *last; 2951 2950 int chunk; 2952 2951 2953 - redo: 2954 2952 unix_state_lock(sk); 2955 2953 if (sock_flag(sk, SOCK_DEAD)) { 2956 2954 err = -ECONNRESET; ··· 2999 2999 goto out; 3000 3000 } 3001 3001 3002 - mutex_lock(&u->iolock); 3003 3002 goto redo; 3004 3003 unlock: 3005 3004 unix_state_unlock(sk);
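Reviewer note: hoisting the redo: label above mutex_lock() leaves a single place that takes u->iolock, so every retry re-acquires it at the top instead of via a second lock call buried at the bottom of the loop. Simplified shape of the corrected flow; need_to_wait() is a hypothetical predicate and the real work is elided:

    bool need_to_wait(void);            /* hypothetical */

    static void recv_retry_shape(struct unix_sock *u)
    {
    redo:
            mutex_lock(&u->iolock);
            /* ... consume from sk_receive_queue ... */
            if (need_to_wait()) {
                    mutex_unlock(&u->iolock);
                    /* ... sleep until data arrives or a signal ... */
                    goto redo;          /* re-locks at the top */
            }
            mutex_unlock(&u->iolock);
    }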
+31 -9
net/vmw_vsock/af_vsock.c
··· 1661 1661 timeout = schedule_timeout(timeout); 1662 1662 lock_sock(sk); 1663 1663 1664 - if (signal_pending(current)) { 1665 - err = sock_intr_errno(timeout); 1666 - sk->sk_state = sk->sk_state == TCP_ESTABLISHED ? TCP_CLOSING : TCP_CLOSE; 1667 - sock->state = SS_UNCONNECTED; 1668 - vsock_transport_cancel_pkt(vsk); 1669 - vsock_remove_connected(vsk); 1670 - goto out_wait; 1671 - } else if ((sk->sk_state != TCP_ESTABLISHED) && (timeout == 0)) { 1672 - err = -ETIMEDOUT; 1664 + /* Connection established. Whatever happens to socket once we 1665 + * release it, that's not connect()'s concern. No need to go 1666 + * into signal and timeout handling. Call it a day. 1667 + * 1668 + * Note that allowing to "reset" an already established socket 1669 + * here is racy and insecure. 1670 + */ 1671 + if (sk->sk_state == TCP_ESTABLISHED) 1672 + break; 1673 + 1674 + /* If connection was _not_ established and a signal/timeout came 1675 + * to be, we want the socket's state reset. User space may want 1676 + * to retry. 1677 + * 1678 + * sk_state != TCP_ESTABLISHED implies that socket is not on 1679 + * vsock_connected_table. We keep the binding and the transport 1680 + * assigned. 1681 + */ 1682 + if (signal_pending(current) || timeout == 0) { 1683 + err = timeout == 0 ? -ETIMEDOUT : sock_intr_errno(timeout); 1684 + 1685 + /* Listener might have already responded with 1686 + * VIRTIO_VSOCK_OP_RESPONSE. Its handling expects our 1687 + * sk_state == TCP_SYN_SENT, which hereby we break. 1688 + * In such case VIRTIO_VSOCK_OP_RST will follow. 1689 + */ 1673 1690 sk->sk_state = TCP_CLOSE; 1674 1691 sock->state = SS_UNCONNECTED; 1692 + 1693 + /* Try to cancel VIRTIO_VSOCK_OP_REQUEST skb sent out by 1694 + * transport->connect(). 1695 + */ 1675 1696 vsock_transport_cancel_pkt(vsk); 1697 + 1676 1698 goto out_wait; 1677 1699 } 1678 1700
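Reviewer note: the user-visible contract after this change: a blocking connect() that is interrupted or times out leaves the socket unconnected but keeps its binding and transport, so user space may retry, while an already established connection can no longer be torn down by a late signal. A user-space sketch; CID and port are arbitrary example values:

    #include <sys/socket.h>
    #include <linux/vm_sockets.h>
    #include <errno.h>
    #include <unistd.h>

    static int vsock_connect_retry(void)
    {
            struct sockaddr_vm addr = {
                    .svm_family = AF_VSOCK,
                    .svm_cid    = VMADDR_CID_HOST,  /* example target */
                    .svm_port   = 1234,             /* example port */
            };
            int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

            if (fd < 0)
                    return -1;
            while (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    /* State was reset to SS_UNCONNECTED, binding kept,
                     * so retrying on the same socket is legitimate. */
                    if (errno == EINTR || errno == ETIMEDOUT)
                            continue;
                    close(fd);
                    return -1;
            }
            return fd;
    }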
+1 -1
net/xfrm/xfrm_device.c
··· 438 438 439 439 check_tunnel_size = x->xso.type == XFRM_DEV_OFFLOAD_PACKET && 440 440 x->props.mode == XFRM_MODE_TUNNEL; 441 - switch (x->inner_mode.family) { 441 + switch (skb_dst(skb)->ops->family) { 442 442 case AF_INET: 443 443 /* Check for IPv4 options */ 444 444 if (ip_hdr(skb)->ihl != 5)
+6 -2
net/xfrm/xfrm_output.c
··· 698 698 return; 699 699 700 700 if (x->outer_mode.encap == XFRM_MODE_TUNNEL) { 701 - switch (x->outer_mode.family) { 701 + switch (skb_dst(skb)->ops->family) { 702 702 case AF_INET: 703 703 xo->inner_ipproto = ip_hdr(skb)->protocol; 704 704 break; ··· 772 772 /* Exclusive direct xmit for tunnel mode, as 773 773 * some filtering or matching rules may apply 774 774 * in transport mode. 775 + * Locally generated packets also require 776 + * the normal XFRM path for L2 header setup, 777 + * as the hardware needs the L2 header to match 778 + * for encryption, so skip direct output as well. 775 779 */ 776 - if (x->props.mode == XFRM_MODE_TUNNEL) 780 + if (x->props.mode == XFRM_MODE_TUNNEL && !skb->sk) 777 781 return xfrm_dev_direct_output(sk, x, skb); 778 782 779 783 return xfrm_output_resume(sk, skb, 0);
+22 -8
net/xfrm/xfrm_state.c
··· 592 592 } 593 593 EXPORT_SYMBOL(xfrm_state_free); 594 594 595 + static void xfrm_state_delete_tunnel(struct xfrm_state *x); 595 596 static void xfrm_state_gc_destroy(struct xfrm_state *x) 596 597 { 597 598 if (x->mode_cbs && x->mode_cbs->destroy_state) ··· 608 607 kfree(x->replay_esn); 609 608 kfree(x->preplay_esn); 610 609 xfrm_unset_type_offload(x); 610 + xfrm_state_delete_tunnel(x); 611 611 if (x->type) { 612 612 x->type->destructor(x); 613 613 xfrm_put_type(x->type); ··· 808 806 } 809 807 EXPORT_SYMBOL(__xfrm_state_destroy); 810 808 811 - static void xfrm_state_delete_tunnel(struct xfrm_state *x); 812 809 int __xfrm_state_delete(struct xfrm_state *x) 813 810 { 814 811 struct net *net = xs_net(x); ··· 2074 2073 return x; 2075 2074 2076 2075 error: 2076 + x->km.state = XFRM_STATE_DEAD; 2077 2077 xfrm_state_put(x); 2078 2078 out: 2079 2079 return NULL; ··· 2159 2157 xfrm_state_insert(xc); 2160 2158 } else { 2161 2159 if (xfrm_state_add(xc) < 0) 2162 - goto error; 2160 + goto error_add; 2163 2161 } 2164 2162 2165 2163 return xc; 2164 + error_add: 2165 + if (xuo) 2166 + xfrm_dev_state_delete(xc); 2166 2167 error: 2168 + xc->km.state = XFRM_STATE_DEAD; 2167 2169 xfrm_state_put(xc); 2168 2170 return NULL; 2169 2171 } ··· 2197 2191 } 2198 2192 2199 2193 if (x1->km.state == XFRM_STATE_ACQ) { 2200 - if (x->dir && x1->dir != x->dir) 2194 + if (x->dir && x1->dir != x->dir) { 2195 + to_put = x1; 2201 2196 goto out; 2197 + } 2202 2198 2203 2199 __xfrm_state_insert(x); 2204 2200 x = NULL; 2205 2201 } else { 2206 - if (x1->dir != x->dir) 2202 + if (x1->dir != x->dir) { 2203 + to_put = x1; 2207 2204 goto out; 2205 + } 2208 2206 } 2209 2207 err = 0; 2210 2208 ··· 3308 3298 void xfrm_state_fini(struct net *net) 3309 3299 { 3310 3300 unsigned int sz; 3301 + int i; 3311 3302 3312 3303 flush_work(&net->xfrm.state_hash_work); 3313 3304 xfrm_state_flush(net, 0, false); ··· 3316 3305 3317 3306 WARN_ON(!list_empty(&net->xfrm.state_all)); 3318 3307 3308 + for (i = 0; i <= net->xfrm.state_hmask; i++) { 3309 + WARN_ON(!hlist_empty(net->xfrm.state_byseq + i)); 3310 + WARN_ON(!hlist_empty(net->xfrm.state_byspi + i)); 3311 + WARN_ON(!hlist_empty(net->xfrm.state_bysrc + i)); 3312 + WARN_ON(!hlist_empty(net->xfrm.state_bydst + i)); 3313 + } 3314 + 3319 3315 sz = (net->xfrm.state_hmask + 1) * sizeof(struct hlist_head); 3320 - WARN_ON(!hlist_empty(net->xfrm.state_byseq)); 3321 3316 xfrm_hash_free(net->xfrm.state_byseq, sz); 3322 - WARN_ON(!hlist_empty(net->xfrm.state_byspi)); 3323 3317 xfrm_hash_free(net->xfrm.state_byspi, sz); 3324 - WARN_ON(!hlist_empty(net->xfrm.state_bysrc)); 3325 3318 xfrm_hash_free(net->xfrm.state_bysrc, sz); 3326 - WARN_ON(!hlist_empty(net->xfrm.state_bydst)); 3327 3319 xfrm_hash_free(net->xfrm.state_bydst, sz); 3328 3320 free_percpu(net->xfrm.state_cache_input); 3329 3321 }
+7 -1
net/xfrm/xfrm_user.c
··· 947 947 948 948 if (attrs[XFRMA_SA_PCPU]) { 949 949 x->pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]); 950 - if (x->pcpu_num >= num_possible_cpus()) 950 + if (x->pcpu_num >= num_possible_cpus()) { 951 + err = -ERANGE; 952 + NL_SET_ERR_MSG(extack, "pCPU number too big"); 951 953 goto error; 954 + } 952 955 } 953 956 954 957 err = __xfrm_init_state(x, extack); ··· 3038 3035 } 3039 3036 3040 3037 xfrm_state_free(x); 3038 + xfrm_dev_policy_delete(xp); 3039 + xfrm_dev_policy_free(xp); 3040 + security_xfrm_policy_free(xp->security); 3041 3041 kfree(xp); 3042 3042 3043 3043 return 0;
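Reviewer note: besides plugging the state/policy leaks, the XFRMA_SA_PCPU hunk pairs a precise errno with an extack message instead of falling through with whatever err last held. Restated as a standalone validator, sketch only:

    static int validate_sa_pcpu(struct nlattr **attrs,
                                struct netlink_ext_ack *extack)
    {
            if (attrs[XFRMA_SA_PCPU]) {
                    u32 n = nla_get_u32(attrs[XFRMA_SA_PCPU]);

                    if (n >= num_possible_cpus()) {
                            NL_SET_ERR_MSG(extack, "pCPU number too big");
                            return -ERANGE;
                    }
            }
            return 0;
    }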
+2 -1
scripts/gendwarfksyms/gendwarfksyms.c
··· 138 138 error("no input files?"); 139 139 } 140 140 141 - symbol_read_exports(stdin); 141 + if (!symbol_read_exports(stdin)) 142 + return 0; 142 143 143 144 if (symtypes_file) { 144 145 symfile = fopen(symtypes_file, "w");
+1 -1
scripts/gendwarfksyms/gendwarfksyms.h
··· 123 123 typedef void (*symbol_callback_t)(struct symbol *, void *arg); 124 124 125 125 bool is_symbol_ptr(const char *name); 126 - void symbol_read_exports(FILE *file); 126 + int symbol_read_exports(FILE *file); 127 127 void symbol_read_symtab(int fd); 128 128 struct symbol *symbol_get(const char *name); 129 129 void symbol_set_ptr(struct symbol *sym, Dwarf_Die *ptr);
+3 -1
scripts/gendwarfksyms/symbols.c
··· 128 128 return for_each(name, NULL, NULL) > 0; 129 129 } 130 130 131 - void symbol_read_exports(FILE *file) 131 + int symbol_read_exports(FILE *file) 132 132 { 133 133 struct symbol *sym; 134 134 char *line = NULL; ··· 159 159 160 160 free(line); 161 161 debug("%d exported symbols", nsym); 162 + 163 + return nsym; 162 164 } 163 165 164 166 static void get_symbol(struct symbol *sym, void *arg)
+3 -4
security/landlock/fs.c
··· 1335 1335 * At this point, we own the ihold() reference that was 1336 1336 * originally set up by get_inode_object() and the 1337 1337 * __iget() reference that we just set in this loop 1338 - * walk. Therefore the following call to iput() will 1339 - * not sleep nor drop the inode because there is now at 1340 - * least two references to it. 1338 + * walk. Therefore there are at least two references 1339 + * on the inode. 1341 1340 */ 1342 - iput(inode); 1341 + iput_not_last(inode); 1343 1342 } else { 1344 1343 spin_unlock(&object->lock); 1345 1344 rcu_read_unlock();
+2 -2
sound/hda/codecs/hdmi/nvhdmi-mcp.c
··· 350 350 static const struct hda_codec_ops nvhdmi_mcp_codec_ops = { 351 351 .probe = nvhdmi_mcp_probe, 352 352 .remove = snd_hda_hdmi_simple_remove, 353 - .build_controls = nvhdmi_mcp_build_pcms, 354 - .build_pcms = nvhdmi_mcp_build_controls, 353 + .build_pcms = nvhdmi_mcp_build_pcms, 354 + .build_controls = nvhdmi_mcp_build_controls, 355 355 .init = nvhdmi_mcp_init, 356 356 .unsol_event = snd_hda_hdmi_simple_unsol_event, 357 357 };
+9
sound/hda/codecs/realtek/alc269.c
··· 6694 6694 SND_PCI_QUIRK(0x103c, 0x8e60, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2), 6695 6695 SND_PCI_QUIRK(0x103c, 0x8e61, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2), 6696 6696 SND_PCI_QUIRK(0x103c, 0x8e62, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2), 6697 + SND_PCI_QUIRK(0x103c, 0x8ed5, "HP Merino13X", ALC245_FIXUP_TAS2781_SPI_2), 6698 + SND_PCI_QUIRK(0x103c, 0x8ed6, "HP Merino13", ALC245_FIXUP_TAS2781_SPI_2), 6699 + SND_PCI_QUIRK(0x103c, 0x8ed7, "HP Merino14", ALC245_FIXUP_TAS2781_SPI_2), 6700 + SND_PCI_QUIRK(0x103c, 0x8ed8, "HP Merino16", ALC245_FIXUP_TAS2781_SPI_2), 6701 + SND_PCI_QUIRK(0x103c, 0x8ed9, "HP Merino14W", ALC245_FIXUP_TAS2781_SPI_2), 6702 + SND_PCI_QUIRK(0x103c, 0x8eda, "HP Merino16W", ALC245_FIXUP_TAS2781_SPI_2), 6703 + SND_PCI_QUIRK(0x103c, 0x8f40, "HP Lampas14", ALC287_FIXUP_TXNW2781_I2C), 6704 + SND_PCI_QUIRK(0x103c, 0x8f41, "HP Lampas16", ALC287_FIXUP_TXNW2781_I2C), 6705 + SND_PCI_QUIRK(0x103c, 0x8f42, "HP LampasW14", ALC287_FIXUP_TXNW2781_I2C), 6697 6706 SND_PCI_QUIRK(0x1043, 0x1032, "ASUS VivoBook X513EA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 6698 6707 SND_PCI_QUIRK(0x1043, 0x1034, "ASUS GU605C", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), 6699 6708 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+7 -3
sound/soc/codecs/cs4271.c
··· 581 581 582 582 ret = regcache_sync(cs4271->regmap); 583 583 if (ret < 0) 584 - return ret; 584 + goto err_disable_regulator; 585 585 586 586 ret = regmap_update_bits(cs4271->regmap, CS4271_MODE2, 587 587 CS4271_MODE2_PDN | CS4271_MODE2_CPEN, 588 588 CS4271_MODE2_PDN | CS4271_MODE2_CPEN); 589 589 if (ret < 0) 590 - return ret; 590 + goto err_disable_regulator; 591 591 ret = regmap_update_bits(cs4271->regmap, CS4271_MODE2, 592 592 CS4271_MODE2_PDN, 0); 593 593 if (ret < 0) 594 - return ret; 594 + goto err_disable_regulator; 595 595 /* Power-up sequence requires 85 uS */ 596 596 udelay(85); 597 597 ··· 601 601 CS4271_MODE2_MUTECAEQUB); 602 602 603 603 return 0; 604 + 605 + err_disable_regulator: 606 + regulator_bulk_disable(ARRAY_SIZE(cs4271->supplies), cs4271->supplies); 607 + return ret; 604 608 } 605 609 606 610 static void cs4271_component_remove(struct snd_soc_component *component)
+44 -25
sound/soc/codecs/da7213.c
··· 2124 2124 return 0; 2125 2125 } 2126 2126 2127 + static int da7213_runtime_suspend(struct device *dev) 2128 + { 2129 + struct da7213_priv *da7213 = dev_get_drvdata(dev); 2130 + 2131 + regcache_cache_only(da7213->regmap, true); 2132 + regcache_mark_dirty(da7213->regmap); 2133 + regulator_bulk_disable(DA7213_NUM_SUPPLIES, da7213->supplies); 2134 + 2135 + return 0; 2136 + } 2137 + 2138 + static int da7213_runtime_resume(struct device *dev) 2139 + { 2140 + struct da7213_priv *da7213 = dev_get_drvdata(dev); 2141 + int ret; 2142 + 2143 + ret = regulator_bulk_enable(DA7213_NUM_SUPPLIES, da7213->supplies); 2144 + if (ret < 0) 2145 + return ret; 2146 + regcache_cache_only(da7213->regmap, false); 2147 + return regcache_sync(da7213->regmap); 2148 + } 2149 + 2150 + static int da7213_suspend(struct snd_soc_component *component) 2151 + { 2152 + struct da7213_priv *da7213 = snd_soc_component_get_drvdata(component); 2153 + 2154 + return da7213_runtime_suspend(da7213->dev); 2155 + } 2156 + 2157 + static int da7213_resume(struct snd_soc_component *component) 2158 + { 2159 + struct da7213_priv *da7213 = snd_soc_component_get_drvdata(component); 2160 + 2161 + return da7213_runtime_resume(da7213->dev); 2162 + } 2163 + 2127 2164 static const struct snd_soc_component_driver soc_component_dev_da7213 = { 2128 2165 .probe = da7213_probe, 2129 2166 .set_bias_level = da7213_set_bias_level, 2130 2167 .controls = da7213_snd_controls, 2131 2168 .num_controls = ARRAY_SIZE(da7213_snd_controls), 2169 + .suspend = da7213_suspend, 2170 + .resume = da7213_resume, 2132 2171 .dapm_widgets = da7213_dapm_widgets, 2133 2172 .num_dapm_widgets = ARRAY_SIZE(da7213_dapm_widgets), 2134 2173 .dapm_routes = da7213_audio_map, ··· 2213 2174 da7213->fin_min_rate = (uintptr_t)i2c_get_match_data(i2c); 2214 2175 if (!da7213->fin_min_rate) 2215 2176 return -EINVAL; 2177 + 2178 + da7213->dev = &i2c->dev; 2216 2179 2217 2180 i2c_set_clientdata(i2c, da7213); 2218 2181 ··· 2265 2224 pm_runtime_disable(&i2c->dev); 2266 2225 } 2267 2226 2268 - static int da7213_runtime_suspend(struct device *dev) 2269 - { 2270 - struct da7213_priv *da7213 = dev_get_drvdata(dev); 2271 - 2272 - regcache_cache_only(da7213->regmap, true); 2273 - regcache_mark_dirty(da7213->regmap); 2274 - regulator_bulk_disable(DA7213_NUM_SUPPLIES, da7213->supplies); 2275 - 2276 - return 0; 2277 - } 2278 - 2279 - static int da7213_runtime_resume(struct device *dev) 2280 - { 2281 - struct da7213_priv *da7213 = dev_get_drvdata(dev); 2282 - int ret; 2283 - 2284 - ret = regulator_bulk_enable(DA7213_NUM_SUPPLIES, da7213->supplies); 2285 - if (ret < 0) 2286 - return ret; 2287 - regcache_cache_only(da7213->regmap, false); 2288 - return regcache_sync(da7213->regmap); 2289 - } 2290 - 2291 - static DEFINE_RUNTIME_DEV_PM_OPS(da7213_pm, da7213_runtime_suspend, 2292 - da7213_runtime_resume, NULL); 2227 + static const struct dev_pm_ops da7213_pm = { 2228 + RUNTIME_PM_OPS(da7213_runtime_suspend, da7213_runtime_resume, NULL) 2229 + }; 2293 2230 2294 2231 static const struct i2c_device_id da7213_i2c_id[] = { 2295 2232 { "da7213" },
+1
sound/soc/codecs/da7213.h
··· 595 595 /* Codec private data */ 596 596 struct da7213_priv { 597 597 struct regmap *regmap; 598 + struct device *dev; 598 599 struct mutex ctrl_lock; 599 600 struct regulator_bulk_data supplies[DA7213_NUM_SUPPLIES]; 600 601 struct clk *mclk;
+1 -1
sound/soc/codecs/lpass-va-macro.c
··· 1638 1638 if (ret) 1639 1639 goto err_clkout; 1640 1640 1641 - va->fsgen = clk_hw_get_clk(&va->hw, "fsgen"); 1641 + va->fsgen = devm_clk_hw_get_clk(dev, &va->hw, "fsgen"); 1642 1642 if (IS_ERR(va->fsgen)) { 1643 1643 ret = PTR_ERR(va->fsgen); 1644 1644 goto err_clkout;
+7 -2
sound/soc/codecs/tas2781-i2c.c
··· 1957 1957 { 1958 1958 struct i2c_client *client = (struct i2c_client *)tas_priv->client; 1959 1959 unsigned int dev_addrs[TASDEVICE_MAX_CHANNELS]; 1960 - int i, ndev = 0; 1960 + int ndev = 0; 1961 + int i, rc; 1961 1962 1962 1963 if (tas_priv->isacpi) { 1963 1964 ndev = device_property_read_u32_array(&client->dev, ··· 1969 1968 } else { 1970 1969 ndev = (ndev < ARRAY_SIZE(dev_addrs)) 1971 1970 ? ndev : ARRAY_SIZE(dev_addrs); 1972 - ndev = device_property_read_u32_array(&client->dev, 1971 + rc = device_property_read_u32_array(&client->dev, 1973 1972 "ti,audio-slots", dev_addrs, ndev); 1973 + if (rc != 0) { 1974 + ndev = 1; 1975 + dev_addrs[0] = client->addr; 1976 + } 1974 1977 } 1975 1978 1976 1979 tas_priv->irq =
+18 -2
sound/soc/codecs/tas2783-sdw.c
··· 762 762 goto out; 763 763 } 764 764 765 - mutex_lock(&tas_dev->pde_lock); 766 765 img_sz = fmw->size; 767 766 buf = fmw->data; 768 767 offset += FW_DL_OFFSET; 768 + if (offset >= (img_sz - FW_FL_HDR)) { 769 + dev_err(tas_dev->dev, 770 + "firmware is too small"); 771 + ret = -EINVAL; 772 + goto out; 773 + } 774 + 775 + mutex_lock(&tas_dev->pde_lock); 769 776 while (offset < (img_sz - FW_FL_HDR)) { 770 777 memset(&hdr, 0, sizeof(hdr)); 771 778 offset += read_header(&buf[offset], &hdr); ··· 782 775 hdr.length, offset); 783 776 /* size also includes the header */ 784 777 file_blk_size = hdr.length - FW_FL_HDR; 778 + 779 + /* make sure that enough data is there */ 780 + if (offset + file_blk_size > img_sz) { 781 + ret = -EINVAL; 782 + dev_err(tas_dev->dev, 783 + "corrupt firmware file"); 784 + break; 785 + } 785 786 786 787 switch (hdr.file_id) { 787 788 case 0: ··· 823 808 break; 824 809 } 825 810 mutex_unlock(&tas_dev->pde_lock); 826 - tas2783_update_calibdata(tas_dev); 811 + if (!ret) 812 + tas2783_update_calibdata(tas_dev); 827 813 828 814 out: 829 815 if (!ret)
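Reviewer note: both added checks apply the standard rule for untrusted firmware blobs: prove that a read stays inside fmw->size before dereferencing, and fail the whole download rather than programming the device with truncated data. Condensed into the generic idiom; HDR_SIZE and read_blk_len() are hypothetical:

    #define HDR_SIZE 8                  /* hypothetical header size */

    size_t read_blk_len(const u8 *p);   /* hypothetical length decoder */

    static int parse_blob(const u8 *buf, size_t size)
    {
            size_t off = 0;

            while (off + HDR_SIZE <= size) {
                    size_t blk_len = read_blk_len(buf + off);

                    if (blk_len < HDR_SIZE || blk_len > size - off)
                            return -EINVAL;     /* corrupt blob, stop early */
                    /* ... consume blk_len - HDR_SIZE payload bytes ... */
                    off += blk_len;
            }
            return 0;
    }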
+1 -2
sound/soc/renesas/rcar/ssiu.c
··· 509 509 int rsnd_ssiu_probe(struct rsnd_priv *priv) 510 510 { 511 511 struct device *dev = rsnd_priv_to_dev(priv); 512 - struct device_node *node; 512 + struct device_node *node __free(device_node) = rsnd_ssiu_of_node(priv); 513 513 struct rsnd_ssiu *ssiu; 514 514 struct rsnd_mod_ops *ops; 515 515 const int *list = NULL; ··· 522 522 * see 523 523 * rsnd_ssiu_bufsif_to_id() 524 524 */ 525 - node = rsnd_ssiu_of_node(priv); 526 525 if (node) 527 526 nr = rsnd_node_count(priv, node, SSIU_NAME); 528 527 else
+2 -1
sound/soc/sdca/sdca_functions.c
··· 894 894 return ret; 895 895 } 896 896 897 - control->values = devm_kzalloc(dev, hweight64(control->cn_list), GFP_KERNEL); 897 + control->values = devm_kcalloc(dev, hweight64(control->cn_list), 898 + sizeof(int), GFP_KERNEL); 898 899 if (!control->values) 899 900 return -ENOMEM; 900 901
+14 -6
sound/soc/sdw_utils/soc_sdw_utils.c
··· 1277 1277 struct sdw_slave *slave; 1278 1278 struct device *sdw_dev; 1279 1279 const char *sdw_codec_name; 1280 - int i; 1280 + int ret, i; 1281 1281 1282 1282 dlc = kzalloc(sizeof(*dlc), GFP_KERNEL); 1283 1283 if (!dlc) ··· 1307 1307 } 1308 1308 1309 1309 slave = dev_to_sdw_dev(sdw_dev); 1310 - if (!slave) 1311 - return -EINVAL; 1310 + if (!slave) { 1311 + ret = -EINVAL; 1312 + goto put_device; 1313 + } 1312 1314 1313 1315 /* Make sure BIOS provides SDCA properties */ 1314 1316 if (!slave->sdca_data.interface_revision) { 1315 1317 dev_warn(&slave->dev, "SDCA properties not found in the BIOS\n"); 1316 - return 1; 1318 + ret = 1; 1319 + goto put_device; 1317 1320 } 1318 1321 1319 1322 for (i = 0; i < slave->sdca_data.num_functions; i++) { ··· 1325 1322 if (dai_type == dai_info->dai_type) { 1326 1323 dev_dbg(&slave->dev, "DAI type %d sdca function %s found\n", 1327 1324 dai_type, slave->sdca_data.function[i].name); 1328 - return 1; 1325 + ret = 1; 1326 + goto put_device; 1329 1327 } 1330 1328 } 1331 1329 ··· 1334 1330 "SDCA device function for DAI type %d not supported, skip endpoint\n", 1335 1331 dai_info->dai_type); 1336 1332 1337 - return 0; 1333 + ret = 0; 1334 + 1335 + put_device: 1336 + put_device(sdw_dev); 1337 + return ret; 1338 1338 } 1339 1339 1340 1340 int asoc_sdw_parse_sdw_endpoints(struct snd_soc_card *card,
+5
sound/usb/endpoint.c
··· 1362 1362 ep->sample_rem = ep->cur_rate % ep->pps; 1363 1363 ep->packsize[0] = ep->cur_rate / ep->pps; 1364 1364 ep->packsize[1] = (ep->cur_rate + (ep->pps - 1)) / ep->pps; 1365 + if (ep->packsize[1] > ep->maxpacksize) { 1366 + usb_audio_dbg(chip, "Too small maxpacksize %u for rate %u / pps %u\n", 1367 + ep->maxpacksize, ep->cur_rate, ep->pps); 1368 + return -EINVAL; 1369 + } 1365 1370 1366 1371 /* calculate the frequency in 16.16 format */ 1367 1372 ep->freqm = ep->freqn;
+2
sound/usb/mixer.c
··· 3086 3086 int i; 3087 3087 3088 3088 assoc = usb_ifnum_to_if(dev, ctrlif)->intf_assoc; 3089 + if (!assoc) 3090 + return -EINVAL; 3089 3091 3090 3092 /* Detect BADD capture/playback channels from AS EP descriptors */ 3091 3093 for (i = 0; i < assoc->bInterfaceCount; i++) {
+8
sound/usb/quirks.c
··· 2022 2022 case USB_ID(0x16d0, 0x09d8): /* NuPrime IDA-8 */ 2023 2023 case USB_ID(0x16d0, 0x09db): /* NuPrime Audio DAC-9 */ 2024 2024 case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */ 2025 + case USB_ID(0x16d0, 0x0ab1): /* PureAudio APA DAC */ 2026 + case USB_ID(0x16d0, 0xeca1): /* PureAudio Lotus DAC5, DAC5 SE, DAC5 Pro */ 2025 2027 case USB_ID(0x1db5, 0x0003): /* Bryston BDA3 */ 2026 2028 case USB_ID(0x20a0, 0x4143): /* WaveIO USB Audio 2.0 */ 2027 2029 case USB_ID(0x22e1, 0xca01): /* HDTA Serenade DSD */ ··· 2269 2267 QUIRK_FLAG_FIXED_RATE), 2270 2268 DEVICE_FLG(0x0fd9, 0x0008, /* Hauppauge HVR-950Q */ 2271 2269 QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER), 2270 + DEVICE_FLG(0x1038, 0x1294, /* SteelSeries Arctis Pro Wireless */ 2271 + QUIRK_FLAG_MIXER_PLAYBACK_MIN_MUTE), 2272 2272 DEVICE_FLG(0x1101, 0x0003, /* Audioengine D1 */ 2273 2273 QUIRK_FLAG_GET_SAMPLE_RATE), 2274 2274 DEVICE_FLG(0x12d1, 0x3a07, /* Huawei Technologies Co., Ltd. */ ··· 2301 2297 QUIRK_FLAG_IGNORE_CLOCK_SOURCE), 2302 2298 DEVICE_FLG(0x1686, 0x00dd, /* Zoom R16/24 */ 2303 2299 QUIRK_FLAG_TX_LENGTH | QUIRK_FLAG_CTL_MSG_DELAY_1M), 2300 + DEVICE_FLG(0x16d0, 0x0ab1, /* PureAudio APA DAC */ 2301 + QUIRK_FLAG_DSD_RAW), 2302 + DEVICE_FLG(0x16d0, 0xeca1, /* PureAudio Lotus DAC5, DAC5 SE and DAC5 Pro */ 2303 + QUIRK_FLAG_DSD_RAW), 2304 2304 DEVICE_FLG(0x17aa, 0x1046, /* Lenovo ThinkStation P620 Rear Line-in, Line-out and Microphone */ 2305 2305 QUIRK_FLAG_DISABLE_AUTOSUSPEND), 2306 2306 DEVICE_FLG(0x17aa, 0x104d, /* Lenovo ThinkStation P620 Internal Speaker + Front Headset */
+1
tools/arch/x86/include/uapi/asm/vmx.h
··· 93 93 #define EXIT_REASON_TPAUSE 68 94 94 #define EXIT_REASON_BUS_LOCK 74 95 95 #define EXIT_REASON_NOTIFY 75 96 + #define EXIT_REASON_SEAMCALL 76 96 97 #define EXIT_REASON_TDCALL 77 97 98 #define EXIT_REASON_MSR_READ_IMM 84 98 99 #define EXIT_REASON_MSR_WRITE_IMM 85
+1 -1
tools/bpf/bpftool/Documentation/bpftool-prog.rst
··· 182 182 183 183 bpftool prog tracelog { stdout | stderr } *PROG* 184 184 Dump the BPF stream of the program. BPF programs can write to these streams 185 - at runtime with the **bpf_stream_vprintk**\ () kfunc. The kernel may write 185 + at runtime with the **bpf_stream_vprintk_impl**\ () kfunc. The kernel may write 186 186 error messages to the standard error stream. This facility should be used 187 187 only for debugging purposes. 188 188
+2 -2
tools/build/feature/Makefile
··· 107 107 __BUILD = $(CC) $(CFLAGS) -MD -Wall -Werror -o $@ $(patsubst %.bin,%.c,$(@F)) $(LDFLAGS) 108 108 BUILD = $(__BUILD) > $(@:.bin=.make.output) 2>&1 109 109 BUILD_BFD = $(BUILD) -DPACKAGE='"perf"' -lbfd -ldl 110 - BUILD_ALL = $(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -DPACKAGE='"perf"' -lbfd -ldl -lz -llzma -lzstd 110 + BUILD_ALL = $(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -ldl -lz -llzma -lzstd 111 111 112 112 __BUILDXX = $(CXX) $(CXXFLAGS) -MD -Wall -Werror -o $@ $(patsubst %.bin,%.cpp,$(@F)) $(LDFLAGS) 113 113 BUILDXX = $(__BUILDXX) > $(@:.bin=.make.output) 2>&1 ··· 115 115 ############################### 116 116 117 117 $(OUTPUT)test-all.bin: 118 - $(BUILD_ALL) || $(BUILD_ALL) -lopcodes -liberty 118 + $(BUILD_ALL) 119 119 120 120 $(OUTPUT)test-hello.bin: 121 121 $(BUILD)
+13 -13
tools/lib/bpf/bpf_helpers.h
··· 315 315 ___param, sizeof(___param)); \ 316 316 }) 317 317 318 - extern int bpf_stream_vprintk(int stream_id, const char *fmt__str, const void *args, 319 - __u32 len__sz, void *aux__prog) __weak __ksym; 318 + extern int bpf_stream_vprintk_impl(int stream_id, const char *fmt__str, const void *args, 319 + __u32 len__sz, void *aux__prog) __weak __ksym; 320 320 321 - #define bpf_stream_printk(stream_id, fmt, args...) \ 322 - ({ \ 323 - static const char ___fmt[] = fmt; \ 324 - unsigned long long ___param[___bpf_narg(args)]; \ 325 - \ 326 - _Pragma("GCC diagnostic push") \ 327 - _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \ 328 - ___bpf_fill(___param, args); \ 329 - _Pragma("GCC diagnostic pop") \ 330 - \ 331 - bpf_stream_vprintk(stream_id, ___fmt, ___param, sizeof(___param), NULL);\ 321 + #define bpf_stream_printk(stream_id, fmt, args...) \ 322 + ({ \ 323 + static const char ___fmt[] = fmt; \ 324 + unsigned long long ___param[___bpf_narg(args)]; \ 325 + \ 326 + _Pragma("GCC diagnostic push") \ 327 + _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \ 328 + ___bpf_fill(___param, args); \ 329 + _Pragma("GCC diagnostic pop") \ 330 + \ 331 + bpf_stream_vprintk_impl(stream_id, ___fmt, ___param, sizeof(___param), NULL); \ 332 332 }) 333 333 334 334 /* Use __bpf_printk when bpf_printk call has 3 or fewer fmt args
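Reviewer note: since only the kfunc was renamed (bpf_stream_vprintk to bpf_stream_vprintk_impl), programs built on the bpf_stream_printk() wrapper need no source change. A minimal BPF-side sketch, assuming vmlinux.h, bpf_tracing.h and the BPF_STDOUT id from the UAPI stream enum:

    SEC("fentry/do_nanosleep")
    int BPF_PROG(log_sleep)
    {
            bpf_stream_printk(BPF_STDOUT, "tgid %d entered do_nanosleep\n",
                              (int)(bpf_get_current_pid_tgid() >> 32));
            return 0;
    }

The stream is then read with "bpftool prog tracelog stdout PROG", matching the documentation hunk above.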
+2 -3
tools/perf/Makefile.config
··· 354 354 355 355 FEATURE_CHECK_LDFLAGS-libaio = -lrt 356 356 357 - FEATURE_CHECK_LDFLAGS-disassembler-four-args = -lbfd -lopcodes -ldl 358 - FEATURE_CHECK_LDFLAGS-disassembler-init-styled = -lbfd -lopcodes -ldl 359 - 360 357 CORE_CFLAGS += -fno-omit-frame-pointer 361 358 CORE_CFLAGS += -Wall 362 359 CORE_CFLAGS += -Wextra ··· 927 930 928 931 ifeq ($(feature-libbfd), 1) 929 932 EXTLIBS += -lbfd -lopcodes 933 + FEATURE_CHECK_LDFLAGS-disassembler-four-args = -lbfd -lopcodes -ldl 934 + FEATURE_CHECK_LDFLAGS-disassembler-init-styled = -lbfd -lopcodes -ldl 930 935 else 931 936 # we are on a system that requires -liberty and (maybe) -lz 932 937 # to link against -lbfd; test each case individually here
+2
tools/perf/builtin-lock.c
··· 1867 1867 eops.sample = process_sample_event; 1868 1868 eops.comm = perf_event__process_comm; 1869 1869 eops.mmap = perf_event__process_mmap; 1870 + eops.mmap2 = perf_event__process_mmap2; 1870 1871 eops.namespaces = perf_event__process_namespaces; 1871 1872 eops.tracing_data = perf_event__process_tracing_data; 1872 1873 session = perf_session__new(&data, &eops); ··· 2024 2023 eops.sample = process_sample_event; 2025 2024 eops.comm = perf_event__process_comm; 2026 2025 eops.mmap = perf_event__process_mmap; 2026 + eops.mmap2 = perf_event__process_mmap2; 2027 2027 eops.tracing_data = perf_event__process_tracing_data; 2028 2028 2029 2029 perf_env__init(&host_env);
+9 -5
tools/perf/tests/shell/lock_contention.sh
··· 13 13 rm -f ${perfdata} 14 14 rm -f ${result} 15 15 rm -f ${errout} 16 - trap - EXIT TERM INT 16 + trap - EXIT TERM INT ERR 17 17 } 18 18 19 19 trap_cleanup() { 20 + if (( $? == 139 )); then #SIGSEGV 21 + err=1 22 + fi 20 23 echo "Unexpected signal in ${FUNCNAME[1]}" 21 24 cleanup 22 25 exit ${err} 23 26 } 24 - trap trap_cleanup EXIT TERM INT 27 + trap trap_cleanup EXIT TERM INT ERR 25 28 26 29 check() { 27 30 if [ "$(id -u)" != 0 ]; then ··· 148 145 fi 149 146 150 147 # the perf lock contention output goes to the stderr 151 - perf lock con -a -b -g -E 1 -q -- perf bench sched messaging -p > /dev/null 2> ${result} 148 + perf lock con -a -b --lock-cgroup -E 1 -q -- perf bench sched messaging -p > /dev/null 2> ${result} 152 149 if [ "$(cat "${result}" | wc -l)" != "1" ]; then 153 150 echo "[Fail] BPF result count is not 1:" "$(cat "${result}" | wc -l)" 154 151 err=1 ··· 274 271 return 275 272 fi 276 273 277 - perf lock con -a -b -g -E 1 -F wait_total -q -- perf bench sched messaging -p > /dev/null 2> ${result} 274 + perf lock con -a -b --lock-cgroup -E 1 -F wait_total -q -- perf bench sched messaging -p > /dev/null 2> ${result} 278 275 if [ "$(cat "${result}" | wc -l)" != "1" ]; then 279 276 echo "[Fail] BPF result should have a cgroup result:" "$(cat "${result}")" 280 277 err=1 ··· 282 279 fi 283 280 284 281 cgroup=$(cat "${result}" | awk '{ print $3 }') 285 - perf lock con -a -b -g -E 1 -G "${cgroup}" -q -- perf bench sched messaging -p > /dev/null 2> ${result} 282 + perf lock con -a -b --lock-cgroup -E 1 -G "${cgroup}" -q -- perf bench sched messaging -p > /dev/null 2> ${result} 286 283 if [ "$(cat "${result}" | wc -l)" != "1" ]; then 287 284 echo "[Fail] BPF result should have a result with cgroup filter:" "$(cat "${cgroup}")" 288 285 err=1 ··· 341 338 test_cgroup_filter 342 339 test_csv_output 343 340 341 + cleanup 344 342 exit ${err}
+2 -8
tools/perf/util/header.c
··· 1022 1022 1023 1023 down_read(&env->bpf_progs.lock); 1024 1024 1025 - if (env->bpf_progs.infos_cnt == 0) 1026 - goto out; 1027 - 1028 1025 ret = do_write(ff, &env->bpf_progs.infos_cnt, 1029 1026 sizeof(env->bpf_progs.infos_cnt)); 1030 - if (ret < 0) 1027 + if (ret < 0 || env->bpf_progs.infos_cnt == 0) 1031 1028 goto out; 1032 1029 1033 1030 root = &env->bpf_progs.infos; ··· 1064 1067 1065 1068 down_read(&env->bpf_progs.lock); 1066 1069 1067 - if (env->bpf_progs.btfs_cnt == 0) 1068 - goto out; 1069 - 1070 1070 ret = do_write(ff, &env->bpf_progs.btfs_cnt, 1071 1071 sizeof(env->bpf_progs.btfs_cnt)); 1072 1072 1073 - if (ret < 0) 1073 + if (ret < 0 || env->bpf_progs.btfs_cnt == 0) 1074 1074 goto out; 1075 1075 1076 1076 root = &env->bpf_progs.btfs;
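Writing the count unconditionally keeps the feature section self-describing: an empty section now carries an explicit zero rather than no record at all, and only the per-entry payload is skipped. A simplified, hypothetical reader loop showing why an always-present count matters (the real read path lives elsewhere in header.c):

	/* Sketch: the reader can rely on the count field existing. */
	static int read_infos(int fd)
	{
		__u32 cnt;

		if (read(fd, &cnt, sizeof(cnt)) != sizeof(cnt))
			return -1;	/* truncated section */
		for (__u32 i = 0; i < cnt; i++) {
			/* read one entry ... */
		}
		return 0;
	}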
+38
tools/perf/util/libbfd.c
··· 38 38 asymbol **syms; 39 39 }; 40 40 41 + static bool perf_bfd_lock(void *bfd_mutex) 42 + { 43 + mutex_lock(bfd_mutex); 44 + return true; 45 + } 46 + 47 + static bool perf_bfd_unlock(void *bfd_mutex) 48 + { 49 + mutex_unlock(bfd_mutex); 50 + return true; 51 + } 52 + 53 + static void perf_bfd_init(void) 54 + { 55 + static struct mutex bfd_mutex; 56 + 57 + mutex_init_recursive(&bfd_mutex); 58 + 59 + if (bfd_init() != BFD_INIT_MAGIC) { 60 + pr_err("Error initializing libbfd\n"); 61 + return; 62 + } 63 + if (!bfd_thread_init(perf_bfd_lock, perf_bfd_unlock, &bfd_mutex)) 64 + pr_err("Error initializing libbfd threading\n"); 65 + } 66 + 67 + static void ensure_bfd_init(void) 68 + { 69 + static pthread_once_t bfd_init_once = PTHREAD_ONCE_INIT; 70 + 71 + pthread_once(&bfd_init_once, perf_bfd_init); 72 + } 73 + 41 74 static int bfd_error(const char *string) 42 75 { 43 76 const char *errmsg; ··· 165 132 bfd *abfd; 166 133 struct a2l_data *a2l = NULL; 167 134 135 + ensure_bfd_init(); 168 136 abfd = bfd_openr(path, NULL); 169 137 if (abfd == NULL) 170 138 return NULL; ··· 322 288 bfd *abfd; 323 289 u64 start, len; 324 290 291 + ensure_bfd_init(); 325 292 abfd = bfd_openr(debugfile, NULL); 326 293 if (!abfd) 327 294 return -1; ··· 428 393 if (fd < 0) 429 394 return -1; 430 395 396 + ensure_bfd_init(); 431 397 abfd = bfd_fdopenr(filename, /*target=*/NULL, fd); 432 398 if (!abfd) 433 399 return -1; ··· 457 421 asection *section; 458 422 bfd *abfd; 459 423 424 + ensure_bfd_init(); 460 425 abfd = bfd_openr(filename, NULL); 461 426 if (!abfd) 462 427 return -1; ··· 517 480 memset(tpath, 0, sizeof(tpath)); 518 481 perf_exe(tpath, sizeof(tpath)); 519 482 483 + ensure_bfd_init(); 520 484 bfdf = bfd_openr(tpath, NULL); 521 485 if (bfdf == NULL) 522 486 abort();
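ensure_bfd_init() is the standard pthread_once pattern: every libbfd entry point funnels through it, so bfd_init() and bfd_thread_init() run exactly once regardless of which caller gets there first, and the lock/unlock hooks take a recursive mutex because libbfd may re-enter them on the same thread. A self-contained sketch of the same pattern (names hypothetical):

	#include <pthread.h>

	static pthread_once_t init_once = PTHREAD_ONCE_INIT;
	static pthread_mutex_t lib_mutex;

	static void do_init(void)
	{
		pthread_mutexattr_t attr;

		pthread_mutexattr_init(&attr);
		/* recursive: the hook may be taken while already held */
		pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
		pthread_mutex_init(&lib_mutex, &attr);
		pthread_mutexattr_destroy(&attr);
		/* one-time library setup would go here */
	}

	static void ensure_init(void)
	{
		pthread_once(&init_once, do_init);	/* runs do_init() once */
	}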
+10 -4
tools/perf/util/mutex.c
··· 17 17 18 18 #define CHECK_ERR(err) check_err(__func__, err) 19 19 20 - static void __mutex_init(struct mutex *mtx, bool pshared) 20 + static void __mutex_init(struct mutex *mtx, bool pshared, bool recursive) 21 21 { 22 22 pthread_mutexattr_t attr; 23 23 ··· 27 27 /* In normal builds enable error checking, such as recursive usage. */ 28 28 CHECK_ERR(pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK)); 29 29 #endif 30 + if (recursive) 31 + CHECK_ERR(pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)); 30 32 if (pshared) 31 33 CHECK_ERR(pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED)); 32 - 33 34 CHECK_ERR(pthread_mutex_init(&mtx->lock, &attr)); 34 35 CHECK_ERR(pthread_mutexattr_destroy(&attr)); 35 36 } 36 37 37 38 void mutex_init(struct mutex *mtx) 38 39 { 39 - __mutex_init(mtx, /*pshared=*/false); 40 + __mutex_init(mtx, /*pshared=*/false, /*recursive=*/false); 40 41 } 41 42 42 43 void mutex_init_pshared(struct mutex *mtx) 43 44 { 44 - __mutex_init(mtx, /*pshared=*/true); 45 + __mutex_init(mtx, /*pshared=*/true, /*recursive=*/false); 46 + } 47 + 48 + void mutex_init_recursive(struct mutex *mtx) 49 + { 50 + __mutex_init(mtx, /*pshared=*/false, /*recursive=*/true); 45 51 } 46 52 47 53 void mutex_destroy(struct mutex *mtx)
+2
tools/perf/util/mutex.h
··· 104 104 * process-private attribute. 105 105 */ 106 106 void mutex_init_pshared(struct mutex *mtx); 107 + /* Initializes a mutex that may be recursively held on the same thread. */ 108 + void mutex_init_recursive(struct mutex *mtx); 107 109 void mutex_destroy(struct mutex *mtx); 108 110 109 111 void mutex_lock(struct mutex *mtx) EXCLUSIVE_LOCK_FUNCTION(*mtx);
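A recursive mutex lets the same thread re-acquire a lock it already holds, provided every mutex_lock() is balanced by a mutex_unlock(). A short usage sketch against the new API (the functions here are illustrative, not from the patch):

	static struct mutex m;

	static void leaf(void)
	{
		mutex_lock(&m);		/* re-entry on the same thread: ok */
		/* ... */
		mutex_unlock(&m);
	}

	static void outer(void)
	{
		mutex_init_recursive(&m);
		mutex_lock(&m);
		leaf();		/* deadlocks (or trips the error check) with plain mutex_init() */
		mutex_unlock(&m);
		mutex_destroy(&m);
	}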
+3
tools/testing/selftests/bpf/config
··· 50 50 CONFIG_IPV6_TUNNEL=y 51 51 CONFIG_KEYS=y 52 52 CONFIG_LIRC=y 53 + CONFIG_LIVEPATCH=y 53 54 CONFIG_LWTUNNEL=y 54 55 CONFIG_MODULE_SIG=y 55 56 CONFIG_MODULE_SRCVERSION_ALL=y ··· 112 111 CONFIG_NF_NAT=y 113 112 CONFIG_PACKET=y 114 113 CONFIG_RC_CORE=y 114 + CONFIG_SAMPLES=y 115 + CONFIG_SAMPLE_LIVEPATCH=m 115 116 CONFIG_SECURITY=y 116 117 CONFIG_SECURITYFS=y 117 118 CONFIG_SYN_COOKIES=y
+107
tools/testing/selftests/bpf/prog_tests/livepatch_trampoline.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */ 3 + 4 + #include <test_progs.h> 5 + #include "testing_helpers.h" 6 + #include "livepatch_trampoline.skel.h" 7 + 8 + static int load_livepatch(void) 9 + { 10 + char path[4096]; 11 + 12 + /* CI will set KBUILD_OUTPUT */ 13 + snprintf(path, sizeof(path), "%s/samples/livepatch/livepatch-sample.ko", 14 + getenv("KBUILD_OUTPUT") ? : "../../../.."); 15 + 16 + return load_module(path, env_verbosity > VERBOSE_NONE); 17 + } 18 + 19 + static void unload_livepatch(void) 20 + { 21 + /* Disable the livepatch before unloading the module */ 22 + system("echo 0 > /sys/kernel/livepatch/livepatch_sample/enabled"); 23 + 24 + unload_module("livepatch_sample", env_verbosity > VERBOSE_NONE); 25 + } 26 + 27 + static void read_proc_cmdline(void) 28 + { 29 + char buf[4096]; 30 + int fd, ret; 31 + 32 + fd = open("/proc/cmdline", O_RDONLY); 33 + if (!ASSERT_OK_FD(fd, "open /proc/cmdline")) 34 + return; 35 + 36 + ret = read(fd, buf, sizeof(buf)); 37 + if (!ASSERT_GT(ret, 0, "read /proc/cmdline")) 38 + goto out; 39 + 40 + ASSERT_OK(strncmp(buf, "this has been live patched", 26), "strncmp"); 41 + 42 + out: 43 + close(fd); 44 + } 45 + 46 + static void __test_livepatch_trampoline(bool fexit_first) 47 + { 48 + struct livepatch_trampoline *skel = NULL; 49 + int err; 50 + 51 + skel = livepatch_trampoline__open_and_load(); 52 + if (!ASSERT_OK_PTR(skel, "skel_open_and_load")) 53 + goto out; 54 + 55 + skel->bss->my_pid = getpid(); 56 + 57 + if (!fexit_first) { 58 + /* fentry program is loaded first by default */ 59 + err = livepatch_trampoline__attach(skel); 60 + if (!ASSERT_OK(err, "skel_attach")) 61 + goto out; 62 + } else { 63 + /* Manually load fexit program first. */ 64 + skel->links.fexit_cmdline = bpf_program__attach(skel->progs.fexit_cmdline); 65 + if (!ASSERT_OK_PTR(skel->links.fexit_cmdline, "attach_fexit")) 66 + goto out; 67 + 68 + skel->links.fentry_cmdline = bpf_program__attach(skel->progs.fentry_cmdline); 69 + if (!ASSERT_OK_PTR(skel->links.fentry_cmdline, "attach_fentry")) 70 + goto out; 71 + } 72 + 73 + read_proc_cmdline(); 74 + 75 + ASSERT_EQ(skel->bss->fentry_hit, 1, "fentry_hit"); 76 + ASSERT_EQ(skel->bss->fexit_hit, 1, "fexit_hit"); 77 + out: 78 + livepatch_trampoline__destroy(skel); 79 + } 80 + 81 + void test_livepatch_trampoline(void) 82 + { 83 + int retry_cnt = 0; 84 + 85 + retry: 86 + if (load_livepatch()) { 87 + if (retry_cnt) { 88 + ASSERT_OK(1, "load_livepatch"); 89 + goto out; 90 + } 91 + /* 92 + * Something else (previous run of the same test?) loaded 93 + * the KLP module. Unload the KLP module and retry. 94 + */ 95 + unload_livepatch(); 96 + retry_cnt++; 97 + goto retry; 98 + } 99 + 100 + if (test__start_subtest("fentry_first")) 101 + __test_livepatch_trampoline(false); 102 + 103 + if (test__start_subtest("fexit_first")) 104 + __test_livepatch_trampoline(true); 105 + out: 106 + unload_livepatch(); 107 + }
+140
tools/testing/selftests/bpf/prog_tests/mptcp.c
··· 6 6 #include <netinet/in.h> 7 7 #include <test_progs.h> 8 8 #include <unistd.h> 9 + #include <errno.h> 9 10 #include "cgroup_helpers.h" 10 11 #include "network_helpers.h" 11 12 #include "mptcp_sock.skel.h" 12 13 #include "mptcpify.skel.h" 13 14 #include "mptcp_subflow.skel.h" 15 + #include "mptcp_sockmap.skel.h" 14 16 15 17 #define NS_TEST "mptcp_ns" 16 18 #define ADDR_1 "10.0.1.1" ··· 438 436 close(cgroup_fd); 439 437 } 440 438 439 + /* Test sockmap on MPTCP server handling non-mp-capable clients. */ 440 + static void test_sockmap_with_mptcp_fallback(struct mptcp_sockmap *skel) 441 + { 442 + int listen_fd = -1, client_fd1 = -1, client_fd2 = -1; 443 + int server_fd1 = -1, server_fd2 = -1, sent, recvd; 444 + char snd[9] = "123456789"; 445 + char rcv[10]; 446 + 447 + /* start server with MPTCP enabled */ 448 + listen_fd = start_mptcp_server(AF_INET, NULL, 0, 0); 449 + if (!ASSERT_OK_FD(listen_fd, "sockmap-fb:start_mptcp_server")) 450 + return; 451 + 452 + skel->bss->trace_port = ntohs(get_socket_local_port(listen_fd)); 453 + skel->bss->sk_index = 0; 454 + /* create client without MPTCP enabled */ 455 + client_fd1 = connect_to_fd_opts(listen_fd, NULL); 456 + if (!ASSERT_OK_FD(client_fd1, "sockmap-fb:connect_to_fd")) 457 + goto end; 458 + 459 + server_fd1 = accept(listen_fd, NULL, 0); 460 + skel->bss->sk_index = 1; 461 + client_fd2 = connect_to_fd_opts(listen_fd, NULL); 462 + if (!ASSERT_OK_FD(client_fd2, "sockmap-fb:connect_to_fd")) 463 + goto end; 464 + 465 + server_fd2 = accept(listen_fd, NULL, 0); 466 + /* test normal redirect behavior: data sent by client_fd1 can be 467 + * received by client_fd2 468 + */ 469 + skel->bss->redirect_idx = 1; 470 + sent = send(client_fd1, snd, sizeof(snd), 0); 471 + if (!ASSERT_EQ(sent, sizeof(snd), "sockmap-fb:send(client_fd1)")) 472 + goto end; 473 + 474 + /* try to recv more bytes to avoid truncation check */ 475 + recvd = recv(client_fd2, rcv, sizeof(rcv), 0); 476 + if (!ASSERT_EQ(recvd, sizeof(snd), "sockmap-fb:recv(client_fd2)")) 477 + goto end; 478 + 479 + end: 480 + if (client_fd1 >= 0) 481 + close(client_fd1); 482 + if (client_fd2 >= 0) 483 + close(client_fd2); 484 + if (server_fd1 >= 0) 485 + close(server_fd1); 486 + if (server_fd2 >= 0) 487 + close(server_fd2); 488 + close(listen_fd); 489 + } 490 + 491 + /* Test sockmap rejection of MPTCP sockets - both server and client sides. 
*/ 492 + static void test_sockmap_reject_mptcp(struct mptcp_sockmap *skel) 493 + { 494 + int listen_fd = -1, server_fd = -1, client_fd1 = -1; 495 + int err, zero = 0; 496 + 497 + /* start server with MPTCP enabled */ 498 + listen_fd = start_mptcp_server(AF_INET, NULL, 0, 0); 499 + if (!ASSERT_OK_FD(listen_fd, "start_mptcp_server")) 500 + return; 501 + 502 + skel->bss->trace_port = ntohs(get_socket_local_port(listen_fd)); 503 + skel->bss->sk_index = 0; 504 + /* create client with MPTCP enabled */ 505 + client_fd1 = connect_to_fd(listen_fd, 0); 506 + if (!ASSERT_OK_FD(client_fd1, "connect_to_fd client_fd1")) 507 + goto end; 508 + 509 + /* bpf_sock_map_update() called from sockops should reject MPTCP sk */ 510 + if (!ASSERT_EQ(skel->bss->helper_ret, -EOPNOTSUPP, "should reject")) 511 + goto end; 512 + 513 + server_fd = accept(listen_fd, NULL, 0); 514 + err = bpf_map_update_elem(bpf_map__fd(skel->maps.sock_map), 515 + &zero, &server_fd, BPF_NOEXIST); 516 + if (!ASSERT_EQ(err, -EOPNOTSUPP, "server should be disallowed")) 517 + goto end; 518 + 519 + /* MPTCP client should also be disallowed */ 520 + err = bpf_map_update_elem(bpf_map__fd(skel->maps.sock_map), 521 + &zero, &client_fd1, BPF_NOEXIST); 522 + if (!ASSERT_EQ(err, -EOPNOTSUPP, "client should be disallowed")) 523 + goto end; 524 + end: 525 + if (client_fd1 >= 0) 526 + close(client_fd1); 527 + if (server_fd >= 0) 528 + close(server_fd); 529 + close(listen_fd); 530 + } 531 + 532 + static void test_mptcp_sockmap(void) 533 + { 534 + struct mptcp_sockmap *skel; 535 + struct netns_obj *netns; 536 + int cgroup_fd, err; 537 + 538 + cgroup_fd = test__join_cgroup("/mptcp_sockmap"); 539 + if (!ASSERT_OK_FD(cgroup_fd, "join_cgroup: mptcp_sockmap")) 540 + return; 541 + 542 + skel = mptcp_sockmap__open_and_load(); 543 + if (!ASSERT_OK_PTR(skel, "skel_open_load: mptcp_sockmap")) 544 + goto close_cgroup; 545 + 546 + skel->links.mptcp_sockmap_inject = 547 + bpf_program__attach_cgroup(skel->progs.mptcp_sockmap_inject, cgroup_fd); 548 + if (!ASSERT_OK_PTR(skel->links.mptcp_sockmap_inject, "attach sockmap")) 549 + goto skel_destroy; 550 + 551 + err = bpf_prog_attach(bpf_program__fd(skel->progs.mptcp_sockmap_redirect), 552 + bpf_map__fd(skel->maps.sock_map), 553 + BPF_SK_SKB_STREAM_VERDICT, 0); 554 + if (!ASSERT_OK(err, "bpf_prog_attach stream verdict")) 555 + goto skel_destroy; 556 + 557 + netns = netns_new(NS_TEST, true); 558 + if (!ASSERT_OK_PTR(netns, "netns_new: mptcp_sockmap")) 559 + goto skel_destroy; 560 + 561 + if (endpoint_init("subflow") < 0) 562 + goto close_netns; 563 + 564 + test_sockmap_with_mptcp_fallback(skel); 565 + test_sockmap_reject_mptcp(skel); 566 + 567 + close_netns: 568 + netns_free(netns); 569 + skel_destroy: 570 + mptcp_sockmap__destroy(skel); 571 + close_cgroup: 572 + close(cgroup_fd); 573 + } 574 + 441 575 void test_mptcp(void) 442 576 { 443 577 if (test__start_subtest("base")) ··· 582 444 test_mptcpify(); 583 445 if (test__start_subtest("subflow")) 584 446 test_subflow(); 447 + if (test__start_subtest("sockmap")) 448 + test_mptcp_sockmap(); 585 449 }
+150
tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <test_progs.h> 3 + #include "stacktrace_ips.skel.h" 4 + 5 + #ifdef __x86_64__ 6 + static int check_stacktrace_ips(int fd, __u32 key, int cnt, ...) 7 + { 8 + __u64 ips[PERF_MAX_STACK_DEPTH]; 9 + struct ksyms *ksyms = NULL; 10 + int i, err = 0; 11 + va_list args; 12 + 13 + /* sorted by addr */ 14 + ksyms = load_kallsyms_local(); 15 + if (!ASSERT_OK_PTR(ksyms, "load_kallsyms_local")) 16 + return -1; 17 + 18 + /* unlikely, but... */ 19 + if (!ASSERT_LT(cnt, PERF_MAX_STACK_DEPTH, "check_max")) 20 + return -1; 21 + 22 + err = bpf_map_lookup_elem(fd, &key, ips); 23 + if (err) 24 + goto out; 25 + 26 + /* 27 + * Compare all symbols provided via arguments with stacktrace ips, 28 + * and their related symbol addresses. 29 + */ 30 + va_start(args, cnt); 31 + 32 + for (i = 0; i < cnt; i++) { 33 + unsigned long val; 34 + struct ksym *ksym; 35 + 36 + val = va_arg(args, unsigned long); 37 + ksym = ksym_search_local(ksyms, ips[i]); 38 + if (!ASSERT_OK_PTR(ksym, "ksym_search_local")) 39 + break; 40 + ASSERT_EQ(ksym->addr, val, "stack_cmp"); 41 + } 42 + 43 + va_end(args); 44 + 45 + out: 46 + free_kallsyms_local(ksyms); 47 + return err; 48 + } 49 + 50 + static void test_stacktrace_ips_kprobe_multi(bool retprobe) 51 + { 52 + LIBBPF_OPTS(bpf_kprobe_multi_opts, opts, 53 + .retprobe = retprobe 54 + ); 55 + LIBBPF_OPTS(bpf_test_run_opts, topts); 56 + struct stacktrace_ips *skel; 57 + 58 + skel = stacktrace_ips__open_and_load(); 59 + if (!ASSERT_OK_PTR(skel, "stacktrace_ips__open_and_load")) 60 + return; 61 + 62 + if (!skel->kconfig->CONFIG_UNWINDER_ORC) { 63 + test__skip(); 64 + goto cleanup; 65 + } 66 + 67 + skel->links.kprobe_multi_test = bpf_program__attach_kprobe_multi_opts( 68 + skel->progs.kprobe_multi_test, 69 + "bpf_testmod_stacktrace_test", &opts); 70 + if (!ASSERT_OK_PTR(skel->links.kprobe_multi_test, "bpf_program__attach_kprobe_multi_opts")) 71 + goto cleanup; 72 + 73 + trigger_module_test_read(1); 74 + 75 + load_kallsyms(); 76 + 77 + check_stacktrace_ips(bpf_map__fd(skel->maps.stackmap), skel->bss->stack_key, 4, 78 + ksym_get_addr("bpf_testmod_stacktrace_test_3"), 79 + ksym_get_addr("bpf_testmod_stacktrace_test_2"), 80 + ksym_get_addr("bpf_testmod_stacktrace_test_1"), 81 + ksym_get_addr("bpf_testmod_test_read")); 82 + 83 + cleanup: 84 + stacktrace_ips__destroy(skel); 85 + } 86 + 87 + static void test_stacktrace_ips_raw_tp(void) 88 + { 89 + __u32 info_len = sizeof(struct bpf_prog_info); 90 + LIBBPF_OPTS(bpf_test_run_opts, topts); 91 + struct bpf_prog_info info = {}; 92 + struct stacktrace_ips *skel; 93 + __u64 bpf_prog_ksym = 0; 94 + int err; 95 + 96 + skel = stacktrace_ips__open_and_load(); 97 + if (!ASSERT_OK_PTR(skel, "stacktrace_ips__open_and_load")) 98 + return; 99 + 100 + if (!skel->kconfig->CONFIG_UNWINDER_ORC) { 101 + test__skip(); 102 + goto cleanup; 103 + } 104 + 105 + skel->links.rawtp_test = bpf_program__attach_raw_tracepoint( 106 + skel->progs.rawtp_test, 107 + "bpf_testmod_test_read"); 108 + if (!ASSERT_OK_PTR(skel->links.rawtp_test, "bpf_program__attach_raw_tracepoint")) 109 + goto cleanup; 110 + 111 + /* get bpf program address */ 112 + info.jited_ksyms = ptr_to_u64(&bpf_prog_ksym); 113 + info.nr_jited_ksyms = 1; 114 + err = bpf_prog_get_info_by_fd(bpf_program__fd(skel->progs.rawtp_test), 115 + &info, &info_len); 116 + if (!ASSERT_OK(err, "bpf_prog_get_info_by_fd")) 117 + goto cleanup; 118 + 119 + trigger_module_test_read(1); 120 + 121 + load_kallsyms(); 122 + 123 + check_stacktrace_ips(bpf_map__fd(skel->maps.stackmap), 
skel->bss->stack_key, 2, 124 + bpf_prog_ksym, 125 + ksym_get_addr("bpf_trace_run2")); 126 + 127 + cleanup: 128 + stacktrace_ips__destroy(skel); 129 + } 130 + 131 + static void __test_stacktrace_ips(void) 132 + { 133 + if (test__start_subtest("kprobe_multi")) 134 + test_stacktrace_ips_kprobe_multi(false); 135 + if (test__start_subtest("kretprobe_multi")) 136 + test_stacktrace_ips_kprobe_multi(true); 137 + if (test__start_subtest("raw_tp")) 138 + test_stacktrace_ips_raw_tp(); 139 + } 140 + #else 141 + static void __test_stacktrace_ips(void) 142 + { 143 + test__skip(); 144 + } 145 + #endif 146 + 147 + void test_stacktrace_ips(void) 148 + { 149 + __test_stacktrace_ips(); 150 + }
+53
tools/testing/selftests/bpf/progs/iters_looping.c
··· 161 161 162 162 return 0; 163 163 } 164 + 165 + __used 166 + static void iterator_with_diff_stack_depth(int x) 167 + { 168 + struct bpf_iter_num iter; 169 + 170 + asm volatile ( 171 + "if r1 == 42 goto 0f;" 172 + "*(u64 *)(r10 - 128) = 0;" 173 + "0:" 174 + /* create iterator */ 175 + "r1 = %[iter];" 176 + "r2 = 0;" 177 + "r3 = 10;" 178 + "call %[bpf_iter_num_new];" 179 + "1:" 180 + /* consume next item */ 181 + "r1 = %[iter];" 182 + "call %[bpf_iter_num_next];" 183 + "if r0 == 0 goto 2f;" 184 + "goto 1b;" 185 + "2:" 186 + /* destroy iterator */ 187 + "r1 = %[iter];" 188 + "call %[bpf_iter_num_destroy];" 189 + : 190 + : __imm_ptr(iter), ITER_HELPERS 191 + : __clobber_common, "r6" 192 + ); 193 + } 194 + 195 + SEC("socket") 196 + __success 197 + __naked int widening_stack_size_bug(void *ctx) 198 + { 199 + /* 200 + * Depending on iterator_with_diff_stack_depth() parameter value, 201 + * subprogram stack depth is either 8 or 128 bytes. Arrange values so 202 + * that it is 128 on a first call and 8 on a second. This triggered a 203 + * bug in verifier's widen_imprecise_scalars() logic. 204 + */ 205 + asm volatile ( 206 + "r6 = 0;" 207 + "r1 = 0;" 208 + "1:" 209 + "call iterator_with_diff_stack_depth;" 210 + "r1 = 42;" 211 + "r6 += 1;" 212 + "if r6 < 2 goto 1b;" 213 + "r0 = 0;" 214 + "exit;" 215 + ::: __clobber_all); 216 + }
+30
tools/testing/selftests/bpf/progs/livepatch_trampoline.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */ 3 + 4 + #include <linux/bpf.h> 5 + #include <bpf/bpf_helpers.h> 6 + #include <bpf/bpf_tracing.h> 7 + 8 + int fentry_hit; 9 + int fexit_hit; 10 + int my_pid; 11 + 12 + SEC("fentry/cmdline_proc_show") 13 + int BPF_PROG(fentry_cmdline) 14 + { 15 + if (my_pid != (bpf_get_current_pid_tgid() >> 32)) 16 + return 0; 17 + 18 + fentry_hit = 1; 19 + return 0; 20 + } 21 + 22 + SEC("fexit/cmdline_proc_show") 23 + int BPF_PROG(fexit_cmdline) 24 + { 25 + if (my_pid != (bpf_get_current_pid_tgid() >> 32)) 26 + return 0; 27 + 28 + fexit_hit = 1; 29 + return 0; 30 + }
+43
tools/testing/selftests/bpf/progs/mptcp_sockmap.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include "bpf_tracing_net.h" 4 + 5 + char _license[] SEC("license") = "GPL"; 6 + 7 + int sk_index; 8 + int redirect_idx; 9 + int trace_port; 10 + int helper_ret; 11 + struct { 12 + __uint(type, BPF_MAP_TYPE_SOCKMAP); 13 + __uint(key_size, sizeof(__u32)); 14 + __uint(value_size, sizeof(__u32)); 15 + __uint(max_entries, 100); 16 + } sock_map SEC(".maps"); 17 + 18 + SEC("sockops") 19 + int mptcp_sockmap_inject(struct bpf_sock_ops *skops) 20 + { 21 + struct bpf_sock *sk; 22 + 23 + /* only accept specified connection */ 24 + if (skops->local_port != trace_port || 25 + skops->op != BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB) 26 + return 1; 27 + 28 + sk = skops->sk; 29 + if (!sk) 30 + return 1; 31 + 32 + /* update sk handler */ 33 + helper_ret = bpf_sock_map_update(skops, &sock_map, &sk_index, BPF_NOEXIST); 34 + 35 + return 1; 36 + } 37 + 38 + SEC("sk_skb/stream_verdict") 39 + int mptcp_sockmap_redirect(struct __sk_buff *skb) 40 + { 41 + /* redirect skb to the sk under sock_map[redirect_idx] */ 42 + return bpf_sk_redirect_map(skb, &sock_map, redirect_idx, 0); 43 + }
+49
tools/testing/selftests/bpf/progs/stacktrace_ips.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) 2018 Facebook 3 + 4 + #include <vmlinux.h> 5 + #include <bpf/bpf_helpers.h> 6 + #include <bpf/bpf_tracing.h> 7 + 8 + #ifndef PERF_MAX_STACK_DEPTH 9 + #define PERF_MAX_STACK_DEPTH 127 10 + #endif 11 + 12 + typedef __u64 stack_trace_t[PERF_MAX_STACK_DEPTH]; 13 + 14 + struct { 15 + __uint(type, BPF_MAP_TYPE_STACK_TRACE); 16 + __uint(max_entries, 16384); 17 + __type(key, __u32); 18 + __type(value, stack_trace_t); 19 + } stackmap SEC(".maps"); 20 + 21 + extern bool CONFIG_UNWINDER_ORC __kconfig __weak; 22 + 23 + /* 24 + * This function is here to have CONFIG_UNWINDER_ORC 25 + * used and added to object BTF. 26 + */ 27 + int unused(void) 28 + { 29 + return CONFIG_UNWINDER_ORC ? 0 : 1; 30 + } 31 + 32 + __u32 stack_key; 33 + 34 + SEC("kprobe.multi") 35 + int kprobe_multi_test(struct pt_regs *ctx) 36 + { 37 + stack_key = bpf_get_stackid(ctx, &stackmap, 0); 38 + return 0; 39 + } 40 + 41 + SEC("raw_tp/bpf_testmod_test_read") 42 + int rawtp_test(void *ctx) 43 + { 44 + /* Skip ebpf program entry in the stack. */ 45 + stack_key = bpf_get_stackid(ctx, &stackmap, 0); 46 + return 0; 47 + } 48 + 49 + char _license[] SEC("license") = "GPL";
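Each bpf_get_stackid() call above stores up to PERF_MAX_STACK_DEPTH instruction pointers in the BPF_MAP_TYPE_STACK_TRACE map and returns the key; userspace reads the trace back with an ordinary map lookup, which is what check_stacktrace_ips() in the prog_tests side does. A stripped-down sketch of that lookup (error handling elided):

	__u64 ips[PERF_MAX_STACK_DEPTH];
	__u32 key = stack_key;	/* id returned by bpf_get_stackid() */

	if (bpf_map_lookup_elem(map_fd, &key, ips) == 0) {
		for (int i = 0; i < PERF_MAX_STACK_DEPTH && ips[i]; i++)
			printf("ip[%d] = 0x%llx\n", i, ips[i]);
	}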
+3 -3
tools/testing/selftests/bpf/progs/stream_fail.c
··· 10 10 __failure __msg("Possibly NULL pointer passed") 11 11 int stream_vprintk_null_arg(void *ctx) 12 12 { 13 - bpf_stream_vprintk(BPF_STDOUT, "", NULL, 0, NULL); 13 + bpf_stream_vprintk_impl(BPF_STDOUT, "", NULL, 0, NULL); 14 14 return 0; 15 15 } 16 16 ··· 18 18 __failure __msg("R3 type=scalar expected=") 19 19 int stream_vprintk_scalar_arg(void *ctx) 20 20 { 21 - bpf_stream_vprintk(BPF_STDOUT, "", (void *)46, 0, NULL); 21 + bpf_stream_vprintk_impl(BPF_STDOUT, "", (void *)46, 0, NULL); 22 22 return 0; 23 23 } 24 24 ··· 26 26 __failure __msg("arg#1 doesn't point to a const string") 27 27 int stream_vprintk_string_arg(void *ctx) 28 28 { 29 - bpf_stream_vprintk(BPF_STDOUT, ctx, NULL, 0, NULL); 29 + bpf_stream_vprintk_impl(BPF_STDOUT, ctx, NULL, 0, NULL); 30 30 return 0; 31 31 } 32 32
+3 -3
tools/testing/selftests/bpf/progs/task_work.c
··· 66 66 if (!work) 67 67 return 0; 68 68 69 - bpf_task_work_schedule_resume(task, &work->tw, &hmap, process_work, NULL); 69 + bpf_task_work_schedule_resume_impl(task, &work->tw, &hmap, process_work, NULL); 70 70 return 0; 71 71 } 72 72 ··· 80 80 work = bpf_map_lookup_elem(&arrmap, &key); 81 81 if (!work) 82 82 return 0; 83 - bpf_task_work_schedule_signal(task, &work->tw, &arrmap, process_work, NULL); 83 + bpf_task_work_schedule_signal_impl(task, &work->tw, &arrmap, process_work, NULL); 84 84 return 0; 85 85 } 86 86 ··· 102 102 work = bpf_map_lookup_elem(&lrumap, &key); 103 103 if (!work || work->data[0]) 104 104 return 0; 105 - bpf_task_work_schedule_resume(task, &work->tw, &lrumap, process_work, NULL); 105 + bpf_task_work_schedule_resume_impl(task, &work->tw, &lrumap, process_work, NULL); 106 106 return 0; 107 107 }
+4 -4
tools/testing/selftests/bpf/progs/task_work_fail.c
··· 53 53 work = bpf_map_lookup_elem(&arrmap, &key); 54 54 if (!work) 55 55 return 0; 56 - bpf_task_work_schedule_resume(task, &work->tw, &hmap, process_work, NULL); 56 + bpf_task_work_schedule_resume_impl(task, &work->tw, &hmap, process_work, NULL); 57 57 return 0; 58 58 } 59 59 ··· 65 65 struct bpf_task_work tw; 66 66 67 67 task = bpf_get_current_task_btf(); 68 - bpf_task_work_schedule_resume(task, &tw, &hmap, process_work, NULL); 68 + bpf_task_work_schedule_resume_impl(task, &tw, &hmap, process_work, NULL); 69 69 return 0; 70 70 } 71 71 ··· 76 76 struct task_struct *task; 77 77 78 78 task = bpf_get_current_task_btf(); 79 - bpf_task_work_schedule_resume(task, NULL, &hmap, process_work, NULL); 79 + bpf_task_work_schedule_resume_impl(task, NULL, &hmap, process_work, NULL); 80 80 return 0; 81 81 } 82 82 ··· 91 91 work = bpf_map_lookup_elem(&arrmap, &key); 92 92 if (!work) 93 93 return 0; 94 - bpf_task_work_schedule_resume(task, &work->tw, NULL, process_work, NULL); 94 + bpf_task_work_schedule_resume_impl(task, &work->tw, NULL, process_work, NULL); 95 95 return 0; 96 96 }
+2 -2
tools/testing/selftests/bpf/progs/task_work_stress.c
··· 51 51 if (!work) 52 52 return 0; 53 53 } 54 - err = bpf_task_work_schedule_signal(bpf_get_current_task_btf(), &work->tw, &hmap, 55 - process_work, NULL); 54 + err = bpf_task_work_schedule_signal_impl(bpf_get_current_task_btf(), &work->tw, &hmap, 55 + process_work, NULL); 56 56 if (err) 57 57 __sync_fetch_and_add(&schedule_error, 1); 58 58 else
+26
tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
··· 417 417 return a + (long)b + c + d + (long)e + f + g + h + i + j + k; 418 418 } 419 419 420 + noinline void bpf_testmod_stacktrace_test(void) 421 + { 422 + /* used for stacktrace test as attach function */ 423 + asm volatile (""); 424 + } 425 + 426 + noinline void bpf_testmod_stacktrace_test_3(void) 427 + { 428 + bpf_testmod_stacktrace_test(); 429 + asm volatile (""); 430 + } 431 + 432 + noinline void bpf_testmod_stacktrace_test_2(void) 433 + { 434 + bpf_testmod_stacktrace_test_3(); 435 + asm volatile (""); 436 + } 437 + 438 + noinline void bpf_testmod_stacktrace_test_1(void) 439 + { 440 + bpf_testmod_stacktrace_test_2(); 441 + asm volatile (""); 442 + } 443 + 420 444 int bpf_testmod_fentry_ok; 421 445 422 446 noinline ssize_t ··· 520 496 bpf_testmod_fentry_test11(16, (void *)17, 18, 19, (void *)20, 521 497 21, 22, 23, 24, 25, 26) != 231) 522 498 goto out; 499 + 500 + bpf_testmod_stacktrace_test_1(); 523 501 524 502 bpf_testmod_fentry_ok = 1; 525 503 out:
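The empty asm volatile ("") in each helper acts as a compiler barrier: noinline keeps the symbol, but a call in tail position could still be emitted as a jump, dropping the caller's frame from the stacktrace the test verifies. A hedged illustration of the difference (hypothetical functions, not from the patch):

	__attribute__((noinline)) void callee(void);

	__attribute__((noinline)) void caller_tail(void)
	{
		callee();		/* may compile to "jmp callee" */
	}

	__attribute__((noinline)) void caller_framed(void)
	{
		callee();
		asm volatile ("");	/* forces a real call + live frame */
	}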
+4
tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc
··· 20 20 echo 0 > tracing_on 21 21 echo 0 > events/enable 22 22 23 + # Clear functions caused by page cache; run sample_events twice 24 + sample_events 25 + sample_events 26 + 23 27 echo "Get the most frequently calling function" 24 28 echo > trace 25 29 sample_events
+1
tools/testing/selftests/net/.gitignore
··· 45 45 socket 46 46 so_incoming_cpu 47 47 so_netns_cookie 48 + so_peek_off 48 49 so_txtime 49 50 so_rcv_listener 50 51 stress_reuseport_listen
+1
tools/testing/selftests/net/af_unix/Makefile
··· 6 6 scm_inq \ 7 7 scm_pidfd \ 8 8 scm_rights \ 9 + so_peek_off \ 9 10 unix_connect \ 10 11 unix_connreset \ 11 12 # end of TEST_GEN_PROGS
+162
tools/testing/selftests/net/af_unix/so_peek_off.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright 2025 Google LLC */ 3 + 4 + #include <stdlib.h> 5 + #include <unistd.h> 6 + 7 + #include <sys/socket.h> 8 + 9 + #include "../../kselftest_harness.h" 10 + 11 + FIXTURE(so_peek_off) 12 + { 13 + int fd[2]; /* 0: sender, 1: receiver */ 14 + }; 15 + 16 + FIXTURE_VARIANT(so_peek_off) 17 + { 18 + int type; 19 + }; 20 + 21 + FIXTURE_VARIANT_ADD(so_peek_off, stream) 22 + { 23 + .type = SOCK_STREAM, 24 + }; 25 + 26 + FIXTURE_VARIANT_ADD(so_peek_off, dgram) 27 + { 28 + .type = SOCK_DGRAM, 29 + }; 30 + 31 + FIXTURE_VARIANT_ADD(so_peek_off, seqpacket) 32 + { 33 + .type = SOCK_SEQPACKET, 34 + }; 35 + 36 + FIXTURE_SETUP(so_peek_off) 37 + { 38 + struct timeval timeout = { 39 + .tv_sec = 0, 40 + .tv_usec = 3000, 41 + }; 42 + int ret; 43 + 44 + ret = socketpair(AF_UNIX, variant->type, 0, self->fd); 45 + ASSERT_EQ(0, ret); 46 + 47 + ret = setsockopt(self->fd[1], SOL_SOCKET, SO_RCVTIMEO_NEW, 48 + &timeout, sizeof(timeout)); 49 + ASSERT_EQ(0, ret); 50 + 51 + ret = setsockopt(self->fd[1], SOL_SOCKET, SO_PEEK_OFF, 52 + &(int){0}, sizeof(int)); 53 + ASSERT_EQ(0, ret); 54 + } 55 + 56 + FIXTURE_TEARDOWN(so_peek_off) 57 + { 58 + close_range(self->fd[0], self->fd[1], 0); 59 + } 60 + 61 + #define sendeq(fd, str, flags) \ 62 + do { \ 63 + int bytes, len = strlen(str); \ 64 + \ 65 + bytes = send(fd, str, len, flags); \ 66 + ASSERT_EQ(len, bytes); \ 67 + } while (0) 68 + 69 + #define recveq(fd, str, buflen, flags) \ 70 + do { \ 71 + char buf[(buflen) + 1] = {}; \ 72 + int bytes; \ 73 + \ 74 + bytes = recv(fd, buf, buflen, flags); \ 75 + ASSERT_NE(-1, bytes); \ 76 + ASSERT_STREQ(str, buf); \ 77 + } while (0) 78 + 79 + #define async \ 80 + for (pid_t pid = (pid = fork(), \ 81 + pid < 0 ? \ 82 + __TH_LOG("Failed to start async {}"), \ 83 + _metadata->exit_code = KSFT_FAIL, \ 84 + __bail(1, _metadata), \ 85 + 0xdead : \ 86 + pid); \ 87 + !pid; exit(0)) 88 + 89 + TEST_F(so_peek_off, single_chunk) 90 + { 91 + sendeq(self->fd[0], "aaaabbbb", 0); 92 + 93 + recveq(self->fd[1], "aaaa", 4, MSG_PEEK); 94 + recveq(self->fd[1], "bbbb", 100, MSG_PEEK); 95 + } 96 + 97 + TEST_F(so_peek_off, two_chunks) 98 + { 99 + sendeq(self->fd[0], "aaaa", 0); 100 + sendeq(self->fd[0], "bbbb", 0); 101 + 102 + recveq(self->fd[1], "aaaa", 4, MSG_PEEK); 103 + recveq(self->fd[1], "bbbb", 100, MSG_PEEK); 104 + } 105 + 106 + TEST_F(so_peek_off, two_chunks_blocking) 107 + { 108 + async { 109 + usleep(1000); 110 + sendeq(self->fd[0], "aaaa", 0); 111 + } 112 + 113 + recveq(self->fd[1], "aaaa", 4, MSG_PEEK); 114 + 115 + async { 116 + usleep(1000); 117 + sendeq(self->fd[0], "bbbb", 0); 118 + } 119 + 120 + /* goto again; -> goto redo; in unix_stream_read_generic(). */ 121 + recveq(self->fd[1], "bbbb", 100, MSG_PEEK); 122 + } 123 + 124 + TEST_F(so_peek_off, two_chunks_overlap) 125 + { 126 + sendeq(self->fd[0], "aaaa", 0); 127 + recveq(self->fd[1], "aa", 2, MSG_PEEK); 128 + 129 + sendeq(self->fd[0], "bbbb", 0); 130 + 131 + if (variant->type == SOCK_STREAM) { 132 + /* SOCK_STREAM tries to fill the buffer. */ 133 + recveq(self->fd[1], "aabb", 4, MSG_PEEK); 134 + recveq(self->fd[1], "bb", 100, MSG_PEEK); 135 + } else { 136 + /* SOCK_DGRAM and SOCK_SEQPACKET return at the skb boundary. 
*/ 137 + recveq(self->fd[1], "aa", 100, MSG_PEEK); 138 + recveq(self->fd[1], "bbbb", 100, MSG_PEEK); 139 + } 140 + } 141 + 142 + TEST_F(so_peek_off, two_chunks_overlap_blocking) 143 + { 144 + async { 145 + usleep(1000); 146 + sendeq(self->fd[0], "aaaa", 0); 147 + } 148 + 149 + recveq(self->fd[1], "aa", 2, MSG_PEEK); 150 + 151 + async { 152 + usleep(1000); 153 + sendeq(self->fd[0], "bbbb", 0); 154 + } 155 + 156 + /* Even SOCK_STREAM does not wait if at least one byte is read. */ 157 + recveq(self->fd[1], "aa", 100, MSG_PEEK); 158 + 159 + recveq(self->fd[1], "bbbb", 100, MSG_PEEK); 160 + } 161 + 162 + TEST_HARNESS_MAIN
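For reference, SO_PEEK_OFF arms a per-socket peek cursor: once set (here to 0), each recv(..., MSG_PEEK) starts where the previous peek ended and advances the cursor by the bytes peeked, while a normal read consumes data and decreases the cursor by the bytes removed. A condensed sketch of the behavior the single_chunk test exercises (error handling elided):

	int off = 0;
	char buf[5] = {};

	setsockopt(fd, SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off));

	/* receive queue holds "aaaabbbb" */
	recv(fd, buf, 4, MSG_PEEK);	/* "aaaa", cursor 0 -> 4 */
	recv(fd, buf, 4, MSG_PEEK);	/* "bbbb", not "aaaa" again */
	recv(fd, buf, 4, 0);		/* consumes "aaaa", cursor 8 -> 4 */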
+7
tools/testing/selftests/net/forwarding/lib_sh_test.sh
··· 30 30 do_test "tfail" false 31 31 } 32 32 33 + tfail2() 34 + { 35 + do_test "tfail2" false 36 + } 37 + 33 38 txfail() 34 39 { 35 40 FAIL_TO_XFAIL=yes do_test "txfail" false ··· 137 132 ret_subtest $ksft_fail "tfail" txfail tfail 138 133 139 134 ret_subtest $ksft_xfail "txfail" txfail txfail 135 + 136 + ret_subtest $ksft_fail "tfail2" tfail2 tfail 140 137 } 141 138 142 139 exit_status_tests_run()
+1 -1
tools/testing/selftests/net/lib.sh
··· 43 43 weights[$i]=$((weight++)) 44 44 done 45 45 46 - if [[ ${weights[$a]} > ${weights[$b]} ]]; then 46 + if [[ ${weights[$a]} -ge ${weights[$b]} ]]; then 47 47 echo "$a" 48 48 return 0 49 49 else
+16 -11
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 3655 3655 fastclose_tests() 3656 3656 { 3657 3657 if reset_check_counter "fastclose test" "MPTcpExtMPFastcloseTx"; then 3658 - MPTCP_LIB_SUBTEST_FLAKY=1 3659 3658 test_linkfail=1024 fastclose=client \ 3660 3659 run_tests $ns1 $ns2 10.0.1.1 3661 3660 chk_join_nr 0 0 0 ··· 3663 3664 fi 3664 3665 3665 3666 if reset_check_counter "fastclose server test" "MPTcpExtMPFastcloseRx"; then 3666 - MPTCP_LIB_SUBTEST_FLAKY=1 3667 3667 test_linkfail=1024 fastclose=server \ 3668 3668 run_tests $ns1 $ns2 10.0.1.1 3669 3669 join_rst_nr=1 \ ··· 3959 3961 continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3960 3962 set_userspace_pm $ns1 3961 3963 pm_nl_set_limits $ns2 2 2 3962 - { test_linkfail=128 speed=5 \ 3964 + { timeout_test=120 test_linkfail=128 speed=5 \ 3963 3965 run_tests $ns1 $ns2 10.0.1.1 & } 2>/dev/null 3964 3966 local tests_pid=$! 3965 3967 wait_mpj $ns1 ··· 3992 3994 continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3993 3995 set_userspace_pm $ns2 3994 3996 pm_nl_set_limits $ns1 0 1 3995 - { test_linkfail=128 speed=5 \ 3997 + { timeout_test=120 test_linkfail=128 speed=5 \ 3996 3998 run_tests $ns1 $ns2 10.0.1.1 & } 2>/dev/null 3997 3999 local tests_pid=$! 3998 4000 wait_mpj $ns2 ··· 4020 4022 continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 4021 4023 set_userspace_pm $ns2 4022 4024 pm_nl_set_limits $ns1 0 1 4023 - { test_linkfail=128 speed=5 \ 4025 + { timeout_test=120 test_linkfail=128 speed=5 \ 4024 4026 run_tests $ns1 $ns2 10.0.1.1 & } 2>/dev/null 4025 4027 local tests_pid=$! 4026 4028 wait_mpj $ns2 ··· 4041 4043 continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 4042 4044 set_userspace_pm $ns2 4043 4045 pm_nl_set_limits $ns1 0 1 4044 - { test_linkfail=128 speed=5 \ 4046 + { timeout_test=120 test_linkfail=128 speed=5 \ 4045 4047 run_tests $ns1 $ns2 10.0.1.1 & } 2>/dev/null 4046 4048 local tests_pid=$! 4047 4049 wait_mpj $ns2 ··· 4065 4067 continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 4066 4068 set_userspace_pm $ns1 4067 4069 pm_nl_set_limits $ns2 1 1 4068 - { test_linkfail=128 speed=5 \ 4070 + { timeout_test=120 test_linkfail=128 speed=5 \ 4069 4071 run_tests $ns1 $ns2 10.0.1.1 & } 2>/dev/null 4070 4072 local tests_pid=$! 4071 4073 wait_mpj $ns1 ··· 4096 4098 pm_nl_set_limits $ns1 2 2 4097 4099 pm_nl_set_limits $ns2 2 2 4098 4100 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal 4099 - { test_linkfail=128 speed=slow \ 4101 + { timeout_test=120 test_linkfail=128 speed=slow \ 4100 4102 run_tests $ns1 $ns2 10.0.1.1 & } 2>/dev/null 4101 4103 local tests_pid=$! 4102 4104 ··· 4123 4125 pm_nl_set_limits $ns2 0 3 4124 4126 pm_nl_add_endpoint $ns2 10.0.1.2 id 1 dev ns2eth1 flags subflow 4125 4127 pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow 4126 - { test_linkfail=128 speed=5 \ 4128 + { timeout_test=120 test_linkfail=128 speed=5 \ 4127 4129 run_tests $ns1 $ns2 10.0.1.1 & } 2>/dev/null 4128 4130 local tests_pid=$! 4129 4131 ··· 4201 4203 # broadcast IP: no packet for this address will be received on ns1 4202 4204 pm_nl_add_endpoint $ns1 224.0.0.1 id 2 flags signal 4203 4205 pm_nl_add_endpoint $ns1 10.0.1.1 id 42 flags signal 4204 - { test_linkfail=128 speed=5 \ 4206 + { timeout_test=120 test_linkfail=128 speed=5 \ 4205 4207 run_tests $ns1 $ns2 10.0.1.1 & } 2>/dev/null 4206 4208 local tests_pid=$! 
4207 4209 ··· 4210 4212 $ns1 10.0.2.1 id 1 flags signal 4211 4213 chk_subflow_nr "before delete" 2 4212 4214 chk_mptcp_info subflows 1 subflows 1 4215 + chk_mptcp_info add_addr_signal 2 add_addr_accepted 1 4213 4216 4214 4217 pm_nl_del_endpoint $ns1 1 10.0.2.1 4215 4218 pm_nl_del_endpoint $ns1 2 224.0.0.1 4216 4219 sleep 0.5 4217 4220 chk_subflow_nr "after delete" 1 4218 4221 chk_mptcp_info subflows 0 subflows 0 4222 + chk_mptcp_info add_addr_signal 0 add_addr_accepted 0 4219 4223 4220 4224 pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal 4221 4225 pm_nl_add_endpoint $ns1 10.0.3.1 id 2 flags signal 4222 4226 wait_mpj $ns2 4223 4227 chk_subflow_nr "after re-add" 3 4224 4228 chk_mptcp_info subflows 2 subflows 2 4229 + chk_mptcp_info add_addr_signal 2 add_addr_accepted 2 4225 4230 4226 4231 pm_nl_del_endpoint $ns1 42 10.0.1.1 4227 4232 sleep 0.5 4228 4233 chk_subflow_nr "after delete ID 0" 2 4229 4234 chk_mptcp_info subflows 2 subflows 2 4235 + chk_mptcp_info add_addr_signal 2 add_addr_accepted 2 4230 4236 4231 4237 pm_nl_add_endpoint $ns1 10.0.1.1 id 99 flags signal 4232 4238 wait_mpj $ns2 4233 4239 chk_subflow_nr "after re-add ID 0" 3 4234 4240 chk_mptcp_info subflows 3 subflows 3 4241 + chk_mptcp_info add_addr_signal 3 add_addr_accepted 2 4235 4242 4236 4243 pm_nl_del_endpoint $ns1 99 10.0.1.1 4237 4244 sleep 0.5 4238 4245 chk_subflow_nr "after re-delete ID 0" 2 4239 4246 chk_mptcp_info subflows 2 subflows 2 4247 + chk_mptcp_info add_addr_signal 2 add_addr_accepted 2 4240 4248 4241 4249 pm_nl_add_endpoint $ns1 10.0.1.1 id 88 flags signal 4242 4250 wait_mpj $ns2 4243 4251 chk_subflow_nr "after re-re-add ID 0" 3 4244 4252 chk_mptcp_info subflows 3 subflows 3 4253 + chk_mptcp_info add_addr_signal 3 add_addr_accepted 2 4245 4254 mptcp_lib_kill_group_wait $tests_pid 4246 4255 4247 4256 kill_events_pids ··· 4281 4276 # broadcast IP: no packet for this address will be received on ns1 4282 4277 pm_nl_add_endpoint $ns1 224.0.0.1 id 2 flags signal 4283 4278 pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow 4284 - { test_linkfail=128 speed=20 \ 4279 + { timeout_test=120 test_linkfail=128 speed=20 \ 4285 4280 run_tests $ns1 $ns2 10.0.1.1 & } 2>/dev/null 4286 4281 local tests_pid=$! 4287 4282
+1 -1
tools/testing/selftests/user_events/perf_test.c
··· 236 236 ASSERT_EQ(1 << reg.enable_bit, self->check); 237 237 238 238 /* Ensure write shows up at correct offset */ 239 - ASSERT_NE(-1, write(self->data_fd, &reg.write_index, 239 + ASSERT_NE(-1, write(self->data_fd, (void *)&reg.write_index, 240 240 sizeof(reg.write_index))); 241 241 val = (void *)(((char *)perf_page) + perf_page->data_offset); 242 242 ASSERT_EQ(PERF_RECORD_SAMPLE, *val);
+18 -1
tools/testing/selftests/vfio/lib/include/vfio_util.h
··· 4 4 5 5 #include <fcntl.h> 6 6 #include <string.h> 7 - #include <linux/vfio.h> 7 + 8 + #include <uapi/linux/types.h> 9 + #include <linux/iommufd.h> 8 10 #include <linux/list.h> 9 11 #include <linux/pci_regs.h> 12 + #include <linux/vfio.h> 10 13 11 14 #include "../../../kselftest.h" 12 15 ··· 188 185 struct vfio_pci_driver driver; 189 186 }; 190 187 188 + struct iova_allocator { 189 + struct iommu_iova_range *ranges; 190 + u32 nranges; 191 + u32 range_idx; 192 + u64 range_offset; 193 + }; 194 + 191 195 /* 192 196 * Return the BDF string of the device that the test should use. 193 197 * ··· 215 205 struct vfio_pci_device *vfio_pci_device_init(const char *bdf, const char *iommu_mode); 216 206 void vfio_pci_device_cleanup(struct vfio_pci_device *device); 217 207 void vfio_pci_device_reset(struct vfio_pci_device *device); 208 + 209 + struct iommu_iova_range *vfio_pci_iova_ranges(struct vfio_pci_device *device, 210 + u32 *nranges); 211 + 212 + struct iova_allocator *iova_allocator_init(struct vfio_pci_device *device); 213 + void iova_allocator_cleanup(struct iova_allocator *allocator); 214 + iova_t iova_allocator_alloc(struct iova_allocator *allocator, size_t size); 218 215 219 216 int __vfio_pci_dma_map(struct vfio_pci_device *device, 220 217 struct vfio_dma_region *region);
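The new allocator replaces the old identity-mapping habit (iova = vaddr) with IOVAs drawn from the ranges the IOMMU actually allows. The expected call sequence, sketched from the declarations above (SZ_2M and the DMA-map call are as used by the tests further down):

	struct iova_allocator *allocator = iova_allocator_init(device);
	struct vfio_dma_region region = {
		.vaddr = vaddr,
		.size = SZ_2M,
	};

	region.iova = iova_allocator_alloc(allocator, region.size);
	vfio_pci_dma_map(device, &region);

	/* ... issue DMA against region.iova ... */

	iova_allocator_cleanup(allocator);	/* frees allocator state only */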
+245 -1
tools/testing/selftests/vfio/lib/vfio_pci_device.c
··· 12 12 #include <sys/mman.h> 13 13 14 14 #include <uapi/linux/types.h> 15 + #include <linux/iommufd.h> 15 16 #include <linux/limits.h> 16 17 #include <linux/mman.h> 18 + #include <linux/overflow.h> 17 19 #include <linux/types.h> 18 20 #include <linux/vfio.h> 19 - #include <linux/iommufd.h> 20 21 21 22 #include "../../../kselftest.h" 22 23 #include <vfio_util.h> ··· 29 28 int __ret = ioctl((_fd), (_op), (__arg)); \ 30 29 VFIO_ASSERT_EQ(__ret, 0, "ioctl(%s, %s, %s) returned %d\n", #_fd, #_op, #_arg, __ret); \ 31 30 } while (0) 31 + 32 + static struct vfio_info_cap_header *next_cap_hdr(void *buf, u32 bufsz, 33 + u32 *cap_offset) 34 + { 35 + struct vfio_info_cap_header *hdr; 36 + 37 + if (!*cap_offset) 38 + return NULL; 39 + 40 + VFIO_ASSERT_LT(*cap_offset, bufsz); 41 + VFIO_ASSERT_GE(bufsz - *cap_offset, sizeof(*hdr)); 42 + 43 + hdr = (struct vfio_info_cap_header *)((u8 *)buf + *cap_offset); 44 + *cap_offset = hdr->next; 45 + 46 + return hdr; 47 + } 48 + 49 + static struct vfio_info_cap_header *vfio_iommu_info_cap_hdr(struct vfio_iommu_type1_info *info, 50 + u16 cap_id) 51 + { 52 + struct vfio_info_cap_header *hdr; 53 + u32 cap_offset = info->cap_offset; 54 + u32 max_depth; 55 + u32 depth = 0; 56 + 57 + if (!(info->flags & VFIO_IOMMU_INFO_CAPS)) 58 + return NULL; 59 + 60 + if (cap_offset) 61 + VFIO_ASSERT_GE(cap_offset, sizeof(*info)); 62 + 63 + max_depth = (info->argsz - sizeof(*info)) / sizeof(*hdr); 64 + 65 + while ((hdr = next_cap_hdr(info, info->argsz, &cap_offset))) { 66 + depth++; 67 + VFIO_ASSERT_LE(depth, max_depth, "Capability chain contains a cycle\n"); 68 + 69 + if (hdr->id == cap_id) 70 + return hdr; 71 + } 72 + 73 + return NULL; 74 + } 75 + 76 + /* Return buffer including capability chain, if present. Free with free() */ 77 + static struct vfio_iommu_type1_info *vfio_iommu_get_info(struct vfio_pci_device *device) 78 + { 79 + struct vfio_iommu_type1_info *info; 80 + 81 + info = malloc(sizeof(*info)); 82 + VFIO_ASSERT_NOT_NULL(info); 83 + 84 + *info = (struct vfio_iommu_type1_info) { 85 + .argsz = sizeof(*info), 86 + }; 87 + 88 + ioctl_assert(device->container_fd, VFIO_IOMMU_GET_INFO, info); 89 + VFIO_ASSERT_GE(info->argsz, sizeof(*info)); 90 + 91 + info = realloc(info, info->argsz); 92 + VFIO_ASSERT_NOT_NULL(info); 93 + 94 + ioctl_assert(device->container_fd, VFIO_IOMMU_GET_INFO, info); 95 + VFIO_ASSERT_GE(info->argsz, sizeof(*info)); 96 + 97 + return info; 98 + } 99 + 100 + /* 101 + * Return iova ranges for the device's container. Normalize vfio_iommu_type1 to 102 + * report iommufd's iommu_iova_range. Free with free(). 
103 + */ 104 + static struct iommu_iova_range *vfio_iommu_iova_ranges(struct vfio_pci_device *device, 105 + u32 *nranges) 106 + { 107 + struct vfio_iommu_type1_info_cap_iova_range *cap_range; 108 + struct vfio_iommu_type1_info *info; 109 + struct vfio_info_cap_header *hdr; 110 + struct iommu_iova_range *ranges = NULL; 111 + 112 + info = vfio_iommu_get_info(device); 113 + hdr = vfio_iommu_info_cap_hdr(info, VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE); 114 + VFIO_ASSERT_NOT_NULL(hdr); 115 + 116 + cap_range = container_of(hdr, struct vfio_iommu_type1_info_cap_iova_range, header); 117 + VFIO_ASSERT_GT(cap_range->nr_iovas, 0); 118 + 119 + ranges = calloc(cap_range->nr_iovas, sizeof(*ranges)); 120 + VFIO_ASSERT_NOT_NULL(ranges); 121 + 122 + for (u32 i = 0; i < cap_range->nr_iovas; i++) { 123 + ranges[i] = (struct iommu_iova_range){ 124 + .start = cap_range->iova_ranges[i].start, 125 + .last = cap_range->iova_ranges[i].end, 126 + }; 127 + } 128 + 129 + *nranges = cap_range->nr_iovas; 130 + 131 + free(info); 132 + return ranges; 133 + } 134 + 135 + /* Return iova ranges of the device's IOAS. Free with free() */ 136 + static struct iommu_iova_range *iommufd_iova_ranges(struct vfio_pci_device *device, 137 + u32 *nranges) 138 + { 139 + struct iommu_iova_range *ranges; 140 + int ret; 141 + 142 + struct iommu_ioas_iova_ranges query = { 143 + .size = sizeof(query), 144 + .ioas_id = device->ioas_id, 145 + }; 146 + 147 + ret = ioctl(device->iommufd, IOMMU_IOAS_IOVA_RANGES, &query); 148 + VFIO_ASSERT_EQ(ret, -1); 149 + VFIO_ASSERT_EQ(errno, EMSGSIZE); 150 + VFIO_ASSERT_GT(query.num_iovas, 0); 151 + 152 + ranges = calloc(query.num_iovas, sizeof(*ranges)); 153 + VFIO_ASSERT_NOT_NULL(ranges); 154 + 155 + query.allowed_iovas = (uintptr_t)ranges; 156 + 157 + ioctl_assert(device->iommufd, IOMMU_IOAS_IOVA_RANGES, &query); 158 + *nranges = query.num_iovas; 159 + 160 + return ranges; 161 + } 162 + 163 + static int iova_range_comp(const void *a, const void *b) 164 + { 165 + const struct iommu_iova_range *ra = a, *rb = b; 166 + 167 + if (ra->start < rb->start) 168 + return -1; 169 + 170 + if (ra->start > rb->start) 171 + return 1; 172 + 173 + return 0; 174 + } 175 + 176 + /* Return sorted IOVA ranges of the device. Free with free(). 
*/ 177 + struct iommu_iova_range *vfio_pci_iova_ranges(struct vfio_pci_device *device, 178 + u32 *nranges) 179 + { 180 + struct iommu_iova_range *ranges; 181 + 182 + if (device->iommufd) 183 + ranges = iommufd_iova_ranges(device, nranges); 184 + else 185 + ranges = vfio_iommu_iova_ranges(device, nranges); 186 + 187 + if (!ranges) 188 + return NULL; 189 + 190 + VFIO_ASSERT_GT(*nranges, 0); 191 + 192 + /* Sort and check that ranges are sane and non-overlapping */ 193 + qsort(ranges, *nranges, sizeof(*ranges), iova_range_comp); 194 + VFIO_ASSERT_LT(ranges[0].start, ranges[0].last); 195 + 196 + for (u32 i = 1; i < *nranges; i++) { 197 + VFIO_ASSERT_LT(ranges[i].start, ranges[i].last); 198 + VFIO_ASSERT_LT(ranges[i - 1].last, ranges[i].start); 199 + } 200 + 201 + return ranges; 202 + } 203 + 204 + struct iova_allocator *iova_allocator_init(struct vfio_pci_device *device) 205 + { 206 + struct iova_allocator *allocator; 207 + struct iommu_iova_range *ranges; 208 + u32 nranges; 209 + 210 + ranges = vfio_pci_iova_ranges(device, &nranges); 211 + VFIO_ASSERT_NOT_NULL(ranges); 212 + 213 + allocator = malloc(sizeof(*allocator)); 214 + VFIO_ASSERT_NOT_NULL(allocator); 215 + 216 + *allocator = (struct iova_allocator){ 217 + .ranges = ranges, 218 + .nranges = nranges, 219 + .range_idx = 0, 220 + .range_offset = 0, 221 + }; 222 + 223 + return allocator; 224 + } 225 + 226 + void iova_allocator_cleanup(struct iova_allocator *allocator) 227 + { 228 + free(allocator->ranges); 229 + free(allocator); 230 + } 231 + 232 + iova_t iova_allocator_alloc(struct iova_allocator *allocator, size_t size) 233 + { 234 + VFIO_ASSERT_GT(size, 0, "Invalid size arg, zero\n"); 235 + VFIO_ASSERT_EQ(size & (size - 1), 0, "Invalid size arg, non-power-of-2\n"); 236 + 237 + for (;;) { 238 + struct iommu_iova_range *range; 239 + iova_t iova, last; 240 + 241 + VFIO_ASSERT_LT(allocator->range_idx, allocator->nranges, 242 + "IOVA allocator out of space\n"); 243 + 244 + range = &allocator->ranges[allocator->range_idx]; 245 + iova = range->start + allocator->range_offset; 246 + 247 + /* Check for sufficient space at the current offset */ 248 + if (check_add_overflow(iova, size - 1, &last) || 249 + last > range->last) 250 + goto next_range; 251 + 252 + /* Align iova to size */ 253 + iova = last & ~(size - 1); 254 + 255 + /* Check for sufficient space at the aligned iova */ 256 + if (check_add_overflow(iova, size - 1, &last) || 257 + last > range->last) 258 + goto next_range; 259 + 260 + if (last == range->last) { 261 + allocator->range_idx++; 262 + allocator->range_offset = 0; 263 + } else { 264 + allocator->range_offset = last - range->start + 1; 265 + } 266 + 267 + return iova; 268 + 269 + next_range: 270 + allocator->range_idx++; 271 + allocator->range_offset = 0; 272 + } 273 + } 32 274 33 275 iova_t __to_iova(struct vfio_pci_device *device, void *vaddr) 34 276 {
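iova_allocator_alloc() leans on the power-of-two align-up identity: after the first overflow check, last == iova + size - 1, so masking the low bits of last rounds iova up to the next multiple of size in a single step. A quick worked check (illustrative helper, not in the patch):

	/* align_up(x, s) == (x + s - 1) & ~(s - 1) for power-of-2 s.
	 * iova = 0x3001, size = 0x1000:
	 *   last = 0x3001 + 0xfff = 0x4000; 0x4000 & ~0xfff = 0x4000
	 * iova = 0x3000 (already aligned):
	 *   last = 0x3fff; 0x3fff & ~0xfff = 0x3000
	 */
	static inline __u64 align_up_pow2(__u64 x, __u64 s)
	{
		return (x + s - 1) & ~(s - 1);
	}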
+17 -3
tools/testing/selftests/vfio/vfio_dma_mapping_test.c
··· 3 3 #include <sys/mman.h> 4 4 #include <unistd.h> 5 5 6 + #include <uapi/linux/types.h> 7 + #include <linux/iommufd.h> 6 8 #include <linux/limits.h> 7 9 #include <linux/mman.h> 8 10 #include <linux/sizes.h> ··· 95 93 96 94 FIXTURE(vfio_dma_mapping_test) { 97 95 struct vfio_pci_device *device; 96 + struct iova_allocator *iova_allocator; 98 97 }; 99 98 100 99 FIXTURE_VARIANT(vfio_dma_mapping_test) { ··· 120 117 FIXTURE_SETUP(vfio_dma_mapping_test) 121 118 { 122 119 self->device = vfio_pci_device_init(device_bdf, variant->iommu_mode); 120 + self->iova_allocator = iova_allocator_init(self->device); 123 121 } 124 122 125 123 FIXTURE_TEARDOWN(vfio_dma_mapping_test) 126 124 { 125 + iova_allocator_cleanup(self->iova_allocator); 127 126 vfio_pci_device_cleanup(self->device); 128 127 } 129 128 ··· 147 142 else 148 143 ASSERT_NE(region.vaddr, MAP_FAILED); 149 144 150 - region.iova = (u64)region.vaddr; 145 + region.iova = iova_allocator_alloc(self->iova_allocator, size); 151 146 region.size = size; 152 147 153 148 vfio_pci_dma_map(self->device, &region); ··· 224 219 FIXTURE_SETUP(vfio_dma_map_limit_test) 225 220 { 226 221 struct vfio_dma_region *region = &self->region; 222 + struct iommu_iova_range *ranges; 227 223 u64 region_size = getpagesize(); 224 + iova_t last_iova; 225 + u32 nranges; 228 226 229 227 /* 230 228 * Over-allocate mmap by double the size to provide enough backing vaddr ··· 240 232 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); 241 233 ASSERT_NE(region->vaddr, MAP_FAILED); 242 234 243 - /* One page prior to the end of address space */ 244 - region->iova = ~(iova_t)0 & ~(region_size - 1); 235 + ranges = vfio_pci_iova_ranges(self->device, &nranges); 236 + VFIO_ASSERT_NOT_NULL(ranges); 237 + last_iova = ranges[nranges - 1].last; 238 + free(ranges); 239 + 240 + /* One page prior to the last iova */ 241 + region->iova = last_iova & ~(region_size - 1); 245 242 region->size = region_size; 246 243 } 247 244 ··· 289 276 struct vfio_dma_region *region = &self->region; 290 277 int rc; 291 278 279 + region->iova = ~(iova_t)0 & ~(region->size - 1); 292 280 region->size = self->mmap_size; 293 281 294 282 rc = __vfio_pci_dma_map(self->device, region);
+8 -4
tools/testing/selftests/vfio/vfio_pci_driver_test.c
··· 19 19 } while (0) 20 20 21 21 static void region_setup(struct vfio_pci_device *device, 22 + struct iova_allocator *iova_allocator, 22 23 struct vfio_dma_region *region, u64 size) 23 24 { 24 25 const int flags = MAP_SHARED | MAP_ANONYMOUS; ··· 30 29 VFIO_ASSERT_NE(vaddr, MAP_FAILED); 31 30 32 31 region->vaddr = vaddr; 33 - region->iova = (u64)vaddr; 32 + region->iova = iova_allocator_alloc(iova_allocator, size); 34 33 region->size = size; 35 34 36 35 vfio_pci_dma_map(device, region); ··· 45 44 46 45 FIXTURE(vfio_pci_driver_test) { 47 46 struct vfio_pci_device *device; 47 + struct iova_allocator *iova_allocator; 48 48 struct vfio_dma_region memcpy_region; 49 49 void *vaddr; 50 50 int msi_fd; ··· 74 72 struct vfio_pci_driver *driver; 75 73 76 74 self->device = vfio_pci_device_init(device_bdf, variant->iommu_mode); 75 + self->iova_allocator = iova_allocator_init(self->device); 77 76 78 77 driver = &self->device->driver; 79 78 80 - region_setup(self->device, &self->memcpy_region, SZ_1G); 81 - region_setup(self->device, &driver->region, SZ_2M); 79 + region_setup(self->device, self->iova_allocator, &self->memcpy_region, SZ_1G); 80 + region_setup(self->device, self->iova_allocator, &driver->region, SZ_2M); 82 81 83 82 /* Any IOVA that doesn't overlap memcpy_region and driver->region. */ 84 - self->unmapped_iova = 8UL * SZ_1G; 83 + self->unmapped_iova = iova_allocator_alloc(self->iova_allocator, SZ_1G); 85 84 86 85 vfio_pci_driver_init(self->device); 87 86 self->msi_fd = self->device->msi_eventfds[driver->msi]; ··· 111 108 region_teardown(self->device, &self->memcpy_region); 112 109 region_teardown(self->device, &driver->region); 113 110 111 + iova_allocator_cleanup(self->iova_allocator); 114 112 vfio_pci_device_cleanup(self->device); 115 113 } 116 114