Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Adjacent changes:

net/mptcp/protocol.h
63740448a32e ("mptcp: fix accept vs worker race")
2a6a870e44dd ("mptcp: stops worker on unaccepted sockets at listener close")
ddb1a072f858 ("mptcp: move first subflow allocation at mpc access time")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+1670 -917
+4
.mailmap
···
 John Crispin <john@phrozen.org> <blogic@openwrt.org>
 John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
 John Stultz <johnstul@us.ibm.com>
+<jon.toppins+linux@gmail.com> <jtoppins@cumulusnetworks.com>
+<jon.toppins+linux@gmail.com> <jtoppins@redhat.com>
 Jordan Crouse <jordan@cosmicpenguin.net> <jcrouse@codeaurora.org>
 <josh@joshtriplett.org> <josh@freedesktop.org>
 <josh@joshtriplett.org> <josh@kernel.org>
···
 Martin Kepplinger <martink@posteo.de> <martin.kepplinger@theobroma-systems.com>
 Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com> <martyna.szapar-mudlaw@intel.com>
 Mathieu Othacehe <m.othacehe@gmail.com>
+Mat Martineau <martineau@kernel.org> <mathew.j.martineau@linux.intel.com>
+Mat Martineau <martineau@kernel.org> <mathewm@codeaurora.org>
 Matthew Wilcox <willy@infradead.org> <matthew.r.wilcox@intel.com>
 Matthew Wilcox <willy@infradead.org> <matthew@wil.cx>
 Matthew Wilcox <willy@infradead.org> <mawilcox@linuxonhyperv.com>
+1
Documentation/admin-guide/kernel-parameters.rst
···
 	KVM	Kernel Virtual Machine support is enabled.
 	LIBATA	Libata driver is enabled
 	LP	Printer support is enabled.
+	LOONGARCH LoongArch architecture is enabled.
 	LOOP	Loopback device support is enabled.
 	M68k	M68k architecture is enabled.
 These options have more detailed description inside of
+6
Documentation/admin-guide/kernel-parameters.txt
···
 			When enabled, memory and cache locality will be
 			impacted.
 
+	writecombine=	[LOONGARCH] Control the MAT (Memory Access Type) of
+			ioremap_wc().
+
+			on  - Enable writecombine, use WUC for ioremap_wc()
+			off - Disable writecombine, use SUC for ioremap_wc()
+
 	x2apic_phys	[X86-64,APIC] Use x2apic physical mode instead of
 			default x2apic cluster mode on platforms
 			supporting x2apic.
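As a usage sketch (not part of the patch): the new parameter is passed on the kernel command line like any other boot option. The GRUB file and variable below are common distro defaults, assumed here only for illustration:

```shell
# Hypothetical /etc/default/grub fragment: force ioremap_wc() to use SUC.
GRUB_CMDLINE_LINUX="writecombine=off"

# After regenerating the GRUB config and rebooting, confirm the running
# kernel actually received the parameter:
grep -o 'writecombine=[a-z]*' /proc/cmdline
```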
+4
Documentation/kbuild/llvm.rst
···
 Getting LLVM
 -------------
 
+We provide prebuilt stable versions of LLVM on `kernel.org <https://kernel.org/pub/tools/llvm/>`_.
+Below are links that may be useful for building LLVM from source or procuring
+it through a distribution's package manager.
+
 - https://releases.llvm.org/download.html
 - https://github.com/llvm/llvm-project
 - https://llvm.org/docs/GettingStarted.html
+15
Documentation/networking/devlink/ice.rst
···
 This document describes the devlink features implemented by the ``ice``
 device driver.
 
+Parameters
+==========
+
+.. list-table:: Generic parameters implemented
+
+   * - Name
+     - Mode
+     - Notes
+   * - ``enable_roce``
+     - runtime
+     - mutually exclusive with ``enable_iwarp``
+   * - ``enable_iwarp``
+     - runtime
+     - mutually exclusive with ``enable_roce``
+
 Info versions
 =============
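As a usage sketch, these runtime parameters are toggled with the standard devlink CLI; the PCI bus address below is a placeholder, not taken from this document:

```shell
# Enable RoCE on an ice PF (placeholder bus address). Because the two
# parameters are mutually exclusive, disable iWARP first.
devlink dev param set pci/0000:3b:00.0 name enable_iwarp value false cmode runtime
devlink dev param set pci/0000:3b:00.0 name enable_roce value true cmode runtime

# Inspect the current value:
devlink dev param show pci/0000:3b:00.0 name enable_roce
```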
+3 -3
Documentation/riscv/vm-layout.rst
···
 | Kernel-space virtual memory, shared between all processes:
 ____________________________________________________________|___________________________________________________________
                   |            |                  |         |
-ffffffc6fee00000 | -228 GB | ffffffc6feffffff |  2 MB | fixmap
+ffffffc6fea00000 | -228 GB | ffffffc6feffffff |  6 MB | fixmap
 ffffffc6ff000000 | -228 GB | ffffffc6ffffffff | 16 MB | PCI io
 ffffffc700000000 | -228 GB | ffffffc7ffffffff |  4 GB | vmemmap
 ffffffc800000000 | -224 GB | ffffffd7ffffffff | 64 GB | vmalloc/ioremap space
···
 | Kernel-space virtual memory, shared between all processes:
 ____________________________________________________________|___________________________________________________________
                   |            |                  |         |
-ffff8d7ffee00000 | -114.5 TB | ffff8d7ffeffffff |  2 MB | fixmap
+ffff8d7ffea00000 | -114.5 TB | ffff8d7ffeffffff |  6 MB | fixmap
 ffff8d7fff000000 | -114.5 TB | ffff8d7fffffffff | 16 MB | PCI io
 ffff8d8000000000 | -114.5 TB | ffff8f7fffffffff |  2 TB | vmemmap
 ffff8f8000000000 | -112.5 TB | ffffaf7fffffffff | 32 TB | vmalloc/ioremap space
···
 | Kernel-space virtual memory, shared between all processes:
 ____________________________________________________________|___________________________________________________________
                   |            |                  |         |
-ff1bfffffee00000 |  -57 PB | ff1bfffffeffffff |  2 MB | fixmap
+ff1bfffffea00000 |  -57 PB | ff1bfffffeffffff |  6 MB | fixmap
 ff1bffffff000000 |  -57 PB | ff1bffffffffffff | 16 MB | PCI io
 ff1c000000000000 |  -57 PB | ff1fffffffffffff |  1 PB | vmemmap
 ff20000000000000 |  -56 PB | ff5fffffffffffff | 16 PB | vmalloc/ioremap space
+1 -1
Documentation/rust/arch-support.rst
···
 ============ ================ ==============================================
 Architecture Level of support Constraints
 ============ ================ ==============================================
-``x86``      Maintained       ``x86_64`` only.
 ``um``       Maintained       ``x86_64`` only.
+``x86``      Maintained       ``x86_64`` only.
 ============ ================ ==============================================
+1 -1
Documentation/sound/hd-audio/models.rst
···
 no-jd
     BIOS setup but without jack-detection
 intel
-    Intel DG45* mobos
+    Intel D*45* mobos
 dell-m6-amic
     Dell desktops/laptops with analog mics
 dell-m6-dmic
+1
MAINTAINERS
···
 
 NETWORKING [MPTCP]
 M:	Matthieu Baerts <matthieu.baerts@tessares.net>
+M:	Mat Martineau <martineau@kernel.org>
 L:	netdev@vger.kernel.org
 L:	mptcp@lists.linux.dev
 S:	Maintained
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 3
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Hurr durr I'ma ninja sloth
 
 # *DOCUMENTATION*
+3 -9
arch/arm/boot/dts/imx6ull-colibri.dtsi
···
 	self-powered;
 	type = "micro";
 
-	ports {
-		#address-cells = <1>;
-		#size-cells = <0>;
-
-		port@0 {
-			reg = <0>;
-			usb_dr_connector: endpoint {
-				remote-endpoint = <&usb1_drd_sw>;
-			};
+	port {
+		usb_dr_connector: endpoint {
+			remote-endpoint = <&usb1_drd_sw>;
 		};
 	};
 };
-2
arch/arm/boot/dts/imx7d-remarkable2.dts
···
 	reg = <0x62>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_epdpmic>;
-	#address-cells = <1>;
-	#size-cells = <0>;
 	#thermal-sensor-cells = <0>;
 	epd-pwr-good-gpios = <&gpio6 21 GPIO_ACTIVE_HIGH>;
 
+1 -1
arch/arm/boot/dts/rk3288.dtsi
···
 	status = "disabled";
 };
 
-spdif: sound@ff88b0000 {
+spdif: sound@ff8b0000 {
 	compatible = "rockchip,rk3288-spdif", "rockchip,rk3066-spdif";
 	reg = <0x0 0xff8b0000 0x0 0x10000>;
 	#sound-dai-cells = <0>;
+1 -1
arch/arm/configs/imx_v6_v7_defconfig
···
 CONFIG_RFKILL_INPUT=y
 CONFIG_PCI=y
 CONFIG_PCI_MSI=y
-CONFIG_PCI_IMX6=y
+CONFIG_PCI_IMX6_HOST=y
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 # CONFIG_STANDALONE is not set
+7 -8
arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
···
 
 dmc: bus@38000 {
 	compatible = "simple-bus";
-	reg = <0x0 0x38000 0x0 0x400>;
 	#address-cells = <2>;
 	#size-cells = <2>;
-	ranges = <0x0 0x0 0x0 0x38000 0x0 0x400>;
+	ranges = <0x0 0x0 0x0 0x38000 0x0 0x2000>;
 
 	canvas: video-lut@48 {
 		compatible = "amlogic,canvas";
 		reg = <0x0 0x48 0x0 0x14>;
+	};
+
+	pmu: pmu@80 {
+		reg = <0x0 0x80 0x0 0x40>,
+		      <0x0 0xc00 0x0 0x40>;
+		interrupts = <GIC_SPI 52 IRQ_TYPE_EDGE_RISING>;
 	};
 };
···
 		};
 	};
 };
-};
-
-pmu: pmu@ff638000 {
-	reg = <0x0 0xff638000 0x0 0x100>,
-	      <0x0 0xff638c00 0x0 0x100>;
-	interrupts = <GIC_SPI 52 IRQ_TYPE_EDGE_RISING>;
 };
 
 aobus: bus@ff800000 {
+1 -1
arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi
···
 	rohm,reset-snvs-powered;
 
 	#clock-cells = <0>;
-	clocks = <&osc_32k 0>;
+	clocks = <&osc_32k>;
 	clock-output-names = "clk-32k-out";
 
 	regulators {
+2 -2
arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
···
 	compatible = "regulator-fixed";
 	enable-active-high;
 	gpio = <&gpio2 20 GPIO_ACTIVE_HIGH>; /* PMIC_EN_ETH */
-	off-on-delay = <500000>;
+	off-on-delay-us = <500000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_reg_eth>;
 	regulator-always-on;
···
 	enable-active-high;
 	/* Verdin SD_1_PWR_EN (SODIMM 76) */
 	gpio = <&gpio3 5 GPIO_ACTIVE_HIGH>;
-	off-on-delay = <100000>;
+	off-on-delay-us = <100000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_usdhc2_pwr_en>;
 	regulator-max-microvolt = <3300000>;
+1 -1
arch/arm64/boot/dts/freescale/imx8mp-verdin-dev.dtsi
···
 	compatible = "regulator-fixed";
 	enable-active-high;
 	gpio = <&gpio_expander_21 4 GPIO_ACTIVE_HIGH>; /* ETH_PWR_EN */
-	off-on-delay = <500000>;
+	off-on-delay-us = <500000>;
 	regulator-max-microvolt = <3300000>;
 	regulator-min-microvolt = <3300000>;
 	regulator-name = "+V3.3_ETH";
+2 -2
arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
···
 	compatible = "regulator-fixed";
 	enable-active-high;
 	gpio = <&gpio2 20 GPIO_ACTIVE_HIGH>; /* PMIC_EN_ETH */
-	off-on-delay = <500000>;
+	off-on-delay-us = <500000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_reg_eth>;
 	regulator-always-on;
···
 	enable-active-high;
 	/* Verdin SD_1_PWR_EN (SODIMM 76) */
 	gpio = <&gpio4 22 GPIO_ACTIVE_HIGH>;
-	off-on-delay = <100000>;
+	off-on-delay-us = <100000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_usdhc2_pwr_en>;
 	regulator-max-microvolt = <3300000>;
+1 -1
arch/arm64/boot/dts/freescale/imx8mp.dtsi
···
 
 lcdif2: display-controller@32e90000 {
 	compatible = "fsl,imx8mp-lcdif";
-	reg = <0x32e90000 0x238>;
+	reg = <0x32e90000 0x10000>;
 	interrupts = <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
 	clocks = <&clk IMX8MP_CLK_MEDIA_DISP2_PIX_ROOT>,
 		 <&clk IMX8MP_CLK_MEDIA_APB_ROOT>,
+2 -2
arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
···
 	perst-gpios = <&tlmm 58 GPIO_ACTIVE_LOW>;
 };
 
-&pcie_phy0 {
+&pcie_qmp0 {
 	status = "okay";
 };
 
-&pcie_phy1 {
+&pcie_qmp1 {
 	status = "okay";
 };
 
+2 -2
arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi
···
 	perst-gpios = <&tlmm 61 GPIO_ACTIVE_LOW>;
 };
 
-&pcie_phy0 {
+&pcie_qmp0 {
 	status = "okay";
 };
 
-&pcie_phy1 {
+&pcie_qmp1 {
 	status = "okay";
 };
 
+2 -2
arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
···
 left_spkr: speaker@0,3 {
 	compatible = "sdw10217211000";
 	reg = <0 3>;
-	powerdown-gpios = <&tlmm 130 GPIO_ACTIVE_HIGH>;
+	powerdown-gpios = <&tlmm 130 GPIO_ACTIVE_LOW>;
 	#thermal-sensor-cells = <0>;
 	sound-name-prefix = "SpkrLeft";
 	#sound-dai-cells = <0>;
···
 right_spkr: speaker@0,4 {
 	compatible = "sdw10217211000";
 	reg = <0 4>;
-	powerdown-gpios = <&tlmm 130 GPIO_ACTIVE_HIGH>;
+	powerdown-gpios = <&tlmm 130 GPIO_ACTIVE_LOW>;
 	#thermal-sensor-cells = <0>;
 	sound-name-prefix = "SpkrRight";
 	#sound-dai-cells = <0>;
+1 -1
arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
···
 
 &mdss_dp_out {
 	data-lanes = <0 1>;
-	link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
+	link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000>;
 };
 
 &mdss_mdp {
+3 -2
arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi
···
 #size-cells = <0>;
 
 pmk8280_pon: pon@1300 {
-	compatible = "qcom,pm8998-pon";
-	reg = <0x1300>;
+	compatible = "qcom,pmk8350-pon";
+	reg = <0x1300>, <0x800>;
+	reg-names = "hlos", "pbs";
 
 	pmk8280_pon_pwrkey: pwrkey {
 		compatible = "qcom,pmk8350-pwrkey";
+2 -2
arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
···
 left_spkr: speaker@0,3 {
 	compatible = "sdw10217211000";
 	reg = <0 3>;
-	powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_HIGH>;
+	powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_LOW>;
 	#thermal-sensor-cells = <0>;
 	sound-name-prefix = "SpkrLeft";
 	#sound-dai-cells = <0>;
···
 
 right_spkr: speaker@0,4 {
 	compatible = "sdw10217211000";
-	powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
+	powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_LOW>;
 	reg = <0 4>;
 	#thermal-sensor-cells = <0>;
 	sound-name-prefix = "SpkrRight";
+2 -2
arch/arm64/boot/dts/qcom/sdm850-samsung-w737.dts
···
 left_spkr: speaker@0,3 {
 	compatible = "sdw10217211000";
 	reg = <0 3>;
-	powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_HIGH>;
+	powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_LOW>;
 	#thermal-sensor-cells = <0>;
 	sound-name-prefix = "SpkrLeft";
 	#sound-dai-cells = <0>;
···
 
 right_spkr: speaker@0,4 {
 	compatible = "sdw10217211000";
-	powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
+	powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_LOW>;
 	reg = <0 4>;
 	#thermal-sensor-cells = <0>;
 	sound-name-prefix = "SpkrRight";
+2 -2
arch/arm64/boot/dts/qcom/sm8250-mtp.dts
···
 left_spkr: speaker@0,3 {
 	compatible = "sdw10217211000";
 	reg = <0 3>;
-	powerdown-gpios = <&tlmm 26 GPIO_ACTIVE_HIGH>;
+	powerdown-gpios = <&tlmm 26 GPIO_ACTIVE_LOW>;
 	#thermal-sensor-cells = <0>;
 	sound-name-prefix = "SpkrLeft";
 	#sound-dai-cells = <0>;
···
 right_spkr: speaker@0,4 {
 	compatible = "sdw10217211000";
 	reg = <0 4>;
-	powerdown-gpios = <&tlmm 127 GPIO_ACTIVE_HIGH>;
+	powerdown-gpios = <&tlmm 127 GPIO_ACTIVE_LOW>;
 	#thermal-sensor-cells = <0>;
 	sound-name-prefix = "SpkrRight";
 	#sound-dai-cells = <0>;
+2
arch/arm64/boot/dts/rockchip/rk3326-anbernic-rg351m.dts
···
 
 &internal_display {
 	compatible = "elida,kd35t133";
+	iovcc-supply = <&vcc_lcd>;
+	vdd-supply = <&vcc_lcd>;
 };
 
 &pwm0 {
-2
arch/arm64/boot/dts/rockchip/rk3326-odroid-go.dtsi
···
 internal_display: panel@0 {
 	reg = <0>;
 	backlight = <&backlight>;
-	iovcc-supply = <&vcc_lcd>;
 	reset-gpios = <&gpio3 RK_PC0 GPIO_ACTIVE_LOW>;
 	rotation = <270>;
-	vdd-supply = <&vcc_lcd>;
 
 	port {
 		mipi_in_panel: endpoint {
+2
arch/arm64/boot/dts/rockchip/rk3326-odroid-go2-v11.dts
···
 
 &internal_display {
 	compatible = "elida,kd35t133";
+	iovcc-supply = <&vcc_lcd>;
+	vdd-supply = <&vcc_lcd>;
 };
 
 &rk817 {
+2
arch/arm64/boot/dts/rockchip/rk3326-odroid-go2.dts
···
 
 &internal_display {
 	compatible = "elida,kd35t133";
+	iovcc-supply = <&vcc_lcd>;
+	vdd-supply = <&vcc_lcd>;
 };
 
 &rk817_charger {
-1
arch/arm64/boot/dts/rockchip/rk3368-evb.dtsi
···
 	pinctrl-names = "default";
 	pinctrl-0 = <&bl_en>;
 	pwms = <&pwm0 0 1000000 PWM_POLARITY_INVERTED>;
-	pwm-delay-us = <10000>;
 };
 
 emmc_pwrseq: emmc-pwrseq {
-1
arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi
···
 	power-supply = <&pp3300_disp>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&bl_en>;
-	pwm-delay-us = <10000>;
 };
 
 gpio_keys: gpio-keys {
-1
arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi
···
 	pinctrl-names = "default";
 	pinctrl-0 = <&bl_en>;
 	pwms = <&pwm1 0 1000000 0>;
-	pwm-delay-us = <10000>;
 };
 
 dmic: dmic {
+4 -14
arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
···
 	pinctrl-0 = <&panel_en_pin>;
 	power-supply = <&vcc3v3_panel>;
 
-	ports {
-		#address-cells = <1>;
-		#size-cells = <0>;
-
-		port@0 {
-			reg = <0>;
-			#address-cells = <1>;
-			#size-cells = <0>;
-
-			panel_in_edp: endpoint@0 {
-				reg = <0>;
-				remote-endpoint = <&edp_out_panel>;
-			};
+	port {
+		panel_in_edp: endpoint {
+			remote-endpoint = <&edp_out_panel>;
 		};
 	};
 };
···
 	disable-wp;
 	pinctrl-names = "default";
 	pinctrl-0 = <&sdmmc_clk &sdmmc_cmd &sdmmc_bus4>;
-	sd-uhs-sdr104;
+	sd-uhs-sdr50;
 	vmmc-supply = <&vcc3v0_sd>;
 	vqmmc-supply = <&vcc_sdio>;
 	status = "okay";
+3 -9
arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dtsi
···
 	avdd-supply = <&avdd>;
 	backlight = <&backlight>;
 	dvdd-supply = <&vcc3v3_s0>;
-	ports {
-		#address-cells = <1>;
-		#size-cells = <0>;
 
-		port@0 {
-			reg = <0>;
-
-			mipi_in_panel: endpoint {
-				remote-endpoint = <&mipi_out_panel>;
-			};
+	port {
+		mipi_in_panel: endpoint {
+			remote-endpoint = <&mipi_out_panel>;
 		};
 	};
 };
+1 -1
arch/arm64/boot/dts/rockchip/rk3399.dtsi
···
       <0x0 0xfff10000 0 0x10000>, /* GICH */
       <0x0 0xfff20000 0 0x10000>; /* GICV */
 interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH 0>;
-its: interrupt-controller@fee20000 {
+its: msi-controller@fee20000 {
 	compatible = "arm,gic-v3-its";
 	msi-controller;
 	#msi-cells = <1>;
+4 -2
arch/arm64/boot/dts/rockchip/rk3566-anbernic-rg353x.dtsi
···
 };
 
 &cru {
-	assigned-clocks = <&cru PLL_GPLL>, <&pmucru PLL_PPLL>, <&cru PLL_VPLL>;
-	assigned-clock-rates = <1200000000>, <200000000>, <241500000>;
+	assigned-clocks = <&pmucru CLK_RTC_32K>, <&cru PLL_GPLL>,
+			  <&pmucru PLL_PPLL>, <&cru PLL_VPLL>;
+	assigned-clock-rates = <32768>, <1200000000>,
+			       <200000000>, <241500000>;
 };
 
 &gpio_keys_control {
+4 -2
arch/arm64/boot/dts/rockchip/rk3566-anbernic-rg503.dts
···
 };
 
 &cru {
-	assigned-clocks = <&cru PLL_GPLL>, <&pmucru PLL_PPLL>, <&cru PLL_VPLL>;
-	assigned-clock-rates = <1200000000>, <200000000>, <500000000>;
+	assigned-clocks = <&pmucru CLK_RTC_32K>, <&cru PLL_GPLL>,
+			  <&pmucru PLL_PPLL>, <&cru PLL_VPLL>;
+	assigned-clock-rates = <32768>, <1200000000>,
+			       <200000000>, <500000000>;
 };
 
 &dsi_dphy0 {
+1 -1
arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi
···
 	non-removable;
 	pinctrl-names = "default";
 	pinctrl-0 = <&sdmmc1_bus4 &sdmmc1_cmd &sdmmc1_clk>;
-	sd-uhs-sdr104;
+	sd-uhs-sdr50;
 	vmmc-supply = <&vcc3v3_sys>;
 	vqmmc-supply = <&vcc_1v8>;
 	status = "okay";
+9
arch/arm64/boot/dts/rockchip/rk3588s.dtsi
···
 	cache-size = <131072>;
 	cache-line-size = <64>;
 	cache-sets = <512>;
+	cache-level = <2>;
 	next-level-cache = <&l3_cache>;
···
 	cache-size = <131072>;
 	cache-line-size = <64>;
 	cache-sets = <512>;
+	cache-level = <2>;
 	next-level-cache = <&l3_cache>;
···
 	cache-size = <131072>;
 	cache-line-size = <64>;
 	cache-sets = <512>;
+	cache-level = <2>;
 	next-level-cache = <&l3_cache>;
···
 	cache-size = <131072>;
 	cache-line-size = <64>;
 	cache-sets = <512>;
+	cache-level = <2>;
 	next-level-cache = <&l3_cache>;
···
 	cache-size = <524288>;
 	cache-line-size = <64>;
 	cache-sets = <1024>;
+	cache-level = <2>;
 	next-level-cache = <&l3_cache>;
···
 	cache-size = <524288>;
 	cache-line-size = <64>;
 	cache-sets = <1024>;
+	cache-level = <2>;
 	next-level-cache = <&l3_cache>;
···
 	cache-size = <524288>;
 	cache-line-size = <64>;
 	cache-sets = <1024>;
+	cache-level = <2>;
 	next-level-cache = <&l3_cache>;
···
 	cache-size = <524288>;
 	cache-line-size = <64>;
 	cache-sets = <1024>;
+	cache-level = <2>;
 	next-level-cache = <&l3_cache>;
···
 	cache-size = <3145728>;
 	cache-line-size = <64>;
 	cache-sets = <4096>;
+	cache-level = <3>;
 };
 
+16
arch/loongarch/Kconfig
···
 	  protection support. However, you can enable LoongArch DMW-based
 	  ioremap() for better performance.
 
+config ARCH_WRITECOMBINE
+	bool "Enable WriteCombine (WUC) for ioremap()"
+	help
+	  LoongArch maintains cache coherency in hardware, but when paired
+	  with LS7A chipsets the WUC attribute (Weak-ordered UnCached, which
+	  is similar to WriteCombine) is out of the scope of cache coherency
+	  machanism for PCIe devices (this is a PCIe protocol violation, which
+	  may be fixed in newer chipsets).
+
+	  This means WUC can only used for write-only memory regions now, so
+	  this option is disabled by default, making WUC silently fallback to
+	  SUC for ioremap(). You can enable this option if the kernel is ensured
+	  to run on hardware without this bug.
+
+	  You can override this setting via writecombine=on/off boot parameter.
+
 config ARCH_STRICT_ALIGN
 	bool "Enable -mstrict-align to prevent unaligned accesses" if EXPERT
 	default y
+3
arch/loongarch/include/asm/acpi.h
···
 
 static inline unsigned long acpi_get_wakeup_address(void)
 {
+#ifdef CONFIG_SUSPEND
 	extern void loongarch_wakeup_start(void);
 	return (unsigned long)loongarch_wakeup_start;
+#endif
+	return 0UL;
 }
 
 #endif /* _ASM_LOONGARCH_ACPI_H */
+2 -2
arch/loongarch/include/asm/addrspace.h
···
 #define _ATYPE32_	int
 #define _ATYPE64_	__s64
 #ifdef CONFIG_64BIT
-#define _CONST64_(x)	x ## L
+#define _CONST64_(x)	x ## UL
 #else
-#define _CONST64_(x)	x ## LL
+#define _CONST64_(x)	x ## ULL
 #endif
 #endif
 
-1
arch/loongarch/include/asm/bootinfo.h
···
 extern void init_environ(void);
 extern void memblock_init(void);
 extern void platform_init(void);
-extern void plat_swiotlb_setup(void);
 extern int __init init_numa_memory(void);
 
 struct loongson_board_info {
+1
arch/loongarch/include/asm/cpu-features.h
···
 #define cpu_has_fpu		cpu_opt(LOONGARCH_CPU_FPU)
 #define cpu_has_lsx		cpu_opt(LOONGARCH_CPU_LSX)
 #define cpu_has_lasx		cpu_opt(LOONGARCH_CPU_LASX)
+#define cpu_has_crc32		cpu_opt(LOONGARCH_CPU_CRC32)
 #define cpu_has_complex		cpu_opt(LOONGARCH_CPU_COMPLEX)
 #define cpu_has_crypto		cpu_opt(LOONGARCH_CPU_CRYPTO)
 #define cpu_has_lvz		cpu_opt(LOONGARCH_CPU_LVZ)
+21 -19
arch/loongarch/include/asm/cpu.h
···
 #define CPU_FEATURE_FPU			3	/* CPU has FPU */
 #define CPU_FEATURE_LSX			4	/* CPU has LSX (128-bit SIMD) */
 #define CPU_FEATURE_LASX		5	/* CPU has LASX (256-bit SIMD) */
-#define CPU_FEATURE_COMPLEX		6	/* CPU has Complex instructions */
-#define CPU_FEATURE_CRYPTO		7	/* CPU has Crypto instructions */
-#define CPU_FEATURE_LVZ			8	/* CPU has Virtualization extension */
-#define CPU_FEATURE_LBT_X86		9	/* CPU has X86 Binary Translation */
-#define CPU_FEATURE_LBT_ARM		10	/* CPU has ARM Binary Translation */
-#define CPU_FEATURE_LBT_MIPS		11	/* CPU has MIPS Binary Translation */
-#define CPU_FEATURE_TLB			12	/* CPU has TLB */
-#define CPU_FEATURE_CSR			13	/* CPU has CSR */
-#define CPU_FEATURE_WATCH		14	/* CPU has watchpoint registers */
-#define CPU_FEATURE_VINT		15	/* CPU has vectored interrupts */
-#define CPU_FEATURE_CSRIPI		16	/* CPU has CSR-IPI */
-#define CPU_FEATURE_EXTIOI		17	/* CPU has EXT-IOI */
-#define CPU_FEATURE_PREFETCH		18	/* CPU has prefetch instructions */
-#define CPU_FEATURE_PMP			19	/* CPU has perfermance counter */
-#define CPU_FEATURE_SCALEFREQ		20	/* CPU supports cpufreq scaling */
-#define CPU_FEATURE_FLATMODE		21	/* CPU has flat mode */
-#define CPU_FEATURE_EIODECODE		22	/* CPU has EXTIOI interrupt pin decode mode */
-#define CPU_FEATURE_GUESTID		23	/* CPU has GuestID feature */
-#define CPU_FEATURE_HYPERVISOR		24	/* CPU has hypervisor (running in VM) */
+#define CPU_FEATURE_CRC32		6	/* CPU has CRC32 instructions */
+#define CPU_FEATURE_COMPLEX		7	/* CPU has Complex instructions */
+#define CPU_FEATURE_CRYPTO		8	/* CPU has Crypto instructions */
+#define CPU_FEATURE_LVZ			9	/* CPU has Virtualization extension */
+#define CPU_FEATURE_LBT_X86		10	/* CPU has X86 Binary Translation */
+#define CPU_FEATURE_LBT_ARM		11	/* CPU has ARM Binary Translation */
+#define CPU_FEATURE_LBT_MIPS		12	/* CPU has MIPS Binary Translation */
+#define CPU_FEATURE_TLB			13	/* CPU has TLB */
+#define CPU_FEATURE_CSR			14	/* CPU has CSR */
+#define CPU_FEATURE_WATCH		15	/* CPU has watchpoint registers */
+#define CPU_FEATURE_VINT		16	/* CPU has vectored interrupts */
+#define CPU_FEATURE_CSRIPI		17	/* CPU has CSR-IPI */
+#define CPU_FEATURE_EXTIOI		18	/* CPU has EXT-IOI */
+#define CPU_FEATURE_PREFETCH		19	/* CPU has prefetch instructions */
+#define CPU_FEATURE_PMP			20	/* CPU has perfermance counter */
+#define CPU_FEATURE_SCALEFREQ		21	/* CPU supports cpufreq scaling */
+#define CPU_FEATURE_FLATMODE		22	/* CPU has flat mode */
+#define CPU_FEATURE_EIODECODE		23	/* CPU has EXTIOI interrupt pin decode mode */
+#define CPU_FEATURE_GUESTID		24	/* CPU has GuestID feature */
+#define CPU_FEATURE_HYPERVISOR		25	/* CPU has hypervisor (running in VM) */
 
 #define LOONGARCH_CPU_CPUCFG		BIT_ULL(CPU_FEATURE_CPUCFG)
 #define LOONGARCH_CPU_LAM		BIT_ULL(CPU_FEATURE_LAM)
···
 #define LOONGARCH_CPU_FPU		BIT_ULL(CPU_FEATURE_FPU)
 #define LOONGARCH_CPU_LSX		BIT_ULL(CPU_FEATURE_LSX)
 #define LOONGARCH_CPU_LASX		BIT_ULL(CPU_FEATURE_LASX)
+#define LOONGARCH_CPU_CRC32		BIT_ULL(CPU_FEATURE_CRC32)
 #define LOONGARCH_CPU_COMPLEX		BIT_ULL(CPU_FEATURE_COMPLEX)
 #define LOONGARCH_CPU_CRYPTO		BIT_ULL(CPU_FEATURE_CRYPTO)
 #define LOONGARCH_CPU_LVZ		BIT_ULL(CPU_FEATURE_LVZ)
+3 -1
arch/loongarch/include/asm/io.h
···
  * @offset:	bus address of the memory
  * @size:	size of the resource to map
  */
+extern pgprot_t pgprot_wc;
+
 #define ioremap_wc(offset, size)	\
-	ioremap_prot((offset), (size), pgprot_val(PAGE_KERNEL_WUC))
+	ioremap_prot((offset), (size), pgprot_val(pgprot_wc))
 
 #define ioremap_cache(offset, size)	\
 	ioremap_prot((offset), (size), pgprot_val(PAGE_KERNEL))
+3 -3
arch/loongarch/include/asm/loongarch.h
···
 #define  CPUCFG1_EP			BIT(22)
 #define  CPUCFG1_RPLV			BIT(23)
 #define  CPUCFG1_HUGEPG			BIT(24)
-#define  CPUCFG1_IOCSRBRD		BIT(25)
+#define  CPUCFG1_CRC32			BIT(25)
 #define  CPUCFG1_MSGINT			BIT(26)
 
 #define LOONGARCH_CPUCFG2		0x2
···
 #define  CSR_ASID_ASID_WIDTH		10
 #define  CSR_ASID_ASID			(_ULCAST_(0x3ff) << CSR_ASID_ASID_SHIFT)
 
-#define LOONGARCH_CSR_PGDL		0x19	/* Page table base address when VA[47] = 0 */
+#define LOONGARCH_CSR_PGDL		0x19	/* Page table base address when VA[VALEN-1] = 0 */
 
-#define LOONGARCH_CSR_PGDH		0x1a	/* Page table base address when VA[47] = 1 */
+#define LOONGARCH_CSR_PGDH		0x1a	/* Page table base address when VA[VALEN-1] = 1 */
 
 #define LOONGARCH_CSR_PGD		0x1b	/* Page table base */
 
+4 -4
arch/loongarch/include/asm/module.lds.h
···
 /* Copyright (C) 2020-2022 Loongson Technology Corporation Limited */
 SECTIONS {
 	. = ALIGN(4);
-	.got : { BYTE(0) }
-	.plt : { BYTE(0) }
-	.plt.idx : { BYTE(0) }
-	.ftrace_trampoline : { BYTE(0) }
+	.got 0 : { BYTE(0) }
+	.plt 0 : { BYTE(0) }
+	.plt.idx 0 : { BYTE(0) }
+	.ftrace_trampoline 0 : { BYTE(0) }
 }
+2 -1
arch/loongarch/include/uapi/asm/ptrace.h
···
 };
 
 struct user_watch_state {
-	uint16_t dbg_info;
+	uint64_t dbg_info;
 	struct {
 		uint64_t addr;
 		uint64_t mask;
 		uint32_t ctrl;
+		uint32_t pad;
 	} dbg_regs[8];
 };
 
+7 -2
arch/loongarch/kernel/cpu-probe.c
···
 
 /* MAP BASE */
 unsigned long vm_map_base;
-EXPORT_SYMBOL_GPL(vm_map_base);
+EXPORT_SYMBOL(vm_map_base);
 
 static void cpu_probe_addrbits(struct cpuinfo_loongarch *c)
 {
···
 	c->options = LOONGARCH_CPU_CPUCFG | LOONGARCH_CPU_CSR |
 		     LOONGARCH_CPU_TLB | LOONGARCH_CPU_VINT | LOONGARCH_CPU_WATCH;
 
-	elf_hwcap = HWCAP_LOONGARCH_CPUCFG | HWCAP_LOONGARCH_CRC32;
+	elf_hwcap = HWCAP_LOONGARCH_CPUCFG;
 
 	config = read_cpucfg(LOONGARCH_CPUCFG1);
 	if (config & CPUCFG1_UAL) {
 		c->options |= LOONGARCH_CPU_UAL;
 		elf_hwcap |= HWCAP_LOONGARCH_UAL;
 	}
+	if (config & CPUCFG1_CRC32) {
+		c->options |= LOONGARCH_CPU_CRC32;
+		elf_hwcap |= HWCAP_LOONGARCH_CRC32;
+	}
+
 
 	config = read_cpucfg(LOONGARCH_CPUCFG2);
 	if (config & CPUCFG2_LAM) {
+1
arch/loongarch/kernel/proc.c
···
 	if (cpu_has_fpu)	seq_printf(m, " fpu");
 	if (cpu_has_lsx)	seq_printf(m, " lsx");
 	if (cpu_has_lasx)	seq_printf(m, " lasx");
+	if (cpu_has_crc32)	seq_printf(m, " crc32");
 	if (cpu_has_complex)	seq_printf(m, " complex");
 	if (cpu_has_crypto)	seq_printf(m, " crypto");
 	if (cpu_has_lvz)	seq_printf(m, " lvz");
+16 -9
arch/loongarch/kernel/ptrace.c
··· 391 391 return 0; 392 392 } 393 393 394 - static int ptrace_hbp_get_resource_info(unsigned int note_type, u16 *info) 394 + static int ptrace_hbp_get_resource_info(unsigned int note_type, u64 *info) 395 395 { 396 396 u8 num; 397 - u16 reg = 0; 397 + u64 reg = 0; 398 398 399 399 switch (note_type) { 400 400 case NT_LOONGARCH_HW_BREAK: ··· 524 524 return modify_user_hw_breakpoint(bp, &attr); 525 525 } 526 526 527 - #define PTRACE_HBP_CTRL_SZ sizeof(u32) 528 527 #define PTRACE_HBP_ADDR_SZ sizeof(u64) 529 528 #define PTRACE_HBP_MASK_SZ sizeof(u64) 529 + #define PTRACE_HBP_CTRL_SZ sizeof(u32) 530 + #define PTRACE_HBP_PAD_SZ sizeof(u32) 530 531 531 532 static int hw_break_get(struct task_struct *target, 532 533 const struct user_regset *regset, 533 534 struct membuf to) 534 535 { 535 - u16 info; 536 + u64 info; 536 537 u32 ctrl; 537 538 u64 addr, mask; 538 539 int ret, idx = 0; ··· 546 545 547 546 membuf_write(&to, &info, sizeof(info)); 548 547 549 - /* (address, ctrl) registers */ 548 + /* (address, mask, ctrl) registers */ 550 549 while (to.left) { 551 550 ret = ptrace_hbp_get_addr(note_type, target, idx, &addr); 552 551 if (ret) ··· 563 562 membuf_store(&to, addr); 564 563 membuf_store(&to, mask); 565 564 membuf_store(&to, ctrl); 565 + membuf_zero(&to, sizeof(u32)); 566 566 idx++; 567 567 } 568 568 ··· 584 582 offset = offsetof(struct user_watch_state, dbg_regs); 585 583 user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf, 0, offset); 586 584 587 - /* (address, ctrl) registers */ 585 + /* (address, mask, ctrl) registers */ 588 586 limit = regset->n * regset->size; 589 587 while (count && offset < limit) { 590 588 if (count < PTRACE_HBP_ADDR_SZ) ··· 604 602 break; 605 603 606 604 ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &mask, 607 - offset, offset + PTRACE_HBP_ADDR_SZ); 605 + offset, offset + PTRACE_HBP_MASK_SZ); 608 606 if (ret) 609 607 return ret; 610 608 ··· 613 611 return ret; 614 612 offset += PTRACE_HBP_MASK_SZ; 615 613 616 - ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &mask, 617 - offset, offset + PTRACE_HBP_MASK_SZ); 614 + ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &ctrl, 615 + offset, offset + PTRACE_HBP_CTRL_SZ); 618 616 if (ret) 619 617 return ret; 620 618 ··· 622 620 if (ret) 623 621 return ret; 624 622 offset += PTRACE_HBP_CTRL_SZ; 623 + 624 + user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf, 625 + offset, offset + PTRACE_HBP_PAD_SZ); 626 + offset += PTRACE_HBP_PAD_SZ; 627 + 625 628 idx++; 626 629 } 627 630
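The ptrace hunk above widens the resource-info word to u64 and pads each (addr, mask, ctrl) tuple with an explicit u32, so every debug-register slot keeps a u64-aligned 24-byte stride. A user-space sketch of that layout (the struct name is hypothetical; the sizes come from the #defines in the hunk):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical mirror of one debug-register slot as exported via the
 * NT_LOONGARCH_HW_BREAK/WATCH regsets: the explicit pad after ctrl keeps
 * the stride a multiple of 8, so the next slot's u64 addr stays aligned. */
struct hbp_slot {
	uint64_t addr;
	uint64_t mask;
	uint32_t ctrl;
	uint32_t pad;	/* zero-filled, mirrored by membuf_zero() above */
};

#define PTRACE_HBP_ADDR_SZ sizeof(uint64_t)
#define PTRACE_HBP_MASK_SZ sizeof(uint64_t)
#define PTRACE_HBP_CTRL_SZ sizeof(uint32_t)
#define PTRACE_HBP_PAD_SZ  sizeof(uint32_t)

/* Per-slot stride consumed by the copy-in loop in the hunk. */
static size_t hbp_slot_stride(void)
{
	return PTRACE_HBP_ADDR_SZ + PTRACE_HBP_MASK_SZ +
	       PTRACE_HBP_CTRL_SZ + PTRACE_HBP_PAD_SZ;
}
```

Without the pad, the 4-byte ctrl word would misalign the following slot's u64 fields, which is exactly what the membuf_zero()/copyin_ignore pair avoids.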
+23 -2
arch/loongarch/kernel/setup.c
··· 160 160 dmi_walk(find_tokens, NULL); 161 161 } 162 162 163 + #ifdef CONFIG_ARCH_WRITECOMBINE 164 + pgprot_t pgprot_wc = PAGE_KERNEL_WUC; 165 + #else 166 + pgprot_t pgprot_wc = PAGE_KERNEL_SUC; 167 + #endif 168 + 169 + EXPORT_SYMBOL(pgprot_wc); 170 + 171 + static int __init setup_writecombine(char *p) 172 + { 173 + if (!strcmp(p, "on")) 174 + pgprot_wc = PAGE_KERNEL_WUC; 175 + else if (!strcmp(p, "off")) 176 + pgprot_wc = PAGE_KERNEL_SUC; 177 + else 178 + pr_warn("Unknown writecombine setting \"%s\".\n", p); 179 + 180 + return 0; 181 + } 182 + early_param("writecombine", setup_writecombine); 183 + 163 184 static int usermem __initdata; 164 185 165 186 static int __init early_parse_mem(char *p) ··· 389 368 /* 390 369 * In order to reduce the possibility of kernel panic when failed to 391 370 * get IO TLB memory under CONFIG_SWIOTLB, it is better to allocate 392 - * low memory as small as possible before plat_swiotlb_setup(), so 393 - * make sparse_init() using top-down allocation. 371 + * low memory as small as possible before swiotlb_init(), so make 372 + * sparse_init() using top-down allocation. 394 373 */ 395 374 memblock_set_bottom_up(false); 396 375 sparse_init();
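The setup.c hunk adds a `writecombine=` early parameter toggling between the WUC and SUC page protections. A stand-alone sketch of the same on/off parsing (the enum values are placeholders for the real pgprot_t constants, and the unknown-value return code is a local convention, not the kernel's):

```c
#include <string.h>

/* Placeholder protections standing in for PAGE_KERNEL_WUC / PAGE_KERNEL_SUC. */
enum wc_prot { PROT_SUC, PROT_WUC };

/* Default mirrors the !CONFIG_ARCH_WRITECOMBINE case above. */
static enum wc_prot pgprot_wc = PROT_SUC;

/* Parse "writecombine=on|off"; unknown values leave the current setting
 * untouched (the kernel handler warns instead of failing). */
static int setup_writecombine(const char *p)
{
	if (!strcmp(p, "on"))
		pgprot_wc = PROT_WUC;
	else if (!strcmp(p, "off"))
		pgprot_wc = PROT_SUC;
	else
		return -1;	/* local convention: flag the unknown value */
	return 0;
}

static enum wc_prot current_wc_prot(void)
{
	return pgprot_wc;
}
```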
+1 -1
arch/loongarch/kernel/stacktrace.c
··· 30 30 31 31 regs->regs[1] = 0; 32 32 for (unwind_start(&state, task, regs); 33 - !unwind_done(&state); unwind_next_frame(&state)) { 33 + !unwind_done(&state) && !unwind_error(&state); unwind_next_frame(&state)) { 34 34 addr = unwind_get_return_address(&state); 35 35 if (!addr || !consume_entry(cookie, addr)) 36 36 break;
+1
arch/loongarch/kernel/unwind.c
··· 28 28 29 29 } while (!get_stack_info(state->sp, state->task, info)); 30 30 31 + state->error = true; 31 32 return false; 32 33 }
+3 -1
arch/loongarch/kernel/unwind_prologue.c
··· 211 211 pc = regs->csr_era; 212 212 213 213 if (user_mode(regs) || !__kernel_text_address(pc)) 214 - return false; 214 + goto out; 215 215 216 216 state->first = true; 217 217 state->pc = pc; ··· 226 226 227 227 } while (!get_stack_info(state->sp, state->task, info)); 228 228 229 + out: 230 + state->error = true; 229 231 return false; 230 232 } 231 233
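The unwind changes above give the walker an explicit error flag: a failed step sets `state->error` before returning, and stack_trace_save_regs() now stops on `!unwind_done() && !unwind_error()`, distinguishing a clean end of stack from a broken frame chain. A toy model of that pattern (all names illustrative, not the kernel API):

```c
#include <stdbool.h>

/* Toy unwinder state: a failed step both terminates the walk and marks
 * the state, so callers can tell "end of stack" from "bad frame". */
struct unwind_state {
	int depth;	/* frames left before a clean stop */
	bool stopped;
	bool error;
};

static bool unwind_done(struct unwind_state *s)  { return s->stopped; }
static bool unwind_error(struct unwind_state *s) { return s->error; }

/* frame_ok models whether the next frame could be decoded. */
static bool unwind_next_frame(struct unwind_state *s, bool frame_ok)
{
	if (!frame_ok) {
		s->error = true;	/* mirror of "state->error = true" */
		s->stopped = true;
		return false;
	}
	if (--s->depth == 0)
		s->stopped = true;	/* clean termination */
	return !s->stopped;
}
```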
+2 -2
arch/loongarch/mm/init.c
··· 41 41 * don't have to care about aliases on other CPUs. 42 42 */ 43 43 unsigned long empty_zero_page, zero_page_mask; 44 - EXPORT_SYMBOL_GPL(empty_zero_page); 44 + EXPORT_SYMBOL(empty_zero_page); 45 45 EXPORT_SYMBOL(zero_page_mask); 46 46 47 47 void setup_zero_pages(void) ··· 270 270 #endif 271 271 #ifndef __PAGETABLE_PMD_FOLDED 272 272 pmd_t invalid_pmd_table[PTRS_PER_PMD] __page_aligned_bss; 273 - EXPORT_SYMBOL_GPL(invalid_pmd_table); 273 + EXPORT_SYMBOL(invalid_pmd_table); 274 274 #endif 275 275 pte_t invalid_pte_table[PTRS_PER_PTE] __page_aligned_bss; 276 276 EXPORT_SYMBOL(invalid_pte_table);
+4
arch/loongarch/power/suspend_asm.S
··· 80 80 81 81 JUMP_VIRT_ADDR t0, t1 82 82 83 + /* Enable PG */ 84 + li.w t0, 0xb0 # PLV=0, IE=0, PG=1 85 + csrwr t0, LOONGARCH_CSR_CRMD 86 + 83 87 la.pcrel t0, acpi_saved_sp 84 88 ld.d sp, t0, 0 85 89 SETUP_WAKEUP
+1
arch/powerpc/mm/numa.c
··· 366 366 WARN(numa_distance_table[nid][nid] == -1, 367 367 "NUMA distance details for node %d not provided\n", nid); 368 368 } 369 + EXPORT_SYMBOL_GPL(update_numa_distance); 369 370 370 371 /* 371 372 * ibm,numa-lookup-index-table= {N, domainid1, domainid2, ..... domainidN}
+7
arch/powerpc/platforms/pseries/papr_scm.c
··· 1428 1428 return -ENODEV; 1429 1429 } 1430 1430 1431 + /* 1432 + * open firmware platform device create won't update the NUMA 1433 + * distance table. For PAPR SCM devices we use numa_map_to_online_node() 1434 + * to find the nearest online NUMA node and that requires correct 1435 + * distance table information. 1436 + */ 1437 + update_numa_distance(dn); 1431 1438 1432 1439 p = kzalloc(sizeof(*p), GFP_KERNEL); 1433 1440 if (!p)
-1
arch/riscv/boot/dts/canaan/k210.dtsi
··· 259 259 <&sysclk K210_CLK_APB0>; 260 260 clock-names = "ssi_clk", "pclk"; 261 261 resets = <&sysrst K210_RST_SPI2>; 262 - spi-max-frequency = <25000000>; 263 262 }; 264 263 265 264 i2s0: i2s@50250000 {
+8
arch/riscv/include/asm/fixmap.h
··· 22 22 */ 23 23 enum fixed_addresses { 24 24 FIX_HOLE, 25 + /* 26 + * The fdt fixmap mapping must be PMD aligned and will be mapped 27 + * using PMD entries in fixmap_pmd in 64-bit and a PGD entry in 32-bit. 28 + */ 29 + FIX_FDT_END, 30 + FIX_FDT = FIX_FDT_END + FIX_FDT_SIZE / PAGE_SIZE - 1, 31 + 32 + /* Below fixmaps will be mapped using fixmap_pte */ 25 33 FIX_PTE, 26 34 FIX_PMD, 27 35 FIX_PUD,
+6 -2
arch/riscv/include/asm/pgtable.h
··· 87 87 88 88 #define FIXADDR_TOP PCI_IO_START 89 89 #ifdef CONFIG_64BIT 90 - #define FIXADDR_SIZE PMD_SIZE 90 + #define MAX_FDT_SIZE PMD_SIZE 91 + #define FIX_FDT_SIZE (MAX_FDT_SIZE + SZ_2M) 92 + #define FIXADDR_SIZE (PMD_SIZE + FIX_FDT_SIZE) 91 93 #else 92 - #define FIXADDR_SIZE PGDIR_SIZE 94 + #define MAX_FDT_SIZE PGDIR_SIZE 95 + #define FIX_FDT_SIZE MAX_FDT_SIZE 96 + #define FIXADDR_SIZE (PGDIR_SIZE + FIX_FDT_SIZE) 93 97 #endif 94 98 #define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE) 95 99
+1 -5
arch/riscv/kernel/setup.c
··· 278 278 #if IS_ENABLED(CONFIG_BUILTIN_DTB) 279 279 unflatten_and_copy_device_tree(); 280 280 #else 281 - if (early_init_dt_verify(__va(XIP_FIXUP(dtb_early_pa)))) 282 - unflatten_device_tree(); 283 - else 284 - pr_err("No DTB found in kernel mappings\n"); 281 + unflatten_device_tree(); 285 282 #endif 286 - early_init_fdt_scan_reserved_mem(); 287 283 misc_mem_init(); 288 284 289 285 init_resources();
+8 -1
arch/riscv/kernel/signal.c
··· 19 19 #include <asm/signal32.h> 20 20 #include <asm/switch_to.h> 21 21 #include <asm/csr.h> 22 + #include <asm/cacheflush.h> 22 23 23 24 extern u32 __user_rt_sigreturn[2]; 24 25 ··· 182 181 { 183 182 struct rt_sigframe __user *frame; 184 183 long err = 0; 184 + unsigned long __maybe_unused addr; 185 185 186 186 frame = get_sigframe(ksig, regs, sizeof(*frame)); 187 187 if (!access_ok(frame, sizeof(*frame))) ··· 211 209 if (copy_to_user(&frame->sigreturn_code, __user_rt_sigreturn, 212 210 sizeof(frame->sigreturn_code))) 213 211 return -EFAULT; 214 - regs->ra = (unsigned long)&frame->sigreturn_code; 212 + 213 + addr = (unsigned long)&frame->sigreturn_code; 214 + /* Make sure the two instructions are pushed to icache. */ 215 + flush_icache_range(addr, addr + sizeof(frame->sigreturn_code)); 216 + 217 + regs->ra = addr; 215 218 #endif /* CONFIG_MMU */ 216 219 217 220 /*
+36 -46
arch/riscv/mm/init.c
··· 57 57 EXPORT_SYMBOL(empty_zero_page); 58 58 59 59 extern char _start[]; 60 - #define DTB_EARLY_BASE_VA PGDIR_SIZE 61 60 void *_dtb_early_va __initdata; 62 61 uintptr_t _dtb_early_pa __initdata; 63 62 ··· 235 236 set_max_mapnr(max_low_pfn - ARCH_PFN_OFFSET); 236 237 237 238 reserve_initrd_mem(); 239 + 240 + /* 241 + * No allocation should be done before reserving the memory as defined 242 + * in the device tree, otherwise the allocation could end up in a 243 + * reserved region. 244 + */ 245 + early_init_fdt_scan_reserved_mem(); 246 + 238 247 /* 239 248 * If DTB is built in, no need to reserve its memblock. 240 249 * Otherwise, do reserve it but avoid using 241 250 * early_init_fdt_reserve_self() since __pa() does 242 251 * not work for DTB pointers that are fixmap addresses 243 252 */ 244 - if (!IS_ENABLED(CONFIG_BUILTIN_DTB)) { 245 - /* 246 - * In case the DTB is not located in a memory region we won't 247 - * be able to locate it later on via the linear mapping and 248 - * get a segfault when accessing it via __va(dtb_early_pa). 249 - * To avoid this situation copy DTB to a memory region. 250 - * Note that memblock_phys_alloc will also reserve DTB region. 
251 - */ 252 - if (!memblock_is_memory(dtb_early_pa)) { 253 - size_t fdt_size = fdt_totalsize(dtb_early_va); 254 - phys_addr_t new_dtb_early_pa = memblock_phys_alloc(fdt_size, PAGE_SIZE); 255 - void *new_dtb_early_va = early_memremap(new_dtb_early_pa, fdt_size); 256 - 257 - memcpy(new_dtb_early_va, dtb_early_va, fdt_size); 258 - early_memunmap(new_dtb_early_va, fdt_size); 259 - _dtb_early_pa = new_dtb_early_pa; 260 - } else 261 - memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va)); 262 - } 253 + if (!IS_ENABLED(CONFIG_BUILTIN_DTB)) 254 + memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va)); 263 255 264 256 dma_contiguous_reserve(dma32_phys_limit); 265 257 if (IS_ENABLED(CONFIG_64BIT)) ··· 269 279 static pte_t fixmap_pte[PTRS_PER_PTE] __page_aligned_bss; 270 280 271 281 pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE); 272 - static p4d_t __maybe_unused early_dtb_p4d[PTRS_PER_P4D] __initdata __aligned(PAGE_SIZE); 273 - static pud_t __maybe_unused early_dtb_pud[PTRS_PER_PUD] __initdata __aligned(PAGE_SIZE); 274 - static pmd_t __maybe_unused early_dtb_pmd[PTRS_PER_PMD] __initdata __aligned(PAGE_SIZE); 275 282 276 283 #ifdef CONFIG_XIP_KERNEL 277 284 #define pt_ops (*(struct pt_alloc_ops *)XIP_FIXUP(&pt_ops)) ··· 613 626 #define trampoline_pgd_next (pgtable_l5_enabled ? \ 614 627 (uintptr_t)trampoline_p4d : (pgtable_l4_enabled ? \ 615 628 (uintptr_t)trampoline_pud : (uintptr_t)trampoline_pmd)) 616 - #define early_dtb_pgd_next (pgtable_l5_enabled ? \ 617 - (uintptr_t)early_dtb_p4d : (pgtable_l4_enabled ? \ 618 - (uintptr_t)early_dtb_pud : (uintptr_t)early_dtb_pmd)) 619 629 #else 620 630 #define pgd_next_t pte_t 621 631 #define alloc_pgd_next(__va) pt_ops.alloc_pte(__va) ··· 620 636 #define create_pgd_next_mapping(__nextp, __va, __pa, __sz, __prot) \ 621 637 create_pte_mapping(__nextp, __va, __pa, __sz, __prot) 622 638 #define fixmap_pgd_next ((uintptr_t)fixmap_pte) 623 - #define early_dtb_pgd_next ((uintptr_t)early_dtb_pmd) 624 639 #define create_p4d_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while(0) 625 640 #define create_pud_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while(0) 626 641 #define create_pmd_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while(0) ··· 843 860 * this means 2 PMD entries whereas for 32-bit kernel, this is only 1 PGDIR 844 861 * entry. 845 862 */ 846 - static void __init create_fdt_early_page_table(pgd_t *pgdir, uintptr_t dtb_pa) 863 + static void __init create_fdt_early_page_table(pgd_t *pgdir, 864 + uintptr_t fix_fdt_va, 865 + uintptr_t dtb_pa) 847 866 { 848 - #ifndef CONFIG_BUILTIN_DTB 849 867 uintptr_t pa = dtb_pa & ~(PMD_SIZE - 1); 850 868 851 - create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA, 852 - IS_ENABLED(CONFIG_64BIT) ? early_dtb_pgd_next : pa, 853 - PGDIR_SIZE, 854 - IS_ENABLED(CONFIG_64BIT) ? PAGE_TABLE : PAGE_KERNEL); 869 + #ifndef CONFIG_BUILTIN_DTB 870 + /* Make sure the fdt fixmap address is always aligned on PMD size */ 871 + BUILD_BUG_ON(FIX_FDT % (PMD_SIZE / PAGE_SIZE)); 855 872 856 - if (pgtable_l5_enabled) 857 - create_p4d_mapping(early_dtb_p4d, DTB_EARLY_BASE_VA, 858 - (uintptr_t)early_dtb_pud, P4D_SIZE, PAGE_TABLE); 859 - 860 - if (pgtable_l4_enabled) 861 - create_pud_mapping(early_dtb_pud, DTB_EARLY_BASE_VA, 862 - (uintptr_t)early_dtb_pmd, PUD_SIZE, PAGE_TABLE); 863 - 864 - if (IS_ENABLED(CONFIG_64BIT)) { 865 - create_pmd_mapping(early_dtb_pmd, DTB_EARLY_BASE_VA, 873 + /* In 32-bit only, the fdt lies in its own PGD */ 874 + if (!IS_ENABLED(CONFIG_64BIT)) { 875 + create_pgd_mapping(early_pg_dir, fix_fdt_va, 876 + pa, MAX_FDT_SIZE, PAGE_KERNEL); 877 + } else { 878 + create_pmd_mapping(fixmap_pmd, fix_fdt_va, 866 879 pa, PMD_SIZE, PAGE_KERNEL); 867 880 create_pmd_mapping(early_dtb_pmd, DTB_EARLY_BASE_VA + PMD_SIZE, 880 + create_pmd_mapping(fixmap_pmd, fix_fdt_va + PMD_SIZE, 868 881 pa + PMD_SIZE, PMD_SIZE, PAGE_KERNEL); 869 882 } 870 883 871 - dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PMD_SIZE - 1)); 884 + dtb_early_va = (void *)fix_fdt_va + (dtb_pa & (PMD_SIZE - 1)); 872 885 #else 873 886 /* 874 887 * For 64-bit kernel, __va can't be used since it would return a linear ··· 1034 1055 create_kernel_page_table(early_pg_dir, true); 1035 1056 1036 1057 /* Setup early mapping for FDT early scan */ 1037 - create_fdt_early_page_table(early_pg_dir, dtb_pa); 1058 + create_fdt_early_page_table(early_pg_dir, 1059 + __fix_to_virt(FIX_FDT), dtb_pa); 1038 1060 1039 1061 /* 1040 1062 * Bootime fixmap only can handle PMD_SIZE mapping. Thus, boot-ioremap ··· 1077 1097 u64 i; 1078 1098 1079 1099 /* Setup swapper PGD for fixmap */ 1100 + #if !defined(CONFIG_64BIT) 1101 + /* 1102 + * In 32-bit, the device tree lies in a pgd entry, so it must be copied 1103 + * directly in swapper_pg_dir in addition to the pgd entry that points 1104 + * to fixmap_pte. 
1105 + */ 1106 + unsigned long idx = pgd_index(__fix_to_virt(FIX_FDT)); 1107 + 1108 + set_pgd(&swapper_pg_dir[idx], early_pg_dir[idx]); 1109 + #endif 1080 1110 create_pgd_mapping(swapper_pg_dir, FIXADDR_START, 1081 1111 __pa_symbol(fixmap_pgd_next), 1082 1112 PGDIR_SIZE, PAGE_TABLE);
+1 -6
arch/riscv/purgatory/Makefile
··· 84 84 CFLAGS_REMOVE_ctype.o += $(PURGATORY_CFLAGS_REMOVE) 85 85 CFLAGS_ctype.o += $(PURGATORY_CFLAGS) 86 86 87 - AFLAGS_REMOVE_entry.o += -Wa,-gdwarf-2 88 - AFLAGS_REMOVE_memcpy.o += -Wa,-gdwarf-2 89 - AFLAGS_REMOVE_memset.o += -Wa,-gdwarf-2 90 - AFLAGS_REMOVE_strcmp.o += -Wa,-gdwarf-2 91 - AFLAGS_REMOVE_strlen.o += -Wa,-gdwarf-2 92 - AFLAGS_REMOVE_strncmp.o += -Wa,-gdwarf-2 87 + asflags-remove-y += $(foreach x, -g -gdwarf-4 -gdwarf-5, $(x) -Wa,$(x)) 93 88 94 89 $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE 95 90 $(call if_changed,ld)
+8 -3
arch/s390/net/bpf_jit_comp.c
··· 539 539 { 540 540 memcpy(plt, bpf_plt, BPF_PLT_SIZE); 541 541 *(void **)((char *)plt + (bpf_plt_ret - bpf_plt)) = ret; 542 - *(void **)((char *)plt + (bpf_plt_target - bpf_plt)) = target; 542 + *(void **)((char *)plt + (bpf_plt_target - bpf_plt)) = target ?: ret; 543 543 } 544 544 545 545 /* ··· 2010 2010 } __packed insn; 2011 2011 char expected_plt[BPF_PLT_SIZE]; 2012 2012 char current_plt[BPF_PLT_SIZE]; 2013 + char new_plt[BPF_PLT_SIZE]; 2013 2014 char *plt; 2015 + char *ret; 2014 2016 int err; 2015 2017 2016 2018 /* Verify the branch to be patched. */ ··· 2034 2032 err = copy_from_kernel_nofault(current_plt, plt, BPF_PLT_SIZE); 2035 2033 if (err < 0) 2036 2034 return err; 2037 - bpf_jit_plt(expected_plt, (char *)ip + 6, old_addr); 2035 + ret = (char *)ip + 6; 2036 + bpf_jit_plt(expected_plt, ret, old_addr); 2038 2037 if (memcmp(current_plt, expected_plt, BPF_PLT_SIZE)) 2039 2038 return -EINVAL; 2040 2039 /* Adjust the call address. */ 2040 + bpf_jit_plt(new_plt, ret, new_addr); 2041 2041 s390_kernel_write(plt + (bpf_plt_target - bpf_plt), 2042 - &new_addr, sizeof(void *)); 2042 + new_plt + (bpf_plt_target - bpf_plt), 2043 + sizeof(void *)); 2043 2044 } 2044 2045 2045 2046 /* Adjust the mask of the branch. */
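In the s390 BPF hunk, retargeting a trampoline now rebuilds a complete PLT image with `bpf_jit_plt()` and copies only the target slot from it, and a NULL target falls back to the ret address so the PLT never branches to NULL. A simplified model of that patch sequence, with made-up sizes and offsets standing in for the real template:

```c
#include <stdint.h>
#include <string.h>

#define PLT_SIZE 32	/* stand-in for BPF_PLT_SIZE */
#define RET_OFF  16	/* stand-in for (bpf_plt_ret - bpf_plt) */
#define TGT_OFF  24	/* stand-in for (bpf_plt_target - bpf_plt) */

/* Sketch of bpf_jit_plt() after the fix: a NULL target falls back to
 * ret (the "target ?: ret" in the hunk). */
static void build_plt(char *plt, const char *tmpl, void *ret, void *target)
{
	memcpy(plt, tmpl, PLT_SIZE);
	*(void **)(plt + RET_OFF) = ret;
	*(void **)(plt + TGT_OFF) = target ? target : ret;
}

/* Patch only the target slot, sourcing the bytes from a freshly built
 * PLT (mirrors the s390_kernel_write() call in the hunk). */
static void patch_plt_target(char *plt, const char *new_plt)
{
	memcpy(plt + TGT_OFF, new_plt + TGT_OFF, sizeof(void *));
}
```

Building `new_plt` in full and copying one slot keeps the write identical to what a fresh JIT would have emitted, instead of hand-writing a raw pointer into live code.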
+2 -2
arch/x86/kernel/x86_init.c
··· 33 33 static void iommu_shutdown_noop(void) { } 34 34 bool __init bool_x86_init_noop(void) { return false; } 35 35 void x86_op_int_noop(int cpu) { } 36 - static __init int set_rtc_noop(const struct timespec64 *now) { return -EINVAL; } 37 - static __init void get_rtc_noop(struct timespec64 *now) { } 36 + static int set_rtc_noop(const struct timespec64 *now) { return -EINVAL; } 37 + static void get_rtc_noop(struct timespec64 *now) { } 38 38 39 39 static __initconst const struct of_device_id of_cmos_match[] = { 40 40 { .compatible = "motorola,mc146818" },
+1 -2
arch/x86/purgatory/Makefile
··· 69 69 CFLAGS_REMOVE_string.o += $(PURGATORY_CFLAGS_REMOVE) 70 70 CFLAGS_string.o += $(PURGATORY_CFLAGS) 71 71 72 - AFLAGS_REMOVE_setup-x86_$(BITS).o += -Wa,-gdwarf-2 73 - AFLAGS_REMOVE_entry64.o += -Wa,-gdwarf-2 72 + asflags-remove-y += $(foreach x, -g -gdwarf-4 -gdwarf-5, $(x) -Wa,$(x)) 74 73 75 74 $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE 76 75 $(call if_changed,ld)
+7
drivers/acpi/resource.c
··· 440 440 }, 441 441 }, 442 442 { 443 + .ident = "Asus ExpertBook B1502CBA", 444 + .matches = { 445 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 446 + DMI_MATCH(DMI_BOARD_NAME, "B1502CBA"), 447 + }, 448 + }, 449 + { 443 450 .ident = "Asus ExpertBook B2402CBA", 444 451 .matches = { 445 452 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+1
drivers/acpi/x86/utils.c
··· 213 213 disk in the system. 214 214 */ 215 215 static const struct x86_cpu_id storage_d3_cpu_ids[] = { 216 + X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 24, NULL), /* Picasso */ 216 217 X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 96, NULL), /* Renoir */ 217 218 X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 104, NULL), /* Lucienne */ 218 219 X86_MATCH_VENDOR_FAM_MODEL(AMD, 25, 80, NULL), /* Cezanne */
+2 -1
drivers/clk/clk-renesas-pcie.c
··· 143 143 static const struct regmap_config rs9_regmap_config = { 144 144 .reg_bits = 8, 145 145 .val_bits = 8, 146 - .cache_type = REGCACHE_NONE, 146 + .cache_type = REGCACHE_FLAT, 147 147 .max_register = RS9_REG_BCP, 148 + .num_reg_defaults_raw = 0x8, 148 149 .rd_table = &rs9_readable_table, 149 150 .wr_table = &rs9_writeable_table, 150 151 .reg_write = rs9_regmap_i2c_write,
+6 -4
drivers/clk/imx/clk-imx6ul.c
··· 95 95 { } 96 96 }; 97 97 98 - static const char * enet1_ref_sels[] = { "enet1_ref_125m", "enet1_ref_pad", }; 98 + static const char * enet1_ref_sels[] = { "enet1_ref_125m", "enet1_ref_pad", "dummy", "dummy"}; 99 99 static const u32 enet1_ref_sels_table[] = { IMX6UL_GPR1_ENET1_TX_CLK_DIR, 100 - IMX6UL_GPR1_ENET1_CLK_SEL }; 100 + IMX6UL_GPR1_ENET1_CLK_SEL, 0, 101 + IMX6UL_GPR1_ENET1_TX_CLK_DIR | IMX6UL_GPR1_ENET1_CLK_SEL }; 101 102 static const u32 enet1_ref_sels_table_mask = IMX6UL_GPR1_ENET1_TX_CLK_DIR | 102 103 IMX6UL_GPR1_ENET1_CLK_SEL; 103 - static const char * enet2_ref_sels[] = { "enet2_ref_125m", "enet2_ref_pad", }; 104 + static const char * enet2_ref_sels[] = { "enet2_ref_125m", "enet2_ref_pad", "dummy", "dummy"}; 104 105 static const u32 enet2_ref_sels_table[] = { IMX6UL_GPR1_ENET2_TX_CLK_DIR, 105 - IMX6UL_GPR1_ENET2_CLK_SEL }; 106 + IMX6UL_GPR1_ENET2_CLK_SEL, 0, 107 + IMX6UL_GPR1_ENET2_TX_CLK_DIR | IMX6UL_GPR1_ENET2_CLK_SEL }; 106 108 static const u32 enet2_ref_sels_table_mask = IMX6UL_GPR1_ENET2_TX_CLK_DIR | 107 109 IMX6UL_GPR1_ENET2_CLK_SEL; 108 110
+6 -3
drivers/clk/sprd/common.c
··· 17 17 .reg_bits = 32, 18 18 .reg_stride = 4, 19 19 .val_bits = 32, 20 - .max_register = 0xffff, 21 20 .fast_io = true, 22 21 }; 23 22 ··· 42 43 struct device *dev = &pdev->dev; 43 44 struct device_node *node = dev->of_node, *np; 44 45 struct regmap *regmap; 46 + struct resource *res; 47 + struct regmap_config reg_config = sprdclk_regmap_config; 45 48 46 49 if (of_find_property(node, "sprd,syscon", NULL)) { 47 50 regmap = syscon_regmap_lookup_by_phandle(node, "sprd,syscon"); ··· 60 59 return PTR_ERR(regmap); 61 60 } 62 61 } else { 63 - base = devm_platform_ioremap_resource(pdev, 0); 62 + base = devm_platform_get_and_ioremap_resource(pdev, 0, &res); 64 63 if (IS_ERR(base)) 65 64 return PTR_ERR(base); 66 65 66 + reg_config.max_register = resource_size(res) - reg_config.reg_stride; 67 + 67 68 regmap = devm_regmap_init_mmio(&pdev->dev, base, 68 - &sprdclk_regmap_config); 69 + &reg_config); 69 70 if (IS_ERR(regmap)) { 70 71 pr_err("failed to init regmap\n"); 71 72 return PTR_ERR(regmap);
+8 -10
drivers/cpufreq/amd-pstate.c
··· 840 840 841 841 switch(mode_idx) { 842 842 case AMD_PSTATE_DISABLE: 843 - if (!current_pstate_driver) 844 - return -EINVAL; 845 - if (cppc_state == AMD_PSTATE_ACTIVE) 846 - return -EBUSY; 847 - cpufreq_unregister_driver(current_pstate_driver); 848 - amd_pstate_driver_cleanup(); 843 + if (current_pstate_driver) { 844 + cpufreq_unregister_driver(current_pstate_driver); 845 + amd_pstate_driver_cleanup(); 846 + } 849 847 break; 850 848 case AMD_PSTATE_PASSIVE: 851 849 if (current_pstate_driver) { 852 850 if (current_pstate_driver == &amd_pstate_driver) 853 851 return 0; 854 852 cpufreq_unregister_driver(current_pstate_driver); 855 - cppc_state = AMD_PSTATE_PASSIVE; 856 - current_pstate_driver = &amd_pstate_driver; 857 853 } 858 854 855 + current_pstate_driver = &amd_pstate_driver; 856 + cppc_state = AMD_PSTATE_PASSIVE; 859 857 ret = cpufreq_register_driver(current_pstate_driver); 860 858 break; 861 859 case AMD_PSTATE_ACTIVE: ··· 861 863 if (current_pstate_driver == &amd_pstate_epp_driver) 862 864 return 0; 863 865 cpufreq_unregister_driver(current_pstate_driver); 864 - current_pstate_driver = &amd_pstate_epp_driver; 865 - cppc_state = AMD_PSTATE_ACTIVE; 866 866 } 867 867 868 + current_pstate_driver = &amd_pstate_epp_driver; 869 + cppc_state = AMD_PSTATE_ACTIVE; 868 870 ret = cpufreq_register_driver(current_pstate_driver); 869 871 break; 870 872 default:
+2 -1
drivers/firmware/psci/psci.c
··· 167 167 168 168 err = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE, suspend_mode, 0, 0); 169 169 if (err < 0) 170 - pr_warn("failed to set %s mode: %d\n", enable ? "OSI" : "PC", err); 170 + pr_info(FW_BUG "failed to set %s mode: %d\n", 171 + enable ? "OSI" : "PC", err); 171 172 return psci_to_linux_errno(err); 172 173 } 173 174
+30 -30
drivers/i2c/busses/i2c-mchp-pci1xxxx.c
··· 48 48 * SR_HOLD_TIME_XK_TICKS field will indicate the number of ticks of the 49 49 * baud clock required to program 'Hold Time' at X KHz. 50 50 */ 51 - #define SR_HOLD_TIME_100K_TICKS 133 52 - #define SR_HOLD_TIME_400K_TICKS 20 53 - #define SR_HOLD_TIME_1000K_TICKS 11 51 + #define SR_HOLD_TIME_100K_TICKS 150 52 + #define SR_HOLD_TIME_400K_TICKS 20 53 + #define SR_HOLD_TIME_1000K_TICKS 12 54 54 55 55 #define SMB_CORE_COMPLETION_REG_OFF3 (SMBUS_MAST_CORE_ADDR_BASE + 0x23) 56 56 ··· 65 65 * the baud clock required to program 'fair idle delay' at X KHz. Fair idle 66 66 * delay establishes the MCTP T(IDLE_DELAY) period. 67 67 */ 68 - #define FAIR_BUS_IDLE_MIN_100K_TICKS 969 69 - #define FAIR_BUS_IDLE_MIN_400K_TICKS 157 70 - #define FAIR_BUS_IDLE_MIN_1000K_TICKS 157 68 + #define FAIR_BUS_IDLE_MIN_100K_TICKS 992 69 + #define FAIR_BUS_IDLE_MIN_400K_TICKS 500 70 + #define FAIR_BUS_IDLE_MIN_1000K_TICKS 500 71 71 72 72 /* 73 73 * FAIR_IDLE_DELAY_XK_TICKS field will indicate the number of ticks of the 74 74 * baud clock required to satisfy the fairness protocol at X KHz. 
75 75 */ 76 - #define FAIR_IDLE_DELAY_100K_TICKS 1000 77 - #define FAIR_IDLE_DELAY_400K_TICKS 500 78 - #define FAIR_IDLE_DELAY_1000K_TICKS 500 76 + #define FAIR_IDLE_DELAY_100K_TICKS 963 77 + #define FAIR_IDLE_DELAY_400K_TICKS 156 78 + #define FAIR_IDLE_DELAY_1000K_TICKS 156 79 79 80 80 #define SMB_IDLE_SCALING_100K \ 81 81 ((FAIR_IDLE_DELAY_100K_TICKS << 16) | FAIR_BUS_IDLE_MIN_100K_TICKS) ··· 105 105 */ 106 106 #define BUS_CLK_100K_LOW_PERIOD_TICKS 156 107 107 #define BUS_CLK_400K_LOW_PERIOD_TICKS 41 108 - #define BUS_CLK_1000K_LOW_PERIOD_TICKS 15 108 + #define BUS_CLK_1000K_LOW_PERIOD_TICKS 15 109 109 110 110 /* 111 111 * BUS_CLK_XK_HIGH_PERIOD_TICKS field defines the number of I2C Baud Clock ··· 131 131 */ 132 132 #define CLK_SYNC_100K 4 133 133 #define CLK_SYNC_400K 4 134 - #define CLK_SYNC_1000K 4 134 + #define CLK_SYNC_1000K 4 135 135 136 136 #define SMB_CORE_DATA_TIMING_REG_OFF (SMBUS_MAST_CORE_ADDR_BASE + 0x40) 137 137 ··· 142 142 * determines the SCLK hold time following SDAT driven low during the first 143 143 * START bit in a transfer. 144 144 */ 145 - #define FIRST_START_HOLD_100K_TICKS 22 146 - #define FIRST_START_HOLD_400K_TICKS 16 147 - #define FIRST_START_HOLD_1000K_TICKS 6 145 + #define FIRST_START_HOLD_100K_TICKS 23 146 + #define FIRST_START_HOLD_400K_TICKS 8 147 + #define FIRST_START_HOLD_1000K_TICKS 12 148 148 149 149 /* 150 150 * STOP_SETUP_XK_TICKS will indicate the number of ticks of the baud clock 151 151 * required to program 'STOP_SETUP' timer at X KHz. This timer determines the 152 152 * SDAT setup time from the rising edge of SCLK for a STOP condition. 153 153 */ 154 - #define STOP_SETUP_100K_TICKS 157 154 + #define STOP_SETUP_100K_TICKS 150 155 155 #define STOP_SETUP_400K_TICKS 20 156 - #define STOP_SETUP_1000K_TICKS 12 156 + #define STOP_SETUP_1000K_TICKS 12 157 157 158 158 /* 159 159 * RESTART_SETUP_XK_TICKS will indicate the number of ticks of the baud clock 160 160 * required to program 'RESTART_SETUP' timer at X KHz. This timer determines the 161 161 * SDAT setup time from the rising edge of SCLK for a repeated START condition. 162 162 */ 163 - #define RESTART_SETUP_100K_TICKS 157 163 + #define RESTART_SETUP_100K_TICKS 156 164 164 #define RESTART_SETUP_400K_TICKS 20 165 165 #define RESTART_SETUP_1000K_TICKS 12 166 166 ··· 169 169 * required to program 'DATA_HOLD' timer at X KHz. This timer determines the 170 170 * SDAT hold time following SCLK driven low. 171 171 */ 172 - #define DATA_HOLD_100K_TICKS 2 172 + #define DATA_HOLD_100K_TICKS 12 173 173 #define DATA_HOLD_400K_TICKS 2 174 174 #define DATA_HOLD_1000K_TICKS 2 175 175 ··· 190 190 * Bus Idle Minimum time = BUS_IDLE_MIN[7:0] x Baud_Clock_Period x 191 191 * (BUS_IDLE_MIN_XK_TICKS[7] ? 4,1) 192 192 */ 193 - #define BUS_IDLE_MIN_100K_TICKS 167UL 194 - #define BUS_IDLE_MIN_400K_TICKS 139UL 195 - #define BUS_IDLE_MIN_1000K_TICKS 133UL 193 + #define BUS_IDLE_MIN_100K_TICKS 36UL 194 + #define BUS_IDLE_MIN_400K_TICKS 10UL 195 + #define BUS_IDLE_MIN_1000K_TICKS 4UL 196 196 197 197 /* 198 198 * CTRL_CUM_TIME_OUT_XK_TICKS defines SMBus Controller Cumulative Time-Out. 199 199 * SMBus Controller Cumulative Time-Out duration = 200 200 * CTRL_CUM_TIME_OUT_XK_TICKS[7:0] x Baud_Clock_Period x 2048 201 201 */ 202 - #define CTRL_CUM_TIME_OUT_100K_TICKS 159 203 - #define CTRL_CUM_TIME_OUT_400K_TICKS 159 204 - #define CTRL_CUM_TIME_OUT_1000K_TICKS 159 202 + #define CTRL_CUM_TIME_OUT_100K_TICKS 76 203 + #define CTRL_CUM_TIME_OUT_400K_TICKS 76 204 + #define CTRL_CUM_TIME_OUT_1000K_TICKS 76 205 205 206 206 /* 207 207 * TARGET_CUM_TIME_OUT_XK_TICKS defines SMBus Target Cumulative Time-Out duration. 
208 208 * SMBus Target Cumulative Time-Out duration = TARGET_CUM_TIME_OUT_XK_TICKS[7:0] x 209 209 * Baud_Clock_Period x 4096 210 210 */ 211 - #define TARGET_CUM_TIME_OUT_100K_TICKS 199 212 - #define TARGET_CUM_TIME_OUT_400K_TICKS 199 213 - #define TARGET_CUM_TIME_OUT_1000K_TICKS 199 211 + #define TARGET_CUM_TIME_OUT_100K_TICKS 95 212 + #define TARGET_CUM_TIME_OUT_400K_TICKS 95 213 + #define TARGET_CUM_TIME_OUT_1000K_TICKS 95 214 214 215 215 /* 216 216 * CLOCK_HIGH_TIME_OUT_XK defines Clock High time out period. 217 217 * Clock High time out period = CLOCK_HIGH_TIME_OUT_XK[7:0] x Baud_Clock_Period x 8 218 218 */ 219 - #define CLOCK_HIGH_TIME_OUT_100K_TICKS 204 220 - #define CLOCK_HIGH_TIME_OUT_400K_TICKS 204 221 - #define CLOCK_HIGH_TIME_OUT_1000K_TICKS 204 219 + #define CLOCK_HIGH_TIME_OUT_100K_TICKS 97 220 + #define CLOCK_HIGH_TIME_OUT_400K_TICKS 97 221 + #define CLOCK_HIGH_TIME_OUT_1000K_TICKS 97 222 222 223 223 #define TO_SCALING_100K \ 224 224 ((BUS_IDLE_MIN_100K_TICKS << 24) | (CTRL_CUM_TIME_OUT_100K_TICKS << 16) | \
+19 -16
drivers/i2c/busses/i2c-ocores.c
··· 342 342 * ocores_isr(), we just add our polling code around it. 343 343 * 344 344 * It can run in atomic context 345 + * 346 + * Return: 0 on success, -ETIMEDOUT on timeout 345 347 */ 346 - static void ocores_process_polling(struct ocores_i2c *i2c) 348 + static int ocores_process_polling(struct ocores_i2c *i2c) 347 349 { 348 - while (1) { 349 - irqreturn_t ret; 350 - int err; 350 + irqreturn_t ret; 351 + int err = 0; 351 352 353 + while (1) { 352 354 err = ocores_poll_wait(i2c); 353 - if (err) { 354 - i2c->state = STATE_ERROR; 355 + if (err) 355 356 break; /* timeout */ 356 - } 357 357 358 358 ret = ocores_isr(-1, i2c); 359 359 if (ret == IRQ_NONE) ··· 364 364 break; 365 365 } 366 366 } 367 + 368 + return err; 367 369 } 368 370 369 371 static int ocores_xfer_core(struct ocores_i2c *i2c, 370 372 struct i2c_msg *msgs, int num, 371 373 bool polling) 372 374 { 373 - int ret; 375 + int ret = 0; 374 376 u8 ctrl; 375 377 376 378 ctrl = oc_getreg(i2c, OCI2C_CONTROL); ··· 390 388 oc_setreg(i2c, OCI2C_CMD, OCI2C_CMD_START); 391 389 392 390 if (polling) { 393 - ocores_process_polling(i2c); 391 + ret = ocores_process_polling(i2c); 394 392 } else { 395 - ret = wait_event_timeout(i2c->wait, 396 - (i2c->state == STATE_ERROR) || 397 - (i2c->state == STATE_DONE), HZ); 398 - if (ret == 0) { 399 - ocores_process_timeout(i2c); 400 - return -ETIMEDOUT; 401 - } 393 + if (wait_event_timeout(i2c->wait, 394 + (i2c->state == STATE_ERROR) || 395 + (i2c->state == STATE_DONE), HZ) == 0) 396 + ret = -ETIMEDOUT; 397 + } 398 + if (ret) { 399 + ocores_process_timeout(i2c); 400 + return ret; 402 401 } 403 402 404 403 return (i2c->state == STATE_DONE) ? num : -EIO;
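The i2c-ocores rework makes the polling path report its own timeout instead of poking `i2c->state`, so the polled and interrupt-driven branches funnel into the same `ocores_process_timeout()` cleanup in the caller. The control flow can be modeled roughly as (the enum and step array are illustrative, not the driver's types):

```c
#ifndef ETIMEDOUT
#define ETIMEDOUT 110	/* local stand-in for the errno constant */
#endif

/* Outcome of one poll iteration: still waiting, transfer finished, or
 * the bus wait expired (ocores_poll_wait() returning an error). */
enum poll_step { STEP_PENDING, STEP_DONE, STEP_TIMEOUT };

/* Mirrors the fixed ocores_process_polling() contract: return 0 on
 * completion, -ETIMEDOUT on timeout, and leave cleanup to the caller. */
static int process_polling(const enum poll_step *steps, int nsteps)
{
	for (int i = 0; i < nsteps; i++) {
		if (steps[i] == STEP_TIMEOUT)
			return -ETIMEDOUT;
		if (steps[i] == STEP_DONE)
			break;
	}
	return 0;
}
```

Returning the status rather than mutating shared state is what lets the caller run one `if (ret) { cleanup; return ret; }` path for both transfer modes.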
+34 -26
drivers/infiniband/core/cma.c
··· 624 624 return id_priv->id.route.addr.src_addr.ss_family; 625 625 } 626 626 627 - static int cma_set_qkey(struct rdma_id_private *id_priv, u32 qkey) 627 + static int cma_set_default_qkey(struct rdma_id_private *id_priv) 628 628 { 629 629 struct ib_sa_mcmember_rec rec; 630 630 int ret = 0; 631 - 632 - if (id_priv->qkey) { 633 - if (qkey && id_priv->qkey != qkey) 634 - return -EINVAL; 635 - return 0; 636 - } 637 - 638 - if (qkey) { 639 - id_priv->qkey = qkey; 640 - return 0; 641 - } 642 631 643 632 switch (id_priv->id.ps) { 644 633 case RDMA_PS_UDP: ··· 646 657 break; 647 658 } 648 659 return ret; 660 + } 661 + 662 + static int cma_set_qkey(struct rdma_id_private *id_priv, u32 qkey) 663 + { 664 + if (!qkey || 665 + (id_priv->qkey && (id_priv->qkey != qkey))) 666 + return -EINVAL; 667 + 668 + id_priv->qkey = qkey; 669 + return 0; 649 670 } 650 671 651 672 static void cma_translate_ib(struct sockaddr_ib *sib, struct rdma_dev_addr *dev_addr) ··· 1228 1229 *qp_attr_mask = IB_QP_STATE | IB_QP_PKEY_INDEX | IB_QP_PORT; 1229 1230 1230 1231 if (id_priv->id.qp_type == IB_QPT_UD) { 1231 - ret = cma_set_qkey(id_priv, 0); 1232 + ret = cma_set_default_qkey(id_priv); 1232 1233 if (ret) 1233 1234 return ret; 1234 1235 ··· 4568 4569 memset(&rep, 0, sizeof rep); 4569 4570 rep.status = status; 4570 4571 if (status == IB_SIDR_SUCCESS) { 4571 - ret = cma_set_qkey(id_priv, qkey); 4572 + if (qkey) 4573 + ret = cma_set_qkey(id_priv, qkey); 4574 + else 4575 + ret = cma_set_default_qkey(id_priv); 4572 4576 if (ret) 4573 4577 return ret; 4574 4578 rep.qp_num = id_priv->qp_num; ··· 4776 4774 enum ib_gid_type gid_type; 4777 4775 struct net_device *ndev; 4778 4776 4779 - if (!status) 4780 - status = cma_set_qkey(id_priv, be32_to_cpu(multicast->rec.qkey)); 4781 - else 4777 + if (status) 4782 4778 pr_debug_ratelimited("RDMA CM: MULTICAST_ERROR: failed to join multicast. 
status %d\n", 4783 4779 status); 4784 4780 ··· 4804 4804 } 4805 4805 4806 4806 event->param.ud.qp_num = 0xFFFFFF; 4807 - event->param.ud.qkey = be32_to_cpu(multicast->rec.qkey); 4807 + event->param.ud.qkey = id_priv->qkey; 4808 4808 4809 4809 out: 4810 4810 if (ndev) ··· 4823 4823 READ_ONCE(id_priv->state) == RDMA_CM_DESTROYING) 4824 4824 goto out; 4825 4825 4826 - cma_make_mc_event(status, id_priv, multicast, &event, mc); 4827 - ret = cma_cm_event_handler(id_priv, &event); 4826 + ret = cma_set_qkey(id_priv, be32_to_cpu(multicast->rec.qkey)); 4827 + if (!ret) { 4828 + cma_make_mc_event(status, id_priv, multicast, &event, mc); 4829 + ret = cma_cm_event_handler(id_priv, &event); 4830 + } 4828 4831 rdma_destroy_ah_attr(&event.param.ud.ah_attr); 4829 4832 WARN_ON(ret); 4830 4833 ··· 4880 4877 if (ret) 4881 4878 return ret; 4882 4879 4883 - ret = cma_set_qkey(id_priv, 0); 4884 - if (ret) 4885 - return ret; 4880 + if (!id_priv->qkey) { 4881 + ret = cma_set_default_qkey(id_priv); 4882 + if (ret) 4883 + return ret; 4884 + } 4886 4885 4887 4886 cma_set_mgid(id_priv, (struct sockaddr *) &mc->addr, &rec.mgid); 4888 4887 rec.qkey = cpu_to_be32(id_priv->qkey); ··· 4961 4956 cma_iboe_set_mgid(addr, &ib.rec.mgid, gid_type); 4962 4957 4963 4958 ib.rec.pkey = cpu_to_be16(0xffff); 4964 - if (id_priv->id.ps == RDMA_PS_UDP) 4965 - ib.rec.qkey = cpu_to_be32(RDMA_UDP_QKEY); 4966 - 4967 4959 if (dev_addr->bound_dev_if) 4968 4960 ndev = dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if); 4969 4961 if (!ndev) ··· 4986 4984 if (err || !ib.rec.mtu) 4987 4985 return err ?: -EINVAL; 4988 4986 4987 + if (!id_priv->qkey) 4988 + cma_set_default_qkey(id_priv); 4989 + 4989 4990 rdma_ip2gid((struct sockaddr *)&id_priv->id.route.addr.src_addr, 4990 4991 &ib.rec.port_gid); 4991 4992 INIT_WORK(&mc->iboe_join.work, cma_iboe_join_work_handler); ··· 5012 5007 /* ULP is calling this wrong. 
*/ 5013 5008 if (!id->device || (READ_ONCE(id_priv->state) != RDMA_CM_ADDR_BOUND && 5014 5009 READ_ONCE(id_priv->state) != RDMA_CM_ADDR_RESOLVED)) 5010 + return -EINVAL; 5011 + 5012 + if (id_priv->id.qp_type != IB_QPT_UD) 5015 5013 return -EINVAL; 5016 5014 5017 5015 mc = kzalloc(sizeof(*mc), GFP_KERNEL);
+2
drivers/infiniband/core/verbs.c
··· 532 532 else 533 533 ret = device->ops.create_ah(ah, &init_attr, NULL); 534 534 if (ret) { 535 + if (ah->sgid_attr) 536 + rdma_put_gid_attr(ah->sgid_attr); 535 537 kfree(ah); 536 538 return ERR_PTR(ret); 537 539 }
+1 -1
drivers/infiniband/hw/erdma/erdma_cq.c
··· 65 65 [ERDMA_OP_LOCAL_INV] = IB_WC_LOCAL_INV, 66 66 [ERDMA_OP_READ_WITH_INV] = IB_WC_RDMA_READ, 67 67 [ERDMA_OP_ATOMIC_CAS] = IB_WC_COMP_SWAP, 68 - [ERDMA_OP_ATOMIC_FAD] = IB_WC_FETCH_ADD, 68 + [ERDMA_OP_ATOMIC_FAA] = IB_WC_FETCH_ADD, 69 69 }; 70 70 71 71 static const struct {
+2 -2
drivers/infiniband/hw/erdma/erdma_hw.h
··· 441 441 }; 442 442 443 443 /* EQ related. */ 444 - #define ERDMA_DEFAULT_EQ_DEPTH 256 444 + #define ERDMA_DEFAULT_EQ_DEPTH 4096 445 445 446 446 /* ceqe */ 447 447 #define ERDMA_CEQE_HDR_DB_MASK BIT_ULL(63) ··· 491 491 ERDMA_OP_LOCAL_INV = 15, 492 492 ERDMA_OP_READ_WITH_INV = 16, 493 493 ERDMA_OP_ATOMIC_CAS = 17, 494 - ERDMA_OP_ATOMIC_FAD = 18, 494 + ERDMA_OP_ATOMIC_FAA = 18, 495 495 ERDMA_NUM_OPCODES = 19, 496 496 ERDMA_OP_INVALID = ERDMA_NUM_OPCODES + 1 497 497 };
+1 -1
drivers/infiniband/hw/erdma/erdma_main.c
··· 56 56 static int erdma_enum_and_get_netdev(struct erdma_dev *dev) 57 57 { 58 58 struct net_device *netdev; 59 - int ret = -ENODEV; 59 + int ret = -EPROBE_DEFER; 60 60 61 61 /* Already binded to a net_device, so we skip. */ 62 62 if (dev->netdev)
+2 -2
drivers/infiniband/hw/erdma/erdma_qp.c
··· 405 405 FIELD_PREP(ERDMA_SQE_MR_MTT_CNT_MASK, 406 406 mr->mem.mtt_nents); 407 407 408 - if (mr->mem.mtt_nents < ERDMA_MAX_INLINE_MTT_ENTRIES) { 408 + if (mr->mem.mtt_nents <= ERDMA_MAX_INLINE_MTT_ENTRIES) { 409 409 attrs |= FIELD_PREP(ERDMA_SQE_MR_MTT_TYPE_MASK, 0); 410 410 /* Copy SGLs to SQE content to accelerate */ 411 411 memcpy(get_queue_entry(qp->kern_qp.sq_buf, idx + 1, ··· 439 439 cpu_to_le64(atomic_wr(send_wr)->compare_add); 440 440 } else { 441 441 wqe_hdr |= FIELD_PREP(ERDMA_SQE_HDR_OPCODE_MASK, 442 - ERDMA_OP_ATOMIC_FAD); 442 + ERDMA_OP_ATOMIC_FAA); 443 443 atomic_sqe->fetchadd_swap_data = 444 444 cpu_to_le64(atomic_wr(send_wr)->compare_add); 445 445 }
+1 -1
drivers/infiniband/hw/erdma/erdma_verbs.h
··· 11 11 12 12 /* RDMA Capability. */ 13 13 #define ERDMA_MAX_PD (128 * 1024) 14 - #define ERDMA_MAX_SEND_WR 4096 14 + #define ERDMA_MAX_SEND_WR 8192 15 15 #define ERDMA_MAX_ORD 128 16 16 #define ERDMA_MAX_IRD 128 17 17 #define ERDMA_MAX_SGE_RD 1
+10 -6
drivers/infiniband/hw/irdma/cm.c
··· 1458 1458 * irdma_find_listener - find a cm node listening on this addr-port pair 1459 1459 * @cm_core: cm's core 1460 1460 * @dst_addr: listener ip addr 1461 + * @ipv4: flag indicating IPv4 when true 1461 1462 * @dst_port: listener tcp port num 1462 1463 * @vlan_id: virtual LAN ID 1463 1464 * @listener_state: state to match with listen node's 1464 1465 */ 1465 1466 static struct irdma_cm_listener * 1466 - irdma_find_listener(struct irdma_cm_core *cm_core, u32 *dst_addr, u16 dst_port, 1467 - u16 vlan_id, enum irdma_cm_listener_state listener_state) 1467 + irdma_find_listener(struct irdma_cm_core *cm_core, u32 *dst_addr, bool ipv4, 1468 + u16 dst_port, u16 vlan_id, 1469 + enum irdma_cm_listener_state listener_state) 1468 1470 { 1469 1471 struct irdma_cm_listener *listen_node; 1470 1472 static const u32 ip_zero[4] = { 0, 0, 0, 0 }; ··· 1479 1477 list_for_each_entry (listen_node, &cm_core->listen_list, list) { 1480 1478 memcpy(listen_addr, listen_node->loc_addr, sizeof(listen_addr)); 1481 1479 listen_port = listen_node->loc_port; 1482 - if (listen_port != dst_port || 1480 + if (listen_node->ipv4 != ipv4 || listen_port != dst_port || 1483 1481 !(listener_state & listen_node->listener_state)) 1484 1482 continue; 1485 1483 /* compare node pair, return node handle if a match */ ··· 2904 2902 unsigned long flags; 2905 2903 2906 2904 /* cannot have multiple matching listeners */ 2907 - listener = irdma_find_listener(cm_core, cm_info->loc_addr, 2908 - cm_info->loc_port, cm_info->vlan_id, 2909 - IRDMA_CM_LISTENER_EITHER_STATE); 2905 + listener = 2906 + irdma_find_listener(cm_core, cm_info->loc_addr, cm_info->ipv4, 2907 + cm_info->loc_port, cm_info->vlan_id, 2908 + IRDMA_CM_LISTENER_EITHER_STATE); 2910 2909 if (listener && 2911 2910 listener->listener_state == IRDMA_CM_LISTENER_ACTIVE_STATE) { 2912 2911 refcount_dec(&listener->refcnt); ··· 3156 3153 3157 3154 listener = irdma_find_listener(cm_core, 3158 3155 cm_info.loc_addr, 3156 + cm_info.ipv4, 3159 3157 
cm_info.loc_port, 3160 3158 cm_info.vlan_id, 3161 3159 IRDMA_CM_LISTENER_ACTIVE_STATE);
+1 -1
drivers/infiniband/hw/irdma/cm.h
··· 41 41 #define TCP_OPTIONS_PADDING 3 42 42 43 43 #define IRDMA_DEFAULT_RETRYS 64 44 - #define IRDMA_DEFAULT_RETRANS 8 44 + #define IRDMA_DEFAULT_RETRANS 32 45 45 #define IRDMA_DEFAULT_TTL 0x40 46 46 #define IRDMA_DEFAULT_RTT_VAR 6 47 47 #define IRDMA_DEFAULT_SS_THRESH 0x3fffffff
+3
drivers/infiniband/hw/irdma/hw.c
··· 41 41 IRDMA_HMC_IW_XFFL, 42 42 IRDMA_HMC_IW_Q1, 43 43 IRDMA_HMC_IW_Q1FL, 44 + IRDMA_HMC_IW_PBLE, 44 45 IRDMA_HMC_IW_TIMER, 45 46 IRDMA_HMC_IW_FSIMC, 46 47 IRDMA_HMC_IW_FSIAV, ··· 828 827 info.entry_type = rf->sd_type; 829 828 830 829 for (i = 0; i < IW_HMC_OBJ_TYPE_NUM; i++) { 830 + if (iw_hmc_obj_types[i] == IRDMA_HMC_IW_PBLE) 831 + continue; 831 832 if (dev->hmc_info->hmc_obj[iw_hmc_obj_types[i]].cnt) { 832 833 info.rsrc_type = iw_hmc_obj_types[i]; 833 834 info.count = dev->hmc_info->hmc_obj[info.rsrc_type].cnt;
+4 -1
drivers/infiniband/hw/irdma/utils.c
··· 2595 2595 /* remove the SQ WR by moving SQ tail*/ 2596 2596 IRDMA_RING_SET_TAIL(*sq_ring, 2597 2597 sq_ring->tail + qp->sq_wrtrk_array[sq_ring->tail].quanta); 2598 - 2598 + if (cmpl->cpi.op_type == IRDMAQP_OP_NOP) { 2599 + kfree(cmpl); 2600 + continue; 2601 + } 2599 2602 ibdev_dbg(iwqp->iwscq->ibcq.device, 2600 2603 "DEV: %s: adding wr_id = 0x%llx SQ Completion to list qp_id=%d\n", 2601 2604 __func__, cmpl->cpi.wr_id, qp->qp_id);
+4
drivers/infiniband/hw/mlx5/main.c
··· 442 442 *active_width = IB_WIDTH_2X; 443 443 *active_speed = IB_SPEED_NDR; 444 444 break; 445 + case MLX5E_PROT_MASK(MLX5E_400GAUI_8): 446 + *active_width = IB_WIDTH_8X; 447 + *active_speed = IB_SPEED_HDR; 448 + break; 445 449 case MLX5E_PROT_MASK(MLX5E_400GAUI_4_400GBASE_CR4_KR4): 446 450 *active_width = IB_WIDTH_4X; 447 451 *active_speed = IB_SPEED_NDR;
+4 -1
drivers/memstick/core/memstick.c
··· 410 410 return card; 411 411 err_out: 412 412 host->card = old_card; 413 + kfree_const(card->dev.kobj.name); 413 414 kfree(card); 414 415 return NULL; 415 416 } ··· 469 468 put_device(&card->dev); 470 469 host->card = NULL; 471 470 } 472 - } else 471 + } else { 472 + kfree_const(card->dev.kobj.name); 473 473 kfree(card); 474 + } 474 475 } 475 476 476 477 out_power_off:
-2
drivers/mmc/host/sdhci_am654.c
··· 351 351 */ 352 352 case MMC_TIMING_SD_HS: 353 353 case MMC_TIMING_MMC_HS: 354 - case MMC_TIMING_UHS_SDR12: 355 - case MMC_TIMING_UHS_SDR25: 356 354 val &= ~SDHCI_CTRL_HISPD; 357 355 } 358 356 }
+15 -6
drivers/mtd/ubi/build.c
··· 666 666 ubi->ec_hdr_alsize = ALIGN(UBI_EC_HDR_SIZE, ubi->hdrs_min_io_size); 667 667 ubi->vid_hdr_alsize = ALIGN(UBI_VID_HDR_SIZE, ubi->hdrs_min_io_size); 668 668 669 - if (ubi->vid_hdr_offset && ((ubi->vid_hdr_offset + UBI_VID_HDR_SIZE) > 670 - ubi->vid_hdr_alsize)) { 671 - ubi_err(ubi, "VID header offset %d too large.", ubi->vid_hdr_offset); 672 - return -EINVAL; 673 - } 674 - 675 669 dbg_gen("min_io_size %d", ubi->min_io_size); 676 670 dbg_gen("max_write_size %d", ubi->max_write_size); 677 671 dbg_gen("hdrs_min_io_size %d", ubi->hdrs_min_io_size); ··· 681 687 ~(ubi->hdrs_min_io_size - 1); 682 688 ubi->vid_hdr_shift = ubi->vid_hdr_offset - 683 689 ubi->vid_hdr_aloffset; 690 + } 691 + 692 + /* 693 + * Memory allocation for VID header is ubi->vid_hdr_alsize 694 + * which is described in comments in io.c. 695 + * Make sure VID header shift + UBI_VID_HDR_SIZE not exceeds 696 + * ubi->vid_hdr_alsize, so that all vid header operations 697 + * won't access memory out of bounds. 698 + */ 699 + if ((ubi->vid_hdr_shift + UBI_VID_HDR_SIZE) > ubi->vid_hdr_alsize) { 700 + ubi_err(ubi, "Invalid VID header offset %d, VID header shift(%d)" 701 + " + VID header size(%zu) > VID header aligned size(%d).", 702 + ubi->vid_hdr_offset, ubi->vid_hdr_shift, 703 + UBI_VID_HDR_SIZE, ubi->vid_hdr_alsize); 704 + return -EINVAL; 684 705 } 685 706 686 707 /* Similar for the data offset */
+2 -2
drivers/mtd/ubi/wl.c
··· 575 575 * @vol_id: the volume ID that last used this PEB 576 576 * @lnum: the last used logical eraseblock number for the PEB 577 577 * @torture: if the physical eraseblock has to be tortured 578 - * @nested: denotes whether the work_sem is already held in read mode 578 + * @nested: denotes whether the work_sem is already held 579 579 * 580 580 * This function returns zero in case of success and a %-ENOMEM in case of 581 581 * failure. ··· 1131 1131 int err1; 1132 1132 1133 1133 /* Re-schedule the LEB for erasure */ 1134 - err1 = schedule_erase(ubi, e, vol_id, lnum, 0, false); 1134 + err1 = schedule_erase(ubi, e, vol_id, lnum, 0, true); 1135 1135 if (err1) { 1136 1136 spin_lock(&ubi->wl_lock); 1137 1137 wl_entry_destroy(ubi, e);
+4 -3
drivers/net/bonding/bond_main.c
··· 1777 1777 1778 1778 /* The bonding driver uses ether_setup() to convert a master bond device 1779 1779 * to ARPHRD_ETHER, that resets the target netdevice's flags so we always 1780 - * have to restore the IFF_MASTER flag, and only restore IFF_SLAVE if it was set 1780 + * have to restore the IFF_MASTER flag, and only restore IFF_SLAVE and IFF_UP 1781 + * if they were set 1781 1782 */ 1782 1783 static void bond_ether_setup(struct net_device *bond_dev) 1783 1784 { 1784 - unsigned int slave_flag = bond_dev->flags & IFF_SLAVE; 1785 + unsigned int flags = bond_dev->flags & (IFF_SLAVE | IFF_UP); 1785 1786 1786 1787 ether_setup(bond_dev); 1787 - bond_dev->flags |= IFF_MASTER | slave_flag; 1788 + bond_dev->flags |= IFF_MASTER | flags; 1788 1789 bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING; 1789 1790 } 1790 1791
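The bond_main.c change preserves IFF_UP in addition to IFF_SLAVE across the `ether_setup()` flag reset. The save/restore masking reduces to plain bit arithmetic — a sketch with stand-in flag values, not the kernel's `ether_setup()`:

```c
#include <stdint.h>

/* Illustrative stand-ins for the kernel's IFF_* bits */
#define IFF_UP     (1u << 0)
#define IFF_MASTER (1u << 10)
#define IFF_SLAVE  (1u << 11)

/* Models the flag reset done by ether_setup(): everything is wiped to
 * a baseline (the real function then sets broadcast/multicast bits). */
static uint32_t ether_setup_flags(void)
{
	return 0;
}

/*
 * Sketch of the fixed bond_ether_setup(): snapshot IFF_SLAVE and
 * IFF_UP before the reset, then OR them back along with IFF_MASTER.
 */
static uint32_t bond_ether_setup_flags(uint32_t flags)
{
	uint32_t keep = flags & (IFF_SLAVE | IFF_UP);

	return ether_setup_flags() | IFF_MASTER | keep;
}
```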
+1 -1
drivers/net/dsa/microchip/ksz8795.c
··· 96 96 97 97 if (frame_size > KSZ8_LEGAL_PACKET_SIZE) 98 98 ctrl2 |= SW_LEGAL_PACKET_DISABLE; 99 - else if (frame_size > KSZ8863_NORMAL_PACKET_SIZE) 99 + if (frame_size > KSZ8863_NORMAL_PACKET_SIZE) 100 100 ctrl1 |= SW_HUGE_PACKET; 101 101 102 102 ret = ksz_rmw8(dev, REG_SW_CTRL_1, SW_HUGE_PACKET, ctrl1);
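The ksz8795.c fix drops the `else`, so a frame above the legal limit now sets both control bits, since it necessarily also exceeds the normal limit. A sketch of the resulting decision table, with illustrative threshold and bit values:

```c
#include <stdint.h>

/* Illustrative thresholds and register bits, not the driver's values */
#define KSZ8863_NORMAL_PACKET_SIZE 1536
#define KSZ8_LEGAL_PACKET_SIZE     1916

#define SW_LEGAL_PACKET_DISABLE (1u << 1)
#define SW_HUGE_PACKET          (1u << 2)

/*
 * With independent `if` checks (the fix above), an oversized frame can
 * set both bits; the old `else if` wrongly made them mutually exclusive.
 */
static uint32_t frame_ctrl_bits(int frame_size)
{
	uint32_t bits = 0;

	if (frame_size > KSZ8_LEGAL_PACKET_SIZE)
		bits |= SW_LEGAL_PACKET_DISABLE;
	if (frame_size > KSZ8863_NORMAL_PACKET_SIZE)
		bits |= SW_HUGE_PACKET;
	return bits;
}
```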
+2 -2
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 2361 2361 case ASYNC_EVENT_CMPL_EVENT_ID_PHC_UPDATE: { 2362 2362 switch (BNXT_EVENT_PHC_EVENT_TYPE(data1)) { 2363 2363 case ASYNC_EVENT_CMPL_PHC_UPDATE_EVENT_DATA1_FLAGS_PHC_RTC_UPDATE: 2364 - if (bp->fw_cap & BNXT_FW_CAP_PTP_RTC) { 2364 + if (BNXT_PTP_USE_RTC(bp)) { 2365 2365 struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; 2366 2366 u64 ns; 2367 2367 ··· 7601 7601 u8 flags; 7602 7602 int rc; 7603 7603 7604 - if (bp->hwrm_spec_code < 0x10801) { 7604 + if (bp->hwrm_spec_code < 0x10801 || !BNXT_CHIP_P5_THOR(bp)) { 7605 7605 rc = -ENODEV; 7606 7606 goto no_ptp; 7607 7607 }
+10 -9
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
··· 304 304 struct auxiliary_device *adev; 305 305 306 306 /* Skip if no auxiliary device init was done. */ 307 - if (!(bp->flags & BNXT_FLAG_ROCE_CAP)) 307 + if (!bp->aux_priv) 308 308 return; 309 309 310 310 aux_priv = bp->aux_priv; ··· 324 324 bp->edev = NULL; 325 325 kfree(aux_priv->edev); 326 326 kfree(aux_priv); 327 + bp->aux_priv = NULL; 327 328 } 328 329 329 330 static void bnxt_set_edev_info(struct bnxt_en_dev *edev, struct bnxt *bp) ··· 360 359 if (!(bp->flags & BNXT_FLAG_ROCE_CAP)) 361 360 return; 362 361 363 - bp->aux_priv = kzalloc(sizeof(*bp->aux_priv), GFP_KERNEL); 364 - if (!bp->aux_priv) 362 + aux_priv = kzalloc(sizeof(*bp->aux_priv), GFP_KERNEL); 363 + if (!aux_priv) 365 364 goto exit; 366 365 367 - bp->aux_priv->id = ida_alloc(&bnxt_aux_dev_ids, GFP_KERNEL); 368 - if (bp->aux_priv->id < 0) { 366 + aux_priv->id = ida_alloc(&bnxt_aux_dev_ids, GFP_KERNEL); 367 + if (aux_priv->id < 0) { 369 368 netdev_warn(bp->dev, 370 369 "ida alloc failed for ROCE auxiliary device\n"); 371 - kfree(bp->aux_priv); 370 + kfree(aux_priv); 372 371 goto exit; 373 372 } 374 373 375 - aux_priv = bp->aux_priv; 376 374 aux_dev = &aux_priv->aux_dev; 377 375 aux_dev->id = aux_priv->id; 378 376 aux_dev->name = "rdma"; ··· 380 380 381 381 rc = auxiliary_device_init(aux_dev); 382 382 if (rc) { 383 - ida_free(&bnxt_aux_dev_ids, bp->aux_priv->id); 384 - kfree(bp->aux_priv); 383 + ida_free(&bnxt_aux_dev_ids, aux_priv->id); 384 + kfree(aux_priv); 385 385 goto exit; 386 386 } 387 + bp->aux_priv = aux_priv; 387 388 388 389 /* From this point, all cleanup will happen via the .release callback & 389 390 * any error unwinding will need to include a call to
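The bnxt_ulp.c hunk switches from writing `bp->aux_priv` immediately to building the object in a local and publishing it only after every init step succeeds, so error paths never leave a stale pointer behind. A generic sketch of that alloc-into-local-then-publish pattern (hypothetical types; `id < 0` stands in for `ida_alloc()`/device-init failure):

```c
#include <stdlib.h>

struct aux_priv { int id; };
struct bp { struct aux_priv *aux_priv; };

/*
 * Returns 0 on success. On any failure the shared structure is left
 * untouched, mirroring the fix above.
 */
static int aux_dev_init(struct bp *bp, int id)
{
	struct aux_priv *p = malloc(sizeof(*p));

	if (!p)
		return -1;
	if (id < 0) {
		free(p);
		return -1;	/* bp->aux_priv untouched on failure */
	}
	p->id = id;
	bp->aux_priv = p;	/* publish only on full success */
	return 0;
}

/* Harness: returns 1 iff init succeeded AND the pointer was published. */
static int try_init(int id)
{
	struct bp bp = { 0 };
	int ok = aux_dev_init(&bp, id) == 0 && bp.aux_priv != NULL;

	free(bp.aux_priv);
	return ok;
}
```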
+1 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
··· 1135 1135 return; 1136 1136 1137 1137 if (adap->flower_stats_timer.function) 1138 - del_timer_sync(&adap->flower_stats_timer); 1138 + timer_shutdown_sync(&adap->flower_stats_timer); 1139 1139 cancel_work_sync(&adap->flower_stats_work); 1140 1140 rhashtable_destroy(&adap->flower_tbl); 1141 1141 adap->tc_flower_initialized = false;
+26 -25
drivers/net/ethernet/intel/e1000e/netdev.c
··· 5287 5287 ew32(TARC(0), tarc0); 5288 5288 } 5289 5289 5290 - /* disable TSO for pcie and 10/100 speeds, to avoid 5291 - * some hardware issues 5292 - */ 5293 - if (!(adapter->flags & FLAG_TSO_FORCE)) { 5294 - switch (adapter->link_speed) { 5295 - case SPEED_10: 5296 - case SPEED_100: 5297 - e_info("10/100 speed: disabling TSO\n"); 5298 - netdev->features &= ~NETIF_F_TSO; 5299 - netdev->features &= ~NETIF_F_TSO6; 5300 - break; 5301 - case SPEED_1000: 5302 - netdev->features |= NETIF_F_TSO; 5303 - netdev->features |= NETIF_F_TSO6; 5304 - break; 5305 - default: 5306 - /* oops */ 5307 - break; 5308 - } 5309 - if (hw->mac.type == e1000_pch_spt) { 5310 - netdev->features &= ~NETIF_F_TSO; 5311 - netdev->features &= ~NETIF_F_TSO6; 5312 - } 5313 - } 5314 - 5315 5290 /* enable transmits in the hardware, need to do this 5316 5291 * after setting TARC(0) 5317 5292 */ ··· 7499 7524 NETIF_F_RXHASH | 7500 7525 NETIF_F_RXCSUM | 7501 7526 NETIF_F_HW_CSUM); 7527 + 7528 + /* disable TSO for pcie and 10/100 speeds to avoid 7529 + * some hardware issues and for i219 to fix transfer 7530 + * speed being capped at 60% 7531 + */ 7532 + if (!(adapter->flags & FLAG_TSO_FORCE)) { 7533 + switch (adapter->link_speed) { 7534 + case SPEED_10: 7535 + case SPEED_100: 7536 + e_info("10/100 speed: disabling TSO\n"); 7537 + netdev->features &= ~NETIF_F_TSO; 7538 + netdev->features &= ~NETIF_F_TSO6; 7539 + break; 7540 + case SPEED_1000: 7541 + netdev->features |= NETIF_F_TSO; 7542 + netdev->features |= NETIF_F_TSO6; 7543 + break; 7544 + default: 7545 + /* oops */ 7546 + break; 7547 + } 7548 + if (hw->mac.type == e1000_pch_spt) { 7549 + netdev->features &= ~NETIF_F_TSO; 7550 + netdev->features &= ~NETIF_F_TSO6; 7551 + } 7552 + } 7502 7553 7503 7554 /* Set user-changeable features (subset of all device features) */ 7504 7555 netdev->hw_features = netdev->features;
+6 -3
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 11072 11072 pf->hw.aq.asq_last_status)); 11073 11073 } 11074 11074 /* reinit the misc interrupt */ 11075 - if (pf->flags & I40E_FLAG_MSIX_ENABLED) 11075 + if (pf->flags & I40E_FLAG_MSIX_ENABLED) { 11076 11076 ret = i40e_setup_misc_vector(pf); 11077 + if (ret) 11078 + goto end_unlock; 11079 + } 11077 11080 11078 11081 /* Add a filter to drop all Flow control frames from any VSI from being 11079 11082 * transmitted. By doing so we stop a malicious VF from sending out ··· 14150 14147 vsi->id = ctxt.vsi_number; 14151 14148 } 14152 14149 14153 - vsi->active_filters = 0; 14154 - clear_bit(__I40E_VSI_OVERFLOW_PROMISC, vsi->state); 14155 14150 spin_lock_bh(&vsi->mac_filter_hash_lock); 14151 + vsi->active_filters = 0; 14156 14152 /* If macvlan filters already exist, force them to get loaded */ 14157 14153 hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist) { 14158 14154 f->state = I40E_FILTER_NEW; 14159 14155 f_count++; 14160 14156 } 14161 14157 spin_unlock_bh(&vsi->mac_filter_hash_lock); 14158 + clear_bit(__I40E_VSI_OVERFLOW_PROMISC, vsi->state); 14162 14159 14163 14160 if (f_count) { 14164 14161 vsi->flags |= I40E_VSI_FLAG_FILTER_CHANGED;
-6
drivers/net/ethernet/mellanox/mlx5/core/dev.c
··· 60 60 if (!IS_ENABLED(CONFIG_MLX5_CORE_EN)) 61 61 return false; 62 62 63 - if (mlx5_core_is_management_pf(dev)) 64 - return false; 65 - 66 63 if (MLX5_CAP_GEN(dev, port_type) != MLX5_CAP_PORT_TYPE_ETH) 67 64 return false; 68 65 ··· 186 189 bool mlx5_rdma_supported(struct mlx5_core_dev *dev) 187 190 { 188 191 if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND)) 189 - return false; 190 - 191 - if (mlx5_core_is_management_pf(dev)) 192 192 return false; 193 193 194 194 if (dev->priv.flags & MLX5_PRIV_FLAGS_DISABLE_IB_ADEV)
-8
drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
··· 75 75 if (!mlx5_core_is_ecpf(dev)) 76 76 return 0; 77 77 78 - /* Management PF don't have a peer PF */ 79 - if (mlx5_core_is_management_pf(dev)) 80 - return 0; 81 - 82 78 return mlx5_host_pf_init(dev); 83 79 } 84 80 ··· 83 87 int err; 84 88 85 89 if (!mlx5_core_is_ecpf(dev)) 86 - return; 87 - 88 - /* Management PF don't have a peer PF */ 89 - if (mlx5_core_is_management_pf(dev)) 90 90 return; 91 91 92 92 mlx5_host_pf_cleanup(dev);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 1488 1488 void *hca_caps; 1489 1489 int err; 1490 1490 1491 - if (!mlx5_core_is_ecpf(dev) || mlx5_core_is_management_pf(dev)) { 1491 + if (!mlx5_core_is_ecpf(dev)) { 1492 1492 *max_sfs = 0; 1493 1493 return 0; 1494 1494 }
+2
drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c
··· 31 31 32 32 if (tlv->type == MLXFW_MFA2_TLV_MULTI_PART) { 33 33 multi = mlxfw_mfa2_tlv_multi_get(mfa2_file, tlv); 34 + if (!multi) 35 + return NULL; 34 36 tlv_len = NLA_ALIGN(tlv_len + be16_to_cpu(multi->total_len)); 35 37 } 36 38
+1 -1
drivers/net/ethernet/mellanox/mlxsw/pci_hw.h
··· 26 26 #define MLXSW_PCI_CIR_TIMEOUT_MSECS 1000 27 27 28 28 #define MLXSW_PCI_SW_RESET_TIMEOUT_MSECS 900000 29 - #define MLXSW_PCI_SW_RESET_WAIT_MSECS 200 29 + #define MLXSW_PCI_SW_RESET_WAIT_MSECS 400 30 30 #define MLXSW_PCI_FW_READY 0xA1844 31 31 #define MLXSW_PCI_FW_READY_MASK 0xFFFF 32 32 #define MLXSW_PCI_FW_READY_MAGIC 0x5E
-1
drivers/net/ethernet/sfc/efx.c
··· 540 540 else 541 541 efx->state = STATE_NET_UP; 542 542 543 - efx_selftest_async_start(efx); 544 543 return 0; 545 544 } 546 545
+2
drivers/net/ethernet/sfc/efx_common.c
··· 544 544 /* Start the hardware monitor if there is one */ 545 545 efx_start_monitor(efx); 546 546 547 + efx_selftest_async_start(efx); 548 + 547 549 /* Link state detection is normally event-driven; we have 548 550 * to poll now because we could have missed a change 549 551 */
+1 -1
drivers/net/hamradio/Kconfig
··· 47 47 48 48 config SCC 49 49 tristate "Z8530 SCC driver" 50 - depends on ISA && AX25 && ISA_DMA_API 50 + depends on ISA && AX25 51 51 help 52 52 These cards are used to connect your Linux box to an amateur radio 53 53 in order to communicate with other computers. If you want to use
+11 -6
drivers/net/veth.c
··· 1262 1262 1263 1263 peer = rtnl_dereference(priv->peer); 1264 1264 if (peer && peer->real_num_tx_queues <= dev->real_num_rx_queues) { 1265 + struct veth_priv *priv_peer = netdev_priv(peer); 1265 1266 xdp_features_t val = NETDEV_XDP_ACT_BASIC | 1266 1267 NETDEV_XDP_ACT_REDIRECT | 1267 1268 NETDEV_XDP_ACT_RX_SG; 1268 1269 1269 - if (priv->_xdp_prog || veth_gro_requested(dev)) 1270 + if (priv_peer->_xdp_prog || veth_gro_requested(peer)) 1270 1271 val |= NETDEV_XDP_ACT_NDO_XMIT | 1271 1272 NETDEV_XDP_ACT_NDO_XMIT_SG; 1272 1273 xdp_set_features_flag(dev, val); ··· 1505 1504 { 1506 1505 netdev_features_t changed = features ^ dev->features; 1507 1506 struct veth_priv *priv = netdev_priv(dev); 1507 + struct net_device *peer; 1508 1508 int err; 1509 1509 1510 1510 if (!(changed & NETIF_F_GRO) || !(dev->flags & IFF_UP) || priv->_xdp_prog) 1511 1511 return 0; 1512 1512 1513 + peer = rtnl_dereference(priv->peer); 1513 1514 if (features & NETIF_F_GRO) { 1514 1515 err = veth_napi_enable(dev); 1515 1516 if (err) 1516 1517 return err; 1517 1518 1518 - xdp_features_set_redirect_target(dev, true); 1519 + if (peer) 1520 + xdp_features_set_redirect_target(peer, true); 1519 1521 } else { 1520 - xdp_features_clear_redirect_target(dev); 1522 + if (peer) 1523 + xdp_features_clear_redirect_target(peer); 1521 1524 veth_napi_del(dev); 1522 1525 } 1523 1526 return 0; ··· 1603 1598 peer->max_mtu = max_mtu; 1604 1599 } 1605 1600 1606 - xdp_features_set_redirect_target(dev, true); 1601 + xdp_features_set_redirect_target(peer, true); 1607 1602 } 1608 1603 1609 1604 if (old_prog) { 1610 1605 if (!prog) { 1611 - if (!veth_gro_requested(dev)) 1612 - xdp_features_clear_redirect_target(dev); 1606 + if (peer && !veth_gro_requested(dev)) 1607 + xdp_features_clear_redirect_target(peer); 1613 1608 1614 1609 if (dev->flags & IFF_UP) 1615 1610 veth_disable_xdp(dev);
+6 -2
drivers/net/virtio_net.c
··· 815 815 int page_off, 816 816 unsigned int *len) 817 817 { 818 - struct page *page = alloc_page(GFP_ATOMIC); 818 + int tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); 819 + struct page *page; 819 820 821 + if (page_off + *len + tailroom > PAGE_SIZE) 822 + return NULL; 823 + 824 + page = alloc_page(GFP_ATOMIC); 820 825 if (!page) 821 826 return NULL; 822 827 ··· 829 824 page_off += *len; 830 825 831 826 while (--*num_buf) { 832 - int tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); 833 827 unsigned int buflen; 834 828 void *buf; 835 829 int off;
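The virtio_net fix validates up front that the first buffer plus shared-info tailroom fits in the page, instead of allocating and discovering the overflow later. The check is simple arithmetic — a sketch with an illustrative page size:

```c
#include <stddef.h>

#define PAGE_SIZE 4096	/* illustrative; arch-dependent in the kernel */

/*
 * Models the new early check in the hunk above: refuse before
 * allocating when the initial copy cannot fit alongside the tailroom.
 */
static int first_buf_fits(size_t page_off, size_t len, size_t tailroom)
{
	return page_off + len + tailroom <= PAGE_SIZE;
}
```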
+1 -1
drivers/net/vmxnet3/vmxnet3_drv.c
··· 1504 1504 goto rcd_done; 1505 1505 } 1506 1506 1507 - if (rxDataRingUsed) { 1507 + if (rxDataRingUsed && adapter->rxdataring_enabled) { 1508 1508 size_t sz; 1509 1509 1510 1510 BUG_ON(rcd->len > rq->data_ring.desc_size);
+1 -3
drivers/net/wireless/ath/ath9k/mci.c
··· 646 646 struct ath_hw *ah = sc->sc_ah; 647 647 struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; 648 648 struct ath9k_channel *chan = ah->curchan; 649 - static const u32 channelmap[] = { 650 - 0x00000000, 0xffff0000, 0xffffffff, 0x7fffffff 651 - }; 649 + u32 channelmap[] = {0x00000000, 0xffff0000, 0xffffffff, 0x7fffffff}; 652 650 int i; 653 651 s16 chan_start, chan_end; 654 652 u16 wlan_chan;
+2
drivers/nvme/host/pci.c
··· 3443 3443 { PCI_DEVICE(0x1d97, 0x2269), /* Lexar NM760 */ 3444 3444 .driver_data = NVME_QUIRK_BOGUS_NID | 3445 3445 NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 3446 + { PCI_DEVICE(0x10ec, 0x5763), /* TEAMGROUP T-FORCE CARDEA ZERO Z330 SSD */ 3447 + .driver_data = NVME_QUIRK_BOGUS_NID, }, 3446 3448 { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0061), 3447 3449 .driver_data = NVME_QUIRK_DMA_ADDRESS_BITS_48, }, 3448 3450 { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0065),
+16 -16
drivers/perf/amlogic/meson_g12_ddr_pmu.c
··· 21 21 #define DMC_QOS_IRQ BIT(30) 22 22 23 23 /* DMC bandwidth monitor register address offset */ 24 - #define DMC_MON_G12_CTRL0 (0x20 << 2) 25 - #define DMC_MON_G12_CTRL1 (0x21 << 2) 26 - #define DMC_MON_G12_CTRL2 (0x22 << 2) 27 - #define DMC_MON_G12_CTRL3 (0x23 << 2) 28 - #define DMC_MON_G12_CTRL4 (0x24 << 2) 29 - #define DMC_MON_G12_CTRL5 (0x25 << 2) 30 - #define DMC_MON_G12_CTRL6 (0x26 << 2) 31 - #define DMC_MON_G12_CTRL7 (0x27 << 2) 32 - #define DMC_MON_G12_CTRL8 (0x28 << 2) 24 + #define DMC_MON_G12_CTRL0 (0x0 << 2) 25 + #define DMC_MON_G12_CTRL1 (0x1 << 2) 26 + #define DMC_MON_G12_CTRL2 (0x2 << 2) 27 + #define DMC_MON_G12_CTRL3 (0x3 << 2) 28 + #define DMC_MON_G12_CTRL4 (0x4 << 2) 29 + #define DMC_MON_G12_CTRL5 (0x5 << 2) 30 + #define DMC_MON_G12_CTRL6 (0x6 << 2) 31 + #define DMC_MON_G12_CTRL7 (0x7 << 2) 32 + #define DMC_MON_G12_CTRL8 (0x8 << 2) 33 33 34 - #define DMC_MON_G12_ALL_REQ_CNT (0x29 << 2) 35 - #define DMC_MON_G12_ALL_GRANT_CNT (0x2a << 2) 36 - #define DMC_MON_G12_ONE_GRANT_CNT (0x2b << 2) 37 - #define DMC_MON_G12_SEC_GRANT_CNT (0x2c << 2) 38 - #define DMC_MON_G12_THD_GRANT_CNT (0x2d << 2) 39 - #define DMC_MON_G12_FOR_GRANT_CNT (0x2e << 2) 40 - #define DMC_MON_G12_TIMER (0x2f << 2) 34 + #define DMC_MON_G12_ALL_REQ_CNT (0x9 << 2) 35 + #define DMC_MON_G12_ALL_GRANT_CNT (0xa << 2) 36 + #define DMC_MON_G12_ONE_GRANT_CNT (0xb << 2) 37 + #define DMC_MON_G12_SEC_GRANT_CNT (0xc << 2) 38 + #define DMC_MON_G12_THD_GRANT_CNT (0xd << 2) 39 + #define DMC_MON_G12_FOR_GRANT_CNT (0xe << 2) 40 + #define DMC_MON_G12_TIMER (0xf << 2) 41 41 42 42 /* Each bit represent a axi line */ 43 43 PMU_FORMAT_ATTR(event, "config:0-7");
+8 -7
drivers/regulator/fan53555.c
··· 8 8 // Copyright (c) 2012 Marvell Technology Ltd. 9 9 // Yunfan Zhang <yfzhang@marvell.com> 10 10 11 - #include <linux/module.h> 12 - #include <linux/param.h> 11 + #include <linux/bits.h> 13 12 #include <linux/err.h> 13 + #include <linux/i2c.h> 14 + #include <linux/module.h> 15 + #include <linux/of_device.h> 16 + #include <linux/param.h> 14 17 #include <linux/platform_device.h> 18 + #include <linux/regmap.h> 15 19 #include <linux/regulator/driver.h> 20 + #include <linux/regulator/fan53555.h> 16 21 #include <linux/regulator/machine.h> 17 22 #include <linux/regulator/of_regulator.h> 18 - #include <linux/of_device.h> 19 - #include <linux/i2c.h> 20 23 #include <linux/slab.h> 21 - #include <linux/regmap.h> 22 - #include <linux/regulator/fan53555.h> 23 24 24 25 /* Voltage setting */ 25 26 #define FAN53555_VSEL0 0x00 ··· 61 60 #define TCS_VSEL1_MODE (1 << 6) 62 61 63 62 #define TCS_SLEW_SHIFT 3 64 - #define TCS_SLEW_MASK (0x3 < 3) 63 + #define TCS_SLEW_MASK GENMASK(4, 3) 65 64 66 65 enum fan53555_vendor { 67 66 FAN53526_VENDOR_FAIRCHILD = 0,
+2
drivers/regulator/sm5703-regulator.c
··· 42 42 .type = REGULATOR_VOLTAGE, \ 43 43 .id = SM5703_USBLDO ## _id, \ 44 44 .ops = &sm5703_regulator_ops_fixed, \ 45 + .n_voltages = 1, \ 45 46 .fixed_uV = SM5703_USBLDO_MICROVOLT, \ 46 47 .enable_reg = SM5703_REG_USBLDO12, \ 47 48 .enable_mask = SM5703_REG_EN_USBLDO ##_id, \ ··· 57 56 .type = REGULATOR_VOLTAGE, \ 58 57 .id = SM5703_VBUS, \ 59 58 .ops = &sm5703_regulator_ops_fixed, \ 59 + .n_voltages = 1, \ 60 60 .fixed_uV = SM5703_VBUS_MICROVOLT, \ 61 61 .enable_reg = SM5703_REG_CNTL, \ 62 62 .enable_mask = SM5703_OPERATION_MODE_MASK, \
+8 -12
drivers/scsi/ses.c
··· 509 509 int i; 510 510 struct ses_component *scomp; 511 511 512 - if (!edev->component[0].scratch) 513 - return 0; 514 - 515 512 for (i = 0; i < edev->components; i++) { 516 513 scomp = edev->component[i].scratch; 517 514 if (scomp->addr != efd->addr) ··· 599 602 components++, 600 603 type_ptr[0], 601 604 name); 602 - else 605 + else if (components < edev->components) 603 606 ecomp = &edev->component[components++]; 607 + else 608 + ecomp = ERR_PTR(-EINVAL); 604 609 605 610 if (!IS_ERR(ecomp)) { 606 611 if (addl_desc_ptr) { ··· 733 734 components += type_ptr[1]; 734 735 } 735 736 736 - if (components == 0) { 737 - sdev_printk(KERN_WARNING, sdev, "enclosure has no enumerated components\n"); 738 - goto err_free; 739 - } 740 - 741 737 ses_dev->page1 = buf; 742 738 ses_dev->page1_len = len; 743 739 buf = NULL; ··· 774 780 buf = NULL; 775 781 } 776 782 page2_not_supported: 777 - scomp = kcalloc(components, sizeof(struct ses_component), GFP_KERNEL); 778 - if (!scomp) 779 - goto err_free; 783 + if (components > 0) { 784 + scomp = kcalloc(components, sizeof(struct ses_component), GFP_KERNEL); 785 + if (!scomp) 786 + goto err_free; 787 + } 780 788 781 789 edev = enclosure_register(cdev->parent, dev_name(&sdev->sdev_gendev), 782 790 components, &ses_enclosure_callbacks);
+1 -1
drivers/spi/spi-rockchip-sfc.c
··· 632 632 if (ret) { 633 633 dev_err(dev, "Failed to request irq\n"); 634 634 635 - return ret; 635 + goto err_irq; 636 636 } 637 637 638 638 ret = rockchip_sfc_init(sfc);
+1 -1
drivers/tee/optee/call.c
··· 488 488 #elif defined(CONFIG_ARM64) 489 489 return (pgprot_val(p) & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL); 490 490 #else 491 - #error "Unuspported architecture" 491 + #error "Unsupported architecture" 492 492 #endif 493 493 } 494 494
+1 -1
drivers/tee/tee_shm.c
··· 32 32 is_kmap_addr((void *)start))) 33 33 return -EINVAL; 34 34 35 - page = virt_to_page(start); 35 + page = virt_to_page((void *)start); 36 36 for (n = 0; n < page_count; n++) { 37 37 pages[n] = page + n; 38 38 get_page(pages[n]);
+66 -7
drivers/thermal/intel/therm_throt.c
··· 193 193 #define THERM_THROT_POLL_INTERVAL HZ 194 194 #define THERM_STATUS_PROCHOT_LOG BIT(1) 195 195 196 - #define THERM_STATUS_CLEAR_CORE_MASK (BIT(1) | BIT(3) | BIT(5) | BIT(7) | BIT(9) | BIT(11) | BIT(13) | BIT(15)) 197 - #define THERM_STATUS_CLEAR_PKG_MASK (BIT(1) | BIT(3) | BIT(5) | BIT(7) | BIT(9) | BIT(11)) 196 + static u64 therm_intr_core_clear_mask; 197 + static u64 therm_intr_pkg_clear_mask; 198 + 199 + static void thermal_intr_init_core_clear_mask(void) 200 + { 201 + if (therm_intr_core_clear_mask) 202 + return; 203 + 204 + /* 205 + * Reference: Intel SDM Volume 4 206 + * "Table 2-2. IA-32 Architectural MSRs", MSR 0x19C 207 + * IA32_THERM_STATUS. 208 + */ 209 + 210 + /* 211 + * Bit 1, 3, 5: CPUID.01H:EDX[22] = 1. This driver will not 212 + * enable interrupts, when 0 as it checks for X86_FEATURE_ACPI. 213 + */ 214 + therm_intr_core_clear_mask = (BIT(1) | BIT(3) | BIT(5)); 215 + 216 + /* 217 + * Bit 7 and 9: Thermal Threshold #1 and #2 log 218 + * If CPUID.01H:ECX[8] = 1 219 + */ 220 + if (boot_cpu_has(X86_FEATURE_TM2)) 221 + therm_intr_core_clear_mask |= (BIT(7) | BIT(9)); 222 + 223 + /* Bit 11: Power Limitation log (R/WC0) If CPUID.06H:EAX[4] = 1 */ 224 + if (boot_cpu_has(X86_FEATURE_PLN)) 225 + therm_intr_core_clear_mask |= BIT(11); 226 + 227 + /* 228 + * Bit 13: Current Limit log (R/WC0) If CPUID.06H:EAX[7] = 1 229 + * Bit 15: Cross Domain Limit log (R/WC0) If CPUID.06H:EAX[7] = 1 230 + */ 231 + if (boot_cpu_has(X86_FEATURE_HWP)) 232 + therm_intr_core_clear_mask |= (BIT(13) | BIT(15)); 233 + } 234 + 235 + static void thermal_intr_init_pkg_clear_mask(void) 236 + { 237 + if (therm_intr_pkg_clear_mask) 238 + return; 239 + 240 + /* 241 + * Reference: Intel SDM Volume 4 242 + * "Table 2-2. IA-32 Architectural MSRs", MSR 0x1B1 243 + * IA32_PACKAGE_THERM_STATUS. 
244 + */ 245 + 246 + /* All bits except BIT 26 depend on CPUID.06H: EAX[6] = 1 */ 247 + if (boot_cpu_has(X86_FEATURE_PTS)) 248 + therm_intr_pkg_clear_mask = (BIT(1) | BIT(3) | BIT(5) | BIT(7) | BIT(9) | BIT(11)); 249 + 250 + /* 251 + * Intel SDM Volume 2A: Thermal and Power Management Leaf 252 + * Bit 26: CPUID.06H: EAX[19] = 1 253 + */ 254 + if (boot_cpu_has(X86_FEATURE_HFI)) 255 + therm_intr_pkg_clear_mask |= BIT(26); 256 + } 198 257 199 258 /* 200 259 * Clear the bits in package thermal status register for bit = 1 ··· 266 207 267 208 if (level == CORE_LEVEL) { 268 209 msr = MSR_IA32_THERM_STATUS; 269 - msr_val = THERM_STATUS_CLEAR_CORE_MASK; 210 + msr_val = therm_intr_core_clear_mask; 270 211 } else { 271 212 msr = MSR_IA32_PACKAGE_THERM_STATUS; 272 - msr_val = THERM_STATUS_CLEAR_PKG_MASK; 273 - if (boot_cpu_has(X86_FEATURE_HFI)) 274 - msr_val |= BIT(26); 275 - 213 + msr_val = therm_intr_pkg_clear_mask; 276 214 } 277 215 278 216 msr_val &= ~bit_mask; ··· 763 707 /* We'll mask the thermal vector in the lapic till we're ready: */ 764 708 h = THERMAL_APIC_VECTOR | APIC_DM_FIXED | APIC_LVT_MASKED; 765 709 apic_write(APIC_LVTTHMR, h); 710 + 711 + thermal_intr_init_core_clear_mask(); 712 + thermal_intr_init_pkg_clear_mask(); 766 713 767 714 rdmsr(MSR_IA32_THERM_INTERRUPT, l, h); 768 715 if (cpu_has(c, X86_FEATURE_PLN) && !int_pln_enable)
+31 -10
fs/cifs/smb2pdu.c
··· 587 587 588 588 } 589 589 590 + /* If invalid preauth context warn but use what we requested, SHA-512 */ 590 591 static void decode_preauth_context(struct smb2_preauth_neg_context *ctxt) 591 592 { 592 593 unsigned int len = le16_to_cpu(ctxt->DataLength); 593 594 594 - /* If invalid preauth context warn but use what we requested, SHA-512 */ 595 + /* 596 + * Caller checked that DataLength remains within SMB boundary. We still 597 + * need to confirm that one HashAlgorithms member is accounted for. 598 + */ 595 599 if (len < MIN_PREAUTH_CTXT_DATA_LEN) { 596 600 pr_warn_once("server sent bad preauth context\n"); 597 601 return; ··· 614 610 { 615 611 unsigned int len = le16_to_cpu(ctxt->DataLength); 616 612 617 - /* sizeof compress context is a one element compression capbility struct */ 613 + /* 614 + * Caller checked that DataLength remains within SMB boundary. We still 615 + * need to confirm that one CompressionAlgorithms member is accounted 616 + * for. 617 + */ 618 618 if (len < 10) { 619 619 pr_warn_once("server sent bad compression cntxt\n"); 620 620 return; ··· 640 632 unsigned int len = le16_to_cpu(ctxt->DataLength); 641 633 642 634 cifs_dbg(FYI, "decode SMB3.11 encryption neg context of len %d\n", len); 635 + /* 636 + * Caller checked that DataLength remains within SMB boundary. We still 637 + * need to confirm that one Cipher flexible array member is accounted 638 + * for. 639 + */ 643 640 if (len < MIN_ENCRYPT_CTXT_DATA_LEN) { 644 641 pr_warn_once("server sent bad crypto ctxt len\n"); 645 642 return -EINVAL; ··· 691 678 { 692 679 unsigned int len = le16_to_cpu(pctxt->DataLength); 693 680 681 + /* 682 + * Caller checked that DataLength remains within SMB boundary. We still 683 + * need to confirm that one SigningAlgorithms flexible array member is 684 + * accounted for. 
685 + */ 694 686 if ((len < 4) || (len > 16)) { 695 687 pr_warn_once("server sent bad signing negcontext\n"); 696 688 return; ··· 737 719 for (i = 0; i < ctxt_cnt; i++) { 738 720 int clen; 739 721 /* check that offset is not beyond end of SMB */ 740 - if (len_of_ctxts == 0) 741 - break; 742 - 743 722 if (len_of_ctxts < sizeof(struct smb2_neg_context)) 744 723 break; 745 724 746 725 pctx = (struct smb2_neg_context *)(offset + (char *)rsp); 747 - clen = le16_to_cpu(pctx->DataLength); 726 + clen = sizeof(struct smb2_neg_context) 727 + + le16_to_cpu(pctx->DataLength); 728 + /* 729 + * 2.2.4 SMB2 NEGOTIATE Response 730 + * Subsequent negotiate contexts MUST appear at the first 8-byte 731 + * aligned offset following the previous negotiate context. 732 + */ 733 + if (i + 1 != ctxt_cnt) 734 + clen = ALIGN(clen, 8); 748 735 if (clen > len_of_ctxts) 749 736 break; 750 737 ··· 770 747 else 771 748 cifs_server_dbg(VFS, "unknown negcontext of type %d ignored\n", 772 749 le16_to_cpu(pctx->ContextType)); 773 - 774 750 if (rc) 775 751 break; 776 - /* offsets must be 8 byte aligned */ 777 - clen = ALIGN(clen, 8); 778 - offset += clen + sizeof(struct smb2_neg_context); 752 + 753 + offset += clen; 779 754 len_of_ctxts -= clen; 780 755 } 781 756 return rc;
+10 -7
fs/fs-writeback.c
··· 978 978 continue; 979 979 } 980 980 981 + /* 982 + * If wb_tryget fails, the wb has been shutdown, skip it. 983 + * 984 + * Pin @wb so that it stays on @bdi->wb_list. This allows 985 + * continuing iteration from @wb after dropping and 986 + * regrabbing rcu read lock. 987 + */ 988 + if (!wb_tryget(wb)) 989 + continue; 990 + 981 991 /* alloc failed, execute synchronously using on-stack fallback */ 982 992 work = &fallback_work; 983 993 *work = *base_work; ··· 996 986 work->done = &fallback_work_done; 997 987 998 988 wb_queue_work(wb, work); 999 - 1000 - /* 1001 - * Pin @wb so that it stays on @bdi->wb_list. This allows 1002 - * continuing iteration from @wb after dropping and 1003 - * regrabbing rcu read lock. 1004 - */ 1005 - wb_get(wb); 1006 989 last_wb = wb; 1007 990 1008 991 rcu_read_unlock();
+14 -9
fs/ksmbd/smb2pdu.c
··· 876 876 } 877 877 878 878 static __le32 decode_preauth_ctxt(struct ksmbd_conn *conn, 879 - struct smb2_preauth_neg_context *pneg_ctxt) 879 + struct smb2_preauth_neg_context *pneg_ctxt, 880 + int len_of_ctxts) 880 881 { 881 - __le32 err = STATUS_NO_PREAUTH_INTEGRITY_HASH_OVERLAP; 882 + /* 883 + * sizeof(smb2_preauth_neg_context) assumes SMB311_SALT_SIZE Salt, 884 + * which may not be present. Only check for used HashAlgorithms[1]. 885 + */ 886 + if (len_of_ctxts < MIN_PREAUTH_CTXT_DATA_LEN) 887 + return STATUS_INVALID_PARAMETER; 882 888 883 - if (pneg_ctxt->HashAlgorithms == SMB2_PREAUTH_INTEGRITY_SHA512) { 884 - conn->preauth_info->Preauth_HashId = 885 - SMB2_PREAUTH_INTEGRITY_SHA512; 886 - err = STATUS_SUCCESS; 887 - } 889 + if (pneg_ctxt->HashAlgorithms != SMB2_PREAUTH_INTEGRITY_SHA512) 890 + return STATUS_NO_PREAUTH_INTEGRITY_HASH_OVERLAP; 888 891 889 - return err; 892 + conn->preauth_info->Preauth_HashId = SMB2_PREAUTH_INTEGRITY_SHA512; 893 + return STATUS_SUCCESS; 890 894 } 891 895 892 896 static void decode_encrypt_ctxt(struct ksmbd_conn *conn, ··· 1018 1014 break; 1019 1015 1020 1016 status = decode_preauth_ctxt(conn, 1021 - (struct smb2_preauth_neg_context *)pctx); 1017 + (struct smb2_preauth_neg_context *)pctx, 1018 + len_of_ctxts); 1022 1019 if (status != STATUS_SUCCESS) 1023 1020 break; 1024 1021 } else if (pctx->ContextType == SMB2_ENCRYPTION_CAPABILITIES) {
+20
fs/nilfs2/segment.c
··· 430 430 return 0; 431 431 } 432 432 433 + /** 434 + * nilfs_segctor_zeropad_segsum - zero pad the rest of the segment summary area 435 + * @sci: segment constructor object 436 + * 437 + * nilfs_segctor_zeropad_segsum() zero-fills unallocated space at the end of 438 + * the current segment summary block. 439 + */ 440 + static void nilfs_segctor_zeropad_segsum(struct nilfs_sc_info *sci) 441 + { 442 + struct nilfs_segsum_pointer *ssp; 443 + 444 + ssp = sci->sc_blk_cnt > 0 ? &sci->sc_binfo_ptr : &sci->sc_finfo_ptr; 445 + if (ssp->offset < ssp->bh->b_size) 446 + memset(ssp->bh->b_data + ssp->offset, 0, 447 + ssp->bh->b_size - ssp->offset); 448 + } 449 + 433 450 static int nilfs_segctor_feed_segment(struct nilfs_sc_info *sci) 434 451 { 435 452 sci->sc_nblk_this_inc += sci->sc_curseg->sb_sum.nblocks; ··· 455 438 * The current segment is filled up 456 439 * (internal code) 457 440 */ 441 + nilfs_segctor_zeropad_segsum(sci); 458 442 sci->sc_curseg = NILFS_NEXT_SEGBUF(sci->sc_curseg); 459 443 return nilfs_segctor_reset_segment_buffer(sci); 460 444 } ··· 560 542 goto retry; 561 543 } 562 544 if (unlikely(required)) { 545 + nilfs_segctor_zeropad_segsum(sci); 563 546 err = nilfs_segbuf_extend_segsum(segbuf); 564 547 if (unlikely(err)) 565 548 goto failed; ··· 1552 1533 nadd = min_t(int, nadd << 1, SC_MAX_SEGDELTA); 1553 1534 sci->sc_stage = prev_stage; 1554 1535 } 1536 + nilfs_segctor_zeropad_segsum(sci); 1555 1537 nilfs_segctor_truncate_segments(sci, sci->sc_curseg, nilfs->ns_sufile); 1556 1538 return 0; 1557 1539
+4 -2
fs/userfaultfd.c
··· 1955 1955 ret = -EFAULT; 1956 1956 if (copy_from_user(&uffdio_api, buf, sizeof(uffdio_api))) 1957 1957 goto out; 1958 - /* Ignore unsupported features (userspace built against newer kernel) */ 1959 - features = uffdio_api.features & UFFD_API_FEATURES; 1958 + features = uffdio_api.features; 1959 + ret = -EINVAL; 1960 + if (uffdio_api.api != UFFD_API || (features & ~UFFD_API_FEATURES)) 1961 + goto err_out; 1960 1962 ret = -EPERM; 1961 1963 if ((features & UFFD_FEATURE_EVENT_FORK) && !capable(CAP_SYS_PTRACE)) 1962 1964 goto err_out;
+21 -18
include/linux/kmsan.h
··· 134 134 * @page_shift: page_shift passed to vmap_range_noflush(). 135 135 * 136 136 * KMSAN maps shadow and origin pages of @pages into contiguous ranges in 137 - * vmalloc metadata address range. 137 + * vmalloc metadata address range. Returns 0 on success, callers must check 138 + * for non-zero return value. 138 139 */ 139 - void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, 140 - pgprot_t prot, struct page **pages, 141 - unsigned int page_shift); 140 + int kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, 141 + pgprot_t prot, struct page **pages, 142 + unsigned int page_shift); 142 143 143 144 /** 144 145 * kmsan_vunmap_kernel_range_noflush() - Notify KMSAN about a vunmap. ··· 160 159 * @page_shift: page_shift argument passed to vmap_range_noflush(). 161 160 * 162 161 * KMSAN creates new metadata pages for the physical pages mapped into the 163 - * virtual memory. 162 + * virtual memory. Returns 0 on success, callers must check for non-zero return 163 + * value. 164 164 */ 165 - void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, 166 - phys_addr_t phys_addr, pgprot_t prot, 167 - unsigned int page_shift); 165 + int kmsan_ioremap_page_range(unsigned long addr, unsigned long end, 166 + phys_addr_t phys_addr, pgprot_t prot, 167 + unsigned int page_shift); 168 168 169 169 /** 170 170 * kmsan_iounmap_page_range() - Notify KMSAN about a iounmap_page_range() call. 
··· 283 281 { 284 282 } 285 283 286 - static inline void kmsan_vmap_pages_range_noflush(unsigned long start, 287 - unsigned long end, 288 - pgprot_t prot, 289 - struct page **pages, 290 - unsigned int page_shift) 284 + static inline int kmsan_vmap_pages_range_noflush(unsigned long start, 285 + unsigned long end, 286 + pgprot_t prot, 287 + struct page **pages, 288 + unsigned int page_shift) 291 289 { 290 + return 0; 292 291 } 293 292 294 293 static inline void kmsan_vunmap_range_noflush(unsigned long start, ··· 297 294 { 298 295 } 299 296 300 - static inline void kmsan_ioremap_page_range(unsigned long start, 301 - unsigned long end, 302 - phys_addr_t phys_addr, 303 - pgprot_t prot, 304 - unsigned int page_shift) 297 + static inline int kmsan_ioremap_page_range(unsigned long start, 298 + unsigned long end, 299 + phys_addr_t phys_addr, pgprot_t prot, 300 + unsigned int page_shift) 305 301 { 302 + return 0; 306 303 } 307 304 308 305 static inline void kmsan_iounmap_page_range(unsigned long start,
-5
include/linux/mlx5/driver.h
··· 1215 1215 return dev->coredev_type == MLX5_COREDEV_VF; 1216 1216 } 1217 1217 1218 - static inline bool mlx5_core_is_management_pf(const struct mlx5_core_dev *dev) 1219 - { 1220 - return MLX5_CAP_GEN(dev, num_ports) == 1 && !MLX5_CAP_GEN(dev, native_port_num); 1221 - } 1222 - 1223 1218 static inline bool mlx5_core_is_ecpf(const struct mlx5_core_dev *dev) 1224 1219 { 1225 1220 return dev->caps.embedded_cpu;
+3 -2
include/linux/skbuff.h
··· 294 294 u8 pkt_otherhost:1; 295 295 u8 in_prerouting:1; 296 296 u8 bridged_dnat:1; 297 + u8 sabotage_in_done:1; 297 298 __u16 frag_max_size; 298 299 struct net_device *physindev; 299 300 ··· 4729 4728 4730 4729 static inline void nf_reset_trace(struct sk_buff *skb) 4731 4730 { 4732 - #if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || defined(CONFIG_NF_TABLES) 4731 + #if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || IS_ENABLED(CONFIG_NF_TABLES) 4733 4732 skb->nf_trace = 0; 4734 4733 #endif 4735 4734 } ··· 4749 4748 dst->_nfct = src->_nfct; 4750 4749 nf_conntrack_get(skb_nfct(src)); 4751 4750 #endif 4752 - #if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || defined(CONFIG_NF_TABLES) 4751 + #if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || IS_ENABLED(CONFIG_NF_TABLES) 4753 4752 if (copy) 4754 4753 dst->nf_trace = src->nf_trace; 4755 4754 #endif
+4
include/net/netfilter/nf_tables.h
··· 1085 1085 }; 1086 1086 1087 1087 int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain); 1088 + int nft_setelem_validate(const struct nft_ctx *ctx, struct nft_set *set, 1089 + const struct nft_set_iter *iter, 1090 + struct nft_set_elem *elem); 1091 + int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set); 1088 1092 1089 1093 enum nft_chain_types { 1090 1094 NFT_CHAIN_T_DEFAULT = 0,
+2 -9
init/initramfs.c
··· 60 60 message = x; 61 61 } 62 62 63 - static void panic_show_mem(const char *fmt, ...) 64 - { 65 - va_list args; 66 - 67 - show_mem(0, NULL); 68 - va_start(args, fmt); 69 - panic(fmt, args); 70 - va_end(args); 71 - } 63 + #define panic_show_mem(fmt, ...) \ 64 + ({ show_mem(0, NULL); panic(fmt, ##__VA_ARGS__); }) 72 65 73 66 /* link hash */ 74 67
+1 -1
io_uring/io_uring.c
··· 998 998 999 999 void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags) 1000 1000 { 1001 - if (req->ctx->task_complete && (issue_flags & IO_URING_F_IOWQ)) { 1001 + if (req->ctx->task_complete && req->ctx->submitter_task != current) { 1002 1002 req->io_task_work.func = io_req_task_complete; 1003 1003 io_req_task_work_add(req); 1004 1004 } else if (!(issue_flags & IO_URING_F_UNLOCKED) ||
+15
kernel/bpf/verifier.c
··· 3243 3243 } 3244 3244 } else if (opcode == BPF_EXIT) { 3245 3245 return -ENOTSUPP; 3246 + } else if (BPF_SRC(insn->code) == BPF_X) { 3247 + if (!(*reg_mask & (dreg | sreg))) 3248 + return 0; 3249 + /* dreg <cond> sreg 3250 + * Both dreg and sreg need precision before 3251 + * this insn. If only sreg was marked precise 3252 + * before it would be equally necessary to 3253 + * propagate it to dreg. 3254 + */ 3255 + *reg_mask |= (sreg | dreg); 3256 + /* else dreg <cond> K 3257 + * Only dreg still needs precision before 3258 + * this insn, so for the K-based conditional 3259 + * there is nothing new to be marked. 3260 + */ 3246 3261 } 3247 3262 } else if (class == BPF_LD) { 3248 3263 if (!(*reg_mask & dreg))
+145 -35
kernel/cgroup/cpuset.c
··· 1513 1513 spin_unlock_irq(&callback_lock); 1514 1514 1515 1515 if (adding || deleting) 1516 - update_tasks_cpumask(parent, tmp->new_cpus); 1516 + update_tasks_cpumask(parent, tmp->addmask); 1517 1517 1518 1518 /* 1519 1519 * Set or clear CS_SCHED_LOAD_BALANCE when partcmd_update, if necessary. ··· 1770 1770 /* 1771 1771 * Use the cpumasks in trialcs for tmpmasks when they are pointers 1772 1772 * to allocated cpumasks. 1773 + * 1774 + * Note that update_parent_subparts_cpumask() uses only addmask & 1775 + * delmask, but not new_cpus. 1773 1776 */ 1774 1777 tmp.addmask = trialcs->subparts_cpus; 1775 1778 tmp.delmask = trialcs->effective_cpus; 1776 - tmp.new_cpus = trialcs->cpus_allowed; 1779 + tmp.new_cpus = NULL; 1777 1780 #endif 1778 1781 1779 1782 retval = validate_change(cs, trialcs); ··· 1840 1837 } 1841 1838 } 1842 1839 spin_unlock_irq(&callback_lock); 1840 + 1841 + #ifdef CONFIG_CPUMASK_OFFSTACK 1842 + /* Now trialcs->cpus_allowed is available */ 1843 + tmp.new_cpus = trialcs->cpus_allowed; 1844 + #endif 1843 1845 1844 1846 /* effective_cpus will be updated here */ 1845 1847 update_cpumasks_hier(cs, &tmp, false); ··· 2453 2445 2454 2446 static struct cpuset *cpuset_attach_old_cs; 2455 2447 2448 + /* 2449 + * Check to see if a cpuset can accept a new task 2450 + * For v1, cpus_allowed and mems_allowed can't be empty. 2451 + * For v2, effective_cpus can't be empty. 2452 + * Note that in v1, effective_cpus = cpus_allowed. 
2453 + */ 2454 + static int cpuset_can_attach_check(struct cpuset *cs) 2455 + { 2456 + if (cpumask_empty(cs->effective_cpus) || 2457 + (!is_in_v2_mode() && nodes_empty(cs->mems_allowed))) 2458 + return -ENOSPC; 2459 + return 0; 2460 + } 2461 + 2456 2462 /* Called by cgroups to determine if a cpuset is usable; cpuset_rwsem held */ 2457 2463 static int cpuset_can_attach(struct cgroup_taskset *tset) 2458 2464 { ··· 2481 2459 2482 2460 percpu_down_write(&cpuset_rwsem); 2483 2461 2484 - /* allow moving tasks into an empty cpuset if on default hierarchy */ 2485 - ret = -ENOSPC; 2486 - if (!is_in_v2_mode() && 2487 - (cpumask_empty(cs->cpus_allowed) || nodes_empty(cs->mems_allowed))) 2488 - goto out_unlock; 2489 - 2490 - /* 2491 - * Task cannot be moved to a cpuset with empty effective cpus. 2492 - */ 2493 - if (cpumask_empty(cs->effective_cpus)) 2462 + /* Check to see if task is allowed in the cpuset */ 2463 + ret = cpuset_can_attach_check(cs); 2464 + if (ret) 2494 2465 goto out_unlock; 2495 2466 2496 2467 cgroup_taskset_for_each(task, css, tset) { ··· 2500 2485 * changes which zero cpus/mems_allowed. 2501 2486 */ 2502 2487 cs->attach_in_progress++; 2503 - ret = 0; 2504 2488 out_unlock: 2505 2489 percpu_up_write(&cpuset_rwsem); 2506 2490 return ret; ··· 2508 2494 static void cpuset_cancel_attach(struct cgroup_taskset *tset) 2509 2495 { 2510 2496 struct cgroup_subsys_state *css; 2497 + struct cpuset *cs; 2511 2498 2512 2499 cgroup_taskset_first(tset, &css); 2500 + cs = css_cs(css); 2513 2501 2514 2502 percpu_down_write(&cpuset_rwsem); 2515 - css_cs(css)->attach_in_progress--; 2503 + cs->attach_in_progress--; 2504 + if (!cs->attach_in_progress) 2505 + wake_up(&cpuset_attach_wq); 2516 2506 percpu_up_write(&cpuset_rwsem); 2517 2507 } 2518 2508 2519 2509 /* 2520 - * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach() 2510 + * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach_task() 2521 2511 * but we can't allocate it dynamically there. 
Define it global and 2522 2512 * allocate from cpuset_init(). 2523 2513 */ 2524 2514 static cpumask_var_t cpus_attach; 2515 + static nodemask_t cpuset_attach_nodemask_to; 2516 + 2517 + static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task) 2518 + { 2519 + percpu_rwsem_assert_held(&cpuset_rwsem); 2520 + 2521 + if (cs != &top_cpuset) 2522 + guarantee_online_cpus(task, cpus_attach); 2523 + else 2524 + cpumask_andnot(cpus_attach, task_cpu_possible_mask(task), 2525 + cs->subparts_cpus); 2526 + /* 2527 + * can_attach beforehand should guarantee that this doesn't 2528 + * fail. TODO: have a better way to handle failure here 2529 + */ 2530 + WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach)); 2531 + 2532 + cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to); 2533 + cpuset_update_task_spread_flags(cs, task); 2534 + } 2525 2535 2526 2536 static void cpuset_attach(struct cgroup_taskset *tset) 2527 2537 { 2528 - /* static buf protected by cpuset_rwsem */ 2529 - static nodemask_t cpuset_attach_nodemask_to; 2530 2538 struct task_struct *task; 2531 2539 struct task_struct *leader; 2532 2540 struct cgroup_subsys_state *css; ··· 2579 2543 2580 2544 guarantee_online_mems(cs, &cpuset_attach_nodemask_to); 2581 2545 2582 - cgroup_taskset_for_each(task, css, tset) { 2583 - if (cs != &top_cpuset) 2584 - guarantee_online_cpus(task, cpus_attach); 2585 - else 2586 - cpumask_copy(cpus_attach, task_cpu_possible_mask(task)); 2587 - /* 2588 - * can_attach beforehand should guarantee that this doesn't 2589 - * fail. TODO: have a better way to handle failure here 2590 - */ 2591 - WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach)); 2592 - 2593 - cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to); 2594 - cpuset_update_task_spread_flags(cs, task); 2595 - } 2546 + cgroup_taskset_for_each(task, css, tset) 2547 + cpuset_attach_task(cs, task); 2596 2548 2597 2549 /* 2598 2550 * Change mm for all threadgroup leaders. 
This is expensive and may ··· 3272 3248 } 3273 3249 3274 3250 /* 3251 + * In case the child is cloned into a cpuset different from its parent, 3252 + * additional checks are done to see if the move is allowed. 3253 + */ 3254 + static int cpuset_can_fork(struct task_struct *task, struct css_set *cset) 3255 + { 3256 + struct cpuset *cs = css_cs(cset->subsys[cpuset_cgrp_id]); 3257 + bool same_cs; 3258 + int ret; 3259 + 3260 + rcu_read_lock(); 3261 + same_cs = (cs == task_cs(current)); 3262 + rcu_read_unlock(); 3263 + 3264 + if (same_cs) 3265 + return 0; 3266 + 3267 + lockdep_assert_held(&cgroup_mutex); 3268 + percpu_down_write(&cpuset_rwsem); 3269 + 3270 + /* Check to see if task is allowed in the cpuset */ 3271 + ret = cpuset_can_attach_check(cs); 3272 + if (ret) 3273 + goto out_unlock; 3274 + 3275 + ret = task_can_attach(task, cs->effective_cpus); 3276 + if (ret) 3277 + goto out_unlock; 3278 + 3279 + ret = security_task_setscheduler(task); 3280 + if (ret) 3281 + goto out_unlock; 3282 + 3283 + /* 3284 + * Mark attach is in progress. This makes validate_change() fail 3285 + * changes which zero cpus/mems_allowed. 
3286 + */ 3287 + cs->attach_in_progress++; 3288 + out_unlock: 3289 + percpu_up_write(&cpuset_rwsem); 3290 + return ret; 3291 + } 3292 + 3293 + static void cpuset_cancel_fork(struct task_struct *task, struct css_set *cset) 3294 + { 3295 + struct cpuset *cs = css_cs(cset->subsys[cpuset_cgrp_id]); 3296 + bool same_cs; 3297 + 3298 + rcu_read_lock(); 3299 + same_cs = (cs == task_cs(current)); 3300 + rcu_read_unlock(); 3301 + 3302 + if (same_cs) 3303 + return; 3304 + 3305 + percpu_down_write(&cpuset_rwsem); 3306 + cs->attach_in_progress--; 3307 + if (!cs->attach_in_progress) 3308 + wake_up(&cpuset_attach_wq); 3309 + percpu_up_write(&cpuset_rwsem); 3310 + } 3311 + 3312 + /* 3275 3313 * Make sure the new task conform to the current state of its parent, 3276 3314 * which could have been changed by cpuset just after it inherits the 3277 3315 * state from the parent and before it sits on the cgroup's task list. 3278 3316 */ 3279 3317 static void cpuset_fork(struct task_struct *task) 3280 3318 { 3281 - if (task_css_is_root(task, cpuset_cgrp_id)) 3282 - return; 3319 + struct cpuset *cs; 3320 + bool same_cs; 3283 3321 3284 - set_cpus_allowed_ptr(task, current->cpus_ptr); 3285 - task->mems_allowed = current->mems_allowed; 3322 + rcu_read_lock(); 3323 + cs = task_cs(task); 3324 + same_cs = (cs == task_cs(current)); 3325 + rcu_read_unlock(); 3326 + 3327 + if (same_cs) { 3328 + if (cs == &top_cpuset) 3329 + return; 3330 + 3331 + set_cpus_allowed_ptr(task, current->cpus_ptr); 3332 + task->mems_allowed = current->mems_allowed; 3333 + return; 3334 + } 3335 + 3336 + /* CLONE_INTO_CGROUP */ 3337 + percpu_down_write(&cpuset_rwsem); 3338 + guarantee_online_mems(cs, &cpuset_attach_nodemask_to); 3339 + cpuset_attach_task(cs, task); 3340 + 3341 + cs->attach_in_progress--; 3342 + if (!cs->attach_in_progress) 3343 + wake_up(&cpuset_attach_wq); 3344 + 3345 + percpu_up_write(&cpuset_rwsem); 3286 3346 } 3287 3347 3288 3348 struct cgroup_subsys cpuset_cgrp_subsys = { ··· 3379 3271 .attach = 
cpuset_attach, 3380 3272 .post_attach = cpuset_post_attach, 3381 3273 .bind = cpuset_bind, 3274 + .can_fork = cpuset_can_fork, 3275 + .cancel_fork = cpuset_cancel_fork, 3382 3276 .fork = cpuset_fork, 3383 3277 .legacy_cftypes = legacy_files, 3384 3278 .dfl_cftypes = dfl_files,
+5 -2
kernel/cgroup/legacy_freezer.c
··· 22 22 #include <linux/freezer.h> 23 23 #include <linux/seq_file.h> 24 24 #include <linux/mutex.h> 25 + #include <linux/cpu.h> 25 26 26 27 /* 27 28 * A cgroup is freezing if any FREEZING flags are set. FREEZING_SELF is ··· 351 350 352 351 if (freeze) { 353 352 if (!(freezer->state & CGROUP_FREEZING)) 354 - static_branch_inc(&freezer_active); 353 + static_branch_inc_cpuslocked(&freezer_active); 355 354 freezer->state |= state; 356 355 freeze_cgroup(freezer); 357 356 } else { ··· 362 361 if (!(freezer->state & CGROUP_FREEZING)) { 363 362 freezer->state &= ~CGROUP_FROZEN; 364 363 if (was_freezing) 365 - static_branch_dec(&freezer_active); 364 + static_branch_dec_cpuslocked(&freezer_active); 366 365 unfreeze_cgroup(freezer); 367 366 } 368 367 } ··· 380 379 { 381 380 struct cgroup_subsys_state *pos; 382 381 382 + cpus_read_lock(); 383 383 /* 384 384 * Update all its descendants in pre-order traversal. Each 385 385 * descendant will try to inherit its parent's FREEZING state as ··· 409 407 } 410 408 rcu_read_unlock(); 411 409 mutex_unlock(&freezer_mutex); 410 + cpus_read_unlock(); 412 411 } 413 412 414 413 static ssize_t freezer_write(struct kernfs_open_file *of,
+1 -3
kernel/cgroup/rstat.c
··· 457 457 struct task_cputime *cputime = &bstat->cputime; 458 458 int i; 459 459 460 - cputime->stime = 0; 461 - cputime->utime = 0; 462 - cputime->sum_exec_runtime = 0; 460 + memset(bstat, 0, sizeof(*bstat)); 463 461 for_each_possible_cpu(i) { 464 462 struct kernel_cpustat kcpustat; 465 463 u64 *cpustat = kcpustat.cpustat;
+1
kernel/fork.c
··· 1174 1174 fail_pcpu: 1175 1175 while (i > 0) 1176 1176 percpu_counter_destroy(&mm->rss_stat[--i]); 1177 + destroy_context(mm); 1177 1178 fail_nocontext: 1178 1179 mm_free_pgd(mm); 1179 1180 fail_nopgd:
+10
kernel/sched/fair.c
··· 10238 10238 10239 10239 sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) / 10240 10240 sds->total_capacity; 10241 + 10242 + /* 10243 + * If the local group is more loaded than the average system 10244 + * load, don't try to pull any tasks. 10245 + */ 10246 + if (local->avg_load >= sds->avg_load) { 10247 + env->imbalance = 0; 10248 + return; 10249 + } 10250 + 10241 10251 } 10242 10252 10243 10253 /*
+40 -29
kernel/sys.c
··· 664 664 struct cred *new; 665 665 int retval; 666 666 kuid_t kruid, keuid, ksuid; 667 + bool ruid_new, euid_new, suid_new; 667 668 668 669 kruid = make_kuid(ns, ruid); 669 670 keuid = make_kuid(ns, euid); ··· 679 678 if ((suid != (uid_t) -1) && !uid_valid(ksuid)) 680 679 return -EINVAL; 681 680 681 + old = current_cred(); 682 + 683 + /* check for no-op */ 684 + if ((ruid == (uid_t) -1 || uid_eq(kruid, old->uid)) && 685 + (euid == (uid_t) -1 || (uid_eq(keuid, old->euid) && 686 + uid_eq(keuid, old->fsuid))) && 687 + (suid == (uid_t) -1 || uid_eq(ksuid, old->suid))) 688 + return 0; 689 + 690 + ruid_new = ruid != (uid_t) -1 && !uid_eq(kruid, old->uid) && 691 + !uid_eq(kruid, old->euid) && !uid_eq(kruid, old->suid); 692 + euid_new = euid != (uid_t) -1 && !uid_eq(keuid, old->uid) && 693 + !uid_eq(keuid, old->euid) && !uid_eq(keuid, old->suid); 694 + suid_new = suid != (uid_t) -1 && !uid_eq(ksuid, old->uid) && 695 + !uid_eq(ksuid, old->euid) && !uid_eq(ksuid, old->suid); 696 + if ((ruid_new || euid_new || suid_new) && 697 + !ns_capable_setid(old->user_ns, CAP_SETUID)) 698 + return -EPERM; 699 + 682 700 new = prepare_creds(); 683 701 if (!new) 684 702 return -ENOMEM; 685 - 686 - old = current_cred(); 687 - 688 - retval = -EPERM; 689 - if (!ns_capable_setid(old->user_ns, CAP_SETUID)) { 690 - if (ruid != (uid_t) -1 && !uid_eq(kruid, old->uid) && 691 - !uid_eq(kruid, old->euid) && !uid_eq(kruid, old->suid)) 692 - goto error; 693 - if (euid != (uid_t) -1 && !uid_eq(keuid, old->uid) && 694 - !uid_eq(keuid, old->euid) && !uid_eq(keuid, old->suid)) 695 - goto error; 696 - if (suid != (uid_t) -1 && !uid_eq(ksuid, old->uid) && 697 - !uid_eq(ksuid, old->euid) && !uid_eq(ksuid, old->suid)) 698 - goto error; 699 - } 700 703 701 704 if (ruid != (uid_t) -1) { 702 705 new->uid = kruid; ··· 766 761 struct cred *new; 767 762 int retval; 768 763 kgid_t krgid, kegid, ksgid; 764 + bool rgid_new, egid_new, sgid_new; 769 765 770 766 krgid = make_kgid(ns, rgid); 771 767 kegid = make_kgid(ns, 
egid); ··· 779 773 if ((sgid != (gid_t) -1) && !gid_valid(ksgid)) 780 774 return -EINVAL; 781 775 776 + old = current_cred(); 777 + 778 + /* check for no-op */ 779 + if ((rgid == (gid_t) -1 || gid_eq(krgid, old->gid)) && 780 + (egid == (gid_t) -1 || (gid_eq(kegid, old->egid) && 781 + gid_eq(kegid, old->fsgid))) && 782 + (sgid == (gid_t) -1 || gid_eq(ksgid, old->sgid))) 783 + return 0; 784 + 785 + rgid_new = rgid != (gid_t) -1 && !gid_eq(krgid, old->gid) && 786 + !gid_eq(krgid, old->egid) && !gid_eq(krgid, old->sgid); 787 + egid_new = egid != (gid_t) -1 && !gid_eq(kegid, old->gid) && 788 + !gid_eq(kegid, old->egid) && !gid_eq(kegid, old->sgid); 789 + sgid_new = sgid != (gid_t) -1 && !gid_eq(ksgid, old->gid) && 790 + !gid_eq(ksgid, old->egid) && !gid_eq(ksgid, old->sgid); 791 + if ((rgid_new || egid_new || sgid_new) && 792 + !ns_capable_setid(old->user_ns, CAP_SETGID)) 793 + return -EPERM; 794 + 782 795 new = prepare_creds(); 783 796 if (!new) 784 797 return -ENOMEM; 785 - old = current_cred(); 786 - 787 - retval = -EPERM; 788 - if (!ns_capable_setid(old->user_ns, CAP_SETGID)) { 789 - if (rgid != (gid_t) -1 && !gid_eq(krgid, old->gid) && 790 - !gid_eq(krgid, old->egid) && !gid_eq(krgid, old->sgid)) 791 - goto error; 792 - if (egid != (gid_t) -1 && !gid_eq(kegid, old->gid) && 793 - !gid_eq(kegid, old->egid) && !gid_eq(kegid, old->sgid)) 794 - goto error; 795 - if (sgid != (gid_t) -1 && !gid_eq(ksgid, old->gid) && 796 - !gid_eq(ksgid, old->egid) && !gid_eq(ksgid, old->sgid)) 797 - goto error; 798 - } 799 798 800 799 if (rgid != (gid_t) -1) 801 800 new->gid = krgid;
+31 -35
lib/maple_tree.c
··· 1303 1303 node = mas->alloc; 1304 1304 node->request_count = 0; 1305 1305 while (requested) { 1306 - max_req = MAPLE_ALLOC_SLOTS; 1307 - if (node->node_count) { 1308 - unsigned int offset = node->node_count; 1309 - 1310 - slots = (void **)&node->slot[offset]; 1311 - max_req -= offset; 1312 - } else { 1313 - slots = (void **)&node->slot; 1314 - } 1315 - 1306 + max_req = MAPLE_ALLOC_SLOTS - node->node_count; 1307 + slots = (void **)&node->slot[node->node_count]; 1316 1308 max_req = min(requested, max_req); 1317 1309 count = mt_alloc_bulk(gfp, max_req, slots); 1318 1310 if (!count) 1319 1311 goto nomem_bulk; 1320 1312 1313 + if (node->node_count == 0) { 1314 + node->slot[0]->node_count = 0; 1315 + node->slot[0]->request_count = 0; 1316 + } 1317 + 1321 1318 node->node_count += count; 1322 1319 allocated += count; 1323 1320 node = node->slot[0]; 1324 - node->node_count = 0; 1325 - node->request_count = 0; 1326 1321 requested -= count; 1327 1322 } 1328 1323 mas->alloc->total = allocated; ··· 4965 4970 * Return: True if found in a leaf, false otherwise. 
4966 4971 * 4967 4972 */ 4968 - static bool mas_rev_awalk(struct ma_state *mas, unsigned long size) 4973 + static bool mas_rev_awalk(struct ma_state *mas, unsigned long size, 4974 + unsigned long *gap_min, unsigned long *gap_max) 4969 4975 { 4970 4976 enum maple_type type = mte_node_type(mas->node); 4971 4977 struct maple_node *node = mas_mn(mas); ··· 5031 5035 5032 5036 if (unlikely(ma_is_leaf(type))) { 5033 5037 mas->offset = offset; 5034 - mas->min = min; 5035 - mas->max = min + gap - 1; 5038 + *gap_min = min; 5039 + *gap_max = min + gap - 1; 5036 5040 return true; 5037 5041 } 5038 5042 ··· 5056 5060 { 5057 5061 enum maple_type type = mte_node_type(mas->node); 5058 5062 unsigned long pivot, min, gap = 0; 5059 - unsigned char offset; 5060 - unsigned long *gaps; 5061 - unsigned long *pivots = ma_pivots(mas_mn(mas), type); 5062 - void __rcu **slots = ma_slots(mas_mn(mas), type); 5063 + unsigned char offset, data_end; 5064 + unsigned long *gaps, *pivots; 5065 + void __rcu **slots; 5066 + struct maple_node *node; 5063 5067 bool found = false; 5064 5068 5065 5069 if (ma_is_dense(type)) { ··· 5067 5071 return true; 5068 5072 } 5069 5073 5070 - gaps = ma_gaps(mte_to_node(mas->node), type); 5074 + node = mas_mn(mas); 5075 + pivots = ma_pivots(node, type); 5076 + slots = ma_slots(node, type); 5077 + gaps = ma_gaps(node, type); 5071 5078 offset = mas->offset; 5072 5079 min = mas_safe_min(mas, pivots, offset); 5073 - for (; offset < mt_slots[type]; offset++) { 5074 - pivot = mas_safe_pivot(mas, pivots, offset, type); 5075 - if (offset && !pivot) 5076 - break; 5080 + data_end = ma_data_end(node, type, pivots, mas->max); 5081 + for (; offset <= data_end; offset++) { 5082 + pivot = mas_logical_pivot(mas, pivots, offset, type); 5077 5083 5078 5084 /* Not within lower bounds */ 5079 5085 if (mas->index > pivot) ··· 5310 5312 unsigned long *pivots; 5311 5313 enum maple_type mt; 5312 5314 5315 + if (min >= max) 5316 + return -EINVAL; 5317 + 5313 5318 if (mas_is_start(mas)) 5314 
5319 mas_start(mas); 5315 5320 else if (mas->offset >= 2) ··· 5367 5366 { 5368 5367 struct maple_enode *last = mas->node; 5369 5368 5369 + if (min >= max) 5370 + return -EINVAL; 5371 + 5370 5372 if (mas_is_start(mas)) { 5371 5373 mas_start(mas); 5372 5374 mas->offset = mas_data_end(mas); ··· 5389 5385 mas->index = min; 5390 5386 mas->last = max; 5391 5387 5392 - while (!mas_rev_awalk(mas, size)) { 5388 + while (!mas_rev_awalk(mas, size, &min, &max)) { 5393 5389 if (last == mas->node) { 5394 5390 if (!mas_rewind_node(mas)) 5395 5391 return -EBUSY; ··· 5404 5400 if (unlikely(mas->offset == MAPLE_NODE_SLOTS)) 5405 5401 return -EBUSY; 5406 5402 5407 - /* 5408 - * mas_rev_awalk() has set mas->min and mas->max to the gap values. If 5409 - * the maximum is outside the window we are searching, then use the last 5410 - * location in the search. 5411 - * mas->max and mas->min is the range of the gap. 5412 - * mas->index and mas->last are currently set to the search range. 5413 - */ 5414 - 5415 5403 /* Trim the upper limit to the max. */ 5416 - if (mas->max <= mas->last) 5417 - mas->last = mas->max; 5404 + if (max <= mas->last) 5405 + mas->last = max; 5418 5406 5419 5407 mas->index = mas->last - size + 1; 5420 5408 return 0;
+10 -2
mm/backing-dev.c
··· 507 507 static void cleanup_offline_cgwbs_workfn(struct work_struct *work); 508 508 static DECLARE_WORK(cleanup_offline_cgwbs_work, cleanup_offline_cgwbs_workfn); 509 509 510 + static void cgwb_free_rcu(struct rcu_head *rcu_head) 511 + { 512 + struct bdi_writeback *wb = container_of(rcu_head, 513 + struct bdi_writeback, rcu); 514 + 515 + percpu_ref_exit(&wb->refcnt); 516 + kfree(wb); 517 + } 518 + 510 519 static void cgwb_release_workfn(struct work_struct *work) 511 520 { 512 521 struct bdi_writeback *wb = container_of(work, struct bdi_writeback, ··· 538 529 list_del(&wb->offline_node); 539 530 spin_unlock_irq(&cgwb_lock); 540 531 541 - percpu_ref_exit(&wb->refcnt); 542 532 wb_exit(wb); 543 533 bdi_put(bdi); 544 534 WARN_ON_ONCE(!list_empty(&wb->b_attached)); 545 - kfree_rcu(wb, rcu); 535 + call_rcu(&wb->rcu, cgwb_free_rcu); 546 536 } 547 537 548 538 static void cgwb_release(struct percpu_ref *refcnt)
+15 -4
mm/huge_memory.c
··· 1838 1838 if (is_swap_pmd(*pmd)) { 1839 1839 swp_entry_t entry = pmd_to_swp_entry(*pmd); 1840 1840 struct page *page = pfn_swap_entry_to_page(entry); 1841 + pmd_t newpmd; 1841 1842 1842 1843 VM_BUG_ON(!is_pmd_migration_entry(*pmd)); 1843 1844 if (is_writable_migration_entry(entry)) { 1844 - pmd_t newpmd; 1845 1845 /* 1846 1846 * A protection check is difficult so 1847 1847 * just be safe and disable write ··· 1855 1855 newpmd = pmd_swp_mksoft_dirty(newpmd); 1856 1856 if (pmd_swp_uffd_wp(*pmd)) 1857 1857 newpmd = pmd_swp_mkuffd_wp(newpmd); 1858 - set_pmd_at(mm, addr, pmd, newpmd); 1858 + } else { 1859 + newpmd = *pmd; 1859 1860 } 1861 + 1862 + if (uffd_wp) 1863 + newpmd = pmd_swp_mkuffd_wp(newpmd); 1864 + else if (uffd_wp_resolve) 1865 + newpmd = pmd_swp_clear_uffd_wp(newpmd); 1866 + if (!pmd_same(*pmd, newpmd)) 1867 + set_pmd_at(mm, addr, pmd, newpmd); 1860 1868 goto unlock; 1861 1869 } 1862 1870 #endif ··· 2665 2657 VM_BUG_ON_FOLIO(!folio_test_large(folio), folio); 2666 2658 2667 2659 is_hzp = is_huge_zero_page(&folio->page); 2668 - VM_WARN_ON_ONCE_FOLIO(is_hzp, folio); 2669 - if (is_hzp) 2660 + if (is_hzp) { 2661 + pr_warn_ratelimited("Called split_huge_page for huge zero page\n"); 2670 2662 return -EBUSY; 2663 + } 2671 2664 2672 2665 if (folio_test_writeback(folio)) 2673 2666 return -EBUSY; ··· 3260 3251 pmdswp = swp_entry_to_pmd(entry); 3261 3252 if (pmd_soft_dirty(pmdval)) 3262 3253 pmdswp = pmd_swp_mksoft_dirty(pmdswp); 3254 + if (pmd_uffd_wp(pmdval)) 3255 + pmdswp = pmd_swp_mkuffd_wp(pmdswp); 3263 3256 set_pmd_at(mm, address, pvmw->pmd, pmdswp); 3264 3257 page_remove_rmap(page, vma, true); 3265 3258 put_page(page);
+4
mm/khugepaged.c
··· 572 572 result = SCAN_PTE_NON_PRESENT; 573 573 goto out; 574 574 } 575 + if (pte_uffd_wp(pteval)) { 576 + result = SCAN_PTE_UFFD_WP; 577 + goto out; 578 + } 575 579 page = vm_normal_page(vma, address, pteval); 576 580 if (unlikely(!page) || unlikely(is_zone_device_page(page))) { 577 581 result = SCAN_PAGE_NULL;
+47 -8
mm/kmsan/hooks.c
··· 148 148 * into the virtual memory. If those physical pages already had shadow/origin, 149 149 * those are ignored. 150 150 */ 151 - void kmsan_ioremap_page_range(unsigned long start, unsigned long end, 152 - phys_addr_t phys_addr, pgprot_t prot, 153 - unsigned int page_shift) 151 + int kmsan_ioremap_page_range(unsigned long start, unsigned long end, 152 + phys_addr_t phys_addr, pgprot_t prot, 153 + unsigned int page_shift) 154 154 { 155 155 gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO; 156 156 struct page *shadow, *origin; 157 157 unsigned long off = 0; 158 - int nr; 158 + int nr, err = 0, clean = 0, mapped; 159 159 160 160 if (!kmsan_enabled || kmsan_in_runtime()) 161 - return; 161 + return 0; 162 162 163 163 nr = (end - start) / PAGE_SIZE; 164 164 kmsan_enter_runtime(); 165 - for (int i = 0; i < nr; i++, off += PAGE_SIZE) { 165 + for (int i = 0; i < nr; i++, off += PAGE_SIZE, clean = i) { 166 166 shadow = alloc_pages(gfp_mask, 1); 167 167 origin = alloc_pages(gfp_mask, 1); 168 - __vmap_pages_range_noflush( 168 + if (!shadow || !origin) { 169 + err = -ENOMEM; 170 + goto ret; 171 + } 172 + mapped = __vmap_pages_range_noflush( 169 173 vmalloc_shadow(start + off), 170 174 vmalloc_shadow(start + off + PAGE_SIZE), prot, &shadow, 171 175 PAGE_SHIFT); 172 - __vmap_pages_range_noflush( 176 + if (mapped) { 177 + err = mapped; 178 + goto ret; 179 + } 180 + shadow = NULL; 181 + mapped = __vmap_pages_range_noflush( 173 182 vmalloc_origin(start + off), 174 183 vmalloc_origin(start + off + PAGE_SIZE), prot, &origin, 175 184 PAGE_SHIFT); 185 + if (mapped) { 186 + __vunmap_range_noflush( 187 + vmalloc_shadow(start + off), 188 + vmalloc_shadow(start + off + PAGE_SIZE)); 189 + err = mapped; 190 + goto ret; 191 + } 192 + origin = NULL; 193 + } 194 + /* Page mapping loop finished normally, nothing to clean up. */ 195 + clean = 0; 196 + 197 + ret: 198 + if (clean > 0) { 199 + /* 200 + * Something went wrong. 
Clean up shadow/origin pages allocated 201 + * on the last loop iteration, then delete mappings created 202 + * during the previous iterations. 203 + */ 204 + if (shadow) 205 + __free_pages(shadow, 1); 206 + if (origin) 207 + __free_pages(origin, 1); 208 + __vunmap_range_noflush( 209 + vmalloc_shadow(start), 210 + vmalloc_shadow(start + clean * PAGE_SIZE)); 211 + __vunmap_range_noflush( 212 + vmalloc_origin(start), 213 + vmalloc_origin(start + clean * PAGE_SIZE)); 176 214 } 177 215 flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); 178 216 flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); 179 217 kmsan_leave_runtime(); 218 + return err; 180 219 } 181 220 182 221 void kmsan_iounmap_page_range(unsigned long start, unsigned long end)
+18 -9
mm/kmsan/shadow.c
··· 216 216 kmsan_leave_runtime(); 217 217 } 218 218 219 - void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, 220 - pgprot_t prot, struct page **pages, 221 - unsigned int page_shift) 219 + int kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, 220 + pgprot_t prot, struct page **pages, 221 + unsigned int page_shift) 222 222 { 223 223 unsigned long shadow_start, origin_start, shadow_end, origin_end; 224 224 struct page **s_pages, **o_pages; 225 - int nr, mapped; 225 + int nr, mapped, err = 0; 226 226 227 227 if (!kmsan_enabled) 228 - return; 228 + return 0; 229 229 230 230 shadow_start = vmalloc_meta((void *)start, KMSAN_META_SHADOW); 231 231 shadow_end = vmalloc_meta((void *)end, KMSAN_META_SHADOW); 232 232 if (!shadow_start) 233 - return; 233 + return 0; 234 234 235 235 nr = (end - start) / PAGE_SIZE; 236 236 s_pages = kcalloc(nr, sizeof(*s_pages), GFP_KERNEL); 237 237 o_pages = kcalloc(nr, sizeof(*o_pages), GFP_KERNEL); 238 - if (!s_pages || !o_pages) 238 + if (!s_pages || !o_pages) { 239 + err = -ENOMEM; 239 240 goto ret; 241 + } 240 242 for (int i = 0; i < nr; i++) { 241 243 s_pages[i] = shadow_page_for(pages[i]); 242 244 o_pages[i] = origin_page_for(pages[i]); ··· 251 249 kmsan_enter_runtime(); 252 250 mapped = __vmap_pages_range_noflush(shadow_start, shadow_end, prot, 253 251 s_pages, page_shift); 254 - KMSAN_WARN_ON(mapped); 252 + if (mapped) { 253 + err = mapped; 254 + goto ret; 255 + } 255 256 mapped = __vmap_pages_range_noflush(origin_start, origin_end, prot, 256 257 o_pages, page_shift); 257 - KMSAN_WARN_ON(mapped); 258 + if (mapped) { 259 + err = mapped; 260 + goto ret; 261 + } 258 262 kmsan_leave_runtime(); 259 263 flush_tlb_kernel_range(shadow_start, shadow_end); 260 264 flush_tlb_kernel_range(origin_start, origin_end); ··· 270 262 ret: 271 263 kfree(s_pages); 272 264 kfree(o_pages); 265 + return err; 273 266 } 274 267 275 268 /* Allocate metadata for pages allocated at boot time. */
+48 -54
mm/mempolicy.c
··· 790 790 return err; 791 791 } 792 792 793 - /* Step 2: apply policy to a range and do splits. */ 794 - static int mbind_range(struct mm_struct *mm, unsigned long start, 795 - unsigned long end, struct mempolicy *new_pol) 793 + /* Split or merge the VMA (if required) and apply the new policy */ 794 + static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma, 795 + struct vm_area_struct **prev, unsigned long start, 796 + unsigned long end, struct mempolicy *new_pol) 796 797 { 797 - VMA_ITERATOR(vmi, mm, start); 798 - struct vm_area_struct *prev; 799 - struct vm_area_struct *vma; 800 - int err = 0; 798 + struct vm_area_struct *merged; 799 + unsigned long vmstart, vmend; 801 800 pgoff_t pgoff; 801 + int err; 802 802 803 - prev = vma_prev(&vmi); 804 - vma = vma_find(&vmi, end); 805 - if (WARN_ON(!vma)) 803 + vmend = min(end, vma->vm_end); 804 + if (start > vma->vm_start) { 805 + *prev = vma; 806 + vmstart = start; 807 + } else { 808 + vmstart = vma->vm_start; 809 + } 810 + 811 + if (mpol_equal(vma_policy(vma), new_pol)) 806 812 return 0; 807 813 808 - if (start > vma->vm_start) 809 - prev = vma; 814 + pgoff = vma->vm_pgoff + ((vmstart - vma->vm_start) >> PAGE_SHIFT); 815 + merged = vma_merge(vmi, vma->vm_mm, *prev, vmstart, vmend, vma->vm_flags, 816 + vma->anon_vma, vma->vm_file, pgoff, new_pol, 817 + vma->vm_userfaultfd_ctx, anon_vma_name(vma)); 818 + if (merged) { 819 + *prev = merged; 820 + return vma_replace_policy(merged, new_pol); 821 + } 810 822 811 - do { 812 - unsigned long vmstart = max(start, vma->vm_start); 813 - unsigned long vmend = min(end, vma->vm_end); 814 - 815 - if (mpol_equal(vma_policy(vma), new_pol)) 816 - goto next; 817 - 818 - pgoff = vma->vm_pgoff + 819 - ((vmstart - vma->vm_start) >> PAGE_SHIFT); 820 - prev = vma_merge(&vmi, mm, prev, vmstart, vmend, vma->vm_flags, 821 - vma->anon_vma, vma->vm_file, pgoff, 822 - new_pol, vma->vm_userfaultfd_ctx, 823 - anon_vma_name(vma)); 824 - if (prev) { 825 - vma = prev; 826 - goto 
replace; 827 - } 828 - if (vma->vm_start != vmstart) { 829 - err = split_vma(&vmi, vma, vmstart, 1); 830 - if (err) 831 - goto out; 832 - } 833 - if (vma->vm_end != vmend) { 834 - err = split_vma(&vmi, vma, vmend, 0); 835 - if (err) 836 - goto out; 837 - } 838 - replace: 839 - err = vma_replace_policy(vma, new_pol); 823 + if (vma->vm_start != vmstart) { 824 + err = split_vma(vmi, vma, vmstart, 1); 840 825 if (err) 841 - goto out; 842 - next: 843 - prev = vma; 844 - } for_each_vma_range(vmi, vma, end); 826 + return err; 827 + } 845 828 846 - out: 847 - return err; 829 + if (vma->vm_end != vmend) { 830 + err = split_vma(vmi, vma, vmend, 0); 831 + if (err) 832 + return err; 833 + } 834 + 835 + *prev = vma; 836 + return vma_replace_policy(vma, new_pol); 848 837 } 849 838 850 839 /* Set the process memory policy */ ··· 1248 1259 nodemask_t *nmask, unsigned long flags) 1249 1260 { 1250 1261 struct mm_struct *mm = current->mm; 1262 + struct vm_area_struct *vma, *prev; 1263 + struct vma_iterator vmi; 1251 1264 struct mempolicy *new; 1252 1265 unsigned long end; 1253 1266 int err; ··· 1319 1328 goto up_out; 1320 1329 } 1321 1330 1322 - err = mbind_range(mm, start, end, new); 1331 + vma_iter_init(&vmi, mm, start); 1332 + prev = vma_prev(&vmi); 1333 + for_each_vma_range(vmi, vma, end) { 1334 + err = mbind_range(&vmi, vma, &prev, start, end, new); 1335 + if (err) 1336 + break; 1337 + } 1323 1338 1324 1339 if (!err) { 1325 1340 int nr_failed = 0; ··· 1486 1489 unsigned long, home_node, unsigned long, flags) 1487 1490 { 1488 1491 struct mm_struct *mm = current->mm; 1489 - struct vm_area_struct *vma; 1492 + struct vm_area_struct *vma, *prev; 1490 1493 struct mempolicy *new, *old; 1491 - unsigned long vmstart; 1492 - unsigned long vmend; 1493 1494 unsigned long end; 1494 1495 int err = -ENOENT; 1495 1496 VMA_ITERATOR(vmi, mm, start); ··· 1516 1521 if (end == start) 1517 1522 return 0; 1518 1523 mmap_write_lock(mm); 1524 + prev = vma_prev(&vmi); 1519 1525 for_each_vma_range(vmi, 
vma, end) { 1520 1526 /* 1521 1527 * If any vma in the range got policy other than MPOL_BIND ··· 1537 1541 } 1538 1542 1539 1543 new->home_node = home_node; 1540 - vmstart = max(start, vma->vm_start); 1541 - vmend = min(end, vma->vm_end); 1542 - err = mbind_range(mm, vmstart, vmend, new); 1544 + err = mbind_range(&vmi, vma, &prev, start, end, new); 1543 1545 mpol_put(new); 1544 1546 if (err) 1545 1547 break;
+43 -5
mm/mmap.c
··· 1518 1518 */ 1519 1519 static unsigned long unmapped_area(struct vm_unmapped_area_info *info) 1520 1520 { 1521 - unsigned long length, gap; 1521 + unsigned long length, gap, low_limit; 1522 + struct vm_area_struct *tmp; 1522 1523 1523 1524 MA_STATE(mas, &current->mm->mm_mt, 0, 0); 1524 1525 ··· 1528 1527 if (length < info->length) 1529 1528 return -ENOMEM; 1530 1529 1531 - if (mas_empty_area(&mas, info->low_limit, info->high_limit - 1, 1532 - length)) 1530 + low_limit = info->low_limit; 1531 + retry: 1532 + if (mas_empty_area(&mas, low_limit, info->high_limit - 1, length)) 1533 1533 return -ENOMEM; 1534 1534 1535 1535 gap = mas.index; 1536 1536 gap += (info->align_offset - gap) & info->align_mask; 1537 + tmp = mas_next(&mas, ULONG_MAX); 1538 + if (tmp && (tmp->vm_flags & VM_GROWSDOWN)) { /* Avoid prev check if possible */ 1539 + if (vm_start_gap(tmp) < gap + length - 1) { 1540 + low_limit = tmp->vm_end; 1541 + mas_reset(&mas); 1542 + goto retry; 1543 + } 1544 + } else { 1545 + tmp = mas_prev(&mas, 0); 1546 + if (tmp && vm_end_gap(tmp) > gap) { 1547 + low_limit = vm_end_gap(tmp); 1548 + mas_reset(&mas); 1549 + goto retry; 1550 + } 1551 + } 1552 + 1537 1553 return gap; 1538 1554 } 1539 1555 ··· 1566 1548 */ 1567 1549 static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info) 1568 1550 { 1569 - unsigned long length, gap; 1551 + unsigned long length, gap, high_limit, gap_end; 1552 + struct vm_area_struct *tmp; 1570 1553 1571 1554 MA_STATE(mas, &current->mm->mm_mt, 0, 0); 1572 1555 /* Adjust search length to account for worst case alignment overhead */ ··· 1575 1556 if (length < info->length) 1576 1557 return -ENOMEM; 1577 1558 1578 - if (mas_empty_area_rev(&mas, info->low_limit, info->high_limit - 1, 1559 + high_limit = info->high_limit; 1560 + retry: 1561 + if (mas_empty_area_rev(&mas, info->low_limit, high_limit - 1, 1579 1562 length)) 1580 1563 return -ENOMEM; 1581 1564 1582 1565 gap = mas.last + 1 - info->length; 1583 1566 gap -= (gap - 
info->align_offset) & info->align_mask; 1567 + gap_end = mas.last; 1568 + tmp = mas_next(&mas, ULONG_MAX); 1569 + if (tmp && (tmp->vm_flags & VM_GROWSDOWN)) { /* Avoid prev check if possible */ 1570 + if (vm_start_gap(tmp) <= gap_end) { 1571 + high_limit = vm_start_gap(tmp); 1572 + mas_reset(&mas); 1573 + goto retry; 1574 + } 1575 + } else { 1576 + tmp = mas_prev(&mas, 0); 1577 + if (tmp && vm_end_gap(tmp) > gap) { 1578 + high_limit = tmp->vm_start; 1579 + mas_reset(&mas); 1580 + goto retry; 1581 + } 1582 + } 1583 + 1584 1584 return gap; 1585 1585 } 1586 1586
+1 -1
mm/mprotect.c
··· 838 838 } 839 839 tlb_finish_mmu(&tlb); 840 840 841 - if (vma_iter_end(&vmi) < end) 841 + if (!error && vma_iter_end(&vmi) < end) 842 842 error = -ENOMEM; 843 843 844 844 out:
+19
mm/page_alloc.c
··· 6632 6632 int nid; 6633 6633 int __maybe_unused cpu; 6634 6634 pg_data_t *self = data; 6635 + unsigned long flags; 6635 6636 6637 + /* 6638 + * Explicitly disable this CPU's interrupts before taking seqlock 6639 + * to prevent any IRQ handler from calling into the page allocator 6640 + * (e.g. GFP_ATOMIC) that could hit zonelist_iter_begin and livelock. 6641 + */ 6642 + local_irq_save(flags); 6643 + /* 6644 + * Explicitly disable this CPU's synchronous printk() before taking 6645 + * seqlock to prevent any printk() from trying to hold port->lock, for 6646 + * tty_insert_flip_string_and_push_buffer() on other CPU might be 6647 + * calling kmalloc(GFP_ATOMIC | __GFP_NOWARN) with port->lock held. 6648 + */ 6649 + printk_deferred_enter(); 6636 6650 write_seqlock(&zonelist_update_seq); 6637 6651 6638 6652 #ifdef CONFIG_NUMA ··· 6685 6671 } 6686 6672 6687 6673 write_sequnlock(&zonelist_update_seq); 6674 + printk_deferred_exit(); 6675 + local_irq_restore(flags); 6688 6676 } 6689 6677 6690 6678 static noinline void __init ··· 9465 9449 return false; 9466 9450 9467 9451 if (PageReserved(page)) 9452 + return false; 9453 + 9454 + if (PageHuge(page)) 9468 9455 return false; 9469 9456 } 9470 9457 return true;
+1 -1
mm/swap.c
··· 222 222 if (lruvec) 223 223 unlock_page_lruvec_irqrestore(lruvec, flags); 224 224 folios_put(fbatch->folios, folio_batch_count(fbatch)); 225 - folio_batch_init(fbatch); 225 + folio_batch_reinit(fbatch); 226 226 } 227 227 228 228 static void folio_batch_add_and_move(struct folio_batch *fbatch,
+7 -3
mm/vmalloc.c
··· 313 313 ioremap_max_page_shift); 314 314 flush_cache_vmap(addr, end); 315 315 if (!err) 316 - kmsan_ioremap_page_range(addr, end, phys_addr, prot, 317 - ioremap_max_page_shift); 316 + err = kmsan_ioremap_page_range(addr, end, phys_addr, prot, 317 + ioremap_max_page_shift); 318 318 return err; 319 319 } 320 320 ··· 605 605 int vmap_pages_range_noflush(unsigned long addr, unsigned long end, 606 606 pgprot_t prot, struct page **pages, unsigned int page_shift) 607 607 { 608 - kmsan_vmap_pages_range_noflush(addr, end, prot, pages, page_shift); 608 + int ret = kmsan_vmap_pages_range_noflush(addr, end, prot, pages, 609 + page_shift); 610 + 611 + if (ret) 612 + return ret; 609 613 return __vmap_pages_range_noflush(addr, end, prot, pages, page_shift); 610 614 } 611 615
+11 -6
net/bridge/br_netfilter_hooks.c
··· 869 869 { 870 870 struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb); 871 871 872 - if (nf_bridge && !nf_bridge->in_prerouting && 873 - !netif_is_l3_master(skb->dev) && 874 - !netif_is_l3_slave(skb->dev)) { 875 - nf_bridge_info_free(skb); 876 - state->okfn(state->net, state->sk, skb); 877 - return NF_STOLEN; 872 + if (nf_bridge) { 873 + if (nf_bridge->sabotage_in_done) 874 + return NF_ACCEPT; 875 + 876 + if (!nf_bridge->in_prerouting && 877 + !netif_is_l3_master(skb->dev) && 878 + !netif_is_l3_slave(skb->dev)) { 879 + nf_bridge->sabotage_in_done = 1; 880 + state->okfn(state->net, state->sk, skb); 881 + return NF_STOLEN; 882 + } 878 883 } 879 884 880 885 return NF_ACCEPT;
+11
net/bridge/br_switchdev.c
··· 148 148 if (test_bit(BR_FDB_LOCKED, &fdb->flags)) 149 149 return; 150 150 151 + /* Entries with these flags were created using ndm_state == NUD_REACHABLE, 152 + * ndm_flags == NTF_MASTER( | NTF_STICKY), ext_flags == 0 by something 153 + * equivalent to 'bridge fdb add ... master dynamic (sticky)'. 154 + * Drivers don't know how to deal with these, so don't notify them to 155 + * avoid confusing them. 156 + */ 157 + if (test_bit(BR_FDB_ADDED_BY_USER, &fdb->flags) && 158 + !test_bit(BR_FDB_STATIC, &fdb->flags) && 159 + !test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) 160 + return; 161 + 151 162 br_switchdev_fdb_populate(br, &item, fdb, NULL); 152 163 153 164 switch (type) {
+2 -1
net/ipv6/rpl.c
··· 32 32 size_t ipv6_rpl_srh_size(unsigned char n, unsigned char cmpri, 33 33 unsigned char cmpre) 34 34 { 35 - return (n * IPV6_PFXTAIL_LEN(cmpri)) + IPV6_PFXTAIL_LEN(cmpre); 35 + return sizeof(struct ipv6_rpl_sr_hdr) + (n * IPV6_PFXTAIL_LEN(cmpri)) + 36 + IPV6_PFXTAIL_LEN(cmpre); 36 37 } 37 38 38 39 void ipv6_rpl_srh_decompress(struct ipv6_rpl_sr_hdr *outhdr,
+50 -24
net/mptcp/protocol.c
··· 2344 2344 unsigned int flags) 2345 2345 { 2346 2346 struct mptcp_sock *msk = mptcp_sk(sk); 2347 - bool need_push, dispose_it; 2347 + bool dispose_it, need_push = false; 2348 + 2349 + /* If the first subflow moved to a close state before accept, e.g. due 2350 + * to an incoming reset, mptcp either: 2351 + * - if either the subflow or the msk are dead, destroy the context 2352 + * (the subflow socket is deleted by inet_child_forget) and the msk 2353 + * - otherwise do nothing at the moment and take action at accept and/or 2354 + * listener shutdown - user-space must be able to accept() the closed 2355 + * socket. 2356 + */ 2357 + if (msk->in_accept_queue && msk->first == ssk) { 2358 + if (!sock_flag(sk, SOCK_DEAD) && !sock_flag(ssk, SOCK_DEAD)) 2359 + return; 2360 + 2361 + /* ensure later check in mptcp_worker() will dispose the msk */ 2362 + sock_set_flag(sk, SOCK_DEAD); 2363 + lock_sock_nested(ssk, SINGLE_DEPTH_NESTING); 2364 + mptcp_subflow_drop_ctx(ssk); 2365 + goto out_release; 2366 + } 2348 2367 2349 2368 dispose_it = !msk->subflow || ssk != msk->subflow->sk; 2350 2369 if (dispose_it) ··· 2399 2380 if (!inet_csk(ssk)->icsk_ulp_ops) { 2400 2381 WARN_ON_ONCE(!sock_flag(ssk, SOCK_DEAD)); 2401 2382 kfree_rcu(subflow, rcu); 2402 - } else if (msk->in_accept_queue && msk->first == ssk) { 2403 - /* if the first subflow moved to a close state, e.g. due to 2404 - * incoming reset and we reach here before inet_child_forget() 2405 - * the TCP stack could later try to close it via 2406 - * inet_csk_listen_stop(), or deliver it to the user space via 2407 - * accept(). 2408 - * We can't delete the subflow - or risk a double free - nor let 2409 - * the msk survive - or will be leaked in the non accept scenario: 2410 - * fallback and let TCP cope with the subflow cleanup. 
2411 - */ 2412 - WARN_ON_ONCE(sock_flag(ssk, SOCK_DEAD)); 2413 - mptcp_subflow_drop_ctx(ssk); 2414 2383 } else { 2415 2384 /* otherwise tcp will dispose of the ssk and subflow ctx */ 2416 - if (ssk->sk_state == TCP_LISTEN) 2385 + if (ssk->sk_state == TCP_LISTEN) { 2386 + tcp_set_state(ssk, TCP_CLOSE); 2387 + mptcp_subflow_queue_clean(sk, ssk); 2388 + inet_csk_listen_stop(ssk); 2417 2389 mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CLOSED); 2390 + } 2418 2391 2419 2392 __tcp_close(ssk, 0); 2420 2393 2421 2394 /* close acquired an extra ref */ 2422 2395 __sock_put(ssk); 2423 2396 } 2397 + 2398 + out_release: 2424 2399 release_sock(ssk); 2425 2400 2426 2401 sock_put(ssk); ··· 2469 2456 mptcp_close_ssk(sk, ssk, subflow); 2470 2457 } 2471 2458 2472 - /* if the MPC subflow has been closed before the msk is accepted, 2473 - * msk will never be accept-ed, close it now 2474 - */ 2475 - if (!msk->first && msk->in_accept_queue) { 2476 - sock_set_flag(sk, SOCK_DEAD); 2477 - inet_sk_state_store(sk, TCP_CLOSE); 2478 - } 2479 2459 } 2480 2460 2481 - static bool mptcp_check_close_timeout(const struct sock *sk) 2461 + static bool mptcp_should_close(const struct sock *sk) 2482 2462 { 2483 2463 s32 delta = tcp_jiffies32 - inet_csk(sk)->icsk_mtup.probe_timestamp; 2484 2464 struct mptcp_subflow_context *subflow; 2485 2465 2486 - if (delta >= TCP_TIMEWAIT_LEN) 2466 + if (delta >= TCP_TIMEWAIT_LEN || mptcp_sk(sk)->in_accept_queue) 2487 2467 return true; 2488 2468 2489 2469 /* if all subflows are in closed status don't bother with additional ··· 2684 2678 * even if it is orphaned and in FIN_WAIT2 state 2685 2679 */ 2686 2680 if (sock_flag(sk, SOCK_DEAD)) { 2687 - if (mptcp_check_close_timeout(sk)) { 2681 + if (mptcp_should_close(sk)) { 2688 2682 inet_sk_state_store(sk, TCP_CLOSE); 2689 2683 mptcp_do_fastclose(sk); 2690 2684 } ··· 2924 2918 xfrm_sk_free_policy(sk); 2925 2919 2926 2920 sock_put(sk); 2921 + } 2922 + 2923 + void __mptcp_unaccepted_force_close(struct sock *sk) 2924 + { 
2925 + sock_set_flag(sk, SOCK_DEAD); 2926 + inet_sk_state_store(sk, TCP_CLOSE); 2927 + mptcp_do_fastclose(sk); 2928 + __mptcp_destroy_sock(sk); 2927 2929 } 2928 2930 2929 2931 static __poll_t mptcp_check_readable(struct mptcp_sock *msk) ··· 3778 3764 if (!ssk->sk_socket) 3779 3765 mptcp_sock_graft(ssk, newsock); 3780 3766 } 3767 + 3768 + /* Do late cleanup for the first subflow as necessary. Also 3769 + * deal with bad peers not doing a complete shutdown. 3770 + */ 3771 + if (msk->first && 3772 + unlikely(inet_sk_state_load(msk->first) == TCP_CLOSE)) { 3773 + __mptcp_close_ssk(newsk, msk->first, 3774 + mptcp_subflow_ctx(msk->first), 0); 3775 + if (unlikely(list_empty(&msk->conn_list))) 3776 + inet_sk_state_store(newsk, TCP_CLOSE); 3777 + } 3778 + 3781 3779 release_sock(newsk); 3782 3780 } 3783 3781
+2
net/mptcp/protocol.h
··· 626 626 struct mptcp_subflow_context *subflow); 627 627 void __mptcp_subflow_send_ack(struct sock *ssk); 628 628 void mptcp_subflow_reset(struct sock *ssk); 629 + void mptcp_subflow_queue_clean(struct sock *sk, struct sock *ssk); 629 630 void mptcp_sock_graft(struct sock *sk, struct socket *parent); 630 631 struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk); 631 632 bool __mptcp_close(struct sock *sk, long timeout); 632 633 void mptcp_cancel_work(struct sock *sk); 634 + void __mptcp_unaccepted_force_close(struct sock *sk); 633 635 void mptcp_set_owner_r(struct sk_buff *skb, struct sock *sk); 634 636 635 637 bool mptcp_addresses_equal(const struct mptcp_addr_info *a,
+77 -3
net/mptcp/subflow.c
··· 715 715 if (!ctx) 716 716 return; 717 717 718 - subflow_ulp_fallback(ssk, ctx); 719 - if (ctx->conn) 720 - sock_put(ctx->conn); 718 + list_del(&mptcp_subflow_ctx(ssk)->node); 719 + if (inet_csk(ssk)->icsk_ulp_ops) { 720 + subflow_ulp_fallback(ssk, ctx); 721 + if (ctx->conn) 722 + sock_put(ctx->conn); 723 + } 721 724 722 725 kfree_rcu(ctx, rcu); 723 726 } ··· 1803 1800 subflow->rx_eof = 1; 1804 1801 mptcp_subflow_eof(parent); 1805 1802 } 1803 + } 1804 + 1805 + void mptcp_subflow_queue_clean(struct sock *listener_sk, struct sock *listener_ssk) 1806 + { 1807 + struct request_sock_queue *queue = &inet_csk(listener_ssk)->icsk_accept_queue; 1808 + struct mptcp_sock *msk, *next, *head = NULL; 1809 + struct request_sock *req; 1810 + struct sock *sk; 1811 + 1812 + /* build a list of all unaccepted mptcp sockets */ 1813 + spin_lock_bh(&queue->rskq_lock); 1814 + for (req = queue->rskq_accept_head; req; req = req->dl_next) { 1815 + struct mptcp_subflow_context *subflow; 1816 + struct sock *ssk = req->sk; 1817 + 1818 + if (!sk_is_mptcp(ssk)) 1819 + continue; 1820 + 1821 + subflow = mptcp_subflow_ctx(ssk); 1822 + if (!subflow || !subflow->conn) 1823 + continue; 1824 + 1825 + /* skip if already in list */ 1826 + sk = subflow->conn; 1827 + msk = mptcp_sk(sk); 1828 + if (msk->dl_next || msk == head) 1829 + continue; 1830 + 1831 + sock_hold(sk); 1832 + msk->dl_next = head; 1833 + head = msk; 1834 + } 1835 + spin_unlock_bh(&queue->rskq_lock); 1836 + if (!head) 1837 + return; 1838 + 1839 + /* can't acquire the msk socket lock under the subflow one, 1840 + * or will cause ABBA deadlock 1841 + */ 1842 + release_sock(listener_ssk); 1843 + 1844 + for (msk = head; msk; msk = next) { 1845 + sk = (struct sock *)msk; 1846 + 1847 + lock_sock_nested(sk, SINGLE_DEPTH_NESTING); 1848 + next = msk->dl_next; 1849 + msk->dl_next = NULL; 1850 + 1851 + __mptcp_unaccepted_force_close(sk); 1852 + release_sock(sk); 1853 + 1854 + /* lockdep will report a false positive ABBA deadlock 1855 + * between 
cancel_work_sync and the listener socket. 1856 + * The involved locks belong to different sockets WRT 1857 + * the existing AB chain. 1858 + * Using a per socket key is problematic as key 1859 + * deregistration requires process context and must be 1860 + * performed at socket disposal time, in atomic 1861 + * context. 1862 + * Just tell lockdep to consider the listener socket 1863 + * released here. 1864 + */ 1865 + mutex_release(&listener_sk->sk_lock.dep_map, _RET_IP_); 1866 + mptcp_cancel_work(sk); 1867 + mutex_acquire(&listener_sk->sk_lock.dep_map, 0, 0, _RET_IP_); 1868 + 1869 + sock_put(sk); 1870 + } 1871 + 1872 + /* we are still under the listener msk socket lock */ 1873 + lock_sock_nested(listener_ssk, SINGLE_DEPTH_NESTING); 1806 1874 } 1807 1875 1808 1876 static int subflow_ulp_init(struct sock *sk)
+61 -8
net/netfilter/nf_tables_api.c
··· 3447 3447 return 0; 3448 3448 } 3449 3449 3450 + int nft_setelem_validate(const struct nft_ctx *ctx, struct nft_set *set, 3451 + const struct nft_set_iter *iter, 3452 + struct nft_set_elem *elem) 3453 + { 3454 + const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); 3455 + struct nft_ctx *pctx = (struct nft_ctx *)ctx; 3456 + const struct nft_data *data; 3457 + int err; 3458 + 3459 + if (nft_set_ext_exists(ext, NFT_SET_EXT_FLAGS) && 3460 + *nft_set_ext_flags(ext) & NFT_SET_ELEM_INTERVAL_END) 3461 + return 0; 3462 + 3463 + data = nft_set_ext_data(ext); 3464 + switch (data->verdict.code) { 3465 + case NFT_JUMP: 3466 + case NFT_GOTO: 3467 + pctx->level++; 3468 + err = nft_chain_validate(ctx, data->verdict.chain); 3469 + if (err < 0) 3470 + return err; 3471 + pctx->level--; 3472 + break; 3473 + default: 3474 + break; 3475 + } 3476 + 3477 + return 0; 3478 + } 3479 + 3480 + struct nft_set_elem_catchall { 3481 + struct list_head list; 3482 + struct rcu_head rcu; 3483 + void *elem; 3484 + }; 3485 + 3486 + int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set) 3487 + { 3488 + u8 genmask = nft_genmask_next(ctx->net); 3489 + struct nft_set_elem_catchall *catchall; 3490 + struct nft_set_elem elem; 3491 + struct nft_set_ext *ext; 3492 + int ret = 0; 3493 + 3494 + list_for_each_entry_rcu(catchall, &set->catchall_list, list) { 3495 + ext = nft_set_elem_ext(set, catchall->elem); 3496 + if (!nft_set_elem_active(ext, genmask)) 3497 + continue; 3498 + 3499 + elem.priv = catchall->elem; 3500 + ret = nft_setelem_validate(ctx, set, NULL, &elem); 3501 + if (ret < 0) 3502 + return ret; 3503 + } 3504 + 3505 + return ret; 3506 + } 3507 + 3450 3508 static struct nft_rule *nft_rule_lookup_byid(const struct net *net, 3451 3509 const struct nft_chain *chain, 3452 3510 const struct nlattr *nla); ··· 4817 4759 return err; 4818 4760 } 4819 4761 4820 - struct nft_set_elem_catchall { 4821 - struct list_head list; 4822 - struct rcu_head rcu; 4823 - void *elem; 
4824 - }; 4825 - 4826 4762 static void nft_set_catchall_destroy(const struct nft_ctx *ctx, 4827 4763 struct nft_set *set) 4828 4764 { ··· 6108 6056 if (err < 0) 6109 6057 return err; 6110 6058 6111 - if (!nla[NFTA_SET_ELEM_KEY] && !(flags & NFT_SET_ELEM_CATCHALL)) 6059 + if (((flags & NFT_SET_ELEM_CATCHALL) && nla[NFTA_SET_ELEM_KEY]) || 6060 + (!(flags & NFT_SET_ELEM_CATCHALL) && !nla[NFTA_SET_ELEM_KEY])) 6112 6061 return -EINVAL; 6113 6062 6114 6063 if (flags != 0) { ··· 7105 7052 } 7106 7053 7107 7054 if (nla[NFTA_OBJ_USERDATA]) { 7108 - obj->udata = nla_memdup(nla[NFTA_OBJ_USERDATA], GFP_KERNEL); 7055 + obj->udata = nla_memdup(nla[NFTA_OBJ_USERDATA], GFP_KERNEL_ACCOUNT); 7109 7056 if (obj->udata == NULL) 7110 7057 goto err_userdata; 7111 7058
+4 -32
net/netfilter/nft_lookup.c
··· 199 199 return -1; 200 200 } 201 201 202 - static int nft_lookup_validate_setelem(const struct nft_ctx *ctx, 203 - struct nft_set *set, 204 - const struct nft_set_iter *iter, 205 - struct nft_set_elem *elem) 206 - { 207 - const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); 208 - struct nft_ctx *pctx = (struct nft_ctx *)ctx; 209 - const struct nft_data *data; 210 - int err; 211 - 212 - if (nft_set_ext_exists(ext, NFT_SET_EXT_FLAGS) && 213 - *nft_set_ext_flags(ext) & NFT_SET_ELEM_INTERVAL_END) 214 - return 0; 215 - 216 - data = nft_set_ext_data(ext); 217 - switch (data->verdict.code) { 218 - case NFT_JUMP: 219 - case NFT_GOTO: 220 - pctx->level++; 221 - err = nft_chain_validate(ctx, data->verdict.chain); 222 - if (err < 0) 223 - return err; 224 - pctx->level--; 225 - break; 226 - default: 227 - break; 228 - } 229 - 230 - return 0; 231 - } 232 - 233 202 static int nft_lookup_validate(const struct nft_ctx *ctx, 234 203 const struct nft_expr *expr, 235 204 const struct nft_data **d) ··· 214 245 iter.skip = 0; 215 246 iter.count = 0; 216 247 iter.err = 0; 217 - iter.fn = nft_lookup_validate_setelem; 248 + iter.fn = nft_setelem_validate; 218 249 219 250 priv->set->ops->walk(ctx, priv->set, &iter); 251 + if (!iter.err) 252 + iter.err = nft_set_catchall_validate(ctx, priv->set); 253 + 220 254 if (iter.err < 0) 221 255 return iter.err; 222 256
+3
net/sched/cls_api.c
··· 3235 3235 3236 3236 err_miss_alloc: 3237 3237 tcf_exts_destroy(exts); 3238 + #ifdef CONFIG_NET_CLS_ACT 3239 + exts->actions = NULL; 3240 + #endif 3238 3241 return err; 3239 3242 } 3240 3243 EXPORT_SYMBOL(tcf_exts_init_ex);
+7 -6
net/sched/sch_qfq.c
··· 421 421 } else 422 422 weight = 1; 423 423 424 - if (tb[TCA_QFQ_LMAX]) { 424 + if (tb[TCA_QFQ_LMAX]) 425 425 lmax = nla_get_u32(tb[TCA_QFQ_LMAX]); 426 - if (lmax < QFQ_MIN_LMAX || lmax > (1UL << QFQ_MTU_SHIFT)) { 427 - pr_notice("qfq: invalid max length %u\n", lmax); 428 - return -EINVAL; 429 - } 430 - } else 426 + else 431 427 lmax = psched_mtu(qdisc_dev(sch)); 428 + 429 + if (lmax < QFQ_MIN_LMAX || lmax > (1UL << QFQ_MTU_SHIFT)) { 430 + pr_notice("qfq: invalid max length %u\n", lmax); 431 + return -EINVAL; 432 + } 432 433 433 434 inv_w = ONE_FP / weight; 434 435 weight = ONE_FP / inv_w;
+5 -1
net/sunrpc/auth_gss/gss_krb5_test.c
··· 73 73 { 74 74 const struct gss_krb5_test_param *param = test->param_value; 75 75 struct xdr_buf buf = { 76 - .head[0].iov_base = param->plaintext->data, 77 76 .head[0].iov_len = param->plaintext->len, 78 77 .len = param->plaintext->len, 79 78 }; ··· 97 98 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, tfm); 98 99 err = crypto_ahash_setkey(tfm, Kc.data, Kc.len); 99 100 KUNIT_ASSERT_EQ(test, err, 0); 101 + 102 + buf.head[0].iov_base = kunit_kzalloc(test, buf.head[0].iov_len, GFP_KERNEL); 103 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf.head[0].iov_base); 104 + memcpy(buf.head[0].iov_base, param->plaintext->data, buf.head[0].iov_len); 100 105 101 106 checksum.len = gk5e->cksumlength; 102 107 checksum.data = kunit_kzalloc(test, checksum.len, GFP_KERNEL); ··· 1330 1327 if (!gk5e) 1331 1328 kunit_skip(test, "Encryption type is not available"); 1332 1329 1330 + memset(usage_data, 0, sizeof(usage_data)); 1333 1331 usage.data[3] = param->constant; 1334 1332 1335 1333 Ke.len = gk5e->Ke_length;
+15 -1
rust/Makefile
··· 262 262 # some configurations, with new GCC versions, etc. 263 263 bindgen_extra_c_flags = -w --target=$(BINDGEN_TARGET) 264 264 265 + # Auto variable zero-initialization requires an additional special option with 266 + # clang that is going to be removed sometime in the future (likely in 267 + # clang-18), so make sure to pass this option only if clang supports it 268 + # (libclang major version < 16). 269 + # 270 + # https://github.com/llvm/llvm-project/issues/44842 271 + # https://github.com/llvm/llvm-project/blob/llvmorg-16.0.0-rc2/clang/docs/ReleaseNotes.rst#deprecated-compiler-flags 272 + ifdef CONFIG_INIT_STACK_ALL_ZERO 273 + libclang_maj_ver=$(shell $(BINDGEN) $(srctree)/scripts/rust_is_available_bindgen_libclang.h 2>&1 | sed -ne 's/.*clang version \([0-9]*\).*/\1/p') 274 + ifeq ($(shell expr $(libclang_maj_ver) \< 16), 1) 275 + bindgen_extra_c_flags += -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang 276 + endif 277 + endif 278 + 265 279 bindgen_c_flags = $(filter-out $(bindgen_skip_c_flags), $(c_flags)) \ 266 280 $(bindgen_extra_c_flags) 267 281 endif ··· 297 283 $(bindgen_target_cflags) $(bindgen_target_extra) 298 284 299 285 $(obj)/bindings/bindings_generated.rs: private bindgen_target_flags = \ 300 - $(shell grep -v '^\#\|^$$' $(srctree)/$(src)/bindgen_parameters) 286 + $(shell grep -v '^#\|^$$' $(srctree)/$(src)/bindgen_parameters) 301 287 $(obj)/bindings/bindings_generated.rs: $(src)/bindings/bindings_helper.h \ 302 288 $(src)/bindgen_parameters FORCE 303 289 $(call if_changed_dep,bindgen)
+5 -1
rust/kernel/print.rs
··· 18 18 19 19 // Called from `vsprintf` with format specifier `%pA`. 20 20 #[no_mangle] 21 - unsafe fn rust_fmt_argument(buf: *mut c_char, end: *mut c_char, ptr: *const c_void) -> *mut c_char { 21 + unsafe extern "C" fn rust_fmt_argument( 22 + buf: *mut c_char, 23 + end: *mut c_char, 24 + ptr: *const c_void, 25 + ) -> *mut c_char { 22 26 use fmt::Write; 23 27 // SAFETY: The C contract guarantees that `buf` is valid if it's less than `end`. 24 28 let mut w = unsafe { RawFormatter::from_ptrs(buf.cast(), end.cast()) };
+1 -1
rust/kernel/str.rs
··· 408 408 /// If `pos` is less than `end`, then the region between `pos` (inclusive) and `end` 409 409 /// (exclusive) must be valid for writes for the lifetime of the returned [`RawFormatter`]. 410 410 pub(crate) unsafe fn from_ptrs(pos: *mut u8, end: *mut u8) -> Self { 411 - // INVARIANT: The safety requierments guarantee the type invariants. 411 + // INVARIANT: The safety requirements guarantee the type invariants. 412 412 Self { 413 413 beg: pos as _, 414 414 pos: pos as _,
+34 -32
scripts/Makefile.package
··· 27 27 tar -I $(KGZIP) -c $(RCS_TAR_IGNORE) -f $(2).tar.gz \ 28 28 --transform 's:^:$(2)/:S' $(TAR_CONTENT) $(3) 29 29 30 - # tarball compression 31 - # --------------------------------------------------------------------------- 32 - 33 - %.tar.gz: %.tar 34 - $(call cmd,gzip) 35 - 36 - %.tar.bz2: %.tar 37 - $(call cmd,bzip2) 38 - 39 - %.tar.xz: %.tar 40 - $(call cmd,xzmisc) 41 - 42 - %.tar.zst: %.tar 43 - $(call cmd,zstd) 44 - 45 30 # Git 46 31 # --------------------------------------------------------------------------- 47 32 ··· 42 57 false; \ 43 58 fi 44 59 60 + git-config-tar.gz = -c tar.tar.gz.command="$(KGZIP)" 61 + git-config-tar.bz2 = -c tar.tar.bz2.command="$(KBZIP2)" 62 + git-config-tar.xz = -c tar.tar.xz.command="$(XZ)" 63 + git-config-tar.zst = -c tar.tar.zst.command="$(ZSTD)" 64 + 65 + quiet_cmd_archive = ARCHIVE $@ 66 + cmd_archive = git -C $(srctree) $(git-config-tar$(suffix $@)) archive \ 67 + --output=$$(realpath $@) --prefix=$(basename $@)/ $(archive-args) 68 + 45 69 # Linux source tarball 46 70 # --------------------------------------------------------------------------- 47 71 48 - quiet_cmd_archive_linux = ARCHIVE $@ 49 - cmd_archive_linux = \ 50 - git -C $(srctree) archive --output=$$(realpath $@) --prefix=$(basename $@)/ $$(cat $<) 72 + linux-tarballs := $(addprefix linux, .tar.gz) 51 73 52 - targets += linux.tar 53 - linux.tar: .tmp_HEAD FORCE 54 - $(call if_changed,archive_linux) 74 + targets += $(linux-tarballs) 75 + $(linux-tarballs): archive-args = $$(cat $<) 76 + $(linux-tarballs): .tmp_HEAD FORCE 77 + $(call if_changed,archive) 55 78 56 79 # rpm-pkg 57 80 # --------------------------------------------------------------------------- ··· 87 94 $(UTS_MACHINE)-linux -bb $(objtree)/binkernel.spec 88 95 89 96 quiet_cmd_debianize = GEN $@ 90 - cmd_debianize = $(srctree)/scripts/package/mkdebian 97 + cmd_debianize = $(srctree)/scripts/package/mkdebian $(mkdebian-opts) 91 98 92 99 debian: FORCE 93 100 $(call cmd,debianize) ··· 96 103 
debian-orig: private source = $(shell dpkg-parsechangelog -S Source) 97 104 debian-orig: private version = $(shell dpkg-parsechangelog -S Version | sed 's/-[^-]*$$//') 98 105 debian-orig: private orig-name = $(source)_$(version).orig.tar.gz 106 + debian-orig: mkdebian-opts = --need-source 99 107 debian-orig: linux.tar.gz debian 100 108 $(Q)if [ "$(df --output=target .. 2>/dev/null)" = "$(df --output=target $< 2>/dev/null)" ]; then \ 101 109 ln -f $< ../$(orig-name); \ ··· 139 145 $(Q)$(MAKE) -f $(srctree)/Makefile 140 146 +$(Q)$(srctree)/scripts/package/buildtar $@ 141 147 142 - quiet_cmd_tar = TAR $@ 143 - cmd_tar = cd $<; tar cf ../$@ --owner=root --group=root --sort=name * 148 + compress-tar.gz = -I "$(KGZIP)" 149 + compress-tar.bz2 = -I "$(KBZIP2)" 150 + compress-tar.xz = -I "$(XZ)" 151 + compress-tar.zst = -I "$(ZSTD)" 144 152 145 - linux-$(KERNELRELEASE)-$(ARCH).tar: tar-install 153 + quiet_cmd_tar = TAR $@ 154 + cmd_tar = cd $<; tar cf ../$@ $(compress-tar$(suffix $@)) --owner=root --group=root --sort=name * 155 + 156 + dir-tarballs := $(addprefix linux-$(KERNELRELEASE)-$(ARCH), .tar .tar.gz .tar.bz2 .tar.xz .tar.zst) 157 + 158 + $(dir-tarballs): tar-install 146 159 $(call cmd,tar) 147 160 148 161 PHONY += dir-pkg ··· 181 180 .tmp_perf/PERF-VERSION-FILE: .tmp_HEAD $(srctree)/tools/perf/util/PERF-VERSION-GEN | .tmp_perf 182 181 $(call cmd,perf_version_file) 183 182 184 - quiet_cmd_archive_perf = ARCHIVE $@ 185 - cmd_archive_perf = \ 186 - git -C $(srctree) archive --output=$$(realpath $@) --prefix=$(basename $@)/ \ 187 - --add-file=$$(realpath $(word 2, $^)) \ 183 + perf-archive-args = --add-file=$$(realpath $(word 2, $^)) \ 188 184 --add-file=$$(realpath $(word 3, $^)) \ 189 185 $$(cat $(word 2, $^))^{tree} $$(cat $<) 190 186 191 - targets += perf-$(KERNELVERSION).tar 192 - perf-$(KERNELVERSION).tar: tools/perf/MANIFEST .tmp_perf/HEAD .tmp_perf/PERF-VERSION-FILE FORCE 193 - $(call if_changed,archive_perf) 187 + 188 + perf-tarballs := $(addprefix 
perf-$(KERNELVERSION), .tar .tar.gz .tar.bz2 .tar.xz .tar.zst) 189 + 190 + targets += $(perf-tarballs) 191 + $(perf-tarballs): archive-args = $(perf-archive-args) 192 + $(perf-tarballs): tools/perf/MANIFEST .tmp_perf/HEAD .tmp_perf/PERF-VERSION-FILE FORCE 193 + $(call if_changed,archive) 194 194 195 195 PHONY += perf-tar-src-pkg 196 196 perf-tar-src-pkg: perf-$(KERNELVERSION).tar
+4 -1
scripts/generate_rust_analyzer.py
··· 104 104 name = path.name.replace(".rs", "") 105 105 106 106 # Skip those that are not crate roots. 107 - if f"{name}.o" not in open(path.parent / "Makefile").read(): 107 + try: 108 + if f"{name}.o" not in open(path.parent / "Makefile").read(): 109 + continue 110 + except FileNotFoundError: 108 111 continue 109 112 110 113 logging.info("Adding %s", name)
+1 -1
scripts/is_rust_module.sh
··· 13 13 # 14 14 # In the future, checking for the `.comment` section may be another 15 15 # option, see https://github.com/rust-lang/rust/pull/97550. 16 - ${NM} "$*" | grep -qE '^[0-9a-fA-F]+ r _R[^[:space:]]+16___IS_RUST_MODULE[^[:space:]]*$' 16 + ${NM} "$*" | grep -qE '^[0-9a-fA-F]+ [Rr] _R[^[:space:]]+16___IS_RUST_MODULE[^[:space:]]*$'
+28 -36
scripts/package/gen-diff-patch
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0-only 3 3 4 - diff_patch="${1}" 5 - untracked_patch="${2}" 6 - srctree=$(dirname $0)/../.. 4 + diff_patch=$1 7 5 8 - rm -f ${diff_patch} ${untracked_patch} 6 + mkdir -p "$(dirname "${diff_patch}")" 9 7 10 - if ! ${srctree}/scripts/check-git; then 8 + git -C "${srctree:-.}" diff HEAD > "${diff_patch}" 9 + 10 + if [ ! -s "${diff_patch}" ] || 11 + [ -z "$(git -C "${srctree:-.}" ls-files --other --exclude-standard | head -n1)" ]; then 11 12 exit 12 13 fi 13 14 14 - mkdir -p "$(dirname ${diff_patch})" "$(dirname ${untracked_patch})" 15 - 16 - git -C "${srctree}" diff HEAD > "${diff_patch}" 17 - 18 - if [ ! -s "${diff_patch}" ]; then 19 - rm -f "${diff_patch}" 20 - exit 21 - fi 22 - 23 - git -C ${srctree} status --porcelain --untracked-files=all | 24 - while read stat path 25 - do 26 - if [ "${stat}" = '??' ]; then 27 - 28 - if ! diff -u /dev/null "${srctree}/${path}" > .tmp_diff && 29 - ! head -n1 .tmp_diff | grep -q "Binary files"; then 30 - { 31 - echo "--- /dev/null" 32 - echo "+++ linux/$path" 33 - cat .tmp_diff | tail -n +3 34 - } >> ${untracked_patch} 35 - fi 36 - fi 37 - done 38 - 39 - rm -f .tmp_diff 40 - 41 - if [ ! -s "${diff_patch}" ]; then 42 - rm -f "${diff_patch}" 43 - exit 44 - fi 15 + # The source tarball, which is generated by 'git archive', contains everything 16 + # you committed in the repository. If you have local diff ('git diff HEAD'), 17 + # it will go into ${diff_patch}. If untracked files are remaining, the resulting 18 + # source package may not be correct. 19 + # 20 + # Examples: 21 + # - You modified a source file to add #include "new-header.h" 22 + # but forgot to add new-header.h 23 + # - You modified a Makefile to add 'obj-$(CONFIG_FOO) += new-dirver.o' 24 + # but you forgot to add new-driver.c 25 + # 26 + # You need to commit them, or at least stage them by 'git add'. 
27 + # 28 + # This script does not take care of untracked files because doing so would 29 + # introduce additional complexity. Instead, print a warning message here if 30 + # untracked files are found. 31 + # If all untracked files are just garbage, you can ignore this warning. 32 + echo >&2 "============================ WARNING ============================" 33 + echo >&2 "Your working tree has diff from HEAD, and also untracked file(s)." 34 + echo >&2 "Please make sure you did 'git add' for all new files you need in" 35 + echo >&2 "the source package." 36 + echo >&2 "================================================================="
+59 -44
scripts/package/mkdebian
··· 84 84 fi 85 85 } 86 86 87 + # Create debian/source/ if it is a source package build 88 + gen_source () 89 + { 90 + mkdir -p debian/source 91 + 92 + echo "3.0 (quilt)" > debian/source/format 93 + 94 + { 95 + echo "diff-ignore" 96 + echo "extend-diff-ignore = .*" 97 + } > debian/source/local-options 98 + 99 + # Add .config as a patch 100 + mkdir -p debian/patches 101 + { 102 + echo "Subject: Add .config" 103 + echo "Author: ${maintainer}" 104 + echo 105 + echo "--- /dev/null" 106 + echo "+++ linux/.config" 107 + diff -u /dev/null "${KCONFIG_CONFIG}" | tail -n +3 108 + } > debian/patches/config.patch 109 + echo config.patch > debian/patches/series 110 + 111 + "${srctree}/scripts/package/gen-diff-patch" debian/patches/diff.patch 112 + if [ -s debian/patches/diff.patch ]; then 113 + sed -i " 114 + 1iSubject: Add local diff 115 + 1iAuthor: ${maintainer} 116 + 1i 117 + " debian/patches/diff.patch 118 + 119 + echo diff.patch >> debian/patches/series 120 + else 121 + rm -f debian/patches/diff.patch 122 + fi 123 + } 124 + 87 125 rm -rf debian 126 + mkdir debian 127 + 128 + email=${DEBEMAIL-$EMAIL} 129 + 130 + # use email string directly if it contains <email> 131 + if echo "${email}" | grep -q '<.*>'; then 132 + maintainer=${email} 133 + else 134 + # or construct the maintainer string 135 + user=${KBUILD_BUILD_USER-$(id -nu)} 136 + name=${DEBFULLNAME-${user}} 137 + if [ -z "${email}" ]; then 138 + buildhost=${KBUILD_BUILD_HOST-$(hostname -f 2>/dev/null || hostname)} 139 + email="${user}@${buildhost}" 140 + fi 141 + maintainer="${name} <${email}>" 142 + fi 143 + 144 + if [ "$1" = --need-source ]; then 145 + gen_source 146 + fi 88 147 89 148 # Some variables and settings used throughout the script 90 149 version=$KERNELRELEASE ··· 163 104 debarch= 164 105 set_debarch 165 106 166 - email=${DEBEMAIL-$EMAIL} 167 - 168 - # use email string directly if it contains <email> 169 - if echo $email | grep -q '<.*>'; then 170 - maintainer=$email 171 - else 172 - # or construct the 
maintainer string 173 - user=${KBUILD_BUILD_USER-$(id -nu)} 174 - name=${DEBFULLNAME-$user} 175 - if [ -z "$email" ]; then 176 - buildhost=${KBUILD_BUILD_HOST-$(hostname -f 2>/dev/null || hostname)} 177 - email="$user@$buildhost" 178 - fi 179 - maintainer="$name <$email>" 180 - fi 181 - 182 107 # Try to determine distribution 183 108 if [ -n "$KDEB_CHANGELOG_DIST" ]; then 184 109 distribution=$KDEB_CHANGELOG_DIST ··· 173 130 distribution="unstable" 174 131 echo >&2 "Using default distribution of 'unstable' in the changelog" 175 132 echo >&2 "Install lsb-release or set \$KDEB_CHANGELOG_DIST explicitly" 176 - fi 177 - 178 - mkdir -p debian/source/ 179 - echo "3.0 (quilt)" > debian/source/format 180 - 181 - { 182 - echo "diff-ignore" 183 - echo "extend-diff-ignore = .*" 184 - } > debian/source/local-options 185 - 186 - # Add .config as a patch 187 - mkdir -p debian/patches 188 - { 189 - echo "Subject: Add .config" 190 - echo "Author: ${maintainer}" 191 - echo 192 - echo "--- /dev/null" 193 - echo "+++ linux/.config" 194 - diff -u /dev/null "${KCONFIG_CONFIG}" | tail -n +3 195 - } > debian/patches/config 196 - echo config > debian/patches/series 197 - 198 - $(dirname $0)/gen-diff-patch debian/patches/diff.patch debian/patches/untracked.patch 199 - if [ -f debian/patches/diff.patch ]; then 200 - echo diff.patch >> debian/patches/series 201 - fi 202 - if [ -f debian/patches/untracked.patch ]; then 203 - echo untracked.patch >> debian/patches/series 204 133 fi 205 134 206 135 echo $debarch > debian/arch
+2 -9
scripts/package/mkspec
··· 19 19 mkdir -p rpmbuild/SOURCES 20 20 cp linux.tar.gz rpmbuild/SOURCES 21 21 cp "${KCONFIG_CONFIG}" rpmbuild/SOURCES/config 22 - $(dirname $0)/gen-diff-patch rpmbuild/SOURCES/diff.patch rpmbuild/SOURCES/untracked.patch 23 - touch rpmbuild/SOURCES/diff.patch rpmbuild/SOURCES/untracked.patch 22 + "${srctree}/scripts/package/gen-diff-patch" rpmbuild/SOURCES/diff.patch 24 23 fi 25 24 26 25 if grep -q CONFIG_MODULES=y include/config/auto.conf; then ··· 55 56 $S Source0: linux.tar.gz 56 57 $S Source1: config 57 58 $S Source2: diff.patch 58 - $S Source3: untracked.patch 59 59 Provides: $PROVIDES 60 60 $S BuildRequires: bc binutils bison dwarves 61 61 $S BuildRequires: (elfutils-libelf-devel or libelf-devel) flex ··· 92 94 $S %prep 93 95 $S %setup -q -n linux 94 96 $S cp %{SOURCE1} .config 95 - $S if [ -s %{SOURCE2} ]; then 96 - $S patch -p1 < %{SOURCE2} 97 - $S fi 98 - $S if [ -s %{SOURCE3} ]; then 99 - $S patch -p1 < %{SOURCE3} 100 - $S fi 97 + $S patch -p1 < %{SOURCE2} 101 98 $S 102 99 $S %build 103 100 $S $MAKE %{?_smp_mflags} KERNELRELEASE=$KERNELRELEASE KBUILD_BUILD_VERSION=%{release}
+1 -1
sound/firewire/tascam/tascam-stream.c
··· 490 490 // packet is important for media clock recovery. 491 491 err = amdtp_domain_start(&tscm->domain, tx_init_skip_cycles, true, true); 492 492 if (err < 0) 493 - return err; 493 + goto error; 494 494 495 495 if (!amdtp_domain_wait_ready(&tscm->domain, READY_TIMEOUT_MS)) { 496 496 err = -ETIMEDOUT;
+5 -2
sound/i2c/cs8427.c
··· 561 561 if (snd_BUG_ON(!cs8427)) 562 562 return -ENXIO; 563 563 chip = cs8427->private_data; 564 - if (active) 564 + if (active) { 565 565 memcpy(chip->playback.pcm_status, 566 566 chip->playback.def_status, 24); 567 - chip->playback.pcm_ctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_INACTIVE; 567 + chip->playback.pcm_ctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_INACTIVE; 568 + } else { 569 + chip->playback.pcm_ctl->vd[0].access |= SNDRV_CTL_ELEM_ACCESS_INACTIVE; 570 + } 568 571 snd_ctl_notify(cs8427->bus->card, 569 572 SNDRV_CTL_EVENT_MASK_VALUE | SNDRV_CTL_EVENT_MASK_INFO, 570 573 &chip->playback.pcm_ctl->id);
+9 -5
sound/pci/emu10k1/emupcm.c
··· 1236 1236 { 1237 1237 struct snd_emu10k1 *emu = snd_pcm_substream_chip(substream); 1238 1238 1239 - emu->capture_interrupt = NULL; 1239 + emu->capture_mic_interrupt = NULL; 1240 1240 emu->pcm_capture_mic_substream = NULL; 1241 1241 return 0; 1242 1242 } ··· 1344 1344 { 1345 1345 struct snd_emu10k1 *emu = snd_pcm_substream_chip(substream); 1346 1346 1347 - emu->capture_interrupt = NULL; 1347 + emu->capture_efx_interrupt = NULL; 1348 1348 emu->pcm_capture_efx_substream = NULL; 1349 1349 return 0; 1350 1350 } ··· 1781 1781 struct snd_kcontrol *kctl; 1782 1782 int err; 1783 1783 1784 - err = snd_pcm_new(emu->card, "emu10k1 efx", device, 8, 1, &pcm); 1784 + err = snd_pcm_new(emu->card, "emu10k1 efx", device, emu->audigy ? 0 : 8, 1, &pcm); 1785 1785 if (err < 0) 1786 1786 return err; 1787 1787 1788 1788 pcm->private_data = emu; 1789 1789 1790 - snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_emu10k1_fx8010_playback_ops); 1790 + if (!emu->audigy) 1791 + snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_emu10k1_fx8010_playback_ops); 1791 1792 snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_emu10k1_capture_efx_ops); 1792 1793 1793 1794 pcm->info_flags = 0; 1794 - strcpy(pcm->name, "Multichannel Capture/PT Playback"); 1795 + if (emu->audigy) 1796 + strcpy(pcm->name, "Multichannel Capture"); 1797 + else 1798 + strcpy(pcm->name, "Multichannel Capture/PT Playback"); 1795 1799 emu->pcm_efx = pcm; 1796 1800 1797 1801 /* EFX capture - record the "FXBUS2" channels, by default we connect the EXTINs
+1 -1
sound/pci/hda/patch_hdmi.c
··· 4604 4604 HDA_CODEC_ENTRY(0x80862815, "Alderlake HDMI", patch_i915_tgl_hdmi), 4605 4605 HDA_CODEC_ENTRY(0x80862816, "Rocketlake HDMI", patch_i915_tgl_hdmi), 4606 4606 HDA_CODEC_ENTRY(0x80862818, "Raptorlake HDMI", patch_i915_tgl_hdmi), 4607 - HDA_CODEC_ENTRY(0x80862819, "DG2 HDMI", patch_i915_adlp_hdmi), 4607 + HDA_CODEC_ENTRY(0x80862819, "DG2 HDMI", patch_i915_tgl_hdmi), 4608 4608 HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI", patch_i915_icl_hdmi), 4609 4609 HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI", patch_i915_icl_hdmi), 4610 4610 HDA_CODEC_ENTRY(0x8086281c, "Alderlake-P HDMI", patch_i915_adlp_hdmi),
+29
sound/pci/hda/patch_realtek.c
··· 6960 6960 ALC269_FIXUP_DELL_M101Z, 6961 6961 ALC269_FIXUP_SKU_IGNORE, 6962 6962 ALC269_FIXUP_ASUS_G73JW, 6963 + ALC269_FIXUP_ASUS_N7601ZM_PINS, 6964 + ALC269_FIXUP_ASUS_N7601ZM, 6963 6965 ALC269_FIXUP_LENOVO_EAPD, 6964 6966 ALC275_FIXUP_SONY_HWEQ, 6965 6967 ALC275_FIXUP_SONY_DISABLE_AAMIX, ··· 7257 7255 { 0x17, 0x99130111 }, /* subwoofer */ 7258 7256 { } 7259 7257 } 7258 + }, 7259 + [ALC269_FIXUP_ASUS_N7601ZM_PINS] = { 7260 + .type = HDA_FIXUP_PINS, 7261 + .v.pins = (const struct hda_pintbl[]) { 7262 + { 0x19, 0x03A11050 }, 7263 + { 0x1a, 0x03A11C30 }, 7264 + { 0x21, 0x03211420 }, 7265 + { } 7266 + } 7267 + }, 7268 + [ALC269_FIXUP_ASUS_N7601ZM] = { 7269 + .type = HDA_FIXUP_VERBS, 7270 + .v.verbs = (const struct hda_verb[]) { 7271 + {0x20, AC_VERB_SET_COEF_INDEX, 0x62}, 7272 + {0x20, AC_VERB_SET_PROC_COEF, 0xa007}, 7273 + {0x20, AC_VERB_SET_COEF_INDEX, 0x10}, 7274 + {0x20, AC_VERB_SET_PROC_COEF, 0x8420}, 7275 + {0x20, AC_VERB_SET_COEF_INDEX, 0x0f}, 7276 + {0x20, AC_VERB_SET_PROC_COEF, 0x7774}, 7277 + { } 7278 + }, 7279 + .chained = true, 7280 + .chain_id = ALC269_FIXUP_ASUS_N7601ZM_PINS, 7260 7281 }, 7261 7282 [ALC269_FIXUP_LENOVO_EAPD] = { 7262 7283 .type = HDA_FIXUP_VERBS, ··· 9491 9466 SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 9492 9467 SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE), 9493 9468 SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE), 9469 + SND_PCI_QUIRK(0x1043, 0x12a3, "Asus N7691ZM", ALC269_FIXUP_ASUS_N7601ZM), 9494 9470 SND_PCI_QUIRK(0x1043, 0x12af, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2), 9495 9471 SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC), 9496 9472 SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC), ··· 9689 9663 SND_PCI_QUIRK(0x17aa, 0x22f1, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2), 9690 9664 SND_PCI_QUIRK(0x17aa, 0x22f2, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2), 9691 9665 
SND_PCI_QUIRK(0x17aa, 0x22f3, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2), 9666 + SND_PCI_QUIRK(0x17aa, 0x2318, "Thinkpad Z13 Gen2", ALC287_FIXUP_CS35L41_I2C_2), 9667 + SND_PCI_QUIRK(0x17aa, 0x2319, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2), 9668 + SND_PCI_QUIRK(0x17aa, 0x231a, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2), 9692 9669 SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 9693 9670 SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 9694 9671 SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+10
sound/pci/hda/patch_sigmatel.c
··· 1707 1707 }; 1708 1708 1709 1709 static const struct hda_pintbl ref92hd73xx_pin_configs[] = { 1710 + // Port A-H 1710 1711 { 0x0a, 0x02214030 }, 1711 1712 { 0x0b, 0x02a19040 }, 1712 1713 { 0x0c, 0x01a19020 }, ··· 1716 1715 { 0x0f, 0x01014010 }, 1717 1716 { 0x10, 0x01014020 }, 1718 1717 { 0x11, 0x01014030 }, 1718 + // CD in 1719 1719 { 0x12, 0x02319040 }, 1720 + // Digial Mic ins 1720 1721 { 0x13, 0x90a000f0 }, 1721 1722 { 0x14, 0x90a000f0 }, 1723 + // Digital outs 1722 1724 { 0x22, 0x01452050 }, 1723 1725 { 0x23, 0x01452050 }, 1724 1726 {} ··· 1762 1758 }; 1763 1759 1764 1760 static const struct hda_pintbl intel_dg45id_pin_configs[] = { 1761 + // Analog outputs 1765 1762 { 0x0a, 0x02214230 }, 1766 1763 { 0x0b, 0x02A19240 }, 1767 1764 { 0x0c, 0x01013214 }, ··· 1770 1765 { 0x0e, 0x01A19250 }, 1771 1766 { 0x0f, 0x01011212 }, 1772 1767 { 0x10, 0x01016211 }, 1768 + // Digital output 1769 + { 0x22, 0x01451380 }, 1770 + { 0x23, 0x40f000f0 }, 1773 1771 {} 1774 1772 }; 1775 1773 ··· 1963 1955 "DFI LanParty", STAC_92HD73XX_REF), 1964 1956 SND_PCI_QUIRK(PCI_VENDOR_ID_DFI, 0x3101, 1965 1957 "DFI LanParty", STAC_92HD73XX_REF), 1958 + SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5001, 1959 + "Intel DP45SG", STAC_92HD73XX_INTEL), 1966 1960 SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5002, 1967 1961 "Intel DG45ID", STAC_92HD73XX_INTEL), 1968 1962 SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5003,
+7 -7
tools/Makefile
··· 39 39 @echo ' turbostat - Intel CPU idle stats and freq reporting tool' 40 40 @echo ' usb - USB testing tools' 41 41 @echo ' virtio - vhost test module' 42 - @echo ' vm - misc vm tools' 42 + @echo ' mm - misc mm tools' 43 43 @echo ' wmi - WMI interface examples' 44 44 @echo ' x86_energy_perf_policy - Intel energy policy tool' 45 45 @echo '' ··· 69 69 cpupower: FORCE 70 70 $(call descend,power/$@) 71 71 72 - cgroup counter firewire hv guest bootconfig spi usb virtio vm bpf iio gpio objtool leds wmi pci firmware debugging tracing: FORCE 72 + cgroup counter firewire hv guest bootconfig spi usb virtio mm bpf iio gpio objtool leds wmi pci firmware debugging tracing: FORCE 73 73 $(call descend,$@) 74 74 75 75 bpf/%: FORCE ··· 118 118 119 119 all: acpi cgroup counter cpupower gpio hv firewire \ 120 120 perf selftests bootconfig spi turbostat usb \ 121 - virtio vm bpf x86_energy_perf_policy \ 121 + virtio mm bpf x86_energy_perf_policy \ 122 122 tmon freefall iio objtool kvm_stat wmi \ 123 123 pci debugging tracing thermal thermometer thermal-engine 124 124 ··· 128 128 cpupower_install: 129 129 $(call descend,power/$(@:_install=),install) 130 130 131 - cgroup_install counter_install firewire_install gpio_install hv_install iio_install perf_install bootconfig_install spi_install usb_install virtio_install vm_install bpf_install objtool_install wmi_install pci_install debugging_install tracing_install: 131 + cgroup_install counter_install firewire_install gpio_install hv_install iio_install perf_install bootconfig_install spi_install usb_install virtio_install mm_install bpf_install objtool_install wmi_install pci_install debugging_install tracing_install: 132 132 $(call descend,$(@:_install=),install) 133 133 134 134 selftests_install: ··· 158 158 install: acpi_install cgroup_install counter_install cpupower_install gpio_install \ 159 159 hv_install firewire_install iio_install \ 160 160 perf_install selftests_install turbostat_install usb_install \ 161 - virtio_install 
vm_install bpf_install x86_energy_perf_policy_install \ 161 + virtio_install mm_install bpf_install x86_energy_perf_policy_install \ 162 162 tmon_install freefall_install objtool_install kvm_stat_install \ 163 163 wmi_install pci_install debugging_install intel-speed-select_install \ 164 164 tracing_install thermometer_install thermal-engine_install ··· 169 169 cpupower_clean: 170 170 $(call descend,power/cpupower,clean) 171 171 172 - cgroup_clean counter_clean hv_clean firewire_clean bootconfig_clean spi_clean usb_clean virtio_clean vm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean pci_clean firmware_clean debugging_clean tracing_clean: 172 + cgroup_clean counter_clean hv_clean firewire_clean bootconfig_clean spi_clean usb_clean virtio_clean mm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean pci_clean firmware_clean debugging_clean tracing_clean: 173 173 $(call descend,$(@:_clean=),clean) 174 174 175 175 libapi_clean: ··· 211 211 212 212 clean: acpi_clean cgroup_clean counter_clean cpupower_clean hv_clean firewire_clean \ 213 213 perf_clean selftests_clean turbostat_clean bootconfig_clean spi_clean usb_clean virtio_clean \ 214 - vm_clean bpf_clean iio_clean x86_energy_perf_policy_clean tmon_clean \ 214 + mm_clean bpf_clean iio_clean x86_energy_perf_policy_clean tmon_clean \ 215 215 freefall_clean build_clean libbpf_clean libsubcmd_clean \ 216 216 gpio_clean objtool_clean leds_clean wmi_clean pci_clean firmware_clean debugging_clean \ 217 217 intel-speed-select_clean tracing_clean thermal_clean thermometer_clean thermal-engine_clean
+1 -1
tools/arch/loongarch/include/uapi/asm/bitsperlong.h
··· 2 2 #ifndef __ASM_LOONGARCH_BITSPERLONG_H 3 3 #define __ASM_LOONGARCH_BITSPERLONG_H 4 4 5 - #define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8) 5 + #define __BITS_PER_LONG (__SIZEOF_LONG__ * 8) 6 6 7 7 #include <asm-generic/bitsperlong.h> 8 8
+1 -1
tools/mm/page_owner_sort.c
··· 857 857 if (cull & CULL_PID || filter & FILTER_PID) 858 858 fprintf(fout, ", PID %d", list[i].pid); 859 859 if (cull & CULL_TGID || filter & FILTER_TGID) 860 - fprintf(fout, ", TGID %d", list[i].pid); 860 + fprintf(fout, ", TGID %d", list[i].tgid); 861 861 if (cull & CULL_COMM || filter & FILTER_COMM) 862 862 fprintf(fout, ", task_comm_name: %s", list[i].comm); 863 863 if (cull & CULL_ALLOCATOR) {
+9 -3
usr/gen_init_cpio.c
··· 353 353 buf.st_mtime = 0xffffffff; 354 354 } 355 355 356 + if (buf.st_mtime < 0) { 357 + fprintf(stderr, "%s: Timestamp negative, clipping.\n", 358 + location); 359 + buf.st_mtime = 0; 360 + } 361 + 356 362 if (buf.st_size > 0xffffffff) { 357 363 fprintf(stderr, "%s: Size exceeds maximum cpio file size\n", 358 364 location); ··· 608 602 /* 609 603 * Timestamps after 2106-02-07 06:28:15 UTC have an ascii hex time_t 610 604 * representation that exceeds 8 chars and breaks the cpio header 611 - * specification. 605 + * specification. Negative timestamps similarly exceed 8 chars. 612 606 */ 613 - if (default_mtime > 0xffffffff) { 614 - fprintf(stderr, "ERROR: Timestamp too large for cpio format\n"); 607 + if (default_mtime > 0xffffffff || default_mtime < 0) { 608 + fprintf(stderr, "ERROR: Timestamp out of range for cpio format\n"); 615 609 exit(1); 616 610 } 617 611