Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ASoC/tda998x: Fix reporting of nonexistent capture streams

Merge series from Mark Brown <broonie@kernel.org>:

The recently added pcm-test selftest has pointed out that systems with
the tda998x driver end up advertising that they support capture when in
reality as far as I can see the tda998x devices are transmit only. The
DAIs registered through hdmi-codec are bidirectional, meaning that for
I2S systems when combined with a typical bidirectional CPU DAI the
overall capability of the PCM is bidirectional. In most cases the I2S
links will clock OK but no useful audio will be returned which isn't so
bad but we should still not advertise the useless capability, and some
systems may notice problems for example due to pinmux management.

This is happening due to the hdmi-codec helpers not providing any
mechanism for indicating unidirectional audio so add one and use it in
the tda998x driver. It is likely other hdmi-codec users are also
affected but I don't have those systems to hand.

Mark Brown (2):
ASoC: hdmi-codec: Allow playback and capture to be disabled
drm: tda998x: Don't advertise non-existent capture support

drivers/gpu/drm/i2c/tda998x_drv.c | 2 ++
include/sound/hdmi-codec.h | 4 ++++
sound/soc/codecs/hdmi-codec.c | 30 +++++++++++++++++++++++++-----
3 files changed, 31 insertions(+), 5 deletions(-)

base-commit: f0c4d9fc9cc9462659728d168387191387e903cc
--
2.30.2

+2170 -1063
+4
CREDITS
 S: San Jose, CA 95110
 S: USA
 
+N: Michal Marek
+E: michal.lkml@markovi.net
+D: Kbuild Maintainer 2009-2017
+
 N: Martin Mares
 E: mj@ucw.cz
 W: http://www.ucw.cz/~mj/
+4 -1
Documentation/devicetree/bindings/input/goodix,gt7375p.yaml
 properties:
   compatible:
-    items:
+    oneOf:
       - const: goodix,gt7375p
+      - items:
+          - const: goodix,gt7986u
+          - const: goodix,gt7375p
 
   reg:
     enum:
+2 -3
Documentation/driver-api/miscellaneous.rst
 16x50 UART Driver
 =================
 
-.. kernel-doc:: drivers/tty/serial/serial_core.c
-   :export:
-
 .. kernel-doc:: drivers/tty/serial/8250/8250_core.c
    :export:
+
+See serial/driver.rst for related APIs.
 
 Pulse-Width Modulation (PWM)
 ============================
+1 -1
Documentation/process/code-of-conduct-interpretation.rst
 uncertain how to handle situations that come up. It will not be
 considered a violation report unless you want it to be. If you are
 uncertain about approaching the TAB or any other maintainers, please
-reach out to our conflict mediator, Joanna Lee <joanna.lee@gesmer.com>.
+reach out to our conflict mediator, Joanna Lee <jlee@linuxfoundation.org>.
 
 In the end, "be kind to each other" is really what the end goal is for
 everybody. We know everyone is human and we all fail at times, but the
+38 -9
MAINTAINERS
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Supported
 W: http://www.hisilicon.com
-T: git git://github.com/hisilicon/linux-hisi.git
+T: git https://github.com/hisilicon/linux-hisi.git
 F: arch/arm/boot/dts/hi3*
 F: arch/arm/boot/dts/hip*
 F: arch/arm/boot/dts/hisi*
···
 L: ceph-devel@vger.kernel.org
 S: Supported
 W: http://ceph.com/
-T: git git://github.com/ceph/ceph-client.git
+T: git https://github.com/ceph/ceph-client.git
 F: include/linux/ceph/
 F: include/linux/crush/
 F: net/ceph/
···
 L: ceph-devel@vger.kernel.org
 S: Supported
 W: http://ceph.com/
-T: git git://github.com/ceph/ceph-client.git
+T: git https://github.com/ceph/ceph-client.git
 F: Documentation/filesystems/ceph.rst
 F: fs/ceph/
···
 M: Masahiro Yamada <masahiroy@kernel.org>
 L: linux-kbuild@vger.kernel.org
 S: Maintained
+Q: https://patchwork.kernel.org/project/linux-kbuild/list/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git kconfig
 F: Documentation/kbuild/kconfig*
 F: scripts/Kconfig.include
···
 KERNEL BUILD + files below scripts/ (unless maintained elsewhere)
 M: Masahiro Yamada <masahiroy@kernel.org>
-M: Michal Marek <michal.lkml@markovi.net>
+R: Nathan Chancellor <nathan@kernel.org>
 R: Nick Desaulniers <ndesaulniers@google.com>
+R: Nicolas Schier <nicolas@fjasle.eu>
 L: linux-kbuild@vger.kernel.org
 S: Maintained
+Q: https://patchwork.kernel.org/project/linux-kbuild/list/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git
 F: Documentation/kbuild/
 F: Makefile
···
 S: Supported
 F: drivers/misc/atmel-ssc.c
 F: include/linux/atmel-ssc.h
+
+MICROCHIP SOC DRIVERS
+M: Conor Dooley <conor@kernel.org>
+S: Supported
+T: git https://git.kernel.org/pub/scm/linux/kernel/git/conor/linux.git/
+F: drivers/soc/microchip/
 
 MICROCHIP USB251XB DRIVER
 M: Richard Leitner <richard.leitner@skidata.com>
···
 L: ceph-devel@vger.kernel.org
 S: Supported
 W: http://ceph.com/
-T: git git://github.com/ceph/ceph-client.git
+T: git https://github.com/ceph/ceph-client.git
 F: Documentation/ABI/testing/sysfs-bus-rbd
 F: drivers/block/rbd.c
 F: drivers/block/rbd_types.h
···
 N: riscv
 K: riscv
 
-RISC-V/MICROCHIP POLARFIRE SOC SUPPORT
+RISC-V MICROCHIP FPGA SUPPORT
 M: Conor Dooley <conor.dooley@microchip.com>
 M: Daire McNamara <daire.mcnamara@microchip.com>
 L: linux-riscv@lists.infradead.org
···
 F: arch/riscv/boot/dts/microchip/
 F: drivers/char/hw_random/mpfs-rng.c
 F: drivers/clk/microchip/clk-mpfs.c
-F: drivers/i2c/busses/i2c-microchip-core.c
+F: drivers/i2c/busses/i2c-microchip-corei2c.c
 F: drivers/mailbox/mailbox-mpfs.c
 F: drivers/pci/controller/pcie-microchip-host.c
 F: drivers/reset/reset-mpfs.c
 F: drivers/rtc/rtc-mpfs.c
-F: drivers/soc/microchip/
+F: drivers/soc/microchip/mpfs-sys-controller.c
 F: drivers/spi/spi-microchip-core-qspi.c
 F: drivers/spi/spi-microchip-core.c
 F: drivers/usb/musb/mpfs.c
 F: include/soc/microchip/mpfs.h
+
+RISC-V MISC SOC SUPPORT
+M: Conor Dooley <conor@kernel.org>
+L: linux-riscv@lists.infradead.org
+S: Maintained
+Q: https://patchwork.kernel.org/project/linux-riscv/list/
+T: git https://git.kernel.org/pub/scm/linux/kernel/git/conor/linux.git/
+F: Documentation/devicetree/bindings/riscv/
+F: arch/riscv/boot/dts/
 
 RNBD BLOCK DRIVERS
 M: Md. Haris Iqbal <haris.iqbal@ionos.com>
···
 M: Paul Walmsley <paul.walmsley@sifive.com>
 L: linux-riscv@lists.infradead.org
 S: Supported
-T: git https://github.com/sifive/riscv-linux.git
 N: sifive
 K: [^@]sifive
···
 S: Maintained
 F: Documentation/devicetree/bindings/dma/sifive,fu540-c000-pdma.yaml
 F: drivers/dma/sf-pdma/
+
+SIFIVE SOC DRIVERS
+M: Conor Dooley <conor@kernel.org>
+L: linux-riscv@lists.infradead.org
+S: Maintained
+T: git https://git.kernel.org/pub/scm/linux/kernel/git/conor/linux.git/
+F: drivers/soc/sifive/
 
 SILEAD TOUCHSCREEN DRIVER
 M: Hans de Goede <hdegoede@redhat.com>
···
 M: Ion Badulescu <ionut@badula.org>
 S: Odd Fixes
 F: drivers/net/ethernet/adaptec/starfire*
+
+STARFIVE DEVICETREES
+M: Emil Renner Berthing <kernel@esmil.dk>
+S: Maintained
+F: arch/riscv/boot/dts/starfive/
 
 STARFIVE JH7100 CLOCK DRIVERS
 M: Emil Renner Berthing <kernel@esmil.dk>
+1 -1
Makefile
 VERSION = 6
 PATCHLEVEL = 1
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc6
 NAME = Hurr durr I'ma ninja sloth
 
 # *DOCUMENTATION*
+2 -2
arch/arm/boot/dts/imx7s.dtsi
 		clocks = <&clks IMX7D_NAND_USDHC_BUS_RAWNAND_CLK>;
 	};
 
-	gpmi: nand-controller@33002000{
+	gpmi: nand-controller@33002000 {
 		compatible = "fsl,imx7d-gpmi-nand";
 		#address-cells = <1>;
-		#size-cells = <1>;
+		#size-cells = <0>;
 		reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
 		reg-names = "gpmi-nand", "bch";
 		interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+20
arch/arm/boot/dts/lan966x-pcb8291.dts
 		pins = "GPIO_35", "GPIO_36";
 		function = "can0_b";
 	};
+
+	sgpio_a_pins: sgpio-a-pins {
+		/* SCK, D0, D1, LD */
+		pins = "GPIO_32", "GPIO_33", "GPIO_34", "GPIO_35";
+		function = "sgpio_a";
+	};
 };
 
 &can0 {
···
 &serdes {
 	status = "okay";
 };
+
+&sgpio {
+	pinctrl-0 = <&sgpio_a_pins>;
+	pinctrl-names = "default";
+	microchip,sgpio-port-ranges = <0 3>, <8 11>;
+	status = "okay";
+
+	gpio@0 {
+		ngpios = <64>;
+	};
+	gpio@1 {
+		ngpios = <64>;
+	};
+};
 
 &switch {
+1 -1
arch/arm/boot/dts/sama7g5-pinfunc.h
 #define PIN_PB2__FLEXCOM6_IO0		PINMUX_PIN(PIN_PB2, 2, 1)
 #define PIN_PB2__ADTRG			PINMUX_PIN(PIN_PB2, 3, 1)
 #define PIN_PB2__A20			PINMUX_PIN(PIN_PB2, 4, 1)
-#define PIN_PB2__FLEXCOM11_IO0		PINMUX_PIN(PIN_PB2, 6, 3)
+#define PIN_PB2__FLEXCOM11_IO1		PINMUX_PIN(PIN_PB2, 6, 3)
 #define PIN_PB3				35
 #define PIN_PB3__GPIO			PINMUX_PIN(PIN_PB3, 0, 0)
 #define PIN_PB3__RF1			PINMUX_PIN(PIN_PB3, 1, 1)
+6 -1
arch/arm/mach-at91/pm_suspend.S
 	cmp	tmp1, #UDDRC_STAT_SELFREF_TYPE_SW
 	bne	sr_ena_2
 
-	/* Put DDR PHY's DLL in bypass mode for non-backup modes. */
+	/* Disable DX DLLs for non-backup modes. */
 	cmp	r7, #AT91_PM_BACKUP
 	beq	sr_ena_3
+
+	/* Do not soft reset the AC DLL. */
+	ldr	tmp1, [r3, DDR3PHY_ACDLLCR]
+	bic	tmp1, tmp1, DDR3PHY_ACDLLCR_DLLSRST
+	str	tmp1, [r3, DDR3PHY_ACDLLCR]
 
 	/* Disable DX DLLs. */
 	ldr	tmp1, [r3, #DDR3PHY_DX0DLLCR]
+26 -6
arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml-mba8mx.dts
 		off-on-delay-us = <12000>;
 	};
 
-	extcon_usbotg1: extcon-usbotg1 {
-		compatible = "linux,extcon-usb-gpio";
+	connector {
+		compatible = "gpio-usb-b-connector", "usb-b-connector";
+		type = "micro";
+		label = "X19";
 		pinctrl-names = "default";
-		pinctrl-0 = <&pinctrl_usb1_extcon>;
-		id-gpio = <&gpio1 10 GPIO_ACTIVE_HIGH>;
+		pinctrl-0 = <&pinctrl_usb1_connector>;
+		id-gpios = <&gpio1 10 GPIO_ACTIVE_HIGH>;
+
+		ports {
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			port@0 {
+				reg = <0>;
+				usb_dr_connector: endpoint {
+					remote-endpoint = <&usb1_drd_sw>;
+				};
+			};
+		};
 	};
 };
···
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_usbotg1>;
 	dr_mode = "otg";
-	extcon = <&extcon_usbotg1>;
 	srp-disable;
 	hnp-disable;
 	adp-disable;
 	power-active-high;
 	over-current-active-low;
+	usb-role-switch;
 	status = "okay";
+
+	port {
+		usb1_drd_sw: endpoint {
+			remote-endpoint = <&usb_dr_connector>;
+		};
+	};
 };
 
 &usbotg2 {
···
 		<MX8MM_IOMUXC_GPIO1_IO13_USB1_OTG_OC	0x84>;
 };
 
-pinctrl_usb1_extcon: usb1-extcongrp {
+pinctrl_usb1_connector: usb1-connectorgrp {
 	fsl,pins = <MX8MM_IOMUXC_GPIO1_IO10_GPIO1_IO10	0x1c0>;
 };
+2 -2
arch/arm64/boot/dts/freescale/imx8mm.dtsi
 		clocks = <&clk IMX8MM_CLK_NAND_USDHC_BUS_RAWNAND_CLK>;
 	};
 
-	gpmi: nand-controller@33002000{
+	gpmi: nand-controller@33002000 {
 		compatible = "fsl,imx8mm-gpmi-nand", "fsl,imx7d-gpmi-nand";
 		#address-cells = <1>;
-		#size-cells = <1>;
+		#size-cells = <0>;
 		reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
 		reg-names = "gpmi-nand", "bch";
 		interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+1 -1
arch/arm64/boot/dts/freescale/imx8mn.dtsi
 	gpmi: nand-controller@33002000 {
 		compatible = "fsl,imx8mn-gpmi-nand", "fsl,imx7d-gpmi-nand";
 		#address-cells = <1>;
-		#size-cells = <1>;
+		#size-cells = <0>;
 		reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
 		reg-names = "gpmi-nand", "bch";
 		interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
arch/arm64/boot/dts/freescale/imx93-pinfunc.h
+1 -1
arch/arm64/boot/dts/qcom/ipq8074.dtsi
 	apcs_glb: mailbox@b111000 {
 		compatible = "qcom,ipq8074-apcs-apps-global";
-		reg = <0x0b111000 0x6000>;
+		reg = <0x0b111000 0x1000>;
 
 		#clock-cells = <1>;
 		#mbox-cells = <1>;
+1 -1
arch/arm64/boot/dts/qcom/msm8996.dtsi
 	saw3: syscon@9a10000 {
-		compatible = "qcom,tcsr-msm8996", "syscon";
+		compatible = "syscon";
 		reg = <0x09a10000 0x1000>;
 	};
+12 -1
arch/arm64/boot/dts/qcom/sa8155p-adp.dts
 	regulator-always-on;
 	regulator-boot-on;
-	regulator-allow-set-load;
 
 	vin-supply = <&vreg_3p3>;
 };
···
 	regulator-max-microvolt = <880000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l7a_1p8: ldo7 {
···
 	regulator-max-microvolt = <2960000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l11a_0p8: ldo11 {
···
 	regulator-max-microvolt = <1200000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l7c_1p8: ldo7 {
···
 	regulator-max-microvolt = <1200000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l10c_3p3: ldo10 {
+12
arch/arm64/boot/dts/qcom/sa8295p-adp.dts
 	regulator-max-microvolt = <1200000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l4c: ldo4 {
···
 	regulator-max-microvolt = <1200000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l7c: ldo7 {
···
 	regulator-max-microvolt = <2504000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l17c: ldo17 {
···
 	regulator-max-microvolt = <2504000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 };
+2 -1
arch/arm64/boot/dts/qcom/sc7280.dtsi
 	lpass_audiocc: clock-controller@3300000 {
 		compatible = "qcom,sc7280-lpassaudiocc";
-		reg = <0 0x03300000 0 0x30000>;
+		reg = <0 0x03300000 0 0x30000>,
+		      <0 0x032a9000 0 0x1000>;
 		clocks = <&rpmhcc RPMH_CXO_CLK>,
 			 <&lpass_aon LPASS_AON_CC_MAIN_RCG_CLK_SRC>;
 		clock-names = "bi_tcxo", "lpass_aon_cc_main_rcg_clk_src";
+6
arch/arm64/boot/dts/qcom/sc8280xp-crd.dts
 	regulator-max-microvolt = <2504000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l13c: ldo13 {
···
 	regulator-max-microvolt = <1200000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l4d: ldo4 {
+8 -28
arch/arm64/boot/dts/qcom/sc8280xp.dtsi
 ufs_mem_phy: phy@1d87000 {
 	compatible = "qcom,sc8280xp-qmp-ufs-phy";
-	reg = <0 0x01d87000 0 0xe10>;
+	reg = <0 0x01d87000 0 0x1c8>;
 	#address-cells = <2>;
 	#size-cells = <2>;
 	ranges;
 	clock-names = "ref",
 		      "ref_aux";
-	clocks = <&rpmhcc RPMH_CXO_CLK>,
+	clocks = <&gcc GCC_UFS_REF_CLKREF_CLK>,
 		 <&gcc GCC_UFS_PHY_PHY_AUX_CLK>;
 
 	resets = <&ufs_mem_hc 0>;
···
 ufs_card_phy: phy@1da7000 {
 	compatible = "qcom,sc8280xp-qmp-ufs-phy";
-	reg = <0 0x01da7000 0 0xe10>;
+	reg = <0 0x01da7000 0 0x1c8>;
 	#address-cells = <2>;
 	#size-cells = <2>;
 	ranges;
 	clock-names = "ref",
 		      "ref_aux";
-	clocks = <&gcc GCC_UFS_1_CARD_CLKREF_CLK>,
+	clocks = <&gcc GCC_UFS_REF_CLKREF_CLK>,
 		 <&gcc GCC_UFS_CARD_PHY_AUX_CLK>;
 
 	resets = <&ufs_card_hc 0>;
···
 usb_0_ssphy: usb3-phy@88eb400 {
 	reg = <0 0x088eb400 0 0x100>,
 	      <0 0x088eb600 0 0x3ec>,
-	      <0 0x088ec400 0 0x1f0>,
+	      <0 0x088ec400 0 0x364>,
 	      <0 0x088eba00 0 0x100>,
 	      <0 0x088ebc00 0 0x3ec>,
-	      <0 0x088ec700 0 0x64>;
+	      <0 0x088ec200 0 0x18>;
 	#phy-cells = <0>;
 	#clock-cells = <0>;
 	clocks = <&gcc GCC_USB3_PRIM_PHY_PIPE_CLK>;
 	clock-names = "pipe0";
 	clock-output-names = "usb0_phy_pipe_clk_src";
-};
-
-usb_0_dpphy: dp-phy@88ed200 {
-	reg = <0 0x088ed200 0 0x200>,
-	      <0 0x088ed400 0 0x200>,
-	      <0 0x088eda00 0 0x200>,
-	      <0 0x088ea600 0 0x200>,
-	      <0 0x088ea800 0 0x200>;
-	#clock-cells = <1>;
-	#phy-cells = <0>;
 };
 };
···
 usb_1_ssphy: usb3-phy@8903400 {
 	reg = <0 0x08903400 0 0x100>,
-	      <0 0x08903c00 0 0x3ec>,
-	      <0 0x08904400 0 0x1f0>,
+	      <0 0x08903600 0 0x3ec>,
+	      <0 0x08904400 0 0x364>,
 	      <0 0x08903a00 0 0x100>,
 	      <0 0x08903c00 0 0x3ec>,
 	      <0 0x08904200 0 0x18>;
···
 	clocks = <&gcc GCC_USB3_SEC_PHY_PIPE_CLK>;
 	clock-names = "pipe0";
 	clock-output-names = "usb1_phy_pipe_clk_src";
-};
-
-usb_1_dpphy: dp-phy@8904200 {
-	reg = <0 0x08904200 0 0x200>,
-	      <0 0x08904400 0 0x200>,
-	      <0 0x08904a00 0 0x200>,
-	      <0 0x08904600 0 0x200>,
-	      <0 0x08904800 0 0x200>;
-	#clock-cells = <1>;
-	#phy-cells = <0>;
 };
 };
+6
arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi
 	regulator-max-microvolt = <2960000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l7c_3p0: ldo7 {
···
 	regulator-max-microvolt = <2960000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l10c_3p3: ldo10 {
+6
arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
 	regulator-max-microvolt = <2960000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l7c_2p85: ldo7 {
···
 	regulator-max-microvolt = <2960000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l10c_3p3: ldo10 {
+1
arch/arm64/boot/dts/qcom/sm8250.dtsi
 			exit-latency-us = <6562>;
 			min-residency-us = <9987>;
 			local-timer-stop;
+			status = "disabled";
 		};
 	};
 };
+12
arch/arm64/boot/dts/qcom/sm8350-hdk.dts
 	regulator-max-microvolt = <888000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l6b_1p2: ldo6 {
···
 	regulator-max-microvolt = <1208000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l7b_2p96: ldo7 {
···
 	regulator-max-microvolt = <2504000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
 
 vreg_l9b_1p2: ldo9 {
···
 	regulator-max-microvolt = <1200000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
 	regulator-allow-set-load;
+	regulator-allowed-modes =
+	    <RPMH_REGULATOR_MODE_LPM
+	     RPMH_REGULATOR_MODE_HPM>;
 };
+2 -2
arch/arm64/include/asm/pgtable.h
 static inline bool pmd_user_accessible_page(pmd_t pmd)
 {
-	return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
+	return pmd_leaf(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
 }
 
 static inline bool pud_user_accessible_page(pud_t pud)
 {
-	return pud_present(pud) && pud_user(pud);
+	return pud_leaf(pud) && pud_user(pud);
 }
 #endif
+1 -1
arch/arm64/kernel/entry-ftrace.S
 	ret
 SYM_FUNC_END(ftrace_stub)
 
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
 SYM_TYPED_FUNC_START(ftrace_stub_graph)
 	ret
 SYM_FUNC_END(ftrace_stub_graph)
 
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
 /*
  * void return_to_handler(void)
  *
+1 -1
arch/powerpc/kernel/vmlinux.lds.S
 #endif
 
 	.data.rel.ro : AT(ADDR(.data.rel.ro) - LOAD_OFFSET) {
-		*(.data.rel.ro*)
+		*(.data.rel.ro .data.rel.ro.*)
 	}
 
 	.branch_lt : AT(ADDR(.branch_lt) - LOAD_OFFSET) {
+10 -1
arch/s390/include/asm/processor.h
 /* Has task runtime instrumentation enabled ? */
 #define is_ri_task(tsk) (!!(tsk)->thread.ri_cb)
 
-register unsigned long current_stack_pointer asm("r15");
+/* avoid using global register due to gcc bug in versions < 8.4 */
+#define current_stack_pointer (__current_stack_pointer())
+
+static __always_inline unsigned long __current_stack_pointer(void)
+{
+	unsigned long sp;
+
+	asm volatile("lgr %0,15" : "=d" (sp));
+	return sp;
+}
 
 static __always_inline unsigned short stap(void)
 {
+2 -3
arch/x86/events/amd/core.c
 	pmu_enabled = cpuc->enabled;
 	cpuc->enabled = 0;
 
-	/* stop everything (includes BRS) */
-	amd_pmu_disable_all();
+	amd_brs_disable_all();
 
 	/* Drain BRS is in use (could be inactive) */
 	if (cpuc->lbr_users)
···
 	cpuc->enabled = pmu_enabled;
 	if (pmu_enabled)
-		amd_pmu_enable_all(0);
+		amd_brs_enable_all();
 
 	return amd_pmu_adjust_nmi_window(handled);
 }
+1
arch/x86/events/amd/uncore.c
 	hlist_for_each_entry_safe(uncore, n, &uncore_unused_list, node) {
 		hlist_del(&uncore->node);
+		kfree(uncore->events);
 		kfree(uncore);
 	}
 }
+9
arch/x86/events/intel/pt.c
 	if (1 << order != nr_pages)
 		goto out;
 
+	/*
+	 * Some processors cannot always support single range for more than
+	 * 4KB - refer errata TGL052, ADL037 and RPL017. Future processors might
+	 * also be affected, so for now rather than trying to keep track of
+	 * which ones, just disable it for all.
+	 */
+	if (nr_pages > 1)
+		goto out;
+
 	buf->single = true;
 	buf->nr_pages = nr_pages;
 	ret = 0;
+5 -3
arch/x86/include/asm/msr-index.h
 #define MSR_AMD64_CPUID_FN_1		0xc0011004
 #define MSR_AMD64_LS_CFG		0xc0011020
 #define MSR_AMD64_DC_CFG		0xc0011022
+
+#define MSR_AMD64_DE_CFG		0xc0011029
+#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT	 1
+#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE	BIT_ULL(MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT)
+
 #define MSR_AMD64_BU_CFG2		0xc001102a
 #define MSR_AMD64_IBSFETCHCTL		0xc0011030
 #define MSR_AMD64_IBSFETCHLINAD		0xc0011031
···
 #define FAM10H_MMIO_CONF_BASE_MASK	0xfffffffULL
 #define FAM10H_MMIO_CONF_BASE_SHIFT	20
 #define MSR_FAM10H_NODE_ID		0xc001100c
-#define MSR_F10H_DECFG			0xc0011029
-#define MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT	1
-#define MSR_F10H_DECFG_LFENCE_SERIALIZE	BIT_ULL(MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT)
 
 /* K8 MSRs */
 #define MSR_K8_TOP_MEM1			0xc001001a
+1 -1
arch/x86/include/asm/qspinlock_paravirt.h
  * rsi = lockval (second argument)
  * rdx = internal variable (set to 0)
  */
-asm    (".pushsection .spinlock.text;"
+asm    (".pushsection .spinlock.text, \"ax\";"
 	".globl " PV_UNLOCK ";"
 	".type " PV_UNLOCK ", @function;"
 	".align 4,0x90;"
+2 -4
arch/x86/kernel/cpu/amd.c
 		set_cpu_bug(c, X86_BUG_AMD_TLB_MMATCH);
 }
 
-#define MSR_AMD64_DE_CFG	0xC0011029
-
 static void init_amd_ln(struct cpuinfo_x86 *c)
 {
 	/*
···
 		 * msr_set_bit() uses the safe accessors, too, even if the MSR
 		 * is not present.
 		 */
-		msr_set_bit(MSR_F10H_DECFG,
-			    MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);
+		msr_set_bit(MSR_AMD64_DE_CFG,
+			    MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT);
 
 		/* A serializing LFENCE stops RDTSC speculation */
 		set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
+2 -2
arch/x86/kernel/cpu/hygon.c
 		 * msr_set_bit() uses the safe accessors, too, even if the MSR
 		 * is not present.
 		 */
-		msr_set_bit(MSR_F10H_DECFG,
-			    MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);
+		msr_set_bit(MSR_AMD64_DE_CFG,
+			    MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT);
 
 		/* A serializing LFENCE stops RDTSC speculation */
 		set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
+3
arch/x86/kernel/cpu/sgx/ioctl.c
 	if (!length || !IS_ALIGNED(length, PAGE_SIZE))
 		return -EINVAL;
 
+	if (offset + length < offset)
+		return -EINVAL;
+
 	if (offset + length - PAGE_SIZE >= encl->size)
 		return -EINVAL;
+1 -1
arch/x86/kernel/fpu/core.c
 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
 		fpregs_restore_userregs();
 	save_fpregs_to_fpstate(dst_fpu);
+	fpregs_unlock();
 	if (!(clone_flags & CLONE_THREAD))
 		fpu_inherit_perms(dst_fpu);
-	fpregs_unlock();
 
 	/*
 	 * Children never inherit PASID state.
+5 -5
arch/x86/kvm/svm/svm.c
 	msr->data = 0;
 
 	switch (msr->index) {
-	case MSR_F10H_DECFG:
-		if (boot_cpu_has(X86_FEATURE_LFENCE_RDTSC))
-			msr->data |= MSR_F10H_DECFG_LFENCE_SERIALIZE;
+	case MSR_AMD64_DE_CFG:
+		if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
+			msr->data |= MSR_AMD64_DE_CFG_LFENCE_SERIALIZE;
 		break;
 	case MSR_IA32_PERF_CAPABILITIES:
 		return 0;
···
 			msr_info->data = 0x1E;
 		}
 		break;
-	case MSR_F10H_DECFG:
+	case MSR_AMD64_DE_CFG:
 		msr_info->data = svm->msr_decfg;
 		break;
 	default:
···
 	case MSR_VM_IGNNE:
 		vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
 		break;
-	case MSR_F10H_DECFG: {
+	case MSR_AMD64_DE_CFG: {
 		struct kvm_msr_entry msr_entry;
 
 		msr_entry.index = msr->index;
+1 -1
arch/x86/kvm/x86.c
 	MSR_IA32_VMX_EPT_VPID_CAP,
 	MSR_IA32_VMX_VMFUNC,
 
-	MSR_F10H_DECFG,
+	MSR_AMD64_DE_CFG,
 	MSR_IA32_UCODE_REV,
 	MSR_IA32_ARCH_CAPABILITIES,
 	MSR_IA32_PERF_CAPABILITIES,
-13
arch/x86/net/bpf_jit_comp.c
 #include <linux/bpf.h>
 #include <linux/memory.h>
 #include <linux/sort.h>
-#include <linux/init.h>
 #include <asm/extable.h>
 #include <asm/set_memory.h>
 #include <asm/nospec-branch.h>
···
 out:
 	mutex_unlock(&text_mutex);
 	return ret;
-}
-
-int __init bpf_arch_init_dispatcher_early(void *ip)
-{
-	const u8 *nop_insn = x86_nops[5];
-
-	if (is_endbr(*(u32 *)ip))
-		ip += ENDBR_INSN_SIZE;
-
-	if (memcmp(ip, nop_insn, X86_PATCH_SIZE))
-		text_poke_early(ip, nop_insn, X86_PATCH_SIZE);
-	return 0;
 }
 
 int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
+1
arch/x86/power/cpu.c
 		MSR_TSX_FORCE_ABORT,
 		MSR_IA32_MCU_OPT_CTRL,
 		MSR_AMD64_LS_CFG,
+		MSR_AMD64_DE_CFG,
 	};
 
 	msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
+2 -1
arch/x86/xen/enlighten_pv.c
 #include <linux/start_kernel.h>
 #include <linux/sched.h>
 #include <linux/kprobes.h>
+#include <linux/kstrtox.h>
 #include <linux/memblock.h>
 #include <linux/export.h>
 #include <linux/mm.h>
···
 static int __init parse_xen_msr_safe(char *str)
 {
 	if (str)
-		return strtobool(str, &xen_msr_safe);
+		return kstrtobool(str, &xen_msr_safe);
 	return -EINVAL;
 }
 early_param("xen_msr_safe", parse_xen_msr_safe);
+2 -1
arch/x86/xen/setup.c
 #include <linux/init.h>
 #include <linux/sched.h>
+#include <linux/kstrtox.h>
 #include <linux/mm.h>
 #include <linux/pm.h>
 #include <linux/memblock.h>
···
 	arg = strstr(xen_start_info->cmd_line, "xen_512gb_limit=");
 	if (!arg)
 		val = true;
-	else if (strtobool(arg + strlen("xen_512gb_limit="), &val))
+	else if (kstrtobool(arg + strlen("xen_512gb_limit="), &val))
 		return;
 
 	xen_512gb_limit = val;
+1 -1
block/blk-cgroup.c
 	 * parent so that offline always happens towards the root.
 	 */
 	if (parent)
-		blkcg_pin_online(css);
+		blkcg_pin_online(&parent->css);
 	return 0;
 }
-1
block/blk-core.c
 			PERCPU_REF_INIT_ATOMIC, GFP_KERNEL))
 		goto fail_stats;
 
-	blk_queue_dma_alignment(q, 511);
 	blk_set_default_limits(&q->limits);
 	q->nr_requests = BLKDEV_DEFAULT_RQ;
+5 -4
block/blk-settings.c
··· 57 57 lim->misaligned = 0; 58 58 lim->zoned = BLK_ZONED_NONE; 59 59 lim->zone_write_granularity = 0; 60 + lim->dma_alignment = 511; 60 61 } 61 - EXPORT_SYMBOL(blk_set_default_limits); 62 62 63 63 /** 64 64 * blk_set_stacking_limits - set default limits for stacking devices ··· 600 600 601 601 t->io_min = max(t->io_min, b->io_min); 602 602 t->io_opt = lcm_not_zero(t->io_opt, b->io_opt); 603 + t->dma_alignment = max(t->dma_alignment, b->dma_alignment); 603 604 604 605 /* Set non-power-of-2 compatible chunk_sectors boundary */ 605 606 if (b->chunk_sectors) ··· 774 773 **/ 775 774 void blk_queue_dma_alignment(struct request_queue *q, int mask) 776 775 { 777 - q->dma_alignment = mask; 776 + q->limits.dma_alignment = mask; 778 777 } 779 778 EXPORT_SYMBOL(blk_queue_dma_alignment); 780 779 ··· 796 795 { 797 796 BUG_ON(mask > PAGE_SIZE); 798 797 799 - if (mask > q->dma_alignment) 800 - q->dma_alignment = mask; 798 + if (mask > q->limits.dma_alignment) 799 + q->limits.dma_alignment = mask; 801 800 } 802 801 EXPORT_SYMBOL(blk_queue_update_dma_alignment); 803 802
+1
block/blk.h
··· 331 331 bool blk_rq_merge_ok(struct request *rq, struct bio *bio); 332 332 enum elv_merge blk_try_merge(struct request *rq, struct bio *bio); 333 333 334 + void blk_set_default_limits(struct queue_limits *lim); 334 335 int blk_dev_init(void); 335 336 336 337 /*
+1 -1
drivers/accessibility/speakup/main.c
··· 1778 1778 { 1779 1779 unsigned long flags; 1780 1780 1781 - if (!speakup_console[vc->vc_num] || spk_parked) 1781 + if (!speakup_console[vc->vc_num] || spk_parked || !synth) 1782 1782 return; 1783 1783 if (!spin_trylock_irqsave(&speakup_info.spinlock, flags)) 1784 1784 /* Speakup output, discard */
+1 -1
drivers/accessibility/speakup/utils.h
··· 54 54 55 55 static inline struct st_key *hash_name(char *name) 56 56 { 57 - u_char *pn = (u_char *)name; 57 + unsigned char *pn = (unsigned char *)name; 58 58 int hash = 0; 59 59 60 60 while (*pn) {
+7
drivers/android/binder_alloc.c
··· 739 739 const char *failure_string; 740 740 struct binder_buffer *buffer; 741 741 742 + if (unlikely(vma->vm_mm != alloc->mm)) { 743 + ret = -EINVAL; 744 + failure_string = "invalid vma->vm_mm"; 745 + goto err_invalid_mm; 746 + } 747 + 742 748 mutex_lock(&binder_alloc_mmap_lock); 743 749 if (alloc->buffer_size) { 744 750 ret = -EBUSY; ··· 791 785 alloc->buffer_size = 0; 792 786 err_already_mapped: 793 787 mutex_unlock(&binder_alloc_mmap_lock); 788 + err_invalid_mm: 794 789 binder_alloc_debug(BINDER_DEBUG_USER_ERROR, 795 790 "%s: %d %lx-%lx %s failed %d\n", __func__, 796 791 alloc->pid, vma->vm_start, vma->vm_end,
+2 -2
drivers/block/drbd/drbd_main.c
··· 2672 2672 enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsigned int minor) 2673 2673 { 2674 2674 struct drbd_resource *resource = adm_ctx->resource; 2675 - struct drbd_connection *connection; 2675 + struct drbd_connection *connection, *n; 2676 2676 struct drbd_device *device; 2677 2677 struct drbd_peer_device *peer_device, *tmp_peer_device; 2678 2678 struct gendisk *disk; ··· 2789 2789 return NO_ERROR; 2790 2790 2791 2791 out_idr_remove_from_resource: 2792 - for_each_connection(connection, resource) { 2792 + for_each_connection_safe(connection, n, resource) { 2793 2793 peer_device = idr_remove(&connection->peer_devices, vnr); 2794 2794 if (peer_device) 2795 2795 kref_put(&connection->kref, drbd_destroy_connection);
+7 -1
drivers/extcon/extcon-usbc-tusb320.c
··· 327 327 return IRQ_NONE; 328 328 329 329 tusb320_extcon_irq_handler(priv, reg); 330 - tusb320_typec_irq_handler(priv, reg); 330 + 331 + /* 332 + * Type-C support is optional. Only call the Type-C handler if a 333 + * port had been registered previously. 334 + */ 335 + if (priv->port) 336 + tusb320_typec_irq_handler(priv, reg); 331 337 332 338 regmap_write(priv->regmap, TUSB320_REG9, reg); 333 339
+29 -8
drivers/firmware/google/coreboot_table.c
··· 149 149 if (!ptr) 150 150 return -ENOMEM; 151 151 152 - ret = bus_register(&coreboot_bus_type); 153 - if (!ret) { 154 - ret = coreboot_table_populate(dev, ptr); 155 - if (ret) 156 - bus_unregister(&coreboot_bus_type); 157 - } 152 + ret = coreboot_table_populate(dev, ptr); 153 + 158 154 memunmap(ptr); 159 155 160 156 return ret; ··· 165 169 static int coreboot_table_remove(struct platform_device *pdev) 166 170 { 167 171 bus_for_each_dev(&coreboot_bus_type, NULL, NULL, __cb_dev_unregister); 168 - bus_unregister(&coreboot_bus_type); 169 172 return 0; 170 173 } 171 174 ··· 194 199 .of_match_table = of_match_ptr(coreboot_of_match), 195 200 }, 196 201 }; 197 - module_platform_driver(coreboot_table_driver); 202 + 203 + static int __init coreboot_table_driver_init(void) 204 + { 205 + int ret; 206 + 207 + ret = bus_register(&coreboot_bus_type); 208 + if (ret) 209 + return ret; 210 + 211 + ret = platform_driver_register(&coreboot_table_driver); 212 + if (ret) { 213 + bus_unregister(&coreboot_bus_type); 214 + return ret; 215 + } 216 + 217 + return 0; 218 + } 219 + 220 + static void __exit coreboot_table_driver_exit(void) 221 + { 222 + platform_driver_unregister(&coreboot_table_driver); 223 + bus_unregister(&coreboot_bus_type); 224 + } 225 + 226 + module_init(coreboot_table_driver_init); 227 + module_exit(coreboot_table_driver_exit); 228 + 198 229 MODULE_AUTHOR("Google, Inc."); 199 230 MODULE_LICENSE("GPL");
+1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 1293 1293 u32 reg, u32 v); 1294 1294 struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev, 1295 1295 struct dma_fence *gang); 1296 + bool amdgpu_device_has_display_hardware(struct amdgpu_device *adev); 1296 1297 1297 1298 /* atpx handler */ 1298 1299 #if defined(CONFIG_VGA_SWITCHEROO)
+1 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 171 171 (kfd_mem_limit.ttm_mem_used + ttm_mem_needed > 172 172 kfd_mem_limit.max_ttm_mem_limit) || 173 173 (adev && adev->kfd.vram_used + vram_needed > 174 - adev->gmc.real_vram_size - 175 - atomic64_read(&adev->vram_pin_size) - 176 - reserved_for_pt)) { 174 + adev->gmc.real_vram_size - reserved_for_pt)) { 177 175 ret = -ENOMEM; 178 176 goto release; 179 177 }
+20 -9
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 109 109 return r; 110 110 111 111 ++(num_ibs[r]); 112 + p->gang_leader_idx = r; 112 113 return 0; 113 114 } 114 115 ··· 288 287 } 289 288 } 290 289 291 - if (!p->gang_size) 292 - return -EINVAL; 290 + if (!p->gang_size) { 291 + ret = -EINVAL; 292 + goto free_partial_kdata; 293 + } 293 294 294 295 for (i = 0; i < p->gang_size; ++i) { 295 296 ret = amdgpu_job_alloc(p->adev, num_ibs[i], &p->jobs[i], vm); ··· 303 300 if (ret) 304 301 goto free_all_kdata; 305 302 } 306 - p->gang_leader = p->jobs[p->gang_size - 1]; 303 + p->gang_leader = p->jobs[p->gang_leader_idx]; 307 304 308 305 if (p->ctx->vram_lost_counter != p->gang_leader->vram_lost_counter) { 309 306 ret = -ECANCELED; ··· 1198 1195 return r; 1199 1196 } 1200 1197 1201 - for (i = 0; i < p->gang_size - 1; ++i) { 1198 + for (i = 0; i < p->gang_size; ++i) { 1199 + if (p->jobs[i] == leader) 1200 + continue; 1201 + 1202 1202 r = amdgpu_sync_clone(&leader->sync, &p->jobs[i]->sync); 1203 1203 if (r) 1204 1204 return r; 1205 1205 } 1206 1206 1207 - r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entities[p->gang_size - 1]); 1207 + r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entities[p->gang_leader_idx]); 1208 1208 if (r && r != -ERESTARTSYS) 1209 1209 DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n"); 1210 - 1211 1210 return r; 1212 1211 } 1213 1212 ··· 1243 1238 for (i = 0; i < p->gang_size; ++i) 1244 1239 drm_sched_job_arm(&p->jobs[i]->base); 1245 1240 1246 - for (i = 0; i < (p->gang_size - 1); ++i) { 1241 + for (i = 0; i < p->gang_size; ++i) { 1247 1242 struct dma_fence *fence; 1243 + 1244 + if (p->jobs[i] == leader) 1245 + continue; 1248 1246 1249 1247 fence = &p->jobs[i]->base.s_fence->scheduled; 1250 1248 r = amdgpu_sync_fence(&leader->sync, fence); ··· 1284 1276 list_for_each_entry(e, &p->validated, tv.head) { 1285 1277 1286 1278 /* Everybody except for the gang leader uses READ */ 1287 - for (i = 0; i < (p->gang_size - 1); ++i) { 1279 + for (i = 0; i < p->gang_size; ++i) { 1280 + if (p->jobs[i] == leader) 1281 + continue; 1282 + 1288 1283 dma_resv_add_fence(e->tv.bo->base.resv, 1289 1284 &p->jobs[i]->base.s_fence->finished, 1290 1285 DMA_RESV_USAGE_READ); ··· 1297 1286 e->tv.num_shared = 0; 1298 1287 } 1299 1288 1300 - seq = amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_size - 1], 1289 + seq = amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_leader_idx], 1301 1290 p->fence); 1302 1291 amdgpu_cs_post_dependencies(p); 1303 1292
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
··· 54 54 55 55 /* scheduler job objects */ 56 56 unsigned int gang_size; 57 + unsigned int gang_leader_idx; 57 58 struct drm_sched_entity *entities[AMDGPU_CS_GANG_SIZE]; 58 59 struct amdgpu_job *jobs[AMDGPU_CS_GANG_SIZE]; 59 60 struct amdgpu_job *gang_leader;
+41
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 6044 6044 dma_fence_put(old); 6045 6045 return NULL; 6046 6046 } 6047 + 6048 + bool amdgpu_device_has_display_hardware(struct amdgpu_device *adev) 6049 + { 6050 + switch (adev->asic_type) { 6051 + #ifdef CONFIG_DRM_AMDGPU_SI 6052 + case CHIP_HAINAN: 6053 + #endif 6054 + case CHIP_TOPAZ: 6055 + /* chips with no display hardware */ 6056 + return false; 6057 + #ifdef CONFIG_DRM_AMDGPU_SI 6058 + case CHIP_TAHITI: 6059 + case CHIP_PITCAIRN: 6060 + case CHIP_VERDE: 6061 + case CHIP_OLAND: 6062 + #endif 6063 + #ifdef CONFIG_DRM_AMDGPU_CIK 6064 + case CHIP_BONAIRE: 6065 + case CHIP_HAWAII: 6066 + case CHIP_KAVERI: 6067 + case CHIP_KABINI: 6068 + case CHIP_MULLINS: 6069 + #endif 6070 + case CHIP_TONGA: 6071 + case CHIP_FIJI: 6072 + case CHIP_POLARIS10: 6073 + case CHIP_POLARIS11: 6074 + case CHIP_POLARIS12: 6075 + case CHIP_VEGAM: 6076 + case CHIP_CARRIZO: 6077 + case CHIP_STONEY: 6078 + /* chips with display hardware */ 6079 + return true; 6080 + default: 6081 + /* IP discovery */ 6082 + if (!adev->ip_versions[DCE_HWIP][0] || 6083 + (adev->harvest_ip_mask & AMD_HARVEST_IP_DMU_MASK)) 6084 + return false; 6085 + return true; 6086 + } 6087 + }
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
··· 656 656 } 657 657 658 658 if (amdgpu_sriov_vf(adev) || 659 - !amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_DCE)) { 659 + !amdgpu_device_has_display_hardware(adev)) { 660 660 size = 0; 661 661 } else { 662 662 size = amdgpu_gmc_get_vbios_fb_size(adev);
+1
drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
··· 45 45 MODULE_FIRMWARE("amdgpu/psp_13_0_7_sos.bin"); 46 46 MODULE_FIRMWARE("amdgpu/psp_13_0_7_ta.bin"); 47 47 MODULE_FIRMWARE("amdgpu/psp_13_0_10_sos.bin"); 48 + MODULE_FIRMWARE("amdgpu/psp_13_0_10_ta.bin"); 48 49 49 50 /* For large FW files the time to complete can be very long */ 50 51 #define USBC_PD_POLLING_LIMIT_S 240
+26 -6
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 147 147 /* Number of bytes in PSP footer for firmware. */ 148 148 #define PSP_FOOTER_BYTES 0x100 149 149 150 + /* 151 + * DMUB Async to Sync Mechanism Status 152 + */ 153 + #define DMUB_ASYNC_TO_SYNC_ACCESS_FAIL 1 154 + #define DMUB_ASYNC_TO_SYNC_ACCESS_TIMEOUT 2 155 + #define DMUB_ASYNC_TO_SYNC_ACCESS_SUCCESS 3 156 + #define DMUB_ASYNC_TO_SYNC_ACCESS_INVALID 4 157 + 150 158 /** 151 159 * DOC: overview 152 160 * ··· 1645 1637 } 1646 1638 } 1647 1639 1648 - if (amdgpu_dm_initialize_drm_device(adev)) { 1649 - DRM_ERROR( 1650 - "amdgpu: failed to initialize sw for display support.\n"); 1651 - goto error; 1652 - } 1653 - 1654 1640 /* Enable outbox notification only after IRQ handlers are registered and DMUB is alive. 1655 1641 * It is expected that DMUB will resend any pending notifications at this point, for 1656 1642 * example HPD from DPIA. 1657 1643 */ 1658 1644 if (dc_is_dmub_outbox_supported(adev->dm.dc)) 1659 1645 dc_enable_dmub_outbox(adev->dm.dc); 1646 + 1647 + if (amdgpu_dm_initialize_drm_device(adev)) { 1648 + DRM_ERROR( 1649 + "amdgpu: failed to initialize sw for display support.\n"); 1650 + goto error; 1651 + } 1660 1652 1661 1653 /* create fake encoders for MST */ 1662 1654 dm_dp_create_fake_mst_encoders(adev); ··· 10117 10109 *operation_result = AUX_RET_ERROR_TIMEOUT; 10118 10110 } else if (status_type == DMUB_ASYNC_TO_SYNC_ACCESS_FAIL) { 10119 10111 *operation_result = AUX_RET_ERROR_ENGINE_ACQUIRE; 10112 + } else if (status_type == DMUB_ASYNC_TO_SYNC_ACCESS_INVALID) { 10113 + *operation_result = AUX_RET_ERROR_INVALID_REPLY; 10120 10114 } else { 10121 10115 *operation_result = AUX_RET_ERROR_UNKNOWN; 10122 10116 } ··· 10166 10156 payload->reply[0] = adev->dm.dmub_notify->aux_reply.command; 10167 10157 if (!payload->write && adev->dm.dmub_notify->aux_reply.length && 10168 10158 payload->reply[0] == AUX_TRANSACTION_REPLY_AUX_ACK) { 10159 + 10160 + if (payload->length != adev->dm.dmub_notify->aux_reply.length) { 10161 + DRM_WARN("invalid read from DPIA AUX %x(%d) got length %d!\n", 10162 + payload->address, payload->length, 10163 + adev->dm.dmub_notify->aux_reply.length); 10164 + return amdgpu_dm_set_dmub_async_sync_status(is_cmd_aux, ctx, 10165 + DMUB_ASYNC_TO_SYNC_ACCESS_INVALID, 10166 + (uint32_t *)operation_result); 10167 + } 10168 + 10169 10169 memcpy(payload->data, adev->dm.dmub_notify->aux_reply.data, 10170 10170 adev->dm.dmub_notify->aux_reply.length); 10171 10171 }
-6
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
··· 51 51 #define AMDGPU_DMUB_NOTIFICATION_MAX 5 52 52 53 53 /* 54 - * DMUB Async to Sync Mechanism Status 55 - */ 56 - #define DMUB_ASYNC_TO_SYNC_ACCESS_FAIL 1 57 - #define DMUB_ASYNC_TO_SYNC_ACCESS_TIMEOUT 2 58 - #define DMUB_ASYNC_TO_SYNC_ACCESS_SUCCESS 3 59 - /* 60 54 #include "include/amdgpu_dal_power_if.h" 61 55 #include "amdgpu_dm_irq.h" 62 56 */
+8 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 412 412 { 413 413 struct amdgpu_crtc *acrtc = NULL; 414 414 struct drm_plane *cursor_plane; 415 - 415 + bool is_dcn; 416 416 int res = -ENOMEM; 417 417 418 418 cursor_plane = kzalloc(sizeof(*cursor_plane), GFP_KERNEL); ··· 450 450 acrtc->otg_inst = -1; 451 451 452 452 dm->adev->mode_info.crtcs[crtc_index] = acrtc; 453 - drm_crtc_enable_color_mgmt(&acrtc->base, MAX_COLOR_LUT_ENTRIES, 453 + 454 + /* Don't enable DRM CRTC degamma property for DCE since it doesn't 455 + * support programmable degamma anywhere. 456 + */ 457 + is_dcn = dm->adev->dm.dc->caps.color.dpp.dcn_arch; 458 + drm_crtc_enable_color_mgmt(&acrtc->base, is_dcn ? MAX_COLOR_LUT_ENTRIES : 0, 454 459 true, MAX_COLOR_LUT_ENTRIES); 460 + 455 461 drm_mode_crtc_set_gamma_size(&acrtc->base, MAX_COLOR_LEGACY_LUT_ENTRIES); 456 462 457 463 return 0;
+30
drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
··· 2393 2393 return result; 2394 2394 } 2395 2395 2396 + static enum bp_result get_vram_info_v30( 2397 + struct bios_parser *bp, 2398 + struct dc_vram_info *info) 2399 + { 2400 + struct atom_vram_info_header_v3_0 *info_v30; 2401 + enum bp_result result = BP_RESULT_OK; 2402 + 2403 + info_v30 = GET_IMAGE(struct atom_vram_info_header_v3_0, 2404 + DATA_TABLES(vram_info)); 2405 + 2406 + if (info_v30 == NULL) 2407 + return BP_RESULT_BADBIOSTABLE; 2408 + 2409 + info->num_chans = info_v30->channel_num; 2410 + info->dram_channel_width_bytes = (1 << info_v30->channel_width) / 8; 2411 + 2412 + return result; 2413 + } 2414 + 2415 + 2396 2416 /* 2397 2417 * get_integrated_info_v11 2398 2418 * ··· 3074 3054 break; 3075 3055 case 5: 3076 3056 result = get_vram_info_v25(bp, info); 3057 + break; 3058 + default: 3059 + break; 3060 + } 3061 + break; 3062 + 3063 + case 3: 3064 + switch (revision.minor) { 3065 + case 0: 3066 + result = get_vram_info_v30(bp, info); 3077 3067 break; 3078 3068 default: 3079 3069 break;
+1
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubp.c
··· 87 87 .hubp_init = hubp3_init, 88 88 .set_unbounded_requesting = hubp31_set_unbounded_requesting, 89 89 .hubp_soft_reset = hubp31_soft_reset, 90 + .hubp_set_flip_int = hubp1_set_flip_int, 90 91 .hubp_in_blank = hubp1_in_blank, 91 92 .program_extended_blank = hubp31_program_extended_blank, 92 93 };
+1 -1
drivers/gpu/drm/amd/display/dc/dcn314/dcn314_optc.c
··· 237 237 .clear_optc_underflow = optc1_clear_optc_underflow, 238 238 .setup_global_swap_lock = NULL, 239 239 .get_crc = optc1_get_crc, 240 - .configure_crc = optc2_configure_crc, 240 + .configure_crc = optc1_configure_crc, 241 241 .set_dsc_config = optc3_set_dsc_config, 242 242 .get_dsc_status = optc2_get_dsc_status, 243 243 .set_dwb_source = NULL,
+5 -9
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
··· 283 283 using the max for calculation */ 284 284 285 285 if (hubp->curs_attr.width > 0) { 286 - // Round cursor width to next multiple of 64 287 - cursor_size = (((hubp->curs_attr.width + 63) / 64) * 64) * hubp->curs_attr.height; 286 + cursor_size = hubp->curs_attr.pitch * hubp->curs_attr.height; 288 287 289 288 switch (pipe->stream->cursor_attributes.color_format) { 290 289 case CURSOR_MODE_MONO: ··· 308 309 cursor_size > 16384) { 309 310 /* cursor_num_mblk = CEILING(num_cursors*cursor_width*cursor_width*cursor_Bpe/mblk_bytes, 1) 310 311 */ 311 - cache_lines_used += (((hubp->curs_attr.width * hubp->curs_attr.height * cursor_bpp + 312 - DCN3_2_MALL_MBLK_SIZE_BYTES - 1) / DCN3_2_MALL_MBLK_SIZE_BYTES) * 313 - DCN3_2_MALL_MBLK_SIZE_BYTES) / dc->caps.cache_line_size + 2; 312 + cache_lines_used += (((cursor_size + DCN3_2_MALL_MBLK_SIZE_BYTES - 1) / 313 + DCN3_2_MALL_MBLK_SIZE_BYTES) * DCN3_2_MALL_MBLK_SIZE_BYTES) / 314 + dc->caps.cache_line_size + 2; 314 315 } 315 316 break; 316 317 } ··· 726 727 struct hubp *hubp = pipe->plane_res.hubp; 727 728 728 729 if (pipe->stream && pipe->plane_state && hubp && hubp->funcs->hubp_update_mall_sel) { 729 - //Round cursor width up to next multiple of 64 730 - int cursor_width = ((hubp->curs_attr.width + 63) / 64) * 64; 731 - int cursor_height = hubp->curs_attr.height; 732 - int cursor_size = cursor_width * cursor_height; 730 + int cursor_size = hubp->curs_attr.pitch * hubp->curs_attr.height; 733 731 734 732 switch (hubp->curs_attr.color_format) { 735 733 case CURSOR_MODE_MONO:
+15 -1
drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
··· 1803 1803 */ 1804 1804 context->bw_ctx.dml.soc.dram_clock_change_latency_us = 1805 1805 dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us; 1806 + /* For DCN32/321 need to validate with fclk pstate change latency equal to dummy so 1807 + * prefetch is scheduled correctly to account for dummy pstate. 1808 + */ 1809 + if (dummy_latency_index == 0) 1810 + context->bw_ctx.dml.soc.fclk_change_latency_us = 1811 + dc->clk_mgr->bw_params->dummy_pstate_table[dummy_latency_index].dummy_pstate_latency_us; 1806 1812 dcn32_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, false); 1807 1813 maxMpcComb = context->bw_ctx.dml.vba.maxMpcComb; 1808 1814 dcfclk_from_fw_based_mclk_switching = context->bw_ctx.dml.vba.DCFCLKState[vlevel][context->bw_ctx.dml.vba.maxMpcComb]; ··· 1996 1990 1997 1991 context->perf_params.stutter_period_us = context->bw_ctx.dml.vba.StutterPeriod; 1998 1992 1993 + if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching && dummy_latency_index == 0) 1994 + context->bw_ctx.dml.soc.fclk_change_latency_us = 1995 + dc->clk_mgr->bw_params->dummy_pstate_table[dummy_latency_index].dummy_pstate_latency_us; 1996 + 1999 1997 dcn32_calculate_dlg_params(dc, context, pipes, pipe_cnt, vlevel); 2000 1998 2001 1999 if (!pstate_en) ··· 2007 1997 context->bw_ctx.dml.soc.dram_clock_change_latency_us = 2008 1998 dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us; 2009 1999 2010 - if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching) 2000 + if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching) { 2011 2001 dcn30_setup_mclk_switch_using_fw_based_vblank_stretch(dc, context); 2002 + if (dummy_latency_index == 0) 2003 + context->bw_ctx.dml.soc.fclk_change_latency_us = 2004 + dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.fclk_change_latency_us; 2005 + } 2012 2006 } 2013 2007 2014 2008 static void dcn32_get_optimal_dcfclk_fclk_for_uclk(unsigned int uclk_mts,
+2
drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
··· 718 718 719 719 do { 720 720 MaxTotalRDBandwidth = 0; 721 + DestinationLineTimesForPrefetchLessThan2 = false; 722 + VRatioPrefetchMoreThanMax = false; 721 723 #ifdef __DML_VBA_DEBUG__ 722 724 dml_print("DML::%s: Start loop: VStartup = %d\n", __func__, mode_lib->vba.VStartupLines); 723 725 #endif
+2
drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h
··· 46 46 // Prefetch schedule max vratio 47 47 #define __DML_MAX_VRATIO_PRE__ 4.0 48 48 49 + #define __DML_VBA_MAX_DST_Y_PRE__ 63.75 50 + 49 51 #define BPP_INVALID 0 50 52 #define BPP_BLENDED_PIPE 0xffffffff 51 53
+3 -4
drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c
··· 3475 3475 double min_Lsw; 3476 3476 double Tsw_est1 = 0; 3477 3477 double Tsw_est3 = 0; 3478 - double TPreMargin = 0; 3479 3478 3480 3479 if (v->GPUVMEnable == true && v->HostVMEnable == true) 3481 3480 HostVMDynamicLevelsTrips = v->HostVMMaxNonCachedPageTableLevels; ··· 3668 3669 dst_y_prefetch_equ = VStartup - (*TSetup + dml_max(TWait + TCalc, *Tdmdl)) / LineTime - 3669 3670 (*DSTYAfterScaler + (double) *DSTXAfterScaler / (double) myPipe->HTotal); 3670 3671 3672 + dst_y_prefetch_equ = dml_min(dst_y_prefetch_equ, __DML_VBA_MAX_DST_Y_PRE__); 3671 3673 #ifdef __DML_VBA_DEBUG__ 3672 3674 dml_print("DML::%s: HTotal = %d\n", __func__, myPipe->HTotal); 3673 3675 dml_print("DML::%s: min_Lsw = %f\n", __func__, min_Lsw); ··· 3701 3701 3702 3702 dst_y_prefetch_equ = dml_floor(4.0 * (dst_y_prefetch_equ + 0.125), 1) / 4.0; 3703 3703 Tpre_rounded = dst_y_prefetch_equ * LineTime; 3704 - 3705 - TPreMargin = Tpre_rounded - TPreReq; 3706 3704 #ifdef __DML_VBA_DEBUG__ 3707 3705 dml_print("DML::%s: dst_y_prefetch_equ: %f (after round)\n", __func__, dst_y_prefetch_equ); 3708 3706 dml_print("DML::%s: LineTime: %f\n", __func__, LineTime); ··· 3728 3730 *VRatioPrefetchY = 0; 3729 3731 *VRatioPrefetchC = 0; 3730 3732 *RequiredPrefetchPixDataBWLuma = 0; 3731 - if (dst_y_prefetch_equ > 1 && TPreMargin > 0.0) { 3733 + if (dst_y_prefetch_equ > 1 && 3734 + (Tpre_rounded >= TPreReq || dst_y_prefetch_equ == __DML_VBA_MAX_DST_Y_PRE__)) { 3732 3735 double PrefetchBandwidth1; 3733 3736 double PrefetchBandwidth2; 3734 3737 double PrefetchBandwidth3;
+11 -12
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1156 1156 uint64_t features_supported; 1157 1157 int ret = 0; 1158 1158 1159 - if (adev->in_suspend && smu_is_dpm_running(smu)) { 1160 - dev_info(adev->dev, "dpm has been enabled\n"); 1161 - /* this is needed specifically */ 1162 - switch (adev->ip_versions[MP1_HWIP][0]) { 1163 - case IP_VERSION(11, 0, 7): 1164 - case IP_VERSION(11, 0, 11): 1165 - case IP_VERSION(11, 5, 0): 1166 - case IP_VERSION(11, 0, 12): 1159 + switch (adev->ip_versions[MP1_HWIP][0]) { 1160 + case IP_VERSION(11, 0, 7): 1161 + case IP_VERSION(11, 0, 11): 1162 + case IP_VERSION(11, 5, 0): 1163 + case IP_VERSION(11, 0, 12): 1164 + if (adev->in_suspend && smu_is_dpm_running(smu)) { 1165 + dev_info(adev->dev, "dpm has been enabled\n"); 1167 1166 ret = smu_system_features_control(smu, true); 1168 1167 if (ret) 1169 1168 dev_err(adev->dev, "Failed system features control!\n"); 1170 - break; 1171 - default: 1172 - break; 1169 + return ret; 1173 1170 } 1174 - return ret; 1171 + break; 1172 + default: 1173 + break; 1175 1174 } 1176 1175 1177 1176 ret = smu_init_display_count(smu, 0);
+8
drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
··· 1388 1388 CMN2ASIC_MAPPING_WORKLOAD, 1389 1389 }; 1390 1390 1391 + enum smu_baco_seq { 1392 + BACO_SEQ_BACO = 0, 1393 + BACO_SEQ_MSR, 1394 + BACO_SEQ_BAMACO, 1395 + BACO_SEQ_ULPS, 1396 + BACO_SEQ_COUNT, 1397 + }; 1398 + 1391 1399 #define MSG_MAP(msg, index, valid_in_vf) \ 1392 1400 [SMU_MSG_##msg] = {1, (index), (valid_in_vf)} 1393 1401
+1 -9
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0.h
··· 147 147 uint32_t max_fast_ppt_limit; 148 148 }; 149 149 150 - enum smu_v11_0_baco_seq { 151 - BACO_SEQ_BACO = 0, 152 - BACO_SEQ_MSR, 153 - BACO_SEQ_BAMACO, 154 - BACO_SEQ_ULPS, 155 - BACO_SEQ_COUNT, 156 - }; 157 - 158 150 #if defined(SWSMU_CODE_LAYER_L2) || defined(SWSMU_CODE_LAYER_L3) 159 151 160 152 int smu_v11_0_init_microcode(struct smu_context *smu); ··· 249 257 int smu_v11_0_baco_exit(struct smu_context *smu); 250 258 251 259 int smu_v11_0_baco_set_armd3_sequence(struct smu_context *smu, 252 - enum smu_v11_0_baco_seq baco_seq); 260 + enum smu_baco_seq baco_seq); 253 261 254 262 int smu_v11_0_mode1_reset(struct smu_context *smu); 255 263
+3 -8
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
··· 124 124 enum smu_13_0_power_state power_state; 125 125 }; 126 126 127 - enum smu_v13_0_baco_seq { 128 - BACO_SEQ_BACO = 0, 129 - BACO_SEQ_MSR, 130 - BACO_SEQ_BAMACO, 131 - BACO_SEQ_ULPS, 132 - BACO_SEQ_COUNT, 133 - }; 134 - 135 127 #if defined(SWSMU_CODE_LAYER_L2) || defined(SWSMU_CODE_LAYER_L3) 136 128 137 129 int smu_v13_0_init_microcode(struct smu_context *smu); ··· 209 217 210 218 int smu_v13_0_get_max_sustainable_clocks_by_dc(struct smu_context *smu, 211 219 struct pp_smu_nv_clock_table *max_clocks); 220 + 221 + int smu_v13_0_baco_set_armd3_sequence(struct smu_context *smu, 222 + enum smu_baco_seq baco_seq); 212 223 213 224 bool smu_v13_0_baco_is_support(struct smu_context *smu); 214 225
+4
drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
··· 379 379 ((adev->pdev->device == 0x73BF) && 380 380 (adev->pdev->revision == 0xCF)) || 381 381 ((adev->pdev->device == 0x7422) && 382 + (adev->pdev->revision == 0x00)) || 383 + ((adev->pdev->device == 0x73A3) && 384 + (adev->pdev->revision == 0x00)) || 385 + ((adev->pdev->device == 0x73E3) && 382 386 (adev->pdev->revision == 0x00))) 383 387 smu_baco->platform_support = false; 384 388
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
··· 1576 1576 } 1577 1577 1578 1578 int smu_v11_0_baco_set_armd3_sequence(struct smu_context *smu, 1579 - enum smu_v11_0_baco_seq baco_seq) 1579 + enum smu_baco_seq baco_seq) 1580 1580 { 1581 1581 return smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ArmD3, baco_seq, NULL); 1582 1582 }
+9
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 2230 2230 return ret; 2231 2231 } 2232 2232 2233 + int smu_v13_0_baco_set_armd3_sequence(struct smu_context *smu, 2234 + enum smu_baco_seq baco_seq) 2235 + { 2236 + return smu_cmn_send_smc_msg_with_param(smu, 2237 + SMU_MSG_ArmD3, 2238 + baco_seq, 2239 + NULL); 2240 + } 2241 + 2233 2242 bool smu_v13_0_baco_is_support(struct smu_context *smu) 2234 2243 { 2235 2244 struct smu_baco_context *smu_baco = &smu->smu_baco;
+28 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 120 120 MSG_MAP(Mode1Reset, PPSMC_MSG_Mode1Reset, 0), 121 121 MSG_MAP(PrepareMp1ForUnload, PPSMC_MSG_PrepareMp1ForUnload, 0), 122 122 MSG_MAP(DFCstateControl, PPSMC_MSG_SetExternalClientDfCstateAllow, 0), 123 + MSG_MAP(ArmD3, PPSMC_MSG_ArmD3, 0), 123 124 }; 124 125 125 126 static struct cmn2asic_mapping smu_v13_0_0_clk_map[SMU_CLK_COUNT] = { ··· 1567 1566 NULL); 1568 1567 } 1569 1568 1569 + static int smu_v13_0_0_baco_enter(struct smu_context *smu) 1570 + { 1571 + struct smu_baco_context *smu_baco = &smu->smu_baco; 1572 + struct amdgpu_device *adev = smu->adev; 1573 + 1574 + if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev)) 1575 + return smu_v13_0_baco_set_armd3_sequence(smu, 1576 + smu_baco->maco_support ? BACO_SEQ_BAMACO : BACO_SEQ_BACO); 1577 + else 1578 + return smu_v13_0_baco_enter(smu); 1579 + } 1580 + 1581 + static int smu_v13_0_0_baco_exit(struct smu_context *smu) 1582 + { 1583 + struct amdgpu_device *adev = smu->adev; 1584 + 1585 + if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev)) { 1586 + /* Wait for PMFW handling for the Dstate change */ 1587 + usleep_range(10000, 11000); 1588 + return smu_v13_0_baco_set_armd3_sequence(smu, BACO_SEQ_ULPS); 1589 + } else { 1590 + return smu_v13_0_baco_exit(smu); 1591 + } 1592 + } 1593 + 1570 1594 static bool smu_v13_0_0_is_mode1_reset_supported(struct smu_context *smu) 1571 1595 { 1572 1596 struct amdgpu_device *adev = smu->adev; ··· 1853 1827 .baco_is_support = smu_v13_0_baco_is_support, 1854 1828 .baco_get_state = smu_v13_0_baco_get_state, 1855 1829 .baco_set_state = smu_v13_0_baco_set_state, 1856 - .baco_enter = smu_v13_0_baco_enter, 1857 - .baco_exit = smu_v13_0_baco_exit, 1830 + .baco_enter = smu_v13_0_0_baco_enter, 1831 + .baco_exit = smu_v13_0_0_baco_exit, 1858 1832 .mode1_reset_is_support = smu_v13_0_0_is_mode1_reset_supported, 1859 1833 .mode1_reset = smu_v13_0_mode1_reset, 1860 1834 .set_mp1_state = smu_v13_0_0_set_mp1_state,
+28 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 122 122 MSG_MAP(PrepareMp1ForUnload, PPSMC_MSG_PrepareMp1ForUnload, 0), 123 123 MSG_MAP(SetMGpuFanBoostLimitRpm, PPSMC_MSG_SetMGpuFanBoostLimitRpm, 0), 124 124 MSG_MAP(DFCstateControl, PPSMC_MSG_SetExternalClientDfCstateAllow, 0), 125 + MSG_MAP(ArmD3, PPSMC_MSG_ArmD3, 0), 125 126 }; 126 127 127 128 static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = { ··· 1579 1578 return ret; 1580 1579 } 1581 1580 1581 + static int smu_v13_0_7_baco_enter(struct smu_context *smu) 1582 + { 1583 + struct smu_baco_context *smu_baco = &smu->smu_baco; 1584 + struct amdgpu_device *adev = smu->adev; 1585 + 1586 + if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev)) 1587 + return smu_v13_0_baco_set_armd3_sequence(smu, 1588 + smu_baco->maco_support ? BACO_SEQ_BAMACO : BACO_SEQ_BACO); 1589 + else 1590 + return smu_v13_0_baco_enter(smu); 1591 + } 1592 + 1593 + static int smu_v13_0_7_baco_exit(struct smu_context *smu) 1594 + { 1595 + struct amdgpu_device *adev = smu->adev; 1596 + 1597 + if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev)) { 1598 + /* Wait for PMFW handling for the Dstate change */ 1599 + usleep_range(10000, 11000); 1600 + return smu_v13_0_baco_set_armd3_sequence(smu, BACO_SEQ_ULPS); 1601 + } else { 1602 + return smu_v13_0_baco_exit(smu); 1603 + } 1604 + } 1605 + 1582 1606 static bool smu_v13_0_7_is_mode1_reset_supported(struct smu_context *smu) 1583 1607 { 1584 1608 struct amdgpu_device *adev = smu->adev; ··· 1681 1655 .baco_is_support = smu_v13_0_baco_is_support, 1682 1656 .baco_get_state = smu_v13_0_baco_get_state, 1683 1657 .baco_set_state = smu_v13_0_baco_set_state, 1684 - .baco_enter = smu_v13_0_baco_enter, 1685 - .baco_exit = smu_v13_0_baco_exit, 1658 + .baco_enter = smu_v13_0_7_baco_enter, 1659 + .baco_exit = smu_v13_0_7_baco_exit, 1686 1660 .mode1_reset_is_support = smu_v13_0_7_is_mode1_reset_supported, 1687 1661 .mode1_reset = smu_v13_0_mode1_reset, 1688 1662 .set_mp1_state = smu_v13_0_7_set_mp1_state,
+29 -22
drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
··· 63 63 ssize_t drm_dp_dual_mode_read(struct i2c_adapter *adapter, 64 64 u8 offset, void *buffer, size_t size) 65 65 { 66 + u8 zero = 0; 67 + char *tmpbuf = NULL; 68 + /* 69 + * As sub-addressing is not supported by all adaptors, 70 + * always explicitly read from the start and discard 71 + * any bytes that come before the requested offset. 72 + * This way, no matter whether the adaptor supports it 73 + * or not, we'll end up reading the proper data. 74 + */ 66 75 struct i2c_msg msgs[] = { 67 76 { 68 77 .addr = DP_DUAL_MODE_SLAVE_ADDRESS, 69 78 .flags = 0, 70 79 .len = 1, 71 - .buf = &offset, 80 + .buf = &zero, 72 81 }, 73 82 { 74 83 .addr = DP_DUAL_MODE_SLAVE_ADDRESS, 75 84 .flags = I2C_M_RD, 76 - .len = size, 85 + .len = size + offset, 77 86 .buf = buffer, 78 87 }, 79 88 }; 80 89 int ret; 81 90 91 + if (offset) { 92 + tmpbuf = kmalloc(size + offset, GFP_KERNEL); 93 + if (!tmpbuf) 94 + return -ENOMEM; 95 + 96 + msgs[1].buf = tmpbuf; 97 + } 98 + 82 99 ret = i2c_transfer(adapter, msgs, ARRAY_SIZE(msgs)); 100 + if (tmpbuf) 101 + memcpy(buffer, tmpbuf + offset, size); 102 + 103 + kfree(tmpbuf); 104 + 83 105 if (ret < 0) 84 106 return ret; 85 107 if (ret != ARRAY_SIZE(msgs)) ··· 230 208 if (ret) 231 209 return DRM_DP_DUAL_MODE_UNKNOWN; 232 210 233 - /* 234 - * Sigh. Some (maybe all?) type 1 adaptors are broken and ack 235 - * the offset but ignore it, and instead they just always return 236 - * data from the start of the HDMI ID buffer. So for a broken 237 - * type 1 HDMI adaptor a single byte read will always give us 238 - * 0x44, and for a type 1 DVI adaptor it should give 0x00 239 - * (assuming it implements any registers). Fortunately neither 240 - * of those values will match the type 2 signature of the 241 - * DP_DUAL_MODE_ADAPTOR_ID register so we can proceed with 242 - * the type 2 adaptor detection safely even in the presence 243 - * of broken type 1 adaptors. 244 - */ 245 211 ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_ADAPTOR_ID, 246 212 &adaptor_id, sizeof(adaptor_id)); 247 213 drm_dbg_kms(dev, "DP dual mode adaptor ID: %02x (err %zd)\n", adaptor_id, ret); ··· 243 233 return DRM_DP_DUAL_MODE_TYPE2_DVI; 244 234 } 245 235 /* 246 - * If neither a proper type 1 ID nor a broken type 1 adaptor 247 - * as described above, assume type 1, but let the user know 248 - * that we may have misdetected the type. 236 + * If not a proper type 1 ID, still assume type 1, but let 237 + * the user know that we may have misdetected the type. 249 238 */ 250 - if (!is_type1_adaptor(adaptor_id) && adaptor_id != hdmi_id[0]) 239 + if (!is_type1_adaptor(adaptor_id)) 251 240 drm_err(dev, "Unexpected DP dual mode adaptor ID %02x\n", adaptor_id); 252 241 253 242 } ··· 352 343 * @enable: enable (as opposed to disable) the TMDS output buffers 353 344 * 354 345 * Set the state of the TMDS output buffers in the adaptor. For 355 - * type2 this is set via the DP_DUAL_MODE_TMDS_OEN register. As 356 - * some type 1 adaptors have problems with registers (see comments 357 - * in drm_dp_dual_mode_detect()) we avoid touching the register, 358 - * making this function a no-op on type 1 adaptors. 346 + * type2 this is set via the DP_DUAL_MODE_TMDS_OEN register. 347 + * Type1 adaptors do not support any register writes. 359 348 * 360 349 * Returns: 361 350 * 0 on success, negative error code on failure
+1 -1
drivers/gpu/drm/drm_drv.c
··· 615 615 mutex_init(&dev->clientlist_mutex); 616 616 mutex_init(&dev->master_mutex); 617 617 618 - ret = drmm_add_action(dev, drm_dev_init_release, NULL); 618 + ret = drmm_add_action_or_reset(dev, drm_dev_init_release, NULL); 619 619 if (ret) 620 620 return ret; 621 621
+2 -1
drivers/gpu/drm/drm_internal.h
··· 104 104 105 105 static inline void drm_vblank_destroy_worker(struct drm_vblank_crtc *vblank) 106 106 { 107 - kthread_destroy_worker(vblank->worker); 107 + if (vblank->worker) 108 + kthread_destroy_worker(vblank->worker); 108 109 } 109 110 110 111 int drm_vblank_worker_init(struct drm_vblank_crtc *vblank);
-3
drivers/gpu/drm/drm_mode_config.c
··· 151 151 count = 0; 152 152 connector_id = u64_to_user_ptr(card_res->connector_id_ptr); 153 153 drm_for_each_connector_iter(connector, &conn_iter) { 154 - if (connector->registration_state != DRM_CONNECTOR_REGISTERED) 155 - continue; 156 - 157 154 /* only expose writeback connectors if userspace understands them */ 158 155 if (!file_priv->writeback_connectors && 159 156 (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK))
+2
drivers/gpu/drm/i2c/tda998x_drv.c
··· 1174 1174 struct hdmi_codec_pdata codec_data = { 1175 1175 .ops = &audio_codec_ops, 1176 1176 .max_i2s_channels = 2, 1177 + .no_i2s_capture = 1, 1178 + .no_spdif_capture = 1, 1177 1179 }; 1178 1180 1179 1181 if (priv->audio_port_enable[AUDIO_ROUTE_I2S])
+3 -3
drivers/gpu/drm/i915/gem/i915_gem_ttm.c
··· 1013 1013 return VM_FAULT_SIGBUS; 1014 1014 } 1015 1015 1016 - if (i915_ttm_cpu_maps_iomem(bo->resource)) 1017 - wakeref = intel_runtime_pm_get(&to_i915(obj->base.dev)->runtime_pm); 1018 - 1019 1016 if (!i915_ttm_resource_mappable(bo->resource)) { 1020 1017 int err = -ENODEV; 1021 1018 int i; ··· 1038 1041 goto out_rpm; 1039 1042 } 1040 1043 } 1044 + 1045 + if (i915_ttm_cpu_maps_iomem(bo->resource)) 1046 + wakeref = intel_runtime_pm_get(&to_i915(obj->base.dev)->runtime_pm); 1041 1047 1042 1048 if (drm_dev_enter(dev, &idx)) { 1043 1049 ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
+9 -6
drivers/gpu/drm/lima/lima_devfreq.c
··· 112 112 unsigned long cur_freq; 113 113 int ret; 114 114 const char *regulator_names[] = { "mali", NULL }; 115 - const char *clk_names[] = { "core", NULL }; 116 - struct dev_pm_opp_config config = { 117 - .regulator_names = regulator_names, 118 - .clk_names = clk_names, 119 - }; 120 115 121 116 if (!device_property_present(dev, "operating-points-v2")) 122 117 /* Optional, continue without devfreq */ ··· 119 124 120 125 spin_lock_init(&ldevfreq->lock); 121 126 122 - ret = devm_pm_opp_set_config(dev, &config); 127 + /* 128 + * clkname is set separately so it is not affected by the optional 129 + * regulator setting which may return error. 130 + */ 131 + ret = devm_pm_opp_set_clkname(dev, "core"); 132 + if (ret) 133 + return ret; 134 + 135 + ret = devm_pm_opp_set_regulators(dev, regulator_names); 123 136 if (ret) { 124 137 /* Continue if the optional regulator is missing */ 125 138 if (ret != -ENODEV)
+2
drivers/gpu/drm/panel/panel-simple.c
··· 2500 2500 static const struct panel_desc logictechno_lt161010_2nh = { 2501 2501 .timings = &logictechno_lt161010_2nh_timing, 2502 2502 .num_timings = 1, 2503 + .bpc = 6, 2503 2504 .size = { 2504 2505 .width = 154, 2505 2506 .height = 86, ··· 2530 2529 static const struct panel_desc logictechno_lt170410_2whc = { 2531 2530 .timings = &logictechno_lt170410_2whc_timing, 2532 2531 .num_timings = 1, 2532 + .bpc = 8, 2533 2533 .size = { 2534 2534 .width = 217, 2535 2535 .height = 136,
+4
drivers/gpu/drm/tegra/drm.c
··· 1093 1093 struct host1x *host1x = dev_get_drvdata(dev->dev.parent); 1094 1094 struct iommu_domain *domain; 1095 1095 1096 + /* Our IOMMU usage policy doesn't currently play well with GART */ 1097 + if (of_machine_is_compatible("nvidia,tegra20")) 1098 + return false; 1099 + 1096 1100 /* 1097 1101 * If the Tegra DRM clients are backed by an IOMMU, push buffers are 1098 1102 * likely to be allocated beyond the 32-bit boundary if sufficient
+4 -4
drivers/gpu/drm/vc4/vc4_kms.c
··· 197 197 struct drm_private_state *priv_state; 198 198 199 199 priv_state = drm_atomic_get_new_private_obj_state(state, &vc4->hvs_channels); 200 - if (IS_ERR(priv_state)) 201 - return ERR_CAST(priv_state); 200 + if (!priv_state) 201 + return ERR_PTR(-EINVAL); 202 202 203 203 return to_vc4_hvs_state(priv_state); 204 204 } ··· 210 210 struct drm_private_state *priv_state; 211 211 212 212 priv_state = drm_atomic_get_old_private_obj_state(state, &vc4->hvs_channels); 213 - if (IS_ERR(priv_state)) 214 - return ERR_CAST(priv_state); 213 + if (!priv_state) 214 + return ERR_PTR(-EINVAL); 215 215 216 216 return to_vc4_hvs_state(priv_state); 217 217 }
+4
drivers/gpu/host1x/dev.c
··· 292 292 293 293 static bool host1x_wants_iommu(struct host1x *host1x) 294 294 { 295 + /* Our IOMMU usage policy doesn't currently play well with GART */ 296 + if (of_machine_is_compatible("nvidia,tegra20")) 297 + return false; 298 + 295 299 /* 296 300 * If we support addressing a maximum of 32 bits of physical memory 297 301 * and if the host1x firewall is enabled, there's no need to enable
+12 -12
drivers/iio/accel/bma400_core.c
··· 869 869 unsigned int val; 870 870 int ret; 871 871 872 - /* Try to read chip_id register. It must return 0x90. */ 873 - ret = regmap_read(data->regmap, BMA400_CHIP_ID_REG, &val); 874 - if (ret) { 875 - dev_err(data->dev, "Failed to read chip id register\n"); 876 - return ret; 877 - } 878 - 879 - if (val != BMA400_ID_REG_VAL) { 880 - dev_err(data->dev, "Chip ID mismatch\n"); 881 - return -ENODEV; 882 - } 883 - 884 872 data->regulators[BMA400_VDD_REGULATOR].supply = "vdd"; 885 873 data->regulators[BMA400_VDDIO_REGULATOR].supply = "vddio"; 886 874 ret = devm_regulator_bulk_get(data->dev, ··· 893 905 ret = devm_add_action_or_reset(data->dev, bma400_regulators_disable, data); 894 906 if (ret) 895 907 return ret; 908 + 909 + /* Try to read chip_id register. It must return 0x90. */ 910 + ret = regmap_read(data->regmap, BMA400_CHIP_ID_REG, &val); 911 + if (ret) { 912 + dev_err(data->dev, "Failed to read chip id register\n"); 913 + return ret; 914 + } 915 + 916 + if (val != BMA400_ID_REG_VAL) { 917 + dev_err(data->dev, "Chip ID mismatch\n"); 918 + return -ENODEV; 919 + } 896 920 897 921 ret = bma400_get_power_mode(data); 898 922 if (ret) {
+2 -4
drivers/iio/adc/at91-sama5d2_adc.c
··· 2307 2307 clb->p6 = buf[AT91_ADC_TS_CLB_IDX_P6]; 2308 2308 2309 2309 /* 2310 - * We prepare here the conversion to milli and also add constant 2311 - * factor (5 degrees Celsius) to p1 here to avoid doing it on 2312 - * hotpath. 2310 + * We prepare here the conversion to milli to avoid doing it on hotpath. 2313 2311 */ 2314 - clb->p1 = clb->p1 * 1000 + 5000; 2312 + clb->p1 = clb->p1 * 1000; 2315 2313 2316 2314 free_buf: 2317 2315 kfree(buf);
+3 -1
drivers/iio/adc/at91_adc.c
··· 634 634 trig->ops = &at91_adc_trigger_ops; 635 635 636 636 ret = iio_trigger_register(trig); 637 - if (ret) 637 + if (ret) { 638 + iio_trigger_free(trig); 638 639 return NULL; 640 + } 639 641 640 642 return trig; 641 643 }
+3 -2
drivers/iio/adc/mp2629_adc.c
··· 57 57 MP2629_MAP(SYSTEM_VOLT, "system-volt"), 58 58 MP2629_MAP(INPUT_VOLT, "input-volt"), 59 59 MP2629_MAP(BATT_CURRENT, "batt-current"), 60 - MP2629_MAP(INPUT_CURRENT, "input-current") 60 + MP2629_MAP(INPUT_CURRENT, "input-current"), 61 + { } 61 62 }; 62 63 63 64 static int mp2629_read_raw(struct iio_dev *indio_dev, ··· 75 74 if (ret) 76 75 return ret; 77 76 78 - if (chan->address == MP2629_INPUT_VOLT) 77 + if (chan->channel == MP2629_INPUT_VOLT) 79 78 rval &= GENMASK(6, 0); 80 79 *val = rval; 81 80 return IIO_VAL_INT;
+1 -1
drivers/iio/imu/bno055/bno055.c
··· 632 632 return -EINVAL; 633 633 } 634 634 delta = abs(tbl_val - req_val); 635 - if (delta < best_delta || first) { 635 + if (first || delta < best_delta) { 636 636 best_delta = delta; 637 637 hwval = i; 638 638 first = false;
+4 -8
drivers/iio/pressure/ms5611.h
··· 25 25 MS5607, 26 26 }; 27 27 28 - struct ms5611_chip_info { 29 - u16 prom[MS5611_PROM_WORDS_NB]; 30 - 31 - int (*temp_and_pressure_compensate)(struct ms5611_chip_info *chip_info, 32 - s32 *temp, s32 *pressure); 33 - }; 34 - 35 28 /* 36 29 * OverSampling Rate descriptor. 37 30 * Warning: cmd MUST be kept aligned on a word boundary (see ··· 43 50 const struct ms5611_osr *pressure_osr; 44 51 const struct ms5611_osr *temp_osr; 45 52 53 + u16 prom[MS5611_PROM_WORDS_NB]; 54 + 46 55 int (*reset)(struct ms5611_state *st); 47 56 int (*read_prom_word)(struct ms5611_state *st, int index, u16 *word); 48 57 int (*read_adc_temp_and_pressure)(struct ms5611_state *st, 49 58 s32 *temp, s32 *pressure); 50 59 51 - struct ms5611_chip_info *chip_info; 60 + int (*compensate_temp_and_pressure)(struct ms5611_state *st, s32 *temp, 61 + s32 *pressure); 52 62 struct regulator *vdd; 53 63 }; 54 64
+27 -24
drivers/iio/pressure/ms5611_core.c
··· 85 85 struct ms5611_state *st = iio_priv(indio_dev); 86 86 87 87 for (i = 0; i < MS5611_PROM_WORDS_NB; i++) { 88 - ret = st->read_prom_word(st, i, &st->chip_info->prom[i]); 88 + ret = st->read_prom_word(st, i, &st->prom[i]); 89 89 if (ret < 0) { 90 90 dev_err(&indio_dev->dev, 91 91 "failed to read prom at %d\n", i); ··· 93 93 } 94 94 } 95 95 96 - if (!ms5611_prom_is_valid(st->chip_info->prom, MS5611_PROM_WORDS_NB)) { 96 + if (!ms5611_prom_is_valid(st->prom, MS5611_PROM_WORDS_NB)) { 97 97 dev_err(&indio_dev->dev, "PROM integrity check failed\n"); 98 98 return -ENODEV; 99 99 } ··· 114 114 return ret; 115 115 } 116 116 117 - return st->chip_info->temp_and_pressure_compensate(st->chip_info, 118 - temp, pressure); 117 + return st->compensate_temp_and_pressure(st, temp, pressure); 119 118 } 120 119 121 - static int ms5611_temp_and_pressure_compensate(struct ms5611_chip_info *chip_info, 120 + static int ms5611_temp_and_pressure_compensate(struct ms5611_state *st, 122 121 s32 *temp, s32 *pressure) 123 122 { 124 123 s32 t = *temp, p = *pressure; 125 124 s64 off, sens, dt; 126 125 127 - dt = t - (chip_info->prom[5] << 8); 128 - off = ((s64)chip_info->prom[2] << 16) + ((chip_info->prom[4] * dt) >> 7); 129 - sens = ((s64)chip_info->prom[1] << 15) + ((chip_info->prom[3] * dt) >> 8); 126 + dt = t - (st->prom[5] << 8); 127 + off = ((s64)st->prom[2] << 16) + ((st->prom[4] * dt) >> 7); 128 + sens = ((s64)st->prom[1] << 15) + ((st->prom[3] * dt) >> 8); 130 129 131 - t = 2000 + ((chip_info->prom[6] * dt) >> 23); 130 + t = 2000 + ((st->prom[6] * dt) >> 23); 132 131 if (t < 2000) { 133 132 s64 off2, sens2, t2; 134 133 ··· 153 154 return 0; 154 155 } 155 156 156 - static int ms5607_temp_and_pressure_compensate(struct ms5611_chip_info *chip_info, 157 + static int ms5607_temp_and_pressure_compensate(struct ms5611_state *st, 157 158 s32 *temp, s32 *pressure) 158 159 { 159 160 s32 t = *temp, p = *pressure; 160 161 s64 off, sens, dt; 161 162 162 - dt = t - (chip_info->prom[5] << 8); 163 - off = ((s64)chip_info->prom[2] << 17) + ((chip_info->prom[4] * dt) >> 6); 164 - sens = ((s64)chip_info->prom[1] << 16) + ((chip_info->prom[3] * dt) >> 7); 163 + dt = t - (st->prom[5] << 8); 164 + off = ((s64)st->prom[2] << 17) + ((st->prom[4] * dt) >> 6); 165 + sens = ((s64)st->prom[1] << 16) + ((st->prom[3] * dt) >> 7); 165 166 166 - t = 2000 + ((chip_info->prom[6] * dt) >> 23); 167 + t = 2000 + ((st->prom[6] * dt) >> 23); 167 168 if (t < 2000) { 168 169 s64 off2, sens2, t2, tmp; 169 170 ··· 341 342 342 343 static const unsigned long ms5611_scan_masks[] = {0x3, 0}; 343 344 344 - static struct ms5611_chip_info chip_info_tbl[] = { 345 - [MS5611] = { 346 - .temp_and_pressure_compensate = ms5611_temp_and_pressure_compensate, 347 - }, 348 - [MS5607] = { 349 - .temp_and_pressure_compensate = ms5607_temp_and_pressure_compensate, 350 - } 351 - }; 352 - 353 345 static const struct iio_chan_spec ms5611_channels[] = { 354 346 { 355 347 .type = IIO_PRESSURE, ··· 423 433 struct ms5611_state *st = iio_priv(indio_dev); 424 434 425 435 mutex_init(&st->lock); 426 - st->chip_info = &chip_info_tbl[type]; 436 + 437 + switch (type) { 438 + case MS5611: 439 + st->compensate_temp_and_pressure = 440 + ms5611_temp_and_pressure_compensate; 441 + break; 442 + case MS5607: 443 + st->compensate_temp_and_pressure = 444 + ms5607_temp_and_pressure_compensate; 445 + break; 446 + default: 447 + return -EINVAL; 448 + } 449 + 427 450 st->temp_osr = 428 451 &ms5611_avail_temp_osr[ARRAY_SIZE(ms5611_avail_temp_osr) - 1]; 429 452 st->pressure_osr =
+1 -1
drivers/iio/pressure/ms5611_spi.c
··· 91 91 spi_set_drvdata(spi, indio_dev); 92 92 93 93 spi->mode = SPI_MODE_0; 94 - spi->max_speed_hz = 20000000; 94 + spi->max_speed_hz = min(spi->max_speed_hz, 20000000U); 95 95 spi->bits_per_word = 8; 96 96 ret = spi_setup(spi); 97 97 if (ret < 0)
+5 -1
drivers/iio/trigger/iio-trig-sysfs.c
··· 203 203 204 204 static int __init iio_sysfs_trig_init(void) 205 205 { 206 + int ret; 206 207 device_initialize(&iio_sysfs_trig_dev); 207 208 dev_set_name(&iio_sysfs_trig_dev, "iio_sysfs_trigger"); 208 - return device_add(&iio_sysfs_trig_dev); 209 + ret = device_add(&iio_sysfs_trig_dev); 210 + if (ret) 211 + put_device(&iio_sysfs_trig_dev); 212 + return ret; 209 213 } 210 214 module_init(iio_sysfs_trig_init); 211 215
+4 -4
drivers/input/joystick/iforce/iforce-main.c
··· 273 273 * Get device info. 274 274 */ 275 275 276 - if (!iforce_get_id_packet(iforce, 'M', buf, &len) || len < 3) 276 + if (!iforce_get_id_packet(iforce, 'M', buf, &len) && len >= 3) 277 277 input_dev->id.vendor = get_unaligned_le16(buf + 1); 278 278 else 279 279 dev_warn(&iforce->dev->dev, "Device does not respond to id packet M\n"); 280 280 281 - if (!iforce_get_id_packet(iforce, 'P', buf, &len) || len < 3) 281 + if (!iforce_get_id_packet(iforce, 'P', buf, &len) && len >= 3) 282 282 input_dev->id.product = get_unaligned_le16(buf + 1); 283 283 else 284 284 dev_warn(&iforce->dev->dev, "Device does not respond to id packet P\n"); 285 285 286 - if (!iforce_get_id_packet(iforce, 'B', buf, &len) || len < 3) 286 + if (!iforce_get_id_packet(iforce, 'B', buf, &len) && len >= 3) 287 287 iforce->device_memory.end = get_unaligned_le16(buf + 1); 288 288 else 289 289 dev_warn(&iforce->dev->dev, "Device does not respond to id packet B\n"); 290 290 291 - if (!iforce_get_id_packet(iforce, 'N', buf, &len) || len < 2) 291 + if (!iforce_get_id_packet(iforce, 'N', buf, &len) && len >= 2) 292 292 ff_effects = buf[1]; 293 293 else 294 294 dev_warn(&iforce->dev->dev, "Device does not respond to id packet N\n");
+13 -1
drivers/input/misc/soc_button_array.c
··· 18 18 #include <linux/gpio.h> 19 19 #include <linux/platform_device.h> 20 20 21 + static bool use_low_level_irq; 22 + module_param(use_low_level_irq, bool, 0444); 23 + MODULE_PARM_DESC(use_low_level_irq, "Use low-level triggered IRQ instead of edge triggered"); 24 + 21 25 struct soc_button_info { 22 26 const char *name; 23 27 int acpi_index; ··· 75 71 .matches = { 76 72 DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 77 73 DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW5-012"), 74 + }, 75 + }, 76 + { 77 + /* Acer Switch V 10 SW5-017, same issue as Acer Switch 10 SW5-012. */ 78 + .matches = { 79 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 80 + DMI_MATCH(DMI_PRODUCT_NAME, "SW5-017"), 78 81 }, 79 82 }, 80 83 { ··· 175 164 } 176 165 177 166 /* See dmi_use_low_level_irq[] comment */ 178 - if (!autorepeat && dmi_check_system(dmi_use_low_level_irq)) { 167 + if (!autorepeat && (use_low_level_irq || 168 + dmi_check_system(dmi_use_low_level_irq))) { 179 169 irq_set_irq_type(irq, IRQ_TYPE_LEVEL_LOW); 180 170 gpio_keys[n_buttons].irq = irq; 181 171 gpio_keys[n_buttons].gpio = -ENOENT;
+1
drivers/input/mouse/synaptics.c
··· 192 192 "SYN3221", /* HP 15-ay000 */ 193 193 "SYN323d", /* HP Spectre X360 13-w013dx */ 194 194 "SYN3257", /* HP Envy 13-ad105ng */ 195 + "SYN3286", /* HP Laptop 15-da3001TU */ 195 196 NULL 196 197 }; 197 198
+4 -4
drivers/input/serio/i8042-acpipnpio.h
··· 115 115 .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_NEVER) 116 116 }, 117 117 { 118 - /* ASUS ZenBook UX425UA */ 118 + /* ASUS ZenBook UX425UA/QA */ 119 119 .matches = { 120 120 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 121 - DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"), 121 + DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425"), 122 122 }, 123 123 .driver_data = (void *)(SERIO_QUIRK_PROBE_DEFER | SERIO_QUIRK_RESET_NEVER) 124 124 }, 125 125 { 126 - /* ASUS ZenBook UM325UA */ 126 + /* ASUS ZenBook UM325UA/QA */ 127 127 .matches = { 128 128 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 129 - DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"), 129 + DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325"), 130 130 }, 131 131 .driver_data = (void *)(SERIO_QUIRK_PROBE_DEFER | SERIO_QUIRK_RESET_NEVER) 132 132 },
-4
drivers/input/serio/i8042.c
··· 1543 1543 { 1544 1544 int error; 1545 1545 1546 - i8042_platform_device = dev; 1547 - 1548 1546 if (i8042_reset == I8042_RESET_ALWAYS) { 1549 1547 error = i8042_controller_selftest(); 1550 1548 if (error) ··· 1580 1582 i8042_free_aux_ports(); /* in case KBD failed but AUX not */ 1581 1583 i8042_free_irqs(); 1582 1584 i8042_controller_reset(false); 1583 - i8042_platform_device = NULL; 1584 1585 1585 1586 return error; 1586 1587 } ··· 1589 1592 i8042_unregister_ports(); 1590 1593 i8042_free_irqs(); 1591 1594 i8042_controller_reset(false); 1592 - i8042_platform_device = NULL; 1593 1595 1594 1596 return 0; 1595 1597 }
+11
drivers/input/touchscreen/goodix.c
··· 1158 1158 input_set_abs_params(ts->input_dev, ABS_MT_WIDTH_MAJOR, 0, 255, 0, 0); 1159 1159 input_set_abs_params(ts->input_dev, ABS_MT_TOUCH_MAJOR, 0, 255, 0, 0); 1160 1160 1161 + retry_read_config: 1161 1162 /* Read configuration and apply touchscreen parameters */ 1162 1163 goodix_read_config(ts); 1163 1164 ··· 1166 1165 touchscreen_parse_properties(ts->input_dev, true, &ts->prop); 1167 1166 1168 1167 if (!ts->prop.max_x || !ts->prop.max_y || !ts->max_touch_num) { 1168 + if (!ts->reset_controller_at_probe && 1169 + ts->irq_pin_access_method != IRQ_PIN_ACCESS_NONE) { 1170 + dev_info(&ts->client->dev, "Config not set, resetting controller\n"); 1171 + /* Retry after a controller reset */ 1172 + ts->reset_controller_at_probe = true; 1173 + error = goodix_reset(ts); 1174 + if (error) 1175 + return error; 1176 + goto retry_read_config; 1177 + } 1169 1178 dev_err(&ts->client->dev, 1170 1179 "Invalid config (%d, %d, %d), using defaults\n", 1171 1180 ts->prop.max_x, ts->prop.max_y, ts->max_touch_num);
+3 -5
drivers/iommu/intel/iommu.c
··· 959 959 960 960 domain_flush_cache(domain, tmp_page, VTD_PAGE_SIZE); 961 961 pteval = ((uint64_t)virt_to_dma_pfn(tmp_page) << VTD_PAGE_SHIFT) | DMA_PTE_READ | DMA_PTE_WRITE; 962 - if (domain_use_first_level(domain)) { 963 - pteval |= DMA_FL_PTE_XD | DMA_FL_PTE_US; 964 - if (iommu_is_dma_domain(&domain->domain)) 965 - pteval |= DMA_FL_PTE_ACCESS; 966 - } 962 + if (domain_use_first_level(domain)) 963 + pteval |= DMA_FL_PTE_XD | DMA_FL_PTE_US | DMA_FL_PTE_ACCESS; 964 + 967 965 if (cmpxchg64(&pte->val, 0ULL, pteval)) 968 966 /* Someone else set it while we were thinking; use theirs. */ 969 967 free_pgtable_page(tmp_page);
+3 -2
drivers/iommu/intel/pasid.c
··· 642 642 * Since it is a second level only translation setup, we should 643 643 * set SRE bit as well (addresses are expected to be GPAs). 644 644 */ 645 - if (pasid != PASID_RID2PASID) 645 + if (pasid != PASID_RID2PASID && ecap_srs(iommu->ecap)) 646 646 pasid_set_sre(pte); 647 647 pasid_set_present(pte); 648 648 spin_unlock(&iommu->lock); ··· 685 685 * We should set SRE bit as well since the addresses are expected 686 686 * to be GPAs. 687 687 */ 688 - pasid_set_sre(pte); 688 + if (ecap_srs(iommu->ecap)) 689 + pasid_set_sre(pte); 689 690 pasid_set_present(pte); 690 691 spin_unlock(&iommu->lock); 691 692
+1 -1
drivers/isdn/mISDN/core.c
··· 222 222 223 223 err = get_free_devid(); 224 224 if (err < 0) 225 - goto error1; 225 + return err; 226 226 dev->id = err; 227 227 228 228 device_initialize(&dev->dev);
+2 -1
drivers/isdn/mISDN/dsp_pipeline.c
··· 77 77 if (!entry) 78 78 return -ENOMEM; 79 79 80 + INIT_LIST_HEAD(&entry->list); 80 81 entry->elem = elem; 81 82 82 83 entry->dev.class = elements_class; ··· 108 107 device_unregister(&entry->dev); 109 108 return ret; 110 109 err1: 111 - kfree(entry); 110 + put_device(&entry->dev); 112 111 return ret; 113 112 } 114 113 EXPORT_SYMBOL(mISDN_dsp_element_register);
+2
drivers/md/dm-bufio.c
··· 1858 1858 dm_io_client_destroy(c->dm_io); 1859 1859 bad_dm_io: 1860 1860 mutex_destroy(&c->lock); 1861 + if (c->no_sleep) 1862 + static_branch_dec(&no_sleep_enabled); 1861 1863 kfree(c); 1862 1864 bad_client: 1863 1865 return ERR_PTR(r);
+1
drivers/md/dm-crypt.c
··· 3630 3630 limits->physical_block_size = 3631 3631 max_t(unsigned, limits->physical_block_size, cc->sector_size); 3632 3632 limits->io_min = max_t(unsigned, limits->io_min, cc->sector_size); 3633 + limits->dma_alignment = limits->logical_block_size - 1; 3633 3634 } 3634 3635 3635 3636 static struct target_type crypt_target = {
+15 -6
drivers/md/dm-integrity.c
··· 263 263 264 264 struct completion crypto_backoff; 265 265 266 + bool wrote_to_journal; 266 267 bool journal_uptodate; 267 268 bool just_formatted; 268 269 bool recalculate_flag; ··· 2376 2375 if (!commit_sections) 2377 2376 goto release_flush_bios; 2378 2377 2378 + ic->wrote_to_journal = true; 2379 + 2379 2380 i = commit_start; 2380 2381 for (n = 0; n < commit_sections; n++) { 2381 2382 for (j = 0; j < ic->journal_section_entries; j++) { ··· 2593 2590 unsigned write_start, write_sections; 2594 2591 2595 2592 unsigned prev_free_sectors; 2596 - 2597 - /* the following test is not needed, but it tests the replay code */ 2598 - if (unlikely(dm_post_suspending(ic->ti)) && !ic->meta_dev) 2599 - return; 2600 2593 2601 2594 spin_lock_irq(&ic->endio_wait.lock); 2602 2595 write_start = ic->committed_section; ··· 3100 3101 drain_workqueue(ic->commit_wq); 3101 3102 3102 3103 if (ic->mode == 'J') { 3103 - if (ic->meta_dev) 3104 - queue_work(ic->writer_wq, &ic->writer_work); 3104 + queue_work(ic->writer_wq, &ic->writer_work); 3105 3105 drain_workqueue(ic->writer_wq); 3106 3106 dm_integrity_flush_buffers(ic, true); 3107 + if (ic->wrote_to_journal) { 3108 + init_journal(ic, ic->free_section, 3109 + ic->journal_sections - ic->free_section, ic->commit_seq); 3110 + if (ic->free_section) { 3111 + init_journal(ic, 0, ic->free_section, 3112 + next_commit_seq(ic->commit_seq)); 3113 + } 3114 + } 3107 3115 } 3108 3116 3109 3117 if (ic->mode == 'B') { ··· 3137 3131 int r; 3138 3132 3139 3133 DEBUG_print("resume\n"); 3134 + 3135 + ic->wrote_to_journal = false; 3140 3136 3141 3137 if (ic->provided_data_sectors != old_provided_data_sectors) { 3142 3138 if (ic->provided_data_sectors > old_provided_data_sectors && ··· 3378 3370 limits->logical_block_size = ic->sectors_per_block << SECTOR_SHIFT; 3379 3371 limits->physical_block_size = ic->sectors_per_block << SECTOR_SHIFT; 3380 3372 blk_limits_io_min(limits, ic->sectors_per_block << SECTOR_SHIFT); 3373 + limits->dma_alignment = limits->logical_block_size - 1; 3381 3374 } 3382 3375 } 3383 3376
+2 -2
drivers/md/dm-ioctl.c
··· 655 655 size_t *needed = needed_param; 656 656 657 657 *needed += sizeof(struct dm_target_versions); 658 - *needed += strlen(tt->name); 658 + *needed += strlen(tt->name) + 1; 659 659 *needed += ALIGN_MASK; 660 660 } 661 661 ··· 720 720 iter_info.old_vers = NULL; 721 721 iter_info.vers = vers; 722 722 iter_info.flags = 0; 723 - iter_info.end = (char *)vers+len; 723 + iter_info.end = (char *)vers + needed; 724 724 725 725 /* 726 726 * Now loop through filling out the names & versions.
+1
drivers/md/dm-log-writes.c
··· 875 875 limits->logical_block_size = bdev_logical_block_size(lc->dev->bdev); 876 876 limits->physical_block_size = bdev_physical_block_size(lc->dev->bdev); 877 877 limits->io_min = limits->physical_block_size; 878 + limits->dma_alignment = limits->logical_block_size - 1; 878 879 } 879 880 880 881 #if IS_ENABLED(CONFIG_FS_DAX)
+2
drivers/misc/vmw_vmci/vmci_queue_pair.c
··· 854 854 u32 context_id = vmci_get_context_id(); 855 855 struct vmci_event_qp ev; 856 856 857 + memset(&ev, 0, sizeof(ev)); 857 858 ev.msg.hdr.dst = vmci_make_handle(context_id, VMCI_EVENT_HANDLER); 858 859 ev.msg.hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID, 859 860 VMCI_CONTEXT_RESOURCE_ID); ··· 1468 1467 * kernel. 1469 1468 */ 1470 1469 1470 + memset(&ev, 0, sizeof(ev)); 1471 1471 ev.msg.hdr.dst = vmci_make_handle(peer_id, VMCI_EVENT_HANDLER); 1472 1472 ev.msg.hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID, 1473 1473 VMCI_CONTEXT_RESOURCE_ID);
+7 -1
drivers/mmc/core/core.c
··· 1134 1134 mmc_power_cycle(host, ocr); 1135 1135 } else { 1136 1136 bit = fls(ocr) - 1; 1137 - ocr &= 3 << bit; 1137 + /* 1138 + * The bit variable represents the highest voltage bit set in 1139 + * the OCR register. 1140 + * To keep a range of 2 values (e.g. 3.2V/3.3V and 3.3V/3.4V), 1141 + * we must shift the mask '3' with (bit - 1). 1142 + */ 1143 + ocr &= 3 << (bit - 1); 1138 1144 if (bit != host->ios.vdd) 1139 1145 dev_warn(mmc_dev(host), "exceeding card's volts\n"); 1140 1146 }
+2
drivers/mmc/host/sdhci-pci-core.c
··· 1749 1749 } 1750 1750 } 1751 1751 1752 + pci_dev_put(smbus_dev); 1753 + 1752 1754 if (gen == AMD_CHIPSET_BEFORE_ML || gen == AMD_CHIPSET_CZ) 1753 1755 chip->quirks2 |= SDHCI_QUIRK2_CLEAR_TRANSFERMODE_REG_BEFORE_CMD; 1754 1756
+7
drivers/mmc/host/sdhci-pci-o2micro.c
··· 32 32 #define O2_SD_CAPS 0xE0 33 33 #define O2_SD_ADMA1 0xE2 34 34 #define O2_SD_ADMA2 0xE7 35 + #define O2_SD_MISC_CTRL2 0xF0 35 36 #define O2_SD_INF_MOD 0xF1 36 37 #define O2_SD_MISC_CTRL4 0xFC 37 38 #define O2_SD_MISC_CTRL 0x1C0 ··· 878 877 /* Set Tuning Windows to 5 */ 879 878 pci_write_config_byte(chip->pdev, 880 879 O2_SD_TUNING_CTRL, 0x55); 880 + //Adjust 1st and 2nd CD debounce time 881 + pci_read_config_dword(chip->pdev, O2_SD_MISC_CTRL2, &scratch_32); 882 + scratch_32 &= 0xFFE7FFFF; 883 + scratch_32 |= 0x00180000; 884 + pci_write_config_dword(chip->pdev, O2_SD_MISC_CTRL2, scratch_32); 885 + pci_write_config_dword(chip->pdev, O2_SD_DETECT_SETTING, 1); 881 886 /* Lock WP */ 882 887 ret = pci_read_config_byte(chip->pdev, 883 888 O2_SD_LOCK_WP, &scratch);
+1
drivers/mtd/nand/onenand/Kconfig
··· 26 26 tristate "OneNAND on OMAP2/OMAP3 support" 27 27 depends on ARCH_OMAP2 || ARCH_OMAP3 || (COMPILE_TEST && ARM) 28 28 depends on OF || COMPILE_TEST 29 + depends on OMAP_GPMC 29 30 help 30 31 Support for a OneNAND flash device connected to an OMAP2/OMAP3 SoC 31 32 via the GPMC memory controller.
+2 -2
drivers/mtd/nand/raw/nand_base.c
··· 5834 5834 int req_step = requirements->step_size; 5835 5835 int req_strength = requirements->strength; 5836 5836 int req_corr, step_size, strength, nsteps, ecc_bytes, ecc_bytes_total; 5837 - int best_step, best_strength, best_ecc_bytes; 5837 + int best_step = 0, best_strength = 0, best_ecc_bytes = 0; 5838 5838 int best_ecc_bytes_total = INT_MAX; 5839 5839 int i, j; 5840 5840 ··· 5915 5915 int step_size, strength, nsteps, ecc_bytes, corr; 5916 5916 int best_corr = 0; 5917 5917 int best_step = 0; 5918 - int best_strength, best_ecc_bytes; 5918 + int best_strength = 0, best_ecc_bytes = 0; 5919 5919 int i, j; 5920 5920 5921 5921 for (i = 0; i < caps->nstepinfos; i++) {
+7 -5
drivers/mtd/nand/raw/qcom_nandc.c
··· 3167 3167 3168 3168 ret = mtd_device_parse_register(mtd, probes, NULL, NULL, 0); 3169 3169 if (ret) 3170 - nand_cleanup(chip); 3170 + goto err; 3171 3171 3172 3172 if (nandc->props->use_codeword_fixup) { 3173 3173 ret = qcom_nand_host_parse_boot_partitions(nandc, host, dn); 3174 - if (ret) { 3175 - nand_cleanup(chip); 3176 - return ret; 3177 - } 3174 + if (ret) 3175 + goto err; 3178 3176 } 3179 3177 3178 + return 0; 3179 + 3180 + err: 3181 + nand_cleanup(chip); 3180 3182 return ret; 3181 3183 } 3182 3184
+7 -1
drivers/net/ethernet/amazon/ena/ena_netdev.c
··· 4543 4543 4544 4544 static int __init ena_init(void) 4545 4545 { 4546 + int ret; 4547 + 4546 4548 ena_wq = create_singlethread_workqueue(DRV_MODULE_NAME); 4547 4549 if (!ena_wq) { 4548 4550 pr_err("Failed to create workqueue\n"); 4549 4551 return -ENOMEM; 4550 4552 } 4551 4553 4552 - return pci_register_driver(&ena_pci_driver); 4554 + ret = pci_register_driver(&ena_pci_driver); 4555 + if (ret) 4556 + destroy_workqueue(ena_wq); 4557 + 4558 + return ret; 4553 4559 } 4554 4560 4555 4561 static void __exit ena_cleanup(void)
+2 -1
drivers/net/ethernet/atheros/ag71xx.c
··· 1427 1427 if (ret) { 1428 1428 netif_err(ag, link, ndev, "phylink_of_phy_connect filed with err: %i\n", 1429 1429 ret); 1430 - goto err; 1430 + return ret; 1431 1431 } 1432 1432 1433 1433 max_frame_len = ag71xx_max_frame_len(ndev->mtu); ··· 1448 1448 1449 1449 err: 1450 1450 ag71xx_rings_cleanup(ag); 1451 + phylink_disconnect_phy(ag->phylink); 1451 1452 return ret; 1452 1453 } 1453 1454
-1
drivers/net/ethernet/broadcom/bgmac.c
··· 1568 1568 phy_disconnect(bgmac->net_dev->phydev); 1569 1569 netif_napi_del(&bgmac->napi); 1570 1570 bgmac_dma_free(bgmac); 1571 - free_netdev(bgmac->net_dev); 1572 1571 } 1573 1572 EXPORT_SYMBOL_GPL(bgmac_enet_remove); 1574 1573
+9 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 14037 14037 14038 14038 static int __init bnxt_init(void) 14039 14039 { 14040 + int err; 14041 + 14040 14042 bnxt_debug_init(); 14041 - return pci_register_driver(&bnxt_pci_driver); 14043 + err = pci_register_driver(&bnxt_pci_driver); 14044 + if (err) { 14045 + bnxt_debug_exit(); 14046 + return err; 14047 + } 14048 + 14049 + return 0; 14042 14050 } 14043 14051 14044 14052 static void __exit bnxt_exit(void)
+26 -8
drivers/net/ethernet/cavium/liquidio/lio_main.c
··· 1794 1794 1795 1795 ifstate_set(lio, LIO_IFSTATE_RUNNING); 1796 1796 1797 - if (OCTEON_CN23XX_PF(oct)) { 1798 - if (!oct->msix_on) 1799 - if (setup_tx_poll_fn(netdev)) 1800 - return -1; 1801 - } else { 1802 - if (setup_tx_poll_fn(netdev)) 1803 - return -1; 1797 + if (!OCTEON_CN23XX_PF(oct) || (OCTEON_CN23XX_PF(oct) && !oct->msix_on)) { 1798 + ret = setup_tx_poll_fn(netdev); 1799 + if (ret) 1800 + goto err_poll; 1804 1801 } 1805 1802 1806 1803 netif_tx_start_all_queues(netdev); ··· 1810 1813 /* tell Octeon to start forwarding packets to host */ 1811 1814 ret = send_rx_ctrl_cmd(lio, 1); 1812 1815 if (ret) 1813 - return ret; 1816 + goto err_rx_ctrl; 1814 1817 1815 1818 /* start periodical statistics fetch */ 1816 1819 INIT_DELAYED_WORK(&lio->stats_wk.work, lio_fetch_stats); ··· 1820 1823 1821 1824 dev_info(&oct->pci_dev->dev, "%s interface is opened\n", 1822 1825 netdev->name); 1826 + 1827 + return 0; 1828 + 1829 + err_rx_ctrl: 1830 + if (!OCTEON_CN23XX_PF(oct) || (OCTEON_CN23XX_PF(oct) && !oct->msix_on)) 1831 + cleanup_tx_poll_fn(netdev); 1832 + err_poll: 1833 + if (lio->ptp_clock) { 1834 + ptp_clock_unregister(lio->ptp_clock); 1835 + lio->ptp_clock = NULL; 1836 + } 1837 + 1838 + if (oct->props[lio->ifidx].napi_enabled == 1) { 1839 + list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list) 1840 + napi_disable(napi); 1841 + 1842 + oct->props[lio->ifidx].napi_enabled = 0; 1843 + 1844 + if (OCTEON_CN23XX_PF(oct)) 1845 + oct->droq[0]->ops.poll_mode = 0; 1846 + } 1823 1847 1824 1848 return ret; 1825 1849 }
-1
drivers/net/ethernet/hisilicon/hns3/hnae3.h
··· 819 819 const struct hnae3_dcb_ops *dcb_ops; 820 820 821 821 u16 int_rl_setting; 822 - enum pkt_hash_types rss_type; 823 822 void __iomem *io_base; 824 823 }; 825 824
-20
drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_rss.c
··· 191 191 return HCLGE_COMM_RSS_KEY_SIZE; 192 192 } 193 193 194 - void hclge_comm_get_rss_type(struct hnae3_handle *nic, 195 - struct hclge_comm_rss_tuple_cfg *rss_tuple_sets) 196 - { 197 - if (rss_tuple_sets->ipv4_tcp_en || 198 - rss_tuple_sets->ipv4_udp_en || 199 - rss_tuple_sets->ipv4_sctp_en || 200 - rss_tuple_sets->ipv6_tcp_en || 201 - rss_tuple_sets->ipv6_udp_en || 202 - rss_tuple_sets->ipv6_sctp_en) 203 - nic->kinfo.rss_type = PKT_HASH_TYPE_L4; 204 - else if (rss_tuple_sets->ipv4_fragment_en || 205 - rss_tuple_sets->ipv6_fragment_en) 206 - nic->kinfo.rss_type = PKT_HASH_TYPE_L3; 207 - else 208 - nic->kinfo.rss_type = PKT_HASH_TYPE_NONE; 209 - } 210 - 211 194 int hclge_comm_parse_rss_hfunc(struct hclge_comm_rss_cfg *rss_cfg, 212 195 const u8 hfunc, u8 *hash_algo) 213 196 { ··· 326 343 req->ipv6_udp_en = rss_cfg->rss_tuple_sets.ipv6_udp_en; 327 344 req->ipv6_sctp_en = rss_cfg->rss_tuple_sets.ipv6_sctp_en; 328 345 req->ipv6_fragment_en = rss_cfg->rss_tuple_sets.ipv6_fragment_en; 329 - 330 - if (is_pf) 331 - hclge_comm_get_rss_type(nic, &rss_cfg->rss_tuple_sets); 332 346 333 347 ret = hclge_comm_cmd_send(hw, &desc, 1); 334 348 if (ret)
-2
drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_rss.h
··· 95 95 }; 96 96 97 97 u32 hclge_comm_get_rss_key_size(struct hnae3_handle *handle); 98 - void hclge_comm_get_rss_type(struct hnae3_handle *nic, 99 - struct hclge_comm_rss_tuple_cfg *rss_tuple_sets); 100 98 void hclge_comm_rss_indir_init_cfg(struct hnae3_ae_dev *ae_dev, 101 99 struct hclge_comm_rss_cfg *rss_cfg); 102 100 int hclge_comm_get_rss_tuple(struct hclge_comm_rss_cfg *rss_cfg, int flow_type,
+95 -72
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 105 105 }; 106 106 MODULE_DEVICE_TABLE(pci, hns3_pci_tbl); 107 107 108 - #define HNS3_RX_PTYPE_ENTRY(ptype, l, s, t) \ 108 + #define HNS3_RX_PTYPE_ENTRY(ptype, l, s, t, h) \ 109 109 { ptype, \ 110 110 l, \ 111 111 CHECKSUM_##s, \ 112 112 HNS3_L3_TYPE_##t, \ 113 - 1 } 113 + 1, \ 114 + h} 114 115 115 116 #define HNS3_RX_PTYPE_UNUSED_ENTRY(ptype) \ 116 - { ptype, 0, CHECKSUM_NONE, HNS3_L3_TYPE_PARSE_FAIL, 0 } 117 + { ptype, 0, CHECKSUM_NONE, HNS3_L3_TYPE_PARSE_FAIL, 0, \ 118 + PKT_HASH_TYPE_NONE } 117 119 118 120 static const struct hns3_rx_ptype hns3_rx_ptype_tbl[] = { 119 121 HNS3_RX_PTYPE_UNUSED_ENTRY(0), 120 - HNS3_RX_PTYPE_ENTRY(1, 0, COMPLETE, ARP), 121 - HNS3_RX_PTYPE_ENTRY(2, 0, COMPLETE, RARP), 122 - HNS3_RX_PTYPE_ENTRY(3, 0, COMPLETE, LLDP), 123 - HNS3_RX_PTYPE_ENTRY(4, 0, COMPLETE, PARSE_FAIL), 124 - HNS3_RX_PTYPE_ENTRY(5, 0, COMPLETE, PARSE_FAIL), 125 - HNS3_RX_PTYPE_ENTRY(6, 0, COMPLETE, PARSE_FAIL), 126 - HNS3_RX_PTYPE_ENTRY(7, 0, COMPLETE, CNM), 127 - HNS3_RX_PTYPE_ENTRY(8, 0, NONE, PARSE_FAIL), 122 + HNS3_RX_PTYPE_ENTRY(1, 0, COMPLETE, ARP, PKT_HASH_TYPE_NONE), 123 + HNS3_RX_PTYPE_ENTRY(2, 0, COMPLETE, RARP, PKT_HASH_TYPE_NONE), 124 + HNS3_RX_PTYPE_ENTRY(3, 0, COMPLETE, LLDP, PKT_HASH_TYPE_NONE), 125 + HNS3_RX_PTYPE_ENTRY(4, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), 126 + HNS3_RX_PTYPE_ENTRY(5, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), 127 + HNS3_RX_PTYPE_ENTRY(6, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), 128 + HNS3_RX_PTYPE_ENTRY(7, 0, COMPLETE, CNM, PKT_HASH_TYPE_NONE), 129 + HNS3_RX_PTYPE_ENTRY(8, 0, NONE, PARSE_FAIL, PKT_HASH_TYPE_NONE), 128 130 HNS3_RX_PTYPE_UNUSED_ENTRY(9), 129 131 HNS3_RX_PTYPE_UNUSED_ENTRY(10), 130 132 HNS3_RX_PTYPE_UNUSED_ENTRY(11), ··· 134 132 HNS3_RX_PTYPE_UNUSED_ENTRY(13), 135 133 HNS3_RX_PTYPE_UNUSED_ENTRY(14), 136 134 HNS3_RX_PTYPE_UNUSED_ENTRY(15), 137 - HNS3_RX_PTYPE_ENTRY(16, 0, COMPLETE, PARSE_FAIL), 138 - HNS3_RX_PTYPE_ENTRY(17, 0, COMPLETE, IPV4), 139 - HNS3_RX_PTYPE_ENTRY(18, 0, COMPLETE, IPV4), 140 - HNS3_RX_PTYPE_ENTRY(19, 0, UNNECESSARY, IPV4), 141 - HNS3_RX_PTYPE_ENTRY(20, 0, UNNECESSARY, IPV4), 142 - HNS3_RX_PTYPE_ENTRY(21, 0, NONE, IPV4), 143 - HNS3_RX_PTYPE_ENTRY(22, 0, UNNECESSARY, IPV4), 144 - HNS3_RX_PTYPE_ENTRY(23, 0, NONE, IPV4), 145 - HNS3_RX_PTYPE_ENTRY(24, 0, NONE, IPV4), 146 - HNS3_RX_PTYPE_ENTRY(25, 0, UNNECESSARY, IPV4), 135 + HNS3_RX_PTYPE_ENTRY(16, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), 136 + HNS3_RX_PTYPE_ENTRY(17, 0, COMPLETE, IPV4, PKT_HASH_TYPE_NONE), 137 + HNS3_RX_PTYPE_ENTRY(18, 0, COMPLETE, IPV4, PKT_HASH_TYPE_NONE), 138 + HNS3_RX_PTYPE_ENTRY(19, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), 139 + HNS3_RX_PTYPE_ENTRY(20, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), 140 + HNS3_RX_PTYPE_ENTRY(21, 0, NONE, IPV4, PKT_HASH_TYPE_NONE), 141 + HNS3_RX_PTYPE_ENTRY(22, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), 142 + HNS3_RX_PTYPE_ENTRY(23, 0, NONE, IPV4, PKT_HASH_TYPE_L3), 143 + HNS3_RX_PTYPE_ENTRY(24, 0, NONE, IPV4, PKT_HASH_TYPE_L3), 144 + HNS3_RX_PTYPE_ENTRY(25, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), 147 145 HNS3_RX_PTYPE_UNUSED_ENTRY(26), 148 146 HNS3_RX_PTYPE_UNUSED_ENTRY(27), 149 147 HNS3_RX_PTYPE_UNUSED_ENTRY(28), 150 - HNS3_RX_PTYPE_ENTRY(29, 0, COMPLETE, PARSE_FAIL), 151 - HNS3_RX_PTYPE_ENTRY(30, 0, COMPLETE, PARSE_FAIL), 152 - HNS3_RX_PTYPE_ENTRY(31, 0, COMPLETE, IPV4), 153 - HNS3_RX_PTYPE_ENTRY(32, 0, COMPLETE, IPV4), 154 - HNS3_RX_PTYPE_ENTRY(33, 1, UNNECESSARY, IPV4), 155 - HNS3_RX_PTYPE_ENTRY(34, 1, UNNECESSARY, IPV4), 156 - HNS3_RX_PTYPE_ENTRY(35, 1, UNNECESSARY, IPV4), 157 - HNS3_RX_PTYPE_ENTRY(36, 0, COMPLETE, IPV4), 158 - HNS3_RX_PTYPE_ENTRY(37, 0, COMPLETE, IPV4), 148 + HNS3_RX_PTYPE_ENTRY(29, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), 149 + HNS3_RX_PTYPE_ENTRY(30, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), 150 + HNS3_RX_PTYPE_ENTRY(31, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), 151 + HNS3_RX_PTYPE_ENTRY(32, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), 152 + HNS3_RX_PTYPE_ENTRY(33, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), 153 + HNS3_RX_PTYPE_ENTRY(34, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), 154 + HNS3_RX_PTYPE_ENTRY(35, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), 155 + HNS3_RX_PTYPE_ENTRY(36, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), 156 + HNS3_RX_PTYPE_ENTRY(37, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), 159 157 HNS3_RX_PTYPE_UNUSED_ENTRY(38), 160 - HNS3_RX_PTYPE_ENTRY(39, 0, COMPLETE, IPV6), 161 - HNS3_RX_PTYPE_ENTRY(40, 0, COMPLETE, IPV6), 162 - HNS3_RX_PTYPE_ENTRY(41, 1, UNNECESSARY, IPV6), 163 - HNS3_RX_PTYPE_ENTRY(42, 1, UNNECESSARY, IPV6), 164 - HNS3_RX_PTYPE_ENTRY(43, 1, UNNECESSARY, IPV6), 165 - HNS3_RX_PTYPE_ENTRY(44, 0, COMPLETE, IPV6), 166 - HNS3_RX_PTYPE_ENTRY(45, 0, COMPLETE, IPV6), 158 + HNS3_RX_PTYPE_ENTRY(39, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), 159 + HNS3_RX_PTYPE_ENTRY(40, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), 160 + HNS3_RX_PTYPE_ENTRY(41, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), 161 + HNS3_RX_PTYPE_ENTRY(42, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), 162 + HNS3_RX_PTYPE_ENTRY(43, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), 163 + HNS3_RX_PTYPE_ENTRY(44, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), 164 + HNS3_RX_PTYPE_ENTRY(45, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), 167 165 HNS3_RX_PTYPE_UNUSED_ENTRY(46), 168 166 HNS3_RX_PTYPE_UNUSED_ENTRY(47), 169 167 HNS3_RX_PTYPE_UNUSED_ENTRY(48), ··· 229 227 HNS3_RX_PTYPE_UNUSED_ENTRY(108), 230 228 HNS3_RX_PTYPE_UNUSED_ENTRY(109), 231 229 HNS3_RX_PTYPE_UNUSED_ENTRY(110), 232 - HNS3_RX_PTYPE_ENTRY(111, 0, COMPLETE, IPV6), 233 - HNS3_RX_PTYPE_ENTRY(112, 0, COMPLETE, IPV6), 234 - HNS3_RX_PTYPE_ENTRY(113, 0, UNNECESSARY, IPV6), 235 - HNS3_RX_PTYPE_ENTRY(114, 0, UNNECESSARY, IPV6), 236 - HNS3_RX_PTYPE_ENTRY(115, 0, NONE, IPV6), 237 - HNS3_RX_PTYPE_ENTRY(116, 0, UNNECESSARY, IPV6), 238 - HNS3_RX_PTYPE_ENTRY(117, 0, NONE, IPV6), 239 - HNS3_RX_PTYPE_ENTRY(118, 0, NONE, IPV6), 240 - HNS3_RX_PTYPE_ENTRY(119, 0, UNNECESSARY, IPV6), 230 + HNS3_RX_PTYPE_ENTRY(111, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), 231 + HNS3_RX_PTYPE_ENTRY(112, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), 232 + HNS3_RX_PTYPE_ENTRY(113, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), 233 + HNS3_RX_PTYPE_ENTRY(114, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), 234 + HNS3_RX_PTYPE_ENTRY(115, 0, NONE, IPV6, PKT_HASH_TYPE_L3), 235 + HNS3_RX_PTYPE_ENTRY(116, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), 236 + HNS3_RX_PTYPE_ENTRY(117, 0, NONE, IPV6, PKT_HASH_TYPE_L3), 237 + HNS3_RX_PTYPE_ENTRY(118, 0, NONE, IPV6, PKT_HASH_TYPE_L3), 238 + HNS3_RX_PTYPE_ENTRY(119, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), 241 239 HNS3_RX_PTYPE_UNUSED_ENTRY(120), 242 240 HNS3_RX_PTYPE_UNUSED_ENTRY(121), 243 241 HNS3_RX_PTYPE_UNUSED_ENTRY(122), 244 - HNS3_RX_PTYPE_ENTRY(123, 0, COMPLETE, PARSE_FAIL), 245 - HNS3_RX_PTYPE_ENTRY(124, 0, COMPLETE, PARSE_FAIL), 246 - HNS3_RX_PTYPE_ENTRY(125, 0, COMPLETE, IPV4), 247 - HNS3_RX_PTYPE_ENTRY(126, 0, COMPLETE, IPV4), 248 - HNS3_RX_PTYPE_ENTRY(127, 1, UNNECESSARY, IPV4), 249 - HNS3_RX_PTYPE_ENTRY(128, 1, UNNECESSARY, IPV4), 250 - HNS3_RX_PTYPE_ENTRY(129, 1, UNNECESSARY, IPV4), 251 - HNS3_RX_PTYPE_ENTRY(130, 0, COMPLETE, IPV4), 252 - HNS3_RX_PTYPE_ENTRY(131, 0, COMPLETE, IPV4), 242 + HNS3_RX_PTYPE_ENTRY(123, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), 243 + HNS3_RX_PTYPE_ENTRY(124, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), 244 + HNS3_RX_PTYPE_ENTRY(125, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), 245 + HNS3_RX_PTYPE_ENTRY(126, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), 246 + HNS3_RX_PTYPE_ENTRY(127, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), 247 + HNS3_RX_PTYPE_ENTRY(128, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), 248 + HNS3_RX_PTYPE_ENTRY(129, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), 249 + HNS3_RX_PTYPE_ENTRY(130, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), 250 + HNS3_RX_PTYPE_ENTRY(131, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), 253 251 HNS3_RX_PTYPE_UNUSED_ENTRY(132), 254 - HNS3_RX_PTYPE_ENTRY(133, 0, COMPLETE, IPV6), 255 - HNS3_RX_PTYPE_ENTRY(134, 0, COMPLETE, IPV6), 256 - HNS3_RX_PTYPE_ENTRY(135, 1, UNNECESSARY, IPV6), 257 - HNS3_RX_PTYPE_ENTRY(136, 1, UNNECESSARY, IPV6), 258 - HNS3_RX_PTYPE_ENTRY(137, 1, UNNECESSARY, IPV6), 259 - HNS3_RX_PTYPE_ENTRY(138, 0, COMPLETE, IPV6), 260 - HNS3_RX_PTYPE_ENTRY(139, 0, COMPLETE, IPV6), 252 + HNS3_RX_PTYPE_ENTRY(133, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), 253 + HNS3_RX_PTYPE_ENTRY(134, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), 254 + HNS3_RX_PTYPE_ENTRY(135, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), 255 + HNS3_RX_PTYPE_ENTRY(136, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), 256 + HNS3_RX_PTYPE_ENTRY(137, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), 257 + HNS3_RX_PTYPE_ENTRY(138, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), 258 + HNS3_RX_PTYPE_ENTRY(139, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), 261 259 HNS3_RX_PTYPE_UNUSED_ENTRY(140), 262 260 HNS3_RX_PTYPE_UNUSED_ENTRY(141), 263 261 HNS3_RX_PTYPE_UNUSED_ENTRY(142), ··· 3778 3776 desc_cb->reuse_flag = 1; 3779 3777 } else if (frag_size <= ring->rx_copybreak) { 3780 3778 ret = hns3_handle_rx_copybreak(skb, i, ring, pull_len, desc_cb); 3781 - if (ret) 3782 - goto out; 3779 + if (!ret) 3780 + return; 3783 3781 } 3784 3782 3785 3783 out: ··· 4173 4171 } 4174 4172 4175 4173 static void hns3_set_rx_skb_rss_type(struct hns3_enet_ring *ring, 4176 - struct sk_buff *skb, u32 rss_hash) 4174 + struct sk_buff *skb, u32 rss_hash, 4175 + u32 l234info, u32 ol_info) 4177 4176 { 4178 - struct hnae3_handle *handle = ring->tqp->handle; 4179 - enum pkt_hash_types rss_type; 4177 + enum pkt_hash_types rss_type = PKT_HASH_TYPE_NONE; 4178 + struct net_device *netdev = ring_to_netdev(ring); 4179 + struct hns3_nic_priv *priv = netdev_priv(netdev); 4180 4180 4181 - if (rss_hash) 4182 - rss_type = handle->kinfo.rss_type; 4183 - else 4184 - rss_type = PKT_HASH_TYPE_NONE; 4181 + if (test_bit(HNS3_NIC_STATE_RXD_ADV_LAYOUT_ENABLE, &priv->state)) { 4182 + u32 ptype = hnae3_get_field(ol_info, HNS3_RXD_PTYPE_M, 4183 + HNS3_RXD_PTYPE_S); 4184 + 4185 + rss_type = hns3_rx_ptype_tbl[ptype].hash_type; 4186 + } else { 4187 + int l3_type = hnae3_get_field(l234info, HNS3_RXD_L3ID_M, 4188 + HNS3_RXD_L3ID_S); 4189 + int l4_type = hnae3_get_field(l234info, HNS3_RXD_L4ID_M, 4190 + HNS3_RXD_L4ID_S); 4191 + 4192 + if (l3_type == HNS3_L3_TYPE_IPV4 || 4193 + l3_type == HNS3_L3_TYPE_IPV6) { 4194 + if (l4_type == HNS3_L4_TYPE_UDP || 4195 + l4_type == HNS3_L4_TYPE_TCP || 4196 + l4_type == HNS3_L4_TYPE_SCTP) 4197 + rss_type = PKT_HASH_TYPE_L4; 4198 + else if (l4_type == HNS3_L4_TYPE_IGMP || 4199 + l4_type == HNS3_L4_TYPE_ICMP) 4200 + rss_type = PKT_HASH_TYPE_L3; 4201 + } 4202 + } 4185 4203 4186 4204 skb_set_hash(skb, rss_hash, rss_type); 4187 4205 } ··· 4304 4282 4305 4283 ring->tqp_vector->rx_group.total_bytes += len; 4306 4284 4307 - hns3_set_rx_skb_rss_type(ring, skb, le32_to_cpu(desc->rx.rss_hash)); 4285 + hns3_set_rx_skb_rss_type(ring, skb, le32_to_cpu(desc->rx.rss_hash), 4286 + l234info, ol_info); 4308 4287 return 0; 4309 4288 } 4310 4289
+1
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
··· 404 404 u32 ip_summed : 2; 405 405 u32 l3_type : 4; 406 406 u32 valid : 1; 407 + u32 hash_type: 3; 407 408 }; 408 409 409 410 struct ring_stats {
+7 -4
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 3443 3443 hdev->hw.mac.autoneg = cmd.base.autoneg; 3444 3444 hdev->hw.mac.speed = cmd.base.speed; 3445 3445 hdev->hw.mac.duplex = cmd.base.duplex; 3446 + linkmode_copy(hdev->hw.mac.advertising, cmd.link_modes.advertising); 3446 3447 3447 3448 return 0; 3448 3449 } ··· 4860 4859 return ret; 4861 4860 } 4862 4861 4863 - hclge_comm_get_rss_type(&vport->nic, &hdev->rss_cfg.rss_tuple_sets); 4864 4862 return 0; 4865 4863 } 4866 4864 ··· 11587 11587 if (ret) 11588 11588 goto err_msi_irq_uninit; 11589 11589 11590 - if (hdev->hw.mac.media_type == HNAE3_MEDIA_TYPE_COPPER && 11591 - !hnae3_dev_phy_imp_supported(hdev)) { 11592 - ret = hclge_mac_mdio_config(hdev); 11590 + if (hdev->hw.mac.media_type == HNAE3_MEDIA_TYPE_COPPER) { 11591 + if (hnae3_dev_phy_imp_supported(hdev)) 11592 + ret = hclge_update_tp_port_info(hdev); 11593 + else 11594 + ret = hclge_mac_mdio_config(hdev); 11595 + 11593 11596 if (ret) 11594 11597 goto err_msi_irq_uninit; 11595 11598 }
+8 -1
drivers/net/ethernet/huawei/hinic/hinic_main.c
··· 1474 1474 1475 1475 static int __init hinic_module_init(void) 1476 1476 { 1477 + int ret; 1478 + 1477 1479 hinic_dbg_register_debugfs(HINIC_DRV_NAME); 1478 - return pci_register_driver(&hinic_driver); 1480 + 1481 + ret = pci_register_driver(&hinic_driver); 1482 + if (ret) 1483 + hinic_dbg_unregister_debugfs(); 1484 + 1485 + return ret; 1479 1486 } 1480 1487 1481 1488 static void __exit hinic_module_exit(void)
+11 -5
drivers/net/ethernet/marvell/octeon_ep/octep_main.c
··· 521 521 octep_oq_dbell_init(oct); 522 522 523 523 ret = octep_get_link_status(oct); 524 - if (ret) 524 + if (ret > 0) 525 525 octep_link_up(netdev); 526 526 527 527 return 0; 528 528 529 529 set_queues_err: 530 - octep_napi_disable(oct); 531 - octep_napi_delete(oct); 532 530 octep_clean_irqs(oct); 533 531 setup_irq_err: 534 532 octep_free_oqs(oct); ··· 956 958 ret = octep_ctrl_mbox_init(ctrl_mbox); 957 959 if (ret) { 958 960 dev_err(&pdev->dev, "Failed to initialize control mbox\n"); 959 - return -1; 961 + goto unsupported_dev; 960 962 } 961 963 oct->ctrl_mbox_ifstats_offset = OCTEP_CTRL_MBOX_SZ(ctrl_mbox->h2fq.elem_sz, 962 964 ctrl_mbox->h2fq.elem_cnt, ··· 966 968 return 0; 967 969 968 970 unsupported_dev: 971 + for (i = 0; i < OCTEP_MMIO_REGIONS; i++) 972 + iounmap(oct->mmio[i].hw_addr); 973 + 974 + kfree(oct->conf); 969 975 return -1; 970 976 } 971 977 ··· 1072 1070 netdev->max_mtu = OCTEP_MAX_MTU; 1073 1071 netdev->mtu = OCTEP_DEFAULT_MTU; 1074 1072 1075 - octep_get_mac_addr(octep_dev, octep_dev->mac_addr); 1073 + err = octep_get_mac_addr(octep_dev, octep_dev->mac_addr); 1074 + if (err) { 1075 + dev_err(&pdev->dev, "Failed to get mac address\n"); 1076 + goto register_dev_err; 1077 + } 1076 1078 eth_hw_addr_set(netdev, octep_dev->mac_addr); 1077 1079 1078 1080 err = register_netdev(netdev);
+2
drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
··· 3470 3470 u16 vid; 3471 3471 3472 3472 vxlan_fdb_info = &switchdev_work->vxlan_fdb_info; 3473 + if (!vxlan_fdb_info->offloaded) 3474 + return; 3473 3475 3474 3476 bridge_device = mlxsw_sp_bridge_device_find(mlxsw_sp->bridge, br_dev); 3475 3477 if (!bridge_device)
+3
drivers/net/ethernet/microchip/lan966x/lan966x_ethtool.c
··· 716 716 snprintf(queue_name, sizeof(queue_name), "%s-stats", 717 717 dev_name(lan966x->dev)); 718 718 lan966x->stats_queue = create_singlethread_workqueue(queue_name); 719 + if (!lan966x->stats_queue) 720 + return -ENOMEM; 721 + 719 722 INIT_DELAYED_WORK(&lan966x->stats_work, lan966x_check_stats_work); 720 723 queue_delayed_work(lan966x->stats_queue, &lan966x->stats_work, 721 724 LAN966X_STATS_CHECK_DELAY);
+3
drivers/net/ethernet/microchip/sparx5/sparx5_ethtool.c
··· 1253 1253 snprintf(queue_name, sizeof(queue_name), "%s-stats", 1254 1254 dev_name(sparx5->dev)); 1255 1255 sparx5->stats_queue = create_singlethread_workqueue(queue_name); 1256 + if (!sparx5->stats_queue) 1257 + return -ENOMEM; 1258 + 1256 1259 INIT_DELAYED_WORK(&sparx5->stats_work, sparx5_check_stats_work); 1257 1260 queue_delayed_work(sparx5->stats_queue, &sparx5->stats_work, 1258 1261 SPX5_STATS_CHECK_DELAY);
+3
drivers/net/ethernet/microchip/sparx5/sparx5_main.c
··· 659 659 snprintf(queue_name, sizeof(queue_name), "%s-mact", 660 660 dev_name(sparx5->dev)); 661 661 sparx5->mact_queue = create_singlethread_workqueue(queue_name); 662 + if (!sparx5->mact_queue) 663 + return -ENOMEM; 664 + 662 665 INIT_DELAYED_WORK(&sparx5->mact_work, sparx5_mact_pull_work); 663 666 queue_delayed_work(sparx5->mact_queue, &sparx5->mact_work, 664 667 SPX5_MACT_PULL_DELAY);
+3 -3
drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
··· 1477 1477 1478 1478 if (data < 0x3) { 1479 1479 modinfo->type = ETH_MODULE_SFF_8436; 1480 - modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN; 1480 + modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN; 1481 1481 } else { 1482 1482 modinfo->type = ETH_MODULE_SFF_8636; 1483 - modinfo->eeprom_len = ETH_MODULE_SFF_8636_LEN; 1483 + modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN; 1484 1484 } 1485 1485 break; 1486 1486 case NFP_INTERFACE_QSFP28: 1487 1487 modinfo->type = ETH_MODULE_SFF_8636; 1488 - modinfo->eeprom_len = ETH_MODULE_SFF_8636_LEN; 1488 + modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN; 1489 1489 break; 1490 1490 default: 1491 1491 netdev_err(netdev, "Unsupported module 0x%x detected\n",
+7 -1
drivers/net/ethernet/pensando/ionic/ionic_main.c
··· 687 687 688 688 static int __init ionic_init_module(void) 689 689 { 690 + int ret; 691 + 690 692 ionic_debugfs_create(); 691 - return ionic_bus_register_driver(); 693 + ret = ionic_bus_register_driver(); 694 + if (ret) 695 + ionic_debugfs_destroy(); 696 + 697 + return ret; 692 698 } 693 699 694 700 static void __exit ionic_cleanup_module(void)
+3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 6548 6548 struct stmmac_priv *priv = netdev_priv(dev); 6549 6549 u32 chan; 6550 6550 6551 + /* Ensure tx function is not running */ 6552 + netif_tx_disable(dev); 6553 + 6551 6554 /* Disable NAPI process */ 6552 6555 stmmac_disable_all_queues(priv); 6553 6556
+2 -2
drivers/net/macvlan.c
··· 141 141 u32 idx = macvlan_eth_hash(addr); 142 142 struct hlist_head *h = &vlan->port->vlan_source_hash[idx]; 143 143 144 - hlist_for_each_entry_rcu(entry, h, hlist) { 144 + hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) { 145 145 if (ether_addr_equal_64bits(entry->addr, addr) && 146 146 entry->vlan == vlan) 147 147 return entry; ··· 1647 1647 struct hlist_head *h = &vlan->port->vlan_source_hash[i]; 1648 1648 struct macvlan_source_entry *entry; 1649 1649 1650 - hlist_for_each_entry_rcu(entry, h, hlist) { 1650 + hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) { 1651 1651 if (entry->vlan != vlan) 1652 1652 continue; 1653 1653 if (nla_put(skb, IFLA_MACVLAN_MACADDR, ETH_ALEN, entry->addr))
+32 -15
drivers/net/mctp/mctp-i2c.c
··· 43 43 enum { 44 44 MCTP_I2C_FLOW_STATE_NEW = 0, 45 45 MCTP_I2C_FLOW_STATE_ACTIVE, 46 + MCTP_I2C_FLOW_STATE_INVALID, 46 47 }; 47 48 48 49 /* List of all struct mctp_i2c_client ··· 375 374 */ 376 375 if (!key->valid) { 377 376 state = MCTP_I2C_TX_FLOW_INVALID; 378 - 379 - } else if (key->dev_flow_state == MCTP_I2C_FLOW_STATE_NEW) { 380 - key->dev_flow_state = MCTP_I2C_FLOW_STATE_ACTIVE; 381 - state = MCTP_I2C_TX_FLOW_NEW; 382 377 } else { 383 - state = MCTP_I2C_TX_FLOW_EXISTING; 378 + switch (key->dev_flow_state) { 379 + case MCTP_I2C_FLOW_STATE_NEW: 380 + key->dev_flow_state = MCTP_I2C_FLOW_STATE_ACTIVE; 381 + state = MCTP_I2C_TX_FLOW_NEW; 382 + break; 383 + case MCTP_I2C_FLOW_STATE_ACTIVE: 384 + state = MCTP_I2C_TX_FLOW_EXISTING; 385 + break; 386 + default: 387 + state = MCTP_I2C_TX_FLOW_INVALID; 388 + } 384 389 } 385 390 386 391 spin_unlock_irqrestore(&key->lock, flags); ··· 624 617 625 618 { 626 619 struct mctp_i2c_dev *midev = netdev_priv(mdev->dev); 620 + bool queue_release = false; 627 621 unsigned long flags; 628 622 629 623 spin_lock_irqsave(&midev->lock, flags); 630 - midev->release_count++; 624 + /* if we have seen the flow/key previously, we need to pair the 625 + * original lock with a release 626 + */ 627 + if (key->dev_flow_state == MCTP_I2C_FLOW_STATE_ACTIVE) { 628 + midev->release_count++; 629 + queue_release = true; 630 + } 631 + key->dev_flow_state = MCTP_I2C_FLOW_STATE_INVALID; 631 632 spin_unlock_irqrestore(&midev->lock, flags); 632 633 633 - /* Ensure we have a release operation queued, through the fake 634 - * marker skb 635 - */ 636 - spin_lock(&midev->tx_queue.lock); 637 - if (!midev->unlock_marker.next) 638 - __skb_queue_tail(&midev->tx_queue, &midev->unlock_marker); 639 - spin_unlock(&midev->tx_queue.lock); 640 - 641 - wake_up(&midev->tx_wq); 634 + if (queue_release) { 635 + /* Ensure we have a release operation queued, through the fake 636 + * marker skb 637 + */ 638 + spin_lock(&midev->tx_queue.lock); 639 + if (!midev->unlock_marker.next) 640 + __skb_queue_tail(&midev->tx_queue, 641 + &midev->unlock_marker); 642 + spin_unlock(&midev->tx_queue.lock); 643 + wake_up(&midev->tx_wq); 644 + } 642 645 } 643 646 644 647 static const struct net_device_ops mctp_i2c_ops = {
+2
drivers/net/mhi_net.c
··· 343 343 344 344 kfree_skb(mhi_netdev->skbagg_head); 345 345 346 + free_netdev(ndev); 347 + 346 348 dev_set_drvdata(&mhi_dev->dev, NULL); 347 349 } 348 350
+1
drivers/net/netdevsim/dev.c
··· 1683 1683 ARRAY_SIZE(nsim_devlink_params)); 1684 1684 devl_resources_unregister(devlink); 1685 1685 kfree(nsim_dev->vfconfigs); 1686 + kfree(nsim_dev->fa_cookie); 1686 1687 devl_unlock(devlink); 1687 1688 devlink_free(devlink); 1688 1689 dev_set_drvdata(&nsim_bus_dev->dev, NULL);
+7
drivers/net/phy/dp83867.c
··· 682 682 */ 683 683 dp83867->io_impedance = DP83867_IO_MUX_CFG_IO_IMPEDANCE_MIN / 2; 684 684 685 + /* For non-OF device, the RX and TX FIFO depths are taken from 686 + * default value. So, we init RX & TX FIFO depths here 687 + * so that it is configured correctly later in dp83867_config_init(); 688 + */ 689 + dp83867->tx_fifo_depth = DP83867_PHYCR_FIFO_DEPTH_4_B_NIB; 690 + dp83867->rx_fifo_depth = DP83867_PHYCR_FIFO_DEPTH_4_B_NIB; 691 + 685 692 return 0; 686 693 } 687 694 #endif /* CONFIG_OF_MDIO */
+9 -7
drivers/net/phy/marvell.c
··· 2015 2015 if (err < 0) 2016 2016 return err; 2017 2017 2018 - /* FIXME: Based on trial and error test, it seem 1G need to have 2019 - * delay between soft reset and loopback enablement. 2020 - */ 2021 - if (phydev->speed == SPEED_1000) 2022 - msleep(1000); 2018 + err = phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK, 2019 + BMCR_LOOPBACK); 2023 2020 2024 - return phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK, 2025 - BMCR_LOOPBACK); 2021 + if (!err) { 2022 + /* It takes some time for PHY device to switch 2023 + * into/out-of loopback mode. 2024 + */ 2025 + msleep(1000); 2026 + } 2027 + return err; 2026 2028 } else { 2027 2029 err = phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK, 0); 2028 2030 if (err < 0)
+14 -5
drivers/net/thunderbolt.c
··· 1391 1391 tb_property_add_immediate(tbnet_dir, "prtcstns", flags); 1392 1392 1393 1393 ret = tb_register_property_dir("network", tbnet_dir); 1394 - if (ret) { 1395 - tb_property_free_dir(tbnet_dir); 1396 - return ret; 1397 - } 1394 + if (ret) 1395 + goto err_free_dir; 1398 1396 1399 - return tb_register_service_driver(&tbnet_driver); 1397 + ret = tb_register_service_driver(&tbnet_driver); 1398 + if (ret) 1399 + goto err_unregister; 1400 + 1401 + return 0; 1402 + 1403 + err_unregister: 1404 + tb_unregister_property_dir("network", tbnet_dir); 1405 + err_free_dir: 1406 + tb_property_free_dir(tbnet_dir); 1407 + 1408 + return ret; 1400 1409 } 1401 1410 module_init(tbnet_init); 1402 1411
+1
drivers/net/usb/qmi_wwan.c
··· 1357 1357 {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ 1358 1358 {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */ 1359 1359 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */ 1360 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x103a, 0)}, /* Telit LE910C4-WWX */ 1360 1361 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ 1361 1362 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */ 1362 1363 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1057, 2)}, /* Telit FN980 */
+42 -4
drivers/net/usb/smsc95xx.c
··· 66 66 spinlock_t mac_cr_lock; 67 67 u8 features; 68 68 u8 suspend_flags; 69 + bool is_internal_phy; 69 70 struct irq_chip irqchip; 70 71 struct irq_domain *irqdomain; 71 72 struct fwnode_handle *irqfwnode; ··· 251 250 252 251 done: 253 252 mutex_unlock(&dev->phy_mutex); 253 + } 254 + 255 + static int smsc95xx_mdiobus_reset(struct mii_bus *bus) 256 + { 257 + struct smsc95xx_priv *pdata; 258 + struct usbnet *dev; 259 + u32 val; 260 + int ret; 261 + 262 + dev = bus->priv; 263 + pdata = dev->driver_priv; 264 + 265 + if (pdata->is_internal_phy) 266 + return 0; 267 + 268 + mutex_lock(&dev->phy_mutex); 269 + 270 + ret = smsc95xx_read_reg(dev, PM_CTRL, &val); 271 + if (ret < 0) 272 + goto reset_out; 273 + 274 + val |= PM_CTL_PHY_RST_; 275 + 276 + ret = smsc95xx_write_reg(dev, PM_CTRL, val); 277 + if (ret < 0) 278 + goto reset_out; 279 + 280 + /* Driver has no knowledge at this point about the external PHY. 281 + * The 802.3 specifies that the reset process shall 282 + * be completed within 0.5 s. 283 + */ 284 + fsleep(500000); 285 + 286 + reset_out: 287 + mutex_unlock(&dev->phy_mutex); 288 + 289 + return 0; 254 290 } 255 291 256 292 static int smsc95xx_mdiobus_read(struct mii_bus *bus, int phy_id, int idx) ··· 1090 1052 static int smsc95xx_bind(struct usbnet *dev, struct usb_interface *intf) 1091 1053 { 1092 1054 struct smsc95xx_priv *pdata; 1093 - bool is_internal_phy; 1094 1055 char usb_path[64]; 1095 1056 int ret, phy_irq; 1096 1057 u32 val; ··· 1170 1133 if (ret < 0) 1171 1134 goto free_mdio; 1172 1135 1173 - is_internal_phy = !(val & HW_CFG_PSEL_); 1174 - if (is_internal_phy) 1136 + pdata->is_internal_phy = !(val & HW_CFG_PSEL_); 1137 + if (pdata->is_internal_phy) 1175 1138 pdata->mdiobus->phy_mask = ~(1u << SMSC95XX_INTERNAL_PHY_ID); 1176 1139 1177 1140 pdata->mdiobus->priv = dev; 1178 1141 pdata->mdiobus->read = smsc95xx_mdiobus_read; 1179 1142 pdata->mdiobus->write = smsc95xx_mdiobus_write; 1143 + pdata->mdiobus->reset = smsc95xx_mdiobus_reset; 1180 1144 pdata->mdiobus->name = "smsc95xx-mdiobus"; 1181 1145 pdata->mdiobus->parent = &dev->udev->dev; 1182 1146 ··· 1198 1160 } 1199 1161 1200 1162 pdata->phydev->irq = phy_irq; 1201 - pdata->phydev->is_internal = is_internal_phy; 1163 + pdata->phydev->is_internal = pdata->is_internal_phy; 1202 1164 1203 1165 /* detect device revision as different features may be available */ 1204 1166 ret = smsc95xx_read_reg(dev, ID_REV, &val);
+4
drivers/nvme/host/pci.c
··· 3489 3489 NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 3490 3490 { PCI_DEVICE(0x1344, 0x5407), /* Micron Technology Inc NVMe SSD */ 3491 3491 .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN }, 3492 + { PCI_DEVICE(0x1344, 0x6001), /* Micron Nitro NVMe */ 3493 + .driver_data = NVME_QUIRK_BOGUS_NID, }, 3492 3494 { PCI_DEVICE(0x1c5c, 0x1504), /* SK Hynix PC400 */ 3493 3495 .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3494 3496 { PCI_DEVICE(0x1c5c, 0x174a), /* SK Hynix P31 SSD */ ··· 3521 3519 .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3522 3520 { PCI_DEVICE(0x2646, 0x501E), /* KINGSTON OM3PGP4xxxxQ OS21011 NVMe SSD */ 3523 3521 .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3522 + { PCI_DEVICE(0x1f40, 0x5236), /* Netac Technologies Co. NV7000 NVMe SSD */ 3523 + .driver_data = NVME_QUIRK_BOGUS_NID, }, 3524 3524 { PCI_DEVICE(0x1e4B, 0x1001), /* MAXIO MAP1001 */ 3525 3525 .driver_data = NVME_QUIRK_BOGUS_NID, }, 3526 3526 { PCI_DEVICE(0x1e4B, 0x1002), /* MAXIO MAP1002 */
+2
drivers/nvme/target/auth.c
··· 45 45 if (!dhchap_secret) 46 46 return -ENOMEM; 47 47 if (set_ctrl) { 48 + kfree(host->dhchap_ctrl_secret); 48 49 host->dhchap_ctrl_secret = strim(dhchap_secret); 49 50 host->dhchap_ctrl_key_hash = key_hash; 50 51 } else { 52 + kfree(host->dhchap_secret); 51 53 host->dhchap_secret = strim(dhchap_secret); 52 54 host->dhchap_key_hash = key_hash; 53 55 }
+1 -1
drivers/nvmem/lan9662-otpc.c
··· 203 203 } 204 204 205 205 static const struct of_device_id lan9662_otp_match[] = { 206 - { .compatible = "microchip,lan9662-otp", }, 206 + { .compatible = "microchip,lan9662-otpc", }, 207 207 { }, 208 208 }; 209 209 MODULE_DEVICE_TABLE(of, lan9662_otp_match);
+1 -1
drivers/nvmem/u-boot-env.c
··· 135 135 break; 136 136 case U_BOOT_FORMAT_REDUNDANT: 137 137 crc32_offset = offsetof(struct u_boot_env_image_redundant, crc32); 138 - crc32_data_offset = offsetof(struct u_boot_env_image_redundant, mark); 138 + crc32_data_offset = offsetof(struct u_boot_env_image_redundant, data); 139 139 data_offset = offsetof(struct u_boot_env_image_redundant, data); 140 140 break; 141 141 }
+1 -1
drivers/parport/parport_pc.c
··· 468 468 const unsigned char *bufp = buf; 469 469 size_t left = length; 470 470 unsigned long expire = jiffies + port->physport->cad->timeout; 471 - const int fifo = FIFO(port); 471 + const unsigned long fifo = FIFO(port); 472 472 int poll_for = 8; /* 80 usecs */ 473 473 const struct parport_pc_private *priv = port->physport->private_data; 474 474 const int fifo_depth = priv->fifo_depth;
+2
drivers/pinctrl/devicetree.c
··· 220 220 for (state = 0; ; state++) { 221 221 /* Retrieve the pinctrl-* property */ 222 222 propname = kasprintf(GFP_KERNEL, "pinctrl-%d", state); 223 + if (!propname) 224 + return -ENOMEM; 223 225 prop = of_find_property(np, propname, &size); 224 226 kfree(propname); 225 227 if (!prop) {
+29 -5
drivers/pinctrl/mediatek/mtk-eint.c
··· 24 24 #define MTK_EINT_EDGE_SENSITIVE 0 25 25 #define MTK_EINT_LEVEL_SENSITIVE 1 26 26 #define MTK_EINT_DBNC_SET_DBNC_BITS 4 27 + #define MTK_EINT_DBNC_MAX 16 27 28 #define MTK_EINT_DBNC_RST_BIT (0x1 << 1) 28 29 #define MTK_EINT_DBNC_SET_EN (0x1 << 0) 29 30 ··· 48 47 .dbnc_set = 0x600, 49 48 .dbnc_clr = 0x700, 50 49 }; 50 + 51 + const unsigned int debounce_time_mt2701[] = { 52 + 500, 1000, 16000, 32000, 64000, 128000, 256000, 0 53 + }; 54 + EXPORT_SYMBOL_GPL(debounce_time_mt2701); 55 + 56 + const unsigned int debounce_time_mt6765[] = { 57 + 125, 250, 500, 1000, 16000, 32000, 64000, 128000, 256000, 512000, 0 58 + }; 59 + EXPORT_SYMBOL_GPL(debounce_time_mt6765); 60 + 61 + const unsigned int debounce_time_mt6795[] = { 62 + 500, 1000, 16000, 32000, 64000, 128000, 256000, 512000, 0 63 + }; 64 + EXPORT_SYMBOL_GPL(debounce_time_mt6795); 51 65 52 66 static void __iomem *mtk_eint_get_offset(struct mtk_eint *eint, 53 67 unsigned int eint_num, ··· 420 404 int virq, eint_offset; 421 405 unsigned int set_offset, bit, clr_bit, clr_offset, rst, i, unmask, 422 406 dbnc; 423 - static const unsigned int debounce_time[] = {500, 1000, 16000, 32000, 424 - 64000, 128000, 256000}; 425 407 struct irq_data *d; 408 + 409 + if (!eint->hw->db_time) 410 + return -EOPNOTSUPP; 426 411 427 412 virq = irq_find_mapping(eint->domain, eint_num); 428 413 eint_offset = (eint_num % 4) * 8; ··· 435 418 if (!mtk_eint_can_en_debounce(eint, eint_num)) 436 419 return -EINVAL; 437 420 438 - dbnc = ARRAY_SIZE(debounce_time); 439 - for (i = 0; i < ARRAY_SIZE(debounce_time); i++) { 440 - if (debounce <= debounce_time[i]) { 421 + dbnc = eint->num_db_time; 422 + for (i = 0; i < eint->num_db_time; i++) { 423 + if (debounce <= eint->hw->db_time[i]) { 441 424 dbnc = i; 442 425 break; 443 426 } ··· 510 493 &irq_domain_simple_ops, NULL); 511 494 if (!eint->domain) 512 495 return -ENOMEM; 496 + 497 + if (eint->hw->db_time) { 498 + for (i = 0; i < MTK_EINT_DBNC_MAX; i++) 499 + if (eint->hw->db_time[i] == 0) 500 + break; 501 + eint->num_db_time = i; 502 + } 513 503 514 504 mtk_eint_hw_init(eint); 515 505 for (i = 0; i < eint->hw->ap_num; i++) {
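The per-SoC tables introduced above are zero-terminated: the probe path counts the valid entries once, and the debounce path then picks the smallest slot whose time covers the request (falling through to the table size when nothing does, as the driver's loop also leaves `dbnc` at `num_db_time`). A stand-alone sketch of those two loops, with the table values copied from the mt2701 array above; the function names here are ours, not the driver's:

```c
#include <stddef.h>

/* Zero-terminated debounce table, values as in debounce_time_mt2701[]. */
const unsigned int db_time_mt2701_sketch[] = {
    500, 1000, 16000, 32000, 64000, 128000, 256000, 0
};

/* Count valid (non-zero) entries, as mtk_eint_do_init() now does. */
size_t count_db_time(const unsigned int *tbl, size_t max)
{
    size_t i;

    for (i = 0; i < max; i++)
        if (tbl[i] == 0)
            break;
    return i;
}

/* Pick the smallest slot covering the requested time in microseconds;
 * returns n when the request exceeds every entry, mirroring the driver. */
size_t pick_db_index(const unsigned int *tbl, size_t n, unsigned int us)
{
    size_t i, dbnc = n;

    for (i = 0; i < n; i++) {
        if (us <= tbl[i]) {
            dbnc = i;
            break;
        }
    }
    return dbnc;
}
```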
+6
drivers/pinctrl/mediatek/mtk-eint.h
··· 37 37 u8 ports; 38 38 unsigned int ap_num; 39 39 unsigned int db_cnt; 40 + const unsigned int *db_time; 40 41 }; 42 + 43 + extern const unsigned int debounce_time_mt2701[]; 44 + extern const unsigned int debounce_time_mt6765[]; 45 + extern const unsigned int debounce_time_mt6795[]; 41 46 42 47 struct mtk_eint; 43 48 ··· 67 62 /* Used to fit into various EINT device */ 68 63 const struct mtk_eint_hw *hw; 69 64 const struct mtk_eint_regs *regs; 65 + u16 num_db_time; 70 66 71 67 /* Used to fit into various pinctrl device */ 72 68 void *pctl;
+1
drivers/pinctrl/mediatek/pinctrl-mt2701.c
··· 518 518 .ports = 6, 519 519 .ap_num = 169, 520 520 .db_cnt = 16, 521 + .db_time = debounce_time_mt2701, 521 522 }, 522 523 }; 523 524
+1
drivers/pinctrl/mediatek/pinctrl-mt2712.c
··· 567 567 .ports = 8, 568 568 .ap_num = 229, 569 569 .db_cnt = 40, 570 + .db_time = debounce_time_mt2701, 570 571 }, 571 572 }; 572 573
+1
drivers/pinctrl/mediatek/pinctrl-mt6765.c
··· 1062 1062 .ports = 6, 1063 1063 .ap_num = 160, 1064 1064 .db_cnt = 13, 1065 + .db_time = debounce_time_mt6765, 1065 1066 }; 1066 1067 1067 1068 static const struct mtk_pin_soc mt6765_data = {
+1
drivers/pinctrl/mediatek/pinctrl-mt6779.c
··· 737 737 .ports = 6, 738 738 .ap_num = 195, 739 739 .db_cnt = 13, 740 + .db_time = debounce_time_mt2701, 740 741 }; 741 742 742 743 static const struct mtk_pin_soc mt6779_data = {
+1
drivers/pinctrl/mediatek/pinctrl-mt6795.c
··· 475 475 .ports = 7, 476 476 .ap_num = 224, 477 477 .db_cnt = 32, 478 + .db_time = debounce_time_mt6795, 478 479 }; 479 480 480 481 static const unsigned int mt6795_pull_type[] = {
+1
drivers/pinctrl/mediatek/pinctrl-mt7622.c
··· 846 846 .ports = 7, 847 847 .ap_num = ARRAY_SIZE(mt7622_pins), 848 848 .db_cnt = 20, 849 + .db_time = debounce_time_mt6765, 849 850 }; 850 851 851 852 static const struct mtk_pin_soc mt7622_data = {
+1
drivers/pinctrl/mediatek/pinctrl-mt7623.c
··· 1369 1369 .ports = 6, 1370 1370 .ap_num = 169, 1371 1371 .db_cnt = 20, 1372 + .db_time = debounce_time_mt2701, 1372 1373 }; 1373 1374 1374 1375 static struct mtk_pin_soc mt7623_data = {
+1
drivers/pinctrl/mediatek/pinctrl-mt7629.c
··· 402 402 .ports = 7, 403 403 .ap_num = ARRAY_SIZE(mt7629_pins), 404 404 .db_cnt = 16, 405 + .db_time = debounce_time_mt2701, 405 406 }; 406 407 407 408 static struct mtk_pin_soc mt7629_data = {
+2
drivers/pinctrl/mediatek/pinctrl-mt7986.c
··· 826 826 .ports = 7, 827 827 .ap_num = ARRAY_SIZE(mt7986a_pins), 828 828 .db_cnt = 16, 829 + .db_time = debounce_time_mt6765, 829 830 }; 830 831 831 832 static const struct mtk_eint_hw mt7986b_eint_hw = { ··· 834 833 .ports = 7, 835 834 .ap_num = ARRAY_SIZE(mt7986b_pins), 836 835 .db_cnt = 16, 836 + .db_time = debounce_time_mt6765, 837 837 }; 838 838 839 839 static struct mtk_pin_soc mt7986a_data = {
+1
drivers/pinctrl/mediatek/pinctrl-mt8127.c
··· 286 286 .ports = 6, 287 287 .ap_num = 143, 288 288 .db_cnt = 16, 289 + .db_time = debounce_time_mt2701, 289 290 }, 290 291 }; 291 292
+1
drivers/pinctrl/mediatek/pinctrl-mt8135.c
··· 315 315 .ports = 6, 316 316 .ap_num = 192, 317 317 .db_cnt = 16, 318 + .db_time = debounce_time_mt2701, 318 319 }, 319 320 }; 320 321
+1
drivers/pinctrl/mediatek/pinctrl-mt8167.c
··· 319 319 .ports = 6, 320 320 .ap_num = 169, 321 321 .db_cnt = 64, 322 + .db_time = debounce_time_mt6795, 322 323 }, 323 324 }; 324 325
+1
drivers/pinctrl/mediatek/pinctrl-mt8173.c
··· 327 327 .ports = 6, 328 328 .ap_num = 224, 329 329 .db_cnt = 16, 330 + .db_time = debounce_time_mt2701, 330 331 }, 331 332 }; 332 333
+1
drivers/pinctrl/mediatek/pinctrl-mt8183.c
··· 545 545 .ports = 6, 546 546 .ap_num = 212, 547 547 .db_cnt = 13, 548 + .db_time = debounce_time_mt6765, 548 549 }; 549 550 550 551 static const struct mtk_pin_soc mt8183_data = {
+1
drivers/pinctrl/mediatek/pinctrl-mt8186.c
··· 1222 1222 .ports = 7, 1223 1223 .ap_num = 217, 1224 1224 .db_cnt = 32, 1225 + .db_time = debounce_time_mt6765, 1225 1226 }; 1226 1227 1227 1228 static const struct mtk_pin_soc mt8186_data = {
+1
drivers/pinctrl/mediatek/pinctrl-mt8188.c
··· 1625 1625 .ports = 7, 1626 1626 .ap_num = 225, 1627 1627 .db_cnt = 32, 1628 + .db_time = debounce_time_mt6765, 1628 1629 }; 1629 1630 1630 1631 static const struct mtk_pin_soc mt8188_data = {
+1
drivers/pinctrl/mediatek/pinctrl-mt8192.c
··· 1371 1371 .ports = 7, 1372 1372 .ap_num = 224, 1373 1373 .db_cnt = 32, 1374 + .db_time = debounce_time_mt6765, 1374 1375 }; 1375 1376 1376 1377 static const struct mtk_pin_reg_calc mt8192_reg_cals[PINCTRL_PIN_REG_MAX] = {
+1
drivers/pinctrl/mediatek/pinctrl-mt8195.c
··· 935 935 .ports = 7, 936 936 .ap_num = 225, 937 937 .db_cnt = 32, 938 + .db_time = debounce_time_mt6765, 938 939 }; 939 940 940 941 static const struct mtk_pin_soc mt8195_data = {
+1
drivers/pinctrl/mediatek/pinctrl-mt8365.c
··· 453 453 .ports = 5, 454 454 .ap_num = 160, 455 455 .db_cnt = 160, 456 + .db_time = debounce_time_mt6765, 456 457 }, 457 458 }; 458 459
+1
drivers/pinctrl/mediatek/pinctrl-mt8516.c
··· 319 319 .ports = 6, 320 320 .ap_num = 169, 321 321 .db_cnt = 64, 322 + .db_time = debounce_time_mt6795, 322 323 }, 323 324 }; 324 325
+3
drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
··· 709 709 { 710 710 int err, rsel_val; 711 711 712 + if (!pullup && arg == MTK_DISABLE) 713 + return 0; 714 + 712 715 if (hw->rsel_si_unit) { 713 716 /* find pin rsel_index from pin_rsel array*/ 714 717 err = mtk_hw_pin_rsel_lookup(hw, desc, pullup, arg, &rsel_val);
+40
drivers/pinctrl/pinctrl-rockchip.c
··· 679 679 } 680 680 681 681 static struct rockchip_mux_route_data px30_mux_route_data[] = { 682 + RK_MUXROUTE_SAME(2, RK_PB4, 1, 0x184, BIT(16 + 7)), /* cif-d0m0 */ 683 + RK_MUXROUTE_SAME(3, RK_PA1, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d0m1 */ 684 + RK_MUXROUTE_SAME(2, RK_PB6, 1, 0x184, BIT(16 + 7)), /* cif-d1m0 */ 685 + RK_MUXROUTE_SAME(3, RK_PA2, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d1m1 */ 682 686 RK_MUXROUTE_SAME(2, RK_PA0, 1, 0x184, BIT(16 + 7)), /* cif-d2m0 */ 683 687 RK_MUXROUTE_SAME(3, RK_PA3, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d2m1 */ 688 + RK_MUXROUTE_SAME(2, RK_PA1, 1, 0x184, BIT(16 + 7)), /* cif-d3m0 */ 689 + RK_MUXROUTE_SAME(3, RK_PA5, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d3m1 */ 690 + RK_MUXROUTE_SAME(2, RK_PA2, 1, 0x184, BIT(16 + 7)), /* cif-d4m0 */ 691 + RK_MUXROUTE_SAME(3, RK_PA7, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d4m1 */ 692 + RK_MUXROUTE_SAME(2, RK_PA3, 1, 0x184, BIT(16 + 7)), /* cif-d5m0 */ 693 + RK_MUXROUTE_SAME(3, RK_PB0, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d5m1 */ 694 + RK_MUXROUTE_SAME(2, RK_PA4, 1, 0x184, BIT(16 + 7)), /* cif-d6m0 */ 695 + RK_MUXROUTE_SAME(3, RK_PB1, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d6m1 */ 696 + RK_MUXROUTE_SAME(2, RK_PA5, 1, 0x184, BIT(16 + 7)), /* cif-d7m0 */ 697 + RK_MUXROUTE_SAME(3, RK_PB4, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d7m1 */ 698 + RK_MUXROUTE_SAME(2, RK_PA6, 1, 0x184, BIT(16 + 7)), /* cif-d8m0 */ 699 + RK_MUXROUTE_SAME(3, RK_PB6, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d8m1 */ 700 + RK_MUXROUTE_SAME(2, RK_PA7, 1, 0x184, BIT(16 + 7)), /* cif-d9m0 */ 701 + RK_MUXROUTE_SAME(3, RK_PB7, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d9m1 */ 702 + RK_MUXROUTE_SAME(2, RK_PB7, 1, 0x184, BIT(16 + 7)), /* cif-d10m0 */ 703 + RK_MUXROUTE_SAME(3, RK_PC6, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d10m1 */ 704 + RK_MUXROUTE_SAME(2, RK_PC0, 1, 0x184, BIT(16 + 7)), /* cif-d11m0 */ 705 + RK_MUXROUTE_SAME(3, RK_PC7, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-d11m1 */ 706 + RK_MUXROUTE_SAME(2, RK_PB0, 1, 0x184, BIT(16 + 7)), /* cif-vsyncm0 */ 707 + RK_MUXROUTE_SAME(3, RK_PD1, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-vsyncm1 */ 708 + RK_MUXROUTE_SAME(2, RK_PB1, 1, 0x184, BIT(16 + 7)), /* cif-hrefm0 */ 709 + RK_MUXROUTE_SAME(3, RK_PD2, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-hrefm1 */ 710 + RK_MUXROUTE_SAME(2, RK_PB2, 1, 0x184, BIT(16 + 7)), /* cif-clkinm0 */ 711 + RK_MUXROUTE_SAME(3, RK_PD3, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-clkinm1 */ 712 + RK_MUXROUTE_SAME(2, RK_PB3, 1, 0x184, BIT(16 + 7)), /* cif-clkoutm0 */ 713 + RK_MUXROUTE_SAME(3, RK_PD0, 3, 0x184, BIT(16 + 7) | BIT(7)), /* cif-clkoutm1 */ 684 714 RK_MUXROUTE_SAME(3, RK_PC6, 2, 0x184, BIT(16 + 8)), /* pdm-m0 */ 685 715 RK_MUXROUTE_SAME(2, RK_PC6, 1, 0x184, BIT(16 + 8) | BIT(8)), /* pdm-m1 */ 716 + RK_MUXROUTE_SAME(3, RK_PD3, 2, 0x184, BIT(16 + 8)), /* pdm-sdi0m0 */ 717 + RK_MUXROUTE_SAME(2, RK_PC5, 2, 0x184, BIT(16 + 8) | BIT(8)), /* pdm-sdi0m1 */ 686 718 RK_MUXROUTE_SAME(1, RK_PD3, 2, 0x184, BIT(16 + 10)), /* uart2-rxm0 */ 687 719 RK_MUXROUTE_SAME(2, RK_PB6, 2, 0x184, BIT(16 + 10) | BIT(10)), /* uart2-rxm1 */ 720 + RK_MUXROUTE_SAME(1, RK_PD2, 2, 0x184, BIT(16 + 10)), /* uart2-txm0 */ 721 + RK_MUXROUTE_SAME(2, RK_PB4, 2, 0x184, BIT(16 + 10) | BIT(10)), /* uart2-txm1 */ 688 722 RK_MUXROUTE_SAME(0, RK_PC1, 2, 0x184, BIT(16 + 9)), /* uart3-rxm0 */ 689 723 RK_MUXROUTE_SAME(1, RK_PB7, 2, 0x184, BIT(16 + 9) | BIT(9)), /* uart3-rxm1 */ 724 + RK_MUXROUTE_SAME(0, RK_PC0, 2, 0x184, BIT(16 + 9)), /* uart3-txm0 */ 725 + RK_MUXROUTE_SAME(1, RK_PB6, 2, 0x184, BIT(16 + 9) | BIT(9)), /* uart3-txm1 */ 726 + RK_MUXROUTE_SAME(0, RK_PC2, 2, 0x184, BIT(16 + 9)), /* uart3-ctsm0 */ 727 + RK_MUXROUTE_SAME(1, RK_PB4, 2, 0x184, BIT(16 + 9) | BIT(9)), /* uart3-ctsm1 */ 728 + RK_MUXROUTE_SAME(0, RK_PC3, 2, 0x184, BIT(16 + 9)), /* uart3-rtsm0 */ 729 + RK_MUXROUTE_SAME(1, RK_PB5, 2, 0x184, BIT(16 + 9) | BIT(9)), /* uart3-rtsm1 */ 690 730 }; 691 731 692 732 static struct rockchip_mux_route_data rv1126_mux_route_data[] = {
+2 -2
drivers/pinctrl/qcom/pinctrl-sc8280xp.c
··· 1873 1873 [225] = PINGROUP(225, hs3_mi2s, phase_flag, _, _, _, _, egpio), 1874 1874 [226] = PINGROUP(226, hs3_mi2s, phase_flag, _, _, _, _, egpio), 1875 1875 [227] = PINGROUP(227, hs3_mi2s, phase_flag, _, _, _, _, egpio), 1876 - [228] = UFS_RESET(ufs_reset, 0xf1004), 1877 - [229] = UFS_RESET(ufs1_reset, 0xf3004), 1876 + [228] = UFS_RESET(ufs_reset, 0xf1000), 1877 + [229] = UFS_RESET(ufs1_reset, 0xf3000), 1878 1878 [230] = SDC_QDSD_PINGROUP(sdc2_clk, 0xe8000, 14, 6), 1879 1879 [231] = SDC_QDSD_PINGROUP(sdc2_cmd, 0xe8000, 11, 3), 1880 1880 [232] = SDC_QDSD_PINGROUP(sdc2_data, 0xe8000, 9, 0),
+20 -4
drivers/platform/surface/aggregator/ssh_packet_layer.c
··· 1596 1596 ssh_ptl_tx_wakeup_packet(ptl); 1597 1597 } 1598 1598 1599 - static bool ssh_ptl_rx_retransmit_check(struct ssh_ptl *ptl, u8 seq) 1599 + static bool ssh_ptl_rx_retransmit_check(struct ssh_ptl *ptl, const struct ssh_frame *frame) 1600 1600 { 1601 1601 int i; 1602 + 1603 + /* 1604 + * Ignore unsequenced packets. On some devices (notably Surface Pro 9), 1605 + * unsequenced events will always be sent with SEQ=0x00. Attempting to 1606 + * detect retransmission would thus just block all events. 1607 + * 1608 + * While sequence numbers would also allow detection of retransmitted 1609 + * packets in unsequenced communication, they have only ever been used 1610 + * to cover edge-cases in sequenced transmission. In particular, the 1611 + * only instance of packets being retransmitted (that we are aware of) 1612 + * is due to an ACK timeout. As this does not happen in unsequenced 1613 + * communication, skip the retransmission check for those packets 1614 + * entirely. 1615 + */ 1616 + if (frame->type == SSH_FRAME_TYPE_DATA_NSQ) 1617 + return false; 1602 1618 1603 1619 /* 1604 1620 * Check if SEQ has been seen recently (i.e. packet was 1605 1621 * re-transmitted and we should ignore it). 1606 1622 */ 1607 1623 for (i = 0; i < ARRAY_SIZE(ptl->rx.blocked.seqs); i++) { 1608 1624 if (likely(ptl->rx.blocked.seqs[i] != frame->seq)) 1609 1625 continue; 1610 1626 1611 1627 ptl_dbg(ptl, "ptl: ignoring repeated data packet\n"); ··· 1629 1613 } 1630 1614 1631 1615 /* Update list of blocked sequence IDs. */ 1632 - ptl->rx.blocked.seqs[ptl->rx.blocked.offset] = seq; 1616 + ptl->rx.blocked.seqs[ptl->rx.blocked.offset] = frame->seq; 1633 1617 ptl->rx.blocked.offset = (ptl->rx.blocked.offset + 1) 1634 1618 % ARRAY_SIZE(ptl->rx.blocked.seqs); 1635 1619 ··· 1640 1624 const struct ssh_frame *frame, 1641 1625 const struct ssam_span *payload) 1642 1626 { 1643 - if (ssh_ptl_rx_retransmit_check(ptl, frame->seq)) 1627 + if (ssh_ptl_rx_retransmit_check(ptl, frame)) 1644 1628 return; 1645 1629 1646 1630 ptl->ops.data_received(ptl, payload);
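The retransmit filter here is a small ring of recently seen sequence numbers; the fix simply bypasses it for unsequenced (NSQ) frames, which may all carry SEQ=0x00. A stand-alone model of that logic follows; the ring size and names are illustrative (the driver sizes its ring with ARRAY_SIZE()), not the driver's API:

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOCKED_SEQS 8   /* assumed ring size for this sketch */

struct seq_ring {
    uint8_t seqs[BLOCKED_SEQS];
    unsigned int offset;
};

/* Return true ("drop it") when SEQ was seen recently; otherwise record
 * it in the ring. Unsequenced frames skip the check entirely, as the
 * hunk above now does for SSH_FRAME_TYPE_DATA_NSQ. */
bool seen_recently(struct seq_ring *r, uint8_t seq, bool unsequenced)
{
    unsigned int i;

    if (unsequenced)
        return false;   /* NSQ events may all carry SEQ=0x00 */

    for (i = 0; i < BLOCKED_SEQS; i++)
        if (r->seqs[i] == seq)
            return true;

    r->seqs[r->offset] = seq;
    r->offset = (r->offset + 1) % BLOCKED_SEQS;
    return false;
}
```

Without the NSQ bypass, the second unsequenced event with the same SEQ would be classified as a retransmission and every subsequent event dropped.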
+37
drivers/platform/surface/surface_aggregator_registry.c
··· 234 234 NULL, 235 235 }; 236 236 237 + /* Devices for Surface Laptop 5. */ 238 + static const struct software_node *ssam_node_group_sl5[] = { 239 + &ssam_node_root, 240 + &ssam_node_bat_ac, 241 + &ssam_node_bat_main, 242 + &ssam_node_tmp_pprof, 243 + &ssam_node_hid_main_keyboard, 244 + &ssam_node_hid_main_touchpad, 245 + &ssam_node_hid_main_iid5, 246 + &ssam_node_hid_sam_ucm_ucsi, 247 + NULL, 248 + }; 249 + 237 250 /* Devices for Surface Laptop Studio. */ 238 251 static const struct software_node *ssam_node_group_sls[] = { 239 252 &ssam_node_root, ··· 281 268 NULL, 282 269 }; 283 270 271 + /* Devices for Surface Pro 8 */ 284 272 static const struct software_node *ssam_node_group_sp8[] = { 285 273 &ssam_node_root, 286 274 &ssam_node_hub_kip, ··· 289 275 &ssam_node_bat_main, 290 276 &ssam_node_tmp_pprof, 291 277 &ssam_node_kip_tablet_switch, 278 + &ssam_node_hid_kip_keyboard, 279 + &ssam_node_hid_kip_penstash, 280 + &ssam_node_hid_kip_touchpad, 281 + &ssam_node_hid_kip_fwupd, 282 + &ssam_node_hid_sam_sensors, 283 + &ssam_node_hid_sam_ucm_ucsi, 284 + NULL, 285 + }; 286 + 287 + /* Devices for Surface Pro 9 */ 288 + static const struct software_node *ssam_node_group_sp9[] = { 289 + &ssam_node_root, 290 + &ssam_node_hub_kip, 291 + &ssam_node_bat_ac, 292 + &ssam_node_bat_main, 293 + &ssam_node_tmp_pprof, 294 + /* TODO: Tablet mode switch (via POS subsystem) */ 292 295 &ssam_node_hid_kip_keyboard, 293 296 &ssam_node_hid_kip_penstash, 294 297 &ssam_node_hid_kip_touchpad, ··· 334 303 /* Surface Pro 8 */ 335 304 { "MSHW0263", (unsigned long)ssam_node_group_sp8 }, 336 305 306 + /* Surface Pro 9 */ 307 + { "MSHW0343", (unsigned long)ssam_node_group_sp9 }, 308 + 337 309 /* Surface Book 2 */ 338 310 { "MSHW0107", (unsigned long)ssam_node_group_gen5 }, 339 311 ··· 357 323 358 324 /* Surface Laptop 4 (13", Intel) */ 359 325 { "MSHW0250", (unsigned long)ssam_node_group_sl3 }, 360 326 327 + /* Surface Laptop 5 */ 328 + { "MSHW0350", (unsigned long)ssam_node_group_sl5 }, 361 329 362 330 /* Surface Laptop Go 1 */ 363 331 { "MSHW0118", (unsigned long)ssam_node_group_slg1 },
+9
drivers/platform/x86/acer-wmi.c
··· 566 566 }, 567 567 { 568 568 .callback = set_force_caps, 569 + .ident = "Acer Aspire Switch V 10 SW5-017", 570 + .matches = { 571 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Acer"), 572 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SW5-017"), 573 + }, 574 + .driver_data = (void *)ACER_CAP_KBD_DOCK, 575 + }, 576 + { 577 + .callback = set_force_caps, 569 578 .ident = "Acer One 10 (S1003)", 570 579 .matches = { 571 580 DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Acer"),
+1 -2
drivers/platform/x86/amd/pmc.c
··· 276 276 .release = amd_pmc_stb_debugfs_release_v2, 277 277 }; 278 278 279 - #if defined(CONFIG_SUSPEND) || defined(CONFIG_DEBUG_FS) 280 279 static int amd_pmc_setup_smu_logging(struct amd_pmc_dev *dev) 281 280 { 282 281 if (dev->cpu_id == AMD_CPU_ID_PCO) { ··· 350 351 memcpy_fromio(table, pdev->smu_virt_addr, sizeof(struct smu_metrics)); 351 352 return 0; 352 353 } 353 - #endif /* CONFIG_SUSPEND || CONFIG_DEBUG_FS */ 354 354 355 355 #ifdef CONFIG_SUSPEND 356 356 static void amd_pmc_validate_deepest(struct amd_pmc_dev *pdev) ··· 962 964 {"AMDI0006", 0}, 963 965 {"AMDI0007", 0}, 964 966 {"AMDI0008", 0}, 967 + {"AMDI0009", 0}, 965 968 {"AMD0004", 0}, 966 969 {"AMD0005", 0}, 967 970 { }
+2
drivers/platform/x86/asus-wmi.c
··· 1738 1738 pci_write_config_dword(xhci_pdev, USB_INTEL_XUSB2PR, 1739 1739 cpu_to_le32(ports_available)); 1740 1740 1741 + pci_dev_put(xhci_pdev); 1742 + 1741 1743 pr_info("set USB_INTEL_XUSB2PR old: 0x%04x, new: 0x%04x\n", 1742 1744 orig_ports_available, ports_available); 1743 1745 }
+3
drivers/platform/x86/hp-wmi.c
··· 90 90 HPWMI_PEAKSHIFT_PERIOD = 0x0F, 91 91 HPWMI_BATTERY_CHARGE_PERIOD = 0x10, 92 92 HPWMI_SANITIZATION_MODE = 0x17, 93 + HPWMI_SMART_EXPERIENCE_APP = 0x21, 93 94 }; 94 95 95 96 /* ··· 859 858 case HPWMI_BATTERY_CHARGE_PERIOD: 860 859 break; 861 860 case HPWMI_SANITIZATION_MODE: 861 + break; 862 + case HPWMI_SMART_EXPERIENCE_APP: 862 863 break; 863 864 default: 864 865 pr_info("Unknown event_id - %d - 0x%x\n", event_id, event_data);
+35 -2
drivers/platform/x86/ideapad-laptop.c
··· 136 136 bool dytc : 1; 137 137 bool fan_mode : 1; 138 138 bool fn_lock : 1; 139 + bool set_fn_lock_led : 1; 139 140 bool hw_rfkill_switch : 1; 140 141 bool kbd_bl : 1; 141 142 bool touchpad_ctrl_via_ec : 1; ··· 155 154 156 155 static bool allow_v4_dytc; 157 156 module_param(allow_v4_dytc, bool, 0444); 158 - MODULE_PARM_DESC(allow_v4_dytc, "Enable DYTC version 4 platform-profile support."); 157 + MODULE_PARM_DESC(allow_v4_dytc, 158 + "Enable DYTC version 4 platform-profile support. " 159 + "If you need this please report this to: platform-driver-x86@vger.kernel.org"); 160 + 161 + static bool hw_rfkill_switch; 162 + module_param(hw_rfkill_switch, bool, 0444); 163 + MODULE_PARM_DESC(hw_rfkill_switch, 164 + "Enable rfkill support for laptops with a hw on/off wifi switch/slider. " 165 + "If you need this please report this to: platform-driver-x86@vger.kernel.org"); 166 + 167 + static bool set_fn_lock_led; 168 + module_param(set_fn_lock_led, bool, 0444); 169 + MODULE_PARM_DESC(set_fn_lock_led, 170 + "Enable driver based updates of the fn-lock LED on fn-lock changes. " 171 + "If you need this please report this to: platform-driver-x86@vger.kernel.org"); 159 172 160 173 /* 161 174 * ACPI Helpers ··· 1516 1501 ideapad_input_report(priv, value); 1517 1502 break; 1518 1503 case 208: 1504 + if (!priv->features.set_fn_lock_led) 1505 + break; 1506 + 1519 1507 if (!eval_hals(priv->adev->handle, &result)) { 1520 1508 bool state = test_bit(HALS_FNLOCK_STATE_BIT, &result); 1521 1509 ··· 1531 1513 } 1532 1514 } 1533 1515 #endif 1516 + 1517 + /* On some models we need to call exec_sals(SALS_FNLOCK_ON/OFF) to set the LED */ 1518 + static const struct dmi_system_id set_fn_lock_led_list[] = { 1519 + { 1520 + /* https://bugzilla.kernel.org/show_bug.cgi?id=212671 */ 1521 + .matches = { 1522 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 1523 + DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Legion R7000P2020H"), 1524 + } 1525 + }, 1526 + {} 1527 + }; 1534 1528 1535 1529 /* 1536 1530 * Some ideapads have a hardware rfkill switch, but most do not have one. ··· 1586 1556 acpi_handle handle = priv->adev->handle; 1587 1557 unsigned long val; 1588 1558 1589 - priv->features.hw_rfkill_switch = dmi_check_system(hw_rfkill_list); 1559 + priv->features.set_fn_lock_led = 1560 + set_fn_lock_led || dmi_check_system(set_fn_lock_led_list); 1561 + priv->features.hw_rfkill_switch = 1562 + hw_rfkill_switch || dmi_check_system(hw_rfkill_list); 1590 1563 1591 1564 /* Most ideapads with ELAN0634 touchpad don't use EC touchpad switch */ 1592 1565 if (acpi_dev_present("ELAN0634", NULL, -1))
+9
drivers/platform/x86/intel/pmc/pltdrv.c
··· 18 18 #include <asm/cpu_device_id.h> 19 19 #include <asm/intel-family.h> 20 20 21 + #include <xen/xen.h> 22 + 21 23 static void intel_pmc_core_release(struct device *dev) 22 24 { 23 25 kfree(dev); ··· 53 51 54 52 /* Skip creating the platform device if ACPI already has a device */ 55 53 if (acpi_dev_present("INT33A1", NULL, -1)) 54 + return -ENODEV; 55 + 56 + /* 57 + * Skip forcefully attaching the device for VMs. Make an exception for 58 + * Xen dom0, which does have full hardware access. 59 + */ 60 + if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR) && !xen_initial_domain()) 56 61 return -ENODEV; 57 62 58 63 if (!x86_match_cpu(intel_pmc_core_platform_ids))
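The new probe gate above skips forcibly creating the platform device inside virtual machines, but makes an exception for Xen dom0, which does have real hardware access. The decision reduces to a small predicate; a trivial model (function and parameter names are ours, not the driver's):

```c
#include <stdbool.h>

/* Model of the gate: force the probe unless we are running under a
 * hypervisor, except when that guest is Xen dom0. */
bool should_force_probe(bool hypervisor, bool xen_dom0)
{
    return !(hypervisor && !xen_dom0);
}
```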
+8
drivers/platform/x86/thinkpad_acpi.c
··· 4497 4497 DMI_MATCH(DMI_PRODUCT_NAME, "21A0"), 4498 4498 } 4499 4499 }, 4500 + { 4501 + .ident = "P14s Gen2 AMD", 4502 + .driver_data = &quirk_s2idle_bug, 4503 + .matches = { 4504 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4505 + DMI_MATCH(DMI_PRODUCT_NAME, "21A1"), 4506 + } 4507 + }, 4500 4508 {} 4501 4509 }; 4502 4510
+1
drivers/s390/block/dcssblk.c
··· 636 636 dev_info->gd->minors = DCSSBLK_MINORS_PER_DISK; 637 637 dev_info->gd->fops = &dcssblk_devops; 638 638 dev_info->gd->private_data = dev_info; 639 + dev_info->gd->flags |= GENHD_FL_NO_PART; 639 640 blk_queue_logical_block_size(dev_info->gd->queue, 4096); 640 641 blk_queue_flag_set(QUEUE_FLAG_DAX, dev_info->gd->queue); 641 642
+1 -1
drivers/s390/scsi/zfcp_fsf.c
··· 884 884 const bool is_srb = zfcp_fsf_req_is_status_read_buffer(req); 885 885 struct zfcp_adapter *adapter = req->adapter; 886 886 struct zfcp_qdio *qdio = adapter->qdio; 887 - int req_id = req->req_id; 887 + unsigned long req_id = req->req_id; 888 888 889 889 zfcp_reqlist_add(adapter->req_list, req); 890 890
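The one-line type change above matters because zfcp request IDs are `unsigned long` (64-bit on s390); funneling one through an `int` silently drops the upper half before the request is added to the request list. A portable illustration of that truncation, using an unsigned 32-bit intermediate so the behavior is fully defined by the C standard:

```c
#include <stdint.h>

/* Round-trip a 64-bit request ID through a 32-bit variable, which is
 * effectively what the old `int req_id = req->req_id;` did on 64-bit. */
uint64_t through_narrow(uint64_t id)
{
    uint32_t narrow = (uint32_t)id;   /* upper 32 bits are lost here */
    return narrow;
}
```

Any request whose ID exceeds 32 bits would later fail to be found in the request list, because lookup used the truncated value.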
+2 -1
drivers/scsi/mpi3mr/mpi3mr_os.c
··· 3265 3265 } 3266 3266 3267 3267 if (scmd->result != (DID_OK << 16) && (scmd->cmnd[0] != ATA_12) && 3268 - (scmd->cmnd[0] != ATA_16)) { 3268 + (scmd->cmnd[0] != ATA_16) && 3269 + mrioc->logging_level & MPI3_DEBUG_SCSI_ERROR) { 3269 3270 ioc_info(mrioc, "%s :scmd->result 0x%x\n", __func__, 3270 3271 scmd->result); 3271 3272 scsi_print_command(scmd);
+5 -1
drivers/scsi/scsi_debug.c
··· 7323 7323 dev_set_name(&sdbg_host->dev, "adapter%d", sdebug_num_hosts); 7324 7324 7325 7325 error = device_register(&sdbg_host->dev); 7326 - if (error) 7326 + if (error) { 7327 + spin_lock(&sdebug_host_list_lock); 7328 + list_del(&sdbg_host->host_list); 7329 + spin_unlock(&sdebug_host_list_lock); 7327 7330 goto clean; 7331 + } 7328 7332 7329 7333 ++sdebug_num_hosts; 7330 7334 return 0;
+16 -15
drivers/scsi/scsi_transport_iscsi.c
··· 231 231 dev_set_name(&ep->dev, "ep-%d", id); 232 232 err = device_register(&ep->dev); 233 233 if (err) 234 - goto free_id; 234 + goto put_dev; 235 235 236 236 err = sysfs_create_group(&ep->dev.kobj, &iscsi_endpoint_group); 237 237 if (err) ··· 245 245 device_unregister(&ep->dev); 246 246 return NULL; 247 247 248 - free_id: 248 + put_dev: 249 249 mutex_lock(&iscsi_ep_idr_mutex); 250 250 idr_remove(&iscsi_ep_idr, id); 251 251 mutex_unlock(&iscsi_ep_idr_mutex); 252 + put_device(&ep->dev); 253 + return NULL; 252 254 free_ep: 253 255 kfree(ep); 254 256 return NULL; ··· 768 766 769 767 err = device_register(&iface->dev); 770 768 if (err) 771 - goto free_iface; 769 + goto put_dev; 772 770 773 771 err = sysfs_create_group(&iface->dev.kobj, &iscsi_iface_group); 774 772 if (err) ··· 782 780 device_unregister(&iface->dev); 783 781 return NULL; 784 782 785 - free_iface: 786 - put_device(iface->dev.parent); 787 - kfree(iface); 783 + put_dev: 784 + put_device(&iface->dev); 788 785 return NULL; 789 786 } 790 787 EXPORT_SYMBOL_GPL(iscsi_create_iface); ··· 1252 1251 1253 1252 err = device_register(&fnode_sess->dev); 1254 1253 if (err) 1255 - goto free_fnode_sess; 1254 + goto put_dev; 1256 1255 1257 1256 if (dd_size) 1258 1257 fnode_sess->dd_data = &fnode_sess[1]; 1259 1258 1260 1259 return fnode_sess; 1261 1260 1262 - free_fnode_sess: 1263 - kfree(fnode_sess); 1261 + put_dev: 1262 + put_device(&fnode_sess->dev); 1264 1263 return NULL; 1265 1264 } 1266 1265 EXPORT_SYMBOL_GPL(iscsi_create_flashnode_sess); ··· 1300 1299 1301 1300 err = device_register(&fnode_conn->dev); 1302 1301 if (err) 1303 - goto free_fnode_conn; 1302 + goto put_dev; 1304 1303 1305 1304 if (dd_size) 1306 1305 fnode_conn->dd_data = &fnode_conn[1]; 1307 1306 1308 1307 return fnode_conn; 1309 1308 1310 - free_fnode_conn: 1311 - kfree(fnode_conn); 1309 + put_dev: 1310 + put_device(&fnode_conn->dev); 1312 1311 return NULL; 1313 1312 } 1314 1313 EXPORT_SYMBOL_GPL(iscsi_create_flashnode_conn); ··· 4816 4815 dev_set_name(&priv->dev, "%s", tt->name); 4817 4816 err = device_register(&priv->dev); 4818 4817 if (err) 4819 - goto free_priv; 4818 + goto put_dev; 4820 4819 4821 4820 err = sysfs_create_group(&priv->dev.kobj, &iscsi_transport_group); 4822 4821 if (err) ··· 4851 4850 unregister_dev: 4852 4851 device_unregister(&priv->dev); 4853 4852 return NULL; 4854 - free_priv: 4855 - kfree(priv); 4853 + put_dev: 4854 + put_device(&priv->dev); 4856 4855 return NULL; 4857 4856 } 4858 4857 EXPORT_SYMBOL_GPL(iscsi_register_transport);
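These hunks (and the similar ones in tcm_loop and siox below) all apply the same driver-core rule: once device_register() has run, the embedded kobject owns the allocation, so a failed registration must be unwound with put_device(), which drops the last reference and invokes the release callback, never a bare kfree() on the containing structure. A toy refcount model of why the goto targets changed; everything here is illustrative, not the driver-core API:

```c
/* Toy stand-in for a refcounted device whose release() frees state. */
struct toy_dev {
    int refs;
    int *released;              /* bumped when the last ref drops */
};

void toy_init(struct toy_dev *d, int *released_flag)
{
    d->refs = 1;                /* like device_initialize(): first ref */
    d->released = released_flag;
}

void toy_put(struct toy_dev *d)
{
    if (--d->refs == 0)
        (*d->released)++;       /* release() runs exactly once */
}

/* A register step that can fail after initialization has taken a ref. */
int toy_register(struct toy_dev *d, int fail)
{
    (void)d;
    return fail ? -1 : 0;
}
```

On the error path the fixed code does the equivalent of toy_put(), so the release callback still frees the object exactly once; the old kfree()-style labels freed memory the kobject still referenced.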
+2
drivers/siox/siox-core.c
··· 839 839 840 840 err_device_register: 841 841 /* don't care to make the buffer smaller again */ 842 + put_device(&sdevice->dev); 843 + sdevice = NULL; 842 844 843 845 err_buf_alloc: 844 846 siox_master_unlock(smaster);
+1 -1
drivers/slimbus/Kconfig
··· 23 23 config SLIM_QCOM_NGD_CTRL 24 24 tristate "Qualcomm SLIMbus Satellite Non-Generic Device Component" 25 25 depends on HAS_IOMEM && DMA_ENGINE && NET 26 - depends on QCOM_RPROC_COMMON || COMPILE_TEST 26 + depends on QCOM_RPROC_COMMON || (COMPILE_TEST && !QCOM_RPROC_COMMON) 27 27 depends on ARCH_QCOM || COMPILE_TEST 28 28 select QCOM_QMI_HELPERS 29 29 select QCOM_PDR_HELPERS
+4 -4
drivers/slimbus/stream.c
··· 67 67 384000, 68 68 768000, 69 69 0, /* Reserved */ 70 - 110250, 71 - 220500, 72 - 441000, 73 - 882000, 70 + 11025, 71 + 22050, 72 + 44100, 73 + 88200, 74 74 176400, 75 75 352800, 76 76 705600,
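The corrected entries restore the 44.1 kHz-derived family: each value is 11025 Hz doubled, which lines up with the 176400/352800/705600 entries that were already correct below them (the old 110250/220500/441000/882000 values carried a stray digit). A quick arithmetic check; the helper name is ours:

```c
/* Members of the 44.1 kHz sample-rate family are 11025 << n. */
unsigned int rate_44k1(unsigned int n)
{
    return 11025u << n;
}
```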
+11
drivers/soc/imx/soc-imx8m.c
··· 11 11 #include <linux/platform_device.h> 12 12 #include <linux/arm-smccc.h> 13 13 #include <linux/of.h> 14 + #include <linux/clk.h> 14 15 15 16 #define REV_B1 0x21 16 17 ··· 57 56 void __iomem *ocotp_base; 58 57 u32 magic; 59 58 u32 rev; 59 + struct clk *clk; 60 60 61 61 np = of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp"); 62 62 if (!np) ··· 65 63 66 64 ocotp_base = of_iomap(np, 0); 67 65 WARN_ON(!ocotp_base); 66 + clk = of_clk_get_by_name(np, NULL); 67 + if (!clk) { 68 + WARN_ON(!clk); 69 + return 0; 70 + } 71 + 72 + clk_prepare_enable(clk); 68 73 69 74 /* 70 75 * SOC revision on older imx8mq is not available in fuses so query ··· 88 79 soc_uid <<= 32; 89 80 soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW); 90 81 82 + clk_disable_unprepare(clk); 83 + clk_put(clk); 91 84 iounmap(ocotp_base); 92 85 of_node_put(np); 93 86
+1 -8
drivers/staging/rtl8192e/rtllib_softmac_wx.c
··· 439 439 union iwreq_data *wrqu, char *extra) 440 440 { 441 441 442 - int ret = 0, len, i; 442 + int ret = 0, len; 443 443 short proto_started; 444 444 unsigned long flags; 445 445 ··· 453 453 if (ieee->iw_mode == IW_MODE_MONITOR) { 454 454 ret = -1; 455 455 goto out; 456 - } 457 - 458 - for (i = 0; i < len; i++) { 459 - if (extra[i] < 0) { 460 - ret = -1; 461 - goto out; 462 - } 463 456 } 464 457 465 458 if (proto_started)
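The deleted loop rejected any SSID byte that compared negative, and on targets where plain `char` is signed that means every byte >= 0x80, including perfectly legal non-ASCII SSIDs. A small demonstration of the removed check, written with an explicitly `signed char` so the result is the same on every platform:

```c
/* Mirror of the removed validation: flag "invalid" on any negative byte. */
int old_check_rejects(const signed char *essid, int len)
{
    int i;

    for (i = 0; i < len; i++)
        if (essid[i] < 0)
            return 1;
    return 0;
}
```

An SSID such as "café" (0xc3 0xa9 for the é in UTF-8) would have been refused, even though it is valid over the air.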
+2 -1
drivers/target/loopback/tcm_loop.c
··· 397 397 ret = device_register(&tl_hba->dev); 398 398 if (ret) { 399 399 pr_err("device_register() failed for tl_hba->dev: %d\n", ret); 400 + put_device(&tl_hba->dev); 400 401 return -ENODEV; 401 402 } 402 403 ··· 1074 1073 */ 1075 1074 ret = tcm_loop_setup_hba_bus(tl_hba, tcm_loop_hba_no_cnt); 1076 1075 if (ret) 1077 - goto out; 1076 + return ERR_PTR(ret); 1078 1077 1079 1078 sh = tl_hba->sh; 1080 1079 tcm_loop_hba_no_cnt++;
+38 -33
drivers/tty/n_gsm.c
··· 264 264 bool constipated; /* Asked by remote to shut up */ 265 265 bool has_devices; /* Devices were registered */ 266 266 267 - struct mutex tx_mutex; 267 + spinlock_t tx_lock; 268 268 unsigned int tx_bytes; /* TX data outstanding */ 269 269 #define TX_THRESH_HI 8192 270 270 #define TX_THRESH_LO 2048 ··· 272 272 struct list_head tx_data_list; /* Pending data packets */ 273 273 274 274 /* Control messages */ 275 - struct delayed_work kick_timeout; /* Kick TX queuing on timeout */ 275 + struct timer_list kick_timer; /* Kick TX queuing on timeout */ 276 276 struct timer_list t2_timer; /* Retransmit timer for commands */ 277 277 int cretries; /* Command retry counter */ 278 278 struct gsm_control *pending_cmd;/* Our current pending command */ ··· 700 700 struct gsm_msg *msg; 701 701 u8 *dp; 702 702 int ocr; 703 + unsigned long flags; 703 704 704 705 msg = gsm_data_alloc(gsm, addr, 0, control); 705 706 if (!msg) ··· 722 721 723 722 gsm_print_packet("Q->", addr, cr, control, NULL, 0); 724 723 725 - mutex_lock(&gsm->tx_mutex); 724 + spin_lock_irqsave(&gsm->tx_lock, flags); 726 725 list_add_tail(&msg->list, &gsm->tx_ctrl_list); 727 726 gsm->tx_bytes += msg->len; 728 - mutex_unlock(&gsm->tx_mutex); 727 + spin_unlock_irqrestore(&gsm->tx_lock, flags); 729 728 gsmld_write_trigger(gsm); 730 729 731 730 return 0; ··· 750 749 spin_unlock_irqrestore(&dlci->lock, flags); 751 750 752 751 /* Clear data packets in MUX write queue */ 753 - mutex_lock(&gsm->tx_mutex); 752 + spin_lock_irqsave(&gsm->tx_lock, flags); 754 753 list_for_each_entry_safe(msg, nmsg, &gsm->tx_data_list, list) { 755 754 if (msg->addr != addr) 756 755 continue; ··· 758 757 list_del(&msg->list); 759 758 kfree(msg); 760 759 } 761 - mutex_unlock(&gsm->tx_mutex); 760 + spin_unlock_irqrestore(&gsm->tx_lock, flags); 762 761 } 763 762 764 763 /** ··· 1029 1028 gsm->tx_bytes += msg->len; 1030 1029 1031 1030 gsmld_write_trigger(gsm); 1032 - schedule_delayed_work(&gsm->kick_timeout, 10 * gsm->t1 * HZ / 100); 1031 + mod_timer(&gsm->kick_timer, jiffies + 10 * gsm->t1 * HZ / 100); 1033 1032 } 1034 1033 1035 1034 /** ··· 1044 1043 1045 1044 static void gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg) 1046 1045 { 1047 - mutex_lock(&dlci->gsm->tx_mutex); 1046 + unsigned long flags; 1047 + spin_lock_irqsave(&dlci->gsm->tx_lock, flags); 1048 1048 __gsm_data_queue(dlci, msg); 1049 - mutex_unlock(&dlci->gsm->tx_mutex); 1049 + spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags); 1050 1050 } 1051 1051 1052 1052 /** ··· 1059 1057 * is data. Keep to the MRU of the mux. This path handles the usual tty 1060 1058 * interface which is a byte stream with optional modem data. 1061 1059 * 1062 - * Caller must hold the tx_mutex of the mux. 1060 + * Caller must hold the tx_lock of the mux. 1063 1061 */ 1064 1062 1065 1063 static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci) ··· 1119 1117 * is data. Keep to the MRU of the mux. This path handles framed data 1120 1118 * queued as skbuffs to the DLCI. 1121 1119 * 1122 - * Caller must hold the tx_mutex of the mux. 1120 + * Caller must hold the tx_lock of the mux. 1123 1121 */ 1124 1122 1125 1123 static int gsm_dlci_data_output_framed(struct gsm_mux *gsm, ··· 1135 1133 if (dlci->adaption == 4) 1136 1134 overhead = 1; 1137 1135 1138 - /* dlci->skb is locked by tx_mutex */ 1136 + /* dlci->skb is locked by tx_lock */ 1139 1137 if (dlci->skb == NULL) { 1140 1138 dlci->skb = skb_dequeue_tail(&dlci->skb_list); 1141 1139 if (dlci->skb == NULL) ··· 1189 1187 * Push an empty frame in to the transmit queue to update the modem status 1190 1188 * bits and to transmit an optional break. 1191 1189 * 1192 - * Caller must hold the tx_mutex of the mux. 1190 + * Caller must hold the tx_lock of the mux. 1193 1191 */ 1194 1192 1195 1193 static int gsm_dlci_modem_output(struct gsm_mux *gsm, struct gsm_dlci *dlci, ··· 1303 1301 1304 1302 static void gsm_dlci_data_kick(struct gsm_dlci *dlci) 1305 1303 { 1304 + unsigned long flags; 1306 1305 int sweep; 1307 1306 1308 1307 if (dlci->constipated) 1309 1308 return; 1310 1309 1311 - mutex_lock(&dlci->gsm->tx_mutex); 1310 + spin_lock_irqsave(&dlci->gsm->tx_lock, flags); 1312 1311 /* If we have nothing running then we need to fire up */ 1313 1312 sweep = (dlci->gsm->tx_bytes < TX_THRESH_LO); 1314 1313 if (dlci->gsm->tx_bytes == 0) { ··· 1320 1317 } 1321 1318 if (sweep) 1322 1319 gsm_dlci_data_sweep(dlci->gsm); 1323 - mutex_unlock(&dlci->gsm->tx_mutex); 1320 + spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags); 1324 1321 } 1325 1322 1326 1323 /* ··· 1711 1708 unsigned int command, u8 *data, int clen) 1712 1709 { 1713 1710 struct gsm_control *ctrl = kzalloc(sizeof(struct gsm_control), 1714 - GFP_KERNEL); 1711 + GFP_ATOMIC); 1715 1712 unsigned long flags; 1716 1713 if (ctrl == NULL) 1717 1714 return NULL; ··· 2022 2019 } 2023 2020 2024 2021 /** 2025 - * gsm_kick_timeout - transmit if possible 2026 - * @work: work contained in our gsm object 2022 + * gsm_kick_timer - transmit if possible 2023 + * @t: timer contained in our gsm object 2027 2024 * 2028 2025 * Transmit data from DLCIs if the queue is empty. We can't rely on 2029 2026 * a tty wakeup except when we filled the pipe so we need to fire off 2030 2027 * new data ourselves in other cases. 2031 2028 */ 2032 - static void gsm_kick_timeout(struct work_struct *work) 2029 + static void gsm_kick_timer(struct timer_list *t) 2033 2030 { 2034 - struct gsm_mux *gsm = container_of(work, struct gsm_mux, kick_timeout.work); 2031 + struct gsm_mux *gsm = from_timer(gsm, t, kick_timer); 2032 + unsigned long flags; 2035 2033 int sent = 0; 2036 2034 2037 - mutex_lock(&gsm->tx_mutex); 2035 + spin_lock_irqsave(&gsm->tx_lock, flags); 2038 2036 /* If we have nothing running then we need to fire up */ 2039 2037 if (gsm->tx_bytes < TX_THRESH_LO) 2040 2038 sent = gsm_dlci_data_sweep(gsm); 2041 - mutex_unlock(&gsm->tx_mutex); 2039 + spin_unlock_irqrestore(&gsm->tx_lock, flags); 2042 2040 2043 2041 if (sent && debug & DBG_DATA) 2044 2042 pr_info("%s TX queue stalled\n", __func__); ··· 2496 2492 } 2497 2493 2498 2494 /* Finish outstanding timers, making sure they are done */ 2499 - cancel_delayed_work_sync(&gsm->kick_timeout); 2495 + del_timer_sync(&gsm->kick_timer); 2500 2496 del_timer_sync(&gsm->t2_timer); 2501 2497 2502 2498 /* Finish writing to ldisc */ ··· 2569 2565 break; 2570 2566 } 2571 2567 } 2572 - mutex_destroy(&gsm->tx_mutex); 2573 2568 mutex_destroy(&gsm->mutex); 2574 2569 kfree(gsm->txframe); 2575 2570 kfree(gsm->buf); ··· 2640 2637 } 2641 2638 spin_lock_init(&gsm->lock); 2642 2639 mutex_init(&gsm->mutex); 2643 - mutex_init(&gsm->tx_mutex); 2644 2640 kref_init(&gsm->ref); 2645 2641 INIT_LIST_HEAD(&gsm->tx_ctrl_list); 2646 2642 INIT_LIST_HEAD(&gsm->tx_data_list); 2647 - INIT_DELAYED_WORK(&gsm->kick_timeout, gsm_kick_timeout); 2643 + timer_setup(&gsm->kick_timer, gsm_kick_timer, 0); 2648 2644 timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0); 2649 2645 INIT_WORK(&gsm->tx_work, gsmld_write_task); 2650 2646 init_waitqueue_head(&gsm->event); 2651 2647 spin_lock_init(&gsm->control_lock); 2648 + spin_lock_init(&gsm->tx_lock); 2652 2649 2653 2650 gsm->t1 = T1; 2654 2651 gsm->t2 = T2; ··· 2673 2670 } 2674 2671 spin_unlock(&gsm_mux_lock); 2675 2672 if (i == MAX_MUX) {
2676 - mutex_destroy(&gsm->tx_mutex); 2677 2673 mutex_destroy(&gsm->mutex); 2678 2674 kfree(gsm->txframe); 2679 2675 kfree(gsm->buf); ··· 2828 2826 static void gsmld_write_task(struct work_struct *work) 2829 2827 { 2830 2828 struct gsm_mux *gsm = container_of(work, struct gsm_mux, tx_work); 2829 + unsigned long flags; 2831 2830 int i, ret; 2832 2831 2833 2832 /* All outstanding control channel and control messages and one data 2834 2833 * frame is sent. 2835 2834 */ 2836 2835 ret = -ENODEV; 2837 - mutex_lock(&gsm->tx_mutex); 2836 + spin_lock_irqsave(&gsm->tx_lock, flags); 2838 2837 if (gsm->tty) 2839 2838 ret = gsm_data_kick(gsm); 2840 - mutex_unlock(&gsm->tx_mutex); 2839 + spin_unlock_irqrestore(&gsm->tx_lock, flags); 2841 2840 2842 2841 if (ret >= 0) 2843 2842 for (i = 0; i < NUM_DLCI; i++) ··· 3045 3042 const unsigned char *buf, size_t nr) 3046 3043 { 3047 3044 struct gsm_mux *gsm = tty->disc_data; 3045 + unsigned long flags; 3048 3046 int space; 3049 3047 int ret; 3050 3048 ··· 3053 3049 return -ENODEV; 3054 3050 3055 3051 ret = -ENOBUFS; 3056 - mutex_lock(&gsm->tx_mutex); 3052 + spin_lock_irqsave(&gsm->tx_lock, flags); 3057 3053 space = tty_write_room(tty); 3058 3054 if (space >= nr) 3059 3055 ret = tty->ops->write(tty, buf, nr); 3060 3056 else 3061 3057 set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); 3062 - mutex_unlock(&gsm->tx_mutex); 3058 + spin_unlock_irqrestore(&gsm->tx_lock, flags); 3063 3059 3064 3060 return ret; 3065 3061 } ··· 3356 3352 static void gsm_modem_upd_via_data(struct gsm_dlci *dlci, u8 brk) 3357 3353 { 3358 3354 struct gsm_mux *gsm = dlci->gsm; 3355 + unsigned long flags; 3359 3356 3360 3357 if (dlci->state != DLCI_OPEN || dlci->adaption != 2) 3361 3358 return; 3362 3359 3363 - mutex_lock(&gsm->tx_mutex); 3360 + spin_lock_irqsave(&gsm->tx_lock, flags); 3364 3361 gsm_dlci_modem_output(gsm, dlci, brk); 3365 - mutex_unlock(&gsm->tx_mutex); 3362 + spin_unlock_irqrestore(&gsm->tx_lock, flags); 3366 3363 } 3367 3364 3368 3365 /**
+13 -4
drivers/tty/serial/8250/8250_lpss.c
··· 174 174 */ 175 175 up->dma = dma; 176 176 177 + lpss->dma_maxburst = 16; 178 + 177 179 port->set_termios = dw8250_do_set_termios; 178 180 179 181 return 0; ··· 279 277 struct dw_dma_slave *rx_param, *tx_param; 280 278 struct device *dev = port->port.dev; 281 279 282 - if (!lpss->dma_param.dma_dev) 280 + if (!lpss->dma_param.dma_dev) { 281 + dma = port->dma; 282 + if (dma) 283 + goto out_configuration_only; 284 + 283 285 return 0; 286 + } 284 287 285 288 rx_param = devm_kzalloc(dev, sizeof(*rx_param), GFP_KERNEL); 286 289 if (!rx_param) ··· 296 289 return -ENOMEM; 297 290 298 291 *rx_param = lpss->dma_param; 299 - dma->rxconf.src_maxburst = lpss->dma_maxburst; 300 - 301 292 *tx_param = lpss->dma_param; 302 - dma->txconf.dst_maxburst = lpss->dma_maxburst; 303 293 304 294 dma->fn = lpss8250_dma_filter; 305 295 dma->rx_param = rx_param; 306 296 dma->tx_param = tx_param; 307 297 308 298 port->dma = dma; 299 + 300 + out_configuration_only: 301 + dma->rxconf.src_maxburst = lpss->dma_maxburst; 302 + dma->txconf.dst_maxburst = lpss->dma_maxburst; 303 + 309 304 return 0; 310 305 } 311 306
+30 -22
drivers/tty/serial/8250/8250_omap.c
··· 157 157 return readl(up->port.membase + (reg << up->port.regshift)); 158 158 } 159 159 160 - static void omap8250_set_mctrl(struct uart_port *port, unsigned int mctrl) 160 + /* 161 + * Called on runtime PM resume path from omap8250_restore_regs(), and 162 + * omap8250_set_mctrl(). 163 + */ 164 + static void __omap8250_set_mctrl(struct uart_port *port, unsigned int mctrl) 161 165 { 162 166 struct uart_8250_port *up = up_to_u8250p(port); 163 167 struct omap8250_priv *priv = up->port.private_data; ··· 185 181 } 186 182 } 187 183 184 + static void omap8250_set_mctrl(struct uart_port *port, unsigned int mctrl) 185 + { 186 + int err; 187 + 188 + err = pm_runtime_resume_and_get(port->dev); 189 + if (err) 190 + return; 191 + 192 + __omap8250_set_mctrl(port, mctrl); 193 + 194 + pm_runtime_mark_last_busy(port->dev); 195 + pm_runtime_put_autosuspend(port->dev); 196 + } 197 + 188 198 /* 189 199 * Work Around for Errata i202 (2430, 3430, 3630, 4430 and 4460) 190 200 * The access to uart register after MDR1 Access ··· 211 193 static void omap_8250_mdr1_errataset(struct uart_8250_port *up, 212 194 struct omap8250_priv *priv) 213 195 { 214 - u8 timeout = 255; 215 - 216 196 serial_out(up, UART_OMAP_MDR1, priv->mdr1); 217 197 udelay(2); 218 198 serial_out(up, UART_FCR, up->fcr | UART_FCR_CLEAR_XMIT | 219 199 UART_FCR_CLEAR_RCVR); 220 - /* 221 - * Wait for FIFO to empty: when empty, RX_FIFO_E bit is 0 and 222 - * TX_FIFO_E bit is 1. 223 - */ 224 - while (UART_LSR_THRE != (serial_in(up, UART_LSR) & 225 - (UART_LSR_THRE | UART_LSR_DR))) { 226 - timeout--; 227 - if (!timeout) { 228 - /* Should *never* happen. 
we warn and carry on */ 229 - dev_crit(up->port.dev, "Errata i202: timedout %x\n", 230 - serial_in(up, UART_LSR)); 231 - break; 232 - } 233 - udelay(1); 234 - } 235 200 } 236 201 237 202 static void omap_8250_get_divisor(struct uart_port *port, unsigned int baud, ··· 293 292 { 294 293 struct omap8250_priv *priv = up->port.private_data; 295 294 struct uart_8250_dma *dma = up->dma; 295 + u8 mcr = serial8250_in_MCR(up); 296 296 297 297 if (dma && dma->tx_running) { 298 298 /* ··· 310 308 serial_out(up, UART_EFR, UART_EFR_ECB); 311 309 312 310 serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A); 313 - serial8250_out_MCR(up, UART_MCR_TCRTLR); 311 + serial8250_out_MCR(up, mcr | UART_MCR_TCRTLR); 314 312 serial_out(up, UART_FCR, up->fcr); 315 313 316 314 omap8250_update_scr(up, priv); ··· 326 324 serial_out(up, UART_LCR, 0); 327 325 328 326 /* drop TCR + TLR access, we setup XON/XOFF later */ 329 - serial8250_out_MCR(up, up->mcr); 327 + serial8250_out_MCR(up, mcr); 328 + 330 329 serial_out(up, UART_IER, up->ier); 331 330 332 331 serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B); ··· 344 341 345 342 omap8250_update_mdr1(up, priv); 346 343 347 - up->port.ops->set_mctrl(&up->port, up->port.mctrl); 344 + __omap8250_set_mctrl(&up->port, up->port.mctrl); 348 345 349 346 if (up->port.rs485.flags & SER_RS485_ENABLED) 350 347 serial8250_em485_stop_tx(up); ··· 672 669 673 670 pm_runtime_get_sync(port->dev); 674 671 675 - up->mcr = 0; 676 672 serial_out(up, UART_FCR, UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT); 677 673 678 674 serial_out(up, UART_LCR, UART_LCR_WLEN8); ··· 1460 1458 static int omap8250_remove(struct platform_device *pdev) 1461 1459 { 1462 1460 struct omap8250_priv *priv = platform_get_drvdata(pdev); 1461 + int err; 1462 + 1463 + err = pm_runtime_resume_and_get(&pdev->dev); 1464 + if (err) 1465 + return err; 1463 1466 1464 1467 pm_runtime_dont_use_autosuspend(&pdev->dev); 1465 1468 pm_runtime_put_sync(&pdev->dev); 1469 + flush_work(&priv->qos_work); 1466 1470 
pm_runtime_disable(&pdev->dev); 1467 1471 serial8250_unregister_port(priv->line); 1468 1472 cpu_latency_qos_remove_request(&priv->pm_qos_request);
+5 -2
drivers/tty/serial/8250/8250_port.c
··· 1897 1897 static bool handle_rx_dma(struct uart_8250_port *up, unsigned int iir) 1898 1898 { 1899 1899 switch (iir & 0x3f) { 1900 - case UART_IIR_RX_TIMEOUT: 1901 - serial8250_rx_dma_flush(up); 1900 + case UART_IIR_RDI: 1901 + if (!up->dma->rx_running) 1902 + break; 1902 1903 fallthrough; 1903 1904 case UART_IIR_RLSI: 1905 + case UART_IIR_RX_TIMEOUT: 1906 + serial8250_rx_dma_flush(up); 1904 1907 return true; 1905 1908 } 1906 1909 return up->dma->rx_dma(up);
+49 -27
drivers/tty/serial/fsl_lpuart.c
··· 12 12 #include <linux/dmaengine.h> 13 13 #include <linux/dmapool.h> 14 14 #include <linux/io.h> 15 + #include <linux/iopoll.h> 15 16 #include <linux/irq.h> 16 17 #include <linux/module.h> 17 18 #include <linux/of.h> ··· 404 403 405 404 #define lpuart_enable_clks(x) __lpuart_enable_clks(x, true) 406 405 #define lpuart_disable_clks(x) __lpuart_enable_clks(x, false) 407 - 408 - static int lpuart_global_reset(struct lpuart_port *sport) 409 - { 410 - struct uart_port *port = &sport->port; 411 - void __iomem *global_addr; 412 - int ret; 413 - 414 - if (uart_console(port)) 415 - return 0; 416 - 417 - ret = clk_prepare_enable(sport->ipg_clk); 418 - if (ret) { 419 - dev_err(sport->port.dev, "failed to enable uart ipg clk: %d\n", ret); 420 - return ret; 421 - } 422 - 423 - if (is_imx7ulp_lpuart(sport) || is_imx8qxp_lpuart(sport)) { 424 - global_addr = port->membase + UART_GLOBAL - IMX_REG_OFF; 425 - writel(UART_GLOBAL_RST, global_addr); 426 - usleep_range(GLOBAL_RST_MIN_US, GLOBAL_RST_MAX_US); 427 - writel(0, global_addr); 428 - usleep_range(GLOBAL_RST_MIN_US, GLOBAL_RST_MAX_US); 429 - } 430 - 431 - clk_disable_unprepare(sport->ipg_clk); 432 - return 0; 433 - } 434 406 435 407 static void lpuart_stop_tx(struct uart_port *port) 436 408 { ··· 2609 2635 .flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | SER_RS485_RTS_AFTER_SEND, 2610 2636 /* delay_rts_* and RX_DURING_TX are not supported */ 2611 2637 }; 2638 + 2639 + static int lpuart_global_reset(struct lpuart_port *sport) 2640 + { 2641 + struct uart_port *port = &sport->port; 2642 + void __iomem *global_addr; 2643 + unsigned long ctrl, bd; 2644 + unsigned int val = 0; 2645 + int ret; 2646 + 2647 + ret = clk_prepare_enable(sport->ipg_clk); 2648 + if (ret) { 2649 + dev_err(sport->port.dev, "failed to enable uart ipg clk: %d\n", ret); 2650 + return ret; 2651 + } 2652 + 2653 + if (is_imx7ulp_lpuart(sport) || is_imx8qxp_lpuart(sport)) { 2654 + /* 2655 + * If the transmitter is used by earlycon, wait for transmit engine to 
2656 + * complete and then reset. 2657 + */ 2658 + ctrl = lpuart32_read(port, UARTCTRL); 2659 + if (ctrl & UARTCTRL_TE) { 2660 + bd = lpuart32_read(&sport->port, UARTBAUD); 2661 + if (read_poll_timeout(lpuart32_tx_empty, val, val, 1, 100000, false, 2662 + port)) { 2663 + dev_warn(sport->port.dev, 2664 + "timeout waiting for transmit engine to complete\n"); 2665 + clk_disable_unprepare(sport->ipg_clk); 2666 + return 0; 2667 + } 2668 + } 2669 + 2670 + global_addr = port->membase + UART_GLOBAL - IMX_REG_OFF; 2671 + writel(UART_GLOBAL_RST, global_addr); 2672 + usleep_range(GLOBAL_RST_MIN_US, GLOBAL_RST_MAX_US); 2673 + writel(0, global_addr); 2674 + usleep_range(GLOBAL_RST_MIN_US, GLOBAL_RST_MAX_US); 2675 + 2676 + /* Recover the transmitter for earlycon. */ 2677 + if (ctrl & UARTCTRL_TE) { 2678 + lpuart32_write(port, bd, UARTBAUD); 2679 + lpuart32_write(port, ctrl, UARTCTRL); 2680 + } 2681 + } 2682 + 2683 + clk_disable_unprepare(sport->ipg_clk); 2684 + return 0; 2685 + } 2612 2686 2613 2687 static int lpuart_probe(struct platform_device *pdev) 2614 2688 {
+1
drivers/tty/serial/imx.c
··· 2594 2594 .suspend_noirq = imx_uart_suspend_noirq, 2595 2595 .resume_noirq = imx_uart_resume_noirq, 2596 2596 .freeze_noirq = imx_uart_suspend_noirq, 2597 + .thaw_noirq = imx_uart_resume_noirq, 2597 2598 .restore_noirq = imx_uart_resume_noirq, 2598 2599 .suspend = imx_uart_suspend, 2599 2600 .resume = imx_uart_resume,
+28 -28
drivers/usb/cdns3/host.c
··· 24 24 #define CFG_RXDET_P3_EN BIT(15) 25 25 #define LPM_2_STB_SWITCH_EN BIT(25) 26 26 27 - static int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd); 27 + static void xhci_cdns3_plat_start(struct usb_hcd *hcd) 28 + { 29 + struct xhci_hcd *xhci = hcd_to_xhci(hcd); 30 + u32 value; 31 + 32 + /* set usbcmd.EU3S */ 33 + value = readl(&xhci->op_regs->command); 34 + value |= CMD_PM_INDEX; 35 + writel(value, &xhci->op_regs->command); 36 + 37 + if (hcd->regs) { 38 + value = readl(hcd->regs + XECP_AUX_CTRL_REG1); 39 + value |= CFG_RXDET_P3_EN; 40 + writel(value, hcd->regs + XECP_AUX_CTRL_REG1); 41 + 42 + value = readl(hcd->regs + XECP_PORT_CAP_REG); 43 + value |= LPM_2_STB_SWITCH_EN; 44 + writel(value, hcd->regs + XECP_PORT_CAP_REG); 45 + } 46 + } 47 + 48 + static int xhci_cdns3_resume_quirk(struct usb_hcd *hcd) 49 + { 50 + xhci_cdns3_plat_start(hcd); 51 + return 0; 52 + } 28 53 29 54 static const struct xhci_plat_priv xhci_plat_cdns3_xhci = { 30 55 .quirks = XHCI_SKIP_PHY_INIT | XHCI_AVOID_BEI, 31 - .suspend_quirk = xhci_cdns3_suspend_quirk, 56 + .plat_start = xhci_cdns3_plat_start, 57 + .resume_quirk = xhci_cdns3_resume_quirk, 32 58 }; 33 59 34 60 static int __cdns_host_init(struct cdns *cdns) ··· 114 88 err1: 115 89 platform_device_put(xhci); 116 90 return ret; 117 - } 118 - 119 - static int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd) 120 - { 121 - struct xhci_hcd *xhci = hcd_to_xhci(hcd); 122 - u32 value; 123 - 124 - if (pm_runtime_status_suspended(hcd->self.controller)) 125 - return 0; 126 - 127 - /* set usbcmd.EU3S */ 128 - value = readl(&xhci->op_regs->command); 129 - value |= CMD_PM_INDEX; 130 - writel(value, &xhci->op_regs->command); 131 - 132 - if (hcd->regs) { 133 - value = readl(hcd->regs + XECP_AUX_CTRL_REG1); 134 - value |= CFG_RXDET_P3_EN; 135 - writel(value, hcd->regs + XECP_AUX_CTRL_REG1); 136 - 137 - value = readl(hcd->regs + XECP_PORT_CAP_REG); 138 - value |= LPM_2_STB_SWITCH_EN; 139 - writel(value, hcd->regs + XECP_PORT_CAP_REG); 140 - } 141 - 142 - 
return 0; 143 91 } 144 92 145 93 static void cdns_host_exit(struct cdns *cdns)
+2
drivers/usb/chipidea/otg_fsm.c
··· 256 256 ci->enabled_otg_timer_bits &= ~(1 << t); 257 257 if (ci->next_otg_timer == t) { 258 258 if (ci->enabled_otg_timer_bits == 0) { 259 + spin_unlock_irqrestore(&ci->lock, flags); 259 260 /* No enabled timers after delete it */ 260 261 hrtimer_cancel(&ci->otg_fsm_hrtimer); 262 + spin_lock_irqsave(&ci->lock, flags); 261 263 ci->next_otg_timer = NUM_OTG_FSM_TIMERS; 262 264 } else { 263 265 /* Find the next timer */
+3
drivers/usb/core/quirks.c
··· 362 362 { USB_DEVICE(0x0781, 0x5583), .driver_info = USB_QUIRK_NO_LPM }, 363 363 { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM }, 364 364 365 + /* Realforce 87U Keyboard */ 366 + { USB_DEVICE(0x0853, 0x011b), .driver_info = USB_QUIRK_NO_LPM }, 367 + 365 368 /* M-Systems Flash Disk Pioneers */ 366 369 { USB_DEVICE(0x08ec, 0x1000), .driver_info = USB_QUIRK_RESET_RESUME }, 367 370
+10
drivers/usb/dwc3/core.c
··· 1711 1711 return extcon_get_extcon_dev(name); 1712 1712 1713 1713 /* 1714 + * Check explicitly if "usb-role-switch" is used since 1715 + * extcon_find_edev_by_node() cannot be used to check the absence of 1716 + * an extcon device. In the absence of a device it will always return 1717 + * EPROBE_DEFER. 1718 + */ 1719 + if (IS_ENABLED(CONFIG_USB_ROLE_SWITCH) && 1720 + device_property_read_bool(dev, "usb-role-switch")) 1721 + return NULL; 1722 + 1723 + /* 1714 1724 * Try to get an extcon device from the USB PHY controller's "port" 1715 1725 * node. Check if it has the "port" node first, to avoid printing the 1716 1726 * error message from underlying code, as it's a valid case: extcon
+1 -1
drivers/usb/dwc3/gadget.c
··· 1029 1029 dep->endpoint.desc = NULL; 1030 1030 } 1031 1031 1032 - dwc3_remove_requests(dwc, dep, -ECONNRESET); 1032 + dwc3_remove_requests(dwc, dep, -ESHUTDOWN); 1033 1033 1034 1034 dep->stream_capable = false; 1035 1035 dep->type = 0;
-10
drivers/usb/dwc3/host.c
··· 11 11 #include <linux/of.h> 12 12 #include <linux/platform_device.h> 13 13 14 - #include "../host/xhci-plat.h" 15 14 #include "core.h" 16 - 17 - static const struct xhci_plat_priv dwc3_xhci_plat_priv = { 18 - .quirks = XHCI_SKIP_PHY_INIT, 19 - }; 20 15 21 16 static void dwc3_host_fill_xhci_irq_res(struct dwc3 *dwc, 22 17 int irq, char *name) ··· 91 96 dev_err(dwc->dev, "couldn't add resources to xHCI device\n"); 92 97 goto err; 93 98 } 94 - 95 - ret = platform_device_add_data(xhci, &dwc3_xhci_plat_priv, 96 - sizeof(dwc3_xhci_plat_priv)); 97 - if (ret) 98 - goto err; 99 99 100 100 memset(props, 0, sizeof(struct property_entry) * ARRAY_SIZE(props)); 101 101
+6 -4
drivers/usb/host/bcma-hcd.c
··· 285 285 { 286 286 struct bcma_hcd_device *usb_dev = bcma_get_drvdata(dev); 287 287 288 - if (IS_ERR_OR_NULL(usb_dev->gpio_desc)) 288 + if (!usb_dev->gpio_desc) 289 289 return; 290 290 291 291 gpiod_set_value(usb_dev->gpio_desc, val); ··· 406 406 return -ENOMEM; 407 407 usb_dev->core = core; 408 408 409 - if (core->dev.of_node) 410 - usb_dev->gpio_desc = devm_gpiod_get(&core->dev, "vcc", 411 - GPIOD_OUT_HIGH); 409 + usb_dev->gpio_desc = devm_gpiod_get_optional(&core->dev, "vcc", 410 + GPIOD_OUT_HIGH); 411 + if (IS_ERR(usb_dev->gpio_desc)) 412 + return dev_err_probe(&core->dev, PTR_ERR(usb_dev->gpio_desc), 413 + "error obtaining VCC GPIO"); 412 414 413 415 switch (core->id.id) { 414 416 case BCMA_CORE_USB20_HOST:
+17 -2
drivers/usb/serial/option.c
··· 162 162 #define NOVATELWIRELESS_PRODUCT_G2 0xA010 163 163 #define NOVATELWIRELESS_PRODUCT_MC551 0xB001 164 164 165 + #define UBLOX_VENDOR_ID 0x1546 166 + 165 167 /* AMOI PRODUCTS */ 166 168 #define AMOI_VENDOR_ID 0x1614 167 169 #define AMOI_PRODUCT_H01 0x0800 ··· 242 240 #define QUECTEL_PRODUCT_UC15 0x9090 243 241 /* These u-blox products use Qualcomm's vendor ID */ 244 242 #define UBLOX_PRODUCT_R410M 0x90b2 245 - #define UBLOX_PRODUCT_R6XX 0x90fa 246 243 /* These Yuga products use Qualcomm's vendor ID */ 247 244 #define YUGA_PRODUCT_CLM920_NC5 0x9625 248 245 ··· 582 581 #define OPPO_VENDOR_ID 0x22d9 583 582 #define OPPO_PRODUCT_R11 0x276c 584 583 584 + /* Sierra Wireless products */ 585 + #define SIERRA_VENDOR_ID 0x1199 586 + #define SIERRA_PRODUCT_EM9191 0x90d3 585 587 586 588 /* Device flags */ 587 589 ··· 1128 1124 /* u-blox products using Qualcomm vendor ID */ 1129 1125 { USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M), 1130 1126 .driver_info = RSVD(1) | RSVD(3) }, 1131 - { USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R6XX), 1127 + { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x908b), /* u-blox LARA-R6 00B */ 1128 + .driver_info = RSVD(4) }, 1129 + { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x90fa), 1132 1130 .driver_info = RSVD(3) }, 1131 + /* u-blox products */ 1132 + { USB_DEVICE(UBLOX_VENDOR_ID, 0x1341) }, /* u-blox LARA-L6 */ 1133 + { USB_DEVICE(UBLOX_VENDOR_ID, 0x1342), /* u-blox LARA-L6 (RMNET) */ 1134 + .driver_info = RSVD(4) }, 1135 + { USB_DEVICE(UBLOX_VENDOR_ID, 0x1343), /* u-blox LARA-L6 (ECM) */ 1136 + .driver_info = RSVD(4) }, 1133 1137 /* Quectel products using Quectel vendor ID */ 1134 1138 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21, 0xff, 0xff, 0xff), 1135 1139 .driver_info = NUMEP2 }, ··· 2179 2167 { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x010a, 0xff) }, /* Fibocom MA510 (ECM mode) */ 2180 2168 { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) }, /* Fibocom FG150 Diag */ 2181 2169 { 
USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) }, /* Fibocom FG150 AT */ 2170 + { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0111, 0xff) }, /* Fibocom FM160 (MBIM mode) */ 2182 2171 { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) }, /* Fibocom NL668-AM/NL652-EU (laptop MBIM) */ 2183 2172 { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a2, 0xff) }, /* Fibocom FM101-GL (laptop MBIM) */ 2184 2173 { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a4, 0xff), /* Fibocom FM101-GL (laptop MBIM) */ ··· 2189 2176 { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) }, /* GosunCn GM500 MBIM */ 2190 2177 { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) }, /* GosunCn GM500 ECM/NCM */ 2191 2178 { USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) }, 2179 + { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) }, 2180 + { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) }, 2192 2181 { } /* Terminating entry */ 2193 2182 }; 2194 2183 MODULE_DEVICE_TABLE(usb, option_ids);
+13 -2
drivers/usb/typec/mux/intel_pmc_mux.c
··· 369 369 return pmc_usb_command(port, (void *)&req, sizeof(req)); 370 370 } 371 371 372 - static int pmc_usb_mux_safe_state(struct pmc_usb_port *port) 372 + static int pmc_usb_mux_safe_state(struct pmc_usb_port *port, 373 + struct typec_mux_state *state) 373 374 { 374 375 u8 msg; 375 376 376 377 if (IOM_PORT_ACTIVITY_IS(port->iom_status, SAFE_MODE)) 378 + return 0; 379 + 380 + if ((IOM_PORT_ACTIVITY_IS(port->iom_status, DP) || 381 + IOM_PORT_ACTIVITY_IS(port->iom_status, DP_MFD)) && 382 + state->alt && state->alt->svid == USB_TYPEC_DP_SID) 383 + return 0; 384 + 385 + if ((IOM_PORT_ACTIVITY_IS(port->iom_status, TBT) || 386 + IOM_PORT_ACTIVITY_IS(port->iom_status, ALT_MODE_TBT_USB)) && 387 + state->alt && state->alt->svid == USB_TYPEC_TBT_SID) 377 388 return 0; 378 389 379 390 msg = PMC_USB_SAFE_MODE; ··· 454 443 return 0; 455 444 456 445 if (state->mode == TYPEC_STATE_SAFE) 457 - return pmc_usb_mux_safe_state(port); 446 + return pmc_usb_mux_safe_state(port, state); 458 447 if (state->mode == TYPEC_STATE_USB) 459 448 return pmc_usb_connect(port, port->role); 460 449
+3 -3
drivers/usb/typec/tipd/core.c
··· 474 474 static irqreturn_t cd321x_interrupt(int irq, void *data) 475 475 { 476 476 struct tps6598x *tps = data; 477 - u64 event; 477 + u64 event = 0; 478 478 u32 status; 479 479 int ret; 480 480 ··· 519 519 static irqreturn_t tps6598x_interrupt(int irq, void *data) 520 520 { 521 521 struct tps6598x *tps = data; 522 - u64 event1; 523 - u64 event2; 522 + u64 event1 = 0; 523 + u64 event2 = 0; 524 524 u32 status; 525 525 int ret; 526 526
+5 -5
drivers/vfio/pci/vfio_pci_core.c
··· 2488 2488 struct vfio_pci_core_device *cur; 2489 2489 bool needs_reset = false; 2490 2490 2491 - list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list) { 2492 - /* No VFIO device in the set can have an open device FD */ 2493 - if (cur->vdev.open_count) 2494 - return false; 2491 + /* No other VFIO device in the set can be open. */ 2492 + if (vfio_device_set_open_count(dev_set) > 1) 2493 + return false; 2494 + 2495 + list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list) 2495 2496 needs_reset |= cur->needs_reset; 2496 - } 2497 2497 return needs_reset; 2498 2498 } 2499 2499
+21 -5
drivers/vfio/vfio_main.c
··· 125 125 xa_unlock(&vfio_device_set_xa); 126 126 } 127 127 128 + unsigned int vfio_device_set_open_count(struct vfio_device_set *dev_set) 129 + { 130 + struct vfio_device *cur; 131 + unsigned int open_count = 0; 132 + 133 + lockdep_assert_held(&dev_set->lock); 134 + 135 + list_for_each_entry(cur, &dev_set->device_list, dev_set_list) 136 + open_count += cur->open_count; 137 + return open_count; 138 + } 139 + EXPORT_SYMBOL_GPL(vfio_device_set_open_count); 140 + 128 141 /* 129 142 * Group objects - create, release, get, put, search 130 143 */ ··· 814 801 err_close_device: 815 802 mutex_lock(&device->dev_set->lock); 816 803 mutex_lock(&device->group->group_lock); 817 - if (device->open_count == 1 && device->ops->close_device) { 818 - device->ops->close_device(device); 804 + if (device->open_count == 1) { 805 + if (device->ops->close_device) 806 + device->ops->close_device(device); 819 807 820 808 vfio_device_container_unregister(device); 821 809 } ··· 1031 1017 mutex_lock(&device->dev_set->lock); 1032 1018 vfio_assert_device_open(device); 1033 1019 mutex_lock(&device->group->group_lock); 1034 - if (device->open_count == 1 && device->ops->close_device) 1035 - device->ops->close_device(device); 1020 + if (device->open_count == 1) { 1021 + if (device->ops->close_device) 1022 + device->ops->close_device(device); 1036 1023 1037 - vfio_device_container_unregister(device); 1024 + vfio_device_container_unregister(device); 1025 + } 1038 1026 mutex_unlock(&device->group->group_lock); 1039 1027 device->open_count--; 1040 1028 if (device->open_count == 0)
+1 -1
drivers/xen/pcpu.c
··· 228 228 229 229 err = device_register(dev); 230 230 if (err) { 231 - pcpu_release(dev); 231 + put_device(dev); 232 232 return err; 233 233 } 234 234
+7 -3
drivers/xen/platform-pci.c
··· 54 54 pin = pdev->pin; 55 55 56 56 /* We don't know the GSI. Specify the PCI INTx line instead. */ 57 - return ((uint64_t)0x01 << HVM_CALLBACK_VIA_TYPE_SHIFT) | /* PCI INTx identifier */ 57 + return ((uint64_t)HVM_PARAM_CALLBACK_TYPE_PCI_INTX << 58 + HVM_CALLBACK_VIA_TYPE_SHIFT) | 58 59 ((uint64_t)pci_domain_nr(pdev->bus) << 32) | 59 60 ((uint64_t)pdev->bus->number << 16) | 60 61 ((uint64_t)(pdev->devfn & 0xff) << 8) | ··· 145 144 if (ret) { 146 145 dev_warn(&pdev->dev, "Unable to set the evtchn callback " 147 146 "err=%d\n", ret); 148 - goto out; 147 + goto irq_out; 149 148 } 150 149 } 151 150 ··· 153 152 grant_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes); 154 153 ret = gnttab_setup_auto_xlat_frames(grant_frames); 155 154 if (ret) 156 - goto out; 155 + goto irq_out; 157 156 ret = gnttab_init(); 158 157 if (ret) 159 158 goto grant_out; 160 159 return 0; 161 160 grant_out: 162 161 gnttab_free_auto_xlat_frames(); 162 + irq_out: 163 + if (!xen_have_vector_callback) 164 + free_irq(pdev->irq, pdev); 163 165 out: 164 166 pci_release_region(pdev, 0); 165 167 mem_out:
+6 -3
drivers/xen/xen-pciback/conf_space_capability.c
··· 190 190 }; 191 191 192 192 static struct msi_msix_field_config { 193 - u16 enable_bit; /* bit for enabling MSI/MSI-X */ 194 - unsigned int int_type; /* interrupt type for exclusiveness check */ 193 + u16 enable_bit; /* bit for enabling MSI/MSI-X */ 194 + u16 allowed_bits; /* bits allowed to be changed */ 195 + unsigned int int_type; /* interrupt type for exclusiveness check */ 195 196 } msi_field_config = { 196 197 .enable_bit = PCI_MSI_FLAGS_ENABLE, 198 + .allowed_bits = PCI_MSI_FLAGS_ENABLE, 197 199 .int_type = INTERRUPT_TYPE_MSI, 198 200 }, msix_field_config = { 199 201 .enable_bit = PCI_MSIX_FLAGS_ENABLE, 202 + .allowed_bits = PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL, 200 203 .int_type = INTERRUPT_TYPE_MSIX, 201 204 }; 202 205 ··· 232 229 return 0; 233 230 234 231 if (!dev_data->allow_interrupt_control || 235 - (new_value ^ old_value) & ~field_config->enable_bit) 232 + (new_value ^ old_value) & ~field_config->allowed_bits) 236 233 return PCIBIOS_SET_FAILED; 237 234 238 235 if (new_value & field_config->enable_bit) {
+12 -36
fs/ceph/caps.c
··· 2248 2248 struct ceph_mds_client *mdsc = ceph_sb_to_client(inode->i_sb)->mdsc; 2249 2249 struct ceph_inode_info *ci = ceph_inode(inode); 2250 2250 struct ceph_mds_request *req1 = NULL, *req2 = NULL; 2251 - unsigned int max_sessions; 2252 2251 int ret, err = 0; 2253 2252 2254 2253 spin_lock(&ci->i_unsafe_lock); ··· 2266 2267 spin_unlock(&ci->i_unsafe_lock); 2267 2268 2268 2269 /* 2269 - * The mdsc->max_sessions is unlikely to be changed 2270 - * mostly, here we will retry it by reallocating the 2271 - * sessions array memory to get rid of the mdsc->mutex 2272 - * lock. 2273 - */ 2274 - retry: 2275 - max_sessions = mdsc->max_sessions; 2276 - 2277 - /* 2278 2270 * Trigger to flush the journal logs in all the relevant MDSes 2279 2271 * manually, or in the worst case we must wait at most 5 seconds 2280 2272 * to wait the journal logs to be flushed by the MDSes periodically. 2281 2273 */ 2282 - if ((req1 || req2) && likely(max_sessions)) { 2283 - struct ceph_mds_session **sessions = NULL; 2284 - struct ceph_mds_session *s; 2274 + if (req1 || req2) { 2285 2275 struct ceph_mds_request *req; 2276 + struct ceph_mds_session **sessions; 2277 + struct ceph_mds_session *s; 2278 + unsigned int max_sessions; 2286 2279 int i; 2280 + 2281 + mutex_lock(&mdsc->mutex); 2282 + max_sessions = mdsc->max_sessions; 2287 2283 2288 2284 sessions = kcalloc(max_sessions, sizeof(s), GFP_KERNEL); 2289 2285 if (!sessions) { 2286 + mutex_unlock(&mdsc->mutex); 2290 2287 err = -ENOMEM; 2291 2288 goto out; 2292 2289 } ··· 2294 2299 s = req->r_session; 2295 2300 if (!s) 2296 2301 continue; 2297 - if (unlikely(s->s_mds >= max_sessions)) { 2298 - spin_unlock(&ci->i_unsafe_lock); 2299 - for (i = 0; i < max_sessions; i++) { 2300 - s = sessions[i]; 2301 - if (s) 2302 - ceph_put_mds_session(s); 2303 - } 2304 - kfree(sessions); 2305 - goto retry; 2306 - } 2307 2302 if (!sessions[s->s_mds]) { 2308 2303 s = ceph_get_mds_session(s); 2309 2304 sessions[s->s_mds] = s; ··· 2306 2321 s = req->r_session; 2307 
2322 if (!s) 2308 2323 continue; 2309 - if (unlikely(s->s_mds >= max_sessions)) { 2310 - spin_unlock(&ci->i_unsafe_lock); 2311 - for (i = 0; i < max_sessions; i++) { 2312 - s = sessions[i]; 2313 - if (s) 2314 - ceph_put_mds_session(s); 2315 - } 2316 - kfree(sessions); 2317 - goto retry; 2318 - } 2319 2324 if (!sessions[s->s_mds]) { 2320 2325 s = ceph_get_mds_session(s); 2321 2326 sessions[s->s_mds] = s; ··· 2317 2342 /* the auth MDS */ 2318 2343 spin_lock(&ci->i_ceph_lock); 2319 2344 if (ci->i_auth_cap) { 2320 - s = ci->i_auth_cap->session; 2321 - if (!sessions[s->s_mds]) 2322 - sessions[s->s_mds] = ceph_get_mds_session(s); 2345 + s = ci->i_auth_cap->session; 2346 + if (!sessions[s->s_mds]) 2347 + sessions[s->s_mds] = ceph_get_mds_session(s); 2323 2348 } 2324 2349 spin_unlock(&ci->i_ceph_lock); 2350 + mutex_unlock(&mdsc->mutex); 2325 2351 2326 2352 /* send flush mdlog request to MDSes */ 2327 2353 for (i = 0; i < max_sessions; i++) {
+1 -1
fs/ceph/inode.c
··· 2492 2492 struct inode *parent; 2493 2493 2494 2494 parent = ceph_lookup_inode(sb, ceph_ino(inode)); 2495 - if (!parent) 2495 + if (IS_ERR(parent)) 2496 2496 return PTR_ERR(parent); 2497 2497 2498 2498 pci = ceph_inode(parent);
+2 -1
fs/ceph/snap.c
··· 763 763 struct ceph_mds_snap_realm *ri; /* encoded */ 764 764 __le64 *snaps; /* encoded */ 765 765 __le64 *prior_parent_snaps; /* encoded */ 766 - struct ceph_snap_realm *realm = NULL; 766 + struct ceph_snap_realm *realm; 767 767 struct ceph_snap_realm *first_realm = NULL; 768 768 struct ceph_snap_realm *realm_to_rebuild = NULL; 769 769 int rebuild_snapcs; ··· 774 774 775 775 dout("%s deletion=%d\n", __func__, deletion); 776 776 more: 777 + realm = NULL; 777 778 rebuild_snapcs = 0; 778 779 ceph_decode_need(&p, e, sizeof(*ri), bad); 779 780 ri = p;
+11 -3
fs/cifs/connect.c
··· 3855 3855 uuid_copy(&cifs_sb->dfs_mount_id, &mnt_ctx.mount_id); 3856 3856 3857 3857 out: 3858 - free_xid(mnt_ctx.xid); 3859 3858 cifs_try_adding_channels(cifs_sb, mnt_ctx.ses); 3860 - return mount_setup_tlink(cifs_sb, mnt_ctx.ses, mnt_ctx.tcon); 3859 + rc = mount_setup_tlink(cifs_sb, mnt_ctx.ses, mnt_ctx.tcon); 3860 + if (rc) 3861 + goto error; 3862 + 3863 + free_xid(mnt_ctx.xid); 3864 + return rc; 3861 3865 3862 3866 error: 3863 3867 dfs_cache_put_refsrv_sessions(&mnt_ctx.mount_id); ··· 3888 3884 goto error; 3889 3885 } 3890 3886 3887 + rc = mount_setup_tlink(cifs_sb, mnt_ctx.ses, mnt_ctx.tcon); 3888 + if (rc) 3889 + goto error; 3890 + 3891 3891 free_xid(mnt_ctx.xid); 3892 - return mount_setup_tlink(cifs_sb, mnt_ctx.ses, mnt_ctx.tcon); 3892 + return rc; 3893 3893 3894 3894 error: 3895 3895 mount_put_conns(&mnt_ctx);
+2 -2
fs/cifs/ioctl.c
··· 343 343 rc = put_user(ExtAttrBits & 344 344 FS_FL_USER_VISIBLE, 345 345 (int __user *)arg); 346 - if (rc != EOPNOTSUPP) 346 + if (rc != -EOPNOTSUPP) 347 347 break; 348 348 } 349 349 #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */ ··· 373 373 * pSMBFile->fid.netfid, 374 374 * extAttrBits, 375 375 * &ExtAttrMask); 376 - * if (rc != EOPNOTSUPP) 376 + * if (rc != -EOPNOTSUPP) 377 377 * break; 378 378 */ 379 379
+4
fs/cifs/smb2ops.c
··· 1116 1116 COMPOUND_FID, current->tgid, 1117 1117 FILE_FULL_EA_INFORMATION, 1118 1118 SMB2_O_INFO_FILE, 0, data, size); 1119 + if (rc) 1120 + goto sea_exit; 1119 1121 smb2_set_next_command(tcon, &rqst[1]); 1120 1122 smb2_set_related(&rqst[1]); 1121 1123 ··· 1128 1126 rqst[2].rq_nvec = 1; 1129 1127 rc = SMB2_close_init(tcon, server, 1130 1128 &rqst[2], COMPOUND_FID, COMPOUND_FID, false); 1129 + if (rc) 1130 + goto sea_exit; 1131 1131 smb2_set_related(&rqst[2]); 1132 1132 1133 1133 rc = compound_send_recv(xid, ses, server,
+21 -14
fs/erofs/fscache.c
··· 75 75 76 76 rcu_read_lock(); 77 77 xas_for_each(&xas, folio, last_page) { 78 - unsigned int pgpos = 79 - (folio_index(folio) - start_page) * PAGE_SIZE; 80 - unsigned int pgend = pgpos + folio_size(folio); 78 + unsigned int pgpos, pgend; 81 79 bool pg_failed = false; 80 + 81 + if (xas_retry(&xas, folio)) 82 + continue; 83 + 84 + pgpos = (folio_index(folio) - start_page) * PAGE_SIZE; 85 + pgend = pgpos + folio_size(folio); 82 86 83 87 for (;;) { 84 88 if (!subreq) { ··· 291 287 return PTR_ERR(src); 292 288 293 289 iov_iter_xarray(&iter, READ, &mapping->i_pages, pos, PAGE_SIZE); 294 - if (copy_to_iter(src + offset, size, &iter) != size) 290 + if (copy_to_iter(src + offset, size, &iter) != size) { 291 + erofs_put_metabuf(&buf); 295 292 return -EFAULT; 293 + } 296 294 iov_iter_zero(PAGE_SIZE - size, &iter); 297 295 erofs_put_metabuf(&buf); 298 296 return PAGE_SIZE; 299 297 } 300 298 301 - count = min_t(size_t, map.m_llen - (pos - map.m_la), len); 302 - DBG_BUGON(!count || count % PAGE_SIZE); 303 - 304 299 if (!(map.m_flags & EROFS_MAP_MAPPED)) { 300 + count = len; 305 301 iov_iter_xarray(&iter, READ, &mapping->i_pages, pos, count); 306 302 iov_iter_zero(count, &iter); 307 303 return count; 308 304 } 305 + 306 + count = min_t(size_t, map.m_llen - (pos - map.m_la), len); 307 + DBG_BUGON(!count || count % PAGE_SIZE); 309 308 310 309 mdev = (struct erofs_map_dev) { 311 310 .m_deviceid = map.m_deviceid, ··· 410 403 static int erofs_fscache_register_volume(struct super_block *sb) 411 404 { 412 405 struct erofs_sb_info *sbi = EROFS_SB(sb); 413 - char *domain_id = sbi->opt.domain_id; 406 + char *domain_id = sbi->domain_id; 414 407 struct fscache_volume *volume; 415 408 char *name; 416 409 int ret = 0; 417 410 
418 411 name = kasprintf(GFP_KERNEL, "erofs,%s", 419 - domain_id ? domain_id : sbi->opt.fsid); 412 + domain_id ? domain_id : sbi->fsid); 420 413 if (!name) 421 414 return -ENOMEM; 422 415 ··· 442 435 if (!domain) 443 436 return -ENOMEM; 444 437 445 - domain->domain_id = kstrdup(sbi->opt.domain_id, GFP_KERNEL); 438 + domain->domain_id = kstrdup(sbi->domain_id, GFP_KERNEL); 446 439 if (!domain->domain_id) { 447 440 kfree(domain); 448 441 return -ENOMEM; ··· 479 472 480 473 mutex_lock(&erofs_domain_list_lock); 481 474 list_for_each_entry(domain, &erofs_domain_list, list) { 482 - if (!strcmp(domain->domain_id, sbi->opt.domain_id)) { 475 + if (!strcmp(domain->domain_id, sbi->domain_id)) { 483 476 sbi->domain = domain; 484 477 sbi->volume = domain->volume; 485 478 refcount_inc(&domain->ref); ··· 616 609 struct erofs_fscache *erofs_fscache_register_cookie(struct super_block *sb, 617 610 char *name, bool need_inode) 618 611 { 619 - if (EROFS_SB(sb)->opt.domain_id) 612 + if (EROFS_SB(sb)->domain_id) 620 613 return erofs_domain_register_cookie(sb, name, need_inode); 621 614 return erofs_fscache_acquire_cookie(sb, name, need_inode); 622 615 } ··· 648 641 struct erofs_sb_info *sbi = EROFS_SB(sb); 649 642 struct erofs_fscache *fscache; 650 643 651 - if (sbi->opt.domain_id) 644 + if (sbi->domain_id) 652 645 ret = erofs_fscache_register_domain(sb); 653 646 else 654 647 ret = erofs_fscache_register_volume(sb); ··· 656 649 return ret; 657 650 658 651 /* acquired domain/volume will be relinquished in kill_sb() on error */ 659 - fscache = erofs_fscache_register_cookie(sb, sbi->opt.fsid, true); 652 + fscache = erofs_fscache_register_cookie(sb, sbi->fsid, true); 660 653 if (IS_ERR(fscache)) 661 654 return PTR_ERR(fscache); 662 655
+4 -2
fs/erofs/internal.h
··· 75 75 unsigned int max_sync_decompress_pages; 76 76 #endif 77 77 unsigned int mount_opt; 78 - char *fsid; 79 - char *domain_id; 80 78 }; 81 79 82 80 struct erofs_dev_context { ··· 87 89 struct erofs_fs_context { 88 90 struct erofs_mount_opts opt; 89 91 struct erofs_dev_context *devs; 92 + char *fsid; 93 + char *domain_id; 90 94 }; 91 95 92 96 /* all filesystem-wide lz4 configurations */ ··· 170 170 struct fscache_volume *volume; 171 171 struct erofs_fscache *s_fscache; 172 172 struct erofs_domain *domain; 173 + char *fsid; 174 + char *domain_id; 173 175 }; 174 176 175 177 #define EROFS_SB(sb) ((struct erofs_sb_info *)(sb)->s_fs_info)
+22 -17
fs/erofs/super.c
··· 579 579 break; 580 580 case Opt_fsid: 581 581 #ifdef CONFIG_EROFS_FS_ONDEMAND 582 - kfree(ctx->opt.fsid); 583 - ctx->opt.fsid = kstrdup(param->string, GFP_KERNEL); 584 - if (!ctx->opt.fsid) 582 + kfree(ctx->fsid); 583 + ctx->fsid = kstrdup(param->string, GFP_KERNEL); 584 + if (!ctx->fsid) 585 585 return -ENOMEM; 586 586 #else 587 587 errorfc(fc, "fsid option not supported"); ··· 589 589 break; 590 590 case Opt_domain_id: 591 591 #ifdef CONFIG_EROFS_FS_ONDEMAND 592 - kfree(ctx->opt.domain_id); 593 - ctx->opt.domain_id = kstrdup(param->string, GFP_KERNEL); 594 - if (!ctx->opt.domain_id) 592 + kfree(ctx->domain_id); 593 + ctx->domain_id = kstrdup(param->string, GFP_KERNEL); 594 + if (!ctx->domain_id) 595 595 return -ENOMEM; 596 596 #else 597 597 errorfc(fc, "domain_id option not supported"); ··· 728 728 729 729 sb->s_fs_info = sbi; 730 730 sbi->opt = ctx->opt; 731 - ctx->opt.fsid = NULL; 732 - ctx->opt.domain_id = NULL; 733 731 sbi->devs = ctx->devs; 734 732 ctx->devs = NULL; 733 + sbi->fsid = ctx->fsid; 734 + ctx->fsid = NULL; 735 + sbi->domain_id = ctx->domain_id; 736 + ctx->domain_id = NULL; 735 737 736 738 if (erofs_is_fscache_mode(sb)) { 737 739 sb->s_blocksize = EROFS_BLKSIZ; ··· 822 820 { 823 821 struct erofs_fs_context *ctx = fc->fs_private; 824 822 825 - if (IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && ctx->opt.fsid) 823 + if (IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && ctx->fsid) 826 824 return get_tree_nodev(fc, erofs_fc_fill_super); 827 825 828 826 return get_tree_bdev(fc, erofs_fc_fill_super); ··· 835 833 struct erofs_fs_context *ctx = fc->fs_private; 836 834 837 835 DBG_BUGON(!sb_rdonly(sb)); 836 + 837 + if (ctx->fsid || ctx->domain_id) 838 + erofs_info(sb, "ignoring reconfiguration for fsid|domain_id."); 838 839 839 840 if (test_opt(&ctx->opt, POSIX_ACL)) 840 841 fc->sb_flags |= SB_POSIXACL; ··· 878 873 struct erofs_fs_context *ctx = fc->fs_private; 879 874 880 875 erofs_free_dev_context(ctx->devs); 
881 - kfree(ctx->opt.fsid); 882 - kfree(ctx->opt.domain_id); 876 + kfree(ctx->fsid); 877 + kfree(ctx->domain_id); 883 878 kfree(ctx); 884 879 } 885 880 ··· 949 944 erofs_free_dev_context(sbi->devs); 950 945 fs_put_dax(sbi->dax_dev, NULL); 951 946 erofs_fscache_unregister_fs(sb); 952 - kfree(sbi->opt.fsid); 953 - kfree(sbi->opt.domain_id); 947 + kfree(sbi->fsid); 948 + kfree(sbi->domain_id); 954 949 kfree(sbi); 955 950 sb->s_fs_info = NULL; 956 951 } ··· 1103 1098 if (test_opt(opt, DAX_NEVER)) 1104 1099 seq_puts(seq, ",dax=never"); 1105 1100 #ifdef CONFIG_EROFS_FS_ONDEMAND 1106 - if (opt->fsid) 1107 - seq_printf(seq, ",fsid=%s", opt->fsid); 1108 - if (opt->domain_id) 1109 - seq_printf(seq, ",domain_id=%s", opt->domain_id); 1101 + if (sbi->fsid) 1102 + seq_printf(seq, ",fsid=%s", sbi->fsid); 1103 + if (sbi->domain_id) 1104 + seq_printf(seq, ",domain_id=%s", sbi->domain_id); 1110 1105 #endif 1111 1106 return 0; 1112 1107 }
+4 -4
fs/erofs/sysfs.c
··· 210 210 int err; 211 211 212 212 if (erofs_is_fscache_mode(sb)) { 213 - if (sbi->opt.domain_id) { 214 - str = kasprintf(GFP_KERNEL, "%s,%s", sbi->opt.domain_id, 215 - sbi->opt.fsid); 213 + if (sbi->domain_id) { 214 + str = kasprintf(GFP_KERNEL, "%s,%s", sbi->domain_id, 215 + sbi->fsid); 216 216 if (!str) 217 217 return -ENOMEM; 218 218 name = str; 219 219 } else { 220 - name = sbi->opt.fsid; 220 + name = sbi->fsid; 221 221 } 222 222 } else { 223 223 name = sb->s_id;
+3
fs/erofs/zdata.c
··· 660 660 u8 *src, *dst; 661 661 unsigned int i, cnt; 662 662 663 + if (!packed_inode) 664 + return -EFSCORRUPTED; 665 + 663 666 pos += EROFS_I(inode)->z_fragmentoff; 664 667 for (i = 0; i < len; i += cnt) { 665 668 cnt = min_t(unsigned int, len - i,
+12 -2
fs/kernfs/dir.c
··· 31 31 32 32 #define rb_to_kn(X) rb_entry((X), struct kernfs_node, rb) 33 33 34 + static bool __kernfs_active(struct kernfs_node *kn) 35 + { 36 + return atomic_read(&kn->active) >= 0; 37 + } 38 + 34 39 static bool kernfs_active(struct kernfs_node *kn) 35 40 { 36 41 lockdep_assert_held(&kernfs_root(kn)->kernfs_rwsem); 37 - return atomic_read(&kn->active) >= 0; 42 + return __kernfs_active(kn); 38 43 } 39 44 40 45 static bool kernfs_lockdep(struct kernfs_node *kn) ··· 710 705 goto err_unlock; 711 706 } 712 707 713 - if (unlikely(!kernfs_active(kn) || !atomic_inc_not_zero(&kn->count))) 708 + /* 709 + * We should fail if @kn has never been activated and guarantee success 710 + * if the caller knows that @kn is active. Both can be achieved by 711 + * __kernfs_active() which tests @kn->active without kernfs_rwsem. 712 + */ 713 + if (unlikely(!__kernfs_active(kn) || !atomic_inc_not_zero(&kn->count))) 714 714 goto err_unlock; 715 715 716 716 spin_unlock(&kernfs_idr_lock);
+13 -7
fs/netfs/buffered_read.c
··· 17 17 { 18 18 struct netfs_io_subrequest *subreq; 19 19 struct folio *folio; 20 - unsigned int iopos, account = 0; 21 20 pgoff_t start_page = rreq->start / PAGE_SIZE; 22 21 pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1; 22 + size_t account = 0; 23 23 bool subreq_failed = false; 24 24 25 25 XA_STATE(xas, &rreq->mapping->i_pages, start_page); ··· 39 39 */ 40 40 subreq = list_first_entry(&rreq->subrequests, 41 41 struct netfs_io_subrequest, rreq_link); 42 - iopos = 0; 43 42 subreq_failed = (subreq->error < 0); 44 43 45 44 trace_netfs_rreq(rreq, netfs_rreq_trace_unlock); 46 45 47 46 rcu_read_lock(); 48 47 xas_for_each(&xas, folio, last_page) { 49 - unsigned int pgpos = (folio_index(folio) - start_page) * PAGE_SIZE; 50 - unsigned int pgend = pgpos + folio_size(folio); 48 + loff_t pg_end; 51 49 bool pg_failed = false; 52 50 51 + if (xas_retry(&xas, folio)) 52 + continue; 53 + 54 + pg_end = folio_pos(folio) + folio_size(folio) - 1; 55 + 53 56 for (;;) { 57 + loff_t sreq_end; 58 + 54 59 if (!subreq) { 55 60 pg_failed = true; 56 61 break; ··· 63 58 if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) 64 59 folio_start_fscache(folio); 65 60 pg_failed |= subreq_failed; 66 - if (pgend < iopos + subreq->len) 61 + sreq_end = subreq->start + subreq->len - 1; 62 + if (pg_end < sreq_end) 67 63 break; 68 64 69 65 account += subreq->transferred; 70 - iopos += subreq->len; 71 66 if (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) { 72 67 subreq = list_next_entry(subreq, rreq_link); 73 68 subreq_failed = (subreq->error < 0); ··· 75 70 subreq = NULL; 76 71 subreq_failed = false; 77 72 } 78 - if (pgend == iopos) 73 + 74 + if (pg_end == sreq_end) 79 75 break; 80 76 } 81 77
+3
fs/netfs/io.c
··· 121 121 XA_STATE(xas, &rreq->mapping->i_pages, subreq->start / PAGE_SIZE); 122 122 123 123 xas_for_each(&xas, folio, (subreq->start + subreq->len - 1) / PAGE_SIZE) { 124 + if (xas_retry(&xas, folio)) 125 + continue; 126 + 124 127 /* We might have multiple writes from the same huge 125 128 * folio, but we mustn't unlock a folio more than once. 126 129 */
+4 -1
fs/nfsd/trace.h
··· 254 254 rqstp->rq_xprt->xpt_remotelen); 255 255 __entry->xid = be32_to_cpu(rqstp->rq_xid); 256 256 __entry->fh_hash = knfsd_fh_hash(&fhp->fh_handle); 257 - __entry->inode = d_inode(fhp->fh_dentry); 257 + if (fhp->fh_dentry) 258 + __entry->inode = d_inode(fhp->fh_dentry); 259 + else 260 + __entry->inode = NULL; 258 261 __entry->type = type; 259 262 __entry->access = access; 260 263 __entry->error = be32_to_cpu(error);
+27 -10
fs/zonefs/super.c
··· 478 478 struct super_block *sb = inode->i_sb; 479 479 struct zonefs_sb_info *sbi = ZONEFS_SB(sb); 480 480 unsigned int noio_flag; 481 - unsigned int nr_zones = 482 - zi->i_zone_size >> (sbi->s_zone_sectors_shift + SECTOR_SHIFT); 481 + unsigned int nr_zones = 1; 483 482 struct zonefs_ioerr_data err = { 484 483 .inode = inode, 485 484 .write = write, 486 485 }; 487 486 int ret; 487 + 488 + /* 489 + * The only files that have more than one zone are conventional zone 490 + * files with aggregated conventional zones, for which the inode zone 491 + * size is always larger than the device zone size. 492 + */ 493 + if (zi->i_zone_size > bdev_zone_sectors(sb->s_bdev)) 494 + nr_zones = zi->i_zone_size >> 495 + (sbi->s_zone_sectors_shift + SECTOR_SHIFT); 488 496 489 497 /* 490 498 * Memory allocations in blkdev_report_zones() can trigger a memory ··· 1415 1407 zi->i_ztype = type; 1416 1408 zi->i_zsector = zone->start; 1417 1409 zi->i_zone_size = zone->len << SECTOR_SHIFT; 1410 + if (zi->i_zone_size > bdev_zone_sectors(sb->s_bdev) << SECTOR_SHIFT && 1411 + !(sbi->s_features & ZONEFS_F_AGGRCNV)) { 1412 + zonefs_err(sb, 1413 + "zone size %llu doesn't match device's zone sectors %llu\n", 1414 + zi->i_zone_size, 1415 + bdev_zone_sectors(sb->s_bdev) << SECTOR_SHIFT); 1416 + return -EINVAL; 1417 + } 1418 1418 1419 1419 zi->i_max_size = min_t(loff_t, MAX_LFS_FILESIZE, 1420 1420 zone->capacity << SECTOR_SHIFT); ··· 1472 1456 struct inode *dir = d_inode(parent); 1473 1457 struct dentry *dentry; 1474 1458 struct inode *inode; 1475 - int ret; 1459 + int ret = -ENOMEM; 1476 1460 1477 1461 dentry = d_alloc_name(parent, name); 1478 1462 if (!dentry) 1479 - return NULL; 1463 + return ERR_PTR(ret); 1480 1464 1481 1465 inode = new_inode(parent->d_sb); 1482 1466 if (!inode) ··· 1501 1485 dput: 1502 1486 dput(dentry); 1503 1487 1504 - return NULL; 1488 + return ERR_PTR(ret); 1505 1489 } 1506 1490 1507 1491 struct zonefs_zone_data { ··· 1521 1505 struct blk_zone *zone, *next, *end; 
1522 1506 const char *zgroup_name; 1523 1507 char *file_name; 1524 - struct dentry *dir; 1508 + struct dentry *dir, *dent; 1525 1509 unsigned int n = 0; 1526 1510 int ret; 1527 1511 ··· 1539 1523 zgroup_name = "seq"; 1540 1524 1541 1525 dir = zonefs_create_inode(sb->s_root, zgroup_name, NULL, type); 1542 - if (!dir) { 1543 - ret = -ENOMEM; 1526 + if (IS_ERR(dir)) { 1527 + ret = PTR_ERR(dir); 1544 1528 goto free; 1545 1529 } 1546 1530 ··· 1586 1570 * Use the file number within its group as file name. 1587 1571 */ 1588 1572 snprintf(file_name, ZONEFS_NAME_MAX - 1, "%u", n); 1589 - if (!zonefs_create_inode(dir, file_name, zone, type)) { 1590 - ret = -ENOMEM; 1573 + dent = zonefs_create_inode(dir, file_name, zone, type); 1574 + if (IS_ERR(dent)) { 1575 + ret = PTR_ERR(dent); 1591 1576 goto free; 1592 1577 } 1593 1578
-5
fs/zonefs/sysfs.c
··· 15 15 ssize_t (*show)(struct zonefs_sb_info *sbi, char *buf); 16 16 }; 17 17 18 - static inline struct zonefs_sysfs_attr *to_attr(struct attribute *attr) 19 - { 20 - return container_of(attr, struct zonefs_sysfs_attr, attr); 21 - } 22 - 23 18 #define ZONEFS_SYSFS_ATTR_RO(name) \ 24 19 static struct zonefs_sysfs_attr zonefs_sysfs_attr_##name = __ATTR_RO(name) 25 20
+8 -8
include/linux/blkdev.h
··· 311 311 unsigned char discard_misaligned; 312 312 unsigned char raid_partial_stripes_expensive; 313 313 enum blk_zoned_model zoned; 314 + 315 + /* 316 + * Drivers that set dma_alignment to less than 511 must be prepared to 317 + * handle individual bvec's that are not a multiple of a SECTOR_SIZE 318 + * due to possible offsets. 319 + */ 320 + unsigned int dma_alignment; 314 321 }; 315 322 316 323 typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx, ··· 463 456 unsigned long nr_requests; /* Max # of requests */ 464 457 465 458 unsigned int dma_pad_mask; 466 - /* 467 - * Drivers that set dma_alignment to less than 511 must be prepared to 468 - * handle individual bvec's that are not a multiple of a SECTOR_SIZE 469 - * due to possible offsets. 470 - */ 471 - unsigned int dma_alignment; 472 459 473 460 #ifdef CONFIG_BLK_INLINE_ENCRYPTION 474 461 struct blk_crypto_profile *crypto_profile; ··· 945 944 extern void blk_limits_io_opt(struct queue_limits *limits, unsigned int opt); 946 945 extern void blk_queue_io_opt(struct request_queue *q, unsigned int opt); 947 946 extern void blk_set_queue_depth(struct request_queue *q, unsigned int depth); 948 - extern void blk_set_default_limits(struct queue_limits *lim); 949 947 extern void blk_set_stacking_limits(struct queue_limits *lim); 950 948 extern int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, 951 949 sector_t offset); ··· 1324 1324 1325 1325 static inline int queue_dma_alignment(const struct request_queue *q) 1326 1326 { 1327 - return q ? q->dma_alignment : 511; 1327 + return q ? q->limits.dma_alignment : 511; 1328 1328 } 1329 1329 1330 1330 static inline unsigned int bdev_dma_alignment(struct block_device *bdev)
+39 -21
include/linux/bpf.h
··· 27 27 #include <linux/bpfptr.h> 28 28 #include <linux/btf.h> 29 29 #include <linux/rcupdate_trace.h> 30 - #include <linux/init.h> 30 + #include <linux/static_call.h> 31 31 32 32 struct bpf_verifier_env; 33 33 struct bpf_verifier_log; ··· 315 315 u32 next_off = map->off_arr->field_off[i]; 316 316 317 317 memcpy(dst + curr_off, src + curr_off, next_off - curr_off); 318 - curr_off += map->off_arr->field_sz[i]; 318 + curr_off = next_off + map->off_arr->field_sz[i]; 319 319 } 320 320 memcpy(dst + curr_off, src + curr_off, map->value_size - curr_off); 321 321 } ··· 344 344 u32 next_off = map->off_arr->field_off[i]; 345 345 346 346 memset(dst + curr_off, 0, next_off - curr_off); 347 - curr_off += map->off_arr->field_sz[i]; 347 + curr_off = next_off + map->off_arr->field_sz[i]; 348 348 } 349 349 memset(dst + curr_off, 0, map->value_size - curr_off); 350 350 } ··· 954 954 void *rw_image; 955 955 u32 image_off; 956 956 struct bpf_ksym ksym; 957 + #ifdef CONFIG_HAVE_STATIC_CALL 958 + struct static_call_key *sc_key; 959 + void *sc_tramp; 960 + #endif 957 961 }; 958 962 959 963 static __always_inline __nocfi unsigned int bpf_dispatcher_nop_func( ··· 975 971 struct bpf_attach_target_info *tgt_info); 976 972 void bpf_trampoline_put(struct bpf_trampoline *tr); 977 973 int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs); 978 - int __init bpf_arch_init_dispatcher_early(void *ip); 974 + 975 + /* 976 + * When the architecture supports STATIC_CALL replace the bpf_dispatcher_fn 977 + * indirection with a direct call to the bpf program. If the architecture does 978 + * not have STATIC_CALL, avoid a double-indirection. 
979 + */ 980 + #ifdef CONFIG_HAVE_STATIC_CALL 981 + 982 + #define __BPF_DISPATCHER_SC_INIT(_name) \ 983 + .sc_key = &STATIC_CALL_KEY(_name), \ 984 + .sc_tramp = STATIC_CALL_TRAMP_ADDR(_name), 985 + 986 + #define __BPF_DISPATCHER_SC(name) \ 987 + DEFINE_STATIC_CALL(bpf_dispatcher_##name##_call, bpf_dispatcher_nop_func) 988 + 989 + #define __BPF_DISPATCHER_CALL(name) \ 990 + static_call(bpf_dispatcher_##name##_call)(ctx, insnsi, bpf_func) 991 + 992 + #define __BPF_DISPATCHER_UPDATE(_d, _new) \ 993 + __static_call_update((_d)->sc_key, (_d)->sc_tramp, (_new)) 994 + 995 + #else 996 + #define __BPF_DISPATCHER_SC_INIT(name) 997 + #define __BPF_DISPATCHER_SC(name) 998 + #define __BPF_DISPATCHER_CALL(name) bpf_func(ctx, insnsi) 999 + #define __BPF_DISPATCHER_UPDATE(_d, _new) 1000 + #endif 979 1001 980 1002 #define BPF_DISPATCHER_INIT(_name) { \ 981 1003 .mutex = __MUTEX_INITIALIZER(_name.mutex), \ ··· 1014 984 .name = #_name, \ 1015 985 .lnode = LIST_HEAD_INIT(_name.ksym.lnode), \ 1016 986 }, \ 987 + __BPF_DISPATCHER_SC_INIT(_name##_call) \ 1017 988 } 1018 989 1019 - #define BPF_DISPATCHER_INIT_CALL(_name) \ 1020 - static int __init _name##_init(void) \ 1021 - { \ 1022 - return bpf_arch_init_dispatcher_early(_name##_func); \ 1023 - } \ 1024 - early_initcall(_name##_init) 1025 - 1026 - #ifdef CONFIG_X86_64 1027 - #define BPF_DISPATCHER_ATTRIBUTES __attribute__((patchable_function_entry(5))) 1028 - #else 1029 - #define BPF_DISPATCHER_ATTRIBUTES 1030 - #endif 1031 - 1032 990 #define DEFINE_BPF_DISPATCHER(name) \ 1033 - notrace BPF_DISPATCHER_ATTRIBUTES \ 991 + __BPF_DISPATCHER_SC(name); \ 1034 992 noinline __nocfi unsigned int bpf_dispatcher_##name##_func( \ 1035 993 const void *ctx, \ 1036 994 const struct bpf_insn *insnsi, \ 1037 995 bpf_func_t bpf_func) \ 1038 996 { \ 1039 - return bpf_func(ctx, insnsi); \ 997 + return __BPF_DISPATCHER_CALL(name); \ 1040 998 } \ 1041 999 EXPORT_SYMBOL(bpf_dispatcher_##name##_func); \ 1042 1000 struct bpf_dispatcher bpf_dispatcher_##name = \ 
1043 - BPF_DISPATCHER_INIT(bpf_dispatcher_##name); \ 1044 - BPF_DISPATCHER_INIT_CALL(bpf_dispatcher_##name); 1001 + BPF_DISPATCHER_INIT(bpf_dispatcher_##name); 1045 1002 1046 1003 #define DECLARE_BPF_DISPATCHER(name) \ 1047 1004 unsigned int bpf_dispatcher_##name##_func( \ ··· 1036 1019 const struct bpf_insn *insnsi, \ 1037 1020 bpf_func_t bpf_func); \ 1038 1021 extern struct bpf_dispatcher bpf_dispatcher_##name; 1022 + 1039 1023 #define BPF_DISPATCHER_FUNC(name) bpf_dispatcher_##name##_func 1040 1024 #define BPF_DISPATCHER_PTR(name) (&bpf_dispatcher_##name) 1041 1025 void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
+3
include/linux/io_uring.h
··· 16 16 IO_URING_F_SQE128 = 4, 17 17 IO_URING_F_CQE32 = 8, 18 18 IO_URING_F_IOPOLL = 16, 19 + 20 + /* the request is executed from poll, it should not be freed */ 21 + IO_URING_F_MULTISHOT = 32, 19 22 }; 20 23 21 24 struct io_uring_cmd {
+1 -1
include/linux/ring_buffer.h
··· 100 100 101 101 int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full); 102 102 __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu, 103 - struct file *filp, poll_table *poll_table); 103 + struct file *filp, poll_table *poll_table, int full); 104 104 void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu); 105 105 106 106 #define RING_BUFFER_ALL_CPUS -1
+2 -2
include/linux/trace.h
··· 26 26 int flags; 27 27 }; 28 28 29 + struct trace_array; 30 + 29 31 #ifdef CONFIG_TRACING 30 32 31 33 int register_ftrace_export(struct trace_export *export); 32 34 int unregister_ftrace_export(struct trace_export *export); 33 - 34 - struct trace_array; 35 35 36 36 void trace_printk_init_buffers(void); 37 37 __printf(3, 4)
+1
include/linux/vfio.h
··· 189 189 void vfio_unregister_group_dev(struct vfio_device *device); 190 190 191 191 int vfio_assign_device_set(struct vfio_device *device, void *set_id); 192 + unsigned int vfio_device_set_open_count(struct vfio_device_set *dev_set); 192 193 193 194 int vfio_mig_get_next_state(struct vfio_device *device, 194 195 enum vfio_device_mig_state cur_fsm,
+1 -1
include/net/ip.h
··· 563 563 BUILD_BUG_ON(offsetof(typeof(flow->addrs), v4addrs.dst) != 564 564 offsetof(typeof(flow->addrs), v4addrs.src) + 565 565 sizeof(flow->addrs.v4addrs.src)); 566 - memcpy(&flow->addrs.v4addrs, &iph->saddr, sizeof(flow->addrs.v4addrs)); 566 + memcpy(&flow->addrs.v4addrs, &iph->addrs, sizeof(flow->addrs.v4addrs)); 567 567 flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS; 568 568 } 569 569
+1 -1
include/net/ipv6.h
··· 897 897 BUILD_BUG_ON(offsetof(typeof(flow->addrs), v6addrs.dst) != 898 898 offsetof(typeof(flow->addrs), v6addrs.src) + 899 899 sizeof(flow->addrs.v6addrs.src)); 900 - memcpy(&flow->addrs.v6addrs, &iph->saddr, sizeof(flow->addrs.v6addrs)); 900 + memcpy(&flow->addrs.v6addrs, &iph->addrs, sizeof(flow->addrs.v6addrs)); 901 901 flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS; 902 902 } 903 903
+1 -1
include/net/sock.h
··· 323 323 * @sk_tskey: counter to disambiguate concurrent tstamp requests 324 324 * @sk_zckey: counter to order MSG_ZEROCOPY notifications 325 325 * @sk_socket: Identd and reporting IO signals 326 - * @sk_user_data: RPC layer private data 326 + * @sk_user_data: RPC layer private data. Write-protected by @sk_callback_lock. 327 327 * @sk_frag: cached page frag 328 328 * @sk_peek_off: current peek_offset value 329 329 * @sk_send_head: front of stuff to transmit
+4 -1
include/soc/at91/sama7-ddr.h
··· 26 26 #define DDR3PHY_PGSR (0x0C) /* DDR3PHY PHY General Status Register */ 27 27 #define DDR3PHY_PGSR_IDONE (1 << 0) /* Initialization Done */ 28 28 29 - #define DDR3PHY_ACIOCR (0x24) /* DDR3PHY AC I/O Configuration Register */ 29 + #define DDR3PHY_ACDLLCR (0x14) /* DDR3PHY AC DLL Control Register */ 30 + #define DDR3PHY_ACDLLCR_DLLSRST (1 << 30) /* DLL Soft Reset */ 31 + 32 + #define DDR3PHY_ACIOCR (0x24) /* DDR3PHY AC I/O Configuration Register */ 30 33 #define DDR3PHY_ACIOCR_CSPDD_CS0 (1 << 18) /* CS#[0] Power Down Driver */ 31 34 #define DDR3PHY_ACIOCR_CKPDD_CK0 (1 << 8) /* CK[0] Power Down Driver */ 32 35 #define DDR3PHY_ACIORC_ACPDD (1 << 3) /* AC Power Down Driver */
+4
include/sound/hdmi-codec.h
··· 124 124 struct hdmi_codec_pdata { 125 125 const struct hdmi_codec_ops *ops; 126 126 uint i2s:1; 127 + uint no_i2s_playback:1; 128 + uint no_i2s_capture:1; 127 129 uint spdif:1; 130 + uint no_spdif_playback:1; 131 + uint no_spdif_capture:1; 128 132 int max_i2s_channels; 129 133 void *data; 130 134 };
+4 -2
include/uapi/linux/ip.h
··· 100 100 __u8 ttl; 101 101 __u8 protocol; 102 102 __sum16 check; 103 - __be32 saddr; 104 - __be32 daddr; 103 + __struct_group(/* no tag */, addrs, /* no attrs */, 104 + __be32 saddr; 105 + __be32 daddr; 106 + ); 105 107 /*The options start here. */ 106 108 }; 107 109
+4 -2
include/uapi/linux/ipv6.h
··· 130 130 __u8 nexthdr; 131 131 __u8 hop_limit; 132 132 133 - struct in6_addr saddr; 134 - struct in6_addr daddr; 133 + __struct_group(/* no tag */, addrs, /* no attrs */, 134 + struct in6_addr saddr; 135 + struct in6_addr daddr; 136 + ); 135 137 }; 136 138 137 139
+1 -1
io_uring/io_uring.c
··· 1768 1768 io_tw_lock(req->ctx, locked); 1769 1769 if (unlikely(req->task->flags & PF_EXITING)) 1770 1770 return -EFAULT; 1771 - return io_issue_sqe(req, IO_URING_F_NONBLOCK); 1771 + return io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_MULTISHOT); 1772 1772 } 1773 1773 1774 1774 struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
+2 -2
io_uring/io_uring.h
··· 17 17 IOU_ISSUE_SKIP_COMPLETE = -EIOCBQUEUED, 18 18 19 19 /* 20 - * Intended only when both REQ_F_POLLED and REQ_F_APOLL_MULTISHOT 21 - * are set to indicate to the poll runner that multishot should be 20 + * Intended only when IO_URING_F_MULTISHOT is passed 21 + * to indicate to the poll runner that multishot should be 22 22 * removed and the result is set on req->cqe.res. 23 23 */ 24 24 IOU_STOP_MULTISHOT = -ECANCELED,
+9 -14
io_uring/net.c
··· 67 67 struct io_kiocb *notif; 68 68 }; 69 69 70 - #define IO_APOLL_MULTI_POLLED (REQ_F_APOLL_MULTISHOT | REQ_F_POLLED) 71 - 72 70 int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) 73 71 { 74 72 struct io_shutdown *shutdown = io_kiocb_to_cmd(req, struct io_shutdown); ··· 589 591 * again (for multishot). 590 592 */ 591 593 static inline bool io_recv_finish(struct io_kiocb *req, int *ret, 592 - unsigned int cflags, bool mshot_finished) 594 + unsigned int cflags, bool mshot_finished, 595 + unsigned issue_flags) 593 596 { 594 597 if (!(req->flags & REQ_F_APOLL_MULTISHOT)) { 595 598 io_req_set_res(req, *ret, cflags); ··· 613 614 614 615 io_req_set_res(req, *ret, cflags); 615 616 616 - if (req->flags & REQ_F_POLLED) 617 + if (issue_flags & IO_URING_F_MULTISHOT) 617 618 *ret = IOU_STOP_MULTISHOT; 618 619 else 619 620 *ret = IOU_OK; ··· 772 773 if (ret < min_ret) { 773 774 if (ret == -EAGAIN && force_nonblock) { 774 775 ret = io_setup_async_msg(req, kmsg, issue_flags); 775 - if (ret == -EAGAIN && (req->flags & IO_APOLL_MULTI_POLLED) == 776 - IO_APOLL_MULTI_POLLED) { 776 + if (ret == -EAGAIN && (issue_flags & IO_URING_F_MULTISHOT)) { 777 777 io_kbuf_recycle(req, issue_flags); 778 778 return IOU_ISSUE_SKIP_COMPLETE; 779 779 } ··· 801 803 if (kmsg->msg.msg_inq) 802 804 cflags |= IORING_CQE_F_SOCK_NONEMPTY; 803 805 804 - if (!io_recv_finish(req, &ret, cflags, mshot_finished)) 806 + if (!io_recv_finish(req, &ret, cflags, mshot_finished, issue_flags)) 805 807 goto retry_multishot; 806 808 807 809 if (mshot_finished) { ··· 867 869 ret = sock_recvmsg(sock, &msg, flags); 868 870 if (ret < min_ret) { 869 871 if (ret == -EAGAIN && force_nonblock) { 870 - if ((req->flags & IO_APOLL_MULTI_POLLED) == IO_APOLL_MULTI_POLLED) { 872 + if (issue_flags & IO_URING_F_MULTISHOT) { 871 873 io_kbuf_recycle(req, issue_flags); 872 874 return IOU_ISSUE_SKIP_COMPLETE; 873 875 } ··· 900 902 if (msg.msg_inq) 901 903 cflags |= IORING_CQE_F_SOCK_NONEMPTY; 902 904 
903 - if (!io_recv_finish(req, &ret, cflags, ret <= 0)) 905 + if (!io_recv_finish(req, &ret, cflags, ret <= 0, issue_flags)) 904 906 goto retry_multishot; 905 907 906 908 return ret; ··· 1287 1289 * return EAGAIN to arm the poll infra since it 1288 1290 * has already been done 1289 1291 */ 1290 - if ((req->flags & IO_APOLL_MULTI_POLLED) == 1291 - IO_APOLL_MULTI_POLLED) 1292 + if (issue_flags & IO_URING_F_MULTISHOT) 1292 1293 ret = IOU_ISSUE_SKIP_COMPLETE; 1293 1294 return ret; 1294 1295 } ··· 1312 1315 goto retry; 1313 1316 1314 1317 io_req_set_res(req, ret, 0); 1315 - if (req->flags & REQ_F_POLLED) 1316 - return IOU_STOP_MULTISHOT; 1317 - return IOU_OK; 1318 + return (issue_flags & IO_URING_F_MULTISHOT) ? IOU_STOP_MULTISHOT : IOU_OK; 1318 1319 } 1319 1320 1320 1321 int io_socket_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+12
io_uring/poll.c
··· 228 228 return IOU_POLL_DONE; 229 229 if (v & IO_POLL_CANCEL_FLAG) 230 230 return -ECANCELED; 231 + /* 232 + * cqe.res contains only events of the first wake up 233 + * and all others are lost. Redo vfs_poll() to get 234 + * up to date state. 235 + */ 236 + if ((v & IO_POLL_REF_MASK) != 1) 237 + req->cqe.res = 0; 231 238 232 239 /* the mask was stashed in __io_poll_execute */ 233 240 if (!req->cqe.res) { ··· 245 238 if ((unlikely(!req->cqe.res))) 246 239 continue; 247 240 if (req->apoll_events & EPOLLONESHOT) 241 + return IOU_POLL_DONE; 242 + if (io_is_uring_fops(req->file)) 248 243 return IOU_POLL_DONE; 249 244 250 245 /* multishot, just fill a CQE and proceed */ ··· 266 257 if (ret < 0) 267 258 return ret; 268 259 } 260 + 261 + /* force the next iteration to vfs_poll() */ 262 + req->cqe.res = 0; 269 263 270 264 /* 271 265 * Release all references, retry if someone tried to restart
+7 -19
kernel/bpf/dispatcher.c
··· 4 4 #include <linux/hash.h> 5 5 #include <linux/bpf.h> 6 6 #include <linux/filter.h> 7 - #include <linux/init.h> 7 + #include <linux/static_call.h> 8 8 9 9 /* The BPF dispatcher is a multiway branch code generator. The 10 10 * dispatcher is a mechanism to avoid the performance penalty of an ··· 91 91 return -ENOTSUPP; 92 92 } 93 93 94 - int __weak __init bpf_arch_init_dispatcher_early(void *ip) 95 - { 96 - return -ENOTSUPP; 97 - } 98 - 99 94 static int bpf_dispatcher_prepare(struct bpf_dispatcher *d, void *image, void *buf) 100 95 { 101 96 s64 ips[BPF_DISPATCHER_MAX] = {}, *ipsp = &ips[0]; ··· 105 110 106 111 static void bpf_dispatcher_update(struct bpf_dispatcher *d, int prev_num_progs) 107 112 { 108 - void *old, *new, *tmp; 109 - u32 noff; 110 - int err; 113 + void *new, *tmp; 114 + u32 noff = 0; 111 115 112 - if (!prev_num_progs) { 113 - old = NULL; 114 - noff = 0; 115 - } else { 116 - old = d->image + d->image_off; 116 + if (prev_num_progs) 117 117 noff = d->image_off ^ (PAGE_SIZE / 2); 118 - } 119 118 120 119 new = d->num_progs ? d->image + noff : NULL; 121 120 tmp = d->num_progs ? d->rw_image + noff : NULL; ··· 123 134 return; 124 135 } 125 136 126 - err = bpf_arch_text_poke(d->func, BPF_MOD_JUMP, old, new); 127 - if (err || !new) 128 - return; 137 + __BPF_DISPATCHER_UPDATE(d, new ?: (void *)&bpf_dispatcher_nop_func); 129 138 130 - d->image_off = noff; 139 + if (new) 140 + d->image_off = noff; 131 141 } 132 142 133 143 void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
+11 -12
kernel/bpf/percpu_freelist.c
··· 100 100 u32 nr_elems) 101 101 { 102 102 struct pcpu_freelist_head *head; 103 - int i, cpu, pcpu_entries; 103 + unsigned int cpu, cpu_idx, i, j, n, m; 104 104 105 - pcpu_entries = nr_elems / num_possible_cpus() + 1; 106 - i = 0; 105 + n = nr_elems / num_possible_cpus(); 106 + m = nr_elems % num_possible_cpus(); 107 107 108 + cpu_idx = 0; 108 109 for_each_possible_cpu(cpu) { 109 - again: 110 110 head = per_cpu_ptr(s->freelist, cpu); 111 - /* No locking required as this is not visible yet. */ 112 - pcpu_freelist_push_node(head, buf); 113 - i++; 114 - buf += elem_size; 115 - if (i == nr_elems) 116 - break; 117 - if (i % pcpu_entries) 118 - goto again; 111 + j = n + (cpu_idx < m ? 1 : 0); 112 + for (i = 0; i < j; i++) { 113 + /* No locking required as this is not visible yet. */ 114 + pcpu_freelist_push_node(head, buf); 115 + buf += elem_size; 116 + } 117 + cpu_idx++; 119 118 } 120 119 } 121 120
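The rewritten fill loop above replaces the old `nr_elems / num_possible_cpus() + 1` round-up (which could exhaust the buffer before the last CPUs were reached) with an even split: every CPU gets the quotient, and the first `nr_elems % num_cpus` CPUs get one extra element. A minimal userspace sketch of that split (the helper name is illustrative, not kernel code):

```c
#include <assert.h>

/* Elements assigned to the freelist of the cpu_idx-th possible CPU
 * when nr_elems are distributed over num_cpus lists: the quotient,
 * plus one extra for the first (nr_elems % num_cpus) CPUs.
 */
static unsigned int pcpu_fill_count(unsigned int nr_elems,
				    unsigned int num_cpus,
				    unsigned int cpu_idx)
{
	unsigned int n = nr_elems / num_cpus;
	unsigned int m = nr_elems % num_cpus;

	return n + (cpu_idx < m ? 1 : 0);
}
```

With nr_elems = 10 over 4 CPUs this yields 3, 3, 2, 2, so every list is populated and the per-CPU counts sum exactly to nr_elems.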
+9 -5
kernel/bpf/verifier.c
··· 6745 6745 /* Transfer references to the callee */ 6746 6746 err = copy_reference_state(callee, caller); 6747 6747 if (err) 6748 - return err; 6748 + goto err_out; 6749 6749 6750 6750 err = set_callee_state_cb(env, caller, callee, *insn_idx); 6751 6751 if (err) 6752 - return err; 6752 + goto err_out; 6753 6753 6754 6754 clear_caller_saved_regs(env, caller->regs); 6755 6755 ··· 6766 6766 print_verifier_state(env, callee, true); 6767 6767 } 6768 6768 return 0; 6769 + 6770 + err_out: 6771 + free_func_state(callee); 6772 + state->frame[state->curframe + 1] = NULL; 6773 + return err; 6769 6774 } 6770 6775 6771 6776 int map_set_for_each_callback_args(struct bpf_verifier_env *env, ··· 6984 6979 return -EINVAL; 6985 6980 } 6986 6981 6987 - state->curframe--; 6988 - caller = state->frame[state->curframe]; 6982 + caller = state->frame[state->curframe - 1]; 6989 6983 if (callee->in_callback_fn) { 6990 6984 /* enforce R0 return value range [0, 1]. */ 6991 6985 struct tnum range = callee->callback_ret_range; ··· 7023 7019 } 7024 7020 /* clear everything in the callee */ 7025 7021 free_func_state(callee); 7026 - state->frame[state->curframe + 1] = NULL; 7022 + state->frame[state->curframe--] = NULL; 7027 7023 return 0; 7028 7024 } 7029 7025
+19 -6
kernel/events/core.c
··· 9306 9306 } 9307 9307 9308 9308 if (event->attr.sigtrap) { 9309 - /* 9310 - * Should not be able to return to user space without processing 9311 - * pending_sigtrap (kernel events can overflow multiple times). 9312 - */ 9313 - WARN_ON_ONCE(event->pending_sigtrap && event->attr.exclude_kernel); 9309 + unsigned int pending_id = 1; 9310 + 9311 + if (regs) 9312 + pending_id = hash32_ptr((void *)instruction_pointer(regs)) ?: 1; 9314 9313 if (!event->pending_sigtrap) { 9315 - event->pending_sigtrap = 1; 9314 + event->pending_sigtrap = pending_id; 9316 9315 local_inc(&event->ctx->nr_pending); 9316 + } else if (event->attr.exclude_kernel) { 9317 + /* 9318 + * Should not be able to return to user space without 9319 + * consuming pending_sigtrap; with exceptions: 9320 + * 9321 + * 1. Where !exclude_kernel, events can overflow again 9322 + * in the kernel without returning to user space. 9323 + * 9324 + * 2. Events that can overflow again before the IRQ- 9325 + * work without user space progress (e.g. hrtimer). 9326 + * To approximate progress (with false negatives), 9327 + * check 32-bit hash of the current IP. 9328 + */ 9329 + WARN_ON_ONCE(event->pending_sigtrap != pending_id); 9317 9330 } 9318 9331 event->pending_addr = data->addr; 9319 9332 irq_work_queue(&event->pending_irq);
+7 -1
kernel/kprobes.c
··· 1766 1766 if ((list_p != p) && (list_p->post_handler)) 1767 1767 goto noclean; 1768 1768 } 1769 - ap->post_handler = NULL; 1769 + /* 1770 + * For the kprobe-on-ftrace case, we keep the 1771 + * post_handler setting to identify this aggrprobe 1772 + * armed with kprobe_ipmodify_ops. 1773 + */ 1774 + if (!kprobe_ftrace(ap)) 1775 + ap->post_handler = NULL; 1770 1776 } 1771 1777 noclean: 1772 1778 /*
+17 -2
kernel/rseq.c
··· 171 171 return 0; 172 172 } 173 173 174 + static bool rseq_warn_flags(const char *str, u32 flags) 175 + { 176 + u32 test_flags; 177 + 178 + if (!flags) 179 + return false; 180 + test_flags = flags & RSEQ_CS_NO_RESTART_FLAGS; 181 + if (test_flags) 182 + pr_warn_once("Deprecated flags (%u) in %s ABI structure", test_flags, str); 183 + test_flags = flags & ~RSEQ_CS_NO_RESTART_FLAGS; 184 + if (test_flags) 185 + pr_warn_once("Unknown flags (%u) in %s ABI structure", test_flags, str); 186 + return true; 187 + } 188 + 174 189 static int rseq_need_restart(struct task_struct *t, u32 cs_flags) 175 190 { 176 191 u32 flags, event_mask; 177 192 int ret; 178 193 179 - if (WARN_ON_ONCE(cs_flags & RSEQ_CS_NO_RESTART_FLAGS) || cs_flags) 194 + if (rseq_warn_flags("rseq_cs", cs_flags)) 180 195 return -EINVAL; 181 196 182 197 /* Get thread flags. */ ··· 199 184 if (ret) 200 185 return ret; 201 186 202 - if (WARN_ON_ONCE(flags & RSEQ_CS_NO_RESTART_FLAGS) || flags) 187 + if (rseq_warn_flags("rseq", flags)) 203 188 return -EINVAL; 204 189 205 190 /*
+35 -17
kernel/sched/core.c
··· 4200 4200 return success; 4201 4201 } 4202 4202 4203 + static bool __task_needs_rq_lock(struct task_struct *p) 4204 + { 4205 + unsigned int state = READ_ONCE(p->__state); 4206 + 4207 + /* 4208 + * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when 4209 + * the task is blocked. Make sure to check @state since ttwu() can drop 4210 + * locks at the end, see ttwu_queue_wakelist(). 4211 + */ 4212 + if (state == TASK_RUNNING || state == TASK_WAKING) 4213 + return true; 4214 + 4215 + /* 4216 + * Ensure we load p->on_rq after p->__state, otherwise it would be 4217 + * possible to, falsely, observe p->on_rq == 0. 4218 + * 4219 + * See try_to_wake_up() for a longer comment. 4220 + */ 4221 + smp_rmb(); 4222 + if (p->on_rq) 4223 + return true; 4224 + 4225 + #ifdef CONFIG_SMP 4226 + /* 4227 + * Ensure the task has finished __schedule() and will not be referenced 4228 + * anymore. Again, see try_to_wake_up() for a longer comment. 4229 + */ 4230 + smp_rmb(); 4231 + smp_cond_load_acquire(&p->on_cpu, !VAL); 4232 + #endif 4233 + 4234 + return false; 4235 + } 4236 + 4203 4237 /** 4204 4238 * task_call_func - Invoke a function on task in fixed state 4205 4239 * @p: Process for which the function is to be invoked, can be @current. ··· 4251 4217 int task_call_func(struct task_struct *p, task_call_f func, void *arg) 4252 4218 { 4253 4219 struct rq *rq = NULL; 4254 - unsigned int state; 4255 4220 struct rq_flags rf; 4256 4221 int ret; 4257 4222 4258 4223 raw_spin_lock_irqsave(&p->pi_lock, rf.flags); 4259 4224 4260 - state = READ_ONCE(p->__state); 4261 - 4262 - /* 4263 - * Ensure we load p->on_rq after p->__state, otherwise it would be 4264 - * possible to, falsely, observe p->on_rq == 0. 4265 - * 4266 - * See try_to_wake_up() for a longer comment. 4267 - */ 4268 - smp_rmb(); 4269 - 4270 - /* 4271 - * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when 4272 - * the task is blocked. Make sure to check @state since ttwu() can drop 4273 - * locks at the end, see ttwu_queue_wakelist(). 4274 - */ 4275 - if (state == TASK_RUNNING || state == TASK_WAKING || p->on_rq) 4225 + if (__task_needs_rq_lock(p)) 4276 4226 rq = __task_rq_lock(p, &rf); 4277 4227 4278 4228 /*
+3 -2
kernel/trace/ftrace.c
··· 1289 1289 if (!ftrace_mod) 1290 1290 return -ENOMEM; 1291 1291 1292 + INIT_LIST_HEAD(&ftrace_mod->list); 1292 1293 ftrace_mod->func = kstrdup(func, GFP_KERNEL); 1293 1294 ftrace_mod->module = kstrdup(module, GFP_KERNEL); 1294 1295 ftrace_mod->enable = enable; ··· 3191 3190 /* if we can't allocate this size, try something smaller */ 3192 3191 if (!order) 3193 3192 return -ENOMEM; 3194 - order >>= 1; 3193 + order--; 3195 3194 goto again; 3196 3195 } 3197 3196 ··· 7392 7391 } 7393 7392 7394 7393 pr_info("ftrace: allocating %ld entries in %ld pages\n", 7395 - count, count / ENTRIES_PER_PAGE + 1); 7394 + count, DIV_ROUND_UP(count, ENTRIES_PER_PAGE)); 7396 7395 7397 7396 ret = ftrace_process_locs(NULL, 7398 7397 __start_mcount_loc,
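The last hunk above swaps the open-coded `count / ENTRIES_PER_PAGE + 1` estimate for `DIV_ROUND_UP()`, which stops over-reporting a page whenever count is an exact multiple of the page capacity. A userspace sketch of the difference, with the macro expanded the way the kernel's include/linux/math.h defines it:

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Old estimate: one page too many whenever count divides evenly. */
static long pages_old(long count, long entries_per_page)
{
	return count / entries_per_page + 1;
}

/* New estimate: exact ceiling division. */
static long pages_new(long count, long entries_per_page)
{
	return DIV_ROUND_UP(count, entries_per_page);
}
```

For 4096 entries at 1024 per page the old formula reports 5 pages, the new one the correct 4; for 4097 entries both report 5.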
+32 -16
kernel/trace/kprobe_event_gen_test.c
··· 73 73 #define KPROBE_GEN_TEST_ARG3 NULL 74 74 #endif 75 75 76 + static bool trace_event_file_is_valid(struct trace_event_file *input) 77 + { 78 + return input && !IS_ERR(input); 79 + } 76 80 77 81 /* 78 82 * Test to make sure we can create a kprobe event, then add more ··· 143 139 kfree(buf); 144 140 return ret; 145 141 delete: 142 + if (trace_event_file_is_valid(gen_kprobe_test)) 143 + gen_kprobe_test = NULL; 146 144 /* We got an error after creating the event, delete it */ 147 145 ret = kprobe_event_delete("gen_kprobe_test"); 148 146 goto out; ··· 208 202 kfree(buf); 209 203 return ret; 210 204 delete: 205 + if (trace_event_file_is_valid(gen_kretprobe_test)) 206 + gen_kretprobe_test = NULL; 211 207 /* We got an error after creating the event, delete it */ 212 208 ret = kprobe_event_delete("gen_kretprobe_test"); 213 209 goto out; ··· 225 217 226 218 ret = test_gen_kretprobe_cmd(); 227 219 if (ret) { 228 - WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr, 229 - "kprobes", 230 - "gen_kretprobe_test", false)); 231 - trace_put_event_file(gen_kretprobe_test); 220 + if (trace_event_file_is_valid(gen_kretprobe_test)) { 221 + WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr, 222 + "kprobes", 223 + "gen_kretprobe_test", false)); 224 + trace_put_event_file(gen_kretprobe_test); 225 + } 232 226 WARN_ON(kprobe_event_delete("gen_kretprobe_test")); 233 227 } 234 228 ··· 239 229 240 230 static void __exit kprobe_event_gen_test_exit(void) 241 231 { 242 - /* Disable the event or you can't remove it */ 243 - WARN_ON(trace_array_set_clr_event(gen_kprobe_test->tr, 244 - "kprobes", 245 - "gen_kprobe_test", false)); 232 + if (trace_event_file_is_valid(gen_kprobe_test)) { 233 + /* Disable the event or you can't remove it */ 234 + WARN_ON(trace_array_set_clr_event(gen_kprobe_test->tr, 235 + "kprobes", 236 + "gen_kprobe_test", false)); 246 237 247 - /* Now give the file and instance back */ 248 - trace_put_event_file(gen_kprobe_test); 238 + /* Now give the file and instance back */ 239 + trace_put_event_file(gen_kprobe_test); 240 + } 241 + 249 242 250 243 /* Now unregister and free the event */ 251 244 WARN_ON(kprobe_event_delete("gen_kprobe_test")); 252 245 253 - /* Disable the event or you can't remove it */ 254 - WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr, 255 - "kprobes", 256 - "gen_kretprobe_test", false)); 246 + if (trace_event_file_is_valid(gen_kretprobe_test)) { 247 + /* Disable the event or you can't remove it */ 248 + WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr, 249 + "kprobes", 250 + "gen_kretprobe_test", false)); 257 251 258 - /* Now give the file and instance back */ 259 - trace_put_event_file(gen_kretprobe_test); 252 + /* Now give the file and instance back */ 253 + trace_put_event_file(gen_kretprobe_test); 254 + } 255 + 260 256 261 257 /* Now unregister and free the event */ 262 258 WARN_ON(kprobe_event_delete("gen_kretprobe_test"));
+3 -1
kernel/trace/rethook.c
··· 83 83 { 84 84 struct rethook *rh = kzalloc(sizeof(struct rethook), GFP_KERNEL); 85 85 86 - if (!rh || !handler) 86 + if (!rh || !handler) { 87 + kfree(rh); 87 88 return NULL; 89 + } 88 90 89 91 rh->data = data; 90 92 rh->handler = handler;
+50 -21
kernel/trace/ring_buffer.c
··· 519 519 local_t committing; 520 520 local_t commits; 521 521 local_t pages_touched; 522 + local_t pages_lost; 522 523 local_t pages_read; 523 524 long last_pages_touch; 524 525 size_t shortest_full; ··· 895 894 size_t ring_buffer_nr_dirty_pages(struct trace_buffer *buffer, int cpu) 896 895 { 897 896 size_t read; 897 + size_t lost; 898 898 size_t cnt; 899 899 900 900 read = local_read(&buffer->buffers[cpu]->pages_read); 901 + lost = local_read(&buffer->buffers[cpu]->pages_lost); 901 902 cnt = local_read(&buffer->buffers[cpu]->pages_touched); 903 + 904 + if (WARN_ON_ONCE(cnt < lost)) 905 + return 0; 906 + 907 + cnt -= lost; 908 + 902 909 /* The reader can read an empty page, but not more than that */ 903 910 if (cnt < read) { 904 911 WARN_ON_ONCE(read > cnt + 1); ··· 914 905 } 915 906 916 907 return cnt - read; 908 + } 909 + 910 + static __always_inline bool full_hit(struct trace_buffer *buffer, int cpu, int full) 911 + { 912 + struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu]; 913 + size_t nr_pages; 914 + size_t dirty; 915 + 916 + nr_pages = cpu_buffer->nr_pages; 917 + if (!nr_pages || !full) 918 + return true; 919 + 920 + dirty = ring_buffer_nr_dirty_pages(buffer, cpu); 921 + 922 + return (dirty * 100) > (full * nr_pages); 917 923 } 918 924 919 925 /* ··· 1070 1046 !ring_buffer_empty_cpu(buffer, cpu)) { 1071 1047 unsigned long flags; 1072 1048 bool pagebusy; 1073 - size_t nr_pages; 1074 - size_t dirty; 1049 + bool done; 1075 1050 1076 1051 if (!full) 1077 1052 break; 1078 1053 1079 1054 raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags); 1080 1055 pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page; 1081 - nr_pages = cpu_buffer->nr_pages; 1082 - dirty = ring_buffer_nr_dirty_pages(buffer, cpu); 1056 + done = !pagebusy && full_hit(buffer, cpu, full); 1057 + 1083 1058 if (!cpu_buffer->shortest_full || 1084 1059 cpu_buffer->shortest_full > full) 1085 1060 cpu_buffer->shortest_full = full; 1086 1061 raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags); 1087 - if (!pagebusy && 1088 - (!nr_pages || (dirty * 100) > full * nr_pages)) 1062 + if (done) 1089 1063 break; 1090 1064 } 1091 1065 ··· 1109 1087 * @cpu: the cpu buffer to wait on 1110 1088 * @filp: the file descriptor 1111 1089 * @poll_table: The poll descriptor 1090 + * @full: wait until the percentage of pages are available, if @cpu != RING_BUFFER_ALL_CPUS 1112 1091 * 1113 1092 * If @cpu == RING_BUFFER_ALL_CPUS then the task will wake up as soon 1114 1093 * as data is added to any of the @buffer's cpu buffers. Otherwise ··· 1119 1096 * zero otherwise. 1120 1097 */ 1121 1098 __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu, 1122 - struct file *filp, poll_table *poll_table) 1099 + struct file *filp, poll_table *poll_table, int full) 1123 1100 { 1124 1101 struct ring_buffer_per_cpu *cpu_buffer; 1125 1102 struct rb_irq_work *work; 1126 1103 1127 - if (cpu == RING_BUFFER_ALL_CPUS) 1104 + if (cpu == RING_BUFFER_ALL_CPUS) { 1128 1105 work = &buffer->irq_work; 1129 - else { 1106 + full = 0; 1107 + } else { 1130 1108 if (!cpumask_test_cpu(cpu, buffer->cpumask)) 1131 1109 return -EINVAL; 1132 1110 ··· 1135 1111 work = &cpu_buffer->irq_work; 1136 1112 } 1137 1113 1138 - poll_wait(filp, &work->waiters, poll_table); 1139 - work->waiters_pending = true; 1114 + if (full) { 1115 + poll_wait(filp, &work->full_waiters, poll_table); 1116 + work->full_waiters_pending = true; 1117 + } else { 1118 + poll_wait(filp, &work->waiters, poll_table); 1119 + work->waiters_pending = true; 1120 + } 1121 + 1140 1122 /* 1141 1123 * There's a tight race between setting the waiters_pending and 1142 1124 * checking if the ring buffer is empty. Once the waiters_pending bit ··· 1157 1127 * will fix it later. 1158 1128 */ 1159 1129 smp_mb(); 1130 + 1131 + if (full) 1132 + return full_hit(buffer, cpu, full) ? EPOLLIN | EPOLLRDNORM : 0; 1160 1133 1161 1134 if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) || 1162 1135 (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu))) ··· 1802 1769 1803 1770 free_buffer_page(cpu_buffer->reader_page); 1804 1771 1805 - rb_head_page_deactivate(cpu_buffer); 1806 - 1807 1772 if (head) { 1773 + rb_head_page_deactivate(cpu_buffer); 1774 + 1808 1775 list_for_each_entry_safe(bpage, tmp, head, list) { 1809 1776 list_del_init(&bpage->list); 1810 1777 free_buffer_page(bpage); ··· 2040 2007 */ 2041 2008 local_add(page_entries, &cpu_buffer->overrun); 2042 2009 local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes); 2010 + local_inc(&cpu_buffer->pages_lost); 2043 2011 } 2044 2012 2045 2013 /* ··· 2525 2491 */ 2526 2492 local_add(entries, &cpu_buffer->overrun); 2527 2493 local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes); 2494 + local_inc(&cpu_buffer->pages_lost); 2528 2495 2529 2496 /* 2530 2497 * The entries will be zeroed out when we move the ··· 3190 3155 static __always_inline void 3191 3156 rb_wakeups(struct trace_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer) 3192 3157 { 3193 - size_t nr_pages; 3194 - size_t dirty; 3195 - size_t full; 3196 - 3197 3158 if (buffer->irq_work.waiters_pending) { 3198 3159 buffer->irq_work.waiters_pending = false; 3199 3160 /* irq_work_queue() supplies it's own memory barriers */ ··· 3213 3182 3214 3183 cpu_buffer->last_pages_touch = local_read(&cpu_buffer->pages_touched); 3215 3184 3216 - full = cpu_buffer->shortest_full; 3217 - nr_pages = cpu_buffer->nr_pages; 3218 - dirty = ring_buffer_nr_dirty_pages(buffer, cpu_buffer->cpu); 3219 - if (full && nr_pages && (dirty * 100) <= full * nr_pages) 3185 + if (!full_hit(buffer, cpu_buffer->cpu, cpu_buffer->shortest_full)) 3220 3186 return; 3221 3187 3222 3188 cpu_buffer->irq_work.wakeup_full = true; ··· 5276 5248 local_set(&cpu_buffer->committing, 0); 5277 5249 local_set(&cpu_buffer->commits, 0); 5278 5250 local_set(&cpu_buffer->pages_touched, 0); 5251 + local_set(&cpu_buffer->pages_lost, 0); 5279 5252 local_set(&cpu_buffer->pages_read, 0); 5280 5253 cpu_buffer->last_pages_touch = 0; 5281 5254 cpu_buffer->shortest_full = 0;
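The `full_hit()` helper added in this diff centralizes the watermark test that was previously open-coded in both the wait and wakeup paths: the buffer counts as hit once the dirty pages exceed `full` percent of `nr_pages`, with `full == 0` (or an empty ring) always hitting. The comparison in isolation, as a userspace sketch (not the kernel function, which also derives `dirty` from the touched, lost, and read counters):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* true once dirty pages exceed 'full' percent of the ring's pages;
 * a zero watermark or an empty ring always reports a hit, matching
 * the diff's full_hit().
 */
static bool full_hit_pct(size_t nr_pages, size_t dirty, int full)
{
	if (!nr_pages || !full)
		return true;

	return (dirty * 100) > ((size_t)full * nr_pages);
}
```

Note the strict `>`: with 8 pages and a 50% watermark, 4 dirty pages (exactly 50%) does not wake the waiter, but 5 does.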
+6 -10
kernel/trace/synth_event_gen_test.c
··· 120 120 121 121 /* Now generate a gen_synth_test event */ 122 122 ret = synth_event_trace_array(gen_synth_test, vals, ARRAY_SIZE(vals)); 123 - out: 123 + free: 124 + kfree(buf); 124 125 return ret; 125 126 delete: 126 127 /* We got an error after creating the event, delete it */ 127 128 synth_event_delete("gen_synth_test"); 128 - free: 129 - kfree(buf); 130 - 131 - goto out; 129 + goto free; 132 130 } 133 131 134 132 /* ··· 225 227 226 228 /* Now trace an empty_synth_test event */ 227 229 ret = synth_event_trace_array(empty_synth_test, vals, ARRAY_SIZE(vals)); 228 - out: 230 + free: 231 + kfree(buf); 229 232 return ret; 230 233 delete: 231 234 /* We got an error after creating the event, delete it */ 232 235 synth_event_delete("empty_synth_test"); 233 - free: 234 - kfree(buf); 235 - 236 - goto out; 236 + goto free; 237 237 } 238 238 239 239 static struct synth_field_desc create_synth_test_fields[] = {
+7 -5
kernel/trace/trace.c
··· 6657 6657 mutex_unlock(&trace_types_lock); 6658 6658 6659 6659 free_cpumask_var(iter->started); 6660 + kfree(iter->fmt); 6660 6661 mutex_destroy(&iter->mutex); 6661 6662 kfree(iter); 6662 6663 ··· 6682 6681 return EPOLLIN | EPOLLRDNORM; 6683 6682 else 6684 6683 return ring_buffer_poll_wait(iter->array_buffer->buffer, iter->cpu_file, 6685 - filp, poll_table); 6684 + filp, poll_table, iter->tr->buffer_percent); 6686 6685 } 6687 6686 6688 6687 static __poll_t ··· 7803 7802 int len) 7804 7803 { 7805 7804 struct tracing_log_err *err; 7805 + char *cmd; 7806 7806 7807 7807 if (tr->n_err_log_entries < TRACING_LOG_ERRS_MAX) { 7808 7808 err = alloc_tracing_log_err(len); ··· 7812 7810 7813 7811 return err; 7814 7812 } 7815 - 7813 + cmd = kzalloc(len, GFP_KERNEL); 7814 + if (!cmd) 7815 + return ERR_PTR(-ENOMEM); 7816 7816 err = list_first_entry(&tr->err_log, struct tracing_log_err, list); 7817 7817 kfree(err->cmd); 7818 - err->cmd = kzalloc(len, GFP_KERNEL); 7819 - if (!err->cmd) 7820 - return ERR_PTR(-ENOMEM); 7818 + err->cmd = cmd; 7821 7819 list_del(&err->list); 7822 7820 7823 7821 return err;
+6 -2
kernel/trace/trace_eprobe.c
··· 52 52 kfree(ep->event_system); 53 53 if (ep->event) 54 54 trace_event_put_ref(ep->event); 55 + kfree(ep->filter_str); 55 56 kfree(ep); 56 57 } 57 58 ··· 564 563 { 565 564 struct eprobe_data *edata = data->private_data; 566 565 566 + if (unlikely(!rec)) 567 + return; 568 + 567 569 __eprobe_trace_func(edata, rec); 568 570 } 569 571 ··· 646 642 INIT_LIST_HEAD(&trigger->list); 647 643 648 644 if (ep->filter_str) { 649 - ret = create_event_filter(file->tr, file->event_call, 645 + ret = create_event_filter(file->tr, ep->event, 650 646 ep->filter_str, false, &filter); 651 647 if (ret) 652 648 goto error; ··· 904 900 905 901 static int trace_eprobe_parse_filter(struct trace_eprobe *ep, int argc, const char *argv[]) 906 902 { 907 - struct event_filter *dummy; 903 + struct event_filter *dummy = NULL; 908 904 int i, ret, len = 0; 909 905 char *p; 910 906
+2 -3
kernel/trace/trace_events_synth.c
··· 828 828 } 829 829 830 830 ret = set_synth_event_print_fmt(call); 831 - if (ret < 0) { 831 + /* unregister_trace_event() will be called inside */ 832 + if (ret < 0) 832 833 trace_remove_event_call(call); 833 - goto err; 834 - } 835 834 out: 836 835 return ret; 837 836 err:
-2
kernel/trace/trace_syscalls.c
··· 201 201 return trace_handle_return(s); 202 202 } 203 203 204 - extern char *__bad_type_size(void); 205 - 206 204 #define SYSCALL_FIELD(_type, _name) { \ 207 205 .type = #_type, .name = #_name, \ 208 206 .size = sizeof(_type), .align = __alignof__(_type), \
+1 -1
mm/maccess.c
··· 97 97 return src - unsafe_addr; 98 98 Efault: 99 99 pagefault_enable(); 100 - dst[-1] = '\0'; 100 + dst[0] = '\0'; 101 101 return -EFAULT; 102 102 } 103 103
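The one-liner above matters when the very first byte of the copy faults: control reaches the Efault label before `dst` has been advanced, so the old `dst[-1] = '\0'` wrote one byte before the destination buffer, while `dst[0]` terminates at the position of the byte that failed. A userspace model of the Efault bookkeeping (illustrative; `copied` stands for the bytes that landed before the fault):

```c
#include <assert.h>
#include <string.h>

/* Terminate a partially filled buffer after a fault: 'copied' bytes
 * were written before the fault, so index 'copied' is the first slot
 * not yet written and is always in bounds for a non-empty buffer.
 * (The old code wrote at copied - 1, underflowing when copied == 0.)
 */
static void terminate_after_fault(char *dst, size_t copied)
{
	dst[copied] = '\0';
}
```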
+1
net/bpf/test_run.c
··· 774 774 if (user_size > size) 775 775 return ERR_PTR(-EMSGSIZE); 776 776 777 + size = SKB_DATA_ALIGN(size); 777 778 data = kzalloc(size + headroom + tailroom, GFP_USER); 778 779 if (!data) 779 780 return ERR_PTR(-ENOMEM);
+14 -3
net/bridge/br_vlan.c
··· 959 959 list_for_each_entry(p, &br->port_list, list) { 960 960 vg = nbp_vlan_group(p); 961 961 list_for_each_entry(vlan, &vg->vlan_list, vlist) { 962 + if (vlan->priv_flags & BR_VLFLAG_ADDED_BY_SWITCHDEV) 963 + continue; 962 964 err = vlan_vid_add(p->dev, proto, vlan->vid); 963 965 if (err) 964 966 goto err_filt; ··· 975 973 /* Delete VLANs for the old proto from the device filter. */ 976 974 list_for_each_entry(p, &br->port_list, list) { 977 975 vg = nbp_vlan_group(p); 978 - list_for_each_entry(vlan, &vg->vlan_list, vlist) 976 + list_for_each_entry(vlan, &vg->vlan_list, vlist) { 977 + if (vlan->priv_flags & BR_VLFLAG_ADDED_BY_SWITCHDEV) 978 + continue; 979 979 vlan_vid_del(p->dev, oldproto, vlan->vid); 980 + } 980 981 } 981 982 982 983 return 0; ··· 988 983 attr.u.vlan_protocol = ntohs(oldproto); 989 984 switchdev_port_attr_set(br->dev, &attr, NULL); 990 985 991 - list_for_each_entry_continue_reverse(vlan, &vg->vlan_list, vlist) 986 + list_for_each_entry_continue_reverse(vlan, &vg->vlan_list, vlist) { 987 + if (vlan->priv_flags & BR_VLFLAG_ADDED_BY_SWITCHDEV) 988 + continue; 992 989 vlan_vid_del(p->dev, proto, vlan->vid); 990 + } 993 991 994 992 list_for_each_entry_continue_reverse(p, &br->port_list, list) { 995 993 vg = nbp_vlan_group(p); 996 - list_for_each_entry(vlan, &vg->vlan_list, vlist) 994 + list_for_each_entry(vlan, &vg->vlan_list, vlist) { 995 + if (vlan->priv_flags & BR_VLFLAG_ADDED_BY_SWITCHDEV) 996 + continue; 997 997 vlan_vid_del(p->dev, proto, vlan->vid); 998 + } 998 999 } 999 1000 1000 1001 return err;
-3
net/caif/chnl_net.c
··· 310 310 311 311 if (result == 0) { 312 312 pr_debug("connect timeout\n"); 313 - caif_disconnect_client(dev_net(dev), &priv->chnl); 314 - priv->state = CAIF_DISCONNECTED; 315 - pr_debug("state disconnected\n"); 316 313 result = -ETIMEDOUT; 317 314 goto error; 318 315 }
+10
net/dsa/dsa2.c
··· 864 864 return err; 865 865 } 866 866 867 + static void dsa_switch_teardown_tag_protocol(struct dsa_switch *ds) 868 + { 869 + const struct dsa_device_ops *tag_ops = ds->dst->tag_ops; 870 + 871 + if (tag_ops->disconnect) 872 + tag_ops->disconnect(ds); 873 + } 874 + 867 875 static int dsa_switch_setup(struct dsa_switch *ds) 868 876 { 869 877 struct dsa_devlink_priv *dl_priv; ··· 960 952 mdiobus_free(ds->slave_mii_bus); 961 953 ds->slave_mii_bus = NULL; 962 954 } 955 + 956 + dsa_switch_teardown_tag_protocol(ds); 963 957 964 958 if (ds->ops->teardown) 965 959 ds->ops->teardown(ds);
+1
net/dsa/dsa_priv.h
··· 210 210 extern struct rtnl_link_ops dsa_link_ops __read_mostly; 211 211 212 212 /* port.c */ 213 + bool dsa_port_supports_hwtstamp(struct dsa_port *dp, struct ifreq *ifr); 213 214 void dsa_port_set_tag_protocol(struct dsa_port *cpu_dp, 214 215 const struct dsa_device_ops *tag_ops); 215 216 int dsa_port_set_state(struct dsa_port *dp, u8 state, bool do_fast_age);
+1 -2
net/dsa/master.c
··· 204 204 * switch in the tree that is PTP capable. 205 205 */ 206 206 list_for_each_entry(dp, &dst->ports, list) 207 - if (dp->ds->ops->port_hwtstamp_get || 208 - dp->ds->ops->port_hwtstamp_set) 207 + if (dsa_port_supports_hwtstamp(dp, ifr)) 209 208 return -EBUSY; 210 209 break; 211 210 }
+16
net/dsa/port.c
··· 110 110 return !err; 111 111 } 112 112 113 + bool dsa_port_supports_hwtstamp(struct dsa_port *dp, struct ifreq *ifr) 114 + { 115 + struct dsa_switch *ds = dp->ds; 116 + int err; 117 + 118 + if (!ds->ops->port_hwtstamp_get || !ds->ops->port_hwtstamp_set) 119 + return false; 120 + 121 + /* "See through" shim implementations of the "get" method. 122 + * This will clobber the ifreq structure, but we will either return an 123 + * error, or the master will overwrite it with proper values. 124 + */ 125 + err = ds->ops->port_hwtstamp_get(ds, dp->index, ifr); 126 + return err != -EOPNOTSUPP; 127 + } 128 + 113 129 int dsa_port_set_state(struct dsa_port *dp, u8 state, bool do_fast_age) 114 130 { 115 131 struct dsa_switch *ds = dp->ds;
+10
net/ipv4/Kconfig
··· 402 402 403 403 If unsure, say Y. 404 404 405 + config INET_TABLE_PERTURB_ORDER 406 + int "INET: Source port perturbation table size (as power of 2)" if EXPERT 407 + default 16 408 + help 409 + Source port perturbation table size (as power of 2) for 410 + RFC 6056 3.3.4. Algorithm 4: Double-Hash Port Selection Algorithm. 411 + 412 + The default is almost always what you want. 413 + Only change this if you know what you are doing. 414 + 405 415 config INET_XFRM_TUNNEL 406 416 tristate 407 417 select INET_TUNNEL
+5 -5
net/ipv4/inet_hashtables.c
··· 906 906 * Note that we use 32bit integers (vs RFC 'short integers') 907 907 * because 2^16 is not a multiple of num_ephemeral and this 908 908 * property might be used by clever attacker. 909 + * 909 910 * RFC claims using TABLE_LENGTH=10 buckets gives an improvement, though 910 - * attacks were since demonstrated, thus we use 65536 instead to really 911 - * give more isolation and privacy, at the expense of 256kB of kernel 912 - * memory. 911 + * attacks were since demonstrated, thus we use 65536 by default instead 912 + * to really give more isolation and privacy, at the expense of 256kB 913 + * of kernel memory. 913 914 */ 914 - #define INET_TABLE_PERTURB_SHIFT 16 915 - #define INET_TABLE_PERTURB_SIZE (1 << INET_TABLE_PERTURB_SHIFT) 915 + #define INET_TABLE_PERTURB_SIZE (1 << CONFIG_INET_TABLE_PERTURB_ORDER) 916 916 static u32 *table_perturb; 917 917 918 918 int __inet_hash_connect(struct inet_timewait_death_row *death_row,
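With the new Kconfig knob, the perturbation table's footprint follows directly from the order: the default INET_TABLE_PERTURB_ORDER=16 allocates 2^16 u32 entries, i.e. the 256 kB the comment refers to. The sizing arithmetic as a userspace sketch (assumes the kernel's 4-byte u32):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bytes occupied by the table_perturb array for a given order:
 * (1 << order) 32-bit entries.
 */
static size_t table_perturb_bytes(unsigned int order)
{
	return ((size_t)1 << order) * sizeof(uint32_t);
}
```

Lowering the order on memory-constrained systems shrinks the table at the cost of weaker isolation between source-port sequences.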
+6 -52
net/kcm/kcmsock.c
··· 222 222 struct sk_buff *skb; 223 223 struct kcm_sock *kcm; 224 224 225 - while ((skb = __skb_dequeue(head))) { 225 + while ((skb = skb_dequeue(head))) { 226 226 /* Reset destructor to avoid calling kcm_rcv_ready */ 227 227 skb->destructor = sock_rfree; 228 228 skb_orphan(skb); ··· 1085 1085 return err; 1086 1086 } 1087 1087 1088 - static struct sk_buff *kcm_wait_data(struct sock *sk, int flags, 1089 - long timeo, int *err) 1090 - { 1091 - struct sk_buff *skb; 1092 - 1093 - while (!(skb = skb_peek(&sk->sk_receive_queue))) { 1094 - if (sk->sk_err) { 1095 - *err = sock_error(sk); 1096 - return NULL; 1097 - } 1098 - 1099 - if (sock_flag(sk, SOCK_DONE)) 1100 - return NULL; 1101 - 1102 - if ((flags & MSG_DONTWAIT) || !timeo) { 1103 - *err = -EAGAIN; 1104 - return NULL; 1105 - } 1106 - 1107 - sk_wait_data(sk, &timeo, NULL); 1108 - 1109 - /* Handle signals */ 1110 - if (signal_pending(current)) { 1111 - *err = sock_intr_errno(timeo); 1112 - return NULL; 1113 - } 1114 - } 1115 - 1116 - return skb; 1117 - } 1118 - 1119 1088 static int kcm_recvmsg(struct socket *sock, struct msghdr *msg, 1120 1089 size_t len, int flags) 1121 1090 { 1122 1091 struct sock *sk = sock->sk; 1123 1092 struct kcm_sock *kcm = kcm_sk(sk); 1124 1093 int err = 0; 1125 - long timeo; 1126 1094 struct strp_msg *stm; 1127 1095 int copied = 0; 1128 1096 struct sk_buff *skb; 1129 1097 1130 - timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT); 1131 - 1132 - lock_sock(sk); 1133 - 1134 - skb = kcm_wait_data(sk, flags, timeo, &err); 1098 + skb = skb_recv_datagram(sk, flags, &err); 1135 1099 if (!skb) 1136 1100 goto out; 1137 1101 ··· 1126 1162 /* Finished with message */ 1127 1163 msg->msg_flags |= MSG_EOR; 1128 1164 KCM_STATS_INCR(kcm->stats.rx_msgs); 1129 - skb_unlink(skb, &sk->sk_receive_queue); 1130 - kfree_skb(skb); 1131 1165 } 1132 1166 } 1133 1167 1134 1168 out: 1135 - release_sock(sk); 1136 - 1169 + skb_free_datagram(sk, skb); 1137 1170 return copied ? : err; 1138 1171 1139 1172 ··· 1140 1179 { 1141 1180 struct sock *sk = sock->sk; 1142 1181 struct kcm_sock *kcm = kcm_sk(sk); 1143 - long timeo; 1144 1182 struct strp_msg *stm; 1145 1183 int err = 0; 1146 1184 ssize_t copied; ··· 1147 1187 1148 1188 /* Only support splice for SOCKSEQPACKET */ 1149 1189 1150 - timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT); 1151 - 1152 - lock_sock(sk); 1153 - 1154 - skb = kcm_wait_data(sk, flags, timeo, &err); 1190 + skb = skb_recv_datagram(sk, flags, &err); 1155 1191 if (!skb) 1156 1192 goto err_out; 1157 1193 ··· 1175 1219 * finish reading the message. 1176 1220 */ 1177 1221 1178 - release_sock(sk); 1179 - 1222 + skb_free_datagram(sk, skb); 1180 1223 return copied; 1181 1224 1182 1225 err_out: 1183 - release_sock(sk); 1184 - 1226 + skb_free_datagram(sk, skb); 1185 1227 return err; 1186 1228 } 1187 1229
+13 -6
net/l2tp/l2tp_core.c
··· 1150 1150 } 1151 1151 1152 1152 /* Remove hooks into tunnel socket */ 1153 + write_lock_bh(&sk->sk_callback_lock); 1153 1154 sk->sk_destruct = tunnel->old_sk_destruct; 1154 1155 sk->sk_user_data = NULL; 1156 + write_unlock_bh(&sk->sk_callback_lock); 1155 1157 1156 1158 /* Call the original destructor */ 1157 1159 if (sk->sk_destruct) ··· 1471 1469 sock = sockfd_lookup(tunnel->fd, &ret); 1472 1470 if (!sock) 1473 1471 goto err; 1474 - 1475 - ret = l2tp_validate_socket(sock->sk, net, tunnel->encap); 1476 - if (ret < 0) 1477 - goto err_sock; 1478 1472 } 1473 + 1474 + sk = sock->sk; 1475 + write_lock(&sk->sk_callback_lock); 1476 + 1477 + ret = l2tp_validate_socket(sk, net, tunnel->encap); 1478 + if (ret < 0) 1479 + goto err_sock; 1479 1480 1480 1481 tunnel->l2tp_net = net; 1481 1482 pn = l2tp_pernet(net); 1482 1483 1483 - sk = sock->sk; 1484 1484 sock_hold(sk); 1485 1485 tunnel->sock = sk; 1486 1486 ··· 1508 1504 1509 1505 setup_udp_tunnel_sock(net, sock, &udp_cfg); 1510 1506 } else { 1511 - sk->sk_user_data = tunnel; 1507 + rcu_assign_sk_user_data(sk, tunnel); 1512 1508 } 1513 1509 1514 1510 tunnel->old_sk_destruct = sk->sk_destruct; ··· 1522 1518 if (tunnel->fd >= 0) 1523 1519 sockfd_put(sock); 1524 1520 1521 + write_unlock(&sk->sk_callback_lock); 1525 1522 return 0; 1526 1523 1527 1524 err_sock: ··· 1530 1525 sock_release(sock); 1531 1526 else 1532 1527 sockfd_put(sock); 1528 + 1529 + write_unlock(&sk->sk_callback_lock); 1533 1530 err: 1534 1531 return ret; 1535 1532 }
+3 -2
net/tls/tls_device_fallback.c
··· 346 346 salt = tls_ctx->crypto_send.aes_gcm_256.salt; 347 347 break; 348 348 default: 349 - return NULL; 349 + goto free_req; 350 350 } 351 351 cipher_sz = &tls_cipher_size_desc[tls_ctx->crypto_send.info.cipher_type]; 352 352 buf_len = cipher_sz->salt + cipher_sz->iv + TLS_AAD_SPACE_SIZE + ··· 492 492 key = ((struct tls12_crypto_info_aes_gcm_256 *)crypto_info)->key; 493 493 break; 494 494 default: 495 - return -EINVAL; 495 + rc = -EINVAL; 496 + goto free_aead; 496 497 } 497 498 cipher_sz = &tls_cipher_size_desc[crypto_info->cipher_type]; 498 499
+1 -1
net/x25/x25_dev.c
··· 117 117 118 118 if (!pskb_may_pull(skb, 1)) { 119 119 x25_neigh_put(nb); 120 - return 0; 120 + goto drop; 121 121 } 122 122 123 123 switch (skb->data[0]) {
+1 -1
scripts/package/mkdebian
··· 90 90 packageversion=$KDEB_PKGVERSION 91 91 revision=${packageversion##*-} 92 92 else 93 - revision=$(cat .version 2>/dev/null||echo 1) 93 + revision=$($srctree/init/build-version) 94 94 packageversion=$version-$revision 95 95 fi 96 96 sourcename=$KDEB_SOURCENAME
+2
sound/pci/hda/patch_realtek.c
··· 9436 9436 SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_AMP), 9437 9437 SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_AMP), 9438 9438 SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_AMP), 9439 + SND_PCI_QUIRK(0x144d, 0xc1a3, "Samsung Galaxy Book Pro (NP935XDB-KC1SE)", ALC298_FIXUP_SAMSUNG_AMP), 9440 + SND_PCI_QUIRK(0x144d, 0xc1a6, "Samsung Galaxy Book Pro 360 (NP930QBD)", ALC298_FIXUP_SAMSUNG_AMP), 9439 9441 SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8), 9440 9442 SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP), 9441 9443 SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP),
+25 -5
sound/soc/codecs/hdmi-codec.c
··· 816 816 .source = "RX", 817 817 }, 818 818 }; 819 - int ret; 819 + int ret, i; 820 820 821 821 dapm = snd_soc_component_get_dapm(dai->component); 822 - ret = snd_soc_dapm_add_routes(dapm, route, 2); 823 - if (ret) 824 - return ret; 822 + 823 + /* One of the directions might be omitted for unidirectional DAIs */ 824 + for (i = 0; i < ARRAY_SIZE(route); i++) { 825 + if (!route[i].source || !route[i].sink) 826 + continue; 827 + 828 + ret = snd_soc_dapm_add_routes(dapm, &route[i], 1); 829 + if (ret) 830 + return ret; 831 + } 825 832 826 833 daifmt = devm_kzalloc(dai->dev, sizeof(*daifmt), GFP_KERNEL); 827 834 if (!daifmt) ··· 1016 1009 if (hcd->i2s) { 1017 1010 daidrv[i] = hdmi_i2s_dai; 1018 1011 daidrv[i].playback.channels_max = hcd->max_i2s_channels; 1012 + if (hcd->no_i2s_playback) 1013 + memset(&daidrv[i].playback, 0, 1014 + sizeof(daidrv[i].playback)); 1015 + if (hcd->no_i2s_capture) 1016 + memset(&daidrv[i].capture, 0, 1017 + sizeof(daidrv[i].capture)); 1019 1018 i++; 1020 1019 } 1021 1020 1022 - if (hcd->spdif) 1021 + if (hcd->spdif) { 1023 1022 daidrv[i] = hdmi_spdif_dai; 1023 + if (hcd->no_spdif_playback) 1024 + memset(&daidrv[i].playback, 0, 1025 + sizeof(daidrv[i].playback)); 1026 + if (hcd->no_spdif_capture) 1027 + memset(&daidrv[i].capture, 0, 1028 + sizeof(daidrv[i].capture)); 1029 + } 1024 1030 1025 1031 dev_set_drvdata(dev, hcp); 1026 1032
+1 -3
sound/usb/midi.c
··· 1133 1133 port = &umidi->endpoints[i].out->ports[j]; 1134 1134 break; 1135 1135 } 1136 - if (!port) { 1137 - snd_BUG(); 1136 + if (!port) 1138 1137 return -ENXIO; 1139 - } 1140 1138 1141 1139 substream->runtime->private_data = port; 1142 1140 port->state = STATE_UNKNOWN;
+5 -3
tools/arch/x86/include/asm/msr-index.h
··· 535 535 #define MSR_AMD64_CPUID_FN_1 0xc0011004 536 536 #define MSR_AMD64_LS_CFG 0xc0011020 537 537 #define MSR_AMD64_DC_CFG 0xc0011022 538 + 539 + #define MSR_AMD64_DE_CFG 0xc0011029 540 + #define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT 1 541 + #define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE BIT_ULL(MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT) 542 + 538 543 #define MSR_AMD64_BU_CFG2 0xc001102a 539 544 #define MSR_AMD64_IBSFETCHCTL 0xc0011030 540 545 #define MSR_AMD64_IBSFETCHLINAD 0xc0011031 ··· 645 640 #define FAM10H_MMIO_CONF_BASE_MASK 0xfffffffULL 646 641 #define FAM10H_MMIO_CONF_BASE_SHIFT 20 647 642 #define MSR_FAM10H_NODE_ID 0xc001100c 648 - #define MSR_F10H_DECFG 0xc0011029 649 - #define MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT 1 650 - #define MSR_F10H_DECFG_LFENCE_SERIALIZE BIT_ULL(MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT) 651 643 652 644 /* K8 MSRs */ 653 645 #define MSR_K8_TOP_MEM1 0xc001001a
+2 -2
tools/iio/iio_generic_buffer.c
··· 715 715 continue; 716 716 } 717 717 718 - toread = buf_len; 719 718 } else { 720 719 usleep(timedelay); 721 - toread = 64; 722 720 } 721 + 722 + toread = buf_len; 723 723 724 724 read_size = read(buf_fd, data, toread * scan_size); 725 725 if (read_size < 0) {
+7
tools/testing/selftests/bpf/prog_tests/varlen.c
··· 63 63 CHECK_VAL(data->total4, size1 + size2); 64 64 CHECK(memcmp(data->payload4, exp_str, size1 + size2), "content_check", 65 65 "doesn't match!\n"); 66 + 67 + CHECK_VAL(bss->ret_bad_read, -EFAULT); 68 + CHECK_VAL(data->payload_bad[0], 0x42); 69 + CHECK_VAL(data->payload_bad[1], 0x42); 70 + CHECK_VAL(data->payload_bad[2], 0); 71 + CHECK_VAL(data->payload_bad[3], 0x42); 72 + CHECK_VAL(data->payload_bad[4], 0x42); 66 73 cleanup: 67 74 test_varlen__destroy(skel); 68 75 }
+5
tools/testing/selftests/bpf/progs/test_varlen.c
··· 19 19 __u64 payload1_len2 = 0; 20 20 __u64 total1 = 0; 21 21 char payload1[MAX_LEN + MAX_LEN] = {}; 22 + __u64 ret_bad_read = 0; 22 23 23 24 /* .data */ 24 25 int payload2_len1 = -1; ··· 36 35 int payload4_len2 = -1; 37 36 int total4= -1; 38 37 char payload4[MAX_LEN + MAX_LEN] = { 1 }; 38 + 39 + char payload_bad[5] = { 0x42, 0x42, 0x42, 0x42, 0x42 }; 39 40 40 41 SEC("raw_tp/sys_enter") 41 42 int handler64_unsigned(void *regs) ··· 63 60 } 64 61 65 62 total1 = payload - (void *)payload1; 63 + 64 + ret_bad_read = bpf_probe_read_kernel_str(payload_bad + 2, 1, (void *) -1); 66 65 67 66 return 0; 68 67 }
+1 -1
tools/testing/selftests/bpf/test_progs.c
··· 1010 1010 msg->subtest_done.have_log); 1011 1011 break; 1012 1012 case MSG_TEST_LOG: 1013 - sprintf(buf, "MSG_TEST_LOG (cnt: %ld, last: %d)", 1013 + sprintf(buf, "MSG_TEST_LOG (cnt: %zu, last: %d)", 1014 1014 strlen(msg->test_log.log_buf), 1015 1015 msg->test_log.is_last); 1016 1016 break;
+1 -1
tools/testing/selftests/bpf/test_verifier.c
··· 1260 1260 1261 1261 bzero(&info, sizeof(info)); 1262 1262 info.xlated_prog_len = xlated_prog_len; 1263 - info.xlated_prog_insns = (__u64)*buf; 1263 + info.xlated_prog_insns = (__u64)(unsigned long)*buf; 1264 1264 if (bpf_obj_get_info_by_fd(fd_prog, &info, &info_len)) { 1265 1265 perror("second bpf_obj_get_info_by_fd failed"); 1266 1266 goto out_free_buf;