Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Conflicts:

MAINTAINERS
- keep Chandrasekar
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
- simple fix + trust the code re-added to param.c in -next is fine
include/linux/bpf.h
- trivial
include/linux/ethtool.h
- trivial, fix kdoc while at it
include/linux/skmsg.h
- move to relevant place in tcp.c, comment re-wrapped
net/core/skmsg.c
- add the sk = sk // sk = NULL around calls
net/tipc/crypto.c
- trivial

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+4690 -2217
+1 -1
Documentation/devicetree/bindings/hwmon/ntc_thermistor.txt
···
 - "#thermal-sensor-cells" Used to expose itself to thermal fw.

 Read more about iio bindings at
-  Documentation/devicetree/bindings/iio/iio-bindings.txt
+  https://github.com/devicetree-org/dt-schema/blob/master/schemas/iio/

 Example:
 ncp15wb473@0 {
+3 -2
Documentation/devicetree/bindings/iio/adc/ingenic,adc.yaml
···
   Industrial I/O subsystem bindings for ADC controller found in
   Ingenic JZ47xx SoCs.

-  ADC clients must use the format described in iio-bindings.txt, giving
-  a phandle and IIO specifier pair ("io-channels") to the ADC controller.
+  ADC clients must use the format described in
+  https://github.com/devicetree-org/dt-schema/blob/master/schemas/iio/iio-consumer.yaml,
+  giving a phandle and IIO specifier pair ("io-channels") to the ADC controller.

 properties:
   compatible:
+3 -1
Documentation/devicetree/bindings/input/adc-joystick.yaml
···
     description: >
       List of phandle and IIO specifier pairs.
       Each pair defines one ADC channel to which a joystick axis is connected.
-      See Documentation/devicetree/bindings/iio/iio-bindings.txt for details.
+      See
+      https://github.com/devicetree-org/dt-schema/blob/master/schemas/iio/iio-consumer.yaml
+      for details.

   '#address-cells':
     const: 1
+4 -1
Documentation/devicetree/bindings/input/touchscreen/resistive-adc-touch.txt
···
 - compatible: must be "resistive-adc-touch"
 The device must be connected to an ADC device that provides channels for
 position measurement and optional pressure.
-Refer to ../iio/iio-bindings.txt for details
+Refer to
+https://github.com/devicetree-org/dt-schema/blob/master/schemas/iio/iio-consumer.yaml
+for details
+
 - iio-channels: must have at least two channels connected to an ADC device.
 These should correspond to the channels exposed by the ADC device and should
 have the right index as the ADC device registers them. These channels
+3 -1
Documentation/devicetree/bindings/mfd/ab8500.txt
···
 pwm|regulator|rtc|sysctrl|usb]";

 A few child devices require ADC channels from the GPADC node. Those follow the
-standard bindings from iio/iio-bindings.txt and iio/adc/adc.txt
+standard bindings from
+https://github.com/devicetree-org/dt-schema/blob/master/schemas/iio/iio-consumer.yaml
+and Documentation/devicetree/bindings/iio/adc/adc.yaml

 abx500-temp : io-channels "aux1" and "aux2" for measuring external
 temperatures.
+8 -8
Documentation/devicetree/bindings/mfd/motorola-cpcap.txt
···
 The sub-functions of CPCAP get their own node with their own compatible values,
 which are described in the following files:

-- ../power/supply/cpcap-battery.txt
-- ../power/supply/cpcap-charger.txt
-- ../regulator/cpcap-regulator.txt
-- ../phy/phy-cpcap-usb.txt
-- ../input/cpcap-pwrbutton.txt
-- ../rtc/cpcap-rtc.txt
-- ../leds/leds-cpcap.txt
-- ../iio/adc/cpcap-adc.txt
+- Documentation/devicetree/bindings/power/supply/cpcap-battery.txt
+- Documentation/devicetree/bindings/power/supply/cpcap-charger.txt
+- Documentation/devicetree/bindings/regulator/cpcap-regulator.txt
+- Documentation/devicetree/bindings/phy/phy-cpcap-usb.txt
+- Documentation/devicetree/bindings/input/cpcap-pwrbutton.txt
+- Documentation/devicetree/bindings/rtc/cpcap-rtc.txt
+- Documentation/devicetree/bindings/leds/leds-cpcap.txt
+- Documentation/devicetree/bindings/iio/adc/motorola,cpcap-adc.yaml

 The only exception is the audio codec. Instead of a compatible value its
 node must be named "audio-codec".
+1 -1
Documentation/devicetree/bindings/net/brcm,bcm4908-enet.yaml
···
   - interrupts
   - interrupt-names

-additionalProperties: false
+unevaluatedProperties: false

 examples:
   - |
+1 -1
Documentation/devicetree/bindings/net/ethernet-controller.yaml
···
     description:
       Reference to an nvmem node for the MAC address

-  nvmem-cells-names:
+  nvmem-cell-names:
     const: mac-address

   phy-connection-type:
+94 -2
Documentation/devicetree/bindings/net/micrel-ksz90x1.txt
···
 step is 60ps. The default value is the neutral setting, so setting
 rxc-skew-ps=<0> actually results in -900 picoseconds adjustment.

+The KSZ9031 hardware supports a range of skew values from negative to
+positive, where the specific range is property dependent. All values
+specified in the devicetree are offset by the minimum value so they
+can be represented as positive integers, since it is difficult to
+represent a negative number in the devicetree.
+
+The following 5-bit value table applies to rxc-skew-ps and txc-skew-ps.
+
+Pad Skew Value     Delay (ps)     Devicetree Value
+------------------------------------------------------
+0_0000             -900ps         0
+0_0001             -840ps         60
+0_0010             -780ps         120
+0_0011             -720ps         180
+0_0100             -660ps         240
+0_0101             -600ps         300
+0_0110             -540ps         360
+0_0111             -480ps         420
+0_1000             -420ps         480
+0_1001             -360ps         540
+0_1010             -300ps         600
+0_1011             -240ps         660
+0_1100             -180ps         720
+0_1101             -120ps         780
+0_1110             -60ps          840
+0_1111             0ps            900
+1_0000             60ps           960
+1_0001             120ps          1020
+1_0010             180ps          1080
+1_0011             240ps          1140
+1_0100             300ps          1200
+1_0101             360ps          1260
+1_0110             420ps          1320
+1_0111             480ps          1380
+1_1000             540ps          1440
+1_1001             600ps          1500
+1_1010             660ps          1560
+1_1011             720ps          1620
+1_1100             780ps          1680
+1_1101             840ps          1740
+1_1110             900ps          1800
+1_1111             960ps          1860
+
+The following 4-bit value table applies to the txdX-skew-ps, rxdX-skew-ps
+data pads, and the rxdv-skew-ps, txen-skew-ps control pads.
+
+Pad Skew Value     Delay (ps)     Devicetree Value
+------------------------------------------------------
+0000               -420ps         0
+0001               -360ps         60
+0010               -300ps         120
+0011               -240ps         180
+0100               -180ps         240
+0101               -120ps         300
+0110               -60ps          360
+0111               0ps            420
+1000               60ps           480
+1001               120ps          540
+1010               180ps          600
+1011               240ps          660
+1100               300ps          720
+1101               360ps          780
+1110               420ps          840
+1111               480ps          900
+
 Optional properties:

 Maximum value of 1860, default value 900:
···
 Examples:

+	/* Attach to an Ethernet device with autodetected PHY */
+	&enet {
+		rxc-skew-ps = <1800>;
+		rxdv-skew-ps = <0>;
+		txc-skew-ps = <1800>;
+		txen-skew-ps = <0>;
+		status = "okay";
+	};
+
+	/* Attach to an explicitly-specified PHY */
 	mdio {
 		phy0: ethernet-phy@0 {
-			rxc-skew-ps = <3000>;
+			rxc-skew-ps = <1800>;
 			rxdv-skew-ps = <0>;
-			txc-skew-ps = <3000>;
+			txc-skew-ps = <1800>;
 			txen-skew-ps = <0>;
 			reg = <0>;
 		};
···
 		phy = <&phy0>;
 		phy-mode = "rgmii-id";
 	};
+
+References
+
+	Micrel ksz9021rl/rn Data Sheet, Revision 1.2. Dated 2/13/2014.
+	http://www.micrel.com/_PDF/Ethernet/datasheets/ksz9021rl-rn_ds.pdf
+
+	Micrel ksz9031rnx Data Sheet, Revision 2.1. Dated 11/20/2014.
+	http://www.micrel.com/_PDF/Ethernet/datasheets/KSZ9031RNX.pdf
+
+Notes:
+
+	Note that a previous version of the Micrel ksz9021rl/rn Data Sheet
+	was missing extended register 106 (transmit data pad skews), and
+	incorrectly specified the ps per step as 200ps/step instead of
+	120ps/step. The latest update to this document reflects the latest
+	revision of the Micrel specification, even though usage in the kernel
+	still reflects that incorrect document.
+5 -5
Documentation/networking/ethtool-netlink.rst
···

 PAUSE_GET
-============
+=========

-Gets channel counts like ``ETHTOOL_GPAUSE`` ioctl request.
+Gets pause frame settings like ``ETHTOOL_GPAUSEPARAM`` ioctl request.

 Request contents:

···
 Each member has a corresponding attribute defined.

 PAUSE_SET
-============
+=========

 Sets pause parameters like ``ETHTOOL_GPAUSEPARAM`` ioctl request.

···
 EEE_GET
 =======

-Gets channel counts like ``ETHTOOL_GEEE`` ioctl request.
+Gets Energy Efficient Ethernet settings like ``ETHTOOL_GEEE`` ioctl request.

 Request contents:

···
 EEE_SET
 =======

-Sets pause parameters like ``ETHTOOL_GEEEPARAM`` ioctl request.
+Sets Energy Efficient Ethernet parameters like ``ETHTOOL_SEEE`` ioctl request.

 Request contents:
+31 -21
MAINTAINERS
··· 2491 2491 N: sc2731 2492 2492 2493 2493 ARM/STI ARCHITECTURE 2494 - M: Patrice Chotard <patrice.chotard@st.com> 2494 + M: Patrice Chotard <patrice.chotard@foss.st.com> 2495 2495 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2496 2496 S: Maintained 2497 2497 W: http://www.stlinux.com ··· 2524 2524 2525 2525 ARM/STM32 ARCHITECTURE 2526 2526 M: Maxime Coquelin <mcoquelin.stm32@gmail.com> 2527 - M: Alexandre Torgue <alexandre.torgue@st.com> 2527 + M: Alexandre Torgue <alexandre.torgue@foss.st.com> 2528 2528 L: linux-stm32@st-md-mailman.stormreply.com (moderated for non-subscribers) 2529 2529 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2530 2530 S: Maintained ··· 3117 3117 F: drivers/md/bcache/ 3118 3118 3119 3119 BDISP ST MEDIA DRIVER 3120 - M: Fabien Dessenne <fabien.dessenne@st.com> 3120 + M: Fabien Dessenne <fabien.dessenne@foss.st.com> 3121 3121 L: linux-media@vger.kernel.org 3122 3122 S: Supported 3123 3123 W: https://linuxtv.org ··· 3679 3679 L: linux-pm@vger.kernel.org 3680 3680 S: Maintained 3681 3681 T: git git://github.com/broadcom/stblinux.git 3682 - F: drivers/soc/bcm/bcm-pmb.c 3682 + F: drivers/soc/bcm/bcm63xx/bcm-pmb.c 3683 3683 F: include/dt-bindings/soc/bcm-pmb.h 3684 3684 3685 3685 BROADCOM SPECIFIC AMBA DRIVER (BCMA) ··· 5084 5084 F: drivers/platform/x86/dell/dell-wmi.c 5085 5085 5086 5086 DELTA ST MEDIA DRIVER 5087 - M: Hugues Fruchet <hugues.fruchet@st.com> 5087 + M: Hugues Fruchet <hugues.fruchet@foss.st.com> 5088 5088 L: linux-media@vger.kernel.org 5089 5089 S: Supported 5090 5090 W: https://linuxtv.org ··· 6010 6010 6011 6011 DRM DRIVERS FOR STI 6012 6012 M: Benjamin Gaignard <benjamin.gaignard@linaro.org> 6013 - M: Vincent Abriou <vincent.abriou@st.com> 6014 6013 L: dri-devel@lists.freedesktop.org 6015 6014 S: Maintained 6016 6015 T: git git://anongit.freedesktop.org/drm/drm-misc ··· 6017 6018 F: drivers/gpu/drm/sti 6018 6019 6019 6020 DRM DRIVERS FOR STM 6020 - M: Yannick Fertre 
<yannick.fertre@st.com> 6021 - M: Philippe Cornu <philippe.cornu@st.com> 6021 + M: Yannick Fertre <yannick.fertre@foss.st.com> 6022 + M: Philippe Cornu <philippe.cornu@foss.st.com> 6022 6023 M: Benjamin Gaignard <benjamin.gaignard@linaro.org> 6023 - M: Vincent Abriou <vincent.abriou@st.com> 6024 6024 L: dri-devel@lists.freedesktop.org 6025 6025 S: Maintained 6026 6026 T: git git://anongit.freedesktop.org/drm/drm-misc ··· 7478 7480 GENERIC PHY FRAMEWORK 7479 7481 M: Kishon Vijay Abraham I <kishon@ti.com> 7480 7482 M: Vinod Koul <vkoul@kernel.org> 7481 - L: linux-kernel@vger.kernel.org 7483 + L: linux-phy@lists.infradead.org 7482 7484 S: Supported 7485 + Q: https://patchwork.kernel.org/project/linux-phy/list/ 7483 7486 T: git git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy.git 7484 7487 F: Documentation/devicetree/bindings/phy/ 7485 7488 F: drivers/phy/ ··· 8233 8234 F: mm/hugetlb.c 8234 8235 8235 8236 HVA ST MEDIA DRIVER 8236 - M: Jean-Christophe Trotin <jean-christophe.trotin@st.com> 8237 + M: Jean-Christophe Trotin <jean-christophe.trotin@foss.st.com> 8237 8238 L: linux-media@vger.kernel.org 8238 8239 S: Supported 8239 8240 W: https://linuxtv.org ··· 10033 10034 10034 10035 LED SUBSYSTEM 10035 10036 M: Pavel Machek <pavel@ucw.cz> 10036 - R: Dan Murphy <dmurphy@ti.com> 10037 10037 L: linux-leds@vger.kernel.org 10038 10038 S: Maintained 10039 10039 T: git git://git.kernel.org/pub/scm/linux/kernel/git/pavel/linux-leds.git ··· 11169 11171 F: drivers/media/dvb-frontends/stv6111* 11170 11172 11171 11173 MEDIA DRIVERS FOR STM32 - DCMI 11172 - M: Hugues Fruchet <hugues.fruchet@st.com> 11174 + M: Hugues Fruchet <hugues.fruchet@foss.st.com> 11173 11175 L: linux-media@vger.kernel.org 11174 11176 S: Supported 11175 11177 T: git git://linuxtv.org/media_tree.git ··· 14855 14857 S: Maintained 14856 14858 F: drivers/iommu/arm/arm-smmu/qcom_iommu.c 14857 14859 14860 + QUALCOMM IPC ROUTER (QRTR) DRIVER 14861 + M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 
14862 + L: linux-arm-msm@vger.kernel.org 14863 + S: Maintained 14864 + F: include/trace/events/qrtr.h 14865 + F: include/uapi/linux/qrtr.h 14866 + F: net/qrtr/ 14867 + 14858 14868 QUALCOMM IPCC MAILBOX DRIVER 14859 14869 M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 14860 14870 L: linux-arm-msm@vger.kernel.org ··· 15212 15206 REMOTE PROCESSOR (REMOTEPROC) SUBSYSTEM 15213 15207 M: Ohad Ben-Cohen <ohad@wizery.com> 15214 15208 M: Bjorn Andersson <bjorn.andersson@linaro.org> 15209 + M: Mathieu Poirier <mathieu.poirier@linaro.org> 15215 15210 L: linux-remoteproc@vger.kernel.org 15216 15211 S: Maintained 15217 15212 T: git git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc.git rproc-next ··· 15226 15219 REMOTE PROCESSOR MESSAGING (RPMSG) SUBSYSTEM 15227 15220 M: Ohad Ben-Cohen <ohad@wizery.com> 15228 15221 M: Bjorn Andersson <bjorn.andersson@linaro.org> 15222 + M: Mathieu Poirier <mathieu.poirier@linaro.org> 15229 15223 L: linux-remoteproc@vger.kernel.org 15230 15224 S: Maintained 15231 15225 T: git git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc.git rpmsg-next ··· 15643 15635 15644 15636 S390 VFIO AP DRIVER 15645 15637 M: Tony Krowiak <akrowiak@linux.ibm.com> 15646 - M: Pierre Morel <pmorel@linux.ibm.com> 15647 15638 M: Halil Pasic <pasic@linux.ibm.com> 15639 + M: Jason Herne <jjherne@linux.ibm.com> 15648 15640 L: linux-s390@vger.kernel.org 15649 15641 S: Supported 15650 15642 W: http://www.ibm.com/developerworks/linux/linux390/ ··· 15656 15648 S390 VFIO-CCW DRIVER 15657 15649 M: Cornelia Huck <cohuck@redhat.com> 15658 15650 M: Eric Farman <farman@linux.ibm.com> 15651 + M: Matthew Rosato <mjrosato@linux.ibm.com> 15659 15652 R: Halil Pasic <pasic@linux.ibm.com> 15660 15653 L: linux-s390@vger.kernel.org 15661 15654 L: kvm@vger.kernel.org ··· 15667 15658 15668 15659 S390 VFIO-PCI DRIVER 15669 15660 M: Matthew Rosato <mjrosato@linux.ibm.com> 15661 + M: Eric Farman <farman@linux.ibm.com> 15670 15662 L: 
linux-s390@vger.kernel.org 15671 15663 L: kvm@vger.kernel.org 15672 15664 S: Supported ··· 16954 16944 F: drivers/media/i2c/st-mipid02.c 16955 16945 16956 16946 ST STM32 I2C/SMBUS DRIVER 16957 - M: Pierre-Yves MORDRET <pierre-yves.mordret@st.com> 16947 + M: Pierre-Yves MORDRET <pierre-yves.mordret@foss.st.com> 16948 + M: Alain Volmat <alain.volmat@foss.st.com> 16958 16949 L: linux-i2c@vger.kernel.org 16959 16950 S: Maintained 16960 16951 F: drivers/i2c/busses/i2c-stm32* ··· 17080 17069 F: kernel/static_call.c 17081 17070 17082 17071 STI AUDIO (ASoC) DRIVERS 17083 - M: Arnaud Pouliquen <arnaud.pouliquen@st.com> 17072 + M: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com> 17084 17073 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 17085 17074 S: Maintained 17086 17075 F: Documentation/devicetree/bindings/sound/st,sti-asoc-card.txt ··· 17100 17089 F: drivers/media/usb/stk1160/ 17101 17090 17102 17091 STM32 AUDIO (ASoC) DRIVERS 17103 - M: Olivier Moysan <olivier.moysan@st.com> 17104 - M: Arnaud Pouliquen <arnaud.pouliquen@st.com> 17092 + M: Olivier Moysan <olivier.moysan@foss.st.com> 17093 + M: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com> 17105 17094 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 17106 17095 S: Maintained 17107 17096 F: Documentation/devicetree/bindings/iio/adc/st,stm32-*.yaml 17108 17097 F: sound/soc/stm/ 17109 17098 17110 17099 STM32 TIMER/LPTIMER DRIVERS 17111 - M: Fabrice Gasnier <fabrice.gasnier@st.com> 17100 + M: Fabrice Gasnier <fabrice.gasnier@foss.st.com> 17112 17101 S: Maintained 17113 17102 F: Documentation/ABI/testing/*timer-stm32 17114 17103 F: Documentation/devicetree/bindings/*/*stm32-*timer* ··· 17118 17107 17119 17108 STMMAC ETHERNET DRIVER 17120 17109 M: Giuseppe Cavallaro <peppe.cavallaro@st.com> 17121 - M: Alexandre Torgue <alexandre.torgue@st.com> 17110 + M: Alexandre Torgue <alexandre.torgue@foss.st.com> 17122 17111 M: Jose Abreu <joabreu@synopsys.com> 17123 17112 L: netdev@vger.kernel.org 17124 
17113 S: Supported ··· 17860 17849 F: drivers/thermal/ti-soc-thermal/ 17861 17850 17862 17851 TI BQ27XXX POWER SUPPLY DRIVER 17863 - R: Dan Murphy <dmurphy@ti.com> 17864 17852 F: drivers/power/supply/bq27xxx_battery.c 17865 17853 F: drivers/power/supply/bq27xxx_battery_i2c.c 17866 17854 F: include/linux/power/bq27xxx_battery.h
+1 -1
Makefile
···
 VERSION = 5
 PATCHLEVEL = 12
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc6
 NAME = Frozen Wasteland

 # *DOCUMENTATION*
+1 -1
arch/arc/boot/dts/haps_hs.dts
···
 	memory {
 		device_type = "memory";
 		/* CONFIG_LINUX_RAM_BASE needs to match low mem start */
-		reg = <0x0 0x80000000 0x0 0x20000000	/* 512 MB low mem */
+		reg = <0x0 0x80000000 0x0 0x40000000	/* 1 GB low mem */
 		       0x1 0x00000000 0x0 0x40000000>;	/* 1 GB highmem */
 	};

+2 -2
arch/arc/kernel/signal.c
···
 			     sizeof(sf->uc.uc_mcontext.regs.scratch));
 	err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(sigset_t));

-	return err;
+	return err ? -EFAULT : 0;
 }

 static int restore_usr_regs(struct pt_regs *regs, struct rt_sigframe __user *sf)
···
 			       &(sf->uc.uc_mcontext.regs.scratch),
 			       sizeof(sf->uc.uc_mcontext.regs.scratch));
 	if (err)
-		return err;
+		return -EFAULT;

 	set_current_blocked(&set);
 	regs->bta = uregs.scratch.bta;
+14 -13
arch/arc/kernel/unwind.c
···
 		  const void *table_start, unsigned long table_size,
 		  const u8 *header_start, unsigned long header_size)
 {
-	const u8 *ptr = header_start + 4;
-	const u8 *end = header_start + header_size;
-
 	table->core.pc = (unsigned long)core_start;
 	table->core.range = core_size;
 	table->init.pc = (unsigned long)init_start;
 	table->init.range = init_size;
 	table->address = table_start;
 	table->size = table_size;
-
-	/* See if the linker provided table looks valid. */
-	if (header_size <= 4
-	    || header_start[0] != 1
-	    || (void *)read_pointer(&ptr, end, header_start[1]) != table_start
-	    || header_start[2] == DW_EH_PE_omit
-	    || read_pointer(&ptr, end, header_start[2]) <= 0
-	    || header_start[3] == DW_EH_PE_omit)
-		header_start = NULL;
-
+	/* To avoid the pointer addition with NULL pointer. */
+	if (header_start != NULL) {
+		const u8 *ptr = header_start + 4;
+		const u8 *end = header_start + header_size;
+
+		/* See if the linker provided table looks valid. */
+		if (header_size <= 4
+		    || header_start[0] != 1
+		    || (void *)read_pointer(&ptr, end, header_start[1])
+				!= table_start
+		    || header_start[2] == DW_EH_PE_omit
+		    || read_pointer(&ptr, end, header_start[2]) <= 0
+		    || header_start[3] == DW_EH_PE_omit)
+			header_start = NULL;
+	}
 	table->hdrsz = header_size;
 	smp_wmb();
 	table->header = header_start;
+3
arch/arm/boot/dts/am33xx.dtsi
···
 		ethernet1 = &cpsw_emac1;
 		spi0 = &spi0;
 		spi1 = &spi1;
+		mmc0 = &mmc1;
+		mmc1 = &mmc2;
+		mmc2 = &mmc3;
 	};

 	cpus {
+3 -1
arch/arm/boot/dts/armada-385-turris-omnia.dts
···
 	ranges = <MBUS_ID(0xf0, 0x01) 0 0xf1000000 0x100000
 		  MBUS_ID(0x01, 0x1d) 0 0xfff00000 0x100000
 		  MBUS_ID(0x09, 0x19) 0 0xf1100000 0x10000
-		  MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000>;
+		  MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000
+		  MBUS_ID(0x0c, 0x04) 0 0xf1200000 0x100000>;

 	internal-regs {

···
 	phy1: ethernet-phy@1 {
 		compatible = "ethernet-phy-ieee802.3-c22";
 		reg = <1>;
+		marvell,reg-init = <3 18 0 0x4985>;

 		/* irq is connected to &pcawan pin 7 */
 	};
-8
arch/arm/boot/dts/at91-sam9x60ek.dts
···
 };

 &pinctrl {
-	atmel,mux-mask = <
-			 /*	A	B	C	*/
-			 0xFFFFFE7F 0xC0E0397F 0xEF00019D	/* pioA */
-			 0x03FFFFFF 0x02FC7E68 0x00780000	/* pioB */
-			 0xffffffff 0xF83FFFFF 0xB800F3FC	/* pioC */
-			 0x003FFFFF 0x003F8000 0x00000000	/* pioD */
-			 >;
-
 	adc {
 		pinctrl_adc_default: adc_default {
 			atmel,pins = <AT91_PIOB 15 AT91_PERIPH_A AT91_PINCTRL_NONE>;
+2 -2
arch/arm/boot/dts/at91-sama5d27_som1.dtsi
···
 	pinctrl-0 = <&pinctrl_macb0_default>;
 	phy-mode = "rmii";

-	ethernet-phy@0 {
-		reg = <0x0>;
+	ethernet-phy@7 {
+		reg = <0x7>;
 		interrupt-parent = <&pioA>;
 		interrupts = <PIN_PD31 IRQ_TYPE_LEVEL_LOW>;
 		pinctrl-names = "default";
-12
arch/arm/boot/dts/bcm2711.dtsi
···
 		#reset-cells = <1>;
 	};

-	bsc_intr: interrupt-controller@7ef00040 {
-		compatible = "brcm,bcm2711-l2-intc", "brcm,l2-intc";
-		reg = <0x7ef00040 0x30>;
-		interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
-		interrupt-controller;
-		#interrupt-cells = <1>;
-	};
-
 	aon_intr: interrupt-controller@7ef00100 {
 		compatible = "brcm,bcm2711-l2-intc", "brcm,l2-intc";
 		reg = <0x7ef00100 0x30>;
···
 		reg = <0x7ef04500 0x100>, <0x7ef00b00 0x300>;
 		reg-names = "bsc", "auto-i2c";
 		clock-frequency = <97500>;
-		interrupt-parent = <&bsc_intr>;
-		interrupts = <0>;
 		status = "disabled";
 	};

···
 		reg = <0x7ef09500 0x100>, <0x7ef05b00 0x300>;
 		reg-names = "bsc", "auto-i2c";
 		clock-frequency = <97500>;
-		interrupt-parent = <&bsc_intr>;
-		interrupts = <1>;
 		status = "disabled";
 	};
 };
+2
arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
···
 	pinctrl-0 = <&pinctrl_usdhc2>;
 	cd-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>;
 	wp-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
+	vmmc-supply = <&vdd_sd1_reg>;
 	status = "disabled";
 };

···
 		     &pinctrl_usdhc3_cdwp>;
 	cd-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>;
 	wp-gpios = <&gpio1 29 GPIO_ACTIVE_HIGH>;
+	vmmc-supply = <&vdd_sd0_reg>;
 	status = "disabled";
 };
+16 -6
arch/arm/boot/dts/imx6ul-14x14-evk.dtsi
···
 			micrel,led-mode = <1>;
 			clocks = <&clks IMX6UL_CLK_ENET_REF>;
 			clock-names = "rmii-ref";
-			reset-gpios = <&gpio_spi 1 GPIO_ACTIVE_LOW>;
-			reset-assert-us = <10000>;
-			reset-deassert-us = <100>;

 		};

···
 			micrel,led-mode = <1>;
 			clocks = <&clks IMX6UL_CLK_ENET2_REF>;
 			clock-names = "rmii-ref";
-			reset-gpios = <&gpio_spi 2 GPIO_ACTIVE_LOW>;
-			reset-assert-us = <10000>;
-			reset-deassert-us = <100>;
 		};
 	};
 };
···
 	pinctrl-0 = <&pinctrl_flexcan2>;
 	xceiver-supply = <&reg_can_3v3>;
 	status = "okay";
+};
+
+&gpio_spi {
+	eth0-phy-hog {
+		gpio-hog;
+		gpios = <1 GPIO_ACTIVE_HIGH>;
+		output-high;
+		line-name = "eth0-phy";
+	};
+
+	eth1-phy-hog {
+		gpio-hog;
+		gpios = <2 GPIO_ACTIVE_HIGH>;
+		output-high;
+		line-name = "eth1-phy";
+	};
 };

 &i2c1 {
+1
arch/arm/boot/dts/imx6ull-myir-mys-6ulx-eval.dts
···
 };

 &gpmi {
+	fsl,use-minimum-ecc;
 	status = "okay";
 };
+5
arch/arm/boot/dts/omap4.dtsi
···
 		i2c1 = &i2c2;
 		i2c2 = &i2c3;
 		i2c3 = &i2c4;
+		mmc0 = &mmc1;
+		mmc1 = &mmc2;
+		mmc2 = &mmc3;
+		mmc3 = &mmc4;
+		mmc4 = &mmc5;
 		serial0 = &uart1;
 		serial1 = &uart2;
 		serial2 = &uart3;
-8
arch/arm/boot/dts/omap44xx-clocks.dtsi
···
 		ti,max-div = <2>;
 	};

-	sha2md5_fck: sha2md5_fck@15c8 {
-		#clock-cells = <0>;
-		compatible = "ti,gate-clock";
-		clocks = <&l3_div_ck>;
-		ti,bit-shift = <1>;
-		reg = <0x15c8>;
-	};
-
 	usb_phy_cm_clk32k: usb_phy_cm_clk32k@640 {
 		#clock-cells = <0>;
 		compatible = "ti,gate-clock";
+5
arch/arm/boot/dts/omap5.dtsi
···
 		i2c2 = &i2c3;
 		i2c3 = &i2c4;
 		i2c4 = &i2c5;
+		mmc0 = &mmc1;
+		mmc1 = &mmc2;
+		mmc2 = &mmc3;
+		mmc3 = &mmc4;
+		mmc4 = &mmc5;
 		serial0 = &uart1;
 		serial1 = &uart2;
 		serial2 = &uart3;
+9
arch/arm/boot/dts/sam9x60.dtsi
···
 			compatible = "microchip,sam9x60-pinctrl", "atmel,at91sam9x5-pinctrl", "atmel,at91rm9200-pinctrl", "simple-bus";
 			ranges = <0xfffff400 0xfffff400 0x800>;

+			/* mux-mask corresponding to sam9x60 SoC in TFBGA228L package */
+			atmel,mux-mask = <
+					 /*	A	B	C	*/
+					 0xffffffff 0xffe03fff 0xef00019d	/* pioA */
+					 0x03ffffff 0x02fc7e7f 0x00780000	/* pioB */
+					 0xffffffff 0xffffffff 0xf83fffff	/* pioC */
+					 0x003fffff 0x003f8000 0x00000000	/* pioD */
+					 >;
+
 			pioA: gpio@fffff400 {
 				compatible = "microchip,sam9x60-gpio", "atmel,at91sam9x5-gpio", "atmel,at91rm9200-gpio";
 				reg = <0xfffff400 0x200>;
+15 -1
arch/arm/mach-imx/avic.c
···
 #include <linux/module.h>
 #include <linux/irq.h>
 #include <linux/irqdomain.h>
+#include <linux/irqchip.h>
 #include <linux/io.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
···
  * interrupts. It registers the interrupt enable and disable functions
  * to the kernel for each interrupt source.
  */
-void __init mxc_init_irq(void __iomem *irqbase)
+static void __init mxc_init_irq(void __iomem *irqbase)
 {
 	struct device_node *np;
 	int irq_base;
···

 	printk(KERN_INFO "MXC IRQ initialized\n");
 }
+
+static int __init imx_avic_init(struct device_node *node,
+				struct device_node *parent)
+{
+	void __iomem *avic_base;
+
+	avic_base = of_iomap(node, 0);
+	BUG_ON(!avic_base);
+	mxc_init_irq(avic_base);
+	return 0;
+}
+
+IRQCHIP_DECLARE(imx_avic, "fsl,avic", imx_avic_init);
-1
arch/arm/mach-imx/common.h
···
 void imx21_init_early(void);
 void imx31_init_early(void);
 void imx35_init_early(void);
-void mxc_init_irq(void __iomem *);
 void mx31_init_irq(void);
 void mx35_init_irq(void);
 void mxc_set_cpu_type(unsigned int type);
-11
arch/arm/mach-imx/mach-imx1.c
···
 	mxc_set_cpu_type(MXC_CPU_MX1);
 }

-static void __init imx1_init_irq(void)
-{
-	void __iomem *avic_addr;
-
-	avic_addr = ioremap(MX1_AVIC_ADDR, SZ_4K);
-	WARN_ON(!avic_addr);
-
-	mxc_init_irq(avic_addr);
-}
-
 static const char * const imx1_dt_board_compat[] __initconst = {
 	"fsl,imx1",
 	NULL
···

 DT_MACHINE_START(IMX1_DT, "Freescale i.MX1 (Device Tree Support)")
 	.init_early	= imx1_init_early,
-	.init_irq	= imx1_init_irq,
 	.dt_compat	= imx1_dt_board_compat,
 	.restart	= mxc_restart,
 MACHINE_END
-12
arch/arm/mach-imx/mach-imx25.c
···
 	imx_aips_allow_unprivileged_access("fsl,imx25-aips");
 }

-static void __init mx25_init_irq(void)
-{
-	struct device_node *np;
-	void __iomem *avic_base;
-
-	np = of_find_compatible_node(NULL, NULL, "fsl,avic");
-	avic_base = of_iomap(np, 0);
-	BUG_ON(!avic_base);
-	mxc_init_irq(avic_base);
-}
-
 static const char * const imx25_dt_board_compat[] __initconst = {
 	"fsl,imx25",
 	NULL
···
 	.init_early	= imx25_init_early,
 	.init_machine	= imx25_dt_init,
 	.init_late	= imx25_pm_init,
-	.init_irq	= mx25_init_irq,
 	.dt_compat	= imx25_dt_board_compat,
 MACHINE_END
-12
arch/arm/mach-imx/mach-imx27.c
···
 	mxc_set_cpu_type(MXC_CPU_MX27);
 }

-static void __init mx27_init_irq(void)
-{
-	void __iomem *avic_base;
-	struct device_node *np;
-
-	np = of_find_compatible_node(NULL, NULL, "fsl,avic");
-	avic_base = of_iomap(np, 0);
-	BUG_ON(!avic_base);
-	mxc_init_irq(avic_base);
-}
-
 static const char * const imx27_dt_board_compat[] __initconst = {
 	"fsl,imx27",
 	NULL
···
 DT_MACHINE_START(IMX27_DT, "Freescale i.MX27 (Device Tree Support)")
 	.map_io		= mx27_map_io,
 	.init_early	= imx27_init_early,
-	.init_irq	= mx27_init_irq,
 	.init_late	= imx27_pm_init,
 	.dt_compat	= imx27_dt_board_compat,
 MACHINE_END
-1
arch/arm/mach-imx/mach-imx31.c
···
 DT_MACHINE_START(IMX31_DT, "Freescale i.MX31 (Device Tree Support)")
 	.map_io		= mx31_map_io,
 	.init_early	= imx31_init_early,
-	.init_irq	= mx31_init_irq,
 	.dt_compat	= imx31_dt_board_compat,
 MACHINE_END
-1
arch/arm/mach-imx/mach-imx35.c
···
 	.l2c_aux_mask	= ~0,
 	.map_io		= mx35_map_io,
 	.init_early	= imx35_init_early,
-	.init_irq	= mx35_init_irq,
 	.dt_compat	= imx35_dt_board_compat,
 MACHINE_END
-24
arch/arm/mach-imx/mm-imx3.c
···
 	mx3_ccm_base = of_iomap(np, 0);
 	BUG_ON(!mx3_ccm_base);
 }
-
-void __init mx31_init_irq(void)
-{
-	void __iomem *avic_base;
-	struct device_node *np;
-
-	np = of_find_compatible_node(NULL, NULL, "fsl,imx31-avic");
-	avic_base = of_iomap(np, 0);
-	BUG_ON(!avic_base);
-
-	mxc_init_irq(avic_base);
-}
 #endif /* ifdef CONFIG_SOC_IMX31 */

 #ifdef CONFIG_SOC_IMX35
···
 	np = of_find_compatible_node(NULL, NULL, "fsl,imx35-ccm");
 	mx3_ccm_base = of_iomap(np, 0);
 	BUG_ON(!mx3_ccm_base);
-}
-
-void __init mx35_init_irq(void)
-{
-	void __iomem *avic_base;
-	struct device_node *np;
-
-	np = of_find_compatible_node(NULL, NULL, "fsl,imx35-avic");
-	avic_base = of_iomap(np, 0);
-	BUG_ON(!avic_base);
-
-	mxc_init_irq(avic_base);
 }
 #endif /* ifdef CONFIG_SOC_IMX35 */
+2 -2
arch/arm/mach-keystone/keystone.c
···
 static long long __init keystone_pv_fixup(void)
 {
 	long long offset;
-	phys_addr_t mem_start, mem_end;
+	u64 mem_start, mem_end;

 	mem_start = memblock_start_of_DRAM();
 	mem_end = memblock_end_of_DRAM();
···
 	if (mem_start < KEYSTONE_HIGH_PHYS_START ||
 	    mem_end   > KEYSTONE_HIGH_PHYS_END) {
 		pr_crit("Invalid address space for memory (%08llx-%08llx)\n",
-			(u64)mem_start, (u64)mem_end);
+			mem_start, mem_end);
 		return 0;
 	}

+1
arch/arm/mach-omap1/ams-delta-fiq-handler.S
···
 #include <linux/platform_data/gpio-omap.h>

 #include <asm/assembler.h>
+#include <asm/irq.h>

 #include "ams-delta-fiq.h"
 #include "board-ams-delta.h"
+39
arch/arm/mach-omap2/omap-secure.c
···
  */

 #include <linux/arm-smccc.h>
+#include <linux/cpu_pm.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/io.h>
···

 #include "common.h"
 #include "omap-secure.h"
+#include "soc.h"

 static phys_addr_t omap_secure_memblock_base;

···
 {
 	omap_optee_init_check();
 }
+
+/*
+ * Dummy dispatcher call after core OSWR and MPU off. Updates the ROM return
+ * address after MMU has been re-enabled after CPU1 has been woken up again.
+ * Otherwise the ROM code will attempt to use the earlier physical return
+ * address that got set with MMU off when waking up CPU1. Only used on secure
+ * devices.
+ */
+static int cpu_notifier(struct notifier_block *nb, unsigned long cmd, void *v)
+{
+	switch (cmd) {
+	case CPU_CLUSTER_PM_EXIT:
+		omap_secure_dispatcher(OMAP4_PPA_SERVICE_0,
+				       FLAG_START_CRITICAL,
+				       0, 0, 0, 0, 0);
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block secure_notifier_block = {
+	.notifier_call = cpu_notifier,
+};
+
+static int __init secure_pm_init(void)
+{
+	if (omap_type() == OMAP2_DEVICE_TYPE_GP || !soc_is_omap44xx())
+		return 0;
+
+	cpu_pm_register_notifier(&secure_notifier_block);
+
+	return 0;
+}
+omap_arch_initcall(secure_pm_init);
+1
arch/arm/mach-omap2/omap-secure.h
··· 50 50 #define OMAP5_DRA7_MON_SET_ACR_INDEX 0x107 51 51 52 52 /* Secure PPA(Primary Protected Application) APIs */ 53 + #define OMAP4_PPA_SERVICE_0 0x21 53 54 #define OMAP4_PPA_L2_POR_INDEX 0x23 54 55 #define OMAP4_PPA_CPU_ACTRL_SMP_INDEX 0x25 55 56
+2 -2
arch/arm/mach-omap2/pmic-cpcap.c
··· 246 246 omap_voltage_register_pmic(voltdm, &omap443x_max8952_mpu); 247 247 248 248 if (of_machine_is_compatible("motorola,droid-bionic")) { 249 - voltdm = voltdm_lookup("mpu"); 249 + voltdm = voltdm_lookup("core"); 250 250 omap_voltage_register_pmic(voltdm, &omap_cpcap_core); 251 251 252 - voltdm = voltdm_lookup("mpu"); 252 + voltdm = voltdm_lookup("iva"); 253 253 omap_voltage_register_pmic(voltdm, &omap_cpcap_iva); 254 254 } else { 255 255 voltdm = voltdm_lookup("core");
+58 -17
arch/arm/mach-omap2/sr_device.c
··· 88 88 89 89 extern struct omap_sr_data omap_sr_pdata[]; 90 90 91 - static int __init sr_dev_init(struct omap_hwmod *oh, void *user) 91 + static int __init sr_init_by_name(const char *name, const char *voltdm) 92 92 { 93 93 struct omap_sr_data *sr_data = NULL; 94 94 struct omap_volt_data *volt_data; 95 - struct omap_smartreflex_dev_attr *sr_dev_attr; 96 95 static int i; 97 96 98 - if (!strncmp(oh->name, "smartreflex_mpu_iva", 20) || 99 - !strncmp(oh->name, "smartreflex_mpu", 16)) 97 + if (!strncmp(name, "smartreflex_mpu_iva", 20) || 98 + !strncmp(name, "smartreflex_mpu", 16)) 100 99 sr_data = &omap_sr_pdata[OMAP_SR_MPU]; 101 - else if (!strncmp(oh->name, "smartreflex_core", 17)) 100 + else if (!strncmp(name, "smartreflex_core", 17)) 102 101 sr_data = &omap_sr_pdata[OMAP_SR_CORE]; 103 - else if (!strncmp(oh->name, "smartreflex_iva", 16)) 102 + else if (!strncmp(name, "smartreflex_iva", 16)) 104 103 sr_data = &omap_sr_pdata[OMAP_SR_IVA]; 105 104 106 105 if (!sr_data) { 107 - pr_err("%s: Unknown instance %s\n", __func__, oh->name); 106 + pr_err("%s: Unknown instance %s\n", __func__, name); 108 107 return -EINVAL; 109 108 } 110 109 111 - sr_dev_attr = (struct omap_smartreflex_dev_attr *)oh->dev_attr; 112 - if (!sr_dev_attr || !sr_dev_attr->sensor_voltdm_name) { 113 - pr_err("%s: No voltage domain specified for %s. Cannot initialize\n", 114 - __func__, oh->name); 115 - goto exit; 116 - } 117 - 118 - sr_data->name = oh->name; 110 + sr_data->name = name; 119 111 if (cpu_is_omap343x()) 120 112 sr_data->ip_type = 1; 121 113 else ··· 128 136 } 129 137 } 130 138 131 - sr_data->voltdm = voltdm_lookup(sr_dev_attr->sensor_voltdm_name); 139 + sr_data->voltdm = voltdm_lookup(voltdm); 132 140 if (!sr_data->voltdm) { 133 141 pr_err("%s: Unable to get voltage domain pointer for VDD %s\n", 134 - __func__, sr_dev_attr->sensor_voltdm_name); 142 + __func__, voltdm); 135 143 goto exit; 136 144 } 137 145 ··· 152 160 return 0; 153 161 } 154 162 163 + static int __init sr_dev_init(struct omap_hwmod *oh, void *user) 164 + { 165 + struct omap_smartreflex_dev_attr *sr_dev_attr; 166 + 167 + sr_dev_attr = (struct omap_smartreflex_dev_attr *)oh->dev_attr; 168 + if (!sr_dev_attr || !sr_dev_attr->sensor_voltdm_name) { 169 + pr_err("%s: No voltage domain specified for %s. Cannot initialize\n", 170 + __func__, oh->name); 171 + return 0; 172 + } 173 + 174 + return sr_init_by_name(oh->name, sr_dev_attr->sensor_voltdm_name); 175 + } 176 + 155 177 /* 156 178 * API to be called from board files to enable smartreflex 157 179 * autocompensation at init. 
··· 175 169 sr_enable_on_init = true; 176 170 } 177 171 172 + static const char * const omap4_sr_instances[] = { 173 + "mpu", 174 + "iva", 175 + "core", 176 + }; 177 + 178 + static const char * const dra7_sr_instances[] = { 179 + "mpu", 180 + "core", 181 + }; 182 + 178 183 int __init omap_devinit_smartreflex(void) 179 184 { 185 + const char * const *sr_inst; 186 + int i, nr_sr = 0; 187 + 188 + if (soc_is_omap44xx()) { 189 + sr_inst = omap4_sr_instances; 190 + nr_sr = ARRAY_SIZE(omap4_sr_instances); 191 + 192 + } else if (soc_is_dra7xx()) { 193 + sr_inst = dra7_sr_instances; 194 + nr_sr = ARRAY_SIZE(dra7_sr_instances); 195 + } 196 + 197 + if (nr_sr) { 198 + const char *name, *voltdm; 199 + 200 + for (i = 0; i < nr_sr; i++) { 201 + name = kasprintf(GFP_KERNEL, "smartreflex_%s", sr_inst[i]); 202 + voltdm = sr_inst[i]; 203 + sr_init_by_name(name, voltdm); 204 + } 205 + 206 + return 0; 207 + } 208 + 180 209 return omap_hwmod_for_each_by_class("smartreflex", sr_dev_init, NULL); 181 210 }
+6 -2
arch/arm/mach-pxa/mainstone.c
··· 502 502 #endif 503 503 504 504 static int mst_pcmcia0_irqs[11] = { 505 - [0 ... 10] = -1, 505 + [0 ... 4] = -1, 506 506 [5] = MAINSTONE_S0_CD_IRQ, 507 + [6 ... 7] = -1, 507 508 [8] = MAINSTONE_S0_STSCHG_IRQ, 509 + [9] = -1, 508 510 [10] = MAINSTONE_S0_IRQ, 509 511 }; 510 512 511 513 static int mst_pcmcia1_irqs[11] = { 512 - [0 ... 10] = -1, 514 + [0 ... 4] = -1, 513 515 [5] = MAINSTONE_S1_CD_IRQ, 516 + [6 ... 7] = -1, 514 517 [8] = MAINSTONE_S1_STSCHG_IRQ, 518 + [9] = -1, 515 519 [10] = MAINSTONE_S1_IRQ, 516 520 }; 517 521
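The mainstone.c hunk above replaces a blanket `[0 ... 10] = -1` initializer (whose entries 5, 8 and 10 were then initialized a second time) with non-overlapping GNU range designators. A stand-alone sketch of the two forms, using hypothetical IRQ numbers in place of the `MAINSTONE_*` constants, showing they build the same table:

```c
#include <assert.h>

/* Hypothetical IRQ numbers standing in for the MAINSTONE_* constants. */
#define DEMO_CD_IRQ     50
#define DEMO_STSCHG_IRQ 51
#define DEMO_MAIN_IRQ   52

/* Old style: blanket range, then per-slot overrides. Legal GNU C, but
 * slots 5, 8 and 10 are initialized twice (-Woverride-init warns). */
static const int demo_irqs_old[11] = {
	[0 ... 10] = -1,
	[5]  = DEMO_CD_IRQ,
	[8]  = DEMO_STSCHG_IRQ,
	[10] = DEMO_MAIN_IRQ,
};

/* New style: non-overlapping ranges, every slot initialized exactly once. */
static const int demo_irqs_new[11] = {
	[0 ... 4] = -1,
	[5]       = DEMO_CD_IRQ,
	[6 ... 7] = -1,
	[8]       = DEMO_STSCHG_IRQ,
	[9]       = -1,
	[10]      = DEMO_MAIN_IRQ,
};

static int demo_tables_match(void)
{
	for (int i = 0; i < 11; i++)
		if (demo_irqs_old[i] != demo_irqs_new[i])
			return 0;
	return 1;
}
```

Both arrays end up identical; the rewrite only changes which form the compiler is happy with.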
+1
arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi
··· 198 198 ranges = <0x0 0x00 0x1700000 0x100000>; 199 199 reg = <0x00 0x1700000 0x0 0x100000>; 200 200 interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>; 201 + dma-coherent; 201 202 202 203 sec_jr0: jr@10000 { 203 204 compatible = "fsl,sec-v5.4-job-ring",
+1
arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
··· 348 348 ranges = <0x0 0x00 0x1700000 0x100000>; 349 349 reg = <0x00 0x1700000 0x0 0x100000>; 350 350 interrupts = <0 75 0x4>; 351 + dma-coherent; 351 352 352 353 sec_jr0: jr@10000 { 353 354 compatible = "fsl,sec-v5.4-job-ring",
+1
arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
··· 354 354 ranges = <0x0 0x00 0x1700000 0x100000>; 355 355 reg = <0x00 0x1700000 0x0 0x100000>; 356 356 interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>; 357 + dma-coherent; 357 358 358 359 sec_jr0: jr@10000 { 359 360 compatible = "fsl,sec-v5.4-job-ring",
+1 -1
arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
··· 124 124 #define MX8MM_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0 125 125 #define MX8MM_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0 126 126 #define MX8MM_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0 127 - #define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x31 0x000 0x5 0x0 127 + #define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x310 0x000 0x5 0x0 128 128 #define MX8MM_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0 129 129 #define MX8MM_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0 130 130 #define MX8MM_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0
+1 -1
arch/arm64/boot/dts/freescale/imx8mp-phyboard-pollux-rdk.dts
··· 35 35 36 36 &i2c2 { 37 37 clock-frequency = <400000>; 38 - pinctrl-names = "default"; 38 + pinctrl-names = "default", "gpio"; 39 39 pinctrl-0 = <&pinctrl_i2c2>; 40 40 pinctrl-1 = <&pinctrl_i2c2_gpio>; 41 41 sda-gpios = <&gpio5 17 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+1 -1
arch/arm64/boot/dts/freescale/imx8mp-phycore-som.dtsi
··· 67 67 68 68 &i2c1 { 69 69 clock-frequency = <400000>; 70 - pinctrl-names = "default"; 70 + pinctrl-names = "default", "gpio"; 71 71 pinctrl-0 = <&pinctrl_i2c1>; 72 72 pinctrl-1 = <&pinctrl_i2c1_gpio>; 73 73 sda-gpios = <&gpio5 15 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+1 -1
arch/arm64/boot/dts/freescale/imx8mq-pinfunc.h
··· 130 130 #define MX8MQ_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0 131 131 #define MX8MQ_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0 132 132 #define MX8MQ_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0 133 - #define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x31 0x000 0x5 0x0 133 + #define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x310 0x000 0x5 0x0 134 134 #define MX8MQ_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0 135 135 #define MX8MQ_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0 136 136 #define MX8MQ_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0
+3 -3
arch/arm64/boot/dts/marvell/armada-cp11x.dtsi
··· 310 310 }; 311 311 312 312 CP11X_LABEL(sata0): sata@540000 { 313 - compatible = "marvell,armada-8k-ahci"; 313 + compatible = "marvell,armada-8k-ahci", 314 + "generic-ahci"; 314 315 reg = <0x540000 0x30000>; 315 316 dma-coherent; 317 + interrupts = <107 IRQ_TYPE_LEVEL_HIGH>; 316 318 clocks = <&CP11X_LABEL(clk) 1 15>, 317 319 <&CP11X_LABEL(clk) 1 16>; 318 320 #address-cells = <1>; ··· 322 320 status = "disabled"; 323 321 324 322 sata-port@0 { 325 - interrupts = <109 IRQ_TYPE_LEVEL_HIGH>; 326 323 reg = <0>; 327 324 }; 328 325 329 326 sata-port@1 { 330 - interrupts = <107 IRQ_TYPE_LEVEL_HIGH>; 331 327 reg = <1>; 332 328 }; 333 329 };
+1
arch/arm64/include/asm/kvm_arm.h
··· 278 278 #define CPTR_EL2_DEFAULT CPTR_EL2_RES1 279 279 280 280 /* Hyp Debug Configuration Register bits */ 281 + #define MDCR_EL2_TTRF (1 << 19) 281 282 #define MDCR_EL2_TPMS (1 << 14) 282 283 #define MDCR_EL2_E2PB_MASK (UL(0x3)) 283 284 #define MDCR_EL2_E2PB_SHIFT (UL(12))
-1
arch/arm64/kernel/cpufeature.c
··· 383 383 * of support. 384 384 */ 385 385 S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0), 386 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_TRACEVER_SHIFT, 4, 0), 387 386 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6), 388 387 ARM64_FTR_END, 389 388 };
+2
arch/arm64/kvm/debug.c
··· 89 89 * - Debug ROM Address (MDCR_EL2_TDRA) 90 90 * - OS related registers (MDCR_EL2_TDOSA) 91 91 * - Statistical profiler (MDCR_EL2_TPMS/MDCR_EL2_E2PB) 92 + * - Self-hosted Trace Filter controls (MDCR_EL2_TTRF) 92 93 * 93 94 * Additionally, KVM only traps guest accesses to the debug registers if 94 95 * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY ··· 113 112 vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK; 114 113 vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM | 115 114 MDCR_EL2_TPMS | 115 + MDCR_EL2_TTRF | 116 116 MDCR_EL2_TPMCR | 117 117 MDCR_EL2_TDRA | 118 118 MDCR_EL2_TDOSA);
+9
arch/arm64/kvm/hyp/vgic-v3-sr.c
··· 429 429 if (has_vhe()) 430 430 flags = local_daif_save(); 431 431 432 + /* 433 + * Table 11-2 "Permitted ICC_SRE_ELx.SRE settings" indicates 434 + * that to be able to set ICC_SRE_EL1.SRE to 0, all the 435 + * interrupt overrides must be set. You've got to love this. 436 + */ 437 + sysreg_clear_set(hcr_el2, 0, HCR_AMO | HCR_FMO | HCR_IMO); 438 + isb(); 432 439 write_gicreg(0, ICC_SRE_EL1); 433 440 isb(); 434 441 435 442 val = read_gicreg(ICC_SRE_EL1); 436 443 437 444 write_gicreg(sre, ICC_SRE_EL1); 445 + isb(); 446 + sysreg_clear_set(hcr_el2, HCR_AMO | HCR_FMO | HCR_IMO, 0); 438 447 isb(); 439 448 440 449 if (has_vhe())
+1 -1
arch/mips/kernel/setup.c
··· 43 43 #include <asm/prom.h> 44 44 45 45 #ifdef CONFIG_MIPS_ELF_APPENDED_DTB 46 - const char __section(".appended_dtb") __appended_dtb[0x100000]; 46 + char __section(".appended_dtb") __appended_dtb[0x100000]; 47 47 #endif /* CONFIG_MIPS_ELF_APPENDED_DTB */ 48 48 49 49 struct cpuinfo_mips cpu_data[NR_CPUS] __read_mostly;
+1 -1
arch/parisc/include/asm/cmpxchg.h
··· 72 72 #endif 73 73 case 4: return __cmpxchg_u32((unsigned int *)ptr, 74 74 (unsigned int)old, (unsigned int)new_); 75 - case 1: return __cmpxchg_u8((u8 *)ptr, (u8)old, (u8)new_); 75 + case 1: return __cmpxchg_u8((u8 *)ptr, old & 0xff, new_ & 0xff); 76 76 } 77 77 __cmpxchg_called_with_bad_pointer(); 78 78 return old;
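The parisc cmpxchg.h hunk swaps an explicit `(u8)` cast for an `& 0xff` mask when narrowing `old`/`new_` down to one byte. The two forms select the same low byte; the mask just spells out the truncation instead of relying on an implicit conversion. A minimal sketch (hypothetical helper names, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

typedef uint8_t u8;

/* Cast form: the narrowing happens via implicit conversion rules. */
static unsigned int demo_low_byte_cast(unsigned long v)
{
	return (u8)v;
}

/* Mask form: the truncation to the low 8 bits is explicit. */
static unsigned int demo_low_byte_mask(unsigned long v)
{
	return v & 0xff;
}
```

For any input the two helpers agree, so the change is behavior-preserving.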
-1
arch/parisc/include/asm/processor.h
··· 272 272 regs->gr[23] = 0; \ 273 273 } while(0) 274 274 275 - struct task_struct; 276 275 struct mm_struct; 277 276 278 277 /* Free all resources held by a thread. */
+3 -29
arch/parisc/math-emu/fpu.h
··· 5 5 * Floating-point emulation code 6 6 * Copyright (C) 2001 Hewlett-Packard (Paul Bame) <bame@debian.org> 7 7 */ 8 - /* 9 - * BEGIN_DESC 10 - * 11 - * File: 12 - * @(#) pa/fp/fpu.h $Revision: 1.1 $ 13 - * 14 - * Purpose: 15 - * <<please update with a synopis of the functionality provided by this file>> 16 - * 17 - * 18 - * END_DESC 19 - */ 20 - 21 - #ifdef __NO_PA_HDRS 22 - PA header file -- do not include this header file for non-PA builds. 23 - #endif 24 - 25 8 26 9 #ifndef _MACHINE_FPU_INCLUDED /* allows multiple inclusion */ 27 10 #define _MACHINE_FPU_INCLUDED 28 - 29 - #if 0 30 - #ifndef _SYS_STDSYMS_INCLUDED 31 - # include <sys/stdsyms.h> 32 - #endif /* _SYS_STDSYMS_INCLUDED */ 33 - #include <machine/pdc/pdc_rqsts.h> 34 - #endif 35 11 36 12 #define PA83_FPU_FLAG 0x00000001 37 13 #define PA89_FPU_FLAG 0x00000002 ··· 19 43 #define COPR_FP 0x00000080 /* Floating point -- Coprocessor 0 */ 20 44 #define SFU_MPY_DIVIDE 0x00008000 /* Multiply/Divide __ SFU 0 */ 21 45 22 - 23 46 #define EM_FPU_TYPE_OFFSET 272 24 47 25 48 /* version of EMULATION software for COPR,0,0 instruction */ 26 49 #define EMULATION_VERSION 4 27 50 28 51 /* 29 - * The only was to differeniate between TIMEX and ROLEX (or PCX-S and PCX-T) 30 - * is thorough the potential type field from the PDC_MODEL call. The 31 - * following flags are used at assist this differeniation. 52 + * The only way to differentiate between TIMEX and ROLEX (or PCX-S and PCX-T) 53 + * is through the potential type field from the PDC_MODEL call. 54 + * The following flags are used to assist this differentiation. 32 55 */ 33 56 34 57 #define ROLEX_POTENTIAL_KEY_FLAGS PDC_MODEL_CPU_KEY_WORD_TO_IO 35 58 #define TIMEX_POTENTIAL_KEY_FLAGS (PDC_MODEL_CPU_KEY_QUAD_STORE | \ 36 59 PDC_MODEL_CPU_KEY_RECIP_SQRT) 37 - 38 60 39 61 #endif /* ! _MACHINE_FPU_INCLUDED */
+2 -1
arch/powerpc/platforms/pseries/lpar.c
··· 887 887 888 888 want_v = hpte_encode_avpn(vpn, psize, ssize); 889 889 890 - flags = (newpp & 7) | H_AVPN; 890 + flags = (newpp & (HPTE_R_PP | HPTE_R_N | HPTE_R_KEY_LO)) | H_AVPN; 891 + flags |= (newpp & HPTE_R_KEY_HI) >> 48; 891 892 if (mmu_has_feature(MMU_FTR_KERNEL_RO)) 892 893 /* Move pp0 into bit 8 (IBM 55) */ 893 894 flags |= (newpp & HPTE_R_PP0) >> 55;
+44 -4
arch/powerpc/platforms/pseries/mobility.c
··· 452 452 return ret; 453 453 } 454 454 455 + /** 456 + * struct pseries_suspend_info - State shared between CPUs for join/suspend. 457 + * @counter: Threads are to increment this upon resuming from suspend 458 + * or if an error is received from H_JOIN. The thread which performs 459 + * the first increment (i.e. sets it to 1) is responsible for 460 + * waking the other threads. 461 + * @done: False if join/suspend is in progress. True if the operation is 462 + * complete (successful or not). 463 + */ 464 + struct pseries_suspend_info { 465 + atomic_t counter; 466 + bool done; 467 + }; 468 + 455 469 static int do_join(void *arg) 456 470 { 457 - atomic_t *counter = arg; 471 + struct pseries_suspend_info *info = arg; 472 + atomic_t *counter = &info->counter; 458 473 long hvrc; 459 474 int ret; 460 475 476 + retry: 461 477 /* Must ensure MSR.EE off for H_JOIN. */ 462 478 hard_irq_disable(); 463 479 hvrc = plpar_hcall_norets(H_JOIN); ··· 489 473 case H_SUCCESS: 490 474 /* 491 475 * The suspend is complete and this cpu has received a 492 - * prod. 476 + * prod, or we've received a stray prod from unrelated 477 + * code (e.g. paravirt spinlocks) and we need to join 478 + * again. 479 + * 480 + * This barrier orders the return from H_JOIN above vs 481 + * the load of info->done. It pairs with the barrier 482 + * in the wakeup/prod path below. 493 483 */ 484 + smp_mb(); 485 + if (READ_ONCE(info->done) == false) { 486 + pr_info_ratelimited("premature return from H_JOIN on CPU %i, retrying", 487 + smp_processor_id()); 488 + goto retry; 489 + } 494 490 ret = 0; 495 491 break; 496 492 case H_BAD_MODE: ··· 516 488 517 489 if (atomic_inc_return(counter) == 1) { 518 490 pr_info("CPU %u waking all threads\n", smp_processor_id()); 491 + WRITE_ONCE(info->done, true); 492 + /* 493 + * This barrier orders the store to info->done vs subsequent 494 + * H_PRODs to wake the other CPUs. It pairs with the barrier 495 + * in the H_SUCCESS case above. 
496 + */ 497 + smp_mb(); 519 498 prod_others(); 520 499 } 521 500 /* ··· 570 535 int ret; 571 536 572 537 while (true) { 573 - atomic_t counter = ATOMIC_INIT(0); 538 + struct pseries_suspend_info info; 574 539 unsigned long vasi_state; 575 540 int vasi_err; 576 541 577 - ret = stop_machine(do_join, &counter, cpu_online_mask); 542 + info = (struct pseries_suspend_info) { 543 + .counter = ATOMIC_INIT(0), 544 + .done = false, 545 + }; 546 + 547 + ret = stop_machine(do_join, &info, cpu_online_mask); 578 548 if (ret == 0) 579 549 break; 580 550 /*
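The mobility.c hunk above handles stray prods: a thread returning from `H_JOIN` with `H_SUCCESS` re-checks a `done` flag and rejoins if the suspend has not actually completed. A single-threaded sketch of that retry shape (hypothetical names; the real code additionally needs the paired `smp_mb()` barriers shown in the diff, which a sequential simulation cannot exercise):

```c
#include <assert.h>
#include <stdbool.h>

static bool demo_done;
static int demo_join_calls;

/* Simulated H_JOIN: the first return models a stray prod (done still
 * false); on the second call the "waking thread" has set done first. */
static void demo_h_join(void)
{
	demo_join_calls++;
	if (demo_join_calls == 2)
		demo_done = true;
}

/* Mirrors do_join(): a premature wakeup loops back into H_JOIN. */
static int demo_do_join(void)
{
	int retries = 0;

retry:
	demo_h_join();
	/* smp_mb() here in the real code, pairing with the prod side */
	if (!demo_done) {
		retries++;
		goto retry;
	}
	return retries;
}
```

The stray prod costs one retry and the thread only proceeds once `done` is observed true.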
+1 -1
arch/riscv/Kconfig
··· 314 314 # Common NUMA Features 315 315 config NUMA 316 316 bool "NUMA Memory Allocation and Scheduler Support" 317 - depends on SMP 317 + depends on SMP && MMU 318 318 select GENERIC_ARCH_NUMA 319 319 select OF_NUMA 320 320 select ARCH_SUPPORTS_NUMA_BALANCING
+5 -2
arch/riscv/include/asm/uaccess.h
··· 306 306 * data types like structures or arrays. 307 307 * 308 308 * @ptr must have pointer-to-simple-variable type, and @x must be assignable 309 - * to the result of dereferencing @ptr. 309 + * to the result of dereferencing @ptr. The value of @x is copied to avoid 310 + * re-ordering where @x is evaluated inside the block that enables user-space 311 + * access (thus bypassing user space protection if @x is a function). 310 312 * 311 313 * Caller must check the pointer with access_ok() before calling this 312 314 * function. ··· 318 316 #define __put_user(x, ptr) \ 319 317 ({ \ 320 318 __typeof__(*(ptr)) __user *__gu_ptr = (ptr); \ 319 + __typeof__(*__gu_ptr) __val = (x); \ 321 320 long __pu_err = 0; \ 322 321 \ 323 322 __chk_user_ptr(__gu_ptr); \ 324 323 \ 325 324 __enable_user_access(); \ 326 - __put_user_nocheck(x, __gu_ptr, __pu_err); \ 325 + __put_user_nocheck(__val, __gu_ptr, __pu_err); \ 327 326 __disable_user_access(); \ 328 327 \ 329 328 __pu_err; \
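The riscv `__put_user()` fix above captures `x` into a local before `__enable_user_access()`, so evaluating the argument (which may be a function call) can never happen inside the open user-access window. The same evaluate-first macro shape, as a stand-alone sketch with a fake "window" flag recording where the argument ran (all names hypothetical):

```c
#include <assert.h>

static int demo_window_open;       /* stands in for the user-access window */
static int demo_eval_in_window;    /* records where the argument ran */
static int demo_sink;

static int demo_side_effect(void)
{
	demo_eval_in_window = demo_window_open;
	return 42;
}

/* Buggy shape: the argument expression is evaluated inside the window. */
#define DEMO_PUT_LATE(x) do {		\
	demo_window_open = 1;		\
	demo_sink = (x);		\
	demo_window_open = 0;		\
} while (0)

/* Fixed shape: capture the value first, then open the window. */
#define DEMO_PUT_EARLY(x) do {		\
	int __val = (x);		\
	demo_window_open = 1;		\
	demo_sink = __val;		\
	demo_window_open = 0;		\
} while (0)
```

With the fixed shape, a function-call argument runs strictly before the window opens, which is exactly the reordering hazard the commit message describes.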
+1
arch/riscv/kernel/entry.S
··· 447 447 #endif 448 448 449 449 .section ".rodata" 450 + .align LGREG 450 451 /* Exception vector table */ 451 452 ENTRY(excp_vect_table) 452 453 RISCV_PTR do_trap_insn_misaligned
+1 -1
arch/riscv/kernel/stacktrace.c
··· 14 14 15 15 #include <asm/stacktrace.h> 16 16 17 - register const unsigned long sp_in_global __asm__("sp"); 17 + register unsigned long sp_in_global __asm__("sp"); 18 18 19 19 #ifdef CONFIG_FRAME_POINTER 20 20
+1 -1
arch/riscv/mm/kasan_init.c
··· 216 216 break; 217 217 218 218 kasan_populate(kasan_mem_to_shadow(start), kasan_mem_to_shadow(end)); 219 - }; 219 + } 220 220 221 221 for (i = 0; i < PTRS_PER_PTE; i++) 222 222 set_pte(&kasan_early_shadow_pte[i],
+1
arch/s390/include/asm/stacktrace.h
··· 12 12 STACK_TYPE_IRQ, 13 13 STACK_TYPE_NODAT, 14 14 STACK_TYPE_RESTART, 15 + STACK_TYPE_MCCK, 15 16 }; 16 17 17 18 struct stack_info {
+1 -1
arch/s390/include/asm/vdso/data.h
··· 6 6 #include <vdso/datapage.h> 7 7 8 8 struct arch_vdso_data { 9 - __u64 tod_steering_delta; 9 + __s64 tod_steering_delta; 10 10 __u64 tod_steering_end; 11 11 }; 12 12
+4 -2
arch/s390/kernel/cpcmd.c
··· 37 37 38 38 static int diag8_response(int cmdlen, char *response, int *rlen) 39 39 { 40 + unsigned long _cmdlen = cmdlen | 0x40000000L; 41 + unsigned long _rlen = *rlen; 40 42 register unsigned long reg2 asm ("2") = (addr_t) cpcmd_buf; 41 43 register unsigned long reg3 asm ("3") = (addr_t) response; 42 - register unsigned long reg4 asm ("4") = cmdlen | 0x40000000L; 43 - register unsigned long reg5 asm ("5") = *rlen; 44 + register unsigned long reg4 asm ("4") = _cmdlen; 45 + register unsigned long reg5 asm ("5") = _rlen; 44 46 45 47 asm volatile( 46 48 " diag %2,%0,0x8\n"
+11 -1
arch/s390/kernel/dumpstack.c
··· 79 79 return in_stack(sp, info, STACK_TYPE_NODAT, top - THREAD_SIZE, top); 80 80 } 81 81 82 + static bool in_mcck_stack(unsigned long sp, struct stack_info *info) 83 + { 84 + unsigned long frame_size, top; 85 + 86 + frame_size = STACK_FRAME_OVERHEAD + sizeof(struct pt_regs); 87 + top = S390_lowcore.mcck_stack + frame_size; 88 + return in_stack(sp, info, STACK_TYPE_MCCK, top - THREAD_SIZE, top); 89 + } 90 + 82 91 static bool in_restart_stack(unsigned long sp, struct stack_info *info) 83 92 { 84 93 unsigned long frame_size, top; ··· 117 108 /* Check per-cpu stacks */ 118 109 if (!in_irq_stack(sp, info) && 119 110 !in_nodat_stack(sp, info) && 120 - !in_restart_stack(sp, info)) 111 + !in_restart_stack(sp, info) && 112 + !in_mcck_stack(sp, info)) 121 113 goto unknown; 122 114 123 115 recursion_check:
+1 -1
arch/s390/kernel/irq.c
··· 174 174 175 175 memcpy(&regs->int_code, &S390_lowcore.ext_cpu_addr, 4); 176 176 regs->int_parm = S390_lowcore.ext_params; 177 - regs->int_parm_long = *(unsigned long *)S390_lowcore.ext_params2; 177 + regs->int_parm_long = S390_lowcore.ext_params2; 178 178 179 179 from_idle = !user_mode(regs) && regs->psw.addr == (unsigned long)psw_idle_exit; 180 180 if (from_idle)
+1 -1
arch/s390/kernel/setup.c
··· 354 354 if (!new) 355 355 panic("Couldn't allocate machine check stack"); 356 356 WRITE_ONCE(S390_lowcore.mcck_stack, new + STACK_INIT_OFFSET); 357 - memblock_free(old, THREAD_SIZE); 357 + memblock_free_late(old, THREAD_SIZE); 358 358 return 0; 359 359 } 360 360 early_initcall(stack_realloc);
+8 -2
arch/s390/kernel/time.c
··· 80 80 { 81 81 struct ptff_qto qto; 82 82 struct ptff_qui qui; 83 + int cs; 83 84 84 85 /* Initialize TOD steering parameters */ 85 86 tod_steering_end = tod_clock_base.tod; 86 - vdso_data->arch_data.tod_steering_end = tod_steering_end; 87 + for (cs = 0; cs < CS_BASES; cs++) 88 + vdso_data[cs].arch_data.tod_steering_end = tod_steering_end; 87 89 88 90 if (!test_facility(28)) 89 91 return; ··· 368 366 { 369 367 unsigned long now, adj; 370 368 struct ptff_qto qto; 369 + int cs; 371 370 372 371 /* Fixup the monotonic sched clock. */ 373 372 tod_clock_base.eitod += delta; ··· 384 381 panic("TOD clock sync offset %li is too large to drift\n", 385 382 tod_steering_delta); 386 383 tod_steering_end = now + (abs(tod_steering_delta) << 15); 387 - vdso_data->arch_data.tod_steering_end = tod_steering_end; 384 + for (cs = 0; cs < CS_BASES; cs++) { 385 + vdso_data[cs].arch_data.tod_steering_end = tod_steering_end; 386 + vdso_data[cs].arch_data.tod_steering_delta = tod_steering_delta; 387 + } 388 388 389 389 /* Update LPAR offset. */ 390 390 if (ptff_query(PTFF_QTO) && ptff(&qto, sizeof(qto), PTFF_QTO) == 0)
+1 -1
arch/x86/Makefile
··· 27 27 REALMODE_CFLAGS := -m16 -g -Os -DDISABLE_BRANCH_PROFILING \ 28 28 -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \ 29 29 -fno-strict-aliasing -fomit-frame-pointer -fno-pic \ 30 - -mno-mmx -mno-sse 30 + -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none) 31 31 32 32 REALMODE_CFLAGS += -ffreestanding 33 33 REALMODE_CFLAGS += -fno-stack-protector
+1
arch/x86/include/asm/smp.h
··· 132 132 void play_dead_common(void); 133 133 void wbinvd_on_cpu(int cpu); 134 134 int wbinvd_on_all_cpus(void); 135 + void cond_wakeup_cpu0(void); 135 136 136 137 void native_smp_send_reschedule(int cpu); 137 138 void native_send_call_func_ipi(const struct cpumask *mask);
-12
arch/x86/include/asm/xen/page.h
··· 87 87 #endif 88 88 89 89 /* 90 - * The maximum amount of extra memory compared to the base size. The 91 - * main scaling factor is the size of struct page. At extreme ratios 92 - * of base:extra, all the base memory can be filled with page 93 - * structures for the extra memory, leaving no space for anything 94 - * else. 95 - * 96 - * 10x seems like a reasonable balance between scaling flexibility and 97 - * leaving a practically usable system. 98 - */ 99 - #define XEN_EXTRA_MEM_RATIO (10) 100 - 101 - /* 102 90 * Helper functions to write or read unsigned long values to/from 103 91 * memory, when the access may fault. 104 92 */
+12 -13
arch/x86/kernel/acpi/boot.c
··· 1554 1554 /* 1555 1555 * Initialize the ACPI boot-time table parser. 1556 1556 */ 1557 - if (acpi_table_init()) { 1557 + if (acpi_locate_initial_tables()) 1558 1558 disable_acpi(); 1559 - return; 1560 - } 1559 + else 1560 + acpi_reserve_initial_tables(); 1561 + } 1562 + 1563 + int __init early_acpi_boot_init(void) 1564 + { 1565 + if (acpi_disabled) 1566 + return 1; 1567 + 1568 + acpi_table_init_complete(); 1561 1569 1562 1570 acpi_table_parse(ACPI_SIG_BOOT, acpi_parse_sbf); 1563 1571 ··· 1578 1570 } else { 1579 1571 printk(KERN_WARNING PREFIX "Disabling ACPI support\n"); 1580 1572 disable_acpi(); 1581 - return; 1573 + return 1; 1582 1574 } 1583 1575 } 1584 - } 1585 - 1586 - int __init early_acpi_boot_init(void) 1587 - { 1588 - /* 1589 - * If acpi_disabled, bail out 1590 - */ 1591 - if (acpi_disabled) 1592 - return 1; 1593 1576 1594 1577 /* 1595 1578 * Process the Multiple APIC Description Table (MADT), if present
+3 -5
arch/x86/kernel/setup.c
··· 1045 1045 1046 1046 cleanup_highmap(); 1047 1047 1048 + /* Look for ACPI tables and reserve memory occupied by them. */ 1049 + acpi_boot_table_init(); 1050 + 1048 1051 memblock_set_current_limit(ISA_END_ADDRESS); 1049 1052 e820__memblock_setup(); 1050 1053 ··· 1138 1135 io_delay_init(); 1139 1136 1140 1137 early_platform_quirks(); 1141 - 1142 - /* 1143 - * Parse the ACPI tables for possible boot-time SMP configuration. 1144 - */ 1145 - acpi_boot_table_init(); 1146 1138 1147 1139 early_acpi_boot_init(); 1148 1140
+12 -14
arch/x86/kernel/smpboot.c
··· 1659 1659 local_irq_disable(); 1660 1660 } 1661 1661 1662 - static bool wakeup_cpu0(void) 1662 + /** 1663 + * cond_wakeup_cpu0 - Wake up CPU0 if needed. 1664 + * 1665 + * If NMI wants to wake up CPU0, start CPU0. 1666 + */ 1667 + void cond_wakeup_cpu0(void) 1663 1668 { 1664 1669 if (smp_processor_id() == 0 && enable_start_cpu0) 1665 - return true; 1666 - 1667 - return false; 1670 + start_cpu0(); 1668 1671 } 1672 + EXPORT_SYMBOL_GPL(cond_wakeup_cpu0); 1669 1673 1670 1674 /* 1671 1675 * We need to flush the caches before going to sleep, lest we have ··· 1738 1734 __monitor(mwait_ptr, 0, 0); 1739 1735 mb(); 1740 1736 __mwait(eax, 0); 1741 - /* 1742 - * If NMI wants to wake up CPU0, start CPU0. 1743 - */ 1744 - if (wakeup_cpu0()) 1745 - start_cpu0(); 1737 + 1738 + cond_wakeup_cpu0(); 1746 1739 } 1747 1740 } 1748 1741 ··· 1750 1749 1751 1750 while (1) { 1752 1751 native_halt(); 1753 - /* 1754 - * If NMI wants to wake up CPU0, start CPU0. 1755 - */ 1756 - if (wakeup_cpu0()) 1757 - start_cpu0(); 1752 + 1753 + cond_wakeup_cpu0(); 1758 1754 } 1759 1755 } 1760 1756
+1 -1
arch/x86/kvm/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - ccflags-y += -Iarch/x86/kvm 3 + ccflags-y += -I $(srctree)/arch/x86/kvm 4 4 ccflags-$(CONFIG_KVM_WERROR) += -Werror 5 5 6 6 ifeq ($(CONFIG_FRAME_POINTER),y)
+5 -4
arch/x86/kvm/mmu/mmu.c
··· 5884 5884 struct kvm_mmu_page *sp; 5885 5885 unsigned int ratio; 5886 5886 LIST_HEAD(invalid_list); 5887 + bool flush = false; 5887 5888 ulong to_zap; 5888 5889 5889 5890 rcu_idx = srcu_read_lock(&kvm->srcu); ··· 5906 5905 lpage_disallowed_link); 5907 5906 WARN_ON_ONCE(!sp->lpage_disallowed); 5908 5907 if (is_tdp_mmu_page(sp)) { 5909 - kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, 5910 - sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level)); 5908 + flush |= kvm_tdp_mmu_zap_sp(kvm, sp); 5911 5909 } else { 5912 5910 kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list); 5913 5911 WARN_ON_ONCE(sp->lpage_disallowed); 5914 5912 } 5915 5913 5916 5914 if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) { 5917 - kvm_mmu_commit_zap_page(kvm, &invalid_list); 5915 + kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush); 5918 5916 cond_resched_rwlock_write(&kvm->mmu_lock); 5917 + flush = false; 5919 5918 } 5920 5919 } 5921 - kvm_mmu_commit_zap_page(kvm, &invalid_list); 5920 + kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush); 5922 5921 5923 5922 write_unlock(&kvm->mmu_lock); 5924 5923 srcu_read_unlock(&kvm->srcu, rcu_idx);
+14 -12
arch/x86/kvm/mmu/tdp_mmu.c
··· 86 86 list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link) 87 87 88 88 static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, 89 - gfn_t start, gfn_t end, bool can_yield); 89 + gfn_t start, gfn_t end, bool can_yield, bool flush); 90 90 91 91 void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root) 92 92 { ··· 99 99 100 100 list_del(&root->link); 101 101 102 - zap_gfn_range(kvm, root, 0, max_gfn, false); 102 + zap_gfn_range(kvm, root, 0, max_gfn, false, false); 103 103 104 104 free_page((unsigned long)root->spt); 105 105 kmem_cache_free(mmu_page_header_cache, root); ··· 668 668 * scheduler needs the CPU or there is contention on the MMU lock. If this 669 669 * function cannot yield, it will not release the MMU lock or reschedule and 670 670 * the caller must ensure it does not supply too large a GFN range, or the 671 - * operation can cause a soft lockup. 671 + * operation can cause a soft lockup. Note, in some use cases a flush may be 672 + * required by prior actions. Ensure the pending flush is performed prior to 673 + * yielding. 
672 674 */ 673 675 static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, 674 - gfn_t start, gfn_t end, bool can_yield) 676 + gfn_t start, gfn_t end, bool can_yield, bool flush) 675 677 { 676 678 struct tdp_iter iter; 677 - bool flush_needed = false; 678 679 679 680 rcu_read_lock(); 680 681 681 682 tdp_root_for_each_pte(iter, root, start, end) { 682 683 if (can_yield && 683 - tdp_mmu_iter_cond_resched(kvm, &iter, flush_needed)) { 684 - flush_needed = false; 684 + tdp_mmu_iter_cond_resched(kvm, &iter, flush)) { 685 + flush = false; 685 686 continue; 686 687 } 687 688 ··· 700 699 continue; 701 700 702 701 tdp_mmu_set_spte(kvm, &iter, 0); 703 - flush_needed = true; 702 + flush = true; 704 703 } 705 704 706 705 rcu_read_unlock(); 707 - return flush_needed; 706 + return flush; 708 707 } 709 708 710 709 /* ··· 713 712 * SPTEs have been cleared and a TLB flush is needed before releasing the 714 713 * MMU lock. 715 714 */ 716 - bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end) 715 + bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end, 716 + bool can_yield) 717 717 { 718 718 struct kvm_mmu_page *root; 719 719 bool flush = false; 720 720 721 721 for_each_tdp_mmu_root_yield_safe(kvm, root) 722 - flush |= zap_gfn_range(kvm, root, start, end, true); 722 + flush = zap_gfn_range(kvm, root, start, end, can_yield, flush); 723 723 724 724 return flush; 725 725 } ··· 932 930 struct kvm_mmu_page *root, gfn_t start, 933 931 gfn_t end, unsigned long unused) 934 932 { 935 - return zap_gfn_range(kvm, root, start, end, false); 933 + return zap_gfn_range(kvm, root, start, end, false, false); 936 934 } 937 935 938 936 int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
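The tdp_mmu.c changes above thread a `flush` flag through the zap loop so that a pending TLB flush is performed before the MMU lock is yielded, and the residual need is returned to the caller. A stand-alone sketch of that accumulate-then-flush-before-yield pattern (hypothetical names, no real locking or TLBs):

```c
#include <assert.h>
#include <stdbool.h>

static int demo_flushes_done;

static void demo_flush_tlb(void)
{
	demo_flushes_done++;
}

/* Yielding with a flush still pending would let others observe stale
 * translations, so flush first if anything was zapped. */
static void demo_maybe_yield(bool *flush)
{
	if (*flush) {
		demo_flush_tlb();
		*flush = false;
	}
	/* ...drop and re-take the lock here in the real code... */
}

/* Walk the "page table", zap present entries, accumulate the flush
 * requirement, and hand any residual flush back to the caller. */
static bool demo_zap_range(const int *ptes, int n, bool flush)
{
	for (int i = 0; i < n; i++) {
		if (i && (i % 4) == 0)	/* periodic reschedule check */
			demo_maybe_yield(&flush);
		if (ptes[i])		/* present entry: zap it */
			flush = true;
	}
	return flush;	/* caller performs the final flush */
}

static const int demo_ptes[8] = { 1, 0, 1, 0, 0, 0, 1, 0 };
```

One flush happens at the mid-walk yield point, and the entry zapped afterwards leaves a residual flush for the caller, mirroring how `kvm_mmu_remote_flush_or_zap()` is invoked with the accumulated flag.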
+23 -1
arch/x86/kvm/mmu/tdp_mmu.h
··· 8 8 hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu); 9 9 void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root); 10 10 11 - bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end); 11 + bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end, 12 + bool can_yield); 13 + static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, 14 + gfn_t end) 15 + { 16 + return __kvm_tdp_mmu_zap_gfn_range(kvm, start, end, true); 17 + } 18 + static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp) 19 + { 20 + gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level); 21 + 22 + /* 23 + * Don't allow yielding, as the caller may have a flush pending. Note, 24 + * if mmu_lock is held for write, zapping will never yield in this case, 25 + * but explicitly disallow it for safety. The TDP MMU does not yield 26 + * until it has made forward progress (steps sideways), and when zapping 27 + * a single shadow page that it's guaranteed to see (thus the mmu_lock 28 + * requirement), its "step sideways" will always step beyond the bounds 29 + * of the shadow page's gfn range and stop iterating before yielding. 30 + */ 31 + lockdep_assert_held_write(&kvm->mmu_lock); 32 + return __kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, end, false); 33 + } 12 34 void kvm_tdp_mmu_zap_all(struct kvm *kvm); 13 35 14 36 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
+23 -5
arch/x86/kvm/svm/nested.c
··· 246 246 return true; 247 247 } 248 248 249 - static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12) 249 + static bool nested_vmcb_check_save(struct vcpu_svm *svm, struct vmcb *vmcb12) 250 250 { 251 251 struct kvm_vcpu *vcpu = &svm->vcpu; 252 252 bool vmcb12_lma; 253 253 254 + /* 255 + * FIXME: these should be done after copying the fields, 256 + * to avoid TOC/TOU races. For these save area checks 257 + * the possible damage is limited since kvm_set_cr0 and 258 + * kvm_set_cr4 handle failure; EFER_SVME is an exception 259 + * so it is force-set later in nested_prepare_vmcb_save. 260 + */ 254 261 if ((vmcb12->save.efer & EFER_SVME) == 0) 255 262 return false; 256 263 ··· 278 271 if (!kvm_is_valid_cr4(&svm->vcpu, vmcb12->save.cr4)) 279 272 return false; 280 273 281 - return nested_vmcb_check_controls(&vmcb12->control); 274 + return true; 282 275 } 283 276 284 277 static void load_nested_vmcb_control(struct vcpu_svm *svm, ··· 403 396 svm->vmcb->save.gdtr = vmcb12->save.gdtr; 404 397 svm->vmcb->save.idtr = vmcb12->save.idtr; 405 398 kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED); 406 - svm_set_efer(&svm->vcpu, vmcb12->save.efer); 399 + 400 + /* 401 + * Force-set EFER_SVME even though it is checked earlier on the 402 + * VMCB12, because the guest can flip the bit between the check 403 + * and now. Clearing EFER_SVME would call svm_free_nested. 
404 + */ 405 + svm_set_efer(&svm->vcpu, vmcb12->save.efer | EFER_SVME); 406 + 407 407 svm_set_cr0(&svm->vcpu, vmcb12->save.cr0); 408 408 svm_set_cr4(&svm->vcpu, vmcb12->save.cr4); 409 409 svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = vmcb12->save.cr2; ··· 482 468 483 469 484 470 svm->nested.vmcb12_gpa = vmcb12_gpa; 485 - load_nested_vmcb_control(svm, &vmcb12->control); 486 471 nested_prepare_vmcb_control(svm); 487 472 nested_prepare_vmcb_save(svm, vmcb12); 488 473 ··· 528 515 if (WARN_ON_ONCE(!svm->nested.initialized)) 529 516 return -EINVAL; 530 517 531 - if (!nested_vmcb_checks(svm, vmcb12)) { 518 + load_nested_vmcb_control(svm, &vmcb12->control); 519 + 520 + if (!nested_vmcb_check_save(svm, vmcb12) || 521 + !nested_vmcb_check_controls(&svm->nested.ctl)) { 532 522 vmcb12->control.exit_code = SVM_EXIT_ERR; 533 523 vmcb12->control.exit_code_hi = 0; 534 524 vmcb12->control.exit_info_1 = 0; ··· 1224 1208 * TODO: validate reserved bits for all saved state. 1225 1209 */ 1226 1210 if (!(save->cr0 & X86_CR0_PG)) 1211 + goto out_free; 1212 + if (!(save->efer & EFER_SVME)) 1227 1213 goto out_free; 1228 1214 1229 1215 /*
+8
arch/x86/kvm/svm/pmu.c
··· 98 98 static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr, 99 99 enum pmu_type type) 100 100 { 101 + struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu); 102 + 101 103 switch (msr) { 102 104 case MSR_F15H_PERF_CTL0: 103 105 case MSR_F15H_PERF_CTL1: ··· 107 105 case MSR_F15H_PERF_CTL3: 108 106 case MSR_F15H_PERF_CTL4: 109 107 case MSR_F15H_PERF_CTL5: 108 + if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) 109 + return NULL; 110 + fallthrough; 110 111 case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3: 111 112 if (type != PMU_TYPE_EVNTSEL) 112 113 return NULL; ··· 120 115 case MSR_F15H_PERF_CTR3: 121 116 case MSR_F15H_PERF_CTR4: 122 117 case MSR_F15H_PERF_CTR5: 118 + if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) 119 + return NULL; 120 + fallthrough; 123 121 case MSR_K7_PERFCTR0 ... MSR_K7_PERFCTR3: 124 122 if (type != PMU_TYPE_COUNTER) 125 123 return NULL;
+36 -21
arch/x86/kvm/x86.c
··· 271 271 * When called, it means the previous get/set msr reached an invalid msr. 272 272 * Return true if we want to ignore/silent this failed msr access. 273 273 */ 274 - static bool kvm_msr_ignored_check(struct kvm_vcpu *vcpu, u32 msr, 275 - u64 data, bool write) 274 + static bool kvm_msr_ignored_check(u32 msr, u64 data, bool write) 276 275 { 277 276 const char *op = write ? "wrmsr" : "rdmsr"; 278 277 ··· 1444 1445 if (r == KVM_MSR_RET_INVALID) { 1445 1446 /* Unconditionally clear the output for simplicity */ 1446 1447 *data = 0; 1447 - if (kvm_msr_ignored_check(vcpu, index, 0, false)) 1448 + if (kvm_msr_ignored_check(index, 0, false)) 1448 1449 r = 0; 1449 1450 } 1450 1451 ··· 1619 1620 int ret = __kvm_set_msr(vcpu, index, data, host_initiated); 1620 1621 1621 1622 if (ret == KVM_MSR_RET_INVALID) 1622 - if (kvm_msr_ignored_check(vcpu, index, data, true)) 1623 + if (kvm_msr_ignored_check(index, data, true)) 1623 1624 ret = 0; 1624 1625 1625 1626 return ret; ··· 1657 1658 if (ret == KVM_MSR_RET_INVALID) { 1658 1659 /* Unconditionally clear *data for simplicity */ 1659 1660 *data = 0; 1660 - if (kvm_msr_ignored_check(vcpu, index, 0, false)) 1661 + if (kvm_msr_ignored_check(index, 0, false)) 1661 1662 ret = 0; 1662 1663 } 1663 1664 ··· 2328 2329 kvm_vcpu_write_tsc_offset(vcpu, offset); 2329 2330 raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags); 2330 2331 2331 - spin_lock(&kvm->arch.pvclock_gtod_sync_lock); 2332 + spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags); 2332 2333 if (!matched) { 2333 2334 kvm->arch.nr_vcpus_matched_tsc = 0; 2334 2335 } else if (!already_matched) { ··· 2336 2337 } 2337 2338 2338 2339 kvm_track_tsc_matching(vcpu); 2339 - spin_unlock(&kvm->arch.pvclock_gtod_sync_lock); 2340 + spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags); 2340 2341 } 2341 2342 2342 2343 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu, ··· 2558 2559 int i; 2559 2560 struct kvm_vcpu *vcpu; 2560 2561 struct kvm_arch *ka = &kvm->arch; 2562 + unsigned long flags; 2561 2563 2562 2564 kvm_hv_invalidate_tsc_page(kvm); 2563 2565 2564 - spin_lock(&ka->pvclock_gtod_sync_lock); 2565 2566 kvm_make_mclock_inprogress_request(kvm); 2567 + 2566 2568 /* no guest entries from this point */ 2569 + spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags); 2567 2570 pvclock_update_vm_gtod_copy(kvm); 2571 + spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags); 2568 2572 2569 2573 kvm_for_each_vcpu(i, vcpu, kvm) 2570 2574 kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu); ··· 2575 2573 /* guest entries allowed */ 2576 2574 kvm_for_each_vcpu(i, vcpu, kvm) 2577 2575 kvm_clear_request(KVM_REQ_MCLOCK_INPROGRESS, vcpu); 2578 - 2579 - spin_unlock(&ka->pvclock_gtod_sync_lock); 2580 2576 #endif 2581 2577 } 2582 2578 ··· 2582 2582 { 2583 2583 struct kvm_arch *ka = &kvm->arch; 2584 2584 struct pvclock_vcpu_time_info hv_clock; 2585 + unsigned long flags; 2585 2586 u64 ret; 2586 2587 2587 - spin_lock(&ka->pvclock_gtod_sync_lock); 2588 + spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags); 2588 2589 if (!ka->use_master_clock) { 2589 - spin_unlock(&ka->pvclock_gtod_sync_lock); 2590 + spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags); 2590 2591 return get_kvmclock_base_ns() + ka->kvmclock_offset; 2591 2592 } 2592 2593 2593 2594 hv_clock.tsc_timestamp = ka->master_cycle_now; 2594 2595 hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset; 2595 - spin_unlock(&ka->pvclock_gtod_sync_lock); 2596 + spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags); 2596 2597 2597 2598 /* both __this_cpu_read() and rdtsc() should be on the same cpu */ 2598 2599 get_cpu(); ··· 2687 2686 * If the host uses TSC clock, then passthrough TSC as stable 2688 2687 * to the guest.
2689 2688 */ 2690 - spin_lock(&ka->pvclock_gtod_sync_lock); 2689 + spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags); 2691 2690 use_master_clock = ka->use_master_clock; 2692 2691 if (use_master_clock) { 2693 2692 host_tsc = ka->master_cycle_now; 2694 2693 kernel_ns = ka->master_kernel_ns; 2695 2694 } 2696 - spin_unlock(&ka->pvclock_gtod_sync_lock); 2695 + spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags); 2697 2696 2698 2697 /* Keep irq disabled to prevent changes to the clock */ 2699 2698 local_irq_save(flags); ··· 5727 5726 } 5728 5727 #endif 5729 5728 case KVM_SET_CLOCK: { 5729 + struct kvm_arch *ka = &kvm->arch; 5730 5730 struct kvm_clock_data user_ns; 5731 5731 u64 now_ns; 5732 5732 ··· 5746 5744 * pvclock_update_vm_gtod_copy(). 5747 5745 */ 5748 5746 kvm_gen_update_masterclock(kvm); 5749 - now_ns = get_kvmclock_ns(kvm); 5750 - kvm->arch.kvmclock_offset += user_ns.clock - now_ns; 5747 + 5748 + /* 5749 + * This pairs with kvm_guest_time_update(): when masterclock is 5750 + * in use, we use master_kernel_ns + kvmclock_offset to set 5751 + * unsigned 'system_time' so if we use get_kvmclock_ns() (which 5752 + * is slightly ahead) here we risk going negative on unsigned 5753 + * 'system_time' when 'user_ns.clock' is very small. 
5754 + */ 5755 + spin_lock_irq(&ka->pvclock_gtod_sync_lock); 5756 + if (kvm->arch.use_master_clock) 5757 + now_ns = ka->master_kernel_ns; 5758 + else 5759 + now_ns = get_kvmclock_base_ns(); 5760 + ka->kvmclock_offset = user_ns.clock - now_ns; 5761 + spin_unlock_irq(&ka->pvclock_gtod_sync_lock); 5762 + 5751 5763 kvm_make_all_cpus_request(kvm, KVM_REQ_CLOCK_UPDATE); 5752 5764 break; 5753 5765 } ··· 7740 7724 struct kvm *kvm; 7741 7725 struct kvm_vcpu *vcpu; 7742 7726 int cpu; 7727 + unsigned long flags; 7743 7728 7744 7729 mutex_lock(&kvm_lock); 7745 7730 list_for_each_entry(kvm, &vm_list, vm_list) ··· 7756 7739 list_for_each_entry(kvm, &vm_list, vm_list) { 7757 7740 struct kvm_arch *ka = &kvm->arch; 7758 7741 7759 - spin_lock(&ka->pvclock_gtod_sync_lock); 7760 - 7742 + spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags); 7761 7743 pvclock_update_vm_gtod_copy(kvm); 7744 + spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags); 7762 7745 7763 7746 kvm_for_each_vcpu(cpu, vcpu, kvm) 7764 7747 kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu); 7765 7748 7766 7749 kvm_for_each_vcpu(cpu, vcpu, kvm) 7767 7750 kvm_clear_request(KVM_REQ_MCLOCK_INPROGRESS, vcpu); 7768 - 7769 - spin_unlock(&ka->pvclock_gtod_sync_lock); 7770 7751 } 7771 7752 mutex_unlock(&kvm_lock); 7772 7753 }
-1
arch/x86/kvm/x86.h
··· 250 250 void kvm_write_wall_clock(struct kvm *kvm, gpa_t wall_clock, int sec_hi_ofs); 251 251 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip); 252 252 253 - void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr); 254 253 u64 get_kvmclock_ns(struct kvm *kvm); 255 254 256 255 int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
+1 -1
arch/x86/mm/mem_encrypt.c
··· 262 262 if (pgprot_val(old_prot) == pgprot_val(new_prot)) 263 263 return; 264 264 265 - pa = pfn << page_level_shift(level); 265 + pa = pfn << PAGE_SHIFT; 266 266 size = page_level_size(level); 267 267 268 268 /*
+10 -1
arch/x86/net/bpf_jit_comp.c
··· 1689 1689 } 1690 1690 1691 1691 if (image) { 1692 - if (unlikely(proglen + ilen > oldproglen)) { 1692 + /* 1693 + * When populating the image, assert that: 1694 + * 1695 + * i) We do not write beyond the allocated space, and 1696 + * ii) addrs[i] did not change from the prior run, in order 1697 + * to validate assumptions made for computing branch 1698 + * displacements. 1699 + */ 1700 + if (unlikely(proglen + ilen > oldproglen || 1701 + proglen + ilen != addrs[i])) { 1693 1702 pr_err("bpf_jit: fatal error\n"); 1694 1703 return -EFAULT; 1695 1704 }
+10 -1
arch/x86/net/bpf_jit_comp32.c
··· 2469 2469 } 2470 2470 2471 2471 if (image) { 2472 - if (unlikely(proglen + ilen > oldproglen)) { 2472 + /* 2473 + * When populating the image, assert that: 2474 + * 2475 + * i) We do not write beyond the allocated space, and 2476 + * ii) addrs[i] did not change from the prior run, in order 2477 + * to validate assumptions made for computing branch 2478 + * displacements. 2479 + */ 2480 + if (unlikely(proglen + ilen > oldproglen || 2481 + proglen + ilen != addrs[i])) { 2473 2482 pr_err("bpf_jit: fatal error\n"); 2474 2483 return -EFAULT; 2475 2484 }
+2 -5
arch/x86/xen/p2m.c
··· 98 98 unsigned long xen_max_p2m_pfn __read_mostly; 99 99 EXPORT_SYMBOL_GPL(xen_max_p2m_pfn); 100 100 101 - #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT 102 - #define P2M_LIMIT CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT 101 + #ifdef CONFIG_XEN_MEMORY_HOTPLUG_LIMIT 102 + #define P2M_LIMIT CONFIG_XEN_MEMORY_HOTPLUG_LIMIT 103 103 #else 104 104 #define P2M_LIMIT 0 105 105 #endif ··· 416 416 xen_p2m_last_pfn = xen_max_p2m_pfn; 417 417 418 418 p2m_limit = (phys_addr_t)P2M_LIMIT * 1024 * 1024 * 1024 / PAGE_SIZE; 419 - if (!p2m_limit && IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC)) 420 - p2m_limit = xen_start_info->nr_pages * XEN_EXTRA_MEM_RATIO; 421 - 422 419 vm.flags = VM_ALLOC; 423 420 vm.size = ALIGN(sizeof(unsigned long) * max(xen_max_p2m_pfn, p2m_limit), 424 421 PMD_SIZE * PMDS_PER_MID_PAGE);
+14 -2
arch/x86/xen/setup.c
··· 59 59 } xen_remap_buf __initdata __aligned(PAGE_SIZE); 60 60 static unsigned long xen_remap_mfn __initdata = INVALID_P2M_ENTRY; 61 61 62 + /* 63 + * The maximum amount of extra memory compared to the base size. The 64 + * main scaling factor is the size of struct page. At extreme ratios 65 + * of base:extra, all the base memory can be filled with page 66 + * structures for the extra memory, leaving no space for anything 67 + * else. 68 + * 69 + * 10x seems like a reasonable balance between scaling flexibility and 70 + * leaving a practically usable system. 71 + */ 72 + #define EXTRA_MEM_RATIO (10) 73 + 62 74 static bool xen_512gb_limit __initdata = IS_ENABLED(CONFIG_XEN_512GB); 63 75 64 76 static void __init xen_parse_512gb(void) ··· 790 778 extra_pages += max_pages - max_pfn; 791 779 792 780 /* 793 - * Clamp the amount of extra memory to a XEN_EXTRA_MEM_RATIO 781 + * Clamp the amount of extra memory to a EXTRA_MEM_RATIO 794 782 * factor the base size. 795 783 * 796 784 * Make sure we have no memory above max_pages, as this area 797 785 * isn't handled by the p2m management. 798 786 */ 799 - extra_pages = min3(XEN_EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)), 787 + extra_pages = min3(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)), 800 788 extra_pages, max_pages - max_pfn); 801 789 i = 0; 802 790 addr = xen_e820_table.entries[0].addr;
+33 -31
arch/xtensa/kernel/coprocessor.S
··· 100 100 LOAD_CP_REGS_TAB(7) 101 101 102 102 /* 103 - * coprocessor_flush(struct thread_info*, index) 104 - * a2 a3 105 - * 106 - * Save coprocessor registers for coprocessor 'index'. 107 - * The register values are saved to or loaded from the coprocessor area 108 - * inside the task_info structure. 109 - * 110 - * Note that this function doesn't update the coprocessor_owner information! 111 - * 112 - */ 113 - 114 - ENTRY(coprocessor_flush) 115 - 116 - /* reserve 4 bytes on stack to save a0 */ 117 - abi_entry(4) 118 - 119 - s32i a0, a1, 0 120 - movi a0, .Lsave_cp_regs_jump_table 121 - addx8 a3, a3, a0 122 - l32i a4, a3, 4 123 - l32i a3, a3, 0 124 - add a2, a2, a4 125 - beqz a3, 1f 126 - callx0 a3 127 - 1: l32i a0, a1, 0 128 - 129 - abi_ret(4) 130 - 131 - ENDPROC(coprocessor_flush) 132 - 133 - /* 134 103 * Entry condition: 135 104 * 136 105 * a0: trashed, original value saved on stack (PT_AREG0) ··· 213 244 rfe 214 245 215 246 ENDPROC(fast_coprocessor) 247 + 248 + .text 249 + 250 + /* 251 + * coprocessor_flush(struct thread_info*, index) 252 + * a2 a3 253 + * 254 + * Save coprocessor registers for coprocessor 'index'. 255 + * The register values are saved to or loaded from the coprocessor area 256 + * inside the task_info structure. 257 + * 258 + * Note that this function doesn't update the coprocessor_owner information! 259 + * 260 + */ 261 + 262 + ENTRY(coprocessor_flush) 263 + 264 + /* reserve 4 bytes on stack to save a0 */ 265 + abi_entry(4) 266 + 267 + s32i a0, a1, 0 268 + movi a0, .Lsave_cp_regs_jump_table 269 + addx8 a3, a3, a0 270 + l32i a4, a3, 4 271 + l32i a3, a3, 0 272 + add a2, a2, a4 273 + beqz a3, 1f 274 + callx0 a3 275 + 1: l32i a0, a1, 0 276 + 277 + abi_ret(4) 278 + 279 + ENDPROC(coprocessor_flush) 216 280 217 281 .data 218 282
+4 -1
arch/xtensa/mm/fault.c
··· 112 112 */ 113 113 fault = handle_mm_fault(vma, address, flags, regs); 114 114 115 - if (fault_signal_pending(fault, regs)) 115 + if (fault_signal_pending(fault, regs)) { 116 + if (!user_mode(regs)) 117 + goto bad_page_fault; 116 118 return; 119 + } 117 120 118 121 if (unlikely(fault & VM_FAULT_ERROR)) { 119 122 if (fault & VM_FAULT_OOM)
+19 -4
block/bio.c
··· 277 277 { 278 278 struct bio *parent = bio->bi_private; 279 279 280 - if (!parent->bi_status) 280 + if (bio->bi_status && !parent->bi_status) 281 281 parent->bi_status = bio->bi_status; 282 282 bio_put(bio); 283 283 return parent; ··· 949 949 } 950 950 EXPORT_SYMBOL_GPL(bio_release_pages); 951 951 952 - static int bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter) 952 + static void __bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter) 953 953 { 954 954 WARN_ON_ONCE(bio->bi_max_vecs); 955 955 ··· 959 959 bio->bi_iter.bi_size = iter->count; 960 960 bio_set_flag(bio, BIO_NO_PAGE_REF); 961 961 bio_set_flag(bio, BIO_CLONED); 962 + } 962 963 964 + static int bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter) 965 + { 966 + __bio_iov_bvec_set(bio, iter); 963 967 iov_iter_advance(iter, iter->count); 968 + return 0; 969 + } 970 + 971 + static int bio_iov_bvec_set_append(struct bio *bio, struct iov_iter *iter) 972 + { 973 + struct request_queue *q = bio->bi_bdev->bd_disk->queue; 974 + struct iov_iter i = *iter; 975 + 976 + iov_iter_truncate(&i, queue_max_zone_append_sectors(q) << 9); 977 + __bio_iov_bvec_set(bio, &i); 978 + iov_iter_advance(iter, i.count); 964 979 return 0; 965 980 } 966 981 ··· 1109 1094 int ret = 0; 1110 1095 1111 1096 if (iov_iter_is_bvec(iter)) { 1112 - if (WARN_ON_ONCE(bio_op(bio) == REQ_OP_ZONE_APPEND)) 1113 - return -EINVAL; 1097 + if (bio_op(bio) == REQ_OP_ZONE_APPEND) 1098 + return bio_iov_bvec_set_append(bio, iter); 1114 1099 return bio_iov_bvec_set(bio, iter); 1115 1100 } 1116 1101
+8
block/blk-merge.c
··· 382 382 switch (bio_op(rq->bio)) { 383 383 case REQ_OP_DISCARD: 384 384 case REQ_OP_SECURE_ERASE: 385 + if (queue_max_discard_segments(rq->q) > 1) { 386 + struct bio *bio = rq->bio; 387 + 388 + for_each_bio(bio) 389 + nr_phys_segs++; 390 + return nr_phys_segs; 391 + } 392 + return 1; 385 393 case REQ_OP_WRITE_ZEROES: 386 394 return 0; 387 395 case REQ_OP_WRITE_SAME:
-1
block/blk-mq-debugfs.c
··· 302 302 RQF_NAME(QUIET), 303 303 RQF_NAME(ELVPRIV), 304 304 RQF_NAME(IO_STAT), 305 - RQF_NAME(ALLOCED), 306 305 RQF_NAME(PM), 307 306 RQF_NAME(HASHED), 308 307 RQF_NAME(STATS),
+7
block/partitions/core.c
··· 323 323 int err; 324 324 325 325 /* 326 + * disk_max_parts() won't be zero, either GENHD_FL_EXT_DEVT is set 327 + * or 'minors' is passed to alloc_disk(). 328 + */ 329 + if (partno >= disk_max_parts(disk)) 330 + return ERR_PTR(-EINVAL); 331 + 332 + /* 326 333 * Partitions are not supported on zoned block devices that are used as 327 334 * such. 328 335 */
+1 -2
drivers/acpi/acpica/nsaccess.c
··· 99 99 * just create and link the new node(s) here. 100 100 */ 101 101 new_node = 102 - ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_namespace_node)); 102 + acpi_ns_create_node(*ACPI_CAST_PTR(u32, init_val->name)); 103 103 if (!new_node) { 104 104 status = AE_NO_MEMORY; 105 105 goto unlock_and_exit; 106 106 } 107 107 108 - ACPI_COPY_NAMESEG(new_node->name.ascii, init_val->name); 109 108 new_node->descriptor_type = ACPI_DESC_TYPE_NAMED; 110 109 new_node->type = init_val->type; 111 110
+5 -1
drivers/acpi/internal.h
··· 9 9 #ifndef _ACPI_INTERNAL_H_ 10 10 #define _ACPI_INTERNAL_H_ 11 11 12 + #include <linux/idr.h> 13 + 12 14 #define PREFIX "ACPI: " 13 15 14 16 int early_acpi_osi_init(void); ··· 98 96 99 97 extern struct list_head acpi_bus_id_list; 100 98 99 + #define ACPI_MAX_DEVICE_INSTANCES 4096 100 + 101 101 struct acpi_device_bus_id { 102 102 const char *bus_id; 103 - unsigned int instance_no; 103 + struct ida instance_ida; 104 104 struct list_head node; 105 105 }; 106 106
+5
drivers/acpi/processor_idle.c
··· 29 29 */ 30 30 #ifdef CONFIG_X86 31 31 #include <asm/apic.h> 32 + #include <asm/cpu.h> 32 33 #endif 33 34 34 35 #define _COMPONENT ACPI_PROCESSOR_COMPONENT ··· 542 541 wait_for_freeze(); 543 542 } else 544 543 return -ENODEV; 544 + 545 + #if defined(CONFIG_X86) && defined(CONFIG_HOTPLUG_CPU) 546 + cond_wakeup_cpu0(); 547 + #endif 545 548 } 546 549 547 550 /* Never reached */
+39 -6
drivers/acpi/scan.c
··· 479 479 list_for_each_entry(acpi_device_bus_id, &acpi_bus_id_list, node) 480 480 if (!strcmp(acpi_device_bus_id->bus_id, 481 481 acpi_device_hid(device))) { 482 - if (acpi_device_bus_id->instance_no > 0) 483 - acpi_device_bus_id->instance_no--; 484 - else { 482 + ida_simple_remove(&acpi_device_bus_id->instance_ida, device->pnp.instance_no); 483 + if (ida_is_empty(&acpi_device_bus_id->instance_ida)) { 485 484 list_del(&acpi_device_bus_id->node); 486 485 kfree_const(acpi_device_bus_id->bus_id); 487 486 kfree(acpi_device_bus_id); ··· 630 631 return NULL; 631 632 } 632 633 634 + static int acpi_device_set_name(struct acpi_device *device, 635 + struct acpi_device_bus_id *acpi_device_bus_id) 636 + { 637 + struct ida *instance_ida = &acpi_device_bus_id->instance_ida; 638 + int result; 639 + 640 + result = ida_simple_get(instance_ida, 0, ACPI_MAX_DEVICE_INSTANCES, GFP_KERNEL); 641 + if (result < 0) 642 + return result; 643 + 644 + device->pnp.instance_no = result; 645 + dev_set_name(&device->dev, "%s:%02x", acpi_device_bus_id->bus_id, result); 646 + return 0; 647 + } 648 + 633 649 int acpi_device_add(struct acpi_device *device, 634 650 void (*release)(struct device *)) 635 651 { ··· 679 665 680 666 acpi_device_bus_id = acpi_device_bus_id_match(acpi_device_hid(device)); 681 667 if (acpi_device_bus_id) { 682 - acpi_device_bus_id->instance_no++; 668 + result = acpi_device_set_name(device, acpi_device_bus_id); 669 + if (result) 670 + goto err_unlock; 683 671 } else { 684 672 acpi_device_bus_id = kzalloc(sizeof(*acpi_device_bus_id), 685 673 GFP_KERNEL); ··· 697 681 goto err_unlock; 698 682 } 699 683 684 + ida_init(&acpi_device_bus_id->instance_ida); 685 + 686 + result = acpi_device_set_name(device, acpi_device_bus_id); 687 + if (result) { 688 + kfree(acpi_device_bus_id); 689 + goto err_unlock; 690 + } 691 + 700 692 list_add_tail(&acpi_device_bus_id->node, &acpi_bus_id_list); 701 693 } 702 - dev_set_name(&device->dev, "%s:%02x", acpi_device_bus_id->bus_id, acpi_device_bus_id->instance_no); 703 694 704 695 if (device->parent) 705 696 list_add_tail(&device->node, &device->parent->children); ··· 1670 1647 device_initialize(&device->dev); 1671 1648 dev_set_uevent_suppress(&device->dev, true); 1672 1649 acpi_init_coherency(device); 1650 + /* Assume there are unmet deps to start with. */ 1651 + device->dep_unmet = 1; 1673 1652 } 1674 1653 1675 1654 void acpi_device_add_finalize(struct acpi_device *device) ··· 1935 1910 { 1936 1911 struct acpi_dep_data *dep; 1937 1912 1913 + adev->dep_unmet = 0; 1914 + 1938 1915 mutex_lock(&acpi_dep_list_lock); 1939 1916 1940 1917 list_for_each_entry(dep, &acpi_dep_list, node) { ··· 1984 1957 return AE_CTRL_DEPTH; 1985 1958 1986 1959 acpi_scan_init_hotplug(device); 1987 - if (!check_dep) 1960 + /* 1961 + * If check_dep is true at this point, the device has no dependencies, 1962 + * or the creation of the device object would have been postponed above. 1963 + */ 1964 + if (check_dep) 1965 + device->dep_unmet = 0; 1966 + else 1988 1967 acpi_scan_dep_init(device); 1989 1968 1990 1969 out:
+39 -3
drivers/acpi/tables.c
··· 780 780 } 781 781 782 782 /* 783 - * acpi_table_init() 783 + * acpi_locate_initial_tables() 784 784 * 785 785 * find RSDP, find and checksum SDT/XSDT. 786 786 * checksum all tables, print SDT/XSDT ··· 788 788 * result: sdt_entry[] is initialized 789 789 */ 790 790 791 - int __init acpi_table_init(void) 791 + int __init acpi_locate_initial_tables(void) 792 792 { 793 793 acpi_status status; 794 794 ··· 803 803 status = acpi_initialize_tables(initial_tables, ACPI_MAX_TABLES, 0); 804 804 if (ACPI_FAILURE(status)) 805 805 return -EINVAL; 806 - acpi_table_initrd_scan(); 807 806 807 + return 0; 808 + } 809 + 810 + void __init acpi_reserve_initial_tables(void) 811 + { 812 + int i; 813 + 814 + for (i = 0; i < ACPI_MAX_TABLES; i++) { 815 + struct acpi_table_desc *table_desc = &initial_tables[i]; 816 + u64 start = table_desc->address; 817 + u64 size = table_desc->length; 818 + 819 + if (!start || !size) 820 + break; 821 + 822 + pr_info("Reserving %4s table memory at [mem 0x%llx-0x%llx]\n", 823 + table_desc->signature.ascii, start, start + size - 1); 824 + 825 + memblock_reserve(start, size); 826 + } 827 + } 828 + 829 + void __init acpi_table_init_complete(void) 830 + { 831 + acpi_table_initrd_scan(); 808 832 check_multiple_madt(); 833 + } 834 + 835 + int __init acpi_table_init(void) 836 + { 837 + int ret; 838 + 839 + ret = acpi_locate_initial_tables(); 840 + if (ret) 841 + return ret; 842 + 843 + acpi_table_init_complete(); 844 + 809 845 return 0; 810 846 } 811 847
+1
drivers/acpi/video_detect.c
··· 147 147 }, 148 148 }, 149 149 { 150 + .callback = video_detect_force_vendor, 150 151 .ident = "Sony VPCEH3U1E", 151 152 .matches = { 152 153 DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+8 -10
drivers/auxdisplay/charlcd.c
··· 470 470 char c; 471 471 472 472 for (; count-- > 0; (*ppos)++, tmp++) { 473 - if (!in_interrupt() && (((count + 1) & 0x1f) == 0)) 473 + if (((count + 1) & 0x1f) == 0) { 474 474 /* 475 - * let's be a little nice with other processes 476 - * that need some CPU 475 + * charlcd_write() is invoked as a VFS->write() callback 476 + * and as such it is always invoked from preemptible 477 + * context and may sleep. 477 478 */ 478 - schedule(); 479 + cond_resched(); 480 + } 479 481 480 482 if (get_user(c, tmp)) 481 483 return -EFAULT; ··· 539 537 int count = strlen(s); 540 538 541 539 for (; count-- > 0; tmp++) { 542 - if (!in_interrupt() && (((count + 1) & 0x1f) == 0)) 543 - /* 544 - * let's be a little nice with other processes 545 - * that need some CPU 546 - */ 547 - schedule(); 540 + if (((count + 1) & 0x1f) == 0) 541 + cond_resched(); 548 542 549 543 charlcd_write_char(lcd, *tmp); 550 544 }
+3
drivers/base/dd.c
··· 97 97 98 98 get_device(dev); 99 99 100 + kfree(dev->p->deferred_probe_reason); 101 + dev->p->deferred_probe_reason = NULL; 102 + 100 103 /* 101 104 * Drop the mutex while probing each device; the probe path may 102 105 * manipulate the deferred list
+47 -8
drivers/base/power/runtime.c
··· 305 305 return 0; 306 306 } 307 307 308 - static void rpm_put_suppliers(struct device *dev) 308 + static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend) 309 309 { 310 310 struct device_link *link; 311 311 ··· 313 313 device_links_read_lock_held()) { 314 314 315 315 while (refcount_dec_not_one(&link->rpm_active)) 316 - pm_runtime_put(link->supplier); 316 + pm_runtime_put_noidle(link->supplier); 317 + 318 + if (try_to_suspend) 319 + pm_request_idle(link->supplier); 317 320 } 321 + } 322 + 323 + static void rpm_put_suppliers(struct device *dev) 324 + { 325 + __rpm_put_suppliers(dev, true); 326 + } 327 + 328 + static void rpm_suspend_suppliers(struct device *dev) 329 + { 330 + struct device_link *link; 331 + int idx = device_links_read_lock(); 332 + 333 + list_for_each_entry_rcu(link, &dev->links.suppliers, c_node, 334 + device_links_read_lock_held()) 335 + pm_request_idle(link->supplier); 336 + 337 + device_links_read_unlock(idx); 318 338 } 319 339 320 340 /** ··· 364 344 idx = device_links_read_lock(); 365 345 366 346 retval = rpm_get_suppliers(dev); 367 - if (retval) 347 + if (retval) { 348 + rpm_put_suppliers(dev); 368 349 goto fail; 350 + } 369 351 370 352 device_links_read_unlock(idx); 371 353 } ··· 390 368 || (dev->power.runtime_status == RPM_RESUMING && retval))) { 391 369 idx = device_links_read_lock(); 392 370 393 - fail: 394 - rpm_put_suppliers(dev); 371 + __rpm_put_suppliers(dev, false); 395 372 373 + fail: 396 374 device_links_read_unlock(idx); 397 375 } 398 376 ··· 664 642 goto out; 665 643 } 666 644 645 + if (dev->power.irq_safe) 646 + goto out; 647 + 667 648 /* Maybe the parent is now able to suspend. */ 668 - if (parent && !parent->power.ignore_children && !dev->power.irq_safe) { 669 650 spin_unlock(&dev->power.lock); 670 651 671 652 spin_lock(&parent->power.lock); ··· 676 651 spin_unlock(&parent->power.lock); 677 652 678 653 spin_lock(&dev->power.lock); 654 + } 655 + /* Maybe the suppliers are now able to suspend. */ 656 + if (dev->power.links_count > 0) { 657 + spin_unlock_irq(&dev->power.lock); 658 + 659 + rpm_suspend_suppliers(dev); 660 + 661 + spin_lock_irq(&dev->power.lock); 679 662 } 680 663 681 664 out: ··· 1690 1657 device_links_read_lock_held()) 1691 1658 if (link->flags & DL_FLAG_PM_RUNTIME) { 1692 1659 link->supplier_preactivated = true; 1693 - refcount_inc(&link->rpm_active); 1694 1660 pm_runtime_get_sync(link->supplier); 1661 + refcount_inc(&link->rpm_active); 1695 1662 } 1696 1663 1697 1664 device_links_read_unlock(idx); ··· 1704 1671 void pm_runtime_put_suppliers(struct device *dev) 1705 1672 { 1706 1673 struct device_link *link; 1674 + unsigned long flags; 1675 + bool put; 1707 1676 int idx; 1708 1677 1709 1678 idx = device_links_read_lock(); ··· 1714 1679 device_links_read_lock_held()) 1715 1680 if (link->supplier_preactivated) { 1716 1681 link->supplier_preactivated = false; 1717 - if (refcount_dec_not_one(&link->rpm_active)) 1682 + spin_lock_irqsave(&dev->power.lock, flags); 1683 + put = pm_runtime_status_suspended(dev) && 1684 + refcount_dec_not_one(&link->rpm_active); 1685 + spin_unlock_irqrestore(&dev->power.lock, flags); 1686 + if (put) 1718 1687 pm_runtime_put(link->supplier); 1719 1688 1720 1689
+21 -5
drivers/block/null_blk/main.c
··· 1369 1369 } 1370 1370 1371 1371 if (dev->zoned) 1372 - cmd->error = null_process_zoned_cmd(cmd, op, 1373 - sector, nr_sectors); 1372 + sts = null_process_zoned_cmd(cmd, op, sector, nr_sectors); 1374 1373 else 1375 - cmd->error = null_process_cmd(cmd, op, sector, nr_sectors); 1374 + sts = null_process_cmd(cmd, op, sector, nr_sectors); 1375 + 1376 + /* Do not overwrite errors (e.g. timeout errors) */ 1377 + if (cmd->error == BLK_STS_OK) 1378 + cmd->error = sts; 1376 1379 1377 1380 out: 1378 1381 nullb_complete_cmd(cmd); ··· 1454 1451 1455 1452 static enum blk_eh_timer_return null_timeout_rq(struct request *rq, bool res) 1456 1453 { 1454 + struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq); 1455 + 1457 1456 pr_info("rq %p timed out\n", rq); 1458 - blk_mq_complete_request(rq); 1457 + 1458 + /* 1459 + * If the device is marked as blocking (i.e. memory backed or zoned 1460 + * device), the submission path may be blocked waiting for resources 1461 + * and cause real timeouts. For these real timeouts, the submission 1462 + * path will complete the request using blk_mq_complete_request(). 1463 + * Only fake timeouts need to execute blk_mq_complete_request() here. 1464 + */ 1465 + cmd->error = BLK_STS_TIMEOUT; 1466 + if (cmd->fake_timeout) 1467 + blk_mq_complete_request(rq); 1459 1468 return BLK_EH_DONE; 1460 1469 } 1461 1470 ··· 1488 1473 cmd->rq = bd->rq; 1489 1474 cmd->error = BLK_STS_OK; 1490 1475 cmd->nq = nq; 1476 + cmd->fake_timeout = should_timeout_request(bd->rq); 1491 1477 1492 1478 blk_mq_start_request(bd->rq); 1493 1479 ··· 1505 1489 return BLK_STS_OK; 1506 1490 } 1507 1491 } 1508 - if (should_timeout_request(bd->rq)) 1492 + if (cmd->fake_timeout) 1509 1493 return BLK_STS_OK; 1510 1494 1511 1495 return null_handle_cmd(cmd, sector, nr_sectors, req_op(bd->rq));
+1
drivers/block/null_blk/null_blk.h
··· 22 22 blk_status_t error; 23 23 struct nullb_queue *nq; 24 24 struct hrtimer timer; 25 + bool fake_timeout; 25 26 }; 26 27 27 28 struct nullb_queue {
+1 -1
drivers/block/xen-blkback/blkback.c
··· 891 891 out: 892 892 for (i = last_map; i < num; i++) { 893 893 /* Don't zap current batch's valid persistent grants. */ 894 - if(i >= last_map + segs_to_map) 894 + if(i >= map_until) 895 895 pages[i]->persistent_gnt = NULL; 896 896 pages[i]->handle = BLKBACK_INVALID_HANDLE; 897 897 }
+2 -5
drivers/bluetooth/btusb.c
··· 4767 4767 data->diag = NULL; 4768 4768 } 4769 4769 4770 - if (!enable_autosuspend) 4771 - usb_disable_autosuspend(data->udev); 4770 + if (enable_autosuspend) 4771 + usb_enable_autosuspend(data->udev); 4772 4772 4773 4773 err = hci_register_dev(hdev); 4774 4774 if (err < 0) ··· 4828 4828 gpiod_put(data->reset_gpio); 4829 4829 4830 4830 hci_free_dev(hdev); 4831 - 4832 - if (!enable_autosuspend) 4833 - usb_enable_autosuspend(data->udev); 4834 4831 } 4835 4832 4836 4833 #ifdef CONFIG_PM
+1 -1
drivers/bus/mvebu-mbus.c
··· 618 618 * This part of the memory is above 4 GB, so we don't 619 619 * care for the MBus bridge hole. 620 620 */ 621 - if (reg_start >= 0x100000000ULL) 621 + if ((u64)reg_start >= 0x100000000ULL) 622 622 continue; 623 623 624 624 /*
+2 -2
drivers/bus/omap_l3_noc.c
··· 285 285 */ 286 286 l3->debug_irq = platform_get_irq(pdev, 0); 287 287 ret = devm_request_irq(l3->dev, l3->debug_irq, l3_interrupt_handler, 288 - 0x0, "l3-dbg-irq", l3); 288 + IRQF_NO_THREAD, "l3-dbg-irq", l3); 289 289 if (ret) { 290 290 dev_err(l3->dev, "request_irq failed for %d\n", 291 291 l3->debug_irq); ··· 294 294 295 295 l3->app_irq = platform_get_irq(pdev, 1); 296 296 ret = devm_request_irq(l3->dev, l3->app_irq, l3_interrupt_handler, 297 - 0x0, "l3-app-irq", l3); 297 + IRQF_NO_THREAD, "l3-app-irq", l3); 298 298 if (ret) 299 299 dev_err(l3->dev, "request_irq failed for %d\n", l3->app_irq); 300 300
+3 -1
drivers/bus/ti-sysc.c
··· 3053 3053 3054 3054 pm_runtime_put_sync(&pdev->dev); 3055 3055 pm_runtime_disable(&pdev->dev); 3056 - reset_control_assert(ddata->rsts); 3056 + 3057 + if (!reset_control_status(ddata->rsts)) 3058 + reset_control_assert(ddata->rsts); 3057 3059 3058 3060 unprepare: 3059 3061 sysc_unprepare(ddata);
+1 -1
drivers/char/agp/Kconfig
··· 125 125 126 126 config AGP_PARISC 127 127 tristate "HP Quicksilver AGP support" 128 - depends on AGP && PARISC && 64BIT 128 + depends on AGP && PARISC && 64BIT && IOMMU_SBA 129 129 help 130 130 This option gives you AGP GART support for the HP Quicksilver 131 131 AGP bus adapter on HP PA-RISC machines (Ok, just on the C8000
+2 -2
drivers/cpufreq/freq_table.c
··· 267 267 __ATTR_RO(_name##_frequencies) 268 268 269 269 /* 270 - * show_scaling_available_frequencies - show available normal frequencies for 270 + * scaling_available_frequencies_show - show available normal frequencies for 271 271 * the specified CPU 272 272 */ 273 273 static ssize_t scaling_available_frequencies_show(struct cpufreq_policy *policy, ··· 279 279 EXPORT_SYMBOL_GPL(cpufreq_freq_attr_scaling_available_freqs); 280 280 281 281 /* 282 - * show_available_boost_freqs - show available boost frequencies for 282 + * scaling_boost_frequencies_show - show available boost frequencies for 283 283 * the specified CPU 284 284 */ 285 285 static ssize_t scaling_boost_frequencies_show(struct cpufreq_policy *policy,
+1
drivers/extcon/extcon.c
··· 1241 1241 sizeof(*edev->nh), GFP_KERNEL); 1242 1242 if (!edev->nh) { 1243 1243 ret = -ENOMEM; 1244 + device_unregister(&edev->dev); 1244 1245 goto err_dev; 1245 1246 } 1246 1247
+7 -2
drivers/firewire/nosy.c
··· 346 346 struct client *client = file->private_data; 347 347 spinlock_t *client_list_lock = &client->lynx->client_list_lock; 348 348 struct nosy_stats stats; 349 + int ret; 349 350 350 351 switch (cmd) { 351 352 case NOSY_IOC_GET_STATS: ··· 361 360 return 0; 362 361 363 362 case NOSY_IOC_START: 363 + ret = -EBUSY; 364 364 spin_lock_irq(client_list_lock); 365 - list_add_tail(&client->link, &client->lynx->client_list); 365 + if (list_empty(&client->link)) { 366 + list_add_tail(&client->link, &client->lynx->client_list); 367 + ret = 0; 368 + } 366 369 spin_unlock_irq(client_list_lock); 367 370 368 - return 0; 371 + return ret; 369 372 370 373 case NOSY_IOC_STOP: 371 374 spin_lock_irq(client_list_lock);
+3 -7
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 1007 1007 1008 1008 /* s3/s4 mask */ 1009 1009 bool in_suspend; 1010 - bool in_hibernate; 1011 - 1012 - /* 1013 - * The combination flag in_poweroff_reboot_com used to identify the poweroff 1014 - * and reboot opt in the s0i3 system-wide suspend. 1015 - */ 1016 - bool in_poweroff_reboot_com; 1010 + bool in_s3; 1011 + bool in_s4; 1012 + bool in_s0ix; 1017 1013 1018 1014 atomic_t in_gpu_reset; 1019 1015 enum pp_mp1_state mp1_state;
+34 -98
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 2371 2371 i = state == AMD_CG_STATE_GATE ? j : adev->num_ip_blocks - j - 1; 2372 2372 if (!adev->ip_blocks[i].status.late_initialized) 2373 2373 continue; 2374 + /* skip CG for GFX on S0ix */ 2375 + if (adev->in_s0ix && 2376 + adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX) 2377 + continue; 2374 2378 /* skip CG for VCE/UVD, it's handled specially */ 2375 2379 if (adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_UVD && 2376 2380 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCE && ··· 2405 2401 for (j = 0; j < adev->num_ip_blocks; j++) { 2406 2402 i = state == AMD_PG_STATE_GATE ? j : adev->num_ip_blocks - j - 1; 2407 2403 if (!adev->ip_blocks[i].status.late_initialized) 2404 continue; 2405 + /* skip PG for GFX on S0ix */ 2406 + if (adev->in_s0ix && 2407 + adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX) 2408 2408 continue; 2409 2409 /* skip CG for VCE/UVD, it's handled specially */ 2410 2410 if (adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_UVD && ··· 2686 2678 { 2687 2679 int i, r; 2688 2680 2689 - if (adev->in_poweroff_reboot_com || 2690 - !amdgpu_acpi_is_s0ix_supported(adev) || amdgpu_in_reset(adev)) { 2691 - amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE); 2692 - amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE); 2693 - } 2681 + amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE); 2682 + amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE); 2694 2683 2695 2684 for (i = adev->num_ip_blocks - 1; i >= 0; i--) { 2696 2685 if (!adev->ip_blocks[i].status.valid) ··· 2727 2722 { 2728 2723 int i, r; 2729 2724 2725 + if (adev->in_s0ix) 2726 + amdgpu_gfx_state_change_set(adev, sGpuChangeState_D3Entry); 2727 + 2730 2728 for (i = adev->num_ip_blocks - 1; i >= 0; i--) { 2731 2729 if (!adev->ip_blocks[i].status.valid) 2732 2730 continue; ··· 2742 2734 adev->ip_blocks[i].status.hw = false; 2743 2735 continue; 2744 2736 } 2737 + 2738 + /* skip suspend of gfx and psp for S0ix 2739 + * gfx is in gfxoff state, so on resume it will exit gfxoff just 2740 + * like at runtime. PSP is also part of the always on hardware 2741 + * so no need to suspend it. 2742 + */ 2743 + if (adev->in_s0ix && 2744 + (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_PSP || 2745 + adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX)) 2746 + continue; 2747 + 2745 2748 /* XXX handle errors */ 2746 2749 r = adev->ip_blocks[i].version->funcs->suspend(adev); 2747 2750 /* XXX handle errors */ ··· 3692 3673 */ 3693 3674 int amdgpu_device_suspend(struct drm_device *dev, bool fbcon) 3694 3675 { 3695 - struct amdgpu_device *adev; 3696 - struct drm_crtc *crtc; 3697 - struct drm_connector *connector; 3698 - struct drm_connector_list_iter iter; 3676 + struct amdgpu_device *adev = drm_to_adev(dev); 3699 3677 int r; 3700 - 3701 - adev = drm_to_adev(dev); 3702 3678 3703 3679 if (dev->switch_power_state == DRM_SWITCH_POWER_OFF) 3704 3680 return 0; ··· 3706 3692 3707 3693 cancel_delayed_work_sync(&adev->delayed_init_work); 3708 3694 3709 - if (!amdgpu_device_has_dc_support(adev)) { 3710 - /* turn off display hw */ 3711 - drm_modeset_lock_all(dev); 3712 - drm_connector_list_iter_begin(dev, &iter); 3713 - drm_for_each_connector_iter(connector, &iter) 3714 - drm_helper_connector_dpms(connector, 3715 - DRM_MODE_DPMS_OFF); 3716 - drm_connector_list_iter_end(&iter); 3717 - drm_modeset_unlock_all(dev); 3718 - /* unpin the front buffers and cursors */ 3719 - list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 3720 - struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 3721 - struct drm_framebuffer *fb = crtc->primary->fb; 3722 - struct amdgpu_bo *robj; 3723 - 3724 - if (amdgpu_crtc->cursor_bo && !adev->enable_virtual_display) { 3725 - struct amdgpu_bo *aobj = gem_to_amdgpu_bo(amdgpu_crtc->cursor_bo); 3726 - r = amdgpu_bo_reserve(aobj, true); 3727 - if (r == 0) { 3728 - amdgpu_bo_unpin(aobj); 3729 - amdgpu_bo_unreserve(aobj); 3730 - } 3731 - } 3732 - 3733 - if (fb == NULL || fb->obj[0] == NULL) { 3734 - continue; 3735 - } 3736 - robj = gem_to_amdgpu_bo(fb->obj[0]); 3737 - /* don't unpin kernel fb objects */ 3738 - if (!amdgpu_fbdev_robj_is_fb(adev, robj)) { 3739 - r = amdgpu_bo_reserve(robj, true); 3740 - if (r == 0) { 3741 - amdgpu_bo_unpin(robj); 3742 - amdgpu_bo_unreserve(robj); 3743 - } 3744 - } 3745 - } 3746 - } 3747 - 3748 3695 amdgpu_ras_suspend(adev); 3749 3696 3750 3697 r = amdgpu_device_ip_suspend_phase1(adev); 3751 3698 3752 - amdgpu_amdkfd_suspend(adev, adev->in_runpm); 3699 + if (!adev->in_s0ix) 3700 + amdgpu_amdkfd_suspend(adev, adev->in_runpm); 3753 3701 3754 3702 /* evict vram memory */ 3755 3703 amdgpu_bo_evict_vram(adev); 3756 3704 3757 3705 amdgpu_fence_driver_suspend(adev); 3758 3706 3759 - if (adev->in_poweroff_reboot_com || 3760 - !amdgpu_acpi_is_s0ix_supported(adev) || amdgpu_in_reset(adev)) 3761 - r = amdgpu_device_ip_suspend_phase2(adev); 3762 - else 3763 - amdgpu_gfx_state_change_set(adev, sGpuChangeState_D3Entry); 3707 + r = amdgpu_device_ip_suspend_phase2(adev); 3764 3708 /* evict remaining vram memory 3765 3709 * This second call to evict vram is to evict the gart page table 3766 3710 * using the CPU. ··· 3740 3768 */ 3741 3769 int amdgpu_device_resume(struct drm_device *dev, bool fbcon) 3742 3770 { 3743 - struct drm_connector *connector; 3744 - struct drm_connector_list_iter iter; 3745 3771 struct amdgpu_device *adev = drm_to_adev(dev); 3746 - struct drm_crtc *crtc; 3747 3772 int r = 0; 3748 3773 3749 3774 if (dev->switch_power_state == DRM_SWITCH_POWER_OFF) 3750 3775 return 0; 3751 3776 3752 - if (amdgpu_acpi_is_s0ix_supported(adev)) 3777 + if (adev->in_s0ix) 3753 3778 amdgpu_gfx_state_change_set(adev, sGpuChangeState_D0Entry); 3754 3779 3755 3780 /* post card */ ··· 3771 3802 queue_delayed_work(system_wq, &adev->delayed_init_work, 3772 3803 msecs_to_jiffies(AMDGPU_RESUME_MS)); 3773 3804 3774 - if (!amdgpu_device_has_dc_support(adev)) { 3775 - /* pin cursors */ 3776 - list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 3777 - struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 3778 - 3779 - if (amdgpu_crtc->cursor_bo && !adev->enable_virtual_display) { 3780 - struct amdgpu_bo *aobj = gem_to_amdgpu_bo(amdgpu_crtc->cursor_bo); 3781 - r = amdgpu_bo_reserve(aobj, true); 3782 - if (r == 0) { 3783 - r = amdgpu_bo_pin(aobj, AMDGPU_GEM_DOMAIN_VRAM); 3784 - if (r != 0) 3785 - dev_err(adev->dev, "Failed to pin cursor BO (%d)\n", r); 3786 - amdgpu_crtc->cursor_addr = amdgpu_bo_gpu_offset(aobj); 3787 - amdgpu_bo_unreserve(aobj); 3788 - } 3789 - } 3790 - } 3805 + if (!adev->in_s0ix) { 3806 + r = amdgpu_amdkfd_resume(adev, adev->in_runpm); 3807 + if (r) 3808 + return r; 3791 3809 } 3792 - r = amdgpu_amdkfd_resume(adev, adev->in_runpm); 3793 - if (r) 3794 - return r; 3795 3810 3796 3811 /* Make sure IB tests flushed */ 3797 3812 flush_delayed_work(&adev->delayed_init_work); 3798 3813 3799 - /* blat the mode back in */ 3800 - if (fbcon) { 3801 - if (!amdgpu_device_has_dc_support(adev)) { 3802 - /* pre DCE11 */ 3803 - drm_helper_resume_force_mode(dev); 3804 - 3805 - /* turn on display hw */ 3806 - drm_modeset_lock_all(dev); 3807 - 3808 - drm_connector_list_iter_begin(dev, &iter); 3809 - drm_for_each_connector_iter(connector, &iter) 3810 - drm_helper_connector_dpms(connector, 3811 - DRM_MODE_DPMS_ON); 3812 - drm_connector_list_iter_end(&iter); 3813 - 3814 - drm_modeset_unlock_all(dev); 3815 - } 3814 + if (fbcon) 3816 3815 amdgpu_fbdev_set_suspend(adev, 0); 3817 - } 3818 3816 3819 3817 drm_kms_helper_poll_enable(dev); 3820 3818
+89
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 1310 1310 return amdgpu_display_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos, 1311 1311 stime, etime, mode); 1312 1312 } 1313 + 1314 + int amdgpu_display_suspend_helper(struct amdgpu_device *adev) 1315 + { 1316 + struct drm_device *dev = adev_to_drm(adev); 1317 + struct drm_crtc *crtc; 1318 + struct drm_connector *connector; 1319 + struct drm_connector_list_iter iter; 1320 + int r; 1321 + 1322 + /* turn off display hw */ 1323 + drm_modeset_lock_all(dev); 1324 + drm_connector_list_iter_begin(dev, &iter); 1325 + drm_for_each_connector_iter(connector, &iter) 1326 + drm_helper_connector_dpms(connector, 1327 + DRM_MODE_DPMS_OFF); 1328 + drm_connector_list_iter_end(&iter); 1329 + drm_modeset_unlock_all(dev); 1330 + /* unpin the front buffers and cursors */ 1331 + list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 1332 + struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 1333 + struct drm_framebuffer *fb = crtc->primary->fb; 1334 + struct amdgpu_bo *robj; 1335 + 1336 + if (amdgpu_crtc->cursor_bo && !adev->enable_virtual_display) { 1337 + struct amdgpu_bo *aobj = gem_to_amdgpu_bo(amdgpu_crtc->cursor_bo); 1338 + r = amdgpu_bo_reserve(aobj, true); 1339 + if (r == 0) { 1340 + amdgpu_bo_unpin(aobj); 1341 + amdgpu_bo_unreserve(aobj); 1342 + } 1343 + } 1344 + 1345 + if (fb == NULL || fb->obj[0] == NULL) { 1346 + continue; 1347 + } 1348 + robj = gem_to_amdgpu_bo(fb->obj[0]); 1349 + /* don't unpin kernel fb objects */ 1350 + if (!amdgpu_fbdev_robj_is_fb(adev, robj)) { 1351 + r = amdgpu_bo_reserve(robj, true); 1352 + if (r == 0) { 1353 + amdgpu_bo_unpin(robj); 1354 + amdgpu_bo_unreserve(robj); 1355 + } 1356 + } 1357 + } 1358 + return r; 1359 + } 1360 + 1361 + int amdgpu_display_resume_helper(struct amdgpu_device *adev) 1362 + { 1363 + struct drm_device *dev = adev_to_drm(adev); 1364 + struct drm_connector *connector; 1365 + struct drm_connector_list_iter iter; 1366 + struct drm_crtc *crtc; 1367 + int r; 1368 + 1369 + /* pin cursors */ 1370 + list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 1371 + struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 1372 + 1373 + if (amdgpu_crtc->cursor_bo && !adev->enable_virtual_display) { 1374 + struct amdgpu_bo *aobj = gem_to_amdgpu_bo(amdgpu_crtc->cursor_bo); 1375 + r = amdgpu_bo_reserve(aobj, true); 1376 + if (r == 0) { 1377 + r = amdgpu_bo_pin(aobj, AMDGPU_GEM_DOMAIN_VRAM); 1378 + if (r != 0) 1379 + dev_err(adev->dev, "Failed to pin cursor BO (%d)\n", r); 1380 + amdgpu_crtc->cursor_addr = amdgpu_bo_gpu_offset(aobj); 1381 + amdgpu_bo_unreserve(aobj); 1382 + } 1383 + } 1384 + } 1385 + 1386 + drm_helper_resume_force_mode(dev); 1387 + 1388 + /* turn on display hw */ 1389 + drm_modeset_lock_all(dev); 1390 + 1391 + drm_connector_list_iter_begin(dev, &iter); 1392 + drm_for_each_connector_iter(connector, &iter) 1393 + drm_helper_connector_dpms(connector, 1394 + DRM_MODE_DPMS_ON); 1395 + drm_connector_list_iter_end(&iter); 1396 + 1397 + drm_modeset_unlock_all(dev); 1398 + 1399 + return 0; 1400 + } 1401 +
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_display.h
··· 47 47 const struct drm_format_info * 48 48 amdgpu_lookup_format_info(u32 format, uint64_t modifier); 49 49 50 + int amdgpu_display_suspend_helper(struct amdgpu_device *adev); 51 + int amdgpu_display_resume_helper(struct amdgpu_device *adev); 52 + 50 53 #endif
+19 -12
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 1107 1107 {0x1002, 0x73A3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1108 1108 {0x1002, 0x73AB, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1109 1109 {0x1002, 0x73AE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1110 + {0x1002, 0x73AF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1110 1111 {0x1002, 0x73BF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1111 1112 1112 1113 /* Van Gogh */ ··· 1275 1274 */ 1276 1275 if (!amdgpu_passthrough(adev)) 1277 1276 adev->mp1_state = PP_MP1_STATE_UNLOAD; 1278 - adev->in_poweroff_reboot_com = true; 1279 1277 amdgpu_device_ip_suspend(adev); 1280 - adev->in_poweroff_reboot_com = false; 1281 1278 adev->mp1_state = PP_MP1_STATE_NONE; 1282 1279 } 1283 1280 1284 1281 static int amdgpu_pmops_suspend(struct device *dev) 1285 1282 { 1286 1283 struct drm_device *drm_dev = dev_get_drvdata(dev); 1284 + struct amdgpu_device *adev = drm_to_adev(drm_dev); 1285 + int r; 1287 1286 1288 - return amdgpu_device_suspend(drm_dev, true); 1287 + if (amdgpu_acpi_is_s0ix_supported(adev)) 1288 + adev->in_s0ix = true; 1289 + adev->in_s3 = true; 1290 + r = amdgpu_device_suspend(drm_dev, true); 1291 + adev->in_s3 = false; 1292 + 1293 + return r; 1289 1294 } 1290 1295 1291 1296 static int amdgpu_pmops_resume(struct device *dev) 1292 1297 { 1293 1298 struct drm_device *drm_dev = dev_get_drvdata(dev); 1299 + struct amdgpu_device *adev = drm_to_adev(drm_dev); 1300 + int r; 1294 1301 1295 - return amdgpu_device_resume(drm_dev, true); 1302 + r = amdgpu_device_resume(drm_dev, true); 1303 + if (amdgpu_acpi_is_s0ix_supported(adev)) 1304 + adev->in_s0ix = false; 1305 + return r; 1296 1306 } 1297 1307 1298 1308 static int amdgpu_pmops_freeze(struct device *dev) ··· 1312 1300 struct amdgpu_device *adev = drm_to_adev(drm_dev); 1313 1301 int r; 1314 1302 1315 - adev->in_hibernate = true; 1303 + adev->in_s4 = true; 1316 1304 r = amdgpu_device_suspend(drm_dev, true); 1317 - adev->in_hibernate = false; 1305 + adev->in_s4 = false; 1318 1306 if (r) 1319 1307 return r; 1320 1308 return amdgpu_asic_reset(adev); ··· 1330 1318 static int amdgpu_pmops_poweroff(struct device *dev) 1331 1319 { 1332 1320 struct drm_device *drm_dev = dev_get_drvdata(dev); 1333 - struct amdgpu_device *adev = drm_to_adev(drm_dev); 1334 - int r; 1335 1321 1336 - adev->in_poweroff_reboot_com = true; 1337 - r = amdgpu_device_suspend(drm_dev, true); 1338 - adev->in_poweroff_reboot_com = false; 1339 - return r; 1322 - return amdgpu_device_suspend(drm_dev, true); 1340 1323 } 1341 1324 1342 1325 static int amdgpu_pmops_restore(struct device *dev)
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 778 778 dev_info->high_va_offset = AMDGPU_GMC_HOLE_END; 779 779 dev_info->high_va_max = AMDGPU_GMC_HOLE_END | vm_size; 780 780 } 781 - dev_info->virtual_address_alignment = max((int)PAGE_SIZE, AMDGPU_GPU_PAGE_SIZE); 781 + dev_info->virtual_address_alignment = max_t(u32, PAGE_SIZE, AMDGPU_GPU_PAGE_SIZE); 782 782 dev_info->pte_fragment_size = (1 << adev->vm_manager.fragment_size) * AMDGPU_GPU_PAGE_SIZE; 783 - dev_info->gart_page_size = AMDGPU_GPU_PAGE_SIZE; 783 + dev_info->gart_page_size = max_t(u32, PAGE_SIZE, AMDGPU_GPU_PAGE_SIZE); 784 784 dev_info->cu_active_number = adev->gfx.cu_info.number; 785 785 dev_info->cu_ao_mask = adev->gfx.cu_info.ao_cu_mask; 786 786 dev_info->ce_ram_size = adev->gfx.ce_ram_size;
+2 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 1028 1028 { 1029 1029 struct ttm_resource_manager *man; 1030 1030 1031 - /* late 2.6.33 fix IGP hibernate - we need pm ops to do this correct */ 1032 - #ifndef CONFIG_HIBERNATION 1033 - if (adev->flags & AMD_IS_APU) { 1034 - /* Useless to evict on IGP chips */ 1031 + if (adev->in_s3 && (adev->flags & AMD_IS_APU)) { 1032 + /* No need to evict vram on APUs for suspend to ram */ 1035 1033 return 0; 1036 1034 } 1037 - #endif 1038 1035 1039 1036 man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM); 1040 1037 return ttm_resource_manager_evict_all(&adev->mman.bdev, man);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 906 906 907 907 /* Allocate an SG array and squash pages into it */ 908 908 r = sg_alloc_table_from_pages(ttm->sg, ttm->pages, ttm->num_pages, 0, 909 - ttm->num_pages << PAGE_SHIFT, 909 + (u64)ttm->num_pages << PAGE_SHIFT, 910 910 GFP_KERNEL); 911 911 if (r) 912 912 goto release_sg;
+5 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 2197 2197 uint64_t eaddr; 2198 2198 2199 2199 /* validate the parameters */ 2200 - if (saddr & AMDGPU_GPU_PAGE_MASK || offset & AMDGPU_GPU_PAGE_MASK || 2201 - size == 0 || size & AMDGPU_GPU_PAGE_MASK) 2200 + if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || 2201 + size == 0 || size & ~PAGE_MASK) 2202 2202 return -EINVAL; 2203 2203 2204 2204 /* make sure object fit at this offset */ ··· 2263 2263 int r; 2264 2264 2265 2265 /* validate the parameters */ 2266 - if (saddr & AMDGPU_GPU_PAGE_MASK || offset & AMDGPU_GPU_PAGE_MASK || 2267 - size == 0 || size & AMDGPU_GPU_PAGE_MASK) 2266 + if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || 2267 + size == 0 || size & ~PAGE_MASK) 2268 2268 return -EINVAL; 2269 2269 2270 2270 /* make sure object fit at this offset */ ··· 2409 2409 after->start = eaddr + 1; 2410 2410 after->last = tmp->last; 2411 2411 after->offset = tmp->offset; 2412 - after->offset += after->start - tmp->start; 2412 + after->offset += (after->start - tmp->start) << PAGE_SHIFT; 2413 2413 after->flags = tmp->flags; 2414 2414 after->bo_va = tmp->bo_va; 2415 2415 list_add(&after->list, &tmp->bo_va->invalids);
+8 -1
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
··· 2897 2897 static int dce_v10_0_suspend(void *handle) 2898 2898 { 2899 2899 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 2900 + int r; 2901 + 2902 + r = amdgpu_display_suspend_helper(adev); 2903 + if (r) 2904 + return r; 2900 2905 2901 2906 adev->mode_info.bl_level = 2902 2907 amdgpu_atombios_encoder_get_backlight_level_from_reg(adev); ··· 2926 2921 amdgpu_display_backlight_set_level(adev, adev->mode_info.bl_encoder, 2927 2922 bl_level); 2928 2923 } 2924 + if (ret) 2925 + return ret; 2929 2926 2930 - return ret; 2927 + return amdgpu_display_resume_helper(adev); 2931 2928 } 2932 2929 2933 2930 static bool dce_v10_0_is_idle(void *handle)
+8 -1
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
··· 3027 3027 static int dce_v11_0_suspend(void *handle) 3028 3028 { 3029 3029 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 3030 + int r; 3031 + 3032 + r = amdgpu_display_suspend_helper(adev); 3033 + if (r) 3034 + return r; 3030 3035 3031 3036 adev->mode_info.bl_level = 3032 3037 amdgpu_atombios_encoder_get_backlight_level_from_reg(adev); ··· 3056 3051 amdgpu_display_backlight_set_level(adev, adev->mode_info.bl_encoder, 3057 3052 bl_level); 3058 3053 } 3054 + if (ret) 3055 + return ret; 3059 3056 3060 - return ret; 3057 + return amdgpu_display_resume_helper(adev); 3061 3058 } 3062 3059 3063 3060 static bool dce_v11_0_is_idle(void *handle)
+7 -1
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
··· 2770 2770 static int dce_v6_0_suspend(void *handle) 2771 2771 { 2772 2772 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 2773 + int r; 2773 2774 2775 + r = amdgpu_display_suspend_helper(adev); 2776 + if (r) 2777 + return r; 2774 2778 adev->mode_info.bl_level = 2775 2779 amdgpu_atombios_encoder_get_backlight_level_from_reg(adev); 2776 2780 ··· 2798 2794 amdgpu_display_backlight_set_level(adev, adev->mode_info.bl_encoder, 2799 2795 bl_level); 2800 2796 } 2797 + if (ret) 2798 + return ret; 2801 2799 2802 - return ret; 2800 + return amdgpu_display_resume_helper(adev); 2803 2801 } 2804 2802 2805 2803 static bool dce_v6_0_is_idle(void *handle)
+8 -1
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
··· 2796 2796 static int dce_v8_0_suspend(void *handle) 2797 2797 { 2798 2798 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 2799 + int r; 2800 + 2801 + r = amdgpu_display_suspend_helper(adev); 2802 + if (r) 2803 + return r; 2799 2804 2800 2805 adev->mode_info.bl_level = 2801 2806 amdgpu_atombios_encoder_get_backlight_level_from_reg(adev); ··· 2825 2820 amdgpu_display_backlight_set_level(adev, adev->mode_info.bl_encoder, 2826 2821 bl_level); 2827 2822 } 2823 + if (ret) 2824 + return ret; 2828 2825 2829 - return ret; 2826 + return amdgpu_display_resume_helper(adev); 2830 2827 } 2831 2828 2832 2829 static bool dce_v8_0_is_idle(void *handle)
+14 -1
drivers/gpu/drm/amd/amdgpu/dce_virtual.c
··· 39 39 #include "dce_v11_0.h" 40 40 #include "dce_virtual.h" 41 41 #include "ivsrcid/ivsrcid_vislands30.h" 42 + #include "amdgpu_display.h" 42 43 43 44 #define DCE_VIRTUAL_VBLANK_PERIOD 16666666 44 45 ··· 492 491 493 492 static int dce_virtual_suspend(void *handle) 494 493 { 494 + struct amdgpu_device *adev = (struct amdgpu_device *)handle; 495 + int r; 496 + 497 + r = amdgpu_display_suspend_helper(adev); 498 + if (r) 499 + return r; 495 500 return dce_virtual_hw_fini(handle); 496 501 } 497 502 498 503 static int dce_virtual_resume(void *handle) 499 504 { 500 - return dce_virtual_hw_init(handle); 505 + struct amdgpu_device *adev = (struct amdgpu_device *)handle; 506 + int r; 507 + 508 + r = dce_virtual_hw_init(handle); 509 + if (r) 510 + return r; 511 + return amdgpu_display_resume_helper(adev); 501 512 } 502 513 503 514 static bool dce_virtual_is_idle(void *handle)
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
··· 155 155 156 156 /* Wait till CP writes sync code: */ 157 157 status = amdkfd_fence_wait_timeout( 158 - (unsigned int *) rm_state, 158 + rm_state, 159 159 QUEUESTATE__ACTIVE, 1500); 160 160 161 161 kfd_gtt_sa_free(dbgdev->dev, mem_obj);
+3 -3
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
··· 1167 1167 if (retval) 1168 1168 goto fail_allocate_vidmem; 1169 1169 1170 - dqm->fence_addr = dqm->fence_mem->cpu_ptr; 1170 + dqm->fence_addr = (uint64_t *)dqm->fence_mem->cpu_ptr; 1171 1171 dqm->fence_gpu_addr = dqm->fence_mem->gpu_addr; 1172 1172 1173 1173 init_interrupts(dqm); ··· 1340 1340 return retval; 1341 1341 } 1342 1342 1343 - int amdkfd_fence_wait_timeout(unsigned int *fence_addr, 1344 - unsigned int fence_value, 1343 + int amdkfd_fence_wait_timeout(uint64_t *fence_addr, 1344 + uint64_t fence_value, 1345 1345 unsigned int timeout_ms) 1346 1346 { 1347 1347 unsigned long end_jiffies = msecs_to_jiffies(timeout_ms) + jiffies;
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
··· 192 192 uint16_t vmid_pasid[VMID_NUM]; 193 193 uint64_t pipelines_addr; 194 194 uint64_t fence_gpu_addr; 195 - unsigned int *fence_addr; 195 + uint64_t *fence_addr; 196 196 struct kfd_mem_obj *fence_mem; 197 197 bool active_runlist; 198 198 int sched_policy;
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
··· 347 347 } 348 348 349 349 int pm_send_query_status(struct packet_manager *pm, uint64_t fence_address, 350 - uint32_t fence_value) 350 + uint64_t fence_value) 351 351 { 352 352 uint32_t *buffer, size; 353 353 int retval = 0;
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
··· 283 283 } 284 284 285 285 static int pm_query_status_v9(struct packet_manager *pm, uint32_t *buffer, 286 - uint64_t fence_address, uint32_t fence_value) 286 + uint64_t fence_address, uint64_t fence_value) 287 287 { 288 288 struct pm4_mes_query_status *packet; 289 289
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_vi.c
··· 263 263 } 264 264 265 265 static int pm_query_status_vi(struct packet_manager *pm, uint32_t *buffer, 266 - uint64_t fence_address, uint32_t fence_value) 266 + uint64_t fence_address, uint64_t fence_value) 267 267 { 268 268 struct pm4_mes_query_status *packet; 269 269
+4 -4
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
··· 1003 1003 u32 *ctl_stack_used_size, 1004 1004 u32 *save_area_used_size); 1005 1005 1006 - int amdkfd_fence_wait_timeout(unsigned int *fence_addr, 1007 - unsigned int fence_value, 1006 + int amdkfd_fence_wait_timeout(uint64_t *fence_addr, 1007 + uint64_t fence_value, 1008 1008 unsigned int timeout_ms); 1009 1009 1010 1010 /* Packet Manager */ ··· 1040 1040 uint32_t filter_param, bool reset, 1041 1041 unsigned int sdma_engine); 1042 1042 int (*query_status)(struct packet_manager *pm, uint32_t *buffer, 1043 - uint64_t fence_address, uint32_t fence_value); 1043 + uint64_t fence_address, uint64_t fence_value); 1044 1044 int (*release_mem)(uint64_t gpu_addr, uint32_t *buffer); 1045 1045 1046 1046 /* Packet sizes */ ··· 1062 1062 struct scheduling_resources *res); 1063 1063 int pm_send_runlist(struct packet_manager *pm, struct list_head *dqm_queues); 1064 1064 int pm_send_query_status(struct packet_manager *pm, uint64_t fence_address, 1065 - uint32_t fence_value); 1065 + uint64_t fence_value); 1066 1066 1067 1067 int pm_send_unmap_queue(struct packet_manager *pm, enum kfd_queue_type type, 1068 1068 enum kfd_unmap_queues_filter mode,
+1
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubp.h
··· 134 134 HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_SURFACE_DCC_EN, mask_sh),\ 135 135 HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_SURFACE_DCC_IND_BLK, mask_sh),\ 136 136 HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_SURFACE_DCC_IND_BLK_C, mask_sh),\ 137 + HUBP_SF(HUBPREQ0_DCSURF_SURFACE_FLIP_INTERRUPT, SURFACE_FLIP_INT_MASK, mask_sh),\ 137 138 HUBP_SF(HUBPRET0_HUBPRET_CONTROL, DET_BUF_PLANE1_BASE_ADDRESS, mask_sh),\ 138 139 HUBP_SF(HUBPRET0_HUBPRET_CONTROL, CROSSBAR_SRC_CB_B, mask_sh),\ 139 140 HUBP_SF(HUBPRET0_HUBPRET_CONTROL, CROSSBAR_SRC_CR_R, mask_sh),\
+58 -2
drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
··· 587 587 tmp, MC_CG_ARB_FREQ_F0); 588 588 } 589 589 590 + static uint16_t smu7_override_pcie_speed(struct pp_hwmgr *hwmgr) 591 + { 592 + struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev); 593 + uint16_t pcie_gen = 0; 594 + 595 + if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4 && 596 + adev->pm.pcie_gen_mask & CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN4) 597 + pcie_gen = 3; 598 + else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3 && 599 + adev->pm.pcie_gen_mask & CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN3) 600 + pcie_gen = 2; 601 + else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2 && 602 + adev->pm.pcie_gen_mask & CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN2) 603 + pcie_gen = 1; 604 + else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1 && 605 + adev->pm.pcie_gen_mask & CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN1) 606 + pcie_gen = 0; 607 + 608 + return pcie_gen; 609 + } 610 + 611 + static uint16_t smu7_override_pcie_width(struct pp_hwmgr *hwmgr) 612 + { 613 + struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev); 614 + uint16_t pcie_width = 0; 615 + 616 + if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16) 617 + pcie_width = 16; 618 + else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12) 619 + pcie_width = 12; 620 + else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X8) 621 + pcie_width = 8; 622 + else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X4) 623 + pcie_width = 4; 624 + else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X2) 625 + pcie_width = 2; 626 + else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X1) 627 + pcie_width = 1; 628 + 629 + return pcie_width; 630 + } 631 + 590 632 static int smu7_setup_default_pcie_table(struct pp_hwmgr *hwmgr) 591 633 { 592 634 struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend); ··· 725 683 PP_Min_PCIEGen), 726 684 get_pcie_lane_support(data->pcie_lane_cap, 727 685 PP_Max_PCIELane)); 686 + 687 + if (data->pcie_dpm_key_disabled) 688 + phm_setup_pcie_table_entry(&data->dpm_table.pcie_speed_table, 689 + data->dpm_table.pcie_speed_table.count, 690 + smu7_override_pcie_speed(hwmgr), smu7_override_pcie_width(hwmgr)); 728 691 } 729 692 return 0; 730 693 } ··· 1224 1177 (hwmgr->chip_id == CHIP_POLARIS10) || 1225 1178 (hwmgr->chip_id == CHIP_POLARIS11) || 1226 1179 (hwmgr->chip_id == CHIP_POLARIS12) || 1227 - (hwmgr->chip_id == CHIP_TONGA)) 1180 + (hwmgr->chip_id == CHIP_TONGA) || 1181 + (hwmgr->chip_id == CHIP_TOPAZ)) 1228 1182 PHM_WRITE_FIELD(hwmgr->device, MC_SEQ_CNTL_3, CAC_EN, 0x1); 1229 1183 1230 1184 ··· 1295 1247 PPSMC_MSG_PCIeDPM_Enable, 1296 1248 NULL)), 1297 1249 "Failed to enable pcie DPM during DPM Start Function!", 1250 + return -EINVAL); 1251 + } else { 1252 + PP_ASSERT_WITH_CODE( 1253 + (0 == smum_send_msg_to_smc(hwmgr, 1254 + PPSMC_MSG_PCIeDPM_Disable, 1255 + NULL)), 1256 + "Failed to disble pcie DPM during DPM Start Function!", 1298 1257 return -EINVAL); 1299 1258 } 1300 1259 ··· 3331 3276 3332 3277 disable_mclk_switching_for_display = ((1 < hwmgr->display_config->num_display) && 3333 3278 !hwmgr->display_config->multi_monitor_in_sync) || 3334 - smu7_vblank_too_short(hwmgr, hwmgr->display_config->min_vblank_time); 3279 + (hwmgr->display_config->num_display && 3280 + smu7_vblank_too_short(hwmgr, hwmgr->display_config->min_vblank_time)); 3335 3281 3336 3282 disable_mclk_switching = disable_mclk_switching_for_frame_lock || 3337 3283 disable_mclk_switching_for_display;
+62 -10
drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
··· 54 54 #include "smuio/smuio_9_0_offset.h" 55 55 #include "smuio/smuio_9_0_sh_mask.h" 56 56 57 + #define smnPCIE_LC_SPEED_CNTL 0x11140290 58 + #define smnPCIE_LC_LINK_WIDTH_CNTL 0x11140288 59 + 57 60 #define HBM_MEMORY_CHANNEL_WIDTH 128 58 61 59 62 static const uint32_t channel_number[] = {1, 2, 0, 4, 0, 8, 0, 16, 2}; ··· 446 443 if (PP_CAP(PHM_PlatformCaps_VCEDPM)) 447 444 data->smu_features[GNLD_DPM_VCE].supported = true; 448 445 449 - if (!data->registry_data.pcie_dpm_key_disabled) 450 - data->smu_features[GNLD_DPM_LINK].supported = true; 446 + data->smu_features[GNLD_DPM_LINK].supported = true; 451 447 452 448 if (!data->registry_data.dcefclk_dpm_key_disabled) 453 449 data->smu_features[GNLD_DPM_DCEFCLK].supported = true; ··· 1544 1542 1545 1543 if (pp_table->PcieLaneCount[i] > pcie_width) 1546 1544 pp_table->PcieLaneCount[i] = pcie_width; 1545 + } 1546 + 1547 + if (data->registry_data.pcie_dpm_key_disabled) { 1548 + for (i = 0; i < NUM_LINK_LEVELS; i++) { 1549 + pp_table->PcieGenSpeed[i] = pcie_gen; 1550 + pp_table->PcieLaneCount[i] = pcie_width; 1551 + } 1547 1552 } 1548 1553 1549 1554 return 0; ··· 2973 2964 return -1); 2974 2965 data->smu_features[GNLD_ACDC].enabled = true; 2975 2966 } 2967 + } 2968 + 2969 + if (data->registry_data.pcie_dpm_key_disabled) { 2970 + PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr, 2971 + false, data->smu_features[GNLD_DPM_LINK].smu_feature_bitmap), 2972 + "Attempt to Disable Link DPM feature Failed!", return -EINVAL); 2973 + data->smu_features[GNLD_DPM_LINK].enabled = false; 2974 + data->smu_features[GNLD_DPM_LINK].supported = false; 2976 2975 } 2977 2976 2978 2977 return 0; ··· 4601 4584 return 0; 4602 4585 } 4603 4586 4587 + static int vega10_get_current_pcie_link_width_level(struct pp_hwmgr *hwmgr) 4588 + { 4589 + struct amdgpu_device *adev = hwmgr->adev; 4590 + 4591 + return (RREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL) & 4592 + PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD_MASK) 4593 + >> PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD__SHIFT; 4594 + } 4595 + 4596 + static int vega10_get_current_pcie_link_speed_level(struct pp_hwmgr *hwmgr) 4597 + { 4598 + struct amdgpu_device *adev = hwmgr->adev; 4599 + 4600 + return (RREG32_PCIE(smnPCIE_LC_SPEED_CNTL) & 4601 + PSWUSP0_PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE_MASK) 4602 + >> PSWUSP0_PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE__SHIFT; 4603 + } 4604 + 4604 4605 static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr, 4605 4606 enum pp_clock_type type, char *buf) 4606 4607 { ··· 4627 4592 struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table); 4628 4593 struct vega10_single_dpm_table *soc_table = &(data->dpm_table.soc_table); 4629 4594 struct vega10_single_dpm_table *dcef_table = &(data->dpm_table.dcef_table); 4630 - struct vega10_pcie_table *pcie_table = &(data->dpm_table.pcie_table); 4631 4595 struct vega10_odn_clock_voltage_dependency_table *podn_vdd_dep = NULL; 4596 + uint32_t gen_speed, lane_width, current_gen_speed, current_lane_width; 4597 + PPTable_t *pptable = &(data->smc_state_table.pp_table); 4632 4598 4633 4599 int i, now, size = 0, count = 0; ··· 4686 4650 "*" : ""); 4687 4651 break; 4688 4652 case PP_PCIE: 4689 - smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentLinkIndex, &now); 4653 + current_gen_speed = 4654 + vega10_get_current_pcie_link_speed_level(hwmgr); 4655 + current_lane_width = 4656 + vega10_get_current_pcie_link_width_level(hwmgr); 4657 + for (i = 0; i < NUM_LINK_LEVELS; i++) { 4658 + gen_speed = pptable->PcieGenSpeed[i]; 4659 + lane_width = pptable->PcieLaneCount[i]; 4690 4660 4691 - for (i = 0; i < pcie_table->count; i++) 4692 - size += sprintf(buf + size, "%d: %s %s\n", i, 4693 - (pcie_table->pcie_gen[i] == 0) ? "2.5GT/s, x1" : 4694 - (pcie_table->pcie_gen[i] == 1) ? "5.0GT/s, x16" : 4695 - (pcie_table->pcie_gen[i] == 2) ? "8.0GT/s, x16" : "", 4696 - (i == now) ? "*" : ""); 4661 + size += sprintf(buf + size, "%d: %s %s %s\n", i, 4662 + (gen_speed == 0) ? "2.5GT/s," : 4663 + (gen_speed == 1) ? "5.0GT/s," : 4664 + (gen_speed == 2) ? "8.0GT/s," : 4665 + (gen_speed == 3) ? "16.0GT/s," : "", 4666 + (lane_width == 1) ? "x1" : 4667 + (lane_width == 2) ? "x2" : 4668 + (lane_width == 3) ? "x4" : 4669 + (lane_width == 4) ? "x8" : 4670 + (lane_width == 5) ? "x12" : 4671 + (lane_width == 6) ? "x16" : "", 4672 + (current_gen_speed == gen_speed) && 4673 + (current_lane_width == lane_width) ? 4674 + "*" : ""); 4675 + } 4697 4676 break; 4677 + 4698 4678 case OD_SCLK: 4699 4679 if (hwmgr->od_enabled) { 4700 4680 size = sprintf(buf, "%s:\n", "OD_SCLK");
+24
drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
··· 133 133 data->registry_data.auto_wattman_debug = 0; 134 134 data->registry_data.auto_wattman_sample_period = 100; 135 135 data->registry_data.auto_wattman_threshold = 50; 136 + data->registry_data.pcie_dpm_key_disabled = !(hwmgr->feature_mask & PP_PCIE_DPM_MASK); 136 137 } 137 138 138 139 static int vega12_set_features_platform_caps(struct pp_hwmgr *hwmgr) ··· 540 539 pp_table->PcieLaneCount[i] = pcie_width_arg; 541 540 } 542 541 542 + /* override to the highest if it's disabled from ppfeaturmask */ 543 + if (data->registry_data.pcie_dpm_key_disabled) { 544 + for (i = 0; i < NUM_LINK_LEVELS; i++) { 545 + smu_pcie_arg = (i << 16) | (pcie_gen << 8) | pcie_width; 546 + ret = smum_send_msg_to_smc_with_parameter(hwmgr, 547 + PPSMC_MSG_OverridePcieParameters, smu_pcie_arg, 548 + NULL); 549 + PP_ASSERT_WITH_CODE(!ret, 550 + "[OverridePcieParameters] Attempt to override pcie params failed!", 551 + return ret); 552 + 553 + pp_table->PcieGenSpeed[i] = pcie_gen; 554 + pp_table->PcieLaneCount[i] = pcie_width; 555 + } 556 + ret = vega12_enable_smc_features(hwmgr, 557 + false, 558 + data->smu_features[GNLD_DPM_LINK].smu_feature_bitmap); 559 + PP_ASSERT_WITH_CODE(!ret, 560 + "Attempt to Disable DPM LINK Failed!", 561 + return ret); 562 + data->smu_features[GNLD_DPM_LINK].enabled = false; 563 + data->smu_features[GNLD_DPM_LINK].supported = false; 564 + } 543 565 return 0; 544 566 } 545 567
+25
drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
··· 171 171 data->registry_data.gfxoff_controlled_by_driver = 1; 172 172 data->gfxoff_allowed = false; 173 173 data->counter_gfxoff = 0; 174 + data->registry_data.pcie_dpm_key_disabled = !(hwmgr->feature_mask & PP_PCIE_DPM_MASK); 174 175 } 175 176 176 177 static int vega20_set_features_platform_caps(struct pp_hwmgr *hwmgr) ··· 883 882 /* update the pptable */ 884 883 pp_table->PcieGenSpeed[i] = pcie_gen_arg; 885 884 pp_table->PcieLaneCount[i] = pcie_width_arg; 885 + } 886 + 887 + /* override to the highest if it's disabled from ppfeaturmask */ 888 + if (data->registry_data.pcie_dpm_key_disabled) { 889 + for (i = 0; i < NUM_LINK_LEVELS; i++) { 890 + smu_pcie_arg = (i << 16) | (pcie_gen << 8) | pcie_width; 891 + ret = smum_send_msg_to_smc_with_parameter(hwmgr, 892 + PPSMC_MSG_OverridePcieParameters, smu_pcie_arg, 893 + NULL); 894 + PP_ASSERT_WITH_CODE(!ret, 895 + "[OverridePcieParameters] Attempt to override pcie params failed!", 896 + return ret); 897 + 898 + pp_table->PcieGenSpeed[i] = pcie_gen; 899 + pp_table->PcieLaneCount[i] = pcie_width; 900 + } 901 + ret = vega20_enable_smc_features(hwmgr, 902 + false, 903 + data->smu_features[GNLD_DPM_LINK].smu_feature_bitmap); 904 + PP_ASSERT_WITH_CODE(!ret, 905 + "Attempt to Disable DPM LINK Failed!", 906 + return ret); 907 + data->smu_features[GNLD_DPM_LINK].enabled = false; 908 + data->smu_features[GNLD_DPM_LINK].supported = false; 886 909 } 887 910 888 911 return 0;
+3 -2
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1294 1294 bool use_baco = !smu->is_apu && 1295 1295 ((amdgpu_in_reset(adev) && 1296 1296 (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO)) || 1297 - ((adev->in_runpm || adev->in_hibernate) && amdgpu_asic_supports_baco(adev))); 1297 + ((adev->in_runpm || adev->in_s4) && amdgpu_asic_supports_baco(adev))); 1298 1298 1299 1299 /* 1300 1300 * For custom pptable uploading, skip the DPM features ··· 1431 1431 1432 1432 smu->watermarks_bitmap &= ~(WATERMARKS_LOADED); 1433 1433 1434 - if (smu->is_apu) 1434 + /* skip CGPG when in S0ix */ 1435 + if (smu->is_apu && !adev->in_s0ix) 1435 1436 smu_set_gfx_cgpg(&adev->smu, false); 1436 1437 1437 1438 return 0;
+5
drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
··· 384 384 385 385 static bool vangogh_is_dpm_running(struct smu_context *smu) 386 386 { 387 + struct amdgpu_device *adev = smu->adev; 387 388 int ret = 0; 388 389 uint32_t feature_mask[2]; 389 390 uint64_t feature_enabled; 391 + 392 + /* we need to re-init after suspend so return false */ 393 + if (adev->in_suspend) 394 + return false; 390 395 391 396 ret = smu_cmn_get_enabled_32_bits_mask(smu, feature_mask, 2); 392 397
+2 -1
drivers/gpu/drm/etnaviv/etnaviv_gem.c
··· 689 689 struct page **pages = pvec + pinned; 690 690 691 691 ret = pin_user_pages_fast(ptr, num_pages, 692 - !userptr->ro ? FOLL_WRITE : 0, pages); 692 + FOLL_WRITE | FOLL_FORCE | FOLL_LONGTERM, 693 + pages); 693 694 if (ret < 0) { 694 695 unpin_user_pages(pvec, pinned); 695 696 kvfree(pvec);
-1
drivers/gpu/drm/exynos/exynos5433_drm_decon.c
··· 13 13 #include <linux/irq.h> 14 14 #include <linux/mfd/syscon.h> 15 15 #include <linux/of_device.h> 16 - #include <linux/of_gpio.h> 17 16 #include <linux/platform_device.h> 18 17 #include <linux/pm_runtime.h> 19 18 #include <linux/regmap.h>
+20 -2
drivers/gpu/drm/i915/display/intel_acpi.c
··· 84 84 return; 85 85 } 86 86 87 + if (!pkg->package.count) { 88 + DRM_DEBUG_DRIVER("no connection in _DSM\n"); 89 + return; 90 + } 91 + 87 92 connector_count = &pkg->package.elements[0]; 88 93 DRM_DEBUG_DRIVER("MUX info connectors: %lld\n", 89 94 (unsigned long long)connector_count->integer.value); 90 95 for (i = 1; i < pkg->package.count; i++) { 91 96 union acpi_object *obj = &pkg->package.elements[i]; 92 - union acpi_object *connector_id = &obj->package.elements[0]; 93 - union acpi_object *info = &obj->package.elements[1]; 97 + union acpi_object *connector_id; 98 + union acpi_object *info; 99 + 100 + if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < 2) { 101 + DRM_DEBUG_DRIVER("Invalid object for MUX #%d\n", i); 102 + continue; 103 + } 104 + 105 + connector_id = &obj->package.elements[0]; 106 + info = &obj->package.elements[1]; 107 + if (info->type != ACPI_TYPE_BUFFER || info->buffer.length < 4) { 108 + DRM_DEBUG_DRIVER("Invalid info for MUX obj #%d\n", i); 109 + continue; 110 + } 111 + 94 112 DRM_DEBUG_DRIVER("Connector id: 0x%016llx\n", 95 113 (unsigned long long)connector_id->integer.value); 96 114 DRM_DEBUG_DRIVER(" port id: %s\n",
+3 -2
drivers/gpu/drm/i915/display/intel_atomic_plane.c
··· 317 317 if (!new_plane_state->hw.crtc && !old_plane_state->hw.crtc) 318 318 return 0; 319 319 320 - new_crtc_state->enabled_planes |= BIT(plane->id); 321 - 322 320 ret = plane->check_plane(new_crtc_state, new_plane_state); 323 321 if (ret) 324 322 return ret; 323 + 324 + if (fb) 325 + new_crtc_state->enabled_planes |= BIT(plane->id); 325 326 326 327 /* FIXME pre-g4x don't work like this */ 327 328 if (new_plane_state->uapi.visible)
+1 -3
drivers/gpu/drm/i915/display/intel_dp.c
··· 3619 3619 { 3620 3620 int ret; 3621 3621 3622 - intel_dp_lttpr_init(intel_dp); 3623 - 3624 - if (drm_dp_read_dpcd_caps(&intel_dp->aux, intel_dp->dpcd)) 3622 + if (intel_dp_init_lttpr_and_dprx_caps(intel_dp) < 0) 3625 3623 return false; 3626 3624 3627 3625 /*
+7
drivers/gpu/drm/i915/display/intel_dp_aux.c
··· 133 133 else 134 134 precharge = 5; 135 135 136 + /* Max timeout value on G4x-BDW: 1.6ms */ 136 137 if (IS_BROADWELL(dev_priv)) 137 138 timeout = DP_AUX_CH_CTL_TIME_OUT_600us; 138 139 else ··· 160 159 enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 161 160 u32 ret; 162 161 162 + /* 163 + * Max timeout values: 164 + * SKL-GLK: 1.6ms 165 + * CNL: 3.2ms 166 + * ICL+: 4ms 167 + */ 163 168 ret = DP_AUX_CH_CTL_SEND_BUSY | 164 169 DP_AUX_CH_CTL_DONE | 165 170 DP_AUX_CH_CTL_INTERRUPT |
+2 -8
drivers/gpu/drm/i915/display/intel_vdsc.c
··· 1014 1014 { 1015 1015 enum pipe pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe; 1016 1016 1017 - if (crtc_state->cpu_transcoder == TRANSCODER_EDP) 1018 - return DSS_CTL1; 1019 - 1020 - return ICL_PIPE_DSS_CTL1(pipe); 1017 + return is_pipe_dsc(crtc_state) ? ICL_PIPE_DSS_CTL1(pipe) : DSS_CTL1; 1021 1018 } 1022 1019 1023 1020 static i915_reg_t dss_ctl2_reg(const struct intel_crtc_state *crtc_state) 1024 1021 { 1025 1022 enum pipe pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe; 1026 1023 1027 - if (crtc_state->cpu_transcoder == TRANSCODER_EDP) 1028 - return DSS_CTL2; 1029 - 1030 - return ICL_PIPE_DSS_CTL2(pipe); 1024 + return is_pipe_dsc(crtc_state) ? ICL_PIPE_DSS_CTL2(pipe) : DSS_CTL2; 1031 1025 } 1032 1026 1033 1027 void intel_dsc_enable(struct intel_encoder *encoder,
+12 -1
drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
··· 316 316 WRITE_ONCE(fence->vma, NULL); 317 317 vma->fence = NULL; 318 318 319 - with_intel_runtime_pm_if_in_use(fence_to_uncore(fence)->rpm, wakeref) 319 + /* 320 + * Skip the write to HW if and only if the device is currently 321 + * suspended. 322 + * 323 + * If the driver does not currently hold a wakeref (if_in_use == 0), 324 + * the device may currently be runtime suspended, or it may be woken 325 + * up before the suspend takes place. If the device is not suspended 326 + * (powered down) and we skip clearing the fence register, the HW is 327 + * left in an undefined state where we may end up with multiple 328 + * registers overlapping. 329 + */ 330 + with_intel_runtime_pm_if_active(fence_to_uncore(fence)->rpm, wakeref) 320 331 fence_write(fence); 321 332 } 322 333
+24 -5
drivers/gpu/drm/i915/intel_runtime_pm.c
··· 412 412 } 413 413 414 414 /** 415 - * intel_runtime_pm_get_if_in_use - grab a runtime pm reference if device in use 415 + * __intel_runtime_pm_get_if_active - grab a runtime pm reference if device is active 416 416 * @rpm: the intel_runtime_pm structure 417 + * @ignore_usecount: get a ref even if dev->power.usage_count is 0 417 418 * 418 419 * This function grabs a device-level runtime pm reference if the device is 419 - * already in use and ensures that it is powered up. It is illegal to try 420 - * and access the HW should intel_runtime_pm_get_if_in_use() report failure. 420 + * already active and ensures that it is powered up. It is illegal to try 421 + * and access the HW should intel_runtime_pm_get_if_active() report failure. 422 + * 423 + * If @ignore_usecount=true, a reference will be acquired even if there is no 424 + * user requiring the device to be powered up (dev->power.usage_count == 0). 425 + * If the function returns false in this case then it's guaranteed that the 426 + * device's runtime suspend hook has been called already or that it will be 427 + * called (and hence it's also guaranteed that the device's runtime resume 428 + * hook will be called eventually). 421 429 * 422 430 * Any runtime pm reference obtained by this function must have a symmetric 423 431 * call to intel_runtime_pm_put() to release the reference again. ··· 433 425 * Returns: the wakeref cookie to pass to intel_runtime_pm_put(), evaluates 434 426 * as True if the wakeref was acquired, or False otherwise. 435 427 */ 436 - intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm) 428 + static intel_wakeref_t __intel_runtime_pm_get_if_active(struct intel_runtime_pm *rpm, 429 + bool ignore_usecount) 437 430 { 438 431 if (IS_ENABLED(CONFIG_PM)) { 439 432 /* ··· 443 434 * function, since the power state is undefined. This applies 444 435 * atm to the late/early system suspend/resume handlers. 445 436 */ 446 - if (pm_runtime_get_if_in_use(rpm->kdev) <= 0) 437 + if (pm_runtime_get_if_active(rpm->kdev, ignore_usecount) <= 0) 447 438 return 0; 448 439 } 449 440 450 441 intel_runtime_pm_acquire(rpm, true); 451 442 452 443 return track_intel_runtime_pm_wakeref(rpm); 444 + } 445 + 446 + intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm) 447 + { 448 + return __intel_runtime_pm_get_if_active(rpm, false); 449 + } 450 + 451 + intel_wakeref_t intel_runtime_pm_get_if_active(struct intel_runtime_pm *rpm) 452 + { 453 + return __intel_runtime_pm_get_if_active(rpm, true); 453 454 } 454 455 455 456 /**
+5
drivers/gpu/drm/i915/intel_runtime_pm.h
··· 177 177 178 178 intel_wakeref_t intel_runtime_pm_get(struct intel_runtime_pm *rpm); 179 179 intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm); 180 + intel_wakeref_t intel_runtime_pm_get_if_active(struct intel_runtime_pm *rpm); 180 181 intel_wakeref_t intel_runtime_pm_get_noresume(struct intel_runtime_pm *rpm); 181 182 intel_wakeref_t intel_runtime_pm_get_raw(struct intel_runtime_pm *rpm); 182 183 ··· 187 186 188 187 #define with_intel_runtime_pm_if_in_use(rpm, wf) \ 189 188 for ((wf) = intel_runtime_pm_get_if_in_use(rpm); (wf); \ 189 + intel_runtime_pm_put((rpm), (wf)), (wf) = 0) 190 + 191 + #define with_intel_runtime_pm_if_active(rpm, wf) \ 192 + for ((wf) = intel_runtime_pm_get_if_active(rpm); (wf); \ 190 193 intel_runtime_pm_put((rpm), (wf)), (wf) = 0) 191 194 192 195 void intel_runtime_pm_put_unchecked(struct intel_runtime_pm *rpm);
+1 -1
drivers/gpu/drm/imx/imx-drm-core.c
··· 215 215 216 216 ret = drmm_mode_config_init(drm); 217 217 if (ret) 218 - return ret; 218 + goto err_kms; 219 219 220 220 ret = drm_vblank_init(drm, MAX_CRTC); 221 221 if (ret)
+11 -1
drivers/gpu/drm/imx/imx-ldb.c
··· 197 197 int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN; 198 198 int mux = drm_of_encoder_active_port_id(imx_ldb_ch->child, encoder); 199 199 200 + if (mux < 0 || mux >= ARRAY_SIZE(ldb->clk_sel)) { 201 + dev_warn(ldb->dev, "%s: invalid mux %d\n", __func__, mux); 202 + return; 203 + } 204 + 200 205 drm_panel_prepare(imx_ldb_ch->panel); 201 206 202 207 if (dual) { ··· 259 254 unsigned long di_clk = mode->clock * 1000; 260 255 int mux = drm_of_encoder_active_port_id(imx_ldb_ch->child, encoder); 261 256 u32 bus_format = imx_ldb_ch->bus_format; 257 + 258 + if (mux < 0 || mux >= ARRAY_SIZE(ldb->clk_sel)) { 259 + dev_warn(ldb->dev, "%s: invalid mux %d\n", __func__, mux); 260 + return; 261 + } 262 262 263 263 if (mode->clock > 170000) { 264 264 dev_warn(ldb->dev, ··· 593 583 struct imx_ldb_channel *channel = &imx_ldb->channel[i]; 594 584 595 585 if (!channel->ldb) 596 - break; 586 + continue; 597 587 598 588 ret = imx_ldb_register(drm, channel); 599 589 if (ret)
+2 -2
drivers/gpu/drm/msm/adreno/a5xx_gpu.c
··· 1386 1386 1387 1387 static int a5xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value) 1388 1388 { 1389 - *value = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_CP_0_LO, 1390 - REG_A5XX_RBBM_PERFCTR_CP_0_HI); 1389 + *value = gpu_read64(gpu, REG_A5XX_RBBM_ALWAYSON_COUNTER_LO, 1390 + REG_A5XX_RBBM_ALWAYSON_COUNTER_HI); 1391 1391 1392 1392 return 0; 1393 1393 }
+1 -1
drivers/gpu/drm/msm/adreno/a5xx_power.c
··· 304 304 /* Set up the limits management */ 305 305 if (adreno_is_a530(adreno_gpu)) 306 306 a530_lm_setup(gpu); 307 - else 307 + else if (adreno_is_a540(adreno_gpu)) 308 308 a540_lm_setup(gpu); 309 309 310 310 /* Set up SP/TP power collpase */
+1 -1
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
··· 339 339 else 340 340 bit = a6xx_gmu_oob_bits[state].ack_new; 341 341 342 - gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, bit); 342 + gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, 1 << bit); 343 343 } 344 344 345 345 /* Enable CPU control of SPTP power power collapse */
+75 -33
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 522 522 return a6xx_idle(gpu, ring) ? 0 : -EINVAL; 523 523 } 524 524 525 - static void a6xx_ucode_check_version(struct a6xx_gpu *a6xx_gpu, 525 + /* 526 + * Check that the microcode version is new enough to include several key 527 + * security fixes. Return true if the ucode is safe. 528 + */ 529 + static bool a6xx_ucode_check_version(struct a6xx_gpu *a6xx_gpu, 526 530 struct drm_gem_object *obj) 527 531 { 532 + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; 533 + struct msm_gpu *gpu = &adreno_gpu->base; 528 534 u32 *buf = msm_gem_get_vaddr(obj); 535 + bool ret = false; 529 536 530 537 if (IS_ERR(buf)) 531 - return; 538 + return false; 532 539 533 540 /* 534 - * If the lowest nibble is 0xa that is an indication that this microcode 535 - * has been patched. The actual version is in dword [3] but we only care 536 - * about the patchlevel which is the lowest nibble of dword [3] 537 - * 538 - * Otherwise check that the firmware is greater than or equal to 1.90 539 - * which was the first version that had this fix built in 541 + * Targets up to a640 (a618, a630 and a640) need to check for a 542 + * microcode version that is patched to support the whereami opcode or 543 + * one that is new enough to include it by default. 540 544 */ 541 - if (((buf[0] & 0xf) == 0xa) && (buf[2] & 0xf) >= 1) 542 - a6xx_gpu->has_whereami = true; 543 - else if ((buf[0] & 0xfff) > 0x190) 544 - a6xx_gpu->has_whereami = true; 545 + if (adreno_is_a618(adreno_gpu) || adreno_is_a630(adreno_gpu) || 546 + adreno_is_a640(adreno_gpu)) { 547 + /* 548 + * If the lowest nibble is 0xa that is an indication that this 549 + * microcode has been patched. The actual version is in dword 550 + * [3] but we only care about the patchlevel which is the lowest 551 + * nibble of dword [3] 552 + * 553 + * Otherwise check that the firmware is greater than or equal 554 + * to 1.90 which was the first version that had this fix built 555 + * in 556 + */ 557 + if ((((buf[0] & 0xf) == 0xa) && (buf[2] & 0xf) >= 1) || 558 + (buf[0] & 0xfff) >= 0x190) { 559 + a6xx_gpu->has_whereami = true; 560 + ret = true; 561 + goto out; 562 + } 545 563 564 + DRM_DEV_ERROR(&gpu->pdev->dev, 565 + "a630 SQE ucode is too old. Have version %x need at least %x\n", 566 + buf[0] & 0xfff, 0x190); 567 + } else { 568 + /* 569 + * a650 tier targets don't need whereami but still need to be 570 + * equal to or newer than 0.95 for other security fixes 571 + */ 572 + if (adreno_is_a650(adreno_gpu)) { 573 + if ((buf[0] & 0xfff) >= 0x095) { 574 + ret = true; 575 + goto out; 576 + } 577 + 578 + DRM_DEV_ERROR(&gpu->pdev->dev, 579 + "a650 SQE ucode is too old. Have version %x need at least %x\n", 580 + buf[0] & 0xfff, 0x095); 581 + } 582 + 583 + /* 584 + * When a660 is added those targets should return true here 585 + * since those have all the critical security fixes built in 586 + * from the start 587 + */ 588 + } 589 + out: 546 590 msm_gem_put_vaddr(obj); 591 + return ret; 547 592 } 548 593 549 594 static int a6xx_ucode_init(struct msm_gpu *gpu) ··· 611 566 } 612 567 613 568 msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw"); 614 - a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo); 569 + if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) { 570 + msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace); 571 + drm_gem_object_put(a6xx_gpu->sqe_bo); 572 + 573 + a6xx_gpu->sqe_bo = NULL; 574 + return -EPERM; 575 + } 615 576 } 616 577 617 578 gpu_write64(gpu, REG_A6XX_CP_SQE_INSTR_BASE_LO, ··· 1228 1177 /* Force the GPU power on so we can read this register */ 1229 1178 a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); 1230 1179 1231 - *value = gpu_read64(gpu, REG_A6XX_RBBM_PERFCTR_CP_0_LO, 1232 - REG_A6XX_RBBM_PERFCTR_CP_0_HI); 1180 + *value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO, 1181 + REG_A6XX_CP_ALWAYS_ON_COUNTER_HI); 1233 1182 1234 1183 a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); 1235 1184 mutex_unlock(&perfcounter_oob); ··· 1401 1350 u32 revn) 1402 1351 { 1403 1352 struct opp_table *opp_table; 1404 - struct nvmem_cell *cell; 1405 1353 u32 supp_hw = UINT_MAX; 1406 - void *buf; 1354 + u16 speedbin; 1355 + int ret; 1407 1356 1408 - cell = nvmem_cell_get(dev, "speed_bin"); 1357 + ret = nvmem_cell_read_u16(dev, "speed_bin", &speedbin); 1409 1358 /* 1410 1359 * -ENOENT means that the platform doesn't support speedbin which is 1411 1360 * fine 1412 1361 */ 1413 - if (PTR_ERR(cell) == -ENOENT) 1362 + if (ret == -ENOENT) { 1414 1363 return 0; 1415 - else if (IS_ERR(cell)) { 1364 + } else if (ret) { 1416 1365 DRM_DEV_ERROR(dev, 1417 - "failed to read speed-bin. Some OPPs may not be supported by hardware"); 1366 + "failed to read speed-bin (%d). Some OPPs may not be supported by hardware", 1367 + ret); 1418 1368 goto done; 1419 1369 } 1370 + speedbin = le16_to_cpu(speedbin); 1420 1371 1421 - buf = nvmem_cell_read(cell, NULL); 1422 - if (IS_ERR(buf)) { 1423 - nvmem_cell_put(cell); 1424 - DRM_DEV_ERROR(dev, 1425 - "failed to read speed-bin. Some OPPs may not be supported by hardware"); 1426 - goto done; 1427 - } 1428 - 1429 - supp_hw = fuse_to_supp_hw(dev, revn, *((u32 *) buf)); 1430 - 1431 - kfree(buf); 1432 - nvmem_cell_put(cell); 1372 + supp_hw = fuse_to_supp_hw(dev, revn, speedbin); 1433 1373 1434 1374 done: 1435 1375 opp_table = dev_pm_opp_set_supported_hw(dev, &supp_hw, 1);
+3 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
··· 496 496 497 497 DPU_REG_WRITE(c, CTL_TOP, mode_sel); 498 498 DPU_REG_WRITE(c, CTL_INTF_ACTIVE, intf_active); 499 - DPU_REG_WRITE(c, CTL_MERGE_3D_ACTIVE, BIT(cfg->merge_3d - MERGE_3D_0)); 499 + if (cfg->merge_3d) 500 + DPU_REG_WRITE(c, CTL_MERGE_3D_ACTIVE, 501 + BIT(cfg->merge_3d - MERGE_3D_0)); 500 502 } 501 503 502 504 static void dpu_hw_ctl_intf_cfg(struct dpu_hw_ctl *ctx,
+7 -5
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
··· 43 43 #define DPU_DEBUGFS_DIR "msm_dpu" 44 44 #define DPU_DEBUGFS_HWMASKNAME "hw_log_mask" 45 45 46 + #define MIN_IB_BW 400000000ULL /* Min ib vote 400MB */ 47 + 46 48 static int dpu_kms_hw_init(struct msm_kms *kms); 47 49 static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms); 48 50 ··· 933 931 DPU_DEBUG("REG_DMA is not defined"); 934 932 } 935 933 934 + if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss")) 935 + dpu_kms_parse_data_bus_icc_path(dpu_kms); 936 + 936 937 pm_runtime_get_sync(&dpu_kms->pdev->dev); 937 938 938 939 dpu_kms->core_rev = readl_relaxed(dpu_kms->mmio + 0x0); ··· 1036 1031 } 1037 1032 1038 1033 dpu_vbif_init_memtypes(dpu_kms); 1039 - 1040 - if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss")) 1041 - dpu_kms_parse_data_bus_icc_path(dpu_kms); 1042 1034 1043 1035 pm_runtime_put_sync(&dpu_kms->pdev->dev); 1044 1036 ··· 1193 1191 1194 1192 ddev = dpu_kms->dev; 1195 1193 1194 + WARN_ON(!(dpu_kms->num_paths)); 1196 1195 /* Min vote of BW is required before turning on AXI clk */ 1197 1196 for (i = 0; i < dpu_kms->num_paths; i++) 1198 - icc_set_bw(dpu_kms->path[i], 0, 1199 - dpu_kms->catalog->perf.min_dram_ib); 1197 + icc_set_bw(dpu_kms->path[i], 0, Bps_to_icc(MIN_IB_BW)); 1200 1198 1201 1199 rc = msm_dss_enable_clk(mp->clk_config, mp->num_clk, true); 1202 1200 if (rc) {
+7
drivers/gpu/drm/msm/dp/dp_aux.c
··· 32 32 struct drm_dp_aux dp_aux; 33 33 }; 34 34 35 + #define MAX_AUX_RETRIES 5 36 + 35 37 static const char *dp_aux_get_error(u32 aux_error) 36 38 { 37 39 switch (aux_error) { ··· 379 377 ret = dp_aux_cmd_fifo_tx(aux, msg); 380 378 381 379 if (ret < 0) { 380 + if (aux->native) { 381 + aux->retry_cnt++; 382 + if (!(aux->retry_cnt % MAX_AUX_RETRIES)) 383 + dp_catalog_aux_update_cfg(aux->catalog); 384 + } 382 385 usleep_range(400, 500); /* at least 400us to next try */ 383 386 goto unlock_exit; 384 387 }
+1 -1
drivers/gpu/drm/msm/dsi/pll/dsi_pll.c
··· 163 163 break; 164 164 case MSM_DSI_PHY_7NM: 165 165 case MSM_DSI_PHY_7NM_V4_1: 166 - pll = msm_dsi_pll_7nm_init(pdev, id); 166 + pll = msm_dsi_pll_7nm_init(pdev, type, id); 167 167 break; 168 168 default: 169 169 pll = ERR_PTR(-ENXIO);
+4 -2
drivers/gpu/drm/msm/dsi/pll/dsi_pll.h
··· 117 117 } 118 118 #endif 119 119 #ifdef CONFIG_DRM_MSM_DSI_7NM_PHY 120 - struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev, int id); 120 + struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev, 121 + enum msm_dsi_phy_type type, int id); 121 122 #else 122 123 static inline struct msm_dsi_pll * 123 - msm_dsi_pll_7nm_init(struct platform_device *pdev, int id) 124 + msm_dsi_pll_7nm_init(struct platform_device *pdev, 125 + enum msm_dsi_phy_type type, int id) 124 126 { 125 127 return ERR_PTR(-ENODEV); 126 128 }
+6 -5
drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
··· 325 325 pll_write(base + REG_DSI_7nm_PHY_PLL_FRAC_DIV_START_LOW_1, reg->frac_div_start_low); 326 326 pll_write(base + REG_DSI_7nm_PHY_PLL_FRAC_DIV_START_MID_1, reg->frac_div_start_mid); 327 327 pll_write(base + REG_DSI_7nm_PHY_PLL_FRAC_DIV_START_HIGH_1, reg->frac_div_start_high); 328 - pll_write(base + REG_DSI_7nm_PHY_PLL_PLL_LOCKDET_RATE_1, 0x40); 328 + pll_write(base + REG_DSI_7nm_PHY_PLL_PLL_LOCKDET_RATE_1, reg->pll_lockdet_rate); 329 329 pll_write(base + REG_DSI_7nm_PHY_PLL_PLL_LOCK_DELAY, 0x06); 330 330 pll_write(base + REG_DSI_7nm_PHY_PLL_CMODE_1, 0x10); /* TODO: 0x00 for CPHY */ 331 331 pll_write(base + REG_DSI_7nm_PHY_PLL_CLOCK_INVERTERS, reg->pll_clock_inverters); ··· 509 509 { 510 510 struct msm_dsi_pll *pll = hw_clk_to_pll(hw); 511 511 struct dsi_pll_7nm *pll_7nm = to_pll_7nm(pll); 512 + struct dsi_pll_config *config = &pll_7nm->pll_configuration; 512 513 void __iomem *base = pll_7nm->mmio; 513 514 u64 ref_clk = pll_7nm->vco_ref_clk_rate; 514 515 u64 vco_rate = 0x0; ··· 530 529 /* 531 530 * TODO: 532 531 * 1. Assumes prescaler is disabled 533 - * 2. Multiplier is 2^18. it should be 2^(num_of_frac_bits) 534 532 */ 535 - multiplier = 1 << 18; 533 + multiplier = 1 << config->frac_bits; 536 534 pll_freq = dec * (ref_clk * 2); 537 535 tmp64 = (ref_clk * 2 * frac); 538 536 pll_freq += div_u64(tmp64, multiplier); ··· 852 852 return ret; 853 853 } 854 854 855 - struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev, int id) 855 + struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev, 856 + enum msm_dsi_phy_type type, int id) 856 857 { 857 858 struct dsi_pll_7nm *pll_7nm; 858 859 struct msm_dsi_pll *pll; ··· 886 885 pll = &pll_7nm->base; 887 886 pll->min_rate = 1000000000UL; 888 887 pll->max_rate = 3500000000UL; 889 - if (pll->type == MSM_DSI_PHY_7NM_V4_1) { 888 + if (type == MSM_DSI_PHY_7NM_V4_1) { 890 889 pll->min_rate = 600000000UL; 891 890 pll->max_rate = (unsigned long)5000000000ULL; 892 891 /* workaround for max rate overflowing on 32-bit builds: */
+5 -2
drivers/gpu/drm/msm/msm_atomic.c
··· 57 57 58 58 static void lock_crtcs(struct msm_kms *kms, unsigned int crtc_mask) 59 59 { 60 + int crtc_index; 60 61 struct drm_crtc *crtc; 61 62 62 - for_each_crtc_mask(kms->dev, crtc, crtc_mask) 63 - mutex_lock(&kms->commit_lock[drm_crtc_index(crtc)]); 63 + for_each_crtc_mask(kms->dev, crtc, crtc_mask) { 64 + crtc_index = drm_crtc_index(crtc); 65 + mutex_lock_nested(&kms->commit_lock[crtc_index], crtc_index); 66 + } 64 67 } 65 68 66 69 static void unlock_crtcs(struct msm_kms *kms, unsigned int crtc_mask)
+13
drivers/gpu/drm/msm/msm_drv.c
··· 570 570 kfree(priv); 571 571 err_put_drm_dev: 572 572 drm_dev_put(ddev); 573 + platform_set_drvdata(pdev, NULL); 573 574 return ret; 574 575 } 575 576 ··· 1073 1072 static int __maybe_unused msm_pm_prepare(struct device *dev) 1074 1073 { 1075 1074 struct drm_device *ddev = dev_get_drvdata(dev); 1075 + struct msm_drm_private *priv = ddev ? ddev->dev_private : NULL; 1076 + 1077 + if (!priv || !priv->kms) 1078 + return 0; 1076 1079 1077 1080 return drm_mode_config_helper_suspend(ddev); 1078 1081 } ··· 1084 1079 static void __maybe_unused msm_pm_complete(struct device *dev) 1085 1080 { 1086 1081 struct drm_device *ddev = dev_get_drvdata(dev); 1082 + struct msm_drm_private *priv = ddev ? ddev->dev_private : NULL; 1083 + 1084 + if (!priv || !priv->kms) 1085 + return; 1087 1086 1088 1087 drm_mode_config_helper_resume(ddev); 1089 1088 } ··· 1320 1311 static void msm_pdev_shutdown(struct platform_device *pdev) 1321 1312 { 1322 1313 struct drm_device *drm = platform_get_drvdata(pdev); 1314 + struct msm_drm_private *priv = drm ? drm->dev_private : NULL; 1315 + 1316 + if (!priv || !priv->kms) 1317 + return; 1323 1318 1324 1319 drm_atomic_helper_shutdown(drm); 1325 1320 }
+1 -1
drivers/gpu/drm/msm/msm_fence.c
··· 45 45 int ret; 46 46 47 47 if (fence > fctx->last_fence) { 48 - DRM_ERROR("%s: waiting on invalid fence: %u (of %u)\n", 48 + DRM_ERROR_RATELIMITED("%s: waiting on invalid fence: %u (of %u)\n", 49 49 fctx->name, fence, fctx->last_fence); 50 50 return -EINVAL; 51 51 }
+2 -6
drivers/gpu/drm/msm/msm_kms.h
··· 157 157 * from the crtc's pending_timer close to end of the frame: 158 158 */ 159 159 struct mutex commit_lock[MAX_CRTCS]; 160 - struct lock_class_key commit_lock_keys[MAX_CRTCS]; 161 160 unsigned pending_crtc_mask; 162 161 struct msm_pending_timer pending_timers[MAX_CRTCS]; 163 162 }; ··· 166 167 { 167 168 unsigned i, ret; 168 169 169 - for (i = 0; i < ARRAY_SIZE(kms->commit_lock); i++) { 170 - lockdep_register_key(&kms->commit_lock_keys[i]); 171 - __mutex_init(&kms->commit_lock[i], "&kms->commit_lock[i]", 172 - &kms->commit_lock_keys[i]); 173 - } 170 + for (i = 0; i < ARRAY_SIZE(kms->commit_lock); i++) 171 + mutex_init(&kms->commit_lock[i]); 174 172 175 173 kms->funcs = funcs; 176 174
+12 -1
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 2693 2693 else 2694 2694 nouveau_display(dev)->format_modifiers = disp50xx_modifiers; 2695 2695 2696 - if (disp->disp->object.oclass >= GK104_DISP) { 2696 + /* FIXME: 256x256 cursors are supported on Kepler, however unlike Maxwell and later 2697 + * generations Kepler requires that we use small pages (4K) for cursor scanout surfaces. The 2698 + * proper fix for this is to teach nouveau to migrate fbs being used for the cursor plane to 2699 + * small page allocations in prepare_fb(). When this is implemented, we should also force 2700 + * large pages (128K) for ovly fbs in order to fix Kepler ovlys. 2701 + * But until then, just limit cursors to 128x128 - which is small enough to avoid ever using 2702 + * large pages. 2703 + */ 2704 + if (disp->disp->object.oclass >= GM107_DISP) { 2697 2705 dev->mode_config.cursor_width = 256; 2698 2706 dev->mode_config.cursor_height = 256; 2707 + } else if (disp->disp->object.oclass >= GK104_DISP) { 2708 + dev->mode_config.cursor_width = 128; 2709 + dev->mode_config.cursor_height = 128; 2699 2710 } else { 2700 2711 dev->mode_config.cursor_width = 64; 2701 2712 dev->mode_config.cursor_height = 64;
+9 -3
drivers/gpu/drm/panel/panel-dsi-cm.c
··· 37 37 u32 height_mm; 38 38 u32 max_hs_rate; 39 39 u32 max_lp_rate; 40 + bool te_support; 40 41 }; 41 42 42 43 struct panel_drv_data { ··· 335 334 if (r) 336 335 goto err; 337 336 338 - r = mipi_dsi_dcs_set_tear_on(ddata->dsi, MIPI_DSI_DCS_TEAR_MODE_VBLANK); 339 - if (r) 340 - goto err; 337 + if (ddata->panel_data->te_support) { 338 + r = mipi_dsi_dcs_set_tear_on(ddata->dsi, MIPI_DSI_DCS_TEAR_MODE_VBLANK); 339 + if (r) 340 + goto err; 341 + } 341 342 342 343 /* possible panel bug */ 343 344 msleep(100); ··· 622 619 .height_mm = 0, 623 620 .max_hs_rate = 300000000, 624 621 .max_lp_rate = 10000000, 622 + .te_support = true, 625 623 }; 626 624 627 625 static const struct dsic_panel_data himalaya_data = { ··· 633 629 .height_mm = 88, 634 630 .max_hs_rate = 300000000, 635 631 .max_lp_rate = 10000000, 632 + .te_support = false, 636 633 }; 637 634 638 635 static const struct dsic_panel_data droid4_data = { ··· 644 639 .height_mm = 89, 645 640 .max_hs_rate = 300000000, 646 641 .max_lp_rate = 10000000, 642 + .te_support = false, 647 643 }; 648 644 649 645 static const struct of_device_id dsicm_of_match[] = {
+2 -2
drivers/gpu/drm/radeon/radeon_ttm.c
··· 364 364 if (gtt->userflags & RADEON_GEM_USERPTR_ANONONLY) { 365 365 /* check that we only pin down anonymous memory 366 366 to prevent problems with writeback */ 367 - unsigned long end = gtt->userptr + ttm->num_pages * PAGE_SIZE; 367 + unsigned long end = gtt->userptr + (u64)ttm->num_pages * PAGE_SIZE; 368 368 struct vm_area_struct *vma; 369 369 vma = find_vma(gtt->usermm, gtt->userptr); 370 370 if (!vma || vma->vm_file || vma->vm_end < end) ··· 386 386 } while (pinned < ttm->num_pages); 387 387 388 388 r = sg_alloc_table_from_pages(ttm->sg, ttm->pages, ttm->num_pages, 0, 389 - ttm->num_pages << PAGE_SHIFT, 389 + (u64)ttm->num_pages << PAGE_SHIFT, 390 390 GFP_KERNEL); 391 391 if (r) 392 392 goto release_sg;
+6 -25
drivers/gpu/drm/rcar-du/rcar_du_encoder.c
··· 48 48 static const struct drm_encoder_funcs rcar_du_encoder_funcs = { 49 49 }; 50 50 51 - static void rcar_du_encoder_release(struct drm_device *dev, void *res) 52 - { 53 - struct rcar_du_encoder *renc = res; 54 - 55 - drm_encoder_cleanup(&renc->base); 56 - kfree(renc); 57 - } 58 - 59 51 int rcar_du_encoder_init(struct rcar_du_device *rcdu, 60 52 enum rcar_du_output output, 61 53 struct device_node *enc_node) 62 54 { 63 55 struct rcar_du_encoder *renc; 64 56 struct drm_bridge *bridge; 65 - int ret; 66 57 67 58 /* 68 59 * Locate the DRM bridge from the DT node. For the DPAD outputs, if the ··· 92 101 return -ENOLINK; 93 102 } 94 103 95 - renc = kzalloc(sizeof(*renc), GFP_KERNEL); 96 - if (renc == NULL) 97 - return -ENOMEM; 98 - 99 - renc->output = output; 100 - 101 104 dev_dbg(rcdu->dev, "initializing encoder %pOF for output %u\n", 102 105 enc_node, output); 103 106 104 - ret = drm_encoder_init(&rcdu->ddev, &renc->base, &rcar_du_encoder_funcs, 105 - DRM_MODE_ENCODER_NONE, NULL); 106 - if (ret < 0) { 107 - kfree(renc); 108 - return ret; 109 - } 107 + renc = drmm_encoder_alloc(&rcdu->ddev, struct rcar_du_encoder, base, 108 + &rcar_du_encoder_funcs, DRM_MODE_ENCODER_NONE, 109 + NULL); 110 + if (!renc) 111 + return -ENOMEM; 110 112 111 - ret = drmm_add_action_or_reset(&rcdu->ddev, rcar_du_encoder_release, 112 - renc); 113 - if (ret) 114 - return ret; 113 + renc->output = output; 115 114 116 115 /* 117 116 * Attach the bridge to the encoder. The bridge will create the
+13 -17
drivers/gpu/drm/tegra/dc.c
··· 1688 1688 dev_err(dc->dev, 1689 1689 "failed to set clock rate to %lu Hz\n", 1690 1690 state->pclk); 1691 + 1692 + err = clk_set_rate(dc->clk, state->pclk); 1693 + if (err < 0) 1694 + dev_err(dc->dev, "failed to set clock %pC to %lu Hz: %d\n", 1695 + dc->clk, state->pclk, err); 1691 1696 } 1692 1697 1693 1698 DRM_DEBUG_KMS("rate: %lu, div: %u\n", clk_get_rate(dc->clk), ··· 1703 1698 value = SHIFT_CLK_DIVIDER(state->div) | PIXEL_CLK_DIVIDER_PCD1; 1704 1699 tegra_dc_writel(dc, value, DC_DISP_DISP_CLOCK_CONTROL); 1705 1700 } 1706 - 1707 - err = clk_set_rate(dc->clk, state->pclk); 1708 - if (err < 0) 1709 - dev_err(dc->dev, "failed to set clock %pC to %lu Hz: %d\n", 1710 - dc->clk, state->pclk, err); 1711 1701 } 1712 1702 1713 1703 static void tegra_dc_stop(struct tegra_dc *dc) ··· 2501 2501 * POWER_CONTROL registers during CRTC enabling. 2502 2502 */ 2503 2503 if (dc->soc->coupled_pm && dc->pipe == 1) { 2504 - u32 flags = DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_CONSUMER; 2505 - struct device_link *link; 2506 - struct device *partner; 2504 + struct device *companion; 2505 + struct tegra_dc *parent; 2507 2506 2508 - partner = driver_find_device(dc->dev->driver, NULL, NULL, 2509 - tegra_dc_match_by_pipe); 2510 - if (!partner) 2507 + companion = driver_find_device(dc->dev->driver, NULL, (const void *)0, 2508 + tegra_dc_match_by_pipe); 2509 + if (!companion) 2511 2510 return -EPROBE_DEFER; 2512 2511 2513 - link = device_link_add(dc->dev, partner, flags); 2514 - if (!link) { 2515 - dev_err(dc->dev, "failed to link controllers\n"); 2516 - return -EINVAL; 2517 - } 2512 + parent = dev_get_drvdata(companion); 2513 + dc->client.parent = &parent->client; 2518 2514 2519 - dev_dbg(dc->dev, "coupled to %s\n", dev_name(partner)); 2515 + dev_dbg(dc->dev, "coupled to %s\n", dev_name(companion)); 2520 2516 } 2521 2517 2522 2518 return 0;
+7
drivers/gpu/drm/tegra/sor.c
··· 3115 3115 * kernel is possible. 3116 3116 */ 3117 3117 if (sor->rst) { 3118 + err = pm_runtime_resume_and_get(sor->dev); 3119 + if (err < 0) { 3120 + dev_err(sor->dev, "failed to get runtime PM: %d\n", err); 3121 + return err; 3122 + } 3123 + 3118 3124 err = reset_control_acquire(sor->rst); 3119 3125 if (err < 0) { 3120 3126 dev_err(sor->dev, "failed to acquire SOR reset: %d\n", ··· 3154 3148 } 3155 3149 3156 3150 reset_control_release(sor->rst); 3151 + pm_runtime_put(sor->dev); 3157 3152 } 3158 3153 3159 3154 err = clk_prepare_enable(sor->clk_safe);
+17
drivers/gpu/drm/vc4/vc4_crtc.c
··· 210 210 { 211 211 const struct vc4_crtc_data *crtc_data = vc4_crtc_to_vc4_crtc_data(vc4_crtc); 212 212 const struct vc4_pv_data *pv_data = vc4_crtc_to_vc4_pv_data(vc4_crtc); 213 + struct vc4_dev *vc4 = to_vc4_dev(vc4_crtc->base.dev); 213 214 u32 fifo_len_bytes = pv_data->fifo_depth; 214 215 215 216 /* ··· 238 237 */ 239 238 if (crtc_data->hvs_output == 5) 240 239 return 32; 240 + 241 + /* 242 + * It looks like in some situations, we will overflow 243 + * the PixelValve FIFO (with the bit 10 of PV stat being 244 + * set) and stall the HVS / PV, eventually resulting in 245 + * a page flip timeout. 246 + * 247 + * Displaying the video overlay during a playback with 248 + * Kodi on an RPi3 seems to be a great solution with a 249 + * failure rate around 50%. 250 + * 251 + * Removing 1 from the FIFO full level however 252 + * seems to completely remove that issue. 253 + */ 254 + if (!vc4->hvs->hvs5) 255 + return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1; 241 256 242 257 return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX; 243 258 }
-1
drivers/gpu/drm/vc4/vc4_plane.c
··· 1146 1146 plane->state->src_y = state->src_y; 1147 1147 plane->state->src_w = state->src_w; 1148 1148 plane->state->src_h = state->src_h; 1149 - plane->state->src_h = state->src_h; 1150 1149 plane->state->alpha = state->alpha; 1151 1150 plane->state->pixel_blend_mode = state->pixel_blend_mode; 1152 1151 plane->state->rotation = state->rotation;
+4 -2
drivers/gpu/drm/xen/xen_drm_front.c
··· 521 521 drm_dev = drm_dev_alloc(&xen_drm_driver, dev); 522 522 if (IS_ERR(drm_dev)) { 523 523 ret = PTR_ERR(drm_dev); 524 - goto fail; 524 + goto fail_dev; 525 525 } 526 526 527 527 drm_info->drm_dev = drm_dev; ··· 551 551 drm_kms_helper_poll_fini(drm_dev); 552 552 drm_mode_config_cleanup(drm_dev); 553 553 drm_dev_put(drm_dev); 554 - fail: 554 + fail_dev: 555 555 kfree(drm_info); 556 + front_info->drm_info = NULL; 557 + fail: 556 558 return ret; 557 559 } 558 560
-1
drivers/gpu/drm/xen/xen_drm_front_conn.h
··· 16 16 struct drm_connector; 17 17 struct xen_drm_front_drm_info; 18 18 19 - struct xen_drm_front_drm_info; 20 19 21 20 int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info, 22 21 struct drm_connector *connector);
+6 -4
drivers/gpu/host1x/bus.c
··· 705 705 EXPORT_SYMBOL(host1x_driver_unregister); 706 706 707 707 /** 708 - * host1x_client_register() - register a host1x client 708 + * __host1x_client_register() - register a host1x client 709 709 * @client: host1x client 710 + * @key: lock class key for the client-specific mutex 710 711 * 711 712 * Registers a host1x client with each host1x controller instance. Note that 712 713 * each client will only match their parent host1x controller and will only be ··· 716 715 * device and call host1x_device_init(), which will in turn call each client's 717 716 * &host1x_client_ops.init implementation. 718 717 */ 719 - int host1x_client_register(struct host1x_client *client) 718 + int __host1x_client_register(struct host1x_client *client, 719 + struct lock_class_key *key) 720 720 { 721 721 struct host1x *host1x; 722 722 int err; 723 723 724 724 INIT_LIST_HEAD(&client->list); 725 - mutex_init(&client->lock); 725 + __mutex_init(&client->lock, "host1x client lock", key); 726 726 client->usecount = 0; 727 727 728 728 mutex_lock(&devices_lock); ··· 744 742 745 743 return 0; 746 744 } 747 - EXPORT_SYMBOL(host1x_client_register); 745 + EXPORT_SYMBOL(__host1x_client_register); 748 746 749 747 /** 750 748 * host1x_client_unregister() - unregister a host1x client
+3 -1
drivers/infiniband/core/addr.c
··· 76 76 77 77 static const struct nla_policy ib_nl_addr_policy[LS_NLA_TYPE_MAX] = { 78 78 [LS_NLA_TYPE_DGID] = {.type = NLA_BINARY, 79 - .len = sizeof(struct rdma_nla_ls_gid)}, 79 + .len = sizeof(struct rdma_nla_ls_gid), 80 + .validation_type = NLA_VALIDATE_MIN, 81 + .min = sizeof(struct rdma_nla_ls_gid)}, 80 82 }; 81 83 82 84 static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh)
+2 -1
drivers/infiniband/hw/cxgb4/cm.c
··· 3616 3616 c4iw_init_wr_wait(ep->com.wr_waitp); 3617 3617 err = cxgb4_remove_server( 3618 3618 ep->com.dev->rdev.lldi.ports[0], ep->stid, 3619 - ep->com.dev->rdev.lldi.rxq_ids[0], true); 3619 + ep->com.dev->rdev.lldi.rxq_ids[0], 3620 + ep->com.local_addr.ss_family == AF_INET6); 3620 3621 if (err) 3621 3622 goto done; 3622 3623 err = c4iw_wait_for_reply(&ep->com.dev->rdev, ep->com.wr_waitp,
+5 -16
drivers/infiniband/hw/hfi1/affinity.c
··· 632 632 */ 633 633 int hfi1_dev_affinity_init(struct hfi1_devdata *dd) 634 634 { 635 - int node = pcibus_to_node(dd->pcidev->bus); 636 635 struct hfi1_affinity_node *entry; 637 636 const struct cpumask *local_mask; 638 637 int curr_cpu, possible, i, ret; 639 638 bool new_entry = false; 640 - 641 - /* 642 - * If the BIOS does not have the NUMA node information set, select 643 - * NUMA 0 so we get consistent performance. 644 - */ 645 - if (node < 0) { 646 - dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n"); 647 - node = 0; 648 - } 649 - dd->node = node; 650 639 651 640 local_mask = cpumask_of_node(dd->node); 652 641 if (cpumask_first(local_mask) >= nr_cpu_ids) ··· 649 660 * create an entry in the global affinity structure and initialize it. 650 661 */ 651 662 if (!entry) { 652 - entry = node_affinity_allocate(node); 663 + entry = node_affinity_allocate(dd->node); 653 664 if (!entry) { 654 665 dd_dev_err(dd, 655 666 "Unable to allocate global affinity node\n"); ··· 740 751 if (new_entry) 741 752 node_affinity_add_tail(entry); 742 753 754 + dd->affinity_entry = entry; 743 755 mutex_unlock(&node_affinity.lock); 744 756 745 757 return 0; ··· 756 766 { 757 767 struct hfi1_affinity_node *entry; 758 768 759 - if (dd->node < 0) 760 - return; 761 - 762 769 mutex_lock(&node_affinity.lock); 770 + if (!dd->affinity_entry) 771 + goto unlock; 763 772 entry = node_affinity_lookup(dd->node); 764 773 if (!entry) 765 774 goto unlock; ··· 769 780 */ 770 781 _dev_comp_vect_cpu_mask_clean_up(dd, entry); 771 782 unlock: 783 + dd->affinity_entry = NULL; 772 784 mutex_unlock(&node_affinity.lock); 773 - dd->node = NUMA_NO_NODE; 774 785 } 775 786 776 787 /*
+1
drivers/infiniband/hw/hfi1/hfi.h
··· 1409 1409 spinlock_t irq_src_lock; 1410 1410 int vnic_num_vports; 1411 1411 struct net_device *dummy_netdev; 1412 + struct hfi1_affinity_node *affinity_entry; 1412 1413 1413 1414 /* Keeps track of IPoIB RSM rule users */ 1414 1415 atomic_t ipoib_rsm_usr_num;
+9 -1
drivers/infiniband/hw/hfi1/init.c
··· 1277 1277 dd->pport = (struct hfi1_pportdata *)(dd + 1); 1278 1278 dd->pcidev = pdev; 1279 1279 pci_set_drvdata(pdev, dd); 1280 - dd->node = NUMA_NO_NODE; 1281 1280 1282 1281 ret = xa_alloc_irq(&hfi1_dev_table, &dd->unit, dd, xa_limit_32b, 1283 1282 GFP_KERNEL); ··· 1286 1287 goto bail; 1287 1288 } 1288 1289 rvt_set_ibdev_name(&dd->verbs_dev.rdi, "%s_%d", class_name(), dd->unit); 1290 + /* 1291 + * If the BIOS does not have the NUMA node information set, select 1292 + * NUMA 0 so we get consistent performance. 1293 + */ 1294 + dd->node = pcibus_to_node(pdev->bus); 1295 + if (dd->node == NUMA_NO_NODE) { 1296 + dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n"); 1297 + dd->node = 0; 1298 + } 1289 1299 1290 1300 /* 1291 1301 * Initialize all locks for the device. This needs to be as early as
+1 -2
drivers/infiniband/hw/hfi1/netdev_rx.c
··· 173 173 return 0; 174 174 } 175 175 176 - cpumask_and(node_cpu_mask, cpu_mask, 177 - cpumask_of_node(pcibus_to_node(dd->pcidev->bus))); 176 + cpumask_and(node_cpu_mask, cpu_mask, cpumask_of_node(dd->node)); 178 177 179 178 available_cpus = cpumask_weight(node_cpu_mask); 180 179
+2 -1
drivers/infiniband/hw/qedr/verbs.c
··· 1244 1244 * TGT QP isn't associated with RQ/SQ 1245 1245 */ 1246 1246 if ((attrs->qp_type != IB_QPT_GSI) && (dev->gsi_qp_created) && 1247 - (attrs->qp_type != IB_QPT_XRC_TGT)) { 1247 + (attrs->qp_type != IB_QPT_XRC_TGT) && 1248 + (attrs->qp_type != IB_QPT_XRC_INI)) { 1248 1249 struct qedr_cq *send_cq = get_qedr_cq(attrs->send_cq); 1249 1250 struct qedr_cq *recv_cq = get_qedr_cq(attrs->recv_cq); 1250 1251
+1 -1
drivers/infiniband/ulp/rtrs/rtrs-clt.c
··· 2720 2720 2721 2721 /* Now it is safe to iterate over all paths without locks */ 2722 2722 list_for_each_entry_safe(sess, tmp, &clt->paths_list, s.entry) { 2723 - rtrs_clt_destroy_sess_files(sess, NULL); 2724 2723 rtrs_clt_close_conns(sess, true); 2724 + rtrs_clt_destroy_sess_files(sess, NULL); 2725 2725 kobject_put(&sess->kobj); 2726 2726 } 2727 2727 free_clt(clt);
+1 -1
drivers/interconnect/bulk.c
··· 53 53 EXPORT_SYMBOL_GPL(icc_bulk_put); 54 54 55 55 /** 56 - * icc_bulk_set() - set bandwidth to a set of paths 56 + * icc_bulk_set_bw() - set bandwidth to a set of paths 57 57 * @num_paths: the number of icc_bulk_data 58 58 * @paths: the icc_bulk_data table containing the paths and bandwidth 59 59 *
+2
drivers/interconnect/core.c
··· 942 942 GFP_KERNEL); 943 943 if (new) 944 944 src->links = new; 945 + else 946 + ret = -ENOMEM; 945 947 946 948 out: 947 949 mutex_unlock(&icc_lock);
+8 -8
drivers/interconnect/qcom/msm8939.c
··· 131 131 DEFINE_QNODE(mas_pcnoc_sdcc_2, MSM8939_MASTER_SDCC_2, 8, -1, -1, MSM8939_PNOC_INT_1); 132 132 DEFINE_QNODE(mas_qdss_bam, MSM8939_MASTER_QDSS_BAM, 8, -1, -1, MSM8939_SNOC_QDSS_INT); 133 133 DEFINE_QNODE(mas_qdss_etr, MSM8939_MASTER_QDSS_ETR, 8, -1, -1, MSM8939_SNOC_QDSS_INT); 134 - DEFINE_QNODE(mas_snoc_cfg, MSM8939_MASTER_SNOC_CFG, 4, 20, -1, MSM8939_SLAVE_SRVC_SNOC); 134 + DEFINE_QNODE(mas_snoc_cfg, MSM8939_MASTER_SNOC_CFG, 4, -1, -1, MSM8939_SLAVE_SRVC_SNOC); 135 135 DEFINE_QNODE(mas_spdm, MSM8939_MASTER_SPDM, 4, -1, -1, MSM8939_PNOC_MAS_0); 136 136 DEFINE_QNODE(mas_tcu0, MSM8939_MASTER_TCU0, 16, -1, -1, MSM8939_SLAVE_EBI_CH0, MSM8939_BIMC_SNOC_MAS, MSM8939_SLAVE_AMPSS_L2); 137 137 DEFINE_QNODE(mas_usb_hs1, MSM8939_MASTER_USB_HS1, 4, -1, -1, MSM8939_PNOC_MAS_1); ··· 156 156 DEFINE_QNODE(pcnoc_snoc_slv, MSM8939_PNOC_SNOC_SLV, 8, -1, 45, MSM8939_SNOC_INT_0, MSM8939_SNOC_INT_BIMC, MSM8939_SNOC_INT_1); 157 157 DEFINE_QNODE(qdss_int, MSM8939_SNOC_QDSS_INT, 8, -1, -1, MSM8939_SNOC_INT_0, MSM8939_SNOC_INT_BIMC); 158 158 DEFINE_QNODE(slv_apps_l2, MSM8939_SLAVE_AMPSS_L2, 16, -1, -1, 0); 159 - DEFINE_QNODE(slv_apss, MSM8939_SLAVE_APSS, 4, -1, 20, 0); 159 + DEFINE_QNODE(slv_apss, MSM8939_SLAVE_APSS, 4, -1, -1, 0); 160 160 DEFINE_QNODE(slv_audio, MSM8939_SLAVE_LPASS, 4, -1, -1, 0); 161 161 DEFINE_QNODE(slv_bimc_cfg, MSM8939_SLAVE_BIMC_CFG, 4, -1, -1, 0); 162 162 DEFINE_QNODE(slv_blsp_1, MSM8939_SLAVE_BLSP_1, 4, -1, -1, 0); 163 163 DEFINE_QNODE(slv_boot_rom, MSM8939_SLAVE_BOOT_ROM, 4, -1, -1, 0); 164 164 DEFINE_QNODE(slv_camera_cfg, MSM8939_SLAVE_CAMERA_CFG, 4, -1, -1, 0); 165 - DEFINE_QNODE(slv_cats_0, MSM8939_SLAVE_CATS_128, 16, -1, 106, 0); 166 - DEFINE_QNODE(slv_cats_1, MSM8939_SLAVE_OCMEM_64, 8, -1, 107, 0); 165 + DEFINE_QNODE(slv_cats_0, MSM8939_SLAVE_CATS_128, 16, -1, -1, 0); 166 + DEFINE_QNODE(slv_cats_1, MSM8939_SLAVE_OCMEM_64, 8, -1, -1, 0); 167 167 DEFINE_QNODE(slv_clk_ctl, MSM8939_SLAVE_CLK_CTL, 4, -1, -1, 0); 168 168 DEFINE_QNODE(slv_crypto_0_cfg, MSM8939_SLAVE_CRYPTO_0_CFG, 4, -1, -1, 0); 169 169 DEFINE_QNODE(slv_dehr_cfg, MSM8939_SLAVE_DEHR_CFG, 4, -1, -1, 0); ··· 187 187 DEFINE_QNODE(slv_security, MSM8939_SLAVE_SECURITY, 4, -1, -1, 0); 188 188 DEFINE_QNODE(slv_snoc_cfg, MSM8939_SLAVE_SNOC_CFG, 4, -1, -1, 0); 189 189 DEFINE_QNODE(slv_spdm, MSM8939_SLAVE_SPDM, 4, -1, -1, 0); 190 - DEFINE_QNODE(slv_srvc_snoc, MSM8939_SLAVE_SRVC_SNOC, 8, -1, 29, 0); 190 + DEFINE_QNODE(slv_srvc_snoc, MSM8939_SLAVE_SRVC_SNOC, 8, -1, -1, 0); 191 191 DEFINE_QNODE(slv_tcsr, MSM8939_SLAVE_TCSR, 4, -1, -1, 0); 192 192 DEFINE_QNODE(slv_tlmm, MSM8939_SLAVE_TLMM, 4, -1, -1, 0); 193 193 DEFINE_QNODE(slv_usb_hs1, MSM8939_SLAVE_USB_HS1, 4, -1, -1, 0); 194 194 DEFINE_QNODE(slv_usb_hs2, MSM8939_SLAVE_USB_HS2, 4, -1, -1, 0); 195 195 DEFINE_QNODE(slv_venus_cfg, MSM8939_SLAVE_VENUS_CFG, 4, -1, -1, 0); 196 - DEFINE_QNODE(snoc_bimc_0_mas, MSM8939_SNOC_BIMC_0_MAS, 16, 3, -1, MSM8939_SNOC_BIMC_0_SLV); 197 - DEFINE_QNODE(snoc_bimc_0_slv, MSM8939_SNOC_BIMC_0_SLV, 16, -1, 24, MSM8939_SLAVE_EBI_CH0); 196 + DEFINE_QNODE(snoc_bimc_0_mas, MSM8939_SNOC_BIMC_0_MAS, 16, -1, -1, MSM8939_SNOC_BIMC_0_SLV); 197 + DEFINE_QNODE(snoc_bimc_0_slv, MSM8939_SNOC_BIMC_0_SLV, 16, -1, -1, MSM8939_SLAVE_EBI_CH0); 198 198 DEFINE_QNODE(snoc_bimc_1_mas, MSM8939_SNOC_BIMC_1_MAS, 16, 76, -1, MSM8939_SNOC_BIMC_1_SLV); 199 199 DEFINE_QNODE(snoc_bimc_1_slv, MSM8939_SNOC_BIMC_1_SLV, 16, -1, 104, MSM8939_SLAVE_EBI_CH0); 200 200 DEFINE_QNODE(snoc_bimc_2_mas, MSM8939_SNOC_BIMC_2_MAS, 16, -1, -1, MSM8939_SNOC_BIMC_2_SLV); 201 201 DEFINE_QNODE(snoc_bimc_2_slv, MSM8939_SNOC_BIMC_2_SLV, 16, -1, -1, MSM8939_SLAVE_EBI_CH0); 202 202 DEFINE_QNODE(snoc_int_0, MSM8939_SNOC_INT_0, 8, 99, 130, MSM8939_SLAVE_QDSS_STM, MSM8939_SLAVE_IMEM, MSM8939_SNOC_PNOC_MAS); 203 - DEFINE_QNODE(snoc_int_1, MSM8939_SNOC_INT_1, 8, 100, 131, MSM8939_SLAVE_APSS, MSM8939_SLAVE_CATS_128, MSM8939_SLAVE_OCMEM_64); 203 + DEFINE_QNODE(snoc_int_1, MSM8939_SNOC_INT_1, 8, -1, -1, MSM8939_SLAVE_APSS, MSM8939_SLAVE_CATS_128, MSM8939_SLAVE_OCMEM_64); 204 204 DEFINE_QNODE(snoc_int_bimc, MSM8939_SNOC_INT_BIMC, 8, 101, 132, MSM8939_SNOC_BIMC_1_MAS); 205 205 DEFINE_QNODE(snoc_pcnoc_mas, MSM8939_SNOC_PNOC_MAS, 8, -1, -1, MSM8939_SNOC_PNOC_SLV); 206 206 DEFINE_QNODE(snoc_pcnoc_slv, MSM8939_SNOC_PNOC_SLV, 8, -1, -1, MSM8939_PNOC_INT_0);
+1 -1
drivers/md/dm-ioctl.c
··· 529 529 * Grab our output buffer. 530 530 */ 531 531 nl = orig_nl = get_result_buffer(param, param_size, &len); 532 - if (len < needed) { 532 + if (len < needed || len < sizeof(nl->dev)) { 533 533 param->flags |= DM_BUFFER_FULL_FLAG; 534 534 goto out; 535 535 }
+25 -8
drivers/md/dm-table.c
··· 1594 1594 return blk_queue_zoned_model(q) != *zoned_model; 1595 1595 } 1596 1596 1597 + /* 1598 + * Check the device zoned model based on the target feature flag. If the target 1599 + * has the DM_TARGET_ZONED_HM feature flag set, host-managed zoned devices are 1600 + * also accepted but all devices must have the same zoned model. If the target 1601 + * has the DM_TARGET_MIXED_ZONED_MODEL feature set, the devices can have any 1602 + * zoned model with all zoned devices having the same zone size. 1603 + */ 1597 1604 static bool dm_table_supports_zoned_model(struct dm_table *t, 1598 1605 enum blk_zoned_model zoned_model) 1599 1606 { ··· 1610 1603 for (i = 0; i < dm_table_get_num_targets(t); i++) { 1611 1604 ti = dm_table_get_target(t, i); 1612 1605 1613 - if (zoned_model == BLK_ZONED_HM && 1614 - !dm_target_supports_zoned_hm(ti->type)) 1615 - return false; 1616 - 1617 - if (!ti->type->iterate_devices || 1618 - ti->type->iterate_devices(ti, device_not_zoned_model, &zoned_model)) 1619 - return false; 1606 + if (dm_target_supports_zoned_hm(ti->type)) { 1607 + if (!ti->type->iterate_devices || 1608 + ti->type->iterate_devices(ti, device_not_zoned_model, 1609 + &zoned_model)) 1610 + return false; 1611 + } else if (!dm_target_supports_mixed_zoned_model(ti->type)) { 1612 + if (zoned_model == BLK_ZONED_HM) 1613 + return false; 1614 + } 1620 1615 } 1621 1616 1622 1617 return true; ··· 1630 1621 struct request_queue *q = bdev_get_queue(dev->bdev); 1631 1622 unsigned int *zone_sectors = data; 1632 1623 1624 + if (!blk_queue_is_zoned(q)) 1625 + return 0; 1626 + 1633 1627 return blk_queue_zone_sectors(q) != *zone_sectors; 1634 1628 } 1635 1629 1630 + /* 1631 + * Check consistency of zoned model and zone sectors across all targets. For 1632 + * zone sectors, if the destination device is a zoned block device, it shall 1633 + * have the specified zone_sectors. 1634 + */ 1636 1635 static int validate_hardware_zoned_model(struct dm_table *table, 1637 1636 enum blk_zoned_model zoned_model, 1638 1637 unsigned int zone_sectors) ··· 1659 1642 return -EINVAL; 1660 1643 1661 1644 if (dm_table_any_dev_attr(table, device_not_matches_zone_sectors, &zone_sectors)) { 1662 - DMERR("%s: zone sectors is not consistent across all devices", 1645 + DMERR("%s: zone sectors is not consistent across all zoned devices", 1663 1646 dm_device_name(table->md)); 1664 1647 return -EINVAL; 1665 1648 }
+1 -1
drivers/md/dm-verity-target.c
··· 34 34 #define DM_VERITY_OPT_IGN_ZEROES "ignore_zero_blocks" 35 35 #define DM_VERITY_OPT_AT_MOST_ONCE "check_at_most_once" 36 36 37 - #define DM_VERITY_OPTS_MAX (2 + DM_VERITY_OPTS_FEC + \ 37 + #define DM_VERITY_OPTS_MAX (3 + DM_VERITY_OPTS_FEC + \ 38 38 DM_VERITY_ROOT_HASH_VERIFICATION_OPTS) 39 39 40 40 static unsigned dm_verity_prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
+1 -1
drivers/md/dm-zoned-target.c
··· 1143 1143 static struct target_type dmz_type = { 1144 1144 .name = "zoned", 1145 1145 .version = {2, 0, 0}, 1146 - .features = DM_TARGET_SINGLETON | DM_TARGET_ZONED_HM, 1146 + .features = DM_TARGET_SINGLETON | DM_TARGET_MIXED_ZONED_MODEL, 1147 1147 .module = THIS_MODULE, 1148 1148 .ctr = dmz_ctr, 1149 1149 .dtr = dmz_dtr,
+4 -1
drivers/md/dm.c
··· 2036 2036 if (size != dm_get_size(md)) 2037 2037 memset(&md->geometry, 0, sizeof(md->geometry)); 2038 2038 2039 - set_capacity_and_notify(md->disk, size); 2039 + if (!get_capacity(md->disk)) 2040 + set_capacity(md->disk, size); 2041 + else 2042 + set_capacity_and_notify(md->disk, size); 2040 2043 2041 2044 dm_table_event_callback(t, event_callback, md); 2042 2045
+7 -10
drivers/misc/mei/client.c
··· 2286 2286 if (buffer_id == 0) 2287 2287 return -EINVAL; 2288 2288 2289 - if (!mei_cl_is_connected(cl)) 2290 - return -ENODEV; 2289 + if (mei_cl_is_connected(cl)) 2290 + return -EPROTO; 2291 2291 2292 2292 if (cl->dma_mapped) 2293 2293 return -EPROTO; ··· 2327 2327 2328 2328 mutex_unlock(&dev->device_lock); 2329 2329 wait_event_timeout(cl->wait, 2330 - cl->dma_mapped || 2331 - cl->status || 2332 - !mei_cl_is_connected(cl), 2330 + cl->dma_mapped || cl->status, 2333 2331 mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT)); 2334 2332 mutex_lock(&dev->device_lock); 2335 2333 ··· 2374 2376 return -EOPNOTSUPP; 2375 2377 } 2376 2378 2377 - if (!mei_cl_is_connected(cl)) 2378 - return -ENODEV; 2379 + /* do not allow unmap for connected client */ 2380 + if (mei_cl_is_connected(cl)) 2381 + return -EPROTO; 2379 2382 2380 2383 if (!cl->dma_mapped) 2381 2384 return -EPROTO; ··· 2404 2405 2405 2406 mutex_unlock(&dev->device_lock); 2406 2407 wait_event_timeout(cl->wait, 2407 - !cl->dma_mapped || 2408 - cl->status || 2409 - !mei_cl_is_connected(cl), 2408 + !cl->dma_mapped || cl->status, 2410 2409 mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT)); 2411 2410 mutex_lock(&dev->device_lock); 2412 2411
+18 -6
drivers/net/can/spi/mcp251x.c
··· 314 314 return ret; 315 315 } 316 316 317 + static int mcp251x_spi_write(struct spi_device *spi, int len) 318 + { 319 + struct mcp251x_priv *priv = spi_get_drvdata(spi); 320 + int ret; 321 + 322 + ret = spi_write(spi, priv->spi_tx_buf, len); 323 + if (ret) 324 + dev_err(&spi->dev, "spi write failed: ret = %d\n", ret); 325 + 326 + return ret; 327 + } 328 + 317 329 static u8 mcp251x_read_reg(struct spi_device *spi, u8 reg) 318 330 { 319 331 struct mcp251x_priv *priv = spi_get_drvdata(spi); ··· 373 361 priv->spi_tx_buf[1] = reg; 374 362 priv->spi_tx_buf[2] = val; 375 363 376 - mcp251x_spi_trans(spi, 3); 364 + mcp251x_spi_write(spi, 3); 377 365 } 378 366 379 367 static void mcp251x_write_2regs(struct spi_device *spi, u8 reg, u8 v1, u8 v2) ··· 385 373 priv->spi_tx_buf[2] = v1; 386 374 priv->spi_tx_buf[3] = v2; 387 375 388 - mcp251x_spi_trans(spi, 4); 376 + mcp251x_spi_write(spi, 4); 389 377 } 390 378 391 379 static void mcp251x_write_bits(struct spi_device *spi, u8 reg, ··· 398 386 priv->spi_tx_buf[2] = mask; 399 387 priv->spi_tx_buf[3] = val; 400 388 401 - mcp251x_spi_trans(spi, 4); 389 + mcp251x_spi_write(spi, 4); 402 390 } 403 391 404 392 static u8 mcp251x_read_stat(struct spi_device *spi) ··· 630 618 buf[i]); 631 619 } else { 632 620 memcpy(priv->spi_tx_buf, buf, TXBDAT_OFF + len); 633 - mcp251x_spi_trans(spi, TXBDAT_OFF + len); 621 + mcp251x_spi_write(spi, TXBDAT_OFF + len); 634 622 } 635 623 } 636 624 ··· 662 650 663 651 /* use INSTRUCTION_RTS, to avoid "repeated frame problem" */ 664 652 priv->spi_tx_buf[0] = INSTRUCTION_RTS(1 << tx_buf_idx); 665 - mcp251x_spi_trans(priv->spi, 1); 653 + mcp251x_spi_write(priv->spi, 1); 666 654 } 667 655 668 656 static void mcp251x_hw_rx_frame(struct spi_device *spi, u8 *buf, ··· 900 888 mdelay(MCP251X_OST_DELAY_MS); 901 889 902 890 priv->spi_tx_buf[0] = INSTRUCTION_RESET; 903 - ret = mcp251x_spi_trans(spi, 1); 891 + ret = mcp251x_spi_write(spi, 1); 904 892 if (ret) 905 893 return ret; 906 894
+5 -1
drivers/net/can/usb/peak_usb/pcan_usb_core.c
··· 861 861 if (dev->adapter->dev_set_bus) { 862 862 err = dev->adapter->dev_set_bus(dev, 0); 863 863 if (err) 864 - goto lbl_unregister_candev; 864 + goto adap_dev_free; 865 865 } 866 866 867 867 /* get device number early */ ··· 872 872 peak_usb_adapter->name, ctrl_idx, dev->device_number); 873 873 874 874 return 0; 875 + 876 + adap_dev_free: 877 + if (dev->adapter->dev_free) 878 + dev->adapter->dev_free(dev); 875 879 876 880 lbl_unregister_candev: 877 881 unregister_candev(netdev);
+172 -21
drivers/net/dsa/lantiq_gswip.c
··· 93 93 94 94 /* GSWIP MII Registers */ 95 95 #define GSWIP_MII_CFGp(p) (0x2 * (p)) 96 + #define GSWIP_MII_CFG_RESET BIT(15) 96 97 #define GSWIP_MII_CFG_EN BIT(14) 98 + #define GSWIP_MII_CFG_ISOLATE BIT(13) 97 99 #define GSWIP_MII_CFG_LDCLKDIS BIT(12) 100 + #define GSWIP_MII_CFG_RGMII_IBS BIT(8) 101 + #define GSWIP_MII_CFG_RMII_CLK BIT(7) 98 102 #define GSWIP_MII_CFG_MODE_MIIP 0x0 99 103 #define GSWIP_MII_CFG_MODE_MIIM 0x1 100 104 #define GSWIP_MII_CFG_MODE_RMIIP 0x2 ··· 195 191 #define GSWIP_PCE_DEFPVID(p) (0x486 + ((p) * 0xA)) 196 192 197 193 #define GSWIP_MAC_FLEN 0x8C5 194 + #define GSWIP_MAC_CTRL_0p(p) (0x903 + ((p) * 0xC)) 195 + #define GSWIP_MAC_CTRL_0_PADEN BIT(8) 196 + #define GSWIP_MAC_CTRL_0_FCS_EN BIT(7) 197 + #define GSWIP_MAC_CTRL_0_FCON_MASK 0x0070 198 + #define GSWIP_MAC_CTRL_0_FCON_AUTO 0x0000 199 + #define GSWIP_MAC_CTRL_0_FCON_RX 0x0010 200 + #define GSWIP_MAC_CTRL_0_FCON_TX 0x0020 201 + #define GSWIP_MAC_CTRL_0_FCON_RXTX 0x0030 202 + #define GSWIP_MAC_CTRL_0_FCON_NONE 0x0040 203 + #define GSWIP_MAC_CTRL_0_FDUP_MASK 0x000C 204 + #define GSWIP_MAC_CTRL_0_FDUP_AUTO 0x0000 205 + #define GSWIP_MAC_CTRL_0_FDUP_EN 0x0004 206 + #define GSWIP_MAC_CTRL_0_FDUP_DIS 0x000C 207 + #define GSWIP_MAC_CTRL_0_GMII_MASK 0x0003 208 + #define GSWIP_MAC_CTRL_0_GMII_AUTO 0x0000 209 + #define GSWIP_MAC_CTRL_0_GMII_MII 0x0001 210 + #define GSWIP_MAC_CTRL_0_GMII_RGMII 0x0002 198 211 #define GSWIP_MAC_CTRL_2p(p) (0x905 + ((p) * 0xC)) 199 212 #define GSWIP_MAC_CTRL_2_MLEN BIT(3) /* Maximum Untagged Frame Lnegth */ 200 213 ··· 676 655 GSWIP_SDMA_PCTRLp(port)); 677 656 678 657 if (!dsa_is_cpu_port(ds, port)) { 679 - u32 macconf = GSWIP_MDIO_PHY_LINK_AUTO | 680 - GSWIP_MDIO_PHY_SPEED_AUTO | 681 - GSWIP_MDIO_PHY_FDUP_AUTO | 682 - GSWIP_MDIO_PHY_FCONTX_AUTO | 683 - GSWIP_MDIO_PHY_FCONRX_AUTO | 684 - (phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK); 658 + u32 mdio_phy = 0; 685 - 686 - gswip_mdio_w(priv, macconf, GSWIP_MDIO_PHYp(port)); 687 - /* Activate MDIO auto polling */ 688 - gswip_mdio_mask(priv, 0, BIT(port), GSWIP_MDIO_MDC_CFG0); 660 + if (phydev) 661 + mdio_phy = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK; 662 + 663 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_ADDR_MASK, mdio_phy, 664 + GSWIP_MDIO_PHYp(port)); 689 665 } 690 666 691 667 return 0; ··· 694 676 695 677 if (!dsa_is_user_port(ds, port)) 696 678 return; 697 - 698 - if (!dsa_is_cpu_port(ds, port)) { 699 - gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_DOWN, 700 - GSWIP_MDIO_PHY_LINK_MASK, 701 - GSWIP_MDIO_PHYp(port)); 702 - /* Deactivate MDIO auto polling */ 703 - gswip_mdio_mask(priv, BIT(port), 0, GSWIP_MDIO_MDC_CFG0); 704 - } 705 679 706 680 gswip_switch_mask(priv, GSWIP_FDMA_PCTRL_EN, 0, 707 681 GSWIP_FDMA_PCTRLp(port)); ··· 806 796 gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP2); 807 797 gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP3); 808 798 809 - /* disable PHY auto polling */ 799 + /* Deactivate MDIO PHY auto polling. Some PHYs as the AR8030 have an 800 + * interoperability problem with this auto polling mechanism because 801 + * their status registers think that the link is in a different state 802 + * than it actually is. For the AR8030 it has the BMSR_ESTATEN bit set 803 + * as well as ESTATUS_1000_TFULL and ESTATUS_1000_XFULL. This makes the 804 + * auto polling state machine consider the link being negotiated with 805 + * 1Gbit/s. Since the PHY itself is a Fast Ethernet RMII PHY this leads 806 + * to the switch port being completely dead (RX and TX are both not 807 + * working). 808 + * Also with various other PHY / port combinations (PHY11G GPHY, PHY22F 809 + * GPHY, external RGMII PEF7071/7072) any traffic would stop. Sometimes 810 + * it would work fine for a few minutes to hours and then stop, on 811 + * other device it would no traffic could be sent or received at all. 812 + * Testing shows that when PHY auto polling is disabled these problems 813 + * go away.
814 + */ 810 815 gswip_mdio_w(priv, 0x0, GSWIP_MDIO_MDC_CFG0); 816 + 811 817 /* Configure the MDIO Clock 2.5 MHz */ 812 818 gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1); 813 819 814 - /* Disable the xMII link */ 820 + /* Disable the xMII interface and clear it's isolation bit */ 815 821 for (i = 0; i < priv->hw_info->max_ports; i++) 816 822 gswip_mii_mask_cfg(priv, 823 + GSWIP_MII_CFG_EN | GSWIP_MII_CFG_ISOLATE, 824 + 0, i); 817 825 /* enable special tag insertion on cpu port */ 819 827 gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN, ··· 1526 1498 phy_modes(state->interface), port); 1527 1499 } 1528 1500 1501 + static void gswip_port_set_link(struct gswip_priv *priv, int port, bool link) 1502 + { 1503 + u32 mdio_phy; 1504 + 1505 + if (link) 1506 + mdio_phy = GSWIP_MDIO_PHY_LINK_UP; 1507 + else 1508 + mdio_phy = GSWIP_MDIO_PHY_LINK_DOWN; 1509 + 1510 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_MASK, mdio_phy, 1511 + GSWIP_MDIO_PHYp(port)); 1512 + } 1513 + 1514 + static void gswip_port_set_speed(struct gswip_priv *priv, int port, int speed, 1515 + phy_interface_t interface) 1516 + { 1517 + u32 mdio_phy = 0, mii_cfg = 0, mac_ctrl_0 = 0; 1518 + 1519 + switch (speed) { 1520 + case SPEED_10: 1521 + mdio_phy = GSWIP_MDIO_PHY_SPEED_M10; 1522 + 1523 + if (interface == PHY_INTERFACE_MODE_RMII) 1524 + mii_cfg = GSWIP_MII_CFG_RATE_M50; 1525 + else 1526 + mii_cfg = GSWIP_MII_CFG_RATE_M2P5; 1527 + 1528 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII; 1529 + break; 1530 + 1531 + case SPEED_100: 1532 + mdio_phy = GSWIP_MDIO_PHY_SPEED_M100; 1533 + 1534 + if (interface == PHY_INTERFACE_MODE_RMII) 1535 + mii_cfg = GSWIP_MII_CFG_RATE_M50; 1536 + else 1537 + mii_cfg = GSWIP_MII_CFG_RATE_M25; 1538 + 1539 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII; 1540 + break; 1541 + 1542 + case SPEED_1000: 1543 + mdio_phy = GSWIP_MDIO_PHY_SPEED_G1; 1544 + 1545 + mii_cfg = GSWIP_MII_CFG_RATE_M125; 1546 + 1547 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_RGMII; 1548 + break; 1549 + } 1550 + 1551 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_SPEED_MASK, mdio_phy, 1552 + GSWIP_MDIO_PHYp(port)); 1553 + gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_RATE_MASK, mii_cfg, port); 1554 + gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_GMII_MASK, mac_ctrl_0, 1555 + GSWIP_MAC_CTRL_0p(port)); 1556 + } 1557 + 1558 + static void gswip_port_set_duplex(struct gswip_priv *priv, int port, int duplex) 1559 + { 1560 + u32 mac_ctrl_0, mdio_phy; 1561 + 1562 + if (duplex == DUPLEX_FULL) { 1563 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_EN; 1564 + mdio_phy = GSWIP_MDIO_PHY_FDUP_EN; 1565 + } else { 1566 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_DIS; 1567 + mdio_phy = GSWIP_MDIO_PHY_FDUP_DIS; 1568 + } 1569 + 1570 + gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FDUP_MASK, mac_ctrl_0, 1571 + GSWIP_MAC_CTRL_0p(port)); 1572 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_FDUP_MASK, mdio_phy, 1573 + GSWIP_MDIO_PHYp(port)); 1574 + } 1575 + 1576 + static void gswip_port_set_pause(struct gswip_priv *priv, int port, 1577 + bool tx_pause, bool rx_pause) 1578 + { 1579 + u32 mac_ctrl_0, mdio_phy; 1580 + 1581 + if (tx_pause && rx_pause) { 1582 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RXTX; 1583 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN | 1584 + GSWIP_MDIO_PHY_FCONRX_EN; 1585 + } else if (tx_pause) { 1586 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_TX; 1587 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN | 1588 + GSWIP_MDIO_PHY_FCONRX_DIS; 1589 + } else if (rx_pause) { 1590 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RX; 1591 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS | 1592 + GSWIP_MDIO_PHY_FCONRX_EN; 1593 + } else { 1594 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_NONE; 1595 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS | 1596 + GSWIP_MDIO_PHY_FCONRX_DIS; 1597 + } 1598 + 1599 + gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FCON_MASK, 1600 + mac_ctrl_0, GSWIP_MAC_CTRL_0p(port)); 1601 + gswip_mdio_mask(priv, 1602 + GSWIP_MDIO_PHY_FCONTX_MASK | 1603 + GSWIP_MDIO_PHY_FCONRX_MASK, 1604 + mdio_phy, GSWIP_MDIO_PHYp(port)); 1605 + }
+ 1529 1607 static void gswip_phylink_mac_config(struct dsa_switch *ds, int port, 1530 1608 unsigned int mode, 1531 1609 const struct phylink_link_state *state) ··· 1651 1517 break; 1652 1518 case PHY_INTERFACE_MODE_RMII: 1653 1519 miicfg |= GSWIP_MII_CFG_MODE_RMIIM; 1520 + 1521 + /* Configure the RMII clock as output: */ 1522 + miicfg |= GSWIP_MII_CFG_RMII_CLK; 1654 1523 break; 1655 1524 case PHY_INTERFACE_MODE_RGMII: 1656 1525 case PHY_INTERFACE_MODE_RGMII_ID: ··· 1669 1532 "Unsupported interface: %d\n", state->interface); 1670 1533 return; 1671 1534 } 1672 - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_MODE_MASK, miicfg, port); 1535 + 1536 + gswip_mii_mask_cfg(priv, 1537 + GSWIP_MII_CFG_MODE_MASK | GSWIP_MII_CFG_RMII_CLK | 1538 + GSWIP_MII_CFG_RGMII_IBS | GSWIP_MII_CFG_LDCLKDIS, 1539 + miicfg, port); 1673 1540 1674 1541 switch (state->interface) { 1675 1542 case PHY_INTERFACE_MODE_RGMII_ID: ··· 1698 1557 struct gswip_priv *priv = ds->priv; 1699 1558 1700 1559 gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, port); 1560 + 1561 + if (!dsa_is_cpu_port(ds, port)) 1562 + gswip_port_set_link(priv, port, false); 1701 1563 } 1702 1564 1703 1565 static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port, ··· 1711 1567 bool tx_pause, bool rx_pause) 1712 1568 { 1713 1569 struct gswip_priv *priv = ds->priv; 1570 + 1571 + if (!dsa_is_cpu_port(ds, port)) { 1572 + gswip_port_set_link(priv, port, true); 1573 + gswip_port_set_speed(priv, port, speed, interface); 1574 + gswip_port_set_duplex(priv, port, duplex); 1575 + gswip_port_set_pause(priv, port, tx_pause, rx_pause); 1576 + } 1714 1577 1715 1578 gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port); 1716 1579 }
+3 -2
drivers/net/ethernet/amd/pcnet32.c
··· 1534 1534 } 1535 1535 pci_set_master(pdev); 1536 1536 1537 - ioaddr = pci_resource_start(pdev, 0); 1538 - if (!ioaddr) { 1537 + if (!pci_resource_len(pdev, 0)) { 1539 1538 if (pcnet32_debug & NETIF_MSG_PROBE) 1540 1539 pr_err("card has no PCI IO resources, aborting\n"); 1541 1540 err = -ENODEV; ··· 1547 1548 pr_err("architecture does not support 32bit PCI busmaster DMA\n"); 1548 1549 goto err_disable_dev; 1549 1550 } 1551 + 1552 + ioaddr = pci_resource_start(pdev, 0); 1550 1553 if (!request_region(ioaddr, PCNET32_TOTAL_SIZE, "pcnet32_probe_pci")) { 1551 1554 if (pcnet32_debug & NETIF_MSG_PROBE) 1552 1555 pr_err("io address range already allocated\n");
+3 -3
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 180 180 #define XGBE_DMA_SYS_AWCR 0x30303030 181 181 182 182 /* DMA cache settings - PCI device */ 183 - #define XGBE_DMA_PCI_ARCR 0x00000003 184 - #define XGBE_DMA_PCI_AWCR 0x13131313 185 - #define XGBE_DMA_PCI_AWARCR 0x00000313 183 + #define XGBE_DMA_PCI_ARCR 0x000f0f0f 184 + #define XGBE_DMA_PCI_AWCR 0x0f0f0f0f 185 + #define XGBE_DMA_PCI_AWARCR 0x00000f0f 186 186 187 187 /* DMA channel interrupt modes */ 188 188 #define XGBE_IRQ_MODE_EDGE 0
+1
drivers/net/ethernet/broadcom/bcm4908_enet.c
··· 181 181 182 182 err_free_buf_descs: 183 183 dma_free_coherent(dev, size, ring->cpu_addr, ring->dma_addr); 184 + ring->cpu_addr = NULL; 184 185 return -ENOMEM; 185 186 } 186 187
+7
drivers/net/ethernet/cadence/macb_main.c
··· 3269 3269 bool cmp_b = false; 3270 3270 bool cmp_c = false; 3271 3271 3272 + if (!macb_is_gem(bp)) 3273 + return; 3274 + 3272 3275 tp4sp_v = &(fs->h_u.tcp_ip4_spec); 3273 3276 tp4sp_m = &(fs->m_u.tcp_ip4_spec); 3274 3277 ··· 3640 3637 { 3641 3638 struct net_device *netdev = bp->dev; 3642 3639 netdev_features_t features = netdev->features; 3640 + struct ethtool_rx_fs_item *item; 3643 3641 3644 3642 /* TX checksum offload */ 3645 3643 macb_set_txcsum_feature(bp, features); ··· 3649 3645 macb_set_rxcsum_feature(bp, features); 3650 3646 3651 3647 /* RX Flow Filters */ 3648 + list_for_each_entry(item, &bp->rx_fs_list.list, list) 3649 + gem_prog_cmp_regs(bp, &item->fs); 3650 + 3652 3651 macb_set_rxflow_feature(bp, features); 3653 3652 } 3654 3653
+19 -4
drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
··· 1794 1794 struct cudbg_buffer temp_buff = { 0 }; 1795 1795 struct sge_qbase_reg_field *sge_qbase; 1796 1796 struct ireg_buf *ch_sge_dbg; 1797 + u8 padap_running = 0; 1797 1798 int i, rc; 1799 + u32 size; 1798 1800 1799 - rc = cudbg_get_buff(pdbg_init, dbg_buff, 1800 - sizeof(*ch_sge_dbg) * 2 + sizeof(*sge_qbase), 1801 - &temp_buff); 1801 + /* Accessing SGE_QBASE_MAP[0-3] and SGE_QBASE_INDEX regs can 1802 + * lead to SGE missing doorbells under heavy traffic. So, only 1803 + * collect them when adapter is idle. 1804 + */ 1805 + for_each_port(padap, i) { 1806 + padap_running = netif_running(padap->port[i]); 1807 + if (padap_running) 1808 + break; 1809 + } 1810 + 1811 + size = sizeof(*ch_sge_dbg) * 2; 1812 + if (!padap_running) 1813 + size += sizeof(*sge_qbase); 1814 + 1815 + rc = cudbg_get_buff(pdbg_init, dbg_buff, size, &temp_buff); 1802 1816 if (rc) 1803 1817 return rc; 1804 1818 ··· 1834 1820 ch_sge_dbg++; 1835 1821 } 1836 1822 1837 - if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5) { 1823 + if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5 && 1824 + !padap_running) { 1838 1825 sge_qbase = (struct sge_qbase_reg_field *)ch_sge_dbg; 1839 1826 /* 1 addr reg SGE_QBASE_INDEX and 4 data reg 1840 1827 * SGE_QBASE_MAP[0-3]
+2 -1
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 2090 2090 0x1190, 0x1194, 2091 2091 0x11a0, 0x11a4, 2092 2092 0x11b0, 0x11b4, 2093 - 0x11fc, 0x1274, 2093 + 0x11fc, 0x123c, 2094 + 0x1254, 0x1274, 2094 2095 0x1280, 0x133c, 2095 2096 0x1800, 0x18fc, 2096 2097 0x3000, 0x302c,
+5 -1
drivers/net/ethernet/freescale/gianfar.c
··· 363 363 364 364 static int gfar_set_mac_addr(struct net_device *dev, void *p) 365 365 { 366 - eth_mac_addr(dev, p); 366 + int ret; 367 + 368 + ret = eth_mac_addr(dev, p); 369 + if (ret) 370 + return ret; 367 371 368 372 gfar_set_mac_for_addr(dev, 0, dev->dev_addr); 369 373
+4 -5
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 4118 4118 * normalcy is to reset. 4119 4119 * 2. A new reset request from the stack due to timeout 4120 4120 * 4121 - * For the first case,error event might not have ae handle available. 4122 4121 * check if this is a new reset request and we are not here just because 4123 4122 * last reset attempt did not succeed and watchdog hit us again. We will 4124 4123 * know this if last reset request did not occur very recently (watchdog ··· 4127 4128 * want to make sure we throttle the reset request. Therefore, we will 4128 4129 * not allow it again before 3*HZ times. 4129 4130 */ 4130 - if (!handle) 4131 - handle = &hdev->vport[0].nic; 4132 4131 4133 4132 if (time_before(jiffies, (hdev->last_reset_time + 4134 4133 HCLGE_RESET_INTERVAL))) { 4135 4134 mod_timer(&hdev->reset_timer, jiffies + HCLGE_RESET_INTERVAL); 4136 4135 return; 4137 - } else if (hdev->default_reset_request) { 4136 + } 4137 + 4138 + if (hdev->default_reset_request) { 4138 4139 hdev->reset_level = 4139 4140 hclge_get_reset_level(ae_dev, 4140 4141 &hdev->default_reset_request); ··· 11790 11791 if (ret) 11791 11792 return ret; 11792 11793 11793 - /* RSS indirection table has been configuared by user */ 11794 + /* RSS indirection table has been configured by user */ 11794 11795 if (rxfh_configured) 11795 11796 goto out; 11796 11797
+4 -4
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
··· 2226 2226 2227 2227 if (test_and_clear_bit(HCLGEVF_RESET_PENDING, 2228 2228 &hdev->reset_state)) { 2229 - /* PF has initmated that it is about to reset the hardware. 2229 + /* PF has intimated that it is about to reset the hardware. 2230 2230 * We now have to poll & check if hardware has actually 2231 2231 * completed the reset sequence. On hardware reset completion, 2232 2232 * VF needs to reset the client and ae device. ··· 2656 2656 { 2657 2657 struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 2658 2658 2659 + clear_bit(HCLGEVF_STATE_DOWN, &hdev->state); 2660 + 2659 2661 hclgevf_reset_tqp_stats(handle); 2660 2662 2661 2663 hclgevf_request_link_info(hdev); 2662 2664 2663 2665 hclgevf_update_link_mode(hdev); 2664 - 2665 - clear_bit(HCLGEVF_STATE_DOWN, &hdev->state); 2666 2666 2667 2667 return 0; 2668 2668 } ··· 3526 3526 if (ret) 3527 3527 return ret; 3528 3528 3529 - /* RSS indirection table has been configuared by user */ 3529 + /* RSS indirection table has been configured by user */ 3530 3530 if (rxfh_configured) 3531 3531 goto out; 3532 3532
+1
drivers/net/ethernet/intel/i40e/i40e.h
··· 142 142 __I40E_VIRTCHNL_OP_PENDING, 143 143 __I40E_RECOVERY_MODE, 144 144 __I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */ 145 + __I40E_VFS_RELEASING, 145 146 /* This must be last as it determines the size of the BITMAP */ 146 147 __I40E_STATE_SIZE__, 147 148 };
+3
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
··· 578 578 case RING_TYPE_XDP: 579 579 ring = kmemdup(vsi->xdp_rings[ring_id], sizeof(*ring), GFP_KERNEL); 580 580 break; 581 + default: 582 + ring = NULL; 583 + break; 581 584 } 582 585 if (!ring) 583 586 return;
+48 -7
drivers/net/ethernet/intel/i40e/i40e_ethtool.c
··· 232 232 I40E_STAT(struct i40e_vsi, _name, _stat) 233 233 #define I40E_VEB_STAT(_name, _stat) \ 234 234 I40E_STAT(struct i40e_veb, _name, _stat) 235 + #define I40E_VEB_TC_STAT(_name, _stat) \ 236 + I40E_STAT(struct i40e_cp_veb_tc_stats, _name, _stat) 235 237 #define I40E_PFC_STAT(_name, _stat) \ 236 238 I40E_STAT(struct i40e_pfc_stats, _name, _stat) 237 239 #define I40E_QUEUE_STAT(_name, _stat) \ ··· 268 266 I40E_VEB_STAT("veb.rx_unknown_protocol", stats.rx_unknown_protocol), 269 267 }; 270 268 269 + struct i40e_cp_veb_tc_stats { 270 + u64 tc_rx_packets; 271 + u64 tc_rx_bytes; 272 + u64 tc_tx_packets; 273 + u64 tc_tx_bytes; 274 + }; 275 + 271 276 static const struct i40e_stats i40e_gstrings_veb_tc_stats[] = { 272 - I40E_VEB_STAT("veb.tc_%u_tx_packets", tc_stats.tc_tx_packets), 273 - I40E_VEB_STAT("veb.tc_%u_tx_bytes", tc_stats.tc_tx_bytes), 274 - I40E_VEB_STAT("veb.tc_%u_rx_packets", tc_stats.tc_rx_packets), 275 - I40E_VEB_STAT("veb.tc_%u_rx_bytes", tc_stats.tc_rx_bytes), 277 + I40E_VEB_TC_STAT("veb.tc_%u_tx_packets", tc_tx_packets), 278 + I40E_VEB_TC_STAT("veb.tc_%u_tx_bytes", tc_tx_bytes), 279 + I40E_VEB_TC_STAT("veb.tc_%u_rx_packets", tc_rx_packets), 280 + I40E_VEB_TC_STAT("veb.tc_%u_rx_bytes", tc_rx_bytes), 276 281 }; 277 282 278 283 static const struct i40e_stats i40e_gstrings_misc_stats[] = { ··· 1110 1101 1111 1102 /* Set flow control settings */ 1112 1103 ethtool_link_ksettings_add_link_mode(ks, supported, Pause); 1104 + ethtool_link_ksettings_add_link_mode(ks, supported, Asym_Pause); 1113 1105 1114 1106 switch (hw->fc.requested_mode) { 1115 1107 case I40E_FC_FULL: ··· 2227 2217 } 2228 2218 2229 2219 /** 2220 + * i40e_get_veb_tc_stats - copy VEB TC statistics to formatted structure 2221 + * @tc: the TC statistics in VEB structure (veb->tc_stats) 2222 + * @i: the index of traffic class in (veb->tc_stats) structure to copy 2223 + * 2224 + * Copy VEB TC statistics from structure of arrays (veb->tc_stats) to 2225 + * one dimensional structure 
i40e_cp_veb_tc_stats. 2226 + * Produce formatted i40e_cp_veb_tc_stats structure of the VEB TC 2227 + * statistics for the given TC. 2228 + **/ 2229 + static struct i40e_cp_veb_tc_stats 2230 + i40e_get_veb_tc_stats(struct i40e_veb_tc_stats *tc, unsigned int i) 2231 + { 2232 + struct i40e_cp_veb_tc_stats veb_tc = { 2233 + .tc_rx_packets = tc->tc_rx_packets[i], 2234 + .tc_rx_bytes = tc->tc_rx_bytes[i], 2235 + .tc_tx_packets = tc->tc_tx_packets[i], 2236 + .tc_tx_bytes = tc->tc_tx_bytes[i], 2237 + }; 2238 + 2239 + return veb_tc; 2240 + } 2241 + 2242 + /** 2230 2243 * i40e_get_pfc_stats - copy HW PFC statistics to formatted structure 2231 2244 * @pf: the PF device structure 2232 2245 * @i: the priority value to copy ··· 2333 2300 i40e_gstrings_veb_stats); 2334 2301 2335 2302 for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) 2336 - i40e_add_ethtool_stats(&data, veb_stats ? veb : NULL, 2337 - i40e_gstrings_veb_tc_stats); 2303 + if (veb_stats) { 2304 + struct i40e_cp_veb_tc_stats veb_tc = 2305 + i40e_get_veb_tc_stats(&veb->tc_stats, i); 2306 + 2307 + i40e_add_ethtool_stats(&data, &veb_tc, 2308 + i40e_gstrings_veb_tc_stats); 2309 + } else { 2310 + i40e_add_ethtool_stats(&data, NULL, 2311 + i40e_gstrings_veb_tc_stats); 2312 + } 2338 2313 2339 2314 i40e_add_ethtool_stats(&data, pf, i40e_gstrings_stats); 2340 2315 ··· 5474 5433 5475 5434 status = i40e_aq_get_phy_register(hw, 5476 5435 I40E_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE, 5477 - true, addr, offset, &value, NULL); 5436 + addr, true, offset, &value, NULL); 5478 5437 if (status) 5479 5438 return -EIO; 5480 5439 data[i] = value;
+16 -14
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 2560 2560 i40e_stat_str(hw, aq_ret), 2561 2561 i40e_aq_str(hw, hw->aq.asq_last_status)); 2562 2562 } else { 2563 - dev_info(&pf->pdev->dev, "%s is %s allmulti mode.\n", 2564 - vsi->netdev->name, 2563 + dev_info(&pf->pdev->dev, "%s allmulti mode.\n", 2565 2564 cur_multipromisc ? "entering" : "leaving"); 2566 2565 } 2567 2566 } ··· 6737 6738 set_bit(__I40E_CLIENT_SERVICE_REQUESTED, pf->state); 6738 6739 set_bit(__I40E_CLIENT_L2_CHANGE, pf->state); 6739 6740 } 6740 - /* registers are set, lets apply */ 6741 - if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB) 6742 - ret = i40e_hw_set_dcb_config(pf, new_cfg); 6741 + /* registers are set, lets apply */ 6742 + if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB) 6743 + ret = i40e_hw_set_dcb_config(pf, new_cfg); 6743 6744 } 6744 6745 6745 6746 err: ··· 10572 10573 goto end_core_reset; 10573 10574 } 10574 10575 10575 - if (!lock_acquired) 10576 - rtnl_lock(); 10577 - ret = i40e_setup_pf_switch(pf, reinit); 10578 - if (ret) 10579 - goto end_unlock; 10580 - 10581 10576 #ifdef CONFIG_I40E_DCB 10582 10577 /* Enable FW to write a default DCB config on link-up 10583 10578 * unless I40E_FLAG_TC_MQPRIO was enabled or DCB ··· 10586 10593 i40e_aq_set_dcb_parameters(hw, false, NULL); 10587 10594 dev_warn(&pf->pdev->dev, 10588 10595 "DCB is not supported for X710-T*L 2.5/5G speeds\n"); 10589 - pf->flags &= ~I40E_FLAG_DCB_CAPABLE; 10596 + pf->flags &= ~I40E_FLAG_DCB_CAPABLE; 10590 10597 } else { 10591 10598 i40e_aq_set_dcb_parameters(hw, true, NULL); 10592 10599 ret = i40e_init_pf_dcb(pf); ··· 10600 10607 } 10601 10608 10602 10609 #endif /* CONFIG_I40E_DCB */ 10610 + if (!lock_acquired) 10611 + rtnl_lock(); 10612 + ret = i40e_setup_pf_switch(pf, reinit); 10613 + if (ret) 10614 + goto end_unlock; 10603 10615 10604 10616 /* The driver only wants link up/down and module qualification 10605 10617 * reports from firmware. Note the negative logic. 
··· 15138 15140 * in order to register the netdev 15139 15141 */ 15140 15142 v_idx = i40e_vsi_mem_alloc(pf, I40E_VSI_MAIN); 15141 - if (v_idx < 0) 15143 + if (v_idx < 0) { 15144 + err = v_idx; 15142 15145 goto err_switch_setup; 15146 + } 15143 15147 pf->lan_vsi = v_idx; 15144 15148 vsi = pf->vsi[v_idx]; 15145 - if (!vsi) 15149 + if (!vsi) { 15150 + err = -EFAULT; 15146 15151 goto err_switch_setup; 15152 + } 15147 15153 vsi->alloc_queue_pairs = 1; 15148 15154 err = i40e_config_netdev(vsi); 15149 15155 if (err)
+5 -7
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 2295 2295 * @rx_ring: Rx ring being processed 2296 2296 * @xdp: XDP buffer containing the frame 2297 2297 **/ 2298 - static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring, 2299 - struct xdp_buff *xdp) 2298 + static int i40e_run_xdp(struct i40e_ring *rx_ring, struct xdp_buff *xdp) 2300 2299 { 2301 2300 int err, result = I40E_XDP_PASS; 2302 2301 struct i40e_ring *xdp_ring; ··· 2334 2335 } 2335 2336 xdp_out: 2336 2337 rcu_read_unlock(); 2337 - return ERR_PTR(-result); 2338 + return result; 2338 2339 } 2339 2340 2340 2341 /** ··· 2447 2448 unsigned int xdp_xmit = 0; 2448 2449 bool failure = false; 2449 2450 struct xdp_buff xdp; 2451 + int xdp_res = 0; 2450 2452 2451 2453 #if (PAGE_SIZE < 8192) 2452 2454 frame_sz = i40e_rx_frame_truesize(rx_ring, 0); ··· 2513 2513 /* At larger PAGE_SIZE, frame_sz depend on len size */ 2514 2514 xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size); 2515 2515 #endif 2516 - skb = i40e_run_xdp(rx_ring, &xdp); 2516 + xdp_res = i40e_run_xdp(rx_ring, &xdp); 2517 2517 } 2518 2518 2519 - if (IS_ERR(skb)) { 2520 - unsigned int xdp_res = -PTR_ERR(skb); 2521 - 2519 + if (xdp_res) { 2522 2520 if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) { 2523 2521 xdp_xmit |= xdp_res; 2524 2522 i40e_rx_buffer_flip(rx_ring, rx_buffer, size);
+9
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 137 137 **/ 138 138 static inline void i40e_vc_disable_vf(struct i40e_vf *vf) 139 139 { 140 + struct i40e_pf *pf = vf->pf; 140 141 int i; 141 142 142 143 i40e_vc_notify_vf_reset(vf); ··· 148 147 * ensure a reset. 149 148 */ 150 149 for (i = 0; i < 20; i++) { 150 + /* If PF is in VFs releasing state reset VF is impossible, 151 + * so leave it. 152 + */ 153 + if (test_bit(__I40E_VFS_RELEASING, pf->state)) 154 + return; 151 155 if (i40e_reset_vf(vf, false)) 152 156 return; 153 157 usleep_range(10000, 20000); ··· 1580 1574 1581 1575 if (!pf->vf) 1582 1576 return; 1577 + 1578 + set_bit(__I40E_VFS_RELEASING, pf->state); 1583 1579 while (test_and_set_bit(__I40E_VF_DISABLE, pf->state)) 1584 1580 usleep_range(1000, 2000); 1585 1581 ··· 1639 1631 } 1640 1632 } 1641 1633 clear_bit(__I40E_VF_DISABLE, pf->state); 1634 + clear_bit(__I40E_VFS_RELEASING, pf->state); 1642 1635 } 1643 1636 1644 1637 #ifdef CONFIG_PCI_IOV
+2 -2
drivers/net/ethernet/intel/i40e/i40e_xsk.c
··· 474 474 475 475 nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, descs, budget); 476 476 if (!nb_pkts) 477 - return false; 477 + return true; 478 478 479 479 if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) { 480 480 nb_processed = xdp_ring->count - xdp_ring->next_to_use; ··· 491 491 492 492 i40e_update_tx_stats(xdp_ring, nb_pkts, total_bytes); 493 493 494 - return true; 494 + return nb_pkts < budget; 495 495 } 496 496 497 497 /**
+2 -2
drivers/net/ethernet/intel/ice/ice.h
··· 199 199 __ICE_NEEDS_RESTART, 200 200 __ICE_PREPARED_FOR_RESET, /* set by driver when prepared */ 201 201 __ICE_RESET_OICR_RECV, /* set by driver after rcv reset OICR */ 202 - __ICE_DCBNL_DEVRESET, /* set by dcbnl devreset */ 203 202 __ICE_PFR_REQ, /* set by driver and peers */ 204 203 __ICE_CORER_REQ, /* set by driver and peers */ 205 204 __ICE_GLOBR_REQ, /* set by driver and peers */ ··· 629 630 void ice_print_link_msg(struct ice_vsi *vsi, bool isup); 630 631 const char *ice_stat_str(enum ice_status stat_err); 631 632 const char *ice_aq_str(enum ice_aq_err aq_err); 632 - bool ice_is_wol_supported(struct ice_pf *pf); 633 + bool ice_is_wol_supported(struct ice_hw *hw); 633 634 int 634 635 ice_fdir_write_fltr(struct ice_pf *pf, struct ice_fdir_fltr *input, bool add, 635 636 bool is_tun); ··· 647 648 int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout, 648 649 struct ice_rq_event_info *event); 649 650 int ice_open(struct net_device *netdev); 651 + int ice_open_internal(struct net_device *netdev); 650 652 int ice_stop(struct net_device *netdev); 651 653 void ice_service_task_schedule(struct ice_pf *pf); 652 654
+1 -1
drivers/net/ethernet/intel/ice/ice_common.c
··· 721 721 722 722 if (!data) { 723 723 data = devm_kcalloc(ice_hw_to_dev(hw), 724 - sizeof(*data), 725 724 ICE_AQC_FW_LOG_ID_MAX, 725 + sizeof(*data), 726 726 GFP_KERNEL); 727 727 if (!data) 728 728 return ICE_ERR_NO_MEMORY;
+2 -2
drivers/net/ethernet/intel/ice/ice_controlq.h
··· 31 31 ICE_CTL_Q_MAILBOX, 32 32 }; 33 33 34 - /* Control Queue timeout settings - max delay 250ms */ 35 - #define ICE_CTL_Q_SQ_CMD_TIMEOUT 2500 /* Count 2500 times */ 34 + /* Control Queue timeout settings - max delay 1s */ 35 + #define ICE_CTL_Q_SQ_CMD_TIMEOUT 10000 /* Count 10000 times */ 36 36 #define ICE_CTL_Q_SQ_CMD_USEC 100 /* Check every 100usec */ 37 37 #define ICE_CTL_Q_ADMIN_INIT_TIMEOUT 10 /* Count 10 times */ 38 38 #define ICE_CTL_Q_ADMIN_INIT_MSEC 100 /* Check every 100msec */
+29 -9
drivers/net/ethernet/intel/ice/ice_dcb.c
··· 738 738 /** 739 739 * ice_cee_to_dcb_cfg 740 740 * @cee_cfg: pointer to CEE configuration struct 741 - * @dcbcfg: DCB configuration struct 741 + * @pi: port information structure 742 742 * 743 743 * Convert CEE configuration from firmware to DCB configuration 744 744 */ 745 745 static void 746 746 ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg, 747 - struct ice_dcbx_cfg *dcbcfg) 747 + struct ice_port_info *pi) 748 748 { 749 749 u32 status, tlv_status = le32_to_cpu(cee_cfg->tlv_status); 750 750 u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift; 751 + u8 i, j, err, sync, oper, app_index, ice_app_sel_type; 751 752 u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio); 752 - u8 i, err, sync, oper, app_index, ice_app_sel_type; 753 753 u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift; 754 + struct ice_dcbx_cfg *cmp_dcbcfg, *dcbcfg; 754 755 u16 ice_app_prot_id_type; 755 756 756 - /* CEE PG data to ETS config */ 757 + dcbcfg = &pi->qos_cfg.local_dcbx_cfg; 758 + dcbcfg->dcbx_mode = ICE_DCBX_MODE_CEE; 759 + dcbcfg->tlv_status = tlv_status; 760 + 761 + /* CEE PG data */ 757 762 dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc; 758 763 759 764 /* Note that the FW creates the oper_prio_tc nibbles reversed ··· 785 780 } 786 781 } 787 782 788 - /* CEE PFC data to ETS config */ 783 + /* CEE PFC data */ 789 784 dcbcfg->pfc.pfcena = cee_cfg->oper_pfc_en; 790 785 dcbcfg->pfc.pfccap = ICE_MAX_TRAFFIC_CLASS; 786 + 787 + /* CEE APP TLV data */ 788 + if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING) 789 + cmp_dcbcfg = &pi->qos_cfg.desired_dcbx_cfg; 790 + else 791 + cmp_dcbcfg = &pi->qos_cfg.remote_dcbx_cfg; 791 792 792 793 app_index = 0; 793 794 for (i = 0; i < 3; i++) { ··· 813 802 ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_ISCSI_S; 814 803 ice_app_sel_type = ICE_APP_SEL_TCPIP; 815 804 ice_app_prot_id_type = ICE_APP_PROT_ID_ISCSI; 805 + 806 + for (j = 0; j < cmp_dcbcfg->numapps; j++) { 807 + u16 prot_id = cmp_dcbcfg->app[j].prot_id; 808 + u8 sel = 
cmp_dcbcfg->app[j].selector; 809 + 810 + if (sel == ICE_APP_SEL_TCPIP && 811 + (prot_id == ICE_APP_PROT_ID_ISCSI || 812 + prot_id == ICE_APP_PROT_ID_ISCSI_860)) { 813 + ice_app_prot_id_type = prot_id; 814 + break; 815 + } 816 + } 816 817 } else { 817 818 /* FIP APP */ 818 819 ice_aqc_cee_status_mask = ICE_AQC_CEE_FIP_STATUS_M; ··· 915 892 ret = ice_aq_get_cee_dcb_cfg(pi->hw, &cee_cfg, NULL); 916 893 if (!ret) { 917 894 /* CEE mode */ 918 - dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg; 919 - dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_CEE; 920 - dcbx_cfg->tlv_status = le32_to_cpu(cee_cfg.tlv_status); 921 - ice_cee_to_dcb_cfg(&cee_cfg, dcbx_cfg); 922 895 ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_CEE); 896 + ice_cee_to_dcb_cfg(&cee_cfg, pi); 923 897 } else if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT) { 924 898 /* CEE mode not enabled try querying IEEE data */ 925 899 dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
-2
drivers/net/ethernet/intel/ice/ice_dcb_nl.c
··· 18 18 while (ice_is_reset_in_progress(pf->state)) 19 19 usleep_range(1000, 2000); 20 20 21 - set_bit(__ICE_DCBNL_DEVRESET, pf->state); 22 21 dev_close(netdev); 23 22 netdev_state_change(netdev); 24 23 dev_open(netdev, NULL); 25 24 netdev_state_change(netdev); 26 - clear_bit(__ICE_DCBNL_DEVRESET, pf->state); 27 25 } 28 26 29 27 /**
+2 -2
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 3442 3442 netdev_warn(netdev, "Wake on LAN is not supported on this interface!\n"); 3443 3443 3444 3444 /* Get WoL settings based on the HW capability */ 3445 - if (ice_is_wol_supported(pf)) { 3445 + if (ice_is_wol_supported(&pf->hw)) { 3446 3446 wol->supported = WAKE_MAGIC; 3447 3447 wol->wolopts = pf->wol_ena ? WAKE_MAGIC : 0; 3448 3448 } else { ··· 3462 3462 struct ice_vsi *vsi = np->vsi; 3463 3463 struct ice_pf *pf = vsi->back; 3464 3464 3465 - if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(pf)) 3465 + if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(&pf->hw)) 3466 3466 return -EOPNOTSUPP; 3467 3467 3468 3468 /* only magic packet is supported */
+2 -3
drivers/net/ethernet/intel/ice/ice_lib.c
··· 2620 2620 if (!locked) 2621 2621 rtnl_lock(); 2622 2622 2623 - err = ice_open(vsi->netdev); 2623 + err = ice_open_internal(vsi->netdev); 2624 2624 2625 2625 if (!locked) 2626 2626 rtnl_unlock(); ··· 2649 2649 if (!locked) 2650 2650 rtnl_lock(); 2651 2651 2652 - ice_stop(vsi->netdev); 2652 + ice_vsi_close(vsi); 2653 2653 2654 2654 if (!locked) 2655 2655 rtnl_unlock(); ··· 3152 3152 bool ice_is_reset_in_progress(unsigned long *state) 3153 3153 { 3154 3154 return test_bit(__ICE_RESET_OICR_RECV, state) || 3155 - test_bit(__ICE_DCBNL_DEVRESET, state) || 3156 3155 test_bit(__ICE_PFR_REQ, state) || 3157 3156 test_bit(__ICE_CORER_REQ, state) || 3158 3157 test_bit(__ICE_GLOBR_REQ, state);
+39 -14
drivers/net/ethernet/intel/ice/ice_main.c
··· 3493 3493 } 3494 3494 3495 3495 /** 3496 - * ice_is_wol_supported - get NVM state of WoL 3497 - * @pf: board private structure 3496 + * ice_is_wol_supported - check if WoL is supported 3497 + * @hw: pointer to hardware info 3498 3498 * 3499 3499 * Check if WoL is supported based on the HW configuration. 3500 3500 * Returns true if NVM supports and enables WoL for this port, false otherwise 3501 3501 */ 3502 - bool ice_is_wol_supported(struct ice_pf *pf) 3502 + bool ice_is_wol_supported(struct ice_hw *hw) 3503 3503 { 3504 - struct ice_hw *hw = &pf->hw; 3505 3504 u16 wol_ctrl; 3506 3505 3507 3506 /* A bit set to 1 in the NVM Software Reserved Word 2 (WoL control ··· 3509 3510 if (ice_read_sr_word(hw, ICE_SR_NVM_WOL_CFG, &wol_ctrl)) 3510 3511 return false; 3511 3512 3512 - return !(BIT(hw->pf_id) & wol_ctrl); 3513 + return !(BIT(hw->port_info->lport) & wol_ctrl); 3513 3514 } 3514 3515 3515 3516 /** ··· 4181 4182 goto err_send_version_unroll; 4182 4183 } 4183 4184 4185 + /* not a fatal error if this fails */ 4184 4186 err = ice_init_nvm_phy_type(pf->hw.port_info); 4185 - if (err) { 4187 + if (err) 4186 4188 dev_err(dev, "ice_init_nvm_phy_type failed: %d\n", err); 4187 - goto err_send_version_unroll; 4188 - } 4189 4189 4190 + /* not a fatal error if this fails */ 4190 4191 err = ice_update_link_info(pf->hw.port_info); 4191 - if (err) { 4192 + if (err) 4192 4193 dev_err(dev, "ice_update_link_info failed: %d\n", err); 4193 - goto err_send_version_unroll; 4194 - } 4195 4194 4196 4195 ice_init_link_dflt_override(pf->hw.port_info); 4197 4196 4198 4197 /* if media available, initialize PHY settings */ 4199 4198 if (pf->hw.port_info->phy.link_info.link_info & 4200 4199 ICE_AQ_MEDIA_AVAILABLE) { 4200 + /* not a fatal error if this fails */ 4201 4201 err = ice_init_phy_user_cfg(pf->hw.port_info); 4202 - if (err) { 4202 + if (err) 4203 4203 dev_err(dev, "ice_init_phy_user_cfg failed: %d\n", err); 4204 - goto err_send_version_unroll; 4205 - } 4206 4204 4207 4205 if 
(!test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, pf->flags)) { 4208 4206 struct ice_vsi *vsi = ice_get_main_vsi(pf); ··· 4560 4564 continue; 4561 4565 ice_vsi_free_q_vectors(pf->vsi[v]); 4562 4566 } 4567 + ice_free_cpu_rx_rmap(ice_get_main_vsi(pf)); 4563 4568 ice_clear_interrupt_scheme(pf); 4564 4569 4565 4570 pci_save_state(pdev); ··· 6657 6660 int ice_open(struct net_device *netdev) 6658 6661 { 6659 6662 struct ice_netdev_priv *np = netdev_priv(netdev); 6663 + struct ice_pf *pf = np->vsi->back; 6664 + 6665 + if (ice_is_reset_in_progress(pf->state)) { 6666 + netdev_err(netdev, "can't open net device while reset is in progress"); 6667 + return -EBUSY; 6668 + } 6669 + 6670 + return ice_open_internal(netdev); 6671 + } 6672 + 6673 + /** 6674 + * ice_open_internal - Called when a network interface becomes active 6675 + * @netdev: network interface device structure 6676 + * 6677 + * Internal ice_open implementation. Should not be used directly except for ice_open and reset 6678 + * handling routine 6679 + * 6680 + * Returns 0 on success, negative value on failure 6681 + */ 6682 + int ice_open_internal(struct net_device *netdev) 6683 + { 6684 + struct ice_netdev_priv *np = netdev_priv(netdev); 6660 6685 struct ice_vsi *vsi = np->vsi; 6661 6686 struct ice_pf *pf = vsi->back; 6662 6687 struct ice_port_info *pi; ··· 6748 6729 { 6749 6730 struct ice_netdev_priv *np = netdev_priv(netdev); 6750 6731 struct ice_vsi *vsi = np->vsi; 6732 + struct ice_pf *pf = vsi->back; 6733 + 6734 + if (ice_is_reset_in_progress(pf->state)) { 6735 + netdev_err(netdev, "can't stop net device while reset is in progress"); 6736 + return -EBUSY; 6737 + } 6751 6738 6752 6739 ice_vsi_close(vsi); 6753 6740
+9 -6
drivers/net/ethernet/intel/ice/ice_switch.c
··· 1238 1238 ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2, 1239 1239 vsi_list_id); 1240 1240 1241 + if (!m_entry->vsi_list_info) 1242 + return ICE_ERR_NO_MEMORY; 1243 + 1241 1244 /* If this entry was large action then the large action needs 1242 1245 * to be updated to point to FWD to VSI list 1243 1246 */ ··· 2223 2220 return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI && 2224 2221 fm_entry->fltr_info.vsi_handle == vsi_handle) || 2225 2222 (fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST && 2223 + fm_entry->vsi_list_info && 2226 2224 (test_bit(vsi_handle, fm_entry->vsi_list_info->vsi_map)))); 2227 2225 } 2228 2226 ··· 2296 2292 return ICE_ERR_PARAM; 2297 2293 2298 2294 list_for_each_entry(fm_entry, lkup_list_head, list_entry) { 2299 - struct ice_fltr_info *fi; 2300 - 2301 - fi = &fm_entry->fltr_info; 2302 - if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle)) 2295 + if (!ice_vsi_uses_fltr(fm_entry, vsi_handle)) 2303 2296 continue; 2304 2297 2305 2298 status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle, 2306 - vsi_list_head, fi); 2299 + vsi_list_head, 2300 + &fm_entry->fltr_info); 2307 2301 if (status) 2308 2302 return status; 2309 2303 } ··· 2624 2622 &remove_list_head); 2625 2623 mutex_unlock(rule_lock); 2626 2624 if (status) 2627 - return; 2625 + goto free_fltr_list; 2628 2626 2629 2627 switch (lkup) { 2630 2628 case ICE_SW_LKUP_MAC: ··· 2647 2645 break; 2648 2646 } 2649 2647 2648 + free_fltr_list: 2650 2649 list_for_each_entry_safe(fm_entry, tmp, &remove_list_head, list_entry) { 2651 2650 list_del(&fm_entry->list_entry); 2652 2651 devm_kfree(ice_hw_to_dev(hw), fm_entry);
+1
drivers/net/ethernet/intel/ice/ice_type.h
··· 553 553 #define ICE_TLV_STATUS_ERR 0x4 554 554 #define ICE_APP_PROT_ID_FCOE 0x8906 555 555 #define ICE_APP_PROT_ID_ISCSI 0x0cbc 556 + #define ICE_APP_PROT_ID_ISCSI_860 0x035c 556 557 #define ICE_APP_PROT_ID_FIP 0x8914 557 558 #define ICE_APP_SEL_ETHTYPE 0x1 558 559 #define ICE_APP_SEL_TCPIP 0x2
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/dev.c
··· 188 188 } 189 189 190 190 enum { 191 - MLX5_INTERFACE_PROTOCOL_ETH_REP, 192 191 MLX5_INTERFACE_PROTOCOL_ETH, 192 + MLX5_INTERFACE_PROTOCOL_ETH_REP, 193 193 194 + MLX5_INTERFACE_PROTOCOL_IB, 194 195 MLX5_INTERFACE_PROTOCOL_IB_REP, 195 196 MLX5_INTERFACE_PROTOCOL_MPIB, 196 - MLX5_INTERFACE_PROTOCOL_IB, 197 197 198 198 MLX5_INTERFACE_PROTOCOL_VNET, 199 199 };
+1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 517 517 struct mlx5_wq_cyc wq; 518 518 void __iomem *uar_map; 519 519 u32 sqn; 520 + u16 reserved_room; 520 521 unsigned long state; 521 522 522 523 /* control path */
+29 -7
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
··· 188 188 } 189 189 190 190 static int 191 + mlx5_get_label_mapping(struct mlx5_tc_ct_priv *ct_priv, 192 + u32 *labels, u32 *id) 193 + { 194 + if (!memchr_inv(labels, 0, sizeof(u32) * 4)) { 195 + *id = 0; 196 + return 0; 197 + } 198 + 199 + if (mapping_add(ct_priv->labels_mapping, labels, id)) 200 + return -EOPNOTSUPP; 201 + 202 + return 0; 203 + } 204 + 205 + static void 206 + mlx5_put_label_mapping(struct mlx5_tc_ct_priv *ct_priv, u32 id) 207 + { 208 + if (id) 209 + mapping_remove(ct_priv->labels_mapping, id); 210 + } 211 + 212 + static int 191 213 mlx5_tc_ct_rule_to_tuple(struct mlx5_ct_tuple *tuple, struct flow_rule *rule) 192 214 { 193 215 struct flow_match_control control; ··· 460 438 mlx5_tc_rule_delete(netdev_priv(ct_priv->netdev), zone_rule->rule, attr); 461 439 mlx5e_mod_hdr_detach(ct_priv->dev, 462 440 ct_priv->mod_hdr_tbl, zone_rule->mh); 463 - mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id); 441 + mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id); 464 442 kfree(attr); 465 443 } 466 444 ··· 663 641 if (!meta) 664 642 return -EOPNOTSUPP; 665 643 666 - err = mapping_add(ct_priv->labels_mapping, meta->ct_metadata.labels, 667 - &attr->ct_attr.ct_labels_id); 644 + err = mlx5_get_label_mapping(ct_priv, meta->ct_metadata.labels, 645 + &attr->ct_attr.ct_labels_id); 668 646 if (err) 669 647 return -EOPNOTSUPP; 670 648 if (nat) { ··· 701 679 702 680 err_mapping: 703 681 dealloc_mod_hdr_actions(&mod_acts); 704 - mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id); 682 + mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id); 705 683 return err; 706 684 } 707 685 ··· 769 747 err_rule: 770 748 mlx5e_mod_hdr_detach(ct_priv->dev, 771 749 ct_priv->mod_hdr_tbl, zone_rule->mh); 772 - mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id); 750 + mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id); 773 751 err_mod_hdr: 774 752 kfree(attr); 775 753 err_attr: ··· 1221 1199 if (!priv || !ct_attr->ct_labels_id) 1222 1200 return; 1223 1201 1224 - mapping_remove(priv->labels_mapping, ct_attr->ct_labels_id); 1202 + mlx5_put_label_mapping(priv, ct_attr->ct_labels_id); 1225 1203 } 1226 1204 1227 1205 int ··· 1324 1302 ct_labels[1] = key->ct_labels[1] & mask->ct_labels[1]; 1325 1303 ct_labels[2] = key->ct_labels[2] & mask->ct_labels[2]; 1326 1304 ct_labels[3] = key->ct_labels[3] & mask->ct_labels[3]; 1327 - if (mapping_add(priv->labels_mapping, ct_labels, &ct_attr->ct_labels_id)) 1305 + if (mlx5_get_label_mapping(priv, ct_labels, &ct_attr->ct_labels_id)) 1328 1306 return -EOPNOTSUPP; 1329 1307 mlx5e_tc_match_to_reg_match(spec, LABELS_TO_REG, ct_attr->ct_labels_id, 1330 1308 MLX5_CT_LABELS_MASK);
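The tc_ct.c change above reserves mapping id 0 for an all-zero ct_labels vector, so flows without labels no longer consume (or leak) entries in the labels mapping table. A minimal standalone sketch of that fast path — `mapping_add_stub()` and `next_id` are illustrative stand-ins, not driver code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the driver's mapping table: hands out
 * non-zero ids starting at 1, so id 0 stays reserved for the
 * all-zero label vector. */
static uint32_t next_id = 1;

static int mapping_add_stub(const uint32_t *labels, uint32_t *id)
{
	(void)labels;
	*id = next_id++;
	return 0;
}

/* Mirrors the shape of mlx5_get_label_mapping(): an all-zero
 * ct_labels vector gets id 0 and never touches the mapping table. */
static int get_label_mapping(const uint32_t labels[4], uint32_t *id)
{
	static const uint32_t zeros[4];

	if (!memcmp(labels, zeros, sizeof(zeros))) { /* memchr_inv() analogue */
		*id = 0;
		return 0;
	}
	return mapping_add_stub(labels, id);
}
```

The matching put side (as in `mlx5_put_label_mapping()`) simply skips the remove when the id is 0, which is why the reserved id is safe on every error path.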
+10
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h
··· 21 21 MLX5E_TC_TUNNEL_TYPE_MPLSOUDP, 22 22 }; 23 23 24 + struct mlx5e_encap_key { 25 + const struct ip_tunnel_key *ip_tun_key; 26 + struct mlx5e_tc_tunnel *tc_tunnel; 27 + }; 28 + 24 29 struct mlx5e_tc_tunnel { 25 30 int tunnel_type; 26 31 enum mlx5_flow_match_level match_level; ··· 49 44 struct flow_cls_offload *f, 50 45 void *headers_c, 51 46 void *headers_v); 47 + bool (*encap_info_equal)(struct mlx5e_encap_key *a, 48 + struct mlx5e_encap_key *b); 52 49 }; 53 50 54 51 extern struct mlx5e_tc_tunnel vxlan_tunnel; ··· 109 102 struct flow_cls_offload *f, 110 103 void *headers_c, 111 104 void *headers_v); 105 + 106 + bool mlx5e_tc_tun_encap_info_equal_generic(struct mlx5e_encap_key *a, 107 + struct mlx5e_encap_key *b); 112 108 113 109 #endif /* CONFIG_MLX5_ESWITCH */ 114 110
+9 -14
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
··· 477 477 mlx5e_decap_dealloc(priv, d); 478 478 } 479 479 480 - struct encap_key { 481 - const struct ip_tunnel_key *ip_tun_key; 482 - struct mlx5e_tc_tunnel *tc_tunnel; 483 - }; 484 - 485 - static int cmp_encap_info(struct encap_key *a, 486 - struct encap_key *b) 480 + bool mlx5e_tc_tun_encap_info_equal_generic(struct mlx5e_encap_key *a, 481 + struct mlx5e_encap_key *b) 487 482 { 488 - return memcmp(a->ip_tun_key, b->ip_tun_key, sizeof(*a->ip_tun_key)) || 489 - a->tc_tunnel->tunnel_type != b->tc_tunnel->tunnel_type; 483 + return memcmp(a->ip_tun_key, b->ip_tun_key, sizeof(*a->ip_tun_key)) == 0 && 484 + a->tc_tunnel->tunnel_type == b->tc_tunnel->tunnel_type; 490 485 } 491 486 492 487 static int cmp_decap_info(struct mlx5e_decap_key *a, ··· 490 495 return memcmp(&a->key, &b->key, sizeof(b->key)); 491 496 } 492 497 493 - static int hash_encap_info(struct encap_key *key) 498 + static int hash_encap_info(struct mlx5e_encap_key *key) 494 499 { 495 500 return jhash(key->ip_tun_key, sizeof(*key->ip_tun_key), 496 501 key->tc_tunnel->tunnel_type); ··· 512 517 } 513 518 514 519 static struct mlx5e_encap_entry * 515 - mlx5e_encap_get(struct mlx5e_priv *priv, struct encap_key *key, 520 + mlx5e_encap_get(struct mlx5e_priv *priv, struct mlx5e_encap_key *key, 516 521 uintptr_t hash_key) 517 522 { 518 523 struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; 524 + struct mlx5e_encap_key e_key; 519 525 struct mlx5e_encap_entry *e; 520 - struct encap_key e_key; 521 526 522 527 hash_for_each_possible_rcu(esw->offloads.encap_tbl, e, 523 528 encap_hlist, hash_key) { 524 529 e_key.ip_tun_key = &e->tun_info->key; 525 530 e_key.tc_tunnel = e->tunnel; 526 - if (!cmp_encap_info(&e_key, key) && 531 + if (e->tunnel->encap_info_equal(&e_key, key) && 527 532 mlx5e_encap_take(e)) 528 533 return e; 529 534 } ··· 690 695 struct mlx5_flow_attr *attr = flow->attr; 691 696 const struct ip_tunnel_info *tun_info; 692 697 unsigned long tbl_time_before = 0; 693 - struct encap_key key; 694 698 struct mlx5e_encap_entry *e; 699 + struct mlx5e_encap_key key; 695 700 bool entry_created = false; 696 701 unsigned short family; 697 702 uintptr_t hash_key;
+29
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
··· 329 329 return mlx5e_tc_tun_parse_geneve_options(priv, spec, f); 330 330 } 331 331 332 + static bool mlx5e_tc_tun_encap_info_equal_geneve(struct mlx5e_encap_key *a, 333 + struct mlx5e_encap_key *b) 334 + { 335 + struct ip_tunnel_info *a_info; 336 + struct ip_tunnel_info *b_info; 337 + bool a_has_opts, b_has_opts; 338 + 339 + if (!mlx5e_tc_tun_encap_info_equal_generic(a, b)) 340 + return false; 341 + 342 + a_has_opts = !!(a->ip_tun_key->tun_flags & TUNNEL_GENEVE_OPT); 343 + b_has_opts = !!(b->ip_tun_key->tun_flags & TUNNEL_GENEVE_OPT); 344 + 345 + /* keys are equal when both don't have any options attached */ 346 + if (!a_has_opts && !b_has_opts) 347 + return true; 348 + 349 + if (a_has_opts != b_has_opts) 350 + return false; 351 + 352 + /* geneve options stored in memory next to ip_tunnel_info struct */ 353 + a_info = container_of(a->ip_tun_key, struct ip_tunnel_info, key); 354 + b_info = container_of(b->ip_tun_key, struct ip_tunnel_info, key); 355 + 356 + return a_info->options_len == b_info->options_len && 357 + memcmp(a_info + 1, b_info + 1, a_info->options_len) == 0; 358 + } 359 + 332 360 struct mlx5e_tc_tunnel geneve_tunnel = { 333 361 .tunnel_type = MLX5E_TC_TUNNEL_TYPE_GENEVE, 334 362 .match_level = MLX5_MATCH_L4, ··· 366 338 .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_geneve, 367 339 .parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_geneve, 368 340 .parse_tunnel = mlx5e_tc_tun_parse_geneve, 341 + .encap_info_equal = mlx5e_tc_tun_encap_info_equal_geneve, 369 342 };
+1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c
··· 94 94 .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_gretap, 95 95 .parse_udp_ports = NULL, 96 96 .parse_tunnel = mlx5e_tc_tun_parse_gretap, 97 + .encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic, 97 98 };
+1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c
··· 131 131 .generate_ip_tun_hdr = generate_ip_tun_hdr, 132 132 .parse_udp_ports = parse_udp_ports, 133 133 .parse_tunnel = parse_tunnel, 134 + .encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic, 134 135 };
+1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
··· 150 150 .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_vxlan, 151 151 .parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_vxlan, 152 152 .parse_tunnel = mlx5e_tc_tun_parse_vxlan, 153 + .encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic, 153 154 };
+6
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
··· 441 441 return wqe_size * 2 - 1; 442 442 } 443 443 444 + static inline bool mlx5e_icosq_can_post_wqe(struct mlx5e_icosq *sq, u16 wqe_size) 445 + { 446 + u16 room = sq->reserved_room + mlx5e_stop_room_for_wqe(wqe_size); 447 + 448 + return mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room); 449 + } 444 450 #endif
+19 -21
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
··· 46 46 struct tls12_crypto_info_aes_gcm_128 crypto_info; 47 47 struct accel_rule rule; 48 48 struct sock *sk; 49 - struct mlx5e_rq_stats *stats; 49 + struct mlx5e_rq_stats *rq_stats; 50 + struct mlx5e_tls_sw_stats *sw_stats; 50 51 struct completion add_ctx; 51 52 u32 tirn; 52 53 u32 key_id; ··· 138 137 { 139 138 struct mlx5e_set_tls_static_params_wqe *wqe; 140 139 struct mlx5e_icosq_wqe_info wi; 141 - u16 pi, num_wqebbs, room; 140 + u16 pi, num_wqebbs; 142 141 143 142 num_wqebbs = MLX5E_TLS_SET_STATIC_PARAMS_WQEBBS; 144 - room = mlx5e_stop_room_for_wqe(num_wqebbs); 145 - if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room))) 143 + if (unlikely(!mlx5e_icosq_can_post_wqe(sq, num_wqebbs))) 146 144 return ERR_PTR(-ENOSPC); 147 145 148 146 pi = mlx5e_icosq_get_next_pi(sq, num_wqebbs); ··· 168 168 { 169 169 struct mlx5e_set_tls_progress_params_wqe *wqe; 170 170 struct mlx5e_icosq_wqe_info wi; 171 - u16 pi, num_wqebbs, room; 171 + u16 pi, num_wqebbs; 172 172 173 173 num_wqebbs = MLX5E_TLS_SET_PROGRESS_PARAMS_WQEBBS; 174 - room = mlx5e_stop_room_for_wqe(num_wqebbs); 175 - if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room))) 174 + if (unlikely(!mlx5e_icosq_can_post_wqe(sq, num_wqebbs))) 176 175 return ERR_PTR(-ENOSPC); 177 176 178 177 pi = mlx5e_icosq_get_next_pi(sq, num_wqebbs); ··· 217 218 return err; 218 219 219 220 err_out: 220 - priv_rx->stats->tls_resync_req_skip++; 221 + priv_rx->rq_stats->tls_resync_req_skip++; 221 222 err = PTR_ERR(cseg); 222 223 complete(&priv_rx->add_ctx); 223 224 goto unlock; ··· 276 277 277 278 buf->priv_rx = priv_rx; 278 279 279 - BUILD_BUG_ON(MLX5E_KTLS_GET_PROGRESS_WQEBBS != 1); 280 - 281 280 spin_lock_bh(&sq->channel->async_icosq_lock); 282 281 283 - if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) { 282 + if (unlikely(!mlx5e_icosq_can_post_wqe(sq, MLX5E_KTLS_GET_PROGRESS_WQEBBS))) { 284 283 spin_unlock_bh(&sq->channel->async_icosq_lock); 285 284 err = -ENOSPC; 286 285 goto err_dma_unmap; 287 286 } 288 287 289 - pi = mlx5e_icosq_get_next_pi(sq, 1); 288 + pi = mlx5e_icosq_get_next_pi(sq, MLX5E_KTLS_GET_PROGRESS_WQEBBS); 290 289 wqe = MLX5E_TLS_FETCH_GET_PROGRESS_PARAMS_WQE(sq, pi); 291 290 292 291 #define GET_PSV_DS_CNT (DIV_ROUND_UP(sizeof(*wqe), MLX5_SEND_WQE_DS)) ··· 304 307 305 308 wi = (struct mlx5e_icosq_wqe_info) { 306 309 .wqe_type = MLX5E_ICOSQ_WQE_GET_PSV_TLS, 307 - .num_wqebbs = 1, 310 + .num_wqebbs = MLX5E_KTLS_GET_PROGRESS_WQEBBS, 308 311 .tls_get_params.buf = buf, 309 312 }; 310 313 icosq_fill_wi(sq, pi, &wi); ··· 319 322 err_free: 320 323 kfree(buf); 321 324 err_out: 322 - priv_rx->stats->tls_resync_req_skip++; 325 + priv_rx->rq_stats->tls_resync_req_skip++; 323 326 return err; 324 327 } 325 328 ··· 375 378 376 379 cseg = post_static_params(sq, priv_rx); 377 380 if (IS_ERR(cseg)) { 378 - priv_rx->stats->tls_resync_res_skip++; 381 + priv_rx->rq_stats->tls_resync_res_skip++; 379 382 err = PTR_ERR(cseg); 380 383 goto unlock; 381 384 } 382 385 /* Do not increment priv_rx refcnt, CQE handling is empty */ 383 386 mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, cseg); 384 - priv_rx->stats->tls_resync_res_ok++; 387 + priv_rx->rq_stats->tls_resync_res_ok++; 385 388 unlock: 386 389 spin_unlock_bh(&c->async_icosq_lock); 387 390 ··· 417 420 auth_state = MLX5_GET(tls_progress_params, ctx, auth_state); 418 421 if (tracker_state != MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING || 419 422 auth_state != MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD) { 420 - priv_rx->stats->tls_resync_req_skip++; 423 + priv_rx->rq_stats->tls_resync_req_skip++; 421 424 goto out; 422 425 } 423 426 424 427 hw_seq = MLX5_GET(tls_progress_params, ctx, hw_resync_tcp_sn); 425 428 tls_offload_rx_resync_async_request_end(priv_rx->sk, cpu_to_be32(hw_seq)); 426 - priv_rx->stats->tls_resync_req_end++; 429 + priv_rx->rq_stats->tls_resync_req_end++; 427 430 out: 428 431 mlx5e_ktls_priv_rx_put(priv_rx); 429 432 dma_unmap_single(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); ··· 606 609 priv_rx->rxq = rxq; 607 610 priv_rx->sk = sk; 608 611 609 - priv_rx->stats = &priv->channel_stats[rxq].rq; 612 + priv_rx->rq_stats = &priv->channel_stats[rxq].rq; 613 + priv_rx->sw_stats = &priv->tls->sw_stats; 610 614 mlx5e_set_ktls_rx_priv_ctx(tls_ctx, priv_rx); 611 615 612 616 rqtn = priv->direct_tir[rxq].rqt.rqtn; ··· 628 630 if (err) 629 631 goto err_post_wqes; 630 632 631 - priv_rx->stats->tls_ctx++; 633 + atomic64_inc(&priv_rx->sw_stats->rx_tls_ctx); 632 634 633 635 return 0; 634 636 ··· 664 666 if (cancel_work_sync(&resync->work)) 665 667 mlx5e_ktls_priv_rx_put(priv_rx); 666 668 667 - priv_rx->stats->tls_del++; 669 + atomic64_inc(&priv_rx->sw_stats->rx_tls_del); 668 670 if (priv_rx->rule.rule) 669 671 mlx5e_accel_fs_del_sk(priv_rx->rule.rule); 670 672
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB 2 2 // Copyright (c) 2019 Mellanox Technologies. 3 3 4 + #include "en_accel/tls.h" 4 5 #include "en_accel/ktls_txrx.h" 5 6 #include "en_accel/ktls_utils.h" 6 7 ··· 51 50 struct mlx5e_ktls_offload_context_tx { 52 51 struct tls_offload_context_tx *tx_ctx; 53 52 struct tls12_crypto_info_aes_gcm_128 crypto_info; 53 + struct mlx5e_tls_sw_stats *sw_stats; 54 54 u32 expected_seq; 55 55 u32 tisn; 56 56 u32 key_id; ··· 101 99 if (err) 102 100 goto err_create_key; 103 101 102 + priv_tx->sw_stats = &priv->tls->sw_stats; 104 103 priv_tx->expected_seq = start_offload_tcp_sn; 105 104 priv_tx->crypto_info = 106 105 *(struct tls12_crypto_info_aes_gcm_128 *)crypto_info; ··· 114 111 goto err_create_tis; 115 112 116 113 priv_tx->ctx_post_pending = true; 114 + atomic64_inc(&priv_tx->sw_stats->tx_tls_ctx); 117 115 118 116 return 0; 119 117 ··· 456 452 457 453 if (unlikely(mlx5e_ktls_tx_offload_test_and_clear_pending(priv_tx))) { 458 454 mlx5e_ktls_tx_post_param_wqes(sq, priv_tx, false, false); 459 - stats->tls_ctx++; 460 455 } 461 456 462 457 seq = ntohl(tcp_hdr(skb)->seq);
+3
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.h
··· 41 41 #include "en.h" 42 42 43 43 struct mlx5e_tls_sw_stats { 44 + atomic64_t tx_tls_ctx; 44 45 atomic64_t tx_tls_drop_metadata; 45 46 atomic64_t tx_tls_drop_resync_alloc; 46 47 atomic64_t tx_tls_drop_no_sync_data; 47 48 atomic64_t tx_tls_drop_bypass_required; 49 + atomic64_t rx_tls_ctx; 50 + atomic64_t rx_tls_del; 48 51 atomic64_t rx_tls_drop_resync_request; 49 52 atomic64_t rx_tls_resync_request; 50 53 atomic64_t rx_tls_resync_reply;
+30 -19
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_stats.c
··· 45 45 { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_drop_bypass_required) }, 46 46 }; 47 47 48 + static const struct counter_desc mlx5e_ktls_sw_stats_desc[] = { 49 + { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_ctx) }, 50 + { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, rx_tls_ctx) }, 51 + { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, rx_tls_del) }, 52 + }; 53 + 48 54 #define MLX5E_READ_CTR_ATOMIC64(ptr, dsc, i) \ 49 55 atomic64_read((atomic64_t *)((char *)(ptr) + (dsc)[i].offset)) 50 56 51 - #define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc) 52 - 53 - static bool is_tls_atomic_stats(struct mlx5e_priv *priv) 57 + static const struct counter_desc *get_tls_atomic_stats(struct mlx5e_priv *priv) 54 58 { 55 - return priv->tls && !mlx5_accel_is_ktls_device(priv->mdev); 59 + if (!priv->tls) 60 + return NULL; 61 + if (mlx5_accel_is_ktls_device(priv->mdev)) 62 + return mlx5e_ktls_sw_stats_desc; 63 + return mlx5e_tls_sw_stats_desc; 56 64 } 57 65 58 66 int mlx5e_tls_get_count(struct mlx5e_priv *priv) 59 67 { 60 - if (!is_tls_atomic_stats(priv)) 68 + if (!priv->tls) 61 69 return 0; 62 - 63 - return NUM_TLS_SW_COUNTERS; 70 + if (mlx5_accel_is_ktls_device(priv->mdev)) 71 + return ARRAY_SIZE(mlx5e_ktls_sw_stats_desc); 72 + return ARRAY_SIZE(mlx5e_tls_sw_stats_desc); 64 73 } 65 74 66 75 int mlx5e_tls_get_strings(struct mlx5e_priv *priv, uint8_t *data) 67 76 { 68 - unsigned int i, idx = 0; 77 + const struct counter_desc *stats_desc; 78 + unsigned int i, n, idx = 0; 69 79 70 - if (!is_tls_atomic_stats(priv)) 71 - return 0; 80 + stats_desc = get_tls_atomic_stats(priv); 81 + n = mlx5e_tls_get_count(priv); 72 82 73 - for (i = 0; i < NUM_TLS_SW_COUNTERS; i++) 83 + for (i = 0; i < n; i++) 74 84 strcpy(data + (idx++) * ETH_GSTRING_LEN, 75 - mlx5e_tls_sw_stats_desc[i].format); 85 + stats_desc[i].format); 76 86 77 - return NUM_TLS_SW_COUNTERS; 87 + return n; 78 88 } 79 89 80 90 int mlx5e_tls_get_stats(struct mlx5e_priv *priv, u64 *data) 81 91 { 82 - int i, idx = 0; 92 + const struct counter_desc *stats_desc; 93 + unsigned int i, n, idx = 0; 83 94 84 - if (!is_tls_atomic_stats(priv)) 85 - return 0; 95 + stats_desc = get_tls_atomic_stats(priv); 96 + n = mlx5e_tls_get_count(priv); 86 97 87 - for (i = 0; i < NUM_TLS_SW_COUNTERS; i++) 98 + for (i = 0; i < n; i++) 88 99 data[idx++] = 89 100 MLX5E_READ_CTR_ATOMIC64(&priv->tls->sw_stats, 90 - mlx5e_tls_sw_stats_desc, i); 101 + stats_desc, i); 91 102 92 103 return n; 93 104 }
+11 -11
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 759 759 return 0; 760 760 } 761 761 762 - static void ptys2ethtool_supported_advertised_port(struct ethtool_link_ksettings *link_ksettings, 763 - u32 eth_proto_cap, 764 - u8 connector_type, bool ext) 762 + static void ptys2ethtool_supported_advertised_port(struct mlx5_core_dev *mdev, 763 + struct ethtool_link_ksettings *link_ksettings, 764 + u32 eth_proto_cap, u8 connector_type) 765 765 { 766 - if ((!connector_type && !ext) || connector_type >= MLX5E_CONNECTOR_TYPE_NUMBER) { 766 + if (!MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type)) { 767 767 if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_10GBASE_CR) 768 768 | MLX5E_PROT_MASK(MLX5E_10GBASE_SR) 769 769 | MLX5E_PROT_MASK(MLX5E_40GBASE_CR4) ··· 899 899 [MLX5E_PORT_OTHER] = PORT_OTHER, 900 900 }; 901 901 902 - static u8 get_connector_port(u32 eth_proto, u8 connector_type, bool ext) 902 + static u8 get_connector_port(struct mlx5_core_dev *mdev, u32 eth_proto, u8 connector_type) 903 903 { 904 - if ((connector_type || ext) && connector_type < MLX5E_CONNECTOR_TYPE_NUMBER) 904 + if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type)) 905 905 return ptys2connector_type[connector_type]; 906 906 907 907 if (eth_proto & ··· 1002 1002 data_rate_oper, link_ksettings); 1003 1003 1004 1004 eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap; 1005 - 1006 - link_ksettings->base.port = get_connector_port(eth_proto_oper, 1007 - connector_type, ext); 1008 - ptys2ethtool_supported_advertised_port(link_ksettings, eth_proto_admin, 1009 - connector_type, ext); 1005 + connector_type = connector_type < MLX5E_CONNECTOR_TYPE_NUMBER ? 1006 + connector_type : MLX5E_PORT_UNKNOWN; 1007 + link_ksettings->base.port = get_connector_port(mdev, eth_proto_oper, connector_type); 1008 + ptys2ethtool_supported_advertised_port(mdev, link_ksettings, eth_proto_admin, 1009 + connector_type); 1010 1010 get_lp_advertising(mdev, eth_proto_lp, link_ksettings); 1011 1011 1012 1012 if (an_status == MLX5_AN_COMPLETE)
+1
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 1041 1041 1042 1042 sq->channel = c; 1043 1043 sq->uar_map = mdev->mlx5e_res.hw_objs.bfreg.map; 1044 + sq->reserved_room = param->stop_room; 1044 1045 1045 1046 param->wq.db_numa_node = cpu_to_node(c->cpu); 1046 1047 err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl);
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 972 972 973 973 mlx5e_rep_tc_enable(priv); 974 974 975 - mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK, 976 - 0, 0, MLX5_VPORT_ADMIN_STATE_AUTO); 975 + if (MLX5_CAP_GEN(mdev, uplink_follow)) 976 + mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK, 977 + 0, 0, MLX5_VPORT_ADMIN_STATE_AUTO); 977 978 mlx5_lag_add(mdev, netdev); 978 979 priv->events_nb.notifier_call = uplink_rep_async_event; 979 980 mlx5_notifier_register(mdev, &priv->events_nb);
-10
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
··· 116 116 #ifdef CONFIG_MLX5_EN_TLS 117 117 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_packets) }, 118 118 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_bytes) }, 119 - { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ctx) }, 120 119 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ooo) }, 121 120 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_packets) }, 122 121 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_bytes) }, ··· 179 180 #ifdef CONFIG_MLX5_EN_TLS 180 181 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_packets) }, 181 182 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_bytes) }, 182 - { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_ctx) }, 183 - { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_del) }, 184 183 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_pkt) }, 185 184 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_start) }, 186 185 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_end) }, ··· 339 342 #ifdef CONFIG_MLX5_EN_TLS 340 343 s->rx_tls_decrypted_packets += rq_stats->tls_decrypted_packets; 341 344 s->rx_tls_decrypted_bytes += rq_stats->tls_decrypted_bytes; 342 - s->rx_tls_ctx += rq_stats->tls_ctx; 343 - s->rx_tls_del += rq_stats->tls_del; 344 345 s->rx_tls_resync_req_pkt += rq_stats->tls_resync_req_pkt; 345 346 s->rx_tls_resync_req_start += rq_stats->tls_resync_req_start; 346 347 s->rx_tls_resync_req_end += rq_stats->tls_resync_req_end; ··· 385 390 #ifdef CONFIG_MLX5_EN_TLS 386 391 s->tx_tls_encrypted_packets += sq_stats->tls_encrypted_packets; 387 392 s->tx_tls_encrypted_bytes += sq_stats->tls_encrypted_bytes; 388 - s->tx_tls_ctx += sq_stats->tls_ctx; 389 393 s->tx_tls_ooo += sq_stats->tls_ooo; 390 394 s->tx_tls_dump_bytes += sq_stats->tls_dump_bytes; 391 395 s->tx_tls_dump_packets += sq_stats->tls_dump_packets; ··· 1624 1630 #ifdef CONFIG_MLX5_EN_TLS 1625 1631 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_packets) }, 1626 1632 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_bytes) }, 1627 - { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_ctx) }, 1628 - { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_del) }, 1629 1633 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_pkt) }, 1630 1634 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_start) }, 1631 1635 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_end) }, ··· 1650 1658 #ifdef CONFIG_MLX5_EN_TLS 1651 1659 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_packets) }, 1652 1660 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) }, 1653 - { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ctx) }, 1654 1661 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ooo) }, 1655 1662 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) }, 1656 1663 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) }, ··· 1807 1816 #ifdef CONFIG_MLX5_EN_TLS 1808 1817 { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_packets) }, 1809 1818 { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) }, 1810 - { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_ctx) }, 1811 1819 { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_ooo) }, 1812 1820 { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) }, 1813 1821 { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) },
-6
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
··· 192 192 #ifdef CONFIG_MLX5_EN_TLS 193 193 u64 tx_tls_encrypted_packets; 194 194 u64 tx_tls_encrypted_bytes; 195 - u64 tx_tls_ctx; 196 195 u64 tx_tls_ooo; 197 196 u64 tx_tls_dump_packets; 198 197 u64 tx_tls_dump_bytes; ··· 202 203 203 204 u64 rx_tls_decrypted_packets; 204 205 u64 rx_tls_decrypted_bytes; 205 - u64 rx_tls_ctx; 206 - u64 rx_tls_del; 207 206 u64 rx_tls_resync_req_pkt; 208 207 u64 rx_tls_resync_req_start; 209 208 u64 rx_tls_resync_req_end; ··· 332 335 #ifdef CONFIG_MLX5_EN_TLS 333 336 u64 tls_decrypted_packets; 334 337 u64 tls_decrypted_bytes; 335 - u64 tls_ctx; 336 - u64 tls_del; 337 338 u64 tls_resync_req_pkt; 338 339 u64 tls_resync_req_start; 339 340 u64 tls_resync_req_end; ··· 360 365 #ifdef CONFIG_MLX5_EN_TLS 361 366 u64 tls_encrypted_packets; 362 367 u64 tls_encrypted_bytes; 363 - u64 tls_ctx; 364 368 u64 tls_ooo; 365 369 u64 tls_dump_packets; 366 370 u64 tls_dump_bytes;
+12 -1
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 936 936 mutex_unlock(&table->lock); 937 937 } 938 938 939 + #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING 940 + #define MLX5_MAX_ASYNC_EQS 4 941 + #else 942 + #define MLX5_MAX_ASYNC_EQS 3 943 + #endif 944 + 939 945 int mlx5_eq_table_create(struct mlx5_core_dev *dev) 940 946 { 941 947 struct mlx5_eq_table *eq_table = dev->priv.eq_table; 948 + int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ? 949 + MLX5_CAP_GEN(dev, max_num_eqs) : 950 + 1 << MLX5_CAP_GEN(dev, log_max_eq); 942 951 int err; 943 952 944 953 eq_table->num_comp_eqs = 945 - mlx5_irq_get_num_comp(eq_table->irq_table); 954 + min_t(int, 955 + mlx5_irq_get_num_comp(eq_table->irq_table), 956 + num_eqs - MLX5_MAX_ASYNC_EQS); 946 957 947 958 err = create_async_eqs(dev); 948 959 if (err) {
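The eq.c hunk above caps the number of completion EQs so the async EQs always fit within the device's total EQ capacity (max_num_eqs when reported, otherwise 1 << log_max_eq). A small sketch of that budgeting arithmetic, with illustrative parameter names:

```c
#include <assert.h>

/* Reserved async EQs, as in the hunk above: one extra is needed when
 * on-demand paging (ODP) is compiled in. */
#define MAX_ASYNC_EQS_ODP 4
#define MAX_ASYNC_EQS     3

/* Completion EQs get whatever IRQ vectors allow, but never more than
 * the device's total EQ capacity minus the reserved async EQs. */
static int num_comp_eqs(int num_irqs, int max_num_eqs, int log_max_eq,
			int max_async_eqs)
{
	int num_eqs = max_num_eqs ? max_num_eqs : 1 << log_max_eq;
	int budget = num_eqs - max_async_eqs;

	return num_irqs < budget ? num_irqs : budget;
}
```

For example, a device reporting log_max_eq = 5 (32 EQs total) with 64 IRQ vectors available gets 32 - 3 = 29 completion EQs rather than 64.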
+38 -28
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 400 400 return i; 401 401 } 402 402 403 + static bool 404 + esw_src_port_rewrite_supported(struct mlx5_eswitch *esw) 405 + { 406 + return MLX5_CAP_GEN(esw->dev, reg_c_preserve) && 407 + mlx5_eswitch_vport_match_metadata_enabled(esw) && 408 + MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level); 409 + } 410 + 403 411 static int 404 412 esw_setup_dests(struct mlx5_flow_destination *dest, 405 413 struct mlx5_flow_act *flow_act, ··· 421 413 int err = 0; 422 414 423 415 if (!mlx5_eswitch_termtbl_required(esw, attr, flow_act, spec) && 424 - MLX5_CAP_GEN(esw_attr->in_mdev, reg_c_preserve) && 425 - mlx5_eswitch_vport_match_metadata_enabled(esw) && 426 - MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level)) 416 + esw_src_port_rewrite_supported(esw)) 427 417 attr->flags |= MLX5_ESW_ATTR_FLAG_SRC_REWRITE; 428 418 429 419 if (attr->flags & MLX5_ESW_ATTR_FLAG_SAMPLE) { ··· 1642 1636 } 1643 1637 esw->fdb_table.offloads.send_to_vport_grp = g; 1644 1638 1645 - /* meta send to vport */ 1646 - memset(flow_group_in, 0, inlen); 1647 - MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, 1648 - MLX5_MATCH_MISC_PARAMETERS_2); 1639 + if (esw_src_port_rewrite_supported(esw)) { 1640 + /* meta send to vport */ 1641 + memset(flow_group_in, 0, inlen); 1642 + MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, 1643 + MLX5_MATCH_MISC_PARAMETERS_2); 1649 1644 1650 - match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria); 1645 + match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria); 1651 1646 1652 - MLX5_SET(fte_match_param, match_criteria, 1653 - misc_parameters_2.metadata_reg_c_0, mlx5_eswitch_get_vport_metadata_mask()); 1654 - MLX5_SET(fte_match_param, match_criteria, 1655 - misc_parameters_2.metadata_reg_c_1, ESW_TUN_MASK); 1647 + MLX5_SET(fte_match_param, match_criteria, 1648 + misc_parameters_2.metadata_reg_c_0, 1649 + mlx5_eswitch_get_vport_metadata_mask()); 1650 + MLX5_SET(fte_match_param, match_criteria, 1651 + misc_parameters_2.metadata_reg_c_1, ESW_TUN_MASK); 1656 1652 1657 - num_vfs = esw->esw_funcs.num_vfs; 1658 - if (num_vfs) { 1659 - MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix); 1660 - MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, ix + num_vfs - 1); 1661 - ix += num_vfs; 1653 + num_vfs = esw->esw_funcs.num_vfs; 1654 + if (num_vfs) { 1655 + MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix); 1656 + MLX5_SET(create_flow_group_in, flow_group_in, 1657 + end_flow_index, ix + num_vfs - 1); 1658 + ix += num_vfs; 1662 1659 1663 - g = mlx5_create_flow_group(fdb, flow_group_in); 1664 - if (IS_ERR(g)) { 1665 - err = PTR_ERR(g); 1666 - esw_warn(dev, "Failed to create send-to-vport meta flow group err(%d)\n", 1667 - err); 1668 - goto send_vport_meta_err; 1660 + g = mlx5_create_flow_group(fdb, flow_group_in); 1661 + if (IS_ERR(g)) { 1662 + err = PTR_ERR(g); 1663 + esw_warn(dev, "Failed to create send-to-vport meta flow group err(%d)\n", 1664 + err); 1665 + goto send_vport_meta_err; 1666 + } 1667 + esw->fdb_table.offloads.send_to_vport_meta_grp = g; 1668 + 1669 + err = mlx5_eswitch_add_send_to_vport_meta_rules(esw); 1670 + if (err) 1671 + goto meta_rule_err; 1669 1672 } 1670 - esw->fdb_table.offloads.send_to_vport_meta_grp = g; 1671 - 1672 - err = mlx5_eswitch_add_send_to_vport_meta_rules(esw); 1673 - if (err) 1674 - goto meta_rule_err; 1675 1673 } 1676 1674 1677 1675 if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {
+15
drivers/net/ethernet/mellanox/mlxsw/spectrum.h
··· 22 22 #include <net/red.h> 23 23 #include <net/vxlan.h> 24 24 #include <net/flow_offload.h> 25 + #include <net/inet_ecn.h> 25 26 26 27 #include "port.h" 27 28 #include "core.h" ··· 367 366 u32 *p_eth_proto_oper); 368 367 u32 (*ptys_proto_cap_masked_get)(u32 eth_proto_cap); 369 368 }; 369 + 370 + static inline u8 mlxsw_sp_tunnel_ecn_decap(u8 outer_ecn, u8 inner_ecn, 371 + bool *trap_en) 372 + { 373 + bool set_ce = false; 374 + 375 + *trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce); 376 + if (set_ce) 377 + return INET_ECN_CE; 378 + else if (outer_ecn == INET_ECN_ECT_1 && inner_ecn == INET_ECN_ECT_0) 379 + return INET_ECN_ECT_1; 380 + else 381 + return inner_ecn; 382 + } 370 383 371 384 static inline struct net_device * 372 385 mlxsw_sp_bridge_vxlan_dev_find(struct net_device *br_dev)
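The mlxsw_sp_tunnel_ecn_decap() helper added above centralizes the RFC 6040 decapsulation logic that spectrum_ipip.c and spectrum_nve.c below now both call. A standalone sketch of its behavior — ecn_decapsulate() here is a simplified stand-in for the kernel's __INET_ECN_decapsulate():

```c
#include <assert.h>
#include <stdbool.h>

/* ECN codepoints as defined in include/net/inet_ecn.h */
enum { ECN_NOT_ECT = 0, ECN_ECT_1 = 1, ECN_ECT_0 = 2, ECN_CE = 3 };

/* Simplified stand-in for __INET_ECN_decapsulate(): returns nonzero
 * when the combination should be trapped (not-ECT inner with an ECT
 * or CE outer), and requests CE marking when the outer is CE on an
 * ECN-capable inner packet. */
static int ecn_decapsulate(unsigned char outer, unsigned char inner,
			   bool *set_ce)
{
	if (inner == ECN_NOT_ECT)
		return outer == ECN_NOT_ECT ? 0 : (outer == ECN_CE ? 2 : 1);
	if (outer == ECN_CE)
		*set_ce = true;
	return 0;
}

/* Mirrors mlxsw_sp_tunnel_ecn_decap() from the hunk above: pick the
 * inner ECN value to use after decap and report whether to trap. */
static unsigned char tunnel_ecn_decap(unsigned char outer,
				      unsigned char inner, bool *trap_en)
{
	bool set_ce = false;

	*trap_en = !!ecn_decapsulate(outer, inner, &set_ce);
	if (set_ce)
		return ECN_CE;
	else if (outer == ECN_ECT_1 && inner == ECN_ECT_0)
		return ECN_ECT_1;
	return inner;
}
```

Note the ECT(1)/ECT(0) branch: it is the case the shared helper adds relative to the two near-duplicate open-coded versions being removed below.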
+14 -5
drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
··· 1230 1230 u32 ptys_eth_proto, 1231 1231 struct ethtool_link_ksettings *cmd) 1232 1232 { 1233 + struct mlxsw_sp1_port_link_mode link; 1233 1234 int i; 1234 1235 1235 - cmd->link_mode = -1; 1236 + cmd->base.speed = SPEED_UNKNOWN; 1237 + cmd->base.duplex = DUPLEX_UNKNOWN; 1238 + cmd->lanes = 0; 1236 1239 1237 1240 if (!carrier_ok) 1238 1241 return; 1239 1242 1240 1243 for (i = 0; i < MLXSW_SP1_PORT_LINK_MODE_LEN; i++) { 1241 - if (ptys_eth_proto & mlxsw_sp1_port_link_mode[i].mask) 1242 - cmd->link_mode = mlxsw_sp1_port_link_mode[i].mask_ethtool; 1244 + if (ptys_eth_proto & mlxsw_sp1_port_link_mode[i].mask) { 1245 + link = mlxsw_sp1_port_link_mode[i]; 1246 + ethtool_params_from_link_mode(cmd, 1247 + link.mask_ethtool); 1248 + } 1243 1249 } 1244 1250 } 1245 1251 ··· 1678 1672 struct mlxsw_sp2_port_link_mode link; 1679 1673 int i; 1680 1674 1681 - cmd->link_mode = -1; 1675 + cmd->base.speed = SPEED_UNKNOWN; 1676 + cmd->base.duplex = DUPLEX_UNKNOWN; 1677 + cmd->lanes = 0; 1682 1678 1683 1679 if (!carrier_ok) 1684 1680 return; ··· 1688 1680 for (i = 0; i < MLXSW_SP2_PORT_LINK_MODE_LEN; i++) { 1689 1681 if (ptys_eth_proto & mlxsw_sp2_port_link_mode[i].mask) { 1690 1682 link = mlxsw_sp2_port_link_mode[i]; 1691 - cmd->link_mode = link.mask_ethtool[1]; 1683 + ethtool_params_from_link_mode(cmd, 1684 + link.mask_ethtool[1]); 1692 1685 } 1693 1686 } 1694 1687 }
+3 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
··· 337 337 u8 inner_ecn, u8 outer_ecn) 338 338 { 339 339 char tidem_pl[MLXSW_REG_TIDEM_LEN]; 340 - bool trap_en, set_ce = false; 341 340 u8 new_inner_ecn; 341 + bool trap_en; 342 342 343 - trap_en = __INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce); 344 - new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn; 345 - 343 + new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn, 344 + &trap_en); 346 345 mlxsw_reg_tidem_pack(tidem_pl, outer_ecn, inner_ecn, new_inner_ecn, 347 346 trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0); 348 347 return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tidem), tidem_pl);
+3 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
··· 909 909 u8 inner_ecn, u8 outer_ecn) 910 910 { 911 911 char tndem_pl[MLXSW_REG_TNDEM_LEN]; 912 - bool trap_en, set_ce = false; 913 912 u8 new_inner_ecn; 913 + bool trap_en; 914 914 915 - trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce); 916 - new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn; 917 - 915 + new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn, 916 + &trap_en); 918 917 mlxsw_reg_tndem_pack(tndem_pl, outer_ecn, inner_ecn, new_inner_ecn, 919 918 trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0); 920 919 return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tndem), tndem_pl);
+4 -4
drivers/net/ethernet/microchip/lan743x_main.c
··· 885 885 } 886 886 887 887 mac_rx &= ~(MAC_RX_MAX_SIZE_MASK_); 888 - mac_rx |= (((new_mtu + ETH_HLEN + 4) << MAC_RX_MAX_SIZE_SHIFT_) & 889 - MAC_RX_MAX_SIZE_MASK_); 888 + mac_rx |= (((new_mtu + ETH_HLEN + ETH_FCS_LEN) 889 + << MAC_RX_MAX_SIZE_SHIFT_) & MAC_RX_MAX_SIZE_MASK_); 890 890 lan743x_csr_write(adapter, MAC_RX, mac_rx); 891 891 892 892 if (enabled) { ··· 1944 1944 struct sk_buff *skb; 1945 1945 dma_addr_t dma_ptr; 1946 1946 1947 - buffer_length = netdev->mtu + ETH_HLEN + 4 + RX_HEAD_PADDING; 1947 + buffer_length = netdev->mtu + ETH_HLEN + ETH_FCS_LEN + RX_HEAD_PADDING; 1948 1948 1949 1949 descriptor = &rx->ring_cpu_ptr[index]; 1950 1950 buffer_info = &rx->buffer_info[index]; ··· 2040 2040 dev_kfree_skb_irq(skb); 2041 2041 return NULL; 2042 2042 } 2043 - frame_length = max_t(int, 0, frame_length - RX_HEAD_PADDING - 4); 2043 + frame_length = max_t(int, 0, frame_length - ETH_FCS_LEN); 2044 2044 if (skb->len > frame_length) { 2045 2045 skb->tail -= skb->len - frame_length; 2046 2046 skb->len = frame_length;
+1 -1
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
··· 2897 2897 dev_kfree_skb_any(curr); 2898 2898 if (segs != NULL) { 2899 2899 curr = segs; 2900 - segs = segs->next; 2900 + segs = next; 2901 2901 curr->next = NULL; 2902 2902 dev_kfree_skb_any(segs); 2903 2903 }
+1
drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
··· 454 454 dev_consume_skb_any(skb); 455 455 else 456 456 dev_kfree_skb_any(skb); 457 + return; 457 458 } 458 459 459 460 nfp_ccm_rx(&bpf->ccm, skb);
+8
drivers/net/ethernet/netronome/nfp/flower/main.h
··· 192 192 * @qos_rate_limiters: Current active qos rate limiters 193 193 * @qos_stats_lock: Lock on qos stats updates 194 194 * @pre_tun_rule_cnt: Number of pre-tunnel rules offloaded 195 + * @merge_table: Hash table to store merged flows 195 196 */ 196 197 struct nfp_flower_priv { 197 198 struct nfp_app *app; ··· 226 225 unsigned int qos_rate_limiters; 227 226 spinlock_t qos_stats_lock; /* Protect the qos stats */ 228 227 int pre_tun_rule_cnt; 228 + struct rhashtable merge_table; 229 229 }; 230 230 231 231 /** ··· 354 352 }; 355 353 356 354 extern const struct rhashtable_params nfp_flower_table_params; 355 + extern const struct rhashtable_params merge_table_params; 356 + 357 + struct nfp_merge_info { 358 + u64 parent_ctx; 359 + struct rhash_head ht_node; 360 + }; 357 361 358 362 struct nfp_fl_stats_frame { 359 363 __be32 stats_con_id;
+15 -1
drivers/net/ethernet/netronome/nfp/flower/metadata.c
··· 490 490 .automatic_shrinking = true, 491 491 }; 492 492 493 + const struct rhashtable_params merge_table_params = { 494 + .key_offset = offsetof(struct nfp_merge_info, parent_ctx), 495 + .head_offset = offsetof(struct nfp_merge_info, ht_node), 496 + .key_len = sizeof(u64), 497 + }; 498 + 493 499 int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count, 494 500 unsigned int host_num_mems) 495 501 { ··· 512 506 if (err) 513 507 goto err_free_flow_table; 514 508 509 + err = rhashtable_init(&priv->merge_table, &merge_table_params); 510 + if (err) 511 + goto err_free_stats_ctx_table; 512 + 515 513 get_random_bytes(&priv->mask_id_seed, sizeof(priv->mask_id_seed)); 516 514 517 515 /* Init ring buffer and unallocated mask_ids. */ ··· 523 513 kmalloc_array(NFP_FLOWER_MASK_ENTRY_RS, 524 514 NFP_FLOWER_MASK_ELEMENT_RS, GFP_KERNEL); 525 515 if (!priv->mask_ids.mask_id_free_list.buf) 526 - goto err_free_stats_ctx_table; 516 + goto err_free_merge_table; 527 517 528 518 priv->mask_ids.init_unallocated = NFP_FLOWER_MASK_ENTRY_RS - 1; 529 519 ··· 560 550 kfree(priv->mask_ids.last_used); 561 551 err_free_mask_id: 562 552 kfree(priv->mask_ids.mask_id_free_list.buf); 553 + err_free_merge_table: 554 + rhashtable_destroy(&priv->merge_table); 563 555 err_free_stats_ctx_table: 564 556 rhashtable_destroy(&priv->stats_ctx_table); 565 557 err_free_flow_table: ··· 579 567 rhashtable_free_and_destroy(&priv->flow_table, 580 568 nfp_check_rhashtable_empty, NULL); 581 569 rhashtable_free_and_destroy(&priv->stats_ctx_table, 570 + nfp_check_rhashtable_empty, NULL); 571 + rhashtable_free_and_destroy(&priv->merge_table, 582 572 nfp_check_rhashtable_empty, NULL); 583 573 kvfree(priv->stats); 584 574 kfree(priv->mask_ids.mask_id_free_list.buf);
+46 -2
drivers/net/ethernet/netronome/nfp/flower/offload.c
··· 1009 1009 struct netlink_ext_ack *extack = NULL; 1010 1010 struct nfp_fl_payload *merge_flow; 1011 1011 struct nfp_fl_key_ls merge_key_ls; 1012 + struct nfp_merge_info *merge_info; 1013 + u64 parent_ctx = 0; 1012 1014 int err; 1013 1015 1014 1016 ASSERT_RTNL(); ··· 1020 1018 nfp_flower_is_merge_flow(sub_flow1) || 1021 1019 nfp_flower_is_merge_flow(sub_flow2)) 1022 1020 return -EINVAL; 1021 + 1022 + /* check if the two flows are already merged */ 1023 + parent_ctx = (u64)(be32_to_cpu(sub_flow1->meta.host_ctx_id)) << 32; 1024 + parent_ctx |= (u64)(be32_to_cpu(sub_flow2->meta.host_ctx_id)); 1025 + if (rhashtable_lookup_fast(&priv->merge_table, 1026 + &parent_ctx, merge_table_params)) { 1027 + nfp_flower_cmsg_warn(app, "The two flows are already merged.\n"); 1028 + return 0; 1029 + } 1023 1030 1024 1031 err = nfp_flower_can_merge(sub_flow1, sub_flow2); 1025 1032 if (err) ··· 1071 1060 if (err) 1072 1061 goto err_release_metadata; 1073 1062 1063 + merge_info = kmalloc(sizeof(*merge_info), GFP_KERNEL); 1064 + if (!merge_info) { 1065 + err = -ENOMEM; 1066 + goto err_remove_rhash; 1067 + } 1068 + merge_info->parent_ctx = parent_ctx; 1069 + err = rhashtable_insert_fast(&priv->merge_table, &merge_info->ht_node, 1070 + merge_table_params); 1071 + if (err) 1072 + goto err_destroy_merge_info; 1073 + 1074 1074 err = nfp_flower_xmit_flow(app, merge_flow, 1075 1075 NFP_FLOWER_CMSG_TYPE_FLOW_MOD); 1076 1076 if (err) 1077 - goto err_remove_rhash; 1077 + goto err_remove_merge_info; 1078 1078 1079 1079 merge_flow->in_hw = true; 1080 1080 sub_flow1->in_hw = false; 1081 1081 1082 1082 return 0; 1083 1083 1084 + err_remove_merge_info: 1085 + WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table, 1086 + &merge_info->ht_node, 1087 + merge_table_params)); 1088 + err_destroy_merge_info: 1089 + kfree(merge_info); 1084 1090 err_remove_rhash: 1085 1091 WARN_ON_ONCE(rhashtable_remove_fast(&priv->flow_table, 1086 1092 &merge_flow->fl_node, ··· 1387 1359 { 1388 1360 struct nfp_flower_priv 
*priv = app->priv; 1389 1361 struct nfp_fl_payload_link *link, *temp; 1362 + struct nfp_merge_info *merge_info; 1390 1363 struct nfp_fl_payload *origin; 1364 + u64 parent_ctx = 0; 1391 1365 bool mod = false; 1392 1366 int err; 1393 1367 ··· 1426 1396 err_free_links: 1427 1397 /* Clean any links connected with the merged flow. */ 1428 1398 list_for_each_entry_safe(link, temp, &merge_flow->linked_flows, 1429 - merge_flow.list) 1399 + merge_flow.list) { 1400 + u32 ctx_id = be32_to_cpu(link->sub_flow.flow->meta.host_ctx_id); 1401 + 1402 + parent_ctx = (parent_ctx << 32) | (u64)(ctx_id); 1430 1403 nfp_flower_unlink_flow(link); 1404 + } 1405 + 1406 + merge_info = rhashtable_lookup_fast(&priv->merge_table, 1407 + &parent_ctx, 1408 + merge_table_params); 1409 + if (merge_info) { 1410 + WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table, 1411 + &merge_info->ht_node, 1412 + merge_table_params)); 1413 + kfree(merge_info); 1414 + } 1431 1415 1432 1416 kfree(merge_flow->action_data); 1433 1417 kfree(merge_flow->mask_data);
+12
drivers/net/ethernet/xilinx/xilinx_axienet.h
··· 508 508 return axienet_ior(lp, XAE_MDIO_MCR_OFFSET); 509 509 } 510 510 511 + static inline void axienet_lock_mii(struct axienet_local *lp) 512 + { 513 + if (lp->mii_bus) 514 + mutex_lock(&lp->mii_bus->mdio_lock); 515 + } 516 + 517 + static inline void axienet_unlock_mii(struct axienet_local *lp) 518 + { 519 + if (lp->mii_bus) 520 + mutex_unlock(&lp->mii_bus->mdio_lock); 521 + } 522 + 511 523 /** 512 524 * axienet_iow - Memory mapped Axi Ethernet register write 513 525 * @lp: Pointer to axienet local structure
+6 -6
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 1053 1053 * including the MDIO. MDIO must be disabled before resetting. 1054 1054 * Hold MDIO bus lock to avoid MDIO accesses during the reset. 1055 1055 */ 1056 - mutex_lock(&lp->mii_bus->mdio_lock); 1056 + axienet_lock_mii(lp); 1057 1057 ret = axienet_device_reset(ndev); 1058 - mutex_unlock(&lp->mii_bus->mdio_lock); 1058 + axienet_unlock_mii(lp); 1059 1059 1060 1060 ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0); 1061 1061 if (ret) { ··· 1148 1148 } 1149 1149 1150 1150 /* Do a reset to ensure DMA is really stopped */ 1151 - mutex_lock(&lp->mii_bus->mdio_lock); 1151 + axienet_lock_mii(lp); 1152 1152 __axienet_device_reset(lp); 1153 - mutex_unlock(&lp->mii_bus->mdio_lock); 1153 + axienet_unlock_mii(lp); 1154 1154 1155 1155 cancel_work_sync(&lp->dma_err_task); 1156 1156 ··· 1709 1709 * including the MDIO. MDIO must be disabled before resetting. 1710 1710 * Hold MDIO bus lock to avoid MDIO accesses during the reset. 1711 1711 */ 1712 - mutex_lock(&lp->mii_bus->mdio_lock); 1712 + axienet_lock_mii(lp); 1713 1713 __axienet_device_reset(lp); 1714 - mutex_unlock(&lp->mii_bus->mdio_lock); 1714 + axienet_unlock_mii(lp); 1715 1715 1716 1716 for (i = 0; i < lp->tx_bd_num; i++) { 1717 1717 cur_p = &lp->tx_bd_v[i];
+20 -4
drivers/net/geneve.c
··· 909 909 910 910 info = skb_tunnel_info(skb); 911 911 if (info) { 912 - info->key.u.ipv4.dst = fl4.saddr; 913 - info->key.u.ipv4.src = fl4.daddr; 912 + struct ip_tunnel_info *unclone; 913 + 914 + unclone = skb_tunnel_info_unclone(skb); 915 + if (unlikely(!unclone)) { 916 + dst_release(&rt->dst); 917 + return -ENOMEM; 918 + } 919 + 920 + unclone->key.u.ipv4.dst = fl4.saddr; 921 + unclone->key.u.ipv4.src = fl4.daddr; 914 922 } 915 923 916 924 if (!pskb_may_pull(skb, ETH_HLEN)) { ··· 1002 994 struct ip_tunnel_info *info = skb_tunnel_info(skb); 1003 995 1004 996 if (info) { 1005 - info->key.u.ipv6.dst = fl6.saddr; 1006 - info->key.u.ipv6.src = fl6.daddr; 997 + struct ip_tunnel_info *unclone; 998 + 999 + unclone = skb_tunnel_info_unclone(skb); 1000 + if (unlikely(!unclone)) { 1001 + dst_release(dst); 1002 + return -ENOMEM; 1003 + } 1004 + 1005 + unclone->key.u.ipv6.dst = fl6.saddr; 1006 + unclone->key.u.ipv6.src = fl6.daddr; 1007 1007 } 1008 1008 1009 1009 if (!pskb_may_pull(skb, ETH_HLEN)) {
+1
drivers/net/ieee802154/atusb.c
··· 365 365 return -ENOMEM; 366 366 } 367 367 usb_anchor_urb(urb, &atusb->idle_urbs); 368 + usb_free_urb(urb); 368 369 n--; 369 370 } 370 371 return 0;
+10 -3
drivers/net/phy/bcm-phy-lib.c
··· 369 369 370 370 int bcm_phy_set_eee(struct phy_device *phydev, bool enable) 371 371 { 372 - int val; 372 + int val, mask = 0; 373 373 374 374 /* Enable EEE at PHY level */ 375 375 val = phy_read_mmd(phydev, MDIO_MMD_AN, BRCM_CL45VEN_EEE_CONTROL); ··· 388 388 if (val < 0) 389 389 return val; 390 390 391 + if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, 392 + phydev->supported)) 393 + mask |= MDIO_EEE_1000T; 394 + if (linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, 395 + phydev->supported)) 396 + mask |= MDIO_EEE_100TX; 397 + 391 398 if (enable) 392 - val |= (MDIO_EEE_100TX | MDIO_EEE_1000T); 399 + val |= mask; 393 400 else 394 - val &= ~(MDIO_EEE_100TX | MDIO_EEE_1000T); 401 + val &= ~mask; 395 402 396 403 phy_write_mmd(phydev, MDIO_MMD_AN, BCM_CL45VEN_EEE_ADV, (u32)val); 397 404
+48
drivers/net/tun.c
··· 69 69 #include <linux/bpf.h> 70 70 #include <linux/bpf_trace.h> 71 71 #include <linux/mutex.h> 72 + #include <linux/ieee802154.h> 73 + #include <linux/if_ltalk.h> 74 + #include <uapi/linux/if_fddi.h> 75 + #include <uapi/linux/if_hippi.h> 76 + #include <uapi/linux/if_fc.h> 77 + #include <net/ax25.h> 78 + #include <net/rose.h> 79 + #include <net/6lowpan.h> 72 80 73 81 #include <linux/uaccess.h> 74 82 #include <linux/proc_fs.h> ··· 2930 2922 return __tun_set_ebpf(tun, prog_p, prog); 2931 2923 } 2932 2924 2925 + /* Return correct value for tun->dev->addr_len based on tun->dev->type. */ 2926 + static unsigned char tun_get_addr_len(unsigned short type) 2927 + { 2928 + switch (type) { 2929 + case ARPHRD_IP6GRE: 2930 + case ARPHRD_TUNNEL6: 2931 + return sizeof(struct in6_addr); 2932 + case ARPHRD_IPGRE: 2933 + case ARPHRD_TUNNEL: 2934 + case ARPHRD_SIT: 2935 + return 4; 2936 + case ARPHRD_ETHER: 2937 + return ETH_ALEN; 2938 + case ARPHRD_IEEE802154: 2939 + case ARPHRD_IEEE802154_MONITOR: 2940 + return IEEE802154_EXTENDED_ADDR_LEN; 2941 + case ARPHRD_PHONET_PIPE: 2942 + case ARPHRD_PPP: 2943 + case ARPHRD_NONE: 2944 + return 0; 2945 + case ARPHRD_6LOWPAN: 2946 + return EUI64_ADDR_LEN; 2947 + case ARPHRD_FDDI: 2948 + return FDDI_K_ALEN; 2949 + case ARPHRD_HIPPI: 2950 + return HIPPI_ALEN; 2951 + case ARPHRD_IEEE802: 2952 + return FC_ALEN; 2953 + case ARPHRD_ROSE: 2954 + return ROSE_ADDR_LEN; 2955 + case ARPHRD_NETROM: 2956 + return AX25_ADDR_LEN; 2957 + case ARPHRD_LOCALTLK: 2958 + return LTALK_ALEN; 2959 + default: 2960 + return 0; 2961 + } 2962 + } 2963 + 2933 2964 static long __tun_chr_ioctl(struct file *file, unsigned int cmd, 2934 2965 unsigned long arg, int ifreq_len) 2935 2966 { ··· 3132 3085 break; 3133 3086 } 3134 3087 tun->dev->type = (int) arg; 3088 + tun->dev->addr_len = tun_get_addr_len(tun->dev->type); 3135 3089 netif_info(tun, drv, tun->dev, "linktype set to %d\n", 3136 3090 tun->dev->type); 3137 3091 call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE,
+12 -21
drivers/net/usb/hso.c
··· 611 611 return serial; 612 612 } 613 613 614 - static int get_free_serial_index(void) 614 + static int obtain_minor(struct hso_serial *serial) 615 615 { 616 616 int index; 617 617 unsigned long flags; ··· 619 619 spin_lock_irqsave(&serial_table_lock, flags); 620 620 for (index = 0; index < HSO_SERIAL_TTY_MINORS; index++) { 621 621 if (serial_table[index] == NULL) { 622 + serial_table[index] = serial->parent; 623 + serial->minor = index; 622 624 spin_unlock_irqrestore(&serial_table_lock, flags); 623 - return index; 625 + return 0; 624 626 } 625 627 } 626 628 spin_unlock_irqrestore(&serial_table_lock, flags); ··· 631 629 return -1; 632 630 } 633 631 634 - static void set_serial_by_index(unsigned index, struct hso_serial *serial) 632 + static void release_minor(struct hso_serial *serial) 635 633 { 636 634 unsigned long flags; 637 635 638 636 spin_lock_irqsave(&serial_table_lock, flags); 639 - if (serial) 640 - serial_table[index] = serial->parent; 641 - else 642 - serial_table[index] = NULL; 637 + serial_table[serial->minor] = NULL; 643 638 spin_unlock_irqrestore(&serial_table_lock, flags); 644 639 } 645 640 ··· 2229 2230 static void hso_serial_tty_unregister(struct hso_serial *serial) 2230 2231 { 2231 2232 tty_unregister_device(tty_drv, serial->minor); 2233 + release_minor(serial); 2232 2234 } 2233 2235 2234 2236 static void hso_serial_common_free(struct hso_serial *serial) ··· 2253 2253 static int hso_serial_common_create(struct hso_serial *serial, int num_urbs, 2254 2254 int rx_size, int tx_size) 2255 2255 { 2256 - int minor; 2257 2256 int i; 2258 2257 2259 2258 tty_port_init(&serial->port); 2260 2259 2261 - minor = get_free_serial_index(); 2262 - if (minor < 0) 2260 + if (obtain_minor(serial)) 2263 2261 goto exit2; 2264 2262 2265 2263 /* register our minor number */ 2266 2264 serial->parent->dev = tty_port_register_device_attr(&serial->port, 2267 - tty_drv, minor, &serial->parent->interface->dev, 2265 + tty_drv, serial->minor, &serial->parent->interface->dev, 
2268 2266 serial->parent, hso_serial_dev_groups); 2269 - if (IS_ERR(serial->parent->dev)) 2267 + if (IS_ERR(serial->parent->dev)) { 2268 + release_minor(serial); 2270 2269 goto exit2; 2270 + } 2271 2271 2272 - /* fill in specific data for later use */ 2273 - serial->minor = minor; 2274 2272 serial->magic = HSO_SERIAL_MAGIC; 2275 2273 spin_lock_init(&serial->serial_lock); 2276 2274 serial->num_rx_urbs = num_urbs; ··· 2665 2667 2666 2668 serial->write_data = hso_std_serial_write_data; 2667 2669 2668 - /* and record this serial */ 2669 - set_serial_by_index(serial->minor, serial); 2670 - 2671 2670 /* setup the proc dirs and files if needed */ 2672 2671 hso_log_port(hso_dev); 2673 2672 ··· 2720 2725 mutex_lock(&serial->shared_int->shared_int_lock); 2721 2726 serial->shared_int->ref_count++; 2722 2727 mutex_unlock(&serial->shared_int->shared_int_lock); 2723 - 2724 - /* and record this serial */ 2725 - set_serial_by_index(serial->minor, serial); 2726 2728 2727 2729 /* setup the proc dirs and files if needed */ 2728 2730 hso_log_port(hso_dev); ··· 3105 3113 cancel_work_sync(&serial_table[i]->async_get_intf); 3106 3114 hso_serial_tty_unregister(serial); 3107 3115 kref_put(&serial_table[i]->ref, hso_serial_ref_free); 3108 - set_serial_by_index(i, NULL); 3109 3116 } 3110 3117 } 3111 3118
+7 -3
drivers/net/virtio_net.c
··· 409 409 offset += hdr_padded_len; 410 410 p += hdr_padded_len; 411 411 412 - copy = len; 413 - if (copy > skb_tailroom(skb)) 414 - copy = skb_tailroom(skb); 412 + /* Copy all frame if it fits skb->head, otherwise 413 + * we let virtio_net_hdr_to_skb() and GRO pull headers as needed. 414 + */ 415 + if (len <= skb_tailroom(skb)) 416 + copy = len; 417 + else 418 + copy = ETH_HLEN + metasize; 415 419 skb_put_data(skb, p, copy); 416 420 417 421 if (metasize) {
+14 -4
drivers/net/vxlan.c
··· 2725 2725 goto tx_error; 2726 2726 } else if (err) { 2727 2727 if (info) { 2728 + struct ip_tunnel_info *unclone; 2728 2729 struct in_addr src, dst; 2730 + 2731 + unclone = skb_tunnel_info_unclone(skb); 2732 + if (unlikely(!unclone)) 2733 + goto tx_error; 2729 2734 2730 2735 src = remote_ip.sin.sin_addr; 2731 2736 dst = local_ip.sin.sin_addr; 2732 - info->key.u.ipv4.src = src.s_addr; 2733 - info->key.u.ipv4.dst = dst.s_addr; 2737 + unclone->key.u.ipv4.src = src.s_addr; 2738 + unclone->key.u.ipv4.dst = dst.s_addr; 2734 2739 } 2735 2740 vxlan_encap_bypass(skb, vxlan, vxlan, vni, false); 2736 2741 dst_release(ndst); ··· 2786 2781 goto tx_error; 2787 2782 } else if (err) { 2788 2783 if (info) { 2784 + struct ip_tunnel_info *unclone; 2789 2785 struct in6_addr src, dst; 2786 + 2787 + unclone = skb_tunnel_info_unclone(skb); 2788 + if (unlikely(!unclone)) 2789 + goto tx_error; 2790 2790 2791 2791 src = remote_ip.sin6.sin6_addr; 2792 2792 dst = local_ip.sin6.sin6_addr; 2793 - info->key.u.ipv6.src = src; 2794 - info->key.u.ipv6.dst = dst; 2793 + unclone->key.u.ipv6.src = src; 2794 + unclone->key.u.ipv6.dst = dst; 2795 2795 } 2796 2796 2797 2797 vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
+3 -2
drivers/net/wan/hdlc_fr.c
··· 415 415 416 416 if (pad > 0) { /* Pad the frame with zeros */ 417 417 if (__skb_pad(skb, pad, false)) 418 - goto drop; 418 + goto out; 419 419 skb_put(skb, pad); 420 420 } 421 421 } ··· 448 448 return NETDEV_TX_OK; 449 449 450 450 drop: 451 - dev->stats.tx_dropped++; 452 451 kfree_skb(skb); 452 + out: 453 + dev->stats.tx_dropped++; 453 454 return NETDEV_TX_OK; 454 455 } 455 456
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
··· 2439 2439 vif = ifp->vif; 2440 2440 cfg = wdev_to_cfg(&vif->wdev); 2441 2441 cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif = NULL; 2442 - if (locked) { 2442 + if (!locked) { 2443 2443 rtnl_lock(); 2444 2444 wiphy_lock(cfg->wiphy); 2445 2445 cfg80211_unregister_wdev(&vif->wdev);
+5 -5
drivers/net/wireless/intel/iwlwifi/fw/notif-wait.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2005-2014 Intel Corporation 3 + * Copyright (C) 2005-2014, 2021 Intel Corporation 4 4 * Copyright (C) 2015-2017 Intel Deutschland GmbH 5 5 */ 6 6 #include <linux/sched.h> ··· 26 26 if (!list_empty(&notif_wait->notif_waits)) { 27 27 struct iwl_notification_wait *w; 28 28 29 - spin_lock(&notif_wait->notif_wait_lock); 29 + spin_lock_bh(&notif_wait->notif_wait_lock); 30 30 list_for_each_entry(w, &notif_wait->notif_waits, list) { 31 31 int i; 32 32 bool found = false; ··· 59 59 triggered = true; 60 60 } 61 61 } 62 - spin_unlock(&notif_wait->notif_wait_lock); 62 + spin_unlock_bh(&notif_wait->notif_wait_lock); 63 63 } 64 64 65 65 return triggered; ··· 70 70 { 71 71 struct iwl_notification_wait *wait_entry; 72 72 73 - spin_lock(&notif_wait->notif_wait_lock); 73 + spin_lock_bh(&notif_wait->notif_wait_lock); 74 74 list_for_each_entry(wait_entry, &notif_wait->notif_waits, list) 75 75 wait_entry->aborted = true; 76 - spin_unlock(&notif_wait->notif_wait_lock); 76 + spin_unlock_bh(&notif_wait->notif_wait_lock); 77 77 78 78 wake_up_all(&notif_wait->notif_waitq); 79 79 }
+1
drivers/net/wireless/intel/iwlwifi/iwl-config.h
··· 414 414 #define IWL_CFG_MAC_TYPE_QNJ 0x36 415 415 #define IWL_CFG_MAC_TYPE_SO 0x37 416 416 #define IWL_CFG_MAC_TYPE_SNJ 0x42 417 + #define IWL_CFG_MAC_TYPE_SOF 0x43 417 418 #define IWL_CFG_MAC_TYPE_MA 0x44 418 419 419 420 #define IWL_CFG_RF_TYPE_TH 0x105
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
··· 232 232 REG_CAPA_V2_MCS_9_ALLOWED = BIT(6), 233 233 REG_CAPA_V2_WEATHER_DISABLED = BIT(7), 234 234 REG_CAPA_V2_40MHZ_ALLOWED = BIT(8), 235 - REG_CAPA_V2_11AX_DISABLED = BIT(13), 235 + REG_CAPA_V2_11AX_DISABLED = BIT(10), 236 236 }; 237 237 238 238 /*
+5 -2
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
··· 1786 1786 return -EINVAL; 1787 1787 1788 1788 /* value zero triggers re-sending the default table to the device */ 1789 - if (!op_id) 1789 + if (!op_id) { 1790 + mutex_lock(&mvm->mutex); 1790 1791 ret = iwl_rfi_send_config_cmd(mvm, NULL); 1791 - else 1792 + mutex_unlock(&mvm->mutex); 1793 + } else { 1792 1794 ret = -EOPNOTSUPP; /* in the future a new table will be added */ 1795 + } 1793 1796 1794 1797 return ret ?: count; 1795 1798 }
+3 -3
drivers/net/wireless/intel/iwlwifi/mvm/rfi.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2020 Intel Corporation 3 + * Copyright (C) 2020 - 2021 Intel Corporation 4 4 */ 5 5 6 6 #include "mvm.h" ··· 66 66 if (!fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_RFIM_SUPPORT)) 67 67 return -EOPNOTSUPP; 68 68 69 + lockdep_assert_held(&mvm->mutex); 70 + 69 71 /* in case no table is passed, use the default one */ 70 72 if (!rfi_table) { 71 73 memcpy(cmd.table, iwl_rfi_table, sizeof(cmd.table)); ··· 77 75 cmd.oem = 1; 78 76 } 79 77 80 - mutex_lock(&mvm->mutex); 81 78 ret = iwl_mvm_send_cmd(mvm, &hcmd); 82 - mutex_unlock(&mvm->mutex); 83 79 84 80 if (ret) 85 81 IWL_ERR(mvm, "Failed to send RFI config cmd %d\n", ret);
+12 -5
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
··· 272 272 rx_status->chain_signal[2] = S8_MIN; 273 273 } 274 274 275 - static int iwl_mvm_rx_mgmt_crypto(struct ieee80211_sta *sta, 276 - struct ieee80211_hdr *hdr, 277 - struct iwl_rx_mpdu_desc *desc, 278 - u32 status) 275 + static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta, 276 + struct ieee80211_hdr *hdr, 277 + struct iwl_rx_mpdu_desc *desc, 278 + u32 status) 279 279 { 280 280 struct iwl_mvm_sta *mvmsta; 281 281 struct iwl_mvm_vif *mvmvif; ··· 284 284 struct ieee80211_key_conf *key; 285 285 u32 len = le16_to_cpu(desc->mpdu_len); 286 286 const u8 *frame = (void *)hdr; 287 + 288 + if ((status & IWL_RX_MPDU_STATUS_SEC_MASK) == IWL_RX_MPDU_STATUS_SEC_NONE) 289 + return 0; 287 290 288 291 /* 289 292 * For non-beacon, we don't really care. But beacons may ··· 359 356 IWL_RX_MPDU_STATUS_SEC_UNKNOWN && !mvm->monitor_on) 360 357 return -1; 361 358 359 + if (unlikely(ieee80211_is_mgmt(hdr->frame_control) && 360 + !ieee80211_has_protected(hdr->frame_control))) 361 + return iwl_mvm_rx_mgmt_prot(sta, hdr, desc, status); 362 + 362 363 if (!ieee80211_has_protected(hdr->frame_control) || 363 364 (status & IWL_RX_MPDU_STATUS_SEC_MASK) == 364 365 IWL_RX_MPDU_STATUS_SEC_NONE) ··· 418 411 stats->flag |= RX_FLAG_DECRYPTED; 419 412 return 0; 420 413 case RX_MPDU_RES_STATUS_SEC_CMAC_GMAC_ENC: 421 - return iwl_mvm_rx_mgmt_crypto(sta, hdr, desc, status); 414 + break; 422 415 default: 423 416 /* 424 417 * Sometimes we can get frames that were not decrypted
+1 -30
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2018-2020 Intel Corporation 3 + * Copyright (C) 2018-2021 Intel Corporation 4 4 */ 5 5 #include "iwl-trans.h" 6 6 #include "iwl-fh.h" ··· 75 75 const struct fw_img *fw) 76 76 { 77 77 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 78 - u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ | 79 - u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 80 - CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) | 81 - u32_encode_bits(250, 82 - CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) | 83 - CSR_LTR_LONG_VAL_AD_SNOOP_REQ | 84 - u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 85 - CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) | 86 - u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL); 87 78 struct iwl_context_info_gen3 *ctxt_info_gen3; 88 79 struct iwl_prph_scratch *prph_scratch; 89 80 struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl; ··· 207 216 208 217 iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL, 209 218 CSR_AUTO_FUNC_BOOT_ENA); 210 - 211 - /* 212 - * To workaround hardware latency issues during the boot process, 213 - * initialize the LTR to ~250 usec (see ltr_val above). 214 - * The firmware initializes this again later (to a smaller value). 215 - */ 216 - if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 || 217 - trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) && 218 - !trans->trans_cfg->integrated) { 219 - iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val); 220 - } else if (trans->trans_cfg->integrated && 221 - trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) { 222 - iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL); 223 - iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val); 224 - } 225 - 226 - if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) 227 - iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1); 228 - else 229 - iwl_set_bit(trans, CSR_GP_CNTRL, CSR_AUTO_FUNC_INIT); 230 219 231 220 return 0; 232 221
+1 -2
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 3 * Copyright (C) 2017 Intel Deutschland GmbH 4 - * Copyright (C) 2018-2020 Intel Corporation 4 + * Copyright (C) 2018-2021 Intel Corporation 5 5 */ 6 6 #include "iwl-trans.h" 7 7 #include "iwl-fh.h" ··· 240 240 241 241 /* kick FW self load */ 242 242 iwl_write64(trans, CSR_CTXT_INFO_BA, trans_pcie->ctxt_info_dma_addr); 243 - iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1); 244 243 245 244 /* Context info will be released upon alive or failure to get one */ 246 245
+26 -1
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 592 592 IWL_DEV_INFO(0x4DF0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL), 593 593 IWL_DEV_INFO(0x4DF0, 0x2074, iwl_ax201_cfg_qu_hr, NULL), 594 594 IWL_DEV_INFO(0x4DF0, 0x4070, iwl_ax201_cfg_qu_hr, NULL), 595 + IWL_DEV_INFO(0x4DF0, 0x6074, iwl_ax201_cfg_qu_hr, NULL), 595 596 596 597 /* So with HR */ 597 598 IWL_DEV_INFO(0x2725, 0x0090, iwlax211_2ax_cfg_so_gf_a0, NULL), ··· 1041 1040 IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY, 1042 1041 IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 1043 1042 IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1044 - iwl_cfg_so_a0_hr_a0, iwl_ax201_name) 1043 + iwl_cfg_so_a0_hr_a0, iwl_ax201_name), 1044 + 1045 + /* So-F with Hr */ 1046 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1047 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1048 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 1049 + IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1050 + iwl_cfg_so_a0_hr_a0, iwl_ax203_name), 1051 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1052 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1053 + IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY, 1054 + IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1055 + iwl_cfg_so_a0_hr_a0, iwl_ax101_name), 1056 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1057 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1058 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 1059 + IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1060 + iwl_cfg_so_a0_hr_a0, iwl_ax201_name), 1061 + 1062 + /* So-F with Gf */ 1063 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1064 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1065 + IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, 1066 + IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1067 + iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_name), 1045 1068 1046 1069 #endif /* CONFIG_IWLMVM */ 1047 1070 };
+35
drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
··· 266 266 mutex_unlock(&trans_pcie->mutex); 267 267 } 268 268 269 + static void iwl_pcie_set_ltr(struct iwl_trans *trans) 270 + { 271 + u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ | 272 + u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 273 + CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) | 274 + u32_encode_bits(250, 275 + CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) | 276 + CSR_LTR_LONG_VAL_AD_SNOOP_REQ | 277 + u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 278 + CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) | 279 + u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL); 280 + 281 + /* 282 + * To workaround hardware latency issues during the boot process, 283 + * initialize the LTR to ~250 usec (see ltr_val above). 284 + * The firmware initializes this again later (to a smaller value). 285 + */ 286 + if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 || 287 + trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) && 288 + !trans->trans_cfg->integrated) { 289 + iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val); 290 + } else if (trans->trans_cfg->integrated && 291 + trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) { 292 + iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL); 293 + iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val); 294 + } 295 + } 296 + 269 297 int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans, 270 298 const struct fw_img *fw, bool run_in_rfkill) 271 299 { ··· 359 331 ret = iwl_pcie_ctxt_info_init(trans, fw); 360 332 if (ret) 361 333 goto out; 334 + 335 + iwl_pcie_set_ltr(trans); 336 + 337 + if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) 338 + iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1); 339 + else 340 + iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1); 362 341 363 342 /* re-check RF-Kill state since we may have missed the interrupt */ 364 343 hw_rfkill = iwl_pcie_check_hw_rf_kill(trans);
+4 -3
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 928 928 u32 cmd_pos; 929 929 const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD]; 930 930 u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD]; 931 + unsigned long flags; 931 932 932 933 if (WARN(!trans->wide_cmd_header && 933 934 group_id > IWL_ALWAYS_LONG_GROUP, ··· 1012 1011 goto free_dup_buf; 1013 1012 } 1014 1013 1015 - spin_lock_bh(&txq->lock); 1014 + spin_lock_irqsave(&txq->lock, flags); 1016 1015 1017 1016 if (iwl_txq_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) { 1018 - spin_unlock_bh(&txq->lock); 1017 + spin_unlock_irqrestore(&txq->lock, flags); 1019 1018 1020 1019 IWL_ERR(trans, "No space in command queue\n"); 1021 1020 iwl_op_mode_cmd_queue_full(trans->op_mode); ··· 1175 1174 unlock_reg: 1176 1175 spin_unlock(&trans_pcie->reg_lock); 1177 1176 out: 1178 - spin_unlock_bh(&txq->lock); 1177 + spin_unlock_irqrestore(&txq->lock, flags); 1179 1178 free_dup_buf: 1180 1179 if (idx < 0) 1181 1180 kfree(dup_buf);
+2 -2
drivers/net/wireless/mediatek/mt76/mt7921/regs.h
··· 135 135 136 136 #define MT_WTBLON_TOP_BASE 0x34000 137 137 #define MT_WTBLON_TOP(ofs) (MT_WTBLON_TOP_BASE + (ofs)) 138 - #define MT_WTBLON_TOP_WDUCR MT_WTBLON_TOP(0x0) 138 + #define MT_WTBLON_TOP_WDUCR MT_WTBLON_TOP(0x200) 139 139 #define MT_WTBLON_TOP_WDUCR_GROUP GENMASK(2, 0) 140 140 141 - #define MT_WTBL_UPDATE MT_WTBLON_TOP(0x030) 141 + #define MT_WTBL_UPDATE MT_WTBLON_TOP(0x230) 142 142 #define MT_WTBL_UPDATE_WLAN_IDX GENMASK(9, 0) 143 143 #define MT_WTBL_UPDATE_ADM_COUNT_CLEAR BIT(12) 144 144 #define MT_WTBL_UPDATE_BUSY BIT(31)
+3 -2
drivers/net/wireless/virt_wifi.c
··· 12 12 #include <net/cfg80211.h> 13 13 #include <net/rtnetlink.h> 14 14 #include <linux/etherdevice.h> 15 + #include <linux/math64.h> 15 16 #include <linux/module.h> 16 17 17 18 static struct wiphy *common_wiphy; ··· 169 168 scan_result.work); 170 169 struct wiphy *wiphy = priv_to_wiphy(priv); 171 170 struct cfg80211_scan_info scan_info = { .aborted = false }; 171 + u64 tsf = div_u64(ktime_get_boottime_ns(), 1000); 172 172 173 173 informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz, 174 174 CFG80211_BSS_FTYPE_PRESP, 175 - fake_router_bssid, 176 - ktime_get_boottime_ns(), 175 + fake_router_bssid, tsf, 177 176 WLAN_CAPABILITY_ESS, 0, 178 177 (void *)&ssid, sizeof(ssid), 179 178 DBM_TO_MBM(-50), GFP_KERNEL);
+23 -13
drivers/of/fdt.c
··· 205 205 *pprev = NULL; 206 206 } 207 207 208 - static bool populate_node(const void *blob, 208 + static int populate_node(const void *blob, 209 209 int offset, 210 210 void **mem, 211 211 struct device_node *dad, ··· 214 214 { 215 215 struct device_node *np; 216 216 const char *pathp; 217 - unsigned int l, allocl; 217 + int len; 218 218 219 - pathp = fdt_get_name(blob, offset, &l); 219 + pathp = fdt_get_name(blob, offset, &len); 220 220 if (!pathp) { 221 221 *pnp = NULL; 222 - return false; 222 + return len; 223 223 } 224 224 225 - allocl = ++l; 225 + len++; 226 226 227 - np = unflatten_dt_alloc(mem, sizeof(struct device_node) + allocl, 227 + np = unflatten_dt_alloc(mem, sizeof(struct device_node) + len, 228 228 __alignof__(struct device_node)); 229 229 if (!dryrun) { 230 230 char *fn; 231 231 of_node_init(np); 232 232 np->full_name = fn = ((char *)np) + sizeof(*np); 233 233 234 - memcpy(fn, pathp, l); 234 + memcpy(fn, pathp, len); 235 235 236 236 if (dad != NULL) { 237 237 np->parent = dad; ··· 295 295 struct device_node *nps[FDT_MAX_DEPTH]; 296 296 void *base = mem; 297 297 bool dryrun = !base; 298 + int ret; 298 299 299 300 if (nodepp) 300 301 *nodepp = NULL; ··· 323 322 !of_fdt_device_is_available(blob, offset)) 324 323 continue; 325 324 326 - if (!populate_node(blob, offset, &mem, nps[depth], 327 - &nps[depth+1], dryrun)) 328 - return mem - base; 325 + ret = populate_node(blob, offset, &mem, nps[depth], 326 + &nps[depth+1], dryrun); 327 + if (ret < 0) 328 + return ret; 329 329 330 330 if (!dryrun && nodepp && !*nodepp) 331 331 *nodepp = nps[depth+1]; ··· 374 372 { 375 373 int size; 376 374 void *mem; 375 + int ret; 376 + 377 + if (mynodes) 378 + *mynodes = NULL; 377 379 378 380 pr_debug(" -> unflatten_device_tree()\n"); 379 381 ··· 398 392 399 393 /* First pass, scan for size */ 400 394 size = unflatten_dt_nodes(blob, NULL, dad, NULL); 401 - if (size < 0) 395 + if (size <= 0) 402 396 return NULL; 403 397 404 398 size = ALIGN(size, 4); ··· 416 410 
pr_debug(" unflattening %p...\n", mem); 417 411 418 412 /* Second pass, do actual unflattening */ 419 - unflatten_dt_nodes(blob, mem, dad, mynodes); 413 + ret = unflatten_dt_nodes(blob, mem, dad, mynodes); 414 + 420 415 if (be32_to_cpup(mem + size) != 0xdeadbeef) 421 416 pr_warn("End of tree marker overwritten: %08x\n", 422 417 be32_to_cpup(mem + size)); 423 418 424 - if (detached && mynodes) { 419 + if (ret <= 0) 420 + return NULL; 421 + 422 + if (detached && mynodes && *mynodes) { 425 423 of_node_set_flag(*mynodes, OF_DETACHED); 426 424 pr_debug("unflattened tree is detached\n"); 427 425 }
+2
drivers/of/of_private.h
··· 8 8 * Copyright (C) 1996-2005 Paul Mackerras. 9 9 */ 10 10 11 + #define FDT_ALIGN_SIZE 8 12 + 11 13 /** 12 14 * struct alias_prop - Alias property in 'aliases' node 13 15 * @link: List node to link the structure in aliases_lookup list
+15 -9
drivers/of/overlay.c
··· 57 57 * struct overlay_changeset 58 58 * @id: changeset identifier 59 59 * @ovcs_list: list on which we are located 60 - * @fdt: FDT that was unflattened to create @overlay_tree 60 + * @fdt: base of memory allocated to hold aligned FDT that was unflattened to create @overlay_tree 61 61 * @overlay_tree: expanded device tree that contains the fragment nodes 62 62 * @count: count of fragment structures 63 63 * @fragments: fragment nodes in the overlay expanded device tree ··· 719 719 /** 720 720 * init_overlay_changeset() - initialize overlay changeset from overlay tree 721 721 * @ovcs: Overlay changeset to build 722 - * @fdt: the FDT that was unflattened to create @tree 723 - * @tree: Contains all the overlay fragments and overlay fixup nodes 722 + * @fdt: base of memory allocated to hold aligned FDT that was unflattened to create @tree 723 + * @tree: Contains the overlay fragments and overlay fixup nodes 724 724 * 725 725 * Initialize @ovcs. Populate @ovcs->fragments with node information from 726 726 * the top level of @tree. The relevant top level nodes are the fragment ··· 873 873 * internal documentation 874 874 * 875 875 * of_overlay_apply() - Create and apply an overlay changeset 876 - * @fdt: the FDT that was unflattened to create @tree 876 + * @fdt: base of memory allocated to hold the aligned FDT 877 877 * @tree: Expanded overlay device tree 878 878 * @ovcs_id: Pointer to overlay changeset id 879 879 * ··· 953 953 /* 954 954 * after overlay_notify(), ovcs->overlay_tree related pointers may have 955 955 * leaked to drivers, so can not kfree() tree, aka ovcs->overlay_tree; 956 - * and can not free fdt, aka ovcs->fdt 956 + * and can not free memory containing aligned fdt. The aligned fdt 957 + * is contained within the memory at ovcs->fdt, possibly at an offset 958 + * from ovcs->fdt. 
957 959 */ 958 960 ret = overlay_notify(ovcs, OF_OVERLAY_PRE_APPLY); 959 961 if (ret) { ··· 1016 1014 int of_overlay_fdt_apply(const void *overlay_fdt, u32 overlay_fdt_size, 1017 1015 int *ovcs_id) 1018 1016 { 1019 - const void *new_fdt; 1017 + void *new_fdt; 1018 + void *new_fdt_align; 1020 1019 int ret; 1021 1020 u32 size; 1022 - struct device_node *overlay_root; 1021 + struct device_node *overlay_root = NULL; 1023 1022 1024 1023 *ovcs_id = 0; 1025 1024 ret = 0; ··· 1039 1036 * Must create permanent copy of FDT because of_fdt_unflatten_tree() 1040 1037 * will create pointers to the passed in FDT in the unflattened tree. 1041 1038 */ 1042 - new_fdt = kmemdup(overlay_fdt, size, GFP_KERNEL); 1039 + new_fdt = kmalloc(size + FDT_ALIGN_SIZE, GFP_KERNEL); 1043 1040 if (!new_fdt) 1044 1041 return -ENOMEM; 1045 1042 1046 - of_fdt_unflatten_tree(new_fdt, NULL, &overlay_root); 1043 + new_fdt_align = PTR_ALIGN(new_fdt, FDT_ALIGN_SIZE); 1044 + memcpy(new_fdt_align, overlay_fdt, size); 1045 + 1046 + of_fdt_unflatten_tree(new_fdt_align, NULL, &overlay_root); 1047 1047 if (!overlay_root) { 1048 1048 pr_err("unable to unflatten overlay_fdt\n"); 1049 1049 ret = -EINVAL;
+10 -1
drivers/of/property.c
··· 1262 1262 DEFINE_SIMPLE_PROP(pinctrl8, "pinctrl-8", NULL) 1263 1263 DEFINE_SUFFIX_PROP(regulators, "-supply", NULL) 1264 1264 DEFINE_SUFFIX_PROP(gpio, "-gpio", "#gpio-cells") 1265 - DEFINE_SUFFIX_PROP(gpios, "-gpios", "#gpio-cells") 1265 + 1266 + static struct device_node *parse_gpios(struct device_node *np, 1267 + const char *prop_name, int index) 1268 + { 1269 + if (!strcmp_suffix(prop_name, ",nr-gpios")) 1270 + return NULL; 1271 + 1272 + return parse_suffix_prop_cells(np, prop_name, index, "-gpios", 1273 + "#gpio-cells"); 1274 + } 1266 1275 1267 1276 static struct device_node *parse_iommu_maps(struct device_node *np, 1268 1277 const char *prop_name, int index)
+16 -6
drivers/of/unittest.c
··· 22 22 #include <linux/slab.h> 23 23 #include <linux/device.h> 24 24 #include <linux/platform_device.h> 25 + #include <linux/kernel.h> 25 26 26 27 #include <linux/i2c.h> 27 28 #include <linux/i2c-mux.h> ··· 1409 1408 static int __init unittest_data_add(void) 1410 1409 { 1411 1410 void *unittest_data; 1412 - struct device_node *unittest_data_node, *np; 1411 + void *unittest_data_align; 1412 + struct device_node *unittest_data_node = NULL, *np; 1413 1413 /* 1414 1414 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically 1415 1415 * created by cmd_dt_S_dtb in scripts/Makefile.lib ··· 1419 1417 extern uint8_t __dtb_testcases_end[]; 1420 1418 const int size = __dtb_testcases_end - __dtb_testcases_begin; 1421 1419 int rc; 1420 + void *ret; 1422 1421 1423 1422 if (!size) { 1424 - pr_warn("%s: No testcase data to attach; not running tests\n", 1425 - __func__); 1423 + pr_warn("%s: testcases is empty\n", __func__); 1426 1424 return -ENODATA; 1427 1425 } 1428 1426 1429 1427 /* creating copy */ 1430 - unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL); 1428 + unittest_data = kmalloc(size + FDT_ALIGN_SIZE, GFP_KERNEL); 1431 1429 if (!unittest_data) 1432 1430 return -ENOMEM; 1433 1431 1434 - of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node); 1432 + unittest_data_align = PTR_ALIGN(unittest_data, FDT_ALIGN_SIZE); 1433 + memcpy(unittest_data_align, __dtb_testcases_begin, size); 1434 + 1435 + ret = of_fdt_unflatten_tree(unittest_data_align, NULL, &unittest_data_node); 1436 + if (!ret) { 1437 + pr_warn("%s: unflatten testcases tree failed\n", __func__); 1438 + kfree(unittest_data); 1439 + return -ENODATA; 1440 + } 1435 1441 if (!unittest_data_node) { 1436 - pr_warn("%s: No tree to attach; not running tests\n", __func__); 1442 + pr_warn("%s: testcases tree is empty\n", __func__); 1437 1443 kfree(unittest_data); 1438 1444 return -ENODATA; 1439 1445 }
+8 -1
drivers/pinctrl/intel/pinctrl-intel.c
··· 1357 1357 gpps[i].gpio_base = 0; 1358 1358 break; 1359 1359 case INTEL_GPIO_BASE_NOMAP: 1360 + break; 1360 1361 default: 1361 1362 break; 1362 1363 } ··· 1394 1393 gpps[i].size = min(gpp_size, npins); 1395 1394 npins -= gpps[i].size; 1396 1395 1396 + gpps[i].gpio_base = gpps[i].base; 1397 1397 gpps[i].padown_num = padown_num; 1398 1398 1399 1399 /* ··· 1493 1491 if (IS_ERR(regs)) 1494 1492 return PTR_ERR(regs); 1495 1493 1496 - /* Determine community features based on the revision */ 1494 + /* 1495 + * Determine community features based on the revision. 1496 + * A value of all ones means the device is not present. 1497 + */ 1497 1498 value = readl(regs + REVID); 1499 + if (value == ~0u) 1500 + return -ENODEV; 1498 1501 if (((value & REVID_MASK) >> REVID_SHIFT) >= 0x94) { 1499 1502 community->features |= PINCTRL_FEATURE_DEBOUNCE; 1500 1503 community->features |= PINCTRL_FEATURE_1K_PD;
+1 -1
drivers/pinctrl/pinctrl-microchip-sgpio.c
··· 572 572 /* Type value spread over 2 registers sets: low, high bit */ 573 573 sgpio_clrsetbits(bank->priv, REG_INT_TRIGGER, addr.bit, 574 574 BIT(addr.port), (!!(type & 0x1)) << addr.port); 575 - sgpio_clrsetbits(bank->priv, REG_INT_TRIGGER + SGPIO_MAX_BITS, addr.bit, 575 + sgpio_clrsetbits(bank->priv, REG_INT_TRIGGER, SGPIO_MAX_BITS + addr.bit, 576 576 BIT(addr.port), (!!(type & 0x2)) << addr.port); 577 577 578 578 if (type == SGPIO_INT_TRG_LEVEL)
+8 -5
drivers/pinctrl/pinctrl-rockchip.c
··· 3727 3727 static int __maybe_unused rockchip_pinctrl_resume(struct device *dev) 3728 3728 { 3729 3729 struct rockchip_pinctrl *info = dev_get_drvdata(dev); 3730 - int ret = regmap_write(info->regmap_base, RK3288_GRF_GPIO6C_IOMUX, 3731 - rk3288_grf_gpio6c_iomux | 3732 - GPIO6C6_SEL_WRITE_ENABLE); 3730 + int ret; 3733 3731 3734 - if (ret) 3735 - return ret; 3732 + if (info->ctrl->type == RK3288) { 3733 + ret = regmap_write(info->regmap_base, RK3288_GRF_GPIO6C_IOMUX, 3734 + rk3288_grf_gpio6c_iomux | 3735 + GPIO6C6_SEL_WRITE_ENABLE); 3736 + if (ret) 3737 + return ret; 3738 + } 3736 3739 3737 3740 return pinctrl_force_default(info->pctl_dev); 3738 3741 }
+1 -1
drivers/pinctrl/qcom/pinctrl-lpass-lpi.c
··· 392 392 unsigned long *configs, unsigned int nconfs) 393 393 { 394 394 struct lpi_pinctrl *pctrl = dev_get_drvdata(pctldev->dev); 395 - unsigned int param, arg, pullup, strength; 395 + unsigned int param, arg, pullup = LPI_GPIO_BIAS_DISABLE, strength = 2; 396 396 bool value, output_enabled = false; 397 397 const struct lpi_pingroup *g; 398 398 unsigned long sval;
+8 -8
drivers/pinctrl/qcom/pinctrl-sc7280.c
··· 1439 1439 [172] = PINGROUP(172, qdss, _, _, _, _, _, _, _, _), 1440 1440 [173] = PINGROUP(173, qdss, _, _, _, _, _, _, _, _), 1441 1441 [174] = PINGROUP(174, qdss, _, _, _, _, _, _, _, _), 1442 - [175] = UFS_RESET(ufs_reset, 0x1be000), 1443 - [176] = SDC_QDSD_PINGROUP(sdc1_rclk, 0x1b3000, 15, 0), 1444 - [177] = SDC_QDSD_PINGROUP(sdc1_clk, 0x1b3000, 13, 6), 1445 - [178] = SDC_QDSD_PINGROUP(sdc1_cmd, 0x1b3000, 11, 3), 1446 - [179] = SDC_QDSD_PINGROUP(sdc1_data, 0x1b3000, 9, 0), 1447 - [180] = SDC_QDSD_PINGROUP(sdc2_clk, 0x1b4000, 14, 6), 1448 - [181] = SDC_QDSD_PINGROUP(sdc2_cmd, 0x1b4000, 11, 3), 1449 - [182] = SDC_QDSD_PINGROUP(sdc2_data, 0x1b4000, 9, 0), 1442 + [175] = UFS_RESET(ufs_reset, 0xbe000), 1443 + [176] = SDC_QDSD_PINGROUP(sdc1_rclk, 0xb3004, 0, 6), 1444 + [177] = SDC_QDSD_PINGROUP(sdc1_clk, 0xb3000, 13, 6), 1445 + [178] = SDC_QDSD_PINGROUP(sdc1_cmd, 0xb3000, 11, 3), 1446 + [179] = SDC_QDSD_PINGROUP(sdc1_data, 0xb3000, 9, 0), 1447 + [180] = SDC_QDSD_PINGROUP(sdc2_clk, 0xb4000, 14, 6), 1448 + [181] = SDC_QDSD_PINGROUP(sdc2_cmd, 0xb4000, 11, 3), 1449 + [182] = SDC_QDSD_PINGROUP(sdc2_data, 0xb4000, 9, 0), 1450 1450 }; 1451 1451 1452 1452 static const struct msm_pinctrl_soc_data sc7280_pinctrl = {
+1 -1
drivers/pinctrl/qcom/pinctrl-sdx55.c
··· 423 423 424 424 static const char * const qdss_stm_groups[] = { 425 425 "gpio0", "gpio1", "gpio2", "gpio3", "gpio4", "gpio5", "gpio6", "gpio7", "gpio12", "gpio13", 426 - "gpio14", "gpio15", "gpio16", "gpio17", "gpio18", "gpio19" "gpio20", "gpio21", "gpio22", 426 + "gpio14", "gpio15", "gpio16", "gpio17", "gpio18", "gpio19", "gpio20", "gpio21", "gpio22", 427 427 "gpio23", "gpio44", "gpio45", "gpio52", "gpio53", "gpio56", "gpio57", "gpio61", "gpio62", 428 428 "gpio63", "gpio64", "gpio65", "gpio66", 429 429 };
+9 -7
drivers/platform/x86/intel-hid.c
··· 483 483 goto wakeup; 484 484 485 485 /* 486 - * Switch events will wake the device and report the new switch 487 - * position to the input subsystem. 486 + * Some devices send (duplicate) tablet-mode events when moved 487 + * around even though the mode has not changed; and they do this 488 + * even when suspended. 489 + * Update the switch state in case it changed and then return 490 + * without waking up to avoid spurious wakeups. 488 491 */ 489 - if (priv->switches && (event == 0xcc || event == 0xcd)) 490 - goto wakeup; 492 + if (event == 0xcc || event == 0xcd) { 493 + report_tablet_mode_event(priv->switches, event); 494 + return; 495 + } 491 496 492 497 /* Wake up on 5-button array events only. */ 493 498 if (event == 0xc0 || !priv->array) ··· 505 500 506 501 wakeup: 507 502 pm_wakeup_hard_event(&device->dev); 508 - 509 - if (report_tablet_mode_event(priv->switches, event)) 510 - return; 511 503 512 504 return; 513 505 }
+4 -4
drivers/regulator/bd9571mwv-regulator.c
··· 125 125 126 126 static const struct regulator_desc regulators[] = { 127 127 BD9571MWV_REG("VD09", "vd09", VD09, avs_ops, 0, 0x7f, 128 - 0x80, 600000, 10000, 0x3c), 128 + 0x6f, 600000, 10000, 0x3c), 129 129 BD9571MWV_REG("VD18", "vd18", VD18, vid_ops, BD9571MWV_VD18_VID, 0xf, 130 130 16, 1625000, 25000, 0), 131 131 BD9571MWV_REG("VD25", "vd25", VD25, vid_ops, BD9571MWV_VD25_VID, 0xf, ··· 134 134 11, 2800000, 100000, 0), 135 135 BD9571MWV_REG("DVFS", "dvfs", DVFS, reg_ops, 136 136 BD9571MWV_DVFS_MONIVDAC, 0x7f, 137 - 0x80, 600000, 10000, 0x3c), 137 + 0x6f, 600000, 10000, 0x3c), 138 138 }; 139 139 140 140 #ifdef CONFIG_PM_SLEEP ··· 174 174 { 175 175 struct bd9571mwv_reg *bdreg = dev_get_drvdata(dev); 176 176 177 - return sprintf(buf, "%s\n", bdreg->bkup_mode_enabled ? "on" : "off"); 177 + return sysfs_emit(buf, "%s\n", bdreg->bkup_mode_enabled ? "on" : "off"); 178 178 } 179 179 180 180 static ssize_t backup_mode_store(struct device *dev, ··· 301 301 &config); 302 302 if (IS_ERR(rdev)) { 303 303 dev_err(&pdev->dev, "failed to register %s regulator\n", 304 - pdev->name); 304 + regulators[i].name); 305 305 return PTR_ERR(rdev); 306 306 } 307 307 }
+19 -1
drivers/remoteproc/pru_rproc.c
··· 450 450 if (len == 0) 451 451 return NULL; 452 452 453 + /* 454 + * GNU binutils do not support multiple address spaces. The GNU 455 + * linker's default linker script places IRAM at an arbitrary high 456 + * offset, in order to differentiate it from DRAM. Hence we need to 457 + * strip the artificial offset in the IRAM addresses coming from the 458 + * ELF file. 459 + * 460 + * The TI proprietary linker would never set those higher IRAM address 461 + * bits anyway. PRU architecture limits the program counter to 16-bit 462 + * word-address range. This in turn corresponds to 18-bit IRAM 463 + * byte-address range for ELF. 464 + * 465 + * Two more bits are added just in case to make the final 20-bit mask. 466 + * Idea is to have a safeguard in case TI decides to add banking 467 + * in future SoCs. 468 + */ 469 + da &= 0xfffff; 470 + 453 471 if (da >= PRU_IRAM_DA && 454 472 da + len <= PRU_IRAM_DA + pru->mem_regions[PRU_IOMEM_IRAM].size) { 455 473 offset = da - PRU_IRAM_DA; ··· 603 585 break; 604 586 } 605 587 606 - if (pru->data->is_k3 && is_iram) { 588 + if (pru->data->is_k3) { 607 589 ret = pru_rproc_memcpy(ptr, elf_data + phdr->p_offset, 608 590 filesz); 609 591 if (ret) {
+1 -1
drivers/remoteproc/qcom_pil_info.c
··· 56 56 memset_io(base, 0, resource_size(&imem)); 57 57 58 58 _reloc.base = base; 59 - _reloc.num_entries = resource_size(&imem) / PIL_RELOC_ENTRY_SIZE; 59 + _reloc.num_entries = (u32)resource_size(&imem) / PIL_RELOC_ENTRY_SIZE; 60 60 61 61 return 0; 62 62 }
+54 -13
drivers/scsi/ibmvscsi/ibmvfc.c
··· 2372 2372 } 2373 2373 2374 2374 /** 2375 + * ibmvfc_event_is_free - Check if event is free or not 2376 + * @evt: ibmvfc event struct 2377 + * 2378 + * Returns: 2379 + * true / false 2380 + **/ 2381 + static bool ibmvfc_event_is_free(struct ibmvfc_event *evt) 2382 + { 2383 + struct ibmvfc_event *loop_evt; 2384 + 2385 + list_for_each_entry(loop_evt, &evt->queue->free, queue_list) 2386 + if (loop_evt == evt) 2387 + return true; 2388 + 2389 + return false; 2390 + } 2391 + 2392 + /** 2375 2393 * ibmvfc_wait_for_ops - Wait for ops to complete 2376 2394 * @vhost: ibmvfc host struct 2377 2395 * @device: device to match (starget or sdev) ··· 2403 2385 { 2404 2386 struct ibmvfc_event *evt; 2405 2387 DECLARE_COMPLETION_ONSTACK(comp); 2406 - int wait; 2388 + int wait, i, q_index, q_size; 2407 2389 unsigned long flags; 2408 2390 signed long timeout = IBMVFC_ABORT_WAIT_TIMEOUT * HZ; 2391 + struct ibmvfc_queue *queues; 2409 2392 2410 2393 ENTER; 2394 + if (vhost->mq_enabled && vhost->using_channels) { 2395 + queues = vhost->scsi_scrqs.scrqs; 2396 + q_size = vhost->scsi_scrqs.active_queues; 2397 + } else { 2398 + queues = &vhost->crq; 2399 + q_size = 1; 2400 + } 2401 + 2411 2402 do { 2412 2403 wait = 0; 2413 - spin_lock_irqsave(&vhost->crq.l_lock, flags); 2414 - list_for_each_entry(evt, &vhost->crq.sent, queue_list) { 2415 - if (match(evt, device)) { 2416 - evt->eh_comp = &comp; 2417 - wait++; 2404 + spin_lock_irqsave(vhost->host->host_lock, flags); 2405 + for (q_index = 0; q_index < q_size; q_index++) { 2406 + spin_lock(&queues[q_index].l_lock); 2407 + for (i = 0; i < queues[q_index].evt_pool.size; i++) { 2408 + evt = &queues[q_index].evt_pool.events[i]; 2409 + if (!ibmvfc_event_is_free(evt)) { 2410 + if (match(evt, device)) { 2411 + evt->eh_comp = &comp; 2412 + wait++; 2413 + } 2414 + } 2418 2415 } 2416 + spin_unlock(&queues[q_index].l_lock); 2419 2417 } 2420 - spin_unlock_irqrestore(&vhost->crq.l_lock, flags); 2418 + spin_unlock_irqrestore(vhost->host->host_lock, flags); 
2421 2419 2422 2420 if (wait) { 2423 2421 timeout = wait_for_completion_timeout(&comp, timeout); 2424 2422 2425 2423 if (!timeout) { 2426 2424 wait = 0; 2427 - spin_lock_irqsave(&vhost->crq.l_lock, flags); 2428 - list_for_each_entry(evt, &vhost->crq.sent, queue_list) { 2429 - if (match(evt, device)) { 2430 - evt->eh_comp = NULL; 2431 - wait++; 2425 + spin_lock_irqsave(vhost->host->host_lock, flags); 2426 + for (q_index = 0; q_index < q_size; q_index++) { 2427 + spin_lock(&queues[q_index].l_lock); 2428 + for (i = 0; i < queues[q_index].evt_pool.size; i++) { 2429 + evt = &queues[q_index].evt_pool.events[i]; 2430 + if (!ibmvfc_event_is_free(evt)) { 2431 + if (match(evt, device)) { 2432 + evt->eh_comp = NULL; 2433 + wait++; 2434 + } 2435 + } 2432 2436 } 2437 + spin_unlock(&queues[q_index].l_lock); 2433 2438 } 2434 - spin_unlock_irqrestore(&vhost->crq.l_lock, flags); 2439 + spin_unlock_irqrestore(vhost->host->host_lock, flags); 2435 2440 if (wait) 2436 2441 dev_err(vhost->dev, "Timed out waiting for aborted commands\n"); 2437 2442 LEAVE;
+6 -2
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 7806 7806 ioc->pend_os_device_add_sz++; 7807 7807 ioc->pend_os_device_add = kzalloc(ioc->pend_os_device_add_sz, 7808 7808 GFP_KERNEL); 7809 - if (!ioc->pend_os_device_add) 7809 + if (!ioc->pend_os_device_add) { 7810 + r = -ENOMEM; 7810 7811 goto out_free_resources; 7812 + } 7811 7813 7812 7814 ioc->device_remove_in_progress_sz = ioc->pend_os_device_add_sz; 7813 7815 ioc->device_remove_in_progress = 7814 7816 kzalloc(ioc->device_remove_in_progress_sz, GFP_KERNEL); 7815 - if (!ioc->device_remove_in_progress) 7817 + if (!ioc->device_remove_in_progress) { 7818 + r = -ENOMEM; 7816 7819 goto out_free_resources; 7820 + } 7817 7821 7818 7822 ioc->fwfault_debug = mpt3sas_fwfault_debug; 7819 7823
+1
drivers/scsi/qedi/qedi_main.c
··· 1675 1675 if (!qedi->global_queues[i]) { 1676 1676 QEDI_ERR(&qedi->dbg_ctx, 1677 1677 "Unable to allocation global queue %d.\n", i); 1678 + status = -ENOMEM; 1678 1679 goto mem_alloc_failure; 1679 1680 } 1680 1681
+5 -8
drivers/scsi/qla2xxx/qla_target.c
··· 3222 3222 if (!qpair->fw_started || (cmd->reset_count != qpair->chip_reset) || 3223 3223 (cmd->sess && cmd->sess->deleted)) { 3224 3224 cmd->state = QLA_TGT_STATE_PROCESSED; 3225 - res = 0; 3226 - goto free; 3225 + return 0; 3227 3226 } 3228 3227 3229 3228 ql_dbg_qp(ql_dbg_tgt, qpair, 0xe018, ··· 3233 3234 3234 3235 res = qlt_pre_xmit_response(cmd, &prm, xmit_type, scsi_status, 3235 3236 &full_req_cnt); 3236 - if (unlikely(res != 0)) 3237 - goto free; 3237 + if (unlikely(res != 0)) { 3238 + return res; 3239 + } 3238 3240 3239 3241 spin_lock_irqsave(qpair->qp_lock_ptr, flags); 3240 3242 ··· 3255 3255 vha->flags.online, qla2x00_reset_active(vha), 3256 3256 cmd->reset_count, qpair->chip_reset); 3257 3257 spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); 3258 - res = 0; 3259 - goto free; 3258 + return 0; 3260 3259 } 3261 3260 3262 3261 /* Does F/W have an IOCBs for this request */ ··· 3358 3359 qlt_unmap_sg(vha, cmd); 3359 3360 spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); 3360 3361 3361 - free: 3362 - vha->hw->tgt.tgt_ops->free_cmd(cmd); 3363 3362 return res; 3364 3363 } 3365 3364 EXPORT_SYMBOL(qlt_xmit_response);
-4
drivers/scsi/qla2xxx/tcm_qla2xxx.c
··· 644 644 { 645 645 struct qla_tgt_cmd *cmd = container_of(se_cmd, 646 646 struct qla_tgt_cmd, se_cmd); 647 - struct scsi_qla_host *vha = cmd->vha; 648 647 649 648 if (cmd->aborted) { 650 649 /* Cmd can loop during Q-full. tcm_qla2xxx_aborted_task ··· 656 657 cmd->se_cmd.transport_state, 657 658 cmd->se_cmd.t_state, 658 659 cmd->se_cmd.se_cmd_flags); 659 - vha->hw->tgt.tgt_ops->free_cmd(cmd); 660 660 return 0; 661 661 } 662 662 ··· 683 685 { 684 686 struct qla_tgt_cmd *cmd = container_of(se_cmd, 685 687 struct qla_tgt_cmd, se_cmd); 686 - struct scsi_qla_host *vha = cmd->vha; 687 688 int xmit_type = QLA_TGT_XMIT_STATUS; 688 689 689 690 if (cmd->aborted) { ··· 696 699 cmd, kref_read(&cmd->se_cmd.cmd_kref), 697 700 cmd->se_cmd.transport_state, cmd->se_cmd.t_state, 698 701 cmd->se_cmd.se_cmd_flags); 699 - vha->hw->tgt.tgt_ops->free_cmd(cmd); 700 702 return 0; 701 703 } 702 704 cmd->bufflen = se_cmd->data_length;
+13 -1
drivers/scsi/scsi_transport_iscsi.c
··· 2475 2475 */ 2476 2476 mutex_lock(&conn_mutex); 2477 2477 conn->transport->stop_conn(conn, flag); 2478 + conn->state = ISCSI_CONN_DOWN; 2478 2479 mutex_unlock(&conn_mutex); 2479 2480 2480 2481 } ··· 2902 2901 default: 2903 2902 err = transport->set_param(conn, ev->u.set_param.param, 2904 2903 data, ev->u.set_param.len); 2904 + if ((conn->state == ISCSI_CONN_BOUND) || 2905 + (conn->state == ISCSI_CONN_UP)) { 2906 + err = transport->set_param(conn, ev->u.set_param.param, 2907 + data, ev->u.set_param.len); 2908 + } else { 2909 + return -ENOTCONN; 2910 + } 2905 2911 } 2906 2912 2907 2913 return err; ··· 2968 2960 mutex_lock(&conn->ep_mutex); 2969 2961 conn->ep = NULL; 2970 2962 mutex_unlock(&conn->ep_mutex); 2963 + conn->state = ISCSI_CONN_DOWN; 2971 2964 } 2972 2965 2973 2966 transport->ep_disconnect(ep); ··· 3736 3727 ev->r.retcode = transport->bind_conn(session, conn, 3737 3728 ev->u.b_conn.transport_eph, 3738 3729 ev->u.b_conn.is_leading); 3730 + if (!ev->r.retcode) 3731 + conn->state = ISCSI_CONN_BOUND; 3739 3732 mutex_unlock(&conn_mutex); 3740 3733 3741 3734 if (ev->r.retcode || !transport->ep_connect) ··· 3977 3966 static const char *const connection_state_names[] = { 3978 3967 [ISCSI_CONN_UP] = "up", 3979 3968 [ISCSI_CONN_DOWN] = "down", 3980 - [ISCSI_CONN_FAILED] = "failed" 3969 + [ISCSI_CONN_FAILED] = "failed", 3970 + [ISCSI_CONN_BOUND] = "bound" 3981 3971 }; 3982 3972 3983 3973 static ssize_t show_conn_state(struct device *dev,
+1 -1
drivers/soc/fsl/qbman/qman.c
··· 186 186 __be32 tag; 187 187 struct qm_fd fd; 188 188 u8 __reserved3[32]; 189 - } __packed; 189 + } __packed __aligned(8); 190 190 #define QM_EQCR_VERB_VBIT 0x80 191 191 #define QM_EQCR_VERB_CMD_MASK 0x61 /* but only one value; */ 192 192 #define QM_EQCR_VERB_CMD_ENQUEUE 0x01
-1
drivers/soc/litex/litex_soc_ctrl.c
··· 13 13 #include <linux/platform_device.h> 14 14 #include <linux/printk.h> 15 15 #include <linux/module.h> 16 - #include <linux/errno.h> 17 16 #include <linux/io.h> 18 17 #include <linux/reboot.h> 19 18
-74
drivers/soc/qcom/qcom-geni-se.c
··· 3 3 4 4 #include <linux/acpi.h> 5 5 #include <linux/clk.h> 6 - #include <linux/console.h> 7 6 #include <linux/slab.h> 8 7 #include <linux/dma-mapping.h> 9 8 #include <linux/io.h> ··· 91 92 struct device *dev; 92 93 void __iomem *base; 93 94 struct clk_bulk_data ahb_clks[NUM_AHB_CLKS]; 94 - struct geni_icc_path to_core; 95 95 }; 96 96 97 97 static const char * const icc_path_names[] = {"qup-core", "qup-config", 98 98 "qup-memory"}; 99 - 100 - static struct geni_wrapper *earlycon_wrapper; 101 99 102 100 #define QUP_HW_VER_REG 0x4 103 101 ··· 839 843 } 840 844 EXPORT_SYMBOL(geni_icc_disable); 841 845 842 - void geni_remove_earlycon_icc_vote(void) 843 - { 844 - struct platform_device *pdev; 845 - struct geni_wrapper *wrapper; 846 - struct device_node *parent; 847 - struct device_node *child; 848 - 849 - if (!earlycon_wrapper) 850 - return; 851 - 852 - wrapper = earlycon_wrapper; 853 - parent = of_get_next_parent(wrapper->dev->of_node); 854 - for_each_child_of_node(parent, child) { 855 - if (!of_device_is_compatible(child, "qcom,geni-se-qup")) 856 - continue; 857 - 858 - pdev = of_find_device_by_node(child); 859 - if (!pdev) 860 - continue; 861 - 862 - wrapper = platform_get_drvdata(pdev); 863 - icc_put(wrapper->to_core.path); 864 - wrapper->to_core.path = NULL; 865 - 866 - } 867 - of_node_put(parent); 868 - 869 - earlycon_wrapper = NULL; 870 - } 871 - EXPORT_SYMBOL(geni_remove_earlycon_icc_vote); 872 - 873 846 static int geni_se_probe(struct platform_device *pdev) 874 847 { 875 848 struct device *dev = &pdev->dev; 876 849 struct resource *res; 877 850 struct geni_wrapper *wrapper; 878 - struct console __maybe_unused *bcon; 879 - bool __maybe_unused has_earlycon = false; 880 851 int ret; 881 852 882 853 wrapper = devm_kzalloc(dev, sizeof(*wrapper), GFP_KERNEL); ··· 866 903 } 867 904 } 868 905 869 - #ifdef CONFIG_SERIAL_EARLYCON 870 - for_each_console(bcon) { 871 - if (!strcmp(bcon->name, "qcom_geni")) { 872 - has_earlycon = true; 873 - break; 874 - } 875 - } 876 - 
if (!has_earlycon) 877 - goto exit; 878 - 879 - wrapper->to_core.path = devm_of_icc_get(dev, "qup-core"); 880 - if (IS_ERR(wrapper->to_core.path)) 881 - return PTR_ERR(wrapper->to_core.path); 882 - /* 883 - * Put minmal BW request on core clocks on behalf of early console. 884 - * The vote will be removed earlycon exit function. 885 - * 886 - * Note: We are putting vote on each QUP wrapper instead only to which 887 - * earlycon is connected because QUP core clock of different wrapper 888 - * share same voltage domain. If core1 is put to 0, then core2 will 889 - * also run at 0, if not voted. Default ICC vote will be removed ASA 890 - * we touch any of the core clock. 891 - * core1 = core2 = max(core1, core2) 892 - */ 893 - ret = icc_set_bw(wrapper->to_core.path, GENI_DEFAULT_BW, 894 - GENI_DEFAULT_BW); 895 - if (ret) { 896 - dev_err(&pdev->dev, "%s: ICC BW voting failed for core: %d\n", 897 - __func__, ret); 898 - return ret; 899 - } 900 - 901 - if (of_get_compatible_child(pdev->dev.of_node, "qcom,geni-debug-uart")) 902 - earlycon_wrapper = wrapper; 903 - of_node_put(pdev->dev.of_node); 904 - exit: 905 - #endif 906 906 dev_set_drvdata(dev, wrapper); 907 907 dev_dbg(dev, "GENI SE Driver probed\n"); 908 908 return devm_of_platform_populate(dev);
+6 -2
drivers/soc/ti/omap_prm.c
··· 332 332 { 333 333 .name = "l3init", .base = 0x4ae07300, 334 334 .pwrstctrl = 0x0, .pwrstst = 0x4, .dmap = &omap_prm_alwon, 335 - .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_012, 335 + .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_01, 336 336 .clkdm_name = "pcie" 337 337 }, 338 338 { ··· 830 830 reset->prm->data->name, id); 831 831 832 832 exit: 833 - if (reset->clkdm) 833 + if (reset->clkdm) { 834 + /* At least dra7 iva needs a delay before clkdm idle */ 835 + if (has_rstst) 836 + udelay(1); 834 837 pdata->clkdm_allow_idle(reset->clkdm); 838 + } 835 839 836 840 return ret; 837 841 }
+1 -1
drivers/staging/rtl8192e/rtllib.h
··· 1105 1105 bool bWithAironetIE; 1106 1106 bool bCkipSupported; 1107 1107 bool bCcxRmEnable; 1108 - u16 CcxRmState[2]; 1108 + u8 CcxRmState[2]; 1109 1109 bool bMBssidValid; 1110 1110 u8 MBssidMask; 1111 1111 u8 MBssid[ETH_ALEN];
+1 -1
drivers/staging/rtl8192e/rtllib_rx.c
··· 1967 1967 info_element->data[2] == 0x96 && 1968 1968 info_element->data[3] == 0x01) { 1969 1969 if (info_element->len == 6) { 1970 - memcpy(network->CcxRmState, &info_element[4], 2); 1970 + memcpy(network->CcxRmState, &info_element->data[4], 2); 1971 1971 if (network->CcxRmState[0] != 0) 1972 1972 network->bCcxRmEnable = true; 1973 1973 else
+8 -1
drivers/target/target_core_pscsi.c
··· 882 882 if (!bio) { 883 883 new_bio: 884 884 nr_vecs = bio_max_segs(nr_pages); 885 - nr_pages -= nr_vecs; 886 885 /* 887 886 * Calls bio_kmalloc() and sets bio->bi_end_io() 888 887 */ ··· 938 939 939 940 return 0; 940 941 fail: 942 + if (bio) 943 + bio_put(bio); 944 + while (req->bio) { 945 + bio = req->bio; 946 + req->bio = bio->bi_next; 947 + bio_put(bio); 948 + } 949 + req->biotail = NULL; 941 950 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 942 951 } 943 952
-7
drivers/tty/serial/qcom_geni_serial.c
··· 1177 1177 struct console *con) { } 1178 1178 #endif 1179 1179 1180 - static int qcom_geni_serial_earlycon_exit(struct console *con) 1181 - { 1182 - geni_remove_earlycon_icc_vote(); 1183 - return 0; 1184 - } 1185 - 1186 1180 static struct qcom_geni_private_data earlycon_private_data; 1187 1181 1188 1182 static int __init qcom_geni_serial_earlycon_setup(struct earlycon_device *dev, ··· 1227 1233 writel(stop_bit_len, uport->membase + SE_UART_TX_STOP_BIT_LEN); 1228 1234 1229 1235 dev->con->write = qcom_geni_serial_earlycon_write; 1230 - dev->con->exit = qcom_geni_serial_earlycon_exit; 1231 1236 dev->con->setup = NULL; 1232 1237 qcom_geni_serial_enable_early_read(&se, dev->con); 1233 1238
+70 -50
drivers/usb/class/cdc-acm.c
··· 147 147 #define acm_send_break(acm, ms) \
 148 148 acm_ctrl_msg(acm, USB_CDC_REQ_SEND_BREAK, ms, NULL, 0)
 149 149 
 150 - static void acm_kill_urbs(struct acm *acm)
 150 + static void acm_poison_urbs(struct acm *acm)
 151 151 {
 152 152 int i;
 153 153 
 154 - usb_kill_urb(acm->ctrlurb);
 154 + usb_poison_urb(acm->ctrlurb);
 155 155 for (i = 0; i < ACM_NW; i++)
 156 - usb_kill_urb(acm->wb[i].urb);
 156 + usb_poison_urb(acm->wb[i].urb);
 157 157 for (i = 0; i < acm->rx_buflimit; i++)
 158 - usb_kill_urb(acm->read_urbs[i]);
 158 + usb_poison_urb(acm->read_urbs[i]);
 159 159 }
 160 + 
 161 + static void acm_unpoison_urbs(struct acm *acm)
 162 + {
 163 + int i;
 164 + 
 165 + for (i = 0; i < acm->rx_buflimit; i++)
 166 + usb_unpoison_urb(acm->read_urbs[i]);
 167 + for (i = 0; i < ACM_NW; i++)
 168 + usb_unpoison_urb(acm->wb[i].urb);
 169 + usb_unpoison_urb(acm->ctrlurb);
 170 + }
 171 + 
 160 172 
 161 173 /*
 162 174 * Write buffer management.
··· 238 226 
 239 227 rc = usb_submit_urb(wb->urb, GFP_ATOMIC);
 240 228 if (rc < 0) {
 241 - dev_err(&acm->data->dev,
 242 - "%s - usb_submit_urb(write bulk) failed: %d\n",
 243 - __func__, rc);
 229 + if (rc != -EPERM)
 230 + dev_err(&acm->data->dev,
 231 + "%s - usb_submit_urb(write bulk) failed: %d\n",
 232 + __func__, rc);
 244 233 acm_write_done(acm, wb);
 245 234 }
 246 235 return rc;
··· 326 313 acm->iocount.dsr++;
 327 314 if (difference & ACM_CTRL_DCD)
 328 315 acm->iocount.dcd++;
 329 - if (newctrl & ACM_CTRL_BRK)
 316 + if (newctrl & ACM_CTRL_BRK) {
 330 317 acm->iocount.brk++;
 318 + tty_insert_flip_char(&acm->port, 0, TTY_BREAK);
 319 + }
 331 320 if (newctrl & ACM_CTRL_RI)
 332 321 acm->iocount.rng++;
 333 322 if (newctrl & ACM_CTRL_FRAMING)
··· 495 480 dev_vdbg(&acm->data->dev, "got urb %d, len %d, status %d\n",
 496 481 rb->index, urb->actual_length, status);
 497 482 
 498 - if (!acm->dev) {
 499 - dev_dbg(&acm->data->dev, "%s - disconnected\n", __func__);
 500 - return;
 501 - }
 502 - 
 503 483 switch (status) {
 504 484 case 0:
 505 485 usb_mark_last_busy(acm->dev);
··· 659 649 
 660 650 res = acm_set_control(acm, val);
 661 651 if (res && (acm->ctrl_caps & USB_CDC_CAP_LINE))
 662 - dev_err(&acm->control->dev, "failed to set dtr/rts\n");
 652 + /* This is broken in too many devices to spam the logs */
 653 + dev_dbg(&acm->control->dev, "failed to set dtr/rts\n");
 663 654 }
 664 655 
 665 656 static int acm_port_activate(struct tty_port *port, struct tty_struct *tty)
··· 742 731 * Need to grab write_lock to prevent race with resume, but no need to
 743 732 * hold it due to the tty-port initialised flag.
 744 733 */
 734 + acm_poison_urbs(acm);
 745 735 spin_lock_irq(&acm->write_lock);
 746 736 spin_unlock_irq(&acm->write_lock);
 747 737 
··· 759 747 usb_autopm_put_interface_async(acm->control);
 760 748 }
 761 749 
 762 - acm_kill_urbs(acm);
 750 + acm_unpoison_urbs(acm);
 751 + 
 763 752 }
 764 753 
 765 754 static void acm_tty_cleanup(struct tty_struct *tty)
··· 1309 1296 if (!combined_interfaces && intf != control_interface)
 1310 1297 return -ENODEV;
 1311 1298 
 1312 - if (!combined_interfaces && usb_interface_claimed(data_interface)) {
 1313 - /* valid in this context */
 1314 - dev_dbg(&intf->dev, "The data interface isn't available\n");
 1315 - return -EBUSY;
 1316 - }
 1317 - 
 1318 - 
 1319 1299 if (data_interface->cur_altsetting->desc.bNumEndpoints < 2 ||
 1320 1300 control_interface->cur_altsetting->desc.bNumEndpoints == 0)
 1321 1301 return -EINVAL;
··· 1329 1323 dev_dbg(&intf->dev, "interfaces are valid\n");
 1330 1324 
 1331 1325 acm = kzalloc(sizeof(struct acm), GFP_KERNEL);
 1332 - if (acm == NULL)
 1333 - goto alloc_fail;
 1326 + if (!acm)
 1327 + return -ENOMEM;
 1334 1328 
 1335 1329 tty_port_init(&acm->port);
 1336 1330 acm->port.ops = &acm_port_ops;
··· 1347 1341 
 1348 1342 minor = acm_alloc_minor(acm);
 1349 1343 if (minor < 0)
 1350 - goto alloc_fail1;
 1344 + goto err_put_port;
 1351 1345 
 1352 1346 acm->minor = minor;
 1353 1347 acm->dev = usb_dev;
··· 1378 1372 
 1379 1373 buf = usb_alloc_coherent(usb_dev, ctrlsize, GFP_KERNEL, &acm->ctrl_dma);
 1380 1374 if (!buf)
 1381 - goto alloc_fail1;
 1375 + goto err_put_port;
 1382 1376 acm->ctrl_buffer = buf;
 1383 1377 
 1384 1378 if (acm_write_buffers_alloc(acm) < 0)
 1385 - goto alloc_fail2;
 1379 + goto err_free_ctrl_buffer;
 1386 1380 
 1387 1381 acm->ctrlurb = usb_alloc_urb(0, GFP_KERNEL);
 1388 1382 if (!acm->ctrlurb)
 1389 - goto alloc_fail3;
 1383 + goto err_free_write_buffers;
 1390 1384 
 1391 1385 for (i = 0; i < num_rx_buf; i++) {
 1392 1386 struct acm_rb *rb = &(acm->read_buffers[i]);
··· 1395 1389 rb->base = usb_alloc_coherent(acm->dev, readsize, GFP_KERNEL,
 1396 1390 &rb->dma);
 1397 1391 if (!rb->base)
 1398 - goto alloc_fail4;
 1392 + goto err_free_read_urbs;
 1399 1393 rb->index = i;
 1400 1394 rb->instance = acm;
 1401 1395 
 1402 1396 urb = usb_alloc_urb(0, GFP_KERNEL);
 1403 1397 if (!urb)
 1404 - goto alloc_fail4;
 1398 + goto err_free_read_urbs;
 1405 1399 
 1406 1400 urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
 1407 1401 urb->transfer_dma = rb->dma;
··· 1422 1416 struct acm_wb *snd = &(acm->wb[i]);
 1423 1417 
 1424 1418 snd->urb = usb_alloc_urb(0, GFP_KERNEL);
 1425 - if (snd->urb == NULL)
 1426 - goto alloc_fail5;
 1419 + if (!snd->urb)
 1420 + goto err_free_write_urbs;
 1427 1421 
 1428 1422 if (usb_endpoint_xfer_int(epwrite))
 1429 1423 usb_fill_int_urb(snd->urb, usb_dev, acm->out,
··· 1441 1435 
 1442 1436 i = device_create_file(&intf->dev, &dev_attr_bmCapabilities);
 1443 1437 if (i < 0)
 1444 - goto alloc_fail5;
 1438 + goto err_free_write_urbs;
 1445 1439 
 1446 1440 if (h.usb_cdc_country_functional_desc) { /* export the country data */
 1447 1441 struct usb_cdc_country_functional_desc * cfd =
··· 1486 1480 acm->nb_index = 0;
 1487 1481 acm->nb_size = 0;
 1488 1482 
 1489 - dev_info(&intf->dev, "ttyACM%d: USB ACM device\n", minor);
 1490 - 
 1491 1483 acm->line.dwDTERate = cpu_to_le32(9600);
 1492 1484 acm->line.bDataBits = 8;
 1493 1485 acm_set_line(acm, &acm->line);
 1494 1486 
 1495 - usb_driver_claim_interface(&acm_driver, data_interface, acm);
 1496 - usb_set_intfdata(data_interface, acm);
 1487 + if (!acm->combined_interfaces) {
 1488 + rv = usb_driver_claim_interface(&acm_driver, data_interface, acm);
 1489 + if (rv)
 1490 + goto err_remove_files;
 1491 + }
 1497 1492 
 1498 1493 tty_dev = tty_port_register_device(&acm->port, acm_tty_driver, minor,
 1499 1494 &control_interface->dev);
 1500 1495 if (IS_ERR(tty_dev)) {
 1501 1496 rv = PTR_ERR(tty_dev);
 1502 - goto alloc_fail6;
 1497 + goto err_release_data_interface;
 1503 1498 }
 1504 1499 
 1505 1500 if (quirks & CLEAR_HALT_CONDITIONS) {
··· 1508 1501 usb_clear_halt(usb_dev, acm->out);
 1509 1502 }
 1510 1503 
 1504 + dev_info(&intf->dev, "ttyACM%d: USB ACM device\n", minor);
 1505 + 
 1511 1506 return 0;
 1512 - alloc_fail6:
 1507 + 
 1508 + err_release_data_interface:
 1509 + if (!acm->combined_interfaces) {
 1510 + /* Clear driver data so that disconnect() returns early. */
 1511 + usb_set_intfdata(data_interface, NULL);
 1512 + usb_driver_release_interface(&acm_driver, data_interface);
 1513 + }
 1514 + err_remove_files:
 1513 1515 if (acm->country_codes) {
 1514 1516 device_remove_file(&acm->control->dev,
 1515 1517 &dev_attr_wCountryCodes);
 1516 1518 device_remove_file(&acm->control->dev,
 1517 1519 &dev_attr_iCountryCodeRelDate);
 1518 - kfree(acm->country_codes);
 1519 1520 }
 1520 1521 device_remove_file(&acm->control->dev, &dev_attr_bmCapabilities);
 1521 - alloc_fail5:
 1522 - usb_set_intfdata(intf, NULL);
 1522 + err_free_write_urbs:
 1523 1523 for (i = 0; i < ACM_NW; i++)
 1524 1524 usb_free_urb(acm->wb[i].urb);
 1525 - alloc_fail4:
 1525 + err_free_read_urbs:
 1526 1526 for (i = 0; i < num_rx_buf; i++)
 1527 1527 usb_free_urb(acm->read_urbs[i]);
 1528 1528 acm_read_buffers_free(acm);
 1529 1529 usb_free_urb(acm->ctrlurb);
 1530 - alloc_fail3:
 1530 + err_free_write_buffers:
 1531 1531 acm_write_buffers_free(acm);
 1532 - alloc_fail2:
 1532 + err_free_ctrl_buffer:
 1533 1533 usb_free_coherent(usb_dev, ctrlsize, acm->ctrl_buffer, acm->ctrl_dma);
 1534 - alloc_fail1:
 1534 + err_put_port:
 1535 1535 tty_port_put(&acm->port);
 1536 - alloc_fail:
 1536 + 
 1537 1537 return rv;
 1538 1538 }
 1539 1539 
··· 1554 1540 if (!acm)
 1555 1541 return;
 1556 1542 
 1557 - mutex_lock(&acm->mutex);
 1558 1543 acm->disconnected = true;
 1544 + /*
 1545 + * there is a circular dependency. acm_softint() can resubmit
 1546 + * the URBs in error handling so we need to block any
 1547 + * submission right away
 1548 + */
 1549 + acm_poison_urbs(acm);
 1550 + mutex_lock(&acm->mutex);
 1559 1551 if (acm->country_codes) {
 1560 1552 device_remove_file(&acm->control->dev,
 1561 1553 &dev_attr_wCountryCodes);
··· 1580 1560 tty_kref_put(tty);
 1581 1561 }
 1582 1562 
 1583 - acm_kill_urbs(acm);
 1584 1563 cancel_delayed_work_sync(&acm->dwork);
 1585 1564 
 1586 1565 tty_unregister_device(acm_tty_driver, acm->minor);
··· 1621 1602 if (cnt)
 1622 1603 return 0;
 1623 1604 
 1624 - acm_kill_urbs(acm);
 1605 + acm_poison_urbs(acm);
 1625 1606 cancel_delayed_work_sync(&acm->dwork);
 1626 1607 acm->urbs_in_error_delay = 0;
 1627 1608 
··· 1634 1615 struct urb *urb;
 1635 1616 int rv = 0;
 1636 1617 
 1618 + acm_unpoison_urbs(acm);
 1637 1619 spin_lock_irq(&acm->write_lock);
 1638 1620 
 1639 1621 if (--acm->susp_count)
+4
drivers/usb/core/quirks.c
··· 498 498 /* DJI CineSSD */
 499 499 { USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
 500 500 
 501 + /* Fibocom L850-GL LTE Modem */
 502 + { USB_DEVICE(0x2cb7, 0x0007), .driver_info =
 503 + USB_QUIRK_IGNORE_REMOTE_WAKEUP },
 504 + 
 501 505 /* INTEL VALUE SSD */
 502 506 { USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
 503 507 
+3 -2
drivers/usb/dwc2/hcd.c
··· 4322 4322 if (hsotg->op_state == OTG_STATE_B_PERIPHERAL)
 4323 4323 goto unlock;
 4324 4324 
 4325 - if (hsotg->params.power_down > DWC2_POWER_DOWN_PARAM_PARTIAL)
 4325 + if (hsotg->params.power_down != DWC2_POWER_DOWN_PARAM_PARTIAL ||
 4326 + hsotg->flags.b.port_connect_status == 0)
 4326 4327 goto skip_power_saving;
 4327 4328 
 4328 4329 /*
··· 5399 5398 dwc2_writel(hsotg, hprt0, HPRT0);
 5400 5399 
 5401 5400 /* Wait for the HPRT0.PrtSusp register field to be set */
 5402 - if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 3000))
 5401 + if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 5000))
 5403 5402 dev_warn(hsotg->dev, "Suspend wasn't generated\n");
 5404 5403 
 5405 5404 /*
+2
drivers/usb/dwc3/dwc3-pci.c
··· 120 120 static const struct property_entry dwc3_pci_mrfld_properties[] = {
 121 121 PROPERTY_ENTRY_STRING("dr_mode", "otg"),
 122 122 PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"),
 123 + PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"),
 124 + PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"),
 123 125 PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
 124 126 {}
 125 127 };
+3
drivers/usb/dwc3/dwc3-qcom.c
··· 244 244 struct device *dev = qcom->dev;
 245 245 int ret;
 246 246 
 247 + if (has_acpi_companion(dev))
 248 + return 0;
 249 + 
 247 250 qcom->icc_path_ddr = of_icc_get(dev, "usb-ddr");
 248 251 if (IS_ERR(qcom->icc_path_ddr)) {
 249 252 dev_err(dev, "failed to get usb-ddr path: %ld\n",
+6 -5
drivers/usb/dwc3/gadget.c
··· 791 791 reg &= ~DWC3_DALEPENA_EP(dep->number);
 792 792 dwc3_writel(dwc->regs, DWC3_DALEPENA, reg);
 793 793 
 794 - dep->stream_capable = false;
 795 - dep->type = 0;
 796 - dep->flags = 0;
 797 - 
 798 794 /* Clear out the ep descriptors for non-ep0 */
 799 795 if (dep->number > 1) {
 800 796 dep->endpoint.comp_desc = NULL;
··· 798 802 }
 799 803 
 800 804 dwc3_remove_requests(dwc, dep);
 805 + 
 806 + dep->stream_capable = false;
 807 + dep->type = 0;
 808 + dep->flags = 0;
 801 809 
 802 810 return 0;
 803 811 }
··· 2083 2083 u32 reg;
 2084 2084 
 2085 2085 speed = dwc->gadget_max_speed;
 2086 - if (speed > dwc->maximum_speed)
 2086 + if (speed == USB_SPEED_UNKNOWN || speed > dwc->maximum_speed)
 2087 2087 speed = dwc->maximum_speed;
 2088 2088 
 2089 2089 if (speed == USB_SPEED_SUPER_PLUS &&
··· 2523 2523 unsigned long flags;
 2524 2524 
 2525 2525 spin_lock_irqsave(&dwc->lock, flags);
 2526 + dwc->gadget_max_speed = USB_SPEED_SUPER_PLUS;
 2526 2527 dwc->gadget_ssp_rate = rate;
 2527 2528 spin_unlock_irqrestore(&dwc->lock, flags);
 2528 2529 }
+5 -5
drivers/usb/gadget/udc/amd5536udc_pci.c
··· 153 153 pci_set_master(pdev);
 154 154 pci_try_set_mwi(pdev);
 155 155 
 156 + dev->phys_addr = resource;
 157 + dev->irq = pdev->irq;
 158 + dev->pdev = pdev;
 159 + dev->dev = &pdev->dev;
 160 + 
 156 161 /* init dma pools */
 157 162 if (use_dma) {
 158 163 retval = init_dma_pools(dev);
 159 164 if (retval != 0)
 160 165 goto err_dma;
 161 166 }
 162 - 
 163 - dev->phys_addr = resource;
 164 - dev->irq = pdev->irq;
 165 - dev->pdev = pdev;
 166 - dev->dev = &pdev->dev;
 167 167 
 168 168 /* general probing */
 169 169 if (udc_probe(dev)) {
+9 -1
drivers/usb/host/xhci-mtk.c
··· 397 397 xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
 398 398 if (mtk->lpm_support)
 399 399 xhci->quirks |= XHCI_LPM_SUPPORT;
 400 + 
 401 + /*
 402 + * MTK xHCI 0.96: PSA is 1 by default even if doesn't support stream,
 403 + * and it's 3 when support it.
 404 + */
 405 + if (xhci->hci_version < 0x100 && HCC_MAX_PSA(xhci->hcc_params) == 4)
 406 + xhci->quirks |= XHCI_BROKEN_STREAMS;
 400 407 }
 401 408 
 402 409 /* called during probe() after chip reset completes */
··· 555 548 if (ret)
 556 549 goto put_usb3_hcd;
 557 550 
 558 - if (HCC_MAX_PSA(xhci->hcc_params) >= 4)
 551 + if (HCC_MAX_PSA(xhci->hcc_params) >= 4 &&
 552 + !(xhci->quirks & XHCI_BROKEN_STREAMS))
 559 553 xhci->shared_hcd->can_do_streams = 1;
 560 554 
 561 555 ret = usb_add_hcd(xhci->shared_hcd, irq, IRQF_SHARED);
+8 -4
drivers/usb/musb/musb_core.c
··· 2004 2004 MUSB_DEVCTL_HR;
 2005 2005 switch (devctl & ~s) {
 2006 2006 case MUSB_QUIRK_B_DISCONNECT_99:
 2007 - musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n");
 2008 - schedule_delayed_work(&musb->irq_work,
 2009 - msecs_to_jiffies(1000));
 2010 - break;
 2007 + if (musb->quirk_retries && !musb->flush_irq_work) {
 2008 + musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n");
 2009 + schedule_delayed_work(&musb->irq_work,
 2010 + msecs_to_jiffies(1000));
 2011 + musb->quirk_retries--;
 2012 + break;
 2013 + }
 2014 + fallthrough;
 2011 2015 case MUSB_QUIRK_B_INVALID_VBUS_91:
 2012 2016 if (musb->quirk_retries && !musb->flush_irq_work) {
 2013 2017 musb_dbg(musb,
+2
drivers/usb/usbip/vhci_hcd.c
··· 594 594 pr_err("invalid port number %d\n", wIndex);
 595 595 goto error;
 596 596 }
 597 + if (wValue >= 32)
 598 + goto error;
 597 599 if (hcd->speed == HCD_USB3) {
 598 600 if ((vhci_hcd->port_status[rhport] &
 599 601 USB_SS_PORT_STAT_POWER) != 0) {
+4
drivers/vdpa/mlx5/core/mlx5_vdpa.h
··· 4 4 #ifndef __MLX5_VDPA_H__
 5 5 #define __MLX5_VDPA_H__
 6 6 
 7 + #include <linux/etherdevice.h>
 8 + #include <linux/if_vlan.h>
 7 9 #include <linux/vdpa.h>
 8 10 #include <linux/mlx5/driver.h>
 11 + 
 12 + #define MLX5V_ETH_HARD_MTU (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN)
 9 13 
 10 14 struct mlx5_vdpa_direct_mr {
 11 15 u64 start;
+7 -2
drivers/vdpa/mlx5/core/mr.c
··· 219 219 mlx5_vdpa_destroy_mkey(mvdev, &mkey->mkey);
 220 220 }
 221 221 
 222 + static struct device *get_dma_device(struct mlx5_vdpa_dev *mvdev)
 223 + {
 224 + return &mvdev->mdev->pdev->dev;
 225 + }
 226 + 
 222 227 static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr *mr,
 223 228 struct vhost_iotlb *iotlb)
 224 229 {
··· 239 234 u64 pa;
 240 235 u64 paend;
 241 236 struct scatterlist *sg;
 242 - struct device *dma = mvdev->mdev->device;
 237 + struct device *dma = get_dma_device(mvdev);
 243 238 
 244 239 for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1);
 245 240 map; map = vhost_iotlb_itree_next(map, start, mr->end - 1)) {
··· 296 291 
 297 292 static void unmap_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr *mr)
 298 293 {
 299 - struct device *dma = mvdev->mdev->device;
 294 + struct device *dma = get_dma_device(mvdev);
 300 295 
 301 296 destroy_direct_mr(mvdev, mr);
 302 297 dma_unmap_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
+2 -1
drivers/vdpa/mlx5/core/resources.c
··· 246 246 if (err)
 247 247 goto err_key;
 248 248 
 249 - kick_addr = pci_resource_start(mdev->pdev, 0) + offset;
 249 + kick_addr = mdev->bar_addr + offset;
 250 + 
 250 251 res->kick_addr = ioremap(kick_addr, PAGE_SIZE);
 251 252 if (!res->kick_addr) {
 252 253 err = -ENOMEM;
+24 -16
drivers/vdpa/mlx5/net/mlx5_vnet.c
··· 820 820 MLX5_SET(virtio_q, vq_ctx, event_qpn_or_msix, mvq->fwqp.mqp.qpn);
 821 821 MLX5_SET(virtio_q, vq_ctx, queue_size, mvq->num_ent);
 822 822 MLX5_SET(virtio_q, vq_ctx, virtio_version_1_0,
 823 - !!(ndev->mvdev.actual_features & VIRTIO_F_VERSION_1));
 823 + !!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_F_VERSION_1)));
 824 824 MLX5_SET64(virtio_q, vq_ctx, desc_addr, mvq->desc_addr);
 825 825 MLX5_SET64(virtio_q, vq_ctx, used_addr, mvq->device_addr);
 826 826 MLX5_SET64(virtio_q, vq_ctx, available_addr, mvq->driver_addr);
··· 1169 1169 return;
 1170 1170 }
 1171 1171 mvq->avail_idx = attr.available_index;
 1172 + mvq->used_idx = attr.used_index;
 1172 1173 }
 1173 1174 
 1174 1175 static void suspend_vqs(struct mlx5_vdpa_net *ndev)
··· 1427 1426 return -EINVAL;
 1428 1427 }
 1429 1428 
 1429 + mvq->used_idx = state->avail_index;
 1430 1430 mvq->avail_idx = state->avail_index;
 1431 1431 return 0;
 1432 1432 }
··· 1445 1443 * that cares about emulating the index after vq is stopped.
 1446 1444 */
 1447 1445 if (!mvq->initialized) {
 1446 + /* Firmware returns a wrong value for the available index.
 1447 + * Since both values should be identical, we take the value of
 1448 + * used_idx which is reported correctly.
 1449 + */
 1450 + state->avail_index = mvq->used_idx;
 1448 - state->avail_index = mvq->avail_idx;
 1449 1451 return 0;
 1450 1452 }
 1451 1453 
··· 1458 1452 mlx5_vdpa_warn(mvdev, "failed to query virtqueue\n");
 1459 1453 return err;
 1460 1454 }
 1461 - state->avail_index = attr.available_index;
 1455 + state->avail_index = attr.used_index;
 1462 1456 return 0;
 1463 1457 }
··· 1546 1540 }
 1547 1541 }
 1548 1542 
 1549 - static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
 1550 - {
 1551 - int i;
 1552 - 
 1553 - for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) {
 1554 - ndev->vqs[i].avail_idx = 0;
 1555 - ndev->vqs[i].used_idx = 0;
 1556 - }
 1557 - }
 1558 - 
 1559 1543 /* TODO: cross-endian support */
 1560 1544 static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev)
 1561 1545 {
 1562 1546 return virtio_legacy_is_little_endian() ||
 1563 - (mvdev->actual_features & (1ULL << VIRTIO_F_VERSION_1));
 1547 + (mvdev->actual_features & BIT_ULL(VIRTIO_F_VERSION_1));
 1564 1548 }
 1565 1549 
 1566 1550 static __virtio16 cpu_to_mlx5vdpa16(struct mlx5_vdpa_dev *mvdev, u16 val)
··· 1781 1785 if (!status) {
 1782 1786 mlx5_vdpa_info(mvdev, "performing device reset\n");
 1783 1787 teardown_driver(ndev);
 1784 - clear_virtqueues(ndev);
 1785 1788 mlx5_vdpa_destroy_mr(&ndev->mvdev);
 1786 1789 ndev->mvdev.status = 0;
 1787 1790 ndev->mvdev.mlx_features = 0;
··· 1902 1907 .free = mlx5_vdpa_free,
 1903 1908 };
 1904 1909 
 1910 + static int query_mtu(struct mlx5_core_dev *mdev, u16 *mtu)
 1911 + {
 1912 + u16 hw_mtu;
 1913 + int err;
 1914 + 
 1915 + err = mlx5_query_nic_vport_mtu(mdev, &hw_mtu);
 1916 + if (err)
 1917 + return err;
 1918 + 
 1919 + *mtu = hw_mtu - MLX5V_ETH_HARD_MTU;
 1920 + return 0;
 1921 + }
 1922 + 
 1905 1923 static int alloc_resources(struct mlx5_vdpa_net *ndev)
 1906 1924 {
 1907 1925 struct mlx5_vdpa_net_resources *res = &ndev->res;
··· 2000 1992 init_mvqs(ndev);
 2001 1993 mutex_init(&ndev->reslock);
 2002 1994 config = &ndev->config;
 2003 - err = mlx5_query_nic_vport_mtu(mdev, &ndev->mtu);
 1995 + err = query_mtu(mdev, &ndev->mtu);
 2004 1996 if (err)
 2005 1997 goto err_mtu;
 2006 1998 
+1 -1
drivers/vfio/pci/Kconfig
··· 42 42 
 43 43 config VFIO_PCI_NVLINK2
 44 44 def_bool y
 45 - depends on VFIO_PCI && PPC_POWERNV
 45 + depends on VFIO_PCI && PPC_POWERNV && SPAPR_TCE_IOMMU
 46 46 help
 47 47 VFIO PCI support for P9 Witherspoon machine with NVIDIA V100 GPUs
+6
drivers/vfio/vfio_iommu_type1.c
··· 739 739 ret = vfio_lock_acct(dma, lock_acct, false);
 740 740 
 741 741 unpin_out:
 742 + if (batch->size == 1 && !batch->offset) {
 743 + /* May be a VM_PFNMAP pfn, which the batch can't remember. */
 744 + put_pfn(pfn, dma->prot);
 745 + batch->size = 0;
 746 + }
 747 + 
 742 748 if (ret < 0) {
 743 749 if (pinned && !rsvd) {
 744 750 for (pfn = *pfn_base ; pinned ; pfn++, pinned--)
+3
drivers/video/fbdev/core/fbcon.c
··· 1333 1333 
 1334 1334 ops->cursor_flash = (mode == CM_ERASE) ? 0 : 1;
 1335 1335 
 1336 + if (!ops->cursor)
 1337 + return;
 1338 + 
 1336 1339 ops->cursor(vc, info, mode, get_color(vc, info, c, 1),
 1337 1340 get_color(vc, info, c, 0));
 1338 1341 }
-3
drivers/video/fbdev/hyperv_fb.c
··· 1031 1031 PCI_DEVICE_ID_HYPERV_VIDEO, NULL);
 1032 1032 if (!pdev) {
 1033 1033 pr_err("Unable to find PCI Hyper-V video\n");
 1034 - kfree(info->apertures);
 1035 1034 return -ENODEV;
 1036 1035 }
 1037 1036 
··· 1128 1129 } else {
 1129 1130 pci_dev_put(pdev);
 1130 1131 }
 1131 - kfree(info->apertures);
 1132 1132 
 1133 1133 return 0;
 1134 1134 
··· 1139 1141 err1:
 1140 1142 if (!gen2vm)
 1141 1143 pci_dev_put(pdev);
 1142 - kfree(info->apertures);
 1143 1144 
 1144 1145 return -ENOMEM;
 1145 1146 }
+2 -2
drivers/xen/Kconfig
··· 50 50 
 51 51 SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"
 52 52 
 53 - config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
 53 + config XEN_MEMORY_HOTPLUG_LIMIT
 54 54 int "Hotplugged memory limit (in GiB) for a PV guest"
 55 55 default 512
 56 56 depends on XEN_HAVE_PVMMU
 57 - depends on XEN_BALLOON_MEMORY_HOTPLUG
 57 + depends on MEMORY_HOTPLUG
 58 58 help
 59 59 Maxmium amount of memory (in GiB) that a PV guest can be
 60 60 expanded to when using memory hotplug.
+6 -6
drivers/xen/events/events_base.c
··· 110 110 unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
 111 111 unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
 112 112 u64 eoi_time; /* Time in jiffies when to EOI. */
 113 - spinlock_t lock;
 113 + raw_spinlock_t lock;
 114 114 
 115 115 union {
 116 116 unsigned short virq;
··· 312 312 info->evtchn = evtchn;
 313 313 info->cpu = cpu;
 314 314 info->mask_reason = EVT_MASK_REASON_EXPLICIT;
 315 - spin_lock_init(&info->lock);
 315 + raw_spin_lock_init(&info->lock);
 316 316 
 317 317 ret = set_evtchn_to_irq(evtchn, irq);
 318 318 if (ret < 0)
··· 472 472 {
 473 473 unsigned long flags;
 474 474 
 475 - spin_lock_irqsave(&info->lock, flags);
 475 + raw_spin_lock_irqsave(&info->lock, flags);
 476 476 
 477 477 if (!info->mask_reason)
 478 478 mask_evtchn(info->evtchn);
 479 479 
 480 480 info->mask_reason |= reason;
 481 481 
 482 - spin_unlock_irqrestore(&info->lock, flags);
 482 + raw_spin_unlock_irqrestore(&info->lock, flags);
 483 483 }
 484 484 
 485 485 static void do_unmask(struct irq_info *info, u8 reason)
 486 486 {
 487 487 unsigned long flags;
 488 488 
 489 - spin_lock_irqsave(&info->lock, flags);
 489 + raw_spin_lock_irqsave(&info->lock, flags);
 490 490 
 491 491 info->mask_reason &= ~reason;
 492 492 
 493 493 if (!info->mask_reason)
 494 494 unmask_evtchn(info->evtchn);
 495 495 
 496 - spin_unlock_irqrestore(&info->lock, flags);
 496 + raw_spin_unlock_irqrestore(&info->lock, flags);
 497 497 }
 498 498 
 499 499 #ifdef CONFIG_X86
+6 -2
fs/block_dev.c
··· 275 275 bio.bi_opf = dio_bio_write_op(iocb);
 276 276 task_io_account_write(ret);
 277 277 }
 278 + if (iocb->ki_flags & IOCB_NOWAIT)
 279 + bio.bi_opf |= REQ_NOWAIT;
 278 280 if (iocb->ki_flags & IOCB_HIPRI)
 279 281 bio_set_polled(&bio, iocb);
 280 282 
··· 430 428 bio->bi_opf = dio_bio_write_op(iocb);
 431 429 task_io_account_write(bio->bi_iter.bi_size);
 432 430 }
 431 + if (iocb->ki_flags & IOCB_NOWAIT)
 432 + bio->bi_opf |= REQ_NOWAIT;
 433 433 
 434 434 dio->size += bio->bi_iter.bi_size;
 435 435 pos += bio->bi_iter.bi_size;
··· 1244 1240 
 1245 1241 lockdep_assert_held(&bdev->bd_mutex);
 1246 1242 
 1247 - clear_bit(GD_NEED_PART_SCAN, &bdev->bd_disk->state);
 1248 - 
 1249 1243 rescan:
 1250 1244 ret = blk_drop_partitions(bdev);
 1251 1245 if (ret)
 1252 1246 return ret;
 1247 + 
 1248 + clear_bit(GD_NEED_PART_SCAN, &disk->state);
 1253 1249 
 1254 1250 /*
 1255 1251 * Historically we only set the capacity to zero for devices that
+6 -4
fs/btrfs/Makefile
··· 7 7 subdir-ccflags-y += -Wmissing-prototypes
 8 8 subdir-ccflags-y += -Wold-style-definition
 9 9 subdir-ccflags-y += -Wmissing-include-dirs
 10 - subdir-ccflags-y += $(call cc-option, -Wunused-but-set-variable)
 11 - subdir-ccflags-y += $(call cc-option, -Wunused-const-variable)
 12 - subdir-ccflags-y += $(call cc-option, -Wpacked-not-aligned)
 13 - subdir-ccflags-y += $(call cc-option, -Wstringop-truncation)
 10 + condflags := \
 11 + $(call cc-option, -Wunused-but-set-variable) \
 12 + $(call cc-option, -Wunused-const-variable) \
 13 + $(call cc-option, -Wpacked-not-aligned) \
 14 + $(call cc-option, -Wstringop-truncation)
 15 + subdir-ccflags-y += $(condflags)
 14 16 # The following turn off the warnings enabled by -Wextra
 15 17 subdir-ccflags-y += -Wno-missing-field-initializers
 16 18 subdir-ccflags-y += -Wno-sign-compare
+3
fs/btrfs/dev-replace.c
··· 81 81 struct btrfs_dev_replace_item *ptr;
 82 82 u64 src_devid;
 83 83 
 84 + if (!dev_root)
 85 + return 0;
 86 + 
 84 87 path = btrfs_alloc_path();
 85 88 if (!path) {
 86 89 ret = -ENOMEM;
+17 -2
fs/btrfs/disk-io.c
··· 2387 2387 } else {
 2388 2388 set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
 2389 2389 fs_info->dev_root = root;
 2390 - btrfs_init_devices_late(fs_info);
 2391 2390 }
 2391 + /* Initialize fs_info for all devices in any case */
 2392 + btrfs_init_devices_late(fs_info);
 2392 2393 
 2393 2394 /* If IGNOREDATACSUMS is set don't bother reading the csum root. */
 2394 2395 if (!btrfs_test_opt(fs_info, IGNOREDATACSUMS)) {
··· 3010 3009 }
 3011 3010 }
 3012 3011 
 3012 + /*
 3013 + * btrfs_find_orphan_roots() is responsible for finding all the dead
 3014 + * roots (with 0 refs), flag them with BTRFS_ROOT_DEAD_TREE and load
 3015 + * them into the fs_info->fs_roots_radix tree. This must be done before
 3016 + * calling btrfs_orphan_cleanup() on the tree root. If we don't do it
 3017 + * first, then btrfs_orphan_cleanup() will delete a dead root's orphan
 3018 + * item before the root's tree is deleted - this means that if we unmount
 3019 + * or crash before the deletion completes, on the next mount we will not
 3020 + * delete what remains of the tree because the orphan item does not
 3021 + * exists anymore, which is what tells us we have a pending deletion.
 3022 + */
 3023 + ret = btrfs_find_orphan_roots(fs_info);
 3024 + if (ret)
 3025 + goto out;
 3026 + 
 3013 3027 ret = btrfs_cleanup_fs_roots(fs_info);
 3014 3028 if (ret)
 3015 3029 goto out;
··· 3084 3068 }
 3085 3069 }
 3086 3070 
 3087 - ret = btrfs_find_orphan_roots(fs_info);
 3088 3071 out:
 3089 3072 return ret;
 3090 3073 }
+9 -9
fs/btrfs/inode.c
··· 3099 3099 * @bio_offset: offset to the beginning of the bio (in bytes)
 3100 3100 * @page: page where is the data to be verified
 3101 3101 * @pgoff: offset inside the page
 3102 + * @start: logical offset in the file
 3102 3103 *
 3103 3104 * The length of such check is always one sector size.
 3104 3105 */
 3105 3106 static int check_data_csum(struct inode *inode, struct btrfs_io_bio *io_bio,
 3106 - u32 bio_offset, struct page *page, u32 pgoff)
 3107 + u32 bio_offset, struct page *page, u32 pgoff,
 3108 + u64 start)
 3107 3109 {
 3108 3110 struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
 3109 3111 SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
··· 3132 3130 kunmap_atomic(kaddr);
 3133 3131 return 0;
 3134 3132 zeroit:
 3135 - btrfs_print_data_csum_error(BTRFS_I(inode), page_offset(page) + pgoff,
 3136 - csum, csum_expected, io_bio->mirror_num);
 3133 + btrfs_print_data_csum_error(BTRFS_I(inode), start, csum, csum_expected,
 3134 + io_bio->mirror_num);
 3137 3135 if (io_bio->device)
 3138 3136 btrfs_dev_stat_inc_and_print(io_bio->device,
 3139 3137 BTRFS_DEV_STAT_CORRUPTION_ERRS);
··· 3186 3184 pg_off += sectorsize, bio_offset += sectorsize) {
 3187 3185 int ret;
 3188 3186 
 3189 - ret = check_data_csum(inode, io_bio, bio_offset, page, pg_off);
 3187 + ret = check_data_csum(inode, io_bio, bio_offset, page, pg_off,
 3188 + page_offset(page) + pg_off);
 3190 3189 if (ret < 0)
 3191 3190 return -EIO;
 3192 3191 }
··· 7913 7910 ASSERT(pgoff < PAGE_SIZE);
 7914 7911 if (uptodate &&
 7915 7912 (!csum || !check_data_csum(inode, io_bio,
 7916 - bio_offset, bvec.bv_page, pgoff))) {
 7913 + bio_offset, bvec.bv_page,
 7914 + pgoff, start))) {
 7917 7915 clean_io_failure(fs_info, failure_tree, io_tree,
 7918 7916 start, bvec.bv_page,
 7919 7917 btrfs_ino(BTRFS_I(inode)),
··· 8172 8168 bio->bi_private = dip;
 8173 8169 bio->bi_end_io = btrfs_end_dio_bio;
 8174 8170 btrfs_io_bio(bio)->logical = file_offset;
 8175 - 
 8176 - WARN_ON_ONCE(write && btrfs_is_zoned(fs_info) &&
 8177 - fs_info->max_zone_append_size &&
 8178 - bio_op(bio) != REQ_OP_ZONE_APPEND);
 8179 8171 
 8180 8172 if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
 8181 8173 status = extract_ordered_extent(BTRFS_I(inode), bio,
+10 -2
fs/btrfs/qgroup.c
··· 226 226 {
 227 227 struct btrfs_qgroup_list *list;
 228 228 
 229 - btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
 230 229 list_del(&qgroup->dirty);
 231 230 while (!list_empty(&qgroup->groups)) {
 232 231 list = list_first_entry(&qgroup->groups,
··· 242 243 list_del(&list->next_member);
 243 244 kfree(list);
 244 245 }
 245 - kfree(qgroup);
 246 246 }
 247 247 
 248 248 /* must be called with qgroup_lock held */
··· 567 569 qgroup = rb_entry(n, struct btrfs_qgroup, node);
 568 570 rb_erase(n, &fs_info->qgroup_tree);
 569 571 __del_qgroup_rb(fs_info, qgroup);
 572 + btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
 573 + kfree(qgroup);
 570 574 }
 571 575 /*
 572 576 * We call btrfs_free_qgroup_config() when unmounting
··· 1578 1578 spin_lock(&fs_info->qgroup_lock);
 1579 1579 del_qgroup_rb(fs_info, qgroupid);
 1580 1580 spin_unlock(&fs_info->qgroup_lock);
 1581 + 
 1582 + /*
 1583 + * Remove the qgroup from sysfs now without holding the qgroup_lock
 1584 + * spinlock, since the sysfs_remove_group() function needs to take
 1585 + * the mutex kernfs_mutex through kernfs_remove_by_name_ns().
 1586 + */
 1587 + btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
 1588 + kfree(qgroup);
 1581 1589 out:
 1582 1590 mutex_unlock(&fs_info->qgroup_ioctl_lock);
 1583 1591 return ret;
+3
fs/btrfs/volumes.c
··· 7448 7448 int item_size;
 7449 7449 int i, ret, slot;
 7450 7450 
 7451 + if (!device->fs_info->dev_root)
 7452 + return 0;
 7453 + 
 7451 7454 key.objectid = BTRFS_DEV_STATS_OBJECTID;
 7452 7455 key.type = BTRFS_PERSISTENT_ITEM_KEY;
 7453 7456 key.offset = device->devid;
+1 -2
fs/cifs/Kconfig
··· 18 18 select CRYPTO_AES
 19 19 select CRYPTO_LIB_DES
 20 20 select KEYS
 21 + select DNS_RESOLVER
 21 22 help
 22 23 This is the client VFS module for the SMB3 family of NAS protocols,
 23 24 (including support for the most recent, most secure dialect SMB3.1.1)
··· 113 112 config CIFS_UPCALL
 114 113 bool "Kerberos/SPNEGO advanced session setup"
 115 114 depends on CIFS
 116 - select DNS_RESOLVER
 117 115 help
 118 116 Enables an upcall mechanism for CIFS which accesses userspace helper
 119 117 utilities to provide SPNEGO packaged (RFC 4178) Kerberos tickets
··· 179 179 config CIFS_DFS_UPCALL
 180 180 bool "DFS feature support"
 181 181 depends on CIFS
 182 - select DNS_RESOLVER
 183 182 help
 184 183 Distributed File System (DFS) support is used to access shares
 185 184 transparently in an enterprise name space, even if the share
+3 -2
fs/cifs/Makefile
··· 10 10 cifs_unicode.o nterr.o cifsencrypt.o \
 11 11 readdir.o ioctl.o sess.o export.o smb1ops.o unc.o winucase.o \
 12 12 smb2ops.o smb2maperror.o smb2transport.o \
 13 - smb2misc.o smb2pdu.o smb2inode.o smb2file.o cifsacl.o fs_context.o
 13 + smb2misc.o smb2pdu.o smb2inode.o smb2file.o cifsacl.o fs_context.o \
 14 + dns_resolve.o
 14 15 
 15 16 cifs-$(CONFIG_CIFS_XATTR) += xattr.o
 16 17 
 17 18 cifs-$(CONFIG_CIFS_UPCALL) += cifs_spnego.o
 18 19 
 19 - cifs-$(CONFIG_CIFS_DFS_UPCALL) += dns_resolve.o cifs_dfs_ref.o dfs_cache.o
 20 + cifs-$(CONFIG_CIFS_DFS_UPCALL) += cifs_dfs_ref.o dfs_cache.o
 20 21 
 21 22 cifs-$(CONFIG_CIFS_SWN_UPCALL) += netlink.o cifs_swn.o
 22 23 
+1 -2
fs/cifs/cifsacl.c
··· 1130 1130 }
 1131 1131 
 1132 1132 /* If it's any one of the ACE we're replacing, skip! */
 1133 - if (!mode_from_sid &&
 1134 - ((compare_sids(&pntace->sid, &sid_unix_NFS_mode) == 0) ||
 1133 + if (((compare_sids(&pntace->sid, &sid_unix_NFS_mode) == 0) ||
 1135 1134 (compare_sids(&pntace->sid, pownersid) == 0) ||
 1136 1135 (compare_sids(&pntace->sid, pgrpsid) == 0) ||
 1137 1136 (compare_sids(&pntace->sid, &sid_everyone) == 0) ||
+2 -1
fs/cifs/cifsfs.c
··· 476 476 seq_puts(m, "none"); 477 477 else { 478 478 convert_delimiter(devname, '/'); 479 - seq_puts(m, devname); 479 + /* escape all spaces in share names */ 480 + seq_escape(m, devname, " \t"); 480 481 kfree(devname); 481 482 } 482 483 return 0;
+2 -4
fs/cifs/cifsglob.h
··· 919 919 bool binding:1; /* are we binding the session? */ 920 920 __u16 session_flags; 921 921 __u8 smb3signingkey[SMB3_SIGN_KEY_SIZE]; 922 - __u8 smb3encryptionkey[SMB3_SIGN_KEY_SIZE]; 923 - __u8 smb3decryptionkey[SMB3_SIGN_KEY_SIZE]; 922 + __u8 smb3encryptionkey[SMB3_ENC_DEC_KEY_SIZE]; 923 + __u8 smb3decryptionkey[SMB3_ENC_DEC_KEY_SIZE]; 924 924 __u8 preauth_sha_hash[SMB2_PREAUTH_HASH_SIZE]; 925 925 926 926 __u8 binding_preauth_sha_hash[SMB2_PREAUTH_HASH_SIZE]; ··· 1282 1282 */ 1283 1283 bool direct_io; 1284 1284 }; 1285 - 1286 - struct cifs_readdata; 1287 1285 1288 1286 /* asynchronous read support */ 1289 1287 struct cifs_readdata {
+5
fs/cifs/cifspdu.h
··· 147 147 */ 148 148 #define SMB3_SIGN_KEY_SIZE (16) 149 149 150 + /* 151 + * Size of the smb3 encryption/decryption keys 152 + */ 153 + #define SMB3_ENC_DEC_KEY_SIZE (32) 154 + 150 155 #define CIFS_CLIENT_CHALLENGE_SIZE (8) 151 156 #define CIFS_SERVER_CHALLENGE_SIZE (8) 152 157 #define CIFS_HMAC_MD5_HASH_SIZE (16)
+16 -1
fs/cifs/connect.c
··· 87 87 * 88 88 * This should be called with server->srv_mutex held. 89 89 */ 90 - #ifdef CONFIG_CIFS_DFS_UPCALL 91 90 static int reconn_set_ipaddr_from_hostname(struct TCP_Server_Info *server) 92 91 { 93 92 int rc; ··· 123 124 return !rc ? -1 : 0; 124 125 } 125 126 127 + #ifdef CONFIG_CIFS_DFS_UPCALL 126 128 /* These functions must be called with server->srv_mutex held */ 127 129 static void reconn_set_next_dfs_target(struct TCP_Server_Info *server, 128 130 struct cifs_sb_info *cifs_sb, ··· 321 321 #endif 322 322 323 323 #ifdef CONFIG_CIFS_DFS_UPCALL 324 + if (cifs_sb && cifs_sb->origin_fullpath) 324 325 /* 325 326 * Set up next DFS target server (if any) for reconnect. If DFS 326 327 * feature is disabled, then we will retry last server we 327 328 * connected to before. 328 329 */ 329 330 reconn_set_next_dfs_target(server, cifs_sb, &tgt_list, &tgt_it); 331 + else { 330 332 #endif 333 + /* 334 + * Resolve the hostname again to make sure that IP address is up-to-date. 335 + */ 336 + rc = reconn_set_ipaddr_from_hostname(server); 337 + if (rc) { 338 + cifs_dbg(FYI, "%s: failed to resolve hostname: %d\n", 339 + __func__, rc); 340 + } 341 + 342 + #ifdef CONFIG_CIFS_DFS_UPCALL 343 + } 344 + #endif 345 + 331 346 332 347 #ifdef CONFIG_CIFS_SWN_UPCALL 333 348 }
+1
fs/cifs/file.c
··· 165 165 goto posix_open_ret; 166 166 } 167 167 } else { 168 + cifs_revalidate_mapping(*pinode); 168 169 cifs_fattr_to_inode(*pinode, &fattr); 169 170 } 170 171
+1
fs/cifs/smb2glob.h
··· 58 58 #define SMB2_HMACSHA256_SIZE (32) 59 59 #define SMB2_CMACAES_SIZE (16) 60 60 #define SMB3_SIGNKEY_SIZE (16) 61 + #define SMB3_GCM128_CRYPTKEY_SIZE (16) 61 62 #define SMB3_GCM256_CRYPTKEY_SIZE (32) 62 63 63 64 /* Maximum buffer size value we can send with 1 credit */
+2 -2
fs/cifs/smb2misc.c
··· 754 754 } 755 755 } 756 756 spin_unlock(&cifs_tcp_ses_lock); 757 - cifs_dbg(FYI, "Can not process oplock break for non-existent connection\n"); 758 - return false; 757 + cifs_dbg(FYI, "No file id matched, oplock break ignored\n"); 758 + return true; 759 759 } 760 760 761 761 void
+20 -7
fs/cifs/smb2ops.c
··· 2038 2038 { 2039 2039 int rc; 2040 2040 unsigned int ret_data_len; 2041 + struct inode *inode; 2041 2042 struct duplicate_extents_to_file dup_ext_buf; 2042 2043 struct cifs_tcon *tcon = tlink_tcon(trgtfile->tlink); 2043 2044 ··· 2055 2054 cifs_dbg(FYI, "Duplicate extents: src off %lld dst off %lld len %lld\n", 2056 2055 src_off, dest_off, len); 2057 2056 2058 - rc = smb2_set_file_size(xid, tcon, trgtfile, dest_off + len, false); 2059 - if (rc) 2060 - goto duplicate_extents_out; 2057 + inode = d_inode(trgtfile->dentry); 2058 + if (inode->i_size < dest_off + len) { 2059 + rc = smb2_set_file_size(xid, tcon, trgtfile, dest_off + len, false); 2060 + if (rc) 2061 + goto duplicate_extents_out; 2061 2062 2063 + /* 2064 + * Although also could set plausible allocation size (i_blocks) 2065 + * here in addition to setting the file size, in reflink 2066 + * it is likely that the target file is sparse. Its allocation 2067 + * size will be queried on next revalidate, but it is important 2068 + * to make sure that file's cached size is updated immediately 2069 + */ 2070 + cifs_setsize(inode, dest_off + len); 2071 + } 2062 2072 rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid, 2063 2073 trgtfile->fid.volatile_fid, 2064 2074 FSCTL_DUPLICATE_EXTENTS_TO_FILE, ··· 4170 4158 if (ses->Suid == ses_id) { 4171 4159 ses_enc_key = enc ? ses->smb3encryptionkey : 4172 4160 ses->smb3decryptionkey; 4173 - memcpy(key, ses_enc_key, SMB3_SIGN_KEY_SIZE); 4161 + memcpy(key, ses_enc_key, SMB3_ENC_DEC_KEY_SIZE); 4174 4162 spin_unlock(&cifs_tcp_ses_lock); 4175 4163 return 0; 4176 4164 } ··· 4197 4185 int rc = 0; 4198 4186 struct scatterlist *sg; 4199 4187 u8 sign[SMB2_SIGNATURE_SIZE] = {}; 4200 - u8 key[SMB3_SIGN_KEY_SIZE]; 4188 + u8 key[SMB3_ENC_DEC_KEY_SIZE]; 4201 4189 struct aead_request *req; 4202 4190 char *iv; 4203 4191 unsigned int iv_len; ··· 4221 4209 tfm = enc ? server->secmech.ccmaesencrypt : 4222 4210 server->secmech.ccmaesdecrypt; 4223 4211 4224 - if (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM) 4212 + if ((server->cipher_type == SMB2_ENCRYPTION_AES256_CCM) || 4213 + (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM)) 4225 4214 rc = crypto_aead_setkey(tfm, key, SMB3_GCM256_CRYPTKEY_SIZE); 4226 4215 else 4227 - rc = crypto_aead_setkey(tfm, key, SMB3_SIGN_KEY_SIZE); 4216 + rc = crypto_aead_setkey(tfm, key, SMB3_GCM128_CRYPTKEY_SIZE); 4228 4217 4229 4218 if (rc) { 4230 4219 cifs_server_dbg(VFS, "%s: Failed to set aead key %d\n", __func__, rc);
+28 -9
fs/cifs/smb2transport.c
··· 298 298 { 299 299 unsigned char zero = 0x0; 300 300 __u8 i[4] = {0, 0, 0, 1}; 301 - __u8 L[4] = {0, 0, 0, 128}; 301 + __u8 L128[4] = {0, 0, 0, 128}; 302 + __u8 L256[4] = {0, 0, 1, 0}; 302 303 int rc = 0; 303 304 unsigned char prfhash[SMB2_HMACSHA256_SIZE]; 304 305 unsigned char *hashptr = prfhash; ··· 355 354 goto smb3signkey_ret; 356 355 } 357 356 358 - rc = crypto_shash_update(&server->secmech.sdeschmacsha256->shash, 359 - L, 4); 357 + if ((server->cipher_type == SMB2_ENCRYPTION_AES256_CCM) || 358 + (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM)) { 359 + rc = crypto_shash_update(&server->secmech.sdeschmacsha256->shash, 360 + L256, 4); 361 + } else { 362 + rc = crypto_shash_update(&server->secmech.sdeschmacsha256->shash, 363 + L128, 4); 364 + } 360 365 if (rc) { 361 366 cifs_server_dbg(VFS, "%s: Could not update with L\n", __func__); 362 367 goto smb3signkey_ret; ··· 397 390 const struct derivation_triplet *ptriplet) 398 391 { 399 392 int rc; 393 + #ifdef CONFIG_CIFS_DEBUG_DUMP_KEYS 394 + struct TCP_Server_Info *server = ses->server; 395 + #endif 400 396 401 397 /* 402 398 * All channels use the same encryption/decryption keys but ··· 432 422 rc = generate_key(ses, ptriplet->encryption.label, 433 423 ptriplet->encryption.context, 434 424 ses->smb3encryptionkey, 435 - SMB3_SIGN_KEY_SIZE); 425 + SMB3_ENC_DEC_KEY_SIZE); 436 426 rc = generate_key(ses, ptriplet->decryption.label, 437 427 ptriplet->decryption.context, 438 428 ses->smb3decryptionkey, 439 - SMB3_SIGN_KEY_SIZE); 429 + SMB3_ENC_DEC_KEY_SIZE); 440 430 if (rc) 441 431 return rc; 442 432 } ··· 452 442 */ 453 443 cifs_dbg(VFS, "Session Id %*ph\n", (int)sizeof(ses->Suid), 454 444 &ses->Suid); 445 + cifs_dbg(VFS, "Cipher type %d\n", server->cipher_type); 455 446 cifs_dbg(VFS, "Session Key %*ph\n", 456 447 SMB2_NTLMV2_SESSKEY_SIZE, ses->auth_key.response); 457 448 cifs_dbg(VFS, "Signing Key %*ph\n", 458 449 SMB3_SIGN_KEY_SIZE, ses->smb3signingkey); 459 - cifs_dbg(VFS, "ServerIn Key %*ph\n", 460 - SMB3_SIGN_KEY_SIZE, ses->smb3encryptionkey); 461 - cifs_dbg(VFS, "ServerOut Key %*ph\n", 462 - SMB3_SIGN_KEY_SIZE, ses->smb3decryptionkey); 450 + if ((server->cipher_type == SMB2_ENCRYPTION_AES256_CCM) || 451 + (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM)) { 452 + cifs_dbg(VFS, "ServerIn Key %*ph\n", 453 + SMB3_GCM256_CRYPTKEY_SIZE, ses->smb3encryptionkey); 454 + cifs_dbg(VFS, "ServerOut Key %*ph\n", 455 + SMB3_GCM256_CRYPTKEY_SIZE, ses->smb3decryptionkey); 456 + } else { 457 + cifs_dbg(VFS, "ServerIn Key %*ph\n", 458 + SMB3_GCM128_CRYPTKEY_SIZE, ses->smb3encryptionkey); 459 + cifs_dbg(VFS, "ServerOut Key %*ph\n", 460 + SMB3_GCM128_CRYPTKEY_SIZE, ses->smb3decryptionkey); 461 + } 463 462 #endif 464 463 return rc; 465 464 }
+17 -4
fs/file.c
··· 629 629 } 630 630 EXPORT_SYMBOL(close_fd); /* for ksys_close() */ 631 631 632 + /** 633 + * last_fd - return last valid index into fd table 634 + * @cur_fds: files struct 635 + * 636 + * Context: Either rcu read lock or files_lock must be held. 637 + * 638 + * Returns: Last valid index into fdtable. 639 + */ 640 + static inline unsigned last_fd(struct fdtable *fdt) 641 + { 642 + return fdt->max_fds - 1; 643 + } 644 + 632 645 static inline void __range_cloexec(struct files_struct *cur_fds, 633 646 unsigned int fd, unsigned int max_fd) 634 647 { 635 648 struct fdtable *fdt; 636 649 637 - if (fd > max_fd) 638 - return; 639 - 650 + /* make sure we're using the correct maximum value */ 640 651 spin_lock(&cur_fds->file_lock); 641 652 fdt = files_fdtable(cur_fds); 642 - bitmap_set(fdt->close_on_exec, fd, max_fd - fd + 1); 653 + max_fd = min(last_fd(fdt), max_fd); 654 + if (fd <= max_fd) 655 + bitmap_set(fdt->close_on_exec, fd, max_fd - fd + 1); 643 656 spin_unlock(&cur_fds->file_lock); 644 657 } 645 658
+9 -5
fs/gfs2/super.c
··· 162 162 int error; 163 163 164 164 error = init_threads(sdp); 165 - if (error) 165 + if (error) { 166 + gfs2_withdraw_delayed(sdp); 166 167 return error; 168 + } 167 169 168 170 j_gl->gl_ops->go_inval(j_gl, DIO_METADATA); 169 171 if (gfs2_withdrawn(sdp)) { ··· 752 750 static int gfs2_freeze(struct super_block *sb) 753 751 { 754 752 struct gfs2_sbd *sdp = sb->s_fs_info; 755 - int error = 0; 753 + int error; 756 754 757 755 mutex_lock(&sdp->sd_freeze_mutex); 758 - if (atomic_read(&sdp->sd_freeze_state) != SFS_UNFROZEN) 756 + if (atomic_read(&sdp->sd_freeze_state) != SFS_UNFROZEN) { 757 + error = -EBUSY; 759 758 goto out; 759 + } 760 760 761 761 for (;;) { 762 762 if (gfs2_withdrawn(sdp)) { ··· 799 795 struct gfs2_sbd *sdp = sb->s_fs_info; 800 796 801 797 mutex_lock(&sdp->sd_freeze_mutex); 802 - if (atomic_read(&sdp->sd_freeze_state) != SFS_FROZEN || 798 + if (atomic_read(&sdp->sd_freeze_state) != SFS_FROZEN || 803 799 !gfs2_holder_initialized(&sdp->sd_freeze_gh)) { 804 800 mutex_unlock(&sdp->sd_freeze_mutex); 805 - return 0; 801 + return -EINVAL; 806 802 } 807 803 808 804 gfs2_freeze_unlock(&sdp->sd_freeze_gh);
+3 -4
fs/hostfs/hostfs_kern.c
··· 144 144 char *name, *resolved, *end; 145 145 int n; 146 146 147 - name = __getname(); 147 + name = kmalloc(PATH_MAX, GFP_KERNEL); 148 148 if (!name) { 149 149 n = -ENOMEM; 150 150 goto out_free; ··· 173 173 goto out_free; 174 174 } 175 175 176 - __putname(name); 177 - kfree(link); 176 + kfree(name); 178 177 return resolved; 179 178 180 179 out_free: 181 - __putname(name); 180 + kfree(name); 182 181 return ERR_PTR(n); 183 182 } 184 183
+27 -13
fs/io-wq.c
··· 16 16 #include <linux/rculist_nulls.h> 17 17 #include <linux/cpu.h> 18 18 #include <linux/tracehook.h> 19 - #include <linux/freezer.h> 20 19 21 20 #include "../kernel/sched/sched.h" 22 21 #include "io-wq.h" ··· 387 388 388 389 static bool io_flush_signals(void) 389 390 { 390 - if (unlikely(test_tsk_thread_flag(current, TIF_NOTIFY_SIGNAL))) { 391 + if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL))) { 391 392 __set_current_state(TASK_RUNNING); 392 - if (current->task_works) 393 - task_work_run(); 394 - clear_tsk_thread_flag(current, TIF_NOTIFY_SIGNAL); 393 + tracehook_notify_signal(); 395 394 return true; 396 395 } 397 396 return false; ··· 415 418 { 416 419 struct io_wqe *wqe = worker->wqe; 417 420 struct io_wq *wq = wqe->wq; 421 + bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state); 418 422 419 423 do { 420 424 struct io_wq_work *work; ··· 445 447 unsigned int hash = io_get_work_hash(work); 446 448 447 449 next_hashed = wq_next_work(work); 450 + 451 + if (unlikely(do_kill) && (work->flags & IO_WQ_WORK_UNBOUND)) 452 + work->flags |= IO_WQ_WORK_CANCEL; 448 453 wq->do_work(work); 449 454 io_assign_current_work(worker, NULL); 450 455 ··· 488 487 worker->flags |= (IO_WORKER_F_UP | IO_WORKER_F_RUNNING); 489 488 io_wqe_inc_running(worker); 490 489 491 - sprintf(buf, "iou-wrk-%d", wq->task_pid); 490 + snprintf(buf, sizeof(buf), "iou-wrk-%d", wq->task_pid); 492 491 set_task_comm(current, buf); 493 492 494 493 while (!test_bit(IO_WQ_BIT_EXIT, &wq->state)) { ··· 506 505 if (io_flush_signals()) 507 506 continue; 508 507 ret = schedule_timeout(WORKER_IDLE_TIMEOUT); 509 - if (try_to_freeze() || ret) 510 - continue; 511 - if (fatal_signal_pending(current)) 508 + if (signal_pending(current)) { 509 + struct ksignal ksig; 510 + 511 + if (!get_signal(&ksig)) 512 + continue; 512 513 break; 514 + } 515 + if (ret) 516 + continue; 513 517 /* timed out, exit unless we're the fixed worker */ 514 518 if (test_bit(IO_WQ_BIT_EXIT, &wq->state) || 515 519 !(worker->flags & IO_WORKER_F_FIXED)) ··· 715 709 char buf[TASK_COMM_LEN]; 716 710 int node; 717 711 718 - sprintf(buf, "iou-mgr-%d", wq->task_pid); 712 + snprintf(buf, sizeof(buf), "iou-mgr-%d", wq->task_pid); 719 713 set_task_comm(current, buf); 720 714 721 715 do { 722 716 set_current_state(TASK_INTERRUPTIBLE); 723 717 io_wq_check_workers(wq); 724 718 schedule_timeout(HZ); 725 - try_to_freeze(); 726 - if (fatal_signal_pending(current)) 719 + if (signal_pending(current)) { 720 + struct ksignal ksig; 721 + 722 + if (!get_signal(&ksig)) 723 + continue; 727 724 set_bit(IO_WQ_BIT_EXIT, &wq->state); 725 + } 728 726 } while (!test_bit(IO_WQ_BIT_EXIT, &wq->state)); 729 727 730 728 io_wq_check_workers(wq); ··· 1075 1065 1076 1066 for_each_node(node) { 1077 1067 struct io_wqe *wqe = wq->wqes[node]; 1078 - WARN_ON_ONCE(!wq_list_empty(&wqe->work_list)); 1068 + struct io_cb_cancel_data match = { 1069 + .fn = io_wq_work_match_all, 1070 + .cancel_all = true, 1071 + }; 1072 + io_wqe_cancel_pending_work(wqe, &match); 1079 1073 kfree(wqe); 1080 1074 } 1081 1075 io_wq_put_hash(wq->hash);
+94 -60
fs/io_uring.c
··· 78 78 #include <linux/task_work.h> 79 79 #include <linux/pagemap.h> 80 80 #include <linux/io_uring.h> 81 - #include <linux/freezer.h> 82 81 83 82 #define CREATE_TRACE_POINTS 84 83 #include <trace/events/io_uring.h> ··· 697 698 REQ_F_NO_FILE_TABLE_BIT, 698 699 REQ_F_LTIMEOUT_ACTIVE_BIT, 699 700 REQ_F_COMPLETE_INLINE_BIT, 701 + REQ_F_REISSUE_BIT, 700 702 701 703 /* not a real bit, just to check we're not overflowing the space */ 702 704 __REQ_F_LAST_BIT, ··· 741 741 REQ_F_LTIMEOUT_ACTIVE = BIT(REQ_F_LTIMEOUT_ACTIVE_BIT), 742 742 /* completion is deferred through io_comp_state */ 743 743 REQ_F_COMPLETE_INLINE = BIT(REQ_F_COMPLETE_INLINE_BIT), 744 + /* caller should reissue async */ 745 + REQ_F_REISSUE = BIT(REQ_F_REISSUE_BIT), 744 746 }; 745 747 746 748 struct async_poll { ··· 1097 1095 io_for_each_link(req, head) { 1098 1096 if (req->flags & REQ_F_INFLIGHT) 1099 1097 return true; 1100 - if (req->task->files == files) 1101 - return true; 1102 1098 } 1103 1099 return false; 1104 1100 } ··· 1216 1216 if (req->flags & REQ_F_ISREG) { 1217 1217 if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL)) 1218 1218 io_wq_hash_work(&req->work, file_inode(req->file)); 1219 - } else { 1219 + } else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) { 1220 1220 if (def->unbound_nonreg_file) 1221 1221 req->work.flags |= IO_WQ_WORK_UNBOUND; 1222 1222 } ··· 1239 1239 BUG_ON(!tctx); 1240 1240 BUG_ON(!tctx->io_wq); 1241 1241 1242 - trace_io_uring_queue_async_work(ctx, io_wq_is_hashed(&req->work), req, 1243 - &req->work, req->flags); 1244 1242 /* init ->work of the whole link before punting */ 1245 1243 io_prep_async_link(req); 1244 + trace_io_uring_queue_async_work(ctx, io_wq_is_hashed(&req->work), req, 1245 + &req->work, req->flags); 1246 1246 io_wq_enqueue(tctx->io_wq, &req->work); 1247 1247 if (link) 1248 1248 io_queue_linked_timeout(link); 1249 1249 } 1250 1250 1251 - static void io_kill_timeout(struct io_kiocb *req) 1251 + static void io_kill_timeout(struct io_kiocb *req, int status) 1252 1252 { 1253 1253 struct io_timeout_data *io = req->async_data; 1254 1254 int ret; ··· 1258 1258 atomic_set(&req->ctx->cq_timeouts, 1259 1259 atomic_read(&req->ctx->cq_timeouts) + 1); 1260 1260 list_del_init(&req->timeout.list); 1261 - io_cqring_fill_event(req, 0); 1261 + io_cqring_fill_event(req, status); 1262 1262 io_put_req_deferred(req, 1); 1263 1263 } 1264 - } 1265 - 1266 - /* 1267 - * Returns true if we found and killed one or more timeouts 1268 - */ 1269 - static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk, 1270 - struct files_struct *files) 1271 - { 1272 - struct io_kiocb *req, *tmp; 1273 - int canceled = 0; 1274 - 1275 - spin_lock_irq(&ctx->completion_lock); 1276 - list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) { 1277 - if (io_match_task(req, tsk, files)) { 1278 - io_kill_timeout(req); 1279 - canceled++; 1280 - } 1281 - } 1282 - spin_unlock_irq(&ctx->completion_lock); 1283 - return canceled != 0; 1284 1264 } 1285 1265 1286 1266 static void __io_queue_deferred(struct io_ring_ctx *ctx) ··· 1307 1327 break; 1308 1328 1309 1329 list_del_init(&req->timeout.list); 1310 - io_kill_timeout(req); 1330 + io_kill_timeout(req, 0); 1311 1331 } while (!list_empty(&ctx->timeout_list)); 1312 1332 1313 1333 ctx->cq_last_tm_flush = seq; ··· 2479 2499 return false; 2480 2500 return true; 2481 2501 } 2502 + #else 2503 + static bool io_rw_should_reissue(struct io_kiocb *req) 2504 + { 2505 + return false; 2506 + } 2482 2507 #endif 2483 2508 2484 2509 static bool io_rw_reissue(struct io_kiocb *req) ··· 2509 2524 { 2510 2525 int cflags = 0; 2511 2526 2512 - if ((res == -EAGAIN || res == -EOPNOTSUPP) && io_rw_reissue(req)) 2513 - return; 2514 - if (res != req->result) 2515 - req_set_fail_links(req); 2516 - 2517 2527 if (req->rw.kiocb.ki_flags & IOCB_WRITE) 2518 2528 kiocb_end_write(req); 2529 + if ((res == -EAGAIN || res == -EOPNOTSUPP) && io_rw_should_reissue(req)) { 2530 + req->flags |= REQ_F_REISSUE; 2531 + return; 2532 + } 2533 + if (res != req->result) 2534 + req_set_fail_links(req); 2519 2535 if (req->flags & REQ_F_BUFFER_SELECTED) 2520 2536 cflags = io_put_rw_kbuf(req); 2521 2537 __io_req_complete(req, issue_flags, res, cflags); ··· 2762 2776 { 2763 2777 struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb); 2764 2778 struct io_async_rw *io = req->async_data; 2779 + bool check_reissue = kiocb->ki_complete == io_complete_rw; 2765 2780 2766 2781 /* add previously done IO, if any */ 2767 2782 if (io && io->bytes_done > 0) { ··· 2778 2791 __io_complete_rw(req, ret, 0, issue_flags); 2779 2792 else 2780 2793 io_rw_done(kiocb, ret); 2794 + 2795 + if (check_reissue && req->flags & REQ_F_REISSUE) { 2796 + req->flags &= ~REQ_F_REISSUE; 2797 + if (!io_rw_reissue(req)) { 2798 + int cflags = 0; 2799 + 2800 + req_set_fail_links(req); 2801 + if (req->flags & REQ_F_BUFFER_SELECTED) 2802 + cflags = io_put_rw_kbuf(req); 2803 + __io_req_complete(req, issue_flags, ret, cflags); 2804 + } 2805 + } 2781 2806 } 2782 2807 2783 2808 static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter) ··· 3306 3307 3307 3308 ret = io_iter_do_read(req, iter); 3308 3309 3309 - if (ret == -EIOCBQUEUED) { 3310 - if (req->async_data) 3311 - iov_iter_revert(iter, io_size - iov_iter_count(iter)); 3312 - goto out_free; 3313 - } else if (ret == -EAGAIN) { 3310 + if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) { 3311 + req->flags &= ~REQ_F_REISSUE; 3312 3312 /* IOPOLL retry should happen for io-wq threads */ 3313 3314 if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL)) 3314 3316 goto done; ··· 3317 3321 /* some cases will consume bytes even on error returns */ 3318 3322 iov_iter_revert(iter, io_size - iov_iter_count(iter)); 3319 3323 ret = 0; 3324 + } else if (ret == -EIOCBQUEUED) { 3325 + goto out_free; 3320 3326 } else if (ret <= 0 || ret == io_size || !force_nonblock || 3321 3327 (req->flags & REQ_F_NOWAIT) || !(req->flags & REQ_F_ISREG)) { 3322 3328 /* read all, failed, already did sync or don't want to retry */ ··· 3431 3433 else 3432 3434 ret2 = -EINVAL; 3433 3435 3436 + if (req->flags & REQ_F_REISSUE) { 3437 + req->flags &= ~REQ_F_REISSUE; 3438 + ret2 = -EAGAIN; 3439 + } 3440 + 3434 3441 /* 3435 3442 * Raw bdev writes will return -EOPNOTSUPP for IOCB_NOWAIT. Just 3436 3443 * retry them without IOCB_NOWAIT. ··· 3445 3442 /* no retry on NONBLOCK nor RWF_NOWAIT */ 3446 3443 if (ret2 == -EAGAIN && (req->flags & REQ_F_NOWAIT)) 3447 3444 goto done; 3448 - if (ret2 == -EIOCBQUEUED && req->async_data) 3449 - iov_iter_revert(iter, io_size - iov_iter_count(iter)); 3450 3445 if (!force_nonblock || ret2 != -EAGAIN) { 3451 3446 /* IOPOLL retry should happen for io-wq threads */ 3452 3447 if ((req->ctx->flags & IORING_SETUP_IOPOLL) && ret2 == -EAGAIN) ··· 3979 3978 static int io_provide_buffers_prep(struct io_kiocb *req, 3980 3979 const struct io_uring_sqe *sqe) 3981 3980 { 3981 + unsigned long size; 3982 3982 struct io_provide_buf *p = &req->pbuf; 3983 3983 u64 tmp; 3984 3984 ··· 3993 3991 p->addr = READ_ONCE(sqe->addr); 3994 3992 p->len = READ_ONCE(sqe->len); 3995 3993 3996 - if (!access_ok(u64_to_user_ptr(p->addr), (p->len * p->nbufs))) 3994 + size = (unsigned long)p->len * p->nbufs; 3995 + if (!access_ok(u64_to_user_ptr(p->addr), size)) 3997 3996 return -EFAULT; 3998 3997 3999 3998 p->bgid = READ_ONCE(sqe->buf_group); ··· 4823 4820 ret = -ENOMEM; 4824 4821 goto out; 4825 4822 } 4826 - io = req->async_data; 4827 4823 memcpy(req->async_data, &__io, sizeof(__io)); 4828 4824 return -EAGAIN; 4829 4825 } ··· 5585 5583 5586 5584 data->mode = io_translate_timeout_mode(flags); 5587 5585 hrtimer_init(&data->timer, CLOCK_MONOTONIC, data->mode); 5588 - io_req_track_inflight(req); 5586 + if (is_timeout_link) 5587 + io_req_track_inflight(req); 5589 5588 return 0; 5590 5589 } ··· 6482 6479 ret = io_init_req(ctx, req, sqe); 6483 6480 if (unlikely(ret)) { 6484 6481 fail_req: 6485 - io_put_req(req); 6486 - io_req_complete(req, ret); 6487 6482 if (link->head) { 6488 6483 /* fail even hard links since we don't submit */ 6489 6484 link->head->flags |= REQ_F_FAIL_LINK; ··· 6489 6488 io_req_complete(link->head, -ECANCELED); 6490 6489 link->head = NULL; 6491 6490 } 6491 + io_put_req(req); 6492 + io_req_complete(req, ret); 6492 6493 return ret; 6493 6494 } 6494 6495 ret = io_req_prep(req, sqe); ··· 6743 6740 char buf[TASK_COMM_LEN]; 6744 6741 DEFINE_WAIT(wait); 6745 6742 6746 - sprintf(buf, "iou-sqp-%d", sqd->task_pid); 6743 + snprintf(buf, sizeof(buf), "iou-sqp-%d", sqd->task_pid); 6747 6744 set_task_comm(current, buf); 6748 6745 current->pf_io_worker = NULL; 6749 6746 ··· 6758 6755 int ret; 6759 6756 bool cap_entries, sqt_spin, needs_sched; 6760 6757 6761 - if (test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state)) { 6758 + if (test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state) || 6759 + signal_pending(current)) { 6760 + bool did_sig = false; 6761 + 6762 6762 mutex_unlock(&sqd->lock); 6763 + if (signal_pending(current)) { 6764 + struct ksignal ksig; 6765 + 6766 + did_sig = get_signal(&ksig); 6767 + } 6763 6768 cond_resched(); 6764 6769 mutex_lock(&sqd->lock); 6770 + if (did_sig) 6771 + break; 6765 6772 io_run_task_work(); 6766 6773 io_run_task_work_head(&sqd->park_task_work); 6767 6774 timeout = jiffies + sqd->sq_thread_idle; 6768 6775 continue; 6769 6776 } 6770 - if (fatal_signal_pending(current)) 6771 - break; 6772 6777 sqt_spin = false; 6773 6778 cap_entries = !list_is_singular(&sqd->ctx_list); 6774 6779 list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) { ··· 6819 6808 6820 6809 mutex_unlock(&sqd->lock); 6821 6810 schedule(); 6822 - try_to_freeze(); 6823 6811 mutex_lock(&sqd->lock); 6824 6812 list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) 6825 6813 io_ring_clear_wakeup_flag(ctx); ··· 6883 6873 return 1; 6884 6874 if (!signal_pending(current)) 6885 6875 return 0; 6886 - if (test_tsk_thread_flag(current, TIF_NOTIFY_SIGNAL)) 6876 + if (test_thread_flag(TIF_NOTIFY_SIGNAL)) 6887 6877 return -ERESTARTSYS; 6888 6878 return -EINTR; 6889 6879 } ··· 8573 8563 struct io_tctx_node *node; 8574 8564 int ret; 8566 + /* prevent SQPOLL from submitting new requests */ 8567 + if (ctx->sq_data) { 8568 + io_sq_thread_park(ctx->sq_data); 8569 + list_del_init(&ctx->sqd_list); 8570 + io_sqd_update_thread_idle(ctx->sq_data); 8571 + io_sq_thread_unpark(ctx->sq_data); 8572 + } 8573 + 8576 8574 /* 8577 8575 * If we're doing polled IO and end up having requests being 8578 8576 * submitted async (out-of-line), then completions can come in while ··· 8617 8599 io_ring_ctx_free(ctx); 8618 8600 } 8602 + /* Returns true if we found and killed one or more timeouts */ 8603 + static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk, 8604 + struct files_struct *files) 8605 + { 8606 + struct io_kiocb *req, *tmp; 8607 + int canceled = 0; 8608 + 8609 + spin_lock_irq(&ctx->completion_lock); 8610 + list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) { 8611 + if (io_match_task(req, tsk, files)) { 8612 + io_kill_timeout(req, -ECANCELED); 8613 + canceled++; 8614 + } 8615 + } 8616 + if (canceled != 0) 8617 + io_commit_cqring(ctx); 8618 + spin_unlock_irq(&ctx->completion_lock); 8619 + if (canceled != 0) 8620 + io_cqring_ev_posted(ctx); 8621 + return canceled != 0; 8622 + } 8623 + 8620 8624 static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx) 8621 8625 { 8622 8626 unsigned long index; ··· 8653 8613 xa_for_each(&ctx->personalities, index, creds) 8654 8614 io_unregister_personality(ctx, index); 8655 8615 mutex_unlock(&ctx->uring_lock); 8656 - 8657 - /* prevent SQPOLL from submitting new requests */ 8658 - if (ctx->sq_data) { 8659 - io_sq_thread_park(ctx->sq_data); 8660 - list_del_init(&ctx->sqd_list); 8661 - io_sqd_update_thread_idle(ctx->sq_data); 8662 - io_sq_thread_unpark(ctx->sq_data); 8663 - } 8664 8616 8665 8617 io_kill_timeouts(ctx, NULL, NULL); 8666 8618 io_poll_remove_all(ctx, NULL, NULL); ··· 9030 8998 9031 8999 /* make sure overflow events are dropped */ 9032 9000 atomic_inc(&tctx->in_idle); 9001 + __io_uring_files_cancel(NULL); 9002 + 9033 9003 do { 9034 9004 /* read completions before cancelations */ 9035 9005 inflight = tctx_inflight(tctx);
+8 -6
fs/namei.c
··· 579 579 p->stack = p->internal; 580 580 p->dfd = dfd; 581 581 p->name = name; 582 + p->path.mnt = NULL; 583 + p->path.dentry = NULL; 582 584 p->total_link_count = old ? old->total_link_count : 0; 583 585 p->saved = old; 584 586 current->nameidata = p; ··· 654 652 rcu_read_unlock(); 655 653 } 656 654 nd->depth = 0; 655 + nd->path.mnt = NULL; 656 + nd->path.dentry = NULL; 657 657 } 658 658 659 659 /* path_put is needed afterwards regardless of success or failure */ ··· 2326 2322 } 2327 2323 2328 2324 nd->root.mnt = NULL; 2329 - nd->path.mnt = NULL; 2330 - nd->path.dentry = NULL; 2331 2325 2332 2326 /* Absolute pathname -- fetch the root (LOOKUP_IN_ROOT uses nd->dfd). */ 2333 2327 if (*s == '/' && !(flags & LOOKUP_IN_ROOT)) { ··· 2421 2419 while (!(err = link_path_walk(s, nd)) && 2422 2420 (s = lookup_last(nd)) != NULL) 2423 2421 ; 2422 + if (!err && unlikely(nd->flags & LOOKUP_MOUNTPOINT)) { 2423 + err = handle_lookup_down(nd); 2424 + nd->flags &= ~LOOKUP_JUMPED; // no d_weak_revalidate(), please... 2425 + } 2424 2426 if (!err) 2425 2427 err = complete_walk(nd); 2426 2428 2427 2429 if (!err && nd->flags & LOOKUP_DIRECTORY) 2428 2430 if (!d_can_lookup(nd->path.dentry)) 2429 2431 err = -ENOTDIR; 2430 - if (!err && unlikely(nd->flags & LOOKUP_MOUNTPOINT)) { 2431 - err = handle_lookup_down(nd); 2432 - nd->flags &= ~LOOKUP_JUMPED; // no d_weak_revalidate(), please... 2433 - } 2434 2432 if (!err) { 2435 2433 *path = nd->path; 2436 2434 nd->path.mnt = NULL;
+1 -1
fs/reiserfs/xattr.h
··· 44 44 45 45 static inline int reiserfs_xattrs_initialized(struct super_block *sb) 46 46 { 47 - return REISERFS_SB(sb)->priv_root != NULL; 47 + return REISERFS_SB(sb)->priv_root && REISERFS_SB(sb)->xattr_root; 48 48 } 49 49 50 50 #define xattr_size(size) ((size) + sizeof(struct reiserfs_xattr_header))
+1
include/acpi/acpi_bus.h
··· 233 233 234 234 struct acpi_device_pnp { 235 235 acpi_bus_id bus_id; /* Object name */ 236 + int instance_no; /* Instance number of this object */ 236 237 struct acpi_pnp_type type; /* ID type */ 237 238 acpi_bus_address bus_address; /* _ADR */ 238 239 char *unique_id; /* _UID */
+8 -1
include/linux/acpi.h
··· 222 222 void __acpi_unmap_table(void __iomem *map, unsigned long size); 223 223 int early_acpi_boot_init(void); 224 224 int acpi_boot_init (void); 225 + void acpi_boot_table_prepare (void); 225 226 void acpi_boot_table_init (void); 226 227 int acpi_mps_check (void); 227 228 int acpi_numa_init (void); 228 229 230 + int acpi_locate_initial_tables (void); 231 + void acpi_reserve_initial_tables (void); 232 + void acpi_table_init_complete (void); 229 233 int acpi_table_init (void); 230 234 int acpi_table_parse(char *id, acpi_tbl_table_handler handler); 231 235 int __init acpi_table_parse_entries(char *id, unsigned long table_size, ··· 818 814 return 0; 819 815 } 820 816 817 + static inline void acpi_boot_table_prepare(void) 818 + { 819 + } 820 + 821 821 static inline void acpi_boot_table_init(void) 822 822 { 823 - return; 824 823 } 825 824 826 825 static inline int acpi_mps_check(void)
-2
include/linux/avf/virtchnl.h
··· 480 480 u16 vsi_id; 481 481 u16 key_len; 482 482 u8 key[1]; /* RSS hash key, packed bytes */ 483 - u8 pad[1]; 484 483 }; 485 484 486 485 VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key); ··· 488 489 u16 vsi_id; 489 490 u16 lut_entries; 490 491 u8 lut[1]; /* RSS lookup table */ 491 - u8 pad[1]; 492 492 }; 493 493 494 494 VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
-2
include/linux/blkdev.h
··· 85 85 #define RQF_ELVPRIV ((__force req_flags_t)(1 << 12)) 86 86 /* account into disk and partition IO statistics */ 87 87 #define RQF_IO_STAT ((__force req_flags_t)(1 << 13)) 88 - /* request came from our alloc pool */ 89 - #define RQF_ALLOCED ((__force req_flags_t)(1 << 14)) 90 88 /* runtime pm request */ 91 89 #define RQF_PM ((__force req_flags_t)(1 << 15)) 92 90 /* on IO scheduler merge hash */
+2
include/linux/bpf.h
··· 40 40 struct bpf_local_storage_map; 41 41 struct kobject; 42 42 struct mem_cgroup; 43 + struct module; 43 44 struct bpf_func_state; 44 45 45 46 extern struct idr btf_idr; ··· 646 645 /* Executable image of trampoline */ 647 646 struct bpf_tramp_image *cur_image; 648 647 u64 selector; 648 + struct module *mod; 649 649 }; 650 650 651 651 struct bpf_attach_target_info {
+14 -1
include/linux/device-mapper.h
··· 253 253 #define dm_target_passes_integrity(type) ((type)->features & DM_TARGET_PASSES_INTEGRITY) 254 254 255 255 /* 256 - * Indicates that a target supports host-managed zoned block devices. 256 + * Indicates support for zoned block devices: 257 + * - DM_TARGET_ZONED_HM: the target also supports host-managed zoned 258 + * block devices but does not support combining different zoned models. 259 + * - DM_TARGET_MIXED_ZONED_MODEL: the target supports combining multiple 260 + * devices with different zoned models. 257 261 */ 258 262 #ifdef CONFIG_BLK_DEV_ZONED 259 263 #define DM_TARGET_ZONED_HM 0x00000040 ··· 278 274 */ 279 275 #define DM_TARGET_PASSES_CRYPTO 0x00000100 280 276 #define dm_target_passes_crypto(type) ((type)->features & DM_TARGET_PASSES_CRYPTO) 277 + 278 + #ifdef CONFIG_BLK_DEV_ZONED 279 + #define DM_TARGET_MIXED_ZONED_MODEL 0x00000200 280 + #define dm_target_supports_mixed_zoned_model(type) \ 281 + ((type)->features & DM_TARGET_MIXED_ZONED_MODEL) 282 + #else 283 + #define DM_TARGET_MIXED_ZONED_MODEL 0x00000000 284 + #define dm_target_supports_mixed_zoned_model(type) (false) 285 + #endif 281 286 282 287 struct dm_target { 283 288 struct dm_table *table;
+17 -6
include/linux/ethtool.h
··· 87 87 int ethtool_op_get_ts_info(struct net_device *dev, struct ethtool_ts_info *eti); 88 88 89 89 90 - /** 91 - * struct ethtool_link_ext_state_info - link extended state and substate. 92 - */ 90 + /* Link extended state and substate. */ 93 91 struct ethtool_link_ext_state_info { 94 92 enum ethtool_link_ext_state link_ext_state; 95 93 union { ··· 127 129 __ETHTOOL_DECLARE_LINK_MODE_MASK(lp_advertising); 128 130 } link_modes; 129 131 u32 lanes; 130 - enum ethtool_link_mode_bit_indices link_mode; 131 132 }; 132 133 133 134 /** ··· 289 292 * do not attach ext_substate attribute to netlink message). If link_ext_state 290 293 * and link_ext_substate are unknown, return -ENODATA. If not implemented, 291 294 * link_ext_state and link_ext_substate will not be sent to userspace. 295 + * @get_eeprom_len: Read range of EEPROM addresses for validation of 296 + * @get_eeprom and @set_eeprom requests. 297 + * Returns 0 if device does not support EEPROM access. 292 298 * @get_eeprom: Read data from the device EEPROM. 293 299 * Should fill in the magic field. Don't need to check len for zero 294 300 * or wraparound. Fill in the data argument with the eeprom values ··· 384 384 * @get_module_eeprom: Get the eeprom information from the plug-in module 385 385 * @get_eee: Get Energy-Efficient (EEE) supported and status. 386 386 * @set_eee: Set EEE status (enable/disable) as well as LPI timers. 387 + * @get_tunable: Read the value of a driver / device tunable. 388 + * @set_tunable: Set the value of a driver / device tunable. 387 389 * @get_per_queue_coalesce: Get interrupt coalescing parameters per queue. 388 390 * It must check that the given queue number is valid. If neither a RX nor 389 391 * a TX queue has this number, return -EINVAL. If only a RX queue or a TX ··· 551 549 * @get_sset_count: Get number of strings that @get_strings will write. 
552 550 * @get_strings: Return a set of strings that describe the requested objects 553 551 * @get_stats: Return extended statistics about the PHY device. 554 - * @start_cable_test - Start a cable test 555 - * @start_cable_test_tdr - Start a Time Domain Reflectometry cable test 552 + * @start_cable_test: Start a cable test 553 + * @start_cable_test_tdr: Start a Time Domain Reflectometry cable test 556 554 * 557 555 * All operations are optional (i.e. the function pointer may be set to %NULL) 558 556 * and callers must take this into account. Callers must hold the RTNL lock. ··· 574 572 * @ops: Ethtool PHY operations to set 575 573 */ 576 574 void ethtool_set_ethtool_phy_ops(const struct ethtool_phy_ops *ops); 575 + 576 + /** 577 + * ethtool_params_from_link_mode - Derive link parameters from a given link mode 578 + * @link_ksettings: Link parameters to be derived from the link mode 579 + * @link_mode: Link mode 580 + */ 581 + void 582 + ethtool_params_from_link_mode(struct ethtool_link_ksettings *link_ksettings, 583 + enum ethtool_link_mode_bit_indices link_mode); 577 584 578 585 /** 579 586 * ethtool_sprintf - Write formatted string to ethtool string data
+23
include/linux/extcon.h
··· 271 271 struct extcon_dev *edev, unsigned int id, 272 272 struct notifier_block *nb) { } 273 273 274 + static inline int extcon_register_notifier_all(struct extcon_dev *edev, 275 + struct notifier_block *nb) 276 + { 277 + return 0; 278 + } 279 + 280 + static inline int extcon_unregister_notifier_all(struct extcon_dev *edev, 281 + struct notifier_block *nb) 282 + { 283 + return 0; 284 + } 285 + 286 + static inline int devm_extcon_register_notifier_all(struct device *dev, 287 + struct extcon_dev *edev, 288 + struct notifier_block *nb) 289 + { 290 + return 0; 291 + } 292 + 293 + static inline void devm_extcon_unregister_notifier_all(struct device *dev, 294 + struct extcon_dev *edev, 295 + struct notifier_block *nb) { } 296 + 274 297 static inline struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name) 275 298 { 276 299 return ERR_PTR(-ENODEV);
+1 -1
include/linux/firmware/intel/stratix10-svc-client.h
··· 56 56 * COMMAND_RECONFIG_FLAG_PARTIAL: 57 57 * Set to FPGA configuration type (full or partial). 58 58 */ 59 - #define COMMAND_RECONFIG_FLAG_PARTIAL 1 59 + #define COMMAND_RECONFIG_FLAG_PARTIAL 0 60 60 61 61 /* 62 62 * Timeout settings for service clients:
+8 -1
include/linux/host1x.h
··· 320 320 int host1x_device_init(struct host1x_device *device); 321 321 int host1x_device_exit(struct host1x_device *device); 322 322 323 - int host1x_client_register(struct host1x_client *client); 323 + int __host1x_client_register(struct host1x_client *client, 324 + struct lock_class_key *key); 325 + #define host1x_client_register(class) \ 326 + ({ \ 327 + static struct lock_class_key __key; \ 328 + __host1x_client_register(class, &__key); \ 329 + }) 330 + 324 331 int host1x_client_unregister(struct host1x_client *client); 325 332 326 333 int host1x_client_suspend(struct host1x_client *client);
+6 -4
include/linux/mlx5/mlx5_ifc.h
··· 437 437 u8 reserved_at_60[0x18]; 438 438 u8 log_max_ft_num[0x8]; 439 439 440 - u8 reserved_at_80[0x18]; 440 + u8 reserved_at_80[0x10]; 441 + u8 log_max_flow_counter[0x8]; 441 442 u8 log_max_destination[0x8]; 442 443 443 - u8 log_max_flow_counter[0x8]; 444 - u8 reserved_at_a8[0x10]; 444 + u8 reserved_at_a0[0x18]; 445 445 u8 log_max_flow[0x8]; 446 446 447 447 u8 reserved_at_c0[0x40]; ··· 8847 8847 8848 8848 u8 fec_override_admin_100g_2x[0x10]; 8849 8849 u8 fec_override_admin_50g_1x[0x10]; 8850 + 8851 + u8 reserved_at_140[0x140]; 8850 8852 }; 8851 8853 8852 8854 struct mlx5_ifc_ppcnt_reg_bits { ··· 10217 10215 10218 10216 struct mlx5_ifc_bufferx_reg_bits buffer[10]; 10219 10217 10220 - u8 reserved_at_2e0[0x40]; 10218 + u8 reserved_at_2e0[0x80]; 10221 10219 }; 10222 10220 10223 10221 struct mlx5_ifc_qtct_reg_bits {
+1 -1
include/linux/mutex.h
··· 185 185 # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock) 186 186 # define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(lock) 187 187 # define mutex_lock_nest_lock(lock, nest_lock) mutex_lock(lock) 188 - # define mutex_lock_io_nested(lock, subclass) mutex_lock(lock) 188 + # define mutex_lock_io_nested(lock, subclass) mutex_lock_io(lock) 189 189 #endif 190 190 191 191 /*
-2
include/linux/qcom-geni-se.h
··· 460 460 int geni_icc_enable(struct geni_se *se); 461 461 462 462 int geni_icc_disable(struct geni_se *se); 463 - 464 - void geni_remove_earlycon_icc_vote(void); 465 463 #endif 466 464 #endif
-1
include/linux/skmsg.h
··· 403 403 static inline void sk_psock_restore_proto(struct sock *sk, 404 404 struct sk_psock *psock) 405 405 { 406 - sk->sk_prot->unhash = psock->saved_unhash; 407 406 if (psock->psock_update_sk_prot) 408 407 psock->psock_update_sk_prot(sk, true); 409 408 }
+11 -5
include/linux/virtio_net.h
··· 62 62 return -EINVAL; 63 63 } 64 64 65 + skb_reset_mac_header(skb); 66 + 65 67 if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) { 66 - u16 start = __virtio16_to_cpu(little_endian, hdr->csum_start); 67 - u16 off = __virtio16_to_cpu(little_endian, hdr->csum_offset); 68 + u32 start = __virtio16_to_cpu(little_endian, hdr->csum_start); 69 + u32 off = __virtio16_to_cpu(little_endian, hdr->csum_offset); 70 + u32 needed = start + max_t(u32, thlen, off + sizeof(__sum16)); 71 + 72 + if (!pskb_may_pull(skb, needed)) 73 + return -EINVAL; 68 74 69 75 if (!skb_partial_csum_set(skb, start, off)) 70 76 return -EINVAL; 71 77 72 78 p_off = skb_transport_offset(skb) + thlen; 73 - if (p_off > skb_headlen(skb)) 79 + if (!pskb_may_pull(skb, p_off)) 74 80 return -EINVAL; 75 81 } else { 76 82 /* gso packets without NEEDS_CSUM do not set transport_offset. ··· 106 100 } 107 101 108 102 p_off = keys.control.thoff + thlen; 109 - if (p_off > skb_headlen(skb) || 103 + if (!pskb_may_pull(skb, p_off) || 110 104 keys.basic.ip_proto != ip_proto) 111 105 return -EINVAL; 112 106 113 107 skb_set_transport_header(skb, keys.control.thoff); 114 108 } else if (gso_type) { 115 109 p_off = thlen; 116 - if (p_off > skb_headlen(skb)) 110 + if (!pskb_may_pull(skb, p_off)) 117 111 return -EINVAL; 118 112 } 119 113 }
+3 -1
include/linux/xarray.h
··· 229 229 * 230 230 * This structure is used either directly or via the XA_LIMIT() macro 231 231 * to communicate the range of IDs that are valid for allocation. 232 - * Two common ranges are predefined for you: 232 + * Three common ranges are predefined for you: 233 233 * * xa_limit_32b - [0 - UINT_MAX] 234 234 * * xa_limit_31b - [0 - INT_MAX] 235 + * * xa_limit_16b - [0 - USHRT_MAX] 235 236 */ 236 237 struct xa_limit { 237 238 u32 max; ··· 243 242 244 243 #define xa_limit_32b XA_LIMIT(0, UINT_MAX) 245 244 #define xa_limit_31b XA_LIMIT(0, INT_MAX) 245 + #define xa_limit_16b XA_LIMIT(0, USHRT_MAX) 246 246 247 247 typedef unsigned __bitwise xa_mark_t; 248 248 #define XA_MARK_0 ((__force xa_mark_t)0U)
+4 -8
include/net/act_api.h
··· 170 170 void tcf_idr_cleanup(struct tc_action_net *tn, u32 index); 171 171 int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index, 172 172 struct tc_action **a, int bind); 173 - int __tcf_idr_release(struct tc_action *a, bool bind, bool strict); 174 - 175 - static inline int tcf_idr_release(struct tc_action *a, bool bind) 176 - { 177 - return __tcf_idr_release(a, bind, false); 178 - } 173 + int tcf_idr_release(struct tc_action *a, bool bind); 179 174 180 175 int tcf_register_action(struct tc_action_ops *a, struct pernet_operations *ops); 181 176 int tcf_unregister_action(struct tc_action_ops *a, ··· 180 185 int nr_actions, struct tcf_result *res); 181 186 int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, 182 187 struct nlattr *est, char *name, int ovr, int bind, 183 - struct tc_action *actions[], size_t *attr_size, 188 + struct tc_action *actions[], int init_res[], size_t *attr_size, 184 189 bool rtnl_held, struct netlink_ext_ack *extack); 185 190 struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla, 186 191 bool rtnl_held, ··· 188 193 struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, 189 194 struct nlattr *nla, struct nlattr *est, 190 195 char *name, int ovr, int bind, 191 - struct tc_action_ops *ops, bool rtnl_held, 196 + struct tc_action_ops *a_o, int *init_res, 197 + bool rtnl_held, 192 198 struct netlink_ext_ack *extack); 193 199 int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[], int bind, 194 200 int ref, bool terse);
+3 -1
include/net/netns/xfrm.h
··· 72 72 #if IS_ENABLED(CONFIG_IPV6) 73 73 struct dst_ops xfrm6_dst_ops; 74 74 #endif 75 - spinlock_t xfrm_state_lock; 75 + spinlock_t xfrm_state_lock; 76 + seqcount_spinlock_t xfrm_state_hash_generation; 77 + 76 78 spinlock_t xfrm_policy_lock; 77 79 struct mutex xfrm_cfg_mutex; 78 80 };
+2 -2
include/net/red.h
··· 171 171 static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog, 172 172 u8 Scell_log, u8 *stab) 173 173 { 174 - if (fls(qth_min) + Wlog > 32) 174 + if (fls(qth_min) + Wlog >= 32) 175 175 return false; 176 - if (fls(qth_max) + Wlog > 32) 176 + if (fls(qth_max) + Wlog >= 32) 177 177 return false; 178 178 if (Scell_log >= 32) 179 179 return false;
+2 -2
include/net/rtnetlink.h
··· 147 147 int (*validate_link_af)(const struct net_device *dev, 148 148 const struct nlattr *attr); 149 149 int (*set_link_af)(struct net_device *dev, 150 - const struct nlattr *attr); 151 - 150 + const struct nlattr *attr, 151 + struct netlink_ext_ack *extack); 152 152 int (*fill_stats_af)(struct sk_buff *skb, 153 153 const struct net_device *dev); 154 154 size_t (*get_stats_af_size)(const struct net_device *dev);
+14 -1
include/net/sock.h
··· 934 934 WRITE_ONCE(sk->sk_ack_backlog, sk->sk_ack_backlog + 1); 935 935 } 936 936 937 + /* Note: If you think the test should be: 938 + * return READ_ONCE(sk->sk_ack_backlog) >= READ_ONCE(sk->sk_max_ack_backlog); 939 + * Then please take a look at commit 64a146513f8f ("[NET]: Revert incorrect accept queue backlog changes.") 940 + */ 937 941 static inline bool sk_acceptq_is_full(const struct sock *sk) 938 942 { 939 - return READ_ONCE(sk->sk_ack_backlog) >= READ_ONCE(sk->sk_max_ack_backlog); 943 + return READ_ONCE(sk->sk_ack_backlog) > READ_ONCE(sk->sk_max_ack_backlog); 940 944 } 941 945 942 946 /* ··· 2226 2222 skb->destructor = sock_rfree; 2227 2223 atomic_add(skb->truesize, &sk->sk_rmem_alloc); 2228 2224 sk_mem_charge(sk, skb->truesize); 2225 + } 2226 + 2227 + static inline void skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk) 2228 + { 2229 + if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) { 2230 + skb_orphan(skb); 2231 + skb->destructor = sock_efree; 2232 + skb->sk = sk; 2233 + } 2229 2234 } 2230 2235 2231 2236 void sk_reset_timer(struct sock *sk, struct timer_list *timer,
+2 -2
include/net/xfrm.h
··· 1097 1097 return __xfrm_policy_check(sk, ndir, skb, family); 1098 1098 1099 1099 return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) || 1100 - (skb_dst(skb)->flags & DST_NOPOLICY) || 1100 + (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || 1101 1101 __xfrm_policy_check(sk, ndir, skb, family); 1102 1102 } 1103 1103 ··· 1557 1557 int xfrm_trans_queue(struct sk_buff *skb, 1558 1558 int (*finish)(struct net *, struct sock *, 1559 1559 struct sk_buff *)); 1560 - int xfrm_output_resume(struct sk_buff *skb, int err); 1560 + int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err); 1561 1561 int xfrm_output(struct sock *sk, struct sk_buff *skb); 1562 1562 1563 1563 #if IS_ENABLED(CONFIG_NET_PKTGEN)
+1
include/scsi/scsi_transport_iscsi.h
··· 193 193 ISCSI_CONN_UP = 0, 194 194 ISCSI_CONN_DOWN, 195 195 ISCSI_CONN_FAILED, 196 + ISCSI_CONN_BOUND, 196 197 }; 197 198 198 199 struct iscsi_cls_conn {
+2 -26
include/uapi/linux/blkpg.h
··· 2 2 #ifndef _UAPI__LINUX_BLKPG_H 3 3 #define _UAPI__LINUX_BLKPG_H 4 4 5 - /* 6 - * Partition table and disk geometry handling 7 - * 8 - * A single ioctl with lots of subfunctions: 9 - * 10 - * Device number stuff: 11 - * get_whole_disk() (given the device number of a partition, 12 - * find the device number of the encompassing disk) 13 - * get_all_partitions() (given the device number of a disk, return the 14 - * device numbers of all its known partitions) 15 - * 16 - * Partition stuff: 17 - * add_partition() 18 - * delete_partition() 19 - * test_partition_in_use() (also for test_disk_in_use) 20 - * 21 - * Geometry stuff: 22 - * get_geometry() 23 - * set_geometry() 24 - * get_bios_drivedata() 25 - * 26 - * For today, only the partition stuff - aeb, 990515 27 - */ 28 5 #include <linux/compiler.h> 29 6 #include <linux/ioctl.h> 30 7 ··· 29 52 long long start; /* starting offset in bytes */ 30 53 long long length; /* length in bytes */ 31 54 int pno; /* partition number */ 32 - char devname[BLKPG_DEVNAMELTH]; /* partition name, like sda5 or c0d1p2, 33 - to be used in kernel messages */ 34 - char volname[BLKPG_VOLNAMELTH]; /* volume label */ 55 + char devname[BLKPG_DEVNAMELTH]; /* unused / ignored */ 56 + char volname[BLKPG_VOLNAMELTH]; /* unused / ignored */ 35 57 }; 36 58 37 59 #endif /* _UAPI__LINUX_BLKPG_H */
+1 -1
include/uapi/linux/can.h
··· 113 113 */ 114 114 __u8 len; 115 115 __u8 can_dlc; /* deprecated */ 116 - }; 116 + } __attribute__((packed)); /* disable padding added in some ABIs */ 117 117 __u8 __pad; /* padding */ 118 118 __u8 __res0; /* reserved / padding */ 119 119 __u8 len8_dlc; /* optional DLC for 8 byte payload length (9 .. 15) */
+33 -21
include/uapi/linux/ethtool.h
··· 26 26 * have the same layout for 32-bit and 64-bit userland. 27 27 */ 28 28 29 + /* Note on reserved space. 30 + * Reserved fields must not be accessed directly by user space because 31 + * they may be replaced by a different field in the future. They must 32 + * be initialized to zero before making the request, e.g. via memset 33 + * of the entire structure or implicitly by not being set in a structure 34 + * initializer. 35 + */ 36 + 29 37 /** 30 38 * struct ethtool_cmd - DEPRECATED, link control and status 31 39 * This structure is DEPRECATED, please use struct ethtool_link_settings. ··· 75 67 * and other link features that the link partner advertised 76 68 * through autonegotiation; 0 if unknown or not applicable. 77 69 * Read-only. 70 + * @reserved: Reserved for future use; see the note on reserved space. 78 71 * 79 72 * The link speed in Mbps is split between @speed and @speed_hi. Use 80 73 * the ethtool_cmd_speed() and ethtool_cmd_speed_set() functions to ··· 164 155 * @bus_info: Device bus address. This should match the dev_name() 165 156 * string for the underlying bus device, if there is one. May be 166 157 * an empty string. 158 + * @reserved2: Reserved for future use; see the note on reserved space. 167 159 * @n_priv_flags: Number of flags valid for %ETHTOOL_GPFLAGS and 168 160 * %ETHTOOL_SPFLAGS commands; also the number of strings in the 169 161 * %ETH_SS_PRIV_FLAGS set ··· 366 356 * @tx_lpi_timer: Time in microseconds the interface delays prior to asserting 367 357 * its tx lpi (after reaching 'idle' state). Effective only when eee 368 358 * was negotiated and tx_lpi_enabled was set. 359 + * @reserved: Reserved for future use; see the note on reserved space. 
369 360 */ 370 361 struct ethtool_eee { 371 362 __u32 cmd; ··· 385 374 * @cmd: %ETHTOOL_GMODULEINFO 386 375 * @type: Standard the module information conforms to %ETH_MODULE_SFF_xxxx 387 376 * @eeprom_len: Length of the eeprom 377 + * @reserved: Reserved for future use; see the note on reserved space. 388 378 * 389 379 * This structure is used to return the information to 390 380 * properly size memory for a subsequent call to %ETHTOOL_GMODULEEEPROM. ··· 591 579 __u32 tx_pause; 592 580 }; 593 581 594 - /** 595 - * enum ethtool_link_ext_state - link extended state 596 - */ 582 + /* Link extended state */ 597 583 enum ethtool_link_ext_state { 598 584 ETHTOOL_LINK_EXT_STATE_AUTONEG, 599 585 ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE, ··· 605 595 ETHTOOL_LINK_EXT_STATE_OVERHEAT, 606 596 }; 607 597 608 - /** 609 - * enum ethtool_link_ext_substate_autoneg - more information in addition to 610 - * ETHTOOL_LINK_EXT_STATE_AUTONEG. 611 - */ 598 + /* More information in addition to ETHTOOL_LINK_EXT_STATE_AUTONEG. */ 612 599 enum ethtool_link_ext_substate_autoneg { 613 600 ETHTOOL_LINK_EXT_SUBSTATE_AN_NO_PARTNER_DETECTED = 1, 614 601 ETHTOOL_LINK_EXT_SUBSTATE_AN_ACK_NOT_RECEIVED, ··· 615 608 ETHTOOL_LINK_EXT_SUBSTATE_AN_NO_HCD, 616 609 }; 617 610 618 - /** 619 - * enum ethtool_link_ext_substate_link_training - more information in addition to 620 - * ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE. 611 + /* More information in addition to ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE. 621 612 */ 622 613 enum ethtool_link_ext_substate_link_training { 623 614 ETHTOOL_LINK_EXT_SUBSTATE_LT_KR_FRAME_LOCK_NOT_ACQUIRED = 1, ··· 624 619 ETHTOOL_LINK_EXT_SUBSTATE_LT_REMOTE_FAULT, 625 620 }; 626 621 627 - /** 628 - * enum ethtool_link_ext_substate_logical_mismatch - more information in addition 629 - * to ETHTOOL_LINK_EXT_STATE_LINK_LOGICAL_MISMATCH. 622 + /* More information in addition to ETHTOOL_LINK_EXT_STATE_LINK_LOGICAL_MISMATCH. 
630 623 */ 631 624 enum ethtool_link_ext_substate_link_logical_mismatch { 632 625 ETHTOOL_LINK_EXT_SUBSTATE_LLM_PCS_DID_NOT_ACQUIRE_BLOCK_LOCK = 1, ··· 634 631 ETHTOOL_LINK_EXT_SUBSTATE_LLM_RS_FEC_IS_NOT_LOCKED, 635 632 }; 636 633 637 - /** 638 - * enum ethtool_link_ext_substate_bad_signal_integrity - more information in 639 - * addition to ETHTOOL_LINK_EXT_STATE_BAD_SIGNAL_INTEGRITY. 634 + /* More information in addition to ETHTOOL_LINK_EXT_STATE_BAD_SIGNAL_INTEGRITY. 640 635 */ 641 636 enum ethtool_link_ext_substate_bad_signal_integrity { 642 637 ETHTOOL_LINK_EXT_SUBSTATE_BSI_LARGE_NUMBER_OF_PHYSICAL_ERRORS = 1, 643 638 ETHTOOL_LINK_EXT_SUBSTATE_BSI_UNSUPPORTED_RATE, 644 639 }; 645 640 646 - /** 647 - * enum ethtool_link_ext_substate_cable_issue - more information in 648 - * addition to ETHTOOL_LINK_EXT_STATE_CABLE_ISSUE. 649 - */ 641 + /* More information in addition to ETHTOOL_LINK_EXT_STATE_CABLE_ISSUE. */ 650 642 enum ethtool_link_ext_substate_cable_issue { 651 643 ETHTOOL_LINK_EXT_SUBSTATE_CI_UNSUPPORTED_CABLE = 1, 652 644 ETHTOOL_LINK_EXT_SUBSTATE_CI_CABLE_TEST_FAILURE, ··· 659 661 * now deprecated 660 662 * @ETH_SS_FEATURES: Device feature names 661 663 * @ETH_SS_RSS_HASH_FUNCS: RSS hush function names 664 + * @ETH_SS_TUNABLES: tunable names 662 665 * @ETH_SS_PHY_STATS: Statistic names, for use with %ETHTOOL_GPHYSTATS 663 666 * @ETH_SS_PHY_TUNABLES: PHY tunable names 664 667 * @ETH_SS_LINK_MODES: link mode names ··· 669 670 * @ETH_SS_TS_TX_TYPES: timestamping Tx types 670 671 * @ETH_SS_TS_RX_FILTERS: timestamping Rx filters 671 672 * @ETH_SS_UDP_TUNNEL_TYPES: UDP tunnel types 673 + * 674 + * @ETH_SS_COUNT: number of defined string sets 672 675 */ 673 676 enum ethtool_stringset { 674 677 ETH_SS_TEST = 0, ··· 716 715 /** 717 716 * struct ethtool_sset_info - string set information 718 717 * @cmd: Command number = %ETHTOOL_GSSET_INFO 718 + * @reserved: Reserved for future use; see the note on reserved space. 
719 719 * @sset_mask: On entry, a bitmask of string sets to query, with bits 720 720 * numbered according to &enum ethtool_stringset. On return, a 721 721 * bitmask of those string sets queried that are supported. ··· 761 759 * @flags: A bitmask of flags from &enum ethtool_test_flags. Some 762 760 * flags may be set by the user on entry; others may be set by 763 761 * the driver on return. 762 + * @reserved: Reserved for future use; see the note on reserved space. 764 763 * @len: On return, the number of test results 765 764 * @data: Array of test results 766 765 * ··· 962 959 * @vlan_etype: VLAN EtherType 963 960 * @vlan_tci: VLAN tag control information 964 961 * @data: user defined data 962 + * @padding: Reserved for future use; see the note on reserved space. 965 963 * 966 964 * Note, @vlan_etype, @vlan_tci, and @data are only valid if %FLOW_EXT 967 965 * is set in &struct ethtool_rx_flow_spec @flow_type. ··· 1138 1134 * hardware hash key. 1139 1135 * @hfunc: Defines the current RSS hash function used by HW (or to be set to). 1140 1136 * Valid values are one of the %ETH_RSS_HASH_*. 1141 - * @rsvd: Reserved for future extensions. 1137 + * @rsvd8: Reserved for future use; see the note on reserved space. 1138 + * @rsvd32: Reserved for future use; see the note on reserved space. 1142 1139 * @rss_config: RX ring/queue index for each hash value i.e., indirection table 1143 1140 * of @indir_size __u32 elements, followed by hash key of @key_size 1144 1141 * bytes. ··· 1307 1302 * @so_timestamping: bit mask of the sum of the supported SO_TIMESTAMPING flags 1308 1303 * @phc_index: device index of the associated PHC, or -1 if there is none 1309 1304 * @tx_types: bit mask of the supported hwtstamp_tx_types enumeration values 1305 + * @tx_reserved: Reserved for future use; see the note on reserved space. 1310 1306 * @rx_filters: bit mask of the supported hwtstamp_rx_filters enumeration values 1307 + * @rx_reserved: Reserved for future use; see the note on reserved space. 
1311 1308 * 1312 1309 * The bits in the 'tx_types' and 'rx_filters' fields correspond to 1313 1310 * the 'hwtstamp_tx_types' and 'hwtstamp_rx_filters' enumeration values, ··· 1988 1981 * autonegotiation; 0 if unknown or not applicable. Read-only. 1989 1982 * @transceiver: Used to distinguish different possible PHY types, 1990 1983 * reported consistently by PHYLIB. Read-only. 1984 + * @master_slave_cfg: Master/slave port mode. 1985 + * @master_slave_state: Master/slave port state. 1986 + * @reserved: Reserved for future use; see the note on reserved space. 1987 + * @reserved1: Reserved for future use; see the note on reserved space. 1988 + * @link_mode_masks: Variable length bitmaps. 1991 1989 * 1992 1990 * If autonegotiation is disabled, the speed and @duplex represent the 1993 1991 * fixed link mode and are writable if the driver supports multiple
+69 -13
include/uapi/linux/rfkill.h
··· 86 86 * @op: operation code 87 87 * @hard: hard state (0/1) 88 88 * @soft: soft state (0/1) 89 - * @hard_block_reasons: valid if hard is set. One or several reasons from 90 - * &enum rfkill_hard_block_reasons. 91 89 * 92 90 * Structure used for userspace communication on /dev/rfkill, 93 91 * used for events from the kernel and control to the kernel. ··· 96 98 __u8 op; 97 99 __u8 soft; 98 100 __u8 hard; 101 + } __attribute__((packed)); 102 + 103 + /** 104 + * struct rfkill_event_ext - events for userspace on /dev/rfkill 105 + * @idx: index of dev rfkill 106 + * @type: type of the rfkill struct 107 + * @op: operation code 108 + * @hard: hard state (0/1) 109 + * @soft: soft state (0/1) 110 + * @hard_block_reasons: valid if hard is set. One or several reasons from 111 + * &enum rfkill_hard_block_reasons. 112 + * 113 + * Structure used for userspace communication on /dev/rfkill, 114 + * used for events from the kernel and control to the kernel. 115 + * 116 + * See the extensibility docs below. 117 + */ 118 + struct rfkill_event_ext { 119 + __u32 idx; 120 + __u8 type; 121 + __u8 op; 122 + __u8 soft; 123 + __u8 hard; 124 + 125 + /* 126 + * older kernels will accept/send only up to this point, 127 + * and if extended further up to any chunk marked below 128 + */ 129 + 99 130 __u8 hard_block_reasons; 100 131 } __attribute__((packed)); 101 132 102 - /* 103 - * We are planning to be backward and forward compatible with changes 104 - * to the event struct, by adding new, optional, members at the end. 105 - * When reading an event (whether the kernel from userspace or vice 106 - * versa) we need to accept anything that's at least as large as the 107 - * version 1 event size, but might be able to accept other sizes in 108 - * the future. 133 + /** 134 + * DOC: Extensibility 109 135 * 110 - * One exception is the kernel -- we already have two event sizes in 111 - * that we've made the 'hard' member optional since our only option 112 - * is to ignore it anyway. 
136 + * Originally, we had planned to allow backward and forward compatible 137 + * changes by just adding fields at the end of the structure that are 138 + * then not reported on older kernels on read(), and not written to by 139 + * older kernels on write(), with the kernel reporting the size it did 140 + * accept as the result. 141 + * 142 + * This would have allowed userspace to detect on read() and write() 143 + * which kernel structure version it was dealing with, and if it was just 144 + * recompiled it would have gotten the new fields, but obviously not 145 + * accessed them, and things should've continued to work. 146 + * 147 + * Unfortunately, while actually exercising this mechanism to add the 148 + * hard block reasons field, we found that userspace (notably systemd) 149 + * did all kinds of fun things not in line with this scheme: 150 + * 151 + * 1. treat the (expected) short writes as an error; 152 + * 2. ask to read sizeof(struct rfkill_event) but then compare the 153 + * actual return value to RFKILL_EVENT_SIZE_V1 and treat any 154 + * mismatch as an error. 155 + * 156 + * As a consequence, just recompiling with a new struct version caused 157 + * things to no longer work correctly on old and new kernels. 158 + * 159 + * Hence, we've rolled back &struct rfkill_event to the original version 160 + * and added &struct rfkill_event_ext. This effectively reverts to the 161 + * old behaviour for all userspace, unless it explicitly opts in to the 162 + * rules outlined here by using the new &struct rfkill_event_ext. 163 + * 164 + * Userspace using &struct rfkill_event_ext must adhere to the following 165 + * rules: 166 + * 167 + * 1. accept short writes, optionally using them to detect that it's 168 + * running on an older kernel; 169 + * 2. accept short reads, knowing that this means it's running on an 170 + * older kernel; 171 + * 3. treat reads that are as long as requested as acceptable, not 172 + * checking against RFKILL_EVENT_SIZE_V1 or such. 
113 173 */ 114 - #define RFKILL_EVENT_SIZE_V1 8 174 + #define RFKILL_EVENT_SIZE_V1 sizeof(struct rfkill_event) 115 175 116 176 /* ioctl for turning off rfkill-input (if present) */ 117 177 #define RFKILL_IOC_MAGIC 'R'
+1 -1
kernel/bpf/disasm.c
··· 91 91 [BPF_ADD >> 4] = "add", 92 92 [BPF_AND >> 4] = "and", 93 93 [BPF_OR >> 4] = "or", 94 - [BPF_XOR >> 4] = "or", 94 + [BPF_XOR >> 4] = "xor", 95 95 }; 96 96 97 97 static const char *const bpf_ldst_string[] = {
+2 -2
kernel/bpf/inode.c
··· 543 543 return PTR_ERR(raw); 544 544 545 545 if (type == BPF_TYPE_PROG) 546 - ret = bpf_prog_new_fd(raw); 546 + ret = (f_flags != O_RDWR) ? -EINVAL : bpf_prog_new_fd(raw); 547 547 else if (type == BPF_TYPE_MAP) 548 548 ret = bpf_map_new_fd(raw, f_flags); 549 549 else if (type == BPF_TYPE_LINK) 550 - ret = bpf_link_new_fd(raw); 550 + ret = (f_flags != O_RDWR) ? -EINVAL : bpf_link_new_fd(raw); 551 551 else 552 552 return -ENOENT; 553 553
+10 -2
kernel/bpf/stackmap.c
··· 517 517 BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf, 518 518 u32, size, u64, flags) 519 519 { 520 - struct pt_regs *regs = task_pt_regs(task); 520 + struct pt_regs *regs; 521 + long res; 521 522 522 - return __bpf_get_stack(regs, task, NULL, buf, size, flags); 523 + if (!try_get_task_stack(task)) 524 + return -EFAULT; 525 + 526 + regs = task_pt_regs(task); 527 + res = __bpf_get_stack(regs, task, NULL, buf, size, flags); 528 + put_task_stack(task); 529 + 530 + return res; 523 531 } 524 532 525 533 BTF_ID_LIST_SINGLE(bpf_get_task_stack_btf_ids, struct, task_struct)
+30
kernel/bpf/trampoline.c
··· 9 9 #include <linux/btf.h> 10 10 #include <linux/rcupdate_trace.h> 11 11 #include <linux/rcupdate_wait.h> 12 + #include <linux/module.h> 12 13 13 14 /* dummy _ops. The verifier will operate on target program's ops. */ 14 15 const struct bpf_verifier_ops bpf_extension_verifier_ops = { ··· 88 87 return tr; 89 88 } 90 89 90 + static int bpf_trampoline_module_get(struct bpf_trampoline *tr) 91 + { 92 + struct module *mod; 93 + int err = 0; 94 + 95 + preempt_disable(); 96 + mod = __module_text_address((unsigned long) tr->func.addr); 97 + if (mod && !try_module_get(mod)) 98 + err = -ENOENT; 99 + preempt_enable(); 100 + tr->mod = mod; 101 + return err; 102 + } 103 + 104 + static void bpf_trampoline_module_put(struct bpf_trampoline *tr) 105 + { 106 + module_put(tr->mod); 107 + tr->mod = NULL; 108 + } 109 + 91 110 static int is_ftrace_location(void *ip) 92 111 { 93 112 long addr; ··· 129 108 ret = unregister_ftrace_direct((long)ip, (long)old_addr); 130 109 else 131 110 ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, old_addr, NULL); 111 + 112 + if (!ret) 113 + bpf_trampoline_module_put(tr); 132 114 return ret; 133 115 } 134 116 ··· 158 134 return ret; 159 135 tr->func.ftrace_managed = ret; 160 136 137 + if (bpf_trampoline_module_get(tr)) 138 + return -ENOENT; 139 + 161 140 if (tr->func.ftrace_managed) 162 141 ret = register_ftrace_direct((long)ip, (long)new_addr); 163 142 else 164 143 ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_addr); 144 + 145 + if (ret) 146 + bpf_trampoline_module_put(tr); 165 147 return ret; 166 148 } 167 149
+5
kernel/bpf/verifier.c
··· 12724 12724 u32 btf_id, member_idx; 12725 12725 const char *mname; 12726 12726 12727 + if (!prog->gpl_compatible) { 12728 + verbose(env, "struct ops programs must have a GPL compatible license\n"); 12729 + return -EINVAL; 12730 + } 12731 + 12727 12732 btf_id = prog->aux->attach_btf_id; 12728 12733 st_ops = bpf_struct_ops_find(btf_id); 12729 12734 if (!st_ops) {
+8 -8
kernel/fork.c
··· 1950 1950 p = dup_task_struct(current, node); 1951 1951 if (!p) 1952 1952 goto fork_out; 1953 - if (args->io_thread) 1953 + if (args->io_thread) { 1954 + /* 1955 + * Mark us an IO worker, and block any signal that isn't 1956 + * fatal or STOP 1957 + */ 1954 1958 p->flags |= PF_IO_WORKER; 1959 + siginitsetinv(&p->blocked, sigmask(SIGKILL)|sigmask(SIGSTOP)); 1960 + } 1955 1961 1956 1962 /* 1957 1963 * This _must_ happen before we call free_task(), i.e. before we jump ··· 2449 2443 .stack_size = (unsigned long)arg, 2450 2444 .io_thread = 1, 2451 2445 }; 2452 - struct task_struct *tsk; 2453 2446 2454 - tsk = copy_process(NULL, 0, node, &args); 2455 - if (!IS_ERR(tsk)) { 2456 - sigfillset(&tsk->blocked); 2457 - sigdelsetmask(&tsk->blocked, sigmask(SIGKILL)); 2458 - } 2459 - return tsk; 2447 + return copy_process(NULL, 0, node, &args); 2460 2448 } 2461 2449 2462 2450 /*
+1 -1
kernel/freezer.c
··· 134 134 return false; 135 135 } 136 136 137 - if (!(p->flags & (PF_KTHREAD | PF_IO_WORKER))) 137 + if (!(p->flags & PF_KTHREAD)) 138 138 fake_signal_wake_up(p); 139 139 else 140 140 wake_up_state(p, TASK_INTERRUPTIBLE);
+1 -1
kernel/power/energy_model.c
··· 98 98 99 99 return 0; 100 100 } 101 - core_initcall(em_debug_init); 101 + fs_initcall(em_debug_init); 102 102 #else /* CONFIG_DEBUG_FS */ 103 103 static void em_debug_create_pd(struct device *dev) {} 104 104 static void em_debug_remove_pd(struct device *dev) {}
+1 -1
kernel/ptrace.c
··· 375 375 audit_ptrace(task); 376 376 377 377 retval = -EPERM; 378 - if (unlikely(task->flags & (PF_KTHREAD | PF_IO_WORKER))) 378 + if (unlikely(task->flags & PF_KTHREAD)) 379 379 goto out; 380 380 if (same_thread_group(task, current)) 381 381 goto out;
+12 -8
kernel/signal.c
··· 91 91 return true; 92 92 93 93 /* Only allow kernel generated signals to this kthread */ 94 - if (unlikely((t->flags & (PF_KTHREAD | PF_IO_WORKER)) && 94 + if (unlikely((t->flags & PF_KTHREAD) && 95 95 (handler == SIG_KTHREAD_KERNEL) && !force)) 96 96 return true; 97 97 ··· 288 288 JOBCTL_STOP_SIGMASK | JOBCTL_TRAPPING)); 289 289 BUG_ON((mask & JOBCTL_TRAPPING) && !(mask & JOBCTL_PENDING_MASK)); 290 290 291 - if (unlikely(fatal_signal_pending(task) || 292 - (task->flags & (PF_EXITING | PF_IO_WORKER)))) 291 + if (unlikely(fatal_signal_pending(task) || (task->flags & PF_EXITING))) 293 292 return false; 294 293 295 294 if (mask & JOBCTL_STOP_SIGMASK) ··· 833 834 834 835 if (!valid_signal(sig)) 835 836 return -EINVAL; 836 - /* PF_IO_WORKER threads don't take any signals */ 837 - if (t->flags & PF_IO_WORKER) 838 - return -ESRCH; 839 837 840 838 if (!si_fromuser(info)) 841 839 return 0; ··· 1096 1100 /* 1097 1101 * Skip useless siginfo allocation for SIGKILL and kernel threads. 1098 1102 */ 1099 - if ((sig == SIGKILL) || (t->flags & (PF_KTHREAD | PF_IO_WORKER))) 1103 + if ((sig == SIGKILL) || (t->flags & PF_KTHREAD)) 1100 1104 goto out_set; 1101 1105 1102 1106 /* ··· 2768 2772 } 2769 2773 2770 2774 /* 2775 + * PF_IO_WORKER threads will catch and exit on fatal signals 2776 + * themselves. They have cleanup that must be performed, so 2777 + * we cannot call do_exit() on their behalf. 2778 + */ 2779 + if (current->flags & PF_IO_WORKER) 2780 + goto out; 2781 + 2782 + /* 2771 2783 * Death signals, no core dump. 2772 2784 */ 2773 2785 do_group_exit(ksig->info.si_signo); 2774 2786 /* NOTREACHED */ 2775 2787 } 2776 2788 spin_unlock_irq(&sighand->siglock); 2777 - 2789 + out: 2778 2790 ksig->sig = signr; 2779 2791 2780 2792 if (!(ksig->ka.sa.sa_flags & SA_EXPOSE_TAGBITS))
+6 -3
kernel/trace/ftrace.c
··· 3231 3231 pg = start_pg; 3232 3232 while (pg) { 3233 3233 order = get_count_order(pg->size / ENTRIES_PER_PAGE); 3234 - free_pages((unsigned long)pg->records, order); 3234 + if (order >= 0) 3235 + free_pages((unsigned long)pg->records, order); 3235 3236 start_pg = pg->next; 3236 3237 kfree(pg); 3237 3238 pg = start_pg; ··· 6452 6451 clear_mod_from_hashes(pg); 6453 6452 6454 6453 order = get_count_order(pg->size / ENTRIES_PER_PAGE); 6455 - free_pages((unsigned long)pg->records, order); 6454 + if (order >= 0) 6455 + free_pages((unsigned long)pg->records, order); 6456 6456 tmp_page = pg->next; 6457 6457 kfree(pg); 6458 6458 ftrace_number_of_pages -= 1 << order; ··· 6813 6811 if (!pg->index) { 6814 6812 *last_pg = pg->next; 6815 6813 order = get_count_order(pg->size / ENTRIES_PER_PAGE); 6816 - free_pages((unsigned long)pg->records, order); 6814 + if (order >= 0) 6815 + free_pages((unsigned long)pg->records, order); 6817 6816 ftrace_number_of_pages -= 1 << order; 6818 6817 ftrace_number_of_groups--; 6819 6818 kfree(pg);
+2 -1
kernel/trace/trace.c
··· 2984 2984 2985 2985 size = nr_entries * sizeof(unsigned long); 2986 2986 event = __trace_buffer_lock_reserve(buffer, TRACE_STACK, 2987 - sizeof(*entry) + size, trace_ctx); 2987 + (sizeof(*entry) - sizeof(entry->caller)) + size, 2988 + trace_ctx); 2988 2989 if (!event) 2989 2990 goto out; 2990 2991 entry = ring_buffer_event_data(event);
+3 -2
kernel/watchdog.c
··· 278 278 * update as well, the only side effect might be a cycle delay for 279 279 * the softlockup check. 280 280 */ 281 - for_each_cpu(cpu, &watchdog_allowed_mask) 281 + for_each_cpu(cpu, &watchdog_allowed_mask) { 282 282 per_cpu(watchdog_touch_ts, cpu) = SOFTLOCKUP_RESET; 283 - wq_watchdog_touch(-1); 283 + wq_watchdog_touch(cpu); 284 + } 284 285 } 285 286 286 287 void touch_softlockup_watchdog_sync(void)
+7 -12
kernel/workqueue.c
··· 1412 1412 */ 1413 1413 lockdep_assert_irqs_disabled(); 1414 1414 1415 - debug_work_activate(work); 1416 1415 1417 1416 /* if draining, only works from the same workqueue are allowed */ 1418 1417 if (unlikely(wq->flags & __WQ_DRAINING) && ··· 1493 1494 worklist = &pwq->delayed_works; 1494 1495 } 1495 1496 1497 + debug_work_activate(work); 1496 1498 insert_work(pwq, work, worklist, work_flags); 1497 1499 1498 1500 out: ··· 5787 5787 continue; 5788 5788 5789 5789 /* get the latest of pool and touched timestamps */ 5790 + if (pool->cpu >= 0) 5791 + touched = READ_ONCE(per_cpu(wq_watchdog_touched_cpu, pool->cpu)); 5792 + else 5793 + touched = READ_ONCE(wq_watchdog_touched); 5790 5794 pool_ts = READ_ONCE(pool->watchdog_ts); 5791 - touched = READ_ONCE(wq_watchdog_touched); 5792 5795 5793 5796 if (time_after(pool_ts, touched)) 5794 5797 ts = pool_ts; 5795 5798 else 5796 5799 ts = touched; 5797 - 5798 - if (pool->cpu >= 0) { 5799 - unsigned long cpu_touched = 5800 - READ_ONCE(per_cpu(wq_watchdog_touched_cpu, 5801 - pool->cpu)); 5802 - if (time_after(cpu_touched, ts)) 5803 - ts = cpu_touched; 5804 - } 5805 5800 5806 5801 /* did we stall? */ 5807 5802 if (time_after(jiffies, ts + thresh)) { ··· 5821 5826 { 5822 5827 if (cpu >= 0) 5823 5828 per_cpu(wq_watchdog_touched_cpu, cpu) = jiffies; 5824 - else 5825 - wq_watchdog_touched = jiffies; 5829 + 5830 + wq_watchdog_touched = jiffies; 5826 5831 } 5827 5832 5828 5833 static void wq_watchdog_set_thresh(unsigned long thresh)
+14 -12
lib/test_xarray.c
··· 1530 1530 1531 1531 #ifdef CONFIG_XARRAY_MULTI 1532 1532 static void check_split_1(struct xarray *xa, unsigned long index, 1533 - unsigned int order) 1533 + unsigned int order, unsigned int new_order) 1534 1534 { 1535 - XA_STATE(xas, xa, index); 1536 - void *entry; 1537 - unsigned int i = 0; 1535 + XA_STATE_ORDER(xas, xa, index, new_order); 1536 + unsigned int i; 1538 1537 1539 1538 xa_store_order(xa, index, order, xa, GFP_KERNEL); 1540 1539 1541 1540 xas_split_alloc(&xas, xa, order, GFP_KERNEL); 1542 1541 xas_lock(&xas); 1543 1542 xas_split(&xas, xa, order); 1543 + for (i = 0; i < (1 << order); i += (1 << new_order)) 1544 + __xa_store(xa, index + i, xa_mk_index(index + i), 0); 1544 1545 xas_unlock(&xas); 1545 1546 1546 - xa_for_each(xa, index, entry) { 1547 - XA_BUG_ON(xa, entry != xa); 1548 - i++; 1547 + for (i = 0; i < (1 << order); i++) { 1548 + unsigned int val = index + (i & ~((1 << new_order) - 1)); 1549 + XA_BUG_ON(xa, xa_load(xa, index + i) != xa_mk_index(val)); 1549 1550 } 1550 - XA_BUG_ON(xa, i != 1 << order); 1551 1551 1552 1552 xa_set_mark(xa, index, XA_MARK_0); 1553 1553 XA_BUG_ON(xa, !xa_get_mark(xa, index, XA_MARK_0)); ··· 1557 1557 1558 1558 static noinline void check_split(struct xarray *xa) 1559 1559 { 1560 - unsigned int order; 1560 + unsigned int order, new_order; 1561 1561 1562 1562 XA_BUG_ON(xa, !xa_empty(xa)); 1563 1563 1564 1564 for (order = 1; order < 2 * XA_CHUNK_SHIFT; order++) { 1565 - check_split_1(xa, 0, order); 1566 - check_split_1(xa, 1UL << order, order); 1567 - check_split_1(xa, 3UL << order, order); 1565 + for (new_order = 0; new_order < order; new_order++) { 1566 + check_split_1(xa, 0, order, new_order); 1567 + check_split_1(xa, 1UL << order, order, new_order); 1568 + check_split_1(xa, 3UL << order, order, new_order); 1569 + } 1568 1570 } 1569 1571 } 1570 1572 #else
+6 -5
lib/xarray.c
··· 987 987 * xas_split_alloc() - Allocate memory for splitting an entry. 988 988 * @xas: XArray operation state. 989 989 * @entry: New entry which will be stored in the array. 990 - * @order: New entry order. 990 + * @order: Current entry order. 991 991 * @gfp: Memory allocation flags. 992 992 * 993 993 * This function should be called before calling xas_split(). ··· 1011 1011 1012 1012 do { 1013 1013 unsigned int i; 1014 - void *sibling; 1014 + void *sibling = NULL; 1015 1015 struct xa_node *node; 1016 1016 1017 1017 node = kmem_cache_alloc(radix_tree_node_cachep, gfp); ··· 1021 1021 for (i = 0; i < XA_CHUNK_SIZE; i++) { 1022 1022 if ((i & mask) == 0) { 1023 1023 RCU_INIT_POINTER(node->slots[i], entry); 1024 - sibling = xa_mk_sibling(0); 1024 + sibling = xa_mk_sibling(i); 1025 1025 } else { 1026 1026 RCU_INIT_POINTER(node->slots[i], sibling); 1027 1027 } ··· 1041 1041 * xas_split() - Split a multi-index entry into smaller entries. 1042 1042 * @xas: XArray operation state. 1043 1043 * @entry: New entry to store in the array. 1044 - * @order: New entry order. 1044 + * @order: Current entry order. 1045 1045 * 1046 - * The value in the entry is copied to all the replacement entries. 1046 + * The size of the new entries is set in @xas. The value in @entry is 1047 + * copied to all the replacement entries. 1047 1048 * 1048 1049 * Context: Any context. The caller should hold the xa_lock. 1049 1050 */
+1 -1
mm/memory.c
··· 166 166 zero_pfn = page_to_pfn(ZERO_PAGE(0)); 167 167 return 0; 168 168 } 169 - core_initcall(init_zero_pfn); 169 + early_initcall(init_zero_pfn); 170 170 171 171 void mm_trace_rss_stat(struct mm_struct *mm, int member, long count) 172 172 {
+2
net/batman-adv/translation-table.c
··· 890 890 hlist_for_each_entry(vlan, &orig_node->vlan_list, list) { 891 891 tt_vlan->vid = htons(vlan->vid); 892 892 tt_vlan->crc = htonl(vlan->tt.crc); 893 + tt_vlan->reserved = 0; 893 894 894 895 tt_vlan++; 895 896 } ··· 974 973 975 974 tt_vlan->vid = htons(vlan->vid); 976 975 tt_vlan->crc = htonl(vlan->tt.crc); 976 + tt_vlan->reserved = 0; 977 977 978 978 tt_vlan++; 979 979 }
+6 -4
net/can/bcm.c
··· 86 86 MODULE_AUTHOR("Oliver Hartkopp <oliver.hartkopp@volkswagen.de>"); 87 87 MODULE_ALIAS("can-proto-2"); 88 88 89 + #define BCM_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex) 90 + 89 91 /* 90 92 * easy access to the first 64 bit of can(fd)_frame payload. cp->data is 91 93 * 64 bit aligned so the offset has to be multiples of 8 which is ensured ··· 1294 1292 /* no bound device as default => check msg_name */ 1295 1293 DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name); 1296 1294 1297 - if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex)) 1295 + if (msg->msg_namelen < BCM_MIN_NAMELEN) 1298 1296 return -EINVAL; 1299 1297 1300 1298 if (addr->can_family != AF_CAN) ··· 1536 1534 struct net *net = sock_net(sk); 1537 1535 int ret = 0; 1538 1536 1539 - if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex)) 1537 + if (len < BCM_MIN_NAMELEN) 1540 1538 return -EINVAL; 1541 1539 1542 1540 lock_sock(sk); ··· 1618 1616 sock_recv_ts_and_drops(msg, sk, skb); 1619 1617 1620 1618 if (msg->msg_name) { 1621 - __sockaddr_check_size(sizeof(struct sockaddr_can)); 1622 - msg->msg_namelen = sizeof(struct sockaddr_can); 1619 + __sockaddr_check_size(BCM_MIN_NAMELEN); 1620 + msg->msg_namelen = BCM_MIN_NAMELEN; 1623 1621 memcpy(msg->msg_name, skb->cb, msg->msg_namelen); 1624 1622 } 1625 1623
+7 -4
net/can/isotp.c
··· 77 77 MODULE_AUTHOR("Oliver Hartkopp <socketcan@hartkopp.net>"); 78 78 MODULE_ALIAS("can-proto-6"); 79 79 80 + #define ISOTP_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp) 81 + 80 82 #define SINGLE_MASK(id) (((id) & CAN_EFF_FLAG) ? \ 81 83 (CAN_EFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG) : \ 82 84 (CAN_SFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG)) ··· 988 986 sock_recv_timestamp(msg, sk, skb); 989 987 990 988 if (msg->msg_name) { 991 - msg->msg_namelen = sizeof(struct sockaddr_can); 989 + __sockaddr_check_size(ISOTP_MIN_NAMELEN); 990 + msg->msg_namelen = ISOTP_MIN_NAMELEN; 992 991 memcpy(msg->msg_name, skb->cb, msg->msg_namelen); 993 992 } 994 993 ··· 1059 1056 int notify_enetdown = 0; 1060 1057 int do_rx_reg = 1; 1061 1058 1062 - if (len < CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp)) 1059 + if (len < ISOTP_MIN_NAMELEN) 1063 1060 return -EINVAL; 1064 1061 1065 1062 /* do not register frame reception for functional addressing */ ··· 1155 1152 if (peer) 1156 1153 return -EOPNOTSUPP; 1157 1154 1158 - memset(addr, 0, sizeof(*addr)); 1155 + memset(addr, 0, ISOTP_MIN_NAMELEN); 1159 1156 addr->can_family = AF_CAN; 1160 1157 addr->can_ifindex = so->ifindex; 1161 1158 addr->can_addr.tp.rx_id = so->rxid; 1162 1159 addr->can_addr.tp.tx_id = so->txid; 1163 1160 1164 - return sizeof(*addr); 1161 + return ISOTP_MIN_NAMELEN; 1165 1162 } 1166 1163 1167 1164 static int isotp_setsockopt(struct socket *sock, int level, int optname,
+8 -6
net/can/raw.c
··· 60 60 MODULE_AUTHOR("Urs Thuermann <urs.thuermann@volkswagen.de>"); 61 61 MODULE_ALIAS("can-proto-1"); 62 62 63 + #define RAW_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex) 64 + 63 65 #define MASK_ALL 0 64 66 65 67 /* A raw socket has a list of can_filters attached to it, each receiving ··· 396 394 int err = 0; 397 395 int notify_enetdown = 0; 398 396 399 - if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex)) 397 + if (len < RAW_MIN_NAMELEN) 400 398 return -EINVAL; 401 399 if (addr->can_family != AF_CAN) 402 400 return -EINVAL; ··· 477 475 if (peer) 478 476 return -EOPNOTSUPP; 479 477 480 - memset(addr, 0, sizeof(*addr)); 478 + memset(addr, 0, RAW_MIN_NAMELEN); 481 479 addr->can_family = AF_CAN; 482 480 addr->can_ifindex = ro->ifindex; 483 481 484 - return sizeof(*addr); 482 + return RAW_MIN_NAMELEN; 485 483 } 486 484 487 485 static int raw_setsockopt(struct socket *sock, int level, int optname, ··· 741 739 if (msg->msg_name) { 742 740 DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name); 743 741 744 - if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex)) 742 + if (msg->msg_namelen < RAW_MIN_NAMELEN) 745 743 return -EINVAL; 746 744 747 745 if (addr->can_family != AF_CAN) ··· 834 832 sock_recv_ts_and_drops(msg, sk, skb); 835 833 836 834 if (msg->msg_name) { 837 - __sockaddr_check_size(sizeof(struct sockaddr_can)); 838 - msg->msg_namelen = sizeof(struct sockaddr_can); 835 + __sockaddr_check_size(RAW_MIN_NAMELEN); 836 + msg->msg_namelen = RAW_MIN_NAMELEN; 839 837 memcpy(msg->msg_name, skb->cb, msg->msg_namelen); 840 838 } 841 839
+2 -1
net/core/dev.c
··· 7041 7041 7042 7042 set_current_state(TASK_INTERRUPTIBLE); 7043 7043 7044 - while (!kthread_should_stop() && !napi_disable_pending(napi)) { 7044 + while (!kthread_should_stop()) { 7045 7045 /* Testing SCHED_THREADED bit here to make sure the current 7046 7046 * kthread owns this napi and could poll on this napi. 7047 7047 * Testing SCHED bit is not enough because SCHED bit might be ··· 7059 7059 set_current_state(TASK_INTERRUPTIBLE); 7060 7060 } 7061 7061 __set_current_state(TASK_RUNNING); 7062 + 7062 7063 return -1; 7063 7064 } 7064 7065
+1 -1
net/core/neighbour.c
··· 1379 1379 * we can reinject the packet there. 1380 1380 */ 1381 1381 n2 = NULL; 1382 - if (dst) { 1382 + if (dst && dst->obsolete != DST_OBSOLETE_DEAD) { 1383 1383 n2 = dst_neigh_lookup_skb(dst, skb); 1384 1384 if (n2) 1385 1385 n1 = n2;
+1 -1
net/core/rtnetlink.c
··· 2872 2872 2873 2873 BUG_ON(!(af_ops = rtnl_af_lookup(nla_type(af)))); 2874 2874 2875 - err = af_ops->set_link_af(dev, af); 2875 + err = af_ops->set_link_af(dev, af, extack); 2876 2876 if (err < 0) { 2877 2877 rcu_read_unlock(); 2878 2878 goto errout;
+5 -7
net/core/skmsg.c
··· 586 586 if (unlikely(!msg)) 587 587 return -EAGAIN; 588 588 sk_msg_init(msg); 589 + skb_set_owner_r(skb, sk); 589 590 return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg); 590 591 } 591 592 ··· 885 884 { 886 885 switch (verdict) { 887 886 case __SK_REDIRECT: 888 - skb_set_owner_r(skb, sk); 889 887 sk_psock_skb_redirect(skb); 890 888 break; 891 889 case __SK_PASS: ··· 902 902 rcu_read_lock(); 903 903 prog = READ_ONCE(psock->progs.stream_verdict); 904 904 if (likely(prog)) { 905 - /* We skip full set_owner_r here because if we do a SK_PASS 906 - * or SK_DROP we can skip skb memory accounting and use the 907 - * TLS context. 908 - */ 909 905 skb->sk = psock->sk; 910 906 skb_dst_drop(skb); 911 907 skb_bpf_redirect_clear(skb); ··· 991 995 kfree_skb(skb); 992 996 goto out; 993 997 } 994 - skb_set_owner_r(skb, sk); 995 998 prog = READ_ONCE(psock->progs.stream_verdict); 996 999 if (likely(prog)) { 1000 + skb->sk = sk; 997 1001 skb_dst_drop(skb); 998 1002 skb_bpf_redirect_clear(skb); 999 1003 ret = bpf_prog_run_pin_on_cpu(prog, skb); 1000 1004 ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb)); 1005 + skb->sk = NULL; 1001 1006 } 1002 1007 sk_psock_verdict_apply(psock, skb, ret); 1003 1008 out: ··· 1112 1115 kfree_skb(skb); 1113 1116 goto out; 1114 1117 } 1115 - skb_set_owner_r(skb, sk); 1116 1118 prog = READ_ONCE(psock->progs.stream_verdict); 1117 1119 if (!prog) 1118 1120 prog = READ_ONCE(psock->progs.skb_verdict); 1119 1121 if (likely(prog)) { 1122 + skb->sk = sk; 1120 1123 skb_dst_drop(skb); 1121 1124 skb_bpf_redirect_clear(skb); 1122 1125 ret = bpf_prog_run_pin_on_cpu(prog, skb); 1123 1126 ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb)); 1127 + skb->sk = NULL; 1124 1128 } 1125 1129 sk_psock_verdict_apply(psock, skb, ret); 1126 1130 out:
+3 -9
net/core/sock.c
··· 2132 2132 if (skb_is_tcp_pure_ack(skb)) 2133 2133 return; 2134 2134 2135 - if (can_skb_orphan_partial(skb)) { 2136 - struct sock *sk = skb->sk; 2137 - 2138 - if (refcount_inc_not_zero(&sk->sk_refcnt)) { 2139 - WARN_ON(refcount_sub_and_test(skb->truesize, &sk->sk_wmem_alloc)); 2140 - skb->destructor = sock_efree; 2141 - } 2142 - } else { 2135 + if (can_skb_orphan_partial(skb)) 2136 + skb_set_owner_sk_safe(skb, skb->sk); 2137 + else 2143 2138 skb_orphan(skb); 2144 - } 2145 2139 } 2146 2140 EXPORT_SYMBOL(skb_orphan_partial); 2147 2141
+2 -1
net/core/xdp.c
··· 350 350 /* mem->id is valid, checked in xdp_rxq_info_reg_mem_model() */ 351 351 xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params); 352 352 page = virt_to_head_page(data); 353 - napi_direct &= !xdp_return_frame_no_direct(); 353 + if (napi_direct && xdp_return_frame_no_direct()) 354 + napi_direct = false; 354 355 page_pool_put_full_page(xa->page_pool, page, napi_direct); 355 356 rcu_read_unlock(); 356 357 break;
+7 -1
net/dsa/dsa2.c
··· 795 795 796 796 list_for_each_entry(dp, &dst->ports, list) { 797 797 err = dsa_port_setup(dp); 798 - if (err) 798 + if (err) { 799 + dsa_port_devlink_teardown(dp); 800 + dp->type = DSA_PORT_TYPE_UNUSED; 801 + err = dsa_port_devlink_setup(dp); 802 + if (err) 803 + goto teardown; 799 804 continue; 805 + } 800 806 } 801 807 802 808 return 0;
+9 -6
net/dsa/switch.c
··· 107 107 bool unset_vlan_filtering = br_vlan_enabled(info->br); 108 108 struct dsa_switch_tree *dst = ds->dst; 109 109 struct netlink_ext_ack extack = {0}; 110 - int err, i; 110 + int err, port; 111 111 112 112 if (dst->index == info->tree_index && ds->index == info->sw_index && 113 113 ds->ops->port_bridge_join) ··· 124 124 * it. That is a good thing, because that lets us handle it and also 125 125 * handle the case where the switch's vlan_filtering setting is global 126 126 * (not per port). When that happens, the correct moment to trigger the 127 - * vlan_filtering callback is only when the last port left this bridge. 127 + * vlan_filtering callback is only when the last port leaves the last 128 + * VLAN-aware bridge. 128 129 */ 129 130 if (unset_vlan_filtering && ds->vlan_filtering_is_global) { 130 - for (i = 0; i < ds->num_ports; i++) { 131 - if (i == info->port) 132 - continue; 133 - if (dsa_to_port(ds, i)->bridge_dev == info->br) { 131 + for (port = 0; port < ds->num_ports; port++) { 132 + struct net_device *bridge_dev; 133 + 134 + bridge_dev = dsa_to_port(ds, port)->bridge_dev; 135 + 136 + if (bridge_dev && br_vlan_enabled(bridge_dev)) { 134 137 unset_vlan_filtering = false; 135 138 break; 136 139 }
+17
net/ethtool/common.c
··· 273 273 __DEFINE_LINK_MODE_PARAMS(10000, KR, Full), 274 274 [ETHTOOL_LINK_MODE_10000baseR_FEC_BIT] = { 275 275 .speed = SPEED_10000, 276 + .lanes = 1, 276 277 .duplex = DUPLEX_FULL, 277 278 }, 278 279 __DEFINE_LINK_MODE_PARAMS(20000, MLD2, Full), ··· 563 562 rtnl_unlock(); 564 563 } 565 564 EXPORT_SYMBOL_GPL(ethtool_set_ethtool_phy_ops); 565 + 566 + void 567 + ethtool_params_from_link_mode(struct ethtool_link_ksettings *link_ksettings, 568 + enum ethtool_link_mode_bit_indices link_mode) 569 + { 570 + const struct link_mode_info *link_info; 571 + 572 + if (WARN_ON_ONCE(link_mode >= __ETHTOOL_LINK_MODE_MASK_NBITS)) 573 + return; 574 + 575 + link_info = &link_mode_params[link_mode]; 576 + link_ksettings->base.speed = link_info->speed; 577 + link_ksettings->lanes = link_info->lanes; 578 + link_ksettings->base.duplex = link_info->duplex; 579 + } 580 + EXPORT_SYMBOL_GPL(ethtool_params_from_link_mode);
+2 -2
net/ethtool/eee.c
··· 169 169 ethnl_update_bool32(&eee.eee_enabled, tb[ETHTOOL_A_EEE_ENABLED], &mod); 170 170 ethnl_update_bool32(&eee.tx_lpi_enabled, 171 171 tb[ETHTOOL_A_EEE_TX_LPI_ENABLED], &mod); 172 - ethnl_update_bool32(&eee.tx_lpi_timer, tb[ETHTOOL_A_EEE_TX_LPI_TIMER], 173 - &mod); 172 + ethnl_update_u32(&eee.tx_lpi_timer, tb[ETHTOOL_A_EEE_TX_LPI_TIMER], 173 + &mod); 174 174 ret = 0; 175 175 if (!mod) 176 176 goto out_ops;
+1 -17
net/ethtool/ioctl.c
··· 426 426 int __ethtool_get_link_ksettings(struct net_device *dev, 427 427 struct ethtool_link_ksettings *link_ksettings) 428 428 { 429 - const struct link_mode_info *link_info; 430 - int err; 431 - 432 429 ASSERT_RTNL(); 433 430 434 431 if (!dev->ethtool_ops->get_link_ksettings) 435 432 return -EOPNOTSUPP; 436 433 437 434 memset(link_ksettings, 0, sizeof(*link_ksettings)); 438 - 439 - link_ksettings->link_mode = -1; 440 - err = dev->ethtool_ops->get_link_ksettings(dev, link_ksettings); 441 - if (err) 442 - return err; 443 - 444 - if (link_ksettings->link_mode != -1) { 445 - link_info = &link_mode_params[link_ksettings->link_mode]; 446 - link_ksettings->base.speed = link_info->speed; 447 - link_ksettings->lanes = link_info->lanes; 448 - link_ksettings->base.duplex = link_info->duplex; 449 - } 450 - 451 - return 0; 435 + return dev->ethtool_ops->get_link_ksettings(dev, link_ksettings); 452 436 } 453 437 EXPORT_SYMBOL(__ethtool_get_link_ksettings); 454 438
+1
net/hsr/hsr_device.c
··· 217 217 master = hsr_port_get_hsr(hsr, HSR_PT_MASTER); 218 218 if (master) { 219 219 skb->dev = master->dev; 220 + skb_reset_mac_header(skb); 220 221 hsr_forward_skb(skb, master); 221 222 } else { 222 223 atomic_long_inc(&dev->tx_dropped);
-6
net/hsr/hsr_forward.c
··· 555 555 { 556 556 struct hsr_frame_info frame; 557 557 558 - if (skb_mac_header(skb) != skb->data) { 559 - WARN_ONCE(1, "%s:%d: Malformed frame (port_src %s)\n", 560 - __FILE__, __LINE__, port->dev->name); 561 - goto out_drop; 562 - } 563 - 564 558 if (fill_frame_info(&frame, skb, port) < 0) 565 559 goto out_drop; 566 560
+4 -3
net/ieee802154/nl-mac.c
··· 551 551 desc->mode = nla_get_u8(info->attrs[IEEE802154_ATTR_LLSEC_KEY_MODE]); 552 552 553 553 if (desc->mode == IEEE802154_SCF_KEY_IMPLICIT) { 554 - if (!info->attrs[IEEE802154_ATTR_PAN_ID] && 555 - !(info->attrs[IEEE802154_ATTR_SHORT_ADDR] || 556 - info->attrs[IEEE802154_ATTR_HW_ADDR])) 554 + if (!info->attrs[IEEE802154_ATTR_PAN_ID]) 557 555 return -EINVAL; 558 556 559 557 desc->device_addr.pan_id = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_PAN_ID]); ··· 560 562 desc->device_addr.mode = IEEE802154_ADDR_SHORT; 561 563 desc->device_addr.short_addr = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_SHORT_ADDR]); 562 564 } else { 565 + if (!info->attrs[IEEE802154_ATTR_HW_ADDR]) 566 + return -EINVAL; 567 + 563 568 desc->device_addr.mode = IEEE802154_ADDR_LONG; 564 569 desc->device_addr.extended_addr = nla_get_hwaddr(info->attrs[IEEE802154_ATTR_HW_ADDR]); 565 570 }
+60 -8
net/ieee802154/nl802154.c
··· 820 820 goto nla_put_failure; 821 821 822 822 #ifdef CONFIG_IEEE802154_NL802154_EXPERIMENTAL 823 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 824 + goto out; 825 + 823 826 if (nl802154_get_llsec_params(msg, rdev, wpan_dev) < 0) 824 827 goto nla_put_failure; 828 + 829 + out: 825 830 #endif /* CONFIG_IEEE802154_NL802154_EXPERIMENTAL */ 826 831 827 832 genlmsg_end(msg, hdr); ··· 1389 1384 u32 changed = 0; 1390 1385 int ret; 1391 1386 1387 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1388 + return -EOPNOTSUPP; 1389 + 1392 1390 if (info->attrs[NL802154_ATTR_SEC_ENABLED]) { 1393 1391 u8 enabled; ··· 1498 1490 if (err) 1499 1491 return err; 1500 1492 1493 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) { 1494 + err = skb->len; 1495 + goto out_err; 1496 + } 1497 + 1501 1498 if (!wpan_dev->netdev) { 1502 1499 err = -EINVAL; 1503 1500 goto out_err; ··· 1557 1544 struct ieee802154_llsec_key_id id = { }; 1558 1545 u32 commands[NL802154_CMD_FRAME_NR_IDS / 32] = { }; 1559 1546 1560 - if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack)) 1547 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1548 + return -EOPNOTSUPP; 1549 + 1550 + if (!info->attrs[NL802154_ATTR_SEC_KEY] || 1551 + nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack)) 1561 1552 return -EINVAL; 1562 1553 1563 1554 if (!attrs[NL802154_KEY_ATTR_USAGE_FRAMES] || ··· 1609 1592 struct nlattr *attrs[NL802154_KEY_ATTR_MAX + 1]; 1610 1593 struct ieee802154_llsec_key_id id; 1611 1594 1612 - if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack)) 1595 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1596 + return -EOPNOTSUPP; 1597 + 1598 + if (!info->attrs[NL802154_ATTR_SEC_KEY] || 1599 + nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack)) 1613 1600 return -EINVAL; 1614 1601 1615 1602 if (ieee802154_llsec_parse_key_id(attrs[NL802154_KEY_ATTR_ID], &id) < 0) ··· 1676 1655 err = nl802154_prepare_wpan_dev_dump(skb, cb, &rdev, &wpan_dev); 1677 1656 if (err) 1678 1657 return err; 1658 + 1659 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) { 1660 + err = skb->len; 1661 + goto out_err; 1662 + } 1679 1663 1680 1664 if (!wpan_dev->netdev) { 1681 1665 err = -EINVAL; ··· 1768 1742 struct wpan_dev *wpan_dev = dev->ieee802154_ptr; 1769 1743 struct ieee802154_llsec_device dev_desc; 1770 1744 1745 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1746 + return -EOPNOTSUPP; 1747 + 1771 1748 if (ieee802154_llsec_parse_device(info->attrs[NL802154_ATTR_SEC_DEVICE], 1772 1749 &dev_desc) < 0) 1773 1750 return -EINVAL; ··· 1786 1757 struct nlattr *attrs[NL802154_DEV_ATTR_MAX + 1]; 1787 1758 __le64 extended_addr; 1788 1759 1789 - if (nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack)) 1760 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1761 + return -EOPNOTSUPP; 1762 + 1763 + if (!info->attrs[NL802154_ATTR_SEC_DEVICE] || 1764 + nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack)) 1790 1765 return -EINVAL; 1791 1766 1792 1767 if (!attrs[NL802154_DEV_ATTR_EXTENDED_ADDR]) ··· 1858 1825 if (err) 1859 1826 return err; 1860 1827 1828 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) { 1829 + err = skb->len; 1830 + goto out_err; 1831 + } 1832 + 1861 1833 if (!wpan_dev->netdev) { 1862 1834 err = -EINVAL; ··· 1920 1882 struct ieee802154_llsec_device_key key; 1921 1883 __le64 extended_addr; 1922 1884 1885 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1886 + return -EOPNOTSUPP; 1887 + 1923 1888 if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] || 1924 1889 nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack) < 0) 1925 1890 return -EINVAL; ··· 1954 1913 struct ieee802154_llsec_device_key key; 1955 1914 __le64 extended_addr; 1956 1915 1957 - if (nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack)) 1916 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1917 + return -EOPNOTSUPP; 1918 + 1919 + if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] || 1920 + nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack)) 1958 1921 return -EINVAL; 1959 1922 1960 1923 if (!attrs[NL802154_DEVKEY_ATTR_EXTENDED_ADDR]) ··· 2030 1985 err = nl802154_prepare_wpan_dev_dump(skb, cb, &rdev, &wpan_dev); 2031 1986 if (err) 2032 1987 return err; 1988 + 1989 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) { 1990 + err = skb->len; 1991 + goto out_err; 1992 + } 2033 1993 2034 1994 if (!wpan_dev->netdev) { 2035 1995 err = -EINVAL; ··· 2120 2070 struct wpan_dev *wpan_dev = dev->ieee802154_ptr; 2121 2071 struct ieee802154_llsec_seclevel sl; 2122 2072 2073 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 2074 + return -EOPNOTSUPP; 2075 + 2123 2076 if (llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL], 2124 2077 &sl) < 0) 2125 2078 return -EINVAL; ··· 2138 2085 struct wpan_dev *wpan_dev = dev->ieee802154_ptr; 2139 2086 struct ieee802154_llsec_seclevel sl; 2140 2087 2088 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 2089 + return -EOPNOTSUPP; 2090 + 2141 2091 if (!info->attrs[NL802154_ATTR_SEC_LEVEL] || 2142 2092 llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL], 2143 2093 &sl) < 0) ··· 2154 2098 #define NL802154_FLAG_NEED_NETDEV 0x02 2155 2099 #define NL802154_FLAG_NEED_RTNL 0x04 2156 2100 #define NL802154_FLAG_CHECK_NETDEV_UP 0x08 2157 - #define NL802154_FLAG_NEED_NETDEV_UP (NL802154_FLAG_NEED_NETDEV |\ 2158 - NL802154_FLAG_CHECK_NETDEV_UP) 2159 2101 #define NL802154_FLAG_NEED_WPAN_DEV 0x10 2160 - #define NL802154_FLAG_NEED_WPAN_DEV_UP (NL802154_FLAG_NEED_WPAN_DEV |\ 2161 - NL802154_FLAG_CHECK_NETDEV_UP) 2162 2102 2163 2103 static int nl802154_pre_doit(const struct genl_ops *ops, struct sk_buff *skb, 2164 2104 struct genl_info *info)
+1 -1
net/ipv4/ah4.c
··· 141 141 } 142 142 143 143 kfree(AH_SKB_CB(skb)->tmp); 144 - xfrm_output_resume(skb, err); 144 + xfrm_output_resume(skb->sk, skb, err); 145 145 } 146 146 147 147 static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
+2 -1
net/ipv4/devinet.c
··· 1978 1978 return 0; 1979 1979 } 1980 1980 1981 - static int inet_set_link_af(struct net_device *dev, const struct nlattr *nla) 1981 + static int inet_set_link_af(struct net_device *dev, const struct nlattr *nla, 1982 + struct netlink_ext_ack *extack) 1982 1983 { 1983 1984 struct in_device *in_dev = __in_dev_get_rcu(dev); 1984 1985 struct nlattr *a, *tb[IFLA_INET_MAX+1];
+1 -1
net/ipv4/esp4.c
··· 279 279 x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) 280 280 esp_output_tail_tcp(x, skb); 281 281 else 282 - xfrm_output_resume(skb, err); 282 + xfrm_output_resume(skb->sk, skb, err); 283 283 } 284 284 } 285 285
+14 -3
net/ipv4/esp4_offload.c
··· 217 217 218 218 if ((!(skb->dev->gso_partial_features & NETIF_F_HW_ESP) && 219 219 !(features & NETIF_F_HW_ESP)) || x->xso.dev != skb->dev) 220 - esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK); 220 + esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK | 221 + NETIF_F_SCTP_CRC); 221 222 else if (!(features & NETIF_F_HW_ESP_TX_CSUM) && 222 223 !(skb->dev->gso_partial_features & NETIF_F_HW_ESP_TX_CSUM)) 223 - esp_features = features & ~NETIF_F_CSUM_MASK; 224 + esp_features = features & ~(NETIF_F_CSUM_MASK | 225 + NETIF_F_SCTP_CRC); 224 226 225 227 xo->flags |= XFRM_GSO_SEGMENT; 226 228 ··· 314 312 ip_hdr(skb)->tot_len = htons(skb->len); 315 313 ip_send_check(ip_hdr(skb)); 316 314 317 - if (hw_offload) 315 + if (hw_offload) { 316 + if (!skb_ext_add(skb, SKB_EXT_SEC_PATH)) 317 + return -ENOMEM; 318 + 319 + xo = xfrm_offload(skb); 320 + if (!xo) 321 + return -EINVAL; 322 + 323 + xo->flags |= XFRM_XMIT; 318 324 return 0; 325 + } 319 326 320 327 err = esp_output_tail(x, skb, &esp); 321 328 if (err)
+4 -2
net/ipv4/ip_vti.c
··· 218 218 } 219 219 220 220 if (dst->flags & DST_XFRM_QUEUE) 221 - goto queued; 221 + goto xmit; 222 222 223 223 if (!vti_state_check(dst->xfrm, parms->iph.daddr, parms->iph.saddr)) { 224 224 dev->stats.tx_carrier_errors++; ··· 238 238 if (skb->len > mtu) { 239 239 skb_dst_update_pmtu_no_confirm(skb, mtu); 240 240 if (skb->protocol == htons(ETH_P_IP)) { 241 + if (!(ip_hdr(skb)->frag_off & htons(IP_DF))) 242 + goto xmit; 241 243 icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, 242 244 htonl(mtu)); 243 245 } else { ··· 253 251 goto tx_error; 254 252 } 255 253 256 - queued: 254 + xmit: 257 255 skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(dev))); 258 256 skb_dst_set(skb, dst); 259 257 skb->dev = skb_dst(skb)->dev;
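The vti change above only reports ICMP_DEST_UNREACH/ICMP_FRAG_NEEDED when the inner IPv4 header carries the DF bit; packets whose sender allows fragmentation are passed through to the xmit path instead. A minimal userspace sketch of the DF test (the helper name `ipv4_df_set` is hypothetical; only the `frag_off & htons(IP_DF)` expression comes from the patch):

```c
#include <arpa/inet.h>   /* htons() */
#include <stdbool.h>
#include <stdint.h>

#ifndef IP_DF
#define IP_DF 0x4000     /* IPv4 "don't fragment" flag, host byte order */
#endif

/* frag_off lives in the IPv4 header in network byte order, so the flag
 * must be compared against htons(IP_DF) -- the same expression the
 * patch adds as ip_hdr(skb)->frag_off & htons(IP_DF). */
bool ipv4_df_set(uint16_t frag_off_net)
{
	return (frag_off_net & htons(IP_DF)) != 0;
}
```

Note that the low 13 bits of frag_off are the fragment offset, so masking with the flag bit keeps the test correct even when an offset is present.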
+6
net/ipv4/tcp_bpf.c
··· 507 507 508 508 if (restore) { 509 509 if (inet_csk_has_ulp(sk)) { 510 + /* TLS does not have an unhash proto in SW cases, 511 + * but we need to ensure we stop using the sock_map 512 + * unhash routine because the associated psock is being 513 + * removed. So use the original unhash handler. 514 + */ 515 + WRITE_ONCE(sk->sk_prot->unhash, psock->saved_unhash); 510 516 tcp_update_ulp(sk, psock->sk_proto, psock->saved_write_space); 511 517 } else { 512 518 sk->sk_write_space = psock->saved_write_space;
+4
net/ipv4/udp.c
··· 2788 2788 val = up->gso_size; 2789 2789 break; 2790 2790 2791 + case UDP_GRO: 2792 + val = up->gro_enabled; 2793 + break; 2794 + 2791 2795 /* The following two cannot be changed on UDP sockets, the return is 2792 2796 * always 0 (which corresponds to the full checksum coverage of UDP). */ 2793 2797 case UDPLITE_SEND_CSCOV:
+26 -6
net/ipv6/addrconf.c
··· 5672 5672 return 0; 5673 5673 } 5674 5674 5675 - static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token) 5675 + static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token, 5676 + struct netlink_ext_ack *extack) 5676 5677 { 5677 5678 struct inet6_ifaddr *ifp; 5678 5679 struct net_device *dev = idev->dev; ··· 5684 5683 5685 5684 if (!token) 5686 5685 return -EINVAL; 5687 - if (dev->flags & (IFF_LOOPBACK | IFF_NOARP)) 5686 + 5687 + if (dev->flags & IFF_LOOPBACK) { 5688 + NL_SET_ERR_MSG_MOD(extack, "Device is loopback"); 5688 5689 return -EINVAL; 5689 - if (!ipv6_accept_ra(idev)) 5690 + } 5691 + 5692 + if (dev->flags & IFF_NOARP) { 5693 + NL_SET_ERR_MSG_MOD(extack, 5694 + "Device does not do neighbour discovery"); 5690 5695 return -EINVAL; 5691 - if (idev->cnf.rtr_solicits == 0) 5696 + } 5697 + 5698 + if (!ipv6_accept_ra(idev)) { 5699 + NL_SET_ERR_MSG_MOD(extack, 5700 + "Router advertisement is disabled on device"); 5692 5701 return -EINVAL; 5702 + } 5703 + 5704 + if (idev->cnf.rtr_solicits == 0) { 5705 + NL_SET_ERR_MSG(extack, 5706 + "Router solicitation is disabled on device"); 5707 + return -EINVAL; 5708 + } 5693 5709 5694 5710 write_lock_bh(&idev->lock); 5695 5711 ··· 5814 5796 return 0; 5815 5797 } 5816 5798 5817 - static int inet6_set_link_af(struct net_device *dev, const struct nlattr *nla) 5799 + static int inet6_set_link_af(struct net_device *dev, const struct nlattr *nla, 5800 + struct netlink_ext_ack *extack) 5818 5801 { 5819 5802 struct inet6_dev *idev = __in6_dev_get(dev); 5820 5803 struct nlattr *tb[IFLA_INET6_MAX + 1]; ··· 5828 5809 BUG(); 5829 5810 5830 5811 if (tb[IFLA_INET6_TOKEN]) { 5831 - err = inet6_set_iftoken(idev, nla_data(tb[IFLA_INET6_TOKEN])); 5812 + err = inet6_set_iftoken(idev, nla_data(tb[IFLA_INET6_TOKEN]), 5813 + extack); 5832 5814 if (err) 5833 5815 return err; 5834 5816 }
+1 -1
net/ipv6/ah6.c
··· 316 316 } 317 317 318 318 kfree(AH_SKB_CB(skb)->tmp); 319 - xfrm_output_resume(skb, err); 319 + xfrm_output_resume(skb->sk, skb, err); 320 320 } 321 321 322 322 static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
+1 -1
net/ipv6/esp6.c
··· 314 314 x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) 315 315 esp_output_tail_tcp(x, skb); 316 316 else 317 - xfrm_output_resume(skb, err); 317 + xfrm_output_resume(skb->sk, skb, err); 318 318 } 319 319 } 320 320
+14 -3
net/ipv6/esp6_offload.c
··· 254 254 skb->encap_hdr_csum = 1; 255 255 256 256 if (!(features & NETIF_F_HW_ESP) || x->xso.dev != skb->dev) 257 - esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK); 257 + esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK | 258 + NETIF_F_SCTP_CRC); 258 259 else if (!(features & NETIF_F_HW_ESP_TX_CSUM)) 259 - esp_features = features & ~NETIF_F_CSUM_MASK; 260 + esp_features = features & ~(NETIF_F_CSUM_MASK | 261 + NETIF_F_SCTP_CRC); 260 262 261 263 xo->flags |= XFRM_GSO_SEGMENT; 262 264 ··· 348 346 349 347 ipv6_hdr(skb)->payload_len = htons(len); 350 348 351 - if (hw_offload) 349 + if (hw_offload) { 350 + if (!skb_ext_add(skb, SKB_EXT_SEC_PATH)) 351 + return -ENOMEM; 352 + 353 + xo = xfrm_offload(skb); 354 + if (!xo) 355 + return -EINVAL; 356 + 357 + xo->flags |= XFRM_XMIT; 352 358 return 0; 359 + } 353 360 354 361 err = esp6_output_tail(x, skb, &esp); 355 362 if (err)
+4 -2
net/ipv6/ip6_vti.c
··· 493 493 } 494 494 495 495 if (dst->flags & DST_XFRM_QUEUE) 496 - goto queued; 496 + goto xmit; 497 497 498 498 x = dst->xfrm; 499 499 if (!vti6_state_check(x, &t->parms.raddr, &t->parms.laddr)) ··· 522 522 523 523 icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu); 524 524 } else { 525 + if (!(ip_hdr(skb)->frag_off & htons(IP_DF))) 526 + goto xmit; 525 527 icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, 526 528 htonl(mtu)); 527 529 } ··· 532 530 goto tx_err_dst_release; 533 531 } 534 532 535 - queued: 533 + xmit: 536 534 skb_scrub_packet(skb, !net_eq(t->net, dev_net(dev))); 537 535 skb_dst_set(skb, dst); 538 536 skb->dev = skb_dst(skb)->dev;
+1 -1
net/ipv6/raw.c
··· 298 298 */ 299 299 v4addr = LOOPBACK4_IPV6; 300 300 if (!(addr_type & IPV6_ADDR_MULTICAST) && 301 - !sock_net(sk)->ipv6.sysctl.ip_nonlocal_bind) { 301 + !ipv6_can_nonlocal_bind(sock_net(sk), inet)) { 302 302 err = -EADDRNOTAVAIL; 303 303 if (!ipv6_chk_addr(sock_net(sk), &addr->sin6_addr, 304 304 dev, 0)) {
+5 -3
net/ipv6/route.c
··· 5206 5206 * nexthops have been replaced by first new, the rest should 5207 5207 * be added to it. 5208 5208 */ 5209 - cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL | 5210 - NLM_F_REPLACE); 5211 - cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE; 5209 + if (cfg->fc_nlinfo.nlh) { 5210 + cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL | 5211 + NLM_F_REPLACE); 5212 + cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE; 5213 + } 5212 5214 nhn++; 5213 5215 } 5214 5216
+3 -1
net/mac80211/cfg.c
··· 1788 1788 } 1789 1789 1790 1790 if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN && 1791 - sta->sdata->u.vlan.sta) 1791 + sta->sdata->u.vlan.sta) { 1792 + ieee80211_clear_fast_rx(sta); 1792 1793 RCU_INIT_POINTER(sta->sdata->u.vlan.sta, NULL); 1794 + } 1793 1795 1794 1796 if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) 1795 1797 ieee80211_vif_dec_num_mcast(sta->sdata);
+4 -1
net/mac80211/mlme.c
··· 4707 4707 timeout = sta->rx_stats.last_rx; 4708 4708 timeout += IEEE80211_CONNECTION_IDLE_TIME; 4709 4709 4710 - if (time_is_before_jiffies(timeout)) { 4710 + /* If timeout is after now, then update timer to fire at 4711 + * the later date, but do not actually probe at this time. 4712 + */ 4713 + if (time_is_after_jiffies(timeout)) { 4711 4714 mod_timer(&ifmgd->conn_mon_timer, round_jiffies_up(timeout)); 4712 4715 return; 4713 4716 }
+1 -1
net/mac80211/tx.c
··· 3573 3573 test_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags)) 3574 3574 goto out; 3575 3575 3576 - if (vif->txqs_stopped[ieee80211_ac_from_tid(txq->tid)]) { 3576 + if (vif->txqs_stopped[txq->ac]) { 3577 3577 set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags); 3578 3578 goto out; 3579 3579 }
+1 -1
net/mac802154/llsec.c
··· 152 152 crypto_free_sync_skcipher(key->tfm0); 153 153 err_tfm: 154 154 for (i = 0; i < ARRAY_SIZE(key->tfm); i++) 155 - if (key->tfm[i]) 155 + if (!IS_ERR_OR_NULL(key->tfm[i])) 156 156 crypto_free_aead(key->tfm[i]); 157 157 158 158 kfree_sensitive(key);
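The llsec fix matters because a failed crypto_alloc_aead() leaves an ERR_PTR() value, not NULL, in key->tfm[i], and passing that to crypto_free_aead() dereferences a poisoned pointer. A userspace sketch of the kernel's pointer-encoded error convention (names mirror include/linux/err.h, but this is an illustration, not the kernel header):

```c
#include <errno.h>
#include <stdbool.h>

/* Small negative errno values are packed into the top 4095 addresses,
 * so a single pointer can carry either a valid object or an error. */
#define MAX_ERRNO 4095

void *ERR_PTR(long error)
{
	return (void *)error;
}

bool IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* The check the llsec fix switches to: reject NULL *and* error values,
 * since a failed allocation stores an ERR_PTR, not NULL. */
bool IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}
```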
+47 -53
net/mptcp/protocol.c
··· 11 11 #include <linux/netdevice.h> 12 12 #include <linux/sched/signal.h> 13 13 #include <linux/atomic.h> 14 - #include <linux/igmp.h> 15 14 #include <net/sock.h> 16 15 #include <net/inet_common.h> 17 16 #include <net/inet_hashtables.h> ··· 19 20 #include <net/tcp_states.h> 20 21 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 21 22 #include <net/transp_v6.h> 22 - #include <net/addrconf.h> 23 23 #endif 24 24 #include <net/mptcp.h> 25 25 #include <net/xfrm.h> ··· 2869 2871 return ret; 2870 2872 } 2871 2873 2874 + static bool mptcp_unsupported(int level, int optname) 2875 + { 2876 + if (level == SOL_IP) { 2877 + switch (optname) { 2878 + case IP_ADD_MEMBERSHIP: 2879 + case IP_ADD_SOURCE_MEMBERSHIP: 2880 + case IP_DROP_MEMBERSHIP: 2881 + case IP_DROP_SOURCE_MEMBERSHIP: 2882 + case IP_BLOCK_SOURCE: 2883 + case IP_UNBLOCK_SOURCE: 2884 + case MCAST_JOIN_GROUP: 2885 + case MCAST_LEAVE_GROUP: 2886 + case MCAST_JOIN_SOURCE_GROUP: 2887 + case MCAST_LEAVE_SOURCE_GROUP: 2888 + case MCAST_BLOCK_SOURCE: 2889 + case MCAST_UNBLOCK_SOURCE: 2890 + case MCAST_MSFILTER: 2891 + return true; 2892 + } 2893 + return false; 2894 + } 2895 + if (level == SOL_IPV6) { 2896 + switch (optname) { 2897 + case IPV6_ADDRFORM: 2898 + case IPV6_ADD_MEMBERSHIP: 2899 + case IPV6_DROP_MEMBERSHIP: 2900 + case IPV6_JOIN_ANYCAST: 2901 + case IPV6_LEAVE_ANYCAST: 2902 + case MCAST_JOIN_GROUP: 2903 + case MCAST_LEAVE_GROUP: 2904 + case MCAST_JOIN_SOURCE_GROUP: 2905 + case MCAST_LEAVE_SOURCE_GROUP: 2906 + case MCAST_BLOCK_SOURCE: 2907 + case MCAST_UNBLOCK_SOURCE: 2908 + case MCAST_MSFILTER: 2909 + return true; 2910 + } 2911 + return false; 2912 + } 2913 + return false; 2914 + } 2915 + 2872 2916 static int mptcp_setsockopt(struct sock *sk, int level, int optname, 2873 2917 sockptr_t optval, unsigned int optlen) 2874 2918 { ··· 2918 2878 struct sock *ssk; 2919 2879 2920 2880 pr_debug("msk=%p", msk); 2881 + 2882 + if (mptcp_unsupported(level, optname)) 2883 + return -ENOPROTOOPT; 2921 2884 2922 2885 if (level == SOL_SOCKET) 
2923 2886 return mptcp_setsockopt_sol_socket(msk, optname, optval, optlen); ··· 3452 3409 return mask; 3453 3410 } 3454 3411 3455 - static int mptcp_release(struct socket *sock) 3456 - { 3457 - struct mptcp_subflow_context *subflow; 3458 - struct sock *sk = sock->sk; 3459 - struct mptcp_sock *msk; 3460 - 3461 - if (!sk) 3462 - return 0; 3463 - 3464 - lock_sock(sk); 3465 - 3466 - msk = mptcp_sk(sk); 3467 - 3468 - mptcp_for_each_subflow(msk, subflow) { 3469 - struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 3470 - 3471 - ip_mc_drop_socket(ssk); 3472 - } 3473 - 3474 - release_sock(sk); 3475 - 3476 - return inet_release(sock); 3477 - } 3478 - 3479 3412 static const struct proto_ops mptcp_stream_ops = { 3480 3413 .family = PF_INET, 3481 3414 .owner = THIS_MODULE, 3482 - .release = mptcp_release, 3415 + .release = inet_release, 3483 3416 .bind = mptcp_bind, 3484 3417 .connect = mptcp_stream_connect, 3485 3418 .socketpair = sock_no_socketpair, ··· 3547 3528 } 3548 3529 3549 3530 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 3550 - static int mptcp6_release(struct socket *sock) 3551 - { 3552 - struct mptcp_subflow_context *subflow; 3553 - struct mptcp_sock *msk; 3554 - struct sock *sk = sock->sk; 3555 - 3556 - if (!sk) 3557 - return 0; 3558 - 3559 - lock_sock(sk); 3560 - 3561 - msk = mptcp_sk(sk); 3562 - 3563 - mptcp_for_each_subflow(msk, subflow) { 3564 - struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 3565 - 3566 - ip_mc_drop_socket(ssk); 3567 - ipv6_sock_mc_close(ssk); 3568 - ipv6_sock_ac_close(ssk); 3569 - } 3570 - 3571 - release_sock(sk); 3572 - return inet6_release(sock); 3573 - } 3574 - 3575 3531 static const struct proto_ops mptcp_v6_stream_ops = { 3576 3532 .family = PF_INET6, 3577 3533 .owner = THIS_MODULE, 3578 - .release = mptcp6_release, 3534 + .release = inet6_release, 3579 3535 .bind = mptcp_bind, 3580 3536 .connect = mptcp_stream_connect, 3581 3537 .socketpair = sock_no_socketpair,
+13 -7
net/ncsi/ncsi-manage.c
··· 105 105 monitor_state = nc->monitor.state; 106 106 spin_unlock_irqrestore(&nc->lock, flags); 107 107 108 - if (!enabled || chained) { 109 - ncsi_stop_channel_monitor(nc); 110 - return; 111 - } 108 + if (!enabled) 109 + return; /* expected race disabling timer */ 110 + if (WARN_ON_ONCE(chained)) 111 + goto bad_state; 112 + 112 113 if (state != NCSI_CHANNEL_INACTIVE && 113 114 state != NCSI_CHANNEL_ACTIVE) { 114 - ncsi_stop_channel_monitor(nc); 115 + bad_state: 116 + netdev_warn(ndp->ndev.dev, 117 + "Bad NCSI monitor state channel %d 0x%x %s queue\n", 118 + nc->id, state, chained ? "on" : "off"); 119 + spin_lock_irqsave(&nc->lock, flags); 120 + nc->monitor.enabled = false; 121 + spin_unlock_irqrestore(&nc->lock, flags); 115 122 return; 116 123 } 117 124 ··· 143 136 ncsi_report_link(ndp, true); 144 137 ndp->flags |= NCSI_DEV_RESHUFFLE; 145 138 146 - ncsi_stop_channel_monitor(nc); 147 - 148 139 ncm = &nc->modes[NCSI_MODE_LINK]; 149 140 spin_lock_irqsave(&nc->lock, flags); 141 + nc->monitor.enabled = false; 150 142 nc->state = NCSI_CHANNEL_INVISIBLE; 151 143 ncm->data[2] &= ~0x1; 152 144 spin_unlock_irqrestore(&nc->lock, flags);
+10
net/nfc/llcp_sock.c
··· 108 108 llcp_sock->service_name_len, 109 109 GFP_KERNEL); 110 110 if (!llcp_sock->service_name) { 111 + nfc_llcp_local_put(llcp_sock->local); 111 112 ret = -ENOMEM; 112 113 goto put_dev; 113 114 } 114 115 llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock); 115 116 if (llcp_sock->ssap == LLCP_SAP_MAX) { 117 + nfc_llcp_local_put(llcp_sock->local); 116 118 kfree(llcp_sock->service_name); 117 119 llcp_sock->service_name = NULL; 118 120 ret = -EADDRINUSE; ··· 673 671 ret = -EISCONN; 674 672 goto error; 675 673 } 674 + if (sk->sk_state == LLCP_CONNECTING) { 675 + ret = -EINPROGRESS; 676 + goto error; 677 + } 676 678 677 679 dev = nfc_get_device(addr->dev_idx); 678 680 if (dev == NULL) { ··· 708 702 llcp_sock->local = nfc_llcp_local_get(local); 709 703 llcp_sock->ssap = nfc_llcp_get_local_ssap(local); 710 704 if (llcp_sock->ssap == LLCP_SAP_MAX) { 705 + nfc_llcp_local_put(llcp_sock->local); 711 706 ret = -ENOMEM; 712 707 goto put_dev; 713 708 } ··· 750 743 751 744 sock_unlink: 752 745 nfc_llcp_sock_unlink(&local->connecting_sockets, sk); 746 + kfree(llcp_sock->service_name); 747 + llcp_sock->service_name = NULL; 753 748 754 749 sock_llcp_release: 755 750 nfc_llcp_put_ssap(local, llcp_sock->ssap); 751 + nfc_llcp_local_put(llcp_sock->local); 756 752 757 753 put_dev: 758 754 nfc_put_device(dev);
+4 -4
net/openvswitch/conntrack.c
··· 2032 2032 static int ovs_ct_limit_get_default_limit(struct ovs_ct_limit_info *info, 2033 2033 struct sk_buff *reply) 2034 2034 { 2035 - struct ovs_zone_limit zone_limit; 2036 - 2037 - zone_limit.zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE; 2038 - zone_limit.limit = info->default_limit; 2035 + struct ovs_zone_limit zone_limit = { 2036 + .zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE, 2037 + .limit = info->default_limit, 2038 + }; 2039 2039 2040 2040 return nla_put_nohdr(reply, sizeof(zone_limit), &zone_limit); 2041 2041 }
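The conntrack hunk swaps field-by-field assignment for a designated initializer: with partial initialization, C guarantees that every member not named in the initializer is zero-initialized, so a later-added field in ovs_zone_limit cannot leak uninitialized stack bytes through nla_put_nohdr(). A sketch with a hypothetical stand-in struct (not the real uapi layout):

```c
#include <stdint.h>

/* Hypothetical stand-in for struct ovs_zone_limit: with a designated
 * initializer, every member that is not mentioned is zero-initialized
 * (C11 6.7.9), so `count` below cannot carry stale stack contents. */
struct zone_limit {
	int32_t  zone_id;
	uint32_t limit;
	uint32_t count;
};

struct zone_limit make_default_limit(uint32_t limit)
{
	struct zone_limit zl = {
		.zone_id = -1,      /* OVS_ZONE_LIMIT_DEFAULT_ZONE is -1 */
		.limit   = limit,
		/* .count intentionally omitted: guaranteed zero */
	};
	return zl;
}
```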
+4 -1
net/qrtr/qrtr.c
··· 272 272 flow = kzalloc(sizeof(*flow), GFP_KERNEL); 273 273 if (flow) { 274 274 init_waitqueue_head(&flow->resume_tx); 275 - radix_tree_insert(&node->qrtr_tx_flow, key, flow); 275 + if (radix_tree_insert(&node->qrtr_tx_flow, key, flow)) { 276 + kfree(flow); 277 + flow = NULL; 278 + } 276 279 } 277 280 } 278 281 mutex_unlock(&node->qrtr_tx_lock);
+3 -1
net/rds/message.c
··· 180 180 rds_message_purge(rm); 181 181 182 182 kfree(rm); 183 + rm = NULL; 183 184 } 184 185 } 185 186 EXPORT_SYMBOL_GPL(rds_message_put); ··· 348 347 rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE); 349 348 rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs); 350 349 if (IS_ERR(rm->data.op_sg)) { 350 + void *err = ERR_CAST(rm->data.op_sg); 351 351 rds_message_put(rm); 352 - return ERR_CAST(rm->data.op_sg); 352 + return err; 353 353 } 354 354 355 355 for (i = 0; i < rm->data.op_nents; ++i) {
+1 -1
net/rds/send.c
··· 665 665 unlock_and_drop: 666 666 spin_unlock_irqrestore(&rm->m_rs_lock, flags); 667 667 rds_message_put(rm); 668 - if (was_on_sock) 668 + if (was_on_sock && rm) 669 669 rds_message_put(rm); 670 670 } 671 671
+4 -3
net/rfkill/core.c
··· 69 69 70 70 struct rfkill_int_event { 71 71 struct list_head list; 72 - struct rfkill_event ev; 72 + struct rfkill_event_ext ev; 73 73 }; 74 74 75 75 struct rfkill_data { ··· 253 253 } 254 254 #endif /* CONFIG_RFKILL_LEDS */ 255 255 256 - static void rfkill_fill_event(struct rfkill_event *ev, struct rfkill *rfkill, 256 + static void rfkill_fill_event(struct rfkill_event_ext *ev, 257 + struct rfkill *rfkill, 257 258 enum rfkill_operation op) 258 259 { 259 260 unsigned long flags; ··· 1238 1237 size_t count, loff_t *pos) 1239 1238 { 1240 1239 struct rfkill *rfkill; 1241 - struct rfkill_event ev; 1240 + struct rfkill_event_ext ev; 1242 1241 int ret; 1243 1242 1244 1243 /* we don't need the 'hard' variable but accept it */
+31 -17
net/sched/act_api.c
··· 158 158 return 0; 159 159 } 160 160 161 - int __tcf_idr_release(struct tc_action *p, bool bind, bool strict) 161 + static int __tcf_idr_release(struct tc_action *p, bool bind, bool strict) 162 162 { 163 163 int ret = 0; 164 164 ··· 184 184 185 185 return ret; 186 186 } 187 - EXPORT_SYMBOL(__tcf_idr_release); 187 + 188 + int tcf_idr_release(struct tc_action *a, bool bind) 189 + { 190 + const struct tc_action_ops *ops = a->ops; 191 + int ret; 192 + 193 + ret = __tcf_idr_release(a, bind, false); 194 + if (ret == ACT_P_DELETED) 195 + module_put(ops->owner); 196 + return ret; 197 + } 198 + EXPORT_SYMBOL(tcf_idr_release); 188 199 189 200 static size_t tcf_action_shared_attrs_size(const struct tc_action *act) 190 201 { ··· 504 493 } 505 494 506 495 p->idrinfo = idrinfo; 496 + __module_get(ops->owner); 507 497 p->ops = ops; 508 498 *a = p; 509 499 return 0; ··· 1004 992 struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, 1005 993 struct nlattr *nla, struct nlattr *est, 1006 994 char *name, int ovr, int bind, 1007 - struct tc_action_ops *a_o, bool rtnl_held, 995 + struct tc_action_ops *a_o, int *init_res, 996 + bool rtnl_held, 1008 997 struct netlink_ext_ack *extack) 1009 998 { 1010 999 struct nla_bitfield32 flags = { 0, 0 }; ··· 1041 1028 } 1042 1029 if (err < 0) 1043 1030 goto err_out; 1031 + *init_res = err; 1044 1032 1045 1033 if (!name && tb[TCA_ACT_COOKIE]) 1046 1034 tcf_set_action_cookie(&a->act_cookie, cookie); 1047 1035 1048 1036 if (!name) 1049 1037 a->hw_stats = hw_stats; 1050 - 1051 - /* module count goes up only when brand new policy is created 1052 - * if it exists and is only bound to in a_o->init() then 1053 - * ACT_P_CREATED is not returned (a zero is). 
1054 - */ 1055 - if (err != ACT_P_CREATED) 1056 - module_put(a_o->owner); 1057 1038 1058 1039 return a; 1059 1040 ··· 1063 1056 1064 1057 int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, 1065 1058 struct nlattr *est, char *name, int ovr, int bind, 1066 - struct tc_action *actions[], size_t *attr_size, 1059 + struct tc_action *actions[], int init_res[], size_t *attr_size, 1067 1060 bool rtnl_held, struct netlink_ext_ack *extack) 1068 1061 { 1069 1062 struct tc_action_ops *ops[TCA_ACT_MAX_PRIO] = {}; ··· 1091 1084 1092 1085 for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) { 1093 1086 act = tcf_action_init_1(net, tp, tb[i], est, name, ovr, bind, 1094 - ops[i - 1], rtnl_held, extack); 1087 + ops[i - 1], &init_res[i - 1], rtnl_held, 1088 + extack); 1095 1089 if (IS_ERR(act)) { 1096 1090 err = PTR_ERR(act); 1097 1091 goto err; ··· 1108 1100 tcf_idr_insert_many(actions); 1109 1101 1110 1102 *attr_size = tcf_action_full_attrs_size(sz); 1111 - return i - 1; 1103 + err = i - 1; 1104 + goto err_mod; 1112 1105 1113 1106 err: 1114 1107 tcf_action_destroy(actions, bind); ··· 1506 1497 struct netlink_ext_ack *extack) 1507 1498 { 1508 1499 size_t attr_size = 0; 1509 - int loop, ret; 1500 + int loop, ret, i; 1510 1501 struct tc_action *actions[TCA_ACT_MAX_PRIO] = {}; 1502 + int init_res[TCA_ACT_MAX_PRIO] = {}; 1511 1503 1512 1504 for (loop = 0; loop < 10; loop++) { 1513 1505 ret = tcf_action_init(net, NULL, nla, NULL, NULL, ovr, 0, 1514 - actions, &attr_size, true, extack); 1506 + actions, init_res, &attr_size, true, extack); 1515 1507 if (ret != -EAGAIN) 1516 1508 break; 1517 1509 } ··· 1520 1510 if (ret < 0) 1521 1511 return ret; 1522 1512 ret = tcf_add_notify(net, n, actions, portid, attr_size, extack); 1523 - if (ovr) 1524 - tcf_action_put_many(actions); 1513 + 1514 + /* only put existing actions */ 1515 + for (i = 0; i < TCA_ACT_MAX_PRIO; i++) 1516 + if (init_res[i] == ACT_P_CREATED) 1517 + actions[i] = NULL; 1518 + tcf_action_put_many(actions); 
1525 1519 1526 1520 return ret; 1527 1521 }
+8 -8
net/sched/cls_api.c
··· 646 646 struct net_device *dev = block_cb->indr.dev; 647 647 struct Qdisc *sch = block_cb->indr.sch; 648 648 struct netlink_ext_ack extack = {}; 649 - struct flow_block_offload bo; 649 + struct flow_block_offload bo = {}; 650 650 651 651 tcf_block_offload_init(&bo, dev, sch, FLOW_BLOCK_UNBIND, 652 652 block_cb->indr.binder_type, ··· 3040 3040 { 3041 3041 #ifdef CONFIG_NET_CLS_ACT 3042 3042 { 3043 + int init_res[TCA_ACT_MAX_PRIO] = {}; 3043 3044 struct tc_action *act; 3044 3045 size_t attr_size = 0; 3045 3046 ··· 3052 3051 return PTR_ERR(a_o); 3053 3052 act = tcf_action_init_1(net, tp, tb[exts->police], 3054 3053 rate_tlv, "police", ovr, 3055 - TCA_ACT_BIND, a_o, rtnl_held, 3056 - extack); 3057 - if (IS_ERR(act)) { 3058 - module_put(a_o->owner); 3054 + TCA_ACT_BIND, a_o, init_res, 3055 + rtnl_held, extack); 3056 + module_put(a_o->owner); 3057 + if (IS_ERR(act)) 3059 3058 return PTR_ERR(act); 3060 - } 3061 3059 3062 3060 act->type = exts->type = TCA_OLD_COMPAT; 3063 3061 exts->actions[0] = act; ··· 3067 3067 3068 3068 err = tcf_action_init(net, tp, tb[exts->action], 3069 3069 rate_tlv, NULL, ovr, TCA_ACT_BIND, 3070 - exts->actions, &attr_size, 3071 - rtnl_held, extack); 3070 + exts->actions, init_res, 3071 + &attr_size, rtnl_held, extack); 3072 3072 if (err < 0) 3073 3073 return err; 3074 3074 exts->nr_actions = err;
+3 -2
net/sched/sch_htb.c
··· 1675 1675 cl->parent->common.classid, 1676 1676 NULL); 1677 1677 if (q->offload) { 1678 - if (new_q) 1678 + if (new_q) { 1679 1679 htb_set_lockdep_class_child(new_q); 1680 - htb_parent_to_leaf_offload(sch, dev_queue, new_q); 1680 + htb_parent_to_leaf_offload(sch, dev_queue, new_q); 1681 + } 1681 1682 } 1682 1683 } 1683 1684
+3
net/sched/sch_teql.c
··· 134 134 struct teql_sched_data *dat = qdisc_priv(sch); 135 135 struct teql_master *master = dat->m; 136 136 137 + if (!master) 138 + return; 139 + 137 140 prev = master->slaves; 138 141 if (prev) { 139 142 do {
+3 -4
net/sctp/ipv6.c
··· 664 664 if (!(type & IPV6_ADDR_UNICAST)) 665 665 return 0; 666 666 667 - return sp->inet.freebind || net->ipv6.sysctl.ip_nonlocal_bind || 668 - ipv6_chk_addr(net, in6, NULL, 0); 667 + return ipv6_can_nonlocal_bind(net, &sp->inet) || 668 + ipv6_chk_addr(net, in6, NULL, 0); 669 669 } 670 670 671 671 /* This function checks if the address is a valid address to be used for ··· 954 954 net = sock_net(&opt->inet.sk); 955 955 rcu_read_lock(); 956 956 dev = dev_get_by_index_rcu(net, addr->v6.sin6_scope_id); 957 - if (!dev || !(opt->inet.freebind || 958 - net->ipv6.sysctl.ip_nonlocal_bind || 957 + if (!dev || !(ipv6_can_nonlocal_bind(net, &opt->inet) || 959 958 ipv6_chk_addr(net, &addr->v6.sin6_addr, 960 959 dev, 0))) { 961 960 rcu_read_unlock();
+2 -1
net/tipc/crypto.c
··· 1943 1943 goto rcv; 1944 1944 if (tipc_aead_clone(&tmp, aead) < 0) 1945 1945 goto rcv; 1946 + WARN_ON(!refcount_inc_not_zero(&tmp->refcnt)); 1946 1947 if (tipc_crypto_key_attach(rx, tmp, ehdr->tx_key, false) < 0) { 1947 1948 tipc_aead_free(&tmp->rcu); 1948 1949 goto rcv; 1949 1950 } 1950 1951 tipc_aead_put(aead); 1951 - aead = tipc_aead_get((struct tipc_aead __force __rcu *)tmp); 1952 + aead = tmp; 1952 1953 } 1953 1954 1954 1955 if (unlikely(err)) {
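The tipc fix takes the clone's reference explicitly: refcount_inc_not_zero() succeeds only while the count is still above zero, which is what makes it safe to grab a new reference to an object that a concurrent put might already be freeing. A single-threaded sketch of the semantics (the kernel's refcount_t uses atomics; `demo_refcount` is a hypothetical exercise function):

```c
#include <stdbool.h>

struct refcount {
	int refs;
};

/* Succeeds only if the object is still alive (count > 0); this is
 * what lets lockless readers safely take a new reference. */
bool refcount_inc_not_zero(struct refcount *r)
{
	if (r->refs == 0)
		return false;   /* object already on its way to being freed */
	r->refs++;
	return true;
}

bool refcount_dec_and_test(struct refcount *r)
{
	return --r->refs == 0; /* true: caller must free the object */
}

/* Exercise the lifecycle: one owner, one extra reference, then release. */
int demo_refcount(void)
{
	struct refcount r = { .refs = 1 };

	if (!refcount_inc_not_zero(&r))   /* 1 -> 2 */
		return -1;
	if (refcount_dec_and_test(&r))    /* 2 -> 1, not yet zero */
		return -2;
	if (!refcount_dec_and_test(&r))   /* 1 -> 0, last put frees */
		return -3;
	return refcount_inc_not_zero(&r) ? -4 : 0; /* dead object: must fail */
}
```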
+1 -1
net/tipc/net.c
··· 89 89 * - A spin lock to protect the registry of kernel/driver users (reg.c) 90 90 * - A global spin_lock (tipc_port_lock), which only task is to ensure 91 91 * consistency where more than one port is involved in an operation, 92 - * i.e., whe a port is part of a linked list of ports. 92 + * i.e., when a port is part of a linked list of ports. 93 93 * There are two such lists; 'port_list', which is used for management, 94 94 * and 'wait_list', which is used to queue ports during congestion. 95 95 *
+1 -1
net/tipc/node.c
··· 1739 1739 } 1740 1740 1741 1741 /* tipc_node_xmit_skb(): send single buffer to destination 1742 - * Buffers sent via this functon are generally TIPC_SYSTEM_IMPORTANCE 1742 + * Buffers sent via this function are generally TIPC_SYSTEM_IMPORTANCE 1743 1743 * messages, which will not be rejected 1744 1744 * The only exception is datagram messages rerouted after secondary 1745 1745 * lookup, which are rare and safe to dispose of anyway.
+1 -1
net/tipc/socket.c
··· 1262 1262 spin_lock_bh(&inputq->lock); 1263 1263 if (skb_peek(arrvq) == skb) { 1264 1264 skb_queue_splice_tail_init(&tmpq, inputq); 1265 - kfree_skb(__skb_dequeue(arrvq)); 1265 + __skb_dequeue(arrvq); 1266 1266 } 1267 1267 spin_unlock_bh(&inputq->lock); 1268 1268 __skb_queue_purge(&tmpq);
+7 -3
net/wireless/nl80211.c
··· 5 5 * Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net> 6 6 * Copyright 2013-2014 Intel Mobile Communications GmbH 7 7 * Copyright 2015-2017 Intel Deutschland GmbH 8 - * Copyright (C) 2018-2020 Intel Corporation 8 + * Copyright (C) 2018-2021 Intel Corporation 9 9 */ 10 10 11 11 #include <linux/if.h> ··· 229 229 unsigned int len = nla_len(attr); 230 230 const struct element *elem; 231 231 const struct ieee80211_mgmt *mgmt = (void *)data; 232 - bool s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control); 233 232 unsigned int fixedlen, hdrlen; 233 + bool s1g_bcn; 234 234 235 + if (len < offsetofend(typeof(*mgmt), frame_control)) 236 + goto err; 237 + 238 + s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control); 235 239 if (s1g_bcn) { 236 240 fixedlen = offsetof(struct ieee80211_ext, 237 241 u.s1g_beacon.variable); ··· 5489 5485 rdev, info->attrs[NL80211_ATTR_UNSOL_BCAST_PROBE_RESP], 5490 5486 &params); 5491 5487 if (err) 5492 - return err; 5488 + goto out; 5493 5489 } 5494 5490 5495 5491 nl80211_calculate_ap_params(&params);
+8 -6
net/wireless/scan.c
··· 2352 2352 return NULL; 2353 2353 2354 2354 if (ext) { 2355 - struct ieee80211_s1g_bcn_compat_ie *compat; 2356 - u8 *ie; 2355 + const struct ieee80211_s1g_bcn_compat_ie *compat; 2356 + const struct element *elem; 2357 2357 2358 - ie = (void *)cfg80211_find_ie(WLAN_EID_S1G_BCN_COMPAT, 2359 - variable, ielen); 2360 - if (!ie) 2358 + elem = cfg80211_find_elem(WLAN_EID_S1G_BCN_COMPAT, 2359 + variable, ielen); 2360 + if (!elem) 2361 2361 return NULL; 2362 - compat = (void *)(ie + 2); 2362 + if (elem->datalen < sizeof(*compat)) 2363 + return NULL; 2364 + compat = (void *)elem->data; 2363 2365 bssid = ext->u.s1g_beacon.sa; 2364 2366 capability = le16_to_cpu(compat->compat_info); 2365 2367 beacon_int = le16_to_cpu(compat->beacon_int);
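The scan.c change switches to cfg80211_find_elem() and checks elem->datalen before casting the payload to ieee80211_s1g_bcn_compat_ie, so a truncated element can no longer be read past its end. The pattern, sketched as a standalone 802.11-style TLV walk (`find_elem` and `demo_find` are illustrative names, not the cfg80211 API):

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal information-element walk: 1-byte ID, 1-byte length, then
 * `datalen` bytes of payload. The caller must still verify datalen is
 * large enough BEFORE casting data to a fixed-size struct. */
struct element {
	uint8_t id;
	uint8_t datalen;
	uint8_t data[];
};

const struct element *find_elem(uint8_t eid, const uint8_t *ies, size_t len)
{
	size_t off = 0;

	while (off + 2 <= len) {
		const struct element *elem = (const void *)(ies + off);

		if (off + 2 + elem->datalen > len)
			return NULL;          /* truncated element */
		if (elem->id == eid)
			return elem;
		off += 2 + elem->datalen;
	}
	return NULL;
}

/* One 3-byte element (ID 7) followed by a zero-length element (ID 9). */
int demo_find(void)
{
	static const uint8_t ies[] = { 7, 3, 0xaa, 0xbb, 0xcc, 9, 0 };
	const struct element *e = find_elem(7, ies, sizeof(ies));

	if (!e || e->datalen != 3)
		return -1;
	if (!find_elem(9, ies, sizeof(ies)))
		return -2;
	if (find_elem(5, ies, sizeof(ies)))   /* absent ID */
		return -3;
	/* Truncated buffer: element 7 claims 3 bytes but only 2 remain. */
	if (find_elem(7, ies, 4))
		return -4;
	return 0;
}
```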
+1 -1
net/wireless/sme.c
··· 529 529 cfg80211_sme_free(wdev); 530 530 } 531 531 532 - if (WARN_ON(wdev->conn)) 532 + if (wdev->conn) 533 533 return -EINPROGRESS; 534 534 535 535 wdev->conn = kzalloc(sizeof(*wdev->conn), GFP_KERNEL);
+9 -3
net/xfrm/xfrm_compat.c
··· 216 216 case XFRM_MSG_GETSADINFO:
217 217 case XFRM_MSG_GETSPDINFO:
218 218 default:
219 - WARN_ONCE(1, "unsupported nlmsg_type %d", nlh_src->nlmsg_type);
219 + pr_warn_once("unsupported nlmsg_type %d\n", nlh_src->nlmsg_type);
220 220 return ERR_PTR(-EOPNOTSUPP);
221 221 }
222 222
··· 277 277 return xfrm_nla_cpy(dst, src, nla_len(src));
278 278 default:
279 279 BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID);
280 - WARN_ONCE(1, "unsupported nla_type %d", src->nla_type);
280 + pr_warn_once("unsupported nla_type %d\n", src->nla_type);
281 281 return -EOPNOTSUPP;
282 282 }
283 283 }
··· 315 315 struct sk_buff *new = NULL;
316 316 int err;
317 317
318 - if (WARN_ON_ONCE(type >= ARRAY_SIZE(xfrm_msg_min)))
318 + if (type >= ARRAY_SIZE(xfrm_msg_min)) {
319 + pr_warn_once("unsupported nlmsg_type %d\n", nlh_src->nlmsg_type);
319 320 return -EOPNOTSUPP;
321 + }
320 322
321 323 if (skb_shinfo(skb)->frag_list == NULL) {
322 324 new = alloc_skb(skb->len + skb_tailroom(skb), GFP_ATOMIC);
··· 380 378 struct nlmsghdr *nlmsg = dst;
381 379 struct nlattr *nla;
382 380
381 + /* xfrm_user_rcv_msg_compat() relies on the fact that 32-bit messages
382 + * have the same len or shorter than 64-bit ones.
383 + * 32-bit translation that is bigger than 64-bit original is unexpected.
384 + */
383 385 if (WARN_ON_ONCE(copy_len > payload))
384 386 copy_len = payload;
-2
net/xfrm/xfrm_device.c
··· 134 134 return skb; 135 135 } 136 136 137 - xo->flags |= XFRM_XMIT; 138 - 139 137 if (skb_is_gso(skb) && unlikely(x->xso.dev != dev)) { 140 138 struct sk_buff *segs; 141 139
+3
net/xfrm/xfrm_interface.c
··· 306 306 307 307 icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu); 308 308 } else { 309 + if (!(ip_hdr(skb)->frag_off & htons(IP_DF))) 310 + goto xmit; 309 311 icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, 310 312 htonl(mtu)); 311 313 } ··· 316 314 return -EMSGSIZE; 317 315 } 318 316 317 + xmit: 319 318 xfrmi_scrub_packet(skb, !net_eq(xi->net, dev_net(dev))); 320 319 skb_dst_set(skb, dst); 321 320 skb->dev = tdev;
+18 -5
net/xfrm/xfrm_output.c
··· 503 503 return err;
504 504 }
505 505
506 - int xfrm_output_resume(struct sk_buff *skb, int err)
506 + int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err)
507 507 {
508 508 struct net *net = xs_net(skb_dst(skb)->xfrm);
509 509
510 510 while (likely((err = xfrm_output_one(skb, err)) == 0)) {
511 511 nf_reset_ct(skb);
512 512
513 - err = skb_dst(skb)->ops->local_out(net, skb->sk, skb);
513 + err = skb_dst(skb)->ops->local_out(net, sk, skb);
514 514 if (unlikely(err != 1))
515 515 goto out;
516 516
517 517 if (!skb_dst(skb)->xfrm)
518 - return dst_output(net, skb->sk, skb);
518 + return dst_output(net, sk, skb);
519 519
520 520 err = nf_hook(skb_dst(skb)->ops->family,
521 - NF_INET_POST_ROUTING, net, skb->sk, skb,
521 + NF_INET_POST_ROUTING, net, sk, skb,
522 522 NULL, skb_dst(skb)->dev, xfrm_output2);
523 523 if (unlikely(err != 1))
524 524 goto out;
··· 534 534
535 535 static int xfrm_output2(struct net *net, struct sock *sk, struct sk_buff *skb)
536 536 {
537 - return xfrm_output_resume(skb, 1);
537 + return xfrm_output_resume(sk, skb, 1);
538 538 }
539 539
540 540 static int xfrm_output_gso(struct net *net, struct sock *sk, struct sk_buff *skb)
··· 660 660 {
661 661 int err;
662 662
663 + if (x->outer_mode.encap == XFRM_MODE_BEET &&
664 + ip_is_fragment(ip_hdr(skb))) {
665 + net_warn_ratelimited("BEET mode doesn't support inner IPv4 fragments\n");
666 + return -EAFNOSUPPORT;
667 + }
668 +
663 669 err = xfrm4_tunnel_check_size(skb);
664 670 if (err)
665 671 return err;
··· 711 705 static int xfrm6_extract_output(struct xfrm_state *x, struct sk_buff *skb)
712 706 {
713 707 #if IS_ENABLED(CONFIG_IPV6)
708 + unsigned int ptr = 0;
714 709 int err;
710 +
711 + if (x->outer_mode.encap == XFRM_MODE_BEET &&
712 + ipv6_find_hdr(skb, &ptr, NEXTHDR_FRAGMENT, NULL, NULL) >= 0) {
713 + net_warn_ratelimited("BEET mode doesn't support inner IPv6 fragments\n");
714 + return -EAFNOSUPPORT;
715 + }
715 716
716 717 err = xfrm6_tunnel_check_size(skb);
717 718 if (err)
+6 -5
net/xfrm/xfrm_state.c
··· 44 44 */ 45 45 46 46 static unsigned int xfrm_state_hashmax __read_mostly = 1 * 1024 * 1024; 47 - static __read_mostly seqcount_t xfrm_state_hash_generation = SEQCNT_ZERO(xfrm_state_hash_generation); 48 47 static struct kmem_cache *xfrm_state_cache __ro_after_init; 49 48 50 49 static DECLARE_WORK(xfrm_state_gc_work, xfrm_state_gc_task); ··· 139 140 } 140 141 141 142 spin_lock_bh(&net->xfrm.xfrm_state_lock); 142 - write_seqcount_begin(&xfrm_state_hash_generation); 143 + write_seqcount_begin(&net->xfrm.xfrm_state_hash_generation); 143 144 144 145 nhashmask = (nsize / sizeof(struct hlist_head)) - 1U; 145 146 odst = xfrm_state_deref_prot(net->xfrm.state_bydst, net); ··· 155 156 rcu_assign_pointer(net->xfrm.state_byspi, nspi); 156 157 net->xfrm.state_hmask = nhashmask; 157 158 158 - write_seqcount_end(&xfrm_state_hash_generation); 159 + write_seqcount_end(&net->xfrm.xfrm_state_hash_generation); 159 160 spin_unlock_bh(&net->xfrm.xfrm_state_lock); 160 161 161 162 osize = (ohashmask + 1) * sizeof(struct hlist_head); ··· 1062 1063 1063 1064 to_put = NULL; 1064 1065 1065 - sequence = read_seqcount_begin(&xfrm_state_hash_generation); 1066 + sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation); 1066 1067 1067 1068 rcu_read_lock(); 1068 1069 h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family); ··· 1175 1176 if (to_put) 1176 1177 xfrm_state_put(to_put); 1177 1178 1178 - if (read_seqcount_retry(&xfrm_state_hash_generation, sequence)) { 1179 + if (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence)) { 1179 1180 *err = -EAGAIN; 1180 1181 if (x) { 1181 1182 xfrm_state_put(x); ··· 2665 2666 net->xfrm.state_num = 0; 2666 2667 INIT_WORK(&net->xfrm.state_hash_work, xfrm_hash_resize); 2667 2668 spin_lock_init(&net->xfrm.xfrm_state_lock); 2669 + seqcount_spinlock_init(&net->xfrm.xfrm_state_hash_generation, 2670 + &net->xfrm.xfrm_state_lock); 2668 2671 return 0; 2669 2672 2670 2673 out_byspi:
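The xfrm_state hunk moves the hash-generation counter from a file-static seqcount_t into struct netns_xfrm and initializes it as a seqcount_spinlock_t tied to xfrm_state_lock, so each network namespace gets its own generation count. A single-threaded sketch of the read-retry protocol such a counter implements (simplified: the real reader spins until the sequence is even, and the kernel type carries lockdep information):

```c
/* Writers bump the sequence to odd while a hash resize is in flight
 * and back to even when done; readers retry if they started during a
 * write or if the sequence changed underneath them. */
struct seqcount {
	unsigned int sequence;
};

void write_seqcount_begin(struct seqcount *s) { s->sequence++; }
void write_seqcount_end(struct seqcount *s)   { s->sequence++; }

unsigned int read_seqcount_begin(const struct seqcount *s)
{
	return s->sequence & ~1u;   /* round down to an even "stable" value */
}

int read_seqcount_retry(const struct seqcount *s, unsigned int start)
{
	return (s->sequence & 1) || s->sequence != start;
}

/* Hypothetical exercise: stable read, read racing a write, stable read. */
int demo_seqcount(void)
{
	struct seqcount s = { 0 };
	unsigned int start;

	start = read_seqcount_begin(&s);
	if (read_seqcount_retry(&s, start))      /* nothing changed */
		return -1;

	write_seqcount_begin(&s);                /* resize in progress */
	if (!read_seqcount_retry(&s, start))     /* readers must retry */
		return -2;
	write_seqcount_end(&s);

	start = read_seqcount_begin(&s);
	if (read_seqcount_retry(&s, start))      /* stable again */
		return -3;
	return 0;
}
```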
+2
scripts/module.lds.S
··· 20 20 21 21 __patchable_function_entries : { *(__patchable_function_entries) } 22 22 23 + #ifdef CONFIG_LTO_CLANG 23 24 /* 24 25 * With CONFIG_LTO_CLANG, LLD always enables -fdata-sections and 25 26 * -ffunction-sections, which increases the size of the final module. ··· 42 41 } 43 42 44 43 .text : { *(.text .text.[0-9a-zA-Z_]*) } 44 + #endif 45 45 } 46 46 47 47 /* bring in arch-specific sections */
+8
security/integrity/iint.c
··· 98 98 struct rb_node *node, *parent = NULL; 99 99 struct integrity_iint_cache *iint, *test_iint; 100 100 101 + /* 102 + * The integrity's "iint_cache" is initialized at security_init(), 103 + * unless it is not included in the ordered list of LSMs enabled 104 + * on the boot command line. 105 + */ 106 + if (!iint_cache) 107 + panic("%s: lsm=integrity required.\n", __func__); 108 + 101 109 iint = integrity_iint_find(inode); 102 110 if (iint) 103 111 return iint;
+33 -68
security/selinux/ss/avtab.c
··· 109 109 struct avtab_node *prev, *cur, *newnode; 110 110 u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD); 111 111 112 - if (!h) 112 + if (!h || !h->nslot) 113 113 return -EINVAL; 114 114 115 115 hvalue = avtab_hash(key, h->mask);
··· 154 154 struct avtab_node *prev, *cur; 155 155 u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD); 156 156 157 - if (!h) 157 + if (!h || !h->nslot) 158 158 return NULL; 159 159 hvalue = avtab_hash(key, h->mask); 160 160 for (prev = NULL, cur = h->htable[hvalue];
··· 184 184 struct avtab_node *cur; 185 185 u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD); 186 186 187 - if (!h) 187 + if (!h || !h->nslot) 188 188 return NULL; 189 189 190 190 hvalue = avtab_hash(key, h->mask);
··· 220 220 struct avtab_node *cur; 221 221 u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD); 222 222 223 - if (!h) 223 + if (!h || !h->nslot) 224 224 return NULL; 225 225 226 226 hvalue = avtab_hash(key, h->mask);
··· 295 295 } 296 296 kvfree(h->htable); 297 297 h->htable = NULL; 298 + h->nel = 0; 298 299 h->nslot = 0; 299 300 h->mask = 0; 300 301 }
··· 304 303 { 305 304 h->htable = NULL; 306 305 h->nel = 0; 306 + h->nslot = 0; 307 + h->mask = 0; 307 308 } 308 309 309 - int avtab_alloc(struct avtab *h, u32 nrules) 310 + static int avtab_alloc_common(struct avtab *h, u32 nslot) 310 311 { 311 - u32 mask = 0; 312 - u32 shift = 0; 313 - u32 work = nrules; 314 - u32 nslot = 0; 315 - 316 - if (nrules == 0) 317 - goto avtab_alloc_out; 318 - 319 - while (work) { 320 - work = work >> 1; 321 - shift++; 322 - } 323 - if (shift > 2) 324 - shift = shift - 2; 325 - nslot = 1 << shift; 326 - if (nslot > MAX_AVTAB_HASH_BUCKETS) 327 - nslot = MAX_AVTAB_HASH_BUCKETS; 328 - mask = nslot - 1; 312 + if (!nslot) 313 + return 0; 329 314 330 315 h->htable = kvcalloc(nslot, sizeof(void *), GFP_KERNEL); 331 316 if (!h->htable) 332 317 return -ENOMEM; 333 318 334 - avtab_alloc_out: 335 - h->nel = 0; 336 319 h->nslot = nslot; 337 - h->mask = mask; 338 - pr_debug("SELinux: %d avtab hash slots, %d rules.\n", 339 - h->nslot, nrules); 320 + h->mask = nslot - 1; 340 321 return 0; 341 322 } 342 323 343 - int avtab_duplicate(struct avtab *new, struct avtab *orig) 324 + int avtab_alloc(struct avtab *h, u32 nrules) 344 325 { 345 - int i; 346 - struct avtab_node *node, *tmp, *tail; 326 + int rc; 327 + u32 nslot = 0; 347 328 348 - memset(new, 0, sizeof(*new)); 349 - 350 - new->htable = kvcalloc(orig->nslot, sizeof(void *), GFP_KERNEL); 351 - if (!new->htable) 352 - return -ENOMEM; 353 - new->nslot = orig->nslot; 354 - new->mask = orig->mask; 355 - 356 - for (i = 0; i < orig->nslot; i++) { 357 - tail = NULL; 358 - for (node = orig->htable[i]; node; node = node->next) { 359 - tmp = kmem_cache_zalloc(avtab_node_cachep, GFP_KERNEL); 360 - if (!tmp) 361 - goto error; 362 - tmp->key = node->key; 363 - if (tmp->key.specified & AVTAB_XPERMS) { 364 - tmp->datum.u.xperms = 365 - kmem_cache_zalloc(avtab_xperms_cachep, 366 - GFP_KERNEL); 367 - if (!tmp->datum.u.xperms) { 368 - kmem_cache_free(avtab_node_cachep, tmp); 369 - goto error; 370 - } 371 - tmp->datum.u.xperms = node->datum.u.xperms; 372 - } else 373 - tmp->datum.u.data = node->datum.u.data; 374 - 375 - if (tail) 376 - tail->next = tmp; 377 - else 378 - new->htable[i] = tmp; 379 - 380 - tail = tmp; 381 - new->nel++; 329 + if (nrules != 0) { 330 + u32 shift = 1; 331 + u32 work = nrules >> 3; 332 + while (work) { 333 + work >>= 1; 334 + shift++; 382 335 } 336 + nslot = 1 << shift; 337 + if (nslot > MAX_AVTAB_HASH_BUCKETS) 338 + nslot = MAX_AVTAB_HASH_BUCKETS; 339 + 340 + rc = avtab_alloc_common(h, nslot); 341 + if (rc) 342 + return rc; 383 343 } 384 344 345 + pr_debug("SELinux: %d avtab hash slots, %d rules.\n", nslot, nrules); 385 346 return 0; 386 - error: 387 - avtab_destroy(new); 388 - return -ENOMEM; 347 + } 348 + 349 + int avtab_alloc_dup(struct avtab *new, const struct avtab *orig) 350 + { 351 + return avtab_alloc_common(new, orig->nslot); 389 352 } 390 353 391 354 void avtab_hash_eval(struct avtab *h, char *tag)
+1 -1
security/selinux/ss/avtab.h
··· 89 89 90 90 void avtab_init(struct avtab *h); 91 91 int avtab_alloc(struct avtab *, u32); 92 - int avtab_duplicate(struct avtab *new, struct avtab *orig); 92 + int avtab_alloc_dup(struct avtab *new, const struct avtab *orig); 93 93 struct avtab_datum *avtab_search(struct avtab *h, struct avtab_key *k); 94 94 void avtab_destroy(struct avtab *h); 95 95 void avtab_hash_eval(struct avtab *h, char *tag);
+6 -6
security/selinux/ss/conditional.c
··· 605 605 struct cond_av_list *orig, 606 606 struct avtab *avtab) 607 607 { 608 - struct avtab_node *avnode; 609 608 u32 i; 610 609 611 610 memset(new, 0, sizeof(*new)); ··· 614 615 return -ENOMEM; 615 616 616 617 for (i = 0; i < orig->len; i++) { 617 - avnode = avtab_search_node(avtab, &orig->nodes[i]->key); 618 - if (WARN_ON(!avnode)) 619 - return -EINVAL; 620 - new->nodes[i] = avnode; 618 + new->nodes[i] = avtab_insert_nonunique(avtab, 619 + &orig->nodes[i]->key, 620 + &orig->nodes[i]->datum); 621 + if (!new->nodes[i]) 622 + return -ENOMEM; 621 623 new->len++; 622 624 } 623 625 ··· 630 630 { 631 631 int rc, i, j; 632 632 633 - rc = avtab_duplicate(&newp->te_cond_avtab, &origp->te_cond_avtab); 633 + rc = avtab_alloc_dup(&newp->te_cond_avtab, &origp->te_cond_avtab); 634 634 if (rc) 635 635 return rc; 636 636
+120 -37
security/selinux/ss/services.c
··· 1552 1552 if (!str) 1553 1553 goto out; 1554 1554 } 1555 + retry: 1555 1556 rcu_read_lock(); 1556 1557 policy = rcu_dereference(state->policy); 1557 1558 policydb = &policy->policydb;
··· 1566 1565 } else if (rc) 1567 1566 goto out_unlock; 1568 1567 rc = sidtab_context_to_sid(sidtab, &context, sid); 1568 + if (rc == -ESTALE) { 1569 + rcu_read_unlock(); 1570 + if (context.str) { 1571 + str = context.str; 1572 + context.str = NULL; 1573 + } 1574 + context_destroy(&context); 1575 + goto retry; 1576 + } 1569 1577 context_destroy(&context); 1570 1578 out_unlock: 1571 1579 rcu_read_unlock();
··· 1724 1714 struct selinux_policy *policy; 1725 1715 struct policydb *policydb; 1726 1716 struct sidtab *sidtab; 1727 - struct class_datum *cladatum = NULL; 1717 + struct class_datum *cladatum; 1728 1718 struct context *scontext, *tcontext, newcontext; 1729 1719 struct sidtab_entry *sentry, *tentry; 1730 1720 struct avtab_key avkey;
··· 1746 1736 goto out; 1747 1737 } 1748 1738 1739 + retry: 1740 + cladatum = NULL; 1749 1741 context_init(&newcontext); 1750 1742 1751 1743 rcu_read_lock();
··· 1892 1880 } 1893 1881 /* Obtain the sid for the context. */ 1894 1882 rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid); 1883 + if (rc == -ESTALE) { 1884 + rcu_read_unlock(); 1885 + context_destroy(&newcontext); 1886 + goto retry; 1887 + } 1895 1888 out_unlock: 1896 1889 rcu_read_unlock(); 1897 1890 context_destroy(&newcontext);
··· 2209 2192 struct selinux_load_state *load_state) 2210 2193 { 2211 2194 struct selinux_policy *oldpolicy, *newpolicy = load_state->policy; 2195 + unsigned long flags; 2212 2196 u32 seqno; 2213 2197 2214 2198 oldpolicy = rcu_dereference_protected(state->policy,
··· 2231 2213 seqno = newpolicy->latest_granting; 2232 2214 2233 2215 /* Install the new policy. */ 2234 - rcu_assign_pointer(state->policy, newpolicy); 2216 + if (oldpolicy) { 2217 + sidtab_freeze_begin(oldpolicy->sidtab, &flags); 2218 + rcu_assign_pointer(state->policy, newpolicy); 2219 + sidtab_freeze_end(oldpolicy->sidtab, &flags); 2220 + } else { 2221 + rcu_assign_pointer(state->policy, newpolicy); 2222 + } 2236 2224 /* Load the policycaps from the new policy */ 2237 2225 security_load_policycaps(state, newpolicy);
··· 2381 2357 struct policydb *policydb; 2382 2358 struct sidtab *sidtab; 2383 2359 struct ocontext *c; 2384 - int rc = 0; 2360 + int rc; 2385 2361 2386 2362 if (!selinux_initialized(state)) { 2387 2363 *out_sid = SECINITSID_PORT; 2388 2364 return 0; 2389 2365 } 2390 2366 2367 + retry: 2368 + rc = 0; 2391 2369 rcu_read_lock(); 2392 2370 policy = rcu_dereference(state->policy); 2393 2371 policydb = &policy->policydb;
··· 2408 2382 if (!c->sid[0]) { 2409 2383 rc = sidtab_context_to_sid(sidtab, &c->context[0], 2410 2384 &c->sid[0]); 2385 + if (rc == -ESTALE) { 2386 + rcu_read_unlock(); 2387 + goto retry; 2388 + } 2411 2389 if (rc) 2412 2390 goto out; 2413 2391 }
··· 2438 2408 struct policydb *policydb; 2439 2409 struct sidtab *sidtab; 2440 2410 struct ocontext *c; 2441 - int rc = 0; 2411 + int rc; 2442 2412 2443 2413 if (!selinux_initialized(state)) { 2444 2414 *out_sid = SECINITSID_UNLABELED; 2445 2415 return 0; 2446 2416 } 2447 2417 2418 + retry: 2419 + rc = 0; 2448 2420 rcu_read_lock(); 2449 2421 policy = rcu_dereference(state->policy); 2450 2422 policydb = &policy->policydb;
··· 2467 2435 rc = sidtab_context_to_sid(sidtab, 2468 2436 &c->context[0], 2469 2437 &c->sid[0]); 2438 + if (rc == -ESTALE) { 2439 + rcu_read_unlock(); 2440 + goto retry; 2441 + } 2470 2442 if (rc) 2471 2443 goto out; 2472 2444 }
··· 2496 2460 struct policydb *policydb; 2497 2461 struct sidtab *sidtab; 2498 2462 struct ocontext *c; 2499 - int rc = 0; 2463 + int rc; 2500 2464 2501 2465 if (!selinux_initialized(state)) { 2502 2466 *out_sid = SECINITSID_UNLABELED; 2503 2467 return 0; 2504 2468 } 2505 2469 2470 + retry: 2471 + rc = 0; 2506 2472 rcu_read_lock(); 2507 2473 policy = rcu_dereference(state->policy); 2508 2474 policydb = &policy->policydb;
··· 2525 2487 if (!c->sid[0]) { 2526 2488 rc = sidtab_context_to_sid(sidtab, &c->context[0], 2527 2489 &c->sid[0]); 2490 + if (rc == -ESTALE) { 2491 + rcu_read_unlock(); 2492 + goto retry; 2493 + } 2528 2494 if (rc) 2529 2495 goto out; 2530 2496 }
··· 2552 2510 struct selinux_policy *policy; 2553 2511 struct policydb *policydb; 2554 2512 struct sidtab *sidtab; 2555 - int rc = 0; 2513 + int rc; 2556 2514 struct ocontext *c; 2557 2515 2558 2516 if (!selinux_initialized(state)) {
··· 2560 2518 return 0; 2561 2519 } 2562 2520 2521 + retry: 2522 + rc = 0; 2563 2523 rcu_read_lock(); 2564 2524 policy = rcu_dereference(state->policy); 2565 2525 policydb = &policy->policydb;
··· 2578 2534 if (!c->sid[0] || !c->sid[1]) { 2579 2535 rc = sidtab_context_to_sid(sidtab, &c->context[0], 2580 2536 &c->sid[0]); 2537 + if (rc == -ESTALE) { 2538 + rcu_read_unlock(); 2539 + goto retry; 2540 + } 2581 2541 if (rc) 2582 2542 goto out; 2583 2543 rc = sidtab_context_to_sid(sidtab, &c->context[1], 2584 2544 &c->sid[1]); 2545 + if (rc == -ESTALE) { 2546 + rcu_read_unlock(); 2547 + goto retry; 2548 + } 2585 2549 if (rc) 2586 2550 goto out; 2587 2551 }
··· 2639 2587 return 0; 2640 2588 } 2641 2589 2590 + retry: 2642 2591 rcu_read_lock(); 2643 2592 policy = rcu_dereference(state->policy); 2644 2593 policydb = &policy->policydb;
··· 2688 2635 rc = sidtab_context_to_sid(sidtab, 2689 2636 &c->context[0], 2690 2637 &c->sid[0]); 2638 + if (rc == -ESTALE) { 2639 + rcu_read_unlock(); 2640 + goto retry; 2641 + } 2691 2642 if (rc) 2692 2643 goto out; 2693 2644 }
··· 2733 2676 struct sidtab *sidtab; 2734 2677 struct context *fromcon, usercon; 2735 2678 u32 *mysids = NULL, *mysids2, sid; 2736 - u32 mynel = 0, maxnel = SIDS_NEL; 2679 + u32 i, j, mynel, maxnel = SIDS_NEL; 2737 2680 struct user_datum *user; 2738 2681 struct role_datum *role; 2739 2682 struct ebitmap_node *rnode, *tnode; 2740 - int rc = 0, i, j; 2683 + int rc; 2741 2684 2742 2685 *sids = NULL; 2743 2686 *nel = 0; 2744 2687 2745 2688 if (!selinux_initialized(state)) 2746 - goto out; 2689 + return 0; 2747 2690 2691 + mysids = kcalloc(maxnel, sizeof(*mysids), GFP_KERNEL); 2692 + if (!mysids) 2693 + return -ENOMEM; 2694 + 2695 + retry: 2696 + mynel = 0; 2748 2697 rcu_read_lock(); 2749 2698 policy = rcu_dereference(state->policy); 2750 2699 policydb = &policy->policydb;
··· 2770 2707 2771 2708 usercon.user = user->value; 2772 2709 2773 - rc = -ENOMEM; 2774 - mysids = kcalloc(maxnel, sizeof(*mysids), GFP_ATOMIC); 2775 - if (!mysids) 2776 - goto out_unlock; 2777 - 2778 2710 ebitmap_for_each_positive_bit(&user->roles, rnode, i) { 2779 2711 role = policydb->role_val_to_struct[i]; 2780 2712 usercon.role = i + 1;
··· 2781 2723 continue; 2782 2724 2783 2725 rc = sidtab_context_to_sid(sidtab, &usercon, &sid); 2726 + if (rc == -ESTALE) { 2727 + rcu_read_unlock(); 2728 + goto retry; 2729 + } 2784 2730 if (rc) 2785 2731 goto out_unlock; 2786 2732 if (mynel < maxnel) {
··· 2807 2745 rcu_read_unlock(); 2808 2746 if (rc || !mynel) { 2809 2747 kfree(mysids); 2810 - goto out; 2748 + return rc; 2811 2749 } 2812 2750 2813 2751 rc = -ENOMEM; 2814 2752 mysids2 = kcalloc(mynel, sizeof(*mysids2), GFP_KERNEL); 2815 2753 if (!mysids2) { 2816 2754 kfree(mysids); 2817 - goto out; 2755 + return rc; 2818 2756 } 2819 2757 for (i = 0, j = 0; i < mynel; i++) { 2820 2758 struct av_decision dummy_avd;
··· 2827 2765 mysids2[j++] = mysids[i]; 2828 2766 cond_resched(); 2829 2767 } 2830 - rc = 0; 2831 2768 kfree(mysids); 2832 2769 *sids = mysids2; 2833 2770 *nel = j; 2834 - out: 2835 - return rc; 2771 + return 0; 2836 2772 } 2837 2773 2838 2774 /**
··· 2843 2783 * Obtain a SID to use for a file in a filesystem that 2844 2784 * cannot support xattr or use a fixed labeling behavior like 2845 2785 * transition SIDs or task SIDs. 2786 + * 2787 + * WARNING: This function may return -ESTALE, indicating that the caller 2788 + * must retry the operation after re-acquiring the policy pointer! 2846 2789 */ 2847 2790 static inline int __security_genfs_sid(struct selinux_policy *policy, 2848 2791 const char *fstype,
··· 2924 2861 return 0; 2925 2862 } 2926 2863 2927 - rcu_read_lock(); 2928 - policy = rcu_dereference(state->policy); 2929 - retval = __security_genfs_sid(policy, 2930 - fstype, path, orig_sclass, sid); 2931 - rcu_read_unlock(); 2864 + do { 2865 + rcu_read_lock(); 2866 + policy = rcu_dereference(state->policy); 2867 + retval = __security_genfs_sid(policy, fstype, path, 2868 + orig_sclass, sid); 2869 + rcu_read_unlock(); 2870 + } while (retval == -ESTALE); 2932 2871 return retval; 2933 2872 }
··· 2953 2888 struct selinux_policy *policy; 2954 2889 struct policydb *policydb; 2955 2890 struct sidtab *sidtab; 2956 - int rc = 0; 2891 + int rc; 2957 2892 struct ocontext *c; 2958 2893 struct superblock_security_struct *sbsec = sb->s_security; 2959 2894 const char *fstype = sb->s_type->name;
··· 2964 2899 return 0; 2965 2900 } 2966 2901 2902 + retry: 2903 + rc = 0; 2967 2904 rcu_read_lock(); 2968 2905 policy = rcu_dereference(state->policy); 2969 2906 policydb = &policy->policydb;
··· 2983 2916 if (!c->sid[0]) { 2984 2917 rc = sidtab_context_to_sid(sidtab, &c->context[0], 2985 2918 &c->sid[0]); 2919 + if (rc == -ESTALE) { 2920 + rcu_read_unlock(); 2921 + goto retry; 2922 + } 2986 2923 if (rc) 2987 2924 goto out; 2988 2925 }
··· 2994 2923 } else { 2995 2924 rc = __security_genfs_sid(policy, fstype, "/", 2996 2925 SECCLASS_DIR, &sbsec->sid); 2926 + if (rc == -ESTALE) { 2927 + rcu_read_unlock(); 2928 + goto retry; 2929 + } 2997 2930 if (rc) { 2998 2931 sbsec->behavior = SECURITY_FS_USE_NONE; 2999 2932 rc = 0;
··· 3207 3132 u32 len; 3208 3133 int rc; 3209 3134 3210 - rc = 0; 3211 3135 if (!selinux_initialized(state)) { 3212 3136 *new_sid = sid; 3213 - goto out; 3137 + return 0; 3214 3138 } 3215 3139 3140 + retry: 3141 + rc = 0; 3216 3142 context_init(&newcon); 3217 3143 3218 3144 rcu_read_lock();
··· 3272 3196 } 3273 3197 } 3274 3198 rc = sidtab_context_to_sid(sidtab, &newcon, new_sid); 3199 + if (rc == -ESTALE) { 3200 + rcu_read_unlock(); 3201 + context_destroy(&newcon); 3202 + goto retry; 3203 + } 3275 3204 out_unlock: 3276 3205 rcu_read_unlock(); 3277 3206 context_destroy(&newcon); 3278 - out: 3279 3207 return rc; 3280 3208 }
··· 3872 3792 return 0; 3873 3793 } 3874 3794 3795 + retry: 3796 + rc = 0; 3875 3797 rcu_read_lock(); 3876 3798 policy = rcu_dereference(state->policy); 3877 3799 policydb = &policy->policydb;
··· 3900 3818 goto out; 3901 3819 } 3902 3820 rc = -EIDRM; 3903 - if (!mls_context_isvalid(policydb, &ctx_new)) 3904 - goto out_free; 3821 + if (!mls_context_isvalid(policydb, &ctx_new)) { 3822 + ebitmap_destroy(&ctx_new.range.level[0].cat); 3823 + goto out; 3824 + } 3905 3825 3906 3826 rc = sidtab_context_to_sid(sidtab, &ctx_new, sid); 3827 + ebitmap_destroy(&ctx_new.range.level[0].cat); 3828 + if (rc == -ESTALE) { 3829 + rcu_read_unlock(); 3830 + goto retry; 3831 + } 3907 3832 if (rc) 3908 - goto out_free; 3833 + goto out; 3909 3834 3910 3835 security_netlbl_cache_add(secattr, *sid); 3911 - 3912 - ebitmap_destroy(&ctx_new.range.level[0].cat); 3913 3836 } else 3914 3837 *sid = SECSID_NULL; 3915 3838 3916 - rcu_read_unlock(); 3917 - return 0; 3918 - out_free: 3919 - ebitmap_destroy(&ctx_new.range.level[0].cat); 3920 3839 out: 3921 3840 rcu_read_unlock(); 3922 3841 return rc;
+21
security/selinux/ss/sidtab.c
··· 39 39 for (i = 0; i < SECINITSID_NUM; i++) 40 40 s->isids[i].set = 0; 41 41 42 + s->frozen = false; 42 43 s->count = 0; 43 44 s->convert = NULL; 44 45 hash_init(s->context_to_sid); ··· 282 281 if (*sid) 283 282 goto out_unlock; 284 283 284 + if (unlikely(s->frozen)) { 285 + /* 286 + * This sidtab is now frozen - tell the caller to abort and 287 + * get the new one. 288 + */ 289 + rc = -ESTALE; 290 + goto out_unlock; 291 + } 292 + 285 293 count = s->count; 286 294 convert = s->convert; 287 295 ··· 482 472 spin_lock_irqsave(&s->lock, flags); 483 473 s->convert = NULL; 484 474 spin_unlock_irqrestore(&s->lock, flags); 475 + } 476 + 477 + void sidtab_freeze_begin(struct sidtab *s, unsigned long *flags) __acquires(&s->lock) 478 + { 479 + spin_lock_irqsave(&s->lock, *flags); 480 + s->frozen = true; 481 + s->convert = NULL; 482 + } 483 + void sidtab_freeze_end(struct sidtab *s, unsigned long *flags) __releases(&s->lock) 484 + { 485 + spin_unlock_irqrestore(&s->lock, *flags); 485 486 } 486 487 487 488 static void sidtab_destroy_entry(struct sidtab_entry *entry)
+4
security/selinux/ss/sidtab.h
··· 86 86 u32 count; 87 87 /* access only under spinlock */ 88 88 struct sidtab_convert_params *convert; 89 + bool frozen; 89 90 spinlock_t lock; 90 91 91 92 #if CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE > 0 ··· 125 124 int sidtab_convert(struct sidtab *s, struct sidtab_convert_params *params); 126 125 127 126 void sidtab_cancel_convert(struct sidtab *s); 127 + 128 + void sidtab_freeze_begin(struct sidtab *s, unsigned long *flags) __acquires(&s->lock); 129 + void sidtab_freeze_end(struct sidtab *s, unsigned long *flags) __releases(&s->lock); 128 130 129 131 int sidtab_context_to_sid(struct sidtab *s, struct context *context, u32 *sid); 130 132
+1 -1
security/tomoyo/network.c
··· 613 613 static bool tomoyo_kernel_service(void) 614 614 { 615 615 /* Nothing to do if I am a kernel service. */ 616 - return (current->flags & (PF_KTHREAD | PF_IO_WORKER)) == PF_KTHREAD; 616 + return current->flags & PF_KTHREAD; 617 617 } 618 618 619 619 /**
+8 -3
sound/drivers/aloop.c
··· 1571 1571 return -ENOMEM; 1572 1572 kctl->id.device = dev; 1573 1573 kctl->id.subdevice = substr; 1574 + 1575 + /* Add the control before copying the id so that 1576 + * the numid field of the id is set in the copy. 1577 + */ 1578 + err = snd_ctl_add(card, kctl); 1579 + if (err < 0) 1580 + return err; 1581 + 1574 1582 switch (idx) { 1575 1583 case ACTIVE_IDX: 1576 1584 setup->active_id = kctl->id; ··· 1595 1587 default: 1596 1588 break; 1597 1589 } 1598 - err = snd_ctl_add(card, kctl); 1599 - if (err < 0) 1600 - return err; 1601 1590 } 1602 1591 } 1603 1592 }
+8
sound/pci/hda/hda_intel.c
··· 989 989 struct snd_card *card = dev_get_drvdata(dev); 990 990 struct azx *chip; 991 991 992 + if (!azx_is_pm_ready(card)) 993 + return 0; 994 + 992 995 chip = card->private_data; 993 996 chip->pm_prepared = 1; 997 + snd_power_change_state(card, SNDRV_CTL_POWER_D3hot); 994 998 995 999 flush_work(&azx_bus(chip)->unsol_work); 996 1000 ··· 1009 1005 struct snd_card *card = dev_get_drvdata(dev); 1010 1006 struct azx *chip; 1011 1007 1008 + if (!azx_is_pm_ready(card)) 1009 + return; 1010 + 1012 1011 chip = card->private_data; 1012 + snd_power_change_state(card, SNDRV_CTL_POWER_D0); 1013 1013 chip->pm_prepared = 0; 1014 1014 } 1015 1015
+1
sound/pci/hda/patch_conexant.c
··· 944 944 SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE), 945 945 SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO), 946 946 SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED), 947 + SND_PCI_QUIRK(0x103c, 0x844f, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED), 947 948 SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE), 948 949 SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE), 949 950 SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+19 -1
sound/pci/hda/patch_realtek.c
··· 3927 3927 snd_hda_sequence_write(codec, verbs); 3928 3928 } 3929 3929 3930 + /* Fix the speaker amp after resume, etc */ 3931 + static void alc269vb_fixup_aspire_e1_coef(struct hda_codec *codec, 3932 + const struct hda_fixup *fix, 3933 + int action) 3934 + { 3935 + if (action == HDA_FIXUP_ACT_INIT) 3936 + alc_update_coef_idx(codec, 0x0d, 0x6000, 0x6000); 3937 + } 3938 + 3930 3939 static void alc269_fixup_pcm_44k(struct hda_codec *codec, 3931 3940 const struct hda_fixup *fix, int action) 3932 3941 {
··· 5265 5256 case 0x10ec0274: 5266 5257 case 0x10ec0294: 5267 5258 alc_process_coef_fw(codec, coef0274); 5268 - msleep(80); 5259 + msleep(850); 5269 5260 val = alc_read_coef_idx(codec, 0x46); 5270 5261 is_ctia = (val & 0x00f0) == 0x00f0; 5271 5262 break;
··· 5449 5440 struct hda_jack_callback *jack) 5450 5441 { 5451 5442 snd_hda_gen_hp_automute(codec, jack); 5443 + alc_update_headset_mode(codec); 5452 5444 } 5453 5445 5454 5446 static void alc_probe_headset_mode(struct hda_codec *codec)
··· 6310 6300 ALC283_FIXUP_HEADSET_MIC, 6311 6301 ALC255_FIXUP_MIC_MUTE_LED, 6312 6302 ALC282_FIXUP_ASPIRE_V5_PINS, 6303 + ALC269VB_FIXUP_ASPIRE_E1_COEF, 6313 6304 ALC280_FIXUP_HP_GPIO4, 6314 6305 ALC286_FIXUP_HP_GPIO_LED, 6315 6306 ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY,
··· 6988 6977 { 0x21, 0x0321101f }, 6989 6978 { }, 6990 6979 }, 6980 + }, 6981 + [ALC269VB_FIXUP_ASPIRE_E1_COEF] = { 6982 + .type = HDA_FIXUP_FUNC, 6983 + .v.func = alc269vb_fixup_aspire_e1_coef, 6991 6984 }, 6992 6985 [ALC280_FIXUP_HP_GPIO4] = { 6993 6986 .type = HDA_FIXUP_FUNC,
··· 7915 7900 SND_PCI_QUIRK(0x1025, 0x0762, "Acer Aspire E1-472", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572), 7916 7901 SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572), 7917 7902 SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS), 7903 + SND_PCI_QUIRK(0x1025, 0x0840, "Acer Aspire E1", ALC269VB_FIXUP_ASPIRE_E1_COEF), 7918 7904 SND_PCI_QUIRK(0x1025, 0x101c, "Acer Veriton N2510G", ALC269_FIXUP_LIFEBOOK), 7919 7905 SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), 7920 7906 SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC),
··· 8073 8057 ALC285_FIXUP_HP_GPIO_AMP_INIT), 8074 8058 SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED), 8075 8059 SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED), 8060 + SND_PCI_QUIRK(0x103c, 0x87f2, "HP ProBook 640 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED), 8076 8061 SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED), 8077 8062 SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED), 8078 8063 SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
··· 8410 8393 {.id = ALC283_FIXUP_HEADSET_MIC, .name = "alc283-headset"}, 8411 8394 {.id = ALC255_FIXUP_MIC_MUTE_LED, .name = "alc255-dell-mute"}, 8412 8395 {.id = ALC282_FIXUP_ASPIRE_V5_PINS, .name = "aspire-v5"}, 8396 + {.id = ALC269VB_FIXUP_ASPIRE_E1_COEF, .name = "aspire-e1-coef"}, 8413 8397 {.id = ALC280_FIXUP_HP_GPIO4, .name = "hp-gpio4"}, 8414 8398 {.id = ALC286_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"}, 8415 8399 {.id = ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY, .name = "hp-gpio2-hotkey"},
+3 -1
sound/soc/bcm/cygnus-ssp.c
··· 1348 1348 &cygnus_ssp_dai[active_port_count]); 1349 1349 1350 1350 /* negative is err, 0 is active and good, 1 is disabled */ 1351 - if (err < 0) 1351 + if (err < 0) { 1352 + of_node_put(child_node); 1352 1353 return err; 1354 + } 1353 1355 else if (!err) { 1354 1356 dev_dbg(dev, "Activating DAI: %s\n", 1355 1357 cygnus_ssp_dai[active_port_count].name);
+1 -1
sound/soc/codecs/lpass-rx-macro.c
··· 3551 3551 3552 3552 /* set MCLK and NPL rates */ 3553 3553 clk_set_rate(rx->clks[2].clk, MCLK_FREQ); 3554 - clk_set_rate(rx->clks[3].clk, MCLK_FREQ); 3554 + clk_set_rate(rx->clks[3].clk, 2 * MCLK_FREQ); 3555 3555 3556 3556 ret = clk_bulk_prepare_enable(RX_NUM_CLKS_MAX, rx->clks); 3557 3557 if (ret)
+1 -1
sound/soc/codecs/lpass-tx-macro.c
··· 1811 1811 1812 1812 /* set MCLK and NPL rates */ 1813 1813 clk_set_rate(tx->clks[2].clk, MCLK_FREQ); 1814 - clk_set_rate(tx->clks[3].clk, MCLK_FREQ); 1814 + clk_set_rate(tx->clks[3].clk, 2 * MCLK_FREQ); 1815 1815 1816 1816 ret = clk_bulk_prepare_enable(TX_NUM_CLKS_MAX, tx->clks); 1817 1817 if (ret)
+1
sound/soc/codecs/max98373-i2c.c
··· 446 446 case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK: 447 447 case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK: 448 448 case MAX98373_R20B6_BDE_CUR_STATE_READBACK: 449 + case MAX98373_R20FF_GLOBAL_SHDN: 449 450 case MAX98373_R21FF_REV_ID: 450 451 return true; 451 452 default:
+1
sound/soc/codecs/max98373-sdw.c
··· 220 220 case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK: 221 221 case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK: 222 222 case MAX98373_R20B6_BDE_CUR_STATE_READBACK: 223 + case MAX98373_R20FF_GLOBAL_SHDN: 223 224 case MAX98373_R21FF_REV_ID: 224 225 /* SoundWire Control Port Registers */ 225 226 case MAX98373_R0040_SCP_INIT_STAT_1 ... MAX98373_R0070_SCP_FRAME_CTLR:
+2
sound/soc/codecs/max98373.c
··· 28 28 regmap_update_bits(max98373->regmap, 29 29 MAX98373_R20FF_GLOBAL_SHDN, 30 30 MAX98373_GLOBAL_EN_MASK, 1); 31 + usleep_range(30000, 31000); 31 32 break; 32 33 case SND_SOC_DAPM_POST_PMD: 33 34 regmap_update_bits(max98373->regmap, 34 35 MAX98373_R20FF_GLOBAL_SHDN, 35 36 MAX98373_GLOBAL_EN_MASK, 0); 37 + usleep_range(30000, 31000); 36 38 max98373->tdm_mode = false; 37 39 break; 38 40 default:
+7 -1
sound/soc/codecs/wm8960.c
··· 707 707 best_freq_out = -EINVAL; 708 708 *sysclk_idx = *dac_idx = *bclk_idx = -1; 709 709 710 - for (i = 0; i < ARRAY_SIZE(sysclk_divs); ++i) { 710 + /* 711 + * From Datasheet, the PLL performs best when f2 is between 712 + * 90MHz and 100MHz, the desired sysclk output is 11.2896MHz 713 + * or 12.288MHz, then sysclkdiv = 2 is the best choice. 714 + * So search sysclk_divs from 2 to 1 other than from 1 to 2. 715 + */ 716 + for (i = ARRAY_SIZE(sysclk_divs) - 1; i >= 0; --i) { 711 717 if (sysclk_divs[i] == -1) 712 718 continue; 713 719 for (j = 0; j < ARRAY_SIZE(dac_divs); ++j) {
+5 -3
sound/soc/fsl/fsl_esai.c
··· 519 519 ESAI_SAICR_SYNC, esai_priv->synchronous ? 520 520 ESAI_SAICR_SYNC : 0); 521 521 522 - /* Set a default slot number -- 2 */ 522 + /* Set slots count */ 523 523 regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR, 524 - ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(2)); 524 + ESAI_xCCR_xDC_MASK, 525 + ESAI_xCCR_xDC(esai_priv->slots)); 525 526 regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR, 526 - ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(2)); 527 + ESAI_xCCR_xDC_MASK, 528 + ESAI_xCCR_xDC(esai_priv->slots)); 527 529 } 528 530 529 531 return 0;
+6 -6
sound/soc/intel/atom/sst-mfld-platform-pcm.c
··· 487 487 .stream_name = "Headset Playback", 488 488 .channels_min = SST_STEREO, 489 489 .channels_max = SST_STEREO, 490 - .rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000, 491 - .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE, 490 + .rates = SNDRV_PCM_RATE_48000, 491 + .formats = SNDRV_PCM_FMTBIT_S16_LE, 492 492 }, 493 493 .capture = { 494 494 .stream_name = "Headset Capture", 495 495 .channels_min = 1, 496 496 .channels_max = 2, 497 - .rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000, 498 - .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE, 497 + .rates = SNDRV_PCM_RATE_48000, 498 + .formats = SNDRV_PCM_FMTBIT_S16_LE, 499 499 }, 500 500 }, 501 501 { ··· 505 505 .stream_name = "Deepbuffer Playback", 506 506 .channels_min = SST_STEREO, 507 507 .channels_max = SST_STEREO, 508 - .rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000, 509 - .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE, 508 + .rates = SNDRV_PCM_RATE_48000, 509 + .formats = SNDRV_PCM_FMTBIT_S16_LE, 510 510 }, 511 511 }, 512 512 {
+7 -1
sound/soc/sof/core.c
··· 399 399 { 400 400 struct snd_sof_dev *sdev = dev_get_drvdata(dev); 401 401 402 - return snd_sof_shutdown(sdev); 402 + if (IS_ENABLED(CONFIG_SND_SOC_SOF_PROBE_WORK_QUEUE)) 403 + cancel_work_sync(&sdev->probe_work); 404 + 405 + if (sdev->fw_state == SOF_FW_BOOT_COMPLETE) 406 + return snd_sof_shutdown(sdev); 407 + 408 + return 0; 403 409 } 404 410 EXPORT_SYMBOL(snd_sof_device_shutdown); 405 411
+2 -1
sound/soc/sof/intel/apl.c
··· 27 27 28 28 /* apollolake ops */ 29 29 const struct snd_sof_dsp_ops sof_apl_ops = { 30 - /* probe and remove */ 30 + /* probe/remove/shutdown */ 31 31 .probe = hda_dsp_probe, 32 32 .remove = hda_dsp_remove, 33 + .shutdown = hda_dsp_shutdown, 33 34 34 35 /* Register IO */ 35 36 .write = sof_io_write,
+2 -17
sound/soc/sof/intel/cnl.c
··· 232 232 233 233 /* cannonlake ops */ 234 234 const struct snd_sof_dsp_ops sof_cnl_ops = { 235 - /* probe and remove */ 235 + /* probe/remove/shutdown */ 236 236 .probe = hda_dsp_probe, 237 237 .remove = hda_dsp_remove, 238 + .shutdown = hda_dsp_shutdown, 238 239 239 240 /* Register IO */ 240 241 .write = sof_io_write, ··· 349 348 .ssp_base_offset = CNL_SSP_BASE_OFFSET, 350 349 }; 351 350 EXPORT_SYMBOL_NS(cnl_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON); 352 - 353 - const struct sof_intel_dsp_desc ehl_chip_info = { 354 - /* Elkhartlake */ 355 - .cores_num = 4, 356 - .init_core_mask = 1, 357 - .host_managed_cores_mask = BIT(0), 358 - .ipc_req = CNL_DSP_REG_HIPCIDR, 359 - .ipc_req_mask = CNL_DSP_REG_HIPCIDR_BUSY, 360 - .ipc_ack = CNL_DSP_REG_HIPCIDA, 361 - .ipc_ack_mask = CNL_DSP_REG_HIPCIDA_DONE, 362 - .ipc_ctl = CNL_DSP_REG_HIPCCTL, 363 - .rom_init_timeout = 300, 364 - .ssp_count = ICL_SSP_COUNT, 365 - .ssp_base_offset = CNL_SSP_BASE_OFFSET, 366 - }; 367 - EXPORT_SYMBOL_NS(ehl_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON); 368 351 369 352 const struct sof_intel_dsp_desc jsl_chip_info = { 370 353 /* Jasperlake */
+17 -4
sound/soc/sof/intel/hda-dsp.c
··· 226 226 227 227 val = snd_sof_dsp_read(sdev, HDA_DSP_BAR, HDA_DSP_REG_ADSPCS); 228 228 229 - is_enable = (val & HDA_DSP_ADSPCS_CPA_MASK(core_mask)) && 230 - (val & HDA_DSP_ADSPCS_SPA_MASK(core_mask)) && 231 - !(val & HDA_DSP_ADSPCS_CRST_MASK(core_mask)) && 232 - !(val & HDA_DSP_ADSPCS_CSTALL_MASK(core_mask)); 229 + #define MASK_IS_EQUAL(v, m, field) ({ \ 230 + u32 _m = field(m); \ 231 + ((v) & _m) == _m; \ 232 + }) 233 + 234 + is_enable = MASK_IS_EQUAL(val, core_mask, HDA_DSP_ADSPCS_CPA_MASK) && 235 + MASK_IS_EQUAL(val, core_mask, HDA_DSP_ADSPCS_SPA_MASK) && 236 + !(val & HDA_DSP_ADSPCS_CRST_MASK(core_mask)) && 237 + !(val & HDA_DSP_ADSPCS_CSTALL_MASK(core_mask)); 238 + 239 + #undef MASK_IS_EQUAL 233 240 234 241 dev_dbg(sdev->dev, "DSP core(s) enabled? %d : core_mask %x\n", 235 242 is_enable, core_mask); ··· 890 883 } 891 884 892 885 return snd_sof_dsp_set_power_state(sdev, &target_dsp_state); 886 + } 887 + 888 + int hda_dsp_shutdown(struct snd_sof_dev *sdev) 889 + { 890 + sdev->system_suspend_target = SOF_SUSPEND_S3; 891 + return snd_sof_suspend(sdev->dev); 893 892 } 894 893 895 894 int hda_dsp_set_hw_params_upon_resume(struct snd_sof_dev *sdev)
+1
sound/soc/sof/intel/hda.h
··· 517 517 int hda_dsp_runtime_suspend(struct snd_sof_dev *sdev); 518 518 int hda_dsp_runtime_resume(struct snd_sof_dev *sdev); 519 519 int hda_dsp_runtime_idle(struct snd_sof_dev *sdev); 520 + int hda_dsp_shutdown(struct snd_sof_dev *sdev); 520 521 int hda_dsp_set_hw_params_upon_resume(struct snd_sof_dev *sdev); 521 522 void hda_dsp_dump(struct snd_sof_dev *sdev, u32 flags); 522 523 void hda_ipc_dump(struct snd_sof_dev *sdev);
+2 -1
sound/soc/sof/intel/icl.c
··· 26 26 27 27 /* Icelake ops */ 28 28 const struct snd_sof_dsp_ops sof_icl_ops = { 29 - /* probe and remove */ 29 + /* probe/remove/shutdown */ 30 30 .probe = hda_dsp_probe, 31 31 .remove = hda_dsp_remove, 32 + .shutdown = hda_dsp_shutdown, 32 33 33 34 /* Register IO */ 34 35 .write = sof_io_write,
+1 -1
sound/soc/sof/intel/pci-tgl.c
··· 65 65 .default_tplg_path = "intel/sof-tplg", 66 66 .default_fw_filename = "sof-ehl.ri", 67 67 .nocodec_tplg_filename = "sof-ehl-nocodec.tplg", 68 - .ops = &sof_cnl_ops, 68 + .ops = &sof_tgl_ops, 69 69 }; 70 70 71 71 static const struct sof_dev_desc adls_desc = {
+17 -1
sound/soc/sof/intel/tgl.c
··· 25 25 /* probe/remove/shutdown */ 26 26 .probe = hda_dsp_probe, 27 27 .remove = hda_dsp_remove, 28 - .shutdown = hda_dsp_remove, 28 + .shutdown = hda_dsp_shutdown, 29 29 30 30 /* Register IO */ 31 31 .write = sof_io_write, ··· 155 155 .ssp_base_offset = CNL_SSP_BASE_OFFSET, 156 156 }; 157 157 EXPORT_SYMBOL_NS(tglh_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON); 158 + 159 + const struct sof_intel_dsp_desc ehl_chip_info = { 160 + /* Elkhartlake */ 161 + .cores_num = 4, 162 + .init_core_mask = 1, 163 + .host_managed_cores_mask = BIT(0), 164 + .ipc_req = CNL_DSP_REG_HIPCIDR, 165 + .ipc_req_mask = CNL_DSP_REG_HIPCIDR_BUSY, 166 + .ipc_ack = CNL_DSP_REG_HIPCIDA, 167 + .ipc_ack_mask = CNL_DSP_REG_HIPCIDA_DONE, 168 + .ipc_ctl = CNL_DSP_REG_HIPCCTL, 169 + .rom_init_timeout = 300, 170 + .ssp_count = ICL_SSP_COUNT, 171 + .ssp_base_offset = CNL_SSP_BASE_OFFSET, 172 + }; 173 + EXPORT_SYMBOL_NS(ehl_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON); 158 174 159 175 const struct sof_intel_dsp_desc adls_chip_info = { 160 176 /* Alderlake-S */
+5
sound/soc/sunxi/sun4i-codec.c
··· 1364 1364 return ERR_PTR(-ENOMEM); 1365 1365 1366 1366 card->dev = dev; 1367 + card->owner = THIS_MODULE; 1367 1368 card->name = "sun4i-codec"; 1368 1369 card->dapm_widgets = sun4i_codec_card_dapm_widgets; 1369 1370 card->num_dapm_widgets = ARRAY_SIZE(sun4i_codec_card_dapm_widgets); ··· 1397 1396 return ERR_PTR(-ENOMEM); 1398 1397 1399 1398 card->dev = dev; 1399 + card->owner = THIS_MODULE; 1400 1400 card->name = "A31 Audio Codec"; 1401 1401 card->dapm_widgets = sun6i_codec_card_dapm_widgets; 1402 1402 card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets); ··· 1451 1449 return ERR_PTR(-ENOMEM); 1452 1450 1453 1451 card->dev = dev; 1452 + card->owner = THIS_MODULE; 1454 1453 card->name = "A23 Audio Codec"; 1455 1454 card->dapm_widgets = sun6i_codec_card_dapm_widgets; 1456 1455 card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets); ··· 1490 1487 return ERR_PTR(-ENOMEM); 1491 1488 1492 1489 card->dev = dev; 1490 + card->owner = THIS_MODULE; 1493 1491 card->name = "H3 Audio Codec"; 1494 1492 card->dapm_widgets = sun6i_codec_card_dapm_widgets; 1495 1493 card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets); ··· 1529 1525 return ERR_PTR(-ENOMEM); 1530 1526 1531 1527 card->dev = dev; 1528 + card->owner = THIS_MODULE; 1532 1529 card->name = "V3s Audio Codec"; 1533 1530 card->dapm_widgets = sun6i_codec_card_dapm_widgets; 1534 1531 card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets);
+1
sound/usb/quirks.c
··· 1521 1521 case USB_ID(0x21b4, 0x0081): /* AudioQuest DragonFly */ 1522 1522 case USB_ID(0x2912, 0x30c8): /* Audioengine D1 */ 1523 1523 case USB_ID(0x413c, 0xa506): /* Dell AE515 sound bar */ 1524 + case USB_ID(0x046d, 0x084c): /* Logitech ConferenceCam Connect */ 1524 1525 return true; 1525 1526 } 1526 1527
+13
tools/include/uapi/linux/kvm.h
··· 1154 1154 #define KVM_XEN_HVM_CONFIG_HYPERCALL_MSR (1 << 0) 1155 1155 #define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL (1 << 1) 1156 1156 #define KVM_XEN_HVM_CONFIG_SHARED_INFO (1 << 2) 1157 + #define KVM_XEN_HVM_CONFIG_RUNSTATE (1 << 3) 1157 1158 1158 1159 struct kvm_xen_hvm_config { 1159 1160 __u32 flags; ··· 1622 1621 union { 1623 1622 __u64 gpa; 1624 1623 __u64 pad[8]; 1624 + struct { 1625 + __u64 state; 1626 + __u64 state_entry_time; 1627 + __u64 time_running; 1628 + __u64 time_runnable; 1629 + __u64 time_blocked; 1630 + __u64 time_offline; 1631 + } runstate; 1625 1632 } u; 1626 1633 }; 1627 1634 1628 1635 /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */ 1629 1636 #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO 0x0 1630 1637 #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO 0x1 1638 + #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR 0x2 1639 + #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT 0x3 1640 + #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_DATA 0x4 1641 + #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST 0x5 1631 1642 1632 1643 /* Secure Encrypted Virtualization command */ 1633 1644 enum sev_cmd_id {
+1
tools/kvm/kvm_stat/kvm_stat.service
··· 9 9 ExecStart=/usr/bin/kvm_stat -dtcz -s 10 -L /var/log/kvm_stat.csv 10 10 ExecReload=/bin/kill -HUP $MAINPID 11 11 Restart=always 12 + RestartSec=60s 12 13 SyslogIdentifier=kvm_stat 13 14 SyslogLevel=debug 14 15
+1 -1
tools/lib/bpf/ringbuf.c
··· 227 227 if ((len & BPF_RINGBUF_DISCARD_BIT) == 0) { 228 228 sample = (void *)len_ptr + BPF_RINGBUF_HDR_SZ; 229 229 err = r->sample_cb(r->ctx, sample, len); 230 - if (err) { 230 + if (err < 0) { 231 231 /* update consumer pos and bail out */ 232 232 smp_store_release(r->consumer_pos, 233 233 cons_pos);
+37 -20
tools/lib/bpf/xsk.c
··· 60 60 int fd; 61 61 int refcount; 62 62 struct list_head ctx_list; 63 + bool rx_ring_setup_done; 64 + bool tx_ring_setup_done; 63 65 }; 64 66 65 67 struct xsk_ctx { ··· 910 908 return NULL; 911 909 } 912 910 913 - static void xsk_put_ctx(struct xsk_ctx *ctx) 911 + static void xsk_put_ctx(struct xsk_ctx *ctx, bool unmap) 914 912 { 915 913 struct xsk_umem *umem = ctx->umem; 916 914 struct xdp_mmap_offsets off; 917 915 int err; 918 916 919 - if (--ctx->refcount == 0) { 920 - err = xsk_get_mmap_offsets(umem->fd, &off); 921 - if (!err) { 922 - munmap(ctx->fill->ring - off.fr.desc, 923 - off.fr.desc + umem->config.fill_size * 924 - sizeof(__u64)); 925 - munmap(ctx->comp->ring - off.cr.desc, 926 - off.cr.desc + umem->config.comp_size * 927 - sizeof(__u64)); 928 - } 917 + if (--ctx->refcount) 918 + return; 929 919 930 - list_del(&ctx->list); 931 - free(ctx); 932 - } 920 + if (!unmap) 921 + goto out_free; 922 + 923 + err = xsk_get_mmap_offsets(umem->fd, &off); 924 + if (err) 925 + goto out_free; 926 + 927 + munmap(ctx->fill->ring - off.fr.desc, off.fr.desc + umem->config.fill_size * 928 + sizeof(__u64)); 929 + munmap(ctx->comp->ring - off.cr.desc, off.cr.desc + umem->config.comp_size * 930 + sizeof(__u64)); 931 + 932 + out_free: 933 + list_del(&ctx->list); 934 + free(ctx); 933 935 } 934 936 935 937 static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk, ··· 968 962 memcpy(ctx->ifname, ifname, IFNAMSIZ - 1); 969 963 ctx->ifname[IFNAMSIZ - 1] = '\0'; 970 964 971 - umem->fill_save = NULL; 972 - umem->comp_save = NULL; 973 965 ctx->fill = fill; 974 966 ctx->comp = comp; 975 967 list_add(&ctx->list, &umem->ctx_list); ··· 1023 1019 struct xsk_socket *xsk; 1024 1020 struct xsk_ctx *ctx; 1025 1021 int err, ifindex; 1022 + bool unmap = umem->fill_save != fill; 1023 + bool rx_setup_done = false, tx_setup_done = false; 1026 1024 1027 1025 if (!umem || !xsk_ptr || !(rx || tx)) 1028 1026 return -EFAULT; ··· 1052 1046 } 1053 1047 } else { 1054 1048 xsk->fd = umem->fd; 1049 +
rx_setup_done = umem->rx_ring_setup_done; 1050 + tx_setup_done = umem->tx_ring_setup_done; 1055 1051 } 1056 1052 1057 1053 ctx = xsk_get_ctx(umem, ifindex, queue_id); ··· 1073 1065 xsk->ctx = ctx; 1074 1066 xsk->ctx->has_bpf_link = xsk_probe_bpf_link(); 1075 1067 1076 - if (rx) { 1068 + if (rx && !rx_setup_done) { 1077 1069 err = setsockopt(xsk->fd, SOL_XDP, XDP_RX_RING, 1078 1070 &xsk->config.rx_size, 1079 1071 sizeof(xsk->config.rx_size)); ··· 1081 1073 err = -errno; 1082 1074 goto out_put_ctx; 1083 1075 } 1076 + if (xsk->fd == umem->fd) 1077 + umem->rx_ring_setup_done = true; 1084 1078 } 1085 - if (tx) { 1079 + if (tx && !tx_setup_done) { 1086 1080 err = setsockopt(xsk->fd, SOL_XDP, XDP_TX_RING, 1087 1081 &xsk->config.tx_size, 1088 1082 sizeof(xsk->config.tx_size)); ··· 1092 1082 err = -errno; 1093 1083 goto out_put_ctx; 1094 1084 } 1085 + if (xsk->fd == umem->fd) 1086 + umem->tx_ring_setup_done = true; 1095 1087 } 1096 1088 1097 1089 err = xsk_get_mmap_offsets(xsk->fd, &off); ··· 1172 1160 } 1173 1161 1174 1162 *xsk_ptr = xsk; 1163 + umem->fill_save = NULL; 1164 + umem->comp_save = NULL; 1175 1165 return 0; 1176 1166 1177 1167 out_mmap_tx: ··· 1185 1171 munmap(rx_map, off.rx.desc + 1186 1172 xsk->config.rx_size * sizeof(struct xdp_desc)); 1187 1173 out_put_ctx: 1188 - xsk_put_ctx(ctx); 1174 + xsk_put_ctx(ctx, unmap); 1189 1175 out_socket: 1190 1176 if (--umem->refcount) 1191 1177 close(xsk->fd); ··· 1199 1185 struct xsk_ring_cons *rx, struct xsk_ring_prod *tx, 1200 1186 const struct xsk_socket_config *usr_config) 1201 1187 { 1188 + if (!umem) 1189 + return -EFAULT; 1190 + 1202 1191 return xsk_socket__create_shared(xsk_ptr, ifname, queue_id, umem, 1203 1192 rx, tx, umem->fill_save, 1204 1193 umem->comp_save, usr_config); ··· 1253 1236 } 1254 1237 } 1255 1238 1256 - xsk_put_ctx(ctx); 1239 + xsk_put_ctx(ctx, true); 1257 1240 1258 1241 umem->refcount--; 1259 1242 /* Do not close an fd that also has an associated umem connected
+34 -25
tools/perf/builtin-daemon.c
··· 402 402 int status; 403 403 pid_t pid; 404 404 405 + /* 406 + * Take signal fd data as pure signal notification and check all 407 + * the sessions state. The reason is that multiple signals can get 408 + * coalesced in kernel and we can receive only single signal even 409 + * if multiple SIGCHLD were generated. 410 + */ 405 411 err = read(daemon->signal_fd, &si, sizeof(struct signalfd_siginfo)); 406 - if (err != sizeof(struct signalfd_siginfo)) 412 + if (err != sizeof(struct signalfd_siginfo)) { 413 + pr_err("failed to read signal fd\n"); 407 414 return -1; 415 + } 408 416 409 417 list_for_each_entry(session, &daemon->sessions, list) { 410 - 411 - if (session->pid != (int) si.ssi_pid) 418 + if (session->pid == -1) 412 419 continue; 413 420 414 - pid = waitpid(session->pid, &status, 0); 415 - if (pid == session->pid) { 416 - if (WIFEXITED(status)) { 417 - pr_info("session '%s' exited, status=%d\n", 418 - session->name, WEXITSTATUS(status)); 419 - } else if (WIFSIGNALED(status)) { 420 - pr_info("session '%s' killed (signal %d)\n", 421 - session->name, WTERMSIG(status)); 422 - } else if (WIFSTOPPED(status)) { 423 - pr_info("session '%s' stopped (signal %d)\n", 424 - session->name, WSTOPSIG(status)); 425 - } else { 426 - pr_info("session '%s' Unexpected status (0x%x)\n", 427 - session->name, status); 428 - } 421 + pid = waitpid(session->pid, &status, WNOHANG); 422 + if (pid <= 0) 423 + continue; 424 + 425 + if (WIFEXITED(status)) { 426 + pr_info("session '%s' exited, status=%d\n", 427 + session->name, WEXITSTATUS(status)); 428 + } else if (WIFSIGNALED(status)) { 429 + pr_info("session '%s' killed (signal %d)\n", 430 + session->name, WTERMSIG(status)); 431 + } else if (WIFSTOPPED(status)) { 432 + pr_info("session '%s' stopped (signal %d)\n", 433 + session->name, WSTOPSIG(status)); 434 + } else { 435 + pr_info("session '%s' Unexpected status (0x%x)\n", 436 + session->name, status); 429 437 } 430 438 431 439 session->state = KILL; 432 440 session->pid = -1; 433 -
return pid; 434 441 } 435 442 436 443 return 0; ··· 450 443 .fd = daemon->signal_fd, 451 444 .events = POLLIN, 452 445 }; 453 - pid_t wpid = 0, pid = session->pid; 454 446 time_t start; 455 447 456 448 start = time(NULL); ··· 458 452 int err = poll(&pollfd, 1, 1000); 459 453 460 454 if (err > 0) { 461 - wpid = handle_signalfd(daemon); 455 + handle_signalfd(daemon); 462 456 } else if (err < 0) { 463 457 perror("failed: poll\n"); 464 458 return -1; ··· 466 460 467 461 if (start + secs < time(NULL)) 468 462 return -1; 469 - } while (wpid != pid); 463 + } while (session->pid != -1); 470 464 471 465 return 0; 472 466 } ··· 908 902 daemon_session__signal(session, SIGKILL); 909 903 break; 910 904 default: 911 - break; 905 + pr_err("failed to wait for session %s\n", 906 + session->name); 907 + return; 912 908 } 913 909 how++; 914 910 ··· 963 955 daemon__signal(daemon, SIGKILL); 964 956 break; 965 957 default: 966 - break; 958 + pr_err("failed to wait for sessions\n"); 959 + return; 967 960 } 968 961 how++; 969 962 ··· 1353 1344 close(sock_fd); 1354 1345 if (conf_fd != -1) 1355 1346 close(conf_fd); 1356 - if (conf_fd != -1) 1347 + if (signal_fd != -1) 1357 1348 close(signal_fd); 1358 1349 1359 1350 pr_info("daemon exited\n");
+1 -8
tools/perf/tests/bpf.c
··· 86 86 .msg_load_fail = "check your vmlinux setting?", 87 87 .target_func = &epoll_pwait_loop, 88 88 .expect_result = (NR_ITERS + 1) / 2, 89 - .pin = true, 89 + .pin = true, 90 90 }, 91 91 #ifdef HAVE_BPF_PROLOGUE 92 92 { ··· 99 99 .expect_result = (NR_ITERS + 1) / 4, 100 100 }, 101 101 #endif 102 - { 103 - .prog_id = LLVM_TESTCASE_BPF_RELOCATION, 104 - .desc = "BPF relocation checker", 105 - .name = "[bpf_relocation_test]", 106 - .msg_compile_fail = "fix 'perf test LLVM' first", 107 - .msg_load_fail = "libbpf error when dealing with relocation", 108 - }, 109 102 }; 110 103 111 104 static int do_test(struct bpf_object *obj, int (*func)(void),
+1 -1
tools/perf/tests/shell/daemon.sh
··· 1 - #!/bin/sh 1 + #!/bin/bash 2 2 # daemon operations 3 3 # SPDX-License-Identifier: GPL-2.0 4 4
-4
tools/perf/util/auxtrace.c
··· 298 298 queue->set = true; 299 299 queue->tid = buffer->tid; 300 300 queue->cpu = buffer->cpu; 301 - } else if (buffer->cpu != queue->cpu || buffer->tid != queue->tid) { 302 - pr_err("auxtrace queue conflict: cpu %d, tid %d vs cpu %d, tid %d\n", 303 - queue->cpu, queue->tid, buffer->cpu, buffer->tid); 304 - return -EINVAL; 305 301 } 306 302 307 303 buffer->buffer_nr = queues->next_buffer_nr++;
+10 -3
tools/perf/util/bpf-event.c
··· 196 196 } 197 197 198 198 if (info_linear->info_len < offsetof(struct bpf_prog_info, prog_tags)) { 199 + free(info_linear); 199 200 pr_debug("%s: the kernel is too old, aborting\n", __func__); 200 201 return -2; 201 202 } 202 203 203 204 info = &info_linear->info; 205 + if (!info->jited_ksyms) { 206 + free(info_linear); 207 + return -1; 208 + } 204 209 205 210 /* number of ksyms, func_lengths, and tags should match */ 206 211 sub_prog_cnt = info->nr_jited_ksyms; 207 212 if (sub_prog_cnt != info->nr_prog_tags || 208 - sub_prog_cnt != info->nr_jited_func_lens) 213 + sub_prog_cnt != info->nr_jited_func_lens) { 214 + free(info_linear); 209 215 return -1; 216 + } 210 217 211 218 /* check BTF func info support */ 212 219 if (info->btf_id && info->nr_func_info && info->func_info_rec_size) { 213 220 /* btf func info number should be same as sub_prog_cnt */ 214 221 if (sub_prog_cnt != info->nr_func_info) { 215 222 pr_debug("%s: mismatch in BPF sub program count and BTF function info count, aborting\n", __func__); 216 - err = -1; 217 - goto out; 223 + free(info_linear); 224 + return -1; 218 225 } 219 226 if (btf__get_from_id(info->btf_id, &btf)) { 220 227 pr_debug("%s: failed to get BTF of id %u, aborting\n", __func__, info->btf_id);
+3
tools/perf/util/parse-events.c
··· 356 356 struct perf_cpu_map *cpus = pmu ? perf_cpu_map__get(pmu->cpus) : 357 357 cpu_list ? perf_cpu_map__new(cpu_list) : NULL; 358 358 359 + if (pmu && attr->type == PERF_TYPE_RAW) 360 + perf_pmu__warn_invalid_config(pmu, attr->config, name); 361 + 359 362 if (init_attr) 360 363 event_attr_init(attr); 361 364
+33
tools/perf/util/pmu.c
··· 1812 1812 1813 1813 return nr_caps; 1814 1814 } 1815 + 1816 + void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config, 1817 + char *name) 1818 + { 1819 + struct perf_pmu_format *format; 1820 + __u64 masks = 0, bits; 1821 + char buf[100]; 1822 + unsigned int i; 1823 + 1824 + list_for_each_entry(format, &pmu->format, list) { 1825 + if (format->value != PERF_PMU_FORMAT_VALUE_CONFIG) 1826 + continue; 1827 + 1828 + for_each_set_bit(i, format->bits, PERF_PMU_FORMAT_BITS) 1829 + masks |= 1ULL << i; 1830 + } 1831 + 1832 + /* 1833 + * Kernel doesn't export any valid format bits. 1834 + */ 1835 + if (masks == 0) 1836 + return; 1837 + 1838 + bits = config & ~masks; 1839 + if (bits == 0) 1840 + return; 1841 + 1842 + bitmap_scnprintf((unsigned long *)&bits, sizeof(bits) * 8, buf, sizeof(buf)); 1843 + 1844 + pr_warning("WARNING: event '%s' not valid (bits %s of config " 1845 + "'%llx' not supported by kernel)!\n", 1846 + name ?: "N/A", buf, config); 1847 + }
+3
tools/perf/util/pmu.h
··· 123 123 124 124 int perf_pmu__caps_parse(struct perf_pmu *pmu); 125 125 126 + void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config, 127 + char *name); 128 + 126 129 #endif /* __PMU_H */
+6 -5
tools/perf/util/synthetic-events.c
··· 424 424 425 425 while (!io.eof) { 426 426 static const char anonstr[] = "//anon"; 427 - size_t size; 427 + size_t size, aligned_size; 428 428 429 429 /* ensure null termination since stack will be reused. */ 430 430 event->mmap2.filename[0] = '\0'; ··· 484 484 } 485 485 486 486 size = strlen(event->mmap2.filename) + 1; 487 - size = PERF_ALIGN(size, sizeof(u64)); 487 + aligned_size = PERF_ALIGN(size, sizeof(u64)); 488 488 event->mmap2.len -= event->mmap.start; 489 489 event->mmap2.header.size = (sizeof(event->mmap2) - 490 - (sizeof(event->mmap2.filename) - size)); 491 - memset(event->mmap2.filename + size, 0, machine->id_hdr_size); 490 + (sizeof(event->mmap2.filename) - aligned_size)); 491 + memset(event->mmap2.filename + size, 0, machine->id_hdr_size + 492 + (aligned_size - size)); 492 493 event->mmap2.header.size += machine->id_hdr_size; 493 494 event->mmap2.pid = tgid; 494 495 event->mmap2.tid = pid; ··· 759 758 for (i = 0; i < n; i++) { 760 759 char *end; 761 760 pid_t _pid; 762 - bool kernel_thread; 761 + bool kernel_thread = false; 763 762 764 763 _pid = strtol(dirent[i]->d_name, &end, 10); 765 764 if (*end)
+2
tools/perf/util/vdso.c
··· 133 133 if (dso != NULL) { 134 134 __dsos__add(&machine->dsos, dso); 135 135 dso__set_long_name(dso, long_name, false); 136 + /* Put dso here because __dsos_add already got it */ 137 + dso__put(dso); 136 138 } 137 139 138 140 return dso;
+18 -3
tools/testing/radix-tree/idr-test.c
··· 296 296 return NULL; 297 297 } 298 298 299 + /* 300 + * There are always either 1 or 2 objects in the IDR. If we find nothing, 301 + * or we find something at an ID we didn't expect, that's a bug. 302 + */ 299 303 void idr_find_test_1(int anchor_id, int throbber_id) 300 304 { 301 305 pthread_t throbber; 302 306 time_t start = time(NULL); 303 307 304 - pthread_create(&throbber, NULL, idr_throbber, &throbber_id); 305 - 306 308 BUG_ON(idr_alloc(&find_idr, xa_mk_value(anchor_id), anchor_id, 307 309 anchor_id + 1, GFP_KERNEL) != anchor_id); 308 310 311 + pthread_create(&throbber, NULL, idr_throbber, &throbber_id); 312 + 313 + rcu_read_lock(); 309 314 do { 310 315 int id = 0; 311 316 void *entry = idr_get_next(&find_idr, &id); 312 - BUG_ON(entry != xa_mk_value(id)); 317 + rcu_read_unlock(); 318 + if ((id != anchor_id && id != throbber_id) || 319 + entry != xa_mk_value(id)) { 320 + printf("%s(%d, %d): %p at %d\n", __func__, anchor_id, 321 + throbber_id, entry, id); 322 + abort(); 323 + } 324 + rcu_read_lock(); 313 325 } while (time(NULL) < start + 11); 326 + rcu_read_unlock(); 314 327 315 328 pthread_join(throbber, NULL); 316 329 ··· 590 577 591 578 int __weak main(void) 592 579 { 580 + rcu_register_thread(); 593 581 radix_tree_init(); 594 582 idr_checks(); 595 583 ida_tests(); ··· 598 584 rcu_barrier(); 599 585 if (nr_allocated) 600 586 printf("nr_allocated = %d\n", nr_allocated); 587 + rcu_unregister_thread(); 601 588 return 0; 602 589 }
tools/testing/radix-tree/linux/compiler_types.h
+2
tools/testing/radix-tree/multiorder.c
··· 224 224 225 225 int __weak main(void) 226 226 { 227 + rcu_register_thread(); 227 228 radix_tree_init(); 228 229 multiorder_checks(); 230 + rcu_unregister_thread(); 229 231 return 0; 230 232 }
+2
tools/testing/radix-tree/xarray.c
··· 25 25 26 26 int __weak main(void) 27 27 { 28 + rcu_register_thread(); 28 29 radix_tree_init(); 29 30 xarray_tests(); 30 31 radix_tree_cpu_dead(1); 31 32 rcu_barrier(); 32 33 if (nr_allocated) 33 34 printf("nr_allocated = %d\n", nr_allocated); 35 + rcu_unregister_thread(); 34 36 return 0; 35 37 }
+44
tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c
··· 6 6 #include <test_progs.h> 7 7 #include "bpf_dctcp.skel.h" 8 8 #include "bpf_cubic.skel.h" 9 + #include "bpf_tcp_nogpl.skel.h" 9 10 10 11 #define min(a, b) ((a) < (b) ? (a) : (b)) 11 12 ··· 228 227 bpf_dctcp__destroy(dctcp_skel); 229 228 } 230 229 230 + static char *err_str; 231 + static bool found; 232 + 233 + static int libbpf_debug_print(enum libbpf_print_level level, 234 + const char *format, va_list args) 235 + { 236 + char *log_buf; 237 + 238 + if (level != LIBBPF_WARN || 239 + strcmp(format, "libbpf: \n%s\n")) { 240 + vprintf(format, args); 241 + return 0; 242 + } 243 + 244 + log_buf = va_arg(args, char *); 245 + if (!log_buf) 246 + goto out; 247 + if (err_str && strstr(log_buf, err_str) != NULL) 248 + found = true; 249 + out: 250 + printf(format, log_buf); 251 + return 0; 252 + } 253 + 254 + static void test_invalid_license(void) 255 + { 256 + libbpf_print_fn_t old_print_fn; 257 + struct bpf_tcp_nogpl *skel; 258 + 259 + err_str = "struct ops programs must have a GPL compatible license"; 260 + found = false; 261 + old_print_fn = libbpf_set_print(libbpf_debug_print); 262 + 263 + skel = bpf_tcp_nogpl__open_and_load(); 264 + ASSERT_NULL(skel, "bpf_tcp_nogpl"); 265 + ASSERT_EQ(found, true, "expected_err_msg"); 266 + 267 + bpf_tcp_nogpl__destroy(skel); 268 + libbpf_set_print(old_print_fn); 269 + } 270 + 231 271 void test_bpf_tcp_ca(void) 232 272 { 233 273 if (test__start_subtest("dctcp")) 234 274 test_dctcp(); 235 275 if (test__start_subtest("cubic")) 236 276 test_cubic(); 277 + if (test__start_subtest("invalid_license")) 278 + test_invalid_license(); 237 279 }
+19
tools/testing/selftests/bpf/progs/bpf_tcp_nogpl.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/bpf.h> 4 + #include <linux/types.h> 5 + #include <bpf/bpf_helpers.h> 6 + #include <bpf/bpf_tracing.h> 7 + #include "bpf_tcp_helpers.h" 8 + 9 + char _license[] SEC("license") = "X"; 10 + 11 + void BPF_STRUCT_OPS(nogpltcp_init, struct sock *sk) 12 + { 13 + } 14 + 15 + SEC(".struct_ops") 16 + struct tcp_congestion_ops bpf_nogpltcp = { 17 + .init = (void *)nogpltcp_init, 18 + .name = "bpf_nogpltcp", 19 + };
+5 -5
tools/testing/selftests/kvm/hardware_disable_test.c
··· 108 108 kvm_vm_elf_load(vm, program_invocation_name, 0, 0); 109 109 vm_create_irqchip(vm); 110 110 111 - fprintf(stderr, "%s: [%d] start vcpus\n", __func__, run); 111 + pr_debug("%s: [%d] start vcpus\n", __func__, run); 112 112 for (i = 0; i < VCPU_NUM; ++i) { 113 113 vm_vcpu_add_default(vm, i, guest_code); 114 114 payloads[i].vm = vm; ··· 124 124 check_set_affinity(throw_away, &cpu_set); 125 125 } 126 126 } 127 - fprintf(stderr, "%s: [%d] all threads launched\n", __func__, run); 127 + pr_debug("%s: [%d] all threads launched\n", __func__, run); 128 128 sem_post(sem); 129 129 for (i = 0; i < VCPU_NUM; ++i) 130 130 check_join(threads[i], &b); ··· 147 147 if (pid == 0) 148 148 run_test(i); /* This function always exits */ 149 149 150 - fprintf(stderr, "%s: [%d] waiting semaphore\n", __func__, i); 150 + pr_debug("%s: [%d] waiting semaphore\n", __func__, i); 151 151 sem_wait(sem); 152 152 r = (rand() % DELAY_US_MAX) + 1; 153 - fprintf(stderr, "%s: [%d] waiting %dus\n", __func__, i, r); 153 + pr_debug("%s: [%d] waiting %dus\n", __func__, i, r); 154 154 usleep(r); 155 155 r = waitpid(pid, &s, WNOHANG); 156 156 TEST_ASSERT(r != pid, 157 157 "%s: [%d] child exited unexpectedly status: [%d]", 158 158 __func__, i, s); 159 - fprintf(stderr, "%s: [%d] killing child\n", __func__, i); 159 + pr_debug("%s: [%d] killing child\n", __func__, i); 160 160 kill(pid, SIGKILL); 161 161 } 162 162
+11 -2
tools/testing/selftests/kvm/x86_64/hyperv_clock.c
··· 80 80 GUEST_ASSERT(delta_ns * 100 < (t2 - t1) * 100); 81 81 } 82 82 83 + static inline u64 get_tscpage_ts(struct ms_hyperv_tsc_page *tsc_page) 84 + { 85 + return mul_u64_u64_shr64(rdtsc(), tsc_page->tsc_scale) + tsc_page->tsc_offset; 86 + } 87 + 83 88 static inline void check_tsc_msr_tsc_page(struct ms_hyperv_tsc_page *tsc_page) 84 89 { 85 90 u64 r1, r2, t1, t2; 86 91 87 92 /* Compare TSC page clocksource with HV_X64_MSR_TIME_REF_COUNT */ 88 - t1 = mul_u64_u64_shr64(rdtsc(), tsc_page->tsc_scale) + tsc_page->tsc_offset; 93 + t1 = get_tscpage_ts(tsc_page); 89 94 r1 = rdmsr(HV_X64_MSR_TIME_REF_COUNT); 90 95 91 96 /* 10 ms tolerance */ 92 97 GUEST_ASSERT(r1 >= t1 && r1 - t1 < 100000); 93 98 nop_loop(); 94 99 95 - t2 = mul_u64_u64_shr64(rdtsc(), tsc_page->tsc_scale) + tsc_page->tsc_offset; 100 + t2 = get_tscpage_ts(tsc_page); 96 101 r2 = rdmsr(HV_X64_MSR_TIME_REF_COUNT); 97 102 GUEST_ASSERT(r2 >= t1 && r2 - t2 < 100000); 98 103 } ··· 135 130 136 131 tsc_offset = tsc_page->tsc_offset; 137 132 /* Call KVM_SET_CLOCK from userspace, check that TSC page was updated */ 133 + 138 134 GUEST_SYNC(7); 135 + /* Sanity check TSC page timestamp, it should be close to 0 */ 136 + GUEST_ASSERT(get_tscpage_ts(tsc_page) < 100000); 137 + 139 138 GUEST_ASSERT(tsc_page->tsc_offset != tsc_offset); 140 139 141 140 nop_loop();
+12 -1
tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh
··· 657 657 { 658 658 # In accordance with INET_ECN_decapsulate() 659 659 __test_ecn_decap 00 00 0x00 660 + __test_ecn_decap 00 01 0x00 661 + __test_ecn_decap 00 02 0x00 662 + # 00 03 is tested in test_ecn_decap_error() 663 + __test_ecn_decap 01 00 0x01 660 664 __test_ecn_decap 01 01 0x01 661 - __test_ecn_decap 02 01 0x01 665 + __test_ecn_decap 01 02 0x01 662 666 __test_ecn_decap 01 03 0x03 667 + __test_ecn_decap 02 00 0x02 668 + __test_ecn_decap 02 01 0x01 669 + __test_ecn_decap 02 02 0x02 663 670 __test_ecn_decap 02 03 0x03 671 + __test_ecn_decap 03 00 0x03 672 + __test_ecn_decap 03 01 0x03 673 + __test_ecn_decap 03 02 0x03 674 + __test_ecn_decap 03 03 0x03 664 675 test_ecn_decap_error 665 676 } 666 677