Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.12-rc7 into usb-next

We need the USB fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3055 -1410
+7
.mailmap
··· 168 168 Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com> 169 169 John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> 170 170 John Stultz <johnstul@us.ibm.com> 171 + Jordan Crouse <jordan@cosmicpenguin.net> <jcrouse@codeaurora.org> 171 172 <josh@joshtriplett.org> <josh@freedesktop.org> 172 173 <josh@joshtriplett.org> <josh@kernel.org> 173 174 <josh@joshtriplett.org> <josht@linux.vnet.ibm.com> ··· 254 253 Morten Welinder <welinder@darter.rentec.com> 255 254 Morten Welinder <welinder@troll.com> 256 255 Mythri P K <mythripk@ti.com> 256 + Nadia Yvette Chambers <nyc@holomorphy.com> William Lee Irwin III <wli@holomorphy.com> 257 257 Nathan Chancellor <nathan@kernel.org> <natechancellor@gmail.com> 258 258 Nguyen Anh Quynh <aquynh@gmail.com> 259 + Nicholas Piggin <npiggin@gmail.com> <npiggen@suse.de> 260 + Nicholas Piggin <npiggin@gmail.com> <npiggin@kernel.dk> 261 + Nicholas Piggin <npiggin@gmail.com> <npiggin@suse.de> 262 + Nicholas Piggin <npiggin@gmail.com> <nickpiggin@yahoo.com.au> 263 + Nicholas Piggin <npiggin@gmail.com> <piggin@cyberone.com.au> 259 264 Nicolas Ferre <nicolas.ferre@microchip.com> <nicolas.ferre@atmel.com> 260 265 Nicolas Pitre <nico@fluxnic.net> <nicolas.pitre@linaro.org> 261 266 Nicolas Pitre <nico@fluxnic.net> <nico@linaro.org>
+2 -2
Documentation/ABI/testing/debugfs-moxtet
··· 1 1 What: /sys/kernel/debug/moxtet/input 2 2 Date: March 2019 3 3 KernelVersion: 5.3 4 - Contact: Marek Behún <marek.behun@nic.cz> 4 + Contact: Marek Behún <kabel@kernel.org> 5 5 Description: (Read) Read input from the shift registers, in hexadecimal. 6 6 Returns N+1 bytes, where N is the number of Moxtet connected 7 7 modules. The first byte is from the CPU board itself. ··· 19 19 What: /sys/kernel/debug/moxtet/output 20 20 Date: March 2019 21 21 KernelVersion: 5.3 22 - Contact: Marek Behún <marek.behun@nic.cz> 22 + Contact: Marek Behún <kabel@kernel.org> 23 23 Description: (RW) Read last written value to the shift registers, in 24 24 hexadecimal, or write values to the shift registers, also 25 25 in hexadecimal.
+1 -1
Documentation/ABI/testing/debugfs-turris-mox-rwtm
··· 1 1 What: /sys/kernel/debug/turris-mox-rwtm/do_sign 2 2 Date: Jun 2020 3 3 KernelVersion: 5.8 4 - Contact: Marek Behún <marek.behun@nic.cz> 4 + Contact: Marek Behún <kabel@kernel.org> 5 5 Description: 6 6 7 7 ======= ===========================================================
+3 -3
Documentation/ABI/testing/sysfs-bus-moxtet-devices
··· 1 1 What: /sys/bus/moxtet/devices/moxtet-<name>.<addr>/module_description 2 2 Date: March 2019 3 3 KernelVersion: 5.3 4 - Contact: Marek Behún <marek.behun@nic.cz> 4 + Contact: Marek Behún <kabel@kernel.org> 5 5 Description: (Read) Moxtet module description. Format: string 6 6 7 7 What: /sys/bus/moxtet/devices/moxtet-<name>.<addr>/module_id 8 8 Date: March 2019 9 9 KernelVersion: 5.3 10 - Contact: Marek Behún <marek.behun@nic.cz> 10 + Contact: Marek Behún <kabel@kernel.org> 11 11 Description: (Read) Moxtet module ID. Format: %x 12 12 13 13 What: /sys/bus/moxtet/devices/moxtet-<name>.<addr>/module_name 14 14 Date: March 2019 15 15 KernelVersion: 5.3 16 - Contact: Marek Behún <marek.behun@nic.cz> 16 + Contact: Marek Behún <kabel@kernel.org> 17 17 Description: (Read) Moxtet module name. Format: string
+1 -1
Documentation/ABI/testing/sysfs-class-led-driver-turris-omnia
··· 1 1 What: /sys/class/leds/<led>/device/brightness 2 2 Date: July 2020 3 3 KernelVersion: 5.9 4 - Contact: Marek Behún <marek.behun@nic.cz> 4 + Contact: Marek Behún <kabel@kernel.org> 5 5 Description: (RW) On the front panel of the Turris Omnia router there is also 6 6 a button which can be used to control the intensity of all the 7 7 LEDs at once, so that if they are too bright, user can dim them.
+5 -5
Documentation/ABI/testing/sysfs-firmware-turris-mox-rwtm
··· 1 1 What: /sys/firmware/turris-mox-rwtm/board_version 2 2 Date: August 2019 3 3 KernelVersion: 5.4 4 - Contact: Marek Behún <marek.behun@nic.cz> 4 + Contact: Marek Behún <kabel@kernel.org> 5 5 Description: (Read) Board version burned into eFuses of this Turris Mox board. 6 6 Format: %i 7 7 8 8 What: /sys/firmware/turris-mox-rwtm/mac_address* 9 9 Date: August 2019 10 10 KernelVersion: 5.4 11 - Contact: Marek Behún <marek.behun@nic.cz> 11 + Contact: Marek Behún <kabel@kernel.org> 12 12 Description: (Read) MAC addresses burned into eFuses of this Turris Mox board. 13 13 Format: %pM 14 14 15 15 What: /sys/firmware/turris-mox-rwtm/pubkey 16 16 Date: August 2019 17 17 KernelVersion: 5.4 18 - Contact: Marek Behún <marek.behun@nic.cz> 18 + Contact: Marek Behún <kabel@kernel.org> 19 19 Description: (Read) ECDSA public key (in pubkey hex compressed form) computed 20 20 as pair to the ECDSA private key burned into eFuses of this 21 21 Turris Mox Board. ··· 24 24 What: /sys/firmware/turris-mox-rwtm/ram_size 25 25 Date: August 2019 26 26 KernelVersion: 5.4 27 - Contact: Marek Behún <marek.behun@nic.cz> 27 + Contact: Marek Behún <kabel@kernel.org> 28 28 Description: (Read) RAM size in MiB of this Turris Mox board as was detected 29 29 during manufacturing and burned into eFuses. Can be 512 or 1024. 30 30 Format: %i ··· 32 32 What: /sys/firmware/turris-mox-rwtm/serial_number 33 33 Date: August 2019 34 34 KernelVersion: 5.4 35 - Contact: Marek Behún <marek.behun@nic.cz> 35 + Contact: Marek Behún <kabel@kernel.org> 36 36 Description: (Read) Serial number burned into eFuses of this Turris Mox device. 37 37 Format: %016X
+1 -1
Documentation/devicetree/bindings/hwmon/ntc_thermistor.txt
··· 32 32 - "#thermal-sensor-cells" Used to expose itself to thermal fw. 33 33 34 34 Read more about iio bindings at 35 - Documentation/devicetree/bindings/iio/iio-bindings.txt 35 + https://github.com/devicetree-org/dt-schema/blob/master/schemas/iio/ 36 36 37 37 Example: 38 38 ncp15wb473@0 {
+1 -1
Documentation/devicetree/bindings/i2c/i2c-gpio.yaml
··· 7 7 title: Bindings for GPIO bitbanged I2C 8 8 9 9 maintainers: 10 - - Wolfram Sang <wolfram@the-dreams.de> 10 + - Wolfram Sang <wsa@kernel.org> 11 11 12 12 allOf: 13 13 - $ref: /schemas/i2c/i2c-controller.yaml#
+1 -1
Documentation/devicetree/bindings/i2c/i2c-imx.yaml
··· 7 7 title: Freescale Inter IC (I2C) and High Speed Inter IC (HS-I2C) for i.MX 8 8 9 9 maintainers: 10 - - Wolfram Sang <wolfram@the-dreams.de> 10 + - Oleksij Rempel <o.rempel@pengutronix.de> 11 11 12 12 allOf: 13 13 - $ref: /schemas/i2c/i2c-controller.yaml#
+3 -2
Documentation/devicetree/bindings/iio/adc/ingenic,adc.yaml
··· 14 14 Industrial I/O subsystem bindings for ADC controller found in 15 15 Ingenic JZ47xx SoCs. 16 16 17 - ADC clients must use the format described in iio-bindings.txt, giving 18 - a phandle and IIO specifier pair ("io-channels") to the ADC controller. 17 + ADC clients must use the format described in 18 + https://github.com/devicetree-org/dt-schema/blob/master/schemas/iio/iio-consumer.yaml, 19 + giving a phandle and IIO specifier pair ("io-channels") to the ADC controller. 19 20 20 21 properties: 21 22 compatible:
+3 -1
Documentation/devicetree/bindings/input/adc-joystick.yaml
··· 24 24 description: > 25 25 List of phandle and IIO specifier pairs. 26 26 Each pair defines one ADC channel to which a joystick axis is connected. 27 - See Documentation/devicetree/bindings/iio/iio-bindings.txt for details. 27 + See 28 + https://github.com/devicetree-org/dt-schema/blob/master/schemas/iio/iio-consumer.yaml 29 + for details. 28 30 29 31 '#address-cells': 30 32 const: 1
+4 -1
Documentation/devicetree/bindings/input/touchscreen/resistive-adc-touch.txt
··· 5 5 - compatible: must be "resistive-adc-touch" 6 6 The device must be connected to an ADC device that provides channels for 7 7 position measurement and optional pressure. 8 - Refer to ../iio/iio-bindings.txt for details 8 + Refer to 9 + https://github.com/devicetree-org/dt-schema/blob/master/schemas/iio/iio-consumer.yaml 10 + for details 11 + 9 12 - iio-channels: must have at least two channels connected to an ADC device. 10 13 These should correspond to the channels exposed by the ADC device and should 11 14 have the right index as the ADC device registers them. These channels
+1 -1
Documentation/devicetree/bindings/leds/cznic,turris-omnia-leds.yaml
··· 7 7 title: CZ.NIC's Turris Omnia LEDs driver 8 8 9 9 maintainers: 10 - - Marek Behún <marek.behun@nic.cz> 10 + - Marek Behún <kabel@kernel.org> 11 11 12 12 description: 13 13 This module adds support for the RGB LEDs found on the front panel of the
+3 -1
Documentation/devicetree/bindings/mfd/ab8500.txt
··· 72 72 pwm|regulator|rtc|sysctrl|usb]"; 73 73 74 74 A few child devices require ADC channels from the GPADC node. Those follow the 75 - standard bindings from iio/iio-bindings.txt and iio/adc/adc.txt 75 + standard bindings from 76 + https://github.com/devicetree-org/dt-schema/blob/master/schemas/iio/iio-consumer.yaml 77 + and Documentation/devicetree/bindings/iio/adc/adc.yaml 76 78 77 79 abx500-temp : io-channels "aux1" and "aux2" for measuring external 78 80 temperatures.
+8 -8
Documentation/devicetree/bindings/mfd/motorola-cpcap.txt
··· 16 16 The sub-functions of CPCAP get their own node with their own compatible values, 17 17 which are described in the following files: 18 18 19 - - ../power/supply/cpcap-battery.txt 20 - - ../power/supply/cpcap-charger.txt 21 - - ../regulator/cpcap-regulator.txt 22 - - ../phy/phy-cpcap-usb.txt 23 - - ../input/cpcap-pwrbutton.txt 24 - - ../rtc/cpcap-rtc.txt 25 - - ../leds/leds-cpcap.txt 26 - - ../iio/adc/cpcap-adc.txt 19 + - Documentation/devicetree/bindings/power/supply/cpcap-battery.txt 20 + - Documentation/devicetree/bindings/power/supply/cpcap-charger.txt 21 + - Documentation/devicetree/bindings/regulator/cpcap-regulator.txt 22 + - Documentation/devicetree/bindings/phy/phy-cpcap-usb.txt 23 + - Documentation/devicetree/bindings/input/cpcap-pwrbutton.txt 24 + - Documentation/devicetree/bindings/rtc/cpcap-rtc.txt 25 + - Documentation/devicetree/bindings/leds/leds-cpcap.txt 26 + - Documentation/devicetree/bindings/iio/adc/motorola,cpcap-adc.yaml 27 27 28 28 The only exception is the audio codec. Instead of a compatible value its 29 29 node must be named "audio-codec".
+1 -1
Documentation/devicetree/bindings/net/brcm,bcm4908-enet.yaml
··· 32 32 - interrupts 33 33 - interrupt-names 34 34 35 - additionalProperties: false 35 + unevaluatedProperties: false 36 36 37 37 examples: 38 38 - |
+1 -1
Documentation/devicetree/bindings/net/ethernet-controller.yaml
··· 49 49 description: 50 50 Reference to an nvmem node for the MAC address 51 51 52 - nvmem-cells-names: 52 + nvmem-cell-names: 53 53 const: mac-address 54 54 55 55 phy-connection-type:
+94 -2
Documentation/devicetree/bindings/net/micrel-ksz90x1.txt
··· 65 65 step is 60ps. The default value is the neutral setting, so setting 66 66 rxc-skew-ps=<0> actually results in -900 picoseconds adjustment. 67 67 68 + The KSZ9031 hardware supports a range of skew values from negative to 69 + positive, where the specific range is property dependent. All values 70 + specified in the devicetree are offset by the minimum value so they 71 + can be represented as positive integers in the devicetree since it's 72 + difficult to represent a negative number in the devictree. 73 + 74 + The following 5-bit values table apply to rxc-skew-ps and txc-skew-ps. 75 + 76 + Pad Skew Value Delay (ps) Devicetree Value 77 + ------------------------------------------------------ 78 + 0_0000 -900ps 0 79 + 0_0001 -840ps 60 80 + 0_0010 -780ps 120 81 + 0_0011 -720ps 180 82 + 0_0100 -660ps 240 83 + 0_0101 -600ps 300 84 + 0_0110 -540ps 360 85 + 0_0111 -480ps 420 86 + 0_1000 -420ps 480 87 + 0_1001 -360ps 540 88 + 0_1010 -300ps 600 89 + 0_1011 -240ps 660 90 + 0_1100 -180ps 720 91 + 0_1101 -120ps 780 92 + 0_1110 -60ps 840 93 + 0_1111 0ps 900 94 + 1_0000 60ps 960 95 + 1_0001 120ps 1020 96 + 1_0010 180ps 1080 97 + 1_0011 240ps 1140 98 + 1_0100 300ps 1200 99 + 1_0101 360ps 1260 100 + 1_0110 420ps 1320 101 + 1_0111 480ps 1380 102 + 1_1000 540ps 1440 103 + 1_1001 600ps 1500 104 + 1_1010 660ps 1560 105 + 1_1011 720ps 1620 106 + 1_1100 780ps 1680 107 + 1_1101 840ps 1740 108 + 1_1110 900ps 1800 109 + 1_1111 960ps 1860 110 + 111 + The following 4-bit values table apply to the txdX-skew-ps, rxdX-skew-ps 112 + data pads, and the rxdv-skew-ps, txen-skew-ps control pads. 113 + 114 + Pad Skew Value Delay (ps) Devicetree Value 115 + ------------------------------------------------------ 116 + 0000 -420ps 0 117 + 0001 -360ps 60 118 + 0010 -300ps 120 119 + 0011 -240ps 180 120 + 0100 -180ps 240 121 + 0101 -120ps 300 122 + 0110 -60ps 360 123 + 0111 0ps 420 124 + 1000 60ps 480 125 + 1001 120ps 540 126 + 1010 180ps 600 127 + 1011 240ps 660 128 + 1100 300ps 720 129 + 1101 360ps 780 130 + 1110 420ps 840 131 + 1111 480ps 900 132 + 68 133 Optional properties: 69 134 70 135 Maximum value of 1860, default value 900: ··· 185 120 186 121 Examples: 187 122 123 + /* Attach to an Ethernet device with autodetected PHY */ 124 + &enet { 125 + rxc-skew-ps = <1800>; 126 + rxdv-skew-ps = <0>; 127 + txc-skew-ps = <1800>; 128 + txen-skew-ps = <0>; 129 + status = "okay"; 130 + }; 131 + 132 + /* Attach to an explicitly-specified PHY */ 188 133 mdio { 189 134 phy0: ethernet-phy@0 { 190 - rxc-skew-ps = <3000>; 135 + rxc-skew-ps = <1800>; 191 136 rxdv-skew-ps = <0>; 192 - txc-skew-ps = <3000>; 137 + txc-skew-ps = <1800>; 193 138 txen-skew-ps = <0>; 194 139 reg = <0>; 195 140 }; ··· 208 133 phy = <&phy0>; 209 134 phy-mode = "rgmii-id"; 210 135 }; 136 + 137 + References 138 + 139 + Micrel ksz9021rl/rn Data Sheet, Revision 1.2. Dated 2/13/2014. 140 + http://www.micrel.com/_PDF/Ethernet/datasheets/ksz9021rl-rn_ds.pdf 141 + 142 + Micrel ksz9031rnx Data Sheet, Revision 2.1. Dated 11/20/2014. 143 + http://www.micrel.com/_PDF/Ethernet/datasheets/KSZ9031RNX.pdf 144 + 145 + Notes: 146 + 147 + Note that a previous version of the Micrel ksz9021rl/rn Data Sheet 148 + was missing extended register 106 (transmit data pad skews), and 149 + incorrectly specified the ps per step as 200ps/step instead of 150 + 120ps/step. The latest update to this document reflects the latest 151 + revision of the Micrel specification even though usage in the kernel 152 + still reflects that incorrect document.
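The two skew tables in the micrel-ksz90x1.txt hunk above encode a signed delay as a non-negative devicetree value by offsetting it with the magnitude of the most negative step: the 5-bit clock pads span -900..960 ps and the 4-bit data/control pads span -420..480 ps, both in 60 ps steps. A minimal C sketch of that mapping (the helper names are ours, not part of the binding):

```c
#include <assert.h>

/* Hypothetical helpers, not from the binding document: convert a
 * desired pad skew in picoseconds to the value written in the
 * devicetree, which is the skew offset by the most negative delay. */
int ksz9031_clk_skew_to_dt(int skew_ps)   /* 5-bit rxc/txc pads */
{
	return skew_ps + 900;             /* -900..960 ps -> 0..1860 */
}

int ksz9031_data_skew_to_dt(int skew_ps)  /* 4-bit data/control pads */
{
	return skew_ps + 420;             /* -420..480 ps -> 0..900 */
}
```

So rxc-skew-ps = <1800> in the example above requests a real delay of 1800 - 900 = 900 ps, and a value of 900 on a clock pad is the neutral 0 ps setting.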
+5 -5
Documentation/networking/ethtool-netlink.rst
··· 976 976 977 977 978 978 PAUSE_GET 979 - ============ 979 + ========= 980 980 981 - Gets channel counts like ``ETHTOOL_GPAUSE`` ioctl request. 981 + Gets pause frame settings like ``ETHTOOL_GPAUSEPARAM`` ioctl request. 982 982 983 983 Request contents: 984 984 ··· 1007 1007 Each member has a corresponding attribute defined. 1008 1008 1009 1009 PAUSE_SET 1010 - ============ 1010 + ========= 1011 1011 1012 1012 Sets pause parameters like ``ETHTOOL_GPAUSEPARAM`` ioctl request. 1013 1013 ··· 1024 1024 EEE_GET 1025 1025 ======= 1026 1026 1027 - Gets channel counts like ``ETHTOOL_GEEE`` ioctl request. 1027 + Gets Energy Efficient Ethernet settings like ``ETHTOOL_GEEE`` ioctl request. 1028 1028 1029 1029 Request contents: 1030 1030 ··· 1054 1054 EEE_SET 1055 1055 ======= 1056 1056 1057 - Sets pause parameters like ``ETHTOOL_GEEEPARAM`` ioctl request. 1057 + Sets Energy Efficient Ethernet parameters like ``ETHTOOL_SEEE`` ioctl request. 1058 1058 1059 1059 Request contents: 1060 1060
+20 -3
MAINTAINERS
··· 1790 1790 F: drivers/pinctrl/pinctrl-gemini.c 1791 1791 F: drivers/rtc/rtc-ftrtc010.c 1792 1792 1793 - ARM/CZ.NIC TURRIS MOX SUPPORT 1794 - M: Marek Behun <marek.behun@nic.cz> 1793 + ARM/CZ.NIC TURRIS SUPPORT 1794 + M: Marek Behun <kabel@kernel.org> 1795 1795 S: Maintained 1796 - W: http://mox.turris.cz 1796 + W: https://www.turris.cz/ 1797 1797 F: Documentation/ABI/testing/debugfs-moxtet 1798 1798 F: Documentation/ABI/testing/sysfs-bus-moxtet-devices 1799 1799 F: Documentation/ABI/testing/sysfs-firmware-turris-mox-rwtm 1800 1800 F: Documentation/devicetree/bindings/bus/moxtet.txt 1801 1801 F: Documentation/devicetree/bindings/firmware/cznic,turris-mox-rwtm.txt 1802 1802 F: Documentation/devicetree/bindings/gpio/gpio-moxtet.txt 1803 + F: Documentation/devicetree/bindings/leds/cznic,turris-omnia-leds.yaml 1804 + F: Documentation/devicetree/bindings/watchdog/armada-37xx-wdt.txt 1803 1805 F: drivers/bus/moxtet.c 1804 1806 F: drivers/firmware/turris-mox-rwtm.c 1807 + F: drivers/leds/leds-turris-omnia.c 1808 + F: drivers/mailbox/armada-37xx-rwtm-mailbox.c 1805 1809 F: drivers/gpio/gpio-moxtet.c 1810 + F: drivers/watchdog/armada_37xx_wdt.c 1811 + F: include/dt-bindings/bus/moxtet.h 1812 + F: include/linux/armada-37xx-rwtm-mailbox.h 1806 1813 F: include/linux/moxtet.h 1807 1814 1808 1815 ARM/EZX SMARTPHONES (A780, A910, A1200, E680, ROKR E2 and ROKR E6) ··· 14857 14850 S: Maintained 14858 14851 F: drivers/iommu/arm/arm-smmu/qcom_iommu.c 14859 14852 14853 + QUALCOMM IPC ROUTER (QRTR) DRIVER 14854 + M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 14855 + L: linux-arm-msm@vger.kernel.org 14856 + S: Maintained 14857 + F: include/trace/events/qrtr.h 14858 + F: include/uapi/linux/qrtr.h 14859 + F: net/qrtr/ 14860 + 14860 14861 QUALCOMM IPCC MAILBOX DRIVER 14861 14862 M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 14862 14863 L: linux-arm-msm@vger.kernel.org ··· 15214 15199 REMOTE PROCESSOR (REMOTEPROC) SUBSYSTEM 15215 15200 M: Ohad Ben-Cohen <ohad@wizery.com> 15216 15201 M: Bjorn Andersson <bjorn.andersson@linaro.org> 15202 + M: Mathieu Poirier <mathieu.poirier@linaro.org> 15217 15203 L: linux-remoteproc@vger.kernel.org 15218 15204 S: Maintained 15219 15205 T: git git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc.git rproc-next ··· 15228 15212 REMOTE PROCESSOR MESSAGING (RPMSG) SUBSYSTEM 15229 15213 M: Ohad Ben-Cohen <ohad@wizery.com> 15230 15214 M: Bjorn Andersson <bjorn.andersson@linaro.org> 15215 + M: Mathieu Poirier <mathieu.poirier@linaro.org> 15231 15216 L: linux-remoteproc@vger.kernel.org 15232 15217 S: Maintained 15233 15218 T: git git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc.git rpmsg-next
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 12 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc6 5 + EXTRAVERSION = -rc7 6 6 NAME = Frozen Wasteland 7 7 8 8 # *DOCUMENTATION*
+1 -1
arch/arc/boot/dts/haps_hs.dts
··· 16 16 memory { 17 17 device_type = "memory"; 18 18 /* CONFIG_LINUX_RAM_BASE needs to match low mem start */ 19 - reg = <0x0 0x80000000 0x0 0x20000000 /* 512 MB low mem */ 19 + reg = <0x0 0x80000000 0x0 0x40000000 /* 1 GB low mem */ 20 20 0x1 0x00000000 0x0 0x40000000>; /* 1 GB highmem */ 21 21 }; 22 22
+2 -2
arch/arc/kernel/signal.c
··· 96 96 sizeof(sf->uc.uc_mcontext.regs.scratch)); 97 97 err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(sigset_t)); 98 98 99 - return err; 99 + return err ? -EFAULT : 0; 100 100 } 101 101 102 102 static int restore_usr_regs(struct pt_regs *regs, struct rt_sigframe __user *sf) ··· 110 110 &(sf->uc.uc_mcontext.regs.scratch), 111 111 sizeof(sf->uc.uc_mcontext.regs.scratch)); 112 112 if (err) 113 - return err; 113 + return -EFAULT; 114 114 115 115 set_current_blocked(&set); 116 116 regs->bta = uregs.scratch.bta;
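The arch/arc/kernel/signal.c hunk above converts the raw result of __copy_to_user()/__copy_from_user() into -EFAULT: those primitives return the number of bytes left uncopied, not an errno, so returning the count directly would hand callers a meaningless positive value. A standalone sketch of the pattern, with a stand-in for the uaccess primitive (the fake helper and its fault model are ours):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define EFAULT 14

/* Stand-in for __copy_to_user(): returns the number of bytes NOT
 * copied (0 on success), matching the kernel uaccess convention.
 * bytes_before_fault models where a page fault would hit. */
unsigned long fake_copy_to_user(void *dst, const void *src, size_t n,
				size_t bytes_before_fault)
{
	size_t copied = n < bytes_before_fault ? n : bytes_before_fault;

	memcpy(dst, src, copied);
	return n - copied;
}

/* The pattern the patch introduces: squash any residual byte count
 * into -EFAULT so callers see a real errno instead of a count. */
long save_frame(size_t bytes_before_fault)
{
	char src[16] = "signal frame", dst[16];
	unsigned long err;

	err = fake_copy_to_user(dst, src, sizeof(src), bytes_before_fault);
	return err ? -EFAULT : 0;
}
```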
+14 -13
arch/arc/kernel/unwind.c
··· 187 187 const void *table_start, unsigned long table_size, 188 188 const u8 *header_start, unsigned long header_size) 189 189 { 190 - const u8 *ptr = header_start + 4; 191 - const u8 *end = header_start + header_size; 192 - 193 190 table->core.pc = (unsigned long)core_start; 194 191 table->core.range = core_size; 195 192 table->init.pc = (unsigned long)init_start; 196 193 table->init.range = init_size; 197 194 table->address = table_start; 198 195 table->size = table_size; 199 - 200 - /* See if the linker provided table looks valid. */ 201 - if (header_size <= 4 202 - || header_start[0] != 1 203 - || (void *)read_pointer(&ptr, end, header_start[1]) != table_start 204 - || header_start[2] == DW_EH_PE_omit 205 - || read_pointer(&ptr, end, header_start[2]) <= 0 206 - || header_start[3] == DW_EH_PE_omit) 207 - header_start = NULL; 208 - 196 + /* To avoid the pointer addition with NULL pointer.*/ 197 + if (header_start != NULL) { 198 + const u8 *ptr = header_start + 4; 199 + const u8 *end = header_start + header_size; 200 + /* See if the linker provided table looks valid. */ 201 + if (header_size <= 4 202 + || header_start[0] != 1 203 + || (void *)read_pointer(&ptr, end, header_start[1]) 204 + != table_start 205 + || header_start[2] == DW_EH_PE_omit 206 + || read_pointer(&ptr, end, header_start[2]) <= 0 207 + || header_start[3] == DW_EH_PE_omit) 208 + header_start = NULL; 209 + } 209 210 table->hdrsz = header_size; 210 211 smp_wmb(); 211 212 table->header = header_start;
+3 -1
arch/arm/boot/dts/armada-385-turris-omnia.dts
··· 32 32 ranges = <MBUS_ID(0xf0, 0x01) 0 0xf1000000 0x100000 33 33 MBUS_ID(0x01, 0x1d) 0 0xfff00000 0x100000 34 34 MBUS_ID(0x09, 0x19) 0 0xf1100000 0x10000 35 - MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000>; 35 + MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000 36 + MBUS_ID(0x0c, 0x04) 0 0xf1200000 0x100000>; 36 37 37 38 internal-regs { 38 39 ··· 390 389 phy1: ethernet-phy@1 { 391 390 compatible = "ethernet-phy-ieee802.3-c22"; 392 391 reg = <1>; 392 + marvell,reg-init = <3 18 0 0x4985>; 393 393 394 394 /* irq is connected to &pcawan pin 7 */ 395 395 };
-12
arch/arm/boot/dts/bcm2711.dtsi
··· 308 308 #reset-cells = <1>; 309 309 }; 310 310 311 - bsc_intr: interrupt-controller@7ef00040 { 312 - compatible = "brcm,bcm2711-l2-intc", "brcm,l2-intc"; 313 - reg = <0x7ef00040 0x30>; 314 - interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>; 315 - interrupt-controller; 316 - #interrupt-cells = <1>; 317 - }; 318 - 319 311 aon_intr: interrupt-controller@7ef00100 { 320 312 compatible = "brcm,bcm2711-l2-intc", "brcm,l2-intc"; 321 313 reg = <0x7ef00100 0x30>; ··· 354 362 reg = <0x7ef04500 0x100>, <0x7ef00b00 0x300>; 355 363 reg-names = "bsc", "auto-i2c"; 356 364 clock-frequency = <97500>; 357 - interrupt-parent = <&bsc_intr>; 358 - interrupts = <0>; 359 365 status = "disabled"; 360 366 }; 361 367 ··· 395 405 reg = <0x7ef09500 0x100>, <0x7ef05b00 0x300>; 396 406 reg-names = "bsc", "auto-i2c"; 397 407 clock-frequency = <97500>; 398 - interrupt-parent = <&bsc_intr>; 399 - interrupts = <1>; 400 408 status = "disabled"; 401 409 }; 402 410 };
+2
arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
··· 433 433 pinctrl-0 = <&pinctrl_usdhc2>; 434 434 cd-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>; 435 435 wp-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>; 436 + vmmc-supply = <&vdd_sd1_reg>; 436 437 status = "disabled"; 437 438 }; 438 439 ··· 443 442 &pinctrl_usdhc3_cdwp>; 444 443 cd-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>; 445 444 wp-gpios = <&gpio1 29 GPIO_ACTIVE_HIGH>; 445 + vmmc-supply = <&vdd_sd0_reg>; 446 446 status = "disabled"; 447 447 };
+5
arch/arm/boot/dts/omap4.dtsi
··· 22 22 i2c1 = &i2c2; 23 23 i2c2 = &i2c3; 24 24 i2c3 = &i2c4; 25 + mmc0 = &mmc1; 26 + mmc1 = &mmc2; 27 + mmc2 = &mmc3; 28 + mmc3 = &mmc4; 29 + mmc4 = &mmc5; 25 30 serial0 = &uart1; 26 31 serial1 = &uart2; 27 32 serial2 = &uart3;
-8
arch/arm/boot/dts/omap44xx-clocks.dtsi
··· 770 770 ti,max-div = <2>; 771 771 }; 772 772 773 - sha2md5_fck: sha2md5_fck@15c8 { 774 - #clock-cells = <0>; 775 - compatible = "ti,gate-clock"; 776 - clocks = <&l3_div_ck>; 777 - ti,bit-shift = <1>; 778 - reg = <0x15c8>; 779 - }; 780 - 781 773 usb_phy_cm_clk32k: usb_phy_cm_clk32k@640 { 782 774 #clock-cells = <0>; 783 775 compatible = "ti,gate-clock";
+5
arch/arm/boot/dts/omap5.dtsi
··· 25 25 i2c2 = &i2c3; 26 26 i2c3 = &i2c4; 27 27 i2c4 = &i2c5; 28 + mmc0 = &mmc1; 29 + mmc1 = &mmc2; 30 + mmc2 = &mmc3; 31 + mmc3 = &mmc4; 32 + mmc4 = &mmc5; 28 33 serial0 = &uart1; 29 34 serial1 = &uart2; 30 35 serial2 = &uart3;
+2 -2
arch/arm/mach-keystone/keystone.c
··· 65 65 static long long __init keystone_pv_fixup(void) 66 66 { 67 67 long long offset; 68 - phys_addr_t mem_start, mem_end; 68 + u64 mem_start, mem_end; 69 69 70 70 mem_start = memblock_start_of_DRAM(); 71 71 mem_end = memblock_end_of_DRAM(); ··· 78 78 if (mem_start < KEYSTONE_HIGH_PHYS_START || 79 79 mem_end > KEYSTONE_HIGH_PHYS_END) { 80 80 pr_crit("Invalid address space for memory (%08llx-%08llx)\n", 81 - (u64)mem_start, (u64)mem_end); 81 + mem_start, mem_end); 82 82 return 0; 83 83 } 84 84
+1
arch/arm/mach-omap1/ams-delta-fiq-handler.S
··· 15 15 #include <linux/platform_data/gpio-omap.h> 16 16 17 17 #include <asm/assembler.h> 18 + #include <asm/irq.h> 18 19 19 20 #include "ams-delta-fiq.h" 20 21 #include "board-ams-delta.h"
+39
arch/arm/mach-omap2/omap-secure.c
··· 9 9 */ 10 10 11 11 #include <linux/arm-smccc.h> 12 + #include <linux/cpu_pm.h> 12 13 #include <linux/kernel.h> 13 14 #include <linux/init.h> 14 15 #include <linux/io.h> ··· 21 20 22 21 #include "common.h" 23 22 #include "omap-secure.h" 23 + #include "soc.h" 24 24 25 25 static phys_addr_t omap_secure_memblock_base; 26 26 ··· 215 213 { 216 214 omap_optee_init_check(); 217 215 } 216 + 217 + /* 218 + * Dummy dispatcher call after core OSWR and MPU off. Updates the ROM return 219 + * address after MMU has been re-enabled after CPU1 has been woken up again. 220 + * Otherwise the ROM code will attempt to use the earlier physical return 221 + * address that got set with MMU off when waking up CPU1. Only used on secure 222 + * devices. 223 + */ 224 + static int cpu_notifier(struct notifier_block *nb, unsigned long cmd, void *v) 225 + { 226 + switch (cmd) { 227 + case CPU_CLUSTER_PM_EXIT: 228 + omap_secure_dispatcher(OMAP4_PPA_SERVICE_0, 229 + FLAG_START_CRITICAL, 230 + 0, 0, 0, 0, 0); 231 + break; 232 + default: 233 + break; 234 + } 235 + 236 + return NOTIFY_OK; 237 + } 238 + 239 + static struct notifier_block secure_notifier_block = { 240 + .notifier_call = cpu_notifier, 241 + }; 242 + 243 + static int __init secure_pm_init(void) 244 + { 245 + if (omap_type() == OMAP2_DEVICE_TYPE_GP || !soc_is_omap44xx()) 246 + return 0; 247 + 248 + cpu_pm_register_notifier(&secure_notifier_block); 249 + 250 + return 0; 251 + } 252 + omap_arch_initcall(secure_pm_init);
+1
arch/arm/mach-omap2/omap-secure.h
··· 50 50 #define OMAP5_DRA7_MON_SET_ACR_INDEX 0x107 51 51 52 52 /* Secure PPA(Primary Protected Application) APIs */ 53 + #define OMAP4_PPA_SERVICE_0 0x21 53 54 #define OMAP4_PPA_L2_POR_INDEX 0x23 54 55 #define OMAP4_PPA_CPU_ACTRL_SMP_INDEX 0x25 55 56
+2 -2
arch/arm/mach-omap2/pmic-cpcap.c
··· 246 246 omap_voltage_register_pmic(voltdm, &omap443x_max8952_mpu); 247 247 248 248 if (of_machine_is_compatible("motorola,droid-bionic")) { 249 - voltdm = voltdm_lookup("mpu"); 249 + voltdm = voltdm_lookup("core"); 250 250 omap_voltage_register_pmic(voltdm, &omap_cpcap_core); 251 251 252 - voltdm = voltdm_lookup("mpu"); 252 + voltdm = voltdm_lookup("iva"); 253 253 omap_voltage_register_pmic(voltdm, &omap_cpcap_iva); 254 254 } else { 255 255 voltdm = voltdm_lookup("core");
+6 -2
arch/arm/mach-pxa/mainstone.c
··· 502 502 #endif 503 503 504 504 static int mst_pcmcia0_irqs[11] = { 505 - [0 ... 10] = -1, 505 + [0 ... 4] = -1, 506 506 [5] = MAINSTONE_S0_CD_IRQ, 507 + [6 ... 7] = -1, 507 508 [8] = MAINSTONE_S0_STSCHG_IRQ, 509 + [9] = -1, 508 510 [10] = MAINSTONE_S0_IRQ, 509 511 }; 510 512 511 513 static int mst_pcmcia1_irqs[11] = { 512 - [0 ... 10] = -1, 514 + [0 ... 4] = -1, 513 515 [5] = MAINSTONE_S1_CD_IRQ, 516 + [6 ... 7] = -1, 514 517 [8] = MAINSTONE_S1_STSCHG_IRQ, 518 + [9] = -1, 515 519 [10] = MAINSTONE_S1_IRQ, 516 520 }; 517 521
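The mainstone.c hunk above rewrites the PCMCIA IRQ arrays so the range designator no longer overlaps the explicit entries. Overlapping designated initializers are legal C (the last initializer for a slot wins) but trigger GCC's -Woverride-init; the disjoint ranges produce identical contents without the warning. A sketch showing the two forms are equivalent (the array names and IRQ numbers here are illustrative, not from the patch):

```c
#include <assert.h>

/* Overlapping form: [0 ... 10] fills everything, then slots 5, 8
 * and 10 are overridden -- works, but warns with -Woverride-init.
 * (Range designators are a GNU C extension.) */
static const int irqs_overlapping[11] = {
	[0 ... 10] = -1,
	[5] = 105, [8] = 108, [10] = 110,
};

/* Disjoint form used by the patch: same contents, no override. */
static const int irqs_disjoint[11] = {
	[0 ... 4] = -1,
	[5] = 105,
	[6 ... 7] = -1,
	[8] = 108,
	[9] = -1,
	[10] = 110,
};
```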
+1 -1
arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
··· 124 124 #define MX8MM_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0 125 125 #define MX8MM_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0 126 126 #define MX8MM_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0 127 - #define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x31 0x000 0x5 0x0 127 + #define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x310 0x000 0x5 0x0 128 128 #define MX8MM_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0 129 129 #define MX8MM_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0 130 130 #define MX8MM_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0
+1 -1
arch/arm64/boot/dts/freescale/imx8mq-pinfunc.h
··· 130 130 #define MX8MQ_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0 131 131 #define MX8MQ_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0 132 132 #define MX8MQ_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0 133 - #define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x31 0x000 0x5 0x0 133 + #define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x310 0x000 0x5 0x0 134 134 #define MX8MQ_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0 135 135 #define MX8MQ_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0 136 136 #define MX8MQ_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0
+1 -1
arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
··· 1 1 // SPDX-License-Identifier: (GPL-2.0+ OR MIT) 2 2 /* 3 3 * Device Tree file for CZ.NIC Turris Mox Board 4 - * 2019 by Marek Behun <marek.behun@nic.cz> 4 + * 2019 by Marek Behún <kabel@kernel.org> 5 5 */ 6 6 7 7 /dts-v1/;
+3 -3
arch/arm64/boot/dts/marvell/armada-cp11x.dtsi
··· 310 310 }; 311 311 312 312 CP11X_LABEL(sata0): sata@540000 { 313 - compatible = "marvell,armada-8k-ahci"; 313 + compatible = "marvell,armada-8k-ahci", 314 + "generic-ahci"; 314 315 reg = <0x540000 0x30000>; 315 316 dma-coherent; 317 + interrupts = <107 IRQ_TYPE_LEVEL_HIGH>; 316 318 clocks = <&CP11X_LABEL(clk) 1 15>, 317 319 <&CP11X_LABEL(clk) 1 16>; 318 320 #address-cells = <1>; ··· 322 320 status = "disabled"; 323 321 324 322 sata-port@0 { 325 - interrupts = <109 IRQ_TYPE_LEVEL_HIGH>; 326 323 reg = <0>; 327 324 }; 328 325 329 326 sata-port@1 { 330 - interrupts = <107 IRQ_TYPE_LEVEL_HIGH>; 331 327 reg = <1>; 332 328 }; 333 329 };
+1 -7
arch/ia64/include/asm/ptrace.h
··· 54 54 55 55 static inline unsigned long user_stack_pointer(struct pt_regs *regs) 56 56 { 57 - /* FIXME: should this be bspstore + nr_dirty regs? */ 58 - return regs->ar_bspstore; 57 + return regs->r12; 59 58 } 60 59 61 60 static inline int is_syscall_success(struct pt_regs *regs) ··· 78 79 unsigned long __ip = instruction_pointer(regs); \ 79 80 (__ip & ~3UL) + ((__ip & 3UL) << 2); \ 80 81 }) 81 - /* 82 - * Why not default? Because user_stack_pointer() on ia64 gives register 83 - * stack backing store instead... 84 - */ 85 - #define current_user_stack_pointer() (current_pt_regs()->r12) 86 82 87 83 /* given a pointer to a task_struct, return the user's pt_regs */ 88 84 # define task_pt_regs(t) (((struct pt_regs *) ((char *) (t) + IA64_STK_OFFSET)) - 1)
+1 -1
arch/nds32/mm/cacheflush.c
··· 238 238 { 239 239 struct address_space *mapping; 240 240 241 - mapping = page_mapping(page); 241 + mapping = page_mapping_file(page); 242 242 if (mapping && !mapping_mapped(mapping)) 243 243 set_bit(PG_dcache_dirty, &page->flags); 244 244 else {
+1 -1
arch/parisc/include/asm/cmpxchg.h
··· 72 72 #endif 73 73 case 4: return __cmpxchg_u32((unsigned int *)ptr, 74 74 (unsigned int)old, (unsigned int)new_); 75 - case 1: return __cmpxchg_u8((u8 *)ptr, (u8)old, (u8)new_); 75 + case 1: return __cmpxchg_u8((u8 *)ptr, old & 0xff, new_ & 0xff); 76 76 } 77 77 __cmpxchg_called_with_bad_pointer(); 78 78 return old;
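The parisc cmpxchg.h hunk above swaps the (u8) cast for an explicit `& 0xff` mask. For an unsigned operand both forms select the same low byte; the masked expression just keeps its wide type and makes the truncation explicit, which avoids an implicit-truncation compiler warning at the __cmpxchg_u8() call. A small sketch of the equivalence (helper names are ours):

```c
#include <assert.h>
#include <stdint.h>

/* Both expressions reduce a wide value to its low byte.  The cast
 * truncates implicitly; the mask states the truncation outright and
 * leaves the expression typed unsigned long, which is what the patch
 * relies on to keep the compiler quiet. */
uint8_t low_byte_cast(unsigned long v)
{
	return (uint8_t)v;
}

unsigned long low_byte_mask(unsigned long v)
{
	return v & 0xff;
}
```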
-1
arch/parisc/include/asm/processor.h
··· 272 272 regs->gr[23] = 0; \ 273 273 } while(0) 274 274 275 - struct task_struct; 276 275 struct mm_struct; 277 276 278 277 /* Free all resources held by a thread. */
+3 -29
arch/parisc/math-emu/fpu.h
··· 5 5 * Floating-point emulation code 6 6 * Copyright (C) 2001 Hewlett-Packard (Paul Bame) <bame@debian.org> 7 7 */ 8 - /* 9 - * BEGIN_DESC 10 - * 11 - * File: 12 - * @(#) pa/fp/fpu.h $Revision: 1.1 $ 13 - * 14 - * Purpose: 15 - * <<please update with a synopis of the functionality provided by this file>> 16 - * 17 - * 18 - * END_DESC 19 - */ 20 - 21 - #ifdef __NO_PA_HDRS 22 - PA header file -- do not include this header file for non-PA builds. 23 - #endif 24 - 25 8 26 9 #ifndef _MACHINE_FPU_INCLUDED /* allows multiple inclusion */ 27 10 #define _MACHINE_FPU_INCLUDED 28 - 29 - #if 0 30 - #ifndef _SYS_STDSYMS_INCLUDED 31 - # include <sys/stdsyms.h> 32 - #endif /* _SYS_STDSYMS_INCLUDED */ 33 - #include <machine/pdc/pdc_rqsts.h> 34 - #endif 35 11 36 12 #define PA83_FPU_FLAG 0x00000001 37 13 #define PA89_FPU_FLAG 0x00000002 ··· 19 43 #define COPR_FP 0x00000080 /* Floating point -- Coprocessor 0 */ 20 44 #define SFU_MPY_DIVIDE 0x00008000 /* Multiply/Divide __ SFU 0 */ 21 45 22 - 23 46 #define EM_FPU_TYPE_OFFSET 272 24 47 25 48 /* version of EMULATION software for COPR,0,0 instruction */ 26 49 #define EMULATION_VERSION 4 27 50 28 51 /* 29 - * The only was to differeniate between TIMEX and ROLEX (or PCX-S and PCX-T) 30 - * is thorough the potential type field from the PDC_MODEL call. The 31 - * following flags are used at assist this differeniation. 52 + * The only way to differentiate between TIMEX and ROLEX (or PCX-S and PCX-T) 53 + * is through the potential type field from the PDC_MODEL call. 54 + * The following flags are used to assist this differentiation. 32 55 */ 33 56 34 57 #define ROLEX_POTENTIAL_KEY_FLAGS PDC_MODEL_CPU_KEY_WORD_TO_IO 35 58 #define TIMEX_POTENTIAL_KEY_FLAGS (PDC_MODEL_CPU_KEY_QUAD_STORE | \ 36 59 PDC_MODEL_CPU_KEY_RECIP_SQRT) 37 - 38 60 39 61 #endif /* ! _MACHINE_FPU_INCLUDED */
+4
arch/powerpc/kernel/Makefile
··· 191 191 targets += prom_init_check 192 192 193 193 clean-files := vmlinux.lds 194 + 195 + # Force dependency (incbin is bad) 196 + $(obj)/vdso32_wrapper.o : $(obj)/vdso32/vdso32.so.dbg 197 + $(obj)/vdso64_wrapper.o : $(obj)/vdso64/vdso64.so.dbg
+2 -2
arch/powerpc/kernel/ptrace/Makefile
··· 6 6 CFLAGS_ptrace-view.o += -DUTS_MACHINE='"$(UTS_MACHINE)"' 7 7 8 8 obj-y += ptrace.o ptrace-view.o 9 - obj-$(CONFIG_PPC_FPU_REGS) += ptrace-fpu.o 9 + obj-y += ptrace-fpu.o 10 10 obj-$(CONFIG_COMPAT) += ptrace32.o 11 11 obj-$(CONFIG_VSX) += ptrace-vsx.o 12 12 ifneq ($(CONFIG_VSX),y) 13 - obj-$(CONFIG_PPC_FPU_REGS) += ptrace-novsx.o 13 + obj-y += ptrace-novsx.o 14 14 endif 15 15 obj-$(CONFIG_ALTIVEC) += ptrace-altivec.o 16 16 obj-$(CONFIG_SPE) += ptrace-spe.o
-14
arch/powerpc/kernel/ptrace/ptrace-decl.h
··· 165 165 extern const struct user_regset_view user_ppc_native_view; 166 166 167 167 /* ptrace-fpu */ 168 - #ifdef CONFIG_PPC_FPU_REGS 169 168 int ptrace_get_fpr(struct task_struct *child, int index, unsigned long *data); 170 169 int ptrace_put_fpr(struct task_struct *child, int index, unsigned long data); 171 - #else 172 - static inline int 173 - ptrace_get_fpr(struct task_struct *child, int index, unsigned long *data) 174 - { 175 - return -EIO; 176 - } 177 - 178 - static inline int 179 - ptrace_put_fpr(struct task_struct *child, int index, unsigned long data) 180 - { 181 - return -EIO; 182 - } 183 - #endif 184 170 185 171 /* ptrace-(no)adv */ 186 172 void ppc_gethwdinfo(struct ppc_debug_info *dbginfo);
+10
arch/powerpc/kernel/ptrace/ptrace-fpu.c
··· 8 8 9 9 int ptrace_get_fpr(struct task_struct *child, int index, unsigned long *data) 10 10 { 11 + #ifdef CONFIG_PPC_FPU_REGS 11 12 unsigned int fpidx = index - PT_FPR0; 13 + #endif 12 14 13 15 if (index > PT_FPSCR) 14 16 return -EIO; 15 17 18 + #ifdef CONFIG_PPC_FPU_REGS 16 19 flush_fp_to_thread(child); 17 20 if (fpidx < (PT_FPSCR - PT_FPR0)) 18 21 memcpy(data, &child->thread.TS_FPR(fpidx), sizeof(long)); 19 22 else 20 23 *data = child->thread.fp_state.fpscr; 24 + #else 25 + *data = 0; 26 + #endif 21 27 22 28 return 0; 23 29 } 24 30 25 31 int ptrace_put_fpr(struct task_struct *child, int index, unsigned long data) 26 32 { 33 + #ifdef CONFIG_PPC_FPU_REGS 27 34 unsigned int fpidx = index - PT_FPR0; 35 + #endif 28 36 29 37 if (index > PT_FPSCR) 30 38 return -EIO; 31 39 40 + #ifdef CONFIG_PPC_FPU_REGS 32 41 flush_fp_to_thread(child); 33 42 if (fpidx < (PT_FPSCR - PT_FPR0)) 34 43 memcpy(&child->thread.TS_FPR(fpidx), &data, sizeof(long)); 35 44 else 36 45 child->thread.fp_state.fpscr = data; 46 + #endif 37 47 38 48 return 0; 39 49 }
+8
arch/powerpc/kernel/ptrace/ptrace-novsx.c
··· 21 21 int fpr_get(struct task_struct *target, const struct user_regset *regset, 22 22 struct membuf to) 23 23 { 24 + #ifdef CONFIG_PPC_FPU_REGS 24 25 BUILD_BUG_ON(offsetof(struct thread_fp_state, fpscr) != 25 26 offsetof(struct thread_fp_state, fpr[32])); 26 27 27 28 flush_fp_to_thread(target); 28 29 29 30 return membuf_write(&to, &target->thread.fp_state, 33 * sizeof(u64)); 31 + #else 32 + return membuf_write(&to, &empty_zero_page, 33 * sizeof(u64)); 33 + #endif 30 34 } 31 35 32 36 /* ··· 50 46 unsigned int pos, unsigned int count, 51 47 const void *kbuf, const void __user *ubuf) 52 48 { 49 + #ifdef CONFIG_PPC_FPU_REGS 53 50 BUILD_BUG_ON(offsetof(struct thread_fp_state, fpscr) != 54 51 offsetof(struct thread_fp_state, fpr[32])); 55 52 ··· 58 53 59 54 return user_regset_copyin(&pos, &count, &kbuf, &ubuf, 60 55 &target->thread.fp_state, 0, -1); 56 + #else 57 + return 0; 58 + #endif 61 59 }
-2
arch/powerpc/kernel/ptrace/ptrace-view.c
··· 522 522 .size = sizeof(long), .align = sizeof(long), 523 523 .regset_get = gpr_get, .set = gpr_set 524 524 }, 525 - #ifdef CONFIG_PPC_FPU_REGS 526 525 [REGSET_FPR] = { 527 526 .core_note_type = NT_PRFPREG, .n = ELF_NFPREG, 528 527 .size = sizeof(double), .align = sizeof(double), 529 528 .regset_get = fpr_get, .set = fpr_set 530 529 }, 531 - #endif 532 530 #ifdef CONFIG_ALTIVEC 533 531 [REGSET_VMX] = { 534 532 .core_note_type = NT_PPC_VMX, .n = 34,
+8 -12
arch/powerpc/kernel/signal_32.c
··· 775 775 else 776 776 prepare_save_user_regs(1); 777 777 778 - if (!user_write_access_begin(frame, sizeof(*frame))) 778 + if (!user_access_begin(frame, sizeof(*frame))) 779 779 goto badframe; 780 780 781 781 /* Put the siginfo & fill in most of the ucontext */ ··· 809 809 unsafe_put_user(PPC_INST_ADDI + __NR_rt_sigreturn, &mctx->mc_pad[0], 810 810 failed); 811 811 unsafe_put_user(PPC_INST_SC, &mctx->mc_pad[1], failed); 812 + asm("dcbst %y0; sync; icbi %y0; sync" :: "Z" (mctx->mc_pad[0])); 812 813 } 813 814 unsafe_put_sigset_t(&frame->uc.uc_sigmask, oldset, failed); 814 815 815 - user_write_access_end(); 816 + user_access_end(); 816 817 817 818 if (copy_siginfo_to_user(&frame->info, &ksig->info)) 818 819 goto badframe; 819 - 820 - if (tramp == (unsigned long)mctx->mc_pad) 821 - flush_icache_range(tramp, tramp + 2 * sizeof(unsigned long)); 822 820 823 821 regs->link = tramp; 824 822 ··· 842 844 return 0; 843 845 844 846 failed: 845 - user_write_access_end(); 847 + user_access_end(); 846 848 847 849 badframe: 848 850 signal_fault(tsk, regs, "handle_rt_signal32", frame); ··· 877 879 else 878 880 prepare_save_user_regs(1); 879 881 880 - if (!user_write_access_begin(frame, sizeof(*frame))) 882 + if (!user_access_begin(frame, sizeof(*frame))) 881 883 goto badframe; 882 884 sc = (struct sigcontext __user *) &frame->sctx; 883 885 ··· 906 908 /* Set up the sigreturn trampoline: li r0,sigret; sc */ 907 909 unsafe_put_user(PPC_INST_ADDI + __NR_sigreturn, &mctx->mc_pad[0], failed); 908 910 unsafe_put_user(PPC_INST_SC, &mctx->mc_pad[1], failed); 911 + asm("dcbst %y0; sync; icbi %y0; sync" :: "Z" (mctx->mc_pad[0])); 909 912 } 910 - user_access_end(); 911 - 912 - if (tramp == (unsigned long)mctx->mc_pad) 913 - flush_icache_range(tramp, tramp + 2 * sizeof(unsigned long)); 913 914 user_access_end(); 914 914 915 915 regs->link = tramp; 916 916 ··· 931 935 return 0; 932 936 933 937 failed: 934 - user_write_access_end(); 938 + user_access_end(); 935 939 936 940 badframe: 937 941 signal_fault(tsk, regs, "handle_signal32", frame);
+1
arch/s390/include/asm/stacktrace.h
··· 12 12 STACK_TYPE_IRQ, 13 13 STACK_TYPE_NODAT, 14 14 STACK_TYPE_RESTART, 15 + STACK_TYPE_MCCK, 15 16 }; 16 17 17 18 struct stack_info {
+4 -2
arch/s390/kernel/cpcmd.c
··· 37 37 38 38 static int diag8_response(int cmdlen, char *response, int *rlen) 39 39 { 40 + unsigned long _cmdlen = cmdlen | 0x40000000L; 41 + unsigned long _rlen = *rlen; 40 42 register unsigned long reg2 asm ("2") = (addr_t) cpcmd_buf; 41 43 register unsigned long reg3 asm ("3") = (addr_t) response; 42 - register unsigned long reg4 asm ("4") = cmdlen | 0x40000000L; 43 - register unsigned long reg5 asm ("5") = *rlen; 44 + register unsigned long reg4 asm ("4") = _cmdlen; 45 + register unsigned long reg5 asm ("5") = _rlen; 44 46 45 47 asm volatile( 46 48 " diag %2,%0,0x8\n"
+11 -1
arch/s390/kernel/dumpstack.c
··· 79 79 return in_stack(sp, info, STACK_TYPE_NODAT, top - THREAD_SIZE, top); 80 80 } 81 81 82 + static bool in_mcck_stack(unsigned long sp, struct stack_info *info) 83 + { 84 + unsigned long frame_size, top; 85 + 86 + frame_size = STACK_FRAME_OVERHEAD + sizeof(struct pt_regs); 87 + top = S390_lowcore.mcck_stack + frame_size; 88 + return in_stack(sp, info, STACK_TYPE_MCCK, top - THREAD_SIZE, top); 89 + } 90 + 82 91 static bool in_restart_stack(unsigned long sp, struct stack_info *info) 83 92 { 84 93 unsigned long frame_size, top; ··· 117 108 /* Check per-cpu stacks */ 118 109 if (!in_irq_stack(sp, info) && 119 110 !in_nodat_stack(sp, info) && 120 - !in_restart_stack(sp, info)) 111 + !in_restart_stack(sp, info) && 112 + !in_mcck_stack(sp, info)) 121 113 goto unknown; 122 114 123 115 recursion_check:
+1 -1
arch/s390/kernel/irq.c
··· 174 174 175 175 memcpy(&regs->int_code, &S390_lowcore.ext_cpu_addr, 4); 176 176 regs->int_parm = S390_lowcore.ext_params; 177 - regs->int_parm_long = *(unsigned long *)S390_lowcore.ext_params2; 177 + regs->int_parm_long = S390_lowcore.ext_params2; 178 178 179 179 from_idle = !user_mode(regs) && regs->psw.addr == (unsigned long)psw_idle_exit; 180 180 if (from_idle)
+1 -1
arch/s390/kernel/setup.c
··· 354 354 if (!new) 355 355 panic("Couldn't allocate machine check stack"); 356 356 WRITE_ONCE(S390_lowcore.mcck_stack, new + STACK_INIT_OFFSET); 357 - memblock_free(old, THREAD_SIZE); 357 + memblock_free_late(old, THREAD_SIZE); 358 358 return 0; 359 359 } 360 360 early_initcall(stack_realloc);
+6 -1
arch/x86/include/asm/kfence.h
··· 56 56 else 57 57 set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT)); 58 58 59 - /* Flush this CPU's TLB. */ 59 + /* 60 + * Flush this CPU's TLB, assuming whoever did the allocation/free is 61 + * likely to continue running on this CPU. 62 + */ 63 + preempt_disable(); 60 64 flush_tlb_one_kernel(addr); 65 + preempt_enable(); 61 66 return true; 62 67 } 63 68
+1 -1
arch/x86/include/asm/smp.h
··· 132 132 void play_dead_common(void); 133 133 void wbinvd_on_cpu(int cpu); 134 134 int wbinvd_on_all_cpus(void); 135 - bool wakeup_cpu0(void); 135 + void cond_wakeup_cpu0(void); 136 136 137 137 void native_smp_send_reschedule(int cpu); 138 138 void native_send_call_func_ipi(const struct cpumask *mask);
+12 -14
arch/x86/kernel/smpboot.c
··· 1659 1659 local_irq_disable(); 1660 1660 } 1661 1661 1662 - bool wakeup_cpu0(void) 1662 + /** 1663 + * cond_wakeup_cpu0 - Wake up CPU0 if needed. 1664 + * 1665 + * If NMI wants to wake up CPU0, start CPU0. 1666 + */ 1667 + void cond_wakeup_cpu0(void) 1663 1668 { 1664 1669 if (smp_processor_id() == 0 && enable_start_cpu0) 1665 - return true; 1666 - 1667 - return false; 1670 + start_cpu0(); 1668 1671 } 1672 + EXPORT_SYMBOL_GPL(cond_wakeup_cpu0); 1669 1673 1670 1674 /* 1671 1675 * We need to flush the caches before going to sleep, lest we have ··· 1738 1734 __monitor(mwait_ptr, 0, 0); 1739 1735 mb(); 1740 1736 __mwait(eax, 0); 1741 - /* 1742 - * If NMI wants to wake up CPU0, start CPU0. 1743 - */ 1744 - if (wakeup_cpu0()) 1745 - start_cpu0(); 1737 + 1738 + cond_wakeup_cpu0(); 1746 1739 } 1747 1740 } 1748 1741 ··· 1750 1749 1751 1750 while (1) { 1752 1751 native_halt(); 1753 - /* 1754 - * If NMI wants to wake up CPU0, start CPU0. 1755 - */ 1756 - if (wakeup_cpu0()) 1757 - start_cpu0(); 1752 + 1753 + cond_wakeup_cpu0(); 1758 1754 } 1759 1755 } 1760 1756
+2 -2
arch/x86/kernel/traps.c
··· 556 556 tsk->thread.trap_nr = X86_TRAP_GP; 557 557 558 558 if (fixup_vdso_exception(regs, X86_TRAP_GP, error_code, 0)) 559 - return; 559 + goto exit; 560 560 561 561 show_signal(tsk, SIGSEGV, "", desc, regs, error_code); 562 562 force_sig(SIGSEGV); ··· 1057 1057 goto exit; 1058 1058 1059 1059 if (fixup_vdso_exception(regs, trapnr, 0, 0)) 1060 - return; 1060 + goto exit; 1061 1061 1062 1062 force_sig_fault(SIGFPE, si_code, 1063 1063 (void __user *)uprobe_get_trap_addr(regs));
+1 -1
arch/x86/kvm/mmu/mmu.c
··· 5906 5906 lpage_disallowed_link); 5907 5907 WARN_ON_ONCE(!sp->lpage_disallowed); 5908 5908 if (is_tdp_mmu_page(sp)) { 5909 - flush = kvm_tdp_mmu_zap_sp(kvm, sp); 5909 + flush |= kvm_tdp_mmu_zap_sp(kvm, sp); 5910 5910 } else { 5911 5911 kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list); 5912 5912 WARN_ON_ONCE(sp->lpage_disallowed);
+10 -1
arch/x86/net/bpf_jit_comp.c
··· 1689 1689 } 1690 1690 1691 1691 if (image) { 1692 - if (unlikely(proglen + ilen > oldproglen)) { 1692 + /* 1693 + * When populating the image, assert that: 1694 + * 1695 + * i) We do not write beyond the allocated space, and 1696 + * ii) addrs[i] did not change from the prior run, in order 1697 + * to validate assumptions made for computing branch 1698 + * displacements. 1699 + */ 1700 + if (unlikely(proglen + ilen > oldproglen || 1701 + proglen + ilen != addrs[i])) { 1693 1702 pr_err("bpf_jit: fatal error\n"); 1694 1703 return -EFAULT; 1695 1704 }
+10 -1
arch/x86/net/bpf_jit_comp32.c
··· 2276 2276 } 2277 2277 2278 2278 if (image) { 2279 - if (unlikely(proglen + ilen > oldproglen)) { 2279 + /* 2280 + * When populating the image, assert that: 2281 + * 2282 + * i) We do not write beyond the allocated space, and 2283 + * ii) addrs[i] did not change from the prior run, in order 2284 + * to validate assumptions made for computing branch 2285 + * displacements. 2286 + */ 2287 + if (unlikely(proglen + ilen > oldproglen || 2288 + proglen + ilen != addrs[i])) { 2280 2289 pr_err("bpf_jit: fatal error\n"); 2281 2290 return -EFAULT; 2282 2291 }
+1 -3
drivers/acpi/processor_idle.c
··· 544 544 return -ENODEV; 545 545 546 546 #if defined(CONFIG_X86) && defined(CONFIG_HOTPLUG_CPU) 547 - /* If NMI wants to wake up CPU0, start CPU0. */ 548 - if (wakeup_cpu0()) 549 - start_cpu0(); 547 + cond_wakeup_cpu0(); 550 548 #endif 551 549 } 552 550
+5 -3
drivers/base/dd.c
··· 292 292 293 293 static void deferred_probe_timeout_work_func(struct work_struct *work) 294 294 { 295 - struct device_private *private, *p; 295 + struct device_private *p; 296 296 297 297 driver_deferred_probe_timeout = 0; 298 298 driver_deferred_probe_trigger(); 299 299 flush_work(&deferred_probe_work); 300 300 301 - list_for_each_entry_safe(private, p, &deferred_probe_pending_list, deferred_probe) 302 - dev_info(private->device, "deferred probe pending\n"); 301 + mutex_lock(&deferred_probe_mutex); 302 + list_for_each_entry(p, &deferred_probe_pending_list, deferred_probe) 303 + dev_info(p->device, "deferred probe pending\n"); 304 + mutex_unlock(&deferred_probe_mutex); 303 305 wake_up_all(&probe_timeout_waitqueue); 304 306 } 305 307 static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
+2 -5
drivers/bluetooth/btusb.c
··· 4849 4849 data->diag = NULL; 4850 4850 } 4851 4851 4852 - if (!enable_autosuspend) 4853 - usb_disable_autosuspend(data->udev); 4852 + if (enable_autosuspend) 4853 + usb_enable_autosuspend(data->udev); 4854 4854 4855 4855 err = hci_register_dev(hdev); 4856 4856 if (err < 0) ··· 4910 4910 gpiod_put(data->reset_gpio); 4911 4911 4912 4912 hci_free_dev(hdev); 4913 - 4914 - if (!enable_autosuspend) 4915 - usb_enable_autosuspend(data->udev); 4916 4913 } 4917 4914 4918 4915 #ifdef CONFIG_PM
+2 -2
drivers/bus/moxtet.c
··· 2 2 /* 3 3 * Turris Mox module configuration bus driver 4 4 * 5 - * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz> 5 + * Copyright (C) 2019 Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #include <dt-bindings/bus/moxtet.h> ··· 879 879 } 880 880 module_exit(moxtet_exit); 881 881 882 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>"); 882 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>"); 883 883 MODULE_DESCRIPTION("CZ.NIC's Turris Mox module configuration bus"); 884 884 MODULE_LICENSE("GPL v2");
+1 -1
drivers/bus/mvebu-mbus.c
··· 618 618 * This part of the memory is above 4 GB, so we don't 619 619 * care for the MBus bridge hole. 620 620 */ 621 - if (reg_start >= 0x100000000ULL) 621 + if ((u64)reg_start >= 0x100000000ULL) 622 622 continue; 623 623 624 624 /*
+1 -1
drivers/char/agp/Kconfig
··· 125 125 126 126 config AGP_PARISC 127 127 tristate "HP Quicksilver AGP support" 128 - depends on AGP && PARISC && 64BIT 128 + depends on AGP && PARISC && 64BIT && IOMMU_SBA 129 129 help 130 130 This option gives you AGP GART support for the HP Quicksilver 131 131 AGP bus adapter on HP PA-RISC machines (Ok, just on the C8000
+8 -1
drivers/clk/clk-fixed-factor.c
··· 66 66 67 67 static void devm_clk_hw_register_fixed_factor_release(struct device *dev, void *res) 68 68 { 69 - clk_hw_unregister_fixed_factor(&((struct clk_fixed_factor *)res)->hw); 69 + struct clk_fixed_factor *fix = res; 70 + 71 + /* 72 + * We can not use clk_hw_unregister_fixed_factor, since it will kfree() 73 + * the hw, resulting in double free. Just unregister the hw and let 74 + * devres code kfree() it. 75 + */ 76 + clk_hw_unregister(&fix->hw); 70 77 } 71 78 72 79 static struct clk_hw *
+22 -27
drivers/clk/clk.c
··· 4357 4357 /* search the list of notifiers for this clk */ 4358 4358 list_for_each_entry(cn, &clk_notifier_list, node) 4359 4359 if (cn->clk == clk) 4360 - break; 4360 + goto found; 4361 4361 4362 4362 /* if clk wasn't in the notifier list, allocate new clk_notifier */ 4363 - if (cn->clk != clk) { 4364 - cn = kzalloc(sizeof(*cn), GFP_KERNEL); 4365 - if (!cn) 4366 - goto out; 4363 + cn = kzalloc(sizeof(*cn), GFP_KERNEL); 4364 + if (!cn) 4365 + goto out; 4367 4366 4368 - cn->clk = clk; 4369 - srcu_init_notifier_head(&cn->notifier_head); 4367 + cn->clk = clk; 4368 + srcu_init_notifier_head(&cn->notifier_head); 4370 4369 4371 - list_add(&cn->node, &clk_notifier_list); 4372 - } 4370 + list_add(&cn->node, &clk_notifier_list); 4373 4371 4372 + found: 4374 4373 ret = srcu_notifier_chain_register(&cn->notifier_head, nb); 4375 4374 4376 4375 clk->core->notifier_count++; ··· 4394 4395 */ 4395 4396 int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb) 4396 4397 { 4397 - struct clk_notifier *cn = NULL; 4398 - int ret = -EINVAL; 4398 + struct clk_notifier *cn; 4399 + int ret = -ENOENT; 4399 4400 4400 4401 if (!clk || !nb) 4401 4402 return -EINVAL; 4402 4403 4403 4404 clk_prepare_lock(); 4404 4405 4405 - list_for_each_entry(cn, &clk_notifier_list, node) 4406 - if (cn->clk == clk) 4406 + list_for_each_entry(cn, &clk_notifier_list, node) { 4407 + if (cn->clk == clk) { 4408 + ret = srcu_notifier_chain_unregister(&cn->notifier_head, nb); 4409 + 4410 + clk->core->notifier_count--; 4411 + 4412 + /* XXX the notifier code should handle this better */ 4413 + if (!cn->notifier_head.head) { 4414 + srcu_cleanup_notifier_head(&cn->notifier_head); 4415 + list_del(&cn->node); 4416 + kfree(cn); 4417 + } 4407 4418 break; 4408 - 4409 - if (cn->clk == clk) { 4410 - ret = srcu_notifier_chain_unregister(&cn->notifier_head, nb); 4411 - 4412 - clk->core->notifier_count--; 4413 - 4414 - /* XXX the notifier code should handle this better */ 4415 - if (!cn->notifier_head.head) { 4416 - srcu_cleanup_notifier_head(&cn->notifier_head); 4417 - list_del(&cn->node); 4418 - kfree(cn); 4419 4419 } 4420 - 4421 - } else { 4422 - ret = -ENOENT; 4423 4420 } 4424 4421 4425 4422 clk_prepare_unlock();
+25 -25
drivers/clk/qcom/camcc-sc7180.c
··· 304 304 .name = "cam_cc_bps_clk_src", 305 305 .parent_data = cam_cc_parent_data_2, 306 306 .num_parents = 5, 307 - .ops = &clk_rcg2_ops, 307 + .ops = &clk_rcg2_shared_ops, 308 308 }, 309 309 }; 310 310 ··· 325 325 .name = "cam_cc_cci_0_clk_src", 326 326 .parent_data = cam_cc_parent_data_5, 327 327 .num_parents = 3, 328 - .ops = &clk_rcg2_ops, 328 + .ops = &clk_rcg2_shared_ops, 329 329 }, 330 330 }; 331 331 ··· 339 339 .name = "cam_cc_cci_1_clk_src", 340 340 .parent_data = cam_cc_parent_data_5, 341 341 .num_parents = 3, 342 - .ops = &clk_rcg2_ops, 342 + .ops = &clk_rcg2_shared_ops, 343 343 }, 344 344 }; 345 345 ··· 360 360 .name = "cam_cc_cphy_rx_clk_src", 361 361 .parent_data = cam_cc_parent_data_3, 362 362 .num_parents = 6, 363 - .ops = &clk_rcg2_ops, 363 + .ops = &clk_rcg2_shared_ops, 364 364 }, 365 365 }; 366 366 ··· 379 379 .name = "cam_cc_csi0phytimer_clk_src", 380 380 .parent_data = cam_cc_parent_data_0, 381 381 .num_parents = 4, 382 - .ops = &clk_rcg2_ops, 382 + .ops = &clk_rcg2_shared_ops, 383 383 }, 384 384 }; 385 385 ··· 393 393 .name = "cam_cc_csi1phytimer_clk_src", 394 394 .parent_data = cam_cc_parent_data_0, 395 395 .num_parents = 4, 396 - .ops = &clk_rcg2_ops, 396 + .ops = &clk_rcg2_shared_ops, 397 397 }, 398 398 }; 399 399 ··· 407 407 .name = "cam_cc_csi2phytimer_clk_src", 408 408 .parent_data = cam_cc_parent_data_0, 409 409 .num_parents = 4, 410 - .ops = &clk_rcg2_ops, 410 + .ops = &clk_rcg2_shared_ops, 411 411 }, 412 412 }; 413 413 ··· 421 421 .name = "cam_cc_csi3phytimer_clk_src", 422 422 .parent_data = cam_cc_parent_data_0, 423 423 .num_parents = 4, 424 - .ops = &clk_rcg2_ops, 424 + .ops = &clk_rcg2_shared_ops, 425 425 }, 426 426 }; 427 427 ··· 443 443 .name = "cam_cc_fast_ahb_clk_src", 444 444 .parent_data = cam_cc_parent_data_0, 445 445 .num_parents = 4, 446 - .ops = &clk_rcg2_ops, 446 + .ops = &clk_rcg2_shared_ops, 447 447 }, 448 448 }; 449 449 ··· 466 466 .name = "cam_cc_icp_clk_src", 467 467 .parent_data = cam_cc_parent_data_2, 468 468 .num_parents = 5, 469 - .ops = &clk_rcg2_ops, 469 + .ops = &clk_rcg2_shared_ops, 470 470 }, 471 471 }; 472 472 ··· 488 488 .name = "cam_cc_ife_0_clk_src", 489 489 .parent_data = cam_cc_parent_data_4, 490 490 .num_parents = 4, 491 - .ops = &clk_rcg2_ops, 491 + .ops = &clk_rcg2_shared_ops, 492 492 }, 493 493 }; 494 494 ··· 510 510 .name = "cam_cc_ife_0_csid_clk_src", 511 511 .parent_data = cam_cc_parent_data_3, 512 512 .num_parents = 6, 513 - .ops = &clk_rcg2_ops, 513 + .ops = &clk_rcg2_shared_ops, 514 514 }, 515 515 }; 516 516 ··· 524 524 .name = "cam_cc_ife_1_clk_src", 525 525 .parent_data = cam_cc_parent_data_4, 526 526 .num_parents = 4, 527 - .ops = &clk_rcg2_ops, 527 + .ops = &clk_rcg2_shared_ops, 528 528 }, 529 529 }; 530 530 ··· 538 538 .name = "cam_cc_ife_1_csid_clk_src", 539 539 .parent_data = cam_cc_parent_data_3, 540 540 .num_parents = 6, 541 - .ops = &clk_rcg2_ops, 541 + .ops = &clk_rcg2_shared_ops, 542 542 }, 543 543 }; 544 544 ··· 553 553 .parent_data = cam_cc_parent_data_4, 554 554 .num_parents = 4, 555 555 .flags = CLK_SET_RATE_PARENT, 556 - .ops = &clk_rcg2_ops, 556 + .ops = &clk_rcg2_shared_ops, 557 557 }, 558 558 }; 559 559 ··· 567 567 .name = "cam_cc_ife_lite_csid_clk_src", 568 568 .parent_data = cam_cc_parent_data_3, 569 569 .num_parents = 6, 570 - .ops = &clk_rcg2_ops, 570 + .ops = &clk_rcg2_shared_ops, 571 571 }, 572 572 }; 573 573 ··· 590 590 .name = "cam_cc_ipe_0_clk_src", 591 591 .parent_data = cam_cc_parent_data_2, 592 592 .num_parents = 5, 593 - .ops = &clk_rcg2_ops, 593 + .ops = &clk_rcg2_shared_ops, 594 594 }, 595 595 }; 596 596 ··· 613 613 .name = "cam_cc_jpeg_clk_src", 614 614 .parent_data = cam_cc_parent_data_2, 615 615 .num_parents = 5, 616 - .ops = &clk_rcg2_ops, 616 + .ops = &clk_rcg2_shared_ops, 617 617 }, 618 618 }; 619 619 ··· 635 635 .name = "cam_cc_lrme_clk_src", 636 636 .parent_data = cam_cc_parent_data_6, 637 637 .num_parents = 5, 638 - .ops = &clk_rcg2_ops, 638 + .ops = &clk_rcg2_shared_ops, 639 639 }, 640 640 }; 641 641 ··· 656 656 .name = "cam_cc_mclk0_clk_src", 657 657 .parent_data = cam_cc_parent_data_1, 658 658 .num_parents = 3, 659 - .ops = &clk_rcg2_ops, 659 + .ops = &clk_rcg2_shared_ops, 660 660 }, 661 661 }; 662 662 ··· 670 670 .name = "cam_cc_mclk1_clk_src", 671 671 .parent_data = cam_cc_parent_data_1, 672 672 .num_parents = 3, 673 - .ops = &clk_rcg2_ops, 673 + .ops = &clk_rcg2_shared_ops, 674 674 }, 675 675 }; 676 676 ··· 684 684 .name = "cam_cc_mclk2_clk_src", 685 685 .parent_data = cam_cc_parent_data_1, 686 686 .num_parents = 3, 687 - .ops = &clk_rcg2_ops, 687 + .ops = &clk_rcg2_shared_ops, 688 688 }, 689 689 }; 690 690 ··· 698 698 .name = "cam_cc_mclk3_clk_src", 699 699 .parent_data = cam_cc_parent_data_1, 700 700 .num_parents = 3, 701 - .ops = &clk_rcg2_ops, 701 + .ops = &clk_rcg2_shared_ops, 702 702 }, 703 703 }; 704 704 ··· 712 712 .name = "cam_cc_mclk4_clk_src", 713 713 .parent_data = cam_cc_parent_data_1, 714 714 .num_parents = 3, 715 - .ops = &clk_rcg2_ops, 715 + .ops = &clk_rcg2_shared_ops, 716 716 }, 717 717 }; 718 718 ··· 732 732 .parent_data = cam_cc_parent_data_0, 733 733 .num_parents = 4, 734 734 .flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE, 735 - .ops = &clk_rcg2_ops, 735 + .ops = &clk_rcg2_shared_ops, 736 736 }, 737 737 };
+1 -1
drivers/clk/socfpga/clk-gate.c
··· 99 99 val = readl(socfpgaclk->div_reg) >> socfpgaclk->shift; 100 100 val &= GENMASK(socfpgaclk->width - 1, 0); 101 101 /* Check for GPIO_DB_CLK by its offset */ 102 - if ((int) socfpgaclk->div_reg & SOCFPGA_GPIO_DB_CLK_OFFSET) 102 + if ((uintptr_t) socfpgaclk->div_reg & SOCFPGA_GPIO_DB_CLK_OFFSET) 103 103 div = val + 1; 104 104 else 105 105 div = (1 << val);
+2 -2
drivers/firmware/turris-mox-rwtm.c
··· 2 2 /* 3 3 * Turris Mox rWTM firmware driver 4 4 * 5 - * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz> 5 + * Copyright (C) 2019 Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #include <linux/armada-37xx-rwtm-mailbox.h> ··· 547 547 548 548 MODULE_LICENSE("GPL v2"); 549 549 MODULE_DESCRIPTION("Turris Mox rWTM firmware driver"); 550 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>"); 550 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>");
+2 -2
drivers/gpio/gpio-moxtet.c
··· 2 2 /* 3 3 * Turris Mox Moxtet GPIO expander 4 4 * 5 - * Copyright (C) 2018 Marek Behun <marek.behun@nic.cz> 5 + * Copyright (C) 2018 Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #include <linux/bitops.h> ··· 174 174 }; 175 175 module_moxtet_driver(moxtet_gpio_driver); 176 176 177 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>"); 177 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>"); 178 178 MODULE_DESCRIPTION("Turris Mox Moxtet GPIO expander"); 179 179 MODULE_LICENSE("GPL v2");
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 906 906 907 907 /* Allocate an SG array and squash pages into it */ 908 908 r = sg_alloc_table_from_pages(ttm->sg, ttm->pages, ttm->num_pages, 0, 909 - ttm->num_pages << PAGE_SHIFT, 909 + (u64)ttm->num_pages << PAGE_SHIFT, 910 910 GFP_KERNEL); 911 911 if (r) 912 912 goto release_sg;
+1
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubp.h
··· 134 134 HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_SURFACE_DCC_EN, mask_sh),\ 135 135 HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_SURFACE_DCC_IND_BLK, mask_sh),\ 136 136 HUBP_SF(HUBPREQ0_DCSURF_SURFACE_CONTROL, SECONDARY_SURFACE_DCC_IND_BLK_C, mask_sh),\ 137 + HUBP_SF(HUBPREQ0_DCSURF_SURFACE_FLIP_INTERRUPT, SURFACE_FLIP_INT_MASK, mask_sh),\ 137 138 HUBP_SF(HUBPRET0_HUBPRET_CONTROL, DET_BUF_PLANE1_BASE_ADDRESS, mask_sh),\ 138 139 HUBP_SF(HUBPRET0_HUBPRET_CONTROL, CROSSBAR_SRC_CB_B, mask_sh),\ 139 140 HUBP_SF(HUBPRET0_HUBPRET_CONTROL, CROSSBAR_SRC_CR_R, mask_sh),\
+2 -1
drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
··· 1224 1224 (hwmgr->chip_id == CHIP_POLARIS10) || 1225 1225 (hwmgr->chip_id == CHIP_POLARIS11) || 1226 1226 (hwmgr->chip_id == CHIP_POLARIS12) || 1227 - (hwmgr->chip_id == CHIP_TONGA)) 1227 + (hwmgr->chip_id == CHIP_TONGA) || 1228 + (hwmgr->chip_id == CHIP_TOPAZ)) 1228 1229 PHM_WRITE_FIELD(hwmgr->device, MC_SEQ_CNTL_3, CAC_EN, 0x1); 1229 1230 1230 1231
+20 -2
drivers/gpu/drm/i915/display/intel_acpi.c
··· 84 84 return; 85 85 } 86 86 87 + if (!pkg->package.count) { 88 + DRM_DEBUG_DRIVER("no connection in _DSM\n"); 89 + return; 90 + } 91 + 87 92 connector_count = &pkg->package.elements[0]; 88 93 DRM_DEBUG_DRIVER("MUX info connectors: %lld\n", 89 94 (unsigned long long)connector_count->integer.value); 90 95 for (i = 1; i < pkg->package.count; i++) { 91 96 union acpi_object *obj = &pkg->package.elements[i]; 92 - union acpi_object *connector_id = &obj->package.elements[0]; 93 - union acpi_object *info = &obj->package.elements[1]; 97 + union acpi_object *connector_id; 98 + union acpi_object *info; 99 + 100 + if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < 2) { 101 + DRM_DEBUG_DRIVER("Invalid object for MUX #%d\n", i); 102 + continue; 103 + } 104 + 105 + connector_id = &obj->package.elements[0]; 106 + info = &obj->package.elements[1]; 107 + if (info->type != ACPI_TYPE_BUFFER || info->buffer.length < 4) { 108 + DRM_DEBUG_DRIVER("Invalid info for MUX obj #%d\n", i); 109 + continue; 110 + } 111 + 94 112 DRM_DEBUG_DRIVER("Connector id: 0x%016llx\n", 95 113 (unsigned long long)connector_id->integer.value); 96 114 DRM_DEBUG_DRIVER(" port id: %s\n",
+2 -2
drivers/gpu/drm/msm/adreno/a5xx_gpu.c
··· 1386 1386 1387 1387 static int a5xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value) 1388 1388 { 1389 - *value = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_CP_0_LO, 1390 - REG_A5XX_RBBM_PERFCTR_CP_0_HI); 1389 + *value = gpu_read64(gpu, REG_A5XX_RBBM_ALWAYSON_COUNTER_LO, 1390 + REG_A5XX_RBBM_ALWAYSON_COUNTER_HI); 1391 1391 1392 1392 return 0; 1393 1393 }
+12 -6
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 567 567 } else { 568 568 /* 569 569 * a650 tier targets don't need whereami but still need to be 570 - * equal to or newer than 1.95 for other security fixes 570 + * equal to or newer than 0.95 for other security fixes 571 571 */ 572 572 if (adreno_is_a650(adreno_gpu)) { 573 - if ((buf[0] & 0xfff) >= 0x195) { 573 + if ((buf[0] & 0xfff) >= 0x095) { 574 574 ret = true; 575 575 goto out; 576 576 } 577 577 578 578 DRM_DEV_ERROR(&gpu->pdev->dev, 579 579 "a650 SQE ucode is too old. Have version %x need at least %x\n", 580 - buf[0] & 0xfff, 0x195); 580 + buf[0] & 0xfff, 0x095); 581 581 } 582 582 583 583 /* ··· 1228 1228 /* Force the GPU power on so we can read this register */ 1229 1229 a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); 1230 1230 1231 - *value = gpu_read64(gpu, REG_A6XX_RBBM_PERFCTR_CP_0_LO, 1232 - REG_A6XX_RBBM_PERFCTR_CP_0_HI); 1231 + *value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO, 1232 + REG_A6XX_CP_ALWAYS_ON_COUNTER_HI); 1233 1233 1234 1234 a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); 1235 1235 mutex_unlock(&perfcounter_oob); ··· 1406 1406 int ret; 1407 1407 1408 1408 ret = nvmem_cell_read_u16(dev, "speed_bin", &speedbin); 1409 - if (ret) { 1409 + /* 1410 + * -ENOENT means that the platform doesn't support speedbin which is 1411 + * fine 1412 + */ 1413 + if (ret == -ENOENT) { 1414 + return 0; 1415 + } else if (ret) { 1410 1416 DRM_DEV_ERROR(dev, 1411 1417 "failed to read speed-bin (%d). Some OPPs may not be supported by hardware", 1412 1418 ret);
+3 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
··· 496 496 497 497 DPU_REG_WRITE(c, CTL_TOP, mode_sel); 498 498 DPU_REG_WRITE(c, CTL_INTF_ACTIVE, intf_active); 499 - DPU_REG_WRITE(c, CTL_MERGE_3D_ACTIVE, BIT(cfg->merge_3d - MERGE_3D_0)); 499 + if (cfg->merge_3d) 500 + DPU_REG_WRITE(c, CTL_MERGE_3D_ACTIVE, 501 + BIT(cfg->merge_3d - MERGE_3D_0)); 500 502 } 501 503 502 504 static void dpu_hw_ctl_intf_cfg(struct dpu_hw_ctl *ctx,
+1
drivers/gpu/drm/msm/msm_drv.c
··· 570 570 kfree(priv); 571 571 err_put_drm_dev: 572 572 drm_dev_put(ddev); 573 + platform_set_drvdata(pdev, NULL); 573 574 return ret; 574 575 } 575 576
+9 -3
drivers/gpu/drm/panel/panel-dsi-cm.c
··· 37 37 u32 height_mm; 38 38 u32 max_hs_rate; 39 39 u32 max_lp_rate; 40 + bool te_support; 40 41 }; 41 42 42 43 struct panel_drv_data { ··· 335 334 if (r) 336 335 goto err; 337 336 338 - r = mipi_dsi_dcs_set_tear_on(ddata->dsi, MIPI_DSI_DCS_TEAR_MODE_VBLANK); 339 - if (r) 340 - goto err; 337 + if (ddata->panel_data->te_support) { 338 + r = mipi_dsi_dcs_set_tear_on(ddata->dsi, MIPI_DSI_DCS_TEAR_MODE_VBLANK); 339 + if (r) 340 + goto err; 341 + } 341 342 342 343 /* possible panel bug */ 343 344 msleep(100); ··· 622 619 .height_mm = 0, 623 620 .max_hs_rate = 300000000, 624 621 .max_lp_rate = 10000000, 622 + .te_support = true, 625 623 }; 626 624 627 625 static const struct dsic_panel_data himalaya_data = { ··· 633 629 .height_mm = 88, 634 630 .max_hs_rate = 300000000, 635 631 .max_lp_rate = 10000000, 632 + .te_support = false, 636 633 }; 637 634 638 635 static const struct dsic_panel_data droid4_data = { ··· 644 639 .height_mm = 89, 645 640 .max_hs_rate = 300000000, 646 641 .max_lp_rate = 10000000, 642 + .te_support = false, 647 643 }; 648 644 649 645 static const struct of_device_id dsicm_of_match[] = {
+2 -2
drivers/gpu/drm/radeon/radeon_ttm.c
··· 364 364 if (gtt->userflags & RADEON_GEM_USERPTR_ANONONLY) { 365 365 /* check that we only pin down anonymous memory 366 366 to prevent problems with writeback */ 367 - unsigned long end = gtt->userptr + ttm->num_pages * PAGE_SIZE; 367 + unsigned long end = gtt->userptr + (u64)ttm->num_pages * PAGE_SIZE; 368 368 struct vm_area_struct *vma; 369 369 vma = find_vma(gtt->usermm, gtt->userptr); 370 370 if (!vma || vma->vm_file || vma->vm_end < end) ··· 386 386 } while (pinned < ttm->num_pages); 387 387 388 388 r = sg_alloc_table_from_pages(ttm->sg, ttm->pages, ttm->num_pages, 0, 389 - ttm->num_pages << PAGE_SHIFT, 389 + (u64)ttm->num_pages << PAGE_SHIFT, 390 390 GFP_KERNEL); 391 391 if (r) 392 392 goto release_sg;
+17
drivers/gpu/drm/vc4/vc4_crtc.c
··· 210 210 { 211 211 const struct vc4_crtc_data *crtc_data = vc4_crtc_to_vc4_crtc_data(vc4_crtc); 212 212 const struct vc4_pv_data *pv_data = vc4_crtc_to_vc4_pv_data(vc4_crtc); 213 + struct vc4_dev *vc4 = to_vc4_dev(vc4_crtc->base.dev); 213 214 u32 fifo_len_bytes = pv_data->fifo_depth; 214 215 215 216 /* ··· 238 237 */ 239 238 if (crtc_data->hvs_output == 5) 240 239 return 32; 240 + 241 + /* 242 + * It looks like in some situations, we will overflow 243 + * the PixelValve FIFO (with the bit 10 of PV stat being 244 + * set) and stall the HVS / PV, eventually resulting in 245 + * a page flip timeout. 246 + * 247 + * Displaying the video overlay during a playback with 248 + * Kodi on an RPi3 seems to be a great solution with a 249 + * failure rate around 50%. 250 + * 251 + * Removing 1 from the FIFO full level however 252 + * seems to completely remove that issue. 253 + */ 254 + if (!vc4->hvs->hvs5) 255 + return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1; 241 256 242 257 return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX; 243 258 }
-1
drivers/gpu/drm/vc4/vc4_plane.c
··· 1146 1146 plane->state->src_y = state->src_y; 1147 1147 plane->state->src_w = state->src_w; 1148 1148 plane->state->src_h = state->src_h; 1149 - plane->state->src_h = state->src_h; 1150 1149 plane->state->alpha = state->alpha; 1151 1150 plane->state->pixel_blend_mode = state->pixel_blend_mode; 1152 1151 plane->state->rotation = state->rotation;
+4 -2
drivers/gpu/drm/xen/xen_drm_front.c
··· 521 521 drm_dev = drm_dev_alloc(&xen_drm_driver, dev); 522 522 if (IS_ERR(drm_dev)) { 523 523 ret = PTR_ERR(drm_dev); 524 - goto fail; 524 + goto fail_dev; 525 525 } 526 526 527 527 drm_info->drm_dev = drm_dev; ··· 551 551 drm_kms_helper_poll_fini(drm_dev); 552 552 drm_mode_config_cleanup(drm_dev); 553 553 drm_dev_put(drm_dev); 554 - fail: 554 + fail_dev: 555 555 kfree(drm_info); 556 + front_info->drm_info = NULL; 557 + fail: 556 558 return ret; 557 559 } 558 560
-1
drivers/gpu/drm/xen/xen_drm_front_conn.h
··· 16 16 struct drm_connector; 17 17 struct xen_drm_front_drm_info; 18 18 19 - struct xen_drm_front_drm_info; 20 19 21 20 int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info, 22 21 struct drm_connector *connector);
+1
drivers/i2c/busses/i2c-designware-master.c
··· 129 129 if ((comp_param1 & DW_IC_COMP_PARAM_1_SPEED_MODE_MASK) 130 130 != DW_IC_COMP_PARAM_1_SPEED_MODE_HIGH) { 131 131 dev_err(dev->dev, "High Speed not supported!\n"); 132 + t->bus_freq_hz = I2C_MAX_FAST_MODE_FREQ; 132 133 dev->master_cfg &= ~DW_IC_CON_SPEED_MASK; 133 134 dev->master_cfg |= DW_IC_CON_SPEED_FAST; 134 135 dev->hs_hcnt = 0;
+1 -1
drivers/i2c/busses/i2c-exynos5.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 - /** 2 + /* 3 3 * i2c-exynos5.c - Samsung Exynos5 I2C Controller Driver 4 4 * 5 5 * Copyright (C) 2013 Samsung Electronics Co., Ltd.
+1 -1
drivers/i2c/busses/i2c-hix5hd2.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 2 /* 3 3 * Copyright (c) 2014 Linaro Ltd. 4 - * Copyright (c) 2014 Hisilicon Limited. 4 + * Copyright (c) 2014 HiSilicon Limited. 5 5 * 6 6 * Now only support 7 bit address. 7 7 */
+2 -2
drivers/i2c/busses/i2c-jz4780.c
··· 525 525 i2c_sta = jz4780_i2c_readw(i2c, JZ4780_I2C_STA); 526 526 data = *i2c->wbuf; 527 527 data &= ~JZ4780_I2C_DC_READ; 528 - if ((!i2c->stop_hold) && (i2c->cdata->version >= 529 - ID_X1000)) 528 + if ((i2c->wt_len == 1) && (!i2c->stop_hold) && 529 + (i2c->cdata->version >= ID_X1000)) 530 530 data |= X1000_I2C_DC_STOP; 531 531 jz4780_i2c_writew(i2c, JZ4780_I2C_DC, data); 532 532 i2c->wbuf++;
+1 -1
drivers/i2c/busses/i2c-stm32f4.c
··· 534 534 default: 535 535 /* 536 536 * N-byte reception: 537 - * Enable ACK, reset POS (ACK postion) and clear ADDR flag. 537 + * Enable ACK, reset POS (ACK position) and clear ADDR flag. 538 538 * In that way, ACK will be sent as soon as the current byte 539 539 * will be received in the shift register 540 540 */
+4 -3
drivers/i2c/i2c-core-base.c
··· 378 378 static int i2c_init_recovery(struct i2c_adapter *adap) 379 379 { 380 380 struct i2c_bus_recovery_info *bri = adap->bus_recovery_info; 381 - char *err_str; 381 + char *err_str, *err_level = KERN_ERR; 382 382 383 383 if (!bri) 384 384 return 0; ··· 387 387 return -EPROBE_DEFER; 388 388 389 389 if (!bri->recover_bus) { 390 - err_str = "no recover_bus() found"; 390 + err_str = "no suitable method provided"; 391 + err_level = KERN_DEBUG; 391 392 goto err; 392 393 } 393 394 ··· 415 414 416 415 return 0; 417 416 err: 418 - dev_err(&adap->dev, "Not using recovery: %s\n", err_str); 417 + dev_printk(err_level, &adap->dev, "Not using recovery: %s\n", err_str); 419 418 adap->bus_recovery_info = NULL; 420 419 421 420 return -EINVAL;
+3 -1
drivers/infiniband/core/addr.c
··· 76 76 77 77 static const struct nla_policy ib_nl_addr_policy[LS_NLA_TYPE_MAX] = { 78 78 [LS_NLA_TYPE_DGID] = {.type = NLA_BINARY, 79 - .len = sizeof(struct rdma_nla_ls_gid)}, 79 + .len = sizeof(struct rdma_nla_ls_gid), 80 + .validation_type = NLA_VALIDATE_MIN, 81 + .min = sizeof(struct rdma_nla_ls_gid)}, 80 82 }; 81 83 82 84 static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh)
+2 -1
drivers/infiniband/hw/cxgb4/cm.c
··· 3616 3616 c4iw_init_wr_wait(ep->com.wr_waitp); 3617 3617 err = cxgb4_remove_server( 3618 3618 ep->com.dev->rdev.lldi.ports[0], ep->stid, 3619 - ep->com.dev->rdev.lldi.rxq_ids[0], true); 3619 + ep->com.dev->rdev.lldi.rxq_ids[0], 3620 + ep->com.local_addr.ss_family == AF_INET6); 3620 3621 if (err) 3621 3622 goto done; 3622 3623 err = c4iw_wait_for_reply(&ep->com.dev->rdev, ep->com.wr_waitp,
+5 -16
drivers/infiniband/hw/hfi1/affinity.c
··· 632 632 */ 633 633 int hfi1_dev_affinity_init(struct hfi1_devdata *dd) 634 634 { 635 - int node = pcibus_to_node(dd->pcidev->bus); 636 635 struct hfi1_affinity_node *entry; 637 636 const struct cpumask *local_mask; 638 637 int curr_cpu, possible, i, ret; 639 638 bool new_entry = false; 640 - 641 - /* 642 - * If the BIOS does not have the NUMA node information set, select 643 - * NUMA 0 so we get consistent performance. 644 - */ 645 - if (node < 0) { 646 - dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n"); 647 - node = 0; 648 - } 649 - dd->node = node; 650 639 651 640 local_mask = cpumask_of_node(dd->node); 652 641 if (cpumask_first(local_mask) >= nr_cpu_ids) ··· 649 660 * create an entry in the global affinity structure and initialize it. 650 661 */ 651 662 if (!entry) { 652 - entry = node_affinity_allocate(node); 663 + entry = node_affinity_allocate(dd->node); 653 664 if (!entry) { 654 665 dd_dev_err(dd, 655 666 "Unable to allocate global affinity node\n"); ··· 740 751 if (new_entry) 741 752 node_affinity_add_tail(entry); 742 753 754 + dd->affinity_entry = entry; 743 755 mutex_unlock(&node_affinity.lock); 744 756 745 757 return 0; ··· 756 766 { 757 767 struct hfi1_affinity_node *entry; 758 768 759 - if (dd->node < 0) 760 - return; 761 - 762 769 mutex_lock(&node_affinity.lock); 770 + if (!dd->affinity_entry) 771 + goto unlock; 763 772 entry = node_affinity_lookup(dd->node); 764 773 if (!entry) 765 774 goto unlock; ··· 769 780 */ 770 781 _dev_comp_vect_cpu_mask_clean_up(dd, entry); 771 782 unlock: 783 + dd->affinity_entry = NULL; 772 784 mutex_unlock(&node_affinity.lock); 773 - dd->node = NUMA_NO_NODE; 774 785 } 775 786 776 787 /*
+1
drivers/infiniband/hw/hfi1/hfi.h
··· 1409 1409 spinlock_t irq_src_lock; 1410 1410 int vnic_num_vports; 1411 1411 struct net_device *dummy_netdev; 1412 + struct hfi1_affinity_node *affinity_entry; 1412 1413 1413 1414 /* Keeps track of IPoIB RSM rule users */ 1414 1415 atomic_t ipoib_rsm_usr_num;
+9 -1
drivers/infiniband/hw/hfi1/init.c
··· 1277 1277 dd->pport = (struct hfi1_pportdata *)(dd + 1); 1278 1278 dd->pcidev = pdev; 1279 1279 pci_set_drvdata(pdev, dd); 1280 - dd->node = NUMA_NO_NODE; 1281 1280 1282 1281 ret = xa_alloc_irq(&hfi1_dev_table, &dd->unit, dd, xa_limit_32b, 1283 1282 GFP_KERNEL); ··· 1286 1287 goto bail; 1287 1288 } 1288 1289 rvt_set_ibdev_name(&dd->verbs_dev.rdi, "%s_%d", class_name(), dd->unit); 1290 + /* 1291 + * If the BIOS does not have the NUMA node information set, select 1292 + * NUMA 0 so we get consistent performance. 1293 + */ 1294 + dd->node = pcibus_to_node(pdev->bus); 1295 + if (dd->node == NUMA_NO_NODE) { 1296 + dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n"); 1297 + dd->node = 0; 1298 + } 1289 1299 1290 1300 /* 1291 1301 * Initialize all locks for the device. This needs to be as early as
+1 -2
drivers/infiniband/hw/hfi1/netdev_rx.c
··· 173 173 return 0; 174 174 } 175 175 176 - cpumask_and(node_cpu_mask, cpu_mask, 177 - cpumask_of_node(pcibus_to_node(dd->pcidev->bus))); 176 + cpumask_and(node_cpu_mask, cpu_mask, cpumask_of_node(dd->node)); 178 177 179 178 available_cpus = cpumask_weight(node_cpu_mask); 180 179
+2 -1
drivers/infiniband/hw/qedr/verbs.c
··· 1244 1244 * TGT QP isn't associated with RQ/SQ 1245 1245 */ 1246 1246 if ((attrs->qp_type != IB_QPT_GSI) && (dev->gsi_qp_created) && 1247 - (attrs->qp_type != IB_QPT_XRC_TGT)) { 1247 + (attrs->qp_type != IB_QPT_XRC_TGT) && 1248 + (attrs->qp_type != IB_QPT_XRC_INI)) { 1248 1249 struct qedr_cq *send_cq = get_qedr_cq(attrs->send_cq); 1249 1250 struct qedr_cq *recv_cq = get_qedr_cq(attrs->recv_cq); 1250 1251
+1 -1
drivers/infiniband/ulp/rtrs/rtrs-clt.c
··· 2720 2720 2721 2721 /* Now it is safe to iterate over all paths without locks */ 2722 2722 list_for_each_entry_safe(sess, tmp, &clt->paths_list, s.entry) { 2723 - rtrs_clt_destroy_sess_files(sess, NULL); 2724 2723 rtrs_clt_close_conns(sess, true); 2724 + rtrs_clt_destroy_sess_files(sess, NULL); 2725 2725 kobject_put(&sess->kobj); 2726 2726 } 2727 2727 free_clt(clt);
+2 -2
drivers/leds/leds-turris-omnia.c
··· 2 2 /* 3 3 * CZ.NIC's Turris Omnia LEDs driver 4 4 * 5 - * 2020 by Marek Behun <marek.behun@nic.cz> 5 + * 2020 by Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #include <linux/i2c.h> ··· 287 287 288 288 module_i2c_driver(omnia_leds_driver); 289 289 290 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>"); 290 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>"); 291 291 MODULE_DESCRIPTION("CZ.NIC's Turris Omnia LEDs"); 292 292 MODULE_LICENSE("GPL v2");
+2 -2
drivers/mailbox/armada-37xx-rwtm-mailbox.c
··· 2 2 /* 3 3 * rWTM BIU Mailbox driver for Armada 37xx 4 4 * 5 - * Author: Marek Behun <marek.behun@nic.cz> 5 + * Author: Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #include <linux/device.h> ··· 203 203 204 204 MODULE_LICENSE("GPL v2"); 205 205 MODULE_DESCRIPTION("rWTM BIU Mailbox driver for Armada 37xx"); 206 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>"); 206 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>");
+18 -6
drivers/net/can/spi/mcp251x.c
··· 314 314 return ret; 315 315 } 316 316 317 + static int mcp251x_spi_write(struct spi_device *spi, int len) 318 + { 319 + struct mcp251x_priv *priv = spi_get_drvdata(spi); 320 + int ret; 321 + 322 + ret = spi_write(spi, priv->spi_tx_buf, len); 323 + if (ret) 324 + dev_err(&spi->dev, "spi write failed: ret = %d\n", ret); 325 + 326 + return ret; 327 + } 328 + 317 329 static u8 mcp251x_read_reg(struct spi_device *spi, u8 reg) 318 330 { 319 331 struct mcp251x_priv *priv = spi_get_drvdata(spi); ··· 373 361 priv->spi_tx_buf[1] = reg; 374 362 priv->spi_tx_buf[2] = val; 375 363 376 - mcp251x_spi_trans(spi, 3); 364 + mcp251x_spi_write(spi, 3); 377 365 } 378 366 379 367 static void mcp251x_write_2regs(struct spi_device *spi, u8 reg, u8 v1, u8 v2) ··· 385 373 priv->spi_tx_buf[2] = v1; 386 374 priv->spi_tx_buf[3] = v2; 387 375 388 - mcp251x_spi_trans(spi, 4); 376 + mcp251x_spi_write(spi, 4); 389 377 } 390 378 391 379 static void mcp251x_write_bits(struct spi_device *spi, u8 reg, ··· 398 386 priv->spi_tx_buf[2] = mask; 399 387 priv->spi_tx_buf[3] = val; 400 388 401 - mcp251x_spi_trans(spi, 4); 389 + mcp251x_spi_write(spi, 4); 402 390 } 403 391 404 392 static u8 mcp251x_read_stat(struct spi_device *spi) ··· 630 618 buf[i]); 631 619 } else { 632 620 memcpy(priv->spi_tx_buf, buf, TXBDAT_OFF + len); 633 - mcp251x_spi_trans(spi, TXBDAT_OFF + len); 621 + mcp251x_spi_write(spi, TXBDAT_OFF + len); 634 622 } 635 623 } 636 624 ··· 662 650 663 651 /* use INSTRUCTION_RTS, to avoid "repeated frame problem" */ 664 652 priv->spi_tx_buf[0] = INSTRUCTION_RTS(1 << tx_buf_idx); 665 - mcp251x_spi_trans(priv->spi, 1); 653 + mcp251x_spi_write(priv->spi, 1); 666 654 } 667 655 668 656 static void mcp251x_hw_rx_frame(struct spi_device *spi, u8 *buf, ··· 900 888 mdelay(MCP251X_OST_DELAY_MS); 901 889 902 890 priv->spi_tx_buf[0] = INSTRUCTION_RESET; 903 - ret = mcp251x_spi_trans(spi, 1); 891 + ret = mcp251x_spi_write(spi, 1); 904 892 if (ret) 905 893 return ret; 906 894
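The mcp251x change above funnels every write-only transfer through a new `mcp251x_spi_write()` helper so the failure is logged in one place instead of being silently dropped by each caller. A rough sketch of the pattern, with stand-in names (`raw_write` substitutes for `spi_write`, a counter substitutes for `dev_err`):

```c
static int fail_next;     /* test hook: force the next transfer to fail */
static int errors_logged; /* stands in for dev_err() diagnostics */

/* stand-in for spi_write(): 0 on success, negative errno on failure */
static int raw_write(const unsigned char *buf, int len)
{
	(void)buf;
	(void)len;
	if (fail_next) {
		fail_next = 0;
		return -5; /* -EIO */
	}
	return 0;
}

/* single choke point: every caller's failure gets logged exactly once */
static int checked_write(const unsigned char *buf, int len)
{
	int ret = raw_write(buf, len);

	if (ret)
		errors_logged++; /* dev_err(&spi->dev, ...) in the driver */
	return ret;
}
```

Callers that previously ignored the return value of the raw transfer now at least leave a trace in the log when the bus misbehaves.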
+5 -1
drivers/net/can/usb/peak_usb/pcan_usb_core.c
··· 857 857 if (dev->adapter->dev_set_bus) { 858 858 err = dev->adapter->dev_set_bus(dev, 0); 859 859 if (err) 860 - goto lbl_unregister_candev; 860 + goto adap_dev_free; 861 861 } 862 862 863 863 /* get device number early */ ··· 868 868 peak_usb_adapter->name, ctrl_idx, dev->device_number); 869 869 870 870 return 0; 871 + 872 + adap_dev_free: 873 + if (dev->adapter->dev_free) 874 + dev->adapter->dev_free(dev); 871 875 872 876 lbl_unregister_candev: 873 877 unregister_candev(netdev);
+172 -21
drivers/net/dsa/lantiq_gswip.c
··· 93 93 94 94 /* GSWIP MII Registers */ 95 95 #define GSWIP_MII_CFGp(p) (0x2 * (p)) 96 + #define GSWIP_MII_CFG_RESET BIT(15) 96 97 #define GSWIP_MII_CFG_EN BIT(14) 98 + #define GSWIP_MII_CFG_ISOLATE BIT(13) 97 99 #define GSWIP_MII_CFG_LDCLKDIS BIT(12) 100 + #define GSWIP_MII_CFG_RGMII_IBS BIT(8) 101 + #define GSWIP_MII_CFG_RMII_CLK BIT(7) 98 102 #define GSWIP_MII_CFG_MODE_MIIP 0x0 99 103 #define GSWIP_MII_CFG_MODE_MIIM 0x1 100 104 #define GSWIP_MII_CFG_MODE_RMIIP 0x2 ··· 194 190 #define GSWIP_PCE_DEFPVID(p) (0x486 + ((p) * 0xA)) 195 191 196 192 #define GSWIP_MAC_FLEN 0x8C5 193 + #define GSWIP_MAC_CTRL_0p(p) (0x903 + ((p) * 0xC)) 194 + #define GSWIP_MAC_CTRL_0_PADEN BIT(8) 195 + #define GSWIP_MAC_CTRL_0_FCS_EN BIT(7) 196 + #define GSWIP_MAC_CTRL_0_FCON_MASK 0x0070 197 + #define GSWIP_MAC_CTRL_0_FCON_AUTO 0x0000 198 + #define GSWIP_MAC_CTRL_0_FCON_RX 0x0010 199 + #define GSWIP_MAC_CTRL_0_FCON_TX 0x0020 200 + #define GSWIP_MAC_CTRL_0_FCON_RXTX 0x0030 201 + #define GSWIP_MAC_CTRL_0_FCON_NONE 0x0040 202 + #define GSWIP_MAC_CTRL_0_FDUP_MASK 0x000C 203 + #define GSWIP_MAC_CTRL_0_FDUP_AUTO 0x0000 204 + #define GSWIP_MAC_CTRL_0_FDUP_EN 0x0004 205 + #define GSWIP_MAC_CTRL_0_FDUP_DIS 0x000C 206 + #define GSWIP_MAC_CTRL_0_GMII_MASK 0x0003 207 + #define GSWIP_MAC_CTRL_0_GMII_AUTO 0x0000 208 + #define GSWIP_MAC_CTRL_0_GMII_MII 0x0001 209 + #define GSWIP_MAC_CTRL_0_GMII_RGMII 0x0002 197 210 #define GSWIP_MAC_CTRL_2p(p) (0x905 + ((p) * 0xC)) 198 211 #define GSWIP_MAC_CTRL_2_MLEN BIT(3) /* Maximum Untagged Frame Lnegth */ 199 212 ··· 674 653 GSWIP_SDMA_PCTRLp(port)); 675 654 676 655 if (!dsa_is_cpu_port(ds, port)) { 677 - u32 macconf = GSWIP_MDIO_PHY_LINK_AUTO | 678 - GSWIP_MDIO_PHY_SPEED_AUTO | 679 - GSWIP_MDIO_PHY_FDUP_AUTO | 680 - GSWIP_MDIO_PHY_FCONTX_AUTO | 681 - GSWIP_MDIO_PHY_FCONRX_AUTO | 682 - (phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK); 656 + u32 mdio_phy = 0; 683 657 684 - gswip_mdio_w(priv, macconf, GSWIP_MDIO_PHYp(port)); 685 - /* Activate MDIO auto polling */ 686 
- gswip_mdio_mask(priv, 0, BIT(port), GSWIP_MDIO_MDC_CFG0); 658 + if (phydev) 659 + mdio_phy = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK; 660 + 661 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_ADDR_MASK, mdio_phy, 662 + GSWIP_MDIO_PHYp(port)); 687 663 } 688 664 689 665 return 0; ··· 692 674 693 675 if (!dsa_is_user_port(ds, port)) 694 676 return; 695 - 696 - if (!dsa_is_cpu_port(ds, port)) { 697 - gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_DOWN, 698 - GSWIP_MDIO_PHY_LINK_MASK, 699 - GSWIP_MDIO_PHYp(port)); 700 - /* Deactivate MDIO auto polling */ 701 - gswip_mdio_mask(priv, BIT(port), 0, GSWIP_MDIO_MDC_CFG0); 702 - } 703 677 704 678 gswip_switch_mask(priv, GSWIP_FDMA_PCTRL_EN, 0, 705 679 GSWIP_FDMA_PCTRLp(port)); ··· 804 794 gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP2); 805 795 gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP3); 806 796 807 - /* disable PHY auto polling */ 797 + /* Deactivate MDIO PHY auto polling. Some PHYs as the AR8030 have an 798 + * interoperability problem with this auto polling mechanism because 799 + * their status registers think that the link is in a different state 800 + * than it actually is. For the AR8030 it has the BMSR_ESTATEN bit set 801 + * as well as ESTATUS_1000_TFULL and ESTATUS_1000_XFULL. This makes the 802 + * auto polling state machine consider the link being negotiated with 803 + * 1Gbit/s. Since the PHY itself is a Fast Ethernet RMII PHY this leads 804 + * to the switch port being completely dead (RX and TX are both not 805 + * working). 806 + * Also with various other PHY / port combinations (PHY11G GPHY, PHY22F 807 + * GPHY, external RGMII PEF7071/7072) any traffic would stop. Sometimes 808 + * it would work fine for a few minutes to hours and then stop, on 809 + * other device it would no traffic could be sent or received at all. 810 + * Testing shows that when PHY auto polling is disabled these problems 811 + * go away. 
812 + */ 808 813 gswip_mdio_w(priv, 0x0, GSWIP_MDIO_MDC_CFG0); 814 + 809 815 /* Configure the MDIO Clock 2.5 MHz */ 810 816 gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1); 811 817 812 - /* Disable the xMII link */ 818 + /* Disable the xMII interface and clear it's isolation bit */ 813 819 for (i = 0; i < priv->hw_info->max_ports; i++) 814 - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, i); 820 + gswip_mii_mask_cfg(priv, 821 + GSWIP_MII_CFG_EN | GSWIP_MII_CFG_ISOLATE, 822 + 0, i); 815 823 816 824 /* enable special tag insertion on cpu port */ 817 825 gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN, ··· 1478 1450 return; 1479 1451 } 1480 1452 1453 + static void gswip_port_set_link(struct gswip_priv *priv, int port, bool link) 1454 + { 1455 + u32 mdio_phy; 1456 + 1457 + if (link) 1458 + mdio_phy = GSWIP_MDIO_PHY_LINK_UP; 1459 + else 1460 + mdio_phy = GSWIP_MDIO_PHY_LINK_DOWN; 1461 + 1462 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_MASK, mdio_phy, 1463 + GSWIP_MDIO_PHYp(port)); 1464 + } 1465 + 1466 + static void gswip_port_set_speed(struct gswip_priv *priv, int port, int speed, 1467 + phy_interface_t interface) 1468 + { 1469 + u32 mdio_phy = 0, mii_cfg = 0, mac_ctrl_0 = 0; 1470 + 1471 + switch (speed) { 1472 + case SPEED_10: 1473 + mdio_phy = GSWIP_MDIO_PHY_SPEED_M10; 1474 + 1475 + if (interface == PHY_INTERFACE_MODE_RMII) 1476 + mii_cfg = GSWIP_MII_CFG_RATE_M50; 1477 + else 1478 + mii_cfg = GSWIP_MII_CFG_RATE_M2P5; 1479 + 1480 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII; 1481 + break; 1482 + 1483 + case SPEED_100: 1484 + mdio_phy = GSWIP_MDIO_PHY_SPEED_M100; 1485 + 1486 + if (interface == PHY_INTERFACE_MODE_RMII) 1487 + mii_cfg = GSWIP_MII_CFG_RATE_M50; 1488 + else 1489 + mii_cfg = GSWIP_MII_CFG_RATE_M25; 1490 + 1491 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII; 1492 + break; 1493 + 1494 + case SPEED_1000: 1495 + mdio_phy = GSWIP_MDIO_PHY_SPEED_G1; 1496 + 1497 + mii_cfg = GSWIP_MII_CFG_RATE_M125; 1498 + 1499 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_RGMII; 1500 + break; 
1501 + } 1502 + 1503 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_SPEED_MASK, mdio_phy, 1504 + GSWIP_MDIO_PHYp(port)); 1505 + gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_RATE_MASK, mii_cfg, port); 1506 + gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_GMII_MASK, mac_ctrl_0, 1507 + GSWIP_MAC_CTRL_0p(port)); 1508 + } 1509 + 1510 + static void gswip_port_set_duplex(struct gswip_priv *priv, int port, int duplex) 1511 + { 1512 + u32 mac_ctrl_0, mdio_phy; 1513 + 1514 + if (duplex == DUPLEX_FULL) { 1515 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_EN; 1516 + mdio_phy = GSWIP_MDIO_PHY_FDUP_EN; 1517 + } else { 1518 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_DIS; 1519 + mdio_phy = GSWIP_MDIO_PHY_FDUP_DIS; 1520 + } 1521 + 1522 + gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FDUP_MASK, mac_ctrl_0, 1523 + GSWIP_MAC_CTRL_0p(port)); 1524 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_FDUP_MASK, mdio_phy, 1525 + GSWIP_MDIO_PHYp(port)); 1526 + } 1527 + 1528 + static void gswip_port_set_pause(struct gswip_priv *priv, int port, 1529 + bool tx_pause, bool rx_pause) 1530 + { 1531 + u32 mac_ctrl_0, mdio_phy; 1532 + 1533 + if (tx_pause && rx_pause) { 1534 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RXTX; 1535 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN | 1536 + GSWIP_MDIO_PHY_FCONRX_EN; 1537 + } else if (tx_pause) { 1538 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_TX; 1539 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN | 1540 + GSWIP_MDIO_PHY_FCONRX_DIS; 1541 + } else if (rx_pause) { 1542 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RX; 1543 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS | 1544 + GSWIP_MDIO_PHY_FCONRX_EN; 1545 + } else { 1546 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_NONE; 1547 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS | 1548 + GSWIP_MDIO_PHY_FCONRX_DIS; 1549 + } 1550 + 1551 + gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FCON_MASK, 1552 + mac_ctrl_0, GSWIP_MAC_CTRL_0p(port)); 1553 + gswip_mdio_mask(priv, 1554 + GSWIP_MDIO_PHY_FCONTX_MASK | 1555 + GSWIP_MDIO_PHY_FCONRX_MASK, 1556 + mdio_phy, GSWIP_MDIO_PHYp(port)); 1557 + } 1558 + 1481 1559 static void 
gswip_phylink_mac_config(struct dsa_switch *ds, int port, 1482 1560 unsigned int mode, 1483 1561 const struct phylink_link_state *state) ··· 1603 1469 break; 1604 1470 case PHY_INTERFACE_MODE_RMII: 1605 1471 miicfg |= GSWIP_MII_CFG_MODE_RMIIM; 1472 + 1473 + /* Configure the RMII clock as output: */ 1474 + miicfg |= GSWIP_MII_CFG_RMII_CLK; 1606 1475 break; 1607 1476 case PHY_INTERFACE_MODE_RGMII: 1608 1477 case PHY_INTERFACE_MODE_RGMII_ID: ··· 1618 1481 "Unsupported interface: %d\n", state->interface); 1619 1482 return; 1620 1483 } 1621 - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_MODE_MASK, miicfg, port); 1484 + 1485 + gswip_mii_mask_cfg(priv, 1486 + GSWIP_MII_CFG_MODE_MASK | GSWIP_MII_CFG_RMII_CLK | 1487 + GSWIP_MII_CFG_RGMII_IBS | GSWIP_MII_CFG_LDCLKDIS, 1488 + miicfg, port); 1622 1489 1623 1490 switch (state->interface) { 1624 1491 case PHY_INTERFACE_MODE_RGMII_ID: ··· 1647 1506 struct gswip_priv *priv = ds->priv; 1648 1507 1649 1508 gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, port); 1509 + 1510 + if (!dsa_is_cpu_port(ds, port)) 1511 + gswip_port_set_link(priv, port, false); 1650 1512 } 1651 1513 1652 1514 static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port, ··· 1660 1516 bool tx_pause, bool rx_pause) 1661 1517 { 1662 1518 struct gswip_priv *priv = ds->priv; 1519 + 1520 + if (!dsa_is_cpu_port(ds, port)) { 1521 + gswip_port_set_link(priv, port, true); 1522 + gswip_port_set_speed(priv, port, speed, interface); 1523 + gswip_port_set_duplex(priv, port, duplex); 1524 + gswip_port_set_pause(priv, port, tx_pause, rx_pause); 1525 + } 1663 1526 1664 1527 gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port); 1665 1528 }
+3 -2
drivers/net/ethernet/amd/pcnet32.c
··· 1534 1534 } 1535 1535 pci_set_master(pdev); 1536 1536 1537 - ioaddr = pci_resource_start(pdev, 0); 1538 - if (!ioaddr) { 1537 + if (!pci_resource_len(pdev, 0)) { 1539 1538 if (pcnet32_debug & NETIF_MSG_PROBE) 1540 1539 pr_err("card has no PCI IO resources, aborting\n"); 1541 1540 err = -ENODEV; ··· 1547 1548 pr_err("architecture does not support 32bit PCI busmaster DMA\n"); 1548 1549 goto err_disable_dev; 1549 1550 } 1551 + 1552 + ioaddr = pci_resource_start(pdev, 0); 1550 1553 if (!request_region(ioaddr, PCNET32_TOTAL_SIZE, "pcnet32_probe_pci")) { 1551 1554 if (pcnet32_debug & NETIF_MSG_PROBE) 1552 1555 pr_err("io address range already allocated\n");
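The pcnet32 fix above validates the BAR by its length rather than its start address: a resource can legitimately begin at bus address 0, so a zero `pci_resource_start()` proves nothing, while a zero length always means the resource is absent. A tiny sketch of that distinction (struct and field names are illustrative, not the PCI core's):

```c
struct io_bar {
	unsigned long start;
	unsigned long len;
};

/* Absent resources have zero length; a zero start alone is not
 * evidence of anything, since address 0 can be a valid mapping. */
static int bar_present(const struct io_bar *bar)
{
	return bar->len != 0;
}
```

Under this check a BAR mapped at address 0 with a nonzero window is accepted, and only a zero-length window is rejected.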
+3 -3
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 180 180 #define XGBE_DMA_SYS_AWCR 0x30303030 181 181 182 182 /* DMA cache settings - PCI device */ 183 - #define XGBE_DMA_PCI_ARCR 0x00000003 184 - #define XGBE_DMA_PCI_AWCR 0x13131313 185 - #define XGBE_DMA_PCI_AWARCR 0x00000313 183 + #define XGBE_DMA_PCI_ARCR 0x000f0f0f 184 + #define XGBE_DMA_PCI_AWCR 0x0f0f0f0f 185 + #define XGBE_DMA_PCI_AWARCR 0x00000f0f 186 186 187 187 /* DMA channel interrupt modes */ 188 188 #define XGBE_IRQ_MODE_EDGE 0
+1
drivers/net/ethernet/broadcom/bcm4908_enet.c
··· 172 172 173 173 err_free_buf_descs: 174 174 dma_free_coherent(dev, size, ring->cpu_addr, ring->dma_addr); 175 + ring->cpu_addr = NULL; 175 176 return -ENOMEM; 176 177 } 177 178
+7
drivers/net/ethernet/cadence/macb_main.c
··· 3239 3239 bool cmp_b = false; 3240 3240 bool cmp_c = false; 3241 3241 3242 + if (!macb_is_gem(bp)) 3243 + return; 3244 + 3242 3245 tp4sp_v = &(fs->h_u.tcp_ip4_spec); 3243 3246 tp4sp_m = &(fs->m_u.tcp_ip4_spec); 3244 3247 ··· 3610 3607 { 3611 3608 struct net_device *netdev = bp->dev; 3612 3609 netdev_features_t features = netdev->features; 3610 + struct ethtool_rx_fs_item *item; 3613 3611 3614 3612 /* TX checksum offload */ 3615 3613 macb_set_txcsum_feature(bp, features); ··· 3619 3615 macb_set_rxcsum_feature(bp, features); 3620 3616 3621 3617 /* RX Flow Filters */ 3618 + list_for_each_entry(item, &bp->rx_fs_list.list, list) 3619 + gem_prog_cmp_regs(bp, &item->fs); 3620 + 3622 3621 macb_set_rxflow_feature(bp, features); 3623 3622 } 3624 3623
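The macb change above replays every saved flow-steering rule whenever offload features change, since reconfiguring the controller can clobber the programmed compare registers. A minimal sketch of the replay-saved-rules idea (hypothetical types and a fixed array; the driver walks `bp->rx_fs_list` instead):

```c
#define MAX_RULES 4

struct rule {
	int id;
	int programmed;
};

static struct rule rules[MAX_RULES];
static int nrules;

/* stands in for gem_prog_cmp_regs(): rewrite hardware state for one rule */
static void prog_rule(struct rule *r)
{
	r->programmed = 1;
}

static int add_rule(int id)
{
	if (nrules >= MAX_RULES)
		return -1;
	rules[nrules].id = id;
	prog_rule(&rules[nrules]);
	nrules++;
	return 0;
}

/* feature change may have wiped hardware state: replay each saved rule */
static int features_changed(void)
{
	int i, replayed = 0;

	for (i = 0; i < nrules; i++) {
		prog_rule(&rules[i]);
		replayed++;
	}
	return replayed;
}
```

Keeping the software copy of the rules authoritative means a feature toggle only costs a re-program loop, never a lost filter.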
+19 -4
drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
··· 1794 1794 struct cudbg_buffer temp_buff = { 0 }; 1795 1795 struct sge_qbase_reg_field *sge_qbase; 1796 1796 struct ireg_buf *ch_sge_dbg; 1797 + u8 padap_running = 0; 1797 1798 int i, rc; 1799 + u32 size; 1798 1800 1799 - rc = cudbg_get_buff(pdbg_init, dbg_buff, 1800 - sizeof(*ch_sge_dbg) * 2 + sizeof(*sge_qbase), 1801 - &temp_buff); 1801 + /* Accessing SGE_QBASE_MAP[0-3] and SGE_QBASE_INDEX regs can 1802 + * lead to SGE missing doorbells under heavy traffic. So, only 1803 + * collect them when adapter is idle. 1804 + */ 1805 + for_each_port(padap, i) { 1806 + padap_running = netif_running(padap->port[i]); 1807 + if (padap_running) 1808 + break; 1809 + } 1810 + 1811 + size = sizeof(*ch_sge_dbg) * 2; 1812 + if (!padap_running) 1813 + size += sizeof(*sge_qbase); 1814 + 1815 + rc = cudbg_get_buff(pdbg_init, dbg_buff, size, &temp_buff); 1802 1816 if (rc) 1803 1817 return rc; 1804 1818 ··· 1834 1820 ch_sge_dbg++; 1835 1821 } 1836 1822 1837 - if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5) { 1823 + if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5 && 1824 + !padap_running) { 1838 1825 sge_qbase = (struct sge_qbase_reg_field *)ch_sge_dbg; 1839 1826 /* 1 addr reg SGE_QBASE_INDEX and 4 data reg 1840 1827 * SGE_QBASE_MAP[0-3]
+2 -1
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 2090 2090 0x1190, 0x1194, 2091 2091 0x11a0, 0x11a4, 2092 2092 0x11b0, 0x11b4, 2093 - 0x11fc, 0x1274, 2093 + 0x11fc, 0x123c, 2094 + 0x1254, 0x1274, 2094 2095 0x1280, 0x133c, 2095 2096 0x1800, 0x18fc, 2096 2097 0x3000, 0x302c,
+5 -1
drivers/net/ethernet/freescale/gianfar.c
··· 363 363 364 364 static int gfar_set_mac_addr(struct net_device *dev, void *p) 365 365 { 366 - eth_mac_addr(dev, p); 366 + int ret; 367 + 368 + ret = eth_mac_addr(dev, p); 369 + if (ret) 370 + return ret; 367 371 368 372 gfar_set_mac_for_addr(dev, 0, dev->dev_addr); 369 373
+4 -5
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 3966 3966 * normalcy is to reset. 3967 3967 * 2. A new reset request from the stack due to timeout 3968 3968 * 3969 - * For the first case,error event might not have ae handle available. 3970 3969 * check if this is a new reset request and we are not here just because 3971 3970 * last reset attempt did not succeed and watchdog hit us again. We will 3972 3971 * know this if last reset request did not occur very recently (watchdog ··· 3975 3976 * want to make sure we throttle the reset request. Therefore, we will 3976 3977 * not allow it again before 3*HZ times. 3977 3978 */ 3978 - if (!handle) 3979 - handle = &hdev->vport[0].nic; 3980 3979 3981 3980 if (time_before(jiffies, (hdev->last_reset_time + 3982 3981 HCLGE_RESET_INTERVAL))) { 3983 3982 mod_timer(&hdev->reset_timer, jiffies + HCLGE_RESET_INTERVAL); 3984 3983 return; 3985 - } else if (hdev->default_reset_request) { 3984 + } 3985 + 3986 + if (hdev->default_reset_request) { 3986 3987 hdev->reset_level = 3987 3988 hclge_get_reset_level(ae_dev, 3988 3989 &hdev->default_reset_request); ··· 11210 11211 if (ret) 11211 11212 return ret; 11212 11213 11213 - /* RSS indirection table has been configuared by user */ 11214 + /* RSS indirection table has been configured by user */ 11214 11215 if (rxfh_configured) 11215 11216 goto out; 11216 11217
+4 -4
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
··· 2193 2193 2194 2194 if (test_and_clear_bit(HCLGEVF_RESET_PENDING, 2195 2195 &hdev->reset_state)) { 2196 - /* PF has initmated that it is about to reset the hardware. 2196 + /* PF has intimated that it is about to reset the hardware. 2197 2197 * We now have to poll & check if hardware has actually 2198 2198 * completed the reset sequence. On hardware reset completion, 2199 2199 * VF needs to reset the client and ae device. ··· 2624 2624 { 2625 2625 struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 2626 2626 2627 + clear_bit(HCLGEVF_STATE_DOWN, &hdev->state); 2628 + 2627 2629 hclgevf_reset_tqp_stats(handle); 2628 2630 2629 2631 hclgevf_request_link_info(hdev); 2630 2632 2631 2633 hclgevf_update_link_mode(hdev); 2632 - 2633 - clear_bit(HCLGEVF_STATE_DOWN, &hdev->state); 2634 2634 2635 2635 return 0; 2636 2636 } ··· 3497 3497 if (ret) 3498 3498 return ret; 3499 3499 3500 - /* RSS indirection table has been configuared by user */ 3500 + /* RSS indirection table has been configured by user */ 3501 3501 if (rxfh_configured) 3502 3502 goto out; 3503 3503
+1
drivers/net/ethernet/intel/i40e/i40e.h
··· 142 142 __I40E_VIRTCHNL_OP_PENDING, 143 143 __I40E_RECOVERY_MODE, 144 144 __I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */ 145 + __I40E_VFS_RELEASING, 145 146 /* This must be last as it determines the size of the BITMAP */ 146 147 __I40E_STATE_SIZE__, 147 148 };
+3
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
··· 578 578 case RING_TYPE_XDP: 579 579 ring = kmemdup(vsi->xdp_rings[ring_id], sizeof(*ring), GFP_KERNEL); 580 580 break; 581 + default: 582 + ring = NULL; 583 + break; 581 584 } 582 585 if (!ring) 583 586 return;
+48 -7
drivers/net/ethernet/intel/i40e/i40e_ethtool.c
··· 232 232 I40E_STAT(struct i40e_vsi, _name, _stat) 233 233 #define I40E_VEB_STAT(_name, _stat) \ 234 234 I40E_STAT(struct i40e_veb, _name, _stat) 235 + #define I40E_VEB_TC_STAT(_name, _stat) \ 236 + I40E_STAT(struct i40e_cp_veb_tc_stats, _name, _stat) 235 237 #define I40E_PFC_STAT(_name, _stat) \ 236 238 I40E_STAT(struct i40e_pfc_stats, _name, _stat) 237 239 #define I40E_QUEUE_STAT(_name, _stat) \ ··· 268 266 I40E_VEB_STAT("veb.rx_unknown_protocol", stats.rx_unknown_protocol), 269 267 }; 270 268 269 + struct i40e_cp_veb_tc_stats { 270 + u64 tc_rx_packets; 271 + u64 tc_rx_bytes; 272 + u64 tc_tx_packets; 273 + u64 tc_tx_bytes; 274 + }; 275 + 271 276 static const struct i40e_stats i40e_gstrings_veb_tc_stats[] = { 272 - I40E_VEB_STAT("veb.tc_%u_tx_packets", tc_stats.tc_tx_packets), 273 - I40E_VEB_STAT("veb.tc_%u_tx_bytes", tc_stats.tc_tx_bytes), 274 - I40E_VEB_STAT("veb.tc_%u_rx_packets", tc_stats.tc_rx_packets), 275 - I40E_VEB_STAT("veb.tc_%u_rx_bytes", tc_stats.tc_rx_bytes), 277 + I40E_VEB_TC_STAT("veb.tc_%u_tx_packets", tc_tx_packets), 278 + I40E_VEB_TC_STAT("veb.tc_%u_tx_bytes", tc_tx_bytes), 279 + I40E_VEB_TC_STAT("veb.tc_%u_rx_packets", tc_rx_packets), 280 + I40E_VEB_TC_STAT("veb.tc_%u_rx_bytes", tc_rx_bytes), 276 281 }; 277 282 278 283 static const struct i40e_stats i40e_gstrings_misc_stats[] = { ··· 1110 1101 1111 1102 /* Set flow control settings */ 1112 1103 ethtool_link_ksettings_add_link_mode(ks, supported, Pause); 1104 + ethtool_link_ksettings_add_link_mode(ks, supported, Asym_Pause); 1113 1105 1114 1106 switch (hw->fc.requested_mode) { 1115 1107 case I40E_FC_FULL: ··· 2227 2217 } 2228 2218 2229 2219 /** 2220 + * i40e_get_veb_tc_stats - copy VEB TC statistics to formatted structure 2221 + * @tc: the TC statistics in VEB structure (veb->tc_stats) 2222 + * @i: the index of traffic class in (veb->tc_stats) structure to copy 2223 + * 2224 + * Copy VEB TC statistics from structure of arrays (veb->tc_stats) to 2225 + * one dimensional structure 
i40e_cp_veb_tc_stats. 2226 + * Produce formatted i40e_cp_veb_tc_stats structure of the VEB TC 2227 + * statistics for the given TC. 2228 + **/ 2229 + static struct i40e_cp_veb_tc_stats 2230 + i40e_get_veb_tc_stats(struct i40e_veb_tc_stats *tc, unsigned int i) 2231 + { 2232 + struct i40e_cp_veb_tc_stats veb_tc = { 2233 + .tc_rx_packets = tc->tc_rx_packets[i], 2234 + .tc_rx_bytes = tc->tc_rx_bytes[i], 2235 + .tc_tx_packets = tc->tc_tx_packets[i], 2236 + .tc_tx_bytes = tc->tc_tx_bytes[i], 2237 + }; 2238 + 2239 + return veb_tc; 2240 + } 2241 + 2242 + /** 2230 2243 * i40e_get_pfc_stats - copy HW PFC statistics to formatted structure 2231 2244 * @pf: the PF device structure 2232 2245 * @i: the priority value to copy ··· 2333 2300 i40e_gstrings_veb_stats); 2334 2301 2335 2302 for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) 2336 - i40e_add_ethtool_stats(&data, veb_stats ? veb : NULL, 2337 - i40e_gstrings_veb_tc_stats); 2303 + if (veb_stats) { 2304 + struct i40e_cp_veb_tc_stats veb_tc = 2305 + i40e_get_veb_tc_stats(&veb->tc_stats, i); 2306 + 2307 + i40e_add_ethtool_stats(&data, &veb_tc, 2308 + i40e_gstrings_veb_tc_stats); 2309 + } else { 2310 + i40e_add_ethtool_stats(&data, NULL, 2311 + i40e_gstrings_veb_tc_stats); 2312 + } 2338 2313 2339 2314 i40e_add_ethtool_stats(&data, pf, i40e_gstrings_stats); 2340 2315 ··· 5480 5439 5481 5440 status = i40e_aq_get_phy_register(hw, 5482 5441 I40E_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE, 5483 - true, addr, offset, &value, NULL); 5442 + addr, true, offset, &value, NULL); 5484 5443 if (status) 5485 5444 return -EIO; 5486 5445 data[i] = value;
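The `i40e_cp_veb_tc_stats` copy above exists because the ethtool stat tables locate each counter by a fixed byte offset into a flat structure, which a struct-of-arrays per-TC layout cannot provide. A generic sketch of the same flatten-then-read-by-offset pattern (names and array sizes are illustrative, not the driver's):

```c
#include <stddef.h>

/* struct-of-arrays, as the driver keeps its per-TC counters */
struct tc_stats {
	unsigned long long rx_packets[8];
	unsigned long long tx_packets[8];
};

/* flat per-TC view that offsetof()-based stat tables can address */
struct cp_tc_stats {
	unsigned long long rx_packets;
	unsigned long long tx_packets;
};

/* copy one TC's counters out of the arrays into the flat struct */
static struct cp_tc_stats flatten_tc(const struct tc_stats *tc,
				     unsigned int i)
{
	struct cp_tc_stats out = {
		.rx_packets = tc->rx_packets[i],
		.tx_packets = tc->tx_packets[i],
	};
	return out;
}

/* a stat reader that only knows a byte offset into the flat struct */
static unsigned long long read_stat(const struct cp_tc_stats *s, size_t off)
{
	return *(const unsigned long long *)((const char *)s + off);
}
```

The offset-driven reader never has to know which TC it is looking at; the per-TC indexing is resolved once, in the flatten step.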
+16 -14
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 2560 2560 i40e_stat_str(hw, aq_ret), 2561 2561 i40e_aq_str(hw, hw->aq.asq_last_status)); 2562 2562 } else { 2563 - dev_info(&pf->pdev->dev, "%s is %s allmulti mode.\n", 2564 - vsi->netdev->name, 2563 + dev_info(&pf->pdev->dev, "%s allmulti mode.\n", 2565 2564 cur_multipromisc ? "entering" : "leaving"); 2566 2565 } 2567 2566 } ··· 6737 6738 set_bit(__I40E_CLIENT_SERVICE_REQUESTED, pf->state); 6738 6739 set_bit(__I40E_CLIENT_L2_CHANGE, pf->state); 6739 6740 } 6740 - /* registers are set, lets apply */ 6741 - if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB) 6742 - ret = i40e_hw_set_dcb_config(pf, new_cfg); 6741 + /* registers are set, lets apply */ 6742 + if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB) 6743 + ret = i40e_hw_set_dcb_config(pf, new_cfg); 6743 6744 } 6744 6745 6745 6746 err: ··· 10572 10573 goto end_core_reset; 10573 10574 } 10574 10575 10575 - if (!lock_acquired) 10576 - rtnl_lock(); 10577 - ret = i40e_setup_pf_switch(pf, reinit); 10578 - if (ret) 10579 - goto end_unlock; 10580 - 10581 10576 #ifdef CONFIG_I40E_DCB 10582 10577 /* Enable FW to write a default DCB config on link-up 10583 10578 * unless I40E_FLAG_TC_MQPRIO was enabled or DCB ··· 10586 10593 i40e_aq_set_dcb_parameters(hw, false, NULL); 10587 10594 dev_warn(&pf->pdev->dev, 10588 10595 "DCB is not supported for X710-T*L 2.5/5G speeds\n"); 10589 - pf->flags &= ~I40E_FLAG_DCB_CAPABLE; 10596 + pf->flags &= ~I40E_FLAG_DCB_CAPABLE; 10590 10597 } else { 10591 10598 i40e_aq_set_dcb_parameters(hw, true, NULL); 10592 10599 ret = i40e_init_pf_dcb(pf); ··· 10600 10607 } 10601 10608 10602 10609 #endif /* CONFIG_I40E_DCB */ 10610 + if (!lock_acquired) 10611 + rtnl_lock(); 10612 + ret = i40e_setup_pf_switch(pf, reinit); 10613 + if (ret) 10614 + goto end_unlock; 10603 10615 10604 10616 /* The driver only wants link up/down and module qualification 10605 10617 * reports from firmware. Note the negative logic. 
··· 15138 15140 * in order to register the netdev 15139 15141 */ 15140 15142 v_idx = i40e_vsi_mem_alloc(pf, I40E_VSI_MAIN); 15141 - if (v_idx < 0) 15143 + if (v_idx < 0) { 15144 + err = v_idx; 15142 15145 goto err_switch_setup; 15146 + } 15143 15147 pf->lan_vsi = v_idx; 15144 15148 vsi = pf->vsi[v_idx]; 15145 - if (!vsi) 15149 + if (!vsi) { 15150 + err = -EFAULT; 15146 15151 goto err_switch_setup; 15152 + } 15147 15153 vsi->alloc_queue_pairs = 1; 15148 15154 err = i40e_config_netdev(vsi); 15149 15155 if (err)
+5 -7
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 2295 2295 * @rx_ring: Rx ring being processed 2296 2296 * @xdp: XDP buffer containing the frame 2297 2297 **/ 2298 - static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring, 2299 - struct xdp_buff *xdp) 2298 + static int i40e_run_xdp(struct i40e_ring *rx_ring, struct xdp_buff *xdp) 2300 2299 { 2301 2300 int err, result = I40E_XDP_PASS; 2302 2301 struct i40e_ring *xdp_ring; ··· 2334 2335 } 2335 2336 xdp_out: 2336 2337 rcu_read_unlock(); 2337 - return ERR_PTR(-result); 2338 + return result; 2338 2339 } 2339 2340 2340 2341 /** ··· 2447 2448 unsigned int xdp_xmit = 0; 2448 2449 bool failure = false; 2449 2450 struct xdp_buff xdp; 2451 + int xdp_res = 0; 2450 2452 2451 2453 #if (PAGE_SIZE < 8192) 2452 2454 frame_sz = i40e_rx_frame_truesize(rx_ring, 0); ··· 2513 2513 /* At larger PAGE_SIZE, frame_sz depend on len size */ 2514 2514 xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size); 2515 2515 #endif 2516 - skb = i40e_run_xdp(rx_ring, &xdp); 2516 + xdp_res = i40e_run_xdp(rx_ring, &xdp); 2517 2517 } 2518 2518 2519 - if (IS_ERR(skb)) { 2520 - unsigned int xdp_res = -PTR_ERR(skb); 2521 - 2519 + if (xdp_res) { 2522 2520 if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) { 2523 2521 xdp_xmit |= xdp_res; 2524 2522 i40e_rx_buffer_flip(rx_ring, rx_buffer, size);
+9
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 137 137 **/ 138 138 static inline void i40e_vc_disable_vf(struct i40e_vf *vf) 139 139 { 140 + struct i40e_pf *pf = vf->pf; 140 141 int i; 141 142 142 143 i40e_vc_notify_vf_reset(vf); ··· 148 147 * ensure a reset. 149 148 */ 150 149 for (i = 0; i < 20; i++) { 150 + /* If PF is in VFs releasing state reset VF is impossible, 151 + * so leave it. 152 + */ 153 + if (test_bit(__I40E_VFS_RELEASING, pf->state)) 154 + return; 151 155 if (i40e_reset_vf(vf, false)) 152 156 return; 153 157 usleep_range(10000, 20000); ··· 1580 1574 1581 1575 if (!pf->vf) 1582 1576 return; 1577 + 1578 + set_bit(__I40E_VFS_RELEASING, pf->state); 1583 1579 while (test_and_set_bit(__I40E_VF_DISABLE, pf->state)) 1584 1580 usleep_range(1000, 2000); 1585 1581 ··· 1639 1631 } 1640 1632 } 1641 1633 clear_bit(__I40E_VF_DISABLE, pf->state); 1634 + clear_bit(__I40E_VFS_RELEASING, pf->state); 1642 1635 } 1643 1636 1644 1637 #ifdef CONFIG_PCI_IOV
+2 -2
drivers/net/ethernet/intel/i40e/i40e_xsk.c
··· 471 471 472 472 nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, descs, budget); 473 473 if (!nb_pkts) 474 - return false; 474 + return true; 475 475 476 476 if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) { 477 477 nb_processed = xdp_ring->count - xdp_ring->next_to_use; ··· 488 488 489 489 i40e_update_tx_stats(xdp_ring, nb_pkts, total_bytes); 490 490 491 - return true; 491 + return nb_pkts < budget; 492 492 } 493 493 494 494 /**
+2 -2
drivers/net/ethernet/intel/ice/ice.h
··· 196 196 __ICE_NEEDS_RESTART, 197 197 __ICE_PREPARED_FOR_RESET, /* set by driver when prepared */ 198 198 __ICE_RESET_OICR_RECV, /* set by driver after rcv reset OICR */ 199 - __ICE_DCBNL_DEVRESET, /* set by dcbnl devreset */ 200 199 __ICE_PFR_REQ, /* set by driver and peers */ 201 200 __ICE_CORER_REQ, /* set by driver and peers */ 202 201 __ICE_GLOBR_REQ, /* set by driver and peers */ ··· 623 624 void ice_print_link_msg(struct ice_vsi *vsi, bool isup); 624 625 const char *ice_stat_str(enum ice_status stat_err); 625 626 const char *ice_aq_str(enum ice_aq_err aq_err); 626 - bool ice_is_wol_supported(struct ice_pf *pf); 627 + bool ice_is_wol_supported(struct ice_hw *hw); 627 628 int 628 629 ice_fdir_write_fltr(struct ice_pf *pf, struct ice_fdir_fltr *input, bool add, 629 630 bool is_tun); ··· 641 642 int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout, 642 643 struct ice_rq_event_info *event); 643 644 int ice_open(struct net_device *netdev); 645 + int ice_open_internal(struct net_device *netdev); 644 646 int ice_stop(struct net_device *netdev); 645 647 void ice_service_task_schedule(struct ice_pf *pf); 646 648
+1 -1
drivers/net/ethernet/intel/ice/ice_common.c
··· 717 717 718 718 if (!data) { 719 719 data = devm_kcalloc(ice_hw_to_dev(hw), 720 - sizeof(*data), 721 720 ICE_AQC_FW_LOG_ID_MAX, 721 + sizeof(*data), 722 722 GFP_KERNEL); 723 723 if (!data) 724 724 return ICE_ERR_NO_MEMORY;
+2 -2
drivers/net/ethernet/intel/ice/ice_controlq.h
··· 31 31 ICE_CTL_Q_MAILBOX, 32 32 }; 33 33 34 - /* Control Queue timeout settings - max delay 250ms */ 35 - #define ICE_CTL_Q_SQ_CMD_TIMEOUT 2500 /* Count 2500 times */ 34 + /* Control Queue timeout settings - max delay 1s */ 35 + #define ICE_CTL_Q_SQ_CMD_TIMEOUT 10000 /* Count 10000 times */ 36 36 #define ICE_CTL_Q_SQ_CMD_USEC 100 /* Check every 100usec */ 37 37 #define ICE_CTL_Q_ADMIN_INIT_TIMEOUT 10 /* Count 10 times */ 38 38 #define ICE_CTL_Q_ADMIN_INIT_MSEC 100 /* Check every 100msec */
+29 -9
drivers/net/ethernet/intel/ice/ice_dcb.c
··· 738 738 /** 739 739 * ice_cee_to_dcb_cfg 740 740 * @cee_cfg: pointer to CEE configuration struct 741 - * @dcbcfg: DCB configuration struct 741 + * @pi: port information structure 742 742 * 743 743 * Convert CEE configuration from firmware to DCB configuration 744 744 */ 745 745 static void 746 746 ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg, 747 - struct ice_dcbx_cfg *dcbcfg) 747 + struct ice_port_info *pi) 748 748 { 749 749 u32 status, tlv_status = le32_to_cpu(cee_cfg->tlv_status); 750 750 u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift; 751 + u8 i, j, err, sync, oper, app_index, ice_app_sel_type; 751 752 u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio); 752 - u8 i, err, sync, oper, app_index, ice_app_sel_type; 753 753 u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift; 754 + struct ice_dcbx_cfg *cmp_dcbcfg, *dcbcfg; 754 755 u16 ice_app_prot_id_type; 755 756 756 - /* CEE PG data to ETS config */ 757 + dcbcfg = &pi->qos_cfg.local_dcbx_cfg; 758 + dcbcfg->dcbx_mode = ICE_DCBX_MODE_CEE; 759 + dcbcfg->tlv_status = tlv_status; 760 + 761 + /* CEE PG data */ 757 762 dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc; 758 763 759 764 /* Note that the FW creates the oper_prio_tc nibbles reversed ··· 785 780 } 786 781 } 787 782 788 - /* CEE PFC data to ETS config */ 783 + /* CEE PFC data */ 789 784 dcbcfg->pfc.pfcena = cee_cfg->oper_pfc_en; 790 785 dcbcfg->pfc.pfccap = ICE_MAX_TRAFFIC_CLASS; 786 + 787 + /* CEE APP TLV data */ 788 + if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING) 789 + cmp_dcbcfg = &pi->qos_cfg.desired_dcbx_cfg; 790 + else 791 + cmp_dcbcfg = &pi->qos_cfg.remote_dcbx_cfg; 791 792 792 793 app_index = 0; 793 794 for (i = 0; i < 3; i++) { ··· 813 802 ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_ISCSI_S; 814 803 ice_app_sel_type = ICE_APP_SEL_TCPIP; 815 804 ice_app_prot_id_type = ICE_APP_PROT_ID_ISCSI; 805 + 806 + for (j = 0; j < cmp_dcbcfg->numapps; j++) { 807 + u16 prot_id = cmp_dcbcfg->app[j].prot_id; 808 + u8 sel = 
cmp_dcbcfg->app[j].selector; 809 + 810 + if (sel == ICE_APP_SEL_TCPIP && 811 + (prot_id == ICE_APP_PROT_ID_ISCSI || 812 + prot_id == ICE_APP_PROT_ID_ISCSI_860)) { 813 + ice_app_prot_id_type = prot_id; 814 + break; 815 + } 816 + } 816 817 } else { 817 818 /* FIP APP */ 818 819 ice_aqc_cee_status_mask = ICE_AQC_CEE_FIP_STATUS_M; ··· 915 892 ret = ice_aq_get_cee_dcb_cfg(pi->hw, &cee_cfg, NULL); 916 893 if (!ret) { 917 894 /* CEE mode */ 918 - dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg; 919 - dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_CEE; 920 - dcbx_cfg->tlv_status = le32_to_cpu(cee_cfg.tlv_status); 921 - ice_cee_to_dcb_cfg(&cee_cfg, dcbx_cfg); 922 895 ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_CEE); 896 + ice_cee_to_dcb_cfg(&cee_cfg, pi); 923 897 } else if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT) { 924 898 /* CEE mode not enabled try querying IEEE data */ 925 899 dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
-2
drivers/net/ethernet/intel/ice/ice_dcb_nl.c
··· 18 18 while (ice_is_reset_in_progress(pf->state)) 19 19 usleep_range(1000, 2000); 20 20 21 - set_bit(__ICE_DCBNL_DEVRESET, pf->state); 22 21 dev_close(netdev); 23 22 netdev_state_change(netdev); 24 23 dev_open(netdev, NULL); 25 24 netdev_state_change(netdev); 26 - clear_bit(__ICE_DCBNL_DEVRESET, pf->state); 27 25 } 28 26 29 27 /**
+2 -2
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 3472 3472 netdev_warn(netdev, "Wake on LAN is not supported on this interface!\n"); 3473 3473 3474 3474 /* Get WoL settings based on the HW capability */ 3475 - if (ice_is_wol_supported(pf)) { 3475 + if (ice_is_wol_supported(&pf->hw)) { 3476 3476 wol->supported = WAKE_MAGIC; 3477 3477 wol->wolopts = pf->wol_ena ? WAKE_MAGIC : 0; 3478 3478 } else { ··· 3492 3492 struct ice_vsi *vsi = np->vsi; 3493 3493 struct ice_pf *pf = vsi->back; 3494 3494 3495 - if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(pf)) 3495 + if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(&pf->hw)) 3496 3496 return -EOPNOTSUPP; 3497 3497 3498 3498 /* only magic packet is supported */
+2 -3
drivers/net/ethernet/intel/ice/ice_lib.c
··· 2620 2620 if (!locked) 2621 2621 rtnl_lock(); 2622 2622 2623 - err = ice_open(vsi->netdev); 2623 + err = ice_open_internal(vsi->netdev); 2624 2624 2625 2625 if (!locked) 2626 2626 rtnl_unlock(); ··· 2649 2649 if (!locked) 2650 2650 rtnl_lock(); 2651 2651 2652 - ice_stop(vsi->netdev); 2652 + ice_vsi_close(vsi); 2653 2653 2654 2654 if (!locked) 2655 2655 rtnl_unlock(); ··· 3078 3078 bool ice_is_reset_in_progress(unsigned long *state) 3079 3079 { 3080 3080 return test_bit(__ICE_RESET_OICR_RECV, state) || 3081 - test_bit(__ICE_DCBNL_DEVRESET, state) || 3082 3081 test_bit(__ICE_PFR_REQ, state) || 3083 3082 test_bit(__ICE_CORER_REQ, state) || 3084 3083 test_bit(__ICE_GLOBR_REQ, state);
+39 -14
drivers/net/ethernet/intel/ice/ice_main.c
··· 3537 3537 } 3538 3538 3539 3539 /** 3540 - * ice_is_wol_supported - get NVM state of WoL 3541 - * @pf: board private structure 3540 + * ice_is_wol_supported - check if WoL is supported 3541 + * @hw: pointer to hardware info 3542 3542 * 3543 3543 * Check if WoL is supported based on the HW configuration. 3544 3544 * Returns true if NVM supports and enables WoL for this port, false otherwise 3545 3545 */ 3546 - bool ice_is_wol_supported(struct ice_pf *pf) 3546 + bool ice_is_wol_supported(struct ice_hw *hw) 3547 3547 { 3548 - struct ice_hw *hw = &pf->hw; 3549 3548 u16 wol_ctrl; 3550 3549 3551 3550 /* A bit set to 1 in the NVM Software Reserved Word 2 (WoL control ··· 3553 3554 if (ice_read_sr_word(hw, ICE_SR_NVM_WOL_CFG, &wol_ctrl)) 3554 3555 return false; 3555 3556 3556 - return !(BIT(hw->pf_id) & wol_ctrl); 3557 + return !(BIT(hw->port_info->lport) & wol_ctrl); 3557 3558 } 3558 3559 3559 3560 /** ··· 4191 4192 goto err_send_version_unroll; 4192 4193 } 4193 4194 4195 + /* not a fatal error if this fails */ 4194 4196 err = ice_init_nvm_phy_type(pf->hw.port_info); 4195 - if (err) { 4197 + if (err) 4196 4198 dev_err(dev, "ice_init_nvm_phy_type failed: %d\n", err); 4197 - goto err_send_version_unroll; 4198 - } 4199 4199 4200 + /* not a fatal error if this fails */ 4200 4201 err = ice_update_link_info(pf->hw.port_info); 4201 - if (err) { 4202 + if (err) 4202 4203 dev_err(dev, "ice_update_link_info failed: %d\n", err); 4203 - goto err_send_version_unroll; 4204 - } 4205 4204 4206 4205 ice_init_link_dflt_override(pf->hw.port_info); 4207 4206 4208 4207 /* if media available, initialize PHY settings */ 4209 4208 if (pf->hw.port_info->phy.link_info.link_info & 4210 4209 ICE_AQ_MEDIA_AVAILABLE) { 4210 + /* not a fatal error if this fails */ 4211 4211 err = ice_init_phy_user_cfg(pf->hw.port_info); 4212 - if (err) { 4212 + if (err) 4213 4213 dev_err(dev, "ice_init_phy_user_cfg failed: %d\n", err); 4214 - goto err_send_version_unroll; 4215 - } 4216 4214 4217 4215 if 
(!test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, pf->flags)) { 4218 4216 struct ice_vsi *vsi = ice_get_main_vsi(pf); ··· 4564 4568 continue; 4565 4569 ice_vsi_free_q_vectors(pf->vsi[v]); 4566 4570 } 4571 + ice_free_cpu_rx_rmap(ice_get_main_vsi(pf)); 4567 4572 ice_clear_interrupt_scheme(pf); 4568 4573 4569 4574 pci_save_state(pdev); ··· 6634 6637 int ice_open(struct net_device *netdev) 6635 6638 { 6636 6639 struct ice_netdev_priv *np = netdev_priv(netdev); 6640 + struct ice_pf *pf = np->vsi->back; 6641 + 6642 + if (ice_is_reset_in_progress(pf->state)) { 6643 + netdev_err(netdev, "can't open net device while reset is in progress"); 6644 + return -EBUSY; 6645 + } 6646 + 6647 + return ice_open_internal(netdev); 6648 + } 6649 + 6650 + /** 6651 + * ice_open_internal - Called when a network interface becomes active 6652 + * @netdev: network interface device structure 6653 + * 6654 + * Internal ice_open implementation. Should not be used directly except for ice_open and reset 6655 + * handling routine 6656 + * 6657 + * Returns 0 on success, negative value on failure 6658 + */ 6659 + int ice_open_internal(struct net_device *netdev) 6660 + { 6661 + struct ice_netdev_priv *np = netdev_priv(netdev); 6637 6662 struct ice_vsi *vsi = np->vsi; 6638 6663 struct ice_pf *pf = vsi->back; 6639 6664 struct ice_port_info *pi; ··· 6734 6715 { 6735 6716 struct ice_netdev_priv *np = netdev_priv(netdev); 6736 6717 struct ice_vsi *vsi = np->vsi; 6718 + struct ice_pf *pf = vsi->back; 6719 + 6720 + if (ice_is_reset_in_progress(pf->state)) { 6721 + netdev_err(netdev, "can't stop net device while reset is in progress"); 6722 + return -EBUSY; 6723 + } 6737 6724 6738 6725 ice_vsi_close(vsi); 6739 6726
+9 -6
drivers/net/ethernet/intel/ice/ice_switch.c
··· 1238 1238 ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2, 1239 1239 vsi_list_id); 1240 1240 1241 + if (!m_entry->vsi_list_info) 1242 + return ICE_ERR_NO_MEMORY; 1243 + 1241 1244 /* If this entry was large action then the large action needs 1242 1245 * to be updated to point to FWD to VSI list 1243 1246 */ ··· 2223 2220 return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI && 2224 2221 fm_entry->fltr_info.vsi_handle == vsi_handle) || 2225 2222 (fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST && 2223 + fm_entry->vsi_list_info && 2226 2224 (test_bit(vsi_handle, fm_entry->vsi_list_info->vsi_map)))); 2227 2225 } 2228 2226 ··· 2296 2292 return ICE_ERR_PARAM; 2297 2293 2298 2294 list_for_each_entry(fm_entry, lkup_list_head, list_entry) { 2299 - struct ice_fltr_info *fi; 2300 - 2301 - fi = &fm_entry->fltr_info; 2302 - if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle)) 2295 + if (!ice_vsi_uses_fltr(fm_entry, vsi_handle)) 2303 2296 continue; 2304 2297 2305 2298 status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle, 2306 - vsi_list_head, fi); 2299 + vsi_list_head, 2300 + &fm_entry->fltr_info); 2307 2301 if (status) 2308 2302 return status; 2309 2303 } ··· 2624 2622 &remove_list_head); 2625 2623 mutex_unlock(rule_lock); 2626 2624 if (status) 2627 - return; 2625 + goto free_fltr_list; 2628 2626 2629 2627 switch (lkup) { 2630 2628 case ICE_SW_LKUP_MAC: ··· 2647 2645 break; 2648 2646 } 2649 2647 2648 + free_fltr_list: 2650 2649 list_for_each_entry_safe(fm_entry, tmp, &remove_list_head, list_entry) { 2651 2650 list_del(&fm_entry->list_entry); 2652 2651 devm_kfree(ice_hw_to_dev(hw), fm_entry);
+1
drivers/net/ethernet/intel/ice/ice_type.h
··· 535 535 #define ICE_TLV_STATUS_ERR 0x4 536 536 #define ICE_APP_PROT_ID_FCOE 0x8906 537 537 #define ICE_APP_PROT_ID_ISCSI 0x0cbc 538 + #define ICE_APP_PROT_ID_ISCSI_860 0x035c 538 539 #define ICE_APP_PROT_ID_FIP 0x8914 539 540 #define ICE_APP_SEL_ETHTYPE 0x1 540 541 #define ICE_APP_SEL_TCPIP 0x2
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/dev.c
··· 191 191 } 192 192 193 193 enum { 194 - MLX5_INTERFACE_PROTOCOL_ETH_REP, 195 194 MLX5_INTERFACE_PROTOCOL_ETH, 195 + MLX5_INTERFACE_PROTOCOL_ETH_REP, 196 196 197 + MLX5_INTERFACE_PROTOCOL_IB, 197 198 MLX5_INTERFACE_PROTOCOL_IB_REP, 198 199 MLX5_INTERFACE_PROTOCOL_MPIB, 199 - MLX5_INTERFACE_PROTOCOL_IB, 200 200 201 201 MLX5_INTERFACE_PROTOCOL_VNET, 202 202 };
+1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 516 516 struct mlx5_wq_cyc wq; 517 517 void __iomem *uar_map; 518 518 u32 sqn; 519 + u16 reserved_room; 519 520 unsigned long state; 520 521 521 522 /* control path */
+29 -7
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
··· 186 186 } 187 187 188 188 static int 189 + mlx5_get_label_mapping(struct mlx5_tc_ct_priv *ct_priv, 190 + u32 *labels, u32 *id) 191 + { 192 + if (!memchr_inv(labels, 0, sizeof(u32) * 4)) { 193 + *id = 0; 194 + return 0; 195 + } 196 + 197 + if (mapping_add(ct_priv->labels_mapping, labels, id)) 198 + return -EOPNOTSUPP; 199 + 200 + return 0; 201 + } 202 + 203 + static void 204 + mlx5_put_label_mapping(struct mlx5_tc_ct_priv *ct_priv, u32 id) 205 + { 206 + if (id) 207 + mapping_remove(ct_priv->labels_mapping, id); 208 + } 209 + 210 + static int 189 211 mlx5_tc_ct_rule_to_tuple(struct mlx5_ct_tuple *tuple, struct flow_rule *rule) 190 212 { 191 213 struct flow_match_control control; ··· 458 436 mlx5_tc_rule_delete(netdev_priv(ct_priv->netdev), zone_rule->rule, attr); 459 437 mlx5e_mod_hdr_detach(ct_priv->dev, 460 438 ct_priv->mod_hdr_tbl, zone_rule->mh); 461 - mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id); 439 + mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id); 462 440 kfree(attr); 463 441 } 464 442 ··· 661 639 if (!meta) 662 640 return -EOPNOTSUPP; 663 641 664 - err = mapping_add(ct_priv->labels_mapping, meta->ct_metadata.labels, 665 - &attr->ct_attr.ct_labels_id); 642 + err = mlx5_get_label_mapping(ct_priv, meta->ct_metadata.labels, 643 + &attr->ct_attr.ct_labels_id); 666 644 if (err) 667 645 return -EOPNOTSUPP; 668 646 if (nat) { ··· 699 677 700 678 err_mapping: 701 679 dealloc_mod_hdr_actions(&mod_acts); 702 - mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id); 680 + mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id); 703 681 return err; 704 682 } 705 683 ··· 767 745 err_rule: 768 746 mlx5e_mod_hdr_detach(ct_priv->dev, 769 747 ct_priv->mod_hdr_tbl, zone_rule->mh); 770 - mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id); 748 + mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id); 771 749 err_mod_hdr: 772 750 kfree(attr); 773 751 err_attr: ··· 1219 1197 if (!priv || 
!ct_attr->ct_labels_id) 1220 1198 return; 1221 1199 1222 - mapping_remove(priv->labels_mapping, ct_attr->ct_labels_id); 1200 + mlx5_put_label_mapping(priv, ct_attr->ct_labels_id); 1223 1201 } 1224 1202 1225 1203 int ··· 1302 1280 ct_labels[1] = key->ct_labels[1] & mask->ct_labels[1]; 1303 1281 ct_labels[2] = key->ct_labels[2] & mask->ct_labels[2]; 1304 1282 ct_labels[3] = key->ct_labels[3] & mask->ct_labels[3]; 1305 - if (mapping_add(priv->labels_mapping, ct_labels, &ct_attr->ct_labels_id)) 1283 + if (mlx5_get_label_mapping(priv, ct_labels, &ct_attr->ct_labels_id)) 1306 1284 return -EOPNOTSUPP; 1307 1285 mlx5e_tc_match_to_reg_match(spec, LABELS_TO_REG, ct_attr->ct_labels_id, 1308 1286 MLX5_CT_LABELS_MASK);
+10
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h
··· 21 21 MLX5E_TC_TUNNEL_TYPE_MPLSOUDP, 22 22 }; 23 23 24 + struct mlx5e_encap_key { 25 + const struct ip_tunnel_key *ip_tun_key; 26 + struct mlx5e_tc_tunnel *tc_tunnel; 27 + }; 28 + 24 29 struct mlx5e_tc_tunnel { 25 30 int tunnel_type; 26 31 enum mlx5_flow_match_level match_level; ··· 49 44 struct flow_cls_offload *f, 50 45 void *headers_c, 51 46 void *headers_v); 47 + bool (*encap_info_equal)(struct mlx5e_encap_key *a, 48 + struct mlx5e_encap_key *b); 52 49 }; 53 50 54 51 extern struct mlx5e_tc_tunnel vxlan_tunnel; ··· 107 100 struct flow_cls_offload *f, 108 101 void *headers_c, 109 102 void *headers_v); 103 + 104 + bool mlx5e_tc_tun_encap_info_equal_generic(struct mlx5e_encap_key *a, 105 + struct mlx5e_encap_key *b); 110 106 111 107 #endif /* CONFIG_MLX5_ESWITCH */ 112 108
+9 -14
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
··· 476 476 mlx5e_decap_dealloc(priv, d); 477 477 } 478 478 479 - struct encap_key { 480 - const struct ip_tunnel_key *ip_tun_key; 481 - struct mlx5e_tc_tunnel *tc_tunnel; 482 - }; 483 - 484 - static int cmp_encap_info(struct encap_key *a, 485 - struct encap_key *b) 479 + bool mlx5e_tc_tun_encap_info_equal_generic(struct mlx5e_encap_key *a, 480 + struct mlx5e_encap_key *b) 486 481 { 487 - return memcmp(a->ip_tun_key, b->ip_tun_key, sizeof(*a->ip_tun_key)) || 488 - a->tc_tunnel->tunnel_type != b->tc_tunnel->tunnel_type; 482 + return memcmp(a->ip_tun_key, b->ip_tun_key, sizeof(*a->ip_tun_key)) == 0 && 483 + a->tc_tunnel->tunnel_type == b->tc_tunnel->tunnel_type; 489 484 } 490 485 491 486 static int cmp_decap_info(struct mlx5e_decap_key *a, ··· 489 494 return memcmp(&a->key, &b->key, sizeof(b->key)); 490 495 } 491 496 492 - static int hash_encap_info(struct encap_key *key) 497 + static int hash_encap_info(struct mlx5e_encap_key *key) 493 498 { 494 499 return jhash(key->ip_tun_key, sizeof(*key->ip_tun_key), 495 500 key->tc_tunnel->tunnel_type); ··· 511 516 } 512 517 513 518 static struct mlx5e_encap_entry * 514 - mlx5e_encap_get(struct mlx5e_priv *priv, struct encap_key *key, 519 + mlx5e_encap_get(struct mlx5e_priv *priv, struct mlx5e_encap_key *key, 515 520 uintptr_t hash_key) 516 521 { 517 522 struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; 523 + struct mlx5e_encap_key e_key; 518 524 struct mlx5e_encap_entry *e; 519 - struct encap_key e_key; 520 525 521 526 hash_for_each_possible_rcu(esw->offloads.encap_tbl, e, 522 527 encap_hlist, hash_key) { 523 528 e_key.ip_tun_key = &e->tun_info->key; 524 529 e_key.tc_tunnel = e->tunnel; 525 - if (!cmp_encap_info(&e_key, key) && 530 + if (e->tunnel->encap_info_equal(&e_key, key) && 526 531 mlx5e_encap_take(e)) 527 532 return e; 528 533 } ··· 689 694 struct mlx5_flow_attr *attr = flow->attr; 690 695 const struct ip_tunnel_info *tun_info; 691 696 unsigned long tbl_time_before = 0; 692 - struct encap_key key; 693 697 struct 
mlx5e_encap_entry *e; 698 + struct mlx5e_encap_key key; 694 699 bool entry_created = false; 695 700 unsigned short family; 696 701 uintptr_t hash_key;
+29
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
··· 329 329 return mlx5e_tc_tun_parse_geneve_options(priv, spec, f); 330 330 } 331 331 332 + static bool mlx5e_tc_tun_encap_info_equal_geneve(struct mlx5e_encap_key *a, 333 + struct mlx5e_encap_key *b) 334 + { 335 + struct ip_tunnel_info *a_info; 336 + struct ip_tunnel_info *b_info; 337 + bool a_has_opts, b_has_opts; 338 + 339 + if (!mlx5e_tc_tun_encap_info_equal_generic(a, b)) 340 + return false; 341 + 342 + a_has_opts = !!(a->ip_tun_key->tun_flags & TUNNEL_GENEVE_OPT); 343 + b_has_opts = !!(b->ip_tun_key->tun_flags & TUNNEL_GENEVE_OPT); 344 + 345 + /* keys are equal when both don't have any options attached */ 346 + if (!a_has_opts && !b_has_opts) 347 + return true; 348 + 349 + if (a_has_opts != b_has_opts) 350 + return false; 351 + 352 + /* geneve options stored in memory next to ip_tunnel_info struct */ 353 + a_info = container_of(a->ip_tun_key, struct ip_tunnel_info, key); 354 + b_info = container_of(b->ip_tun_key, struct ip_tunnel_info, key); 355 + 356 + return a_info->options_len == b_info->options_len && 357 + memcmp(a_info + 1, b_info + 1, a_info->options_len) == 0; 358 + } 359 + 332 360 struct mlx5e_tc_tunnel geneve_tunnel = { 333 361 .tunnel_type = MLX5E_TC_TUNNEL_TYPE_GENEVE, 334 362 .match_level = MLX5_MATCH_L4, ··· 366 338 .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_geneve, 367 339 .parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_geneve, 368 340 .parse_tunnel = mlx5e_tc_tun_parse_geneve, 341 + .encap_info_equal = mlx5e_tc_tun_encap_info_equal_geneve, 369 342 };
+1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c
··· 94 94 .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_gretap, 95 95 .parse_udp_ports = NULL, 96 96 .parse_tunnel = mlx5e_tc_tun_parse_gretap, 97 + .encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic, 97 98 };
+1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c
··· 131 131 .generate_ip_tun_hdr = generate_ip_tun_hdr, 132 132 .parse_udp_ports = parse_udp_ports, 133 133 .parse_tunnel = parse_tunnel, 134 + .encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic, 134 135 };
+1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
··· 150 150 .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_vxlan, 151 151 .parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_vxlan, 152 152 .parse_tunnel = mlx5e_tc_tun_parse_vxlan, 153 + .encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic, 153 154 };
+6
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
··· 441 441 return wqe_size * 2 - 1; 442 442 } 443 443 444 + static inline bool mlx5e_icosq_can_post_wqe(struct mlx5e_icosq *sq, u16 wqe_size) 445 + { 446 + u16 room = sq->reserved_room + mlx5e_stop_room_for_wqe(wqe_size); 447 + 448 + return mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room); 449 + } 444 450 #endif
+19 -21
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
··· 46 46 struct tls12_crypto_info_aes_gcm_128 crypto_info; 47 47 struct accel_rule rule; 48 48 struct sock *sk; 49 - struct mlx5e_rq_stats *stats; 49 + struct mlx5e_rq_stats *rq_stats; 50 + struct mlx5e_tls_sw_stats *sw_stats; 50 51 struct completion add_ctx; 51 52 u32 tirn; 52 53 u32 key_id; ··· 138 137 { 139 138 struct mlx5e_set_tls_static_params_wqe *wqe; 140 139 struct mlx5e_icosq_wqe_info wi; 141 - u16 pi, num_wqebbs, room; 140 + u16 pi, num_wqebbs; 142 141 143 142 num_wqebbs = MLX5E_TLS_SET_STATIC_PARAMS_WQEBBS; 144 - room = mlx5e_stop_room_for_wqe(num_wqebbs); 145 - if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room))) 143 + if (unlikely(!mlx5e_icosq_can_post_wqe(sq, num_wqebbs))) 146 144 return ERR_PTR(-ENOSPC); 147 145 148 146 pi = mlx5e_icosq_get_next_pi(sq, num_wqebbs); ··· 168 168 { 169 169 struct mlx5e_set_tls_progress_params_wqe *wqe; 170 170 struct mlx5e_icosq_wqe_info wi; 171 - u16 pi, num_wqebbs, room; 171 + u16 pi, num_wqebbs; 172 172 173 173 num_wqebbs = MLX5E_TLS_SET_PROGRESS_PARAMS_WQEBBS; 174 - room = mlx5e_stop_room_for_wqe(num_wqebbs); 175 - if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room))) 174 + if (unlikely(!mlx5e_icosq_can_post_wqe(sq, num_wqebbs))) 176 175 return ERR_PTR(-ENOSPC); 177 176 178 177 pi = mlx5e_icosq_get_next_pi(sq, num_wqebbs); ··· 217 218 return err; 218 219 219 220 err_out: 220 - priv_rx->stats->tls_resync_req_skip++; 221 + priv_rx->rq_stats->tls_resync_req_skip++; 221 222 err = PTR_ERR(cseg); 222 223 complete(&priv_rx->add_ctx); 223 224 goto unlock; ··· 276 277 277 278 buf->priv_rx = priv_rx; 278 279 279 - BUILD_BUG_ON(MLX5E_KTLS_GET_PROGRESS_WQEBBS != 1); 280 - 281 280 spin_lock_bh(&sq->channel->async_icosq_lock); 282 281 283 - if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) { 282 + if (unlikely(!mlx5e_icosq_can_post_wqe(sq, MLX5E_KTLS_GET_PROGRESS_WQEBBS))) { 284 283 spin_unlock_bh(&sq->channel->async_icosq_lock); 285 284 err = -ENOSPC; 286 285 goto err_dma_unmap; 
287 286 } 288 287 289 - pi = mlx5e_icosq_get_next_pi(sq, 1); 288 + pi = mlx5e_icosq_get_next_pi(sq, MLX5E_KTLS_GET_PROGRESS_WQEBBS); 290 289 wqe = MLX5E_TLS_FETCH_GET_PROGRESS_PARAMS_WQE(sq, pi); 291 290 292 291 #define GET_PSV_DS_CNT (DIV_ROUND_UP(sizeof(*wqe), MLX5_SEND_WQE_DS)) ··· 304 307 305 308 wi = (struct mlx5e_icosq_wqe_info) { 306 309 .wqe_type = MLX5E_ICOSQ_WQE_GET_PSV_TLS, 307 - .num_wqebbs = 1, 310 + .num_wqebbs = MLX5E_KTLS_GET_PROGRESS_WQEBBS, 308 311 .tls_get_params.buf = buf, 309 312 }; 310 313 icosq_fill_wi(sq, pi, &wi); ··· 319 322 err_free: 320 323 kfree(buf); 321 324 err_out: 322 - priv_rx->stats->tls_resync_req_skip++; 325 + priv_rx->rq_stats->tls_resync_req_skip++; 323 326 return err; 324 327 } 325 328 ··· 375 378 376 379 cseg = post_static_params(sq, priv_rx); 377 380 if (IS_ERR(cseg)) { 378 - priv_rx->stats->tls_resync_res_skip++; 381 + priv_rx->rq_stats->tls_resync_res_skip++; 379 382 err = PTR_ERR(cseg); 380 383 goto unlock; 381 384 } 382 385 /* Do not increment priv_rx refcnt, CQE handling is empty */ 383 386 mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, cseg); 384 - priv_rx->stats->tls_resync_res_ok++; 387 + priv_rx->rq_stats->tls_resync_res_ok++; 385 388 unlock: 386 389 spin_unlock_bh(&c->async_icosq_lock); 387 390 ··· 417 420 auth_state = MLX5_GET(tls_progress_params, ctx, auth_state); 418 421 if (tracker_state != MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING || 419 422 auth_state != MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD) { 420 - priv_rx->stats->tls_resync_req_skip++; 423 + priv_rx->rq_stats->tls_resync_req_skip++; 421 424 goto out; 422 425 } 423 426 424 427 hw_seq = MLX5_GET(tls_progress_params, ctx, hw_resync_tcp_sn); 425 428 tls_offload_rx_resync_async_request_end(priv_rx->sk, cpu_to_be32(hw_seq)); 426 - priv_rx->stats->tls_resync_req_end++; 429 + priv_rx->rq_stats->tls_resync_req_end++; 427 430 out: 428 431 mlx5e_ktls_priv_rx_put(priv_rx); 429 432 dma_unmap_single(dev, buf->dma_addr, 
PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); ··· 606 609 priv_rx->rxq = rxq; 607 610 priv_rx->sk = sk; 608 611 609 - priv_rx->stats = &priv->channel_stats[rxq].rq; 612 + priv_rx->rq_stats = &priv->channel_stats[rxq].rq; 613 + priv_rx->sw_stats = &priv->tls->sw_stats; 610 614 mlx5e_set_ktls_rx_priv_ctx(tls_ctx, priv_rx); 611 615 612 616 rqtn = priv->direct_tir[rxq].rqt.rqtn; ··· 628 630 if (err) 629 631 goto err_post_wqes; 630 632 631 - priv_rx->stats->tls_ctx++; 633 + atomic64_inc(&priv_rx->sw_stats->rx_tls_ctx); 632 634 633 635 return 0; 634 636 ··· 664 666 if (cancel_work_sync(&resync->work)) 665 667 mlx5e_ktls_priv_rx_put(priv_rx); 666 668 667 - priv_rx->stats->tls_del++; 669 + atomic64_inc(&priv_rx->sw_stats->rx_tls_del); 668 670 if (priv_rx->rule.rule) 669 671 mlx5e_accel_fs_del_sk(priv_rx->rule.rule); 670 672
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB 2 2 // Copyright (c) 2019 Mellanox Technologies. 3 3 4 + #include "en_accel/tls.h" 4 5 #include "en_accel/ktls_txrx.h" 5 6 #include "en_accel/ktls_utils.h" 6 7 ··· 51 50 struct mlx5e_ktls_offload_context_tx { 52 51 struct tls_offload_context_tx *tx_ctx; 53 52 struct tls12_crypto_info_aes_gcm_128 crypto_info; 53 + struct mlx5e_tls_sw_stats *sw_stats; 54 54 u32 expected_seq; 55 55 u32 tisn; 56 56 u32 key_id; ··· 101 99 if (err) 102 100 goto err_create_key; 103 101 102 + priv_tx->sw_stats = &priv->tls->sw_stats; 104 103 priv_tx->expected_seq = start_offload_tcp_sn; 105 104 priv_tx->crypto_info = 106 105 *(struct tls12_crypto_info_aes_gcm_128 *)crypto_info; ··· 114 111 goto err_create_tis; 115 112 116 113 priv_tx->ctx_post_pending = true; 114 + atomic64_inc(&priv_tx->sw_stats->tx_tls_ctx); 117 115 118 116 return 0; 119 117 ··· 456 452 457 453 if (unlikely(mlx5e_ktls_tx_offload_test_and_clear_pending(priv_tx))) { 458 454 mlx5e_ktls_tx_post_param_wqes(sq, priv_tx, false, false); 459 - stats->tls_ctx++; 460 455 } 461 456 462 457 seq = ntohl(tcp_hdr(skb)->seq);
+3
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.h
··· 41 41 #include "en.h" 42 42 43 43 struct mlx5e_tls_sw_stats { 44 + atomic64_t tx_tls_ctx; 44 45 atomic64_t tx_tls_drop_metadata; 45 46 atomic64_t tx_tls_drop_resync_alloc; 46 47 atomic64_t tx_tls_drop_no_sync_data; 47 48 atomic64_t tx_tls_drop_bypass_required; 49 + atomic64_t rx_tls_ctx; 50 + atomic64_t rx_tls_del; 48 51 atomic64_t rx_tls_drop_resync_request; 49 52 atomic64_t rx_tls_resync_request; 50 53 atomic64_t rx_tls_resync_reply;
+30 -19
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_stats.c
··· 45 45 { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_drop_bypass_required) }, 46 46 }; 47 47 48 + static const struct counter_desc mlx5e_ktls_sw_stats_desc[] = { 49 + { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_ctx) }, 50 + { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, rx_tls_ctx) }, 51 + { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, rx_tls_del) }, 52 + }; 53 + 48 54 #define MLX5E_READ_CTR_ATOMIC64(ptr, dsc, i) \ 49 55 atomic64_read((atomic64_t *)((char *)(ptr) + (dsc)[i].offset)) 50 56 51 - #define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc) 52 - 53 - static bool is_tls_atomic_stats(struct mlx5e_priv *priv) 57 + static const struct counter_desc *get_tls_atomic_stats(struct mlx5e_priv *priv) 54 58 { 55 - return priv->tls && !mlx5_accel_is_ktls_device(priv->mdev); 59 + if (!priv->tls) 60 + return NULL; 61 + if (mlx5_accel_is_ktls_device(priv->mdev)) 62 + return mlx5e_ktls_sw_stats_desc; 63 + return mlx5e_tls_sw_stats_desc; 56 64 } 57 65
58 66 int mlx5e_tls_get_count(struct mlx5e_priv *priv) 59 67 { 60 - if (!is_tls_atomic_stats(priv)) 68 + if (!priv->tls) 61 69 return 0; 62 - 63 - return NUM_TLS_SW_COUNTERS; 70 + if (mlx5_accel_is_ktls_device(priv->mdev)) 71 + return ARRAY_SIZE(mlx5e_ktls_sw_stats_desc); 72 + return ARRAY_SIZE(mlx5e_tls_sw_stats_desc); 64 73 } 65 74
66 75 int mlx5e_tls_get_strings(struct mlx5e_priv *priv, uint8_t *data) 67 76 { 68 - unsigned int i, idx = 0; 77 + const struct counter_desc *stats_desc; 78 + unsigned int i, n, idx = 0; 69 79 70 - if (!is_tls_atomic_stats(priv)) 71 - return 0; 80 + stats_desc = get_tls_atomic_stats(priv); 81 + n = mlx5e_tls_get_count(priv); 72 82 73 - for (i = 0; i < NUM_TLS_SW_COUNTERS; i++) 83 + for (i = 0; i < n; i++) 74 84 strcpy(data + (idx++) * ETH_GSTRING_LEN, 75 - mlx5e_tls_sw_stats_desc[i].format); 85 + stats_desc[i].format); 76 86 77 - return NUM_TLS_SW_COUNTERS; 87 + return n; 78 88 } 79 89
80 90 int mlx5e_tls_get_stats(struct mlx5e_priv *priv, u64 *data) 81 91 { 82 - int i, idx = 0; 92 + const struct counter_desc *stats_desc; 93 + unsigned int i, n, idx = 0; 83 94 84 - if (!is_tls_atomic_stats(priv)) 85 - return 0; 95 + stats_desc = get_tls_atomic_stats(priv); 96 + n = mlx5e_tls_get_count(priv); 86 97 87 - for (i = 0; i < NUM_TLS_SW_COUNTERS; i++) 98 + for (i = 0; i < n; i++) 88 99 data[idx++] = 89 100 MLX5E_READ_CTR_ATOMIC64(&priv->tls->sw_stats, 90 - mlx5e_tls_sw_stats_desc, i); 101 + stats_desc, i); 91 102 92 - return NUM_TLS_SW_COUNTERS; 103 + return n; 93 104 }
+11 -11
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 758 758 return 0; 759 759 } 760 760 761 - static void ptys2ethtool_supported_advertised_port(struct ethtool_link_ksettings *link_ksettings, 762 - u32 eth_proto_cap, 763 - u8 connector_type, bool ext) 761 + static void ptys2ethtool_supported_advertised_port(struct mlx5_core_dev *mdev, 762 + struct ethtool_link_ksettings *link_ksettings, 763 + u32 eth_proto_cap, u8 connector_type) 764 764 { 765 - if ((!connector_type && !ext) || connector_type >= MLX5E_CONNECTOR_TYPE_NUMBER) { 765 + if (!MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type)) { 766 766 if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_10GBASE_CR) 767 767 | MLX5E_PROT_MASK(MLX5E_10GBASE_SR) 768 768 | MLX5E_PROT_MASK(MLX5E_40GBASE_CR4) ··· 898 898 [MLX5E_PORT_OTHER] = PORT_OTHER, 899 899 }; 900 900 901 - static u8 get_connector_port(u32 eth_proto, u8 connector_type, bool ext) 901 + static u8 get_connector_port(struct mlx5_core_dev *mdev, u32 eth_proto, u8 connector_type) 902 902 { 903 - if ((connector_type || ext) && connector_type < MLX5E_CONNECTOR_TYPE_NUMBER) 903 + if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type)) 904 904 return ptys2connector_type[connector_type]; 905 905 906 906 if (eth_proto & ··· 1001 1001 data_rate_oper, link_ksettings); 1002 1002 1003 1003 eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap; 1004 - 1005 - link_ksettings->base.port = get_connector_port(eth_proto_oper, 1006 - connector_type, ext); 1007 - ptys2ethtool_supported_advertised_port(link_ksettings, eth_proto_admin, 1008 - connector_type, ext); 1004 + connector_type = connector_type < MLX5E_CONNECTOR_TYPE_NUMBER ? 1005 + connector_type : MLX5E_PORT_UNKNOWN; 1006 + link_ksettings->base.port = get_connector_port(mdev, eth_proto_oper, connector_type); 1007 + ptys2ethtool_supported_advertised_port(mdev, link_ksettings, eth_proto_admin, 1008 + connector_type); 1009 1009 get_lp_advertising(mdev, eth_proto_lp, link_ksettings); 1010 1010 1011 1011 if (an_status == MLX5_AN_COMPLETE)
+20 -1
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 1091 1091 1092 1092 sq->channel = c; 1093 1093 sq->uar_map = mdev->mlx5e_res.bfreg.map; 1094 + sq->reserved_room = param->stop_room; 1094 1095 1095 1096 param->wq.db_numa_node = cpu_to_node(c->cpu); 1096 1097 err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl); ··· 2351 2350 mlx5e_build_ico_cq_param(priv, log_wq_size, &param->cqp); 2352 2351 } 2353 2352 2353 + static void mlx5e_build_async_icosq_param(struct mlx5e_priv *priv, 2354 + struct mlx5e_params *params, 2355 + u8 log_wq_size, 2356 + struct mlx5e_sq_param *param) 2357 + { 2358 + void *sqc = param->sqc; 2359 + void *wq = MLX5_ADDR_OF(sqc, sqc, wq); 2360 + 2361 + mlx5e_build_sq_param_common(priv, param); 2362 + 2363 + /* async_icosq is used by XSK only if xdp_prog is active */ 2364 + if (params->xdp_prog) 2365 + param->stop_room = mlx5e_stop_room_for_wqe(1); /* for XSK NOP */ 2366 + MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(priv->mdev, reg_umr_sq)); 2367 + MLX5_SET(wq, wq, log_wq_sz, log_wq_size); 2368 + mlx5e_build_ico_cq_param(priv, log_wq_size, &param->cqp); 2369 + } 2370 + 2354 2371 void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv, 2355 2372 struct mlx5e_params *params, 2356 2373 struct mlx5e_sq_param *param) ··· 2417 2398 mlx5e_build_sq_param(priv, params, &cparam->txq_sq); 2418 2399 mlx5e_build_xdpsq_param(priv, params, &cparam->xdp_sq); 2419 2400 mlx5e_build_icosq_param(priv, icosq_log_wq_sz, &cparam->icosq); 2420 - mlx5e_build_icosq_param(priv, async_icosq_log_wq_sz, &cparam->async_icosq); 2401 + mlx5e_build_async_icosq_param(priv, params, async_icosq_log_wq_sz, &cparam->async_icosq); 2421 2402 } 2422 2403 2423 2404 int mlx5e_open_channels(struct mlx5e_priv *priv,
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 1107 1107 1108 1108 mlx5e_rep_tc_enable(priv); 1109 1109 1110 - mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK, 1111 - 0, 0, MLX5_VPORT_ADMIN_STATE_AUTO); 1110 + if (MLX5_CAP_GEN(mdev, uplink_follow)) 1111 + mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK, 1112 + 0, 0, MLX5_VPORT_ADMIN_STATE_AUTO); 1112 1113 mlx5_lag_add(mdev, netdev); 1113 1114 priv->events_nb.notifier_call = uplink_rep_async_event; 1114 1115 mlx5_notifier_register(mdev, &priv->events_nb);
-10
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
··· 116 116 #ifdef CONFIG_MLX5_EN_TLS 117 117 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_packets) }, 118 118 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_bytes) }, 119 - { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ctx) }, 120 119 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ooo) }, 121 120 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_packets) }, 122 121 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_bytes) },
··· 179 180 #ifdef CONFIG_MLX5_EN_TLS 180 181 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_packets) }, 181 182 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_bytes) }, 182 - { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_ctx) }, 183 - { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_del) }, 184 183 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_pkt) }, 185 184 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_start) }, 186 185 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_end) },
··· 339 342 #ifdef CONFIG_MLX5_EN_TLS 340 343 s->rx_tls_decrypted_packets += rq_stats->tls_decrypted_packets; 341 344 s->rx_tls_decrypted_bytes += rq_stats->tls_decrypted_bytes; 342 - s->rx_tls_ctx += rq_stats->tls_ctx; 343 - s->rx_tls_del += rq_stats->tls_del; 344 345 s->rx_tls_resync_req_pkt += rq_stats->tls_resync_req_pkt; 345 346 s->rx_tls_resync_req_start += rq_stats->tls_resync_req_start; 346 347 s->rx_tls_resync_req_end += rq_stats->tls_resync_req_end;
··· 385 390 #ifdef CONFIG_MLX5_EN_TLS 386 391 s->tx_tls_encrypted_packets += sq_stats->tls_encrypted_packets; 387 392 s->tx_tls_encrypted_bytes += sq_stats->tls_encrypted_bytes; 388 - s->tx_tls_ctx += sq_stats->tls_ctx; 389 393 s->tx_tls_ooo += sq_stats->tls_ooo; 390 394 s->tx_tls_dump_bytes += sq_stats->tls_dump_bytes; 391 395 s->tx_tls_dump_packets += sq_stats->tls_dump_packets;
··· 1616 1622 #ifdef CONFIG_MLX5_EN_TLS 1617 1623 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_packets) }, 1618 1624 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_bytes) }, 1619 - { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_ctx) }, 1620 - { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_del) }, 1621 1625 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_pkt) }, 1622 1626 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_start) }, 1623 1627 { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_end) },
··· 1642 1650 #ifdef CONFIG_MLX5_EN_TLS 1643 1651 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_packets) }, 1644 1652 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) }, 1645 - { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ctx) }, 1646 1653 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ooo) }, 1647 1654 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) }, 1648 1655 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) },
··· 1767 1776 #ifdef CONFIG_MLX5_EN_TLS 1768 1777 { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_packets) }, 1769 1778 { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) }, 1770 - { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_ctx) }, 1771 1779 { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_ooo) }, 1772 1780 { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) }, 1773 1781 { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) },
-6
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
··· 191 191 #ifdef CONFIG_MLX5_EN_TLS 192 192 u64 tx_tls_encrypted_packets; 193 193 u64 tx_tls_encrypted_bytes; 194 - u64 tx_tls_ctx; 195 194 u64 tx_tls_ooo; 196 195 u64 tx_tls_dump_packets; 197 196 u64 tx_tls_dump_bytes; ··· 201 202 202 203 u64 rx_tls_decrypted_packets; 203 204 u64 rx_tls_decrypted_bytes; 204 - u64 rx_tls_ctx; 205 - u64 rx_tls_del; 206 205 u64 rx_tls_resync_req_pkt; 207 206 u64 rx_tls_resync_req_start; 208 207 u64 rx_tls_resync_req_end; ··· 331 334 #ifdef CONFIG_MLX5_EN_TLS 332 335 u64 tls_decrypted_packets; 333 336 u64 tls_decrypted_bytes; 334 - u64 tls_ctx; 335 - u64 tls_del; 336 337 u64 tls_resync_req_pkt; 337 338 u64 tls_resync_req_start; 338 339 u64 tls_resync_req_end; ··· 359 364 #ifdef CONFIG_MLX5_EN_TLS 360 365 u64 tls_encrypted_packets; 361 366 u64 tls_encrypted_bytes; 362 - u64 tls_ctx; 363 367 u64 tls_ooo; 364 368 u64 tls_dump_packets; 365 369 u64 tls_dump_bytes;
+12 -1
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 931 931 mutex_unlock(&table->lock); 932 932 } 933 933 934 + #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING 935 + #define MLX5_MAX_ASYNC_EQS 4 936 + #else 937 + #define MLX5_MAX_ASYNC_EQS 3 938 + #endif 939 + 934 940 int mlx5_eq_table_create(struct mlx5_core_dev *dev) 935 941 { 936 942 struct mlx5_eq_table *eq_table = dev->priv.eq_table; 943 + int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ? 944 + MLX5_CAP_GEN(dev, max_num_eqs) : 945 + 1 << MLX5_CAP_GEN(dev, log_max_eq); 937 946 int err; 938 947 939 948 eq_table->num_comp_eqs = 940 - mlx5_irq_get_num_comp(eq_table->irq_table); 949 + min_t(int, 950 + mlx5_irq_get_num_comp(eq_table->irq_table), 951 + num_eqs - MLX5_MAX_ASYNC_EQS); 941 952 942 953 err = create_async_eqs(dev); 943 954 if (err) {
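The eq.c change above caps the number of completion EQs so that the device's EQ budget always leaves room for the async EQs (one extra when on-demand paging is configured). The budgeting reduces to a min over the IRQ count and the remaining EQs; a small standalone sketch (local names, not the driver's API):

```c
#include <assert.h>

/* Completion EQs get whatever the IRQ table offers, bounded by the
 * device's total EQ budget minus the EQs reserved for async use. */
static int num_comp_eqs(int num_irq_comp, int max_eqs, int reserved_async)
{
	int avail = max_eqs - reserved_async;

	return num_irq_comp < avail ? num_irq_comp : avail;
}
```

With a budget of 32 EQs and 4 reserved, 64 IRQ vectors are clamped to 28 completion EQs, while 8 vectors pass through unchanged.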
+5 -5
drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
··· 248 248 err_ethertype: 249 249 kfree(rule); 250 250 out: 251 - kfree(rule_spec); 251 + kvfree(rule_spec); 252 252 return err; 253 253 } 254 254 ··· 328 328 e->recirc_cnt = 0; 329 329 330 330 out: 331 - kfree(in); 331 + kvfree(in); 332 332 return err; 333 333 } 334 334 ··· 347 347 348 348 spec = kvzalloc(sizeof(*spec), GFP_KERNEL); 349 349 if (!spec) { 350 - kfree(in); 350 + kvfree(in); 351 351 return -ENOMEM; 352 352 } 353 353 ··· 371 371 } 372 372 373 373 err_out: 374 - kfree(spec); 375 - kfree(in); 374 + kvfree(spec); 375 + kvfree(in); 376 376 return err; 377 377 } 378 378
+38 -28
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 537 537 return i; 538 538 } 539 539 540 + static bool 541 + esw_src_port_rewrite_supported(struct mlx5_eswitch *esw) 542 + { 543 + return MLX5_CAP_GEN(esw->dev, reg_c_preserve) && 544 + mlx5_eswitch_vport_match_metadata_enabled(esw) && 545 + MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level); 546 + } 547 + 540 548 static int 541 549 esw_setup_dests(struct mlx5_flow_destination *dest, 542 550 struct mlx5_flow_act *flow_act,
··· 558 550 int err = 0; 559 551 560 552 if (!mlx5_eswitch_termtbl_required(esw, attr, flow_act, spec) && 561 - MLX5_CAP_GEN(esw_attr->in_mdev, reg_c_preserve) && 562 - mlx5_eswitch_vport_match_metadata_enabled(esw) && 563 - MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level)) 553 + esw_src_port_rewrite_supported(esw)) 564 554 attr->flags |= MLX5_ESW_ATTR_FLAG_SRC_REWRITE; 565 555 566 556 if (attr->dest_ft) {
··· 1722 1716 } 1723 1717 esw->fdb_table.offloads.send_to_vport_grp = g; 1724 1718 1725 - /* meta send to vport */ 1726 - memset(flow_group_in, 0, inlen); 1727 - MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, 1728 - MLX5_MATCH_MISC_PARAMETERS_2); 1719 + if (esw_src_port_rewrite_supported(esw)) { 1720 + /* meta send to vport */ 1721 + memset(flow_group_in, 0, inlen); 1722 + MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, 1723 + MLX5_MATCH_MISC_PARAMETERS_2); 1729 1724 1730 - match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria); 1725 + match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria); 1731 1726 1732 - MLX5_SET(fte_match_param, match_criteria, 1733 - misc_parameters_2.metadata_reg_c_0, mlx5_eswitch_get_vport_metadata_mask()); 1734 - MLX5_SET(fte_match_param, match_criteria, 1735 - misc_parameters_2.metadata_reg_c_1, ESW_TUN_MASK); 1727 + MLX5_SET(fte_match_param, match_criteria, 1728 + misc_parameters_2.metadata_reg_c_0, 1729 + mlx5_eswitch_get_vport_metadata_mask()); 1730 + MLX5_SET(fte_match_param, match_criteria, 1731 + misc_parameters_2.metadata_reg_c_1, ESW_TUN_MASK); 1736 1732 1737 - num_vfs = esw->esw_funcs.num_vfs; 1738 - if (num_vfs) { 1739 - MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix); 1740 - MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, ix + num_vfs - 1); 1741 - ix += num_vfs; 1733 + num_vfs = esw->esw_funcs.num_vfs; 1734 + if (num_vfs) { 1735 + MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix); 1736 + MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, ix + num_vfs - 1); 1737 + ix += num_vfs; 1742 1738 1743 - g = mlx5_create_flow_group(fdb, flow_group_in); 1744 - if (IS_ERR(g)) { 1745 - err = PTR_ERR(g); 1746 - esw_warn(dev, "Failed to create send-to-vport meta flow group err(%d)\n", 1747 - err); 1748 - goto send_vport_meta_err; 1740 + g = mlx5_create_flow_group(fdb, flow_group_in); 1741 + if (IS_ERR(g)) { 1742 + err = PTR_ERR(g); 1743 + esw_warn(dev, "Failed to create send-to-vport meta flow group err(%d)\n", 1744 + err); 1745 + goto send_vport_meta_err; 1746 + } 1747 + esw->fdb_table.offloads.send_to_vport_meta_grp = g; 1748 + 1749 + err = mlx5_eswitch_add_send_to_vport_meta_rules(esw); 1750 + if (err) 1751 + goto meta_rule_err; 1749 1752 } 1750 - esw->fdb_table.offloads.send_to_vport_meta_grp = g; 1751 - 1752 - err = mlx5_eswitch_add_send_to_vport_meta_rules(esw); 1753 - if (err) 1754 - goto meta_rule_err; 1755 1753 } 1756 1754 1757 1755 if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {
+15
drivers/net/ethernet/mellanox/mlxsw/spectrum.h
··· 21 21 #include <net/red.h> 22 22 #include <net/vxlan.h> 23 23 #include <net/flow_offload.h> 24 + #include <net/inet_ecn.h> 24 25 25 26 #include "port.h" 26 27 #include "core.h" ··· 347 346 u32 *p_eth_proto_oper); 348 347 u32 (*ptys_proto_cap_masked_get)(u32 eth_proto_cap); 349 348 }; 349 + 350 + static inline u8 mlxsw_sp_tunnel_ecn_decap(u8 outer_ecn, u8 inner_ecn, 351 + bool *trap_en) 352 + { 353 + bool set_ce = false; 354 + 355 + *trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce); 356 + if (set_ce) 357 + return INET_ECN_CE; 358 + else if (outer_ecn == INET_ECN_ECT_1 && inner_ecn == INET_ECN_ECT_0) 359 + return INET_ECN_ECT_1; 360 + else 361 + return inner_ecn; 362 + } 350 363 351 364 static inline struct net_device * 352 365 mlxsw_sp_bridge_vxlan_dev_find(struct net_device *br_dev)
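The new mlxsw_sp_tunnel_ecn_decap() helper above centralizes the ECN decapsulation logic that the spectrum_ipip.c and spectrum_nve.c hunks below switch to, adding the ECT(1)-over-ECT(0) case on top of the CE propagation the old open-coded version handled. A rough userspace sketch of the same mapping (local constants and a reimplementation of the __INET_ECN_decapsulate() semantics, not the kernel header itself):

```c
#include <assert.h>
#include <stdbool.h>

/* ECN codepoints as carried in the IP header (RFC 3168) */
#define ECN_NOT_ECT	0x0
#define ECN_ECT_1	0x1
#define ECN_ECT_0	0x2
#define ECN_CE		0x3

/* Rough equivalent of __INET_ECN_decapsulate(): a nonzero return means
 * the combination is invalid and the packet should be trapped; set_ce
 * reports whether congestion must be propagated to the inner header. */
static int ecn_decapsulate(unsigned char outer, unsigned char inner,
			   bool *set_ce)
{
	if (inner == ECN_NOT_ECT) {
		switch (outer & 0x3) {
		case ECN_NOT_ECT:
			return 0;
		case ECN_ECT_0:
		case ECN_ECT_1:
			return 1;
		case ECN_CE:
			return 2;
		}
	}
	*set_ce = (outer == ECN_CE);
	return 0;
}

/* Mirrors the new mlxsw_sp_tunnel_ecn_decap() helper: compute the inner
 * ECN value after decapsulation and whether the packet must be trapped. */
static unsigned char tunnel_ecn_decap(unsigned char outer,
				      unsigned char inner, bool *trap_en)
{
	bool set_ce = false;

	*trap_en = !!ecn_decapsulate(outer, inner, &set_ce);
	if (set_ce)
		return ECN_CE;
	else if (outer == ECN_ECT_1 && inner == ECN_ECT_0)
		return ECN_ECT_1;
	return inner;
}
```

Decapsulating a CE outer header over an ECT(0) inner yields CE with no trap, while an ECT outer over a not-ECT inner is invalid and traps with the inner value left alone.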
+14 -5
drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
··· 1230 1230 u32 ptys_eth_proto, 1231 1231 struct ethtool_link_ksettings *cmd) 1232 1232 { 1233 + struct mlxsw_sp1_port_link_mode link; 1233 1234 int i; 1234 1235 1235 - cmd->link_mode = -1; 1236 + cmd->base.speed = SPEED_UNKNOWN; 1237 + cmd->base.duplex = DUPLEX_UNKNOWN; 1238 + cmd->lanes = 0; 1236 1239 1237 1240 if (!carrier_ok) 1238 1241 return; 1239 1242 1240 1243 for (i = 0; i < MLXSW_SP1_PORT_LINK_MODE_LEN; i++) { 1241 - if (ptys_eth_proto & mlxsw_sp1_port_link_mode[i].mask) 1242 - cmd->link_mode = mlxsw_sp1_port_link_mode[i].mask_ethtool; 1244 + if (ptys_eth_proto & mlxsw_sp1_port_link_mode[i].mask) { 1245 + link = mlxsw_sp1_port_link_mode[i]; 1246 + ethtool_params_from_link_mode(cmd, 1247 + link.mask_ethtool); 1248 + } 1243 1249 } 1244 1250 } 1245 1251 ··· 1678 1672 struct mlxsw_sp2_port_link_mode link; 1679 1673 int i; 1680 1674 1681 - cmd->link_mode = -1; 1675 + cmd->base.speed = SPEED_UNKNOWN; 1676 + cmd->base.duplex = DUPLEX_UNKNOWN; 1677 + cmd->lanes = 0; 1682 1678 1683 1679 if (!carrier_ok) 1684 1680 return; ··· 1688 1680 for (i = 0; i < MLXSW_SP2_PORT_LINK_MODE_LEN; i++) { 1689 1681 if (ptys_eth_proto & mlxsw_sp2_port_link_mode[i].mask) { 1690 1682 link = mlxsw_sp2_port_link_mode[i]; 1691 - cmd->link_mode = link.mask_ethtool[1]; 1683 + ethtool_params_from_link_mode(cmd, 1684 + link.mask_ethtool[1]); 1692 1685 } 1693 1686 } 1694 1687 }
+3 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
··· 335 335 u8 inner_ecn, u8 outer_ecn) 336 336 { 337 337 char tidem_pl[MLXSW_REG_TIDEM_LEN]; 338 - bool trap_en, set_ce = false; 339 338 u8 new_inner_ecn; 339 + bool trap_en; 340 340 341 - trap_en = __INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce); 342 - new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn; 343 - 341 + new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn, 342 + &trap_en); 344 343 mlxsw_reg_tidem_pack(tidem_pl, outer_ecn, inner_ecn, new_inner_ecn, 345 344 trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0); 346 345 return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tidem), tidem_pl);
+3 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
··· 909 909 u8 inner_ecn, u8 outer_ecn) 910 910 { 911 911 char tndem_pl[MLXSW_REG_TNDEM_LEN]; 912 - bool trap_en, set_ce = false; 913 912 u8 new_inner_ecn; 913 + bool trap_en; 914 914 915 - trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce); 916 - new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn; 917 - 915 + new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn, 916 + &trap_en); 918 917 mlxsw_reg_tndem_pack(tndem_pl, outer_ecn, inner_ecn, new_inner_ecn, 919 918 trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0); 920 919 return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tndem), tndem_pl);
+4 -4
drivers/net/ethernet/microchip/lan743x_main.c
··· 885 885 } 886 886 887 887 mac_rx &= ~(MAC_RX_MAX_SIZE_MASK_); 888 - mac_rx |= (((new_mtu + ETH_HLEN + 4) << MAC_RX_MAX_SIZE_SHIFT_) & 889 - MAC_RX_MAX_SIZE_MASK_); 888 + mac_rx |= (((new_mtu + ETH_HLEN + ETH_FCS_LEN) 889 + << MAC_RX_MAX_SIZE_SHIFT_) & MAC_RX_MAX_SIZE_MASK_); 890 890 lan743x_csr_write(adapter, MAC_RX, mac_rx); 891 891 892 892 if (enabled) { ··· 1944 1944 struct sk_buff *skb; 1945 1945 dma_addr_t dma_ptr; 1946 1946 1947 - buffer_length = netdev->mtu + ETH_HLEN + 4 + RX_HEAD_PADDING; 1947 + buffer_length = netdev->mtu + ETH_HLEN + ETH_FCS_LEN + RX_HEAD_PADDING; 1948 1948 1949 1949 descriptor = &rx->ring_cpu_ptr[index]; 1950 1950 buffer_info = &rx->buffer_info[index]; ··· 2040 2040 dev_kfree_skb_irq(skb); 2041 2041 return NULL; 2042 2042 } 2043 - frame_length = max_t(int, 0, frame_length - RX_HEAD_PADDING - 4); 2043 + frame_length = max_t(int, 0, frame_length - ETH_FCS_LEN); 2044 2044 if (skb->len > frame_length) { 2045 2045 skb->tail -= skb->len - frame_length; 2046 2046 skb->len = frame_length;
+1 -1
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
··· 2897 2897 dev_kfree_skb_any(curr); 2898 2898 if (segs != NULL) { 2899 2899 curr = segs; 2900 - segs = segs->next; 2900 + segs = next; 2901 2901 curr->next = NULL; 2902 2902 dev_kfree_skb_any(segs); 2903 2903 }
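The myri10ge hunk above swaps `segs = segs->next` for the previously saved `segs = next`: on the first failed segment, `curr` aliases `segs`, so after `dev_kfree_skb_any(curr)` the old code read `->next` through freed memory. The underlying rule is to capture the successor before freeing a node; sketched on a plain singly linked list (illustrative types, not the driver's skb chains):

```c
#include <assert.h>
#include <stdlib.h>

struct node {
	struct node *next;
};

/* Free every node in a singly linked list. The next pointer must be
 * saved *before* the node is freed; reading n->next after free(n)
 * would be a use-after-free. Returns the number of nodes freed. */
static int free_list(struct node *n)
{
	int freed = 0;

	while (n) {
		struct node *next = n->next;	/* save before freeing */

		free(n);
		n = next;			/* not n->next: n is gone */
		freed++;
	}
	return freed;
}

/* Build a list of len nodes, for exercising free_list() */
static struct node *build_list(int len)
{
	struct node *head = NULL;

	while (len--) {
		struct node *n = malloc(sizeof(*n));

		n->next = head;
		head = n;
	}
	return head;
}
```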
+1
drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
··· 454 454 dev_consume_skb_any(skb); 455 455 else 456 456 dev_kfree_skb_any(skb); 457 + return; 457 458 } 458 459 459 460 nfp_ccm_rx(&bpf->ccm, skb);
+8
drivers/net/ethernet/netronome/nfp/flower/main.h
··· 190 190 * @qos_rate_limiters: Current active qos rate limiters 191 191 * @qos_stats_lock: Lock on qos stats updates 192 192 * @pre_tun_rule_cnt: Number of pre-tunnel rules offloaded 193 + * @merge_table: Hash table to store merged flows 193 194 */ 194 195 struct nfp_flower_priv { 195 196 struct nfp_app *app; ··· 224 223 unsigned int qos_rate_limiters; 225 224 spinlock_t qos_stats_lock; /* Protect the qos stats */ 226 225 int pre_tun_rule_cnt; 226 + struct rhashtable merge_table; 227 227 }; 228 228 229 229 /** ··· 352 350 }; 353 351 354 352 extern const struct rhashtable_params nfp_flower_table_params; 353 + extern const struct rhashtable_params merge_table_params; 354 + 355 + struct nfp_merge_info { 356 + u64 parent_ctx; 357 + struct rhash_head ht_node; 358 + }; 355 359 356 360 struct nfp_fl_stats_frame { 357 361 __be32 stats_con_id;
+15 -1
drivers/net/ethernet/netronome/nfp/flower/metadata.c
··· 490 490 .automatic_shrinking = true, 491 491 }; 492 492 493 + const struct rhashtable_params merge_table_params = { 494 + .key_offset = offsetof(struct nfp_merge_info, parent_ctx), 495 + .head_offset = offsetof(struct nfp_merge_info, ht_node), 496 + .key_len = sizeof(u64), 497 + }; 498 + 493 499 int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count, 494 500 unsigned int host_num_mems) 495 501 { ··· 512 506 if (err) 513 507 goto err_free_flow_table; 514 508 509 + err = rhashtable_init(&priv->merge_table, &merge_table_params); 510 + if (err) 511 + goto err_free_stats_ctx_table; 512 + 515 513 get_random_bytes(&priv->mask_id_seed, sizeof(priv->mask_id_seed)); 516 514 517 515 /* Init ring buffer and unallocated mask_ids. */ ··· 523 513 kmalloc_array(NFP_FLOWER_MASK_ENTRY_RS, 524 514 NFP_FLOWER_MASK_ELEMENT_RS, GFP_KERNEL); 525 515 if (!priv->mask_ids.mask_id_free_list.buf) 526 - goto err_free_stats_ctx_table; 516 + goto err_free_merge_table; 527 517 528 518 priv->mask_ids.init_unallocated = NFP_FLOWER_MASK_ENTRY_RS - 1; 529 519 ··· 560 550 kfree(priv->mask_ids.last_used); 561 551 err_free_mask_id: 562 552 kfree(priv->mask_ids.mask_id_free_list.buf); 553 + err_free_merge_table: 554 + rhashtable_destroy(&priv->merge_table); 563 555 err_free_stats_ctx_table: 564 556 rhashtable_destroy(&priv->stats_ctx_table); 565 557 err_free_flow_table: ··· 579 567 rhashtable_free_and_destroy(&priv->flow_table, 580 568 nfp_check_rhashtable_empty, NULL); 581 569 rhashtable_free_and_destroy(&priv->stats_ctx_table, 570 + nfp_check_rhashtable_empty, NULL); 571 + rhashtable_free_and_destroy(&priv->merge_table, 582 572 nfp_check_rhashtable_empty, NULL); 583 573 kvfree(priv->stats); 584 574 kfree(priv->mask_ids.mask_id_free_list.buf);
+46 -2
drivers/net/ethernet/netronome/nfp/flower/offload.c
··· 1009 1009 struct netlink_ext_ack *extack = NULL; 1010 1010 struct nfp_fl_payload *merge_flow; 1011 1011 struct nfp_fl_key_ls merge_key_ls; 1012 + struct nfp_merge_info *merge_info; 1013 + u64 parent_ctx = 0; 1012 1014 int err; 1013 1015 1014 1016 ASSERT_RTNL();
··· 1020 1018 nfp_flower_is_merge_flow(sub_flow1) || 1021 1019 nfp_flower_is_merge_flow(sub_flow2)) 1022 1020 return -EINVAL; 1021 + 1022 + /* check if the two flows are already merged */ 1023 + parent_ctx = (u64)(be32_to_cpu(sub_flow1->meta.host_ctx_id)) << 32; 1024 + parent_ctx |= (u64)(be32_to_cpu(sub_flow2->meta.host_ctx_id)); 1025 + if (rhashtable_lookup_fast(&priv->merge_table, 1026 + &parent_ctx, merge_table_params)) { 1027 + nfp_flower_cmsg_warn(app, "The two flows are already merged.\n"); 1028 + return 0; 1029 + } 1023 1030 1024 1031 err = nfp_flower_can_merge(sub_flow1, sub_flow2); 1025 1032 if (err)
··· 1071 1060 if (err) 1072 1061 goto err_release_metadata; 1073 1062 1063 + merge_info = kmalloc(sizeof(*merge_info), GFP_KERNEL); 1064 + if (!merge_info) { 1065 + err = -ENOMEM; 1066 + goto err_remove_rhash; 1067 + } 1068 + merge_info->parent_ctx = parent_ctx; 1069 + err = rhashtable_insert_fast(&priv->merge_table, &merge_info->ht_node, 1070 + merge_table_params); 1071 + if (err) 1072 + goto err_destroy_merge_info; 1073 + 1074 1074 err = nfp_flower_xmit_flow(app, merge_flow, 1075 1075 NFP_FLOWER_CMSG_TYPE_FLOW_MOD); 1076 1076 if (err) 1077 - goto err_remove_rhash; 1077 + goto err_remove_merge_info; 1078 1078 1079 1079 merge_flow->in_hw = true; 1080 1080 sub_flow1->in_hw = false; 1081 1081 1082 1082 return 0; 1083 1083 1084 + err_remove_merge_info: 1085 + WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table, 1086 + &merge_info->ht_node, 1087 + merge_table_params)); 1088 + err_destroy_merge_info: 1089 + kfree(merge_info); 1084 1090 err_remove_rhash: 1085 1091 WARN_ON_ONCE(rhashtable_remove_fast(&priv->flow_table, 1086 1092 &merge_flow->fl_node,
··· 1387 1359 { 1388 1360 struct nfp_flower_priv *priv = app->priv; 1389 1361 struct nfp_fl_payload_link *link, *temp; 1362 + struct nfp_merge_info *merge_info; 1390 1363 struct nfp_fl_payload *origin; 1364 + u64 parent_ctx = 0; 1391 1365 bool mod = false; 1392 1366 int err;
··· 1426 1396 err_free_links: 1427 1397 /* Clean any links connected with the merged flow. */ 1428 1398 list_for_each_entry_safe(link, temp, &merge_flow->linked_flows, 1429 - merge_flow.list) 1399 + merge_flow.list) { 1400 + u32 ctx_id = be32_to_cpu(link->sub_flow.flow->meta.host_ctx_id); 1401 + 1402 + parent_ctx = (parent_ctx << 32) | (u64)(ctx_id); 1430 1403 nfp_flower_unlink_flow(link); 1404 + } 1405 + 1406 + merge_info = rhashtable_lookup_fast(&priv->merge_table, 1407 + &parent_ctx, 1408 + merge_table_params); 1409 + if (merge_info) { 1410 + WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table, 1411 + &merge_info->ht_node, 1412 + merge_table_params)); 1413 + kfree(merge_info); 1414 + } 1431 1415 1432 1416 kfree(merge_flow->action_data); 1433 1417 kfree(merge_flow->mask_data);
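In the offload.c hunks above, the merge-table key packs the two sub-flows' 32-bit host context IDs into one u64, first sub-flow in the high word, so a repeated merge request for the same pair is caught by a single rhashtable lookup. A minimal sketch of the key construction (plain uint64_t, outside the rhashtable machinery):

```c
#include <assert.h>
#include <stdint.h>

/* Pack two 32-bit host context IDs into a single 64-bit lookup key,
 * the first sub-flow in the high word and the second in the low word,
 * mirroring how parent_ctx is built before the merge-table lookup. */
static uint64_t merge_parent_ctx(uint32_t ctx_id1, uint32_t ctx_id2)
{
	return ((uint64_t)ctx_id1 << 32) | (uint64_t)ctx_id2;
}
```

The unmerge path rebuilds the same key incrementally, shifting the accumulator left by 32 bits as it walks the linked sub-flows, so the lookup only matches when the link order is the merge order; swapping the two IDs produces a different key.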
+12
drivers/net/ethernet/xilinx/xilinx_axienet.h
··· 504 504 return axienet_ior(lp, XAE_MDIO_MCR_OFFSET); 505 505 } 506 506 507 + static inline void axienet_lock_mii(struct axienet_local *lp) 508 + { 509 + if (lp->mii_bus) 510 + mutex_lock(&lp->mii_bus->mdio_lock); 511 + } 512 + 513 + static inline void axienet_unlock_mii(struct axienet_local *lp) 514 + { 515 + if (lp->mii_bus) 516 + mutex_unlock(&lp->mii_bus->mdio_lock); 517 + } 518 + 507 519 /** 508 520 * axienet_iow - Memory mapped Axi Ethernet register write 509 521 * @lp: Pointer to axienet local structure
+6 -6
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 1053 1053 * including the MDIO. MDIO must be disabled before resetting. 1054 1054 * Hold MDIO bus lock to avoid MDIO accesses during the reset. 1055 1055 */ 1056 - mutex_lock(&lp->mii_bus->mdio_lock); 1056 + axienet_lock_mii(lp); 1057 1057 ret = axienet_device_reset(ndev); 1058 - mutex_unlock(&lp->mii_bus->mdio_lock); 1058 + axienet_unlock_mii(lp); 1059 1059 1060 1060 ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0); 1061 1061 if (ret) { ··· 1148 1148 } 1149 1149 1150 1150 /* Do a reset to ensure DMA is really stopped */ 1151 - mutex_lock(&lp->mii_bus->mdio_lock); 1151 + axienet_lock_mii(lp); 1152 1152 __axienet_device_reset(lp); 1153 - mutex_unlock(&lp->mii_bus->mdio_lock); 1153 + axienet_unlock_mii(lp); 1154 1154 1155 1155 cancel_work_sync(&lp->dma_err_task); 1156 1156 ··· 1709 1709 * including the MDIO. MDIO must be disabled before resetting. 1710 1710 * Hold MDIO bus lock to avoid MDIO accesses during the reset. 1711 1711 */ 1712 - mutex_lock(&lp->mii_bus->mdio_lock); 1712 + axienet_lock_mii(lp); 1713 1713 __axienet_device_reset(lp); 1714 - mutex_unlock(&lp->mii_bus->mdio_lock); 1714 + axienet_unlock_mii(lp); 1715 1715 1716 1716 for (i = 0; i < lp->tx_bd_num; i++) { 1717 1717 cur_p = &lp->tx_bd_v[i];
+20 -4
drivers/net/geneve.c
··· 908 908 909 909 info = skb_tunnel_info(skb); 910 910 if (info) { 911 - info->key.u.ipv4.dst = fl4.saddr; 912 - info->key.u.ipv4.src = fl4.daddr; 911 + struct ip_tunnel_info *unclone; 912 + 913 + unclone = skb_tunnel_info_unclone(skb); 914 + if (unlikely(!unclone)) { 915 + dst_release(&rt->dst); 916 + return -ENOMEM; 917 + } 918 + 919 + unclone->key.u.ipv4.dst = fl4.saddr; 920 + unclone->key.u.ipv4.src = fl4.daddr; 913 921 } 914 922 915 923 if (!pskb_may_pull(skb, ETH_HLEN)) { ··· 1001 993 struct ip_tunnel_info *info = skb_tunnel_info(skb); 1002 994 1003 995 if (info) { 1004 - info->key.u.ipv6.dst = fl6.saddr; 1005 - info->key.u.ipv6.src = fl6.daddr; 996 + struct ip_tunnel_info *unclone; 997 + 998 + unclone = skb_tunnel_info_unclone(skb); 999 + if (unlikely(!unclone)) { 1000 + dst_release(dst); 1001 + return -ENOMEM; 1002 + } 1003 + 1004 + unclone->key.u.ipv6.dst = fl6.saddr; 1005 + unclone->key.u.ipv6.src = fl6.daddr; 1006 1006 } 1007 1007 1008 1008 if (!pskb_may_pull(skb, ETH_HLEN)) {
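The geneve change above stops writing through the pointer returned by skb_tunnel_info(): that metadata may be shared with clones of the skb, so a writer has to take a private copy first via skb_tunnel_info_unclone() before storing the collected addresses. A generic copy-on-write sketch of that discipline (toy struct and refcount, not the skb metadata_dst API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct tun_info {
	int refcnt;		/* how many holders share this info */
	unsigned int dst;	/* stand-in for the tunnel key fields */
};

/* Copy-on-write access: when the info is shared (refcnt > 1), a writer
 * must be given a private copy so the other holders stay untouched,
 * which is the job skb_tunnel_info_unclone() does for real skbs.
 * Returns the writable info, or NULL on allocation failure. */
static struct tun_info *info_unclone(struct tun_info **slot)
{
	struct tun_info *info = *slot;
	struct tun_info *copy;

	if (info->refcnt == 1)
		return info;		/* sole owner, safe to write */

	copy = malloc(sizeof(*copy));
	if (!copy)
		return NULL;
	memcpy(copy, info, sizeof(*copy));
	copy->refcnt = 1;
	info->refcnt--;			/* drop our share of the original */
	*slot = copy;
	return copy;
}
```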
+1
drivers/net/ieee802154/atusb.c
··· 365 365 return -ENOMEM; 366 366 } 367 367 usb_anchor_urb(urb, &atusb->idle_urbs); 368 + usb_free_urb(urb); 368 369 n--; 369 370 } 370 371 return 0;
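The one-line atusb fix above works because usb_alloc_urb() hands back a URB holding one reference and usb_anchor_urb() takes a reference of its own; dropping the allocation reference right after anchoring leaves the anchor as the sole owner instead of leaking one count per idle URB. A generic refcount sketch of that hand-off (toy counters, not the USB API):

```c
#include <assert.h>
#include <stdbool.h>

struct obj {
	int refcnt;	/* 1 right after "allocation" */
};

static void obj_get(struct obj *o)
{
	o->refcnt++;
}

/* Returns true when the final reference is dropped, i.e. the point at
 * which a real implementation would free the object. */
static bool obj_put(struct obj *o)
{
	return --o->refcnt == 0;
}

/* An anchor (or any container) takes its own reference on the object
 * it holds, as usb_anchor_urb() does on a URB. */
static void anchor_obj(struct obj *o)
{
	obj_get(o);
}
```

After anchoring, the caller can drop its creation reference immediately; the object then lives exactly as long as the anchor holds it.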
+10 -3
drivers/net/phy/bcm-phy-lib.c
··· 369 369 370 370 int bcm_phy_set_eee(struct phy_device *phydev, bool enable) 371 371 { 372 - int val; 372 + int val, mask = 0; 373 373 374 374 /* Enable EEE at PHY level */ 375 375 val = phy_read_mmd(phydev, MDIO_MMD_AN, BRCM_CL45VEN_EEE_CONTROL); ··· 388 388 if (val < 0) 389 389 return val; 390 390 391 + if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, 392 + phydev->supported)) 393 + mask |= MDIO_EEE_1000T; 394 + if (linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, 395 + phydev->supported)) 396 + mask |= MDIO_EEE_100TX; 397 + 391 398 if (enable) 392 - val |= (MDIO_EEE_100TX | MDIO_EEE_1000T); 399 + val |= mask; 393 400 else 394 - val &= ~(MDIO_EEE_100TX | MDIO_EEE_1000T); 401 + val &= ~mask; 395 402 396 403 phy_write_mmd(phydev, MDIO_MMD_AN, BCM_CL45VEN_EEE_ADV, (u32)val); 397 404
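The bcm-phy-lib fix above stops advertising EEE for link modes the PHY does not support. A userspace sketch of the same mask-building pattern (flag values below are illustrative stand-ins for the `MDIO_EEE_*` and linkmode bits, not the kernel definitions):

```c
/* Illustrative stand-ins for MDIO_EEE_* advertisement bits and
 * for the "supported link modes" bits; values are arbitrary. */
#define EEE_100TX  (1 << 1)
#define EEE_1000T  (1 << 2)

#define SUP_100TX  (1 << 0)
#define SUP_1000T  (1 << 1)

/* Build the EEE advertisement mask only from modes the PHY
 * supports, then set or clear exactly those bits in the register
 * value -- instead of unconditionally toggling 100TX|1000T. */
static int eee_adv(int reg, unsigned int supported, int enable)
{
	int mask = 0;

	if (supported & SUP_1000T)
		mask |= EEE_1000T;
	if (supported & SUP_100TX)
		mask |= EEE_100TX;

	return enable ? (reg | mask) : (reg & ~mask);
}
```

A Fast-Ethernet-only PHY thus never gets `EEE_1000T` set, which is the bug the hunk fixes.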
+48
drivers/net/tun.c
··· 69 69 #include <linux/bpf.h> 70 70 #include <linux/bpf_trace.h> 71 71 #include <linux/mutex.h> 72 + #include <linux/ieee802154.h> 73 + #include <linux/if_ltalk.h> 74 + #include <uapi/linux/if_fddi.h> 75 + #include <uapi/linux/if_hippi.h> 76 + #include <uapi/linux/if_fc.h> 77 + #include <net/ax25.h> 78 + #include <net/rose.h> 79 + #include <net/6lowpan.h> 72 80 73 81 #include <linux/uaccess.h> 74 82 #include <linux/proc_fs.h> ··· 2927 2919 return __tun_set_ebpf(tun, prog_p, prog); 2928 2920 } 2929 2921 2922 + /* Return correct value for tun->dev->addr_len based on tun->dev->type. */ 2923 + static unsigned char tun_get_addr_len(unsigned short type) 2924 + { 2925 + switch (type) { 2926 + case ARPHRD_IP6GRE: 2927 + case ARPHRD_TUNNEL6: 2928 + return sizeof(struct in6_addr); 2929 + case ARPHRD_IPGRE: 2930 + case ARPHRD_TUNNEL: 2931 + case ARPHRD_SIT: 2932 + return 4; 2933 + case ARPHRD_ETHER: 2934 + return ETH_ALEN; 2935 + case ARPHRD_IEEE802154: 2936 + case ARPHRD_IEEE802154_MONITOR: 2937 + return IEEE802154_EXTENDED_ADDR_LEN; 2938 + case ARPHRD_PHONET_PIPE: 2939 + case ARPHRD_PPP: 2940 + case ARPHRD_NONE: 2941 + return 0; 2942 + case ARPHRD_6LOWPAN: 2943 + return EUI64_ADDR_LEN; 2944 + case ARPHRD_FDDI: 2945 + return FDDI_K_ALEN; 2946 + case ARPHRD_HIPPI: 2947 + return HIPPI_ALEN; 2948 + case ARPHRD_IEEE802: 2949 + return FC_ALEN; 2950 + case ARPHRD_ROSE: 2951 + return ROSE_ADDR_LEN; 2952 + case ARPHRD_NETROM: 2953 + return AX25_ADDR_LEN; 2954 + case ARPHRD_LOCALTLK: 2955 + return LTALK_ALEN; 2956 + default: 2957 + return 0; 2958 + } 2959 + } 2960 + 2930 2961 static long __tun_chr_ioctl(struct file *file, unsigned int cmd, 2931 2962 unsigned long arg, int ifreq_len) 2932 2963 { ··· 3129 3082 break; 3130 3083 } 3131 3084 tun->dev->type = (int) arg; 3085 + tun->dev->addr_len = tun_get_addr_len(tun->dev->type); 3132 3086 netif_info(tun, drv, tun->dev, "linktype set to %d\n", 3133 3087 tun->dev->type); 3134 3088 call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE,
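The new `tun_get_addr_len()` above is a plain link-type-to-address-length lookup. A minimal userspace sketch of the same idea (the `ARPHRD_*` constants below mirror `<linux/if_arp.h>` values but are renamed and hard-coded here for illustration):

```c
/* Illustrative copies of a few <linux/if_arp.h> hardware types. */
enum {
	X_ARPHRD_ETHER  = 1,	/* Ethernet */
	X_ARPHRD_TUNNEL = 768,	/* IPIP tunnel */
	X_ARPHRD_NONE   = 0xFFFE,
};

/* Map a link-layer type to its hardware address length, defaulting
 * to 0 for types that carry no hardware address -- the same shape
 * as the kernel helper, with only a subset of the cases. */
static unsigned char addr_len_for(unsigned short type)
{
	switch (type) {
	case X_ARPHRD_ETHER:
		return 6;	/* ETH_ALEN */
	case X_ARPHRD_TUNNEL:
		return 4;	/* IPv4 address */
	case X_ARPHRD_NONE:
	default:
		return 0;
	}
}
```

The point of the hunk is that `dev->addr_len` must be kept consistent with `dev->type` whenever the latter changes via ioctl.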
+12 -21
drivers/net/usb/hso.c
··· 611 611 return serial; 612 612 } 613 613 614 - static int get_free_serial_index(void) 614 + static int obtain_minor(struct hso_serial *serial) 615 615 { 616 616 int index; 617 617 unsigned long flags; ··· 619 619 spin_lock_irqsave(&serial_table_lock, flags); 620 620 for (index = 0; index < HSO_SERIAL_TTY_MINORS; index++) { 621 621 if (serial_table[index] == NULL) { 622 + serial_table[index] = serial->parent; 623 + serial->minor = index; 622 624 spin_unlock_irqrestore(&serial_table_lock, flags); 623 - return index; 625 + return 0; 624 626 } 625 627 } 626 628 spin_unlock_irqrestore(&serial_table_lock, flags); ··· 631 629 return -1; 632 630 } 633 631 634 - static void set_serial_by_index(unsigned index, struct hso_serial *serial) 632 + static void release_minor(struct hso_serial *serial) 635 633 { 636 634 unsigned long flags; 637 635 638 636 spin_lock_irqsave(&serial_table_lock, flags); 639 - if (serial) 640 - serial_table[index] = serial->parent; 641 - else 642 - serial_table[index] = NULL; 637 + serial_table[serial->minor] = NULL; 643 638 spin_unlock_irqrestore(&serial_table_lock, flags); 644 639 } 645 640 ··· 2229 2230 static void hso_serial_tty_unregister(struct hso_serial *serial) 2230 2231 { 2231 2232 tty_unregister_device(tty_drv, serial->minor); 2233 + release_minor(serial); 2232 2234 } 2233 2235 2234 2236 static void hso_serial_common_free(struct hso_serial *serial) ··· 2253 2253 static int hso_serial_common_create(struct hso_serial *serial, int num_urbs, 2254 2254 int rx_size, int tx_size) 2255 2255 { 2256 - int minor; 2257 2256 int i; 2258 2257 2259 2258 tty_port_init(&serial->port); 2260 2259 2261 - minor = get_free_serial_index(); 2262 - if (minor < 0) 2260 + if (obtain_minor(serial)) 2263 2261 goto exit2; 2264 2262 2265 2263 /* register our minor number */ 2266 2264 serial->parent->dev = tty_port_register_device_attr(&serial->port, 2267 - tty_drv, minor, &serial->parent->interface->dev, 2265 + tty_drv, serial->minor, &serial->parent->interface->dev, 
2268 2266 serial->parent, hso_serial_dev_groups); 2269 - if (IS_ERR(serial->parent->dev)) 2267 + if (IS_ERR(serial->parent->dev)) { 2268 + release_minor(serial); 2270 2269 goto exit2; 2270 + } 2271 2271 2272 - /* fill in specific data for later use */ 2273 - serial->minor = minor; 2274 2272 serial->magic = HSO_SERIAL_MAGIC; 2275 2273 spin_lock_init(&serial->serial_lock); 2276 2274 serial->num_rx_urbs = num_urbs; ··· 2665 2667 2666 2668 serial->write_data = hso_std_serial_write_data; 2667 2669 2668 - /* and record this serial */ 2669 - set_serial_by_index(serial->minor, serial); 2670 - 2671 2670 /* setup the proc dirs and files if needed */ 2672 2671 hso_log_port(hso_dev); 2673 2672 ··· 2720 2725 mutex_lock(&serial->shared_int->shared_int_lock); 2721 2726 serial->shared_int->ref_count++; 2722 2727 mutex_unlock(&serial->shared_int->shared_int_lock); 2723 - 2724 - /* and record this serial */ 2725 - set_serial_by_index(serial->minor, serial); 2726 2728 2727 2729 /* setup the proc dirs and files if needed */ 2728 2730 hso_log_port(hso_dev); ··· 3105 3113 cancel_work_sync(&serial_table[i]->async_get_intf); 3106 3114 hso_serial_tty_unregister(serial); 3107 3115 kref_put(&serial_table[i]->ref, hso_serial_ref_free); 3108 - set_serial_by_index(i, NULL); 3109 3116 } 3110 3117 } 3111 3118
+7 -3
drivers/net/virtio_net.c
··· 406 406 offset += hdr_padded_len; 407 407 p += hdr_padded_len; 408 408 409 - copy = len; 410 - if (copy > skb_tailroom(skb)) 411 - copy = skb_tailroom(skb); 409 + /* Copy all frame if it fits skb->head, otherwise 410 + * we let virtio_net_hdr_to_skb() and GRO pull headers as needed. 411 + */ 412 + if (len <= skb_tailroom(skb)) 413 + copy = len; 414 + else 415 + copy = ETH_HLEN + metasize; 412 416 skb_put_data(skb, p, copy); 413 417 414 418 if (metasize) {
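The virtio_net hunk above changes the copy length decision: copy the whole frame only when it fits, otherwise copy just the Ethernet header plus metadata and let later stages pull more as needed. The decision reduces to a small pure function, sketched here:

```c
/* Choose how many bytes to copy into the skb head: the whole
 * frame if there is room, else only the Ethernet header plus any
 * metadata -- rather than silently truncating to the tailroom. */
static int copy_len(int len, int tailroom, int eth_hlen, int metasize)
{
	return len <= tailroom ? len : eth_hlen + metasize;
}
```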
+14 -4
drivers/net/vxlan.c
··· 2725 2725 goto tx_error; 2726 2726 } else if (err) { 2727 2727 if (info) { 2728 + struct ip_tunnel_info *unclone; 2728 2729 struct in_addr src, dst; 2730 + 2731 + unclone = skb_tunnel_info_unclone(skb); 2732 + if (unlikely(!unclone)) 2733 + goto tx_error; 2729 2734 2730 2735 src = remote_ip.sin.sin_addr; 2731 2736 dst = local_ip.sin.sin_addr; 2732 - info->key.u.ipv4.src = src.s_addr; 2733 - info->key.u.ipv4.dst = dst.s_addr; 2737 + unclone->key.u.ipv4.src = src.s_addr; 2738 + unclone->key.u.ipv4.dst = dst.s_addr; 2734 2739 } 2735 2740 vxlan_encap_bypass(skb, vxlan, vxlan, vni, false); 2736 2741 dst_release(ndst); ··· 2786 2781 goto tx_error; 2787 2782 } else if (err) { 2788 2783 if (info) { 2784 + struct ip_tunnel_info *unclone; 2789 2785 struct in6_addr src, dst; 2786 + 2787 + unclone = skb_tunnel_info_unclone(skb); 2788 + if (unlikely(!unclone)) 2789 + goto tx_error; 2790 2790 2791 2791 src = remote_ip.sin6.sin6_addr; 2792 2792 dst = local_ip.sin6.sin6_addr; 2793 - info->key.u.ipv6.src = src; 2794 - info->key.u.ipv6.dst = dst; 2793 + unclone->key.u.ipv6.src = src; 2794 + unclone->key.u.ipv6.dst = dst; 2795 2795 } 2796 2796 2797 2797 vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
+3 -2
drivers/net/wan/hdlc_fr.c
··· 415 415 416 416 if (pad > 0) { /* Pad the frame with zeros */ 417 417 if (__skb_pad(skb, pad, false)) 418 - goto drop; 418 + goto out; 419 419 skb_put(skb, pad); 420 420 } 421 421 } ··· 448 448 return NETDEV_TX_OK; 449 449 450 450 drop: 451 - dev->stats.tx_dropped++; 452 451 kfree_skb(skb); 452 + out: 453 + dev->stats.tx_dropped++; 453 454 return NETDEV_TX_OK; 454 455 } 455 456
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
··· 2439 2439 vif = ifp->vif; 2440 2440 cfg = wdev_to_cfg(&vif->wdev); 2441 2441 cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif = NULL; 2442 - if (locked) { 2442 + if (!locked) { 2443 2443 rtnl_lock(); 2444 2444 wiphy_lock(cfg->wiphy); 2445 2445 cfg80211_unregister_wdev(&vif->wdev);
+5 -5
drivers/net/wireless/intel/iwlwifi/fw/notif-wait.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2005-2014 Intel Corporation 3 + * Copyright (C) 2005-2014, 2021 Intel Corporation 4 4 * Copyright (C) 2015-2017 Intel Deutschland GmbH 5 5 */ 6 6 #include <linux/sched.h> ··· 26 26 if (!list_empty(&notif_wait->notif_waits)) { 27 27 struct iwl_notification_wait *w; 28 28 29 - spin_lock(&notif_wait->notif_wait_lock); 29 + spin_lock_bh(&notif_wait->notif_wait_lock); 30 30 list_for_each_entry(w, &notif_wait->notif_waits, list) { 31 31 int i; 32 32 bool found = false; ··· 59 59 triggered = true; 60 60 } 61 61 } 62 - spin_unlock(&notif_wait->notif_wait_lock); 62 + spin_unlock_bh(&notif_wait->notif_wait_lock); 63 63 } 64 64 65 65 return triggered; ··· 70 70 { 71 71 struct iwl_notification_wait *wait_entry; 72 72 73 - spin_lock(&notif_wait->notif_wait_lock); 73 + spin_lock_bh(&notif_wait->notif_wait_lock); 74 74 list_for_each_entry(wait_entry, &notif_wait->notif_waits, list) 75 75 wait_entry->aborted = true; 76 - spin_unlock(&notif_wait->notif_wait_lock); 76 + spin_unlock_bh(&notif_wait->notif_wait_lock); 77 77 78 78 wake_up_all(&notif_wait->notif_waitq); 79 79 }
+1
drivers/net/wireless/intel/iwlwifi/iwl-config.h
··· 414 414 #define IWL_CFG_MAC_TYPE_QNJ 0x36 415 415 #define IWL_CFG_MAC_TYPE_SO 0x37 416 416 #define IWL_CFG_MAC_TYPE_SNJ 0x42 417 + #define IWL_CFG_MAC_TYPE_SOF 0x43 417 418 #define IWL_CFG_MAC_TYPE_MA 0x44 418 419 419 420 #define IWL_CFG_RF_TYPE_TH 0x105
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
··· 232 232 REG_CAPA_V2_MCS_9_ALLOWED = BIT(6), 233 233 REG_CAPA_V2_WEATHER_DISABLED = BIT(7), 234 234 REG_CAPA_V2_40MHZ_ALLOWED = BIT(8), 235 - REG_CAPA_V2_11AX_DISABLED = BIT(13), 235 + REG_CAPA_V2_11AX_DISABLED = BIT(10), 236 236 }; 237 237 238 238 /*
+5 -2
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
··· 1786 1786 return -EINVAL; 1787 1787 1788 1788 /* value zero triggers re-sending the default table to the device */ 1789 - if (!op_id) 1789 + if (!op_id) { 1790 + mutex_lock(&mvm->mutex); 1790 1791 ret = iwl_rfi_send_config_cmd(mvm, NULL); 1791 - else 1792 + mutex_unlock(&mvm->mutex); 1793 + } else { 1792 1794 ret = -EOPNOTSUPP; /* in the future a new table will be added */ 1795 + } 1793 1796 1794 1797 return ret ?: count; 1795 1798 }
+3 -3
drivers/net/wireless/intel/iwlwifi/mvm/rfi.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2020 Intel Corporation 3 + * Copyright (C) 2020 - 2021 Intel Corporation 4 4 */ 5 5 6 6 #include "mvm.h" ··· 66 66 if (!fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_RFIM_SUPPORT)) 67 67 return -EOPNOTSUPP; 68 68 69 + lockdep_assert_held(&mvm->mutex); 70 + 69 71 /* in case no table is passed, use the default one */ 70 72 if (!rfi_table) { 71 73 memcpy(cmd.table, iwl_rfi_table, sizeof(cmd.table)); ··· 77 75 cmd.oem = 1; 78 76 } 79 77 80 - mutex_lock(&mvm->mutex); 81 78 ret = iwl_mvm_send_cmd(mvm, &hcmd); 82 - mutex_unlock(&mvm->mutex); 83 79 84 80 if (ret) 85 81 IWL_ERR(mvm, "Failed to send RFI config cmd %d\n", ret);
+12 -5
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
··· 272 272 rx_status->chain_signal[2] = S8_MIN; 273 273 } 274 274 275 - static int iwl_mvm_rx_mgmt_crypto(struct ieee80211_sta *sta, 276 - struct ieee80211_hdr *hdr, 277 - struct iwl_rx_mpdu_desc *desc, 278 - u32 status) 275 + static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta, 276 + struct ieee80211_hdr *hdr, 277 + struct iwl_rx_mpdu_desc *desc, 278 + u32 status) 279 279 { 280 280 struct iwl_mvm_sta *mvmsta; 281 281 struct iwl_mvm_vif *mvmvif; ··· 284 284 struct ieee80211_key_conf *key; 285 285 u32 len = le16_to_cpu(desc->mpdu_len); 286 286 const u8 *frame = (void *)hdr; 287 + 288 + if ((status & IWL_RX_MPDU_STATUS_SEC_MASK) == IWL_RX_MPDU_STATUS_SEC_NONE) 289 + return 0; 287 290 288 291 /* 289 292 * For non-beacon, we don't really care. But beacons may ··· 359 356 IWL_RX_MPDU_STATUS_SEC_UNKNOWN && !mvm->monitor_on) 360 357 return -1; 361 358 359 + if (unlikely(ieee80211_is_mgmt(hdr->frame_control) && 360 + !ieee80211_has_protected(hdr->frame_control))) 361 + return iwl_mvm_rx_mgmt_prot(sta, hdr, desc, status); 362 + 362 363 if (!ieee80211_has_protected(hdr->frame_control) || 363 364 (status & IWL_RX_MPDU_STATUS_SEC_MASK) == 364 365 IWL_RX_MPDU_STATUS_SEC_NONE) ··· 418 411 stats->flag |= RX_FLAG_DECRYPTED; 419 412 return 0; 420 413 case RX_MPDU_RES_STATUS_SEC_CMAC_GMAC_ENC: 421 - return iwl_mvm_rx_mgmt_crypto(sta, hdr, desc, status); 414 + break; 422 415 default: 423 416 /* 424 417 * Sometimes we can get frames that were not decrypted
+1 -30
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2018-2020 Intel Corporation 3 + * Copyright (C) 2018-2021 Intel Corporation 4 4 */ 5 5 #include "iwl-trans.h" 6 6 #include "iwl-fh.h" ··· 75 75 const struct fw_img *fw) 76 76 { 77 77 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 78 - u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ | 79 - u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 80 - CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) | 81 - u32_encode_bits(250, 82 - CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) | 83 - CSR_LTR_LONG_VAL_AD_SNOOP_REQ | 84 - u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 85 - CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) | 86 - u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL); 87 78 struct iwl_context_info_gen3 *ctxt_info_gen3; 88 79 struct iwl_prph_scratch *prph_scratch; 89 80 struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl; ··· 207 216 208 217 iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL, 209 218 CSR_AUTO_FUNC_BOOT_ENA); 210 - 211 - /* 212 - * To workaround hardware latency issues during the boot process, 213 - * initialize the LTR to ~250 usec (see ltr_val above). 214 - * The firmware initializes this again later (to a smaller value). 215 - */ 216 - if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 || 217 - trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) && 218 - !trans->trans_cfg->integrated) { 219 - iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val); 220 - } else if (trans->trans_cfg->integrated && 221 - trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) { 222 - iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL); 223 - iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val); 224 - } 225 - 226 - if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) 227 - iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1); 228 - else 229 - iwl_set_bit(trans, CSR_GP_CNTRL, CSR_AUTO_FUNC_INIT); 230 219 231 220 return 0; 232 221
+1 -2
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 3 * Copyright (C) 2017 Intel Deutschland GmbH 4 - * Copyright (C) 2018-2020 Intel Corporation 4 + * Copyright (C) 2018-2021 Intel Corporation 5 5 */ 6 6 #include "iwl-trans.h" 7 7 #include "iwl-fh.h" ··· 240 240 241 241 /* kick FW self load */ 242 242 iwl_write64(trans, CSR_CTXT_INFO_BA, trans_pcie->ctxt_info_dma_addr); 243 - iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1); 244 243 245 244 /* Context info will be released upon alive or failure to get one */ 246 245
+26 -1
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 592 592 IWL_DEV_INFO(0x4DF0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL), 593 593 IWL_DEV_INFO(0x4DF0, 0x2074, iwl_ax201_cfg_qu_hr, NULL), 594 594 IWL_DEV_INFO(0x4DF0, 0x4070, iwl_ax201_cfg_qu_hr, NULL), 595 + IWL_DEV_INFO(0x4DF0, 0x6074, iwl_ax201_cfg_qu_hr, NULL), 595 596 596 597 /* So with HR */ 597 598 IWL_DEV_INFO(0x2725, 0x0090, iwlax211_2ax_cfg_so_gf_a0, NULL), ··· 1041 1040 IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY, 1042 1041 IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 1043 1042 IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1044 - iwl_cfg_so_a0_hr_a0, iwl_ax201_name) 1043 + iwl_cfg_so_a0_hr_a0, iwl_ax201_name), 1044 + 1045 + /* So-F with Hr */ 1046 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1047 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1048 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 1049 + IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1050 + iwl_cfg_so_a0_hr_a0, iwl_ax203_name), 1051 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1052 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1053 + IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY, 1054 + IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1055 + iwl_cfg_so_a0_hr_a0, iwl_ax101_name), 1056 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1057 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1058 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 1059 + IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1060 + iwl_cfg_so_a0_hr_a0, iwl_ax201_name), 1061 + 1062 + /* So-F with Gf */ 1063 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1064 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1065 + IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, 1066 + IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1067 + iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_name), 1045 1068 1046 1069 #endif /* CONFIG_IWLMVM */ 1047 1070 };
+35
drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
··· 266 266 mutex_unlock(&trans_pcie->mutex); 267 267 } 268 268 269 + static void iwl_pcie_set_ltr(struct iwl_trans *trans) 270 + { 271 + u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ | 272 + u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 273 + CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) | 274 + u32_encode_bits(250, 275 + CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) | 276 + CSR_LTR_LONG_VAL_AD_SNOOP_REQ | 277 + u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 278 + CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) | 279 + u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL); 280 + 281 + /* 282 + * To workaround hardware latency issues during the boot process, 283 + * initialize the LTR to ~250 usec (see ltr_val above). 284 + * The firmware initializes this again later (to a smaller value). 285 + */ 286 + if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 || 287 + trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) && 288 + !trans->trans_cfg->integrated) { 289 + iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val); 290 + } else if (trans->trans_cfg->integrated && 291 + trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) { 292 + iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL); 293 + iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val); 294 + } 295 + } 296 + 269 297 int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans, 270 298 const struct fw_img *fw, bool run_in_rfkill) 271 299 { ··· 359 331 ret = iwl_pcie_ctxt_info_init(trans, fw); 360 332 if (ret) 361 333 goto out; 334 + 335 + iwl_pcie_set_ltr(trans); 336 + 337 + if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) 338 + iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1); 339 + else 340 + iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1); 362 341 363 342 /* re-check RF-Kill state since we may have missed the interrupt */ 364 343 hw_rfkill = iwl_pcie_check_hw_rf_kill(trans);
+4 -3
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 928 928 u32 cmd_pos; 929 929 const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD]; 930 930 u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD]; 931 + unsigned long flags; 931 932 932 933 if (WARN(!trans->wide_cmd_header && 933 934 group_id > IWL_ALWAYS_LONG_GROUP, ··· 1012 1011 goto free_dup_buf; 1013 1012 } 1014 1013 1015 - spin_lock_bh(&txq->lock); 1014 + spin_lock_irqsave(&txq->lock, flags); 1016 1015 1017 1016 if (iwl_txq_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) { 1018 - spin_unlock_bh(&txq->lock); 1017 + spin_unlock_irqrestore(&txq->lock, flags); 1019 1018 1020 1019 IWL_ERR(trans, "No space in command queue\n"); 1021 1020 iwl_op_mode_cmd_queue_full(trans->op_mode); ··· 1175 1174 unlock_reg: 1176 1175 spin_unlock(&trans_pcie->reg_lock); 1177 1176 out: 1178 - spin_unlock_bh(&txq->lock); 1177 + spin_unlock_irqrestore(&txq->lock, flags); 1179 1178 free_dup_buf: 1180 1179 if (idx < 0) 1181 1180 kfree(dup_buf);
+2 -2
drivers/net/wireless/mediatek/mt76/mt7921/regs.h
··· 135 135 136 136 #define MT_WTBLON_TOP_BASE 0x34000 137 137 #define MT_WTBLON_TOP(ofs) (MT_WTBLON_TOP_BASE + (ofs)) 138 - #define MT_WTBLON_TOP_WDUCR MT_WTBLON_TOP(0x0) 138 + #define MT_WTBLON_TOP_WDUCR MT_WTBLON_TOP(0x200) 139 139 #define MT_WTBLON_TOP_WDUCR_GROUP GENMASK(2, 0) 140 140 141 - #define MT_WTBL_UPDATE MT_WTBLON_TOP(0x030) 141 + #define MT_WTBL_UPDATE MT_WTBLON_TOP(0x230) 142 142 #define MT_WTBL_UPDATE_WLAN_IDX GENMASK(9, 0) 143 143 #define MT_WTBL_UPDATE_ADM_COUNT_CLEAR BIT(12) 144 144 #define MT_WTBL_UPDATE_BUSY BIT(31)
+3 -2
drivers/net/wireless/virt_wifi.c
··· 12 12 #include <net/cfg80211.h> 13 13 #include <net/rtnetlink.h> 14 14 #include <linux/etherdevice.h> 15 + #include <linux/math64.h> 15 16 #include <linux/module.h> 16 17 17 18 static struct wiphy *common_wiphy; ··· 169 168 scan_result.work); 170 169 struct wiphy *wiphy = priv_to_wiphy(priv); 171 170 struct cfg80211_scan_info scan_info = { .aborted = false }; 171 + u64 tsf = div_u64(ktime_get_boottime_ns(), 1000); 172 172 173 173 informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz, 174 174 CFG80211_BSS_FTYPE_PRESP, 175 - fake_router_bssid, 176 - ktime_get_boottime_ns(), 175 + fake_router_bssid, tsf, 177 176 WLAN_CAPABILITY_ESS, 0, 178 177 (void *)&ssid, sizeof(ssid), 179 178 DBM_TO_MBM(-50), GFP_KERNEL);
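The virt_wifi fix above converts the nanosecond boot-time stamp to microseconds before passing it as a TSF, using `div_u64()` because a plain 64-bit division is not guaranteed to be available on 32-bit kernels. In userspace the conversion itself is just:

```c
#include <stdint.h>

/* cfg80211 expects the TSF in microseconds; ktime_get_boottime_ns()
 * returns nanoseconds, hence the divide-by-1000 (done with
 * div_u64() in the kernel to stay safe on 32-bit targets). */
static uint64_t ns_to_tsf_us(uint64_t boottime_ns)
{
	return boottime_ns / 1000;
}
```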
+23 -13
drivers/of/fdt.c
··· 205 205 *pprev = NULL; 206 206 } 207 207 208 - static bool populate_node(const void *blob, 208 + static int populate_node(const void *blob, 209 209 int offset, 210 210 void **mem, 211 211 struct device_node *dad, ··· 214 214 { 215 215 struct device_node *np; 216 216 const char *pathp; 217 - unsigned int l, allocl; 217 + int len; 218 218 219 - pathp = fdt_get_name(blob, offset, &l); 219 + pathp = fdt_get_name(blob, offset, &len); 220 220 if (!pathp) { 221 221 *pnp = NULL; 222 - return false; 222 + return len; 223 223 } 224 224 225 - allocl = ++l; 225 + len++; 226 226 227 - np = unflatten_dt_alloc(mem, sizeof(struct device_node) + allocl, 227 + np = unflatten_dt_alloc(mem, sizeof(struct device_node) + len, 228 228 __alignof__(struct device_node)); 229 229 if (!dryrun) { 230 230 char *fn; 231 231 of_node_init(np); 232 232 np->full_name = fn = ((char *)np) + sizeof(*np); 233 233 234 - memcpy(fn, pathp, l); 234 + memcpy(fn, pathp, len); 235 235 236 236 if (dad != NULL) { 237 237 np->parent = dad; ··· 295 295 struct device_node *nps[FDT_MAX_DEPTH]; 296 296 void *base = mem; 297 297 bool dryrun = !base; 298 + int ret; 298 299 299 300 if (nodepp) 300 301 *nodepp = NULL; ··· 323 322 !of_fdt_device_is_available(blob, offset)) 324 323 continue; 325 324 326 - if (!populate_node(blob, offset, &mem, nps[depth], 327 - &nps[depth+1], dryrun)) 328 - return mem - base; 325 + ret = populate_node(blob, offset, &mem, nps[depth], 326 + &nps[depth+1], dryrun); 327 + if (ret < 0) 328 + return ret; 329 329 330 330 if (!dryrun && nodepp && !*nodepp) 331 331 *nodepp = nps[depth+1]; ··· 374 372 { 375 373 int size; 376 374 void *mem; 375 + int ret; 376 + 377 + if (mynodes) 378 + *mynodes = NULL; 377 379 378 380 pr_debug(" -> unflatten_device_tree()\n"); 379 381 ··· 398 392 399 393 /* First pass, scan for size */ 400 394 size = unflatten_dt_nodes(blob, NULL, dad, NULL); 401 - if (size < 0) 395 + if (size <= 0) 402 396 return NULL; 403 397 404 398 size = ALIGN(size, 4); ··· 416 410 
pr_debug(" unflattening %p...\n", mem); 417 411 418 412 /* Second pass, do actual unflattening */ 419 - unflatten_dt_nodes(blob, mem, dad, mynodes); 413 + ret = unflatten_dt_nodes(blob, mem, dad, mynodes); 414 + 420 415 if (be32_to_cpup(mem + size) != 0xdeadbeef) 421 416 pr_warn("End of tree marker overwritten: %08x\n", 422 417 be32_to_cpup(mem + size)); 423 418 424 - if (detached && mynodes) { 419 + if (ret <= 0) 420 + return NULL; 421 + 422 + if (detached && mynodes && *mynodes) { 425 423 of_node_set_flag(*mynodes, OF_DETACHED); 426 424 pr_debug("unflattened tree is detached\n"); 427 425 }
+2
drivers/of/of_private.h
··· 8 8 * Copyright (C) 1996-2005 Paul Mackerras. 9 9 */ 10 10 11 + #define FDT_ALIGN_SIZE 8 12 + 11 13 /** 12 14 * struct alias_prop - Alias property in 'aliases' node 13 15 * @link: List node to link the structure in aliases_lookup list
+15 -9
drivers/of/overlay.c
··· 57 57 * struct overlay_changeset 58 58 * @id: changeset identifier 59 59 * @ovcs_list: list on which we are located 60 - * @fdt: FDT that was unflattened to create @overlay_tree 60 + * @fdt: base of memory allocated to hold aligned FDT that was unflattened to create @overlay_tree 61 61 * @overlay_tree: expanded device tree that contains the fragment nodes 62 62 * @count: count of fragment structures 63 63 * @fragments: fragment nodes in the overlay expanded device tree ··· 719 719 /** 720 720 * init_overlay_changeset() - initialize overlay changeset from overlay tree 721 721 * @ovcs: Overlay changeset to build 722 - * @fdt: the FDT that was unflattened to create @tree 723 - * @tree: Contains all the overlay fragments and overlay fixup nodes 722 + * @fdt: base of memory allocated to hold aligned FDT that was unflattened to create @tree 723 + * @tree: Contains the overlay fragments and overlay fixup nodes 724 724 * 725 725 * Initialize @ovcs. Populate @ovcs->fragments with node information from 726 726 * the top level of @tree. The relevant top level nodes are the fragment ··· 873 873 * internal documentation 874 874 * 875 875 * of_overlay_apply() - Create and apply an overlay changeset 876 - * @fdt: the FDT that was unflattened to create @tree 876 + * @fdt: base of memory allocated to hold the aligned FDT 877 877 * @tree: Expanded overlay device tree 878 878 * @ovcs_id: Pointer to overlay changeset id 879 879 * ··· 953 953 /* 954 954 * after overlay_notify(), ovcs->overlay_tree related pointers may have 955 955 * leaked to drivers, so can not kfree() tree, aka ovcs->overlay_tree; 956 - * and can not free fdt, aka ovcs->fdt 956 + * and can not free memory containing aligned fdt. The aligned fdt 957 + * is contained within the memory at ovcs->fdt, possibly at an offset 958 + * from ovcs->fdt. 
957 959 */ 958 960 ret = overlay_notify(ovcs, OF_OVERLAY_PRE_APPLY); 959 961 if (ret) { ··· 1016 1014 int of_overlay_fdt_apply(const void *overlay_fdt, u32 overlay_fdt_size, 1017 1015 int *ovcs_id) 1018 1016 { 1019 - const void *new_fdt; 1017 + void *new_fdt; 1018 + void *new_fdt_align; 1020 1019 int ret; 1021 1020 u32 size; 1022 - struct device_node *overlay_root; 1021 + struct device_node *overlay_root = NULL; 1023 1022 1024 1023 *ovcs_id = 0; 1025 1024 ret = 0; ··· 1039 1036 * Must create permanent copy of FDT because of_fdt_unflatten_tree() 1040 1037 * will create pointers to the passed in FDT in the unflattened tree. 1041 1038 */ 1042 - new_fdt = kmemdup(overlay_fdt, size, GFP_KERNEL); 1039 + new_fdt = kmalloc(size + FDT_ALIGN_SIZE, GFP_KERNEL); 1043 1040 if (!new_fdt) 1044 1041 return -ENOMEM; 1045 1042 1046 - of_fdt_unflatten_tree(new_fdt, NULL, &overlay_root); 1043 + new_fdt_align = PTR_ALIGN(new_fdt, FDT_ALIGN_SIZE); 1044 + memcpy(new_fdt_align, overlay_fdt, size); 1045 + 1046 + of_fdt_unflatten_tree(new_fdt_align, NULL, &overlay_root); 1047 1047 if (!overlay_root) { 1048 1048 pr_err("unable to unflatten overlay_fdt\n"); 1049 1049 ret = -EINVAL;
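The overlay.c hunk above (and the matching unittest.c change further down) replaces `kmemdup()` with an over-allocate-and-align copy, because the unflatten code requires the FDT blob at an 8-byte boundary. A userspace sketch of the pattern, with `PTR_ALIGN` reimplemented locally for illustration:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define FDT_ALIGN_SIZE 8

/* Local stand-in for the kernel's PTR_ALIGN(): round @p up to the
 * next multiple of @a (a power of two). */
#define PTR_ALIGN(p, a) \
	((void *)(((uintptr_t)(p) + ((a) - 1)) & ~(uintptr_t)((a) - 1)))

/* Copy @fdt into freshly allocated memory such that the copy
 * starts on an FDT_ALIGN_SIZE boundary. @to_free receives the raw
 * allocation, which is what must later be passed to free() --
 * the aligned pointer may sit at an offset inside it. */
static void *fdt_copy_aligned(const void *fdt, size_t size, void **to_free)
{
	void *buf = malloc(size + FDT_ALIGN_SIZE);
	void *aligned;

	if (!buf)
		return NULL;
	aligned = PTR_ALIGN(buf, FDT_ALIGN_SIZE);
	memcpy(aligned, fdt, size);
	*to_free = buf;
	return aligned;
}
```

Keeping the raw pointer separately is exactly why the hunk's comment stresses that the aligned FDT may live "at an offset from ovcs->fdt".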
+10 -1
drivers/of/property.c
··· 1262 1262 DEFINE_SIMPLE_PROP(pinctrl8, "pinctrl-8", NULL) 1263 1263 DEFINE_SUFFIX_PROP(regulators, "-supply", NULL) 1264 1264 DEFINE_SUFFIX_PROP(gpio, "-gpio", "#gpio-cells") 1265 - DEFINE_SUFFIX_PROP(gpios, "-gpios", "#gpio-cells") 1265 + 1266 + static struct device_node *parse_gpios(struct device_node *np, 1267 + const char *prop_name, int index) 1268 + { 1269 + if (!strcmp_suffix(prop_name, ",nr-gpios")) 1270 + return NULL; 1271 + 1272 + return parse_suffix_prop_cells(np, prop_name, index, "-gpios", 1273 + "#gpio-cells"); 1274 + } 1266 1275 1267 1276 static struct device_node *parse_iommu_maps(struct device_node *np, 1268 1277 const char *prop_name, int index)
+16 -6
drivers/of/unittest.c
··· 22 22 #include <linux/slab.h> 23 23 #include <linux/device.h> 24 24 #include <linux/platform_device.h> 25 + #include <linux/kernel.h> 25 26 26 27 #include <linux/i2c.h> 27 28 #include <linux/i2c-mux.h> ··· 1409 1408 static int __init unittest_data_add(void) 1410 1409 { 1411 1410 void *unittest_data; 1412 - struct device_node *unittest_data_node, *np; 1411 + void *unittest_data_align; 1412 + struct device_node *unittest_data_node = NULL, *np; 1413 1413 /* 1414 1414 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically 1415 1415 * created by cmd_dt_S_dtb in scripts/Makefile.lib ··· 1419 1417 extern uint8_t __dtb_testcases_end[]; 1420 1418 const int size = __dtb_testcases_end - __dtb_testcases_begin; 1421 1419 int rc; 1420 + void *ret; 1422 1421 1423 1422 if (!size) { 1424 - pr_warn("%s: No testcase data to attach; not running tests\n", 1425 - __func__); 1423 + pr_warn("%s: testcases is empty\n", __func__); 1426 1424 return -ENODATA; 1427 1425 } 1428 1426 1429 1427 /* creating copy */ 1430 - unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL); 1428 + unittest_data = kmalloc(size + FDT_ALIGN_SIZE, GFP_KERNEL); 1431 1429 if (!unittest_data) 1432 1430 return -ENOMEM; 1433 1431 1434 - of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node); 1432 + unittest_data_align = PTR_ALIGN(unittest_data, FDT_ALIGN_SIZE); 1433 + memcpy(unittest_data_align, __dtb_testcases_begin, size); 1434 + 1435 + ret = of_fdt_unflatten_tree(unittest_data_align, NULL, &unittest_data_node); 1436 + if (!ret) { 1437 + pr_warn("%s: unflatten testcases tree failed\n", __func__); 1438 + kfree(unittest_data); 1439 + return -ENODATA; 1440 + } 1435 1441 if (!unittest_data_node) { 1436 - pr_warn("%s: No tree to attach; not running tests\n", __func__); 1442 + pr_warn("%s: testcases tree is empty\n", __func__); 1437 1443 kfree(unittest_data); 1438 1444 return -ENODATA; 1439 1445 }
+9 -7
drivers/platform/x86/intel-hid.c
··· 483 483 goto wakeup; 484 484 485 485 /* 486 - * Switch events will wake the device and report the new switch 487 - * position to the input subsystem. 486 + * Some devices send (duplicate) tablet-mode events when moved 487 + * around even though the mode has not changed; and they do this 488 + * even when suspended. 489 + * Update the switch state in case it changed and then return 490 + * without waking up to avoid spurious wakeups. 488 491 */ 489 - if (priv->switches && (event == 0xcc || event == 0xcd)) 490 - goto wakeup; 492 + if (event == 0xcc || event == 0xcd) { 493 + report_tablet_mode_event(priv->switches, event); 494 + return; 495 + } 491 496 492 497 /* Wake up on 5-button array events only. */ 493 498 if (event == 0xc0 || !priv->array) ··· 505 500 506 501 wakeup: 507 502 pm_wakeup_hard_event(&device->dev); 508 - 509 - if (report_tablet_mode_event(priv->switches, event)) 510 - return; 511 503 512 504 return; 513 505 }
+12 -3
drivers/ras/cec.c
··· 309 309 return ret; 310 310 } 311 311 312 + /** 313 + * cec_add_elem - Add an element to the CEC array. 314 + * @pfn: page frame number to insert 315 + * 316 + * Return values: 317 + * - <0: on error 318 + * - 0: on success 319 + * - >0: when the inserted pfn was offlined 320 + */ 312 321 static int cec_add_elem(u64 pfn) 313 322 { 314 323 struct ce_array *ca = &ce_arr; 324 + int count, err, ret = 0; 315 325 unsigned int to = 0; 316 - int count, ret = 0; 317 326 318 327 /* 319 328 * We can be called very early on the identify_cpu() path where we are ··· 339 330 if (ca->n == MAX_ELEMS) 340 331 WARN_ON(!del_lru_elem_unlocked(ca)); 341 332 342 - ret = find_elem(ca, pfn, &to); 343 - if (ret < 0) { 333 + err = find_elem(ca, pfn, &to); 334 + if (err < 0) { 344 335 /* 345 336 * Shift range [to-end] to make room for one more element. 346 337 */
+4 -4
drivers/regulator/bd9571mwv-regulator.c
··· 125 125 126 126 static const struct regulator_desc regulators[] = { 127 127 BD9571MWV_REG("VD09", "vd09", VD09, avs_ops, 0, 0x7f, 128 - 0x80, 600000, 10000, 0x3c), 128 + 0x6f, 600000, 10000, 0x3c), 129 129 BD9571MWV_REG("VD18", "vd18", VD18, vid_ops, BD9571MWV_VD18_VID, 0xf, 130 130 16, 1625000, 25000, 0), 131 131 BD9571MWV_REG("VD25", "vd25", VD25, vid_ops, BD9571MWV_VD25_VID, 0xf, ··· 134 134 11, 2800000, 100000, 0), 135 135 BD9571MWV_REG("DVFS", "dvfs", DVFS, reg_ops, 136 136 BD9571MWV_DVFS_MONIVDAC, 0x7f, 137 - 0x80, 600000, 10000, 0x3c), 137 + 0x6f, 600000, 10000, 0x3c), 138 138 }; 139 139 140 140 #ifdef CONFIG_PM_SLEEP ··· 174 174 { 175 175 struct bd9571mwv_reg *bdreg = dev_get_drvdata(dev); 176 176 177 - return sprintf(buf, "%s\n", bdreg->bkup_mode_enabled ? "on" : "off"); 177 + return sysfs_emit(buf, "%s\n", bdreg->bkup_mode_enabled ? "on" : "off"); 178 178 } 179 179 180 180 static ssize_t backup_mode_store(struct device *dev, ··· 301 301 &config); 302 302 if (IS_ERR(rdev)) { 303 303 dev_err(&pdev->dev, "failed to register %s regulator\n", 304 - pdev->name); 304 + regulators[i].name); 305 305 return PTR_ERR(rdev); 306 306 } 307 307 }
+19 -1
drivers/remoteproc/pru_rproc.c
··· 450 450 if (len == 0) 451 451 return NULL; 452 452 453 + /* 454 + * GNU binutils do not support multiple address spaces. The GNU 455 + * linker's default linker script places IRAM at an arbitrary high 456 + * offset, in order to differentiate it from DRAM. Hence we need to 457 + * strip the artificial offset in the IRAM addresses coming from the 458 + * ELF file. 459 + * 460 + * The TI proprietary linker would never set those higher IRAM address 461 + * bits anyway. PRU architecture limits the program counter to 16-bit 462 + * word-address range. This in turn corresponds to 18-bit IRAM 463 + * byte-address range for ELF. 464 + * 465 + * Two more bits are added just in case to make the final 20-bit mask. 466 + * Idea is to have a safeguard in case TI decides to add banking 467 + * in future SoCs. 468 + */ 469 + da &= 0xfffff; 470 + 453 471 if (da >= PRU_IRAM_DA && 454 472 da + len <= PRU_IRAM_DA + pru->mem_regions[PRU_IOMEM_IRAM].size) { 455 473 offset = da - PRU_IRAM_DA; ··· 603 585 break; 604 586 } 605 587 606 - if (pru->data->is_k3 && is_iram) { 588 + if (pru->data->is_k3) { 607 589 ret = pru_rproc_memcpy(ptr, elf_data + phdr->p_offset, 608 590 filesz); 609 591 if (ret) {
+1 -1
drivers/remoteproc/qcom_pil_info.c
··· 56 56 memset_io(base, 0, resource_size(&imem)); 57 57 58 58 _reloc.base = base; 59 - _reloc.num_entries = resource_size(&imem) / PIL_RELOC_ENTRY_SIZE; 59 + _reloc.num_entries = (u32)resource_size(&imem) / PIL_RELOC_ENTRY_SIZE; 60 60 61 61 return 0; 62 62 }
+45 -33
drivers/scsi/hpsa_cmd.h
··· 20 20 #ifndef HPSA_CMD_H 21 21 #define HPSA_CMD_H 22 22 23 + #include <linux/compiler.h> 24 + 25 + #include <linux/build_bug.h> /* static_assert */ 26 + #include <linux/stddef.h> /* offsetof */ 27 + 23 28 /* general boundary defintions */ 24 29 #define SENSEINFOBYTES 32 /* may vary between hbas */ 25 30 #define SG_ENTRIES_IN_CMD 32 /* Max SG entries excluding chain blocks */ ··· 205 200 MAX_EXT_TARGETS + 1) /* + 1 is for the controller itself */ 206 201 207 202 /* SCSI-3 Commands */ 208 - #pragma pack(1) 209 - 210 203 #define HPSA_INQUIRY 0x12 211 204 struct InquiryData { 212 205 u8 data_byte[36]; 213 - }; 206 + } __packed; 214 207 215 208 #define HPSA_REPORT_LOG 0xc2 /* Report Logical LUNs */ 216 209 #define HPSA_REPORT_PHYS 0xc3 /* Report Physical LUNs */ ··· 224 221 u8 xor_mult[2]; /**< XOR multipliers for this position, 225 222 * valid for data disks only */ 226 223 u8 reserved[2]; 227 - }; 224 + } __packed; 228 225 229 226 struct raid_map_data { 230 227 __le32 structure_size; /* Size of entire structure in bytes */ ··· 250 247 __le16 dekindex; /* Data encryption key index. 
*/ 251 248 u8 reserved[16]; 252 249 struct raid_map_disk_data data[RAID_MAP_MAX_ENTRIES]; 253 - }; 250 + } __packed; 254 251 255 252 struct ReportLUNdata { 256 253 u8 LUNListLength[4]; 257 254 u8 extended_response_flag; 258 255 u8 reserved[3]; 259 256 u8 LUN[HPSA_MAX_LUN][8]; 260 - }; 257 + } __packed; 261 258 262 259 struct ext_report_lun_entry { 263 260 u8 lunid[8]; ··· 272 269 u8 lun_count; /* multi-lun device, how many luns */ 273 270 u8 redundant_paths; 274 271 u32 ioaccel_handle; /* ioaccel1 only uses lower 16 bits */ 275 - }; 272 + } __packed; 276 273 277 274 struct ReportExtendedLUNdata { 278 275 u8 LUNListLength[4]; 279 276 u8 extended_response_flag; 280 277 u8 reserved[3]; 281 278 struct ext_report_lun_entry LUN[HPSA_MAX_PHYS_LUN]; 282 - }; 279 + } __packed; 283 280 284 281 struct SenseSubsystem_info { 285 282 u8 reserved[36]; 286 283 u8 portname[8]; 287 284 u8 reserved1[1108]; 288 - }; 285 + } __packed; 289 286 290 287 /* BMIC commands */ 291 288 #define BMIC_READ 0x26 ··· 320 317 u8 Targ:6; 321 318 u8 Mode:2; /* b10 */ 322 319 } LogUnit; 323 - }; 320 + } __packed; 324 321 325 322 struct PhysDevAddr { 326 323 u32 TargetId:24; ··· 328 325 u32 Mode:2; 329 326 /* 2 level target device addr */ 330 327 union SCSI3Addr Target[2]; 331 - }; 328 + } __packed; 332 329 333 330 struct LogDevAddr { 334 331 u32 VolId:30; 335 332 u32 Mode:2; 336 333 u8 reserved[4]; 337 - }; 334 + } __packed; 338 335 339 336 union LUNAddr { 340 337 u8 LunAddrBytes[8]; 341 338 union SCSI3Addr SCSI3Lun[4]; 342 339 struct PhysDevAddr PhysDev; 343 340 struct LogDevAddr LogDev; 344 - }; 341 + } __packed; 345 342 346 343 struct CommandListHeader { 347 344 u8 ReplyQueue; ··· 349 346 __le16 SGTotal; 350 347 __le64 tag; 351 348 union LUNAddr LUN; 352 - }; 349 + } __packed; 353 350 354 351 struct RequestBlock { 355 352 u8 CDBLen; ··· 368 365 #define GET_DIR(tad) (((tad) >> 6) & 0x03) 369 366 u16 Timeout; 370 367 u8 CDB[16]; 371 - }; 368 + } __packed; 372 369 373 370 struct ErrDescriptor { 374 371 
__le64 Addr; 375 372 __le32 Len; 376 - }; 373 + } __packed; 377 374 378 375 struct SGDescriptor { 379 376 __le64 Addr; 380 377 __le32 Len; 381 378 __le32 Ext; 382 - }; 379 + } __packed; 383 380 384 381 union MoreErrInfo { 385 382 struct { ··· 393 390 u8 offense_num; /* byte # of offense 0-base */ 394 391 u32 offense_value; 395 392 } Invalid_Cmd; 396 - }; 393 + } __packed; 394 + 397 395 struct ErrorInfo { 398 396 u8 ScsiStatus; 399 397 u8 SenseLen; ··· 402 398 u32 ResidualCnt; 403 399 union MoreErrInfo MoreErrInfo; 404 400 u8 SenseInfo[SENSEINFOBYTES]; 405 - }; 401 + } __packed; 406 402 /* Command types */ 407 403 #define CMD_IOCTL_PEND 0x01 408 404 #define CMD_SCSI 0x03 ··· 457 453 atomic_t refcount; /* Must be last to avoid memset in hpsa_cmd_init() */ 458 454 } __aligned(COMMANDLIST_ALIGNMENT); 459 455 456 + /* 457 + * Make sure our embedded atomic variable is aligned. Otherwise we break atomic 458 + * operations on architectures that don't support unaligned atomics like IA64. 459 + * 460 + * The assert guards against reintroductin against unwanted __packed to 461 + * the struct CommandList. 
462 + */ 463 + static_assert(offsetof(struct CommandList, refcount) % __alignof__(atomic_t) == 0); 464 + 460 465 /* Max S/G elements in I/O accelerator command */ 461 466 #define IOACCEL1_MAXSGENTRIES 24 462 467 #define IOACCEL2_MAXSGENTRIES 28 ··· 502 489 __le64 host_addr; /* 0x70 - 0x77 */ 503 490 u8 CISS_LUN[8]; /* 0x78 - 0x7F */ 504 491 struct SGDescriptor SG[IOACCEL1_MAXSGENTRIES]; 505 - } __aligned(IOACCEL1_COMMANDLIST_ALIGNMENT); 492 + } __packed __aligned(IOACCEL1_COMMANDLIST_ALIGNMENT); 506 493 507 494 #define IOACCEL1_FUNCTION_SCSIIO 0x00 508 495 #define IOACCEL1_SGLOFFSET 32 ··· 532 519 u8 chain_indicator; 533 520 #define IOACCEL2_CHAIN 0x80 534 521 #define IOACCEL2_LAST_SG 0x40 535 - }; 522 + } __packed; 536 523 537 524 /* 538 525 * SCSI Response Format structure for IO Accelerator Mode 2 ··· 572 559 u8 sense_data_len; /* sense/response data length */ 573 560 u8 resid_cnt[4]; /* residual count */ 574 561 u8 sense_data_buff[32]; /* sense/response data buffer */ 575 - }; 562 + } __packed; 576 563 577 564 /* 578 565 * Structure for I/O accelerator (mode 2 or m2) commands. 
··· 605 592 __le32 tweak_upper; /* Encryption tweak, upper 4 bytes */ 606 593 struct ioaccel2_sg_element sg[IOACCEL2_MAXSGENTRIES]; 607 594 struct io_accel2_scsi_response error_data; 608 - } __aligned(IOACCEL2_COMMANDLIST_ALIGNMENT); 595 + } __packed __aligned(IOACCEL2_COMMANDLIST_ALIGNMENT); 609 596 610 597 /* 611 598 * defines for Mode 2 command struct ··· 631 618 __le64 abort_tag; /* cciss tag of SCSI cmd or TMF to abort */ 632 619 __le64 error_ptr; /* Error Pointer */ 633 620 __le32 error_len; /* Error Length */ 634 - } __aligned(IOACCEL2_COMMANDLIST_ALIGNMENT); 621 + } __packed __aligned(IOACCEL2_COMMANDLIST_ALIGNMENT); 635 622 636 623 /* Configuration Table Structure */ 637 624 struct HostWrite { ··· 639 626 __le32 command_pool_addr_hi; 640 627 __le32 CoalIntDelay; 641 628 __le32 CoalIntCount; 642 - }; 629 + } __packed; 643 630 644 631 #define SIMPLE_MODE 0x02 645 632 #define PERFORMANT_MODE 0x04 ··· 688 675 #define HPSA_EVENT_NOTIFY_ACCEL_IO_PATH_STATE_CHANGE (1 << 30) 689 676 #define HPSA_EVENT_NOTIFY_ACCEL_IO_PATH_CONFIG_CHANGE (1 << 31) 690 677 __le32 clear_event_notify; 691 - }; 678 + } __packed; 692 679 693 680 #define NUM_BLOCKFETCH_ENTRIES 8 694 681 struct TransTable_struct { ··· 699 686 __le32 RepQCtrAddrHigh32; 700 687 #define MAX_REPLY_QUEUES 64 701 688 struct vals32 RepQAddr[MAX_REPLY_QUEUES]; 702 - }; 689 + } __packed; 703 690 704 691 struct hpsa_pci_info { 705 692 unsigned char bus; 706 693 unsigned char dev_fn; 707 694 unsigned short domain; 708 695 u32 board_id; 709 - }; 696 + } __packed; 710 697 711 698 struct bmic_identify_controller { 712 699 u8 configured_logical_drive_count; /* offset 0 */ ··· 715 702 u8 pad2[136]; 716 703 u8 controller_mode; /* offset 292 */ 717 704 u8 pad3[32]; 718 - }; 705 + } __packed; 719 706 720 707 721 708 struct bmic_identify_physical_device { ··· 858 845 u8 max_link_rate[256]; 859 846 u8 neg_phys_link_rate[256]; 860 847 u8 box_conn_name[8]; 861 - } __attribute((aligned(512))); 848 + } __packed 
__attribute((aligned(512))); 862 849 863 850 struct bmic_sense_subsystem_info { 864 851 u8 primary_slot_number; ··· 871 858 u8 secondary_array_serial_number[32]; 872 859 u8 secondary_cache_serial_number[32]; 873 860 u8 pad[332]; 874 - }; 861 + } __packed; 875 862 876 863 struct bmic_sense_storage_box_params { 877 864 u8 reserved[36]; ··· 883 870 u8 reserver_3[84]; 884 871 u8 phys_connector[2]; 885 872 u8 reserved_4[296]; 886 - }; 873 + } __packed; 887 874 888 - #pragma pack() 889 875 #endif /* HPSA_CMD_H */
+4 -4
drivers/scsi/pm8001/pm8001_hwi.c
··· 223 223 PM8001_EVENT_LOG_SIZE; 224 224 pm8001_ha->main_cfg_tbl.pm8001_tbl.iop_event_log_option = 0x01; 225 225 pm8001_ha->main_cfg_tbl.pm8001_tbl.fatal_err_interrupt = 0x01; 226 - for (i = 0; i < PM8001_MAX_INB_NUM; i++) { 226 + for (i = 0; i < pm8001_ha->max_q_num; i++) { 227 227 pm8001_ha->inbnd_q_tbl[i].element_pri_size_cnt = 228 228 PM8001_MPI_QUEUE | (pm8001_ha->iomb_size << 16) | (0x00<<30); 229 229 pm8001_ha->inbnd_q_tbl[i].upper_base_addr = ··· 249 249 pm8001_ha->inbnd_q_tbl[i].producer_idx = 0; 250 250 pm8001_ha->inbnd_q_tbl[i].consumer_index = 0; 251 251 } 252 - for (i = 0; i < PM8001_MAX_OUTB_NUM; i++) { 252 + for (i = 0; i < pm8001_ha->max_q_num; i++) { 253 253 pm8001_ha->outbnd_q_tbl[i].element_size_cnt = 254 254 PM8001_MPI_QUEUE | (pm8001_ha->iomb_size << 16) | (0x01<<30); 255 255 pm8001_ha->outbnd_q_tbl[i].upper_base_addr = ··· 671 671 read_outbnd_queue_table(pm8001_ha); 672 672 /* update main config table ,inbound table and outbound table */ 673 673 update_main_config_table(pm8001_ha); 674 - for (i = 0; i < PM8001_MAX_INB_NUM; i++) 674 + for (i = 0; i < pm8001_ha->max_q_num; i++) 675 675 update_inbnd_queue_table(pm8001_ha, i); 676 - for (i = 0; i < PM8001_MAX_OUTB_NUM; i++) 676 + for (i = 0; i < pm8001_ha->max_q_num; i++) 677 677 update_outbnd_queue_table(pm8001_ha, i); 678 678 /* 8081 controller donot require these operations */ 679 679 if (deviceid != 0x8081 && deviceid != 0x0042) {
+1 -1
drivers/scsi/scsi_transport_srp.c
··· 541 541 res = mutex_lock_interruptible(&rport->mutex); 542 542 if (res) 543 543 goto out; 544 - if (rport->state != SRP_RPORT_FAIL_FAST) 544 + if (rport->state != SRP_RPORT_FAIL_FAST && rport->state != SRP_RPORT_LOST) 545 545 /* 546 546 * sdev state must be SDEV_TRANSPORT_OFFLINE, transition 547 547 * to SDEV_BLOCK is illegal. Calling scsi_target_unblock()
+14 -17
drivers/scsi/ufs/ufshcd.c
··· 6386 6386 DECLARE_COMPLETION_ONSTACK(wait); 6387 6387 struct request *req; 6388 6388 unsigned long flags; 6389 - int free_slot, task_tag, err; 6389 + int task_tag, err; 6390 6390 6391 6391 /* 6392 - * Get free slot, sleep if slots are unavailable. 6393 - * Even though we use wait_event() which sleeps indefinitely, 6394 - * the maximum wait time is bounded by %TM_CMD_TIMEOUT. 6392 + * blk_get_request() is used here only to get a free tag. 6395 6393 */ 6396 6394 req = blk_get_request(q, REQ_OP_DRV_OUT, 0); 6397 6395 if (IS_ERR(req)) 6398 6396 return PTR_ERR(req); 6399 6397 6400 6398 req->end_io_data = &wait; 6401 - free_slot = req->tag; 6402 - WARN_ON_ONCE(free_slot < 0 || free_slot >= hba->nutmrs); 6403 6399 ufshcd_hold(hba, false); 6404 6400 6405 6401 spin_lock_irqsave(host->host_lock, flags); 6406 - task_tag = hba->nutrs + free_slot; 6402 + blk_mq_start_request(req); 6407 6403 6404 + task_tag = req->tag; 6408 6405 treq->req_header.dword_0 |= cpu_to_be32(task_tag); 6409 6406 6410 - memcpy(hba->utmrdl_base_addr + free_slot, treq, sizeof(*treq)); 6411 - ufshcd_vops_setup_task_mgmt(hba, free_slot, tm_function); 6407 + memcpy(hba->utmrdl_base_addr + task_tag, treq, sizeof(*treq)); 6408 + ufshcd_vops_setup_task_mgmt(hba, task_tag, tm_function); 6412 6409 6413 6410 /* send command to the controller */ 6414 - __set_bit(free_slot, &hba->outstanding_tasks); 6411 + __set_bit(task_tag, &hba->outstanding_tasks); 6415 6412 6416 6413 /* Make sure descriptors are ready before ringing the task doorbell */ 6417 6414 wmb(); 6418 6415 6419 - ufshcd_writel(hba, 1 << free_slot, REG_UTP_TASK_REQ_DOOR_BELL); 6416 + ufshcd_writel(hba, 1 << task_tag, REG_UTP_TASK_REQ_DOOR_BELL); 6420 6417 /* Make sure that doorbell is committed immediately */ 6421 6418 wmb(); 6422 6419 ··· 6433 6436 ufshcd_add_tm_upiu_trace(hba, task_tag, UFS_TM_ERR); 6434 6437 dev_err(hba->dev, "%s: task management cmd 0x%.2x timed-out\n", 6435 6438 __func__, tm_function); 6436 - if (ufshcd_clear_tm_cmd(hba, 
free_slot)) 6437 - dev_WARN(hba->dev, "%s: unable clear tm cmd (slot %d) after timeout\n", 6438 - __func__, free_slot); 6439 + if (ufshcd_clear_tm_cmd(hba, task_tag)) 6440 + dev_WARN(hba->dev, "%s: unable to clear tm cmd (slot %d) after timeout\n", 6441 + __func__, task_tag); 6439 6442 err = -ETIMEDOUT; 6440 6443 } else { 6441 6444 err = 0; 6442 - memcpy(treq, hba->utmrdl_base_addr + free_slot, sizeof(*treq)); 6445 + memcpy(treq, hba->utmrdl_base_addr + task_tag, sizeof(*treq)); 6443 6446 6444 6447 ufshcd_add_tm_upiu_trace(hba, task_tag, UFS_TM_COMP); 6445 6448 } 6446 6449 6447 6450 spin_lock_irqsave(hba->host->host_lock, flags); 6448 - __clear_bit(free_slot, &hba->outstanding_tasks); 6451 + __clear_bit(task_tag, &hba->outstanding_tasks); 6449 6452 spin_unlock_irqrestore(hba->host->host_lock, flags); 6450 6453 6454 + ufshcd_release(hba); 6451 6455 blk_put_request(req); 6452 6456 6453 - ufshcd_release(hba); 6454 6457 return err; 6455 6458 } 6456 6459
+1 -1
drivers/soc/fsl/qbman/qman.c
··· 186 186 __be32 tag; 187 187 struct qm_fd fd; 188 188 u8 __reserved3[32]; 189 - } __packed; 189 + } __packed __aligned(8); 190 190 #define QM_EQCR_VERB_VBIT 0x80 191 191 #define QM_EQCR_VERB_CMD_MASK 0x61 /* but only one value; */ 192 192 #define QM_EQCR_VERB_CMD_ENQUEUE 0x01
+1 -2
drivers/target/iscsi/iscsi_target.c
··· 1166 1166 1167 1167 target_get_sess_cmd(&cmd->se_cmd, true); 1168 1168 1169 + cmd->se_cmd.tag = (__force u32)cmd->init_task_tag; 1169 1170 cmd->sense_reason = target_cmd_init_cdb(&cmd->se_cmd, hdr->cdb); 1170 1171 if (cmd->sense_reason) { 1171 1172 if (cmd->sense_reason == TCM_OUT_OF_RESOURCES) { ··· 1181 1180 if (cmd->sense_reason) 1182 1181 goto attach_cmd; 1183 1182 1184 - /* only used for printks or comparing with ->ref_task_tag */ 1185 - cmd->se_cmd.tag = (__force u32)cmd->init_task_tag; 1186 1183 cmd->sense_reason = target_cmd_parse_cdb(&cmd->se_cmd); 1187 1184 if (cmd->sense_reason) 1188 1185 goto attach_cmd;
+2 -2
drivers/thunderbolt/retimer.c
··· 347 347 ret = tb_retimer_nvm_add(rt); 348 348 if (ret) { 349 349 dev_err(&rt->dev, "failed to add NVM devices: %d\n", ret); 350 - device_del(&rt->dev); 350 + device_unregister(&rt->dev); 351 351 return ret; 352 352 } 353 353 ··· 406 406 */ 407 407 int tb_retimer_scan(struct tb_port *port) 408 408 { 409 - u32 status[TB_MAX_RETIMER_INDEX] = {}; 409 + u32 status[TB_MAX_RETIMER_INDEX + 1] = {}; 410 410 int ret, i, last_idx = 0; 411 411 412 412 if (!port->cap_usb4)
+4
drivers/usb/cdns3/cdnsp-gadget.c
··· 1128 1128 return -ESHUTDOWN; 1129 1129 } 1130 1130 1131 + /* Requests have already been dequeued while disabling the endpoint. */ 1132 + if (!(pep->ep_state & EP_ENABLED)) 1133 + return 0; 1134 + 1131 1135 spin_lock_irqsave(&pdev->lock, flags); 1132 1136 ret = cdnsp_ep_dequeue(pep, to_cdnsp_request(request)); 1133 1137 spin_unlock_irqrestore(&pdev->lock, flags);
··· 1128 1128 return -ESHUTDOWN; 1129 1129 } 1130 1130 1131 + /* Requests have already been dequeued while disabling the endpoint. */ 1132 + if (!(pep->ep_state & EP_ENABLED)) 1133 + return 0; 1134 + 1131 1135 spin_lock_irqsave(&pdev->lock, flags); 1132 1136 ret = cdnsp_ep_dequeue(pep, to_cdnsp_request(request)); 1133 1137 spin_unlock_irqrestore(&pdev->lock, flags);
+9 -2
drivers/usb/usbip/stub_dev.c
··· 63 63 64 64 dev_info(dev, "stub up\n"); 65 65 66 + mutex_lock(&sdev->ud.sysfs_lock); 66 67 spin_lock_irq(&sdev->ud.lock); 67 68 68 69 if (sdev->ud.status != SDEV_ST_AVAILABLE) { ··· 88 87 tcp_rx = kthread_create(stub_rx_loop, &sdev->ud, "stub_rx"); 89 88 if (IS_ERR(tcp_rx)) { 90 89 sockfd_put(socket); 91 - return -EINVAL; 90 + goto unlock_mutex; 92 91 } 93 92 tcp_tx = kthread_create(stub_tx_loop, &sdev->ud, "stub_tx"); 94 93 if (IS_ERR(tcp_tx)) { 95 94 kthread_stop(tcp_rx); 96 95 sockfd_put(socket); 97 - return -EINVAL; 96 + goto unlock_mutex; 98 97 } 99 98 100 99 /* get task structs now */ ··· 113 112 wake_up_process(sdev->ud.tcp_rx); 114 113 wake_up_process(sdev->ud.tcp_tx); 115 114 115 + mutex_unlock(&sdev->ud.sysfs_lock); 116 + 116 117 } else { 117 118 dev_info(dev, "stub down\n"); 118 119 ··· 125 122 spin_unlock_irq(&sdev->ud.lock); 126 123 127 124 usbip_event_add(&sdev->ud, SDEV_EVENT_DOWN); 125 + mutex_unlock(&sdev->ud.sysfs_lock); 128 126 } 129 127 130 128 return count; ··· 134 130 sockfd_put(socket); 135 131 err: 136 132 spin_unlock_irq(&sdev->ud.lock); 133 + unlock_mutex: 134 + mutex_unlock(&sdev->ud.sysfs_lock); 137 135 return -EINVAL; 138 136 } 139 137 static DEVICE_ATTR_WO(usbip_sockfd); ··· 276 270 sdev->ud.side = USBIP_STUB; 277 271 sdev->ud.status = SDEV_ST_AVAILABLE; 278 272 spin_lock_init(&sdev->ud.lock); 273 + mutex_init(&sdev->ud.sysfs_lock); 279 274 sdev->ud.tcp_socket = NULL; 280 275 sdev->ud.sockfd = -1; 281 276
+3
drivers/usb/usbip/usbip_common.h
··· 263 263 /* lock for status */ 264 264 spinlock_t lock; 265 265 266 + /* mutex for synchronizing sysfs store paths */ 267 + struct mutex sysfs_lock; 268 + 266 269 int sockfd; 267 270 struct socket *tcp_socket; 268 271
+2
drivers/usb/usbip/usbip_event.c
··· 70 70 while ((ud = get_event()) != NULL) { 71 71 usbip_dbg_eh("pending event %lx\n", ud->event); 72 72 73 + mutex_lock(&ud->sysfs_lock); 73 74 /* 74 75 * NOTE: shutdown must come first. 75 76 * Shutdown the device. ··· 91 90 ud->eh_ops.unusable(ud); 92 91 unset_event(ud, USBIP_EH_UNUSABLE); 93 92 } 93 + mutex_unlock(&ud->sysfs_lock); 94 94 95 95 wake_up(&ud->eh_waitq); 96 96 }
+1
drivers/usb/usbip/vhci_hcd.c
··· 1101 1101 vdev->ud.side = USBIP_VHCI; 1102 1102 vdev->ud.status = VDEV_ST_NULL; 1103 1103 spin_lock_init(&vdev->ud.lock); 1104 + mutex_init(&vdev->ud.sysfs_lock); 1104 1105 1105 1106 INIT_LIST_HEAD(&vdev->priv_rx); 1106 1107 INIT_LIST_HEAD(&vdev->priv_tx);
+25 -5
drivers/usb/usbip/vhci_sysfs.c
··· 185 185 186 186 usbip_dbg_vhci_sysfs("enter\n"); 187 187 188 + mutex_lock(&vdev->ud.sysfs_lock); 189 + 188 190 /* lock */ 189 191 spin_lock_irqsave(&vhci->lock, flags); 190 192 spin_lock(&vdev->ud.lock); ··· 197 195 /* unlock */ 198 196 spin_unlock(&vdev->ud.lock); 199 197 spin_unlock_irqrestore(&vhci->lock, flags); 198 + mutex_unlock(&vdev->ud.sysfs_lock); 200 199 201 200 return -EINVAL; 202 201 } ··· 207 204 spin_unlock_irqrestore(&vhci->lock, flags); 208 205 209 206 usbip_event_add(&vdev->ud, VDEV_EVENT_DOWN); 207 + 208 + mutex_unlock(&vdev->ud.sysfs_lock); 210 209 211 210 return 0; 212 211 } ··· 354 349 else 355 350 vdev = &vhci->vhci_hcd_hs->vdev[rhport]; 356 351 352 + mutex_lock(&vdev->ud.sysfs_lock); 353 + 357 354 /* Extract socket from fd. */ 358 355 socket = sockfd_lookup(sockfd, &err); 359 356 if (!socket) { 360 357 dev_err(dev, "failed to lookup sock"); 361 - return -EINVAL; 358 + err = -EINVAL; 359 + goto unlock_mutex; 362 360 } 363 361 if (socket->type != SOCK_STREAM) { 364 362 dev_err(dev, "Expecting SOCK_STREAM - found %d", 365 363 socket->type); 366 364 sockfd_put(socket); 367 - return -EINVAL; 365 + err = -EINVAL; 366 + goto unlock_mutex; 368 367 } 369 368 370 369 /* create threads before locking */ 371 370 tcp_rx = kthread_create(vhci_rx_loop, &vdev->ud, "vhci_rx"); 372 371 if (IS_ERR(tcp_rx)) { 373 372 sockfd_put(socket); 374 - return -EINVAL; 373 + err = -EINVAL; 374 + goto unlock_mutex; 375 375 } 376 376 tcp_tx = kthread_create(vhci_tx_loop, &vdev->ud, "vhci_tx"); 377 377 if (IS_ERR(tcp_tx)) { 378 378 kthread_stop(tcp_rx); 379 379 sockfd_put(socket); 380 - return -EINVAL; 380 + err = -EINVAL; 381 + goto unlock_mutex; 381 382 } 382 383 383 384 /* get task structs now */ ··· 408 397 * Will be retried from userspace 409 398 * if there's another free port. 
410 399 */ 411 - return -EBUSY; 400 + err = -EBUSY; 401 + goto unlock_mutex; 412 402 } 413 403 414 404 dev_info(dev, "pdev(%u) rhport(%u) sockfd(%d)\n", ··· 435 423 436 424 rh_port_connect(vdev, speed); 437 425 426 + dev_info(dev, "Device attached\n"); 427 + 428 + mutex_unlock(&vdev->ud.sysfs_lock); 429 + 438 430 return count; 431 + 432 + unlock_mutex: 433 + mutex_unlock(&vdev->ud.sysfs_lock); 434 + return err; 439 435 } 440 436 static DEVICE_ATTR_WO(attach); 441 437
+1
drivers/usb/usbip/vudc_dev.c
··· 572 572 init_waitqueue_head(&udc->tx_waitq); 573 573 574 574 spin_lock_init(&ud->lock); 575 + mutex_init(&ud->sysfs_lock); 575 576 ud->status = SDEV_ST_AVAILABLE; 576 577 ud->side = USBIP_VUDC; 577 578
+5
drivers/usb/usbip/vudc_sysfs.c
··· 112 112 dev_err(dev, "no device"); 113 113 return -ENODEV; 114 114 } 115 + mutex_lock(&udc->ud.sysfs_lock); 115 116 spin_lock_irqsave(&udc->lock, flags); 116 117 /* Don't export what we don't have */ 117 118 if (!udc->driver || !udc->pullup) { ··· 188 187 189 188 wake_up_process(udc->ud.tcp_rx); 190 189 wake_up_process(udc->ud.tcp_tx); 190 + 191 + mutex_unlock(&udc->ud.sysfs_lock); 191 192 return count; 192 193 193 194 } else { ··· 210 207 } 211 208 212 209 spin_unlock_irqrestore(&udc->lock, flags); 210 + mutex_unlock(&udc->ud.sysfs_lock); 213 211 214 212 return count; 215 213 ··· 220 216 spin_unlock_irq(&udc->ud.lock); 221 217 unlock: 222 218 spin_unlock_irqrestore(&udc->lock, flags); 219 + mutex_unlock(&udc->ud.sysfs_lock); 223 220 224 221 return ret; 225 222 }
+4
drivers/vdpa/mlx5/core/mlx5_vdpa.h
··· 4 4 #ifndef __MLX5_VDPA_H__ 5 5 #define __MLX5_VDPA_H__ 6 6 7 + #include <linux/etherdevice.h> 8 + #include <linux/if_vlan.h> 7 9 #include <linux/vdpa.h> 8 10 #include <linux/mlx5/driver.h> 11 + 12 + #define MLX5V_ETH_HARD_MTU (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN) 9 13 10 14 struct mlx5_vdpa_direct_mr { 11 15 u64 start;
+7 -2
drivers/vdpa/mlx5/core/mr.c
··· 219 219 mlx5_vdpa_destroy_mkey(mvdev, &mkey->mkey); 220 220 } 221 221 222 + static struct device *get_dma_device(struct mlx5_vdpa_dev *mvdev) 223 + { 224 + return &mvdev->mdev->pdev->dev; 225 + } 226 + 222 227 static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr *mr, 223 228 struct vhost_iotlb *iotlb) 224 229 { ··· 239 234 u64 pa; 240 235 u64 paend; 241 236 struct scatterlist *sg; 242 - struct device *dma = mvdev->mdev->device; 237 + struct device *dma = get_dma_device(mvdev); 243 238 244 239 for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1); 245 240 map; map = vhost_iotlb_itree_next(map, start, mr->end - 1)) { ··· 296 291 297 292 static void unmap_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr *mr) 298 293 { 299 - struct device *dma = mvdev->mdev->device; 294 + struct device *dma = get_dma_device(mvdev); 300 295 301 296 destroy_direct_mr(mvdev, mr); 302 297 dma_unmap_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
+2 -1
drivers/vdpa/mlx5/core/resources.c
··· 246 246 if (err) 247 247 goto err_key; 248 248 249 - kick_addr = pci_resource_start(mdev->pdev, 0) + offset; 249 + kick_addr = mdev->bar_addr + offset; 250 + 250 251 res->kick_addr = ioremap(kick_addr, PAGE_SIZE); 251 252 if (!res->kick_addr) { 252 253 err = -ENOMEM;
+24 -16
drivers/vdpa/mlx5/net/mlx5_vnet.c
··· 820 820 MLX5_SET(virtio_q, vq_ctx, event_qpn_or_msix, mvq->fwqp.mqp.qpn); 821 821 MLX5_SET(virtio_q, vq_ctx, queue_size, mvq->num_ent); 822 822 MLX5_SET(virtio_q, vq_ctx, virtio_version_1_0, 823 - !!(ndev->mvdev.actual_features & VIRTIO_F_VERSION_1)); 823 + !!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_F_VERSION_1))); 824 824 MLX5_SET64(virtio_q, vq_ctx, desc_addr, mvq->desc_addr); 825 825 MLX5_SET64(virtio_q, vq_ctx, used_addr, mvq->device_addr); 826 826 MLX5_SET64(virtio_q, vq_ctx, available_addr, mvq->driver_addr); ··· 1169 1169 return; 1170 1170 } 1171 1171 mvq->avail_idx = attr.available_index; 1172 + mvq->used_idx = attr.used_index; 1172 1173 } 1173 1174 1174 1175 static void suspend_vqs(struct mlx5_vdpa_net *ndev) ··· 1427 1426 return -EINVAL; 1428 1427 } 1429 1428 1429 + mvq->used_idx = state->avail_index; 1430 1430 mvq->avail_idx = state->avail_index; 1431 1431 return 0; 1432 1432 } ··· 1445 1443 * that cares about emulating the index after vq is stopped. 1446 1444 */ 1447 1445 if (!mvq->initialized) { 1448 - state->avail_index = mvq->avail_idx; 1446 + /* Firmware returns a wrong value for the available index. 1447 + * Since both values should be identical, we take the value of 1448 + * used_idx which is reported correctly. 
1449 + */ 1450 + state->avail_index = mvq->used_idx; 1449 1451 return 0; 1450 1452 } 1451 1453 ··· 1458 1452 mlx5_vdpa_warn(mvdev, "failed to query virtqueue\n"); 1459 1453 return err; 1460 1454 } 1461 - state->avail_index = attr.available_index; 1455 + state->avail_index = attr.used_index; 1462 1456 return 0; 1463 1457 } 1464 1458 ··· 1546 1540 } 1547 1541 } 1548 1542 1549 - static void clear_virtqueues(struct mlx5_vdpa_net *ndev) 1550 - { 1551 - int i; 1552 - 1553 - for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) { 1554 - ndev->vqs[i].avail_idx = 0; 1555 - ndev->vqs[i].used_idx = 0; 1556 - } 1557 - } 1558 - 1559 1543 /* TODO: cross-endian support */ 1560 1544 static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev) 1561 1545 { 1562 1546 return virtio_legacy_is_little_endian() || 1563 - (mvdev->actual_features & (1ULL << VIRTIO_F_VERSION_1)); 1547 + (mvdev->actual_features & BIT_ULL(VIRTIO_F_VERSION_1)); 1564 1548 } 1565 1549 1566 1550 static __virtio16 cpu_to_mlx5vdpa16(struct mlx5_vdpa_dev *mvdev, u16 val) ··· 1781 1785 if (!status) { 1782 1786 mlx5_vdpa_info(mvdev, "performing device reset\n"); 1783 1787 teardown_driver(ndev); 1784 - clear_virtqueues(ndev); 1785 1788 mlx5_vdpa_destroy_mr(&ndev->mvdev); 1786 1789 ndev->mvdev.status = 0; 1787 1790 ndev->mvdev.mlx_features = 0; ··· 1902 1907 .free = mlx5_vdpa_free, 1903 1908 }; 1904 1909 1910 + static int query_mtu(struct mlx5_core_dev *mdev, u16 *mtu) 1911 + { 1912 + u16 hw_mtu; 1913 + int err; 1914 + 1915 + err = mlx5_query_nic_vport_mtu(mdev, &hw_mtu); 1916 + if (err) 1917 + return err; 1918 + 1919 + *mtu = hw_mtu - MLX5V_ETH_HARD_MTU; 1920 + return 0; 1921 + } 1922 + 1905 1923 static int alloc_resources(struct mlx5_vdpa_net *ndev) 1906 1924 { 1907 1925 struct mlx5_vdpa_net_resources *res = &ndev->res; ··· 2000 1992 init_mvqs(ndev); 2001 1993 mutex_init(&ndev->reslock); 2002 1994 config = &ndev->config; 2003 - err = mlx5_query_nic_vport_mtu(mdev, &ndev->mtu); 1995 + err = query_mtu(mdev, 
&ndev->mtu); 2004 1996 if (err) 2005 1997 goto err_mtu; 2006 1998
+2 -2
drivers/watchdog/armada_37xx_wdt.c
··· 2 2 /* 3 3 * Watchdog driver for Marvell Armada 37xx SoCs 4 4 * 5 - * Author: Marek Behun <marek.behun@nic.cz> 5 + * Author: Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #include <linux/clk.h> ··· 366 366 367 367 module_platform_driver(armada_37xx_wdt_driver); 368 368 369 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>"); 369 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>"); 370 370 MODULE_DESCRIPTION("Armada 37xx CPU Watchdog"); 371 371 372 372 MODULE_LICENSE("GPL v2");
+6 -6
drivers/xen/events/events_base.c
··· 110 110 unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */ 111 111 unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */ 112 112 u64 eoi_time; /* Time in jiffies when to EOI. */ 113 - spinlock_t lock; 113 + raw_spinlock_t lock; 114 114 115 115 union { 116 116 unsigned short virq; ··· 312 312 info->evtchn = evtchn; 313 313 info->cpu = cpu; 314 314 info->mask_reason = EVT_MASK_REASON_EXPLICIT; 315 - spin_lock_init(&info->lock); 315 + raw_spin_lock_init(&info->lock); 316 316 317 317 ret = set_evtchn_to_irq(evtchn, irq); 318 318 if (ret < 0) ··· 472 472 { 473 473 unsigned long flags; 474 474 475 - spin_lock_irqsave(&info->lock, flags); 475 + raw_spin_lock_irqsave(&info->lock, flags); 476 476 477 477 if (!info->mask_reason) 478 478 mask_evtchn(info->evtchn); 479 479 480 480 info->mask_reason |= reason; 481 481 482 - spin_unlock_irqrestore(&info->lock, flags); 482 + raw_spin_unlock_irqrestore(&info->lock, flags); 483 483 } 484 484 485 485 static void do_unmask(struct irq_info *info, u8 reason) 486 486 { 487 487 unsigned long flags; 488 488 489 - spin_lock_irqsave(&info->lock, flags); 489 + raw_spin_lock_irqsave(&info->lock, flags); 490 490 491 491 info->mask_reason &= ~reason; 492 492 493 493 if (!info->mask_reason) 494 494 unmask_evtchn(info->evtchn); 495 495 496 - spin_unlock_irqrestore(&info->lock, flags); 496 + raw_spin_unlock_irqrestore(&info->lock, flags); 497 497 } 498 498 499 499 #ifdef CONFIG_X86
+42 -11
fs/btrfs/zoned.c
··· 21 21 /* Pseudo write pointer value for conventional zone */ 22 22 #define WP_CONVENTIONAL ((u64)-2) 23 23 24 + /* 25 + * Location of the first zone of superblock logging zone pairs. 26 + * 27 + * - primary superblock: 0B (zone 0) 28 + * - first copy: 512G (zone starting at that offset) 29 + * - second copy: 4T (zone starting at that offset) 30 + */ 31 + #define BTRFS_SB_LOG_PRIMARY_OFFSET (0ULL) 32 + #define BTRFS_SB_LOG_FIRST_OFFSET (512ULL * SZ_1G) 33 + #define BTRFS_SB_LOG_SECOND_OFFSET (4096ULL * SZ_1G) 34 + 35 + #define BTRFS_SB_LOG_FIRST_SHIFT const_ilog2(BTRFS_SB_LOG_FIRST_OFFSET) 36 + #define BTRFS_SB_LOG_SECOND_SHIFT const_ilog2(BTRFS_SB_LOG_SECOND_OFFSET) 37 + 24 38 /* Number of superblock log zones */ 25 39 #define BTRFS_NR_SB_LOG_ZONES 2 40 + 41 + /* 42 + * Maximum supported zone size. Currently, SMR disks have a zone size of 43 + * 256MiB, and we are expecting ZNS drives to be in the 1-4GiB range. We do not 44 + * expect the zone size to become larger than 8GiB in the near future. 45 + */ 46 + #define BTRFS_MAX_ZONE_SIZE SZ_8G 26 47 27 48 static int copy_zone_info_cb(struct blk_zone *zone, unsigned int idx, void *data) 28 49 { ··· 132 111 } 133 112 134 113 /* 135 - * The following zones are reserved as the circular buffer on ZONED btrfs. 
136 - * - The primary superblock: zones 0 and 1 137 - * - The first copy: zones 16 and 17 138 - * - The second copy: zones 1024 or zone at 256GB which is minimum, and 139 - * the following one 114 + * Get the first zone number of the superblock mirror 140 115 */ 141 116 static inline u32 sb_zone_number(int shift, int mirror) 142 117 { 143 - ASSERT(mirror < BTRFS_SUPER_MIRROR_MAX); 118 + u64 zone; 144 119 120 + ASSERT(mirror < BTRFS_SUPER_MIRROR_MAX); 145 121 switch (mirror) { 146 - case 0: return 0; 147 - case 1: return 16; 148 - case 2: return min_t(u64, btrfs_sb_offset(mirror) >> shift, 1024); 122 + case 0: zone = 0; break; 123 + case 1: zone = 1ULL << (BTRFS_SB_LOG_FIRST_SHIFT - shift); break; 124 + case 2: zone = 1ULL << (BTRFS_SB_LOG_SECOND_SHIFT - shift); break; 149 125 } 150 126 151 - return 0; 127 + ASSERT(zone <= U32_MAX); 128 + 129 + return (u32)zone; 152 130 } 153 131 154 132 /* ··· 320 300 zone_sectors = bdev_zone_sectors(bdev); 321 301 } 322 302 323 - nr_sectors = bdev_nr_sectors(bdev); 324 303 /* Check if it's power of 2 (see is_power_of_2) */ 325 304 ASSERT(zone_sectors != 0 && (zone_sectors & (zone_sectors - 1)) == 0); 326 305 zone_info->zone_size = zone_sectors << SECTOR_SHIFT; 306 + 307 + /* We reject devices with a zone size larger than 8GB */ 308 + if (zone_info->zone_size > BTRFS_MAX_ZONE_SIZE) { 309 + btrfs_err_in_rcu(fs_info, 310 + "zoned: %s: zone size %llu larger than supported maximum %llu", 311 + rcu_str_deref(device->name), 312 + zone_info->zone_size, BTRFS_MAX_ZONE_SIZE); 313 + ret = -EINVAL; 314 + goto out; 315 + } 316 + 317 + nr_sectors = bdev_nr_sectors(bdev); 327 318 zone_info->zone_size_shift = ilog2(zone_info->zone_size); 328 319 zone_info->max_zone_append_size = 329 320 (u64)queue_max_zone_append_sectors(queue) << SECTOR_SHIFT;
+1 -2
fs/cifs/Kconfig
··· 18 18 select CRYPTO_AES 19 19 select CRYPTO_LIB_DES 20 20 select KEYS 21 + select DNS_RESOLVER 21 22 help 22 23 This is the client VFS module for the SMB3 family of NAS protocols, 23 24 (including support for the most recent, most secure dialect SMB3.1.1) ··· 113 112 config CIFS_UPCALL 114 113 bool "Kerberos/SPNEGO advanced session setup" 115 114 depends on CIFS 116 - select DNS_RESOLVER 117 115 help 118 116 Enables an upcall mechanism for CIFS which accesses userspace helper 119 117 utilities to provide SPNEGO packaged (RFC 4178) Kerberos tickets ··· 179 179 config CIFS_DFS_UPCALL 180 180 bool "DFS feature support" 181 181 depends on CIFS 182 - select DNS_RESOLVER 183 182 help 184 183 Distributed File System (DFS) support is used to access shares 185 184 transparently in an enterprise name space, even if the share
+3 -2
fs/cifs/Makefile
··· 10 10 cifs_unicode.o nterr.o cifsencrypt.o \ 11 11 readdir.o ioctl.o sess.o export.o smb1ops.o unc.o winucase.o \ 12 12 smb2ops.o smb2maperror.o smb2transport.o \ 13 - smb2misc.o smb2pdu.o smb2inode.o smb2file.o cifsacl.o fs_context.o 13 + smb2misc.o smb2pdu.o smb2inode.o smb2file.o cifsacl.o fs_context.o \ 14 + dns_resolve.o 14 15 15 16 cifs-$(CONFIG_CIFS_XATTR) += xattr.o 16 17 17 18 cifs-$(CONFIG_CIFS_UPCALL) += cifs_spnego.o 18 19 19 - cifs-$(CONFIG_CIFS_DFS_UPCALL) += dns_resolve.o cifs_dfs_ref.o dfs_cache.o 20 + cifs-$(CONFIG_CIFS_DFS_UPCALL) += cifs_dfs_ref.o dfs_cache.o 20 21 21 22 cifs-$(CONFIG_CIFS_SWN_UPCALL) += netlink.o cifs_swn.o 22 23
+2 -1
fs/cifs/cifsfs.c
··· 476 476 seq_puts(m, "none"); 477 477 else { 478 478 convert_delimiter(devname, '/'); 479 - seq_puts(m, devname); 479 + /* escape all spaces in share names */ 480 + seq_escape(m, devname, " \t"); 480 481 kfree(devname); 481 482 } 482 483 return 0;
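The `seq_escape()` call above renders any space or tab in the share name as a three-digit octal escape, the same convention `/proc/mounts` uses, so mount-table parsers that split on whitespace keep working. A minimal userspace model of that escaping (the helper below is a sketch, not the kernel function):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Replace every character listed in "esc" with its "\ooo" octal escape,
 * e.g. ' ' becomes "\040", leaving other characters as-is. */
static void escape_str(const char *src, const char *esc, char *dst, size_t len)
{
    size_t n = 0;

    for (; *src && n + 5 < len; src++) {
        if (strchr(esc, *src))
            n += snprintf(dst + n, len - n, "\\%03o", *src);
        else
            dst[n++] = *src;
    }
    dst[n] = '\0';
}
```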
-2
fs/cifs/cifsglob.h
··· 1283 1283 bool direct_io; 1284 1284 }; 1285 1285 1286 - struct cifs_readdata; 1287 - 1288 1286 /* asynchronous read support */ 1289 1287 struct cifs_readdata { 1290 1288 struct kref refcount;
+16 -1
fs/cifs/connect.c
··· 87 87 * 88 88 * This should be called with server->srv_mutex held. 89 89 */ 90 - #ifdef CONFIG_CIFS_DFS_UPCALL 91 90 static int reconn_set_ipaddr_from_hostname(struct TCP_Server_Info *server) 92 91 { 93 92 int rc; ··· 123 124 return !rc ? -1 : 0; 124 125 } 125 126 127 + #ifdef CONFIG_CIFS_DFS_UPCALL 126 128 /* These functions must be called with server->srv_mutex held */ 127 129 static void reconn_set_next_dfs_target(struct TCP_Server_Info *server, 128 130 struct cifs_sb_info *cifs_sb, ··· 321 321 #endif 322 322 323 323 #ifdef CONFIG_CIFS_DFS_UPCALL 324 + if (cifs_sb && cifs_sb->origin_fullpath) 324 325 /* 325 326 * Set up next DFS target server (if any) for reconnect. If DFS 326 327 * feature is disabled, then we will retry last server we 327 328 * connected to before. 328 329 */ 329 330 reconn_set_next_dfs_target(server, cifs_sb, &tgt_list, &tgt_it); 331 + else { 330 332 #endif 333 + /* 334 + * Resolve the hostname again to make sure that IP address is up-to-date. 335 + */ 336 + rc = reconn_set_ipaddr_from_hostname(server); 337 + if (rc) { 338 + cifs_dbg(FYI, "%s: failed to resolve hostname: %d\n", 339 + __func__, rc); 340 + } 341 + 342 + #ifdef CONFIG_CIFS_DFS_UPCALL 343 + } 344 + #endif 345 + 331 346 332 347 #ifdef CONFIG_CIFS_SWN_UPCALL 333 348 }
+3 -2
fs/direct-io.c
··· 812 812 struct buffer_head *map_bh) 813 813 { 814 814 int ret = 0; 815 + int boundary = sdio->boundary; /* dio_send_cur_page may clear it */ 815 816 816 817 if (dio->op == REQ_OP_WRITE) { 817 818 /* ··· 851 850 sdio->cur_page_fs_offset = sdio->block_in_file << sdio->blkbits; 852 851 out: 853 852 /* 854 - * If sdio->boundary then we want to schedule the IO now to 853 + * If boundary then we want to schedule the IO now to 855 854 * avoid metadata seeks. 856 855 */ 857 - if (sdio->boundary) { 856 + if (boundary) { 858 857 ret = dio_send_cur_page(dio, sdio, map_bh); 859 858 if (sdio->bio) 860 859 dio_bio_submit(dio, sdio);
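The direct-io hunk latches `sdio->boundary` into a local at function entry because `dio_send_cur_page()` can clear it as a side effect, so testing the struct field afterwards would silently skip the "schedule the IO now" path. A toy model of that stale-flag pattern (names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

struct sdio { bool boundary; int flushes; };

static void submit_page(struct sdio *s)
{
    s->boundary = false;   /* side effect that caused the lost flush */
}

static void send_page(struct sdio *s)
{
    bool boundary = s->boundary;   /* latch before the call, as the patch does */

    submit_page(s);
    if (boundary)                  /* testing s->boundary here would miss it */
        s->flushes++;
}
```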
+17 -4
fs/file.c
··· 629 629 } 630 630 EXPORT_SYMBOL(close_fd); /* for ksys_close() */ 631 631 632 + /** 633 + * last_fd - return last valid index into fd table 634 + * @cur_fds: files struct 635 + * 636 + * Context: Either rcu read lock or files_lock must be held. 637 + * 638 + * Returns: Last valid index into fdtable. 639 + */ 640 + static inline unsigned last_fd(struct fdtable *fdt) 641 + { 642 + return fdt->max_fds - 1; 643 + } 644 + 632 645 static inline void __range_cloexec(struct files_struct *cur_fds, 633 646 unsigned int fd, unsigned int max_fd) 634 647 { 635 648 struct fdtable *fdt; 636 649 637 - if (fd > max_fd) 638 - return; 639 - 650 + /* make sure we're using the correct maximum value */ 640 651 spin_lock(&cur_fds->file_lock); 641 652 fdt = files_fdtable(cur_fds); 642 - bitmap_set(fdt->close_on_exec, fd, max_fd - fd + 1); 653 + max_fd = min(last_fd(fdt), max_fd); 654 + if (fd <= max_fd) 655 + bitmap_set(fdt->close_on_exec, fd, max_fd - fd + 1); 643 656 spin_unlock(&cur_fds->file_lock); 644 657 } 645 658
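The `__range_cloexec()` fix above clamps the caller-supplied `max_fd` to the last slot that actually exists in the fd table before touching the `close_on_exec` bitmap, instead of trusting the raw range. The clamp reduces to a `min()` against `max_fds - 1`; a standalone sketch:

```c
#include <assert.h>

/* Clamp a user-supplied upper fd bound to the last valid index of a table
 * holding max_fds entries, mirroring min(last_fd(fdt), max_fd) above. */
static unsigned int clamp_max_fd(unsigned int max_fds, unsigned int max_fd)
{
    unsigned int last = max_fds - 1;      /* last valid index */

    return max_fd < last ? max_fd : last;
}
```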
+3 -4
fs/hostfs/hostfs_kern.c
··· 144 144 char *name, *resolved, *end; 145 145 int n; 146 146 147 - name = __getname(); 147 + name = kmalloc(PATH_MAX, GFP_KERNEL); 148 148 if (!name) { 149 149 n = -ENOMEM; 150 150 goto out_free; ··· 173 173 goto out_free; 174 174 } 175 175 176 - __putname(name); 177 - kfree(link); 176 + kfree(name); 178 177 return resolved; 179 178 180 179 out_free: 181 - __putname(name); 180 + kfree(name); 182 181 return ERR_PTR(n); 183 182 } 184 183
+4
fs/io-wq.c
··· 415 415 { 416 416 struct io_wqe *wqe = worker->wqe; 417 417 struct io_wq *wq = wqe->wq; 418 + bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state); 418 419 419 420 do { 420 421 struct io_wq_work *work; ··· 445 444 unsigned int hash = io_get_work_hash(work); 446 445 447 446 next_hashed = wq_next_work(work); 447 + 448 + if (unlikely(do_kill) && (work->flags & IO_WQ_WORK_UNBOUND)) 449 + work->flags |= IO_WQ_WORK_CANCEL; 448 450 wq->do_work(work); 449 451 io_assign_current_work(worker, NULL); 450 452
+17 -2
fs/io_uring.c
··· 2762 2762 { 2763 2763 struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb); 2764 2764 struct io_async_rw *io = req->async_data; 2765 + bool check_reissue = kiocb->ki_complete == io_complete_rw; 2765 2766 2766 2767 /* add previously done IO, if any */ 2767 2768 if (io && io->bytes_done > 0) { ··· 2778 2777 __io_complete_rw(req, ret, 0, issue_flags); 2779 2778 else 2780 2779 io_rw_done(kiocb, ret); 2780 + 2781 + if (check_reissue && req->flags & REQ_F_REISSUE) { 2782 + req->flags &= ~REQ_F_REISSUE; 2783 + if (!io_rw_reissue(req)) { 2784 + int cflags = 0; 2785 + 2786 + req_set_fail_links(req); 2787 + if (req->flags & REQ_F_BUFFER_SELECTED) 2788 + cflags = io_put_rw_kbuf(req); 2789 + __io_req_complete(req, issue_flags, ret, cflags); 2790 + } 2791 + } 2781 2792 } 2782 2793 2783 2794 static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter) ··· 3307 3294 ret = io_iter_do_read(req, iter); 3308 3295 3309 3296 if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) { 3297 + req->flags &= ~REQ_F_REISSUE; 3310 3298 /* IOPOLL retry should happen for io-wq threads */ 3311 3299 if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL)) 3312 3300 goto done; ··· 3431 3417 else 3432 3418 ret2 = -EINVAL; 3433 3419 3434 - if (req->flags & REQ_F_REISSUE) 3420 + if (req->flags & REQ_F_REISSUE) { 3421 + req->flags &= ~REQ_F_REISSUE; 3435 3422 ret2 = -EAGAIN; 3423 + } 3436 3424 3437 3425 /* 3438 3426 * Raw bdev writes will return -EOPNOTSUPP for IOCB_NOWAIT. Just ··· 6189 6173 ret = -ECANCELED; 6190 6174 6191 6175 if (!ret) { 6192 - req->flags &= ~REQ_F_REISSUE; 6193 6176 do { 6194 6177 ret = io_issue_sqe(req, 0); 6195 6178 /*
+8 -6
fs/namei.c
··· 579 579 p->stack = p->internal; 580 580 p->dfd = dfd; 581 581 p->name = name; 582 + p->path.mnt = NULL; 583 + p->path.dentry = NULL; 582 584 p->total_link_count = old ? old->total_link_count : 0; 583 585 p->saved = old; 584 586 current->nameidata = p; ··· 654 652 rcu_read_unlock(); 655 653 } 656 654 nd->depth = 0; 655 + nd->path.mnt = NULL; 656 + nd->path.dentry = NULL; 657 657 } 658 658 659 659 /* path_put is needed afterwards regardless of success or failure */ ··· 2326 2322 } 2327 2323 2328 2324 nd->root.mnt = NULL; 2329 - nd->path.mnt = NULL; 2330 - nd->path.dentry = NULL; 2331 2325 2332 2326 /* Absolute pathname -- fetch the root (LOOKUP_IN_ROOT uses nd->dfd). */ 2333 2327 if (*s == '/' && !(flags & LOOKUP_IN_ROOT)) { ··· 2421 2419 while (!(err = link_path_walk(s, nd)) && 2422 2420 (s = lookup_last(nd)) != NULL) 2423 2421 ; 2422 + if (!err && unlikely(nd->flags & LOOKUP_MOUNTPOINT)) { 2423 + err = handle_lookup_down(nd); 2424 + nd->flags &= ~LOOKUP_JUMPED; // no d_weak_revalidate(), please... 2425 + } 2424 2426 if (!err) 2425 2427 err = complete_walk(nd); 2426 2428 2427 2429 if (!err && nd->flags & LOOKUP_DIRECTORY) 2428 2430 if (!d_can_lookup(nd->path.dentry)) 2429 2431 err = -ENOTDIR; 2430 - if (!err && unlikely(nd->flags & LOOKUP_MOUNTPOINT)) { 2431 - err = handle_lookup_down(nd); 2432 - nd->flags &= ~LOOKUP_JUMPED; // no d_weak_revalidate(), please... 2433 - } 2434 2432 if (!err) { 2435 2433 *path = nd->path; 2436 2434 nd->path.mnt = NULL;
+1 -10
fs/ocfs2/aops.c
··· 2295 2295 struct ocfs2_alloc_context *meta_ac = NULL; 2296 2296 handle_t *handle = NULL; 2297 2297 loff_t end = offset + bytes; 2298 - int ret = 0, credits = 0, locked = 0; 2298 + int ret = 0, credits = 0; 2299 2299 2300 2300 ocfs2_init_dealloc_ctxt(&dealloc); 2301 2301 ··· 2305 2305 end <= i_size_read(inode) && 2306 2306 !dwc->dw_orphaned) 2307 2307 goto out; 2308 - 2309 - /* ocfs2_file_write_iter will get i_mutex, so we need not lock if we 2310 - * are in that context. */ 2311 - if (dwc->dw_writer_pid != task_pid_nr(current)) { 2312 - inode_lock(inode); 2313 - locked = 1; 2314 - } 2315 2308 2316 2309 ret = ocfs2_inode_lock(inode, &di_bh, 1); 2317 2310 if (ret < 0) { ··· 2386 2393 if (meta_ac) 2387 2394 ocfs2_free_alloc_context(meta_ac); 2388 2395 ocfs2_run_deallocs(osb, &dealloc); 2389 - if (locked) 2390 - inode_unlock(inode); 2391 2396 ocfs2_dio_free_write_ctx(inode, dwc); 2392 2397 2393 2398 return ret;
+6 -2
fs/ocfs2/file.c
··· 1245 1245 goto bail_unlock; 1246 1246 } 1247 1247 } 1248 + down_write(&OCFS2_I(inode)->ip_alloc_sem); 1248 1249 handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS + 1249 1250 2 * ocfs2_quota_trans_credits(sb)); 1250 1251 if (IS_ERR(handle)) { 1251 1252 status = PTR_ERR(handle); 1252 1253 mlog_errno(status); 1253 - goto bail_unlock; 1254 + goto bail_unlock_alloc; 1254 1255 } 1255 1256 status = __dquot_transfer(inode, transfer_to); 1256 1257 if (status < 0) 1257 1258 goto bail_commit; 1258 1259 } else { 1260 + down_write(&OCFS2_I(inode)->ip_alloc_sem); 1259 1261 handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS); 1260 1262 if (IS_ERR(handle)) { 1261 1263 status = PTR_ERR(handle); 1262 1264 mlog_errno(status); 1263 - goto bail_unlock; 1265 + goto bail_unlock_alloc; 1264 1266 } 1265 1267 } 1266 1268 ··· 1275 1273 1276 1274 bail_commit: 1277 1275 ocfs2_commit_trans(osb, handle); 1276 + bail_unlock_alloc: 1277 + up_write(&OCFS2_I(inode)->ip_alloc_sem); 1278 1278 bail_unlock: 1279 1279 if (status && inode_locked) { 1280 1280 ocfs2_inode_unlock_tracker(inode, 1, &oh, had_lock);
+1 -1
include/dt-bindings/bus/moxtet.h
··· 2 2 /* 3 3 * Constant for device tree bindings for Turris Mox module configuration bus 4 4 * 5 - * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz> 5 + * Copyright (C) 2019 Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #ifndef _DT_BINDINGS_BUS_MOXTET_H
+1 -1
include/linux/armada-37xx-rwtm-mailbox.h
··· 2 2 /* 3 3 * rWTM BIU Mailbox driver for Armada 37xx 4 4 * 5 - * Author: Marek Behun <marek.behun@nic.cz> 5 + * Author: Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #ifndef _LINUX_ARMADA_37XX_RWTM_MAILBOX_H_
-2
include/linux/avf/virtchnl.h
··· 476 476 u16 vsi_id; 477 477 u16 key_len; 478 478 u8 key[1]; /* RSS hash key, packed bytes */ 479 - u8 pad[1]; 480 479 }; 481 480 482 481 VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key); ··· 484 485 u16 vsi_id; 485 486 u16 lut_entries; 486 487 u8 lut[1]; /* RSS lookup table */ 487 - u8 pad[1]; 488 488 }; 489 489 490 490 VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
+2
include/linux/bpf.h
··· 40 40 struct bpf_local_storage_map; 41 41 struct kobject; 42 42 struct mem_cgroup; 43 + struct module; 43 44 44 45 extern struct idr btf_idr; 45 46 extern spinlock_t btf_idr_lock; ··· 624 623 /* Executable image of trampoline */ 625 624 struct bpf_tramp_image *cur_image; 626 625 u64 selector; 626 + struct module *mod; 627 627 }; 628 628 629 629 struct bpf_attach_target_info {
+16 -6
include/linux/ethtool.h
··· 87 87 int ethtool_op_get_ts_info(struct net_device *dev, struct ethtool_ts_info *eti); 88 88 89 89 90 - /** 91 - * struct ethtool_link_ext_state_info - link extended state and substate. 92 - */ 90 + /* Link extended state and substate. */ 93 91 struct ethtool_link_ext_state_info { 94 92 enum ethtool_link_ext_state link_ext_state; 95 93 union { ··· 127 129 __ETHTOOL_DECLARE_LINK_MODE_MASK(lp_advertising); 128 130 } link_modes; 129 131 u32 lanes; 130 - enum ethtool_link_mode_bit_indices link_mode; 131 132 }; 132 133 133 134 /** ··· 289 292 * do not attach ext_substate attribute to netlink message). If link_ext_state 290 293 * and link_ext_substate are unknown, return -ENODATA. If not implemented, 291 294 * link_ext_state and link_ext_substate will not be sent to userspace. 295 + * @get_eeprom_len: Read range of EEPROM addresses for validation of 296 + * @get_eeprom and @set_eeprom requests. 297 + * Returns 0 if device does not support EEPROM access. 292 298 * @get_eeprom: Read data from the device EEPROM. 293 299 * Should fill in the magic field. Don't need to check len for zero 294 300 * or wraparound. Fill in the data argument with the eeprom values ··· 384 384 * @get_module_eeprom: Get the eeprom information from the plug-in module 385 385 * @get_eee: Get Energy-Efficient (EEE) supported and status. 386 386 * @set_eee: Set EEE status (enable/disable) as well as LPI timers. 387 + * @get_tunable: Read the value of a driver / device tunable. 388 + * @set_tunable: Set the value of a driver / device tunable. 387 389 * @get_per_queue_coalesce: Get interrupt coalescing parameters per queue. 388 390 * It must check that the given queue number is valid. If neither a RX nor 389 391 * a TX queue has this number, return -EINVAL. If only a RX queue or a TX ··· 549 547 * @get_sset_count: Get number of strings that @get_strings will write. 
550 548 * @get_strings: Return a set of strings that describe the requested objects 551 549 * @get_stats: Return extended statistics about the PHY device. 552 - * @start_cable_test - Start a cable test 553 - * @start_cable_test_tdr - Start a Time Domain Reflectometry cable test 550 + * @start_cable_test: Start a cable test 551 + * @start_cable_test_tdr: Start a Time Domain Reflectometry cable test 554 552 * 555 553 * All operations are optional (i.e. the function pointer may be set to %NULL) 556 554 * and callers must take this into account. Callers must hold the RTNL lock. ··· 573 571 */ 574 572 void ethtool_set_ethtool_phy_ops(const struct ethtool_phy_ops *ops); 575 573 574 + /* 575 + * ethtool_params_from_link_mode - Derive link parameters from a given link mode 576 + * @link_ksettings: Link parameters to be derived from the link mode 577 + * @link_mode: Link mode 578 + */ 579 + void 580 + ethtool_params_from_link_mode(struct ethtool_link_ksettings *link_ksettings, 581 + enum ethtool_link_mode_bit_indices link_mode); 576 582 #endif /* _LINUX_ETHTOOL_H */
+6 -4
include/linux/mlx5/mlx5_ifc.h
··· 437 437 u8 reserved_at_60[0x18]; 438 438 u8 log_max_ft_num[0x8]; 439 439 440 - u8 reserved_at_80[0x18]; 440 + u8 reserved_at_80[0x10]; 441 + u8 log_max_flow_counter[0x8]; 441 442 u8 log_max_destination[0x8]; 442 443 443 - u8 log_max_flow_counter[0x8]; 444 - u8 reserved_at_a8[0x10]; 444 + u8 reserved_at_a0[0x18]; 445 445 u8 log_max_flow[0x8]; 446 446 447 447 u8 reserved_at_c0[0x40]; ··· 8835 8835 8836 8836 u8 fec_override_admin_100g_2x[0x10]; 8837 8837 u8 fec_override_admin_50g_1x[0x10]; 8838 + 8839 + u8 reserved_at_140[0x140]; 8838 8840 }; 8839 8841 8840 8842 struct mlx5_ifc_ppcnt_reg_bits { ··· 10200 10198 10201 10199 struct mlx5_ifc_bufferx_reg_bits buffer[10]; 10202 10200 10203 - u8 reserved_at_2e0[0x40]; 10201 + u8 reserved_at_2e0[0x80]; 10204 10202 }; 10205 10203 10206 10204 struct mlx5_ifc_qtct_reg_bits {
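The `mlx5_ifc` structs describe a hardware register layout as bit arrays, so the field widths between two `reserved_at_N` markers must sum exactly to the gap between the offsets encoded in their names. A quick arithmetic check that the reordered flow-table hunk above still spans `0x80`..`0xc0`:

```c
#include <assert.h>

/* Field widths (in bits) between reserved_at_80 and reserved_at_c0 in the
 * rewritten hunk; the layout is only consistent if they sum to 0x40. */
static int flow_table_span(void)
{
    return 0x10    /* reserved_at_80 */
         + 0x8     /* log_max_flow_counter */
         + 0x8     /* log_max_destination */
         + 0x18    /* reserved_at_a0 */
         + 0x8;    /* log_max_flow */
}
```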
+1 -1
include/linux/moxtet.h
··· 2 2 /* 3 3 * Turris Mox module configuration bus driver 4 4 * 5 - * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz> 5 + * Copyright (C) 2019 Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #ifndef __LINUX_MOXTET_H
+6 -1
include/linux/skmsg.h
··· 349 349 static inline void sk_psock_restore_proto(struct sock *sk, 350 350 struct sk_psock *psock) 351 351 { 352 - sk->sk_prot->unhash = psock->saved_unhash; 353 352 if (inet_csk_has_ulp(sk)) { 353 + /* TLS does not have an unhash proto in SW cases, but we need 354 + * to ensure we stop using the sock_map unhash routine because 355 + * the associated psock is being removed. So use the original 356 + * unhash handler. 357 + */ 358 + WRITE_ONCE(sk->sk_prot->unhash, psock->saved_unhash); 354 359 tcp_update_ulp(sk, psock->sk_proto, psock->saved_write_space); 355 360 } else { 356 361 sk->sk_write_space = psock->saved_write_space;
+11 -5
include/linux/virtio_net.h
··· 62 62 return -EINVAL; 63 63 } 64 64 65 + skb_reset_mac_header(skb); 66 + 65 67 if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) { 66 - u16 start = __virtio16_to_cpu(little_endian, hdr->csum_start); 67 - u16 off = __virtio16_to_cpu(little_endian, hdr->csum_offset); 68 + u32 start = __virtio16_to_cpu(little_endian, hdr->csum_start); 69 + u32 off = __virtio16_to_cpu(little_endian, hdr->csum_offset); 70 + u32 needed = start + max_t(u32, thlen, off + sizeof(__sum16)); 71 + 72 + if (!pskb_may_pull(skb, needed)) 73 + return -EINVAL; 68 74 69 75 if (!skb_partial_csum_set(skb, start, off)) 70 76 return -EINVAL; 71 77 72 78 p_off = skb_transport_offset(skb) + thlen; 73 - if (p_off > skb_headlen(skb)) 79 + if (!pskb_may_pull(skb, p_off)) 74 80 return -EINVAL; 75 81 } else { 76 82 /* gso packets without NEEDS_CSUM do not set transport_offset. ··· 106 100 } 107 101 108 102 p_off = keys.control.thoff + thlen; 109 - if (p_off > skb_headlen(skb) || 103 + if (!pskb_may_pull(skb, p_off) || 110 104 keys.basic.ip_proto != ip_proto) 111 105 return -EINVAL; 112 106 113 107 skb_set_transport_header(skb, keys.control.thoff); 114 108 } else if (gso_type) { 115 109 p_off = thlen; 116 - if (p_off > skb_headlen(skb)) 110 + if (!pskb_may_pull(skb, p_off)) 117 111 return -EINVAL; 118 112 } 119 113 }
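The virtio_net hunk widens `csum_start`/`csum_offset` from `u16` to `u32` before the bounds arithmetic: summed at 16 bits, a hostile header could wrap `start + off + sizeof(__sum16)` to a tiny value and slip past the length check that `pskb_may_pull()` now enforces. A demonstration of the wrap with illustrative values:

```c
#include <assert.h>
#include <stdint.h>

/* "needed" bytes computed in 16-bit arithmetic can wrap and look small... */
static uint16_t needed_u16(uint16_t start, uint16_t off)
{
    return (uint16_t)(start + off + 2);
}

/* ...whereas the widened 32-bit form keeps the true requirement. */
static uint32_t needed_u32(uint32_t start, uint32_t off)
{
    return start + off + 2;
}
```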
+4 -8
include/net/act_api.h
··· 170 170 void tcf_idr_cleanup(struct tc_action_net *tn, u32 index); 171 171 int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index, 172 172 struct tc_action **a, int bind); 173 - int __tcf_idr_release(struct tc_action *a, bool bind, bool strict); 174 - 175 - static inline int tcf_idr_release(struct tc_action *a, bool bind) 176 - { 177 - return __tcf_idr_release(a, bind, false); 178 - } 173 + int tcf_idr_release(struct tc_action *a, bool bind); 179 174 180 175 int tcf_register_action(struct tc_action_ops *a, struct pernet_operations *ops); 181 176 int tcf_unregister_action(struct tc_action_ops *a, ··· 180 185 int nr_actions, struct tcf_result *res); 181 186 int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, 182 187 struct nlattr *est, char *name, int ovr, int bind, 183 - struct tc_action *actions[], size_t *attr_size, 188 + struct tc_action *actions[], int init_res[], size_t *attr_size, 184 189 bool rtnl_held, struct netlink_ext_ack *extack); 185 190 struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla, 186 191 bool rtnl_held, ··· 188 193 struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, 189 194 struct nlattr *nla, struct nlattr *est, 190 195 char *name, int ovr, int bind, 191 - struct tc_action_ops *ops, bool rtnl_held, 196 + struct tc_action_ops *a_o, int *init_res, 197 + bool rtnl_held, 192 198 struct netlink_ext_ack *extack); 193 199 int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[], int bind, 194 200 int ref, bool terse);
+3 -1
include/net/netns/xfrm.h
··· 72 72 #if IS_ENABLED(CONFIG_IPV6) 73 73 struct dst_ops xfrm6_dst_ops; 74 74 #endif 75 - spinlock_t xfrm_state_lock; 75 + spinlock_t xfrm_state_lock; 76 + seqcount_spinlock_t xfrm_state_hash_generation; 77 + 76 78 spinlock_t xfrm_policy_lock; 77 79 struct mutex xfrm_cfg_mutex; 78 80 };
+2 -2
include/net/red.h
··· 171 171 static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog, 172 172 u8 Scell_log, u8 *stab) 173 173 { 174 - if (fls(qth_min) + Wlog > 32) 174 + if (fls(qth_min) + Wlog >= 32) 175 175 return false; 176 - if (fls(qth_max) + Wlog > 32) 176 + if (fls(qth_max) + Wlog >= 32) 177 177 return false; 178 178 if (Scell_log >= 32) 179 179 return false;
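The `red_check_params()` change tightens the rejection from `> 32` to `>= 32`: `qth << Wlog` fits in `fls(qth) + Wlog` bits, so allowing the sum to equal 32 lets the shifted threshold occupy bit 31, which is unsafe once the value participates in signed 32-bit arithmetic. A sketch of the bound (with a portable stand-in for the kernel's `fls()`):

```c
#include <assert.h>
#include <stdint.h>

/* fls(x): 1-based index of the highest set bit, 0 for x == 0. */
static int fls_u32(uint32_t x)
{
    int n = 0;

    while (x) {
        n++;
        x >>= 1;
    }
    return n;
}

/* x << wlog fits in fls(x) + wlog bits; requiring that sum to stay below
 * 32 (the ">= 32 -> reject" form of the fix) keeps bit 31 clear. */
static int shift_is_safe(uint32_t x, int wlog)
{
    return fls_u32(x) + wlog < 32;
}
```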
+2 -2
include/net/rtnetlink.h
··· 147 147 int (*validate_link_af)(const struct net_device *dev, 148 148 const struct nlattr *attr); 149 149 int (*set_link_af)(struct net_device *dev, 150 - const struct nlattr *attr); 151 - 150 + const struct nlattr *attr, 151 + struct netlink_ext_ack *extack); 152 152 int (*fill_stats_af)(struct sk_buff *skb, 153 153 const struct net_device *dev); 154 154 size_t (*get_stats_af_size)(const struct net_device *dev);
+14 -1
include/net/sock.h
··· 934 934 WRITE_ONCE(sk->sk_ack_backlog, sk->sk_ack_backlog + 1); 935 935 } 936 936 937 + /* Note: If you think the test should be: 938 + * return READ_ONCE(sk->sk_ack_backlog) >= READ_ONCE(sk->sk_max_ack_backlog); 939 + * Then please take a look at commit 64a146513f8f ("[NET]: Revert incorrect accept queue backlog changes.") 940 + */ 937 941 static inline bool sk_acceptq_is_full(const struct sock *sk) 938 942 { 939 - return READ_ONCE(sk->sk_ack_backlog) >= READ_ONCE(sk->sk_max_ack_backlog); 943 + return READ_ONCE(sk->sk_ack_backlog) > READ_ONCE(sk->sk_max_ack_backlog); 940 944 } 941 945 942 946 /* ··· 2223 2219 skb->destructor = sock_rfree; 2224 2220 atomic_add(skb->truesize, &sk->sk_rmem_alloc); 2225 2221 sk_mem_charge(sk, skb->truesize); 2222 + } 2223 + 2224 + static inline void skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk) 2225 + { 2226 + if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) { 2227 + skb_orphan(skb); 2228 + skb->destructor = sock_efree; 2229 + skb->sk = sk; 2230 + } 2226 2231 } 2227 2232 2228 2233 void sk_reset_timer(struct sock *sk, struct timer_list *timer,
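The `sk_acceptq_is_full()` hunk restores the historical strict comparison, under which the accept queue only reports full once it already *exceeds* the configured backlog, so `listen(fd, n)` effectively admits `n + 1` pending connections (the behavior defended in commit 64a146513f8f, cited in the new comment). A minimal model of the comparison:

```c
#include <assert.h>

/* Restored test: full only when the queue already exceeds the backlog. */
static int acceptq_is_full(unsigned int ack_backlog, unsigned int max_ack_backlog)
{
    return ack_backlog > max_ack_backlog;
}
```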
+2 -2
include/net/xfrm.h
··· 1097 1097 return __xfrm_policy_check(sk, ndir, skb, family); 1098 1098 1099 1099 return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) || 1100 - (skb_dst(skb)->flags & DST_NOPOLICY) || 1100 + (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || 1101 1101 __xfrm_policy_check(sk, ndir, skb, family); 1102 1102 } 1103 1103 ··· 1557 1557 int xfrm_trans_queue(struct sk_buff *skb, 1558 1558 int (*finish)(struct net *, struct sock *, 1559 1559 struct sk_buff *)); 1560 - int xfrm_output_resume(struct sk_buff *skb, int err); 1560 + int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err); 1561 1561 int xfrm_output(struct sock *sk, struct sk_buff *skb); 1562 1562 1563 1563 #if IS_ENABLED(CONFIG_NET_PKTGEN)
+1 -1
include/uapi/linux/can.h
··· 113 113 */ 114 114 __u8 len; 115 115 __u8 can_dlc; /* deprecated */ 116 - }; 116 + } __attribute__((packed)); /* disable padding added in some ABIs */ 117 117 __u8 __pad; /* padding */ 118 118 __u8 __res0; /* reserved / padding */ 119 119 __u8 len8_dlc; /* optional DLC for 8 byte payload length (9 .. 15) */
+33 -21
include/uapi/linux/ethtool.h
··· 26 26 * have the same layout for 32-bit and 64-bit userland. 27 27 */ 28 28 29 + /* Note on reserved space. 30 + * Reserved fields must not be accessed directly by user space because 31 + * they may be replaced by a different field in the future. They must 32 + * be initialized to zero before making the request, e.g. via memset 33 + * of the entire structure or implicitly by not being set in a structure 34 + * initializer. 35 + */ 36 + 29 37 /** 30 38 * struct ethtool_cmd - DEPRECATED, link control and status 31 39 * This structure is DEPRECATED, please use struct ethtool_link_settings. ··· 75 67 * and other link features that the link partner advertised 76 68 * through autonegotiation; 0 if unknown or not applicable. 77 69 * Read-only. 70 + * @reserved: Reserved for future use; see the note on reserved space. 78 71 * 79 72 * The link speed in Mbps is split between @speed and @speed_hi. Use 80 73 * the ethtool_cmd_speed() and ethtool_cmd_speed_set() functions to ··· 164 155 * @bus_info: Device bus address. This should match the dev_name() 165 156 * string for the underlying bus device, if there is one. May be 166 157 * an empty string. 158 + * @reserved2: Reserved for future use; see the note on reserved space. 167 159 * @n_priv_flags: Number of flags valid for %ETHTOOL_GPFLAGS and 168 160 * %ETHTOOL_SPFLAGS commands; also the number of strings in the 169 161 * %ETH_SS_PRIV_FLAGS set ··· 366 356 * @tx_lpi_timer: Time in microseconds the interface delays prior to asserting 367 357 * its tx lpi (after reaching 'idle' state). Effective only when eee 368 358 * was negotiated and tx_lpi_enabled was set. 359 + * @reserved: Reserved for future use; see the note on reserved space. 
369 360 */ 370 361 struct ethtool_eee { 371 362 __u32 cmd; ··· 385 374 * @cmd: %ETHTOOL_GMODULEINFO 386 375 * @type: Standard the module information conforms to %ETH_MODULE_SFF_xxxx 387 376 * @eeprom_len: Length of the eeprom 377 + * @reserved: Reserved for future use; see the note on reserved space. 388 378 * 389 379 * This structure is used to return the information to 390 380 * properly size memory for a subsequent call to %ETHTOOL_GMODULEEEPROM. ··· 591 579 __u32 tx_pause; 592 580 }; 593 581 594 - /** 595 - * enum ethtool_link_ext_state - link extended state 596 - */ 582 + /* Link extended state */ 597 583 enum ethtool_link_ext_state { 598 584 ETHTOOL_LINK_EXT_STATE_AUTONEG, 599 585 ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE, ··· 605 595 ETHTOOL_LINK_EXT_STATE_OVERHEAT, 606 596 }; 607 597 608 - /** 609 - * enum ethtool_link_ext_substate_autoneg - more information in addition to 610 - * ETHTOOL_LINK_EXT_STATE_AUTONEG. 611 - */ 598 + /* More information in addition to ETHTOOL_LINK_EXT_STATE_AUTONEG. */ 612 599 enum ethtool_link_ext_substate_autoneg { 613 600 ETHTOOL_LINK_EXT_SUBSTATE_AN_NO_PARTNER_DETECTED = 1, 614 601 ETHTOOL_LINK_EXT_SUBSTATE_AN_ACK_NOT_RECEIVED, ··· 615 608 ETHTOOL_LINK_EXT_SUBSTATE_AN_NO_HCD, 616 609 }; 617 610 618 - /** 619 - * enum ethtool_link_ext_substate_link_training - more information in addition to 620 - * ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE. 611 + /* More information in addition to ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE. 621 612 */ 622 613 enum ethtool_link_ext_substate_link_training { 623 614 ETHTOOL_LINK_EXT_SUBSTATE_LT_KR_FRAME_LOCK_NOT_ACQUIRED = 1, ··· 624 619 ETHTOOL_LINK_EXT_SUBSTATE_LT_REMOTE_FAULT, 625 620 }; 626 621 627 - /** 628 - * enum ethtool_link_ext_substate_logical_mismatch - more information in addition 629 - * to ETHTOOL_LINK_EXT_STATE_LINK_LOGICAL_MISMATCH. 622 + /* More information in addition to ETHTOOL_LINK_EXT_STATE_LINK_LOGICAL_MISMATCH. 
630 623 */ 631 624 enum ethtool_link_ext_substate_link_logical_mismatch { 632 625 ETHTOOL_LINK_EXT_SUBSTATE_LLM_PCS_DID_NOT_ACQUIRE_BLOCK_LOCK = 1, ··· 634 631 ETHTOOL_LINK_EXT_SUBSTATE_LLM_RS_FEC_IS_NOT_LOCKED, 635 632 }; 636 633 637 - /** 638 - * enum ethtool_link_ext_substate_bad_signal_integrity - more information in 639 - * addition to ETHTOOL_LINK_EXT_STATE_BAD_SIGNAL_INTEGRITY. 634 + /* More information in addition to ETHTOOL_LINK_EXT_STATE_BAD_SIGNAL_INTEGRITY. 640 635 */ 641 636 enum ethtool_link_ext_substate_bad_signal_integrity { 642 637 ETHTOOL_LINK_EXT_SUBSTATE_BSI_LARGE_NUMBER_OF_PHYSICAL_ERRORS = 1, 643 638 ETHTOOL_LINK_EXT_SUBSTATE_BSI_UNSUPPORTED_RATE, 644 639 }; 645 640 646 - /** 647 - * enum ethtool_link_ext_substate_cable_issue - more information in 648 - * addition to ETHTOOL_LINK_EXT_STATE_CABLE_ISSUE. 649 - */ 641 + /* More information in addition to ETHTOOL_LINK_EXT_STATE_CABLE_ISSUE. */ 650 642 enum ethtool_link_ext_substate_cable_issue { 651 643 ETHTOOL_LINK_EXT_SUBSTATE_CI_UNSUPPORTED_CABLE = 1, 652 644 ETHTOOL_LINK_EXT_SUBSTATE_CI_CABLE_TEST_FAILURE, ··· 659 661 * now deprecated 660 662 * @ETH_SS_FEATURES: Device feature names 661 663 * @ETH_SS_RSS_HASH_FUNCS: RSS hush function names 664 + * @ETH_SS_TUNABLES: tunable names 662 665 * @ETH_SS_PHY_STATS: Statistic names, for use with %ETHTOOL_GPHYSTATS 663 666 * @ETH_SS_PHY_TUNABLES: PHY tunable names 664 667 * @ETH_SS_LINK_MODES: link mode names ··· 669 670 * @ETH_SS_TS_TX_TYPES: timestamping Tx types 670 671 * @ETH_SS_TS_RX_FILTERS: timestamping Rx filters 671 672 * @ETH_SS_UDP_TUNNEL_TYPES: UDP tunnel types 673 + * 674 + * @ETH_SS_COUNT: number of defined string sets 672 675 */ 673 676 enum ethtool_stringset { 674 677 ETH_SS_TEST = 0, ··· 716 715 /** 717 716 * struct ethtool_sset_info - string set information 718 717 * @cmd: Command number = %ETHTOOL_GSSET_INFO 718 + * @reserved: Reserved for future use; see the note on reserved space. 
719 719 * @sset_mask: On entry, a bitmask of string sets to query, with bits 720 720 * numbered according to &enum ethtool_stringset. On return, a 721 721 * bitmask of those string sets queried that are supported. ··· 761 759 * @flags: A bitmask of flags from &enum ethtool_test_flags. Some 762 760 * flags may be set by the user on entry; others may be set by 763 761 * the driver on return. 762 + * @reserved: Reserved for future use; see the note on reserved space. 764 763 * @len: On return, the number of test results 765 764 * @data: Array of test results 766 765 * ··· 962 959 * @vlan_etype: VLAN EtherType 963 960 * @vlan_tci: VLAN tag control information 964 961 * @data: user defined data 962 + * @padding: Reserved for future use; see the note on reserved space. 965 963 * 966 964 * Note, @vlan_etype, @vlan_tci, and @data are only valid if %FLOW_EXT 967 965 * is set in &struct ethtool_rx_flow_spec @flow_type. ··· 1138 1134 * hardware hash key. 1139 1135 * @hfunc: Defines the current RSS hash function used by HW (or to be set to). 1140 1136 * Valid values are one of the %ETH_RSS_HASH_*. 1141 - * @rsvd: Reserved for future extensions. 1137 + * @rsvd8: Reserved for future use; see the note on reserved space. 1138 + * @rsvd32: Reserved for future use; see the note on reserved space. 1142 1139 * @rss_config: RX ring/queue index for each hash value i.e., indirection table 1143 1140 * of @indir_size __u32 elements, followed by hash key of @key_size 1144 1141 * bytes. ··· 1307 1302 * @so_timestamping: bit mask of the sum of the supported SO_TIMESTAMPING flags 1308 1303 * @phc_index: device index of the associated PHC, or -1 if there is none 1309 1304 * @tx_types: bit mask of the supported hwtstamp_tx_types enumeration values 1305 + * @tx_reserved: Reserved for future use; see the note on reserved space. 1310 1306 * @rx_filters: bit mask of the supported hwtstamp_rx_filters enumeration values 1307 + * @rx_reserved: Reserved for future use; see the note on reserved space. 
1311 1308 * 1312 1309 * The bits in the 'tx_types' and 'rx_filters' fields correspond to 1313 1310 * the 'hwtstamp_tx_types' and 'hwtstamp_rx_filters' enumeration values, ··· 1965 1958 * autonegotiation; 0 if unknown or not applicable. Read-only. 1966 1959 * @transceiver: Used to distinguish different possible PHY types, 1967 1960 * reported consistently by PHYLIB. Read-only. 1961 + * @master_slave_cfg: Master/slave port mode. 1962 + * @master_slave_state: Master/slave port state. 1963 + * @reserved: Reserved for future use; see the note on reserved space. 1964 + * @reserved1: Reserved for future use; see the note on reserved space. 1965 + * @link_mode_masks: Variable length bitmaps. 1968 1966 * 1969 1967 * If autonegotiation is disabled, the speed and @duplex represent the 1970 1968 * fixed link mode and are writable if the driver supports multiple
+69 -13
include/uapi/linux/rfkill.h
··· 86 86 * @op: operation code
87 87 * @hard: hard state (0/1)
88 88 * @soft: soft state (0/1)
89 - * @hard_block_reasons: valid if hard is set. One or several reasons from
90 - * &enum rfkill_hard_block_reasons.
91 89 *
92 90 * Structure used for userspace communication on /dev/rfkill,
93 91 * used for events from the kernel and control to the kernel.
··· 96 98 __u8 op;
97 99 __u8 soft;
98 100 __u8 hard;
101 + } __attribute__((packed));
102 +
103 + /**
104 + * struct rfkill_event_ext - events for userspace on /dev/rfkill
105 + * @idx: index of dev rfkill
106 + * @type: type of the rfkill struct
107 + * @op: operation code
108 + * @hard: hard state (0/1)
109 + * @soft: soft state (0/1)
110 + * @hard_block_reasons: valid if hard is set. One or several reasons from
111 + * &enum rfkill_hard_block_reasons.
112 + *
113 + * Structure used for userspace communication on /dev/rfkill,
114 + * used for events from the kernel and control to the kernel.
115 + *
116 + * See the extensibility docs below.
117 + */
118 + struct rfkill_event_ext {
119 + __u32 idx;
120 + __u8 type;
121 + __u8 op;
122 + __u8 soft;
123 + __u8 hard;
124 +
125 + /*
126 + * older kernels will accept/send only up to this point,
127 + * and if extended further up to any chunk marked below
128 + */
129 +
99 130 __u8 hard_block_reasons;
100 131 } __attribute__((packed));
101 132
102 - /*
103 - * We are planning to be backward and forward compatible with changes
104 - * to the event struct, by adding new, optional, members at the end.
105 - * When reading an event (whether the kernel from userspace or vice
106 - * versa) we need to accept anything that's at least as large as the
107 - * version 1 event size, but might be able to accept other sizes in
108 - * the future.
133 + /**
134 + * DOC: Extensibility
109 135 *
110 - * One exception is the kernel -- we already have two event sizes in
111 - * that we've made the 'hard' member optional since our only option
112 - * is to ignore it anyway.
136 + * Originally, we had planned to allow backward and forward compatible
137 + * changes by just adding fields at the end of the structure that are
138 + * then not reported on older kernels on read(), and not written to by
139 + * older kernels on write(), with the kernel reporting the size it did
140 + * accept as the result.
141 + *
142 + * This would have allowed userspace to detect on read() and write()
143 + * which kernel structure version it was dealing with, and if was just
144 + * recompiled it would have gotten the new fields, but obviously not
145 + * accessed them, but things should've continued to work.
146 + *
147 + * Unfortunately, while actually exercising this mechanism to add the
148 + * hard block reasons field, we found that userspace (notably systemd)
149 + * did all kinds of fun things not in line with this scheme:
150 + *
151 + * 1. treat the (expected) short writes as an error;
152 + * 2. ask to read sizeof(struct rfkill_event) but then compare the
153 + * actual return value to RFKILL_EVENT_SIZE_V1 and treat any
154 + * mismatch as an error.
155 + *
156 + * As a consequence, just recompiling with a new struct version caused
157 + * things to no longer work correctly on old and new kernels.
158 + *
159 + * Hence, we've rolled back &struct rfkill_event to the original version
160 + * and added &struct rfkill_event_ext. This effectively reverts to the
161 + * old behaviour for all userspace, unless it explicitly opts in to the
162 + * rules outlined here by using the new &struct rfkill_event_ext.
163 + *
164 + * Userspace using &struct rfkill_event_ext must adhere to the following
165 + * rules:
166 + *
167 + * 1. accept short writes, optionally using them to detect that it's
168 + * running on an older kernel;
169 + * 2. accept short reads, knowing that this means it's running on an
170 + * older kernel;
171 + * 3. treat reads that are as long as requested as acceptable, not
172 + * checking against RFKILL_EVENT_SIZE_V1 or such.
113 173 */
114 - #define RFKILL_EVENT_SIZE_V1 8
174 + #define RFKILL_EVENT_SIZE_V1 sizeof(struct rfkill_event)
115 175
116 176 /* ioctl for turning off rfkill-input (if present) */
117 177 #define RFKILL_IOC_MAGIC 'R'
+1 -1
kernel/bpf/disasm.c
··· 84 84 [BPF_ADD >> 4] = "add",
85 85 [BPF_AND >> 4] = "and",
86 86 [BPF_OR >> 4] = "or",
87 - [BPF_XOR >> 4] = "or",
87 + [BPF_XOR >> 4] = "xor",
88 88 };
89 89
90 90 static const char *const bpf_ldst_string[] = {
+2 -2
kernel/bpf/inode.c
··· 543 543 return PTR_ERR(raw);
544 544
545 545 if (type == BPF_TYPE_PROG)
546 - ret = bpf_prog_new_fd(raw);
546 + ret = (f_flags != O_RDWR) ? -EINVAL : bpf_prog_new_fd(raw);
547 547 else if (type == BPF_TYPE_MAP)
548 548 ret = bpf_map_new_fd(raw, f_flags);
549 549 else if (type == BPF_TYPE_LINK)
550 - ret = bpf_link_new_fd(raw);
550 + ret = (f_flags != O_RDWR) ? -EINVAL : bpf_link_new_fd(raw);
551 551 else
552 552 return -ENOENT;
553 553
+10 -2
kernel/bpf/stackmap.c
··· 517 517 BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf,
518 518 u32, size, u64, flags)
519 519 {
520 - struct pt_regs *regs = task_pt_regs(task);
520 + struct pt_regs *regs;
521 + long res;
521 522
522 - return __bpf_get_stack(regs, task, NULL, buf, size, flags);
523 + if (!try_get_task_stack(task))
524 + return -EFAULT;
525 +
526 + regs = task_pt_regs(task);
527 + res = __bpf_get_stack(regs, task, NULL, buf, size, flags);
528 + put_task_stack(task);
529 +
530 + return res;
523 531 }
524 532
525 533 BTF_ID_LIST_SINGLE(bpf_get_task_stack_btf_ids, struct, task_struct)
+30
kernel/bpf/trampoline.c
··· 9 9 #include <linux/btf.h>
10 10 #include <linux/rcupdate_trace.h>
11 11 #include <linux/rcupdate_wait.h>
12 + #include <linux/module.h>
12 13
13 14 /* dummy _ops. The verifier will operate on target program's ops. */
14 15 const struct bpf_verifier_ops bpf_extension_verifier_ops = {
··· 88 87 return tr;
89 88 }
90 89
90 + static int bpf_trampoline_module_get(struct bpf_trampoline *tr)
91 + {
92 + struct module *mod;
93 + int err = 0;
94 +
95 + preempt_disable();
96 + mod = __module_text_address((unsigned long) tr->func.addr);
97 + if (mod && !try_module_get(mod))
98 + err = -ENOENT;
99 + preempt_enable();
100 + tr->mod = mod;
101 + return err;
102 + }
103 +
104 + static void bpf_trampoline_module_put(struct bpf_trampoline *tr)
105 + {
106 + module_put(tr->mod);
107 + tr->mod = NULL;
108 + }
109 +
91 110 static int is_ftrace_location(void *ip)
92 111 {
93 112 long addr;
··· 129 108 ret = unregister_ftrace_direct((long)ip, (long)old_addr);
130 109 else
131 110 ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, old_addr, NULL);
111 +
112 + if (!ret)
113 + bpf_trampoline_module_put(tr);
132 114 return ret;
133 115 }
134 116
··· 158 134 return ret;
159 135 tr->func.ftrace_managed = ret;
160 136
137 + if (bpf_trampoline_module_get(tr))
138 + return -ENOENT;
139 +
161 140 if (tr->func.ftrace_managed)
162 141 ret = register_ftrace_direct((long)ip, (long)new_addr);
163 142 else
164 143 ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_addr);
144 +
145 + if (ret)
146 + bpf_trampoline_module_put(tr);
165 147 return ret;
166 148 }
167 149
+5
kernel/bpf/verifier.c
··· 12158 12158 u32 btf_id, member_idx;
12159 12159 const char *mname;
12160 12160
12161 + if (!prog->gpl_compatible) {
12162 + verbose(env, "struct ops programs must have a GPL compatible license\n");
12163 + return -EINVAL;
12164 + }
12165 +
12161 12166 btf_id = prog->aux->attach_btf_id;
12162 12167 st_ops = bpf_struct_ops_find(btf_id);
12163 12168 if (!st_ops) {
+19 -10
kernel/gcov/clang.c
··· 70 70
71 71 u32 ident;
72 72 u32 checksum;
73 + #if CONFIG_CLANG_VERSION < 110000
73 74 u8 use_extra_checksum;
75 + #endif
74 76 u32 cfg_checksum;
75 77
76 78 u32 num_counters;
··· 147 145
148 146 list_add_tail(&info->head, &current_info->functions);
149 147 }
150 - EXPORT_SYMBOL(llvm_gcda_emit_function);
151 148 #else
152 - void llvm_gcda_emit_function(u32 ident, u32 func_checksum,
153 - u8 use_extra_checksum, u32 cfg_checksum)
149 + void llvm_gcda_emit_function(u32 ident, u32 func_checksum, u32 cfg_checksum)
154 150 {
155 151 struct gcov_fn_info *info = kzalloc(sizeof(*info), GFP_KERNEL);
156 152
··· 158 158 INIT_LIST_HEAD(&info->head);
159 159 info->ident = ident;
160 160 info->checksum = func_checksum;
161 - info->use_extra_checksum = use_extra_checksum;
162 161 info->cfg_checksum = cfg_checksum;
163 162 list_add_tail(&info->head, &current_info->functions);
164 163 }
165 - EXPORT_SYMBOL(llvm_gcda_emit_function);
166 164 #endif
165 + EXPORT_SYMBOL(llvm_gcda_emit_function);
167 166
168 167 void llvm_gcda_emit_arcs(u32 num_counters, u64 *counters)
169 168 {
··· 292 293 !list_is_last(&fn_ptr2->head, &info2->functions)) {
293 294 if (fn_ptr1->checksum != fn_ptr2->checksum)
294 295 return false;
296 + #if CONFIG_CLANG_VERSION < 110000
295 297 if (fn_ptr1->use_extra_checksum != fn_ptr2->use_extra_checksum)
296 298 return false;
297 299 if (fn_ptr1->use_extra_checksum &&
298 300 fn_ptr1->cfg_checksum != fn_ptr2->cfg_checksum)
299 301 return false;
302 + #else
303 + if (fn_ptr1->cfg_checksum != fn_ptr2->cfg_checksum)
304 + return false;
305 + #endif
300 306 fn_ptr1 = list_next_entry(fn_ptr1, head);
301 307 fn_ptr2 = list_next_entry(fn_ptr2, head);
302 308 }
··· 533 529
534 530 list_for_each_entry(fi_ptr, &info->functions, head) {
535 531 u32 i;
536 - u32 len = 2;
537 -
538 - if (fi_ptr->use_extra_checksum)
539 - len++;
540 532
541 533 pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION);
542 - pos += store_gcov_u32(buffer, pos, len);
534 + #if CONFIG_CLANG_VERSION < 110000
535 + pos += store_gcov_u32(buffer, pos,
536 + fi_ptr->use_extra_checksum ? 3 : 2);
537 + #else
538 + pos += store_gcov_u32(buffer, pos, 3);
539 + #endif
543 540 pos += store_gcov_u32(buffer, pos, fi_ptr->ident);
544 541 pos += store_gcov_u32(buffer, pos, fi_ptr->checksum);
545 + #if CONFIG_CLANG_VERSION < 110000
546 + if (fi_ptr->use_extra_checksum)
547 + pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
548 + #else
546 + pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
547 + #endif
547 548
548 549 pos += store_gcov_u32(buffer, pos, GCOV_TAG_COUNTER_BASE);
549 550 pos += store_gcov_u32(buffer, pos, fi_ptr->num_counters * 2);
+3 -2
kernel/locking/lockdep.c
··· 705 705
706 706 printk(KERN_CONT " (");
707 707 __print_lock_name(class);
708 - printk(KERN_CONT "){%s}-{%hd:%hd}", usage,
708 + printk(KERN_CONT "){%s}-{%d:%d}", usage,
709 709 class->wait_type_outer ?: class->wait_type_inner,
710 710 class->wait_type_inner);
711 711 }
··· 930 930 /* Debug-check: all keys must be persistent! */
931 931 debug_locks_off();
932 932 pr_err("INFO: trying to register non-static key.\n");
933 - pr_err("the code is fine but needs lockdep annotation.\n");
933 + pr_err("The code is fine but needs lockdep annotation, or maybe\n");
934 + pr_err("you didn't initialize this object before use?\n");
934 935 pr_err("turning off the locking correctness validator.\n");
935 936 dump_stack();
936 937 return false;
+3 -2
kernel/watchdog.c
··· 278 278 * update as well, the only side effect might be a cycle delay for
279 279 * the softlockup check.
280 280 */
281 - for_each_cpu(cpu, &watchdog_allowed_mask)
281 + for_each_cpu(cpu, &watchdog_allowed_mask) {
282 282 per_cpu(watchdog_touch_ts, cpu) = SOFTLOCKUP_RESET;
283 - wq_watchdog_touch(-1);
283 + wq_watchdog_touch(cpu);
284 + }
284 285 }
285 286
286 287 void touch_softlockup_watchdog_sync(void)
+7 -12
kernel/workqueue.c
··· 1412 1412 */
1413 1413 lockdep_assert_irqs_disabled();
1414 1414
1415 - debug_work_activate(work);
1416 1415
1417 1416 /* if draining, only works from the same workqueue are allowed */
1418 1417 if (unlikely(wq->flags & __WQ_DRAINING) &&
··· 1493 1494 worklist = &pwq->delayed_works;
1494 1495 }
1495 1496
1497 + debug_work_activate(work);
1496 1498 insert_work(pwq, work, worklist, work_flags);
1497 1499
1498 1500 out:
··· 5787 5787 continue;
5788 5788
5789 5789 /* get the latest of pool and touched timestamps */
5790 + if (pool->cpu >= 0)
5791 + touched = READ_ONCE(per_cpu(wq_watchdog_touched_cpu, pool->cpu));
5792 + else
5793 + touched = READ_ONCE(wq_watchdog_touched);
5790 5794 pool_ts = READ_ONCE(pool->watchdog_ts);
5791 - touched = READ_ONCE(wq_watchdog_touched);
5792 5795
5793 5796 if (time_after(pool_ts, touched))
5794 5797 ts = pool_ts;
5795 5798 else
5796 5799 ts = touched;
5797 -
5798 - if (pool->cpu >= 0) {
5799 - unsigned long cpu_touched =
5800 - READ_ONCE(per_cpu(wq_watchdog_touched_cpu,
5801 - pool->cpu));
5802 - if (time_after(cpu_touched, ts))
5803 - ts = cpu_touched;
5804 - }
5805 5800
5806 5801 /* did we stall? */
5807 5802 if (time_after(jiffies, ts + thresh)) {
··· 5821 5826 {
5822 5827 if (cpu >= 0)
5823 5828 per_cpu(wq_watchdog_touched_cpu, cpu) = jiffies;
5824 - else
5825 - wq_watchdog_touched = jiffies;
5829 +
5830 + wq_watchdog_touched = jiffies;
5826 5831 }
5827 5832
5828 5833 static void wq_watchdog_set_thresh(unsigned long thresh)
+3 -3
lib/Kconfig.debug
··· 1363 1363 bool
1364 1364 depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
1365 1365 select STACKTRACE
1366 - select FRAME_POINTER if !MIPS && !PPC && !ARM && !S390 && !MICROBLAZE && !ARC && !X86
1366 + depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
1367 1367 select KALLSYMS
1368 1368 select KALLSYMS_ALL
1369 1369
··· 1665 1665 depends on DEBUG_KERNEL
1666 1666 depends on STACKTRACE_SUPPORT
1667 1667 depends on PROC_FS
1668 - select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86
1668 + depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
1669 1669 select KALLSYMS
1670 1670 select KALLSYMS_ALL
1671 1671 select STACKTRACE
··· 1918 1918 depends on FAULT_INJECTION_DEBUG_FS && STACKTRACE_SUPPORT
1919 1919 depends on !X86_64
1920 1920 select STACKTRACE
1921 - select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86
1921 + depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
1922 1922 help
1923 1923 Provide stacktrace filter for fault-injection capabilities
1924 1924
+1 -1
lib/test_kasan_module.c
··· 22 22 char *kmem;
23 23 char __user *usermem;
24 24 size_t size = 10;
25 - int unused;
25 + int __maybe_unused unused;
26 26
27 27 kmem = kmalloc(size, GFP_KERNEL);
28 28 if (!kmem)
+4
mm/gup.c
··· 1535 1535 FOLL_FORCE | FOLL_DUMP | FOLL_GET);
1536 1536 if (locked)
1537 1537 mmap_read_unlock(mm);
1538 +
1539 + if (ret == 1 && is_page_poisoned(page))
1540 + return NULL;
1541 +
1538 1542 return (ret == 1) ? page : NULL;
1539 1543 }
1540 1544 #endif /* CONFIG_ELF_CORE */
+20
mm/internal.h
··· 97 97 set_page_count(page, 1);
98 98 }
99 99
100 + /*
101 + * When kernel touch the user page, the user page may be have been marked
102 + * poison but still mapped in user space, if without this page, the kernel
103 + * can guarantee the data integrity and operation success, the kernel is
104 + * better to check the posion status and avoid touching it, be good not to
105 + * panic, coredump for process fatal signal is a sample case matching this
106 + * scenario. Or if kernel can't guarantee the data integrity, it's better
107 + * not to call this function, let kernel touch the poison page and get to
108 + * panic.
109 + */
110 + static inline bool is_page_poisoned(struct page *page)
111 + {
112 + if (PageHWPoison(page))
113 + return true;
114 + else if (PageHuge(page) && PageHWPoison(compound_head(page)))
115 + return true;
116 +
117 + return false;
118 + }
119 +
100 120 extern unsigned long highest_memmap_pfn;
101 121
102 122 /*
+3 -1
mm/page_poison.c
··· 77 77 void *addr;
78 78
79 79 addr = kmap_atomic(page);
80 + kasan_disable_current();
80 81 /*
81 82 * Page poisoning when enabled poisons each and every page
82 83 * that is freed to buddy. Thus no extra check is done to
83 84 * see if a page was poisoned.
84 85 */
85 - check_poison_mem(addr, PAGE_SIZE);
86 + check_poison_mem(kasan_reset_tag(addr), PAGE_SIZE);
87 + kasan_enable_current();
86 88 kunmap_atomic(addr);
87 89 }
+1 -1
mm/percpu-internal.h
··· 87 87
88 88 extern struct list_head *pcpu_chunk_lists;
89 89 extern int pcpu_nr_slots;
90 - extern int pcpu_nr_empty_pop_pages;
90 + extern int pcpu_nr_empty_pop_pages[];
91 91
92 92 extern struct pcpu_chunk *pcpu_first_chunk;
93 93 extern struct pcpu_chunk *pcpu_reserved_chunk;
+7 -2
mm/percpu-stats.c
··· 145 145 int slot, max_nr_alloc;
146 146 int *buffer;
147 147 enum pcpu_chunk_type type;
148 + int nr_empty_pop_pages;
148 149
149 150 alloc_buffer:
150 151 spin_lock_irq(&pcpu_lock);
··· 166 165 goto alloc_buffer;
167 166 }
168 167
169 - #define PL(X) \
168 + nr_empty_pop_pages = 0;
169 + for (type = 0; type < PCPU_NR_CHUNK_TYPES; type++)
170 + nr_empty_pop_pages += pcpu_nr_empty_pop_pages[type];
171 +
172 + #define PL(X) \
170 173 seq_printf(m, " %-20s: %12lld\n", #X, (long long int)pcpu_stats_ai.X)
171 174
172 175 seq_printf(m,
··· 201 196 PU(nr_max_chunks);
202 197 PU(min_alloc_size);
203 198 PU(max_alloc_size);
204 - P("empty_pop_pages", pcpu_nr_empty_pop_pages);
199 + P("empty_pop_pages", nr_empty_pop_pages);
205 200 seq_putc(m, '\n');
206 201
207 202 #undef PU
+7 -7
mm/percpu.c
··· 173 173 static LIST_HEAD(pcpu_map_extend_chunks);
174 174
175 175 /*
176 - * The number of empty populated pages, protected by pcpu_lock. The
177 - * reserved chunk doesn't contribute to the count.
176 + * The number of empty populated pages by chunk type, protected by pcpu_lock.
177 + * The reserved chunk doesn't contribute to the count.
178 178 */
179 - int pcpu_nr_empty_pop_pages;
179 + int pcpu_nr_empty_pop_pages[PCPU_NR_CHUNK_TYPES];
180 180
181 181 /*
182 182 * The number of populated pages in use by the allocator, protected by
··· 556 556 {
557 557 chunk->nr_empty_pop_pages += nr;
558 558 if (chunk != pcpu_reserved_chunk)
559 - pcpu_nr_empty_pop_pages += nr;
559 + pcpu_nr_empty_pop_pages[pcpu_chunk_type(chunk)] += nr;
560 560 }
561 561
562 562 /*
··· 1832 1832 mutex_unlock(&pcpu_alloc_mutex);
1833 1833 }
1834 1834
1835 - if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
1835 + if (pcpu_nr_empty_pop_pages[type] < PCPU_EMPTY_POP_PAGES_LOW)
1836 1836 pcpu_schedule_balance_work();
1837 1837
1838 1838 /* clear the areas and return address relative to base address */
··· 2000 2000 pcpu_atomic_alloc_failed = false;
2001 2001 } else {
2002 2002 nr_to_pop = clamp(PCPU_EMPTY_POP_PAGES_HIGH -
2003 - pcpu_nr_empty_pop_pages,
2003 + pcpu_nr_empty_pop_pages[type],
2004 2004 0, PCPU_EMPTY_POP_PAGES_HIGH);
2005 2005 }
2006 2006
··· 2580 2580
2581 2581 /* link the first chunk in */
2582 2582 pcpu_first_chunk = chunk;
2583 - pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
2583 + pcpu_nr_empty_pop_pages[PCPU_CHUNK_ROOT] = pcpu_first_chunk->nr_empty_pop_pages;
2584 2584 pcpu_chunk_relocate(pcpu_first_chunk, -1);
2585 2585
2586 2586 /* include all regions of the first chunk */
+2
net/batman-adv/translation-table.c
··· 890 890 hlist_for_each_entry(vlan, &orig_node->vlan_list, list) {
891 891 tt_vlan->vid = htons(vlan->vid);
892 892 tt_vlan->crc = htonl(vlan->tt.crc);
893 + tt_vlan->reserved = 0;
893 894
894 895 tt_vlan++;
895 896 }
··· 974 973
975 974 tt_vlan->vid = htons(vlan->vid);
976 975 tt_vlan->crc = htonl(vlan->tt.crc);
976 + tt_vlan->reserved = 0;
977 977
978 978 tt_vlan++;
979 979 }
+6 -4
net/can/bcm.c
··· 86 86 MODULE_AUTHOR("Oliver Hartkopp <oliver.hartkopp@volkswagen.de>");
87 87 MODULE_ALIAS("can-proto-2");
88 88
89 + #define BCM_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex)
90 +
89 91 /*
90 92 * easy access to the first 64 bit of can(fd)_frame payload. cp->data is
91 93 * 64 bit aligned so the offset has to be multiples of 8 which is ensured
··· 1294 1292 /* no bound device as default => check msg_name */
1295 1293 DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name);
1296 1294
1297 - if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex))
1295 + if (msg->msg_namelen < BCM_MIN_NAMELEN)
1298 1296 return -EINVAL;
1299 1297
1300 1298 if (addr->can_family != AF_CAN)
··· 1536 1534 struct net *net = sock_net(sk);
1537 1535 int ret = 0;
1538 1536
1539 - if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex))
1537 + if (len < BCM_MIN_NAMELEN)
1540 1538 return -EINVAL;
1541 1539
1542 1540 lock_sock(sk);
··· 1618 1616 sock_recv_ts_and_drops(msg, sk, skb);
1619 1617
1620 1618 if (msg->msg_name) {
1621 - __sockaddr_check_size(sizeof(struct sockaddr_can));
1622 - msg->msg_namelen = sizeof(struct sockaddr_can);
1619 + __sockaddr_check_size(BCM_MIN_NAMELEN);
1620 + msg->msg_namelen = BCM_MIN_NAMELEN;
1623 1621 memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
1624 1622 }
1625 1623
+7 -4
net/can/isotp.c
··· 77 77 MODULE_AUTHOR("Oliver Hartkopp <socketcan@hartkopp.net>");
78 78 MODULE_ALIAS("can-proto-6");
79 79
80 + #define ISOTP_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp)
81 +
80 82 #define SINGLE_MASK(id) (((id) & CAN_EFF_FLAG) ? \
81 83 (CAN_EFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG) : \
82 84 (CAN_SFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG))
··· 988 986 sock_recv_timestamp(msg, sk, skb);
989 987
990 988 if (msg->msg_name) {
991 - msg->msg_namelen = sizeof(struct sockaddr_can);
989 + __sockaddr_check_size(ISOTP_MIN_NAMELEN);
990 + msg->msg_namelen = ISOTP_MIN_NAMELEN;
992 991 memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
993 992 }
994 993
··· 1059 1056 int notify_enetdown = 0;
1060 1057 int do_rx_reg = 1;
1061 1058
1062 - if (len < CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp))
1059 + if (len < ISOTP_MIN_NAMELEN)
1063 1060 return -EINVAL;
1064 1061
1065 1062 /* do not register frame reception for functional addressing */
··· 1155 1152 if (peer)
1156 1153 return -EOPNOTSUPP;
1157 1154
1158 - memset(addr, 0, sizeof(*addr));
1155 + memset(addr, 0, ISOTP_MIN_NAMELEN);
1159 1156 addr->can_family = AF_CAN;
1160 1157 addr->can_ifindex = so->ifindex;
1161 1158 addr->can_addr.tp.rx_id = so->rxid;
1162 1159 addr->can_addr.tp.tx_id = so->txid;
1163 1160
1164 - return sizeof(*addr);
1161 + return ISOTP_MIN_NAMELEN;
1165 1162 }
1166 1163
1167 1164 static int isotp_setsockopt(struct socket *sock, int level, int optname,
+8 -6
net/can/raw.c
··· 60 60 MODULE_AUTHOR("Urs Thuermann <urs.thuermann@volkswagen.de>");
61 61 MODULE_ALIAS("can-proto-1");
62 62
63 + #define RAW_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex)
64 +
63 65 #define MASK_ALL 0
64 66
65 67 /* A raw socket has a list of can_filters attached to it, each receiving
··· 396 394 int err = 0;
397 395 int notify_enetdown = 0;
398 396
399 - if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex))
397 + if (len < RAW_MIN_NAMELEN)
400 398 return -EINVAL;
401 399 if (addr->can_family != AF_CAN)
402 400 return -EINVAL;
··· 477 475 if (peer)
478 476 return -EOPNOTSUPP;
479 477
480 - memset(addr, 0, sizeof(*addr));
478 + memset(addr, 0, RAW_MIN_NAMELEN);
481 479 addr->can_family = AF_CAN;
482 480 addr->can_ifindex = ro->ifindex;
483 481
484 - return sizeof(*addr);
482 + return RAW_MIN_NAMELEN;
485 483 }
486 484
487 485 static int raw_setsockopt(struct socket *sock, int level, int optname,
··· 741 739 if (msg->msg_name) {
742 740 DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name);
743 741
744 - if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex))
742 + if (msg->msg_namelen < RAW_MIN_NAMELEN)
745 743 return -EINVAL;
746 744
747 745 if (addr->can_family != AF_CAN)
··· 834 832 sock_recv_ts_and_drops(msg, sk, skb);
835 833
836 834 if (msg->msg_name) {
837 - __sockaddr_check_size(sizeof(struct sockaddr_can));
838 - msg->msg_namelen = sizeof(struct sockaddr_can);
835 + __sockaddr_check_size(RAW_MIN_NAMELEN);
836 + msg->msg_namelen = RAW_MIN_NAMELEN;
839 837 memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
840 838 }
841 839
+2 -1
net/core/dev.c
··· 6992 6992
6993 6993 set_current_state(TASK_INTERRUPTIBLE);
6994 6994
6995 - while (!kthread_should_stop() && !napi_disable_pending(napi)) {
6995 + while (!kthread_should_stop()) {
6996 6996 /* Testing SCHED_THREADED bit here to make sure the current
6997 6997 * kthread owns this napi and could poll on this napi.
6998 6998 * Testing SCHED bit is not enough because SCHED bit might be
··· 7010 7010 set_current_state(TASK_INTERRUPTIBLE);
7011 7011 }
7012 7012 __set_current_state(TASK_RUNNING);
7013 +
7013 7014 return -1;
7014 7015 }
7015 7016
+1 -1
net/core/neighbour.c
··· 1379 1379 * we can reinject the packet there.
1380 1380 */
1381 1381 n2 = NULL;
1382 - if (dst) {
1382 + if (dst && dst->obsolete != DST_OBSOLETE_DEAD) {
1383 1383 n2 = dst_neigh_lookup_skb(dst, skb);
1384 1384 if (n2)
1385 1385 n1 = n2;
+1 -1
net/core/rtnetlink.c
··· 2863 2863
2864 2864 BUG_ON(!(af_ops = rtnl_af_lookup(nla_type(af))));
2865 2865
2866 - err = af_ops->set_link_af(dev, af);
2866 + err = af_ops->set_link_af(dev, af, extack);
2867 2867 if (err < 0) {
2868 2868 rcu_read_unlock();
2869 2869 goto errout;
+5 -7
net/core/skmsg.c
··· 488 488 if (unlikely(!msg))
489 489 return -EAGAIN;
490 490 sk_msg_init(msg);
491 + skb_set_owner_r(skb, sk);
491 492 return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
492 493 }
493 494
··· 791 790 {
792 791 switch (verdict) {
793 792 case __SK_REDIRECT:
794 - skb_set_owner_r(skb, sk);
795 793 sk_psock_skb_redirect(skb);
796 794 break;
797 795 case __SK_PASS:
··· 808 808 rcu_read_lock();
809 809 prog = READ_ONCE(psock->progs.skb_verdict);
810 810 if (likely(prog)) {
811 - /* We skip full set_owner_r here because if we do a SK_PASS
812 - * or SK_DROP we can skip skb memory accounting and use the
813 - * TLS context.
814 - */
815 811 skb->sk = psock->sk;
816 812 tcp_skb_bpf_redirect_clear(skb);
817 813 ret = sk_psock_bpf_run(psock, prog, skb);
··· 876 880 kfree_skb(skb);
877 881 goto out;
878 882 }
879 - skb_set_owner_r(skb, sk);
880 883 prog = READ_ONCE(psock->progs.skb_verdict);
881 884 if (likely(prog)) {
885 + skb->sk = sk;
882 886 tcp_skb_bpf_redirect_clear(skb);
883 887 ret = sk_psock_bpf_run(psock, prog, skb);
884 888 ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
889 + skb->sk = NULL;
885 890 }
886 891 sk_psock_verdict_apply(psock, skb, ret);
887 892 out:
··· 953 956 kfree_skb(skb);
954 957 goto out;
955 958 }
956 - skb_set_owner_r(skb, sk);
957 959 prog = READ_ONCE(psock->progs.skb_verdict);
958 960 if (likely(prog)) {
961 + skb->sk = sk;
959 962 tcp_skb_bpf_redirect_clear(skb);
960 963 ret = sk_psock_bpf_run(psock, prog, skb);
961 964 ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
965 + skb->sk = NULL;
962 966 }
963 967 sk_psock_verdict_apply(psock, skb, ret);
964 968 out:
+3 -9
net/core/sock.c
··· 2132 2132 if (skb_is_tcp_pure_ack(skb))
2133 2133 return;
2134 2134
2135 - if (can_skb_orphan_partial(skb)) {
2136 - struct sock *sk = skb->sk;
2137 -
2138 - if (refcount_inc_not_zero(&sk->sk_refcnt)) {
2139 - WARN_ON(refcount_sub_and_test(skb->truesize, &sk->sk_wmem_alloc));
2140 - skb->destructor = sock_efree;
2141 - }
2142 - } else {
2135 + if (can_skb_orphan_partial(skb))
2136 + skb_set_owner_sk_safe(skb, skb->sk);
2137 + else
2143 2138 skb_orphan(skb);
2144 - }
2145 2139 }
2146 2140 EXPORT_SYMBOL(skb_orphan_partial);
2147 2141
+2 -1
net/core/xdp.c
··· 350 350 /* mem->id is valid, checked in xdp_rxq_info_reg_mem_model() */
351 351 xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
352 352 page = virt_to_head_page(data);
353 - napi_direct &= !xdp_return_frame_no_direct();
353 + if (napi_direct && xdp_return_frame_no_direct())
354 + napi_direct = false;
354 355 page_pool_put_full_page(xa->page_pool, page, napi_direct);
355 356 rcu_read_unlock();
356 357 break;
+7 -1
net/dsa/dsa2.c
··· 795 795
796 796 list_for_each_entry(dp, &dst->ports, list) {
797 797 err = dsa_port_setup(dp);
798 - if (err)
798 + if (err) {
799 + dsa_port_devlink_teardown(dp);
800 + dp->type = DSA_PORT_TYPE_UNUSED;
801 + err = dsa_port_devlink_setup(dp);
802 + if (err)
803 + goto teardown;
799 804 continue;
805 + }
800 806 }
801 807
802 808 return 0;
+9 -6
net/dsa/switch.c
··· 107 107 bool unset_vlan_filtering = br_vlan_enabled(info->br);
108 108 struct dsa_switch_tree *dst = ds->dst;
109 109 struct netlink_ext_ack extack = {0};
110 - int err, i;
110 + int err, port;
111 111
112 112 if (dst->index == info->tree_index && ds->index == info->sw_index &&
113 113 ds->ops->port_bridge_join)
··· 124 124 * it. That is a good thing, because that lets us handle it and also
125 125 * handle the case where the switch's vlan_filtering setting is global
126 126 * (not per port). When that happens, the correct moment to trigger the
127 - * vlan_filtering callback is only when the last port left this bridge.
127 + * vlan_filtering callback is only when the last port leaves the last
128 + * VLAN-aware bridge.
128 129 */
129 130 if (unset_vlan_filtering && ds->vlan_filtering_is_global) {
130 - for (i = 0; i < ds->num_ports; i++) {
131 - if (i == info->port)
132 - continue;
133 - if (dsa_to_port(ds, i)->bridge_dev == info->br) {
131 + for (port = 0; port < ds->num_ports; port++) {
132 + struct net_device *bridge_dev;
133 +
134 + bridge_dev = dsa_to_port(ds, port)->bridge_dev;
135 +
136 + if (bridge_dev && br_vlan_enabled(bridge_dev)) {
134 137 unset_vlan_filtering = false;
135 138 break;
136 139 }
+17
net/ethtool/common.c
··· 273 273 __DEFINE_LINK_MODE_PARAMS(10000, KR, Full),
274 274 [ETHTOOL_LINK_MODE_10000baseR_FEC_BIT] = {
275 275 .speed = SPEED_10000,
276 + .lanes = 1,
276 277 .duplex = DUPLEX_FULL,
277 278 },
278 279 __DEFINE_LINK_MODE_PARAMS(20000, MLD2, Full),
··· 563 562 rtnl_unlock();
564 563 }
565 564 EXPORT_SYMBOL_GPL(ethtool_set_ethtool_phy_ops);
565 +
566 + void
567 + ethtool_params_from_link_mode(struct ethtool_link_ksettings *link_ksettings,
568 + enum ethtool_link_mode_bit_indices link_mode)
569 + {
570 + const struct link_mode_info *link_info;
571 +
572 + if (WARN_ON_ONCE(link_mode >= __ETHTOOL_LINK_MODE_MASK_NBITS))
573 + return;
574 +
575 + link_info = &link_mode_params[link_mode];
576 + link_ksettings->base.speed = link_info->speed;
577 + link_ksettings->lanes = link_info->lanes;
578 + link_ksettings->base.duplex = link_info->duplex;
579 + }
580 + EXPORT_SYMBOL_GPL(ethtool_params_from_link_mode);
+2 -2
net/ethtool/eee.c
··· 169 169 ethnl_update_bool32(&eee.eee_enabled, tb[ETHTOOL_A_EEE_ENABLED], &mod);
170 170 ethnl_update_bool32(&eee.tx_lpi_enabled,
171 171 tb[ETHTOOL_A_EEE_TX_LPI_ENABLED], &mod);
172 - ethnl_update_bool32(&eee.tx_lpi_timer, tb[ETHTOOL_A_EEE_TX_LPI_TIMER],
173 - &mod);
172 + ethnl_update_u32(&eee.tx_lpi_timer, tb[ETHTOOL_A_EEE_TX_LPI_TIMER],
173 + &mod);
174 174 ret = 0;
175 175 if (!mod)
176 176 goto out_ops;
+1 -17
net/ethtool/ioctl.c
··· 426 426 int __ethtool_get_link_ksettings(struct net_device *dev,
427 427 struct ethtool_link_ksettings *link_ksettings)
428 428 {
429 - const struct link_mode_info *link_info;
430 - int err;
431 -
432 429 ASSERT_RTNL();
433 430
434 431 if (!dev->ethtool_ops->get_link_ksettings)
435 432 return -EOPNOTSUPP;
436 433
437 434 memset(link_ksettings, 0, sizeof(*link_ksettings));
438 -
439 - link_ksettings->link_mode = -1;
440 - err = dev->ethtool_ops->get_link_ksettings(dev, link_ksettings);
441 - if (err)
442 - return err;
443 -
444 - if (link_ksettings->link_mode != -1) {
445 - link_info = &link_mode_params[link_ksettings->link_mode];
446 - link_ksettings->base.speed = link_info->speed;
447 - link_ksettings->lanes = link_info->lanes;
448 - link_ksettings->base.duplex = link_info->duplex;
449 - }
450 -
451 - return 0;
435 + return dev->ethtool_ops->get_link_ksettings(dev, link_ksettings);
452 436 }
453 437
454 438 EXPORT_SYMBOL(__ethtool_get_link_ksettings);
+1
net/hsr/hsr_device.c
··· 217 217 master = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
218 218 if (master) {
219 219 skb->dev = master->dev;
220 + skb_reset_mac_header(skb);
220 221 hsr_forward_skb(skb, master);
221 222 } else {
222 223 atomic_long_inc(&dev->tx_dropped);
-6
net/hsr/hsr_forward.c
··· 555 555 {
556 556 struct hsr_frame_info frame;
557 557
558 - if (skb_mac_header(skb) != skb->data) {
559 - WARN_ONCE(1, "%s:%d: Malformed frame (port_src %s)\n",
560 - __FILE__, __LINE__, port->dev->name);
561 - goto out_drop;
562 - }
563 -
564 558 if (fill_frame_info(&frame, skb, port) < 0)
565 559 goto out_drop;
566 560
+4 -3
net/ieee802154/nl-mac.c
··· 551 551 desc->mode = nla_get_u8(info->attrs[IEEE802154_ATTR_LLSEC_KEY_MODE]); 552 552 553 553 if (desc->mode == IEEE802154_SCF_KEY_IMPLICIT) { 554 - if (!info->attrs[IEEE802154_ATTR_PAN_ID] && 555 - !(info->attrs[IEEE802154_ATTR_SHORT_ADDR] || 556 - info->attrs[IEEE802154_ATTR_HW_ADDR])) 554 + if (!info->attrs[IEEE802154_ATTR_PAN_ID]) 557 555 return -EINVAL; 558 556 559 557 desc->device_addr.pan_id = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_PAN_ID]); ··· 560 562 desc->device_addr.mode = IEEE802154_ADDR_SHORT; 561 563 desc->device_addr.short_addr = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_SHORT_ADDR]); 562 564 } else { 565 + if (!info->attrs[IEEE802154_ATTR_HW_ADDR]) 566 + return -EINVAL; 567 + 563 568 desc->device_addr.mode = IEEE802154_ADDR_LONG; 564 569 desc->device_addr.extended_addr = nla_get_hwaddr(info->attrs[IEEE802154_ATTR_HW_ADDR]); 565 570 }
+60 -8
net/ieee802154/nl802154.c
··· 820 820 goto nla_put_failure; 821 821 822 822 #ifdef CONFIG_IEEE802154_NL802154_EXPERIMENTAL 823 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 824 + goto out; 825 + 823 826 if (nl802154_get_llsec_params(msg, rdev, wpan_dev) < 0) 824 827 goto nla_put_failure; 828 + 829 + out: 825 830 #endif /* CONFIG_IEEE802154_NL802154_EXPERIMENTAL */ 826 831 827 832 genlmsg_end(msg, hdr);
··· 1389 1384 u32 changed = 0; 1390 1385 int ret; 1391 1386 1387 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1388 + return -EOPNOTSUPP; 1389 + 1392 1390 if (info->attrs[NL802154_ATTR_SEC_ENABLED]) { 1393 1391 u8 enabled;
··· 1498 1490 if (err) 1499 1491 return err; 1500 1492 1493 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) { 1494 + err = skb->len; 1495 + goto out_err; 1496 + } 1497 + 1501 1498 if (!wpan_dev->netdev) { 1502 1499 err = -EINVAL; 1503 1500 goto out_err;
··· 1557 1544 struct ieee802154_llsec_key_id id = { }; 1558 1545 u32 commands[NL802154_CMD_FRAME_NR_IDS / 32] = { }; 1559 1546 1560 - if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack)) 1547 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1548 + return -EOPNOTSUPP; 1549 + 1550 + if (!info->attrs[NL802154_ATTR_SEC_KEY] || 1551 + nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack)) 1561 1552 return -EINVAL; 1562 1553 1563 1554 if (!attrs[NL802154_KEY_ATTR_USAGE_FRAMES] ||
··· 1609 1592 struct nlattr *attrs[NL802154_KEY_ATTR_MAX + 1]; 1610 1593 struct ieee802154_llsec_key_id id; 1611 1594 1612 - if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack)) 1595 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1596 + return -EOPNOTSUPP; 1597 + 1598 + if (!info->attrs[NL802154_ATTR_SEC_KEY] || 1599 + nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack)) 1613 1600 return -EINVAL; 1614 1601 1615 1602 if (ieee802154_llsec_parse_key_id(attrs[NL802154_KEY_ATTR_ID], &id) < 0)
··· 1676 1655 err = nl802154_prepare_wpan_dev_dump(skb, cb, &rdev, &wpan_dev); 1677 1656 if (err) 1678 1657 return err; 1658 + 1659 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) { 1660 + err = skb->len; 1661 + goto out_err; 1662 + } 1679 1663 1680 1664 if (!wpan_dev->netdev) { 1681 1665 err = -EINVAL;
··· 1768 1742 struct wpan_dev *wpan_dev = dev->ieee802154_ptr; 1769 1743 struct ieee802154_llsec_device dev_desc; 1770 1744 1745 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1746 + return -EOPNOTSUPP; 1747 + 1771 1748 if (ieee802154_llsec_parse_device(info->attrs[NL802154_ATTR_SEC_DEVICE], 1772 1749 &dev_desc) < 0) 1773 1750 return -EINVAL;
··· 1786 1757 struct nlattr *attrs[NL802154_DEV_ATTR_MAX + 1]; 1787 1758 __le64 extended_addr; 1788 1759 1789 - if (nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack)) 1760 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1761 + return -EOPNOTSUPP; 1762 + 1763 + if (!info->attrs[NL802154_ATTR_SEC_DEVICE] || 1764 + nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack)) 1790 1765 return -EINVAL; 1791 1766 1792 1767 if (!attrs[NL802154_DEV_ATTR_EXTENDED_ADDR])
··· 1858 1825 if (err) 1859 1826 return err; 1860 1827 1828 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) { 1829 + err = skb->len; 1830 + goto out_err; 1831 + } 1832 + 1861 1833 if (!wpan_dev->netdev) { 1862 1834 err = -EINVAL; 1863 1835 goto out_err;
··· 1920 1882 struct ieee802154_llsec_device_key key; 1921 1883 __le64 extended_addr; 1922 1884 1885 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1886 + return -EOPNOTSUPP; 1887 + 1923 1888 if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] || 1924 1889 nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack) < 0) 1925 1890 return -EINVAL;
··· 1954 1913 struct ieee802154_llsec_device_key key; 1955 1914 __le64 extended_addr; 1956 1915 1957 - if (nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack)) 1916 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 1917 + return -EOPNOTSUPP; 1918 + 1919 + if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] || 1920 + nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack)) 1958 1921 return -EINVAL; 1959 1922 1960 1923 if (!attrs[NL802154_DEVKEY_ATTR_EXTENDED_ADDR])
··· 2030 1985 err = nl802154_prepare_wpan_dev_dump(skb, cb, &rdev, &wpan_dev); 2031 1986 if (err) 2032 1987 return err; 1988 + 1989 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) { 1990 + err = skb->len; 1991 + goto out_err; 1992 + } 2033 1993 2034 1994 if (!wpan_dev->netdev) { 2035 1995 err = -EINVAL;
··· 2120 2070 struct wpan_dev *wpan_dev = dev->ieee802154_ptr; 2121 2071 struct ieee802154_llsec_seclevel sl; 2122 2072 2073 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 2074 + return -EOPNOTSUPP; 2075 + 2123 2076 if (llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL], 2124 2077 &sl) < 0) 2125 2078 return -EINVAL;
··· 2138 2085 struct wpan_dev *wpan_dev = dev->ieee802154_ptr; 2139 2086 struct ieee802154_llsec_seclevel sl; 2140 2087 2088 + if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) 2089 + return -EOPNOTSUPP; 2090 + 2141 2091 if (!info->attrs[NL802154_ATTR_SEC_LEVEL] || 2142 2092 llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL], 2143 2093 &sl) < 0)
··· 2154 2098 #define NL802154_FLAG_NEED_NETDEV 0x02 2155 2099 #define NL802154_FLAG_NEED_RTNL 0x04 2156 2100 #define NL802154_FLAG_CHECK_NETDEV_UP 0x08 2157 - #define NL802154_FLAG_NEED_NETDEV_UP (NL802154_FLAG_NEED_NETDEV |\ 2158 - NL802154_FLAG_CHECK_NETDEV_UP) 2159 2101 #define NL802154_FLAG_NEED_WPAN_DEV 0x10 2160 - #define NL802154_FLAG_NEED_WPAN_DEV_UP (NL802154_FLAG_NEED_WPAN_DEV |\ 2161 - NL802154_FLAG_CHECK_NETDEV_UP) 2162 2102 2163 2103 static int nl802154_pre_doit(const struct genl_ops *ops, struct sk_buff *skb, 2164 2104 struct genl_info *info)
+1 -1
net/ipv4/ah4.c
··· 141 141 } 142 142 143 143 kfree(AH_SKB_CB(skb)->tmp); 144 - xfrm_output_resume(skb, err); 144 + xfrm_output_resume(skb->sk, skb, err); 145 145 } 146 146 147 147 static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
+2 -1
net/ipv4/devinet.c
··· 1978 1978 return 0; 1979 1979 } 1980 1980 1981 - static int inet_set_link_af(struct net_device *dev, const struct nlattr *nla) 1981 + static int inet_set_link_af(struct net_device *dev, const struct nlattr *nla, 1982 + struct netlink_ext_ack *extack) 1982 1983 { 1983 1984 struct in_device *in_dev = __in_dev_get_rcu(dev); 1984 1985 struct nlattr *a, *tb[IFLA_INET_MAX+1];
+1 -1
net/ipv4/esp4.c
··· 279 279 x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) 280 280 esp_output_tail_tcp(x, skb); 281 281 else 282 - xfrm_output_resume(skb, err); 282 + xfrm_output_resume(skb->sk, skb, err); 283 283 } 284 284 } 285 285
+14 -3
net/ipv4/esp4_offload.c
··· 217 217 218 218 if ((!(skb->dev->gso_partial_features & NETIF_F_HW_ESP) && 219 219 !(features & NETIF_F_HW_ESP)) || x->xso.dev != skb->dev) 220 - esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK); 220 + esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK | 221 + NETIF_F_SCTP_CRC); 221 222 else if (!(features & NETIF_F_HW_ESP_TX_CSUM) && 222 223 !(skb->dev->gso_partial_features & NETIF_F_HW_ESP_TX_CSUM)) 223 - esp_features = features & ~NETIF_F_CSUM_MASK; 224 + esp_features = features & ~(NETIF_F_CSUM_MASK | 225 + NETIF_F_SCTP_CRC); 224 226 225 227 xo->flags |= XFRM_GSO_SEGMENT; 226 228 ··· 314 312 ip_hdr(skb)->tot_len = htons(skb->len); 315 313 ip_send_check(ip_hdr(skb)); 316 314 317 - if (hw_offload) 315 + if (hw_offload) { 316 + if (!skb_ext_add(skb, SKB_EXT_SEC_PATH)) 317 + return -ENOMEM; 318 + 319 + xo = xfrm_offload(skb); 320 + if (!xo) 321 + return -EINVAL; 322 + 323 + xo->flags |= XFRM_XMIT; 318 324 return 0; 325 + } 319 326 320 327 err = esp_output_tail(x, skb, &esp); 321 328 if (err)
+4 -2
net/ipv4/ip_vti.c
··· 218 218 } 219 219 220 220 if (dst->flags & DST_XFRM_QUEUE) 221 - goto queued; 221 + goto xmit; 222 222 223 223 if (!vti_state_check(dst->xfrm, parms->iph.daddr, parms->iph.saddr)) { 224 224 dev->stats.tx_carrier_errors++; ··· 238 238 if (skb->len > mtu) { 239 239 skb_dst_update_pmtu_no_confirm(skb, mtu); 240 240 if (skb->protocol == htons(ETH_P_IP)) { 241 + if (!(ip_hdr(skb)->frag_off & htons(IP_DF))) 242 + goto xmit; 241 243 icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, 242 244 htonl(mtu)); 243 245 } else { ··· 253 251 goto tx_error; 254 252 } 255 253 256 - queued: 254 + xmit: 257 255 skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(dev))); 258 256 skb_dst_set(skb, dst); 259 257 skb->dev = skb_dst(skb)->dev;
+4
net/ipv4/udp.c
··· 2754 2754 val = up->gso_size; 2755 2755 break; 2756 2756 2757 + case UDP_GRO: 2758 + val = up->gro_enabled; 2759 + break; 2760 + 2757 2761 /* The following two cannot be changed on UDP sockets, the return is 2758 2762 * always 0 (which corresponds to the full checksum coverage of UDP). */ 2759 2763 case UDPLITE_SEND_CSCOV:
+26 -6
net/ipv6/addrconf.c
··· 5669 5669 return 0; 5670 5670 } 5671 5671 5672 - static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token) 5672 + static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token, 5673 + struct netlink_ext_ack *extack) 5673 5674 { 5674 5675 struct inet6_ifaddr *ifp; 5675 5676 struct net_device *dev = idev->dev; ··· 5681 5680 5682 5681 if (!token) 5683 5682 return -EINVAL; 5684 - if (dev->flags & (IFF_LOOPBACK | IFF_NOARP)) 5683 + 5684 + if (dev->flags & IFF_LOOPBACK) { 5685 + NL_SET_ERR_MSG_MOD(extack, "Device is loopback"); 5685 5686 return -EINVAL; 5686 - if (!ipv6_accept_ra(idev)) 5687 + } 5688 + 5689 + if (dev->flags & IFF_NOARP) { 5690 + NL_SET_ERR_MSG_MOD(extack, 5691 + "Device does not do neighbour discovery"); 5687 5692 return -EINVAL; 5688 - if (idev->cnf.rtr_solicits == 0) 5693 + } 5694 + 5695 + if (!ipv6_accept_ra(idev)) { 5696 + NL_SET_ERR_MSG_MOD(extack, 5697 + "Router advertisement is disabled on device"); 5689 5698 return -EINVAL; 5699 + } 5700 + 5701 + if (idev->cnf.rtr_solicits == 0) { 5702 + NL_SET_ERR_MSG(extack, 5703 + "Router solicitation is disabled on device"); 5704 + return -EINVAL; 5705 + } 5690 5706 5691 5707 write_lock_bh(&idev->lock); 5692 5708 ··· 5811 5793 return 0; 5812 5794 } 5813 5795 5814 - static int inet6_set_link_af(struct net_device *dev, const struct nlattr *nla) 5796 + static int inet6_set_link_af(struct net_device *dev, const struct nlattr *nla, 5797 + struct netlink_ext_ack *extack) 5815 5798 { 5816 5799 struct inet6_dev *idev = __in6_dev_get(dev); 5817 5800 struct nlattr *tb[IFLA_INET6_MAX + 1]; ··· 5825 5806 BUG(); 5826 5807 5827 5808 if (tb[IFLA_INET6_TOKEN]) { 5828 - err = inet6_set_iftoken(idev, nla_data(tb[IFLA_INET6_TOKEN])); 5809 + err = inet6_set_iftoken(idev, nla_data(tb[IFLA_INET6_TOKEN]), 5810 + extack); 5829 5811 if (err) 5830 5812 return err; 5831 5813 }
+1 -1
net/ipv6/ah6.c
··· 316 316 } 317 317 318 318 kfree(AH_SKB_CB(skb)->tmp); 319 - xfrm_output_resume(skb, err); 319 + xfrm_output_resume(skb->sk, skb, err); 320 320 } 321 321 322 322 static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
+1 -1
net/ipv6/esp6.c
··· 314 314 x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) 315 315 esp_output_tail_tcp(x, skb); 316 316 else 317 - xfrm_output_resume(skb, err); 317 + xfrm_output_resume(skb->sk, skb, err); 318 318 } 319 319 } 320 320
+14 -3
net/ipv6/esp6_offload.c
··· 254 254 skb->encap_hdr_csum = 1; 255 255 256 256 if (!(features & NETIF_F_HW_ESP) || x->xso.dev != skb->dev) 257 - esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK); 257 + esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK | 258 + NETIF_F_SCTP_CRC); 258 259 else if (!(features & NETIF_F_HW_ESP_TX_CSUM)) 259 - esp_features = features & ~NETIF_F_CSUM_MASK; 260 + esp_features = features & ~(NETIF_F_CSUM_MASK | 261 + NETIF_F_SCTP_CRC); 260 262 261 263 xo->flags |= XFRM_GSO_SEGMENT; 262 264 ··· 348 346 349 347 ipv6_hdr(skb)->payload_len = htons(len); 350 348 351 - if (hw_offload) 349 + if (hw_offload) { 350 + if (!skb_ext_add(skb, SKB_EXT_SEC_PATH)) 351 + return -ENOMEM; 352 + 353 + xo = xfrm_offload(skb); 354 + if (!xo) 355 + return -EINVAL; 356 + 357 + xo->flags |= XFRM_XMIT; 352 358 return 0; 359 + } 353 360 354 361 err = esp6_output_tail(x, skb, &esp); 355 362 if (err)
+4 -2
net/ipv6/ip6_vti.c
··· 494 494 } 495 495 496 496 if (dst->flags & DST_XFRM_QUEUE) 497 - goto queued; 497 + goto xmit; 498 498 499 499 x = dst->xfrm; 500 500 if (!vti6_state_check(x, &t->parms.raddr, &t->parms.laddr)) ··· 523 523 524 524 icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu); 525 525 } else { 526 + if (!(ip_hdr(skb)->frag_off & htons(IP_DF))) 527 + goto xmit; 526 528 icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, 527 529 htonl(mtu)); 528 530 } ··· 533 531 goto tx_err_dst_release; 534 532 } 535 533 536 - queued: 534 + xmit: 537 535 skb_scrub_packet(skb, !net_eq(t->net, dev_net(dev))); 538 536 skb_dst_set(skb, dst); 539 537 skb->dev = skb_dst(skb)->dev;
+1 -1
net/ipv6/raw.c
··· 298 298 */ 299 299 v4addr = LOOPBACK4_IPV6; 300 300 if (!(addr_type & IPV6_ADDR_MULTICAST) && 301 - !sock_net(sk)->ipv6.sysctl.ip_nonlocal_bind) { 301 + !ipv6_can_nonlocal_bind(sock_net(sk), inet)) { 302 302 err = -EADDRNOTAVAIL; 303 303 if (!ipv6_chk_addr(sock_net(sk), &addr->sin6_addr, 304 304 dev, 0)) {
+5 -3
net/ipv6/route.c
··· 5209 5209 * nexthops have been replaced by first new, the rest should 5210 5210 * be added to it. 5211 5211 */ 5212 - cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL | 5213 - NLM_F_REPLACE); 5214 - cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE; 5212 + if (cfg->fc_nlinfo.nlh) { 5213 + cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL | 5214 + NLM_F_REPLACE); 5215 + cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE; 5216 + } 5215 5217 nhn++; 5216 5218 } 5217 5219
+3 -1
net/mac80211/cfg.c
··· 1788 1788 } 1789 1789 1790 1790 if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN && 1791 - sta->sdata->u.vlan.sta) 1791 + sta->sdata->u.vlan.sta) { 1792 + ieee80211_clear_fast_rx(sta); 1792 1793 RCU_INIT_POINTER(sta->sdata->u.vlan.sta, NULL); 1794 + } 1793 1795 1794 1796 if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) 1795 1797 ieee80211_vif_dec_num_mcast(sta->sdata);
+4 -1
net/mac80211/mlme.c
··· 4707 4707 timeout = sta->rx_stats.last_rx; 4708 4708 timeout += IEEE80211_CONNECTION_IDLE_TIME; 4709 4709 4710 - if (time_is_before_jiffies(timeout)) { 4710 + /* If timeout is after now, then update timer to fire at 4711 + * the later date, but do not actually probe at this time. 4712 + */ 4713 + if (time_is_after_jiffies(timeout)) { 4711 4714 mod_timer(&ifmgd->conn_mon_timer, round_jiffies_up(timeout)); 4712 4715 return; 4713 4716 }
+1 -1
net/mac80211/tx.c
··· 3573 3573 test_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags)) 3574 3574 goto out; 3575 3575 3576 - if (vif->txqs_stopped[ieee80211_ac_from_tid(txq->tid)]) { 3576 + if (vif->txqs_stopped[txq->ac]) { 3577 3577 set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags); 3578 3578 goto out; 3579 3579 }
+1 -1
net/mac802154/llsec.c
··· 152 152 crypto_free_sync_skcipher(key->tfm0); 153 153 err_tfm: 154 154 for (i = 0; i < ARRAY_SIZE(key->tfm); i++) 155 - if (key->tfm[i]) 155 + if (!IS_ERR_OR_NULL(key->tfm[i])) 156 156 crypto_free_aead(key->tfm[i]); 157 157 158 158 kfree_sensitive(key);
+47 -53
net/mptcp/protocol.c
··· 11 11 #include <linux/netdevice.h> 12 12 #include <linux/sched/signal.h> 13 13 #include <linux/atomic.h> 14 - #include <linux/igmp.h> 15 14 #include <net/sock.h> 16 15 #include <net/inet_common.h> 17 16 #include <net/inet_hashtables.h> ··· 19 20 #include <net/tcp_states.h> 20 21 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 21 22 #include <net/transp_v6.h> 22 - #include <net/addrconf.h> 23 23 #endif 24 24 #include <net/mptcp.h> 25 25 #include <net/xfrm.h> ··· 2876 2878 return ret; 2877 2879 } 2878 2880 2881 + static bool mptcp_unsupported(int level, int optname) 2882 + { 2883 + if (level == SOL_IP) { 2884 + switch (optname) { 2885 + case IP_ADD_MEMBERSHIP: 2886 + case IP_ADD_SOURCE_MEMBERSHIP: 2887 + case IP_DROP_MEMBERSHIP: 2888 + case IP_DROP_SOURCE_MEMBERSHIP: 2889 + case IP_BLOCK_SOURCE: 2890 + case IP_UNBLOCK_SOURCE: 2891 + case MCAST_JOIN_GROUP: 2892 + case MCAST_LEAVE_GROUP: 2893 + case MCAST_JOIN_SOURCE_GROUP: 2894 + case MCAST_LEAVE_SOURCE_GROUP: 2895 + case MCAST_BLOCK_SOURCE: 2896 + case MCAST_UNBLOCK_SOURCE: 2897 + case MCAST_MSFILTER: 2898 + return true; 2899 + } 2900 + return false; 2901 + } 2902 + if (level == SOL_IPV6) { 2903 + switch (optname) { 2904 + case IPV6_ADDRFORM: 2905 + case IPV6_ADD_MEMBERSHIP: 2906 + case IPV6_DROP_MEMBERSHIP: 2907 + case IPV6_JOIN_ANYCAST: 2908 + case IPV6_LEAVE_ANYCAST: 2909 + case MCAST_JOIN_GROUP: 2910 + case MCAST_LEAVE_GROUP: 2911 + case MCAST_JOIN_SOURCE_GROUP: 2912 + case MCAST_LEAVE_SOURCE_GROUP: 2913 + case MCAST_BLOCK_SOURCE: 2914 + case MCAST_UNBLOCK_SOURCE: 2915 + case MCAST_MSFILTER: 2916 + return true; 2917 + } 2918 + return false; 2919 + } 2920 + return false; 2921 + } 2922 + 2879 2923 static int mptcp_setsockopt(struct sock *sk, int level, int optname, 2880 2924 sockptr_t optval, unsigned int optlen) 2881 2925 { ··· 2925 2885 struct sock *ssk; 2926 2886 2927 2887 pr_debug("msk=%p", msk); 2888 + 2889 + if (mptcp_unsupported(level, optname)) 2890 + return -ENOPROTOOPT; 2928 2891 2929 2892 if (level == SOL_SOCKET) 
2930 2893 return mptcp_setsockopt_sol_socket(msk, optname, optval, optlen); ··· 3462 3419 return mask; 3463 3420 } 3464 3421 3465 - static int mptcp_release(struct socket *sock) 3466 - { 3467 - struct mptcp_subflow_context *subflow; 3468 - struct sock *sk = sock->sk; 3469 - struct mptcp_sock *msk; 3470 - 3471 - if (!sk) 3472 - return 0; 3473 - 3474 - lock_sock(sk); 3475 - 3476 - msk = mptcp_sk(sk); 3477 - 3478 - mptcp_for_each_subflow(msk, subflow) { 3479 - struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 3480 - 3481 - ip_mc_drop_socket(ssk); 3482 - } 3483 - 3484 - release_sock(sk); 3485 - 3486 - return inet_release(sock); 3487 - } 3488 - 3489 3422 static const struct proto_ops mptcp_stream_ops = { 3490 3423 .family = PF_INET, 3491 3424 .owner = THIS_MODULE, 3492 - .release = mptcp_release, 3425 + .release = inet_release, 3493 3426 .bind = mptcp_bind, 3494 3427 .connect = mptcp_stream_connect, 3495 3428 .socketpair = sock_no_socketpair, ··· 3557 3538 } 3558 3539 3559 3540 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 3560 - static int mptcp6_release(struct socket *sock) 3561 - { 3562 - struct mptcp_subflow_context *subflow; 3563 - struct mptcp_sock *msk; 3564 - struct sock *sk = sock->sk; 3565 - 3566 - if (!sk) 3567 - return 0; 3568 - 3569 - lock_sock(sk); 3570 - 3571 - msk = mptcp_sk(sk); 3572 - 3573 - mptcp_for_each_subflow(msk, subflow) { 3574 - struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 3575 - 3576 - ip_mc_drop_socket(ssk); 3577 - ipv6_sock_mc_close(ssk); 3578 - ipv6_sock_ac_close(ssk); 3579 - } 3580 - 3581 - release_sock(sk); 3582 - return inet6_release(sock); 3583 - } 3584 - 3585 3541 static const struct proto_ops mptcp_v6_stream_ops = { 3586 3542 .family = PF_INET6, 3587 3543 .owner = THIS_MODULE, 3588 - .release = mptcp6_release, 3544 + .release = inet6_release, 3589 3545 .bind = mptcp_bind, 3590 3546 .connect = mptcp_stream_connect, 3591 3547 .socketpair = sock_no_socketpair,
+13 -7
net/ncsi/ncsi-manage.c
··· 105 105 monitor_state = nc->monitor.state; 106 106 spin_unlock_irqrestore(&nc->lock, flags); 107 107 108 - if (!enabled || chained) { 109 - ncsi_stop_channel_monitor(nc); 110 - return; 111 - } 108 + if (!enabled) 109 + return; /* expected race disabling timer */ 110 + if (WARN_ON_ONCE(chained)) 111 + goto bad_state; 112 + 112 113 if (state != NCSI_CHANNEL_INACTIVE && 113 114 state != NCSI_CHANNEL_ACTIVE) { 114 - ncsi_stop_channel_monitor(nc); 115 + bad_state: 116 + netdev_warn(ndp->ndev.dev, 117 + "Bad NCSI monitor state channel %d 0x%x %s queue\n", 118 + nc->id, state, chained ? "on" : "off"); 119 + spin_lock_irqsave(&nc->lock, flags); 120 + nc->monitor.enabled = false; 121 + spin_unlock_irqrestore(&nc->lock, flags); 115 122 return; 116 123 } 117 124 ··· 143 136 ncsi_report_link(ndp, true); 144 137 ndp->flags |= NCSI_DEV_RESHUFFLE; 145 138 146 - ncsi_stop_channel_monitor(nc); 147 - 148 139 ncm = &nc->modes[NCSI_MODE_LINK]; 149 140 spin_lock_irqsave(&nc->lock, flags); 141 + nc->monitor.enabled = false; 150 142 nc->state = NCSI_CHANNEL_INVISIBLE; 151 143 ncm->data[2] &= ~0x1; 152 144 spin_unlock_irqrestore(&nc->lock, flags);
+10
net/nfc/llcp_sock.c
··· 108 108 llcp_sock->service_name_len, 109 109 GFP_KERNEL); 110 110 if (!llcp_sock->service_name) { 111 + nfc_llcp_local_put(llcp_sock->local); 111 112 ret = -ENOMEM; 112 113 goto put_dev; 113 114 } 114 115 llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock); 115 116 if (llcp_sock->ssap == LLCP_SAP_MAX) { 117 + nfc_llcp_local_put(llcp_sock->local); 116 118 kfree(llcp_sock->service_name); 117 119 llcp_sock->service_name = NULL; 118 120 ret = -EADDRINUSE; ··· 673 671 ret = -EISCONN; 674 672 goto error; 675 673 } 674 + if (sk->sk_state == LLCP_CONNECTING) { 675 + ret = -EINPROGRESS; 676 + goto error; 677 + } 676 678 677 679 dev = nfc_get_device(addr->dev_idx); 678 680 if (dev == NULL) { ··· 708 702 llcp_sock->local = nfc_llcp_local_get(local); 709 703 llcp_sock->ssap = nfc_llcp_get_local_ssap(local); 710 704 if (llcp_sock->ssap == LLCP_SAP_MAX) { 705 + nfc_llcp_local_put(llcp_sock->local); 711 706 ret = -ENOMEM; 712 707 goto put_dev; 713 708 } ··· 750 743 751 744 sock_unlink: 752 745 nfc_llcp_sock_unlink(&local->connecting_sockets, sk); 746 + kfree(llcp_sock->service_name); 747 + llcp_sock->service_name = NULL; 753 748 754 749 sock_llcp_release: 755 750 nfc_llcp_put_ssap(local, llcp_sock->ssap); 751 + nfc_llcp_local_put(llcp_sock->local); 756 752 757 753 put_dev: 758 754 nfc_put_device(dev);
+4 -4
net/openvswitch/conntrack.c
··· 2034 2034 static int ovs_ct_limit_get_default_limit(struct ovs_ct_limit_info *info, 2035 2035 struct sk_buff *reply) 2036 2036 { 2037 - struct ovs_zone_limit zone_limit; 2038 - 2039 - zone_limit.zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE; 2040 - zone_limit.limit = info->default_limit; 2037 + struct ovs_zone_limit zone_limit = { 2038 + .zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE, 2039 + .limit = info->default_limit, 2040 + }; 2041 2041 2042 2042 return nla_put_nohdr(reply, sizeof(zone_limit), &zone_limit); 2043 2043 }
+4 -1
net/qrtr/qrtr.c
··· 271 271 flow = kzalloc(sizeof(*flow), GFP_KERNEL); 272 272 if (flow) { 273 273 init_waitqueue_head(&flow->resume_tx); 274 - radix_tree_insert(&node->qrtr_tx_flow, key, flow); 274 + if (radix_tree_insert(&node->qrtr_tx_flow, key, flow)) { 275 + kfree(flow); 276 + flow = NULL; 277 + } 275 278 } 276 279 } 277 280 mutex_unlock(&node->qrtr_tx_lock);
+3 -1
net/rds/message.c
··· 180 180 rds_message_purge(rm); 181 181 182 182 kfree(rm); 183 + rm = NULL; 183 184 } 184 185 } 185 186 EXPORT_SYMBOL_GPL(rds_message_put); ··· 348 347 rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE); 349 348 rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs); 350 349 if (IS_ERR(rm->data.op_sg)) { 350 + void *err = ERR_CAST(rm->data.op_sg); 351 351 rds_message_put(rm); 352 - return ERR_CAST(rm->data.op_sg); 352 + return err; 353 353 } 354 354 355 355 for (i = 0; i < rm->data.op_nents; ++i) {
+1 -1
net/rds/send.c
··· 665 665 unlock_and_drop: 666 666 spin_unlock_irqrestore(&rm->m_rs_lock, flags); 667 667 rds_message_put(rm); 668 - if (was_on_sock) 668 + if (was_on_sock && rm) 669 669 rds_message_put(rm); 670 670 } 671 671
+4 -3
net/rfkill/core.c
··· 69 69 70 70 struct rfkill_int_event { 71 71 struct list_head list; 72 - struct rfkill_event ev; 72 + struct rfkill_event_ext ev; 73 73 }; 74 74 75 75 struct rfkill_data { ··· 253 253 } 254 254 #endif /* CONFIG_RFKILL_LEDS */ 255 255 256 - static void rfkill_fill_event(struct rfkill_event *ev, struct rfkill *rfkill, 256 + static void rfkill_fill_event(struct rfkill_event_ext *ev, 257 + struct rfkill *rfkill, 257 258 enum rfkill_operation op) 258 259 { 259 260 unsigned long flags; ··· 1238 1237 size_t count, loff_t *pos) 1239 1238 { 1240 1239 struct rfkill *rfkill; 1241 - struct rfkill_event ev; 1240 + struct rfkill_event_ext ev; 1242 1241 int ret; 1243 1242 1244 1243 /* we don't need the 'hard' variable but accept it */
+31 -17
net/sched/act_api.c
··· 158 158 return 0; 159 159 } 160 160 161 - int __tcf_idr_release(struct tc_action *p, bool bind, bool strict) 161 + static int __tcf_idr_release(struct tc_action *p, bool bind, bool strict) 162 162 { 163 163 int ret = 0; 164 164 ··· 184 184 185 185 return ret; 186 186 } 187 - EXPORT_SYMBOL(__tcf_idr_release); 187 + 188 + int tcf_idr_release(struct tc_action *a, bool bind) 189 + { 190 + const struct tc_action_ops *ops = a->ops; 191 + int ret; 192 + 193 + ret = __tcf_idr_release(a, bind, false); 194 + if (ret == ACT_P_DELETED) 195 + module_put(ops->owner); 196 + return ret; 197 + } 198 + EXPORT_SYMBOL(tcf_idr_release); 188 199 189 200 static size_t tcf_action_shared_attrs_size(const struct tc_action *act) 190 201 { ··· 504 493 } 505 494 506 495 p->idrinfo = idrinfo; 496 + __module_get(ops->owner); 507 497 p->ops = ops; 508 498 *a = p; 509 499 return 0; ··· 1004 992 struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, 1005 993 struct nlattr *nla, struct nlattr *est, 1006 994 char *name, int ovr, int bind, 1007 - struct tc_action_ops *a_o, bool rtnl_held, 995 + struct tc_action_ops *a_o, int *init_res, 996 + bool rtnl_held, 1008 997 struct netlink_ext_ack *extack) 1009 998 { 1010 999 struct nla_bitfield32 flags = { 0, 0 }; ··· 1041 1028 } 1042 1029 if (err < 0) 1043 1030 goto err_out; 1031 + *init_res = err; 1044 1032 1045 1033 if (!name && tb[TCA_ACT_COOKIE]) 1046 1034 tcf_set_action_cookie(&a->act_cookie, cookie); 1047 1035 1048 1036 if (!name) 1049 1037 a->hw_stats = hw_stats; 1050 - 1051 - /* module count goes up only when brand new policy is created 1052 - * if it exists and is only bound to in a_o->init() then 1053 - * ACT_P_CREATED is not returned (a zero is). 
* ACT_P_CREATED is not returned (a zero is).
1054 - */ 1055 - if (err != ACT_P_CREATED) 1056 - module_put(a_o->owner); 1057 1038 1058 1039 return a; 1059 1040 ··· 1063 1056 1064 1057 int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, 1065 1058 struct nlattr *est, char *name, int ovr, int bind, 1066 - struct tc_action *actions[], size_t *attr_size, 1059 + struct tc_action *actions[], int init_res[], size_t *attr_size, 1067 1060 bool rtnl_held, struct netlink_ext_ack *extack) 1068 1061 { 1069 1062 struct tc_action_ops *ops[TCA_ACT_MAX_PRIO] = {}; ··· 1091 1084 1092 1085 for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) { 1093 1086 act = tcf_action_init_1(net, tp, tb[i], est, name, ovr, bind, 1094 - ops[i - 1], rtnl_held, extack); 1087 + ops[i - 1], &init_res[i - 1], rtnl_held, 1088 + extack); 1095 1089 if (IS_ERR(act)) { 1096 1090 err = PTR_ERR(act); 1097 1091 goto err; ··· 1108 1100 tcf_idr_insert_many(actions); 1109 1101 1110 1102 *attr_size = tcf_action_full_attrs_size(sz); 1111 - return i - 1; 1103 + err = i - 1; 1104 + goto err_mod; 1112 1105 1113 1106 err: 1114 1107 tcf_action_destroy(actions, bind); ··· 1506 1497 struct netlink_ext_ack *extack) 1507 1498 { 1508 1499 size_t attr_size = 0; 1509 - int loop, ret; 1500 + int loop, ret, i; 1510 1501 struct tc_action *actions[TCA_ACT_MAX_PRIO] = {}; 1502 + int init_res[TCA_ACT_MAX_PRIO] = {}; 1511 1503 1512 1504 for (loop = 0; loop < 10; loop++) { 1513 1505 ret = tcf_action_init(net, NULL, nla, NULL, NULL, ovr, 0, 1514 - actions, &attr_size, true, extack); 1506 + actions, init_res, &attr_size, true, extack); 1515 1507 if (ret != -EAGAIN) 1516 1508 break; 1517 1509 } ··· 1520 1510 if (ret < 0) 1521 1511 return ret; 1522 1512 ret = tcf_add_notify(net, n, actions, portid, attr_size, extack); 1523 - if (ovr) 1524 - tcf_action_put_many(actions); 1513 + 1514 + /* only put existing actions */ 1515 + for (i = 0; i < TCA_ACT_MAX_PRIO; i++) 1516 + if (init_res[i] == ACT_P_CREATED) 1517 + actions[i] = NULL; 1518 + tcf_action_put_many(actions); 
1525 1519 1526 1520 return ret; 1527 1521 }
+8 -8
net/sched/cls_api.c
··· 646 646 struct net_device *dev = block_cb->indr.dev; 647 647 struct Qdisc *sch = block_cb->indr.sch; 648 648 struct netlink_ext_ack extack = {}; 649 - struct flow_block_offload bo; 649 + struct flow_block_offload bo = {}; 650 650 651 651 tcf_block_offload_init(&bo, dev, sch, FLOW_BLOCK_UNBIND, 652 652 block_cb->indr.binder_type, ··· 3040 3040 { 3041 3041 #ifdef CONFIG_NET_CLS_ACT 3042 3042 { 3043 + int init_res[TCA_ACT_MAX_PRIO] = {}; 3043 3044 struct tc_action *act; 3044 3045 size_t attr_size = 0; 3045 3046 ··· 3052 3051 return PTR_ERR(a_o); 3053 3052 act = tcf_action_init_1(net, tp, tb[exts->police], 3054 3053 rate_tlv, "police", ovr, 3055 - TCA_ACT_BIND, a_o, rtnl_held, 3056 - extack); 3057 - if (IS_ERR(act)) { 3058 - module_put(a_o->owner); 3054 + TCA_ACT_BIND, a_o, init_res, 3055 + rtnl_held, extack); 3056 + module_put(a_o->owner); 3057 + if (IS_ERR(act)) 3059 3058 return PTR_ERR(act); 3060 - } 3061 3059 3062 3060 act->type = exts->type = TCA_OLD_COMPAT; 3063 3061 exts->actions[0] = act; ··· 3067 3067 3068 3068 err = tcf_action_init(net, tp, tb[exts->action], 3069 3069 rate_tlv, NULL, ovr, TCA_ACT_BIND, 3070 - exts->actions, &attr_size, 3071 - rtnl_held, extack); 3070 + exts->actions, init_res, 3071 + &attr_size, rtnl_held, extack); 3072 3072 if (err < 0) 3073 3073 return err; 3074 3074 exts->nr_actions = err;
+3 -2
net/sched/sch_htb.c
··· 1675 1675 cl->parent->common.classid, 1676 1676 NULL); 1677 1677 if (q->offload) { 1678 - if (new_q) 1678 + if (new_q) { 1679 1679 htb_set_lockdep_class_child(new_q); 1680 - htb_parent_to_leaf_offload(sch, dev_queue, new_q); 1680 + htb_parent_to_leaf_offload(sch, dev_queue, new_q); 1681 + } 1681 1682 } 1682 1683 } 1683 1684
+3
net/sched/sch_teql.c
··· 134 134 struct teql_sched_data *dat = qdisc_priv(sch); 135 135 struct teql_master *master = dat->m; 136 136 137 + if (!master) 138 + return; 139 + 137 140 prev = master->slaves; 138 141 if (prev) { 139 142 do {
+3 -4
net/sctp/ipv6.c
··· 664 664 if (!(type & IPV6_ADDR_UNICAST)) 665 665 return 0; 666 666 667 - return sp->inet.freebind || net->ipv6.sysctl.ip_nonlocal_bind || 668 - ipv6_chk_addr(net, in6, NULL, 0); 667 + return ipv6_can_nonlocal_bind(net, &sp->inet) || 668 + ipv6_chk_addr(net, in6, NULL, 0); 669 669 } 670 670 671 671 /* This function checks if the address is a valid address to be used for ··· 954 954 net = sock_net(&opt->inet.sk); 955 955 rcu_read_lock(); 956 956 dev = dev_get_by_index_rcu(net, addr->v6.sin6_scope_id); 957 - if (!dev || !(opt->inet.freebind || 958 - net->ipv6.sysctl.ip_nonlocal_bind || 957 + if (!dev || !(ipv6_can_nonlocal_bind(net, &opt->inet) || 959 958 ipv6_chk_addr(net, &addr->v6.sin6_addr, 960 959 dev, 0))) { 961 960 rcu_read_unlock();
+3 -3
net/tipc/bearer.h
··· 154 154 * care of initializing all other fields. 155 155 */ 156 156 struct tipc_bearer { 157 - void __rcu *media_ptr; /* initalized by media */ 158 - u32 mtu; /* initalized by media */ 159 - struct tipc_media_addr addr; /* initalized by media */ 157 + void __rcu *media_ptr; /* initialized by media */ 158 + u32 mtu; /* initialized by media */ 159 + struct tipc_media_addr addr; /* initialized by media */ 160 160 char name[TIPC_MAX_BEARER_NAME]; 161 161 struct tipc_media *media; 162 162 struct tipc_media_addr bcast_addr;
+2 -1
net/tipc/crypto.c
··· 1941 1941 goto rcv; 1942 1942 if (tipc_aead_clone(&tmp, aead) < 0) 1943 1943 goto rcv; 1944 + WARN_ON(!refcount_inc_not_zero(&tmp->refcnt)); 1944 1945 if (tipc_crypto_key_attach(rx, tmp, ehdr->tx_key, false) < 0) { 1945 1946 tipc_aead_free(&tmp->rcu); 1946 1947 goto rcv; 1947 1948 } 1948 1949 tipc_aead_put(aead); 1949 - aead = tipc_aead_get(tmp); 1950 + aead = tmp; 1950 1951 } 1951 1952 1952 1953 if (unlikely(err)) {
+1 -1
net/tipc/net.c
··· 89 89 * - A spin lock to protect the registry of kernel/driver users (reg.c) 90 90 * - A global spin_lock (tipc_port_lock), which only task is to ensure 91 91 * consistency where more than one port is involved in an operation, 92 - * i.e., whe a port is part of a linked list of ports. 92 + * i.e., when a port is part of a linked list of ports. 93 93 * There are two such lists; 'port_list', which is used for management, 94 94 * and 'wait_list', which is used to queue ports during congestion. 95 95 *
+1 -1
net/tipc/node.c
··· 1734 1734 } 1735 1735 1736 1736 /* tipc_node_xmit_skb(): send single buffer to destination 1737 - * Buffers sent via this functon are generally TIPC_SYSTEM_IMPORTANCE 1737 + * Buffers sent via this function are generally TIPC_SYSTEM_IMPORTANCE 1738 1738 * messages, which will not be rejected 1739 1739 * The only exception is datagram messages rerouted after secondary 1740 1740 * lookup, which are rare and safe to dispose of anyway.
+1 -1
net/tipc/socket.c
··· 1265 1265 spin_lock_bh(&inputq->lock); 1266 1266 if (skb_peek(arrvq) == skb) { 1267 1267 skb_queue_splice_tail_init(&tmpq, inputq); 1268 - kfree_skb(__skb_dequeue(arrvq)); 1268 + __skb_dequeue(arrvq); 1269 1269 } 1270 1270 spin_unlock_bh(&inputq->lock); 1271 1271 __skb_queue_purge(&tmpq);
+7 -3
net/wireless/nl80211.c
··· 5 5 * Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net> 6 6 * Copyright 2013-2014 Intel Mobile Communications GmbH 7 7 * Copyright 2015-2017 Intel Deutschland GmbH 8 - * Copyright (C) 2018-2020 Intel Corporation 8 + * Copyright (C) 2018-2021 Intel Corporation 9 9 */ 10 10 11 11 #include <linux/if.h> ··· 229 229 unsigned int len = nla_len(attr); 230 230 const struct element *elem; 231 231 const struct ieee80211_mgmt *mgmt = (void *)data; 232 - bool s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control); 233 232 unsigned int fixedlen, hdrlen; 233 + bool s1g_bcn; 234 234 235 + if (len < offsetofend(typeof(*mgmt), frame_control)) 236 + goto err; 237 + 238 + s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control); 235 239 if (s1g_bcn) { 236 240 fixedlen = offsetof(struct ieee80211_ext, 237 241 u.s1g_beacon.variable); ··· 5489 5485 rdev, info->attrs[NL80211_ATTR_UNSOL_BCAST_PROBE_RESP], 5490 5486 &params); 5491 5487 if (err) 5492 - return err; 5488 + goto out; 5493 5489 } 5494 5490 5495 5491 nl80211_calculate_ap_params(&params);
+8 -6
net/wireless/scan.c
··· 2352 2352 return NULL; 2353 2353 2354 2354 if (ext) { 2355 - struct ieee80211_s1g_bcn_compat_ie *compat; 2356 - u8 *ie; 2355 + const struct ieee80211_s1g_bcn_compat_ie *compat; 2356 + const struct element *elem; 2357 2357 2358 - ie = (void *)cfg80211_find_ie(WLAN_EID_S1G_BCN_COMPAT, 2359 - variable, ielen); 2360 - if (!ie) 2358 + elem = cfg80211_find_elem(WLAN_EID_S1G_BCN_COMPAT, 2359 + variable, ielen); 2360 + if (!elem) 2361 2361 return NULL; 2362 - compat = (void *)(ie + 2); 2362 + if (elem->datalen < sizeof(*compat)) 2363 + return NULL; 2364 + compat = (void *)elem->data; 2363 2365 bssid = ext->u.s1g_beacon.sa; 2364 2366 capability = le16_to_cpu(compat->compat_info); 2365 2367 beacon_int = le16_to_cpu(compat->beacon_int);
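The scan.c hunk above switches from cfg80211_find_ie() to cfg80211_find_elem() and rejects an element whose datalen is too small for the structure about to be overlaid on its payload. A minimal userspace sketch of that length-checked TLV lookup (the struct layout here is illustrative, not the kernel's):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the kernel's struct element: a TLV with a
 * one-byte id, one-byte length, and a variable payload. */
struct element {
	uint8_t id;
	uint8_t datalen;
	uint8_t data[];
};

/* Find the first element with the given id, returning NULL unless the
 * whole element (header and payload) fits inside the buffer. */
static const struct element *find_elem(uint8_t id, const uint8_t *buf,
				       size_t len)
{
	size_t off = 0;

	while (off + 2 <= len) {
		const struct element *e = (const struct element *)(buf + off);

		if (off + 2 + e->datalen > len)
			return NULL;	/* truncated element: reject it */
		if (e->id == id)
			return e;
		off += 2 + e->datalen;
	}
	return NULL;
}
```

The second half of the hunk is the remaining check: before casting `elem->data` to a fixed-size structure, verify `elem->datalen >= sizeof(*compat)` so the overlay never reads past the element.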
+1 -1
net/wireless/sme.c
··· 529 529 cfg80211_sme_free(wdev); 530 530 } 531 531 532 - if (WARN_ON(wdev->conn)) 532 + if (wdev->conn) 533 533 return -EINPROGRESS; 534 534 535 535 wdev->conn = kzalloc(sizeof(*wdev->conn), GFP_KERNEL);
+9 -3
net/xfrm/xfrm_compat.c
··· 216 216 case XFRM_MSG_GETSADINFO: 217 217 case XFRM_MSG_GETSPDINFO: 218 218 default: 219 - WARN_ONCE(1, "unsupported nlmsg_type %d", nlh_src->nlmsg_type); 219 + pr_warn_once("unsupported nlmsg_type %d\n", nlh_src->nlmsg_type); 220 220 return ERR_PTR(-EOPNOTSUPP); 221 221 } 222 222 ··· 277 277 return xfrm_nla_cpy(dst, src, nla_len(src)); 278 278 default: 279 279 BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID); 280 - WARN_ONCE(1, "unsupported nla_type %d", src->nla_type); 280 + pr_warn_once("unsupported nla_type %d\n", src->nla_type); 281 281 return -EOPNOTSUPP; 282 282 } 283 283 } ··· 315 315 struct sk_buff *new = NULL; 316 316 int err; 317 317 318 - if (WARN_ON_ONCE(type >= ARRAY_SIZE(xfrm_msg_min))) 318 + if (type >= ARRAY_SIZE(xfrm_msg_min)) { 319 + pr_warn_once("unsupported nlmsg_type %d\n", nlh_src->nlmsg_type); 319 320 return -EOPNOTSUPP; 321 + } 320 322 321 323 if (skb_shinfo(skb)->frag_list == NULL) { 322 324 new = alloc_skb(skb->len + skb_tailroom(skb), GFP_ATOMIC); ··· 380 378 struct nlmsghdr *nlmsg = dst; 381 379 struct nlattr *nla; 382 380 381 + /* xfrm_user_rcv_msg_compat() relies on fact that 32-bit messages 382 + * have the same len or shorter than 64-bit ones. 383 + * 32-bit translation that is bigger than 64-bit original is unexpected. 384 + */ 383 385 if (WARN_ON_ONCE(copy_len > payload)) 384 386 copy_len = payload; 385 387
-2
net/xfrm/xfrm_device.c
··· 134 134 return skb; 135 135 } 136 136 137 - xo->flags |= XFRM_XMIT; 138 - 139 137 if (skb_is_gso(skb) && unlikely(x->xso.dev != dev)) { 140 138 struct sk_buff *segs; 141 139
+3
net/xfrm/xfrm_interface.c
··· 306 306 307 307 icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu); 308 308 } else { 309 + if (!(ip_hdr(skb)->frag_off & htons(IP_DF))) 310 + goto xmit; 309 311 icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, 310 312 htonl(mtu)); 311 313 } ··· 316 314 return -EMSGSIZE; 317 315 } 318 316 317 + xmit: 319 318 xfrmi_scrub_packet(skb, !net_eq(xi->net, dev_net(dev))); 320 319 skb_dst_set(skb, dst); 321 320 skb->dev = tdev;
+18 -5
net/xfrm/xfrm_output.c
··· 503 503 return err; 504 504 } 505 505 506 - int xfrm_output_resume(struct sk_buff *skb, int err) 506 + int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err) 507 507 { 508 508 struct net *net = xs_net(skb_dst(skb)->xfrm); 509 509 510 510 while (likely((err = xfrm_output_one(skb, err)) == 0)) { 511 511 nf_reset_ct(skb); 512 512 513 - err = skb_dst(skb)->ops->local_out(net, skb->sk, skb); 513 + err = skb_dst(skb)->ops->local_out(net, sk, skb); 514 514 if (unlikely(err != 1)) 515 515 goto out; 516 516 517 517 if (!skb_dst(skb)->xfrm) 518 - return dst_output(net, skb->sk, skb); 518 + return dst_output(net, sk, skb); 519 519 520 520 err = nf_hook(skb_dst(skb)->ops->family, 521 - NF_INET_POST_ROUTING, net, skb->sk, skb, 521 + NF_INET_POST_ROUTING, net, sk, skb, 522 522 NULL, skb_dst(skb)->dev, xfrm_output2); 523 523 if (unlikely(err != 1)) 524 524 goto out; ··· 534 534 535 535 static int xfrm_output2(struct net *net, struct sock *sk, struct sk_buff *skb) 536 536 { 537 - return xfrm_output_resume(skb, 1); 537 + return xfrm_output_resume(sk, skb, 1); 538 538 } 539 539 540 540 static int xfrm_output_gso(struct net *net, struct sock *sk, struct sk_buff *skb) ··· 660 660 { 661 661 int err; 662 662 663 + if (x->outer_mode.encap == XFRM_MODE_BEET && 664 + ip_is_fragment(ip_hdr(skb))) { 665 + net_warn_ratelimited("BEET mode doesn't support inner IPv4 fragments\n"); 666 + return -EAFNOSUPPORT; 667 + } 668 + 663 669 err = xfrm4_tunnel_check_size(skb); 664 670 if (err) 665 671 return err; ··· 711 705 static int xfrm6_extract_output(struct xfrm_state *x, struct sk_buff *skb) 712 706 { 713 707 #if IS_ENABLED(CONFIG_IPV6) 708 + unsigned int ptr = 0; 714 709 int err; 710 + 711 + if (x->outer_mode.encap == XFRM_MODE_BEET && 712 + ipv6_find_hdr(skb, &ptr, NEXTHDR_FRAGMENT, NULL, NULL) >= 0) { 713 + net_warn_ratelimited("BEET mode doesn't support inner IPv6 fragments\n"); 714 + return -EAFNOSUPPORT; 715 + } 715 716 716 717 err = xfrm6_tunnel_check_size(skb); 717 718 if 
(err)
+6 -5
net/xfrm/xfrm_state.c
··· 44 44 */ 45 45 46 46 static unsigned int xfrm_state_hashmax __read_mostly = 1 * 1024 * 1024; 47 - static __read_mostly seqcount_t xfrm_state_hash_generation = SEQCNT_ZERO(xfrm_state_hash_generation); 48 47 static struct kmem_cache *xfrm_state_cache __ro_after_init; 49 48 50 49 static DECLARE_WORK(xfrm_state_gc_work, xfrm_state_gc_task); ··· 139 140 } 140 141 141 142 spin_lock_bh(&net->xfrm.xfrm_state_lock); 142 - write_seqcount_begin(&xfrm_state_hash_generation); 143 + write_seqcount_begin(&net->xfrm.xfrm_state_hash_generation); 143 144 144 145 nhashmask = (nsize / sizeof(struct hlist_head)) - 1U; 145 146 odst = xfrm_state_deref_prot(net->xfrm.state_bydst, net); ··· 155 156 rcu_assign_pointer(net->xfrm.state_byspi, nspi); 156 157 net->xfrm.state_hmask = nhashmask; 157 158 158 - write_seqcount_end(&xfrm_state_hash_generation); 159 + write_seqcount_end(&net->xfrm.xfrm_state_hash_generation); 159 160 spin_unlock_bh(&net->xfrm.xfrm_state_lock); 160 161 161 162 osize = (ohashmask + 1) * sizeof(struct hlist_head); ··· 1062 1063 1063 1064 to_put = NULL; 1064 1065 1065 - sequence = read_seqcount_begin(&xfrm_state_hash_generation); 1066 + sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation); 1066 1067 1067 1068 rcu_read_lock(); 1068 1069 h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family); ··· 1175 1176 if (to_put) 1176 1177 xfrm_state_put(to_put); 1177 1178 1178 - if (read_seqcount_retry(&xfrm_state_hash_generation, sequence)) { 1179 + if (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence)) { 1179 1180 *err = -EAGAIN; 1180 1181 if (x) { 1181 1182 xfrm_state_put(x); ··· 2665 2666 net->xfrm.state_num = 0; 2666 2667 INIT_WORK(&net->xfrm.state_hash_work, xfrm_hash_resize); 2667 2668 spin_lock_init(&net->xfrm.xfrm_state_lock); 2669 + seqcount_spinlock_init(&net->xfrm.xfrm_state_hash_generation, 2670 + &net->xfrm.xfrm_state_lock); 2668 2671 return 0; 2669 2672 2670 2673 out_byspi:
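The xfrm_state.c change above moves the hash-generation seqcount from a global seqcount_t into per-net state, tied to xfrm_state_lock. A toy userspace model of the seqcount read/retry protocol those call sites follow (this sketches the idea with C11 atomics, not the kernel primitives themselves):

```c
#include <stdatomic.h>

/* Toy seqcount: even value = stable, odd value = writer in progress. */
struct seqcount {
	atomic_uint seq;
};

static unsigned int seq_read_begin(struct seqcount *s)
{
	unsigned int seq;

	do {
		seq = atomic_load(&s->seq);
	} while (seq & 1);		/* wait out an in-flight writer */
	return seq;
}

/* Nonzero return means a writer ran meanwhile: redo the read section. */
static int seq_read_retry(struct seqcount *s, unsigned int seq)
{
	return atomic_load(&s->seq) != seq;
}

static void seq_write_begin(struct seqcount *s)
{
	atomic_fetch_add(&s->seq, 1);	/* count goes odd */
}

static void seq_write_end(struct seqcount *s)
{
	atomic_fetch_add(&s->seq, 1);	/* even again: readers may proceed */
}
```

In the hunk, xfrm_state_find() samples the count with read_seqcount_begin(), walks the hash tables, and bails out with -EAGAIN if read_seqcount_retry() sees that xfrm_hash_resize() bumped the generation underneath it.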
+33 -68
security/selinux/ss/avtab.c
··· 109 109 struct avtab_node *prev, *cur, *newnode; 110 110 u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD); 111 111 112 - if (!h) 112 + if (!h || !h->nslot) 113 113 return -EINVAL; 114 114 115 115 hvalue = avtab_hash(key, h->mask); ··· 154 154 struct avtab_node *prev, *cur; 155 155 u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD); 156 156 157 - if (!h) 157 + if (!h || !h->nslot) 158 158 return NULL; 159 159 hvalue = avtab_hash(key, h->mask); 160 160 for (prev = NULL, cur = h->htable[hvalue]; ··· 184 184 struct avtab_node *cur; 185 185 u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD); 186 186 187 - if (!h) 187 + if (!h || !h->nslot) 188 188 return NULL; 189 189 190 190 hvalue = avtab_hash(key, h->mask); ··· 220 220 struct avtab_node *cur; 221 221 u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD); 222 222 223 - if (!h) 223 + if (!h || !h->nslot) 224 224 return NULL; 225 225 226 226 hvalue = avtab_hash(key, h->mask); ··· 295 295 } 296 296 kvfree(h->htable); 297 297 h->htable = NULL; 298 + h->nel = 0; 298 299 h->nslot = 0; 299 300 h->mask = 0; 300 301 } ··· 304 303 { 305 304 h->htable = NULL; 306 305 h->nel = 0; 306 + h->nslot = 0; 307 + h->mask = 0; 307 308 } 308 309 309 - int avtab_alloc(struct avtab *h, u32 nrules) 310 + static int avtab_alloc_common(struct avtab *h, u32 nslot) 310 311 { 311 - u32 mask = 0; 312 - u32 shift = 0; 313 - u32 work = nrules; 314 - u32 nslot = 0; 315 - 316 - if (nrules == 0) 317 - goto avtab_alloc_out; 318 - 319 - while (work) { 320 - work = work >> 1; 321 - shift++; 322 - } 323 - if (shift > 2) 324 - shift = shift - 2; 325 - nslot = 1 << shift; 326 - if (nslot > MAX_AVTAB_HASH_BUCKETS) 327 - nslot = MAX_AVTAB_HASH_BUCKETS; 328 - mask = nslot - 1; 312 + if (!nslot) 313 + return 0; 329 314 330 315 h->htable = kvcalloc(nslot, sizeof(void *), GFP_KERNEL); 331 316 if (!h->htable) 332 317 return -ENOMEM; 333 318 334 - avtab_alloc_out: 335 - h->nel = 0; 336 319 
h->nslot = nslot; 337 - h->mask = mask; 338 - pr_debug("SELinux: %d avtab hash slots, %d rules.\n", 339 - h->nslot, nrules); 320 + h->mask = nslot - 1; 340 321 return 0; 341 322 } 342 323 343 - int avtab_duplicate(struct avtab *new, struct avtab *orig) 324 + int avtab_alloc(struct avtab *h, u32 nrules) 344 325 { 345 - int i; 346 - struct avtab_node *node, *tmp, *tail; 326 + int rc; 327 + u32 nslot = 0; 347 328 348 - memset(new, 0, sizeof(*new)); 349 - 350 - new->htable = kvcalloc(orig->nslot, sizeof(void *), GFP_KERNEL); 351 - if (!new->htable) 352 - return -ENOMEM; 353 - new->nslot = orig->nslot; 354 - new->mask = orig->mask; 355 - 356 - for (i = 0; i < orig->nslot; i++) { 357 - tail = NULL; 358 - for (node = orig->htable[i]; node; node = node->next) { 359 - tmp = kmem_cache_zalloc(avtab_node_cachep, GFP_KERNEL); 360 - if (!tmp) 361 - goto error; 362 - tmp->key = node->key; 363 - if (tmp->key.specified & AVTAB_XPERMS) { 364 - tmp->datum.u.xperms = 365 - kmem_cache_zalloc(avtab_xperms_cachep, 366 - GFP_KERNEL); 367 - if (!tmp->datum.u.xperms) { 368 - kmem_cache_free(avtab_node_cachep, tmp); 369 - goto error; 370 - } 371 - tmp->datum.u.xperms = node->datum.u.xperms; 372 - } else 373 - tmp->datum.u.data = node->datum.u.data; 374 - 375 - if (tail) 376 - tail->next = tmp; 377 - else 378 - new->htable[i] = tmp; 379 - 380 - tail = tmp; 381 - new->nel++; 329 + if (nrules != 0) { 330 + u32 shift = 1; 331 + u32 work = nrules >> 3; 332 + while (work) { 333 + work >>= 1; 334 + shift++; 382 335 } 336 + nslot = 1 << shift; 337 + if (nslot > MAX_AVTAB_HASH_BUCKETS) 338 + nslot = MAX_AVTAB_HASH_BUCKETS; 339 + 340 + rc = avtab_alloc_common(h, nslot); 341 + if (rc) 342 + return rc; 383 343 } 384 344 345 + pr_debug("SELinux: %d avtab hash slots, %d rules.\n", nslot, nrules); 385 346 return 0; 386 - error: 387 - avtab_destroy(new); 388 - return -ENOMEM; 347 + } 348 + 349 + int avtab_alloc_dup(struct avtab *new, const struct avtab *orig) 350 + { 351 + return avtab_alloc_common(new, 
orig->nslot); 389 352 } 390 353 391 354 void avtab_hash_eval(struct avtab *h, char *tag)
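The avtab_alloc() rewrite above sizes the hash table up front: roughly one slot per eight rules, rounded up to a power of two, capped at MAX_AVTAB_HASH_BUCKETS. A standalone sketch of just that sizing computation (the cap value here is assumed for illustration; the real constant lives in avtab.h):

```c
#include <stdint.h>

#define MAX_AVTAB_HASH_BUCKETS (1u << 13)	/* assumed cap for this sketch */

/* Mirror of the slot-count logic in the rewritten avtab_alloc():
 * start near nrules / 8, round up to a power of two, then cap. */
static uint32_t avtab_nslot(uint32_t nrules)
{
	uint32_t shift = 1;
	uint32_t work = nrules >> 3;
	uint32_t nslot;

	if (nrules == 0)
		return 0;		/* no rules: allocate no buckets */

	while (work) {
		work >>= 1;
		shift++;
	}
	nslot = 1u << shift;
	if (nslot > MAX_AVTAB_HASH_BUCKETS)
		nslot = MAX_AVTAB_HASH_BUCKETS;
	return nslot;
}
```

Factoring the allocation into avtab_alloc_common() then lets avtab_alloc_dup() reuse it with the original table's nslot, replacing the old avtab_duplicate() deep copy.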
+1 -1
security/selinux/ss/avtab.h
··· 89 89 90 90 void avtab_init(struct avtab *h); 91 91 int avtab_alloc(struct avtab *, u32); 92 - int avtab_duplicate(struct avtab *new, struct avtab *orig); 92 + int avtab_alloc_dup(struct avtab *new, const struct avtab *orig); 93 93 struct avtab_datum *avtab_search(struct avtab *h, struct avtab_key *k); 94 94 void avtab_destroy(struct avtab *h); 95 95 void avtab_hash_eval(struct avtab *h, char *tag);
+6 -6
security/selinux/ss/conditional.c
··· 605 605 struct cond_av_list *orig, 606 606 struct avtab *avtab) 607 607 { 608 - struct avtab_node *avnode; 609 608 u32 i; 610 609 611 610 memset(new, 0, sizeof(*new)); ··· 614 615 return -ENOMEM; 615 616 616 617 for (i = 0; i < orig->len; i++) { 617 - avnode = avtab_search_node(avtab, &orig->nodes[i]->key); 618 - if (WARN_ON(!avnode)) 619 - return -EINVAL; 620 - new->nodes[i] = avnode; 618 + new->nodes[i] = avtab_insert_nonunique(avtab, 619 + &orig->nodes[i]->key, 620 + &orig->nodes[i]->datum); 621 + if (!new->nodes[i]) 622 + return -ENOMEM; 621 623 new->len++; 622 624 } 623 625 ··· 630 630 { 631 631 int rc, i, j; 632 632 633 - rc = avtab_duplicate(&newp->te_cond_avtab, &origp->te_cond_avtab); 633 + rc = avtab_alloc_dup(&newp->te_cond_avtab, &origp->te_cond_avtab); 634 634 if (rc) 635 635 return rc; 636 636
+120 -37
security/selinux/ss/services.c
··· 1552 1552 if (!str) 1553 1553 goto out; 1554 1554 } 1555 + retry: 1555 1556 rcu_read_lock(); 1556 1557 policy = rcu_dereference(state->policy); 1557 1558 policydb = &policy->policydb; ··· 1566 1565 } else if (rc) 1567 1566 goto out_unlock; 1568 1567 rc = sidtab_context_to_sid(sidtab, &context, sid); 1568 + if (rc == -ESTALE) { 1569 + rcu_read_unlock(); 1570 + if (context.str) { 1571 + str = context.str; 1572 + context.str = NULL; 1573 + } 1574 + context_destroy(&context); 1575 + goto retry; 1576 + } 1569 1577 context_destroy(&context); 1570 1578 out_unlock: 1571 1579 rcu_read_unlock(); ··· 1724 1714 struct selinux_policy *policy; 1725 1715 struct policydb *policydb; 1726 1716 struct sidtab *sidtab; 1727 - struct class_datum *cladatum = NULL; 1717 + struct class_datum *cladatum; 1728 1718 struct context *scontext, *tcontext, newcontext; 1729 1719 struct sidtab_entry *sentry, *tentry; 1730 1720 struct avtab_key avkey; ··· 1746 1736 goto out; 1747 1737 } 1748 1738 1739 + retry: 1740 + cladatum = NULL; 1749 1741 context_init(&newcontext); 1750 1742 1751 1743 rcu_read_lock(); ··· 1892 1880 } 1893 1881 /* Obtain the sid for the context. */ 1894 1882 rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid); 1883 + if (rc == -ESTALE) { 1884 + rcu_read_unlock(); 1885 + context_destroy(&newcontext); 1886 + goto retry; 1887 + } 1895 1888 out_unlock: 1896 1889 rcu_read_unlock(); 1897 1890 context_destroy(&newcontext); ··· 2209 2192 struct selinux_load_state *load_state) 2210 2193 { 2211 2194 struct selinux_policy *oldpolicy, *newpolicy = load_state->policy; 2195 + unsigned long flags; 2212 2196 u32 seqno; 2213 2197 2214 2198 oldpolicy = rcu_dereference_protected(state->policy, ··· 2231 2213 seqno = newpolicy->latest_granting; 2232 2214 2233 2215 /* Install the new policy. 
*/ 2234 - rcu_assign_pointer(state->policy, newpolicy); 2216 + if (oldpolicy) { 2217 + sidtab_freeze_begin(oldpolicy->sidtab, &flags); 2218 + rcu_assign_pointer(state->policy, newpolicy); 2219 + sidtab_freeze_end(oldpolicy->sidtab, &flags); 2220 + } else { 2221 + rcu_assign_pointer(state->policy, newpolicy); 2222 + } 2235 2223 2236 2224 /* Load the policycaps from the new policy */ 2237 2225 security_load_policycaps(state, newpolicy); ··· 2381 2357 struct policydb *policydb; 2382 2358 struct sidtab *sidtab; 2383 2359 struct ocontext *c; 2384 - int rc = 0; 2360 + int rc; 2385 2361 2386 2362 if (!selinux_initialized(state)) { 2387 2363 *out_sid = SECINITSID_PORT; 2388 2364 return 0; 2389 2365 } 2390 2366 2367 + retry: 2368 + rc = 0; 2391 2369 rcu_read_lock(); 2392 2370 policy = rcu_dereference(state->policy); 2393 2371 policydb = &policy->policydb; ··· 2408 2382 if (!c->sid[0]) { 2409 2383 rc = sidtab_context_to_sid(sidtab, &c->context[0], 2410 2384 &c->sid[0]); 2385 + if (rc == -ESTALE) { 2386 + rcu_read_unlock(); 2387 + goto retry; 2388 + } 2411 2389 if (rc) 2412 2390 goto out; 2413 2391 } ··· 2438 2408 struct policydb *policydb; 2439 2409 struct sidtab *sidtab; 2440 2410 struct ocontext *c; 2441 - int rc = 0; 2411 + int rc; 2442 2412 2443 2413 if (!selinux_initialized(state)) { 2444 2414 *out_sid = SECINITSID_UNLABELED; 2445 2415 return 0; 2446 2416 } 2447 2417 2418 + retry: 2419 + rc = 0; 2448 2420 rcu_read_lock(); 2449 2421 policy = rcu_dereference(state->policy); 2450 2422 policydb = &policy->policydb; ··· 2467 2435 rc = sidtab_context_to_sid(sidtab, 2468 2436 &c->context[0], 2469 2437 &c->sid[0]); 2438 + if (rc == -ESTALE) { 2439 + rcu_read_unlock(); 2440 + goto retry; 2441 + } 2470 2442 if (rc) 2471 2443 goto out; 2472 2444 } ··· 2496 2460 struct policydb *policydb; 2497 2461 struct sidtab *sidtab; 2498 2462 struct ocontext *c; 2499 - int rc = 0; 2463 + int rc; 2500 2464 2501 2465 if (!selinux_initialized(state)) { 2502 2466 *out_sid = SECINITSID_UNLABELED; 
2503 2467 return 0; 2504 2468 } 2505 2469 2470 + retry: 2471 + rc = 0; 2506 2472 rcu_read_lock(); 2507 2473 policy = rcu_dereference(state->policy); 2508 2474 policydb = &policy->policydb; ··· 2525 2487 if (!c->sid[0]) { 2526 2488 rc = sidtab_context_to_sid(sidtab, &c->context[0], 2527 2489 &c->sid[0]); 2490 + if (rc == -ESTALE) { 2491 + rcu_read_unlock(); 2492 + goto retry; 2493 + } 2528 2494 if (rc) 2529 2495 goto out; 2530 2496 } ··· 2552 2510 struct selinux_policy *policy; 2553 2511 struct policydb *policydb; 2554 2512 struct sidtab *sidtab; 2555 - int rc = 0; 2513 + int rc; 2556 2514 struct ocontext *c; 2557 2515 2558 2516 if (!selinux_initialized(state)) { ··· 2560 2518 return 0; 2561 2519 } 2562 2520 2521 + retry: 2522 + rc = 0; 2563 2523 rcu_read_lock(); 2564 2524 policy = rcu_dereference(state->policy); 2565 2525 policydb = &policy->policydb; ··· 2578 2534 if (!c->sid[0] || !c->sid[1]) { 2579 2535 rc = sidtab_context_to_sid(sidtab, &c->context[0], 2580 2536 &c->sid[0]); 2537 + if (rc == -ESTALE) { 2538 + rcu_read_unlock(); 2539 + goto retry; 2540 + } 2581 2541 if (rc) 2582 2542 goto out; 2583 2543 rc = sidtab_context_to_sid(sidtab, &c->context[1], 2584 2544 &c->sid[1]); 2545 + if (rc == -ESTALE) { 2546 + rcu_read_unlock(); 2547 + goto retry; 2548 + } 2585 2549 if (rc) 2586 2550 goto out; 2587 2551 } ··· 2639 2587 return 0; 2640 2588 } 2641 2589 2590 + retry: 2642 2591 rcu_read_lock(); 2643 2592 policy = rcu_dereference(state->policy); 2644 2593 policydb = &policy->policydb; ··· 2688 2635 rc = sidtab_context_to_sid(sidtab, 2689 2636 &c->context[0], 2690 2637 &c->sid[0]); 2638 + if (rc == -ESTALE) { 2639 + rcu_read_unlock(); 2640 + goto retry; 2641 + } 2691 2642 if (rc) 2692 2643 goto out; 2693 2644 } ··· 2733 2676 struct sidtab *sidtab; 2734 2677 struct context *fromcon, usercon; 2735 2678 u32 *mysids = NULL, *mysids2, sid; 2736 - u32 mynel = 0, maxnel = SIDS_NEL; 2679 + u32 i, j, mynel, maxnel = SIDS_NEL; 2737 2680 struct user_datum *user; 2738 2681 struct 
role_datum *role; 2739 2682 struct ebitmap_node *rnode, *tnode; 2740 - int rc = 0, i, j; 2683 + int rc; 2741 2684 2742 2685 *sids = NULL; 2743 2686 *nel = 0; 2744 2687 2745 2688 if (!selinux_initialized(state)) 2746 - goto out; 2689 + return 0; 2747 2690 2691 + mysids = kcalloc(maxnel, sizeof(*mysids), GFP_KERNEL); 2692 + if (!mysids) 2693 + return -ENOMEM; 2694 + 2695 + retry: 2696 + mynel = 0; 2748 2697 rcu_read_lock(); 2749 2698 policy = rcu_dereference(state->policy); 2750 2699 policydb = &policy->policydb; ··· 2770 2707 2771 2708 usercon.user = user->value; 2772 2709 2773 - rc = -ENOMEM; 2774 - mysids = kcalloc(maxnel, sizeof(*mysids), GFP_ATOMIC); 2775 - if (!mysids) 2776 - goto out_unlock; 2777 - 2778 2710 ebitmap_for_each_positive_bit(&user->roles, rnode, i) { 2779 2711 role = policydb->role_val_to_struct[i]; 2780 2712 usercon.role = i + 1; ··· 2781 2723 continue; 2782 2724 2783 2725 rc = sidtab_context_to_sid(sidtab, &usercon, &sid); 2726 + if (rc == -ESTALE) { 2727 + rcu_read_unlock(); 2728 + goto retry; 2729 + } 2784 2730 if (rc) 2785 2731 goto out_unlock; 2786 2732 if (mynel < maxnel) { ··· 2807 2745 rcu_read_unlock(); 2808 2746 if (rc || !mynel) { 2809 2747 kfree(mysids); 2810 - goto out; 2748 + return rc; 2811 2749 } 2812 2750 2813 2751 rc = -ENOMEM; 2814 2752 mysids2 = kcalloc(mynel, sizeof(*mysids2), GFP_KERNEL); 2815 2753 if (!mysids2) { 2816 2754 kfree(mysids); 2817 - goto out; 2755 + return rc; 2818 2756 } 2819 2757 for (i = 0, j = 0; i < mynel; i++) { 2820 2758 struct av_decision dummy_avd; ··· 2827 2765 mysids2[j++] = mysids[i]; 2828 2766 cond_resched(); 2829 2767 } 2830 - rc = 0; 2831 2768 kfree(mysids); 2832 2769 *sids = mysids2; 2833 2770 *nel = j; 2834 - out: 2835 - return rc; 2771 + return 0; 2836 2772 } 2837 2773 2838 2774 /** ··· 2843 2783 * Obtain a SID to use for a file in a filesystem that 2844 2784 * cannot support xattr or use a fixed labeling behavior like 2845 2785 * transition SIDs or task SIDs. 
2786 + * 2787 + * WARNING: This function may return -ESTALE, indicating that the caller 2788 + * must retry the operation after re-acquiring the policy pointer! 2846 2789 */ 2847 2790 static inline int __security_genfs_sid(struct selinux_policy *policy, 2848 2791 const char *fstype, ··· 2924 2861 return 0; 2925 2862 } 2926 2863 2927 - rcu_read_lock(); 2928 - policy = rcu_dereference(state->policy); 2929 - retval = __security_genfs_sid(policy, 2930 - fstype, path, orig_sclass, sid); 2931 - rcu_read_unlock(); 2864 + do { 2865 + rcu_read_lock(); 2866 + policy = rcu_dereference(state->policy); 2867 + retval = __security_genfs_sid(policy, fstype, path, 2868 + orig_sclass, sid); 2869 + rcu_read_unlock(); 2870 + } while (retval == -ESTALE); 2932 2871 return retval; 2933 2872 } 2934 2873 ··· 2953 2888 struct selinux_policy *policy; 2954 2889 struct policydb *policydb; 2955 2890 struct sidtab *sidtab; 2956 - int rc = 0; 2891 + int rc; 2957 2892 struct ocontext *c; 2958 2893 struct superblock_security_struct *sbsec = sb->s_security; 2959 2894 const char *fstype = sb->s_type->name; ··· 2964 2899 return 0; 2965 2900 } 2966 2901 2902 + retry: 2903 + rc = 0; 2967 2904 rcu_read_lock(); 2968 2905 policy = rcu_dereference(state->policy); 2969 2906 policydb = &policy->policydb; ··· 2983 2916 if (!c->sid[0]) { 2984 2917 rc = sidtab_context_to_sid(sidtab, &c->context[0], 2985 2918 &c->sid[0]); 2919 + if (rc == -ESTALE) { 2920 + rcu_read_unlock(); 2921 + goto retry; 2922 + } 2986 2923 if (rc) 2987 2924 goto out; 2988 2925 } ··· 2994 2923 } else { 2995 2924 rc = __security_genfs_sid(policy, fstype, "/", 2996 2925 SECCLASS_DIR, &sbsec->sid); 2926 + if (rc == -ESTALE) { 2927 + rcu_read_unlock(); 2928 + goto retry; 2929 + } 2997 2930 if (rc) { 2998 2931 sbsec->behavior = SECURITY_FS_USE_NONE; 2999 2932 rc = 0; ··· 3207 3132 u32 len; 3208 3133 int rc; 3209 3134 3210 - rc = 0; 3211 3135 if (!selinux_initialized(state)) { 3212 3136 *new_sid = sid; 3213 - goto out; 3137 + return 0; 3214 3138 } 
3215 3139 3140 + retry: 3141 + rc = 0; 3216 3142 context_init(&newcon); 3217 3143 3218 3144 rcu_read_lock(); ··· 3272 3196 } 3273 3197 } 3274 3198 rc = sidtab_context_to_sid(sidtab, &newcon, new_sid); 3199 + if (rc == -ESTALE) { 3200 + rcu_read_unlock(); 3201 + context_destroy(&newcon); 3202 + goto retry; 3203 + } 3275 3204 out_unlock: 3276 3205 rcu_read_unlock(); 3277 3206 context_destroy(&newcon); 3278 - out: 3279 3207 return rc; 3280 3208 } 3281 3209 ··· 3872 3792 return 0; 3873 3793 } 3874 3794 3795 + retry: 3796 + rc = 0; 3875 3797 rcu_read_lock(); 3876 3798 policy = rcu_dereference(state->policy); 3877 3799 policydb = &policy->policydb; ··· 3900 3818 goto out; 3901 3819 } 3902 3820 rc = -EIDRM; 3903 - if (!mls_context_isvalid(policydb, &ctx_new)) 3904 - goto out_free; 3821 + if (!mls_context_isvalid(policydb, &ctx_new)) { 3822 + ebitmap_destroy(&ctx_new.range.level[0].cat); 3823 + goto out; 3824 + } 3905 3825 3906 3826 rc = sidtab_context_to_sid(sidtab, &ctx_new, sid); 3827 + ebitmap_destroy(&ctx_new.range.level[0].cat); 3828 + if (rc == -ESTALE) { 3829 + rcu_read_unlock(); 3830 + goto retry; 3831 + } 3907 3832 if (rc) 3908 - goto out_free; 3833 + goto out; 3909 3834 3910 3835 security_netlbl_cache_add(secattr, *sid); 3911 - 3912 - ebitmap_destroy(&ctx_new.range.level[0].cat); 3913 3836 } else 3914 3837 *sid = SECSID_NULL; 3915 3838 3916 - rcu_read_unlock(); 3917 - return 0; 3918 - out_free: 3919 - ebitmap_destroy(&ctx_new.range.level[0].cat); 3920 3839 out: 3921 3840 rcu_read_unlock(); 3922 3841 return rc;
+21
security/selinux/ss/sidtab.c
··· 39 39 for (i = 0; i < SECINITSID_NUM; i++) 40 40 s->isids[i].set = 0; 41 41 42 + s->frozen = false; 42 43 s->count = 0; 43 44 s->convert = NULL; 44 45 hash_init(s->context_to_sid); ··· 282 281 if (*sid) 283 282 goto out_unlock; 284 283 284 + if (unlikely(s->frozen)) { 285 + /* 286 + * This sidtab is now frozen - tell the caller to abort and 287 + * get the new one. 288 + */ 289 + rc = -ESTALE; 290 + goto out_unlock; 291 + } 292 + 285 293 count = s->count; 286 294 convert = s->convert; 287 295 ··· 482 472 spin_lock_irqsave(&s->lock, flags); 483 473 s->convert = NULL; 484 474 spin_unlock_irqrestore(&s->lock, flags); 475 + } 476 + 477 + void sidtab_freeze_begin(struct sidtab *s, unsigned long *flags) __acquires(&s->lock) 478 + { 479 + spin_lock_irqsave(&s->lock, *flags); 480 + s->frozen = true; 481 + s->convert = NULL; 482 + } 483 + void sidtab_freeze_end(struct sidtab *s, unsigned long *flags) __releases(&s->lock) 484 + { 485 + spin_unlock_irqrestore(&s->lock, *flags); 485 486 } 486 487 487 488 static void sidtab_destroy_entry(struct sidtab_entry *entry)
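The sidtab changes above add a frozen flag: once a new policy is installed via sidtab_freeze_begin()/sidtab_freeze_end(), sidtab_context_to_sid() on the old table fails with -ESTALE, and the retry: labels added throughout services.c re-fetch the live policy and try again. A miniature userspace model of that protocol (all names here are illustrative):

```c
#include <errno.h>
#include <stdbool.h>

/* Illustrative miniature of a freezable table. */
struct minitab {
	bool frozen;
	int count;
};

/* Inserts fail with -ESTALE once the table is frozen, telling the
 * caller it is holding a stale table pointer. */
static int minitab_insert(struct minitab *t)
{
	if (t->frozen)
		return -ESTALE;
	t->count++;
	return 0;
}

/* Caller-side loop mirroring the retry: labels in services.c: on
 * -ESTALE, re-read the currently installed table (rcu_dereference()
 * in the kernel) and redo the whole operation against it. */
static int insert_with_retry(struct minitab **activep)
{
	int rc;

	do {
		struct minitab *t = *activep;

		rc = minitab_insert(t);
	} while (rc == -ESTALE);
	return rc;
}
```

The real code additionally tears down any partially built context before jumping back to retry, as the services.c hunks show for sidtab_context_to_sid() call sites.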
+4
security/selinux/ss/sidtab.h
··· 86 86 u32 count; 87 87 /* access only under spinlock */ 88 88 struct sidtab_convert_params *convert; 89 + bool frozen; 89 90 spinlock_t lock; 90 91 91 92 #if CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE > 0 ··· 125 124 int sidtab_convert(struct sidtab *s, struct sidtab_convert_params *params); 126 125 127 126 void sidtab_cancel_convert(struct sidtab *s); 127 + 128 + void sidtab_freeze_begin(struct sidtab *s, unsigned long *flags) __acquires(&s->lock); 129 + void sidtab_freeze_end(struct sidtab *s, unsigned long *flags) __releases(&s->lock); 128 130 129 131 int sidtab_context_to_sid(struct sidtab *s, struct context *context, u32 *sid); 130 132
+8 -3
sound/drivers/aloop.c
··· 1571 1571 return -ENOMEM; 1572 1572 kctl->id.device = dev; 1573 1573 kctl->id.subdevice = substr; 1574 + 1575 + /* Add the control before copying the id so that 1576 + * the numid field of the id is set in the copy. 1577 + */ 1578 + err = snd_ctl_add(card, kctl); 1579 + if (err < 0) 1580 + return err; 1581 + 1574 1582 switch (idx) { 1575 1583 case ACTIVE_IDX: 1576 1584 setup->active_id = kctl->id; ··· 1595 1587 default: 1596 1588 break; 1597 1589 } 1598 - err = snd_ctl_add(card, kctl); 1599 - if (err < 0) 1600 - return err; 1601 1590 } 1602 1591 } 1603 1592 }
+1
sound/pci/hda/patch_conexant.c
··· 944 944 SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE), 945 945 SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO), 946 946 SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED), 947 + SND_PCI_QUIRK(0x103c, 0x844f, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED), 947 948 SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE), 948 949 SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE), 949 950 SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+16
sound/pci/hda/patch_realtek.c
··· 3927 3927 snd_hda_sequence_write(codec, verbs); 3928 3928 } 3929 3929 3930 + /* Fix the speaker amp after resume, etc */ 3931 + static void alc269vb_fixup_aspire_e1_coef(struct hda_codec *codec, 3932 + const struct hda_fixup *fix, 3933 + int action) 3934 + { 3935 + if (action == HDA_FIXUP_ACT_INIT) 3936 + alc_update_coef_idx(codec, 0x0d, 0x6000, 0x6000); 3937 + } 3938 + 3930 3939 static void alc269_fixup_pcm_44k(struct hda_codec *codec, 3931 3940 const struct hda_fixup *fix, int action) 3932 3941 { ··· 6310 6301 ALC283_FIXUP_HEADSET_MIC, 6311 6302 ALC255_FIXUP_MIC_MUTE_LED, 6312 6303 ALC282_FIXUP_ASPIRE_V5_PINS, 6304 + ALC269VB_FIXUP_ASPIRE_E1_COEF, 6313 6305 ALC280_FIXUP_HP_GPIO4, 6314 6306 ALC286_FIXUP_HP_GPIO_LED, 6315 6307 ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY, ··· 6988 6978 { 0x21, 0x0321101f }, 6989 6979 { }, 6990 6980 }, 6981 + }, 6982 + [ALC269VB_FIXUP_ASPIRE_E1_COEF] = { 6983 + .type = HDA_FIXUP_FUNC, 6984 + .v.func = alc269vb_fixup_aspire_e1_coef, 6991 6985 }, 6992 6986 [ALC280_FIXUP_HP_GPIO4] = { 6993 6987 .type = HDA_FIXUP_FUNC, ··· 7915 7901 SND_PCI_QUIRK(0x1025, 0x0762, "Acer Aspire E1-472", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572), 7916 7902 SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572), 7917 7903 SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS), 7904 + SND_PCI_QUIRK(0x1025, 0x0840, "Acer Aspire E1", ALC269VB_FIXUP_ASPIRE_E1_COEF), 7918 7905 SND_PCI_QUIRK(0x1025, 0x101c, "Acer Veriton N2510G", ALC269_FIXUP_LIFEBOOK), 7919 7906 SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), 7920 7907 SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC), ··· 8410 8395 {.id = ALC283_FIXUP_HEADSET_MIC, .name = "alc283-headset"}, 8411 8396 {.id = ALC255_FIXUP_MIC_MUTE_LED, .name = "alc255-dell-mute"}, 8412 8397 {.id = ALC282_FIXUP_ASPIRE_V5_PINS, .name = "aspire-v5"}, 8398 + {.id = ALC269VB_FIXUP_ASPIRE_E1_COEF, .name = "aspire-e1-coef"}, 8413 8399 {.id = ALC280_FIXUP_HP_GPIO4, .name = "hp-gpio4"}, 8414 8400 {.id = ALC286_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"}, 8415 8401 {.id = ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY, .name = "hp-gpio2-hotkey"},
+3 -1
sound/soc/bcm/cygnus-ssp.c
··· 1348 1348 &cygnus_ssp_dai[active_port_count]); 1349 1349 1350 1350 /* negative is err, 0 is active and good, 1 is disabled */ 1351 - if (err < 0) 1351 + if (err < 0) { 1352 + of_node_put(child_node); 1352 1353 return err; 1354 + } 1353 1355 else if (!err) { 1354 1356 dev_dbg(dev, "Activating DAI: %s\n", 1355 1357 cygnus_ssp_dai[active_port_count].name);
+1 -1
sound/soc/codecs/lpass-rx-macro.c
··· 3551 3551 3552 3552 /* set MCLK and NPL rates */ 3553 3553 clk_set_rate(rx->clks[2].clk, MCLK_FREQ); 3554 - clk_set_rate(rx->clks[3].clk, MCLK_FREQ); 3554 + clk_set_rate(rx->clks[3].clk, 2 * MCLK_FREQ); 3555 3555 3556 3556 ret = clk_bulk_prepare_enable(RX_NUM_CLKS_MAX, rx->clks); 3557 3557 if (ret)
+1 -1
sound/soc/codecs/lpass-tx-macro.c
··· 1811 1811 1812 1812 /* set MCLK and NPL rates */ 1813 1813 clk_set_rate(tx->clks[2].clk, MCLK_FREQ); 1814 - clk_set_rate(tx->clks[3].clk, MCLK_FREQ); 1814 + clk_set_rate(tx->clks[3].clk, 2 * MCLK_FREQ); 1815 1815 1816 1816 ret = clk_bulk_prepare_enable(TX_NUM_CLKS_MAX, tx->clks); 1817 1817 if (ret)
+1
sound/soc/codecs/max98373-i2c.c
··· 446 446 case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK: 447 447 case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK: 448 448 case MAX98373_R20B6_BDE_CUR_STATE_READBACK: 449 + case MAX98373_R20FF_GLOBAL_SHDN: 449 450 case MAX98373_R21FF_REV_ID: 450 451 return true; 451 452 default:
+1
sound/soc/codecs/max98373-sdw.c
··· 220 220 case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK: 221 221 case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK: 222 222 case MAX98373_R20B6_BDE_CUR_STATE_READBACK: 223 + case MAX98373_R20FF_GLOBAL_SHDN: 223 224 case MAX98373_R21FF_REV_ID: 224 225 /* SoundWire Control Port Registers */ 225 226 case MAX98373_R0040_SCP_INIT_STAT_1 ... MAX98373_R0070_SCP_FRAME_CTLR:
+2
sound/soc/codecs/max98373.c
··· 28 28 regmap_update_bits(max98373->regmap, 29 29 MAX98373_R20FF_GLOBAL_SHDN, 30 30 MAX98373_GLOBAL_EN_MASK, 1); 31 + usleep_range(30000, 31000); 31 32 break; 32 33 case SND_SOC_DAPM_POST_PMD: 33 34 regmap_update_bits(max98373->regmap, 34 35 MAX98373_R20FF_GLOBAL_SHDN, 35 36 MAX98373_GLOBAL_EN_MASK, 0); 37 + usleep_range(30000, 31000); 36 38 max98373->tdm_mode = false; 37 39 break; 38 40 default:
+7 -1
sound/soc/codecs/wm8960.c
··· 707 707 best_freq_out = -EINVAL; 708 708 *sysclk_idx = *dac_idx = *bclk_idx = -1; 709 709 710 - for (i = 0; i < ARRAY_SIZE(sysclk_divs); ++i) { 710 + /* 711 + * From Datasheet, the PLL performs best when f2 is between 712 + * 90MHz and 100MHz, the desired sysclk output is 11.2896MHz 713 + * or 12.288MHz, then sysclkdiv = 2 is the best choice. 714 + * So search sysclk_divs from 2 to 1 other than from 1 to 2. 715 + */ 716 + for (i = ARRAY_SIZE(sysclk_divs) - 1; i >= 0; --i) { 711 717 if (sysclk_divs[i] == -1) 712 718 continue; 713 719 for (j = 0; j < ARRAY_SIZE(dac_divs); ++j) {
+5 -3
sound/soc/fsl/fsl_esai.c
··· 519 519 ESAI_SAICR_SYNC, esai_priv->synchronous ? 520 520 ESAI_SAICR_SYNC : 0); 521 521 522 - /* Set a default slot number -- 2 */ 522 + /* Set slots count */ 523 523 regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR, 524 - ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(2)); 524 + ESAI_xCCR_xDC_MASK, 525 + ESAI_xCCR_xDC(esai_priv->slots)); 525 526 regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR, 526 - ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(2)); 527 + ESAI_xCCR_xDC_MASK, 528 + ESAI_xCCR_xDC(esai_priv->slots)); 527 529 } 528 530 529 531 return 0;
+6 -6
sound/soc/intel/atom/sst-mfld-platform-pcm.c
··· 487 487 .stream_name = "Headset Playback", 488 488 .channels_min = SST_STEREO, 489 489 .channels_max = SST_STEREO, 490 - .rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000, 491 - .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE, 490 + .rates = SNDRV_PCM_RATE_48000, 491 + .formats = SNDRV_PCM_FMTBIT_S16_LE, 492 492 }, 493 493 .capture = { 494 494 .stream_name = "Headset Capture", 495 495 .channels_min = 1, 496 496 .channels_max = 2, 497 - .rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000, 498 - .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE, 497 + .rates = SNDRV_PCM_RATE_48000, 498 + .formats = SNDRV_PCM_FMTBIT_S16_LE, 499 499 }, 500 500 }, 501 501 { ··· 505 505 .stream_name = "Deepbuffer Playback", 506 506 .channels_min = SST_STEREO, 507 507 .channels_max = SST_STEREO, 508 - .rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000, 509 - .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE, 508 + .rates = SNDRV_PCM_RATE_48000, 509 + .formats = SNDRV_PCM_FMTBIT_S16_LE, 510 510 }, 511 511 }, 512 512 {
+7 -1
sound/soc/sof/core.c
··· 399 399 { 400 400 struct snd_sof_dev *sdev = dev_get_drvdata(dev); 401 401 402 - return snd_sof_shutdown(sdev); 402 + if (IS_ENABLED(CONFIG_SND_SOC_SOF_PROBE_WORK_QUEUE)) 403 + cancel_work_sync(&sdev->probe_work); 404 + 405 + if (sdev->fw_state == SOF_FW_BOOT_COMPLETE) 406 + return snd_sof_shutdown(sdev); 407 + 408 + return 0; 403 409 } 404 410 EXPORT_SYMBOL(snd_sof_device_shutdown); 405 411
+2 -1
sound/soc/sof/intel/apl.c
··· 27 27 28 28 /* apollolake ops */ 29 29 const struct snd_sof_dsp_ops sof_apl_ops = { 30 - /* probe and remove */ 30 + /* probe/remove/shutdown */ 31 31 .probe = hda_dsp_probe, 32 32 .remove = hda_dsp_remove, 33 + .shutdown = hda_dsp_shutdown, 33 34 34 35 /* Register IO */ 35 36 .write = sof_io_write,
+2 -17
sound/soc/sof/intel/cnl.c
··· 232 232 233 233 /* cannonlake ops */ 234 234 const struct snd_sof_dsp_ops sof_cnl_ops = { 235 - /* probe and remove */ 235 + /* probe/remove/shutdown */ 236 236 .probe = hda_dsp_probe, 237 237 .remove = hda_dsp_remove, 238 + .shutdown = hda_dsp_shutdown, 238 239 239 240 /* Register IO */ 240 241 .write = sof_io_write, ··· 349 348 .ssp_base_offset = CNL_SSP_BASE_OFFSET, 350 349 }; 351 350 EXPORT_SYMBOL_NS(cnl_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON); 352 - 353 - const struct sof_intel_dsp_desc ehl_chip_info = { 354 - /* Elkhartlake */ 355 - .cores_num = 4, 356 - .init_core_mask = 1, 357 - .host_managed_cores_mask = BIT(0), 358 - .ipc_req = CNL_DSP_REG_HIPCIDR, 359 - .ipc_req_mask = CNL_DSP_REG_HIPCIDR_BUSY, 360 - .ipc_ack = CNL_DSP_REG_HIPCIDA, 361 - .ipc_ack_mask = CNL_DSP_REG_HIPCIDA_DONE, 362 - .ipc_ctl = CNL_DSP_REG_HIPCCTL, 363 - .rom_init_timeout = 300, 364 - .ssp_count = ICL_SSP_COUNT, 365 - .ssp_base_offset = CNL_SSP_BASE_OFFSET, 366 - }; 367 - EXPORT_SYMBOL_NS(ehl_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON); 368 351 369 352 const struct sof_intel_dsp_desc jsl_chip_info = { 370 353 /* Jasperlake */
+17 -4
sound/soc/sof/intel/hda-dsp.c
··· 226 226 227 227 val = snd_sof_dsp_read(sdev, HDA_DSP_BAR, HDA_DSP_REG_ADSPCS); 228 228 229 - is_enable = (val & HDA_DSP_ADSPCS_CPA_MASK(core_mask)) && 230 - (val & HDA_DSP_ADSPCS_SPA_MASK(core_mask)) && 231 - !(val & HDA_DSP_ADSPCS_CRST_MASK(core_mask)) && 232 - !(val & HDA_DSP_ADSPCS_CSTALL_MASK(core_mask)); 229 + #define MASK_IS_EQUAL(v, m, field) ({ \ 230 + u32 _m = field(m); \ 231 + ((v) & _m) == _m; \ 232 + }) 233 + 234 + is_enable = MASK_IS_EQUAL(val, core_mask, HDA_DSP_ADSPCS_CPA_MASK) && 235 + MASK_IS_EQUAL(val, core_mask, HDA_DSP_ADSPCS_SPA_MASK) && 236 + !(val & HDA_DSP_ADSPCS_CRST_MASK(core_mask)) && 237 + !(val & HDA_DSP_ADSPCS_CSTALL_MASK(core_mask)); 238 + 239 + #undef MASK_IS_EQUAL 233 240 234 241 dev_dbg(sdev->dev, "DSP core(s) enabled? %d : core_mask %x\n", 235 242 is_enable, core_mask); ··· 890 883 } 891 884 892 885 return snd_sof_dsp_set_power_state(sdev, &target_dsp_state); 886 + } 887 + 888 + int hda_dsp_shutdown(struct snd_sof_dev *sdev) 889 + { 890 + sdev->system_suspend_target = SOF_SUSPEND_S3; 891 + return snd_sof_suspend(sdev->dev); 893 892 } 894 893 895 894 int hda_dsp_set_hw_params_upon_resume(struct snd_sof_dev *sdev)
+1
sound/soc/sof/intel/hda.h
··· 517 517 int hda_dsp_runtime_suspend(struct snd_sof_dev *sdev); 518 518 int hda_dsp_runtime_resume(struct snd_sof_dev *sdev); 519 519 int hda_dsp_runtime_idle(struct snd_sof_dev *sdev); 520 + int hda_dsp_shutdown(struct snd_sof_dev *sdev); 520 521 int hda_dsp_set_hw_params_upon_resume(struct snd_sof_dev *sdev); 521 522 void hda_dsp_dump(struct snd_sof_dev *sdev, u32 flags); 522 523 void hda_ipc_dump(struct snd_sof_dev *sdev);
+2 -1
sound/soc/sof/intel/icl.c
··· 26 26 27 27 /* Icelake ops */ 28 28 const struct snd_sof_dsp_ops sof_icl_ops = { 29 - /* probe and remove */ 29 + /* probe/remove/shutdown */ 30 30 .probe = hda_dsp_probe, 31 31 .remove = hda_dsp_remove, 32 + .shutdown = hda_dsp_shutdown, 32 33 33 34 /* Register IO */ 34 35 .write = sof_io_write,
+1 -1
sound/soc/sof/intel/pci-tgl.c
··· 65 65 .default_tplg_path = "intel/sof-tplg", 66 66 .default_fw_filename = "sof-ehl.ri", 67 67 .nocodec_tplg_filename = "sof-ehl-nocodec.tplg", 68 - .ops = &sof_cnl_ops, 68 + .ops = &sof_tgl_ops, 69 69 }; 70 70 71 71 static const struct sof_dev_desc adls_desc = {
+17 -1
sound/soc/sof/intel/tgl.c
··· 25 25 /* probe/remove/shutdown */ 26 26 .probe = hda_dsp_probe, 27 27 .remove = hda_dsp_remove, 28 - .shutdown = hda_dsp_remove, 28 + .shutdown = hda_dsp_shutdown, 29 29 30 30 /* Register IO */ 31 31 .write = sof_io_write, ··· 155 155 .ssp_base_offset = CNL_SSP_BASE_OFFSET, 156 156 }; 157 157 EXPORT_SYMBOL_NS(tglh_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON); 158 + 159 + const struct sof_intel_dsp_desc ehl_chip_info = { 160 + /* Elkhartlake */ 161 + .cores_num = 4, 162 + .init_core_mask = 1, 163 + .host_managed_cores_mask = BIT(0), 164 + .ipc_req = CNL_DSP_REG_HIPCIDR, 165 + .ipc_req_mask = CNL_DSP_REG_HIPCIDR_BUSY, 166 + .ipc_ack = CNL_DSP_REG_HIPCIDA, 167 + .ipc_ack_mask = CNL_DSP_REG_HIPCIDA_DONE, 168 + .ipc_ctl = CNL_DSP_REG_HIPCCTL, 169 + .rom_init_timeout = 300, 170 + .ssp_count = ICL_SSP_COUNT, 171 + .ssp_base_offset = CNL_SSP_BASE_OFFSET, 172 + }; 173 + EXPORT_SYMBOL_NS(ehl_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON); 158 174 159 175 const struct sof_intel_dsp_desc adls_chip_info = { 160 176 /* Alderlake-S */
+5
sound/soc/sunxi/sun4i-codec.c
··· 1364 1364 return ERR_PTR(-ENOMEM); 1365 1365 1366 1366 card->dev = dev; 1367 + card->owner = THIS_MODULE; 1367 1368 card->name = "sun4i-codec"; 1368 1369 card->dapm_widgets = sun4i_codec_card_dapm_widgets; 1369 1370 card->num_dapm_widgets = ARRAY_SIZE(sun4i_codec_card_dapm_widgets); ··· 1397 1396 return ERR_PTR(-ENOMEM); 1398 1397 1399 1398 card->dev = dev; 1399 + card->owner = THIS_MODULE; 1400 1400 card->name = "A31 Audio Codec"; 1401 1401 card->dapm_widgets = sun6i_codec_card_dapm_widgets; 1402 1402 card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets); ··· 1451 1449 return ERR_PTR(-ENOMEM); 1452 1450 1453 1451 card->dev = dev; 1452 + card->owner = THIS_MODULE; 1454 1453 card->name = "A23 Audio Codec"; 1455 1454 card->dapm_widgets = sun6i_codec_card_dapm_widgets; 1456 1455 card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets); ··· 1490 1487 return ERR_PTR(-ENOMEM); 1491 1488 1492 1489 card->dev = dev; 1490 + card->owner = THIS_MODULE; 1493 1491 card->name = "H3 Audio Codec"; 1494 1492 card->dapm_widgets = sun6i_codec_card_dapm_widgets; 1495 1493 card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets); ··· 1529 1525 return ERR_PTR(-ENOMEM); 1530 1526 1531 1527 card->dev = dev; 1528 + card->owner = THIS_MODULE; 1532 1529 card->name = "V3s Audio Codec"; 1533 1530 card->dapm_widgets = sun6i_codec_card_dapm_widgets; 1534 1531 card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets);
+1 -1
tools/lib/bpf/ringbuf.c
··· 227 227 if ((len & BPF_RINGBUF_DISCARD_BIT) == 0) { 228 228 sample = (void *)len_ptr + BPF_RINGBUF_HDR_SZ; 229 229 err = r->sample_cb(r->ctx, sample, len); 230 - if (err) { 230 + if (err < 0) { 231 231 /* update consumer pos and bail out */ 232 232 smp_store_release(r->consumer_pos, 233 233 cons_pos);
+37 -20
tools/lib/bpf/xsk.c
··· 59 59 int fd; 60 60 int refcount; 61 61 struct list_head ctx_list; 62 + bool rx_ring_setup_done; 63 + bool tx_ring_setup_done; 62 64 }; 63 65 64 66 struct xsk_ctx { ··· 745 743 return NULL; 746 744 } 747 745 748 - static void xsk_put_ctx(struct xsk_ctx *ctx) 746 + static void xsk_put_ctx(struct xsk_ctx *ctx, bool unmap) 749 747 { 750 748 struct xsk_umem *umem = ctx->umem; 751 749 struct xdp_mmap_offsets off; 752 750 int err; 753 751 754 - if (--ctx->refcount == 0) { 755 - err = xsk_get_mmap_offsets(umem->fd, &off); 756 - if (!err) { 757 - munmap(ctx->fill->ring - off.fr.desc, 758 - off.fr.desc + umem->config.fill_size * 759 - sizeof(__u64)); 760 - munmap(ctx->comp->ring - off.cr.desc, 761 - off.cr.desc + umem->config.comp_size * 762 - sizeof(__u64)); 763 - } 752 + if (--ctx->refcount) 753 + return; 764 754 765 - list_del(&ctx->list); 766 - free(ctx); 767 - } 755 + if (!unmap) 756 + goto out_free; 757 + 758 + err = xsk_get_mmap_offsets(umem->fd, &off); 759 + if (err) 760 + goto out_free; 761 + 762 + munmap(ctx->fill->ring - off.fr.desc, off.fr.desc + umem->config.fill_size * 763 + sizeof(__u64)); 764 + munmap(ctx->comp->ring - off.cr.desc, off.cr.desc + umem->config.comp_size * 765 + sizeof(__u64)); 766 + 767 + out_free: 768 + list_del(&ctx->list); 769 + free(ctx); 768 770 } 769 771 770 772 static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk, ··· 803 797 memcpy(ctx->ifname, ifname, IFNAMSIZ - 1); 804 798 ctx->ifname[IFNAMSIZ - 1] = '\0'; 805 799 806 - umem->fill_save = NULL; 807 - umem->comp_save = NULL; 808 800 ctx->fill = fill; 809 801 ctx->comp = comp; 810 802 list_add(&ctx->list, &umem->ctx_list); ··· 858 854 struct xsk_socket *xsk; 859 855 struct xsk_ctx *ctx; 860 856 int err, ifindex; 857 + bool unmap = umem->fill_save != fill; 858 + bool rx_setup_done = false, tx_setup_done = false; 861 859 862 860 if (!umem || !xsk_ptr || !(rx || tx)) 863 861 return -EFAULT; ··· 887 881 } 888 882 } else { 889 883 xsk->fd = umem->fd; 884 + rx_setup_done = umem->rx_ring_setup_done; 885 + tx_setup_done = umem->tx_ring_setup_done; 890 886 } 891 887 892 888 ctx = xsk_get_ctx(umem, ifindex, queue_id); ··· 907 899 } 908 900 xsk->ctx = ctx; 909 901 910 - if (rx) { 902 + if (rx && !rx_setup_done) { 911 903 err = setsockopt(xsk->fd, SOL_XDP, XDP_RX_RING, 912 904 &xsk->config.rx_size, 913 905 sizeof(xsk->config.rx_size)); ··· 915 907 err = -errno; 916 908 goto out_put_ctx; 917 909 } 910 + if (xsk->fd == umem->fd) 911 + umem->rx_ring_setup_done = true; 918 912 } 919 - if (tx) { 913 + if (tx && !tx_setup_done) { 920 914 err = setsockopt(xsk->fd, SOL_XDP, XDP_TX_RING, 921 915 &xsk->config.tx_size, 922 916 sizeof(xsk->config.tx_size)); ··· 926 916 err = -errno; 927 917 goto out_put_ctx; 928 918 } 919 + if (xsk->fd == umem->fd) 920 + umem->tx_ring_setup_done = true; 929 921 } 930 922 931 923 err = xsk_get_mmap_offsets(xsk->fd, &off); ··· 1006 994 } 1007 995 1008 996 *xsk_ptr = xsk; 997 + umem->fill_save = NULL; 998 + umem->comp_save = NULL; 1009 999 return 0; 1010 1000 1011 1001 out_mmap_tx: ··· 1019 1005 munmap(rx_map, off.rx.desc + 1020 1006 xsk->config.rx_size * sizeof(struct xdp_desc)); 1021 1007 out_put_ctx: 1022 - xsk_put_ctx(ctx); 1008 + xsk_put_ctx(ctx, unmap); 1023 1009 out_socket: 1024 1010 if (--umem->refcount) 1025 1011 close(xsk->fd); ··· 1033 1019 struct xsk_ring_cons *rx, struct xsk_ring_prod *tx, 1034 1020 const struct xsk_socket_config *usr_config) 1035 1021 { 1022 + if (!umem) 1023 + return -EFAULT; 1024 + 1036 1025 return xsk_socket__create_shared(xsk_ptr, ifname, queue_id, umem, 1037 1026 rx, tx, umem->fill_save, 1038 1027 umem->comp_save, usr_config); ··· 1085 1068 } 1086 1069 } 1087 1070 1088 - xsk_put_ctx(ctx); 1071 + xsk_put_ctx(ctx, true); 1089 1072 1090 1073 umem->refcount--; 1091 1074 /* Do not close an fd that also has an associated umem connected
+1 -1
tools/perf/builtin-inject.c
··· 906 906 } 907 907 908 908 data.path = inject.input_name; 909 - inject.session = perf_session__new(&data, true, &inject.tool); 909 + inject.session = perf_session__new(&data, inject.output.is_pipe, &inject.tool); 910 910 if (IS_ERR(inject.session)) 911 911 return PTR_ERR(inject.session); 912 912
+3 -1
tools/perf/util/arm-spe-decoder/arm-spe-pkt-decoder.c
··· 210 210 211 211 if ((hdr & SPE_HEADER0_MASK2) == SPE_HEADER0_EXTENDED) { 212 212 /* 16-bit extended format header */ 213 - ext_hdr = 1; 213 + if (len == 1) 214 + return ARM_SPE_BAD_PACKET; 214 215 216 + ext_hdr = 1; 215 217 hdr = buf[1]; 216 218 if (hdr == SPE_HEADER1_ALIGNMENT) 217 219 return arm_spe_get_alignment(buf, len, packet);
+3 -3
tools/perf/util/block-info.c
··· 201 201 double ratio = 0.0; 202 202 203 203 if (block_fmt->total_cycles) 204 - ratio = (double)bi->cycles / (double)block_fmt->total_cycles; 204 + ratio = (double)bi->cycles_aggr / (double)block_fmt->total_cycles; 205 205 206 206 return color_pct(hpp, block_fmt->width, 100.0 * ratio); 207 207 } ··· 216 216 double l, r; 217 217 218 218 if (block_fmt->total_cycles) { 219 - l = ((double)bi_l->cycles / 219 + l = ((double)bi_l->cycles_aggr / 220 220 (double)block_fmt->total_cycles) * 100000.0; 221 - r = ((double)bi_r->cycles / 221 + r = ((double)bi_r->cycles_aggr / 222 222 (double)block_fmt->total_cycles) * 100000.0; 223 223 return (int64_t)l - (int64_t)r; 224 224 }
+44
tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c
··· 6 6 #include <test_progs.h> 7 7 #include "bpf_dctcp.skel.h" 8 8 #include "bpf_cubic.skel.h" 9 + #include "bpf_tcp_nogpl.skel.h" 9 10 10 11 #define min(a, b) ((a) < (b) ? (a) : (b)) 11 12 ··· 228 227 bpf_dctcp__destroy(dctcp_skel); 229 228 } 230 229 230 + static char *err_str; 231 + static bool found; 232 + 233 + static int libbpf_debug_print(enum libbpf_print_level level, 234 + const char *format, va_list args) 235 + { 236 + char *log_buf; 237 + 238 + if (level != LIBBPF_WARN || 239 + strcmp(format, "libbpf: \n%s\n")) { 240 + vprintf(format, args); 241 + return 0; 242 + } 243 + 244 + log_buf = va_arg(args, char *); 245 + if (!log_buf) 246 + goto out; 247 + if (err_str && strstr(log_buf, err_str) != NULL) 248 + found = true; 249 + out: 250 + printf(format, log_buf); 251 + return 0; 252 + } 253 + 254 + static void test_invalid_license(void) 255 + { 256 + libbpf_print_fn_t old_print_fn; 257 + struct bpf_tcp_nogpl *skel; 258 + 259 + err_str = "struct ops programs must have a GPL compatible license"; 260 + found = false; 261 + old_print_fn = libbpf_set_print(libbpf_debug_print); 262 + 263 + skel = bpf_tcp_nogpl__open_and_load(); 264 + ASSERT_NULL(skel, "bpf_tcp_nogpl"); 265 + ASSERT_EQ(found, true, "expected_err_msg"); 266 + 267 + bpf_tcp_nogpl__destroy(skel); 268 + libbpf_set_print(old_print_fn); 269 + } 270 + 231 271 void test_bpf_tcp_ca(void) 232 272 { 233 273 if (test__start_subtest("dctcp")) 234 274 test_dctcp(); 235 275 if (test__start_subtest("cubic")) 236 276 test_cubic(); 277 + if (test__start_subtest("invalid_license")) 278 + test_invalid_license(); 237 279 }
+19
tools/testing/selftests/bpf/progs/bpf_tcp_nogpl.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/bpf.h> 4 + #include <linux/types.h> 5 + #include <bpf/bpf_helpers.h> 6 + #include <bpf/bpf_tracing.h> 7 + #include "bpf_tcp_helpers.h" 8 + 9 + char _license[] SEC("license") = "X"; 10 + 11 + void BPF_STRUCT_OPS(nogpltcp_init, struct sock *sk) 12 + { 13 + } 14 + 15 + SEC(".struct_ops") 16 + struct tcp_congestion_ops bpf_nogpltcp = { 17 + .init = (void *)nogpltcp_init, 18 + .name = "bpf_nogpltcp", 19 + };
+12 -1
tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh
··· 657 657 { 658 658 # In accordance with INET_ECN_decapsulate() 659 659 __test_ecn_decap 00 00 0x00 660 + __test_ecn_decap 00 01 0x00 661 + __test_ecn_decap 00 02 0x00 662 + # 00 03 is tested in test_ecn_decap_error() 663 + __test_ecn_decap 01 00 0x01 660 664 __test_ecn_decap 01 01 0x01 661 - __test_ecn_decap 02 01 0x01 665 + __test_ecn_decap 01 02 0x01 662 666 __test_ecn_decap 01 03 0x03 667 + __test_ecn_decap 02 00 0x02 668 + __test_ecn_decap 02 01 0x01 669 + __test_ecn_decap 02 02 0x02 663 670 __test_ecn_decap 02 03 0x03 671 + __test_ecn_decap 03 00 0x03 672 + __test_ecn_decap 03 01 0x03 673 + __test_ecn_decap 03 02 0x03 674 + __test_ecn_decap 03 03 0x03 664 675 test_ecn_decap_error 665 676 } 666 677