Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.9-rc8 into usb-next

We need the USB fixes in here as well for testing.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3883 -1980
+18 -7
Documentation/admin-guide/cgroup-v2.rst
··· 1324 1324 pgmajfault 1325 1325 Number of major page faults incurred 1326 1326 1327 - workingset_refault 1328 - Number of refaults of previously evicted pages 1327 + workingset_refault_anon 1328 + Number of refaults of previously evicted anonymous pages. 1329 1329 1330 - workingset_activate 1331 - Number of refaulted pages that were immediately activated 1330 + workingset_refault_file 1331 + Number of refaults of previously evicted file pages. 1332 1332 1333 - workingset_restore 1334 - Number of restored pages which have been detected as an active 1335 - workingset before they got reclaimed. 1333 + workingset_activate_anon 1334 + Number of refaulted anonymous pages that were immediately 1335 + activated. 1336 + 1337 + workingset_activate_file 1338 + Number of refaulted file pages that were immediately activated. 1339 + 1340 + workingset_restore_anon 1341 + Number of restored anonymous pages which have been detected as 1342 + an active workingset before they got reclaimed. 1343 + 1344 + workingset_restore_file 1345 + Number of restored file pages which have been detected as an 1346 + active workingset before they got reclaimed. 1336 1347 1337 1348 workingset_nodereclaim 1338 1349 Number of times a shadow node has been reclaimed
+9 -1
Documentation/admin-guide/device-mapper/dm-crypt.rst
··· 67 67 the value passed in <key_size>. 68 68 69 69 <key_type> 70 - Either 'logon' or 'user' kernel key type. 70 + Either 'logon', 'user' or 'encrypted' kernel key type. 71 71 72 72 <key_description> 73 73 The kernel keyring key description crypt target should look for ··· 120 120 significantly. The default is to offload write bios to the same 121 121 thread because it benefits CFQ to have writes submitted using the 122 122 same context. 123 + 124 + no_read_workqueue 125 + Bypass dm-crypt internal workqueue and process read requests synchronously. 126 + 127 + no_write_workqueue 128 + Bypass dm-crypt internal workqueue and process write requests synchronously. 129 + This option is automatically enabled for host-managed zoned block devices 130 + (e.g. host-managed SMR hard-disks). 123 131 124 132 integrity:<bytes>:<type> 125 133 The device requires additional <bytes> metadata per-sector stored
+1 -1
Documentation/admin-guide/pm/cpuidle.rst
··· 690 690 instruction of the CPUs (which, as a rule, suspends the execution of the program 691 691 and causes the hardware to attempt to enter the shallowest available idle state) 692 692 for this purpose, and if ``idle=poll`` is used, idle CPUs will execute a 693 - more or less ``lightweight'' sequence of instructions in a tight loop. [Note 693 + more or less "lightweight" sequence of instructions in a tight loop. [Note 694 694 that using ``idle=poll`` is somewhat drastic in many cases, as preventing idle 695 695 CPUs from saving almost any energy at all may not be the only effect of it. 696 696 For example, on Intel hardware it effectively prevents CPUs from using
+1 -4
Documentation/bpf/ringbuf.rst
··· 182 182 already committed. It is thus possible for slow producers to temporarily hold 183 183 off submitted records, that were reserved later. 184 184 185 - Reservation/commit/consumer protocol is verified by litmus tests in 186 - Documentation/litmus_tests/bpf-rb/_. 187 - 188 185 One interesting implementation bit, that significantly simplifies (and thus 189 186 speeds up as well) implementation of both producers and consumers is how data 190 187 area is mapped twice contiguously back-to-back in the virtual memory. This ··· 197 200 being available after commit only if consumer has already caught up right up to 198 201 the record being committed. If not, consumer still has to catch up and thus 199 202 will see new data anyways without needing an extra poll notification. 200 - Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbuf.c_) show that 203 + Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbufs.c) show that 201 204 this allows to achieve a very high throughput without having to resort to 202 205 tricks like "notify only every Nth sample", which are necessary with perf 203 206 buffer. For extreme cases, when BPF program wants more manual control of
+2 -2
Documentation/devicetree/bindings/arm/bcm/raspberrypi,bcm2835-firmware.yaml
··· 23 23 compatible: 24 24 items: 25 25 - const: raspberrypi,bcm2835-firmware 26 - - const: simple-bus 26 + - const: simple-mfd 27 27 28 28 mboxes: 29 29 $ref: '/schemas/types.yaml#/definitions/phandle' ··· 73 73 examples: 74 74 - | 75 75 firmware { 76 - compatible = "raspberrypi,bcm2835-firmware", "simple-bus"; 76 + compatible = "raspberrypi,bcm2835-firmware", "simple-mfd"; 77 77 mboxes = <&mailbox>; 78 78 79 79 firmware_clocks: clocks {
+1 -1
Documentation/devicetree/bindings/crypto/ti,sa2ul.yaml
··· 67 67 68 68 main_crypto: crypto@4e00000 { 69 69 compatible = "ti,j721-sa2ul"; 70 - reg = <0x0 0x4e00000 0x0 0x1200>; 70 + reg = <0x4e00000 0x1200>; 71 71 power-domains = <&k3_pds 264 TI_SCI_PD_EXCLUSIVE>; 72 72 dmas = <&main_udmap 0xc000>, <&main_udmap 0x4000>, 73 73 <&main_udmap 0x4001>;
+4 -4
Documentation/devicetree/bindings/display/xlnx/xlnx,zynqmp-dpsub.yaml
··· 145 145 146 146 display@fd4a0000 { 147 147 compatible = "xlnx,zynqmp-dpsub-1.7"; 148 - reg = <0x0 0xfd4a0000 0x0 0x1000>, 149 - <0x0 0xfd4aa000 0x0 0x1000>, 150 - <0x0 0xfd4ab000 0x0 0x1000>, 151 - <0x0 0xfd4ac000 0x0 0x1000>; 148 + reg = <0xfd4a0000 0x1000>, 149 + <0xfd4aa000 0x1000>, 150 + <0xfd4ab000 0x1000>, 151 + <0xfd4ac000 0x1000>; 152 152 reg-names = "dp", "blend", "av_buf", "aud"; 153 153 interrupts = <0 119 4>; 154 154 interrupt-parent = <&gic>;
+1 -1
Documentation/devicetree/bindings/dma/xilinx/xlnx,zynqmp-dpdma.yaml
··· 57 57 58 58 dma: dma-controller@fd4c0000 { 59 59 compatible = "xlnx,zynqmp-dpdma"; 60 - reg = <0x0 0xfd4c0000 0x0 0x1000>; 60 + reg = <0xfd4c0000 0x1000>; 61 61 interrupts = <GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>; 62 62 interrupt-parent = <&gic>; 63 63 clocks = <&dpdma_clk>;
+3 -2
Documentation/devicetree/bindings/gpio/sgpio-aspeed.txt
··· 20 20 - gpio-controller : Marks the device node as a GPIO controller 21 21 - interrupts : Interrupt specifier, see interrupt-controller/interrupts.txt 22 22 - interrupt-controller : Mark the GPIO controller as an interrupt-controller 23 - - ngpios : number of GPIO lines, see gpio.txt 24 - (should be multiple of 8, up to 80 pins) 23 + - ngpios : number of *hardware* GPIO lines, see gpio.txt. This will expose 24 + 2 software GPIOs per hardware GPIO: one for hardware input, one for hardware 25 + output. Up to 80 pins, must be a multiple of 8. 25 26 - clocks : A phandle to the APB clock for SGPM clock division 26 27 - bus-frequency : SGPM CLK frequency 27 28
+1 -1
Documentation/devicetree/bindings/leds/cznic,turris-omnia-leds.yaml
··· 30 30 const: 0 31 31 32 32 patternProperties: 33 - "^multi-led[0-9a-f]$": 33 + "^multi-led@[0-9a-b]$": 34 34 type: object 35 35 allOf: 36 36 - $ref: leds-class-multicolor.yaml#
-38
Documentation/devicetree/bindings/media/i2c/imx274.txt
··· 1 - * Sony 1/2.5-Inch 8.51Mp CMOS Digital Image Sensor 2 - 3 - The Sony imx274 is a 1/2.5-inch CMOS active pixel digital image sensor with 4 - an active array size of 3864H x 2202V. It is programmable through I2C 5 - interface. The I2C address is fixed to 0x1a as per sensor data sheet. 6 - Image data is sent through MIPI CSI-2, which is configured as 4 lanes 7 - at 1440 Mbps. 8 - 9 - 10 - Required Properties: 11 - - compatible: value should be "sony,imx274" for imx274 sensor 12 - - reg: I2C bus address of the device 13 - 14 - Optional Properties: 15 - - reset-gpios: Sensor reset GPIO 16 - - clocks: Reference to the input clock. 17 - - clock-names: Should be "inck". 18 - - VANA-supply: Sensor 2.8v analog supply. 19 - - VDIG-supply: Sensor 1.8v digital core supply. 20 - - VDDL-supply: Sensor digital IO 1.2v supply. 21 - 22 - The imx274 device node should contain one 'port' child node with 23 - an 'endpoint' subnode. For further reading on port node refer to 24 - Documentation/devicetree/bindings/media/video-interfaces.txt. 25 - 26 - Example: 27 - sensor@1a { 28 - compatible = "sony,imx274"; 29 - reg = <0x1a>; 30 - #address-cells = <1>; 31 - #size-cells = <0>; 32 - reset-gpios = <&gpio_sensor 0 0>; 33 - port { 34 - sensor_out: endpoint { 35 - remote-endpoint = <&csiss_in>; 36 - }; 37 - }; 38 - };
+76
Documentation/devicetree/bindings/media/i2c/sony,imx274.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/media/i2c/sony,imx274.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Sony 1/2.5-Inch 8.51MP CMOS Digital Image Sensor 8 + 9 + maintainers: 10 + - Leon Luo <leonl@leopardimaging.com> 11 + 12 + description: | 13 + The Sony IMX274 is a 1/2.5-inch CMOS active pixel digital image sensor with an 14 + active array size of 3864H x 2202V. It is programmable through I2C interface. 15 + Image data is sent through MIPI CSI-2, which is configured as 4 lanes at 1440 16 + Mbps. 17 + 18 + properties: 19 + compatible: 20 + const: sony,imx274 21 + 22 + reg: 23 + const: 0x1a 24 + 25 + reset-gpios: 26 + maxItems: 1 27 + 28 + clocks: 29 + maxItems: 1 30 + 31 + clock-names: 32 + const: inck 33 + 34 + vana-supply: 35 + description: Sensor 2.8 V analog supply. 36 + maxItems: 1 37 + 38 + vdig-supply: 39 + description: Sensor 1.8 V digital core supply. 40 + maxItems: 1 41 + 42 + vddl-supply: 43 + description: Sensor digital IO 1.2 V supply. 44 + maxItems: 1 45 + 46 + port: 47 + type: object 48 + description: Output video port. See ../video-interfaces.txt. 49 + 50 + required: 51 + - compatible 52 + - reg 53 + - port 54 + 55 + additionalProperties: false 56 + 57 + examples: 58 + - | 59 + i2c0 { 60 + #address-cells = <1>; 61 + #size-cells = <0>; 62 + 63 + imx274: camera-sensor@1a { 64 + compatible = "sony,imx274"; 65 + reg = <0x1a>; 66 + reset-gpios = <&gpio_sensor 0 0>; 67 + 68 + port { 69 + sensor_out: endpoint { 70 + remote-endpoint = <&csiss_in>; 71 + }; 72 + }; 73 + }; 74 + }; 75 + 76 + ...
+2 -2
Documentation/kbuild/llvm.rst
··· 39 39 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make CC=clang 40 40 41 41 ``CROSS_COMPILE`` is not used to prefix the Clang compiler binary, instead 42 - ``CROSS_COMPILE`` is used to set a command line flag: ``--target <triple>``. For 42 + ``CROSS_COMPILE`` is used to set a command line flag: ``--target=<triple>``. For 43 43 example: :: 44 44 45 - clang --target aarch64-linux-gnu foo.c 45 + clang --target=aarch64-linux-gnu foo.c 46 46 47 47 LLVM Utilities 48 48 --------------
+3
Documentation/networking/ethtool-netlink.rst
··· 206 206 ``ETHTOOL_MSG_TSINFO_GET`` get timestamping info 207 207 ``ETHTOOL_MSG_CABLE_TEST_ACT`` action start cable test 208 208 ``ETHTOOL_MSG_CABLE_TEST_TDR_ACT`` action start raw TDR cable test 209 + ``ETHTOOL_MSG_TUNNEL_INFO_GET`` get tunnel offload info 209 210 ===================================== ================================ 210 211 211 212 Kernel to userspace: ··· 240 239 ``ETHTOOL_MSG_TSINFO_GET_REPLY`` timestamping info 241 240 ``ETHTOOL_MSG_CABLE_TEST_NTF`` Cable test results 242 241 ``ETHTOOL_MSG_CABLE_TEST_TDR_NTF`` Cable test TDR results 242 + ``ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY`` tunnel offload info 243 243 ===================================== ================================= 244 244 245 245 ``GET`` requests are sent by userspace applications to retrieve device ··· 1365 1363 ``ETHTOOL_SFECPARAM`` n/a 1366 1364 n/a ''ETHTOOL_MSG_CABLE_TEST_ACT'' 1367 1365 n/a ''ETHTOOL_MSG_CABLE_TEST_TDR_ACT'' 1366 + n/a ``ETHTOOL_MSG_TUNNEL_INFO_GET`` 1368 1367 =================================== =====================================
-17
Documentation/userspace-api/media/v4l/buffer.rst
··· 701 701 :stub-columns: 0 702 702 :widths: 3 1 4 703 703 704 - * .. _`V4L2-FLAG-MEMORY-NON-CONSISTENT`: 705 - 706 - - ``V4L2_FLAG_MEMORY_NON_CONSISTENT`` 707 - - 0x00000001 708 - - A buffer is allocated either in consistent (it will be automatically 709 - coherent between the CPU and the bus) or non-consistent memory. The 710 - latter can provide performance gains, for instance the CPU cache 711 - sync/flush operations can be avoided if the buffer is accessed by the 712 - corresponding device only and the CPU does not read/write to/from that 713 - buffer. However, this requires extra care from the driver -- it must 714 - guarantee memory consistency by issuing a cache flush/sync when 715 - consistency is needed. If this flag is set V4L2 will attempt to 716 - allocate the buffer in non-consistent memory. The flag takes effect 717 - only if the buffer is used for :ref:`memory mapping <mmap>` I/O and the 718 - queue reports the :ref:`V4L2_BUF_CAP_SUPPORTS_MMAP_CACHE_HINTS 719 - <V4L2-BUF-CAP-SUPPORTS-MMAP-CACHE-HINTS>` capability. 720 - 721 704 .. c:type:: v4l2_memory 722 705 723 706 enum v4l2_memory
+1 -5
Documentation/userspace-api/media/v4l/vidioc-create-bufs.rst
··· 120 120 If you want to just query the capabilities without making any 121 121 other changes, then set ``count`` to 0, ``memory`` to 122 122 ``V4L2_MEMORY_MMAP`` and ``format.type`` to the buffer type. 123 - * - __u32 124 - - ``flags`` 125 - - Specifies additional buffer management attributes. 126 - See :ref:`memory-flags`. 127 123 128 124 * - __u32 129 - - ``reserved``\ [6] 125 + - ``reserved``\ [7] 130 126 - A place holder for future extensions. Drivers and applications 131 127 must set the array to zero. 132 128
+2 -10
Documentation/userspace-api/media/v4l/vidioc-reqbufs.rst
··· 112 112 ``V4L2_MEMORY_MMAP`` and ``type`` set to the buffer type. This will 113 113 free any previously allocated buffers, so this is typically something 114 114 that will be done at the start of the application. 115 - * - union { 116 - - (anonymous) 117 - * - __u32 118 - - ``flags`` 119 - - Specifies additional buffer management attributes. 120 - See :ref:`memory-flags`. 121 115 * - __u32 122 116 - ``reserved``\ [1] 123 - - Kept for backwards compatibility. Use ``flags`` instead. 124 - * - } 125 - - 117 + - A place holder for future extensions. Drivers and applications 118 + must set the array to zero. 126 119 127 120 .. tabularcolumns:: |p{6.1cm}|p{2.2cm}|p{8.7cm}| 128 121 ··· 162 169 - This capability is set by the driver to indicate that the queue supports 163 170 cache and memory management hints. However, it's only valid when the 164 171 queue is used for :ref:`memory mapping <mmap>` streaming I/O. See 165 - :ref:`V4L2_FLAG_MEMORY_NON_CONSISTENT <V4L2-FLAG-MEMORY-NON-CONSISTENT>`, 166 172 :ref:`V4L2_BUF_FLAG_NO_CACHE_INVALIDATE <V4L2-BUF-FLAG-NO-CACHE-INVALIDATE>` and 167 173 :ref:`V4L2_BUF_FLAG_NO_CACHE_CLEAN <V4L2-BUF-FLAG-NO-CACHE-CLEAN>`. 168 174
+20
Documentation/virt/kvm/api.rst
··· 6173 6173 is supported, than the other should as well and vice versa. For arm64 6174 6174 see Documentation/virt/kvm/devices/vcpu.rst "KVM_ARM_VCPU_PVTIME_CTRL". 6175 6175 For x86 see Documentation/virt/kvm/msr.rst "MSR_KVM_STEAL_TIME". 6176 + 6177 + 8.25 KVM_CAP_S390_DIAG318 6178 + ------------------------- 6179 + 6180 + :Architectures: s390 6181 + 6182 + This capability enables a guest to set information about its control program 6183 + (i.e. guest kernel type and version). The information is helpful during 6184 + system/firmware service events, providing additional data about the guest 6185 + environments running on the machine. 6186 + 6187 + The information is associated with the DIAGNOSE 0x318 instruction, which sets 6188 + an 8-byte value consisting of a one-byte Control Program Name Code (CPNC) and 6189 + a 7-byte Control Program Version Code (CPVC). The CPNC determines what 6190 + environment the control program is running in (e.g. Linux, z/VM...), and the 6191 + CPVC is used for information specific to OS (e.g. Linux version, Linux 6192 + distribution...) 6193 + 6194 + If this capability is available, then the CPNC and CPVC can be synchronized 6195 + between KVM and userspace via the sync regs mechanism (KVM_SYNC_DIAG318).
+8 -10
MAINTAINERS
··· 4426 4426 F: fs/configfs/ 4427 4427 F: include/linux/configfs.h 4428 4428 4429 - CONNECTOR 4430 - M: Evgeniy Polyakov <zbr@ioremap.net> 4431 - L: netdev@vger.kernel.org 4432 - S: Maintained 4433 - F: drivers/connector/ 4434 - 4435 4429 CONSOLE SUBSYSTEM 4436 4430 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 4437 4431 S: Supported ··· 8341 8347 F: drivers/pci/hotplug/rpaphp* 8342 8348 8343 8349 IBM Power SRIOV Virtual NIC Device Driver 8344 - M: Thomas Falcon <tlfalcon@linux.ibm.com> 8345 - M: John Allen <jallen@linux.ibm.com> 8350 + M: Dany Madden <drt@linux.ibm.com> 8351 + M: Lijun Pan <ljp@linux.ibm.com> 8352 + M: Sukadev Bhattiprolu <sukadev@linux.ibm.com> 8346 8353 L: netdev@vger.kernel.org 8347 8354 S: Supported 8348 8355 F: drivers/net/ethernet/ibm/ibmvnic.* ··· 8357 8362 F: arch/powerpc/platforms/powernv/vas* 8358 8363 8359 8364 IBM Power Virtual Ethernet Device Driver 8360 - M: Thomas Falcon <tlfalcon@linux.ibm.com> 8365 + M: Cristobal Forno <cforno12@linux.ibm.com> 8361 8366 L: netdev@vger.kernel.org 8362 8367 S: Supported 8363 8368 F: drivers/net/ethernet/ibm/ibmveth.* ··· 11055 11060 11056 11061 MEDIATEK SWITCH DRIVER 11057 11062 M: Sean Wang <sean.wang@mediatek.com> 11063 + M: Landen Chao <Landen.Chao@mediatek.com> 11058 11064 L: netdev@vger.kernel.org 11059 11065 S: Maintained 11060 11066 F: drivers/net/dsa/mt7530.* ··· 12069 12073 T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git 12070 12074 T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git 12071 12075 F: Documentation/devicetree/bindings/net/ 12076 + F: drivers/connector/ 12072 12077 F: drivers/net/ 12073 12078 F: include/linux/etherdevice.h 12074 12079 F: include/linux/fcdevice.h ··· 13200 13203 13201 13204 PCI DRIVER FOR AARDVARK (Marvell Armada 3700) 13202 13205 M: Thomas Petazzoni <thomas.petazzoni@bootlin.com> 13206 + M: Pali Rohár <pali@kernel.org> 13203 13207 L: linux-pci@vger.kernel.org 13204 13208 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 13205 13209 S: Maintained ··· 16173 16175 L: linux-media@vger.kernel.org 16174 16176 S: Maintained 16175 16177 T: git git://linuxtv.org/media_tree.git 16176 - F: Documentation/devicetree/bindings/media/i2c/imx274.txt 16178 + F: Documentation/devicetree/bindings/media/i2c/sony,imx274.yaml 16177 16179 F: drivers/media/i2c/imx274.c 16178 16180 16179 16181 SONY IMX290 SENSOR DRIVER
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 9 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc6 5 + EXTRAVERSION = -rc8 6 6 NAME = Kleptomaniac Octopus 7 7 8 8 # *DOCUMENTATION*
+1 -1
arch/arm/boot/dts/at91-sama5d2_icp.dts
··· 116 116 switch0: ksz8563@0 { 117 117 compatible = "microchip,ksz8563"; 118 118 reg = <0>; 119 - phy-mode = "mii"; 120 119 reset-gpios = <&pioA PIN_PD4 GPIO_ACTIVE_LOW>; 121 120 122 121 spi-max-frequency = <500000>; ··· 139 140 reg = <2>; 140 141 label = "cpu"; 141 142 ethernet = <&macb0>; 143 + phy-mode = "mii"; 142 144 fixed-link { 143 145 speed = <100>; 144 146 full-duplex;
+1 -1
arch/arm/boot/dts/bcm2835-rpi.dtsi
··· 13 13 14 14 soc { 15 15 firmware: firmware { 16 - compatible = "raspberrypi,bcm2835-firmware", "simple-bus"; 16 + compatible = "raspberrypi,bcm2835-firmware", "simple-mfd"; 17 17 #address-cells = <1>; 18 18 #size-cells = <1>; 19 19
+3 -1
arch/arm/mach-imx/cpuidle-imx6q.c
··· 24 24 imx6_set_lpm(WAIT_UNCLOCKED); 25 25 raw_spin_unlock(&cpuidle_lock); 26 26 27 + rcu_idle_enter(); 27 28 cpu_do_idle(); 29 + rcu_idle_exit(); 28 30 29 31 raw_spin_lock(&cpuidle_lock); 30 32 if (num_idle_cpus-- == num_online_cpus()) ··· 46 44 { 47 45 .exit_latency = 50, 48 46 .target_residency = 75, 49 - .flags = CPUIDLE_FLAG_TIMER_STOP, 47 + .flags = CPUIDLE_FLAG_TIMER_STOP | CPUIDLE_FLAG_RCU_IDLE, 50 48 .enter = imx6q_enter_wait, 51 49 .name = "WAIT", 52 50 .desc = "Clock off",
+11 -3
arch/arm64/include/asm/kvm_emulate.h
··· 298 298 return (kvm_vcpu_get_esr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT; 299 299 } 300 300 301 - static __always_inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu) 301 + static __always_inline bool kvm_vcpu_abt_iss1tw(const struct kvm_vcpu *vcpu) 302 302 { 303 303 return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_S1PTW); 304 304 } 305 305 306 + /* Always check for S1PTW *before* using this. */ 306 307 static __always_inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu) 307 308 { 308 - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR) || 309 - kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */ 309 + return kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR; 310 310 } 311 311 312 312 static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu) ··· 333 333 static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu) 334 334 { 335 335 return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW; 336 + } 337 + 338 + static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu) 339 + { 340 + return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu); 336 341 } 337 342 338 343 static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu) ··· 377 372 378 373 static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu) 379 374 { 375 + if (kvm_vcpu_abt_iss1tw(vcpu)) 376 + return true; 377 + 380 378 if (kvm_vcpu_trap_is_iabt(vcpu)) 381 379 return false; 382 380
+20 -2
arch/arm64/kernel/acpi.c
··· 298 298 case EFI_BOOT_SERVICES_DATA: 299 299 case EFI_CONVENTIONAL_MEMORY: 300 300 case EFI_PERSISTENT_MEMORY: 301 - pr_warn(FW_BUG "requested region covers kernel memory @ %pa\n", &phys); 302 - return NULL; 301 + if (memblock_is_map_memory(phys) || 302 + !memblock_is_region_memory(phys, size)) { 303 + pr_warn(FW_BUG "requested region covers kernel memory @ %pa\n", &phys); 304 + return NULL; 305 + } 306 + /* 307 + * Mapping kernel memory is permitted if the region in 308 + * question is covered by a single memblock with the 309 + * NOMAP attribute set: this enables the use of ACPI 310 + * table overrides passed via initramfs, which are 311 + * reserved in memory using arch_reserve_mem_area() 312 + * below. As this particular use case only requires 313 + * read access, fall through to the R/O mapping case. 314 + */ 315 + fallthrough; 303 316 304 317 case EFI_RUNTIME_SERVICES_CODE: 305 318 /* ··· 400 387 local_daif_restore(current_flags); 401 388 402 389 return err; 390 + } 391 + 392 + void arch_reserve_mem_area(acpi_physical_address addr, size_t size) 393 + { 394 + memblock_mark_nomap(addr, size); 403 395 }
+1 -1
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 449 449 kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT && 450 450 kvm_vcpu_dabt_isvalid(vcpu) && 451 451 !kvm_vcpu_abt_issea(vcpu) && 452 - !kvm_vcpu_dabt_iss1tw(vcpu); 452 + !kvm_vcpu_abt_iss1tw(vcpu); 453 453 454 454 if (valid) { 455 455 int ret = __vgic_v2_perform_cpuif_access(vcpu);
+7
arch/arm64/kvm/hyp/nvhe/tlb.c
··· 31 31 isb(); 32 32 } 33 33 34 + /* 35 + * __load_guest_stage2() includes an ISB only when the AT 36 + * workaround is applied. Take care of the opposite condition, 37 + * ensuring that we always have an ISB, but not two ISBs back 38 + * to back. 39 + */ 34 40 __load_guest_stage2(mmu); 41 + asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT)); 35 42 } 36 43 37 44 static void __tlb_switch_to_host(struct tlb_inv_context *cxt)
+2 -2
arch/arm64/kvm/mmu.c
··· 1849 1849 struct kvm_s2_mmu *mmu = vcpu->arch.hw_mmu; 1850 1850 1851 1851 write_fault = kvm_is_write_fault(vcpu); 1852 - exec_fault = kvm_vcpu_trap_is_iabt(vcpu); 1852 + exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu); 1853 1853 VM_BUG_ON(write_fault && exec_fault); 1854 1854 1855 1855 if (fault_status == FSC_PERM && !write_fault && !exec_fault) { ··· 2131 2131 goto out; 2132 2132 } 2133 2133 2134 - if (kvm_vcpu_dabt_iss1tw(vcpu)) { 2134 + if (kvm_vcpu_abt_iss1tw(vcpu)) { 2135 2135 kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu)); 2136 2136 ret = 1; 2137 2137 goto out_unlock;
+3 -3
arch/ia64/mm/init.c
··· 538 538 if (map_start < map_end) 539 539 memmap_init_zone((unsigned long)(map_end - map_start), 540 540 args->nid, args->zone, page_to_pfn(map_start), 541 - MEMMAP_EARLY, NULL); 541 + MEMINIT_EARLY, NULL); 542 542 return 0; 543 543 } 544 544 ··· 547 547 unsigned long start_pfn) 548 548 { 549 549 if (!vmem_map) { 550 - memmap_init_zone(size, nid, zone, start_pfn, MEMMAP_EARLY, 551 - NULL); 550 + memmap_init_zone(size, nid, zone, start_pfn, 551 + MEMINIT_EARLY, NULL); 552 552 } else { 553 553 struct page *start; 554 554 struct memmap_init_callback_data args;
+1 -1
arch/mips/bcm47xx/setup.c
··· 148 148 { 149 149 struct cpuinfo_mips *c = &current_cpu_data; 150 150 151 - if ((c->cputype == CPU_74K) || (c->cputype == CPU_1074K)) { 151 + if (c->cputype == CPU_74K) { 152 152 pr_info("Using bcma bus\n"); 153 153 #ifdef CONFIG_BCM47XX_BCMA 154 154 bcm47xx_bus_type = BCM47XX_BUS_TYPE_BCMA;
+1
arch/mips/include/asm/cpu-type.h
··· 47 47 case CPU_34K: 48 48 case CPU_1004K: 49 49 case CPU_74K: 50 + case CPU_1074K: 50 51 case CPU_M14KC: 51 52 case CPU_M14KEC: 52 53 case CPU_INTERAPTIV:
+4
arch/mips/loongson2ef/Platform
··· 44 44 endif 45 45 endif 46 46 47 + # Some -march= flags enable MMI instructions, and GCC complains about that 48 + # support being enabled alongside -msoft-float. Thus explicitly disable MMI. 49 + cflags-y += $(call cc-option,-mno-loongson-mmi) 50 + 47 51 # 48 52 # Loongson Machines' Support 49 53 #
+8 -16
arch/mips/loongson64/cop2-ex.c
··· 95 95 if (res) 96 96 goto fault; 97 97 98 - set_fpr64(current->thread.fpu.fpr, 99 - insn.loongson3_lswc2_format.rt, value); 100 - set_fpr64(current->thread.fpu.fpr, 101 - insn.loongson3_lswc2_format.rq, value_next); 98 + set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rt], 0, value); 99 + set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rq], 0, value_next); 102 100 compute_return_epc(regs); 103 101 own_fpu(1); 104 102 } ··· 128 130 goto sigbus; 129 131 130 132 lose_fpu(1); 131 - value_next = get_fpr64(current->thread.fpu.fpr, 132 - insn.loongson3_lswc2_format.rq); 133 + value_next = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rq], 0); 133 134 134 135 StoreDW(addr + 8, value_next, res); 135 136 if (res) 136 137 goto fault; 137 138 138 - value = get_fpr64(current->thread.fpu.fpr, 139 - insn.loongson3_lswc2_format.rt); 139 + value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rt], 0); 140 140 141 141 StoreDW(addr, value, res); 142 142 if (res) ··· 200 204 if (res) 201 205 goto fault; 202 206 203 - set_fpr64(current->thread.fpu.fpr, 204 - insn.loongson3_lsdc2_format.rt, value); 207 + set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0, value); 205 208 compute_return_epc(regs); 206 209 own_fpu(1); ··· 216 221 if (res) 217 222 goto fault; 218 223 219 - set_fpr64(current->thread.fpu.fpr, 220 - insn.loongson3_lsdc2_format.rt, value); 224 + set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0, value); 221 225 compute_return_epc(regs); 222 226 own_fpu(1); 223 227 break; ··· 280 286 goto sigbus; 281 287 282 288 lose_fpu(1); 283 - value = get_fpr64(current->thread.fpu.fpr, 284 - insn.loongson3_lsdc2_format.rt); 289 + value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0); 285 290 286 291 StoreW(addr, value, res); 287 292 if (res) ··· 298 305 goto sigbus; 299 306 300 307 lose_fpu(1); 301 - value = get_fpr64(current->thread.fpu.fpr, 302 - insn.loongson3_lsdc2_format.rt); 308 + value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0); 303 309 304 310 StoreDW(addr, value, res); 305 311 if (res)
-4
arch/riscv/include/asm/stackprotector.h
··· 5 5 6 6 #include <linux/random.h> 7 7 #include <linux/version.h> 8 - #include <asm/timex.h> 9 8 10 9 extern unsigned long __stack_chk_guard; 11 10 ··· 17 18 static __always_inline void boot_init_stack_canary(void) 18 19 { 19 20 unsigned long canary; 20 - unsigned long tsc; 21 21 22 22 /* Try to get a semi random initial value. */ 23 23 get_random_bytes(&canary, sizeof(canary)); 24 - tsc = get_cycles(); 25 - canary += tsc + (tsc << BITS_PER_LONG/2); 26 24 canary ^= LINUX_VERSION_CODE; 27 25 canary &= CANARY_MASK; 28 26
+13
arch/riscv/include/asm/timex.h
··· 33 33 #define get_cycles_hi get_cycles_hi 34 34 #endif /* CONFIG_64BIT */ 35 35 36 + /* 37 + * Much like MIPS, we may not have a viable counter to use at an early point 38 + * in the boot process. Unfortunately we don't have a fallback, so instead 39 + * we just return 0. 40 + */ 41 + static inline unsigned long random_get_entropy(void) 42 + { 43 + if (unlikely(clint_time_val == NULL)) 44 + return 0; 45 + return get_cycles(); 46 + } 47 + #define random_get_entropy() random_get_entropy() 48 + 36 49 #else /* CONFIG_RISCV_M_MODE */ 37 50 38 51 static inline cycles_t get_cycles(void)
+30 -12
arch/s390/include/asm/pgtable.h
··· 1260 1260 1261 1261 #define pgd_offset(mm, address) pgd_offset_raw(READ_ONCE((mm)->pgd), address) 1262 1262 1263 - static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address) 1263 + static inline p4d_t *p4d_offset_lockless(pgd_t *pgdp, pgd_t pgd, unsigned long address) 1264 1264 { 1265 - if ((pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1) 1266 - return (p4d_t *) pgd_deref(*pgd) + p4d_index(address); 1267 - return (p4d_t *) pgd; 1265 + if ((pgd_val(pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1) 1266 + return (p4d_t *) pgd_deref(pgd) + p4d_index(address); 1267 + return (p4d_t *) pgdp; 1268 + } 1269 + #define p4d_offset_lockless p4d_offset_lockless 1270 + 1271 + static inline p4d_t *p4d_offset(pgd_t *pgdp, unsigned long address) 1272 + { 1273 + return p4d_offset_lockless(pgdp, *pgdp, address); 1268 1274 } 1269 1275 1270 - static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address) 1276 + static inline pud_t *pud_offset_lockless(p4d_t *p4dp, p4d_t p4d, unsigned long address) 1271 1277 { 1272 - if ((p4d_val(*p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2) 1273 - return (pud_t *) p4d_deref(*p4d) + pud_index(address); 1274 - return (pud_t *) p4d; 1278 + if ((p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2) 1279 + return (pud_t *) p4d_deref(p4d) + pud_index(address); 1280 + return (pud_t *) p4dp; 1281 + } 1282 + #define pud_offset_lockless pud_offset_lockless 1283 + 1284 + static inline pud_t *pud_offset(p4d_t *p4dp, unsigned long address) 1285 + { 1286 + return pud_offset_lockless(p4dp, *p4dp, address); 1275 1287 } 1276 1288 #define pud_offset pud_offset 1277 1289 1278 - static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address) 1290 + static inline pmd_t *pmd_offset_lockless(pud_t *pudp, pud_t pud, unsigned long address) 1279 1291 { 1280 - if ((pud_val(*pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3) 1281 - return (pmd_t *) pud_deref(*pud) + pmd_index(address); 1282 - return (pmd_t *) pud; 1292 + if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3) 1293 + return (pmd_t *) pud_deref(pud) + pmd_index(address); 1294 + return (pmd_t *) pudp; 1295 + } 1296 + #define pmd_offset_lockless pmd_offset_lockless 1297 + 1298 + static inline pmd_t *pmd_offset(pud_t *pudp, unsigned long address) 1299 + { 1300 + return pmd_offset_lockless(pudp, *pudp, address); 1283 1301 } 1284 1302 #define pmd_offset pmd_offset 1285 1303
+1 -1
arch/x86/entry/common.c
··· 299 299 old_regs = set_irq_regs(regs); 300 300 301 301 instrumentation_begin(); 302 - run_on_irqstack_cond(__xen_pv_evtchn_do_upcall, NULL, regs); 302 + run_on_irqstack_cond(__xen_pv_evtchn_do_upcall, regs); 303 303 instrumentation_begin(); 304 304 305 305 set_irq_regs(old_regs);
+2
arch/x86/entry/entry_64.S
··· 682 682 * rdx: Function argument (can be NULL if none) 683 683 */ 684 684 SYM_FUNC_START(asm_call_on_stack) 685 + SYM_INNER_LABEL(asm_call_sysvec_on_stack, SYM_L_GLOBAL) 686 + SYM_INNER_LABEL(asm_call_irq_on_stack, SYM_L_GLOBAL) 685 687 /* 686 688 * Save the frame pointer unconditionally. This allows the ORC 687 689 * unwinder to handle the stack switch.
+1 -1
arch/x86/include/asm/idtentry.h
··· 242 242 instrumentation_begin(); \ 243 243 irq_enter_rcu(); \ 244 244 kvm_set_cpu_l1tf_flush_l1d(); \ 245 - run_on_irqstack_cond(__##func, regs, regs); \ 245 + run_sysvec_on_irqstack_cond(__##func, regs); \ 246 246 irq_exit_rcu(); \ 247 247 instrumentation_end(); \ 248 248 irqentry_exit(regs, state); \
+62 -9
arch/x86/include/asm/irq_stack.h
··· 12 12 return __this_cpu_read(irq_count) != -1; 13 13 } 14 14 15 - void asm_call_on_stack(void *sp, void *func, void *arg); 15 + void asm_call_on_stack(void *sp, void (*func)(void), void *arg); 16 + void asm_call_sysvec_on_stack(void *sp, void (*func)(struct pt_regs *regs), 17 + struct pt_regs *regs); 18 + void asm_call_irq_on_stack(void *sp, void (*func)(struct irq_desc *desc), 19 + struct irq_desc *desc); 16 20 17 - static __always_inline void __run_on_irqstack(void *func, void *arg) 21 + static __always_inline void __run_on_irqstack(void (*func)(void)) 18 22 { 19 23 void *tos = __this_cpu_read(hardirq_stack_ptr); 20 24 21 25 __this_cpu_add(irq_count, 1); 22 - asm_call_on_stack(tos - 8, func, arg); 26 + asm_call_on_stack(tos - 8, func, NULL); 27 + __this_cpu_sub(irq_count, 1); 28 + } 29 + 30 + static __always_inline void 31 + __run_sysvec_on_irqstack(void (*func)(struct pt_regs *regs), 32 + struct pt_regs *regs) 33 + { 34 + void *tos = __this_cpu_read(hardirq_stack_ptr); 35 + 36 + __this_cpu_add(irq_count, 1); 37 + asm_call_sysvec_on_stack(tos - 8, func, regs); 38 + __this_cpu_sub(irq_count, 1); 39 + } 40 + 41 + static __always_inline void 42 + __run_irq_on_irqstack(void (*func)(struct irq_desc *desc), 43 + struct irq_desc *desc) 44 + { 45 + void *tos = __this_cpu_read(hardirq_stack_ptr); 46 + 47 + __this_cpu_add(irq_count, 1); 48 + asm_call_irq_on_stack(tos - 8, func, desc); 23 49 __this_cpu_sub(irq_count, 1); 24 50 } 25 51 26 52 #else /* CONFIG_X86_64 */ 27 53 static inline bool irqstack_active(void) { return false; } 28 - static inline void __run_on_irqstack(void *func, void *arg) { } 54 + static inline void __run_on_irqstack(void (*func)(void)) { } 55 + static inline void __run_sysvec_on_irqstack(void (*func)(struct pt_regs *regs), 56 + struct pt_regs *regs) { } 57 + static inline void __run_irq_on_irqstack(void (*func)(struct irq_desc *desc), 58 + struct irq_desc *desc) { } 29 59 #endif /* !CONFIG_X86_64 */ 30 60 31 61 static __always_inline bool 
irq_needs_irq_stack(struct pt_regs *regs) ··· 67 37 return !user_mode(regs) && !irqstack_active(); 68 38 } 69 39 70 - static __always_inline void run_on_irqstack_cond(void *func, void *arg, 40 + 41 + static __always_inline void run_on_irqstack_cond(void (*func)(void), 71 42 struct pt_regs *regs) 72 43 { 73 - void (*__func)(void *arg) = func; 74 - 75 44 lockdep_assert_irqs_disabled(); 76 45 77 46 if (irq_needs_irq_stack(regs)) 78 - __run_on_irqstack(__func, arg); 47 + __run_on_irqstack(func); 79 48 else 80 - __func(arg); 49 + func(); 50 + } 51 + 52 + static __always_inline void 53 + run_sysvec_on_irqstack_cond(void (*func)(struct pt_regs *regs), 54 + struct pt_regs *regs) 55 + { 56 + lockdep_assert_irqs_disabled(); 57 + 58 + if (irq_needs_irq_stack(regs)) 59 + __run_sysvec_on_irqstack(func, regs); 60 + else 61 + func(regs); 62 + } 63 + 64 + static __always_inline void 65 + run_irq_on_irqstack_cond(void (*func)(struct irq_desc *desc), struct irq_desc *desc, 66 + struct pt_regs *regs) 67 + { 68 + lockdep_assert_irqs_disabled(); 69 + 70 + if (irq_needs_irq_stack(regs)) 71 + __run_irq_on_irqstack(func, desc); 72 + else 73 + func(desc); 81 74 } 82 75 83 76 #endif
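All three `run_*_on_irqstack_cond()` variants share the same shape: if the entry came from kernel mode and we are not already on the irq stack, switch stacks around the call; otherwise invoke the handler directly. The patch's point is that each variant now carries a properly typed function pointer instead of `void *`. A toy userspace model of that dispatch, with made-up stand-ins for `irqstack_active()` and the stack switch:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's per-cpu state, not the real API. */
static bool on_irqstack;      /* models irqstack_active()        */
static int  calls_on_stack;   /* counts simulated stack switches */

struct pt_regs { int user_mode; };

static bool irq_needs_irq_stack(const struct pt_regs *regs)
{
        /* Switch stacks only for kernel-mode entry when not already on it. */
        return !regs->user_mode && !on_irqstack;
}

/* Models run_sysvec_on_irqstack_cond(): typed callback, no void * casts. */
static void run_sysvec_on_irqstack_cond(void (*func)(struct pt_regs *),
                                        struct pt_regs *regs)
{
        if (irq_needs_irq_stack(regs)) {
                on_irqstack = true;     /* "switch" to the irq stack */
                calls_on_stack++;
                func(regs);
                on_irqstack = false;
        } else {
                func(regs);             /* already safe: call directly */
        }
}

static int handled;
static void fake_sysvec(struct pt_regs *regs) { (void)regs; handled++; }
```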
+1
arch/x86/kernel/apic/io_apic.c
··· 2243 2243 legacy_pic->init(0); 2244 2244 legacy_pic->make_irq(0); 2245 2245 apic_write(APIC_LVT0, APIC_DM_EXTINT); 2246 + legacy_pic->unmask(0); 2246 2247 2247 2248 unlock_ExtINT_logic(); 2248 2249
+1 -1
arch/x86/kernel/irq.c
··· 227 227 struct pt_regs *regs) 228 228 { 229 229 if (IS_ENABLED(CONFIG_X86_64)) 230 - run_on_irqstack_cond(desc->handle_irq, desc, regs); 230 + run_irq_on_irqstack_cond(desc->handle_irq, desc, regs); 231 231 else 232 232 __handle_irq(desc, regs); 233 233 }
+1 -1
arch/x86/kernel/irq_64.c
··· 74 74 75 75 void do_softirq_own_stack(void) 76 76 { 77 - run_on_irqstack_cond(__do_softirq, NULL, NULL); 77 + run_on_irqstack_cond(__do_softirq, NULL); 78 78 }
+3 -19
arch/x86/kernel/kvm.c
··· 652 652 } 653 653 654 654 if (pv_tlb_flush_supported()) { 655 + pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others; 655 656 pv_ops.mmu.tlb_remove_table = tlb_remove_table; 656 657 pr_info("KVM setup pv remote TLB flush\n"); 657 658 } ··· 765 764 } 766 765 arch_initcall(activate_jump_labels); 767 766 768 - static void kvm_free_pv_cpu_mask(void) 769 - { 770 - unsigned int cpu; 771 - 772 - for_each_possible_cpu(cpu) 773 - free_cpumask_var(per_cpu(__pv_cpu_mask, cpu)); 774 - } 775 - 776 767 static __init int kvm_alloc_cpumask(void) 777 768 { 778 769 int cpu; ··· 783 790 784 791 if (alloc) 785 792 for_each_possible_cpu(cpu) { 786 - if (!zalloc_cpumask_var_node( 787 - per_cpu_ptr(&__pv_cpu_mask, cpu), 788 - GFP_KERNEL, cpu_to_node(cpu))) { 789 - goto zalloc_cpumask_fail; 790 - } 793 + zalloc_cpumask_var_node(per_cpu_ptr(&__pv_cpu_mask, cpu), 794 + GFP_KERNEL, cpu_to_node(cpu)); 791 795 } 792 796 793 - apic->send_IPI_mask_allbutself = kvm_send_ipi_mask_allbutself; 794 - pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others; 795 797 return 0; 796 - 797 - zalloc_cpumask_fail: 798 - kvm_free_pv_cpu_mask(); 799 - return -ENOMEM; 800 798 } 801 799 arch_initcall(kvm_alloc_cpumask); 802 800
+7 -1
arch/x86/kvm/svm/svm.c
··· 2183 2183 return 1; 2184 2184 } 2185 2185 2186 + static int invd_interception(struct vcpu_svm *svm) 2187 + { 2188 + /* Treat an INVD instruction as a NOP and just skip it. */ 2189 + return kvm_skip_emulated_instruction(&svm->vcpu); 2190 + } 2191 + 2186 2192 static int invlpg_interception(struct vcpu_svm *svm) 2187 2193 { 2188 2194 if (!static_cpu_has(X86_FEATURE_DECODEASSISTS)) ··· 2780 2774 [SVM_EXIT_RDPMC] = rdpmc_interception, 2781 2775 [SVM_EXIT_CPUID] = cpuid_interception, 2782 2776 [SVM_EXIT_IRET] = iret_interception, 2783 - [SVM_EXIT_INVD] = emulate_on_interception, 2777 + [SVM_EXIT_INVD] = invd_interception, 2784 2778 [SVM_EXIT_PAUSE] = pause_interception, 2785 2779 [SVM_EXIT_HLT] = halt_interception, 2786 2780 [SVM_EXIT_INVLPG] = invlpg_interception,
+22 -15
arch/x86/kvm/vmx/vmx.c
··· 129 129 module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO); 130 130 #endif 131 131 132 + extern bool __read_mostly allow_smaller_maxphyaddr; 133 + module_param(allow_smaller_maxphyaddr, bool, S_IRUGO); 134 + 132 135 #define KVM_VM_CR0_ALWAYS_OFF (X86_CR0_NW | X86_CR0_CD) 133 136 #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST X86_CR0_NE 134 137 #define KVM_VM_CR0_ALWAYS_ON \ ··· 794 791 */ 795 792 if (is_guest_mode(vcpu)) 796 793 eb |= get_vmcs12(vcpu)->exception_bitmap; 794 + else { 795 + /* 796 + * If EPT is enabled, #PF is only trapped if MAXPHYADDR is mismatched 797 + * between guest and host. In that case we only care about present 798 + * faults. For vmcs02, however, PFEC_MASK and PFEC_MATCH are set in 799 + * prepare_vmcs02_rare. 800 + */ 801 + bool selective_pf_trap = enable_ept && (eb & (1u << PF_VECTOR)); 802 + int mask = selective_pf_trap ? PFERR_PRESENT_MASK : 0; 803 + vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, mask); 804 + vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, mask); 805 + } 797 806 798 807 vmcs_write32(EXCEPTION_BITMAP, eb); 799 808 } ··· 4367 4352 vmx->pt_desc.guest.output_mask = 0x7F; 4368 4353 vmcs_write64(GUEST_IA32_RTIT_CTL, 0); 4369 4354 } 4370 - 4371 - /* 4372 - * If EPT is enabled, #PF is only trapped if MAXPHYADDR is mismatched 4373 - * between guest and host. In that case we only care about present 4374 - * faults. 4375 - */ 4376 - if (enable_ept) { 4377 - vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, PFERR_PRESENT_MASK); 4378 - vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, PFERR_PRESENT_MASK); 4379 - } 4380 4355 } 4381 4356 4382 4357 static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) ··· 4808 4803 * EPT will cause page fault only if we need to 4809 4804 * detect illegal GPAs. 
4810 4805 */ 4806 + WARN_ON_ONCE(!allow_smaller_maxphyaddr); 4811 4807 kvm_fixup_and_inject_pf_error(vcpu, cr2, error_code); 4812 4808 return 1; 4813 4809 } else ··· 5337 5331 * would also use advanced VM-exit information for EPT violations to 5338 5332 * reconstruct the page fault error code. 5339 5333 */ 5340 - if (unlikely(kvm_mmu_is_illegal_gpa(vcpu, gpa))) 5334 + if (unlikely(allow_smaller_maxphyaddr && kvm_mmu_is_illegal_gpa(vcpu, gpa))) 5341 5335 return kvm_emulate_instruction(vcpu, 0); 5342 5336 5343 5337 return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0); ··· 8311 8305 vmx_check_vmcs12_offsets(); 8312 8306 8313 8307 /* 8314 - * Intel processors don't have problems with 8315 - * GUEST_MAXPHYADDR < HOST_MAXPHYADDR so enable 8316 - * it for VMX by default 8308 + * Shadow paging doesn't have a (further) performance penalty 8309 + * from GUEST_MAXPHYADDR < HOST_MAXPHYADDR so enable it 8310 + * by default 8317 8311 */ 8318 - allow_smaller_maxphyaddr = true; 8312 + if (!enable_ept) 8313 + allow_smaller_maxphyaddr = true; 8319 8314 8320 8315 return 0; 8321 8316 }
+4 -1
arch/x86/kvm/vmx/vmx.h
··· 552 552 553 553 static inline bool vmx_need_pf_intercept(struct kvm_vcpu *vcpu) 554 554 { 555 - return !enable_ept || cpuid_maxphyaddr(vcpu) < boot_cpu_data.x86_phys_bits; 555 + if (!enable_ept) 556 + return true; 557 + 558 + return allow_smaller_maxphyaddr && cpuid_maxphyaddr(vcpu) < boot_cpu_data.x86_phys_bits; 556 559 } 557 560 558 561 void dump_vmcs(void);
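The reworked `vmx_need_pf_intercept()` intercepts #PF unconditionally only for shadow paging; with EPT it intercepts only when the admin opted into `allow_smaller_maxphyaddr` and the guest's MAXPHYADDR really is smaller than the host's. The decision table can be restated as a plain predicate (illustrative, not the kernel function):

```c
#include <assert.h>
#include <stdbool.h>

/* Flags mirror the patch's logic; the values passed in are illustrative. */
static bool need_pf_intercept(bool enable_ept, bool allow_smaller_maxphyaddr,
                              int guest_maxphyaddr, int host_maxphyaddr)
{
        if (!enable_ept)
                return true;    /* shadow paging always intercepts #PF */

        /* With EPT, only the opt-in mismatched-MAXPHYADDR case traps. */
        return allow_smaller_maxphyaddr &&
               guest_maxphyaddr < host_maxphyaddr;
}
```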
+18 -4
arch/x86/kvm/x86.c
··· 188 188 u64 __read_mostly host_efer; 189 189 EXPORT_SYMBOL_GPL(host_efer); 190 190 191 - bool __read_mostly allow_smaller_maxphyaddr; 191 + bool __read_mostly allow_smaller_maxphyaddr = 0; 192 192 EXPORT_SYMBOL_GPL(allow_smaller_maxphyaddr); 193 193 194 194 static u64 __read_mostly host_xss; ··· 976 976 unsigned long old_cr4 = kvm_read_cr4(vcpu); 977 977 unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE | 978 978 X86_CR4_SMEP; 979 + unsigned long mmu_role_bits = pdptr_bits | X86_CR4_SMAP | X86_CR4_PKE; 979 980 980 981 if (kvm_valid_cr4(vcpu, cr4)) 981 982 return 1; ··· 1004 1003 if (kvm_x86_ops.set_cr4(vcpu, cr4)) 1005 1004 return 1; 1006 1005 1007 - if (((cr4 ^ old_cr4) & pdptr_bits) || 1006 + if (((cr4 ^ old_cr4) & mmu_role_bits) || 1008 1007 (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE))) 1009 1008 kvm_mmu_reset_context(vcpu); 1010 1009 ··· 3222 3221 case MSR_IA32_POWER_CTL: 3223 3222 msr_info->data = vcpu->arch.msr_ia32_power_ctl; 3224 3223 break; 3225 - case MSR_IA32_TSC: 3226 - msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset; 3224 + case MSR_IA32_TSC: { 3225 + /* 3226 + * Intel SDM states that MSR_IA32_TSC read adds the TSC offset 3227 + * even when not intercepted. AMD manual doesn't explicitly 3228 + * state this but appears to behave the same. 3229 + * 3230 + * On userspace reads and writes, however, we unconditionally 3231 + * operate L1's TSC value to ensure backwards-compatible 3232 + * behavior for migration. 3233 + */ 3234 + u64 tsc_offset = msr_info->host_initiated ? vcpu->arch.l1_tsc_offset : 3235 + vcpu->arch.tsc_offset; 3236 + 3237 + msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + tsc_offset; 3227 3238 break; 3239 + } 3228 3240 case MSR_MTRRcap: 3229 3241 case 0x200 ... 0x2ff: 3230 3242 return kvm_mtrr_get_msr(vcpu, msr_info->index, &msr_info->data);
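The `MSR_IA32_TSC` read now distinguishes who is asking: the guest sees the TSC offset of the currently active level (per the Intel SDM behavior described in the new comment), while host-initiated accesses from userspace always see L1's offset so migration stays backwards compatible. A simplified model of that selection (scaling omitted; the struct fields are stand-ins for the kvm ones):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the MSR_IA32_TSC read path, not the real kvm API. */
struct vcpu_arch {
        uint64_t l1_tsc_offset;   /* offset as seen by L1            */
        uint64_t tsc_offset;      /* offset of the active guest level */
};

static uint64_t read_tsc_msr(const struct vcpu_arch *arch, uint64_t host_tsc,
                             bool host_initiated)
{
        /* Userspace (e.g. migration) always operates on L1's value; the
         * guest sees whatever level is currently running. */
        uint64_t off = host_initiated ? arch->l1_tsc_offset
                                      : arch->tsc_offset;

        return host_tsc + off;    /* kvm_scale_tsc() omitted for brevity */
}
```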
+1 -1
arch/x86/lib/usercopy_64.c
··· 120 120 */ 121 121 if (size < 8) { 122 122 if (!IS_ALIGNED(dest, 4) || size != 4) 123 - clean_cache_range(dst, 1); 123 + clean_cache_range(dst, size); 124 124 } else { 125 125 if (!IS_ALIGNED(dest, 8)) { 126 126 dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
+9 -9
block/blk-mq.c
··· 1412 1412 1413 1413 hctx->dispatched[queued_to_index(queued)]++; 1414 1414 1415 + /* If we didn't flush the entire list, we could have told the driver 1416 + * there was more coming, but that turned out to be a lie. 1417 + */ 1418 + if ((!list_empty(list) || errors) && q->mq_ops->commit_rqs && queued) 1419 + q->mq_ops->commit_rqs(hctx); 1415 1420 /* 1416 1421 * Any items that need requeuing? Stuff them into hctx->dispatch, 1417 1422 * that is where we will continue on next queue run. ··· 1429 1424 bool no_budget_avail = prep == PREP_DISPATCH_NO_BUDGET; 1430 1425 1431 1426 blk_mq_release_budgets(q, nr_budgets); 1432 - 1433 - /* 1434 - * If we didn't flush the entire list, we could have told 1435 - * the driver there was more coming, but that turned out to 1436 - * be a lie. 1437 - */ 1438 - if (q->mq_ops->commit_rqs && queued) 1439 - q->mq_ops->commit_rqs(hctx); 1440 1427 1441 1428 spin_lock(&hctx->lock); 1442 1429 list_splice_tail_init(list, &hctx->dispatch); ··· 2076 2079 struct list_head *list) 2077 2080 { 2078 2081 int queued = 0; 2082 + int errors = 0; 2079 2083 2080 2084 while (!list_empty(list)) { 2081 2085 blk_status_t ret; ··· 2093 2095 break; 2094 2096 } 2095 2097 blk_mq_end_request(rq, ret); 2098 + errors++; 2096 2099 } else 2097 2100 queued++; 2098 2101 } ··· 2103 2104 * the driver there was more coming, but that turned out to 2104 2105 * be a lie. 2105 2106 */ 2106 - if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs && queued) 2107 + if ((!list_empty(list) || errors) && 2108 + hctx->queue->mq_ops->commit_rqs && queued) 2107 2109 hctx->queue->mq_ops->commit_rqs(hctx); 2108 2110 } 2109 2111
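The blk-mq fix widens the `commit_rqs` condition: the driver must be kicked not only when requests remain on the list but also when some requests errored out, since either way the "more is coming" hint given earlier was a lie. The fixed condition, restated as a small predicate (the check for the driver actually providing a `commit_rqs` hook is omitted here):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Kick the driver if we queued at least one request but the "more is
 * coming" hint turned out to be wrong: either requests are still on the
 * list, or some ended in error instead of being queued.
 */
static bool need_commit_rqs(bool list_empty, int errors, int queued)
{
        return (!list_empty || errors) && queued;
}
```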
+46
block/blk-settings.c
··· 801 801 } 802 802 EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging); 803 803 804 + /** 805 + * blk_queue_set_zoned - configure a disk queue zoned model. 806 + * @disk: the gendisk of the queue to configure 807 + * @model: the zoned model to set 808 + * 809 + * Set the zoned model of the request queue of @disk according to @model. 810 + * When @model is BLK_ZONED_HM (host managed), this should be called only 811 + * if zoned block device support is enabled (CONFIG_BLK_DEV_ZONED option). 812 + * If @model specifies BLK_ZONED_HA (host aware), the effective model used 813 + * depends on CONFIG_BLK_DEV_ZONED settings and on the existence of partitions 814 + * on the disk. 815 + */ 816 + void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model) 817 + { 818 + switch (model) { 819 + case BLK_ZONED_HM: 820 + /* 821 + * Host managed devices are supported only if 822 + * CONFIG_BLK_DEV_ZONED is enabled. 823 + */ 824 + WARN_ON_ONCE(!IS_ENABLED(CONFIG_BLK_DEV_ZONED)); 825 + break; 826 + case BLK_ZONED_HA: 827 + /* 828 + * Host aware devices can be treated either as regular block 829 + * devices (similar to drive managed devices) or as zoned block 830 + * devices to take advantage of the zone command set, similarly 831 + * to host managed devices. We try the latter if there are no 832 + * partitions and zoned block device support is enabled, else 833 + * we do nothing special as far as the block layer is concerned. 834 + */ 835 + if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED) || 836 + disk_has_partitions(disk)) 837 + model = BLK_ZONED_NONE; 838 + break; 839 + case BLK_ZONED_NONE: 840 + default: 841 + if (WARN_ON_ONCE(model != BLK_ZONED_NONE)) 842 + model = BLK_ZONED_NONE; 843 + break; 844 + } 845 + 846 + disk->queue->limits.zoned = model; 847 + } 848 + EXPORT_SYMBOL_GPL(blk_queue_set_zoned); 849 + 804 850 static int __init blk_settings_init(void) 805 851 { 806 852 blk_max_low_pfn = max_low_pfn - 1;
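The new `blk_queue_set_zoned()` centralizes the zoned-model fallback policy: host-managed is only legal when `CONFIG_BLK_DEV_ZONED` is enabled, and host-aware silently degrades to a regular (non-zoned) device when zoned support is off or the disk has partitions. That selection can be restated as a pure function (illustrative enum and helper, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative restatement of blk_queue_set_zoned()'s model selection. */
enum zoned_model { ZONED_NONE, ZONED_HA, ZONED_HM };

static enum zoned_model effective_model(enum zoned_model requested,
                                        bool zoned_support,
                                        bool has_partitions)
{
        switch (requested) {
        case ZONED_HM:
                /* Callers may only request host-managed when zoned block
                 * device support is compiled in (WARNed on in the kernel). */
                return ZONED_HM;
        case ZONED_HA:
                /* Host-aware falls back to a regular device when zoned
                 * support is off or the disk is partitioned. */
                if (!zoned_support || has_partitions)
                        return ZONED_NONE;
                return ZONED_HA;
        default:
                return ZONED_NONE;
        }
}
```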
+1
drivers/acpi/processor_idle.c
··· 176 176 static bool lapic_timer_needs_broadcast(struct acpi_processor *pr, 177 177 struct acpi_processor_cx *cx) 178 178 { 179 + return false; 179 180 } 180 181 181 182 #endif
+1 -1
drivers/atm/eni.c
··· 2224 2224 2225 2225 rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32)); 2226 2226 if (rc < 0) 2227 - goto out; 2227 + goto err_disable; 2228 2228 2229 2229 rc = -ENOMEM; 2230 2230 eni_dev = kmalloc(sizeof(struct eni_dev), GFP_KERNEL);
+55 -30
drivers/base/node.c
··· 761 761 return pfn_to_nid(pfn); 762 762 } 763 763 764 + static int do_register_memory_block_under_node(int nid, 765 + struct memory_block *mem_blk) 766 + { 767 + int ret; 768 + 769 + /* 770 + * If this memory block spans multiple nodes, we only indicate 771 + * the last processed node. 772 + */ 773 + mem_blk->nid = nid; 774 + 775 + ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj, 776 + &mem_blk->dev.kobj, 777 + kobject_name(&mem_blk->dev.kobj)); 778 + if (ret) 779 + return ret; 780 + 781 + return sysfs_create_link_nowarn(&mem_blk->dev.kobj, 782 + &node_devices[nid]->dev.kobj, 783 + kobject_name(&node_devices[nid]->dev.kobj)); 784 + } 785 + 764 786 /* register memory section under specified node if it spans that node */ 765 - static int register_mem_sect_under_node(struct memory_block *mem_blk, 766 - void *arg) 787 + static int register_mem_block_under_node_early(struct memory_block *mem_blk, 788 + void *arg) 767 789 { 768 790 unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE; 769 791 unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr); 770 792 unsigned long end_pfn = start_pfn + memory_block_pfns - 1; 771 - int ret, nid = *(int *)arg; 793 + int nid = *(int *)arg; 772 794 unsigned long pfn; 773 795 774 796 for (pfn = start_pfn; pfn <= end_pfn; pfn++) { ··· 807 785 } 808 786 809 787 /* 810 - * We need to check if page belongs to nid only for the boot 811 - * case, during hotplug we know that all pages in the memory 812 - * block belong to the same node. 788 + * We need to check if page belongs to nid only at the boot 789 + * case because node's ranges can be interleaved. 
813 790 */ 814 - if (system_state == SYSTEM_BOOTING) { 815 - page_nid = get_nid_for_pfn(pfn); 816 - if (page_nid < 0) 817 - continue; 818 - if (page_nid != nid) 819 - continue; 820 - } 791 + page_nid = get_nid_for_pfn(pfn); 792 + if (page_nid < 0) 793 + continue; 794 + if (page_nid != nid) 795 + continue; 821 796 822 - /* 823 - * If this memory block spans multiple nodes, we only indicate 824 - * the last processed node. 825 - */ 826 - mem_blk->nid = nid; 827 - 828 - ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj, 829 - &mem_blk->dev.kobj, 830 - kobject_name(&mem_blk->dev.kobj)); 831 - if (ret) 832 - return ret; 833 - 834 - return sysfs_create_link_nowarn(&mem_blk->dev.kobj, 835 - &node_devices[nid]->dev.kobj, 836 - kobject_name(&node_devices[nid]->dev.kobj)); 797 + return do_register_memory_block_under_node(nid, mem_blk); 837 798 } 838 799 /* mem section does not span the specified node */ 839 800 return 0; 801 + } 802 + 803 + /* 804 + * During hotplug we know that all pages in the memory block belong to the same 805 + * node. 
806 + */ 807 + static int register_mem_block_under_node_hotplug(struct memory_block *mem_blk, 808 + void *arg) 809 + { 810 + int nid = *(int *)arg; 811 + 812 + return do_register_memory_block_under_node(nid, mem_blk); 840 813 } 841 814 842 815 /* ··· 849 832 kobject_name(&node_devices[mem_blk->nid]->dev.kobj)); 850 833 } 851 834 852 - int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn) 835 + int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn, 836 + enum meminit_context context) 853 837 { 838 + walk_memory_blocks_func_t func; 839 + 840 + if (context == MEMINIT_HOTPLUG) 841 + func = register_mem_block_under_node_hotplug; 842 + else 843 + func = register_mem_block_under_node_early; 844 + 854 845 return walk_memory_blocks(PFN_PHYS(start_pfn), 855 846 PFN_PHYS(end_pfn - start_pfn), (void *)&nid, 856 - register_mem_sect_under_node); 847 + func); 857 848 } 858 849 859 850 #ifdef CONFIG_HUGETLBFS
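The split into `register_mem_block_under_node_early()` and `..._hotplug()` reflects the two contexts: at boot, node ranges can interleave, so each pfn's node must be checked before linking the block; during hotplug, every page in the block is known to belong to one node and can be linked directly. A toy model of `link_mem_sections()` choosing the walker by context (heavily simplified, with an invented `registered` counter standing in for the sysfs links):

```c
#include <assert.h>

/* Toy model of link_mem_sections() picking a walker per context. */
enum meminit_context { MEMINIT_EARLY, MEMINIT_HOTPLUG };

static int registered;      /* stands in for the sysfs symlink creation */

/* Early boot: node ranges can interleave, so check each pfn's node. */
static int register_early(int nid, const int *page_nids, int npfns)
{
        for (int i = 0; i < npfns; i++) {
                if (page_nids[i] == nid) {
                        registered++;
                        return 0;   /* block spans this node: link once */
                }
        }
        return 0;                   /* block does not span this node */
}

/* Hotplug: all pages are known to belong to the node; link directly. */
static int register_hotplug(int nid, const int *page_nids, int npfns)
{
        (void)nid; (void)page_nids; (void)npfns;
        registered++;
        return 0;
}

static int link_block(enum meminit_context ctx, int nid,
                      const int *page_nids, int npfns)
{
        return ctx == MEMINIT_HOTPLUG ?
               register_hotplug(nid, page_nids, npfns) :
               register_early(nid, page_nids, npfns);
}
```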
+3 -3
drivers/base/regmap/internal.h
··· 217 217 218 218 #ifdef CONFIG_DEBUG_FS 219 219 extern void regmap_debugfs_initcall(void); 220 - extern void regmap_debugfs_init(struct regmap *map, const char *name); 220 + extern void regmap_debugfs_init(struct regmap *map); 221 221 extern void regmap_debugfs_exit(struct regmap *map); 222 222 223 223 static inline void regmap_debugfs_disable(struct regmap *map) ··· 227 227 228 228 #else 229 229 static inline void regmap_debugfs_initcall(void) { } 230 - static inline void regmap_debugfs_init(struct regmap *map, const char *name) { } 230 + static inline void regmap_debugfs_init(struct regmap *map) { } 231 231 static inline void regmap_debugfs_exit(struct regmap *map) { } 232 232 static inline void regmap_debugfs_disable(struct regmap *map) { } 233 233 #endif ··· 259 259 int regcache_lookup_reg(struct regmap *map, unsigned int reg); 260 260 261 261 int _regmap_raw_write(struct regmap *map, unsigned int reg, 262 - const void *val, size_t val_len); 262 + const void *val, size_t val_len, bool noinc); 263 263 264 264 void regmap_async_complete_cb(struct regmap_async *async, int ret); 265 265
+1 -1
drivers/base/regmap/regcache.c
··· 717 717 718 718 map->cache_bypass = true; 719 719 720 - ret = _regmap_raw_write(map, base, *data, count * val_bytes); 720 + ret = _regmap_raw_write(map, base, *data, count * val_bytes, false); 721 721 if (ret) 722 722 dev_err(map->dev, "Unable to sync registers %#x-%#x. %d\n", 723 723 base, cur - map->reg_stride, ret);
+3 -4
drivers/base/regmap/regmap-debugfs.c
··· 17 17 18 18 struct regmap_debugfs_node { 19 19 struct regmap *map; 20 - const char *name; 21 20 struct list_head link; 22 21 }; 23 22 ··· 543 544 .write = regmap_cache_bypass_write_file, 544 545 }; 545 546 546 - void regmap_debugfs_init(struct regmap *map, const char *name) 547 + void regmap_debugfs_init(struct regmap *map) 547 548 { 548 549 struct rb_node *next; 549 550 struct regmap_range_node *range_node; 550 551 const char *devname = "dummy"; 552 + const char *name = map->name; 551 553 552 554 /* 553 555 * Userspace can initiate reads from the hardware over debugfs. ··· 569 569 if (!node) 570 570 return; 571 571 node->map = map; 572 - node->name = name; 573 572 mutex_lock(&regmap_debugfs_early_lock); 574 573 list_add(&node->link, &regmap_debugfs_early_list); 575 574 mutex_unlock(&regmap_debugfs_early_lock); ··· 678 679 679 680 mutex_lock(&regmap_debugfs_early_lock); 680 681 list_for_each_entry_safe(node, tmp, &regmap_debugfs_early_list, link) { 681 - regmap_debugfs_init(node->map, node->name); 682 + regmap_debugfs_init(node->map); 682 683 list_del(&node->link); 683 684 kfree(node); 684 685 }
+49 -26
drivers/base/regmap/regmap.c
··· 581 581 kfree(map->selector_work_buf); 582 582 } 583 583 584 + static int regmap_set_name(struct regmap *map, const struct regmap_config *config) 585 + { 586 + if (config->name) { 587 + const char *name = kstrdup_const(config->name, GFP_KERNEL); 588 + 589 + if (!name) 590 + return -ENOMEM; 591 + 592 + kfree_const(map->name); 593 + map->name = name; 594 + } 595 + 596 + return 0; 597 + } 598 + 584 599 int regmap_attach_dev(struct device *dev, struct regmap *map, 585 600 const struct regmap_config *config) 586 601 { 587 602 struct regmap **m; 603 + int ret; 588 604 589 605 map->dev = dev; 590 606 591 - regmap_debugfs_init(map, config->name); 607 + ret = regmap_set_name(map, config); 608 + if (ret) 609 + return ret; 610 + 611 + regmap_debugfs_init(map); 592 612 593 613 /* Add a devres resource for dev_get_regmap() */ 594 614 m = devres_alloc(dev_get_regmap_release, sizeof(*m), GFP_KERNEL); ··· 707 687 goto err; 708 688 } 709 689 710 - if (config->name) { 711 - map->name = kstrdup_const(config->name, GFP_KERNEL); 712 - if (!map->name) { 713 - ret = -ENOMEM; 714 - goto err_map; 715 - } 716 - } 690 + ret = regmap_set_name(map, config); 691 + if (ret) 692 + goto err_map; 717 693 718 694 if (config->disable_locking) { 719 695 map->lock = map->unlock = regmap_lock_unlock_none; ··· 1153 1137 if (ret != 0) 1154 1138 goto err_regcache; 1155 1139 } else { 1156 - regmap_debugfs_init(map, config->name); 1140 + regmap_debugfs_init(map); 1157 1141 } 1158 1142 1159 1143 return map; ··· 1313 1297 */ 1314 1298 int regmap_reinit_cache(struct regmap *map, const struct regmap_config *config) 1315 1299 { 1300 + int ret; 1301 + 1316 1302 regcache_exit(map); 1317 1303 regmap_debugfs_exit(map); 1318 1304 ··· 1327 1309 map->readable_noinc_reg = config->readable_noinc_reg; 1328 1310 map->cache_type = config->cache_type; 1329 1311 1330 - regmap_debugfs_init(map, config->name); 1312 + ret = regmap_set_name(map, config); 1313 + if (ret) 1314 + return ret; 1315 + 1316 + 
regmap_debugfs_init(map); 1331 1317 1332 1318 map->cache_bypass = false; 1333 1319 map->cache_only = false; ··· 1486 1464 } 1487 1465 1488 1466 static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg, 1489 - const void *val, size_t val_len) 1467 + const void *val, size_t val_len, bool noinc) 1490 1468 { 1491 1469 struct regmap_range_node *range; 1492 1470 unsigned long flags; ··· 1545 1523 win_residue, val_len / map->format.val_bytes); 1546 1524 ret = _regmap_raw_write_impl(map, reg, val, 1547 1525 win_residue * 1548 - map->format.val_bytes); 1526 + map->format.val_bytes, noinc); 1549 1527 if (ret != 0) 1550 1528 return ret; 1551 1529 ··· 1559 1537 win_residue = range->window_len - win_offset; 1560 1538 } 1561 1539 1562 - ret = _regmap_select_page(map, &reg, range, val_num); 1540 + ret = _regmap_select_page(map, &reg, range, noinc ? 1 : val_num); 1563 1541 if (ret != 0) 1564 1542 return ret; 1565 1543 } ··· 1767 1745 map->work_buf + 1768 1746 map->format.reg_bytes + 1769 1747 map->format.pad_bytes, 1770 - map->format.val_bytes); 1748 + map->format.val_bytes, 1749 + false); 1771 1750 } 1772 1751 1773 1752 static inline void *_regmap_map_get_context(struct regmap *map) ··· 1862 1839 EXPORT_SYMBOL_GPL(regmap_write_async); 1863 1840 1864 1841 int _regmap_raw_write(struct regmap *map, unsigned int reg, 1865 - const void *val, size_t val_len) 1842 + const void *val, size_t val_len, bool noinc) 1866 1843 { 1867 1844 size_t val_bytes = map->format.val_bytes; 1868 1845 size_t val_count = val_len / val_bytes; ··· 1883 1860 1884 1861 /* Write as many bytes as possible with chunk_size */ 1885 1862 for (i = 0; i < chunk_count; i++) { 1886 - ret = _regmap_raw_write_impl(map, reg, val, chunk_bytes); 1863 + ret = _regmap_raw_write_impl(map, reg, val, chunk_bytes, noinc); 1887 1864 if (ret) 1888 1865 return ret; 1889 1866 ··· 1894 1871 1895 1872 /* Write remaining bytes */ 1896 1873 if (val_len) 1897 - ret = _regmap_raw_write_impl(map, reg, val, val_len); 1874 + ret 
= _regmap_raw_write_impl(map, reg, val, val_len, noinc); 1898 1875 1899 1876 return ret; 1900 1877 } ··· 1927 1904 1928 1905 map->lock(map->lock_arg); 1929 1906 1930 - ret = _regmap_raw_write(map, reg, val, val_len); 1907 + ret = _regmap_raw_write(map, reg, val, val_len, false); 1931 1908 1932 1909 map->unlock(map->lock_arg); 1933 1910 ··· 1985 1962 write_len = map->max_raw_write; 1986 1963 else 1987 1964 write_len = val_len; 1988 - ret = _regmap_raw_write(map, reg, val, write_len); 1965 + ret = _regmap_raw_write(map, reg, val, write_len, true); 1989 1966 if (ret) 1990 1967 goto out_unlock; 1991 1968 val = ((u8 *)val) + write_len; ··· 2462 2439 2463 2440 map->async = true; 2464 2441 2465 - ret = _regmap_raw_write(map, reg, val, val_len); 2442 + ret = _regmap_raw_write(map, reg, val, val_len, false); 2466 2443 2467 2444 map->async = false; 2468 2445 ··· 2473 2450 EXPORT_SYMBOL_GPL(regmap_raw_write_async); 2474 2451 2475 2452 static int _regmap_raw_read(struct regmap *map, unsigned int reg, void *val, 2476 - unsigned int val_len) 2453 + unsigned int val_len, bool noinc) 2477 2454 { 2478 2455 struct regmap_range_node *range; 2479 2456 int ret; ··· 2486 2463 range = _regmap_range_lookup(map, reg); 2487 2464 if (range) { 2488 2465 ret = _regmap_select_page(map, &reg, range, 2489 - val_len / map->format.val_bytes); 2466 + noinc ? 
1 : val_len / map->format.val_bytes); 2490 2467 if (ret != 0) 2491 2468 return ret; 2492 2469 } ··· 2524 2501 if (!map->format.parse_val) 2525 2502 return -EINVAL; 2526 2503 2527 - ret = _regmap_raw_read(map, reg, work_val, map->format.val_bytes); 2504 + ret = _regmap_raw_read(map, reg, work_val, map->format.val_bytes, false); 2528 2505 if (ret == 0) 2529 2506 *val = map->format.parse_val(work_val); 2530 2507 ··· 2640 2617 2641 2618 /* Read bytes that fit into whole chunks */ 2642 2619 for (i = 0; i < chunk_count; i++) { 2643 - ret = _regmap_raw_read(map, reg, val, chunk_bytes); 2620 + ret = _regmap_raw_read(map, reg, val, chunk_bytes, false); 2644 2621 if (ret != 0) 2645 2622 goto out; 2646 2623 ··· 2651 2628 2652 2629 /* Read remaining bytes */ 2653 2630 if (val_len) { 2654 - ret = _regmap_raw_read(map, reg, val, val_len); 2631 + ret = _regmap_raw_read(map, reg, val, val_len, false); 2655 2632 if (ret != 0) 2656 2633 goto out; 2657 2634 } ··· 2726 2703 read_len = map->max_raw_read; 2727 2704 else 2728 2705 read_len = val_len; 2729 - ret = _regmap_raw_read(map, reg, val, read_len); 2706 + ret = _regmap_raw_read(map, reg, val, read_len, true); 2730 2707 if (ret) 2731 2708 goto out_unlock; 2732 2709 val = ((u8 *)val) + read_len;
+2 -2
drivers/clk/samsung/clk-exynos4.c
··· 927 927 GATE(CLK_PCIE, "pcie", "aclk133", GATE_IP_FSYS, 14, 0, 0), 928 928 GATE(CLK_SMMU_PCIE, "smmu_pcie", "aclk133", GATE_IP_FSYS, 18, 0, 0), 929 929 GATE(CLK_MODEMIF, "modemif", "aclk100", GATE_IP_PERIL, 28, 0, 0), 930 - GATE(CLK_CHIPID, "chipid", "aclk100", E4210_GATE_IP_PERIR, 0, 0, 0), 930 + GATE(CLK_CHIPID, "chipid", "aclk100", E4210_GATE_IP_PERIR, 0, CLK_IGNORE_UNUSED, 0), 931 931 GATE(CLK_SYSREG, "sysreg", "aclk100", E4210_GATE_IP_PERIR, 0, 932 932 CLK_IGNORE_UNUSED, 0), 933 933 GATE(CLK_HDMI_CEC, "hdmi_cec", "aclk100", E4210_GATE_IP_PERIR, 11, 0, ··· 969 969 0), 970 970 GATE(CLK_TSADC, "tsadc", "aclk133", E4X12_GATE_BUS_FSYS1, 16, 0, 0), 971 971 GATE(CLK_MIPI_HSI, "mipi_hsi", "aclk133", GATE_IP_FSYS, 10, 0, 0), 972 - GATE(CLK_CHIPID, "chipid", "aclk100", E4X12_GATE_IP_PERIR, 0, 0, 0), 972 + GATE(CLK_CHIPID, "chipid", "aclk100", E4X12_GATE_IP_PERIR, 0, CLK_IGNORE_UNUSED, 0), 973 973 GATE(CLK_SYSREG, "sysreg", "aclk100", E4X12_GATE_IP_PERIR, 1, 974 974 CLK_IGNORE_UNUSED, 0), 975 975 GATE(CLK_HDMI_CEC, "hdmi_cec", "aclk100", E4X12_GATE_IP_PERIR, 11, 0,
+5
drivers/clk/samsung/clk-exynos5420.c
··· 1655 1655 * main G3D clock enablement status. 1656 1656 */ 1657 1657 clk_prepare_enable(__clk_lookup("mout_sw_aclk_g3d")); 1658 + /* 1659 + * Keep top BPLL mux enabled permanently to ensure that DRAM operates 1660 + * properly. 1661 + */ 1662 + clk_prepare_enable(__clk_lookup("mout_bpll")); 1658 1663 1659 1664 samsung_clk_of_add_provider(np, ctx); 1660 1665 }
+1 -1
drivers/clk/socfpga/clk-s10.c
··· 209 209 { STRATIX10_EMAC_B_FREE_CLK, "emacb_free_clk", NULL, emacb_free_mux, ARRAY_SIZE(emacb_free_mux), 210 210 0, 0, 2, 0xB0, 1}, 211 211 { STRATIX10_EMAC_PTP_FREE_CLK, "emac_ptp_free_clk", NULL, emac_ptp_free_mux, 212 - ARRAY_SIZE(emac_ptp_free_mux), 0, 0, 4, 0xB0, 2}, 212 + ARRAY_SIZE(emac_ptp_free_mux), 0, 0, 2, 0xB0, 2}, 213 213 { STRATIX10_GPIO_DB_FREE_CLK, "gpio_db_free_clk", NULL, gpio_db_free_mux, 214 214 ARRAY_SIZE(gpio_db_free_mux), 0, 0, 0, 0xB0, 3}, 215 215 { STRATIX10_SDMMC_FREE_CLK, "sdmmc_free_clk", NULL, sdmmc_free_mux,
+2 -5
drivers/clk/tegra/clk-pll.c
··· 1611 1611 unsigned long flags = 0; 1612 1612 unsigned long input_rate; 1613 1613 1614 - if (clk_pll_is_enabled(hw)) 1615 - return 0; 1616 - 1617 1614 input_rate = clk_hw_get_rate(clk_hw_get_parent(hw)); 1618 1615 1619 1616 if (_get_table_rate(hw, &sel, pll->params->fixed_rate, input_rate)) ··· 1670 1673 pll_writel(val, PLLE_SS_CTRL, pll); 1671 1674 udelay(1); 1672 1675 1673 - /* Enable hw control of xusb brick pll */ 1676 + /* Enable HW control of XUSB brick PLL */ 1674 1677 val = pll_readl_misc(pll); 1675 1678 val &= ~PLLE_MISC_IDDQ_SW_CTRL; 1676 1679 pll_writel_misc(val, pll); ··· 1693 1696 val |= XUSBIO_PLL_CFG0_SEQ_ENABLE; 1694 1697 pll_writel(val, XUSBIO_PLL_CFG0, pll); 1695 1698 1696 - /* Enable hw control of SATA pll */ 1699 + /* Enable HW control of SATA PLL */ 1697 1700 val = pll_readl(SATA_PLL_CFG0, pll); 1698 1701 val &= ~SATA_PLL_CFG0_PADPLL_RESET_SWCTL; 1699 1702 val |= SATA_PLL_CFG0_PADPLL_USE_LOCKDET;
+2
drivers/clk/tegra/clk-tegra210-emc.c
··· 12 12 #include <linux/io.h> 13 13 #include <linux/slab.h> 14 14 15 + #include "clk.h" 16 + 15 17 #define CLK_SOURCE_EMC 0x19c 16 18 #define CLK_SOURCE_EMC_2X_CLK_SRC GENMASK(31, 29) 17 19 #define CLK_SOURCE_EMC_MC_EMC_SAME_FREQ BIT(16)
+1 -1
drivers/clocksource/h8300_timer8.c
··· 169 169 return PTR_ERR(clk); 170 170 } 171 171 172 - ret = ENXIO; 172 + ret = -ENXIO; 173 173 base = of_iomap(node, 0); 174 174 if (!base) { 175 175 pr_err("failed to map registers for clockevent\n");
+1
drivers/clocksource/timer-clint.c
··· 38 38 39 39 #ifdef CONFIG_RISCV_M_MODE 40 40 u64 __iomem *clint_time_val; 41 + EXPORT_SYMBOL(clint_time_val); 41 42 #endif 42 43 43 44 static void clint_send_ipi(const struct cpumask *target)
+1
drivers/clocksource/timer-gx6605s.c
··· 28 28 void __iomem *base = timer_of_base(to_timer_of(ce)); 29 29 30 30 writel_relaxed(GX6605S_STATUS_CLR, base + TIMER_STATUS); 31 + writel_relaxed(0, base + TIMER_INI); 31 32 32 33 ce->event_handler(ce); 33 34
+23 -21
drivers/clocksource/timer-ti-dm-systimer.c
··· 69 69 return !(tidr >> 16); 70 70 } 71 71 72 + static void dmtimer_systimer_enable(struct dmtimer_systimer *t) 73 + { 74 + u32 val; 75 + 76 + if (dmtimer_systimer_revision1(t)) 77 + val = DMTIMER_TYPE1_ENABLE; 78 + else 79 + val = DMTIMER_TYPE2_ENABLE; 80 + 81 + writel_relaxed(val, t->base + t->sysc); 82 + } 83 + 84 + static void dmtimer_systimer_disable(struct dmtimer_systimer *t) 85 + { 86 + if (!dmtimer_systimer_revision1(t)) 87 + return; 88 + 89 + writel_relaxed(DMTIMER_TYPE1_DISABLE, t->base + t->sysc); 90 + } 91 + 72 92 static int __init dmtimer_systimer_type1_reset(struct dmtimer_systimer *t) 73 93 { 74 94 void __iomem *syss = t->base + OMAP_TIMER_V1_SYS_STAT_OFFSET; 75 95 int ret; 76 96 u32 l; 77 97 98 + dmtimer_systimer_enable(t); 78 99 writel_relaxed(BIT(1) | BIT(2), t->base + t->ifctrl); 79 100 ret = readl_poll_timeout_atomic(syss, l, l & BIT(0), 100, 80 101 DMTIMER_RESET_WAIT); ··· 109 88 void __iomem *sysc = t->base + t->sysc; 110 89 u32 l; 111 90 91 + dmtimer_systimer_enable(t); 112 92 l = readl_relaxed(sysc); 113 93 l |= BIT(0); 114 94 writel_relaxed(l, sysc); ··· 358 336 return 0; 359 337 } 360 338 361 - static void dmtimer_systimer_enable(struct dmtimer_systimer *t) 362 - { 363 - u32 val; 364 - 365 - if (dmtimer_systimer_revision1(t)) 366 - val = DMTIMER_TYPE1_ENABLE; 367 - else 368 - val = DMTIMER_TYPE2_ENABLE; 369 - 370 - writel_relaxed(val, t->base + t->sysc); 371 - } 372 - 373 - static void dmtimer_systimer_disable(struct dmtimer_systimer *t) 374 - { 375 - if (!dmtimer_systimer_revision1(t)) 376 - return; 377 - 378 - writel_relaxed(DMTIMER_TYPE1_DISABLE, t->base + t->sysc); 379 - } 380 - 381 339 static int __init dmtimer_systimer_setup(struct device_node *np, 382 340 struct dmtimer_systimer *t) 383 341 { ··· 411 409 t->wakeup = regbase + _OMAP_TIMER_WAKEUP_EN_OFFSET; 412 410 t->ifctrl = regbase + _OMAP_TIMER_IF_CTRL_OFFSET; 413 411 414 - dmtimer_systimer_enable(t); 415 412 dmtimer_systimer_reset(t); 413 + dmtimer_systimer_enable(t); 416 414 pr_debug("dmtimer rev %08x sysc %08x\n", readl_relaxed(t->base), 417 415 readl_relaxed(t->base + t->sysc)); 418 416
+1
drivers/cpufreq/intel_pstate.c
··· 2781 2781 2782 2782 cpufreq_unregister_driver(intel_pstate_driver); 2783 2783 intel_pstate_driver_cleanup(); 2784 + return 0; 2784 2785 } 2785 2786 2786 2787 if (size == 6 && !strncmp(buf, "active", size)) {
+2 -2
drivers/cpuidle/cpuidle-psci.c
··· 66 66 return -1; 67 67 68 68 /* Do runtime PM to manage a hierarchical CPU toplogy. */ 69 - pm_runtime_put_sync_suspend(pd_dev); 69 + RCU_NONIDLE(pm_runtime_put_sync_suspend(pd_dev)); 70 70 71 71 state = psci_get_domain_state(); 72 72 if (!state) ··· 74 74 75 75 ret = psci_cpu_suspend_enter(state) ? -1 : idx; 76 76 77 - pm_runtime_get_sync(pd_dev); 77 + RCU_NONIDLE(pm_runtime_get_sync(pd_dev)); 78 78 79 79 cpu_pm_exit(); 80 80
-10
drivers/cpuidle/cpuidle.c
··· 142 142 143 143 time_start = ns_to_ktime(local_clock()); 144 144 145 - /* 146 - * trace_suspend_resume() called by tick_freeze() for the last CPU 147 - * executing it contains RCU usage regarded as invalid in the idle 148 - * context, so tell RCU about that. 149 - */ 150 145 tick_freeze(); 151 146 /* 152 147 * The state used here cannot be a "coupled" one, because the "coupled" ··· 154 159 target_state->enter_s2idle(dev, drv, index); 155 160 if (WARN_ON_ONCE(!irqs_disabled())) 156 161 local_irq_disable(); 157 - /* 158 - * timekeeping_resume() that will be called by tick_unfreeze() for the 159 - * first CPU executing it calls functions containing RCU read-side 160 - * critical sections, so tell RCU about that. 161 - */ 162 162 if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE)) 163 163 rcu_idle_exit(); 164 164 tick_unfreeze();
+8 -3
drivers/devfreq/devfreq.c
··· 1766 1766 struct devfreq *p_devfreq = NULL; 1767 1767 unsigned long cur_freq, min_freq, max_freq; 1768 1768 unsigned int polling_ms; 1769 + unsigned int timer; 1769 1770 1770 - seq_printf(s, "%-30s %-30s %-15s %10s %12s %12s %12s\n", 1771 + seq_printf(s, "%-30s %-30s %-15s %-10s %10s %12s %12s %12s\n", 1771 1772 "dev", 1772 1773 "parent_dev", 1773 1774 "governor", 1775 + "timer", 1774 1776 "polling_ms", 1775 1777 "cur_freq_Hz", 1776 1778 "min_freq_Hz", 1777 1779 "max_freq_Hz"); 1778 - seq_printf(s, "%30s %30s %15s %10s %12s %12s %12s\n", 1780 + seq_printf(s, "%30s %30s %15s %10s %10s %12s %12s %12s\n", 1779 1781 "------------------------------", 1780 1782 "------------------------------", 1781 1783 "---------------", 1784 + "----------", 1782 1785 "----------", 1783 1786 "------------", 1784 1787 "------------", ··· 1806 1803 cur_freq = devfreq->previous_freq; 1807 1804 get_freq_range(devfreq, &min_freq, &max_freq); 1808 1805 polling_ms = devfreq->profile->polling_ms; 1806 + timer = devfreq->profile->timer; 1809 1807 mutex_unlock(&devfreq->lock); 1810 1808 1811 1809 seq_printf(s, 1812 - "%-30s %-30s %-15s %10d %12ld %12ld %12ld\n", 1810 + "%-30s %-30s %-15s %-10s %10d %12ld %12ld %12ld\n", 1813 1811 dev_name(&devfreq->dev), 1814 1812 p_devfreq ? dev_name(&p_devfreq->dev) : "null", 1815 1813 devfreq->governor_name, 1814 + polling_ms ? timer_name[timer] : "null", 1816 1815 polling_ms, 1817 1816 cur_freq, 1818 1817 min_freq,
+3 -1
drivers/devfreq/tegra30-devfreq.c
··· 836 836 rate = clk_round_rate(tegra->emc_clock, ULONG_MAX); 837 837 if (rate < 0) { 838 838 dev_err(&pdev->dev, "Failed to round clock rate: %ld\n", rate); 839 - return rate; 839 + err = rate; 840 + goto disable_clk; 840 841 } 841 842 842 843 tegra->max_freq = rate / KHZ; ··· 898 897 dev_pm_opp_remove_all_dynamic(&pdev->dev); 899 898 900 899 reset_control_reset(tegra->reset); 900 + disable_clk: 901 901 clk_disable_unprepare(tegra->clock); 902 902 903 903 return err;
+2
drivers/dma-buf/dma-buf.c
··· 59 59 struct dma_buf *dmabuf; 60 60 61 61 dmabuf = dentry->d_fsdata; 62 + if (unlikely(!dmabuf)) 63 + return; 62 64 63 65 BUG_ON(dmabuf->vmapping_counter); 64 66
+21 -5
drivers/dma/dmatest.c
··· 129 129 * @nr_channels: number of channels under test 130 130 * @lock: access protection to the fields of this structure 131 131 * @did_init: module has been initialized completely 132 + * @last_error: test has faced configuration issues 132 133 */ 133 134 static struct dmatest_info { 134 135 /* Test parameters */ ··· 138 137 /* Internal state */ 139 138 struct list_head channels; 140 139 unsigned int nr_channels; 140 + int last_error; 141 141 struct mutex lock; 142 142 bool did_init; 143 143 } test_info = { ··· 1186 1184 return ret; 1187 1185 } else if (dmatest_run) { 1188 1186 if (!is_threaded_test_pending(info)) { 1189 - pr_info("No channels configured, continue with any\n"); 1190 - if (!is_threaded_test_run(info)) 1191 - stop_threaded_test(info); 1192 - add_threaded_test(info); 1187 + /* 1188 + * We have nothing to run. This can be due to: 1189 + */ 1190 + ret = info->last_error; 1191 + if (ret) { 1192 + /* 1) Misconfiguration */ 1193 + pr_err("Channel misconfigured, can't continue\n"); 1194 + mutex_unlock(&info->lock); 1195 + return ret; 1196 + } else { 1197 + /* 2) We rely on defaults */ 1198 + pr_info("No channels configured, continue with any\n"); 1199 + if (!is_threaded_test_run(info)) 1200 + stop_threaded_test(info); 1201 + add_threaded_test(info); 1202 + } 1193 1203 } 1194 1204 start_threaded_tests(info); 1195 1205 } else { ··· 1218 1204 struct dmatest_info *info = &test_info; 1219 1205 struct dmatest_chan *dtc; 1220 1206 char chan_reset_val[20]; 1221 - int ret = 0; 1207 + int ret; 1222 1208 1223 1209 mutex_lock(&info->lock); 1224 1210 ret = param_set_copystring(val, kp); ··· 1273 1259 goto add_chan_err; 1274 1260 } 1275 1261 1262 + info->last_error = ret; 1276 1263 mutex_unlock(&info->lock); 1277 1264 1278 1265 return ret; 1279 1266 1280 1267 add_chan_err: 1281 1268 param_set_copystring(chan_reset_val, kp); 1269 + info->last_error = ret; 1282 1270 mutex_unlock(&info->lock); 1283 1271 1284 1272 return ret;
+1 -1
drivers/gpio/gpio-amd-fch.c
··· 92 92 ret = (readl_relaxed(ptr) & AMD_FCH_GPIO_FLAG_DIRECTION); 93 93 spin_unlock_irqrestore(&priv->lock, flags); 94 94 95 - return ret ? GPIO_LINE_DIRECTION_IN : GPIO_LINE_DIRECTION_OUT; 95 + return ret ? GPIO_LINE_DIRECTION_OUT : GPIO_LINE_DIRECTION_IN; 96 96 } 97 97 98 98 static void amd_fch_gpio_set(struct gpio_chip *gc,
+87 -47
drivers/gpio/gpio-aspeed-sgpio.c
··· 17 17 #include <linux/spinlock.h> 18 18 #include <linux/string.h> 19 19 20 - #define MAX_NR_SGPIO 80 20 + /* 21 + * MAX_NR_HW_GPIO represents the number of actual hardware-supported GPIOs (ie, 22 + * slots within the clocked serial GPIO data). Since each HW GPIO is both an 23 + * input and an output, we provide MAX_NR_HW_GPIO * 2 lines on our gpiochip 24 + * device. 25 + * 26 + * We use SGPIO_OUTPUT_OFFSET to define the split between the inputs and 27 + * outputs; the inputs start at line 0, the outputs start at OUTPUT_OFFSET. 28 + */ 29 + #define MAX_NR_HW_SGPIO 80 30 + #define SGPIO_OUTPUT_OFFSET MAX_NR_HW_SGPIO 21 31 32 #define ASPEED_SGPIO_CTRL 0x54 23 33 ··· 40 30 struct clk *pclk; 41 31 spinlock_t lock; 42 32 void __iomem *base; 43 - uint32_t dir_in[3]; 44 33 int irq; 34 + int n_sgpio; 45 35 }; 46 36 47 37 struct aspeed_sgpio_bank { ··· 121 111 } 122 112 } 123 113 124 - #define GPIO_BANK(x) ((x) >> 5) 125 - #define GPIO_OFFSET(x) ((x) & 0x1f) 114 + #define GPIO_BANK(x) ((x % SGPIO_OUTPUT_OFFSET) >> 5) 115 + #define GPIO_OFFSET(x) ((x % SGPIO_OUTPUT_OFFSET) & 0x1f) 126 116 #define GPIO_BIT(x) BIT(GPIO_OFFSET(x)) 127 117 128 118 static const struct aspeed_sgpio_bank *to_bank(unsigned int offset) 129 119 { 130 - unsigned int bank = GPIO_BANK(offset); 120 + unsigned int bank; 121 + 122 + bank = GPIO_BANK(offset); 131 123 132 124 WARN_ON(bank >= ARRAY_SIZE(aspeed_sgpio_banks)); 133 125 return &aspeed_sgpio_banks[bank]; 126 + } 127 + 128 + static int aspeed_sgpio_init_valid_mask(struct gpio_chip *gc, 129 + unsigned long *valid_mask, unsigned int ngpios) 130 + { 131 + struct aspeed_sgpio *sgpio = gpiochip_get_data(gc); 132 + int n = sgpio->n_sgpio; 133 + int c = SGPIO_OUTPUT_OFFSET - n; 134 + 135 + WARN_ON(ngpios < MAX_NR_HW_SGPIO * 2); 136 + 137 + /* input GPIOs in the lower range */ 138 + bitmap_set(valid_mask, 0, n); 139 + bitmap_clear(valid_mask, n, c); 140 + 141 + /* output GPIOS above SGPIO_OUTPUT_OFFSET */ 142 + bitmap_set(valid_mask, SGPIO_OUTPUT_OFFSET, n); 143 + bitmap_clear(valid_mask, SGPIO_OUTPUT_OFFSET + n, c); 144 + 145 + return 0; 146 + } 147 + 148 + static void aspeed_sgpio_irq_init_valid_mask(struct gpio_chip *gc, 149 + unsigned long *valid_mask, unsigned int ngpios) 150 + { 151 + struct aspeed_sgpio *sgpio = gpiochip_get_data(gc); 152 + int n = sgpio->n_sgpio; 153 + 154 + WARN_ON(ngpios < MAX_NR_HW_SGPIO * 2); 155 + 156 + /* input GPIOs in the lower range */ 157 + bitmap_set(valid_mask, 0, n); 158 + bitmap_clear(valid_mask, n, ngpios - n); 159 + } 160 + 161 + static bool aspeed_sgpio_is_input(unsigned int offset) 162 + { 163 + return offset < SGPIO_OUTPUT_OFFSET; 134 164 } 135 165 136 166 static int aspeed_sgpio_get(struct gpio_chip *gc, unsigned int offset) ··· 179 129 const struct aspeed_sgpio_bank *bank = to_bank(offset); 180 130 unsigned long flags; 181 131 enum aspeed_sgpio_reg reg; 182 - bool is_input; 183 132 int rc = 0; 184 133 185 134 spin_lock_irqsave(&gpio->lock, flags); 186 135 187 - is_input = gpio->dir_in[GPIO_BANK(offset)] & GPIO_BIT(offset); 188 - reg = is_input ? reg_val : reg_rdata; 136 + reg = aspeed_sgpio_is_input(offset) ? reg_val : reg_rdata; 189 137 rc = !!(ioread32(bank_reg(gpio, bank, reg)) & GPIO_BIT(offset)); 190 138 191 139 spin_unlock_irqrestore(&gpio->lock, flags); ··· 191 143 return rc; 192 144 } 193 145 194 - static void sgpio_set_value(struct gpio_chip *gc, unsigned int offset, int val) 146 + static int sgpio_set_value(struct gpio_chip *gc, unsigned int offset, int val) 195 147 { 196 148 struct aspeed_sgpio *gpio = gpiochip_get_data(gc); 197 149 const struct aspeed_sgpio_bank *bank = to_bank(offset); 198 - void __iomem *addr; 150 + void __iomem *addr_r, *addr_w; 199 151 u32 reg = 0; 200 152 201 - addr = bank_reg(gpio, bank, reg_val); 202 - reg = ioread32(addr); 153 + if (aspeed_sgpio_is_input(offset)) 154 + return -EINVAL; 155 + 156 + /* Since this is an output, read the cached value from rdata, then 157 + * update val. */ 158 + addr_r = bank_reg(gpio, bank, reg_rdata); 159 + addr_w = bank_reg(gpio, bank, reg_val); 160 + 161 + reg = ioread32(addr_r); 203 162 204 163 if (val) 205 164 reg |= GPIO_BIT(offset); 206 165 else 207 166 reg &= ~GPIO_BIT(offset); 208 167 209 - iowrite32(reg, addr); 168 + iowrite32(reg, addr_w); 169 + 170 + return 0; 210 171 } 211 172 212 173 static void aspeed_sgpio_set(struct gpio_chip *gc, unsigned int offset, int val) ··· 232 175 233 176 static int aspeed_sgpio_dir_in(struct gpio_chip *gc, unsigned int offset) 234 177 { 235 - struct aspeed_sgpio *gpio = gpiochip_get_data(gc); 236 - unsigned long flags; 237 - 238 - spin_lock_irqsave(&gpio->lock, flags); 239 - gpio->dir_in[GPIO_BANK(offset)] |= GPIO_BIT(offset); 240 - spin_unlock_irqrestore(&gpio->lock, flags); 241 - 242 - return 0; 178 + return aspeed_sgpio_is_input(offset) ? 0 : -EINVAL; 243 179 } 244 180 245 181 static int aspeed_sgpio_dir_out(struct gpio_chip *gc, unsigned int offset, int val) 246 182 { 247 183 struct aspeed_sgpio *gpio = gpiochip_get_data(gc); 248 184 unsigned long flags; 185 + int rc; 186 + 187 + /* No special action is required for setting the direction; we'll 188 + * error-out in sgpio_set_value if this isn't an output GPIO */ 249 189 250 190 spin_lock_irqsave(&gpio->lock, flags); 251 - 252 - gpio->dir_in[GPIO_BANK(offset)] &= ~GPIO_BIT(offset); 253 - sgpio_set_value(gc, offset, val); 254 - 191 + rc = sgpio_set_value(gc, offset, val); 255 192 spin_unlock_irqrestore(&gpio->lock, flags); 256 193 257 - return 0; 194 + return rc; 258 195 } 259 196 260 197 static int aspeed_sgpio_get_direction(struct gpio_chip *gc, unsigned int offset) 261 198 { 262 - int dir_status; 263 - struct aspeed_sgpio *gpio = gpiochip_get_data(gc); 264 - unsigned long flags; 265 - 266 - spin_lock_irqsave(&gpio->lock, flags); 267 - dir_status = gpio->dir_in[GPIO_BANK(offset)] & GPIO_BIT(offset); 268 - spin_unlock_irqrestore(&gpio->lock, flags); 269 - 270 - return dir_status; 271 - 199 + return !!aspeed_sgpio_is_input(offset); 272 200 } 273 201 274 202 static void irqd_to_aspeed_sgpio_data(struct irq_data *d, ··· 444 402 445 403 irq = &gpio->chip.irq; 446 404 irq->chip = &aspeed_sgpio_irqchip; 405 + irq->init_valid_mask = aspeed_sgpio_irq_init_valid_mask; 447 406 irq->handler = handle_bad_irq; 448 407 irq->default_type = IRQ_TYPE_NONE; 449 408 irq->parent_handler = aspeed_sgpio_irq_handler; ··· 452 409 irq->parents = &gpio->irq; 453 410 irq->num_parents = 1; 454 411 455 - /* set IRQ settings and Enable Interrupt */ 412 + /* Apply default IRQ settings */ 456 413 for (i = 0; i < ARRAY_SIZE(aspeed_sgpio_banks); i++) { 457 414 bank = &aspeed_sgpio_banks[i]; 458 415 /* set falling or level-low irq */ 459 416 iowrite32(0x00000000, bank_reg(gpio, bank, reg_irq_type0)); 460 417 /* trigger type is edge */ 461 418 iowrite32(0x00000000, bank_reg(gpio, bank, reg_irq_type1)); 462 - /* dual edge trigger mode. */ 463 - iowrite32(0xffffffff, bank_reg(gpio, bank, reg_irq_type2)); 464 - /* enable irq */ 465 - iowrite32(0xffffffff, bank_reg(gpio, bank, reg_irq_enable)); 419 + /* single edge trigger */ 420 + iowrite32(0x00000000, bank_reg(gpio, bank, reg_irq_type2)); 466 421 } 467 422 468 423 return 0; ··· 493 452 if (rc < 0) { 494 453 dev_err(&pdev->dev, "Could not read ngpios property\n"); 495 454 return -EINVAL; 496 - } else if (nr_gpios > MAX_NR_SGPIO) { 455 + } else if (nr_gpios > MAX_NR_HW_SGPIO) { 497 456 dev_err(&pdev->dev, "Number of GPIOs exceeds the maximum of %d: %d\n", 498 - MAX_NR_SGPIO, nr_gpios); 457 + MAX_NR_HW_SGPIO, nr_gpios); 499 458 return -EINVAL; 500 459 } 460 + gpio->n_sgpio = nr_gpios; 501 461 502 462 rc = of_property_read_u32(pdev->dev.of_node, "bus-frequency", &sgpio_freq); 503 463 if (rc < 0) { ··· 539 497 spin_lock_init(&gpio->lock); 540 498 541 499 gpio->chip.parent = &pdev->dev; 542 - gpio->chip.ngpio = nr_gpios; 500 + gpio->chip.ngpio = MAX_NR_HW_SGPIO * 2; 501 + gpio->chip.init_valid_mask = aspeed_sgpio_init_valid_mask; 543 502 gpio->chip.direction_input = aspeed_sgpio_dir_in; 544 503 gpio->chip.direction_output = aspeed_sgpio_dir_out; 545 504 gpio->chip.get_direction = aspeed_sgpio_get_direction; ··· 551 508 gpio->chip.set_config = NULL; 552 509 gpio->chip.label = dev_name(&pdev->dev); 553 510 gpio->chip.base = -1; 554 - 555 - /* set all SGPIO pins as input (1). */ 556 - memset(gpio->dir_in, 0xff, sizeof(gpio->dir_in)); 557 511 558 512 aspeed_sgpio_setup_irqs(gpio, pdev); 559 513
+2 -2
drivers/gpio/gpio-aspeed.c
··· 1114 1114 1115 1115 static const struct aspeed_bank_props ast2600_bank_props[] = { 1116 1116 /* input output */ 1117 - {5, 0xffffffff, 0x0000ffff}, /* U/V/W/X */ 1118 - {6, 0xffff0000, 0x0fff0000}, /* Y/Z */ 1117 + {5, 0xffffffff, 0xffffff00}, /* U/V/W/X */ 1118 + {6, 0x0000ffff, 0x0000ffff}, /* Y/Z */ 1119 1119 { }, 1120 1120 }; 1121 1121
+2
drivers/gpio/gpio-mockup.c
··· 552 552 err = platform_driver_register(&gpio_mockup_driver); 553 553 if (err) { 554 554 gpio_mockup_err("error registering platform driver\n"); 555 + debugfs_remove_recursive(gpio_mockup_dbg_dir); 555 556 return err; 556 557 } 557 558 ··· 583 582 gpio_mockup_err("error registering device"); 584 583 platform_driver_unregister(&gpio_mockup_driver); 585 584 gpio_mockup_unregister_pdevs(); 585 + debugfs_remove_recursive(gpio_mockup_dbg_dir); 586 586 return PTR_ERR(pdev); 587 587 } 588 588
+2 -2
drivers/gpio/gpio-omap.c
··· 1516 1516 return 0; 1517 1517 } 1518 1518 1519 - static int omap_gpio_suspend(struct device *dev) 1519 + static int __maybe_unused omap_gpio_suspend(struct device *dev) 1520 1520 { 1521 1521 struct gpio_bank *bank = dev_get_drvdata(dev); 1522 1522 ··· 1528 1528 return omap_gpio_runtime_suspend(dev); 1529 1529 } 1530 1530 1531 - static int omap_gpio_resume(struct device *dev) 1531 + static int __maybe_unused omap_gpio_resume(struct device *dev) 1532 1532 { 1533 1533 struct gpio_bank *bank = dev_get_drvdata(dev); 1534 1534
+6 -1
drivers/gpio/gpio-pca953x.c
··· 818 818 int level; 819 819 bool ret; 820 820 821 + bitmap_zero(pending, MAX_LINE); 822 + 821 823 mutex_lock(&chip->i2c_lock); 822 824 ret = pca953x_irq_pending(chip, pending); 823 825 mutex_unlock(&chip->i2c_lock); ··· 942 940 static int device_pca957x_init(struct pca953x_chip *chip, u32 invert) 943 941 { 944 942 DECLARE_BITMAP(val, MAX_LINE); 943 + unsigned int i; 945 944 int ret; 946 945 947 946 ret = device_pca95xx_init(chip, invert); ··· 950 947 goto out; 951 948 952 949 /* To enable register 6, 7 to control pull up and pull down */ 953 - memset(val, 0x02, NBANK(chip)); 950 + for (i = 0; i < NBANK(chip); i++) 951 + bitmap_set_value8(val, 0x02, i * BANK_SZ); 952 + 954 953 ret = pca953x_write_regs(chip, PCA957X_BKEN, val); 955 954 if (ret) 956 955 goto out;
+1
drivers/gpio/gpio-siox.c
··· 245 245 girq->chip = &ddata->ichip; 246 246 girq->default_type = IRQ_TYPE_NONE; 247 247 girq->handler = handle_level_irq; 248 + girq->threaded = true; 248 249 249 250 ret = devm_gpiochip_add_data(dev, &ddata->gchip, NULL); 250 251 if (ret)
+3
drivers/gpio/gpio-sprd.c
··· 149 149 sprd_gpio_update(chip, offset, SPRD_GPIO_IS, 0); 150 150 sprd_gpio_update(chip, offset, SPRD_GPIO_IBE, 0); 151 151 sprd_gpio_update(chip, offset, SPRD_GPIO_IEV, 1); 152 + sprd_gpio_update(chip, offset, SPRD_GPIO_IC, 1); 152 153 irq_set_handler_locked(data, handle_edge_irq); 153 154 break; 154 155 case IRQ_TYPE_EDGE_FALLING: 155 156 sprd_gpio_update(chip, offset, SPRD_GPIO_IS, 0); 156 157 sprd_gpio_update(chip, offset, SPRD_GPIO_IBE, 0); 157 158 sprd_gpio_update(chip, offset, SPRD_GPIO_IEV, 0); 159 + sprd_gpio_update(chip, offset, SPRD_GPIO_IC, 1); 158 160 irq_set_handler_locked(data, handle_edge_irq); 159 161 break; 160 162 case IRQ_TYPE_EDGE_BOTH: 161 163 sprd_gpio_update(chip, offset, SPRD_GPIO_IS, 0); 162 164 sprd_gpio_update(chip, offset, SPRD_GPIO_IBE, 1); 165 + sprd_gpio_update(chip, offset, SPRD_GPIO_IC, 1); 163 166 irq_set_handler_locked(data, handle_edge_irq); 164 167 break; 165 168 case IRQ_TYPE_LEVEL_HIGH:
+1 -1
drivers/gpio/gpio-tc3589x.c
··· 212 212 continue; 213 213 214 214 tc3589x_gpio->oldregs[i][j] = new; 215 - tc3589x_reg_write(tc3589x, regmap[i] + j * 8, new); 215 + tc3589x_reg_write(tc3589x, regmap[i] + j, new); 216 216 } 217 217 } 218 218
+30 -4
drivers/gpio/gpiolib-cdev.c
··· 423 423 return events; 424 424 } 425 425 426 + static ssize_t lineevent_get_size(void) 427 + { 428 + #ifdef __x86_64__ 429 + /* i386 has no padding after 'id' */ 430 + if (in_ia32_syscall()) { 431 + struct compat_gpioeevent_data { 432 + compat_u64 timestamp; 433 + u32 id; 434 + }; 435 + 436 + return sizeof(struct compat_gpioeevent_data); 437 + } 438 + #endif 439 + return sizeof(struct gpioevent_data); 440 + } 426 441 427 442 static ssize_t lineevent_read(struct file *file, 428 443 char __user *buf, ··· 447 432 struct lineevent_state *le = file->private_data; 448 433 struct gpioevent_data ge; 449 434 ssize_t bytes_read = 0; 435 + ssize_t ge_size; 450 436 int ret; 451 437 452 - if (count < sizeof(ge)) 438 + /* 439 + * When compatible system call is being used the struct gpioevent_data, 440 + * in case of at least ia32, has different size due to the alignment 441 + * differences. Because we have first member 64 bits followed by one of 442 + * 32 bits there is no gap between them. The only difference is the 443 + * padding at the end of the data structure. Hence, we calculate the 444 + * actual sizeof() and pass this as an argument to copy_to_user() to 445 + * drop unneeded bytes from the output. 446 + */ 447 + ge_size = lineevent_get_size(); 448 + if (count < ge_size) 453 449 return -EINVAL; 454 450 455 451 do { ··· 496 470 break; 497 471 } 498 472 499 - if (copy_to_user(buf + bytes_read, &ge, sizeof(ge))) 473 + if (copy_to_user(buf + bytes_read, &ge, ge_size)) 500 474 return -EFAULT; 501 - bytes_read += sizeof(ge); 502 - } while (count >= bytes_read + sizeof(ge)); 475 + bytes_read += ge_size; 476 + } while (count >= bytes_read + ge_size); 503 477 504 478 return bytes_read; 505 479 }
+2 -8
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 80 80 MODULE_FIRMWARE("amdgpu/navi10_gpu_info.bin"); 81 81 MODULE_FIRMWARE("amdgpu/navi14_gpu_info.bin"); 82 82 MODULE_FIRMWARE("amdgpu/navi12_gpu_info.bin"); 83 - MODULE_FIRMWARE("amdgpu/sienna_cichlid_gpu_info.bin"); 84 - MODULE_FIRMWARE("amdgpu/navy_flounder_gpu_info.bin"); 85 83 86 84 #define AMDGPU_RESUME_MS 2000 87 85 ··· 1598 1600 case CHIP_CARRIZO: 1599 1601 case CHIP_STONEY: 1600 1602 case CHIP_VEGA20: 1603 + case CHIP_SIENNA_CICHLID: 1604 + case CHIP_NAVY_FLOUNDER: 1601 1605 default: 1602 1606 return 0; 1603 1607 case CHIP_VEGA10: ··· 1630 1630 break; 1631 1631 case CHIP_NAVI12: 1632 1632 chip_name = "navi12"; 1633 - break; 1634 - case CHIP_SIENNA_CICHLID: 1635 - chip_name = "sienna_cichlid"; 1636 - break; 1637 - case CHIP_NAVY_FLOUNDER: 1638 - chip_name = "navy_flounder"; 1639 1633 break; 1640 1634 } 1641 1635
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 297 297 take the current one */ 298 298 if (active && !adev->have_disp_power_ref) { 299 299 adev->have_disp_power_ref = true; 300 - goto out; 300 + return ret; 301 301 } 302 302 /* if we have no active crtcs, then drop the power ref 303 303 we got before */
+10 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 1044 1044 {0x1002, 0x1636, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU}, 1045 1045 1046 1046 /* Navi12 */ 1047 - {0x1002, 0x7360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI12|AMD_EXP_HW_SUPPORT}, 1048 - {0x1002, 0x7362, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI12|AMD_EXP_HW_SUPPORT}, 1047 + {0x1002, 0x7360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI12}, 1048 + {0x1002, 0x7362, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI12}, 1049 + 1050 + /* Sienna_Cichlid */ 1051 + {0x1002, 0x73A0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1052 + {0x1002, 0x73A2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1053 + {0x1002, 0x73A3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1054 + {0x1002, 0x73AB, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1055 + {0x1002, 0x73AE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1056 + {0x1002, 0x73BF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SIENNA_CICHLID}, 1049 1057 1050 1058 {0, 0, 0} 1051 1059 };
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 1076 1076 1077 1077 release_sg: 1078 1078 kfree(ttm->sg); 1079 + ttm->sg = NULL; 1079 1080 return r; 1080 1081 } 1081 1082
+3
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 3595 3595 if (!gfx_v10_0_navi10_gfxoff_should_enable(adev)) 3596 3596 adev->pm.pp_feature &= ~PP_GFXOFF_MASK; 3597 3597 break; 3598 + case CHIP_NAVY_FLOUNDER: 3599 + adev->pm.pp_feature &= ~PP_GFXOFF_MASK; 3600 + break; 3598 3601 default: 3599 3602 break; 3600 3603 }
+8 -8
drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
··· 746 746 | UVD_SUVD_CGC_GATE__IME_HEVC_MASK 747 747 | UVD_SUVD_CGC_GATE__EFC_MASK 748 748 | UVD_SUVD_CGC_GATE__SAOE_MASK 749 - | 0x08000000 749 + | UVD_SUVD_CGC_GATE__SRE_AV1_MASK 750 750 | UVD_SUVD_CGC_GATE__FBC_PCLK_MASK 751 751 | UVD_SUVD_CGC_GATE__FBC_CCLK_MASK 752 - | 0x40000000 752 + | UVD_SUVD_CGC_GATE__SCM_AV1_MASK 753 753 | UVD_SUVD_CGC_GATE__SMPA_MASK); 754 754 WREG32_SOC15(VCN, inst, mmUVD_SUVD_CGC_GATE, data); 755 755 756 756 data = RREG32_SOC15(VCN, inst, mmUVD_SUVD_CGC_GATE2); 757 757 data |= (UVD_SUVD_CGC_GATE2__MPBE0_MASK 758 758 | UVD_SUVD_CGC_GATE2__MPBE1_MASK 759 - | 0x00000004 760 - | 0x00000008 759 + | UVD_SUVD_CGC_GATE2__SIT_AV1_MASK 760 + | UVD_SUVD_CGC_GATE2__SDB_AV1_MASK 761 761 | UVD_SUVD_CGC_GATE2__MPC1_MASK); 762 762 WREG32_SOC15(VCN, inst, mmUVD_SUVD_CGC_GATE2, data); 763 763 ··· 776 776 | UVD_SUVD_CGC_CTRL__SMPA_MODE_MASK 777 777 | UVD_SUVD_CGC_CTRL__MPBE0_MODE_MASK 778 778 | UVD_SUVD_CGC_CTRL__MPBE1_MODE_MASK 779 - | 0x00008000 780 - | 0x00010000 779 + | UVD_SUVD_CGC_CTRL__SIT_AV1_MODE_MASK 780 + | UVD_SUVD_CGC_CTRL__SDB_AV1_MODE_MASK 781 781 | UVD_SUVD_CGC_CTRL__MPC1_MODE_MASK 782 782 | UVD_SUVD_CGC_CTRL__FBC_PCLK_MASK 783 783 | UVD_SUVD_CGC_CTRL__FBC_CCLK_MASK); ··· 892 892 | UVD_SUVD_CGC_CTRL__SMPA_MODE_MASK 893 893 | UVD_SUVD_CGC_CTRL__MPBE0_MODE_MASK 894 894 | UVD_SUVD_CGC_CTRL__MPBE1_MODE_MASK 895 - | 0x00008000 896 - | 0x00010000 895 + | UVD_SUVD_CGC_CTRL__SIT_AV1_MODE_MASK 896 + | UVD_SUVD_CGC_CTRL__SDB_AV1_MODE_MASK 897 897 | UVD_SUVD_CGC_CTRL__MPC1_MODE_MASK 898 898 | UVD_SUVD_CGC_CTRL__FBC_PCLK_MASK 899 899 | UVD_SUVD_CGC_CTRL__FBC_CCLK_MASK);
+1 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
··· 604 604 int i = 0; 605 605 606 606 hdcp_work = kcalloc(max_caps, sizeof(*hdcp_work), GFP_KERNEL); 607 - if (hdcp_work == NULL) 607 + if (ZERO_OR_NULL_PTR(hdcp_work)) 608 608 return NULL; 609 609 610 610 hdcp_work->srm = kcalloc(PSP_HDCP_SRM_FIRST_GEN_MAX_SIZE, sizeof(*hdcp_work->srm), GFP_KERNEL);
-1
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
··· 783 783 } else { 784 784 struct clk_log_info log_info = {0}; 785 785 786 - clk_mgr->smu_ver = rn_vbios_smu_get_smu_version(clk_mgr); 787 786 clk_mgr->periodic_retraining_disabled = rn_vbios_smu_is_periodic_retraining_disabled(clk_mgr); 788 787 789 788 /* SMU Version 55.51.0 and up no longer have an issue
+16 -2
drivers/gpu/drm/amd/display/dc/dcn30/Makefile
··· 31 31 dcn30_dio_link_encoder.o dcn30_resource.o 32 32 33 33 34 - CFLAGS_$(AMDDALPATH)/dc/dcn30/dcn30_optc.o := -mhard-float -msse -mpreferred-stack-boundary=4 35 - 34 + ifdef CONFIG_X86 36 35 CFLAGS_$(AMDDALPATH)/dc/dcn30/dcn30_resource.o := -mhard-float -msse 36 + CFLAGS_$(AMDDALPATH)/dc/dcn30/dcn30_optc.o := -mhard-float -msse 37 + endif 38 + 39 + ifdef CONFIG_PPC64 40 + CFLAGS_$(AMDDALPATH)/dc/dcn30/dcn30_resource.o := -mhard-float -maltivec 41 + CFLAGS_$(AMDDALPATH)/dc/dcn30/dcn30_optc.o := -mhard-float -maltivec 42 + endif 43 + 44 + ifdef CONFIG_ARM64 45 + CFLAGS_REMOVE_$(AMDDALPATH)/dc/dcn30/dcn30_resource.o := -mgeneral-regs-only 46 + CFLAGS_REMOVE_$(AMDDALPATH)/dc/dcn30/dcn30_optc.o := -mgeneral-regs-only 47 + endif 48 + 37 49 ifdef CONFIG_CC_IS_GCC 38 50 ifeq ($(call cc-ifversion, -lt, 0701, y), y) 39 51 IS_OLD_GCC = 1 ··· 57 45 # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3 58 46 # (8B stack alignment). 59 47 CFLAGS_$(AMDDALPATH)/dc/dcn30/dcn30_resource.o += -mpreferred-stack-boundary=4 48 + CFLAGS_$(AMDDALPATH)/dc/dcn30/dcn30_optc.o += -mpreferred-stack-boundary=4 60 49 else 61 50 CFLAGS_$(AMDDALPATH)/dc/dcn30/dcn30_resource.o += -msse2 51 + CFLAGS_$(AMDDALPATH)/dc/dcn30/dcn30_optc.o += -msse2 62 52 endif 63 53 64 54 AMD_DAL_DCN30 = $(addprefix $(AMDDALPATH)/dc/dcn30/,$(DCN30))
+2
drivers/gpu/drm/amd/include/asic_reg/gc/gc_10_3_0_default.h
··· 2727 2727 #define mmDB_STENCIL_WRITE_BASE_DEFAULT 0x00000000 2728 2728 #define mmDB_RESERVED_REG_1_DEFAULT 0x00000000 2729 2729 #define mmDB_RESERVED_REG_3_DEFAULT 0x00000000 2730 + #define mmDB_VRS_OVERRIDE_CNTL_DEFAULT 0x00000000 2730 2731 #define mmDB_Z_READ_BASE_HI_DEFAULT 0x00000000 2731 2732 #define mmDB_STENCIL_READ_BASE_HI_DEFAULT 0x00000000 2732 2733 #define mmDB_Z_WRITE_BASE_HI_DEFAULT 0x00000000 ··· 3063 3062 #define mmPA_SU_OVER_RASTERIZATION_CNTL_DEFAULT 0x00000000 3064 3063 #define mmPA_STEREO_CNTL_DEFAULT 0x00000000 3065 3064 #define mmPA_STATE_STEREO_X_DEFAULT 0x00000000 3065 + #define mmPA_CL_VRS_CNTL_DEFAULT 0x00000000 3066 3066 #define mmPA_SU_POINT_SIZE_DEFAULT 0x00000000 3067 3067 #define mmPA_SU_POINT_MINMAX_DEFAULT 0x00000000 3068 3068 #define mmPA_SU_LINE_CNTL_DEFAULT 0x00000000
+4
drivers/gpu/drm/amd/include/asic_reg/gc/gc_10_3_0_offset.h
··· 5379 5379 #define mmDB_RESERVED_REG_1_BASE_IDX 1 5380 5380 #define mmDB_RESERVED_REG_3 0x0017 5381 5381 #define mmDB_RESERVED_REG_3_BASE_IDX 1 5382 + #define mmDB_VRS_OVERRIDE_CNTL 0x0019 5383 + #define mmDB_VRS_OVERRIDE_CNTL_BASE_IDX 1 5382 5384 #define mmDB_Z_READ_BASE_HI 0x001a 5383 5385 #define mmDB_Z_READ_BASE_HI_BASE_IDX 1 5384 5386 #define mmDB_STENCIL_READ_BASE_HI 0x001b ··· 6051 6049 #define mmPA_STEREO_CNTL_BASE_IDX 1 6052 6050 #define mmPA_STATE_STEREO_X 0x0211 6053 6051 #define mmPA_STATE_STEREO_X_BASE_IDX 1 6052 + #define mmPA_CL_VRS_CNTL 0x0212 6053 + #define mmPA_CL_VRS_CNTL_BASE_IDX 1 6054 6054 #define mmPA_SU_POINT_SIZE 0x0280 6055 6055 #define mmPA_SU_POINT_SIZE_BASE_IDX 1 6056 6056 #define mmPA_SU_POINT_MINMAX 0x0281
+50
drivers/gpu/drm/amd/include/asic_reg/gc/gc_10_3_0_sh_mask.h
··· 9777 9777 #define DB_EXCEPTION_CONTROL__AUTO_FLUSH_HTILE__SHIFT 0x3 9778 9778 #define DB_EXCEPTION_CONTROL__AUTO_FLUSH_QUAD__SHIFT 0x4 9779 9779 #define DB_EXCEPTION_CONTROL__FORCE_SUMMARIZE__SHIFT 0x8 9780 + #define DB_EXCEPTION_CONTROL__FORCE_VRS_RATE_FINE__SHIFT 0x10 9780 9781 #define DB_EXCEPTION_CONTROL__DTAG_WATERMARK__SHIFT 0x18 9781 9782 #define DB_EXCEPTION_CONTROL__EARLY_Z_PANIC_DISABLE_MASK 0x00000001L 9782 9783 #define DB_EXCEPTION_CONTROL__LATE_Z_PANIC_DISABLE_MASK 0x00000002L ··· 9785 9784 #define DB_EXCEPTION_CONTROL__AUTO_FLUSH_HTILE_MASK 0x00000008L 9786 9785 #define DB_EXCEPTION_CONTROL__AUTO_FLUSH_QUAD_MASK 0x00000010L 9787 9786 #define DB_EXCEPTION_CONTROL__FORCE_SUMMARIZE_MASK 0x00000F00L 9787 + #define DB_EXCEPTION_CONTROL__FORCE_VRS_RATE_FINE_MASK 0x00FF0000L 9788 9788 #define DB_EXCEPTION_CONTROL__DTAG_WATERMARK_MASK 0x7F000000L 9789 9789 //DB_DFSM_CONFIG 9790 9790 #define DB_DFSM_CONFIG__BYPASS_DFSM__SHIFT 0x0 ··· 10078 10076 #define CB_HW_CONTROL_3__DISABLE_NACK_PROCESSING_CM__SHIFT 0x18 10079 10077 #define CB_HW_CONTROL_3__DISABLE_NACK_COLOR_RD_WR_OPT__SHIFT 0x19 10080 10078 #define CB_HW_CONTROL_3__DISABLE_BLENDER_CLOCK_GATING__SHIFT 0x1a 10079 + #define CB_HW_CONTROL_3__DISABLE_DCC_VRS_OPT__SHIFT 0x1c 10081 10080 #define CB_HW_CONTROL_3__DISABLE_FMASK_NOFETCH_OPT__SHIFT 0x1e 10082 10081 #define CB_HW_CONTROL_3__DISABLE_FMASK_NOFETCH_OPT_BC__SHIFT 0x1f 10083 10082 #define CB_HW_CONTROL_3__DISABLE_SLOW_MODE_EMPTY_HALF_QUAD_KILL_MASK 0x00000001L ··· 10106 10103 #define CB_HW_CONTROL_3__DISABLE_NACK_PROCESSING_CM_MASK 0x01000000L 10107 10104 #define CB_HW_CONTROL_3__DISABLE_NACK_COLOR_RD_WR_OPT_MASK 0x02000000L 10108 10105 #define CB_HW_CONTROL_3__DISABLE_BLENDER_CLOCK_GATING_MASK 0x04000000L 10106 + #define CB_HW_CONTROL_3__DISABLE_DCC_VRS_OPT_MASK 0x10000000L 10109 10107 #define CB_HW_CONTROL_3__DISABLE_FMASK_NOFETCH_OPT_MASK 0x40000000L 10110 10108 #define CB_HW_CONTROL_3__DISABLE_FMASK_NOFETCH_OPT_BC_MASK 0x80000000L 10111 10109 //CB_HW_CONTROL 10112 10110 #define CB_HW_CONTROL__ALLOW_MRT_WITH_DUAL_SOURCE__SHIFT 0x0 10111 + #define CB_HW_CONTROL__DISABLE_VRS_FILLRATE_OPTIMIZATION__SHIFT 0x1 10113 10112 #define CB_HW_CONTROL__DISABLE_FILLRATE_OPT_FIX_WITH_CFC__SHIFT 0x3 10114 10113 #define CB_HW_CONTROL__DISABLE_POST_DCC_WITH_CFC_FIX__SHIFT 0x4 10114 + #define CB_HW_CONTROL__DISABLE_COMPRESS_1FRAG_WHEN_VRS_RATE_HINT_EN__SHIFT 0x5 10115 10115 #define CB_HW_CONTROL__RMI_CREDITS__SHIFT 0x6 10116 10116 #define CB_HW_CONTROL__CHICKEN_BITS__SHIFT 0xc 10117 10117 #define CB_HW_CONTROL__DISABLE_FMASK_MULTI_MGCG_DOMAINS__SHIFT 0xf ··· 10135 10129 #define CB_HW_CONTROL__DISABLE_CC_IB_SERIALIZER_STATE_OPT__SHIFT 0x1e 10136 10130 #define CB_HW_CONTROL__DISABLE_PIXEL_IN_QUAD_FIX_FOR_LINEAR_SURFACE__SHIFT 0x1f 10137 10131 #define CB_HW_CONTROL__ALLOW_MRT_WITH_DUAL_SOURCE_MASK 0x00000001L 10132 + #define CB_HW_CONTROL__DISABLE_VRS_FILLRATE_OPTIMIZATION_MASK 0x00000002L 10138 10133 #define CB_HW_CONTROL__DISABLE_FILLRATE_OPT_FIX_WITH_CFC_MASK 0x00000008L 10139 10134 #define CB_HW_CONTROL__DISABLE_POST_DCC_WITH_CFC_FIX_MASK 0x00000010L 10135 + #define CB_HW_CONTROL__DISABLE_COMPRESS_1FRAG_WHEN_VRS_RATE_HINT_EN_MASK 0x00000020L 10140 10136 #define CB_HW_CONTROL__RMI_CREDITS_MASK 0x00000FC0L 10141 10137 #define CB_HW_CONTROL__CHICKEN_BITS_MASK 0x00007000L 10142 10138 #define CB_HW_CONTROL__DISABLE_FMASK_MULTI_MGCG_DOMAINS_MASK 0x00008000L ··· 19889 19881 #define DB_RENDER_OVERRIDE2__PRESERVE_SRESULTS__SHIFT 0x16 19890 19882 #define DB_RENDER_OVERRIDE2__DISABLE_FAST_PASS__SHIFT 0x17 19891 19883 #define DB_RENDER_OVERRIDE2__ALLOW_PARTIAL_RES_HIER_KILL__SHIFT 0x19 19884 + #define DB_RENDER_OVERRIDE2__FORCE_VRS_RATE_FINE__SHIFT 0x1a 19892 19885 #define DB_RENDER_OVERRIDE2__CENTROID_COMPUTATION_MODE__SHIFT 0x1b 19893 19886 #define DB_RENDER_OVERRIDE2__PARTIAL_SQUAD_LAUNCH_CONTROL_MASK 0x00000003L 19894 19887 #define DB_RENDER_OVERRIDE2__PARTIAL_SQUAD_LAUNCH_COUNTDOWN_MASK 0x0000001CL ··· 19907 19898 #define DB_RENDER_OVERRIDE2__PRESERVE_SRESULTS_MASK 0x00400000L 19908 19899 #define DB_RENDER_OVERRIDE2__DISABLE_FAST_PASS_MASK 0x00800000L 19909 19900 #define DB_RENDER_OVERRIDE2__ALLOW_PARTIAL_RES_HIER_KILL_MASK 0x02000000L 19901 + #define DB_RENDER_OVERRIDE2__FORCE_VRS_RATE_FINE_MASK 0x04000000L 19910 19902 #define DB_RENDER_OVERRIDE2__CENTROID_COMPUTATION_MODE_MASK 0x18000000L 19911 19903 //DB_HTILE_DATA_BASE 19912 19904 #define DB_HTILE_DATA_BASE__BASE_256B__SHIFT 0x0 ··· 20031 20021 //DB_RESERVED_REG_3 20032 20022 #define DB_RESERVED_REG_3__FIELD_1__SHIFT 0x0 20033 20023 #define DB_RESERVED_REG_3__FIELD_1_MASK 0x003FFFFFL 20024 + //DB_VRS_OVERRIDE_CNTL 20025 + #define DB_VRS_OVERRIDE_CNTL__VRS_OVERRIDE_RATE_COMBINER_MODE__SHIFT 0x0 20026 + #define DB_VRS_OVERRIDE_CNTL__VRS_OVERRIDE_RATE_X__SHIFT 0x4 20027 + #define DB_VRS_OVERRIDE_CNTL__VRS_OVERRIDE_RATE_Y__SHIFT 0x6 20028 + #define DB_VRS_OVERRIDE_CNTL__VRS_OVERRIDE_RATE_COMBINER_MODE_MASK 0x00000007L 20029 + #define DB_VRS_OVERRIDE_CNTL__VRS_OVERRIDE_RATE_X_MASK 0x00000030L 20030 + #define DB_VRS_OVERRIDE_CNTL__VRS_OVERRIDE_RATE_Y_MASK 0x000000C0L 20034 20031 //DB_Z_READ_BASE_HI 20035 20032 #define DB_Z_READ_BASE_HI__BASE_HI__SHIFT 0x0 20036 20033 #define DB_Z_READ_BASE_HI__BASE_HI_MASK 0x000000FFL ··· 22615 22598 #define PA_CL_VS_OUT_CNTL__VS_OUT_MISC_SIDE_BUS_ENA__SHIFT 0x18 22616 22599 #define PA_CL_VS_OUT_CNTL__USE_VTX_GS_CUT_FLAG__SHIFT 0x19 22617 22600 #define PA_CL_VS_OUT_CNTL__USE_VTX_LINE_WIDTH__SHIFT 0x1b 22601 + #define PA_CL_VS_OUT_CNTL__USE_VTX_VRS_RATE__SHIFT 0x1c 22618 22602 #define PA_CL_VS_OUT_CNTL__BYPASS_VTX_RATE_COMBINER__SHIFT 0x1d 22619 22603 #define PA_CL_VS_OUT_CNTL__BYPASS_PRIM_RATE_COMBINER__SHIFT 0x1e 22620 22604 #define PA_CL_VS_OUT_CNTL__CLIP_DIST_ENA_0_MASK 0x00000001L ··· 22645 22627 #define PA_CL_VS_OUT_CNTL__VS_OUT_MISC_SIDE_BUS_ENA_MASK 0x01000000L 22646 22628 #define PA_CL_VS_OUT_CNTL__USE_VTX_GS_CUT_FLAG_MASK 0x02000000L 22647 22629 #define 
PA_CL_VS_OUT_CNTL__USE_VTX_LINE_WIDTH_MASK 0x08000000L 22630 + #define PA_CL_VS_OUT_CNTL__USE_VTX_VRS_RATE_MASK 0x10000000L 22648 22631 #define PA_CL_VS_OUT_CNTL__BYPASS_VTX_RATE_COMBINER_MASK 0x20000000L 22649 22632 #define PA_CL_VS_OUT_CNTL__BYPASS_PRIM_RATE_COMBINER_MASK 0x40000000L 22650 22633 //PA_CL_NANINF_CNTL ··· 22759 22740 //PA_STATE_STEREO_X 22760 22741 #define PA_STATE_STEREO_X__STEREO_X_OFFSET__SHIFT 0x0 22761 22742 #define PA_STATE_STEREO_X__STEREO_X_OFFSET_MASK 0xFFFFFFFFL 22743 + //PA_CL_VRS_CNTL 22744 + #define PA_CL_VRS_CNTL__VERTEX_RATE_COMBINER_MODE__SHIFT 0x0 22745 + #define PA_CL_VRS_CNTL__PRIMITIVE_RATE_COMBINER_MODE__SHIFT 0x3 22746 + #define PA_CL_VRS_CNTL__HTILE_RATE_COMBINER_MODE__SHIFT 0x6 22747 + #define PA_CL_VRS_CNTL__SAMPLE_ITER_COMBINER_MODE__SHIFT 0x9 22748 + #define PA_CL_VRS_CNTL__EXPOSE_VRS_PIXELS_MASK__SHIFT 0xd 22749 + #define PA_CL_VRS_CNTL__CMASK_RATE_HINT_FORCE_ZERO__SHIFT 0xe 22750 + #define PA_CL_VRS_CNTL__VERTEX_RATE_COMBINER_MODE_MASK 0x00000007L 22751 + #define PA_CL_VRS_CNTL__PRIMITIVE_RATE_COMBINER_MODE_MASK 0x00000038L 22752 + #define PA_CL_VRS_CNTL__HTILE_RATE_COMBINER_MODE_MASK 0x000001C0L 22753 + #define PA_CL_VRS_CNTL__SAMPLE_ITER_COMBINER_MODE_MASK 0x00000E00L 22754 + #define PA_CL_VRS_CNTL__EXPOSE_VRS_PIXELS_MASK_MASK 0x00002000L 22755 + #define PA_CL_VRS_CNTL__CMASK_RATE_HINT_FORCE_ZERO_MASK 0x00004000L 22762 22756 //PA_SU_POINT_SIZE 22763 22757 #define PA_SU_POINT_SIZE__HEIGHT__SHIFT 0x0 22764 22758 #define PA_SU_POINT_SIZE__WIDTH__SHIFT 0x10 ··· 23120 23088 #define DB_HTILE_SURFACE__DST_OUTSIDE_ZERO_TO_ONE__SHIFT 0x10 23121 23089 #define DB_HTILE_SURFACE__RESERVED_FIELD_6__SHIFT 0x11 23122 23090 #define DB_HTILE_SURFACE__PIPE_ALIGNED__SHIFT 0x12 23091 + #define DB_HTILE_SURFACE__VRS_HTILE_ENCODING__SHIFT 0x13 23123 23092 #define DB_HTILE_SURFACE__RESERVED_FIELD_1_MASK 0x00000001L 23124 23093 #define DB_HTILE_SURFACE__FULL_CACHE_MASK 0x00000002L 23125 23094 #define DB_HTILE_SURFACE__RESERVED_FIELD_2_MASK 
0x00000004L ··· 23130 23097 #define DB_HTILE_SURFACE__DST_OUTSIDE_ZERO_TO_ONE_MASK 0x00010000L 23131 23098 #define DB_HTILE_SURFACE__RESERVED_FIELD_6_MASK 0x00020000L 23132 23099 #define DB_HTILE_SURFACE__PIPE_ALIGNED_MASK 0x00040000L 23100 + #define DB_HTILE_SURFACE__VRS_HTILE_ENCODING_MASK 0x00180000L 23133 23101 //DB_SRESULTS_COMPARE_STATE0 23134 23102 #define DB_SRESULTS_COMPARE_STATE0__COMPAREFUNC0__SHIFT 0x0 23135 23103 #define DB_SRESULTS_COMPARE_STATE0__COMPAREVALUE0__SHIFT 0x4 ··· 24988 24954 #define CB_COLOR0_ATTRIB3__CMASK_PIPE_ALIGNED__SHIFT 0x1a 24989 24955 #define CB_COLOR0_ATTRIB3__RESOURCE_LEVEL__SHIFT 0x1b 24990 24956 #define CB_COLOR0_ATTRIB3__DCC_PIPE_ALIGNED__SHIFT 0x1e 24957 + #define CB_COLOR0_ATTRIB3__VRS_RATE_HINT_ENABLE__SHIFT 0x1f 24991 24958 #define CB_COLOR0_ATTRIB3__MIP0_DEPTH_MASK 0x00001FFFL 24992 24959 #define CB_COLOR0_ATTRIB3__META_LINEAR_MASK 0x00002000L 24993 24960 #define CB_COLOR0_ATTRIB3__COLOR_SW_MODE_MASK 0x0007C000L ··· 24997 24962 #define CB_COLOR0_ATTRIB3__CMASK_PIPE_ALIGNED_MASK 0x04000000L 24998 24963 #define CB_COLOR0_ATTRIB3__RESOURCE_LEVEL_MASK 0x38000000L 24999 24964 #define CB_COLOR0_ATTRIB3__DCC_PIPE_ALIGNED_MASK 0x40000000L 24965 + #define CB_COLOR0_ATTRIB3__VRS_RATE_HINT_ENABLE_MASK 0x80000000L 25000 24966 //CB_COLOR1_ATTRIB3 25001 24967 #define CB_COLOR1_ATTRIB3__MIP0_DEPTH__SHIFT 0x0 25002 24968 #define CB_COLOR1_ATTRIB3__META_LINEAR__SHIFT 0xd ··· 25007 24971 #define CB_COLOR1_ATTRIB3__CMASK_PIPE_ALIGNED__SHIFT 0x1a 25008 24972 #define CB_COLOR1_ATTRIB3__RESOURCE_LEVEL__SHIFT 0x1b 25009 24973 #define CB_COLOR1_ATTRIB3__DCC_PIPE_ALIGNED__SHIFT 0x1e 24974 + #define CB_COLOR1_ATTRIB3__VRS_RATE_HINT_ENABLE__SHIFT 0x1f 25010 24975 #define CB_COLOR1_ATTRIB3__MIP0_DEPTH_MASK 0x00001FFFL 25011 24976 #define CB_COLOR1_ATTRIB3__META_LINEAR_MASK 0x00002000L 25012 24977 #define CB_COLOR1_ATTRIB3__COLOR_SW_MODE_MASK 0x0007C000L ··· 25016 24979 #define CB_COLOR1_ATTRIB3__CMASK_PIPE_ALIGNED_MASK 0x04000000L 25017 24980 
#define CB_COLOR1_ATTRIB3__RESOURCE_LEVEL_MASK 0x38000000L 25018 24981 #define CB_COLOR1_ATTRIB3__DCC_PIPE_ALIGNED_MASK 0x40000000L 24982 + #define CB_COLOR1_ATTRIB3__VRS_RATE_HINT_ENABLE_MASK 0x80000000L 25019 24983 //CB_COLOR2_ATTRIB3 25020 24984 #define CB_COLOR2_ATTRIB3__MIP0_DEPTH__SHIFT 0x0 25021 24985 #define CB_COLOR2_ATTRIB3__META_LINEAR__SHIFT 0xd ··· 25026 24988 #define CB_COLOR2_ATTRIB3__CMASK_PIPE_ALIGNED__SHIFT 0x1a 25027 24989 #define CB_COLOR2_ATTRIB3__RESOURCE_LEVEL__SHIFT 0x1b 25028 24990 #define CB_COLOR2_ATTRIB3__DCC_PIPE_ALIGNED__SHIFT 0x1e 24991 + #define CB_COLOR2_ATTRIB3__VRS_RATE_HINT_ENABLE__SHIFT 0x1f 25029 24992 #define CB_COLOR2_ATTRIB3__MIP0_DEPTH_MASK 0x00001FFFL 25030 24993 #define CB_COLOR2_ATTRIB3__META_LINEAR_MASK 0x00002000L 25031 24994 #define CB_COLOR2_ATTRIB3__COLOR_SW_MODE_MASK 0x0007C000L ··· 25035 24996 #define CB_COLOR2_ATTRIB3__CMASK_PIPE_ALIGNED_MASK 0x04000000L 25036 24997 #define CB_COLOR2_ATTRIB3__RESOURCE_LEVEL_MASK 0x38000000L 25037 24998 #define CB_COLOR2_ATTRIB3__DCC_PIPE_ALIGNED_MASK 0x40000000L 24999 + #define CB_COLOR2_ATTRIB3__VRS_RATE_HINT_ENABLE_MASK 0x80000000L 25038 25000 //CB_COLOR3_ATTRIB3 25039 25001 #define CB_COLOR3_ATTRIB3__MIP0_DEPTH__SHIFT 0x0 25040 25002 #define CB_COLOR3_ATTRIB3__META_LINEAR__SHIFT 0xd ··· 25045 25005 #define CB_COLOR3_ATTRIB3__CMASK_PIPE_ALIGNED__SHIFT 0x1a 25046 25006 #define CB_COLOR3_ATTRIB3__RESOURCE_LEVEL__SHIFT 0x1b 25047 25007 #define CB_COLOR3_ATTRIB3__DCC_PIPE_ALIGNED__SHIFT 0x1e 25008 + #define CB_COLOR3_ATTRIB3__VRS_RATE_HINT_ENABLE__SHIFT 0x1f 25048 25009 #define CB_COLOR3_ATTRIB3__MIP0_DEPTH_MASK 0x00001FFFL 25049 25010 #define CB_COLOR3_ATTRIB3__META_LINEAR_MASK 0x00002000L 25050 25011 #define CB_COLOR3_ATTRIB3__COLOR_SW_MODE_MASK 0x0007C000L ··· 25054 25013 #define CB_COLOR3_ATTRIB3__CMASK_PIPE_ALIGNED_MASK 0x04000000L 25055 25014 #define CB_COLOR3_ATTRIB3__RESOURCE_LEVEL_MASK 0x38000000L 25056 25015 #define CB_COLOR3_ATTRIB3__DCC_PIPE_ALIGNED_MASK 0x40000000L 
25016 + #define CB_COLOR3_ATTRIB3__VRS_RATE_HINT_ENABLE_MASK 0x80000000L 25057 25017 //CB_COLOR4_ATTRIB3 25058 25018 #define CB_COLOR4_ATTRIB3__MIP0_DEPTH__SHIFT 0x0 25059 25019 #define CB_COLOR4_ATTRIB3__META_LINEAR__SHIFT 0xd ··· 25064 25022 #define CB_COLOR4_ATTRIB3__CMASK_PIPE_ALIGNED__SHIFT 0x1a 25065 25023 #define CB_COLOR4_ATTRIB3__RESOURCE_LEVEL__SHIFT 0x1b 25066 25024 #define CB_COLOR4_ATTRIB3__DCC_PIPE_ALIGNED__SHIFT 0x1e 25025 + #define CB_COLOR4_ATTRIB3__VRS_RATE_HINT_ENABLE__SHIFT 0x1f 25067 25026 #define CB_COLOR4_ATTRIB3__MIP0_DEPTH_MASK 0x00001FFFL 25068 25027 #define CB_COLOR4_ATTRIB3__META_LINEAR_MASK 0x00002000L 25069 25028 #define CB_COLOR4_ATTRIB3__COLOR_SW_MODE_MASK 0x0007C000L ··· 25073 25030 #define CB_COLOR4_ATTRIB3__CMASK_PIPE_ALIGNED_MASK 0x04000000L 25074 25031 #define CB_COLOR4_ATTRIB3__RESOURCE_LEVEL_MASK 0x38000000L 25075 25032 #define CB_COLOR4_ATTRIB3__DCC_PIPE_ALIGNED_MASK 0x40000000L 25033 + #define CB_COLOR4_ATTRIB3__VRS_RATE_HINT_ENABLE_MASK 0x80000000L 25076 25034 //CB_COLOR5_ATTRIB3 25077 25035 #define CB_COLOR5_ATTRIB3__MIP0_DEPTH__SHIFT 0x0 25078 25036 #define CB_COLOR5_ATTRIB3__META_LINEAR__SHIFT 0xd ··· 25083 25039 #define CB_COLOR5_ATTRIB3__CMASK_PIPE_ALIGNED__SHIFT 0x1a 25084 25040 #define CB_COLOR5_ATTRIB3__RESOURCE_LEVEL__SHIFT 0x1b 25085 25041 #define CB_COLOR5_ATTRIB3__DCC_PIPE_ALIGNED__SHIFT 0x1e 25042 + #define CB_COLOR5_ATTRIB3__VRS_RATE_HINT_ENABLE__SHIFT 0x1f 25086 25043 #define CB_COLOR5_ATTRIB3__MIP0_DEPTH_MASK 0x00001FFFL 25087 25044 #define CB_COLOR5_ATTRIB3__META_LINEAR_MASK 0x00002000L 25088 25045 #define CB_COLOR5_ATTRIB3__COLOR_SW_MODE_MASK 0x0007C000L ··· 25092 25047 #define CB_COLOR5_ATTRIB3__CMASK_PIPE_ALIGNED_MASK 0x04000000L 25093 25048 #define CB_COLOR5_ATTRIB3__RESOURCE_LEVEL_MASK 0x38000000L 25094 25049 #define CB_COLOR5_ATTRIB3__DCC_PIPE_ALIGNED_MASK 0x40000000L 25050 + #define CB_COLOR5_ATTRIB3__VRS_RATE_HINT_ENABLE_MASK 0x80000000L 25095 25051 //CB_COLOR6_ATTRIB3 25096 25052 #define 
CB_COLOR6_ATTRIB3__MIP0_DEPTH__SHIFT 0x0 25097 25053 #define CB_COLOR6_ATTRIB3__META_LINEAR__SHIFT 0xd ··· 25102 25056 #define CB_COLOR6_ATTRIB3__CMASK_PIPE_ALIGNED__SHIFT 0x1a 25103 25057 #define CB_COLOR6_ATTRIB3__RESOURCE_LEVEL__SHIFT 0x1b 25104 25058 #define CB_COLOR6_ATTRIB3__DCC_PIPE_ALIGNED__SHIFT 0x1e 25059 + #define CB_COLOR6_ATTRIB3__VRS_RATE_HINT_ENABLE__SHIFT 0x1f 25105 25060 #define CB_COLOR6_ATTRIB3__MIP0_DEPTH_MASK 0x00001FFFL 25106 25061 #define CB_COLOR6_ATTRIB3__META_LINEAR_MASK 0x00002000L 25107 25062 #define CB_COLOR6_ATTRIB3__COLOR_SW_MODE_MASK 0x0007C000L ··· 25111 25064 #define CB_COLOR6_ATTRIB3__CMASK_PIPE_ALIGNED_MASK 0x04000000L 25112 25065 #define CB_COLOR6_ATTRIB3__RESOURCE_LEVEL_MASK 0x38000000L 25113 25066 #define CB_COLOR6_ATTRIB3__DCC_PIPE_ALIGNED_MASK 0x40000000L 25067 + #define CB_COLOR6_ATTRIB3__VRS_RATE_HINT_ENABLE_MASK 0x80000000L 25114 25068 //CB_COLOR7_ATTRIB3 25115 25069 #define CB_COLOR7_ATTRIB3__MIP0_DEPTH__SHIFT 0x0 25116 25070 #define CB_COLOR7_ATTRIB3__META_LINEAR__SHIFT 0xd ··· 25121 25073 #define CB_COLOR7_ATTRIB3__CMASK_PIPE_ALIGNED__SHIFT 0x1a 25122 25074 #define CB_COLOR7_ATTRIB3__RESOURCE_LEVEL__SHIFT 0x1b 25123 25075 #define CB_COLOR7_ATTRIB3__DCC_PIPE_ALIGNED__SHIFT 0x1e 25076 + #define CB_COLOR7_ATTRIB3__VRS_RATE_HINT_ENABLE__SHIFT 0x1f 25124 25077 #define CB_COLOR7_ATTRIB3__MIP0_DEPTH_MASK 0x00001FFFL 25125 25078 #define CB_COLOR7_ATTRIB3__META_LINEAR_MASK 0x00002000L 25126 25079 #define CB_COLOR7_ATTRIB3__COLOR_SW_MODE_MASK 0x0007C000L ··· 25130 25081 #define CB_COLOR7_ATTRIB3__CMASK_PIPE_ALIGNED_MASK 0x04000000L 25131 25082 #define CB_COLOR7_ATTRIB3__RESOURCE_LEVEL_MASK 0x38000000L 25132 25083 #define CB_COLOR7_ATTRIB3__DCC_PIPE_ALIGNED_MASK 0x40000000L 25084 + #define CB_COLOR7_ATTRIB3__VRS_RATE_HINT_ENABLE_MASK 0x80000000L 25133 25085 25134 25086 25135 25087 // addressBlock: gc_gfxudec
+34
drivers/gpu/drm/amd/include/asic_reg/vcn/vcn_3_0_0_sh_mask.h
··· 2393 2393 #define VCN_FEATURES__HAS_MJPEG2_IDCT_DEC__SHIFT 0x7 2394 2394 #define VCN_FEATURES__HAS_SCLR_DEC__SHIFT 0x8 2395 2395 #define VCN_FEATURES__HAS_VP9_DEC__SHIFT 0x9 2396 + #define VCN_FEATURES__HAS_AV1_DEC__SHIFT 0xa 2396 2397 #define VCN_FEATURES__HAS_EFC_ENC__SHIFT 0xb 2397 2398 #define VCN_FEATURES__HAS_EFC_HDR2SDR_ENC__SHIFT 0xc 2398 2399 #define VCN_FEATURES__HAS_DUAL_MJPEG_DEC__SHIFT 0xd ··· 2408 2407 #define VCN_FEATURES__HAS_MJPEG2_IDCT_DEC_MASK 0x00000080L 2409 2408 #define VCN_FEATURES__HAS_SCLR_DEC_MASK 0x00000100L 2410 2409 #define VCN_FEATURES__HAS_VP9_DEC_MASK 0x00000200L 2410 + #define VCN_FEATURES__HAS_AV1_DEC_MASK 0x00000400L 2411 2411 #define VCN_FEATURES__HAS_EFC_ENC_MASK 0x00000800L 2412 2412 #define VCN_FEATURES__HAS_EFC_HDR2SDR_ENC_MASK 0x00001000L 2413 2413 #define VCN_FEATURES__HAS_DUAL_MJPEG_DEC_MASK 0x00002000L ··· 2811 2809 #define UVD_SUVD_CGC_GATE__IME_HEVC__SHIFT 0x18 2812 2810 #define UVD_SUVD_CGC_GATE__EFC__SHIFT 0x19 2813 2811 #define UVD_SUVD_CGC_GATE__SAOE__SHIFT 0x1a 2812 + #define UVD_SUVD_CGC_GATE__SRE_AV1__SHIFT 0x1b 2814 2813 #define UVD_SUVD_CGC_GATE__FBC_PCLK__SHIFT 0x1c 2815 2814 #define UVD_SUVD_CGC_GATE__FBC_CCLK__SHIFT 0x1d 2815 + #define UVD_SUVD_CGC_GATE__SCM_AV1__SHIFT 0x1e 2816 2816 #define UVD_SUVD_CGC_GATE__SMPA__SHIFT 0x1f 2817 2817 #define UVD_SUVD_CGC_GATE__SRE_MASK 0x00000001L 2818 2818 #define UVD_SUVD_CGC_GATE__SIT_MASK 0x00000002L ··· 2843 2839 #define UVD_SUVD_CGC_GATE__IME_HEVC_MASK 0x01000000L 2844 2840 #define UVD_SUVD_CGC_GATE__EFC_MASK 0x02000000L 2845 2841 #define UVD_SUVD_CGC_GATE__SAOE_MASK 0x04000000L 2842 + #define UVD_SUVD_CGC_GATE__SRE_AV1_MASK 0x08000000L 2846 2843 #define UVD_SUVD_CGC_GATE__FBC_PCLK_MASK 0x10000000L 2847 2844 #define UVD_SUVD_CGC_GATE__FBC_CCLK_MASK 0x20000000L 2845 + #define UVD_SUVD_CGC_GATE__SCM_AV1_MASK 0x40000000L 2848 2846 #define UVD_SUVD_CGC_GATE__SMPA_MASK 0x80000000L 2849 2847 //UVD_SUVD_CGC_STATUS 2850 2848 #define UVD_SUVD_CGC_STATUS__SRE_VCLK__SHIFT 
0x0 ··· 2879 2873 #define UVD_SUVD_CGC_STATUS__IME_HEVC_DCLK__SHIFT 0x1b 2880 2874 #define UVD_SUVD_CGC_STATUS__EFC_DCLK__SHIFT 0x1c 2881 2875 #define UVD_SUVD_CGC_STATUS__SAOE_DCLK__SHIFT 0x1d 2876 + #define UVD_SUVD_CGC_STATUS__SRE_AV1_VCLK__SHIFT 0x1e 2877 + #define UVD_SUVD_CGC_STATUS__SCM_AV1_DCLK__SHIFT 0x1f 2882 2878 #define UVD_SUVD_CGC_STATUS__SRE_VCLK_MASK 0x00000001L 2883 2879 #define UVD_SUVD_CGC_STATUS__SRE_DCLK_MASK 0x00000002L 2884 2880 #define UVD_SUVD_CGC_STATUS__SIT_DCLK_MASK 0x00000004L ··· 2911 2903 #define UVD_SUVD_CGC_STATUS__IME_HEVC_DCLK_MASK 0x08000000L 2912 2904 #define UVD_SUVD_CGC_STATUS__EFC_DCLK_MASK 0x10000000L 2913 2905 #define UVD_SUVD_CGC_STATUS__SAOE_DCLK_MASK 0x20000000L 2906 + #define UVD_SUVD_CGC_STATUS__SRE_AV1_VCLK_MASK 0x40000000L 2907 + #define UVD_SUVD_CGC_STATUS__SCM_AV1_DCLK_MASK 0x80000000L 2914 2908 //UVD_SUVD_CGC_CTRL 2915 2909 #define UVD_SUVD_CGC_CTRL__SRE_MODE__SHIFT 0x0 2916 2910 #define UVD_SUVD_CGC_CTRL__SIT_MODE__SHIFT 0x1 ··· 2929 2919 #define UVD_SUVD_CGC_CTRL__SMPA_MODE__SHIFT 0xc 2930 2920 #define UVD_SUVD_CGC_CTRL__MPBE0_MODE__SHIFT 0xd 2931 2921 #define UVD_SUVD_CGC_CTRL__MPBE1_MODE__SHIFT 0xe 2922 + #define UVD_SUVD_CGC_CTRL__SIT_AV1_MODE__SHIFT 0xf 2923 + #define UVD_SUVD_CGC_CTRL__SDB_AV1_MODE__SHIFT 0x10 2932 2924 #define UVD_SUVD_CGC_CTRL__MPC1_MODE__SHIFT 0x11 2933 2925 #define UVD_SUVD_CGC_CTRL__FBC_PCLK__SHIFT 0x1c 2934 2926 #define UVD_SUVD_CGC_CTRL__FBC_CCLK__SHIFT 0x1d ··· 2949 2937 #define UVD_SUVD_CGC_CTRL__SMPA_MODE_MASK 0x00001000L 2950 2938 #define UVD_SUVD_CGC_CTRL__MPBE0_MODE_MASK 0x00002000L 2951 2939 #define UVD_SUVD_CGC_CTRL__MPBE1_MODE_MASK 0x00004000L 2940 + #define UVD_SUVD_CGC_CTRL__SIT_AV1_MODE_MASK 0x00008000L 2941 + #define UVD_SUVD_CGC_CTRL__SDB_AV1_MODE_MASK 0x00010000L 2952 2942 #define UVD_SUVD_CGC_CTRL__MPC1_MODE_MASK 0x00020000L 2953 2943 #define UVD_SUVD_CGC_CTRL__FBC_PCLK_MASK 0x10000000L 2954 2944 #define UVD_SUVD_CGC_CTRL__FBC_CCLK_MASK 0x20000000L ··· 3672 3658 
#define UVD_SUVD_CGC_STATUS2__SMPA_VCLK__SHIFT 0x0 3673 3659 #define UVD_SUVD_CGC_STATUS2__SMPA_DCLK__SHIFT 0x1 3674 3660 #define UVD_SUVD_CGC_STATUS2__MPBE1_DCLK__SHIFT 0x3 3661 + #define UVD_SUVD_CGC_STATUS2__SIT_AV1_DCLK__SHIFT 0x4 3662 + #define UVD_SUVD_CGC_STATUS2__SDB_AV1_DCLK__SHIFT 0x5 3675 3663 #define UVD_SUVD_CGC_STATUS2__MPC1_DCLK__SHIFT 0x6 3676 3664 #define UVD_SUVD_CGC_STATUS2__MPC1_SCLK__SHIFT 0x7 3677 3665 #define UVD_SUVD_CGC_STATUS2__MPC1_VCLK__SHIFT 0x8 ··· 3682 3666 #define UVD_SUVD_CGC_STATUS2__SMPA_VCLK_MASK 0x00000001L 3683 3667 #define UVD_SUVD_CGC_STATUS2__SMPA_DCLK_MASK 0x00000002L 3684 3668 #define UVD_SUVD_CGC_STATUS2__MPBE1_DCLK_MASK 0x00000008L 3669 + #define UVD_SUVD_CGC_STATUS2__SIT_AV1_DCLK_MASK 0x00000010L 3670 + #define UVD_SUVD_CGC_STATUS2__SDB_AV1_DCLK_MASK 0x00000020L 3685 3671 #define UVD_SUVD_CGC_STATUS2__MPC1_DCLK_MASK 0x00000040L 3686 3672 #define UVD_SUVD_CGC_STATUS2__MPC1_SCLK_MASK 0x00000080L 3687 3673 #define UVD_SUVD_CGC_STATUS2__MPC1_VCLK_MASK 0x00000100L ··· 3692 3674 //UVD_SUVD_CGC_GATE2 3693 3675 #define UVD_SUVD_CGC_GATE2__MPBE0__SHIFT 0x0 3694 3676 #define UVD_SUVD_CGC_GATE2__MPBE1__SHIFT 0x1 3677 + #define UVD_SUVD_CGC_GATE2__SIT_AV1__SHIFT 0x2 3678 + #define UVD_SUVD_CGC_GATE2__SDB_AV1__SHIFT 0x3 3695 3679 #define UVD_SUVD_CGC_GATE2__MPC1__SHIFT 0x4 3696 3680 #define UVD_SUVD_CGC_GATE2__MPBE0_MASK 0x00000001L 3697 3681 #define UVD_SUVD_CGC_GATE2__MPBE1_MASK 0x00000002L 3682 + #define UVD_SUVD_CGC_GATE2__SIT_AV1_MASK 0x00000004L 3683 + #define UVD_SUVD_CGC_GATE2__SDB_AV1_MASK 0x00000008L 3698 3684 #define UVD_SUVD_CGC_GATE2__MPC1_MASK 0x00000010L 3699 3685 //UVD_SUVD_INT_STATUS2 3700 3686 #define UVD_SUVD_INT_STATUS2__SMPA_FUNC_INT__SHIFT 0x0 3701 3687 #define UVD_SUVD_INT_STATUS2__SMPA_ERR_INT__SHIFT 0x5 3688 + #define UVD_SUVD_INT_STATUS2__SDB_AV1_FUNC_INT__SHIFT 0x6 3689 + #define UVD_SUVD_INT_STATUS2__SDB_AV1_ERR_INT__SHIFT 0xb 3702 3690 #define UVD_SUVD_INT_STATUS2__SMPA_FUNC_INT_MASK 0x0000001FL 3703 
3691 #define UVD_SUVD_INT_STATUS2__SMPA_ERR_INT_MASK 0x00000020L 3692 + #define UVD_SUVD_INT_STATUS2__SDB_AV1_FUNC_INT_MASK 0x000007C0L 3693 + #define UVD_SUVD_INT_STATUS2__SDB_AV1_ERR_INT_MASK 0x00000800L 3704 3694 //UVD_SUVD_INT_EN2 3705 3695 #define UVD_SUVD_INT_EN2__SMPA_FUNC_INT_EN__SHIFT 0x0 3706 3696 #define UVD_SUVD_INT_EN2__SMPA_ERR_INT_EN__SHIFT 0x5 3697 + #define UVD_SUVD_INT_EN2__SDB_AV1_FUNC_INT_EN__SHIFT 0x6 3698 + #define UVD_SUVD_INT_EN2__SDB_AV1_ERR_INT_EN__SHIFT 0xb 3707 3699 #define UVD_SUVD_INT_EN2__SMPA_FUNC_INT_EN_MASK 0x0000001FL 3708 3700 #define UVD_SUVD_INT_EN2__SMPA_ERR_INT_EN_MASK 0x00000020L 3701 + #define UVD_SUVD_INT_EN2__SDB_AV1_FUNC_INT_EN_MASK 0x000007C0L 3702 + #define UVD_SUVD_INT_EN2__SDB_AV1_ERR_INT_EN_MASK 0x00000800L 3709 3703 //UVD_SUVD_INT_ACK2 3710 3704 #define UVD_SUVD_INT_ACK2__SMPA_FUNC_INT_ACK__SHIFT 0x0 3711 3705 #define UVD_SUVD_INT_ACK2__SMPA_ERR_INT_ACK__SHIFT 0x5 3706 + #define UVD_SUVD_INT_ACK2__SDB_AV1_FUNC_INT_ACK__SHIFT 0x6 3707 + #define UVD_SUVD_INT_ACK2__SDB_AV1_ERR_INT_ACK__SHIFT 0xb 3712 3708 #define UVD_SUVD_INT_ACK2__SMPA_FUNC_INT_ACK_MASK 0x0000001FL 3713 3709 #define UVD_SUVD_INT_ACK2__SMPA_ERR_INT_ACK_MASK 0x00000020L 3710 + #define UVD_SUVD_INT_ACK2__SDB_AV1_FUNC_INT_ACK_MASK 0x000007C0L 3711 + #define UVD_SUVD_INT_ACK2__SDB_AV1_ERR_INT_ACK_MASK 0x00000800L 3714 3712 3715 3713 3716 3714 // addressBlock: uvd0_ecpudec
+11 -11
drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
··· 479 479 return ret; 480 480 } 481 481 482 - /* 483 - * Set initialized values (get from vbios) to dpm tables context such as 484 - * gfxclk, memclk, dcefclk, and etc. And enable the DPM feature for each 485 - * type of clks. 486 - */ 487 - ret = smu_set_default_dpm_table(smu); 488 - if (ret) { 489 - dev_err(adev->dev, "Failed to setup default dpm clock tables!\n"); 490 - return ret; 491 - } 492 - 493 482 ret = smu_populate_umd_state_clk(smu); 494 483 if (ret) { 495 484 dev_err(adev->dev, "Failed to populate UMD state clocks!\n"); ··· 970 981 SMU_POWER_SOURCE_DC); 971 982 if (ret) { 972 983 dev_err(adev->dev, "Failed to switch to %s mode!\n", adev->pm.ac_power ? "AC" : "DC"); 984 + return ret; 985 + } 986 + 987 + /* 988 + * Set initialized values (get from vbios) to dpm tables context such as 989 + * gfxclk, memclk, dcefclk, and etc. And enable the DPM feature for each 990 + * type of clks. 991 + */ 992 + ret = smu_set_default_dpm_table(smu); 993 + if (ret) { 994 + dev_err(adev->dev, "Failed to setup default dpm clock tables!\n"); 973 995 return ret; 974 996 } 975 997
+6 -4
drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
··· 563 563 struct smu10_hwmgr *data = hwmgr->backend; 564 564 uint32_t min_sclk = hwmgr->display_config->min_core_set_clock; 565 565 uint32_t min_mclk = hwmgr->display_config->min_mem_set_clock/100; 566 + uint32_t index_fclk = data->clock_vol_info.vdd_dep_on_fclk->count - 1; 567 + uint32_t index_socclk = data->clock_vol_info.vdd_dep_on_socclk->count - 1; 566 568 567 569 if (hwmgr->smu_version < 0x1E3700) { 568 570 pr_info("smu firmware version too old, can not set dpm level\n"); ··· 678 676 smum_send_msg_to_smc_with_parameter(hwmgr, 679 677 PPSMC_MSG_SetHardMinFclkByFreq, 680 678 hwmgr->display_config->num_display > 3 ? 681 - SMU10_UMD_PSTATE_PEAK_FCLK : 679 + data->clock_vol_info.vdd_dep_on_fclk->entries[0].clk : 682 680 min_mclk, 683 681 NULL); 684 682 685 683 smum_send_msg_to_smc_with_parameter(hwmgr, 686 684 PPSMC_MSG_SetHardMinSocclkByFreq, 687 - SMU10_UMD_PSTATE_MIN_SOCCLK, 685 + data->clock_vol_info.vdd_dep_on_socclk->entries[0].clk, 688 686 NULL); 689 687 smum_send_msg_to_smc_with_parameter(hwmgr, 690 688 PPSMC_MSG_SetHardMinVcn, ··· 697 695 NULL); 698 696 smum_send_msg_to_smc_with_parameter(hwmgr, 699 697 PPSMC_MSG_SetSoftMaxFclkByFreq, 700 - SMU10_UMD_PSTATE_PEAK_FCLK, 698 + data->clock_vol_info.vdd_dep_on_fclk->entries[index_fclk].clk, 701 699 NULL); 702 700 smum_send_msg_to_smc_with_parameter(hwmgr, 703 701 PPSMC_MSG_SetSoftMaxSocclkByFreq, 704 - SMU10_UMD_PSTATE_PEAK_SOCCLK, 702 + data->clock_vol_info.vdd_dep_on_socclk->entries[index_socclk].clk, 705 703 NULL); 706 704 smum_send_msg_to_smc_with_parameter(hwmgr, 707 705 PPSMC_MSG_SetSoftMaxVcn,
+5 -3
drivers/gpu/drm/amd/powerplay/renoir_ppt.c
··· 232 232 *sclk_mask = 0; 233 233 } else if (level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK) { 234 234 if (mclk_mask) 235 - *mclk_mask = 0; 235 + /* mclk levels are in reverse order */ 236 + *mclk_mask = NUM_MEMCLK_DPM_LEVELS - 1; 236 237 } else if (level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK) { 237 238 if(sclk_mask) 238 239 /* The sclk as gfxclk and has three level about max/min/current */ 239 240 *sclk_mask = 3 - 1; 240 241 241 242 if(mclk_mask) 242 - *mclk_mask = NUM_MEMCLK_DPM_LEVELS - 1; 243 + /* mclk levels are in reverse order */ 244 + *mclk_mask = 0; 243 245 244 246 if(soc_mask) 245 247 *soc_mask = NUM_SOCCLK_DPM_LEVELS - 1; ··· 335 333 case SMU_UCLK: 336 334 case SMU_FCLK: 337 335 case SMU_MCLK: 338 - ret = renoir_get_dpm_clk_limited(smu, clk_type, 0, min); 336 + ret = renoir_get_dpm_clk_limited(smu, clk_type, NUM_MEMCLK_DPM_LEVELS - 1, min); 339 337 if (ret) 340 338 goto failed; 341 339 break;
+5 -1
drivers/gpu/drm/i915/gvt/vgpu.c
··· 368 368 static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt, 369 369 struct intel_vgpu_creation_params *param) 370 370 { 371 + struct drm_i915_private *dev_priv = gvt->gt->i915; 371 372 struct intel_vgpu *vgpu; 372 373 int ret; 373 374 ··· 437 436 if (ret) 438 437 goto out_clean_sched_policy; 439 438 440 - ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D); 439 + if (IS_BROADWELL(dev_priv)) 440 + ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B); 441 + else 442 + ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D); 441 443 if (ret) 442 444 goto out_clean_sched_policy; 443 445
+5 -7
drivers/gpu/drm/i915/selftests/mock_gem_device.c
··· 118 118 119 119 struct drm_i915_private *mock_gem_device(void) 120 120 { 121 + #if IS_ENABLED(CONFIG_IOMMU_API) && defined(CONFIG_INTEL_IOMMU) 122 + static struct dev_iommu fake_iommu = { .priv = (void *)-1 }; 123 + #endif 121 124 struct drm_i915_private *i915; 122 125 struct pci_dev *pdev; 123 - #if IS_ENABLED(CONFIG_IOMMU_API) && defined(CONFIG_INTEL_IOMMU) 124 - struct dev_iommu iommu; 125 - #endif 126 126 int err; 127 127 128 128 pdev = kzalloc(sizeof(*pdev), GFP_KERNEL); ··· 141 141 dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 142 142 143 143 #if IS_ENABLED(CONFIG_IOMMU_API) && defined(CONFIG_INTEL_IOMMU) 144 - /* HACK HACK HACK to disable iommu for the fake device; force identity mapping */ 145 - memset(&iommu, 0, sizeof(iommu)); 146 - iommu.priv = (void *)-1; 147 - pdev->dev.iommu = &iommu; 144 + /* HACK to disable iommu for the fake device; force identity mapping */ 145 + pdev->dev.iommu = &fake_iommu; 148 146 #endif 149 147 150 148 pci_set_drvdata(pdev, i915);
+1 -1
drivers/gpu/drm/sun4i/sun8i_csc.h
··· 12 12 13 13 /* VI channel CSC units offsets */ 14 14 #define CCSC00_OFFSET 0xAA050 15 - #define CCSC01_OFFSET 0xFA000 15 + #define CCSC01_OFFSET 0xFA050 16 16 #define CCSC10_OFFSET 0xA0000 17 17 #define CCSC11_OFFSET 0xF0000 18 18
+1 -1
drivers/gpu/drm/sun4i/sun8i_mixer.c
··· 307 307 .reg_bits = 32, 308 308 .val_bits = 32, 309 309 .reg_stride = 4, 310 - .max_register = 0xbfffc, /* guessed */ 310 + .max_register = 0xffffc, /* guessed */ 311 311 }; 312 312 313 313 static int sun8i_mixer_of_get_id(struct device_node *node)
+1
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 1117 1117 card->num_links = 1; 1118 1118 card->name = "vc4-hdmi"; 1119 1119 card->dev = dev; 1120 + card->owner = THIS_MODULE; 1120 1121 1121 1122 /* 1122 1123 * Be careful, snd_soc_register_card() calls dev_set_drvdata() and
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
··· 55 55 56 56 id = ida_alloc_max(&gman->gmr_ida, gman->max_gmr_ids - 1, GFP_KERNEL); 57 57 if (id < 0) 58 - return (id != -ENOMEM ? 0 : id); 58 + return id; 59 59 60 60 spin_lock(&gman->lock); 61 61
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
··· 95 95 mem->start = node->start; 96 96 } 97 97 98 - return 0; 98 + return ret; 99 99 } 100 100 101 101
+3
drivers/i2c/busses/i2c-cpm.c
··· 65 65 char res1[4]; /* Reserved */ 66 66 ushort rpbase; /* Relocation pointer */ 67 67 char res2[2]; /* Reserved */ 68 + /* The following elements are only for CPM2 */ 69 + char res3[4]; /* Reserved */ 70 + uint sdmatmp; /* Internal */ 68 71 }; 69 72 70 73 #define I2COM_START 0x80
+1
drivers/i2c/busses/i2c-i801.c
··· 1917 1917 1918 1918 pci_set_drvdata(dev, priv); 1919 1919 1920 + dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_NO_DIRECT_COMPLETE); 1920 1921 pm_runtime_set_autosuspend_delay(&dev->dev, 1000); 1921 1922 pm_runtime_use_autosuspend(&dev->dev); 1922 1923 pm_runtime_put_autosuspend(&dev->dev);
+9
drivers/i2c/busses/i2c-npcm7xx.c
··· 2163 2163 if (bus->cmd_err == -EAGAIN) 2164 2164 ret = i2c_recover_bus(adap); 2165 2165 2166 + /* 2167 + * After any type of error, check if LAST bit is still set, 2168 + * due to a HW issue. 2169 + * It cannot be cleared without resetting the module. 2170 + */ 2171 + if (bus->cmd_err && 2172 + (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL))) 2173 + npcm_i2c_reset(bus); 2174 + 2166 2175 #if IS_ENABLED(CONFIG_I2C_SLAVE) 2167 2176 /* reenable slave if it was enabled */ 2168 2177 if (bus->slave)
+2 -2
drivers/iio/adc/ad7124.c
··· 177 177 178 178 static struct ad7124_chip_info ad7124_chip_info_tbl[] = { 179 179 [ID_AD7124_4] = { 180 - .name = "ad7127-4", 180 + .name = "ad7124-4", 181 181 .chip_id = CHIPID_AD7124_4, 182 182 .num_inputs = 8, 183 183 }, 184 184 [ID_AD7124_8] = { 185 - .name = "ad7127-8", 185 + .name = "ad7124-8", 186 186 .chip_id = CHIPID_AD7124_8, 187 187 .num_inputs = 16, 188 188 },
+1 -1
drivers/iio/adc/qcom-spmi-adc5.c
··· 982 982 983 983 static struct platform_driver adc5_driver = { 984 984 .driver = { 985 - .name = "qcom-spmi-adc5.c", 985 + .name = "qcom-spmi-adc5", 986 986 .of_match_table = adc5_match_table, 987 987 }, 988 988 .probe = adc5_probe,
+4 -2
drivers/infiniband/core/device.c
··· 1285 1285 remove_client_context(device, cid); 1286 1286 } 1287 1287 1288 + ib_cq_pool_destroy(device); 1289 + 1288 1290 /* Pairs with refcount_set in enable_device */ 1289 1291 ib_device_put(device); 1290 1292 wait_for_completion(&device->unreg_completion); ··· 1329 1327 if (ret) 1330 1328 goto out; 1331 1329 } 1330 + 1331 + ib_cq_pool_init(device); 1332 1332 1333 1333 down_read(&clients_rwsem); 1334 1334 xa_for_each_marked (&clients, index, client, CLIENT_REGISTERED) { ··· 1404 1400 goto dev_cleanup; 1405 1401 } 1406 1402 1407 - ib_cq_pool_init(device); 1408 1403 ret = enable_device_and_get(device); 1409 1404 dev_set_uevent_suppress(&device->dev, false); 1410 1405 /* Mark for userspace that device is ready */ ··· 1458 1455 goto out; 1459 1456 1460 1457 disable_device(ib_dev); 1461 - ib_cq_pool_destroy(ib_dev); 1462 1458 1463 1459 /* Expedite removing unregistered pointers from the hash table */ 1464 1460 free_netdevs(ib_dev);
+2
drivers/input/mouse/trackpoint.c
··· 282 282 case TP_VARIANT_ALPS: 283 283 case TP_VARIANT_ELAN: 284 284 case TP_VARIANT_NXP: 285 + case TP_VARIANT_JYT_SYNAPTICS: 286 + case TP_VARIANT_SYNAPTICS: 285 287 if (variant_id) 286 288 *variant_id = param[0]; 287 289 if (firmware_id)
+7
drivers/input/serio/i8042-x86ia64io.h
··· 721 721 DMI_MATCH(DMI_BOARD_VENDOR, "MICRO-STAR INTERNATIONAL CO., LTD"), 722 722 }, 723 723 }, 724 + { 725 + /* Acer Aspire 5 A515 */ 726 + .matches = { 727 + DMI_MATCH(DMI_BOARD_NAME, "Grumpy_PK"), 728 + DMI_MATCH(DMI_BOARD_VENDOR, "PK"), 729 + }, 730 + }, 724 731 { } 725 732 }; 726 733
+10 -46
drivers/iommu/amd/init.c
··· 1104 1104 } 1105 1105 1106 1106 /* 1107 - * Reads the device exclusion range from ACPI and initializes the IOMMU with 1108 - * it 1109 - */ 1110 - static void __init set_device_exclusion_range(u16 devid, struct ivmd_header *m) 1111 - { 1112 - if (!(m->flags & IVMD_FLAG_EXCL_RANGE)) 1113 - return; 1114 - 1115 - /* 1116 - * Treat per-device exclusion ranges as r/w unity-mapped regions 1117 - * since some buggy BIOSes might lead to the overwritten exclusion 1118 - * range (exclusion_start and exclusion_length members). This 1119 - * happens when there are multiple exclusion ranges (IVMD entries) 1120 - * defined in ACPI table. 1121 - */ 1122 - m->flags = (IVMD_FLAG_IW | IVMD_FLAG_IR | IVMD_FLAG_UNITY_MAP); 1123 - } 1124 - 1125 - /* 1126 1107 * Takes a pointer to an AMD IOMMU entry in the ACPI table and 1127 1108 * initializes the hardware and our data structures with it. 1128 1109 */ ··· 2054 2073 } 2055 2074 } 2056 2075 2057 - /* called when we find an exclusion range definition in ACPI */ 2058 - static int __init init_exclusion_range(struct ivmd_header *m) 2059 - { 2060 - int i; 2061 - 2062 - switch (m->type) { 2063 - case ACPI_IVMD_TYPE: 2064 - set_device_exclusion_range(m->devid, m); 2065 - break; 2066 - case ACPI_IVMD_TYPE_ALL: 2067 - for (i = 0; i <= amd_iommu_last_bdf; ++i) 2068 - set_device_exclusion_range(i, m); 2069 - break; 2070 - case ACPI_IVMD_TYPE_RANGE: 2071 - for (i = m->devid; i <= m->aux; ++i) 2072 - set_device_exclusion_range(i, m); 2073 - break; 2074 - default: 2075 - break; 2076 - } 2077 - 2078 - return 0; 2079 - } 2080 - 2081 2076 /* called for unity map ACPI definition */ 2082 2077 static int __init init_unity_map_range(struct ivmd_header *m) 2083 2078 { ··· 2063 2106 e = kzalloc(sizeof(*e), GFP_KERNEL); 2064 2107 if (e == NULL) 2065 2108 return -ENOMEM; 2066 - 2067 - if (m->flags & IVMD_FLAG_EXCL_RANGE) 2068 - init_exclusion_range(m); 2069 2109 2070 2110 switch (m->type) { 2071 2111 default: ··· 2086 2132 e->address_start = 
PAGE_ALIGN(m->range_start); 2087 2133 e->address_end = e->address_start + PAGE_ALIGN(m->range_length); 2088 2134 e->prot = m->flags >> 1; 2135 + 2136 + /* 2137 + * Treat per-device exclusion ranges as r/w unity-mapped regions 2138 + * since some buggy BIOSes might lead to the overwritten exclusion 2139 + * range (exclusion_start and exclusion_length members). This 2140 + * happens when there are multiple exclusion ranges (IVMD entries) 2141 + * defined in ACPI table. 2142 + */ 2143 + if (m->flags & IVMD_FLAG_EXCL_RANGE) 2144 + e->prot = (IVMD_FLAG_IW | IVMD_FLAG_IR) >> 1; 2089 2145 2090 2146 DUMP_printk("%s devid_start: %02x:%02x.%x devid_end: %02x:%02x.%x" 2091 2147 " range_start: %016llx range_end: %016llx flags: %x\n", s,
+6 -2
drivers/iommu/exynos-iommu.c
··· 1295 1295 return -ENODEV; 1296 1296 1297 1297 data = platform_get_drvdata(sysmmu); 1298 - if (!data) 1298 + if (!data) { 1299 + put_device(&sysmmu->dev); 1299 1300 return -ENODEV; 1301 + } 1300 1302 1301 1303 if (!owner) { 1302 1304 owner = kzalloc(sizeof(*owner), GFP_KERNEL); 1303 - if (!owner) 1305 + if (!owner) { 1306 + put_device(&sysmmu->dev); 1304 1307 return -ENOMEM; 1308 + } 1305 1309 1306 1310 INIT_LIST_HEAD(&owner->controllers); 1307 1311 mutex_init(&owner->rpm_lock);
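The exynos-iommu hunk plugs a device reference leak: both the missing-drvdata path and the failed owner allocation returned without dropping the reference previously taken on the sysmmu device. A hedged userspace sketch of the pattern being restored (the `toy_*` names are illustrative, not driver API):

```c
#include <assert.h>

/* Illustrative userspace sketch (not driver code): a reference taken
 * up front must be dropped on EVERY error path, not only during
 * success-path teardown. */
struct toy_dev { int refcount; };

static void toy_get(struct toy_dev *d) { d->refcount++; }
static void toy_put(struct toy_dev *d) { d->refcount--; }

static int toy_attach(struct toy_dev *d, int have_drvdata, int have_mem)
{
	toy_get(d);		/* counterpart of the earlier device lookup */

	if (!have_drvdata) {
		toy_put(d);	/* balance before returning (-ENODEV) */
		return -1;
	}
	if (!have_mem) {
		toy_put(d);	/* balance before returning (-ENOMEM) */
		return -2;
	}
	return 0;		/* success: caller keeps the reference */
}
```

The invariant to check in review: after any failing call, the refcount is back where it started.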
+2 -2
drivers/iommu/intel/iommu.c
··· 2664 2664 } 2665 2665 2666 2666 /* Setup the PASID entry for requests without PASID: */ 2667 - spin_lock(&iommu->lock); 2667 + spin_lock_irqsave(&iommu->lock, flags); 2668 2668 if (hw_pass_through && domain_type_is_si(domain)) 2669 2669 ret = intel_pasid_setup_pass_through(iommu, domain, 2670 2670 dev, PASID_RID2PASID); ··· 2674 2674 else 2675 2675 ret = intel_pasid_setup_second_level(iommu, domain, 2676 2676 dev, PASID_RID2PASID); 2677 - spin_unlock(&iommu->lock); 2677 + spin_unlock_irqrestore(&iommu->lock, flags); 2678 2678 if (ret) { 2679 2679 dev_err(dev, "Setup RID2PASID failed\n"); 2680 2680 dmar_remove_one_dev_info(dev);
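The intel-iommu hunk switches to the irqsave/irqrestore lock variants because the lock may be taken in contexts where interrupts are already disabled; a plain unlock would not preserve the caller's IRQ state. A minimal simulation of why the saved flags matter (the `sim_*` names are illustrative, not kernel API):

```c
#include <assert.h>

/* Toy model of irqsave/irqrestore semantics: the unlock path restores
 * whatever IRQ state the caller had, instead of unconditionally
 * re-enabling interrupts. */
static int sim_irqs_enabled = 1;

static unsigned long sim_lock_irqsave(void)
{
	unsigned long flags = sim_irqs_enabled;	/* remember caller state */
	sim_irqs_enabled = 0;			/* IRQs off in the section */
	return flags;
}

static void sim_unlock_irqrestore(unsigned long flags)
{
	sim_irqs_enabled = (int)flags;		/* restore, never force on */
}
```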
+5 -22
drivers/md/dm.c
··· 1724 1724 return ret; 1725 1725 } 1726 1726 1727 - static void dm_queue_split(struct mapped_device *md, struct dm_target *ti, struct bio **bio) 1728 - { 1729 - unsigned len, sector_count; 1730 - 1731 - sector_count = bio_sectors(*bio); 1732 - len = min_t(sector_t, max_io_len((*bio)->bi_iter.bi_sector, ti), sector_count); 1733 - 1734 - if (sector_count > len) { 1735 - struct bio *split = bio_split(*bio, len, GFP_NOIO, &md->queue->bio_split); 1736 - 1737 - bio_chain(split, *bio); 1738 - trace_block_split(md->queue, split, (*bio)->bi_iter.bi_sector); 1739 - submit_bio_noacct(*bio); 1740 - *bio = split; 1741 - } 1742 - } 1743 - 1744 1727 static blk_qc_t dm_process_bio(struct mapped_device *md, 1745 1728 struct dm_table *map, struct bio *bio) 1746 1729 { ··· 1744 1761 } 1745 1762 1746 1763 /* 1747 - * If in ->queue_bio we need to use blk_queue_split(), otherwise 1764 + * If in ->submit_bio we need to use blk_queue_split(), otherwise 1748 1765 * queue_limits for abnormal requests (e.g. discard, writesame, etc) 1749 1766 * won't be imposed. 1767 + * If called from dm_wq_work() for deferred bio processing, bio 1768 + * was already handled by following code with previous ->submit_bio. 1750 1769 */ 1751 1770 if (current->bio_list) { 1752 1771 if (is_abnormal_io(bio)) 1753 1772 blk_queue_split(&bio); 1754 - else 1755 - dm_queue_split(md, ti, &bio); 1773 + /* regular IO is split by __split_and_process_bio */ 1756 1774 } 1757 1775 1758 1776 if (dm_get_md_type(md) == DM_TYPE_NVME_BIO_BASED) 1759 1777 return __process_bio(md, map, bio, ti); 1760 - else 1761 - return __split_and_process_bio(md, map, bio); 1778 + return __split_and_process_bio(md, map, bio); 1762 1779 } 1763 1780 1764 1781 static blk_qc_t dm_submit_bio(struct bio *bio)
+1 -1
drivers/media/cec/core/cec-adap.c
··· 1199 1199 /* Cancel the pending timeout work */ 1200 1200 if (!cancel_delayed_work(&data->work)) { 1201 1201 mutex_unlock(&adap->lock); 1202 - flush_scheduled_work(); 1202 + cancel_delayed_work_sync(&data->work); 1203 1203 mutex_lock(&adap->lock); 1204 1204 } 1205 1205 /*
+6 -40
drivers/media/common/videobuf2/videobuf2-core.c
··· 721 721 } 722 722 EXPORT_SYMBOL(vb2_verify_memory_type); 723 723 724 - static void set_queue_consistency(struct vb2_queue *q, bool consistent_mem) 725 - { 726 - q->dma_attrs &= ~DMA_ATTR_NON_CONSISTENT; 727 - 728 - if (!vb2_queue_allows_cache_hints(q)) 729 - return; 730 - if (!consistent_mem) 731 - q->dma_attrs |= DMA_ATTR_NON_CONSISTENT; 732 - } 733 - 734 - static bool verify_consistency_attr(struct vb2_queue *q, bool consistent_mem) 735 - { 736 - bool queue_is_consistent = !(q->dma_attrs & DMA_ATTR_NON_CONSISTENT); 737 - 738 - if (consistent_mem != queue_is_consistent) { 739 - dprintk(q, 1, "memory consistency model mismatch\n"); 740 - return false; 741 - } 742 - return true; 743 - } 744 - 745 724 int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory, 746 - unsigned int flags, unsigned int *count) 725 + unsigned int *count) 747 726 { 748 727 unsigned int num_buffers, allocated_buffers, num_planes = 0; 749 728 unsigned plane_sizes[VB2_MAX_PLANES] = { }; 750 - bool consistent_mem = true; 751 729 unsigned int i; 752 730 int ret; 753 - 754 - if (flags & V4L2_FLAG_MEMORY_NON_CONSISTENT) 755 - consistent_mem = false; 756 731 757 732 if (q->streaming) { 758 733 dprintk(q, 1, "streaming active\n"); ··· 740 765 } 741 766 742 767 if (*count == 0 || q->num_buffers != 0 || 743 - (q->memory != VB2_MEMORY_UNKNOWN && q->memory != memory) || 744 - !verify_consistency_attr(q, consistent_mem)) { 768 + (q->memory != VB2_MEMORY_UNKNOWN && q->memory != memory)) { 745 769 /* 746 770 * We already have buffers allocated, so first check if they 747 771 * are not in use and can be freed. ··· 777 803 num_buffers = min_t(unsigned int, num_buffers, VB2_MAX_FRAME); 778 804 memset(q->alloc_devs, 0, sizeof(q->alloc_devs)); 779 805 q->memory = memory; 780 - set_queue_consistency(q, consistent_mem); 781 806 782 807 /* 783 808 * Ask the driver how many buffers and planes per buffer it requires. 
··· 861 888 EXPORT_SYMBOL_GPL(vb2_core_reqbufs); 862 889 863 890 int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory, 864 - unsigned int flags, unsigned int *count, 891 + unsigned int *count, 865 892 unsigned int requested_planes, 866 893 const unsigned int requested_sizes[]) 867 894 { 868 895 unsigned int num_planes = 0, num_buffers, allocated_buffers; 869 896 unsigned plane_sizes[VB2_MAX_PLANES] = { }; 870 - bool consistent_mem = true; 871 897 int ret; 872 - 873 - if (flags & V4L2_FLAG_MEMORY_NON_CONSISTENT) 874 - consistent_mem = false; 875 898 876 899 if (q->num_buffers == VB2_MAX_FRAME) { 877 900 dprintk(q, 1, "maximum number of buffers already allocated\n"); ··· 881 912 } 882 913 memset(q->alloc_devs, 0, sizeof(q->alloc_devs)); 883 914 q->memory = memory; 884 - set_queue_consistency(q, consistent_mem); 885 915 q->waiting_for_buffers = !q->is_output; 886 916 } else { 887 917 if (q->memory != memory) { 888 918 dprintk(q, 1, "memory model mismatch\n"); 889 919 return -EINVAL; 890 920 } 891 - if (!verify_consistency_attr(q, consistent_mem)) 892 - return -EINVAL; 893 921 } 894 922 895 923 num_buffers = min(*count, VB2_MAX_FRAME - q->num_buffers); ··· 2547 2581 fileio->memory = VB2_MEMORY_MMAP; 2548 2582 fileio->type = q->type; 2549 2583 q->fileio = fileio; 2550 - ret = vb2_core_reqbufs(q, fileio->memory, 0, &fileio->count); 2584 + ret = vb2_core_reqbufs(q, fileio->memory, &fileio->count); 2551 2585 if (ret) 2552 2586 goto err_kfree; 2553 2587 ··· 2604 2638 2605 2639 err_reqbufs: 2606 2640 fileio->count = 0; 2607 - vb2_core_reqbufs(q, fileio->memory, 0, &fileio->count); 2641 + vb2_core_reqbufs(q, fileio->memory, &fileio->count); 2608 2642 2609 2643 err_kfree: 2610 2644 q->fileio = NULL; ··· 2624 2658 vb2_core_streamoff(q, q->type); 2625 2659 q->fileio = NULL; 2626 2660 fileio->count = 0; 2627 - vb2_core_reqbufs(q, fileio->memory, 0, &fileio->count); 2661 + vb2_core_reqbufs(q, fileio->memory, &fileio->count); 2628 2662 kfree(fileio); 2629 2663 
dprintk(q, 3, "file io emulator closed\n"); 2630 2664 }
-19
drivers/media/common/videobuf2/videobuf2-dma-contig.c
··· 42 42 struct dma_buf_attachment *db_attach; 43 43 }; 44 44 45 - static inline bool vb2_dc_buffer_consistent(unsigned long attr) 46 - { 47 - return !(attr & DMA_ATTR_NON_CONSISTENT); 48 - } 49 - 50 45 /*********************************************/ 51 46 /* scatterlist table functions */ 52 47 /*********************************************/ ··· 336 341 vb2_dc_dmabuf_ops_begin_cpu_access(struct dma_buf *dbuf, 337 342 enum dma_data_direction direction) 338 343 { 339 - struct vb2_dc_buf *buf = dbuf->priv; 340 - struct sg_table *sgt = buf->dma_sgt; 341 - 342 - if (vb2_dc_buffer_consistent(buf->attrs)) 343 - return 0; 344 - 345 - dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir); 346 344 return 0; 347 345 } 348 346 ··· 343 355 vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf, 344 356 enum dma_data_direction direction) 345 357 { 346 - struct vb2_dc_buf *buf = dbuf->priv; 347 - struct sg_table *sgt = buf->dma_sgt; 348 - 349 - if (vb2_dc_buffer_consistent(buf->attrs)) 350 - return 0; 351 - 352 - dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir); 353 358 return 0; 354 359 } 355 360
+1 -2
drivers/media/common/videobuf2/videobuf2-dma-sg.c
··· 123 123 /* 124 124 * NOTE: dma-sg allocates memory using the page allocator directly, so 125 125 * there is no memory consistency guarantee, hence dma-sg ignores DMA 126 - * attributes passed from the upper layer. That means that 127 - * V4L2_FLAG_MEMORY_NON_CONSISTENT has no effect on dma-sg buffers. 126 + * attributes passed from the upper layer. 128 127 */ 129 128 buf->pages = kvmalloc_array(buf->num_pages, sizeof(struct page *), 130 129 GFP_KERNEL | __GFP_ZERO);
+2 -16
drivers/media/common/videobuf2/videobuf2-v4l2.c
··· 722 722 #endif 723 723 } 724 724 725 - static void clear_consistency_attr(struct vb2_queue *q, 726 - int memory, 727 - unsigned int *flags) 728 - { 729 - if (!q->allow_cache_hints || memory != V4L2_MEMORY_MMAP) 730 - *flags &= ~V4L2_FLAG_MEMORY_NON_CONSISTENT; 731 - } 732 - 733 725 int vb2_reqbufs(struct vb2_queue *q, struct v4l2_requestbuffers *req) 734 726 { 735 727 int ret = vb2_verify_memory_type(q, req->memory, req->type); 736 728 737 729 fill_buf_caps(q, &req->capabilities); 738 - clear_consistency_attr(q, req->memory, &req->flags); 739 - return ret ? ret : vb2_core_reqbufs(q, req->memory, 740 - req->flags, &req->count); 730 + return ret ? ret : vb2_core_reqbufs(q, req->memory, &req->count); 741 731 } 742 732 EXPORT_SYMBOL_GPL(vb2_reqbufs); 743 733 ··· 759 769 unsigned i; 760 770 761 771 fill_buf_caps(q, &create->capabilities); 762 - clear_consistency_attr(q, create->memory, &create->flags); 763 772 create->index = q->num_buffers; 764 773 if (create->count == 0) 765 774 return ret != -EBUSY ? ret : 0; ··· 802 813 if (requested_sizes[i] == 0) 803 814 return -EINVAL; 804 815 return ret ? ret : vb2_core_create_bufs(q, create->memory, 805 - create->flags, 806 816 &create->count, 807 817 requested_planes, 808 818 requested_sizes); ··· 986 998 int res = vb2_verify_memory_type(vdev->queue, p->memory, p->type); 987 999 988 1000 fill_buf_caps(vdev->queue, &p->capabilities); 989 - clear_consistency_attr(vdev->queue, p->memory, &p->flags); 990 1001 if (res) 991 1002 return res; 992 1003 if (vb2_queue_is_busy(vdev, file)) 993 1004 return -EBUSY; 994 - res = vb2_core_reqbufs(vdev->queue, p->memory, p->flags, &p->count); 1005 + res = vb2_core_reqbufs(vdev->queue, p->memory, &p->count); 995 1006 /* If count == 0, then the owner has released all buffers and he 996 1007 is no longer owner of the queue. Otherwise we have a new owner. 
*/ 997 1008 if (res == 0) ··· 1008 1021 1009 1022 p->index = vdev->queue->num_buffers; 1010 1023 fill_buf_caps(vdev->queue, &p->capabilities); 1011 - clear_consistency_attr(vdev->queue, p->memory, &p->flags); 1012 1024 /* 1013 1025 * If count == 0, then just check if memory and type are valid. 1014 1026 * Any -EBUSY result from vb2_verify_memory_type can be mapped to 0.
+1 -1
drivers/media/dvb-core/dvb_vb2.c
··· 342 342 343 343 ctx->buf_siz = req->size; 344 344 ctx->buf_cnt = req->count; 345 - ret = vb2_core_reqbufs(&ctx->vb_q, VB2_MEMORY_MMAP, 0, &req->count); 345 + ret = vb2_core_reqbufs(&ctx->vb_q, VB2_MEMORY_MMAP, &req->count); 346 346 if (ret) { 347 347 ctx->state = DVB_VB2_STATE_NONE; 348 348 dprintk(1, "[%s] count=%d size=%d errno=%d\n", ctx->name,
+2 -8
drivers/media/v4l2-core/v4l2-compat-ioctl32.c
··· 246 246 * @memory: buffer memory type 247 247 * @format: frame format, for which buffers are requested 248 248 * @capabilities: capabilities of this buffer type. 249 - * @flags: additional buffer management attributes (ignored unless the 250 - * queue has V4L2_BUF_CAP_SUPPORTS_MMAP_CACHE_HINTS capability and 251 - * configured for MMAP streaming I/O). 252 249 * @reserved: future extensions 253 250 */ 254 251 struct v4l2_create_buffers32 { ··· 254 257 __u32 memory; /* enum v4l2_memory */ 255 258 struct v4l2_format32 format; 256 259 __u32 capabilities; 257 - __u32 flags; 258 - __u32 reserved[6]; 260 + __u32 reserved[7]; 259 261 }; 260 262 261 263 static int __bufsize_v4l2_format(struct v4l2_format32 __user *p32, u32 *size) ··· 355 359 { 356 360 if (!access_ok(p32, sizeof(*p32)) || 357 361 copy_in_user(p64, p32, 358 - offsetof(struct v4l2_create_buffers32, format)) || 359 - assign_in_user(&p64->flags, &p32->flags)) 362 + offsetof(struct v4l2_create_buffers32, format))) 360 363 return -EFAULT; 361 364 return __get_v4l2_format32(&p64->format, &p32->format, 362 365 aux_buf, aux_space); ··· 417 422 copy_in_user(p32, p64, 418 423 offsetof(struct v4l2_create_buffers32, format)) || 419 424 assign_in_user(&p32->capabilities, &p64->capabilities) || 420 - assign_in_user(&p32->flags, &p64->flags) || 421 425 copy_in_user(p32->reserved, p64->reserved, sizeof(p64->reserved))) 422 426 return -EFAULT; 423 427 return __put_v4l2_format32(&p64->format, &p32->format);
+4 -1
drivers/media/v4l2-core/v4l2-ioctl.c
··· 2042 2042 2043 2043 if (ret) 2044 2044 return ret; 2045 + 2046 + CLEAR_AFTER_FIELD(p, capabilities); 2047 + 2045 2048 return ops->vidioc_reqbufs(file, fh, p); 2046 2049 } 2047 2050 ··· 2084 2081 if (ret) 2085 2082 return ret; 2086 2083 2087 - CLEAR_AFTER_FIELD(create, flags); 2084 + CLEAR_AFTER_FIELD(create, capabilities); 2088 2085 2089 2086 v4l_sanitize_format(&create->format); 2090 2087
+4
drivers/memstick/core/memstick.c
··· 441 441 } else if (host->card->stop) 442 442 host->card->stop(host->card); 443 443 444 + if (host->removing) 445 + goto out_power_off; 446 + 444 447 card = memstick_alloc_card(host); 445 448 446 449 if (!card) { ··· 548 545 */ 549 546 void memstick_remove_host(struct memstick_host *host) 550 547 { 548 + host->removing = 1; 551 549 flush_workqueue(workqueue); 552 550 mutex_lock(&host->lock); 553 551 if (host->card)
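The memstick hunk closes a race where detect work queued before removal could re-allocate a card while the host was being torn down: a `removing` flag is set before the workqueue is flushed, and the work bails out when it sees it. The same shutdown-flag pattern, sketched in userspace (names are illustrative):

```c
#include <assert.h>

/* Shutdown-flag pattern: work already queued when removal starts
 * observes 'removing' and becomes a no-op instead of re-allocating
 * the card under the teardown path's feet. */
struct toy_host { int removing; int card_present; };

static void toy_detect_work(struct toy_host *h)
{
	if (h->removing)
		return;			/* teardown in progress */
	h->card_present = 1;		/* would allocate a new card here */
}

static void toy_remove_host(struct toy_host *h)
{
	h->removing = 1;		/* set BEFORE flushing the queue */
	toy_detect_work(h);		/* queued work draining */
	h->card_present = 0;		/* free any existing card */
}
```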
+1 -1
drivers/mmc/host/mmc_spi.c
··· 1320 1320 DMA_BIDIRECTIONAL); 1321 1321 } 1322 1322 #else 1323 - static inline mmc_spi_dma_alloc(struct mmc_spi_host *host) { return 0; } 1323 + static inline int mmc_spi_dma_alloc(struct mmc_spi_host *host) { return 0; } 1324 1324 static inline void mmc_spi_dma_free(struct mmc_spi_host *host) {} 1325 1325 #endif 1326 1326
+2 -1
drivers/mmc/host/sdhci-pci-core.c
··· 794 794 static bool glk_broken_cqhci(struct sdhci_pci_slot *slot) 795 795 { 796 796 return slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC && 797 - dmi_match(DMI_BIOS_VENDOR, "LENOVO"); 797 + (dmi_match(DMI_BIOS_VENDOR, "LENOVO") || 798 + dmi_match(DMI_SYS_VENDOR, "IRBIS")); 798 799 } 799 800 800 801 static int glk_emmc_probe_slot(struct sdhci_pci_slot *slot)
+14 -6
drivers/net/dsa/microchip/ksz8795.c
··· 932 932 ksz_port_cfg(dev, port, P_PRIO_CTRL, PORT_802_1P_ENABLE, true); 933 933 934 934 if (cpu_port) { 935 + if (!p->interface && dev->compat_interface) { 936 + dev_warn(dev->dev, 937 + "Using legacy switch \"phy-mode\" property, because it is missing on port %d node. " 938 + "Please update your device tree.\n", 939 + port); 940 + p->interface = dev->compat_interface; 941 + } 942 + 935 943 /* Configure MII interface for proper network communication. */ 936 944 ksz_read8(dev, REG_PORT_5_CTRL_6, &data8); 937 945 data8 &= ~PORT_INTERFACE_TYPE; 938 946 data8 &= ~PORT_GMII_1GPS_MODE; 939 - switch (dev->interface) { 947 + switch (p->interface) { 940 948 case PHY_INTERFACE_MODE_MII: 941 949 p->phydev.speed = SPEED_100; 942 950 break; ··· 960 952 default: 961 953 data8 &= ~PORT_RGMII_ID_IN_ENABLE; 962 954 data8 &= ~PORT_RGMII_ID_OUT_ENABLE; 963 - if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID || 964 - dev->interface == PHY_INTERFACE_MODE_RGMII_RXID) 955 + if (p->interface == PHY_INTERFACE_MODE_RGMII_ID || 956 + p->interface == PHY_INTERFACE_MODE_RGMII_RXID) 965 957 data8 |= PORT_RGMII_ID_IN_ENABLE; 966 - if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID || 967 - dev->interface == PHY_INTERFACE_MODE_RGMII_TXID) 958 + if (p->interface == PHY_INTERFACE_MODE_RGMII_ID || 959 + p->interface == PHY_INTERFACE_MODE_RGMII_TXID) 968 960 data8 |= PORT_RGMII_ID_OUT_ENABLE; 969 961 data8 |= PORT_GMII_1GPS_MODE; 970 962 data8 |= PORT_INTERFACE_RGMII; ··· 1260 1252 } 1261 1253 1262 1254 /* set the real number of ports */ 1263 - dev->ds->num_ports = dev->port_cnt; 1255 + dev->ds->num_ports = dev->port_cnt + 1; 1264 1256 1265 1257 return 0; 1266 1258 }
+19 -10
drivers/net/dsa/microchip/ksz9477.c
··· 1208 1208 1209 1209 /* configure MAC to 1G & RGMII mode */ 1210 1210 ksz_pread8(dev, port, REG_PORT_XMII_CTRL_1, &data8); 1211 - switch (dev->interface) { 1211 + switch (p->interface) { 1212 1212 case PHY_INTERFACE_MODE_MII: 1213 1213 ksz9477_set_xmii(dev, 0, &data8); 1214 1214 ksz9477_set_gbit(dev, false, &data8); ··· 1229 1229 ksz9477_set_gbit(dev, true, &data8); 1230 1230 data8 &= ~PORT_RGMII_ID_IG_ENABLE; 1231 1231 data8 &= ~PORT_RGMII_ID_EG_ENABLE; 1232 - if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID || 1233 - dev->interface == PHY_INTERFACE_MODE_RGMII_RXID) 1232 + if (p->interface == PHY_INTERFACE_MODE_RGMII_ID || 1233 + p->interface == PHY_INTERFACE_MODE_RGMII_RXID) 1234 1234 data8 |= PORT_RGMII_ID_IG_ENABLE; 1235 - if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID || 1236 - dev->interface == PHY_INTERFACE_MODE_RGMII_TXID) 1235 + if (p->interface == PHY_INTERFACE_MODE_RGMII_ID || 1236 + p->interface == PHY_INTERFACE_MODE_RGMII_TXID) 1237 1237 data8 |= PORT_RGMII_ID_EG_ENABLE; 1238 1238 p->phydev.speed = SPEED_1000; 1239 1239 break; ··· 1269 1269 dev->cpu_port = i; 1270 1270 dev->host_mask = (1 << dev->cpu_port); 1271 1271 dev->port_mask |= dev->host_mask; 1272 + p = &dev->ports[i]; 1272 1273 1273 1274 /* Read from XMII register to determine host port 1274 1275 * interface. If set specifically in device tree 1275 1276 * note the difference to help debugging. 1276 1277 */ 1277 1278 interface = ksz9477_get_interface(dev, i); 1278 - if (!dev->interface) 1279 - dev->interface = interface; 1280 - if (interface && interface != dev->interface) 1279 + if (!p->interface) { 1280 + if (dev->compat_interface) { 1281 + dev_warn(dev->dev, 1282 + "Using legacy switch \"phy-mode\" property, because it is missing on port %d node. 
" 1283 + "Please update your device tree.\n", 1284 + i); 1285 + p->interface = dev->compat_interface; 1286 + } else { 1287 + p->interface = interface; 1288 + } 1289 + } 1290 + if (interface && interface != p->interface) 1281 1291 dev_info(dev->dev, 1282 1292 "use %s instead of %s\n", 1283 - phy_modes(dev->interface), 1293 + phy_modes(p->interface), 1284 1294 phy_modes(interface)); 1285 1295 1286 1296 /* enable cpu port */ 1287 1297 ksz9477_port_setup(dev, i, true); 1288 - p = &dev->ports[dev->cpu_port]; 1289 1298 p->vid_member = dev->port_mask; 1290 1299 p->on = 1; 1291 1300 }
+12 -1
drivers/net/dsa/microchip/ksz_common.c
··· 388 388 const struct ksz_dev_ops *ops) 389 389 { 390 390 phy_interface_t interface; 391 + struct device_node *port; 392 + unsigned int port_num; 391 393 int ret; 392 394 393 395 if (dev->pdata) ··· 423 421 /* Host port interface will be self detected, or specifically set in 424 422 * device tree. 425 423 */ 424 + for (port_num = 0; port_num < dev->port_cnt; ++port_num) 425 + dev->ports[port_num].interface = PHY_INTERFACE_MODE_NA; 426 426 if (dev->dev->of_node) { 427 427 ret = of_get_phy_mode(dev->dev->of_node, &interface); 428 428 if (ret == 0) 429 - dev->interface = interface; 429 + dev->compat_interface = interface; 430 + for_each_available_child_of_node(dev->dev->of_node, port) { 431 + if (of_property_read_u32(port, "reg", &port_num)) 432 + continue; 433 + if (port_num >= dev->port_cnt) 434 + return -EINVAL; 435 + of_get_phy_mode(port, &dev->ports[port_num].interface); 436 + } 430 437 dev->synclko_125 = of_property_read_bool(dev->dev->of_node, 431 438 "microchip,synclko-125"); 432 439 }
+2 -1
drivers/net/dsa/microchip/ksz_common.h
··· 39 39 u32 freeze:1; /* MIB counter freeze is enabled */ 40 40 41 41 struct ksz_port_mib mib; 42 + phy_interface_t interface; 42 43 }; 43 44 44 45 struct ksz_device { ··· 73 72 int mib_cnt; 74 73 int mib_port_cnt; 75 74 int last_port; /* ports after that not used */ 76 - phy_interface_t interface; 75 + phy_interface_t compat_interface; 77 76 u32 regs_size; 78 77 bool phy_errata_9477; 79 78 bool synclko_125;
+7 -1
drivers/net/dsa/ocelot/felix.c
··· 585 585 if (err) 586 586 return err; 587 587 588 - ocelot_init(ocelot); 588 + err = ocelot_init(ocelot); 589 + if (err) 590 + return err; 591 + 589 592 if (ocelot->ptp) { 590 593 err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info); 591 594 if (err) { ··· 643 640 { 644 641 struct ocelot *ocelot = ds->priv; 645 642 struct felix *felix = ocelot_to_felix(ocelot); 643 + int port; 646 644 647 645 if (felix->info->mdio_bus_free) 648 646 felix->info->mdio_bus_free(ocelot); 649 647 648 + for (port = 0; port < ocelot->num_phys_ports; port++) 649 + ocelot_deinit_port(ocelot, port); 650 650 ocelot_deinit_timestamp(ocelot); 651 651 /* stop workqueue thread */ 652 652 ocelot_deinit(ocelot);
+8 -8
drivers/net/dsa/ocelot/felix_vsc9959.c
··· 645 645 [VCAP_IS2_HK_DIP_EQ_SIP] = {118, 1}, 646 646 /* IP4_TCP_UDP (TYPE=100) */ 647 647 [VCAP_IS2_HK_TCP] = {119, 1}, 648 - [VCAP_IS2_HK_L4_SPORT] = {120, 16}, 649 - [VCAP_IS2_HK_L4_DPORT] = {136, 16}, 648 + [VCAP_IS2_HK_L4_DPORT] = {120, 16}, 649 + [VCAP_IS2_HK_L4_SPORT] = {136, 16}, 650 650 [VCAP_IS2_HK_L4_RNG] = {152, 8}, 651 651 [VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {160, 1}, 652 652 [VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {161, 1}, 653 - [VCAP_IS2_HK_L4_URG] = {162, 1}, 654 - [VCAP_IS2_HK_L4_ACK] = {163, 1}, 655 - [VCAP_IS2_HK_L4_PSH] = {164, 1}, 656 - [VCAP_IS2_HK_L4_RST] = {165, 1}, 657 - [VCAP_IS2_HK_L4_SYN] = {166, 1}, 658 - [VCAP_IS2_HK_L4_FIN] = {167, 1}, 653 + [VCAP_IS2_HK_L4_FIN] = {162, 1}, 654 + [VCAP_IS2_HK_L4_SYN] = {163, 1}, 655 + [VCAP_IS2_HK_L4_RST] = {164, 1}, 656 + [VCAP_IS2_HK_L4_PSH] = {165, 1}, 657 + [VCAP_IS2_HK_L4_ACK] = {166, 1}, 658 + [VCAP_IS2_HK_L4_URG] = {167, 1}, 659 659 [VCAP_IS2_HK_L4_1588_DOM] = {168, 8}, 660 660 [VCAP_IS2_HK_L4_1588_VER] = {176, 4}, 661 661 /* IP4_OTHER (TYPE=101) */
+9 -9
drivers/net/dsa/ocelot/seville_vsc9953.c
··· 659 659 [VCAP_IS2_HK_DIP_EQ_SIP] = {122, 1}, 660 660 /* IP4_TCP_UDP (TYPE=100) */ 661 661 [VCAP_IS2_HK_TCP] = {123, 1}, 662 - [VCAP_IS2_HK_L4_SPORT] = {124, 16}, 663 - [VCAP_IS2_HK_L4_DPORT] = {140, 16}, 662 + [VCAP_IS2_HK_L4_DPORT] = {124, 16}, 663 + [VCAP_IS2_HK_L4_SPORT] = {140, 16}, 664 664 [VCAP_IS2_HK_L4_RNG] = {156, 8}, 665 665 [VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {164, 1}, 666 666 [VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {165, 1}, 667 - [VCAP_IS2_HK_L4_URG] = {166, 1}, 668 - [VCAP_IS2_HK_L4_ACK] = {167, 1}, 669 - [VCAP_IS2_HK_L4_PSH] = {168, 1}, 670 - [VCAP_IS2_HK_L4_RST] = {169, 1}, 671 - [VCAP_IS2_HK_L4_SYN] = {170, 1}, 672 - [VCAP_IS2_HK_L4_FIN] = {171, 1}, 667 + [VCAP_IS2_HK_L4_FIN] = {166, 1}, 668 + [VCAP_IS2_HK_L4_SYN] = {167, 1}, 669 + [VCAP_IS2_HK_L4_RST] = {168, 1}, 670 + [VCAP_IS2_HK_L4_PSH] = {169, 1}, 671 + [VCAP_IS2_HK_L4_ACK] = {170, 1}, 672 + [VCAP_IS2_HK_L4_URG] = {171, 1}, 673 673 /* IP4_OTHER (TYPE=101) */ 674 674 [VCAP_IS2_HK_IP4_L3_PROTO] = {123, 8}, 675 675 [VCAP_IS2_HK_L3_PAYLOAD] = {131, 56}, ··· 1008 1008 .vcap_is2_keys = vsc9953_vcap_is2_keys, 1009 1009 .vcap_is2_actions = vsc9953_vcap_is2_actions, 1010 1010 .vcap = vsc9953_vcap_props, 1011 - .shared_queue_sz = 128 * 1024, 1011 + .shared_queue_sz = 2048 * 1024, 1012 1012 .num_mact_rows = 2048, 1013 1013 .num_ports = 10, 1014 1014 .mdio_bus_alloc = vsc9953_mdio_bus_alloc,
+13 -7
drivers/net/dsa/rtl8366.c
··· 452 452 return ret; 453 453 454 454 if (vid == vlanmc.vid) { 455 - /* clear VLAN member configurations */ 456 - vlanmc.vid = 0; 457 - vlanmc.priority = 0; 458 - vlanmc.member = 0; 459 - vlanmc.untag = 0; 460 - vlanmc.fid = 0; 461 - 455 + /* Remove this port from the VLAN */ 456 + vlanmc.member &= ~BIT(port); 457 + vlanmc.untag &= ~BIT(port); 458 + /* 459 + * If no ports are members of this VLAN 460 + * anymore then clear the whole member 461 + * config so it can be reused. 462 + */ 463 + if (!vlanmc.member && vlanmc.untag) { 464 + vlanmc.vid = 0; 465 + vlanmc.priority = 0; 466 + vlanmc.fid = 0; 467 + } 462 468 ret = smi->ops->set_vlan_mc(smi, i, &vlanmc); 463 469 if (ret) { 464 470 dev_err(smi->dev,
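The rtl8366 hunk stops wiping a shared VLAN member config when a single port leaves: only the leaving port's member/untag bits are cleared, and the entry is recycled only once no members remain. The bitmask logic, sketched standalone (a simplified shape of the fix, not the driver code):

```c
#include <assert.h>

/* Per-port VLAN leave: clear just this port's member/untag bits and
 * reset the entry only when no members remain, so other ports in the
 * same VLAN keep their configuration. */
struct toy_vlan_mc { unsigned vid, member, untag; };

static void toy_vlan_del_port(struct toy_vlan_mc *mc, int port)
{
	mc->member &= ~(1u << port);
	mc->untag  &= ~(1u << port);
	if (!mc->member) {		/* last member gone */
		mc->vid = 0;		/* entry can be reused */
		mc->untag = 0;
	}
}
```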
+27 -16
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 3782 3782 return -EOPNOTSUPP; 3783 3783 3784 3784 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_QSTATS_EXT, -1, -1); 3785 + req.fid = cpu_to_le16(0xffff); 3785 3786 req.flags = FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK; 3786 3787 mutex_lock(&bp->hwrm_cmd_lock); 3787 3788 rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); ··· 3853 3852 tx_masks = stats->hw_masks; 3854 3853 tx_count = sizeof(struct tx_port_stats_ext) / 8; 3855 3854 3856 - flags = FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK; 3855 + flags = PORT_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK; 3857 3856 rc = bnxt_hwrm_port_qstats_ext(bp, flags); 3858 3857 if (rc) { 3859 3858 mask = (1ULL << 40) - 1; ··· 4306 4305 u32 bar_offset = BNXT_GRCPF_REG_CHIMP_COMM; 4307 4306 u16 dst = BNXT_HWRM_CHNL_CHIMP; 4308 4307 4309 - if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state)) 4308 + if (BNXT_NO_FW_ACCESS(bp)) 4310 4309 return -EBUSY; 4311 4310 4312 4311 if (msg_len > BNXT_HWRM_MAX_REQ_LEN) { ··· 5724 5723 struct hwrm_ring_free_output *resp = bp->hwrm_cmd_resp_addr; 5725 5724 u16 error_code; 5726 5725 5727 - if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state)) 5726 + if (BNXT_NO_FW_ACCESS(bp)) 5728 5727 return 0; 5729 5728 5730 5729 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_RING_FREE, cmpl_ring_id, -1); ··· 7818 7817 7819 7818 if (set_tpa) 7820 7819 tpa_flags = bp->flags & BNXT_FLAG_TPA; 7821 - else if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state)) 7820 + else if (BNXT_NO_FW_ACCESS(bp)) 7822 7821 return 0; 7823 7822 for (i = 0; i < bp->nr_vnics; i++) { 7824 7823 rc = bnxt_hwrm_vnic_set_tpa(bp, i, tpa_flags); ··· 9312 9311 struct hwrm_temp_monitor_query_output *resp; 9313 9312 struct bnxt *bp = dev_get_drvdata(dev); 9314 9313 u32 len = 0; 9314 + int rc; 9315 9315 9316 9316 resp = bp->hwrm_cmd_resp_addr; 9317 9317 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1); 9318 9318 mutex_lock(&bp->hwrm_cmd_lock); 9319 - if (!_hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT)) 9319 + rc = 
_hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); 9320 + if (!rc) 9320 9321 len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */ 9321 9322 mutex_unlock(&bp->hwrm_cmd_lock); 9322 - 9323 - if (len) 9324 - return len; 9325 - 9326 - return sprintf(buf, "unknown\n"); 9323 + return rc ?: len; 9327 9324 } 9328 9325 static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0); 9329 9326 ··· 9341 9342 9342 9343 static void bnxt_hwmon_open(struct bnxt *bp) 9343 9344 { 9345 + struct hwrm_temp_monitor_query_input req = {0}; 9344 9346 struct pci_dev *pdev = bp->pdev; 9347 + int rc; 9348 + 9349 + bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1); 9350 + rc = hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); 9351 + if (rc == -EACCES || rc == -EOPNOTSUPP) { 9352 + bnxt_hwmon_close(bp); 9353 + return; 9354 + } 9345 9355 9346 9356 if (bp->hwmon_dev) 9347 9357 return; ··· 11787 11779 if (BNXT_PF(bp)) 11788 11780 bnxt_sriov_disable(bp); 11789 11781 11782 + clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 11783 + bnxt_cancel_sp_work(bp); 11784 + bp->sp_event = 0; 11785 + 11790 11786 bnxt_dl_fw_reporters_destroy(bp, true); 11791 11787 if (BNXT_PF(bp)) 11792 11788 devlink_port_type_clear(&bp->dl_port); ··· 11798 11786 unregister_netdev(dev); 11799 11787 bnxt_dl_unregister(bp); 11800 11788 bnxt_shutdown_tc(bp); 11801 - clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 11802 - bnxt_cancel_sp_work(bp); 11803 - bp->sp_event = 0; 11804 11789 11805 11790 bnxt_clear_int_mode(bp); 11806 11791 bnxt_hwrm_func_drv_unrgtr(bp); ··· 12098 12089 static void bnxt_vpd_read_info(struct bnxt *bp) 12099 12090 { 12100 12091 struct pci_dev *pdev = bp->pdev; 12101 - int i, len, pos, ro_size; 12092 + int i, len, pos, ro_size, size; 12102 12093 ssize_t vpd_size; 12103 12094 u8 *vpd_data; 12104 12095 ··· 12133 12124 if (len + pos > vpd_size) 12134 12125 goto read_sn; 12135 12126 12136 - strlcpy(bp->board_partno, &vpd_data[pos], min(len, 
BNXT_VPD_FLD_LEN)); 12127 + size = min(len, BNXT_VPD_FLD_LEN - 1); 12128 + memcpy(bp->board_partno, &vpd_data[pos], size); 12137 12129 12138 12130 read_sn: 12139 12131 pos = pci_vpd_find_info_keyword(vpd_data, i, ro_size, ··· 12147 12137 if (len + pos > vpd_size) 12148 12138 goto exit; 12149 12139 12150 - strlcpy(bp->board_serialno, &vpd_data[pos], min(len, BNXT_VPD_FLD_LEN)); 12140 + size = min(len, BNXT_VPD_FLD_LEN - 1); 12141 + memcpy(bp->board_serialno, &vpd_data[pos], size); 12151 12142 exit: 12152 12143 kfree(vpd_data); 12153 12144 }
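Among the bnxt fixes above, the VPD change replaces strlcpy() for the part/serial fields: VPD fields are length-delimited rather than NUL-terminated, so strlcpy could scan past the field looking for a terminator. The bounded-copy shape, sketched standalone (`FLD_LEN` stands in for `BNXT_VPD_FLD_LEN`):

```c
#include <assert.h>
#include <string.h>

/* Copy a length-delimited (possibly unterminated) field into a fixed
 * buffer: bound by both the field length and the buffer size, and
 * zero the destination first so the result is NUL-terminated. */
#define FLD_LEN 32

static void copy_vpd_field(char dst[FLD_LEN], const char *src, int len)
{
	int size = len < FLD_LEN - 1 ? len : FLD_LEN - 1;

	memset(dst, 0, FLD_LEN);
	memcpy(dst, src, size);
}
```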
+4
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 1737 1737 #define BNXT_STATE_FW_FATAL_COND 6 1738 1738 #define BNXT_STATE_DRV_REGISTERED 7 1739 1739 1740 + #define BNXT_NO_FW_ACCESS(bp) \ 1741 + (test_bit(BNXT_STATE_FW_FATAL_COND, &(bp)->state) || \ 1742 + pci_channel_offline((bp)->pdev)) 1743 + 1740 1744 struct bnxt_irq *irq_tbl; 1741 1745 int total_irqs; 1742 1746 u8 mac_addr[ETH_ALEN];
+23 -11
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 1322 1322 struct bnxt *bp = netdev_priv(dev); 1323 1323 int reg_len; 1324 1324 1325 + if (!BNXT_PF(bp)) 1326 + return -EOPNOTSUPP; 1327 + 1325 1328 reg_len = BNXT_PXP_REG_LEN; 1326 1329 1327 1330 if (bp->fw_cap & BNXT_FW_CAP_PCIE_STATS_SUPPORTED) ··· 1791 1788 if (!BNXT_PHY_CFG_ABLE(bp)) 1792 1789 return -EOPNOTSUPP; 1793 1790 1791 + mutex_lock(&bp->link_lock); 1794 1792 if (epause->autoneg) { 1795 - if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) 1796 - return -EINVAL; 1793 + if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) { 1794 + rc = -EINVAL; 1795 + goto pause_exit; 1796 + } 1797 1797 1798 1798 link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL; 1799 1799 if (bp->hwrm_spec_code >= 0x10201) ··· 1817 1811 if (epause->tx_pause) 1818 1812 link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX; 1819 1813 1820 - if (netif_running(dev)) { 1821 - mutex_lock(&bp->link_lock); 1814 + if (netif_running(dev)) 1822 1815 rc = bnxt_hwrm_set_pause(bp); 1823 - mutex_unlock(&bp->link_lock); 1824 - } 1816 + 1817 + pause_exit: 1818 + mutex_unlock(&bp->link_lock); 1825 1819 return rc; 1826 1820 } 1827 1821 ··· 2558 2552 struct bnxt *bp = netdev_priv(dev); 2559 2553 struct ethtool_eee *eee = &bp->eee; 2560 2554 struct bnxt_link_info *link_info = &bp->link_info; 2561 - u32 advertising = 2562 - _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0); 2555 + u32 advertising; 2563 2556 int rc = 0; 2564 2557 2565 2558 if (!BNXT_PHY_CFG_ABLE(bp)) ··· 2567 2562 if (!(bp->flags & BNXT_FLAG_EEE_CAP)) 2568 2563 return -EOPNOTSUPP; 2569 2564 2565 + mutex_lock(&bp->link_lock); 2566 + advertising = _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0); 2570 2567 if (!edata->eee_enabled) 2571 2568 goto eee_ok; 2572 2569 2573 2570 if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) { 2574 2571 netdev_warn(dev, "EEE requires autoneg\n"); 2575 - return -EINVAL; 2572 + rc = -EINVAL; 2573 + goto eee_exit; 2576 2574 } 2577 2575 if (edata->tx_lpi_enabled) { 2578 2576 if (bp->lpi_tmr_hi && (edata->tx_lpi_timer > 
bp->lpi_tmr_hi || 2579 2577 edata->tx_lpi_timer < bp->lpi_tmr_lo)) { 2580 2578 netdev_warn(dev, "Valid LPI timer range is %d and %d microsecs\n", 2581 2579 bp->lpi_tmr_lo, bp->lpi_tmr_hi); 2582 - return -EINVAL; 2580 + rc = -EINVAL; 2581 + goto eee_exit; 2583 2582 } else if (!bp->lpi_tmr_hi) { 2584 2583 edata->tx_lpi_timer = eee->tx_lpi_timer; 2585 2584 } ··· 2593 2584 } else if (edata->advertised & ~advertising) { 2594 2585 netdev_warn(dev, "EEE advertised %x must be a subset of autoneg advertised speeds %x\n", 2595 2586 edata->advertised, advertising); 2596 - return -EINVAL; 2587 + rc = -EINVAL; 2588 + goto eee_exit; 2597 2589 } 2598 2590 2599 2591 eee->advertised = edata->advertised; ··· 2606 2596 if (netif_running(dev)) 2607 2597 rc = bnxt_hwrm_set_link_setting(bp, false, true); 2608 2598 2599 + eee_exit: 2600 + mutex_unlock(&bp->link_lock); 2609 2601 return rc; 2610 2602 } 2611 2603
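The bnxt pause and EEE rework above converts early `return -EINVAL` exits into `goto`-to-label exits so that `link_lock`, now taken once at function entry, is released on every path. A minimal sketch of that single-exit locking shape (toy lock counter, not the bnxt API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy lock-depth counter so the pattern is testable without pthreads;
 * take_lock()/drop_lock() and set_pauseparam() are illustrative names. */
static int lock_depth;
static void take_lock(void) { lock_depth++; }
static void drop_lock(void) { lock_depth--; }

/* The lock is taken once at entry; every failure path funnels through
 * the single unlock at the exit label instead of returning early. */
static int set_pauseparam(bool autoneg, bool autoneg_speed_enabled)
{
    int rc = 0;

    take_lock();
    if (autoneg && !autoneg_speed_enabled) {
        rc = -22;               /* -EINVAL */
        goto pause_exit;
    }
    /* ... apply pause settings to the hardware here ... */
pause_exit:
    drop_lock();
    return rc;
}
```

Whichever branch runs, the lock depth returns to zero, which is exactly what the patch restores for the new validation-under-lock paths.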
+1 -2
drivers/net/ethernet/cadence/macb_main.c
··· 647 647 ctrl |= GEM_BIT(GBE); 648 648 } 649 649 650 - /* We do not support MLO_PAUSE_RX yet */ 651 - if (tx_pause) 650 + if (rx_pause) 652 651 ctrl |= MACB_BIT(PAE); 653 652 654 653 macb_set_tx_clk(bp->tx_clk, speed, ndev);
+6 -3
drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
··· 1911 1911 static int configure_filter_tcb(struct adapter *adap, unsigned int tid, 1912 1912 struct filter_entry *f) 1913 1913 { 1914 - if (f->fs.hitcnts) 1914 + if (f->fs.hitcnts) { 1915 1915 set_tcb_field(adap, f, tid, TCB_TIMESTAMP_W, 1916 - TCB_TIMESTAMP_V(TCB_TIMESTAMP_M) | 1916 + TCB_TIMESTAMP_V(TCB_TIMESTAMP_M), 1917 + TCB_TIMESTAMP_V(0ULL), 1918 + 1); 1919 + set_tcb_field(adap, f, tid, TCB_RTT_TS_RECENT_AGE_W, 1917 1920 TCB_RTT_TS_RECENT_AGE_V(TCB_RTT_TS_RECENT_AGE_M), 1918 - TCB_TIMESTAMP_V(0ULL) | 1919 1921 TCB_RTT_TS_RECENT_AGE_V(0ULL), 1920 1922 1); 1923 + } 1921 1924 1922 1925 if (f->fs.newdmac) 1923 1926 set_tcb_tflag(adap, f, tid, TF_CCTRL_ECE_S, 1,
+1 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c
··· 229 229 { 230 230 struct mps_entries_ref *mps_entry, *tmp; 231 231 232 - if (!list_empty(&adap->mps_ref)) 232 + if (list_empty(&adap->mps_ref)) 233 233 return; 234 234 235 235 spin_lock(&adap->mps_ref_lock);
+1 -1
drivers/net/ethernet/dec/tulip/de2104x.c
··· 85 85 #define DSL CONFIG_DE2104X_DSL 86 86 #endif 87 87 88 - #define DE_RX_RING_SIZE 64 88 + #define DE_RX_RING_SIZE 128 89 89 #define DE_TX_RING_SIZE 64 90 90 #define DE_RING_BYTES \ 91 91 ((sizeof(struct de_desc) * DE_RX_RING_SIZE) + \
+2 -2
drivers/net/ethernet/freescale/dpaa2/dpmac-cmd.h
··· 66 66 }; 67 67 68 68 struct dpmac_rsp_get_counter { 69 - u64 pad; 70 - u64 counter; 69 + __le64 pad; 70 + __le64 counter; 71 71 }; 72 72 73 73 #endif /* _FSL_DPMAC_CMD_H */
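The dpmac fix retypes the response fields as `__le64` because firmware replies are little-endian on the wire regardless of host byte order. A portable sketch of decoding such a field explicitly (hypothetical struct layout, not the real dpmac command format):

```c
#include <assert.h>
#include <stdint.h>

/* Wire structs from device firmware are little-endian regardless of host
 * endianness, so the field is stored as raw bytes and decoded explicitly
 * rather than read as a native u64. */
struct rsp_get_counter {
    uint8_t pad[8];
    uint8_t counter_le[8];   /* __le64 on the wire */
};

static uint64_t le64_to_cpu_bytes(const uint8_t b[8])
{
    uint64_t v = 0;

    for (int i = 7; i >= 0; i--)
        v = (v << 8) | b[i];  /* byte 0 is least significant */
    return v;
}
```

In the kernel the same conversion is `le64_to_cpu()`, which compiles to a no-op on little-endian hosts but still documents the wire format to sparse and to readers.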
+1 -1
drivers/net/ethernet/freescale/enetc/enetc_pf.c
··· 1053 1053 1054 1054 err_reg_netdev: 1055 1055 enetc_teardown_serdes(priv); 1056 - enetc_mdio_remove(pf); 1057 1056 enetc_free_msix(priv); 1058 1057 err_alloc_msix: 1059 1058 enetc_free_si_resources(priv); ··· 1060 1061 si->ndev = NULL; 1061 1062 free_netdev(ndev); 1062 1063 err_alloc_netdev: 1064 + enetc_mdio_remove(pf); 1063 1065 enetc_of_put_phy(pf); 1064 1066 err_map_pf_space: 1065 1067 enetc_pci_remove(pdev);
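The enetc change moves `enetc_mdio_remove()` to a later error label so teardown again mirrors setup in reverse order. The general goto-ladder unwind pattern, with illustrative resource names:

```c
#include <assert.h>

/* Illustrative resource bits; not the enetc driver's functions. */
enum { RES_PCI = 1, RES_MDIO = 2, RES_NETDEV = 4 };
static int held;

static int acquire(int res, int fail_at)
{
    if (res == fail_at)
        return -1;
    held |= res;
    return 0;
}
static void release(int res) { held &= ~res; }

/* Unwind labels mirror the setup order in reverse: only what was
 * acquired before the failing step gets released. */
static int probe(int fail_at)
{
    if (acquire(RES_PCI, fail_at))
        goto err_pci;
    if (acquire(RES_MDIO, fail_at))
        goto err_mdio;
    if (acquire(RES_NETDEV, fail_at))
        goto err_netdev;
    return 0;

err_netdev:
    release(RES_MDIO);
err_mdio:
    release(RES_PCI);
err_pci:
    return -1;
}
```

A cleanup call parked under the wrong label, as in the pre-fix code, either leaks a resource on some failure paths or releases one that was never acquired.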
+2 -2
drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
··· 334 334 * bit6-11 for ppe0-5 335 335 * bit12-17 for roce0-5 336 336 * bit18-19 for com/dfx 337 - * @enable: false - request reset , true - drop reset 337 + * @dereset: false - request reset , true - drop reset 338 338 */ 339 339 static void 340 340 hns_dsaf_srst_chns(struct dsaf_device *dsaf_dev, u32 msk, bool dereset) ··· 357 357 * bit6-11 for ppe0-5 358 358 * bit12-17 for roce0-5 359 359 * bit18-19 for com/dfx 360 - * @enable: false - request reset , true - drop reset 360 + * @dereset: false - request reset , true - drop reset 361 361 */ 362 362 static void 363 363 hns_dsaf_srst_chns_acpi(struct dsaf_device *dsaf_dev, u32 msk, bool dereset)
+20 -20
drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
··· 463 463 464 464 /** 465 465 * nic_run_loopback_test - run loopback test 466 - * @nic_dev: net device 467 - * @loopback_type: loopback type 466 + * @ndev: net device 467 + * @loop_mode: loopback mode 468 468 */ 469 469 static int __lb_run_test(struct net_device *ndev, 470 470 enum hnae_loop loop_mode) ··· 572 572 573 573 /** 574 574 * hns_nic_self_test - self test 575 - * @dev: net device 575 + * @ndev: net device 576 576 * @eth_test: test cmd 577 577 * @data: test result 578 578 */ ··· 633 633 634 634 /** 635 635 * hns_nic_get_drvinfo - get net driver info 636 - * @dev: net device 636 + * @net_dev: net device 637 637 * @drvinfo: driver info 638 638 */ 639 639 static void hns_nic_get_drvinfo(struct net_device *net_dev, ··· 658 658 659 659 /** 660 660 * hns_get_ringparam - get ring parameter 661 - * @dev: net device 661 + * @net_dev: net device 662 662 * @param: ethtool parameter 663 663 */ 664 664 static void hns_get_ringparam(struct net_device *net_dev, ··· 683 683 684 684 /** 685 685 * hns_get_pauseparam - get pause parameter 686 - * @dev: net device 686 + * @net_dev: net device 687 687 * @param: pause parameter 688 688 */ 689 689 static void hns_get_pauseparam(struct net_device *net_dev, ··· 701 701 702 702 /** 703 703 * hns_set_pauseparam - set pause parameter 704 - * @dev: net device 704 + * @net_dev: net device 705 705 * @param: pause parameter 706 706 * 707 707 * Return 0 on success, negative on failure ··· 725 725 726 726 /** 727 727 * hns_get_coalesce - get coalesce info. 728 - * @dev: net device 728 + * @net_dev: net device 729 729 * @ec: coalesce info. 730 730 * 731 731 * Return 0 on success, negative on failure. ··· 769 769 770 770 /** 771 771 * hns_set_coalesce - set coalesce info. 772 - * @dev: net device 772 + * @net_dev: net device 773 773 * @ec: coalesce info. 774 774 * 775 775 * Return 0 on success, negative on failure. ··· 808 808 809 809 /** 810 810 * hns_get_channels - get channel info. 
811 - * @dev: net device 811 + * @net_dev: net device 812 812 * @ch: channel info. 813 813 */ 814 814 static void ··· 825 825 826 826 /** 827 827 * get_ethtool_stats - get detail statistics. 828 - * @dev: net device 828 + * @netdev: net device 829 829 * @stats: statistics info. 830 830 * @data: statistics data. 831 831 */ ··· 883 883 884 884 /** 885 885 * get_strings: Return a set of strings that describe the requested objects 886 - * @dev: net device 887 - * @stats: string set ID. 886 + * @netdev: net device 887 + * @stringset: string set ID. 888 888 * @data: objects data. 889 889 */ 890 890 static void hns_get_strings(struct net_device *netdev, u32 stringset, u8 *data) ··· 972 972 973 973 /** 974 974 * nic_get_sset_count - get string set count witch returned by nic_get_strings. 975 - * @dev: net device 975 + * @netdev: net device 976 976 * @stringset: string set index, 0: self test string; 1: statistics string. 977 977 * 978 978 * Return string set count. ··· 1006 1006 1007 1007 /** 1008 1008 * hns_phy_led_set - set phy LED status. 1009 - * @dev: net device 1009 + * @netdev: net device 1010 1010 * @value: LED state. 1011 1011 * 1012 1012 * Return 0 on success, negative on failure. ··· 1028 1028 1029 1029 /** 1030 1030 * nic_set_phys_id - set phy identify LED. 1031 - * @dev: net device 1031 + * @netdev: net device 1032 1032 * @state: LED state. 1033 1033 * 1034 1034 * Return 0 on success, negative on failure. ··· 1104 1104 1105 1105 /** 1106 1106 * hns_get_regs - get net device register 1107 - * @dev: net device 1107 + * @net_dev: net device 1108 1108 * @cmd: ethtool cmd 1109 - * @date: register data 1109 + * @data: register data 1110 1110 */ 1111 1111 static void hns_get_regs(struct net_device *net_dev, struct ethtool_regs *cmd, 1112 1112 void *data) ··· 1126 1126 1127 1127 /** 1128 1128 * nic_get_regs_len - get total register len. 1129 - * @dev: net device 1129 + * @net_dev: net device 1130 1130 * 1131 1131 * Return total register len. 
1132 1132 */ ··· 1151 1151 1152 1152 /** 1153 1153 * hns_nic_nway_reset - nway reset 1154 - * @dev: net device 1154 + * @netdev: net device 1155 1155 * 1156 1156 * Return 0 on success, negative on failure 1157 1157 */
+4
drivers/net/ethernet/huawei/hinic/hinic_ethtool.c
··· 1654 1654 } 1655 1655 1656 1656 netif_carrier_off(netdev); 1657 + netif_tx_disable(netdev); 1657 1658 1658 1659 err = do_lp_test(nic_dev, eth_test->flags, LP_DEFAULT_TIME, 1659 1660 &test_index); ··· 1663 1662 data[test_index] = 1; 1664 1663 } 1665 1664 1665 + netif_tx_wake_all_queues(netdev); 1666 + 1666 1667 err = hinic_port_link_state(nic_dev, &link_state); 1667 1668 if (!err && link_state == HINIC_LINK_STATE_UP) 1668 1669 netif_carrier_on(netdev); 1670 + 1669 1671 } 1670 1672 1671 1673 static int hinic_set_phys_id(struct net_device *netdev,
+15 -5
drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
··· 47 47 48 48 #define MGMT_MSG_TIMEOUT 5000 49 49 50 + #define SET_FUNC_PORT_MBOX_TIMEOUT 30000 51 + 50 52 #define SET_FUNC_PORT_MGMT_TIMEOUT 25000 53 + 54 + #define UPDATE_FW_MGMT_TIMEOUT 20000 51 55 52 56 #define mgmt_to_pfhwdev(pf_mgmt) \ 53 57 container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt) ··· 365 361 return -EINVAL; 366 362 } 367 363 368 - if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE) 369 - timeout = SET_FUNC_PORT_MGMT_TIMEOUT; 364 + if (HINIC_IS_VF(hwif)) { 365 + if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE) 366 + timeout = SET_FUNC_PORT_MBOX_TIMEOUT; 370 367 371 - if (HINIC_IS_VF(hwif)) 372 368 return hinic_mbox_to_pf(pf_to_mgmt->hwdev, mod, cmd, buf_in, 373 - in_size, buf_out, out_size, 0); 374 - else 369 + in_size, buf_out, out_size, timeout); 370 + } else { 371 + if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE) 372 + timeout = SET_FUNC_PORT_MGMT_TIMEOUT; 373 + else if (cmd == HINIC_PORT_CMD_UPDATE_FW) 374 + timeout = UPDATE_FW_MGMT_TIMEOUT; 375 + 375 376 return msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size, 376 377 buf_out, out_size, MGMT_DIRECT_SEND, 377 378 MSG_NOT_RESP, timeout); 379 + } 378 380 } 379 381 380 382 static void recv_mgmt_msg_work_handler(struct work_struct *work)
+24
drivers/net/ethernet/huawei/hinic/hinic_main.c
··· 174 174 return err; 175 175 } 176 176 177 + static void enable_txqs_napi(struct hinic_dev *nic_dev) 178 + { 179 + int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev); 180 + int i; 181 + 182 + for (i = 0; i < num_txqs; i++) 183 + napi_enable(&nic_dev->txqs[i].napi); 184 + } 185 + 186 + static void disable_txqs_napi(struct hinic_dev *nic_dev) 187 + { 188 + int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev); 189 + int i; 190 + 191 + for (i = 0; i < num_txqs; i++) 192 + napi_disable(&nic_dev->txqs[i].napi); 193 + } 194 + 177 195 /** 178 196 * free_txqs - Free the Logical Tx Queues of specific NIC device 179 197 * @nic_dev: the specific NIC device ··· 418 400 goto err_create_txqs; 419 401 } 420 402 403 + enable_txqs_napi(nic_dev); 404 + 421 405 err = create_rxqs(nic_dev); 422 406 if (err) { 423 407 netif_err(nic_dev, drv, netdev, ··· 504 484 } 505 485 506 486 err_create_rxqs: 487 + disable_txqs_napi(nic_dev); 507 488 free_txqs(nic_dev); 508 489 509 490 err_create_txqs: ··· 517 496 { 518 497 struct hinic_dev *nic_dev = netdev_priv(netdev); 519 498 unsigned int flags; 499 + 500 + /* Disable txq napi firstly to aviod rewaking txq in free_tx_poll */ 501 + disable_txqs_napi(nic_dev); 520 502 521 503 down(&nic_dev->mgmt_lock); 522 504
+14 -7
drivers/net/ethernet/huawei/hinic/hinic_rx.c
··· 543 543 if (err) { 544 544 netif_err(nic_dev, drv, rxq->netdev, 545 545 "Failed to set RX interrupt coalescing attribute\n"); 546 - rx_del_napi(rxq); 547 - return err; 546 + goto err_req_irq; 548 547 } 549 548 550 549 err = request_irq(rq->irq, rx_irq, 0, rxq->irq_name, rxq); 551 - if (err) { 552 - rx_del_napi(rxq); 553 - return err; 554 - } 550 + if (err) 551 + goto err_req_irq; 555 552 556 553 cpumask_set_cpu(qp->q_id % num_online_cpus(), &rq->affinity_mask); 557 - return irq_set_affinity_hint(rq->irq, &rq->affinity_mask); 554 + err = irq_set_affinity_hint(rq->irq, &rq->affinity_mask); 555 + if (err) 556 + goto err_irq_affinity; 557 + 558 + return 0; 559 + 560 + err_irq_affinity: 561 + free_irq(rq->irq, rxq); 562 + err_req_irq: 563 + rx_del_napi(rxq); 564 + return err; 558 565 } 559 566 560 567 static void rx_free_irq(struct hinic_rxq *rxq)
+6 -18
drivers/net/ethernet/huawei/hinic/hinic_tx.c
··· 717 717 netdev_txq = netdev_get_tx_queue(txq->netdev, qp->q_id); 718 718 719 719 __netif_tx_lock(netdev_txq, smp_processor_id()); 720 - 721 - netif_wake_subqueue(nic_dev->netdev, qp->q_id); 720 + if (!netif_testing(nic_dev->netdev)) 721 + netif_wake_subqueue(nic_dev->netdev, qp->q_id); 722 722 723 723 __netif_tx_unlock(netdev_txq); 724 724 ··· 743 743 } 744 744 745 745 return budget; 746 - } 747 - 748 - static void tx_napi_add(struct hinic_txq *txq, int weight) 749 - { 750 - netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, weight); 751 - napi_enable(&txq->napi); 752 - } 753 - 754 - static void tx_napi_del(struct hinic_txq *txq) 755 - { 756 - napi_disable(&txq->napi); 757 - netif_napi_del(&txq->napi); 758 746 } 759 747 760 748 static irqreturn_t tx_irq(int irq, void *data) ··· 778 790 779 791 qp = container_of(sq, struct hinic_qp, sq); 780 792 781 - tx_napi_add(txq, nic_dev->tx_weight); 793 + netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, nic_dev->tx_weight); 782 794 783 795 hinic_hwdev_msix_set(nic_dev->hwdev, sq->msix_entry, 784 796 TX_IRQ_NO_PENDING, TX_IRQ_NO_COALESC, ··· 795 807 if (err) { 796 808 netif_err(nic_dev, drv, txq->netdev, 797 809 "Failed to set TX interrupt coalescing attribute\n"); 798 - tx_napi_del(txq); 810 + netif_napi_del(&txq->napi); 799 811 return err; 800 812 } 801 813 802 814 err = request_irq(sq->irq, tx_irq, 0, txq->irq_name, txq); 803 815 if (err) { 804 816 dev_err(&pdev->dev, "Failed to request Tx irq\n"); 805 - tx_napi_del(txq); 817 + netif_napi_del(&txq->napi); 806 818 return err; 807 819 } 808 820 ··· 814 826 struct hinic_sq *sq = txq->sq; 815 827 816 828 free_irq(sq->irq, txq); 817 - tx_napi_del(txq); 829 + netif_napi_del(&txq->napi); 818 830 } 819 831 820 832 /**
+4 -2
drivers/net/ethernet/ibm/ibmvnic.c
··· 2032 2032 2033 2033 } else { 2034 2034 rc = reset_tx_pools(adapter); 2035 - if (rc) 2035 + if (rc) { 2036 2036 netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n", 2037 2037 rc); 2038 2038 goto out; 2039 + } 2039 2040 2040 2041 rc = reset_rx_pools(adapter); 2041 - if (rc) 2042 + if (rc) { 2042 2043 netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n", 2043 2044 rc); 2044 2045 goto out; 2046 + } 2045 2047 } 2046 2048 ibmvnic_disable_irqs(adapter); 2047 2049 }
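The ibmvnic fix adds braces around the two-statement error bodies; without them only the `netdev_dbg()` line was guarded and the `goto out` ran unconditionally. A sketch of the corrected shape (hypothetical names):

```c
#include <assert.h>

static int log_calls;
static void log_err(void) { log_calls++; }

/* A multi-statement error path (log + goto) must be braced: an unbraced
 * `if` guards only the first statement that follows it, no matter how
 * the later lines are indented. */
static int do_reset(int pool_rc)
{
    int rc = pool_rc;

    if (rc) {
        log_err();
        goto out;
    }
    /* ... continue the reset sequence on success ... */
out:
    return rc;
}
```

This is the classic dangling-statement bug that `-Wmisleading-indentation` exists to catch.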
+16 -6
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 1115 1115 static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi) 1116 1116 { 1117 1117 struct i40e_mac_filter *f; 1118 - int num_vlans = 0, bkt; 1118 + u16 num_vlans = 0, bkt; 1119 1119 1120 1120 hash_for_each(vsi->mac_filter_hash, bkt, f, hlist) { 1121 1121 if (f->vlan >= 0 && f->vlan <= I40E_MAX_VLANID) ··· 1134 1134 * 1135 1135 * Called to get number of VLANs and VLAN list present in mac_filter_hash. 1136 1136 **/ 1137 - static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, int *num_vlans, 1138 - s16 **vlan_list) 1137 + static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, u16 *num_vlans, 1138 + s16 **vlan_list) 1139 1139 { 1140 1140 struct i40e_mac_filter *f; 1141 1141 int i = 0; ··· 1169 1169 **/ 1170 1170 static i40e_status 1171 1171 i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable, 1172 - bool unicast_enable, s16 *vl, int num_vlans) 1172 + bool unicast_enable, s16 *vl, u16 num_vlans) 1173 1173 { 1174 + i40e_status aq_ret, aq_tmp = 0; 1174 1175 struct i40e_pf *pf = vf->pf; 1175 1176 struct i40e_hw *hw = &pf->hw; 1176 - i40e_status aq_ret; 1177 1177 int i; 1178 1178 1179 1179 /* No VLAN to set promisc on, set on VSI */ ··· 1222 1222 vf->vf_id, 1223 1223 i40e_stat_str(&pf->hw, aq_ret), 1224 1224 i40e_aq_str(&pf->hw, aq_err)); 1225 + 1226 + if (!aq_tmp) 1227 + aq_tmp = aq_ret; 1225 1228 } 1226 1229 1227 1230 aq_ret = i40e_aq_set_vsi_uc_promisc_on_vlan(hw, seid, ··· 1238 1235 vf->vf_id, 1239 1236 i40e_stat_str(&pf->hw, aq_ret), 1240 1237 i40e_aq_str(&pf->hw, aq_err)); 1238 + 1239 + if (!aq_tmp) 1240 + aq_tmp = aq_ret; 1241 1241 } 1242 1242 } 1243 + 1244 + if (aq_tmp) 1245 + aq_ret = aq_tmp; 1246 + 1243 1247 return aq_ret; 1244 1248 } 1245 1249 ··· 1268 1258 i40e_status aq_ret = I40E_SUCCESS; 1269 1259 struct i40e_pf *pf = vf->pf; 1270 1260 struct i40e_vsi *vsi; 1271 - int num_vlans; 1261 + u16 num_vlans; 1272 1262 s16 *vl; 1273 1263 1274 1264 vsi = i40e_find_vsi_from_id(pf, vsi_id);
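The i40e change keeps the first firmware error in `aq_tmp` so a later successful call cannot mask it. The first-error-wins aggregation in isolation (illustrative helper, not the i40e API):

```c
#include <assert.h>

/* Apply an operation per entry, continuing past failures but reporting
 * the earliest error code seen, mirroring the aq_tmp pattern above. */
static int apply_all(const int *results, int n)
{
    int first_err = 0;

    for (int i = 0; i < n; i++) {
        int rc = results[i];

        if (rc && !first_err)   /* keep only the earliest error */
            first_err = rc;
    }
    return first_err;
}
```

Without the guard, `rc` from the last iteration wins, and a trailing success silently reports the whole batch as OK.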
+8 -12
drivers/net/ethernet/intel/igc/igc.h
··· 299 299 #define IGC_RX_HDR_LEN IGC_RXBUFFER_256 300 300 301 301 /* Transmit and receive latency (for PTP timestamps) */ 302 - /* FIXME: These values were estimated using the ones that i225 has as 303 - * basis, they seem to provide good numbers with ptp4l/phc2sys, but we 304 - * need to confirm them. 305 - */ 306 - #define IGC_I225_TX_LATENCY_10 9542 307 - #define IGC_I225_TX_LATENCY_100 1024 308 - #define IGC_I225_TX_LATENCY_1000 178 309 - #define IGC_I225_TX_LATENCY_2500 64 310 - #define IGC_I225_RX_LATENCY_10 20662 311 - #define IGC_I225_RX_LATENCY_100 2213 312 - #define IGC_I225_RX_LATENCY_1000 448 313 - #define IGC_I225_RX_LATENCY_2500 160 302 + #define IGC_I225_TX_LATENCY_10 240 303 + #define IGC_I225_TX_LATENCY_100 58 304 + #define IGC_I225_TX_LATENCY_1000 80 305 + #define IGC_I225_TX_LATENCY_2500 1325 306 + #define IGC_I225_RX_LATENCY_10 6450 307 + #define IGC_I225_RX_LATENCY_100 185 308 + #define IGC_I225_RX_LATENCY_1000 300 309 + #define IGC_I225_RX_LATENCY_2500 1485 314 310 315 311 /* RX and TX descriptor control thresholds. 316 312 * PTHRESH - MAC will consider prefetch if it has fewer than this number of
+19
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 364 364 struct sk_buff *skb = adapter->ptp_tx_skb; 365 365 struct skb_shared_hwtstamps shhwtstamps; 366 366 struct igc_hw *hw = &adapter->hw; 367 + int adjust = 0; 367 368 u64 regval; 368 369 369 370 if (WARN_ON_ONCE(!skb)) ··· 373 372 regval = rd32(IGC_TXSTMPL); 374 373 regval |= (u64)rd32(IGC_TXSTMPH) << 32; 375 374 igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval); 375 + 376 + switch (adapter->link_speed) { 377 + case SPEED_10: 378 + adjust = IGC_I225_TX_LATENCY_10; 379 + break; 380 + case SPEED_100: 381 + adjust = IGC_I225_TX_LATENCY_100; 382 + break; 383 + case SPEED_1000: 384 + adjust = IGC_I225_TX_LATENCY_1000; 385 + break; 386 + case SPEED_2500: 387 + adjust = IGC_I225_TX_LATENCY_2500; 388 + break; 389 + } 390 + 391 + shhwtstamps.hwtstamp = 392 + ktime_add_ns(shhwtstamps.hwtstamp, adjust); 376 393 377 394 /* Clear the lock early before calling skb_tstamp_tx so that 378 395 * applications are not woken up before the lock bit is clear. We use
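The igc patch adds the per-speed PHY latency to the raw TX hardware timestamp; the TX nanosecond constants below are the ones the patch installs in igc.h, while the `default: return 0` fallback for an unknown speed is an assumption of this sketch:

```c
#include <assert.h>
#include <stdint.h>

/* TX latency in ns per link speed, matching the new igc.h values. */
static int64_t igc_tx_latency_ns(int speed_mbps)
{
    switch (speed_mbps) {
    case 10:   return 240;
    case 100:  return 58;
    case 1000: return 80;
    case 2500: return 1325;
    default:   return 0;   /* unknown speed: leave the timestamp alone */
    }
}

/* The MAC timestamps the packet before the PHY; adding the PHY latency
 * moves the reported TX time closer to when the frame hit the wire. */
static int64_t adjust_tx_tstamp(int64_t hw_ns, int speed_mbps)
{
    return hw_ns + igc_tx_latency_ns(speed_mbps);
}
```

In the driver the addition is done with `ktime_add_ns()` on the `skb_shared_hwtstamps` value, as the hunk above shows.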
+13 -8
drivers/net/ethernet/lantiq_xrx200.c
··· 230 230 } 231 231 232 232 if (rx < budget) { 233 - napi_complete(&ch->napi); 234 - ltq_dma_enable_irq(&ch->dma); 233 + if (napi_complete_done(&ch->napi, rx)) 234 + ltq_dma_enable_irq(&ch->dma); 235 235 } 236 236 237 237 return rx; ··· 268 268 net_dev->stats.tx_bytes += bytes; 269 269 netdev_completed_queue(ch->priv->net_dev, pkts, bytes); 270 270 271 + if (netif_queue_stopped(net_dev)) 272 + netif_wake_queue(net_dev); 273 + 271 274 if (pkts < budget) { 272 - napi_complete(&ch->napi); 273 - ltq_dma_enable_irq(&ch->dma); 275 + if (napi_complete_done(&ch->napi, pkts)) 276 + ltq_dma_enable_irq(&ch->dma); 274 277 } 275 278 276 279 return pkts; ··· 345 342 { 346 343 struct xrx200_chan *ch = ptr; 347 344 348 - ltq_dma_disable_irq(&ch->dma); 349 - ltq_dma_ack_irq(&ch->dma); 345 + if (napi_schedule_prep(&ch->napi)) { 346 + __napi_schedule(&ch->napi); 347 + ltq_dma_disable_irq(&ch->dma); 348 + } 350 349 351 - napi_schedule(&ch->napi); 350 + ltq_dma_ack_irq(&ch->dma); 352 351 353 352 return IRQ_HANDLED; 354 353 } ··· 504 499 505 500 /* setup NAPI */ 506 501 netif_napi_add(net_dev, &priv->chan_rx.napi, xrx200_poll_rx, 32); 507 - netif_napi_add(net_dev, &priv->chan_tx.napi, xrx200_tx_housekeeping, 32); 502 + netif_tx_napi_add(net_dev, &priv->chan_tx.napi, xrx200_tx_housekeeping, 32); 508 503 509 504 platform_set_drvdata(pdev, priv); 510 505
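The lantiq fix pairs `napi_schedule_prep()` with the IRQ disable and `napi_complete_done()` with the re-enable, so the interrupt is masked only by the caller that actually wins the schedule. A toy model of that claim/release handshake (not the real NAPI API, just its discipline):

```c
#include <assert.h>
#include <stdbool.h>

static bool scheduled;
static int irq_enabled = 1;

/* prep returns true only for the caller that flips the flag, so the
 * IRQ is disabled exactly once per scheduling cycle. */
static bool napi_prep(void)
{
    if (scheduled)
        return false;        /* this cycle already has an owner */
    scheduled = true;
    return true;
}

static void irq_handler(void)
{
    if (napi_prep())
        irq_enabled = 0;     /* disable only when we really scheduled */
}

static bool napi_complete(void)
{
    if (!scheduled)
        return false;
    scheduled = false;
    return true;
}

static void poll_done(void)
{
    if (napi_complete())
        irq_enabled = 1;     /* re-enable only on a real completion */
}
```

Unconditionally disabling in the handler and re-enabling in the poller, as the old code did, can re-enable the IRQ while a poll is still pending and lose the disable/enable pairing.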
+7 -3
drivers/net/ethernet/marvell/mvneta.c
··· 2029 2029 struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); 2030 2030 int i; 2031 2031 2032 - page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data), 2033 - sync_len, napi); 2034 2032 for (i = 0; i < sinfo->nr_frags; i++) 2035 2033 page_pool_put_full_page(rxq->page_pool, 2036 2034 skb_frag_page(&sinfo->frags[i]), napi); 2035 + page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data), 2036 + sync_len, napi); 2037 2037 } 2038 2038 2039 2039 static int ··· 2383 2383 mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, 2384 2384 &size, page, &ps); 2385 2385 } else { 2386 - if (unlikely(!xdp_buf.data_hard_start)) 2386 + if (unlikely(!xdp_buf.data_hard_start)) { 2387 + rx_desc->buf_phys_addr = 0; 2388 + page_pool_put_full_page(rxq->page_pool, page, 2389 + true); 2387 2390 continue; 2391 + } 2388 2392 2389 2393 mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf, 2390 2394 &size, page);
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 600 600 struct dim dim; /* Dynamic Interrupt Moderation */ 601 601 602 602 /* XDP */ 603 - struct bpf_prog *xdp_prog; 603 + struct bpf_prog __rcu *xdp_prog; 604 604 struct mlx5e_xdpsq *xdpsq; 605 605 DECLARE_BITMAP(flags, 8); 606 606 struct page_pool *page_pool; ··· 1005 1005 void mlx5e_update_carrier(struct mlx5e_priv *priv); 1006 1006 int mlx5e_close(struct net_device *netdev); 1007 1007 int mlx5e_open(struct net_device *netdev); 1008 - void mlx5e_update_ndo_stats(struct mlx5e_priv *priv); 1009 1008 1010 1009 void mlx5e_queue_update_stats(struct mlx5e_priv *priv); 1011 1010 int mlx5e_bits_invert(unsigned long a, int size);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/monitor_stats.c
··· 51 51 monitor_counters_work); 52 52 53 53 mutex_lock(&priv->state_lock); 54 - mlx5e_update_ndo_stats(priv); 54 + mlx5e_stats_update_ndo_stats(priv); 55 55 mutex_unlock(&priv->state_lock); 56 56 mlx5e_monitor_counter_arm(priv); 57 57 }
+2 -5
drivers/net/ethernet/mellanox/mlx5/core/en/port.c
··· 490 490 int err; 491 491 int i; 492 492 493 - if (!MLX5_CAP_GEN(dev, pcam_reg)) 494 - return -EOPNOTSUPP; 495 - 496 - if (!MLX5_CAP_PCAM_REG(dev, pplm)) 497 - return -EOPNOTSUPP; 493 + if (!MLX5_CAP_GEN(dev, pcam_reg) || !MLX5_CAP_PCAM_REG(dev, pplm)) 494 + return false; 498 495 499 496 MLX5_SET(pplm_reg, in, local_port, 1); 500 497 err = mlx5_core_access_reg(dev, in, sz, out, sz, MLX5_REG_PPLM, 0, 0);
+16 -5
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
··· 699 699 err_rule: 700 700 mlx5e_mod_hdr_detach(ct_priv->esw->dev, 701 701 &esw->offloads.mod_hdr, zone_rule->mh); 702 + mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id); 702 703 err_mod_hdr: 703 704 kfree(spec); 704 705 return err; ··· 959 958 return 0; 960 959 } 961 960 961 + void mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr) 962 + { 963 + struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv); 964 + 965 + if (!ct_priv || !ct_attr->ct_labels_id) 966 + return; 967 + 968 + mapping_remove(ct_priv->labels_mapping, ct_attr->ct_labels_id); 969 + } 970 + 962 971 int 963 - mlx5_tc_ct_parse_match(struct mlx5e_priv *priv, 964 - struct mlx5_flow_spec *spec, 965 - struct flow_cls_offload *f, 966 - struct mlx5_ct_attr *ct_attr, 967 - struct netlink_ext_ack *extack) 972 + mlx5_tc_ct_match_add(struct mlx5e_priv *priv, 973 + struct mlx5_flow_spec *spec, 974 + struct flow_cls_offload *f, 975 + struct mlx5_ct_attr *ct_attr, 976 + struct netlink_ext_ack *extack) 968 977 { 969 978 struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv); 970 979 struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+16 -10
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
··· 87 87 void 88 88 mlx5_tc_ct_clean(struct mlx5_rep_uplink_priv *uplink_priv); 89 89 90 + void 91 + mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr); 92 + 90 93 int 91 - mlx5_tc_ct_parse_match(struct mlx5e_priv *priv, 92 - struct mlx5_flow_spec *spec, 93 - struct flow_cls_offload *f, 94 - struct mlx5_ct_attr *ct_attr, 95 - struct netlink_ext_ack *extack); 94 + mlx5_tc_ct_match_add(struct mlx5e_priv *priv, 95 + struct mlx5_flow_spec *spec, 96 + struct flow_cls_offload *f, 97 + struct mlx5_ct_attr *ct_attr, 98 + struct netlink_ext_ack *extack); 96 99 int 97 100 mlx5_tc_ct_add_no_trk_match(struct mlx5e_priv *priv, 98 101 struct mlx5_flow_spec *spec); ··· 133 130 { 134 131 } 135 132 133 + static inline void 134 + mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr) {} 135 + 136 136 static inline int 137 - mlx5_tc_ct_parse_match(struct mlx5e_priv *priv, 138 - struct mlx5_flow_spec *spec, 139 - struct flow_cls_offload *f, 140 - struct mlx5_ct_attr *ct_attr, 141 - struct netlink_ext_ack *extack) 137 + mlx5_tc_ct_match_add(struct mlx5e_priv *priv, 138 + struct mlx5_flow_spec *spec, 139 + struct flow_cls_offload *f, 140 + struct mlx5_ct_attr *ct_attr, 141 + struct netlink_ext_ack *extack) 142 142 { 143 143 struct flow_rule *rule = flow_cls_offload_flow_rule(f); 144 144
+5
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
··· 20 20 }; 21 21 22 22 /* General */ 23 + static inline bool mlx5e_skb_is_multicast(struct sk_buff *skb) 24 + { 25 + return skb->pkt_type == PACKET_MULTICAST || skb->pkt_type == PACKET_BROADCAST; 26 + } 27 + 23 28 void mlx5e_trigger_irq(struct mlx5e_icosq *sq); 24 29 void mlx5e_completion_event(struct mlx5_core_cq *mcq, struct mlx5_eqe *eqe); 25 30 void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, enum mlx5_event event);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
··· 122 122 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di, 123 123 u32 *len, struct xdp_buff *xdp) 124 124 { 125 - struct bpf_prog *prog = READ_ONCE(rq->xdp_prog); 125 + struct bpf_prog *prog = rcu_dereference(rq->xdp_prog); 126 126 u32 act; 127 127 int err; 128 128
+2 -12
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
··· 31 31 { 32 32 struct xdp_buff *xdp = wi->umr.dma_info[page_idx].xsk; 33 33 u32 cqe_bcnt32 = cqe_bcnt; 34 - bool consumed; 35 34 36 35 /* Check packet size. Note LRO doesn't use linear SKB */ 37 36 if (unlikely(cqe_bcnt > rq->hw_mtu)) { ··· 50 51 xsk_buff_dma_sync_for_cpu(xdp); 51 52 prefetch(xdp->data); 52 53 53 - rcu_read_lock(); 54 - consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp); 55 - rcu_read_unlock(); 56 - 57 54 /* Possible flows: 58 55 * - XDP_REDIRECT to XSKMAP: 59 56 * The page is owned by the userspace from now. ··· 65 70 * allocated first from the Reuse Ring, so it has enough space. 66 71 */ 67 72 68 - if (likely(consumed)) { 73 + if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp))) { 69 74 if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))) 70 75 __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */ 71 76 return NULL; /* page/packet was consumed by XDP */ ··· 83 88 u32 cqe_bcnt) 84 89 { 85 90 struct xdp_buff *xdp = wi->di->xsk; 86 - bool consumed; 87 91 88 92 /* wi->offset is not used in this function, because xdp->data and the 89 93 * DMA address point directly to the necessary place. Furthermore, the ··· 101 107 return NULL; 102 108 } 103 109 104 - rcu_read_lock(); 105 - consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp); 106 - rcu_read_unlock(); 107 - 108 - if (likely(consumed)) 110 + if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp))) 109 111 return NULL; /* page/packet was consumed by XDP */ 110 112 111 113 /* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
··· 106 106 void mlx5e_close_xsk(struct mlx5e_channel *c) 107 107 { 108 108 clear_bit(MLX5E_CHANNEL_STATE_XSK, c->state); 109 - napi_synchronize(&c->napi); 110 - synchronize_rcu(); /* Sync with the XSK wakeup. */ 109 + synchronize_rcu(); /* Sync with the XSK wakeup and with NAPI. */ 111 110 112 111 mlx5e_close_rq(&c->xskrq); 113 112 mlx5e_close_cq(&c->xskrq.cq);
+22 -21
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
··· 234 234 235 235 /* Re-sync */ 236 236 /* Runs in work context */ 237 - static struct mlx5_wqe_ctrl_seg * 237 + static int 238 238 resync_post_get_progress_params(struct mlx5e_icosq *sq, 239 239 struct mlx5e_ktls_offload_context_rx *priv_rx) 240 240 { ··· 258 258 PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); 259 259 if (unlikely(dma_mapping_error(pdev, buf->dma_addr))) { 260 260 err = -ENOMEM; 261 - goto err_out; 261 + goto err_free; 262 262 } 263 263 264 264 buf->priv_rx = priv_rx; 265 265 266 266 BUILD_BUG_ON(MLX5E_KTLS_GET_PROGRESS_WQEBBS != 1); 267 + 268 + spin_lock(&sq->channel->async_icosq_lock); 269 + 267 270 if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) { 271 + spin_unlock(&sq->channel->async_icosq_lock); 268 272 err = -ENOSPC; 269 - goto err_out; 273 + goto err_dma_unmap; 270 274 } 271 275 272 276 pi = mlx5e_icosq_get_next_pi(sq, 1); ··· 298 294 }; 299 295 icosq_fill_wi(sq, pi, &wi); 300 296 sq->pc++; 297 + mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, cseg); 298 + spin_unlock(&sq->channel->async_icosq_lock); 301 299 302 - return cseg; 300 + return 0; 303 301 302 + err_dma_unmap: 303 + dma_unmap_single(pdev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); 304 + err_free: 305 + kfree(buf); 304 306 err_out: 305 307 priv_rx->stats->tls_resync_req_skip++; 306 - return ERR_PTR(err); 308 + return err; 307 309 } 308 310 309 311 /* Function is called with elevated refcount. 
··· 319 309 { 320 310 struct mlx5e_ktls_offload_context_rx *priv_rx; 321 311 struct mlx5e_ktls_rx_resync_ctx *resync; 322 - struct mlx5_wqe_ctrl_seg *cseg; 323 312 struct mlx5e_channel *c; 324 313 struct mlx5e_icosq *sq; 325 - struct mlx5_wq_cyc *wq; 326 314 327 315 resync = container_of(work, struct mlx5e_ktls_rx_resync_ctx, work); 328 316 priv_rx = container_of(resync, struct mlx5e_ktls_offload_context_rx, resync); ··· 332 324 333 325 c = resync->priv->channels.c[priv_rx->rxq]; 334 326 sq = &c->async_icosq; 335 - wq = &sq->wq; 336 327 337 - spin_lock(&c->async_icosq_lock); 338 - 339 - cseg = resync_post_get_progress_params(sq, priv_rx); 340 - if (IS_ERR(cseg)) { 328 + if (resync_post_get_progress_params(sq, priv_rx)) 341 329 refcount_dec(&resync->refcnt); 342 - goto unlock; 343 - } 344 - mlx5e_notify_hw(wq, sq->pc, sq->uar_map, cseg); 345 - unlock: 346 - spin_unlock(&c->async_icosq_lock); 347 330 } 348 331 349 332 static void resync_init(struct mlx5e_ktls_rx_resync_ctx *resync, ··· 385 386 struct mlx5e_ktls_offload_context_rx *priv_rx; 386 387 struct mlx5e_ktls_rx_resync_ctx *resync; 387 388 u8 tracker_state, auth_state, *ctx; 389 + struct device *dev; 388 390 u32 hw_seq; 389 391 390 392 priv_rx = buf->priv_rx; 391 393 resync = &priv_rx->resync; 392 - 394 + dev = resync->priv->mdev->device; 393 395 if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) 394 396 goto out; 395 397 396 - dma_sync_single_for_cpu(resync->priv->mdev->device, buf->dma_addr, 397 - PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); 398 + dma_sync_single_for_cpu(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, 399 + DMA_FROM_DEVICE); 398 400 399 401 ctx = buf->progress.ctx; 400 402 tracker_state = MLX5_GET(tls_progress_params, ctx, record_tracker_state); ··· 411 411 priv_rx->stats->tls_resync_req_end++; 412 412 out: 413 413 refcount_dec(&resync->refcnt); 414 + dma_unmap_single(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); 414 415 kfree(buf); 415 416 } 416 417 
··· 660 659 priv_rx = mlx5e_get_ktls_rx_priv_ctx(tls_ctx); 661 660 set_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags); 662 661 mlx5e_set_ktls_rx_priv_ctx(tls_ctx, NULL); 663 - napi_synchronize(&priv->channels.c[priv_rx->rxq]->napi); 662 + synchronize_rcu(); /* Sync with NAPI */ 664 663 if (!cancel_work_sync(&priv_rx->rule.work)) 665 664 /* completion is needed, as the priv_rx in the add flow 666 665 * is maintained on the wqe info (wi), not on the socket.
+8 -4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_stats.c
··· 35 35 #include <net/sock.h> 36 36 37 37 #include "en.h" 38 - #include "accel/tls.h" 39 38 #include "fpga/sdk.h" 40 39 #include "en_accel/tls.h" 41 40 ··· 50 51 51 52 #define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc) 52 53 54 + static bool is_tls_atomic_stats(struct mlx5e_priv *priv) 55 + { 56 + return priv->tls && !mlx5_accel_is_ktls_device(priv->mdev); 57 + } 58 + 53 59 int mlx5e_tls_get_count(struct mlx5e_priv *priv) 54 60 { 55 - if (!priv->tls) 61 + if (!is_tls_atomic_stats(priv)) 56 62 return 0; 57 63 58 64 return NUM_TLS_SW_COUNTERS; ··· 67 63 { 68 64 unsigned int i, idx = 0; 69 65 70 - if (!priv->tls) 66 + if (!is_tls_atomic_stats(priv)) 71 67 return 0; 72 68 73 69 for (i = 0; i < NUM_TLS_SW_COUNTERS; i++) ··· 81 77 { 82 78 int i, idx = 0; 83 79 84 - if (!priv->tls) 80 + if (!is_tls_atomic_stats(priv)) 85 81 return 0; 86 82 87 83 for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
+31 -54
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 158 158 mutex_unlock(&priv->state_lock);
 159 159 }
 160 160 
 161 - void mlx5e_update_ndo_stats(struct mlx5e_priv *priv)
 162 - {
 163 - int i;
 164 - 
 165 - for (i = mlx5e_nic_stats_grps_num(priv) - 1; i >= 0; i--)
 166 - if (mlx5e_nic_stats_grps[i]->update_stats_mask &
 167 - MLX5E_NDO_UPDATE_STATS)
 168 - mlx5e_nic_stats_grps[i]->update_stats(priv);
 169 - }
 170 - 
 171 161 static void mlx5e_update_stats_work(struct work_struct *work)
 172 162 {
 173 163 struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv,
··· 389 399 
 390 400 if (params->xdp_prog)
 391 401 bpf_prog_inc(params->xdp_prog);
 392 - rq->xdp_prog = params->xdp_prog;
 402 + RCU_INIT_POINTER(rq->xdp_prog, params->xdp_prog);
 393 403 
 394 404 rq_xdp_ix = rq->ix;
 395 405 if (xsk)
··· 398 408 if (err < 0)
 399 409 goto err_rq_wq_destroy;
 400 410 
 401 - rq->buff.map_dir = rq->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
 411 + rq->buff.map_dir = params->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
 402 412 rq->buff.headroom = mlx5e_get_rq_headroom(mdev, params, xsk);
 403 413 pool_size = 1 << params->log_rq_mtu_frames;
 404 414 
··· 554 564 }
 555 565 
 556 566 err_rq_wq_destroy:
 557 - if (rq->xdp_prog)
 558 - bpf_prog_put(rq->xdp_prog);
 567 + if (params->xdp_prog)
 568 + bpf_prog_put(params->xdp_prog);
 559 569 xdp_rxq_info_unreg(&rq->xdp_rxq);
 560 570 page_pool_destroy(rq->page_pool);
 561 571 mlx5_wq_destroy(&rq->wq_ctrl);
··· 565 575 
 566 576 static void mlx5e_free_rq(struct mlx5e_rq *rq)
 567 577 {
 578 + struct mlx5e_channel *c = rq->channel;
 579 + struct bpf_prog *old_prog = NULL;
 568 580 int i;
 569 581 
 570 - if (rq->xdp_prog)
 571 - bpf_prog_put(rq->xdp_prog);
 582 + /* drop_rq has neither channel nor xdp_prog. */
 583 + if (c)
 584 + old_prog = rcu_dereference_protected(rq->xdp_prog,
 585 + lockdep_is_held(&c->priv->state_lock));
 586 + if (old_prog)
 587 + bpf_prog_put(old_prog);
 572 588 
 573 589 switch (rq->wq_type) {
 574 590 case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
··· 863 867 void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
 864 868 {
 865 869 clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
 866 - napi_synchronize(&rq->channel->napi); /* prevent mlx5e_post_rx_wqes */
 870 + synchronize_rcu(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */
 867 871 }
 868 872 
 869 873 void mlx5e_close_rq(struct mlx5e_rq *rq)
··· 1308 1312 
 1309 1313 static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq)
 1310 1314 {
 1311 - struct mlx5e_channel *c = sq->channel;
 1312 1315 struct mlx5_wq_cyc *wq = &sq->wq;
 1313 1316 
 1314 1317 clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
 1315 - /* prevent netif_tx_wake_queue */
 1316 - napi_synchronize(&c->napi);
 1318 + synchronize_rcu(); /* Sync with NAPI to prevent netif_tx_wake_queue. */
 1317 1319 
 1318 1320 mlx5e_tx_disable_queue(sq->txq);
 1319 1321 
··· 1386 1392 
 1387 1393 void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq)
 1388 1394 {
 1389 - struct mlx5e_channel *c = icosq->channel;
 1390 - 
 1391 1395 clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state);
 1392 - napi_synchronize(&c->napi);
 1396 + synchronize_rcu(); /* Sync with NAPI. */
 1393 1397 }
 1394 1398 
 1395 1399 void mlx5e_close_icosq(struct mlx5e_icosq *sq)
··· 1466 1474 struct mlx5e_channel *c = sq->channel;
 1467 1475 
 1468 1476 clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
 1469 - napi_synchronize(&c->napi);
 1477 + synchronize_rcu(); /* Sync with NAPI. */
 1470 1478 
 1471 1479 mlx5e_destroy_sq(c->mdev, sq->sqn);
 1472 1480 mlx5e_free_xdpsq_descs(sq);
··· 3559 3567 
 3560 3568 s->rx_packets += rq_stats->packets + xskrq_stats->packets;
 3561 3569 s->rx_bytes += rq_stats->bytes + xskrq_stats->bytes;
 3570 + s->multicast += rq_stats->mcast_packets + xskrq_stats->mcast_packets;
 3562 3571 
 3563 3572 for (j = 0; j < priv->max_opened_tc; j++) {
 3564 3573 struct mlx5e_sq_stats *sq_stats = &channel_stats->sq[j];
··· 3575 3582 mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
 3576 3583 {
 3577 3584 struct mlx5e_priv *priv = netdev_priv(dev);
 3578 - struct mlx5e_vport_stats *vstats = &priv->stats.vport;
 3579 3585 struct mlx5e_pport_stats *pstats = &priv->stats.pport;
 3580 3586 
 3581 3587 /* In switchdev mode, monitor counters doesn't monitor
··· 3609 3617 stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors +
 3610 3618 stats->rx_frame_errors;
 3611 3619 stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors;
 3612 - 
 3613 - /* vport multicast also counts packets that are dropped due to steering
 3614 - * or rx out of buffer
 3615 - */
 3616 - stats->multicast =
 3617 - VPORT_COUNTER_GET(vstats, received_eth_multicast.packets);
 3618 3620 }
 3619 3621 
 3620 3622 static void mlx5e_set_rx_mode(struct net_device *dev)
··· 4316 4330 return 0;
 4317 4331 }
 4318 4332 
 4333 + static void mlx5e_rq_replace_xdp_prog(struct mlx5e_rq *rq, struct bpf_prog *prog)
 4334 + {
 4335 + struct bpf_prog *old_prog;
 4336 + 
 4337 + old_prog = rcu_replace_pointer(rq->xdp_prog, prog,
 4338 + lockdep_is_held(&rq->channel->priv->state_lock));
 4339 + if (old_prog)
 4340 + bpf_prog_put(old_prog);
 4341 + }
 4342 + 
 4319 4343 static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
 4320 4344 {
 4321 4345 struct mlx5e_priv *priv = netdev_priv(netdev);
··· 4384 4388 */
 4385 4389 for (i = 0; i < priv->channels.num; i++) {
 4386 4390 struct mlx5e_channel *c = priv->channels.c[i];
 4387 - bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
 4388 4391 
 4389 - clear_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
 4390 - if (xsk_open)
 4391 - clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
 4392 - napi_synchronize(&c->napi);
 4393 - /* prevent mlx5e_poll_rx_cq from accessing rq->xdp_prog */
 4394 - 
 4395 - old_prog = xchg(&c->rq.xdp_prog, prog);
 4396 - if (old_prog)
 4397 - bpf_prog_put(old_prog);
 4398 - 
 4399 - if (xsk_open) {
 4400 - old_prog = xchg(&c->xskrq.xdp_prog, prog);
 4401 - if (old_prog)
 4402 - bpf_prog_put(old_prog);
 4403 - }
 4404 - 
 4405 - set_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
 4406 - if (xsk_open)
 4407 - set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
 4408 - /* napi_schedule in case we have missed anything */
 4409 - napi_schedule(&c->napi);
 4392 + mlx5e_rq_replace_xdp_prog(&c->rq, prog);
 4393 + if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
 4394 + mlx5e_rq_replace_xdp_prog(&c->xskrq, prog);
 4410 4395 }
 4411 4396 
 4412 4397 unlock:
··· 5177 5200 .enable = mlx5e_nic_enable,
 5178 5201 .disable = mlx5e_nic_disable,
 5179 5202 .update_rx = mlx5e_update_nic_rx,
 5180 - .update_stats = mlx5e_update_ndo_stats,
 5203 + .update_stats = mlx5e_stats_update_ndo_stats,
 5181 5204 .update_carrier = mlx5e_update_carrier,
 5182 5205 .rx_handlers = &mlx5e_rx_handlers_nic,
 5183 5206 .max_tc = MLX5E_MAX_NUM_TC,
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 1171 1171 .cleanup_tx = mlx5e_cleanup_rep_tx, 1172 1172 .enable = mlx5e_rep_enable, 1173 1173 .update_rx = mlx5e_update_rep_rx, 1174 - .update_stats = mlx5e_update_ndo_stats, 1174 + .update_stats = mlx5e_stats_update_ndo_stats, 1175 1175 .rx_handlers = &mlx5e_rx_handlers_rep, 1176 1176 .max_tc = 1, 1177 1177 .rq_groups = MLX5E_NUM_RQ_GROUPS(REGULAR), ··· 1189 1189 .enable = mlx5e_uplink_rep_enable, 1190 1190 .disable = mlx5e_uplink_rep_disable, 1191 1191 .update_rx = mlx5e_update_rep_rx, 1192 - .update_stats = mlx5e_update_ndo_stats, 1192 + .update_stats = mlx5e_stats_update_ndo_stats, 1193 1193 .update_carrier = mlx5e_update_carrier, 1194 1194 .rx_handlers = &mlx5e_rx_handlers_rep, 1195 1195 .max_tc = MLX5E_MAX_NUM_TC,
+6 -10
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 53 53 #include "en/xsk/rx.h" 54 54 #include "en/health.h" 55 55 #include "en/params.h" 56 + #include "en/txrx.h" 56 57 57 58 static struct sk_buff * 58 59 mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, ··· 1081 1080 mlx5e_enable_ecn(rq, skb); 1082 1081 1083 1082 skb->protocol = eth_type_trans(skb, netdev); 1083 + 1084 + if (unlikely(mlx5e_skb_is_multicast(skb))) 1085 + stats->mcast_packets++; 1084 1086 } 1085 1087 1086 1088 static inline void mlx5e_complete_rx_cqe(struct mlx5e_rq *rq, ··· 1136 1132 struct xdp_buff xdp; 1137 1133 struct sk_buff *skb; 1138 1134 void *va, *data; 1139 - bool consumed; 1140 1135 u32 frag_size; 1141 1136 1142 1137 va = page_address(di->page) + wi->offset; ··· 1147 1144 prefetchw(va); /* xdp_frame data area */ 1148 1145 prefetch(data); 1149 1146 1150 - rcu_read_lock(); 1151 1147 mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp); 1152 - consumed = mlx5e_xdp_handle(rq, di, &cqe_bcnt, &xdp); 1153 - rcu_read_unlock(); 1154 - if (consumed) 1148 + if (mlx5e_xdp_handle(rq, di, &cqe_bcnt, &xdp)) 1155 1149 return NULL; /* page/packet was consumed by XDP */ 1156 1150 1157 1151 rx_headroom = xdp.data - xdp.data_hard_start; ··· 1438 1438 struct sk_buff *skb; 1439 1439 void *va, *data; 1440 1440 u32 frag_size; 1441 - bool consumed; 1442 1441 1443 1442 /* Check packet size. Note LRO doesn't use linear SKB */ 1444 1443 if (unlikely(cqe_bcnt > rq->hw_mtu)) { ··· 1454 1455 prefetchw(va); /* xdp_frame data area */ 1455 1456 prefetch(data); 1456 1457 1457 - rcu_read_lock(); 1458 1458 mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt32, &xdp); 1459 - consumed = mlx5e_xdp_handle(rq, di, &cqe_bcnt32, &xdp); 1460 - rcu_read_unlock(); 1461 - if (consumed) { 1459 + if (mlx5e_xdp_handle(rq, di, &cqe_bcnt32, &xdp)) { 1462 1460 if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) 1463 1461 __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */ 1464 1462 return NULL; /* page/packet was consumed by XDP */
+12
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
··· 54 54 return total; 55 55 } 56 56 57 + void mlx5e_stats_update_ndo_stats(struct mlx5e_priv *priv) 58 + { 59 + mlx5e_stats_grp_t *stats_grps = priv->profile->stats_grps; 60 + const unsigned int num_stats_grps = stats_grps_num(priv); 61 + int i; 62 + 63 + for (i = num_stats_grps - 1; i >= 0; i--) 64 + if (stats_grps[i]->update_stats && 65 + stats_grps[i]->update_stats_mask & MLX5E_NDO_UPDATE_STATS) 66 + stats_grps[i]->update_stats(priv); 67 + } 68 + 57 69 void mlx5e_stats_update(struct mlx5e_priv *priv) 58 70 { 59 71 mlx5e_stats_grp_t *stats_grps = priv->profile->stats_grps;
+3
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
··· 103 103 void mlx5e_stats_update(struct mlx5e_priv *priv); 104 104 void mlx5e_stats_fill(struct mlx5e_priv *priv, u64 *data, int idx); 105 105 void mlx5e_stats_fill_strings(struct mlx5e_priv *priv, u8 *data); 106 + void mlx5e_stats_update_ndo_stats(struct mlx5e_priv *priv); 106 107 107 108 /* Concrete NIC Stats */ 108 109 ··· 120 119 u64 tx_nop; 121 120 u64 rx_lro_packets; 122 121 u64 rx_lro_bytes; 122 + u64 rx_mcast_packets; 123 123 u64 rx_ecn_mark; 124 124 u64 rx_removed_vlan_packets; 125 125 u64 rx_csum_unnecessary; ··· 300 298 u64 csum_none; 301 299 u64 lro_packets; 302 300 u64 lro_bytes; 301 + u64 mcast_packets; 303 302 u64 ecn_mark; 304 303 u64 removed_vlan_packets; 305 304 u64 xdp_drop;
+26 -19
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1290 1290 
 1291 1291 mlx5e_put_flow_tunnel_id(flow);
 1292 1292 
 1293 - if (flow_flag_test(flow, NOT_READY)) {
 1293 + if (flow_flag_test(flow, NOT_READY))
 1294 1294 remove_unready_flow(flow);
 1295 - kvfree(attr->parse_attr);
 1296 - return;
 1297 - }
 1298 1295 
 1299 1296 if (mlx5e_is_offloaded_flow(flow)) {
 1300 1297 if (flow_flag_test(flow, SLOW))
··· 1311 1314 kfree(attr->parse_attr->tun_info[out_index]);
 1312 1315 }
 1313 1316 kvfree(attr->parse_attr);
 1317 + 
 1318 + mlx5_tc_ct_match_del(priv, &flow->esw_attr->ct_attr);
 1314 1319 
 1315 1320 if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
 1316 1321 mlx5e_detach_mod_hdr(priv, flow);
··· 2624 2625 OFFLOAD(UDP_DPORT, 16, U16_MAX, udp.dest, 0, udp_dport),
 2625 2626 };
 2626 2627 
 2628 + static unsigned long mask_to_le(unsigned long mask, int size)
 2629 + {
 2630 + __be32 mask_be32;
 2631 + __be16 mask_be16;
 2632 + 
 2633 + if (size == 32) {
 2634 + mask_be32 = (__force __be32)(mask);
 2635 + mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
 2636 + } else if (size == 16) {
 2637 + mask_be32 = (__force __be32)(mask);
 2638 + mask_be16 = *(__be16 *)&mask_be32;
 2639 + mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
 2640 + }
 2641 + 
 2642 + return mask;
 2643 + }
 2627 2644 static int offload_pedit_fields(struct mlx5e_priv *priv,
 2628 2645 int namespace,
 2629 2646 struct pedit_headers_action *hdrs,
··· 2653 2638 u32 *s_masks_p, *a_masks_p, s_mask, a_mask;
 2654 2639 struct mlx5e_tc_mod_hdr_acts *mod_acts;
 2655 2640 struct mlx5_fields *f;
 2656 - unsigned long mask;
 2657 - __be32 mask_be32;
 2658 - __be16 mask_be16;
 2641 + unsigned long mask, field_mask;
 2659 2642 int err;
 2660 2643 u8 cmd;
··· 2719 2706 if (skip)
 2720 2707 continue;
 2721 2708 
 2722 - if (f->field_bsize == 32) {
 2723 - mask_be32 = (__force __be32)(mask);
 2724 - mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
 2725 - } else if (f->field_bsize == 16) {
 2726 - mask_be32 = (__force __be32)(mask);
 2727 - mask_be16 = *(__be16 *)&mask_be32;
 2728 - mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
 2729 - }
 2709 + mask = mask_to_le(mask, f->field_bsize);
 2730 2710 
 2731 2711 first = find_first_bit(&mask, f->field_bsize);
 2732 2712 next_z = find_next_zero_bit(&mask, f->field_bsize, first);
··· 2750 2744 if (cmd == MLX5_ACTION_TYPE_SET) {
 2751 2745 int start;
 2752 2746 
 2747 + field_mask = mask_to_le(f->field_mask, f->field_bsize);
 2748 + 
 2753 2749 /* if field is bit sized it can start not from first bit */
 2754 - start = find_first_bit((unsigned long *)&f->field_mask,
 2755 - f->field_bsize);
 2750 + start = find_first_bit(&field_mask, f->field_bsize);
 2756 2751 
 2757 2752 MLX5_SET(set_action_in, action, offset, first - start);
 2758 2753 /* length is num of bits to be written, zero means length of 32 */
··· 4409 4402 goto err_free;
 4410 4403 
 4411 4404 /* actions validation depends on parsing the ct matches first */
 4412 - err = mlx5_tc_ct_parse_match(priv, &parse_attr->spec, f,
 4413 - &flow->esw_attr->ct_attr, extack);
 4405 + err = mlx5_tc_ct_match_add(priv, &parse_attr->spec, f,
 4406 + &flow->esw_attr->ct_attr, extack);
 4414 4407 if (err)
 4415 4408 goto err_free;
 4416 4409 
+13 -4
drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
··· 121 121 struct mlx5e_xdpsq *xsksq = &c->xsksq; 122 122 struct mlx5e_rq *xskrq = &c->xskrq; 123 123 struct mlx5e_rq *rq = &c->rq; 124 - bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state); 125 124 bool aff_change = false; 126 125 bool busy_xsk = false; 127 126 bool busy = false; 128 127 int work_done = 0; 128 + bool xsk_open; 129 129 int i; 130 + 131 + rcu_read_lock(); 132 + 133 + xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state); 130 134 131 135 ch_stats->poll++; 132 136 ··· 171 167 busy |= busy_xsk; 172 168 173 169 if (busy) { 174 - if (likely(mlx5e_channel_no_affinity_change(c))) 175 - return budget; 170 + if (likely(mlx5e_channel_no_affinity_change(c))) { 171 + work_done = budget; 172 + goto out; 173 + } 176 174 ch_stats->aff_change++; 177 175 aff_change = true; 178 176 if (budget && work_done == budget) ··· 182 176 } 183 177 184 178 if (unlikely(!napi_complete_done(napi, work_done))) 185 - return work_done; 179 + goto out; 186 180 187 181 ch_stats->arm++; 188 182 ··· 208 202 mlx5e_trigger_irq(&c->icosq); 209 203 ch_stats->force_irq++; 210 204 } 205 + 206 + out: 207 + rcu_read_unlock(); 211 208 212 209 return work_done; 213 210 }
+30 -26
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 1219 1219 }
 1220 1220 esw->fdb_table.offloads.send_to_vport_grp = g;
 1221 1221 
 1222 - /* create peer esw miss group */
 1223 - memset(flow_group_in, 0, inlen);
 1222 + if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {
 1223 + /* create peer esw miss group */
 1224 + memset(flow_group_in, 0, inlen);
 1224 1225 
 1225 - esw_set_flow_group_source_port(esw, flow_group_in);
 1226 + esw_set_flow_group_source_port(esw, flow_group_in);
 1226 1227 
 1227 - if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
 1228 - match_criteria = MLX5_ADDR_OF(create_flow_group_in,
 1229 - flow_group_in,
 1230 - match_criteria);
 1228 + if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
 1229 + match_criteria = MLX5_ADDR_OF(create_flow_group_in,
 1230 + flow_group_in,
 1231 + match_criteria);
 1231 1232 
 1232 - MLX5_SET_TO_ONES(fte_match_param, match_criteria,
 1233 - misc_parameters.source_eswitch_owner_vhca_id);
 1233 + MLX5_SET_TO_ONES(fte_match_param, match_criteria,
 1234 + misc_parameters.source_eswitch_owner_vhca_id);
 1234 1235 
 1235 - MLX5_SET(create_flow_group_in, flow_group_in,
 1236 - source_eswitch_owner_vhca_id_valid, 1);
 1236 + MLX5_SET(create_flow_group_in, flow_group_in,
 1237 + source_eswitch_owner_vhca_id_valid, 1);
 1238 + }
 1239 + 
 1240 + MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
 1241 + MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
 1242 + ix + esw->total_vports - 1);
 1243 + ix += esw->total_vports;
 1244 + 
 1245 + g = mlx5_create_flow_group(fdb, flow_group_in);
 1246 + if (IS_ERR(g)) {
 1247 + err = PTR_ERR(g);
 1248 + esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err);
 1249 + goto peer_miss_err;
 1250 + }
 1251 + esw->fdb_table.offloads.peer_miss_grp = g;
 1237 1252 }
 1238 - 
 1239 - MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
 1240 - MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
 1241 - ix + esw->total_vports - 1);
 1242 - ix += esw->total_vports;
 1243 - 
 1244 - g = mlx5_create_flow_group(fdb, flow_group_in);
 1245 - if (IS_ERR(g)) {
 1246 - err = PTR_ERR(g);
 1247 - esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err);
 1248 - goto peer_miss_err;
 1249 - }
 1250 - esw->fdb_table.offloads.peer_miss_grp = g;
 1251 1253 
 1252 1254 /* create miss group */
 1253 1255 memset(flow_group_in, 0, inlen);
··· 1283 1281 miss_rule_err:
 1284 1282 mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
 1285 1283 miss_err:
 1286 - mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
 1284 + if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
 1285 + mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
 1287 1286 peer_miss_err:
 1288 1287 mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
 1289 1288 send_vport_err:
··· 1308 1305 mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_multi);
 1309 1306 mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_uni);
 1310 1307 mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
 1311 - mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
 1308 + if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
 1309 + mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
 1312 1310 mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
 1313 1311 
 1314 1312 mlx5_esw_chains_destroy(esw);
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 654 654 fte->action = *flow_act; 655 655 fte->flow_context = spec->flow_context; 656 656 657 - tree_init_node(&fte->node, NULL, del_sw_fte); 657 + tree_init_node(&fte->node, del_hw_fte, del_sw_fte); 658 658 659 659 return fte; 660 660 } ··· 1792 1792 up_write_ref_node(&g->node, false); 1793 1793 rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte); 1794 1794 up_write_ref_node(&fte->node, false); 1795 - tree_put_node(&fte->node, false); 1796 1795 return rule; 1797 1796 } 1798 1797 rule = ERR_PTR(-ENOENT); ··· 1890 1891 up_write_ref_node(&g->node, false); 1891 1892 rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte); 1892 1893 up_write_ref_node(&fte->node, false); 1893 - tree_put_node(&fte->node, false); 1894 1894 tree_put_node(&g->node, false); 1895 1895 return rule; 1896 1896 ··· 1999 2001 up_write_ref_node(&fte->node, false); 2000 2002 } else { 2001 2003 del_hw_fte(&fte->node); 2002 - up_write(&fte->node.lock); 2004 + /* Avoid double call to del_hw_fte */ 2005 + fte->node.del_hw_func = NULL; 2006 + up_write_ref_node(&fte->node, false); 2003 2007 tree_put_node(&fte->node, false); 2004 2008 } 2005 2009 kfree(handle);
+15 -9
drivers/net/ethernet/mscc/ocelot.c
··· 421 421 422 422 if (ocelot->ptp && shinfo->tx_flags & SKBTX_HW_TSTAMP && 423 423 ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) { 424 + spin_lock(&ocelot_port->ts_id_lock); 425 + 424 426 shinfo->tx_flags |= SKBTX_IN_PROGRESS; 425 427 /* Store timestamp ID in cb[0] of sk_buff */ 426 - skb->cb[0] = ocelot_port->ts_id % 4; 428 + skb->cb[0] = ocelot_port->ts_id; 429 + ocelot_port->ts_id = (ocelot_port->ts_id + 1) % 4; 427 430 skb_queue_tail(&ocelot_port->tx_skbs, skb); 431 + 432 + spin_unlock(&ocelot_port->ts_id_lock); 428 433 return 0; 429 434 } 430 435 return -ENODATA; ··· 1305 1300 struct ocelot_port *ocelot_port = ocelot->ports[port]; 1306 1301 1307 1302 skb_queue_head_init(&ocelot_port->tx_skbs); 1303 + spin_lock_init(&ocelot_port->ts_id_lock); 1308 1304 1309 1305 /* Basic L2 initialization */ 1310 1306 ··· 1550 1544 1551 1545 void ocelot_deinit(struct ocelot *ocelot) 1552 1546 { 1553 - struct ocelot_port *port; 1554 - int i; 1555 - 1556 1547 cancel_delayed_work(&ocelot->stats_work); 1557 1548 destroy_workqueue(ocelot->stats_queue); 1558 1549 mutex_destroy(&ocelot->stats_lock); 1559 - 1560 - for (i = 0; i < ocelot->num_phys_ports; i++) { 1561 - port = ocelot->ports[i]; 1562 - skb_queue_purge(&port->tx_skbs); 1563 - } 1564 1550 } 1565 1551 EXPORT_SYMBOL(ocelot_deinit); 1552 + 1553 + void ocelot_deinit_port(struct ocelot *ocelot, int port) 1554 + { 1555 + struct ocelot_port *ocelot_port = ocelot->ports[port]; 1556 + 1557 + skb_queue_purge(&ocelot_port->tx_skbs); 1558 + } 1559 + EXPORT_SYMBOL(ocelot_deinit_port); 1566 1560 1567 1561 MODULE_LICENSE("Dual MIT/GPL");
+6 -6
drivers/net/ethernet/mscc/ocelot_net.c
··· 330 330 u8 grp = 0; /* Send everything on CPU group 0 */ 331 331 unsigned int i, count, last; 332 332 int port = priv->chip_port; 333 + bool do_tstamp; 333 334 334 335 val = ocelot_read(ocelot, QS_INJ_STATUS); 335 336 if (!(val & QS_INJ_STATUS_FIFO_RDY(BIT(grp))) || ··· 345 344 info.vid = skb_vlan_tag_get(skb); 346 345 347 346 /* Check if timestamping is needed */ 347 + do_tstamp = (ocelot_port_add_txtstamp_skb(ocelot_port, skb) == 0); 348 + 348 349 if (ocelot->ptp && shinfo->tx_flags & SKBTX_HW_TSTAMP) { 349 350 info.rew_op = ocelot_port->ptp_cmd; 350 351 if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) 351 - info.rew_op |= (ocelot_port->ts_id % 4) << 3; 352 + info.rew_op |= skb->cb[0] << 3; 352 353 } 353 354 354 355 ocelot_gen_ifh(ifh, &info); ··· 383 380 dev->stats.tx_packets++; 384 381 dev->stats.tx_bytes += skb->len; 385 382 386 - if (!ocelot_port_add_txtstamp_skb(ocelot_port, skb)) { 387 - ocelot_port->ts_id++; 388 - return NETDEV_TX_OK; 389 - } 383 + if (!do_tstamp) 384 + dev_kfree_skb_any(skb); 390 385 391 - dev_kfree_skb_any(skb); 392 386 return NETDEV_TX_OK; 393 387 } 394 388
+145 -104
drivers/net/ethernet/mscc/ocelot_vsc7514.c
··· 806 806 [VCAP_IS2_HK_DIP_EQ_SIP] = {123, 1},
 807 807 /* IP4_TCP_UDP (TYPE=100) */
 808 808 [VCAP_IS2_HK_TCP] = {124, 1},
 809 - [VCAP_IS2_HK_L4_SPORT] = {125, 16},
 810 - [VCAP_IS2_HK_L4_DPORT] = {141, 16},
 809 + [VCAP_IS2_HK_L4_DPORT] = {125, 16},
 810 + [VCAP_IS2_HK_L4_SPORT] = {141, 16},
 811 811 [VCAP_IS2_HK_L4_RNG] = {157, 8},
 812 812 [VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {165, 1},
 813 813 [VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {166, 1},
 814 - [VCAP_IS2_HK_L4_URG] = {167, 1},
 815 - [VCAP_IS2_HK_L4_ACK] = {168, 1},
 816 - [VCAP_IS2_HK_L4_PSH] = {169, 1},
 817 - [VCAP_IS2_HK_L4_RST] = {170, 1},
 818 - [VCAP_IS2_HK_L4_SYN] = {171, 1},
 819 - [VCAP_IS2_HK_L4_FIN] = {172, 1},
 814 + [VCAP_IS2_HK_L4_FIN] = {167, 1},
 815 + [VCAP_IS2_HK_L4_SYN] = {168, 1},
 816 + [VCAP_IS2_HK_L4_RST] = {169, 1},
 817 + [VCAP_IS2_HK_L4_PSH] = {170, 1},
 818 + [VCAP_IS2_HK_L4_ACK] = {171, 1},
 819 + [VCAP_IS2_HK_L4_URG] = {172, 1},
 820 820 [VCAP_IS2_HK_L4_1588_DOM] = {173, 8},
 821 821 [VCAP_IS2_HK_L4_1588_VER] = {181, 4},
 822 822 /* IP4_OTHER (TYPE=101) */
··· 896 896 .enable = ocelot_ptp_enable,
 897 897 };
 898 898 
 899 + static void mscc_ocelot_release_ports(struct ocelot *ocelot)
 900 + {
 901 + int port;
 902 + 
 903 + for (port = 0; port < ocelot->num_phys_ports; port++) {
 904 + struct ocelot_port_private *priv;
 905 + struct ocelot_port *ocelot_port;
 906 + 
 907 + ocelot_port = ocelot->ports[port];
 908 + if (!ocelot_port)
 909 + continue;
 910 + 
 911 + ocelot_deinit_port(ocelot, port);
 912 + 
 913 + priv = container_of(ocelot_port, struct ocelot_port_private,
 914 + port);
 915 + 
 916 + unregister_netdev(priv->dev);
 917 + free_netdev(priv->dev);
 918 + }
 919 + }
 920 + 
 921 + static int mscc_ocelot_init_ports(struct platform_device *pdev,
 922 + struct device_node *ports)
 923 + {
 924 + struct ocelot *ocelot = platform_get_drvdata(pdev);
 925 + struct device_node *portnp;
 926 + int err;
 927 + 
 928 + ocelot->ports = devm_kcalloc(ocelot->dev, ocelot->num_phys_ports,
 929 + sizeof(struct ocelot_port *), GFP_KERNEL);
 930 + if (!ocelot->ports)
 931 + return -ENOMEM;
 932 + 
 933 + /* No NPI port */
 934 + ocelot_configure_cpu(ocelot, -1, OCELOT_TAG_PREFIX_NONE,
 935 + OCELOT_TAG_PREFIX_NONE);
 936 + 
 937 + for_each_available_child_of_node(ports, portnp) {
 938 + struct ocelot_port_private *priv;
 939 + struct ocelot_port *ocelot_port;
 940 + struct device_node *phy_node;
 941 + phy_interface_t phy_mode;
 942 + struct phy_device *phy;
 943 + struct regmap *target;
 944 + struct resource *res;
 945 + struct phy *serdes;
 946 + char res_name[8];
 947 + u32 port;
 948 + 
 949 + if (of_property_read_u32(portnp, "reg", &port))
 950 + continue;
 951 + 
 952 + snprintf(res_name, sizeof(res_name), "port%d", port);
 953 + 
 954 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
 955 + res_name);
 956 + target = ocelot_regmap_init(ocelot, res);
 957 + if (IS_ERR(target))
 958 + continue;
 959 + 
 960 + phy_node = of_parse_phandle(portnp, "phy-handle", 0);
 961 + if (!phy_node)
 962 + continue;
 963 + 
 964 + phy = of_phy_find_device(phy_node);
 965 + of_node_put(phy_node);
 966 + if (!phy)
 967 + continue;
 968 + 
 969 + err = ocelot_probe_port(ocelot, port, target, phy);
 970 + if (err) {
 971 + of_node_put(portnp);
 972 + return err;
 973 + }
 974 + 
 975 + ocelot_port = ocelot->ports[port];
 976 + priv = container_of(ocelot_port, struct ocelot_port_private,
 977 + port);
 978 + 
 979 + of_get_phy_mode(portnp, &phy_mode);
 980 + 
 981 + ocelot_port->phy_mode = phy_mode;
 982 + 
 983 + switch (ocelot_port->phy_mode) {
 984 + case PHY_INTERFACE_MODE_NA:
 985 + continue;
 986 + case PHY_INTERFACE_MODE_SGMII:
 987 + break;
 988 + case PHY_INTERFACE_MODE_QSGMII:
 989 + /* Ensure clock signals and speed is set on all
 990 + * QSGMII links
 991 + */
 992 + ocelot_port_writel(ocelot_port,
 993 + DEV_CLOCK_CFG_LINK_SPEED
 994 + (OCELOT_SPEED_1000),
 995 + DEV_CLOCK_CFG);
 996 + break;
 997 + default:
 998 + dev_err(ocelot->dev,
 999 + "invalid phy mode for port%d, (Q)SGMII only\n",
 1000 + port);
 1001 + of_node_put(portnp);
 1002 + return -EINVAL;
 1003 + }
 1004 + 
 1005 + serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
 1006 + if (IS_ERR(serdes)) {
 1007 + err = PTR_ERR(serdes);
 1008 + if (err == -EPROBE_DEFER)
 1009 + dev_dbg(ocelot->dev, "deferring probe\n");
 1010 + else
 1011 + dev_err(ocelot->dev,
 1012 + "missing SerDes phys for port%d\n",
 1013 + port);
 1014 + 
 1015 + of_node_put(portnp);
 1016 + return err;
 1017 + }
 1018 + 
 1019 + priv->serdes = serdes;
 1020 + }
 1021 + 
 1022 + return 0;
 1023 + }
 1024 + 
 899 1025 static int mscc_ocelot_probe(struct platform_device *pdev)
 900 1026 {
 901 1027 struct device_node *np = pdev->dev.of_node;
 902 - struct device_node *ports, *portnp;
 903 1028 int err, irq_xtr, irq_ptp_rdy;
 1029 + struct device_node *ports;
 904 1030 struct ocelot *ocelot;
 905 1031 struct regmap *hsio;
 906 1032 unsigned int i;
··· 1111 985 
 1112 986 ports = of_get_child_by_name(np, "ethernet-ports");
 1113 987 if (!ports) {
 1114 - dev_err(&pdev->dev, "no ethernet-ports child node found\n");
 988 + dev_err(ocelot->dev, "no ethernet-ports child node found\n");
 1115 989 return -ENODEV;
 1116 990 }
 1117 991 
 1118 992 ocelot->num_phys_ports = of_get_child_count(ports);
 1119 993 
 1120 - ocelot->ports = devm_kcalloc(&pdev->dev, ocelot->num_phys_ports,
 1121 - sizeof(struct ocelot_port *), GFP_KERNEL);
 1122 - 
 1123 994 ocelot->vcap_is2_keys = vsc7514_vcap_is2_keys;
 1124 995 ocelot->vcap_is2_actions = vsc7514_vcap_is2_actions;
 1125 996 ocelot->vcap = vsc7514_vcap_props;
 1126 997 
 1127 - ocelot_init(ocelot);
 998 + err = ocelot_init(ocelot);
 999 + if (err)
 1000 + goto out_put_ports;
 1001 + 
 1002 + err = mscc_ocelot_init_ports(pdev, ports);
 1003 + if (err)
 1004 + goto out_put_ports;
 1005 + 
 1128 1006 if (ocelot->ptp) {
 1129 1007 err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info);
 1130 1008 if (err) {
··· 1136 1006 "Timestamp initialization failed\n");
 1137 1007 ocelot->ptp = 0;
 1138 1008 }
 1139 - }
 1140 - 
 1141 - /* No NPI port */
 1142 - ocelot_configure_cpu(ocelot, -1, OCELOT_TAG_PREFIX_NONE,
 1143 - OCELOT_TAG_PREFIX_NONE);
 1144 - 
 1145 - for_each_available_child_of_node(ports, portnp) {
 1146 - struct ocelot_port_private *priv;
 1147 - struct ocelot_port *ocelot_port;
 1148 - struct device_node *phy_node;
 1149 - phy_interface_t phy_mode;
 1150 - struct phy_device *phy;
 1151 - struct regmap *target;
 1152 - struct resource *res;
 1153 - struct phy *serdes;
 1154 - char res_name[8];
 1155 - u32 port;
 1156 - 
 1157 - if (of_property_read_u32(portnp, "reg", &port))
 1158 - continue;
 1159 - 
 1160 - snprintf(res_name, sizeof(res_name), "port%d", port);
 1161 - 
 1162 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
 1163 - res_name);
 1164 - target = ocelot_regmap_init(ocelot, res);
 1165 - if (IS_ERR(target))
 1166 - continue;
 1167 - 
 1168 - phy_node = of_parse_phandle(portnp, "phy-handle", 0);
 1169 - if (!phy_node)
 1170 - continue;
 1171 - 
 1172 - phy = of_phy_find_device(phy_node);
 1173 - of_node_put(phy_node);
 1174 - if (!phy)
 1175 - continue;
 1176 - 
 1177 - err = ocelot_probe_port(ocelot, port, target, phy);
 1178 - if (err) {
 1179 - of_node_put(portnp);
 1180 - goto out_put_ports;
 1181 - }
 1182 - 
 1183 - ocelot_port = ocelot->ports[port];
 1184 - priv = container_of(ocelot_port, struct ocelot_port_private,
 1185 - port);
 1186 - 
 1187 - of_get_phy_mode(portnp, &phy_mode);
 1188 - 
 1189 - ocelot_port->phy_mode = phy_mode;
 1190 - 
 1191 - switch (ocelot_port->phy_mode) {
 1192 - case PHY_INTERFACE_MODE_NA:
 1193 - continue;
 1194 - case PHY_INTERFACE_MODE_SGMII:
 1195 - break;
 1196 - case PHY_INTERFACE_MODE_QSGMII:
 1197 - /* Ensure clock signals and speed is set on all
 1198 - * QSGMII links
 1199 - */
 1200 - ocelot_port_writel(ocelot_port,
 1201 - DEV_CLOCK_CFG_LINK_SPEED
 1202 - (OCELOT_SPEED_1000),
 1203 - DEV_CLOCK_CFG);
 1204 - break;
 1205 - default:
 1206 - dev_err(ocelot->dev,
 1207 - "invalid phy mode for port%d, (Q)SGMII only\n",
 1208 - port);
 1209 - of_node_put(portnp);
 1210 - err = -EINVAL;
 1211 - goto out_put_ports;
 1212 - }
 1213 - 
 1214 - serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
 1215 - if (IS_ERR(serdes)) {
 1216 - err = PTR_ERR(serdes);
 1217 - if (err == -EPROBE_DEFER)
 1218 - dev_dbg(ocelot->dev, "deferring probe\n");
 1219 - else
 1220 - dev_err(ocelot->dev,
 1221 - "missing SerDes phys for port%d\n",
 1222 - port);
 1223 - 
 1224 - of_node_put(portnp);
 1225 - goto out_put_ports;
 1226 - }
 1227 - 
 1228 - priv->serdes = serdes;
 1229 1009 }
 1230 1010 
 1231 1011 register_netdevice_notifier(&ocelot_netdevice_nb);
··· 1154 1114 struct ocelot *ocelot = platform_get_drvdata(pdev);
 1155 1115 
 1156 1116 ocelot_deinit_timestamp(ocelot);
 1117 + mscc_ocelot_release_ports(ocelot);
 1158 1118 ocelot_deinit(ocelot);
 1159 1119 unregister_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
 1160 1120 unregister_switchdev_notifier(&ocelot_switchdev_nb);
+2 -2
drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
··· 829 829 struct nfp_eth_table_port *eth_port;
830 830 struct nfp_port *port;
831 831
832 - param->active_fec = ETHTOOL_FEC_NONE_BIT;
833 - param->fec = ETHTOOL_FEC_NONE_BIT;
832 + param->active_fec = ETHTOOL_FEC_NONE;
833 + param->fec = ETHTOOL_FEC_NONE;
834 834
835 835 port = nfp_port_from_netdev(netdev);
836 836 eth_port = nfp_port_get_eth_port(port);
+10 -1
drivers/net/ethernet/qlogic/qed/qed_dev.c
··· 4253 4253 cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
4254 4254 BIT(QED_MF_LLH_PROTO_CLSS) |
4255 4255 BIT(QED_MF_LL2_NON_UNICAST) |
4256 - BIT(QED_MF_INTER_PF_SWITCH);
4256 + BIT(QED_MF_INTER_PF_SWITCH) |
4257 + BIT(QED_MF_DISABLE_ARFS);
4257 4258 break;
4258 4259 case NVM_CFG1_GLOB_MF_MODE_DEFAULT:
4259 4260 cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
··· 4267 4266
4268 4267 DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
4269 4268 cdev->mf_bits);
4269 +
4270 + /* In CMT the PF is unknown when the GFS block processes the
4271 + * packet. Therefore cannot use searcher as it has a per PF
4272 + * database, and thus ARFS must be disabled.
4273 + *
4274 + */
4275 + if (QED_IS_CMT(cdev))
4276 + cdev->mf_bits |= BIT(QED_MF_DISABLE_ARFS);
4270 4277 }
4271 4278
4272 4279 DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
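The qed hunks above gate ARFS behind a capability bit in `mf_bits` rather than re-deriving the hardware mode at each call site. A minimal sketch of that feature-bit pattern, with illustrative names standing in for the driver's real enums and structures:

```c
#include <assert.h>

/* Hypothetical stand-ins for the driver's mf_bits feature mask:
 * each capability is one bit, set once at init time. */
enum { MF_LLH_MAC_CLSS, MF_INTER_PF_SWITCH, MF_DISABLE_ARFS };

#define BIT(n) (1UL << (n))

static unsigned long compute_mf_bits(int is_cmt)
{
	unsigned long bits = BIT(MF_LLH_MAC_CLSS) | BIT(MF_INTER_PF_SWITCH);

	/* In CMT mode the owning PF is unknown when the filter block
	 * runs, so flow steering (ARFS) is masked off up front. */
	if (is_cmt)
		bits |= BIT(MF_DISABLE_ARFS);
	return bits;
}

static int arfs_capable(unsigned long mf_bits)
{
	/* Consumers only test the bit, as qed_main.c does above. */
	return !(mf_bits & BIT(MF_DISABLE_ARFS));
}
```

Centralizing the decision in one bit is what lets qed_l2.c, qed_main.c, and qede all stay consistent without repeating the CMT check.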
+3
drivers/net/ethernet/qlogic/qed/qed_l2.c
··· 1980 1980 struct qed_ptt *p_ptt,
1981 1981 struct qed_arfs_config_params *p_cfg_params)
1982 1982 {
1983 + if (test_bit(QED_MF_DISABLE_ARFS, &p_hwfn->cdev->mf_bits))
1984 + return;
1985 +
1983 1986 if (p_cfg_params->mode != QED_FILTER_CONFIG_MODE_DISABLE) {
1984 1987 qed_gft_config(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
1985 1988 p_cfg_params->tcp,
+2
drivers/net/ethernet/qlogic/qed/qed_main.c
··· 444 444 dev_info->fw_eng = FW_ENGINEERING_VERSION;
445 445 dev_info->b_inter_pf_switch = test_bit(QED_MF_INTER_PF_SWITCH,
446 446 &cdev->mf_bits);
447 + if (!test_bit(QED_MF_DISABLE_ARFS, &cdev->mf_bits))
448 + dev_info->b_arfs_capable = true;
447 449 dev_info->tx_switching = true;
448 450
449 451 if (hw_info->b_wol_support == QED_WOL_SUPPORT_PME)
+1
drivers/net/ethernet/qlogic/qed/qed_sriov.c
··· 71 71 p_ramrod->personality = PERSONALITY_ETH;
72 72 break;
73 73 case QED_PCI_ETH_ROCE:
74 + case QED_PCI_ETH_IWARP:
74 75 p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
75 76 break;
76 77 default:
+3
drivers/net/ethernet/qlogic/qede/qede_filter.c
··· 311 311 {
312 312 int i;
313 313
314 + if (!edev->dev_info.common.b_arfs_capable)
315 + return -EINVAL;
316 +
314 317 edev->arfs = vzalloc(sizeof(*edev->arfs));
315 318 if (!edev->arfs)
316 319 return -ENOMEM;
+5 -6
drivers/net/ethernet/qlogic/qede/qede_main.c
··· 804 804 NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
805 805 NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_HW_TC;
806 806
807 - if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1)
807 + if (edev->dev_info.common.b_arfs_capable)
808 808 hw_features |= NETIF_F_NTUPLE;
809 809
810 810 if (edev->dev_info.common.vxlan_enable ||
··· 2274 2274 qede_vlan_mark_nonconfigured(edev);
2275 2275 edev->ops->fastpath_stop(edev->cdev);
2276 2276
2277 - if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {
2277 + if (edev->dev_info.common.b_arfs_capable) {
2278 2278 qede_poll_for_freeing_arfs_filters(edev);
2279 2279 qede_free_arfs(edev);
2280 2280 }
··· 2341 2341 if (rc)
2342 2342 goto err2;
2343 2343
2344 - if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {
2345 - rc = qede_alloc_arfs(edev);
2346 - if (rc)
2347 - DP_NOTICE(edev, "aRFS memory allocation failed\n");
2344 + if (qede_alloc_arfs(edev)) {
2345 + edev->ndev->features &= ~NETIF_F_NTUPLE;
2346 + edev->dev_info.common.b_arfs_capable = false;
2348 2347 }
2349 2348
2350 2349 qede_napi_add_enable(edev);
+1
drivers/net/ethernet/sfc/ef100.c
··· 490 490 if (fcw.offset > pci_resource_len(efx->pci_dev, fcw.bar) - ESE_GZ_FCW_LEN) {
491 491 netif_err(efx, probe, efx->net_dev,
492 492 "Func control window overruns BAR\n");
493 + rc = -EIO;
493 494 goto fail;
494 495 }
495 496
+53
drivers/net/ethernet/ti/cpsw_new.c
··· 17 17 #include <linux/phy.h>
18 18 #include <linux/phy/phy.h>
19 19 #include <linux/delay.h>
20 + #include <linux/pinctrl/consumer.h>
20 21 #include <linux/pm_runtime.h>
21 22 #include <linux/gpio/consumer.h>
22 23 #include <linux/of.h>
··· 2071 2070 return 0;
2072 2071 }
2073 2072
2073 + static int __maybe_unused cpsw_suspend(struct device *dev)
2074 + {
2075 + struct cpsw_common *cpsw = dev_get_drvdata(dev);
2076 + int i;
2077 +
2078 + rtnl_lock();
2079 +
2080 + for (i = 0; i < cpsw->data.slaves; i++) {
2081 + struct net_device *ndev = cpsw->slaves[i].ndev;
2082 +
2083 + if (!(ndev && netif_running(ndev)))
2084 + continue;
2085 +
2086 + cpsw_ndo_stop(ndev);
2087 + }
2088 +
2089 + rtnl_unlock();
2090 +
2091 + /* Select sleep pin state */
2092 + pinctrl_pm_select_sleep_state(dev);
2093 +
2094 + return 0;
2095 + }
2096 +
2097 + static int __maybe_unused cpsw_resume(struct device *dev)
2098 + {
2099 + struct cpsw_common *cpsw = dev_get_drvdata(dev);
2100 + int i;
2101 +
2102 + /* Select default pin state */
2103 + pinctrl_pm_select_default_state(dev);
2104 +
2105 + /* shut up ASSERT_RTNL() warning in netif_set_real_num_tx/rx_queues */
2106 + rtnl_lock();
2107 +
2108 + for (i = 0; i < cpsw->data.slaves; i++) {
2109 + struct net_device *ndev = cpsw->slaves[i].ndev;
2110 +
2111 + if (!(ndev && netif_running(ndev)))
2112 + continue;
2113 +
2114 + cpsw_ndo_open(ndev);
2115 + }
2116 +
2117 + rtnl_unlock();
2118 +
2119 + return 0;
2120 + }
2121 +
2122 + static SIMPLE_DEV_PM_OPS(cpsw_pm_ops, cpsw_suspend, cpsw_resume);
2123 +
2074 2124 static struct platform_driver cpsw_driver = {
2075 2125 .driver = {
2076 2126 .name = "cpsw-switch",
2127 + .pm = &cpsw_pm_ops,
2077 2128 .of_match_table = cpsw_of_mtable,
2078 2129 },
2079 2130 .probe = cpsw_probe,
+29 -12
drivers/net/geneve.c
··· 777 777 struct net_device *dev,
778 778 struct geneve_sock *gs4,
779 779 struct flowi4 *fl4,
780 - const struct ip_tunnel_info *info)
780 + const struct ip_tunnel_info *info,
781 + __be16 dport, __be16 sport)
781 782 {
782 783 bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
783 784 struct geneve_dev *geneve = netdev_priv(dev);
··· 794 793 fl4->flowi4_proto = IPPROTO_UDP;
795 794 fl4->daddr = info->key.u.ipv4.dst;
796 795 fl4->saddr = info->key.u.ipv4.src;
796 + fl4->fl4_dport = dport;
797 + fl4->fl4_sport = sport;
797 798
798 799 tos = info->key.tos;
799 800 if ((tos == 1) && !geneve->cfg.collect_md) {
··· 830 827 struct net_device *dev,
831 828 struct geneve_sock *gs6,
832 829 struct flowi6 *fl6,
833 - const struct ip_tunnel_info *info)
830 + const struct ip_tunnel_info *info,
831 + __be16 dport, __be16 sport)
834 832 {
835 833 bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
836 834 struct geneve_dev *geneve = netdev_priv(dev);
··· 847 843 fl6->flowi6_proto = IPPROTO_UDP;
848 844 fl6->daddr = info->key.u.ipv6.dst;
849 845 fl6->saddr = info->key.u.ipv6.src;
846 + fl6->fl6_dport = dport;
847 + fl6->fl6_sport = sport;
848 +
850 849 prio = info->key.tos;
851 850 if ((prio == 1) && !geneve->cfg.collect_md) {
852 851 prio = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
··· 896 889 __be16 sport;
897 890 int err;
898 891
899 - rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
892 + sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
893 + rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
894 + geneve->cfg.info.key.tp_dst, sport);
900 895 if (IS_ERR(rt))
901 896 return PTR_ERR(rt);
902 897
··· 928 919 return -EMSGSIZE;
929 920 }
930 921
931 - sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
932 922 if (geneve->cfg.collect_md) {
933 923 tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
934 924 ttl = key->ttl;
··· 982 974 __be16 sport;
983 975 int err;
984 976
985 - dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
977 + sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
978 + dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
979 + geneve->cfg.info.key.tp_dst, sport);
986 980 if (IS_ERR(dst))
987 981 return PTR_ERR(dst);
988 982
··· 1013 1003 return -EMSGSIZE;
1014 1004 }
1015 1005
1016 - sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
1017 1006 if (geneve->cfg.collect_md) {
1018 1007 prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
1019 1008 ttl = key->ttl;
··· 1094 1085 {
1095 1086 struct ip_tunnel_info *info = skb_tunnel_info(skb);
1096 1087 struct geneve_dev *geneve = netdev_priv(dev);
1088 + __be16 sport;
1097 1089
1098 1090 if (ip_tunnel_info_af(info) == AF_INET) {
1099 1091 struct rtable *rt;
1100 1092 struct flowi4 fl4;
1101 - struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
1102 1093
1103 - rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
1094 + struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
1095 + sport = udp_flow_src_port(geneve->net, skb,
1096 + 1, USHRT_MAX, true);
1097 +
1098 + rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
1099 + geneve->cfg.info.key.tp_dst, sport);
1104 1100 if (IS_ERR(rt))
1105 1101 return PTR_ERR(rt);
1106 1102
··· 1115 1101 } else if (ip_tunnel_info_af(info) == AF_INET6) {
1116 1102 struct dst_entry *dst;
1117 1103 struct flowi6 fl6;
1118 - struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
1119 1104
1120 - dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
1105 + struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
1106 + sport = udp_flow_src_port(geneve->net, skb,
1107 + 1, USHRT_MAX, true);
1108 +
1109 + dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
1110 + geneve->cfg.info.key.tp_dst, sport);
1121 1111 if (IS_ERR(dst))
1122 1112 return PTR_ERR(dst);
1123 1113
··· 1132 1114 return -EINVAL;
1133 1115 }
1134 1116
1135 - info->key.tp_src = udp_flow_src_port(geneve->net, skb,
1136 - 1, USHRT_MAX, true);
1117 + info->key.tp_src = sport;
1137 1118 info->key.tp_dst = geneve->cfg.info.key.tp_dst;
1138 1119 return 0;
1139 1120 }
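The geneve hunk computes the UDP source port once, before the route lookup, so the same value goes into both the flow key and the encapsulation header. A minimal sketch of how a `udp_flow_src_port()`-style helper maps a flow hash into a port range (this toy mapping is illustrative, not the kernel's exact algorithm):

```c
#include <assert.h>
#include <stdint.h>

/* Map a flow hash into [min, max]: the same flow hash always yields
 * the same source port, so route lookup and the emitted packet agree. */
static uint16_t flow_src_port(uint32_t hash, uint16_t min, uint16_t max)
{
	return (uint16_t)(min + (hash % ((uint32_t)max - min + 1)));
}
```

Computing the port before `geneve_get_v4_rt()`/`geneve_get_v6_dst()` is the point of the patch: routing decisions (e.g. ECMP) can then take the transport ports into account.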
+7
drivers/net/hyperv/hyperv_net.h
··· 847 847
848 848 #define NETVSC_XDP_HDRM 256
849 849
850 + #define NETVSC_XFER_HEADER_SIZE(rng_cnt) \
851 + (offsetof(struct vmtransfer_page_packet_header, ranges) + \
852 + (rng_cnt) * sizeof(struct vmtransfer_page_range))
853 +
850 854 struct multi_send_data {
851 855 struct sk_buff *skb; /* skb containing the pkt */
852 856 struct hv_netvsc_packet *pkt; /* netvsc pkt pending */
··· 977 973 u32 vf_alloc;
978 974 /* Serial number of the VF to team with */
979 975 u32 vf_serial;
976 +
977 + /* Is the current data path through the VF NIC? */
978 + bool data_path_is_vf;
980 979
981 980 /* Used to temporarily save the config info across hibernation */
982 981 struct netvsc_device_info *saved_netvsc_dev_info;
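`NETVSC_XFER_HEADER_SIZE()` above sizes a header that ends in a flexible array member: `offsetof(..., ranges)` plus `count` times the element size. A self-contained sketch of the same idiom, using toy struct layouts rather than the real vmbus definitions:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the transfer-page header: the true size
 * of the header is the offset of the flexible array plus one element
 * per advertised range. */
struct xfer_range {
	uint32_t byte_count;
	uint32_t byte_offset;
};

struct xfer_hdr {
	uint16_t xfer_pageset_id;
	uint8_t  sender_owns_set;
	uint32_t range_cnt;
	struct xfer_range ranges[];	/* flexible array member */
};

#define XFER_HEADER_SIZE(cnt) \
	(offsetof(struct xfer_hdr, ranges) + (cnt) * sizeof(struct xfer_range))
```

netvsc.c below uses exactly this size to reject packets whose `range_cnt` claims more ranges than the descriptor actually carries.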
+111 -13
drivers/net/hyperv/netvsc.c
··· 388 388 net_device->recv_section_size = resp->sections[0].sub_alloc_size;
389 389 net_device->recv_section_cnt = resp->sections[0].num_sub_allocs;
390 390
391 + /* Ensure buffer will not overflow */
392 + if (net_device->recv_section_size < NETVSC_MTU_MIN || (u64)net_device->recv_section_size *
393 + (u64)net_device->recv_section_cnt > (u64)buf_size) {
394 + netdev_err(ndev, "invalid recv_section_size %u\n",
395 + net_device->recv_section_size);
396 + ret = -EINVAL;
397 + goto cleanup;
398 + }
399 +
391 400 /* Setup receive completion ring.
392 401 * Add 1 to the recv_section_cnt because at least one entry in a
393 402 * ring buffer has to be empty.
··· 469 460 /* Parse the response */
470 461 net_device->send_section_size = init_packet->msg.
471 462 v1_msg.send_send_buf_complete.section_size;
463 + if (net_device->send_section_size < NETVSC_MTU_MIN) {
464 + netdev_err(ndev, "invalid send_section_size %u\n",
465 + net_device->send_section_size);
466 + ret = -EINVAL;
467 + goto cleanup;
468 + }
472 469
473 470 /* Section count is simply the size divided by the section size. */
474 471 net_device->send_section_cnt = buf_size / net_device->send_section_size;
··· 746 731 int budget)
747 732 {
748 733 const struct nvsp_message *nvsp_packet = hv_pkt_data(desc);
734 + u32 msglen = hv_pkt_datalen(desc);
735 +
736 + /* Ensure packet is big enough to read header fields */
737 + if (msglen < sizeof(struct nvsp_message_header)) {
738 + netdev_err(ndev, "nvsp_message length too small: %u\n", msglen);
739 + return;
740 + }
749 741
750 742 switch (nvsp_packet->hdr.msg_type) {
751 743 case NVSP_MSG_TYPE_INIT_COMPLETE:
744 + if (msglen < sizeof(struct nvsp_message_header) +
745 + sizeof(struct nvsp_message_init_complete)) {
746 + netdev_err(ndev, "nvsp_msg length too small: %u\n",
747 + msglen);
748 + return;
749 + }
750 + fallthrough;
751 +
752 752 case NVSP_MSG1_TYPE_SEND_RECV_BUF_COMPLETE:
753 + if (msglen < sizeof(struct nvsp_message_header) +
754 + sizeof(struct nvsp_1_message_send_receive_buffer_complete)) {
755 + netdev_err(ndev, "nvsp_msg1 length too small: %u\n",
756 + msglen);
757 + return;
758 + }
759 + fallthrough;
760 +
753 761 case NVSP_MSG1_TYPE_SEND_SEND_BUF_COMPLETE:
762 + if (msglen < sizeof(struct nvsp_message_header) +
763 + sizeof(struct nvsp_1_message_send_send_buffer_complete)) {
764 + netdev_err(ndev, "nvsp_msg1 length too small: %u\n",
765 + msglen);
766 + return;
767 + }
768 + fallthrough;
769 +
754 770 case NVSP_MSG5_TYPE_SUBCHANNEL:
771 + if (msglen < sizeof(struct nvsp_message_header) +
772 + sizeof(struct nvsp_5_subchannel_complete)) {
773 + netdev_err(ndev, "nvsp_msg5 length too small: %u\n",
774 + msglen);
775 + return;
776 + }
755 777 /* Copy the response back */
756 778 memcpy(&net_device->channel_init_pkt, nvsp_packet,
757 779 sizeof(struct nvsp_message));
··· 1169 1117 static int netvsc_receive(struct net_device *ndev,
1170 1118 struct netvsc_device *net_device,
1171 1119 struct netvsc_channel *nvchan,
1172 - const struct vmpacket_descriptor *desc,
1173 - const struct nvsp_message *nvsp)
1120 + const struct vmpacket_descriptor *desc)
1174 1121 {
1175 1122 struct net_device_context *net_device_ctx = netdev_priv(ndev);
1176 1123 struct vmbus_channel *channel = nvchan->channel;
1177 1124 const struct vmtransfer_page_packet_header *vmxferpage_packet
1178 1125 = container_of(desc, const struct vmtransfer_page_packet_header, d);
1126 + const struct nvsp_message *nvsp = hv_pkt_data(desc);
1127 + u32 msglen = hv_pkt_datalen(desc);
1179 1128 u16 q_idx = channel->offermsg.offer.sub_channel_index;
1180 1129 char *recv_buf = net_device->recv_buf;
1181 1130 u32 status = NVSP_STAT_SUCCESS;
1182 1131 int i;
1183 1132 int count = 0;
1184 1133
1134 + /* Ensure packet is big enough to read header fields */
1135 + if (msglen < sizeof(struct nvsp_message_header)) {
1136 + netif_err(net_device_ctx, rx_err, ndev,
1137 + "invalid nvsp header, length too small: %u\n",
1138 + msglen);
1139 + return 0;
1140 + }
1141 +
1185 1142 /* Make sure this is a valid nvsp packet */
1186 1143 if (unlikely(nvsp->hdr.msg_type != NVSP_MSG1_TYPE_SEND_RNDIS_PKT)) {
1187 1144 netif_err(net_device_ctx, rx_err, ndev,
1188 1145 "Unknown nvsp packet type received %u\n",
1189 1146 nvsp->hdr.msg_type);
1147 + return 0;
1148 + }
1149 +
1150 + /* Validate xfer page pkt header */
1151 + if ((desc->offset8 << 3) < sizeof(struct vmtransfer_page_packet_header)) {
1152 + netif_err(net_device_ctx, rx_err, ndev,
1153 + "Invalid xfer page pkt, offset too small: %u\n",
1154 + desc->offset8 << 3);
1190 1155 return 0;
1191 1156 }
1192 1157
··· 1217 1148
1218 1149 count = vmxferpage_packet->range_cnt;
1219 1150
1151 + /* Check count for a valid value */
1152 + if (NETVSC_XFER_HEADER_SIZE(count) > desc->offset8 << 3) {
1153 + netif_err(net_device_ctx, rx_err, ndev,
1154 + "Range count is not valid: %d\n",
1155 + count);
1156 + return 0;
1157 + }
1158 +
1220 1159 /* Each range represents 1 RNDIS pkt that contains 1 ethernet frame */
1221 1160 for (i = 0; i < count; i++) {
1222 1161 u32 offset = vmxferpage_packet->ranges[i].byte_offset;
··· 1232 1155 void *data;
1233 1156 int ret;
1234 1157
1235 - if (unlikely(offset + buflen > net_device->recv_buf_size)) {
1158 + if (unlikely(offset > net_device->recv_buf_size ||
1159 + buflen > net_device->recv_buf_size - offset)) {
1236 1160 nvchan->rsc.cnt = 0;
1237 1161 status = NVSP_STAT_FAIL;
1238 1162 netif_err(net_device_ctx, rx_err, ndev,
··· 1272 1194 u32 count, offset, *tab;
1273 1195 int i;
1274 1196
1197 + /* Ensure packet is big enough to read send_table fields */
1198 + if (msglen < sizeof(struct nvsp_message_header) +
1199 + sizeof(struct nvsp_5_send_indirect_table)) {
1200 + netdev_err(ndev, "nvsp_v5_msg length too small: %u\n", msglen);
1201 + return;
1202 + }
1203 +
1275 1204 count = nvmsg->msg.v5_msg.send_table.count;
1276 1205 offset = nvmsg->msg.v5_msg.send_table.offset;
1277 1206
··· 1310 1225 }
1311 1226
1312 1227 static void netvsc_send_vf(struct net_device *ndev,
1313 - const struct nvsp_message *nvmsg)
1228 + const struct nvsp_message *nvmsg,
1229 + u32 msglen)
1314 1230 {
1315 1231 struct net_device_context *net_device_ctx = netdev_priv(ndev);
1232 +
1233 + /* Ensure packet is big enough to read its fields */
1234 + if (msglen < sizeof(struct nvsp_message_header) +
1235 + sizeof(struct nvsp_4_send_vf_association)) {
1236 + netdev_err(ndev, "nvsp_v4_msg length too small: %u\n", msglen);
1237 + return;
1238 + }
1316 1239
1317 1240 net_device_ctx->vf_alloc = nvmsg->msg.v4_msg.vf_assoc.allocated;
1318 1241 net_device_ctx->vf_serial = nvmsg->msg.v4_msg.vf_assoc.serial;
··· 1331 1238
1332 1239 static void netvsc_receive_inband(struct net_device *ndev,
1333 1240 struct netvsc_device *nvscdev,
1334 - const struct nvsp_message *nvmsg,
1335 - u32 msglen)
1241 + const struct vmpacket_descriptor *desc)
1336 1242 {
1243 + const struct nvsp_message *nvmsg = hv_pkt_data(desc);
1244 + u32 msglen = hv_pkt_datalen(desc);
1245 +
1246 + /* Ensure packet is big enough to read header fields */
1247 + if (msglen < sizeof(struct nvsp_message_header)) {
1248 + netdev_err(ndev, "inband nvsp_message length too small: %u\n", msglen);
1249 + return;
1250 + }
1251 +
1337 1252 switch (nvmsg->hdr.msg_type) {
1338 1253 case NVSP_MSG5_TYPE_SEND_INDIRECTION_TABLE:
1339 1254 netvsc_send_table(ndev, nvscdev, nvmsg, msglen);
1340 1255 break;
1341 1256
1342 1257 case NVSP_MSG4_TYPE_SEND_VF_ASSOCIATION:
1343 - netvsc_send_vf(ndev, nvmsg);
1258 + netvsc_send_vf(ndev, nvmsg, msglen);
1344 1259 break;
1345 1260 }
1346 1261 }
··· 1362 1261 {
1363 1262 struct vmbus_channel *channel = nvchan->channel;
1364 1263 const struct nvsp_message *nvmsg = hv_pkt_data(desc);
1365 - u32 msglen = hv_pkt_datalen(desc);
1366 1264
1367 1265 trace_nvsp_recv(ndev, channel, nvmsg);
1368 1266
1369 1267 switch (desc->type) {
1370 1268 case VM_PKT_COMP:
1371 - netvsc_send_completion(ndev, net_device, channel,
1372 - desc, budget);
1269 + netvsc_send_completion(ndev, net_device, channel, desc, budget);
1373 1270 break;
1374 1271
1375 1272 case VM_PKT_DATA_USING_XFER_PAGES:
1376 - return netvsc_receive(ndev, net_device, nvchan,
1377 - desc, nvmsg);
1273 + return netvsc_receive(ndev, net_device, nvchan, desc);
1378 1274 break;
1379 1275
1380 1276 case VM_PKT_DATA_INBAND:
1381 - netvsc_receive_inband(ndev, net_device, nvmsg, msglen);
1277 + netvsc_receive_inband(ndev, net_device, desc);
1382 1278 break;
1383 1279
1384 1280 default:
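One detail worth noting in the netvsc hunk: the receive-buffer check is rewritten from `offset + buflen > size` to a two-step form, because the sum of two untrusted `u32` values can wrap around and pass the naive test. A minimal sketch of that overflow-safe range check:

```c
#include <stdint.h>

/* Return 1 if [offset, offset + buflen) fits inside a buffer of
 * `size` bytes, without ever computing offset + buflen (which could
 * wrap for attacker-controlled 32-bit values). */
static int range_ok(uint32_t offset, uint32_t buflen, uint32_t size)
{
	return !(offset > size || buflen > size - offset);
}
```

Checking `offset` first guarantees `size - offset` cannot underflow, so the second comparison is exact.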
+29 -6
drivers/net/hyperv/netvsc_drv.c
··· 748 748 struct netvsc_reconfig *event;
749 749 unsigned long flags;
750 750
751 + /* Ensure the packet is big enough to access its fields */
752 + if (resp->msg_len - RNDIS_HEADER_SIZE < sizeof(struct rndis_indicate_status)) {
753 + netdev_err(net, "invalid rndis_indicate_status packet, len: %u\n",
754 + resp->msg_len);
755 + return;
756 + }
757 +
751 758 /* Update the physical link speed when changing to another vSwitch */
752 759 if (indicate->status == RNDIS_STATUS_LINK_SPEED_CHANGE) {
753 760 u32 speed;
··· 2373 2366 return NOTIFY_OK;
2374 2367 }
2375 2368
2376 - /* VF up/down change detected, schedule to change data path */
2369 + /* Change the data path when VF UP/DOWN/CHANGE are detected.
2370 + *
2371 + * Typically a UP or DOWN event is followed by a CHANGE event, so
2372 + * net_device_ctx->data_path_is_vf is used to cache the current data path
2373 + * to avoid the duplicate call of netvsc_switch_datapath() and the duplicate
2374 + * message.
2375 + *
2376 + * During hibernation, if a VF NIC driver (e.g. mlx5) preserves the network
2377 + * interface, there is only the CHANGE event and no UP or DOWN event.
2378 + */
2377 2379 static int netvsc_vf_changed(struct net_device *vf_netdev)
2378 2380 {
2379 2381 struct net_device_context *net_device_ctx;
··· 2398 2382 netvsc_dev = rtnl_dereference(net_device_ctx->nvdev);
2399 2383 if (!netvsc_dev)
2400 2384 return NOTIFY_DONE;
2385 +
2386 + if (net_device_ctx->data_path_is_vf == vf_is_up)
2387 + return NOTIFY_OK;
2388 + net_device_ctx->data_path_is_vf = vf_is_up;
2401 2389
2402 2390 netvsc_switch_datapath(ndev, vf_is_up);
2403 2391 netdev_info(ndev, "Data path switched %s VF: %s\n",
··· 2607 2587 static int netvsc_suspend(struct hv_device *dev)
2608 2588 {
2609 2589 struct net_device_context *ndev_ctx;
2610 - struct net_device *vf_netdev, *net;
2611 2590 struct netvsc_device *nvdev;
2591 + struct net_device *net;
2612 2592 int ret;
2613 2593
2614 2594 net = hv_get_drvdata(dev);
··· 2623 2603 ret = -ENODEV;
2624 2604 goto out;
2625 2605 }
2626 -
2627 - vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
2628 - if (vf_netdev)
2629 - netvsc_unregister_vf(vf_netdev);
2630 2606
2631 2607 /* Save the current config info */
2632 2608 ndev_ctx->saved_netvsc_dev_info = netvsc_devinfo_get(nvdev);
··· 2644 2628 rtnl_lock();
2645 2629
2646 2630 net_device_ctx = netdev_priv(net);
2631 +
2632 + /* Reset the data path to the netvsc NIC before re-opening the vmbus
2633 + * channel. Later netvsc_netdev_event() will switch the data path to
2634 + * the VF upon the UP or CHANGE event.
2635 + */
2636 + net_device_ctx->data_path_is_vf = false;
2647 2637 device_info = net_device_ctx->saved_netvsc_dev_info;
2648 2638
2649 2639 ret = netvsc_attach(net, device_info);
··· 2717 2695 return netvsc_unregister_vf(event_dev);
2718 2696 case NETDEV_UP:
2719 2697 case NETDEV_DOWN:
2698 + case NETDEV_CHANGE:
2720 2699 return netvsc_vf_changed(event_dev);
2721 2700 default:
2722 2701 return NOTIFY_DONE;
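The `data_path_is_vf` cache above deduplicates back-to-back netdev events: a CHANGE that follows an UP/DOWN must not switch (or log) twice. A toy model of that logic, with illustrative names in place of the driver's context structure:

```c
#include <stdbool.h>

/* Hypothetical stand-in for net_device_context: remembers the current
 * data path and counts how many real switches were performed. */
struct nic_ctx {
	bool data_path_is_vf;
	int  switches;
};

static void vf_changed(struct nic_ctx *ctx, bool vf_is_up)
{
	if (ctx->data_path_is_vf == vf_is_up)
		return;			/* duplicate event: nothing to do */
	ctx->data_path_is_vf = vf_is_up;
	ctx->switches++;		/* stands in for netvsc_switch_datapath() */
}
```

The same cached flag also explains the hibernation path: resume resets it to `false`, so the later CHANGE event (the only event some VF drivers emit) is enough to move traffic back to the VF.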
+66 -7
drivers/net/hyperv/rndis_filter.c
··· 275 275 return;
276 276 }
277 277
278 + /* Ensure the packet is big enough to read req_id. Req_id is the 1st
279 + * field in any request/response message, so the payload should have at
280 + * least sizeof(u32) bytes
281 + */
282 + if (resp->msg_len - RNDIS_HEADER_SIZE < sizeof(u32)) {
283 + netdev_err(ndev, "rndis msg_len too small: %u\n",
284 + resp->msg_len);
285 + return;
286 + }
287 +
278 288 spin_lock_irqsave(&dev->request_lock, flags);
279 289 list_for_each_entry(request, &dev->req_list, list_ent) {
280 290 /*
··· 341 331 * Get the Per-Packet-Info with the specified type
342 332 * return NULL if not found.
343 333 */
344 - static inline void *rndis_get_ppi(struct rndis_packet *rpkt,
345 - u32 type, u8 internal)
334 + static inline void *rndis_get_ppi(struct net_device *ndev,
335 + struct rndis_packet *rpkt,
336 + u32 rpkt_len, u32 type, u8 internal)
346 337 {
347 338 struct rndis_per_packet_info *ppi;
348 339 int len;
··· 351 340 if (rpkt->per_pkt_info_offset == 0)
352 341 return NULL;
353 342
343 + /* Validate info_offset and info_len */
344 + if (rpkt->per_pkt_info_offset < sizeof(struct rndis_packet) ||
345 + rpkt->per_pkt_info_offset > rpkt_len) {
346 + netdev_err(ndev, "Invalid per_pkt_info_offset: %u\n",
347 + rpkt->per_pkt_info_offset);
348 + return NULL;
349 + }
350 +
351 + if (rpkt->per_pkt_info_len > rpkt_len - rpkt->per_pkt_info_offset) {
352 + netdev_err(ndev, "Invalid per_pkt_info_len: %u\n",
353 + rpkt->per_pkt_info_len);
354 + return NULL;
355 + }
356 +
354 357 ppi = (struct rndis_per_packet_info *)((ulong)rpkt +
355 358 rpkt->per_pkt_info_offset);
356 359 len = rpkt->per_pkt_info_len;
357 360
358 361 while (len > 0) {
362 + /* Validate ppi_offset and ppi_size */
363 + if (ppi->size > len) {
364 + netdev_err(ndev, "Invalid ppi size: %u\n", ppi->size);
365 + continue;
366 + }
367 +
368 + if (ppi->ppi_offset >= ppi->size) {
369 + netdev_err(ndev, "Invalid ppi_offset: %u\n", ppi->ppi_offset);
370 + continue;
371 + }
372 +
359 373 if (ppi->type == type && ppi->internal == internal)
360 374 return (void *)((ulong)ppi + ppi->ppi_offset);
361 375 len -= ppi->size;
··· 424 388 const struct ndis_pkt_8021q_info *vlan;
425 389 const struct rndis_pktinfo_id *pktinfo_id;
426 390 const u32 *hash_info;
427 - u32 data_offset;
391 + u32 data_offset, rpkt_len;
428 392 void *data;
429 393 bool rsc_more = false;
430 394 int ret;
431 395
396 + /* Ensure data_buflen is big enough to read header fields */
397 + if (data_buflen < RNDIS_HEADER_SIZE + sizeof(struct rndis_packet)) {
398 + netdev_err(ndev, "invalid rndis pkt, data_buflen too small: %u\n",
399 + data_buflen);
400 + return NVSP_STAT_FAIL;
401 + }
402 +
403 + /* Validate rndis_pkt offset */
404 + if (rndis_pkt->data_offset >= data_buflen - RNDIS_HEADER_SIZE) {
405 + netdev_err(ndev, "invalid rndis packet offset: %u\n",
406 + rndis_pkt->data_offset);
407 + return NVSP_STAT_FAIL;
408 + }
409 +
432 410 /* Remove the rndis header and pass it back up the stack */
433 411 data_offset = RNDIS_HEADER_SIZE + rndis_pkt->data_offset;
434 412
413 + rpkt_len = data_buflen - RNDIS_HEADER_SIZE;
435 414 data_buflen -= data_offset;
436 415
437 416 /*
··· 461 410 return NVSP_STAT_FAIL;
462 411 }
463 412
464 - vlan = rndis_get_ppi(rndis_pkt, IEEE_8021Q_INFO, 0);
413 + vlan = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, IEEE_8021Q_INFO, 0);
465 414
466 - csum_info = rndis_get_ppi(rndis_pkt, TCPIP_CHKSUM_PKTINFO, 0);
415 + csum_info = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, TCPIP_CHKSUM_PKTINFO, 0);
467 416
468 - hash_info = rndis_get_ppi(rndis_pkt, NBL_HASH_VALUE, 0);
417 + hash_info = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, NBL_HASH_VALUE, 0);
469 418
470 - pktinfo_id = rndis_get_ppi(rndis_pkt, RNDIS_PKTINFO_ID, 1);
419 + pktinfo_id = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, RNDIS_PKTINFO_ID, 1);
471 420
472 421 data = (void *)msg + data_offset;
473 422
··· 524 473
525 474 if (netif_msg_rx_status(net_device_ctx))
526 475 dump_rndis_message(ndev, rndis_msg);
476 +
477 + /* Validate incoming rndis_message packet */
478 + if (buflen < RNDIS_HEADER_SIZE || rndis_msg->msg_len < RNDIS_HEADER_SIZE ||
479 + buflen < rndis_msg->msg_len) {
480 + netdev_err(ndev, "Invalid rndis_msg (buflen: %u, msg_len: %u)\n",
481 + buflen, rndis_msg->msg_len);
482 + return NVSP_STAT_FAIL;
483 + }
527 484
528 485 switch (rndis_msg->ndis_msg_type) {
529 486 case RNDIS_MSG_PACKET:
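The hardened `rndis_get_ppi()` above walks a list of elements that each carry their own size and payload offset, and both must be checked against the remaining length before the element is trusted. A minimal self-contained model of that walk; the struct layout is illustrative, not the real RNDIS per-packet-info format:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy per-packet-info element: size covers the whole element,
 * ppi_offset points at the payload inside it. */
struct ppi {
	uint32_t size;
	uint32_t ppi_offset;
	uint32_t type;
};

static const struct ppi *find_ppi(const struct ppi *p, uint32_t len,
				  uint32_t type)
{
	while (len >= sizeof(*p)) {
		/* Reject elements that overrun the buffer, are too small
		 * to advance the cursor, or point their payload outside
		 * themselves. */
		if (p->size < sizeof(*p) || p->size > len ||
		    p->ppi_offset >= p->size)
			return NULL;
		if (p->type == type)
			return p;
		len -= p->size;
		p = (const struct ppi *)((const char *)p + p->size);
	}
	return NULL;
}
```

Note that a `size` smaller than the element header would otherwise stall or corrupt the walk, which is exactly the class of guest-visible parsing bug these hunks close.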
+3 -1
drivers/net/ieee802154/adf7242.c
··· 882 882 int ret;
883 883 u8 lqi, len_u8, *data;
884 884
885 - adf7242_read_reg(lp, 0, &len_u8);
885 + ret = adf7242_read_reg(lp, 0, &len_u8);
886 + if (ret)
887 + return ret;
886 888
887 889 len = len_u8;
888 890
+1
drivers/net/ieee802154/ca8210.c
··· 2925 2925 );
2926 2926 if (!priv->irq_workqueue) {
2927 2927 dev_crit(&priv->spi->dev, "alloc of irq_workqueue failed!\n");
2928 + destroy_workqueue(priv->mlme_workqueue);
2928 2929 return -ENOMEM;
2929 2930 }
2930 2931
+2 -2
drivers/net/ipa/ipa_table.c
··· 521 521 val = ioread32(endpoint->ipa->reg_virt + offset);
522 522
523 523 /* Zero all filter-related fields, preserving the rest */
524 - u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
524 + u32p_replace_bits(&val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
525 525
526 526 iowrite32(val, endpoint->ipa->reg_virt + offset);
527 527 }
··· 573 573 val = ioread32(ipa->reg_virt + offset);
574 574
575 575 /* Zero all route-related fields, preserving the rest */
576 - u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
576 + u32p_replace_bits(&val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
577 577
578 578 iowrite32(val, ipa->reg_virt + offset);
579 579 }
+1 -1
drivers/net/phy/phy.c
··· 996 996 {
997 997 struct net_device *dev = phydev->attached_dev;
998 998
999 - if (!phy_is_started(phydev)) {
999 + if (!phy_is_started(phydev) && phydev->state != PHY_DOWN) {
1000 1000 WARN(1, "called from state %s\n",
1001 1001 phy_state_to_str(phydev->state));
1002 1002 return;
+6 -5
drivers/net/phy/phy_device.c
··· 1143 1143 if (ret < 0)
1144 1144 return ret;
1145 1145
1146 - ret = phy_disable_interrupts(phydev);
1147 - if (ret)
1148 - return ret;
1149 -
1150 1146 if (phydev->drv->config_init)
1151 1147 ret = phydev->drv->config_init(phydev);
1152 1148
··· 1419 1423 if (err)
1420 1424 goto error;
1421 1425
1426 + err = phy_disable_interrupts(phydev);
1427 + if (err)
1428 + return err;
1429 +
1422 1430 phy_resume(phydev);
1423 1431 phy_led_triggers_register(phydev);
1424 1432
··· 1682 1682
1683 1683 phy_led_triggers_unregister(phydev);
1684 1684
1685 - module_put(phydev->mdio.dev.driver->owner);
1685 + if (phydev->mdio.dev.driver)
1686 + module_put(phydev->mdio.dev.driver->owner);
1686 1687
1687 1688 /* If the device had no specific driver before (i.e. - it
1688 1689 * was using the generic driver), we unbind the device
+1 -1
drivers/net/usb/rndis_host.c
··· 201 201 dev_dbg(&info->control->dev,
202 202 "rndis response error, code %d\n", retval);
203 203 }
204 - msleep(20);
204 + msleep(40);
205 205 }
206 206 dev_dbg(&info->control->dev, "rndis response timeout\n");
207 207 return -ETIMEDOUT;
+1
drivers/net/wan/hdlc_cisco.c
··· 118 118 skb_put(skb, sizeof(struct cisco_packet));
119 119 skb->priority = TC_PRIO_CONTROL;
120 120 skb->dev = dev;
121 + skb->protocol = htons(ETH_P_HDLC);
121 122 skb_reset_network_header(skb);
122 123
123 124 dev_queue_xmit(skb);
+5 -1
drivers/net/wan/hdlc_fr.c
··· 433 433 if (pvc->state.fecn) /* TX Congestion counter */
434 434 dev->stats.tx_compressed++;
435 435 skb->dev = pvc->frad;
436 + skb->protocol = htons(ETH_P_HDLC);
437 + skb_reset_network_header(skb);
436 438 dev_queue_xmit(skb);
437 439 return NETDEV_TX_OK;
438 440 }
··· 557 555 skb_put(skb, i);
558 556 skb->priority = TC_PRIO_CONTROL;
559 557 skb->dev = dev;
558 + skb->protocol = htons(ETH_P_HDLC);
560 559 skb_reset_network_header(skb);
561 560
562 561 dev_queue_xmit(skb);
··· 1044 1041 {
1045 1042 dev->type = ARPHRD_DLCI;
1046 1043 dev->flags = IFF_POINTOPOINT;
1047 - dev->hard_header_len = 10;
1044 + dev->hard_header_len = 0;
1048 1045 dev->addr_len = 2;
1049 1046 netif_keep_dst(dev);
1050 1047 }
··· 1096 1093 dev->mtu = HDLC_MAX_MTU;
1097 1094 dev->min_mtu = 68;
1098 1095 dev->max_mtu = HDLC_MAX_MTU;
1096 + dev->needed_headroom = 10;
1099 1097 dev->priv_flags |= IFF_NO_QUEUE;
1100 1098 dev->ml_priv = pvc;
1101 1099
+12 -5
drivers/net/wan/hdlc_ppp.c
··· 251 251
252 252 skb->priority = TC_PRIO_CONTROL;
253 253 skb->dev = dev;
254 + skb->protocol = htons(ETH_P_HDLC);
254 255 skb_reset_network_header(skb);
255 256 skb_queue_tail(&tx_queue, skb);
256 257 }
··· 384 383 }
385 384
386 385 for (opt = data; len; len -= opt[1], opt += opt[1]) {
387 - if (len < 2 || len < opt[1]) {
388 - dev->stats.rx_errors++;
389 - kfree(out);
390 - return; /* bad packet, drop silently */
391 - }
386 + if (len < 2 || opt[1] < 2 || len < opt[1])
387 + goto err_out;
392 388
393 389 if (pid == PID_LCP)
394 390 switch (opt[0]) {
··· 393 395 continue; /* MRU always OK and > 1500 bytes? */
394 396
395 397 case LCP_OPTION_ACCM: /* async control character map */
398 + if (opt[1] < sizeof(valid_accm))
399 + goto err_out;
396 400 if (!memcmp(opt, valid_accm,
397 401 sizeof(valid_accm)))
398 402 continue;
··· 406 406 }
407 407 break;
408 408 case LCP_OPTION_MAGIC:
409 + if (len < 6)
410 + goto err_out;
409 411 if (opt[1] != 6 || (!opt[2] && !opt[3] &&
410 412 !opt[4] && !opt[5]))
411 413 break; /* reject invalid magic number */
··· 425 423 else
426 424 ppp_cp_event(dev, pid, RCR_GOOD, CP_CONF_ACK, id, req_len, data);
427 425
426 + kfree(out);
427 + return;
428 +
429 + err_out:
430 + dev->stats.rx_errors++;
428 431 kfree(out);
429 432 }
430 433
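The hdlc_ppp fix adds `opt[1] < 2` to the LCP option walk: options are type/length TLVs whose length covers the whole option, so a zero or one length would leave `opt += opt[1]` stuck forever. A minimal sketch of the corrected walk, reduced to just the bounds logic:

```c
#include <stdint.h>

/* Walk a PPP-style option list of { type, length, payload... } TLVs.
 * Returns 0 if the list is well formed, -1 if truncated or if an
 * option carries a length that cannot advance the cursor. */
static int walk_options(const uint8_t *data, int len)
{
	const uint8_t *opt;

	for (opt = data; len; len -= opt[1], opt += opt[1]) {
		if (len < 2 || opt[1] < 2 || len < opt[1])
			return -1;	/* zero-length or overruns the buffer */
	}
	return 0;
}
```

Without the middle test, `walk_options` would spin on the same option whenever `opt[1] == 0`, which is the denial-of-service the patch closes.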
+2 -2
drivers/net/wan/lapbether.c
··· 198 198 struct net_device *dev;
199 199 int size = skb->len;
200 200
201 - skb->protocol = htons(ETH_P_X25);
202 -
203 201 ptr = skb_push(skb, 2);
204 202
205 203 *ptr++ = size % 256;
··· 207 209 ndev->stats.tx_bytes += size;
208 210
209 211 skb->dev = dev = lapbeth->ethdev;
212 +
213 + skb->protocol = htons(ETH_P_DEC);
210 214
211 215 skb_reset_network_header(skb);
212 216
+1 -4
drivers/net/wireguard/noise.c
··· 87 87 88 88 void wg_noise_handshake_clear(struct noise_handshake *handshake) 89 89 { 90 + down_write(&handshake->lock); 90 91 wg_index_hashtable_remove( 91 92 handshake->entry.peer->device->index_hashtable, 92 93 &handshake->entry); 93 - down_write(&handshake->lock); 94 94 handshake_zero(handshake); 95 95 up_write(&handshake->lock); 96 - wg_index_hashtable_remove( 97 - handshake->entry.peer->device->index_hashtable, 98 - &handshake->entry); 99 96 } 100 97 101 98 static struct noise_keypair *keypair_create(struct wg_peer *peer)
+8 -3
drivers/net/wireguard/peerlookup.c
··· 167 167 struct index_hashtable_entry *old, 168 168 struct index_hashtable_entry *new) 169 169 { 170 - if (unlikely(hlist_unhashed(&old->index_hash))) 171 - return false; 170 + bool ret; 171 + 172 172 spin_lock_bh(&table->lock); 173 + ret = !hlist_unhashed(&old->index_hash); 174 + if (unlikely(!ret)) 175 + goto out; 176 + 173 177 new->index = old->index; 174 178 hlist_replace_rcu(&old->index_hash, &new->index_hash); 175 179 ··· 184 180 * simply gets dropped, which isn't terrible. 185 181 */ 186 182 INIT_HLIST_NODE(&old->index_hash); 183 + out: 187 184 spin_unlock_bh(&table->lock); 188 - return true; 185 + return ret; 189 186 } 190 187 191 188 void wg_index_hashtable_remove(struct index_hashtable *table,
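The peerlookup.c hunk above fixes a time-of-check/time-of-use race: the `hlist_unhashed(&old->index_hash)` test is now evaluated only after `table->lock` is held, so a concurrent removal can no longer slip in between the check and the replace. A minimal userspace sketch of that check-under-the-lock discipline, with toy names and a GCC test-and-set builtin standing in for the spinlock:

```c
#include <stdbool.h>

/* Toy stand-ins: 'hashed' plays the role of !hlist_unhashed(), and the
 * busy-wait lock plays the role of table->lock.  All names are
 * hypothetical; this only illustrates the locking pattern. */
struct toy_entry {
	bool hashed;
	unsigned int index;
};

struct toy_table {
	int lock;
	struct toy_entry *slot;
};

static void toy_lock(int *l)   { while (__sync_lock_test_and_set(l, 1)) ; }
static void toy_unlock(int *l) { __sync_lock_release(l); }

/* Returns false when @old was already removed by the time the lock was
 * taken -- the condition the original code tested too early. */
static bool toy_replace(struct toy_table *t, struct toy_entry *old,
			struct toy_entry *new)
{
	bool ret;

	toy_lock(&t->lock);
	ret = old->hashed;		/* check happens under the lock */
	if (ret) {
		new->index = old->index;
		new->hashed = true;
		old->hashed = false;	/* old entry leaves the table */
		t->slot = new;
	}
	toy_unlock(&t->lock);
	return ret;
}
```

The kernel version additionally re-initializes the old node so that a racing lookup simply sees a miss, as the in-line comment in the hunk explains.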
+9 -3
drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
··· 664 664 /* To check if there's window offered */ 665 665 static bool data_ok(struct brcmf_sdio *bus) 666 666 { 667 - /* Reserve TXCTL_CREDITS credits for txctl */ 668 - return (bus->tx_max - bus->tx_seq) > TXCTL_CREDITS && 669 - ((bus->tx_max - bus->tx_seq) & 0x80) == 0; 667 + u8 tx_rsv = 0; 668 + 669 + /* Reserve TXCTL_CREDITS credits for txctl when it is ready to send */ 670 + if (bus->ctrl_frame_stat) 671 + tx_rsv = TXCTL_CREDITS; 672 + 673 + return (bus->tx_max - bus->tx_seq - tx_rsv) != 0 && 674 + ((bus->tx_max - bus->tx_seq - tx_rsv) & 0x80) == 0; 675 + 670 676 } 671 677 672 678 /* To check if there's window offered */
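The `data_ok()` fix above relies on the SDIO credit window being modulo-256: the difference is computed in u8 arithmetic, bit 7 flags a rolled-over (exhausted) window, and TXCTL_CREDITS is reserved only while a control frame is actually pending (`bus->ctrl_frame_stat`). A hedged restatement of that arithmetic, with the TXCTL_CREDITS value assumed for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

#define TXCTL_CREDITS 2		/* illustrative value, not the driver's header */

/* Mirror of the fixed data_ok() logic: the window is (tx_max - tx_seq)
 * less the control-frame reservation, evaluated modulo 256. */
static bool data_ok_sketch(uint8_t tx_max, uint8_t tx_seq, bool ctrl_pending)
{
	uint8_t tx_rsv = ctrl_pending ? TXCTL_CREDITS : 0;
	uint8_t window = (uint8_t)(tx_max - tx_seq - tx_rsv);

	return window != 0 && (window & 0x80) == 0;
}
```

With tx_max = 10 and tx_seq = 8 the window is exactly two credits, so data frames are allowed only while no txctl frame is waiting to consume them.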
+1 -1
drivers/net/wireless/marvell/mwifiex/fw.h
··· 954 954 struct mwifiex_aes_param { 955 955 u8 pn[WPA_PN_SIZE]; 956 956 __le16 key_len; 957 - u8 key[WLAN_KEY_LEN_CCMP]; 957 + u8 key[WLAN_KEY_LEN_CCMP_256]; 958 958 } __packed; 959 959 960 960 struct mwifiex_wapi_param {
+2 -2
drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
··· 619 619 key_v2 = &resp->params.key_material_v2; 620 620 621 621 len = le16_to_cpu(key_v2->key_param_set.key_params.aes.key_len); 622 - if (len > WLAN_KEY_LEN_CCMP) 622 + if (len > sizeof(key_v2->key_param_set.key_params.aes.key)) 623 623 return -EINVAL; 624 624 625 625 if (le16_to_cpu(key_v2->action) == HostCmd_ACT_GEN_SET) { ··· 635 635 return 0; 636 636 637 637 memset(priv->aes_key_v2.key_param_set.key_params.aes.key, 0, 638 - WLAN_KEY_LEN_CCMP); 638 + sizeof(key_v2->key_param_set.key_params.aes.key)); 639 639 priv->aes_key_v2.key_param_set.key_params.aes.key_len = 640 640 cpu_to_le16(len); 641 641 memcpy(priv->aes_key_v2.key_param_set.key_params.aes.key,
+2 -1
drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
··· 2128 2128 sizeof(dev->mt76.hw->wiphy->fw_version), 2129 2129 "%.10s-%.15s", hdr->fw_ver, hdr->build_date); 2130 2130 2131 - if (!strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) { 2131 + if (!is_mt7615(&dev->mt76) && 2132 + !strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) { 2132 2133 dev->fw_ver = MT7615_FIRMWARE_V2; 2133 2134 dev->mcu_ops = &sta_update_ops; 2134 2135 } else {
+6 -2
drivers/net/wireless/mediatek/mt76/mt7915/init.c
··· 699 699 spin_lock_bh(&dev->token_lock); 700 700 idr_for_each_entry(&dev->token, txwi, id) { 701 701 mt7915_txp_skb_unmap(&dev->mt76, txwi); 702 - if (txwi->skb) 703 - dev_kfree_skb_any(txwi->skb); 702 + if (txwi->skb) { 703 + struct ieee80211_hw *hw; 704 + 705 + hw = mt76_tx_status_get_hw(&dev->mt76, txwi->skb); 706 + ieee80211_free_txskb(hw, txwi->skb); 707 + } 704 708 mt76_put_txwi(&dev->mt76, txwi); 705 709 } 706 710 spin_unlock_bh(&dev->token_lock);
+1 -1
drivers/net/wireless/mediatek/mt76/mt7915/mac.c
··· 841 841 if (sta || !(info->flags & IEEE80211_TX_CTL_NO_ACK)) 842 842 mt7915_tx_status(sta, hw, info, NULL); 843 843 844 - dev_kfree_skb(skb); 844 + ieee80211_free_txskb(hw, skb); 845 845 } 846 846 847 847 void mt7915_txp_skb_unmap(struct mt76_dev *dev,
-1
drivers/net/wireless/ti/wlcore/cmd.h
··· 458 458 KEY_TKIP = 2, 459 459 KEY_AES = 3, 460 460 KEY_GEM = 4, 461 - KEY_IGTK = 5, 462 461 }; 463 462 464 463 struct wl1271_cmd_set_keys {
-4
drivers/net/wireless/ti/wlcore/main.c
··· 3559 3559 case WL1271_CIPHER_SUITE_GEM: 3560 3560 key_type = KEY_GEM; 3561 3561 break; 3562 - case WLAN_CIPHER_SUITE_AES_CMAC: 3563 - key_type = KEY_IGTK; 3564 - break; 3565 3562 default: 3566 3563 wl1271_error("Unknown key algo 0x%x", key_conf->cipher); 3567 3564 ··· 6228 6231 WLAN_CIPHER_SUITE_TKIP, 6229 6232 WLAN_CIPHER_SUITE_CCMP, 6230 6233 WL1271_CIPHER_SUITE_GEM, 6231 - WLAN_CIPHER_SUITE_AES_CMAC, 6232 6234 }; 6233 6235 6234 6236 /* The tx descriptor buffer */
+1
drivers/nvme/host/Kconfig
··· 73 73 depends on INET 74 74 depends on BLK_DEV_NVME 75 75 select NVME_FABRICS 76 + select CRYPTO 76 77 select CRYPTO_CRC32C 77 78 help 78 79 This provides support for the NVMe over Fabrics protocol using
+21 -3
drivers/nvme/host/core.c
··· 3041 3041 if (!cel) 3042 3042 return -ENOMEM; 3043 3043 3044 - ret = nvme_get_log(ctrl, NVME_NSID_ALL, NVME_LOG_CMD_EFFECTS, 0, csi, 3044 + ret = nvme_get_log(ctrl, 0x00, NVME_LOG_CMD_EFFECTS, 0, csi, 3045 3045 &cel->log, sizeof(cel->log), 0); 3046 3046 if (ret) { 3047 3047 kfree(cel); ··· 3236 3236 if (ret < 0) 3237 3237 return ret; 3238 3238 3239 - if (!ctrl->identified) 3240 - nvme_hwmon_init(ctrl); 3239 + if (!ctrl->identified) { 3240 + ret = nvme_hwmon_init(ctrl); 3241 + if (ret < 0) 3242 + return ret; 3243 + } 3241 3244 3242 3245 ctrl->identified = true; 3243 3246 ··· 3264 3261 return -EWOULDBLOCK; 3265 3262 } 3266 3263 3264 + nvme_get_ctrl(ctrl); 3265 + if (!try_module_get(ctrl->ops->module)) 3266 + return -EINVAL; 3267 + 3267 3268 file->private_data = ctrl; 3269 + return 0; 3270 + } 3271 + 3272 + static int nvme_dev_release(struct inode *inode, struct file *file) 3273 + { 3274 + struct nvme_ctrl *ctrl = 3275 + container_of(inode->i_cdev, struct nvme_ctrl, cdev); 3276 + 3277 + module_put(ctrl->ops->module); 3278 + nvme_put_ctrl(ctrl); 3268 3279 return 0; 3269 3280 } 3270 3281 ··· 3344 3327 static const struct file_operations nvme_dev_fops = { 3345 3328 .owner = THIS_MODULE, 3346 3329 .open = nvme_dev_open, 3330 + .release = nvme_dev_release, 3347 3331 .unlocked_ioctl = nvme_dev_ioctl, 3348 3332 .compat_ioctl = compat_ptr_ioctl, 3349 3333 };
+4 -2
drivers/nvme/host/fc.c
··· 3671 3671 spin_lock_irqsave(&nvme_fc_lock, flags); 3672 3672 list_for_each_entry(lport, &nvme_fc_lport_list, port_list) { 3673 3673 if (lport->localport.node_name != laddr.nn || 3674 - lport->localport.port_name != laddr.pn) 3674 + lport->localport.port_name != laddr.pn || 3675 + lport->localport.port_state != FC_OBJSTATE_ONLINE) 3675 3676 continue; 3676 3677 3677 3678 list_for_each_entry(rport, &lport->endp_list, endp_list) { 3678 3679 if (rport->remoteport.node_name != raddr.nn || 3679 - rport->remoteport.port_name != raddr.pn) 3680 + rport->remoteport.port_name != raddr.pn || 3681 + rport->remoteport.port_state != FC_OBJSTATE_ONLINE) 3680 3682 continue; 3681 3683 3682 3684 /* if fail to get reference fall through. Will error */
+6 -8
drivers/nvme/host/hwmon.c
··· 59 59 60 60 static int nvme_hwmon_get_smart_log(struct nvme_hwmon_data *data) 61 61 { 62 - int ret; 63 - 64 - ret = nvme_get_log(data->ctrl, NVME_NSID_ALL, NVME_LOG_SMART, 0, 62 + return nvme_get_log(data->ctrl, NVME_NSID_ALL, NVME_LOG_SMART, 0, 65 63 NVME_CSI_NVM, &data->log, sizeof(data->log), 0); 66 - 67 - return ret <= 0 ? ret : -EIO; 68 64 } 69 65 70 66 static int nvme_hwmon_read(struct device *dev, enum hwmon_sensor_types type, ··· 221 225 .info = nvme_hwmon_info, 222 226 }; 223 227 224 - void nvme_hwmon_init(struct nvme_ctrl *ctrl) 228 + int nvme_hwmon_init(struct nvme_ctrl *ctrl) 225 229 { 226 230 struct device *dev = ctrl->dev; 227 231 struct nvme_hwmon_data *data; ··· 230 234 231 235 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 232 236 if (!data) 233 - return; 237 + return 0; 234 238 235 239 data->ctrl = ctrl; 236 240 mutex_init(&data->read_lock); ··· 240 244 dev_warn(ctrl->device, 241 245 "Failed to read smart log (error %d)\n", err); 242 246 devm_kfree(dev, data); 243 - return; 247 + return err; 244 248 } 245 249 246 250 hwmon = devm_hwmon_device_register_with_info(dev, "nvme", data, ··· 250 254 dev_warn(dev, "Failed to instantiate hwmon device\n"); 251 255 devm_kfree(dev, data); 252 256 } 257 + 258 + return 0; 253 259 }
+5 -2
drivers/nvme/host/nvme.h
··· 827 827 } 828 828 829 829 #ifdef CONFIG_NVME_HWMON 830 - void nvme_hwmon_init(struct nvme_ctrl *ctrl); 830 + int nvme_hwmon_init(struct nvme_ctrl *ctrl); 831 831 #else 832 - static inline void nvme_hwmon_init(struct nvme_ctrl *ctrl) { } 832 + static inline int nvme_hwmon_init(struct nvme_ctrl *ctrl) 833 + { 834 + return 0; 835 + } 833 836 #endif 834 837 835 838 u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+9 -8
drivers/nvme/host/pci.c
··· 940 940 struct nvme_completion *cqe = &nvmeq->cqes[idx]; 941 941 struct request *req; 942 942 943 - if (unlikely(cqe->command_id >= nvmeq->q_depth)) { 944 - dev_warn(nvmeq->dev->ctrl.device, 945 - "invalid id %d completed on queue %d\n", 946 - cqe->command_id, le16_to_cpu(cqe->sq_id)); 947 - return; 948 - } 949 - 950 943 /* 951 944 * AEN requests are special as they don't time out and can 952 945 * survive any kind of queue freeze and often don't respond to ··· 953 960 } 954 961 955 962 req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), cqe->command_id); 963 + if (unlikely(!req)) { 964 + dev_warn(nvmeq->dev->ctrl.device, 965 + "invalid id %d completed on queue %d\n", 966 + cqe->command_id, le16_to_cpu(cqe->sq_id)); 967 + return; 968 + } 969 + 956 970 trace_nvme_sq(req, cqe->sq_head, nvmeq->sq_tail); 957 971 if (!nvme_try_complete_req(req, cqe->status, cqe->result)) 958 972 nvme_pci_complete_rq(req); ··· 3153 3153 { PCI_VDEVICE(INTEL, 0xf1a5), /* Intel 600P/P3100 */ 3154 3154 .driver_data = NVME_QUIRK_NO_DEEPEST_PS | 3155 3155 NVME_QUIRK_MEDIUM_PRIO_SQ | 3156 - NVME_QUIRK_NO_TEMP_THRESH_CHANGE }, 3156 + NVME_QUIRK_NO_TEMP_THRESH_CHANGE | 3157 + NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3157 3158 { PCI_VDEVICE(INTEL, 0xf1a6), /* Intel 760p/Pro 7600p */ 3158 3159 .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 3159 3160 { PCI_VDEVICE(INTEL, 0x5845), /* Qemu emulated controller */
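The pci.c hunk above drops the explicit `cqe->command_id >= nvmeq->q_depth` bounds check and instead treats a NULL result from the `blk_mq_tag_to_rq()` lookup as the invalid-id case, which also rejects ids that are in range but have no request outstanding. A toy sketch of that lookup-then-validate shape (the array stands in for the tag set; names are hypothetical):

```c
#include <stddef.h>

#define TOY_DEPTH 4

static void *toy_tags[TOY_DEPTH];	/* stand-in for the blk-mq tag set */

/* Like blk_mq_tag_to_rq(): NULL for out-of-range ids *and* for in-range
 * ids with no allocated request, so the caller needs only one check. */
static void *toy_tag_to_req(unsigned int id)
{
	if (id >= TOY_DEPTH)
		return NULL;
	return toy_tags[id];
}
```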
+2
drivers/nvme/target/passthru.c
··· 517 517 subsys->ver = NVME_VS(1, 2, 1); 518 518 } 519 519 520 + __module_get(subsys->passthru_ctrl->ops->module); 520 521 mutex_unlock(&subsys->lock); 521 522 return 0; 522 523 ··· 532 531 { 533 532 if (subsys->passthru_ctrl) { 534 533 xa_erase(&passthru_subsystems, subsys->passthru_ctrl->cntlid); 534 + module_put(subsys->passthru_ctrl->ops->module); 535 535 nvme_put_ctrl(subsys->passthru_ctrl); 536 536 } 537 537 subsys->passthru_ctrl = NULL;
+4 -7
drivers/pci/controller/pcie-rockchip-host.c
··· 71 71 static int rockchip_pcie_valid_device(struct rockchip_pcie *rockchip, 72 72 struct pci_bus *bus, int dev) 73 73 { 74 - /* access only one slot on each root port */ 75 - if (pci_is_root_bus(bus) && dev > 0) 76 - return 0; 77 - 78 74 /* 79 - * do not read more than one device on the bus directly attached 75 + * Access only one slot on each root port. 76 + * Do not read more than one device on the bus directly attached 80 77 * to RC's downstream side. 81 78 */ 82 - if (pci_is_root_bus(bus->parent) && dev > 0) 83 - return 0; 79 + if (pci_is_root_bus(bus) || pci_is_root_bus(bus->parent)) 80 + return dev == 0; 84 81 85 82 return 1; 86 83 }
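The pcie-rockchip-host.c change collapses the two device-0-only cases into one predicate: both the root bus and the bus directly attached to the root port expose exactly one device. A standalone restatement with the `pci_is_root_bus()` tests reduced to booleans (names hypothetical):

```c
#include <stdbool.h>

/* on_root_bus ~ pci_is_root_bus(bus)
 * below_root  ~ pci_is_root_bus(bus->parent) */
static int valid_device_sketch(bool on_root_bus, bool below_root, int dev)
{
	/* Access only one slot on each root port, and only one device on
	 * the bus directly attached to the RC's downstream side. */
	if (on_root_bus || below_root)
		return dev == 0;
	return 1;
}
```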
+4 -2
drivers/phy/ti/phy-am654-serdes.c
··· 822 822 pm_runtime_enable(dev); 823 823 824 824 phy = devm_phy_create(dev, NULL, &ops); 825 - if (IS_ERR(phy)) 826 - return PTR_ERR(phy); 825 + if (IS_ERR(phy)) { 826 + ret = PTR_ERR(phy); 827 + goto clk_err; 828 + } 827 829 828 830 phy_set_drvdata(phy, am654_phy); 829 831 phy_provider = devm_of_phy_provider_register(dev, serdes_am654_xlate);
+13 -1
drivers/pinctrl/intel/pinctrl-cherryview.c
··· 58 58 #define CHV_PADCTRL1_CFGLOCK BIT(31) 59 59 #define CHV_PADCTRL1_INVRXTX_SHIFT 4 60 60 #define CHV_PADCTRL1_INVRXTX_MASK GENMASK(7, 4) 61 + #define CHV_PADCTRL1_INVRXTX_TXDATA BIT(7) 61 62 #define CHV_PADCTRL1_INVRXTX_RXDATA BIT(6) 62 63 #define CHV_PADCTRL1_INVRXTX_TXENABLE BIT(5) 63 64 #define CHV_PADCTRL1_ODEN BIT(3) ··· 793 792 static void chv_gpio_clear_triggering(struct chv_pinctrl *pctrl, 794 793 unsigned int offset) 795 794 { 795 + u32 invrxtx_mask = CHV_PADCTRL1_INVRXTX_MASK; 796 796 u32 value; 797 + 798 + /* 799 + * One some devices the GPIO should output the inverted value from what 800 + * device-drivers / ACPI code expects (inverted external buffer?). The 801 + * BIOS makes this work by setting the CHV_PADCTRL1_INVRXTX_TXDATA flag, 802 + * preserve this flag if the pin is already setup as GPIO. 803 + */ 804 + value = chv_readl(pctrl, offset, CHV_PADCTRL0); 805 + if (value & CHV_PADCTRL0_GPIOEN) 806 + invrxtx_mask &= ~CHV_PADCTRL1_INVRXTX_TXDATA; 797 807 798 808 value = chv_readl(pctrl, offset, CHV_PADCTRL1); 799 809 value &= ~CHV_PADCTRL1_INTWAKECFG_MASK; 800 - value &= ~CHV_PADCTRL1_INVRXTX_MASK; 810 + value &= ~invrxtx_mask; 801 811 chv_writel(pctrl, offset, CHV_PADCTRL1, value); 802 812 } 803 813
+4
drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
··· 259 259 260 260 desc = (const struct mtk_pin_desc *)&hw->soc->pins[gpio_n]; 261 261 262 + /* if the GPIO is not supported for eint mode */ 263 + if (desc->eint.eint_m == NO_EINT_SUPPORT) 264 + return virt_gpio; 265 + 262 266 if (desc->funcs && !desc->funcs[desc->eint.eint_m].name) 263 267 virt_gpio = true; 264 268
+1 -1
drivers/pinctrl/mvebu/pinctrl-armada-xp.c
··· 414 414 MPP_VAR_FUNCTION(0x1, "i2c0", "sck", V_98DX3236_PLUS)), 415 415 MPP_MODE(15, 416 416 MPP_VAR_FUNCTION(0x0, "gpio", NULL, V_98DX3236_PLUS), 417 - MPP_VAR_FUNCTION(0x4, "i2c0", "sda", V_98DX3236_PLUS)), 417 + MPP_VAR_FUNCTION(0x1, "i2c0", "sda", V_98DX3236_PLUS)), 418 418 MPP_MODE(16, 419 419 MPP_VAR_FUNCTION(0x0, "gpo", NULL, V_98DX3236_PLUS), 420 420 MPP_VAR_FUNCTION(0x4, "dev", "oe", V_98DX3236_PLUS)),
+1 -1
drivers/pinctrl/qcom/pinctrl-sm8250.c
··· 1308 1308 [178] = PINGROUP(178, WEST, _, _, _, _, _, _, _, _, _), 1309 1309 [179] = PINGROUP(179, WEST, _, _, _, _, _, _, _, _, _), 1310 1310 [180] = UFS_RESET(ufs_reset, 0xb8000), 1311 - [181] = SDC_PINGROUP(sdc2_clk, 0x7000, 14, 6), 1311 + [181] = SDC_PINGROUP(sdc2_clk, 0xb7000, 14, 6), 1312 1312 [182] = SDC_PINGROUP(sdc2_cmd, 0xb7000, 11, 3), 1313 1313 [183] = SDC_PINGROUP(sdc2_data, 0xb7000, 9, 0), 1314 1314 };
+4 -3
drivers/regulator/axp20x-regulator.c
··· 42 42 43 43 #define AXP20X_DCDC2_V_OUT_MASK GENMASK(5, 0) 44 44 #define AXP20X_DCDC3_V_OUT_MASK GENMASK(7, 0) 45 - #define AXP20X_LDO24_V_OUT_MASK GENMASK(7, 4) 45 + #define AXP20X_LDO2_V_OUT_MASK GENMASK(7, 4) 46 46 #define AXP20X_LDO3_V_OUT_MASK GENMASK(6, 0) 47 + #define AXP20X_LDO4_V_OUT_MASK GENMASK(3, 0) 47 48 #define AXP20X_LDO5_V_OUT_MASK GENMASK(7, 4) 48 49 49 50 #define AXP20X_PWR_OUT_EXTEN_MASK BIT_MASK(0) ··· 543 542 AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_DCDC3_MASK), 544 543 AXP_DESC_FIXED(AXP20X, LDO1, "ldo1", "acin", 1300), 545 544 AXP_DESC(AXP20X, LDO2, "ldo2", "ldo24in", 1800, 3300, 100, 546 - AXP20X_LDO24_V_OUT, AXP20X_LDO24_V_OUT_MASK, 545 + AXP20X_LDO24_V_OUT, AXP20X_LDO2_V_OUT_MASK, 547 546 AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_LDO2_MASK), 548 547 AXP_DESC(AXP20X, LDO3, "ldo3", "ldo3in", 700, 3500, 25, 549 548 AXP20X_LDO3_V_OUT, AXP20X_LDO3_V_OUT_MASK, 550 549 AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_LDO3_MASK), 551 550 AXP_DESC_RANGES(AXP20X, LDO4, "ldo4", "ldo24in", 552 551 axp20x_ldo4_ranges, AXP20X_LDO4_V_OUT_NUM_VOLTAGES, 553 - AXP20X_LDO24_V_OUT, AXP20X_LDO24_V_OUT_MASK, 552 + AXP20X_LDO24_V_OUT, AXP20X_LDO4_V_OUT_MASK, 554 553 AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_LDO4_MASK), 555 554 AXP_DESC_IO(AXP20X, LDO5, "ldo5", "ldo5in", 1800, 3300, 100, 556 555 AXP20X_LDO5_V_OUT, AXP20X_LDO5_V_OUT_MASK,
+8 -1
drivers/s390/block/dasd_fba.c
··· 40 40 MODULE_LICENSE("GPL"); 41 41 42 42 static struct dasd_discipline dasd_fba_discipline; 43 + static void *dasd_fba_zero_page; 43 44 44 45 struct dasd_fba_private { 45 46 struct dasd_fba_characteristics rdc_data; ··· 271 270 ccw->cmd_code = DASD_FBA_CCW_WRITE; 272 271 ccw->flags |= CCW_FLAG_SLI; 273 272 ccw->count = count; 274 - ccw->cda = (__u32) (addr_t) page_to_phys(ZERO_PAGE(0)); 273 + ccw->cda = (__u32) (addr_t) dasd_fba_zero_page; 275 274 } 276 275 277 276 /* ··· 831 830 int ret; 832 831 833 832 ASCEBC(dasd_fba_discipline.ebcname, 4); 833 + 834 + dasd_fba_zero_page = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); 835 + if (!dasd_fba_zero_page) 836 + return -ENOMEM; 837 + 834 838 ret = ccw_driver_register(&dasd_fba_driver); 835 839 if (!ret) 836 840 wait_for_device_probe(); ··· 847 841 dasd_fba_cleanup(void) 848 842 { 849 843 ccw_driver_unregister(&dasd_fba_driver); 844 + free_page((unsigned long)dasd_fba_zero_page); 850 845 } 851 846 852 847 module_init(dasd_fba_init);
+2 -1
drivers/s390/crypto/zcrypt_api.c
··· 1449 1449 if (!reqcnt) 1450 1450 return -ENOMEM; 1451 1451 zcrypt_perdev_reqcnt(reqcnt, AP_DEVICES); 1452 - if (copy_to_user((int __user *) arg, reqcnt, sizeof(reqcnt))) 1452 + if (copy_to_user((int __user *) arg, reqcnt, 1453 + sizeof(u32) * AP_DEVICES)) 1453 1454 rc = -EFAULT; 1454 1455 kfree(reqcnt); 1455 1456 return rc;
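The zcrypt_api.c fix above is the classic sizeof-on-a-pointer bug: `reqcnt` is a kmalloc'd pointer, so `sizeof(reqcnt)` yields the pointer size (4 or 8 bytes) rather than the length of the per-device counter array, silently truncating the copy_to_user(). A small illustration, with the AP_DEVICES value assumed here:

```c
#include <stdint.h>
#include <stddef.h>

#define AP_DEVICES 256		/* illustrative; the real constant lives in ap_bus.h */

/* What the buggy code computed: the size of the pointer variable. */
static size_t buggy_len(uint32_t *reqcnt)
{
	return sizeof(reqcnt);
}

/* What the fix computes: the size of the array being copied out. */
static size_t fixed_len(void)
{
	return sizeof(uint32_t) * AP_DEVICES;
}
```

On a 64-bit build the old expression copied only 8 bytes of a counter array that is 1024 bytes long for these values.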
+1 -1
drivers/s390/net/qeth_l2_main.c
··· 284 284 285 285 if (card->state == CARD_STATE_SOFTSETUP) { 286 286 qeth_clear_ipacmd_list(card); 287 - qeth_drain_output_queues(card); 288 287 card->state = CARD_STATE_DOWN; 289 288 } 290 289 291 290 qeth_qdio_clear_card(card, 0); 291 + qeth_drain_output_queues(card); 292 292 qeth_clear_working_pool_list(card); 293 293 flush_workqueue(card->event_wq); 294 294 qeth_flush_local_addrs(card);
+1 -1
drivers/s390/net/qeth_l3_main.c
··· 1168 1168 if (card->state == CARD_STATE_SOFTSETUP) { 1169 1169 qeth_l3_clear_ip_htable(card, 1); 1170 1170 qeth_clear_ipacmd_list(card); 1171 - qeth_drain_output_queues(card); 1172 1171 card->state = CARD_STATE_DOWN; 1173 1172 } 1174 1173 1175 1174 qeth_qdio_clear_card(card, 0); 1175 + qeth_drain_output_queues(card); 1176 1176 qeth_clear_working_pool_list(card); 1177 1177 flush_workqueue(card->event_wq); 1178 1178 qeth_flush_local_addrs(card);
+16 -8
drivers/scsi/iscsi_tcp.c
··· 736 736 struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 737 737 struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data; 738 738 struct sockaddr_in6 addr; 739 + struct socket *sock; 739 740 int rc; 740 741 741 742 switch(param) { ··· 748 747 spin_unlock_bh(&conn->session->frwd_lock); 749 748 return -ENOTCONN; 750 749 } 750 + sock = tcp_sw_conn->sock; 751 + sock_hold(sock->sk); 752 + spin_unlock_bh(&conn->session->frwd_lock); 753 + 751 754 if (param == ISCSI_PARAM_LOCAL_PORT) 752 - rc = kernel_getsockname(tcp_sw_conn->sock, 755 + rc = kernel_getsockname(sock, 753 756 (struct sockaddr *)&addr); 754 757 else 755 - rc = kernel_getpeername(tcp_sw_conn->sock, 758 + rc = kernel_getpeername(sock, 756 759 (struct sockaddr *)&addr); 757 - spin_unlock_bh(&conn->session->frwd_lock); 760 + sock_put(sock->sk); 758 761 if (rc < 0) 759 762 return rc; 760 763 ··· 780 775 struct iscsi_tcp_conn *tcp_conn; 781 776 struct iscsi_sw_tcp_conn *tcp_sw_conn; 782 777 struct sockaddr_in6 addr; 778 + struct socket *sock; 783 779 int rc; 784 780 785 781 switch (param) { ··· 795 789 return -ENOTCONN; 796 790 } 797 791 tcp_conn = conn->dd_data; 798 - 799 792 tcp_sw_conn = tcp_conn->dd_data; 800 - if (!tcp_sw_conn->sock) { 793 + sock = tcp_sw_conn->sock; 794 + if (!sock) { 801 795 spin_unlock_bh(&session->frwd_lock); 802 796 return -ENOTCONN; 803 797 } 804 - 805 - rc = kernel_getsockname(tcp_sw_conn->sock, 806 - (struct sockaddr *)&addr); 798 + sock_hold(sock->sk); 807 799 spin_unlock_bh(&session->frwd_lock); 800 + 801 + rc = kernel_getsockname(sock, 802 + (struct sockaddr *)&addr); 803 + sock_put(sock->sk); 808 804 if (rc < 0) 809 805 return rc; 810 806
+52 -24
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 71 71 static void lpfc_disc_flush_list(struct lpfc_vport *vport); 72 72 static void lpfc_unregister_fcfi_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *); 73 73 static int lpfc_fcf_inuse(struct lpfc_hba *); 74 + static void lpfc_mbx_cmpl_read_sparam(struct lpfc_hba *, LPFC_MBOXQ_t *); 74 75 75 76 void 76 77 lpfc_terminate_rport_io(struct fc_rport *rport) ··· 1139 1138 return; 1140 1139 } 1141 1140 1142 - 1143 1141 void 1144 1142 lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) 1145 1143 { 1146 1144 struct lpfc_vport *vport = pmb->vport; 1145 + LPFC_MBOXQ_t *sparam_mb; 1146 + struct lpfc_dmabuf *sparam_mp; 1147 + int rc; 1147 1148 1148 1149 if (pmb->u.mb.mbxStatus) 1149 1150 goto out; ··· 1170 1167 } 1171 1168 1172 1169 /* Start discovery by sending a FLOGI. port_state is identically 1173 - * LPFC_FLOGI while waiting for FLOGI cmpl. Check if sending 1174 - * the FLOGI is being deferred till after MBX_READ_SPARAM completes. 1170 + * LPFC_FLOGI while waiting for FLOGI cmpl. 1175 1171 */ 1176 1172 if (vport->port_state != LPFC_FLOGI) { 1177 - if (!(phba->hba_flag & HBA_DEFER_FLOGI)) 1173 + /* Issue MBX_READ_SPARAM to update CSPs before FLOGI if 1174 + * bb-credit recovery is in place.
1175 + */ 1176 + if (phba->bbcredit_support && phba->cfg_enable_bbcr && 1177 + !(phba->link_flag & LS_LOOPBACK_MODE)) { 1178 + sparam_mb = mempool_alloc(phba->mbox_mem_pool, 1179 + GFP_KERNEL); 1180 + if (!sparam_mb) 1181 + goto sparam_out; 1182 + 1183 + rc = lpfc_read_sparam(phba, sparam_mb, 0); 1184 + if (rc) { 1185 + mempool_free(sparam_mb, phba->mbox_mem_pool); 1186 + goto sparam_out; 1187 + } 1188 + sparam_mb->vport = vport; 1189 + sparam_mb->mbox_cmpl = lpfc_mbx_cmpl_read_sparam; 1190 + rc = lpfc_sli_issue_mbox(phba, sparam_mb, MBX_NOWAIT); 1191 + if (rc == MBX_NOT_FINISHED) { 1192 + sparam_mp = (struct lpfc_dmabuf *) 1193 + sparam_mb->ctx_buf; 1194 + lpfc_mbuf_free(phba, sparam_mp->virt, 1195 + sparam_mp->phys); 1196 + kfree(sparam_mp); 1197 + sparam_mb->ctx_buf = NULL; 1198 + mempool_free(sparam_mb, phba->mbox_mem_pool); 1199 + goto sparam_out; 1200 + } 1201 + 1202 + phba->hba_flag |= HBA_DEFER_FLOGI; 1203 + } else { 1178 1204 lpfc_initial_flogi(vport); 1205 + } 1179 1206 } else { 1180 1207 if (vport->fc_flag & FC_PT2PT) 1181 1208 lpfc_disc_start(vport); ··· 1217 1184 "0306 CONFIG_LINK mbxStatus error x%x " 1218 1185 "HBA state x%x\n", 1219 1186 pmb->u.mb.mbxStatus, vport->port_state); 1187 + sparam_out: 1220 1188 mempool_free(pmb, phba->mbox_mem_pool); 1221 1189 1222 1190 lpfc_linkdown(phba); ··· 3273 3239 lpfc_linkup(phba); 3274 3240 sparam_mbox = NULL; 3275 3241 3276 - if (!(phba->hba_flag & HBA_FCOE_MODE)) { 3277 - cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 3278 - if (!cfglink_mbox) 3279 - goto out; 3280 - vport->port_state = LPFC_LOCAL_CFG_LINK; 3281 - lpfc_config_link(phba, cfglink_mbox); 3282 - cfglink_mbox->vport = vport; 3283 - cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link; 3284 - rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT); 3285 - if (rc == MBX_NOT_FINISHED) { 3286 - mempool_free(cfglink_mbox, phba->mbox_mem_pool); 3287 - goto out; 3288 - } 3289 - } 3290 - 3291 3242 sparam_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 3292 3243 if (!sparam_mbox) 3293 3244 goto out; ··· 3293 3274 goto out; 3294 3275 } 3295 3276 3296 - if (phba->hba_flag & HBA_FCOE_MODE) { 3277 + if (!(phba->hba_flag & HBA_FCOE_MODE)) { 3278 + cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 3279 + if (!cfglink_mbox) 3280 + goto out; 3281 + vport->port_state = LPFC_LOCAL_CFG_LINK; 3282 + lpfc_config_link(phba, cfglink_mbox); 3283 + cfglink_mbox->vport = vport; 3284 + cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link; 3285 + rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT); 3286 + if (rc == MBX_NOT_FINISHED) { 3287 + mempool_free(cfglink_mbox, phba->mbox_mem_pool); 3288 + goto out; 3289 + } 3290 + } else { 3297 3291 vport->port_state = LPFC_VPORT_UNKNOWN; 3298 3292 /* 3299 3293 * Add the driver's default FCF record at FCF index 0 now. This ··· 3363 3331 } 3364 3332 /* Reset FCF roundrobin bmask for new discovery */ 3365 3333 lpfc_sli4_clear_fcf_rr_bmask(phba); 3366 - } else { 3367 - if (phba->bbcredit_support && phba->cfg_enable_bbcr && 3368 - !(phba->link_flag & LS_LOOPBACK_MODE)) 3369 - phba->hba_flag |= HBA_DEFER_FLOGI; 3370 3334 } 3371 3335 3372 3336 /* Prepare for LINK up registrations */
+18 -16
drivers/scsi/sd.c
··· 2964 2964 2965 2965 if (sdkp->device->type == TYPE_ZBC) { 2966 2966 /* Host-managed */ 2967 - q->limits.zoned = BLK_ZONED_HM; 2967 + blk_queue_set_zoned(sdkp->disk, BLK_ZONED_HM); 2968 2968 } else { 2969 2969 sdkp->zoned = (buffer[8] >> 4) & 3; 2970 - if (sdkp->zoned == 1 && !disk_has_partitions(sdkp->disk)) { 2970 + if (sdkp->zoned == 1) { 2971 2971 /* Host-aware */ 2972 - q->limits.zoned = BLK_ZONED_HA; 2972 + blk_queue_set_zoned(sdkp->disk, BLK_ZONED_HA); 2973 2973 } else { 2974 - /* 2975 - * Treat drive-managed devices and host-aware devices 2976 - * with partitions as regular block devices. 2977 - */ 2978 - q->limits.zoned = BLK_ZONED_NONE; 2979 - if (sdkp->zoned == 2 && sdkp->first_scan) 2980 - sd_printk(KERN_NOTICE, sdkp, 2981 - "Drive-managed SMR disk\n"); 2974 + /* Regular disk or drive managed disk */ 2975 + blk_queue_set_zoned(sdkp->disk, BLK_ZONED_NONE); 2982 2976 } 2983 2977 } 2984 - if (blk_queue_is_zoned(q) && sdkp->first_scan) 2978 + 2979 + if (!sdkp->first_scan) 2980 + goto out; 2981 + 2982 + if (blk_queue_is_zoned(q)) { 2985 2983 sd_printk(KERN_NOTICE, sdkp, "Host-%s zoned block device\n", 2986 2984 q->limits.zoned == BLK_ZONED_HM ? "managed" : "aware"); 2985 + } else { 2986 + if (sdkp->zoned == 1) 2987 + sd_printk(KERN_NOTICE, sdkp, 2988 + "Host-aware SMR disk used as regular disk\n"); 2989 + else if (sdkp->zoned == 2) 2990 + sd_printk(KERN_NOTICE, sdkp, 2991 + "Drive-managed SMR disk\n"); 2992 + } 2987 2993 2988 2994 out: 2989 2995 kfree(buffer); ··· 3409 3403 sdkp->ATO = 0; 3410 3404 sdkp->first_scan = 1; 3411 3405 sdkp->max_medium_access_timeouts = SD_MAX_MEDIUM_TIMEOUTS; 3412 - 3413 - error = sd_zbc_init_disk(sdkp); 3414 - if (error) 3415 - goto out_free_index; 3416 3406 3417 3407 sd_revalidate_disk(gd); 3418 3408
+1 -7
drivers/scsi/sd.h
··· 215 215 216 216 #ifdef CONFIG_BLK_DEV_ZONED 217 217 218 - int sd_zbc_init_disk(struct scsi_disk *sdkp); 219 218 void sd_zbc_release_disk(struct scsi_disk *sdkp); 220 219 int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buffer); 221 220 int sd_zbc_revalidate_zones(struct scsi_disk *sdkp); ··· 229 230 unsigned int nr_blocks); 230 231 231 232 #else /* CONFIG_BLK_DEV_ZONED */ 232 - 233 - static inline int sd_zbc_init_disk(struct scsi_disk *sdkp) 234 - { 235 - return 0; 236 - } 237 233 238 234 static inline void sd_zbc_release_disk(struct scsi_disk *sdkp) {} 239 235 ··· 253 259 static inline unsigned int sd_zbc_complete(struct scsi_cmnd *cmd, 254 260 unsigned int good_bytes, struct scsi_sense_hdr *sshdr) 255 261 { 256 - return 0; 262 + return good_bytes; 257 263 } 258 264 259 265 static inline blk_status_t sd_zbc_prepare_zone_append(struct scsi_cmnd *cmd,
+40 -26
drivers/scsi/sd_zbc.c
··· 651 651 sdkp->zone_blocks); 652 652 } 653 653 654 + static int sd_zbc_init_disk(struct scsi_disk *sdkp) 655 + { 656 + sdkp->zones_wp_offset = NULL; 657 + spin_lock_init(&sdkp->zones_wp_offset_lock); 658 + sdkp->rev_wp_offset = NULL; 659 + mutex_init(&sdkp->rev_mutex); 660 + INIT_WORK(&sdkp->zone_wp_offset_work, sd_zbc_update_wp_offset_workfn); 661 + sdkp->zone_wp_update_buf = kzalloc(SD_BUF_SIZE, GFP_KERNEL); 662 + if (!sdkp->zone_wp_update_buf) 663 + return -ENOMEM; 664 + 665 + return 0; 666 + } 667 + 668 + void sd_zbc_release_disk(struct scsi_disk *sdkp) 669 + { 670 + kvfree(sdkp->zones_wp_offset); 671 + sdkp->zones_wp_offset = NULL; 672 + kfree(sdkp->zone_wp_update_buf); 673 + sdkp->zone_wp_update_buf = NULL; 674 + } 675 + 654 676 static void sd_zbc_revalidate_zones_cb(struct gendisk *disk) 655 677 { 656 678 struct scsi_disk *sdkp = scsi_disk(disk); ··· 689 667 u32 max_append; 690 668 int ret = 0; 691 669 692 - if (!sd_is_zoned(sdkp)) 670 + /* 671 + * For all zoned disks, initialize zone append emulation data if not 672 + * already done. This is necessary also for host-aware disks used as 673 + * regular disks due to the presence of partitions as these partitions 674 + * may be deleted and the disk zoned model changed back from 675 + * BLK_ZONED_NONE to BLK_ZONED_HA. 676 + */ 677 + if (sd_is_zoned(sdkp) && !sdkp->zone_wp_update_buf) { 678 + ret = sd_zbc_init_disk(sdkp); 679 + if (ret) 680 + return ret; 681 + } 682 + 683 + /* 684 + * There is nothing to do for regular disks, including host-aware disks 685 + * that have partitions.
686 + */ 687 + if (!blk_queue_is_zoned(q)) 693 688 return 0; 694 689 695 690 /* ··· 802 763 sdkp->capacity = 0; 803 764 804 765 return ret; 805 - } 806 - 807 - int sd_zbc_init_disk(struct scsi_disk *sdkp) 808 - { 809 - if (!sd_is_zoned(sdkp)) 810 - return 0; 811 - 812 - sdkp->zones_wp_offset = NULL; 813 - spin_lock_init(&sdkp->zones_wp_offset_lock); 814 - sdkp->rev_wp_offset = NULL; 815 - mutex_init(&sdkp->rev_mutex); 816 - INIT_WORK(&sdkp->zone_wp_offset_work, sd_zbc_update_wp_offset_workfn); 817 - sdkp->zone_wp_update_buf = kzalloc(SD_BUF_SIZE, GFP_KERNEL); 818 - if (!sdkp->zone_wp_update_buf) 819 - return -ENOMEM; 820 - 821 - return 0; 822 - } 823 - 824 - void sd_zbc_release_disk(struct scsi_disk *sdkp) 825 - { 826 - kvfree(sdkp->zones_wp_offset); 827 - sdkp->zones_wp_offset = NULL; 828 - kfree(sdkp->zone_wp_update_buf); 829 - sdkp->zone_wp_update_buf = NULL; 830 766 }
+1 -1
drivers/spi/spi-bcm-qspi.c
··· 1295 1295 }, 1296 1296 { 1297 1297 .compatible = "brcm,spi-bcm-qspi", 1298 - .data = &bcm_qspi_rev_data, 1298 + .data = &bcm_qspi_no_rev_data, 1299 1299 }, 1300 1300 { 1301 1301 .compatible = "brcm,spi-bcm7216-qspi",
+1 -1
drivers/spi/spi-bcm2835.c
··· 75 75 #define DRV_NAME "spi-bcm2835" 76 76 77 77 /* define polling limits */ 78 - unsigned int polling_limit_us = 30; 78 + static unsigned int polling_limit_us = 30; 79 79 module_param(polling_limit_us, uint, 0664); 80 80 MODULE_PARM_DESC(polling_limit_us, 81 81 "time in us to run a transfer in polling mode\n");
+10 -8
drivers/spi/spi-fsl-dspi.c
··· 174 174 .fifo_size = 16, 175 175 }, 176 176 [LS2080A] = { 177 - .trans_mode = DSPI_DMA_MODE, 177 + .trans_mode = DSPI_XSPI_MODE, 178 178 .max_clock_factor = 8, 179 179 .fifo_size = 4, 180 180 }, 181 181 [LS2085A] = { 182 - .trans_mode = DSPI_DMA_MODE, 182 + .trans_mode = DSPI_XSPI_MODE, 183 183 .max_clock_factor = 8, 184 184 .fifo_size = 4, 185 185 }, 186 186 [LX2160A] = { 187 - .trans_mode = DSPI_DMA_MODE, 187 + .trans_mode = DSPI_XSPI_MODE, 188 188 .max_clock_factor = 8, 189 189 .fifo_size = 4, 190 190 }, ··· 1273 1273 void __iomem *base; 1274 1274 bool big_endian; 1275 1275 1276 - ctlr = spi_alloc_master(&pdev->dev, sizeof(struct fsl_dspi)); 1276 + dspi = devm_kzalloc(&pdev->dev, sizeof(*dspi), GFP_KERNEL); 1277 + if (!dspi) 1278 + return -ENOMEM; 1279 + 1280 + ctlr = spi_alloc_master(&pdev->dev, 0); 1277 1281 if (!ctlr) 1278 1282 return -ENOMEM; 1279 1283 1280 - dspi = spi_controller_get_devdata(ctlr); 1281 1284 dspi->pdev = pdev; 1282 1285 dspi->ctlr = ctlr; 1283 1286 ··· 1417 1414 if (dspi->devtype_data->trans_mode != DSPI_DMA_MODE) 1418 1415 ctlr->ptp_sts_supported = true; 1419 1416 1420 - platform_set_drvdata(pdev, ctlr); 1417 + platform_set_drvdata(pdev, dspi); 1421 1418 1422 1419 ret = spi_register_controller(ctlr); 1423 1420 if (ret != 0) { ··· 1440 1437 1441 1438 static int dspi_remove(struct platform_device *pdev) 1442 1439 { 1443 - struct spi_controller *ctlr = platform_get_drvdata(pdev); 1444 - struct fsl_dspi *dspi = spi_controller_get_devdata(ctlr); 1440 + struct fsl_dspi *dspi = platform_get_drvdata(pdev); 1445 1441 1446 1442 /* Disconnect from the SPI framework */ 1447 1443 spi_unregister_controller(dspi->ctlr);
+3 -2
drivers/spi/spi-fsl-espi.c
··· 564 564 static irqreturn_t fsl_espi_irq(s32 irq, void *context_data) 565 565 { 566 566 struct fsl_espi *espi = context_data; 567 - u32 events; 567 + u32 events, mask; 568 568 569 569 spin_lock(&espi->lock); 570 570 571 571 /* Get interrupt events(tx/rx) */ 572 572 events = fsl_espi_read_reg(espi, ESPI_SPIE); 573 - if (!events) { 573 + mask = fsl_espi_read_reg(espi, ESPI_SPIM); 574 + if (!(events & mask)) { 574 575 spin_unlock(&espi->lock); 575 576 return IRQ_NONE; 576 577 }
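The espi hunk above stops the handler from claiming a shared interrupt line on events that are pending but masked. A minimal standalone sketch of that decision, with `espi_irq_claim()` as a hypothetical stand-in for the check `fsl_espi_irq()` now performs:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the fsl_espi_irq() check: only claim the
 * IRQ when at least one pending event (SPIE) is also enabled in the
 * interrupt mask (SPIM). */
static bool espi_irq_claim(uint32_t events, uint32_t mask)
{
	/* A pending-but-masked event (e.g. a status bit the driver
	 * polls for) must yield IRQ_NONE so other handlers sharing
	 * the line get a chance to run. */
	return (events & mask) != 0;
}
```

Without the mask check, a polled status bit left set in SPIE would make the handler claim every interrupt on the shared line.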
+2 -1
drivers/target/target_core_transport.c
··· 1840 1840 * out unpacked_lun for the original se_cmd. 1841 1841 */ 1842 1842 if (tm_type == TMR_ABORT_TASK && (flags & TARGET_SCF_LOOKUP_LUN_FROM_TAG)) { 1843 - if (!target_lookup_lun_from_tag(se_sess, tag, &unpacked_lun)) 1843 + if (!target_lookup_lun_from_tag(se_sess, tag, 1844 + &se_cmd->orig_fe_lun)) 1844 1845 goto failure; 1845 1846 } 1846 1847
+34 -16
drivers/usb/core/driver.c
··· 269 269 if (error) 270 270 return error; 271 271 272 + /* Probe the USB device with the driver in hand, but only 273 + * defer to a generic driver in case the current USB 274 + * device driver has an id_table or a match function; i.e., 275 + * when the device driver was explicitly matched against 276 + * a device. 277 + * 278 + * If the device driver does not have either of these, 279 + * then we assume that it can bind to any device and is 280 + * not truly a more specialized/non-generic driver, so a 281 + * return value of -ENODEV should not force the device 282 + * to be handled by the generic USB driver, as there 283 + * can still be another, more specialized, device driver. 284 + * 285 + * This accommodates the usbip driver. 286 + * 287 + * TODO: What if, in the future, there are multiple 288 + * specialized USB device drivers for a particular device? 289 + * In such cases, there is a need to try all matching 290 + * specialised device drivers prior to setting the 291 + * use_generic_driver bit. 292 + */ 272 293 error = udriver->probe(udev); 273 - if (error == -ENODEV && udriver != &usb_generic_driver) { 294 + if (error == -ENODEV && udriver != &usb_generic_driver && 295 + (udriver->id_table || udriver->match)) { 274 296 udev->use_generic_driver = 1; 275 297 return -EPROBE_DEFER; 276 298 } ··· 853 831 udev = to_usb_device(dev); 854 832 udrv = to_usb_device_driver(drv); 855 833 856 - if (udrv->id_table && 857 - usb_device_match_id(udev, udrv->id_table) != NULL) { 858 - return 1; 859 - } 834 + if (udrv->id_table) 835 + return usb_device_match_id(udev, udrv->id_table) != NULL; 860 836 861 837 if (udrv->match) 862 838 return udrv->match(udev); 863 - return 0; 839 + 840 + /* If the device driver under consideration does not have a 841 + * id_table or a match function, then let the driver's probe 842 + * function decide. 
843 + */ 844 + return 1; 864 845 865 846 } else if (is_usb_interface(dev)) { 866 847 struct usb_interface *intf; ··· 930 905 return 0; 931 906 } 932 907 933 - static bool is_dev_usb_generic_driver(struct device *dev) 934 - { 935 - struct usb_device_driver *udd = dev->driver ? 936 - to_usb_device_driver(dev->driver) : NULL; 937 - 938 - return udd == &usb_generic_driver; 939 - } 940 - 941 908 static int __usb_bus_reprobe_drivers(struct device *dev, void *data) 942 909 { 943 910 struct usb_device_driver *new_udriver = data; 944 911 struct usb_device *udev; 945 912 int ret; 946 913 947 - if (!is_dev_usb_generic_driver(dev)) 914 + /* Don't reprobe if current driver isn't usb_generic_driver */ 915 + if (dev->driver != &usb_generic_driver.drvwrap.driver) 948 916 return 0; 949 917 950 918 udev = to_usb_device(dev); 951 919 if (usb_device_match_id(udev, new_udriver->id_table) == NULL && 952 - (!new_udriver->match || new_udriver->match(udev) != 0)) 920 + (!new_udriver->match || new_udriver->match(udev) == 0)) 953 921 return 0; 954 922 955 923 ret = device_reprobe(dev);
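The new device-driver match semantics in the hunk above reduce to a small decision table. This is an illustration only, with plain booleans standing in for the real `usb_device_driver` fields and callbacks:

```c
#include <stdbool.h>

/* Illustrative model of usb_device_match() for USB *device* (not
 * interface) drivers after this change: an id_table is authoritative,
 * then a match() callback is consulted, and a driver with neither
 * defers the decision to its probe() function. */
static int usb_dev_match_model(bool has_id_table, bool id_matches,
			       bool has_match_cb, bool match_result)
{
	if (has_id_table)
		return id_matches;	/* no fallthrough to match() */
	if (has_match_cb)
		return match_result;
	return 1;			/* let probe() decide */
}
```

This is why usbip no longer needs its always-true `usbip_match()` stub: having neither an id_table nor a match callback now means "match everything, let probe sort it out".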
-6
drivers/usb/usbip/stub_dev.c
··· 461 461 return; 462 462 } 463 463 464 - static bool usbip_match(struct usb_device *udev) 465 - { 466 - return true; 467 - } 468 - 469 464 #ifdef CONFIG_PM 470 465 471 466 /* These functions need usb_port_suspend and usb_port_resume, ··· 486 491 .name = "usbip-host", 487 492 .probe = stub_probe, 488 493 .disconnect = stub_disconnect, 489 - .match = usbip_match, 490 494 #ifdef CONFIG_PM 491 495 .suspend = stub_suspend, 492 496 .resume = stub_resume,
+2 -2
drivers/vhost/iotlb.c
··· 149 149 * vhost_iotlb_itree_first - return the first overlapped range 150 150 * @iotlb: the IOTLB 151 151 * @start: start of IOVA range 152 - * @end: end of IOVA range 152 + * @last: last byte in IOVA range 153 153 */ 154 154 struct vhost_iotlb_map * 155 155 vhost_iotlb_itree_first(struct vhost_iotlb *iotlb, u64 start, u64 last) ··· 162 162 * vhost_iotlb_itree_next - return the next overlapped range 163 163 * @map: the starting map node 164 164 * @start: start of IOVA range 165 - * @end: end of IOVA range 165 + * @last: last byte IOVA range 166 166 */ 167 167 struct vhost_iotlb_map * 168 168 vhost_iotlb_itree_next(struct vhost_iotlb_map *map, u64 start, u64 last)
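The comment fix above clarifies that both itree helpers take an inclusive `[start, last]` range. The overlap test such queries rely on is the usual closed-interval check; a standalone sketch (not the kernel's interval-tree code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Closed-interval overlap: [a_start, a_last] and [b_start, b_last]
 * intersect iff each range starts at or before the other one ends.
 * Using "last byte" rather than a one-past-the-end "end" avoids
 * overflow for ranges that reach UINT64_MAX. */
static bool iova_ranges_overlap(uint64_t a_start, uint64_t a_last,
				uint64_t b_start, uint64_t b_last)
{
	return a_start <= b_last && b_start <= a_last;
}
```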
+16 -14
drivers/vhost/vdpa.c
··· 353 353 struct vdpa_callback cb; 354 354 struct vhost_virtqueue *vq; 355 355 struct vhost_vring_state s; 356 - u64 __user *featurep = argp; 357 - u64 features; 358 356 u32 idx; 359 357 long r; 360 358 ··· 379 381 380 382 vq->last_avail_idx = vq_state.avail_index; 381 383 break; 382 - case VHOST_GET_BACKEND_FEATURES: 383 - features = VHOST_VDPA_BACKEND_FEATURES; 384 - if (copy_to_user(featurep, &features, sizeof(features))) 385 - return -EFAULT; 386 - return 0; 387 - case VHOST_SET_BACKEND_FEATURES: 388 - if (copy_from_user(&features, featurep, sizeof(features))) 389 - return -EFAULT; 390 - if (features & ~VHOST_VDPA_BACKEND_FEATURES) 391 - return -EOPNOTSUPP; 392 - vhost_set_backend_features(&v->vdev, features); 393 - return 0; 394 384 } 395 385 396 386 r = vhost_vring_ioctl(&v->vdev, cmd, argp); ··· 426 440 struct vhost_vdpa *v = filep->private_data; 427 441 struct vhost_dev *d = &v->vdev; 428 442 void __user *argp = (void __user *)arg; 443 + u64 __user *featurep = argp; 444 + u64 features; 429 445 long r; 446 + 447 + if (cmd == VHOST_SET_BACKEND_FEATURES) { 448 + r = copy_from_user(&features, featurep, sizeof(features)); 449 + if (r) 450 + return r; 451 + if (features & ~VHOST_VDPA_BACKEND_FEATURES) 452 + return -EOPNOTSUPP; 453 + vhost_set_backend_features(&v->vdev, features); 454 + return 0; 455 + } 430 456 431 457 mutex_lock(&d->mutex); 432 458 ··· 473 475 break; 474 476 case VHOST_VDPA_SET_CONFIG_CALL: 475 477 r = vhost_vdpa_set_config_call(v, argp); 478 + break; 479 + case VHOST_GET_BACKEND_FEATURES: 480 + features = VHOST_VDPA_BACKEND_FEATURES; 481 + r = copy_to_user(featurep, &features, sizeof(features)); 476 482 break; 477 483 default: 478 484 r = vhost_dev_ioctl(&v->vdev, cmd, argp);
+21 -8
drivers/xen/events/events_base.c
··· 92 92 /* Xen will never allocate port zero for any purpose. */ 93 93 #define VALID_EVTCHN(chn) ((chn) != 0) 94 94 95 + static struct irq_info *legacy_info_ptrs[NR_IRQS_LEGACY]; 96 + 95 97 static struct irq_chip xen_dynamic_chip; 96 98 static struct irq_chip xen_percpu_chip; 97 99 static struct irq_chip xen_pirq_chip; ··· 158 156 /* Get info for IRQ */ 159 157 struct irq_info *info_for_irq(unsigned irq) 160 158 { 161 - return irq_get_chip_data(irq); 159 + if (irq < nr_legacy_irqs()) 160 + return legacy_info_ptrs[irq]; 161 + else 162 + return irq_get_chip_data(irq); 163 + } 164 + 165 + static void set_info_for_irq(unsigned int irq, struct irq_info *info) 166 + { 167 + if (irq < nr_legacy_irqs()) 168 + legacy_info_ptrs[irq] = info; 169 + else 170 + irq_set_chip_data(irq, info); 162 171 } 163 172 164 173 /* Constructors for packed IRQ information. */ ··· 390 377 info->type = IRQT_UNBOUND; 391 378 info->refcnt = -1; 392 379 393 - irq_set_chip_data(irq, info); 380 + set_info_for_irq(irq, info); 394 381 395 382 list_add_tail(&info->list, &xen_irq_list_head); 396 383 } ··· 439 426 440 427 static void xen_free_irq(unsigned irq) 441 428 { 442 - struct irq_info *info = irq_get_chip_data(irq); 429 + struct irq_info *info = info_for_irq(irq); 443 430 444 431 if (WARN_ON(!info)) 445 432 return; 446 433 447 434 list_del(&info->list); 448 435 449 - irq_set_chip_data(irq, NULL); 436 + set_info_for_irq(irq, NULL); 450 437 451 438 WARN_ON(info->refcnt > 0); 452 439 ··· 616 603 static void __unbind_from_irq(unsigned int irq) 617 604 { 618 605 evtchn_port_t evtchn = evtchn_from_irq(irq); 619 - struct irq_info *info = irq_get_chip_data(irq); 606 + struct irq_info *info = info_for_irq(irq); 620 607 621 608 if (info->refcnt > 0) { 622 609 info->refcnt--; ··· 1121 1108 1122 1109 void unbind_from_irqhandler(unsigned int irq, void *dev_id) 1123 1110 { 1124 - struct irq_info *info = irq_get_chip_data(irq); 1111 + struct irq_info *info = info_for_irq(irq); 1125 1112 1126 1113 if (WARN_ON(!info)) 1127 1114 return;
··· 1155 1142 if (irq == -1) 1156 1143 return -ENOENT; 1157 1144 1158 - info = irq_get_chip_data(irq); 1145 + info = info_for_irq(irq); 1159 1146 1160 1147 if (!info) 1161 1148 return -ENOENT; ··· 1183 1170 if (irq == -1) 1184 1171 goto done; 1185 1172 1186 - info = irq_get_chip_data(irq); 1173 + info = info_for_irq(irq); 1187 1174 1188 1175 if (!info) 1189 1176 goto done;
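The events_base.c change keeps `irq_info` pointers for legacy IRQs in a private array instead of the irq core's per-descriptor chip data, and routes every lookup through one helper. A toy model of that split lookup, with plain arrays standing in for `legacy_info_ptrs[]` and the chip-data storage:

```c
#include <stddef.h>

#define NR_LEGACY_SKETCH 16	/* stand-in for nr_legacy_irqs() */
#define NR_IRQS_SKETCH   64

/* toy storage standing in for legacy_info_ptrs[] and irq chip data */
static void *legacy_ptrs[NR_LEGACY_SKETCH];
static void *chip_data[NR_IRQS_SKETCH];

static void set_info_sketch(unsigned int irq, void *info)
{
	if (irq < NR_LEGACY_SKETCH)
		legacy_ptrs[irq] = info;	/* private array */
	else
		chip_data[irq] = info;		/* irq core storage */
}

static void *info_for_irq_sketch(unsigned int irq)
{
	return irq < NR_LEGACY_SKETCH ? legacy_ptrs[irq] : chip_data[irq];
}
```

The point of the indirection is that legacy IRQ descriptors may be reused by other code, so their chip data cannot be trusted to hold the Xen `irq_info`; the private array side-steps that.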
+1 -1
fs/autofs/waitq.c
··· 53 53 54 54 mutex_lock(&sbi->pipe_mutex); 55 55 while (bytes) { 56 - wr = kernel_write(file, data, bytes, &file->f_pos); 56 + wr = __kernel_write(file, data, bytes, NULL); 57 57 if (wr <= 0) 58 58 break; 59 59 data += wr;
+44 -2
fs/btrfs/dev-replace.c
··· 599 599 wake_up(&fs_info->dev_replace.replace_wait); 600 600 } 601 601 602 + /* 603 + * When finishing the device replace, before swapping the source device with the 604 + * target device we must update the chunk allocation state in the target device, 605 + * as it is empty because replace works by directly copying the chunks and not 606 + * through the normal chunk allocation path. 607 + */ 608 + static int btrfs_set_target_alloc_state(struct btrfs_device *srcdev, 609 + struct btrfs_device *tgtdev) 610 + { 611 + struct extent_state *cached_state = NULL; 612 + u64 start = 0; 613 + u64 found_start; 614 + u64 found_end; 615 + int ret = 0; 616 + 617 + lockdep_assert_held(&srcdev->fs_info->chunk_mutex); 618 + 619 + while (!find_first_extent_bit(&srcdev->alloc_state, start, 620 + &found_start, &found_end, 621 + CHUNK_ALLOCATED, &cached_state)) { 622 + ret = set_extent_bits(&tgtdev->alloc_state, found_start, 623 + found_end, CHUNK_ALLOCATED); 624 + if (ret) 625 + break; 626 + start = found_end + 1; 627 + } 628 + 629 + free_extent_state(cached_state); 630 + return ret; 631 + } 632 + 602 633 static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info, 603 634 int scrub_ret) 604 635 { ··· 704 673 dev_replace->time_stopped = ktime_get_real_seconds(); 705 674 dev_replace->item_needs_writeback = 1; 706 675 707 - /* replace old device with new one in mapping tree */ 676 + /* 677 + * Update allocation state in the new device and replace the old device 678 + * with the new one in the mapping tree. 
679 + */ 708 680 if (!scrub_ret) { 681 + scrub_ret = btrfs_set_target_alloc_state(src_device, tgt_device); 682 + if (scrub_ret) 683 + goto error; 709 684 btrfs_dev_replace_update_device_in_mapping_tree(fs_info, 710 685 src_device, 711 686 tgt_device); ··· 722 685 btrfs_dev_name(src_device), 723 686 src_device->devid, 724 687 rcu_str_deref(tgt_device->name), scrub_ret); 688 + error: 725 689 up_write(&dev_replace->rwsem); 726 690 mutex_unlock(&fs_info->chunk_mutex); 727 691 mutex_unlock(&fs_info->fs_devices->device_list_mutex); ··· 783 745 /* replace the sysfs entry */ 784 746 btrfs_sysfs_remove_devices_dir(fs_info->fs_devices, src_device); 785 747 btrfs_sysfs_update_devid(tgt_device); 786 - btrfs_rm_dev_replace_free_srcdev(src_device); 748 + if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &src_device->dev_state)) 749 + btrfs_scratch_superblocks(fs_info, src_device->bdev, 750 + src_device->name->str); 787 751 788 752 /* write back the superblocks */ 789 753 trans = btrfs_start_transaction(root, 0); ··· 793 753 btrfs_commit_transaction(trans); 794 754 795 755 mutex_unlock(&dev_replace->lock_finishing_cancel_unmount); 756 + 757 + btrfs_rm_dev_replace_free_srcdev(src_device); 796 758 797 759 return 0; 798 760 }
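The new `btrfs_set_target_alloc_state()` walks the source device's `CHUNK_ALLOCATED` ranges and mirrors each one onto the target. The loop shape can be sketched over a toy sorted range list; `find_first_range()` below is a hypothetical stand-in for `find_first_extent_bit()`, and the extent-io-tree API is elided:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct range { uint64_t start, end; };

/* hypothetical stand-in for find_first_extent_bit(): first range in
 * the sorted array src[] whose end is at or after 'from' */
static bool find_first_range(const struct range *src, size_t n,
			     uint64_t from, struct range *out)
{
	for (size_t i = 0; i < n; i++) {
		if (src[i].end >= from) {
			*out = src[i];
			return true;
		}
	}
	return false;
}

/* mirror every allocated range from src onto tgt, the way
 * btrfs_set_target_alloc_state() replays them via set_extent_bits() */
static size_t copy_alloc_state(const struct range *src, size_t n,
			       struct range *tgt)
{
	struct range found;
	uint64_t from = 0;
	size_t copied = 0;

	while (find_first_range(src, n, from, &found)) {
		tgt[copied++] = found;
		from = found.end + 1;	/* resume past the last hit */
	}
	return copied;
}
```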
+5 -6
fs/btrfs/disk-io.c
··· 636 636 csum_tree_block(eb, result); 637 637 638 638 if (memcmp_extent_buffer(eb, result, 0, csum_size)) { 639 - u32 val; 640 - u32 found = 0; 641 - 642 - memcpy(&found, result, csum_size); 639 + u8 val[BTRFS_CSUM_SIZE] = { 0 }; 643 640 644 641 read_extent_buffer(eb, &val, 0, csum_size); 645 642 btrfs_warn_rl(fs_info, 646 - "%s checksum verify failed on %llu wanted %x found %x level %d", 643 + "%s checksum verify failed on %llu wanted " CSUM_FMT " found " CSUM_FMT " level %d", 647 644 fs_info->sb->s_id, eb->start, 648 - val, found, btrfs_header_level(eb)); 645 + CSUM_FMT_VALUE(csum_size, val), 646 + CSUM_FMT_VALUE(csum_size, result), 647 + btrfs_header_level(eb)); 649 648 ret = -EUCLEAN; 650 649 goto err; 651 650 }
+10 -6
fs/btrfs/sysfs.c
··· 1170 1170 disk_kobj->name); 1171 1171 } 1172 1172 1173 - kobject_del(&one_device->devid_kobj); 1174 - kobject_put(&one_device->devid_kobj); 1173 + if (one_device->devid_kobj.state_initialized) { 1174 + kobject_del(&one_device->devid_kobj); 1175 + kobject_put(&one_device->devid_kobj); 1175 1176 1176 - wait_for_completion(&one_device->kobj_unregister); 1177 + wait_for_completion(&one_device->kobj_unregister); 1178 + } 1177 1179 1178 1180 return 0; 1179 1181 } ··· 1188 1186 sysfs_remove_link(fs_devices->devices_kobj, 1189 1187 disk_kobj->name); 1190 1188 } 1191 - kobject_del(&one_device->devid_kobj); 1192 - kobject_put(&one_device->devid_kobj); 1189 + if (one_device->devid_kobj.state_initialized) { 1190 + kobject_del(&one_device->devid_kobj); 1191 + kobject_put(&one_device->devid_kobj); 1193 1192 1194 - wait_for_completion(&one_device->kobj_unregister); 1193 + wait_for_completion(&one_device->kobj_unregister); 1194 + } 1195 1195 } 1196 1196 1197 1197 return 0;
+5 -8
fs/btrfs/volumes.c
··· 1999 1999 return num_devices; 2000 2000 } 2001 2001 2002 - static void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info, 2003 - struct block_device *bdev, 2004 - const char *device_path) 2002 + void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info, 2003 + struct block_device *bdev, 2004 + const char *device_path) 2005 2005 { 2006 2006 struct btrfs_super_block *disk_super; 2007 2007 int copy_num; ··· 2224 2224 struct btrfs_fs_info *fs_info = srcdev->fs_info; 2225 2225 struct btrfs_fs_devices *fs_devices = srcdev->fs_devices; 2226 2226 2227 - if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &srcdev->dev_state)) { 2228 - /* zero out the old super if it is writable */ 2229 - btrfs_scratch_superblocks(fs_info, srcdev->bdev, 2230 - srcdev->name->str); 2231 - } 2227 + mutex_lock(&uuid_mutex); 2232 2228 2233 2229 btrfs_close_bdev(srcdev); 2234 2230 synchronize_rcu(); ··· 2254 2258 close_fs_devices(fs_devices); 2255 2259 free_fs_devices(fs_devices); 2256 2260 } 2261 + mutex_unlock(&uuid_mutex); 2257 2262 } 2258 2263 2259 2264 void btrfs_destroy_dev_replace_tgtdev(struct btrfs_device *tgtdev)
+3
fs/btrfs/volumes.h
··· 573 573 void btrfs_reset_fs_info_ptr(struct btrfs_fs_info *fs_info); 574 574 bool btrfs_check_rw_degradable(struct btrfs_fs_info *fs_info, 575 575 struct btrfs_device *failing_dev); 576 + void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info, 577 + struct block_device *bdev, 578 + const char *device_path); 576 579 577 580 int btrfs_bg_type_to_factor(u64 flags); 578 581 const char *btrfs_bg_type_to_raid_name(u64 flags);
+31 -41
fs/eventpoll.c
··· 218 218 struct file *file; 219 219 220 220 /* used to optimize loop detection check */ 221 - struct list_head visited_list_link; 222 - int visited; 221 + u64 gen; 223 222 224 223 #ifdef CONFIG_NET_RX_BUSY_POLL 225 224 /* used to track busy poll napi_id */ ··· 273 274 */ 274 275 static DEFINE_MUTEX(epmutex); 275 276 277 + static u64 loop_check_gen = 0; 278 + 276 279 /* Used to check for epoll file descriptor inclusion loops */ 277 280 static struct nested_calls poll_loop_ncalls; 278 281 ··· 283 282 284 283 /* Slab cache used to allocate "struct eppoll_entry" */ 285 284 static struct kmem_cache *pwq_cache __read_mostly; 286 - 287 - /* Visited nodes during ep_loop_check(), so we can unset them when we finish */ 288 - static LIST_HEAD(visited_list); 289 285 290 286 /* 291 287 * List of files with newly added links, where we may need to limit the number ··· 1448 1450 1449 1451 static int ep_create_wakeup_source(struct epitem *epi) 1450 1452 { 1451 - const char *name; 1453 + struct name_snapshot n; 1452 1454 struct wakeup_source *ws; 1453 1455 1454 1456 if (!epi->ep->ws) { ··· 1457 1459 return -ENOMEM; 1458 1460 } 1459 1461 1460 - name = epi->ffd.file->f_path.dentry->d_name.name; 1461 - ws = wakeup_source_register(NULL, name); 1462 + take_dentry_name_snapshot(&n, epi->ffd.file->f_path.dentry); 1463 + ws = wakeup_source_register(NULL, n.name.name); 1464 + release_dentry_name_snapshot(&n); 1462 1465 1463 1466 if (!ws) 1464 1467 return -ENOMEM; ··· 1521 1522 RCU_INIT_POINTER(epi->ws, NULL); 1522 1523 } 1523 1524 1525 + /* Add the current item to the list of active epoll hook for this file */ 1526 + spin_lock(&tfile->f_lock); 1527 + list_add_tail_rcu(&epi->fllink, &tfile->f_ep_links); 1528 + spin_unlock(&tfile->f_lock); 1529 + 1530 + /* 1531 + * Add the current item to the RB tree. All RB tree operations are 1532 + * protected by "mtx", and ep_insert() is called with "mtx" held. 
 1533 + */ 1534 + ep_rbtree_insert(ep, epi); 1535 + 1536 + /* now check if we've created too many backpaths */ 1537 + error = -EINVAL; 1538 + if (full_check && reverse_path_check()) 1539 + goto error_remove_epi; 1540 + 1524 1541 /* Initialize the poll table using the queue callback */ 1525 1542 epq.epi = epi; 1526 1543 init_poll_funcptr(&epq.pt, ep_ptable_queue_proc); ··· 1558 1543 error = -ENOMEM; 1559 1544 if (epi->nwait < 0) 1560 1545 goto error_unregister; 1561 - 1562 - /* Add the current item to the list of active epoll hook for this file */ 1563 - spin_lock(&tfile->f_lock); 1564 - list_add_tail_rcu(&epi->fllink, &tfile->f_ep_links); 1565 - spin_unlock(&tfile->f_lock); 1566 - 1567 - /* 1568 - * Add the current item to the RB tree. All RB tree operations are 1569 - * protected by "mtx", and ep_insert() is called with "mtx" held. 1570 - */ 1571 - ep_rbtree_insert(ep, epi); 1572 - 1573 - /* now check if we've created too many backpaths */ 1574 - error = -EINVAL; 1575 - if (full_check && reverse_path_check()) 1576 - goto error_remove_epi; 1577 1546 1578 1547 /* We have to drop the new item inside our item list to keep track of it */ 1579 1548 write_lock_irq(&ep->lock); ··· 1587 1588 1588 1589 return 0; 1589 1590 1591 + error_unregister: 1592 + ep_unregister_pollwait(ep, epi); 1590 1593 error_remove_epi: 1591 1594 spin_lock(&tfile->f_lock); 1592 1595 list_del_rcu(&epi->fllink); 1593 1596 spin_unlock(&tfile->f_lock); 1594 1597 1595 1598 rb_erase_cached(&epi->rbn, &ep->rbr); 1596 - 1597 - error_unregister: 1598 - ep_unregister_pollwait(ep, epi); 1599 1599 1600 1600 /* 1601 1601 * We need to do this because an event could have been arrived on some ··· 1970 1972 struct epitem *epi; 1971 1973 1972 1974 mutex_lock_nested(&ep->mtx, call_nests + 1); 1973 - ep->visited = 1; 1974 - list_add(&ep->visited_list_link, &visited_list); 1975 + ep->gen = loop_check_gen; 1975 1976 for (rbp = rb_first_cached(&ep->rbr); rbp; rbp = rb_next(rbp)) { 1976 1977 epi = rb_entry(rbp, struct epitem, rbn);
1977 1978 if (unlikely(is_file_epoll(epi->ffd.file))) { 1978 1979 ep_tovisit = epi->ffd.file->private_data; 1979 - if (ep_tovisit->visited) 1980 + if (ep_tovisit->gen == loop_check_gen) 1980 1981 continue; 1981 1982 error = ep_call_nested(&poll_loop_ncalls, 1982 1983 ep_loop_check_proc, epi->ffd.file, ··· 2016 2019 */ 2017 2020 static int ep_loop_check(struct eventpoll *ep, struct file *file) 2018 2021 { 2019 - int ret; 2020 - struct eventpoll *ep_cur, *ep_next; 2021 - 2022 - ret = ep_call_nested(&poll_loop_ncalls, 2022 + return ep_call_nested(&poll_loop_ncalls, 2023 2023 ep_loop_check_proc, file, ep, current); 2024 - /* clear visited list */ 2025 - list_for_each_entry_safe(ep_cur, ep_next, &visited_list, 2026 - visited_list_link) { 2027 - ep_cur->visited = 0; 2028 - list_del(&ep_cur->visited_list_link); 2029 - } 2030 - return ret; 2031 2024 } 2032 2025 2033 2026 static void clear_tfile_check_list(void) ··· 2182 2195 goto error_tgt_fput; 2183 2196 if (op == EPOLL_CTL_ADD) { 2184 2197 if (!list_empty(&f.file->f_ep_links) || 2198 + ep->gen == loop_check_gen || 2185 2199 is_file_epoll(tf.file)) { 2186 2200 mutex_unlock(&ep->mtx); 2187 2201 error = epoll_mutex_lock(&epmutex, 0, nonblock); 2188 2202 if (error) 2189 2203 goto error_tgt_fput; 2204 + loop_check_gen++; 2190 2205 full_check = 1; 2191 2206 if (is_file_epoll(tf.file)) { 2192 2207 error = -ELOOP; ··· 2252 2263 error_tgt_fput: 2253 2264 if (full_check) { 2254 2265 clear_tfile_check_list(); 2266 + loop_check_gen++; 2255 2267 mutex_unlock(&epmutex); 2256 2268 }
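The eventpoll change replaces the "visited list plus cleanup pass" with a generation counter: starting a new loop check only bumps `loop_check_gen`, and stale marks from earlier checks simply compare unequal. A standalone sketch of the pattern, with illustrative names rather than the eventpoll structures:

```c
#include <stdbool.h>
#include <stdint.h>

static uint64_t check_gen;	/* bumped once per traversal */

struct node {
	uint64_t gen;		/* generation this node was last seen in */
};

/* Returns true the first time a node is seen in the current
 * traversal, false on revisits. No "clear visited flags" pass is
 * needed between traversals. */
static bool mark_visited(struct node *n)
{
	if (n->gen == check_gen)
		return false;
	n->gen = check_gen;
	return true;
}
```

The trade is one extra `u64` per node against never having to walk a visited list under the global mutex, which is exactly what the removed `visited_list` code did.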
+12 -13
fs/fuse/file.c
··· 3091 3091 ssize_t ret = 0; 3092 3092 struct file *file = iocb->ki_filp; 3093 3093 struct fuse_file *ff = file->private_data; 3094 - bool async_dio = ff->fc->async_dio; 3095 3094 loff_t pos = 0; 3096 3095 struct inode *inode; 3097 3096 loff_t i_size; 3098 - size_t count = iov_iter_count(iter); 3097 + size_t count = iov_iter_count(iter), shortened = 0; 3099 3098 loff_t offset = iocb->ki_pos; 3100 3099 struct fuse_io_priv *io; 3101 3100 ··· 3102 3103 inode = file->f_mapping->host; 3103 3104 i_size = i_size_read(inode); 3104 3105 3105 - if ((iov_iter_rw(iter) == READ) && (offset > i_size)) 3106 + if ((iov_iter_rw(iter) == READ) && (offset >= i_size)) 3106 3107 return 0; 3107 - 3108 - /* optimization for short read */ 3109 - if (async_dio && iov_iter_rw(iter) != WRITE && offset + count > i_size) { 3110 - if (offset >= i_size) 3111 - return 0; 3112 - iov_iter_truncate(iter, fuse_round_up(ff->fc, i_size - offset)); 3113 - count = iov_iter_count(iter); 3114 - } 3115 3108 3116 3109 io = kmalloc(sizeof(struct fuse_io_priv), GFP_KERNEL); 3117 3110 if (!io) ··· 3120 3129 * By default, we want to optimize all I/Os with async request 3121 3130 * submission to the client filesystem if supported. 3122 3131 */ 3123 - io->async = async_dio; 3132 + io->async = ff->fc->async_dio; 3124 3133 io->iocb = iocb; 3125 3134 io->blocking = is_sync_kiocb(iocb); 3135 + 3136 + /* optimization for short read */ 3137 + if (io->async && !io->write && offset + count > i_size) { 3138 + iov_iter_truncate(iter, fuse_round_up(ff->fc, i_size - offset)); 3139 + shortened = count - iov_iter_count(iter); 3140 + count -= shortened; 3141 + } 3126 3142 3127 3143 /* 3128 3144 * We cannot asynchronously extend the size of a file. 3129 3145 * In such case the aio will behave exactly like sync io. 
3130 3146 */ 3131 - if ((offset + count > i_size) && iov_iter_rw(iter) == WRITE) 3147 + if ((offset + count > i_size) && io->write) 3132 3148 io->blocking = true; 3133 3149 3134 3150 if (io->async && io->blocking) { ··· 3153 3155 } else { 3154 3156 ret = __fuse_direct_read(io, iter, &pos); 3155 3157 } 3158 + iov_iter_reexpand(iter, iov_iter_count(iter) + shortened); 3156 3159 3157 3160 if (io->async) { 3158 3161 bool blocking = io->blocking;
+63 -23
fs/io_uring.c
··· 1753 1753 struct io_ring_ctx *ctx = req->ctx; 1754 1754 int ret, notify; 1755 1755 1756 + if (tsk->flags & PF_EXITING) 1757 + return -ESRCH; 1758 + 1756 1759 /* 1757 1760 * SQPOLL kernel thread doesn't need notification, just a wakeup. For 1758 1761 * all other cases, use TWA_SIGNAL unconditionally to ensure we're ··· 1790 1787 static void io_req_task_cancel(struct callback_head *cb) 1791 1788 { 1792 1789 struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work); 1790 + struct io_ring_ctx *ctx = req->ctx; 1793 1791 1794 1792 __io_req_task_cancel(req, -ECANCELED); 1793 + percpu_ref_put(&ctx->refs); 1795 1794 } 1796 1795 1797 1796 static void __io_req_task_submit(struct io_kiocb *req) ··· 2015 2010 2016 2011 static inline bool io_run_task_work(void) 2017 2012 { 2013 + /* 2014 + * Not safe to run on exiting task, and the task_work handling will 2015 + * not add work to such a task. 2016 + */ 2017 + if (unlikely(current->flags & PF_EXITING)) 2018 + return false; 2018 2019 if (current->task_works) { 2019 2020 __set_current_state(TASK_RUNNING); 2020 2021 task_work_run(); ··· 2294 2283 goto end_req; 2295 2284 } 2296 2285 2297 - ret = io_import_iovec(rw, req, &iovec, &iter, false); 2298 - if (ret < 0) 2299 - goto end_req; 2300 - ret = io_setup_async_rw(req, iovec, inline_vecs, &iter, false); 2301 - if (!ret) 2286 + if (!req->io) { 2287 + ret = io_import_iovec(rw, req, &iovec, &iter, false); 2288 + if (ret < 0) 2289 + goto end_req; 2290 + ret = io_setup_async_rw(req, iovec, inline_vecs, &iter, false); 2291 + if (!ret) 2292 + return true; 2293 + kfree(iovec); 2294 + } else { 2302 2295 return true; 2303 - kfree(iovec); 2296 + } 2304 2297 end_req: 2305 2298 req_set_fail_links(req); 2306 2299 io_req_complete(req, ret); ··· 3049 3034 if (!wake_page_match(wpq, key)) 3050 3035 return 0; 3051 3036 3037 + req->rw.kiocb.ki_flags &= ~IOCB_WAITQ; 3052 3038 list_del_init(&wait->entry); 3053 3039 3054 3040 init_task_work(&req->task_work, io_req_task_submit); ··· 3107 3091 wait->wait.flags = 0;
3108 3092 INIT_LIST_HEAD(&wait->wait.entry); 3109 3093 kiocb->ki_flags |= IOCB_WAITQ; 3094 + kiocb->ki_flags &= ~IOCB_NOWAIT; 3110 3095 kiocb->ki_waitq = wait; 3111 3096 3112 3097 io_get_req_task(req); ··· 3132 3115 struct iov_iter __iter, *iter = &__iter; 3133 3116 ssize_t io_size, ret, ret2; 3134 3117 size_t iov_count; 3118 + bool no_async; 3135 3119 3136 3120 if (req->io) 3137 3121 iter = &req->io->rw.iter; ··· 3150 3132 kiocb->ki_flags &= ~IOCB_NOWAIT; 3151 3133 3152 3134 /* If the file doesn't support async, just async punt */ 3153 - if (force_nonblock && !io_file_supports_async(req->file, READ)) 3135 + no_async = force_nonblock && !io_file_supports_async(req->file, READ); 3136 + if (no_async) 3154 3137 goto copy_iov; 3155 3138 3156 3139 ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), iov_count); ··· 3174 3155 goto done; 3175 3156 /* some cases will consume bytes even on error returns */ 3176 3157 iov_iter_revert(iter, iov_count - iov_iter_count(iter)); 3177 - ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false); 3178 - if (ret) 3179 - goto out_free; 3180 - return -EAGAIN; 3158 + ret = 0; 3159 + goto copy_iov; 3181 3160 } else if (ret < 0) { 3182 3161 /* make sure -ERESTARTSYS -> -EINTR is done */ 3183 3162 goto done; ··· 3193 3176 ret = ret2; 3194 3177 goto out_free; 3195 3178 } 3179 + if (no_async) 3180 + return -EAGAIN; 3196 3181 /* it's copied and will be cleaned with ->io */ 3197 3182 iovec = NULL; 3198 3183 /* now use our persistent iterator, if we aren't already */ ··· 3527 3508 const char __user *fname; 3528 3509 int ret; 3529 3510 3530 - if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL))) 3531 - return -EINVAL; 3532 3511 if (unlikely(sqe->ioprio || sqe->buf_index)) 3533 3512 return -EINVAL; 3534 3513 if (unlikely(req->flags & REQ_F_FIXED_FILE)) ··· 3553 3536 { 3554 3537 u64 flags, mode; 3555 3538 3539 + if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL))) 3540 + return -EINVAL;
3556 3541 if (req->flags & REQ_F_NEED_CLEANUP) 3557 3542 return 0; 3558 3543 mode = READ_ONCE(sqe->len); ··· 3569 3550 size_t len; 3570 3551 int ret; 3571 3552 3553 + if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL))) 3554 + return -EINVAL; 3572 3555 if (req->flags & REQ_F_NEED_CLEANUP) 3573 3556 return 0; 3574 3557 how = u64_to_user_ptr(READ_ONCE(sqe->addr2)); ··· 3788 3767 #if defined(CONFIG_EPOLL) 3789 3768 if (sqe->ioprio || sqe->buf_index) 3790 3769 return -EINVAL; 3791 - if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) 3770 + if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL))) 3792 3771 return -EINVAL; 3793 3772 3794 3773 req->epoll.epfd = READ_ONCE(sqe->fd); ··· 3903 3882 3904 3883 static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) 3905 3884 { 3906 - if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) 3885 + if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL))) 3907 3886 return -EINVAL; 3908 3887 if (sqe->ioprio || sqe->buf_index) 3909 3888 return -EINVAL; ··· 4745 4724 if (mask && !(mask & poll->events)) 4746 4725 return 0; 4747 4726 4727 + list_del_init(&wait->entry); 4728 + 4748 4729 if (poll && poll->head) { 4749 4730 bool done; 4750 4731 ··· 5422 5399 static int io_files_update_prep(struct io_kiocb *req, 5423 5400 const struct io_uring_sqe *sqe) 5424 5401 { 5402 + if (unlikely(req->ctx->flags & IORING_SETUP_SQPOLL)) 5403 + return -EINVAL; 5425 5404 if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT))) 5426 5405 return -EINVAL; 5427 5406 if (sqe->ioprio || sqe->rw_flags) ··· 5473 5448 ret = io_prep_work_files(req); 5474 5449 if (unlikely(ret)) 5475 5450 return ret; 5451 + 5452 + io_prep_async_work(req); 5476 5453 5477 5454 switch (req->opcode) { 5478 5455 case IORING_OP_NOP: ··· 5672 5645 case IORING_OP_TEE: 5673 5646 io_put_file(req, req->splice.file_in, 5674 5647 (req->splice.flags & SPLICE_F_FD_IN_FIXED)); 5648 + break;
5649 + case IORING_OP_OPENAT: 5650 + case IORING_OP_OPENAT2: 5651 + if (req->open.filename) 5652 + putname(req->open.filename); 5675 5653 break; 5676 5654 } 5677 5655 req->flags &= ~REQ_F_NEED_CLEANUP; ··· 6355 6323 struct io_ring_ctx *ctx, unsigned int max_ios) 6356 6324 { 6357 6325 blk_start_plug(&state->plug); 6358 - #ifdef CONFIG_BLOCK 6359 - state->plug.nowait = true; 6360 - #endif 6361 6326 state->comp.nr = 0; 6362 6327 INIT_LIST_HEAD(&state->comp.list); 6363 6328 state->comp.ctx = ctx; ··· 8209 8180 /* cancel this request, or head link requests */ 8210 8181 io_attempt_cancel(ctx, cancel_req); 8211 8182 io_put_req(cancel_req); 8183 + /* cancellations _may_ trigger task work */ 8184 + io_run_task_work(); 8212 8185 schedule(); 8213 8186 finish_wait(&ctx->inflight_wait, &wait); 8214 8187 } ··· 8416 8385 8417 8386 static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m) 8418 8387 { 8388 + bool has_lock; 8419 8389 int i; 8420 8390 8421 - mutex_lock(&ctx->uring_lock); 8391 + /* 8392 + * Avoid ABBA deadlock between the seq lock and the io_uring mutex, 8393 + * since fdinfo case grabs it in the opposite direction of normal use 8394 + * cases. If we fail to get the lock, we just don't iterate any 8395 + * structures that could be going away outside the io_uring mutex.
8396 + */ 8397 + has_lock = mutex_trylock(&ctx->uring_lock); 8398 + 8422 8399 seq_printf(m, "UserFiles:\t%u\n", ctx->nr_user_files); 8423 - for (i = 0; i < ctx->nr_user_files; i++) { 8400 + for (i = 0; has_lock && i < ctx->nr_user_files; i++) { 8424 8401 struct fixed_file_table *table; 8425 8402 struct file *f; 8426 8403 ··· 8440 8401 seq_printf(m, "%5u: <none>\n", i); 8441 8402 } 8442 8403 seq_printf(m, "UserBufs:\t%u\n", ctx->nr_user_bufs); 8443 - for (i = 0; i < ctx->nr_user_bufs; i++) { 8404 + for (i = 0; has_lock && i < ctx->nr_user_bufs; i++) { 8444 8405 struct io_mapped_ubuf *buf = &ctx->user_bufs[i]; 8445 8406 8446 8407 seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf, 8447 8408 (unsigned int) buf->len); 8448 8409 } 8449 - if (!idr_is_empty(&ctx->personality_idr)) { 8410 + if (has_lock && !idr_is_empty(&ctx->personality_idr)) { 8450 8411 seq_printf(m, "Personalities:\n"); 8451 8412 idr_for_each(&ctx->personality_idr, io_uring_show_cred, m); 8452 8413 } ··· 8461 8422 req->task->task_works != NULL); 8462 8423 } 8463 8424 spin_unlock_irq(&ctx->completion_lock); 8464 - mutex_unlock(&ctx->uring_lock); 8425 + if (has_lock) 8426 + mutex_unlock(&ctx->uring_lock); 8465 8427 } 8466 8428 8467 8429 static void io_uring_show_fdinfo(struct seq_file *m, struct file *f)
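The fdinfo fix above takes the ring mutex in the opposite order to every other caller, so it now only trylocks and skips the guarded tables when the lock is contended. A userspace sketch of the same pattern with a pthread mutex (hypothetical names, not the io_uring code):

```c
#include <pthread.h>
#include <stdbool.h>

/* Dump what we safely can: tables guarded by 'lock' are only walked
 * when the trylock succeeds, which breaks the ABBA cycle with callers
 * that take 'lock' before the seq-file lock. */
static int dump_tables(pthread_mutex_t *lock, int nr_entries)
{
	bool has_lock = pthread_mutex_trylock(lock) == 0;
	int shown = 0;

	if (has_lock) {
		shown = nr_entries;	/* walk the guarded tables */
		pthread_mutex_unlock(lock);
	}
	/* state not guarded by 'lock' could still be printed here */
	return shown;
}
```

The output is best-effort by design: a contended lock means some counters are simply omitted, which is acceptable for a debugging interface and far better than a deadlock.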
+3
fs/nfs/dir.c
··· 579 579 xdr_set_scratch_buffer(&stream, page_address(scratch), PAGE_SIZE); 580 580 581 581 do { 582 + if (entry->label) 583 + entry->label->len = NFS4_MAXLABELLEN; 584 + 582 585 status = xdr_decode(desc, entry, &stream); 583 586 if (status != 0) { 584 587 if (status == -EAGAIN)
+22 -21
fs/nfs/flexfilelayout/flexfilelayout.c
··· 715 715 } 716 716 717 717 static void 718 - ff_layout_mark_ds_unreachable(struct pnfs_layout_segment *lseg, int idx) 718 + ff_layout_mark_ds_unreachable(struct pnfs_layout_segment *lseg, u32 idx) 719 719 { 720 720 struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx); 721 721 ··· 724 724 } 725 725 726 726 static void 727 - ff_layout_mark_ds_reachable(struct pnfs_layout_segment *lseg, int idx) 727 + ff_layout_mark_ds_reachable(struct pnfs_layout_segment *lseg, u32 idx) 728 728 { 729 729 struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx); 730 730 ··· 734 734 735 735 static struct nfs4_pnfs_ds * 736 736 ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg, 737 - int start_idx, int *best_idx, 737 + u32 start_idx, u32 *best_idx, 738 738 bool check_device) 739 739 { 740 740 struct nfs4_ff_layout_segment *fls = FF_LAYOUT_LSEG(lseg); 741 741 struct nfs4_ff_layout_mirror *mirror; 742 742 struct nfs4_pnfs_ds *ds; 743 743 bool fail_return = false; 744 - int idx; 744 + u32 idx; 745 745 746 746 /* mirrors are initially sorted by efficiency */ 747 747 for (idx = start_idx; idx < fls->mirror_array_cnt; idx++) { ··· 766 766 767 767 static struct nfs4_pnfs_ds * 768 768 ff_layout_choose_any_ds_for_read(struct pnfs_layout_segment *lseg, 769 - int start_idx, int *best_idx) 769 + u32 start_idx, u32 *best_idx) 770 770 { 771 771 return ff_layout_choose_ds_for_read(lseg, start_idx, best_idx, false); 772 772 } 773 773 774 774 static struct nfs4_pnfs_ds * 775 775 ff_layout_choose_valid_ds_for_read(struct pnfs_layout_segment *lseg, 776 - int start_idx, int *best_idx) 776 + u32 start_idx, u32 *best_idx) 777 777 { 778 778 return ff_layout_choose_ds_for_read(lseg, start_idx, best_idx, true); 779 779 } 780 780 781 781 static struct nfs4_pnfs_ds * 782 782 ff_layout_choose_best_ds_for_read(struct pnfs_layout_segment *lseg, 783 - int start_idx, int *best_idx) 783 + u32 start_idx, u32 *best_idx) 784 784 { 785 785 struct nfs4_pnfs_ds *ds; 786 786 ··· 791 791 
} 792 792 793 793 static struct nfs4_pnfs_ds * 794 - ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio, int *best_idx) 794 + ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio, 795 + u32 *best_idx) 795 796 { 796 797 struct pnfs_layout_segment *lseg = pgio->pg_lseg; 797 798 struct nfs4_pnfs_ds *ds; ··· 838 837 struct nfs_pgio_mirror *pgm; 839 838 struct nfs4_ff_layout_mirror *mirror; 840 839 struct nfs4_pnfs_ds *ds; 841 - int ds_idx; 840 + u32 ds_idx, i; 842 841 843 842 retry: 844 843 ff_layout_pg_check_layout(pgio, req); ··· 864 863 goto retry; 865 864 } 866 865 867 - mirror = FF_LAYOUT_COMP(pgio->pg_lseg, ds_idx); 866 + for (i = 0; i < pgio->pg_mirror_count; i++) { 867 + mirror = FF_LAYOUT_COMP(pgio->pg_lseg, i); 868 + pgm = &pgio->pg_mirrors[i]; 869 + pgm->pg_bsize = mirror->mirror_ds->ds_versions[0].rsize; 870 + } 868 871 869 872 pgio->pg_mirror_idx = ds_idx; 870 - 871 - /* read always uses only one mirror - idx 0 for pgio layer */ 872 - pgm = &pgio->pg_mirrors[0]; 873 - pgm->pg_bsize = mirror->mirror_ds->ds_versions[0].rsize; 874 873 875 874 if (NFS_SERVER(pgio->pg_inode)->flags & 876 875 (NFS_MOUNT_SOFT|NFS_MOUNT_SOFTERR)) ··· 895 894 struct nfs4_ff_layout_mirror *mirror; 896 895 struct nfs_pgio_mirror *pgm; 897 896 struct nfs4_pnfs_ds *ds; 898 - int i; 897 + u32 i; 899 898 900 899 retry: 901 900 ff_layout_pg_check_layout(pgio, req); ··· 1039 1038 static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr) 1040 1039 { 1041 1040 u32 idx = hdr->pgio_mirror_idx + 1; 1042 - int new_idx = 0; 1041 + u32 new_idx = 0; 1043 1042 1044 1043 if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx + 1, &new_idx)) 1045 1044 ff_layout_send_layouterror(hdr->lseg); ··· 1076 1075 struct nfs4_state *state, 1077 1076 struct nfs_client *clp, 1078 1077 struct pnfs_layout_segment *lseg, 1079 - int idx) 1078 + u32 idx) 1080 1079 { 1081 1080 struct pnfs_layout_hdr *lo = lseg->pls_layout; 1082 1081 struct inode *inode = lo->plh_inode; ··· 1150 1149 /* Retry all 
errors through either pNFS or MDS except for -EJUKEBOX */ 1151 1150 static int ff_layout_async_handle_error_v3(struct rpc_task *task, 1152 1151 struct pnfs_layout_segment *lseg, 1153 - int idx) 1152 + u32 idx) 1154 1153 { 1155 1154 struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx); 1156 1155 ··· 1185 1184 struct nfs4_state *state, 1186 1185 struct nfs_client *clp, 1187 1186 struct pnfs_layout_segment *lseg, 1188 - int idx) 1187 + u32 idx) 1189 1188 { 1190 1189 int vers = clp->cl_nfs_mod->rpc_vers->number; 1191 1190 ··· 1212 1211 } 1213 1212 1214 1213 static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg, 1215 - int idx, u64 offset, u64 length, 1214 + u32 idx, u64 offset, u64 length, 1216 1215 u32 *op_status, int opnum, int error) 1217 1216 { 1218 1217 struct nfs4_ff_layout_mirror *mirror; ··· 1810 1809 loff_t offset = hdr->args.offset; 1811 1810 int vers; 1812 1811 struct nfs_fh *fh; 1813 - int idx = hdr->pgio_mirror_idx; 1812 + u32 idx = hdr->pgio_mirror_idx; 1814 1813 1815 1814 mirror = FF_LAYOUT_COMP(lseg, idx); 1816 1815 ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
+9 -1
fs/nfs/nfs42proc.c
··· 356 356 357 357 truncate_pagecache_range(dst_inode, pos_dst, 358 358 pos_dst + res->write_res.count); 359 - 359 + spin_lock(&dst_inode->i_lock); 360 + NFS_I(dst_inode)->cache_validity |= (NFS_INO_REVAL_PAGECACHE | 361 + NFS_INO_REVAL_FORCED | NFS_INO_INVALID_SIZE | 362 + NFS_INO_INVALID_ATTR | NFS_INO_INVALID_DATA); 363 + spin_unlock(&dst_inode->i_lock); 364 + spin_lock(&src_inode->i_lock); 365 + NFS_I(src_inode)->cache_validity |= (NFS_INO_REVAL_PAGECACHE | 366 + NFS_INO_REVAL_FORCED | NFS_INO_INVALID_ATIME); 367 + spin_unlock(&src_inode->i_lock); 360 368 status = res->write_res.count; 361 369 out: 362 370 if (args->sync)
+41 -21
fs/pipe.c
··· 106 106 } 107 107 } 108 108 109 - /* Drop the inode semaphore and wait for a pipe event, atomically */ 110 - void pipe_wait(struct pipe_inode_info *pipe) 111 - { 112 - DEFINE_WAIT(rdwait); 113 - DEFINE_WAIT(wrwait); 114 - 115 - /* 116 - * Pipes are system-local resources, so sleeping on them 117 - * is considered a noninteractive wait: 118 - */ 119 - prepare_to_wait(&pipe->rd_wait, &rdwait, TASK_INTERRUPTIBLE); 120 - prepare_to_wait(&pipe->wr_wait, &wrwait, TASK_INTERRUPTIBLE); 121 - pipe_unlock(pipe); 122 - schedule(); 123 - finish_wait(&pipe->rd_wait, &rdwait); 124 - finish_wait(&pipe->wr_wait, &wrwait); 125 - pipe_lock(pipe); 126 - } 127 - 128 109 static void anon_pipe_buf_release(struct pipe_inode_info *pipe, 129 110 struct pipe_buffer *buf) 130 111 { ··· 1016 1035 return do_pipe2(fildes, 0); 1017 1036 } 1018 1037 1038 + /* 1039 + * This is the stupid "wait for pipe to be readable or writable" 1040 + * model. 1041 + * 1042 + * See pipe_read/write() for the proper kind of exclusive wait, 1043 + * but that requires that we wake up any other readers/writers 1044 + * if we then do not end up reading everything (ie the whole 1045 + * "wake_next_reader/writer" logic in pipe_read/write()). 1046 + */ 1047 + void pipe_wait_readable(struct pipe_inode_info *pipe) 1048 + { 1049 + pipe_unlock(pipe); 1050 + wait_event_interruptible(pipe->rd_wait, pipe_readable(pipe)); 1051 + pipe_lock(pipe); 1052 + } 1053 + 1054 + void pipe_wait_writable(struct pipe_inode_info *pipe) 1055 + { 1056 + pipe_unlock(pipe); 1057 + wait_event_interruptible(pipe->wr_wait, pipe_writable(pipe)); 1058 + pipe_lock(pipe); 1059 + } 1060 + 1061 + /* 1062 + * This depends on both the wait (here) and the wakeup (wake_up_partner) 1063 + * holding the pipe lock, so "*cnt" is stable and we know a wakeup cannot 1064 + * race with the count check and waitqueue prep. 
1065 + * 1066 + * Normally in order to avoid races, you'd do the prepare_to_wait() first, 1067 + * then check the condition you're waiting for, and only then sleep. But 1068 + * because of the pipe lock, we can check the condition before being on 1069 + * the wait queue. 1070 + * 1071 + * We use the 'rd_wait' waitqueue for pipe partner waiting. 1072 + */ 1019 1073 static int wait_for_partner(struct pipe_inode_info *pipe, unsigned int *cnt) 1020 1074 { 1075 + DEFINE_WAIT(rdwait); 1021 1076 int cur = *cnt; 1022 1077 1023 1078 while (cur == *cnt) { 1024 - pipe_wait(pipe); 1079 + prepare_to_wait(&pipe->rd_wait, &rdwait, TASK_INTERRUPTIBLE); 1080 + pipe_unlock(pipe); 1081 + schedule(); 1082 + finish_wait(&pipe->rd_wait, &rdwait); 1083 + pipe_lock(pipe); 1025 1084 if (signal_pending(current)) 1026 1085 break; 1027 1086 } ··· 1071 1050 static void wake_up_partner(struct pipe_inode_info *pipe) 1072 1051 { 1073 1052 wake_up_interruptible_all(&pipe->rd_wait); 1074 - wake_up_interruptible_all(&pipe->wr_wait); 1075 1053 } 1076 1054 1077 1055 static int fifo_open(struct inode *inode, struct file *filp)
+8
fs/read_write.c
··· 538 538 inc_syscw(current); 539 539 return ret; 540 540 } 541 + /* 542 + * This "EXPORT_SYMBOL_GPL()" is more of a "EXPORT_SYMBOL_DONTUSE()", 543 + * but autofs is one of the few internal kernel users that actually 544 + * wants this _and_ can be built as a module. So we need to export 545 + * this symbol for autofs, even though it really isn't appropriate 546 + * for any other kernel modules. 547 + */ 548 + EXPORT_SYMBOL_GPL(__kernel_write); 541 549 542 550 ssize_t kernel_write(struct file *file, const void *buf, size_t count, 543 551 loff_t *pos)
+4 -4
fs/splice.c
··· 563 563 sd->need_wakeup = false; 564 564 } 565 565 566 - pipe_wait(pipe); 566 + pipe_wait_readable(pipe); 567 567 } 568 568 569 569 return 1; ··· 1077 1077 return -EAGAIN; 1078 1078 if (signal_pending(current)) 1079 1079 return -ERESTARTSYS; 1080 - pipe_wait(pipe); 1080 + pipe_wait_writable(pipe); 1081 1081 } 1082 1082 } 1083 1083 ··· 1454 1454 ret = -EAGAIN; 1455 1455 break; 1456 1456 } 1457 - pipe_wait(pipe); 1457 + pipe_wait_readable(pipe); 1458 1458 } 1459 1459 1460 1460 pipe_unlock(pipe); ··· 1493 1493 ret = -ERESTARTSYS; 1494 1494 break; 1495 1495 } 1496 - pipe_wait(pipe); 1496 + pipe_wait_writable(pipe); 1497 1497 } 1498 1498 1499 1499 pipe_unlock(pipe);
+1 -1
fs/vboxsf/super.c
··· 384 384 385 385 static int vboxsf_parse_monolithic(struct fs_context *fc, void *data) 386 386 { 387 - char *options = data; 387 + unsigned char *options = data; 388 388 389 389 if (options && options[0] == VBSF_MOUNT_SIGNATURE_BYTE_0 && 390 390 options[1] == VBSF_MOUNT_SIGNATURE_BYTE_1 &&
+1 -1
include/linux/acpi.h
··· 958 958 acpi_status acpi_os_prepare_extended_sleep(u8 sleep_state, 959 959 u32 val_a, u32 val_b); 960 960 961 - #ifdef CONFIG_X86 961 + #ifndef CONFIG_IA64 962 962 void arch_reserve_mem_area(acpi_physical_address addr, size_t size); 963 963 #else 964 964 static inline void arch_reserve_mem_area(acpi_physical_address addr,
+1 -2
include/linux/blk_types.h
··· 497 497 498 498 typedef unsigned int blk_qc_t; 499 499 #define BLK_QC_T_NONE -1U 500 - #define BLK_QC_T_EAGAIN -2U 501 500 #define BLK_QC_T_SHIFT 16 502 501 #define BLK_QC_T_INTERNAL (1U << 31) 503 502 504 503 static inline bool blk_qc_t_valid(blk_qc_t cookie) 505 504 { 506 - return cookie != BLK_QC_T_NONE && cookie != BLK_QC_T_EAGAIN; 505 + return cookie != BLK_QC_T_NONE; 507 506 } 508 507 509 508 static inline unsigned int blk_qc_t_to_queue_num(blk_qc_t cookie)
+2
include/linux/blkdev.h
··· 352 352 typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx, 353 353 void *data); 354 354 355 + void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model); 356 + 355 357 #ifdef CONFIG_BLK_DEV_ZONED 356 358 357 359 #define BLK_ALL_ZONES ((unsigned int)-1)
+1 -1
include/linux/fs_parser.h
··· 120 120 #define fsparam_u32oct(NAME, OPT) \ 121 121 __fsparam(fs_param_is_u32, NAME, OPT, 0, (void *)8) 122 122 #define fsparam_u32hex(NAME, OPT) \ 123 - __fsparam(fs_param_is_u32_hex, NAME, OPT, 0, (void *16)) 123 + __fsparam(fs_param_is_u32_hex, NAME, OPT, 0, (void *)16) 124 124 #define fsparam_s32(NAME, OPT) __fsparam(fs_param_is_s32, NAME, OPT, 0, NULL) 125 125 #define fsparam_u64(NAME, OPT) __fsparam(fs_param_is_u64, NAME, OPT, 0, NULL) 126 126 #define fsparam_enum(NAME, OPT, array) __fsparam(fs_param_is_enum, NAME, OPT, 0, array)
+5
include/linux/kprobes.h
··· 373 373 void kprobe_flush_task(struct task_struct *tk); 374 374 void recycle_rp_inst(struct kretprobe_instance *ri, struct hlist_head *head); 375 375 376 + void kprobe_free_init_mem(void); 377 + 376 378 int disable_kprobe(struct kprobe *kp); 377 379 int enable_kprobe(struct kprobe *kp); 378 380 ··· 435 433 { 436 434 } 437 435 static inline void kprobe_flush_task(struct task_struct *tk) 436 + { 437 + } 438 + static inline void kprobe_free_init_mem(void) 438 439 { 439 440 } 440 441 static inline int disable_kprobe(struct kprobe *kp)
+1
include/linux/memstick.h
··· 281 281 282 282 struct memstick_dev *card; 283 283 unsigned int retries; 284 + bool removing; 284 285 285 286 /* Notify the host that some requests are pending. */ 286 287 void (*request)(struct memstick_host *host);
+2 -2
include/linux/mm.h
··· 1646 1646 void free_pgd_range(struct mmu_gather *tlb, unsigned long addr, 1647 1647 unsigned long end, unsigned long floor, unsigned long ceiling); 1648 1648 int copy_page_range(struct mm_struct *dst, struct mm_struct *src, 1649 - struct vm_area_struct *vma); 1649 + struct vm_area_struct *vma, struct vm_area_struct *new); 1650 1650 int follow_pte_pmd(struct mm_struct *mm, unsigned long address, 1651 1651 struct mmu_notifier_range *range, 1652 1652 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp); ··· 2416 2416 2417 2417 extern void set_dma_reserve(unsigned long new_dma_reserve); 2418 2418 extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long, 2419 - enum memmap_context, struct vmem_altmap *); 2419 + enum meminit_context, struct vmem_altmap *); 2420 2420 extern void setup_per_zone_wmarks(void); 2421 2421 extern int __meminit init_per_zone_wmark_min(void); 2422 2422 extern void mem_init(void);
+10
include/linux/mm_types.h
··· 436 436 */ 437 437 atomic_t mm_count; 438 438 439 + /** 440 + * @has_pinned: Whether this mm has pinned any pages. This can 441 + * be either replaced in the future by @pinned_vm when it 442 + * becomes stable, or grow into a counter on its own. We're 443 + * aggressive on this bit now - even if the pinned pages were 444 + * unpinned later on, we'll still keep this bit set for the 445 + * lifecycle of this mm just for simplicity. 446 + */ 447 + atomic_t has_pinned; 448 + 439 449 #ifdef CONFIG_MMU 440 450 atomic_long_t pgtables_bytes; /* PTE page table pages */ 441 451 #endif
+8 -3
include/linux/mmzone.h
··· 824 824 unsigned int alloc_flags); 825 825 bool zone_watermark_ok_safe(struct zone *z, unsigned int order, 826 826 unsigned long mark, int highest_zoneidx); 827 - enum memmap_context { 828 - MEMMAP_EARLY, 829 - MEMMAP_HOTPLUG, 827 + /* 828 + * Memory initialization context, use to differentiate memory added by 829 + * the platform statically or via memory hotplug interface. 830 + */ 831 + enum meminit_context { 832 + MEMINIT_EARLY, 833 + MEMINIT_HOTPLUG, 830 834 }; 835 + 831 836 extern void init_currently_empty_zone(struct zone *zone, unsigned long start_pfn, 832 837 unsigned long size); 833 838
+1 -1
include/linux/netdev_features.h
··· 193 193 #define NETIF_F_GSO_MASK (__NETIF_F_BIT(NETIF_F_GSO_LAST + 1) - \ 194 194 __NETIF_F_BIT(NETIF_F_GSO_SHIFT)) 195 195 196 - /* List of IP checksum features. Note that NETIF_F_ HW_CSUM should not be 196 + /* List of IP checksum features. Note that NETIF_F_HW_CSUM should not be 197 197 * set in features when NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM are set-- 198 198 * this would be contradictory 199 199 */
+2
include/linux/netdevice.h
··· 1784 1784 * the watchdog (see dev_watchdog()) 1785 1785 * @watchdog_timer: List of timers 1786 1786 * 1787 + * @proto_down_reason: reason a netdev interface is held down 1787 1788 * @pcpu_refcnt: Number of references to this device 1788 1789 * @todo_list: Delayed register/unregister 1789 1790 * @link_watch_list: XXX: need comments on this one ··· 1849 1848 * @udp_tunnel_nic_info: static structure describing the UDP tunnel 1850 1849 * offload capabilities of the device 1851 1850 * @udp_tunnel_nic: UDP tunnel offload state 1851 + * @xdp_state: stores info on attached XDP BPF programs 1852 1852 * 1853 1853 * FIXME: cleanup struct net_device such that network protocol info 1854 1854 * moves out.
+2 -2
include/linux/nfs_xdr.h
··· 1611 1611 __u64 mds_offset; /* Filelayout dense stripe */ 1612 1612 struct nfs_page_array page_array; 1613 1613 struct nfs_client *ds_clp; /* pNFS data server */ 1614 - int ds_commit_idx; /* ds index if ds_clp is set */ 1615 - int pgio_mirror_idx;/* mirror index in pgio layer */ 1614 + u32 ds_commit_idx; /* ds index if ds_clp is set */ 1615 + u32 pgio_mirror_idx;/* mirror index in pgio layer */ 1616 1616 }; 1617 1617 1618 1618 struct nfs_mds_commit_info {
+7 -4
include/linux/node.h
··· 99 99 typedef void (*node_registration_func_t)(struct node *); 100 100 101 101 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA) 102 - extern int link_mem_sections(int nid, unsigned long start_pfn, 103 - unsigned long end_pfn); 102 + int link_mem_sections(int nid, unsigned long start_pfn, 103 + unsigned long end_pfn, 104 + enum meminit_context context); 104 105 #else 105 106 static inline int link_mem_sections(int nid, unsigned long start_pfn, 106 - unsigned long end_pfn) 107 + unsigned long end_pfn, 108 + enum meminit_context context) 107 109 { 108 110 return 0; 109 111 } ··· 130 128 if (error) 131 129 return error; 132 130 /* link memory sections under this node */ 133 - error = link_mem_sections(nid, start_pfn, end_pfn); 131 + error = link_mem_sections(nid, start_pfn, end_pfn, 132 + MEMINIT_EARLY); 134 133 } 135 134 136 135 return error;
+10
include/linux/pgtable.h
··· 1427 1427 #define mm_pmd_folded(mm) __is_defined(__PAGETABLE_PMD_FOLDED) 1428 1428 #endif 1429 1429 1430 + #ifndef p4d_offset_lockless 1431 + #define p4d_offset_lockless(pgdp, pgd, address) p4d_offset(&(pgd), address) 1432 + #endif 1433 + #ifndef pud_offset_lockless 1434 + #define pud_offset_lockless(p4dp, p4d, address) pud_offset(&(p4d), address) 1435 + #endif 1436 + #ifndef pmd_offset_lockless 1437 + #define pmd_offset_lockless(pudp, pud, address) pmd_offset(&(pud), address) 1438 + #endif 1439 + 1430 1440 /* 1431 1441 * p?d_leaf() - true if this entry is a final mapping to a physical address. 1432 1442 * This differs from p?d_huge() by the fact that they are always available (if
+3 -2
include/linux/pipe_fs_i.h
··· 240 240 extern unsigned long pipe_user_pages_hard; 241 241 extern unsigned long pipe_user_pages_soft; 242 242 243 - /* Drop the inode semaphore and wait for a pipe event, atomically */ 244 - void pipe_wait(struct pipe_inode_info *pipe); 243 + /* Wait for a pipe to be readable/writable while dropping the pipe lock */ 244 + void pipe_wait_readable(struct pipe_inode_info *); 245 + void pipe_wait_writable(struct pipe_inode_info *); 245 246 246 247 struct pipe_inode_info *alloc_pipe_info(void); 247 248 void free_pipe_info(struct pipe_inode_info *);
+1
include/linux/qed/qed_if.h
··· 623 623 #define QED_MFW_VERSION_3_OFFSET 24 624 624 625 625 u32 flash_size; 626 + bool b_arfs_capable; 626 627 bool b_inter_pf_switch; 627 628 bool tx_switching; 628 629 bool rdma_supported;
+4 -3
include/linux/skbuff.h
··· 3223 3223 * is untouched. Otherwise it is extended. Returns zero on 3224 3224 * success. The skb is freed on error if @free_on_error is true. 3225 3225 */ 3226 - static inline int __skb_put_padto(struct sk_buff *skb, unsigned int len, 3227 - bool free_on_error) 3226 + static inline int __must_check __skb_put_padto(struct sk_buff *skb, 3227 + unsigned int len, 3228 + bool free_on_error) 3228 3229 { 3229 3230 unsigned int size = skb->len; 3230 3231 ··· 3248 3247 * is untouched. Otherwise it is extended. Returns zero on 3249 3248 * success. The skb is freed on error. 3250 3249 */ 3251 - static inline int skb_put_padto(struct sk_buff *skb, unsigned int len) 3250 + static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int len) 3252 3251 { 3253 3252 return __skb_put_padto(skb, len, true); 3254 3253 }
+5
include/linux/vmstat.h
··· 312 312 static inline void __mod_node_page_state(struct pglist_data *pgdat, 313 313 enum node_stat_item item, int delta) 314 314 { 315 + if (vmstat_item_in_bytes(item)) { 316 + VM_WARN_ON_ONCE(delta & (PAGE_SIZE - 1)); 317 + delta >>= PAGE_SHIFT; 318 + } 319 + 315 320 node_page_state_add(delta, pgdat, item); 316 321 } 317 322
+2 -5
include/media/videobuf2-core.h
··· 744 744 * vb2_core_reqbufs() - Initiate streaming. 745 745 * @q: pointer to &struct vb2_queue with videobuf2 queue. 746 746 * @memory: memory type, as defined by &enum vb2_memory. 747 - * @flags: auxiliary queue/buffer management flags. Currently, the only 748 - * used flag is %V4L2_FLAG_MEMORY_NON_CONSISTENT. 749 747 * @count: requested buffer count. 750 748 * 751 749 * Videobuf2 core helper to implement VIDIOC_REQBUF() operation. It is called ··· 768 770 * Return: returns zero on success; an error code otherwise. 769 771 */ 770 772 int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory, 771 - unsigned int flags, unsigned int *count); 773 + unsigned int *count); 772 774 773 775 /** 774 776 * vb2_core_create_bufs() - Allocate buffers and any required auxiliary structs 775 777 * @q: pointer to &struct vb2_queue with videobuf2 queue. 776 778 * @memory: memory type, as defined by &enum vb2_memory. 777 - * @flags: auxiliary queue/buffer management flags. 778 779 * @count: requested buffer count. 779 780 * @requested_planes: number of planes requested. 780 781 * @requested_sizes: array with the size of the planes. ··· 791 794 * Return: returns zero on success; an error code otherwise. 792 795 */ 793 796 int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory, 794 - unsigned int flags, unsigned int *count, 797 + unsigned int *count, 795 798 unsigned int requested_planes, 796 799 const unsigned int requested_sizes[]); 797 800
+1
include/net/flow.h
··· 116 116 fl4->saddr = saddr; 117 117 fl4->fl4_dport = dport; 118 118 fl4->fl4_sport = sport; 119 + fl4->flowi4_multipath_hash = 0; 119 120 } 120 121 121 122 /* Reset some input parameters after previous lookup */
-2
include/net/netlink.h
··· 726 726 * @hdrlen: length of family specific header 727 727 * @tb: destination array with maxtype+1 elements 728 728 * @maxtype: maximum attribute type to be expected 729 - * @validate: validation strictness 730 729 * @extack: extended ACK report struct 731 730 * 732 731 * See nla_parse() ··· 823 824 * @len: length of attribute stream 824 825 * @maxtype: maximum attribute type to be expected 825 826 * @policy: validation policy 826 - * @validate: validation strictness 827 827 * @extack: extended ACK report struct 828 828 * 829 829 * Validates all attributes in the specified attribute stream against the
+1
include/net/netns/nftables.h
··· 8 8 struct list_head tables; 9 9 struct list_head commit_list; 10 10 struct list_head module_list; 11 + struct list_head notify_list; 11 12 struct mutex commit_mutex; 12 13 unsigned int base_seq; 13 14 u8 gencursor;
+5 -3
include/net/sctp/structs.h
··· 226 226 data_ready_signalled:1; 227 227 228 228 atomic_t pd_mode; 229 + 230 + /* Fields after this point will be skipped on copies, like on accept 231 + * and peeloff operations 232 + */ 233 + 229 234 /* Receive to here while partial delivery is in effect. */ 230 235 struct sk_buff_head pd_lobby; 231 236 232 - /* These must be the last fields, as they will skipped on copies, 233 - * like on accept and peeloff operations 234 - */ 235 237 struct list_head auto_asconf_list; 236 238 int do_auto_asconf; 237 239 };
+3
include/net/vxlan.h
··· 121 121 #define VXLAN_GBP_POLICY_APPLIED (BIT(3) << 16) 122 122 #define VXLAN_GBP_ID_MASK (0xFFFF) 123 123 124 + #define VXLAN_GBP_MASK (VXLAN_GBP_DONT_LEARN | VXLAN_GBP_POLICY_APPLIED | \ 125 + VXLAN_GBP_ID_MASK) 126 + 124 127 /* 125 128 * VXLAN Generic Protocol Extension (VXLAN_F_GPE): 126 129 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+2
include/soc/mscc/ocelot.h
··· 566 566 u8 ptp_cmd; 567 567 struct sk_buff_head tx_skbs; 568 568 u8 ts_id; 569 + spinlock_t ts_id_lock; 569 570 570 571 phy_interface_t phy_mode; 571 572 ··· 678 677 int ocelot_init(struct ocelot *ocelot); 679 678 void ocelot_deinit(struct ocelot *ocelot); 680 679 void ocelot_init_port(struct ocelot *ocelot, int port); 680 + void ocelot_deinit_port(struct ocelot *ocelot, int port); 681 681 682 682 /* DSA callbacks */ 683 683 void ocelot_port_enable(struct ocelot *ocelot, int port,
+1
include/uapi/linux/ethtool_netlink.h
··· 79 79 ETHTOOL_MSG_TSINFO_GET_REPLY, 80 80 ETHTOOL_MSG_CABLE_TEST_NTF, 81 81 ETHTOOL_MSG_CABLE_TEST_TDR_NTF, 82 + ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY, 82 83 83 84 /* add new constants above here */ 84 85 __ETHTOOL_MSG_KERNEL_CNT,
+2 -11
include/uapi/linux/videodev2.h
··· 191 191 V4L2_MEMORY_DMABUF = 4, 192 192 }; 193 193 194 - #define V4L2_FLAG_MEMORY_NON_CONSISTENT (1 << 0) 195 - 196 194 /* see also http://vektor.theorem.ca/graphics/ycbcr/ */ 197 195 enum v4l2_colorspace { 198 196 /* ··· 947 949 __u32 type; /* enum v4l2_buf_type */ 948 950 __u32 memory; /* enum v4l2_memory */ 949 951 __u32 capabilities; 950 - union { 951 - __u32 flags; 952 - __u32 reserved[1]; 953 - }; 952 + __u32 reserved[1]; 954 953 }; 955 954 956 955 /* capabilities for struct v4l2_requestbuffers and v4l2_create_buffers */ ··· 2451 2456 * @memory: enum v4l2_memory; buffer memory type 2452 2457 * @format: frame format, for which buffers are requested 2453 2458 * @capabilities: capabilities of this buffer type. 2454 - * @flags: additional buffer management attributes (ignored unless the 2455 - * queue has V4L2_BUF_CAP_SUPPORTS_MMAP_CACHE_HINTS capability 2456 - * and configured for MMAP streaming I/O). 2457 2459 * @reserved: future extensions 2458 2460 */ 2459 2461 struct v4l2_create_buffers { ··· 2459 2467 __u32 memory; 2460 2468 struct v4l2_format format; 2461 2469 __u32 capabilities; 2462 - __u32 flags; 2463 - __u32 reserved[6]; 2470 + __u32 reserved[7]; 2464 2471 }; 2465 2472 2466 2473 /*
+3 -1
init/main.c
··· 33 33 #include <linux/nmi.h> 34 34 #include <linux/percpu.h> 35 35 #include <linux/kmod.h> 36 + #include <linux/kprobes.h> 36 37 #include <linux/vmalloc.h> 37 38 #include <linux/kernel_stat.h> 38 39 #include <linux/start_kernel.h> ··· 304 303 305 304 #ifdef CONFIG_BOOT_CONFIG 306 305 307 - char xbc_namebuf[XBC_KEYLEN_MAX] __initdata; 306 + static char xbc_namebuf[XBC_KEYLEN_MAX] __initdata; 308 307 309 308 #define rest(dst, end) ((end) > (dst) ? (end) - (dst) : 0) 310 309 ··· 1403 1402 kernel_init_freeable(); 1404 1403 /* need to finish all async __init code before freeing the memory */ 1405 1404 async_synchronize_full(); 1405 + kprobe_free_init_mem(); 1406 1406 ftrace_free_init_mem(); 1407 1407 free_initmem(); 1408 1408 mark_readonly();
+4 -11
kernel/bpf/hashtab.c
··· 1622 1622 struct bpf_map *map; 1623 1623 struct bpf_htab *htab; 1624 1624 void *percpu_value_buf; // non-zero means percpu hash 1625 - unsigned long flags; 1626 1625 u32 bucket_id; 1627 1626 u32 skip_elems; 1628 1627 }; ··· 1631 1632 struct htab_elem *prev_elem) 1632 1633 { 1633 1634 const struct bpf_htab *htab = info->htab; 1634 - unsigned long flags = info->flags; 1635 1635 u32 skip_elems = info->skip_elems; 1636 1636 u32 bucket_id = info->bucket_id; 1637 1637 struct hlist_nulls_head *head; ··· 1654 1656 1655 1657 /* not found, unlock and go to the next bucket */ 1656 1658 b = &htab->buckets[bucket_id++]; 1657 - htab_unlock_bucket(htab, b, flags); 1659 + rcu_read_unlock(); 1658 1660 skip_elems = 0; 1659 1661 } 1660 1662 1661 1663 for (i = bucket_id; i < htab->n_buckets; i++) { 1662 1664 b = &htab->buckets[i]; 1663 - flags = htab_lock_bucket(htab, b); 1665 + rcu_read_lock(); 1664 1666 1665 1667 count = 0; 1666 1668 head = &b->head; 1667 1669 hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) { 1668 1670 if (count >= skip_elems) { 1669 - info->flags = flags; 1670 1671 info->bucket_id = i; 1671 1672 info->skip_elems = count; 1672 1673 return elem; ··· 1673 1676 count++; 1674 1677 } 1675 1678 1676 - htab_unlock_bucket(htab, b, flags); 1679 + rcu_read_unlock(); 1677 1680 skip_elems = 0; 1678 1681 } 1679 1682 ··· 1751 1754 1752 1755 static void bpf_hash_map_seq_stop(struct seq_file *seq, void *v) 1753 1756 { 1754 - struct bpf_iter_seq_hash_map_info *info = seq->private; 1755 - 1756 1757 if (!v) 1757 1758 (void)__bpf_hash_map_seq_show(seq, NULL); 1758 1759 else 1759 - htab_unlock_bucket(info->htab, 1760 - &info->htab->buckets[info->bucket_id], 1761 - info->flags); 1760 + rcu_read_unlock(); 1762 1761 } 1763 1762 1764 1763 static int bpf_iter_init_hash_map(void *priv_data,
+3 -1
kernel/bpf/inode.c
··· 226 226 else 227 227 prev_key = key; 228 228 229 + rcu_read_lock(); 229 230 if (map->ops->map_get_next_key(map, prev_key, key)) { 230 231 map_iter(m)->done = true; 231 - return NULL; 232 + key = NULL; 232 233 } 234 + rcu_read_unlock(); 233 235 return key; 234 236 } 235 237
+2 -1
kernel/fork.c
··· 589 589 590 590 mm->map_count++; 591 591 if (!(tmp->vm_flags & VM_WIPEONFORK)) 592 - retval = copy_page_range(mm, oldmm, mpnt); 592 + retval = copy_page_range(mm, oldmm, mpnt, tmp); 593 593 594 594 if (tmp->vm_ops && tmp->vm_ops->open) 595 595 tmp->vm_ops->open(tmp); ··· 1011 1011 mm_pgtables_bytes_init(mm); 1012 1012 mm->map_count = 0; 1013 1013 mm->locked_vm = 0; 1014 + atomic_set(&mm->has_pinned, 0); 1014 1015 atomic64_set(&mm->pinned_vm, 0); 1015 1016 memset(&mm->rss_stat, 0, sizeof(mm->rss_stat)); 1016 1017 spin_lock_init(&mm->page_table_lock);
+25 -2
kernel/kprobes.c
··· 2162 2162 2163 2163 /* 2164 2164 * The module is going away. We should disarm the kprobe which 2165 - * is using ftrace. 2165 + * is using ftrace, because ftrace framework is still available at 2166 + * MODULE_STATE_GOING notification. 2166 2167 */ 2167 - if (kprobe_ftrace(p)) 2168 + if (kprobe_ftrace(p) && !kprobe_disabled(p) && !kprobes_all_disarmed) 2168 2169 disarm_kprobe_ftrace(p); 2169 2170 } 2170 2171 ··· 2459 2458 /* Markers of _kprobe_blacklist section */ 2460 2459 extern unsigned long __start_kprobe_blacklist[]; 2461 2460 extern unsigned long __stop_kprobe_blacklist[]; 2461 + 2462 + void kprobe_free_init_mem(void) 2463 + { 2464 + void *start = (void *)(&__init_begin); 2465 + void *end = (void *)(&__init_end); 2466 + struct hlist_head *head; 2467 + struct kprobe *p; 2468 + int i; 2469 + 2470 + mutex_lock(&kprobe_mutex); 2471 + 2472 + /* Kill all kprobes on initmem */ 2473 + for (i = 0; i < KPROBE_TABLE_SIZE; i++) { 2474 + head = &kprobe_table[i]; 2475 + hlist_for_each_entry(p, head, hlist) { 2476 + if (start <= (void *)p->addr && (void *)p->addr < end) 2477 + kill_kprobe(p); 2478 + } 2479 + } 2480 + 2481 + mutex_unlock(&kprobe_mutex); 2482 + } 2462 2483 2463 2484 static int __init init_kprobes(void) 2464 2485 {
+1 -1
kernel/rcu/tasks.h
··· 590 590 } 591 591 592 592 #else /* #ifdef CONFIG_TASKS_RCU */ 593 - static void show_rcu_tasks_classic_gp_kthread(void) { } 593 + static inline void show_rcu_tasks_classic_gp_kthread(void) { } 594 594 void exit_tasks_rcu_start(void) { } 595 595 void exit_tasks_rcu_finish(void) { exit_tasks_rcu_finish_trace(current); } 596 596 #endif /* #else #ifdef CONFIG_TASKS_RCU */
+2
kernel/rcu/tree.c
··· 673 673 lockdep_assert_irqs_disabled(); 674 674 rcu_eqs_enter(false); 675 675 } 676 + EXPORT_SYMBOL_GPL(rcu_idle_enter); 676 677 677 678 #ifdef CONFIG_NO_HZ_FULL 678 679 /** ··· 887 886 rcu_eqs_exit(false); 888 887 local_irq_restore(flags); 889 888 } 889 + EXPORT_SYMBOL_GPL(rcu_idle_exit); 890 890 891 891 #ifdef CONFIG_NO_HZ_FULL 892 892 /**
+5 -4
kernel/trace/ftrace.c
··· 2782 2782 { 2783 2783 lockdep_assert_held(&ftrace_lock); 2784 2784 list_del_rcu(&ops->list); 2785 + synchronize_rcu(); 2785 2786 } 2786 2787 2787 2788 /* ··· 2863 2862 __unregister_ftrace_function(ops); 2864 2863 ftrace_start_up--; 2865 2864 ops->flags &= ~FTRACE_OPS_FL_ENABLED; 2865 + if (ops->flags & FTRACE_OPS_FL_DYNAMIC) 2866 + ftrace_trampoline_free(ops); 2866 2867 return ret; 2867 2868 } 2868 2869 ··· 6993 6990 { 6994 6991 int bit; 6995 6992 6996 - if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching()) 6997 - return; 6998 - 6999 6993 bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX); 7000 6994 if (bit < 0) 7001 6995 return; 7002 6996 7003 6997 preempt_disable_notrace(); 7004 6998 7005 - op->func(ip, parent_ip, op, regs); 6999 + if (!(op->flags & FTRACE_OPS_FL_RCU) || rcu_is_watching()) 7000 + op->func(ip, parent_ip, op, regs); 7006 7001 7007 7002 preempt_enable_notrace(); 7008 7003 trace_clear_recursion(bit);
+25 -23
kernel/trace/trace.c
··· 3546 3546 if (iter->ent && iter->ent != iter->temp) {
3547 3547 if ((!iter->temp || iter->temp_size < iter->ent_size) &&
3548 3548 !WARN_ON_ONCE(iter->temp == static_temp_buf)) {
3549 - kfree(iter->temp);
3550 - iter->temp = kmalloc(iter->ent_size, GFP_KERNEL);
3551 - if (!iter->temp)
3549 + void *temp;
3550 + temp = kmalloc(iter->ent_size, GFP_KERNEL);
3551 + if (!temp)
3552 3552 return NULL;
3553 + kfree(iter->temp);
3554 + iter->temp = temp;
3555 + iter->temp_size = iter->ent_size;
3553 3556 }
3554 3557 memcpy(iter->temp, iter->ent, iter->ent_size);
3555 - iter->temp_size = iter->ent_size;
3556 3558 iter->ent = iter->temp;
3557 3559 }
3558 3560 entry = __find_next_entry(iter, ent_cpu, NULL, ent_ts);
··· 3784 3782
3785 3783 static void print_lat_help_header(struct seq_file *m)
3786 3784 {
3787 - seq_puts(m, "# _------=> CPU# \n"
3788 - "# / _-----=> irqs-off \n"
3789 - "# | / _----=> need-resched \n"
3790 - "# || / _---=> hardirq/softirq \n"
3791 - "# ||| / _--=> preempt-depth \n"
3792 - "# |||| / delay \n"
3793 - "# cmd pid ||||| time | caller \n"
3794 - "# \\ / ||||| \\ | / \n");
3785 + seq_puts(m, "# _------=> CPU# \n"
3786 + "# / _-----=> irqs-off \n"
3787 + "# | / _----=> need-resched \n"
3788 + "# || / _---=> hardirq/softirq \n"
3789 + "# ||| / _--=> preempt-depth \n"
3790 + "# |||| / delay \n"
3791 + "# cmd pid ||||| time | caller \n"
3792 + "# \\ / ||||| \\ | / \n");
3795 3793 }
3796 3794
3797 3795 static void print_event_info(struct array_buffer *buf, struct seq_file *m)
··· 3812 3810
3813 3811 print_event_info(buf, m);
3814 3812
3815 - seq_printf(m, "# TASK-PID %s CPU# TIMESTAMP FUNCTION\n", tgid ? "TGID " : "");
3816 - seq_printf(m, "# | | %s | | |\n", tgid ? " | " : "");
3813 + seq_printf(m, "# TASK-PID %s CPU# TIMESTAMP FUNCTION\n", tgid ? " TGID " : "");
3814 + seq_printf(m, "# | | %s | | |\n", tgid ? " | " : "");
3817 3815 }
3818 3816
3819 3817 static void print_func_help_header_irq(struct array_buffer *buf, struct seq_file *m,
3820 3818 unsigned int flags)
3821 3819 {
3822 3820 bool tgid = flags & TRACE_ITER_RECORD_TGID;
3823 - const char *space = " ";
3824 - int prec = tgid ? 10 : 2;
3821 + const char *space = " ";
3822 + int prec = tgid ? 12 : 2;
3825 3823
3826 3824 print_event_info(buf, m);
3827 3825
3828 - seq_printf(m, "# %.*s _-----=> irqs-off\n", prec, space);
3829 - seq_printf(m, "# %.*s / _----=> need-resched\n", prec, space);
3830 - seq_printf(m, "# %.*s| / _---=> hardirq/softirq\n", prec, space);
3831 - seq_printf(m, "# %.*s|| / _--=> preempt-depth\n", prec, space);
3832 - seq_printf(m, "# %.*s||| / delay\n", prec, space);
3833 - seq_printf(m, "# TASK-PID %.*sCPU# |||| TIMESTAMP FUNCTION\n", prec, " TGID ");
3834 - seq_printf(m, "# | | %.*s | |||| | |\n", prec, " | ");
3826 + seq_printf(m, "# %.*s _-----=> irqs-off\n", prec, space);
3827 + seq_printf(m, "# %.*s / _----=> need-resched\n", prec, space);
3828 + seq_printf(m, "# %.*s| / _---=> hardirq/softirq\n", prec, space);
3829 + seq_printf(m, "# %.*s|| / _--=> preempt-depth\n", prec, space);
3830 + seq_printf(m, "# %.*s||| / delay\n", prec, space);
3831 + seq_printf(m, "# TASK-PID %.*s CPU# |||| TIMESTAMP FUNCTION\n", prec, " TGID ");
3832 + seq_printf(m, "# | | %.*s | |||| | |\n", prec, " | ");
3835 3833 }
3836 3834
3837 3835 void
-1
kernel/trace/trace_events_hist.c
··· 3865 3865 3866 3866 s = kstrdup(field_str, GFP_KERNEL); 3867 3867 if (!s) { 3868 - kfree(hist_data->attrs->var_defs.name[n_vars]); 3869 3868 ret = -ENOMEM; 3870 3869 goto free; 3871 3870 }
+6 -6
kernel/trace/trace_output.c
··· 497 497 498 498 trace_find_cmdline(entry->pid, comm); 499 499 500 - trace_seq_printf(s, "%8.8s-%-5d %3d", 500 + trace_seq_printf(s, "%8.8s-%-7d %3d", 501 501 comm, entry->pid, cpu); 502 502 503 503 return trace_print_lat_fmt(s, entry); ··· 588 588 589 589 trace_find_cmdline(entry->pid, comm); 590 590 591 - trace_seq_printf(s, "%16s-%-5d ", comm, entry->pid); 591 + trace_seq_printf(s, "%16s-%-7d ", comm, entry->pid); 592 592 593 593 if (tr->trace_flags & TRACE_ITER_RECORD_TGID) { 594 594 unsigned int tgid = trace_find_tgid(entry->pid); 595 595 596 596 if (!tgid) 597 - trace_seq_printf(s, "(-----) "); 597 + trace_seq_printf(s, "(-------) "); 598 598 else 599 - trace_seq_printf(s, "(%5d) ", tgid); 599 + trace_seq_printf(s, "(%7d) ", tgid); 600 600 } 601 601 602 602 trace_seq_printf(s, "[%03d] ", iter->cpu); ··· 636 636 trace_find_cmdline(entry->pid, comm); 637 637 638 638 trace_seq_printf( 639 - s, "%16s %5d %3d %d %08x %08lx ", 639 + s, "%16s %7d %3d %d %08x %08lx ", 640 640 comm, entry->pid, iter->cpu, entry->flags, 641 641 entry->preempt_count, iter->idx); 642 642 } else { ··· 917 917 S = task_index_to_char(field->prev_state); 918 918 trace_find_cmdline(field->next_pid, comm); 919 919 trace_seq_printf(&iter->seq, 920 - " %5d:%3d:%c %s [%03d] %5d:%3d:%c %s\n", 920 + " %7d:%3d:%c %s [%03d] %7d:%3d:%c %s\n", 921 921 field->prev_pid, 922 922 field->prev_prio, 923 923 S, delim,
+24 -14
lib/bootconfig.c
··· 31 31 static struct xbc_node *last_parent __initdata;
32 32 static const char *xbc_err_msg __initdata;
33 33 static int xbc_err_pos __initdata;
34 + static int open_brace[XBC_DEPTH_MAX] __initdata;
35 + static int brace_index __initdata;
34 36
35 37 static int __init xbc_parse_error(const char *msg, const char *p)
36 38 {
··· 433 431 return p;
434 432 }
435 433
436 - static int __init __xbc_open_brace(void)
434 + static int __init __xbc_open_brace(char *p)
437 435 {
438 - /* Mark the last key as open brace */
439 - last_parent->next = XBC_NODE_MAX;
436 + /* Push the last key as open brace */
437 + open_brace[brace_index++] = xbc_node_index(last_parent);
438 + if (brace_index >= XBC_DEPTH_MAX)
439 + return xbc_parse_error("Exceed max depth of braces", p);
440 440
441 441 return 0;
442 442 }
443 443
444 444 static int __init __xbc_close_brace(char *p)
445 445 {
446 - struct xbc_node *node;
447 -
448 - if (!last_parent || last_parent->next != XBC_NODE_MAX)
446 + brace_index--;
447 + if (!last_parent || brace_index < 0 ||
448 + (open_brace[brace_index] != xbc_node_index(last_parent)))
449 449 return xbc_parse_error("Unexpected closing brace", p);
450 450
451 - node = last_parent;
452 - node->next = 0;
453 - do {
454 - node = xbc_node_get_parent(node);
455 - } while (node && node->next != XBC_NODE_MAX);
456 - last_parent = node;
451 + if (brace_index == 0)
452 + last_parent = NULL;
453 + else
454 + last_parent = &xbc_nodes[open_brace[brace_index - 1]];
457 455
458 456 return 0;
459 457 }
··· 494 492 break;
495 493 }
496 494 if (strchr(",;\n#}", c)) {
497 - v = strim(v);
498 495 *p++ = '\0';
496 + v = strim(v);
499 497 break;
500 498 }
501 499 }
··· 663 661 return ret;
664 662 *k = n;
665 663
666 - return __xbc_open_brace();
664 + return __xbc_open_brace(n - 1);
667 665 }
668 666
669 667 static int __init xbc_close_brace(char **k, char *n)
··· 682 680 {
683 681 int i, depth, len, wlen;
684 682 struct xbc_node *n, *m;
683 +
684 + /* Brace closing */
685 + if (brace_index) {
686 + n = &xbc_nodes[open_brace[brace_index]];
687 + return xbc_parse_error("Brace is not closed",
688 + xbc_node_get_data(n));
689 + }
685 690
686 691 /* Empty tree */
687 692 if (xbc_node_num == 0) {
··· 754 745 xbc_node_num = 0;
755 746 memblock_free(__pa(xbc_nodes), sizeof(struct xbc_node) * XBC_NODE_MAX);
756 747 xbc_nodes = NULL;
748 + brace_index = 0;
757 749 }
758 750
759 751 /**
+1
lib/memregion.c
··· 2 2 /* identifiers for device / performance-differentiated memory regions */ 3 3 #include <linux/idr.h> 4 4 #include <linux/types.h> 5 + #include <linux/memregion.h> 5 6 6 7 static DEFINE_IDA(memregion_ids); 7 8
+1 -1
lib/random32.c
··· 49 49 } 50 50 #endif 51 51 52 - DEFINE_PER_CPU(struct rnd_state, net_rand_state); 52 + DEFINE_PER_CPU(struct rnd_state, net_rand_state) __latent_entropy; 53 53 54 54 /** 55 55 * prandom_u32_state - seeded pseudo-random number generator.
+24
lib/string.c
··· 272 272 } 273 273 EXPORT_SYMBOL(strscpy_pad); 274 274 275 + /** 276 + * stpcpy - copy a string from src to dest returning a pointer to the new end 277 + * of dest, including src's %NUL-terminator. May overrun dest. 278 + * @dest: pointer to end of string being copied into. Must be large enough 279 + * to receive copy. 280 + * @src: pointer to the beginning of string being copied from. Must not overlap 281 + * dest. 282 + * 283 + * stpcpy differs from strcpy in a key way: the return value is a pointer 284 + * to the new %NUL-terminating character in @dest. (For strcpy, the return 285 + * value is a pointer to the start of @dest). This interface is considered 286 + * unsafe as it doesn't perform bounds checking of the inputs. As such it's 287 + * not recommended for usage. Instead, its definition is provided in case 288 + * the compiler lowers other libcalls to stpcpy. 289 + */ 290 + char *stpcpy(char *__restrict__ dest, const char *__restrict__ src); 291 + char *stpcpy(char *__restrict__ dest, const char *__restrict__ src) 292 + { 293 + while ((*dest++ = *src++) != '\0') 294 + /* nothing */; 295 + return --dest; 296 + } 297 + EXPORT_SYMBOL(stpcpy); 298 + 275 299 #ifndef __HAVE_ARCH_STRCAT 276 300 /** 277 301 * strcat - Append one %NUL-terminated string to another
+1 -1
lib/test_rhashtable.c
··· 434 434 } else { 435 435 if (WARN(err != -ENOENT, "removed non-existent element, error %d not %d", 436 436 err, -ENOENT)) 437 - continue; 437 + continue; 438 438 } 439 439 } 440 440
+5 -1
mm/filemap.c
··· 2365 2365 } 2366 2366 2367 2367 if (!PageUptodate(page)) { 2368 - error = lock_page_killable(page); 2368 + if (iocb->ki_flags & IOCB_WAITQ) 2369 + error = lock_page_async(page, iocb->ki_waitq); 2370 + else 2371 + error = lock_page_killable(page); 2372 + 2369 2373 if (unlikely(error)) 2370 2374 goto readpage_error; 2371 2375 if (!PageUptodate(page)) {
+15 -9
mm/gup.c
··· 1255 1255 BUG_ON(*locked != 1);
1256 1256 }
1257 1257
1258 + if (flags & FOLL_PIN)
1259 + atomic_set(&mm->has_pinned, 1);
1260 +
1258 1261 /*
1259 1262 * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
1260 1263 * is to set FOLL_GET if the caller wants pages[] filled in (but has
··· 2488 2485 return 1;
2489 2486 }
2490 2487
2491 - static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
2488 + static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned long end,
2492 2489 unsigned int flags, struct page **pages, int *nr)
2493 2490 {
2494 2491 unsigned long next;
2495 2492 pmd_t *pmdp;
2496 2493
2497 - pmdp = pmd_offset(&pud, addr);
2494 + pmdp = pmd_offset_lockless(pudp, pud, addr);
2498 2495 do {
2499 2496 pmd_t pmd = READ_ONCE(*pmdp);
2500 2497
··· 2531 2528 return 1;
2532 2529 }
2533 2530
2534 - static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
2531 + static int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr, unsigned long end,
2535 2532 unsigned int flags, struct page **pages, int *nr)
2536 2533 {
2537 2534 unsigned long next;
2538 2535 pud_t *pudp;
2539 2536
2540 - pudp = pud_offset(&p4d, addr);
2537 + pudp = pud_offset_lockless(p4dp, p4d, addr);
2541 2538 do {
2542 2539 pud_t pud = READ_ONCE(*pudp);
2543 2540
··· 2552 2549 if (!gup_huge_pd(__hugepd(pud_val(pud)), addr,
2553 2550 PUD_SHIFT, next, flags, pages, nr))
2554 2551 return 0;
2555 - } else if (!gup_pmd_range(pud, addr, next, flags, pages, nr))
2552 + } else if (!gup_pmd_range(pudp, pud, addr, next, flags, pages, nr))
2556 2553 return 0;
2557 2554 } while (pudp++, addr = next, addr != end);
2558 2555
2559 2556 return 1;
2560 2557 }
2561 2558
2562 - static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
2559 + static int gup_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr, unsigned long end,
2563 2560 unsigned int flags, struct page **pages, int *nr)
2564 2561 {
2565 2562 unsigned long next;
2566 2563 p4d_t *p4dp;
2567 2564
2568 - p4dp = p4d_offset(&pgd, addr);
2565 + p4dp = p4d_offset_lockless(pgdp, pgd, addr);
2569 2566 do {
2570 2567 p4d_t p4d = READ_ONCE(*p4dp);
2571 2568
··· 2577 2574 if (!gup_huge_pd(__hugepd(p4d_val(p4d)), addr,
2578 2575 P4D_SHIFT, next, flags, pages, nr))
2579 2576 return 0;
2580 - } else if (!gup_pud_range(p4d, addr, next, flags, pages, nr))
2577 + } else if (!gup_pud_range(p4dp, p4d, addr, next, flags, pages, nr))
2581 2578 return 0;
2582 2579 } while (p4dp++, addr = next, addr != end);
2583 2580
··· 2605 2602 if (!gup_huge_pd(__hugepd(pgd_val(pgd)), addr,
2606 2603 PGDIR_SHIFT, next, flags, pages, nr))
2607 2604 return;
2608 - } else if (!gup_p4d_range(pgd, addr, next, flags, pages, nr))
2605 + } else if (!gup_p4d_range(pgdp, pgd, addr, next, flags, pages, nr))
2609 2606 return;
2610 2607 } while (pgdp++, addr = next, addr != end);
2611 2608 }
··· 2662 2659 FOLL_FORCE | FOLL_PIN | FOLL_GET |
2663 2660 FOLL_FAST_ONLY)))
2664 2661 return -EINVAL;
2662 +
2663 + if (gup_flags & FOLL_PIN)
2664 + atomic_set(&current->mm->has_pinned, 1);
2665 2665
2666 2666 if (!(gup_flags & FOLL_FAST_ONLY))
2667 2667 might_lock_read(&current->mm->mmap_lock);
+28
mm/huge_memory.c
··· 1074 1074 1075 1075 src_page = pmd_page(pmd); 1076 1076 VM_BUG_ON_PAGE(!PageHead(src_page), src_page); 1077 + 1078 + /* 1079 + * If this page is a potentially pinned page, split and retry the fault 1080 + * with smaller page size. Normally this should not happen because the 1081 + * userspace should use MADV_DONTFORK upon pinned regions. This is a 1082 + * best effort that the pinned pages won't be replaced by another 1083 + * random page during the coming copy-on-write. 1084 + */ 1085 + if (unlikely(is_cow_mapping(vma->vm_flags) && 1086 + atomic_read(&src_mm->has_pinned) && 1087 + page_maybe_dma_pinned(src_page))) { 1088 + pte_free(dst_mm, pgtable); 1089 + spin_unlock(src_ptl); 1090 + spin_unlock(dst_ptl); 1091 + __split_huge_pmd(vma, src_pmd, addr, false, NULL); 1092 + return -EAGAIN; 1093 + } 1094 + 1077 1095 get_page(src_page); 1078 1096 page_dup_rmap(src_page, true); 1079 1097 add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR); ··· 1193 1175 */ 1194 1176 if (is_huge_zero_pud(pud)) { 1195 1177 /* No huge zero pud yet */ 1178 + } 1179 + 1180 + /* Please refer to comments in copy_huge_pmd() */ 1181 + if (unlikely(is_cow_mapping(vma->vm_flags) && 1182 + atomic_read(&src_mm->has_pinned) && 1183 + page_maybe_dma_pinned(pud_page(pud)))) { 1184 + spin_unlock(src_ptl); 1185 + spin_unlock(dst_ptl); 1186 + __split_huge_pud(vma, src_pud, addr); 1187 + return -EAGAIN; 1196 1188 } 1197 1189 1198 1190 pudp_set_wrprotect(src_mm, addr, src_pud);
+1 -1
mm/madvise.c
··· 381 381 return 0; 382 382 } 383 383 384 + regular_page: 384 385 if (pmd_trans_unstable(pmd)) 385 386 return 0; 386 - regular_page: 387 387 #endif 388 388 tlb_change_page_size(tlb, PAGE_SIZE); 389 389 orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+2 -2
mm/memcontrol.c
··· 1538 1538 memcg_page_state(memcg, WORKINGSET_ACTIVATE_ANON)); 1539 1539 seq_buf_printf(&s, "workingset_activate_file %lu\n", 1540 1540 memcg_page_state(memcg, WORKINGSET_ACTIVATE_FILE)); 1541 - seq_buf_printf(&s, "workingset_restore %lu\n", 1541 + seq_buf_printf(&s, "workingset_restore_anon %lu\n", 1542 1542 memcg_page_state(memcg, WORKINGSET_RESTORE_ANON)); 1543 - seq_buf_printf(&s, "workingset_restore %lu\n", 1543 + seq_buf_printf(&s, "workingset_restore_file %lu\n", 1544 1544 memcg_page_state(memcg, WORKINGSET_RESTORE_FILE)); 1545 1545 seq_buf_printf(&s, "workingset_nodereclaim %lu\n", 1546 1546 memcg_page_state(memcg, WORKINGSET_NODERECLAIM));
+280 -91
mm/memory.c
··· 695 695 * covered by this vma. 696 696 */ 697 697 698 - static inline unsigned long 699 - copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, 698 + static unsigned long 699 + copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, 700 700 pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma, 701 701 unsigned long addr, int *rss) 702 702 { 703 703 unsigned long vm_flags = vma->vm_flags; 704 704 pte_t pte = *src_pte; 705 705 struct page *page; 706 + swp_entry_t entry = pte_to_swp_entry(pte); 706 707 707 - /* pte contains position in swap or file, so copy. */ 708 - if (unlikely(!pte_present(pte))) { 709 - swp_entry_t entry = pte_to_swp_entry(pte); 708 + if (likely(!non_swap_entry(entry))) { 709 + if (swap_duplicate(entry) < 0) 710 + return entry.val; 710 711 711 - if (likely(!non_swap_entry(entry))) { 712 - if (swap_duplicate(entry) < 0) 713 - return entry.val; 714 - 715 - /* make sure dst_mm is on swapoff's mmlist. */ 716 - if (unlikely(list_empty(&dst_mm->mmlist))) { 717 - spin_lock(&mmlist_lock); 718 - if (list_empty(&dst_mm->mmlist)) 719 - list_add(&dst_mm->mmlist, 720 - &src_mm->mmlist); 721 - spin_unlock(&mmlist_lock); 722 - } 723 - rss[MM_SWAPENTS]++; 724 - } else if (is_migration_entry(entry)) { 725 - page = migration_entry_to_page(entry); 726 - 727 - rss[mm_counter(page)]++; 728 - 729 - if (is_write_migration_entry(entry) && 730 - is_cow_mapping(vm_flags)) { 731 - /* 732 - * COW mappings require pages in both 733 - * parent and child to be set to read. 
734 - */
735 - make_migration_entry_read(&entry);
736 - pte = swp_entry_to_pte(entry);
737 - if (pte_swp_soft_dirty(*src_pte))
738 - pte = pte_swp_mksoft_dirty(pte);
739 - if (pte_swp_uffd_wp(*src_pte))
740 - pte = pte_swp_mkuffd_wp(pte);
741 - set_pte_at(src_mm, addr, src_pte, pte);
742 - }
743 - } else if (is_device_private_entry(entry)) {
744 - page = device_private_entry_to_page(entry);
745 -
746 - /*
747 - * Update rss count even for unaddressable pages, as
748 - * they should treated just like normal pages in this
749 - * respect.
750 - *
751 - * We will likely want to have some new rss counters
752 - * for unaddressable pages, at some point. But for now
753 - * keep things as they are.
754 - */
755 - get_page(page);
756 - rss[mm_counter(page)]++;
757 - page_dup_rmap(page, false);
758 -
759 - /*
760 - * We do not preserve soft-dirty information, because so
761 - * far, checkpoint/restore is the only feature that
762 - * requires that. And checkpoint/restore does not work
763 - * when a device driver is involved (you cannot easily
764 - * save and restore device driver state).
765 - */
766 - if (is_write_device_private_entry(entry) &&
767 - is_cow_mapping(vm_flags)) {
768 - make_device_private_entry_read(&entry);
769 - pte = swp_entry_to_pte(entry);
770 - if (pte_swp_uffd_wp(*src_pte))
771 - pte = pte_swp_mkuffd_wp(pte);
772 - set_pte_at(src_mm, addr, src_pte, pte);
773 - }
712 + /* make sure dst_mm is on swapoff's mmlist. */
713 + if (unlikely(list_empty(&dst_mm->mmlist))) {
714 + spin_lock(&mmlist_lock);
715 + if (list_empty(&dst_mm->mmlist))
716 + list_add(&dst_mm->mmlist,
717 + &src_mm->mmlist);
718 + spin_unlock(&mmlist_lock);
774 719 }
775 - goto out_set_pte;
720 + rss[MM_SWAPENTS]++;
721 + } else if (is_migration_entry(entry)) {
722 + page = migration_entry_to_page(entry);
723 +
724 + rss[mm_counter(page)]++;
725 +
726 + if (is_write_migration_entry(entry) &&
727 + is_cow_mapping(vm_flags)) {
728 + /*
729 + * COW mappings require pages in both
730 + * parent and child to be set to read.
731 + */
732 + make_migration_entry_read(&entry);
733 + pte = swp_entry_to_pte(entry);
734 + if (pte_swp_soft_dirty(*src_pte))
735 + pte = pte_swp_mksoft_dirty(pte);
736 + if (pte_swp_uffd_wp(*src_pte))
737 + pte = pte_swp_mkuffd_wp(pte);
738 + set_pte_at(src_mm, addr, src_pte, pte);
739 + }
740 + } else if (is_device_private_entry(entry)) {
741 + page = device_private_entry_to_page(entry);
742 +
743 + /*
744 + * Update rss count even for unaddressable pages, as
745 + * they should treated just like normal pages in this
746 + * respect.
747 + *
748 + * We will likely want to have some new rss counters
749 + * for unaddressable pages, at some point. But for now
750 + * keep things as they are.
751 + */
752 + get_page(page);
753 + rss[mm_counter(page)]++;
754 + page_dup_rmap(page, false);
755 +
756 + /*
757 + * We do not preserve soft-dirty information, because so
758 + * far, checkpoint/restore is the only feature that
759 + * requires that. And checkpoint/restore does not work
760 + * when a device driver is involved (you cannot easily
761 + * save and restore device driver state).
762 + */ 763 + if (is_write_device_private_entry(entry) && 764 + is_cow_mapping(vm_flags)) { 765 + make_device_private_entry_read(&entry); 766 + pte = swp_entry_to_pte(entry); 767 + if (pte_swp_uffd_wp(*src_pte)) 768 + pte = pte_swp_mkuffd_wp(pte); 769 + set_pte_at(src_mm, addr, src_pte, pte); 770 + } 771 + } 772 + set_pte_at(dst_mm, addr, dst_pte, pte); 773 + return 0; 774 + } 775 + 776 + /* 777 + * Copy a present and normal page if necessary. 778 + * 779 + * NOTE! The usual case is that this doesn't need to do 780 + * anything, and can just return a positive value. That 781 + * will let the caller know that it can just increase 782 + * the page refcount and re-use the pte the traditional 783 + * way. 784 + * 785 + * But _if_ we need to copy it because it needs to be 786 + * pinned in the parent (and the child should get its own 787 + * copy rather than just a reference to the same page), 788 + * we'll do that here and return zero to let the caller 789 + * know we're done. 790 + * 791 + * And if we need a pre-allocated page but don't yet have 792 + * one, return a negative error to let the preallocation 793 + * code know so that it can do so outside the page table 794 + * lock. 795 + */ 796 + static inline int 797 + copy_present_page(struct mm_struct *dst_mm, struct mm_struct *src_mm, 798 + pte_t *dst_pte, pte_t *src_pte, 799 + struct vm_area_struct *vma, struct vm_area_struct *new, 800 + unsigned long addr, int *rss, struct page **prealloc, 801 + pte_t pte, struct page *page) 802 + { 803 + struct page *new_page; 804 + 805 + if (!is_cow_mapping(vma->vm_flags)) 806 + return 1; 807 + 808 + /* 809 + * The trick starts. 810 + * 811 + * What we want to do is to check whether this page may 812 + * have been pinned by the parent process. If so, 813 + * instead of wrprotect the pte on both sides, we copy 814 + * the page immediately so that we'll always guarantee 815 + * the pinned page won't be randomly replaced in the 816 + * future. 
817 + * 818 + * To achieve this, we do the following: 819 + * 820 + * 1. Write-protect the pte if it's writable. This is 821 + * to protect concurrent write fast-gup with 822 + * FOLL_PIN, so that we'll fail the fast-gup with 823 + * the write bit removed. 824 + * 825 + * 2. Check page_maybe_dma_pinned() to see whether this 826 + * page may have been pinned. 827 + * 828 + * The order of these steps is important to serialize 829 + * against the fast-gup code (gup_pte_range()) on the 830 + * pte check and try_grab_compound_head(), so that 831 + * we'll make sure either we'll capture that fast-gup 832 + * so we'll copy the pinned page here, or we'll fail 833 + * that fast-gup. 834 + * 835 + * NOTE! Even if we don't end up copying the page, 836 + * we won't undo this wrprotect(), because the normal 837 + * reference copy will need it anyway. 838 + */ 839 + if (pte_write(pte)) 840 + ptep_set_wrprotect(src_mm, addr, src_pte); 841 + 842 + /* 843 + * These are the "normally we can just copy by reference" 844 + * checks. 845 + */ 846 + if (likely(!atomic_read(&src_mm->has_pinned))) 847 + return 1; 848 + if (likely(!page_maybe_dma_pinned(page))) 849 + return 1; 850 + 851 + /* 852 + * Uhhuh. It looks like the page might be a pinned page, 853 + * and we actually need to copy it. Now we can set the 854 + * source pte back to being writable. 855 + */ 856 + if (pte_write(pte)) 857 + set_pte_at(src_mm, addr, src_pte, pte); 858 + 859 + new_page = *prealloc; 860 + if (!new_page) 861 + return -EAGAIN; 862 + 863 + /* 864 + * We have a prealloc page, all good! Take it 865 + * over and copy the page & arm it. 
866 + */
867 + *prealloc = NULL;
868 + copy_user_highpage(new_page, page, addr, vma);
869 + __SetPageUptodate(new_page);
870 + page_add_new_anon_rmap(new_page, new, addr, false);
871 + lru_cache_add_inactive_or_unevictable(new_page, new);
872 + rss[mm_counter(new_page)]++;
873 +
874 + /* All done, just insert the new page copy in the child */
875 + pte = mk_pte(new_page, new->vm_page_prot);
876 + pte = maybe_mkwrite(pte_mkdirty(pte), new);
877 + set_pte_at(dst_mm, addr, dst_pte, pte);
878 + return 0;
879 + }
880 +
881 + /*
882 + * Copy one pte. Returns 0 if succeeded, or -EAGAIN if one preallocated page
883 + * is required to copy this pte.
884 + */
885 + static inline int
886 + copy_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
887 + pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
888 + struct vm_area_struct *new,
889 + unsigned long addr, int *rss, struct page **prealloc)
890 + {
891 + unsigned long vm_flags = vma->vm_flags;
892 + pte_t pte = *src_pte;
893 + struct page *page;
894 +
895 + page = vm_normal_page(vma, addr, pte);
896 + if (page) {
897 + int retval;
898 +
899 + retval = copy_present_page(dst_mm, src_mm,
900 + dst_pte, src_pte,
901 + vma, new,
902 + addr, rss, prealloc,
903 + pte, page);
904 + if (retval <= 0)
905 + return retval;
906 +
907 + get_page(page);
908 + page_dup_rmap(page, false);
909 + rss[mm_counter(page)]++;
776 910 }
777 911
778 912 /*
··· 934 800 if (!(vm_flags & VM_UFFD_WP))
935 801 pte = pte_clear_uffd_wp(pte);
936 802
937 - page = vm_normal_page(vma, addr, pte);
938 - if (page) {
939 - get_page(page);
940 - page_dup_rmap(page, false);
941 - rss[mm_counter(page)]++;
942 - }
943 -
944 - out_set_pte:
945 803 set_pte_at(dst_mm, addr, dst_pte, pte);
946 804 return 0;
947 805 }
948 806
807 + static inline struct page *
808 + page_copy_prealloc(struct mm_struct *src_mm, struct vm_area_struct *vma,
809 + unsigned long addr)
810 + {
811 + struct page *new_page;
812 +
813 + new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, addr);
814 + if (!new_page)
815 + return NULL;
816 +
817 + if (mem_cgroup_charge(new_page, src_mm, GFP_KERNEL)) {
818 + put_page(new_page);
819 + return NULL;
820 + }
821 + cgroup_throttle_swaprate(new_page, GFP_KERNEL);
822 +
823 + return new_page;
824 + }
825 +
949 826 static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
950 827 pmd_t *dst_pmd, pmd_t *src_pmd, struct vm_area_struct *vma,
828 + struct vm_area_struct *new,
951 829 unsigned long addr, unsigned long end)
952 830 {
953 831 pte_t *orig_src_pte, *orig_dst_pte;
954 832 pte_t *src_pte, *dst_pte;
955 833 spinlock_t *src_ptl, *dst_ptl;
956 - int progress = 0;
834 + int progress, ret = 0;
957 835 int rss[NR_MM_COUNTERS];
958 836 swp_entry_t entry = (swp_entry_t){0};
837 + struct page *prealloc = NULL;
959 838
960 839 again:
840 + progress = 0;
961 841 init_rss_vec(rss);
962 842
963 843 dst_pte = pte_alloc_map_lock(dst_mm, dst_pmd, addr, &dst_ptl);
964 - if (!dst_pte)
965 - return -ENOMEM;
844 + if (!dst_pte) {
845 + ret = -ENOMEM;
846 + goto out;
847 + }
966 848 src_pte = pte_offset_map(src_pmd, addr);
967 849 src_ptl = pte_lockptr(src_mm, src_pmd);
968 850 spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
··· 1001 851 progress++;
1002 852 continue;
1003 853 }
1004 - entry.val = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
854 + if (unlikely(!pte_present(*src_pte))) {
855 + entry.val = copy_nonpresent_pte(dst_mm, src_mm,
856 + dst_pte, src_pte,
1005 857 vma, addr, rss);
1006 - if (entry.val)
858 + if (entry.val)
859 + break;
860 + progress += 8;
861 + continue;
862 + }
863 + /* copy_present_pte() will clear `*prealloc' if consumed */
864 + ret = copy_present_pte(dst_mm, src_mm, dst_pte, src_pte,
865 + vma, new, addr, rss, &prealloc);
866 + /*
867 + * If we need a pre-allocated page for this pte, drop the
868 + * locks, allocate, and try again.
869 + */
870 + if (unlikely(ret == -EAGAIN))
1007 871 break;
872 + if (unlikely(prealloc)) {
873 + /*
874 + * pre-alloc page cannot be reused by next time so as
875 + * to strictly follow mempolicy (e.g., alloc_page_vma()
876 + * will allocate page according to address). This
877 + * could only happen if one pinned pte changed.
878 + */
879 + put_page(prealloc);
880 + prealloc = NULL;
881 + }
1008 882 progress += 8;
1009 883 } while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
1010 884
··· 1040 866 cond_resched();
1041 867
1042 868 if (entry.val) {
1043 - if (add_swap_count_continuation(entry, GFP_KERNEL) < 0)
869 + if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) {
870 + ret = -ENOMEM;
871 + goto out;
872 + }
873 + entry.val = 0;
874 + } else if (ret) {
875 + WARN_ON_ONCE(ret != -EAGAIN);
876 + prealloc = page_copy_prealloc(src_mm, vma, addr);
877 + if (!prealloc)
1044 878 return -ENOMEM;
1045 - progress = 0;
879 + /* We've captured and resolved the error. Reset, try again. */
880 + ret = 0;
1046 881 }
1047 882 if (addr != end)
1048 883 goto again;
1049 - return 0;
884 + out:
885 + if (unlikely(prealloc))
886 + put_page(prealloc);
887 + return ret;
1050 888 }
1051 889
1052 890 static inline int copy_pmd_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
1053 891 pud_t *dst_pud, pud_t *src_pud, struct vm_area_struct *vma,
892 + struct vm_area_struct *new,
1054 893 unsigned long addr, unsigned long end)
1055 894 {
1056 895 pmd_t *src_pmd, *dst_pmd;
··· 1090 903 if (pmd_none_or_clear_bad(src_pmd))
1091 904 continue;
1092 905 if (copy_pte_range(dst_mm, src_mm, dst_pmd, src_pmd,
1093 - vma, addr, next))
906 + vma, new, addr, next))
1094 907 return -ENOMEM;
1095 908 } while (dst_pmd++, src_pmd++, addr = next, addr != end);
1096 909 return 0;
··· 1098 911
1099 912 static inline int copy_pud_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
1100 913 p4d_t *dst_p4d, p4d_t *src_p4d, struct vm_area_struct *vma,
914 + struct vm_area_struct *new,
1101 915 unsigned long addr, unsigned long end)
1102 916 {
1103 917 pud_t *src_pud, *dst_pud;
··· 1125 937 if (pud_none_or_clear_bad(src_pud))
1126 938 continue;
1127 939 if (copy_pmd_range(dst_mm, src_mm, dst_pud, src_pud,
1128 - vma, addr, next))
940 + vma, new, addr, next))
1129 941 return -ENOMEM;
1130 942 } while (dst_pud++, src_pud++, addr = next, addr != end);
1131 943 return 0;
··· 1133 945
1134 946 static inline int copy_p4d_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
1135 947 pgd_t *dst_pgd, pgd_t *src_pgd, struct vm_area_struct *vma,
948 + struct vm_area_struct *new,
1136 949 unsigned long addr, unsigned long end)
1137 950 {
1138 951 p4d_t *src_p4d, *dst_p4d;
··· 1148 959 if (p4d_none_or_clear_bad(src_p4d))
1149 960 continue;
1150 961 if (copy_pud_range(dst_mm, src_mm, dst_p4d, src_p4d,
1151 - vma, addr, next))
962 + vma, new, addr, next))
1152 963 return -ENOMEM;
1153 964 } while (dst_p4d++, src_p4d++, addr = next, addr != end);
1154 965 return 0;
1155 966 }
1156 967
1157 968 int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
1158 - struct vm_area_struct *vma)
969 + struct vm_area_struct *vma, struct vm_area_struct *new)
1159 970 {
1160 971 pgd_t *src_pgd, *dst_pgd;
1161 972 unsigned long next;
··· 1210 1021 if (pgd_none_or_clear_bad(src_pgd))
1211 1022 continue;
1212 1023 if (unlikely(copy_p4d_range(dst_mm, src_mm, dst_pgd, src_pgd,
1213 - vma, addr, next))) {
1024 + vma, new, addr, next))) {
1214 1025 ret = -ENOMEM;
1215 1026 break;
1216 1027 }
··· 3144 2955 * page count reference, and the page is locked,
3145 2956 * it's dark out, and we're wearing sunglasses. Hit it.
3146 2957 */
3147 - wp_page_reuse(vmf);
3148 2958 unlock_page(page);
2959 + wp_page_reuse(vmf);
3149 2960 return VM_FAULT_WRITE;
3150 2961 } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
3151 2962 (VM_WRITE|VM_SHARED))) {
+3 -2
mm/memory_hotplug.c
··· 729 729 * are reserved so nobody should be touching them so we should be safe 730 730 */ 731 731 memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn, 732 - MEMMAP_HOTPLUG, altmap); 732 + MEMINIT_HOTPLUG, altmap); 733 733 734 734 set_zone_contiguous(zone); 735 735 } ··· 1080 1080 } 1081 1081 1082 1082 /* link memory sections under this node.*/ 1083 - ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1)); 1083 + ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1), 1084 + MEMINIT_HOTPLUG); 1084 1085 BUG_ON(ret); 1085 1086 1086 1087 /* create new memmap entry */
+3 -4
mm/migrate.c
··· 1446 1446 * Capture required information that might get lost 1447 1447 * during migration. 1448 1448 */ 1449 - is_thp = PageTransHuge(page); 1449 + is_thp = PageTransHuge(page) && !PageHuge(page); 1450 1450 nr_subpages = thp_nr_pages(page); 1451 1451 cond_resched(); 1452 1452 ··· 1472 1472 * we encounter them after the rest of the list 1473 1473 * is processed. 1474 1474 */ 1475 - if (PageTransHuge(page) && !PageHuge(page)) { 1475 + if (is_thp) { 1476 1476 lock_page(page); 1477 1477 rc = split_huge_page_to_list(page, from); 1478 1478 unlock_page(page); ··· 1481 1481 nr_thp_split++; 1482 1482 goto retry; 1483 1483 } 1484 - } 1485 - if (is_thp) { 1484 + 1486 1485 nr_thp_failed++; 1487 1486 nr_failed += nr_subpages; 1488 1487 goto out;
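The mm/migrate.c hunk tightens `is_thp` to `PageTransHuge(page) && !PageHuge(page)`, because in the kernel `PageTransHuge()` also returns true for hugetlb pages. A standalone model of the corrected predicate (fake structs, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative userspace model: PageTransHuge() is true for both THP
 * and hugetlb pages, so THP-only logic must also test !PageHuge(). */
struct fake_page { bool compound_head; bool hugetlb; };

static bool page_trans_huge(const struct fake_page *p)
{
    return p->compound_head;        /* true for THP *and* hugetlb */
}

static bool page_huge(const struct fake_page *p)
{
    return p->hugetlb;              /* true only for hugetlb */
}

/* the corrected check from the hunk above */
static bool page_is_thp(const struct fake_page *p)
{
    return page_trans_huge(p) && !page_huge(p);
}

/* helper so the predicate is easy to exercise */
static bool check(bool compound_head, bool hugetlb)
{
    struct fake_page p = { compound_head, hugetlb };
    return page_is_thp(&p);
}
```

Capturing this value once, before migration, also matters: after a split the same test on the same page would answer differently than the accounting needs.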
+21 -8
mm/page_alloc.c
··· 3367 3367 struct page *page; 3368 3368 3369 3369 if (likely(order == 0)) { 3370 - page = rmqueue_pcplist(preferred_zone, zone, gfp_flags, 3370 + /* 3371 + * MIGRATE_MOVABLE pcplist could have the pages on CMA area and 3372 + * we need to skip it when CMA area isn't allowed. 3373 + */ 3374 + if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA || 3375 + migratetype != MIGRATE_MOVABLE) { 3376 + page = rmqueue_pcplist(preferred_zone, zone, gfp_flags, 3371 3377 migratetype, alloc_flags); 3372 - goto out; 3378 + goto out; 3379 + } 3373 3380 } 3374 3381 3375 3382 /* ··· 3388 3381 3389 3382 do { 3390 3383 page = NULL; 3391 - if (alloc_flags & ALLOC_HARDER) { 3384 + /* 3385 + * order-0 request can reach here when the pcplist is skipped 3386 + * due to non-CMA allocation context. HIGHATOMIC area is 3387 + * reserved for high-order atomic allocation, so order-0 3388 + * request should skip it. 3389 + */ 3390 + if (order > 0 && alloc_flags & ALLOC_HARDER) { 3392 3391 page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC); 3393 3392 if (page) 3394 3393 trace_mm_page_alloc_zone_locked(page, order, migratetype); ··· 5988 5975 * done. Non-atomic initialization, single-pass. 5989 5976 */ 5990 5977 void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone, 5991 - unsigned long start_pfn, enum memmap_context context, 5978 + unsigned long start_pfn, enum meminit_context context, 5992 5979 struct vmem_altmap *altmap) 5993 5980 { 5994 5981 unsigned long pfn, end_pfn = start_pfn + size; ··· 6020 6007 * There can be holes in boot-time mem_map[]s handed to this 6021 6008 * function. They do not exist on hotplugged memory. 
6022 6009 */ 6023 - if (context == MEMMAP_EARLY) { 6010 + if (context == MEMINIT_EARLY) { 6024 6011 if (overlap_memmap_init(zone, &pfn)) 6025 6012 continue; 6026 6013 if (defer_init(nid, pfn, end_pfn)) ··· 6029 6016 6030 6017 page = pfn_to_page(pfn); 6031 6018 __init_single_page(page, pfn, zone, nid); 6032 - if (context == MEMMAP_HOTPLUG) 6019 + if (context == MEMINIT_HOTPLUG) 6033 6020 __SetPageReserved(page); 6034 6021 6035 6022 /* ··· 6112 6099 * check here not to call set_pageblock_migratetype() against 6113 6100 * pfn out of zone. 6114 6101 * 6115 - * Please note that MEMMAP_HOTPLUG path doesn't clear memmap 6102 + * Please note that MEMINIT_HOTPLUG path doesn't clear memmap 6116 6103 * because this is done early in section_activate() 6117 6104 */ 6118 6105 if (!(pfn & (pageblock_nr_pages - 1))) { ··· 6150 6137 if (end_pfn > start_pfn) { 6151 6138 size = end_pfn - start_pfn; 6152 6139 memmap_init_zone(size, nid, zone, start_pfn, 6153 - MEMMAP_EARLY, NULL); 6140 + MEMINIT_EARLY, NULL); 6154 6141 } 6155 6142 } 6156 6143 }
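The rmqueue() hunk above gates the order-0 pcplist fast path on whether CMA pages are acceptable, since MIGRATE_MOVABLE pcplists may hold CMA pages. The decision reduces to one boolean, sketched here as standalone code (names modeled on the kernel's, but this is not the kernel source):

```c
#include <assert.h>
#include <stdbool.h>

enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RECLAIMABLE };

/* Sketch of the pcplist gating: an order-0 request may use the
 * per-cpu list unless it is a MIGRATE_MOVABLE request whose context
 * disallows CMA pages. */
static bool can_use_pcplist(bool config_cma, bool alloc_cma,
                            enum migratetype mt)
{
    /* MIGRATE_MOVABLE pcplists may hold CMA pages, so skip them when
     * the allocation context does not allow CMA. */
    return !config_cma || alloc_cma || mt != MIGRATE_MOVABLE;
}
```

When the fast path is skipped this way, the request falls through to the buddy lists, which is why the second hunk must also keep order-0 requests out of the HIGHATOMIC reserve.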
+6 -2
mm/slab.c
··· 1632 1632 kmem_cache_free(cachep->freelist_cache, freelist); 1633 1633 } 1634 1634 1635 + /* 1636 + * Update the size of the caches before calling slabs_destroy as it may 1637 + * recursively call kfree. 1638 + */ 1635 1639 static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list) 1636 1640 { 1637 1641 struct page *page, *n; ··· 2157 2153 spin_lock(&n->list_lock); 2158 2154 free_block(cachep, ac->entry, ac->avail, node, &list); 2159 2155 spin_unlock(&n->list_lock); 2160 - slabs_destroy(cachep, &list); 2161 2156 ac->avail = 0; 2157 + slabs_destroy(cachep, &list); 2162 2158 } 2163 2159 2164 2160 static void drain_cpu_caches(struct kmem_cache *cachep) ··· 3406 3402 } 3407 3403 #endif 3408 3404 spin_unlock(&n->list_lock); 3409 - slabs_destroy(cachep, &list); 3410 3405 ac->avail -= batchcount; 3411 3406 memmove(ac->entry, &(ac->entry[batchcount]), sizeof(void *)*ac->avail); 3407 + slabs_destroy(cachep, &list); 3412 3408 } 3413 3409 3414 3410 /*
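The mm/slab.c hunks move `slabs_destroy()` after the per-CPU state updates because, as the new comment says, it may recursively call kfree() and hence re-enter the cache. A userspace model of why the ordering matters (not kernel code; the recursion is simulated):

```c
#include <assert.h>

/* Model of the slab fix above: the destroy step can recurse back into
 * the drain path, so ac->avail must be zeroed before it runs, or the
 * recursion would free the same objects again. */
static int avail;
static int freed;

static void drain(void);

static void slabs_destroy_stub(int nr)
{
    freed += nr;
    if (nr)
        drain();            /* simulate kfree() re-entering the cache */
}

static void drain(void)
{
    int nr = avail;

    avail = 0;              /* update state first ...                   */
    slabs_destroy_stub(nr); /* ... so the recursive drain sees avail==0 */
}

/* helper: drain a cache holding nr objects, return how many were freed */
static int drain_cache(int nr)
{
    avail = nr;
    freed = 0;
    drain();
    return freed;
}
```

With the updates done first, the re-entrant drain finds `avail == 0` and becomes a no-op; with the old ordering it would see stale state.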
+1 -5
mm/slub.c
··· 1413 1413 char *next_block; 1414 1414 slab_flags_t block_flags; 1415 1415 1416 - /* If slub_debug = 0, it folds into the if conditional. */ 1417 - if (!slub_debug_string) 1418 - return flags | slub_debug; 1419 - 1420 1416 len = strlen(name); 1421 1417 next_block = slub_debug_string; 1422 1418 /* Go through all blocks of debug options, see if any matches our slab's name */ ··· 1446 1450 } 1447 1451 } 1448 1452 1449 - return slub_debug; 1453 + return flags | slub_debug; 1450 1454 } 1451 1455 #else /* !CONFIG_SLUB_DEBUG */ 1452 1456 static inline void setup_object_debug(struct kmem_cache *s,
+1 -1
mm/swapfile.c
··· 1078 1078 goto nextsi; 1079 1079 } 1080 1080 if (size == SWAPFILE_CLUSTER) { 1081 - if (!(si->flags & SWP_FS)) 1081 + if (si->flags & SWP_BLKDEV) 1082 1082 n_ret = swap_alloc_cluster(si, swp_entries); 1083 1083 } else 1084 1084 n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
+117 -28
net/batman-adv/bridge_loop_avoidance.c
··· 25 25 #include <linux/lockdep.h> 26 26 #include <linux/netdevice.h> 27 27 #include <linux/netlink.h> 28 + #include <linux/preempt.h> 28 29 #include <linux/rculist.h> 29 30 #include <linux/rcupdate.h> 30 31 #include <linux/seq_file.h> ··· 84 83 */ 85 84 static inline u32 batadv_choose_backbone_gw(const void *data, u32 size) 86 85 { 87 - const struct batadv_bla_claim *claim = (struct batadv_bla_claim *)data; 86 + const struct batadv_bla_backbone_gw *gw; 88 87 u32 hash = 0; 89 88 90 - hash = jhash(&claim->addr, sizeof(claim->addr), hash); 91 - hash = jhash(&claim->vid, sizeof(claim->vid), hash); 89 + gw = (struct batadv_bla_backbone_gw *)data; 90 + hash = jhash(&gw->orig, sizeof(gw->orig), hash); 91 + hash = jhash(&gw->vid, sizeof(gw->vid), hash); 92 92 93 93 return hash % size; 94 94 } ··· 1581 1579 } 1582 1580 1583 1581 /** 1584 - * batadv_bla_check_bcast_duplist() - Check if a frame is in the broadcast dup. 1582 + * batadv_bla_check_duplist() - Check if a frame is in the broadcast dup. 1585 1583 * @bat_priv: the bat priv with all the soft interface information 1586 - * @skb: contains the bcast_packet to be checked 1584 + * @skb: contains the multicast packet to be checked 1585 + * @payload_ptr: pointer to position inside the head buffer of the skb 1586 + * marking the start of the data to be CRC'ed 1587 + * @orig: originator mac address, NULL if unknown 1587 1588 * 1588 - * check if it is on our broadcast list. Another gateway might 1589 - * have sent the same packet because it is connected to the same backbone, 1590 - * so we have to remove this duplicate. 1589 + * Check if it is on our broadcast list. Another gateway might have sent the 1590 + * same packet because it is connected to the same backbone, so we have to 1591 + * remove this duplicate. 1591 1592 * 1592 1593 * This is performed by checking the CRC, which will tell us 1593 1594 * with a good chance that it is the same packet. 
If it is furthermore ··· 1599 1594 * 1600 1595 * Return: true if a packet is in the duplicate list, false otherwise. 1601 1596 */ 1602 - bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv, 1603 - struct sk_buff *skb) 1597 + static bool batadv_bla_check_duplist(struct batadv_priv *bat_priv, 1598 + struct sk_buff *skb, u8 *payload_ptr, 1599 + const u8 *orig) 1604 1600 { 1605 - int i, curr; 1606 - __be32 crc; 1607 - struct batadv_bcast_packet *bcast_packet; 1608 1601 struct batadv_bcast_duplist_entry *entry; 1609 1602 bool ret = false; 1610 - 1611 - bcast_packet = (struct batadv_bcast_packet *)skb->data; 1603 + int i, curr; 1604 + __be32 crc; 1612 1605 1613 1606 /* calculate the crc ... */ 1614 - crc = batadv_skb_crc32(skb, (u8 *)(bcast_packet + 1)); 1607 + crc = batadv_skb_crc32(skb, payload_ptr); 1615 1608 1616 1609 spin_lock_bh(&bat_priv->bla.bcast_duplist_lock); 1617 1610 ··· 1628 1625 if (entry->crc != crc) 1629 1626 continue; 1630 1627 1631 - if (batadv_compare_eth(entry->orig, bcast_packet->orig)) 1632 - continue; 1628 + /* are the originators both known and not anonymous? */ 1629 + if (orig && !is_zero_ether_addr(orig) && 1630 + !is_zero_ether_addr(entry->orig)) { 1631 + /* If known, check if the new frame came from 1632 + * the same originator: 1633 + * We are safe to take identical frames from the 1634 + * same orig, if known, as multiplications in 1635 + * the mesh are detected via the (orig, seqno) pair. 1636 + * So we can be a bit more liberal here and allow 1637 + * identical frames from the same orig which the source 1638 + * host might have sent multiple times on purpose. 1639 + */ 1640 + if (batadv_compare_eth(entry->orig, orig)) 1641 + continue; 1642 + } 1633 1643 1634 1644 /* this entry seems to match: same crc, not too old, 1635 1645 * and from another gw. therefore return true to forbid it. 
··· 1658 1642 entry = &bat_priv->bla.bcast_duplist[curr]; 1659 1643 entry->crc = crc; 1660 1644 entry->entrytime = jiffies; 1661 - ether_addr_copy(entry->orig, bcast_packet->orig); 1645 + 1646 + /* known originator */ 1647 + if (orig) 1648 + ether_addr_copy(entry->orig, orig); 1649 + /* anonymous originator */ 1650 + else 1651 + eth_zero_addr(entry->orig); 1652 + 1662 1653 bat_priv->bla.bcast_duplist_curr = curr; 1663 1654 1664 1655 out: 1665 1656 spin_unlock_bh(&bat_priv->bla.bcast_duplist_lock); 1666 1657 1667 1658 return ret; 1659 + } 1660 + 1661 + /** 1662 + * batadv_bla_check_ucast_duplist() - Check if a frame is in the broadcast dup. 1663 + * @bat_priv: the bat priv with all the soft interface information 1664 + * @skb: contains the multicast packet to be checked, decapsulated from a 1665 + * unicast_packet 1666 + * 1667 + * Check if it is on our broadcast list. Another gateway might have sent the 1668 + * same packet because it is connected to the same backbone, so we have to 1669 + * remove this duplicate. 1670 + * 1671 + * Return: true if a packet is in the duplicate list, false otherwise. 1672 + */ 1673 + static bool batadv_bla_check_ucast_duplist(struct batadv_priv *bat_priv, 1674 + struct sk_buff *skb) 1675 + { 1676 + return batadv_bla_check_duplist(bat_priv, skb, (u8 *)skb->data, NULL); 1677 + } 1678 + 1679 + /** 1680 + * batadv_bla_check_bcast_duplist() - Check if a frame is in the broadcast dup. 1681 + * @bat_priv: the bat priv with all the soft interface information 1682 + * @skb: contains the bcast_packet to be checked 1683 + * 1684 + * Check if it is on our broadcast list. Another gateway might have sent the 1685 + * same packet because it is connected to the same backbone, so we have to 1686 + * remove this duplicate. 1687 + * 1688 + * Return: true if a packet is in the duplicate list, false otherwise. 
1689 + */ 1690 + bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv, 1691 + struct sk_buff *skb) 1692 + { 1693 + struct batadv_bcast_packet *bcast_packet; 1694 + u8 *payload_ptr; 1695 + 1696 + bcast_packet = (struct batadv_bcast_packet *)skb->data; 1697 + payload_ptr = (u8 *)(bcast_packet + 1); 1698 + 1699 + return batadv_bla_check_duplist(bat_priv, skb, payload_ptr, 1700 + bcast_packet->orig); 1668 1701 } 1669 1702 1670 1703 /** ··· 1877 1812 * @bat_priv: the bat priv with all the soft interface information 1878 1813 * @skb: the frame to be checked 1879 1814 * @vid: the VLAN ID of the frame 1880 - * @is_bcast: the packet came in a broadcast packet type. 1815 + * @packet_type: the batman packet type this frame came in 1881 1816 * 1882 1817 * batadv_bla_rx avoidance checks if: 1883 1818 * * we have to race for a claim ··· 1889 1824 * further process the skb. 1890 1825 */ 1891 1826 bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb, 1892 - unsigned short vid, bool is_bcast) 1827 + unsigned short vid, int packet_type) 1893 1828 { 1894 1829 struct batadv_bla_backbone_gw *backbone_gw; 1895 1830 struct ethhdr *ethhdr; ··· 1911 1846 goto handled; 1912 1847 1913 1848 if (unlikely(atomic_read(&bat_priv->bla.num_requests))) 1914 - /* don't allow broadcasts while requests are in flight */ 1915 - if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast) 1916 - goto handled; 1849 + /* don't allow multicast packets while requests are in flight */ 1850 + if (is_multicast_ether_addr(ethhdr->h_dest)) 1851 + /* Both broadcast flooding or multicast-via-unicasts 1852 + * delivery might send to multiple backbone gateways 1853 + * sharing the same LAN and therefore need to coordinate 1854 + * which backbone gateway forwards into the LAN, 1855 + * by claiming the payload source address. 1856 + * 1857 + * Broadcast flooding and multicast-via-unicasts 1858 + * delivery use the following two batman packet types. 
1859 + * Note: explicitly exclude BATADV_UNICAST_4ADDR, 1860 + * as the DHCP gateway feature will send explicitly 1861 + * to only one BLA gateway, so the claiming process 1862 + * should be avoided there. 1863 + */ 1864 + if (packet_type == BATADV_BCAST || 1865 + packet_type == BATADV_UNICAST) 1866 + goto handled; 1867 + 1868 + /* potential duplicates from foreign BLA backbone gateways via 1869 + * multicast-in-unicast packets 1870 + */ 1871 + if (is_multicast_ether_addr(ethhdr->h_dest) && 1872 + packet_type == BATADV_UNICAST && 1873 + batadv_bla_check_ucast_duplist(bat_priv, skb)) 1874 + goto handled; 1917 1875 1918 1876 ether_addr_copy(search_claim.addr, ethhdr->h_source); 1919 1877 search_claim.vid = vid; ··· 1971 1883 goto allow; 1972 1884 } 1973 1885 1974 - /* if it is a broadcast ... */ 1975 - if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast) { 1886 + /* if it is a multicast ... */ 1887 + if (is_multicast_ether_addr(ethhdr->h_dest) && 1888 + (packet_type == BATADV_BCAST || packet_type == BATADV_UNICAST)) { 1976 1889 /* ... drop it. the responsible gateway is in charge. 1977 1890 * 1978 - * We need to check is_bcast because with the gateway 1891 + * We need to check packet type because with the gateway 1979 1892 * feature, broadcasts (like DHCP requests) may be sent 1980 - * using a unicast packet type. 1893 + * using a unicast 4 address packet type. See comment above. 1981 1894 */ 1982 1895 goto handled; 1983 1896 } else {
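The refactored batadv_bla_check_duplist() above treats a CRC match as a duplicate unless both originators are known (non-zero) and identical, in which case the resend may be intentional. A standalone sketch of that decision (simplified; the kernel additionally ages entries and tracks a ring buffer):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Sketch (not the kernel code) of the duplicate decision: NULL or
 * all-zero MACs count as anonymous originators. */
static bool known(const unsigned char *mac)
{
    static const unsigned char zero[6];

    return mac && memcmp(mac, zero, 6) != 0;
}

static bool is_duplicate(unsigned int crc_a, unsigned int crc_b,
                         const unsigned char *orig_a,
                         const unsigned char *orig_b)
{
    if (crc_a != crc_b)
        return false;           /* different payload: not a duplicate */
    if (known(orig_a) && known(orig_b) &&
        memcmp(orig_a, orig_b, 6) == 0)
        return false;           /* same known orig: allowed resend */
    return true;                /* anonymous or foreign orig: drop */
}
```

This is why the anonymous (`orig == NULL`) unicast path is stricter than the broadcast path: without an originator, identical frames can only be dropped.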
+2 -2
net/batman-adv/bridge_loop_avoidance.h
··· 35 35 36 36 #ifdef CONFIG_BATMAN_ADV_BLA 37 37 bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb, 38 - unsigned short vid, bool is_bcast); 38 + unsigned short vid, int packet_type); 39 39 bool batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb, 40 40 unsigned short vid); 41 41 bool batadv_bla_is_backbone_gw(struct sk_buff *skb, ··· 66 66 67 67 static inline bool batadv_bla_rx(struct batadv_priv *bat_priv, 68 68 struct sk_buff *skb, unsigned short vid, 69 - bool is_bcast) 69 + int packet_type) 70 70 { 71 71 return false; 72 72 }
+36 -10
net/batman-adv/multicast.c
··· 51 51 #include <uapi/linux/batadv_packet.h> 52 52 #include <uapi/linux/batman_adv.h> 53 53 54 + #include "bridge_loop_avoidance.h" 54 55 #include "hard-interface.h" 55 56 #include "hash.h" 56 57 #include "log.h" ··· 1436 1435 } 1437 1436 1438 1437 /** 1438 + * batadv_mcast_forw_send_orig() - send a multicast packet to an originator 1439 + * @bat_priv: the bat priv with all the soft interface information 1440 + * @skb: the multicast packet to send 1441 + * @vid: the vlan identifier 1442 + * @orig_node: the originator to send the packet to 1443 + * 1444 + * Return: NET_XMIT_DROP in case of error or NET_XMIT_SUCCESS otherwise. 1445 + */ 1446 + int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv, 1447 + struct sk_buff *skb, 1448 + unsigned short vid, 1449 + struct batadv_orig_node *orig_node) 1450 + { 1451 + /* Avoid sending multicast-in-unicast packets to other BLA 1452 + * gateways - they already got the frame from the LAN side 1453 + * we share with them. 1454 + * TODO: Refactor to take BLA into account earlier, to avoid 1455 + * reducing the mcast_fanout count. 
1456 + */ 1457 + if (batadv_bla_is_backbone_gw_orig(bat_priv, orig_node->orig, vid)) { 1458 + dev_kfree_skb(skb); 1459 + return NET_XMIT_SUCCESS; 1460 + } 1461 + 1462 + return batadv_send_skb_unicast(bat_priv, skb, BATADV_UNICAST, 0, 1463 + orig_node, vid); 1464 + } 1465 + 1466 + /** 1439 1467 * batadv_mcast_forw_tt() - forwards a packet to multicast listeners 1440 1468 * @bat_priv: the bat priv with all the soft interface information 1441 1469 * @skb: the multicast packet to transmit ··· 1501 1471 break; 1502 1472 } 1503 1473 1504 - batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0, 1505 - orig_entry->orig_node, vid); 1474 + batadv_mcast_forw_send_orig(bat_priv, newskb, vid, 1475 + orig_entry->orig_node); 1506 1476 } 1507 1477 rcu_read_unlock(); 1508 1478 ··· 1543 1513 break; 1544 1514 } 1545 1515 1546 - batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0, 1547 - orig_node, vid); 1516 + batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node); 1548 1517 } 1549 1518 rcu_read_unlock(); 1550 1519 return ret; ··· 1580 1551 break; 1581 1552 } 1582 1553 1583 - batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0, 1584 - orig_node, vid); 1554 + batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node); 1585 1555 } 1586 1556 rcu_read_unlock(); 1587 1557 return ret; ··· 1646 1618 break; 1647 1619 } 1648 1620 1649 - batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0, 1650 - orig_node, vid); 1621 + batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node); 1651 1622 } 1652 1623 rcu_read_unlock(); 1653 1624 return ret; ··· 1683 1656 break; 1684 1657 } 1685 1658 1686 - batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0, 1687 - orig_node, vid); 1659 + batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node); 1688 1660 } 1689 1661 rcu_read_unlock(); 1690 1662 return ret;
+15
net/batman-adv/multicast.h
··· 46 46 batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb, 47 47 struct batadv_orig_node **mcast_single_orig); 48 48 49 + int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv, 50 + struct sk_buff *skb, 51 + unsigned short vid, 52 + struct batadv_orig_node *orig_node); 53 + 49 54 int batadv_mcast_forw_send(struct batadv_priv *bat_priv, struct sk_buff *skb, 50 55 unsigned short vid); 51 56 ··· 74 69 struct batadv_orig_node **mcast_single_orig) 75 70 { 76 71 return BATADV_FORW_ALL; 72 + } 73 + 74 + static inline int 75 + batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv, 76 + struct sk_buff *skb, 77 + unsigned short vid, 78 + struct batadv_orig_node *orig_node) 79 + { 80 + kfree_skb(skb); 81 + return NET_XMIT_DROP; 77 82 } 78 83 79 84 static inline int
+4
net/batman-adv/routing.c
··· 826 826 vid = batadv_get_vid(skb, hdr_len); 827 827 ethhdr = (struct ethhdr *)(skb->data + hdr_len); 828 828 829 + /* do not reroute multicast frames in a unicast header */ 830 + if (is_multicast_ether_addr(ethhdr->h_dest)) 831 + return true; 832 + 829 833 /* check if the destination client was served by this node and it is now 830 834 * roaming. In this case, it means that the node has got a ROAM_ADV 831 835 * message and that it knows the new destination in the mesh to re-route
+5 -6
net/batman-adv/soft-interface.c
··· 364 364 goto dropped; 365 365 ret = batadv_send_skb_via_gw(bat_priv, skb, vid); 366 366 } else if (mcast_single_orig) { 367 - ret = batadv_send_skb_unicast(bat_priv, skb, 368 - BATADV_UNICAST, 0, 369 - mcast_single_orig, vid); 367 + ret = batadv_mcast_forw_send_orig(bat_priv, skb, vid, 368 + mcast_single_orig); 370 369 } else if (forw_mode == BATADV_FORW_SOME) { 371 370 ret = batadv_mcast_forw_send(bat_priv, skb, vid); 372 371 } else { ··· 424 425 struct vlan_ethhdr *vhdr; 425 426 struct ethhdr *ethhdr; 426 427 unsigned short vid; 427 - bool is_bcast; 428 + int packet_type; 428 429 429 430 batadv_bcast_packet = (struct batadv_bcast_packet *)skb->data; 430 - is_bcast = (batadv_bcast_packet->packet_type == BATADV_BCAST); 431 + packet_type = batadv_bcast_packet->packet_type; 431 432 432 433 skb_pull_rcsum(skb, hdr_size); 433 434 skb_reset_mac_header(skb); ··· 470 471 /* Let the bridge loop avoidance check the packet. If will 471 472 * not handle it, we can safely push it up. 472 473 */ 473 - if (batadv_bla_rx(bat_priv, skb, vid, is_bcast)) 474 + if (batadv_bla_rx(bat_priv, skb, vid, packet_type)) 474 475 goto out; 475 476 476 477 if (orig_node)
+17 -10
net/bridge/br_vlan.c
··· 1288 1288 } 1289 1289 } 1290 1290 1291 - static int __br_vlan_get_pvid(const struct net_device *dev, 1292 - struct net_bridge_port *p, u16 *p_pvid) 1291 + int br_vlan_get_pvid(const struct net_device *dev, u16 *p_pvid) 1293 1292 { 1294 1293 struct net_bridge_vlan_group *vg; 1294 + struct net_bridge_port *p; 1295 1295 1296 + ASSERT_RTNL(); 1297 + p = br_port_get_check_rtnl(dev); 1296 1298 if (p) 1297 1299 vg = nbp_vlan_group(p); 1298 1300 else if (netif_is_bridge_master(dev)) ··· 1305 1303 *p_pvid = br_get_pvid(vg); 1306 1304 return 0; 1307 1305 } 1308 - 1309 - int br_vlan_get_pvid(const struct net_device *dev, u16 *p_pvid) 1310 - { 1311 - ASSERT_RTNL(); 1312 - 1313 - return __br_vlan_get_pvid(dev, br_port_get_check_rtnl(dev), p_pvid); 1314 - } 1315 1306 EXPORT_SYMBOL_GPL(br_vlan_get_pvid); 1316 1307 1317 1308 int br_vlan_get_pvid_rcu(const struct net_device *dev, u16 *p_pvid) 1318 1309 { 1319 - return __br_vlan_get_pvid(dev, br_port_get_check_rcu(dev), p_pvid); 1310 + struct net_bridge_vlan_group *vg; 1311 + struct net_bridge_port *p; 1312 + 1313 + p = br_port_get_check_rcu(dev); 1314 + if (p) 1315 + vg = nbp_vlan_group_rcu(p); 1316 + else if (netif_is_bridge_master(dev)) 1317 + vg = br_vlan_group_rcu(netdev_priv(dev)); 1318 + else 1319 + return -EINVAL; 1320 + 1321 + *p_pvid = br_get_pvid(vg); 1322 + return 0; 1320 1323 } 1321 1324 EXPORT_SYMBOL_GPL(br_vlan_get_pvid_rcu); 1322 1325
+1 -1
net/core/dev.c
··· 8647 8647 if (!first.id_len) 8648 8648 first = *ppid; 8649 8649 else if (memcmp(&first, ppid, sizeof(*ppid))) 8650 - return -ENODATA; 8650 + return -EOPNOTSUPP; 8651 8651 } 8652 8652 8653 8653 return err;
+1 -1
net/core/dst.c
··· 144 144 145 145 /* Operations to mark dst as DEAD and clean up the net device referenced 146 146 * by dst: 147 - * 1. put the dst under loopback interface and discard all tx/rx packets 147 + * 1. put the dst under blackhole interface and discard all tx/rx packets 148 148 * on this route. 149 149 * 2. release the net_device 150 150 * This function should be called when removing routes from the fib tree
+1 -1
net/core/fib_rules.c
··· 16 16 #include <net/ip_tunnels.h> 17 17 #include <linux/indirect_call_wrapper.h> 18 18 19 - #ifdef CONFIG_IPV6_MULTIPLE_TABLES 19 + #if defined(CONFIG_IPV6) && defined(CONFIG_IPV6_MULTIPLE_TABLES) 20 20 #ifdef CONFIG_IP_MULTIPLE_TABLES 21 21 #define INDIRECT_CALL_MT(f, f2, f1, ...) \ 22 22 INDIRECT_CALL_INET(f, f2, f1, __VA_ARGS__)
+10 -9
net/core/filter.c
··· 4838 4838 fl4.saddr = params->ipv4_src; 4839 4839 fl4.fl4_sport = params->sport; 4840 4840 fl4.fl4_dport = params->dport; 4841 + fl4.flowi4_multipath_hash = 0; 4841 4842 4842 4843 if (flags & BPF_FIB_LOOKUP_DIRECT) { 4843 4844 u32 tbid = l3mdev_fib_table_rcu(dev) ? : RT_TABLE_MAIN; ··· 7066 7065 bool indirect = BPF_MODE(orig->code) == BPF_IND; 7067 7066 struct bpf_insn *insn = insn_buf; 7068 7067 7069 - /* We're guaranteed here that CTX is in R6. */ 7070 - *insn++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_CTX); 7071 7068 if (!indirect) { 7072 7069 *insn++ = BPF_MOV64_IMM(BPF_REG_2, orig->imm); 7073 7070 } else { ··· 7073 7074 if (orig->imm) 7074 7075 *insn++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, orig->imm); 7075 7076 } 7077 + /* We're guaranteed here that CTX is in R6. */ 7078 + *insn++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_CTX); 7076 7079 7077 7080 switch (BPF_SIZE(orig->code)) { 7078 7081 case BPF_B: ··· 9523 9522 * trigger an explicit type generation here. 9524 9523 */ 9525 9524 BTF_TYPE_EMIT(struct tcp6_sock); 9526 - if (sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP && 9525 + if (sk && sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP && 9527 9526 sk->sk_family == AF_INET6) 9528 9527 return (unsigned long)sk; 9529 9528 ··· 9541 9540 9542 9541 BPF_CALL_1(bpf_skc_to_tcp_sock, struct sock *, sk) 9543 9542 { 9544 - if (sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP) 9543 + if (sk && sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP) 9545 9544 return (unsigned long)sk; 9546 9545 9547 9546 return (unsigned long)NULL; ··· 9559 9558 BPF_CALL_1(bpf_skc_to_tcp_timewait_sock, struct sock *, sk) 9560 9559 { 9561 9560 #ifdef CONFIG_INET 9562 - if (sk->sk_prot == &tcp_prot && sk->sk_state == TCP_TIME_WAIT) 9561 + if (sk && sk->sk_prot == &tcp_prot && sk->sk_state == TCP_TIME_WAIT) 9563 9562 return (unsigned long)sk; 9564 9563 #endif 9565 9564 9566 9565 #if IS_BUILTIN(CONFIG_IPV6) 9567 - if (sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_TIME_WAIT) 9566 + if (sk && 
sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_TIME_WAIT) 9568 9567 return (unsigned long)sk; 9569 9568 #endif 9570 9569 ··· 9583 9582 BPF_CALL_1(bpf_skc_to_tcp_request_sock, struct sock *, sk) 9584 9583 { 9585 9584 #ifdef CONFIG_INET 9586 - if (sk->sk_prot == &tcp_prot && sk->sk_state == TCP_NEW_SYN_RECV) 9585 + if (sk && sk->sk_prot == &tcp_prot && sk->sk_state == TCP_NEW_SYN_RECV) 9587 9586 return (unsigned long)sk; 9588 9587 #endif 9589 9588 9590 9589 #if IS_BUILTIN(CONFIG_IPV6) 9591 - if (sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_NEW_SYN_RECV) 9590 + if (sk && sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_NEW_SYN_RECV) 9592 9591 return (unsigned long)sk; 9593 9592 #endif 9594 9593 ··· 9610 9609 * trigger an explicit type generation here. 9611 9610 */ 9612 9611 BTF_TYPE_EMIT(struct udp6_sock); 9613 - if (sk_fullsock(sk) && sk->sk_protocol == IPPROTO_UDP && 9612 + if (sk && sk_fullsock(sk) && sk->sk_protocol == IPPROTO_UDP && 9614 9613 sk->sk_type == SOCK_DGRAM && sk->sk_family == AF_INET6) 9615 9614 return (unsigned long)sk; 9616 9615
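The net/core/filter.c hunks add `sk &&` in front of every dereference because the BPF socket-cast helpers may now be handed a NULL socket. A standalone sketch of the hardened shape (fake struct, not the kernel's `struct sock`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace model of the NULL-hardening above. */
#define IPPROTO_TCP_STUB 6

struct fake_sock { bool fullsock; int protocol; };

/* mirrors the corrected bpf_skc_to_tcp_sock() test: check sk before
 * touching any field */
static struct fake_sock *skc_to_tcp_sock(struct fake_sock *sk)
{
    if (sk && sk->fullsock && sk->protocol == IPPROTO_TCP_STUB)
        return sk;
    return NULL;
}
```

The same guard is repeated in every helper in the hunk; returning NULL for a NULL input is the safe degenerate case.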
+11 -11
net/core/net_namespace.c
··· 251 251 if (refcount_read(&net->count) == 0) 252 252 return NETNSA_NSID_NOT_ASSIGNED; 253 253 254 - spin_lock(&net->nsid_lock); 254 + spin_lock_bh(&net->nsid_lock); 255 255 id = __peernet2id(net, peer); 256 256 if (id >= 0) { 257 - spin_unlock(&net->nsid_lock); 257 + spin_unlock_bh(&net->nsid_lock); 258 258 return id; 259 259 } 260 260 ··· 264 264 * just been idr_remove()'d from there in cleanup_net(). 265 265 */ 266 266 if (!maybe_get_net(peer)) { 267 - spin_unlock(&net->nsid_lock); 267 + spin_unlock_bh(&net->nsid_lock); 268 268 return NETNSA_NSID_NOT_ASSIGNED; 269 269 } 270 270 271 271 id = alloc_netid(net, peer, -1); 272 - spin_unlock(&net->nsid_lock); 272 + spin_unlock_bh(&net->nsid_lock); 273 273 274 274 put_net(peer); 275 275 if (id < 0) ··· 534 534 for_each_net(tmp) { 535 535 int id; 536 536 537 - spin_lock(&tmp->nsid_lock); 537 + spin_lock_bh(&tmp->nsid_lock); 538 538 id = __peernet2id(tmp, net); 539 539 if (id >= 0) 540 540 idr_remove(&tmp->netns_ids, id); 541 - spin_unlock(&tmp->nsid_lock); 541 + spin_unlock_bh(&tmp->nsid_lock); 542 542 if (id >= 0) 543 543 rtnl_net_notifyid(tmp, RTM_DELNSID, id, 0, NULL, 544 544 GFP_KERNEL); 545 545 if (tmp == last) 546 546 break; 547 547 } 548 - spin_lock(&net->nsid_lock); 548 + spin_lock_bh(&net->nsid_lock); 549 549 idr_destroy(&net->netns_ids); 550 - spin_unlock(&net->nsid_lock); 550 + spin_unlock_bh(&net->nsid_lock); 551 551 } 552 552 553 553 static LLIST_HEAD(cleanup_list); ··· 760 760 return PTR_ERR(peer); 761 761 } 762 762 763 - spin_lock(&net->nsid_lock); 763 + spin_lock_bh(&net->nsid_lock); 764 764 if (__peernet2id(net, peer) >= 0) { 765 - spin_unlock(&net->nsid_lock); 765 + spin_unlock_bh(&net->nsid_lock); 766 766 err = -EEXIST; 767 767 NL_SET_BAD_ATTR(extack, nla); 768 768 NL_SET_ERR_MSG(extack, ··· 771 771 } 772 772 773 773 err = alloc_netid(net, peer, nsid); 774 - spin_unlock(&net->nsid_lock); 774 + spin_unlock_bh(&net->nsid_lock); 775 775 if (err >= 0) { 776 776 rtnl_net_notifyid(net, RTM_NEWNSID, err, 
NETLINK_CB(skb).portid, 777 777 nlh, GFP_KERNEL);
+8
net/dcb/dcbnl.c
··· 1426 1426 { 1427 1427 const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops; 1428 1428 struct nlattr *ieee[DCB_ATTR_IEEE_MAX + 1]; 1429 + int prio; 1429 1430 int err; 1430 1431 1431 1432 if (!ops) ··· 1475 1474 if (ieee[DCB_ATTR_DCB_BUFFER] && ops->dcbnl_setbuffer) { 1476 1475 struct dcbnl_buffer *buffer = 1477 1476 nla_data(ieee[DCB_ATTR_DCB_BUFFER]); 1477 + 1478 + for (prio = 0; prio < ARRAY_SIZE(buffer->prio2buffer); prio++) { 1479 + if (buffer->prio2buffer[prio] >= DCBX_MAX_BUFFERS) { 1480 + err = -EINVAL; 1481 + goto err; 1482 + } 1483 + } 1478 1484 1479 1485 err = ops->dcbnl_setbuffer(netdev, buffer); 1480 1486 if (err)
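The dcbnl hunk validates the user-supplied priority-to-buffer map before handing it to the driver, rejecting any entry that names a buffer index beyond `DCBX_MAX_BUFFERS`. A standalone version of that range check (the two constants are illustrative here):

```c
#include <assert.h>
#include <errno.h>

#define DCBX_MAX_BUFFERS 8
#define PRIO_COUNT 8

/* Sketch of the bounds check added above: reject a map that references
 * a buffer the hardware does not have. */
static int validate_prio2buffer(const unsigned char *prio2buffer)
{
    int prio;

    for (prio = 0; prio < PRIO_COUNT; prio++)
        if (prio2buffer[prio] >= DCBX_MAX_BUFFERS)
            return -EINVAL;
    return 0;
}
```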
+16 -2
net/dsa/slave.c
··· 1799 1799 1800 1800 dsa_slave_notify(slave_dev, DSA_PORT_REGISTER); 1801 1801 1802 - ret = register_netdev(slave_dev); 1802 + rtnl_lock(); 1803 + 1804 + ret = register_netdevice(slave_dev); 1803 1805 if (ret) { 1804 1806 netdev_err(master, "error %d registering interface %s\n", 1805 1807 ret, slave_dev->name); 1808 + rtnl_unlock(); 1806 1809 goto out_phy; 1807 1810 } 1808 1811 1812 + ret = netdev_upper_dev_link(master, slave_dev, NULL); 1813 + 1814 + rtnl_unlock(); 1815 + 1816 + if (ret) 1817 + goto out_unregister; 1818 + 1809 1819 return 0; 1810 1820 1821 + out_unregister: 1822 + unregister_netdev(slave_dev); 1811 1823 out_phy: 1812 1824 rtnl_lock(); 1813 1825 phylink_disconnect_phy(p->dp->pl); ··· 1836 1824 1837 1825 void dsa_slave_destroy(struct net_device *slave_dev) 1838 1826 { 1827 + struct net_device *master = dsa_slave_to_master(slave_dev); 1839 1828 struct dsa_port *dp = dsa_slave_to_port(slave_dev); 1840 1829 struct dsa_slave_priv *p = netdev_priv(slave_dev); 1841 1830 1842 1831 netif_carrier_off(slave_dev); 1843 1832 rtnl_lock(); 1833 + netdev_upper_dev_unlink(master, slave_dev); 1834 + unregister_netdevice(slave_dev); 1844 1835 phylink_disconnect_phy(dp->pl); 1845 1836 rtnl_unlock(); 1846 1837 1847 1838 dsa_slave_notify(slave_dev, DSA_PORT_UNREGISTER); 1848 - unregister_netdev(slave_dev); 1849 1839 phylink_destroy(dp->pl); 1850 1840 gro_cells_destroy(&p->gcells); 1851 1841 free_percpu(p->stats64);
+7 -4
net/dsa/tag_ocelot.c
··· 160 160 packing(injection, &qos_class, 19, 17, OCELOT_TAG_LEN, PACK, 0); 161 161 162 162 if (ocelot->ptp && (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) { 163 + struct sk_buff *clone = DSA_SKB_CB(skb)->clone; 164 + 163 165 rew_op = ocelot_port->ptp_cmd; 164 - if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) { 165 - rew_op |= (ocelot_port->ts_id % 4) << 3; 166 - ocelot_port->ts_id++; 167 - } 166 + /* Retrieve timestamp ID populated inside skb->cb[0] of the 167 + * clone by ocelot_port_add_txtstamp_skb 168 + */ 169 + if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) 170 + rew_op |= clone->cb[0] << 3; 168 171 169 172 packing(injection, &rew_op, 125, 117, OCELOT_TAG_LEN, PACK, 0); 170 173 }
+2 -2
net/ethtool/tunnels.c
··· 200 200 reply_len = ret + ethnl_reply_header_size(); 201 201 202 202 rskb = ethnl_reply_init(reply_len, req_info.dev, 203 - ETHTOOL_MSG_TUNNEL_INFO_GET, 203 + ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY, 204 204 ETHTOOL_A_TUNNEL_INFO_HEADER, 205 205 info, &reply_payload); 206 206 if (!rskb) { ··· 273 273 goto cont; 274 274 275 275 ehdr = ethnl_dump_put(skb, cb, 276 - ETHTOOL_MSG_TUNNEL_INFO_GET); 276 + ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY); 277 277 if (!ehdr) { 278 278 ret = -EMSGSIZE; 279 279 goto out;
+3 -3
net/hsr/hsr_netlink.c
··· 76 76 proto = nla_get_u8(data[IFLA_HSR_PROTOCOL]); 77 77 78 78 if (proto >= HSR_PROTOCOL_MAX) { 79 - NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol\n"); 79 + NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol"); 80 80 return -EINVAL; 81 81 } 82 82 ··· 84 84 proto_version = HSR_V0; 85 85 } else { 86 86 if (proto == HSR_PROTOCOL_PRP) { 87 - NL_SET_ERR_MSG_MOD(extack, "PRP version unsupported\n"); 87 + NL_SET_ERR_MSG_MOD(extack, "PRP version unsupported"); 88 88 return -EINVAL; 89 89 } 90 90 91 91 proto_version = nla_get_u8(data[IFLA_HSR_VERSION]); 92 92 if (proto_version > HSR_V1) { 93 93 NL_SET_ERR_MSG_MOD(extack, 94 - "Only HSR version 0/1 supported\n"); 94 + "Only HSR version 0/1 supported"); 95 95 return -EINVAL; 96 96 } 97 97 }
+1
net/ipv4/fib_frontend.c
··· 362 362 fl4.flowi4_tun_key.tun_id = 0; 363 363 fl4.flowi4_flags = 0; 364 364 fl4.flowi4_uid = sock_net_uid(net, NULL); 365 + fl4.flowi4_multipath_hash = 0; 365 366 366 367 no_addr = idev->ifa_list == NULL; 367 368
+15 -5
net/ipv4/inet_diag.c
··· 186 186 } 187 187 EXPORT_SYMBOL_GPL(inet_diag_msg_attrs_fill); 188 188 189 - static void inet_diag_parse_attrs(const struct nlmsghdr *nlh, int hdrlen, 190 - struct nlattr **req_nlas) 189 + static int inet_diag_parse_attrs(const struct nlmsghdr *nlh, int hdrlen, 190 + struct nlattr **req_nlas) 191 191 { 192 192 struct nlattr *nla; 193 193 int remaining; ··· 195 195 nlmsg_for_each_attr(nla, nlh, hdrlen, remaining) { 196 196 int type = nla_type(nla); 197 197 198 + if (type == INET_DIAG_REQ_PROTOCOL && nla_len(nla) != sizeof(u32)) 199 + return -EINVAL; 200 + 198 201 if (type < __INET_DIAG_REQ_MAX) 199 202 req_nlas[type] = nla; 200 203 } 204 + return 0; 201 205 } 202 206 203 207 static int inet_diag_get_protocol(const struct inet_diag_req_v2 *req, ··· 578 574 int err, protocol; 579 575 580 576 memset(&dump_data, 0, sizeof(dump_data)); 581 - inet_diag_parse_attrs(nlh, hdrlen, dump_data.req_nlas); 577 + err = inet_diag_parse_attrs(nlh, hdrlen, dump_data.req_nlas); 578 + if (err) 579 + return err; 580 + 582 581 protocol = inet_diag_get_protocol(req, &dump_data); 583 582 584 583 handler = inet_diag_lock_handler(protocol); ··· 1187 1180 if (!cb_data) 1188 1181 return -ENOMEM; 1189 1182 1190 - inet_diag_parse_attrs(nlh, hdrlen, cb_data->req_nlas); 1191 - 1183 + err = inet_diag_parse_attrs(nlh, hdrlen, cb_data->req_nlas); 1184 + if (err) { 1185 + kfree(cb_data); 1186 + return err; 1187 + } 1192 1188 nla = cb_data->inet_diag_nla_bc; 1193 1189 if (nla) { 1194 1190 err = inet_diag_bc_audit(nla, skb);
+2 -1
net/ipv4/ip_output.c
··· 74 74 #include <net/icmp.h> 75 75 #include <net/checksum.h> 76 76 #include <net/inetpeer.h> 77 + #include <net/inet_ecn.h> 77 78 #include <net/lwtunnel.h> 78 79 #include <linux/bpf-cgroup.h> 79 80 #include <linux/igmp.h> ··· 1704 1703 if (IS_ERR(rt)) 1705 1704 return; 1706 1705 1707 - inet_sk(sk)->tos = arg->tos; 1706 + inet_sk(sk)->tos = arg->tos & ~INET_ECN_MASK; 1708 1707 1709 1708 sk->sk_protocol = ip_hdr(skb)->protocol; 1710 1709 sk->sk_bound_dev_if = arg->bound_dev_if;
+1
net/ipv4/ip_tunnel_core.c
··· 554 554 555 555 attr = tb[LWTUNNEL_IP_OPT_VXLAN_GBP]; 556 556 md->gbp = nla_get_u32(attr); 557 + md->gbp &= VXLAN_GBP_MASK; 557 558 info->key.tun_flags |= TUNNEL_VXLAN_OPT; 558 559 } 559 560
+9 -5
net/ipv4/route.c
··· 786 786 neigh_event_send(n, NULL); 787 787 } else { 788 788 if (fib_lookup(net, fl4, &res, 0) == 0) { 789 - struct fib_nh_common *nhc = FIB_RES_NHC(res); 789 + struct fib_nh_common *nhc; 790 790 791 + fib_select_path(net, &res, fl4, skb); 792 + nhc = FIB_RES_NHC(res); 791 793 update_or_create_fnhe(nhc, fl4->daddr, new_gw, 792 794 0, false, 793 795 jiffies + ip_rt_gc_timeout); ··· 1015 1013 static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu) 1016 1014 { 1017 1015 struct dst_entry *dst = &rt->dst; 1016 + struct net *net = dev_net(dst->dev); 1018 1017 u32 old_mtu = ipv4_mtu(dst); 1019 1018 struct fib_result res; 1020 1019 bool lock = false; ··· 1036 1033 return; 1037 1034 1038 1035 rcu_read_lock(); 1039 - if (fib_lookup(dev_net(dst->dev), fl4, &res, 0) == 0) { 1040 - struct fib_nh_common *nhc = FIB_RES_NHC(res); 1036 + if (fib_lookup(net, fl4, &res, 0) == 0) { 1037 + struct fib_nh_common *nhc; 1041 1038 1039 + fib_select_path(net, &res, fl4, NULL); 1040 + nhc = FIB_RES_NHC(res); 1042 1041 update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock, 1043 1042 jiffies + ip_rt_mtu_expires); 1044 1043 } ··· 2152 2147 fl4.daddr = daddr; 2153 2148 fl4.saddr = saddr; 2154 2149 fl4.flowi4_uid = sock_net_uid(net, NULL); 2150 + fl4.flowi4_multipath_hash = 0; 2155 2151 2156 2152 if (fib4_rules_early_flow_dissect(net, skb, &fl4, &_flkeys)) { 2157 2153 flkeys = &_flkeys; ··· 2673 2667 fib_select_path(net, res, fl4, skb); 2674 2668 2675 2669 dev_out = FIB_RES_DEV(*res); 2676 - fl4->flowi4_oif = dev_out->ifindex; 2677 - 2678 2670 2679 2671 make_route: 2680 2672 rth = __mkroute_output(res, fl4, orig_oif, dev_out, flags);
+1
net/ipv6/Kconfig
··· 303 303 config IPV6_SEG6_HMAC 304 304 bool "IPv6: Segment Routing HMAC support" 305 305 depends on IPV6 306 + select CRYPTO 306 307 select CRYPTO_HMAC 307 308 select CRYPTO_SHA1 308 309 select CRYPTO_SHA256
+9 -4
net/ipv6/ip6_fib.c
··· 1993 1993 /* Need to own table->tb6_lock */ 1994 1994 int fib6_del(struct fib6_info *rt, struct nl_info *info) 1995 1995 { 1996 - struct fib6_node *fn = rcu_dereference_protected(rt->fib6_node, 1997 - lockdep_is_held(&rt->fib6_table->tb6_lock)); 1998 - struct fib6_table *table = rt->fib6_table; 1999 1996 struct net *net = info->nl_net; 2000 1997 struct fib6_info __rcu **rtp; 2001 1998 struct fib6_info __rcu **rtp_next; 1999 + struct fib6_table *table; 2000 + struct fib6_node *fn; 2002 2001 2003 - if (!fn || rt == net->ipv6.fib6_null_entry) 2002 + if (rt == net->ipv6.fib6_null_entry) 2003 + return -ENOENT; 2004 + 2005 + table = rt->fib6_table; 2006 + fn = rcu_dereference_protected(rt->fib6_node, 2007 + lockdep_is_held(&table->tb6_lock)); 2008 + if (!fn) 2004 2009 return -ENOENT; 2005 2010 2006 2011 WARN_ON(!(fn->fn_flags & RTN_RTINFO));
+1 -1
net/ipv6/route.c
··· 4202 4202 .fc_nlinfo.nl_net = net, 4203 4203 }; 4204 4204 4205 - cfg.fc_table = l3mdev_fib_table(dev) ? : RT6_TABLE_INFO, 4205 + cfg.fc_table = l3mdev_fib_table(dev) ? : RT6_TABLE_INFO; 4206 4206 cfg.fc_dst = *prefix; 4207 4207 cfg.fc_gateway = *gwaddr; 4208 4208
+14 -6
net/mac80211/airtime.c
··· 560 560 if (rate->idx < 0 || !rate->count) 561 561 return -1; 562 562 563 - if (rate->flags & IEEE80211_TX_RC_80_MHZ_WIDTH) 563 + if (rate->flags & IEEE80211_TX_RC_160_MHZ_WIDTH) 564 + stat->bw = RATE_INFO_BW_160; 565 + else if (rate->flags & IEEE80211_TX_RC_80_MHZ_WIDTH) 564 566 stat->bw = RATE_INFO_BW_80; 565 567 else if (rate->flags & IEEE80211_TX_RC_40_MHZ_WIDTH) 566 568 stat->bw = RATE_INFO_BW_40; ··· 670 668 * This will not be very accurate, but much better than simply 671 669 * assuming un-aggregated tx in all cases. 672 670 */ 673 - if (duration > 400) /* <= VHT20 MCS2 1S */ 671 + if (duration > 400 * 1024) /* <= VHT20 MCS2 1S */ 674 672 agg_shift = 1; 675 - else if (duration > 250) /* <= VHT20 MCS3 1S or MCS1 2S */ 673 + else if (duration > 250 * 1024) /* <= VHT20 MCS3 1S or MCS1 2S */ 676 674 agg_shift = 2; 677 - else if (duration > 150) /* <= VHT20 MCS5 1S or MCS3 2S */ 675 + else if (duration > 150 * 1024) /* <= VHT20 MCS5 1S or MCS2 2S */ 678 676 agg_shift = 3; 679 - else 677 + else if (duration > 70 * 1024) /* <= VHT20 MCS5 2S */ 680 678 agg_shift = 4; 679 + else if (stat.encoding != RX_ENC_HE || 680 + duration > 20 * 1024) /* <= HE40 MCS6 2S */ 681 + agg_shift = 5; 682 + else 683 + agg_shift = 6; 681 684 682 685 duration *= len; 683 686 duration /= AVG_PKT_SIZE; 684 687 duration /= 1024; 688 + duration += (overhead >> agg_shift); 685 689 686 - return duration + (overhead >> agg_shift); 690 + return max_t(u32, duration, 4); 687 691 } 688 692 689 693 if (!conf)
+2 -1
net/mac80211/mlme.c
··· 4861 4861 struct ieee80211_supported_band *sband; 4862 4862 struct cfg80211_chan_def chandef; 4863 4863 bool is_6ghz = cbss->channel->band == NL80211_BAND_6GHZ; 4864 + bool is_5ghz = cbss->channel->band == NL80211_BAND_5GHZ; 4864 4865 struct ieee80211_bss *bss = (void *)cbss->priv; 4865 4866 int ret; 4866 4867 u32 i; ··· 4880 4879 ifmgd->flags |= IEEE80211_STA_DISABLE_HE; 4881 4880 } 4882 4881 4883 - if (!sband->vht_cap.vht_supported && !is_6ghz) { 4882 + if (!sband->vht_cap.vht_supported && is_5ghz) { 4884 4883 ifmgd->flags |= IEEE80211_STA_DISABLE_VHT; 4885 4884 ifmgd->flags |= IEEE80211_STA_DISABLE_HE; 4886 4885 }
+2 -1
net/mac80211/rx.c
··· 451 451 else if (status->bw == RATE_INFO_BW_5) 452 452 channel_flags |= IEEE80211_CHAN_QUARTER; 453 453 454 - if (status->band == NL80211_BAND_5GHZ) 454 + if (status->band == NL80211_BAND_5GHZ || 455 + status->band == NL80211_BAND_6GHZ) 455 456 channel_flags |= IEEE80211_CHAN_OFDM | IEEE80211_CHAN_5GHZ; 456 457 else if (status->encoding != RX_ENC_LEGACY) 457 458 channel_flags |= IEEE80211_CHAN_DYN | IEEE80211_CHAN_2GHZ;
+4 -3
net/mac80211/util.c
··· 3353 3353 he_chandef.center_freq1 = 3354 3354 ieee80211_channel_to_frequency(he_6ghz_oper->ccfs0, 3355 3355 NL80211_BAND_6GHZ); 3356 - he_chandef.center_freq2 = 3357 - ieee80211_channel_to_frequency(he_6ghz_oper->ccfs1, 3358 - NL80211_BAND_6GHZ); 3356 + if (support_80_80 || support_160) 3357 + he_chandef.center_freq2 = 3358 + ieee80211_channel_to_frequency(he_6ghz_oper->ccfs1, 3359 + NL80211_BAND_6GHZ); 3359 3360 } 3360 3361 3361 3362 if (!cfg80211_chandef_valid(&he_chandef)) {
+4 -4
net/mac80211/vht.c
··· 168 168 /* take some capabilities as-is */ 169 169 cap_info = le32_to_cpu(vht_cap_ie->vht_cap_info); 170 170 vht_cap->cap = cap_info; 171 - vht_cap->cap &= IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895 | 172 - IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991 | 173 - IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454 | 174 - IEEE80211_VHT_CAP_RXLDPC | 171 + vht_cap->cap &= IEEE80211_VHT_CAP_RXLDPC | 175 172 IEEE80211_VHT_CAP_VHT_TXOP_PS | 176 173 IEEE80211_VHT_CAP_HTC_VHT | 177 174 IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK | ··· 176 179 IEEE80211_VHT_CAP_VHT_LINK_ADAPTATION_VHT_MRQ_MFB | 177 180 IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN | 178 181 IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN; 182 + 183 + vht_cap->cap |= min_t(u32, cap_info & IEEE80211_VHT_CAP_MAX_MPDU_MASK, 184 + own_cap.cap & IEEE80211_VHT_CAP_MAX_MPDU_MASK); 179 185 180 186 /* and some based on our own capabilities */ 181 187 switch (own_cap.cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK) {
+5 -3
net/mac802154/tx.c
··· 34 34 if (res) 35 35 goto err_tx; 36 36 37 - ieee802154_xmit_complete(&local->hw, skb, false); 38 - 39 37 dev->stats.tx_packets++; 40 38 dev->stats.tx_bytes += skb->len; 39 + 40 + ieee802154_xmit_complete(&local->hw, skb, false); 41 41 42 42 return; 43 43 ··· 78 78 79 79 /* async is priority, otherwise sync is fallback */ 80 80 if (local->ops->xmit_async) { 81 + unsigned int len = skb->len; 82 + 81 83 ret = drv_xmit_async(local, skb); 82 84 if (ret) { 83 85 ieee802154_wake_queue(&local->hw); ··· 87 85 } 88 86 89 87 dev->stats.tx_packets++; 90 - dev->stats.tx_bytes += skb->len; 88 + dev->stats.tx_bytes += len; 91 89 } else { 92 90 local->tx_skb = skb; 93 91 queue_work(local->workqueue, &local->tx_work);
+16 -3
net/mptcp/pm_netlink.c
··· 66 66 return a->port == b->port; 67 67 } 68 68 69 + static bool address_zero(const struct mptcp_addr_info *addr) 70 + { 71 + struct mptcp_addr_info zero; 72 + 73 + memset(&zero, 0, sizeof(zero)); 74 + zero.family = addr->family; 75 + 76 + return addresses_equal(addr, &zero, false); 77 + } 78 + 69 79 static void local_address(const struct sock_common *skc, 70 80 struct mptcp_addr_info *addr) 71 81 { ··· 181 171 182 172 static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk) 183 173 { 174 + struct mptcp_addr_info remote = { 0 }; 184 175 struct sock *sk = (struct sock *)msk; 185 176 struct mptcp_pm_addr_entry *local; 186 - struct mptcp_addr_info remote; 187 177 struct pm_nl_pernet *pernet; 188 178 189 179 pernet = net_generic(sock_net((struct sock *)msk), pm_nl_pernet_id); ··· 333 323 * addr 334 324 */ 335 325 local_address((struct sock_common *)msk, &msk_local); 336 - local_address((struct sock_common *)msk, &skc_local); 326 + local_address((struct sock_common *)skc, &skc_local); 337 327 if (addresses_equal(&msk_local, &skc_local, false)) 328 + return 0; 329 + 330 + if (address_zero(&skc_local)) 338 331 return 0; 339 332 340 333 pernet = net_generic(sock_net((struct sock *)msk), pm_nl_pernet_id); ··· 354 341 return ret; 355 342 356 343 /* address not found, add to local list */ 357 - entry = kmalloc(sizeof(*entry), GFP_KERNEL); 344 + entry = kmalloc(sizeof(*entry), GFP_ATOMIC); 358 345 if (!entry) 359 346 return -ENOMEM; 360 347
+5 -2
net/mptcp/subflow.c
··· 1063 1063 struct mptcp_sock *msk = mptcp_sk(sk); 1064 1064 struct mptcp_subflow_context *subflow; 1065 1065 struct sockaddr_storage addr; 1066 + int remote_id = remote->id; 1066 1067 int local_id = loc->id; 1067 1068 struct socket *sf; 1068 1069 struct sock *ssk; ··· 1108 1107 goto failed; 1109 1108 1110 1109 mptcp_crypto_key_sha(subflow->remote_key, &remote_token, NULL); 1111 - pr_debug("msk=%p remote_token=%u local_id=%d", msk, remote_token, 1112 - local_id); 1110 + pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d", msk, 1111 + remote_token, local_id, remote_id); 1113 1112 subflow->remote_token = remote_token; 1114 1113 subflow->local_id = local_id; 1114 + subflow->remote_id = remote_id; 1115 1115 subflow->request_join = 1; 1116 1116 subflow->request_bkup = 1; 1117 1117 mptcp_info2sockaddr(remote, &addr); ··· 1349 1347 new_ctx->fully_established = 1; 1350 1348 new_ctx->backup = subflow_req->backup; 1351 1349 new_ctx->local_id = subflow_req->local_id; 1350 + new_ctx->remote_id = subflow_req->remote_id; 1352 1351 new_ctx->token = subflow_req->token; 1353 1352 new_ctx->thmac = subflow_req->thmac; 1354 1353 }
+5 -17
net/netfilter/nf_conntrack_netlink.c
··· 851 851 } 852 852 853 853 struct ctnetlink_filter { 854 - u_int32_t cta_flags; 855 854 u8 family; 856 855 857 856 u_int32_t orig_flags; ··· 905 906 struct nf_conntrack_zone *zone, 906 907 u_int32_t flags); 907 908 908 - /* applied on filters */ 909 - #define CTA_FILTER_F_CTA_MARK (1 << 0) 910 - #define CTA_FILTER_F_CTA_MARK_MASK (1 << 1) 911 - 912 909 static struct ctnetlink_filter * 913 910 ctnetlink_alloc_filter(const struct nlattr * const cda[], u8 family) 914 911 { ··· 925 930 #ifdef CONFIG_NF_CONNTRACK_MARK 926 931 if (cda[CTA_MARK]) { 927 932 filter->mark.val = ntohl(nla_get_be32(cda[CTA_MARK])); 928 - filter->cta_flags |= CTA_FILTER_FLAG(CTA_MARK); 929 - 930 - if (cda[CTA_MARK_MASK]) { 933 + if (cda[CTA_MARK_MASK]) 931 934 filter->mark.mask = ntohl(nla_get_be32(cda[CTA_MARK_MASK])); 932 - filter->cta_flags |= CTA_FILTER_FLAG(CTA_MARK_MASK); 933 - } else { 935 + else 934 936 filter->mark.mask = 0xffffffff; 935 - } 936 937 } else if (cda[CTA_MARK_MASK]) { 937 938 err = -EINVAL; 938 939 goto err_filter; ··· 1108 1117 } 1109 1118 1110 1119 #ifdef CONFIG_NF_CONNTRACK_MARK 1111 - if ((filter->cta_flags & CTA_FILTER_FLAG(CTA_MARK_MASK)) && 1112 - (ct->mark & filter->mark.mask) != filter->mark.val) 1113 - goto ignore_entry; 1114 - else if ((filter->cta_flags & CTA_FILTER_FLAG(CTA_MARK)) && 1115 - ct->mark != filter->mark.val) 1120 + if ((ct->mark & filter->mark.mask) != filter->mark.val) 1116 1121 goto ignore_entry; 1117 1122 #endif 1118 1123 ··· 1391 1404 if (err < 0) 1392 1405 return err; 1393 1406 1394 - 1407 + if (l3num != NFPROTO_IPV4 && l3num != NFPROTO_IPV6) 1408 + return -EOPNOTSUPP; 1395 1409 tuple->src.l3num = l3num; 1396 1410 1397 1411 if (flags & CTA_FILTER_FLAG(CTA_IP_DST) ||
+2
net/netfilter/nf_conntrack_proto.c
··· 565 565 int err; 566 566 567 567 err = nf_ct_netns_do_get(net, NFPROTO_IPV4); 568 + #if IS_ENABLED(CONFIG_IPV6) 568 569 if (err < 0) 569 570 goto err1; 570 571 err = nf_ct_netns_do_get(net, NFPROTO_IPV6); ··· 576 575 err2: 577 576 nf_ct_netns_put(net, NFPROTO_IPV4); 578 577 err1: 578 + #endif 579 579 return err; 580 580 } 581 581
+57 -13
net/netfilter/nf_tables_api.c
··· 684 684 return -1; 685 685 } 686 686 687 + struct nftnl_skb_parms { 688 + bool report; 689 + }; 690 + #define NFT_CB(skb) (*(struct nftnl_skb_parms*)&((skb)->cb)) 691 + 692 + static void nft_notify_enqueue(struct sk_buff *skb, bool report, 693 + struct list_head *notify_list) 694 + { 695 + NFT_CB(skb).report = report; 696 + list_add_tail(&skb->list, notify_list); 697 + } 698 + 687 699 static void nf_tables_table_notify(const struct nft_ctx *ctx, int event) 688 700 { 689 701 struct sk_buff *skb; ··· 727 715 goto err; 728 716 } 729 717 730 - nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES, 731 - ctx->report, GFP_KERNEL); 718 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 732 719 return; 733 720 err: 734 721 nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 1479 1468 goto err; 1480 1469 } 1481 1470 1482 - nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES, 1483 - ctx->report, GFP_KERNEL); 1471 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 1484 1472 return; 1485 1473 err: 1486 1474 nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 2817 2807 goto err; 2818 2808 } 2819 2809 2820 - nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES, 2821 - ctx->report, GFP_KERNEL); 2810 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 2822 2811 return; 2823 2812 err: 2824 2813 nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 3846 3837 goto err; 3847 3838 } 3848 3839 3849 - nfnetlink_send(skb, ctx->net, portid, NFNLGRP_NFTABLES, ctx->report, 3850 - gfp_flags); 3840 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 3851 3841 return; 3852 3842 err: 3853 3843 nfnetlink_set_err(ctx->net, portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 4967 4959 goto err; 4968 4960 } 4969 4961 4970 - nfnetlink_send(skb, net, portid, NFNLGRP_NFTABLES, ctx->report, 4971 - GFP_KERNEL); 4962 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 4972 4963 return; 4973 4964 err: 4974 4965 nfnetlink_set_err(net, portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 6282 6275 goto err; 6283 6276 } 6284 6277 6285 - nfnetlink_send(skb, net, portid, NFNLGRP_NFTABLES, report, gfp); 6278 + nft_notify_enqueue(skb, report, &net->nft.notify_list); 6286 6279 return; 6287 6280 err: 6288 6281 nfnetlink_set_err(net, portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 7092 7085 goto err; 7093 7086 } 7094 7087 7095 - nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES, 7096 - ctx->report, GFP_KERNEL); 7088 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 7097 7089 return; 7098 7090 err: 7099 7091 nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 7701 7695 mutex_unlock(&net->nft.commit_mutex); 7702 7696 } 7703 7697 7698 + static void nft_commit_notify(struct net *net, u32 portid) 7699 + { 7700 + struct sk_buff *batch_skb = NULL, *nskb, *skb; 7701 + unsigned char *data; 7702 + int len; 7703 + 7704 + list_for_each_entry_safe(skb, nskb, &net->nft.notify_list, list) { 7705 + if (!batch_skb) { 7706 + new_batch: 7707 + batch_skb = skb; 7708 + len = NLMSG_GOODSIZE - skb->len; 7709 + list_del(&skb->list); 7710 + continue; 7711 + } 7712 + len -= skb->len; 7713 + if (len > 0 && NFT_CB(skb).report == NFT_CB(batch_skb).report) { 7714 + data = skb_put(batch_skb, skb->len); 7715 + memcpy(data, skb->data, skb->len); 7716 + list_del(&skb->list); 7717 + kfree_skb(skb); 7718 + continue; 7719 + } 7720 + nfnetlink_send(batch_skb, net, portid, NFNLGRP_NFTABLES, 7721 + NFT_CB(batch_skb).report, GFP_KERNEL); 7722 + goto new_batch; 7723 + } 7724 + 7725 + if (batch_skb) { 7726 + nfnetlink_send(batch_skb, net, portid, NFNLGRP_NFTABLES, 7727 + NFT_CB(batch_skb).report, GFP_KERNEL); 7728 + } 7729 + 7730 + WARN_ON_ONCE(!list_empty(&net->nft.notify_list)); 7731 + } 7732 + 7704 7733 static int nf_tables_commit(struct net *net, struct sk_buff *skb) 7705 7734 { 7706 7735 struct nft_trans *trans, *next; ··· 7938 7897 } 7939 7898 } 7940 7899 7900 + nft_commit_notify(net, NETLINK_CB(skb).portid); 7941 7901 nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN); 7942 7902 nf_tables_commit_release(net); 7943 7903 ··· 8763 8721 INIT_LIST_HEAD(&net->nft.tables); 8764 8722 INIT_LIST_HEAD(&net->nft.commit_list); 8765 8723 INIT_LIST_HEAD(&net->nft.module_list); 8724 + INIT_LIST_HEAD(&net->nft.notify_list); 8766 8725 mutex_init(&net->nft.commit_mutex); 8767 8726 net->nft.base_seq = 1; 8768 8727 net->nft.validate_state = NFT_VALIDATE_SKIP; ··· 8780 8737 mutex_unlock(&net->nft.commit_mutex); 8781 8738 WARN_ON_ONCE(!list_empty(&net->nft.tables)); 8782 8739 WARN_ON_ONCE(!list_empty(&net->nft.module_list)); 8740 + WARN_ON_ONCE(!list_empty(&net->nft.notify_list)); 8783 8741 } 8784 8742 8785 8743 static struct pernet_operations nf_tables_net_ops = {
+2 -2
net/netfilter/nft_meta.c
··· 147 147 148 148 switch (key) { 149 149 case NFT_META_SKUID: 150 - *dest = from_kuid_munged(&init_user_ns, 150 + *dest = from_kuid_munged(sock_net(sk)->user_ns, 151 151 sock->file->f_cred->fsuid); 152 152 break; 153 153 case NFT_META_SKGID: 154 - *dest = from_kgid_munged(&init_user_ns, 154 + *dest = from_kgid_munged(sock_net(sk)->user_ns, 155 155 sock->file->f_cred->fsgid); 156 156 break; 157 157 default:
+11 -10
net/qrtr/qrtr.c
··· 332 332 { 333 333 struct qrtr_hdr_v1 *hdr; 334 334 size_t len = skb->len; 335 - int rc = -ENODEV; 336 - int confirm_rx; 335 + int rc, confirm_rx; 337 336 338 337 confirm_rx = qrtr_tx_wait(node, to->sq_node, to->sq_port, type); 339 338 if (confirm_rx < 0) { ··· 356 357 hdr->size = cpu_to_le32(len); 357 358 hdr->confirm_rx = !!confirm_rx; 358 359 359 - skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr)); 360 + rc = skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr)); 360 361 361 - mutex_lock(&node->ep_lock); 362 - if (node->ep) 363 - rc = node->ep->xmit(node->ep, skb); 364 - else 365 - kfree_skb(skb); 366 - mutex_unlock(&node->ep_lock); 367 - 362 + if (!rc) { 363 + mutex_lock(&node->ep_lock); 364 + rc = -ENODEV; 365 + if (node->ep) 366 + rc = node->ep->xmit(node->ep, skb); 367 + else 368 + kfree_skb(skb); 369 + mutex_unlock(&node->ep_lock); 370 + } 368 371 /* Need to ensure that a subsequent message carries the otherwise lost 369 372 * confirm_rx flag if we dropped this one */ 370 373 if (rc && confirm_rx)
+34 -10
net/sched/act_ife.c
··· 436 436 kfree_rcu(p, rcu); 437 437 } 438 438 439 + static int load_metalist(struct nlattr **tb, bool rtnl_held) 440 + { 441 + int i; 442 + 443 + for (i = 1; i < max_metacnt; i++) { 444 + if (tb[i]) { 445 + void *val = nla_data(tb[i]); 446 + int len = nla_len(tb[i]); 447 + int rc; 448 + 449 + rc = load_metaops_and_vet(i, val, len, rtnl_held); 450 + if (rc != 0) 451 + return rc; 452 + } 453 + } 454 + 455 + return 0; 456 + } 457 + 439 458 static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb, 440 459 bool exists, bool rtnl_held) 441 460 { ··· 467 448 if (tb[i]) { 468 449 val = nla_data(tb[i]); 469 450 len = nla_len(tb[i]); 470 - 471 - rc = load_metaops_and_vet(i, val, len, rtnl_held); 472 - if (rc != 0) 473 - return rc; 474 451 475 452 rc = add_metainfo(ife, i, val, len, exists); 476 453 if (rc) ··· 523 508 p = kzalloc(sizeof(*p), GFP_KERNEL); 524 509 if (!p) 525 510 return -ENOMEM; 511 + 512 + if (tb[TCA_IFE_METALST]) { 513 + err = nla_parse_nested_deprecated(tb2, IFE_META_MAX, 514 + tb[TCA_IFE_METALST], NULL, 515 + NULL); 516 + if (err) { 517 + kfree(p); 518 + return err; 519 + } 520 + err = load_metalist(tb2, rtnl_held); 521 + if (err) { 522 + kfree(p); 523 + return err; 524 + } 525 + } 526 526 527 527 index = parm->index; 528 528 err = tcf_idr_check_alloc(tn, &index, a, bind); ··· 600 570 } 601 571 602 572 if (tb[TCA_IFE_METALST]) { 603 - err = nla_parse_nested_deprecated(tb2, IFE_META_MAX, 604 - tb[TCA_IFE_METALST], NULL, 605 - NULL); 606 - if (err) 607 - goto metadata_parse_err; 608 573 err = populate_metalist(ife, tb2, exists, rtnl_held); 609 574 if (err) 610 575 goto metadata_parse_err; 611 - 612 576 } else { 613 577 /* if no passed metadata allow list or passed allow-all 614 578 * then here we process by adding as many supported metadatum
+1
net/sched/act_tunnel_key.c
··· 156 156 struct vxlan_metadata *md = dst; 157 157 158 158 md->gbp = nla_get_u32(tb[TCA_TUNNEL_KEY_ENC_OPT_VXLAN_GBP]); 159 + md->gbp &= VXLAN_GBP_MASK; 159 160 } 160 161 161 162 return sizeof(struct vxlan_metadata);
+4 -1
net/sched/cls_flower.c
··· 1175 1175 return -EINVAL; 1176 1176 } 1177 1177 1178 - if (tb[TCA_FLOWER_KEY_ENC_OPT_VXLAN_GBP]) 1178 + if (tb[TCA_FLOWER_KEY_ENC_OPT_VXLAN_GBP]) { 1179 1179 md->gbp = nla_get_u32(tb[TCA_FLOWER_KEY_ENC_OPT_VXLAN_GBP]); 1180 + md->gbp &= VXLAN_GBP_MASK; 1181 + } 1180 1182 1181 1183 return sizeof(*md); 1182 1184 } ··· 1223 1221 } 1224 1222 if (tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX]) { 1225 1223 nla = tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX]; 1224 + memset(&md->u, 0x00, sizeof(md->u)); 1226 1225 md->u.index = nla_get_be32(nla); 1227 1226 } 1228 1227 } else if (md->version == 2) {
+33 -15
net/sched/sch_generic.c
··· 1131 1131 1132 1132 static void qdisc_deactivate(struct Qdisc *qdisc) 1133 1133 { 1134 - bool nolock = qdisc->flags & TCQ_F_NOLOCK; 1135 - 1136 1134 if (qdisc->flags & TCQ_F_BUILTIN) 1137 1135 return; 1138 - if (test_bit(__QDISC_STATE_DEACTIVATED, &qdisc->state)) 1139 - return; 1140 - 1141 - if (nolock) 1142 - spin_lock_bh(&qdisc->seqlock); 1143 - spin_lock_bh(qdisc_lock(qdisc)); 1144 1136 1145 1137 set_bit(__QDISC_STATE_DEACTIVATED, &qdisc->state); 1146 - 1147 - qdisc_reset(qdisc); 1148 - 1149 - spin_unlock_bh(qdisc_lock(qdisc)); 1150 - if (nolock) 1151 - spin_unlock_bh(&qdisc->seqlock); 1152 1138 } 1153 1139 1154 1140 static void dev_deactivate_queue(struct net_device *dev, ··· 1149 1163 qdisc_deactivate(qdisc); 1150 1164 rcu_assign_pointer(dev_queue->qdisc, qdisc_default); 1151 1165 } 1166 + } 1167 + 1168 + static void dev_reset_queue(struct net_device *dev, 1169 + struct netdev_queue *dev_queue, 1170 + void *_unused) 1171 + { 1172 + struct Qdisc *qdisc; 1173 + bool nolock; 1174 + 1175 + qdisc = dev_queue->qdisc_sleeping; 1176 + if (!qdisc) 1177 + return; 1178 + 1179 + nolock = qdisc->flags & TCQ_F_NOLOCK; 1180 + 1181 + if (nolock) 1182 + spin_lock_bh(&qdisc->seqlock); 1183 + spin_lock_bh(qdisc_lock(qdisc)); 1184 + 1185 + qdisc_reset(qdisc); 1186 + 1187 + spin_unlock_bh(qdisc_lock(qdisc)); 1188 + if (nolock) 1189 + spin_unlock_bh(&qdisc->seqlock); 1152 1190 } 1153 1191 1154 1192 static bool some_qdisc_is_busy(struct net_device *dev) ··· 1223 1213 dev_watchdog_down(dev); 1224 1214 } 1225 1215 1226 - /* Wait for outstanding qdisc-less dev_queue_xmit calls. 1216 + /* Wait for outstanding qdisc-less dev_queue_xmit calls or 1217 + * outstanding qdisc enqueuing calls. 1227 1218 * This is avoided if all devices are in dismantle phase : 1228 1219 * Caller will call synchronize_net() for us 1229 1220 */ 1230 1221 synchronize_net(); 1222 + 1223 + list_for_each_entry(dev, head, close_list) { 1224 + netdev_for_each_tx_queue(dev, dev_reset_queue, NULL); 1225 + 1226 + if (dev_ingress_queue(dev)) 1227 + dev_reset_queue(dev, dev_ingress_queue(dev), NULL); 1228 + } 1231 1229 1232 1230 /* Wait for outstanding qdisc_run calls. */ 1233 1231 list_for_each_entry(dev, head, close_list) {
+17 -11
net/sched/sch_taprio.c
··· 777 777 [TCA_TAPRIO_ATTR_TXTIME_DELAY] = { .type = NLA_U32 }, 778 778 }; 779 779 780 - static int fill_sched_entry(struct nlattr **tb, struct sched_entry *entry, 780 + static int fill_sched_entry(struct taprio_sched *q, struct nlattr **tb, 781 + struct sched_entry *entry, 781 782 struct netlink_ext_ack *extack) 782 783 { 784 + int min_duration = length_to_duration(q, ETH_ZLEN); 783 785 u32 interval = 0; 784 786 785 787 if (tb[TCA_TAPRIO_SCHED_ENTRY_CMD]) ··· 796 794 interval = nla_get_u32( 797 795 tb[TCA_TAPRIO_SCHED_ENTRY_INTERVAL]); 798 796 799 - if (interval == 0) { 797 + /* The interval should allow at least the minimum ethernet 798 + * frame to go out. 799 + */ 800 + if (interval < min_duration) { 800 801 NL_SET_ERR_MSG(extack, "Invalid interval for schedule entry"); 801 802 return -EINVAL; 802 803 } ··· 809 804 return 0; 810 805 } 811 806 812 - static int parse_sched_entry(struct nlattr *n, struct sched_entry *entry, 813 - int index, struct netlink_ext_ack *extack) 807 + static int parse_sched_entry(struct taprio_sched *q, struct nlattr *n, 808 + struct sched_entry *entry, int index, 809 + struct netlink_ext_ack *extack) 814 810 { 815 811 struct nlattr *tb[TCA_TAPRIO_SCHED_ENTRY_MAX + 1] = { }; 816 812 int err; ··· 825 819 826 820 entry->index = index; 827 821 828 - return fill_sched_entry(tb, entry, extack); 822 + return fill_sched_entry(q, tb, entry, extack); 829 823 } 830 824 831 - static int parse_sched_list(struct nlattr *list, 825 + static int parse_sched_list(struct taprio_sched *q, struct nlattr *list, 832 826 struct sched_gate_list *sched, 833 827 struct netlink_ext_ack *extack) 834 828 { ··· 853 847 return -ENOMEM; 854 848 } 855 849 856 - err = parse_sched_entry(n, entry, i, extack); 850 + err = parse_sched_entry(q, n, entry, i, extack); 857 851 if (err < 0) { 858 852 kfree(entry); 859 853 return err; ··· 868 862 return i; 869 863 } 870 864 871 - static int parse_taprio_schedule(struct nlattr **tb, 865 + static int parse_taprio_schedule(struct taprio_sched *q, struct nlattr **tb, 872 866 struct sched_gate_list *new, 873 867 struct netlink_ext_ack *extack) 874 868 { ··· 889 883 new->cycle_time = nla_get_s64(tb[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME]); 890 884 891 885 if (tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST]) 892 - err = parse_sched_list( 893 - tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST], new, extack); 886 + err = parse_sched_list(q, tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST], 887 + new, extack); 894 888 if (err < 0) 895 889 return err; 896 890 ··· 1479 1473 goto free_sched; 1480 1474 } 1481 1475 1482 - err = parse_taprio_schedule(tb, new_admin, extack); 1476 + err = parse_taprio_schedule(q, tb, new_admin, extack); 1483 1477 if (err < 0) 1484 1478 goto free_sched; 1485 1479
+3 -6
net/sctp/socket.c
··· 9220 9220 static inline void sctp_copy_descendant(struct sock *sk_to, 9221 9221 const struct sock *sk_from) 9222 9222 { 9223 - int ancestor_size = sizeof(struct inet_sock) + 9224 - sizeof(struct sctp_sock) - 9225 - offsetof(struct sctp_sock, pd_lobby); 9223 + size_t ancestor_size = sizeof(struct inet_sock); 9226 9224 9227 - if (sk_from->sk_family == PF_INET6) 9228 - ancestor_size += sizeof(struct ipv6_pinfo); 9229 - 9225 + ancestor_size += sk_from->sk_prot->obj_size; 9226 + ancestor_size -= offsetof(struct sctp_sock, pd_lobby); 9230 9227 __inet_sk_copy_descendant(sk_to, sk_from, ancestor_size); 9231 9228 } 9232 9229
+1 -1
net/sunrpc/svcsock.c
··· 228 228 static void svc_flush_bvec(const struct bio_vec *bvec, size_t size, size_t seek) 229 229 { 230 230 struct bvec_iter bi = { 231 - .bi_size = size, 231 + .bi_size = size + seek, 232 232 }; 233 233 struct bio_vec bv; 234 234
+10 -4
net/tipc/group.c
··· 273 273 return NULL; 274 274 } 275 275 276 - static void tipc_group_add_to_tree(struct tipc_group *grp, 277 - struct tipc_member *m) 276 + static int tipc_group_add_to_tree(struct tipc_group *grp, 277 + struct tipc_member *m) 278 278 { 279 279 u64 nkey, key = (u64)m->node << 32 | m->port; 280 280 struct rb_node **n, *parent = NULL; ··· 291 291 else if (key > nkey) 292 292 n = &(*n)->rb_right; 293 293 else 294 - return; 294 + return -EEXIST; 295 295 } 296 296 rb_link_node(&m->tree_node, parent, n); 297 297 rb_insert_color(&m->tree_node, &grp->members); 298 + return 0; 298 299 } 299 300 300 301 static struct tipc_member *tipc_group_create_member(struct tipc_group *grp, ··· 303 302 u32 instance, int state) 304 303 { 305 304 struct tipc_member *m; 305 + int ret; 306 306 307 307 m = kzalloc(sizeof(*m), GFP_ATOMIC); 308 308 if (!m) ··· 316 314 m->port = port; 317 315 m->instance = instance; 318 316 m->bc_acked = grp->bc_snd_nxt - 1; 317 + ret = tipc_group_add_to_tree(grp, m); 318 + if (ret < 0) { 319 + kfree(m); 320 + return NULL; 321 + } 319 322 grp->member_cnt++; 320 - tipc_group_add_to_tree(grp, m); 321 323 tipc_nlist_add(&grp->dests, m->node); 322 324 m->state = state; 323 325 return m;
+2 -1
net/tipc/link.c
··· 532 532 * tipc_link_bc_create - create new link to be used for broadcast 533 533 * @net: pointer to associated network namespace 534 534 * @mtu: mtu to be used initially if no peers 535 - * @window: send window to be used 535 + * @min_win: minimal send window to be used by link 536 + * @max_win: maximal send window to be used by link 536 537 * @inputq: queue to put messages ready for delivery 537 538 * @namedq: queue to put binding table update messages ready for delivery 538 539 * @link: return value, pointer to put the created link
+2 -1
net/tipc/msg.c
··· 150 150 if (fragid == FIRST_FRAGMENT) { 151 151 if (unlikely(head)) 152 152 goto err; 153 - if (unlikely(skb_unclone(frag, GFP_ATOMIC))) 153 + frag = skb_unshare(frag, GFP_ATOMIC); 154 + if (unlikely(!frag)) 154 155 goto err; 155 156 head = *headbuf = frag; 156 157 *buf = NULL;
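The msg.c hunk swaps `skb_unclone()` for `skb_unshare()`: a first fragment that is still shared must be replaced by a fully private copy before reassembly links into it, and the (possibly new) pointer must be NULL-checked. A toy refcounted-buffer model of that semantic (hypothetical types, not the skb API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy shared buffer: refs > 1 means other holders see the data. */
struct buf {
	int refs;
	char data[32];
};

/* Return a buffer we own exclusively: the original if it was already
 * private, otherwise a fresh copy (dropping our ref on the original).
 * Returns NULL on allocation failure, as skb_unshare() can. */
static struct buf *buf_unshare(struct buf *b)
{
	struct buf *copy;

	if (b->refs == 1)
		return b;		/* already private */
	copy = malloc(sizeof(*copy));
	if (!copy)
		return NULL;		/* caller must bail out */
	memcpy(copy->data, b->data, sizeof(copy->data));
	copy->refs = 1;
	b->refs--;
	return copy;
}
```

This mirrors why the hunk reassigns `frag` and checks it for NULL rather than testing the old pointer in place.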
+1 -4
net/tipc/socket.c
··· 2771 2771 2772 2772 trace_tipc_sk_shutdown(sk, NULL, TIPC_DUMP_ALL, " "); 2773 2773 __tipc_shutdown(sock, TIPC_CONN_SHUTDOWN); 2774 - if (tipc_sk_type_connectionless(sk)) 2775 - sk->sk_shutdown = SHUTDOWN_MASK; 2776 - else 2777 - sk->sk_shutdown = SEND_SHUTDOWN; 2774 + sk->sk_shutdown = SHUTDOWN_MASK; 2778 2775 2779 2776 if (sk->sk_state == TIPC_DISCONNECTING) { 2780 2777 /* Discard any unreceived messages */
+1
net/wireless/Kconfig
··· 217 217 218 218 config LIB80211_CRYPT_CCMP 219 219 tristate 220 + select CRYPTO 220 221 select CRYPTO_AES 221 222 select CRYPTO_CCM 222 223
+1 -1
net/wireless/util.c
··· 95 95 /* see 802.11ax D6.1 27.3.23.2 */ 96 96 if (chan == 2) 97 97 return MHZ_TO_KHZ(5935); 98 - if (chan <= 253) 98 + if (chan <= 233) 99 99 return MHZ_TO_KHZ(5950 + chan * 5); 100 100 break; 101 101 case NL80211_BAND_60GHZ:
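The util.c hunk corrects the upper bound of valid 6 GHz channels from 253 to 233 per 802.11ax. A standalone sketch of the channel-to-frequency mapping as patched (return value 0 marks an out-of-range channel; an illustrative convention, not the kernel's):

```c
#include <assert.h>

#define MHZ_TO_KHZ(f) ((f) * 1000)

/* 6 GHz band: channel 2 is special-cased (802.11ax D6.1 27.3.23.2),
 * and valid channels run up to 233, not 253. */
static int chan_6ghz_to_khz(int chan)
{
	if (chan == 2)
		return MHZ_TO_KHZ(5935);
	if (chan >= 1 && chan <= 233)
		return MHZ_TO_KHZ(5950 + chan * 5);
	return 0;			/* out of range */
}
```

With the old `<= 253` bound, channels 234..253 would have been mapped to frequencies above the band edge instead of rejected.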
+8 -9
net/xdp/xdp_umem.c
··· 303 303 304 304 static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr) 305 305 { 306 + u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom; 306 307 bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG; 307 - u32 chunk_size = mr->chunk_size, headroom = mr->headroom; 308 308 u64 npgs, addr = mr->addr, size = mr->len; 309 - unsigned int chunks, chunks_per_page; 309 + unsigned int chunks, chunks_rem; 310 310 int err; 311 311 312 312 if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) { ··· 336 336 if ((addr + size) < addr) 337 337 return -EINVAL; 338 338 339 - npgs = size >> PAGE_SHIFT; 339 + npgs = div_u64_rem(size, PAGE_SIZE, &npgs_rem); 340 + if (npgs_rem) 341 + npgs++; 340 342 if (npgs > U32_MAX) 341 343 return -EINVAL; 342 344 343 - chunks = (unsigned int)div_u64(size, chunk_size); 345 + chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem); 344 346 if (chunks == 0) 345 347 return -EINVAL; 346 348 347 - if (!unaligned_chunks) { 348 - chunks_per_page = PAGE_SIZE / chunk_size; 349 - if (chunks < chunks_per_page || chunks % chunks_per_page) 350 - return -EINVAL; 351 - } 349 + if (!unaligned_chunks && chunks_rem) 350 + return -EINVAL; 352 351 353 352 if (headroom >= chunk_size - XDP_PACKET_HEADROOM) 354 353 return -EINVAL;
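The xdp_umem hunk stops requiring page-multiple sizes: the page count is now rounded up from the division remainder, and only aligned-chunk mode still demands an exact chunk fit. A minimal userspace sketch of both checks (the `PAGE_SIZE` value and helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Round a partial last page up, as div_u64_rem + npgs++ does. */
static uint64_t umem_npgs(uint64_t size)
{
	uint64_t npgs = size / PAGE_SIZE;

	if (size % PAGE_SIZE)
		npgs++;
	return npgs;
}

/* Nonzero means the registration must be rejected: no chunks at all,
 * or a leftover remainder while in aligned-chunk mode. */
static int umem_chunks_invalid(uint64_t size, uint32_t chunk_size,
			       int unaligned)
{
	uint64_t chunks = size / chunk_size;
	uint64_t rem = size % chunk_size;

	if (chunks == 0)
		return 1;
	return !unaligned && rem != 0;
}
```

The removed `chunks_per_page` check was stricter than necessary; the rewritten test only rejects aligned registrations whose size is not a chunk multiple.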
+1 -1
scripts/dtc/Makefile
··· 9 9 dtc-objs += dtc-lexer.lex.o dtc-parser.tab.o 10 10 11 11 # Source files need to get at the userspace version of libfdt_env.h to compile 12 - HOST_EXTRACFLAGS := -I $(srctree)/$(src)/libfdt 12 + HOST_EXTRACFLAGS += -I $(srctree)/$(src)/libfdt 13 13 14 14 ifeq ($(shell pkg-config --exists yaml-0.1 2>/dev/null && echo yes),) 15 15 ifneq ($(CHECK_DT_BINDING)$(CHECK_DTBS),)
+15 -1
scripts/kallsyms.c
··· 82 82 83 83 static bool is_ignored_symbol(const char *name, char type) 84 84 { 85 + /* Symbol names that exactly match to the following are ignored.*/ 85 86 static const char * const ignored_symbols[] = { 86 87 /* 87 88 * Symbols which vary between passes. Passes 1 and 2 must have ··· 105 104 NULL 106 105 }; 107 106 107 + /* Symbol names that begin with the following are ignored.*/ 108 108 static const char * const ignored_prefixes[] = { 109 109 "$", /* local symbols for ARM, MIPS, etc. */ 110 110 ".LASANPC", /* s390 kasan local symbols */ ··· 115 113 NULL 116 114 }; 117 115 116 + /* Symbol names that end with the following are ignored.*/ 118 117 static const char * const ignored_suffixes[] = { 119 118 "_from_arm", /* arm */ 120 119 "_from_thumb", /* arm */ ··· 123 120 NULL 124 121 }; 125 122 123 + /* Symbol names that contain the following are ignored.*/ 124 + static const char * const ignored_matches[] = { 125 + ".long_branch.", /* ppc stub */ 126 + ".plt_branch.", /* ppc stub */ 127 + NULL 128 + }; 129 + 126 130 const char * const *p; 127 131 128 - /* Exclude symbols which vary between passes. */ 129 132 for (p = ignored_symbols; *p; p++) 130 133 if (!strcmp(name, *p)) 131 134 return true; ··· 144 135 int l = strlen(name) - strlen(*p); 145 136 146 137 if (l >= 0 && !strcmp(name + l, *p)) 138 + return true; 139 + } 140 + 141 + for (p = ignored_matches; *p; p++) { 142 + if (strstr(name, *p)) 147 143 return true; 148 144 } 149 145
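The kallsyms hunk adds a fourth matching mode, substring (`strstr`), alongside exact, prefix, and suffix, to catch ppc stub names like `.long_branch.`. A compact userspace sketch of the four-way filter (the tables here are abbreviated samples, not the full upstream lists):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

static const char * const exact_syms[] = { "kallsyms_addresses", NULL };
static const char * const prefixes[]   = { "$", ".LASANPC", NULL };
static const char * const suffixes[]   = { "_from_arm", "_from_thumb", NULL };
static const char * const matches[]    = { ".long_branch.", ".plt_branch.", NULL };

static bool is_ignored(const char *name)
{
	const char * const *p;

	for (p = exact_syms; *p; p++)		/* whole-name match */
		if (!strcmp(name, *p))
			return true;
	for (p = prefixes; *p; p++)		/* leading match */
		if (!strncmp(name, *p, strlen(*p)))
			return true;
	for (p = suffixes; *p; p++) {		/* trailing match */
		int l = (int)strlen(name) - (int)strlen(*p);

		if (l >= 0 && !strcmp(name + l, *p))
			return true;
	}
	for (p = matches; *p; p++)		/* anywhere in the name */
		if (strstr(name, *p))
			return true;
	return false;
}
```

The hunk also moves the old "Exclude symbols which vary between passes" comment up to per-table comments, which is why each array now carries its own description.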
+1 -1
scripts/spelling.txt
··· 589 589 expresion||expression 590 590 exprimental||experimental 591 591 extened||extended 592 - exteneded||extended||extended 592 + exteneded||extended 593 593 extensability||extensibility 594 594 extention||extension 595 595 extenstion||extension
+2 -2
sound/pci/asihpi/hpioctl.c
··· 343 343 struct hpi_message hm; 344 344 struct hpi_response hr; 345 345 struct hpi_adapter adapter; 346 - struct hpi_pci pci; 346 + struct hpi_pci pci = { 0 }; 347 347 348 348 memset(&adapter, 0, sizeof(adapter)); 349 349 ··· 499 499 return 0; 500 500 501 501 err: 502 - for (idx = 0; idx < HPI_MAX_ADAPTER_MEM_SPACES; idx++) { 502 + while (--idx >= 0) { 503 503 if (pci.ap_mem_base[idx]) { 504 504 iounmap(pci.ap_mem_base[idx]); 505 505 pci.ap_mem_base[idx] = NULL;
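The hpioctl hunk fixes the error path to unwind only the mappings actually made: `while (--idx >= 0)` walks back from the failure point, where the old `for (idx = 0; ...)` also touched slots that were never initialized. A small model of that partial-unwind pattern (resources modeled as flags; the count is illustrative):

```c
#include <assert.h>

#define MAX_SPACES 8

static int acquired[MAX_SPACES];

/* Acquire n resources in order; simulate a failure at fail_at
 * (pass -1 for full success). Returns 0 or -1. */
static int acquire_all(int n, int fail_at)
{
	int idx;

	for (idx = 0; idx < n; idx++) {
		if (idx == fail_at)
			goto err;
		acquired[idx] = 1;
	}
	return 0;
err:
	while (--idx >= 0)	/* release only what we took, in reverse */
		acquired[idx] = 0;
	return -1;
}
```

Combined with the zero-initialization of `pci` in the same hunk, this guarantees the cleanup loop never iounmaps an uninitialized pointer.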
+12 -2
sound/pci/hda/patch_realtek.c
··· 2475 2475 SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950), 2476 2476 SND_PCI_QUIRK(0x1462, 0x1293, "MSI-GP65", ALC1220_FIXUP_CLEVO_P950), 2477 2477 SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD), 2478 - SND_PCI_QUIRK(0x1462, 0x9c37, "MSI X570-A PRO", ALC1220_FIXUP_CLEVO_P950), 2479 2478 SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS), 2480 2479 SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3), 2481 2480 SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX), ··· 3427 3428 3428 3429 /* 3k pull low control for Headset jack. */ 3429 3430 /* NOTE: call this before clearing the pin, otherwise codec stalls */ 3430 - alc_update_coef_idx(codec, 0x46, 0, 3 << 12); 3431 + /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly 3432 + * when booting with headset plugged. So skip setting it for the codec alc257 3433 + */ 3434 + if (codec->core.vendor_id != 0x10ec0257) 3435 + alc_update_coef_idx(codec, 0x46, 0, 3 << 12); 3431 3436 3432 3437 if (!spec->no_shutup_pins) 3433 3438 snd_hda_codec_write(codec, hp_pin, 0, ··· 6054 6051 #include "hp_x360_helper.c" 6055 6052 6056 6053 enum { 6054 + ALC269_FIXUP_GPIO2, 6057 6055 ALC269_FIXUP_SONY_VAIO, 6058 6056 ALC275_FIXUP_SONY_VAIO_GPIO2, 6059 6057 ALC269_FIXUP_DELL_M101Z, ··· 6236 6232 }; 6237 6233 6238 6234 static const struct hda_fixup alc269_fixups[] = { 6235 + [ALC269_FIXUP_GPIO2] = { 6236 + .type = HDA_FIXUP_FUNC, 6237 + .v.func = alc_fixup_gpio2, 6238 + }, 6239 6239 [ALC269_FIXUP_SONY_VAIO] = { 6240 6240 .type = HDA_FIXUP_PINCTLS, 6241 6241 .v.pins = (const struct hda_pintbl[]) { ··· 7059 7051 [ALC233_FIXUP_LENOVO_MULTI_CODECS] = { 7060 7052 .type = HDA_FIXUP_FUNC, 7061 7053 .v.func = alc233_alc662_fixup_lenovo_dual_codecs, 7054 + .chained = true, 7055 + .chain_id = ALC269_FIXUP_GPIO2 7062 7056 }, 7063 7057 [ALC233_FIXUP_ACER_HEADSET_MIC] = { 7064 7058 .type = HDA_FIXUP_VERBS,
-1
sound/usb/mixer_maps.c
··· 371 371 }; 372 372 373 373 static const struct usbmix_name_map lenovo_p620_rear_map[] = { 374 - { 19, NULL, 2 }, /* FU, Volume */ 375 374 { 19, NULL, 12 }, /* FU, Input Gain Pad */ 376 375 {} 377 376 };
+4 -3
sound/usb/quirks.c
··· 1672 1672 && (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS) 1673 1673 msleep(20); 1674 1674 1675 - /* Zoom R16/24, Logitech H650e, Jabra 550a, Kingston HyperX needs a tiny 1676 - * delay here, otherwise requests like get/set frequency return as 1677 - * failed despite actually succeeding. 1675 + /* Zoom R16/24, Logitech H650e/H570e, Jabra 550a, Kingston HyperX 1676 + * needs a tiny delay here, otherwise requests like get/set 1677 + * frequency return as failed despite actually succeeding. 1678 1678 */ 1679 1679 if ((chip->usb_id == USB_ID(0x1686, 0x00dd) || 1680 1680 chip->usb_id == USB_ID(0x046d, 0x0a46) || 1681 + chip->usb_id == USB_ID(0x046d, 0x0a56) || 1681 1682 chip->usb_id == USB_ID(0x0b0e, 0x0349) || 1682 1683 chip->usb_id == USB_ID(0x0951, 0x16ad)) && 1683 1684 (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+25
tools/bootconfig/test-bootconfig.sh
··· 137 137 cat $TEMPCONF 138 138 xpass grep \'\"string\"\' $TEMPCONF 139 139 140 + echo "Repeat same-key tree" 141 + cat > $TEMPCONF << EOF 142 + foo 143 + bar 144 + foo { buz } 145 + EOF 146 + echo > $INITRD 147 + 148 + xpass $BOOTCONF -a $TEMPCONF $INITRD 149 + $BOOTCONF $INITRD > $OUTFILE 150 + xpass grep -q "bar" $OUTFILE 151 + 152 + 153 + echo "Remove/keep tailing spaces" 154 + cat > $TEMPCONF << EOF 155 + foo = val # comment 156 + bar = "val2 " # comment 157 + EOF 158 + echo > $INITRD 159 + 160 + xpass $BOOTCONF -a $TEMPCONF $INITRD 161 + $BOOTCONF $INITRD > $OUTFILE 162 + xfail grep -q val[[:space:]] $OUTFILE 163 + xpass grep -q val2[[:space:]] $OUTFILE 164 + 140 165 echo "=== expected failure cases ===" 141 166 for i in samples/bad-* ; do 142 167 xfail $BOOTCONF -a $i $INITRD
+2 -2
tools/bpf/Makefile
··· 38 38 FEATURE_DISPLAY = libbfd disassembler-four-args 39 39 40 40 check_feat := 1 41 - NON_CHECK_FEAT_TARGETS := clean bpftool_clean runqslower_clean 41 + NON_CHECK_FEAT_TARGETS := clean bpftool_clean runqslower_clean resolve_btfids_clean 42 42 ifdef MAKECMDGOALS 43 43 ifeq ($(filter-out $(NON_CHECK_FEAT_TARGETS),$(MAKECMDGOALS)),) 44 44 check_feat := 0 ··· 89 89 $(OUTPUT)bpf_exp.yacc.o: $(OUTPUT)bpf_exp.yacc.c 90 90 $(OUTPUT)bpf_exp.lex.o: $(OUTPUT)bpf_exp.lex.c 91 91 92 - clean: bpftool_clean runqslower_clean 92 + clean: bpftool_clean runqslower_clean resolve_btfids_clean 93 93 $(call QUIET_CLEAN, bpf-progs) 94 94 $(Q)$(RM) -r -- $(OUTPUT)*.o $(OUTPUT)bpf_jit_disasm $(OUTPUT)bpf_dbg \ 95 95 $(OUTPUT)bpf_asm $(OUTPUT)bpf_exp.yacc.* $(OUTPUT)bpf_exp.lex.*
+1
tools/bpf/resolve_btfids/Makefile
··· 80 80 clean: libsubcmd-clean libbpf-clean fixdep-clean 81 81 $(call msg,CLEAN,$(BINARY)) 82 82 $(Q)$(RM) -f $(BINARY); \ 83 + $(RM) -rf $(if $(OUTPUT),$(OUTPUT),.)/feature; \ 83 84 find $(if $(OUTPUT),$(OUTPUT),.) -name \*.o -or -name \*.o.cmd -or -name \*.o.d | xargs $(RM) 84 85 85 86 tags:
+2 -2
tools/io_uring/io_uring-bench.c
··· 130 130 s->nr_files); 131 131 } 132 132 133 - static int gettid(void) 133 + static int lk_gettid(void) 134 134 { 135 135 return syscall(__NR_gettid); 136 136 } ··· 281 281 struct io_sq_ring *ring = &s->sq_ring; 282 282 int ret, prepped; 283 283 284 - printf("submitter=%d\n", gettid()); 284 + printf("submitter=%d\n", lk_gettid()); 285 285 286 286 srand48_r(pthread_self(), &s->rand); 287 287
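The io_uring-bench hunk renames the tool's private `gettid()` wrapper to `lk_gettid()`: glibc began exporting its own `gettid()`, so the local definition collided with the libc declaration. A sketch of the renamed wrapper (Linux-only, via the raw syscall):

```c
#include <assert.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Private wrapper, renamed so it cannot clash with glibc's gettid(). */
static int lk_gettid(void)
{
	return (int)syscall(__NR_gettid);
}
```

Prefixing local helpers (`lk_` here) is the usual way to keep tool-internal names out of libc's namespace without changing behavior.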
+3 -1
tools/lib/bpf/Makefile
··· 59 59 FEATURE_TESTS = libelf libelf-mmap zlib bpf reallocarray 60 60 FEATURE_DISPLAY = libelf zlib bpf 61 61 62 - INCLUDES = -I. -I$(srctree)/tools/include -I$(srctree)/tools/arch/$(ARCH)/include/uapi -I$(srctree)/tools/include/uapi 62 + INCLUDES = -I. -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi 63 63 FEATURE_CHECK_CFLAGS-bpf = $(INCLUDES) 64 64 65 65 check_feat := 1 ··· 152 152 awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \ 153 153 sort -u | wc -l) 154 154 VERSIONED_SYM_COUNT = $(shell readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \ 155 + awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \ 155 156 grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | sort -u | wc -l) 156 157 157 158 CMD_TARGETS = $(LIB_TARGET) $(PC_FILE) ··· 220 219 awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \ 221 220 sort -u > $(OUTPUT)libbpf_global_syms.tmp; \ 222 221 readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \ 222 + awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \ 223 223 grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | \ 224 224 sort -u > $(OUTPUT)libbpf_versioned_syms.tmp; \ 225 225 diff -u $(OUTPUT)libbpf_global_syms.tmp \
+1 -1
tools/lib/bpf/libbpf.c
··· 5203 5203 int i, j, nrels, new_sz; 5204 5204 const struct btf_var_secinfo *vi = NULL; 5205 5205 const struct btf_type *sec, *var, *def; 5206 + struct bpf_map *map = NULL, *targ_map; 5206 5207 const struct btf_member *member; 5207 - struct bpf_map *map, *targ_map; 5208 5208 const char *name, *mname; 5209 5209 Elf_Data *symbols; 5210 5210 unsigned int moff;
+15
tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
··· 47 47 __u32 seq_num = ctx->meta->seq_num; 48 48 struct bpf_map *map = ctx->map; 49 49 struct key_t *key = ctx->key; 50 + struct key_t tmp_key; 50 51 __u64 *val = ctx->value; 52 + __u64 tmp_val = 0; 53 + int ret; 51 54 52 55 if (in_test_mode) { 53 56 /* test mode is used by selftests to ··· 62 59 * size. 63 60 */ 64 61 if (key == (void *)0 || val == (void *)0) 62 + return 0; 63 + 64 + /* update the value and then delete the <key, value> pair. 65 + * it should not impact the existing 'val' which is still 66 + * accessible under rcu. 67 + */ 68 + __builtin_memcpy(&tmp_key, key, sizeof(struct key_t)); 69 + ret = bpf_map_update_elem(&hashmap1, &tmp_key, &tmp_val, 0); 70 + if (ret) 71 + return 0; 72 + ret = bpf_map_delete_elem(&hashmap1, &tmp_key); 73 + if (ret) 65 74 return 0; 66 75 67 76 key_sum_a += key->a;
+1 -1
tools/testing/selftests/kvm/x86_64/debug_regs.c
··· 73 73 int i; 74 74 /* Instruction lengths starting at ss_start */ 75 75 int ss_size[4] = { 76 - 3, /* xor */ 76 + 2, /* xor */ 77 77 2, /* cpuid */ 78 78 5, /* mov */ 79 79 2, /* rdmsr */
+47
tools/testing/selftests/net/rtnetlink.sh
··· 1175 1175 echo "PASS: neigh get" 1176 1176 } 1177 1177 1178 + kci_test_bridge_parent_id() 1179 + { 1180 + local ret=0 1181 + sysfsnet=/sys/bus/netdevsim/devices/netdevsim 1182 + probed=false 1183 + 1184 + if [ ! -w /sys/bus/netdevsim/new_device ] ; then 1185 + modprobe -q netdevsim 1186 + check_err $? 1187 + if [ $ret -ne 0 ]; then 1188 + echo "SKIP: bridge_parent_id can't load netdevsim" 1189 + return $ksft_skip 1190 + fi 1191 + probed=true 1192 + fi 1193 + 1194 + echo "10 1" > /sys/bus/netdevsim/new_device 1195 + while [ ! -d ${sysfsnet}10 ] ; do :; done 1196 + echo "20 1" > /sys/bus/netdevsim/new_device 1197 + while [ ! -d ${sysfsnet}20 ] ; do :; done 1198 + udevadm settle 1199 + dev10=`ls ${sysfsnet}10/net/` 1200 + dev20=`ls ${sysfsnet}20/net/` 1201 + 1202 + ip link add name test-bond0 type bond mode 802.3ad 1203 + ip link set dev $dev10 master test-bond0 1204 + ip link set dev $dev20 master test-bond0 1205 + ip link add name test-br0 type bridge 1206 + ip link set dev test-bond0 master test-br0 1207 + check_err $? 1208 + 1209 + # clean up any leftovers 1210 + ip link del dev test-br0 1211 + ip link del dev test-bond0 1212 + echo 20 > /sys/bus/netdevsim/del_device 1213 + echo 10 > /sys/bus/netdevsim/del_device 1214 + $probed && rmmod netdevsim 1215 + 1216 + if [ $ret -ne 0 ]; then 1217 + echo "FAIL: bridge_parent_id" 1218 + return 1 1219 + fi 1220 + echo "PASS: bridge_parent_id" 1221 + } 1222 + 1178 1223 kci_test_rtnl() 1179 1224 { 1180 1225 local ret=0 ··· 1268 1223 kci_test_fdb_get 1269 1224 check_err $? 1270 1225 kci_test_neigh_get 1226 + check_err $? 1227 + kci_test_bridge_parent_id 1271 1228 check_err $? 1272 1229 1273 1230 kci_del_dummy