Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge v5.10-rc3 into drm-next

We need commit f8f6ae5d077a ("mm: always have io_remap_pfn_range() set
pgprot_decrypted()") to be able to merge Jason's cleanup patch.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

+3853 -2275
+5 -5
Documentation/ABI/stable/sysfs-driver-dma-ioatdma
···
- What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/cap
+ What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/cap
  Date: December 3, 2009
  KernelVersion: 2.6.32
  Contact: dmaengine@vger.kernel.org
  Description: Capabilities the DMA supports.Currently there are DMA_PQ, DMA_PQ_VAL,
  DMA_XOR,DMA_XOR_VAL,DMA_INTERRUPT.

- What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_active
+ What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_active
  Date: December 3, 2009
  KernelVersion: 2.6.32
  Contact: dmaengine@vger.kernel.org
  Description: The number of descriptors active in the ring.

- What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_size
+ What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_size
  Date: December 3, 2009
  KernelVersion: 2.6.32
  Contact: dmaengine@vger.kernel.org
  Description: Descriptor ring size, total number of descriptors available.

- What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/version
+ What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/version
  Date: December 3, 2009
  KernelVersion: 2.6.32
  Contact: dmaengine@vger.kernel.org
  Description: Version of ioatdma device.

- What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/intr_coalesce
+ What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/intr_coalesce
  Date: August 8, 2017
  KernelVersion: 4.14
  Contact: dmaengine@vger.kernel.org
+1 -1
Documentation/ABI/testing/sysfs-class-net
···
  When an interface is under test, it cannot be expected
  to pass packets as normal.

- What: /sys/clas/net/<iface>/duplex
+ What: /sys/class/net/<iface>/duplex
  Date: October 2009
  KernelVersion: 2.6.33
  Contact: netdev@vger.kernel.org
+4
Documentation/Makefile
···
  PDFLATEX = xelatex
  LATEXOPTS = -interaction=batchmode

+ ifeq ($(KBUILD_VERBOSE),0)
+ SPHINXOPTS += "-q"
+ endif
+
  # User-friendly check for sphinx-build
  HAVE_SPHINX := $(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi)
+1 -1
Documentation/admin-guide/LSM/SafeSetID.rst
···
  privileges, such as allowing a user to set up user namespace UID/GID mappings.

  Note on GID policies and setgroups()
- ==================
+ ====================================
  In v5.9 we are adding support for limiting CAP_SETGID privileges as was done
  previously for CAP_SETUID. However, for compatibility with common sandboxing
  related code conventions in userspace, we currently allow arbitrary
+2 -2
Documentation/admin-guide/pm/cpuidle.rst
···
  statistics of the given idle state. That information is exposed by the kernel
  via ``sysfs``.

- For each CPU in the system, there is a :file:`/sys/devices/system/cpu<N>/cpuidle/`
+ For each CPU in the system, there is a :file:`/sys/devices/system/cpu/cpu<N>/cpuidle/`
  directory in ``sysfs``, where the number ``<N>`` is assigned to the given
  CPU at the initialization time. That directory contains a set of subdirectories
  called :file:`state0`, :file:`state1` and so on, up to the number of idle state
···
  residency.

  ``below``
- Total number of times this idle state had been asked for, but cerainly
+ Total number of times this idle state had been asked for, but certainly
  a deeper idle state would have been a better match for the observed idle
  duration.
+1
Documentation/admin-guide/sysctl/net.rst
···
  0: 0 1 2 3 4 5 6 7
  RSS hash key:
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89
+
  netdev_tstamp_prequeue
  ----------------------
+10
Documentation/arm/sunxi.rst
···
  * User Manual

  http://dl.linux-sunxi.org/A64/Allwinner%20A64%20User%20Manual%20v1.0.pdf
+
+ - Allwinner H6
+
+   * Datasheet
+
+     https://linux-sunxi.org/images/5/5c/Allwinner_H6_V200_Datasheet_V1.1.pdf
+
+   * User Manual
+
+     https://linux-sunxi.org/images/4/46/Allwinner_H6_V200_User_Manual_V1.1.pdf
+1 -1
Documentation/conf.py
···
  support for Sphinx v3.0 and above is brand new. Be prepared for
  possible issues in the generated output.
  ''')
- if minor > 0 or patch >= 2:
+ if (major > 3) or (minor > 0 or patch >= 2):
  # Sphinx c function parser is more pedantic with regards to type
  # checking. Due to that, having macros at c:function cause problems.
  # Those needed to be scaped by using c_id_attributes[] array
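The conf.py hunk above fixes a version check that ignored the major version: with `minor > 0 or patch >= 2`, Sphinx 4.0.0 would not be treated as "3.0.2 or newer" even though it is. A standalone sketch of the corrected predicate (plain Python; in the real conf.py, `major`, `minor`, and `patch` come from the surrounding version-detection logic, which is not shown in the hunk):

```python
def needs_pedantic_c_parser(major, minor, patch):
    """True for Sphinx >= 3.0.2, where the C-domain parser is stricter.

    The buggy form, `minor > 0 or patch >= 2`, only behaves while
    major == 3: it returns False for 4.0.0 even though 4.0.0 is newer
    than 3.0.2. Adding the `major > 3` disjunct fixes that.
    """
    return (major > 3) or (minor > 0 or patch >= 2)

# The corrected predicate covers future major releases:
print(needs_pedantic_c_parser(3, 0, 2))  # True  (3.0.2 itself)
print(needs_pedantic_c_parser(3, 0, 1))  # False (older than 3.0.2)
print(needs_pedantic_c_parser(4, 0, 0))  # True  (buggy form said False)
```

Note the sketch still assumes the enclosing "Sphinx 3.0 and above" guard visible in the hunk's context lines; on Sphinx 2.x this branch is never reached.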
+2
Documentation/dev-tools/kasan.rst
···
  pass::

  ok 28 - kmalloc_double_kzfree
+
  or, if kmalloc failed::

  # kmalloc_large_oob_right: ASSERTION FAILED at lib/test_kasan.c:163
  Expected ptr is not null, but is
  not ok 4 - kmalloc_large_oob_right
+
  or, if a KASAN report was expected, but not found::

  # kmalloc_double_kzfree: EXPECTATION FAILED at lib/test_kasan.c:629
+1 -1
Documentation/dev-tools/kunit/start.rst
···
  config MISC_EXAMPLE_TEST
  bool "Test for my example"
- depends on MISC_EXAMPLE && KUNIT
+ depends on MISC_EXAMPLE && KUNIT=y

  and the following to ``drivers/misc/Makefile``:
+5
Documentation/dev-tools/kunit/usage.rst
···
  ...will run the tests.

+ .. note::
+    Note that you should make sure your test depends on ``KUNIT=y`` in Kconfig
+    if the test does not support module build. Otherwise, it will trigger
+    compile errors if ``CONFIG_KUNIT`` is ``m``.
+
  Writing new tests for other architectures
  -----------------------------------------
+1 -1
Documentation/devicetree/bindings/clock/hi6220-clock.txt
···
  please refer the following document to know more about the binding rules
  for these system controllers:

- Documentation/devicetree/bindings/arm/hisilicon/hisilicon.txt
+ Documentation/devicetree/bindings/arm/hisilicon/hisilicon.yaml

  Required Properties:
+10
Documentation/devicetree/bindings/interrupt-controller/ti,sci-inta.yaml
···
  | | vint | bit | | 0 |.....|63| vintx |
  | +--------------+ +------------+ |
  | |
+ | Unmap |
+ | +--------------+ |
+ Unmapped events ---->| | umapidx |-------------------------> Globalevents
+ | +--------------+ |
+ | |
  +-----------------------------------------+

  Configuration of these Intmap registers that maps global events to vint is
···
  "parent's input irq" specifies the base for parent irq
  - description: |
  "limit" specifies the limit for translation
+
+ ti,unmapped-event-sources:
+ $ref: /schemas/types.yaml#definitions/phandle-array
+ description:
+ Array of phandles to DMA controllers where the unmapped events originate.

  required:
  - compatible
+18
Documentation/devicetree/bindings/net/can/can-controller.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/net/can/can-controller.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: CAN Controller Generic Binding
+
+ maintainers:
+   - Marc Kleine-Budde <mkl@pengutronix.de>
+
+ properties:
+   $nodename:
+     pattern: "^can(@.*)?$"
+
+ additionalProperties: true
+
+ ...
+135
Documentation/devicetree/bindings/net/can/fsl,flexcan.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/net/can/fsl,flexcan.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title:
+   Flexcan CAN controller on Freescale's ARM and PowerPC system-on-a-chip (SOC).
+
+ maintainers:
+   - Marc Kleine-Budde <mkl@pengutronix.de>
+
+ allOf:
+   - $ref: can-controller.yaml#
+
+ properties:
+   compatible:
+     oneOf:
+       - enum:
+           - fsl,imx8qm-flexcan
+           - fsl,imx8mp-flexcan
+           - fsl,imx6q-flexcan
+           - fsl,imx53-flexcan
+           - fsl,imx35-flexcan
+           - fsl,imx28-flexcan
+           - fsl,imx25-flexcan
+           - fsl,p1010-flexcan
+           - fsl,vf610-flexcan
+           - fsl,ls1021ar2-flexcan
+           - fsl,lx2160ar1-flexcan
+       - items:
+           - enum:
+               - fsl,imx7d-flexcan
+               - fsl,imx6ul-flexcan
+               - fsl,imx6sx-flexcan
+           - const: fsl,imx6q-flexcan
+       - items:
+           - enum:
+               - fsl,ls1028ar1-flexcan
+           - const: fsl,lx2160ar1-flexcan
+
+   reg:
+     maxItems: 1
+
+   interrupts:
+     maxItems: 1
+
+   clocks:
+     maxItems: 2
+
+   clock-names:
+     items:
+       - const: ipg
+       - const: per
+
+   clock-frequency:
+     description: |
+       The oscillator frequency driving the flexcan device, filled in by the
+       boot loader. This property should only be used the used operating system
+       doesn't support the clocks and clock-names property.
+
+   xceiver-supply:
+     description: Regulator that powers the CAN transceiver.
+
+   big-endian:
+     $ref: /schemas/types.yaml#/definitions/flag
+     description: |
+       This means the registers of FlexCAN controller are big endian. This is
+       optional property.i.e. if this property is not present in device tree
+       node then controller is assumed to be little endian. If this property is
+       present then controller is assumed to be big endian.
+
+   fsl,stop-mode:
+     description: |
+       Register bits of stop mode control.
+
+       The format should be as follows:
+       <gpr req_gpr req_bit>
+       gpr is the phandle to general purpose register node.
+       req_gpr is the gpr register offset of CAN stop request.
+       req_bit is the bit offset of CAN stop request.
+     $ref: /schemas/types.yaml#/definitions/phandle-array
+     items:
+       - description: The 'gpr' is the phandle to general purpose register node.
+       - description: The 'req_gpr' is the gpr register offset of CAN stop request.
+         maximum: 0xff
+       - description: The 'req_bit' is the bit offset of CAN stop request.
+         maximum: 0x1f
+
+   fsl,clk-source:
+     description: |
+       Select the clock source to the CAN Protocol Engine (PE). It's SoC
+       implementation dependent. Refer to RM for detailed definition. If this
+       property is not set in device tree node then driver selects clock source 1
+       by default.
+       0: clock source 0 (oscillator clock)
+       1: clock source 1 (peripheral clock)
+     $ref: /schemas/types.yaml#/definitions/uint32
+     default: 1
+     minimum: 0
+     maximum: 1
+
+   wakeup-source:
+     $ref: /schemas/types.yaml#/definitions/flag
+     description:
+       Enable CAN remote wakeup.
+
+ required:
+   - compatible
+   - reg
+   - interrupts
+
+ additionalProperties: false
+
+ examples:
+   - |
+     can@1c000 {
+         compatible = "fsl,p1010-flexcan";
+         reg = <0x1c000 0x1000>;
+         interrupts = <48 0x2>;
+         interrupt-parent = <&mpic>;
+         clock-frequency = <200000000>;
+         fsl,clk-source = <0>;
+     };
+   - |
+     #include <dt-bindings/interrupt-controller/irq.h>
+
+     can@2090000 {
+         compatible = "fsl,imx6q-flexcan";
+         reg = <0x02090000 0x4000>;
+         interrupts = <0 110 IRQ_TYPE_LEVEL_HIGH>;
+         clocks = <&clks 1>, <&clks 2>;
+         clock-names = "ipg", "per";
+         fsl,stop-mode = <&gpr 0x34 28>;
+     };
-57
Documentation/devicetree/bindings/net/can/fsl-flexcan.txt
···
- Flexcan CAN controller on Freescale's ARM and PowerPC system-on-a-chip (SOC).
-
- Required properties:
-
- - compatible : Should be "fsl,<processor>-flexcan"
-
- where <processor> is imx8qm, imx6q, imx28, imx53, imx35, imx25, p1010,
- vf610, ls1021ar2, lx2160ar1, ls1028ar1.
-
- The ls1028ar1 must be followed by lx2160ar1, e.g.
- - "fsl,ls1028ar1-flexcan", "fsl,lx2160ar1-flexcan"
-
- An implementation should also claim any of the following compatibles
- that it is fully backwards compatible with:
-
- - fsl,p1010-flexcan
-
- - reg : Offset and length of the register set for this device
- - interrupts : Interrupt tuple for this device
-
- Optional properties:
-
- - clock-frequency : The oscillator frequency driving the flexcan device
-
- - xceiver-supply: Regulator that powers the CAN transceiver
-
- - big-endian: This means the registers of FlexCAN controller are big endian.
- This is optional property.i.e. if this property is not present in
- device tree node then controller is assumed to be little endian.
- if this property is present then controller is assumed to be big
- endian.
-
- - fsl,stop-mode: register bits of stop mode control, the format is
- <&gpr req_gpr req_bit>.
- gpr is the phandle to general purpose register node.
- req_gpr is the gpr register offset of CAN stop request.
- req_bit is the bit offset of CAN stop request.
-
- - fsl,clk-source: Select the clock source to the CAN Protocol Engine (PE).
- It's SoC Implementation dependent. Refer to RM for detailed
- definition. If this property is not set in device tree node
- then driver selects clock source 1 by default.
- 0: clock source 0 (oscillator clock)
- 1: clock source 1 (peripheral clock)
-
- - wakeup-source: enable CAN remote wakeup
-
- Example:
-
- can@1c000 {
- compatible = "fsl,p1010-flexcan";
- reg = <0x1c000 0x1000>;
- interrupts = <48 0x2>;
- interrupt-parent = <&mpic>;
- clock-frequency = <200000000>; // filled in by bootloader
- fsl,clk-source = <0>; // select clock source 0 for PE
- };
-3
Documentation/filesystems/api-summary.rst
···
  .. kernel-doc:: fs/dax.c
     :export:

- .. kernel-doc:: fs/direct-io.c
-    :export:
-
  .. kernel-doc:: fs/libfs.c
     :export:
-7
Documentation/gpu/amdgpu.rst
···
  ===================

  .. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
-    :doc: AMDGPU XGMI Support
-
- .. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
-    :internal:

  AMDGPU RAS Support
  ==================
···
  .. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
     :doc: AMDGPU RAS sysfs gpu_vram_bad_pages Interface
-
- .. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
-    :internal:

  Sample Code
  -----------
+1 -1
Documentation/hwmon/adm1266.rst
···
  integrated 12 bit SAR ADC, accessed using a PMBus interface.

  The driver is a client driver to the core PMBus driver. Please see
- Documentation/hwmon/pmbus for details on PMBus client drivers.
+ Documentation/hwmon/pmbus.rst for details on PMBus client drivers.


  Sysfs entries
+1
Documentation/hwmon/index.rst
···
  mcp3021
  menf21bmc
  mlxreg-fan
+ mp2975
  nct6683
  nct6775
  nct7802
+13 -1
Documentation/hwmon/mp2975.rst
···
  vendor dual-loop, digital, multi-phase controller MP2975.

  This device:
+
  - Supports up to two power rail.
  - Provides 8 pulse-width modulations (PWMs), and can be configured up
  to 8-phase operation for rail 1 and up to 4-phase operation for rail
···
  10-mV DAC, IMVP9 mode with 5-mV DAC.

  Device supports:
+
  - SVID interface.
  - AVSBus interface.

  Device complaint with:
+
  - PMBus rev 1.3 interface.

  Device supports direct format for reading output current, output voltage,
···
  The below VID modes are supported: VR12, VR13, IMVP9.

  The driver provides the next attributes for the current:
+
  - for current in: input, maximum alarm;
  - for current out input, maximum alarm and highest values;
  - for phase current: input and label.
- attributes.
+ attributes.
+
  The driver exports the following attributes via the 'sysfs' files, where
+
  - 'n' is number of telemetry pages (from 1 to 2);
  - 'k' is number of configured phases (from 1 to 8);
  - indexes 1, 1*n for "iin";
···
  **curr[1-{2n+k}]_label**

  The driver provides the next attributes for the voltage:
+
  - for voltage in: input, high critical threshold, high critical alarm, all only
  from page 0;
  - for voltage out: input, low and high critical thresholds, low and high
  critical alarms, from pages 0 and 1;
+
  The driver exports the following attributes via the 'sysfs' files, where
+
  - 'n' is number of telemetry pages (from 1 to 2);
  - indexes 1 for "iin";
  - indexes n+1, n+2 for "vout";
···
  **in[2-{n+1}1_lcrit_alarm**

  The driver provides the next attributes for the power:
+
  - for power in alarm and input.
  - for power out: highest and input.
+
  The driver exports the following attributes via the 'sysfs' files, where
+
  - 'n' is number of telemetry pages (from 1 to 2);
  - indexes 1 for "pin";
  - indexes n+1, n+2 for "pout";
+1
Documentation/leds/index.rst
···
  leds-lp5562
  leds-lp55xx
  leds-mlxcpld
+ leds-sc27xx
+31 -20
Documentation/locking/lockdep-design.rst
···
  (4 usages * n STATEs + 1) categories:

  where the 4 usages can be:
+
  - 'ever held in STATE context'
  - 'ever held as readlock in STATE context'
  - 'ever held with STATE enabled'

  where the n STATEs are coded in kernel/locking/lockdep_states.h and as of
  now they include:
+
  - hardirq
  - softirq

  where the last 1 category is:
+
  - 'ever used' [ == !unused ]

  When locking rules are violated, these usage bits are presented in the
···
  +--------------+-------------+--------------+
  |              | irq enabled | irq disabled |
  +--------------+-------------+--------------+
- | ever in irq  | ?           | -            |
+ | ever in irq  | '?'         | '-'          |
  +--------------+-------------+--------------+
- | never in irq | +           | .            |
+ | never in irq | '+'         | '.'          |
  +--------------+-------------+--------------+

  The character '-' suggests irq is disabled because if otherwise the
···
  BD_MUTEX_PARTITION
  };

- mutex_lock_nested(&bdev->bd_contains->bd_mutex, BD_MUTEX_PARTITION);
+ mutex_lock_nested(&bdev->bd_contains->bd_mutex, BD_MUTEX_PARTITION);

  In this case the locking is done on a bdev object that is known to be a
  partition.
···
  ----------------

  The validator tracks a maximum of MAX_LOCKDEP_KEYS number of lock classes.
- Exceeding this number will trigger the following lockdep warning:
+ Exceeding this number will trigger the following lockdep warning::

  (DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS))
···
  The difference between recursive readers and non-recursive readers is because:
  recursive readers get blocked only by a write lock *holder*, while non-recursive
- readers could get blocked by a write lock *waiter*. Considering the follow example:
+ readers could get blocked by a write lock *waiter*. Considering the follow
+ example::

  TASK A: TASK B:
···
  Block condition matrix, Y means the row blocks the column, and N means otherwise.

- | E | r | R |
  +---+---+---+---+
- E | Y | Y | Y |
+ |   | E | r | R |
  +---+---+---+---+
- r | Y | Y | N |
+ | E | Y | Y | Y |
  +---+---+---+---+
- R | Y | Y | N |
+ | r | Y | Y | N |
+ +---+---+---+---+
+ | R | Y | Y | N |
+ +---+---+---+---+

  (W: writers, r: non-recursive readers, R: recursive readers)


  acquired recursively. Unlike non-recursive read locks, recursive read locks
  only get blocked by current write lock *holders* other than write lock
- *waiters*, for example:
+ *waiters*, for example::

  TASK A: TASK B:
···
  even true for two non-recursive read locks). A non-recursive lock can block the
  corresponding recursive lock, and vice versa.

- A deadlock case with recursive locks involved is as follow:
+ A deadlock case with recursive locks involved is as follow::

  TASK A: TASK B:
···
  dependencies, but we can show that 4 types of lock dependencies are enough for
  deadlock detection.

- For each lock dependency:
+ For each lock dependency::

  L1 -> L2
···
  With the above combination for simplification, there are 4 types of dependency edges
  in the lockdep graph:

- 1) -(ER)->: exclusive writer to recursive reader dependency, "X -(ER)-> Y" means
+ 1) -(ER)->:
+ exclusive writer to recursive reader dependency, "X -(ER)-> Y" means
  X -> Y and X is a writer and Y is a recursive reader.

- 2) -(EN)->: exclusive writer to non-recursive locker dependency, "X -(EN)-> Y" means
+ 2) -(EN)->:
+ exclusive writer to non-recursive locker dependency, "X -(EN)-> Y" means
  X -> Y and X is a writer and Y is either a writer or non-recursive reader.

- 3) -(SR)->: shared reader to recursive reader dependency, "X -(SR)-> Y" means
+ 3) -(SR)->:
+ shared reader to recursive reader dependency, "X -(SR)-> Y" means
  X -> Y and X is a reader (recursive or not) and Y is a recursive reader.

- 4) -(SN)->: shared reader to non-recursive locker dependency, "X -(SN)-> Y" means
+ 4) -(SN)->:
+ shared reader to non-recursive locker dependency, "X -(SN)-> Y" means
  X -> Y and X is a reader (recursive or not) and Y is either a writer or
  non-recursive reader.

- Note that given two locks, they may have multiple dependencies between them, for example:
+ Note that given two locks, they may have multiple dependencies between them,
+ for example::

  TASK A:
···
  Proof for sufficiency (Lemma 1):

- Let's say we have a strong circle:
+ Let's say we have a strong circle::

  L1 -> L2 ... -> Ln -> L1

- , which means we have dependencies:
+ , which means we have dependencies::

  L1 -> L2
  L2 -> L3
···
  for a lock held by P1. Let's name the lock Px is waiting as Lx, so since P1 is waiting
  for L1 and holding Ln, so we will have Ln -> L1 in the dependency graph. Similarly,
  we have L1 -> L2, L2 -> L3, ..., Ln-1 -> Ln in the dependency graph, which means we
- have a circle:
+ have a circle::

  Ln -> L1 -> L2 -> ... -> Ln
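The block-condition table that the lockdep hunk above corrects reduces to one compact rule: a holder blocks an acquirer unless the holder is a reader (of either kind) and the acquirer is a recursive reader. A small Python sketch (an illustrative model, not lockdep code; `blocks` is a hypothetical helper name) that reproduces the corrected Y/N matrix:

```python
# Lock kinds from the lockdep-design text:
#   'E' = exclusive writer, 'r' = non-recursive reader, 'R' = recursive reader.
# blocks(holder, acquirer) encodes the corrected table: a row blocks a
# column unless the acquirer is a recursive reader and the holder is any
# kind of reader.
def blocks(holder, acquirer):
    return holder == 'E' or acquirer in ('E', 'r')

# Rebuild the full matrix; True stands for 'Y', False for 'N'.
table = {h: {a: blocks(h, a) for a in 'ErR'} for h in 'ErR'}
# Row E: Y Y Y;  row r: Y Y N;  row R: Y Y N  -- matching the fixed table.
```

This mirrors the text's observation that recursive readers are only ever blocked by write lock holders, never by other readers.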
-1
Documentation/misc-devices/index.rst
···
  isl29003
  lis3lv02d
  max6875
- mic/index
  pci-endpoint-test
  spear-pcie-gadget
  uacce
+1
Documentation/networking/devlink/ice.rst
···
  that both the name (as reported by ``fw.app.name``) and version are
  required to uniquely identify the package.
  * - ``fw.app.bundle_id``
+   - running
  - 0xc0000001
  - Unique identifier for the DDP package loaded in the device. Also
  referred to as the DDP Track ID. Can be used to uniquely identify
+60 -60
Documentation/networking/j1939.rst
··· 10 10 SAE J1939 defines a higher layer protocol on CAN. It implements a more 11 11 sophisticated addressing scheme and extends the maximum packet size above 8 12 12 bytes. Several derived specifications exist, which differ from the original 13 - J1939 on the application level, like MilCAN A, NMEA2000 and especially 13 + J1939 on the application level, like MilCAN A, NMEA2000, and especially 14 14 ISO-11783 (ISOBUS). This last one specifies the so-called ETP (Extended 15 - Transport Protocol) which is has been included in this implementation. This 15 + Transport Protocol), which has been included in this implementation. This 16 16 results in a maximum packet size of ((2 ^ 24) - 1) * 7 bytes == 111 MiB. 17 17 18 18 Specifications used ··· 32 32 addressing and transport methods used by J1939. 33 33 34 34 * **Addressing:** when a process on an ECU communicates via J1939, it should 35 - not necessarily know its source address. Although at least one process per 35 + not necessarily know its source address. Although, at least one process per 36 36 ECU should know the source address. Other processes should be able to reuse 37 37 that address. This way, address parameters for different processes 38 38 cooperating for the same ECU, are not duplicated. This way of working is 39 - closely related to the UNIX concept where programs do just one thing, and do 39 + closely related to the UNIX concept, where programs do just one thing and do 40 40 it well. 41 41 42 42 * **Dynamic addressing:** Address Claiming in J1939 is time critical. 43 - Furthermore data transport should be handled properly during the address 43 + Furthermore, data transport should be handled properly during the address 44 44 negotiation. Putting this functionality in the kernel eliminates it as a 45 45 requirement for _every_ user space process that communicates via J1939. This 46 46 results in a consistent J1939 bus with proper addressing. 
··· 58 58 59 59 The J1939 sockets operate on CAN network devices (see SocketCAN). Any J1939 60 60 user space library operating on CAN raw sockets will still operate properly. 61 - Since such library does not communicate with the in-kernel implementation, care 61 + Since such a library does not communicate with the in-kernel implementation, care 62 62 must be taken that these two do not interfere. In practice, this means they 63 63 cannot share ECU addresses. A single ECU (or virtual ECU) address is used by 64 64 the library exclusively, or by the in-kernel system exclusively. ··· 77 77 8 bits : PS (PDU Specific) 78 78 79 79 In J1939-21 distinction is made between PDU1 format (where PF < 240) and PDU2 80 - format (where PF >= 240). Furthermore, when using PDU2 format, the PS-field 80 + format (where PF >= 240). Furthermore, when using the PDU2 format, the PS-field 81 81 contains a so-called Group Extension, which is part of the PGN. When using PDU2 82 82 format, the Group Extension is set in the PS-field. 83 83 84 84 On the other hand, when using PDU1 format, the PS-field contains a so-called 85 85 Destination Address, which is _not_ part of the PGN. When communicating a PGN 86 - from user space to kernel (or visa versa) and PDU2 format is used, the PS-field 86 + from user space to kernel (or vice versa) and PDU2 format is used, the PS-field 87 87 of the PGN shall be set to zero. The Destination Address shall be set 88 88 elsewhere. 89 89 ··· 96 96 97 97 Both static and dynamic addressing methods can be used. 98 98 99 - For static addresses, no extra checks are made by the kernel, and provided 99 + For static addresses, no extra checks are made by the kernel and provided 100 100 addresses are considered right. This responsibility is for the OEM or system 101 101 integrator. 102 102 103 103 For dynamic addressing, so-called Address Claiming, extra support is foreseen 104 - in the kernel. In J1939 any ECU is known by it's 64-bit NAME. 
At the moment of 104 + in the kernel. In J1939 any ECU is known by its 64-bit NAME. At the moment of 105 105 a successful address claim, the kernel keeps track of both NAME and source 106 106 address being claimed. This serves as a base for filter schemes. By default, 107 - packets with a destination that is not locally, will be rejected. 107 + packets with a destination that is not locally will be rejected. 108 108 109 109 Mixed mode packets (from a static to a dynamic address or vice versa) are 110 110 allowed. The BSD sockets define separate API calls for getting/setting the ··· 131 131 --------- 132 132 133 133 On CAN, you first need to open a socket for communicating over a CAN network. 134 - To use J1939, #include <linux/can/j1939.h>. From there, <linux/can.h> will be 134 + To use J1939, ``#include <linux/can/j1939.h>``. From there, ``<linux/can.h>`` will be 135 135 included too. To open a socket, use: 136 136 137 137 .. code-block:: C 138 138 139 139 s = socket(PF_CAN, SOCK_DGRAM, CAN_J1939); 140 140 141 - J1939 does use SOCK_DGRAM sockets. In the J1939 specification, connections are 141 + J1939 does use ``SOCK_DGRAM`` sockets. In the J1939 specification, connections are 142 142 mentioned in the context of transport protocol sessions. These still deliver 143 - packets to the other end (using several CAN packets). SOCK_STREAM is not 143 + packets to the other end (using several CAN packets). ``SOCK_STREAM`` is not 144 144 supported. 145 145 146 - After the successful creation of the socket, you would normally use the bind(2) 147 - and/or connect(2) system call to bind the socket to a CAN interface. After 148 - binding and/or connecting the socket, you can read(2) and write(2) from/to the 149 - socket or use send(2), sendto(2), sendmsg(2) and the recv*() counterpart 146 + After the successful creation of the socket, you would normally use the ``bind(2)`` 147 + and/or ``connect(2)`` system call to bind the socket to a CAN interface. 
After 148 + binding and/or connecting the socket, you can ``read(2)`` and ``write(2)`` from/to the 149 + socket or use ``send(2)``, ``sendto(2)``, ``sendmsg(2)`` and the ``recv*()`` counterpart 150 150 operations on the socket as usual. There are also J1939 specific socket options 151 151 described below. 152 152 153 - In order to send data, a bind(2) must have been successful. bind(2) assigns a 153 + In order to send data, a ``bind(2)`` must have been successful. ``bind(2)`` assigns a 154 154 local address to a socket. 155 155 156 - Different from CAN is that the payload data is just the data that get send, 157 - without it's header info. The header info is derived from the sockaddr supplied 158 - to bind(2), connect(2), sendto(2) and recvfrom(2). A write(2) with size 4 will 156 + Different from CAN is that the payload data is just the data that get sends, 157 + without its header info. The header info is derived from the sockaddr supplied 158 + to ``bind(2)``, ``connect(2)``, ``sendto(2)`` and ``recvfrom(2)``. A ``write(2)`` with size 4 will 159 159 result in a packet with 4 bytes. 160 160 161 161 The sockaddr structure has extensions for use with J1939 as specified below: ··· 180 180 } can_addr; 181 181 } 182 182 183 - can_family & can_ifindex serve the same purpose as for other SocketCAN sockets. 183 + ``can_family`` & ``can_ifindex`` serve the same purpose as for other SocketCAN sockets. 184 184 185 - can_addr.j1939.pgn specifies the PGN (max 0x3ffff). Individual bits are 185 + ``can_addr.j1939.pgn`` specifies the PGN (max 0x3ffff). Individual bits are 186 186 specified above. 187 187 188 - can_addr.j1939.name contains the 64-bit J1939 NAME. 188 + ``can_addr.j1939.name`` contains the 64-bit J1939 NAME. 189 189 190 - can_addr.j1939.addr contains the address. 190 + ``can_addr.j1939.addr`` contains the address. 191 191 192 - The bind(2) system call assigns the local address, i.e. the source address when 193 - sending packages. 
If a PGN during bind(2) is set, it's used as a RX filter. 194 - I.e. only packets with a matching PGN are received. If an ADDR or NAME is set 192 + The ``bind(2)`` system call assigns the local address, i.e. the source address when 193 + sending packages. If a PGN during ``bind(2)`` is set, it's used as a RX filter. 194 + I.e. only packets with a matching PGN are received. If an ADDR or NAME is set 195 195 it is used as a receive filter, too. It will match the destination NAME or ADDR 196 196 of the incoming packet. The NAME filter will work only if appropriate Address 197 197 Claiming for this name was done on the CAN bus and registered/cached by the 198 198 kernel. 199 199 200 - On the other hand connect(2) assigns the remote address, i.e. the destination 201 - address. The PGN from connect(2) is used as the default PGN when sending 200 + On the other hand ``connect(2)`` assigns the remote address, i.e. the destination 201 + address. The PGN from ``connect(2)`` is used as the default PGN when sending 202 202 packets. If ADDR or NAME is set it will be used as the default destination ADDR 203 - or NAME. Further a set ADDR or NAME during connect(2) is used as a receive 203 + or NAME. Further a set ADDR or NAME during ``connect(2)`` is used as a receive 204 204 filter. It will match the source NAME or ADDR of the incoming packet. 205 205 206 - Both write(2) and send(2) will send a packet with local address from bind(2) and 207 - the remote address from connect(2). Use sendto(2) to overwrite the destination 206 + Both ``write(2)`` and ``send(2)`` will send a packet with local address from ``bind(2)`` and the 207 + remote address from ``connect(2)``. Use ``sendto(2)`` to overwrite the destination 208 208 address. 209 209 210 - If can_addr.j1939.name is set (!= 0) the NAME is looked up by the kernel and 211 - the corresponding ADDR is used. If can_addr.j1939.name is not set (== 0), 212 - can_addr.j1939.addr is used. 
210 + If ``can_addr.j1939.name`` is set (!= 0) the NAME is looked up by the kernel and 211 + the corresponding ADDR is used. If ``can_addr.j1939.name`` is not set (== 0), 212 + ``can_addr.j1939.addr`` is used. 213 213 
 214 214 When creating a socket, reasonable defaults are set. Some options can be 215 - modified with setsockopt(2) & getsockopt(2). 215 + modified with ``setsockopt(2)`` & ``getsockopt(2)``. 216 216 
 217 217 RX path related options: 218 218 
 219 - - SO_J1939_FILTER - configure array of filters 220 - - SO_J1939_PROMISC - disable filters set by bind(2) and connect(2) 219 + - ``SO_J1939_FILTER`` - configure array of filters 220 + - ``SO_J1939_PROMISC`` - disable filters set by ``bind(2)`` and ``connect(2)`` 221 221 
 222 222 By default no broadcast packets can be sent or received. To enable sending or 223 - receiving broadcast packets use the socket option SO_BROADCAST: 223 + receiving broadcast packets use the socket option ``SO_BROADCAST``: 224 224 
 225 225 .. code-block:: C 226 226 ··· 261 261 +---------------------------+ 262 262 
 263 263 TX path related options: 264 - SO_J1939_SEND_PRIO - change default send priority for the socket 264 + ``SO_J1939_SEND_PRIO`` - change default send priority for the socket 265 265 
 266 266 Message Flags during send() and Related System Calls 267 267 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 268 268 
 269 - send(2), sendto(2) and sendmsg(2) take a 'flags' argument. Currently 269 + ``send(2)``, ``sendto(2)`` and ``sendmsg(2)`` take a 'flags' argument. Currently 270 270 supported flags are: 271 271 
 272 - * MSG_DONTWAIT, i.e. non-blocking operation. 272 + * ``MSG_DONTWAIT``, i.e. non-blocking operation. 273 273 
 274 274 recvmsg(2) 275 275 ^^^^^^^^^^ 276 276 
 277 - In most cases recvmsg(2) is needed if you want to extract more information than 278 - recvfrom(2) can provide. For example package priority and timestamp. 
The 277 + In most cases ``recvmsg(2)`` is needed if you want to extract more information than 278 + ``recvfrom(2)`` can provide. For example packet priority and timestamp. The 279 279 Destination Address, name and packet priority (if applicable) are attached to 280 - the msghdr in the recvmsg(2) call. They can be extracted using cmsg(3) macros, 281 - with cmsg_level == SOL_J1939 && cmsg_type == SCM_J1939_DEST_ADDR, 282 - SCM_J1939_DEST_NAME or SCM_J1939_PRIO. The returned data is a uint8_t for 283 - priority and dst_addr, and uint64_t for dst_name. 280 + the msghdr in the ``recvmsg(2)`` call. They can be extracted using ``cmsg(3)`` macros, 281 + with ``cmsg_level == SOL_J1939 && cmsg_type == SCM_J1939_DEST_ADDR``, 282 + ``SCM_J1939_DEST_NAME`` or ``SCM_J1939_PRIO``. The returned data is a ``uint8_t`` for 283 + ``priority`` and ``dst_addr``, and ``uint64_t`` for ``dst_name``. 284 284 
 285 285 .. code-block:: C 286 286 ··· 305 305 
 306 306 Distinction has to be made between using the claimed address and doing an 307 307 address claim. To use an already claimed address, one has to fill in the 308 - j1939.name member and provide it to bind(2). If the name had claimed an address 308 + ``j1939.name`` member and provide it to ``bind(2)``. If the name had claimed an address 309 309 earlier, all further messages being sent will use that address. And the 310 - j1939.addr member will be ignored. 310 + ``j1939.addr`` member will be ignored. 311 311 
 312 312 An exception to this is PGN 0x0ee00. This is the "Address Claim/Cannot Claim 313 - Address" message and the kernel will use the j1939.addr member for that PGN if 313 + Address" message and the kernel will use the ``j1939.addr`` member for that PGN if 314 314 necessary. 315 315 
 316 316 To claim an address, the following code example can be used: ··· 371 371 
 372 372 If another ECU claims the address, the kernel will mark the NAME-SA expired. 373 373 No socket bound to the NAME can send packets (other than address claims). 
To 374 - claim another address, some socket bound to NAME, must bind(2) again, but with 375 - only j1939.addr changed to the new SA, and must then send a valid address claim 374 + claim another address, some socket bound to NAME, must ``bind(2)`` again, but with 375 + only ``j1939.addr`` changed to the new SA, and must then send a valid address claim 376 376 packet. This restarts the state machine in the kernel (and any other 377 377 participant on the bus) for this NAME. 378 378 
 379 - can-utils also include the jacd tool, so it can be used as code example or as 379 + ``can-utils`` also includes the ``j1939acd`` tool, so it can be used as a code example or as 380 380 default Address Claiming daemon. 381 381 
 382 382 Send Examples ··· 403 403 
 404 404 bind(sock, (struct sockaddr *)&baddr, sizeof(baddr)); 405 405 
 406 - Now, the socket 'sock' is bound to the SA 0x20. Since no connect(2) was called, 407 - at this point we can use only sendto(2) or sendmsg(2). 406 + Now, the socket 'sock' is bound to the SA 0x20. Since no ``connect(2)`` was called, 407 + at this point we can use only ``sendto(2)`` or ``sendmsg(2)``. 408 408 
 409 409 Send: 410 410 
 ··· 
 414 414 .can_family = AF_CAN, 415 415 .can_addr.j1939 = { 416 416 .name = J1939_NO_NAME, 417 - .pgn = 0x30, 418 - .addr = 0x12300, 417 + .addr = 0x30, 418 + .pgn = 0x12300, 419 419 }, 420 420 }; 421 421
+1 -2
Documentation/networking/statistics.rst
··· 175 175 translated to netlink attributes when dumped. Drivers must not overwrite 176 176 the statistics they don't report with 0. 177 177 178 - .. kernel-doc:: include/linux/ethtool.h 179 - :identifiers: ethtool_pause_stats 178 + - ethtool_pause_stats()
+14 -6
Documentation/sphinx/automarkup.py
··· 16 16 from itertools import chain 17 17 18 18 # 19 + # Python 2 lacks re.ASCII... 20 + # 21 + try: 22 + ascii_p3 = re.ASCII 23 + except AttributeError: 24 + ascii_p3 = 0 25 + 26 + # 19 27 # Regex nastiness. Of course. 20 28 # Try to identify "function()" that's not already marked up some 21 29 # other way. Sphinx doesn't like a lot of stuff right after a 22 30 # :c:func: block (i.e. ":c:func:`mmap()`s" flakes out), so the last 23 31 # bit tries to restrict matches to things that won't create trouble. 24 32 # 25 - RE_function = re.compile(r'\b(([a-zA-Z_]\w+)\(\))', flags=re.ASCII) 33 + RE_function = re.compile(r'\b(([a-zA-Z_]\w+)\(\))', flags=ascii_p3) 26 34 27 35 # 28 36 # Sphinx 2 uses the same :c:type role for struct, union, enum and typedef 29 37 # 30 38 RE_generic_type = re.compile(r'\b(struct|union|enum|typedef)\s+([a-zA-Z_]\w+)', 31 - flags=re.ASCII) 39 + flags=ascii_p3) 32 40 33 41 # 34 42 # Sphinx 3 uses a different C role for each one of struct, union, enum and 35 43 # typedef 36 44 # 37 - RE_struct = re.compile(r'\b(struct)\s+([a-zA-Z_]\w+)', flags=re.ASCII) 38 - RE_union = re.compile(r'\b(union)\s+([a-zA-Z_]\w+)', flags=re.ASCII) 39 - RE_enum = re.compile(r'\b(enum)\s+([a-zA-Z_]\w+)', flags=re.ASCII) 40 - RE_typedef = re.compile(r'\b(typedef)\s+([a-zA-Z_]\w+)', flags=re.ASCII) 45 + RE_struct = re.compile(r'\b(struct)\s+([a-zA-Z_]\w+)', flags=ascii_p3) 46 + RE_union = re.compile(r'\b(union)\s+([a-zA-Z_]\w+)', flags=ascii_p3) 47 + RE_enum = re.compile(r'\b(enum)\s+([a-zA-Z_]\w+)', flags=ascii_p3) 48 + RE_typedef = re.compile(r'\b(typedef)\s+([a-zA-Z_]\w+)', flags=ascii_p3) 41 49 42 50 # 43 51 # Detects a reference to a documentation page of the form Documentation/... with
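The automarkup.py hunk uses a common compatibility idiom: probe for `re.ASCII` (Python 3 only) and fall back to `0`, which is a no-op when passed as a regex flag on Python 2. A minimal sketch of the same pattern:

```python
import re

# Probe for a flag that only exists on Python 3; on Python 2 the
# attribute lookup raises AttributeError and we fall back to 0,
# which is harmless when OR-ed into the flags argument.
try:
    ascii_flag = re.ASCII
except AttributeError:
    ascii_flag = 0

word = re.compile(r'\w+', flags=ascii_flag)

# With re.ASCII in effect, \w matches ASCII word characters only,
# so a non-ASCII letter splits the match on Python 3.
print(word.findall('función'))  # → ['funci', 'n']
```

This keeps one code path for both interpreters instead of branching on `sys.version_info`.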
+1
Documentation/userspace-api/index.rst
··· 22 22 spec_ctrl 23 23 accelerators/ocxl 24 24 ioctl/index 25 + iommu 25 26 media/index 26 27 27 28 .. only:: subproject and html
+13 -6
MAINTAINERS
··· 978 978 L: linux-iio@vger.kernel.org 979 979 S: Supported 980 980 W: http://ez.analog.com/community/linux-device-drivers 981 - F: Documentation/devicetree/bindings/iio/adc/adi,ad7768-1.txt 981 + F: Documentation/devicetree/bindings/iio/adc/adi,ad7768-1.yaml 982 982 F: drivers/iio/adc/ad7768-1.c 983 983 984 984 ANALOG DEVICES INC AD7780 DRIVER ··· 3857 3857 L: linux-usb@vger.kernel.org 3858 3858 S: Maintained 3859 3859 T: git git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb.git 3860 - F: Documentation/devicetree/bindings/usb/cdns-usb3.txt 3860 + F: Documentation/devicetree/bindings/usb/cdns,usb3.yaml 3861 3861 F: drivers/usb/cdns3/ 3862 3862 3863 3863 CADET FM/AM RADIO RECEIVER DRIVER ··· 7923 7923 M: john.garry@huawei.com 7924 7924 S: Maintained 7925 7925 W: http://www.hisilicon.com 7926 - F: Documentation/devicetree/bindings/arm/hisilicon/hisilicon-low-pin-count.txt 7926 + F: Documentation/devicetree/bindings/arm/hisilicon/low-pin-count.yaml 7927 7927 F: drivers/bus/hisi_lpc.c 7928 7928 7929 7929 HISILICON NETWORK SUBSYSTEM 3 DRIVER (HNS3) ··· 11170 11170 F: drivers/input/touchscreen/melfas_mip4.c 11171 11171 11172 11172 MELLANOX BLUEFIELD I2C DRIVER 11173 - M: Khalil Blaiech <kblaiech@mellanox.com> 11173 + M: Khalil Blaiech <kblaiech@nvidia.com> 11174 11174 L: linux-i2c@vger.kernel.org 11175 11175 S: Supported 11176 11176 F: drivers/i2c/busses/i2c-mlxbf.c ··· 14534 14534 F: drivers/mailbox/qcom-ipcc.c 14535 14535 F: include/dt-bindings/mailbox/qcom-ipcc.h 14536 14536 14537 + QUALCOMM IPQ4019 VQMMC REGULATOR DRIVER 14538 + M: Robert Marko <robert.marko@sartura.hr> 14539 + M: Luka Perkov <luka.perkov@sartura.hr> 14540 + L: linux-arm-msm@vger.kernel.org 14541 + S: Maintained 14542 + F: Documentation/devicetree/bindings/regulator/vqmmc-ipq4019-regulator.yaml 14543 + F: drivers/regulator/vqmmc-ipq4019-regulator.c 14544 + 14537 14545 QUALCOMM RMNET DRIVER 14538 14546 M: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> 14539 14547 M: Sean 
Tranchetti <stranche@codeaurora.org> ··· 14897 14889 R: Sergei Shtylyov <sergei.shtylyov@gmail.com> 14898 14890 L: netdev@vger.kernel.org 14899 14891 L: linux-renesas-soc@vger.kernel.org 14900 - F: Documentation/devicetree/bindings/net/renesas,*.txt 14901 14892 F: Documentation/devicetree/bindings/net/renesas,*.yaml 14902 14893 F: drivers/net/ethernet/renesas/ 14903 14894 F: include/linux/sh_eth.h ··· 18097 18090 M: Binghui Wang <wangbinghui@hisilicon.com> 18098 18091 L: linux-usb@vger.kernel.org 18099 18092 S: Maintained 18100 - F: Documentation/devicetree/bindings/phy/phy-hi3660-usb3.txt 18093 + F: Documentation/devicetree/bindings/phy/hisilicon,hi3660-usb3.yaml 18101 18094 F: drivers/phy/hisilicon/phy-hi3660-usb3.c 18102 18095 18103 18096 USB ISP116X DRIVER
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 10 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Kleptomaniac Octopus 7 7 8 8 # *DOCUMENTATION*
+16 -1
arch/arc/kernel/head.S
··· 67 67 sr r5, [ARC_REG_LPB_CTRL] 68 68 1: 69 69 #endif /* CONFIG_ARC_LPB_DISABLE */ 70 - #endif 70 + 
 71 + /* On HSDK, CCMs need to be remapped super early */ 72 + #ifdef CONFIG_ARC_SOC_HSDK 73 + mov r6, 0x60000000 74 + lr r5, [ARC_REG_ICCM_BUILD] 75 + breq r5, 0, 1f 76 + sr r6, [ARC_REG_AUX_ICCM] 77 + 1: 78 + lr r5, [ARC_REG_DCCM_BUILD] 79 + breq r5, 0, 2f 80 + sr r6, [ARC_REG_AUX_DCCM] 81 + 2: 82 + #endif /* CONFIG_ARC_SOC_HSDK */ 83 + 
 84 + #endif /* CONFIG_ISA_ARCV2 */ 85 + 
 71 86 ; Config DSP_CTRL properly, so kernel may use integer multiply, 72 87 ; multiply-accumulate, and divide operations 73 88 DSP_EARLY_INIT
+6 -1
arch/arc/kernel/stacktrace.c
··· 112 112 int (*consumer_fn) (unsigned int, void *), void *arg) 113 113 { 114 114 #ifdef CONFIG_ARC_DW2_UNWIND 115 - int ret = 0; 115 + int ret = 0, cnt = 0; 116 116 unsigned int address; 117 117 struct unwind_frame_info frame_info; 118 118 ··· 132 132 break; 133 133 134 134 frame_info.regs.r63 = frame_info.regs.r31; 135 + 136 + if (cnt++ > 128) { 137 + printk("unwinder looping too long, aborting !\n"); 138 + return 0; 139 + } 135 140 } 136 141 137 142 return address; /* return the last address it saw */
-17
arch/arc/plat-hsdk/platform.c
··· 17 17 18 18 #define ARC_CCM_UNUSED_ADDR 0x60000000 19 19 20 - static void __init hsdk_init_per_cpu(unsigned int cpu) 21 - { 22 - /* 23 - * By default ICCM is mapped to 0x7z while this area is used for 24 - * kernel virtual mappings, so move it to currently unused area. 25 - */ 26 - if (cpuinfo_arc700[cpu].iccm.sz) 27 - write_aux_reg(ARC_REG_AUX_ICCM, ARC_CCM_UNUSED_ADDR); 28 - 29 - /* 30 - * By default DCCM is mapped to 0x8z while this area is used by kernel, 31 - * so move it to currently unused area. 32 - */ 33 - if (cpuinfo_arc700[cpu].dccm.sz) 34 - write_aux_reg(ARC_REG_AUX_DCCM, ARC_CCM_UNUSED_ADDR); 35 - } 36 20 37 21 #define ARC_PERIPHERAL_BASE 0xf0000000 38 22 #define CREG_BASE (ARC_PERIPHERAL_BASE + 0x1000) ··· 323 339 MACHINE_START(SIMULATION, "hsdk") 324 340 .dt_compat = hsdk_compat, 325 341 .init_early = hsdk_init_early, 326 - .init_per_cpu = hsdk_init_per_cpu, 327 342 MACHINE_END
+2 -2
arch/arm/mm/init.c
··· 354 354 /* set highmem page free */ 355 355 for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, 356 356 &range_start, &range_end, NULL) { 357 - unsigned long start = PHYS_PFN(range_start); 358 - unsigned long end = PHYS_PFN(range_end); 357 + unsigned long start = PFN_UP(range_start); 358 + unsigned long end = PFN_DOWN(range_end); 359 359 360 360 /* Ignore complete lowmem entries */ 361 361 if (end <= max_low)
+1 -1
arch/arm64/Kconfig
··· 1002 1002 config NODES_SHIFT 1003 1003 int "Maximum NUMA Nodes (as a power of 2)" 1004 1004 range 1 10 1005 - default "2" 1005 + default "4" 1006 1006 depends on NEED_MULTIPLE_NODES 1007 1007 help 1008 1008 Specify the maximum number of NUMA Nodes available on the target
+2
arch/arm64/include/asm/brk-imm.h
··· 10 10 * #imm16 values used for BRK instruction generation 11 11 * 0x004: for installing kprobes 12 12 * 0x005: for installing uprobes 13 + * 0x006: for kprobe software single-step 13 14 * Allowed values for kgdb are 0x400 - 0x7ff 14 15 * 0x100: for triggering a fault on purpose (reserved) 15 16 * 0x400: for dynamic BRK instruction ··· 20 19 */ 21 20 #define KPROBES_BRK_IMM 0x004 22 21 #define UPROBES_BRK_IMM 0x005 22 + #define KPROBES_BRK_SS_IMM 0x006 23 23 #define FAULT_BRK_IMM 0x100 24 24 #define KGDB_DYN_DBG_BRK_IMM 0x400 25 25 #define KGDB_COMPILED_DBG_BRK_IMM 0x401
+1
arch/arm64/include/asm/debug-monitors.h
··· 53 53 54 54 /* kprobes BRK opcodes with ESR encoding */ 55 55 #define BRK64_OPCODE_KPROBES (AARCH64_BREAK_MON | (KPROBES_BRK_IMM << 5)) 56 + #define BRK64_OPCODE_KPROBES_SS (AARCH64_BREAK_MON | (KPROBES_BRK_SS_IMM << 5)) 56 57 /* uprobes BRK opcodes with ESR encoding */ 57 58 #define BRK64_OPCODE_UPROBES (AARCH64_BREAK_MON | (UPROBES_BRK_IMM << 5)) 58 59
+1 -1
arch/arm64/include/asm/kprobes.h
··· 16 16 #include <linux/percpu.h> 17 17 18 18 #define __ARCH_WANT_KPROBES_INSN_SLOT 19 - #define MAX_INSN_SIZE 1 19 + #define MAX_INSN_SIZE 2 20 20 21 21 #define flush_insn_slot(p) do { } while (0) 22 22 #define kretprobe_blacklist_size 0
+32 -11
arch/arm64/kernel/kexec_image.c
··· 43 43 u64 flags, value; 44 44 bool be_image, be_kernel; 45 45 struct kexec_buf kbuf; 46 - unsigned long text_offset; 46 + unsigned long text_offset, kernel_segment_number; 47 47 struct kexec_segment *kernel_segment; 48 48 int ret; 49 49 ··· 88 88 /* Adjust kernel segment with TEXT_OFFSET */ 89 89 kbuf.memsz += text_offset; 90 90 91 - ret = kexec_add_buffer(&kbuf); 92 - if (ret) 93 - return ERR_PTR(ret); 91 + kernel_segment_number = image->nr_segments; 94 92 95 - kernel_segment = &image->segment[image->nr_segments - 1]; 93 + /* 94 + * The location of the kernel segment may make it impossible to satisfy 95 + * the other segment requirements, so we try repeatedly to find a 96 + * location that will work. 97 + */ 98 + while ((ret = kexec_add_buffer(&kbuf)) == 0) { 99 + /* Try to load additional data */ 100 + kernel_segment = &image->segment[kernel_segment_number]; 101 + ret = load_other_segments(image, kernel_segment->mem, 102 + kernel_segment->memsz, initrd, 103 + initrd_len, cmdline); 104 + if (!ret) 105 + break; 106 + 107 + /* 108 + * We couldn't find space for the other segments; erase the 109 + * kernel segment and try the next available hole. 110 + */ 111 + image->nr_segments -= 1; 112 + kbuf.buf_min = kernel_segment->mem + kernel_segment->memsz; 113 + kbuf.mem = KEXEC_BUF_MEM_UNKNOWN; 114 + } 115 + 116 + if (ret) { 117 + pr_err("Could not find any suitable kernel location!"); 118 + return ERR_PTR(ret); 119 + } 120 + 121 + kernel_segment = &image->segment[kernel_segment_number]; 96 122 kernel_segment->mem += text_offset; 97 123 kernel_segment->memsz -= text_offset; 98 124 image->start = kernel_segment->mem; ··· 127 101 kernel_segment->mem, kbuf.bufsz, 128 102 kernel_segment->memsz); 129 103 130 - /* Load additional data */ 131 - ret = load_other_segments(image, 132 - kernel_segment->mem, kernel_segment->memsz, 133 - initrd, initrd_len, cmdline); 134 - 135 - return ERR_PTR(ret); 104 + return 0; 136 105 } 137 106 138 107 #ifdef CONFIG_KEXEC_IMAGE_VERIFY_SIG
+8 -1
arch/arm64/kernel/machine_kexec_file.c
··· 240 240 return ret; 241 241 } 242 242 243 + /* 244 + * Tries to add the initrd and DTB to the image. If it is not possible to find 245 + * valid locations, this function will undo changes to the image and return non 246 + * zero. 247 + */ 243 248 int load_other_segments(struct kimage *image, 244 249 unsigned long kernel_load_addr, 245 250 unsigned long kernel_size, ··· 253 248 { 254 249 struct kexec_buf kbuf; 255 250 void *headers, *dtb = NULL; 256 - unsigned long headers_sz, initrd_load_addr = 0, dtb_len; 251 + unsigned long headers_sz, initrd_load_addr = 0, dtb_len, 252 + orig_segments = image->nr_segments; 257 253 int ret = 0; 258 254 259 255 kbuf.image = image; ··· 340 334 return 0; 341 335 342 336 out_err: 337 + image->nr_segments = orig_segments; 343 338 vfree(dtb); 344 339 return ret; 345 340 }
+24 -47
arch/arm64/kernel/probes/kprobes.c
··· 36 36 static void __kprobes 37 37 post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *); 38 38 39 - static int __kprobes patch_text(kprobe_opcode_t *addr, u32 opcode) 40 - { 41 - void *addrs[1]; 42 - u32 insns[1]; 43 - 44 - addrs[0] = addr; 45 - insns[0] = opcode; 46 - 47 - return aarch64_insn_patch_text(addrs, insns, 1); 48 - } 49 - 50 39 static void __kprobes arch_prepare_ss_slot(struct kprobe *p) 51 40 { 52 - /* prepare insn slot */ 53 - patch_text(p->ainsn.api.insn, p->opcode); 41 + kprobe_opcode_t *addr = p->ainsn.api.insn; 42 + void *addrs[] = {addr, addr + 1}; 43 + u32 insns[] = {p->opcode, BRK64_OPCODE_KPROBES_SS}; 54 44 55 - flush_icache_range((uintptr_t) (p->ainsn.api.insn), 56 - (uintptr_t) (p->ainsn.api.insn) + 57 - MAX_INSN_SIZE * sizeof(kprobe_opcode_t)); 45 + /* prepare insn slot */ 46 + aarch64_insn_patch_text(addrs, insns, 2); 47 + 48 + flush_icache_range((uintptr_t)addr, (uintptr_t)(addr + MAX_INSN_SIZE)); 58 49 59 50 /* 60 51 * Needs restoring of return address after stepping xol. ··· 119 128 /* arm kprobe: install breakpoint in text */ 120 129 void __kprobes arch_arm_kprobe(struct kprobe *p) 121 130 { 122 - patch_text(p->addr, BRK64_OPCODE_KPROBES); 131 + void *addr = p->addr; 132 + u32 insn = BRK64_OPCODE_KPROBES; 133 + 134 + aarch64_insn_patch_text(&addr, &insn, 1); 123 135 } 124 136 125 137 /* disarm kprobe: remove breakpoint from text */ 126 138 void __kprobes arch_disarm_kprobe(struct kprobe *p) 127 139 { 128 - patch_text(p->addr, p->opcode); 140 + void *addr = p->addr; 141 + 142 + aarch64_insn_patch_text(&addr, &p->opcode, 1); 129 143 } 130 144 131 145 void __kprobes arch_remove_kprobe(struct kprobe *p) ··· 159 163 } 160 164 161 165 /* 162 - * Interrupts need to be disabled before single-step mode is set, and not 163 - * reenabled until after single-step mode ends. 
164 - * Without disabling interrupt on local CPU, there is a chance of 165 - * interrupt occurrence in the period of exception return and start of 166 - * out-of-line single-step, that result in wrongly single stepping 167 - * into the interrupt handler. 166 + * Mask all of DAIF while executing the instruction out-of-line, to keep things 167 + * simple and avoid nesting exceptions. Interrupts do have to be disabled since 168 + * the kprobe state is per-CPU and doesn't get migrated. 168 169 */ 169 170 static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb, 170 171 struct pt_regs *regs) 171 172 { 172 173 kcb->saved_irqflag = regs->pstate & DAIF_MASK; 173 - regs->pstate |= PSR_I_BIT; 174 - /* Unmask PSTATE.D for enabling software step exceptions. */ 175 - regs->pstate &= ~PSR_D_BIT; 174 + regs->pstate |= DAIF_MASK; 176 175 } 177 176 178 177 static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb, ··· 210 219 slot = (unsigned long)p->ainsn.api.insn; 211 220 212 221 set_ss_context(kcb, slot); /* mark pending ss */ 213 - 214 - /* IRQs and single stepping do not mix well. */ 215 222 kprobes_save_local_irqflag(kcb, regs); 216 - kernel_enable_single_step(regs); 217 223 instruction_pointer_set(regs, slot); 218 224 } else { 219 225 /* insn simulation */ ··· 261 273 } 262 274 /* call post handler */ 263 275 kcb->kprobe_status = KPROBE_HIT_SSDONE; 264 - if (cur->post_handler) { 265 - /* post_handler can hit breakpoint and single step 266 - * again, so we enable D-flag for recursive exception. 
267 - */ 276 + if (cur->post_handler) 268 277 cur->post_handler(cur, regs, 0); 269 - } 270 278 271 279 reset_current_kprobe(); 272 280 } ··· 285 301 instruction_pointer_set(regs, (unsigned long) cur->addr); 286 302 if (!instruction_pointer(regs)) 287 303 BUG(); 288 - 289 - kernel_disable_single_step(); 290 304 291 305 if (kcb->kprobe_status == KPROBE_REENTER) 292 306 restore_previous_kprobe(kcb); ··· 347 365 * pre-handler and it returned non-zero, it will 348 366 * modify the execution path and no need to single 349 367 * stepping. Let's just reset current kprobe and exit. 350 - * 351 - * pre_handler can hit a breakpoint and can step thru 352 - * before return, keep PSTATE D-flag enabled until 353 - * pre_handler return back. 354 368 */ 355 369 if (!p->pre_handler || !p->pre_handler(p, regs)) { 356 370 setup_singlestep(p, regs, kcb, 0); ··· 377 399 } 378 400 379 401 static int __kprobes 380 - kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr) 402 + kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned int esr) 381 403 { 382 404 struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); 383 405 int retval; ··· 387 409 388 410 if (retval == DBG_HOOK_HANDLED) { 389 411 kprobes_restore_local_irqflag(kcb, regs); 390 - kernel_disable_single_step(); 391 - 392 412 post_kprobe_handler(kcb, regs); 393 413 } 394 414 395 415 return retval; 396 416 } 397 417 398 - static struct step_hook kprobes_step_hook = { 399 - .fn = kprobe_single_step_handler, 418 + static struct break_hook kprobes_break_ss_hook = { 419 + .imm = KPROBES_BRK_SS_IMM, 420 + .fn = kprobe_breakpoint_ss_handler, 400 421 }; 401 422 402 423 static int __kprobes ··· 463 486 int __init arch_init_kprobes(void) 464 487 { 465 488 register_kernel_break_hook(&kprobes_break_hook); 466 - register_kernel_step_hook(&kprobes_step_hook); 489 + register_kernel_break_hook(&kprobes_break_ss_hook); 467 490 468 491 return 0; 469 492 }
+1 -1
arch/powerpc/include/asm/nohash/32/kup-8xx.h
··· 63 63 static inline bool 64 64 bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write) 65 65 { 66 - return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xf0000000), 66 + return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xff000000), 67 67 "Bug: fault blocked by AP register !"); 68 68 } 69 69
+14 -31
arch/powerpc/include/asm/nohash/32/mmu-8xx.h
··· 33 33 * respectively NA for All or X for Supervisor and no access for User. 34 34 * Then we use the APG to say whether accesses are according to Page rules or 35 35 * "all Supervisor" rules (Access to all) 36 - * Therefore, we define 2 APG groups. lsb is _PMD_USER 37 - * 0 => Kernel => 01 (all accesses performed according to page definition) 38 - * 1 => User => 00 (all accesses performed as supervisor iaw page definition) 39 - * 2-15 => Not Used 36 + * _PAGE_ACCESSED is also managed via APG. When _PAGE_ACCESSED is not set, say 37 + * "all User" rules, that will lead to NA for all. 38 + * Therefore, we define 4 APG groups. lsb is _PAGE_ACCESSED 39 + * 0 => Kernel => 11 (all accesses performed according as user iaw page definition) 40 + * 1 => Kernel+Accessed => 01 (all accesses performed according to page definition) 41 + * 2 => User => 11 (all accesses performed according as user iaw page definition) 42 + * 3 => User+Accessed => 00 (all accesses performed as supervisor iaw page definition) for INIT 43 + * => 10 (all accesses performed according to swaped page definition) for KUEP 44 + * 4-15 => Not Used 40 45 */ 41 - #define MI_APG_INIT 0x40000000 42 - 43 - /* 44 - * 0 => Kernel => 01 (all accesses performed according to page definition) 45 - * 1 => User => 10 (all accesses performed according to swaped page definition) 46 - * 2-15 => Not Used 47 - */ 48 - #define MI_APG_KUEP 0x60000000 46 + #define MI_APG_INIT 0xdc000000 47 + #define MI_APG_KUEP 0xde000000 49 48 50 49 /* The effective page number register. When read, contains the information 51 50 * about the last instruction TLB miss. When MI_RPN is written, bits in ··· 105 106 #define MD_Ks 0x80000000 /* Should not be set */ 106 107 #define MD_Kp 0x40000000 /* Should always be set */ 107 108 108 - /* 109 - * All pages' PP data bits are set to either 000 or 011 or 001, which means 110 - * respectively RW for Supervisor and no access for User, or RO for 111 - * Supervisor and no access for user and NA for ALL. 
112 - * Then we use the APG to say whether accesses are according to Page rules or 113 - * "all Supervisor" rules (Access to all) 114 - * Therefore, we define 2 APG groups. lsb is _PMD_USER 115 - * 0 => Kernel => 01 (all accesses performed according to page definition) 116 - * 1 => User => 00 (all accesses performed as supervisor iaw page definition) 117 - * 2-15 => Not Used 118 - */ 119 - #define MD_APG_INIT 0x40000000 120 - 121 - /* 122 - * 0 => No user => 01 (all accesses performed according to page definition) 123 - * 1 => User => 10 (all accesses performed according to swaped page definition) 124 - * 2-15 => Not Used 125 - */ 126 - #define MD_APG_KUAP 0x60000000 109 + /* See explanation above at the definition of MI_APG_INIT */ 110 + #define MD_APG_INIT 0xdc000000 111 + #define MD_APG_KUAP 0xde000000 127 112 128 113 /* The effective page number register. When read, contains the information 129 114 * about the last instruction TLB miss. When MD_RPN is written, bits in
+5 -4
arch/powerpc/include/asm/nohash/32/pte-8xx.h
··· 39 39 * into the TLB. 40 40 */ 41 41 #define _PAGE_GUARDED 0x0010 /* Copied to L1 G entry in DTLB */ 42 - #define _PAGE_SPECIAL 0x0020 /* SW entry */ 42 + #define _PAGE_ACCESSED 0x0020 /* Copied to L1 APG 1 entry in I/DTLB */ 43 43 #define _PAGE_EXEC 0x0040 /* Copied to PP (bit 21) in ITLB */ 44 - #define _PAGE_ACCESSED 0x0080 /* software: page referenced */ 44 + #define _PAGE_SPECIAL 0x0080 /* SW entry */ 45 45 46 46 #define _PAGE_NA 0x0200 /* Supervisor NA, User no access */ 47 47 #define _PAGE_RO 0x0600 /* Supervisor RO, User no access */ ··· 59 59 60 60 #define _PMD_PRESENT 0x0001 61 61 #define _PMD_PRESENT_MASK _PMD_PRESENT 62 - #define _PMD_BAD 0x0fd0 62 + #define _PMD_BAD 0x0f90 63 63 #define _PMD_PAGE_MASK 0x000c 64 64 #define _PMD_PAGE_8M 0x000c 65 65 #define _PMD_PAGE_512K 0x0004 66 - #define _PMD_USER 0x0020 /* APG 1 */ 66 + #define _PMD_ACCESSED 0x0020 /* APG 1 */ 67 + #define _PMD_USER 0x0040 /* APG 2 */ 67 68 68 69 #define _PTE_NONE_MASK 0 69 70
+9 -3
arch/powerpc/include/asm/topology.h
··· 6 6 7 7 struct device; 8 8 struct device_node; 9 + struct drmem_lmb; 9 10 10 11 #ifdef CONFIG_NUMA 11 12 ··· 62 61 */ 63 62 return (nid < 0) ? 0 : nid; 64 63 } 64 + 65 + int of_drconf_to_nid_single(struct drmem_lmb *lmb); 66 + 65 67 #else 66 68 67 69 static inline int early_cpu_to_node(int cpu) { return 0; } ··· 88 84 return 0; 89 85 } 90 86 91 - #endif /* CONFIG_NUMA */ 87 + static inline int of_drconf_to_nid_single(struct drmem_lmb *lmb) 88 + { 89 + return first_online_node; 90 + } 92 91 93 - struct drmem_lmb; 94 - int of_drconf_to_nid_single(struct drmem_lmb *lmb); 92 + #endif /* CONFIG_NUMA */ 95 93 96 94 #if defined(CONFIG_NUMA) && defined(CONFIG_PPC_SPLPAR) 97 95 extern int find_and_online_cpu_nid(int cpu);
+2 -2
arch/powerpc/include/asm/uaccess.h
··· 178 178 * are no aliasing issues. 179 179 */ 180 180 #define __put_user_asm_goto(x, addr, label, op) \ 181 - asm volatile goto( \ 181 + asm_volatile_goto( \ 182 182 "1: " op "%U1%X1 %0,%1 # put_user\n" \ 183 183 EX_TABLE(1b, %l2) \ 184 184 : \ ··· 191 191 __put_user_asm_goto(x, ptr, label, "std") 192 192 #else /* __powerpc64__ */ 193 193 #define __put_user_asm2_goto(x, addr, label) \ 194 - asm volatile goto( \ 194 + asm_volatile_goto( \ 195 195 "1: stw%X1 %0, %1\n" \ 196 196 "2: stw%X1 %L0, %L1\n" \ 197 197 EX_TABLE(1b, %l2) \
+3 -2
arch/powerpc/kernel/eeh_cache.c
··· 264 264 { 265 265 struct pci_io_addr_range *piar; 266 266 struct rb_node *n; 267 + unsigned long flags; 267 268 268 - spin_lock(&pci_io_addr_cache_root.piar_lock); 269 + spin_lock_irqsave(&pci_io_addr_cache_root.piar_lock, flags); 269 270 for (n = rb_first(&pci_io_addr_cache_root.rb_root); n; n = rb_next(n)) { 270 271 piar = rb_entry(n, struct pci_io_addr_range, rb_node); 271 272 ··· 274 273 (piar->flags & IORESOURCE_IO) ? "i/o" : "mem", 275 274 &piar->addr_lo, &piar->addr_hi, pci_name(piar->pcidev)); 276 275 } 277 - spin_unlock(&pci_io_addr_cache_root.piar_lock); 276 + spin_unlock_irqrestore(&pci_io_addr_cache_root.piar_lock, flags); 278 277 279 278 return 0; 280 279 }
-8
arch/powerpc/kernel/head_40x.S
··· 284 284 285 285 rlwimi r11, r10, 22, 20, 29 /* Compute PTE address */ 286 286 lwz r11, 0(r11) /* Get Linux PTE */ 287 - #ifdef CONFIG_SWAP 288 287 li r9, _PAGE_PRESENT | _PAGE_ACCESSED 289 - #else 290 - li r9, _PAGE_PRESENT 291 - #endif 292 288 andc. r9, r9, r11 /* Check permission */ 293 289 bne 5f 294 290 ··· 365 369 366 370 rlwimi r11, r10, 22, 20, 29 /* Compute PTE address */ 367 371 lwz r11, 0(r11) /* Get Linux PTE */ 368 - #ifdef CONFIG_SWAP 369 372 li r9, _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC 370 - #else 371 - li r9, _PAGE_PRESENT | _PAGE_EXEC 372 - #endif 373 373 andc. r9, r9, r11 /* Check permission */ 374 374 bne 5f 375 375
+7 -39
arch/powerpc/kernel/head_8xx.S
··· 202 202 203 203 InstructionTLBMiss: 204 204 mtspr SPRN_SPRG_SCRATCH0, r10 205 - #if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) || defined(CONFIG_HUGETLBFS) 206 205 mtspr SPRN_SPRG_SCRATCH1, r11 207 - #endif 208 206 209 207 /* If we are faulting a kernel address, we have to use the 210 208 * kernel page tables. ··· 222 224 3: 223 225 mtcr r11 224 226 #endif 225 - #if defined(CONFIG_HUGETLBFS) || !defined(CONFIG_PIN_TLB_TEXT) 226 227 lwz r11, (swapper_pg_dir-PAGE_OFFSET)@l(r10) /* Get level 1 entry */ 227 228 mtspr SPRN_MD_TWC, r11 228 - #else 229 - lwz r10, (swapper_pg_dir-PAGE_OFFSET)@l(r10) /* Get level 1 entry */ 230 - mtspr SPRN_MI_TWC, r10 /* Set segment attributes */ 231 - mtspr SPRN_MD_TWC, r10 232 - #endif 233 229 mfspr r10, SPRN_MD_TWC 234 230 lwz r10, 0(r10) /* Get the pte */ 235 - #if defined(CONFIG_HUGETLBFS) || !defined(CONFIG_PIN_TLB_TEXT) 231 + rlwimi r11, r10, 0, _PAGE_GUARDED | _PAGE_ACCESSED 236 232 rlwimi r11, r10, 32 - 9, _PMD_PAGE_512K 237 233 mtspr SPRN_MI_TWC, r11 238 - #endif 239 - #ifdef CONFIG_SWAP 240 - rlwinm r11, r10, 32-5, _PAGE_PRESENT 241 - and r11, r11, r10 242 - rlwimi r10, r11, 0, _PAGE_PRESENT 243 - #endif 244 234 /* The Linux PTE won't go exactly into the MMU TLB. 245 235 * Software indicator bits 20 and 23 must be clear. 
246 236 * Software indicator bits 22, 24, 25, 26, and 27 must be ··· 242 256 243 257 /* Restore registers */ 244 258 0: mfspr r10, SPRN_SPRG_SCRATCH0 245 - #if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) || defined(CONFIG_HUGETLBFS) 246 259 mfspr r11, SPRN_SPRG_SCRATCH1 247 - #endif 248 260 rfi 249 261 patch_site 0b, patch__itlbmiss_exit_1 250 262 ··· 252 268 addi r10, r10, 1 253 269 stw r10, (itlb_miss_counter - PAGE_OFFSET)@l(0) 254 270 mfspr r10, SPRN_SPRG_SCRATCH0 255 - #if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) 256 271 mfspr r11, SPRN_SPRG_SCRATCH1 257 - #endif 258 272 rfi 259 273 #endif 260 274 ··· 279 297 mfspr r10, SPRN_MD_TWC 280 298 lwz r10, 0(r10) /* Get the pte */ 281 299 282 - /* Insert the Guarded flag into the TWC from the Linux PTE. 300 + /* Insert Guarded and Accessed flags into the TWC from the Linux PTE. 283 301 * It is bit 27 of both the Linux PTE and the TWC (at least 284 302 * I got that right :-). It will be better when we can put 285 303 * this into the Linux pgd/pmd and load it in the operation 286 304 * above. 287 305 */ 288 - rlwimi r11, r10, 0, _PAGE_GUARDED 306 + rlwimi r11, r10, 0, _PAGE_GUARDED | _PAGE_ACCESSED 289 307 rlwimi r11, r10, 32 - 9, _PMD_PAGE_512K 290 308 mtspr SPRN_MD_TWC, r11 291 309 292 - /* Both _PAGE_ACCESSED and _PAGE_PRESENT has to be set. 293 - * We also need to know if the insn is a load/store, so: 294 - * Clear _PAGE_PRESENT and load that which will 295 - * trap into DTLB Error with store bit set accordinly. 296 - */ 297 - /* PRESENT=0x1, ACCESSED=0x20 298 - * r11 = ((r10 & PRESENT) & ((r10 & ACCESSED) >> 5)); 299 - * r10 = (r10 & ~PRESENT) | r11; 300 - */ 301 - #ifdef CONFIG_SWAP 302 - rlwinm r11, r10, 32-5, _PAGE_PRESENT 303 - and r11, r11, r10 304 - rlwimi r10, r11, 0, _PAGE_PRESENT 305 - #endif 306 310 /* The Linux PTE won't go exactly into the MMU TLB. 307 311 * Software indicator bits 24, 25, 26, and 27 must be 308 312 * set. 
All other Linux PTE bits control the behavior ··· 679 711 li r9, 4 /* up to 4 pages of 8M */ 680 712 mtctr r9 681 713 lis r9, KERNELBASE@h /* Create vaddr for TLB */ 682 - li r10, MI_PS8MEG | MI_SVALID /* Set 8M byte page */ 714 + li r10, MI_PS8MEG | _PMD_ACCESSED | MI_SVALID 683 715 li r11, MI_BOOTINIT /* Create RPN for address 0 */ 684 716 1: 685 717 mtspr SPRN_MI_CTR, r8 /* Set instruction MMU control */ ··· 743 775 #ifdef CONFIG_PIN_TLB_TEXT 744 776 LOAD_REG_IMMEDIATE(r5, 28 << 8) 745 777 LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET) 746 - LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG) 778 + LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG | _PMD_ACCESSED) 747 779 LOAD_REG_IMMEDIATE(r8, 0xf0 | _PAGE_RO | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT) 748 780 LOAD_REG_ADDR(r9, _sinittext) 749 781 li r0, 4 ··· 765 797 LOAD_REG_IMMEDIATE(r5, 28 << 8 | MD_TWAM) 766 798 #ifdef CONFIG_PIN_TLB_DATA 767 799 LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET) 768 - LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG) 800 + LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG | _PMD_ACCESSED) 769 801 #ifdef CONFIG_PIN_TLB_IMMR 770 802 li r0, 3 771 803 #else ··· 802 834 #endif 803 835 #ifdef CONFIG_PIN_TLB_IMMR 804 836 LOAD_REG_IMMEDIATE(r0, VIRT_IMMR_BASE | MD_EVALID) 805 - LOAD_REG_IMMEDIATE(r7, MD_SVALID | MD_PS512K | MD_GUARDED) 837 + LOAD_REG_IMMEDIATE(r7, MD_SVALID | MD_PS512K | MD_GUARDED | _PMD_ACCESSED) 806 838 mfspr r8, SPRN_IMMR 807 839 rlwinm r8, r8, 0, 0xfff80000 808 840 ori r8, r8, 0xf0 | _PAGE_DIRTY | _PAGE_SPS | _PAGE_SH | \
-12
arch/powerpc/kernel/head_book3s_32.S
··· 457 457 cmplw 0,r1,r3 458 458 #endif 459 459 mfspr r2, SPRN_SPRG_PGDIR 460 - #ifdef CONFIG_SWAP 461 460 li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC 462 - #else 463 - li r1,_PAGE_PRESENT | _PAGE_EXEC 464 - #endif 465 461 #if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC) 466 462 bgt- 112f 467 463 lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */ ··· 519 523 lis r1, TASK_SIZE@h /* check if kernel address */ 520 524 cmplw 0,r1,r3 521 525 mfspr r2, SPRN_SPRG_PGDIR 522 - #ifdef CONFIG_SWAP 523 526 li r1, _PAGE_PRESENT | _PAGE_ACCESSED 524 - #else 525 - li r1, _PAGE_PRESENT 526 - #endif 527 527 bgt- 112f 528 528 lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */ 529 529 addi r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */ ··· 595 603 lis r1, TASK_SIZE@h /* check if kernel address */ 596 604 cmplw 0,r1,r3 597 605 mfspr r2, SPRN_SPRG_PGDIR 598 - #ifdef CONFIG_SWAP 599 606 li r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED 600 - #else 601 - li r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT 602 - #endif 603 607 bgt- 112f 604 608 lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */ 605 609 addi r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */
+2 -1
arch/powerpc/kernel/smp.c
··· 1393 1393 /* Activate a secondary processor. */ 1394 1394 void start_secondary(void *unused) 1395 1395 { 1396 - unsigned int cpu = smp_processor_id(); 1396 + unsigned int cpu = raw_smp_processor_id(); 1397 1397 1398 1398 mmgrab(&init_mm); 1399 1399 current->active_mm = &init_mm; 1400 1400 1401 1401 smp_store_cpu_info(cpu); 1402 1402 set_dec(tb_ticks_per_jiffy); 1403 + rcu_cpu_starting(cpu); 1403 1404 preempt_disable(); 1404 1405 cpu_callin_map[cpu] = 1; 1405 1406
+1 -1
arch/riscv/include/asm/uaccess.h
··· 476 476 do { \ 477 477 long __kr_err; \ 478 478 \ 479 - __put_user_nocheck(*((type *)(dst)), (type *)(src), __kr_err); \ 479 + __put_user_nocheck(*((type *)(src)), (type *)(dst), __kr_err); \ 480 480 if (unlikely(__kr_err)) \ 481 481 goto err_label; \ 482 482 } while (0)
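The uaccess fix above swaps two macro arguments: the buggy `__put_kernel_nofault()` read from `dst` and wrote to `src`. A minimal userspace sketch of the corrected copy direction (the real macro goes through `__put_user_nocheck()` with fault handling, omitted here; `put_u32` is a hypothetical wrapper for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the corrected semantics: read from *src, write to *dst.
 * The buggy version had the two pointer arguments swapped. */
#define put_kernel_nofault_sketch(dst, src, type) \
	(*(type *)(dst) = *(const type *)(src))

static int put_u32(void *dst, const void *src)
{
	put_kernel_nofault_sketch(dst, src, uint32_t);
	return 0;
}
```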
+1 -1
arch/riscv/kernel/ftrace.c
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 1 + // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * Copyright (C) 2013 Linaro Limited 4 4 * Author: AKASHI Takahiro <takahiro.akashi@linaro.org>
+5
arch/riscv/kernel/head.S
··· 35 35 .word 0 36 36 #endif 37 37 .balign 8 38 + #ifdef CONFIG_RISCV_M_MODE 39 + /* Image load offset (0MB) from start of RAM for M-mode */ 40 + .dword 0 41 + #else 38 42 #if __riscv_xlen == 64 39 43 /* Image load offset(2MB) from start of RAM */ 40 44 .dword 0x200000 41 45 #else 42 46 /* Image load offset(4MB) from start of RAM */ 43 47 .dword 0x400000 48 + #endif 44 49 #endif 45 50 /* Effective size of kernel image */ 46 51 .dword _end - _start
+1
arch/riscv/kernel/vdso/.gitignore
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 vdso.lds 3 3 *.tmp 4 + vdso-syms.S
+9 -9
arch/riscv/kernel/vdso/Makefile
··· 43 43 SYSCFLAGS_vdso.so.dbg = $(c_flags) 44 44 $(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE 45 45 $(call if_changed,vdsold) 46 + SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \ 47 + -Wl,--build-id -Wl,--hash-style=both 46 48 47 49 # We also create a special relocatable object that should mirror the symbol 48 50 # table and layout of the linked DSO. With ld --just-symbols we can then 49 51 # refer to these symbols in the kernel code rather than hand-coded addresses. 50 - 51 - SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \ 52 - -Wl,--build-id=sha1 -Wl,--hash-style=both 53 - $(obj)/vdso-dummy.o: $(src)/vdso.lds $(obj)/rt_sigreturn.o FORCE 54 - $(call if_changed,vdsold) 55 - 56 - LDFLAGS_vdso-syms.o := -r --just-symbols 57 - $(obj)/vdso-syms.o: $(obj)/vdso-dummy.o FORCE 58 - $(call if_changed,ld) 52 + $(obj)/vdso-syms.S: $(obj)/vdso.so FORCE 53 + $(call if_changed,so2s) 59 54 60 55 # strip rule for the .so file 61 56 $(obj)/%.so: OBJCOPYFLAGS := -S ··· 67 72 $(CROSS_COMPILE)objcopy \ 68 73 $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@ && \ 69 74 rm $@.tmp 75 + 76 + # Extracts symbol offsets from the VDSO, converting them into an assembly file 77 + # that contains the same symbols at the same offsets. 78 + quiet_cmd_so2s = SO2S $@ 79 + cmd_so2s = $(NM) -D $< | $(srctree)/$(src)/so2s.sh > $@ 70 80 71 81 # install commands for the unstripped file 72 82 quiet_cmd_vdso_install = INSTALL $@
+6
arch/riscv/kernel/vdso/so2s.sh
··· 1 + #!/bin/sh 2 + # SPDX-License-Identifier: GPL-2.0+ 3 + # Copyright 2020 Palmer Dabbelt <palmerdabbelt@google.com> 4 + 5 + sed 's!\([0-9a-f]*\) T \([a-z0-9_]*\)\(@@LINUX_4.15\)*!.global \2\n.set \2,0x\1!' \ 6 + | grep '^\.'
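The `so2s.sh` sed expression above turns each `nm -D` line of the form `<hexaddr> T <symbol>[@@LINUX_4.15]` into a `.global`/`.set` pair. A hedged C model of that per-line transform (`so2s_line` is a hypothetical helper, not part of the patch, and it handles only the single-line case the sed expression matches):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Model of the sed transform: "<addr> T <sym>[@@LINUX_4.15]" becomes
 * ".global <sym>\n.set <sym>,0x<addr>".  The scanset stops at '@', which
 * drops the version suffix just like the sed pattern does. */
static int so2s_line(const char *nm_line, char *out, size_t outlen)
{
	char addr[32], sym[64];

	if (sscanf(nm_line, "%31s T %63[a-z0-9_]", addr, sym) != 2)
		return -1;
	snprintf(out, outlen, ".global %s\n.set %s,0x%s", sym, sym, addr);
	return 0;
}
```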
+3 -1
arch/riscv/mm/fault.c
··· 86 86 pmd_t *pmd, *pmd_k; 87 87 pte_t *pte_k; 88 88 int index; 89 + unsigned long pfn; 89 90 90 91 /* User mode accesses just cause a SIGSEGV */ 91 92 if (user_mode(regs)) ··· 101 100 * of a task switch. 102 101 */ 103 102 index = pgd_index(addr); 104 - pgd = (pgd_t *)pfn_to_virt(csr_read(CSR_SATP)) + index; 103 + pfn = csr_read(CSR_SATP) & SATP_PPN; 104 + pgd = (pgd_t *)pfn_to_virt(pfn) + index; 105 105 pgd_k = init_mm.pgd + index; 106 106 107 107 if (!pgd_present(*pgd_k)) {
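The `fault.c` change masks the satp CSR to its PPN field before converting it to a pointer: the raw register also carries MODE and ASID bits, which would corrupt the computed page-table address. A sketch of the difference, assuming the RV64 Sv39 layout (`SATP_PPN` = bits 43:0; constants here are illustrative stand-ins for the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define SATP_PPN   0x00000FFFFFFFFFFFULL	/* bits 43:0, RV64 */
#define PAGE_SHIFT 12

/* Buggy: MODE/ASID bits in the upper satp fields leak into the address. */
static uint64_t pgd_phys_buggy(uint64_t satp)
{
	return satp << PAGE_SHIFT;
}

/* Fixed, as in the patch: mask to the PPN field first. */
static uint64_t pgd_phys_fixed(uint64_t satp)
{
	return (satp & SATP_PPN) << PAGE_SHIFT;
}
```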
+21 -11
arch/riscv/mm/init.c
··· 154 154 155 155 void __init setup_bootmem(void) 156 156 { 157 - phys_addr_t mem_size = 0; 158 - phys_addr_t total_mem = 0; 159 - phys_addr_t mem_start, start, end = 0; 157 + phys_addr_t mem_start = 0; 158 + phys_addr_t start, end = 0; 160 159 phys_addr_t vmlinux_end = __pa_symbol(&_end); 161 160 phys_addr_t vmlinux_start = __pa_symbol(&_start); 162 161 u64 i; ··· 163 164 /* Find the memory region containing the kernel */ 164 165 for_each_mem_range(i, &start, &end) { 165 166 phys_addr_t size = end - start; 166 - if (!total_mem) 167 + if (!mem_start) 167 168 mem_start = start; 168 169 if (start <= vmlinux_start && vmlinux_end <= end) 169 170 BUG_ON(size == 0); 170 - total_mem = total_mem + size; 171 171 } 172 172 173 173 /* 174 - * Remove memblock from the end of usable area to the 175 - * end of region 174 + * The maximal physical memory size is -PAGE_OFFSET. 175 + * Make sure that any memory beyond mem_start + (-PAGE_OFFSET) is removed 176 + * as it is unusable by kernel. 176 177 */ 177 - mem_size = min(total_mem, (phys_addr_t)-PAGE_OFFSET); 178 - if (mem_start + mem_size < end) 179 - memblock_remove(mem_start + mem_size, 180 - end - mem_start - mem_size); 178 + memblock_enforce_memory_limit(mem_start - PAGE_OFFSET); 181 179 182 180 /* Reserve from the start of the kernel to the end of the kernel */ 183 181 memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start); ··· 293 297 #define NUM_EARLY_PMDS (1UL + MAX_EARLY_MAPPING_SIZE / PGDIR_SIZE) 294 298 #endif 295 299 pmd_t early_pmd[PTRS_PER_PMD * NUM_EARLY_PMDS] __initdata __aligned(PAGE_SIZE); 300 + pmd_t early_dtb_pmd[PTRS_PER_PMD] __initdata __aligned(PAGE_SIZE); 296 301 297 302 static pmd_t *__init get_pmd_virt_early(phys_addr_t pa) 298 303 { ··· 491 494 load_pa + (va - PAGE_OFFSET), 492 495 map_size, PAGE_KERNEL_EXEC); 493 496 497 + #ifndef __PAGETABLE_PMD_FOLDED 498 + /* Setup early PMD for DTB */ 499 + create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA, 500 + (uintptr_t)early_dtb_pmd, PGDIR_SIZE, 
PAGE_TABLE); 501 + /* Create two consecutive PMD mappings for FDT early scan */ 502 + pa = dtb_pa & ~(PMD_SIZE - 1); 503 + create_pmd_mapping(early_dtb_pmd, DTB_EARLY_BASE_VA, 504 + pa, PMD_SIZE, PAGE_KERNEL); 505 + create_pmd_mapping(early_dtb_pmd, DTB_EARLY_BASE_VA + PMD_SIZE, 506 + pa + PMD_SIZE, PMD_SIZE, PAGE_KERNEL); 507 + dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PMD_SIZE - 1)); 508 + #else 494 509 /* Create two consecutive PGD mappings for FDT early scan */ 495 510 pa = dtb_pa & ~(PGDIR_SIZE - 1); 496 511 create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA, ··· 510 501 create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA + PGDIR_SIZE, 511 502 pa + PGDIR_SIZE, PGDIR_SIZE, PAGE_KERNEL); 512 503 dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PGDIR_SIZE - 1)); 504 + #endif 513 505 dtb_early_pa = dtb_pa; 514 506 515 507 /*
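The `setup_bootmem()` rework relies on unsigned wrap-around: since `-PAGE_OFFSET` is the maximal linearly mappable size, `mem_start - PAGE_OFFSET` equals `mem_start + (-PAGE_OFFSET)`, i.e. the end of usable memory. A sketch of that arithmetic, assuming the RV64 default `PAGE_OFFSET` of `0xffffffe000000000` purely for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET 0xffffffe000000000ULL	/* RV64 default, assumed */

/* Same expression as the patched memblock_enforce_memory_limit() call:
 * unsigned subtraction wraps to mem_start + (-PAGE_OFFSET). */
static uint64_t memory_limit(uint64_t mem_start)
{
	return mem_start - PAGE_OFFSET;
}
```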
+6 -4
arch/s390/configs/debug_defconfig
··· 93 93 CONFIG_FRONTSWAP=y 94 94 CONFIG_CMA_DEBUG=y 95 95 CONFIG_CMA_DEBUGFS=y 96 + CONFIG_CMA_AREAS=7 96 97 CONFIG_MEM_SOFT_DIRTY=y 97 98 CONFIG_ZSWAP=y 98 - CONFIG_ZSMALLOC=m 99 + CONFIG_ZSMALLOC=y 99 100 CONFIG_ZSMALLOC_STAT=y 100 101 CONFIG_DEFERRED_STRUCT_PAGE_INIT=y 101 102 CONFIG_IDLE_PAGE_TRACKING=y ··· 379 378 CONFIG_CGROUP_NET_PRIO=y 380 379 CONFIG_BPF_JIT=y 381 380 CONFIG_NET_PKTGEN=m 382 - # CONFIG_NET_DROP_MONITOR is not set 383 381 CONFIG_PCI=y 384 382 # CONFIG_PCIEASPM is not set 385 383 CONFIG_PCI_DEBUG=y ··· 386 386 CONFIG_HOTPLUG_PCI_S390=y 387 387 CONFIG_DEVTMPFS=y 388 388 CONFIG_CONNECTOR=y 389 - CONFIG_ZRAM=m 389 + CONFIG_ZRAM=y 390 390 CONFIG_BLK_DEV_LOOP=m 391 391 CONFIG_BLK_DEV_CRYPTOLOOP=m 392 392 CONFIG_BLK_DEV_DRBD=m ··· 689 689 CONFIG_CRYPTO_DH=m 690 690 CONFIG_CRYPTO_ECDH=m 691 691 CONFIG_CRYPTO_ECRDSA=m 692 + CONFIG_CRYPTO_SM2=m 692 693 CONFIG_CRYPTO_CURVE25519=m 693 694 CONFIG_CRYPTO_GCM=y 694 695 CONFIG_CRYPTO_CHACHA20POLY1305=m ··· 710 709 CONFIG_CRYPTO_RMD256=m 711 710 CONFIG_CRYPTO_RMD320=m 712 711 CONFIG_CRYPTO_SHA3=m 713 - CONFIG_CRYPTO_SM3=m 714 712 CONFIG_CRYPTO_TGR192=m 715 713 CONFIG_CRYPTO_WP512=m 716 714 CONFIG_CRYPTO_AES_TI=m ··· 753 753 CONFIG_CRYPTO_AES_S390=m 754 754 CONFIG_CRYPTO_GHASH_S390=m 755 755 CONFIG_CRYPTO_CRC32_S390=y 756 + CONFIG_CRYPTO_DEV_VIRTIO=m 756 757 CONFIG_CORDIC=m 757 758 CONFIG_CRC32_SELFTEST=y 758 759 CONFIG_CRC4=m ··· 830 829 CONFIG_FAULT_INJECTION=y 831 830 CONFIG_FAILSLAB=y 832 831 CONFIG_FAIL_PAGE_ALLOC=y 832 + CONFIG_FAULT_INJECTION_USERCOPY=y 833 833 CONFIG_FAIL_MAKE_REQUEST=y 834 834 CONFIG_FAIL_IO_TIMEOUT=y 835 835 CONFIG_FAIL_FUTEX=y
+5 -4
arch/s390/configs/defconfig
··· 87 87 CONFIG_TRANSPARENT_HUGEPAGE=y 88 88 CONFIG_CLEANCACHE=y 89 89 CONFIG_FRONTSWAP=y 90 + CONFIG_CMA_AREAS=7 90 91 CONFIG_MEM_SOFT_DIRTY=y 91 92 CONFIG_ZSWAP=y 92 - CONFIG_ZSMALLOC=m 93 + CONFIG_ZSMALLOC=y 93 94 CONFIG_ZSMALLOC_STAT=y 94 95 CONFIG_DEFERRED_STRUCT_PAGE_INIT=y 95 96 CONFIG_IDLE_PAGE_TRACKING=y ··· 372 371 CONFIG_CGROUP_NET_PRIO=y 373 372 CONFIG_BPF_JIT=y 374 373 CONFIG_NET_PKTGEN=m 375 - # CONFIG_NET_DROP_MONITOR is not set 376 374 CONFIG_PCI=y 377 375 # CONFIG_PCIEASPM is not set 378 376 CONFIG_HOTPLUG_PCI=y ··· 379 379 CONFIG_UEVENT_HELPER=y 380 380 CONFIG_DEVTMPFS=y 381 381 CONFIG_CONNECTOR=y 382 - CONFIG_ZRAM=m 382 + CONFIG_ZRAM=y 383 383 CONFIG_BLK_DEV_LOOP=m 384 384 CONFIG_BLK_DEV_CRYPTOLOOP=m 385 385 CONFIG_BLK_DEV_DRBD=m ··· 680 680 CONFIG_CRYPTO_DH=m 681 681 CONFIG_CRYPTO_ECDH=m 682 682 CONFIG_CRYPTO_ECRDSA=m 683 + CONFIG_CRYPTO_SM2=m 683 684 CONFIG_CRYPTO_CURVE25519=m 684 685 CONFIG_CRYPTO_GCM=y 685 686 CONFIG_CRYPTO_CHACHA20POLY1305=m ··· 702 701 CONFIG_CRYPTO_RMD256=m 703 702 CONFIG_CRYPTO_RMD320=m 704 703 CONFIG_CRYPTO_SHA3=m 705 - CONFIG_CRYPTO_SM3=m 706 704 CONFIG_CRYPTO_TGR192=m 707 705 CONFIG_CRYPTO_WP512=m 708 706 CONFIG_CRYPTO_AES_TI=m ··· 745 745 CONFIG_CRYPTO_AES_S390=m 746 746 CONFIG_CRYPTO_GHASH_S390=m 747 747 CONFIG_CRYPTO_CRC32_S390=y 748 + CONFIG_CRYPTO_DEV_VIRTIO=m 748 749 CONFIG_CORDIC=m 749 750 CONFIG_PRIME_NUMBERS=m 750 751 CONFIG_CRC4=m
+1 -1
arch/s390/configs/zfcpdump_defconfig
··· 17 17 # CONFIG_CHSC_SCH is not set 18 18 # CONFIG_SCM_BUS is not set 19 19 CONFIG_CRASH_DUMP=y 20 - # CONFIG_SECCOMP is not set 21 20 # CONFIG_PFAULT is not set 22 21 # CONFIG_S390_HYPFS_FS is not set 23 22 # CONFIG_VIRTUALIZATION is not set 24 23 # CONFIG_S390_GUEST is not set 24 + # CONFIG_SECCOMP is not set 25 25 CONFIG_PARTITION_ADVANCED=y 26 26 CONFIG_IBM_PARTITION=y 27 27 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+30 -22
arch/s390/include/asm/pgtable.h
··· 692 692 return !!(pud_val(pud) & _REGION3_ENTRY_LARGE); 693 693 } 694 694 695 - static inline unsigned long pud_pfn(pud_t pud) 696 - { 697 - unsigned long origin_mask; 698 - 699 - origin_mask = _REGION_ENTRY_ORIGIN; 700 - if (pud_large(pud)) 701 - origin_mask = _REGION3_ENTRY_ORIGIN_LARGE; 702 - return (pud_val(pud) & origin_mask) >> PAGE_SHIFT; 703 - } 704 - 705 695 #define pmd_leaf pmd_large 706 696 static inline int pmd_large(pmd_t pmd) 707 697 { ··· 735 745 static inline int pmd_none(pmd_t pmd) 736 746 { 737 747 return pmd_val(pmd) == _SEGMENT_ENTRY_EMPTY; 738 - } 739 - 740 - static inline unsigned long pmd_pfn(pmd_t pmd) 741 - { 742 - unsigned long origin_mask; 743 - 744 - origin_mask = _SEGMENT_ENTRY_ORIGIN; 745 - if (pmd_large(pmd)) 746 - origin_mask = _SEGMENT_ENTRY_ORIGIN_LARGE; 747 - return (pmd_val(pmd) & origin_mask) >> PAGE_SHIFT; 748 748 } 749 749 750 750 #define pmd_write pmd_write ··· 1218 1238 #define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1)) 1219 1239 #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1)) 1220 1240 1221 - #define pmd_deref(pmd) (pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN) 1222 - #define pud_deref(pud) (pud_val(pud) & _REGION_ENTRY_ORIGIN) 1223 1241 #define p4d_deref(pud) (p4d_val(pud) & _REGION_ENTRY_ORIGIN) 1224 1242 #define pgd_deref(pgd) (pgd_val(pgd) & _REGION_ENTRY_ORIGIN) 1243 + 1244 + static inline unsigned long pmd_deref(pmd_t pmd) 1245 + { 1246 + unsigned long origin_mask; 1247 + 1248 + origin_mask = _SEGMENT_ENTRY_ORIGIN; 1249 + if (pmd_large(pmd)) 1250 + origin_mask = _SEGMENT_ENTRY_ORIGIN_LARGE; 1251 + return pmd_val(pmd) & origin_mask; 1252 + } 1253 + 1254 + static inline unsigned long pmd_pfn(pmd_t pmd) 1255 + { 1256 + return pmd_deref(pmd) >> PAGE_SHIFT; 1257 + } 1258 + 1259 + static inline unsigned long pud_deref(pud_t pud) 1260 + { 1261 + unsigned long origin_mask; 1262 + 1263 + origin_mask = _REGION_ENTRY_ORIGIN; 1264 + if (pud_large(pud)) 1265 + origin_mask = 
_REGION3_ENTRY_ORIGIN_LARGE; 1266 + return pud_val(pud) & origin_mask; 1267 + } 1268 + 1269 + static inline unsigned long pud_pfn(pud_t pud) 1270 + { 1271 + return pud_deref(pud) >> PAGE_SHIFT; 1272 + } 1225 1273 1226 1274 /* 1227 1275 * The pgd_offset function *always* adds the index for the top-level
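The s390 `pgtable.h` rework turns `pmd_deref()` into a function that picks the origin mask based on the large-segment bit, and redefines `pmd_pfn()` in terms of it. A simplified model of that split (the `SEG_*` constants below are illustrative stand-ins, not the kernel's real `_SEGMENT_ENTRY_*` values):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define SEG_LARGE	0x0400ULL	/* stand-in for _SEGMENT_ENTRY_LARGE */
#define ORIGIN_MASK	(~0x7ffULL)	/* stand-in for _SEGMENT_ENTRY_ORIGIN */
#define ORIGIN_MASK_LG	(~0xfffffULL)	/* stand-in for .._ORIGIN_LARGE */

/* Large entries keep fewer origin bits, so the mask must be chosen
 * per entry -- the point of making pmd_deref() a function. */
static uint64_t pmd_deref(uint64_t pmd)
{
	uint64_t mask = (pmd & SEG_LARGE) ? ORIGIN_MASK_LG : ORIGIN_MASK;

	return pmd & mask;
}

static uint64_t pmd_pfn(uint64_t pmd)
{
	return pmd_deref(pmd) >> PAGE_SHIFT;
}
```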
arch/s390/include/asm/vdso/vdso.h
-8
arch/s390/kernel/asm-offsets.c
··· 61 61 BLANK(); 62 62 OFFSET(__VDSO_GETCPU_VAL, vdso_per_cpu_data, getcpu_val); 63 63 BLANK(); 64 - /* constants used by the vdso */ 65 - DEFINE(__CLOCK_REALTIME, CLOCK_REALTIME); 66 - DEFINE(__CLOCK_MONOTONIC, CLOCK_MONOTONIC); 67 - DEFINE(__CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE); 68 - DEFINE(__CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_COARSE); 69 - DEFINE(__CLOCK_THREAD_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID); 70 - DEFINE(__CLOCK_COARSE_RES, LOW_RES_NSEC); 71 - BLANK(); 72 64 /* idle data offsets */ 73 65 OFFSET(__CLOCK_IDLE_ENTER, s390_idle_data, clock_idle_enter); 74 66 OFFSET(__CLOCK_IDLE_EXIT, s390_idle_data, clock_idle_exit);
+2 -1
arch/s390/kernel/smp.c
··· 855 855 856 856 static void smp_init_secondary(void) 857 857 { 858 - int cpu = smp_processor_id(); 858 + int cpu = raw_smp_processor_id(); 859 859 860 860 S390_lowcore.last_update_clock = get_tod_clock(); 861 861 restore_access_regs(S390_lowcore.access_regs_save_area); 862 862 set_cpu_flag(CIF_ASCE_PRIMARY); 863 863 set_cpu_flag(CIF_ASCE_SECONDARY); 864 864 cpu_init(); 865 + rcu_cpu_starting(cpu); 865 866 preempt_disable(); 866 867 init_cpu_timer(); 867 868 vtime_init();
+4
arch/s390/pci/pci_event.c
··· 101 101 if (ret) 102 102 break; 103 103 104 + /* the PCI function will be scanned once function 0 appears */ 105 + if (!zdev->zbus->bus) 106 + break; 107 + 104 108 pdev = pci_scan_single_device(zdev->zbus->bus, zdev->devfn); 105 109 if (!pdev) 106 110 break;
+1
arch/x86/boot/compressed/ident_map_64.c
··· 164 164 add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE); 165 165 166 166 /* Load the new page-table. */ 167 + sev_verify_cbit(top_level_pgt); 167 168 write_cr3(top_level_pgt); 168 169 } 169 170
+19 -1
arch/x86/boot/compressed/mem_encrypt.S
··· 68 68 SYM_FUNC_END(get_sev_encryption_bit) 69 69 70 70 .code64 71 + 72 + #include "../../kernel/sev_verify_cbit.S" 73 + 71 74 SYM_FUNC_START(set_sev_encryption_mask) 72 75 #ifdef CONFIG_AMD_MEM_ENCRYPT 73 76 push %rbp ··· 83 80 jz .Lno_sev_mask 84 81 85 82 bts %rax, sme_me_mask(%rip) /* Create the encryption mask */ 83 + 84 + /* 85 + * Read MSR_AMD64_SEV again and store it to sev_status. Can't do this in 86 + * get_sev_encryption_bit() because this function is 32-bit code and 87 + * shared between 64-bit and 32-bit boot path. 88 + */ 89 + movl $MSR_AMD64_SEV, %ecx /* Read the SEV MSR */ 90 + rdmsr 91 + 92 + /* Store MSR value in sev_status */ 93 + shlq $32, %rdx 94 + orq %rdx, %rax 95 + movq %rax, sev_status(%rip) 86 96 87 97 .Lno_sev_mask: 88 98 movq %rbp, %rsp /* Restore original stack pointer */ ··· 112 96 113 97 #ifdef CONFIG_AMD_MEM_ENCRYPT 114 98 .balign 8 115 - SYM_DATA(sme_me_mask, .quad 0) 99 + SYM_DATA(sme_me_mask, .quad 0) 100 + SYM_DATA(sev_status, .quad 0) 101 + SYM_DATA(sev_check_data, .quad 0) 116 102 #endif
+2
arch/x86/boot/compressed/misc.h
··· 159 159 void boot_stage1_vc(void); 160 160 void boot_stage2_vc(void); 161 161 162 + unsigned long sev_verify_cbit(unsigned long cr3); 163 + 162 164 #endif /* BOOT_COMPRESSED_MISC_H */
+9 -5
arch/x86/hyperv/hv_apic.c
··· 273 273 pr_info("Hyper-V: Using enlightened APIC (%s mode)", 274 274 x2apic_enabled() ? "x2apic" : "xapic"); 275 275 /* 276 - * With x2apic, architectural x2apic MSRs are equivalent to the 277 - * respective synthetic MSRs, so there's no need to override 278 - * the apic accessors. The only exception is 279 - * hv_apic_eoi_write, because it benefits from lazy EOI when 280 - * available, but it works for both xapic and x2apic modes. 276 + * When in x2apic mode, don't use the Hyper-V specific APIC 277 + * accessors since the field layout in the ICR register is 278 + * different in x2apic mode. Furthermore, the architectural 279 + * x2apic MSRs function just as well as the Hyper-V 280 + * synthetic APIC MSRs, so there's no benefit in having 281 + * separate Hyper-V accessors for x2apic mode. The only 282 + * exception is hv_apic_eoi_write, because it benefits from 283 + * lazy EOI when available, but the same accessor works for 284 + * both xapic and x2apic because the field layout is the same. 281 285 */ 282 286 apic_set_eoi_write(hv_apic_eoi_write); 283 287 if (!x2apic_enabled()) {
+18 -5
arch/x86/kernel/apic/x2apic_uv_x.c
··· 290 290 { 291 291 /* Relies on 'to' being NULL chars so result will be NULL terminated */ 292 292 strncpy(to, from, len-1); 293 + 294 + /* Trim trailing spaces */ 295 + (void)strim(to); 293 296 } 294 297 295 298 /* Find UV arch type entry in UVsystab */ ··· 369 366 return ret; 370 367 } 371 368 372 - static int __init uv_set_system_type(char *_oem_id) 369 + static int __init uv_set_system_type(char *_oem_id, char *_oem_table_id) 373 370 { 374 371 /* Save OEM_ID passed from ACPI MADT */ 375 372 uv_stringify(sizeof(oem_id), oem_id, _oem_id); ··· 389 386 /* (Not hubless), not a UV */ 390 387 return 0; 391 388 389 + /* Is UV hubless system */ 390 + uv_hubless_system = 0x01; 391 + 392 + /* UV5 Hubless */ 393 + if (strncmp(uv_archtype, "NSGI5", 5) == 0) 394 + uv_hubless_system |= 0x20; 395 + 392 396 /* UV4 Hubless: CH */ 393 - if (strncmp(uv_archtype, "NSGI4", 5) == 0) 394 - uv_hubless_system = 0x11; 397 + else if (strncmp(uv_archtype, "NSGI4", 5) == 0) 398 + uv_hubless_system |= 0x10; 395 399 396 400 /* UV3 Hubless: UV300/MC990X w/o hub */ 397 401 else 398 - uv_hubless_system = 0x9; 402 + uv_hubless_system |= 0x8; 403 + 404 + /* Copy APIC type */ 405 + uv_stringify(sizeof(oem_table_id), oem_table_id, _oem_table_id); 399 406 400 407 pr_info("UV: OEM IDs %s/%s, SystemType %d, HUBLESS ID %x\n", 401 408 oem_id, oem_table_id, uv_system_type, uv_hubless_system); ··· 469 456 uv_cpu_info->p_uv_hub_info = &uv_hub_info_node0; 470 457 471 458 /* If not UV, return. */ 472 - if (likely(uv_set_system_type(_oem_id) == 0)) 459 + if (uv_set_system_type(_oem_id, _oem_table_id) == 0) 473 460 return 0; 474 461 475 462 /* Save and Decode OEM Table ID */
+33 -18
arch/x86/kernel/cpu/bugs.c
··· 1254 1254 return 0; 1255 1255 } 1256 1256 1257 + static bool is_spec_ib_user_controlled(void) 1258 + { 1259 + return spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL || 1260 + spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP || 1261 + spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL || 1262 + spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP; 1263 + } 1264 + 1257 1265 static int ib_prctl_set(struct task_struct *task, unsigned long ctrl) 1258 1266 { 1259 1267 switch (ctrl) { ··· 1269 1261 if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE && 1270 1262 spectre_v2_user_stibp == SPECTRE_V2_USER_NONE) 1271 1263 return 0; 1264 + 1272 1265 /* 1273 - * Indirect branch speculation is always disabled in strict 1274 - * mode. It can neither be enabled if it was force-disabled 1275 - * by a previous prctl call. 1266 + * With strict mode for both IBPB and STIBP, the instruction 1267 + * code paths avoid checking this task flag and instead, 1268 + * unconditionally run the instruction. However, STIBP and IBPB 1269 + * are independent and either can be set to conditionally 1270 + * enabled regardless of the mode of the other. 1271 + * 1272 + * If either is set to conditional, allow the task flag to be 1273 + * updated, unless it was force-disabled by a previous prctl 1274 + * call. Currently, this is possible on an AMD CPU which has the 1275 + * feature X86_FEATURE_AMD_STIBP_ALWAYS_ON. In this case, if the 1276 + * kernel is booted with 'spectre_v2_user=seccomp', then 1277 + * spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP and 1278 + * spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED. 
1276 1279 */ 1277 - if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT || 1278 - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT || 1279 - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED || 1280 + if (!is_spec_ib_user_controlled() || 1280 1281 task_spec_ib_force_disable(task)) 1281 1282 return -EPERM; 1283 + 1282 1284 task_clear_spec_ib_disable(task); 1283 1285 task_update_spec_tif(task); 1284 1286 break; ··· 1301 1283 if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE && 1302 1284 spectre_v2_user_stibp == SPECTRE_V2_USER_NONE) 1303 1285 return -EPERM; 1304 - if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT || 1305 - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT || 1306 - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED) 1286 + 1287 + if (!is_spec_ib_user_controlled()) 1307 1288 return 0; 1289 + 1308 1290 task_set_spec_ib_disable(task); 1309 1291 if (ctrl == PR_SPEC_FORCE_DISABLE) 1310 1292 task_set_spec_ib_force_disable(task); ··· 1369 1351 if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE && 1370 1352 spectre_v2_user_stibp == SPECTRE_V2_USER_NONE) 1371 1353 return PR_SPEC_ENABLE; 1372 - else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT || 1373 - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT || 1374 - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED) 1375 - return PR_SPEC_DISABLE; 1376 - else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL || 1377 - spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP || 1378 - spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL || 1379 - spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP) { 1354 + else if (is_spec_ib_user_controlled()) { 1380 1355 if (task_spec_ib_force_disable(task)) 1381 1356 return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE; 1382 1357 if (task_spec_ib_disable(task)) 1383 1358 return PR_SPEC_PRCTL | PR_SPEC_DISABLE; 1384 1359 return PR_SPEC_PRCTL | PR_SPEC_ENABLE; 1385 - } else 1360 + } else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT || 1361 + spectre_v2_user_stibp == 
SPECTRE_V2_USER_STRICT || 1362 + spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED) 1363 + return PR_SPEC_DISABLE; 1364 + else 1386 1365 return PR_SPEC_NOT_AFFECTED; 1387 1366 } 1388 1367
+16
arch/x86/kernel/head_64.S
··· 161 161 162 162 /* Setup early boot stage 4-/5-level pagetables. */ 163 163 addq phys_base(%rip), %rax 164 + 165 + /* 166 + * For SEV guests: Verify that the C-bit is correct. A malicious 167 + * hypervisor could lie about the C-bit position to perform a ROP 168 + * attack on the guest by writing to the unencrypted stack and wait for 169 + * the next RET instruction. 170 + * %rsi carries pointer to realmode data and is callee-clobbered. Save 171 + * and restore it. 172 + */ 173 + pushq %rsi 174 + movq %rax, %rdi 175 + call sev_verify_cbit 176 + popq %rsi 177 + 178 + /* Switch to new page-table */ 164 179 movq %rax, %cr3 165 180 166 181 /* Ensure I am executing from virtual addresses */ ··· 294 279 SYM_CODE_END(secondary_startup_64) 295 280 296 281 #include "verify_cpu.S" 282 + #include "sev_verify_cbit.S" 297 283 298 284 #ifdef CONFIG_HOTPLUG_CPU 299 285 /*
+26
arch/x86/kernel/sev-es-shared.c
··· 178 178 goto fail; 179 179 regs->dx = val >> 32; 180 180 181 + /* 182 + * This is a VC handler and the #VC is only raised when SEV-ES is 183 + * active, which means SEV must be active too. Do sanity checks on the 184 + * CPUID results to make sure the hypervisor does not trick the kernel 185 + * into the no-sev path. This could map sensitive data unencrypted and 186 + * make it accessible to the hypervisor. 187 + * 188 + * In particular, check for: 189 + * - Hypervisor CPUID bit 190 + * - Availability of CPUID leaf 0x8000001f 191 + * - SEV CPUID bit. 192 + * 193 + * The hypervisor might still report the wrong C-bit position, but this 194 + * can't be checked here. 195 + */ 196 + 197 + if ((fn == 1 && !(regs->cx & BIT(31)))) 198 + /* Hypervisor bit */ 199 + goto fail; 200 + else if (fn == 0x80000000 && (regs->ax < 0x8000001f)) 201 + /* SEV leaf check */ 202 + goto fail; 203 + else if ((fn == 0x8000001f && !(regs->ax & BIT(1)))) 204 + /* SEV bit */ 205 + goto fail; 206 + 181 207 /* Skip over the CPUID two-byte opcode */ 182 208 regs->ip += 2; 183 209
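The three CPUID sanity checks added above can be read as one predicate over the hypervisor-supplied results: the hypervisor bit in leaf 1, the maximum extended leaf covering `0x8000001f`, and the SEV bit in that leaf. A simplified userspace model (the real code inspects saved register state in the #VC handler and jumps to a fail path):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Returns false for any CPUID result that would let the hypervisor
 * steer an SEV-ES guest onto the no-SEV path. */
static bool cpuid_result_sane(uint32_t fn, uint32_t ax, uint32_t cx)
{
	if (fn == 1 && !(cx & (1U << 31)))
		return false;		/* hypervisor bit must be set */
	if (fn == 0x80000000 && ax < 0x8000001f)
		return false;		/* SEV leaf must exist */
	if (fn == 0x8000001f && !(ax & (1U << 1)))
		return false;		/* SEV bit must be set */
	return true;
}
```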
+13 -7
arch/x86/kernel/sev-es.c
··· 374 374 return ES_EXCEPTION; 375 375 } 376 376 377 - static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt, 378 - unsigned long vaddr, phys_addr_t *paddr) 377 + static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt, 378 + unsigned long vaddr, phys_addr_t *paddr) 379 379 { 380 380 unsigned long va = (unsigned long)vaddr; 381 381 unsigned int level; ··· 394 394 if (user_mode(ctxt->regs)) 395 395 ctxt->fi.error_code |= X86_PF_USER; 396 396 397 - return false; 397 + return ES_EXCEPTION; 398 398 } 399 + 400 + if (WARN_ON_ONCE(pte_val(*pte) & _PAGE_ENC)) 401 + /* Emulated MMIO to/from encrypted memory not supported */ 402 + return ES_UNSUPPORTED; 399 403 400 404 pa = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT; 401 405 pa |= va & ~page_level_mask(level); 402 406 403 407 *paddr = pa; 404 408 405 - return true; 409 + return ES_OK; 406 410 } 407 411 408 412 /* Include code shared with pre-decompression boot stage */ ··· 735 731 { 736 732 u64 exit_code, exit_info_1, exit_info_2; 737 733 unsigned long ghcb_pa = __pa(ghcb); 734 + enum es_result res; 738 735 phys_addr_t paddr; 739 736 void __user *ref; 740 737 ··· 745 740 746 741 exit_code = read ? SVM_VMGEXIT_MMIO_READ : SVM_VMGEXIT_MMIO_WRITE; 747 742 748 - if (!vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr)) { 749 - if (!read) 743 + res = vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr); 744 + if (res != ES_OK) { 745 + if (res == ES_EXCEPTION && !read) 750 746 ctxt->fi.error_code |= X86_PF_WRITE; 751 747 752 - return ES_EXCEPTION; 748 + return res; 753 749 } 754 750 755 751 exit_info_1 = paddr;
+89
arch/x86/kernel/sev_verify_cbit.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * sev_verify_cbit.S - Code for verification of the C-bit position reported 4 + * by the Hypervisor when running with SEV enabled. 5 + * 6 + * Copyright (c) 2020 Joerg Roedel (jroedel@suse.de) 7 + * 8 + * sev_verify_cbit() is called before switching to a new long-mode page-table 9 + * at boot. 10 + * 11 + * Verify that the C-bit position is correct by writing a random value to 12 + * an encrypted memory location while on the current page-table. Then it 13 + * switches to the new page-table to verify the memory content is still the 14 + * same. After that it switches back to the current page-table and when the 15 + * check succeeded it returns. If the check failed the code invalidates the 16 + * stack pointer and goes into a hlt loop. The stack-pointer is invalidated to 17 + * make sure no interrupt or exception can get the CPU out of the hlt loop. 18 + * 19 + * New page-table pointer is expected in %rdi (first parameter) 20 + * 21 + */ 22 + SYM_FUNC_START(sev_verify_cbit) 23 + #ifdef CONFIG_AMD_MEM_ENCRYPT 24 + /* First check if a C-bit was detected */ 25 + movq sme_me_mask(%rip), %rsi 26 + testq %rsi, %rsi 27 + jz 3f 28 + 29 + /* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */ 30 + movq sev_status(%rip), %rsi 31 + testq %rsi, %rsi 32 + jz 3f 33 + 34 + /* Save CR4 in %rsi */ 35 + movq %cr4, %rsi 36 + 37 + /* Disable Global Pages */ 38 + movq %rsi, %rdx 39 + andq $(~X86_CR4_PGE), %rdx 40 + movq %rdx, %cr4 41 + 42 + /* 43 + * Verified that running under SEV - now get a random value using 44 + * RDRAND. This instruction is mandatory when running as an SEV guest. 45 + * 46 + * Don't bail out of the loop if RDRAND returns errors. It is better to 47 + * prevent forward progress than to work with a non-random value here. 
48 + */ 49 + 1: rdrand %rdx 50 + jnc 1b 51 + 52 + /* Store value to memory and keep it in %rdx */ 53 + movq %rdx, sev_check_data(%rip) 54 + 55 + /* Backup current %cr3 value to restore it later */ 56 + movq %cr3, %rcx 57 + 58 + /* Switch to new %cr3 - This might unmap the stack */ 59 + movq %rdi, %cr3 60 + 61 + /* 62 + * Compare value in %rdx with memory location. If C-bit is incorrect 63 + * this would read the encrypted data and make the check fail. 64 + */ 65 + cmpq %rdx, sev_check_data(%rip) 66 + 67 + /* Restore old %cr3 */ 68 + movq %rcx, %cr3 69 + 70 + /* Restore previous CR4 */ 71 + movq %rsi, %cr4 72 + 73 + /* Check CMPQ result */ 74 + je 3f 75 + 76 + /* 77 + * The check failed, prevent any forward progress to prevent ROP 78 + * attacks, invalidate the stack and go into a hlt loop. 79 + */ 80 + xorq %rsp, %rsp 81 + subq $0x1000, %rsp 82 + 2: hlt 83 + jmp 2b 84 + 3: 85 + #endif 86 + /* Return page-table pointer */ 87 + movq %rdi, %rax 88 + ret 89 + SYM_FUNC_END(sev_verify_cbit)
+1 -3
arch/x86/lib/memcpy_64.S
··· 16 16 * to a jmp to memcpy_erms which does the REP; MOVSB mem copy. 17 17 */ 18 18 19 - .weak memcpy 20 - 21 19 /* 22 20 * memcpy - Copy a memory block. 23 21 * ··· 28 30 * rax original destination 29 31 */ 30 32 SYM_FUNC_START_ALIAS(__memcpy) 31 - SYM_FUNC_START_LOCAL(memcpy) 33 + SYM_FUNC_START_WEAK(memcpy) 32 34 ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \ 33 35 "jmp memcpy_erms", X86_FEATURE_ERMS 34 36
+1 -3
arch/x86/lib/memmove_64.S
··· 24 24 * Output: 25 25 * rax: dest 26 26 */ 27 - .weak memmove 28 - 29 - SYM_FUNC_START_ALIAS(memmove) 27 + SYM_FUNC_START_WEAK(memmove) 30 28 SYM_FUNC_START(__memmove) 31 29 32 30 mov %rdi, %rax
+1 -3
arch/x86/lib/memset_64.S
··· 6 6 #include <asm/alternative-asm.h> 7 7 #include <asm/export.h> 8 8 9 - .weak memset 10 - 11 9 /* 12 10 * ISO C memset - set a memory block to a byte value. This function uses fast 13 11 * string to get better performance than the original function. The code is ··· 17 19 * 18 20 * rax original destination 19 21 */ 20 - SYM_FUNC_START_ALIAS(memset) 22 + SYM_FUNC_START_WEAK(memset) 21 23 SYM_FUNC_START(__memset) 22 24 /* 23 25 * Some CPUs support enhanced REP MOVSB/STOSB feature. It is recommended
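The three string-routine diffs above replace a bare `.weak` directive plus a local/alias start marker with `SYM_FUNC_START_WEAK`, keeping `memcpy`/`memmove`/`memset` weak so that an instrumented build (e.g. KASAN) can override them with strong definitions. In C the same linkage rule looks like this (hypothetical `do_copy` name, not from the patch):

```c
/* A weak default definition: any strong definition of do_copy() in another
 * object file silently replaces this one at link time, which is exactly how
 * the kernel lets instrumented mem* variants override the defaults. */
__attribute__((weak)) int do_copy(void)
{
	return 1;	/* default implementation, used when nothing overrides it */
}
```

With no strong override linked in, the weak body is the one that runs.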
+1
arch/x86/mm/mem_encrypt.c
··· 39 39 */ 40 40 u64 sme_me_mask __section(".data") = 0; 41 41 u64 sev_status __section(".data") = 0; 42 + u64 sev_check_data __section(".data") = 0; 42 43 EXPORT_SYMBOL(sme_me_mask); 43 44 DEFINE_STATIC_KEY_FALSE(sev_enable_key); 44 45 EXPORT_SYMBOL_GPL(sev_enable_key);
+2 -2
arch/xtensa/mm/init.c
··· 89 89 /* set highmem page free */ 90 90 for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, 91 91 &range_start, &range_end, NULL) { 92 - unsigned long start = PHYS_PFN(range_start); 93 - unsigned long end = PHYS_PFN(range_end); 92 + unsigned long start = PFN_UP(range_start); 93 + unsigned long end = PFN_DOWN(range_end); 94 94 95 95 /* Ignore complete lowmem entries */ 96 96 if (end <= max_low)
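The xtensa fix swaps PHYS_PFN() (plain truncation) for PFN_UP() on the range start and PFN_DOWN() on the range end, so that only pages fully contained in a free memory range are marked free. The rounding macros re-created with an assumed 4 KiB page size:

```c
/* Minimal re-creation of the kernel's PFN rounding helpers (PAGE_SHIFT
 * assumed to be 12, i.e. 4 KiB pages). */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Truncate: the page containing this address. */
#define PFN_DOWN(x)	((unsigned long)(x) >> PAGE_SHIFT)
/* Round up: the first page starting at or after this address. */
#define PFN_UP(x)	(((unsigned long)(x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
```

Using PFN_UP for the start and PFN_DOWN for the end excludes partial pages at both edges, where the old PHYS_PFN on both ends could free a page that the range only partially covers.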
+2 -4
drivers/base/core.c
··· 773 773 dev_dbg(link->consumer, "Dropping the link to %s\n", 774 774 dev_name(link->supplier)); 775 775 776 - if (link->flags & DL_FLAG_PM_RUNTIME) 777 - pm_runtime_drop_link(link->consumer); 776 + pm_runtime_drop_link(link); 778 777 779 778 list_del_rcu(&link->s_node); 780 779 list_del_rcu(&link->c_node); ··· 787 788 dev_info(link->consumer, "Dropping the link to %s\n", 788 789 dev_name(link->supplier)); 789 790 790 - if (link->flags & DL_FLAG_PM_RUNTIME) 791 - pm_runtime_drop_link(link->consumer); 791 + pm_runtime_drop_link(link); 792 792 793 793 list_del(&link->s_node); 794 794 list_del(&link->c_node);
+5 -4
drivers/base/dd.c
··· 1117 1117 1118 1118 drv = dev->driver; 1119 1119 if (drv) { 1120 + pm_runtime_get_sync(dev); 1121 + 1120 1122 while (device_links_busy(dev)) { 1121 1123 __device_driver_unlock(dev, parent); 1122 1124 ··· 1130 1128 * have released the driver successfully while this one 1131 1129 * was waiting, so check for that. 1132 1130 */ 1133 - if (dev->driver != drv) 1131 + if (dev->driver != drv) { 1132 + pm_runtime_put(dev); 1134 1133 return; 1134 + } 1135 1135 } 1136 - 1137 - pm_runtime_get_sync(dev); 1138 - pm_runtime_clean_up_links(dev); 1139 1136 1140 1137 driver_sysfs_remove(dev); 1141 1138
+20 -37
drivers/base/power/runtime.c
··· 1643 1643 } 1644 1644 1645 1645 /** 1646 - * pm_runtime_clean_up_links - Prepare links to consumers for driver removal. 1647 - * @dev: Device whose driver is going to be removed. 1648 - * 1649 - * Check links from this device to any consumers and if any of them have active 1650 - * runtime PM references to the device, drop the usage counter of the device 1651 - * (as many times as needed). 1652 - * 1653 - * Links with the DL_FLAG_MANAGED flag unset are ignored. 1654 - * 1655 - * Since the device is guaranteed to be runtime-active at the point this is 1656 - * called, nothing else needs to be done here. 1657 - * 1658 - * Moreover, this is called after device_links_busy() has returned 'false', so 1659 - * the status of each link is guaranteed to be DL_STATE_SUPPLIER_UNBIND and 1660 - * therefore rpm_active can't be manipulated concurrently. 1661 - */ 1662 - void pm_runtime_clean_up_links(struct device *dev) 1663 - { 1664 - struct device_link *link; 1665 - int idx; 1666 - 1667 - idx = device_links_read_lock(); 1668 - 1669 - list_for_each_entry_rcu(link, &dev->links.consumers, s_node, 1670 - device_links_read_lock_held()) { 1671 - if (!(link->flags & DL_FLAG_MANAGED)) 1672 - continue; 1673 - 1674 - while (refcount_dec_not_one(&link->rpm_active)) 1675 - pm_runtime_put_noidle(dev); 1676 - } 1677 - 1678 - device_links_read_unlock(idx); 1679 - } 1680 - 1681 - /** 1682 1646 * pm_runtime_get_suppliers - Resume and reference-count supplier devices. 1683 1647 * @dev: Consumer device. 1684 1648 */ ··· 1693 1729 spin_unlock_irq(&dev->power.lock); 1694 1730 } 1695 1731 1696 - void pm_runtime_drop_link(struct device *dev) 1732 + static void pm_runtime_drop_link_count(struct device *dev) 1697 1733 { 1698 1734 spin_lock_irq(&dev->power.lock); 1699 1735 WARN_ON(dev->power.links_count == 0); 1700 1736 dev->power.links_count--; 1701 1737 spin_unlock_irq(&dev->power.lock); 1738 + } 1739 + 1740 + /** 1741 + * pm_runtime_drop_link - Prepare for device link removal. 
1742 + * @link: Device link going away. 1743 + * 1744 + * Drop the link count of the consumer end of @link and decrement the supplier 1745 + * device's runtime PM usage counter as many times as needed to drop all of the 1746 + * PM runtime reference to it from the consumer. 1747 + */ 1748 + void pm_runtime_drop_link(struct device_link *link) 1749 + { 1750 + if (!(link->flags & DL_FLAG_PM_RUNTIME)) 1751 + return; 1752 + 1753 + pm_runtime_drop_link_count(link->consumer); 1754 + 1755 + while (refcount_dec_not_one(&link->rpm_active)) 1756 + pm_runtime_put(link->supplier); 1702 1757 } 1703 1758 1704 1759 static bool pm_runtime_need_not_resume(struct device *dev)
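The runtime.c hunk folds the old pm_runtime_clean_up_links() walk into pm_runtime_drop_link(): when a managed link goes away, the consumer's link count is dropped and every outstanding `rpm_active` reference on the supplier is released one by one, stopping at the baseline count of 1. A toy model of that draining loop (hypothetical names, a plain int standing in for `refcount_t`):

```c
/* Model of refcount_dec_not_one(): decrement unless the count is exactly 1,
 * i.e. never drop the last (baseline) reference. */
static int refcount_dec_not_one_sim(int *r)
{
	if (*r == 1)
		return 0;	/* refuse to drop the last reference */
	(*r)--;
	return 1;
}

/* Drain all extra rpm_active references; each iteration models one
 * pm_runtime_put(link->supplier) call. Returns the number of puts. */
static int drop_all_rpm_refs(int *rpm_active)
{
	int puts = 0;

	while (refcount_dec_not_one_sim(rpm_active))
		puts++;
	return puts;
}
```

Starting from a count of 4, three puts bring it back to the baseline of 1, mirroring the `while (refcount_dec_not_one(&link->rpm_active))` loop in the patch.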
+1 -1
drivers/block/null_blk.h
··· 47 47 unsigned int nr_zones_closed; 48 48 struct blk_zone *zones; 49 49 sector_t zone_size_sects; 50 - spinlock_t zone_dev_lock; 50 + spinlock_t zone_lock; 51 51 unsigned long *zone_locks; 52 52 53 53 unsigned long size; /* device size in MB */
+31 -16
drivers/block/null_blk_zoned.c
··· 46 46 if (!dev->zones) 47 47 return -ENOMEM; 48 48 49 - spin_lock_init(&dev->zone_dev_lock); 50 - dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL); 51 - if (!dev->zone_locks) { 52 - kvfree(dev->zones); 53 - return -ENOMEM; 49 + /* 50 + * With memory backing, the zone_lock spinlock needs to be temporarily 51 + * released to avoid scheduling in atomic context. To guarantee zone 52 + * information protection, use a bitmap to lock zones with 53 + * wait_on_bit_lock_io(). Sleeping on the lock is OK as memory backing 54 + * implies that the queue is marked with BLK_MQ_F_BLOCKING. 55 + */ 56 + spin_lock_init(&dev->zone_lock); 57 + if (dev->memory_backed) { 58 + dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL); 59 + if (!dev->zone_locks) { 60 + kvfree(dev->zones); 61 + return -ENOMEM; 62 + } 54 63 } 55 64 56 65 if (dev->zone_nr_conv >= dev->nr_zones) { ··· 146 137 147 138 static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno) 148 139 { 149 - wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE); 140 + if (dev->memory_backed) 141 + wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE); 142 + spin_lock_irq(&dev->zone_lock); 150 143 } 151 144 152 145 static inline void null_unlock_zone(struct nullb_device *dev, unsigned int zno) 153 146 { 154 - clear_and_wake_up_bit(zno, dev->zone_locks); 147 + spin_unlock_irq(&dev->zone_lock); 148 + 149 + if (dev->memory_backed) 150 + clear_and_wake_up_bit(zno, dev->zone_locks); 155 151 } 156 152 157 153 int null_report_zones(struct gendisk *disk, sector_t sector, ··· 336 322 return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors); 337 323 338 324 null_lock_zone(dev, zno); 339 - spin_lock(&dev->zone_dev_lock); 340 325 341 326 switch (zone->cond) { 342 327 case BLK_ZONE_COND_FULL: ··· 388 375 if (zone->cond != BLK_ZONE_COND_EXP_OPEN) 389 376 zone->cond = BLK_ZONE_COND_IMP_OPEN; 390 377 391 - spin_unlock(&dev->zone_dev_lock); 378 + /* 379 + * Memory backing allocation 
may sleep: release the zone_lock spinlock 380 + * to avoid scheduling in atomic context. Zone operation atomicity is 381 + * still guaranteed through the zone_locks bitmap. 382 + */ 383 + if (dev->memory_backed) 384 + spin_unlock_irq(&dev->zone_lock); 392 385 ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors); 393 - spin_lock(&dev->zone_dev_lock); 386 + if (dev->memory_backed) 387 + spin_lock_irq(&dev->zone_lock); 388 + 394 389 if (ret != BLK_STS_OK) 395 390 goto unlock; 396 391 ··· 413 392 ret = BLK_STS_OK; 414 393 415 394 unlock: 416 - spin_unlock(&dev->zone_dev_lock); 417 395 null_unlock_zone(dev, zno); 418 396 419 397 return ret; ··· 536 516 null_lock_zone(dev, i); 537 517 zone = &dev->zones[i]; 538 518 if (zone->cond != BLK_ZONE_COND_EMPTY) { 539 - spin_lock(&dev->zone_dev_lock); 540 519 null_reset_zone(dev, zone); 541 - spin_unlock(&dev->zone_dev_lock); 542 520 trace_nullb_zone_op(cmd, i, zone->cond); 543 521 } 544 522 null_unlock_zone(dev, i); ··· 548 530 zone = &dev->zones[zone_no]; 549 531 550 532 null_lock_zone(dev, zone_no); 551 - spin_lock(&dev->zone_dev_lock); 552 533 553 534 switch (op) { 554 535 case REQ_OP_ZONE_RESET: ··· 566 549 ret = BLK_STS_NOTSUPP; 567 550 break; 568 551 } 569 - 570 - spin_unlock(&dev->zone_dev_lock); 571 552 572 553 if (ret == BLK_STS_OK) 573 554 trace_nullb_zone_op(cmd, zone_no, zone->cond);
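The null_blk change above uses the sleeping per-zone bitmap lock only for memory-backed devices (where the queue is BLK_MQ_F_BLOCKING) and an irq-disabling spinlock otherwise. The test-and-set core of such a per-zone bit lock can be sketched as follows (hypothetical helpers; the kernel's wait_on_bit_lock_io()/clear_and_wake_up_bit() additionally sleep on contention and wake waiters):

```c
#include <stdatomic.h>

#define BITS_PER_LONG	(8 * (int)sizeof(unsigned long))

/* Try to take zone zno's bit in the bitmap. Returns 1 if we acquired the
 * lock, 0 if another context already holds it. */
static int zone_trylock(atomic_ulong *map, unsigned int zno)
{
	unsigned long mask = 1UL << (zno % BITS_PER_LONG);
	unsigned long old = atomic_fetch_or(&map[zno / BITS_PER_LONG], mask);

	return !(old & mask);
}

/* Release zone zno's bit; a real implementation would also wake waiters. */
static void zone_unlock(atomic_ulong *map, unsigned int zno)
{
	unsigned long mask = 1UL << (zno % BITS_PER_LONG);

	atomic_fetch_and(&map[zno / BITS_PER_LONG], ~mask);
}
```

One bit per zone keeps the lock footprint to a single bitmap while still serializing operations on the same zone.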
+5
drivers/char/tpm/eventlog/efi.c
··· 41 41 log_size = log_tbl->size; 42 42 memunmap(log_tbl); 43 43 44 + if (!log_size) { 45 + pr_warn("UEFI TPM log area empty\n"); 46 + return -EIO; 47 + } 48 + 44 49 log_tbl = memremap(efi.tpm_log, sizeof(*log_tbl) + log_size, 45 50 MEMREMAP_WB); 46 51 if (!log_tbl) {
+27 -2
drivers/char/tpm/tpm_tis.c
··· 27 27 #include <linux/of.h> 28 28 #include <linux/of_device.h> 29 29 #include <linux/kernel.h> 30 + #include <linux/dmi.h> 30 31 #include "tpm.h" 31 32 #include "tpm_tis_core.h" 32 33 ··· 50 49 return container_of(data, struct tpm_tis_tcg_phy, priv); 51 50 } 52 51 53 - static bool interrupts = true; 54 - module_param(interrupts, bool, 0444); 52 + static int interrupts = -1; 53 + module_param(interrupts, int, 0444); 55 54 MODULE_PARM_DESC(interrupts, "Enable interrupts"); 56 55 57 56 static bool itpm; ··· 63 62 module_param(force, bool, 0444); 64 63 MODULE_PARM_DESC(force, "Force device probe rather than using ACPI entry"); 65 64 #endif 65 + 66 + static int tpm_tis_disable_irq(const struct dmi_system_id *d) 67 + { 68 + if (interrupts == -1) { 69 + pr_notice("tpm_tis: %s detected: disabling interrupts.\n", d->ident); 70 + interrupts = 0; 71 + } 72 + 73 + return 0; 74 + } 75 + 76 + static const struct dmi_system_id tpm_tis_dmi_table[] = { 77 + { 78 + .callback = tpm_tis_disable_irq, 79 + .ident = "ThinkPad T490s", 80 + .matches = { 81 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 82 + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T490s"), 83 + }, 84 + }, 85 + {} 86 + }; 66 87 67 88 #if defined(CONFIG_PNP) && defined(CONFIG_ACPI) 68 89 static int has_hid(struct acpi_device *dev, const char *hid) ··· 214 191 struct tpm_tis_tcg_phy *phy; 215 192 int irq = -1; 216 193 int rc; 194 + 195 + dmi_check_system(tpm_tis_dmi_table); 217 196 218 197 rc = check_acpi_tpm2(dev); 219 198 if (rc)
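The tpm_tis change turns the `interrupts` parameter from a bool into a tri-state int (-1 auto, 0 off, 1 on) so the DMI quirk table can disable interrupts only when the user did not set the parameter explicitly. The resulting precedence can be modelled as (hypothetical helper; the real driver instead overwrites the `interrupts` variable from the DMI callback):

```c
/* Resolve the effective interrupt setting: an explicit module parameter
 * always wins; on "auto" (-1), a matching DMI quirk disables interrupts,
 * otherwise they default to enabled. */
static int effective_interrupts(int param, int dmi_quirk_hit)
{
	if (param != -1)
		return param;		/* explicit user choice wins */
	return dmi_quirk_hit ? 0 : 1;	/* quirk disables; default on */
}
```

This is the usual pattern for quirk-overridable module parameters: the sentinel value distinguishes "user said nothing" from "user said off".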
+1 -1
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
··· 7 7 * 8 8 * This file add support for MD5 and SHA1/SHA224/SHA256/SHA384/SHA512. 9 9 * 10 - * You could find the datasheet in Documentation/arm/sunxi/README 10 + * You could find the datasheet in Documentation/arm/sunxi.rst 11 11 */ 12 12 #include <linux/dma-mapping.h> 13 13 #include <linux/pm_runtime.h>
+1 -1
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-prng.c
··· 7 7 * 8 8 * This file handle the PRNG 9 9 * 10 - * You could find a link for the datasheet in Documentation/arm/sunxi/README 10 + * You could find a link for the datasheet in Documentation/arm/sunxi.rst 11 11 */ 12 12 #include "sun8i-ce.h" 13 13 #include <linux/dma-mapping.h>
+1 -1
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-trng.c
··· 7 7 * 8 8 * This file handle the TRNG 9 9 * 10 - * You could find a link for the datasheet in Documentation/arm/sunxi/README 10 + * You could find a link for the datasheet in Documentation/arm/sunxi.rst 11 11 */ 12 12 #include "sun8i-ce.h" 13 13 #include <linux/dma-mapping.h>
+4
drivers/gpu/drm/amd/display/include/dal_asic_id.h
··· 217 217 #ifndef ASICREV_IS_VANGOGH 218 218 #define ASICREV_IS_VANGOGH(eChipRev) ((eChipRev >= VANGOGH_A0) && (eChipRev < VANGOGH_UNKNOWN)) 219 219 #endif 220 + #define GREEN_SARDINE_A0 0xA1 221 + #ifndef ASICREV_IS_GREEN_SARDINE 222 + #define ASICREV_IS_GREEN_SARDINE(eChipRev) ((eChipRev >= GREEN_SARDINE_A0) && (eChipRev < 0xFF)) 223 + #endif 220 224 221 225 /* 222 226 * ASIC chip ID
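The dal_asic_id.h hunk adds a Green Sardine revision-range macro following the existing VANGOGH pattern: a half-open range test on the chip revision byte, reproduced here as given in the diff:

```c
/* Revision-range check as added by the patch: revisions from A0 (0xA1)
 * up to, but not including, the 0xFF sentinel are Green Sardine. */
#define GREEN_SARDINE_A0 0xA1
#define ASICREV_IS_GREEN_SARDINE(eChipRev) \
	((eChipRev) >= GREEN_SARDINE_A0 && (eChipRev) < 0xFF)
```

Keeping the upper bound exclusive leaves 0xFF free as an "unknown revision" sentinel, consistent with the neighbouring ASICREV_IS_* macros.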
+13 -15
drivers/gpu/drm/i915/gem/i915_gem_domain.c
··· 509 509 return -ENOENT; 510 510 511 511 /* 512 - * Already in the desired write domain? Nothing for us to do! 513 - * 514 - * We apply a little bit of cunning here to catch a broader set of 515 - * no-ops. If obj->write_domain is set, we must be in the same 516 - * obj->read_domains, and only that domain. Therefore, if that 517 - * obj->write_domain matches the request read_domains, we are 518 - * already in the same read/write domain and can skip the operation, 519 - * without having to further check the requested write_domain. 520 - */ 521 - if (READ_ONCE(obj->write_domain) == read_domains) { 522 - err = 0; 523 - goto out; 524 - } 525 - 526 - /* 527 512 * Try to flush the object off the GPU without holding the lock. 528 513 * We will repeat the flush holding the lock in the normal manner 529 514 * to catch cases where we are gazumped. ··· 544 559 err = i915_gem_object_pin_pages(obj); 545 560 if (err) 546 561 goto out; 562 + 563 + /* 564 + * Already in the desired write domain? Nothing for us to do! 565 + * 566 + * We apply a little bit of cunning here to catch a broader set of 567 + * no-ops. If obj->write_domain is set, we must be in the same 568 + * obj->read_domains, and only that domain. Therefore, if that 569 + * obj->write_domain matches the request read_domains, we are 570 + * already in the same read/write domain and can skip the operation, 571 + * without having to further check the requested write_domain. 572 + */ 573 + if (READ_ONCE(obj->write_domain) == read_domains) 574 + goto out_unpin; 547 575 548 576 err = i915_gem_object_lock_interruptible(obj, NULL); 549 577 if (err)
+35 -20
drivers/gpu/drm/i915/gt/intel_engine.h
··· 245 245 } 246 246 247 247 static inline u32 * 248 - __gen8_emit_ggtt_write_rcs(u32 *cs, u32 value, u32 gtt_offset, u32 flags0, u32 flags1) 248 + __gen8_emit_write_rcs(u32 *cs, u32 value, u32 offset, u32 flags0, u32 flags1) 249 249 { 250 - /* We're using qword write, offset should be aligned to 8 bytes. */ 251 - GEM_BUG_ON(!IS_ALIGNED(gtt_offset, 8)); 252 - 253 - /* w/a for post sync ops following a GPGPU operation we 254 - * need a prior CS_STALL, which is emitted by the flush 255 - * following the batch. 256 - */ 257 250 *cs++ = GFX_OP_PIPE_CONTROL(6) | flags0; 258 - *cs++ = flags1 | PIPE_CONTROL_QW_WRITE | PIPE_CONTROL_GLOBAL_GTT_IVB; 259 - *cs++ = gtt_offset; 251 + *cs++ = flags1 | PIPE_CONTROL_QW_WRITE; 252 + *cs++ = offset; 260 253 *cs++ = 0; 261 254 *cs++ = value; 262 - /* We're thrashing one dword of HWS. */ 263 - *cs++ = 0; 255 + *cs++ = 0; /* We're thrashing one extra dword. */ 264 256 265 257 return cs; 266 258 } ··· 260 268 static inline u32* 261 269 gen8_emit_ggtt_write_rcs(u32 *cs, u32 value, u32 gtt_offset, u32 flags) 262 270 { 263 - return __gen8_emit_ggtt_write_rcs(cs, value, gtt_offset, 0, flags); 271 + /* We're using qword write, offset should be aligned to 8 bytes. */ 272 + GEM_BUG_ON(!IS_ALIGNED(gtt_offset, 8)); 273 + 274 + return __gen8_emit_write_rcs(cs, 275 + value, 276 + gtt_offset, 277 + 0, 278 + flags | PIPE_CONTROL_GLOBAL_GTT_IVB); 264 279 } 265 280 266 281 static inline u32* 267 282 gen12_emit_ggtt_write_rcs(u32 *cs, u32 value, u32 gtt_offset, u32 flags0, u32 flags1) 268 283 { 269 - return __gen8_emit_ggtt_write_rcs(cs, value, gtt_offset, flags0, flags1); 284 + /* We're using qword write, offset should be aligned to 8 bytes. 
*/ 285 + GEM_BUG_ON(!IS_ALIGNED(gtt_offset, 8)); 286 + 287 + return __gen8_emit_write_rcs(cs, 288 + value, 289 + gtt_offset, 290 + flags0, 291 + flags1 | PIPE_CONTROL_GLOBAL_GTT_IVB); 292 + } 293 + 294 + static inline u32 * 295 + __gen8_emit_flush_dw(u32 *cs, u32 value, u32 gtt_offset, u32 flags) 296 + { 297 + *cs++ = (MI_FLUSH_DW + 1) | flags; 298 + *cs++ = gtt_offset; 299 + *cs++ = 0; 300 + *cs++ = value; 301 + 302 + return cs; 270 303 } 271 304 272 305 static inline u32 * ··· 302 285 /* Offset should be aligned to 8 bytes for both (QW/DW) write types */ 303 286 GEM_BUG_ON(!IS_ALIGNED(gtt_offset, 8)); 304 287 305 - *cs++ = (MI_FLUSH_DW + 1) | MI_FLUSH_DW_OP_STOREDW | flags; 306 - *cs++ = gtt_offset | MI_FLUSH_DW_USE_GTT; 307 - *cs++ = 0; 308 - *cs++ = value; 309 - 310 - return cs; 288 + return __gen8_emit_flush_dw(cs, 289 + value, 290 + gtt_offset | MI_FLUSH_DW_USE_GTT, 291 + flags | MI_FLUSH_DW_OP_STOREDW); 311 292 } 312 293 313 294 static inline void __intel_engine_reset(struct intel_engine_cs *engine,
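After the intel_engine.h refactor, the qword-alignment assertion `GEM_BUG_ON(!IS_ALIGNED(gtt_offset, 8))` moves from the shared helper to the GGTT-specific callers. The power-of-two alignment test it relies on is the standard bit trick:

```c
/* Power-of-two alignment check, as in the kernel's IS_ALIGNED(): x is
 * aligned to a (a must be a power of two) iff its low bits below a are
 * all zero. */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)
```

A qword write touches 8 bytes at once, so an offset with any of its low three bits set would straddle the intended slot.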
+22 -9
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 3547 3547 .destroy = execlists_context_destroy, 3548 3548 }; 3549 3549 3550 + static u32 hwsp_offset(const struct i915_request *rq) 3551 + { 3552 + const struct intel_timeline_cacheline *cl; 3553 + 3554 + /* Before the request is executed, the timeline/cachline is fixed */ 3555 + 3556 + cl = rcu_dereference_protected(rq->hwsp_cacheline, 1); 3557 + if (cl) 3558 + return cl->ggtt_offset; 3559 + 3560 + return rcu_dereference_protected(rq->timeline, 1)->hwsp_offset; 3561 + } 3562 + 3550 3563 static int gen8_emit_init_breadcrumb(struct i915_request *rq) 3551 3564 { 3552 3565 u32 *cs; ··· 3582 3569 *cs++ = MI_NOOP; 3583 3570 3584 3571 *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT; 3585 - *cs++ = i915_request_timeline(rq)->hwsp_offset; 3572 + *cs++ = hwsp_offset(rq); 3586 3573 *cs++ = 0; 3587 3574 *cs++ = rq->fence.seqno - 1; 3588 3575 ··· 4899 4886 return gen8_emit_wa_tail(request, cs); 4900 4887 } 4901 4888 4902 - static u32 *emit_xcs_breadcrumb(struct i915_request *request, u32 *cs) 4889 + static u32 *emit_xcs_breadcrumb(struct i915_request *rq, u32 *cs) 4903 4890 { 4904 - u32 addr = i915_request_active_timeline(request)->hwsp_offset; 4905 - 4906 - return gen8_emit_ggtt_write(cs, request->fence.seqno, addr, 0); 4891 + return gen8_emit_ggtt_write(cs, rq->fence.seqno, hwsp_offset(rq), 0); 4907 4892 } 4908 4893 4909 4894 static u32 *gen8_emit_fini_breadcrumb(struct i915_request *rq, u32 *cs) ··· 4920 4909 /* XXX flush+write+CS_STALL all in one upsets gem_concurrent_blt:kbl */ 4921 4910 cs = gen8_emit_ggtt_write_rcs(cs, 4922 4911 request->fence.seqno, 4923 - i915_request_active_timeline(request)->hwsp_offset, 4912 + hwsp_offset(request), 4924 4913 PIPE_CONTROL_FLUSH_ENABLE | 4925 4914 PIPE_CONTROL_CS_STALL); 4926 4915 ··· 4932 4921 { 4933 4922 cs = gen8_emit_ggtt_write_rcs(cs, 4934 4923 request->fence.seqno, 4935 - i915_request_active_timeline(request)->hwsp_offset, 4924 + hwsp_offset(request), 4936 4925 PIPE_CONTROL_CS_STALL | 4937 4926 PIPE_CONTROL_TILE_CACHE_FLUSH | 
4938 4927 PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH | ··· 4994 4983 4995 4984 static u32 *gen12_emit_fini_breadcrumb(struct i915_request *rq, u32 *cs) 4996 4985 { 4997 - return gen12_emit_fini_breadcrumb_tail(rq, emit_xcs_breadcrumb(rq, cs)); 4986 + /* XXX Stalling flush before seqno write; post-sync not */ 4987 + cs = emit_xcs_breadcrumb(rq, __gen8_emit_flush_dw(cs, 0, 0, 0)); 4988 + return gen12_emit_fini_breadcrumb_tail(rq, cs); 4998 4989 } 4999 4990 5000 4991 static u32 * ··· 5004 4991 { 5005 4992 cs = gen12_emit_ggtt_write_rcs(cs, 5006 4993 request->fence.seqno, 5007 - i915_request_active_timeline(request)->hwsp_offset, 4994 + hwsp_offset(request), 5008 4995 PIPE_CONTROL0_HDC_PIPELINE_FLUSH, 5009 4996 PIPE_CONTROL_CS_STALL | 5010 4997 PIPE_CONTROL_TILE_CACHE_FLUSH |
+10 -8
drivers/gpu/drm/i915/gt/intel_timeline.c
··· 188 188 return cl; 189 189 } 190 190 191 - static void cacheline_acquire(struct intel_timeline_cacheline *cl) 191 + static void cacheline_acquire(struct intel_timeline_cacheline *cl, 192 + u32 ggtt_offset) 192 193 { 193 - if (cl) 194 - i915_active_acquire(&cl->active); 194 + if (!cl) 195 + return; 196 + 197 + cl->ggtt_offset = ggtt_offset; 198 + i915_active_acquire(&cl->active); 195 199 } 196 200 197 201 static void cacheline_release(struct intel_timeline_cacheline *cl) ··· 344 340 GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n", 345 341 tl->fence_context, tl->hwsp_offset); 346 342 347 - cacheline_acquire(tl->hwsp_cacheline); 343 + cacheline_acquire(tl->hwsp_cacheline, tl->hwsp_offset); 348 344 if (atomic_fetch_inc(&tl->pin_count)) { 349 345 cacheline_release(tl->hwsp_cacheline); 350 346 __i915_vma_unpin(tl->hwsp_ggtt); ··· 519 515 GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n", 520 516 tl->fence_context, tl->hwsp_offset); 521 517 522 - cacheline_acquire(cl); 518 + cacheline_acquire(cl, tl->hwsp_offset); 523 519 tl->hwsp_cacheline = cl; 524 520 525 521 *seqno = timeline_advance(tl); ··· 577 573 if (err) 578 574 goto out; 579 575 580 - *hwsp = i915_ggtt_offset(cl->hwsp->vma) + 581 - ptr_unmask_bits(cl->vaddr, CACHELINE_BITS) * CACHELINE_BYTES; 582 - 576 + *hwsp = cl->ggtt_offset; 583 577 out: 584 578 i915_active_release(&cl->active); 585 579 return err;
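The intel_timeline.c hunk snapshots the GGTT offset into the cacheline at acquire time, so later readers (the new hwsp_offset() in intel_lrc.c) read a stable cached value instead of re-deriving it from vma pointers. A minimal model of that acquire-time caching (hypothetical struct; a plain counter stands in for i915_active):

```c
struct cacheline_sim {
	unsigned int ggtt_offset;	/* cached while the mapping is pinned */
	int active;			/* stands in for i915_active */
};

/* Snapshot the offset at acquire time, when it is guaranteed stable. */
static void cacheline_acquire_sim(struct cacheline_sim *cl,
				  unsigned int ggtt_offset)
{
	if (!cl)
		return;
	cl->ggtt_offset = ggtt_offset;
	cl->active++;
}

/* Readers use the cached value when a cacheline exists, else a fallback
 * (modelling the timeline's own hwsp_offset). */
static unsigned int hwsp_offset_sim(const struct cacheline_sim *cl,
				    unsigned int fallback)
{
	return cl ? cl->ggtt_offset : fallback;
}
```

Caching at the single point where the value is fixed avoids racy pointer chasing on every breadcrumb emission.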
+2
drivers/gpu/drm/i915/gt/intel_timeline_types.h
··· 94 94 struct intel_timeline_hwsp *hwsp; 95 95 void *vaddr; 96 96 97 + u32 ggtt_offset; 98 + 97 99 struct rcu_head rcu; 98 100 }; 99 101
+44 -3
drivers/gpu/drm/i915/gvt/handlers.c
··· 1489 1489 const struct intel_engine_cs *engine = 1490 1490 intel_gvt_render_mmio_to_engine(vgpu->gvt, offset); 1491 1491 1492 - if (!intel_gvt_ggtt_validate_range(vgpu, value, I915_GTT_PAGE_SIZE)) { 1492 + if (value != 0 && 1493 + !intel_gvt_ggtt_validate_range(vgpu, value, I915_GTT_PAGE_SIZE)) { 1493 1494 gvt_vgpu_err("write invalid HWSP address, reg:0x%x, value:0x%x\n", 1494 1495 offset, value); 1495 1496 return -EINVAL; ··· 1648 1647 unsigned int offset, void *p_data, unsigned int bytes) 1649 1648 { 1650 1649 vgpu_vreg(vgpu, offset) = 0; 1650 + return 0; 1651 + } 1652 + 1653 + /** 1654 + * FixMe: 1655 + * If guest fills non-priv batch buffer on ApolloLake/Broxton as Mesa i965 did: 1656 + * 717e7539124d (i965: Use a WC map and memcpy for the batch instead of pwrite.) 1657 + * Due to the missing flush of bb filled by VM vCPU, host GPU hangs on executing 1658 + * these MI_BATCH_BUFFER. 1659 + * Temporarily workaround this by setting SNOOP bit for PAT3 used by PPGTT 1660 + * PML4 PTE: PAT(0) PCD(1) PWT(1). 1661 + * The performance is still expected to be low, will need further improvement. 
1662 + */ 1663 + static int bxt_ppat_low_write(struct intel_vgpu *vgpu, unsigned int offset, 1664 + void *p_data, unsigned int bytes) 1665 + { 1666 + u64 pat = 1667 + GEN8_PPAT(0, CHV_PPAT_SNOOP) | 1668 + GEN8_PPAT(1, 0) | 1669 + GEN8_PPAT(2, 0) | 1670 + GEN8_PPAT(3, CHV_PPAT_SNOOP) | 1671 + GEN8_PPAT(4, CHV_PPAT_SNOOP) | 1672 + GEN8_PPAT(5, CHV_PPAT_SNOOP) | 1673 + GEN8_PPAT(6, CHV_PPAT_SNOOP) | 1674 + GEN8_PPAT(7, CHV_PPAT_SNOOP); 1675 + 1676 + vgpu_vreg(vgpu, offset) = lower_32_bits(pat); 1677 + 1651 1678 return 0; 1652 1679 } 1653 1680 ··· 2841 2812 2842 2813 MMIO_DH(GEN6_PCODE_MAILBOX, D_BDW_PLUS, NULL, mailbox_write); 2843 2814 2844 - MMIO_D(GEN8_PRIVATE_PAT_LO, D_BDW_PLUS); 2815 + MMIO_D(GEN8_PRIVATE_PAT_LO, D_BDW_PLUS & ~D_BXT); 2845 2816 MMIO_D(GEN8_PRIVATE_PAT_HI, D_BDW_PLUS); 2846 2817 2847 2818 MMIO_D(GAMTARBMODE, D_BDW_PLUS); ··· 3168 3139 NULL, NULL); 3169 3140 3170 3141 MMIO_DFH(GAMT_CHKN_BIT_REG, D_KBL | D_CFL, F_CMD_ACCESS, NULL, NULL); 3171 - MMIO_D(GEN9_CTX_PREEMPT_REG, D_SKL_PLUS); 3142 + MMIO_D(GEN9_CTX_PREEMPT_REG, D_SKL_PLUS & ~D_BXT); 3172 3143 3173 3144 return 0; 3174 3145 } ··· 3342 3313 MMIO_D(GEN8_PUSHBUS_SHIFT, D_BXT); 3343 3314 MMIO_D(GEN6_GFXPAUSE, D_BXT); 3344 3315 MMIO_DFH(GEN8_L3SQCREG1, D_BXT, F_CMD_ACCESS, NULL, NULL); 3316 + MMIO_DFH(GEN8_L3CNTLREG, D_BXT, F_CMD_ACCESS, NULL, NULL); 3317 + MMIO_DFH(_MMIO(0x20D8), D_BXT, F_CMD_ACCESS, NULL, NULL); 3318 + MMIO_F(GEN8_RING_CS_GPR(RENDER_RING_BASE, 0), 0x40, F_CMD_ACCESS, 3319 + 0, 0, D_BXT, NULL, NULL); 3320 + MMIO_F(GEN8_RING_CS_GPR(GEN6_BSD_RING_BASE, 0), 0x40, F_CMD_ACCESS, 3321 + 0, 0, D_BXT, NULL, NULL); 3322 + MMIO_F(GEN8_RING_CS_GPR(BLT_RING_BASE, 0), 0x40, F_CMD_ACCESS, 3323 + 0, 0, D_BXT, NULL, NULL); 3324 + MMIO_F(GEN8_RING_CS_GPR(VEBOX_RING_BASE, 0), 0x40, F_CMD_ACCESS, 3325 + 0, 0, D_BXT, NULL, NULL); 3345 3326 3346 3327 MMIO_DFH(GEN9_CTX_PREEMPT_REG, D_BXT, F_CMD_ACCESS, NULL, NULL); 3328 + 3329 + MMIO_DH(GEN8_PRIVATE_PAT_LO, D_BXT, NULL, bxt_ppat_low_write); 3347 3330 
3348 3331 return 0; 3349 3332 }
+8 -7
drivers/gpu/drm/i915/gvt/scheduler.c
··· 1277 1277 1278 1278 i915_context_ppgtt_root_restore(s, i915_vm_to_ppgtt(s->shadow[0]->vm)); 1279 1279 for_each_engine(engine, vgpu->gvt->gt, id) 1280 - intel_context_unpin(s->shadow[id]); 1280 + intel_context_put(s->shadow[id]); 1281 1281 1282 1282 kmem_cache_destroy(s->workloads); 1283 1283 } ··· 1369 1369 ce->ring = __intel_context_ring_size(ring_size); 1370 1370 } 1371 1371 1372 - ret = intel_context_pin(ce); 1373 - intel_context_put(ce); 1374 - if (ret) 1375 - goto out_shadow_ctx; 1376 - 1377 1372 s->shadow[i] = ce; 1378 1373 } 1379 1374 ··· 1400 1405 if (IS_ERR(s->shadow[i])) 1401 1406 break; 1402 1407 1403 - intel_context_unpin(s->shadow[i]); 1404 1408 intel_context_put(s->shadow[i]); 1405 1409 } 1406 1410 i915_vm_put(&ppgtt->vm); ··· 1473 1479 { 1474 1480 struct intel_vgpu_submission *s = &workload->vgpu->submission; 1475 1481 1482 + intel_context_unpin(s->shadow[workload->engine->id]); 1476 1483 release_shadow_batch_buffer(workload); 1477 1484 release_shadow_wa_ctx(&workload->wa_ctx); 1478 1485 ··· 1715 1720 if (ret) { 1716 1721 if (vgpu_is_vm_unhealthy(ret)) 1717 1722 enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR); 1723 + intel_vgpu_destroy_workload(workload); 1724 + return ERR_PTR(ret); 1725 + } 1726 + 1727 + ret = intel_context_pin(s->shadow[engine->id]); 1728 + if (ret) { 1718 1729 intel_vgpu_destroy_workload(workload); 1719 1730 return ERR_PTR(ret); 1720 1731 }
+4 -2
drivers/gpu/drm/i915/i915_vma.c
··· 314 314 { 315 315 struct i915_vma_work *vw = container_of(work, typeof(*vw), base); 316 316 317 - if (vw->pinned) 317 + if (vw->pinned) { 318 318 __i915_gem_object_unpin_pages(vw->pinned); 319 + i915_gem_object_put(vw->pinned); 320 + } 319 321 320 322 i915_vm_free_pt_stash(vw->vm, &vw->stash); 321 323 i915_vm_put(vw->vm); ··· 433 431 434 432 if (vma->obj) { 435 433 __i915_gem_object_pin_pages(vma->obj); 436 - work->pinned = vma->obj; 434 + work->pinned = i915_gem_object_get(vma->obj); 437 435 } 438 436 } else { 439 437 vma->ops->bind_vma(vma->vm, NULL, vma, cache_level, bind_flags);
+3 -14
drivers/gpu/drm/imx/dw_hdmi-imx.c
··· 111 111 return 0; 112 112 } 113 113 114 - static void dw_hdmi_imx_encoder_disable(struct drm_encoder *encoder) 115 - { 116 - } 117 - 118 114 static void dw_hdmi_imx_encoder_enable(struct drm_encoder *encoder) 119 115 { 120 116 struct imx_hdmi *hdmi = enc_to_imx_hdmi(encoder); ··· 136 140 137 141 static const struct drm_encoder_helper_funcs dw_hdmi_imx_encoder_helper_funcs = { 138 142 .enable = dw_hdmi_imx_encoder_enable, 139 - .disable = dw_hdmi_imx_encoder_disable, 140 143 .atomic_check = dw_hdmi_imx_atomic_check, 141 144 }; 142 145 ··· 214 219 hdmi->dev = &pdev->dev; 215 220 encoder = &hdmi->encoder; 216 221 217 - encoder->possible_crtcs = drm_of_find_possible_crtcs(drm, dev->of_node); 218 - /* 219 - * If we failed to find the CRTC(s) which this encoder is 220 - * supposed to be connected to, it's because the CRTC has 221 - * not been registered yet. Defer probing, and hope that 222 - * the required CRTC is added later. 223 - */ 224 - if (encoder->possible_crtcs == 0) 225 - return -EPROBE_DEFER; 222 + ret = imx_drm_encoder_parse_of(drm, encoder, dev->of_node); 223 + if (ret) 224 + return ret; 226 225 227 226 ret = dw_hdmi_imx_parse_dt(hdmi); 228 227 if (ret < 0)
+5 -5
drivers/gpu/drm/imx/imx-drm-core.c
··· 20 20 #include <drm/drm_fb_helper.h> 21 21 #include <drm/drm_gem_cma_helper.h> 22 22 #include <drm/drm_gem_framebuffer_helper.h> 23 + #include <drm/drm_managed.h> 23 24 #include <drm/drm_of.h> 24 25 #include <drm/drm_plane_helper.h> 25 26 #include <drm/drm_probe_helper.h> ··· 213 212 drm->mode_config.allow_fb_modifiers = true; 214 213 drm->mode_config.normalize_zpos = true; 215 214 216 - drm_mode_config_init(drm); 215 + ret = drmm_mode_config_init(drm); 216 + if (ret) 217 + return ret; 217 218 218 219 ret = drm_vblank_init(drm, MAX_CRTC); 219 220 if (ret) ··· 254 251 drm_kms_helper_poll_fini(drm); 255 252 component_unbind_all(drm->dev, drm); 256 253 err_kms: 257 - drm_mode_config_cleanup(drm); 258 254 drm_dev_put(drm); 259 255 260 256 return ret; ··· 269 267 270 268 component_unbind_all(drm->dev, drm); 271 269 272 - drm_mode_config_cleanup(drm); 270 + drm_dev_put(drm); 273 271 274 272 dev_set_drvdata(dev, NULL); 275 - 276 - drm_dev_put(drm); 277 273 } 278 274 279 275 static const struct component_master_ops imx_drm_ops = {
+4 -6
drivers/gpu/drm/imx/imx-ldb.c
··· 62 62 struct i2c_adapter *ddc; 63 63 int chno; 64 64 void *edid; 65 - int edid_len; 66 65 struct drm_display_mode mode; 67 66 int mode_valid; 68 67 u32 bus_format; ··· 535 536 } 536 537 537 538 if (!channel->ddc) { 539 + int edid_len; 540 + 538 541 /* if no DDC available, fallback to hardcoded EDID */ 539 542 dev_dbg(dev, "no ddc available\n"); 540 543 541 - edidp = of_get_property(child, "edid", 542 - &channel->edid_len); 544 + edidp = of_get_property(child, "edid", &edid_len); 543 545 if (edidp) { 544 - channel->edid = kmemdup(edidp, 545 - channel->edid_len, 546 - GFP_KERNEL); 546 + channel->edid = kmemdup(edidp, edid_len, GFP_KERNEL); 547 547 } else if (!channel->panel) { 548 548 /* fallback to display-timings node */ 549 549 ret = of_get_drm_display_mode(child,
+6 -34
drivers/gpu/drm/imx/imx-tve.c
··· 13 13 #include <linux/platform_device.h> 14 14 #include <linux/regmap.h> 15 15 #include <linux/regulator/consumer.h> 16 - #include <linux/spinlock.h> 17 16 #include <linux/videodev2.h> 18 17 19 18 #include <video/imx-ipu-v3.h> ··· 103 104 struct drm_connector connector; 104 105 struct drm_encoder encoder; 105 106 struct device *dev; 106 - spinlock_t lock; /* register lock */ 107 - bool enabled; 108 107 int mode; 109 108 int di_hsync_pin; 110 109 int di_vsync_pin; ··· 126 129 return container_of(e, struct imx_tve, encoder); 127 130 } 128 131 129 - static void tve_lock(void *__tve) 130 - __acquires(&tve->lock) 131 - { 132 - struct imx_tve *tve = __tve; 133 - 134 - spin_lock(&tve->lock); 135 - } 136 - 137 - static void tve_unlock(void *__tve) 138 - __releases(&tve->lock) 139 - { 140 - struct imx_tve *tve = __tve; 141 - 142 - spin_unlock(&tve->lock); 143 - } 144 - 145 132 static void tve_enable(struct imx_tve *tve) 146 133 { 147 - if (!tve->enabled) { 148 - tve->enabled = true; 149 - clk_prepare_enable(tve->clk); 150 - regmap_update_bits(tve->regmap, TVE_COM_CONF_REG, 151 - TVE_EN, TVE_EN); 152 - } 134 + clk_prepare_enable(tve->clk); 135 + regmap_update_bits(tve->regmap, TVE_COM_CONF_REG, TVE_EN, TVE_EN); 153 136 154 137 /* clear interrupt status register */ 155 138 regmap_write(tve->regmap, TVE_STAT_REG, 0xffffffff); ··· 146 169 147 170 static void tve_disable(struct imx_tve *tve) 148 171 { 149 - if (tve->enabled) { 150 - tve->enabled = false; 151 - regmap_update_bits(tve->regmap, TVE_COM_CONF_REG, TVE_EN, 0); 152 - clk_disable_unprepare(tve->clk); 153 - } 172 + regmap_update_bits(tve->regmap, TVE_COM_CONF_REG, TVE_EN, 0); 173 + clk_disable_unprepare(tve->clk); 154 174 } 155 175 156 176 static int tve_setup_tvout(struct imx_tve *tve) ··· 474 500 475 501 .readable_reg = imx_tve_readable_reg, 476 502 477 - .lock = tve_lock, 478 - .unlock = tve_unlock, 503 + .fast_io = true, 479 504 480 505 .max_register = 0xdc, 481 506 }; ··· 484 511 [TVE_MODE_VGA] = "vga", 485 512 
}; 486 513 487 - static const int of_get_tve_mode(struct device_node *np) 514 + static int of_get_tve_mode(struct device_node *np) 488 515 { 489 516 const char *bm; 490 517 int ret, i; ··· 517 544 memset(tve, 0, sizeof(*tve)); 518 545 519 546 tve->dev = dev; 520 - spin_lock_init(&tve->lock); 521 547 522 548 ddc_node = of_parse_phandle(np, "ddc-i2c-bus", 0); 523 549 if (ddc_node) {
+3 -17
drivers/gpu/drm/imx/parallel-display.c
··· 28 28 struct drm_bridge bridge; 29 29 struct device *dev; 30 30 void *edid; 31 - int edid_len; 32 31 u32 bus_format; 33 32 u32 bus_flags; 34 33 struct drm_display_mode mode; ··· 38 39 static inline struct imx_parallel_display *con_to_imxpd(struct drm_connector *c) 39 40 { 40 41 return container_of(c, struct imx_parallel_display, connector); 41 - } 42 - 43 - static inline struct imx_parallel_display *enc_to_imxpd(struct drm_encoder *e) 44 - { 45 - return container_of(e, struct imx_parallel_display, encoder); 46 42 } 47 43 48 44 static inline struct imx_parallel_display *bridge_to_imxpd(struct drm_bridge *b) ··· 304 310 struct device_node *np = dev->of_node; 305 311 const u8 *edidp; 306 312 struct imx_parallel_display *imxpd; 313 + int edid_len; 307 314 int ret; 308 315 u32 bus_format = 0; 309 316 const char *fmt; ··· 318 323 if (ret && ret != -ENODEV) 319 324 return ret; 320 325 321 - edidp = of_get_property(np, "edid", &imxpd->edid_len); 326 + edidp = of_get_property(np, "edid", &edid_len); 322 327 if (edidp) 323 - imxpd->edid = kmemdup(edidp, imxpd->edid_len, GFP_KERNEL); 328 + imxpd->edid = devm_kmemdup(dev, edidp, edid_len, GFP_KERNEL); 324 329 325 330 ret = of_property_read_string(np, "interface-pix-fmt", &fmt); 326 331 if (!ret) { ··· 344 349 return 0; 345 350 } 346 351 347 - static void imx_pd_unbind(struct device *dev, struct device *master, 348 - void *data) 349 - { 350 - struct imx_parallel_display *imxpd = dev_get_drvdata(dev); 351 - 352 - kfree(imxpd->edid); 353 - } 354 - 355 352 static const struct component_ops imx_pd_ops = { 356 353 .bind = imx_pd_bind, 357 - .unbind = imx_pd_unbind, 358 354 }; 359 355 360 356 static int imx_pd_probe(struct platform_device *pdev)
+3 -2
drivers/gpu/drm/panfrost/panfrost_drv.c
··· 628 628 err_out1: 629 629 pm_runtime_disable(pfdev->dev); 630 630 panfrost_device_fini(pfdev); 631 + pm_runtime_set_suspended(pfdev->dev); 631 632 err_out0: 632 633 drm_dev_put(ddev); 633 634 return err; ··· 643 642 panfrost_gem_shrinker_cleanup(ddev); 644 643 645 644 pm_runtime_get_sync(pfdev->dev); 646 - panfrost_device_fini(pfdev); 647 - pm_runtime_put_sync_suspend(pfdev->dev); 648 645 pm_runtime_disable(pfdev->dev); 646 + panfrost_device_fini(pfdev); 647 + pm_runtime_set_suspended(pfdev->dev); 649 648 650 649 drm_dev_put(ddev); 651 650 return 0;
+1 -3
drivers/gpu/drm/panfrost/panfrost_gem.c
··· 105 105 kref_put(&mapping->refcount, panfrost_gem_mapping_release); 106 106 } 107 107 108 - void panfrost_gem_teardown_mappings(struct panfrost_gem_object *bo) 108 + void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo) 109 109 { 110 110 struct panfrost_gem_mapping *mapping; 111 111 112 - mutex_lock(&bo->mappings.lock); 113 112 list_for_each_entry(mapping, &bo->mappings.list, node) 114 113 panfrost_gem_teardown_mapping(mapping); 115 - mutex_unlock(&bo->mappings.lock); 116 114 } 117 115 118 116 int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
+1 -1
drivers/gpu/drm/panfrost/panfrost_gem.h
··· 82 82 panfrost_gem_mapping_get(struct panfrost_gem_object *bo, 83 83 struct panfrost_file_priv *priv); 84 84 void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping); 85 - void panfrost_gem_teardown_mappings(struct panfrost_gem_object *bo); 85 + void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo); 86 86 87 87 void panfrost_gem_shrinker_init(struct drm_device *dev); 88 88 void panfrost_gem_shrinker_cleanup(struct drm_device *dev);
+11 -3
drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
··· 40 40 { 41 41 struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); 42 42 struct panfrost_gem_object *bo = to_panfrost_bo(obj); 43 + bool ret = false; 43 44 44 45 if (atomic_read(&bo->gpu_usecount)) 45 46 return false; 46 47 47 - if (!mutex_trylock(&shmem->pages_lock)) 48 + if (!mutex_trylock(&bo->mappings.lock)) 48 49 return false; 49 50 50 - panfrost_gem_teardown_mappings(bo); 51 + if (!mutex_trylock(&shmem->pages_lock)) 52 + goto unlock_mappings; 53 + 54 + panfrost_gem_teardown_mappings_locked(bo); 51 55 drm_gem_shmem_purge_locked(obj); 56 + ret = true; 52 57 53 58 mutex_unlock(&shmem->pages_lock); 54 - return true; 59 + 60 + unlock_mappings: 61 + mutex_unlock(&bo->mappings.lock); 62 + return ret; 55 63 } 56 64 57 65 static unsigned long
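The shrinker hunk above changes the purge path to take `bo->mappings.lock` before `shmem->pages_lock`, backing out through a label when either trylock fails — shrinkers run in reclaim context and must never block. A standalone userspace sketch of that idiom, with pthread mutexes standing in for the kernel locks and all names illustrative:

```c
#include <pthread.h>
#include <stdbool.h>

/* Trylock-ordering idiom from the shrinker change above: outer
 * (mappings) lock first, then inner (pages) lock, backing out
 * cleanly on contention. Userspace sketch, not kernel code. */
bool try_purge(pthread_mutex_t *mappings_lock, pthread_mutex_t *pages_lock)
{
	bool ret = false;

	if (pthread_mutex_trylock(mappings_lock))
		return false;		/* contended: skip this object */

	if (pthread_mutex_trylock(pages_lock))
		goto unlock_mappings;

	/* ... teardown mappings and purge pages here ... */
	ret = true;

	pthread_mutex_unlock(pages_lock);

unlock_mappings:
	pthread_mutex_unlock(mappings_lock);
	return ret;
}

/* Exercise both paths; returns 1 when the idiom behaves as expected. */
int purge_demo(void)
{
	pthread_mutex_t mappings, pages;
	int ok;

	pthread_mutex_init(&mappings, NULL);
	pthread_mutex_init(&pages, NULL);

	ok = try_purge(&mappings, &pages);	  /* both free: purged */
	pthread_mutex_lock(&pages);
	ok = ok && !try_purge(&mappings, &pages); /* inner held: backs out */
	pthread_mutex_unlock(&pages);

	pthread_mutex_destroy(&mappings);
	pthread_mutex_destroy(&pages);
	return ok;
}
```

Returning `false` without sleeping lets reclaim move on to the next object rather than deadlocking against a path that already holds one of the locks.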
+5 -4
drivers/gpu/drm/vc4/vc4_bo.c
··· 468 468 } 469 469 470 470 if (IS_ERR(cma_obj)) { 471 - struct drm_printer p = drm_info_printer(vc4->dev->dev); 471 + struct drm_printer p = drm_info_printer(vc4->base.dev); 472 472 DRM_ERROR("Failed to allocate from CMA:\n"); 473 473 vc4_bo_stats_print(&p, vc4); 474 474 return ERR_PTR(-ENOMEM); ··· 609 609 { 610 610 struct vc4_dev *vc4 = 611 611 container_of(work, struct vc4_dev, bo_cache.time_work); 612 - struct drm_device *dev = vc4->dev; 612 + struct drm_device *dev = &vc4->base; 613 613 614 614 mutex_lock(&vc4->bo_lock); 615 615 vc4_bo_cache_free_old(dev); ··· 1024 1024 return 0; 1025 1025 } 1026 1026 1027 + static void vc4_bo_cache_destroy(struct drm_device *dev, void *unused); 1027 1028 int vc4_bo_cache_init(struct drm_device *dev) 1028 1029 { 1029 1030 struct vc4_dev *vc4 = to_vc4_dev(dev); ··· 1053 1052 INIT_WORK(&vc4->bo_cache.time_work, vc4_bo_cache_time_work); 1054 1053 timer_setup(&vc4->bo_cache.time_timer, vc4_bo_cache_time_timer, 0); 1055 1054 1056 - return 0; 1055 + return drmm_add_action_or_reset(dev, vc4_bo_cache_destroy, NULL); 1057 1056 } 1058 1057 1059 - void vc4_bo_cache_destroy(struct drm_device *dev) 1058 + static void vc4_bo_cache_destroy(struct drm_device *dev, void *unused) 1060 1059 { 1061 1060 struct vc4_dev *vc4 = to_vc4_dev(dev); 1062 1061 int i;
+14 -27
drivers/gpu/drm/vc4/vc4_drv.c
··· 245 245 246 246 dev->coherent_dma_mask = DMA_BIT_MASK(32); 247 247 248 - vc4 = devm_kzalloc(dev, sizeof(*vc4), GFP_KERNEL); 249 - if (!vc4) 250 - return -ENOMEM; 251 - 252 248 /* If VC4 V3D is missing, don't advertise render nodes. */ 253 249 node = of_find_matching_node_and_match(NULL, vc4_v3d_dt_match, NULL); 254 250 if (!node || !of_device_is_available(node)) 255 251 vc4_drm_driver.driver_features &= ~DRIVER_RENDER; 256 252 of_node_put(node); 257 253 258 - drm = drm_dev_alloc(&vc4_drm_driver, dev); 259 - if (IS_ERR(drm)) 260 - return PTR_ERR(drm); 254 + vc4 = devm_drm_dev_alloc(dev, &vc4_drm_driver, struct vc4_dev, base); 255 + if (IS_ERR(vc4)) 256 + return PTR_ERR(vc4); 257 + 258 + drm = &vc4->base; 261 259 platform_set_drvdata(pdev, drm); 262 - vc4->dev = drm; 263 - drm->dev_private = vc4; 264 260 INIT_LIST_HEAD(&vc4->debugfs_list); 265 261 266 262 mutex_init(&vc4->bin_bo_lock); 267 263 268 264 ret = vc4_bo_cache_init(drm); 269 265 if (ret) 270 - goto dev_put; 266 + return ret; 271 267 272 - drm_mode_config_init(drm); 268 + ret = drmm_mode_config_init(drm); 269 + if (ret) 270 + return ret; 273 271 274 - vc4_gem_init(drm); 272 + ret = vc4_gem_init(drm); 273 + if (ret) 274 + return ret; 275 275 276 276 ret = component_bind_all(dev, drm); 277 277 if (ret) 278 - goto gem_destroy; 278 + return ret; 279 279 280 280 ret = vc4_plane_create_additional_planes(drm); 281 281 if (ret) ··· 300 300 301 301 unbind_all: 302 302 component_unbind_all(dev, drm); 303 - gem_destroy: 304 - vc4_gem_destroy(drm); 305 - drm_mode_config_cleanup(drm); 306 - vc4_bo_cache_destroy(drm); 307 - dev_put: 308 - drm_dev_put(drm); 303 + 309 304 return ret; 310 305 } 311 306 312 307 static void vc4_drm_unbind(struct device *dev) 313 308 { 314 309 struct drm_device *drm = dev_get_drvdata(dev); 315 - struct vc4_dev *vc4 = to_vc4_dev(drm); 316 310 317 311 drm_dev_unregister(drm); 318 312 319 313 drm_atomic_helper_shutdown(drm); 320 - 321 - drm_mode_config_cleanup(drm); 322 - 323 - 
drm_atomic_private_obj_fini(&vc4->load_tracker); 324 - drm_atomic_private_obj_fini(&vc4->ctm_manager); 325 - 326 - drm_dev_put(drm); 327 314 } 328 315 329 316 static const struct component_master_ops vc4_drm_ops = {
+4 -5
drivers/gpu/drm/vc4/vc4_drv.h
··· 14 14 #include <drm/drm_device.h> 15 15 #include <drm/drm_encoder.h> 16 16 #include <drm/drm_gem_cma_helper.h> 17 + #include <drm/drm_managed.h> 17 18 #include <drm/drm_mm.h> 18 19 #include <drm/drm_modeset_lock.h> 19 20 ··· 72 71 }; 73 72 74 73 struct vc4_dev { 75 - struct drm_device *dev; 74 + struct drm_device base; 76 75 77 76 struct vc4_hvs *hvs; 78 77 struct vc4_v3d *v3d; ··· 235 234 static inline struct vc4_dev * 236 235 to_vc4_dev(struct drm_device *dev) 237 236 { 238 - return (struct vc4_dev *)dev->dev_private; 237 + return container_of(dev, struct vc4_dev, base); 239 238 } 240 239 241 240 struct vc4_bo { ··· 809 808 struct sg_table *sgt); 810 809 void *vc4_prime_vmap(struct drm_gem_object *obj); 811 810 int vc4_bo_cache_init(struct drm_device *dev); 812 - void vc4_bo_cache_destroy(struct drm_device *dev); 813 811 int vc4_bo_inc_usecnt(struct vc4_bo *bo); 814 812 void vc4_bo_dec_usecnt(struct vc4_bo *bo); 815 813 void vc4_bo_add_to_purgeable_pool(struct vc4_bo *bo); ··· 873 873 extern const struct dma_fence_ops vc4_fence_ops; 874 874 875 875 /* vc4_gem.c */ 876 - void vc4_gem_init(struct drm_device *dev); 877 - void vc4_gem_destroy(struct drm_device *dev); 876 + int vc4_gem_init(struct drm_device *dev); 878 877 int vc4_submit_cl_ioctl(struct drm_device *dev, void *data, 879 878 struct drm_file *file_priv); 880 879 int vc4_wait_seqno_ioctl(struct drm_device *dev, void *data,
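The `to_vc4_dev()` change above drops the `dev_private` pointer hop in favour of `container_of()` over an embedded `struct drm_device base`. The round-trip can be shown in standalone C (structs reduced to a minimum; these are illustrative definitions, not the kernel's):

```c
#include <stddef.h>

/* Userspace sketch of the container_of pattern replacing
 * dev->dev_private: the drm_device is embedded in the driver
 * struct, so the wrapper is recovered by pointer arithmetic. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct drm_device {
	int registered;
};

struct vc4_dev {
	struct drm_device base;	/* embedded, as in the hunk above */
	int bo_count;
};

struct vc4_dev *to_vc4_dev(struct drm_device *dev)
{
	return container_of(dev, struct vc4_dev, base);
}

/* Round-trip: embedded member back to its container. Returns 1 on success. */
int container_of_demo(void)
{
	struct vc4_dev vc4 = { .base = { .registered = 1 }, .bo_count = 7 };
	struct drm_device *dev = &vc4.base;

	return to_vc4_dev(dev) == &vc4 && to_vc4_dev(dev)->bo_count == 7;
}
```

Embedding also ties the driver struct's lifetime to the `drm_device` refcount, which is what lets `devm_drm_dev_alloc()` allocate both in one step in the vc4_drv.c hunk.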
+10 -9
drivers/gpu/drm/vc4/vc4_gem.c
··· 314 314 struct vc4_dev *vc4 = 315 315 container_of(work, struct vc4_dev, hangcheck.reset_work); 316 316 317 - vc4_save_hang_state(vc4->dev); 317 + vc4_save_hang_state(&vc4->base); 318 318 319 - vc4_reset(vc4->dev); 319 + vc4_reset(&vc4->base); 320 320 } 321 321 322 322 static void 323 323 vc4_hangcheck_elapsed(struct timer_list *t) 324 324 { 325 325 struct vc4_dev *vc4 = from_timer(vc4, t, hangcheck.timer); 326 - struct drm_device *dev = vc4->dev; 326 + struct drm_device *dev = &vc4->base; 327 327 uint32_t ct0ca, ct1ca; 328 328 unsigned long irqflags; 329 329 struct vc4_exec_info *bin_exec, *render_exec; ··· 1000 1000 list_del(&exec->head); 1001 1001 1002 1002 spin_unlock_irqrestore(&vc4->job_lock, irqflags); 1003 - vc4_complete_exec(vc4->dev, exec); 1003 + vc4_complete_exec(&vc4->base, exec); 1004 1004 spin_lock_irqsave(&vc4->job_lock, irqflags); 1005 1005 } 1006 1006 ··· 1258 1258 return 0; 1259 1259 1260 1260 fail: 1261 - vc4_complete_exec(vc4->dev, exec); 1261 + vc4_complete_exec(&vc4->base, exec); 1262 1262 1263 1263 return ret; 1264 1264 } 1265 1265 1266 - void 1267 - vc4_gem_init(struct drm_device *dev) 1266 + static void vc4_gem_destroy(struct drm_device *dev, void *unused); 1267 + int vc4_gem_init(struct drm_device *dev) 1268 1268 { 1269 1269 struct vc4_dev *vc4 = to_vc4_dev(dev); 1270 1270 ··· 1285 1285 1286 1286 INIT_LIST_HEAD(&vc4->purgeable.list); 1287 1287 mutex_init(&vc4->purgeable.lock); 1288 + 1289 + return drmm_add_action_or_reset(dev, vc4_gem_destroy, NULL); 1288 1290 } 1289 1291 1290 - void 1291 - vc4_gem_destroy(struct drm_device *dev) 1292 + static void vc4_gem_destroy(struct drm_device *dev, void *unused) 1292 1293 { 1293 1294 struct vc4_dev *vc4 = to_vc4_dev(dev); 1294 1295
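`vc4_gem_init()` now registers its teardown with `drmm_add_action_or_reset()` instead of exporting `vc4_gem_destroy()`, which is why the error-unwind labels disappear from `vc4_drm_bind()`. The mechanism — cleanup callbacks queued at init and run LIFO at release, with immediate cleanup if queuing itself fails — can be sketched in standalone C (illustrative names, not the DRM implementation):

```c
#include <stdlib.h>

/* Minimal sketch of the managed-release idea behind
 * drmm_add_action_or_reset(): actions registered during init run in
 * reverse order when the device is finally released. */
struct action {
	void (*release)(void *data);
	void *data;
	struct action *next;
};

static struct action *actions;	/* LIFO: last registered runs first */

int add_action(void (*release)(void *), void *data)
{
	struct action *a = malloc(sizeof(*a));

	if (!a) {
		release(data);	/* "_or_reset": clean up immediately on failure */
		return -1;
	}
	a->release = release;
	a->data = data;
	a->next = actions;
	actions = a;
	return 0;
}

void release_all(void)
{
	while (actions) {
		struct action *a = actions;

		actions = a->next;
		a->release(a->data);
		free(a);
	}
}

static char order[2];
static int pos;

static void mark(void *data)
{
	order[pos++] = *(char *)data;
}

/* Register two actions, release, and check reverse order. 1 on success. */
int drmm_demo(void)
{
	char a = 'a', b = 'b';

	if (add_action(mark, &a) || add_action(mark, &b))
		return 0;
	release_all();
	return pos == 2 && order[0] == 'b' && order[1] == 'a';
}
```

Reverse-order release mirrors init order, so later-registered state can safely depend on earlier-registered state during teardown.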
+2 -2
drivers/gpu/drm/vc4/vc4_hvs.c
··· 562 562 { 563 563 struct platform_device *pdev = to_platform_device(dev); 564 564 struct drm_device *drm = dev_get_drvdata(master); 565 - struct vc4_dev *vc4 = drm->dev_private; 565 + struct vc4_dev *vc4 = to_vc4_dev(drm); 566 566 struct vc4_hvs *hvs = NULL; 567 567 int ret; 568 568 u32 dispctrl; ··· 681 681 void *data) 682 682 { 683 683 struct drm_device *drm = dev_get_drvdata(master); 684 - struct vc4_dev *vc4 = drm->dev_private; 684 + struct vc4_dev *vc4 = to_vc4_dev(drm); 685 685 struct vc4_hvs *hvs = vc4->hvs; 686 686 687 687 if (drm_mm_node_allocated(&vc4->hvs->mitchell_netravali_filter))
+58 -22
drivers/gpu/drm/vc4/vc4_kms.c
··· 51 51 struct drm_private_obj *manager) 52 52 { 53 53 struct drm_device *dev = state->dev; 54 - struct vc4_dev *vc4 = dev->dev_private; 54 + struct vc4_dev *vc4 = to_vc4_dev(dev); 55 55 struct drm_private_state *priv_state; 56 56 int ret; 57 57 ··· 92 92 .atomic_duplicate_state = vc4_ctm_duplicate_state, 93 93 .atomic_destroy_state = vc4_ctm_destroy_state, 94 94 }; 95 + 96 + static void vc4_ctm_obj_fini(struct drm_device *dev, void *unused) 97 + { 98 + struct vc4_dev *vc4 = to_vc4_dev(dev); 99 + 100 + drm_atomic_private_obj_fini(&vc4->ctm_manager); 101 + } 102 + 103 + static int vc4_ctm_obj_init(struct vc4_dev *vc4) 104 + { 105 + struct vc4_ctm_state *ctm_state; 106 + 107 + drm_modeset_lock_init(&vc4->ctm_state_lock); 108 + 109 + ctm_state = kzalloc(sizeof(*ctm_state), GFP_KERNEL); 110 + if (!ctm_state) 111 + return -ENOMEM; 112 + 113 + drm_atomic_private_obj_init(&vc4->base, &vc4->ctm_manager, &ctm_state->base, 114 + &vc4_ctm_state_funcs); 115 + 116 + return drmm_add_action(&vc4->base, vc4_ctm_obj_fini, NULL); 117 + } 95 118 96 119 /* Converts a DRM S31.32 value to the HW S0.9 format. 
*/ 97 120 static u16 vc4_ctm_s31_32_to_s0_9(u64 in) ··· 632 609 .atomic_destroy_state = vc4_load_tracker_destroy_state, 633 610 }; 634 611 612 + static void vc4_load_tracker_obj_fini(struct drm_device *dev, void *unused) 613 + { 614 + struct vc4_dev *vc4 = to_vc4_dev(dev); 615 + 616 + if (!vc4->load_tracker_available) 617 + return; 618 + 619 + drm_atomic_private_obj_fini(&vc4->load_tracker); 620 + } 621 + 622 + static int vc4_load_tracker_obj_init(struct vc4_dev *vc4) 623 + { 624 + struct vc4_load_tracker_state *load_state; 625 + 626 + if (!vc4->load_tracker_available) 627 + return 0; 628 + 629 + load_state = kzalloc(sizeof(*load_state), GFP_KERNEL); 630 + if (!load_state) 631 + return -ENOMEM; 632 + 633 + drm_atomic_private_obj_init(&vc4->base, &vc4->load_tracker, 634 + &load_state->base, 635 + &vc4_load_tracker_state_funcs); 636 + 637 + return drmm_add_action(&vc4->base, vc4_load_tracker_obj_fini, NULL); 638 + } 639 + 635 640 #define NUM_OUTPUTS 6 636 641 #define NUM_CHANNELS 3 637 642 ··· 762 711 int vc4_kms_load(struct drm_device *dev) 763 712 { 764 713 struct vc4_dev *vc4 = to_vc4_dev(dev); 765 - struct vc4_ctm_state *ctm_state; 766 - struct vc4_load_tracker_state *load_state; 767 714 bool is_vc5 = of_device_is_compatible(dev->dev->of_node, 768 715 "brcm,bcm2711-vc5"); 769 716 int ret; ··· 800 751 dev->mode_config.async_page_flip = true; 801 752 dev->mode_config.allow_fb_modifiers = true; 802 753 803 - drm_modeset_lock_init(&vc4->ctm_state_lock); 754 + ret = vc4_ctm_obj_init(vc4); 755 + if (ret) 756 + return ret; 804 757 805 - ctm_state = kzalloc(sizeof(*ctm_state), GFP_KERNEL); 806 - if (!ctm_state) 807 - return -ENOMEM; 808 - 809 - drm_atomic_private_obj_init(dev, &vc4->ctm_manager, &ctm_state->base, 810 - &vc4_ctm_state_funcs); 811 - 812 - if (vc4->load_tracker_available) { 813 - load_state = kzalloc(sizeof(*load_state), GFP_KERNEL); 814 - if (!load_state) { 815 - drm_atomic_private_obj_fini(&vc4->ctm_manager); 816 - return -ENOMEM; 817 - } 818 - 819 - 
drm_atomic_private_obj_init(dev, &vc4->load_tracker, 820 - &load_state->base, 821 - &vc4_load_tracker_state_funcs); 822 - } 758 + ret = vc4_load_tracker_obj_init(vc4); 759 + if (ret) 760 + return ret; 823 761 824 762 drm_mode_config_reset(dev); 825 763
+6 -6
drivers/gpu/drm/vc4/vc4_v3d.c
··· 168 168 169 169 int vc4_v3d_get_bin_slot(struct vc4_dev *vc4) 170 170 { 171 - struct drm_device *dev = vc4->dev; 171 + struct drm_device *dev = &vc4->base; 172 172 unsigned long irqflags; 173 173 int slot; 174 174 uint64_t seqno = 0; ··· 246 246 INIT_LIST_HEAD(&list); 247 247 248 248 while (true) { 249 - struct vc4_bo *bo = vc4_bo_create(vc4->dev, size, true, 249 + struct vc4_bo *bo = vc4_bo_create(&vc4->base, size, true, 250 250 VC4_BO_TYPE_BIN); 251 251 252 252 if (IS_ERR(bo)) { ··· 361 361 struct vc4_v3d *v3d = dev_get_drvdata(dev); 362 362 struct vc4_dev *vc4 = v3d->vc4; 363 363 364 - vc4_irq_uninstall(vc4->dev); 364 + vc4_irq_uninstall(&vc4->base); 365 365 366 366 clk_disable_unprepare(v3d->clk); 367 367 ··· 378 378 if (ret != 0) 379 379 return ret; 380 380 381 - vc4_v3d_init_hw(vc4->dev); 381 + vc4_v3d_init_hw(&vc4->base); 382 382 383 383 /* We disabled the IRQ as part of vc4_irq_uninstall in suspend. */ 384 - enable_irq(vc4->dev->irq); 385 - vc4_irq_postinstall(vc4->dev); 384 + enable_irq(vc4->base.irq); 385 + vc4_irq_postinstall(&vc4->base); 386 386 387 387 return 0; 388 388 }
-67
drivers/gpu/ipu-v3/ipu-common.c
··· 133 133 } 134 134 EXPORT_SYMBOL_GPL(ipu_pixelformat_to_colorspace); 135 135 136 - bool ipu_pixelformat_is_planar(u32 pixelformat) 137 - { 138 - switch (pixelformat) { 139 - case V4L2_PIX_FMT_YUV420: 140 - case V4L2_PIX_FMT_YVU420: 141 - case V4L2_PIX_FMT_YUV422P: 142 - case V4L2_PIX_FMT_NV12: 143 - case V4L2_PIX_FMT_NV21: 144 - case V4L2_PIX_FMT_NV16: 145 - case V4L2_PIX_FMT_NV61: 146 - return true; 147 - } 148 - 149 - return false; 150 - } 151 - EXPORT_SYMBOL_GPL(ipu_pixelformat_is_planar); 152 - 153 - enum ipu_color_space ipu_mbus_code_to_colorspace(u32 mbus_code) 154 - { 155 - switch (mbus_code & 0xf000) { 156 - case 0x1000: 157 - return IPUV3_COLORSPACE_RGB; 158 - case 0x2000: 159 - return IPUV3_COLORSPACE_YUV; 160 - default: 161 - return IPUV3_COLORSPACE_UNKNOWN; 162 - } 163 - } 164 - EXPORT_SYMBOL_GPL(ipu_mbus_code_to_colorspace); 165 - 166 - int ipu_stride_to_bytes(u32 pixel_stride, u32 pixelformat) 167 - { 168 - switch (pixelformat) { 169 - case V4L2_PIX_FMT_YUV420: 170 - case V4L2_PIX_FMT_YVU420: 171 - case V4L2_PIX_FMT_YUV422P: 172 - case V4L2_PIX_FMT_NV12: 173 - case V4L2_PIX_FMT_NV21: 174 - case V4L2_PIX_FMT_NV16: 175 - case V4L2_PIX_FMT_NV61: 176 - /* 177 - * for the planar YUV formats, the stride passed to 178 - * cpmem must be the stride in bytes of the Y plane. 179 - * And all the planar YUV formats have an 8-bit 180 - * Y component. 
181 - */ 182 - return (8 * pixel_stride) >> 3; 183 - case V4L2_PIX_FMT_RGB565: 184 - case V4L2_PIX_FMT_YUYV: 185 - case V4L2_PIX_FMT_UYVY: 186 - return (16 * pixel_stride) >> 3; 187 - case V4L2_PIX_FMT_BGR24: 188 - case V4L2_PIX_FMT_RGB24: 189 - return (24 * pixel_stride) >> 3; 190 - case V4L2_PIX_FMT_BGR32: 191 - case V4L2_PIX_FMT_RGB32: 192 - case V4L2_PIX_FMT_XBGR32: 193 - case V4L2_PIX_FMT_XRGB32: 194 - return (32 * pixel_stride) >> 3; 195 - default: 196 - break; 197 - } 198 - 199 - return -EINVAL; 200 - } 201 - EXPORT_SYMBOL_GPL(ipu_stride_to_bytes); 202 - 203 136 int ipu_degrees_to_rot_mode(enum ipu_rotate_mode *mode, int degrees, 204 137 bool hflip, bool vflip) 205 138 {
+1 -1
drivers/hv/hv_balloon.c
··· 1275 1275 1276 1276 /* Refuse to balloon below the floor. */ 1277 1277 if (avail_pages < num_pages || avail_pages - num_pages < floor) { 1278 - pr_warn("Balloon request will be partially fulfilled. %s\n", 1278 + pr_info("Balloon request will be partially fulfilled. %s\n", 1279 1279 avail_pages < num_pages ? "Not enough memory." : 1280 1280 "Balloon floor reached."); 1281 1281
+1 -1
drivers/i2c/busses/Kconfig
··· 733 733 734 734 config I2C_MLXBF 735 735 tristate "Mellanox BlueField I2C controller" 736 - depends on ARM64 736 + depends on MELLANOX_PLATFORM && ARM64 737 737 help 738 738 Enabling this option will add I2C SMBus support for Mellanox BlueField 739 739 system.
+18 -32
drivers/i2c/busses/i2c-designware-slave.c
··· 159 159 u32 raw_stat, stat, enabled, tmp; 160 160 u8 val = 0, slave_activity; 161 161 162 - regmap_read(dev->map, DW_IC_INTR_STAT, &stat); 163 162 regmap_read(dev->map, DW_IC_ENABLE, &enabled); 164 163 regmap_read(dev->map, DW_IC_RAW_INTR_STAT, &raw_stat); 165 164 regmap_read(dev->map, DW_IC_STATUS, &tmp); ··· 167 168 if (!enabled || !(raw_stat & ~DW_IC_INTR_ACTIVITY) || !dev->slave) 168 169 return 0; 169 170 171 + stat = i2c_dw_read_clear_intrbits_slave(dev); 170 172 dev_dbg(dev->dev, 171 173 "%#x STATUS SLAVE_ACTIVITY=%#x : RAW_INTR_STAT=%#x : INTR_STAT=%#x\n", 172 174 enabled, slave_activity, raw_stat, stat); 173 175 174 - if ((stat & DW_IC_INTR_RX_FULL) && (stat & DW_IC_INTR_STOP_DET)) 175 - i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_REQUESTED, &val); 176 + if (stat & DW_IC_INTR_RX_FULL) { 177 + if (dev->status != STATUS_WRITE_IN_PROGRESS) { 178 + dev->status = STATUS_WRITE_IN_PROGRESS; 179 + i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_REQUESTED, 180 + &val); 181 + } 182 + 183 + regmap_read(dev->map, DW_IC_DATA_CMD, &tmp); 184 + val = tmp; 185 + if (!i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_RECEIVED, 186 + &val)) 187 + dev_vdbg(dev->dev, "Byte %X acked!", val); 188 + } 176 189 177 190 if (stat & DW_IC_INTR_RD_REQ) { 178 191 if (slave_activity) { 179 - if (stat & DW_IC_INTR_RX_FULL) { 180 - regmap_read(dev->map, DW_IC_DATA_CMD, &tmp); 181 - val = tmp; 192 + regmap_read(dev->map, DW_IC_CLR_RD_REQ, &tmp); 182 193 183 - if (!i2c_slave_event(dev->slave, 184 - I2C_SLAVE_WRITE_RECEIVED, 185 - &val)) { 186 - dev_vdbg(dev->dev, "Byte %X acked!", 187 - val); 188 - } 189 - regmap_read(dev->map, DW_IC_CLR_RD_REQ, &tmp); 190 - stat = i2c_dw_read_clear_intrbits_slave(dev); 191 - } else { 192 - regmap_read(dev->map, DW_IC_CLR_RD_REQ, &tmp); 193 - regmap_read(dev->map, DW_IC_CLR_RX_UNDER, &tmp); 194 - stat = i2c_dw_read_clear_intrbits_slave(dev); 195 - } 194 + dev->status = STATUS_READ_IN_PROGRESS; 196 195 if (!i2c_slave_event(dev->slave, 197 196 
I2C_SLAVE_READ_REQUESTED, 198 197 &val)) ··· 202 205 if (!i2c_slave_event(dev->slave, I2C_SLAVE_READ_PROCESSED, 203 206 &val)) 204 207 regmap_read(dev->map, DW_IC_CLR_RX_DONE, &tmp); 205 - 206 - i2c_slave_event(dev->slave, I2C_SLAVE_STOP, &val); 207 - stat = i2c_dw_read_clear_intrbits_slave(dev); 208 - return 1; 209 208 } 210 209 211 - if (stat & DW_IC_INTR_RX_FULL) { 212 - regmap_read(dev->map, DW_IC_DATA_CMD, &tmp); 213 - val = tmp; 214 - if (!i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_RECEIVED, 215 - &val)) 216 - dev_vdbg(dev->dev, "Byte %X acked!", val); 217 - } else { 210 + if (stat & DW_IC_INTR_STOP_DET) { 211 + dev->status = STATUS_IDLE; 218 212 i2c_slave_event(dev->slave, I2C_SLAVE_STOP, &val); 219 - stat = i2c_dw_read_clear_intrbits_slave(dev); 220 213 } 221 214 222 215 return 1; ··· 217 230 struct dw_i2c_dev *dev = dev_id; 218 231 int ret; 219 232 220 - i2c_dw_read_clear_intrbits_slave(dev); 221 233 ret = i2c_dw_irq_handler_slave(dev); 222 234 if (ret > 0) 223 235 complete(&dev->cmd_complete);
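The designware slave rework above replaces the fragile combined RX_FULL/STOP_DET checks with explicit `dev->status` tracking: the first RX_FULL of a transfer emits WRITE_REQUESTED before the per-byte WRITE_RECEIVED, and STOP_DET returns the handler to idle. A standalone sketch of that state machine (bit positions and the event accounting are illustrative, not the hardware's register layout):

```c
/* State tracking from the IRQ-handler rework above: RX_FULL emits
 * WRITE_REQUESTED once per transfer then WRITE_RECEIVED per byte;
 * STOP_DET resets to idle. Userspace sketch, not driver code. */
enum slave_status { STATUS_IDLE, STATUS_WRITE_IN_PROGRESS };

#define INTR_RX_FULL	(1u << 2)	/* illustrative bit positions */
#define INTR_STOP_DET	(1u << 9)

/* Returns the number of slave events emitted for this interrupt. */
int slave_irq(enum slave_status *status, unsigned int stat)
{
	int events = 0;

	if (stat & INTR_RX_FULL) {
		if (*status != STATUS_WRITE_IN_PROGRESS) {
			*status = STATUS_WRITE_IN_PROGRESS;
			events++;	/* I2C_SLAVE_WRITE_REQUESTED */
		}
		events++;		/* I2C_SLAVE_WRITE_RECEIVED */
	}

	if (stat & INTR_STOP_DET) {
		*status = STATUS_IDLE;
		events++;		/* I2C_SLAVE_STOP */
	}

	return events;
}

/* Two RX bytes then a stop; 1 if the transitions match expectations. */
int slave_irq_demo(void)
{
	enum slave_status s = STATUS_IDLE;

	return slave_irq(&s, INTR_RX_FULL) == 2 &&	/* req + byte */
	       slave_irq(&s, INTR_RX_FULL) == 1 &&	/* byte only */
	       slave_irq(&s, INTR_STOP_DET) == 1 &&	/* stop */
	       s == STATUS_IDLE;
}
```

Keeping the state in the device struct is what lets the handler stop re-reading and re-clearing the interrupt status in multiple places, which the hunk above also removes.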
+86 -118
drivers/i2c/busses/i2c-mlxbf.c
··· 62 62 * Master. Default value is set to 400MHz. 63 63 */ 64 64 #define MLXBF_I2C_TYU_PLL_OUT_FREQ (400 * 1000 * 1000) 65 - /* Reference clock for Bluefield 1 - 156 MHz. */ 66 - #define MLXBF_I2C_TYU_PLL_IN_FREQ (156 * 1000 * 1000) 67 - /* Reference clock for BlueField 2 - 200 MHz. */ 68 - #define MLXBF_I2C_YU_PLL_IN_FREQ (200 * 1000 * 1000) 65 + /* Reference clock for Bluefield - 156 MHz. */ 66 + #define MLXBF_I2C_PLL_IN_FREQ (156 * 1000 * 1000) 69 67 70 68 /* Constant used to determine the PLL frequency. */ 71 69 #define MLNXBF_I2C_COREPLL_CONST 16384 ··· 487 489 488 490 #define MLXBF_I2C_FREQUENCY_1GHZ 1000000000 489 491 490 - static void mlxbf_i2c_write(void __iomem *io, int reg, u32 val) 491 - { 492 - writel(val, io + reg); 493 - } 494 - 495 - static u32 mlxbf_i2c_read(void __iomem *io, int reg) 496 - { 497 - return readl(io + reg); 498 - } 499 - 500 - /* 501 - * This function is used to read data from Master GW Data Descriptor. 502 - * Data bytes in the Master GW Data Descriptor are shifted left so the 503 - * data starts at the MSB of the descriptor registers as set by the 504 - * underlying hardware. TYU_READ_DATA enables byte swapping while 505 - * reading data bytes, and MUST be called by the SMBus read routines 506 - * to copy data from the 32 * 32-bit HW Data registers a.k.a Master GW 507 - * Data Descriptor. 508 - */ 509 - static u32 mlxbf_i2c_read_data(void __iomem *io, int reg) 510 - { 511 - return (u32)be32_to_cpu(mlxbf_i2c_read(io, reg)); 512 - } 513 - 514 - /* 515 - * This function is used to write data to the Master GW Data Descriptor. 516 - * Data copied to the Master GW Data Descriptor MUST be shifted left so 517 - * the data starts at the MSB of the descriptor registers as required by 518 - * the underlying hardware. TYU_WRITE_DATA enables byte swapping when 519 - * writing data bytes, and MUST be called by the SMBus write routines to 520 - * copy data to the 32 * 32-bit HW Data registers a.k.a Master GW Data 521 - * Descriptor. 
522 - */ 523 - static void mlxbf_i2c_write_data(void __iomem *io, int reg, u32 val) 524 - { 525 - mlxbf_i2c_write(io, reg, (u32)cpu_to_be32(val)); 526 - } 527 - 528 492 /* 529 493 * Function to poll a set of bits at a specific address; it checks whether 530 494 * the bits are equal to zero when eq_zero is set to 'true', and not equal ··· 501 541 timeout = (timeout / MLXBF_I2C_POLL_FREQ_IN_USEC) + 1; 502 542 503 543 do { 504 - bits = mlxbf_i2c_read(io, addr) & mask; 544 + bits = readl(io + addr) & mask; 505 545 if (eq_zero ? bits == 0 : bits != 0) 506 546 return eq_zero ? 1 : bits; 507 547 udelay(MLXBF_I2C_POLL_FREQ_IN_USEC); ··· 569 609 MLXBF_I2C_SMBUS_TIMEOUT); 570 610 571 611 /* Read cause status bits. */ 572 - cause_status_bits = mlxbf_i2c_read(priv->mst_cause->io, 573 - MLXBF_I2C_CAUSE_ARBITER); 612 + cause_status_bits = readl(priv->mst_cause->io + 613 + MLXBF_I2C_CAUSE_ARBITER); 574 614 cause_status_bits &= MLXBF_I2C_CAUSE_MASTER_ARBITER_BITS_MASK; 575 615 576 616 /* 577 617 * Parse both Cause and Master GW bits, then return transaction status. 578 618 */ 579 619 580 - master_status_bits = mlxbf_i2c_read(priv->smbus->io, 581 - MLXBF_I2C_SMBUS_MASTER_STATUS); 620 + master_status_bits = readl(priv->smbus->io + 621 + MLXBF_I2C_SMBUS_MASTER_STATUS); 582 622 master_status_bits &= MLXBF_I2C_SMBUS_MASTER_STATUS_MASK; 583 623 584 624 if (mlxbf_i2c_smbus_transaction_success(master_status_bits, ··· 609 649 610 650 aligned_length = round_up(length, 4); 611 651 612 - /* Copy data bytes from 4-byte aligned source buffer. */ 652 + /* 653 + * Copy data bytes from 4-byte aligned source buffer. 654 + * Data copied to the Master GW Data Descriptor MUST be shifted 655 + * left so the data starts at the MSB of the descriptor registers 656 + * as required by the underlying hardware. Enable byte swapping 657 + * when writing data bytes to the 32 * 32-bit HW Data registers 658 + * a.k.a Master GW Data Descriptor. 
659 + */ 613 660 for (offset = 0; offset < aligned_length; offset += sizeof(u32)) { 614 661 data32 = *((u32 *)(data + offset)); 615 - mlxbf_i2c_write_data(priv->smbus->io, addr + offset, data32); 662 + iowrite32be(data32, priv->smbus->io + addr + offset); 616 663 } 617 664 } 618 665 ··· 631 664 632 665 mask = sizeof(u32) - 1; 633 666 667 + /* 668 + * Data bytes in the Master GW Data Descriptor are shifted left 669 + * so the data starts at the MSB of the descriptor registers as 670 + * set by the underlying hardware. Enable byte swapping while 671 + * reading data bytes from the 32 * 32-bit HW Data registers 672 + * a.k.a Master GW Data Descriptor. 673 + */ 674 + 634 675 for (offset = 0; offset < (length & ~mask); offset += sizeof(u32)) { 635 - data32 = mlxbf_i2c_read_data(priv->smbus->io, addr + offset); 676 + data32 = ioread32be(priv->smbus->io + addr + offset); 636 677 *((u32 *)(data + offset)) = data32; 637 678 } 638 679 639 680 if (!(length & mask)) 640 681 return; 641 682 642 - data32 = mlxbf_i2c_read_data(priv->smbus->io, addr + offset); 683 + data32 = ioread32be(priv->smbus->io + addr + offset); 643 684 644 685 for (byte = 0; byte < (length & mask); byte++) { 645 686 data[offset + byte] = data32 & GENMASK(7, 0); ··· 673 698 command |= rol32(pec_en, MLXBF_I2C_MASTER_SEND_PEC_SHIFT); 674 699 675 700 /* Clear status bits. */ 676 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_MASTER_STATUS, 0x0); 701 + writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_STATUS); 677 702 /* Set the cause data. */ 678 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_CAUSE_OR_CLEAR, ~0x0); 703 + writel(~0x0, priv->smbus->io + MLXBF_I2C_CAUSE_OR_CLEAR); 679 704 /* Zero PEC byte. */ 680 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_MASTER_PEC, 0x0); 705 + writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_PEC); 681 706 /* Zero byte count. 
*/ 682 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_RS_BYTES, 0x0); 707 + writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_RS_BYTES); 683 708 684 709 /* GW activation. */ 685 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_MASTER_GW, command); 710 + writel(command, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_GW); 686 711 687 712 /* 688 713 * Poll master status and check status bits. An ACK is sent when ··· 798 823 * needs to be 'manually' reset. This should be removed in 799 824 * next tag integration. 800 825 */ 801 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_MASTER_FSM, 802 - MLXBF_I2C_SMBUS_MASTER_FSM_PS_STATE_MASK); 826 + writel(MLXBF_I2C_SMBUS_MASTER_FSM_PS_STATE_MASK, 827 + priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_FSM); 803 828 } 804 829 805 830 return ret; ··· 1088 1113 timer |= mlxbf_i2c_set_timer(priv, timings->scl_low, 1089 1114 false, MLXBF_I2C_MASK_16, 1090 1115 MLXBF_I2C_SHIFT_16); 1091 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_TIMER_SCL_LOW_SCL_HIGH, 1092 - timer); 1116 + writel(timer, priv->smbus->io + 1117 + MLXBF_I2C_SMBUS_TIMER_SCL_LOW_SCL_HIGH); 1093 1118 1094 1119 timer = mlxbf_i2c_set_timer(priv, timings->sda_rise, false, 1095 1120 MLXBF_I2C_MASK_8, MLXBF_I2C_SHIFT_0); ··· 1099 1124 MLXBF_I2C_MASK_8, MLXBF_I2C_SHIFT_16); 1100 1125 timer |= mlxbf_i2c_set_timer(priv, timings->scl_fall, false, 1101 1126 MLXBF_I2C_MASK_8, MLXBF_I2C_SHIFT_24); 1102 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_TIMER_FALL_RISE_SPIKE, 1103 - timer); 1127 + writel(timer, priv->smbus->io + 1128 + MLXBF_I2C_SMBUS_TIMER_FALL_RISE_SPIKE); 1104 1129 1105 1130 timer = mlxbf_i2c_set_timer(priv, timings->hold_start, true, 1106 1131 MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_0); 1107 1132 timer |= mlxbf_i2c_set_timer(priv, timings->hold_data, true, 1108 1133 MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_16); 1109 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_TIMER_THOLD, timer); 1134 + writel(timer, priv->smbus->io + MLXBF_I2C_SMBUS_TIMER_THOLD); 1110 1135 1111 1136 
timer = mlxbf_i2c_set_timer(priv, timings->setup_start, true, 1112 1137 MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_0); 1113 1138 timer |= mlxbf_i2c_set_timer(priv, timings->setup_stop, true, 1114 1139 MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_16); 1115 - mlxbf_i2c_write(priv->smbus->io, 1116 - MLXBF_I2C_SMBUS_TIMER_TSETUP_START_STOP, timer); 1140 + writel(timer, priv->smbus->io + 1141 + MLXBF_I2C_SMBUS_TIMER_TSETUP_START_STOP); 1117 1142 1118 1143 timer = mlxbf_i2c_set_timer(priv, timings->setup_data, true, 1119 1144 MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_0); 1120 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_TIMER_TSETUP_DATA, 1121 - timer); 1145 + writel(timer, priv->smbus->io + MLXBF_I2C_SMBUS_TIMER_TSETUP_DATA); 1122 1146 1123 1147 timer = mlxbf_i2c_set_timer(priv, timings->buf, false, 1124 1148 MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_0); 1125 1149 timer |= mlxbf_i2c_set_timer(priv, timings->thigh_max, false, 1126 1150 MLXBF_I2C_MASK_16, MLXBF_I2C_SHIFT_16); 1127 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_THIGH_MAX_TBUF, 1128 - timer); 1151 + writel(timer, priv->smbus->io + MLXBF_I2C_SMBUS_THIGH_MAX_TBUF); 1129 1152 1130 1153 timer = timings->timeout; 1131 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SCL_LOW_TIMEOUT, 1132 - timer); 1154 + writel(timer, priv->smbus->io + MLXBF_I2C_SMBUS_SCL_LOW_TIMEOUT); 1133 1155 } 1134 1156 1135 1157 enum mlxbf_i2c_timings_config { ··· 1398 1426 * platform firmware; disabling the bus might compromise the system 1399 1427 * functionality. 
1400 1428 */ 1401 - config_reg = mlxbf_i2c_read(gpio_res->io, 1402 - MLXBF_I2C_GPIO_0_FUNC_EN_0); 1429 + config_reg = readl(gpio_res->io + MLXBF_I2C_GPIO_0_FUNC_EN_0); 1403 1430 config_reg = MLXBF_I2C_GPIO_SMBUS_GW_ASSERT_PINS(priv->bus, 1404 1431 config_reg); 1405 - mlxbf_i2c_write(gpio_res->io, MLXBF_I2C_GPIO_0_FUNC_EN_0, 1406 - config_reg); 1432 + writel(config_reg, gpio_res->io + MLXBF_I2C_GPIO_0_FUNC_EN_0); 1407 1433 1408 - config_reg = mlxbf_i2c_read(gpio_res->io, 1409 - MLXBF_I2C_GPIO_0_FORCE_OE_EN); 1434 + config_reg = readl(gpio_res->io + MLXBF_I2C_GPIO_0_FORCE_OE_EN); 1410 1435 config_reg = MLXBF_I2C_GPIO_SMBUS_GW_RESET_PINS(priv->bus, 1411 1436 config_reg); 1412 - mlxbf_i2c_write(gpio_res->io, MLXBF_I2C_GPIO_0_FORCE_OE_EN, 1413 - config_reg); 1437 + writel(config_reg, gpio_res->io + MLXBF_I2C_GPIO_0_FORCE_OE_EN); 1414 1438 1415 1439 mutex_unlock(gpio_res->lock); 1416 1440 ··· 1420 1452 u32 corepll_val; 1421 1453 u16 core_f; 1422 1454 1423 - pad_frequency = MLXBF_I2C_TYU_PLL_IN_FREQ; 1455 + pad_frequency = MLXBF_I2C_PLL_IN_FREQ; 1424 1456 1425 - corepll_val = mlxbf_i2c_read(corepll_res->io, 1426 - MLXBF_I2C_CORE_PLL_REG1); 1457 + corepll_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1); 1427 1458 1428 1459 /* Get Core PLL configuration bits. 
*/ 1429 1460 core_f = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_F_TYU_SHIFT) & ··· 1455 1488 u8 core_od, core_r; 1456 1489 u32 core_f; 1457 1490 1458 - pad_frequency = MLXBF_I2C_YU_PLL_IN_FREQ; 1491 + pad_frequency = MLXBF_I2C_PLL_IN_FREQ; 1459 1492 1460 - corepll_reg1_val = mlxbf_i2c_read(corepll_res->io, 1461 - MLXBF_I2C_CORE_PLL_REG1); 1462 - corepll_reg2_val = mlxbf_i2c_read(corepll_res->io, 1463 - MLXBF_I2C_CORE_PLL_REG2); 1493 + corepll_reg1_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1); 1494 + corepll_reg2_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG2); 1464 1495 1465 1496 /* Get Core PLL configuration bits */ 1466 1497 core_f = rol32(corepll_reg1_val, MLXBF_I2C_COREPLL_CORE_F_YU_SHIFT) & ··· 1550 1585 * (7-bit address, 1 status bit (1 if enabled, 0 if not)). 1551 1586 */ 1552 1587 for (reg = 0; reg < reg_cnt; reg++) { 1553 - slave_reg = mlxbf_i2c_read(priv->smbus->io, 1588 + slave_reg = readl(priv->smbus->io + 1554 1589 MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG + reg * 0x4); 1555 1590 /* 1556 1591 * Each register holds 4 slave addresses. So, we have to keep ··· 1608 1643 1609 1644 /* Enable the slave address and update the register. */ 1610 1645 slave_reg |= (1 << MLXBF_I2C_SMBUS_SLAVE_ADDR_EN_BIT) << (byte * 8); 1611 - mlxbf_i2c_write(priv->smbus->io, 1612 - MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG + reg * 0x4, slave_reg); 1646 + writel(slave_reg, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG + 1647 + reg * 0x4); 1613 1648 1614 1649 return 0; 1615 1650 } ··· 1633 1668 * (7-bit address, 1 status bit (1 if enabled, 0 if not)). 1634 1669 */ 1635 1670 for (reg = 0; reg < reg_cnt; reg++) { 1636 - slave_reg = mlxbf_i2c_read(priv->smbus->io, 1671 + slave_reg = readl(priv->smbus->io + 1637 1672 MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG + reg * 0x4); 1638 1673 1639 1674 /* Check whether the address slots are empty. */ ··· 1673 1708 1674 1709 /* Cleanup the slave address slot. 
*/ 1675 1710 slave_reg &= ~(GENMASK(7, 0) << (slave_byte * 8)); 1676 - mlxbf_i2c_write(priv->smbus->io, 1677 - MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG + reg * 0x4, slave_reg); 1711 + writel(slave_reg, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG + 1712 + reg * 0x4); 1678 1713 1679 1714 return 0; 1680 1715 } ··· 1766 1801 int ret; 1767 1802 1768 1803 /* Reset FSM. */ 1769 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_FSM, 0); 1804 + writel(0, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_FSM); 1770 1805 1771 1806 /* 1772 1807 * Enable slave cause interrupt bits. Drive ··· 1775 1810 * masters issue a Read and Write, respectively. But, clear all 1776 1811 * interrupts first. 1777 1812 */ 1778 - mlxbf_i2c_write(priv->slv_cause->io, 1779 - MLXBF_I2C_CAUSE_OR_CLEAR, ~0); 1813 + writel(~0, priv->slv_cause->io + MLXBF_I2C_CAUSE_OR_CLEAR); 1780 1814 int_reg = MLXBF_I2C_CAUSE_READ_WAIT_FW_RESPONSE; 1781 1815 int_reg |= MLXBF_I2C_CAUSE_WRITE_SUCCESS; 1782 - mlxbf_i2c_write(priv->slv_cause->io, 1783 - MLXBF_I2C_CAUSE_OR_EVTEN0, int_reg); 1816 + writel(int_reg, priv->slv_cause->io + MLXBF_I2C_CAUSE_OR_EVTEN0); 1784 1817 1785 1818 /* Finally, set the 'ready' bit to start handling transactions. */ 1786 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_READY, 0x1); 1819 + writel(0x1, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_READY); 1787 1820 1788 1821 /* Initialize the cause coalesce resource. */ 1789 1822 ret = mlxbf_i2c_init_coalesce(pdev, priv); ··· 1807 1844 MLXBF_I2C_CAUSE_YU_SLAVE_BIT : 1808 1845 priv->bus + MLXBF_I2C_CAUSE_TYU_SLAVE_BIT; 1809 1846 1810 - coalesce0_reg = mlxbf_i2c_read(priv->coalesce->io, 1811 - MLXBF_I2C_CAUSE_COALESCE_0); 1847 + coalesce0_reg = readl(priv->coalesce->io + MLXBF_I2C_CAUSE_COALESCE_0); 1812 1848 is_set = coalesce0_reg & (1 << slave_shift); 1813 1849 1814 1850 if (!is_set) 1815 1851 return false; 1816 1852 1817 1853 /* Check the source of the interrupt, i.e. whether a Read or Write. 
*/ 1818 - cause_reg = mlxbf_i2c_read(priv->slv_cause->io, 1819 - MLXBF_I2C_CAUSE_ARBITER); 1854 + cause_reg = readl(priv->slv_cause->io + MLXBF_I2C_CAUSE_ARBITER); 1820 1855 if (cause_reg & MLXBF_I2C_CAUSE_READ_WAIT_FW_RESPONSE) 1821 1856 *read = true; 1822 1857 else if (cause_reg & MLXBF_I2C_CAUSE_WRITE_SUCCESS) 1823 1858 *write = true; 1824 1859 1825 1860 /* Clear cause bits. */ 1826 - mlxbf_i2c_write(priv->slv_cause->io, MLXBF_I2C_CAUSE_OR_CLEAR, ~0x0); 1861 + writel(~0x0, priv->slv_cause->io + MLXBF_I2C_CAUSE_OR_CLEAR); 1827 1862 1828 1863 return true; 1829 1864 } ··· 1861 1900 * address, if supplied. 1862 1901 */ 1863 1902 if (recv_bytes > 0) { 1864 - data32 = mlxbf_i2c_read_data(priv->smbus->io, 1865 - MLXBF_I2C_SLAVE_DATA_DESC_ADDR); 1903 + data32 = ioread32be(priv->smbus->io + 1904 + MLXBF_I2C_SLAVE_DATA_DESC_ADDR); 1866 1905 1867 1906 /* Parse the received bytes. */ 1868 1907 switch (recv_bytes) { ··· 1927 1966 control32 |= rol32(write_size, MLXBF_I2C_SLAVE_WRITE_BYTES_SHIFT); 1928 1967 control32 |= rol32(pec_en, MLXBF_I2C_SLAVE_SEND_PEC_SHIFT); 1929 1968 1930 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_GW, control32); 1969 + writel(control32, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_GW); 1931 1970 1932 1971 /* 1933 1972 * Wait until the transfer is completed; the driver will wait ··· 1936 1975 mlxbf_smbus_slave_wait_for_idle(priv, MLXBF_I2C_SMBUS_TIMEOUT); 1937 1976 1938 1977 /* Release the Slave GW. 
*/ 1939 - mlxbf_i2c_write(priv->smbus->io, 1940 - MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES, 0x0); 1941 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_PEC, 0x0); 1942 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_READY, 0x1); 1978 + writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES); 1979 + writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_PEC); 1980 + writel(0x1, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_READY); 1943 1981 1944 1982 return 0; 1945 1983 } ··· 1983 2023 i2c_slave_event(slave, I2C_SLAVE_STOP, &value); 1984 2024 1985 2025 /* Release the Slave GW. */ 1986 - mlxbf_i2c_write(priv->smbus->io, 1987 - MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES, 0x0); 1988 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_PEC, 0x0); 1989 - mlxbf_i2c_write(priv->smbus->io, MLXBF_I2C_SMBUS_SLAVE_READY, 0x1); 2026 + writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES); 2027 + writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_PEC); 2028 + writel(0x1, priv->smbus->io + MLXBF_I2C_SMBUS_SLAVE_READY); 1990 2029 1991 2030 return ret; 1992 2031 } ··· 2020 2061 * slave, if the higher 8 bits are sent then the slave expect N bytes 2021 2062 * from the master. 
2022 2063 */ 2023 - rw_bytes_reg = mlxbf_i2c_read(priv->smbus->io, 2024 - MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES); 2064 + rw_bytes_reg = readl(priv->smbus->io + 2065 + MLXBF_I2C_SMBUS_SLAVE_RS_MASTER_BYTES); 2025 2066 recv_bytes = (rw_bytes_reg >> 8) & GENMASK(7, 0); 2026 2067 2027 2068 /* ··· 2223 2264 2224 2265 MODULE_DEVICE_TABLE(of, mlxbf_i2c_dt_ids); 2225 2266 2267 + #ifdef CONFIG_ACPI 2226 2268 static const struct acpi_device_id mlxbf_i2c_acpi_ids[] = { 2227 2269 { "MLNXBF03", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_1] }, 2228 2270 { "MLNXBF23", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_2] }, ··· 2265 2305 2266 2306 return ret; 2267 2307 } 2308 + #else 2309 + static int mlxbf_i2c_acpi_probe(struct device *dev, struct mlxbf_i2c_priv *priv) 2310 + { 2311 + return -ENOENT; 2312 + } 2313 + #endif /* CONFIG_ACPI */ 2268 2314 2269 2315 static int mlxbf_i2c_of_probe(struct device *dev, struct mlxbf_i2c_priv *priv) 2270 2316 { ··· 2439 2473 .driver = { 2440 2474 .name = "i2c-mlxbf", 2441 2475 .of_match_table = mlxbf_i2c_dt_ids, 2476 + #ifdef CONFIG_ACPI 2442 2477 .acpi_match_table = ACPI_PTR(mlxbf_i2c_acpi_ids), 2478 + #endif /* CONFIG_ACPI */ 2443 2479 }, 2444 2480 }; 2445 2481 ··· 2470 2502 module_exit(mlxbf_i2c_exit); 2471 2503 2472 2504 MODULE_DESCRIPTION("Mellanox BlueField I2C bus driver"); 2473 - MODULE_AUTHOR("Khalil Blaiech <kblaiech@mellanox.com>"); 2505 + MODULE_AUTHOR("Khalil Blaiech <kblaiech@nvidia.com>"); 2474 2506 MODULE_LICENSE("GPL v2");
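The slave-address hunks above pack four slave slots into each 32-bit `MLXBF_I2C_SMBUS_SLAVE_ADDR_CFG` register: one byte per slot, a 7-bit address plus an enable bit. A minimal userspace sketch of that packing, assuming the enable bit is bit 7 of each byte (the helper names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define SLAVE_ADDR_EN_BIT 7   /* assumed: bit 7 of each byte enables the slot */

/* Enable 7-bit address 'addr' in byte slot 'byte' (0..3) of 'reg'. */
static uint32_t slave_slot_set(uint32_t reg, unsigned int byte, uint8_t addr)
{
	reg &= ~(0xffu << (byte * 8));                 /* clear the whole slot */
	reg |= ((uint32_t)(addr & 0x7f)) << (byte * 8);
	reg |= (1u << SLAVE_ADDR_EN_BIT) << (byte * 8);
	return reg;
}

/* Disable slot 'byte' by clearing all 8 bits, as the remove path does
 * with GENMASK(7, 0) << (slave_byte * 8). */
static uint32_t slave_slot_clear(uint32_t reg, unsigned int byte)
{
	return reg & ~(0xffu << (byte * 8));
}
```

The value returned would then be written back with `writel()`, as in the converted driver code.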
+4 -4
drivers/i2c/busses/i2c-mt65xx.c
··· 475 475 { 476 476 u16 control_reg; 477 477 478 + writel(I2C_DMA_HARD_RST, i2c->pdmabase + OFFSET_RST); 479 + udelay(50); 480 + writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST); 481 + 478 482 mtk_i2c_writew(i2c, I2C_SOFT_RST, OFFSET_SOFTRESET); 479 483 480 484 /* Set ioconfig */ ··· 533 529 534 530 mtk_i2c_writew(i2c, control_reg, OFFSET_CONTROL); 535 531 mtk_i2c_writew(i2c, I2C_DELAY_LEN, OFFSET_DELAY_LEN); 536 - 537 - writel(I2C_DMA_HARD_RST, i2c->pdmabase + OFFSET_RST); 538 - udelay(50); 539 - writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST); 540 532 } 541 533 542 534 static const struct i2c_spec_values *mtk_i2c_get_spec(unsigned int speed)
+65 -19
drivers/i2c/busses/i2c-sh_mobile.c
··· 129 129 int sr; 130 130 bool send_stop; 131 131 bool stop_after_dma; 132 + bool atomic_xfer; 132 133 133 134 struct resource *res; 134 135 struct dma_chan *dma_tx; ··· 331 330 ret = iic_rd(pd, ICDR); 332 331 break; 333 332 case OP_RX_STOP: /* enable DTE interrupt, issue stop */ 334 - iic_wr(pd, ICIC, 335 - ICIC_DTEE | ICIC_WAITE | ICIC_ALE | ICIC_TACKE); 333 + if (!pd->atomic_xfer) 334 + iic_wr(pd, ICIC, 335 + ICIC_DTEE | ICIC_WAITE | ICIC_ALE | ICIC_TACKE); 336 336 iic_wr(pd, ICCR, ICCR_ICE | ICCR_RACK); 337 337 break; 338 338 case OP_RX_STOP_DATA: /* enable DTE interrupt, read data, issue stop */ 339 - iic_wr(pd, ICIC, 340 - ICIC_DTEE | ICIC_WAITE | ICIC_ALE | ICIC_TACKE); 339 + if (!pd->atomic_xfer) 340 + iic_wr(pd, ICIC, 341 + ICIC_DTEE | ICIC_WAITE | ICIC_ALE | ICIC_TACKE); 341 342 ret = iic_rd(pd, ICDR); 342 343 iic_wr(pd, ICCR, ICCR_ICE | ICCR_RACK); 343 344 break; ··· 432 429 433 430 if (wakeup) { 434 431 pd->sr |= SW_DONE; 435 - wake_up(&pd->wait); 432 + if (!pd->atomic_xfer) 433 + wake_up(&pd->wait); 436 434 } 437 435 438 436 /* defeat write posting to avoid spurious WAIT interrupts */ ··· 585 581 pd->pos = -1; 586 582 pd->sr = 0; 587 583 584 + if (pd->atomic_xfer) 585 + return; 586 + 588 587 pd->dma_buf = i2c_get_dma_safe_msg_buf(pd->msg, 8); 589 588 if (pd->dma_buf) 590 589 sh_mobile_i2c_xfer_dma(pd); ··· 644 637 return i ? 0 : -ETIMEDOUT; 645 638 } 646 639 647 - static int sh_mobile_i2c_xfer(struct i2c_adapter *adapter, 648 - struct i2c_msg *msgs, 649 - int num) 640 + static int sh_mobile_xfer(struct sh_mobile_i2c_data *pd, 641 + struct i2c_msg *msgs, int num) 650 642 { 651 - struct sh_mobile_i2c_data *pd = i2c_get_adapdata(adapter); 652 643 struct i2c_msg *msg; 653 644 int err = 0; 654 645 int i; 655 - long timeout; 646 + long time_left; 656 647 657 648 /* Wake up device and enable clock */ 658 649 pm_runtime_get_sync(pd->dev); ··· 667 662 if (do_start) 668 663 i2c_op(pd, OP_START); 669 664 670 - /* The interrupt handler takes care of the rest... 
*/ 671 - timeout = wait_event_timeout(pd->wait, 672 - pd->sr & (ICSR_TACK | SW_DONE), 673 - adapter->timeout); 665 + if (pd->atomic_xfer) { 666 + unsigned long j = jiffies + pd->adap.timeout; 674 667 675 - /* 'stop_after_dma' tells if DMA transfer was complete */ 676 - i2c_put_dma_safe_msg_buf(pd->dma_buf, pd->msg, pd->stop_after_dma); 668 + time_left = time_before_eq(jiffies, j); 669 + while (time_left && 670 + !(pd->sr & (ICSR_TACK | SW_DONE))) { 671 + unsigned char sr = iic_rd(pd, ICSR); 677 672 678 - if (!timeout) { 673 + if (sr & (ICSR_AL | ICSR_TACK | 674 + ICSR_WAIT | ICSR_DTE)) { 675 + sh_mobile_i2c_isr(0, pd); 676 + udelay(150); 677 + } else { 678 + cpu_relax(); 679 + } 680 + time_left = time_before_eq(jiffies, j); 681 + } 682 + } else { 683 + /* The interrupt handler takes care of the rest... */ 684 + time_left = wait_event_timeout(pd->wait, 685 + pd->sr & (ICSR_TACK | SW_DONE), 686 + pd->adap.timeout); 687 + 688 + /* 'stop_after_dma' tells if DMA xfer was complete */ 689 + i2c_put_dma_safe_msg_buf(pd->dma_buf, pd->msg, 690 + pd->stop_after_dma); 691 + } 692 + 693 + if (!time_left) { 679 694 dev_err(pd->dev, "Transfer request timed out\n"); 680 695 if (pd->dma_direction != DMA_NONE) 681 696 sh_mobile_i2c_cleanup_dma(pd); ··· 721 696 return err ?: num; 722 697 } 723 698 699 + static int sh_mobile_i2c_xfer(struct i2c_adapter *adapter, 700 + struct i2c_msg *msgs, 701 + int num) 702 + { 703 + struct sh_mobile_i2c_data *pd = i2c_get_adapdata(adapter); 704 + 705 + pd->atomic_xfer = false; 706 + return sh_mobile_xfer(pd, msgs, num); 707 + } 708 + 709 + static int sh_mobile_i2c_xfer_atomic(struct i2c_adapter *adapter, 710 + struct i2c_msg *msgs, 711 + int num) 712 + { 713 + struct sh_mobile_i2c_data *pd = i2c_get_adapdata(adapter); 714 + 715 + pd->atomic_xfer = true; 716 + return sh_mobile_xfer(pd, msgs, num); 717 + } 718 + 724 719 static u32 sh_mobile_i2c_func(struct i2c_adapter *adapter) 725 720 { 726 721 return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL | 
I2C_FUNC_PROTOCOL_MANGLING; 727 722 } 728 723 729 724 static const struct i2c_algorithm sh_mobile_i2c_algorithm = { 730 - .functionality = sh_mobile_i2c_func, 731 - .master_xfer = sh_mobile_i2c_xfer, 725 + .functionality = sh_mobile_i2c_func, 726 + .master_xfer = sh_mobile_i2c_xfer, 727 + .master_xfer_atomic = sh_mobile_i2c_xfer_atomic, 732 728 }; 733 729 734 730 static const struct i2c_adapter_quirks sh_mobile_i2c_quirks = {
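The `atomic_xfer` path above cannot sleep, so it replaces `wait_event_timeout()` with a jiffies-deadline busy-poll that services the hardware status by hand. A stand-alone sketch of that pattern, with an integer tick counter standing in for jiffies and a callback standing in for the ISR/status check:

```c
#include <assert.h>
#include <stdbool.h>

/* Poll the event source until it reports completion or 'deadline' ticks
 * elapse; returns true on completion, false on timeout. */
static bool poll_until(int deadline, int *now, bool (*service)(void *), void *ctx)
{
	while (*now <= deadline) {
		if (service(ctx))
			return true;   /* status flag raised: transfer complete */
		(*now)++;              /* stand-in for time passing (jiffies) */
	}
	return false;                  /* deadline passed with no completion */
}

/* Demo event source: reports completion on the Nth service call. */
struct demo_src { int remaining; };
static bool demo_service(void *ctx)
{
	struct demo_src *s = ctx;
	return --s->remaining <= 0;
}
```

In the driver, the `service` step corresponds to reading ICSR and calling `sh_mobile_i2c_isr()` directly when an event bit is set, with `cpu_relax()` between polls.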
+1 -1
drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
··· 176 176 u8 subnet_timeout; 177 177 u8 init_type_reply; 178 178 u8 active_width; 179 - u16 active_speed; 179 + u8 active_speed; 180 180 u8 phys_state; 181 181 u8 reserved[2]; 182 182 };
+5 -2
drivers/infiniband/sw/rdmavt/vt.c
··· 524 524 int rvt_register_device(struct rvt_dev_info *rdi) 525 525 { 526 526 int ret = 0, i; 527 + u64 dma_mask; 527 528 528 529 if (!rdi) 529 530 return -EINVAL; ··· 581 580 582 581 /* DMA Operations */ 583 582 rdi->ibdev.dev.dma_parms = rdi->ibdev.dev.parent->dma_parms; 584 - dma_set_coherent_mask(&rdi->ibdev.dev, 585 - rdi->ibdev.dev.parent->coherent_dma_mask); 583 + dma_mask = IS_ENABLED(CONFIG_64BIT) ? DMA_BIT_MASK(64) : DMA_BIT_MASK(32); 584 + ret = dma_coerce_mask_and_coherent(&rdi->ibdev.dev, dma_mask); 585 + if (ret) 586 + goto bail_wss; 586 587 587 588 /* Protection Domain */ 588 589 spin_lock_init(&rdi->n_pds_lock);
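The rdmavt hunk (and the rxe/siw hunks below) select the coherent DMA mask explicitly with `DMA_BIT_MASK()`. A local re-statement of that macro's expansion shows why the 64-bit case is special-cased: a plain `(1ULL << 64) - 1` would be undefined behavior, so width 64 maps to `~0ULL` directly (this mirrors the kernel macro; the name here is local):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the kernel's DMA_BIT_MASK(): n == 64 must avoid a 64-bit shift. */
#define BIT_MASK64(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))
```

With `IS_ENABLED(CONFIG_64BIT)`, the drivers thus request either a full 64-bit mask or the 32-bit mask `0xffffffff`.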
+5 -1
drivers/infiniband/sw/rxe/rxe_verbs.c
··· 1118 1118 int err; 1119 1119 struct ib_device *dev = &rxe->ib_dev; 1120 1120 struct crypto_shash *tfm; 1121 + u64 dma_mask; 1121 1122 1122 1123 strlcpy(dev->node_desc, "rxe", sizeof(dev->node_desc)); 1123 1124 ··· 1131 1130 rxe->ndev->dev_addr); 1132 1131 dev->dev.dma_parms = &rxe->dma_parms; 1133 1132 dma_set_max_seg_size(&dev->dev, UINT_MAX); 1134 - dma_set_coherent_mask(&dev->dev, dma_get_required_mask(&dev->dev)); 1133 + dma_mask = IS_ENABLED(CONFIG_64BIT) ? DMA_BIT_MASK(64) : DMA_BIT_MASK(32); 1134 + err = dma_coerce_mask_and_coherent(&dev->dev, dma_mask); 1135 + if (err) 1136 + return err; 1135 1137 1136 1138 dev->uverbs_cmd_mask = BIT_ULL(IB_USER_VERBS_CMD_GET_CONTEXT) 1137 1139 | BIT_ULL(IB_USER_VERBS_CMD_CREATE_COMP_CHANNEL)
+5 -2
drivers/infiniband/sw/siw/siw_main.c
··· 306 306 struct siw_device *sdev = NULL; 307 307 struct ib_device *base_dev; 308 308 struct device *parent = netdev->dev.parent; 309 + u64 dma_mask; 309 310 int rv; 310 311 311 312 if (!parent) { ··· 385 384 base_dev->dev.parent = parent; 386 385 base_dev->dev.dma_parms = &sdev->dma_parms; 387 386 dma_set_max_seg_size(&base_dev->dev, UINT_MAX); 388 - dma_set_coherent_mask(&base_dev->dev, 389 - dma_get_required_mask(&base_dev->dev)); 387 + dma_mask = IS_ENABLED(CONFIG_64BIT) ? DMA_BIT_MASK(64) : DMA_BIT_MASK(32); 388 + if (dma_coerce_mask_and_coherent(&base_dev->dev, dma_mask)) 389 + goto error; 390 + 390 391 base_dev->num_comp_vectors = num_possible_cpus(); 391 392 392 393 xa_init_flags(&sdev->qp_xa, XA_FLAGS_ALLOC1);
+8 -5
drivers/infiniband/ulp/srpt/ib_srpt.c
··· 622 622 /** 623 623 * srpt_unregister_mad_agent - unregister MAD callback functions 624 624 * @sdev: SRPT HCA pointer. 625 + * @port_cnt: number of ports with registered MAD 625 626 * 626 627 * Note: It is safe to call this function more than once for the same device. 627 628 */ 628 - static void srpt_unregister_mad_agent(struct srpt_device *sdev) 629 + static void srpt_unregister_mad_agent(struct srpt_device *sdev, int port_cnt) 629 630 { 630 631 struct ib_port_modify port_modify = { 631 632 .clr_port_cap_mask = IB_PORT_DEVICE_MGMT_SUP, ··· 634 633 struct srpt_port *sport; 635 634 int i; 636 635 637 - for (i = 1; i <= sdev->device->phys_port_cnt; i++) { 636 + for (i = 1; i <= port_cnt; i++) { 638 637 sport = &sdev->port[i - 1]; 639 638 WARN_ON(sport->port != i); 640 639 if (sport->mad_agent) { ··· 3186 3185 if (ret) { 3187 3186 pr_err("MAD registration failed for %s-%d.\n", 3188 3187 dev_name(&sdev->device->dev), i); 3189 - goto err_event; 3188 + i--; 3189 + goto err_port; 3190 3190 } 3191 3191 } 3192 3192 ··· 3199 3197 pr_debug("added %s.\n", dev_name(&device->dev)); 3200 3198 return 0; 3201 3199 3202 - err_event: 3200 + err_port: 3201 + srpt_unregister_mad_agent(sdev, i); 3203 3202 ib_unregister_event_handler(&sdev->event_handler); 3204 3203 err_cm: 3205 3204 if (sdev->cm_id) ··· 3224 3221 struct srpt_device *sdev = client_data; 3225 3222 int i; 3226 3223 3227 - srpt_unregister_mad_agent(sdev); 3224 + srpt_unregister_mad_agent(sdev, sdev->device->phys_port_cnt); 3228 3225 3229 3226 ib_unregister_event_handler(&sdev->event_handler); 3230 3227
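The srpt fix passes the number of successfully registered ports into the unwind path, so a mid-loop MAD registration failure only tears down the ports that actually came up. A generic sketch of that partial-init rollback (names and the failure injection are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

#define NPORTS 4

/* Try to init ports 1..n; on a failure at port 'fail_at', unwind only
 * ports 1..(fail_at - 1).  Returns the count left initialized. */
static int init_ports(bool inited[], int n, int fail_at)
{
	int i;

	for (i = 1; i <= n; i++) {
		if (i == fail_at) {          /* simulated registration failure */
			for (i--; i >= 1; i--)   /* unwind the ports that succeeded */
				inited[i] = false;
			return 0;
		}
		inited[i] = true;
	}
	return n;
}
```

The remove path passes the full `phys_port_cnt`, matching the "safe to call more than once" contract noted in the kernel-doc.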
+1
drivers/infiniband/ulp/srpt/ib_srpt.h
··· 256 256 * @rdma_cm: See below. 257 257 * @rdma_cm.cm_id: RDMA CM ID associated with the channel. 258 258 * @cq: IB completion queue for this channel. 259 + * @cq_size: Number of CQEs in @cq. 259 260 * @zw_cqe: Zero-length write CQE. 260 261 * @rcu: RCU head. 261 262 * @kref: kref for this channel.
+5 -1
drivers/iommu/amd/amd_iommu_types.h
··· 409 409 /* Only true if all IOMMUs support device IOTLBs */ 410 410 extern bool amd_iommu_iotlb_sup; 411 411 412 - #define MAX_IRQS_PER_TABLE 256 412 + /* 413 + * AMD IOMMU hardware only support 512 IRTEs despite 414 + * the architectural limitation of 2048 entries. 415 + */ 416 + #define MAX_IRQS_PER_TABLE 512 413 417 #define IRQ_TABLE_ALIGNMENT 128 414 418 415 419 struct irq_remap_table {
+3
drivers/iommu/intel/iommu.c
··· 2525 2525 { 2526 2526 struct device_domain_info *info; 2527 2527 2528 + if (unlikely(!dev || !dev->iommu)) 2529 + return NULL; 2530 + 2528 2531 if (unlikely(attach_deferred(dev))) 2529 2532 return NULL; 2530 2533
+7 -1
drivers/iommu/intel/svm.c
··· 279 279 struct intel_iommu *iommu = device_to_iommu(dev, NULL, NULL); 280 280 struct intel_svm_dev *sdev = NULL; 281 281 struct dmar_domain *dmar_domain; 282 + struct device_domain_info *info; 282 283 struct intel_svm *svm = NULL; 283 284 int ret = 0; 284 285 ··· 309 308 * guest PASID range. 310 309 */ 311 310 if (data->hpasid <= 0 || data->hpasid >= PASID_MAX) 311 + return -EINVAL; 312 + 313 + info = get_domain_info(dev); 314 + if (!info) 312 315 return -EINVAL; 313 316 314 317 dmar_domain = to_dmar_domain(domain); ··· 362 357 goto out; 363 358 } 364 359 sdev->dev = dev; 360 + sdev->sid = PCI_DEVID(info->bus, info->devfn); 365 361 366 362 /* Only count users if device has aux domains */ 367 363 if (iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX)) ··· 1035 1029 resp.qw0 = QI_PGRP_PASID(req->pasid) | 1036 1030 QI_PGRP_DID(req->rid) | 1037 1031 QI_PGRP_PASID_P(req->pasid_present) | 1038 - QI_PGRP_PDP(req->pasid_present) | 1032 + QI_PGRP_PDP(req->priv_data_present) | 1039 1033 QI_PGRP_RESP_CODE(result) | 1040 1034 QI_PGRP_RESP_TYPE; 1041 1035 resp.qw1 = QI_PGRP_IDX(req->prg_index) |
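The new `sdev->sid = PCI_DEVID(info->bus, info->devfn)` line packs the bus number and device/function number into the 16-bit PCI requester ID. A local re-statement of that packing (mirroring the kernel's `PCI_DEVID()` macro):

```c
#include <assert.h>
#include <stdint.h>

/* Requester ID: bus in bits 15:8, device/function in bits 7:0. */
static uint16_t pci_devid(uint8_t bus, uint8_t devfn)
{
	return ((uint16_t)bus << 8) | devfn;
}
```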
+1 -1
drivers/iommu/iommu.c
··· 2071 2071 2072 2072 static int iommu_check_bind_data(struct iommu_gpasid_bind_data *data) 2073 2073 { 2074 - u32 mask; 2074 + u64 mask; 2075 2075 int i; 2076 2076 2077 2077 if (data->version != IOMMU_GPASID_BIND_VERSION_1)
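The one-line `u32` to `u64` change matters because the bind-data flags are 64 bits wide: building the validity mask in a 32-bit variable silently drops every bit above 31, so the check misjudges any flag in the upper half. A sketch of the truncation (local names, not the iommu code itself):

```c
#include <assert.h>
#include <stdint.h>

/* Buggy variant: the mask is computed correctly but truncated on
 * assignment, so valid bits >= 32 are treated as out of range. */
static int flags_valid_u32(uint64_t flags, int nr_bits)
{
	uint32_t mask = (uint32_t)((nr_bits >= 64) ? ~0ULL
						    : ((1ULL << nr_bits) - 1));
	return (flags & ~(uint64_t)mask) == 0;
}

/* Fixed variant: a 64-bit mask covers the whole flags word. */
static int flags_valid_u64(uint64_t flags, int nr_bits)
{
	uint64_t mask = (nr_bits >= 64) ? ~0ULL : ((1ULL << nr_bits) - 1);
	return (flags & ~mask) == 0;
}
```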
+1 -2
drivers/irqchip/Kconfig
··· 180 180 select GENERIC_IRQ_CHIP 181 181 select GENERIC_IRQ_IPI if SYS_SUPPORTS_MULTITHREADING 182 182 select IRQ_DOMAIN 183 - select IRQ_DOMAIN_HIERARCHY if GENERIC_IRQ_IPI 184 183 select GENERIC_IRQ_EFFECTIVE_AFF_MASK 185 184 186 185 config CLPS711X_IRQCHIP ··· 314 315 config MIPS_GIC 315 316 bool 316 317 select GENERIC_IRQ_IPI 317 - select IRQ_DOMAIN_HIERARCHY 318 318 select MIPS_CM 319 319 320 320 config INGENIC_IRQ ··· 589 591 590 592 config MST_IRQ 591 593 bool "MStar Interrupt Controller" 594 + depends on ARCH_MEDIATEK || ARCH_MSTARV7 || COMPILE_TEST 592 595 default ARCH_MEDIATEK 593 596 select IRQ_DOMAIN 594 597 select IRQ_DOMAIN_HIERARCHY
+1 -1
drivers/irqchip/irq-bcm2836.c
··· 244 244 245 245 #define BITS_PER_MBOX 32 246 246 247 - static void bcm2836_arm_irqchip_smp_init(void) 247 + static void __init bcm2836_arm_irqchip_smp_init(void) 248 248 { 249 249 struct irq_fwspec ipi_fwspec = { 250 250 .fwnode = intc.domain->fwnode,
+2 -2
drivers/irqchip/irq-mst-intc.c
··· 154 154 .free = irq_domain_free_irqs_common, 155 155 }; 156 156 157 - int __init 158 - mst_intc_of_init(struct device_node *dn, struct device_node *parent) 157 + static int __init mst_intc_of_init(struct device_node *dn, 158 + struct device_node *parent) 159 159 { 160 160 struct irq_domain *domain, *domain_parent; 161 161 struct mst_intc_chip_data *cd;
+3 -5
drivers/irqchip/irq-renesas-intc-irqpin.c
··· 71 71 }; 72 72 73 73 struct intc_irqpin_config { 74 - unsigned int irlm_bit; 75 - unsigned needs_irlm:1; 74 + int irlm_bit; /* -1 if non-existent */ 76 75 }; 77 76 78 77 static unsigned long intc_irqpin_read32(void __iomem *iomem) ··· 348 349 349 350 static const struct intc_irqpin_config intc_irqpin_irlm_r8a777x = { 350 351 .irlm_bit = 23, /* ICR0.IRLM0 */ 351 - .needs_irlm = 1, 352 352 }; 353 353 354 354 static const struct intc_irqpin_config intc_irqpin_rmobile = { 355 - .needs_irlm = 0, 355 + .irlm_bit = -1, 356 356 }; 357 357 358 358 static const struct of_device_id intc_irqpin_dt_ids[] = { ··· 468 470 } 469 471 470 472 /* configure "individual IRQ mode" where needed */ 471 - if (config && config->needs_irlm) { 473 + if (config && config->irlm_bit >= 0) { 472 474 if (io[INTC_IRQPIN_REG_IRLM]) 473 475 intc_irqpin_read_modify_write(p, INTC_IRQPIN_REG_IRLM, 474 476 config->irlm_bit, 1, 1);
+5 -5
drivers/irqchip/irq-sifive-plic.c
··· 99 99 struct irq_data *d, int enable) 100 100 { 101 101 int cpu; 102 - struct plic_priv *priv = irq_get_chip_data(d->irq); 102 + struct plic_priv *priv = irq_data_get_irq_chip_data(d); 103 103 104 104 writel(enable, priv->regs + PRIORITY_BASE + d->hwirq * PRIORITY_PER_ID); 105 105 for_each_cpu(cpu, mask) { ··· 115 115 { 116 116 struct cpumask amask; 117 117 unsigned int cpu; 118 - struct plic_priv *priv = irq_get_chip_data(d->irq); 118 + struct plic_priv *priv = irq_data_get_irq_chip_data(d); 119 119 120 120 cpumask_and(&amask, &priv->lmask, cpu_online_mask); 121 121 cpu = cpumask_any_and(irq_data_get_affinity_mask(d), ··· 127 127 128 128 static void plic_irq_mask(struct irq_data *d) 129 129 { 130 - struct plic_priv *priv = irq_get_chip_data(d->irq); 130 + struct plic_priv *priv = irq_data_get_irq_chip_data(d); 131 131 132 132 plic_irq_toggle(&priv->lmask, d, 0); 133 133 } ··· 138 138 { 139 139 unsigned int cpu; 140 140 struct cpumask amask; 141 - struct plic_priv *priv = irq_get_chip_data(d->irq); 141 + struct plic_priv *priv = irq_data_get_irq_chip_data(d); 142 142 143 143 cpumask_and(&amask, &priv->lmask, mask_val); 144 144 ··· 151 151 return -EINVAL; 152 152 153 153 plic_irq_toggle(&priv->lmask, d, 0); 154 - plic_irq_toggle(cpumask_of(cpu), d, 1); 154 + plic_irq_toggle(cpumask_of(cpu), d, !irqd_irq_masked(d)); 155 155 156 156 irq_data_update_effective_affinity(d, cpumask_of(cpu)); 157 157
+4
drivers/irqchip/irq-stm32-exti.c
··· 195 195 { .exti = 25, .irq_parent = 107, .chip = &stm32_exti_h_chip_direct }, 196 196 { .exti = 30, .irq_parent = 52, .chip = &stm32_exti_h_chip_direct }, 197 197 { .exti = 47, .irq_parent = 93, .chip = &stm32_exti_h_chip_direct }, 198 + { .exti = 48, .irq_parent = 138, .chip = &stm32_exti_h_chip_direct }, 199 + { .exti = 50, .irq_parent = 139, .chip = &stm32_exti_h_chip_direct }, 200 + { .exti = 52, .irq_parent = 140, .chip = &stm32_exti_h_chip_direct }, 201 + { .exti = 53, .irq_parent = 141, .chip = &stm32_exti_h_chip_direct }, 198 202 { .exti = 54, .irq_parent = 135, .chip = &stm32_exti_h_chip_direct }, 199 203 { .exti = 61, .irq_parent = 100, .chip = &stm32_exti_h_chip_direct }, 200 204 { .exti = 65, .irq_parent = 144, .chip = &stm32_exti_h_chip },
+80 -3
drivers/irqchip/irq-ti-sci-inta.c
··· 85 85 * @base: Base address of the memory mapped IO registers 86 86 * @pdev: Pointer to platform device. 87 87 * @ti_sci_id: TI-SCI device identifier 88 + * @unmapped_cnt: Number of @unmapped_dev_ids entries 89 + * @unmapped_dev_ids: Pointer to an array of TI-SCI device identifiers of 90 + * unmapped event sources. 91 + * Unmapped Events are not part of the Global Event Map and 92 + * they are converted to Global event within INTA to be 93 + * received by the same INTA to generate an interrupt. 94 + * In case an interrupt request comes for a device which is 95 + * generating Unmapped Event, we must use the INTA's TI-SCI 96 + * device identifier in place of the source device 97 + * identifier to let sysfw know where it has to program the 98 + * Global Event number. 88 99 */ 89 100 struct ti_sci_inta_irq_domain { 90 101 const struct ti_sci_handle *sci; ··· 107 96 void __iomem *base; 108 97 struct platform_device *pdev; 109 98 u32 ti_sci_id; 99 + 100 + int unmapped_cnt; 101 + u16 *unmapped_dev_ids; 110 102 }; 111 103 112 104 #define to_vint_desc(e, i) container_of(e, struct ti_sci_inta_vint_desc, \ 113 105 events[i]) 106 + 107 + static u16 ti_sci_inta_get_dev_id(struct ti_sci_inta_irq_domain *inta, u32 hwirq) 108 + { 109 + u16 dev_id = HWIRQ_TO_DEVID(hwirq); 110 + int i; 111 + 112 + if (inta->unmapped_cnt == 0) 113 + return dev_id; 114 + 115 + /* 116 + * For devices sending Unmapped Events we must use the INTA's TI-SCI 117 + * device identifier number to be able to convert it to a Global Event 118 + * and map it to an interrupt. 
119 + */ 120 + for (i = 0; i < inta->unmapped_cnt; i++) { 121 + if (dev_id == inta->unmapped_dev_ids[i]) { 122 + dev_id = inta->ti_sci_id; 123 + break; 124 + } 125 + } 126 + 127 + return dev_id; 128 + } 114 129 115 130 /** 116 131 * ti_sci_inta_irq_handler() - Chained IRQ handler for the vint irqs ··· 288 251 u16 dev_id, dev_index; 289 252 int err; 290 253 291 - dev_id = HWIRQ_TO_DEVID(hwirq); 254 + dev_id = ti_sci_inta_get_dev_id(inta, hwirq); 292 255 dev_index = HWIRQ_TO_IRQID(hwirq); 293 256 294 257 event_desc = &vint_desc->events[free_bit]; ··· 389 352 { 390 353 struct ti_sci_inta_vint_desc *vint_desc; 391 354 struct ti_sci_inta_irq_domain *inta; 355 + u16 dev_id; 392 356 393 357 vint_desc = to_vint_desc(event_desc, event_desc->vint_bit); 394 358 inta = vint_desc->domain->host_data; 359 + dev_id = ti_sci_inta_get_dev_id(inta, hwirq); 395 360 /* free event irq */ 396 361 mutex_lock(&inta->vint_mutex); 397 362 inta->sci->ops.rm_irq_ops.free_event_map(inta->sci, 398 - HWIRQ_TO_DEVID(hwirq), 399 - HWIRQ_TO_IRQID(hwirq), 363 + dev_id, HWIRQ_TO_IRQID(hwirq), 400 364 inta->ti_sci_id, 401 365 vint_desc->vint_id, 402 366 event_desc->global_event, ··· 612 574 .chip = &ti_sci_inta_msi_irq_chip, 613 575 }; 614 576 577 + static int ti_sci_inta_get_unmapped_sources(struct ti_sci_inta_irq_domain *inta) 578 + { 579 + struct device *dev = &inta->pdev->dev; 580 + struct device_node *node = dev_of_node(dev); 581 + struct of_phandle_iterator it; 582 + int count, err, ret, i; 583 + 584 + count = of_count_phandle_with_args(node, "ti,unmapped-event-sources", NULL); 585 + if (count <= 0) 586 + return 0; 587 + 588 + inta->unmapped_dev_ids = devm_kcalloc(dev, count, 589 + sizeof(*inta->unmapped_dev_ids), 590 + GFP_KERNEL); 591 + if (!inta->unmapped_dev_ids) 592 + return -ENOMEM; 593 + 594 + i = 0; 595 + of_for_each_phandle(&it, err, node, "ti,unmapped-event-sources", NULL, 0) { 596 + u32 dev_id; 597 + 598 + ret = of_property_read_u32(it.node, "ti,sci-dev-id", &dev_id); 599 + if (ret) { 
600 + dev_err(dev, "ti,sci-dev-id read failure for %pOFf\n", it.node); 601 + of_node_put(it.node); 602 + return ret; 603 + } 604 + inta->unmapped_dev_ids[i++] = dev_id; 605 + } 606 + 607 + inta->unmapped_cnt = count; 608 + 609 + return 0; 610 + } 611 + 615 612 static int ti_sci_inta_irq_domain_probe(struct platform_device *pdev) 616 613 { 617 614 struct irq_domain *parent_domain, *domain, *msi_domain; ··· 701 628 inta->base = devm_ioremap_resource(dev, res); 702 629 if (IS_ERR(inta->base)) 703 630 return PTR_ERR(inta->base); 631 + 632 + ret = ti_sci_inta_get_unmapped_sources(inta); 633 + if (ret) 634 + return ret; 704 635 705 636 domain = irq_domain_add_linear(dev_of_node(dev), 706 637 ti_sci_get_num_resources(inta->vint),
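The `ti_sci_inta_get_dev_id()` helper above is a linear lookup: if the requesting device appears in the `ti,unmapped-event-sources` table, its TI-SCI id is replaced with the INTA's own id before programming the Global Event. A self-contained sketch of that substitution (identifiers are stand-ins):

```c
#include <assert.h>
#include <stdint.h>

/* Return 'inta_id' for sources listed in 'unmapped', else the source's
 * own 'dev_id' -- mirroring the unmapped-event lookup in the driver. */
static uint16_t resolve_dev_id(uint16_t dev_id, uint16_t inta_id,
			       const uint16_t *unmapped, int cnt)
{
	int i;

	for (i = 0; i < cnt; i++)
		if (unmapped[i] == dev_id)
			return inta_id;   /* unmapped source: use the INTA's id */
	return dev_id;                    /* normally mapped source */
}
```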
+24 -19
drivers/mtd/nand/raw/fsl_ifc_nand.c
··· 707 707 { 708 708 struct mtd_info *mtd = nand_to_mtd(chip); 709 709 struct fsl_ifc_mtd *priv = nand_get_controller_data(chip); 710 + struct fsl_ifc_ctrl *ctrl = priv->ctrl; 711 + struct fsl_ifc_global __iomem *ifc_global = ctrl->gregs; 712 + u32 csor; 713 + 714 + csor = ifc_in32(&ifc_global->csor_cs[priv->bank].csor); 715 + 716 + /* Must also set CSOR_NAND_ECC_ENC_EN if DEC_EN set */ 717 + if (csor & CSOR_NAND_ECC_DEC_EN) { 718 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 719 + mtd_set_ooblayout(mtd, &fsl_ifc_ooblayout_ops); 720 + 721 + /* Hardware generates ECC per 512 Bytes */ 722 + chip->ecc.size = 512; 723 + if ((csor & CSOR_NAND_ECC_MODE_MASK) == CSOR_NAND_ECC_MODE_4) { 724 + chip->ecc.bytes = 8; 725 + chip->ecc.strength = 4; 726 + } else { 727 + chip->ecc.bytes = 16; 728 + chip->ecc.strength = 8; 729 + } 730 + } else { 731 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 732 + chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 733 + } 710 734 711 735 dev_dbg(priv->dev, "%s: nand->numchips = %d\n", __func__, 712 736 nanddev_ntargets(&chip->base)); ··· 932 908 default: 933 909 dev_err(priv->dev, "bad csor %#x: bad page size\n", csor); 934 910 return -ENODEV; 935 - } 936 - 937 - /* Must also set CSOR_NAND_ECC_ENC_EN if DEC_EN set */ 938 - if (csor & CSOR_NAND_ECC_DEC_EN) { 939 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 940 - mtd_set_ooblayout(mtd, &fsl_ifc_ooblayout_ops); 941 - 942 - /* Hardware generates ECC per 512 Bytes */ 943 - chip->ecc.size = 512; 944 - if ((csor & CSOR_NAND_ECC_MODE_MASK) == CSOR_NAND_ECC_MODE_4) { 945 - chip->ecc.bytes = 8; 946 - chip->ecc.strength = 4; 947 - } else { 948 - chip->ecc.bytes = 16; 949 - chip->ecc.strength = 8; 950 - } 951 - } else { 952 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 953 - chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 954 911 } 955 912 956 913 ret = fsl_ifc_sram_init(priv);
+5 -12
drivers/mtd/nand/raw/mxc_nand.c
··· 1681 1681 struct mxc_nand_host *host = nand_get_controller_data(chip); 1682 1682 struct device *dev = mtd->dev.parent; 1683 1683 1684 + chip->ecc.bytes = host->devtype_data->eccbytes; 1685 + host->eccsize = host->devtype_data->eccsize; 1686 + chip->ecc.size = 512; 1687 + mtd_set_ooblayout(mtd, host->devtype_data->ooblayout); 1688 + 1684 1689 switch (chip->ecc.engine_type) { 1685 1690 case NAND_ECC_ENGINE_TYPE_ON_HOST: 1686 1691 chip->ecc.read_page = mxc_nand_read_page; ··· 1841 1836 if (host->devtype_data->axi_offset) 1842 1837 host->regs_axi = host->base + host->devtype_data->axi_offset; 1843 1838 1844 - this->ecc.bytes = host->devtype_data->eccbytes; 1845 - host->eccsize = host->devtype_data->eccsize; 1846 - 1847 1839 this->legacy.select_chip = host->devtype_data->select_chip; 1848 - this->ecc.size = 512; 1849 - mtd_set_ooblayout(mtd, host->devtype_data->ooblayout); 1850 - 1851 - if (host->pdata.hw_ecc) { 1852 - this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 1853 - } else { 1854 - this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 1855 - this->ecc.algo = NAND_ECC_ALGO_HAMMING; 1856 - } 1857 1840 1858 1841 /* NAND bus width determines access functions used by upper layer */ 1859 1842 if (host->pdata.width == 2)
+8 -7
drivers/mtd/nand/raw/stm32_fmc2_nand.c
··· 1708 1708 return -EINVAL; 1709 1709 } 1710 1710 1711 + /* Default ECC settings in case they are not set in the device tree */ 1712 + if (!chip->ecc.size) 1713 + chip->ecc.size = FMC2_ECC_STEP_SIZE; 1714 + 1715 + if (!chip->ecc.strength) 1716 + chip->ecc.strength = FMC2_ECC_BCH8; 1717 + 1711 1718 ret = nand_ecc_choose_conf(chip, &stm32_fmc2_nfc_ecc_caps, 1712 1719 mtd->oobsize - FMC2_BBM_LEN); 1713 1720 if (ret) { ··· 1734 1727 1735 1728 mtd_set_ooblayout(mtd, &stm32_fmc2_nfc_ooblayout_ops); 1736 1729 1737 - if (chip->options & NAND_BUSWIDTH_16) 1738 - stm32_fmc2_nfc_set_buswidth_16(nfc, true); 1730 + stm32_fmc2_nfc_setup(chip); 1739 1731 1740 1732 return 0; 1741 1733 } ··· 1957 1951 chip->controller = &nfc->base; 1958 1952 chip->options |= NAND_BUSWIDTH_AUTO | NAND_NO_SUBPAGE_WRITE | 1959 1953 NAND_USES_DMA; 1960 - 1961 - /* Default ECC settings */ 1962 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 1963 - chip->ecc.size = FMC2_ECC_STEP_SIZE; 1964 - chip->ecc.strength = FMC2_ECC_BCH8; 1965 1954 1966 1955 /* Scan to find existence of the device */ 1967 1956 ret = nand_scan(chip, nand->ncs);
+7 -6
drivers/mtd/spi-nor/core.c
··· 2701 2701 2702 2702 memcpy(&sfdp_params, nor->params, sizeof(sfdp_params)); 2703 2703 2704 - if (spi_nor_parse_sfdp(nor, &sfdp_params)) { 2704 + if (spi_nor_parse_sfdp(nor, nor->params)) { 2705 + memcpy(nor->params, &sfdp_params, sizeof(*nor->params)); 2705 2706 nor->addr_width = 0; 2706 2707 nor->flags &= ~SNOR_F_4B_OPCODES; 2707 - } else { 2708 - memcpy(nor->params, &sfdp_params, sizeof(*nor->params)); 2709 2708 } 2710 2709 } 2711 2710 ··· 3008 3009 /* already configured from SFDP */ 3009 3010 } else if (nor->info->addr_width) { 3010 3011 nor->addr_width = nor->info->addr_width; 3011 - } else if (nor->mtd.size > 0x1000000) { 3012 - /* enable 4-byte addressing if the device exceeds 16MiB */ 3013 - nor->addr_width = 4; 3014 3012 } else { 3015 3013 nor->addr_width = 3; 3014 + } 3015 + 3016 + if (nor->addr_width == 3 && nor->mtd.size > 0x1000000) { 3017 + /* enable 4-byte addressing if the device exceeds 16MiB */ 3018 + nor->addr_width = 4; 3016 3019 } 3017 3020 3018 3021 if (nor->addr_width > SPI_NOR_MAX_ADDR_WIDTH) {
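After this change the >16 MiB widening applies to any 3-byte address width, including one taken from SFDP or the flash table, not only the implicit default. A standalone sketch of the new selection order (function and parameter names here are illustrative, not the spi-nor driver's API; a width of 0 means "not set"):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the reordered address-width selection above.
 * pick_addr_width, sfdp_width and info_width are illustrative names. */
static int pick_addr_width(int sfdp_width, int info_width, uint64_t size)
{
	int width;

	if (sfdp_width)			/* already configured from SFDP */
		width = sfdp_width;
	else if (info_width)		/* fixed width from the flash table */
		width = info_width;
	else
		width = 3;

	/* enable 4-byte addressing if a 3-byte device exceeds 16MiB */
	if (width == 3 && size > 0x1000000)
		width = 4;

	return width;
}
```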
+11 -3
drivers/net/can/dev.c
··· 512 512 */ 513 513 struct sk_buff *skb = priv->echo_skb[idx]; 514 514 struct canfd_frame *cf = (struct canfd_frame *)skb->data; 515 - u8 len = cf->len; 516 515 517 - *len_ptr = len; 516 + /* get the real payload length for netdev statistics */ 517 + if (cf->can_id & CAN_RTR_FLAG) 518 + *len_ptr = 0; 519 + else 520 + *len_ptr = cf->len; 521 + 518 522 priv->echo_skb[idx] = NULL; 519 523 520 524 return skb; ··· 542 538 if (!skb) 543 539 return 0; 544 540 545 - netif_rx(skb); 541 + skb_get(skb); 542 + if (netif_rx(skb) == NET_RX_SUCCESS) 543 + dev_consume_skb_any(skb); 544 + else 545 + dev_kfree_skb_any(skb); 546 546 547 547 return len; 548 548 }
+7 -5
drivers/net/can/flexcan.c
··· 217 217 * MX8MP FlexCAN3 03.00.17.01 yes yes no yes yes yes 218 218 * VF610 FlexCAN3 ? no yes no yes yes? no 219 219 * LS1021A FlexCAN2 03.00.04.00 no yes no no yes no 220 - * LX2160A FlexCAN3 03.00.23.00 no yes no no yes yes 220 + * LX2160A FlexCAN3 03.00.23.00 no yes no yes yes yes 221 221 * 222 222 * Some SOCs do not have the RX_WARN & TX_WARN interrupt line connected. 223 223 */ ··· 400 400 static const struct flexcan_devtype_data fsl_vf610_devtype_data = { 401 401 .quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS | 402 402 FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_USE_OFF_TIMESTAMP | 403 - FLEXCAN_QUIRK_BROKEN_PERR_STATE, 403 + FLEXCAN_QUIRK_BROKEN_PERR_STATE | FLEXCAN_QUIRK_SUPPORT_ECC, 404 404 }; 405 405 406 406 static const struct flexcan_devtype_data fsl_ls1021a_r2_devtype_data = { 407 407 .quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS | 408 - FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_BROKEN_PERR_STATE | 409 - FLEXCAN_QUIRK_USE_OFF_TIMESTAMP, 408 + FLEXCAN_QUIRK_BROKEN_PERR_STATE | FLEXCAN_QUIRK_USE_OFF_TIMESTAMP, 410 409 }; 411 410 412 411 static const struct flexcan_devtype_data fsl_lx2160a_r1_devtype_data = { 413 412 .quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS | 414 413 FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_BROKEN_PERR_STATE | 415 - FLEXCAN_QUIRK_USE_OFF_TIMESTAMP | FLEXCAN_QUIRK_SUPPORT_FD, 414 + FLEXCAN_QUIRK_USE_OFF_TIMESTAMP | FLEXCAN_QUIRK_SUPPORT_FD | 415 + FLEXCAN_QUIRK_SUPPORT_ECC, 416 416 }; 417 417 418 418 static const struct can_bittiming_const flexcan_bittiming_const = { ··· 2062 2062 { 2063 2063 struct net_device *dev = platform_get_drvdata(pdev); 2064 2064 2065 + device_set_wakeup_enable(&pdev->dev, false); 2066 + device_set_wakeup_capable(&pdev->dev, false); 2065 2067 unregister_flexcandev(dev); 2066 2068 pm_runtime_disable(&pdev->dev); 2067 2069 free_candev(dev);
+8 -3
drivers/net/can/peak_canfd/peak_canfd.c
··· 262 262 cf_len = get_can_dlc(pucan_msg_get_dlc(msg)); 263 263 264 264 /* if this frame is an echo, */ 265 - if ((rx_msg_flags & PUCAN_MSG_LOOPED_BACK) && 266 - !(rx_msg_flags & PUCAN_MSG_SELF_RECEIVE)) { 265 + if (rx_msg_flags & PUCAN_MSG_LOOPED_BACK) { 267 266 unsigned long flags; 268 267 269 268 spin_lock_irqsave(&priv->echo_lock, flags); ··· 276 277 netif_wake_queue(priv->ndev); 277 278 278 279 spin_unlock_irqrestore(&priv->echo_lock, flags); 279 - return 0; 280 + 281 + /* if this frame is only an echo, stop here. Otherwise, 282 + * continue to push this application self-received frame into 283 + * its own rx queue. 284 + */ 285 + if (!(rx_msg_flags & PUCAN_MSG_SELF_RECEIVE)) 286 + return 0; 280 287 } 281 288 282 289 /* otherwise, it should be pushed into rx fifo */
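The reworked branch above means a looped-back frame always completes the echo, and only a frame that is also flagged SELF_RECEIVE continues into the rx path. A toy dispatcher showing that flow (flag values and names are illustrative, not the PUCAN_MSG_* encoding):

```c
#include <assert.h>
#include <stdint.h>

/* Toy flag values, not the real PUCAN_MSG_* bits. */
#define TOY_LOOPED_BACK		0x01
#define TOY_SELF_RECEIVE	0x02

enum toy_action { TOY_RX_ONLY, TOY_ECHO_ONLY, TOY_ECHO_AND_RX };

static enum toy_action toy_dispatch(uint16_t rx_msg_flags)
{
	if (rx_msg_flags & TOY_LOOPED_BACK) {
		/* always complete the echo for a looped-back frame */
		if (!(rx_msg_flags & TOY_SELF_RECEIVE))
			return TOY_ECHO_ONLY;	/* only an echo: stop here */
		/* self-received: also push into its own rx queue */
		return TOY_ECHO_AND_RX;
	}
	return TOY_RX_ONLY;	/* ordinary received frame */
}
```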
+2 -2
drivers/net/can/rx-offload.c
··· 245 245 246 246 if (skb_queue_len(&offload->skb_queue) > 247 247 offload->skb_queue_len_max) { 248 - kfree_skb(skb); 248 + dev_kfree_skb_any(skb); 249 249 return -ENOBUFS; 250 250 } 251 251 ··· 290 290 { 291 291 if (skb_queue_len(&offload->skb_queue) > 292 292 offload->skb_queue_len_max) { 293 - kfree_skb(skb); 293 + dev_kfree_skb_any(skb); 294 294 return -ENOBUFS; 295 295 } 296 296
+11 -11
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
··· 75 75 { 76 76 switch (model) { 77 77 case MCP251XFD_MODEL_MCP2517FD: 78 - return "MCP2517FD"; break; 78 + return "MCP2517FD"; 79 79 case MCP251XFD_MODEL_MCP2518FD: 80 - return "MCP2518FD"; break; 80 + return "MCP2518FD"; 81 81 case MCP251XFD_MODEL_MCP251XFD: 82 - return "MCP251xFD"; break; 82 + return "MCP251xFD"; 83 83 } 84 84 85 85 return "<unknown>"; ··· 95 95 { 96 96 switch (mode) { 97 97 case MCP251XFD_REG_CON_MODE_MIXED: 98 - return "Mixed (CAN FD/CAN 2.0)"; break; 98 + return "Mixed (CAN FD/CAN 2.0)"; 99 99 case MCP251XFD_REG_CON_MODE_SLEEP: 100 - return "Sleep"; break; 100 + return "Sleep"; 101 101 case MCP251XFD_REG_CON_MODE_INT_LOOPBACK: 102 - return "Internal Loopback"; break; 102 + return "Internal Loopback"; 103 103 case MCP251XFD_REG_CON_MODE_LISTENONLY: 104 - return "Listen Only"; break; 104 + return "Listen Only"; 105 105 case MCP251XFD_REG_CON_MODE_CONFIG: 106 - return "Configuration"; break; 106 + return "Configuration"; 107 107 case MCP251XFD_REG_CON_MODE_EXT_LOOPBACK: 108 - return "External Loopback"; break; 108 + return "External Loopback"; 109 109 case MCP251XFD_REG_CON_MODE_CAN2_0: 110 - return "CAN 2.0"; break; 110 + return "CAN 2.0"; 111 111 case MCP251XFD_REG_CON_MODE_RESTRICTED: 112 - return "Restricted Operation"; break; 112 + return "Restricted Operation"; 113 113 } 114 114 115 115 return "<unknown>";
+9 -9
drivers/net/can/spi/mcp251xfd/mcp251xfd-regmap.c
··· 173 173 memcpy(&buf_tx->cmd, reg, sizeof(buf_tx->cmd)); 174 174 if (MCP251XFD_SANITIZE_SPI) 175 175 memset(buf_tx->data, 0x0, val_len); 176 - }; 176 + } 177 177 178 178 err = spi_sync(spi, &msg); 179 179 if (err) ··· 330 330 goto out; 331 331 } 332 332 333 - netdev_dbg(priv->ndev, 334 - "CRC read error at address 0x%04x (length=%zd, data=%*ph, CRC=0x%04x) retrying.\n", 335 - reg, val_len, (int)val_len, buf_rx->data, 336 - get_unaligned_be16(buf_rx->data + val_len)); 333 + netdev_info(priv->ndev, 334 + "CRC read error at address 0x%04x (length=%zd, data=%*ph, CRC=0x%04x) retrying.\n", 335 + reg, val_len, (int)val_len, buf_rx->data, 336 + get_unaligned_be16(buf_rx->data + val_len)); 337 337 } 338 338 339 339 if (err) { 340 - netdev_info(priv->ndev, 341 - "CRC read error at address 0x%04x (length=%zd, data=%*ph, CRC=0x%04x).\n", 342 - reg, val_len, (int)val_len, buf_rx->data, 343 - get_unaligned_be16(buf_rx->data + val_len)); 340 + netdev_err(priv->ndev, 341 + "CRC read error at address 0x%04x (length=%zd, data=%*ph, CRC=0x%04x).\n", 342 + reg, val_len, (int)val_len, buf_rx->data, 343 + get_unaligned_be16(buf_rx->data + val_len)); 344 344 345 345 return err; 346 346 }
+5 -3
drivers/net/can/ti_hecc.c
··· 933 933 err = clk_prepare_enable(priv->clk); 934 934 if (err) { 935 935 dev_err(&pdev->dev, "clk_prepare_enable() failed\n"); 936 - goto probe_exit_clk; 936 + goto probe_exit_release_clk; 937 937 } 938 938 939 939 priv->offload.mailbox_read = ti_hecc_mailbox_read; ··· 942 942 err = can_rx_offload_add_timestamp(ndev, &priv->offload); 943 943 if (err) { 944 944 dev_err(&pdev->dev, "can_rx_offload_add_timestamp() failed\n"); 945 - goto probe_exit_clk; 945 + goto probe_exit_disable_clk; 946 946 } 947 947 948 948 err = register_candev(ndev); ··· 960 960 961 961 probe_exit_offload: 962 962 can_rx_offload_del(&priv->offload); 963 - probe_exit_clk: 963 + probe_exit_disable_clk: 964 + clk_disable_unprepare(priv->clk); 965 + probe_exit_release_clk: 964 966 clk_put(priv->clk); 965 967 probe_exit_candev: 966 968 free_candev(ndev);
+46 -5
drivers/net/can/usb/peak_usb/pcan_usb_core.c
··· 130 130 /* protect from getting time before setting now */ 131 131 if (ktime_to_ns(time_ref->tv_host)) { 132 132 u64 delta_us; 133 + s64 delta_ts = 0; 133 134 134 - delta_us = ts - time_ref->ts_dev_2; 135 - if (ts < time_ref->ts_dev_2) 136 - delta_us &= (1 << time_ref->adapter->ts_used_bits) - 1; 135 + /* General case: dev_ts_1 < dev_ts_2 < ts, with: 136 + * 137 + * - dev_ts_1 = previous sync timestamp 138 + * - dev_ts_2 = last sync timestamp 139 + * - ts = event timestamp 140 + * - ts_period = known sync period (theoretical) 141 + * ~ dev_ts2 - dev_ts1 142 + * *but*: 143 + * 144 + * - time counters wrap (see adapter->ts_used_bits) 145 + * - sometimes, dev_ts_1 < ts < dev_ts2 146 + * 147 + * "normal" case (sync time counters increase): 148 + * must take into account case when ts wraps (tsw) 149 + * 150 + * < ts_period > < > 151 + * | | | 152 + * ---+--------+----+-------0-+--+--> 153 + * ts_dev_1 | ts_dev_2 | 154 + * ts tsw 155 + */ 156 + if (time_ref->ts_dev_1 < time_ref->ts_dev_2) { 157 + /* case when event time (tsw) wraps */ 158 + if (ts < time_ref->ts_dev_1) 159 + delta_ts = 1 << time_ref->adapter->ts_used_bits; 137 160 138 - delta_us += time_ref->ts_total; 161 + /* Otherwise, sync time counter (ts_dev_2) has wrapped: 162 + * handle case when event time (tsn) hasn't. 
163 + * 164 + * < ts_period > < > 165 + * | | | 166 + * ---+--------+--0-+---------+--+--> 167 + * ts_dev_1 | ts_dev_2 | 168 + * tsn ts 169 + */ 170 + } else if (time_ref->ts_dev_1 < ts) { 171 + delta_ts = -(1 << time_ref->adapter->ts_used_bits); 172 + } 139 173 140 - delta_us *= time_ref->adapter->us_per_ts_scale; 174 + /* add delay between last sync and event timestamps */ 175 + delta_ts += (signed int)(ts - time_ref->ts_dev_2); 176 + 177 + /* add time from beginning to last sync */ 178 + delta_ts += time_ref->ts_total; 179 + 180 + /* convert ticks number into microseconds */ 181 + delta_us = delta_ts * time_ref->adapter->us_per_ts_scale; 141 182 delta_us >>= time_ref->adapter->us_per_ts_shift; 142 183 143 184 *time = ktime_add_us(time_ref->tv_host_0, delta_us);
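The wrap handling added above can be exercised outside the driver. A sketch with 8-bit counters for readability (the real adapters use ts_used_bits-wide counters; the function name is illustrative): ts_dev_1/ts_dev_2 are the two most recent sync timestamps, ts the event timestamp, and the result is the signed tick distance from the last sync to the event.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of the delta_ts computation in the hunk above. */
static int64_t event_delta(uint32_t ts_dev_1, uint32_t ts_dev_2,
			   uint32_t ts, unsigned int used_bits)
{
	int64_t delta_ts = 0;

	if (ts_dev_1 < ts_dev_2) {
		/* "normal" case: the event timestamp may have wrapped */
		if (ts < ts_dev_1)
			delta_ts = 1 << used_bits;
	} else if (ts_dev_1 < ts) {
		/* the sync counter wrapped but the event did not */
		delta_ts = -(1 << used_bits);
	}

	/* delay between last sync and event, as a signed difference */
	delta_ts += (int32_t)(ts - ts_dev_2);

	return delta_ts;
}
```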
+37 -11
drivers/net/can/usb/peak_usb/pcan_usb_fd.c
··· 468 468 struct pucan_msg *rx_msg) 469 469 { 470 470 struct pucan_rx_msg *rm = (struct pucan_rx_msg *)rx_msg; 471 - struct peak_usb_device *dev = usb_if->dev[pucan_msg_get_channel(rm)]; 472 - struct net_device *netdev = dev->netdev; 471 + struct peak_usb_device *dev; 472 + struct net_device *netdev; 473 473 struct canfd_frame *cfd; 474 474 struct sk_buff *skb; 475 475 const u16 rx_msg_flags = le16_to_cpu(rm->flags); 476 + 477 + if (pucan_msg_get_channel(rm) >= ARRAY_SIZE(usb_if->dev)) 478 + return -ENOMEM; 479 + 480 + dev = usb_if->dev[pucan_msg_get_channel(rm)]; 481 + netdev = dev->netdev; 476 482 477 483 if (rx_msg_flags & PUCAN_MSG_EXT_DATA_LEN) { 478 484 /* CANFD frame case */ ··· 525 519 struct pucan_msg *rx_msg) 526 520 { 527 521 struct pucan_status_msg *sm = (struct pucan_status_msg *)rx_msg; 528 - struct peak_usb_device *dev = usb_if->dev[pucan_stmsg_get_channel(sm)]; 529 - struct pcan_usb_fd_device *pdev = 530 - container_of(dev, struct pcan_usb_fd_device, dev); 522 + struct pcan_usb_fd_device *pdev; 531 523 enum can_state new_state = CAN_STATE_ERROR_ACTIVE; 532 524 enum can_state rx_state, tx_state; 533 - struct net_device *netdev = dev->netdev; 525 + struct peak_usb_device *dev; 526 + struct net_device *netdev; 534 527 struct can_frame *cf; 535 528 struct sk_buff *skb; 529 + 530 + if (pucan_stmsg_get_channel(sm) >= ARRAY_SIZE(usb_if->dev)) 531 + return -ENOMEM; 532 + 533 + dev = usb_if->dev[pucan_stmsg_get_channel(sm)]; 534 + pdev = container_of(dev, struct pcan_usb_fd_device, dev); 535 + netdev = dev->netdev; 536 536 537 537 /* nothing should be sent while in BUS_OFF state */ 538 538 if (dev->can.state == CAN_STATE_BUS_OFF) ··· 591 579 struct pucan_msg *rx_msg) 592 580 { 593 581 struct pucan_error_msg *er = (struct pucan_error_msg *)rx_msg; 594 - struct peak_usb_device *dev = usb_if->dev[pucan_ermsg_get_channel(er)]; 595 - struct pcan_usb_fd_device *pdev = 596 - container_of(dev, struct pcan_usb_fd_device, dev); 582 + struct pcan_usb_fd_device *pdev; 
583 + struct peak_usb_device *dev; 584 + 585 + if (pucan_ermsg_get_channel(er) >= ARRAY_SIZE(usb_if->dev)) 586 + return -EINVAL; 587 + 588 + dev = usb_if->dev[pucan_ermsg_get_channel(er)]; 589 + pdev = container_of(dev, struct pcan_usb_fd_device, dev); 597 590 598 591 /* keep a trace of tx and rx error counters for later use */ 599 592 pdev->bec.txerr = er->tx_err_cnt; ··· 612 595 struct pucan_msg *rx_msg) 613 596 { 614 597 struct pcan_ufd_ovr_msg *ov = (struct pcan_ufd_ovr_msg *)rx_msg; 615 - struct peak_usb_device *dev = usb_if->dev[pufd_omsg_get_channel(ov)]; 616 - struct net_device *netdev = dev->netdev; 598 + struct peak_usb_device *dev; 599 + struct net_device *netdev; 617 600 struct can_frame *cf; 618 601 struct sk_buff *skb; 602 + 603 + if (pufd_omsg_get_channel(ov) >= ARRAY_SIZE(usb_if->dev)) 604 + return -EINVAL; 605 + 606 + dev = usb_if->dev[pufd_omsg_get_channel(ov)]; 607 + netdev = dev->netdev; 619 608 620 609 /* allocate an skb to store the error frame */ 621 610 skb = alloc_can_err_skb(netdev, &cf); ··· 738 715 struct canfd_frame *cfd = (struct canfd_frame *)skb->data; 739 716 u16 tx_msg_size, tx_msg_flags; 740 717 u8 can_dlc; 718 + 719 + if (cfd->len > CANFD_MAX_DLEN) 720 + return -EINVAL; 741 721 742 722 tx_msg_size = ALIGN(sizeof(struct pucan_tx_msg) + cfd->len, 4); 743 723 tx_msg->size = cpu_to_le16(tx_msg_size);
+3 -3
drivers/net/can/xilinx_can.c
··· 1395 1395 if (ret < 0) { 1396 1396 netdev_err(ndev, "%s: pm_runtime_get failed(%d)\n", 1397 1397 __func__, ret); 1398 - return ret; 1398 + goto err; 1399 1399 } 1400 1400 1401 1401 ret = request_irq(ndev->irq, xcan_interrupt, priv->irq_flags, ··· 1479 1479 if (ret < 0) { 1480 1480 netdev_err(ndev, "%s: pm_runtime_get failed(%d)\n", 1481 1481 __func__, ret); 1482 + pm_runtime_put(priv->dev); 1482 1483 return ret; 1483 1484 } 1484 1485 ··· 1794 1793 if (ret < 0) { 1795 1794 netdev_err(ndev, "%s: pm_runtime_get failed(%d)\n", 1796 1795 __func__, ret); 1797 - goto err_pmdisable; 1796 + goto err_disableclks; 1798 1797 } 1799 1798 1800 1799 if (priv->read_reg(priv, XCAN_SR_OFFSET) != XCAN_SR_CONFIG_MASK) { ··· 1829 1828 1830 1829 err_disableclks: 1831 1830 pm_runtime_put(priv->dev); 1832 - err_pmdisable: 1833 1831 pm_runtime_disable(&pdev->dev); 1834 1832 err_free: 1835 1833 free_candev(ndev);
+2 -2
drivers/net/dsa/qca8k.c
··· 1219 1219 priv->port_mtu[port] = new_mtu; 1220 1220 1221 1221 for (i = 0; i < QCA8K_NUM_PORTS; i++) 1222 - if (priv->port_mtu[port] > mtu) 1223 - mtu = priv->port_mtu[port]; 1222 + if (priv->port_mtu[i] > mtu) 1223 + mtu = priv->port_mtu[i]; 1224 1224 1225 1225 /* Include L2 header / FCS length */ 1226 1226 qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, mtu + ETH_HLEN + ETH_FCS_LEN);
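The fix above replaces `port` with the loop variable `i`, so the switch-wide maximum really scans every port's stored MTU instead of re-reading the changed port. A minimal sketch of the corrected scan (NUM_PORTS is a stand-in for QCA8K_NUM_PORTS):

```c
#include <assert.h>

#define NUM_PORTS 7	/* stand-in for QCA8K_NUM_PORTS */

/* Corrected scan: index with the loop variable i, not the changed port. */
static int max_port_mtu(const int port_mtu[NUM_PORTS])
{
	int i, mtu = 0;

	for (i = 0; i < NUM_PORTS; i++)
		if (port_mtu[i] > mtu)
			mtu = port_mtu[i];

	return mtu;
}
```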
+2 -1
drivers/net/ethernet/cadence/macb_main.c
··· 1929 1929 1930 1930 static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev) 1931 1931 { 1932 - bool cloned = skb_cloned(*skb) || skb_header_cloned(*skb); 1932 + bool cloned = skb_cloned(*skb) || skb_header_cloned(*skb) || 1933 + skb_is_nonlinear(*skb); 1933 1934 int padlen = ETH_ZLEN - (*skb)->len; 1934 1935 int headroom = skb_headroom(*skb); 1935 1936 int tailroom = skb_tailroom(*skb);
+1 -1
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
··· 212 212 { 213 213 if (likely(skb && !skb_shared(skb) && !skb_cloned(skb))) { 214 214 __skb_trim(skb, 0); 215 - refcount_add(2, &skb->users); 215 + refcount_inc(&skb->users); 216 216 } else { 217 217 skb = alloc_skb(len, GFP_KERNEL | __GFP_NOFAIL); 218 218 }
+3
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c
··· 383 383 if (ret) 384 384 goto out_notcb; 385 385 386 + if (unlikely(csk_flag(sk, CSK_ABORT_SHUTDOWN))) 387 + goto out_notcb; 388 + 386 389 set_wr_txq(skb, CPL_PRIORITY_DATA, csk->tlshws.txqid); 387 390 csk->wr_credits -= DIV_ROUND_UP(len, 16); 388 391 csk->wr_unacked += DIV_ROUND_UP(len, 16);
+19 -11
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
··· 174 174 #define DPAA_PARSE_RESULTS_SIZE sizeof(struct fman_prs_result) 175 175 #define DPAA_TIME_STAMP_SIZE 8 176 176 #define DPAA_HASH_RESULTS_SIZE 8 177 - #ifdef CONFIG_DPAA_ERRATUM_A050385 178 - #define DPAA_RX_PRIV_DATA_SIZE (DPAA_A050385_ALIGN - (DPAA_PARSE_RESULTS_SIZE\ 179 - + DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE)) 180 - #else 181 - #define DPAA_RX_PRIV_DATA_SIZE (u16)(DPAA_TX_PRIV_DATA_SIZE + \ 177 + #define DPAA_HWA_SIZE (DPAA_PARSE_RESULTS_SIZE + DPAA_TIME_STAMP_SIZE \ 178 + + DPAA_HASH_RESULTS_SIZE) 179 + #define DPAA_RX_PRIV_DATA_DEFAULT_SIZE (DPAA_TX_PRIV_DATA_SIZE + \ 182 180 dpaa_rx_extra_headroom) 181 + #ifdef CONFIG_DPAA_ERRATUM_A050385 182 + #define DPAA_RX_PRIV_DATA_A050385_SIZE (DPAA_A050385_ALIGN - DPAA_HWA_SIZE) 183 + #define DPAA_RX_PRIV_DATA_SIZE (fman_has_errata_a050385() ? \ 184 + DPAA_RX_PRIV_DATA_A050385_SIZE : \ 185 + DPAA_RX_PRIV_DATA_DEFAULT_SIZE) 186 + #else 187 + #define DPAA_RX_PRIV_DATA_SIZE DPAA_RX_PRIV_DATA_DEFAULT_SIZE 183 188 #endif 184 189 185 190 #define DPAA_ETH_PCD_RXQ_NUM 128
··· 2845 2840 return err; 2846 2841 } 2847 2842 2848 - static inline u16 dpaa_get_headroom(struct dpaa_buffer_layout *bl) 2843 + static u16 dpaa_get_headroom(struct dpaa_buffer_layout *bl, 2844 + enum port_type port) 2849 2845 { 2850 2846 u16 headroom; 2851 2847
··· 2860 2854 * 2861 2855 * Also make sure the headroom is a multiple of data_align bytes 2862 2856 */ 2863 - headroom = (u16)(bl->priv_data_size + DPAA_PARSE_RESULTS_SIZE + 2864 - DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE); 2857 + headroom = (u16)(bl[port].priv_data_size + DPAA_HWA_SIZE); 2865 2858 2866 - return ALIGN(headroom, DPAA_FD_DATA_ALIGNMENT); 2859 + if (port == RX) 2860 + return ALIGN(headroom, DPAA_FD_RX_DATA_ALIGNMENT); 2861 + else 2862 + return ALIGN(headroom, DPAA_FD_DATA_ALIGNMENT); 2867 2863 } 2868 2864 2869 2865 static int dpaa_eth_probe(struct platform_device *pdev)
··· 3033 3025 goto free_dpaa_fqs; 3034 3026 } 3035 3027 3036 - priv->tx_headroom = dpaa_get_headroom(&priv->buf_layout[TX]); 3037 - priv->rx_headroom = dpaa_get_headroom(&priv->buf_layout[RX]); 3028 + priv->tx_headroom = dpaa_get_headroom(priv->buf_layout, TX); 3029 + priv->rx_headroom = dpaa_get_headroom(priv->buf_layout, RX); 3038 3030 3039 3031 /* All real interfaces need their ports initialized */ 3040 3032 err = dpaa_eth_init_ports(mac_dev, dpaa_bp, &port_fqs,

+6
drivers/net/ethernet/freescale/fec.h
··· 456 456 */ 457 457 #define FEC_QUIRK_HAS_FRREG (1 << 16) 458 458 459 + /* Some FEC hardware blocks need the MMFR cleared at setup time to avoid 460 + * the generation of an MII event. This must be avoided in the older 461 + * FEC blocks where it will stop MII events being generated. 462 + */ 463 + #define FEC_QUIRK_CLEAR_SETUP_MII (1 << 17) 464 + 459 465 struct bufdesc_prop { 460 466 int qid; 461 467 /* Address of Rx and Tx buffers */
+16 -13
drivers/net/ethernet/freescale/fec_main.c
··· 100 100 static const struct fec_devinfo fec_imx28_info = { 101 101 .quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME | 102 102 FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC | 103 - FEC_QUIRK_HAS_FRREG, 103 + FEC_QUIRK_HAS_FRREG | FEC_QUIRK_CLEAR_SETUP_MII, 104 104 }; 105 105 106 106 static const struct fec_devinfo fec_imx6q_info = { 107 107 .quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT | 108 108 FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM | 109 109 FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR006358 | 110 - FEC_QUIRK_HAS_RACC, 110 + FEC_QUIRK_HAS_RACC | FEC_QUIRK_CLEAR_SETUP_MII, 111 111 }; 112 112 113 113 static const struct fec_devinfo fec_mvf600_info = { ··· 119 119 FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM | 120 120 FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB | 121 121 FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE | 122 - FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE, 122 + FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE | 123 + FEC_QUIRK_CLEAR_SETUP_MII, 123 124 }; 124 125 125 126 static const struct fec_devinfo fec_imx6ul_info = { ··· 128 127 FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM | 129 128 FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR007885 | 130 129 FEC_QUIRK_BUG_CAPTURE | FEC_QUIRK_HAS_RACC | 131 - FEC_QUIRK_HAS_COALESCE, 130 + FEC_QUIRK_HAS_COALESCE | FEC_QUIRK_CLEAR_SETUP_MII, 132 131 }; 133 132 134 133 static struct platform_device_id fec_devtype[] = { ··· 2135 2134 if (suppress_preamble) 2136 2135 fep->phy_speed |= BIT(7); 2137 2136 2138 - /* Clear MMFR to avoid to generate MII event by writing MSCR. 2139 - * MII event generation condition: 2140 - * - writing MSCR: 2141 - * - mmfr[31:0]_not_zero & mscr[7:0]_is_zero & 2142 - * mscr_reg_data_in[7:0] != 0 2143 - * - writing MMFR: 2144 - * - mscr[7:0]_not_zero 2145 - */ 2146 - writel(0, fep->hwp + FEC_MII_DATA); 2137 + if (fep->quirks & FEC_QUIRK_CLEAR_SETUP_MII) { 2138 + /* Clear MMFR to avoid to generate MII event by writing MSCR. 
2139 + * MII event generation condition: 2140 + * - writing MSCR: 2141 + * - mmfr[31:0]_not_zero & mscr[7:0]_is_zero & 2142 + * mscr_reg_data_in[7:0] != 0 2143 + * - writing MMFR: 2144 + * - mscr[7:0]_not_zero 2145 + */ 2146 + writel(0, fep->hwp + FEC_MII_DATA); 2147 + } 2147 2148 2148 2149 writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED); 2149 2150
+3 -11
drivers/net/ethernet/freescale/gianfar.c
··· 1829 1829 fcb_len = GMAC_FCB_LEN + GMAC_TXPAL_LEN; 1830 1830 1831 1831 /* make space for additional header when fcb is needed */ 1832 - if (fcb_len && unlikely(skb_headroom(skb) < fcb_len)) { 1833 - struct sk_buff *skb_new; 1834 - 1835 - skb_new = skb_realloc_headroom(skb, fcb_len); 1836 - if (!skb_new) { 1832 + if (fcb_len) { 1833 + if (unlikely(skb_cow_head(skb, fcb_len))) { 1837 1834 dev->stats.tx_errors++; 1838 1835 dev_kfree_skb_any(skb); 1839 1836 return NETDEV_TX_OK; 1840 1837 } 1841 - 1842 - if (skb->sk) 1843 - skb_set_owner_w(skb_new, skb->sk); 1844 - dev_consume_skb_any(skb); 1845 - skb = skb_new; 1846 1838 } 1847 1839 1848 1840 /* total number of fragments in the SKB */ ··· 3372 3380 3373 3381 if (dev->features & NETIF_F_IP_CSUM || 3374 3382 priv->device_flags & FSL_GIANFAR_DEV_HAS_TIMER) 3375 - dev->needed_headroom = GMAC_FCB_LEN; 3383 + dev->needed_headroom = GMAC_FCB_LEN + GMAC_TXPAL_LEN; 3376 3384 3377 3385 /* Initializing some of the rx/tx queue level parameters */ 3378 3386 for (i = 0; i < priv->num_tx_queues; i++) {
+32 -4
drivers/net/ethernet/ibm/ibmvnic.c
··· 1185 1185 if (adapter->state != VNIC_CLOSED) { 1186 1186 rc = ibmvnic_login(netdev); 1187 1187 if (rc) 1188 - return rc; 1188 + goto out; 1189 1189 1190 1190 rc = init_resources(adapter); 1191 1191 if (rc) { 1192 1192 netdev_err(netdev, "failed to initialize resources\n"); 1193 1193 release_resources(adapter); 1194 - return rc; 1194 + goto out; 1195 1195 } 1196 1196 } 1197 1197 1198 1198 rc = __ibmvnic_open(netdev); 1199 1199 1200 + out: 1201 + /* 1202 + * If open fails due to a pending failover, set device state and 1203 + * return. Device operation will be handled by reset routine. 1204 + */ 1205 + if (rc && adapter->failover_pending) { 1206 + adapter->state = VNIC_OPEN; 1207 + rc = 0; 1208 + } 1200 1209 return rc; 1201 1210 }
··· 1931 1922 rwi->reset_reason); 1932 1923 1933 1924 rtnl_lock(); 1925 + /* 1926 + * Now that we have the rtnl lock, clear any pending failover. 1927 + * This will ensure ibmvnic_open() has either completed or will 1928 + * block until failover is complete. 1929 + */ 1930 + if (rwi->reset_reason == VNIC_RESET_FAILOVER) 1931 + adapter->failover_pending = false; 1934 1932 1935 1933 netif_carrier_off(netdev); 1936 1934 adapter->reset_reason = rwi->reset_reason;
··· 2218 2202 /* CHANGE_PARAM requestor holds rtnl_lock */ 2219 2203 rc = do_change_param_reset(adapter, rwi, reset_state); 2220 2204 } else if (adapter->force_reset_recovery) { 2205 + /* 2206 + * Since we are doing a hard reset now, clear the 2207 + * failover_pending flag so we don't ignore any 2208 + * future MOBILITY or other resets. 2209 + */ 2210 + adapter->failover_pending = false; 2211 + 2221 2212 /* Transport event occurred during previous reset */ 2222 2213 if (adapter->wait_for_reset) { 2223 2214 /* Previous was CHANGE_PARAM; caller locked */
··· 2289 2266 unsigned long flags; 2290 2267 int ret; 2291 2268 2269 + /* 2270 + * If failover is pending don't schedule any other reset. 2271 + * Instead let the failover complete. If there is already a 2272 + * a failover reset scheduled, we will detect and drop the 2273 + * duplicate reset when walking the ->rwi_list below. 2274 + */ 2292 2275 if (adapter->state == VNIC_REMOVING || 2293 2276 adapter->state == VNIC_REMOVED || 2294 - adapter->failover_pending) { 2277 + (adapter->failover_pending && reason != VNIC_RESET_FAILOVER)) { 2295 2278 ret = EBUSY; 2296 2279 netdev_dbg(netdev, "Adapter removing or pending failover, skipping reset\n"); 2297 2280 goto err;
··· 4742 4713 case IBMVNIC_CRQ_INIT: 4743 4714 dev_info(dev, "Partner initialized\n"); 4744 4715 adapter->from_passive_init = true; 4745 - adapter->failover_pending = false; 4746 4716 if (!completion_done(&adapter->init_done)) { 4747 4717 complete(&adapter->init_done); 4748 4718 adapter->init_done_rc = -EIO;
+5
drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
··· 126 126 127 127 ethtool_link_ksettings_zero_link_mode(ks, supported); 128 128 129 + if (!idev->port_info) { 130 + netdev_err(netdev, "port_info not initialized\n"); 131 + return -EOPNOTSUPP; 132 + } 133 + 129 134 /* The port_info data is found in a DMA space that the NIC keeps 130 135 * up-to-date, so there's no need to request the data from the 131 136 * NIC, we already have it in our memory space.
+11 -3
drivers/net/ethernet/realtek/r8169_main.c
··· 4080 4080 return -EIO; 4081 4081 } 4082 4082 4083 - static bool rtl_test_hw_pad_bug(struct rtl8169_private *tp, struct sk_buff *skb) 4083 + static bool rtl_test_hw_pad_bug(struct rtl8169_private *tp) 4084 4084 { 4085 - return skb->len < ETH_ZLEN && tp->mac_version == RTL_GIGA_MAC_VER_34; 4085 + switch (tp->mac_version) { 4086 + case RTL_GIGA_MAC_VER_34: 4087 + case RTL_GIGA_MAC_VER_60: 4088 + case RTL_GIGA_MAC_VER_61: 4089 + case RTL_GIGA_MAC_VER_63: 4090 + return true; 4091 + default: 4092 + return false; 4093 + } 4086 4094 } 4087 4095 4088 4096 static void rtl8169_tso_csum_v1(struct sk_buff *skb, u32 *opts) ··· 4162 4154 4163 4155 opts[1] |= transport_offset << TCPHO_SHIFT; 4164 4156 } else { 4165 - if (unlikely(rtl_test_hw_pad_bug(tp, skb))) 4157 + if (unlikely(skb->len < ETH_ZLEN && rtl_test_hw_pad_bug(tp))) 4166 4158 return !eth_skb_pad(skb); 4167 4159 } 4168 4160
+7 -7
drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
··· 625 625 if (ret) 626 626 return ret; 627 627 628 - if (plat->eee_usecs_rate > 0) { 629 - u32 tx_lpi_usec; 630 - 631 - tx_lpi_usec = (plat->eee_usecs_rate / 1000000) - 1; 632 - writel(tx_lpi_usec, res.addr + GMAC_1US_TIC_COUNTER); 633 - } 634 - 635 628 ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES); 636 629 if (ret < 0) 637 630 return ret; ··· 633 640 res.addr = pcim_iomap_table(pdev)[0]; 634 641 res.wol_irq = pci_irq_vector(pdev, 0); 635 642 res.irq = pci_irq_vector(pdev, 0); 643 + 644 + if (plat->eee_usecs_rate > 0) { 645 + u32 tx_lpi_usec; 646 + 647 + tx_lpi_usec = (plat->eee_usecs_rate / 1000000) - 1; 648 + writel(tx_lpi_usec, res.addr + GMAC_1US_TIC_COUNTER); 649 + } 636 650 637 651 ret = stmmac_dvr_probe(&pdev->dev, plat, &res); 638 652 if (ret) {
+1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 4757 4757 4758 4758 ch->priv_data = priv; 4759 4759 ch->index = queue; 4760 + spin_lock_init(&ch->lock); 4760 4761 4761 4762 if (queue < priv->plat->rx_queues_to_use) { 4762 4763 netif_napi_add(dev, &ch->rx_napi, stmmac_napi_poll_rx,
-1
drivers/net/ethernet/ti/cpsw_ethtool.c
··· 728 728 (1 << HWTSTAMP_TX_ON); 729 729 info->rx_filters = 730 730 (1 << HWTSTAMP_FILTER_NONE) | 731 - (1 << HWTSTAMP_FILTER_PTP_V1_L4_EVENT) | 732 731 (1 << HWTSTAMP_FILTER_PTP_V2_EVENT); 733 732 return 0; 734 733 }
+1 -4
drivers/net/ethernet/ti/cpsw_priv.c
··· 639 639 break; 640 640 case HWTSTAMP_FILTER_ALL: 641 641 case HWTSTAMP_FILTER_NTP_ALL: 642 - return -ERANGE; 643 642 case HWTSTAMP_FILTER_PTP_V1_L4_EVENT: 644 643 case HWTSTAMP_FILTER_PTP_V1_L4_SYNC: 645 644 case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ: 646 - priv->rx_ts_enabled = HWTSTAMP_FILTER_PTP_V1_L4_EVENT; 647 - cfg.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT; 648 - break; 645 + return -ERANGE; 649 646 case HWTSTAMP_FILTER_PTP_V2_L4_EVENT: 650 647 case HWTSTAMP_FILTER_PTP_V2_L4_SYNC: 651 648 case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+2 -1
drivers/net/phy/sfp.c
··· 2389 2389 continue; 2390 2390 2391 2391 sfp->gpio_irq[i] = gpiod_to_irq(sfp->gpio[i]); 2392 - if (!sfp->gpio_irq[i]) { 2392 + if (sfp->gpio_irq[i] < 0) { 2393 + sfp->gpio_irq[i] = 0; 2393 2394 sfp->need_poll = true; 2394 2395 continue; 2395 2396 }
+1
drivers/net/usb/qmi_wwan.c
··· 1309 1309 {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ 1310 1310 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ 1311 1311 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1201, 2)}, /* Telit LE920, LE920A4 */ 1312 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1230, 2)}, /* Telit LE910Cx */ 1312 1313 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1260, 2)}, /* Telit LE910Cx */ 1313 1314 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1261, 2)}, /* Telit LE910Cx */ 1314 1315 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1900, 1)}, /* Telit LN940 series */
+6 -2
drivers/nvme/host/core.c
··· 4582 4582 } 4583 4583 EXPORT_SYMBOL_GPL(nvme_start_queues); 4584 4584 4585 - 4586 - void nvme_sync_queues(struct nvme_ctrl *ctrl) 4585 + void nvme_sync_io_queues(struct nvme_ctrl *ctrl) 4587 4586 { 4588 4587 struct nvme_ns *ns; 4589 4588 ··· 4590 4591 list_for_each_entry(ns, &ctrl->namespaces, list) 4591 4592 blk_sync_queue(ns->queue); 4592 4593 up_read(&ctrl->namespaces_rwsem); 4594 + } 4595 + EXPORT_SYMBOL_GPL(nvme_sync_io_queues); 4593 4596 4597 + void nvme_sync_queues(struct nvme_ctrl *ctrl) 4598 + { 4599 + nvme_sync_io_queues(ctrl); 4594 4600 if (ctrl->admin_q) 4595 4601 blk_sync_queue(ctrl->admin_q); 4596 4602 }
+1
drivers/nvme/host/nvme.h
··· 602 602 void nvme_start_queues(struct nvme_ctrl *ctrl); 603 603 void nvme_kill_queues(struct nvme_ctrl *ctrl); 604 604 void nvme_sync_queues(struct nvme_ctrl *ctrl); 605 + void nvme_sync_io_queues(struct nvme_ctrl *ctrl); 605 606 void nvme_unfreeze(struct nvme_ctrl *ctrl); 606 607 void nvme_wait_freeze(struct nvme_ctrl *ctrl); 607 608 int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);
+19 -4
drivers/nvme/host/pci.c
··· 198 198 u32 q_depth; 199 199 u16 cq_vector; 200 200 u16 sq_tail; 201 + u16 last_sq_tail; 201 202 u16 cq_head; 202 203 u16 qid; 203 204 u8 cq_phase; ··· 456 455 return 0; 457 456 } 458 457 459 - static inline void nvme_write_sq_db(struct nvme_queue *nvmeq) 458 + /* 459 + * Write sq tail if we are asked to, or if the next command would wrap. 460 + */ 461 + static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq) 460 462 { 463 + if (!write_sq) { 464 + u16 next_tail = nvmeq->sq_tail + 1; 465 + 466 + if (next_tail == nvmeq->q_depth) 467 + next_tail = 0; 468 + if (next_tail != nvmeq->last_sq_tail) 469 + return; 470 + } 471 + 461 472 if (nvme_dbbuf_update_and_check_event(nvmeq->sq_tail, 462 473 nvmeq->dbbuf_sq_db, nvmeq->dbbuf_sq_ei)) 463 474 writel(nvmeq->sq_tail, nvmeq->q_db); 475 + nvmeq->last_sq_tail = nvmeq->sq_tail; 464 476 } 465 477 466 478 /** ··· 490 476 cmd, sizeof(*cmd)); 491 477 if (++nvmeq->sq_tail == nvmeq->q_depth) 492 478 nvmeq->sq_tail = 0; 493 - if (write_sq) 494 - nvme_write_sq_db(nvmeq); 479 + nvme_write_sq_db(nvmeq, write_sq); 495 480 spin_unlock(&nvmeq->sq_lock); 496 481 } 497 482 ··· 499 486 struct nvme_queue *nvmeq = hctx->driver_data; 500 487 501 488 spin_lock(&nvmeq->sq_lock); 502 - nvme_write_sq_db(nvmeq); 489 + if (nvmeq->sq_tail != nvmeq->last_sq_tail) 490 + nvme_write_sq_db(nvmeq, true); 503 491 spin_unlock(&nvmeq->sq_lock); 504 492 } 505 493 ··· 1510 1496 struct nvme_dev *dev = nvmeq->dev; 1511 1497 1512 1498 nvmeq->sq_tail = 0; 1499 + nvmeq->last_sq_tail = 0; 1513 1500 nvmeq->cq_head = 0; 1514 1501 nvmeq->cq_phase = 1; 1515 1502 nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride];
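The batching above defers the SQ doorbell: the queue remembers the tail value last written to hardware and only issues the MMIO write when explicitly requested (on commit) or when the next slot would wrap. A toy model of that bookkeeping (not the driver's structures; db_writes stands in for writel(sq_tail, q_db)):

```c
#include <assert.h>
#include <stdint.h>

struct toy_sq {
	uint16_t q_depth;
	uint16_t sq_tail;
	uint16_t last_sq_tail;
	unsigned int db_writes;	/* counts simulated MMIO doorbell writes */
};

/* Write the tail if asked to, or if the next command would wrap. */
static void toy_write_sq_db(struct toy_sq *q, int write_sq)
{
	if (!write_sq) {
		uint16_t next_tail = q->sq_tail + 1;

		if (next_tail == q->q_depth)
			next_tail = 0;
		if (next_tail != q->last_sq_tail)
			return;
	}
	q->db_writes++;
	q->last_sq_tail = q->sq_tail;
}

/* Queue one command; write_sq mirrors the flag in the hunk above. */
static void toy_submit(struct toy_sq *q, int write_sq)
{
	if (++q->sq_tail == q->q_depth)
		q->sq_tail = 0;
	toy_write_sq_db(q, write_sq);
}

/* Submit n commands with deferred doorbells; return MMIO write count. */
static unsigned int toy_run(uint16_t depth, int n)
{
	struct toy_sq q = { .q_depth = depth };

	while (n--)
		toy_submit(&q, 0);
	return q.db_writes;
}
```

With a depth-4 queue and fully deferred submissions, a doorbell write only happens when the ring is about to wrap, rather than once per command.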
+3 -11
drivers/nvme/host/rdma.c
··· 122 122 struct sockaddr_storage src_addr; 123 123 124 124 struct nvme_ctrl ctrl; 125 - struct mutex teardown_lock; 126 125 bool use_inline_data; 127 126 u32 io_queues[HCTX_MAX_TYPES]; 128 127 }; ··· 1009 1010 static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl, 1010 1011 bool remove) 1011 1012 { 1012 - mutex_lock(&ctrl->teardown_lock); 1013 1013 blk_mq_quiesce_queue(ctrl->ctrl.admin_q); 1014 + blk_sync_queue(ctrl->ctrl.admin_q); 1014 1015 nvme_rdma_stop_queue(&ctrl->queues[0]); 1015 1016 if (ctrl->ctrl.admin_tagset) { 1016 1017 blk_mq_tagset_busy_iter(ctrl->ctrl.admin_tagset, ··· 1020 1021 if (remove) 1021 1022 blk_mq_unquiesce_queue(ctrl->ctrl.admin_q); 1022 1023 nvme_rdma_destroy_admin_queue(ctrl, remove); 1023 - mutex_unlock(&ctrl->teardown_lock); 1024 1024 } 1025 1025 1026 1026 static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl, 1027 1027 bool remove) 1028 1028 { 1029 - mutex_lock(&ctrl->teardown_lock); 1030 1029 if (ctrl->ctrl.queue_count > 1) { 1031 1030 nvme_start_freeze(&ctrl->ctrl); 1032 1031 nvme_stop_queues(&ctrl->ctrl); 1032 + nvme_sync_io_queues(&ctrl->ctrl); 1033 1033 nvme_rdma_stop_io_queues(ctrl); 1034 1034 if (ctrl->ctrl.tagset) { 1035 1035 blk_mq_tagset_busy_iter(ctrl->ctrl.tagset, ··· 1039 1041 nvme_start_queues(&ctrl->ctrl); 1040 1042 nvme_rdma_destroy_io_queues(ctrl, remove); 1041 1043 } 1042 - mutex_unlock(&ctrl->teardown_lock); 1043 1044 } 1044 1045 1045 1046 static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl) ··· 1973 1976 { 1974 1977 struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq); 1975 1978 struct nvme_rdma_queue *queue = req->queue; 1976 - struct nvme_rdma_ctrl *ctrl = queue->ctrl; 1977 1979 1978 - /* fence other contexts that may complete the command */ 1979 - mutex_lock(&ctrl->teardown_lock); 1980 1980 nvme_rdma_stop_queue(queue); 1981 - if (!blk_mq_request_completed(rq)) { 1981 + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) { 1982 1982 nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
1983 1983 blk_mq_complete_request(rq); 1984 1984 } 1985 - mutex_unlock(&ctrl->teardown_lock); 1986 1985 } 1987 1986 1988 1987 static enum blk_eh_timer_return ··· 2313 2320 return ERR_PTR(-ENOMEM); 2314 2321 ctrl->ctrl.opts = opts; 2315 2322 INIT_LIST_HEAD(&ctrl->list); 2316 - mutex_init(&ctrl->teardown_lock); 2317 2323 2318 2324 if (!(opts->mask & NVMF_OPT_TRSVCID)) { 2319 2325 opts->trsvcid =
+4 -12
drivers/nvme/host/tcp.c
··· 124 124 struct sockaddr_storage src_addr; 125 125 struct nvme_ctrl ctrl; 126 126 127 - struct mutex teardown_lock; 128 127 struct work_struct err_work; 129 128 struct delayed_work connect_work; 130 129 struct nvme_tcp_request async_req; ··· 1885 1886 static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl, 1886 1887 bool remove) 1887 1888 { 1888 - mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock); 1889 1889 blk_mq_quiesce_queue(ctrl->admin_q); 1890 + blk_sync_queue(ctrl->admin_q); 1890 1891 nvme_tcp_stop_queue(ctrl, 0); 1891 1892 if (ctrl->admin_tagset) { 1892 1893 blk_mq_tagset_busy_iter(ctrl->admin_tagset, ··· 1896 1897 if (remove) 1897 1898 blk_mq_unquiesce_queue(ctrl->admin_q); 1898 1899 nvme_tcp_destroy_admin_queue(ctrl, remove); 1899 - mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock); 1900 1900 } 1901 1901 1902 1902 static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl, 1903 1903 bool remove) 1904 1904 { 1905 - mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock); 1906 1905 if (ctrl->queue_count <= 1) 1907 - goto out; 1906 + return; 1908 1907 blk_mq_quiesce_queue(ctrl->admin_q); 1909 1908 nvme_start_freeze(ctrl); 1910 1909 nvme_stop_queues(ctrl); 1910 + nvme_sync_io_queues(ctrl); 1911 1911 nvme_tcp_stop_io_queues(ctrl); 1912 1912 if (ctrl->tagset) { 1913 1913 blk_mq_tagset_busy_iter(ctrl->tagset, ··· 1916 1918 if (remove) 1917 1919 nvme_start_queues(ctrl); 1918 1920 nvme_tcp_destroy_io_queues(ctrl, remove); 1919 - out: 1920 - mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock); 1921 1921 } 1922 1922 1923 1923 static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl) ··· 2167 2171 struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); 2168 2172 struct nvme_ctrl *ctrl = &req->queue->ctrl->ctrl; 2169 2173 2170 - /* fence other contexts that may complete the command */ 2171 - mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock); 2172 2174 nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue)); 2173 - if (!blk_mq_request_completed(rq)) { 2175 + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) {
2174 2176 nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD; 2175 2177 blk_mq_complete_request(rq); 2176 2178 } 2177 - mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock); 2178 2179 } 2179 2180 2180 2181 static enum blk_eh_timer_return ··· 2448 2455 nvme_tcp_reconnect_ctrl_work); 2449 2456 INIT_WORK(&ctrl->err_work, nvme_tcp_error_recovery_work); 2450 2457 INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work); 2451 - mutex_init(&ctrl->teardown_lock); 2452 2458 2453 2459 if (!(opts->mask & NVMF_OPT_TRSVCID)) { 2454 2460 opts->trsvcid =
+1 -1
drivers/of/device.c
··· 112 112 u64 dma_end = 0; 113 113 114 114 /* Determine the overall bounds of all DMA regions */ 115 - for (dma_start = ~0ULL; r->size; r++) { 115 + for (dma_start = ~0; r->size; r++) { 116 116 /* Take lower and upper limits */ 117 117 if (r->dma_start < dma_start) 118 118 dma_start = r->dma_start;
+5 -4
drivers/opp/core.c
··· 1181 1181 struct opp_device *opp_dev, *temp; 1182 1182 int i; 1183 1183 1184 + /* Drop the lock as soon as we can */ 1185 + list_del(&opp_table->node); 1186 + mutex_unlock(&opp_table_lock); 1187 + 1184 1188 _of_clear_opp_table(opp_table); 1185 1189 1186 1190 /* Release clk */ ··· 1212 1208 1213 1209 mutex_destroy(&opp_table->genpd_virt_dev_lock); 1214 1210 mutex_destroy(&opp_table->lock); 1215 - list_del(&opp_table->node); 1216 1211 kfree(opp_table); 1217 - 1218 - mutex_unlock(&opp_table_lock); 1219 1212 } 1220 1213 1221 1214 void dev_pm_opp_put_opp_table(struct opp_table *opp_table) ··· 1931 1930 return ERR_PTR(-EINVAL); 1932 1931 1933 1932 opp_table = dev_pm_opp_get_opp_table(dev); 1934 - if (!IS_ERR(opp_table)) 1933 + if (IS_ERR(opp_table)) 1935 1934 return opp_table; 1936 1935 1937 1936 /* This should be called before OPPs are initialized */
+2
drivers/opp/of.c
··· 944 944 nr -= 2; 945 945 } 946 946 947 + return 0; 948 + 947 949 remove_static_opp: 948 950 _opp_remove_all_static(opp_table); 949 951
+6 -2
drivers/pci/controller/dwc/pcie-designware-host.c
··· 586 586 * ATU, so we should not program the ATU here. 587 587 */ 588 588 if (pp->bridge->child_ops == &dw_child_pcie_ops) { 589 - struct resource_entry *entry = 590 - resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM); 589 + struct resource_entry *tmp, *entry = NULL; 590 + 591 + /* Get last memory resource entry */ 592 + resource_list_for_each_entry(tmp, &pp->bridge->windows) 593 + if (resource_type(tmp->res) == IORESOURCE_MEM) 594 + entry = tmp; 591 595 592 596 dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX0, 593 597 PCIE_ATU_TYPE_MEM, entry->res->start,
+10 -13
drivers/pci/controller/pci-mvebu.c
··· 958 958 } 959 959 960 960 /* 961 - * We can't use devm_of_pci_get_host_bridge_resources() because we 962 - * need to parse our special DT properties encoding the MEM and IO 963 - * apertures. 961 + * devm_of_pci_get_host_bridge_resources() only sets up translateable resources, 962 + * so we need extra resource setup parsing our special DT properties encoding 963 + * the MEM and IO apertures. 964 964 */ 965 965 static int mvebu_pcie_parse_request_resources(struct mvebu_pcie *pcie) 966 966 { 967 967 struct device *dev = &pcie->pdev->dev; 968 - struct device_node *np = dev->of_node; 969 968 struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); 970 969 int ret; 971 - 972 - /* Get the bus range */ 973 - ret = of_pci_parse_bus_range(np, &pcie->busn); 974 - if (ret) { 975 - dev_err(dev, "failed to parse bus-range property: %d\n", ret); 976 - return ret; 977 - } 978 - pci_add_resource(&bridge->windows, &pcie->busn); 979 970 980 971 /* Get the PCIe memory aperture */ 981 972 mvebu_mbus_get_pcie_mem_aperture(&pcie->mem); ··· 977 986 978 987 pcie->mem.name = "PCI MEM"; 979 988 pci_add_resource(&bridge->windows, &pcie->mem); 989 + ret = devm_request_resource(dev, &iomem_resource, &pcie->mem); 990 + if (ret) 991 + return ret; 980 992 981 993 /* Get the PCIe IO aperture */ 982 994 mvebu_mbus_get_pcie_io_aperture(&pcie->io); ··· 993 999 pcie->realio.name = "PCI I/O"; 994 1000 995 1001 pci_add_resource(&bridge->windows, &pcie->realio); 1002 + ret = devm_request_resource(dev, &ioport_resource, &pcie->realio); 1003 + if (ret) 1004 + return ret; 996 1005 } 997 1006 998 - return devm_request_pci_bus_resources(dev, &bridge->windows); 1007 + return 0; 999 1008 } 1000 1009 1001 1010 /*
+7 -2
drivers/pci/pci.c
··· 3516 3516 { 3517 3517 dev->acs_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS); 3518 3518 3519 - if (dev->acs_cap) 3520 - pci_enable_acs(dev); 3519 + /* 3520 + * Attempt to enable ACS regardless of capability because some Root 3521 + * Ports (e.g. those quirked with *_intel_pch_acs_*) do not have 3522 + * the standard ACS capability but still support ACS via those 3523 + * quirks. 3524 + */ 3525 + pci_enable_acs(dev); 3521 3526 } 3522 3527 3523 3528 /**
+1 -1
drivers/powercap/intel_rapl_common.c
··· 620 620 case ARBITRARY_UNIT: 621 621 default: 622 622 return value; 623 - }; 623 + } 624 624 625 625 if (to_raw) 626 626 return div64_u64(value, units) * scale;
+2
drivers/regulator/core.c
··· 4165 4165 ret = rdev->desc->fixed_uV; 4166 4166 } else if (rdev->supply) { 4167 4167 ret = regulator_get_voltage_rdev(rdev->supply->rdev); 4168 + } else if (rdev->supply_name) { 4169 + return -EPROBE_DEFER; 4168 4170 } else { 4169 4171 return -EINVAL; 4170 4172 }
+12 -2
drivers/s390/crypto/ap_bus.c
··· 680 680 { 681 681 struct ap_device *ap_dev = to_ap_dev(dev); 682 682 struct ap_driver *ap_drv = to_ap_drv(dev->driver); 683 - int card, queue, devres, drvres, rc; 683 + int card, queue, devres, drvres, rc = -ENODEV; 684 + 685 + if (!get_device(dev)) 686 + return rc; 684 687 685 688 if (is_queue_dev(dev)) { 686 689 /* ··· 700 697 mutex_unlock(&ap_perms_mutex); 701 698 drvres = ap_drv->flags & AP_DRIVER_FLAG_DEFAULT; 702 699 if (!!devres != !!drvres) 703 - return -ENODEV; 700 + goto out; 704 701 } 705 702 706 703 /* Add queue/card to list of active queues/cards */ ··· 721 718 ap_dev->drv = NULL; 722 719 } 723 720 721 + out: 722 + if (rc) 723 + put_device(dev); 724 724 return rc; 725 725 } 726 726 ··· 749 743 if (is_queue_dev(dev)) 750 744 hash_del(&to_ap_queue(dev)->hnode); 751 745 spin_unlock_bh(&ap_queues_lock); 746 + 747 + put_device(dev); 752 748 753 749 return 0; 754 750 } ··· 1379 1371 __func__, ac->id, dom); 1380 1372 goto put_dev_and_continue; 1381 1373 } 1374 + /* get it and thus adjust reference counter */ 1375 + get_device(dev); 1382 1376 if (decfg) 1383 1377 AP_DBF_INFO("%s(%d,%d) new (decfg) queue device created\n", 1384 1378 __func__, ac->id, dom);
+16 -14
drivers/s390/crypto/pkey_api.c
··· 35 35 #define PROTKEYBLOBBUFSIZE 256 /* protected key buffer size used internal */ 36 36 #define MAXAPQNSINLIST 64 /* max 64 apqns within a apqn list */ 37 37 38 - /* mask of available pckmo subfunctions, fetched once at module init */ 39 - static cpacf_mask_t pckmo_functions; 40 - 41 38 /* 42 39 * debug feature data and functions 43 40 */ ··· 88 91 const struct pkey_clrkey *clrkey, 89 92 struct pkey_protkey *protkey) 90 93 { 94 + /* mask of available pckmo subfunctions */ 95 + static cpacf_mask_t pckmo_functions; 96 + 91 97 long fc; 92 98 int keysize; 93 99 u8 paramblock[64]; ··· 114 114 return -EINVAL; 115 115 } 116 116 117 - /* 118 - * Check if the needed pckmo subfunction is available. 119 - * These subfunctions can be enabled/disabled by customers 120 - * in the LPAR profile or may even change on the fly. 121 - */ 117 + /* Did we already check for PCKMO ? */ 118 + if (!pckmo_functions.bytes[0]) { 119 + /* no, so check now */ 120 + if (!cpacf_query(CPACF_PCKMO, &pckmo_functions)) 121 + return -ENODEV; 122 + } 123 + /* check for the pckmo subfunction we need now */ 122 124 if (!cpacf_test_func(&pckmo_functions, fc)) { 123 125 DEBUG_ERR("%s pckmo functions not available\n", __func__); 124 126 return -ENODEV; ··· 2060 2058 */ 2061 2059 static int __init pkey_init(void) 2062 2060 { 2063 - cpacf_mask_t kmc_functions; 2061 + cpacf_mask_t func_mask; 2064 2062 2065 2063 /* 2066 2064 * The pckmo instruction should be available - even if we don't ··· 2068 2066 * is also the minimum level for the kmc instructions which 2069 2067 * are able to work with protected keys. 
2070 2068 */ 2071 - if (!cpacf_query(CPACF_PCKMO, &pckmo_functions)) 2069 + if (!cpacf_query(CPACF_PCKMO, &func_mask)) 2072 2070 return -ENODEV; 2073 2071 2074 2072 /* check for kmc instructions available */ 2075 - if (!cpacf_query(CPACF_KMC, &kmc_functions)) 2073 + if (!cpacf_query(CPACF_KMC, &func_mask)) 2076 2074 return -ENODEV; 2077 - if (!cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_128) || 2078 - !cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_192) || 2079 - !cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_256)) 2075 + if (!cpacf_test_func(&func_mask, CPACF_KMC_PAES_128) || 2076 + !cpacf_test_func(&func_mask, CPACF_KMC_PAES_192) || 2077 + !cpacf_test_func(&func_mask, CPACF_KMC_PAES_256)) 2080 2078 return -ENODEV; 2081 2079 2082 2080 pkey_debug_init();
+8 -5
drivers/s390/crypto/zcrypt_card.c
··· 157 157 { 158 158 int rc; 159 159 160 - rc = sysfs_create_group(&zc->card->ap_dev.device.kobj, 161 - &zcrypt_card_attr_group); 162 - if (rc) 163 - return rc; 164 - 165 160 spin_lock(&zcrypt_list_lock); 166 161 list_add_tail(&zc->list, &zcrypt_card_list); 167 162 spin_unlock(&zcrypt_list_lock); ··· 164 169 zc->online = 1; 165 170 166 171 ZCRYPT_DBF(DBF_INFO, "card=%02x register online=1\n", zc->card->id); 172 + 173 + rc = sysfs_create_group(&zc->card->ap_dev.device.kobj, 174 + &zcrypt_card_attr_group); 175 + if (rc) { 176 + spin_lock(&zcrypt_list_lock); 177 + list_del_init(&zc->list); 178 + spin_unlock(&zcrypt_list_lock); 179 + } 167 180 168 181 return rc; 169 182 }
+1 -5
drivers/s390/crypto/zcrypt_queue.c
··· 180 180 &zcrypt_queue_attr_group); 181 181 if (rc) 182 182 goto out; 183 - get_device(&zq->queue->ap_dev.device); 184 183 185 184 if (zq->ops->rng) { 186 185 rc = zcrypt_rng_device_add(); ··· 191 192 out_unregister: 192 193 sysfs_remove_group(&zq->queue->ap_dev.device.kobj, 193 194 &zcrypt_queue_attr_group); 194 - put_device(&zq->queue->ap_dev.device); 195 195 out: 196 196 spin_lock(&zcrypt_list_lock); 197 197 list_del_init(&zq->list); ··· 218 220 list_del_init(&zq->list); 219 221 zcrypt_device_count--; 220 222 spin_unlock(&zcrypt_list_lock); 221 - zcrypt_card_put(zc); 222 223 if (zq->ops->rng) 223 224 zcrypt_rng_device_remove(); 224 225 sysfs_remove_group(&zq->queue->ap_dev.device.kobj, 225 226 &zcrypt_queue_attr_group); 226 - put_device(&zq->queue->ap_dev.device); 227 - zcrypt_queue_put(zq); 227 + zcrypt_card_put(zc); 228 228 } 229 229 EXPORT_SYMBOL(zcrypt_queue_unregister);
+5 -4
drivers/scsi/device_handler/scsi_dh_alua.c
··· 658 658 rcu_read_lock(); 659 659 list_for_each_entry_rcu(h, 660 660 &tmp_pg->dh_list, node) { 661 - /* h->sdev should always be valid */ 662 - BUG_ON(!h->sdev); 661 + if (!h->sdev) 662 + continue; 663 663 h->sdev->access_state = desc[0]; 664 664 } 665 665 rcu_read_unlock(); ··· 705 705 pg->expiry = 0; 706 706 rcu_read_lock(); 707 707 list_for_each_entry_rcu(h, &pg->dh_list, node) { 708 - BUG_ON(!h->sdev); 708 + if (!h->sdev) 709 + continue; 709 710 h->sdev->access_state = 710 711 (pg->state & SCSI_ACCESS_STATE_MASK); 711 712 if (pg->pref) ··· 1148 1147 spin_lock(&h->pg_lock); 1149 1148 pg = rcu_dereference_protected(h->pg, lockdep_is_held(&h->pg_lock)); 1150 1149 rcu_assign_pointer(h->pg, NULL); 1151 - h->sdev = NULL; 1152 1150 spin_unlock(&h->pg_lock); 1153 1151 if (pg) { 1154 1152 spin_lock_irq(&pg->lock); ··· 1156 1156 kref_put(&pg->kref, release_port_group); 1157 1157 } 1158 1158 sdev->handler_data = NULL; 1159 + synchronize_rcu(); 1159 1160 kfree(h); 1160 1161 } 1161 1162
+3 -1
drivers/scsi/hpsa.c
··· 8855 8855 /* hook into SCSI subsystem */ 8856 8856 rc = hpsa_scsi_add_host(h); 8857 8857 if (rc) 8858 - goto clean7; /* perf, sg, cmd, irq, shost, pci, lu, aer/h */ 8858 + goto clean8; /* lastlogicals, perf, sg, cmd, irq, shost, pci, lu, aer/h */ 8859 8859 8860 8860 /* Monitor the controller for firmware lockups */ 8861 8861 h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL; ··· 8870 8870 HPSA_EVENT_MONITOR_INTERVAL); 8871 8871 return 0; 8872 8872 8873 + clean8: /* lastlogicals, perf, sg, cmd, irq, shost, pci, lu, aer/h */ 8874 + kfree(h->lastlogicals); 8873 8875 clean7: /* perf, sg, cmd, irq, shost, pci, lu, aer/h */ 8874 8876 hpsa_free_performant_mode(h); 8875 8877 h->access.set_intr_mask(h, HPSA_INTR_OFF);
+7
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 1740 1740 reply_q->irq_poll_scheduled = false; 1741 1741 reply_q->irq_line_enable = true; 1742 1742 enable_irq(reply_q->os_irq); 1743 + /* 1744 + * Go for one more round of processing the 1745 + * reply descriptor post queue incase if HBA 1746 + * Firmware has posted some reply descriptors 1747 + * while reenabling the IRQ. 1748 + */ 1749 + _base_process_reply_queue(reply_q); 1743 1750 } 1744 1751 1745 1752 return num_entries;
+1 -14
drivers/spi/spi-bcm2835.c
··· 1193 1193 struct spi_controller *ctlr = spi->controller; 1194 1194 struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr); 1195 1195 struct gpio_chip *chip; 1196 - enum gpio_lookup_flags lflags; 1197 1196 u32 cs; 1198 1197 1199 1198 /* ··· 1258 1259 if (!chip) 1259 1260 return 0; 1260 1261 1261 - /* 1262 - * Retrieve the corresponding GPIO line used for CS. 1263 - * The inversion semantics will be handled by the GPIO core 1264 - * code, so we pass GPIOD_OUT_LOW for "unasserted" and 1265 - * the correct flag for inversion semantics. The SPI_CS_HIGH 1266 - * on spi->mode cannot be checked for polarity in this case 1267 - * as the flag use_gpio_descriptors enforces SPI_CS_HIGH. 1268 - */ 1269 - if (of_property_read_bool(spi->dev.of_node, "spi-cs-high")) 1270 - lflags = GPIO_ACTIVE_HIGH; 1271 - else 1272 - lflags = GPIO_ACTIVE_LOW; 1273 1262 spi->cs_gpiod = gpiochip_request_own_desc(chip, 8 - spi->chip_select, 1274 1263 DRV_NAME, 1275 - lflags, 1264 + GPIO_LOOKUP_FLAGS_DEFAULT, 1276 1265 GPIOD_OUT_LOW); 1277 1266 if (IS_ERR(spi->cs_gpiod)) 1278 1267 return PTR_ERR(spi->cs_gpiod);
+4 -6
drivers/spi/spi-fsl-dspi.c
··· 1080 1080 #ifdef CONFIG_PM_SLEEP 1081 1081 static int dspi_suspend(struct device *dev) 1082 1082 { 1083 - struct spi_controller *ctlr = dev_get_drvdata(dev); 1084 - struct fsl_dspi *dspi = spi_controller_get_devdata(ctlr); 1083 + struct fsl_dspi *dspi = dev_get_drvdata(dev); 1085 1084 1086 1085 if (dspi->irq) 1087 1086 disable_irq(dspi->irq); 1088 - spi_controller_suspend(ctlr); 1087 + spi_controller_suspend(dspi->ctlr); 1089 1088 clk_disable_unprepare(dspi->clk); 1090 1089 1091 1090 pinctrl_pm_select_sleep_state(dev); ··· 1094 1095 1095 1096 static int dspi_resume(struct device *dev) 1096 1097 { 1097 - struct spi_controller *ctlr = dev_get_drvdata(dev); 1098 - struct fsl_dspi *dspi = spi_controller_get_devdata(ctlr); 1098 + struct fsl_dspi *dspi = dev_get_drvdata(dev); 1099 1099 int ret; 1100 1100 1101 1101 pinctrl_pm_select_default_state(dev); ··· 1102 1104 ret = clk_prepare_enable(dspi->clk); 1103 1105 if (ret) 1104 1106 return ret; 1105 - spi_controller_resume(ctlr); 1107 + spi_controller_resume(dspi->ctlr); 1106 1108 if (dspi->irq) 1107 1109 enable_irq(dspi->irq); 1108 1110
+15 -8
drivers/spi/spi-imx.c
··· 1676 1676 goto out_master_put; 1677 1677 } 1678 1678 1679 - pm_runtime_enable(spi_imx->dev); 1679 + ret = clk_prepare_enable(spi_imx->clk_per); 1680 + if (ret) 1681 + goto out_master_put; 1682 + 1683 + ret = clk_prepare_enable(spi_imx->clk_ipg); 1684 + if (ret) 1685 + goto out_put_per; 1686 + 1680 1687 pm_runtime_set_autosuspend_delay(spi_imx->dev, MXC_RPM_TIMEOUT); 1681 1688 pm_runtime_use_autosuspend(spi_imx->dev); 1682 - 1683 - ret = pm_runtime_get_sync(spi_imx->dev); 1684 - if (ret < 0) { 1685 - dev_err(spi_imx->dev, "failed to enable clock\n"); 1686 - goto out_runtime_pm_put; 1687 - } 1689 + pm_runtime_set_active(spi_imx->dev); 1690 + pm_runtime_enable(spi_imx->dev); 1688 1691 1689 1692 spi_imx->spi_clk = clk_get_rate(spi_imx->clk_per); 1690 1693 /* ··· 1725 1722 spi_imx_sdma_exit(spi_imx); 1726 1723 out_runtime_pm_put: 1727 1724 pm_runtime_dont_use_autosuspend(spi_imx->dev); 1728 - pm_runtime_put_sync(spi_imx->dev); 1725 + pm_runtime_set_suspended(&pdev->dev); 1729 1726 pm_runtime_disable(spi_imx->dev); 1727 + 1728 + clk_disable_unprepare(spi_imx->clk_ipg); 1729 + out_put_per: 1730 + clk_disable_unprepare(spi_imx->clk_per); 1730 1731 out_master_put: 1731 1732 spi_master_put(master); 1732 1733
+1 -1
drivers/staging/wfx/Documentation/devicetree/bindings/net/wireless/silabs,wfx.yaml
··· 24 24 In addition, it is recommended to declare a mmc-pwrseq on SDIO host above 25 25 WFx. Without it, you may encounter issues with warm boot. The mmc-pwrseq 26 26 should be compatible with mmc-pwrseq-simple. Please consult 27 - Documentation/devicetree/bindings/mmc/mmc-pwrseq-simple.txt for more 27 + Documentation/devicetree/bindings/mmc/mmc-pwrseq-simple.yaml for more 28 28 information. 29 29 30 30 For SPI':'
+1 -1
drivers/tty/serial/8250/8250_mtk.c
··· 317 317 */ 318 318 baud = tty_termios_baud_rate(termios); 319 319 320 - serial8250_do_set_termios(port, termios, old); 320 + serial8250_do_set_termios(port, termios, NULL); 321 321 322 322 tty_termios_encode_baud_rate(termios, baud, baud); 323 323
+1
drivers/tty/serial/Kconfig
··· 522 522 depends on OF 523 523 select SERIAL_EARLYCON 524 524 select SERIAL_CORE_CONSOLE 525 + default y if SERIAL_IMX_CONSOLE 525 526 help 526 527 If you have enabled the earlycon on the Freescale IMX 527 528 CPU you can make it the earlycon by answering Y to this option.
+3
drivers/tty/serial/serial_txx9.c
··· 1280 1280 1281 1281 #ifdef ENABLE_SERIAL_TXX9_PCI 1282 1282 ret = pci_register_driver(&serial_txx9_pci_driver); 1283 + if (ret) { 1284 + platform_driver_unregister(&serial_txx9_plat_driver); 1285 + } 1283 1286 #endif 1284 1287 if (ret == 0) 1285 1288 goto out;
+4 -2
drivers/tty/tty_io.c
··· 1515 1515 tty->ops->shutdown(tty); 1516 1516 tty_save_termios(tty); 1517 1517 tty_driver_remove_tty(tty->driver, tty); 1518 - tty->port->itty = NULL; 1518 + if (tty->port) 1519 + tty->port->itty = NULL; 1519 1520 if (tty->link) 1520 1521 tty->link->port->itty = NULL; 1521 - tty_buffer_cancel_work(tty->port); 1522 + if (tty->port) 1523 + tty_buffer_cancel_work(tty->port); 1522 1524 if (tty->link) 1523 1525 tty_buffer_cancel_work(tty->link->port); 1524 1526
+2 -22
drivers/tty/vt/vt.c
··· 4704 4704 return rc; 4705 4705 } 4706 4706 4707 - static int con_font_copy(struct vc_data *vc, struct console_font_op *op) 4708 - { 4709 - int con = op->height; 4710 - int rc; 4711 - 4712 - 4713 - console_lock(); 4714 - if (vc->vc_mode != KD_TEXT) 4715 - rc = -EINVAL; 4716 - else if (!vc->vc_sw->con_font_copy) 4717 - rc = -ENOSYS; 4718 - else if (con < 0 || !vc_cons_allocated(con)) 4719 - rc = -ENOTTY; 4720 - else if (con == vc->vc_num) /* nothing to do */ 4721 - rc = 0; 4722 - else 4723 - rc = vc->vc_sw->con_font_copy(vc, con); 4724 - console_unlock(); 4725 - return rc; 4726 - } 4727 - 4728 4707 int con_font_op(struct vc_data *vc, struct console_font_op *op) 4729 4708 { 4730 4709 switch (op->op) { ··· 4714 4735 case KD_FONT_OP_SET_DEFAULT: 4715 4736 return con_font_default(vc, op); 4716 4737 case KD_FONT_OP_COPY: 4717 - return con_font_copy(vc, op); 4738 + /* was buggy and never really used */ 4739 + return -EINVAL; 4718 4740 } 4719 4741 return -ENOSYS; 4720 4742 }
+19 -17
drivers/tty/vt/vt_ioctl.c
··· 484 484 return 0; 485 485 } 486 486 487 - static inline int do_fontx_ioctl(int cmd, 487 + static inline int do_fontx_ioctl(struct vc_data *vc, int cmd, 488 488 struct consolefontdesc __user *user_cfd, 489 489 struct console_font_op *op) 490 490 { ··· 502 502 op->height = cfdarg.charheight; 503 503 op->charcount = cfdarg.charcount; 504 504 op->data = cfdarg.chardata; 505 - return con_font_op(vc_cons[fg_console].d, op); 506 - case GIO_FONTX: { 505 + return con_font_op(vc, op); 506 + 507 + case GIO_FONTX: 507 508 op->op = KD_FONT_OP_GET; 508 509 op->flags = KD_FONT_FLAG_OLD; 509 510 op->width = 8; 510 511 op->height = cfdarg.charheight; 511 512 op->charcount = cfdarg.charcount; 512 513 op->data = cfdarg.chardata; 513 - i = con_font_op(vc_cons[fg_console].d, op); 514 + i = con_font_op(vc, op); 514 515 if (i) 515 516 return i; 516 517 cfdarg.charheight = op->height; ··· 519 518 if (copy_to_user(user_cfd, &cfdarg, sizeof(struct consolefontdesc))) 520 519 return -EFAULT; 521 520 return 0; 522 - } 523 521 } 524 522 return -EINVAL; 525 523 } 526 524 527 - static int vt_io_fontreset(struct console_font_op *op) 525 + static int vt_io_fontreset(struct vc_data *vc, struct console_font_op *op) 528 526 { 529 527 int ret; 530 528 ··· 537 537 538 538 op->op = KD_FONT_OP_SET_DEFAULT; 539 539 op->data = NULL; 540 - ret = con_font_op(vc_cons[fg_console].d, op); 540 + ret = con_font_op(vc, op); 541 541 if (ret) 542 542 return ret; 543 543 544 544 console_lock(); 545 - con_set_default_unimap(vc_cons[fg_console].d); 545 + con_set_default_unimap(vc); 546 546 console_unlock(); 547 547 548 548 return 0; ··· 584 584 op.height = 0; 585 585 op.charcount = 256; 586 586 op.data = up; 587 - return con_font_op(vc_cons[fg_console].d, &op); 587 + return con_font_op(vc, &op); 588 588 589 589 case GIO_FONT: 590 590 op.op = KD_FONT_OP_GET; ··· 593 593 op.height = 32; 594 594 op.charcount = 256; 595 595 op.data = up; 596 - return con_font_op(vc_cons[fg_console].d, &op); 596 + return con_font_op(vc, &op);
597 597 598 598 case PIO_CMAP: 599 599 if (!perm) ··· 609 609 610 610 fallthrough; 611 611 case GIO_FONTX: 612 - return do_fontx_ioctl(cmd, up, &op); 612 + return do_fontx_ioctl(vc, cmd, up, &op); 613 613 614 614 case PIO_FONTRESET: 615 615 if (!perm) 616 616 return -EPERM; 617 617 618 - return vt_io_fontreset(&op); 618 + return vt_io_fontreset(vc, &op); 619 619 620 620 case PIO_SCRNMAP: 621 621 if (!perm) ··· 1066 1066 }; 1067 1067 1068 1068 static inline int 1069 - compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd, 1070 - int perm, struct console_font_op *op) 1069 + compat_fontx_ioctl(struct vc_data *vc, int cmd, 1070 + struct compat_consolefontdesc __user *user_cfd, 1071 + int perm, struct console_font_op *op) 1071 1072 { 1072 1073 struct compat_consolefontdesc cfdarg; 1073 1074 int i; ··· 1086 1085 op->height = cfdarg.charheight; 1087 1086 op->charcount = cfdarg.charcount; 1088 1087 op->data = compat_ptr(cfdarg.chardata); 1089 - return con_font_op(vc_cons[fg_console].d, op); 1088 + return con_font_op(vc, op); 1089 + 1090 1090 case GIO_FONTX: 1091 1091 op->op = KD_FONT_OP_GET; 1092 1092 op->flags = KD_FONT_FLAG_OLD; ··· 1095 1093 op->height = cfdarg.charheight; 1096 1094 op->charcount = cfdarg.charcount; 1097 1095 op->data = compat_ptr(cfdarg.chardata); 1098 - i = con_font_op(vc_cons[fg_console].d, op); 1096 + i = con_font_op(vc, op); 1099 1097 if (i) 1100 1098 return i; 1101 1099 cfdarg.charheight = op->height; ··· 1185 1183 */ 1186 1184 case PIO_FONTX: 1187 1185 case GIO_FONTX: 1188 - return compat_fontx_ioctl(cmd, up, perm, &op); 1186 + return compat_fontx_ioctl(vc, cmd, up, perm, &op); 1189 1187 1190 1188 case KDFONTOP: 1191 1189 return compat_kdfontop_ioctl(up, perm, &op, vc);
+3
drivers/usb/core/quirks.c
··· 378 378 { USB_DEVICE(0x0926, 0x3333), .driver_info = 379 379 USB_QUIRK_CONFIG_INTF_STRINGS }, 380 380 381 + /* Kingston DataTraveler 3.0 */ 382 + { USB_DEVICE(0x0951, 0x1666), .driver_info = USB_QUIRK_NO_LPM }, 383 + 381 384 /* X-Rite/Gretag-Macbeth Eye-One Pro display colorimeter */ 382 385 { USB_DEVICE(0x0971, 0x2000), .driver_info = USB_QUIRK_NO_SET_INTF }, 383 386
+3
drivers/usb/dwc2/platform.c
··· 608 608 #endif /* CONFIG_USB_DWC2_PERIPHERAL || CONFIG_USB_DWC2_DUAL_ROLE */ 609 609 return 0; 610 610 611 + #if IS_ENABLED(CONFIG_USB_DWC2_PERIPHERAL) || \ 612 + IS_ENABLED(CONFIG_USB_DWC2_DUAL_ROLE) 611 613 error_debugfs: 612 614 dwc2_debugfs_exit(hsotg); 613 615 if (hsotg->hcd_enabled) 614 616 dwc2_hcd_remove(hsotg); 617 + #endif 615 618 error_drd: 616 619 dwc2_drd_exit(hsotg); 617 620
+4
drivers/usb/dwc3/dwc3-pci.c
··· 40 40 #define PCI_DEVICE_ID_INTEL_TGPLP 0xa0ee 41 41 #define PCI_DEVICE_ID_INTEL_TGPH 0x43ee 42 42 #define PCI_DEVICE_ID_INTEL_JSP 0x4dee 43 + #define PCI_DEVICE_ID_INTEL_ADLS 0x7ae1 43 44 44 45 #define PCI_INTEL_BXT_DSM_GUID "732b85d5-b7a7-4a1b-9ba0-4bbd00ffd511" 45 46 #define PCI_INTEL_BXT_FUNC_PMU_PWR 4 ··· 366 365 (kernel_ulong_t) &dwc3_pci_intel_properties, }, 367 366 368 367 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_JSP), 368 + (kernel_ulong_t) &dwc3_pci_intel_properties, }, 369 + 370 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADLS), 369 371 (kernel_ulong_t) &dwc3_pci_intel_properties, }, 370 372 371 373 { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_NL_USB),
+2 -1
drivers/usb/dwc3/ep0.c
··· 1058 1058 { 1059 1059 unsigned int direction = !dwc->ep0_expect_in; 1060 1060 1061 + dwc->delayed_status = false; 1062 + 1061 1063 if (dwc->ep0state != EP0_STATUS_PHASE) 1062 1064 return; 1063 1065 1064 - dwc->delayed_status = false; 1065 1066 __dwc3_ep0_do_control_status(dwc, dwc->eps[direction]); 1066 1067 } 1067 1068
+4 -1
drivers/usb/gadget/legacy/raw_gadget.c
··· 564 564 return -ENODEV; 565 565 } 566 566 length = min(arg.length, event->length); 567 - if (copy_to_user((void __user *)value, event, sizeof(*event) + length)) 567 + if (copy_to_user((void __user *)value, event, sizeof(*event) + length)) { 568 + kfree(event); 568 569 return -EFAULT; 570 + } 569 571 572 + kfree(event); 570 573 return 0; 571 574 } 572 575
+1 -1
drivers/usb/gadget/udc/fsl_udc_core.c
··· 1051 1051 u32 bitmask; 1052 1052 struct ep_queue_head *qh; 1053 1053 1054 - if (!_ep || _ep->desc || !(_ep->desc->bEndpointAddress&0xF)) 1054 + if (!_ep || !_ep->desc || !(_ep->desc->bEndpointAddress&0xF)) 1055 1055 return -ENODEV; 1056 1056 1057 1057 ep = container_of(_ep, struct fsl_ep, ep);
+1 -1
drivers/usb/gadget/udc/goku_udc.c
··· 1760 1760 goto err; 1761 1761 } 1762 1762 1763 + pci_set_drvdata(pdev, dev); 1763 1764 spin_lock_init(&dev->lock); 1764 1765 dev->pdev = pdev; 1765 1766 dev->gadget.ops = &goku_ops; ··· 1794 1793 } 1795 1794 dev->regs = (struct goku_udc_regs __iomem *) base; 1796 1795 1797 - pci_set_drvdata(pdev, dev); 1798 1796 INFO(dev, "%s\n", driver_desc); 1799 1797 INFO(dev, "version: " DRIVER_VERSION " %s\n", dmastr()); 1800 1798 INFO(dev, "irq %d, pci mem %p\n", pdev->irq, base);
+3 -1
drivers/usb/misc/apple-mfi-fastcharge.c
··· 120 120 dev_dbg(&mfi->udev->dev, "prop: %d\n", psp); 121 121 122 122 ret = pm_runtime_get_sync(&mfi->udev->dev); 123 - if (ret < 0) 123 + if (ret < 0) { 124 + pm_runtime_put_noidle(&mfi->udev->dev); 124 125 return ret; 126 + } 125 127 126 128 switch (psp) { 127 129 case POWER_SUPPLY_PROP_CHARGE_TYPE:
+1
drivers/usb/mtu3/mtu3_gadget.c
··· 564 564 565 565 spin_unlock_irqrestore(&mtu->lock, flags); 566 566 567 + synchronize_irq(mtu->irq); 567 568 return 0; 568 569 } 569 570
+6 -1
drivers/usb/serial/cyberjack.c
··· 357 357 struct device *dev = &port->dev; 358 358 int status = urb->status; 359 359 unsigned long flags; 360 + bool resubmitted = false; 360 361 361 - set_bit(0, &port->write_urbs_free); 362 362 if (status) { 363 363 dev_dbg(dev, "%s - nonzero write bulk status received: %d\n", 364 364 __func__, status); 365 + set_bit(0, &port->write_urbs_free); 365 366 return; 366 367 } 367 368 ··· 395 394 goto exit; 396 395 } 397 396 397 + resubmitted = true; 398 + 398 399 dev_dbg(dev, "%s - priv->wrsent=%d\n", __func__, priv->wrsent); 399 400 dev_dbg(dev, "%s - priv->wrfilled=%d\n", __func__, priv->wrfilled); 400 401 ··· 413 410 414 411 exit: 415 412 spin_unlock_irqrestore(&priv->lock, flags); 413 + if (!resubmitted) 414 + set_bit(0, &port->write_urbs_free); 416 415 usb_serial_port_softint(port); 417 416 } 418 417
+10
drivers/usb/serial/option.c
··· 250 250 #define QUECTEL_PRODUCT_EP06 0x0306 251 251 #define QUECTEL_PRODUCT_EM12 0x0512 252 252 #define QUECTEL_PRODUCT_RM500Q 0x0800 253 + #define QUECTEL_PRODUCT_EC200T 0x6026 253 254 254 255 #define CMOTECH_VENDOR_ID 0x16d8 255 256 #define CMOTECH_PRODUCT_6001 0x6001 ··· 1118 1117 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) }, 1119 1118 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10), 1120 1119 .driver_info = ZLP }, 1120 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) }, 1121 1121 1122 1122 { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) }, 1123 1123 { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) }, ··· 1191 1189 .driver_info = NCTRL(0) | RSVD(1) }, 1192 1190 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1054, 0xff), /* Telit FT980-KS */ 1193 1191 .driver_info = NCTRL(2) | RSVD(3) }, 1192 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1055, 0xff), /* Telit FN980 (PCIe) */ 1193 + .driver_info = NCTRL(0) | RSVD(1) }, 1194 1194 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), 1195 1195 .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, 1196 1196 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM), ··· 1205 1201 .driver_info = NCTRL(0) }, 1206 1202 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910), 1207 1203 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1204 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1203, 0xff), /* Telit LE910Cx (RNDIS) */ 1205 + .driver_info = NCTRL(2) | RSVD(3) }, 1208 1206 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4), 1209 1207 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) }, 1210 1208 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920), ··· 1221 1215 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1213, 0xff) }, 1222 1216 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1214), 1223 1217 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) }, 1218 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1230, 0xff), /* Telit LE910Cx (rmnet) */ 1219 + .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1220 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1231, 0xff), /* Telit LE910Cx (RNDIS) */ 1221 + .driver_info = NCTRL(2) | RSVD(3) }, 1224 1222 { USB_DEVICE(TELIT_VENDOR_ID, 0x1260), 1225 1223 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1226 1224 { USB_DEVICE(TELIT_VENDOR_ID, 0x1261),
+7 -3
drivers/vfio/fsl-mc/vfio_fsl_mc.c
··· 248 248 info.size = vdev->regions[info.index].size; 249 249 info.flags = vdev->regions[info.index].flags; 250 250 251 - return copy_to_user((void __user *)arg, &info, minsz); 251 + if (copy_to_user((void __user *)arg, &info, minsz)) 252 + return -EFAULT; 253 + return 0; 252 254 } 253 255 case VFIO_DEVICE_GET_IRQ_INFO: 254 256 { ··· 269 267 info.flags = VFIO_IRQ_INFO_EVENTFD; 270 268 info.count = 1; 271 269 272 - return copy_to_user((void __user *)arg, &info, minsz); 270 + if (copy_to_user((void __user *)arg, &info, minsz)) 271 + return -EFAULT; 272 + return 0; 273 273 } 274 274 case VFIO_DEVICE_SET_IRQS: 275 275 { ··· 472 468 { 473 469 struct vfio_fsl_mc_device *vdev = device_data; 474 470 struct fsl_mc_device *mc_dev = vdev->mc_dev; 475 - int index; 471 + unsigned int index; 476 472 477 473 index = vma->vm_pgoff >> (VFIO_FSL_MC_OFFSET_SHIFT - PAGE_SHIFT); 478 474
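The vfio_fsl_mc ioctl fix above hinges on copy_to_user() returning the number of bytes it could *not* copy (0 on success), not an errno; returning that count directly hands a positive value back to userspace instead of -EFAULT. A small userspace model of the two return conventions (toy helpers with hypothetical names):

```c
#include <assert.h>
#include <string.h>

#define EFAULT 14

static char ubuf[8];	/* stand-in for the userspace destination */

/* Mimics copy_to_user(): returns bytes NOT copied, 0 on success. */
static unsigned long toy_copy_to_user(const char *src, unsigned long n,
				      unsigned long copied)
{
	memcpy(ubuf, src, copied);
	return n - copied;	/* shortfall, as the kernel helper reports it */
}

/* Buggy pattern: leaks a positive "bytes missing" count to the caller. */
static long ioctl_buggy(const char *src, unsigned long n, unsigned long copied)
{
	return toy_copy_to_user(src, n, copied);
}

/* Fixed pattern, as in the patch: any shortfall becomes -EFAULT. */
static long ioctl_fixed(const char *src, unsigned long n, unsigned long copied)
{
	if (toy_copy_to_user(src, n, copied))
		return -EFAULT;
	return 0;
}
```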
+1 -1
drivers/vfio/fsl-mc/vfio_fsl_mc_intr.c
··· 13 13 #include "linux/fsl/mc.h" 14 14 #include "vfio_fsl_mc_private.h" 15 15 16 - int vfio_fsl_mc_irqs_allocate(struct vfio_fsl_mc_device *vdev) 16 + static int vfio_fsl_mc_irqs_allocate(struct vfio_fsl_mc_device *vdev) 17 17 { 18 18 struct fsl_mc_device *mc_dev = vdev->mc_dev; 19 19 struct vfio_fsl_mc_irq *mc_irq;
+1 -1
drivers/vfio/pci/vfio_pci.c
··· 385 385 pdev->vendor == PCI_VENDOR_ID_INTEL && 386 386 IS_ENABLED(CONFIG_VFIO_PCI_IGD)) { 387 387 ret = vfio_pci_igd_init(vdev); 388 - if (ret) { 388 + if (ret && ret != -ENODEV) { 389 389 pci_warn(pdev, "Failed to setup Intel IGD regions\n"); 390 390 goto disable_exit; 391 391 }
+35 -8
drivers/vfio/pci/vfio_pci_rdwr.c
··· 356 356 return done; 357 357 } 358 358 359 - static int vfio_pci_ioeventfd_handler(void *opaque, void *unused) 359 + static void vfio_pci_ioeventfd_do_write(struct vfio_pci_ioeventfd *ioeventfd, 360 + bool test_mem) 360 361 { 361 - struct vfio_pci_ioeventfd *ioeventfd = opaque; 362 - 363 362 switch (ioeventfd->count) { 364 363 case 1: 365 - vfio_pci_iowrite8(ioeventfd->vdev, ioeventfd->test_mem, 364 + vfio_pci_iowrite8(ioeventfd->vdev, test_mem, 366 365 ioeventfd->data, ioeventfd->addr); 367 366 break; 368 367 case 2: 369 - vfio_pci_iowrite16(ioeventfd->vdev, ioeventfd->test_mem, 368 + vfio_pci_iowrite16(ioeventfd->vdev, test_mem, 370 369 ioeventfd->data, ioeventfd->addr); 371 370 break; 372 371 case 4: 373 - vfio_pci_iowrite32(ioeventfd->vdev, ioeventfd->test_mem, 372 + vfio_pci_iowrite32(ioeventfd->vdev, test_mem, 374 373 ioeventfd->data, ioeventfd->addr); 375 374 break; 376 375 #ifdef iowrite64 377 376 case 8: 378 - vfio_pci_iowrite64(ioeventfd->vdev, ioeventfd->test_mem, 377 + vfio_pci_iowrite64(ioeventfd->vdev, test_mem, 379 378 ioeventfd->data, ioeventfd->addr); 380 379 break; 381 380 #endif 382 381 } 382 + } 383 + 384 + static int vfio_pci_ioeventfd_handler(void *opaque, void *unused) 385 + { 386 + struct vfio_pci_ioeventfd *ioeventfd = opaque; 387 + struct vfio_pci_device *vdev = ioeventfd->vdev; 388 + 389 + if (ioeventfd->test_mem) { 390 + if (!down_read_trylock(&vdev->memory_lock)) 391 + return 1; /* Lock contended, use thread */ 392 + if (!__vfio_pci_memory_enabled(vdev)) { 393 + up_read(&vdev->memory_lock); 394 + return 0; 395 + } 396 + } 397 + 398 + vfio_pci_ioeventfd_do_write(ioeventfd, false); 399 + 400 + if (ioeventfd->test_mem) 401 + up_read(&vdev->memory_lock); 383 402 384 403 return 0; 404 + } 405 + 406 + static void vfio_pci_ioeventfd_thread(void *opaque, void *unused) 407 + { 408 + struct vfio_pci_ioeventfd *ioeventfd = opaque; 409 + 410 + vfio_pci_ioeventfd_do_write(ioeventfd, ioeventfd->test_mem); 385 411 } 386 412 387 413 long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset, ··· 483 457 ioeventfd->test_mem = vdev->pdev->resource[bar].flags & IORESOURCE_MEM; 484 458 485 459 ret = vfio_virqfd_enable(ioeventfd, vfio_pci_ioeventfd_handler, 486 - NULL, NULL, &ioeventfd->virqfd, fd); 460 + vfio_pci_ioeventfd_thread, NULL, 461 + &ioeventfd->virqfd, fd); 487 462 if (ret) { 488 463 kfree(ioeventfd); 489 464 goto out_unlock;
+1 -2
drivers/vfio/platform/vfio_platform_common.c
··· 267 267 268 268 ret = pm_runtime_get_sync(vdev->device); 269 269 if (ret < 0) 270 - goto err_pm; 270 + goto err_rst; 271 271 272 272 ret = vfio_platform_call_reset(vdev, &extra_dbg); 273 273 if (ret && vdev->reset_required) { ··· 284 284 285 285 err_rst: 286 286 pm_runtime_put(vdev->device); 287 - err_pm: 288 287 vfio_platform_irq_cleanup(vdev); 289 288 err_irq: 290 289 vfio_platform_regions_cleanup(vdev);
+5 -12
drivers/vfio/vfio_iommu_type1.c
··· 1993 1993 1994 1994 list_splice_tail(iova_copy, iova); 1995 1995 } 1996 + 1996 1997 static int vfio_iommu_type1_attach_group(void *iommu_data, 1997 1998 struct iommu_group *iommu_group) 1998 1999 { ··· 2010 2009 2011 2010 mutex_lock(&iommu->lock); 2012 2011 2013 - list_for_each_entry(d, &iommu->domain_list, next) { 2014 - if (find_iommu_group(d, iommu_group)) { 2015 - mutex_unlock(&iommu->lock); 2016 - return -EINVAL; 2017 - } 2018 - } 2019 - 2020 - if (iommu->external_domain) { 2021 - if (find_iommu_group(iommu->external_domain, iommu_group)) { 2022 - mutex_unlock(&iommu->lock); 2023 - return -EINVAL; 2024 - } 2012 + /* Check for duplicates */ 2013 + if (vfio_iommu_find_iommu_group(iommu, iommu_group)) { 2014 + mutex_unlock(&iommu->lock); 2015 + return -EINVAL; 2025 2016 } 2026 2017 2027 2018 group = kzalloc(sizeof(*group), GFP_KERNEL);
+1 -6
fs/afs/xattr.c
··· 148 148 .set = afs_xattr_set_acl, 149 149 }; 150 150 151 - static void yfs_acl_put(struct afs_operation *op) 152 - { 153 - yfs_free_opaque_acl(op->yacl); 154 - } 155 - 156 151 static const struct afs_operation_ops yfs_fetch_opaque_acl_operation = { 157 152 .issue_yfs_rpc = yfs_fs_fetch_opaque_acl, 158 153 .success = afs_acl_success, ··· 241 246 static const struct afs_operation_ops yfs_store_opaque_acl2_operation = { 242 247 .issue_yfs_rpc = yfs_fs_store_opaque_acl2, 243 248 .success = afs_acl_success, 244 - .put = yfs_acl_put, 249 + .put = afs_acl_put, 245 250 }; 246 251 247 252 /*
+1
fs/afs/yfsclient.c
··· 1990 1990 memcpy(bp, acl->data, acl->size); 1991 1991 if (acl->size != size) 1992 1992 memset((void *)bp + acl->size, 0, size - acl->size); 1993 + bp += size / sizeof(__be32); 1993 1994 yfs_check_req(call, bp); 1994 1995 1995 1996 trace_afs_make_fs_call(call, &vp->fid);
+1 -1
fs/ceph/caps.c
··· 4074 4074 vino.snap, inode); 4075 4075 4076 4076 mutex_lock(&session->s_mutex); 4077 - session->s_seq++; 4077 + inc_session_sequence(session); 4078 4078 dout(" mds%d seq %lld cap seq %u\n", session->s_mds, session->s_seq, 4079 4079 (unsigned)seq); 4080 4080
+35 -15
fs/ceph/mds_client.c
··· 4231 4231 dname.len, dname.name); 4232 4232 4233 4233 mutex_lock(&session->s_mutex); 4234 - session->s_seq++; 4234 + inc_session_sequence(session); 4235 4235 4236 4236 if (!inode) { 4237 4237 dout("handle_lease no inode %llx\n", vino.ino); ··· 4385 4385 4386 4386 bool check_session_state(struct ceph_mds_session *s) 4387 4387 { 4388 - if (s->s_state == CEPH_MDS_SESSION_CLOSING) { 4389 - dout("resending session close request for mds%d\n", 4390 - s->s_mds); 4391 - request_close_session(s); 4392 - return false; 4393 - } 4394 - if (s->s_ttl && time_after(jiffies, s->s_ttl)) { 4395 - if (s->s_state == CEPH_MDS_SESSION_OPEN) { 4388 + switch (s->s_state) { 4389 + case CEPH_MDS_SESSION_OPEN: 4390 + if (s->s_ttl && time_after(jiffies, s->s_ttl)) { 4396 4391 s->s_state = CEPH_MDS_SESSION_HUNG; 4397 4392 pr_info("mds%d hung\n", s->s_mds); 4398 4393 } 4399 - } 4400 - if (s->s_state == CEPH_MDS_SESSION_NEW || 4401 - s->s_state == CEPH_MDS_SESSION_RESTARTING || 4402 - s->s_state == CEPH_MDS_SESSION_CLOSED || 4403 - s->s_state == CEPH_MDS_SESSION_REJECTED) 4394 + break; 4395 + case CEPH_MDS_SESSION_CLOSING: 4396 + /* Should never reach this when we're unmounting */ 4397 + WARN_ON_ONCE(true); 4398 + fallthrough; 4399 + case CEPH_MDS_SESSION_NEW: 4400 + case CEPH_MDS_SESSION_RESTARTING: 4401 + case CEPH_MDS_SESSION_CLOSED: 4402 + case CEPH_MDS_SESSION_REJECTED: 4405 4403 return false; 4404 + } 4406 4405 4407 4406 return true; 4407 + } 4408 + 4409 + /* 4410 + * If the sequence is incremented while we're waiting on a REQUEST_CLOSE reply, 4411 + * then we need to retransmit that request. 4412 + */ 4413 + void inc_session_sequence(struct ceph_mds_session *s) 4414 + { 4415 + lockdep_assert_held(&s->s_mutex); 4416 + 4417 + s->s_seq++; 4418 + 4419 + if (s->s_state == CEPH_MDS_SESSION_CLOSING) { 4420 + int ret; 4421 + 4422 + dout("resending session close request for mds%d\n", s->s_mds); 4423 + ret = request_close_session(s); 4424 + if (ret < 0) 4425 + pr_err("unable to close session to mds%d: %d\n", 4426 + s->s_mds, ret); 4427 + } 4408 4428 } 4409 4429 4410 4430 /*
+1
fs/ceph/mds_client.h
··· 480 480 extern const char *ceph_mds_op_name(int op); 481 481 482 482 extern bool check_session_state(struct ceph_mds_session *s); 483 + void inc_session_sequence(struct ceph_mds_session *s); 483 484 484 485 extern struct ceph_mds_session * 485 486 __ceph_lookup_mds_session(struct ceph_mds_client *, int mds);
+1 -1
fs/ceph/quota.c
··· 53 53 54 54 /* increment msg sequence number */ 55 55 mutex_lock(&session->s_mutex); 56 - session->s_seq++; 56 + inc_session_sequence(session); 57 57 mutex_unlock(&session->s_mutex); 58 58 59 59 /* lookup inode */
+1 -1
fs/ceph/snap.c
··· 873 873 ceph_snap_op_name(op), split, trace_len); 874 874 875 875 mutex_lock(&session->s_mutex); 876 - session->s_seq++; 876 + inc_session_sequence(session); 877 877 mutex_unlock(&session->s_mutex); 878 878 879 879 down_write(&mdsc->snap_rwsem);
+2 -1
fs/gfs2/glock.c
··· 1078 1078 out_free: 1079 1079 kfree(gl->gl_lksb.sb_lvbptr); 1080 1080 kmem_cache_free(cachep, gl); 1081 - atomic_dec(&sdp->sd_glock_disposal); 1081 + if (atomic_dec_and_test(&sdp->sd_glock_disposal)) 1082 + wake_up(&sdp->sd_glock_wait); 1082 1083 1083 1084 out: 1084 1085 return ret;
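The gfs2 glock hunk above swaps a plain atomic_dec() for atomic_dec_and_test() so that whichever path drops sd_glock_disposal to zero also issues the wake_up(); without it, a waiter on sd_glock_wait can miss the final decrement. The dec-and-test contract can be modelled with C11 atomics (toy names, not kernel code):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Models atomic_dec_and_test(): decrement and report whether the
 * counter just reached zero. fetch_sub returns the old value, so
 * old == 1 means this caller performed the final decrement. */
static bool toy_dec_and_test(atomic_int *v)
{
	return atomic_fetch_sub(v, 1) == 1;
}

/* Drop `refs` references; count how many droppers saw "last one out"
 * and would therefore issue the wake_up(). */
static int count_wakeups(int refs)
{
	atomic_int disposal;
	int wakeups = 0;

	atomic_init(&disposal, refs);
	for (int i = 0; i < refs; i++)
		if (toy_dec_and_test(&disposal))
			wakeups++;
	return wakeups;
}
```

Exactly one dropper observes the transition to zero, so the wakeup fires once and cannot be skipped.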
+45 -11
fs/gfs2/glops.c
··· 165 165 } 166 166 167 167 /** 168 + * gfs2_rgrp_metasync - sync out the metadata of a resource group 169 + * @gl: the glock protecting the resource group 170 + * 171 + */ 172 + 173 + static int gfs2_rgrp_metasync(struct gfs2_glock *gl) 174 + { 175 + struct gfs2_sbd *sdp = gl->gl_name.ln_sbd; 176 + struct address_space *metamapping = &sdp->sd_aspace; 177 + struct gfs2_rgrpd *rgd = gfs2_glock2rgrp(gl); 178 + const unsigned bsize = sdp->sd_sb.sb_bsize; 179 + loff_t start = (rgd->rd_addr * bsize) & PAGE_MASK; 180 + loff_t end = PAGE_ALIGN((rgd->rd_addr + rgd->rd_length) * bsize) - 1; 181 + int error; 182 + 183 + filemap_fdatawrite_range(metamapping, start, end); 184 + error = filemap_fdatawait_range(metamapping, start, end); 185 + WARN_ON_ONCE(error && !gfs2_withdrawn(sdp)); 186 + mapping_set_error(metamapping, error); 187 + if (error) 188 + gfs2_io_error(sdp); 189 + return error; 190 + } 191 + 192 + /** 168 193 * rgrp_go_sync - sync out the metadata for this glock 169 194 * @gl: the glock 170 195 * ··· 201 176 static int rgrp_go_sync(struct gfs2_glock *gl) 202 177 { 203 178 struct gfs2_sbd *sdp = gl->gl_name.ln_sbd; 204 - struct address_space *mapping = &sdp->sd_aspace; 205 179 struct gfs2_rgrpd *rgd = gfs2_glock2rgrp(gl); 206 - const unsigned bsize = sdp->sd_sb.sb_bsize; 207 - loff_t start = (rgd->rd_addr * bsize) & PAGE_MASK; 208 - loff_t end = PAGE_ALIGN((rgd->rd_addr + rgd->rd_length) * bsize) - 1; 209 180 int error; 210 181 211 182 if (!test_and_clear_bit(GLF_DIRTY, &gl->gl_flags)) ··· 210 189 211 190 gfs2_log_flush(sdp, gl, GFS2_LOG_HEAD_FLUSH_NORMAL | 212 191 GFS2_LFC_RGRP_GO_SYNC); 213 - filemap_fdatawrite_range(mapping, start, end); 214 - error = filemap_fdatawait_range(mapping, start, end); 215 - WARN_ON_ONCE(error && !gfs2_withdrawn(sdp)); 216 - mapping_set_error(mapping, error); 192 + error = gfs2_rgrp_metasync(gl); 217 193 if (!error) 218 194 error = gfs2_ail_empty_gl(gl); 219 195 gfs2_free_clones(rgd); ··· 284 266 } 285 267 286 268 /** 287 - inode_go_sync - Sync the dirty data and/or metadata for an inode glock 269 + gfs2_inode_metasync - sync out the metadata of an inode 270 + @gl: the glock protecting the inode 271 + * 272 + */ 273 + int gfs2_inode_metasync(struct gfs2_glock *gl) 274 + { 275 + struct address_space *metamapping = gfs2_glock2aspace(gl); 276 + int error; 277 + 278 + filemap_fdatawrite(metamapping); 279 + error = filemap_fdatawait(metamapping); 280 + if (error) 281 + gfs2_io_error(gl->gl_name.ln_sbd); 282 + return error; 283 + } 284 + 285 + /** 286 + * inode_go_sync - Sync the dirty metadata of an inode 287 + * @gl: the glock protecting the inode 288 + * 289 + */ ··· 332 297 error = filemap_fdatawait(mapping); 333 298 mapping_set_error(mapping, error); 334 299 } 335 - ret = filemap_fdatawait(metamapping); 336 - mapping_set_error(metamapping, ret); 300 + ret = gfs2_inode_metasync(gl); 337 301 if (!error) 338 302 error = ret; 339 303 gfs2_ail_empty_gl(gl);
+1
fs/gfs2/glops.h
··· 22 22 extern const struct gfs2_glock_operations gfs2_journal_glops; 23 23 extern const struct gfs2_glock_operations *gfs2_glops_list[]; 24 24 25 + extern int gfs2_inode_metasync(struct gfs2_glock *gl); 25 26 extern void gfs2_ail_flush(struct gfs2_glock *gl, bool fsync); 26 27 27 28 #endif /* __GLOPS_DOT_H__ */
+2 -1
fs/gfs2/inode.c
··· 180 180 error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT, &ip->i_iopen_gh); 181 181 if (unlikely(error)) 182 182 goto fail; 183 - gfs2_cancel_delete_work(ip->i_iopen_gh.gh_gl); 183 + if (blktype != GFS2_BLKST_UNLINKED) 184 + gfs2_cancel_delete_work(ip->i_iopen_gh.gh_gl); 184 185 glock_set_object(ip->i_iopen_gh.gh_gl, ip); 185 186 gfs2_glock_put(io_gl); 186 187 io_gl = NULL;
+5 -26
fs/gfs2/lops.c
··· 22 22 #include "incore.h" 23 23 #include "inode.h" 24 24 #include "glock.h" 25 + #include "glops.h" 25 26 #include "log.h" 26 27 #include "lops.h" 27 28 #include "meta_io.h" ··· 818 817 return error; 819 818 } 820 819 821 - /** 822 - * gfs2_meta_sync - Sync all buffers associated with a glock 823 - * @gl: The glock 824 - * 825 - */ 826 - 827 - void gfs2_meta_sync(struct gfs2_glock *gl) 828 - { 829 - struct address_space *mapping = gfs2_glock2aspace(gl); 830 - struct gfs2_sbd *sdp = gl->gl_name.ln_sbd; 831 - int error; 832 - 833 - if (mapping == NULL) 834 - mapping = &sdp->sd_aspace; 835 - 836 - filemap_fdatawrite(mapping); 837 - error = filemap_fdatawait(mapping); 838 - 839 - if (error) 840 - gfs2_io_error(gl->gl_name.ln_sbd); 841 - } 842 - 843 820 static void buf_lo_after_scan(struct gfs2_jdesc *jd, int error, int pass) 844 821 { 845 822 struct gfs2_inode *ip = GFS2_I(jd->jd_inode); 846 823 struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode); 847 824 848 825 if (error) { 849 - gfs2_meta_sync(ip->i_gl); 826 + gfs2_inode_metasync(ip->i_gl); 850 827 return; 851 828 } 852 829 if (pass != 1) 853 830 return; 854 831 855 - gfs2_meta_sync(ip->i_gl); 832 + gfs2_inode_metasync(ip->i_gl); 856 833 857 834 fs_info(sdp, "jid=%u: Replayed %u of %u blocks\n", 858 835 jd->jd_jid, jd->jd_replayed_blocks, jd->jd_found_blocks); ··· 1039 1060 struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode); 1040 1061 1041 1062 if (error) { 1042 - gfs2_meta_sync(ip->i_gl); 1063 + gfs2_inode_metasync(ip->i_gl); 1043 1064 return; 1044 1065 } 1045 1066 if (pass != 1) 1046 1067 return; 1047 1068 1048 1069 /* data sync? */ 1049 - gfs2_meta_sync(ip->i_gl); 1070 + gfs2_inode_metasync(ip->i_gl); 1050 1071 1051 1072 fs_info(sdp, "jid=%u: Replayed %u of %u data blocks\n", 1052 1073 jd->jd_jid, jd->jd_replayed_blocks, jd->jd_found_blocks);
-2
fs/gfs2/lops.h
··· 27 27 extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh); 28 28 extern int gfs2_find_jhead(struct gfs2_jdesc *jd, 29 29 struct gfs2_log_header_host *head, bool keep_cache); 30 - extern void gfs2_meta_sync(struct gfs2_glock *gl); 31 - 32 30 static inline unsigned int buf_limit(struct gfs2_sbd *sdp) 33 31 { 34 32 unsigned int limit;
+9 -5
fs/gfs2/ops_fstype.c
··· 633 633 if (IS_ERR(sdp->sd_statfs_inode)) { 634 634 error = PTR_ERR(sdp->sd_statfs_inode); 635 635 fs_err(sdp, "can't read in statfs inode: %d\n", error); 636 - goto fail; 636 + goto out; 637 637 } 638 + if (sdp->sd_args.ar_spectator) 639 + goto out; 638 640 639 641 pn = gfs2_lookup_simple(master, "per_node"); 640 642 if (IS_ERR(pn)) { ··· 684 682 iput(pn); 685 683 put_statfs: 686 684 iput(sdp->sd_statfs_inode); 687 - fail: 685 + out: 688 686 return error; 689 687 } 690 688 691 689 /* Uninitialize and free up memory used by the list of statfs inodes */ 692 690 static void uninit_statfs(struct gfs2_sbd *sdp) 693 691 { 694 - gfs2_glock_dq_uninit(&sdp->sd_sc_gh); 695 - free_local_statfs_inodes(sdp); 692 + if (!sdp->sd_args.ar_spectator) { 693 + gfs2_glock_dq_uninit(&sdp->sd_sc_gh); 694 + free_local_statfs_inodes(sdp); 695 + } 696 696 iput(sdp->sd_statfs_inode); 697 697 } 698 698 ··· 708 704 709 705 if (undo) { 710 706 jindex = 0; 711 - goto fail_jinode_gh; 707 + goto fail_statfs; 712 708 } 713 709 714 710 sdp->sd_jindex = gfs2_lookup_simple(master, "jindex");
+1 -1
fs/gfs2/recovery.c
··· 349 349 350 350 mark_buffer_dirty(bh); 351 351 brelse(bh); 352 - gfs2_meta_sync(ip->i_gl); 352 + gfs2_inode_metasync(ip->i_gl); 353 353 354 354 out: 355 355 return error;
+4 -1
fs/gfs2/rgrp.c
··· 719 719 } 720 720 721 721 gfs2_free_clones(rgd); 722 + return_all_reservations(rgd); 722 723 kfree(rgd->rd_bits); 723 724 rgd->rd_bits = NULL; 724 - return_all_reservations(rgd); 725 725 kmem_cache_free(gfs2_rgrpd_cachep, rgd); 726 726 } 727 727 } ··· 1369 1369 1370 1370 if (!capable(CAP_SYS_ADMIN)) 1371 1371 return -EPERM; 1372 + 1373 + if (!test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags)) 1374 + return -EROFS; 1372 1375 1373 1376 if (!blk_queue_discard(q)) 1374 1377 return -EOPNOTSUPP;
+1
fs/gfs2/super.c
··· 738 738 gfs2_jindex_free(sdp); 739 739 /* Take apart glock structures and buffer lists */ 740 740 gfs2_gl_hash_clear(sdp); 741 + truncate_inode_pages_final(&sdp->sd_aspace); 741 742 gfs2_delete_debugfs_file(sdp); 742 743 /* Unmount the locking protocol */ 743 744 gfs2_lm_unmount(sdp);
+4
fs/io-wq.c
··· 482 482 current->files = work->identity->files; 483 483 current->nsproxy = work->identity->nsproxy; 484 484 task_unlock(current); 485 + if (!work->identity->files) { 486 + /* failed grabbing files, ensure work gets cancelled */ 487 + work->flags |= IO_WQ_WORK_CANCEL; 488 + } 485 489 } 486 490 if ((work->flags & IO_WQ_WORK_FS) && current->fs != work->identity->fs) 487 491 current->fs = work->identity->fs;
+136 -47
fs/io_uring.c
··· 995 995 if (mm) { 996 996 kthread_unuse_mm(mm); 997 997 mmput(mm); 998 + current->mm = NULL; 998 999 } 999 1000 1000 1001 static int __io_sq_thread_acquire_mm(struct io_ring_ctx *ctx) 1001 1002 { 1003 - if (!current->mm) { 1004 - if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL) || 1005 - !ctx->sqo_task->mm || 1006 - !mmget_not_zero(ctx->sqo_task->mm))) 1007 - return -EFAULT; 1008 - kthread_use_mm(ctx->sqo_task->mm); 1004 + struct mm_struct *mm; 1005 + 1006 + if (current->mm) 1007 + return 0; 1008 + 1009 + /* Should never happen */ 1010 + if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL))) 1011 + return -EFAULT; 1012 + 1013 + task_lock(ctx->sqo_task); 1014 + mm = ctx->sqo_task->mm; 1015 + if (unlikely(!mm || !mmget_not_zero(mm))) 1016 + mm = NULL; 1017 + task_unlock(ctx->sqo_task); 1018 + 1019 + if (mm) { 1020 + kthread_use_mm(mm); 1021 + return 0; 1009 1022 } 1010 1023 1011 - return 0; 1024 + return -EFAULT; 1012 1025 } 1013 1026 1014 1027 static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx, ··· 1287 1274 /* add one for this request */ 1288 1275 refcount_inc(&id->count); 1289 1276 1290 - /* drop old identity, assign new one. one ref for req, one for tctx */ 1291 - if (req->work.identity != tctx->identity && 1292 - refcount_sub_and_test(2, &req->work.identity->count)) 1277 + /* drop tctx and req identity references, if needed */ 1278 + if (tctx->identity != &tctx->__identity && 1279 + refcount_dec_and_test(&tctx->identity->count)) 1280 + kfree(tctx->identity); 1281 + if (req->work.identity != &tctx->__identity && 1282 + refcount_dec_and_test(&req->work.identity->count)) 1293 1283 kfree(req->work.identity); 1294 1284 1295 1285 req->work.identity = id; ··· 1593 1577 } 1594 1578 } 1595 1579 1596 - static inline bool io_match_files(struct io_kiocb *req, 1597 - struct files_struct *files) 1580 + static inline bool __io_match_files(struct io_kiocb *req, 1581 + struct files_struct *files) 1598 1582 { 1583 + return ((req->flags & REQ_F_WORK_INITIALIZED) && 1584 + (req->work.flags & IO_WQ_WORK_FILES)) && 1585 + req->work.identity->files == files; 1586 + } 1587 + 1588 + static bool io_match_files(struct io_kiocb *req, 1589 + struct files_struct *files) 1590 + { 1591 + struct io_kiocb *link; 1592 + 1599 1593 if (!files) 1600 1594 return true; 1601 - if ((req->flags & REQ_F_WORK_INITIALIZED) && 1602 - (req->work.flags & IO_WQ_WORK_FILES)) 1603 - return req->work.identity->files == files; 1595 + if (__io_match_files(req, files)) 1596 + return true; 1597 + if (req->flags & REQ_F_LINK_HEAD) { 1598 + list_for_each_entry(link, &req->link_list, link_list) { 1599 + if (__io_match_files(link, files)) 1600 + return true; 1601 + } 1602 + } 1604 1603 return false; 1605 1604 } 1606 1605 ··· 1699 1668 WRITE_ONCE(cqe->user_data, req->user_data); 1700 1669 WRITE_ONCE(cqe->res, res); 1701 1670 WRITE_ONCE(cqe->flags, cflags); 1702 - } else if (ctx->cq_overflow_flushed || req->task->io_uring->in_idle) { 1671 + } else if (ctx->cq_overflow_flushed || 1672 + atomic_read(&req->task->io_uring->in_idle)) { 1703 1673 /* 1704 1674 * If we're in ring overflow flush mode, or in task cancel mode, 1705 1675 * then we cannot store the request for later flushing, we need ··· 1870 1838 io_dismantle_req(req); 1871 1839 1872 1840 percpu_counter_dec(&tctx->inflight); 1873 - if (tctx->in_idle) 1841 + if (atomic_read(&tctx->in_idle)) 1874 1842 wake_up(&tctx->wait); 1875 1843 put_task_struct(req->task); 1876 1844 ··· 7727 7695 xa_init(&tctx->xa); 7728 7696 init_waitqueue_head(&tctx->wait); 7729 7697 tctx->last = NULL; 7730 - tctx->in_idle = 0; 7698 + atomic_set(&tctx->in_idle, 0); 7699 + tctx->sqpoll = false; 7731 7700 io_init_identity(&tctx->__identity); 7732 7701 tctx->identity = &tctx->__identity; 7733 7702 task->io_uring = tctx; ··· 8421 8388 return false; 8422 8389 } 8423 8390 8424 - static bool io_match_link_files(struct io_kiocb *req, 8425 - struct files_struct *files) 8426 - { 8427 - struct io_kiocb *link; 8428 - 8429 - if (io_match_files(req, files)) 8430 - return true; 8431 - if (req->flags & REQ_F_LINK_HEAD) { 8432 - list_for_each_entry(link, &req->link_list, link_list) { 8433 - if (io_match_files(link, files)) 8434 - return true; 8435 - } 8436 - } 8437 - return false; 8438 - } 8439 - 8440 8391 /* 8441 8392 * We're looking to cancel 'req' because it's holding on to our files, but 8442 8393 * 'req' could be a link to another request. See if it is, and cancel that ··· 8470 8453 8471 8454 static bool io_cancel_link_cb(struct io_wq_work *work, void *data) 8472 8455 { 8473 - return io_match_link(container_of(work, struct io_kiocb, work), data); 8456 + struct io_kiocb *req = container_of(work, struct io_kiocb, work); 8457 + bool ret; 8458 + 8459 + if (req->flags & REQ_F_LINK_TIMEOUT) { 8460 + unsigned long flags; 8461 + struct io_ring_ctx *ctx = req->ctx; 8462 + 8463 + /* protect against races with linked timeouts */ 8464 + spin_lock_irqsave(&ctx->completion_lock, flags); 8465 + ret = io_match_link(req, data); 8466 + spin_unlock_irqrestore(&ctx->completion_lock, flags); 8467 + } else { 8468 + ret = io_match_link(req, data); 8469 + } 8470 + return ret; 8474 8471 } 8475 8472 8476 8473 static void io_attempt_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req) ··· 8510 8479 } 8511 8480 8512 8481 static void io_cancel_defer_files(struct io_ring_ctx *ctx, 8482 + struct task_struct *task, 8513 8483 struct files_struct *files) 8514 8484 { 8515 8485 struct io_defer_entry *de = NULL; ··· 8518 8486 8519 8487 spin_lock_irq(&ctx->completion_lock); 8520 8488 list_for_each_entry_reverse(de, &ctx->defer_list, list) { 8521 - if (io_match_link_files(de->req, files)) { 8489 + if (io_task_match(de->req, task) && 8490 + io_match_files(de->req, files)) { 8522 8491 list_cut_position(&list, &ctx->defer_list, &de->list); 8523 8492 break; 8524 8493 } ··· 8545 8512 if (list_empty_careful(&ctx->inflight_list)) 8546 8513 return false; 8547 8514 8548 - io_cancel_defer_files(ctx, files); 8549 8515 /* cancel all at once, should be faster than doing it one by one*/ 8550 8516 io_wq_cancel_cb(ctx->io_wq, io_wq_files_match, files, true); 8551 8517 ··· 8630 8598 { 8631 8599 struct task_struct *task = current; 8632 8600 8633 - if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) 8601 + if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) { 8634 8602 task = ctx->sq_data->thread; 8603 + atomic_inc(&task->io_uring->in_idle); 8604 + io_sq_thread_park(ctx->sq_data); 8605 + } 8606 + 8607 + if (files) 8608 + io_cancel_defer_files(ctx, NULL, files); 8609 + else 8610 + io_cancel_defer_files(ctx, task, NULL); 8635 8611 8636 8612 io_cqring_overflow_flush(ctx, true, task, files); 8637 8613 ··· 8647 8607 io_run_task_work(); 8648 8608 cond_resched(); 8649 8609 } 8610 + 8611 + if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) { 8612 + atomic_dec(&task->io_uring->in_idle); 8613 + /* 8614 + * If the files that are going away are the ones in the thread 8615 + * identity, clear them out. 8616 + */ 8617 + if (task->io_uring->identity->files == files) 8618 + task->io_uring->identity->files = NULL; 8619 + io_sq_thread_unpark(ctx->sq_data); 8620 + } 8650 8621 } 8651 8622 8652 8623 /* 8653 8624 * Note that this task has used io_uring. We use it for cancelation purposes. 8654 8625 */ 8655 - static int io_uring_add_task_file(struct file *file) 8626 + static int io_uring_add_task_file(struct io_ring_ctx *ctx, struct file *file) 8656 8627 { 8657 8628 struct io_uring_task *tctx = current->io_uring; 8658 8629 ··· 8684 8633 } 8685 8634 tctx->last = file; 8686 8635 } 8636 + 8637 + /* 8638 + * This is race safe in that the task itself is doing this, hence it 8639 + * cannot be going through the exit/cancel paths at the same time. 8640 + * This cannot be modified while exit/cancel is running. 8641 + */ 8642 + if (!tctx->sqpoll && (ctx->flags & IORING_SETUP_SQPOLL)) 8643 + tctx->sqpoll = true; 8687 8644 8688 8645 return 0; 8689 8646 } ··· 8734 8675 unsigned long index; 8735 8676 8736 8677 /* make sure overflow events are dropped */ 8737 - tctx->in_idle = true; 8678 + atomic_inc(&tctx->in_idle); 8738 8679 8739 8680 xa_for_each(&tctx->xa, index, file) { 8740 8681 struct io_ring_ctx *ctx = file->private_data; ··· 8743 8684 if (files) 8744 8685 io_uring_del_task_file(file); 8745 8686 } 8687 + 8688 + atomic_dec(&tctx->in_idle); 8689 + } 8690 + 8691 + static s64 tctx_inflight(struct io_uring_task *tctx) 8692 + { 8693 + unsigned long index; 8694 + struct file *file; 8695 + s64 inflight; 8696 + 8697 + inflight = percpu_counter_sum(&tctx->inflight); 8698 + if (!tctx->sqpoll) 8699 + return inflight; 8700 + 8701 + /* 8702 + * If we have SQPOLL rings, then we need to iterate and find them, and 8703 + * add the pending count for those. 8704 + */ 8705 + xa_for_each(&tctx->xa, index, file) { 8706 + struct io_ring_ctx *ctx = file->private_data; 8707 + 8708 + if (ctx->flags & IORING_SETUP_SQPOLL) { 8709 + struct io_uring_task *__tctx = ctx->sqo_task->io_uring; 8710 + 8711 + inflight += percpu_counter_sum(&__tctx->inflight); 8712 + } 8713 + } 8714 + 8715 + return inflight; 8746 8716 } 8747 8717 8748 8718 /* ··· 8785 8697 s64 inflight; 8786 8698 8787 8699 /* make sure overflow events are dropped */ 8788 - tctx->in_idle = true; 8700 + atomic_inc(&tctx->in_idle); 8789 8701 8790 8702 do { 8791 8703 /* read completions before cancelations */ 8792 - inflight = percpu_counter_sum(&tctx->inflight); 8704 + inflight = tctx_inflight(tctx); 8793 8705 if (!inflight) 8794 8706 break; 8795 8707 __io_uring_files_cancel(NULL); ··· 8800 8712 * If we've seen completions, retry. This avoids a race where 8801 8713 * a completion comes in before we did prepare_to_wait(). 8802 8714 */ 8803 - if (inflight != percpu_counter_sum(&tctx->inflight)) 8715 + if (inflight != tctx_inflight(tctx)) 8804 8716 continue; 8805 8717 schedule(); 8806 8718 } while (1); 8807 8719 8808 8720 finish_wait(&tctx->wait, &wait); 8809 - tctx->in_idle = false; 8721 + atomic_dec(&tctx->in_idle); 8810 8722 } 8811 8723 8812 8724 static int io_uring_flush(struct file *file, void *data) ··· 8951 8863 io_sqpoll_wait_sq(ctx); 8952 8864 submitted = to_submit; 8953 8865 } else if (to_submit) { 8954 - ret = io_uring_add_task_file(f.file); 8866 + ret = io_uring_add_task_file(ctx, f.file); 8955 8867 if (unlikely(ret)) 8956 8868 goto out; 8957 8869 mutex_lock(&ctx->uring_lock); ··· 8988 8900 #ifdef CONFIG_PROC_FS 8989 8901 static int io_uring_show_cred(int id, void *p, void *data) 8990 8902 { 8991 - const struct cred *cred = p; 8903 + struct io_identity *iod = p; 8904 + const struct cred *cred = iod->creds; 8992 8905 struct seq_file *m = data; 8993 8906 struct user_namespace *uns = seq_user_ns(m); 8994 8907 struct group_info *gi; ··· 9181 9092 #if defined(CONFIG_UNIX) 9182 9093 ctx->ring_sock->file = file; 9183 9094 #endif 9184 - if (unlikely(io_uring_add_task_file(file))) { 9095 + if (unlikely(io_uring_add_task_file(ctx, file))) { 9185 9096 file = ERR_PTR(-ENOMEM); 9186 9097 goto err_fd; 9187 9098 }
+10 -20
fs/iomap/buffered-io.c
··· 1374 1374 WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list)); 1375 1375 WARN_ON_ONCE(!PageLocked(page)); 1376 1376 WARN_ON_ONCE(PageWriteback(page)); 1377 + WARN_ON_ONCE(PageDirty(page)); 1377 1378 1378 1379 /* 1379 1380 * We cannot cancel the ioend directly here on error. We may have ··· 1383 1382 * appropriately. 1384 1383 */ 1385 1384 if (unlikely(error)) { 1385 + /* 1386 + * Let the filesystem know what portion of the current page 1387 + * failed to map. If the page hasn't been added to ioend, it 1388 + * won't be affected by I/O completion and we must unlock it 1389 + * now. 1390 + */ 1391 + if (wpc->ops->discard_page) 1392 + wpc->ops->discard_page(page, file_offset); 1386 1393 if (!count) { 1387 - /* 1388 - * If the current page hasn't been added to ioend, it 1389 - * won't be affected by I/O completions and we must 1390 - * discard and unlock it right here. 1391 - */ 1392 - if (wpc->ops->discard_page) 1393 - wpc->ops->discard_page(page); 1394 1394 ClearPageUptodate(page); 1395 1395 unlock_page(page); 1396 1396 goto done; 1397 1397 } 1398 - 1399 - /* 1400 - * If the page was not fully cleaned, we need to ensure that the 1401 - * higher layers come back to it correctly. That means we need 1402 - * to keep the page dirty, and for WB_SYNC_ALL writeback we need 1403 - * to ensure the PAGECACHE_TAG_TOWRITE index mark is not removed 1404 - * so another attempt to write this page in this writeback sweep 1405 - * will be made. 1406 - */ 1407 - set_page_writeback_keepwrite(page); 1408 - } else { 1409 - clear_page_dirty_for_io(page); 1410 - set_page_writeback(page); 1411 1398 } 1412 1399 1400 + set_page_writeback(page); 1413 1401 unlock_page(page); 1414 1402 1415 1403 /*
+2
fs/proc/base.c
··· 1049 1049 oom_adj = (task->signal->oom_score_adj * -OOM_DISABLE) / 1050 1050 OOM_SCORE_ADJ_MAX; 1051 1051 put_task_struct(task); 1052 + if (oom_adj > OOM_ADJUST_MAX) 1053 + oom_adj = OOM_ADJUST_MAX; 1052 1054 len = snprintf(buffer, sizeof(buffer), "%d\n", oom_adj); 1053 1055 return simple_read_from_buffer(buf, count, ppos, buffer, len); 1054 1056 }
+1 -1
fs/proc/cpuinfo.c
··· 19 19 static const struct proc_ops cpuinfo_proc_ops = { 20 20 .proc_flags = PROC_ENTRY_PERMANENT, 21 21 .proc_open = cpuinfo_open, 22 - .proc_read = seq_read, 22 + .proc_read_iter = seq_read_iter, 23 23 .proc_lseek = seq_lseek, 24 24 .proc_release = seq_release, 25 25 };
+2 -2
fs/proc/generic.c
··· 590 590 static const struct proc_ops proc_seq_ops = { 591 591 /* not permanent -- can call into arbitrary seq_operations */ 592 592 .proc_open = proc_seq_open, 593 - .proc_read = seq_read, 593 + .proc_read_iter = seq_read_iter, 594 594 .proc_lseek = seq_lseek, 595 595 .proc_release = proc_seq_release, 596 596 }; ··· 621 621 static const struct proc_ops proc_single_ops = { 622 622 /* not permanent -- can call into arbitrary ->single_show */ 623 623 .proc_open = proc_single_open, 624 - .proc_read = seq_read, 624 + .proc_read_iter = seq_read_iter, 625 625 .proc_lseek = seq_lseek, 626 626 .proc_release = single_release, 627 627 };
+2
fs/proc/inode.c
··· 597 597 .llseek = proc_reg_llseek, 598 598 .read_iter = proc_reg_read_iter, 599 599 .write = proc_reg_write, 600 + .splice_read = generic_file_splice_read, 600 601 .poll = proc_reg_poll, 601 602 .unlocked_ioctl = proc_reg_unlocked_ioctl, 602 603 .mmap = proc_reg_mmap, ··· 623 622 static const struct file_operations proc_iter_file_ops_compat = { 624 623 .llseek = proc_reg_llseek, 625 624 .read_iter = proc_reg_read_iter, 625 + .splice_read = generic_file_splice_read, 626 626 .write = proc_reg_write, 627 627 .poll = proc_reg_poll, 628 628 .unlocked_ioctl = proc_reg_unlocked_ioctl,
+1 -1
fs/proc/stat.c
··· 226 226 static const struct proc_ops stat_proc_ops = { 227 227 .proc_flags = PROC_ENTRY_PERMANENT, 228 228 .proc_open = stat_open, 229 - .proc_read = seq_read, 229 + .proc_read_iter = seq_read_iter, 230 230 .proc_lseek = seq_lseek, 231 231 .proc_release = single_release, 232 232 };
+32 -13
fs/seq_file.c
··· 18 18 #include <linux/mm.h> 19 19 #include <linux/printk.h> 20 20 #include <linux/string_helpers.h> 21 + #include <linux/uio.h> 21 22 22 23 #include <linux/uaccess.h> 23 24 #include <asm/page.h> ··· 147 146 */ 148 147 ssize_t seq_read(struct file *file, char __user *buf, size_t size, loff_t *ppos) 149 148 { 150 - struct seq_file *m = file->private_data; 149 + struct iovec iov = { .iov_base = buf, .iov_len = size}; 150 + struct kiocb kiocb; 151 + struct iov_iter iter; 152 + ssize_t ret; 153 + 154 + init_sync_kiocb(&kiocb, file); 155 + iov_iter_init(&iter, READ, &iov, 1, size); 156 + 157 + kiocb.ki_pos = *ppos; 158 + ret = seq_read_iter(&kiocb, &iter); 159 + *ppos = kiocb.ki_pos; 160 + return ret; 161 + } 162 + EXPORT_SYMBOL(seq_read); 163 + 164 + /* 165 + * Ready-made ->f_op->read_iter() 166 + */ 167 + ssize_t seq_read_iter(struct kiocb *iocb, struct iov_iter *iter) 168 + { 169 + struct seq_file *m = iocb->ki_filp->private_data; 170 + size_t size = iov_iter_count(iter); 151 171 size_t copied = 0; 152 172 size_t n; 153 173 void *p; ··· 180 158 * if request is to read from zero offset, reset iterator to first 181 159 * record as it might have been already advanced by previous requests 182 160 */ 183 - if (*ppos == 0) { 161 + if (iocb->ki_pos == 0) { 184 162 m->index = 0; 185 163 m->count = 0; 186 164 } 187 165 188 - /* Don't assume *ppos is where we left it */ 189 - if (unlikely(*ppos != m->read_pos)) { 190 - while ((err = traverse(m, *ppos)) == -EAGAIN) 166 + /* Don't assume ki_pos is where we left it */ 167 + if (unlikely(iocb->ki_pos != m->read_pos)) { 168 + while ((err = traverse(m, iocb->ki_pos)) == -EAGAIN) 191 169 ; 192 170 if (err) { 193 171 /* With prejudice... */ ··· 196 174 m->count = 0; 197 175 goto Done; 198 176 } else { 199 - m->read_pos = *ppos; 177 + m->read_pos = iocb->ki_pos; 200 178 } 201 179 } 202 180 ··· 209 187 /* if not empty - flush it first */ 210 188 if (m->count) { 211 189 n = min(m->count, size); 212 - err = copy_to_user(buf, m->buf + m->from, n); 213 - if (err) 190 + if (copy_to_iter(m->buf + m->from, n, iter) != n) 214 191 goto Efault; 215 192 m->count -= n; 216 193 m->from += n; 217 194 size -= n; 218 - buf += n; 219 195 copied += n; 220 196 if (!size) 221 197 goto Done; ··· 274 254 } 275 255 m->op->stop(m, p); 276 256 n = min(m->count, size); 277 - err = copy_to_user(buf, m->buf, n); 278 - if (err) 257 + if (copy_to_iter(m->buf, n, iter) != n) 279 258 goto Efault; 280 259 copied += n; 281 260 m->count -= n; ··· 283 264 if (!copied) 284 265 copied = err; 285 266 else { 286 - *ppos += copied; 267 + iocb->ki_pos += copied; 287 268 m->read_pos += copied; 288 269 } 289 270 mutex_unlock(&m->lock); ··· 295 276 err = -EFAULT; 296 277 goto Done; 297 278 } 298 - EXPORT_SYMBOL(seq_read); 279 + EXPORT_SYMBOL(seq_read_iter); 299 280 300 281 /** 301 282 * seq_lseek - ->llseek() method for sequential files.
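The fs/seq_file.c hunk above makes the iov_iter path primary: seq_read_iter() does the real work, and the classic seq_read() becomes a thin shim that wraps the single user buffer in a one-element iovec. A rough userspace sketch of the same shape — illustrative `demo_*` names and a plain backing buffer standing in for the kernel's kiocb/iov_iter machinery, not the actual API:

```c
#include <stddef.h>
#include <string.h>
#include <sys/uio.h>

/* "Iter-first" reader: copies from a backing buffer into an iovec array,
 * advancing *pos. This plays the role of seq_read_iter(). */
static size_t demo_read_iter(const char *backing, size_t backing_len,
                             long *pos, const struct iovec *iov, int iovcnt)
{
    size_t copied = 0;

    for (int i = 0; i < iovcnt && (size_t)*pos < backing_len; i++) {
        size_t n = backing_len - (size_t)*pos;

        if (n > iov[i].iov_len)
            n = iov[i].iov_len;
        memcpy(iov[i].iov_base, backing + *pos, n);
        *pos += (long)n;
        copied += n;
    }
    return copied;
}

/* The classic read(buf, size) entry point is now just a shim that wraps
 * the buffer in a one-element iovec -- the same shape as the converted
 * seq_read(). */
static size_t demo_read(const char *backing, size_t backing_len,
                        long *pos, char *buf, size_t size)
{
    struct iovec iov = { .iov_base = buf, .iov_len = size };

    return demo_read_iter(backing, backing_len, pos, &iov, 1);
}
```

Keeping one copy of the logic and shimming the legacy entry point is what lets /proc files switch `.proc_read` to `.proc_read_iter` elsewhere in this merge without duplicating the seq_file traversal code.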
+1
fs/xfs/libxfs/xfs_alloc.c
··· 2467 2467 new->xefi_startblock = XFS_AGB_TO_FSB(mp, agno, agbno); 2468 2468 new->xefi_blockcount = 1; 2469 2469 new->xefi_oinfo = *oinfo; 2470 + new->xefi_skip_discard = false; 2470 2471 2471 2472 trace_xfs_agfl_free_defer(mp, agno, 0, agbno, 1); 2472 2473
+1 -1
fs/xfs/libxfs/xfs_bmap.h
··· 52 52 { 53 53 xfs_fsblock_t xefi_startblock;/* starting fs block number */ 54 54 xfs_extlen_t xefi_blockcount;/* number of blocks in extent */ 55 + bool xefi_skip_discard; 55 56 struct list_head xefi_list; 56 57 struct xfs_owner_info xefi_oinfo; /* extent owner */ 57 - bool xefi_skip_discard; 58 58 }; 59 59 60 60 #define XFS_BMAP_MAX_NMAP 4
+1 -2
fs/xfs/scrub/inode.c
··· 121 121 goto bad; 122 122 123 123 /* rt flags require rt device */ 124 - if ((flags & (XFS_DIFLAG_REALTIME | XFS_DIFLAG_RTINHERIT)) && 125 - !mp->m_rtdev_targp) 124 + if ((flags & XFS_DIFLAG_REALTIME) && !mp->m_rtdev_targp) 126 125 goto bad; 127 126 128 127 /* new rt bitmap flag only valid for rbmino */
+12 -8
fs/xfs/xfs_aops.c
··· 346 346 ssize_t count = i_blocksize(inode); 347 347 xfs_fileoff_t offset_fsb = XFS_B_TO_FSBT(mp, offset); 348 348 xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, offset + count); 349 - xfs_fileoff_t cow_fsb = NULLFILEOFF; 350 - int whichfork = XFS_DATA_FORK; 349 + xfs_fileoff_t cow_fsb; 350 + int whichfork; 351 351 struct xfs_bmbt_irec imap; 352 352 struct xfs_iext_cursor icur; 353 353 int retries = 0; ··· 381 381 * landed in a hole and we skip the block. 382 382 */ 383 383 retry: 384 + cow_fsb = NULLFILEOFF; 385 + whichfork = XFS_DATA_FORK; 384 386 xfs_ilock(ip, XFS_ILOCK_SHARED); 385 387 ASSERT(ip->i_df.if_format != XFS_DINODE_FMT_BTREE || 386 388 (ip->i_df.if_flags & XFS_IFEXTENTS)); ··· 529 527 */ 530 528 static void 531 529 xfs_discard_page( 532 - struct page *page) 530 + struct page *page, 531 + loff_t fileoff) 533 532 { 534 533 struct inode *inode = page->mapping->host; 535 534 struct xfs_inode *ip = XFS_I(inode); 536 535 struct xfs_mount *mp = ip->i_mount; 537 - loff_t offset = page_offset(page); 538 - xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, offset); 536 + unsigned int pageoff = offset_in_page(fileoff); 537 + xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, fileoff); 538 + xfs_fileoff_t pageoff_fsb = XFS_B_TO_FSBT(mp, pageoff); 539 539 int error; 540 540 541 541 if (XFS_FORCED_SHUTDOWN(mp)) ··· 545 541 546 542 xfs_alert_ratelimited(mp, 547 543 "page discard on page "PTR_FMT", inode 0x%llx, offset %llu.", 548 - page, ip->i_ino, offset); 544 + page, ip->i_ino, fileoff); 549 545 550 546 error = xfs_bmap_punch_delalloc_range(ip, start_fsb, 551 - i_blocks_per_page(inode, page)); 547 + i_blocks_per_page(inode, page) - pageoff_fsb); 552 548 if (error && !XFS_FORCED_SHUTDOWN(mp)) 553 549 xfs_alert(mp, "page discard unable to remove delalloc mapping."); 554 550 out_invalidate: 555 - iomap_invalidatepage(page, 0, PAGE_SIZE); 551 + iomap_invalidatepage(page, pageoff, PAGE_SIZE - pageoff); 556 552 557 553 558 554 static const struct iomap_writeback_ops xfs_writeback_ops = {
+10
fs/xfs/xfs_iops.c
··· 911 911 error = iomap_zero_range(inode, oldsize, newsize - oldsize, 912 912 &did_zeroing, &xfs_buffered_write_iomap_ops); 913 913 } else { 914 + /* 915 + * iomap won't detect a dirty page over an unwritten block (or a 916 + * cow block over a hole) and subsequently skips zeroing the 917 + * newly post-EOF portion of the page. Flush the new EOF to 918 + * convert the block before the pagecache truncate. 919 + */ 920 + error = filemap_write_and_wait_range(inode->i_mapping, newsize, 921 + newsize); 922 + if (error) 923 + return error; 914 924 error = iomap_truncate_page(inode, newsize, &did_zeroing, 915 925 &xfs_buffered_write_iomap_ops); 916 926 }
+2 -1
fs/xfs/xfs_reflink.c
··· 1502 1502 &xfs_buffered_write_iomap_ops); 1503 1503 if (error) 1504 1504 goto out; 1505 - error = filemap_write_and_wait(inode->i_mapping); 1505 + 1506 + error = filemap_write_and_wait_range(inode->i_mapping, offset, len); 1506 1507 if (error) 1507 1508 goto out; 1508 1509
+8 -8
include/kunit/test.h
··· 252 252 } 253 253 #endif /* IS_BUILTIN(CONFIG_KUNIT) */ 254 254 255 + #ifdef MODULE 255 256 /** 256 - * kunit_test_suites() - used to register one or more &struct kunit_suite 257 - * with KUnit. 257 + * kunit_test_suites_for_module() - used to register one or more 258 + * &struct kunit_suite with KUnit. 258 259 * 259 - * @suites_list...: a statically allocated list of &struct kunit_suite. 260 + * @__suites: a statically allocated list of &struct kunit_suite. 260 261 * 261 - * Registers @suites_list with the test framework. See &struct kunit_suite for 262 + * Registers @__suites with the test framework. See &struct kunit_suite for 262 263 * more information. 263 264 * 264 265 * If a test suite is built-in, module_init() gets translated into ··· 268 267 * module_{init|exit} functions for the builtin case when registering 269 268 * suites via kunit_test_suites() below. 270 269 */ 271 - #ifdef MODULE 272 270 #define kunit_test_suites_for_module(__suites) \ 273 271 static int __init kunit_test_suites_init(void) \ 274 272 { \ ··· 294 294 * kunit_test_suites() - used to register one or more &struct kunit_suite 295 295 * with KUnit. 296 296 * 297 - * @suites: a statically allocated list of &struct kunit_suite. 297 + * @__suites: a statically allocated list of &struct kunit_suite. 298 298 * 299 299 * Registers @suites with the test framework. See &struct kunit_suite for 300 300 * more information. ··· 308 308 * module. 309 309 * 310 310 */ 311 - #define kunit_test_suites(...) \ 311 + #define kunit_test_suites(__suites...) \ 312 312 __kunit_test_suites(__UNIQUE_ID(array), \ 313 313 __UNIQUE_ID(suites), \ 314 - __VA_ARGS__) 314 + ##__suites) 315 315 316 316 #define kunit_test_suite(suite) kunit_test_suites(&suite) 317 317
+2
include/linux/blk-mq.h
··· 235 235 * @flags: Zero or more BLK_MQ_F_* flags. 236 236 * @driver_data: Pointer to data owned by the block driver that created this 237 237 * tag set. 238 + * @active_queues_shared_sbitmap: 239 + * number of active request queues per tag set. 238 240 * @__bitmap_tags: A shared tags sbitmap, used over all hctx's 239 241 * @__breserved_tags: 240 242 * A shared reserved tags sbitmap, used over all hctx's
+8 -12
include/linux/can/skb.h
··· 61 61 */ 62 62 static inline struct sk_buff *can_create_echo_skb(struct sk_buff *skb) 63 63 { 64 - if (skb_shared(skb)) { 65 - struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC); 64 + struct sk_buff *nskb; 66 65 67 - if (likely(nskb)) { 68 - can_skb_set_owner(nskb, skb->sk); 69 - consume_skb(skb); 70 - return nskb; 71 - } else { 72 - kfree_skb(skb); 73 - return NULL; 74 - } 66 + nskb = skb_clone(skb, GFP_ATOMIC); 67 + if (unlikely(!nskb)) { 68 + kfree_skb(skb); 69 + return NULL; 75 70 } 76 71 77 - /* we can assume to have an unshared skb with proper owner */ 78 - return skb; 72 + can_skb_set_owner(nskb, skb->sk); 73 + consume_skb(skb); 74 + return nskb; 79 75 } 80 76 81 77 #endif /* !_CAN_SKB_H */
+2 -1
include/linux/io_uring.h
··· 30 30 struct percpu_counter inflight; 31 31 struct io_identity __identity; 32 32 struct io_identity *identity; 33 - bool in_idle; 33 + atomic_t in_idle; 34 + bool sqpoll; 34 35 }; 35 36 36 37 #if defined(CONFIG_IO_URING)
+1 -1
include/linux/iomap.h
··· 221 221 * Optional, allows the file system to discard state on a page where 222 222 * we failed to submit any I/O. 223 223 */ 224 - void (*discard_page)(struct page *page); 224 + void (*discard_page)(struct page *page, loff_t fileoff); 225 225 }; 226 226 227 227 struct iomap_writepage_ctx {
+9
include/linux/mm.h
··· 2759 2759 return VM_FAULT_NOPAGE; 2760 2760 } 2761 2761 2762 + #ifndef io_remap_pfn_range 2763 + static inline int io_remap_pfn_range(struct vm_area_struct *vma, 2764 + unsigned long addr, unsigned long pfn, 2765 + unsigned long size, pgprot_t prot) 2766 + { 2767 + return remap_pfn_range(vma, addr, pfn, size, pgprot_decrypted(prot)); 2768 + } 2769 + #endif 2770 + 2762 2771 static inline vm_fault_t vmf_error(int err) 2763 2772 { 2764 2773 if (err == -ENOMEM)
+8 -1
include/linux/netfilter/nfnetlink.h
··· 24 24 const u_int16_t attr_count; /* number of nlattr's */ 25 25 }; 26 26 27 + enum nfnl_abort_action { 28 + NFNL_ABORT_NONE = 0, 29 + NFNL_ABORT_AUTOLOAD, 30 + NFNL_ABORT_VALIDATE, 31 + }; 32 + 27 33 struct nfnetlink_subsystem { 28 34 const char *name; 29 35 __u8 subsys_id; /* nfnetlink subsystem ID */ ··· 37 31 const struct nfnl_callback *cb; /* callback for individual types */ 38 32 struct module *owner; 39 33 int (*commit)(struct net *net, struct sk_buff *skb); 40 - int (*abort)(struct net *net, struct sk_buff *skb, bool autoload); 34 + int (*abort)(struct net *net, struct sk_buff *skb, 35 + enum nfnl_abort_action action); 41 36 void (*cleanup)(struct net *net); 42 37 bool (*valid_genid)(struct net *net, u32 genid); 43 38 };
+1 -1
include/linux/netfilter_ipv4.h
··· 16 16 u_int32_t mark; 17 17 }; 18 18 19 - int ip_route_me_harder(struct net *net, struct sk_buff *skb, unsigned addr_type); 19 + int ip_route_me_harder(struct net *net, struct sock *sk, struct sk_buff *skb, unsigned addr_type); 20 20 21 21 struct nf_queue_entry; 22 22
+5 -5
include/linux/netfilter_ipv6.h
··· 42 42 #if IS_MODULE(CONFIG_IPV6) 43 43 int (*chk_addr)(struct net *net, const struct in6_addr *addr, 44 44 const struct net_device *dev, int strict); 45 - int (*route_me_harder)(struct net *net, struct sk_buff *skb); 45 + int (*route_me_harder)(struct net *net, struct sock *sk, struct sk_buff *skb); 46 46 int (*dev_get_saddr)(struct net *net, const struct net_device *dev, 47 47 const struct in6_addr *daddr, unsigned int srcprefs, 48 48 struct in6_addr *saddr); ··· 143 143 #endif 144 144 } 145 145 146 - int ip6_route_me_harder(struct net *net, struct sk_buff *skb); 146 + int ip6_route_me_harder(struct net *net, struct sock *sk, struct sk_buff *skb); 147 147 148 - static inline int nf_ip6_route_me_harder(struct net *net, struct sk_buff *skb) 148 + static inline int nf_ip6_route_me_harder(struct net *net, struct sock *sk, struct sk_buff *skb) 149 149 { 150 150 #if IS_MODULE(CONFIG_IPV6) 151 151 const struct nf_ipv6_ops *v6_ops = nf_get_ipv6_ops(); ··· 153 153 if (!v6_ops) 154 154 return -EHOSTUNREACH; 155 155 156 - return v6_ops->route_me_harder(net, skb); 156 + return v6_ops->route_me_harder(net, sk, skb); 157 157 #elif IS_BUILTIN(CONFIG_IPV6) 158 - return ip6_route_me_harder(net, skb); 158 + return ip6_route_me_harder(net, sk, skb); 159 159 #else 160 160 return -EHOSTUNREACH; 161 161 #endif
+4 -4
include/linux/pagemap.h
··· 344 344 /** 345 345 * find_lock_page - locate, pin and lock a pagecache page 346 346 * @mapping: the address_space to search 347 - * @offset: the page index 347 + * @index: the page index 348 348 * 349 - * Looks up the page cache entry at @mapping & @offset. If there is a 349 + * Looks up the page cache entry at @mapping & @index. If there is a 350 350 * page cache page, it is returned locked and with an increased 351 351 * refcount. 352 352 * ··· 363 363 /** 364 364 * find_lock_head - Locate, pin and lock a pagecache page. 365 365 * @mapping: The address_space to search. 366 - * @offset: The page index. 366 + * @index: The page index. 367 367 * 368 - * Looks up the page cache entry at @mapping & @offset. If there is a 368 + * Looks up the page cache entry at @mapping & @index. If there is a 369 369 * page cache page, its head page is returned locked and with an increased 370 370 * refcount. 371 371 *
-4
include/linux/pgtable.h
··· 1427 1427 1428 1428 #endif /* !__ASSEMBLY__ */ 1429 1429 1430 - #ifndef io_remap_pfn_range 1431 - #define io_remap_pfn_range remap_pfn_range 1432 - #endif 1433 - 1434 1430 #ifndef has_transparent_hugepage 1435 1431 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 1436 1432 #define has_transparent_hugepage() 1
+5 -35
include/linux/phy.h
··· 147 147 PHY_INTERFACE_MODE_MAX, 148 148 } phy_interface_t; 149 149 150 - /** 150 + /* 151 151 * phy_supported_speeds - return all speeds currently supported by a PHY device 152 - * @phy: The PHY device to return supported speeds of. 153 - * @speeds: buffer to store supported speeds in. 154 - * @size: size of speeds buffer. 155 - * 156 - * Description: Returns the number of supported speeds, and fills 157 - * the speeds buffer with the supported speeds. If speeds buffer is 158 - * too small to contain all currently supported speeds, will return as 159 - * many speeds as can fit. 160 152 */ 161 153 unsigned int phy_supported_speeds(struct phy_device *phy, 162 154 unsigned int *speeds, ··· 1014 1022 regnum, mask, set); 1015 1023 } 1016 1024 1017 - /** 1025 + /* 1018 1026 * phy_read_mmd - Convenience function for reading a register 1019 1027 * from an MMD on a given PHY. 1020 - * @phydev: The phy_device struct 1021 - * @devad: The MMD to read from 1022 - * @regnum: The register on the MMD to read 1023 - * 1024 - * Same rules as for phy_read(); 1025 1028 */ 1026 1029 int phy_read_mmd(struct phy_device *phydev, int devad, u32 regnum); 1027 1030 ··· 1051 1064 __ret; \ 1052 1065 }) 1053 1066 1054 - /** 1067 + /* 1055 1068 * __phy_read_mmd - Convenience function for reading a register 1056 1069 * from an MMD on a given PHY. 1057 - * @phydev: The phy_device struct 1058 - * @devad: The MMD to read from 1059 - * @regnum: The register on the MMD to read 1060 - * 1061 - * Same rules as for __phy_read(); 1062 1070 */ 1063 1071 int __phy_read_mmd(struct phy_device *phydev, int devad, u32 regnum); 1064 1072 1065 - /** 1073 + /* 1066 1074 * phy_write_mmd - Convenience function for writing a register 1067 1075 * on an MMD on a given PHY. 1068 - * @phydev: The phy_device struct 1069 - * @devad: The MMD to write to 1070 - * @regnum: The register on the MMD to read 1071 - * @val: value to write to @regnum 1072 - * 1073 - * Same rules as for phy_write(); 1074 1076 */ 1075 1077 int phy_write_mmd(struct phy_device *phydev, int devad, u32 regnum, u16 val); 1076 1078 1077 - /** 1079 + /* 1078 1080 * __phy_write_mmd - Convenience function for writing a register 1079 1081 * on an MMD on a given PHY. 1080 - * @phydev: The phy_device struct 1081 - * @devad: The MMD to write to 1082 - * @regnum: The register on the MMD to read 1083 - * @val: value to write to @regnum 1084 - * 1085 - * Same rules as for __phy_write(); 1086 1082 */ 1087 1083 int __phy_write_mmd(struct phy_device *phydev, int devad, u32 regnum, u16 val); 1088 1084
+2 -4
include/linux/pm_runtime.h
··· 54 54 extern void pm_runtime_update_max_time_suspended(struct device *dev, 55 55 s64 delta_ns); 56 56 extern void pm_runtime_set_memalloc_noio(struct device *dev, bool enable); 57 - extern void pm_runtime_clean_up_links(struct device *dev); 58 57 extern void pm_runtime_get_suppliers(struct device *dev); 59 58 extern void pm_runtime_put_suppliers(struct device *dev); 60 59 extern void pm_runtime_new_link(struct device *dev); 61 - extern void pm_runtime_drop_link(struct device *dev); 60 + extern void pm_runtime_drop_link(struct device_link *link); 62 61 63 62 /** 64 63 * pm_runtime_get_if_in_use - Conditionally bump up runtime PM usage counter. ··· 275 276 struct device *dev) { return 0; } 276 277 static inline void pm_runtime_set_memalloc_noio(struct device *dev, 277 278 bool enable){} 278 - static inline void pm_runtime_clean_up_links(struct device *dev) {} 279 279 static inline void pm_runtime_get_suppliers(struct device *dev) {} 280 280 static inline void pm_runtime_put_suppliers(struct device *dev) {} 281 281 static inline void pm_runtime_new_link(struct device *dev) {} 282 - static inline void pm_runtime_drop_link(struct device *dev) {} 282 + static inline void pm_runtime_drop_link(struct device_link *link) {} 283 283 284 284 #endif /* !CONFIG_PM */ 285 285
+75 -75
include/linux/refcount.h
··· 147 147 return atomic_read(&r->refs); 148 148 } 149 149 150 - /** 151 - * refcount_add_not_zero - add a value to a refcount unless it is 0 152 - * @i: the value to add to the refcount 153 - * @r: the refcount 154 - * 155 - * Will saturate at REFCOUNT_SATURATED and WARN. 156 - * 157 - * Provides no memory ordering, it is assumed the caller has guaranteed the 158 - * object memory to be stable (RCU, etc.). It does provide a control dependency 159 - * and thereby orders future stores. See the comment on top. 160 - * 161 - * Use of this function is not recommended for the normal reference counting 162 - * use case in which references are taken and released one at a time. In these 163 - * cases, refcount_inc(), or one of its variants, should instead be used to 164 - * increment a reference count. 165 - * 166 - * Return: false if the passed refcount is 0, true otherwise 167 - */ 168 150 static inline __must_check bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp) 169 151 { 170 152 int old = refcount_read(r); ··· 165 183 return old; 166 184 } 167 185 186 + /** 187 + * refcount_add_not_zero - add a value to a refcount unless it is 0 188 + * @i: the value to add to the refcount 189 + * @r: the refcount 190 + * 191 + * Will saturate at REFCOUNT_SATURATED and WARN. 192 + * 193 + * Provides no memory ordering, it is assumed the caller has guaranteed the 194 + * object memory to be stable (RCU, etc.). It does provide a control dependency 195 + * and thereby orders future stores. See the comment on top. 196 + * 197 + * Use of this function is not recommended for the normal reference counting 198 + * use case in which references are taken and released one at a time. In these 199 + * cases, refcount_inc(), or one of its variants, should instead be used to 200 + * increment a reference count. 201 + * 202 + * Return: false if the passed refcount is 0, true otherwise 203 + */ 168 204 static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) 169 205 { 170 206 return __refcount_add_not_zero(i, r, NULL); 207 + } 208 + 209 + static inline void __refcount_add(int i, refcount_t *r, int *oldp) 210 + { 211 + int old = atomic_fetch_add_relaxed(i, &r->refs); 212 + 213 + if (oldp) 214 + *oldp = old; 215 + 216 + if (unlikely(!old)) 217 + refcount_warn_saturate(r, REFCOUNT_ADD_UAF); 218 + else if (unlikely(old < 0 || old + i < 0)) 219 + refcount_warn_saturate(r, REFCOUNT_ADD_OVF); 171 220 } 172 221 173 222 /** ··· 217 204 * cases, refcount_inc(), or one of its variants, should instead be used to 218 205 * increment a reference count. 219 206 */ 220 - static inline void __refcount_add(int i, refcount_t *r, int *oldp) 221 - { 222 - int old = atomic_fetch_add_relaxed(i, &r->refs); 223 - 224 - if (oldp) 225 - *oldp = old; 226 - 227 - if (unlikely(!old)) 228 - refcount_warn_saturate(r, REFCOUNT_ADD_UAF); 229 - else if (unlikely(old < 0 || old + i < 0)) 230 - refcount_warn_saturate(r, REFCOUNT_ADD_OVF); 231 - } 232 - 233 207 static inline void refcount_add(int i, refcount_t *r) 234 208 { 235 209 __refcount_add(i, r, NULL); 210 + } 211 + 212 + static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp) 213 + { 214 + return __refcount_add_not_zero(1, r, oldp); 236 215 } 237 216 238 217 /** ··· 240 235 * 241 236 * Return: true if the increment was successful, false otherwise 242 237 */ 243 - static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp) 244 - { 245 - return __refcount_add_not_zero(1, r, oldp); 246 - } 247 - 248 238 static inline __must_check bool refcount_inc_not_zero(refcount_t *r) 249 239 { 250 240 return __refcount_inc_not_zero(r, NULL); 241 + } 242 + 243 + static inline void __refcount_inc(refcount_t *r, int *oldp) 244 + { 245 + __refcount_add(1, r, oldp); 251 246 } 252 247 253 248 /** ··· 262 257 * Will WARN if the refcount is 0, as this represents a possible use-after-free 263 258 * condition. 264 259 */ 265 - static inline void __refcount_inc(refcount_t *r, int *oldp) 266 - { 267 - __refcount_add(1, r, oldp); 268 - } 269 - 270 260 static inline void refcount_inc(refcount_t *r) 271 261 { 272 262 __refcount_inc(r, NULL); 263 + } 264 + 265 + static inline __must_check bool __refcount_sub_and_test(int i, refcount_t *r, int *oldp) 266 + { 267 + int old = atomic_fetch_sub_release(i, &r->refs); 268 + 269 + if (oldp) 270 + *oldp = old; 271 + 272 + if (old == i) { 273 + smp_acquire__after_ctrl_dep(); 274 + return true; 275 + } 276 + 277 + if (unlikely(old < 0 || old - i < 0)) 278 + refcount_warn_saturate(r, REFCOUNT_SUB_UAF); 279 + 280 + return false; 273 281 } 274 282 275 283 /** ··· 305 287 * 306 288 * Return: true if the resulting refcount is 0, false otherwise 307 289 */ 308 - static inline __must_check bool __refcount_sub_and_test(int i, refcount_t *r, int *oldp) 309 - { 310 - int old = atomic_fetch_sub_release(i, &r->refs); 311 - 312 - if (oldp) 313 - *oldp = old; 314 - 315 - if (old == i) { 316 - smp_acquire__after_ctrl_dep(); 317 - return true; 318 - } 319 - 320 - if (unlikely(old < 0 || old - i < 0)) 321 - refcount_warn_saturate(r, REFCOUNT_SUB_UAF); 322 - 323 - return false; 324 - } 325 - 326 290 static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r) 327 291 { 328 292 return __refcount_sub_and_test(i, r, NULL); 293 + } 294 + 295 + static inline __must_check bool __refcount_dec_and_test(refcount_t *r, int *oldp) 296 + { 297 + return __refcount_sub_and_test(1, r, oldp); 329 298 } 330 299 331 300 /** ··· 328 323 * 329 324 * Return: true if the resulting refcount is 0, false otherwise 330 325 */ 331 - static inline __must_check bool __refcount_dec_and_test(refcount_t *r, int *oldp) 332 - { 333 - return __refcount_sub_and_test(1, r, oldp); 334 - } 335 - 336 326 static inline __must_check bool refcount_dec_and_test(refcount_t *r) 337 327 { 338 328 return __refcount_dec_and_test(r, NULL); 329 + } 330 + 331 + static inline void __refcount_dec(refcount_t *r, int *oldp) 332 + { 333 + int old = atomic_fetch_sub_release(1, &r->refs); 334 + 335 + if (oldp) 336 + *oldp = old; 337 + 338 + if (unlikely(old <= 1)) 339 + refcount_warn_saturate(r, REFCOUNT_DEC_LEAK); 339 340 } 340 341 341 342 /** ··· 354 343 * Provides release memory ordering, such that prior loads and stores are done 355 344 * before. 356 345 */ 357 - static inline void __refcount_dec(refcount_t *r, int *oldp) 358 - { 359 - int old = atomic_fetch_sub_release(1, &r->refs); 360 - 361 - if (oldp) 362 - *oldp = old; 363 - 364 - if (unlikely(old <= 1)) 365 - refcount_warn_saturate(r, REFCOUNT_DEC_LEAK); 366 - } 367 - 368 346 static inline void refcount_dec(refcount_t *r) 369 347 { 370 348 __refcount_dec(r, NULL);
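The refcount.h hunks above reorder definitions so each kernel-doc comment sits on the public wrapper, while the double-underscore helper does the atomic work and reports the pre-operation value through `*oldp`. A rough userspace analogue of that helper/wrapper pattern, using C11 atomics and hypothetical `demo_*` names — the kernel's saturation handling (REFCOUNT_SATURATED and the warning paths) is deliberately omitted to keep the sketch short:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Helper: atomically add i unless the count is already 0, and report the
 * pre-op value through *oldp when the caller asks for it. */
static bool __demo_add_not_zero(int i, atomic_int *refs, int *oldp)
{
    int old = atomic_load_explicit(refs, memory_order_relaxed);

    do {
        if (!old)
            break;  /* count already hit zero: refuse to resurrect it */
    } while (!atomic_compare_exchange_weak_explicit(refs, &old, old + i,
                memory_order_relaxed, memory_order_relaxed));

    if (oldp)
        *oldp = old;

    return old != 0;
}

/* Public wrapper: same operation, caller doesn't care about the old value,
 * so it just passes NULL -- the shape the diff gives every refcount op. */
static bool demo_add_not_zero(int i, atomic_int *refs)
{
    return __demo_add_not_zero(i, refs, NULL);
}
```

Exposing the `*oldp` variant lets callers that need the previous count (e.g. for debug accounting) share one implementation with the common no-report path, instead of duplicating the compare-exchange loop.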
+1
include/linux/seq_file.h
··· 107 107 char *mangle_path(char *s, const char *p, const char *esc); 108 108 int seq_open(struct file *, const struct seq_operations *); 109 109 ssize_t seq_read(struct file *, char __user *, size_t, loff_t *); 110 + ssize_t seq_read_iter(struct kiocb *iocb, struct iov_iter *iter); 110 111 loff_t seq_lseek(struct file *, loff_t, int); 111 112 int seq_release(struct inode *, struct file *); 112 113 int seq_write(struct seq_file *seq, const void *data, size_t len);
+5 -4
include/net/cfg80211.h
··· 1444 1444 enum cfg80211_station_type statype); 1445 1445 1446 1446 /** 1447 - * enum station_info_rate_flags - bitrate info flags 1447 + * enum rate_info_flags - bitrate info flags 1448 1448 * 1449 1449 * Used by the driver to indicate the specific rate transmission 1450 1450 * type for 802.11n transmissions. ··· 1517 1517 }; 1518 1518 1519 1519 /** 1520 - * enum station_info_rate_flags - bitrate info flags 1520 + * enum bss_param_flags - bitrate info flags 1521 1521 * 1522 1522 * Used by the driver to indicate the specific rate transmission 1523 1523 * type for 802.11n transmissions. ··· 6467 6467 struct ieee80211_channel *channel, gfp_t gfp); 6468 6468 6469 6469 /** 6470 - * cfg80211_notify_new_candidate - notify cfg80211 of a new mesh peer candidate 6470 + * cfg80211_notify_new_peer_candidate - notify cfg80211 of a new mesh peer 6471 + * candidate 6471 6472 * 6472 6473 * @dev: network device 6473 6474 * @macaddr: the MAC address of the new candidate ··· 7607 7606 void cfg80211_unregister_wdev(struct wireless_dev *wdev); 7608 7607 7609 7608 /** 7610 - * struct cfg80211_ft_event - FT Information Elements 7609 + * struct cfg80211_ft_event_params - FT Information Elements 7611 7610 * @ies: FT IEs 7612 7611 * @ies_len: length of the FT IE in bytes 7613 7612 * @target_ap: target AP's MAC address
+4 -3
include/net/mac80211.h
··· 3311 3311 }; 3312 3312 3313 3313 /** 3314 - * enum ieee80211_reconfig_complete_type - reconfig type 3314 + * enum ieee80211_reconfig_type - reconfig type 3315 3315 * 3316 3316 * This enum is used by the reconfig_complete() callback to indicate what 3317 3317 * reconfiguration type was completed. ··· 6334 6334 int band, struct ieee80211_sta **sta); 6335 6335 6336 6336 /** 6337 - * Sanity-check and parse the radiotap header of injected frames 6337 + * ieee80211_parse_tx_radiotap - Sanity-check and parse the radiotap header 6338 + * of injected frames 6338 6339 * @skb: packet injected by userspace 6339 6340 * @dev: the &struct device of this 802.11 device 6340 6341 */ ··· 6390 6389 void ieee80211_update_p2p_noa(struct ieee80211_noa_data *data, u32 tsf); 6391 6390 6392 6391 /** 6393 - * ieee80211_tdls_oper - request userspace to perform a TDLS operation 6392 + * ieee80211_tdls_oper_request - request userspace to perform a TDLS operation 6394 6393 * @vif: virtual interface 6395 6394 * @peer: the peer's destination address 6396 6395 * @oper: the requested TDLS operation
+1 -1
include/sound/control.h
··· 42 42 snd_ctl_elem_iface_t iface; /* interface identifier */ 43 43 unsigned int device; /* device/client number */ 44 44 unsigned int subdevice; /* subdevice (substream) number */ 45 - const unsigned char *name; /* ASCII name of item */ 45 + const char *name; /* ASCII name of item */ 46 46 unsigned int index; /* index of item */ 47 47 unsigned int access; /* access rights */ 48 48 unsigned int count; /* count of same elements */
+2 -1
include/sound/core.h
··· 332 332 #define snd_BUG() WARN(1, "BUG?\n") 333 333 334 334 /** 335 - * Suppress high rates of output when CONFIG_SND_DEBUG is enabled. 335 + * snd_printd_ratelimit - Suppress high rates of output when 336 + * CONFIG_SND_DEBUG is enabled. 336 337 */ 337 338 #define snd_printd_ratelimit() printk_ratelimit() 338 339
+2 -2
include/sound/pcm.h
··· 1284 1284 } 1285 1285 1286 1286 /** 1287 - * snd_pcm_sgbuf_chunk_size - Compute the max size that fits within the contig. 1288 - * page from the given size 1287 + * snd_pcm_sgbuf_get_chunk_size - Compute the max size that fits within the 1288 + * contig. page from the given size 1289 1289 * @substream: PCM substream 1290 1290 * @ofs: byte offset 1291 1291 * @size: byte size to examine
+1
include/uapi/linux/icmpv6.h
··· 138 138 #define ICMPV6_HDR_FIELD 0 139 139 #define ICMPV6_UNK_NEXTHDR 1 140 140 #define ICMPV6_UNK_OPTION 2 141 + #define ICMPV6_HDR_INCOMP 3 141 142 142 143 /* 143 144 * constants for (set|get)sockopt
+1 -1
include/uapi/sound/compress_offload.h
··· 144 144 __u32 value[8]; 145 145 } __attribute__((packed, aligned(4))); 146 146 147 - /** 147 + /* 148 148 * compress path ioctl definitions 149 149 * SNDRV_COMPRESS_GET_CAPS: Query capability of DSP 150 150 * SNDRV_COMPRESS_GET_CODEC_CAPS: Query capability of a codec
-3
include/video/imx-ipu-v3.h
··· 484 484 485 485 enum ipu_color_space ipu_drm_fourcc_to_colorspace(u32 drm_fourcc); 486 486 enum ipu_color_space ipu_pixelformat_to_colorspace(u32 pixelformat); 487 - enum ipu_color_space ipu_mbus_code_to_colorspace(u32 mbus_code); 488 - int ipu_stride_to_bytes(u32 pixel_stride, u32 pixelformat); 489 - bool ipu_pixelformat_is_planar(u32 pixelformat); 490 487 int ipu_degrees_to_rot_mode(enum ipu_rotate_mode *mode, int degrees, 491 488 bool hflip, bool vflip); 492 489 int ipu_rot_mode_to_degrees(int *degrees, enum ipu_rotate_mode mode,
+2 -2
kernel/entry/common.c
··· 337 337 * already contains a warning when RCU is not watching, so no point 338 338 * in having another one here. 339 339 */ 340 + lockdep_hardirqs_off(CALLER_ADDR0); 340 341 instrumentation_begin(); 341 342 rcu_irq_enter_check_tick(); 342 - /* Use the combo lockdep/tracing function */ 343 - trace_hardirqs_off(); 343 + trace_hardirqs_off_finish(); 344 344 instrumentation_end(); 345 345 346 346 return ret;
+5 -7
kernel/events/core.c
··· 10085 10085 if (token == IF_SRC_FILE || token == IF_SRC_FILEADDR) { 10086 10086 int fpos = token == IF_SRC_FILE ? 2 : 1; 10087 10087 10088 + kfree(filename); 10088 10089 filename = match_strdup(&args[fpos]); 10089 10090 if (!filename) { 10090 10091 ret = -ENOMEM; ··· 10132 10131 */ 10133 10132 ret = -EOPNOTSUPP; 10134 10133 if (!event->ctx->task) 10135 - goto fail_free_name; 10134 + goto fail; 10136 10135 10137 10136 /* look up the path and grab its inode */ 10138 10137 ret = kern_path(filename, LOOKUP_FOLLOW, 10139 10138 &filter->path); 10140 10139 if (ret) 10141 - goto fail_free_name; 10142 - 10143 - kfree(filename); 10144 - filename = NULL; 10140 + goto fail; 10145 10141 10146 10142 ret = -EINVAL; 10147 10143 if (!filter->path.dentry || ··· 10158 10160 if (state != IF_STATE_ACTION) 10159 10161 goto fail; 10160 10162 10163 + kfree(filename); 10161 10164 kfree(orig); 10162 10165 10163 10166 return 0; 10164 10167 10165 - fail_free_name: 10166 - kfree(filename); 10167 10168 fail: 10169 + kfree(filename); 10168 10170 free_filters_list(filters); 10169 10171 kfree(orig); 10170 10172
+5 -5
kernel/fork.c
··· 2167 2167 /* ok, now we should be set up.. */ 2168 2168 p->pid = pid_nr(pid); 2169 2169 if (clone_flags & CLONE_THREAD) { 2170 - p->exit_signal = -1; 2171 2170 p->group_leader = current->group_leader; 2172 2171 p->tgid = current->tgid; 2173 2172 } else { 2174 - if (clone_flags & CLONE_PARENT) 2175 - p->exit_signal = current->group_leader->exit_signal; 2176 - else 2177 - p->exit_signal = args->exit_signal; 2178 2173 p->group_leader = p; 2179 2174 p->tgid = p->pid; 2180 2175 } ··· 2213 2218 if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) { 2214 2219 p->real_parent = current->real_parent; 2215 2220 p->parent_exec_id = current->parent_exec_id; 2221 + if (clone_flags & CLONE_THREAD) 2222 + p->exit_signal = -1; 2223 + else 2224 + p->exit_signal = current->group_leader->exit_signal; 2216 2225 } else { 2217 2226 p->real_parent = current; 2218 2227 p->parent_exec_id = current->self_exec_id; 2228 + p->exit_signal = args->exit_signal; 2219 2229 } 2220 2230 2221 2231 klp_copy_process(p);
+14 -2
kernel/futex.c
··· 2380 2380 } 2381 2381 2382 2382 /* 2383 - * Since we just failed the trylock; there must be an owner. 2383 + * The trylock just failed, so either there is an owner or 2384 + * there is a higher priority waiter than this one. 2384 2385 */ 2385 2386 newowner = rt_mutex_owner(&pi_state->pi_mutex); 2386 - BUG_ON(!newowner); 2387 + /* 2388 + * If the higher priority waiter has not yet taken over the 2389 + * rtmutex then newowner is NULL. We can't return here with 2390 + * that state because it's inconsistent vs. the user space 2391 + * state. So drop the locks and try again. It's a valid 2392 + * situation and not any different from the other retry 2393 + * conditions. 2394 + */ 2395 + if (unlikely(!newowner)) { 2396 + err = -EAGAIN; 2397 + goto handle_err; 2398 + } 2387 2399 } else { 2388 2400 WARN_ON_ONCE(argowner != current); 2389 2401 if (oldowner == current) {
+1 -2
kernel/hung_task.c
··· 225 225 * Process updating of timeout sysctl 226 226 */ 227 227 int proc_dohung_task_timeout_secs(struct ctl_table *table, int write, 228 - void __user *buffer, 229 - size_t *lenp, loff_t *ppos) 228 + void *buffer, size_t *lenp, loff_t *ppos) 230 229 { 231 230 int ret; 232 231
+1
kernel/irq/Kconfig
··· 82 82 # Generic IRQ IPI support 83 83 config GENERIC_IRQ_IPI 84 84 bool 85 + select IRQ_DOMAIN_HIERARCHY 85 86 86 87 # Generic MSI interrupt support 87 88 config GENERIC_MSI_IRQ
+21 -4
kernel/kprobes.c
··· 1249 1249 1250 1250 *head = &kretprobe_inst_table[hash]; 1251 1251 hlist_lock = kretprobe_table_lock_ptr(hash); 1252 - raw_spin_lock_irqsave(hlist_lock, *flags); 1252 + /* 1253 + * Nested is a workaround that will soon not be needed. 1254 + * There's other protections that make sure the same lock 1255 + * is not taken on the same CPU that lockdep is unaware of. 1256 + * Differentiate when it is taken in NMI context. 1257 + */ 1258 + raw_spin_lock_irqsave_nested(hlist_lock, *flags, !!in_nmi()); 1253 1259 } 1254 1260 NOKPROBE_SYMBOL(kretprobe_hash_lock); 1255 1261 ··· 1264 1258 __acquires(hlist_lock) 1265 1259 { 1266 1260 raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash); 1267 - raw_spin_lock_irqsave(hlist_lock, *flags); 1261 + /* 1262 + * Nested is a workaround that will soon not be needed. 1263 + * There's other protections that make sure the same lock 1264 + * is not taken on the same CPU that lockdep is unaware of. 1265 + * Differentiate when it is taken in NMI context. 1266 + */ 1267 + raw_spin_lock_irqsave_nested(hlist_lock, *flags, !!in_nmi()); 1268 1268 } 1269 1269 NOKPROBE_SYMBOL(kretprobe_table_lock); 1270 1270 ··· 2040 2028 2041 2029 /* TODO: consider to only swap the RA after the last pre_handler fired */ 2042 2030 hash = hash_ptr(current, KPROBE_HASH_BITS); 2043 - raw_spin_lock_irqsave(&rp->lock, flags); 2031 + /* 2032 + * Nested is a workaround that will soon not be needed. 2033 + * There's other protections that make sure the same lock 2034 + * is not taken on the same CPU that lockdep is unaware of. 2035 + */ 2036 + raw_spin_lock_irqsave_nested(&rp->lock, flags, 1); 2044 2037 if (!hlist_empty(&rp->free_instances)) { 2045 2038 ri = hlist_entry(rp->free_instances.first, 2046 2039 struct kretprobe_instance, hlist); ··· 2056 2039 ri->task = current; 2057 2040 2058 2041 if (rp->entry_handler && rp->entry_handler(ri, regs)) { 2059 - raw_spin_lock_irqsave(&rp->lock, flags); 2042 + raw_spin_lock_irqsave_nested(&rp->lock, flags, 1); 2060 2043 hlist_add_head(&ri->hlist, &rp->free_instances); 2061 2044 raw_spin_unlock_irqrestore(&rp->lock, flags); 2062 2045 return 0;
+2 -1
kernel/kthread.c
··· 897 897 /* Move the work from worker->delayed_work_list. */ 898 898 WARN_ON_ONCE(list_empty(&work->node)); 899 899 list_del_init(&work->node); 900 - kthread_insert_work(worker, work, &worker->work_list); 900 + if (!work->canceling) 901 + kthread_insert_work(worker, work, &worker->work_list); 901 902 902 903 raw_spin_unlock_irqrestore(&worker->lock, flags); 903 904 }
+10 -12
kernel/sched/cpufreq_schedutil.c
··· 102 102 static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time, 103 103 unsigned int next_freq) 104 104 { 105 - if (sg_policy->next_freq == next_freq && 106 - !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS)) 107 - return false; 105 + if (!sg_policy->need_freq_update) { 106 + if (sg_policy->next_freq == next_freq) 107 + return false; 108 + } else { 109 + sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS); 110 + } 108 111 109 112 sg_policy->next_freq = next_freq; 110 113 sg_policy->last_freq_update_time = time; ··· 165 162 166 163 freq = map_util_freq(util, freq, max); 167 164 168 - if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update && 169 - !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS)) 165 + if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update) 170 166 return sg_policy->next_freq; 171 167 172 - sg_policy->need_freq_update = false; 173 168 sg_policy->cached_raw_freq = freq; 174 169 return cpufreq_driver_resolve_freq(policy, freq); 175 170 } ··· 443 442 struct sugov_policy *sg_policy = sg_cpu->sg_policy; 444 443 unsigned long util, max; 445 444 unsigned int next_f; 446 - bool busy; 447 445 unsigned int cached_freq = sg_policy->cached_raw_freq; 448 446 449 447 sugov_iowait_boost(sg_cpu, time, flags); ··· 453 453 if (!sugov_should_update_freq(sg_policy, time)) 454 454 return; 455 455 456 - /* Limits may have changed, don't skip frequency update */ 457 - busy = !sg_policy->need_freq_update && sugov_cpu_is_busy(sg_cpu); 458 - 459 456 util = sugov_get_util(sg_cpu); 460 457 max = sg_cpu->max; 461 458 util = sugov_iowait_apply(sg_cpu, time, util, max); ··· 461 464 * Do not reduce the frequency if the CPU has not been idle 462 465 * recently, as the reduction is likely to be premature then. 463 466 */ 464 - if (busy && next_f < sg_policy->next_freq) { 467 + if (sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) { 465 468 next_f = sg_policy->next_freq; 466 469 467 470 /* Restore cached freq as next_freq has changed */ ··· 826 829 sg_policy->next_freq = 0; 827 830 sg_policy->work_in_progress = false; 828 831 sg_policy->limits_changed = false; 829 - sg_policy->need_freq_update = false; 830 832 sg_policy->cached_raw_freq = 0; 833 + 834 + sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS); 831 835 832 836 for_each_cpu(cpu, policy->cpus) { 833 837 struct sugov_cpu *sg_cpu = &per_cpu(sugov_cpu, cpu);
+10 -9
kernel/signal.c
··· 391 391 392 392 void task_join_group_stop(struct task_struct *task) 393 393 { 394 + unsigned long mask = current->jobctl & JOBCTL_STOP_SIGMASK; 395 + struct signal_struct *sig = current->signal; 396 + 397 + if (sig->group_stop_count) { 398 + sig->group_stop_count++; 399 + mask |= JOBCTL_STOP_CONSUME; 400 + } else if (!(sig->flags & SIGNAL_STOP_STOPPED)) 401 + return; 402 + 394 403 /* Have the new thread join an on-going signal group stop */ 395 - unsigned long jobctl = current->jobctl; 396 - if (jobctl & JOBCTL_STOP_PENDING) { 397 - struct signal_struct *sig = current->signal; 398 - unsigned long signr = jobctl & JOBCTL_STOP_SIGMASK; 399 - unsigned long gstop = JOBCTL_STOP_PENDING | JOBCTL_STOP_CONSUME; 400 - if (task_set_jobctl_pending(task, signr | gstop)) { 401 - sig->group_stop_count++; 402 - } 403 - } 404 + task_set_jobctl_pending(task, mask | JOBCTL_STOP_PENDING); 404 405 } 405 406 406 407 /*
+46 -12
kernel/trace/ring_buffer.c
··· 438 438 }; 439 439 /* 440 440 * Used for which event context the event is in. 441 - * NMI = 0 442 - * IRQ = 1 443 - * SOFTIRQ = 2 444 - * NORMAL = 3 441 + * TRANSITION = 0 442 + * NMI = 1 443 + * IRQ = 2 444 + * SOFTIRQ = 3 445 + * NORMAL = 4 445 446 * 446 447 * See trace_recursive_lock() comment below for more details. 447 448 */ 448 449 enum { 450 + RB_CTX_TRANSITION, 449 451 RB_CTX_NMI, 450 452 RB_CTX_IRQ, 451 453 RB_CTX_SOFTIRQ, ··· 3016 3014 * a bit of overhead in something as critical as function tracing, 3017 3015 * we use a bitmask trick. 3018 3016 * 3019 - * bit 0 = NMI context 3020 - * bit 1 = IRQ context 3021 - * bit 2 = SoftIRQ context 3022 - * bit 3 = normal context. 3017 + * bit 1 = NMI context 3018 + * bit 2 = IRQ context 3019 + * bit 3 = SoftIRQ context 3020 + * bit 4 = normal context. 3023 3021 * 3024 3022 * This works because this is the order of contexts that can 3025 3023 * preempt other contexts. A SoftIRQ never preempts an IRQ ··· 3042 3040 * The least significant bit can be cleared this way, and it 3043 3041 * just so happens that it is the same bit corresponding to 3044 3042 * the current context. 3043 + * 3044 + * Now the TRANSITION bit breaks the above slightly. The TRANSITION bit 3045 + * is set when a recursion is detected at the current context, and if 3046 + * the TRANSITION bit is already set, it will fail the recursion. 3047 + * This is needed because there's a lag between the changing of 3048 + * interrupt context and updating the preempt count. In this case, 3049 + * a false positive will be found. To handle this, one extra recursion 3050 + * is allowed, and this is done by the TRANSITION bit. If the TRANSITION 3051 + * bit is already set, then it is considered a recursion and the function 3052 + * ends. Otherwise, the TRANSITION bit is set, and that bit is returned. 3053 + * 3054 + * On the trace_recursive_unlock(), the TRANSITION bit will be the first 3055 + * to be cleared. Even if it wasn't the context that set it. That is, 3056 + * if an interrupt comes in while NORMAL bit is set and the ring buffer 3057 + * is called before preempt_count() is updated, since the check will 3058 + * be on the NORMAL bit, the TRANSITION bit will then be set. If an 3059 + * NMI then comes in, it will set the NMI bit, but when the NMI code 3060 + * does the trace_recursive_unlock() it will clear the TRANSITION bit 3061 + * and leave the NMI bit set. But this is fine, because the interrupt 3062 + * code that set the TRANSITION bit will then clear the NMI bit when it 3063 + * calls trace_recursive_unlock(). If another NMI comes in, it will 3064 + * set the TRANSITION bit and continue. 3065 + * 3066 + * Note: The TRANSITION bit only handles a single transition between context. 3045 3067 */ 3046 3068 3047 3069 static __always_inline int ··· 3081 3055 bit = pc & NMI_MASK ? RB_CTX_NMI : 3082 3056 pc & HARDIRQ_MASK ? RB_CTX_IRQ : RB_CTX_SOFTIRQ; 3083 3057 3084 - if (unlikely(val & (1 << (bit + cpu_buffer->nest)))) 3085 - return 1; 3058 + if (unlikely(val & (1 << (bit + cpu_buffer->nest)))) { 3059 + /* 3060 + * It is possible that this was called by transitioning 3061 + * between interrupt context, and preempt_count() has not 3062 + * been updated yet. In this case, use the TRANSITION bit. 3063 + */ 3064 + bit = RB_CTX_TRANSITION; 3065 + if (val & (1 << (bit + cpu_buffer->nest))) 3066 + return 1; 3067 + } 3086 3068 3087 3069 val |= (1 << (bit + cpu_buffer->nest)); 3088 3070 cpu_buffer->current_context = val; ··· 3105 3071 cpu_buffer->current_context - (1 << cpu_buffer->nest); 3106 3072 } 3107 3073 3108 - /* The recursive locking above uses 4 bits */ 3109 - #define NESTED_BITS 4 3074 + /* The recursive locking above uses 5 bits */ 3075 + #define NESTED_BITS 5 3110 3076 3111 3077 /** 3112 3078 * ring_buffer_nest_start - Allow to trace while nested
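The ring buffer change above grows the recursion bitmask by a TRANSITION bit that tolerates exactly one repeat hit on the current context, covering the window where an interrupt arrives before preempt_count() is updated. A minimal userspace sketch of the idea (the names and the flat `current_context` variable are illustrative, not the kernel's per-CPU state, and `nest` is omitted):

```c
#include <assert.h>

/* Contexts NMI/IRQ/SOFTIRQ/NORMAL occupy bits 1..4; a repeat hit on
 * the current context's bit is absorbed once by the TRANSITION bit
 * (bit 0), and only a second repeat is rejected as real recursion. */
enum { CTX_TRANSITION, CTX_NMI, CTX_IRQ, CTX_SOFTIRQ, CTX_NORMAL };

static unsigned int current_context;

/* Returns 1 if the "lock" was taken, 0 if recursion was detected. */
static int recursive_lock(int bit)
{
	unsigned int val = current_context;

	if (val & (1u << bit)) {
		/* Possibly a context transition before the preempt
		 * count was updated: fall back to the TRANSITION bit,
		 * but allow that fallback only once. */
		bit = CTX_TRANSITION;
		if (val & (1u << bit))
			return 0;	/* real recursion: refuse */
	}
	current_context = val | (1u << bit);
	return 1;
}

static void recursive_unlock(void)
{
	/* Clear the least significant set bit, so the TRANSITION bit
	 * is always released first, whichever context set it. */
	current_context &= current_context - 1;
}
```

The unlock relies on the same "clear the lowest set bit" trick the kernel comment describes, which is why the TRANSITION bit must be bit 0.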
+3 -3
kernel/trace/trace.c
··· 2750 2750 /* 2751 2751 * If tracing is off, but we have triggers enabled 2752 2752 * we still need to look at the event data. Use the temp_buffer 2753 - * to store the trace event for the tigger to use. It's recusive 2753 + * to store the trace event for the trigger to use. It's recursive 2754 2754 * safe and will not be recorded anywhere. 2755 2755 */ 2756 2756 if (!entry && trace_file->flags & EVENT_FILE_FL_TRIGGER_COND) { ··· 2952 2952 stackidx = __this_cpu_inc_return(ftrace_stack_reserve) - 1; 2953 2953 2954 2954 /* This should never happen. If it does, yell once and skip */ 2955 - if (WARN_ON_ONCE(stackidx > FTRACE_KSTACK_NESTING)) 2955 + if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING)) 2956 2956 goto out; 2957 2957 2958 2958 /* ··· 3132 3132 3133 3133 /* Interrupts must see nesting incremented before we use the buffer */ 3134 3134 barrier(); 3135 - return &buffer->buffer[buffer->nesting][0]; 3135 + return &buffer->buffer[buffer->nesting - 1][0]; 3136 3136 } 3137 3137 3138 3138 static void put_trace_buf(void)
+23 -3
kernel/trace/trace.h
··· 637 637 * function is called to clear it. 638 638 */ 639 639 TRACE_GRAPH_NOTRACE_BIT, 640 + 641 + /* 642 + * When transitioning between context, the preempt_count() may 643 + * not be correct. Allow for a single recursion to cover this case. 644 + */ 645 + TRACE_TRANSITION_BIT, 640 646 }; 641 647 642 648 #define trace_recursion_set(bit) do { (current)->trace_recursion |= (1<<(bit)); } while (0) ··· 697 691 return 0; 698 692 699 693 bit = trace_get_context_bit() + start; 700 - if (unlikely(val & (1 << bit))) 701 - return -1; 694 + if (unlikely(val & (1 << bit))) { 695 + /* 696 + * It could be that preempt_count has not been updated during 697 + * a switch between contexts. Allow for a single recursion. 698 + */ 699 + bit = TRACE_TRANSITION_BIT; 700 + if (trace_recursion_test(bit)) 701 + return -1; 702 + trace_recursion_set(bit); 703 + barrier(); 704 + return bit + 1; 705 + } 706 + 707 + /* Normal check passed, clear the transition to allow it again */ 708 + trace_recursion_clear(TRACE_TRANSITION_BIT); 702 709 703 710 val |= 1 << bit; 704 711 current->trace_recursion = val; 705 712 barrier(); 706 713 707 - return bit; 714 + return bit + 1; 708 715 } 709 716 710 717 static __always_inline void trace_clear_recursion(int bit) ··· 727 708 if (!bit) 728 709 return; 729 710 711 + bit--; 730 712 bit = 1 << bit; 731 713 val &= ~bit; 732 714
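The trace.h change above returns `bit + 1` from trace_test_and_set_recursion and decrements before use in trace_clear_recursion, reserving 0 as a "nothing held" sentinel. A small sketch of that off-by-one handle idiom, under hypothetical names:

```c
#include <assert.h>

/* Bitmask of held slots; a handle of 0 can never name a held slot
 * because acquire() shifts valid handles up by one. */
static unsigned int slots;

static int acquire(int bit)
{
	if (slots & (1u << bit))
		return -1;		/* already held */
	slots |= 1u << bit;
	return bit + 1;			/* 0 is reserved as "none" */
}

static void release(int handle)
{
	if (handle <= 0)
		return;			/* failure (-1) and "none" (0) are no-ops */
	slots &= ~(1u << (handle - 1));	/* undo the +1 before indexing */
}
```

The +1 shift lets callers treat a zero handle as "no recursion lock taken" without a separate flag, matching the `if (!bit) return;` guard in trace_clear_recursion.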
+7 -10
kernel/trace/trace_events_synth.c
··· 584 584 { 585 585 struct synth_field *field; 586 586 const char *prefix = NULL, *field_type = argv[0], *field_name, *array; 587 - int len, ret = 0; 587 + int len, ret = -ENOMEM; 588 588 struct seq_buf s; 589 589 ssize_t size; 590 590 ··· 617 617 len--; 618 618 619 619 field->name = kmemdup_nul(field_name, len, GFP_KERNEL); 620 - if (!field->name) { 621 - ret = -ENOMEM; 620 + if (!field->name) 622 621 goto free; 623 - } 622 + 624 623 if (!is_good_name(field->name)) { 625 624 synth_err(SYNTH_ERR_BAD_NAME, errpos(field_name)); 626 625 ret = -EINVAL; ··· 637 638 len += strlen(prefix); 638 639 639 640 field->type = kzalloc(len, GFP_KERNEL); 640 - if (!field->type) { 641 - ret = -ENOMEM; 641 + if (!field->type) 642 642 goto free; 643 - } 643 + 644 644 seq_buf_init(&s, field->type, len); 645 645 if (prefix) 646 646 seq_buf_puts(&s, prefix); ··· 651 653 } 652 654 if (WARN_ON_ONCE(!seq_buf_buffer_left(&s))) 653 655 goto free; 656 + 654 657 s.buffer[s.len] = '\0'; 655 658 656 659 size = synth_field_size(field->type); ··· 665 666 666 667 len = sizeof("__data_loc ") + strlen(field->type) + 1; 667 668 type = kzalloc(len, GFP_KERNEL); 668 - if (!type) { 669 - ret = -ENOMEM; 669 + if (!type) 670 670 goto free; 671 - } 672 671 673 672 seq_buf_init(&s, type, len); 674 673 seq_buf_puts(&s, "__data_loc ");
+7 -2
kernel/trace/trace_selftest.c
··· 492 492 unregister_ftrace_function(&test_rec_probe); 493 493 494 494 ret = -1; 495 - if (trace_selftest_recursion_cnt != 1) { 496 - pr_cont("*callback not called once (%d)* ", 495 + /* 496 + * Recursion allows for transitions between context, 497 + * and may call the callback twice. 498 + */ 499 + if (trace_selftest_recursion_cnt != 1 && 500 + trace_selftest_recursion_cnt != 2) { 501 + pr_cont("*callback not called once (or twice) (%d)* ", 497 502 trace_selftest_recursion_cnt); 498 503 goto out; 499 504 }
-4
lib/crc32test.c
··· 683 683 684 684 /* reduce OS noise */ 685 685 local_irq_save(flags); 686 - local_irq_disable(); 687 686 688 687 nsec = ktime_get_ns(); 689 688 for (i = 0; i < 100; i++) { ··· 693 694 nsec = ktime_get_ns() - nsec; 694 695 695 696 local_irq_restore(flags); 696 - local_irq_enable(); 697 697 698 698 pr_info("crc32c: CRC_LE_BITS = %d\n", CRC_LE_BITS); 699 699 ··· 766 768 767 769 /* reduce OS noise */ 768 770 local_irq_save(flags); 769 - local_irq_disable(); 770 771 771 772 nsec = ktime_get_ns(); 772 773 for (i = 0; i < 100; i++) { ··· 780 783 nsec = ktime_get_ns() - nsec; 781 784 782 785 local_irq_restore(flags); 783 - local_irq_enable(); 784 786 785 787 pr_info("crc32: CRC_LE_BITS = %d, CRC_BE BITS = %d\n", 786 788 CRC_LE_BITS, CRC_BE_BITS);
+1 -1
lib/fonts/font_10x18.c
··· 8 8 9 9 #define FONTDATAMAX 9216 10 10 11 - static struct font_data fontdata_10x18 = { 11 + static const struct font_data fontdata_10x18 = { 12 12 { 0, 0, FONTDATAMAX, 0 }, { 13 13 /* 0 0x00 '^@' */ 14 14 0x00, 0x00, /* 0000000000 */
+1 -1
lib/fonts/font_6x10.c
··· 3 3 4 4 #define FONTDATAMAX 2560 5 5 6 - static struct font_data fontdata_6x10 = { 6 + static const struct font_data fontdata_6x10 = { 7 7 { 0, 0, FONTDATAMAX, 0 }, { 8 8 /* 0 0x00 '^@' */ 9 9 0x00, /* 00000000 */
+1 -1
lib/fonts/font_6x11.c
··· 9 9 10 10 #define FONTDATAMAX (11*256) 11 11 12 - static struct font_data fontdata_6x11 = { 12 + static const struct font_data fontdata_6x11 = { 13 13 { 0, 0, FONTDATAMAX, 0 }, { 14 14 /* 0 0x00 '^@' */ 15 15 0x00, /* 00000000 */
+1 -1
lib/fonts/font_6x8.c
··· 3 3 4 4 #define FONTDATAMAX 2048 5 5 6 - static struct font_data fontdata_6x8 = { 6 + static const struct font_data fontdata_6x8 = { 7 7 { 0, 0, FONTDATAMAX, 0 }, { 8 8 /* 0 0x00 '^@' */ 9 9 0x00, /* 000000 */
+1 -1
lib/fonts/font_7x14.c
··· 8 8 9 9 #define FONTDATAMAX 3584 10 10 11 - static struct font_data fontdata_7x14 = { 11 + static const struct font_data fontdata_7x14 = { 12 12 { 0, 0, FONTDATAMAX, 0 }, { 13 13 /* 0 0x00 '^@' */ 14 14 0x00, /* 0000000 */
+1 -1
lib/fonts/font_8x16.c
··· 10 10 11 11 #define FONTDATAMAX 4096 12 12 13 - static struct font_data fontdata_8x16 = { 13 + static const struct font_data fontdata_8x16 = { 14 14 { 0, 0, FONTDATAMAX, 0 }, { 15 15 /* 0 0x00 '^@' */ 16 16 0x00, /* 00000000 */
+1 -1
lib/fonts/font_8x8.c
··· 9 9 10 10 #define FONTDATAMAX 2048 11 11 12 - static struct font_data fontdata_8x8 = { 12 + static const struct font_data fontdata_8x8 = { 13 13 { 0, 0, FONTDATAMAX, 0 }, { 14 14 /* 0 0x00 '^@' */ 15 15 0x00, /* 00000000 */
+1 -1
lib/fonts/font_acorn_8x8.c
··· 5 5 6 6 #define FONTDATAMAX 2048 7 7 8 - static struct font_data acorndata_8x8 = { 8 + static const struct font_data acorndata_8x8 = { 9 9 { 0, 0, FONTDATAMAX, 0 }, { 10 10 /* 00 */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* ^@ */ 11 11 /* 01 */ 0x7e, 0x81, 0xa5, 0x81, 0xbd, 0x99, 0x81, 0x7e, /* ^A */
+1 -1
lib/fonts/font_mini_4x6.c
··· 43 43 44 44 #define FONTDATAMAX 1536 45 45 46 - static struct font_data fontdata_mini_4x6 = { 46 + static const struct font_data fontdata_mini_4x6 = { 47 47 { 0, 0, FONTDATAMAX, 0 }, { 48 48 /*{*/ 49 49 /* Char 0: ' ' */
+1 -1
lib/fonts/font_pearl_8x8.c
··· 14 14 15 15 #define FONTDATAMAX 2048 16 16 17 - static struct font_data fontdata_pearl8x8 = { 17 + static const struct font_data fontdata_pearl8x8 = { 18 18 { 0, 0, FONTDATAMAX, 0 }, { 19 19 /* 0 0x00 '^@' */ 20 20 0x00, /* 00000000 */
+1 -1
lib/fonts/font_sun12x22.c
··· 3 3 4 4 #define FONTDATAMAX 11264 5 5 6 - static struct font_data fontdata_sun12x22 = { 6 + static const struct font_data fontdata_sun12x22 = { 7 7 { 0, 0, FONTDATAMAX, 0 }, { 8 8 /* 0 0x00 '^@' */ 9 9 0x00, 0x00, /* 000000000000 */
+1 -1
lib/fonts/font_sun8x16.c
··· 3 3 4 4 #define FONTDATAMAX 4096 5 5 6 - static struct font_data fontdata_sun8x16 = { 6 + static const struct font_data fontdata_sun8x16 = { 7 7 { 0, 0, FONTDATAMAX, 0 }, { 8 8 /* */ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00, 9 9 /* */ 0x00,0x00,0x7e,0x81,0xa5,0x81,0x81,0xbd,0x99,0x81,0x81,0x7e,0x00,0x00,0x00,0x00,
+1 -1
lib/fonts/font_ter16x32.c
··· 4 4 5 5 #define FONTDATAMAX 16384 6 6 7 - static struct font_data fontdata_ter16x32 = { 7 + static const struct font_data fontdata_ter16x32 = { 8 8 { 0, 0, FONTDATAMAX, 0 }, { 9 9 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 10 10 0x00, 0x00, 0x00, 0x00, 0x7f, 0xfc, 0x7f, 0xfc,
+107 -42
lib/test_kasan.c
··· 216 216 u64 words[2]; 217 217 } *ptr1, *ptr2; 218 218 219 + /* This test is specifically crafted for the generic mode. */ 220 + if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) { 221 + kunit_info(test, "CONFIG_KASAN_GENERIC required\n"); 222 + return; 223 + } 224 + 219 225 ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL); 220 226 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); 221 227 ··· 231 225 KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2); 232 226 kfree(ptr1); 233 227 kfree(ptr2); 228 + } 229 + 230 + static void kmalloc_uaf_16(struct kunit *test) 231 + { 232 + struct { 233 + u64 words[2]; 234 + } *ptr1, *ptr2; 235 + 236 + ptr1 = kmalloc(sizeof(*ptr1), GFP_KERNEL); 237 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); 238 + 239 + ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL); 240 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2); 241 + kfree(ptr2); 242 + 243 + KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2); 244 + kfree(ptr1); 234 245 } 235 246 236 247 static void kmalloc_oob_memset_2(struct kunit *test) ··· 452 429 volatile int i = 3; 453 430 char *p = &global_array[ARRAY_SIZE(global_array) + i]; 454 431 432 + /* Only generic mode instruments globals. */ 433 + if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) { 434 + kunit_info(test, "CONFIG_KASAN_GENERIC required"); 435 + return; 436 + } 437 + 455 438 KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p); 456 439 } 457 440 ··· 496 467 char alloca_array[i]; 497 468 char *p = alloca_array - 1; 498 469 470 + /* Only generic mode instruments dynamic allocas. */ 471 + if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) { 472 + kunit_info(test, "CONFIG_KASAN_GENERIC required"); 473 + return; 474 + } 475 + 499 476 if (!IS_ENABLED(CONFIG_KASAN_STACK)) { 500 477 kunit_info(test, "CONFIG_KASAN_STACK is not enabled"); 501 478 return; ··· 515 480 volatile int i = 10; 516 481 char alloca_array[i]; 517 482 char *p = alloca_array + i; 483 + 484 + /* Only generic mode instruments dynamic allocas. */ 485 + if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) { 486 + kunit_info(test, "CONFIG_KASAN_GENERIC required"); 487 + return; 488 + } 518 489 519 490 if (!IS_ENABLED(CONFIG_KASAN_STACK)) { 520 491 kunit_info(test, "CONFIG_KASAN_STACK is not enabled"); ··· 592 551 return; 593 552 } 594 553 554 + if (OOB_TAG_OFF) 555 + size = round_up(size, OOB_TAG_OFF); 556 + 595 557 ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO); 596 558 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); 597 559 ··· 616 572 "str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT"); 617 573 return; 618 574 } 575 + 576 + if (OOB_TAG_OFF) 577 + size = round_up(size, OOB_TAG_OFF); 619 578 620 579 ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO); 621 580 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); ··· 666 619 KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1)); 667 620 } 668 621 669 - static void kasan_bitops(struct kunit *test) 622 + static void kasan_bitops_modify(struct kunit *test, int nr, void *addr) 670 623 { 624 + KUNIT_EXPECT_KASAN_FAIL(test, set_bit(nr, addr)); 625 + KUNIT_EXPECT_KASAN_FAIL(test, __set_bit(nr, addr)); 626 + KUNIT_EXPECT_KASAN_FAIL(test, clear_bit(nr, addr)); 627 + KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit(nr, addr)); 628 + KUNIT_EXPECT_KASAN_FAIL(test, clear_bit_unlock(nr, addr)); 629 + KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit_unlock(nr, addr)); 630 + KUNIT_EXPECT_KASAN_FAIL(test, change_bit(nr, addr)); 631 + KUNIT_EXPECT_KASAN_FAIL(test, __change_bit(nr, addr)); 632 + } 633 + 634 + static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr) 635 + { 636 + KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit(nr, addr)); 637 + KUNIT_EXPECT_KASAN_FAIL(test, __test_and_set_bit(nr, addr)); 638 + KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr)); 639 + KUNIT_EXPECT_KASAN_FAIL(test, test_and_clear_bit(nr, addr)); 640 + KUNIT_EXPECT_KASAN_FAIL(test, __test_and_clear_bit(nr, addr)); 641 + KUNIT_EXPECT_KASAN_FAIL(test, test_and_change_bit(nr, addr)); 642 + KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr)); 643 + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr)); 644 + 645 + #if defined(clear_bit_unlock_is_negative_byte) 646 + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = 647 + clear_bit_unlock_is_negative_byte(nr, addr)); 648 + #endif 649 + } 650 + 651 + static void kasan_bitops_generic(struct kunit *test) 652 + { 653 + long *bits; 654 + 655 + /* This test is specifically crafted for the generic mode. */ 656 + if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) { 657 + kunit_info(test, "CONFIG_KASAN_GENERIC required\n"); 658 + return; 659 + } 660 + 671 661 /* 672 662 * Allocate 1 more byte, which causes kzalloc to round up to 16-bytes; 673 663 * this way we do not actually corrupt other memory. 674 664 */ 675 - long *bits = kzalloc(sizeof(*bits) + 1, GFP_KERNEL); 665 + bits = kzalloc(sizeof(*bits) + 1, GFP_KERNEL); 676 666 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits); 677 667 678 668 /* ··· 717 633 * below accesses are still out-of-bounds, since bitops are defined to 718 634 * operate on the whole long the bit is in. 719 635 */ 720 - KUNIT_EXPECT_KASAN_FAIL(test, set_bit(BITS_PER_LONG, bits)); 721 - 722 - KUNIT_EXPECT_KASAN_FAIL(test, __set_bit(BITS_PER_LONG, bits)); 723 - 724 - KUNIT_EXPECT_KASAN_FAIL(test, clear_bit(BITS_PER_LONG, bits)); 725 - 726 - KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit(BITS_PER_LONG, bits)); 727 - 728 - KUNIT_EXPECT_KASAN_FAIL(test, clear_bit_unlock(BITS_PER_LONG, bits)); 729 - 730 - KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit_unlock(BITS_PER_LONG, bits)); 731 - 732 - KUNIT_EXPECT_KASAN_FAIL(test, change_bit(BITS_PER_LONG, bits)); 733 - 734 - KUNIT_EXPECT_KASAN_FAIL(test, __change_bit(BITS_PER_LONG, bits)); 636 + kasan_bitops_modify(test, BITS_PER_LONG, bits); 735 637 736 638 /* 737 639 * Below calls try to access bit beyond allocated memory. 738 640 */ 739 - KUNIT_EXPECT_KASAN_FAIL(test, 740 - test_and_set_bit(BITS_PER_LONG + BITS_PER_BYTE, bits)); 641 + kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, bits); 741 642 742 - KUNIT_EXPECT_KASAN_FAIL(test, 743 - __test_and_set_bit(BITS_PER_LONG + BITS_PER_BYTE, bits)); 643 + kfree(bits); 644 + } 744 645 745 - KUNIT_EXPECT_KASAN_FAIL(test, 746 - test_and_set_bit_lock(BITS_PER_LONG + BITS_PER_BYTE, bits)); 646 + static void kasan_bitops_tags(struct kunit *test) 647 + { 648 + long *bits; 747 649 748 - KUNIT_EXPECT_KASAN_FAIL(test, 749 - test_and_clear_bit(BITS_PER_LONG + BITS_PER_BYTE, bits)); 650 + /* This test is specifically crafted for the tag-based mode. */ 651 + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) { 652 + kunit_info(test, "CONFIG_KASAN_SW_TAGS required\n"); 653 + return; 654 + } 750 655 751 - KUNIT_EXPECT_KASAN_FAIL(test, 752 - __test_and_clear_bit(BITS_PER_LONG + BITS_PER_BYTE, bits)); 656 + /* Allocation size will be rounded up to granule size, which is 16. */ 657 + bits = kzalloc(sizeof(*bits), GFP_KERNEL); 658 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits); 753 659 754 - KUNIT_EXPECT_KASAN_FAIL(test, 755 - test_and_change_bit(BITS_PER_LONG + BITS_PER_BYTE, bits)); 660 + /* Do the accesses past the 16 allocated bytes. */ 661 + kasan_bitops_modify(test, BITS_PER_LONG, &bits[1]); 662 + kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, &bits[1]); 756 663 757 - KUNIT_EXPECT_KASAN_FAIL(test, 758 - __test_and_change_bit(BITS_PER_LONG + BITS_PER_BYTE, bits)); 759 - 760 - KUNIT_EXPECT_KASAN_FAIL(test, 761 - kasan_int_result = 762 - test_bit(BITS_PER_LONG + BITS_PER_BYTE, bits)); 763 - 764 - #if defined(clear_bit_unlock_is_negative_byte) 765 - KUNIT_EXPECT_KASAN_FAIL(test, 766 - kasan_int_result = clear_bit_unlock_is_negative_byte( 767 - BITS_PER_LONG + BITS_PER_BYTE, bits)); 768 - #endif 769 - kfree(bits); 770 - } 771 666 ··· 791 728 KUNIT_CASE(kmalloc_oob_krealloc_more), 792 729 KUNIT_CASE(kmalloc_oob_krealloc_less), 793 730 KUNIT_CASE(kmalloc_oob_16), 731 + KUNIT_CASE(kmalloc_uaf_16), 794 732 KUNIT_CASE(kmalloc_oob_in_memset), 795 733 KUNIT_CASE(kmalloc_oob_memset_2), 796 734 KUNIT_CASE(kmalloc_oob_memset_4), ··· 815 751 KUNIT_CASE(kasan_memchr), 816 752 KUNIT_CASE(kasan_memcmp), 817 753 KUNIT_CASE(kasan_strings), 818 - KUNIT_CASE(kasan_bitops), 754 + KUNIT_CASE(kasan_bitops_generic), 755 + KUNIT_CASE(kasan_bitops_tags), 819 756 KUNIT_CASE(kmalloc_double_kzfree), 820 757 KUNIT_CASE(vmalloc_oob), 821 758 {}
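The reworked bitops tests above deliberately pass bit numbers at or beyond BITS_PER_LONG: generic bitops index a long array by `nr / BITS_PER_LONG`, so such accesses land in the word after the allocation, which is what KASAN should flag. A userspace sketch of that indexing (illustrative, not the kernel implementation):

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Generic-style set_bit: the word is chosen by nr / BITS_PER_LONG,
 * so a bit number past the first word writes into addr[1] -- out of
 * bounds when only one long was allocated. */
static void set_bit_sketch(unsigned long nr, unsigned long *addr)
{
	addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}
```

This is why the tag-based variant passes `&bits[1]` with the same bit numbers: it shifts the base so the accesses fall past the 16-byte granule.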
+11 -9
mm/hugetlb.c
··· 648 648 } 649 649 650 650 del += t - f; 651 + hugetlb_cgroup_uncharge_file_region( 652 + resv, rg, t - f); 651 653 652 654 /* New entry for end of split region */ 653 655 nrg->from = t; ··· 661 659 662 660 /* Original entry is trimmed */ 663 661 rg->to = f; 664 - 665 - hugetlb_cgroup_uncharge_file_region( 666 - resv, rg, nrg->to - nrg->from); 667 662 668 663 list_add(&nrg->link, &rg->link); 669 664 nrg = NULL; ··· 677 678 } 678 679 679 680 if (f <= rg->from) { /* Trim beginning of region */ 680 - del += t - rg->from; 681 - rg->from = t; 682 - 683 681 hugetlb_cgroup_uncharge_file_region(resv, rg, 684 682 t - rg->from); 685 - } else { /* Trim end of region */ 686 - del += rg->to - f; 687 - rg->to = f; 688 683 684 + del += t - rg->from; 685 + rg->from = t; 686 + } else { /* Trim end of region */ 689 687 hugetlb_cgroup_uncharge_file_region(resv, rg, 690 688 rg->to - f); 689 + 690 + del += rg->to - f; 691 + rg->to = f; 691 692 } 692 693 } 693 694 ··· 2442 2443 2443 2444 rsv_adjust = hugepage_subpool_put_pages(spool, 1); 2444 2445 hugetlb_acct_memory(h, -rsv_adjust); 2446 + if (deferred_reserve) 2447 + hugetlb_cgroup_uncharge_page_rsvd(hstate_index(h), 2448 + pages_per_huge_page(h), page); 2445 2449 } 2446 2450 return page; 2447 2451
+18 -7
mm/memcontrol.c
··· 4110 4110 (u64)memsw * PAGE_SIZE); 4111 4111 4112 4112 for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) { 4113 + unsigned long nr; 4114 + 4113 4115 if (memcg1_stats[i] == MEMCG_SWAP && !do_memsw_account()) 4114 4116 continue; 4117 + nr = memcg_page_state(memcg, memcg1_stats[i]); 4118 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE 4119 + if (memcg1_stats[i] == NR_ANON_THPS) 4120 + nr *= HPAGE_PMD_NR; 4121 + #endif 4115 4122 seq_printf(m, "total_%s %llu\n", memcg1_stat_names[i], 4116 - (u64)memcg_page_state(memcg, memcg1_stats[i]) * 4117 - PAGE_SIZE); 4123 + (u64)nr * PAGE_SIZE); 4118 4124 } 4119 4125 4120 4126 for (i = 0; i < ARRAY_SIZE(memcg1_events); i++) ··· 5345 5339 memcg->swappiness = mem_cgroup_swappiness(parent); 5346 5340 memcg->oom_kill_disable = parent->oom_kill_disable; 5347 5341 } 5348 - if (parent && parent->use_hierarchy) { 5342 + if (!parent) { 5343 + page_counter_init(&memcg->memory, NULL); 5344 + page_counter_init(&memcg->swap, NULL); 5345 + page_counter_init(&memcg->kmem, NULL); 5346 + page_counter_init(&memcg->tcpmem, NULL); 5347 + } else if (parent->use_hierarchy) { 5349 5348 memcg->use_hierarchy = true; 5350 5349 page_counter_init(&memcg->memory, &parent->memory); 5351 5350 page_counter_init(&memcg->swap, &parent->swap); 5352 5351 page_counter_init(&memcg->kmem, &parent->kmem); 5353 5352 page_counter_init(&memcg->tcpmem, &parent->tcpmem); 5354 5353 } else { 5355 - page_counter_init(&memcg->memory, NULL); 5356 - page_counter_init(&memcg->swap, NULL); 5357 - page_counter_init(&memcg->kmem, NULL); 5358 - page_counter_init(&memcg->tcpmem, NULL); 5354 + page_counter_init(&memcg->memory, &root_mem_cgroup->memory); 5355 + page_counter_init(&memcg->swap, &root_mem_cgroup->swap); 5356 + page_counter_init(&memcg->kmem, &root_mem_cgroup->kmem); 5357 + page_counter_init(&memcg->tcpmem, &root_mem_cgroup->tcpmem); 5359 5358 /* 5360 5359 * Deeper hierachy with use_hierarchy == false doesn't make 5361 5360 * much sense so let cgroup subsystem know about this
+3 -3
mm/mempolicy.c
··· 525 525 unsigned long flags = qp->flags; 526 526 int ret; 527 527 bool has_unmovable = false; 528 - pte_t *pte; 528 + pte_t *pte, *mapped_pte; 529 529 spinlock_t *ptl; 530 530 531 531 ptl = pmd_trans_huge_lock(pmd, vma); ··· 539 539 if (pmd_trans_unstable(pmd)) 540 540 return 0; 541 541 542 - pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl); 542 + mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl); 543 543 for (; addr != end; pte++, addr += PAGE_SIZE) { 544 544 if (!pte_present(*pte)) 545 545 continue; ··· 571 571 } else 572 572 break; 573 573 } 574 - pte_unmap_unlock(pte - 1, ptl); 574 + pte_unmap_unlock(mapped_pte, ptl); 575 575 cond_resched(); 576 576 577 577 if (has_unmovable)
+16 -23
mm/memremap.c
··· 41 41 DEFINE_STATIC_KEY_FALSE(devmap_managed_key);
42 42 EXPORT_SYMBOL(devmap_managed_key);
43 43
44 - static void devmap_managed_enable_put(void)
44 + static void devmap_managed_enable_put(struct dev_pagemap *pgmap)
45 45 {
46 - static_branch_dec(&devmap_managed_key);
46 + if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
47 + pgmap->type == MEMORY_DEVICE_FS_DAX)
48 + static_branch_dec(&devmap_managed_key);
47 49 }
48 50
49 - static int devmap_managed_enable_get(struct dev_pagemap *pgmap)
51 + static void devmap_managed_enable_get(struct dev_pagemap *pgmap)
50 52 {
51 - if (pgmap->type == MEMORY_DEVICE_PRIVATE &&
52 - (!pgmap->ops || !pgmap->ops->page_free)) {
53 - WARN(1, "Missing page_free method\n");
54 - return -EINVAL;
55 - }
56 -
57 - static_branch_inc(&devmap_managed_key);
58 - return 0;
53 + if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
54 + pgmap->type == MEMORY_DEVICE_FS_DAX)
55 + static_branch_inc(&devmap_managed_key);
59 56 }
60 57 #else
61 - static int devmap_managed_enable_get(struct dev_pagemap *pgmap)
58 + static void devmap_managed_enable_get(struct dev_pagemap *pgmap)
62 59 {
63 - return -EINVAL;
64 60 }
65 - static void devmap_managed_enable_put(void)
61 + static void devmap_managed_enable_put(struct dev_pagemap *pgmap)
66 62 {
67 63 }
68 64 #endif /* CONFIG_DEV_PAGEMAP_OPS */
··· 165 169 pageunmap_range(pgmap, i);
166 170
167 171 WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
168 - devmap_managed_enable_put();
172 + devmap_managed_enable_put(pgmap);
169 173 }
170 174 EXPORT_SYMBOL_GPL(memunmap_pages);
171 175
··· 303 307 .pgprot = PAGE_KERNEL,
304 308 };
305 309 const int nr_range = pgmap->nr_range;
306 - bool need_devmap_managed = true;
307 310 int error, i;
308 311
309 312 if (WARN_ONCE(!nr_range, "nr_range must be specified\n"))
··· 318 323 WARN(1, "Missing migrate_to_ram method\n");
319 324 return ERR_PTR(-EINVAL);
320 325 }
326 + if (!pgmap->ops->page_free) {
327 + WARN(1, "Missing page_free method\n");
328 + return ERR_PTR(-EINVAL);
329 + }
321 330 if (!pgmap->owner) {
322 331 WARN(1, "Missing owner\n");
323 332 return ERR_PTR(-EINVAL);
··· 335 336 }
336 337 break;
337 338 case MEMORY_DEVICE_GENERIC:
338 - need_devmap_managed = false;
339 339 break;
340 340 case MEMORY_DEVICE_PCI_P2PDMA:
341 341 params.pgprot = pgprot_noncached(params.pgprot);
342 - need_devmap_managed = false;
343 342 break;
344 343 default:
345 344 WARN(1, "Invalid pgmap type %d\n", pgmap->type);
··· 361 364 }
362 365 }
363 366
364 - if (need_devmap_managed) {
365 - error = devmap_managed_enable_get(pgmap);
366 - if (error)
367 - return ERR_PTR(error);
368 - }
367 + devmap_managed_enable_get(pgmap);
369 368
370 369 /*
371 370 * Clear the pgmap nr_range as it will be incremented for each
+1 -1
mm/truncate.c
··· 528 528 } 529 529 EXPORT_SYMBOL(truncate_inode_pages_final); 530 530 531 - unsigned long __invalidate_mapping_pages(struct address_space *mapping, 531 + static unsigned long __invalidate_mapping_pages(struct address_space *mapping, 532 532 pgoff_t start, pgoff_t end, unsigned long *nr_pagevec) 533 533 { 534 534 pgoff_t indices[PAGEVEC_SIZE];
+2 -3
net/atm/lec.c
··· 954 954 { 955 955 struct lec_state *state = seq->private; 956 956 957 - v = lec_get_idx(state, 1); 958 - *pos += !!PTR_ERR(v); 959 - return v; 957 + ++*pos; 958 + return lec_get_idx(state, 1); 960 959 } 961 960 962 961 static int lec_seq_show(struct seq_file *seq, void *v)
+3 -2
net/can/Kconfig
··· 62 62 communication between CAN nodes via two defined CAN Identifiers. 63 63 As CAN frames can only transport a small amount of data bytes 64 64 (max. 8 bytes for 'classic' CAN and max. 64 bytes for CAN FD) this 65 - segmentation is needed to transport longer PDUs as needed e.g. for 66 - vehicle diagnosis (UDS, ISO 14229) or IP-over-CAN traffic. 65 + segmentation is needed to transport longer Protocol Data Units (PDU) 66 + as needed e.g. for vehicle diagnosis (UDS, ISO 14229) or IP-over-CAN 67 + traffic. 67 68 This protocol driver implements data transfers according to 68 69 ISO 15765-2:2016 for 'classic' CAN and CAN FD frame types. 69 70 If you want to perform automotive vehicle diagnostic services (UDS),
+14 -12
net/can/isotp.c
··· 252 252 253 253 static u8 padlen(u8 datalen) 254 254 { 255 - const u8 plen[] = {8, 8, 8, 8, 8, 8, 8, 8, 8, /* 0 - 8 */ 256 - 12, 12, 12, 12, /* 9 - 12 */ 257 - 16, 16, 16, 16, /* 13 - 16 */ 258 - 20, 20, 20, 20, /* 17 - 20 */ 259 - 24, 24, 24, 24, /* 21 - 24 */ 260 - 32, 32, 32, 32, 32, 32, 32, 32, /* 25 - 32 */ 261 - 48, 48, 48, 48, 48, 48, 48, 48, /* 33 - 40 */ 262 - 48, 48, 48, 48, 48, 48, 48, 48}; /* 41 - 48 */ 255 + static const u8 plen[] = { 256 + 8, 8, 8, 8, 8, 8, 8, 8, 8, /* 0 - 8 */ 257 + 12, 12, 12, 12, /* 9 - 12 */ 258 + 16, 16, 16, 16, /* 13 - 16 */ 259 + 20, 20, 20, 20, /* 17 - 20 */ 260 + 24, 24, 24, 24, /* 21 - 24 */ 261 + 32, 32, 32, 32, 32, 32, 32, 32, /* 25 - 32 */ 262 + 48, 48, 48, 48, 48, 48, 48, 48, /* 33 - 40 */ 263 + 48, 48, 48, 48, 48, 48, 48, 48 /* 41 - 48 */ 264 + }; 263 265 264 266 if (datalen > 48) 265 267 return 64; ··· 571 569 return 0; 572 570 } 573 571 574 - /* no creation of flow control frames */ 575 - if (so->opt.flags & CAN_ISOTP_LISTEN_MODE) 576 - return 0; 577 - 578 572 /* perform blocksize handling, if enabled */ 579 573 if (!so->rxfc.bs || ++so->rx.bs < so->rxfc.bs) { 580 574 /* start rx timeout watchdog */ ··· 578 580 HRTIMER_MODE_REL_SOFT); 579 581 return 0; 580 582 } 583 + 584 + /* no creation of flow control frames */ 585 + if (so->opt.flags & CAN_ISOTP_LISTEN_MODE) 586 + return 0; 581 587 582 588 /* we reached the specified blocksize so->rxfc.bs */ 583 589 isotp_send_fc(sk, ae, ISOTP_FC_CTS);
+6
net/can/j1939/socket.c
··· 475 475 goto out_release_sock; 476 476 } 477 477 478 + if (!(ndev->flags & IFF_UP)) { 479 + dev_put(ndev); 480 + ret = -ENETDOWN; 481 + goto out_release_sock; 482 + } 483 + 478 484 priv = j1939_netdev_start(ndev); 479 485 dev_put(ndev); 480 486 if (IS_ERR(priv)) {
+4 -2
net/can/proc.c
··· 462 462 */ 463 463 void can_remove_proc(struct net *net) 464 464 { 465 + if (!net->can.proc_dir) 466 + return; 467 + 465 468 if (net->can.pde_stats) 466 469 remove_proc_entry(CAN_PROC_STATS, net->can.proc_dir); 467 470 ··· 489 486 if (net->can.pde_rcvlist_sff) 490 487 remove_proc_entry(CAN_PROC_RCVLIST_SFF, net->can.proc_dir); 491 488 492 - if (net->can.proc_dir) 493 - remove_proc_entry("can", net->proc_net); 489 + remove_proc_entry("can", net->proc_net); 494 490 }
-3
net/ipv4/ip_tunnel.c
··· 608 608 ttl = ip4_dst_hoplimit(&rt->dst); 609 609 } 610 610 611 - if (!df && skb->protocol == htons(ETH_P_IP)) 612 - df = inner_iph->frag_off & htons(IP_DF); 613 - 614 611 headroom += LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len; 615 612 if (headroom > dev->needed_headroom) 616 613 dev->needed_headroom = headroom;
+5 -3
net/ipv4/netfilter.c
··· 17 17 #include <net/netfilter/nf_queue.h> 18 18 19 19 /* route_me_harder function, used by iptable_nat, iptable_mangle + ip_queue */ 20 - int ip_route_me_harder(struct net *net, struct sk_buff *skb, unsigned int addr_type) 20 + int ip_route_me_harder(struct net *net, struct sock *sk, struct sk_buff *skb, unsigned int addr_type) 21 21 { 22 22 const struct iphdr *iph = ip_hdr(skb); 23 23 struct rtable *rt; 24 24 struct flowi4 fl4 = {}; 25 25 __be32 saddr = iph->saddr; 26 - const struct sock *sk = skb_to_full_sk(skb); 27 - __u8 flags = sk ? inet_sk_flowi_flags(sk) : 0; 26 + __u8 flags; 28 27 struct net_device *dev = skb_dst(skb)->dev; 29 28 unsigned int hh_len; 29 + 30 + sk = sk_to_full_sk(sk); 31 + flags = sk ? inet_sk_flowi_flags(sk) : 0; 30 32 31 33 if (addr_type == RTN_UNSPEC) 32 34 addr_type = inet_addr_type_dev_table(net, dev, saddr);
+1 -1
net/ipv4/netfilter/iptable_mangle.c
··· 62 62 iph->daddr != daddr || 63 63 skb->mark != mark || 64 64 iph->tos != tos) { 65 - err = ip_route_me_harder(state->net, skb, RTN_UNSPEC); 65 + err = ip_route_me_harder(state->net, state->sk, skb, RTN_UNSPEC); 66 66 if (err < 0) 67 67 ret = NF_DROP_ERR(err); 68 68 }
+1 -1
net/ipv4/netfilter/nf_reject_ipv4.c
··· 145 145 ip4_dst_hoplimit(skb_dst(nskb))); 146 146 nf_reject_ip_tcphdr_put(nskb, oldskb, oth); 147 147 148 - if (ip_route_me_harder(net, nskb, RTN_UNSPEC)) 148 + if (ip_route_me_harder(net, nskb->sk, nskb, RTN_UNSPEC)) 149 149 goto free_nskb; 150 150 151 151 niph = ip_hdr(nskb);
+2 -2
net/ipv4/xfrm4_tunnel.c
··· 64 64 static struct xfrm_tunnel xfrm_tunnel_handler __read_mostly = { 65 65 .handler = xfrm_tunnel_rcv, 66 66 .err_handler = xfrm_tunnel_err, 67 - .priority = 3, 67 + .priority = 4, 68 68 }; 69 69 70 70 #if IS_ENABLED(CONFIG_IPV6) 71 71 static struct xfrm_tunnel xfrm64_tunnel_handler __read_mostly = { 72 72 .handler = xfrm_tunnel_rcv, 73 73 .err_handler = xfrm_tunnel_err, 74 - .priority = 2, 74 + .priority = 3, 75 75 }; 76 76 #endif 77 77
+7 -1
net/ipv6/icmp.c
··· 158 158 tp = skb_header_pointer(skb, 159 159 ptr+offsetof(struct icmp6hdr, icmp6_type), 160 160 sizeof(_type), &_type); 161 - if (!tp || !(*tp & ICMPV6_INFOMSG_MASK)) 161 + 162 + /* Based on RFC 8200, Section 4.5 Fragment Header, return 163 + * false if this is a fragment packet with no icmp header info. 164 + */ 165 + if (!tp && frag_off != 0) 166 + return false; 167 + else if (!tp || !(*tp & ICMPV6_INFOMSG_MASK)) 162 168 return true; 163 169 } 164 170 return false;
+2 -2
net/ipv6/ip6_tunnel.c
··· 1271 1271 if (max_headroom > dev->needed_headroom) 1272 1272 dev->needed_headroom = max_headroom; 1273 1273 1274 + skb_set_inner_ipproto(skb, proto); 1275 + 1274 1276 err = ip6_tnl_encap(skb, t, &proto, fl6); 1275 1277 if (err) 1276 1278 return err; ··· 1281 1279 init_tel_txopt(&opt, encap_limit); 1282 1280 ipv6_push_frag_opts(skb, &opt.ops, &proto); 1283 1281 } 1284 - 1285 - skb_set_inner_ipproto(skb, proto); 1286 1282 1287 1283 skb_push(skb, sizeof(struct ipv6hdr)); 1288 1284 skb_reset_network_header(skb);
+3 -3
net/ipv6/netfilter.c
··· 20 20 #include <net/netfilter/ipv6/nf_defrag_ipv6.h> 21 21 #include "../bridge/br_private.h" 22 22 23 - int ip6_route_me_harder(struct net *net, struct sk_buff *skb) 23 + int ip6_route_me_harder(struct net *net, struct sock *sk_partial, struct sk_buff *skb) 24 24 { 25 25 const struct ipv6hdr *iph = ipv6_hdr(skb); 26 - struct sock *sk = sk_to_full_sk(skb->sk); 26 + struct sock *sk = sk_to_full_sk(sk_partial); 27 27 unsigned int hh_len; 28 28 struct dst_entry *dst; 29 29 int strict = (ipv6_addr_type(&iph->daddr) & ··· 84 84 if (!ipv6_addr_equal(&iph->daddr, &rt_info->daddr) || 85 85 !ipv6_addr_equal(&iph->saddr, &rt_info->saddr) || 86 86 skb->mark != rt_info->mark) 87 - return ip6_route_me_harder(entry->state.net, skb); 87 + return ip6_route_me_harder(entry->state.net, entry->state.sk, skb); 88 88 } 89 89 return 0; 90 90 }
+1 -1
net/ipv6/netfilter/ip6table_mangle.c
··· 57 57 skb->mark != mark || 58 58 ipv6_hdr(skb)->hop_limit != hop_limit || 59 59 flowlabel != *((u_int32_t *)ipv6_hdr(skb)))) { 60 - err = ip6_route_me_harder(state->net, skb); 60 + err = ip6_route_me_harder(state->net, state->sk, skb); 61 61 if (err < 0) 62 62 ret = NF_DROP_ERR(err); 63 63 }
+32 -1
net/ipv6/reassembly.c
··· 42 42 #include <linux/skbuff.h> 43 43 #include <linux/slab.h> 44 44 #include <linux/export.h> 45 + #include <linux/tcp.h> 46 + #include <linux/udp.h> 45 47 46 48 #include <net/sock.h> 47 49 #include <net/snmp.h> ··· 324 322 struct frag_queue *fq; 325 323 const struct ipv6hdr *hdr = ipv6_hdr(skb); 326 324 struct net *net = dev_net(skb_dst(skb)->dev); 327 - int iif; 325 + __be16 frag_off; 326 + int iif, offset; 327 + u8 nexthdr; 328 328 329 329 if (IP6CB(skb)->flags & IP6SKB_FRAGMENTED) 330 330 goto fail_hdr; ··· 353 349 IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb); 354 350 IP6CB(skb)->flags |= IP6SKB_FRAGMENTED; 355 351 return 1; 352 + } 353 + 354 + /* RFC 8200, Section 4.5 Fragment Header: 355 + * If the first fragment does not include all headers through an 356 + * Upper-Layer header, then that fragment should be discarded and 357 + * an ICMP Parameter Problem, Code 3, message should be sent to 358 + * the source of the fragment, with the Pointer field set to zero. 359 + */ 360 + nexthdr = hdr->nexthdr; 361 + offset = ipv6_skip_exthdr(skb, skb_transport_offset(skb), &nexthdr, &frag_off); 362 + if (offset >= 0) { 363 + /* Check some common protocols' header */ 364 + if (nexthdr == IPPROTO_TCP) 365 + offset += sizeof(struct tcphdr); 366 + else if (nexthdr == IPPROTO_UDP) 367 + offset += sizeof(struct udphdr); 368 + else if (nexthdr == IPPROTO_ICMPV6) 369 + offset += sizeof(struct icmp6hdr); 370 + else 371 + offset += 1; 372 + 373 + if (!(frag_off & htons(IP6_OFFSET)) && offset > skb->len) { 374 + __IP6_INC_STATS(net, __in6_dev_get_safely(skb->dev), 375 + IPSTATS_MIB_INHDRERRORS); 376 + icmpv6_param_prob(skb, ICMPV6_HDR_INCOMP, 0); 377 + return -1; 378 + } 356 379 } 357 380 358 381 iif = skb->dev ? skb->dev->ifindex : 0;
+2 -2
net/ipv6/xfrm6_tunnel.c
··· 303 303 static struct xfrm6_tunnel xfrm6_tunnel_handler __read_mostly = { 304 304 .handler = xfrm6_tunnel_rcv, 305 305 .err_handler = xfrm6_tunnel_err, 306 - .priority = 2, 306 + .priority = 3, 307 307 }; 308 308 309 309 static struct xfrm6_tunnel xfrm46_tunnel_handler __read_mostly = { 310 310 .handler = xfrm6_tunnel_rcv, 311 311 .err_handler = xfrm6_tunnel_err, 312 - .priority = 2, 312 + .priority = 3, 313 313 }; 314 314 315 315 static int __net_init xfrm6_tunnel_net_init(struct net *net)
+2 -1
net/mac80211/mlme.c
··· 5464 5464 struct cfg80211_assoc_request *req) 5465 5465 { 5466 5466 bool is_6ghz = req->bss->channel->band == NL80211_BAND_6GHZ; 5467 + bool is_5ghz = req->bss->channel->band == NL80211_BAND_5GHZ; 5467 5468 struct ieee80211_local *local = sdata->local; 5468 5469 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 5469 5470 struct ieee80211_bss *bss = (void *)req->bss->priv; ··· 5617 5616 if (vht_ie && vht_ie[1] >= sizeof(struct ieee80211_vht_cap)) 5618 5617 memcpy(&assoc_data->ap_vht_cap, vht_ie + 2, 5619 5618 sizeof(struct ieee80211_vht_cap)); 5620 - else if (!is_6ghz) 5619 + else if (is_5ghz) 5621 5620 ifmgd->flags |= IEEE80211_STA_DISABLE_VHT | 5622 5621 IEEE80211_STA_DISABLE_HE; 5623 5622 rcu_read_unlock();
+18
net/mac80211/sta_info.c
··· 258 258 */ 259 259 void sta_info_free(struct ieee80211_local *local, struct sta_info *sta) 260 260 { 261 + /* 262 + * If we had used sta_info_pre_move_state() then we might not 263 + * have gone through the state transitions down again, so do 264 + * it here now (and warn if it's inserted). 265 + * 266 + * This will clear state such as fast TX/RX that may have been 267 + * allocated during state transitions. 268 + */ 269 + while (sta->sta_state > IEEE80211_STA_NONE) { 270 + int ret; 271 + 272 + WARN_ON_ONCE(test_sta_flag(sta, WLAN_STA_INSERTED)); 273 + 274 + ret = sta_info_move_state(sta, sta->sta_state - 1); 275 + if (WARN_ONCE(ret, "sta_info_move_state() returned %d\n", ret)) 276 + break; 277 + } 278 + 261 279 if (sta->rate_ctrl) 262 280 rate_control_free_sta(sta); 263 281
+8 -1
net/mac80211/sta_info.h
··· 785 785 void sta_info_stop(struct ieee80211_local *local); 786 786 787 787 /** 788 - * sta_info_flush - flush matching STA entries from the STA table 788 + * __sta_info_flush - flush matching STA entries from the STA table 789 789 * 790 790 * Returns the number of removed STA entries. 791 791 * ··· 794 794 */ 795 795 int __sta_info_flush(struct ieee80211_sub_if_data *sdata, bool vlans); 796 796 797 + /** 798 + * sta_info_flush - flush matching STA entries from the STA table 799 + * 800 + * Returns the number of removed STA entries. 801 + * 802 + * @sdata: sdata to remove all stations from 803 + */ 797 804 static inline int sta_info_flush(struct ieee80211_sub_if_data *sdata) 798 805 { 799 806 return __sta_info_flush(sdata, false);
+28 -16
net/mac80211/tx.c
··· 1942 1942
1943 1943 /* device xmit handlers */
1944 1944
1945 + enum ieee80211_encrypt {
1946 + ENCRYPT_NO,
1947 + ENCRYPT_MGMT,
1948 + ENCRYPT_DATA,
1949 + };
1950 +
1945 1951 static int ieee80211_skb_resize(struct ieee80211_sub_if_data *sdata,
1946 1952 struct sk_buff *skb,
1947 - int head_need, bool may_encrypt)
1953 + int head_need,
1954 + enum ieee80211_encrypt encrypt)
1948 1955 {
1949 1956 struct ieee80211_local *local = sdata->local;
1950 - struct ieee80211_hdr *hdr;
1951 1957 bool enc_tailroom;
1952 1958 int tail_need = 0;
1953 1959
1954 - hdr = (struct ieee80211_hdr *) skb->data;
1955 - enc_tailroom = may_encrypt &&
1956 - (sdata->crypto_tx_tailroom_needed_cnt ||
1957 - ieee80211_is_mgmt(hdr->frame_control));
1960 + enc_tailroom = encrypt == ENCRYPT_MGMT ||
1961 + (encrypt == ENCRYPT_DATA &&
1962 + sdata->crypto_tx_tailroom_needed_cnt);
1958 1963
1959 1964 if (enc_tailroom) {
1960 1965 tail_need = IEEE80211_ENCRYPT_TAILROOM;
··· 1990 1985 {
1991 1986 struct ieee80211_local *local = sdata->local;
1992 1987 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
1993 - struct ieee80211_hdr *hdr;
1988 + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
1994 1989 int headroom;
1995 - bool may_encrypt;
1990 + enum ieee80211_encrypt encrypt;
1996 1991
1997 - may_encrypt = !(info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT);
1992 + if (info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT)
1993 + encrypt = ENCRYPT_NO;
1994 + else if (ieee80211_is_mgmt(hdr->frame_control))
1995 + encrypt = ENCRYPT_MGMT;
1996 + else
1997 + encrypt = ENCRYPT_DATA;
1998 1998
1999 1999 headroom = local->tx_headroom;
2000 - if (may_encrypt)
2000 + if (encrypt != ENCRYPT_NO)
2001 2001 headroom += sdata->encrypt_headroom;
2002 2002 headroom -= skb_headroom(skb);
2003 2003 headroom = max_t(int, 0, headroom);
2004 2004
2005 - if (ieee80211_skb_resize(sdata, skb, headroom, may_encrypt)) {
2005 + if (ieee80211_skb_resize(sdata, skb, headroom, encrypt)) {
2006 2006 ieee80211_free_txskb(&local->hw, skb);
2007 2007 return;
2008 2008 }
2009 2009
2010 + /* reload after potential resize */
2010 2011 hdr = (struct ieee80211_hdr *) skb->data;
2011 2012 info->control.vif = &sdata->vif;
2012 2013
··· 2839 2828 head_need += sdata->encrypt_headroom;
2840 2829 head_need += local->tx_headroom;
2841 2830 head_need = max_t(int, 0, head_need);
2842 - if (ieee80211_skb_resize(sdata, skb, head_need, true)) {
2831 + if (ieee80211_skb_resize(sdata, skb, head_need, ENCRYPT_DATA)) {
2843 2832 ieee80211_free_txskb(&local->hw, skb);
2844 2833 skb = NULL;
2845 2834 return ERR_PTR(-ENOMEM);
··· 3513 3502 if (unlikely(ieee80211_skb_resize(sdata, skb,
3514 3503 max_t(int, extra_head + hw_headroom -
3515 3504 skb_headroom(skb), 0),
3516 - false))) {
3505 + ENCRYPT_NO))) {
3517 3506 kfree_skb(skb);
3518 3507 return true;
3519 3508 }
··· 3630 3619 tx.skb = skb;
3631 3620 tx.sdata = vif_to_sdata(info->control.vif);
3632 3621
3633 - if (txq->sta && !(info->flags & IEEE80211_TX_CTL_INJECTED)) {
3622 + if (txq->sta) {
3634 3623 tx.sta = container_of(txq->sta, struct sta_info, sta);
3635 3624 /*
3636 3625 * Drop unicast frames to unauthorised stations unless they are
3637 - * EAPOL frames from the local station.
3626 + * injected frames or EAPOL frames from the local station.
3638 3627 */
3639 - if (unlikely(ieee80211_is_data(hdr->frame_control) &&
3628 + if (unlikely(!(info->flags & IEEE80211_TX_CTL_INJECTED) &&
3629 + ieee80211_is_data(hdr->frame_control) &&
3640 3630 !ieee80211_vif_is_mesh(&tx.sdata->vif) &&
3641 3631 tx.sdata->vif.type != NL80211_IFTYPE_OCB &&
3642 3632 !is_multicast_ether_addr(hdr->addr1) &&
+1 -1
net/mptcp/token.c
··· 291 291 { 292 292 struct mptcp_sock *ret = NULL; 293 293 struct hlist_nulls_node *pos; 294 - int slot, num; 294 + int slot, num = 0; 295 295 296 296 for (slot = *s_slot; slot <= token_mask; *s_num = 0, slot++) { 297 297 struct token_bucket *bucket = &token_hash[slot];
+2 -1
net/netfilter/ipset/ip_set_core.c
··· 637 637 if (SET_WITH_COUNTER(set)) { 638 638 struct ip_set_counter *counter = ext_counter(data, set); 639 639 640 + ip_set_update_counter(counter, ext, flags); 641 + 640 642 if (flags & IPSET_FLAG_MATCH_COUNTERS && 641 643 !(ip_set_match_counter(ip_set_get_packets(counter), 642 644 mext->packets, mext->packets_op) && 643 645 ip_set_match_counter(ip_set_get_bytes(counter), 644 646 mext->bytes, mext->bytes_op))) 645 647 return false; 646 - ip_set_update_counter(counter, ext, flags); 647 648 } 648 649 if (SET_WITH_SKBINFO(set)) 649 650 ip_set_get_skbinfo(ext_skbinfo(data, set),
+2 -2
net/netfilter/ipvs/ip_vs_core.c
··· 742 742 struct dst_entry *dst = skb_dst(skb); 743 743 744 744 if (dst->dev && !(dst->dev->flags & IFF_LOOPBACK) && 745 - ip6_route_me_harder(ipvs->net, skb) != 0) 745 + ip6_route_me_harder(ipvs->net, skb->sk, skb) != 0) 746 746 return 1; 747 747 } else 748 748 #endif 749 749 if (!(skb_rtable(skb)->rt_flags & RTCF_LOCAL) && 750 - ip_route_me_harder(ipvs->net, skb, RTN_LOCAL) != 0) 750 + ip_route_me_harder(ipvs->net, skb->sk, skb, RTN_LOCAL) != 0) 751 751 return 1; 752 752 753 753 return 0;
+2 -2
net/netfilter/nf_nat_proto.c
··· 715 715 716 716 if (ct->tuplehash[dir].tuple.dst.u3.ip != 717 717 ct->tuplehash[!dir].tuple.src.u3.ip) { 718 - err = ip_route_me_harder(state->net, skb, RTN_UNSPEC); 718 + err = ip_route_me_harder(state->net, state->sk, skb, RTN_UNSPEC); 719 719 if (err < 0) 720 720 ret = NF_DROP_ERR(err); 721 721 } ··· 953 953 954 954 if (!nf_inet_addr_cmp(&ct->tuplehash[dir].tuple.dst.u3, 955 955 &ct->tuplehash[!dir].tuple.src.u3)) { 956 - err = nf_ip6_route_me_harder(state->net, skb); 956 + err = nf_ip6_route_me_harder(state->net, state->sk, skb); 957 957 if (err < 0) 958 958 ret = NF_DROP_ERR(err); 959 959 }
+1 -1
net/netfilter/nf_synproxy_core.c
··· 446 446 447 447 skb_dst_set_noref(nskb, skb_dst(skb)); 448 448 nskb->protocol = htons(ETH_P_IP); 449 - if (ip_route_me_harder(net, nskb, RTN_UNSPEC)) 449 + if (ip_route_me_harder(net, nskb->sk, nskb, RTN_UNSPEC)) 450 450 goto free_nskb; 451 451 452 452 if (nfct) {
+12 -7
net/netfilter/nf_tables_api.c
··· 7137 7137 GFP_KERNEL);
7138 7138 kfree(buf);
7139 7139
7140 - if (ctx->report &&
7140 + if (!ctx->report &&
7141 7141 !nfnetlink_has_listeners(ctx->net, NFNLGRP_NFTABLES))
7142 7142 return;
7143 7143
··· 7259 7259 audit_log_nfcfg("?:0;?:0", 0, net->nft.base_seq,
7260 7260 AUDIT_NFT_OP_GEN_REGISTER, GFP_KERNEL);
7261 7261
7262 - if (nlmsg_report(nlh) &&
7262 + if (!nlmsg_report(nlh) &&
7263 7263 !nfnetlink_has_listeners(net, NFNLGRP_NFTABLES))
7264 7264 return;
7265 7265
··· 8053 8053 kfree(trans);
8054 8054 }
8055 8055
8056 - static int __nf_tables_abort(struct net *net, bool autoload)
8056 + static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
8057 8057 {
8058 8058 struct nft_trans *trans, *next;
8059 8059 struct nft_trans_elem *te;
8060 8060 struct nft_hook *hook;
8061 +
8062 + if (action == NFNL_ABORT_VALIDATE &&
8063 + nf_tables_validate(net) < 0)
8064 + return -EAGAIN;
8061 8065
8062 8066 list_for_each_entry_safe_reverse(trans, next, &net->nft.commit_list,
8063 8067 list) {
··· 8194 8190 nf_tables_abort_release(trans);
8195 8191 }
8196 8192
8197 - if (autoload)
8193 + if (action == NFNL_ABORT_AUTOLOAD)
8198 8194 nf_tables_module_autoload(net);
8199 8195 else
8200 8196 nf_tables_module_autoload_cleanup(net);
··· 8207 8203 nft_validate_state_update(net, NFT_VALIDATE_SKIP);
8208 8204 }
8209 8205
8210 - static int nf_tables_abort(struct net *net, struct sk_buff *skb, bool autoload)
8206 + static int nf_tables_abort(struct net *net, struct sk_buff *skb,
8207 + enum nfnl_abort_action action)
8211 8208 {
8212 - int ret = __nf_tables_abort(net, autoload);
8209 + int ret = __nf_tables_abort(net, action);
8213 8210
8214 8211 mutex_unlock(&net->nft.commit_mutex);
8215 8212
··· 8841 8836 {
8842 8837 mutex_lock(&net->nft.commit_mutex);
8843 8838 if (!list_empty(&net->nft.commit_list))
8844 8839 __nf_tables_abort(net, NFNL_ABORT_NONE);
8845 8840 __nft_release_tables(net);
8846 8841 mutex_unlock(&net->nft.commit_mutex);
8847 8842 WARN_ON_ONCE(!list_empty(&net->nft.tables));
+18 -4
net/netfilter/nfnetlink.c
··· 333 333 return netlink_ack(skb, nlh, -EINVAL, NULL); 334 334 replay: 335 335 status = 0; 336 - 336 + replay_abort: 337 337 skb = netlink_skb_clone(oskb, GFP_KERNEL); 338 338 if (!skb) 339 339 return netlink_ack(oskb, nlh, -ENOMEM, NULL); ··· 499 499 } 500 500 done: 501 501 if (status & NFNL_BATCH_REPLAY) { 502 - ss->abort(net, oskb, true); 502 + ss->abort(net, oskb, NFNL_ABORT_AUTOLOAD); 503 503 nfnl_err_reset(&err_list); 504 504 kfree_skb(skb); 505 505 module_put(ss->owner); ··· 510 510 status |= NFNL_BATCH_REPLAY; 511 511 goto done; 512 512 } else if (err) { 513 - ss->abort(net, oskb, false); 513 + ss->abort(net, oskb, NFNL_ABORT_NONE); 514 514 netlink_ack(oskb, nlmsg_hdr(oskb), err, NULL); 515 515 } 516 516 } else { 517 - ss->abort(net, oskb, false); 517 + enum nfnl_abort_action abort_action; 518 + 519 + if (status & NFNL_BATCH_FAILURE) 520 + abort_action = NFNL_ABORT_NONE; 521 + else 522 + abort_action = NFNL_ABORT_VALIDATE; 523 + 524 + err = ss->abort(net, oskb, abort_action); 525 + if (err == -EAGAIN) { 526 + nfnl_err_reset(&err_list); 527 + kfree_skb(skb); 528 + module_put(ss->owner); 529 + status |= NFNL_BATCH_FAILURE; 530 + goto replay_abort; 531 + } 518 532 } 519 533 if (ss->cleanup) 520 534 ss->cleanup(net);
+2 -2
net/netfilter/nft_chain_route.c
··· 42 42 iph->daddr != daddr || 43 43 skb->mark != mark || 44 44 iph->tos != tos) { 45 - err = ip_route_me_harder(state->net, skb, RTN_UNSPEC); 45 + err = ip_route_me_harder(state->net, state->sk, skb, RTN_UNSPEC); 46 46 if (err < 0) 47 47 ret = NF_DROP_ERR(err); 48 48 } ··· 92 92 skb->mark != mark || 93 93 ipv6_hdr(skb)->hop_limit != hop_limit || 94 94 flowlabel != *((u32 *)ipv6_hdr(skb)))) { 95 - err = nf_ip6_route_me_harder(state->net, skb); 95 + err = nf_ip6_route_me_harder(state->net, state->sk, skb); 96 96 if (err < 0) 97 97 ret = NF_DROP_ERR(err); 98 98 }
+2 -2
net/netfilter/utils.c
··· 191 191 skb->mark == rt_info->mark && 192 192 iph->daddr == rt_info->daddr && 193 193 iph->saddr == rt_info->saddr)) 194 - return ip_route_me_harder(entry->state.net, skb, 195 - RTN_UNSPEC); 194 + return ip_route_me_harder(entry->state.net, entry->state.sk, 195 + skb, RTN_UNSPEC); 196 196 } 197 197 #endif 198 198 return 0;
+7 -7
net/openvswitch/datapath.c
··· 1703 1703 parms.port_no = OVSP_LOCAL; 1704 1704 parms.upcall_portids = a[OVS_DP_ATTR_UPCALL_PID]; 1705 1705 1706 - err = ovs_dp_change(dp, a); 1707 - if (err) 1708 - goto err_destroy_meters; 1709 - 1710 1706 /* So far only local changes have been made, now need the lock. */ 1711 1707 ovs_lock(); 1708 + 1709 + err = ovs_dp_change(dp, a); 1710 + if (err) 1711 + goto err_unlock_and_destroy_meters; 1712 1712 1713 1713 vport = new_vport(&parms); 1714 1714 if (IS_ERR(vport)) { ··· 1725 1725 ovs_dp_reset_user_features(skb, info); 1726 1726 } 1727 1727 1728 - ovs_unlock(); 1729 - goto err_destroy_meters; 1728 + goto err_unlock_and_destroy_meters; 1730 1729 } 1731 1730 1732 1731 err = ovs_dp_cmd_fill_info(dp, reply, info->snd_portid, ··· 1740 1741 ovs_notify(&dp_datapath_genl_family, reply, info); 1741 1742 return 0; 1742 1743 1743 - err_destroy_meters: 1744 + err_unlock_and_destroy_meters: 1745 + ovs_unlock(); 1744 1746 ovs_meters_exit(dp); 1745 1747 err_destroy_ports: 1746 1748 kfree(dp->ports);
+1 -1
net/openvswitch/flow_table.c
··· 390 390 } 391 391 int ovs_flow_tbl_masks_cache_resize(struct flow_table *table, u32 size) 392 392 { 393 - struct mask_cache *mc = rcu_dereference(table->mask_cache); 393 + struct mask_cache *mc = rcu_dereference_ovsl(table->mask_cache); 394 394 struct mask_cache *new; 395 395 396 396 if (size == mc->cache_size)
+2 -2
net/sctp/sm_sideeffect.c
··· 1601 1601 break; 1602 1602 1603 1603 case SCTP_CMD_INIT_FAILED: 1604 - sctp_cmd_init_failed(commands, asoc, cmd->obj.u32); 1604 + sctp_cmd_init_failed(commands, asoc, cmd->obj.u16); 1605 1605 break; 1606 1606 1607 1607 case SCTP_CMD_ASSOC_FAILED: 1608 1608 sctp_cmd_assoc_failed(commands, asoc, event_type, 1609 - subtype, chunk, cmd->obj.u32); 1609 + subtype, chunk, cmd->obj.u16); 1610 1610 break; 1611 1611 1612 1612 case SCTP_CMD_INIT_COUNTER_INC:
+31 -26
net/wireless/core.c
··· 1250 1250 }
1251 1251 EXPORT_SYMBOL(cfg80211_stop_iface);
1252 1252
1253 - void cfg80211_init_wdev(struct cfg80211_registered_device *rdev,
1254 - struct wireless_dev *wdev)
1253 + void cfg80211_init_wdev(struct wireless_dev *wdev)
1255 1254 {
1256 1255 mutex_init(&wdev->mtx);
1257 1256 INIT_LIST_HEAD(&wdev->event_list);
··· 1261 1262 spin_lock_init(&wdev->pmsr_lock);
1262 1263 INIT_WORK(&wdev->pmsr_free_wk, cfg80211_pmsr_free_wk);
1263 1264
1265 + #ifdef CONFIG_CFG80211_WEXT
1266 + wdev->wext.default_key = -1;
1267 + wdev->wext.default_mgmt_key = -1;
1268 + wdev->wext.connect.auth_type = NL80211_AUTHTYPE_AUTOMATIC;
1269 + #endif
1270 +
1271 + if (wdev->wiphy->flags & WIPHY_FLAG_PS_ON_BY_DEFAULT)
1272 + wdev->ps = true;
1273 + else
1274 + wdev->ps = false;
1275 + /* allow mac80211 to determine the timeout */
1276 + wdev->ps_timeout = -1;
1277 +
1278 + if ((wdev->iftype == NL80211_IFTYPE_STATION ||
1279 + wdev->iftype == NL80211_IFTYPE_P2P_CLIENT ||
1280 + wdev->iftype == NL80211_IFTYPE_ADHOC) && !wdev->use_4addr)
1281 + wdev->netdev->priv_flags |= IFF_DONT_BRIDGE;
1282 +
1283 + INIT_WORK(&wdev->disconnect_wk, cfg80211_autodisconnect_wk);
1284 + }
1285 +
1286 + void cfg80211_register_wdev(struct cfg80211_registered_device *rdev,
1287 + struct wireless_dev *wdev)
1288 + {
1264 1289 /*
1265 1290 * We get here also when the interface changes network namespaces,
1266 1291 * as it's registered into the new one, but we don't want it to
··· 1318 1295 switch (state) {
1319 1296 case NETDEV_POST_INIT:
1320 1297 SET_NETDEV_DEVTYPE(dev, &wiphy_type);
1298 + wdev->netdev = dev;
1299 + /* can only change netns with wiphy */
1300 + dev->features |= NETIF_F_NETNS_LOCAL;
1301 +
1302 + cfg80211_init_wdev(wdev);
1321 1303 break;
1322 1304 case NETDEV_REGISTER:
1323 1305 /*
1330 1302 * called within code protected by it when interfaces
1331 1303 * are added with nl80211.
1332 1304 */
1333 - /* can only change netns with wiphy */
1334 - dev->features |= NETIF_F_NETNS_LOCAL;
1335 -
1336 1305 if (sysfs_create_link(&dev->dev.kobj, &rdev->wiphy.dev.kobj,
1337 1306 "phy80211")) {
1338 1307 pr_err("failed to add phy80211 symlink to netdev!\n");
1339 1308 }
1340 - wdev->netdev = dev;
1341 - #ifdef CONFIG_CFG80211_WEXT
1342 - wdev->wext.default_key = -1;
1343 - wdev->wext.default_mgmt_key = -1;
1344 - wdev->wext.connect.auth_type = NL80211_AUTHTYPE_AUTOMATIC;
1345 - #endif
1346 -
1347 - if (wdev->wiphy->flags & WIPHY_FLAG_PS_ON_BY_DEFAULT)
1348 - wdev->ps = true;
1349 - else
1350 - wdev->ps = false;
1351 - /* allow mac80211 to determine the timeout */
1352 - wdev->ps_timeout = -1;
1353 -
1354 - if ((wdev->iftype == NL80211_IFTYPE_STATION ||
1355 - wdev->iftype == NL80211_IFTYPE_P2P_CLIENT ||
1356 - wdev->iftype == NL80211_IFTYPE_ADHOC) && !wdev->use_4addr)
1357 - dev->priv_flags |= IFF_DONT_BRIDGE;
1358 -
1359 - INIT_WORK(&wdev->disconnect_wk, cfg80211_autodisconnect_wk);
1360 -
1361 - cfg80211_init_wdev(rdev, wdev);
1310 + cfg80211_register_wdev(rdev, wdev);
1362 1311 break;
1363 1312 case NETDEV_GOING_DOWN:
1364 1313 cfg80211_leave(rdev, wdev);
+3 -2
net/wireless/core.h
··· 209 209 int cfg80211_switch_netns(struct cfg80211_registered_device *rdev, 210 210 struct net *net); 211 211 212 - void cfg80211_init_wdev(struct cfg80211_registered_device *rdev, 213 - struct wireless_dev *wdev); 212 + void cfg80211_init_wdev(struct wireless_dev *wdev); 213 + void cfg80211_register_wdev(struct cfg80211_registered_device *rdev, 214 + struct wireless_dev *wdev); 214 215 215 216 static inline void wdev_lock(struct wireless_dev *wdev) 216 217 __acquires(wdev)
+2 -1
net/wireless/nl80211.c
··· 3885 3885 * P2P Device and NAN do not have a netdev, so don't go 3886 3886 * through the netdev notifier and must be added here 3887 3887 */ 3888 - cfg80211_init_wdev(rdev, wdev); 3888 + cfg80211_init_wdev(wdev); 3889 + cfg80211_register_wdev(rdev, wdev); 3889 3890 break; 3890 3891 default: 3891 3892 break;
+1 -1
net/wireless/reg.c
··· 3616 3616 power_rule = &reg_rule->power_rule; 3617 3617 3618 3618 if (reg_rule->flags & NL80211_RRF_AUTO_BW) 3619 - snprintf(bw, sizeof(bw), "%d KHz, %d KHz AUTO", 3619 + snprintf(bw, sizeof(bw), "%d KHz, %u KHz AUTO", 3620 3620 freq_range->max_bandwidth_khz, 3621 3621 reg_get_max_bandwidth(rd, reg_rule)); 3622 3622 else
+4 -4
net/xfrm/xfrm_interface.c
··· 803 803 .handler = xfrmi6_rcv_tunnel, 804 804 .cb_handler = xfrmi_rcv_cb, 805 805 .err_handler = xfrmi6_err, 806 - .priority = -1, 806 + .priority = 2, 807 807 }; 808 808 809 809 static struct xfrm6_tunnel xfrmi_ip6ip_handler __read_mostly = { 810 810 .handler = xfrmi6_rcv_tunnel, 811 811 .cb_handler = xfrmi_rcv_cb, 812 812 .err_handler = xfrmi6_err, 813 - .priority = -1, 813 + .priority = 2, 814 814 }; 815 815 #endif 816 816 ··· 848 848 .handler = xfrmi4_rcv_tunnel, 849 849 .cb_handler = xfrmi_rcv_cb, 850 850 .err_handler = xfrmi4_err, 851 - .priority = -1, 851 + .priority = 3, 852 852 }; 853 853 854 854 static struct xfrm_tunnel xfrmi_ipip6_handler __read_mostly = { 855 855 .handler = xfrmi4_rcv_tunnel, 856 856 .cb_handler = xfrmi_rcv_cb, 857 857 .err_handler = xfrmi4_err, 858 - .priority = -1, 858 + .priority = 2, 859 859 }; 860 860 #endif 861 861
+5 -3
net/xfrm/xfrm_state.c
··· 2004 2004 int err = -ENOENT; 2005 2005 __be32 minspi = htonl(low); 2006 2006 __be32 maxspi = htonl(high); 2007 + __be32 newspi = 0; 2007 2008 u32 mark = x->mark.v & x->mark.m; 2008 2009 2009 2010 spin_lock_bh(&x->lock); ··· 2023 2022 xfrm_state_put(x0); 2024 2023 goto unlock; 2025 2024 } 2026 - x->id.spi = minspi; 2025 + newspi = minspi; 2027 2026 } else { 2028 2027 u32 spi = 0; 2029 2028 for (h = 0; h < high-low+1; h++) { 2030 2029 spi = low + prandom_u32()%(high-low+1); 2031 2030 x0 = xfrm_state_lookup(net, mark, &x->id.daddr, htonl(spi), x->id.proto, x->props.family); 2032 2031 if (x0 == NULL) { 2033 - x->id.spi = htonl(spi); 2032 + newspi = htonl(spi); 2034 2033 break; 2035 2034 } 2036 2035 xfrm_state_put(x0); 2037 2036 } 2038 2037 } 2039 - if (x->id.spi) { 2038 + if (newspi) { 2040 2039 spin_lock_bh(&net->xfrm.xfrm_state_lock); 2040 + x->id.spi = newspi; 2041 2041 h = xfrm_spi_hash(net, &x->id.daddr, x->id.spi, x->id.proto, x->props.family); 2042 2042 hlist_add_head_rcu(&x->byspi, net->xfrm.state_byspi + h); 2043 2043 spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+23
scripts/get_abi.pl
··· 287 287 sub output_rest { 288 288 create_labels(); 289 289 290 + my $part = ""; 291 + 290 292 foreach my $what (sort { 291 293 ($data{$a}->{type} eq "File") cmp ($data{$b}->{type} eq "File") || 292 294 $a cmp $b ··· 308 306 $w =~ s/([\(\)\_\-\*\=\^\~\\])/\\$1/g; 309 307 310 308 if ($type ne "File") { 309 + my $cur_part = $what; 310 + if ($what =~ '/') { 311 + if ($what =~ m#^(\/?(?:[\w\-]+\/?){1,2})#) { 312 + $cur_part = "Symbols under $1"; 313 + $cur_part =~ s,/$,,; 314 + } 315 + } 316 + 317 + if ($cur_part ne "" && $part ne $cur_part) { 318 + $part = $cur_part; 319 + my $bar = $part; 320 + $bar =~ s/./-/g; 321 + print "$part\n$bar\n\n"; 322 + } 323 + 311 324 printf ".. _%s:\n\n", $data{$what}->{label}; 312 325 313 326 my @names = split /, /,$w; ··· 369 352 370 353 if (!($desc =~ /^\s*$/)) { 371 354 if ($description_is_rst) { 355 + # Remove title markups from the description 356 + # Having titles inside ABI files will only work if extra 357 + # care would be taken in order to strictly follow the same 358 + # level order for each markup. 359 + $desc =~ s/\n[\-\*\=\^\~]+\n/\n\n/g; 360 + 372 361 # Enrich text by creating cross-references 373 362 374 363 $desc =~ s,Documentation/(?!devicetree)(\S+)\.rst,:doc:`/$1`,g;
+15 -6
scripts/kernel-doc
··· 1092 1092 print "\n\n.. c:type:: " . $name . "\n\n"; 1093 1093 } else { 1094 1094 my $name = $args{'struct'}; 1095 - print "\n\n.. c:struct:: " . $name . "\n\n"; 1095 + if ($args{'type'} eq 'union') { 1096 + print "\n\n.. c:union:: " . $name . "\n\n"; 1097 + } else { 1098 + print "\n\n.. c:struct:: " . $name . "\n\n"; 1099 + } 1096 1100 } 1097 1101 print_lineno($declaration_start_line); 1098 1102 $lineprefix = " "; ··· 1431 1427 } 1432 1428 } 1433 1429 1430 + my $typedef_type = qr { ((?:\s+[\w\*]+){1,8})\s* }x; 1431 + my $typedef_ident = qr { \*?\s*(\w\S+)\s* }x; 1432 + my $typedef_args = qr { \s*\((.*)\); }x; 1433 + 1434 + my $typedef1 = qr { typedef$typedef_type\($typedef_ident\)$typedef_args }x; 1435 + my $typedef2 = qr { typedef$typedef_type$typedef_ident$typedef_args }x; 1436 + 1434 1437 sub dump_typedef($$) { 1435 1438 my $x = shift; 1436 1439 my $file = shift; 1437 1440 1438 1441 $x =~ s@/\*.*?\*/@@gos; # strip comments. 1439 1442 1440 - # Parse function prototypes 1441 - if ($x =~ /typedef\s+(\w+)\s*\(\*\s*(\w\S+)\s*\)\s*\((.*)\);/ || 1442 - $x =~ /typedef\s+(\w+)\s*(\w\S+)\s*\s*\((.*)\);/) { 1443 - 1444 - # Function typedefs 1443 + # Parse function typedef prototypes 1444 + if ($x =~ $typedef1 || $x =~ $typedef2) { 1445 1445 $return_type = $1; 1446 1446 $declaration_name = $2; 1447 1447 my $args = $3; 1448 + $return_type =~ s/^\s+//; 1448 1449 1449 1450 create_parameterlist($args, ',', $file, $declaration_name); 1450 1451
+2 -2
sound/core/control.c
··· 1925 1925 1926 1926 #ifdef CONFIG_COMPAT 1927 1927 /** 1928 - * snd_ctl_unregister_ioctl - de-register the device-specific compat 32bit 1929 - * control-ioctls 1928 + * snd_ctl_unregister_ioctl_compat - de-register the device-specific compat 1929 + * 32bit control-ioctls 1930 1930 * @fcn: ioctl callback function to unregister 1931 1931 */ 1932 1932 int snd_ctl_unregister_ioctl_compat(snd_kctl_ioctl_func_t fcn)
+2 -1
sound/core/pcm_dmaengine.c
··· 356 356 EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_close); 357 357 358 358 /** 359 - * snd_dmaengine_pcm_release_chan_close - Close a dmaengine based PCM substream and release channel 359 + * snd_dmaengine_pcm_close_release_chan - Close a dmaengine based PCM 360 + * substream and release channel 360 361 * @substream: PCM substream 361 362 * 362 363 * Releases the DMA channel associated with the PCM substream.
+1 -1
sound/core/pcm_lib.c
··· 490 490 EXPORT_SYMBOL(snd_pcm_set_ops); 491 491 492 492 /** 493 - * snd_pcm_sync - set the PCM sync id 493 + * snd_pcm_set_sync - set the PCM sync id 494 494 * @substream: the pcm substream 495 495 * 496 496 * Sets the PCM sync identifier for the card.
+2 -2
sound/core/pcm_native.c
··· 112 112 EXPORT_SYMBOL_GPL(snd_pcm_stream_lock); 113 113 114 114 /** 115 - * snd_pcm_stream_lock - Unlock the PCM stream 115 + * snd_pcm_stream_unlock - Unlock the PCM stream 116 116 * @substream: PCM substream 117 117 * 118 118 * This unlocks the PCM stream that has been locked via snd_pcm_stream_lock(). ··· 595 595 } 596 596 597 597 /** 598 - * snd_pcm_hw_param_choose - choose a configuration defined by @params 598 + * snd_pcm_hw_params_choose - choose a configuration defined by @params 599 599 * @pcm: PCM instance 600 600 * @params: the hw_params instance 601 601 *
+2
sound/hda/ext/hdac_ext_controller.c
··· 148 148 return NULL; 149 149 if (bus->idx != bus_idx) 150 150 return NULL; 151 + if (addr < 0 || addr > 31) 152 + return NULL; 151 153 152 154 list_for_each_entry(hlink, &bus->hlink_list, list) { 153 155 for (i = 0; i < HDA_MAX_CODECS; i++) {
+29 -16
sound/pci/hda/hda_codec.c
··· 2934 2934 snd_hdac_leave_pm(&codec->core); 2935 2935 } 2936 2936 2937 - static int hda_codec_runtime_suspend(struct device *dev) 2937 + static int hda_codec_suspend(struct device *dev) 2938 2938 { 2939 2939 struct hda_codec *codec = dev_to_hda_codec(dev); 2940 2940 unsigned int state; ··· 2953 2953 return 0; 2954 2954 } 2955 2955 2956 - static int hda_codec_runtime_resume(struct device *dev) 2956 + static int hda_codec_resume(struct device *dev) 2957 2957 { 2958 2958 struct hda_codec *codec = dev_to_hda_codec(dev); 2959 2959 ··· 2967 2967 pm_runtime_mark_last_busy(dev); 2968 2968 return 0; 2969 2969 } 2970 + 2971 + static int hda_codec_runtime_suspend(struct device *dev) 2972 + { 2973 + return hda_codec_suspend(dev); 2974 + } 2975 + 2976 + static int hda_codec_runtime_resume(struct device *dev) 2977 + { 2978 + return hda_codec_resume(dev); 2979 + } 2980 + 2970 2981 #endif /* CONFIG_PM */ 2971 2982 2972 2983 #ifdef CONFIG_PM_SLEEP 2973 - static int hda_codec_force_resume(struct device *dev) 2984 + static int hda_codec_pm_prepare(struct device *dev) 2985 + { 2986 + return pm_runtime_suspended(dev); 2987 + } 2988 + 2989 + static void hda_codec_pm_complete(struct device *dev) 2974 2990 { 2975 2991 struct hda_codec *codec = dev_to_hda_codec(dev); 2976 - int ret; 2977 2992 2978 - ret = pm_runtime_force_resume(dev); 2979 - /* schedule jackpoll work for jack detection update */ 2980 - if (codec->jackpoll_interval || 2981 - (pm_runtime_suspended(dev) && hda_codec_need_resume(codec))) 2982 - schedule_delayed_work(&codec->jackpoll_work, 2983 - codec->jackpoll_interval); 2984 - return ret; 2993 + if (pm_runtime_suspended(dev) && (codec->jackpoll_interval || 2994 + hda_codec_need_resume(codec) || codec->forced_resume)) 2995 + pm_request_resume(dev); 2985 2996 } 2986 2997 2987 2998 static int hda_codec_pm_suspend(struct device *dev) 2988 2999 { 2989 3000 dev->power.power_state = PMSG_SUSPEND; 2990 - return pm_runtime_force_suspend(dev); 3001 + return hda_codec_suspend(dev); 
2991 3002 } 2992 3003 2993 3004 static int hda_codec_pm_resume(struct device *dev) 2994 3005 { 2995 3006 dev->power.power_state = PMSG_RESUME; 2996 - return hda_codec_force_resume(dev); 3007 + return hda_codec_resume(dev); 2997 3008 } 2998 3009 2999 3010 static int hda_codec_pm_freeze(struct device *dev) 3000 3011 { 3001 3012 dev->power.power_state = PMSG_FREEZE; 3002 - return pm_runtime_force_suspend(dev); 3013 + return hda_codec_suspend(dev); 3003 3014 } 3004 3015 3005 3016 static int hda_codec_pm_thaw(struct device *dev) 3006 3017 { 3007 3018 dev->power.power_state = PMSG_THAW; 3008 - return hda_codec_force_resume(dev); 3019 + return hda_codec_resume(dev); 3009 3020 } 3010 3021 3011 3022 static int hda_codec_pm_restore(struct device *dev) 3012 3023 { 3013 3024 dev->power.power_state = PMSG_RESTORE; 3014 - return hda_codec_force_resume(dev); 3025 + return hda_codec_resume(dev); 3015 3026 } 3016 3027 #endif /* CONFIG_PM_SLEEP */ 3017 3028 3018 3029 /* referred in hda_bind.c */ 3019 3030 const struct dev_pm_ops hda_codec_driver_pm = { 3020 3031 #ifdef CONFIG_PM_SLEEP 3032 + .prepare = hda_codec_pm_prepare, 3033 + .complete = hda_codec_pm_complete, 3021 3034 .suspend = hda_codec_pm_suspend, 3022 3035 .resume = hda_codec_pm_resume, 3023 3036 .freeze = hda_codec_pm_freeze,
+2 -1
sound/pci/hda/hda_controller.h
··· 41 41 /* 24 unused */ 42 42 #define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */ 43 43 #define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */ 44 - #define AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP (1 << 27) /* Workaround for spurious wakeups after suspend */ 44 + /* 27 unused */ 45 45 #define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28) /* CORBRP clears itself after reset */ 46 46 #define AZX_DCAPS_NO_MSI64 (1 << 29) /* Stick to 32-bit MSIs */ 47 47 #define AZX_DCAPS_SEPARATE_STREAM_TAG (1 << 30) /* capture and playback use separate stream tag */ ··· 143 143 unsigned int align_buffer_size:1; 144 144 unsigned int region_requested:1; 145 145 unsigned int disabled:1; /* disabled by vga_switcheroo */ 146 + unsigned int pm_prepared:1; 146 147 147 148 /* GTS present */ 148 149 unsigned int gts_present:1;
+35 -28
sound/pci/hda/hda_intel.c
··· 297 297 /* PCH for HSW/BDW; with runtime PM */ 298 298 /* no i915 binding for this as HSW/BDW has another controller for HDMI */ 299 299 #define AZX_DCAPS_INTEL_PCH \ 300 - (AZX_DCAPS_INTEL_PCH_BASE | AZX_DCAPS_PM_RUNTIME |\ 301 - AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP) 300 + (AZX_DCAPS_INTEL_PCH_BASE | AZX_DCAPS_PM_RUNTIME) 302 301 303 302 /* HSW HDMI */ 304 303 #define AZX_DCAPS_INTEL_HASWELL \ ··· 984 985 display_power(chip, false); 985 986 } 986 987 987 - static void __azx_runtime_resume(struct azx *chip, bool from_rt) 988 + static void __azx_runtime_resume(struct azx *chip) 988 989 { 989 990 struct hda_intel *hda = container_of(chip, struct hda_intel, chip); 990 991 struct hdac_bus *bus = azx_bus(chip); ··· 1001 1002 azx_init_pci(chip); 1002 1003 hda_intel_init_chip(chip, true); 1003 1004 1004 - if (from_rt) { 1005 + /* Avoid codec resume if runtime resume is for system suspend */ 1006 + if (!chip->pm_prepared) { 1005 1007 list_for_each_codec(codec, &chip->bus) { 1006 1008 if (codec->relaxed_resume) 1007 1009 continue; ··· 1018 1018 } 1019 1019 1020 1020 #ifdef CONFIG_PM_SLEEP 1021 + static int azx_prepare(struct device *dev) 1022 + { 1023 + struct snd_card *card = dev_get_drvdata(dev); 1024 + struct azx *chip; 1025 + 1026 + chip = card->private_data; 1027 + chip->pm_prepared = 1; 1028 + 1029 + /* HDA controller always requires different WAKEEN for runtime suspend 1030 + * and system suspend, so don't use direct-complete here. 
1031 + */ 1032 + return 0; 1033 + } 1034 + 1035 + static void azx_complete(struct device *dev) 1036 + { 1037 + struct snd_card *card = dev_get_drvdata(dev); 1038 + struct azx *chip; 1039 + 1040 + chip = card->private_data; 1041 + chip->pm_prepared = 0; 1042 + } 1043 + 1021 1044 static int azx_suspend(struct device *dev) 1022 1045 { 1023 1046 struct snd_card *card = dev_get_drvdata(dev); ··· 1052 1029 1053 1030 chip = card->private_data; 1054 1031 bus = azx_bus(chip); 1055 - snd_power_change_state(card, SNDRV_CTL_POWER_D3hot); 1056 - /* An ugly workaround: direct call of __azx_runtime_suspend() and 1057 - * __azx_runtime_resume() for old Intel platforms that suffer from 1058 - * spurious wakeups after S3 suspend 1059 - */ 1060 - if (chip->driver_caps & AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP) 1061 - __azx_runtime_suspend(chip); 1062 - else 1063 - pm_runtime_force_suspend(dev); 1032 + __azx_runtime_suspend(chip); 1064 1033 if (bus->irq >= 0) { 1065 1034 free_irq(bus->irq, chip); 1066 1035 bus->irq = -1; ··· 1081 1066 if (azx_acquire_irq(chip, 1) < 0) 1082 1067 return -EIO; 1083 1068 1084 - if (chip->driver_caps & AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP) 1085 - __azx_runtime_resume(chip, false); 1086 - else 1087 - pm_runtime_force_resume(dev); 1088 - snd_power_change_state(card, SNDRV_CTL_POWER_D0); 1069 + __azx_runtime_resume(chip); 1089 1070 1090 1071 trace_azx_resume(chip); 1091 1072 return 0; ··· 1129 1118 chip = card->private_data; 1130 1119 1131 1120 /* enable controller wake up event */ 1132 - if (snd_power_get_state(card) == SNDRV_CTL_POWER_D0) { 1133 - azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) | 1134 - STATESTS_INT_MASK); 1135 - } 1121 + azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) | STATESTS_INT_MASK); 1136 1122 1137 1123 __azx_runtime_suspend(chip); 1138 1124 trace_azx_runtime_suspend(chip); ··· 1140 1132 { 1141 1133 struct snd_card *card = dev_get_drvdata(dev); 1142 1134 struct azx *chip; 1143 - bool from_rt = snd_power_get_state(card) == 
SNDRV_CTL_POWER_D0; 1144 1135 1145 1136 if (!azx_is_pm_ready(card)) 1146 1137 return 0; 1147 1138 chip = card->private_data; 1148 - __azx_runtime_resume(chip, from_rt); 1139 + __azx_runtime_resume(chip); 1149 1140 1150 1141 /* disable controller Wake Up event*/ 1151 - if (from_rt) { 1152 - azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) & 1153 - ~STATESTS_INT_MASK); 1154 - } 1142 + azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) & ~STATESTS_INT_MASK); 1155 1143 1156 1144 trace_azx_runtime_resume(chip); 1157 1145 return 0; ··· 1181 1177 static const struct dev_pm_ops azx_pm = { 1182 1178 SET_SYSTEM_SLEEP_PM_OPS(azx_suspend, azx_resume) 1183 1179 #ifdef CONFIG_PM_SLEEP 1180 + .prepare = azx_prepare, 1181 + .complete = azx_complete, 1184 1182 .freeze_noirq = azx_freeze_noirq, 1185 1183 .thaw_noirq = azx_thaw_noirq, 1186 1184 #endif ··· 2362 2356 2363 2357 if (azx_has_pm_runtime(chip)) { 2364 2358 pm_runtime_use_autosuspend(&pci->dev); 2359 + pm_runtime_allow(&pci->dev); 2365 2360 pm_runtime_put_autosuspend(&pci->dev); 2366 2361 } 2367 2362
+56 -11
sound/pci/hda/patch_realtek.c
··· 6008 6008 snd_hda_override_wcaps(codec, 0x03, 0); 6009 6009 } 6010 6010 6011 + static void alc_combo_jack_hp_jd_restart(struct hda_codec *codec) 6012 + { 6013 + switch (codec->core.vendor_id) { 6014 + case 0x10ec0274: 6015 + case 0x10ec0294: 6016 + case 0x10ec0225: 6017 + case 0x10ec0295: 6018 + case 0x10ec0299: 6019 + alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */ 6020 + alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15); 6021 + break; 6022 + case 0x10ec0235: 6023 + case 0x10ec0236: 6024 + case 0x10ec0255: 6025 + case 0x10ec0256: 6026 + alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */ 6027 + alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15); 6028 + break; 6029 + } 6030 + } 6031 + 6011 6032 static void alc295_fixup_chromebook(struct hda_codec *codec, 6012 6033 const struct hda_fixup *fix, int action) 6013 6034 { ··· 6039 6018 spec->ultra_low_power = true; 6040 6019 break; 6041 6020 case HDA_FIXUP_ACT_INIT: 6042 - switch (codec->core.vendor_id) { 6043 - case 0x10ec0295: 6044 - alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */ 6045 - alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15); 6046 - break; 6047 - case 0x10ec0236: 6048 - alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */ 6049 - alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15); 6050 - break; 6051 - } 6021 + alc_combo_jack_hp_jd_restart(codec); 6052 6022 break; 6053 6023 } 6054 6024 } ··· 6093 6081 6094 6082 msleep(100); 6095 6083 alc_write_coef_idx(codec, 0x65, 0x0); 6084 + } 6085 + 6086 + static void alc274_fixup_hp_headset_mic(struct hda_codec *codec, 6087 + const struct hda_fixup *fix, int action) 6088 + { 6089 + switch (action) { 6090 + case HDA_FIXUP_ACT_INIT: 6091 + alc_combo_jack_hp_jd_restart(codec); 6092 + break; 6093 + } 6096 6094 } 6097 6095 6098 6096 /* for hda_fixup_thinkpad_acpi() */ ··· 6299 6277 ALC256_FIXUP_INTEL_NUC8_RUGGED, 6300 6278 ALC255_FIXUP_XIAOMI_HEADSET_MIC, 6301 6279 ALC274_FIXUP_HP_MIC, 6280 + 
ALC274_FIXUP_HP_HEADSET_MIC, 6281 + ALC256_FIXUP_ASUS_HPE, 6302 6282 }; 6303 6283 6304 6284 static const struct hda_fixup alc269_fixups[] = { ··· 7688 7664 { } 7689 7665 }, 7690 7666 }, 7667 + [ALC274_FIXUP_HP_HEADSET_MIC] = { 7668 + .type = HDA_FIXUP_FUNC, 7669 + .v.func = alc274_fixup_hp_headset_mic, 7670 + .chained = true, 7671 + .chain_id = ALC274_FIXUP_HP_MIC 7672 + }, 7673 + [ALC256_FIXUP_ASUS_HPE] = { 7674 + .type = HDA_FIXUP_VERBS, 7675 + .v.verbs = (const struct hda_verb[]) { 7676 + /* Set EAPD high */ 7677 + { 0x20, AC_VERB_SET_COEF_INDEX, 0x0f }, 7678 + { 0x20, AC_VERB_SET_PROC_COEF, 0x7778 }, 7679 + { } 7680 + }, 7681 + .chained = true, 7682 + .chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC 7683 + }, 7691 7684 }; 7692 7685 7693 7686 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 7856 7815 SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED), 7857 7816 SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED), 7858 7817 SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT), 7859 - SND_PCI_QUIRK(0x103c, 0x874e, "HP", ALC274_FIXUP_HP_MIC), 7860 7818 SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED), 7861 7819 SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED), 7862 7820 SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED), ··· 7888 7848 SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 7889 7849 SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 7890 7850 SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC), 7851 + SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE), 7891 7852 SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502), 7892 7853 SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401), 7893 7854 SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS), ··· 8379 8338 SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", 
ALC256_FIXUP_ASUS_MIC_NO_PRESENCE, 8380 8339 {0x1a, 0x90a70130}, 8381 8340 {0x1b, 0x90170110}, 8341 + {0x21, 0x03211020}), 8342 + SND_HDA_PIN_QUIRK(0x10ec0274, 0x103c, "HP", ALC274_FIXUP_HP_HEADSET_MIC, 8343 + {0x17, 0x90170110}, 8344 + {0x19, 0x03a11030}, 8382 8345 {0x21, 0x03211020}), 8383 8346 SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC280_FIXUP_HP_GPIO4, 8384 8347 {0x12, 0x90a60130},
-1
sound/soc/atmel/mchp-spdiftx.c
··· 487 487 } 488 488 mchp_spdiftx_channel_status_write(dev); 489 489 spin_unlock_irqrestore(&ctrl->lock, flags); 490 - mr |= SPDIFTX_MR_VALID1 | SPDIFTX_MR_VALID2; 491 490 492 491 if (dev->gclk_enabled) { 493 492 clk_disable_unprepare(dev->gclk);
+21 -1
sound/soc/codecs/cs42l51.c
··· 254 254 &cs42l51_adcr_mux_controls), 255 255 }; 256 256 257 + static int mclk_event(struct snd_soc_dapm_widget *w, 258 + struct snd_kcontrol *kcontrol, int event) 259 + { 260 + struct snd_soc_component *comp = snd_soc_dapm_to_component(w->dapm); 261 + struct cs42l51_private *cs42l51 = snd_soc_component_get_drvdata(comp); 262 + 263 + switch (event) { 264 + case SND_SOC_DAPM_PRE_PMU: 265 + return clk_prepare_enable(cs42l51->mclk_handle); 266 + case SND_SOC_DAPM_POST_PMD: 267 + /* Delay mclk shutdown to fulfill power-down sequence requirements */ 268 + msleep(20); 269 + clk_disable_unprepare(cs42l51->mclk_handle); 270 + break; 271 + } 272 + 273 + return 0; 274 + } 275 + 257 276 static const struct snd_soc_dapm_widget cs42l51_dapm_mclk_widgets[] = { 258 - SND_SOC_DAPM_CLOCK_SUPPLY("MCLK") 277 + SND_SOC_DAPM_SUPPLY("MCLK", SND_SOC_NOPM, 0, 0, mclk_event, 278 + SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD), 259 279 }; 260 280 261 281 static const struct snd_soc_dapm_route cs42l51_routes[] = {
+1 -1
sound/soc/codecs/wcd9335.c
··· 618 618 "ZERO", "RX_MIX_TX8", "DEC8", "DEC8_192" 619 619 }; 620 620 621 - static const DECLARE_TLV_DB_SCALE(digital_gain, 0, 1, 0); 621 + static const DECLARE_TLV_DB_SCALE(digital_gain, -8400, 100, -8400); 622 622 static const DECLARE_TLV_DB_SCALE(line_gain, 0, 7, 1); 623 623 static const DECLARE_TLV_DB_SCALE(analog_gain, 0, 25, 1); 624 624 static const DECLARE_TLV_DB_SCALE(ear_pa_gain, 0, 150, 0);
+1 -1
sound/soc/codecs/wcd934x.c
··· 551 551 struct soc_bytes_ext bytes_ext; 552 552 }; 553 553 554 - static const DECLARE_TLV_DB_SCALE(digital_gain, 0, 1, 0); 554 + static const DECLARE_TLV_DB_SCALE(digital_gain, -8400, 100, -8400); 555 555 static const DECLARE_TLV_DB_SCALE(line_gain, 0, 7, 1); 556 556 static const DECLARE_TLV_DB_SCALE(analog_gain, 0, 25, 1); 557 557 static const DECLARE_TLV_DB_SCALE(ear_pa_gain, 0, 150, 0);
+2
sound/soc/codecs/wsa881x.c
··· 1026 1026 .id = 0, 1027 1027 .playback = { 1028 1028 .stream_name = "SPKR Playback", 1029 + .rates = SNDRV_PCM_RATE_48000, 1030 + .formats = SNDRV_PCM_FMTBIT_S16_LE, 1029 1031 .rate_max = 48000, 1030 1032 .rate_min = 48000, 1031 1033 .channels_min = 1,
-18
sound/soc/intel/Kconfig
··· 15 15 16 16 if SND_SOC_INTEL_SST_TOPLEVEL 17 17 18 - config SND_SST_IPC 19 - tristate 20 - # This option controls the IPC core for HiFi2 platforms 21 - 22 - config SND_SST_IPC_PCI 23 - tristate 24 - select SND_SST_IPC 25 - # This option controls the PCI-based IPC for HiFi2 platforms 26 - # (Medfield, Merrifield). 27 - 28 - config SND_SST_IPC_ACPI 29 - tristate 30 - select SND_SST_IPC 31 - # This option controls the ACPI-based IPC for HiFi2 platforms 32 - # (Baytrail, Cherrytrail) 33 - 34 18 config SND_SOC_INTEL_SST 35 19 tristate 36 20 ··· 41 57 config SND_SST_ATOM_HIFI2_PLATFORM_PCI 42 58 tristate "PCI HiFi2 (Merrifield) Platforms" 43 59 depends on X86 && PCI 44 - select SND_SST_IPC_PCI 45 60 select SND_SST_ATOM_HIFI2_PLATFORM 46 61 help 47 62 If you have a Intel Merrifield/Edison platform, then ··· 53 70 tristate "ACPI HiFi2 (Baytrail, Cherrytrail) Platforms" 54 71 default ACPI 55 72 depends on X86 && ACPI && PCI 56 - select SND_SST_IPC_ACPI 57 73 select SND_SST_ATOM_HIFI2_PLATFORM 58 74 select SND_SOC_ACPI_INTEL_MATCH 59 75 select IOSF_MBI
+1 -1
sound/soc/intel/atom/Makefile
··· 6 6 obj-$(CONFIG_SND_SST_ATOM_HIFI2_PLATFORM) += snd-soc-sst-atom-hifi2-platform.o 7 7 8 8 # DSP driver 9 - obj-$(CONFIG_SND_SST_IPC) += sst/ 9 + obj-$(CONFIG_SND_SST_ATOM_HIFI2_PLATFORM) += sst/
+3 -3
sound/soc/intel/atom/sst/Makefile
··· 3 3 snd-intel-sst-pci-objs += sst_pci.o 4 4 snd-intel-sst-acpi-objs += sst_acpi.o 5 5 6 - obj-$(CONFIG_SND_SST_IPC) += snd-intel-sst-core.o 7 - obj-$(CONFIG_SND_SST_IPC_PCI) += snd-intel-sst-pci.o 8 - obj-$(CONFIG_SND_SST_IPC_ACPI) += snd-intel-sst-acpi.o 6 + obj-$(CONFIG_SND_SST_ATOM_HIFI2_PLATFORM) += snd-intel-sst-core.o 7 + obj-$(CONFIG_SND_SST_ATOM_HIFI2_PLATFORM_PCI) += snd-intel-sst-pci.o 8 + obj-$(CONFIG_SND_SST_ATOM_HIFI2_PLATFORM_ACPI) += snd-intel-sst-acpi.o
+31 -8
sound/soc/intel/boards/kbl_rt5663_max98927.c
··· 401 401 struct snd_interval *chan = hw_param_interval(params, 402 402 SNDRV_PCM_HW_PARAM_CHANNELS); 403 403 struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT); 404 - struct snd_soc_dpcm *dpcm = container_of( 405 - params, struct snd_soc_dpcm, hw_params); 406 - struct snd_soc_dai_link *fe_dai_link = dpcm->fe->dai_link; 407 - struct snd_soc_dai_link *be_dai_link = dpcm->be->dai_link; 404 + struct snd_soc_dpcm *dpcm, *rtd_dpcm = NULL; 405 + 406 + /* 407 + * The following loop will be called only for playback stream 408 + * In this platform, there is only one playback device on every SSP 409 + */ 410 + for_each_dpcm_fe(rtd, SNDRV_PCM_STREAM_PLAYBACK, dpcm) { 411 + rtd_dpcm = dpcm; 412 + break; 413 + } 414 + 415 + /* 416 + * This following loop will be called only for capture stream 417 + * In this platform, there is only one capture device on every SSP 418 + */ 419 + for_each_dpcm_fe(rtd, SNDRV_PCM_STREAM_CAPTURE, dpcm) { 420 + rtd_dpcm = dpcm; 421 + break; 422 + } 423 + 424 + if (!rtd_dpcm) 425 + return -EINVAL; 426 + 427 + /* 428 + * The above 2 loops are mutually exclusive based on the stream direction, 429 + * thus rtd_dpcm variable will never be overwritten 430 + */ 408 431 409 432 /* 410 433 * The ADSP will convert the FE rate to 48k, stereo, 24 bit 411 434 */ 412 - if (!strcmp(fe_dai_link->name, "Kbl Audio Port") || 413 - !strcmp(fe_dai_link->name, "Kbl Audio Headset Playback") || 414 - !strcmp(fe_dai_link->name, "Kbl Audio Capture Port")) { 435 + if (!strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Port") || 436 + !strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Headset Playback") || 437 + !strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Capture Port")) { 415 438 rate->min = rate->max = 48000; 416 439 chan->min = chan->max = 2; 417 440 snd_mask_none(fmt); ··· 444 421 * The speaker on the SSP0 supports S16_LE and not S24_LE. 
445 422 * thus changing the mask here 446 423 */ 447 - if (!strcmp(be_dai_link->name, "SSP0-Codec")) 424 + if (!strcmp(rtd_dpcm->be->dai_link->name, "SSP0-Codec")) 448 425 snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S16_LE); 449 426 450 427 return 0;
+6 -3
sound/soc/intel/catpt/dsp.c
··· 267 267 reg, (reg & CATPT_ISD_DCPWM), 268 268 500, 10000); 269 269 if (ret) { 270 - dev_err(cdev->dev, "await WAITI timeout\n"); 271 - mutex_unlock(&cdev->clk_mutex); 272 - return ret; 270 + dev_warn(cdev->dev, "await WAITI timeout\n"); 271 + /* no signal - only high clock selection allowed */ 272 + if (lp) { 273 + mutex_unlock(&cdev->clk_mutex); 274 + return 0; 275 + } 273 276 } 274 277 } 275 278
+10
sound/soc/intel/catpt/pcm.c
··· 667 667 break; 668 668 } 669 669 670 + /* see if this is a new configuration */ 671 + if (!memcmp(&cdev->devfmt[devfmt.iface], &devfmt, sizeof(devfmt))) 672 + return 0; 673 + 674 + pm_runtime_get_sync(cdev->dev); 675 + 670 676 ret = catpt_ipc_set_device_format(cdev, &devfmt); 677 + 678 + pm_runtime_mark_last_busy(cdev->dev); 679 + pm_runtime_put_autosuspend(cdev->dev); 680 + 671 681 if (ret) 672 682 return CATPT_IPC_ERROR(ret); 673 683
+25 -6
sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
··· 630 630 }, 631 631 }; 632 632 633 + static const struct snd_kcontrol_new mt8183_da7219_rt1015_snd_controls[] = { 634 + SOC_DAPM_PIN_SWITCH("Left Spk"), 635 + SOC_DAPM_PIN_SWITCH("Right Spk"), 636 + }; 637 + 638 + static const 639 + struct snd_soc_dapm_widget mt8183_da7219_rt1015_dapm_widgets[] = { 640 + SND_SOC_DAPM_SPK("Left Spk", NULL), 641 + SND_SOC_DAPM_SPK("Right Spk", NULL), 642 + SND_SOC_DAPM_PINCTRL("TDM_OUT_PINCTRL", 643 + "aud_tdm_out_on", "aud_tdm_out_off"), 644 + }; 645 + 646 + static const struct snd_soc_dapm_route mt8183_da7219_rt1015_dapm_routes[] = { 647 + {"Left Spk", NULL, "Left SPO"}, 648 + {"Right Spk", NULL, "Right SPO"}, 649 + {"I2S Playback", NULL, "TDM_OUT_PINCTRL"}, 650 + }; 651 + 633 652 static struct snd_soc_card mt8183_da7219_rt1015_card = { 634 653 .name = "mt8183_da7219_rt1015", 635 654 .owner = THIS_MODULE, 636 - .controls = mt8183_da7219_max98357_snd_controls, 637 - .num_controls = ARRAY_SIZE(mt8183_da7219_max98357_snd_controls), 638 - .dapm_widgets = mt8183_da7219_max98357_dapm_widgets, 639 - .num_dapm_widgets = ARRAY_SIZE(mt8183_da7219_max98357_dapm_widgets), 640 - .dapm_routes = mt8183_da7219_max98357_dapm_routes, 641 - .num_dapm_routes = ARRAY_SIZE(mt8183_da7219_max98357_dapm_routes), 655 + .controls = mt8183_da7219_rt1015_snd_controls, 656 + .num_controls = ARRAY_SIZE(mt8183_da7219_rt1015_snd_controls), 657 + .dapm_widgets = mt8183_da7219_rt1015_dapm_widgets, 658 + .num_dapm_widgets = ARRAY_SIZE(mt8183_da7219_rt1015_dapm_widgets), 659 + .dapm_routes = mt8183_da7219_rt1015_dapm_routes, 660 + .num_dapm_routes = ARRAY_SIZE(mt8183_da7219_rt1015_dapm_routes), 642 661 .dai_link = mt8183_da7219_dai_links, 643 662 .num_links = ARRAY_SIZE(mt8183_da7219_dai_links), 644 663 .aux_dev = &mt8183_da7219_max98357_headset_dev,
+10 -4
sound/soc/qcom/lpass-cpu.c
··· 80 80 dev_err(dai->dev, "error in enabling mi2s osr clk: %d\n", ret); 81 81 return ret; 82 82 } 83 + ret = clk_prepare(drvdata->mi2s_bit_clk[dai->driver->id]); 84 + if (ret) { 85 + dev_err(dai->dev, "error in enabling mi2s bit clk: %d\n", ret); 86 + clk_disable_unprepare(drvdata->mi2s_osr_clk[dai->driver->id]); 87 + return ret; 88 + } 83 89 return 0; 84 90 } 85 91 ··· 94 88 { 95 89 struct lpass_data *drvdata = snd_soc_dai_get_drvdata(dai); 96 90 97 - clk_disable_unprepare(drvdata->mi2s_bit_clk[dai->driver->id]); 98 - 99 91 clk_disable_unprepare(drvdata->mi2s_osr_clk[dai->driver->id]); 92 + clk_unprepare(drvdata->mi2s_bit_clk[dai->driver->id]); 100 93 } 101 94 102 95 static int lpass_cpu_daiops_hw_params(struct snd_pcm_substream *substream, ··· 308 303 dev_err(dai->dev, "error writing to i2sctl reg: %d\n", 309 304 ret); 310 305 311 - ret = clk_prepare_enable(drvdata->mi2s_bit_clk[id]); 306 + ret = clk_enable(drvdata->mi2s_bit_clk[id]); 312 307 if (ret) { 313 308 dev_err(dai->dev, "error in enabling mi2s bit clk: %d\n", ret); 314 - clk_disable_unprepare(drvdata->mi2s_osr_clk[id]); 309 + clk_disable(drvdata->mi2s_osr_clk[id]); 315 310 return ret; 316 311 } 317 312 ··· 329 324 if (ret) 330 325 dev_err(dai->dev, "error writing to i2sctl reg: %d\n", 331 326 ret); 327 + clk_disable(drvdata->mi2s_bit_clk[dai->driver->id]); 332 328 break; 333 329 } 334 330
+1 -1
sound/soc/qcom/lpass-sc7180.c
··· 188 188 .micmode = REG_FIELD_ID(0x1000, 4, 8, 3, 0x1000), 189 189 .micmono = REG_FIELD_ID(0x1000, 3, 3, 3, 0x1000), 190 190 .wssrc = REG_FIELD_ID(0x1000, 2, 2, 3, 0x1000), 191 - .bitwidth = REG_FIELD_ID(0x1000, 0, 0, 3, 0x1000), 191 + .bitwidth = REG_FIELD_ID(0x1000, 0, 1, 3, 0x1000), 192 192 193 193 .rdma_dyncclk = REG_FIELD_ID(0xC000, 21, 21, 5, 0x1000), 194 194 .rdma_bursten = REG_FIELD_ID(0xC000, 20, 20, 5, 0x1000),
+2
sound/soc/qcom/sdm845.c
··· 17 17 #include "qdsp6/q6afe.h" 18 18 #include "../codecs/rt5663.h" 19 19 20 + #define DRIVER_NAME "sdm845" 20 21 #define DEFAULT_SAMPLE_RATE_48K 48000 21 22 #define DEFAULT_MCLK_RATE 24576000 22 23 #define TDM_BCLK_RATE 6144000 ··· 553 552 if (!data) 554 553 return -ENOMEM; 555 554 555 + card->driver_name = DRIVER_NAME; 556 556 card->dapm_widgets = sdm845_snd_widgets; 557 557 card->num_dapm_widgets = ARRAY_SIZE(sdm845_snd_widgets); 558 558 card->dev = dev;
+1 -1
sound/soc/soc-core.c
··· 2341 2341 } 2342 2342 2343 2343 /** 2344 - * snd_soc_unregister_dai - Unregister DAIs from the ASoC core 2344 + * snd_soc_unregister_dais - Unregister DAIs from the ASoC core 2345 2345 * 2346 2346 * @component: The component for which the DAIs should be unregistered 2347 2347 */
+1 -1
sound/soc/soc-dapm.c
··· 1276 1276 } 1277 1277 1278 1278 /** 1279 - * snd_soc_dapm_get_connected_widgets - query audio path and it's widgets. 1279 + * snd_soc_dapm_dai_get_connected_widgets - query audio path and it's widgets. 1280 1280 * @dai: the soc DAI. 1281 1281 * @stream: stream direction. 1282 1282 * @list: list of active widgets for this stream.
+5
sound/soc/sof/loader.c
··· 118 118 case SOF_IPC_EXT_CC_INFO: 119 119 ret = get_cc_info(sdev, ext_hdr); 120 120 break; 121 + case SOF_IPC_EXT_UNUSED: 122 + case SOF_IPC_EXT_PROBE_INFO: 123 + case SOF_IPC_EXT_USER_ABI_INFO: 124 + /* They are supported but we don't do anything here */ 125 + break; 121 126 default: 122 127 dev_warn(sdev->dev, "warning: unknown ext header type %d size 0x%x\n", 123 128 ext_hdr->type, ext_hdr->hdr.size);
+6
sound/usb/pcm.c
··· 336 336 switch (subs->stream->chip->usb_id) { 337 337 case USB_ID(0x0763, 0x2030): /* M-Audio Fast Track C400 */ 338 338 case USB_ID(0x0763, 0x2031): /* M-Audio Fast Track C600 */ 339 + case USB_ID(0x22f0, 0x0006): /* Allen&Heath Qu-16 */ 339 340 ep = 0x81; 340 341 ifnum = 3; 341 342 goto add_sync_ep_from_ifnum; ··· 346 345 ifnum = 2; 347 346 goto add_sync_ep_from_ifnum; 348 347 case USB_ID(0x2466, 0x8003): /* Fractal Audio Axe-Fx II */ 348 + case USB_ID(0x0499, 0x172a): /* Yamaha MODX */ 349 349 ep = 0x86; 350 350 ifnum = 2; 351 351 goto add_sync_ep_from_ifnum; 352 352 case USB_ID(0x2466, 0x8010): /* Fractal Audio Axe-Fx III */ 353 353 ep = 0x81; 354 + ifnum = 2; 355 + goto add_sync_ep_from_ifnum; 356 + case USB_ID(0x1686, 0xf029): /* Zoom UAC-2 */ 357 + ep = 0x82; 354 358 ifnum = 2; 355 359 goto add_sync_ep_from_ifnum; 356 360 case USB_ID(0x1397, 0x0001): /* Behringer UFX1604 */
+1
sound/usb/quirks.c
··· 1800 1800 case 0x278b: /* Rotel? */ 1801 1801 case 0x292b: /* Gustard/Ess based devices */ 1802 1802 case 0x2ab6: /* T+A devices */ 1803 + case 0x3353: /* Khadas devices */ 1803 1804 case 0x3842: /* EVGA */ 1804 1805 case 0xc502: /* HiBy devices */ 1805 1806 if (fp->dsd_raw)
+25
tools/arch/arm64/include/uapi/asm/kvm.h
··· 159 159 struct kvm_arch_memory_slot { 160 160 }; 161 161 162 + /* 163 + * PMU filter structure. Describe a range of events with a particular 164 + * action. To be used with KVM_ARM_VCPU_PMU_V3_FILTER. 165 + */ 166 + struct kvm_pmu_event_filter { 167 + __u16 base_event; 168 + __u16 nevents; 169 + 170 + #define KVM_PMU_EVENT_ALLOW 0 171 + #define KVM_PMU_EVENT_DENY 1 172 + 173 + __u8 action; 174 + __u8 pad[3]; 175 + }; 176 + 162 177 /* for KVM_GET/SET_VCPU_EVENTS */ 163 178 struct kvm_vcpu_events { 164 179 struct { ··· 257 242 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL 0 258 243 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL 1 259 244 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_REQUIRED 2 245 + 246 + /* 247 + * Only two states can be presented by the host kernel: 248 + * - NOT_REQUIRED: the guest doesn't need to do anything 249 + * - NOT_AVAIL: the guest isn't mitigated (it can still use SSBS if available) 250 + * 251 + * All the other values are deprecated. The host still accepts all 252 + * values (they are ABI), but will narrow them to the above two. 253 + */ 260 254 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2 KVM_REG_ARM_FW_REG(2) 261 255 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL 0 262 256 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN 1 ··· 353 329 #define KVM_ARM_VCPU_PMU_V3_CTRL 0 354 330 #define KVM_ARM_VCPU_PMU_V3_IRQ 0 355 331 #define KVM_ARM_VCPU_PMU_V3_INIT 1 332 + #define KVM_ARM_VCPU_PMU_V3_FILTER 2 356 333 #define KVM_ARM_VCPU_TIMER_CTRL 1 357 334 #define KVM_ARM_VCPU_TIMER_IRQ_VTIMER 0 358 335 #define KVM_ARM_VCPU_TIMER_IRQ_PTIMER 1
+1 -1
tools/arch/s390/include/uapi/asm/sie.h
··· 29 29 { 0x13, "SIGP conditional emergency signal" }, \ 30 30 { 0x15, "SIGP sense running" }, \ 31 31 { 0x16, "SIGP set multithreading"}, \ 32 - { 0x17, "SIGP store additional status ait address"} 32 + { 0x17, "SIGP store additional status at address"} 33 33 34 34 #define icpt_prog_codes \ 35 35 { 0x0001, "Prog Operation" }, \
+5 -1
tools/arch/x86/include/asm/cpufeatures.h
··· 96 96 #define X86_FEATURE_SYSCALL32 ( 3*32+14) /* "" syscall in IA32 userspace */ 97 97 #define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */ 98 98 #define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */ 99 - /* free ( 3*32+17) */ 99 + #define X86_FEATURE_SME_COHERENT ( 3*32+17) /* "" AMD hardware-enforced cache coherency */ 100 100 #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" LFENCE synchronizes RDTSC */ 101 101 #define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */ 102 102 #define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */ ··· 236 236 #define X86_FEATURE_EPT_AD ( 8*32+17) /* Intel Extended Page Table access-dirty bit */ 237 237 #define X86_FEATURE_VMCALL ( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */ 238 238 #define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */ 239 + #define X86_FEATURE_SEV_ES ( 8*32+20) /* AMD Secure Encrypted Virtualization - Encrypted State */ 239 240 240 241 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */ 241 242 #define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/ ··· 289 288 #define X86_FEATURE_FENCE_SWAPGS_USER (11*32+ 4) /* "" LFENCE in user entry SWAPGS path */ 290 289 #define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */ 291 290 #define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* #AC for split lock */ 291 + #define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */ 292 292 293 293 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ 294 294 #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */ ··· 355 353 #define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */ 356 354 #define X86_FEATURE_MOVDIRI (16*32+27) /* MOVDIRI instruction */ 357 355 #define X86_FEATURE_MOVDIR64B (16*32+28) /* MOVDIR64B instruction */ 356 + #define X86_FEATURE_ENQCMD (16*32+29) /* ENQCMD and ENQCMDS instructions */ 358 357 359 358 /* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */ 360 359 #define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* MCA overflow recovery support */ ··· 371 368 #define X86_FEATURE_MD_CLEAR (18*32+10) /* VERW clears CPU buffers */ 372 369 #define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */ 373 370 #define X86_FEATURE_SERIALIZE (18*32+14) /* SERIALIZE instruction */ 371 + #define X86_FEATURE_TSXLDTRK (18*32+16) /* TSX Suspend Load Address Tracking */ 374 372 #define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */ 375 373 #define X86_FEATURE_ARCH_LBR (18*32+19) /* Intel ARCH LBR */ 376 374 #define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+8 -1
tools/arch/x86/include/asm/disabled-features.h
··· 56 56 # define DISABLE_PTI (1 << (X86_FEATURE_PTI & 31)) 57 57 #endif 58 58 59 + #ifdef CONFIG_IOMMU_SUPPORT 60 + # define DISABLE_ENQCMD 0 61 + #else 62 + # define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31)) 63 + #endif 64 + 59 65 /* 60 66 * Make sure to add features to the correct mask 61 67 */ ··· 81 75 #define DISABLED_MASK13 0 82 76 #define DISABLED_MASK14 0 83 77 #define DISABLED_MASK15 0 84 - #define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP) 78 + #define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP| \ 79 + DISABLE_ENQCMD) 85 80 #define DISABLED_MASK17 0 86 81 #define DISABLED_MASK18 0 87 82 #define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
+10
tools/arch/x86/include/asm/msr-index.h
··· 257 257 #define MSR_IA32_LASTINTFROMIP 0x000001dd 258 258 #define MSR_IA32_LASTINTTOIP 0x000001de 259 259 260 + #define MSR_IA32_PASID 0x00000d93 261 + #define MSR_IA32_PASID_VALID BIT_ULL(31) 262 + 260 263 /* DEBUGCTLMSR bits (others vary by model): */ 261 264 #define DEBUGCTLMSR_LBR (1UL << 0) /* last branch recording */ 262 265 #define DEBUGCTLMSR_BTF_SHIFT 1 ··· 467 464 #define MSR_AMD64_IBSOP_REG_MASK ((1UL<<MSR_AMD64_IBSOP_REG_COUNT)-1) 468 465 #define MSR_AMD64_IBSCTL 0xc001103a 469 466 #define MSR_AMD64_IBSBRTARGET 0xc001103b 467 + #define MSR_AMD64_ICIBSEXTDCTL 0xc001103c 470 468 #define MSR_AMD64_IBSOPDATA4 0xc001103d 471 469 #define MSR_AMD64_IBS_REG_COUNT_MAX 8 /* includes MSR_AMD64_IBSBRTARGET */ 470 + #define MSR_AMD64_SEV_ES_GHCB 0xc0010130 472 471 #define MSR_AMD64_SEV 0xc0010131 473 472 #define MSR_AMD64_SEV_ENABLED_BIT 0 473 + #define MSR_AMD64_SEV_ES_ENABLED_BIT 1 474 474 #define MSR_AMD64_SEV_ENABLED BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT) 475 + #define MSR_AMD64_SEV_ES_ENABLED BIT_ULL(MSR_AMD64_SEV_ES_ENABLED_BIT) 475 476 476 477 #define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f 477 478 ··· 864 857 #define MSR_CORE_PERF_FIXED_CTR0 0x00000309 865 858 #define MSR_CORE_PERF_FIXED_CTR1 0x0000030a 866 859 #define MSR_CORE_PERF_FIXED_CTR2 0x0000030b 860 + #define MSR_CORE_PERF_FIXED_CTR3 0x0000030c 867 861 #define MSR_CORE_PERF_FIXED_CTR_CTRL 0x0000038d 868 862 #define MSR_CORE_PERF_GLOBAL_STATUS 0x0000038e 869 863 #define MSR_CORE_PERF_GLOBAL_CTRL 0x0000038f 870 864 #define MSR_CORE_PERF_GLOBAL_OVF_CTRL 0x00000390 865 + 866 + #define MSR_PERF_METRICS 0x00000329 871 867 872 868 /* PERF_GLOBAL_OVF_CTL bits */ 873 869 #define MSR_CORE_PERF_GLOBAL_OVF_CTRL_TRACE_TOPA_PMI_BIT 55
+1 -1
tools/arch/x86/include/asm/required-features.h
··· 54 54 #endif 55 55 56 56 #ifdef CONFIG_X86_64 57 - #ifdef CONFIG_PARAVIRT 57 + #ifdef CONFIG_PARAVIRT_XXL 58 58 /* Paravirtualized systems may not have PSE or PGE available */ 59 59 #define NEED_PSE 0 60 60 #define NEED_PGE 0
+20
tools/arch/x86/include/uapi/asm/kvm.h
··· 192 192 __u32 indices[0]; 193 193 }; 194 194 195 + /* Maximum size of any access bitmap in bytes */ 196 + #define KVM_MSR_FILTER_MAX_BITMAP_SIZE 0x600 197 + 198 + /* for KVM_X86_SET_MSR_FILTER */ 199 + struct kvm_msr_filter_range { 200 + #define KVM_MSR_FILTER_READ (1 << 0) 201 + #define KVM_MSR_FILTER_WRITE (1 << 1) 202 + __u32 flags; 203 + __u32 nmsrs; /* number of msrs in bitmap */ 204 + __u32 base; /* MSR index the bitmap starts at */ 205 + __u8 *bitmap; /* a 1 bit allows the operations in flags, 0 denies */ 206 + }; 207 + 208 + #define KVM_MSR_FILTER_MAX_RANGES 16 209 + struct kvm_msr_filter { 210 + #define KVM_MSR_FILTER_DEFAULT_ALLOW (0 << 0) 211 + #define KVM_MSR_FILTER_DEFAULT_DENY (1 << 0) 212 + __u32 flags; 213 + struct kvm_msr_filter_range ranges[KVM_MSR_FILTER_MAX_RANGES]; 214 + }; 195 215 196 216 struct kvm_cpuid_entry { 197 217 __u32 function;
+13
tools/arch/x86/include/uapi/asm/svm.h
··· 29 29 #define SVM_EXIT_WRITE_DR6 0x036 30 30 #define SVM_EXIT_WRITE_DR7 0x037 31 31 #define SVM_EXIT_EXCP_BASE 0x040 32 + #define SVM_EXIT_LAST_EXCP 0x05f 32 33 #define SVM_EXIT_INTR 0x060 33 34 #define SVM_EXIT_NMI 0x061 34 35 #define SVM_EXIT_SMI 0x062 ··· 77 76 #define SVM_EXIT_MWAIT_COND 0x08c 78 77 #define SVM_EXIT_XSETBV 0x08d 79 78 #define SVM_EXIT_RDPRU 0x08e 79 + #define SVM_EXIT_INVPCID 0x0a2 80 80 #define SVM_EXIT_NPF 0x400 81 81 #define SVM_EXIT_AVIC_INCOMPLETE_IPI 0x401 82 82 #define SVM_EXIT_AVIC_UNACCELERATED_ACCESS 0x402 83 + 84 + /* SEV-ES software-defined VMGEXIT events */ 85 + #define SVM_VMGEXIT_MMIO_READ 0x80000001 86 + #define SVM_VMGEXIT_MMIO_WRITE 0x80000002 87 + #define SVM_VMGEXIT_NMI_COMPLETE 0x80000003 88 + #define SVM_VMGEXIT_AP_HLT_LOOP 0x80000004 89 + #define SVM_VMGEXIT_AP_JUMP_TABLE 0x80000005 90 + #define SVM_VMGEXIT_SET_AP_JUMP_TABLE 0 91 + #define SVM_VMGEXIT_GET_AP_JUMP_TABLE 1 92 + #define SVM_VMGEXIT_UNSUPPORTED_EVENT 0x8000ffff 83 93 84 94 #define SVM_EXIT_ERR -1 85 95 ··· 183 171 { SVM_EXIT_MONITOR, "monitor" }, \ 184 172 { SVM_EXIT_MWAIT, "mwait" }, \ 185 173 { SVM_EXIT_XSETBV, "xsetbv" }, \ 174 + { SVM_EXIT_INVPCID, "invpcid" }, \ 186 175 { SVM_EXIT_NPF, "npf" }, \ 187 176 { SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \ 188 177 { SVM_EXIT_AVIC_UNACCELERATED_ACCESS, "avic_unaccelerated_access" }, \
-1
tools/build/feature/test-all.c
··· 185 185 main_test_libperl(); 186 186 main_test_hello(); 187 187 main_test_libelf(); 188 - main_test_libelf_mmap(); 189 188 main_test_get_current_dir_name(); 190 189 main_test_gettid(); 191 190 main_test_glibc();
-12
tools/include/linux/compiler-gcc.h
··· 27 27 #define __pure __attribute__((pure)) 28 28 #endif 29 29 #define noinline __attribute__((noinline)) 30 - #ifdef __has_attribute 31 - #if __has_attribute(disable_tail_calls) 32 - #define __no_tail_call __attribute__((disable_tail_calls)) 33 - #endif 34 - #endif 35 - #ifndef __no_tail_call 36 - #if GCC_VERSION > 40201 37 - #define __no_tail_call __attribute__((optimize("no-optimize-sibling-calls"))) 38 - #else 39 - #define __no_tail_call 40 - #endif 41 - #endif 42 30 #ifndef __packed 43 31 #define __packed __attribute__((packed)) 44 32 #endif
-3
tools/include/linux/compiler.h
··· 47 47 #ifndef noinline 48 48 #define noinline 49 49 #endif 50 - #ifndef __no_tail_call 51 - #define __no_tail_call 52 - #endif 53 50 54 51 /* Are two types/vars the same type (ignoring qualifiers)? */ 55 52 #ifndef __same_type
+3 -1
tools/include/uapi/asm-generic/unistd.h
··· 857 857 __SYSCALL(__NR_pidfd_getfd, sys_pidfd_getfd) 858 858 #define __NR_faccessat2 439 859 859 __SYSCALL(__NR_faccessat2, sys_faccessat2) 860 + #define __NR_process_madvise 440 861 + __SYSCALL(__NR_process_madvise, sys_process_madvise) 860 862 861 863 #undef __NR_syscalls 862 - #define __NR_syscalls 440 864 + #define __NR_syscalls 441 863 865 864 866 /* 865 867 * 32 bit systems traditionally used different
+56 -3
tools/include/uapi/drm/i915_drm.h
··· 619 619 */ 620 620 #define I915_PARAM_PERF_REVISION 54 621 621 622 + /* Query whether DRM_I915_GEM_EXECBUFFER2 supports supplying an array of 623 + * timeline syncobj through drm_i915_gem_execbuffer_ext_timeline_fences. See 624 + * I915_EXEC_USE_EXTENSIONS. 625 + */ 626 + #define I915_PARAM_HAS_EXEC_TIMELINE_FENCES 55 627 + 622 628 /* Must be kept compact -- no holes and well documented */ 623 629 624 630 typedef struct drm_i915_getparam { ··· 1052 1046 __u32 flags; 1053 1047 }; 1054 1048 1049 + /** 1050 + * See drm_i915_gem_execbuffer_ext_timeline_fences. 1051 + */ 1052 + #define DRM_I915_GEM_EXECBUFFER_EXT_TIMELINE_FENCES 0 1053 + 1054 + /** 1055 + * This structure describes an array of drm_syncobj and associated points for 1056 + * timeline variants of drm_syncobj. It is invalid to append this structure to 1057 + * the execbuf if I915_EXEC_FENCE_ARRAY is set. 1058 + */ 1059 + struct drm_i915_gem_execbuffer_ext_timeline_fences { 1060 + struct i915_user_extension base; 1061 + 1062 + /** 1063 + * Number of element in the handles_ptr & value_ptr arrays. 1064 + */ 1065 + __u64 fence_count; 1066 + 1067 + /** 1068 + * Pointer to an array of struct drm_i915_gem_exec_fence of length 1069 + * fence_count. 1070 + */ 1071 + __u64 handles_ptr; 1072 + 1073 + /** 1074 + * Pointer to an array of u64 values of length fence_count. Values 1075 + * must be 0 for a binary drm_syncobj. A Value of 0 for a timeline 1076 + * drm_syncobj is invalid as it turns a drm_syncobj into a binary one. 1077 + */ 1078 + __u64 values_ptr; 1079 + }; 1080 + 1055 1081 struct drm_i915_gem_execbuffer2 { 1056 1082 /** 1057 1083 * List of gem_exec_object2 structs ··· 1100 1062 __u32 num_cliprects; 1101 1063 /** 1102 1064 * This is a struct drm_clip_rect *cliprects if I915_EXEC_FENCE_ARRAY 1103 - * is not set. If I915_EXEC_FENCE_ARRAY is set, then this is a 1104 - * struct drm_i915_gem_exec_fence *fences. 1065 + * & I915_EXEC_USE_EXTENSIONS are not set. 
1066 + * 1067 + * If I915_EXEC_FENCE_ARRAY is set, then this is a pointer to an array 1068 + * of struct drm_i915_gem_exec_fence and num_cliprects is the length 1069 + * of the array. 1070 + * 1071 + * If I915_EXEC_USE_EXTENSIONS is set, then this is a pointer to a 1072 + * single struct i915_user_extension and num_cliprects is 0. 1105 1073 */ 1106 1074 __u64 cliprects_ptr; 1107 1075 #define I915_EXEC_RING_MASK (0x3f) ··· 1225 1181 */ 1226 1182 #define I915_EXEC_FENCE_SUBMIT (1 << 20) 1227 1183 1228 - #define __I915_EXEC_UNKNOWN_FLAGS (-(I915_EXEC_FENCE_SUBMIT << 1)) 1184 + /* 1185 + * Setting I915_EXEC_USE_EXTENSIONS implies that 1186 + * drm_i915_gem_execbuffer2.cliprects_ptr is treated as a pointer to an linked 1187 + * list of i915_user_extension. Each i915_user_extension node is the base of a 1188 + * larger structure. The list of supported structures are listed in the 1189 + * drm_i915_gem_execbuffer_ext enum. 1190 + */ 1191 + #define I915_EXEC_USE_EXTENSIONS (1 << 21) 1192 + 1193 + #define __I915_EXEC_UNKNOWN_FLAGS (-(I915_EXEC_USE_EXTENSIONS << 1)) 1229 1194 1230 1195 #define I915_EXEC_CONTEXT_ID_MASK (0xffffffff) 1231 1196 #define i915_execbuffer2_set_context_id(eb2, context) \
+3 -3
tools/include/uapi/linux/fscrypt.h
··· 45 45 __u8 flags; 46 46 __u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE]; 47 47 }; 48 - #define fscrypt_policy fscrypt_policy_v1 49 48 50 49 /* 51 50 * Process-subscribed "logon" key description prefix and payload format. ··· 155 156 __u32 __out_reserved[13]; 156 157 }; 157 158 158 - #define FS_IOC_SET_ENCRYPTION_POLICY _IOR('f', 19, struct fscrypt_policy) 159 + #define FS_IOC_SET_ENCRYPTION_POLICY _IOR('f', 19, struct fscrypt_policy_v1) 159 160 #define FS_IOC_GET_ENCRYPTION_PWSALT _IOW('f', 20, __u8[16]) 160 - #define FS_IOC_GET_ENCRYPTION_POLICY _IOW('f', 21, struct fscrypt_policy) 161 + #define FS_IOC_GET_ENCRYPTION_POLICY _IOW('f', 21, struct fscrypt_policy_v1) 161 162 #define FS_IOC_GET_ENCRYPTION_POLICY_EX _IOWR('f', 22, __u8[9]) /* size + version */ 162 163 #define FS_IOC_ADD_ENCRYPTION_KEY _IOWR('f', 23, struct fscrypt_add_key_arg) 163 164 #define FS_IOC_REMOVE_ENCRYPTION_KEY _IOWR('f', 24, struct fscrypt_remove_key_arg) ··· 169 170 170 171 /* old names; don't add anything new here! */ 171 172 #ifndef __KERNEL__ 173 + #define fscrypt_policy fscrypt_policy_v1 172 174 #define FS_KEY_DESCRIPTOR_SIZE FSCRYPT_KEY_DESCRIPTOR_SIZE 173 175 #define FS_POLICY_FLAGS_PAD_4 FSCRYPT_POLICY_FLAGS_PAD_4 174 176 #define FS_POLICY_FLAGS_PAD_8 FSCRYPT_POLICY_FLAGS_PAD_8
+19
tools/include/uapi/linux/kvm.h
··· 248 248 #define KVM_EXIT_IOAPIC_EOI 26 249 249 #define KVM_EXIT_HYPERV 27 250 250 #define KVM_EXIT_ARM_NISV 28 251 + #define KVM_EXIT_X86_RDMSR 29 252 + #define KVM_EXIT_X86_WRMSR 30 251 253 252 254 /* For KVM_EXIT_INTERNAL_ERROR */ 253 255 /* Emulate instruction failed. */ ··· 415 413 __u64 esr_iss; 416 414 __u64 fault_ipa; 417 415 } arm_nisv; 416 + /* KVM_EXIT_X86_RDMSR / KVM_EXIT_X86_WRMSR */ 417 + struct { 418 + __u8 error; /* user -> kernel */ 419 + __u8 pad[7]; 420 + #define KVM_MSR_EXIT_REASON_INVAL (1 << 0) 421 + #define KVM_MSR_EXIT_REASON_UNKNOWN (1 << 1) 422 + #define KVM_MSR_EXIT_REASON_FILTER (1 << 2) 423 + __u32 reason; /* kernel -> user */ 424 + __u32 index; /* kernel -> user */ 425 + __u64 data; /* kernel <-> user */ 426 + } msr; 418 427 /* Fix the size of the union. */ 419 428 char padding[256]; 420 429 }; ··· 1050 1037 #define KVM_CAP_SMALLER_MAXPHYADDR 185 1051 1038 #define KVM_CAP_S390_DIAG318 186 1052 1039 #define KVM_CAP_STEAL_TIME 187 1040 + #define KVM_CAP_X86_USER_SPACE_MSR 188 1041 + #define KVM_CAP_X86_MSR_FILTER 189 1042 + #define KVM_CAP_ENFORCE_PV_FEATURE_CPUID 190 1053 1043 1054 1044 #ifdef KVM_CAP_IRQ_ROUTING 1055 1045 ··· 1553 1537 1554 1538 /* Available with KVM_CAP_S390_PROTECTED */ 1555 1539 #define KVM_S390_PV_COMMAND _IOWR(KVMIO, 0xc5, struct kvm_pv_cmd) 1540 + 1541 + /* Available with KVM_CAP_X86_MSR_FILTER */ 1542 + #define KVM_X86_SET_MSR_FILTER _IOW(KVMIO, 0xc6, struct kvm_msr_filter) 1556 1543 1557 1544 /* Secure Encrypted Virtualization command */ 1558 1545 enum sev_cmd_id {
+1
tools/include/uapi/linux/mman.h
··· 27 27 #define MAP_HUGE_SHIFT HUGETLB_FLAG_ENCODE_SHIFT 28 28 #define MAP_HUGE_MASK HUGETLB_FLAG_ENCODE_MASK 29 29 30 + #define MAP_HUGE_16KB HUGETLB_FLAG_ENCODE_16KB 30 31 #define MAP_HUGE_64KB HUGETLB_FLAG_ENCODE_64KB 31 32 #define MAP_HUGE_512KB HUGETLB_FLAG_ENCODE_512KB 32 33 #define MAP_HUGE_1MB HUGETLB_FLAG_ENCODE_1MB
+1
tools/include/uapi/linux/mount.h
··· 16 16 #define MS_REMOUNT 32 /* Alter flags of a mounted FS */ 17 17 #define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */ 18 18 #define MS_DIRSYNC 128 /* Directory modifications are synchronous */ 19 + #define MS_NOSYMFOLLOW 256 /* Do not follow symlinks */ 19 20 #define MS_NOATIME 1024 /* Do not update access times. */ 20 21 #define MS_NODIRATIME 2048 /* Do not update directory access times */ 21 22 #define MS_BIND 4096
+1 -1
tools/include/uapi/linux/perf_event.h
··· 1196 1196 1197 1197 #define PERF_MEM_SNOOPX_FWD 0x01 /* forward */ 1198 1198 /* 1 free */ 1199 - #define PERF_MEM_SNOOPX_SHIFT 38 1199 + #define PERF_MEM_SNOOPX_SHIFT 38 1200 1200 1201 1201 /* locked instruction */ 1202 1202 #define PERF_MEM_LOCK_NA 0x01 /* not available */
+9
tools/include/uapi/linux/prctl.h
··· 233 233 #define PR_SET_TAGGED_ADDR_CTRL 55 234 234 #define PR_GET_TAGGED_ADDR_CTRL 56 235 235 # define PR_TAGGED_ADDR_ENABLE (1UL << 0) 236 + /* MTE tag check fault modes */ 237 + # define PR_MTE_TCF_SHIFT 1 238 + # define PR_MTE_TCF_NONE (0UL << PR_MTE_TCF_SHIFT) 239 + # define PR_MTE_TCF_SYNC (1UL << PR_MTE_TCF_SHIFT) 240 + # define PR_MTE_TCF_ASYNC (2UL << PR_MTE_TCF_SHIFT) 241 + # define PR_MTE_TCF_MASK (3UL << PR_MTE_TCF_SHIFT) 242 + /* MTE tag inclusion mask */ 243 + # define PR_MTE_TAG_SHIFT 3 244 + # define PR_MTE_TAG_MASK (0xffffUL << PR_MTE_TAG_SHIFT) 236 245 237 246 /* Control reclaim behavior when allocating memory */ 238 247 #define PR_SET_IO_FLUSHER 57
+4
tools/include/uapi/linux/vhost.h
··· 146 146 147 147 /* Set event fd for config interrupt*/ 148 148 #define VHOST_VDPA_SET_CONFIG_CALL _IOW(VHOST_VIRTIO, 0x77, int) 149 + 150 + /* Get the valid iova range */ 151 + #define VHOST_VDPA_GET_IOVA_RANGE _IOR(VHOST_VIRTIO, 0x78, \ 152 + struct vhost_vdpa_iova_range) 149 153 #endif
+1
tools/perf/Makefile.config
··· 749 749 PERL_EMBED_LIBADD = $(call grep-libs,$(PERL_EMBED_LDOPTS)) 750 750 PERL_EMBED_CCOPTS = $(shell perl -MExtUtils::Embed -e ccopts 2>/dev/null) 751 751 PERL_EMBED_CCOPTS := $(filter-out -specs=%,$(PERL_EMBED_CCOPTS)) 752 + PERL_EMBED_CCOPTS := $(filter-out -flto=auto -ffat-lto-objects, $(PERL_EMBED_CCOPTS)) 752 753 PERL_EMBED_LDOPTS := $(filter-out -specs=%,$(PERL_EMBED_LDOPTS)) 753 754 FLAGS_PERL_EMBED=$(PERL_EMBED_CCOPTS) $(PERL_EMBED_LDOPTS) 754 755
+7 -4
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 361 361 437 common openat2 sys_openat2 362 362 438 common pidfd_getfd sys_pidfd_getfd 363 363 439 common faccessat2 sys_faccessat2 364 + 440 common process_madvise sys_process_madvise 364 365 365 366 # 366 - # x32-specific system call numbers start at 512 to avoid cache impact 367 - # for native 64-bit operation. The __x32_compat_sys stubs are created 368 - # on-the-fly for compat_sys_*() compatibility system calls if X86_X32 369 - # is defined. 367 + # Due to a historical design error, certain syscalls are numbered differently 368 + # in x32 as compared to native x86_64. These syscalls have numbers 512-547. 369 + # Do not add new syscalls to this range. Numbers 548 and above are available 370 + # for non-x32 use. 370 371 # 371 372 512 x32 rt_sigaction compat_sys_rt_sigaction 372 373 513 x32 rt_sigreturn compat_sys_x32_rt_sigreturn ··· 405 404 545 x32 execveat compat_sys_execveat 406 405 546 x32 preadv2 compat_sys_preadv64v2 407 406 547 x32 pwritev2 compat_sys_pwritev64v2 407 + # This is the end of the legacy x32 range. Numbers 548 and above are 408 + # not special and are not to be used for x32-specific syscalls.
+9 -6
tools/perf/builtin-trace.c
··· 4639 4639 err = 0; 4640 4640 4641 4641 if (lists[0]) { 4642 - struct option o = OPT_CALLBACK('e', "event", &trace->evlist, "event", 4643 - "event selector. use 'perf list' to list available events", 4644 - parse_events_option); 4642 + struct option o = { 4643 + .value = &trace->evlist, 4644 + }; 4645 4645 err = parse_events_option(&o, lists[0], 0); 4646 4646 } 4647 4647 out: ··· 4655 4655 { 4656 4656 struct trace *trace = opt->value; 4657 4657 4658 - if (!list_empty(&trace->evlist->core.entries)) 4659 - return parse_cgroups(opt, str, unset); 4660 - 4658 + if (!list_empty(&trace->evlist->core.entries)) { 4659 + struct option o = { 4660 + .value = &trace->evlist, 4661 + }; 4662 + return parse_cgroups(&o, str, unset); 4663 + } 4661 4664 trace->cgroup = evlist__findnew_cgroup(trace->evlist, str); 4662 4665 4663 4666 return 0;
+1 -1
tools/perf/pmu-events/arch/x86/cascadelakex/clx-metrics.json
··· 329 329 }, 330 330 { 331 331 "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]", 332 - "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@ ) / 1000000000 ) / duration_time", 332 + "MetricExpr": "( ( ( uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@ ) * 1048576 ) / 1000000000 ) / duration_time", 333 333 "MetricGroup": "Memory_BW;SoC", 334 334 "MetricName": "DRAM_BW_Use" 335 335 },
+1 -1
tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
··· 323 323 }, 324 324 { 325 325 "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]", 326 - "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@ ) / 1000000000 ) / duration_time", 326 + "MetricExpr": "( ( ( uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@ ) * 1048576 ) / 1000000000 ) / duration_time", 327 327 "MetricGroup": "Memory_BW;SoC", 328 328 "MetricName": "DRAM_BW_Use" 329 329 },
+5 -5
tools/perf/tests/dwarf-unwind.c
··· 95 95 return strcmp((const char *) symbol, funcs[idx]); 96 96 } 97 97 98 - __no_tail_call noinline int test_dwarf_unwind__thread(struct thread *thread) 98 + noinline int test_dwarf_unwind__thread(struct thread *thread) 99 99 { 100 100 struct perf_sample sample; 101 101 unsigned long cnt = 0; ··· 126 126 127 127 static int global_unwind_retval = -INT_MAX; 128 128 129 - __no_tail_call noinline int test_dwarf_unwind__compare(void *p1, void *p2) 129 + noinline int test_dwarf_unwind__compare(void *p1, void *p2) 130 130 { 131 131 /* Any possible value should be 'thread' */ 132 132 struct thread *thread = *(struct thread **)p1; ··· 145 145 return p1 - p2; 146 146 } 147 147 148 - __no_tail_call noinline int test_dwarf_unwind__krava_3(struct thread *thread) 148 + noinline int test_dwarf_unwind__krava_3(struct thread *thread) 149 149 { 150 150 struct thread *array[2] = {thread, thread}; 151 151 void *fp = &bsearch; ··· 164 164 return global_unwind_retval; 165 165 } 166 166 167 - __no_tail_call noinline int test_dwarf_unwind__krava_2(struct thread *thread) 167 + noinline int test_dwarf_unwind__krava_2(struct thread *thread) 168 168 { 169 169 return test_dwarf_unwind__krava_3(thread); 170 170 } 171 171 172 - __no_tail_call noinline int test_dwarf_unwind__krava_1(struct thread *thread) 172 + noinline int test_dwarf_unwind__krava_1(struct thread *thread) 173 173 { 174 174 return test_dwarf_unwind__krava_2(thread); 175 175 }
+1 -1
tools/perf/ui/browsers/hists.c
··· 2963 2963 struct popup_action actions[MAX_OPTIONS]; 2964 2964 int nr_options = 0; 2965 2965 int key = -1; 2966 - char buf[64]; 2966 + char buf[128]; 2967 2967 int delay_secs = hbt ? hbt->refresh : 0; 2968 2968 2969 2969 #define HIST_BROWSER_HELP_COMMON \
+2
tools/perf/util/build-id.c
··· 102 102 const u8 *raw = build_id->data; 103 103 size_t i; 104 104 105 + bf[0] = 0x0; 106 + 105 107 for (i = 0; i < build_id->size; ++i) { 106 108 sprintf(bid, "%02x", *raw); 107 109 ++raw;
+3
tools/perf/util/hashmap.c
··· 15 15 /* make sure libbpf doesn't use kernel-only integer typedefs */ 16 16 #pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64 17 17 18 + /* prevent accidental re-addition of reallocarray() */ 19 + #pragma GCC poison reallocarray 20 + 18 21 /* start with 4 buckets */ 19 22 #define HASHMAP_MIN_CAP_BITS 2 20 23
+12
tools/perf/util/hashmap.h
··· 25 25 #endif 26 26 } 27 27 28 + /* generic C-string hashing function */ 29 + static inline size_t str_hash(const char *s) 30 + { 31 + size_t h = 0; 32 + 33 + while (*s) { 34 + h = h * 31 + *s; 35 + s++; 36 + } 37 + return h; 38 + } 39 + 28 40 typedef size_t (*hashmap_hash_fn)(const void *key, void *ctx); 29 41 typedef bool (*hashmap_equal_fn)(const void *key1, const void *key2, void *ctx); 30 42
+10 -1
tools/perf/util/machine.c
··· 786 786 union perf_event *event, 787 787 struct perf_sample *sample __maybe_unused) 788 788 { 789 + struct symbol *sym; 789 790 struct map *map; 790 791 791 792 map = maps__find(&machine->kmaps, event->ksymbol.addr); 792 - if (map) 793 + if (!map) 794 + return 0; 795 + 796 + if (map != machine->vmlinux_map) 793 797 maps__remove(&machine->kmaps, map); 798 + else { 799 + sym = dso__find_symbol(map->dso, map->map_ip(map, map->start)); 800 + if (sym) 801 + dso__delete_symbol(map->dso, sym); 802 + } 794 803 795 804 return 0; 796 805 }
+2 -5
tools/perf/util/scripting-engines/trace-event-python.c
··· 1592 1592 static int python_start_script(const char *script, int argc, const char **argv) 1593 1593 { 1594 1594 struct tables *tables = &tables_global; 1595 - PyMODINIT_FUNC (*initfunc)(void); 1596 1595 #if PY_MAJOR_VERSION < 3 1597 1596 const char **command_line; 1598 1597 #else ··· 1606 1607 FILE *fp; 1607 1608 1608 1609 #if PY_MAJOR_VERSION < 3 1609 - initfunc = initperf_trace_context; 1610 1610 command_line = malloc((argc + 1) * sizeof(const char *)); 1611 1611 command_line[0] = script; 1612 1612 for (i = 1; i < argc + 1; i++) 1613 1613 command_line[i] = argv[i - 1]; 1614 + PyImport_AppendInittab(name, initperf_trace_context); 1614 1615 #else 1615 - initfunc = PyInit_perf_trace_context; 1616 1616 command_line = malloc((argc + 1) * sizeof(wchar_t *)); 1617 1617 command_line[0] = Py_DecodeLocale(script, NULL); 1618 1618 for (i = 1; i < argc + 1; i++) 1619 1619 command_line[i] = Py_DecodeLocale(argv[i - 1], NULL); 1620 + PyImport_AppendInittab(name, PyInit_perf_trace_context); 1620 1621 #endif 1621 - 1622 - PyImport_AppendInittab(name, initfunc); 1623 1622 Py_Initialize(); 1624 1623 1625 1624 #if PY_MAJOR_VERSION < 3
+14
tools/perf/util/session.c
··· 595 595 event->mmap2.maj = bswap_32(event->mmap2.maj); 596 596 event->mmap2.min = bswap_32(event->mmap2.min); 597 597 event->mmap2.ino = bswap_64(event->mmap2.ino); 598 + event->mmap2.ino_generation = bswap_64(event->mmap2.ino_generation); 598 599 599 600 if (sample_id_all) { 600 601 void *data = &event->mmap2.filename; ··· 709 708 710 709 if (sample_id_all) 711 710 swap_sample_id_all(event, &event->namespaces.link_info[i]); 711 + } 712 + 713 + static void perf_event__cgroup_swap(union perf_event *event, bool sample_id_all) 714 + { 715 + event->cgroup.id = bswap_64(event->cgroup.id); 716 + 717 + if (sample_id_all) { 718 + void *data = &event->cgroup.path; 719 + 720 + data += PERF_ALIGN(strlen(data) + 1, sizeof(u64)); 721 + swap_sample_id_all(event, data); 722 + } 712 723 } 713 724 714 725 static u8 revbyte(u8 b) ··· 965 952 [PERF_RECORD_SWITCH] = perf_event__switch_swap, 966 953 [PERF_RECORD_SWITCH_CPU_WIDE] = perf_event__switch_swap, 967 954 [PERF_RECORD_NAMESPACES] = perf_event__namespaces_swap, 955 + [PERF_RECORD_CGROUP] = perf_event__cgroup_swap, 968 956 [PERF_RECORD_TEXT_POKE] = perf_event__text_poke_swap, 969 957 [PERF_RECORD_HEADER_ATTR] = perf_event__hdr_attr_swap, 970 958 [PERF_RECORD_HEADER_EVENT_TYPE] = perf_event__event_type_swap,
+7
tools/perf/util/symbol.c
··· 515 515 } 516 516 } 517 517 518 + void dso__delete_symbol(struct dso *dso, struct symbol *sym) 519 + { 520 + rb_erase_cached(&sym->rb_node, &dso->symbols); 521 + symbol__delete(sym); 522 + dso__reset_find_symbol_cache(dso); 523 + } 524 + 518 525 struct symbol *dso__find_symbol(struct dso *dso, u64 addr) 519 526 { 520 527 if (dso->last_find_result.addr != addr || dso->last_find_result.symbol == NULL) {
+2
tools/perf/util/symbol.h
··· 131 131 132 132 void dso__insert_symbol(struct dso *dso, 133 133 struct symbol *sym); 134 + void dso__delete_symbol(struct dso *dso, 135 + struct symbol *sym); 134 136 135 137 struct symbol *dso__find_symbol(struct dso *dso, u64 addr); 136 138 struct symbol *dso__find_symbol_by_name(struct dso *dso, const char *name);
+1 -2
tools/testing/kunit/kunit_parser.py
··· 66 66 def raw_output(kernel_output): 67 67 for line in kernel_output: 68 68 print(line) 69 - yield line 70 69 71 70 DIVIDER = '=' * 60 72 71 ··· 241 242 return None 242 243 test_suite.name = name 243 244 expected_test_case_num = parse_subtest_plan(lines) 244 - if not expected_test_case_num: 245 + if expected_test_case_num is None: 245 246 return None 246 247 while expected_test_case_num > 0: 247 248 test_case = parse_test_case(lines)
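The switch from `if not expected_test_case_num` to `if expected_test_case_num is None` matters because a subtest plan can legitimately declare zero test cases, and `0` is falsy in Python. A sketch of the distinction, using a hypothetical stand-in for the parser's plan-line matching (not the real `kunit_parser.parse_subtest_plan`):

```python
import re

def parse_subtest_plan_demo(line):
    # Hypothetical stand-in: return the declared test-case count from a
    # TAP plan line like "1..5", or None when no plan line matches.
    m = re.match(r'1\.\.([0-9]+)', line)
    return int(m.group(1)) if m else None

# A "1..0" plan is a valid, empty suite -- it parses to 0, which is falsy:
assert parse_subtest_plan_demo('1..0') == 0
# A genuinely missing plan parses to None:
assert parse_subtest_plan_demo('ok 1 - some_test') is None

# The old truthiness check rejected both cases alike; the `is None`
# check only rejects the missing plan.
assert not parse_subtest_plan_demo('1..0')            # old check: false negative
assert parse_subtest_plan_demo('1..0') is not None    # new check: accepted
```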
+25 -7
tools/testing/kunit/kunit_tool_test.py
··· 179 179 print_mock = mock.patch('builtins.print').start() 180 180 result = kunit_parser.parse_run_tests( 181 181 kunit_parser.isolate_kunit_output(file.readlines())) 182 - print_mock.assert_any_call(StrContains("no kunit output detected")) 182 + print_mock.assert_any_call(StrContains('no tests run!')) 183 183 print_mock.stop() 184 184 file.close() 185 185 ··· 198 198 'test_data/test_config_printk_time.log') 199 199 with open(prefix_log) as file: 200 200 result = kunit_parser.parse_run_tests(file.readlines()) 201 - self.assertEqual('kunit-resource-test', result.suites[0].name) 201 + self.assertEqual( 202 + kunit_parser.TestStatus.SUCCESS, 203 + result.status) 204 + self.assertEqual('kunit-resource-test', result.suites[0].name) 202 205 203 206 def test_ignores_multiple_prefixes(self): 204 207 prefix_log = get_absolute_path( 205 208 'test_data/test_multiple_prefixes.log') 206 209 with open(prefix_log) as file: 207 210 result = kunit_parser.parse_run_tests(file.readlines()) 208 - self.assertEqual('kunit-resource-test', result.suites[0].name) 211 + self.assertEqual( 212 + kunit_parser.TestStatus.SUCCESS, 213 + result.status) 214 + self.assertEqual('kunit-resource-test', result.suites[0].name) 209 215 210 216 def test_prefix_mixed_kernel_output(self): 211 217 mixed_prefix_log = get_absolute_path( 212 218 'test_data/test_interrupted_tap_output.log') 213 219 with open(mixed_prefix_log) as file: 214 220 result = kunit_parser.parse_run_tests(file.readlines()) 215 - self.assertEqual('kunit-resource-test', result.suites[0].name) 221 + self.assertEqual( 222 + kunit_parser.TestStatus.SUCCESS, 223 + result.status) 224 + self.assertEqual('kunit-resource-test', result.suites[0].name) 216 225 217 226 def test_prefix_poundsign(self): 218 227 pound_log = get_absolute_path('test_data/test_pound_sign.log') 219 228 with open(pound_log) as file: 220 229 result = kunit_parser.parse_run_tests(file.readlines())
221 - self.assertEqual('kunit-resource-test', result.suites[0].name) 230 + self.assertEqual( 231 + kunit_parser.TestStatus.SUCCESS, 232 + result.status) 233 + self.assertEqual('kunit-resource-test', result.suites[0].name) 222 234 223 235 def test_kernel_panic_end(self): 224 236 panic_log = get_absolute_path('test_data/test_kernel_panic_interrupt.log') 225 237 with open(panic_log) as file: 226 238 result = kunit_parser.parse_run_tests(file.readlines()) 227 - self.assertEqual('kunit-resource-test', result.suites[0].name) 239 + self.assertEqual( 240 + kunit_parser.TestStatus.TEST_CRASHED, 241 + result.status) 242 + self.assertEqual('kunit-resource-test', result.suites[0].name) 228 243 229 244 def test_pound_no_prefix(self): 230 245 pound_log = get_absolute_path('test_data/test_pound_no_prefix.log') 231 246 with open(pound_log) as file: 232 247 result = kunit_parser.parse_run_tests(file.readlines()) 233 - self.assertEqual('kunit-resource-test', result.suites[0].name) 248 + self.assertEqual( 249 + kunit_parser.TestStatus.SUCCESS, 250 + result.status) 251 + self.assertEqual('kunit-resource-test', result.suites[0].name) 234 252 235 253 class KUnitJsonTest(unittest.TestCase): 236 254
+1
tools/testing/kunit/test_data/test_config_printk_time.log
··· 1 1 [ 0.060000] printk: console [mc-1] enabled 2 2 [ 0.060000] random: get_random_bytes called from init_oops_id+0x35/0x40 with crng_init=0 3 3 [ 0.060000] TAP version 14 4 + [ 0.060000] 1..3 4 5 [ 0.060000] # Subtest: kunit-resource-test 5 6 [ 0.060000] 1..5 6 7 [ 0.060000] ok 1 - kunit_resource_test_init_resources
+1
tools/testing/kunit/test_data/test_interrupted_tap_output.log
··· 1 1 [ 0.060000] printk: console [mc-1] enabled 2 2 [ 0.060000] random: get_random_bytes called from init_oops_id+0x35/0x40 with crng_init=0 3 3 [ 0.060000] TAP version 14 4 + [ 0.060000] 1..3 4 5 [ 0.060000] # Subtest: kunit-resource-test 5 6 [ 0.060000] 1..5 6 7 [ 0.060000] ok 1 - kunit_resource_test_init_resources
+1
tools/testing/kunit/test_data/test_kernel_panic_interrupt.log
··· 1 1 [ 0.060000] printk: console [mc-1] enabled 2 2 [ 0.060000] random: get_random_bytes called from init_oops_id+0x35/0x40 with crng_init=0 3 3 [ 0.060000] TAP version 14 4 + [ 0.060000] 1..3 4 5 [ 0.060000] # Subtest: kunit-resource-test 5 6 [ 0.060000] 1..5 6 7 [ 0.060000] ok 1 - kunit_resource_test_init_resources
+1
tools/testing/kunit/test_data/test_multiple_prefixes.log
··· 1 1 [ 0.060000][ T1] printk: console [mc-1] enabled 2 2 [ 0.060000][ T1] random: get_random_bytes called from init_oops_id+0x35/0x40 with crng_init=0 3 3 [ 0.060000][ T1] TAP version 14 4 + [ 0.060000][ T1] 1..3 4 5 [ 0.060000][ T1] # Subtest: kunit-resource-test 5 6 [ 0.060000][ T1] 1..5 6 7 [ 0.060000][ T1] ok 1 - kunit_resource_test_init_resources
+1
tools/testing/kunit/test_data/test_pound_no_prefix.log
··· 1 1 printk: console [mc-1] enabled 2 2 random: get_random_bytes called from init_oops_id+0x35/0x40 with crng_init=0 3 3 TAP version 14 4 + 1..3 4 5 # Subtest: kunit-resource-test 5 6 1..5 6 7 ok 1 - kunit_resource_test_init_resources
+1
tools/testing/kunit/test_data/test_pound_sign.log
··· 1 1 [ 0.060000] printk: console [mc-1] enabled 2 2 [ 0.060000] random: get_random_bytes called from init_oops_id+0x35/0x40 with crng_init=0 3 3 [ 0.060000] TAP version 14 4 + [ 0.060000] 1..3 4 5 [ 0.060000] # Subtest: kunit-resource-test 5 6 [ 0.060000] 1..5 6 7 [ 0.060000] ok 1 - kunit_resource_test_init_resources
+1 -1
tools/testing/selftests/clone3/clone3_cap_checkpoint_restore.c
··· 145 145 test_clone3_supported(); 146 146 147 147 EXPECT_EQ(getuid(), 0) 148 - XFAIL(return, "Skipping all tests as non-root\n"); 148 + SKIP(return, "Skipping all tests as non-root"); 149 149 150 150 memset(&set_tid, 0, sizeof(set_tid)); 151 151
+4 -4
tools/testing/selftests/core/close_range_test.c
··· 44 44 fd = open("/dev/null", O_RDONLY | O_CLOEXEC); 45 45 ASSERT_GE(fd, 0) { 46 46 if (errno == ENOENT) 47 - XFAIL(return, "Skipping test since /dev/null does not exist"); 47 + SKIP(return, "Skipping test since /dev/null does not exist"); 48 48 } 49 49 50 50 open_fds[i] = fd; ··· 52 52 53 53 EXPECT_EQ(-1, sys_close_range(open_fds[0], open_fds[100], -1)) { 54 54 if (errno == ENOSYS) 55 - XFAIL(return, "close_range() syscall not supported"); 55 + SKIP(return, "close_range() syscall not supported"); 56 56 } 57 57 58 58 EXPECT_EQ(0, sys_close_range(open_fds[0], open_fds[50], 0)); ··· 108 108 fd = open("/dev/null", O_RDONLY | O_CLOEXEC); 109 109 ASSERT_GE(fd, 0) { 110 110 if (errno == ENOENT) 111 - XFAIL(return, "Skipping test since /dev/null does not exist"); 111 + SKIP(return, "Skipping test since /dev/null does not exist"); 112 112 } 113 113 114 114 open_fds[i] = fd; ··· 197 197 fd = open("/dev/null", O_RDONLY | O_CLOEXEC); 198 198 ASSERT_GE(fd, 0) { 199 199 if (errno == ENOENT) 200 - XFAIL(return, "Skipping test since /dev/null does not exist"); 200 + SKIP(return, "Skipping test since /dev/null does not exist"); 201 201 } 202 202 203 203 open_fds[i] = fd;
+4 -4
tools/testing/selftests/filesystems/binderfs/binderfs_test.c
··· 74 74 ret = mount(NULL, binderfs_mntpt, "binder", 0, 0); 75 75 EXPECT_EQ(ret, 0) { 76 76 if (errno == ENODEV) 77 - XFAIL(goto out, "binderfs missing"); 77 + SKIP(goto out, "binderfs missing"); 78 78 TH_LOG("%s - Failed to mount binderfs", strerror(errno)); 79 79 goto rmdir; 80 80 } ··· 475 475 TEST(binderfs_test_privileged) 476 476 { 477 477 if (geteuid() != 0) 478 - XFAIL(return, "Tests are not run as root. Skipping privileged tests"); 478 + SKIP(return, "Tests are not run as root. Skipping privileged tests"); 479 479 480 480 if (__do_binderfs_test(_metadata)) 481 - XFAIL(return, "The Android binderfs filesystem is not available"); 481 + SKIP(return, "The Android binderfs filesystem is not available"); 482 482 } 483 483 484 484 TEST(binderfs_test_unprivileged) ··· 511 511 ret = wait_for_pid(pid); 512 512 if (ret) { 513 513 if (ret == 2) 514 - XFAIL(return, "The Android binderfs filesystem is not available"); 514 + SKIP(return, "The Android binderfs filesystem is not available"); 515 515 ASSERT_EQ(ret, 0) { 516 516 TH_LOG("wait_for_pid() failed"); 517 517 }
+95
tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c
··· 3282 3282 close(ctx.epfd); 3283 3283 } 3284 3284 3285 + struct epoll61_ctx { 3286 + int epfd; 3287 + int evfd; 3288 + }; 3289 + 3290 + static void *epoll61_write_eventfd(void *ctx_) 3291 + { 3292 + struct epoll61_ctx *ctx = ctx_; 3293 + int64_t l = 1; 3294 + 3295 + usleep(10950); 3296 + write(ctx->evfd, &l, sizeof(l)); 3297 + return NULL; 3298 + } 3299 + 3300 + static void *epoll61_epoll_with_timeout(void *ctx_) 3301 + { 3302 + struct epoll61_ctx *ctx = ctx_; 3303 + struct epoll_event events[1]; 3304 + int n; 3305 + 3306 + n = epoll_wait(ctx->epfd, events, 1, 11); 3307 + /* 3308 + * If epoll returned the eventfd, write on the eventfd to wake up the 3309 + * blocking poller. 3310 + */ 3311 + if (n == 1) { 3312 + int64_t l = 1; 3313 + 3314 + write(ctx->evfd, &l, sizeof(l)); 3315 + } 3316 + return NULL; 3317 + } 3318 + 3319 + static void *epoll61_blocking_epoll(void *ctx_) 3320 + { 3321 + struct epoll61_ctx *ctx = ctx_; 3322 + struct epoll_event events[1]; 3323 + 3324 + epoll_wait(ctx->epfd, events, 1, -1); 3325 + return NULL; 3326 + } 3327 + 3328 + TEST(epoll61) 3329 + { 3330 + struct epoll61_ctx ctx; 3331 + struct epoll_event ev; 3332 + int i, r; 3333 + 3334 + ctx.epfd = epoll_create1(0); 3335 + ASSERT_GE(ctx.epfd, 0); 3336 + ctx.evfd = eventfd(0, EFD_NONBLOCK); 3337 + ASSERT_GE(ctx.evfd, 0); 3338 + 3339 + ev.events = EPOLLIN | EPOLLET | EPOLLERR | EPOLLHUP; 3340 + ev.data.ptr = NULL; 3341 + r = epoll_ctl(ctx.epfd, EPOLL_CTL_ADD, ctx.evfd, &ev); 3342 + ASSERT_EQ(r, 0); 3343 + 3344 + /* 3345 + * We are testing a race. Repeat the test case 1000 times to make it 3346 + * more likely to fail in case of a bug. 3347 + */ 3348 + for (i = 0; i < 1000; i++) { 3349 + pthread_t threads[3]; 3350 + int n; 3351 + 3352 + /* 3353 + * Start 3 threads: 3354 + * Thread 1 sleeps for 10.9ms and writes to the evenfd. 3355 + * Thread 2 calls epoll with a timeout of 11ms. 3356 + * Thread 3 calls epoll with a timeout of -1. 
3357 + * 3358 + * The eventfd write by Thread 1 should either wakeup Thread 2 3359 + * or Thread 3. If it wakes up Thread 2, Thread 2 writes on the 3360 + * eventfd to wake up Thread 3. 3361 + * 3362 + * If no events are missed, all three threads should eventually 3363 + * be joinable. 3364 + */ 3365 + ASSERT_EQ(pthread_create(&threads[0], NULL, 3366 + epoll61_write_eventfd, &ctx), 0); 3367 + ASSERT_EQ(pthread_create(&threads[1], NULL, 3368 + epoll61_epoll_with_timeout, &ctx), 0); 3369 + ASSERT_EQ(pthread_create(&threads[2], NULL, 3370 + epoll61_blocking_epoll, &ctx), 0); 3371 + 3372 + for (n = 0; n < ARRAY_SIZE(threads); ++n) 3373 + ASSERT_EQ(pthread_join(threads[n], NULL), 0); 3374 + } 3375 + 3376 + close(ctx.epfd); 3377 + close(ctx.evfd); 3378 + } 3379 + 3285 3380 TEST_HARNESS_MAIN
+1 -1
tools/testing/selftests/ftrace/test.d/dynevent/add_remove_kprobe.tc
··· 6 6 echo 0 > events/enable 7 7 echo > dynamic_events 8 8 9 - PLACE=kernel_clone 9 + PLACE=$FUNCTION_FORK 10 10 11 11 echo "p:myevent1 $PLACE" >> dynamic_events 12 12 echo "r:myevent2 $PLACE" >> dynamic_events
+1 -1
tools/testing/selftests/ftrace/test.d/dynevent/clear_select_events.tc
··· 6 6 echo 0 > events/enable 7 7 echo > dynamic_events 8 8 9 - PLACE=kernel_clone 9 + PLACE=$FUNCTION_FORK 10 10 11 11 setup_events() { 12 12 echo "p:myevent1 $PLACE" >> dynamic_events
+1 -1
tools/testing/selftests/ftrace/test.d/dynevent/generic_clear_event.tc
··· 6 6 echo 0 > events/enable 7 7 echo > dynamic_events 8 8 9 - PLACE=kernel_clone 9 + PLACE=$FUNCTION_FORK 10 10 11 11 setup_events() { 12 12 echo "p:myevent1 $PLACE" >> dynamic_events
+1 -1
tools/testing/selftests/ftrace/test.d/ftrace/func-filter-notrace-pid.tc
··· 39 39 disable_tracing 40 40 41 41 echo do_execve* > set_ftrace_filter 42 - echo *do_fork >> set_ftrace_filter 42 + echo $FUNCTION_FORK >> set_ftrace_filter 43 43 44 44 echo $PID > set_ftrace_notrace_pid 45 45 echo function > current_tracer
+1 -1
tools/testing/selftests/ftrace/test.d/ftrace/func-filter-pid.tc
··· 39 39 disable_tracing 40 40 41 41 echo do_execve* > set_ftrace_filter 42 - echo *do_fork >> set_ftrace_filter 42 + echo $FUNCTION_FORK >> set_ftrace_filter 43 43 44 44 echo $PID > set_ftrace_pid 45 45 echo function > current_tracer
+2 -2
tools/testing/selftests/ftrace/test.d/ftrace/func-filter-stacktrace.tc
··· 4 4 # requires: set_ftrace_filter 5 5 # flags: instance 6 6 7 - echo kernel_clone:stacktrace >> set_ftrace_filter 7 + echo $FUNCTION_FORK:stacktrace >> set_ftrace_filter 8 8 9 - grep -q "kernel_clone:stacktrace:unlimited" set_ftrace_filter 9 + grep -q "$FUNCTION_FORK:stacktrace:unlimited" set_ftrace_filter 10 10 11 11 (echo "forked"; sleep 1) 12 12
+7
tools/testing/selftests/ftrace/test.d/functions
··· 133 133 ping $LOCALHOST -c 1 || sleep .001 || usleep 1 || sleep 1 134 134 } 135 135 136 + # The fork function in the kernel was renamed from "_do_fork" to 137 + # "kernel_clone". As older tests should still work with older kernels 138 + # as well as newer kernels, check which version of fork is used on this 139 + # kernel so that the tests can use the fork function for the running kernel. 140 + FUNCTION_FORK=`(if grep '\bkernel_clone\b' /proc/kallsyms > /dev/null; then 141 + echo kernel_clone; else echo '_do_fork'; fi)` 142 + 136 143 # Since probe event command may include backslash, explicitly use printf "%s" 137 144 # to NOT interpret it. 138 145 ftrace_errlog_check() { # err-prefix command-with-error-pos-by-^ command-file
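The `FUNCTION_FORK` detection above can be exercised without a real `/proc/kallsyms` by pointing the same `grep` at a mock symbol dump. A sketch only: `kallsyms` here is a temp file standing in for the proc interface, the addresses are made up, and the `\b` word boundary relies on GNU grep:

```shell
# Mock kallsyms from a renamed kernel (>= 5.10): kernel_clone is present.
kallsyms=$(mktemp)
printf '%s\n' 'ffffffff81068c80 T kernel_clone' > "$kallsyms"
if grep '\bkernel_clone\b' "$kallsyms" > /dev/null; then
	fork=kernel_clone; else fork=_do_fork; fi
echo "new kernel: $fork"    # new kernel: kernel_clone

# Mock kallsyms from an older kernel: only _do_fork exists.
printf '%s\n' 'ffffffff81068c80 T _do_fork' > "$kallsyms"
if grep '\bkernel_clone\b' "$kallsyms" > /dev/null; then
	fork=kernel_clone; else fork=_do_fork; fi
echo "old kernel: $fork"    # old kernel: _do_fork
rm -f "$kallsyms"
```

Every `kernel_clone` reference in the .tc files below then becomes `$FUNCTION_FORK`, so the same test scripts probe whichever fork symbol the running kernel actually exports.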
+1 -1
tools/testing/selftests/ftrace/test.d/kprobe/add_and_remove.tc
··· 3 3 # description: Kprobe dynamic event - adding and removing 4 4 # requires: kprobe_events 5 5 6 - echo p:myevent kernel_clone > kprobe_events 6 + echo p:myevent $FUNCTION_FORK > kprobe_events 7 7 grep myevent kprobe_events 8 8 test -d events/kprobes/myevent 9 9 echo > kprobe_events
+1 -1
tools/testing/selftests/ftrace/test.d/kprobe/busy_check.tc
··· 3 3 # description: Kprobe dynamic event - busy event check 4 4 # requires: kprobe_events 5 5 6 - echo p:myevent kernel_clone > kprobe_events 6 + echo p:myevent $FUNCTION_FORK > kprobe_events 7 7 test -d events/kprobes/myevent 8 8 echo 1 > events/kprobes/myevent/enable 9 9 echo > kprobe_events && exit_fail # this must fail
+2 -2
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args.tc
··· 3 3 # description: Kprobe dynamic event with arguments 4 4 # requires: kprobe_events 5 5 6 - echo 'p:testprobe kernel_clone $stack $stack0 +0($stack)' > kprobe_events 6 + echo "p:testprobe $FUNCTION_FORK \$stack \$stack0 +0(\$stack)" > kprobe_events 7 7 grep testprobe kprobe_events | grep -q 'arg1=\$stack arg2=\$stack0 arg3=+0(\$stack)' 8 8 test -d events/kprobes/testprobe 9 9 10 10 echo 1 > events/kprobes/testprobe/enable 11 11 ( echo "forked") 12 - grep testprobe trace | grep 'kernel_clone' | \ 12 + grep testprobe trace | grep "$FUNCTION_FORK" | \ 13 13 grep -q 'arg1=0x[[:xdigit:]]* arg2=0x[[:xdigit:]]* arg3=0x[[:xdigit:]]*$' 14 14 15 15 echo 0 > events/kprobes/testprobe/enable
+1 -1
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_comm.tc
··· 5 5 6 6 grep -A1 "fetcharg:" README | grep -q "\$comm" || exit_unsupported # this is too old 7 7 8 - echo 'p:testprobe kernel_clone comm=$comm ' > kprobe_events 8 + echo "p:testprobe $FUNCTION_FORK comm=\$comm " > kprobe_events 9 9 grep testprobe kprobe_events | grep -q 'comm=$comm' 10 10 test -d events/kprobes/testprobe 11 11
+2 -2
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_string.tc
··· 30 30 : "Test get argument (1)" 31 31 echo "p:testprobe tracefs_create_dir arg1=+0(${ARG1}):string" > kprobe_events 32 32 echo 1 > events/kprobes/testprobe/enable 33 - echo "p:test kernel_clone" >> kprobe_events 33 + echo "p:test $FUNCTION_FORK" >> kprobe_events 34 34 grep -qe "testprobe.* arg1=\"test\"" trace 35 35 36 36 echo 0 > events/kprobes/testprobe/enable 37 37 : "Test get argument (2)" 38 38 echo "p:testprobe tracefs_create_dir arg1=+0(${ARG1}):string arg2=+0(${ARG1}):string" > kprobe_events 39 39 echo 1 > events/kprobes/testprobe/enable 40 - echo "p:test kernel_clone" >> kprobe_events 40 + echo "p:test $FUNCTION_FORK" >> kprobe_events 41 41 grep -qe "testprobe.* arg1=\"test\" arg2=\"test\"" trace 42 42
+5 -5
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_symbol.tc
··· 14 14 fi 15 15 16 16 : "Test get basic types symbol argument" 17 - echo "p:testprobe_u kernel_clone arg1=@linux_proc_banner:u64 arg2=@linux_proc_banner:u32 arg3=@linux_proc_banner:u16 arg4=@linux_proc_banner:u8" > kprobe_events 18 - echo "p:testprobe_s kernel_clone arg1=@linux_proc_banner:s64 arg2=@linux_proc_banner:s32 arg3=@linux_proc_banner:s16 arg4=@linux_proc_banner:s8" >> kprobe_events 17 + echo "p:testprobe_u $FUNCTION_FORK arg1=@linux_proc_banner:u64 arg2=@linux_proc_banner:u32 arg3=@linux_proc_banner:u16 arg4=@linux_proc_banner:u8" > kprobe_events 18 + echo "p:testprobe_s $FUNCTION_FORK arg1=@linux_proc_banner:s64 arg2=@linux_proc_banner:s32 arg3=@linux_proc_banner:s16 arg4=@linux_proc_banner:s8" >> kprobe_events 19 19 if grep -q "x8/16/32/64" README; then 20 - echo "p:testprobe_x kernel_clone arg1=@linux_proc_banner:x64 arg2=@linux_proc_banner:x32 arg3=@linux_proc_banner:x16 arg4=@linux_proc_banner:x8" >> kprobe_events 20 + echo "p:testprobe_x $FUNCTION_FORK arg1=@linux_proc_banner:x64 arg2=@linux_proc_banner:x32 arg3=@linux_proc_banner:x16 arg4=@linux_proc_banner:x8" >> kprobe_events 21 21 fi 22 - echo "p:testprobe_bf kernel_clone arg1=@linux_proc_banner:b8@4/32" >> kprobe_events 22 + echo "p:testprobe_bf $FUNCTION_FORK arg1=@linux_proc_banner:b8@4/32" >> kprobe_events 23 23 echo 1 > events/kprobes/enable 24 24 (echo "forked") 25 25 echo 0 > events/kprobes/enable ··· 27 27 grep "testprobe_bf:.* arg1=.*" trace 28 28 29 29 : "Test get string symbol argument" 30 - echo "p:testprobe_str kernel_clone arg1=@linux_proc_banner:string" > kprobe_events 30 + echo "p:testprobe_str $FUNCTION_FORK arg1=@linux_proc_banner:string" > kprobe_events 31 31 echo 1 > events/kprobes/enable 32 32 (echo "forked") 33 33 echo 0 > events/kprobes/enable
+1 -1
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_type.tc
··· 4 4 # requires: kprobe_events "x8/16/32/64":README 5 5 6 6 gen_event() { # Bitsize 7 - echo "p:testprobe kernel_clone \$stack0:s$1 \$stack0:u$1 \$stack0:x$1 \$stack0:b4@4/$1" 7 + echo "p:testprobe $FUNCTION_FORK \$stack0:s$1 \$stack0:u$1 \$stack0:x$1 \$stack0:b4@4/$1" 8 8 } 9 9 10 10 check_types() { # s-type u-type x-type bf-type width
+4
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_user.tc
··· 9 9 :;: "user-memory access syntax and ustring working on user memory";: 10 10 echo 'p:myevent do_sys_open path=+0($arg2):ustring path2=+u0($arg2):string' \ 11 11 > kprobe_events 12 + echo 'p:myevent2 do_sys_openat2 path=+0($arg2):ustring path2=+u0($arg2):string' \ 13 + >> kprobe_events 12 14 13 15 grep myevent kprobe_events | \ 14 16 grep -q 'path=+0($arg2):ustring path2=+u0($arg2):string' 15 17 echo 1 > events/kprobes/myevent/enable 18 + echo 1 > events/kprobes/myevent2/enable 16 19 echo > /dev/null 17 20 echo 0 > events/kprobes/myevent/enable 21 + echo 0 > events/kprobes/myevent2/enable 18 22 19 23 grep myevent trace | grep -q 'path="/dev/null" path2="/dev/null"' 20 24
+7 -7
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_ftrace.tc
··· 5 5 6 6 # prepare 7 7 echo nop > current_tracer 8 - echo kernel_clone > set_ftrace_filter 9 - echo 'p:testprobe kernel_clone' > kprobe_events 8 + echo $FUNCTION_FORK > set_ftrace_filter 9 + echo "p:testprobe $FUNCTION_FORK" > kprobe_events 10 10 11 11 # kprobe on / ftrace off 12 12 echo 1 > events/kprobes/testprobe/enable 13 13 echo > trace 14 14 ( echo "forked") 15 15 grep testprobe trace 16 - ! grep 'kernel_clone <-' trace 16 + ! grep "$FUNCTION_FORK <-" trace 17 17 18 18 # kprobe on / ftrace on 19 19 echo function > current_tracer 20 20 echo > trace 21 21 ( echo "forked") 22 22 grep testprobe trace 23 - grep 'kernel_clone <-' trace 23 + grep "$FUNCTION_FORK <-" trace 24 24 25 25 # kprobe off / ftrace on 26 26 echo 0 > events/kprobes/testprobe/enable 27 27 echo > trace 28 28 ( echo "forked") 29 29 ! grep testprobe trace 30 - grep 'kernel_clone <-' trace 30 + grep "$FUNCTION_FORK <-" trace 31 31 32 32 # kprobe on / ftrace on 33 33 echo 1 > events/kprobes/testprobe/enable ··· 35 35 echo > trace 36 36 ( echo "forked") 37 37 grep testprobe trace 38 - grep 'kernel_clone <-' trace 38 + grep "$FUNCTION_FORK <-" trace 39 39 40 40 # kprobe on / ftrace off 41 41 echo nop > current_tracer 42 42 echo > trace 43 43 ( echo "forked") 44 44 grep testprobe trace 45 - ! grep 'kernel_clone <-' trace 45 + ! grep "$FUNCTION_FORK <-" trace
+1 -1
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_multiprobe.tc
··· 4 4 # requires: kprobe_events "Create/append/":README 5 5 6 6 # Choose 2 symbols for target 7 - SYM1=kernel_clone 7 + SYM1=$FUNCTION_FORK 8 8 SYM2=do_exit 9 9 EVENT_NAME=kprobes/testevent 10 10
+6 -6
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
··· 86 86 87 87 # multiprobe errors 88 88 if grep -q "Create/append/" README && grep -q "imm-value" README; then 89 - echo 'p:kprobes/testevent kernel_clone' > kprobe_events 89 + echo "p:kprobes/testevent $FUNCTION_FORK" > kprobe_events 90 90 check_error '^r:kprobes/testevent do_exit' # DIFF_PROBE_TYPE 91 91 92 92 # Explicitly use printf "%s" to not interpret \1 93 - printf "%s" 'p:kprobes/testevent kernel_clone abcd=\1' > kprobe_events 94 - check_error 'p:kprobes/testevent kernel_clone ^bcd=\1' # DIFF_ARG_TYPE 95 - check_error 'p:kprobes/testevent kernel_clone ^abcd=\1:u8' # DIFF_ARG_TYPE 96 - check_error 'p:kprobes/testevent kernel_clone ^abcd=\"foo"' # DIFF_ARG_TYPE 97 - check_error '^p:kprobes/testevent kernel_clone abcd=\1' # SAME_PROBE 93 + printf "%s" "p:kprobes/testevent $FUNCTION_FORK abcd=\\1" > kprobe_events 94 + check_error "p:kprobes/testevent $FUNCTION_FORK ^bcd=\\1" # DIFF_ARG_TYPE 95 + check_error "p:kprobes/testevent $FUNCTION_FORK ^abcd=\\1:u8" # DIFF_ARG_TYPE 96 + check_error "p:kprobes/testevent $FUNCTION_FORK ^abcd=\\\"foo\"" # DIFF_ARG_TYPE 97 + check_error "^p:kprobes/testevent $FUNCTION_FORK abcd=\\1" # SAME_PROBE 98 98 fi 99 99 100 100 # %return suffix errors
+2 -2
tools/testing/selftests/ftrace/test.d/kprobe/kretprobe_args.tc
··· 4 4 # requires: kprobe_events 5 5 6 6 # Add new kretprobe event 7 - echo 'r:testprobe2 kernel_clone $retval' > kprobe_events 7 + echo "r:testprobe2 $FUNCTION_FORK \$retval" > kprobe_events 8 8 grep testprobe2 kprobe_events | grep -q 'arg1=\$retval' 9 9 test -d events/kprobes/testprobe2 10 10 11 11 echo 1 > events/kprobes/testprobe2/enable 12 12 ( echo "forked") 13 13 14 - cat trace | grep testprobe2 | grep -q '<- kernel_clone' 14 + cat trace | grep testprobe2 | grep -q "<- $FUNCTION_FORK" 15 15 16 16 echo 0 > events/kprobes/testprobe2/enable 17 17 echo '-:testprobe2' >> kprobe_events
+1 -1
tools/testing/selftests/ftrace/test.d/kprobe/profile.tc
··· 4 4 # requires: kprobe_events 5 5 6 6 ! grep -q 'myevent' kprobe_profile 7 - echo p:myevent kernel_clone > kprobe_events 7 + echo "p:myevent $FUNCTION_FORK" > kprobe_events 8 8 grep -q 'myevent[[:space:]]*0[[:space:]]*0$' kprobe_profile 9 9 echo 1 > events/kprobes/myevent/enable 10 10 ( echo "forked" )
+23 -23
tools/testing/selftests/kselftest_harness.h
··· 126 126 snprintf(_metadata->results->reason, \ 127 127 sizeof(_metadata->results->reason), fmt, ##__VA_ARGS__); \ 128 128 if (TH_LOG_ENABLED) { \ 129 - fprintf(TH_LOG_STREAM, "# SKIP %s\n", \ 129 + fprintf(TH_LOG_STREAM, "# SKIP %s\n", \ 130 130 _metadata->results->reason); \ 131 131 } \ 132 132 _metadata->passed = 1; \ ··· 432 432 */ 433 433 434 434 /** 435 - * ASSERT_EQ(expected, seen) 435 + * ASSERT_EQ() 436 436 * 437 437 * @expected: expected value 438 438 * @seen: measured value ··· 443 443 __EXPECT(expected, #expected, seen, #seen, ==, 1) 444 444 445 445 /** 446 - * ASSERT_NE(expected, seen) 446 + * ASSERT_NE() 447 447 * 448 448 * @expected: expected value 449 449 * @seen: measured value ··· 454 454 __EXPECT(expected, #expected, seen, #seen, !=, 1) 455 455 456 456 /** 457 - * ASSERT_LT(expected, seen) 457 + * ASSERT_LT() 458 458 * 459 459 * @expected: expected value 460 460 * @seen: measured value ··· 465 465 __EXPECT(expected, #expected, seen, #seen, <, 1) 466 466 467 467 /** 468 - * ASSERT_LE(expected, seen) 468 + * ASSERT_LE() 469 469 * 470 470 * @expected: expected value 471 471 * @seen: measured value ··· 476 476 __EXPECT(expected, #expected, seen, #seen, <=, 1) 477 477 478 478 /** 479 - * ASSERT_GT(expected, seen) 479 + * ASSERT_GT() 480 480 * 481 481 * @expected: expected value 482 482 * @seen: measured value ··· 487 487 __EXPECT(expected, #expected, seen, #seen, >, 1) 488 488 489 489 /** 490 - * ASSERT_GE(expected, seen) 490 + * ASSERT_GE() 491 491 * 492 492 * @expected: expected value 493 493 * @seen: measured value ··· 498 498 __EXPECT(expected, #expected, seen, #seen, >=, 1) 499 499 500 500 /** 501 - * ASSERT_NULL(seen) 501 + * ASSERT_NULL() 502 502 * 503 503 * @seen: measured value 504 504 * ··· 508 508 __EXPECT(NULL, "NULL", seen, #seen, ==, 1) 509 509 510 510 /** 511 - * ASSERT_TRUE(seen) 511 + * ASSERT_TRUE() 512 512 * 513 513 * @seen: measured value 514 514 * ··· 518 518 __EXPECT(0, "0", seen, #seen, !=, 1) 519 519 520 520 /**
521 - * ASSERT_FALSE(seen) 521 + * ASSERT_FALSE() 522 522 * 523 523 * @seen: measured value 524 524 * ··· 528 528 __EXPECT(0, "0", seen, #seen, ==, 1) 529 529 530 530 /** 531 - * ASSERT_STREQ(expected, seen) 531 + * ASSERT_STREQ() 532 532 * 533 533 * @expected: expected value 534 534 * @seen: measured value ··· 539 539 __EXPECT_STR(expected, seen, ==, 1) 540 540 541 541 /** 542 - * ASSERT_STRNE(expected, seen) 542 + * ASSERT_STRNE() 543 543 * 544 544 * @expected: expected value 545 545 * @seen: measured value ··· 550 550 __EXPECT_STR(expected, seen, !=, 1) 551 551 552 552 /** 553 - * EXPECT_EQ(expected, seen) 553 + * EXPECT_EQ() 554 554 * 555 555 * @expected: expected value 556 556 * @seen: measured value ··· 561 561 __EXPECT(expected, #expected, seen, #seen, ==, 0) 562 562 563 563 /** 564 - * EXPECT_NE(expected, seen) 564 + * EXPECT_NE() 565 565 * 566 566 * @expected: expected value 567 567 * @seen: measured value ··· 572 572 __EXPECT(expected, #expected, seen, #seen, !=, 0) 573 573 574 574 /** 575 - * EXPECT_LT(expected, seen) 575 + * EXPECT_LT() 576 576 * 577 577 * @expected: expected value 578 578 * @seen: measured value ··· 583 583 __EXPECT(expected, #expected, seen, #seen, <, 0) 584 584 585 585 /** 586 - * EXPECT_LE(expected, seen) 586 + * EXPECT_LE() 587 587 * 588 588 * @expected: expected value 589 589 * @seen: measured value ··· 594 594 __EXPECT(expected, #expected, seen, #seen, <=, 0) 595 595 596 596 /** 597 - * EXPECT_GT(expected, seen) 597 + * EXPECT_GT() 598 598 * 599 599 * @expected: expected value 600 600 * @seen: measured value ··· 605 605 __EXPECT(expected, #expected, seen, #seen, >, 0) 606 606 607 607 /** 608 - * EXPECT_GE(expected, seen) 608 + * EXPECT_GE() 609 609 * 610 610 * @expected: expected value 611 611 * @seen: measured value ··· 616 616 __EXPECT(expected, #expected, seen, #seen, >=, 0) 617 617 618 618 /** 619 - * EXPECT_NULL(seen) 619 + * EXPECT_NULL() 620 620 * 621 621 * @seen: measured value 622 622 * ··· 626 626 __EXPECT(NULL, "NULL", seen, #seen, ==, 0)
627 627 628 628 /** 629 - * EXPECT_TRUE(seen) 629 + * EXPECT_TRUE() 630 630 * 631 631 * @seen: measured value 632 632 * ··· 636 636 __EXPECT(0, "0", seen, #seen, !=, 0) 637 637 638 638 /** 639 - * EXPECT_FALSE(seen) 639 + * EXPECT_FALSE() 640 640 * 641 641 * @seen: measured value 642 642 * ··· 646 646 __EXPECT(0, "0", seen, #seen, ==, 0) 647 647 648 648 /** 649 - * EXPECT_STREQ(expected, seen) 649 + * EXPECT_STREQ() 650 650 * 651 651 * @expected: expected value 652 652 * @seen: measured value ··· 657 657 __EXPECT_STR(expected, seen, ==, 0) 658 658 659 659 /** 660 - * EXPECT_STRNE(expected, seen) 660 + * EXPECT_STRNE() 661 661 * 662 662 * @expected: expected value 663 663 * @seen: measured value
+1 -1
tools/testing/selftests/lib.mk
··· 136 136 ifeq ($(OVERRIDE_TARGETS),) 137 137 LOCAL_HDRS := $(selfdir)/kselftest_harness.h $(selfdir)/kselftest.h 138 138 $(OUTPUT)/%:%.c $(LOCAL_HDRS) 139 - $(LINK.c) $^ $(LDLIBS) -o $@ 139 + $(LINK.c) $(filter-out $(LOCAL_HDRS),$^) $(LDLIBS) -o $@ 140 140 141 141 $(OUTPUT)/%.o:%.S 142 142 $(COMPILE.S) $^ -o $@
+1
tools/testing/selftests/pidfd/config
··· 4 4 CONFIG_PID_NS=y 5 5 CONFIG_NET_NS=y 6 6 CONFIG_CGROUPS=y 7 + CONFIG_CHECKPOINT_RESTORE=y
+4 -1
tools/testing/selftests/pidfd/pidfd_getfd_test.c
··· 204 204 fd = sys_pidfd_getfd(self->pidfd, self->remote_fd, 0); 205 205 ASSERT_GE(fd, 0); 206 206 207 - EXPECT_EQ(0, sys_kcmp(getpid(), self->pid, KCMP_FILE, fd, self->remote_fd)); 207 + ret = sys_kcmp(getpid(), self->pid, KCMP_FILE, fd, self->remote_fd); 208 + if (ret < 0 && errno == ENOSYS) 209 + SKIP(return, "kcmp() syscall not supported"); 210 + EXPECT_EQ(ret, 0); 208 211 209 212 ret = fcntl(fd, F_GETFD); 210 213 ASSERT_GE(ret, 0);
-1
tools/testing/selftests/pidfd/pidfd_open_test.c
··· 6 6 #include <inttypes.h> 7 7 #include <limits.h> 8 8 #include <linux/types.h> 9 - #include <linux/wait.h> 10 9 #include <sched.h> 11 10 #include <signal.h> 12 11 #include <stdbool.h>
-1
tools/testing/selftests/pidfd/pidfd_poll_test.c
··· 3 3 #define _GNU_SOURCE 4 4 #include <errno.h> 5 5 #include <linux/types.h> 6 - #include <linux/wait.h> 7 6 #include <poll.h> 8 7 #include <signal.h> 9 8 #include <stdbool.h>
-1
tools/testing/selftests/pidfd/pidfd_setns_test.c
··· 16 16 #include <unistd.h> 17 17 #include <sys/socket.h> 18 18 #include <sys/stat.h> 19 - #include <linux/kcmp.h> 20 19 21 20 #include "pidfd.h" 22 21 #include "../clone3/clone3_selftests.h"
+1 -1
tools/testing/selftests/pidfd/pidfd_test.c
··· 330 330 ksft_exit_fail_msg("%s test: Failed to recycle pid %d\n", 331 331 test_name, PID_RECYCLE); 332 332 case PIDFD_SKIP: 333 - ksft_print_msg("%s test: Skipping test\n", test_name); 333 + ksft_test_result_skip("%s test: Skipping test\n", test_name); 334 334 ret = 0; 335 335 break; 336 336 case PIDFD_XFAIL:
-1
tools/testing/selftests/proc/proc-loadavg-001.c
··· 14 14 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 15 15 */ 16 16 /* Test that /proc/loadavg correctly reports last pid in pid namespace. */ 17 - #define _GNU_SOURCE 18 17 #include <errno.h> 19 18 #include <sched.h> 20 19 #include <sys/types.h>
-1
tools/testing/selftests/proc/proc-self-syscall.c
··· 13 13 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 14 14 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 15 15 */ 16 - #define _GNU_SOURCE 17 16 #include <unistd.h> 18 17 #include <sys/syscall.h> 19 18 #include <sys/types.h>
-1
tools/testing/selftests/proc/proc-uptime-002.c
··· 15 15 */ 16 16 // Test that values in /proc/uptime increment monotonically 17 17 // while shifting across CPUs. 18 - #define _GNU_SOURCE 19 18 #undef NDEBUG 20 19 #include <assert.h> 21 20 #include <unistd.h>
+8
tools/testing/selftests/wireguard/netns.sh
··· 316 316 n2 ping -W 1 -c 1 192.168.241.1 317 317 n1 wg set wg0 peer "$pub2" persistent-keepalive 0 318 318 319 + # Test that sk_bound_dev_if works 320 + n1 ping -I wg0 -c 1 -W 1 192.168.241.2 321 + # What about when the mark changes and the packet must be rerouted? 322 + n1 iptables -t mangle -I OUTPUT -j MARK --set-xmark 1 323 + n1 ping -c 1 -W 1 192.168.241.2 # First the boring case 324 + n1 ping -I wg0 -c 1 -W 1 192.168.241.2 # Then the sk_bound_dev_if case 325 + n1 iptables -t mangle -D OUTPUT -j MARK --set-xmark 1 326 + 319 327 # Test that onion routing works, even when it loops 320 328 n1 wg set wg0 peer "$pub3" allowed-ips 192.168.242.2/32 endpoint 192.168.241.2:5 321 329 ip1 addr add 192.168.242.1/24 dev wg0
+2
tools/testing/selftests/wireguard/qemu/kernel.config
··· 18 18 CONFIG_NETFILTER_XTABLES=y 19 19 CONFIG_NETFILTER_XT_NAT=y 20 20 CONFIG_NETFILTER_XT_MATCH_LENGTH=y 21 + CONFIG_NETFILTER_XT_MARK=y 21 22 CONFIG_NF_CONNTRACK_IPV4=y 22 23 CONFIG_NF_NAT_IPV4=y 23 24 CONFIG_IP_NF_IPTABLES=y 24 25 CONFIG_IP_NF_FILTER=y 26 + CONFIG_IP_NF_MANGLE=y 25 27 CONFIG_IP_NF_NAT=y 26 28 CONFIG_IP_ADVANCED_ROUTER=y 27 29 CONFIG_IP_MULTIPLE_TABLES=y