
ASoC/qcom/arm64: Qualcomm ADSP DTS and binding fixes

Merge series from Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>:

Hi,

Dependencies/merging
====================
1. The DTS patches are independent.
2. The binding patches should be applied together, because of context changes.
They could go via any one of: Qualcomm SoC, ASoC or DT tree.

Changes since v3
================
1. Patches 9-10: re-order, so that apr.yaml is corrected first and then we
convert to DT schema. This makes the patchset fully bisectable, at the expense
of changing the same lines twice.
2. Patch 11: New patch.

Changes since v2
================
1. Patch 9: rename and extend commit msg.
2. Add Rb tags.

Changes since v1
================
1. Patch 9: New patch.
2. Patch 10: Correct also sound/qcom,q6apm-dai.yaml (Rob).
2. Patch 13: New patch.
3. Add Rb/Tb tags.

Best regards,
Krzysztof

Krzysztof Kozlowski (15):
  arm64: dts: qcom: sdm630: align APR services node names with dtschema
  arm64: dts: qcom: sdm845: align APR services node names with dtschema
  arm64: dts: qcom: sm8250: align APR services node names with dtschema
  arm64: dts: qcom: msm8996: fix APR services nodes
  arm64: dts: qcom: sdm845: align dai node names with dtschema
  arm64: dts: qcom: msm8996: align dai node names with dtschema
  arm64: dts: qcom: qrb5165-rb5: align dai node names with dtschema
  arm64: dts: qcom: sm8250: use generic name for LPASS clock controller
  dt-bindings: soc: qcom: apr: correct service children
  ASoC: dt-bindings: qcom,q6asm: convert to dtschema
  ASoC: dt-bindings: qcom,q6adm: convert to dtschema
  ASoC: dt-bindings: qcom,q6dsp-lpass-ports: cleanup example
  ASoC: dt-bindings: qcom,q6dsp-lpass-clocks: cleanup example
  ASoC: dt-bindings: qcom,q6apm-dai: adjust indentation in example
  dt-bindings: soc: qcom: apr: add missing properties
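
To illustrate the kind of change the DTS alignment patches make, here is a
hypothetical before/after sketch distilled from the qcom,q6afe binding example
changes in this series; the exact nodes, labels and unit addresses differ per
SoC, and the `q6afedai` label is only an illustration:

```dts
/* Before (sketch): node names that do not match the DT schema */
&apr {
	apr-service@4 {
		reg = <APR_SVC_AFE>;

		q6afedai: q6afedai@1 {
			compatible = "qcom,q6afe-dais";
		};
	};
};

/* After (sketch): APR service named service@<id>, DAI child named dais */
&apr {
	service@4 {
		reg = <APR_SVC_AFE>;

		q6afedai: dais {
			compatible = "qcom,q6afe-dais";
		};
	};
};
```

The labels (and therefore all phandle users) stay the same; only the node
names change, so the renames are binary-compatible for consumers referencing
the labels.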

.../bindings/soc/qcom/qcom,apr.yaml | 112 ++++++++++++++++--
.../bindings/sound/qcom,q6adm-routing.yaml | 52 ++++++++
.../devicetree/bindings/sound/qcom,q6adm.txt | 39 ------
.../bindings/sound/qcom,q6apm-dai.yaml | 21 ++--
.../bindings/sound/qcom,q6asm-dais.yaml | 112 ++++++++++++++++++
.../devicetree/bindings/sound/qcom,q6asm.txt | 70 -----------
.../sound/qcom,q6dsp-lpass-clocks.yaml | 36 +++---
.../sound/qcom,q6dsp-lpass-ports.yaml | 64 +++++-----
arch/arm64/boot/dts/qcom/msm8996.dtsi | 10 +-
arch/arm64/boot/dts/qcom/qrb5165-rb5.dts | 4 +-
arch/arm64/boot/dts/qcom/sdm630.dtsi | 8 +-
arch/arm64/boot/dts/qcom/sdm845-db845c.dts | 2 +-
.../boot/dts/qcom/sdm845-xiaomi-beryllium.dts | 2 +-
.../boot/dts/qcom/sdm845-xiaomi-polaris.dts | 4 +-
arch/arm64/boot/dts/qcom/sdm845.dtsi | 8 +-
arch/arm64/boot/dts/qcom/sm8250.dtsi | 10 +-
16 files changed, 346 insertions(+), 208 deletions(-)
create mode 100644 Documentation/devicetree/bindings/sound/qcom,q6adm-routing.yaml
delete mode 100644 Documentation/devicetree/bindings/sound/qcom,q6adm.txt
create mode 100644 Documentation/devicetree/bindings/sound/qcom,q6asm-dais.yaml
delete mode 100644 Documentation/devicetree/bindings/sound/qcom,q6asm.txt

--
2.34.1

+3143 -1854
-1
Documentation/devicetree/bindings/hwmon/moortec,mr75203.yaml
···
   - compatible
   - reg
   - reg-names
-  - intel,vm-map
   - clocks
   - resets
   - "#thermal-sensor-cells"
+3
Documentation/devicetree/bindings/i2c/renesas,riic.yaml
···
   power-domains:
     maxItems: 1
 
+  resets:
+    maxItems: 1
+
 required:
   - compatible
   - reg
+1 -2
Documentation/devicetree/bindings/regulator/qcom,spmi-regulator.yaml
···
     description: List of regulators and its properties
     type: object
     $ref: regulator.yaml#
+    unevaluatedProperties: false
 
     properties:
       qcom,ocp-max-retries:
···
       description:
         SAW controlled gang leader. Will be configured as SAW regulator.
       type: boolean
-
-    unevaluatedProperties: false
 
 required:
   - compatible
+48 -29
Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml
···
   acts as directory-based coherency manager.
   All the properties in ePAPR/DeviceTree specification applies for this platform.
 
-allOf:
-  - $ref: /schemas/cache-controller.yaml#
-
 select:
   properties:
     compatible:
···
 
 properties:
   compatible:
-    items:
-      - enum:
-          - sifive,fu540-c000-ccache
-          - sifive,fu740-c000-ccache
-      - const: cache
+    oneOf:
+      - items:
+          - enum:
+              - sifive,fu540-c000-ccache
+              - sifive,fu740-c000-ccache
+          - const: cache
+      - items:
+          - const: microchip,mpfs-ccache
+          - const: sifive,fu540-c000-ccache
+          - const: cache
 
   cache-block-size:
     const: 64
···
     The reference to the reserved-memory for the L2 Loosely Integrated Memory region.
     The reserved memory node should be defined as per the bindings in reserved-memory.txt.
 
-if:
-  properties:
-    compatible:
-      contains:
-        const: sifive,fu540-c000-ccache
+allOf:
+  - $ref: /schemas/cache-controller.yaml#
 
-then:
-  properties:
-    interrupts:
-      description: |
-        Must contain entries for DirError, DataError and DataFail signals.
-      maxItems: 3
-    cache-sets:
-      const: 1024
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - sifive,fu740-c000-ccache
+              - microchip,mpfs-ccache
 
-else:
-  properties:
-    interrupts:
-      description: |
-        Must contain entries for DirError, DataError, DataFail, DirFail signals.
-      minItems: 4
-    cache-sets:
-      const: 2048
+    then:
+      properties:
+        interrupts:
+          description: |
+            Must contain entries for DirError, DataError, DataFail, DirFail signals.
+          minItems: 4
+
+    else:
+      properties:
+        interrupts:
+          description: |
+            Must contain entries for DirError, DataError and DataFail signals.
+          maxItems: 3
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: sifive,fu740-c000-ccache
+
+    then:
+      properties:
+        cache-sets:
+          const: 2048
+
+    else:
+      properties:
+        cache-sets:
+          const: 1024
 
 additionalProperties: false
 
+103 -9
Documentation/devicetree/bindings/soc/qcom/qcom,apr.yaml
···
       - qcom,apr-v2
       - qcom,gpr
 
+  power-domains:
+    maxItems: 1
+
   qcom,apr-domain:
     $ref: /schemas/types.yaml#/definitions/uint32
     enum: [1, 2, 3, 4, 5, 6, 7]
···
         1 = Modem Domain
         2 = Audio DSP Domain
         3 = Application Processor Domain
+
+  qcom,glink-channels:
+    $ref: /schemas/types.yaml#/definitions/string-array
+    description: Channel name used for the communication
+    items:
+      - const: apr_audio_svc
+
+  qcom,intents:
+    $ref: /schemas/types.yaml#/definitions/uint32-array
+    description:
+      List of (size, amount) pairs describing what intents should be
+      preallocated for this virtual channel. This can be used to tweak the
+      default intents available for the channel to meet expectations of the
+      remote.
+
+  qcom,smd-channels:
+    $ref: /schemas/types.yaml#/definitions/string-array
+    description: Channel name used for the communication
+    items:
+      - const: apr_audio_svc
 
   '#address-cells':
     const: 1
···
           3 = AMDB Service.
           4 = Voice processing manager.
 
+      clock-controller:
+        $ref: /schemas/sound/qcom,q6dsp-lpass-clocks.yaml#
+        description: Qualcomm DSP LPASS clock controller
+        unevaluatedProperties: false
+
+      dais:
+        type: object
+        oneOf:
+          - $ref: /schemas/sound/qcom,q6apm-dai.yaml#
+          - $ref: /schemas/sound/qcom,q6dsp-lpass-ports.yaml#
+          - $ref: /schemas/sound/qcom,q6asm-dais.yaml#
+        unevaluatedProperties: false
+        description: Qualcomm DSP audio ports
+
+      routing:
+        type: object
+        $ref: /schemas/sound/qcom,q6adm-routing.yaml#
+        unevaluatedProperties: false
+        description: Qualcomm DSP LPASS audio routing
+
       qcom,protection-domain:
         $ref: /schemas/types.yaml#/definitions/string-array
         description: protection domain service name and path for apr service
···
           "tms/servreg", "msm/modem/wlan_pd".
           "tms/servreg", "msm/slpi/sensor_pd".
 
-  '#address-cells':
-    const: 1
+    allOf:
+      - if:
+          properties:
+            compatible:
+              enum:
+                - qcom,q6afe
+        then:
+          properties:
+            dais:
+              properties:
+                compatible:
+                  const: qcom,q6afe-dais
 
-  '#size-cells':
-    const: 0
+      - if:
+          properties:
+            compatible:
+              enum:
+                - qcom,q6apm
+        then:
+          properties:
+            dais:
+              properties:
+                compatible:
+                  enum:
+                    - qcom,q6apm-dais
+                    - qcom,q6apm-lpass-dais
 
-patternProperties:
-  "^.*@[0-9a-f]+$":
-    type: object
-    description:
-      Service based devices like clock controllers or digital audio interfaces.
+      - if:
+          properties:
+            compatible:
+              enum:
+                - qcom,q6asm
+        then:
+          properties:
+            dais:
+              properties:
+                compatible:
+                  const: qcom,q6asm-dais
 
     additionalProperties: false
 
 required:
   - compatible
   - qcom,domain
+
+allOf:
+  - if:
+      properties:
+        compatible:
+          enum:
+            - qcom,gpr
+    then:
+      properties:
+        power-domains: false
+
+  - if:
+      required:
+        - qcom,glink-channels
+    then:
+      properties:
+        qcom,smd-channels: false
+
+  - if:
+      required:
+        - qcom,smd-channels
+    then:
+      properties:
+        qcom,glink-channels: false
 
 additionalProperties: false
 
+52
Documentation/devicetree/bindings/sound/qcom,q6adm-routing.yaml
···
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/sound/qcom,q6adm-routing.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm Audio Device Manager (Q6ADM) routing
+
+maintainers:
+  - Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
+  - Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
+
+description:
+  Qualcomm Audio Device Manager (Q6ADM) routing node represents routing
+  specific configuration.
+
+properties:
+  compatible:
+    enum:
+      - qcom,q6adm-routing
+
+  "#sound-dai-cells":
+    const: 0
+
+required:
+  - compatible
+  - "#sound-dai-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/soc/qcom,apr.h>
+    #include <dt-bindings/sound/qcom,q6asm.h>
+
+    apr {
+        compatible = "qcom,apr-v2";
+        qcom,domain = <APR_DOMAIN_ADSP>;
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        service@8 {
+            compatible = "qcom,q6adm";
+            reg = <APR_SVC_ADM>;
+            qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+
+            routing {
+                compatible = "qcom,q6adm-routing";
+                #sound-dai-cells = <0>;
+            };
+        };
+    };
-39
Documentation/devicetree/bindings/sound/qcom,q6adm.txt
···
-Qualcomm Audio Device Manager (Q6ADM) binding
-
-Q6ADM is one of the APR audio service on Q6DSP.
-Please refer to qcom,apr.txt for details of the coommon apr service bindings
-used by the apr service device.
-
-- but must contain the following property:
-
-- compatible:
-	Usage: required
-	Value type: <stringlist>
-	Definition: must be "qcom,q6adm-v<MAJOR-NUMBER>.<MINOR-NUMBER>".
-		    Or "qcom,q6adm" where the version number can be queried
-		    from DSP.
-		    example "qcom,q6adm-v2.0"
-
-
-= ADM routing
-"routing" subnode of the ADM node represents adm routing specific configuration
-
-- compatible:
-	Usage: required
-	Value type: <stringlist>
-	Definition: must be "qcom,q6adm-routing".
-
-- #sound-dai-cells
-	Usage: required
-	Value type: <u32>
-	Definition: Must be 0
-
-= EXAMPLE
-apr-service@8 {
-	compatible = "qcom,q6adm";
-	reg = <APR_SVC_ADM>;
-	q6routing: routing {
-		compatible = "qcom,q6adm-routing";
-		#sound-dai-cells = <0>;
-	};
-};
+7 -14
Documentation/devicetree/bindings/sound/qcom,q6apm-dai.yaml
···
   compatible:
     const: qcom,q6apm-dais
 
-  reg:
-    maxItems: 1
-
   iommus:
     maxItems: 1
 
 required:
   - compatible
   - iommus
-  - reg
 
 additionalProperties: false
 
···
         #address-cells = <1>;
         #size-cells = <0>;
         qcom,domain = <GPR_DOMAIN_ID_ADSP>;
+
         service@1 {
-            compatible = "qcom,q6apm";
-            reg = <1>;
-
-            #address-cells = <1>;
-            #size-cells = <0>;
-
-            apm-dai@1 {
-                compatible = "qcom,q6apm-dais";
-                iommus = <&apps_smmu 0x1801 0x0>;
+            compatible = "qcom,q6apm";
             reg = <1>;
-            };
+
+            dais {
+                compatible = "qcom,q6apm-dais";
+                iommus = <&apps_smmu 0x1801 0x0>;
+            };
         };
     };
+112
Documentation/devicetree/bindings/sound/qcom,q6asm-dais.yaml
···
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/sound/qcom,q6asm-dais.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm Audio Stream Manager (Q6ASM)
+
+maintainers:
+  - Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
+  - Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
+
+description:
+  Q6ASM is one of the APR audio services on Q6DSP. Each of its subnodes
+  represent a dai with board specific configuration.
+
+properties:
+  compatible:
+    enum:
+      - qcom,q6asm-dais
+
+  iommus:
+    maxItems: 1
+
+  "#sound-dai-cells":
+    const: 1
+
+  "#address-cells":
+    const: 1
+
+  "#size-cells":
+    const: 0
+
+patternProperties:
+  "^dai@[0-9]+$":
+    type: object
+    description:
+      Q6ASM Digital Audio Interface
+
+    properties:
+      reg:
+        maxItems: 1
+
+      direction:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1, 2]
+        description: |
+          The direction of the dai stream::
+           - Q6ASM_DAI_TX_RX (0) for both tx and rx
+           - Q6ASM_DAI_TX (1) for only tx (Capture/Encode)
+           - Q6ASM_DAI_RX (2) for only rx (Playback/Decode)
+
+      is-compress-dai:
+        type: boolean
+        description:
+          Compress offload dai.
+
+    dependencies:
+      is-compress-dai: ["direction"]
+
+    required:
+      - reg
+
+    additionalProperties: false
+
+required:
+  - compatible
+  - "#sound-dai-cells"
+  - "#address-cells"
+  - "#size-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/soc/qcom,apr.h>
+    #include <dt-bindings/sound/qcom,q6asm.h>
+
+    apr {
+        compatible = "qcom,apr-v2";
+        qcom,domain = <APR_DOMAIN_ADSP>;
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        service@7 {
+            compatible = "qcom,q6asm";
+            reg = <APR_SVC_ASM>;
+            qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+
+            dais {
+                compatible = "qcom,q6asm-dais";
+                iommus = <&apps_smmu 0x1821 0x0>;
+                #address-cells = <1>;
+                #size-cells = <0>;
+                #sound-dai-cells = <1>;
+
+                dai@0 {
+                    reg = <0>;
+                };
+
+                dai@1 {
+                    reg = <1>;
+                };
+
+                dai@2 {
+                    reg = <2>;
+                    is-compress-dai;
+                    direction = <1>;
+                };
+            };
+        };
+    };
-70
Documentation/devicetree/bindings/sound/qcom,q6asm.txt
···
-Qualcomm Audio Stream Manager (Q6ASM) binding
-
-Q6ASM is one of the APR audio service on Q6DSP.
-Please refer to qcom,apr.txt for details of the common apr service bindings
-used by the apr service device.
-
-- but must contain the following property:
-
-- compatible:
-	Usage: required
-	Value type: <stringlist>
-	Definition: must be "qcom,q6asm-v<MAJOR-NUMBER>.<MINOR-NUMBER>".
-		    Or "qcom,q6asm" where the version number can be queried
-		    from DSP.
-		    example "qcom,q6asm-v2.0"
-
-= ASM DAIs (Digital Audio Interface)
-"dais" subnode of the ASM node represents dai specific configuration
-
-- compatible:
-	Usage: required
-	Value type: <stringlist>
-	Definition: must be "qcom,q6asm-dais".
-
-- #sound-dai-cells
-	Usage: required
-	Value type: <u32>
-	Definition: Must be 1
-
-== ASM DAI is subnode of "dais" and represent a dai, it includes board specific
-configuration of each dai. Must contain the following properties.
-
-- reg
-	Usage: required
-	Value type: <u32>
-	Definition: Must be dai id
-
-- direction:
-	Usage: Required for Compress offload dais
-	Value type: <u32>
-	Definition: Specifies the direction of the dai stream
-		    Q6ASM_DAI_TX_RX (0) for both tx and rx
-		    Q6ASM_DAI_TX (1) for only tx (Capture/Encode)
-		    Q6ASM_DAI_RX (2) for only rx (Playback/Decode)
-
-- is-compress-dai:
-	Usage: Required for Compress offload dais
-	Value type: <boolean>
-	Definition: present for Compress offload dais
-
-
-= EXAMPLE
-#include <dt-bindings/sound/qcom,q6asm.h>
-
-apr-service@7 {
-	compatible = "qcom,q6asm";
-	reg = <APR_SVC_ASM>;
-	q6asmdai: dais {
-		compatible = "qcom,q6asm-dais";
-		#address-cells = <1>;
-		#size-cells = <0>;
-		#sound-dai-cells = <1>;
-
-		dai@0 {
-			reg = <0>;
-			direction = <Q6ASM_DAI_RX>;
-			is-compress-dai;
-		};
-	};
-};
+17 -19
Documentation/devicetree/bindings/sound/qcom,q6dsp-lpass-clocks.yaml
···
       - qcom,q6afe-clocks
       - qcom,q6prm-lpass-clocks
 
-  reg:
-    maxItems: 1
-
   '#clock-cells':
     const: 2
     description:
···
 
 required:
   - compatible
-  - reg
   - "#clock-cells"
 
 additionalProperties: false
···
     #include <dt-bindings/soc/qcom,apr.h>
     #include <dt-bindings/sound/qcom,q6afe.h>
     apr {
+        compatible = "qcom,apr-v2";
+        qcom,domain = <APR_DOMAIN_ADSP>;
         #address-cells = <1>;
         #size-cells = <0>;
-        apr-service@4 {
+
+        service@4 {
+            compatible = "qcom,q6afe";
             reg = <APR_SVC_AFE>;
-            #address-cells = <1>;
-            #size-cells = <0>;
-            clock-controller@2 {
-                compatible = "qcom,q6afe-clocks";
-                reg = <2>;
-                #clock-cells = <2>;
+            qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+
+            clock-controller {
+                compatible = "qcom,q6afe-clocks";
+                #clock-cells = <2>;
             };
         };
-  };
+    };
···
         qcom,domain = <GPR_DOMAIN_ID_ADSP>;
         #address-cells = <1>;
         #size-cells = <0>;
+
         service@2 {
             reg = <GPR_PRM_MODULE_IID>;
             compatible = "qcom,q6prm";
-            #address-cells = <1>;
-            #size-cells = <0>;
-            clock-controller@2 {
-                compatible = "qcom,q6prm-lpass-clocks";
-                reg = <2>;
-                #clock-cells = <2>;
+
+            clock-controller {
+                compatible = "qcom,q6prm-lpass-clocks";
+                #clock-cells = <2>;
             };
         };
-  };
+    };
+30 -32
Documentation/devicetree/bindings/sound/qcom,q6dsp-lpass-ports.yaml
···
       - qcom,q6afe-dais
       - qcom,q6apm-lpass-dais
 
-  reg:
-    maxItems: 1
-
   '#sound-dai-cells':
     const: 1
 
···
 
 required:
   - compatible
-  - reg
   - "#sound-dai-cells"
   - "#address-cells"
   - "#size-cells"
···
     #include <dt-bindings/soc/qcom,apr.h>
     #include <dt-bindings/sound/qcom,q6afe.h>
     apr {
+        compatible = "qcom,apr-v2";
         #address-cells = <1>;
         #size-cells = <0>;
-        apr-service@4 {
-            reg = <APR_SVC_AFE>;
-            #address-cells = <1>;
-            #size-cells = <0>;
-            q6afedai@1 {
-                compatible = "qcom,q6afe-dais";
-                reg = <1>;
-                #address-cells = <1>;
-                #size-cells = <0>;
-                #sound-dai-cells = <1>;
+        qcom,domain = <APR_DOMAIN_ADSP>;
 
-                dai@22 {
-                    reg = <QUATERNARY_MI2S_RX>;
-                    qcom,sd-lines = <0 1 2 3>;
-                };
+        service@4 {
+            compatible = "qcom,q6afe";
+            reg = <APR_SVC_AFE>;
+            qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+
+            dais {
+                compatible = "qcom,q6afe-dais";
+                #address-cells = <1>;
+                #size-cells = <0>;
+                #sound-dai-cells = <1>;
+
+                dai@22 {
+                    reg = <QUATERNARY_MI2S_RX>;
+                    qcom,sd-lines = <0 1 2 3>;
+                };
             };
         };
-  };
+    };
     - |
···
     #include <dt-bindings/soc/qcom,gpr.h>
     gpr {
···
         #address-cells = <1>;
         #size-cells = <0>;
         qcom,domain = <GPR_DOMAIN_ID_ADSP>;
+
         service@1 {
             compatible = "qcom,q6apm";
             reg = <GPR_APM_MODULE_IID>;
-            #address-cells = <1>;
-            #size-cells = <0>;
-            q6apmdai@1 {
-                compatible = "qcom,q6apm-lpass-dais";
-                reg = <1>;
-                #address-cells = <1>;
-                #size-cells = <0>;
-                #sound-dai-cells = <1>;
 
-                dai@22 {
-                    reg = <QUATERNARY_MI2S_RX>;
-                    qcom,sd-lines = <0 1 2 3>;
-                };
+            dais {
+                compatible = "qcom,q6apm-lpass-dais";
+                #address-cells = <1>;
+                #size-cells = <0>;
+                #sound-dai-cells = <1>;
+
+                dai@22 {
+                    reg = <QUATERNARY_MI2S_RX>;
+                    qcom,sd-lines = <0 1 2 3>;
+                };
             };
         };
-  };
+    };
+5 -8
Documentation/i2c/busses/i2c-piix4.rst
···
 crashes, data corruption, etc.). Try this only as a last resort (try BIOS
 updates first, for example), and backup first! An even more dangerous
 option is 'force_addr=<IOPORT>'. This will not only enable the PIIX4 like
-'force' foes, but it will also set a new base I/O port address. The SMBus
+'force' does, but it will also set a new base I/O port address. The SMBus
 parts of the PIIX4 needs a range of 8 of these addresses to function
 correctly. If these addresses are already reserved by some other device,
 you will get into big trouble! DON'T USE THIS IF YOU ARE NOT VERY SURE
···
 to change the SMBus Interrupt Select register so the SMBus controller uses
 the SMI mode.
 
-1) Use lspci command and locate the PCI device with the SMBus controller:
+1) Use ``lspci`` command and locate the PCI device with the SMBus controller:
    00:0f.0 ISA bridge: ServerWorks OSB4 South Bridge (rev 4f)
    The line may vary for different chipsets. Please consult the driver source
-   for all possible PCI ids (and lspci -n to match them). Lets assume the
+   for all possible PCI ids (and ``lspci -n`` to match them). Let's assume the
    device is located at 00:0f.0.
 2) Now you just need to change the value in 0xD2 register. Get it first with
-   command: lspci -xxx -s 00:0f.0
+   command: ``lspci -xxx -s 00:0f.0``
    If the value is 0x3 then you need to change it to 0x1:
-   setpci -s 00:0f.0 d2.b=1
+   ``setpci -s 00:0f.0 d2.b=1``
 
 Please note that you don't need to do that in all cases, just when the SMBus is
 not working properly.
···
 Thinkpad laptops, but desktop systems may also be affected. We have no list
 of all affected systems, so the only safe solution was to prevent access to
 the SMBus on all IBM systems (detected using DMI data.)
-
-For additional information, read:
-http://www.lm-sensors.org/browser/lm-sensors/trunk/README
+115 -99
Documentation/i2c/i2c-topology.rst
··· 5 5 There are a couple of reasons for building more complex I2C topologies 6 6 than a straight-forward I2C bus with one adapter and one or more devices. 7 7 8 + Some example use cases are: 9 + 8 10 1. A mux may be needed on the bus to prevent address collisions. 9 11 10 12 2. The bus may be accessible from some external bus master, and arbitration ··· 16 14 from the I2C bus, at least most of the time, and sits behind a gate 17 15 that has to be operated before the device can be accessed. 18 16 19 - Etc 20 - === 17 + Several types of hardware components such as I2C muxes, I2C gates and I2C 18 + arbitrators allow to handle such needs. 21 19 22 - These constructs are represented as I2C adapter trees by Linux, where 20 + These components are represented as I2C adapter trees by Linux, where 23 21 each adapter has a parent adapter (except the root adapter) and zero or 24 22 more child adapters. The root adapter is the actual adapter that issues 25 23 I2C transfers, and all adapters with a parent are part of an "i2c-mux" ··· 37 35 ======= 38 36 39 37 There are two variants of locking available to I2C muxes, they can be 40 - mux-locked or parent-locked muxes. As is evident from below, it can be 41 - useful to know if a mux is mux-locked or if it is parent-locked. The 42 - following list was correct at the time of writing: 43 - 44 - In drivers/i2c/muxes/: 45 - 46 - ====================== ============================================= 47 - i2c-arb-gpio-challenge Parent-locked 48 - i2c-mux-gpio Normally parent-locked, mux-locked iff 49 - all involved gpio pins are controlled by the 50 - same I2C root adapter that they mux. 51 - i2c-mux-gpmux Normally parent-locked, mux-locked iff 52 - specified in device-tree. 
53 - i2c-mux-ltc4306 Mux-locked 54 - i2c-mux-mlxcpld Parent-locked 55 - i2c-mux-pca9541 Parent-locked 56 - i2c-mux-pca954x Parent-locked 57 - i2c-mux-pinctrl Normally parent-locked, mux-locked iff 58 - all involved pinctrl devices are controlled 59 - by the same I2C root adapter that they mux. 60 - i2c-mux-reg Parent-locked 61 - ====================== ============================================= 62 - 63 - In drivers/iio/: 64 - 65 - ====================== ============================================= 66 - gyro/mpu3050 Mux-locked 67 - imu/inv_mpu6050/ Mux-locked 68 - ====================== ============================================= 69 - 70 - In drivers/media/: 71 - 72 - ======================= ============================================= 73 - dvb-frontends/lgdt3306a Mux-locked 74 - dvb-frontends/m88ds3103 Parent-locked 75 - dvb-frontends/rtl2830 Parent-locked 76 - dvb-frontends/rtl2832 Mux-locked 77 - dvb-frontends/si2168 Mux-locked 78 - usb/cx231xx/ Parent-locked 79 - ======================= ============================================= 38 + mux-locked or parent-locked muxes. 80 39 81 40 82 41 Mux-locked muxes ··· 52 89 stages of the transaction. This has the benefit that the mux driver 53 90 may be easier and cleaner to implement, but it has some caveats. 54 91 55 - ==== ===================================================================== 56 - ML1. If you build a topology with a mux-locked mux being the parent 57 - of a parent-locked mux, this might break the expectation from the 58 - parent-locked mux that the root adapter is locked during the 59 - transaction. 60 - 61 - ML2. It is not safe to build arbitrary topologies with two (or more) 62 - mux-locked muxes that are not siblings, when there are address 63 - collisions between the devices on the child adapters of these 64 - non-sibling muxes. 65 - 66 - I.e. the select-transfer-deselect transaction targeting e.g. 
device 67 - address 0x42 behind mux-one may be interleaved with a similar 68 - operation targeting device address 0x42 behind mux-two. The 69 - intension with such a topology would in this hypothetical example 70 - be that mux-one and mux-two should not be selected simultaneously, 71 - but mux-locked muxes do not guarantee that in all topologies. 72 - 73 - ML3. A mux-locked mux cannot be used by a driver for auto-closing 74 - gates/muxes, i.e. something that closes automatically after a given 75 - number (one, in most cases) of I2C transfers. Unrelated I2C transfers 76 - may creep in and close prematurely. 77 - 78 - ML4. If any non-I2C operation in the mux driver changes the I2C mux state, 79 - the driver has to lock the root adapter during that operation. 80 - Otherwise garbage may appear on the bus as seen from devices 81 - behind the mux, when an unrelated I2C transfer is in flight during 82 - the non-I2C mux-changing operation. 83 - ==== ===================================================================== 84 - 85 - 86 92 Mux-locked Example 87 - ------------------ 88 - 93 + ~~~~~~~~~~~~~~~~~~ 89 94 90 95 :: 91 96 ··· 84 153 of the entire operation. But accesses to D3 are possibly interleaved 85 154 at any point. 86 155 156 + Mux-locked caveats 157 + ~~~~~~~~~~~~~~~~~~ 158 + 159 + When using a mux-locked mux, be aware of the following restrictions: 160 + 161 + [ML1] 162 + If you build a topology with a mux-locked mux being the parent 163 + of a parent-locked mux, this might break the expectation from the 164 + parent-locked mux that the root adapter is locked during the 165 + transaction. 166 + 167 + [ML2] 168 + It is not safe to build arbitrary topologies with two (or more) 169 + mux-locked muxes that are not siblings, when there are address 170 + collisions between the devices on the child adapters of these 171 + non-sibling muxes. 172 + 173 + I.e. the select-transfer-deselect transaction targeting e.g. 
+ device
+ address 0x42 behind mux-one may be interleaved with a similar
+ operation targeting device address 0x42 behind mux-two. The
+ intent with such a topology would in this hypothetical example
+ be that mux-one and mux-two should not be selected simultaneously,
+ but mux-locked muxes do not guarantee that in all topologies.
+
+ [ML3]
+   A mux-locked mux cannot be used by a driver for auto-closing
+   gates/muxes, i.e. something that closes automatically after a given
+   number (one, in most cases) of I2C transfers. Unrelated I2C transfers
+   may creep in and close prematurely.
+
+ [ML4]
+   If any non-I2C operation in the mux driver changes the I2C mux state,
+   the driver has to lock the root adapter during that operation.
+   Otherwise garbage may appear on the bus as seen from devices
+   behind the mux, when an unrelated I2C transfer is in flight during
+   the non-I2C mux-changing operation.
+

  Parent-locked muxes
  -------------------
···
  transfer-deselect transaction. The implication is that the mux driver
  has to ensure that any and all I2C transfers through that parent
  adapter during the transaction are unlocked I2C transfers (using e.g.
- __i2c_transfer), or a deadlock will follow. There are a couple of
- caveats.
-
- ==== ====================================================================
- PL1. If you build a topology with a parent-locked mux being the child
-      of another mux, this might break a possible assumption from the
-      child mux that the root adapter is unused between its select op
-      and the actual transfer (e.g. if the child mux is auto-closing
-      and the parent mux issues I2C transfers as part of its select).
-      This is especially the case if the parent mux is mux-locked, but
-      it may also happen if the parent mux is parent-locked.
-
- PL2. If select/deselect calls out to other subsystems such as gpio,
-      pinctrl, regmap or iio, it is essential that any I2C transfers
-      caused by these subsystems are unlocked. This can be convoluted to
-      accomplish, maybe even impossible if an acceptably clean solution
-      is sought.
- ==== ====================================================================
-
+ __i2c_transfer), or a deadlock will follow.

  Parent-locked Example
- ---------------------
+ ~~~~~~~~~~~~~~~~~~~~~

  ::
···
  9. M1 unlocks its parent adapter.
  10. M1 unlocks muxes on its parent.

-
  This means that accesses to both D2 and D3 are locked out for the full
  duration of the entire operation.
+
+ Parent-locked Caveats
+ ~~~~~~~~~~~~~~~~~~~~~
+
+ When using a parent-locked mux, be aware of the following restrictions:
+
+ [PL1]
+   If you build a topology with a parent-locked mux being the child
+   of another mux, this might break a possible assumption from the
+   child mux that the root adapter is unused between its select op
+   and the actual transfer (e.g. if the child mux is auto-closing
+   and the parent mux issues I2C transfers as part of its select).
+   This is especially the case if the parent mux is mux-locked, but
+   it may also happen if the parent mux is parent-locked.
+
+ [PL2]
+   If select/deselect calls out to other subsystems such as gpio,
+   pinctrl, regmap or iio, it is essential that any I2C transfers
+   caused by these subsystems are unlocked. This can be convoluted to
+   accomplish, maybe even impossible if an acceptably clean solution
+   is sought.


  Complex Examples
···
  When device D1 is accessed, accesses to D2 are locked out for the
  full duration of the operation (muxes on the top child adapter of M1
  are locked). But accesses to D3 and D4 are possibly interleaved at
- any point. Accesses to D3 locks out D1 and D2, but accesses to D4
- are still possibly interleaved.
+ any point.
+
+ Accesses to D3 locks out D1 and D2, but accesses to D4 are still possibly
+ interleaved.


  Mux-locked mux as parent of parent-locked mux
···
  When D1 or D2 are accessed, accesses to D3 and D4 are locked out while
  accesses to D5 may interleave. When D3 or D4 are accessed, accesses to
  all other devices are locked out.
+
+
+ Mux type of existing device drivers
+ ===================================
+
+ Whether a device is mux-locked or parent-locked depends on its
+ implementation. The following list was correct at the time of writing:
+
+ In drivers/i2c/muxes/:
+
+ ====================== =============================================
+ i2c-arb-gpio-challenge Parent-locked
+ i2c-mux-gpio           Normally parent-locked, mux-locked iff
+                        all involved gpio pins are controlled by the
+                        same I2C root adapter that they mux.
+ i2c-mux-gpmux          Normally parent-locked, mux-locked iff
+                        specified in device-tree.
+ i2c-mux-ltc4306        Mux-locked
+ i2c-mux-mlxcpld        Parent-locked
+ i2c-mux-pca9541        Parent-locked
+ i2c-mux-pca954x        Parent-locked
+ i2c-mux-pinctrl        Normally parent-locked, mux-locked iff
+                        all involved pinctrl devices are controlled
+                        by the same I2C root adapter that they mux.
+ i2c-mux-reg            Parent-locked
+ ====================== =============================================
+
+ In drivers/iio/:
+
+ ====================== =============================================
+ gyro/mpu3050           Mux-locked
+ imu/inv_mpu6050/       Mux-locked
+ ====================== =============================================
+
+ In drivers/media/:
+
+ ======================= =============================================
+ dvb-frontends/lgdt3306a Mux-locked
+ dvb-frontends/m88ds3103 Parent-locked
+ dvb-frontends/rtl2830   Parent-locked
+ dvb-frontends/rtl2832   Mux-locked
+ dvb-frontends/si2168    Mux-locked
+ usb/cx231xx/            Parent-locked
+ ======================= =============================================
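The deadlock rule for parent-locked muxes can be modeled outside the kernel. The sketch below is a hypothetical user-space model, not kernel code: a pthread mutex stands in for the root adapter lock, and `locked_transfer`/`unlocked_transfer`/`run_transaction` are invented names standing in for `i2c_transfer`, `__i2c_transfer`, and a select-transfer-deselect transaction. With a non-recursive lock held for the whole transaction, only the unlocked transfer variant avoids deadlock.

```c
#include <pthread.h>

/* Root adapter lock; in the kernel this is the i2c_adapter's bus lock. */
static pthread_mutex_t root_adapter_lock = PTHREAD_MUTEX_INITIALIZER;

/* Model of i2c_transfer(): takes the adapter lock itself. */
static int locked_transfer(void)
{
	if (pthread_mutex_trylock(&root_adapter_lock) != 0)
		return -1;	/* lock already held: a real call would deadlock */
	pthread_mutex_unlock(&root_adapter_lock);
	return 0;
}

/* Model of __i2c_transfer(): assumes the caller already holds the lock. */
static int unlocked_transfer(void)
{
	return 0;
}

/*
 * One select-transfer-deselect transaction through a parent-locked mux:
 * the mux driver holds the root adapter lock for the whole transaction.
 */
static int run_transaction(int use_unlocked_variant)
{
	int ret;

	pthread_mutex_lock(&root_adapter_lock);		/* select */
	ret = use_unlocked_variant ? unlocked_transfer()
				   : locked_transfer();	/* transfer */
	pthread_mutex_unlock(&root_adapter_lock);	/* deselect */
	return ret;
}
```

In this model `run_transaction(1)` succeeds while `run_transaction(0)` reports the would-be deadlock, mirroring why the documentation insists on `__i2c_transfer` inside the transaction.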
-11
Documentation/networking/rxrpc.rst
···
  first function to change. Note that this must be called in TASK_RUNNING
  state.

- (#) Get reply timestamp::
-
-        bool rxrpc_kernel_get_reply_time(struct socket *sock,
-                                         struct rxrpc_call *call,
-                                         ktime_t *_ts)
-
-     This allows the timestamp on the first DATA packet of the reply of a
-     client call to be queried, provided that it is still in the Rx ring. If
-     successful, the timestamp will be stored into ``*_ts`` and true will be
-     returned; false will be returned otherwise.
-
  (#) Get remote client epoch::

         u32 rxrpc_kernel_get_epoch(struct socket *sock,
+23 -1
MAINTAINERS
···
  F: drivers/crypto/hisilicon/zip/

  HISILICON ROCE DRIVER
+ M: Haoyue Xu <xuhaoyue1@hisilicon.com>
  M: Wenpeng Liang <liangwenpeng@huawei.com>
- M: Weihang Li <liweihang@huawei.com>
  L: linux-rdma@vger.kernel.org
  S: Maintained
  F: Documentation/devicetree/bindings/infiniband/hisilicon-hns-roce.txt
···
  M: Daire McNamara <daire.mcnamara@microchip.com>
  L: linux-riscv@lists.infradead.org
  S: Supported
+ F: Documentation/devicetree/bindings/clock/microchip,mpfs.yaml
+ F: Documentation/devicetree/bindings/gpio/microchip,mpfs-gpio.yaml
+ F: Documentation/devicetree/bindings/i2c/microchip,corei2c.yaml
+ F: Documentation/devicetree/bindings/mailbox/microchip,mpfs-mailbox.yaml
+ F: Documentation/devicetree/bindings/net/can/microchip,mpfs-can.yaml
+ F: Documentation/devicetree/bindings/pwm/microchip,corepwm.yaml
+ F: Documentation/devicetree/bindings/soc/microchip/microchip,mpfs-sys-controller.yaml
+ F: Documentation/devicetree/bindings/spi/microchip,mpfs-spi.yaml
+ F: Documentation/devicetree/bindings/usb/microchip,mpfs-musb.yaml
  F: arch/riscv/boot/dts/microchip/
  F: drivers/char/hw_random/mpfs-rng.c
  F: drivers/clk/microchip/clk-mpfs.c
+ F: drivers/i2c/busses/i2c-microchip-core.c
  F: drivers/mailbox/mailbox-mpfs.c
  F: drivers/pci/controller/pcie-microchip-host.c
  F: drivers/rtc/rtc-mpfs.c
···
  L: linux-rdma@vger.kernel.org
  S: Maintained
  F: drivers/infiniband/ulp/rtrs/
+
+ RUNTIME VERIFICATION (RV)
+ M: Daniel Bristot de Oliveira <bristot@kernel.org>
+ M: Steven Rostedt <rostedt@goodmis.org>
+ L: linux-trace-devel@vger.kernel.org
+ S: Maintained
+ F: Documentation/trace/rv/
+ F: include/linux/rv.h
+ F: include/rv/
+ F: kernel/trace/rv/
+ F: tools/verification/

  RXRPC SOCKETS (AF_RXRPC)
  M: David Howells <dhowells@redhat.com>
···
  F: include/linux/trace*.h
  F: include/trace/
  F: kernel/trace/
+ F: scripts/tracing/
  F: tools/testing/selftests/ftrace/

  TRACING MMIO ACCESSES (MMIOTRACE)
+2 -3
Makefile
···
  VERSION = 6
  PATCHLEVEL = 0
  SUBLEVEL = 0
- EXTRAVERSION = -rc4
+ EXTRAVERSION = -rc5
  NAME = Hurr durr I'ma ninja sloth

  # *DOCUMENTATION*
···

  PHONY += headers
  headers: $(version_h) scripts_unifdef uapi-asm-generic archheaders archscripts
- 	$(if $(wildcard $(srctree)/arch/$(SRCARCH)/include/uapi/asm/Kbuild),, \
- 	  $(error Headers not exportable for the $(SRCARCH) architecture))
+ 	$(if $(filter um, $(SRCARCH)), $(error Headers not exportable for UML))
  	$(Q)$(MAKE) $(hdr-inst)=include/uapi
  	$(Q)$(MAKE) $(hdr-inst)=arch/$(SRCARCH)/include/uapi
+3
arch/Kconfig
···
  	  Architecture provides a function to run __do_softirq() on a
  	  separate stack.

+ config SOFTIRQ_ON_OWN_STACK
+ 	def_bool HAVE_SOFTIRQ_ON_OWN_STACK && !PREEMPT_RT
+
  config ALTERNATE_USER_ADDRESS_SPACE
  	bool
  	help
+1 -1
arch/arm/boot/dts/arm-realview-eb.dtsi
···
  	compatible = "arm,pl022", "arm,primecell";
  	reg = <0x1000d000 0x1000>;
  	clocks = <&sspclk>, <&pclk>;
- 	clock-names = "SSPCLK", "apb_pclk";
+ 	clock-names = "sspclk", "apb_pclk";
  };

  wdog: watchdog@10010000 {
+1 -1
arch/arm/boot/dts/arm-realview-pb1176.dts
···
  	interrupt-parent = <&intc_dc1176>;
  	interrupts = <0 17 IRQ_TYPE_LEVEL_HIGH>;
  	clocks = <&sspclk>, <&pclk>;
- 	clock-names = "SSPCLK", "apb_pclk";
+ 	clock-names = "sspclk", "apb_pclk";
  };

  pb1176_serial0: serial@1010c000 {
+1 -1
arch/arm/boot/dts/arm-realview-pb11mp.dts
···
  	interrupt-parent = <&intc_pb11mp>;
  	interrupts = <0 11 IRQ_TYPE_LEVEL_HIGH>;
  	clocks = <&sspclk>, <&pclk>;
- 	clock-names = "SSPCLK", "apb_pclk";
+ 	clock-names = "sspclk", "apb_pclk";
  };

  watchdog@1000f000 {
+1 -1
arch/arm/boot/dts/arm-realview-pbx.dtsi
···
  	compatible = "arm,pl022", "arm,primecell";
  	reg = <0x1000d000 0x1000>;
  	clocks = <&sspclk>, <&pclk>;
- 	clock-names = "SSPCLK", "apb_pclk";
+ 	clock-names = "sspclk", "apb_pclk";
  };

  wdog0: watchdog@1000f000 {
+10 -11
arch/arm/boot/dts/at91-sama5d27_wlsom1.dtsi
··· 76 76 regulators { 77 77 vdd_3v3: VDD_IO { 78 78 regulator-name = "VDD_IO"; 79 - regulator-min-microvolt = <1200000>; 80 - regulator-max-microvolt = <3700000>; 79 + regulator-min-microvolt = <3300000>; 80 + regulator-max-microvolt = <3300000>; 81 81 regulator-initial-mode = <2>; 82 82 regulator-allowed-modes = <2>, <4>; 83 83 regulator-always-on; ··· 95 95 96 96 vddio_ddr: VDD_DDR { 97 97 regulator-name = "VDD_DDR"; 98 - regulator-min-microvolt = <600000>; 99 - regulator-max-microvolt = <1850000>; 98 + regulator-min-microvolt = <1200000>; 99 + regulator-max-microvolt = <1200000>; 100 100 regulator-initial-mode = <2>; 101 101 regulator-allowed-modes = <2>, <4>; 102 102 regulator-always-on; ··· 118 118 119 119 vdd_core: VDD_CORE { 120 120 regulator-name = "VDD_CORE"; 121 - regulator-min-microvolt = <600000>; 122 - regulator-max-microvolt = <1850000>; 121 + regulator-min-microvolt = <1250000>; 122 + regulator-max-microvolt = <1250000>; 123 123 regulator-initial-mode = <2>; 124 124 regulator-allowed-modes = <2>, <4>; 125 125 regulator-always-on; ··· 160 160 161 161 LDO1 { 162 162 regulator-name = "LDO1"; 163 - regulator-min-microvolt = <1200000>; 164 - regulator-max-microvolt = <3700000>; 163 + regulator-min-microvolt = <3300000>; 164 + regulator-max-microvolt = <3300000>; 165 165 regulator-always-on; 166 166 167 167 regulator-state-standby { ··· 175 175 176 176 LDO2 { 177 177 regulator-name = "LDO2"; 178 - regulator-min-microvolt = <1200000>; 179 - regulator-max-microvolt = <3700000>; 180 - regulator-always-on; 178 + regulator-min-microvolt = <1800000>; 179 + regulator-max-microvolt = <3300000>; 181 180 182 181 regulator-state-standby { 183 182 regulator-on-in-suspend;
+10 -11
arch/arm/boot/dts/at91-sama5d2_icp.dts
··· 196 196 regulators { 197 197 vdd_io_reg: VDD_IO { 198 198 regulator-name = "VDD_IO"; 199 - regulator-min-microvolt = <1200000>; 200 - regulator-max-microvolt = <3700000>; 199 + regulator-min-microvolt = <3300000>; 200 + regulator-max-microvolt = <3300000>; 201 201 regulator-initial-mode = <2>; 202 202 regulator-allowed-modes = <2>, <4>; 203 203 regulator-always-on; ··· 215 215 216 216 VDD_DDR { 217 217 regulator-name = "VDD_DDR"; 218 - regulator-min-microvolt = <600000>; 219 - regulator-max-microvolt = <1850000>; 218 + regulator-min-microvolt = <1350000>; 219 + regulator-max-microvolt = <1350000>; 220 220 regulator-initial-mode = <2>; 221 221 regulator-allowed-modes = <2>, <4>; 222 222 regulator-always-on; ··· 234 234 235 235 VDD_CORE { 236 236 regulator-name = "VDD_CORE"; 237 - regulator-min-microvolt = <600000>; 238 - regulator-max-microvolt = <1850000>; 237 + regulator-min-microvolt = <1250000>; 238 + regulator-max-microvolt = <1250000>; 239 239 regulator-initial-mode = <2>; 240 240 regulator-allowed-modes = <2>, <4>; 241 241 regulator-always-on; ··· 257 257 regulator-max-microvolt = <1850000>; 258 258 regulator-initial-mode = <2>; 259 259 regulator-allowed-modes = <2>, <4>; 260 - regulator-always-on; 261 260 262 261 regulator-state-standby { 263 262 regulator-on-in-suspend; ··· 271 272 272 273 LDO1 { 273 274 regulator-name = "LDO1"; 274 - regulator-min-microvolt = <1200000>; 275 - regulator-max-microvolt = <3700000>; 275 + regulator-min-microvolt = <2500000>; 276 + regulator-max-microvolt = <2500000>; 276 277 regulator-always-on; 277 278 278 279 regulator-state-standby { ··· 286 287 287 288 LDO2 { 288 289 regulator-name = "LDO2"; 289 - regulator-min-microvolt = <1200000>; 290 - regulator-max-microvolt = <3700000>; 290 + regulator-min-microvolt = <3300000>; 291 + regulator-max-microvolt = <3300000>; 291 292 regulator-always-on; 292 293 293 294 regulator-state-standby {
+9 -9
arch/arm/boot/dts/at91-sama7g5ek.dts
··· 244 244 regulators { 245 245 vdd_3v3: VDD_IO { 246 246 regulator-name = "VDD_IO"; 247 - regulator-min-microvolt = <1200000>; 248 - regulator-max-microvolt = <3700000>; 247 + regulator-min-microvolt = <3300000>; 248 + regulator-max-microvolt = <3300000>; 249 249 regulator-initial-mode = <2>; 250 250 regulator-allowed-modes = <2>, <4>; 251 251 regulator-always-on; ··· 264 264 265 265 vddioddr: VDD_DDR { 266 266 regulator-name = "VDD_DDR"; 267 - regulator-min-microvolt = <1300000>; 268 - regulator-max-microvolt = <1450000>; 267 + regulator-min-microvolt = <1350000>; 268 + regulator-max-microvolt = <1350000>; 269 269 regulator-initial-mode = <2>; 270 270 regulator-allowed-modes = <2>, <4>; 271 271 regulator-always-on; ··· 285 285 286 286 vddcore: VDD_CORE { 287 287 regulator-name = "VDD_CORE"; 288 - regulator-min-microvolt = <1100000>; 289 - regulator-max-microvolt = <1850000>; 288 + regulator-min-microvolt = <1150000>; 289 + regulator-max-microvolt = <1150000>; 290 290 regulator-initial-mode = <2>; 291 291 regulator-allowed-modes = <2>, <4>; 292 292 regulator-always-on; ··· 306 306 vddcpu: VDD_OTHER { 307 307 regulator-name = "VDD_OTHER"; 308 308 regulator-min-microvolt = <1050000>; 309 - regulator-max-microvolt = <1850000>; 309 + regulator-max-microvolt = <1250000>; 310 310 regulator-initial-mode = <2>; 311 311 regulator-allowed-modes = <2>, <4>; 312 312 regulator-ramp-delay = <3125>; ··· 326 326 327 327 vldo1: LDO1 { 328 328 regulator-name = "LDO1"; 329 - regulator-min-microvolt = <1200000>; 330 - regulator-max-microvolt = <3700000>; 329 + regulator-min-microvolt = <1800000>; 330 + regulator-max-microvolt = <1800000>; 331 331 regulator-always-on; 332 332 333 333 regulator-state-standby {
+11 -9
arch/arm/boot/dts/bcm63178.dtsi
··· 32 32 next-level-cache = <&L2_0>; 33 33 enable-method = "psci"; 34 34 }; 35 + 35 36 CA7_2: cpu@2 { 36 37 device_type = "cpu"; 37 38 compatible = "arm,cortex-a7"; ··· 40 39 next-level-cache = <&L2_0>; 41 40 enable-method = "psci"; 42 41 }; 42 + 43 43 L2_0: l2-cache0 { 44 44 compatible = "cache"; 45 45 }; ··· 48 46 49 47 timer { 50 48 compatible = "arm,armv7-timer"; 51 - interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>, 52 - <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>, 53 - <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>, 54 - <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>; 49 + interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(3) | IRQ_TYPE_LEVEL_LOW)>, 50 + <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(3) | IRQ_TYPE_LEVEL_LOW)>, 51 + <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(3) | IRQ_TYPE_LEVEL_LOW)>, 52 + <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(3) | IRQ_TYPE_LEVEL_LOW)>; 55 53 arm,cpu-registers-not-fw-configured; 56 54 }; 57 55 ··· 82 80 psci { 83 81 compatible = "arm,psci-0.2"; 84 82 method = "smc"; 85 - cpu_off = <1>; 86 - cpu_on = <2>; 87 83 }; 88 84 89 85 axi@81000000 { 90 86 compatible = "simple-bus"; 91 87 #address-cells = <1>; 92 88 #size-cells = <1>; 93 - ranges = <0 0x81000000 0x4000>; 89 + ranges = <0 0x81000000 0x8000>; 94 90 95 91 gic: interrupt-controller@1000 { 96 92 compatible = "arm,cortex-a7-gic"; 97 93 #interrupt-cells = <3>; 98 - #address-cells = <0>; 99 94 interrupt-controller; 95 + interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(3) | IRQ_TYPE_LEVEL_HIGH)>; 100 96 reg = <0x1000 0x1000>, 101 - <0x2000 0x2000>; 97 + <0x2000 0x2000>, 98 + <0x4000 0x2000>, 99 + <0x6000 0x2000>; 102 100 }; 103 101 }; 104 102
+9 -9
arch/arm/boot/dts/bcm6846.dtsi
··· 40 40 41 41 timer { 42 42 compatible = "arm,armv7-timer"; 43 - interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>, 44 - <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>, 45 - <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>, 46 - <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>; 43 + interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>, 44 + <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>, 45 + <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>, 46 + <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>; 47 47 arm,cpu-registers-not-fw-configured; 48 48 }; 49 49 ··· 65 65 psci { 66 66 compatible = "arm,psci-0.2"; 67 67 method = "smc"; 68 - cpu_off = <1>; 69 - cpu_on = <2>; 70 68 }; 71 69 72 70 axi@81000000 { 73 71 compatible = "simple-bus"; 74 72 #address-cells = <1>; 75 73 #size-cells = <1>; 76 - ranges = <0 0x81000000 0x4000>; 74 + ranges = <0 0x81000000 0x8000>; 77 75 78 76 gic: interrupt-controller@1000 { 79 77 compatible = "arm,cortex-a7-gic"; 80 78 #interrupt-cells = <3>; 81 - #address-cells = <0>; 82 79 interrupt-controller; 80 + interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>; 83 81 reg = <0x1000 0x1000>, 84 - <0x2000 0x2000>; 82 + <0x2000 0x2000>, 83 + <0x4000 0x2000>, 84 + <0x6000 0x2000>; 85 85 }; 86 86 }; 87 87
+5 -4
arch/arm/boot/dts/bcm6878.dtsi
···
  		next-level-cache = <&L2_0>;
  		enable-method = "psci";
  	};
+
  	L2_0: l2-cache0 {
  		compatible = "cache";
  	};

  timer {
  	compatible = "arm,armv7-timer";
- 	interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
- 		<GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
- 		<GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
- 		<GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>;
+ 	interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ 		<GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ 		<GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ 		<GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
  	arm,cpu-registers-not-fw-configured;
  };
+1 -11
arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
···
  	vin-supply = <&reg_3p3v_s5>;
  };

- reg_3p3v_s0: regulator-3p3v-s0 {
- 	compatible = "regulator-fixed";
- 	regulator-name = "V_3V3_S0";
- 	regulator-min-microvolt = <3300000>;
- 	regulator-max-microvolt = <3300000>;
- 	regulator-always-on;
- 	regulator-boot-on;
- 	vin-supply = <&reg_3p3v_s5>;
- };
-
  reg_3p3v_s5: regulator-3p3v-s5 {
  	compatible = "regulator-fixed";
  	regulator-name = "V_3V3_S5";
···

  /* default boot source: workaround #1 for errata ERR006282 */
  smarc_flash: flash@0 {
- 	compatible = "winbond,w25q16dw", "jedec,spi-nor";
+ 	compatible = "jedec,spi-nor";
  	reg = <0>;
  	spi-max-frequency = <20000000>;
  };
+1 -1
arch/arm/boot/dts/imx6qdl-vicut1.dtsi
···
  	enable-gpios = <&gpio4 28 GPIO_ACTIVE_HIGH>;
  };

- backlight_led: backlight_led {
+ backlight_led: backlight-led {
  	compatible = "pwm-backlight";
  	pwms = <&pwm3 0 5000000 0>;
  	brightness-levels = <0 16 64 255>;
+2 -2
arch/arm/boot/dts/integratorap-im-pd1.dts
···
  	clock-names = "uartclk", "apb_pclk";
  };

- ssp@300000 {
+ spi@300000 {
  	compatible = "arm,pl022", "arm,primecell";
  	reg = <0x00300000 0x1000>;
  	interrupts-extended = <&impd1_vic 3>;
  	clocks = <&impd1_sspclk>, <&sysclk>;
- 	clock-names = "spiclk", "apb_pclk";
+ 	clock-names = "sspclk", "apb_pclk";
  };

  impd1_gpio0: gpio@400000 {
+1 -1
arch/arm/boot/dts/versatile-ab.dts
···
  	reg = <0x101f4000 0x1000>;
  	interrupts = <11>;
  	clocks = <&xtal24mhz>, <&pclk>;
- 	clock-names = "SSPCLK", "apb_pclk";
+ 	clock-names = "sspclk", "apb_pclk";
  };

  fpga {
-1
arch/arm/configs/at91_dt_defconfig
···
  CONFIG_DMADEVICES=y
  CONFIG_AT_HDMAC=y
  CONFIG_AT_XDMAC=y
- CONFIG_MICROCHIP_PIT64B=y
  # CONFIG_IOMMU_SUPPORT is not set
  CONFIG_IIO=y
  CONFIG_AT91_ADC=y
-1
arch/arm/configs/sama7_defconfig
···
  CONFIG_DMADEVICES=y
  CONFIG_AT_XDMAC=y
  CONFIG_STAGING=y
- CONFIG_MICROCHIP_PIT64B=y
  # CONFIG_IOMMU_SUPPORT is not set
  CONFIG_IIO=y
  CONFIG_IIO_SW_TRIGGER=y
+1 -1
arch/arm/kernel/irq.c
···
  	}
  }

- #ifndef CONFIG_PREEMPT_RT
+ #ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
  static void ____do_softirq(void *arg)
  {
  	__do_softirq();
+32 -4
arch/arm/mach-at91/pm.c
···

  static int at91_suspend_finish(unsigned long val)
  {
+ 	unsigned char modified_gray_code[] = {
+ 		0x00, 0x01, 0x02, 0x03, 0x06, 0x07, 0x04, 0x05, 0x0c, 0x0d,
+ 		0x0e, 0x0f, 0x0a, 0x0b, 0x08, 0x09, 0x18, 0x19, 0x1a, 0x1b,
+ 		0x1e, 0x1f, 0x1c, 0x1d, 0x14, 0x15, 0x16, 0x17, 0x12, 0x13,
+ 		0x10, 0x11,
+ 	};
+ 	unsigned int tmp, index;
  	int i;

  	if (soc_pm.data.mode == AT91_PM_BACKUP && soc_pm.data.ramc_phy) {
+ 		/*
+ 		 * Bootloader will perform DDR recalibration and will try to
+ 		 * restore the ZQ0SR0 with the value saved here. But the
+ 		 * calibration is buggy and restoring some values from ZQ0SR0
+ 		 * is forbidden and risky thus we need to provide processed
+ 		 * values for these (modified gray code values).
+ 		 */
+ 		tmp = readl(soc_pm.data.ramc_phy + DDR3PHY_ZQ0SR0);
+
+ 		/* Store pull-down output impedance select. */
+ 		index = (tmp >> DDR3PHY_ZQ0SR0_PDO_OFF) & 0x1f;
+ 		soc_pm.bu->ddr_phy_calibration[0] = modified_gray_code[index];
+
+ 		/* Store pull-up output impedance select. */
+ 		index = (tmp >> DDR3PHY_ZQ0SR0_PUO_OFF) & 0x1f;
+ 		soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index];
+
+ 		/* Store pull-down on-die termination impedance select. */
+ 		index = (tmp >> DDR3PHY_ZQ0SR0_PDODT_OFF) & 0x1f;
+ 		soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index];
+
+ 		/* Store pull-up on-die termination impedance select. */
+ 		index = (tmp >> DDR3PHY_ZQ0SRO_PUODT_OFF) & 0x1f;
+ 		soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index];
+
  		/*
  		 * The 1st 8 words of memory might get corrupted in the process
  		 * of DDR PHY recalibration; it is saved here in securam and it
···
  		of_scan_flat_dt(at91_pm_backup_scan_memcs, &located);
  		if (!located)
  			goto securam_fail;
-
- 		/* DDR3PHY_ZQ0SR0 */
- 		soc_pm.bu->ddr_phy_calibration[0] = readl(soc_pm.data.ramc_phy +
- 							  0x188);
  	}

  	return 0;
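The replacement table in the pm.c hunk above is hard to eyeball. As a quick host-side sanity check (my own harness, not kernel code; the helper name `table_is_permutation` is invented), one can verify that `modified_gray_code[]` is a permutation of 0..31, i.e. the remapping never aliases two raw 5-bit ZQ0SR0 field values onto the same stored value:

```c
#include <stddef.h>

/* Copy of the table from at91_suspend_finish() above. */
static const unsigned char modified_gray_code[] = {
	0x00, 0x01, 0x02, 0x03, 0x06, 0x07, 0x04, 0x05, 0x0c, 0x0d,
	0x0e, 0x0f, 0x0a, 0x0b, 0x08, 0x09, 0x18, 0x19, 0x1a, 0x1b,
	0x1e, 0x1f, 0x1c, 0x1d, 0x14, 0x15, 0x16, 0x17, 0x12, 0x13,
	0x10, 0x11,
};

/* Returns 1 iff the table maps the 5-bit index space onto itself bijectively. */
static int table_is_permutation(void)
{
	int seen[32] = { 0 };
	size_t i;

	for (i = 0; i < 32; i++) {
		if (modified_gray_code[i] > 0x1f)
			return 0;	/* value outside the 5-bit field */
		seen[modified_gray_code[i]]++;
	}
	for (i = 0; i < 32; i++)
		if (seen[i] != 1)
			return 0;	/* some value missing or duplicated */
	return 1;
}
```

Being a bijection matters here: the bootloader restores the stored value, so no information from the raw calibration fields may be lost in the remapping.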
+17 -7
arch/arm/mach-at91/pm_suspend.S
···
  	/* Put DDR PHY's DLL in bypass mode for non-backup modes. */
  	cmp	r7, #AT91_PM_BACKUP
  	beq	sr_ena_3
- 	ldr	tmp1, [r3, #DDR3PHY_PIR]
- 	orr	tmp1, tmp1, #DDR3PHY_PIR_DLLBYP
- 	str	tmp1, [r3, #DDR3PHY_PIR]
+
+ 	/* Disable DX DLLs. */
+ 	ldr	tmp1, [r3, #DDR3PHY_DX0DLLCR]
+ 	orr	tmp1, tmp1, #DDR3PHY_DXDLLCR_DLLDIS
+ 	str	tmp1, [r3, #DDR3PHY_DX0DLLCR]
+
+ 	ldr	tmp1, [r3, #DDR3PHY_DX1DLLCR]
+ 	orr	tmp1, tmp1, #DDR3PHY_DXDLLCR_DLLDIS
+ 	str	tmp1, [r3, #DDR3PHY_DX1DLLCR]

  sr_ena_3:
  	/* Power down DDR PHY data receivers. */
···
  	bic	tmp1, tmp1, #DDR3PHY_DSGCR_ODTPDD_ODT0
  	str	tmp1, [r3, #DDR3PHY_DSGCR]

- 	/* Take DDR PHY's DLL out of bypass mode. */
- 	ldr	tmp1, [r3, #DDR3PHY_PIR]
- 	bic	tmp1, tmp1, #DDR3PHY_PIR_DLLBYP
- 	str	tmp1, [r3, #DDR3PHY_PIR]
+ 	/* Enable DX DLLs. */
+ 	ldr	tmp1, [r3, #DDR3PHY_DX0DLLCR]
+ 	bic	tmp1, tmp1, #DDR3PHY_DXDLLCR_DLLDIS
+ 	str	tmp1, [r3, #DDR3PHY_DX0DLLCR]
+
+ 	ldr	tmp1, [r3, #DDR3PHY_DX1DLLCR]
+ 	bic	tmp1, tmp1, #DDR3PHY_DXDLLCR_DLLDIS
+ 	str	tmp1, [r3, #DDR3PHY_DX1DLLCR]

  	/* Enable quasi-dynamic programming. */
  	mov	tmp1, #0
+1 -1
arch/arm/mach-ixp4xx/ixp4xx-of.c
···
  }

  /*
- * We handle 4 differen SoC families. These compatible strings are enough
+ * We handle 4 different SoC families. These compatible strings are enough
  * to provide the core so that different boards can add their more detailed
  * specifics.
  */
+2
arch/arm64/Kconfig
···
  	depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
  	# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697
  	depends on !CC_IS_GCC || GCC_VERSION >= 100100
+ 	# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106671
+ 	depends on !CC_IS_GCC
  	# https://github.com/llvm/llvm-project/commit/a88c722e687e6780dcd6a58718350dc76fcc4cc9
  	depends on !CC_IS_CLANG || CLANG_VERSION >= 120000
  	depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
+2 -1
arch/arm64/boot/dts/arm/juno-base.dtsi
···
  	compatible = "arm,mhu", "arm,primecell";
  	reg = <0x0 0x2b1f0000 0x0 0x1000>;
  	interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
- 		     <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>;
+ 		     <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
+ 		     <GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>;
  	#mbox-cells = <1>;
  	clocks = <&soc_refclk100mhz>;
  	clock-names = "apb_pclk";
-2
arch/arm64/boot/dts/arm/juno-cs-r1r2.dtsi
···
  port@0 {
  	reg = <0>;
  	csys2_funnel_in_port0: endpoint {
- 		slave-mode;
  		remote-endpoint = <&etf0_out_port>;
  	};
  };
···
  port@1 {
  	reg = <1>;
  	csys2_funnel_in_port1: endpoint {
- 		slave-mode;
  		remote-endpoint = <&etf1_out_port>;
  	};
  };
-1
arch/arm64/boot/dts/freescale/fsl-ls1028a-qds-65bb.dts
···
  &enetc_port0 {
  	phy-handle = <&slot1_sgmii>;
  	phy-mode = "2500base-x";
- 	managed = "in-band-status";
  	status = "okay";
  };

+4
arch/arm64/boot/dts/freescale/imx8mm-venice-gw7901.dts
···
  lan1: port@0 {
  	reg = <0>;
  	label = "lan1";
+ 	phy-mode = "internal";
  	local-mac-address = [00 00 00 00 00 00];
  };

  lan2: port@1 {
  	reg = <1>;
  	label = "lan2";
+ 	phy-mode = "internal";
  	local-mac-address = [00 00 00 00 00 00];
  };

  lan3: port@2 {
  	reg = <2>;
  	label = "lan3";
+ 	phy-mode = "internal";
  	local-mac-address = [00 00 00 00 00 00];
  };

  lan4: port@3 {
  	reg = <3>;
  	label = "lan4";
+ 	phy-mode = "internal";
  	local-mac-address = [00 00 00 00 00 00];
  };

+6 -5
arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
··· 32 32 }; 33 33 34 34 /* Fixed clock dedicated to SPI CAN controller */ 35 - clk20m: oscillator { 35 + clk40m: oscillator { 36 36 compatible = "fixed-clock"; 37 37 #clock-cells = <0>; 38 - clock-frequency = <20000000>; 38 + clock-frequency = <40000000>; 39 39 }; 40 40 41 41 gpio-keys { ··· 202 202 203 203 can1: can@0 { 204 204 compatible = "microchip,mcp251xfd"; 205 - clocks = <&clk20m>; 206 - interrupts-extended = <&gpio1 6 IRQ_TYPE_EDGE_FALLING>; 205 + clocks = <&clk40m>; 206 + interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_LOW>; 207 207 pinctrl-names = "default"; 208 208 pinctrl-0 = <&pinctrl_can1_int>; 209 209 reg = <0>; ··· 603 603 pinctrl-0 = <&pinctrl_gpio_9_dsi>, <&pinctrl_i2s_2_bclk_touch_reset>; 604 604 reg = <0x4a>; 605 605 /* Verdin I2S_2_BCLK (TOUCH_RESET#, SODIMM 42) */ 606 - reset-gpios = <&gpio3 23 GPIO_ACTIVE_HIGH>; 606 + reset-gpios = <&gpio3 23 GPIO_ACTIVE_LOW>; 607 607 status = "disabled"; 608 608 }; 609 609 ··· 745 745 }; 746 746 747 747 &usbphynop2 { 748 + power-domains = <&pgc_otg2>; 748 749 vcc-supply = <&reg_vdd_3v3>; 749 750 }; 750 751
+7 -7
arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi
···
  &ecspi1 {
  	pinctrl-names = "default";
  	pinctrl-0 = <&pinctrl_ecspi1>;
- 	cs-gpios = <&gpio5 9 GPIO_ACTIVE_LOW>;
+ 	cs-gpios = <&gpio5 17 GPIO_ACTIVE_LOW>;
  	status = "disabled";
  };

···
  	pinctrl-names = "default", "gpio";
  	pinctrl-0 = <&pinctrl_i2c5>;
  	pinctrl-1 = <&pinctrl_i2c5_gpio>;
- 	scl-gpios = <&gpio5 26 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
- 	sda-gpios = <&gpio5 27 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	scl-gpios = <&gpio3 26 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	sda-gpios = <&gpio3 27 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
  	status = "okay";
  };

···

  pinctrl_ecspi1: dhcom-ecspi1-grp {
  	fsl,pins = <
- 		MX8MP_IOMUXC_ECSPI1_SCLK__ECSPI1_SCLK 0x44
- 		MX8MP_IOMUXC_ECSPI1_MOSI__ECSPI1_MOSI 0x44
- 		MX8MP_IOMUXC_ECSPI1_MISO__ECSPI1_MISO 0x44
- 		MX8MP_IOMUXC_ECSPI1_SS0__GPIO5_IO09 0x40
+ 		MX8MP_IOMUXC_I2C1_SCL__ECSPI1_SCLK 0x44
+ 		MX8MP_IOMUXC_I2C1_SDA__ECSPI1_MOSI 0x44
+ 		MX8MP_IOMUXC_I2C2_SCL__ECSPI1_MISO 0x44
+ 		MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17 0x40
  	>;
  };

+4 -4
arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
···

  pinctrl_sai2: sai2grp {
  	fsl,pins = <
- 		MX8MP_IOMUXC_SAI2_TXFS__AUDIOMIX_SAI2_TX_SYNC
- 		MX8MP_IOMUXC_SAI2_TXD0__AUDIOMIX_SAI2_TX_DATA00
- 		MX8MP_IOMUXC_SAI2_TXC__AUDIOMIX_SAI2_TX_BCLK
- 		MX8MP_IOMUXC_SAI2_MCLK__AUDIOMIX_SAI2_MCLK
+ 		MX8MP_IOMUXC_SAI2_TXFS__AUDIOMIX_SAI2_TX_SYNC 0xd6
+ 		MX8MP_IOMUXC_SAI2_TXD0__AUDIOMIX_SAI2_TX_DATA00 0xd6
+ 		MX8MP_IOMUXC_SAI2_TXC__AUDIOMIX_SAI2_TX_BCLK 0xd6
+ 		MX8MP_IOMUXC_SAI2_MCLK__AUDIOMIX_SAI2_MCLK 0xd6
  	>;
  };

+2 -2
arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
···
  	interrupts = <5 IRQ_TYPE_EDGE_FALLING>;
  	reg = <0x4a>;
  	/* Verdin GPIO_2 (SODIMM 208) */
- 	reset-gpios = <&gpio1 1 GPIO_ACTIVE_HIGH>;
+ 	reset-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
  	status = "disabled";
  };
  };
···
  	pinctrl-0 = <&pinctrl_gpio_9_dsi>, <&pinctrl_i2s_2_bclk_touch_reset>;
  	reg = <0x4a>;
  	/* Verdin I2S_2_BCLK (TOUCH_RESET#, SODIMM 42) */
- 	reset-gpios = <&gpio5 0 GPIO_ACTIVE_HIGH>;
+ 	reset-gpios = <&gpio5 0 GPIO_ACTIVE_LOW>;
  	status = "disabled";
  };

-1
arch/arm64/boot/dts/freescale/imx8mq-tqma8mq.dtsi
···
  	reg = <0x51>;
  	pinctrl-names = "default";
  	pinctrl-0 = <&pinctrl_rtc>;
- 	interrupt-names = "irq";
  	interrupt-parent = <&gpio1>;
  	interrupts = <1 IRQ_TYPE_EDGE_FALLING>;
  	quartz-load-femtofarads = <7000>;
+1 -1
arch/arm64/boot/dts/renesas/r8a779g0.dtsi
···
  		     "renesas,rcar-gen4-hscif",
  		     "renesas,hscif";
  	reg = <0 0xe6540000 0 96>;
- 	interrupts = <GIC_SPI 245 IRQ_TYPE_LEVEL_HIGH>;
+ 	interrupts = <GIC_SPI 246 IRQ_TYPE_LEVEL_HIGH>;
  	clocks = <&cpg CPG_MOD 514>,
  		 <&cpg CPG_CORE R8A779G0_CLK_S0D3_PER>,
  		 <&scif_clk>;
-2
arch/arm64/kernel/ptrace.c
···
  if (!target->thread.sve_state) {
  	sve_alloc(target, false);
  	if (!target->thread.sve_state) {
- 		clear_thread_flag(TIF_SME);
  		ret = -ENOMEM;
  		goto out;
  	}
···
  sme_alloc(target);
  if (!target->thread.za_state) {
  	ret = -ENOMEM;
- 	clear_tsk_thread_flag(target, TIF_SME);
  	goto out;
  }

+3
arch/arm64/kernel/sleep.S
···
  SYM_CODE_START(cpu_resume)
  	bl	init_kernel_el
  	bl	finalise_el2
+ #if VA_BITS > 48
+ 	ldr_l	x0, vabits_actual
+ #endif
  	bl	__cpu_setup
  	/* enable the MMU early - so we can access sleep_save_stash by va */
  	adrp	x1, swapper_pg_dir
-1
arch/mips/Kconfig
···

  config ARCH_SPARSEMEM_ENABLE
  	bool
- 	select SPARSEMEM_STATIC if !SGI_IP27

  config NUMA
  	bool "NUMA Support"
-4
arch/mips/cavium-octeon/executive/cvmx-cmd-queue.c
···
  static cvmx_cmd_queue_result_t __cvmx_cmd_queue_init_state_ptr(void)
  {
  	char *alloc_name = "cvmx_cmd_queues";
- #if defined(CONFIG_CAVIUM_RESERVE32) && CONFIG_CAVIUM_RESERVE32
  	extern uint64_t octeon_reserve32_memory;
- #endif

  	if (likely(__cvmx_cmd_queue_state_ptr))
  		return CVMX_CMD_QUEUE_SUCCESS;

- #if defined(CONFIG_CAVIUM_RESERVE32) && CONFIG_CAVIUM_RESERVE32
  	if (octeon_reserve32_memory)
  		__cvmx_cmd_queue_state_ptr =
  			cvmx_bootmem_alloc_named_range(sizeof(*__cvmx_cmd_queue_state_ptr),
···
  						       (CONFIG_CAVIUM_RESERVE32 <<
  							20) - 1, 128, alloc_name);
  	else
- #endif
  		__cvmx_cmd_queue_state_ptr =
  			cvmx_bootmem_alloc_named(sizeof(*__cvmx_cmd_queue_state_ptr),
  						 128,
+10
arch/mips/cavium-octeon/octeon-irq.c
···
  static int octeon_irq_force_ciu_mapping(struct irq_domain *domain,
  					int irq, int line, int bit)
  {
+ 	struct device_node *of_node;
+ 	int ret;
+
+ 	of_node = irq_domain_get_of_node(domain);
+ 	if (!of_node)
+ 		return -EINVAL;
+ 	ret = irq_alloc_desc_at(irq, of_node_to_nid(of_node));
+ 	if (ret < 0)
+ 		return ret;
+
  	return irq_domain_associate(domain, irq, line << 6 | bit);
  }

+11 -16
arch/mips/cavium-octeon/setup.c
···

  #endif /* CONFIG_KEXEC */

- #ifdef CONFIG_CAVIUM_RESERVE32
  uint64_t octeon_reserve32_memory;
  EXPORT_SYMBOL(octeon_reserve32_memory);
- #endif

  #ifdef CONFIG_KEXEC
  /* crashkernel cmdline parameter is parsed _after_ memory setup
···
  	int i;
  	u64 t;
  	int argc;
- #ifdef CONFIG_CAVIUM_RESERVE32
- 	int64_t addr = -1;
- #endif
  	/*
  	 * The bootloader passes a pointer to the boot descriptor in
  	 * $a3, this is available as fw_arg3.
···
  		cvmx_write_csr(CVMX_LED_UDD_DATX(1), 0);
  		cvmx_write_csr(CVMX_LED_EN, 1);
  	}
- #ifdef CONFIG_CAVIUM_RESERVE32
+
  	/*
  	 * We need to temporarily allocate all memory in the reserve32
  	 * region. This makes sure the kernel doesn't allocate this
···
  	 * Allocate memory for RESERVED32 aligned on 2MB boundary. This
  	 * is in case we later use hugetlb entries with it.
  	 */
- 	addr = cvmx_bootmem_phy_named_block_alloc(CONFIG_CAVIUM_RESERVE32 << 20,
- 						  0, 0, 2 << 20,
- 						  "CAVIUM_RESERVE32", 0);
- 	if (addr < 0)
- 		pr_err("Failed to allocate CAVIUM_RESERVE32 memory area\n");
- 	else
- 		octeon_reserve32_memory = addr;
- #endif
+ 	if (CONFIG_CAVIUM_RESERVE32) {
+ 		int64_t addr =
+ 			cvmx_bootmem_phy_named_block_alloc(CONFIG_CAVIUM_RESERVE32 << 20,
+ 							   0, 0, 2 << 20,
+ 							   "CAVIUM_RESERVE32", 0);
+ 		if (addr < 0)
+ 			pr_err("Failed to allocate CAVIUM_RESERVE32 memory area\n");
+ 		else
+ 			octeon_reserve32_memory = addr;
+ 	}

  #ifdef CONFIG_CAVIUM_OCTEON_LOCK_L2
  	if (cvmx_read_csr(CVMX_L2D_FUS3) & (3ull << 34)) {
···
  	cvmx_bootmem_unlock();
  #endif /* CONFIG_CRASH_DUMP */

- #ifdef CONFIG_CAVIUM_RESERVE32
  	/*
  	 * Now that we've allocated the kernel memory it is safe to
  	 * free the reserved region. We free it here so that builtin
···
  	 */
  	if (octeon_reserve32_memory)
  		cvmx_bootmem_free_named("CAVIUM_RESERVE32");
- #endif /* CONFIG_CAVIUM_RESERVE32 */

  	if (total == 0)
  		panic("Unable to allocate memory from "
-1
arch/mips/loongson32/ls1c/board.c
···
 static int __init ls1c_platform_init(void)
 {
 	ls1x_serial_set_uartclk(&ls1x_uart_pdev);
-	ls1x_rtc_set_extclk(&ls1x_rtc_pdev);
 
 	return platform_add_devices(ls1c_platform_devices,
 				    ARRAY_SIZE(ls1c_platform_devices));
+1 -1
arch/parisc/kernel/irq.c
···
 	*irq_stack_in_use = 1;
 }
 
-#ifndef CONFIG_PREEMPT_RT
+#ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
 void do_softirq_own_stack(void)
 {
 	execute_on_irq_stack(__do_softirq, 0);
+2 -2
arch/powerpc/kernel/irq.c
···
 	}
 }
 
-#ifndef CONFIG_PREEMPT_RT
+#ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
 static __always_inline void call_do_softirq(const void *sp)
 {
 	/* Temporarily switch r1 to sp, call __do_softirq() then restore r1. */
···
 void *softirq_ctx[NR_CPUS] __read_mostly;
 void *hardirq_ctx[NR_CPUS] __read_mostly;
 
-#ifndef CONFIG_PREEMPT_RT
+#ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
 void do_softirq_own_stack(void)
 {
 	call_do_softirq(softirq_ctx[smp_processor_id()]);
+2 -1
arch/powerpc/platforms/pseries/plpks.c
···
 #include <linux/string.h>
 #include <linux/types.h>
 #include <asm/hvcall.h>
+#include <asm/machdep.h>
 
 #include "plpks.h"
 
···
 
 	return rc;
 }
-arch_initcall(pseries_plpks_init);
+machine_arch_initcall(pseries, pseries_plpks_init);
+1 -1
arch/riscv/boot/dts/microchip/mpfs.dtsi
···
 		ranges;
 
 		cctrllr: cache-controller@2010000 {
-			compatible = "sifive,fu540-c000-ccache", "cache";
+			compatible = "microchip,mpfs-ccache", "sifive,fu540-c000-ccache", "cache";
 			reg = <0x0 0x2010000 0x0 0x1000>;
 			cache-block-size = <64>;
 			cache-level = <2>;
+1 -1
arch/s390/include/asm/softirq_stack.h
···
 #include <asm/lowcore.h>
 #include <asm/stacktrace.h>
 
-#ifndef CONFIG_PREEMPT_RT
+#ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
 static inline void do_softirq_own_stack(void)
 {
 	call_on_stack(0, S390_lowcore.async_stack, void, __do_softirq);
+1 -1
arch/s390/kernel/nmi.c
···
  * structure. The structure is required for machine check happening
  * early in the boot process.
  */
-static struct mcesa boot_mcesa __initdata __aligned(MCESA_MAX_SIZE);
+static struct mcesa boot_mcesa __aligned(MCESA_MAX_SIZE);
 
 void __init nmi_alloc_mcesa_early(u64 *mcesad)
 {
+2 -1
arch/s390/kernel/setup.c
···
 	put_abs_lowcore(restart_data, lc->restart_data);
 	put_abs_lowcore(restart_source, lc->restart_source);
 	put_abs_lowcore(restart_psw, lc->restart_psw);
+	put_abs_lowcore(mcesad, lc->mcesad);
 
 	mcck_stack = (unsigned long)memblock_alloc(THREAD_SIZE, THREAD_SIZE);
 	if (!mcck_stack)
···
 	S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT;
 	S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT;
 	S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT;
-	__ctl_store(S390_lowcore.cregs_save_area, 0, 15);
 	__ctl_set_bit(0, 28);
+	__ctl_store(S390_lowcore.cregs_save_area, 0, 15);
 	put_abs_lowcore(restart_flags, RESTART_FLAG_CTLREGS);
 	put_abs_lowcore(program_new_psw, lc->program_new_psw);
 	for (cr = 0; cr < ARRAY_SIZE(lc->cregs_save_area); cr++)
+1 -1
arch/sh/kernel/irq.c
···
 	hardirq_ctx[cpu] = NULL;
 }
 
-#ifndef CONFIG_PREEMPT_RT
+#ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
 void do_softirq_own_stack(void)
 {
 	struct thread_info *curctx;
+1 -1
arch/sparc/kernel/irq_64.c
···
 	set_irq_regs(old_regs);
 }
 
-#ifndef CONFIG_PREEMPT_RT
+#ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
 void do_softirq_own_stack(void)
 {
 	void *orig_sp, *sp = softirq_stack[smp_processor_id()];
+1 -1
arch/x86/include/asm/irq_stack.h
···
 			      IRQ_CONSTRAINTS, regs, vector);		\
 }
 
-#ifndef CONFIG_PREEMPT_RT
+#ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
 /*
  * Macro to invoke __do_softirq on the irq stack. This is only called from
  * task context when bottom halves are about to be reenabled and soft
+1 -1
arch/x86/kernel/irq_32.c
···
 	return 0;
 }
 
-#ifndef CONFIG_PREEMPT_RT
+#ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
 void do_softirq_own_stack(void)
 {
 	struct irq_stack *irqstk;
+2
block/blk-mq-debugfs.c
···
 	RQF_NAME(SPECIAL_PAYLOAD),
 	RQF_NAME(ZONE_WRITE_LOCKED),
 	RQF_NAME(MQ_POLL_SLEPT),
+	RQF_NAME(TIMED_OUT),
 	RQF_NAME(ELV),
+	RQF_NAME(RESV),
 };
 #undef RQF_NAME
 
+3
block/partitions/core.c
···
 	if (disk->flags & GENHD_FL_NO_PART)
 		return 0;
 
+	if (test_bit(GD_SUPPRESS_PART_SCAN, &disk->state))
+		return 0;
+
 	state = check_partition(disk);
 	if (!state)
 		return 0;
+7 -1
drivers/amba/bus.c
···
 	struct amba_device *pcdev = to_amba_device(dev);
 	struct amba_driver *pcdrv = to_amba_driver(drv);
 
+	mutex_lock(&pcdev->periphid_lock);
 	if (!pcdev->periphid) {
 		int ret = amba_read_periphid(pcdev);
 
···
 		 * permanent failure in reading pid and cid, simply map it to
 		 * -EPROBE_DEFER.
 		 */
-		if (ret)
+		if (ret) {
+			mutex_unlock(&pcdev->periphid_lock);
 			return -EPROBE_DEFER;
+		}
 		dev_set_uevent_suppress(dev, false);
 		kobject_uevent(&dev->kobj, KOBJ_ADD);
 	}
+	mutex_unlock(&pcdev->periphid_lock);
 
 	/* When driver_override is set, only bind to the matching driver */
 	if (pcdev->driver_override)
···
 
 	if (d->res.parent)
 		release_resource(&d->res);
+	mutex_destroy(&d->periphid_lock);
 	kfree(d);
 }
 
···
 	dev->dev.dma_mask = &dev->dev.coherent_dma_mask;
 	dev->dev.dma_parms = &dev->dma_parms;
 	dev->res.name = dev_name(&dev->dev);
+	mutex_init(&dev->periphid_lock);
 }
 
 /**
+1 -1
drivers/base/arch_topology.c
···
 	 */
 	if (cpumask_subset(cpu_coregroup_mask(cpu),
			    &cpu_topology[cpu].cluster_sibling))
-		return get_cpu_mask(cpu);
+		return topology_sibling_cpumask(cpu);
 
 	return &cpu_topology[cpu].cluster_sibling;
 }
+6
drivers/base/driver.c
···
 	if (len >= (PAGE_SIZE - 1))
 		return -EINVAL;
 
+	/*
+	 * Compute the real length of the string in case userspace sends us a
+	 * bunch of \0 characters like python likes to do.
+	 */
+	len = strlen(s);
+
 	if (!len) {
 		/* Empty string passed - clear override */
 		device_lock(dev);
+8
drivers/base/regmap/regmap-spi.c
···
 				      const struct regmap_config *config)
 {
 	size_t max_size = spi_max_transfer_size(spi);
+	size_t max_msg_size, reg_reserve_size;
 	struct regmap_bus *bus;
 
 	if (max_size != SIZE_MAX) {
···
 		if (!bus)
 			return ERR_PTR(-ENOMEM);
 
+		max_msg_size = spi_max_message_size(spi);
+		reg_reserve_size = config->reg_bits / BITS_PER_BYTE
+				 + config->pad_bits / BITS_PER_BYTE;
+		if (max_size + reg_reserve_size > max_msg_size)
+			max_size -= reg_reserve_size;
+
 		bus->free_on_exit = true;
 		bus->max_raw_read = max_size;
 		bus->max_raw_write = max_size;
+
 		return bus;
 	}
 
+7 -24
drivers/firmware/efi/capsule-loader.c
···
 }
 
 /**
- * efi_capsule_flush - called by file close or file flush
- * @file: file pointer
- * @id: not used
- *
- * If a capsule is being partially uploaded then calling this function
- * will be treated as upload termination and will free those completed
- * buffer pages and -ECANCELED will be returned.
- **/
-static int efi_capsule_flush(struct file *file, fl_owner_t id)
-{
-	int ret = 0;
-	struct capsule_info *cap_info = file->private_data;
-
-	if (cap_info->index > 0) {
-		pr_err("capsule upload not complete\n");
-		efi_free_all_buff_pages(cap_info);
-		ret = -ECANCELED;
-	}
-
-	return ret;
-}
-
-/**
  * efi_capsule_release - called by file close
  * @inode: not used
  * @file: file pointer
···
 static int efi_capsule_release(struct inode *inode, struct file *file)
 {
 	struct capsule_info *cap_info = file->private_data;
+
+	if (cap_info->index > 0 &&
+	    (cap_info->header.headersize == 0 ||
+	     cap_info->count < cap_info->total_size)) {
+		pr_err("capsule upload not complete\n");
+		efi_free_all_buff_pages(cap_info);
+	}
 
 	kfree(cap_info->pages);
 	kfree(cap_info->phys);
···
 	.owner = THIS_MODULE,
 	.open = efi_capsule_open,
 	.write = efi_capsule_write,
-	.flush = efi_capsule_flush,
 	.release = efi_capsule_release,
 	.llseek = no_llseek,
 };
+7
drivers/firmware/efi/libstub/Makefile
···
 				   $(call cc-option,-fno-addrsig) \
 				   -D__DISABLE_EXPORTS
 
+#
+# struct randomization only makes sense for Linux internal types, which the EFI
+# stub code never touches, so let's turn off struct randomization for the stub
+# altogether
+#
+KBUILD_CFLAGS := $(filter-out $(RANDSTRUCT_CFLAGS), $(KBUILD_CFLAGS))
+
 # remove SCS flags from all objects in this directory
 KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
 # disable LTO
-1
drivers/firmware/efi/libstub/x86-stub.c
···
 	unsigned long end, next;
 	unsigned long rounded_start, rounded_end;
 	unsigned long unprotect_start, unprotect_size;
-	int has_system_memory = 0;
 
 	if (efi_dxe_table == NULL)
 		return;
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
···
 	add_kgd_mem_to_kfd_bo_list(*mem, avm->process_info, user_addr);
 
 	if (user_addr) {
-		pr_debug("creating userptr BO for user_addr = %llu\n", user_addr);
+		pr_debug("creating userptr BO for user_addr = %llx\n", user_addr);
 		ret = init_user_pages(*mem, user_addr, criu_resume);
 		if (ret)
 			goto allocate_init_user_pages_failed;
+5 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
···
 		release_firmware(psp->ta_fw);
 		psp->ta_fw = NULL;
 	}
-	if (adev->psp.cap_fw) {
+	if (psp->cap_fw) {
 		release_firmware(psp->cap_fw);
 		psp->cap_fw = NULL;
 	}
-
+	if (psp->toc_fw) {
+		release_firmware(psp->toc_fw);
+		psp->toc_fw = NULL;
+	}
 	if (adev->ip_versions[MP0_HWIP][0] == IP_VERSION(11, 0, 0) ||
 	    adev->ip_versions[MP0_HWIP][0] == IP_VERSION(11, 0, 7))
 		psp_sysfs_fini(adev);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
···
 	struct rlc_firmware_header_v2_1 rlc_v2_1;
 	struct rlc_firmware_header_v2_2 rlc_v2_2;
 	struct rlc_firmware_header_v2_3 rlc_v2_3;
+	struct rlc_firmware_header_v2_4 rlc_v2_4;
 	struct sdma_firmware_header_v1_0 sdma;
 	struct sdma_firmware_header_v1_1 sdma_v1_1;
 	struct sdma_firmware_header_v2_0 sdma_v2_0;
-6
drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
···
 		doorbell_range = REG_SET_FIELD(doorbell_range,
					        GDC0_BIF_CSDMA_DOORBELL_RANGE,
					        SIZE, doorbell_size);
-		doorbell_range = REG_SET_FIELD(doorbell_range,
-					       GDC0_BIF_SDMA0_DOORBELL_RANGE,
-					       OFFSET, doorbell_index);
-		doorbell_range = REG_SET_FIELD(doorbell_range,
-					       GDC0_BIF_SDMA0_DOORBELL_RANGE,
-					       SIZE, doorbell_size);
 	} else {
 		doorbell_range = REG_SET_FIELD(doorbell_range,
					        GDC0_BIF_SDMA0_DOORBELL_RANGE,
+1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
···
				    &crc_win_y_end_fops);
 	debugfs_create_file_unsafe("crc_win_update", 0644, dir, crtc,
				    &crc_win_update_fops);
+	dput(dir);
 #endif
 	debugfs_create_file("amdgpu_current_bpc", 0644, crtc->debugfs_entry,
			     crtc, &amdgpu_current_bpc_fops);
+1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
···
 	MSG_MAP(DisallowGfxOff,		PPSMC_MSG_DisallowGfxOff,		0),
 	MSG_MAP(Mode1Reset,		PPSMC_MSG_Mode1Reset,			0),
 	MSG_MAP(PrepareMp1ForUnload,	PPSMC_MSG_PrepareMp1ForUnload,		0),
+	MSG_MAP(SetMGpuFanBoostLimitRpm, PPSMC_MSG_SetMGpuFanBoostLimitRpm,	0),
 };
 
 static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = {
+2 -2
drivers/gpu/drm/drm_debugfs.c
···
 	if (connector->status != connector_status_connected)
 		return -ENODEV;
 
-	seq_printf(m, "Min: %u\n", (u8)connector->display_info.monitor_range.min_vfreq);
-	seq_printf(m, "Max: %u\n", (u8)connector->display_info.monitor_range.max_vfreq);
+	seq_printf(m, "Min: %u\n", connector->display_info.monitor_range.min_vfreq);
+	seq_printf(m, "Max: %u\n", connector->display_info.monitor_range.max_vfreq);
 
 	return 0;
 }
+18 -6
drivers/gpu/drm/drm_edid.c
···
 }
 
 static
-void get_monitor_range(const struct detailed_timing *timing,
-		       void *info_monitor_range)
+void get_monitor_range(const struct detailed_timing *timing, void *c)
 {
-	struct drm_monitor_range_info *monitor_range = info_monitor_range;
+	struct detailed_mode_closure *closure = c;
+	struct drm_display_info *info = &closure->connector->display_info;
+	struct drm_monitor_range_info *monitor_range = &info->monitor_range;
 	const struct detailed_non_pixel *data = &timing->data.other_data;
 	const struct detailed_data_monitor_range *range = &data->data.range;
+	const struct edid *edid = closure->drm_edid->edid;
 
 	if (!is_display_descriptor(timing, EDID_DETAIL_MONITOR_RANGE))
 		return;
···
 
 	monitor_range->min_vfreq = range->min_vfreq;
 	monitor_range->max_vfreq = range->max_vfreq;
+
+	if (edid->revision >= 4) {
+		if (data->pad2 & DRM_EDID_RANGE_OFFSET_MIN_VFREQ)
+			monitor_range->min_vfreq += 255;
+		if (data->pad2 & DRM_EDID_RANGE_OFFSET_MAX_VFREQ)
+			monitor_range->max_vfreq += 255;
+	}
 }
 
 static void drm_get_monitor_range(struct drm_connector *connector,
				   const struct drm_edid *drm_edid)
 {
-	struct drm_display_info *info = &connector->display_info;
+	const struct drm_display_info *info = &connector->display_info;
+	struct detailed_mode_closure closure = {
+		.connector = connector,
+		.drm_edid = drm_edid,
+	};
 
 	if (!version_greater(drm_edid, 1, 1))
 		return;
 
-	drm_for_each_detailed_block(drm_edid, get_monitor_range,
-				    &info->monitor_range);
+	drm_for_each_detailed_block(drm_edid, get_monitor_range, &closure);
 
 	DRM_DEBUG_KMS("Supported Monitor Refresh rate range is %d Hz - %d Hz\n",
		       info->monitor_range.min_vfreq,
+7
drivers/gpu/drm/i915/display/intel_bios.c
···
 
 		block_size = get_blocksize(block);
 
+		/*
+		 * Version number and new block size are considered
+		 * part of the header for MIPI sequenece block v3+.
+		 */
+		if (section_id == BDB_MIPI_SEQUENCE && *(const u8 *)block >= 3)
+			block_size += 5;
+
 		entry = kzalloc(struct_size(entry, data, max(min_size, block_size) + 3),
				 GFP_KERNEL);
 		if (!entry) {
+3
drivers/gpu/drm/i915/gem/i915_gem_object.c
···
 	bool lmem_placement = false;
 	int i;
 
+	if (!HAS_FLAT_CCS(to_i915(obj->base.dev)))
+		return false;
+
 	for (i = 0; i < obj->mm.n_placements; i++) {
 		/* Compression is not allowed for the objects with smem placement */
 		if (obj->mm.placements[i]->type == INTEL_MEMORY_SYSTEM)
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_ttm.c
···
 		i915_tt->is_shmem = true;
 	}
 
-	if (HAS_FLAT_CCS(i915) && i915_gem_object_needs_ccs_pages(obj))
+	if (i915_gem_object_needs_ccs_pages(obj))
 		ccs_pages = DIV_ROUND_UP(DIV_ROUND_UP(bo->base.size,
						       NUM_BYTES_PER_CCS_BYTE),
					  PAGE_SIZE);
+9 -10
drivers/gpu/drm/i915/gt/intel_llc.c
···
 #include "intel_llc.h"
 #include "intel_mchbar_regs.h"
 #include "intel_pcode.h"
+#include "intel_rps.h"
 
 struct ia_constants {
 	unsigned int min_gpu_freq;
···
 	if (!HAS_LLC(i915) || IS_DGFX(i915))
 		return false;
 
-	if (rps->max_freq <= rps->min_freq)
-		return false;
-
 	consts->max_ia_freq = cpu_max_MHz();
 
 	consts->min_ring_freq =
···
 	/* convert DDR frequency from units of 266.6MHz to bandwidth */
 	consts->min_ring_freq = mult_frac(consts->min_ring_freq, 8, 3);
 
-	consts->min_gpu_freq = rps->min_freq;
-	consts->max_gpu_freq = rps->max_freq;
-	if (GRAPHICS_VER(i915) >= 9) {
-		/* Convert GT frequency to 50 HZ units */
-		consts->min_gpu_freq /= GEN9_FREQ_SCALER;
-		consts->max_gpu_freq /= GEN9_FREQ_SCALER;
-	}
+	consts->min_gpu_freq = intel_rps_get_min_raw_freq(rps);
+	consts->max_gpu_freq = intel_rps_get_max_raw_freq(rps);
 
 	return true;
 }
···
 	if (!get_ia_constants(llc, &consts))
 		return;
 
+	/*
+	 * Although this is unlikely on any platform during initialization,
+	 * let's ensure we don't get accidentally into infinite loop
+	 */
+	if (consts.max_gpu_freq <= consts.min_gpu_freq)
+		return;
 	/*
 	 * For each potential GPU frequency, load a ring frequency we'd like
 	 * to use for memory access. We do this by specifying the IA frequency
+50
drivers/gpu/drm/i915/gt/intel_rps.c
···
 	return intel_gpu_freq(rps, rps->max_freq_softlimit);
 }
 
+/**
+ * intel_rps_get_max_raw_freq - returns the max frequency in some raw format.
+ * @rps: the intel_rps structure
+ *
+ * Returns the max frequency in a raw format. In newer platforms raw is in
+ * units of 50 MHz.
+ */
+u32 intel_rps_get_max_raw_freq(struct intel_rps *rps)
+{
+	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
+	u32 freq;
+
+	if (rps_uses_slpc(rps)) {
+		return DIV_ROUND_CLOSEST(slpc->rp0_freq,
+					 GT_FREQUENCY_MULTIPLIER);
+	} else {
+		freq = rps->max_freq;
+		if (GRAPHICS_VER(rps_to_i915(rps)) >= 9) {
+			/* Convert GT frequency to 50 MHz units */
+			freq /= GEN9_FREQ_SCALER;
+		}
+		return freq;
+	}
+}
+
 u32 intel_rps_get_rp0_frequency(struct intel_rps *rps)
 {
 	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
···
 		return slpc->min_freq_softlimit;
 	else
 		return intel_gpu_freq(rps, rps->min_freq_softlimit);
+}
+
+/**
+ * intel_rps_get_min_raw_freq - returns the min frequency in some raw format.
+ * @rps: the intel_rps structure
+ *
+ * Returns the min frequency in a raw format. In newer platforms raw is in
+ * units of 50 MHz.
+ */
+u32 intel_rps_get_min_raw_freq(struct intel_rps *rps)
+{
+	struct intel_guc_slpc *slpc = rps_to_slpc(rps);
+	u32 freq;
+
+	if (rps_uses_slpc(rps)) {
+		return DIV_ROUND_CLOSEST(slpc->min_freq,
+					 GT_FREQUENCY_MULTIPLIER);
+	} else {
+		freq = rps->min_freq;
+		if (GRAPHICS_VER(rps_to_i915(rps)) >= 9) {
+			/* Convert GT frequency to 50 MHz units */
+			freq /= GEN9_FREQ_SCALER;
+		}
+		return freq;
+	}
 }
 
 static int set_min_freq(struct intel_rps *rps, u32 val)
+2
drivers/gpu/drm/i915/gt/intel_rps.h
···
 u32 intel_rps_read_actual_frequency(struct intel_rps *rps);
 u32 intel_rps_get_requested_frequency(struct intel_rps *rps);
 u32 intel_rps_get_min_frequency(struct intel_rps *rps);
+u32 intel_rps_get_min_raw_freq(struct intel_rps *rps);
 int intel_rps_set_min_frequency(struct intel_rps *rps, u32 val);
 u32 intel_rps_get_max_frequency(struct intel_rps *rps);
+u32 intel_rps_get_max_raw_freq(struct intel_rps *rps);
 int intel_rps_set_max_frequency(struct intel_rps *rps, u32 val);
 u32 intel_rps_get_rp0_frequency(struct intel_rps *rps);
 u32 intel_rps_get_rp1_frequency(struct intel_rps *rps);
+11
drivers/gpu/drm/panfrost/panfrost_devfreq.c
···
 		return PTR_ERR(opp);
 
 	panfrost_devfreq_profile.initial_freq = cur_freq;
+
+	/*
+	 * Set the recommend OPP this will enable and configure the regulator
+	 * if any and will avoid a switch off by regulator_late_cleanup()
+	 */
+	ret = dev_pm_opp_set_opp(dev, opp);
+	if (ret) {
+		DRM_DEV_ERROR(dev, "Couldn't set recommended OPP\n");
+		return ret;
+	}
+
 	dev_pm_opp_put(opp);
 
 	/*
+8 -5
drivers/gpu/drm/ttm/ttm_bo_util.c
···
 	if (bo->type != ttm_bo_type_sg)
 		fbo->base.base.resv = &fbo->base.base._resv;
 
-	if (fbo->base.resource) {
-		ttm_resource_set_bo(fbo->base.resource, &fbo->base);
-		bo->resource = NULL;
-	}
-
 	dma_resv_init(&fbo->base.base._resv);
 	fbo->base.base.dev = NULL;
 	ret = dma_resv_trylock(&fbo->base.base._resv);
 	WARN_ON(!ret);
+
+	if (fbo->base.resource) {
+		ttm_resource_set_bo(fbo->base.resource, &fbo->base);
+		bo->resource = NULL;
+		ttm_bo_set_bulk_move(&fbo->base, NULL);
+	} else {
+		fbo->base.bulk_move = NULL;
+	}
 
 	ret = dma_resv_reserve_fences(&fbo->base.base._resv, 1);
 	if (ret) {
+222 -186
drivers/hwmon/asus-ec-sensors.c
···
 #define SENSOR_SET_WATER_BLOCK \
 	(SENSOR_TEMP_WATER_BLOCK_IN | SENSOR_TEMP_WATER_BLOCK_OUT)
 
-
 struct ec_board_info {
-	const char *board_names[MAX_IDENTICAL_BOARD_VARIATIONS];
 	unsigned long sensors;
 	/*
 	 * Defines which mutex to use for guarding access to the state and the
···
 	enum board_family family;
 };
 
-static const struct ec_board_info board_info[] = {
-	{
-		.board_names = {"PRIME X470-PRO"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
-			SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
-			SENSOR_FAN_CPU_OPT |
-			SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
-		.mutex_path = ACPI_GLOBAL_LOCK_PSEUDO_PATH,
-		.family = family_amd_400_series,
-	},
-	{
-		.board_names = {"PRIME X570-PRO"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
-			SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {"ProArt X570-CREATOR WIFI"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
-			SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CPU_OPT |
-			SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
-	},
-	{
-		.board_names = {"Pro WS X570-ACE"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
-			SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET |
-			SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {"ROG CROSSHAIR VIII DARK HERO"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
-			SENSOR_TEMP_T_SENSOR |
-			SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
-			SENSOR_FAN_CPU_OPT | SENSOR_FAN_WATER_FLOW |
-			SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {
-			"ROG CROSSHAIR VIII FORMULA",
-			"ROG CROSSHAIR VIII HERO",
-			"ROG CROSSHAIR VIII HERO (WI-FI)",
-		},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
-			SENSOR_TEMP_T_SENSOR |
-			SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
-			SENSOR_FAN_CPU_OPT | SENSOR_FAN_CHIPSET |
-			SENSOR_FAN_WATER_FLOW | SENSOR_CURR_CPU |
-			SENSOR_IN_CPU_CORE,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {
-			"ROG MAXIMUS XI HERO",
-			"ROG MAXIMUS XI HERO (WI-FI)",
-		},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
-			SENSOR_TEMP_T_SENSOR |
-			SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
-			SENSOR_FAN_CPU_OPT | SENSOR_FAN_WATER_FLOW,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_intel_300_series,
-	},
-	{
-		.board_names = {"ROG CROSSHAIR VIII IMPACT"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
-			SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
-			SENSOR_FAN_CHIPSET | SENSOR_CURR_CPU |
-			SENSOR_IN_CPU_CORE,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {"ROG STRIX B550-E GAMING"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
-			SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
-			SENSOR_FAN_CPU_OPT,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {"ROG STRIX B550-I GAMING"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
-			SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
-			SENSOR_FAN_VRM_HS | SENSOR_CURR_CPU |
-			SENSOR_IN_CPU_CORE,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {"ROG STRIX X570-E GAMING"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
-			SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
-			SENSOR_FAN_CHIPSET | SENSOR_CURR_CPU |
-			SENSOR_IN_CPU_CORE,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {"ROG STRIX X570-E GAMING WIFI II"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
-			SENSOR_TEMP_T_SENSOR | SENSOR_CURR_CPU |
-			SENSOR_IN_CPU_CORE,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {"ROG STRIX X570-F GAMING"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
-			SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {"ROG STRIX X570-I GAMING"},
-		.sensors = SENSOR_TEMP_CHIPSET | SENSOR_TEMP_VRM |
-			SENSOR_TEMP_T_SENSOR |
-			SENSOR_FAN_VRM_HS | SENSOR_FAN_CHIPSET |
-			SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
-		.family = family_amd_500_series,
-	},
-	{
-		.board_names = {"ROG STRIX Z690-A GAMING WIFI D4"},
-		.sensors = SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_RMTW_ASMX,
-		.family = family_intel_600_series,
-	},
-	{
-		.board_names = {"ROG ZENITH II EXTREME"},
-		.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_T_SENSOR |
-			SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
-			SENSOR_FAN_CPU_OPT | SENSOR_FAN_CHIPSET | SENSOR_FAN_VRM_HS |
-			SENSOR_FAN_WATER_FLOW | SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE |
-			SENSOR_SET_WATER_BLOCK |
-			SENSOR_TEMP_T_SENSOR_2 | SENSOR_TEMP_SENSOR_EXTRA_1 |
-			SENSOR_TEMP_SENSOR_EXTRA_2 | SENSOR_TEMP_SENSOR_EXTRA_3,
-		.mutex_path = ASUS_HW_ACCESS_MUTEX_SB_PCI0_SBRG_SIO1_MUT0,
-		.family = family_amd_500_series,
-	},
-	{}
+static const struct ec_board_info board_info_prime_x470_pro = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+		SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
+		SENSOR_FAN_CPU_OPT |
+		SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
+	.mutex_path = ACPI_GLOBAL_LOCK_PSEUDO_PATH,
+	.family = family_amd_400_series,
+};
+
+static const struct ec_board_info board_info_prime_x570_pro = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
+		SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_pro_art_x570_creator_wifi = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
+		SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CPU_OPT |
+		SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_pro_ws_x570_ace = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
+		SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET |
+		SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_crosshair_viii_dark_hero = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+		SENSOR_TEMP_T_SENSOR |
+		SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
+		SENSOR_FAN_CPU_OPT | SENSOR_FAN_WATER_FLOW |
+		SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_crosshair_viii_hero = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+		SENSOR_TEMP_T_SENSOR |
+		SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
+		SENSOR_FAN_CPU_OPT | SENSOR_FAN_CHIPSET |
+		SENSOR_FAN_WATER_FLOW | SENSOR_CURR_CPU |
+		SENSOR_IN_CPU_CORE,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_maximus_xi_hero = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+		SENSOR_TEMP_T_SENSOR |
+		SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
+		SENSOR_FAN_CPU_OPT | SENSOR_FAN_WATER_FLOW,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_intel_300_series,
+};
+
+static const struct ec_board_info board_info_crosshair_viii_impact = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+		SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
+		SENSOR_FAN_CHIPSET | SENSOR_CURR_CPU |
+		SENSOR_IN_CPU_CORE,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_strix_b550_e_gaming = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+		SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
+		SENSOR_FAN_CPU_OPT,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_strix_b550_i_gaming = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+		SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
+		SENSOR_FAN_VRM_HS | SENSOR_CURR_CPU |
+		SENSOR_IN_CPU_CORE,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_strix_x570_e_gaming = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+		SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
+		SENSOR_FAN_CHIPSET | SENSOR_CURR_CPU |
+		SENSOR_IN_CPU_CORE,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_strix_x570_e_gaming_wifi_ii = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+		SENSOR_TEMP_T_SENSOR | SENSOR_CURR_CPU |
+		SENSOR_IN_CPU_CORE,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_strix_x570_f_gaming = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+		SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CHIPSET,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_strix_x570_i_gaming = {
+	.sensors = SENSOR_TEMP_CHIPSET | SENSOR_TEMP_VRM |
+		SENSOR_TEMP_T_SENSOR |
+		SENSOR_FAN_VRM_HS | SENSOR_FAN_CHIPSET |
+		SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+	.family = family_amd_500_series,
+};
+
+static const struct ec_board_info board_info_strix_z690_a_gaming_wifi_d4 = {
+	.sensors = SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_RMTW_ASMX,
+	.family = family_intel_600_series,
+};
+
+static const struct ec_board_info board_info_zenith_ii_extreme = {
+	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_T_SENSOR |
+		SENSOR_TEMP_VRM | SENSOR_SET_TEMP_WATER |
+		SENSOR_FAN_CPU_OPT | SENSOR_FAN_CHIPSET | SENSOR_FAN_VRM_HS |
+		SENSOR_FAN_WATER_FLOW | SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE |
+		SENSOR_SET_WATER_BLOCK |
+		SENSOR_TEMP_T_SENSOR_2 | SENSOR_TEMP_SENSOR_EXTRA_1 |
+		SENSOR_TEMP_SENSOR_EXTRA_2 | SENSOR_TEMP_SENSOR_EXTRA_3,
+	.mutex_path = ASUS_HW_ACCESS_MUTEX_SB_PCI0_SBRG_SIO1_MUT0,
+	.family = family_amd_500_series,
+};
+
+#define DMI_EXACT_MATCH_ASUS_BOARD_NAME(name, board_info) \
+	{ \
+		.matches = { \
+			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, \
+					"ASUSTeK COMPUTER INC."), \
+			DMI_EXACT_MATCH(DMI_BOARD_NAME, name), \
+		}, \
+		.driver_data = (void *)board_info, \
+	}
+
+static const struct dmi_system_id dmi_table[] = {
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("PRIME X470-PRO",
+					&board_info_prime_x470_pro),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("PRIME X570-PRO",
+					&board_info_prime_x570_pro),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ProArt X570-CREATOR WIFI",
+					&board_info_pro_art_x570_creator_wifi),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("Pro WS X570-ACE",
+					&board_info_pro_ws_x570_ace),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VIII DARK HERO",
+					&board_info_crosshair_viii_dark_hero),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VIII FORMULA",
+					&board_info_crosshair_viii_hero),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VIII HERO",
+					&board_info_crosshair_viii_hero),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VIII HERO (WI-FI)",
+					&board_info_crosshair_viii_hero),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG MAXIMUS XI HERO",
+					&board_info_maximus_xi_hero),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG MAXIMUS XI HERO (WI-FI)",
+					&board_info_maximus_xi_hero),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VIII IMPACT",
+					&board_info_crosshair_viii_impact),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX B550-E GAMING",
+					&board_info_strix_b550_e_gaming),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX B550-I GAMING",
+					&board_info_strix_b550_i_gaming),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX X570-E GAMING",
+					&board_info_strix_x570_e_gaming),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX X570-E GAMING WIFI II",
+					&board_info_strix_x570_e_gaming_wifi_ii),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX X570-F GAMING",
+					&board_info_strix_x570_f_gaming),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX X570-I GAMING",
+					&board_info_strix_x570_i_gaming),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG STRIX Z690-A GAMING WIFI D4",
+					&board_info_strix_z690_a_gaming_wifi_d4),
+	DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG ZENITH II EXTREME",
+					&board_info_zenith_ii_extreme),
+	{},
 };
 
 struct ec_sensor {
···
 	return -ENOENT;
 }
 
-static int __init bank_compare(const void *a, const void *b)
+static int bank_compare(const void *a, const void *b)
 {
 	return *((const s8 *)a) - *((const s8 *)b);
 }
 
-static void __init setup_sensor_data(struct ec_sensors_data *ec)
+static void setup_sensor_data(struct ec_sensors_data *ec)
 {
 	struct ec_sensor *s = ec->sensors;
 	bool bank_found;
···
 	sort(ec->banks, ec->nr_banks, 1, bank_compare, NULL);
 }
 
-static void __init fill_ec_registers(struct ec_sensors_data *ec)
+static void fill_ec_registers(struct ec_sensors_data *ec)
 {
 	const struct ec_sensor_info *si;
 	unsigned int i, j, register_idx = 0;
···
 	}
 }
 
-static int __init setup_lock_data(struct device *dev)
+static int setup_lock_data(struct device *dev)
 {
 	const char *mutex_path;
 	int status;
···
 	return find_ec_sensor_index(state, type, channel) >= 0 ? S_IRUGO : 0;
 }
 
-static int __init
+static int
 asus_ec_hwmon_add_chan_info(struct hwmon_channel_info *asus_ec_hwmon_chan,
			     struct device *dev, int num,
			     enum hwmon_sensor_types type, u32 config)
···
 	.ops = &asus_ec_hwmon_ops,
 };
 
-static const struct ec_board_info * __init get_board_info(void)
+static const struct ec_board_info *get_board_info(void)
 {
-	const char *dmi_board_vendor = dmi_get_system_info(DMI_BOARD_VENDOR);
-	const char *dmi_board_name = dmi_get_system_info(DMI_BOARD_NAME);
-	const struct ec_board_info *board;
+	const struct dmi_system_id *dmi_entry;
 
-	if (!dmi_board_vendor || !dmi_board_name ||
-	    strcasecmp(dmi_board_vendor, "ASUSTeK COMPUTER INC."))
-		return NULL;
-
-	for (board = board_info; board->sensors; board++) {
-		if (match_string(board->board_names,
-				 MAX_IDENTICAL_BOARD_VARIATIONS,
-				 dmi_board_name) >= 0)
-			return board;
-	}
-
-	return NULL;
+	dmi_entry = dmi_first_match(dmi_table);
+	return dmi_entry ? 
dmi_entry->driver_data : NULL; 902 850 } 903 851 904 - static int __init asus_ec_probe(struct platform_device *pdev) 852 + static int asus_ec_probe(struct platform_device *pdev) 905 853 { 906 854 const struct hwmon_channel_info **ptr_asus_ec_ci; 907 855 int nr_count[hwmon_max] = { 0 }, nr_types = 0; ··· 998 970 return PTR_ERR_OR_ZERO(hwdev); 999 971 } 1000 972 1001 - 1002 - static const struct acpi_device_id acpi_ec_ids[] = { 1003 - /* Embedded Controller Device */ 1004 - { "PNP0C09", 0 }, 1005 - {} 1006 - }; 973 + MODULE_DEVICE_TABLE(dmi, dmi_table); 1007 974 1008 975 static struct platform_driver asus_ec_sensors_platform_driver = { 1009 976 .driver = { 1010 977 .name = "asus-ec-sensors", 1011 - .acpi_match_table = acpi_ec_ids, 1012 978 }, 979 + .probe = asus_ec_probe, 1013 980 }; 1014 981 1015 - MODULE_DEVICE_TABLE(acpi, acpi_ec_ids); 1016 - /* 1017 - * we use module_platform_driver_probe() rather than module_platform_driver() 1018 - * because the probe function (and its dependants) are marked with __init, which 1019 - * means we can't put it into the .probe member of the platform_driver struct 1020 - * above, and we can't mark the asus_ec_sensors_platform_driver object as __init 1021 - * because the object is referenced from the module exit code. 
1022 - */ 1023 - module_platform_driver_probe(asus_ec_sensors_platform_driver, asus_ec_probe); 982 + static struct platform_device *asus_ec_sensors_platform_device; 983 + 984 + static int __init asus_ec_init(void) 985 + { 986 + asus_ec_sensors_platform_device = 987 + platform_create_bundle(&asus_ec_sensors_platform_driver, 988 + asus_ec_probe, NULL, 0, NULL, 0); 989 + 990 + if (IS_ERR(asus_ec_sensors_platform_device)) 991 + return PTR_ERR(asus_ec_sensors_platform_device); 992 + 993 + return 0; 994 + } 995 + 996 + static void __exit asus_ec_exit(void) 997 + { 998 + platform_device_unregister(asus_ec_sensors_platform_device); 999 + platform_driver_unregister(&asus_ec_sensors_platform_driver); 1000 + } 1001 + 1002 + module_init(asus_ec_init); 1003 + module_exit(asus_ec_exit); 1024 1004 1025 1005 module_param_named(mutex_path, mutex_path_override, charp, 0); 1026 1006 MODULE_PARM_DESC(mutex_path,
+50 -22
drivers/hwmon/mr75203.c
···
 
 /* VM Individual Macro Register */
 #define VM_COM_REG_SIZE	0x200
-#define VM_SDIF_DONE(n)	(VM_COM_REG_SIZE + 0x34 + 0x200 * (n))
-#define VM_SDIF_DATA(n)	(VM_COM_REG_SIZE + 0x40 + 0x200 * (n))
+#define VM_SDIF_DONE(vm)	(VM_COM_REG_SIZE + 0x34 + 0x200 * (vm))
+#define VM_SDIF_DATA(vm, ch)	\
+	(VM_COM_REG_SIZE + 0x40 + 0x200 * (vm) + 0x4 * (ch))
 
 /* SDA Slave Register */
 #define IP_CTRL	0x00
···
 	u32 t_num;
 	u32 p_num;
 	u32 v_num;
+	u32 c_num;
 	u32 ip_freq;
 	u8 *vm_idx;
 };
···
 {
 	struct pvt_device *pvt = dev_get_drvdata(dev);
 	struct regmap *v_map = pvt->v_map;
+	u8 vm_idx, ch_idx;
 	u32 n, stat;
-	u8 vm_idx;
 	int ret;
 
-	if (channel >= pvt->v_num)
+	if (channel >= pvt->v_num * pvt->c_num)
 		return -EINVAL;
 
-	vm_idx = pvt->vm_idx[channel];
+	vm_idx = pvt->vm_idx[channel / pvt->c_num];
+	ch_idx = channel % pvt->c_num;
 
 	switch (attr) {
 	case hwmon_in_input:
···
 		if (ret)
 			return ret;
 
-		ret = regmap_read(v_map, VM_SDIF_DATA(vm_idx), &n);
+		ret = regmap_read(v_map, VM_SDIF_DATA(vm_idx, ch_idx), &n);
 		if(ret < 0)
 			return ret;
 
 		n &= SAMPLE_DATA_MSK;
-		/* Convert the N bitstream count into voltage */
-		*val = (PVT_N_CONST * n - PVT_R_CONST) >> PVT_CONV_BITS;
+		/*
+		 * Convert the N bitstream count into voltage.
+		 * To support negative voltage calculation for 64bit machines
+		 * n must be cast to long, since n and *val differ both in
+		 * signedness and in size.
+		 * Division is used instead of right shift, because for signed
+		 * numbers, the sign bit is used to fill the vacated bit
+		 * positions, and if the number is negative, 1 is used.
+		 * BIT(x) may not be used instead of (1 << x) because it's
+		 * unsigned.
+		 */
+		*val = (PVT_N_CONST * (long)n - PVT_R_CONST) / (1 << PVT_CONV_BITS);
 
 		return 0;
 	default:
···
 	if (ret)
 		return ret;
 
+	val = (BIT(pvt->c_num) - 1) | VM_CH_INIT |
+	      IP_POLL << SDIF_ADDR_SFT | SDIF_WRN_W | SDIF_PROG;
+	ret = regmap_write(v_map, SDIF_W, val);
+	if (ret < 0)
+		return ret;
+
+	ret = regmap_read_poll_timeout(v_map, SDIF_STAT,
+				       val, !(val & SDIF_BUSY),
+				       PVT_POLL_DELAY_US,
+				       PVT_POLL_TIMEOUT_US);
+	if (ret)
+		return ret;
+
 	val = CFG1_VOL_MEAS_MODE | CFG1_PARALLEL_OUT |
 	      CFG1_14_BIT | IP_CFG << SDIF_ADDR_SFT |
 	      SDIF_WRN_W | SDIF_PROG;
···
 
 static int mr75203_probe(struct platform_device *pdev)
 {
+	u32 ts_num, vm_num, pd_num, ch_num, val, index, i;
 	const struct hwmon_channel_info **pvt_info;
-	u32 ts_num, vm_num, pd_num, val, index, i;
 	struct device *dev = &pdev->dev;
 	u32 *temp_config, *in_config;
 	struct device *hwmon_dev;
···
 	ts_num = (val & TS_NUM_MSK) >> TS_NUM_SFT;
 	pd_num = (val & PD_NUM_MSK) >> PD_NUM_SFT;
 	vm_num = (val & VM_NUM_MSK) >> VM_NUM_SFT;
+	ch_num = (val & CH_NUM_MSK) >> CH_NUM_SFT;
 	pvt->t_num = ts_num;
 	pvt->p_num = pd_num;
 	pvt->v_num = vm_num;
+	pvt->c_num = ch_num;
 	val = 0;
 	if (ts_num)
 		val++;
···
 	}
 
 	if (vm_num) {
-		u32 num = vm_num;
+		u32 total_ch;
 
 		ret = pvt_get_regmap(pdev, "vm", pvt);
 		if (ret)
···
 		ret = device_property_read_u8_array(dev, "intel,vm-map",
 						    pvt->vm_idx, vm_num);
 		if (ret) {
-			num = 0;
+			/*
+			 * Incase intel,vm-map property is not defined, we
+			 * assume incremental channel numbers.
+			 */
+			for (i = 0; i < vm_num; i++)
+				pvt->vm_idx[i] = i;
 		} else {
 			for (i = 0; i < vm_num; i++)
 				if (pvt->vm_idx[i] >= vm_num ||
 				    pvt->vm_idx[i] == 0xff) {
-					num = i;
+					pvt->v_num = i;
+					vm_num = i;
 					break;
 				}
 		}
 
-		/*
-		 * Incase intel,vm-map property is not defined, we assume
-		 * incremental channel numbers.
-		 */
-		for (i = num; i < vm_num; i++)
-			pvt->vm_idx[i] = i;
-
-		in_config = devm_kcalloc(dev, num + 1,
+		total_ch = ch_num * vm_num;
+		in_config = devm_kcalloc(dev, total_ch + 1,
 					 sizeof(*in_config), GFP_KERNEL);
 		if (!in_config)
 			return -ENOMEM;
 
-		memset32(in_config, HWMON_I_INPUT, num);
-		in_config[num] = 0;
+		memset32(in_config, HWMON_I_INPUT, total_ch);
+		in_config[total_ch] = 0;
 		pvt_in.config = in_config;
 
 		pvt_info[index++] = &pvt_in;
+6 -4
drivers/hwmon/tps23861.c
···
 
 static int tps23861_port_resistance(struct tps23861_data *data, int port)
 {
-	u16 regval;
+	unsigned int raw_val;
+	__le16 regval;
 
 	regmap_bulk_read(data->regmap,
 			 PORT_1_RESISTANCE_LSB + PORT_N_RESISTANCE_LSB_OFFSET * (port - 1),
 			 &regval,
 			 2);
 
-	switch (FIELD_GET(PORT_RESISTANCE_RSN_MASK, regval)) {
+	raw_val = le16_to_cpu(regval);
+	switch (FIELD_GET(PORT_RESISTANCE_RSN_MASK, raw_val)) {
 	case PORT_RESISTANCE_RSN_OTHER:
-		return (FIELD_GET(PORT_RESISTANCE_MASK, regval) * RESISTANCE_LSB) / 10000;
+		return (FIELD_GET(PORT_RESISTANCE_MASK, raw_val) * RESISTANCE_LSB) / 10000;
 	case PORT_RESISTANCE_RSN_LOW:
-		return (FIELD_GET(PORT_RESISTANCE_MASK, regval) * RESISTANCE_LSB_LOW) / 10000;
+		return (FIELD_GET(PORT_RESISTANCE_MASK, raw_val) * RESISTANCE_LSB_LOW) / 10000;
 	case PORT_RESISTANCE_RSN_SHORT:
 	case PORT_RESISTANCE_RSN_OPEN:
 	default:
+2 -2
drivers/infiniband/core/cma.c
···
 	}
 
 	if (!validate_net_dev(*net_dev,
-			      (struct sockaddr *)&req->listen_addr_storage,
-			      (struct sockaddr *)&req->src_addr_storage)) {
+			      (struct sockaddr *)&req->src_addr_storage,
+			      (struct sockaddr *)&req->listen_addr_storage)) {
 		id_priv = ERR_PTR(-EHOSTUNREACH);
 		goto err;
 	}
+1 -1
drivers/infiniband/core/umem_odp.c
···
 	mutex_unlock(&umem_odp->umem_mutex);
 
 out_put_mm:
-	mmput(owning_mm);
+	mmput_async(owning_mm);
 out_put_task:
 	if (owning_process)
 		put_task_struct(owning_process);
-1
drivers/infiniband/hw/hns/hns_roce_device.h
···
 	u32 num_qps;
 	u32 num_pi_qps;
 	u32 reserved_qps;
-	int num_qpc_timer;
 	u32 num_srqs;
 	u32 max_wqes;
 	u32 max_srq_wrs;
+1 -2
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
···
 
 	caps->num_mtpts = HNS_ROCE_V2_MAX_MTPT_NUM;
 	caps->num_pds = HNS_ROCE_V2_MAX_PD_NUM;
-	caps->num_qpc_timer = HNS_ROCE_V2_MAX_QPC_TIMER_NUM;
+	caps->qpc_timer_bt_num = HNS_ROCE_V2_MAX_QPC_TIMER_BT_NUM;
 	caps->cqc_timer_bt_num = HNS_ROCE_V2_MAX_CQC_TIMER_BT_NUM;
 
 	caps->max_qp_init_rdma = HNS_ROCE_V2_MAX_QP_INIT_RDMA;
···
 	caps->max_rq_sg = le16_to_cpu(resp_a->max_rq_sg);
 	caps->max_rq_sg = roundup_pow_of_two(caps->max_rq_sg);
 	caps->max_extend_sg = le32_to_cpu(resp_a->max_extend_sg);
-	caps->num_qpc_timer = le16_to_cpu(resp_a->num_qpc_timer);
 	caps->max_srq_sges = le16_to_cpu(resp_a->max_srq_sges);
 	caps->max_srq_sges = roundup_pow_of_two(caps->max_srq_sges);
 	caps->num_aeq_vectors = resp_a->num_aeq_vectors;
+2 -2
drivers/infiniband/hw/hns/hns_roce_hw_v2.h
···
 #include <linux/bitops.h>
 
 #define HNS_ROCE_V2_MAX_QP_NUM			0x1000
-#define HNS_ROCE_V2_MAX_QPC_TIMER_NUM		0x200
 #define HNS_ROCE_V2_MAX_WQE_NUM			0x8000
 #define HNS_ROCE_V2_MAX_SRQ_WR			0x8000
 #define HNS_ROCE_V2_MAX_SRQ_SGE			64
 #define HNS_ROCE_V2_MAX_CQ_NUM			0x100000
+#define HNS_ROCE_V2_MAX_QPC_TIMER_BT_NUM	0x100
 #define HNS_ROCE_V2_MAX_CQC_TIMER_BT_NUM	0x100
 #define HNS_ROCE_V2_MAX_SRQ_NUM			0x100000
 #define HNS_ROCE_V2_MAX_CQE_NUM			0x400000
···
 
 #define HNS_ROCE_V2_QPC_TIMER_ENTRY_SZ		PAGE_SIZE
 #define HNS_ROCE_V2_CQC_TIMER_ENTRY_SZ		PAGE_SIZE
-#define HNS_ROCE_V2_PAGE_SIZE_SUPPORTED		0xFFFFF000
+#define HNS_ROCE_V2_PAGE_SIZE_SUPPORTED		0xFFFF000
 #define HNS_ROCE_V2_MAX_INNER_MTPT_NUM		2
 #define HNS_ROCE_INVALID_LKEY			0x0
 #define HNS_ROCE_INVALID_SGE_LENGTH		0x80000000
+1 -1
drivers/infiniband/hw/hns/hns_roce_main.c
···
 	ret = hns_roce_init_hem_table(hr_dev, &hr_dev->qpc_timer_table,
 				      HEM_TYPE_QPC_TIMER,
 				      hr_dev->caps.qpc_timer_entry_sz,
-				      hr_dev->caps.num_qpc_timer, 1);
+				      hr_dev->caps.qpc_timer_bt_num, 1);
 	if (ret) {
 		dev_err(dev,
 			"Failed to init QPC timer memory, aborting.\n");
+2 -5
drivers/infiniband/hw/hns/hns_roce_qp.c
···
 	hr_qp->rq.max_gs = roundup_pow_of_two(max(1U, cap->max_recv_sge) +
 					      hr_qp->rq.rsv_sge);
 
-	if (hr_dev->caps.max_rq_sg <= HNS_ROCE_SGE_IN_WQE)
-		hr_qp->rq.wqe_shift = ilog2(hr_dev->caps.max_rq_desc_sz);
-	else
-		hr_qp->rq.wqe_shift = ilog2(hr_dev->caps.max_rq_desc_sz *
-					    hr_qp->rq.max_gs);
+	hr_qp->rq.wqe_shift = ilog2(hr_dev->caps.max_rq_desc_sz *
+				    hr_qp->rq.max_gs);
 
 	hr_qp->rq.wqe_cnt = cnt;
 	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE &&
+5 -2
drivers/infiniband/hw/irdma/uk.c
···
 			       FIELD_PREP(IRDMAQPSQ_IMMDATA, info->imm_data));
 		i = 0;
 	} else {
-		qp->wqe_ops.iw_set_fragment(wqe, 0, op_info->sg_list,
+		qp->wqe_ops.iw_set_fragment(wqe, 0,
+					    frag_cnt ? op_info->sg_list : NULL,
 					    qp->swqe_polarity);
 		i = 1;
 	}
···
 	int ret_code;
 	bool move_cq_head = true;
 	u8 polarity;
+	u8 op_type;
 	bool ext_valid;
 	__le64 *ext_cqe;
 
···
 	do {
 		__le64 *sw_wqe;
 		u64 wqe_qword;
-		u8 op_type;
 		u32 tail;
 
 		tail = qp->sq_ring.tail;
···
 			break;
 		}
 	} while (1);
+	if (op_type == IRDMA_OP_TYPE_BIND_MW && info->minor_err == FLUSH_PROT_ERR)
+		info->minor_err = FLUSH_MW_BIND_ERR;
 	qp->sq_flush_seen = true;
 	if (!IRDMA_RING_MORE_WORK(qp->sq_ring))
 		qp->sq_flush_complete = true;
+9 -6
drivers/infiniband/hw/irdma/utils.c
···
 	cqp_error = cqp_request->compl_info.error;
 	if (cqp_error) {
 		err_code = -EIO;
-		if (cqp_request->compl_info.maj_err_code == 0xFFFF &&
-		    cqp_request->compl_info.min_err_code == 0x8029) {
-			if (!rf->reset) {
-				rf->reset = true;
-				rf->gen_ops.request_reset(rf);
+		if (cqp_request->compl_info.maj_err_code == 0xFFFF) {
+			if (cqp_request->compl_info.min_err_code == 0x8002)
+				err_code = -EBUSY;
+			else if (cqp_request->compl_info.min_err_code == 0x8029) {
+				if (!rf->reset) {
+					rf->reset = true;
+					rf->gen_ops.request_reset(rf);
+				}
 			}
 		}
 	}
···
 		spin_unlock_irqrestore(&iwqp->lock, flags2);
 		spin_unlock_irqrestore(&iwqp->iwscq->lock, flags1);
 		if (compl_generated)
-			irdma_comp_handler(iwqp->iwrcq);
+			irdma_comp_handler(iwqp->iwscq);
 	} else {
 		spin_unlock_irqrestore(&iwqp->iwscq->lock, flags1);
 		mod_delayed_work(iwqp->iwdev->cleanup_wq, &iwqp->dwork_flush,
+10 -3
drivers/infiniband/hw/irdma/verbs.c
···
 	props->max_send_sge = hw_attrs->uk_attrs.max_hw_wq_frags;
 	props->max_recv_sge = hw_attrs->uk_attrs.max_hw_wq_frags;
 	props->max_cq = rf->max_cq - rf->used_cqs;
-	props->max_cqe = rf->max_cqe;
+	props->max_cqe = rf->max_cqe - 1;
 	props->max_mr = rf->max_mr - rf->used_mrs;
 	props->max_mw = props->max_mr;
 	props->max_pd = rf->max_pd - rf->used_pds;
 	props->max_sge_rd = hw_attrs->uk_attrs.max_hw_read_sges;
 	props->max_qp_rd_atom = hw_attrs->max_hw_ird;
 	props->max_qp_init_rd_atom = hw_attrs->max_hw_ord;
-	if (rdma_protocol_roce(ibdev, 1))
+	if (rdma_protocol_roce(ibdev, 1)) {
+		props->device_cap_flags |= IB_DEVICE_RC_RNR_NAK_GEN;
 		props->max_pkeys = IRDMA_PKEY_TBL_SZ;
+	}
+
 	props->max_ah = rf->max_ah;
 	props->max_mcast_grp = rf->max_mcg;
 	props->max_mcast_qp_attach = IRDMA_MAX_MGS_PER_CTX;
···
 	struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc;
 	struct irdma_cqp_request *cqp_request;
 	struct cqp_cmds_info *cqp_info;
+	int status;
 
 	if (iwmr->type != IRDMA_MEMREG_TYPE_MEM) {
 		if (iwmr->region) {
···
 	cqp_info->post_sq = 1;
 	cqp_info->in.u.dealloc_stag.dev = &iwdev->rf->sc_dev;
 	cqp_info->in.u.dealloc_stag.scratch = (uintptr_t)cqp_request;
-	irdma_handle_cqp_op(iwdev->rf, cqp_request);
+	status = irdma_handle_cqp_op(iwdev->rf, cqp_request);
 	irdma_put_cqp_request(&iwdev->rf->cqp, cqp_request);
+	if (status)
+		return status;
+
 	irdma_free_stag(iwdev, iwmr->stag);
 done:
 	if (iwpbl->pbl_allocated)
+6
drivers/infiniband/hw/mlx5/mad.c
···
 		mdev = dev->mdev;
 		mdev_port_num = 1;
 	}
+	if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1) {
+		/* set local port to one for Function-Per-Port HCA. */
+		mdev = dev->mdev;
+		mdev_port_num = 1;
+	}
+
 	/* Declaring support of extended counters */
 	if (in_mad->mad_hdr.attr_id == IB_PMA_CLASS_PORT_INFO) {
 		struct ib_class_port_info cpi = {};
+1 -1
drivers/infiniband/hw/mlx5/main.c
···
 	dev->mdev = mdev;
 	dev->num_ports = num_ports;
 
-	if (ll == IB_LINK_LAYER_ETHERNET && !mlx5_is_roce_init_enabled(mdev))
+	if (ll == IB_LINK_LAYER_ETHERNET && !mlx5_get_roce_state(mdev))
 		profile = &raw_eth_profile;
 	else
 		profile = &pf_profile;
+1
drivers/infiniband/hw/mlx5/mlx5_ib.h
···
 };
 
 enum {
+	MLX5_UMR_STATE_UNINIT,
 	MLX5_UMR_STATE_ACTIVE,
 	MLX5_UMR_STATE_RECOVER,
 	MLX5_UMR_STATE_ERR,
+3
drivers/infiniband/hw/mlx5/umr.c
···
 
 	sema_init(&dev->umrc.sem, MAX_UMR_WR);
 	mutex_init(&dev->umrc.lock);
+	dev->umrc.state = MLX5_UMR_STATE_ACTIVE;
 
 	return 0;
 
···
 
 void mlx5r_umr_resource_cleanup(struct mlx5_ib_dev *dev)
 {
+	if (dev->umrc.state == MLX5_UMR_STATE_UNINIT)
+		return;
 	ib_destroy_qp(dev->umrc.qp);
 	ib_free_cq(dev->umrc.cq);
 	ib_dealloc_pd(dev->umrc.pd);
+14 -4
drivers/infiniband/sw/siw/siw_qp_tx.c
···
 	dma_addr_t paddr = siw_pbl_get_buffer(pbl, offset, NULL, idx);
 
 	if (paddr)
-		return virt_to_page(paddr);
+		return virt_to_page((void *)paddr);
 
 	return NULL;
 }
···
 				kunmap_local(kaddr);
 			}
 		} else {
-			u64 va = sge->laddr + sge_off;
+			/*
+			 * Cast to an uintptr_t to preserve all 64 bits
+			 * in sge->laddr.
+			 */
+			uintptr_t va = (uintptr_t)(sge->laddr + sge_off);
 
-			page_array[seg] = virt_to_page(va & PAGE_MASK);
+			/*
+			 * virt_to_page() takes a (void *) pointer
+			 * so cast to a (void *) meaning it will be 64
+			 * bits on a 64 bit platform and 32 bits on a
+			 * 32 bit platform.
+			 */
+			page_array[seg] = virt_to_page((void *)(va & PAGE_MASK));
 			if (do_crc)
 				crypto_shash_update(
 					c_tx->mpa_crc_hd,
-					(void *)va,
+					(void *)va,
 					plen);
 		}
 
+5 -4
drivers/infiniband/ulp/rtrs/rtrs-clt.c
···
 static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 				   struct rtrs_clt_io_req *req,
 				   struct rtrs_rbuf *rbuf, bool fr_en,
-				   u32 size, u32 imm, struct ib_send_wr *wr,
+				   u32 count, u32 size, u32 imm,
+				   struct ib_send_wr *wr,
 				   struct ib_send_wr *tail)
 {
 	struct rtrs_clt_path *clt_path = to_clt_path(con->c.path);
···
 		num_sge = 2;
 		ptail = tail;
 	} else {
-		for_each_sg(req->sglist, sg, req->sg_cnt, i) {
+		for_each_sg(req->sglist, sg, count, i) {
 			sge[i].addr = sg_dma_address(sg);
 			sge[i].length = sg_dma_len(sg);
 			sge[i].lkey = clt_path->s.dev->ib_pd->local_dma_lkey;
 		}
-		num_sge = 1 + req->sg_cnt;
+		num_sge = 1 + count;
 	}
 	sge[i].addr = req->iu->dma_addr;
 	sge[i].length = size;
···
 	 */
 	rtrs_clt_update_all_stats(req, WRITE);
 
-	ret = rtrs_post_rdma_write_sg(req->con, req, rbuf, fr_en,
+	ret = rtrs_post_rdma_write_sg(req->con, req, rbuf, fr_en, count,
 				      req->usr_len + sizeof(*msg),
 				      imm, wr, &inv_wr);
 	if (ret) {
+7 -7
drivers/infiniband/ulp/rtrs/rtrs-srv.c
···
 		struct sg_table *sgt = &srv_mr->sgt;
 		struct scatterlist *s;
 		struct ib_mr *mr;
-		int nr, chunks;
+		int nr, nr_sgt, chunks;
 
 		chunks = chunks_per_mr * mri;
 		if (!always_invalidate)
···
 			sg_set_page(s, srv->chunks[chunks + i],
 				    max_chunk_size, 0);
 
-		nr = ib_dma_map_sg(srv_path->s.dev->ib_dev, sgt->sgl,
+		nr_sgt = ib_dma_map_sg(srv_path->s.dev->ib_dev, sgt->sgl,
 				   sgt->nents, DMA_BIDIRECTIONAL);
-		if (nr < sgt->nents) {
-			err = nr < 0 ? nr : -EINVAL;
+		if (!nr_sgt) {
+			err = -EINVAL;
 			goto free_sg;
 		}
 		mr = ib_alloc_mr(srv_path->s.dev->ib_pd, IB_MR_TYPE_MEM_REG,
-				 sgt->nents);
+				 nr_sgt);
 		if (IS_ERR(mr)) {
 			err = PTR_ERR(mr);
 			goto unmap_sg;
 		}
-		nr = ib_map_mr_sg(mr, sgt->sgl, sgt->nents,
+		nr = ib_map_mr_sg(mr, sgt->sgl, nr_sgt,
 				  NULL, max_chunk_size);
 		if (nr < 0 || nr < sgt->nents) {
 			err = nr < 0 ? nr : -EINVAL;
···
 			}
 		}
 		/* Eventually dma addr for each chunk can be cached */
-		for_each_sg(sgt->sgl, s, sgt->orig_nents, i)
+		for_each_sg(sgt->sgl, s, nr_sgt, i)
 			srv_path->dma_addr[chunks + i] = sg_dma_address(s);
 
 		ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
+2 -1
drivers/infiniband/ulp/srp/ib_srp.c
···
 	if (scmnd) {
 		req = scsi_cmd_priv(scmnd);
 		scmnd = srp_claim_req(ch, req, NULL, scmnd);
-	} else {
+	}
+	if (!scmnd) {
 		shost_printk(KERN_ERR, target->scsi_host,
 			     "Null scmnd for RSP w/tag %#016llx received on ch %td / QP %#x\n",
 			     rsp->tag, ch - target->ch, ch->qp->qp_num);
+2 -1
drivers/iommu/amd/iommu.c
···
 	memset(cmd, 0, sizeof(*cmd));
 	cmd->data[0] = lower_32_bits(paddr) | CMD_COMPL_WAIT_STORE_MASK;
 	cmd->data[1] = upper_32_bits(paddr);
-	cmd->data[2] = data;
+	cmd->data[2] = lower_32_bits(data);
+	cmd->data[3] = upper_32_bits(data);
 	CMD_SET_TYPE(cmd, CMD_COMPL_WAIT);
 }
 
+2
drivers/iommu/amd/iommu_v2.c
···
 	if (dev_state->domain == NULL)
 		goto out_free_states;
 
+	/* See iommu_is_default_domain() */
+	dev_state->domain->type = IOMMU_DOMAIN_IDENTITY;
 	amd_iommu_domain_direct_map(dev_state->domain);
 
 	ret = amd_iommu_domain_enable_v2(dev_state->domain, pasids);
+7
drivers/iommu/intel/dmar.c
···
 	if (!dmar_in_use())
 		return 0;
 
+	/*
+	 * It's unlikely that any I/O board is hot added before the IOMMU
+	 * subsystem is initialized.
+	 */
+	if (IS_ENABLED(CONFIG_INTEL_IOMMU) && !intel_iommu_enabled)
+		return -EOPNOTSUPP;
+
 	if (dmar_detect_dsm(handle, DMAR_DSM_FUNC_DRHD)) {
 		tmp = handle;
 	} else {
+113 -128
drivers/iommu/intel/iommu.c
···
 	return re->hi & VTD_PAGE_MASK;
 }
 
-static inline void context_clear_pasid_enable(struct context_entry *context)
-{
-	context->lo &= ~(1ULL << 11);
-}
-
-static inline bool context_pasid_enabled(struct context_entry *context)
-{
-	return !!(context->lo & (1ULL << 11));
-}
-
-static inline void context_set_copied(struct context_entry *context)
-{
-	context->hi |= (1ull << 3);
-}
-
-static inline bool context_copied(struct context_entry *context)
-{
-	return !!(context->hi & (1ULL << 3));
-}
-
-static inline bool __context_present(struct context_entry *context)
-{
-	return (context->lo & 1);
-}
-
-bool context_present(struct context_entry *context)
-{
-	return context_pasid_enabled(context) ?
-	     __context_present(context) :
-	     __context_present(context) && !context_copied(context);
-}
-
 static inline void context_set_present(struct context_entry *context)
 {
 	context->lo |= 1;
···
 {
 	context->lo = 0;
 	context->hi = 0;
+}
+
+static inline bool context_copied(struct intel_iommu *iommu, u8 bus, u8 devfn)
+{
+	if (!iommu->copied_tables)
+		return false;
+
+	return test_bit(((long)bus << 8) | devfn, iommu->copied_tables);
+}
+
+static inline void
+set_context_copied(struct intel_iommu *iommu, u8 bus, u8 devfn)
+{
+	set_bit(((long)bus << 8) | devfn, iommu->copied_tables);
+}
+
+static inline void
+clear_context_copied(struct intel_iommu *iommu, u8 bus, u8 devfn)
+{
+	clear_bit(((long)bus << 8) | devfn, iommu->copied_tables);
 }
 
 /*
···
 	return !(addr_width < BITS_PER_LONG && pfn >> addr_width);
 }
 
+/*
+ * Calculate the Supported Adjusted Guest Address Widths of an IOMMU.
+ * Refer to 11.4.2 of the VT-d spec for the encoding of each bit of
+ * the returned SAGAW.
+ */
+static unsigned long __iommu_calculate_sagaw(struct intel_iommu *iommu)
+{
+	unsigned long fl_sagaw, sl_sagaw;
+
+	fl_sagaw = BIT(2) | (cap_fl1gp_support(iommu->cap) ? BIT(3) : 0);
+	sl_sagaw = cap_sagaw(iommu->cap);
+
+	/* Second level only. */
+	if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
+		return sl_sagaw;
+
+	/* First level only. */
+	if (!ecap_slts(iommu->ecap))
+		return fl_sagaw;
+
+	return fl_sagaw & sl_sagaw;
+}
+
 static int __iommu_calculate_agaw(struct intel_iommu *iommu, int max_gaw)
 {
 	unsigned long sagaw;
 	int agaw;
 
-	sagaw = cap_sagaw(iommu->cap);
-	for (agaw = width_to_agaw(max_gaw);
-	     agaw >= 0; agaw--) {
+	sagaw = __iommu_calculate_sagaw(iommu);
+	for (agaw = width_to_agaw(max_gaw); agaw >= 0; agaw--) {
 		if (test_bit(agaw, &sagaw))
 			break;
 	}
···
 {
 	struct device_domain_info *info;
 	int nid = NUMA_NO_NODE;
+	unsigned long flags;
 
-	spin_lock(&domain->lock);
+	spin_lock_irqsave(&domain->lock, flags);
 	list_for_each_entry(info, &domain->devices, link) {
 		/*
 		 * There could possibly be multiple device numa nodes as devices
···
 		if (nid != NUMA_NO_NODE)
 			break;
 	}
-	spin_unlock(&domain->lock);
+	spin_unlock_irqrestore(&domain->lock, flags);
 
 	return nid;
 }
···
 	struct root_entry *root = &iommu->root_entry[bus];
 	struct context_entry *context;
 	u64 *entry;
+
+	/*
+	 * Except that the caller requested to allocate a new entry,
+	 * returning a copied context entry makes no sense.
+	 */
+	if (!alloc && context_copied(iommu, bus, devfn))
+		return NULL;
 
 	entry = &root->lo;
 	if (sm_supported(iommu)) {
···
 }
 
 #ifdef CONFIG_DMAR_DEBUG
-static void pgtable_walk(struct intel_iommu *iommu, unsigned long pfn, u8 bus, u8 devfn)
+static void pgtable_walk(struct intel_iommu *iommu, unsigned long pfn,
+			 u8 bus, u8 devfn, struct dma_pte *parent, int level)
 {
-	struct device_domain_info *info;
-	struct dma_pte *parent, *pte;
-	struct dmar_domain *domain;
-	struct pci_dev *pdev;
-	int offset, level;
-
-	pdev = pci_get_domain_bus_and_slot(iommu->segment, bus, devfn);
-	if (!pdev)
-		return;
-
-	info = dev_iommu_priv_get(&pdev->dev);
-	if (!info || !info->domain) {
-		pr_info("device [%02x:%02x.%d] not probed\n",
-			bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-		return;
-	}
-
-	domain = info->domain;
-	level = agaw_to_level(domain->agaw);
-	parent = domain->pgd;
-	if (!parent) {
-		pr_info("no page table setup\n");
-		return;
-	}
+	struct dma_pte *pte;
+	int offset;
 
 	while (1) {
 		offset = pfn_level_offset(pfn, level);
···
 	struct pasid_entry *entries, *pte;
 	struct context_entry *ctx_entry;
 	struct root_entry *rt_entry;
+	int i, dir_index, index, level;
 	u8 devfn = source_id & 0xff;
 	u8 bus = source_id >> 8;
-	int i, dir_index, index;
+	struct dma_pte *pgtable;
 
 	pr_info("Dump %s table entries for IOVA 0x%llx\n", iommu->name, addr);
 
···
 		ctx_entry->hi, ctx_entry->lo);
 
 	/* legacy mode does not require PASID entries */
-	if (!sm_supported(iommu))
+	if (!sm_supported(iommu)) {
+		level = agaw_to_level(ctx_entry->hi & 7);
+		pgtable = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
 		goto pgtable_walk;
+	}
 
 	/* get the pointer to pasid directory entry */
 	dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
···
 	for (i = 0; i < ARRAY_SIZE(pte->val); i++)
 		pr_info("pasid table entry[%d]: 0x%016llx\n", i, pte->val[i]);
 
+	if (pasid_pte_get_pgtt(pte) == PASID_ENTRY_PGTT_FL_ONLY) {
+		level = pte->val[2] & BIT_ULL(2) ? 5 : 4;
+		pgtable = phys_to_virt(pte->val[2] & VTD_PAGE_MASK);
+	} else {
+		level = agaw_to_level((pte->val[0] >> 2) & 0x7);
+		pgtable = phys_to_virt(pte->val[0] & VTD_PAGE_MASK);
+	}
+
 pgtable_walk:
-	pgtable_walk(iommu, addr >> VTD_PAGE_SHIFT, bus, devfn);
+	pgtable_walk(iommu, addr >> VTD_PAGE_SHIFT, bus, devfn, pgtable, level);
 }
 #endif
···
 			  u8 bus, u8 devfn)
 {
 	struct device_domain_info *info;
+	unsigned long flags;
 
 	if (!iommu->qi)
 		return NULL;
 
-	spin_lock(&domain->lock);
+	spin_lock_irqsave(&domain->lock, flags);
 	list_for_each_entry(info, &domain->devices, link) {
 		if (info->iommu == iommu && info->bus == bus &&
 		    info->devfn == devfn) {
-			spin_unlock(&domain->lock);
+			spin_unlock_irqrestore(&domain->lock, flags);
 			return info->ats_supported ? info : NULL;
 		}
 	}
-	spin_unlock(&domain->lock);
+	spin_unlock_irqrestore(&domain->lock, flags);
 
 	return NULL;
 }
···
 {
 	struct device_domain_info *info;
 	bool has_iotlb_device = false;
+	unsigned long flags;
 
-	spin_lock(&domain->lock);
+	spin_lock_irqsave(&domain->lock, flags);
 	list_for_each_entry(info, &domain->devices, link) {
 		if (info->ats_enabled) {
 			has_iotlb_device = true;
···
 		}
 	}
 	domain->has_iotlb_device = has_iotlb_device;
-	spin_unlock(&domain->lock);
+	spin_unlock_irqrestore(&domain->lock, flags);
 }
 
 static void iommu_enable_dev_iotlb(struct device_domain_info *info)
···
 			  u64 addr, unsigned mask)
 {
 	struct device_domain_info *info;
+	unsigned long flags;
 
 	if (!domain->has_iotlb_device)
 		return;
 
-	spin_lock(&domain->lock);
+	spin_lock_irqsave(&domain->lock, flags);
 	list_for_each_entry(info, &domain->devices, link)
 		__iommu_flush_dev_iotlb(info, addr, mask);
-	spin_unlock(&domain->lock);
+	spin_unlock_irqrestore(&domain->lock, flags);
 }
 
 static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
···
 	if (iommu->domain_ids) {
 		bitmap_free(iommu->domain_ids);
 		iommu->domain_ids = NULL;
+	}
+
+	if (iommu->copied_tables) {
+		bitmap_free(iommu->copied_tables);
+		iommu->copied_tables = NULL;
 	}
 
 	/* free context mapping */
···
 		goto out_unlock;
 
 	ret = 0;
-	if (context_present(context))
+	if (context_present(context) && !context_copied(iommu, bus, devfn))
 		goto out_unlock;
 
 	/*
···
 	 * in-flight DMA will exist, and we don't need to worry anymore
 	 * hereafter.
 	 */
-	if (context_copied(context)) {
+	if (context_copied(iommu, bus, devfn)) {
 		u16 did_old = context_domain_id(context);
 
 		if (did_old < cap_ndoms(iommu->cap)) {
···
 			iommu->flush.flush_iotlb(iommu, did_old, 0, 0,
 						 DMA_TLB_DSI_FLUSH);
 		}
+
+		clear_context_copied(iommu, bus, devfn);
 	}
 
 	context_clear_entry(context);
···
 {
 	struct device_domain_info *info = dev_iommu_priv_get(dev);
 	struct intel_iommu *iommu;
+	unsigned long flags;
 	u8 bus, devfn;
 	int ret;
 
···
 	if (ret)
 		return ret;
 	info->domain = domain;
-	spin_lock(&domain->lock);
+	spin_lock_irqsave(&domain->lock, flags);
 	list_add(&info->link, &domain->devices);
-	spin_unlock(&domain->lock);
+	spin_unlock_irqrestore(&domain->lock, flags);
 
 	/* PASID table is mandatory for a PCI device in scalable mode. */
 	if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) {
···
 	/* Now copy the context entry */
 	memcpy(&ce, old_ce + idx, sizeof(ce));
 
-	if (!__context_present(&ce))
+	if (!context_present(&ce))
 		continue;
 
 	did = context_domain_id(&ce);
 	if (did >= 0 && did < cap_ndoms(iommu->cap))
 		set_bit(did, iommu->domain_ids);
 
-	/*
-	 * We need a marker for copied context entries. This
-	 * marker needs to work for the old format as well as
-	 * for extended context entries.
-	 *
-	 * Bit 67 of the context entry is used. In the old
-	 * format this bit is available to software, in the
-	 * extended format it is the PGE bit, but PGE is ignored
-	 * by HW if PASIDs are disabled (and thus still
-	 * available).
-	 *
-	 * So disable PASIDs first and then mark the entry
-	 * copied.
This means that we don't copy PASID 2727 - * translations from the old kernel, but this is fine as 2728 - * faults there are not fatal. 2729 - */ 2730 - context_clear_pasid_enable(&ce); 2731 - context_set_copied(&ce); 2732 - 2694 + set_context_copied(iommu, bus, devfn); 2733 2695 new_ce[idx] = ce; 2734 2696 } 2735 2697 ··· 2737 2735 bool new_ext, ext; 2738 2736 2739 2737 rtaddr_reg = dmar_readq(iommu->reg + DMAR_RTADDR_REG); 2740 - ext = !!(rtaddr_reg & DMA_RTADDR_RTT); 2741 - new_ext = !!ecap_ecs(iommu->ecap); 2738 + ext = !!(rtaddr_reg & DMA_RTADDR_SMT); 2739 + new_ext = !!sm_supported(iommu); 2742 2740 2743 2741 /* 2744 2742 * The RTT bit can only be changed when translation is disabled, ··· 2748 2746 */ 2749 2747 if (new_ext != ext) 2750 2748 return -EINVAL; 2749 + 2750 + iommu->copied_tables = bitmap_zalloc(BIT_ULL(16), GFP_KERNEL); 2751 + if (!iommu->copied_tables) 2752 + return -ENOMEM; 2751 2753 2752 2754 old_rt_phys = rtaddr_reg & VTD_PAGE_MASK; 2753 2755 if (!old_rt_phys) ··· 3019 3013 3020 3014 #ifdef CONFIG_INTEL_IOMMU_SVM 3021 3015 if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) { 3022 - /* 3023 - * Call dmar_alloc_hwirq() with dmar_global_lock held, 3024 - * could cause possible lock race condition. 
3025 - */ 3026 - up_write(&dmar_global_lock); 3027 3016 ret = intel_svm_enable_prq(iommu); 3028 - down_write(&dmar_global_lock); 3029 3017 if (ret) 3030 3018 goto free_iommu; 3031 3019 } ··· 3932 3932 force_on = (!intel_iommu_tboot_noforce && tboot_force_iommu()) || 3933 3933 platform_optin_force_iommu(); 3934 3934 3935 - down_write(&dmar_global_lock); 3936 3935 if (dmar_table_init()) { 3937 3936 if (force_on) 3938 3937 panic("tboot: Failed to initialize DMAR table\n"); ··· 3943 3944 panic("tboot: Failed to initialize DMAR device scope\n"); 3944 3945 goto out_free_dmar; 3945 3946 } 3946 - 3947 - up_write(&dmar_global_lock); 3948 - 3949 - /* 3950 - * The bus notifier takes the dmar_global_lock, so lockdep will 3951 - * complain later when we register it under the lock. 3952 - */ 3953 - dmar_register_bus_notifier(); 3954 - 3955 - down_write(&dmar_global_lock); 3956 3947 3957 3948 if (!no_iommu) 3958 3949 intel_iommu_debugfs_init(); ··· 3988 3999 pr_err("Initialization failed\n"); 3989 4000 goto out_free_dmar; 3990 4001 } 3991 - up_write(&dmar_global_lock); 3992 4002 3993 4003 init_iommu_pm_ops(); 3994 4004 3995 - down_read(&dmar_global_lock); 3996 4005 for_each_active_iommu(iommu, drhd) { 3997 4006 /* 3998 4007 * The flush queue implementation does not perform ··· 4008 4021 "%s", iommu->name); 4009 4022 iommu_device_register(&iommu->iommu, &intel_iommu_ops, NULL); 4010 4023 } 4011 - up_read(&dmar_global_lock); 4012 4024 4013 4025 bus_set_iommu(&pci_bus_type, &intel_iommu_ops); 4014 4026 if (si_domain && !hw_pass_through) 4015 4027 register_memory_notifier(&intel_iommu_memory_nb); 4016 4028 4017 - down_read(&dmar_global_lock); 4018 4029 if (probe_acpi_namespace_devices()) 4019 4030 pr_warn("ACPI name space devices didn't probe correctly\n"); 4020 4031 ··· 4023 4038 4024 4039 iommu_disable_protect_mem_regions(iommu); 4025 4040 } 4026 - up_read(&dmar_global_lock); 4027 - 4028 - pr_info("Intel(R) Virtualization Technology for Directed I/O\n"); 4029 4041 4030 4042 
intel_iommu_enabled = 1; 4043 + dmar_register_bus_notifier(); 4044 + pr_info("Intel(R) Virtualization Technology for Directed I/O\n"); 4031 4045 4032 4046 return 0; 4033 4047 4034 4048 out_free_dmar: 4035 4049 intel_iommu_free_dmars(); 4036 - up_write(&dmar_global_lock); 4037 4050 return ret; 4038 4051 } 4039 4052 ··· 4063 4080 struct device_domain_info *info = dev_iommu_priv_get(dev); 4064 4081 struct dmar_domain *domain = info->domain; 4065 4082 struct intel_iommu *iommu = info->iommu; 4083 + unsigned long flags; 4066 4084 4067 4085 if (!dev_is_real_dma_subdevice(info->dev)) { 4068 4086 if (dev_is_pci(info->dev) && sm_supported(iommu)) ··· 4075 4091 intel_pasid_free_table(info->dev); 4076 4092 } 4077 4093 4078 - spin_lock(&domain->lock); 4094 + spin_lock_irqsave(&domain->lock, flags); 4079 4095 list_del(&info->link); 4080 - spin_unlock(&domain->lock); 4096 + spin_unlock_irqrestore(&domain->lock, flags); 4081 4097 4082 4098 domain_detach_iommu(domain, iommu); 4083 4099 info->domain = NULL; ··· 4396 4412 static bool intel_iommu_enforce_cache_coherency(struct iommu_domain *domain) 4397 4413 { 4398 4414 struct dmar_domain *dmar_domain = to_dmar_domain(domain); 4415 + unsigned long flags; 4399 4416 4400 4417 if (dmar_domain->force_snooping) 4401 4418 return true; 4402 4419 4403 - spin_lock(&dmar_domain->lock); 4420 + spin_lock_irqsave(&dmar_domain->lock, flags); 4404 4421 if (!domain_support_force_snooping(dmar_domain)) { 4405 - spin_unlock(&dmar_domain->lock); 4422 + spin_unlock_irqrestore(&dmar_domain->lock, flags); 4406 4423 return false; 4407 4424 } 4408 4425 4409 4426 domain_set_force_snooping(dmar_domain); 4410 4427 dmar_domain->force_snooping = true; 4411 - spin_unlock(&dmar_domain->lock); 4428 + spin_unlock_irqrestore(&dmar_domain->lock, flags); 4412 4429 4413 4430 return true; 4414 4431 }
+6 -3
drivers/iommu/intel/iommu.h
···
  #define ecap_dis(e)		(((e) >> 27) & 0x1)
  #define ecap_nest(e)		(((e) >> 26) & 0x1)
  #define ecap_mts(e)		(((e) >> 25) & 0x1)
- #define ecap_ecs(e)		(((e) >> 24) & 0x1)
  #define ecap_iotlb_offset(e)	((((e) >> 8) & 0x3ff) * 16)
  #define ecap_max_iotlb_offset(e) (ecap_iotlb_offset(e) + 16)
  #define ecap_coherent(e)	((e) & 0x1)
···
  #define DMA_GSTS_CFIS		(((u32)1) << 23)

  /* DMA_RTADDR_REG */
- #define DMA_RTADDR_RTT		(((u64)1) << 11)
  #define DMA_RTADDR_SMT		(((u64)1) << 10)

  /* CCMD_REG */
···

  #ifdef CONFIG_INTEL_IOMMU
  	unsigned long	*domain_ids;	/* bitmap of domains */
+ 	unsigned long	*copied_tables;	/* bitmap of copied tables */
  	spinlock_t	lock;		/* protect context, domain ids */
  	struct root_entry *root_entry;	/* virtual address */
···
  		(struct dma_pte *)ALIGN((unsigned long)pte, VTD_PAGE_SIZE) - pte;
  }

+ static inline bool context_present(struct context_entry *context)
+ {
+ 	return (context->lo & 1);
+ }
+
  extern struct dmar_drhd_unit * dmar_find_matched_drhd_unit(struct pci_dev *dev);

  extern int dmar_enable_qi(struct intel_iommu *iommu);
···
  #endif /* CONFIG_INTEL_IOMMU_DEBUGFS */

  extern const struct attribute_group *intel_iommu_groups[];
- bool context_present(struct context_entry *context);
  struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
  					 u8 devfn, int alloc);
+19 -2
drivers/iommu/iommu.c
···
  	return ret;
  }

+ static bool iommu_is_default_domain(struct iommu_group *group)
+ {
+ 	if (group->domain == group->default_domain)
+ 		return true;
+
+ 	/*
+ 	 * If the default domain was set to identity and it is still an identity
+ 	 * domain then we consider this a pass. This happens because of
+ 	 * amd_iommu_init_device() replacing the default identity domain with an
+ 	 * identity domain that has a different configuration for AMDGPU.
+ 	 */
+ 	if (group->default_domain &&
+ 	    group->default_domain->type == IOMMU_DOMAIN_IDENTITY &&
+ 	    group->domain && group->domain->type == IOMMU_DOMAIN_IDENTITY)
+ 		return true;
+ 	return false;
+ }
+
  /**
   * iommu_device_use_default_domain() - Device driver wants to handle device
   *                                     DMA through the kernel DMA API.
···

  	mutex_lock(&group->mutex);
  	if (group->owner_cnt) {
- 		if (group->domain != group->default_domain ||
- 		    group->owner) {
+ 		if (group->owner || !iommu_is_default_domain(group)) {
  			ret = -EBUSY;
  			goto unlock_out;
  		}
+11
drivers/iommu/virtio-iommu.c
···
  	return iommu_fwspec_add_ids(dev, args->args, 1);
  }

+ static bool viommu_capable(enum iommu_cap cap)
+ {
+ 	switch (cap) {
+ 	case IOMMU_CAP_CACHE_COHERENCY:
+ 		return true;
+ 	default:
+ 		return false;
+ 	}
+ }
+
  static struct iommu_ops viommu_ops = {
+ 	.capable		= viommu_capable,
  	.domain_alloc		= viommu_domain_alloc,
  	.probe_device		= viommu_probe_device,
  	.probe_finalize		= viommu_probe_finalize,
+15 -5
drivers/net/bonding/bond_main.c
···
  found:
  	if (!ipv6_dev_get_saddr(dev_net(dst->dev), dst->dev, &targets[i], 0, &saddr))
  		bond_ns_send(slave, &targets[i], &saddr, tags);
+ 	else
+ 		bond_ns_send(slave, &targets[i], &in6addr_any, tags);
+
  	dst_release(dst);
  	kfree(tags);
  }
···
  	return ret;
  }

- static void bond_validate_ns(struct bonding *bond, struct slave *slave,
+ static void bond_validate_na(struct bonding *bond, struct slave *slave,
  			     struct in6_addr *saddr, struct in6_addr *daddr)
  {
  	int i;

- 	if (ipv6_addr_any(saddr) || !bond_has_this_ip6(bond, daddr)) {
+ 	/* Ignore NAs that:
+ 	 * 1. Source address is unspecified address.
+ 	 * 2. Dest address is neither all-nodes multicast address nor
+ 	 *    exists on bond interface.
+ 	 */
+ 	if (ipv6_addr_any(saddr) ||
+ 	    (!ipv6_addr_equal(daddr, &in6addr_linklocal_allnodes) &&
+ 	     !bond_has_this_ip6(bond, daddr))) {
  		slave_dbg(bond->dev, slave->dev, "%s: sip %pI6c tip %pI6c not found\n",
  			  __func__, saddr, daddr);
  		return;
···
  	 * see bond_arp_rcv().
  	 */
  	if (bond_is_active_slave(slave))
- 		bond_validate_ns(bond, slave, saddr, daddr);
+ 		bond_validate_na(bond, slave, saddr, daddr);
  	else if (curr_active_slave &&
  		 time_after(slave_last_rx(bond, curr_active_slave),
  			    curr_active_slave->last_link_up))
- 		bond_validate_ns(bond, slave, saddr, daddr);
+ 		bond_validate_na(bond, slave, saddr, daddr);
  	else if (curr_arp_slave &&
  		 bond_time_in_interval(bond, slave_last_tx(curr_arp_slave), 1))
- 		bond_validate_ns(bond, slave, saddr, daddr);
+ 		bond_validate_na(bond, slave, saddr, daddr);

  out:
  	return RX_HANDLER_ANOTHER;
+24 -6
drivers/net/dsa/microchip/ksz_common.c
···
  	.exit = ksz8_switch_exit,
  };

+ static void ksz9477_phylink_mac_link_up(struct ksz_device *dev, int port,
+ 					unsigned int mode,
+ 					phy_interface_t interface,
+ 					struct phy_device *phydev, int speed,
+ 					int duplex, bool tx_pause,
+ 					bool rx_pause);
+
  static const struct ksz_dev_ops ksz9477_dev_ops = {
  	.setup = ksz9477_setup,
  	.get_port_addr = ksz9477_get_port_addr,
···
  	.mdb_del = ksz9477_mdb_del,
  	.change_mtu = ksz9477_change_mtu,
  	.max_mtu = ksz9477_max_mtu,
+ 	.phylink_mac_link_up = ksz9477_phylink_mac_link_up,
  	.config_cpu_port = ksz9477_config_cpu_port,
  	.enable_stp_addr = ksz9477_enable_stp_addr,
  	.reset = ksz9477_reset_switch,
···
  	.mdb_del = ksz9477_mdb_del,
  	.change_mtu = lan937x_change_mtu,
  	.max_mtu = ksz9477_max_mtu,
+ 	.phylink_mac_link_up = ksz9477_phylink_mac_link_up,
  	.config_cpu_port = lan937x_config_cpu_port,
  	.enable_stp_addr = ksz9477_enable_stp_addr,
  	.reset = lan937x_reset_switch,
···
  	ksz_prmw8(dev, port, regs[P_XMII_CTRL_0], mask, val);
  }

- static void ksz_phylink_mac_link_up(struct dsa_switch *ds, int port,
- 				    unsigned int mode,
- 				    phy_interface_t interface,
- 				    struct phy_device *phydev, int speed,
- 				    int duplex, bool tx_pause, bool rx_pause)
+ static void ksz9477_phylink_mac_link_up(struct ksz_device *dev, int port,
+ 					unsigned int mode,
+ 					phy_interface_t interface,
+ 					struct phy_device *phydev, int speed,
+ 					int duplex, bool tx_pause,
+ 					bool rx_pause)
  {
- 	struct ksz_device *dev = ds->priv;
  	struct ksz_port *p;

  	p = &dev->ports[port];
···
  	ksz_port_set_xmii_speed(dev, port, speed);

  	ksz_duplex_flowctrl(dev, port, duplex, tx_pause, rx_pause);
+ }
+
+ static void ksz_phylink_mac_link_up(struct dsa_switch *ds, int port,
+ 				    unsigned int mode,
+ 				    phy_interface_t interface,
+ 				    struct phy_device *phydev, int speed,
+ 				    int duplex, bool tx_pause, bool rx_pause)
+ {
+ 	struct ksz_device *dev = ds->priv;

  	if (dev->dev_ops->phylink_mac_link_up)
  		dev->dev_ops->phylink_mac_link_up(dev, port, mode, interface,
+112 -49
drivers/net/dsa/ocelot/felix_vsc9959.c
···
  #define VSC9959_NUM_PORTS		6

  #define VSC9959_TAS_GCL_ENTRY_MAX	63
+ #define VSC9959_TAS_MIN_GATE_LEN_NS	33
  #define VSC9959_VCAP_POLICER_BASE	63
  #define VSC9959_VCAP_POLICER_MAX	383
  #define VSC9959_SWITCH_PCI_BAR		4
···
  	mdiobus_free(felix->imdio);
  }

+ /* The switch considers any frame (regardless of size) as eligible for
+  * transmission if the traffic class gate is open for at least 33 ns.
+  * Overruns are prevented by cropping an interval at the end of the gate time
+  * slot for which egress scheduling is blocked, but we need to still keep 33 ns
+  * available for one packet to be transmitted, otherwise the port tc will hang.
+  * This function returns the size of a gate interval that remains available for
+  * setting the guard band, after reserving the space for one egress frame.
+  */
+ static u64 vsc9959_tas_remaining_gate_len_ps(u64 gate_len_ns)
+ {
+ 	/* Gate always open */
+ 	if (gate_len_ns == U64_MAX)
+ 		return U64_MAX;
+
+ 	return (gate_len_ns - VSC9959_TAS_MIN_GATE_LEN_NS) * PSEC_PER_NSEC;
+ }
+
  /* Extract shortest continuous gate open intervals in ns for each traffic class
   * of a cyclic tc-taprio schedule. If a gate is always open, the duration is
   * considered U64_MAX. If the gate is always closed, it is considered 0.
···
  			min_gate_len[tc] = 0;
  }

+ /* ocelot_write_rix is a macro that concatenates QSYS_MAXSDU_CFG_* with _RSZ,
+  * so we need to spell out the register access to each traffic class in helper
+  * functions, to simplify callers
+  */
+ static void vsc9959_port_qmaxsdu_set(struct ocelot *ocelot, int port, int tc,
+ 				     u32 max_sdu)
+ {
+ 	switch (tc) {
+ 	case 0:
+ 		ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_0, port);
+ 		break;
+ 	case 1:
+ 		ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_1, port);
+ 		break;
+ 	case 2:
+ 		ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_2, port);
+ 		break;
+ 	case 3:
+ 		ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_3, port);
+ 		break;
+ 	case 4:
+ 		ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_4, port);
+ 		break;
+ 	case 5:
+ 		ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_5, port);
+ 		break;
+ 	case 6:
+ 		ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_6, port);
+ 		break;
+ 	case 7:
+ 		ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_7, port);
+ 		break;
+ 	}
+ }
+
+ static u32 vsc9959_port_qmaxsdu_get(struct ocelot *ocelot, int port, int tc)
+ {
+ 	switch (tc) {
+ 	case 0: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_0, port);
+ 	case 1: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_1, port);
+ 	case 2: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_2, port);
+ 	case 3: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_3, port);
+ 	case 4: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_4, port);
+ 	case 5: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_5, port);
+ 	case 6: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_6, port);
+ 	case 7: return ocelot_read_rix(ocelot, QSYS_QMAXSDU_CFG_7, port);
+ 	default:
+ 		return 0;
+ 	}
+ }
+
  /* Update QSYS_PORT_MAX_SDU to make sure the static guard bands added by the
   * switch (see the ALWAYS_GUARD_BAND_SCH_Q comment) are correct at all MTU
   * values (the default value is 1518). Also, for traffic class windows smaller
···
  	vsc9959_tas_min_gate_lengths(ocelot_port->taprio, min_gate_len);

+ 	mutex_lock(&ocelot->fwd_domain_lock);
+
  	for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
+ 		u64 remaining_gate_len_ps;
  		u32 max_sdu;

- 		if (min_gate_len[tc] == U64_MAX /* Gate always open */ ||
- 		    min_gate_len[tc] * PSEC_PER_NSEC > needed_bit_time_ps) {
+ 		remaining_gate_len_ps =
+ 			vsc9959_tas_remaining_gate_len_ps(min_gate_len[tc]);
+
+ 		if (remaining_gate_len_ps > needed_bit_time_ps) {
  			/* Setting QMAXSDU_CFG to 0 disables oversized frame
  			 * dropping.
  			 */
···
  			/* If traffic class doesn't support a full MTU sized
  			 * frame, make sure to enable oversize frame dropping
  			 * for frames larger than the smallest that would fit.
+ 			 *
+ 			 * However, the exact same register, QSYS_QMAXSDU_CFG_*,
+ 			 * controls not only oversized frame dropping, but also
+ 			 * per-tc static guard band lengths, so it reduces the
+ 			 * useful gate interval length. Therefore, be careful
+ 			 * to calculate a guard band (and therefore max_sdu)
+ 			 * that still leaves 33 ns available in the time slot.
  			 */
- 			max_sdu = div_u64(min_gate_len[tc] * PSEC_PER_NSEC,
- 					  picos_per_byte);
+ 			max_sdu = div_u64(remaining_gate_len_ps, picos_per_byte);
  			/* A TC gate may be completely closed, which is a
  			 * special case where all packets are oversized.
  			 * Any limit smaller than 64 octets accomplishes this
···
  				max_sdu);
  		}

- 		/* ocelot_write_rix is a macro that concatenates
- 		 * QSYS_MAXSDU_CFG_* with _RSZ, so we need to spell out
- 		 * the writes to each traffic class
- 		 */
- 		switch (tc) {
- 		case 0:
- 			ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_0,
- 					 port);
- 			break;
- 		case 1:
- 			ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_1,
- 					 port);
- 			break;
- 		case 2:
- 			ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_2,
- 					 port);
- 			break;
- 		case 3:
- 			ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_3,
- 					 port);
- 			break;
- 		case 4:
- 			ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_4,
- 					 port);
- 			break;
- 		case 5:
- 			ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_5,
- 					 port);
- 			break;
- 		case 6:
- 			ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_6,
- 					 port);
- 			break;
- 		case 7:
- 			ocelot_write_rix(ocelot, max_sdu, QSYS_QMAXSDU_CFG_7,
- 					 port);
- 			break;
- 		}
+ 		vsc9959_port_qmaxsdu_set(ocelot, port, tc, max_sdu);
  	}

  	ocelot_write_rix(ocelot, maxlen, QSYS_PORT_MAX_SDU, port);
+
+ 	ocelot->ops->cut_through_fwd(ocelot);
+
+ 	mutex_unlock(&ocelot->fwd_domain_lock);
  }

  static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
···
  		break;
  	}

+ 	mutex_lock(&ocelot->tas_lock);
+
  	ocelot_rmw_rix(ocelot,
  		       QSYS_TAG_CONFIG_LINK_SPEED(tas_speed),
  		       QSYS_TAG_CONFIG_LINK_SPEED_M,
  		       QSYS_TAG_CONFIG, port);
-
- 	mutex_lock(&ocelot->tas_lock);

  	if (ocelot_port->taprio)
  		vsc9959_tas_guard_bands_update(ocelot, port);
···
  {
  	struct felix *felix = ocelot_to_felix(ocelot);
  	struct dsa_switch *ds = felix->ds;
- 	int port, other_port;
+ 	int tc, port, other_port;

  	lockdep_assert_held(&ocelot->fwd_domain_lock);
···
  			min_speed = other_ocelot_port->speed;
  	}

- 	/* Enable cut-through forwarding for all traffic classes. */
- 	if (ocelot_port->speed == min_speed)
+ 	/* Enable cut-through forwarding for all traffic classes that
+ 	 * don't have oversized dropping enabled, since this check is
+ 	 * bypassed in cut-through mode.
+ 	 */
+ 	if (ocelot_port->speed == min_speed) {
  		val = GENMASK(7, 0);
+
+ 		for (tc = 0; tc < OCELOT_NUM_TC; tc++)
+ 			if (vsc9959_port_qmaxsdu_get(ocelot, port, tc))
+ 				val &= ~BIT(tc);
+ 	}

  set:
  	tmp = ocelot_read_rix(ocelot, ANA_CUT_THRU_CFG, port);
···
  			continue;

  		dev_dbg(ocelot->dev,
- 			"port %d fwd mask 0x%lx speed %d min_speed %d, %s cut-through forwarding\n",
+ 			"port %d fwd mask 0x%lx speed %d min_speed %d, %s cut-through forwarding on TC mask 0x%x\n",
  			port, mask, ocelot_port->speed, min_speed,
- 			val ? "enabling" : "disabling");
+ 			val ? "enabling" : "disabling", val);

  		ocelot_write_rix(ocelot, val, ANA_CUT_THRU_CFG, port);
  	}
+1 -1
drivers/net/dsa/qca/qca8k-8xxx.c
···
  	if (!priv)
  		return -ENOMEM;

- 	priv->info = of_device_get_match_data(priv->dev);
  	priv->bus = mdiodev->bus;
  	priv->dev = &mdiodev->dev;
+ 	priv->info = of_device_get_match_data(priv->dev);

  	priv->reset_gpio = devm_gpiod_get_optional(priv->dev, "reset",
  						   GPIOD_ASIS);
+5 -1
drivers/net/ethernet/freescale/fec.h
···
  #include <linux/clocksource.h>
  #include <linux/net_tstamp.h>
+ #include <linux/pm_qos.h>
  #include <linux/ptp_clock_kernel.h>
  #include <linux/timecounter.h>
···
  /* i.MX8MQ SoC integration mix wakeup interrupt signal into "int2" interrupt line. */
  #define FEC_QUIRK_WAKEUP_FROM_INT2	(1 << 22)

+ /* i.MX6Q adds pm_qos support */
+ #define FEC_QUIRK_HAS_PMQOS		BIT(23)
+
  struct bufdesc_prop {
  	int qid;
  	/* Address of Rx and Tx buffers */
···
  	struct clk *clk_2x_txclk;

  	bool ptp_clk_on;
- 	struct mutex ptp_clk_mutex;
  	unsigned int num_tx_queues;
  	unsigned int num_rx_queues;
···
  	struct delayed_work time_keep;
  	struct regulator *reg_phy;
  	struct fec_stop_mode_gpr stop_gpr;
+ 	struct pm_qos_request pm_qos_req;

  	unsigned int tx_align;
  	unsigned int rx_align;
+17 -9
drivers/net/ethernet/freescale/fec_main.c
···
  	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
  		  FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
  		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR006358 |
- 		  FEC_QUIRK_HAS_RACC | FEC_QUIRK_CLEAR_SETUP_MII,
+ 		  FEC_QUIRK_HAS_RACC | FEC_QUIRK_CLEAR_SETUP_MII |
+ 		  FEC_QUIRK_HAS_PMQOS,
  };

  static const struct fec_devinfo fec_mvf600_info = {
···
  static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
  {
  	struct fec_enet_private *fep = netdev_priv(ndev);
+ 	unsigned long flags;
  	int ret;

  	if (enable) {
···
  			return ret;

  		if (fep->clk_ptp) {
- 			mutex_lock(&fep->ptp_clk_mutex);
+ 			spin_lock_irqsave(&fep->tmreg_lock, flags);
  			ret = clk_prepare_enable(fep->clk_ptp);
  			if (ret) {
- 				mutex_unlock(&fep->ptp_clk_mutex);
+ 				spin_unlock_irqrestore(&fep->tmreg_lock, flags);
  				goto failed_clk_ptp;
  			} else {
  				fep->ptp_clk_on = true;
  			}
- 			mutex_unlock(&fep->ptp_clk_mutex);
+ 			spin_unlock_irqrestore(&fep->tmreg_lock, flags);
  		}

  		ret = clk_prepare_enable(fep->clk_ref);
···
  	} else {
  		clk_disable_unprepare(fep->clk_enet_out);
  		if (fep->clk_ptp) {
- 			mutex_lock(&fep->ptp_clk_mutex);
+ 			spin_lock_irqsave(&fep->tmreg_lock, flags);
  			clk_disable_unprepare(fep->clk_ptp);
  			fep->ptp_clk_on = false;
- 			mutex_unlock(&fep->ptp_clk_mutex);
+ 			spin_unlock_irqrestore(&fep->tmreg_lock, flags);
  		}
  		clk_disable_unprepare(fep->clk_ref);
  		clk_disable_unprepare(fep->clk_2x_txclk);
···
  	clk_disable_unprepare(fep->clk_ref);
  failed_clk_ref:
  	if (fep->clk_ptp) {
- 		mutex_lock(&fep->ptp_clk_mutex);
+ 		spin_lock_irqsave(&fep->tmreg_lock, flags);
  		clk_disable_unprepare(fep->clk_ptp);
  		fep->ptp_clk_on = false;
- 		mutex_unlock(&fep->ptp_clk_mutex);
+ 		spin_unlock_irqrestore(&fep->tmreg_lock, flags);
  	}
  failed_clk_ptp:
  	clk_disable_unprepare(fep->clk_enet_out);
···
  	if (fep->quirks & FEC_QUIRK_ERR006687)
  		imx6q_cpuidle_fec_irqs_used();

+ 	if (fep->quirks & FEC_QUIRK_HAS_PMQOS)
+ 		cpu_latency_qos_add_request(&fep->pm_qos_req, 0);
+
  	napi_enable(&fep->napi);
  	phy_start(ndev->phydev);
  	netif_tx_start_all_queues(ndev);
···
  	fec_enet_update_ethtool_stats(ndev);

  	fec_enet_clk_enable(ndev, false);
+ 	if (fep->quirks & FEC_QUIRK_HAS_PMQOS)
+ 		cpu_latency_qos_remove_request(&fep->pm_qos_req);
+
  	pinctrl_pm_select_sleep_state(&fep->pdev->dev);
  	pm_runtime_mark_last_busy(&fep->pdev->dev);
  	pm_runtime_put_autosuspend(&fep->pdev->dev);
···
  	}

  	fep->ptp_clk_on = false;
- 	mutex_init(&fep->ptp_clk_mutex);
+ 	spin_lock_init(&fep->tmreg_lock);

  	/* clk_ref is optional, depends on board */
  	fep->clk_ref = devm_clk_get_optional(&pdev->dev, "enet_clk_ref");
+10 -18
drivers/net/ethernet/freescale/fec_ptp.c
···
   */
  static int fec_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
  {
- 	struct fec_enet_private *adapter =
+ 	struct fec_enet_private *fep =
  		container_of(ptp, struct fec_enet_private, ptp_caps);
  	u64 ns;
  	unsigned long flags;

- 	mutex_lock(&adapter->ptp_clk_mutex);
+ 	spin_lock_irqsave(&fep->tmreg_lock, flags);
  	/* Check the ptp clock */
- 	if (!adapter->ptp_clk_on) {
- 		mutex_unlock(&adapter->ptp_clk_mutex);
+ 	if (!fep->ptp_clk_on) {
+ 		spin_unlock_irqrestore(&fep->tmreg_lock, flags);
  		return -EINVAL;
  	}
- 	spin_lock_irqsave(&adapter->tmreg_lock, flags);
- 	ns = timecounter_read(&adapter->tc);
- 	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
- 	mutex_unlock(&adapter->ptp_clk_mutex);
+ 	ns = timecounter_read(&fep->tc);
+ 	spin_unlock_irqrestore(&fep->tmreg_lock, flags);

  	*ts = ns_to_timespec64(ns);
···
  	unsigned long flags;
  	u32 counter;

- 	mutex_lock(&fep->ptp_clk_mutex);
+ 	spin_lock_irqsave(&fep->tmreg_lock, flags);
  	/* Check the ptp clock */
  	if (!fep->ptp_clk_on) {
- 		mutex_unlock(&fep->ptp_clk_mutex);
+ 		spin_unlock_irqrestore(&fep->tmreg_lock, flags);
  		return -EINVAL;
  	}
···
  	 */
  	counter = ns & fep->cc.mask;

- 	spin_lock_irqsave(&fep->tmreg_lock, flags);
  	writel(counter, fep->hwp + FEC_ATIME);
  	timecounter_init(&fep->tc, &fep->cc, ns);
  	spin_unlock_irqrestore(&fep->tmreg_lock, flags);
- 	mutex_unlock(&fep->ptp_clk_mutex);
  	return 0;
  }
···
  	struct fec_enet_private *fep = container_of(dwork, struct fec_enet_private, time_keep);
  	unsigned long flags;

- 	mutex_lock(&fep->ptp_clk_mutex);
+ 	spin_lock_irqsave(&fep->tmreg_lock, flags);
  	if (fep->ptp_clk_on) {
- 		spin_lock_irqsave(&fep->tmreg_lock, flags);
  		timecounter_read(&fep->tc);
- 		spin_unlock_irqrestore(&fep->tmreg_lock, flags);
  	}
- 	mutex_unlock(&fep->ptp_clk_mutex);
+ 	spin_unlock_irqrestore(&fep->tmreg_lock, flags);

  	schedule_delayed_work(&fep->time_keep, HZ);
  }
···
  		dev_err(&fep->pdev->dev, "clk_ptp clock rate is zero\n");
  	}
  	fep->ptp_inc = NSEC_PER_SEC / fep->cycle_speed;
-
- 	spin_lock_init(&fep->tmreg_lock);

  	fec_ptp_start_cyclecounter(ndev);
+4 -1
drivers/net/ethernet/intel/i40e/i40e_client.c
··· 177 177 "Cannot locate client instance close routine\n"); 178 178 return; 179 179 } 180 + if (!test_bit(__I40E_CLIENT_INSTANCE_OPENED, &cdev->state)) { 181 + dev_dbg(&pf->pdev->dev, "Client is not open, abort close\n"); 182 + return; 183 + } 180 184 cdev->client->ops->close(&cdev->lan_info, cdev->client, reset); 181 185 clear_bit(__I40E_CLIENT_INSTANCE_OPENED, &cdev->state); 182 186 i40e_client_release_qvlist(&cdev->lan_info); ··· 433 429 /* Remove failed client instance */ 434 430 clear_bit(__I40E_CLIENT_INSTANCE_OPENED, 435 431 &cdev->state); 436 - i40e_client_del_instance(pf); 437 432 return; 438 433 } 439 434 }
+3
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 6659 6659 vsi->tc_seid_map[i] = ch->seid; 6660 6660 } 6661 6661 } 6662 + 6663 + /* reset to reconfigure TX queue contexts */ 6664 + i40e_do_reset(vsi->back, I40E_PF_RESET_FLAG, true); 6662 6665 return ret; 6663 6666 6664 6667 err_free:
+2 -1
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 3688 3688 u8 prio; 3689 3689 3690 3690 /* is DCB enabled at all? */ 3691 - if (vsi->tc_config.numtc == 1) 3691 + if (vsi->tc_config.numtc == 1 || 3692 + i40e_is_tc_mqprio_enabled(vsi->back)) 3692 3693 return netdev_pick_tx(netdev, skb, sb_dev); 3693 3694 3694 3695 prio = skb->priority;
+11 -3
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 2877 2877 int i = 0, err; 2878 2878 bool running; 2879 2879 2880 + /* Detach interface to avoid subsequent NDO callbacks */ 2881 + rtnl_lock(); 2882 + netif_device_detach(netdev); 2883 + rtnl_unlock(); 2884 + 2880 2885 /* When device is being removed it doesn't make sense to run the reset 2881 2886 * task, just return in such a case. 2882 2887 */ ··· 2889 2884 if (adapter->state != __IAVF_REMOVE) 2890 2885 queue_work(iavf_wq, &adapter->reset_task); 2891 2886 2892 - return; 2887 + goto reset_finish; 2893 2888 } 2894 2889 2895 2890 while (!mutex_trylock(&adapter->client_lock)) ··· 2959 2954 2960 2955 if (running) { 2961 2956 netif_carrier_off(netdev); 2962 - netif_tx_stop_all_queues(netdev); 2963 2957 adapter->link_up = false; 2964 2958 iavf_napi_disable_all(adapter); 2965 2959 } ··· 3088 3084 mutex_unlock(&adapter->client_lock); 3089 3085 mutex_unlock(&adapter->crit_lock); 3090 3086 3091 - return; 3087 + goto reset_finish; 3092 3088 reset_err: 3093 3089 if (running) { 3094 3090 set_bit(__IAVF_VSI_DOWN, adapter->vsi.state); ··· 3099 3095 mutex_unlock(&adapter->client_lock); 3100 3096 mutex_unlock(&adapter->crit_lock); 3101 3097 dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n"); 3098 + reset_finish: 3099 + rtnl_lock(); 3100 + netif_device_attach(netdev); 3101 + rtnl_unlock(); 3102 3102 } 3103 3103 3104 3104 /**
-17
drivers/net/ethernet/intel/ice/ice_base.c
··· 7 7 #include "ice_dcb_lib.h" 8 8 #include "ice_sriov.h" 9 9 10 - static bool ice_alloc_rx_buf_zc(struct ice_rx_ring *rx_ring) 11 - { 12 - rx_ring->xdp_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->xdp_buf), GFP_KERNEL); 13 - return !!rx_ring->xdp_buf; 14 - } 15 - 16 - static bool ice_alloc_rx_buf(struct ice_rx_ring *rx_ring) 17 - { 18 - rx_ring->rx_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL); 19 - return !!rx_ring->rx_buf; 20 - } 21 - 22 10 /** 23 11 * __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI 24 12 * @qs_cfg: gathered variables needed for PF->VSI queues assignment ··· 507 519 xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, 508 520 ring->q_index, ring->q_vector->napi.napi_id); 509 521 510 - kfree(ring->rx_buf); 511 522 ring->xsk_pool = ice_xsk_pool(ring); 512 523 if (ring->xsk_pool) { 513 - if (!ice_alloc_rx_buf_zc(ring)) 514 - return -ENOMEM; 515 524 xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq); 516 525 517 526 ring->rx_buf_len = ··· 523 538 dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n", 524 539 ring->q_index); 525 540 } else { 526 - if (!ice_alloc_rx_buf(ring)) 527 - return -ENOMEM; 528 541 if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) 529 542 /* coverity[check_return] */ 530 543 xdp_rxq_info_reg(&ring->xdp_rxq,
+9 -1
drivers/net/ethernet/intel/ice/ice_main.c
··· 2898 2898 if (xdp_ring_err) 2899 2899 NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Tx resources failed"); 2900 2900 } 2901 + /* reallocate Rx queues that are used for zero-copy */ 2902 + xdp_ring_err = ice_realloc_zc_buf(vsi, true); 2903 + if (xdp_ring_err) 2904 + NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Rx resources failed"); 2901 2905 } else if (ice_is_xdp_ena_vsi(vsi) && !prog) { 2902 2906 xdp_ring_err = ice_destroy_xdp_rings(vsi); 2903 2907 if (xdp_ring_err) 2904 2908 NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Tx resources failed"); 2909 + /* reallocate Rx queues that were used for zero-copy */ 2910 + xdp_ring_err = ice_realloc_zc_buf(vsi, false); 2911 + if (xdp_ring_err) 2912 + NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Rx resources failed"); 2905 2913 } else { 2906 2914 /* safe to call even when prog == vsi->xdp_prog as 2907 2915 * dev_xdp_install in net/core/dev.c incremented prog's ··· 3913 3905 3914 3906 pf->avail_rxqs = bitmap_zalloc(pf->max_pf_rxqs, GFP_KERNEL); 3915 3907 if (!pf->avail_rxqs) { 3916 - devm_kfree(ice_pf_to_dev(pf), pf->avail_txqs); 3908 + bitmap_free(pf->avail_txqs); 3917 3909 pf->avail_txqs = NULL; 3918 3910 return -ENOMEM; 3919 3911 }
+63
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 192 192 err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true); 193 193 if (err) 194 194 return err; 195 + ice_clean_rx_ring(rx_ring); 195 196 196 197 ice_qvec_toggle_napi(vsi, q_vector, false); 197 198 ice_qp_clean_rings(vsi, q_idx); ··· 318 317 } 319 318 320 319 /** 320 + * ice_realloc_rx_xdp_bufs - reallocate for either XSK or normal buffer 321 + * @rx_ring: Rx ring 322 + * @pool_present: is pool for XSK present 323 + * 324 + * Try allocating memory and return ENOMEM, if failed to allocate. 325 + * If allocation was successful, substitute buffer with allocated one. 326 + * Returns 0 on success, negative on failure 327 + */ 328 + static int 329 + ice_realloc_rx_xdp_bufs(struct ice_rx_ring *rx_ring, bool pool_present) 330 + { 331 + size_t elem_size = pool_present ? sizeof(*rx_ring->xdp_buf) : 332 + sizeof(*rx_ring->rx_buf); 333 + void *sw_ring = kcalloc(rx_ring->count, elem_size, GFP_KERNEL); 334 + 335 + if (!sw_ring) 336 + return -ENOMEM; 337 + 338 + if (pool_present) { 339 + kfree(rx_ring->rx_buf); 340 + rx_ring->rx_buf = NULL; 341 + rx_ring->xdp_buf = sw_ring; 342 + } else { 343 + kfree(rx_ring->xdp_buf); 344 + rx_ring->xdp_buf = NULL; 345 + rx_ring->rx_buf = sw_ring; 346 + } 347 + 348 + return 0; 349 + } 350 + 351 + /** 352 + * ice_realloc_zc_buf - reallocate XDP ZC queue pairs 353 + * @vsi: Current VSI 354 + * @zc: is zero copy set 355 + * 356 + * Reallocate buffer for rx_rings that might be used by XSK. 357 + * XDP requires more memory, than rx_buf provides. 
358 + * Returns 0 on success, negative on failure 359 + */ 360 + int ice_realloc_zc_buf(struct ice_vsi *vsi, bool zc) 361 + { 362 + struct ice_rx_ring *rx_ring; 363 + unsigned long q; 364 + 365 + for_each_set_bit(q, vsi->af_xdp_zc_qps, 366 + max_t(int, vsi->alloc_txq, vsi->alloc_rxq)) { 367 + rx_ring = vsi->rx_rings[q]; 368 + if (ice_realloc_rx_xdp_bufs(rx_ring, zc)) 369 + return -ENOMEM; 370 + } 371 + 372 + return 0; 373 + } 374 + 375 + /** 321 376 * ice_xsk_pool_setup - enable/disable a buffer pool region depending on its state 322 377 * @vsi: Current VSI 323 378 * @pool: buffer pool to enable/associate to a ring, NULL to disable ··· 402 345 if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi); 403 346 404 347 if (if_running) { 348 + struct ice_rx_ring *rx_ring = vsi->rx_rings[qid]; 349 + 405 350 ret = ice_qp_dis(vsi, qid); 406 351 if (ret) { 407 352 netdev_err(vsi->netdev, "ice_qp_dis error = %d\n", ret); 408 353 goto xsk_pool_if_up; 409 354 } 355 + 356 + ret = ice_realloc_rx_xdp_bufs(rx_ring, pool_present); 357 + if (ret) 358 + goto xsk_pool_if_up; 410 359 } 411 360 412 361 pool_failure = pool_present ? ice_xsk_pool_enable(vsi, pool, qid) :
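ice_realloc_rx_xdp_bufs() above allocates the replacement array before freeing the one it replaces, so an allocation failure leaves the ring's existing buffers intact. The allocate-then-swap shape, as a self-contained sketch (struct and field names are illustrative, not the driver's):

```c
#include <stdlib.h>

/* Illustrative stand-in for ice_rx_ring: one of the two software rings
 * is in use at a time, depending on whether an XSK pool is attached. */
struct ring {
    size_t count;
    void *rx_buf;
    void *xdp_buf;
};

/* Allocate the new software ring first; only on success free the buffer
 * being replaced and install the new one. On failure the old state is
 * untouched (mirrors ice_realloc_rx_xdp_bufs()). */
int realloc_sw_ring(struct ring *r, size_t elem_size, int pool_present)
{
    void *sw_ring = calloc(r->count, elem_size);

    if (!sw_ring)
        return -1;             /* old buffers remain valid */

    if (pool_present) {
        free(r->rx_buf);
        r->rx_buf = NULL;
        r->xdp_buf = sw_ring;
    } else {
        free(r->xdp_buf);
        r->xdp_buf = NULL;
        r->rx_buf = sw_ring;
    }
    return 0;
}
```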
+8
drivers/net/ethernet/intel/ice/ice_xsk.h
··· 27 27 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring); 28 28 void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring); 29 29 bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, u32 budget, int napi_budget); 30 + int ice_realloc_zc_buf(struct ice_vsi *vsi, bool zc); 30 31 #else 31 32 static inline bool 32 33 ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring, ··· 73 72 74 73 static inline void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring) { } 75 74 static inline void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring) { } 75 + 76 + static inline int 77 + ice_realloc_zc_buf(struct ice_vsi __always_unused *vsi, 78 + bool __always_unused zc) 79 + { 80 + return 0; 81 + } 76 82 #endif /* CONFIG_XDP_SOCKETS */ 77 83 #endif /* !_ICE_XSK_H_ */
+2 -2
drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c
··· 700 700 701 701 void mvpp2_dbgfs_init(struct mvpp2 *priv, const char *name) 702 702 { 703 - struct dentry *mvpp2_dir, *mvpp2_root; 703 + static struct dentry *mvpp2_root; 704 + struct dentry *mvpp2_dir; 704 705 int ret, i; 705 706 706 - mvpp2_root = debugfs_lookup(MVPP2_DRIVER_NAME, NULL); 707 707 if (!mvpp2_root) 708 708 mvpp2_root = debugfs_create_dir(MVPP2_DRIVER_NAME, NULL); 709 709
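The mvpp2 fix caches the driver's debugfs root in a static pointer instead of calling debugfs_lookup() on every init (the lookup returns a reference that was never being dropped). The create-once memoization it switches to, reduced to a plain-C sketch with hypothetical names:

```c
#include <stddef.h>

static int dirs_created;    /* counts how many directories really exist */

/* Stand-in for debugfs_create_dir(). */
static void *create_dir(void)
{
    static char dir;        /* dummy object to hand out a pointer to */

    dirs_created++;
    return &dir;
}

/* Create the root on first use, then reuse the cached pointer; no
 * per-call lookup, so no extra reference to leak. */
void *get_root(void)
{
    static void *root;

    if (!root)
        root = create_dir();
    return root;
}
```

In the kernel the same idea relies on mvpp2_dbgfs_init() being serialized by the probe path; this sketch has no locking either, which is worth noting before reusing the pattern in concurrent code.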
+1 -1
drivers/net/ethernet/mediatek/mtk_ppe.c
··· 412 412 if (entry->hash != 0xffff) { 413 413 ppe->foe_table[entry->hash].ib1 &= ~MTK_FOE_IB1_STATE; 414 414 ppe->foe_table[entry->hash].ib1 |= FIELD_PREP(MTK_FOE_IB1_STATE, 415 - MTK_FOE_STATE_BIND); 415 + MTK_FOE_STATE_UNBIND); 416 416 dma_wmb(); 417 417 } 418 418 entry->hash = 0xffff;
+3
drivers/net/ethernet/mediatek/mtk_ppe.h
··· 293 293 if (!ppe) 294 294 return; 295 295 296 + if (hash > MTK_PPE_HASH_MASK) 297 + return; 298 + 296 299 now = (u16)jiffies; 297 300 diff = now - ppe->foe_check_time[hash]; 298 301 if (diff < HZ / 10)
+21 -2
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 494 494 return err; 495 495 } 496 496 497 + bool mlx5_is_roce_on(struct mlx5_core_dev *dev) 498 + { 499 + struct devlink *devlink = priv_to_devlink(dev); 500 + union devlink_param_value val; 501 + int err; 502 + 503 + err = devlink_param_driverinit_value_get(devlink, 504 + DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE, 505 + &val); 506 + 507 + if (!err) 508 + return val.vbool; 509 + 510 + mlx5_core_dbg(dev, "Failed to get param. err = %d\n", err); 511 + return MLX5_CAP_GEN(dev, roce); 512 + } 513 + EXPORT_SYMBOL(mlx5_is_roce_on); 514 + 497 515 static int handle_hca_cap_2(struct mlx5_core_dev *dev, void *set_ctx) 498 516 { 499 517 void *set_hca_cap; ··· 615 597 MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix)); 616 598 617 599 if (MLX5_CAP_GEN(dev, roce_rw_supported)) 618 - MLX5_SET(cmd_hca_cap, set_hca_cap, roce, mlx5_is_roce_init_enabled(dev)); 600 + MLX5_SET(cmd_hca_cap, set_hca_cap, roce, 601 + mlx5_is_roce_on(dev)); 619 602 620 603 max_uc_list = max_uc_list_get_devlink_param(dev); 621 604 if (max_uc_list > 0) ··· 642 623 */ 643 624 static bool is_roce_fw_disabled(struct mlx5_core_dev *dev) 644 625 { 645 - return (MLX5_CAP_GEN(dev, roce_rw_supported) && !mlx5_is_roce_init_enabled(dev)) || 626 + return (MLX5_CAP_GEN(dev, roce_rw_supported) && !mlx5_is_roce_on(dev)) || 646 627 (!MLX5_CAP_GEN(dev, roce_rw_supported) && !MLX5_CAP_GEN(dev, roce)); 647 628 } 648 629
-2
drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
··· 1136 1136 1137 1137 clk_disable_unprepare(priv->plat->stmmac_clk); 1138 1138 clk_unregister_fixed_rate(priv->plat->stmmac_clk); 1139 - 1140 - pcim_iounmap_regions(pdev, BIT(0)); 1141 1139 } 1142 1140 1143 1141 static int __maybe_unused intel_eth_pci_suspend(struct device *dev)
+1 -7
drivers/net/phy/meson-gxl.c
··· 243 243 irq_status == INTSRC_ENERGY_DETECT) 244 244 return IRQ_HANDLED; 245 245 246 - /* Give PHY some time before MAC starts sending data. This works 247 - * around an issue where network doesn't come up properly. 248 - */ 249 - if (!(irq_status & INTSRC_LINK_DOWN)) 250 - phy_queue_state_machine(phydev, msecs_to_jiffies(100)); 251 - else 252 - phy_trigger_machine(phydev); 246 + phy_trigger_machine(phydev); 253 247 254 248 return IRQ_HANDLED; 255 249 }
+57 -7
drivers/net/phy/microchip_t1.c
··· 28 28 29 29 /* Interrupt Source Register */ 30 30 #define LAN87XX_INTERRUPT_SOURCE (0x18) 31 + #define LAN87XX_INTERRUPT_SOURCE_2 (0x08) 31 32 32 33 /* Interrupt Mask Register */ 33 34 #define LAN87XX_INTERRUPT_MASK (0x19) 34 35 #define LAN87XX_MASK_LINK_UP (0x0004) 35 36 #define LAN87XX_MASK_LINK_DOWN (0x0002) 37 + 38 + #define LAN87XX_INTERRUPT_MASK_2 (0x09) 39 + #define LAN87XX_MASK_COMM_RDY BIT(10) 36 40 37 41 /* MISC Control 1 Register */ 38 42 #define LAN87XX_CTRL_1 (0x11) ··· 428 424 int rc, val = 0; 429 425 430 426 if (phydev->interrupts == PHY_INTERRUPT_ENABLED) { 431 - /* unmask all source and clear them before enable */ 432 - rc = phy_write(phydev, LAN87XX_INTERRUPT_MASK, 0x7FFF); 433 - rc = phy_read(phydev, LAN87XX_INTERRUPT_SOURCE); 434 - val = LAN87XX_MASK_LINK_UP | LAN87XX_MASK_LINK_DOWN; 427 + /* clear all interrupt */ 435 428 rc = phy_write(phydev, LAN87XX_INTERRUPT_MASK, val); 436 - } else { 437 - rc = phy_write(phydev, LAN87XX_INTERRUPT_MASK, val); 438 - if (rc) 429 + if (rc < 0) 439 430 return rc; 440 431 441 432 rc = phy_read(phydev, LAN87XX_INTERRUPT_SOURCE); 433 + if (rc < 0) 434 + return rc; 435 + 436 + rc = access_ereg(phydev, PHYACC_ATTR_MODE_WRITE, 437 + PHYACC_ATTR_BANK_MISC, 438 + LAN87XX_INTERRUPT_MASK_2, val); 439 + if (rc < 0) 440 + return rc; 441 + 442 + rc = access_ereg(phydev, PHYACC_ATTR_MODE_READ, 443 + PHYACC_ATTR_BANK_MISC, 444 + LAN87XX_INTERRUPT_SOURCE_2, 0); 445 + if (rc < 0) 446 + return rc; 447 + 448 + /* enable link down and comm ready interrupt */ 449 + val = LAN87XX_MASK_LINK_DOWN; 450 + rc = phy_write(phydev, LAN87XX_INTERRUPT_MASK, val); 451 + if (rc < 0) 452 + return rc; 453 + 454 + val = LAN87XX_MASK_COMM_RDY; 455 + rc = access_ereg(phydev, PHYACC_ATTR_MODE_WRITE, 456 + PHYACC_ATTR_BANK_MISC, 457 + LAN87XX_INTERRUPT_MASK_2, val); 458 + } else {
459 + rc = phy_write(phydev, LAN87XX_INTERRUPT_MASK, val); 460 + if (rc < 0) 461 + return rc; 462 + 463 + rc = phy_read(phydev, LAN87XX_INTERRUPT_SOURCE); 464 + if (rc < 0) 465 + return rc; 466 + 467 + rc = access_ereg(phydev, PHYACC_ATTR_MODE_WRITE, 468 + PHYACC_ATTR_BANK_MISC, 469 + LAN87XX_INTERRUPT_MASK_2, val); 470 + if (rc < 0) 471 + return rc; 472 + 473 + rc = access_ereg(phydev, PHYACC_ATTR_MODE_READ, 474 + PHYACC_ATTR_BANK_MISC, 475 + LAN87XX_INTERRUPT_SOURCE_2, 0); 442 476 } 443 477 444 478 return rc < 0 ? rc : 0; ··· 485 443 static irqreturn_t lan87xx_handle_interrupt(struct phy_device *phydev) 486 444 { 487 445 int irq_status; 446 + 447 + irq_status = access_ereg(phydev, PHYACC_ATTR_MODE_READ, 448 + PHYACC_ATTR_BANK_MISC, 449 + LAN87XX_INTERRUPT_SOURCE_2, 0); 450 + if (irq_status < 0) { 451 + phy_error(phydev); 452 + return IRQ_NONE; 453 + } 488 454 489 455 irq_status = phy_read(phydev, LAN87XX_INTERRUPT_SOURCE); 490 456 if (irq_status < 0) {
+1
drivers/net/usb/qmi_wwan.c
··· 1087 1087 {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0512)}, /* Quectel EG12/EM12 */ 1088 1088 {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0620)}, /* Quectel EM160R-GL */ 1089 1089 {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0800)}, /* Quectel RM500Q-GL */ 1090 + {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0801)}, /* Quectel RM520N */ 1090 1091 1091 1092 /* 3. Combined interface devices matching on interface number */ 1092 1093 {QMI_FIXED_INTF(0x0408, 0xea42, 4)}, /* Yota / Megafon M100-1 */
+1 -4
drivers/net/wireless/intel/iwlegacy/4965-rs.c
··· 2403 2403 /* Repeat initial/next rate. 2404 2404 * For legacy IL_NUMBER_TRY == 1, this loop will not execute. 2405 2405 * For HT IL_HT_NUMBER_TRY == 3, this executes twice. */ 2406 - while (repeat_rate > 0) { 2406 + while (repeat_rate > 0 && idx < (LINK_QUAL_MAX_RETRY_NUM - 1)) { 2407 2407 if (is_legacy(tbl_type.lq_type)) { 2408 2408 if (ant_toggle_cnt < NUM_TRY_BEFORE_ANT_TOGGLE) 2409 2409 ant_toggle_cnt++; ··· 2422 2422 cpu_to_le32(new_rate); 2423 2423 repeat_rate--; 2424 2424 idx++; 2425 - if (idx >= LINK_QUAL_MAX_RETRY_NUM) 2426 - goto out; 2427 2425 } 2428 2426 2429 2427 il4965_rs_get_tbl_info_from_mcs(new_rate, lq_sta->band, ··· 2466 2468 repeat_rate--; 2467 2469 } 2468 2470 2469 - out: 2470 2471 lq_cmd->agg_params.agg_frame_cnt_limit = LINK_QUAL_AGG_FRAME_LIMIT_DEF; 2471 2472 lq_cmd->agg_params.agg_dis_start_th = LINK_QUAL_AGG_DISABLE_START_DEF; 2472 2473
+6 -1
drivers/net/wireless/mac80211_hwsim.c
··· 5060 5060 5061 5061 nlh = nlmsg_hdr(skb); 5062 5062 gnlh = nlmsg_data(nlh); 5063 + 5064 + if (skb->len < nlh->nlmsg_len) 5065 + return -EINVAL; 5066 + 5063 5067 err = genlmsg_parse(nlh, &hwsim_genl_family, tb, HWSIM_ATTR_MAX, 5064 5068 hwsim_genl_policy, NULL); 5065 5069 if (err) { ··· 5106 5102 spin_unlock_irqrestore(&hwsim_virtio_lock, flags); 5107 5103 5108 5104 skb->data = skb->head; 5109 - skb_set_tail_pointer(skb, len); 5105 + skb_reset_tail_pointer(skb); 5106 + skb_put(skb, len); 5110 5107 hwsim_virtio_handle_cmd(skb); 5111 5108 5112 5109 spin_lock_irqsave(&hwsim_virtio_lock, flags);
+1 -1
drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
··· 261 261 262 262 err = mt7921e_driver_own(dev); 263 263 if (err) 264 - return err; 264 + goto out; 265 265 266 266 err = mt7921_run_firmware(dev); 267 267 if (err)
+1
drivers/net/wireless/microchip/wilc1000/netdev.h
··· 245 245 u8 *rx_buffer; 246 246 u32 rx_buffer_offset; 247 247 u8 *tx_buffer; 248 + u32 *vmm_table; 248 249 249 250 struct txq_handle txq[NQUEUES]; 250 251 int txq_entries;
+33 -6
drivers/net/wireless/microchip/wilc1000/sdio.c
··· 28 28 u32 block_size; 29 29 bool isinit; 30 30 int has_thrpt_enh3; 31 + u8 *cmd53_buf; 31 32 }; 32 33 33 34 struct sdio_cmd52 { ··· 48 47 u32 count: 9; 49 48 u8 *buffer; 50 49 u32 block_size; 50 + bool use_global_buf; 51 51 }; 52 52 53 53 static const struct wilc_hif_func wilc_hif_sdio; ··· 93 91 { 94 92 struct sdio_func *func = container_of(wilc->dev, struct sdio_func, dev); 95 93 int size, ret; 94 + struct wilc_sdio *sdio_priv = wilc->bus_data; 95 + u8 *buf = cmd->buffer; 96 96 97 97 sdio_claim_host(func); 98 98 ··· 105 101 else 106 102 size = cmd->count; 107 103 104 + if (cmd->use_global_buf) { 105 + if (size > sizeof(u32)) 106 + return -EINVAL; 107 + 108 + buf = sdio_priv->cmd53_buf; 109 + } 110 + 108 111 if (cmd->read_write) { /* write */ 109 - ret = sdio_memcpy_toio(func, cmd->address, 110 - (void *)cmd->buffer, size); 112 + if (cmd->use_global_buf) 113 + memcpy(buf, cmd->buffer, size); 114 + 115 + ret = sdio_memcpy_toio(func, cmd->address, buf, size); 111 116 } else { /* read */ 112 - ret = sdio_memcpy_fromio(func, (void *)cmd->buffer, 113 - cmd->address, size); 117 + ret = sdio_memcpy_fromio(func, buf, cmd->address, size); 118 + 119 + if (cmd->use_global_buf) 120 + memcpy(cmd->buffer, buf, size); 114 121 } 115 122 116 123 sdio_release_host(func); ··· 142 127 sdio_priv = kzalloc(sizeof(*sdio_priv), GFP_KERNEL); 143 128 if (!sdio_priv) 144 129 return -ENOMEM; 130 + 131 + sdio_priv->cmd53_buf = kzalloc(sizeof(u32), GFP_KERNEL); 132 + if (!sdio_priv->cmd53_buf) { 133 + ret = -ENOMEM; 134 + goto free; 135 + } 145 136 146 137 ret = wilc_cfg80211_init(&wilc, &func->dev, WILC_HIF_SDIO, 147 138 &wilc_hif_sdio); ··· 182 161 irq_dispose_mapping(wilc->dev_irq_num); 183 162 wilc_netdev_cleanup(wilc); 184 163 free: 164 + kfree(sdio_priv->cmd53_buf); 185 165 kfree(sdio_priv); 186 166 return ret; 187 167 } ··· 194 172 195 173 clk_disable_unprepare(wilc->rtc_clk); 196 174 wilc_netdev_cleanup(wilc); 175 + kfree(sdio_priv->cmd53_buf); 197 176 kfree(sdio_priv); 198 177 } 
199 178 ··· 398 375 cmd.address = WILC_SDIO_FBR_DATA_REG; 399 376 cmd.block_mode = 0; 400 377 cmd.increment = 1; 401 - cmd.count = 4; 378 + cmd.count = sizeof(u32); 402 379 cmd.buffer = (u8 *)&data; 380 + cmd.use_global_buf = true; 403 381 cmd.block_size = sdio_priv->block_size; 404 382 ret = wilc_sdio_cmd53(wilc, &cmd); 405 383 if (ret) ··· 438 414 nblk = size / block_size; 439 415 nleft = size % block_size; 440 416 417 + cmd.use_global_buf = false; 441 418 if (nblk > 0) { 442 419 cmd.block_mode = 1; 443 420 cmd.increment = 1; ··· 517 492 cmd.address = WILC_SDIO_FBR_DATA_REG; 518 493 cmd.block_mode = 0; 519 494 cmd.increment = 1; 520 - cmd.count = 4; 495 + cmd.count = sizeof(u32); 521 496 cmd.buffer = (u8 *)data; 497 + cmd.use_global_buf = true; 522 498 523 499 cmd.block_size = sdio_priv->block_size; 524 500 ret = wilc_sdio_cmd53(wilc, &cmd); ··· 561 535 nblk = size / block_size; 562 536 nleft = size % block_size; 563 537 538 + cmd.use_global_buf = false; 564 539 if (nblk > 0) { 565 540 cmd.block_mode = 1; 566 541 cmd.increment = 1;
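The wilc1000 change routes small CMD53 transfers through a preallocated heap buffer (cmd53_buf) instead of handing caller memory, which may be a stack variable and thus not safe for DMA, straight to sdio_memcpy_toio()/fromio(). The copy-in/copy-out bounce-buffer shape, as a hedged userspace sketch (the bus_* helpers are stand-ins, not real APIs):

```c
#include <string.h>
#include <stdint.h>

static uint8_t cmd53_buf[sizeof(uint32_t)]; /* preallocated, DMA-safe stand-in */

/* Stand-ins for the actual SDIO transfer; a register backs the "device". */
static uint8_t device_reg[sizeof(uint32_t)];
static void bus_write(const uint8_t *b, size_t n) { memcpy(device_reg, b, n); }
static void bus_read(uint8_t *b, size_t n) { memcpy(b, device_reg, n); }

/* Write path: stage caller data in the bounce buffer, then transfer it. */
int cmd53_write(const void *data, size_t size)
{
    if (size > sizeof(cmd53_buf))
        return -1;              /* mirrors the size > sizeof(u32) check */
    memcpy(cmd53_buf, data, size);
    bus_write(cmd53_buf, size);
    return 0;
}

/* Read path: transfer into the bounce buffer, then copy out to the caller. */
int cmd53_read(void *data, size_t size)
{
    if (size > sizeof(cmd53_buf))
        return -1;
    bus_read(cmd53_buf, size);
    memcpy(data, cmd53_buf, size);
    return 0;
}
```

As in the driver, the bounce buffer is only used for the small register-sized transfers (use_global_buf); bulk block transfers keep their own buffers.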
+13 -2
drivers/net/wireless/microchip/wilc1000/wlan.c
··· 714 714 int ret = 0; 715 715 int counter; 716 716 int timeout; 717 - u32 vmm_table[WILC_VMM_TBL_SIZE]; 717 + u32 *vmm_table = wilc->vmm_table; 718 718 u8 ac_pkt_num_to_chip[NQUEUES] = {0, 0, 0, 0}; 719 719 const struct wilc_hif_func *func; 720 720 int srcu_idx; ··· 1252 1252 while ((rqe = wilc_wlan_rxq_remove(wilc))) 1253 1253 kfree(rqe); 1254 1254 1255 + kfree(wilc->vmm_table); 1256 + wilc->vmm_table = NULL; 1255 1257 kfree(wilc->rx_buffer); 1256 1258 wilc->rx_buffer = NULL; 1257 1259 kfree(wilc->tx_buffer); ··· 1491 1489 goto fail; 1492 1490 } 1493 1491 1492 + if (!wilc->vmm_table) 1493 + wilc->vmm_table = kzalloc(WILC_VMM_TBL_SIZE, GFP_KERNEL); 1494 + 1495 + if (!wilc->vmm_table) { 1496 + ret = -ENOBUFS; 1497 + goto fail; 1498 + } 1499 + 1494 1500 if (!wilc->tx_buffer) 1495 1501 wilc->tx_buffer = kmalloc(WILC_TX_BUFF_SIZE, GFP_KERNEL); 1496 1502 ··· 1523 1513 return 0; 1524 1514 1525 1515 fail: 1526 - 1516 + kfree(wilc->vmm_table); 1517 + wilc->vmm_table = NULL; 1527 1518 kfree(wilc->rx_buffer); 1528 1519 wilc->rx_buffer = NULL; 1529 1520 kfree(wilc->tx_buffer);
+1 -1
drivers/net/xen-netback/xenbus.c
··· 256 256 unsigned int queue_index; 257 257 258 258 xen_unregister_watchers(vif); 259 - xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status"); 260 259 #ifdef CONFIG_DEBUG_FS 261 260 xenvif_debugfs_delif(vif); 262 261 #endif /* CONFIG_DEBUG_FS */ ··· 983 984 struct backend_info *be = dev_get_drvdata(&dev->dev); 984 985 985 986 unregister_hotplug_status_watch(be); 987 + xenbus_rm(XBT_NIL, dev->nodename, "hotplug-status"); 986 988 if (be->vif) { 987 989 kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE); 988 990 backend_disconnect(be);
+11 -3
drivers/nvme/host/core.c
··· 4703 4703 nvme_start_queues(ctrl); 4704 4704 /* read FW slot information to clear the AER */ 4705 4705 nvme_get_fw_slot_info(ctrl); 4706 + 4707 + queue_work(nvme_wq, &ctrl->async_event_work); 4706 4708 } 4707 4709 4708 4710 static u32 nvme_aer_type(u32 result) ··· 4717 4715 return (result & 0xff00) >> 8; 4718 4716 } 4719 4717 4720 - static void nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result) 4718 + static bool nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result) 4721 4719 { 4722 4720 u32 aer_notice_type = nvme_aer_subtype(result); 4721 + bool requeue = true; 4723 4722 4724 4723 trace_nvme_async_event(ctrl, aer_notice_type); 4725 4724 ··· 4737 4734 */ 4738 4735 if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING)) { 4739 4736 nvme_auth_stop(ctrl); 4737 + requeue = false; 4740 4738 queue_work(nvme_wq, &ctrl->fw_act_work); 4741 4739 } 4742 4740 break; ··· 4754 4750 default: 4755 4751 dev_warn(ctrl->device, "async event result %08x\n", result); 4756 4752 } 4753 + return requeue; 4757 4754 } 4758 4755 4759 4756 static void nvme_handle_aer_persistent_error(struct nvme_ctrl *ctrl) ··· 4770 4765 u32 result = le32_to_cpu(res->u32); 4771 4766 u32 aer_type = nvme_aer_type(result); 4772 4767 u32 aer_subtype = nvme_aer_subtype(result); 4768 + bool requeue = true; 4773 4769 4774 4770 if (le16_to_cpu(status) >> 1 != NVME_SC_SUCCESS) 4775 4771 return; 4776 4772 4777 4773 switch (aer_type) { 4778 4774 case NVME_AER_NOTICE: 4779 - nvme_handle_aen_notice(ctrl, result); 4775 + requeue = nvme_handle_aen_notice(ctrl, result); 4780 4776 break; 4781 4777 case NVME_AER_ERROR: 4782 4778 /* ··· 4798 4792 default: 4799 4793 break; 4800 4794 } 4801 - queue_work(nvme_wq, &ctrl->async_event_work); 4795 + 4796 + if (requeue) 4797 + queue_work(nvme_wq, &ctrl->async_event_work); 4802 4798 } 4803 4799 EXPORT_SYMBOL_GPL(nvme_complete_async_event); 4804 4800
+2 -5
drivers/nvme/host/tcp.c
··· 121 121 struct mutex send_mutex; 122 122 struct llist_head req_list; 123 123 struct list_head send_list; 124 - bool more_requests; 125 124 126 125 /* recv state */ 127 126 void *pdu; ··· 319 320 static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue) 320 321 { 321 322 return !list_empty(&queue->send_list) || 322 - !llist_empty(&queue->req_list) || queue->more_requests; 323 + !llist_empty(&queue->req_list); 323 324 } 324 325 325 326 static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req, ··· 338 339 */ 339 340 if (queue->io_cpu == raw_smp_processor_id() && 340 341 sync && empty && mutex_trylock(&queue->send_mutex)) { 341 - queue->more_requests = !last; 342 342 nvme_tcp_send_all(queue); 343 - queue->more_requests = false; 344 343 mutex_unlock(&queue->send_mutex); 345 344 } 346 345 ··· 1226 1229 else if (unlikely(result < 0)) 1227 1230 return; 1228 1231 1229 - if (!pending) 1232 + if (!pending || !queue->rd_enabled) 1230 1233 return; 1231 1234 1232 1235 } while (!time_after(jiffies, deadline)); /* quota is exhausted */
+4 -2
drivers/nvme/target/core.c
··· 735 735 736 736 static void __nvmet_req_complete(struct nvmet_req *req, u16 status) 737 737 { 738 + struct nvmet_ns *ns = req->ns; 739 + 738 740 if (!req->sq->sqhd_disabled) 739 741 nvmet_update_sq_head(req); 740 742 req->cqe->sq_id = cpu_to_le16(req->sq->qid); ··· 747 745 748 746 trace_nvmet_req_complete(req); 749 747 750 - if (req->ns) 751 - nvmet_put_namespace(req->ns); 752 748 req->ops->queue_response(req); 749 + if (ns) 750 + nvmet_put_namespace(ns); 753 751 } 754 752 755 753 void nvmet_req_complete(struct nvmet_req *req, u16 status)
+15 -2
drivers/nvme/target/zns.c
··· 100 100 struct nvme_id_ns_zns *id_zns; 101 101 u64 zsze; 102 102 u16 status; 103 + u32 mar, mor; 103 104 104 105 if (le32_to_cpu(req->cmd->identify.nsid) == NVME_NSID_ALL) { 105 106 req->error_loc = offsetof(struct nvme_identify, nsid); ··· 131 130 zsze = (bdev_zone_sectors(req->ns->bdev) << 9) >> 132 131 req->ns->blksize_shift; 133 132 id_zns->lbafe[0].zsze = cpu_to_le64(zsze); 134 - id_zns->mor = cpu_to_le32(bdev_max_open_zones(req->ns->bdev)); 135 - id_zns->mar = cpu_to_le32(bdev_max_active_zones(req->ns->bdev)); 133 + 134 + mor = bdev_max_open_zones(req->ns->bdev); 135 + if (!mor) 136 + mor = U32_MAX; 137 + else 138 + mor--; 139 + id_zns->mor = cpu_to_le32(mor); 140 + 141 + mar = bdev_max_active_zones(req->ns->bdev); 142 + if (!mar) 143 + mar = U32_MAX; 144 + else 145 + mar--; 146 + id_zns->mar = cpu_to_le32(mar); 136 147 137 148 done: 138 149 status = nvmet_copy_to_sgl(req, 0, id_zns, sizeof(*id_zns));
+1 -1
drivers/perf/riscv_pmu_sbi.c
··· 473 473 if (!pmu_ctr_list) 474 474 return -ENOMEM; 475 475 476 - for (i = 0; i <= nctr; i++) { 476 + for (i = 0; i < nctr; i++) { 477 477 ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_GET_INFO, i, 0, 0, 0, 0, 0); 478 478 if (ret.error) 479 479 /* The logical counter ids are not expected to be contiguous */
+7 -2
drivers/regulator/core.c
··· 2733 2733 */ 2734 2734 static int _regulator_handle_consumer_enable(struct regulator *regulator) 2735 2735 { 2736 + int ret; 2736 2737 struct regulator_dev *rdev = regulator->rdev; 2737 2738 2738 2739 lockdep_assert_held_once(&rdev->mutex.base); 2739 2740 2740 2741 regulator->enable_count++; 2741 - if (regulator->uA_load && regulator->enable_count == 1) 2742 - return drms_uA_update(rdev); 2742 + if (regulator->uA_load && regulator->enable_count == 1) { 2743 + ret = drms_uA_update(rdev); 2744 + if (ret) 2745 + regulator->enable_count--; 2746 + return ret; 2747 + } 2743 2748 2744 2749 return 0; 2745 2750 }
+1 -1
drivers/regulator/pfuze100-regulator.c
··· 766 766 ((pfuze_chip->chip_id == PFUZE3000) ? "3000" : "3001")))); 767 767 768 768 memcpy(pfuze_chip->regulator_descs, pfuze_chip->pfuze_regulators, 769 - sizeof(pfuze_chip->regulator_descs)); 769 + regulator_num * sizeof(struct pfuze_regulator)); 770 770 771 771 ret = pfuze_parse_regulators_dt(pfuze_chip); 772 772 if (ret)
+15 -13
drivers/scsi/hosts.c
··· 182 182 mutex_unlock(&shost->scan_mutex); 183 183 scsi_proc_host_rm(shost); 184 184 185 + /* 186 + * New SCSI devices cannot be attached anymore because of the SCSI host 187 + * state so drop the tag set refcnt. Wait until the tag set refcnt drops 188 + * to zero because .exit_cmd_priv implementations may need the host 189 + * pointer. 190 + */ 191 + kref_put(&shost->tagset_refcnt, scsi_mq_free_tags); 192 + wait_for_completion(&shost->tagset_freed); 193 + 185 194 spin_lock_irqsave(shost->host_lock, flags); 186 195 if (scsi_host_set_state(shost, SHOST_DEL)) 187 196 BUG_ON(scsi_host_set_state(shost, SHOST_DEL_RECOVERY)); ··· 199 190 transport_unregister_device(&shost->shost_gendev); 200 191 device_unregister(&shost->shost_dev); 201 192 device_del(&shost->shost_gendev); 202 - 203 - /* 204 - * After scsi_remove_host() has returned the scsi LLD module can be 205 - * unloaded and/or the host resources can be released. Hence wait until 206 - * the dependent SCSI targets and devices are gone before returning. 207 - */ 208 - wait_event(shost->targets_wq, atomic_read(&shost->target_count) == 0); 209 - 210 - scsi_mq_destroy_tags(shost); 211 193 } 212 194 EXPORT_SYMBOL(scsi_remove_host); 213 195 ··· 253 253 error = scsi_mq_setup_tags(shost); 254 254 if (error) 255 255 goto fail; 256 + 257 + kref_init(&shost->tagset_refcnt); 258 + init_completion(&shost->tagset_freed); 256 259 257 260 /* 258 261 * Increase usage count temporarily here so that calling ··· 312 309 return error; 313 310 314 311 /* 315 - * Any resources associated with the SCSI host in this function except 316 - * the tag set will be freed by scsi_host_dev_release(). 312 + * Any host allocation in this function will be freed in 313 + * scsi_host_dev_release(). 
317 314 */ 318 315 out_del_dev: 319 316 device_del(&shost->shost_dev); ··· 329 326 pm_runtime_disable(&shost->shost_gendev); 330 327 pm_runtime_set_suspended(&shost->shost_gendev); 331 328 pm_runtime_put_noidle(&shost->shost_gendev); 332 - scsi_mq_destroy_tags(shost); 329 + kref_put(&shost->tagset_refcnt, scsi_mq_free_tags); 333 330 fail: 334 331 return error; 335 332 } ··· 409 406 INIT_LIST_HEAD(&shost->starved_list); 410 407 init_waitqueue_head(&shost->host_wait); 411 408 mutex_init(&shost->scan_mutex); 412 - init_waitqueue_head(&shost->targets_wq); 413 409 414 410 index = ida_alloc(&host_index_ida, GFP_KERNEL); 415 411 if (index < 0) {
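The scsi/hosts.c change replaces "wait until target_count hits zero" with reference counting on the tag set: each scsi_device takes a reference (the kref_get in the scsi_scan.c hunk below), scsi_remove_host() drops the host's own reference and then waits on a completion that the kref release callback fires. A hedged sketch of that lifetime scheme, with a plain counter standing in for struct kref and a flag for the completion:

```c
static int tags_freed;            /* counts release-callback invocations */

struct host {
    int tagset_refcnt;            /* plays the role of the kref */
    int tagset_released;          /* plays the role of tagset_freed */
};

/* Release callback: runs exactly once, when the last reference drops;
 * frees the tag set and signals the waiter (complete() in the driver). */
static void free_tags(struct host *h)
{
    tags_freed++;
    h->tagset_released = 1;
}

void tagset_get(struct host *h) { h->tagset_refcnt++; }

void tagset_put(struct host *h)
{
    if (--h->tagset_refcnt == 0)
        free_tags(h);
}
```

In the driver the put in scsi_remove_host() is followed by wait_for_completion(&shost->tagset_freed), so the function only returns after every device's reference (dropped elsewhere in the series) is gone; a real kref also makes the decrement atomic, which this single-threaded sketch skips.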
+4 -1
drivers/scsi/lpfc/lpfc_init.c
··· 8053 8053 /* Allocate device driver memory */ 8054 8054 rc = lpfc_mem_alloc(phba, SGL_ALIGN_SZ); 8055 8055 if (rc) 8056 - return -ENOMEM; 8056 + goto out_destroy_workqueue; 8057 8057 8058 8058 /* IF Type 2 ports get initialized now. */ 8059 8059 if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) >= ··· 8481 8481 lpfc_destroy_bootstrap_mbox(phba); 8482 8482 out_free_mem: 8483 8483 lpfc_mem_free(phba); 8484 + out_destroy_workqueue: 8485 + destroy_workqueue(phba->wq); 8486 + phba->wq = NULL; 8484 8487 return rc; 8485 8488 } 8486 8489
+2 -2
drivers/scsi/lpfc/lpfc_scsi.c
··· 4272 4272 lpfc_cmd->result == IOERR_ABORT_REQUESTED || 4273 4273 lpfc_cmd->result == IOERR_RPI_SUSPENDED || 4274 4274 lpfc_cmd->result == IOERR_SLER_CMD_RCV_FAILURE) { 4275 - cmd->result = DID_REQUEUE << 16; 4275 + cmd->result = DID_TRANSPORT_DISRUPTED << 16; 4276 4276 break; 4277 4277 } 4278 4278 if ((lpfc_cmd->result == IOERR_RX_DMA_FAILED || ··· 4562 4562 lpfc_cmd->result == IOERR_NO_RESOURCES || 4563 4563 lpfc_cmd->result == IOERR_ABORT_REQUESTED || 4564 4564 lpfc_cmd->result == IOERR_SLER_CMD_RCV_FAILURE) { 4565 - cmd->result = DID_REQUEUE << 16; 4565 + cmd->result = DID_TRANSPORT_DISRUPTED << 16; 4566 4566 break; 4567 4567 } 4568 4568 if ((lpfc_cmd->result == IOERR_RX_DMA_FAILED ||
+1 -1
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 3670 3670 fw_event = list_first_entry(&ioc->fw_event_list, 3671 3671 struct fw_event_work, list); 3672 3672 list_del_init(&fw_event->list); 3673 + fw_event_work_put(fw_event); 3673 3674 } 3674 3675 spin_unlock_irqrestore(&ioc->fw_event_lock, flags); 3675 3676 ··· 3752 3751 if (cancel_work_sync(&fw_event->work)) 3753 3752 fw_event_work_put(fw_event); 3754 3753 3755 - fw_event_work_put(fw_event); 3756 3754 } 3757 3755 ioc->fw_events_cleanup = 0; 3758 3756 }
+3 -6
drivers/scsi/scsi.c
··· 586 586 */ 587 587 void scsi_device_put(struct scsi_device *sdev) 588 588 { 589 - /* 590 - * Decreasing the module reference count before the device reference 591 - * count is safe since scsi_remove_host() only returns after all 592 - * devices have been removed. 593 - */ 594 - module_put(sdev->host->hostt->module); 589 + struct module *mod = sdev->host->hostt->module; 590 + 595 591 put_device(&sdev->sdev_gendev); 592 + module_put(mod); 596 593 } 597 594 EXPORT_SYMBOL(scsi_device_put); 598 595
+5 -1
drivers/scsi/scsi_lib.c
··· 1983 1983 return blk_mq_alloc_tag_set(tag_set); 1984 1984 } 1985 1985 1986 - void scsi_mq_destroy_tags(struct Scsi_Host *shost) 1986 + void scsi_mq_free_tags(struct kref *kref) 1987 1987 { 1988 + struct Scsi_Host *shost = container_of(kref, typeof(*shost), 1989 + tagset_refcnt); 1990 + 1988 1991 blk_mq_free_tag_set(&shost->tag_set); 1992 + complete(&shost->tagset_freed); 1989 1993 } 1990 1994 1991 1995 /**
+1 -1
drivers/scsi/scsi_priv.h
··· 94 94 extern void scsi_requeue_run_queue(struct work_struct *work); 95 95 extern void scsi_start_queue(struct scsi_device *sdev); 96 96 extern int scsi_mq_setup_tags(struct Scsi_Host *shost); 97 - extern void scsi_mq_destroy_tags(struct Scsi_Host *shost); 97 + extern void scsi_mq_free_tags(struct kref *kref); 98 98 extern void scsi_exit_queue(void); 99 99 extern void scsi_evt_thread(struct work_struct *work); 100 100
+1 -9
drivers/scsi/scsi_scan.c
··· 340 340 kfree(sdev); 341 341 goto out; 342 342 } 343 + kref_get(&sdev->host->tagset_refcnt); 343 344 sdev->request_queue = q; 344 345 q->queuedata = sdev; 345 346 __scsi_init_queue(sdev->host, q); ··· 407 406 static void scsi_target_dev_release(struct device *dev) 408 407 { 409 408 struct device *parent = dev->parent; 410 - struct Scsi_Host *shost = dev_to_shost(parent); 411 409 struct scsi_target *starget = to_scsi_target(dev); 412 410 413 411 kfree(starget); 414 - 415 - if (atomic_dec_return(&shost->target_count) == 0) 416 - wake_up(&shost->targets_wq); 417 - 418 412 put_device(parent); 419 413 } 420 414 ··· 522 526 starget->state = STARGET_CREATED; 523 527 starget->scsi_level = SCSI_2; 524 528 starget->max_target_blocked = SCSI_DEFAULT_TARGET_BLOCKED; 525 - init_waitqueue_head(&starget->sdev_wq); 526 - 527 - atomic_inc(&shost->target_count); 528 - 529 529 retry: 530 530 spin_lock_irqsave(shost->host_lock, flags); 531 531
+13 -17
drivers/scsi/scsi_sysfs.c
··· 443 443 444 444 static void scsi_device_dev_release_usercontext(struct work_struct *work) 445 445 { 446 - struct scsi_device *sdev = container_of(work, struct scsi_device, 447 - ew.work); 448 - struct scsi_target *starget = sdev->sdev_target; 446 + struct scsi_device *sdev; 449 447 struct device *parent; 450 448 struct list_head *this, *tmp; 451 449 struct scsi_vpd *vpd_pg80 = NULL, *vpd_pg83 = NULL; 452 450 struct scsi_vpd *vpd_pg0 = NULL, *vpd_pg89 = NULL; 453 451 struct scsi_vpd *vpd_pgb0 = NULL, *vpd_pgb1 = NULL, *vpd_pgb2 = NULL; 454 452 unsigned long flags; 453 + struct module *mod; 454 + 455 + sdev = container_of(work, struct scsi_device, ew.work); 456 + 457 + mod = sdev->host->hostt->module; 455 458 456 459 scsi_dh_release_device(sdev); 457 460 ··· 516 513 kfree(sdev->inquiry); 517 514 kfree(sdev); 518 515 519 - if (starget && atomic_dec_return(&starget->sdev_count) == 0) 520 - wake_up(&starget->sdev_wq); 521 - 522 516 if (parent) 523 517 put_device(parent); 518 + module_put(mod); 524 519 } 525 520 526 521 static void scsi_device_dev_release(struct device *dev) 527 522 { 528 523 struct scsi_device *sdp = to_scsi_device(dev); 524 + 525 + /* Set module pointer as NULL in case of module unloading */ 526 + if (!try_module_get(sdp->host->hostt->module)) 527 + sdp->host->hostt->module = NULL; 528 + 529 529 execute_in_process_context(scsi_device_dev_release_usercontext, 530 530 &sdp->ew); 531 531 } ··· 1476 1470 mutex_unlock(&sdev->state_mutex); 1477 1471 1478 1472 blk_mq_destroy_queue(sdev->request_queue); 1473 + kref_put(&sdev->host->tagset_refcnt, scsi_mq_free_tags); 1479 1474 cancel_work_sync(&sdev->requeue_work); 1480 1475 1481 1476 if (sdev->host->hostt->slave_destroy) ··· 1536 1529 goto restart; 1537 1530 } 1538 1531 spin_unlock_irqrestore(shost->host_lock, flags); 1539 - 1540 - /* 1541 - * After scsi_remove_target() returns its caller can remove resources 1542 - * associated with @starget, e.g. an rport or session. Wait until all 1543 - * devices associated with @starget have been removed to prevent that 1544 - * a SCSI error handling callback function triggers a use-after-free. 1545 - */ 1546 - wait_event(starget->sdev_wq, atomic_read(&starget->sdev_count) == 0); 1547 1532 } 1548 1533 1549 1534 /** ··· 1646 1647 list_add_tail(&sdev->same_target_siblings, &starget->devices); 1647 1648 list_add_tail(&sdev->siblings, &shost->__devices); 1648 1649 spin_unlock_irqrestore(shost->host_lock, flags); 1649 - 1650 - atomic_inc(&starget->sdev_count); 1651 - 1652 1650 /* 1653 1651 * device can now only be removed via __scsi_remove_device() so hold 1654 1652 * the target. Target will be held in CREATED state until something
+38 -10
drivers/soc/bcm/brcmstb/pm/pm-arm.c
··· 684 684 const struct of_device_id *of_id = NULL; 685 685 struct device_node *dn; 686 686 void __iomem *base; 687 - int ret, i; 687 + int ret, i, s; 688 688 689 689 /* AON ctrl registers */ 690 690 base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 0, NULL); 691 691 if (IS_ERR(base)) { 692 692 pr_err("error mapping AON_CTRL\n"); 693 - return PTR_ERR(base); 693 + ret = PTR_ERR(base); 694 + goto aon_err; 694 695 } 695 696 ctrl.aon_ctrl_base = base; 696 697 ··· 701 700 /* Assume standard offset */ 702 701 ctrl.aon_sram = ctrl.aon_ctrl_base + 703 702 AON_CTRL_SYSTEM_DATA_RAM_OFS; 703 + s = 0; 704 704 } else { 705 705 ctrl.aon_sram = base; 706 + s = 1; 706 707 } 707 708 708 709 writel_relaxed(0, ctrl.aon_sram + AON_REG_PANIC); ··· 714 711 (const void **)&ddr_phy_data); 715 712 if (IS_ERR(base)) { 716 713 pr_err("error mapping DDR PHY\n"); 717 - return PTR_ERR(base); 714 + ret = PTR_ERR(base); 715 + goto ddr_phy_err; 718 716 } 719 717 ctrl.support_warm_boot = ddr_phy_data->supports_warm_boot; 720 718 ctrl.pll_status_offset = ddr_phy_data->pll_status_offset; ··· 735 731 for_each_matching_node(dn, ddr_shimphy_dt_ids) { 736 732 i = ctrl.num_memc; 737 733 if (i >= MAX_NUM_MEMC) { 734 + of_node_put(dn); 738 735 pr_warn("too many MEMCs (max %d)\n", MAX_NUM_MEMC); 739 736 break; 740 737 } 741 738 742 739 base = of_io_request_and_map(dn, 0, dn->full_name); 743 740 if (IS_ERR(base)) { 741 + of_node_put(dn); 744 742 if (!ctrl.support_warm_boot) 745 743 break; 746 744 747 745 pr_err("error mapping DDR SHIMPHY %d\n", i); 748 - return PTR_ERR(base); 746 + ret = PTR_ERR(base); 747 + goto ddr_shimphy_err; 749 748 } 750 749 ctrl.memcs[i].ddr_shimphy_base = base; 751 750 ctrl.num_memc++; ··· 759 752 for_each_matching_node(dn, brcmstb_memc_of_match) { 760 753 base = of_iomap(dn, 0); 761 754 if (!base) { 755 + of_node_put(dn); 762 756 pr_err("error mapping DDR Sequencer %d\n", i); 763 - return -ENOMEM; 757 + ret = -ENOMEM; 758 + goto brcmstb_memc_err; 764 759 } 765 760 766 761 of_id = of_match_node(brcmstb_memc_of_match, dn); 767 762 if (!of_id) { 768 763 iounmap(base); 769 - return -EINVAL; 764 + of_node_put(dn); 765 + ret = -EINVAL; 766 + goto brcmstb_memc_err; 770 767 } 771 768 772 769 ddr_seq_data = of_id->data; ··· 790 779 dn = of_find_matching_node(NULL, sram_dt_ids); 791 780 if (!dn) { 792 781 pr_err("SRAM not found\n"); 793 - return -EINVAL; 782 + ret = -EINVAL; 783 + goto brcmstb_memc_err; 794 784 } 795 785 796 786 ret = brcmstb_init_sram(dn); 797 787 of_node_put(dn); 798 788 if (ret) { 799 789 pr_err("error setting up SRAM for PM\n"); 800 - return ret; 790 + goto brcmstb_memc_err; 801 791 } 802 792 803 793 ctrl.pdev = pdev; 804 794 805 795 ctrl.s3_params = kmalloc(sizeof(*ctrl.s3_params), GFP_KERNEL); 806 - if (!ctrl.s3_params) 807 - return -ENOMEM; 796 + if (!ctrl.s3_params) { 797 + ret = -ENOMEM; 798 + goto s3_params_err; 799 + } 808 800 ctrl.s3_params_pa = dma_map_single(&pdev->dev, ctrl.s3_params, 809 801 sizeof(*ctrl.s3_params), 810 802 DMA_TO_DEVICE); ··· 827 813 828 814 out: 829 815 kfree(ctrl.s3_params); 816 + s3_params_err: 817 + iounmap(ctrl.boot_sram); 818 + brcmstb_memc_err: 819 + for (i--; i >= 0; i--) 820 + iounmap(ctrl.memcs[i].ddr_ctrl); 821 + ddr_shimphy_err: 822 + for (i = 0; i < ctrl.num_memc; i++) 823 + iounmap(ctrl.memcs[i].ddr_shimphy_base); 830 824 825 + iounmap(ctrl.memcs[0].ddr_phy_base); 826 + ddr_phy_err: 827 + iounmap(ctrl.aon_ctrl_base); 828 + if (s) 829 + iounmap(ctrl.aon_sram); 830 + aon_err: 831 831 pr_warn("PM: initialization failed with code %d\n", ret); 832 832 833 833 return ret;
+1
drivers/soc/fsl/Kconfig
··· 24 24 tristate "QorIQ DPAA2 DPIO driver" 25 25 depends on FSL_MC_BUS 26 26 select SOC_BUS 27 + select FSL_GUTS 27 28 select DIMLIB 28 29 help 29 30 Driver for the DPAA2 DPIO object. A DPIO provides queue and
+4 -1
drivers/soc/imx/gpcv2.c
··· 335 335 } 336 336 } 337 337 338 + reset_control_assert(domain->reset); 339 + 338 340 /* Enable reset clocks for all devices in the domain */ 339 341 ret = clk_bulk_prepare_enable(domain->num_clks, domain->clks); 340 342 if (ret) { ··· 344 342 goto out_regulator_disable; 345 343 } 346 344 347 - reset_control_assert(domain->reset); 345 + /* delays for reset to propagate */ 346 + udelay(5); 348 347 349 348 if (domain->bits.pxx) { 350 349 /* request the domain to power up */
-1
drivers/soc/imx/imx8m-blk-ctrl.c
··· 243 243 ret = PTR_ERR(domain->power_dev); 244 244 goto cleanup_pds; 245 245 } 246 - dev_set_name(domain->power_dev, "%s", data->name); 247 246 248 247 domain->genpd.name = data->name; 249 248 domain->genpd.power_on = imx8m_blk_ctrl_power_on;
+4 -2
drivers/spi/spi-bitbang-txrx.h
··· 116 116 { 117 117 /* if (cpol == 0) this is SPI_MODE_0; else this is SPI_MODE_2 */ 118 118 119 + u8 rxbit = bits - 1; 119 120 u32 oldbit = !(word & 1); 120 121 /* clock starts at inactive polarity */ 121 122 for (; likely(bits); bits--) { ··· 136 135 /* sample LSB (from slave) on leading edge */ 137 136 word >>= 1; 138 137 if ((flags & SPI_MASTER_NO_RX) == 0) 139 - word |= getmiso(spi) << (bits - 1); 138 + word |= getmiso(spi) << rxbit; 140 139 setsck(spi, cpol); 141 140 } 142 141 return word; ··· 149 148 { 150 149 /* if (cpol == 0) this is SPI_MODE_1; else this is SPI_MODE_3 */ 151 150 151 + u8 rxbit = bits - 1; 152 152 u32 oldbit = !(word & 1); 153 153 /* clock starts at inactive polarity */ 154 154 for (; likely(bits); bits--) { ··· 170 168 /* sample LSB (from slave) on trailing edge */ 171 169 word >>= 1; 172 170 if ((flags & SPI_MASTER_NO_RX) == 0) 173 - word |= getmiso(spi) << (bits - 1); 171 + word |= getmiso(spi) << rxbit; 174 172 } 175 173 return word; 176 174 }
+34 -4
drivers/spi/spi-cadence-quadspi.c
··· 39 39 #define CQSPI_DISABLE_DAC_MODE BIT(1) 40 40 #define CQSPI_SUPPORT_EXTERNAL_DMA BIT(2) 41 41 #define CQSPI_NO_SUPPORT_WR_COMPLETION BIT(3) 42 + #define CQSPI_SLOW_SRAM BIT(4) 42 43 43 44 /* Capabilities */ 44 45 #define CQSPI_SUPPORTS_OCTAL BIT(0) ··· 88 87 bool use_dma_read; 89 88 u32 pd_dev_id; 90 89 bool wr_completion; 90 + bool slow_sram; 91 91 }; 92 92 93 93 struct cqspi_driver_platdata { ··· 335 333 } 336 334 } 337 335 338 - irq_status &= CQSPI_IRQ_MASK_RD | CQSPI_IRQ_MASK_WR; 336 + else if (!cqspi->slow_sram) 337 + irq_status &= CQSPI_IRQ_MASK_RD | CQSPI_IRQ_MASK_WR; 338 + else 339 + irq_status &= CQSPI_REG_IRQ_WATERMARK | CQSPI_IRQ_MASK_WR; 339 340 340 341 if (irq_status) 341 342 complete(&cqspi->transfer_complete); ··· 678 673 /* Clear all interrupts. */ 679 674 writel(CQSPI_IRQ_STATUS_MASK, reg_base + CQSPI_REG_IRQSTATUS); 680 675 681 - writel(CQSPI_IRQ_MASK_RD, reg_base + CQSPI_REG_IRQMASK); 676 + /* 677 + * On SoCFPGA platform reading the SRAM is slow due to 678 + * hardware limitation and causing read interrupt storm to CPU, 679 + * so enabling only watermark interrupt to disable all read 680 + * interrupts later as we want to run "bytes to read" loop with 681 + * all the read interrupts disabled for max performance. 682 + */ 683 + 684 + if (!cqspi->slow_sram) 685 + writel(CQSPI_IRQ_MASK_RD, reg_base + CQSPI_REG_IRQMASK); 686 + else 687 + writel(CQSPI_REG_IRQ_WATERMARK, reg_base + CQSPI_REG_IRQMASK); 682 688 683 689 reinit_completion(&cqspi->transfer_complete); 684 690 writel(CQSPI_REG_INDIRECTRD_START_MASK, ··· 699 683 if (!wait_for_completion_timeout(&cqspi->transfer_complete, 700 684 msecs_to_jiffies(CQSPI_READ_TIMEOUT_MS))) 701 685 ret = -ETIMEDOUT; 686 + 687 + /* 688 + * Disable all read interrupts until 689 + * we are out of "bytes to read" 690 + */ 691 + if (cqspi->slow_sram) 692 + writel(0x0, reg_base + CQSPI_REG_IRQMASK); 702 693 703 694 bytes_to_read = cqspi_get_rd_sram_level(cqspi); ··· 738 715 bytes_to_read = cqspi_get_rd_sram_level(cqspi); 739 716 } 740 717 741 - if (remaining > 0) 718 + if (remaining > 0) { 742 719 reinit_completion(&cqspi->transfer_complete); 720 + if (cqspi->slow_sram) 721 + writel(CQSPI_REG_IRQ_WATERMARK, reg_base + CQSPI_REG_IRQMASK); 722 + } 743 723 } 744 724 745 725 /* Check indirect done status */ ··· 1693 1667 cqspi->use_dma_read = true; 1694 1668 if (ddata->quirks & CQSPI_NO_SUPPORT_WR_COMPLETION) 1695 1669 cqspi->wr_completion = false; 1670 + if (ddata->quirks & CQSPI_SLOW_SRAM) 1671 + cqspi->slow_sram = true; 1696 1672 1697 1673 if (of_device_is_compatible(pdev->dev.of_node, 1698 1674 "xlnx,versal-ospi-1.0")) ··· 1807 1779 }; 1808 1780 1809 1781 static const struct cqspi_driver_platdata socfpga_qspi = { 1810 - .quirks = CQSPI_DISABLE_DAC_MODE | CQSPI_NO_SUPPORT_WR_COMPLETION, 1782 + .quirks = CQSPI_DISABLE_DAC_MODE 1783 + | CQSPI_NO_SUPPORT_WR_COMPLETION 1784 + | CQSPI_SLOW_SRAM, 1811 1785 }; 1812 1786 1813 1787 static const struct cqspi_driver_platdata versal_ospi = {
+1
drivers/spi/spi-mux.c
··· 161 161 ctlr->num_chipselect = mux_control_states(priv->mux); 162 162 ctlr->bus_num = -1; 163 163 ctlr->dev.of_node = spi->dev.of_node; 164 + ctlr->must_async = true; 164 165 165 166 ret = devm_spi_register_controller(&spi->dev, ctlr); 166 167 if (ret)
+2 -3
drivers/spi/spi.c
··· 1727 1727 spin_unlock_irqrestore(&ctlr->queue_lock, flags); 1728 1728 1729 1729 ret = __spi_pump_transfer_message(ctlr, msg, was_busy); 1730 - if (!ret) 1731 - kthread_queue_work(ctlr->kworker, &ctlr->pump_messages); 1730 + kthread_queue_work(ctlr->kworker, &ctlr->pump_messages); 1732 1731 1733 1732 ctlr->cur_msg = NULL; 1734 1733 ctlr->fallback = false; ··· 4032 4033 * guard against reentrancy from a different context. The io_mutex 4033 4034 * will catch those cases. 4034 4035 */ 4035 - if (READ_ONCE(ctlr->queue_empty)) { 4036 + if (READ_ONCE(ctlr->queue_empty) && !ctlr->must_async) { 4036 4037 message->actual_length = 0; 4037 4038 message->status = -EINPROGRESS; 4038 4039
+1
drivers/tee/tee_shm.c
··· 9 9 #include <linux/sched.h> 10 10 #include <linux/slab.h> 11 11 #include <linux/tee_drv.h> 12 + #include <linux/uaccess.h> 12 13 #include <linux/uio.h> 13 14 #include "tee_private.h" 14 15
+1 -2
drivers/thunderbolt/Kconfig
··· 29 29 30 30 config USB4_KUNIT_TEST 31 31 bool "KUnit tests" if !KUNIT_ALL_TESTS 32 - depends on (USB4=m || KUNIT=y) 33 - depends on KUNIT 32 + depends on USB4 && KUNIT=y 34 33 default KUNIT_ALL_TESTS 35 34 36 35 config USB4_DMA_TEST
+12
drivers/vfio/vfio_iommu_type1.c
··· 558 558 ret = pin_user_pages_remote(mm, vaddr, npages, flags | FOLL_LONGTERM, 559 559 pages, NULL, NULL); 560 560 if (ret > 0) { 561 + int i; 562 + 563 + /* 564 + * The zero page is always resident, we don't need to pin it 565 + * and it falls into our invalid/reserved test so we don't 566 + * unpin in put_pfn(). Unpin all zero pages in the batch here. 567 + */ 568 + for (i = 0 ; i < ret; i++) { 569 + if (unlikely(is_zero_pfn(page_to_pfn(pages[i])))) 570 + unpin_user_page(pages[i]); 571 + } 572 + 561 573 *pfn = page_to_pfn(pages[0]); 562 574 goto done; 563 575 }
+1 -1
drivers/virt/nitro_enclaves/Kconfig
··· 17 17 18 18 config NITRO_ENCLAVES_MISC_DEV_TEST 19 19 bool "Tests for the misc device functionality of the Nitro Enclaves" if !KUNIT_ALL_TESTS 20 - depends on NITRO_ENCLAVES && KUNIT 20 + depends on NITRO_ENCLAVES && KUNIT=y 21 21 default KUNIT_ALL_TESTS 22 22 help 23 23 Enable KUnit tests for the misc device functionality of the Nitro
+1 -1
fs/afs/flock.c
··· 76 76 if (call->error == 0) { 77 77 spin_lock(&vnode->lock); 78 78 trace_afs_flock_ev(vnode, NULL, afs_flock_timestamp, 0); 79 - vnode->locked_at = call->reply_time; 79 + vnode->locked_at = call->issue_time; 80 80 afs_schedule_lock_extension(vnode); 81 81 spin_unlock(&vnode->lock); 82 82 }
+1 -1
fs/afs/fsclient.c
··· 131 131 132 132 static time64_t xdr_decode_expiry(struct afs_call *call, u32 expiry) 133 133 { 134 - return ktime_divns(call->reply_time, NSEC_PER_SEC) + expiry; 134 + return ktime_divns(call->issue_time, NSEC_PER_SEC) + expiry; 135 135 } 136 136 137 137 static void xdr_decode_AFSCallBack(const __be32 **_bp,
+1 -2
fs/afs/internal.h
··· 137 137 bool need_attention; /* T if RxRPC poked us */ 138 138 bool async; /* T if asynchronous */ 139 139 bool upgrade; /* T to request service upgrade */ 140 - bool have_reply_time; /* T if have got reply_time */ 141 140 bool intr; /* T if interruptible */ 142 141 bool unmarshalling_error; /* T if an unmarshalling error occurred */ 143 142 u16 service_id; /* Actual service ID (after upgrade) */ ··· 150 151 } __attribute__((packed)); 151 152 __be64 tmp64; 152 153 }; 153 - ktime_t reply_time; /* Time of first reply packet */ 154 + ktime_t issue_time; /* Time of issue of operation */ 154 155 }; 155 156 156 157 struct afs_call_type {
+1
fs/afs/misc.c
··· 69 69 /* Unified AFS error table */ 70 70 case UAEPERM: return -EPERM; 71 71 case UAENOENT: return -ENOENT; 72 + case UAEAGAIN: return -EAGAIN; 72 73 case UAEACCES: return -EACCES; 73 74 case UAEBUSY: return -EBUSY; 74 75 case UAEEXIST: return -EEXIST;
+1 -6
fs/afs/rxrpc.c
··· 351 351 if (call->max_lifespan) 352 352 rxrpc_kernel_set_max_life(call->net->socket, rxcall, 353 353 call->max_lifespan); 354 + call->issue_time = ktime_get_real(); 354 355 355 356 /* send the request */ 356 357 iov[0].iov_base = call->request; ··· 501 500 } 502 501 return; 503 502 } 504 - 505 - if (!call->have_reply_time && 506 - rxrpc_kernel_get_reply_time(call->net->socket, 507 - call->rxcall, 508 - &call->reply_time)) 509 - call->have_reply_time = true; 510 503 511 504 ret = call->type->deliver(call); 512 505 state = READ_ONCE(call->state);
+1 -2
fs/afs/yfsclient.c
··· 232 232 struct afs_callback *cb = &scb->callback; 233 233 ktime_t cb_expiry; 234 234 235 - cb_expiry = call->reply_time; 236 - cb_expiry = ktime_add(cb_expiry, xdr_to_u64(x->expiration_time) * 100); 235 + cb_expiry = ktime_add(call->issue_time, xdr_to_u64(x->expiration_time) * 100); 237 236 cb->expires_at = ktime_divns(cb_expiry, NSEC_PER_SEC); 238 237 scb->have_cb = true; 239 238 *_bp += xdr_size(x);
-2
fs/btrfs/ctree.h
··· 1088 1088 1089 1089 spinlock_t zone_active_bgs_lock; 1090 1090 struct list_head zone_active_bgs; 1091 - /* Waiters when BTRFS_FS_NEED_ZONE_FINISH is set */ 1092 - wait_queue_head_t zone_finish_wait; 1093 1091 1094 1092 /* Updates are not protected by any lock */ 1095 1093 struct btrfs_commit_stats commit_stats;
-1
fs/btrfs/disk-io.c
··· 3068 3068 init_waitqueue_head(&fs_info->transaction_blocked_wait); 3069 3069 init_waitqueue_head(&fs_info->async_submit_wait); 3070 3070 init_waitqueue_head(&fs_info->delayed_iputs_wait); 3071 - init_waitqueue_head(&fs_info->zone_finish_wait); 3072 3071 3073 3072 /* Usable values until the real ones are cached from the superblock */ 3074 3073 fs_info->nodesize = 4096;
+3 -4
fs/btrfs/inode.c
··· 1644 1644 done_offset = end; 1645 1645 1646 1646 if (done_offset == start) { 1647 - struct btrfs_fs_info *info = inode->root->fs_info; 1648 - 1649 - wait_var_event(&info->zone_finish_wait, 1650 - !test_bit(BTRFS_FS_NEED_ZONE_FINISH, &info->flags)); 1647 + wait_on_bit_io(&inode->root->fs_info->flags, 1648 + BTRFS_FS_NEED_ZONE_FINISH, 1649 + TASK_UNINTERRUPTIBLE); 1651 1650 continue; 1652 1651 } 1653 1652
+1 -1
fs/btrfs/space-info.c
··· 199 199 ASSERT(flags & BTRFS_BLOCK_GROUP_TYPE_MASK); 200 200 201 201 if (flags & BTRFS_BLOCK_GROUP_DATA) 202 - return SZ_1G; 202 + return BTRFS_MAX_DATA_CHUNK_SIZE; 203 203 else if (flags & BTRFS_BLOCK_GROUP_SYSTEM) 204 204 return SZ_32M; 205 205
+3
fs/btrfs/volumes.c
··· 5267 5267 ctl->stripe_size); 5268 5268 } 5269 5269 5270 + /* Stripe size should not go beyond 1G. */ 5271 + ctl->stripe_size = min_t(u64, ctl->stripe_size, SZ_1G); 5272 + 5270 5273 /* Align to BTRFS_STRIPE_LEN */ 5271 5274 ctl->stripe_size = round_down(ctl->stripe_size, BTRFS_STRIPE_LEN); 5272 5275 ctl->chunk_size = ctl->stripe_size * data_stripes;
+53 -46
fs/btrfs/zoned.c
··· 421 421 * since btrfs adds the pages one by one to a bio, and btrfs cannot 422 422 * increase the metadata reservation even if it increases the number of 423 423 * extents, it is safe to stick with the limit. 424 + * 425 + * With the zoned emulation, we can have non-zoned device on the zoned 426 + * mode. In this case, we don't have a valid max zone append size. So, 427 + * use max_segments * PAGE_SIZE as the pseudo max_zone_append_size. 424 428 */ 425 - zone_info->max_zone_append_size = 426 - min_t(u64, (u64)bdev_max_zone_append_sectors(bdev) << SECTOR_SHIFT, 427 - (u64)bdev_max_segments(bdev) << PAGE_SHIFT); 429 + if (bdev_is_zoned(bdev)) { 430 + zone_info->max_zone_append_size = min_t(u64, 431 + (u64)bdev_max_zone_append_sectors(bdev) << SECTOR_SHIFT, 432 + (u64)bdev_max_segments(bdev) << PAGE_SHIFT); 433 + } else { 434 + zone_info->max_zone_append_size = 435 + (u64)bdev_max_segments(bdev) << PAGE_SHIFT; 436 + } 428 437 if (!IS_ALIGNED(nr_sectors, zone_sectors)) 429 438 zone_info->nr_zones++; 430 439 ··· 1187 1178 * offset. 1188 1179 */ 1189 1180 static int calculate_alloc_pointer(struct btrfs_block_group *cache, 1190 - u64 *offset_ret) 1181 + u64 *offset_ret, bool new) 1191 1182 { 1192 1183 struct btrfs_fs_info *fs_info = cache->fs_info; 1193 1184 struct btrfs_root *root; ··· 1196 1187 struct btrfs_key found_key; 1197 1188 int ret; 1198 1189 u64 length; 1190 + 1191 + /* 1192 + * Avoid tree lookups for a new block group, there's no use for it. 1193 + * It must always be 0. 1194 + * 1195 + * Also, we have a lock chain of extent buffer lock -> chunk mutex. 1196 + * For new a block group, this function is called from 1197 + * btrfs_make_block_group() which is already taking the chunk mutex. 1198 + * Thus, we cannot call calculate_alloc_pointer() which takes extent 1199 + * buffer locks to avoid deadlock. 1200 + */ 1201 + if (new) { 1202 + *offset_ret = 0; 1203 + return 0; 1204 + } 1199 1205 1200 1206 path = btrfs_alloc_path(); 1201 1207 if (!path) ··· 1347 1323 else 1348 1324 num_conventional++; 1349 1325 1326 + /* 1327 + * Consider a zone as active if we can allow any number of 1328 + * active zones. 1329 + */ 1330 + if (!device->zone_info->max_active_zones) 1331 + __set_bit(i, active); 1332 + 1350 1333 if (!is_sequential) { 1351 1334 alloc_offsets[i] = WP_CONVENTIONAL; 1352 1335 continue; ··· 1420 1389 __set_bit(i, active); 1421 1390 break; 1422 1391 } 1423 - 1424 - /* 1425 - * Consider a zone as active if we can allow any number of 1426 - * active zones. 1427 - */ 1428 - if (!device->zone_info->max_active_zones) 1429 - __set_bit(i, active); 1430 1392 } 1431 1393 1432 1394 if (num_sequential > 0) 1433 1395 cache->seq_zone = true; 1434 1396 1435 1397 if (num_conventional > 0) { 1436 - /* 1437 - * Avoid calling calculate_alloc_pointer() for new BG. It 1438 - * is no use for new BG. It must be always 0. 1439 - * 1440 - * Also, we have a lock chain of extent buffer lock -> 1441 - * chunk mutex. For new BG, this function is called from 1442 - * btrfs_make_block_group() which is already taking the 1443 - * chunk mutex. Thus, we cannot call 1444 - * calculate_alloc_pointer() which takes extent buffer 1445 - * locks to avoid deadlock. 1446 - */ 1447 - 1448 1398 /* Zone capacity is always zone size in emulation */ 1449 1399 cache->zone_capacity = cache->length; 1450 - if (new) { 1451 - cache->alloc_offset = 0; 1452 - goto out; 1453 - } 1454 - ret = calculate_alloc_pointer(cache, &last_alloc); 1455 - if (ret || map->num_stripes == num_conventional) { 1456 - if (!ret) 1457 - cache->alloc_offset = last_alloc; 1458 - else 1459 - btrfs_err(fs_info, 1400 + ret = calculate_alloc_pointer(cache, &last_alloc, new); 1401 + if (ret) { 1402 + btrfs_err(fs_info, 1460 1403 "zoned: failed to determine allocation offset of bg %llu", 1461 - cache->start); 1404 + cache->start); 1405 + goto out; 1406 + } else if (map->num_stripes == num_conventional) { 1407 + cache->alloc_offset = last_alloc; 1408 + cache->zone_is_active = 1; 1462 1409 goto out; 1463 1410 } 1464 1411 } ··· 1504 1495 goto out; 1505 1496 } 1506 1497 1507 - if (cache->zone_is_active) { 1508 - btrfs_get_block_group(cache); 1509 - spin_lock(&fs_info->zone_active_bgs_lock); 1510 - list_add_tail(&cache->active_bg_list, &fs_info->zone_active_bgs); 1511 - spin_unlock(&fs_info->zone_active_bgs_lock); 1512 - } 1513 - 1514 1498 out: 1515 1499 if (cache->alloc_offset > fs_info->zone_size) { 1516 1500 btrfs_err(fs_info, ··· 1528 1526 ret = -EIO; 1529 1527 } 1530 1528 1531 - if (!ret) 1529 + if (!ret) { 1532 1530 cache->meta_write_pointer = cache->alloc_offset + cache->start; 1533 - 1534 - if (ret) { 1531 + if (cache->zone_is_active) { 1532 + btrfs_get_block_group(cache); 1533 + spin_lock(&fs_info->zone_active_bgs_lock); 1534 + list_add_tail(&cache->active_bg_list, 1535 + &fs_info->zone_active_bgs); 1536 + spin_unlock(&fs_info->zone_active_bgs_lock); 1537 + } 1538 + } else { 1535 1539 kfree(cache->physical_map); 1536 1540 cache->physical_map = NULL; 1537 1541 } ··· 2015 2007 /* For active_bg_list */ 2016 2008 btrfs_put_block_group(block_group); 2017 2009 2018 - clear_bit(BTRFS_FS_NEED_ZONE_FINISH, &fs_info->flags); 2019 - wake_up_all(&fs_info->zone_finish_wait); 2010 + clear_and_wake_up_bit(BTRFS_FS_NEED_ZONE_FINISH, &fs_info->flags); 2020 2011 2021 2012 return 0; 2022 2013 }
+22
fs/debugfs/inode.c
··· 745 745 EXPORT_SYMBOL_GPL(debugfs_remove); 746 746 747 747 /** 748 + * debugfs_lookup_and_remove - lookup a directory or file and recursively remove it 749 + * @name: a pointer to a string containing the name of the item to look up. 750 + * @parent: a pointer to the parent dentry of the item. 751 + * 752 + * This is the equlivant of doing something like 753 + * debugfs_remove(debugfs_lookup(..)) but with the proper reference counting 754 + * handled for the directory being looked up. 755 + */ 756 + void debugfs_lookup_and_remove(const char *name, struct dentry *parent) 757 + { 758 + struct dentry *dentry; 759 + 760 + dentry = debugfs_lookup(name, parent); 761 + if (!dentry) 762 + return; 763 + 764 + debugfs_remove(dentry); 765 + dput(dentry); 766 + } 767 + EXPORT_SYMBOL_GPL(debugfs_lookup_and_remove); 768 + 769 + /** 748 770 * debugfs_rename - rename a file/directory in the debugfs filesystem 749 771 * @old_dir: a pointer to the parent dentry for the renamed object. This 750 772 * should be a directory dentry.
+6 -2
fs/erofs/fscache.c
··· 222 222 223 223 rreq = erofs_fscache_alloc_request(folio_mapping(folio), 224 224 folio_pos(folio), folio_size(folio)); 225 - if (IS_ERR(rreq)) 225 + if (IS_ERR(rreq)) { 226 + ret = PTR_ERR(rreq); 226 227 goto out; 228 + } 227 229 228 230 return erofs_fscache_read_folios_async(mdev.m_fscache->cookie, 229 231 rreq, mdev.m_pa); ··· 303 301 304 302 rreq = erofs_fscache_alloc_request(folio_mapping(folio), 305 303 folio_pos(folio), folio_size(folio)); 306 - if (IS_ERR(rreq)) 304 + if (IS_ERR(rreq)) { 305 + ret = PTR_ERR(rreq); 307 306 goto out_unlock; 307 + } 308 308 309 309 pstart = mdev.m_pa + (pos - map.m_la); 310 310 return erofs_fscache_read_folios_async(mdev.m_fscache->cookie,
-29
fs/erofs/internal.h
··· 195 195 atomic_t refcount; 196 196 }; 197 197 198 - #if defined(CONFIG_SMP) 199 198 static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp, 200 199 int val) 201 200 { ··· 223 224 return atomic_cond_read_relaxed(&grp->refcount, 224 225 VAL != EROFS_LOCKED_MAGIC); 225 226 } 226 - #else 227 - static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp, 228 - int val) 229 - { 230 - preempt_disable(); 231 - /* no need to spin on UP platforms, let's just disable preemption. */ 232 - if (val != atomic_read(&grp->refcount)) { 233 - preempt_enable(); 234 - return false; 235 - } 236 - return true; 237 - } 238 - 239 - static inline void erofs_workgroup_unfreeze(struct erofs_workgroup *grp, 240 - int orig_val) 241 - { 242 - preempt_enable(); 243 - } 244 - 245 - static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp) 246 - { 247 - int v = atomic_read(&grp->refcount); 248 - 249 - /* workgroup is never freezed on uniprocessor systems */ 250 - DBG_BUGON(v == EROFS_LOCKED_MAGIC); 251 - return v; 252 - } 253 - #endif /* !CONFIG_SMP */ 254 227 #endif /* !CONFIG_EROFS_FS_ZIP */ 255 228 256 229 /* we strictly follow PAGE_SIZE and no buffer head yet */
+8 -8
fs/erofs/zmap.c
··· 141 141 u8 type, headtype; 142 142 u16 clusterofs; 143 143 u16 delta[2]; 144 - erofs_blk_t pblk, compressedlcs; 144 + erofs_blk_t pblk, compressedblks; 145 145 erofs_off_t nextpackoff; 146 146 }; 147 147 ··· 192 192 DBG_BUGON(1); 193 193 return -EFSCORRUPTED; 194 194 } 195 - m->compressedlcs = m->delta[0] & 195 + m->compressedblks = m->delta[0] & 196 196 ~Z_EROFS_VLE_DI_D0_CBLKCNT; 197 197 m->delta[0] = 1; 198 198 } ··· 293 293 DBG_BUGON(1); 294 294 return -EFSCORRUPTED; 295 295 } 296 - m->compressedlcs = lo & ~Z_EROFS_VLE_DI_D0_CBLKCNT; 296 + m->compressedblks = lo & ~Z_EROFS_VLE_DI_D0_CBLKCNT; 297 297 m->delta[0] = 1; 298 298 return 0; 299 299 } else if (i + 1 != (int)vcnt) { ··· 497 497 return 0; 498 498 } 499 499 lcn = m->lcn + 1; 500 - if (m->compressedlcs) 500 + if (m->compressedblks) 501 501 goto out; 502 502 503 503 err = z_erofs_load_cluster_from_disk(m, lcn, false); ··· 506 506 507 507 /* 508 508 * If the 1st NONHEAD lcluster has already been handled initially w/o 509 - * valid compressedlcs, which means at least it mustn't be CBLKCNT, or 509 + * valid compressedblks, which means at least it mustn't be CBLKCNT, or 510 510 * an internal implemenatation error is detected. 511 511 * 512 512 * The following code can also handle it properly anyway, but let's ··· 523 523 * if the 1st NONHEAD lcluster is actually PLAIN or HEAD type 524 524 * rather than CBLKCNT, it's a 1 lcluster-sized pcluster. 525 525 */ 526 - m->compressedlcs = 1; 526 + m->compressedblks = 1 << (lclusterbits - LOG_BLOCK_SIZE); 527 527 break; 528 528 case Z_EROFS_VLE_CLUSTER_TYPE_NONHEAD: 529 529 if (m->delta[0] != 1) 530 530 goto err_bonus_cblkcnt; 531 - if (m->compressedlcs) 531 + if (m->compressedblks) 532 532 break; 533 533 fallthrough; 534 534 default: ··· 539 539 return -EFSCORRUPTED; 540 540 } 541 541 out: 542 - map->m_plen = (u64)m->compressedlcs << lclusterbits; 542 + map->m_plen = (u64)m->compressedblks << LOG_BLOCK_SIZE; 543 543 return 0; 544 544 err_bonus_cblkcnt: 545 545 erofs_err(m->inode->i_sb,
+23 -8
fs/tracefs/inode.c
··· 141 141 kuid_t uid; 142 142 kgid_t gid; 143 143 umode_t mode; 144 + /* Opt_* bitfield. */ 145 + unsigned int opts; 144 146 }; 145 147 146 148 enum { ··· 243 241 kgid_t gid; 244 242 char *p; 245 243 244 + opts->opts = 0; 246 245 opts->mode = TRACEFS_DEFAULT_MODE; 247 246 248 247 while ((p = strsep(&data, ",")) != NULL) { ··· 278 275 * but traditionally tracefs has ignored all mount options 279 276 */ 280 277 } 278 + 279 + opts->opts |= BIT(token); 281 280 } 282 281 283 282 return 0; 284 283 } 285 284 286 - static int tracefs_apply_options(struct super_block *sb) 285 + static int tracefs_apply_options(struct super_block *sb, bool remount) 287 286 { 288 287 struct tracefs_fs_info *fsi = sb->s_fs_info; 289 288 struct inode *inode = d_inode(sb->s_root); 290 289 struct tracefs_mount_opts *opts = &fsi->mount_opts; 291 290 292 - inode->i_mode &= ~S_IALLUGO; 293 - inode->i_mode |= opts->mode; 291 + /* 292 + * On remount, only reset mode/uid/gid if they were provided as mount 293 + * options. 294 + */ 294 295 295 - inode->i_uid = opts->uid; 296 + if (!remount || opts->opts & BIT(Opt_mode)) { 297 + inode->i_mode &= ~S_IALLUGO; 298 + inode->i_mode |= opts->mode; 299 + } 296 300 297 - /* Set all the group ids to the mount option */ 298 - set_gid(sb->s_root, opts->gid); 301 + if (!remount || opts->opts & BIT(Opt_uid)) 302 + inode->i_uid = opts->uid; 303 + 304 + if (!remount || opts->opts & BIT(Opt_gid)) { 305 + /* Set all the group ids to the mount option */ 306 + set_gid(sb->s_root, opts->gid); 307 + } 299 308 300 309 return 0; 301 310 } ··· 322 307 if (err) 323 308 goto fail; 324 309 325 - tracefs_apply_options(sb); 310 + tracefs_apply_options(sb, true); 326 311 327 312 fail: 328 313 return err; ··· 374 359 375 360 sb->s_op = &tracefs_super_operations; 376 361 377 - tracefs_apply_options(sb); 362 + tracefs_apply_options(sb, false); 378 363 379 364 return 0; 380 365
+1 -1
include/asm-generic/softirq_stack.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_GENERIC_SOFTIRQ_STACK_H
 #define __ASM_GENERIC_SOFTIRQ_STACK_H
 
-#if defined(CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK) && !defined(CONFIG_PREEMPT_RT)
+#ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
 void do_softirq_own_stack(void);
 #else
 static inline void do_softirq_own_stack(void)
+2 -2
include/drm/drm_connector.h
@@ -319,8 +319,8 @@
  * EDID's detailed monitor range
  */
 struct drm_monitor_range_info {
-	u8 min_vfreq;
-	u8 max_vfreq;
+	u16 min_vfreq;
+	u16 max_vfreq;
 };
 
 /**
+5
include/drm/drm_edid.h
@@ -92,6 +92,11 @@
 	u8 str[13];
 } __attribute__((packed));
 
+#define DRM_EDID_RANGE_OFFSET_MIN_VFREQ (1 << 0) /* 1.4 */
+#define DRM_EDID_RANGE_OFFSET_MAX_VFREQ (1 << 1) /* 1.4 */
+#define DRM_EDID_RANGE_OFFSET_MIN_HFREQ (1 << 2) /* 1.4 */
+#define DRM_EDID_RANGE_OFFSET_MAX_HFREQ (1 << 3) /* 1.4 */
+
 #define DRM_EDID_DEFAULT_GTF_SUPPORT_FLAG 0x00
 #define DRM_EDID_RANGE_LIMITS_ONLY_FLAG 0x01
 #define DRM_EDID_SECONDARY_GTF_SUPPORT_FLAG 0x02
+3 -3
include/kunit/test.h
@@ -826,7 +826,7 @@
 
 #define KUNIT_EXPECT_LE_MSG(test, left, right, fmt, ...) \
 	KUNIT_BINARY_INT_ASSERTION(test, \
-				   KUNIT_ASSERTION, \
+				   KUNIT_EXPECTATION, \
 				   left, <=, right, \
 				   fmt, \
 				   ##__VA_ARGS__)
@@ -1116,7 +1116,7 @@
 
 #define KUNIT_ASSERT_LT_MSG(test, left, right, fmt, ...) \
 	KUNIT_BINARY_INT_ASSERTION(test, \
-				   KUNIT_EXPECTATION, \
+				   KUNIT_ASSERTION, \
 				   left, <, right, \
 				   fmt, \
 				   ##__VA_ARGS__)
@@ -1157,7 +1157,7 @@
 
 #define KUNIT_ASSERT_GT_MSG(test, left, right, fmt, ...) \
 	KUNIT_BINARY_INT_ASSERTION(test, \
-				   KUNIT_EXPECTATION, \
+				   KUNIT_ASSERTION, \
 				   left, >, right, \
 				   fmt, \
 				   ##__VA_ARGS__)
+1
include/linux/amba/bus.h
@@ -67,6 +67,7 @@
 	struct clk *pclk;
 	struct device_dma_parameters dma_parms;
 	unsigned int periphid;
+	struct mutex periphid_lock;
 	unsigned int cid;
 	struct amba_cs_uci_id uci;
 	unsigned int irq[AMBA_NR_IRQS];
+11
include/linux/buffer_head.h
@@ -138,6 +138,17 @@
 static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
 {
 	/*
+	 * If somebody else already set this uptodate, they will
+	 * have done the memory barrier, and a reader will thus
+	 * see *some* valid buffer state.
+	 *
+	 * Any other serialization (with IO errors or whatever that
+	 * might clear the bit) has to come from other state (eg BH_Lock).
+	 */
+	if (test_bit(BH_Uptodate, &bh->b_state))
+		return;
+
+	/*
 	 * make it consistent with folio_mark_uptodate
 	 * pairs with smp_load_acquire in buffer_uptodate
 	 */
+6
include/linux/debugfs.h
@@ -91,6 +91,8 @@
 void debugfs_remove(struct dentry *dentry);
 #define debugfs_remove_recursive debugfs_remove
 
+void debugfs_lookup_and_remove(const char *name, struct dentry *parent);
+
 const struct file_operations *debugfs_real_fops(const struct file *filp);
 
 int debugfs_file_get(struct dentry *dentry);
@@ -225,6 +223,10 @@
 { }
 
 static inline void debugfs_remove_recursive(struct dentry *dentry)
+{ }
+
+static inline void debugfs_lookup_and_remove(const char *name,
+					     struct dentry *parent)
 { }
 
 const struct file_operations *debugfs_real_fops(const struct file *filp);
-5
include/linux/dma-mapping.h
@@ -139,7 +139,6 @@
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
 bool dma_can_mmap(struct device *dev);
-int dma_supported(struct device *dev, u64 mask);
 bool dma_pci_p2pdma_supported(struct device *dev);
 int dma_set_mask(struct device *dev, u64 mask);
 int dma_set_coherent_mask(struct device *dev, u64 mask);
@@ -246,10 +247,6 @@
 static inline bool dma_can_mmap(struct device *dev)
 {
 	return false;
-}
-static inline int dma_supported(struct device *dev, u64 mask)
-{
-	return 0;
 }
 static inline bool dma_pci_p2pdma_supported(struct device *dev)
 {
+3 -1
include/linux/dmar.h
@@ -65,6 +65,7 @@
 
 extern struct rw_semaphore dmar_global_lock;
 extern struct list_head dmar_drhd_units;
+extern int intel_iommu_enabled;
 
 #define for_each_drhd_unit(drhd) \
 	list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \
@@ -89,7 +88,8 @@
 static inline bool dmar_rcu_check(void)
 {
 	return rwsem_is_locked(&dmar_global_lock) ||
-	       system_state == SYSTEM_BOOTING;
+	       system_state == SYSTEM_BOOTING ||
+	       (IS_ENABLED(CONFIG_INTEL_IOMMU) && !intel_iommu_enabled);
 }
 
 #define dmar_rcu_dereference(p) rcu_dereference_check((p), dmar_rcu_check())
+5 -3
include/linux/ieee80211.h
@@ -310,9 +310,11 @@
 struct ieee80211_hdr {
 	__le16 frame_control;
 	__le16 duration_id;
-	u8 addr1[ETH_ALEN];
-	u8 addr2[ETH_ALEN];
-	u8 addr3[ETH_ALEN];
+	struct_group(addrs,
+		u8 addr1[ETH_ALEN];
+		u8 addr2[ETH_ALEN];
+		u8 addr3[ETH_ALEN];
+	);
 	__le16 seq_ctrl;
 	u8 addr4[ETH_ALEN];
 } __packed __aligned(2);
+10 -9
include/linux/mlx5/driver.h
@@ -1280,16 +1280,17 @@
 	MLX5_TRIGGERED_CMD_COMP = (u64)1 << 32,
 };
 
-static inline bool mlx5_is_roce_init_enabled(struct mlx5_core_dev *dev)
-{
-	struct devlink *devlink = priv_to_devlink(dev);
-	union devlink_param_value val;
-	int err;
+bool mlx5_is_roce_on(struct mlx5_core_dev *dev);
 
-	err = devlink_param_driverinit_value_get(devlink,
-						 DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE,
-						 &val);
-	return err ? MLX5_CAP_GEN(dev, roce) : val.vbool;
+static inline bool mlx5_get_roce_state(struct mlx5_core_dev *dev)
+{
+	if (MLX5_CAP_GEN(dev, roce_rw_supported))
+		return MLX5_CAP_GEN(dev, roce);
+
+	/* If RoCE cap is read-only in FW, get RoCE state from devlink
+	 * in order to support RoCE enable/disable feature
+	 */
+	return mlx5_is_roce_on(dev);
 }
 
 #endif /* MLX5_DRIVER_H */
+21
include/linux/skbuff.h
@@ -2444,6 +2444,27 @@
 	skb_shinfo(skb)->nr_frags = i + 1;
 }
 
+/**
+ * skb_fill_page_desc_noacc - initialise a paged fragment in an skb
+ * @skb: buffer containing fragment to be initialised
+ * @i: paged fragment index to initialise
+ * @page: the page to use for this fragment
+ * @off: the offset to the data with @page
+ * @size: the length of the data
+ *
+ * Variant of skb_fill_page_desc() which does not deal with
+ * pfmemalloc, if page is not owned by us.
+ */
+static inline void skb_fill_page_desc_noacc(struct sk_buff *skb, int i,
+					    struct page *page, int off,
+					    int size)
+{
+	struct skb_shared_info *shinfo = skb_shinfo(skb);
+
+	__skb_fill_page_desc_noacc(shinfo, i, page, off, size);
+	shinfo->nr_frags = i + 1;
+}
+
 void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
 		     int size, unsigned int truesize);
 
+2
include/linux/spi/spi.h
@@ -469,6 +469,7 @@
  *	SPI_TRANS_FAIL_NO_START.
  * @queue_empty: signal green light for opportunistically skipping the queue
  *	for spi_sync transfers.
+ * @must_async: disable all fast paths in the core
  *
  * Each SPI controller can communicate with one or more @spi_device
  * children. These make a small bus, sharing MOSI, MISO and SCK signals
@@ -691,6 +690,7 @@
 
 	/* Flag for enabling opportunistic skipping of the queue in spi_sync */
 	bool queue_empty;
+	bool must_async;
 };
 
 static inline void *spi_controller_get_devdata(struct spi_controller *ctlr)
+1
include/linux/udp.h
@@ -70,6 +70,7 @@
 	 * For encapsulation sockets.
 	 */
 	int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
+	void (*encap_err_rcv)(struct sock *sk, struct sk_buff *skb, unsigned int udp_offset);
 	int (*encap_err_lookup)(struct sock *sk, struct sk_buff *skb);
 	void (*encap_destroy)(struct sock *sk);
 
-2
include/net/af_rxrpc.h
@@ -66,8 +66,6 @@
 void rxrpc_kernel_set_tx_length(struct socket *, struct rxrpc_call *, s64);
 bool rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *);
 u32 rxrpc_kernel_get_epoch(struct socket *, struct rxrpc_call *);
-bool rxrpc_kernel_get_reply_time(struct socket *, struct rxrpc_call *,
-				 ktime_t *);
 bool rxrpc_kernel_call_is_complete(struct rxrpc_call *);
 void rxrpc_kernel_set_max_life(struct socket *, struct rxrpc_call *,
 			       unsigned long);
+67
include/net/dropreason.h
@@ -3,6 +3,73 @@
 #ifndef _LINUX_DROPREASON_H
 #define _LINUX_DROPREASON_H
 
+#define DEFINE_DROP_REASON(FN, FNe)	\
+	FN(NOT_SPECIFIED)		\
+	FN(NO_SOCKET)			\
+	FN(PKT_TOO_SMALL)		\
+	FN(TCP_CSUM)			\
+	FN(SOCKET_FILTER)		\
+	FN(UDP_CSUM)			\
+	FN(NETFILTER_DROP)		\
+	FN(OTHERHOST)			\
+	FN(IP_CSUM)			\
+	FN(IP_INHDR)			\
+	FN(IP_RPFILTER)			\
+	FN(UNICAST_IN_L2_MULTICAST)	\
+	FN(XFRM_POLICY)			\
+	FN(IP_NOPROTO)			\
+	FN(SOCKET_RCVBUFF)		\
+	FN(PROTO_MEM)			\
+	FN(TCP_MD5NOTFOUND)		\
+	FN(TCP_MD5UNEXPECTED)		\
+	FN(TCP_MD5FAILURE)		\
+	FN(SOCKET_BACKLOG)		\
+	FN(TCP_FLAGS)			\
+	FN(TCP_ZEROWINDOW)		\
+	FN(TCP_OLD_DATA)		\
+	FN(TCP_OVERWINDOW)		\
+	FN(TCP_OFOMERGE)		\
+	FN(TCP_RFC7323_PAWS)		\
+	FN(TCP_INVALID_SEQUENCE)	\
+	FN(TCP_RESET)			\
+	FN(TCP_INVALID_SYN)		\
+	FN(TCP_CLOSE)			\
+	FN(TCP_FASTOPEN)		\
+	FN(TCP_OLD_ACK)			\
+	FN(TCP_TOO_OLD_ACK)		\
+	FN(TCP_ACK_UNSENT_DATA)		\
+	FN(TCP_OFO_QUEUE_PRUNE)		\
+	FN(TCP_OFO_DROP)		\
+	FN(IP_OUTNOROUTES)		\
+	FN(BPF_CGROUP_EGRESS)		\
+	FN(IPV6DISABLED)		\
+	FN(NEIGH_CREATEFAIL)		\
+	FN(NEIGH_FAILED)		\
+	FN(NEIGH_QUEUEFULL)		\
+	FN(NEIGH_DEAD)			\
+	FN(TC_EGRESS)			\
+	FN(QDISC_DROP)			\
+	FN(CPU_BACKLOG)			\
+	FN(XDP)				\
+	FN(TC_INGRESS)			\
+	FN(UNHANDLED_PROTO)		\
+	FN(SKB_CSUM)			\
+	FN(SKB_GSO_SEG)			\
+	FN(SKB_UCOPY_FAULT)		\
+	FN(DEV_HDR)			\
+	FN(DEV_READY)			\
+	FN(FULL_RING)			\
+	FN(NOMEM)			\
+	FN(HDR_TRUNC)			\
+	FN(TAP_FILTER)			\
+	FN(TAP_TXFILTER)		\
+	FN(ICMP_CSUM)			\
+	FN(INVALID_PROTO)		\
+	FN(IP_INADDRERRORS)		\
+	FN(IP_INNOROUTES)		\
+	FN(PKT_TOO_BIG)			\
+	FNe(MAX)
+
 /**
  * enum skb_drop_reason - the reasons of skb drops
  *
-2
include/net/netfilter/nf_conntrack.h
@@ -53,8 +53,6 @@
 	/* only used when new connection is allocated: */
 	atomic_t count;
 	unsigned int expect_count;
-	u8 sysctl_auto_assign_helper;
-	bool auto_assign_helper_warned;
 
 	/* only used from work queues, configuration plane, and so on: */
 	unsigned int users4;
-1
include/net/netns/conntrack.h
@@ -101,7 +101,6 @@
 	u8	sysctl_log_invalid; /* Log invalid packets */
 	u8	sysctl_events;
 	u8	sysctl_acct;
-	u8	sysctl_auto_assign_helper;
 	u8	sysctl_tstamp;
 	u8	sysctl_checksum;
 
+4
include/net/udp_tunnel.h
@@ -67,6 +67,9 @@
 typedef int (*udp_tunnel_encap_rcv_t)(struct sock *sk, struct sk_buff *skb);
 typedef int (*udp_tunnel_encap_err_lookup_t)(struct sock *sk,
 					     struct sk_buff *skb);
+typedef void (*udp_tunnel_encap_err_rcv_t)(struct sock *sk,
+					   struct sk_buff *skb,
+					   unsigned int udp_offset);
 typedef void (*udp_tunnel_encap_destroy_t)(struct sock *sk);
 typedef struct sk_buff *(*udp_tunnel_gro_receive_t)(struct sock *sk,
 						    struct list_head *head,
@@ -83,6 +80,7 @@
 	__u8 encap_type;
 	udp_tunnel_encap_rcv_t encap_rcv;
 	udp_tunnel_encap_err_lookup_t encap_err_lookup;
+	udp_tunnel_encap_err_rcv_t encap_err_rcv;
 	udp_tunnel_encap_destroy_t encap_destroy;
 	udp_tunnel_gro_receive_t gro_receive;
 	udp_tunnel_gro_complete_t gro_complete;
-2
include/scsi/scsi_device.h
@@ -309,8 +309,6 @@
 	struct list_head devices;
 	struct device dev;
 	struct kref reap_ref; /* last put renders target invisible */
-	atomic_t sdev_count;
-	wait_queue_head_t sdev_wq;
 	unsigned int channel;
 	unsigned int id; /* target id ... replace
 			  * scsi_device.id eventually */
+2 -3
include/scsi/scsi_host.h
@@ -557,6 +557,8 @@
 	struct scsi_host_template *hostt;
 	struct scsi_transport_template *transportt;
 
+	struct kref tagset_refcnt;
+	struct completion tagset_freed;
 	/* Area to keep a shared tag map */
 	struct blk_mq_tag_set tag_set;
 
@@ -691,9 +689,6 @@
 
 	/* ldm bits */
 	struct device shost_gendev, shost_dev;
-
-	atomic_t target_count;
-	wait_queue_head_t targets_wq;
 
 	/*
 	 * Points to the transport data (if any) which is allocated
+8
include/soc/at91/sama7-ddr.h
@@ -38,6 +38,14 @@
 #define	DDR3PHY_DSGCR_ODTPDD_ODT0	(1 << 20)	/* ODT[0] Power Down Driver */
 
 #define DDR3PHY_ZQ0SR0				(0x188)	/* ZQ status register 0 */
+#define DDR3PHY_ZQ0SR0_PDO_OFF		(0)	/* Pull-down output impedance select offset */
+#define DDR3PHY_ZQ0SR0_PUO_OFF		(5)	/* Pull-up output impedance select offset */
+#define DDR3PHY_ZQ0SR0_PDODT_OFF	(10)	/* Pull-down on-die termination impedance select offset */
+#define DDR3PHY_ZQ0SRO_PUODT_OFF	(15)	/* Pull-up on-die termination impedance select offset */
+
+#define	DDR3PHY_DX0DLLCR	(0x1CC)		/* DDR3PHY DATX8 DLL Control Register */
+#define	DDR3PHY_DX1DLLCR	(0x20C)		/* DDR3PHY DATX8 DLL Control Register */
+#define	DDR3PHY_DXDLLCR_DLLDIS	(1 << 31)	/* DLL Disable */
 
 /* UDDRC */
 #define UDDRC_STAT			(0x04)	/* UDDRC Operating Mode Status Register */
+14 -1
include/trace/events/skb.h
@@ -9,6 +9,15 @@
 #include <linux/netdevice.h>
 #include <linux/tracepoint.h>
 
+#undef FN
+#define FN(reason) TRACE_DEFINE_ENUM(SKB_DROP_REASON_##reason);
+DEFINE_DROP_REASON(FN, FN)
+
+#undef FN
+#undef FNe
+#define FN(reason) { SKB_DROP_REASON_##reason, #reason },
+#define FNe(reason) { SKB_DROP_REASON_##reason, #reason }
+
 /*
  * Tracepoint for free an sk_buff:
  */
@@ -44,8 +35,12 @@
 
 	TP_printk("skbaddr=%p protocol=%u location=%p reason: %s",
 		  __entry->skbaddr, __entry->protocol, __entry->location,
-		  drop_reasons[__entry->reason])
+		  __print_symbolic(__entry->reason,
+				   DEFINE_DROP_REASON(FN, FNe)))
 );
+
+#undef FN
+#undef FNe
 
 TRACE_EVENT(consume_skb,
 
+1
io_uring/io_uring.c
@@ -1728,6 +1728,7 @@
 
 	switch (io_arm_poll_handler(req, 0)) {
 	case IO_APOLL_READY:
+		io_kbuf_recycle(req, 0);
 		io_req_task_queue(req);
 		break;
 	case IO_APOLL_ABORTED:
+6 -2
io_uring/kbuf.h
@@ -91,9 +91,13 @@
 	 * buffer data. However if that buffer is recycled the original request
 	 * data stored in addr is lost. Therefore forbid recycling for now.
 	 */
-	if (req->opcode == IORING_OP_READV)
+	if (req->opcode == IORING_OP_READV) {
+		if ((req->flags & REQ_F_BUFFER_RING) && req->buf_list) {
+			req->buf_list->head++;
+			req->buf_list = NULL;
+		}
 		return;
-
+	}
 	if (req->flags & REQ_F_BUFFER_SELECTED)
 		io_kbuf_recycle_legacy(req, issue_flags);
 	if (req->flags & REQ_F_BUFFER_RING)
+4 -3
io_uring/net.c
@@ -1003,9 +1003,6 @@
 	unsigned msg_flags, cflags;
 	int ret, min_ret = 0;
 
-	if (!(req->flags & REQ_F_POLLED) &&
-	    (zc->flags & IORING_RECVSEND_POLL_FIRST))
-		return -EAGAIN;
 	sock = sock_from_file(req->file);
 	if (unlikely(!sock))
 		return -ENOTSOCK;
@@ -1026,6 +1029,10 @@
 		}
 		msg.msg_namelen = zc->addr_len;
 	}
+
+	if (!(req->flags & REQ_F_POLLED) &&
+	    (zc->flags & IORING_RECVSEND_POLL_FIRST))
+		return io_setup_async_addr(req, addr, issue_flags);
 
 	if (zc->flags & IORING_RECVSEND_FIXED_BUF) {
 		ret = io_import_fixed(WRITE, &msg.msg_iter, req->imu,
-8
io_uring/notif.c
@@ -21,14 +21,6 @@
 	io_req_task_complete(notif, locked);
 }
 
-static inline void io_notif_complete(struct io_kiocb *notif)
-	__must_hold(&notif->ctx->uring_lock)
-{
-	bool locked = true;
-
-	__io_notif_complete_tw(notif, &locked);
-}
-
 static void io_uring_tx_zerocopy_callback(struct sk_buff *skb,
 					  struct ubuf_info *uarg,
 					  bool success)
+18 -12
io_uring/rw.c
@@ -206,6 +206,20 @@
 	return false;
 }
 
+static inline unsigned io_fixup_rw_res(struct io_kiocb *req, unsigned res)
+{
+	struct io_async_rw *io = req->async_data;
+
+	/* add previously done IO, if any */
+	if (req_has_async_data(req) && io->bytes_done > 0) {
+		if (res < 0)
+			res = io->bytes_done;
+		else
+			res += io->bytes_done;
+	}
+	return res;
+}
+
 static void io_complete_rw(struct kiocb *kiocb, long res)
 {
 	struct io_rw *rw = container_of(kiocb, struct io_rw, kiocb);
@@ -227,7 +213,7 @@
 
 	if (__io_complete_rw_common(req, res))
 		return;
-	io_req_set_res(req, res, 0);
+	io_req_set_res(req, io_fixup_rw_res(req, res), 0);
 	req->io_task_work.func = io_req_task_complete;
 	io_req_task_work_add(req);
 }
@@ -254,22 +240,14 @@
 static int kiocb_done(struct io_kiocb *req, ssize_t ret,
 		      unsigned int issue_flags)
 {
-	struct io_async_rw *io = req->async_data;
 	struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
-
-	/* add previously done IO, if any */
-	if (req_has_async_data(req) && io->bytes_done > 0) {
-		if (ret < 0)
-			ret = io->bytes_done;
-		else
-			ret += io->bytes_done;
-	}
+	unsigned final_ret = io_fixup_rw_res(req, ret);
 
 	if (req->flags & REQ_F_CUR_POS)
 		req->file->f_pos = rw->kiocb.ki_pos;
 	if (ret >= 0 && (rw->kiocb.ki_complete == io_complete_rw)) {
 		if (!__io_complete_rw_common(req, ret)) {
-			io_req_set_res(req, req->cqe.res,
+			io_req_set_res(req, final_ret,
 				       io_put_kbuf(req, issue_flags));
 			return IOU_OK;
 		}
@@ -274,7 +268,7 @@
 		if (io_resubmit_prep(req))
 			io_req_task_queue_reissue(req);
 		else
-			io_req_task_queue_fail(req, ret);
+			io_req_task_queue_fail(req, final_ret);
 	}
 	return IOU_ISSUE_SKIP_COMPLETE;
 }
+2 -4
kernel/dma/debug.c
@@ -350,11 +350,10 @@
 						      unsigned long *flags)
 {
 
-	unsigned int max_range = dma_get_max_seg_size(ref->dev);
 	struct dma_debug_entry *entry, index = *ref;
-	unsigned int range = 0;
+	int limit = min(HASH_SIZE, (index.dev_addr >> HASH_FN_SHIFT) + 1);
 
-	while (range <= max_range) {
+	for (int i = 0; i < limit; i++) {
 		entry = __hash_bucket_find(*bucket, ref, containing_match);
 
 		if (entry)
@@ -363,7 +364,6 @@
 		 * Nothing found, go back a hash bucket
 		 */
 		put_hash_bucket(*bucket, *flags);
-		range          += (1 << HASH_FN_SHIFT);
 		index.dev_addr -= (1 << HASH_FN_SHIFT);
 		*bucket = get_hash_bucket(&index, flags);
 	}
+1 -2
kernel/dma/mapping.c
@@ -707,7 +707,7 @@
 }
 EXPORT_SYMBOL_GPL(dma_mmap_noncontiguous);
 
-int dma_supported(struct device *dev, u64 mask)
+static int dma_supported(struct device *dev, u64 mask)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
@@ -721,7 +721,6 @@
 		return 1;
 	return ops->dma_supported(dev, mask);
 }
-EXPORT_SYMBOL(dma_supported);
 
 bool dma_pci_p2pdma_supported(struct device *dev)
 {
+6 -7
kernel/dma/swiotlb.c
@@ -326,9 +326,6 @@
 		swiotlb_adjust_nareas(num_possible_cpus());
 
 	nslabs = default_nslabs;
-	if (nslabs < IO_TLB_MIN_SLABS)
-		panic("%s: nslabs = %lu too small\n", __func__, nslabs);
-
 	/*
 	 * By default allocate the bounce buffer memory from low memory, but
 	 * allow to pick a location everywhere for hypervisors with guest
@@ -338,8 +341,7 @@
 	else
 		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
 	if (!tlb) {
-		pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
-			__func__, bytes);
+		pr_warn("%s: failed to allocate tlb structure\n", __func__);
 		return;
 	}
 
@@ -575,7 +579,10 @@
 	}
 }
 
-#define slot_addr(start, idx)	((start) + ((idx) << IO_TLB_SHIFT))
+static inline phys_addr_t slot_addr(phys_addr_t start, phys_addr_t idx)
+{
+	return start + (idx << IO_TLB_SHIFT);
+}
 
 /*
  * Carefully handle integer overflow which can occur when boundary_mask == ~0UL.
@@ -764,7 +765,7 @@
 	/*
 	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
 	 * to the tlb buffer, if we knew for sure the device will
-	 * overwirte the entire current content. But we don't. Thus
+	 * overwrite the entire current content. But we don't. Thus
 	 * unconditional bounce may prevent leaking swiotlb content (i.e.
 	 * kernel memory) to user-space.
 	 */
+1
kernel/fork.c
@@ -1225,6 +1225,7 @@
 		schedule_work(&mm->async_put_work);
 	}
 }
+EXPORT_SYMBOL_GPL(mmput_async);
 #endif
 
 /**
+1
kernel/kprobes.c
@@ -1562,6 +1562,7 @@
 	/* Ensure it is not in reserved area nor out of text */
 	if (!(core_kernel_text((unsigned long) p->addr) ||
 	    is_module_text_address((unsigned long) p->addr)) ||
+	    in_gate_area_no_mm((unsigned long) p->addr) ||
 	    within_kprobe_blacklist((unsigned long) p->addr) ||
 	    jump_label_text_reserved(p->addr, p->addr) ||
 	    static_call_text_reserved(p->addr, p->addr) ||
+1 -1
kernel/sched/debug.c
@@ -416,7 +416,7 @@
 	char buf[32];
 
 	snprintf(buf, sizeof(buf), "cpu%d", cpu);
-	debugfs_remove(debugfs_lookup(buf, sd_dentry));
+	debugfs_lookup_and_remove(buf, sd_dentry);
 	d_cpu = debugfs_create_dir(buf, sd_dentry);
 
 	i = 0;
+1 -1
kernel/trace/rv/monitors/wip/wip.h
@@ -27,7 +27,7 @@
 	bool final_states[state_max_wip];
 };
 
-struct automaton_wip automaton_wip = {
+static struct automaton_wip automaton_wip = {
 	.state_names = {
 		"preemptive",
 		"non_preemptive"
+1 -1
kernel/trace/rv/monitors/wwnr/wwnr.h
@@ -27,7 +27,7 @@
 	bool final_states[state_max_wwnr];
 };
 
-struct automaton_wwnr automaton_wwnr = {
+static struct automaton_wwnr automaton_wwnr = {
 	.state_names = {
 		"not_running",
 		"running"
+2 -2
kernel/trace/rv/reactor_panic.c
@@ -24,13 +24,13 @@
 	.react = rv_panic_reaction
 };
 
-static int register_react_panic(void)
+static int __init register_react_panic(void)
 {
 	rv_register_reactor(&rv_panic);
 	return 0;
 }
 
-static void unregister_react_panic(void)
+static void __exit unregister_react_panic(void)
 {
 	rv_unregister_reactor(&rv_panic);
 }
+2 -2
kernel/trace/rv/reactor_printk.c
@@ -23,13 +23,13 @@
 	.react = rv_printk_reaction
 };
 
-static int register_react_printk(void)
+static int __init register_react_printk(void)
 {
 	rv_register_reactor(&rv_printk);
 	return 0;
 }
 
-static void unregister_react_printk(void)
+static void __exit unregister_react_printk(void)
 {
 	rv_unregister_reactor(&rv_printk);
 }
+2 -1
kernel/trace/trace_events_trigger.c
@@ -142,7 +142,8 @@
 {
 	struct event_trigger_data *data;
 
-	list_for_each_entry_rcu(data, &file->triggers, list) {
+	list_for_each_entry_rcu(data, &file->triggers, list,
+				lockdep_is_held(&event_mutex)) {
 		if (data->flags & EVENT_TRIGGER_FL_PROBE)
 			continue;
 		return true;
+2 -2
kernel/trace/trace_preemptirq.c
@@ -95,14 +95,14 @@
 	}
 
 	lockdep_hardirqs_on_prepare();
-	lockdep_hardirqs_on(CALLER_ADDR0);
+	lockdep_hardirqs_on(caller_addr);
 }
 EXPORT_SYMBOL(trace_hardirqs_on_caller);
 NOKPROBE_SYMBOL(trace_hardirqs_on_caller);
 
 __visible void trace_hardirqs_off_caller(unsigned long caller_addr)
 {
-	lockdep_hardirqs_off(CALLER_ADDR0);
+	lockdep_hardirqs_off(caller_addr);
 
 	if (!this_cpu_read(tracing_irq_cpu)) {
 		this_cpu_write(tracing_irq_cpu, 1);
+3 -2
kernel/tracepoint.c
@@ -571,7 +571,8 @@
 bool trace_module_has_bad_taint(struct module *mod)
 {
 	return mod->taints & ~((1 << TAINT_OOT_MODULE) | (1 << TAINT_CRAP) |
-			       (1 << TAINT_UNSIGNED_MODULE));
+			       (1 << TAINT_UNSIGNED_MODULE) |
+			       (1 << TAINT_TEST));
 }
 
 static BLOCKING_NOTIFIER_HEAD(tracepoint_notify_list);
@@ -648,7 +647,7 @@
 	/*
 	 * We skip modules that taint the kernel, especially those with different
 	 * module headers (for forced load), to make sure we don't cause a crash.
-	 * Staging, out-of-tree, and unsigned GPL modules are fine.
+	 * Staging, out-of-tree, unsigned GPL, and test modules are fine.
 	 */
 	if (trace_module_has_bad_taint(mod))
 		return 0;
+6 -6
net/bluetooth/hci_sync.c
@@ -3018,12 +3018,6 @@
 /* Read Buffer Size (ACL mtu, max pkt, etc.) */
 static int hci_read_buffer_size_sync(struct hci_dev *hdev)
 {
-	/* Use Read LE Buffer Size V2 if supported */
-	if (hdev->commands[41] & 0x20)
-		return __hci_cmd_sync_status(hdev,
-					     HCI_OP_LE_READ_BUFFER_SIZE_V2,
-					     0, NULL, HCI_CMD_TIMEOUT);
-
 	return __hci_cmd_sync_status(hdev, HCI_OP_READ_BUFFER_SIZE,
 				     0, NULL, HCI_CMD_TIMEOUT);
 }
@@ -3231,6 +3237,12 @@
 /* Read LE Buffer Size */
 static int hci_le_read_buffer_size_sync(struct hci_dev *hdev)
 {
+	/* Use Read LE Buffer Size V2 if supported */
+	if (hdev->commands[41] & 0x20)
+		return __hci_cmd_sync_status(hdev,
+					     HCI_OP_LE_READ_BUFFER_SIZE_V2,
+					     0, NULL, HCI_CMD_TIMEOUT);
+
 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_READ_BUFFER_SIZE,
 				     0, NULL, HCI_CMD_TIMEOUT);
 }
+2
net/bridge/br_netfilter_hooks.c
@@ -384,6 +384,7 @@
 			/* - Bridged-and-DNAT'ed traffic doesn't
 			 *   require ip_forwarding. */
 			if (rt->dst.dev == dev) {
+				skb_dst_drop(skb);
 				skb_dst_set(skb, &rt->dst);
 				goto bridged_dnat;
 			}
@@ -414,6 +413,7 @@
 			kfree_skb(skb);
 			return 0;
 		}
+		skb_dst_drop(skb);
 		skb_dst_set_noref(skb, &rt->dst);
 	}
 
+1
net/bridge/br_netfilter_ipv6.c
@@ -197,6 +197,7 @@
 			kfree_skb(skb);
 			return 0;
 		}
+		skb_dst_drop(skb);
 		skb_dst_set_noref(skb, &rt->dst);
 	}
 
-1
net/core/.gitignore
@@ -1 +0,0 @@
-dropreason_str.c
+1 -21
net/core/Makefile
@@ -5,7 +5,7 @@
 
 obj-y := sock.o request_sock.o skbuff.o datagram.o stream.o scm.o \
 	 gen_stats.o gen_estimator.o net_namespace.o secure_seq.o \
-	 flow_dissector.o dropreason_str.o
+	 flow_dissector.o
 
 obj-$(CONFIG_SYSCTL) += sysctl_net_core.o
 
@@ -40,23 +40,3 @@
 obj-$(CONFIG_BPF_SYSCALL) += sock_map.o
 obj-$(CONFIG_BPF_SYSCALL) += bpf_sk_storage.o
 obj-$(CONFIG_OF)	+= of_net.o
-
-clean-files := dropreason_str.c
-
-quiet_cmd_dropreason_str = GEN $@
-cmd_dropreason_str = awk -F ',' 'BEGIN{ print "\#include <net/dropreason.h>\n"; \
-	print "const char * const drop_reasons[] = {" }\
-	/^enum skb_drop/ { dr=1; }\
-	/^\};/ { dr=0; }\
-	/^\tSKB_DROP_REASON_/ {\
-		if (dr) {\
-			sub(/\tSKB_DROP_REASON_/, "", $$1);\
-			printf "\t[SKB_DROP_REASON_%s] = \"%s\",\n", $$1, $$1;\
-		}\
-	}\
-	END{ print "};" }' $< > $@
-
-$(obj)/dropreason_str.c: $(srctree)/include/net/dropreason.h
-	$(call cmd,dropreason_str)
-
-$(obj)/dropreason_str.o: $(obj)/dropreason_str.c
+1 -1
net/core/datagram.c
@@ -677,7 +677,7 @@
 			page_ref_sub(last_head, refs);
 			refs = 0;
 		}
-		skb_fill_page_desc(skb, frag++, head, start, size);
+		skb_fill_page_desc_noacc(skb, frag++, head, start, size);
 	}
 	if (refs)
 		page_ref_sub(last_head, refs);
+5 -1
net/core/skbuff.c
@@ -91,7 +91,11 @@
 int sysctl_max_skb_frags __read_mostly = MAX_SKB_FRAGS;
 EXPORT_SYMBOL(sysctl_max_skb_frags);
 
-/* The array 'drop_reasons' is auto-generated in dropreason_str.c */
+#undef FN
+#define FN(reason) [SKB_DROP_REASON_##reason] = #reason,
+const char * const drop_reasons[] = {
+	DEFINE_DROP_REASON(FN, FN)
+};
 EXPORT_SYMBOL(drop_reasons);
 
 /**
+1 -1
net/ipv4/tcp.c
@@ -1015,7 +1015,7 @@
 			skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
 		} else {
 			get_page(page);
-			skb_fill_page_desc(skb, i, page, offset, copy);
+			skb_fill_page_desc_noacc(skb, i, page, offset, copy);
 		}
 
 		if (!(flags & MSG_NO_SHARED_FRAGS))
+18 -7
net/ipv4/tcp_input.c
@@ -2513,6 +2513,21 @@
 	return tp->undo_marker && (!tp->undo_retrans || tcp_packet_delayed(tp));
 }
 
+static bool tcp_is_non_sack_preventing_reopen(struct sock *sk)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
+		/* Hold old state until something *above* high_seq
+		 * is ACKed. For Reno it is MUST to prevent false
+		 * fast retransmits (RFC2582). SACK TCP is safe. */
+		if (!tcp_any_retrans_done(sk))
+			tp->retrans_stamp = 0;
+		return true;
+	}
+	return false;
+}
+
 /* People celebrate: "We love our President!" */
 static bool tcp_try_undo_recovery(struct sock *sk)
 {
@@ -2550,14 +2535,8 @@
 	} else if (tp->rack.reo_wnd_persist) {
 		tp->rack.reo_wnd_persist--;
 	}
-	if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
-		/* Hold old state until something *above* high_seq
-		 * is ACKed. For Reno it is MUST to prevent false
-		 * fast retransmits (RFC2582). SACK TCP is safe. */
-		if (!tcp_any_retrans_done(sk))
-			tp->retrans_stamp = 0;
+	if (tcp_is_non_sack_preventing_reopen(sk))
 		return true;
-	}
 	tcp_set_ca_state(sk, TCP_CA_Open);
 	tp->is_sack_reneg = 0;
 	return false;
@@ -2587,6 +2578,8 @@
 		NET_INC_STATS(sock_net(sk),
 			      LINUX_MIB_TCPSPURIOUSRTOS);
 		inet_csk(sk)->icsk_retransmits = 0;
+		if (tcp_is_non_sack_preventing_reopen(sk))
+			return true;
 		if (frto_undo || tcp_is_sack(tp)) {
 			tcp_set_ca_state(sk, TCP_CA_Open);
 			tp->is_sack_reneg = 0;
+2
net/ipv4/udp.c
@@ -783,6 +783,8 @@
 	 */
 	if (tunnel) {
 		/* ...not for tunnels though: we don't have a sending socket */
+		if (udp_sk(sk)->encap_err_rcv)
+			udp_sk(sk)->encap_err_rcv(sk, skb, iph->ihl << 2);
 		goto out;
 	}
 	if (!inet->recverr) {
+1
net/ipv4/udp_tunnel_core.c
@@ -72,6 +72,7 @@
 
 	udp_sk(sk)->encap_type = cfg->encap_type;
 	udp_sk(sk)->encap_rcv = cfg->encap_rcv;
+	udp_sk(sk)->encap_err_rcv = cfg->encap_err_rcv;
 	udp_sk(sk)->encap_err_lookup = cfg->encap_err_lookup;
 	udp_sk(sk)->encap_destroy = cfg->encap_destroy;
 	udp_sk(sk)->gro_receive = cfg->gro_receive;
+7 -3
net/ipv6/addrconf.c
@@ -3557,11 +3557,15 @@
 		fallthrough;
 	case NETDEV_UP:
 	case NETDEV_CHANGE:
-		if (dev->flags & IFF_SLAVE)
-			break;
-
 		if (idev && idev->cnf.disable_ipv6)
 			break;
+
+		if (dev->flags & IFF_SLAVE) {
+			if (event == NETDEV_UP && !IS_ERR_OR_NULL(idev) &&
+			    dev->flags & IFF_UP && dev->flags & IFF_MULTICAST)
+				ipv6_mc_up(idev);
+			break;
+		}
 
 		if (event == NETDEV_UP) {
 			/* restore routes for permanent addresses */
+5
net/ipv6/seg6.c
@@ -191,6 +191,11 @@
 		goto out_unlock;
 	}
 
+	if (slen > nla_len(info->attrs[SEG6_ATTR_SECRET])) {
+		err = -EINVAL;
+		goto out_unlock;
+	}
+
 	if (hinfo) {
 		err = seg6_hmac_info_del(net, hmackeyid);
 		if (err)
+4 -1
net/ipv6/udp.c
@@ -616,8 +616,11 @@
 	}
 
 	/* Tunnels don't have an application socket: don't pass errors back */
-	if (tunnel)
+	if (tunnel) {
+		if (udp_sk(sk)->encap_err_rcv)
+			udp_sk(sk)->encap_err_rcv(sk, skb, offset);
 		goto out;
+	}
 
 	if (!np->recverr) {
 		if (!harderr || sk->sk_state != TCP_ESTABLISHED)
+6 -6
net/mac80211/mlme.c
@@ -3420,11 +3420,11 @@
 		ieee80211_link_info_change_notify(sdata, &sdata->deflink,
 						  BSS_CHANGED_BSSID);
 		sdata->u.mgd.flags = 0;
+
 		mutex_lock(&sdata->local->mtx);
 		ieee80211_link_release_channel(&sdata->deflink);
-		mutex_unlock(&sdata->local->mtx);
-
 		ieee80211_vif_set_links(sdata, 0);
+		mutex_unlock(&sdata->local->mtx);
 	}
 
 	cfg80211_put_bss(sdata->local->hw.wiphy, auth_data->bss);
@@ -3462,10 +3462,6 @@
 		sdata->u.mgd.flags = 0;
 		sdata->vif.bss_conf.mu_mimo_owner = false;
 
-		mutex_lock(&sdata->local->mtx);
-		ieee80211_link_release_channel(&sdata->deflink);
-		mutex_unlock(&sdata->local->mtx);
-
 		if (status != ASSOC_REJECTED) {
 			struct cfg80211_assoc_failure data = {
 				.timeout = status == ASSOC_TIMEOUT,
@@ -3480,7 +3484,10 @@
 			cfg80211_assoc_failure(sdata->dev, &data);
 		}
 
+		mutex_lock(&sdata->local->mtx);
+		ieee80211_link_release_channel(&sdata->deflink);
 		ieee80211_vif_set_links(sdata, 0);
+		mutex_unlock(&sdata->local->mtx);
 	}
 
 	kfree(assoc_data);
@@ -6508,6 +6509,7 @@
 	return 0;
 
  out_err:
+	ieee80211_link_release_channel(&sdata->deflink);
 	ieee80211_vif_set_links(sdata, 0);
 	return err;
 }
net/mac80211/rx.c  (+4)

··· 4074 4074
     .link_id = -1,
 };
 struct tid_ampdu_rx *tid_agg_rx;
+u8 link_id;

 tid_agg_rx = rcu_dereference(sta->ampdu_mlme.tid_rx[tid]);
 if (!tid_agg_rx)

··· 4094 4093
     };
     drv_event_callback(rx.local, rx.sdata, &event);
 }
+/* FIXME: statistics won't be right with this */
+link_id = sta->sta.valid_links ? ffs(sta->sta.valid_links) - 1 : 0;
+rx.link = rcu_dereference(sta->sdata->link[link_id]);

 ieee80211_rx_handlers(&rx, &frames);
 }
net/mac80211/wpa.c  (+2 -2)

··· 351 351
  * FC | A1 | A2 | A3 | SC | [A4] | [QC] */
 put_unaligned_be16(len_a, &aad[0]);
 put_unaligned(mask_fc, (__le16 *)&aad[2]);
-memcpy(&aad[4], &hdr->addr1, 3 * ETH_ALEN);
+memcpy(&aad[4], &hdr->addrs, 3 * ETH_ALEN);

 /* Mask Seq#, leave Frag# */
 aad[22] = *((u8 *) &hdr->seq_ctrl) & 0x0f;

··· 792 792
             IEEE80211_FCTL_MOREDATA);
 put_unaligned(mask_fc, (__le16 *) &aad[0]);
 /* A1 || A2 || A3 */
-memcpy(aad + 2, &hdr->addr1, 3 * ETH_ALEN);
+memcpy(aad + 2, &hdr->addrs, 3 * ETH_ALEN);
 }

net/netfilter/nf_conntrack_core.c  (+1 -6)

··· 1782 1782
     }
     spin_unlock_bh(&nf_conntrack_expect_lock);
 }
-if (!exp)
+if (!exp && tmpl)
     __nf_ct_try_assign_helper(ct, tmpl, GFP_ATOMIC);

 /* Other CPU might have obtained a pointer to this object before it was

··· 2068 2068
 ct->tuplehash[IP_CT_DIR_REPLY].tuple = *newreply;
 if (ct->master || (help && !hlist_empty(&help->expectations)))
     return;
-
-rcu_read_lock();
-__nf_ct_try_assign_helper(ct, NULL, GFP_ATOMIC);
-rcu_read_unlock();
 }
 EXPORT_SYMBOL_GPL(nf_conntrack_alter_reply);

··· 2793 2797
 nf_conntrack_acct_pernet_init(net);
 nf_conntrack_tstamp_pernet_init(net);
 nf_conntrack_ecache_pernet_init(net);
-nf_conntrack_helper_pernet_init(net);
 nf_conntrack_proto_pernet_init(net);

 return 0;
net/netfilter/nf_conntrack_helper.c  (+10 -70)

··· 35 35
 EXPORT_SYMBOL_GPL(nf_ct_helper_hsize);
 static unsigned int nf_ct_helper_count __read_mostly;

-static bool nf_ct_auto_assign_helper __read_mostly = false;
-module_param_named(nf_conntrack_helper, nf_ct_auto_assign_helper, bool, 0644);
-MODULE_PARM_DESC(nf_conntrack_helper,
-                 "Enable automatic conntrack helper assignment (default 0)");
-
 static DEFINE_MUTEX(nf_ct_nat_helpers_mutex);
 static struct list_head nf_ct_nat_helpers __read_mostly;

··· 44 49
 {
     return (((tuple->src.l3num << 8) | tuple->dst.protonum) ^
             (__force __u16)tuple->src.u.all) % nf_ct_helper_hsize;
-}
-
-static struct nf_conntrack_helper *
-__nf_ct_helper_find(const struct nf_conntrack_tuple *tuple)
-{
-    struct nf_conntrack_helper *helper;
-    struct nf_conntrack_tuple_mask mask = { .src.u.all = htons(0xFFFF) };
-    unsigned int h;
-
-    if (!nf_ct_helper_count)
-        return NULL;
-
-    h = helper_hash(tuple);
-    hlist_for_each_entry_rcu(helper, &nf_ct_helper_hash[h], hnode) {
-        if (nf_ct_tuple_src_mask_cmp(tuple, &helper->tuple, &mask))
-            return helper;
-    }
-    return NULL;
 }

 struct nf_conntrack_helper *

··· 186 209
 }
 EXPORT_SYMBOL_GPL(nf_ct_helper_ext_add);

-static struct nf_conntrack_helper *
-nf_ct_lookup_helper(struct nf_conn *ct, struct net *net)
-{
-    struct nf_conntrack_net *cnet = nf_ct_pernet(net);
-
-    if (!cnet->sysctl_auto_assign_helper) {
-        if (cnet->auto_assign_helper_warned)
-            return NULL;
-        if (!__nf_ct_helper_find(&ct->tuplehash[IP_CT_DIR_REPLY].tuple))
-            return NULL;
-        pr_info("nf_conntrack: default automatic helper assignment "
-                "has been turned off for security reasons and CT-based "
-                "firewall rule not found. Use the iptables CT target "
-                "to attach helpers instead.\n");
-        cnet->auto_assign_helper_warned = true;
-        return NULL;
-    }
-
-    return __nf_ct_helper_find(&ct->tuplehash[IP_CT_DIR_REPLY].tuple);
-}
-
 int __nf_ct_try_assign_helper(struct nf_conn *ct, struct nf_conn *tmpl,
                               gfp_t flags)
 {
     struct nf_conntrack_helper *helper = NULL;
     struct nf_conn_help *help;
-    struct net *net = nf_ct_net(ct);

     /* We already got a helper explicitly attached. The function
      * nf_conntrack_alter_reply - in case NAT is in use - asks for looking

··· 201 246
 if (test_bit(IPS_HELPER_BIT, &ct->status))
     return 0;

-if (tmpl != NULL) {
-    help = nfct_help(tmpl);
-    if (help != NULL) {
-        helper = rcu_dereference(help->helper);
-        set_bit(IPS_HELPER_BIT, &ct->status);
-    }
+if (WARN_ON_ONCE(!tmpl))
+    return 0;
+
+help = nfct_help(tmpl);
+if (help != NULL) {
+    helper = rcu_dereference(help->helper);
+    set_bit(IPS_HELPER_BIT, &ct->status);
 }

 help = nfct_help(ct);

 if (helper == NULL) {
-    helper = nf_ct_lookup_helper(ct, net);
-    if (helper == NULL) {
-        if (help)
-            RCU_INIT_POINTER(help->helper, NULL);
-        return 0;
-    }
+    if (help)
+        RCU_INIT_POINTER(help->helper, NULL);
+    return 0;
 }

 if (help == NULL) {

··· 497 544
 mutex_unlock(&nf_ct_nat_helpers_mutex);
 }
 EXPORT_SYMBOL_GPL(nf_nat_helper_unregister);
-
-void nf_ct_set_auto_assign_helper_warned(struct net *net)
-{
-    nf_ct_pernet(net)->auto_assign_helper_warned = true;
-}
-EXPORT_SYMBOL_GPL(nf_ct_set_auto_assign_helper_warned);
-
-void nf_conntrack_helper_pernet_init(struct net *net)
-{
-    struct nf_conntrack_net *cnet = nf_ct_pernet(net);
-
-    cnet->sysctl_auto_assign_helper = nf_ct_auto_assign_helper;
-}

 int nf_conntrack_helper_init(void)
 {
net/netfilter/nf_conntrack_irc.c  (+3 -2)

··· 194 194

 /* dcc_ip can be the internal OR external (NAT'ed) IP */
 tuple = &ct->tuplehash[dir].tuple;
-if (tuple->src.u3.ip != dcc_ip &&
-    tuple->dst.u3.ip != dcc_ip) {
+if ((tuple->src.u3.ip != dcc_ip &&
+     ct->tuplehash[!dir].tuple.dst.u3.ip != dcc_ip) ||
+    dcc_port == 0) {
     net_warn_ratelimited("Forged DCC command from %pI4: %pI4:%u\n",
                          &tuple->src.u3.ip,
                          &dcc_ip, dcc_port);
net/netfilter/nf_conntrack_netlink.c  (-5)

··· 2298 2298
         ct->status |= IPS_HELPER;
         RCU_INIT_POINTER(help->helper, helper);
     }
-} else {
-    /* try an implicit helper assignation */
-    err = __nf_ct_try_assign_helper(ct, NULL, GFP_ATOMIC);
-    if (err < 0)
-        goto err2;
 }

 err = ctnetlink_setup_nat(ct, cda);
net/netfilter/nf_conntrack_standalone.c  (-10)

··· 561 561
 NF_SYSCTL_CT_LOG_INVALID,
 NF_SYSCTL_CT_EXPECT_MAX,
 NF_SYSCTL_CT_ACCT,
-NF_SYSCTL_CT_HELPER,
 #ifdef CONFIG_NF_CONNTRACK_EVENTS
 NF_SYSCTL_CT_EVENTS,
 #endif

··· 673 674
 [NF_SYSCTL_CT_ACCT] = {
     .procname     = "nf_conntrack_acct",
     .data         = &init_net.ct.sysctl_acct,
-    .maxlen       = sizeof(u8),
-    .mode         = 0644,
-    .proc_handler = proc_dou8vec_minmax,
-    .extra1       = SYSCTL_ZERO,
-    .extra2       = SYSCTL_ONE,
-},
-[NF_SYSCTL_CT_HELPER] = {
-    .procname     = "nf_conntrack_helper",
     .maxlen       = sizeof(u8),
     .mode         = 0644,
     .proc_handler = proc_dou8vec_minmax,

··· 1091 1100
 table[NF_SYSCTL_CT_CHECKSUM].data = &net->ct.sysctl_checksum;
 table[NF_SYSCTL_CT_LOG_INVALID].data = &net->ct.sysctl_log_invalid;
 table[NF_SYSCTL_CT_ACCT].data = &net->ct.sysctl_acct;
-table[NF_SYSCTL_CT_HELPER].data = &cnet->sysctl_auto_assign_helper;
 #ifdef CONFIG_NF_CONNTRACK_EVENTS
 table[NF_SYSCTL_CT_EVENTS].data = &net->ct.sysctl_events;
 #endif
net/netfilter/nf_tables_api.c  (+3 -1)

··· 2166 2166
 chain->flags |= NFT_CHAIN_BASE | flags;
 basechain->policy = NF_ACCEPT;
 if (chain->flags & NFT_CHAIN_HW_OFFLOAD &&
-    !nft_chain_offload_support(basechain))
+    !nft_chain_offload_support(basechain)) {
+    list_splice_init(&basechain->hook_list, &hook->list);
     return -EOPNOTSUPP;
+}

 flow_block_init(&basechain->flow_block);

net/netfilter/nft_ct.c  (-3)

··· 1089 1089
 if (err < 0)
     goto err_put_helper;

-/* Avoid the bogus warning, helper will be assigned after CT init */
-nf_ct_set_auto_assign_helper_warned(ctx->net);
-
 return 0;

 err_put_helper:
+1
net/rxrpc/ar-internal.h
··· 982 982 /* 983 983 * peer_event.c 984 984 */ 985 + void rxrpc_encap_err_rcv(struct sock *sk, struct sk_buff *skb, unsigned int udp_offset); 985 986 void rxrpc_error_report(struct sock *); 986 987 void rxrpc_peer_keepalive_worker(struct work_struct *); 987 988
net/rxrpc/call_event.c  (+1 -1)

··· 166 166
 _enter("{%d,%d}", call->tx_hard_ack, call->tx_top);

 now = ktime_get_real();
-max_age = ktime_sub(now, jiffies_to_usecs(call->peer->rto_j));
+max_age = ktime_sub_us(now, jiffies_to_usecs(call->peer->rto_j));

 spin_lock_bh(&call->lock);

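The call_event.c change swaps ktime_sub() for ktime_sub_us() because jiffies_to_usecs() returns microseconds while ktime values count nanoseconds. A minimal userspace sketch of the unit mismatch (plain integers stand in for ktime_t; the kernel helpers themselves are not used here):

```c
#include <stdint.h>

/* ktime-style timestamps count nanoseconds. */
typedef int64_t ktime_ns_t;

/* Buggy shape: subtracts a microsecond delta as if it were nanoseconds. */
static ktime_ns_t sub_raw(ktime_ns_t now_ns, int64_t delta_us)
{
    return now_ns - delta_us;
}

/* Fixed shape: scale microseconds to nanoseconds before subtracting. */
static ktime_ns_t sub_us(ktime_ns_t now_ns, int64_t delta_us)
{
    return now_ns - delta_us * 1000;
}
```

With the raw subtraction the retransmission window shrinks by a factor of 1000, so max_age sits almost at "now" and transmissions look timed out far too early.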
net/rxrpc/local_object.c  (+4)

··· 137 137

 tuncfg.encap_type = UDP_ENCAP_RXRPC;
 tuncfg.encap_rcv = rxrpc_input_packet;
+tuncfg.encap_err_rcv = rxrpc_encap_err_rcv;
 tuncfg.sk_user_data = local;
 setup_udp_tunnel_sock(net, local->socket, &tuncfg);

··· 405 404
 struct rxrpc_local *local =
     container_of(work, struct rxrpc_local, processor);
 bool again;
+
+if (local->dead)
+    return;

 trace_rxrpc_local(local->debug_id, rxrpc_local_processing,
                   refcount_read(&local->ref), NULL);
net/rxrpc/peer_event.c  (+258 -39)

··· 16 16
 #include <net/sock.h>
 #include <net/af_rxrpc.h>
 #include <net/ip.h>
+#include <net/icmp.h>
 #include "ar-internal.h"

+static void rxrpc_adjust_mtu(struct rxrpc_peer *, unsigned int);
 static void rxrpc_store_error(struct rxrpc_peer *, struct sock_exterr_skb *);
 static void rxrpc_distribute_error(struct rxrpc_peer *, int,
                                    enum rxrpc_call_completion);

 /*
- * Find the peer associated with an ICMP packet.
+ * Find the peer associated with an ICMPv4 packet.
  */
 static struct rxrpc_peer *rxrpc_lookup_peer_icmp_rcu(struct rxrpc_local *local,
-                                                     const struct sk_buff *skb,
+                                                     struct sk_buff *skb,
+                                                     unsigned int udp_offset,
+                                                     unsigned int *info,
                                                      struct sockaddr_rxrpc *srx)
+{
+    struct iphdr *ip, *ip0 = ip_hdr(skb);
+    struct icmphdr *icmp = icmp_hdr(skb);
+    struct udphdr *udp = (struct udphdr *)(skb->data + udp_offset);
+
+    _enter("%u,%u,%u", ip0->protocol, icmp->type, icmp->code);
+
+    switch (icmp->type) {
+    case ICMP_DEST_UNREACH:
+        *info = ntohs(icmp->un.frag.mtu);
+        fallthrough;
+    case ICMP_TIME_EXCEEDED:
+    case ICMP_PARAMETERPROB:
+        ip = (struct iphdr *)((void *)icmp + 8);
+        break;
+    default:
+        return NULL;
+    }
+
+    memset(srx, 0, sizeof(*srx));
+    srx->transport_type = local->srx.transport_type;
+    srx->transport_len = local->srx.transport_len;
+    srx->transport.family = local->srx.transport.family;
+
+    /* Can we see an ICMP4 packet on an ICMP6 listening socket?  and vice
+     * versa?
+     */
+    switch (srx->transport.family) {
+    case AF_INET:
+        srx->transport_len = sizeof(srx->transport.sin);
+        srx->transport.family = AF_INET;
+        srx->transport.sin.sin_port = udp->dest;
+        memcpy(&srx->transport.sin.sin_addr, &ip->daddr,
+               sizeof(struct in_addr));
+        break;
+
+#ifdef CONFIG_AF_RXRPC_IPV6
+    case AF_INET6:
+        srx->transport_len = sizeof(srx->transport.sin);
+        srx->transport.family = AF_INET;
+        srx->transport.sin.sin_port = udp->dest;
+        memcpy(&srx->transport.sin.sin_addr, &ip->daddr,
+               sizeof(struct in_addr));
+        break;
+#endif
+
+    default:
+        WARN_ON_ONCE(1);
+        return NULL;
+    }
+
+    _net("ICMP {%pISp}", &srx->transport);
+    return rxrpc_lookup_peer_rcu(local, srx);
+}
+
+#ifdef CONFIG_AF_RXRPC_IPV6
+/*
+ * Find the peer associated with an ICMPv6 packet.
+ */
+static struct rxrpc_peer *rxrpc_lookup_peer_icmp6_rcu(struct rxrpc_local *local,
+                                                      struct sk_buff *skb,
+                                                      unsigned int udp_offset,
+                                                      unsigned int *info,
+                                                      struct sockaddr_rxrpc *srx)
+{
+    struct icmp6hdr *icmp = icmp6_hdr(skb);
+    struct ipv6hdr *ip, *ip0 = ipv6_hdr(skb);
+    struct udphdr *udp = (struct udphdr *)(skb->data + udp_offset);
+
+    _enter("%u,%u,%u", ip0->nexthdr, icmp->icmp6_type, icmp->icmp6_code);
+
+    switch (icmp->icmp6_type) {
+    case ICMPV6_DEST_UNREACH:
+        *info = ntohl(icmp->icmp6_mtu);
+        fallthrough;
+    case ICMPV6_PKT_TOOBIG:
+    case ICMPV6_TIME_EXCEED:
+    case ICMPV6_PARAMPROB:
+        ip = (struct ipv6hdr *)((void *)icmp + 8);
+        break;
+    default:
+        return NULL;
+    }
+
+    memset(srx, 0, sizeof(*srx));
+    srx->transport_type = local->srx.transport_type;
+    srx->transport_len = local->srx.transport_len;
+    srx->transport.family = local->srx.transport.family;
+
+    /* Can we see an ICMP4 packet on an ICMP6 listening socket?  and vice
+     * versa?
+     */
+    switch (srx->transport.family) {
+    case AF_INET:
+        _net("Rx ICMP6 on v4 sock");
+        srx->transport_len = sizeof(srx->transport.sin);
+        srx->transport.family = AF_INET;
+        srx->transport.sin.sin_port = udp->dest;
+        memcpy(&srx->transport.sin.sin_addr,
+               &ip->daddr.s6_addr32[3], sizeof(struct in_addr));
+        break;
+    case AF_INET6:
+        _net("Rx ICMP6");
+        srx->transport.sin.sin_port = udp->dest;
+        memcpy(&srx->transport.sin6.sin6_addr, &ip->daddr,
+               sizeof(struct in6_addr));
+        break;
+    default:
+        WARN_ON_ONCE(1);
+        return NULL;
+    }
+
+    _net("ICMP {%pISp}", &srx->transport);
+    return rxrpc_lookup_peer_rcu(local, srx);
+}
+#endif /* CONFIG_AF_RXRPC_IPV6 */
+
+/*
+ * Handle an error received on the local endpoint as a tunnel.
+ */
+void rxrpc_encap_err_rcv(struct sock *sk, struct sk_buff *skb,
+                         unsigned int udp_offset)
+{
+    struct sock_extended_err ee;
+    struct sockaddr_rxrpc srx;
+    struct rxrpc_local *local;
+    struct rxrpc_peer *peer;
+    unsigned int info = 0;
+    int err;
+    u8 version = ip_hdr(skb)->version;
+    u8 type = icmp_hdr(skb)->type;
+    u8 code = icmp_hdr(skb)->code;
+
+    rcu_read_lock();
+    local = rcu_dereference_sk_user_data(sk);
+    if (unlikely(!local)) {
+        rcu_read_unlock();
+        return;
+    }
+
+    rxrpc_new_skb(skb, rxrpc_skb_received);
+
+    switch (ip_hdr(skb)->version) {
+    case IPVERSION:
+        peer = rxrpc_lookup_peer_icmp_rcu(local, skb, udp_offset,
+                                          &info, &srx);
+        break;
+#ifdef CONFIG_AF_RXRPC_IPV6
+    case 6:
+        peer = rxrpc_lookup_peer_icmp6_rcu(local, skb, udp_offset,
+                                           &info, &srx);
+        break;
+#endif
+    default:
+        rcu_read_unlock();
+        return;
+    }
+
+    if (peer && !rxrpc_get_peer_maybe(peer))
+        peer = NULL;
+    if (!peer) {
+        rcu_read_unlock();
+        return;
+    }
+
+    memset(&ee, 0, sizeof(ee));
+
+    switch (version) {
+    case IPVERSION:
+        switch (type) {
+        case ICMP_DEST_UNREACH:
+            switch (code) {
+            case ICMP_FRAG_NEEDED:
+                rxrpc_adjust_mtu(peer, info);
+                rcu_read_unlock();
+                rxrpc_put_peer(peer);
+                return;
+            default:
+                break;
+            }
+
+            err = EHOSTUNREACH;
+            if (code <= NR_ICMP_UNREACH) {
+                /* Might want to do something different with
+                 * non-fatal errors
+                 */
+                //harderr = icmp_err_convert[code].fatal;
+                err = icmp_err_convert[code].errno;
+            }
+            break;
+
+        case ICMP_TIME_EXCEEDED:
+            err = EHOSTUNREACH;
+            break;
+        default:
+            err = EPROTO;
+            break;
+        }
+
+        ee.ee_origin = SO_EE_ORIGIN_ICMP;
+        ee.ee_type = type;
+        ee.ee_code = code;
+        ee.ee_errno = err;
+        break;
+
+#ifdef CONFIG_AF_RXRPC_IPV6
+    case 6:
+        switch (type) {
+        case ICMPV6_PKT_TOOBIG:
+            rxrpc_adjust_mtu(peer, info);
+            rcu_read_unlock();
+            rxrpc_put_peer(peer);
+            return;
+        }
+
+        icmpv6_err_convert(type, code, &err);
+
+        if (err == EACCES)
+            err = EHOSTUNREACH;
+
+        ee.ee_origin = SO_EE_ORIGIN_ICMP6;
+        ee.ee_type = type;
+        ee.ee_code = code;
+        ee.ee_errno = err;
+        break;
+#endif
+    }
+
+    trace_rxrpc_rx_icmp(peer, &ee, &srx);
+
+    rxrpc_distribute_error(peer, err, RXRPC_CALL_NETWORK_ERROR);
+    rcu_read_unlock();
+    rxrpc_put_peer(peer);
+}
+
+/*
+ * Find the peer associated with a local error.
+ */
+static struct rxrpc_peer *rxrpc_lookup_peer_local_rcu(struct rxrpc_local *local,
+                                                      const struct sk_buff *skb,
+                                                      struct sockaddr_rxrpc *srx)
 {
     struct sock_exterr_skb *serr = SKB_EXT_ERR(skb);

··· 283 38
 srx->transport_len = local->srx.transport_len;
 srx->transport.family = local->srx.transport.family;

-/* Can we see an ICMP4 packet on an ICMP6 listening socket?  and vice
- * versa?
- */
 switch (srx->transport.family) {
 case AF_INET:
     srx->transport_len = sizeof(srx->transport.sin);

··· 346 104
 /*
  * Handle an MTU/fragmentation problem.
  */
-static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, struct sock_exterr_skb *serr)
+static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, unsigned int mtu)
 {
-    u32 mtu = serr->ee.ee_info;
-
     _net("Rx ICMP Fragmentation Needed (%d)", mtu);

     /* wind down the local interface MTU */

··· 388 148
 struct sock_exterr_skb *serr;
 struct sockaddr_rxrpc srx;
 struct rxrpc_local *local;
-struct rxrpc_peer *peer;
+struct rxrpc_peer *peer = NULL;
 struct sk_buff *skb;

 rcu_read_lock();

··· 412 172
 }
 rxrpc_new_skb(skb, rxrpc_skb_received);
 serr = SKB_EXT_ERR(skb);
-if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) {
-    _leave("UDP empty message");
-    rcu_read_unlock();
-    rxrpc_free_skb(skb, rxrpc_skb_freed);
-    return;
+
+if (serr->ee.ee_origin == SO_EE_ORIGIN_LOCAL) {
+    peer = rxrpc_lookup_peer_local_rcu(local, skb, &srx);
+    if (peer && !rxrpc_get_peer_maybe(peer))
+        peer = NULL;
+    if (peer) {
+        trace_rxrpc_rx_icmp(peer, &serr->ee, &srx);
+        rxrpc_store_error(peer, serr);
+    }
 }

-peer = rxrpc_lookup_peer_icmp_rcu(local, skb, &srx);
-if (peer && !rxrpc_get_peer_maybe(peer))
-    peer = NULL;
-if (!peer) {
-    rcu_read_unlock();
-    rxrpc_free_skb(skb, rxrpc_skb_freed);
-    _leave(" [no peer]");
-    return;
-}
-
-trace_rxrpc_rx_icmp(peer, &serr->ee, &srx);
-
-if ((serr->ee.ee_origin == SO_EE_ORIGIN_ICMP &&
-     serr->ee.ee_type == ICMP_DEST_UNREACH &&
-     serr->ee.ee_code == ICMP_FRAG_NEEDED)) {
-    rxrpc_adjust_mtu(peer, serr);
-    rcu_read_unlock();
-    rxrpc_free_skb(skb, rxrpc_skb_freed);
-    rxrpc_put_peer(peer);
-    _leave(" [MTU update]");
-    return;
-}
-
-rxrpc_store_error(peer, serr);
 rcu_read_unlock();
 rxrpc_free_skb(skb, rxrpc_skb_freed);
 rxrpc_put_peer(peer);
-
 _leave("");
 }

net/rxrpc/recvmsg.c  (-43)

··· 771 771
     goto out;
 }
 EXPORT_SYMBOL(rxrpc_kernel_recv_data);
-
-/**
- * rxrpc_kernel_get_reply_time - Get timestamp on first reply packet
- * @sock: The socket that the call exists on
- * @call: The call to query
- * @_ts: Where to put the timestamp
- *
- * Retrieve the timestamp from the first DATA packet of the reply if it is
- * in the ring.  Returns true if successful, false if not.
- */
-bool rxrpc_kernel_get_reply_time(struct socket *sock, struct rxrpc_call *call,
-                                 ktime_t *_ts)
-{
-    struct sk_buff *skb;
-    rxrpc_seq_t hard_ack, top, seq;
-    bool success = false;
-
-    mutex_lock(&call->user_mutex);
-
-    if (READ_ONCE(call->state) != RXRPC_CALL_CLIENT_RECV_REPLY)
-        goto out;
-
-    hard_ack = call->rx_hard_ack;
-    if (hard_ack != 0)
-        goto out;
-
-    seq = hard_ack + 1;
-    top = smp_load_acquire(&call->rx_top);
-    if (after(seq, top))
-        goto out;
-
-    skb = call->rxtx_buffer[seq & RXRPC_RXTX_BUFF_MASK];
-    if (!skb)
-        goto out;
-
-    *_ts = skb_get_ktime(skb);
-    success = true;
-
-out:
-    mutex_unlock(&call->user_mutex);
-    return success;
-}
-EXPORT_SYMBOL(rxrpc_kernel_get_reply_time);
net/rxrpc/rxkad.c  (+1 -1)

··· 540 540
  * directly into the target buffer.
  */
 sg = _sg;
-nsg = skb_shinfo(skb)->nr_frags;
+nsg = skb_shinfo(skb)->nr_frags + 1;
 if (nsg <= 4) {
     nsg = 4;
 } else {
net/sched/sch_sfb.c  (+8 -5)

··· 135 135
     }
 }

-static void increment_qlen(const struct sk_buff *skb, struct sfb_sched_data *q)
+static void increment_qlen(const struct sfb_skb_cb *cb, struct sfb_sched_data *q)
 {
     u32 sfbhash;

-    sfbhash = sfb_hash(skb, 0);
+    sfbhash = cb->hashes[0];
     if (sfbhash)
         increment_one_qlen(sfbhash, 0, q);

-    sfbhash = sfb_hash(skb, 1);
+    sfbhash = cb->hashes[1];
     if (sfbhash)
         increment_one_qlen(sfbhash, 1, q);
 }

··· 281 281
 {

 struct sfb_sched_data *q = qdisc_priv(sch);
+unsigned int len = qdisc_pkt_len(skb);
 struct Qdisc *child = q->qdisc;
 struct tcf_proto *fl;
+struct sfb_skb_cb cb;
 int i;
 u32 p_min = ~0;
 u32 minqlen = ~0;

··· 401 399
 }

 enqueue:
+memcpy(&cb, sfb_skb_cb(skb), sizeof(cb));
 ret = qdisc_enqueue(skb, child, to_free);
 if (likely(ret == NET_XMIT_SUCCESS)) {
-    qdisc_qstats_backlog_inc(sch, skb);
+    sch->qstats.backlog += len;
     sch->q.qlen++;
-    increment_qlen(skb, q);
+    increment_qlen(&cb, q);
 } else if (net_xmit_drop_count(ret)) {
     q->stats.childdrop++;
     qdisc_qstats_drop(sch);
net/smc/smc_core.c  (+1)

··· 757 757
 lnk->lgr = lgr;
 smc_lgr_hold(lgr); /* lgr_put in smcr_link_clear() */
 lnk->link_idx = link_idx;
+lnk->wr_rx_id_compl = 0;
 smc_ibdev_cnt_inc(lnk);
 smcr_copy_dev_info_to_link(lnk);
 atomic_set(&lnk->conn_cnt, 0);
net/smc/smc_core.h  (+2)

··· 115 115
 dma_addr_t        wr_rx_dma_addr;    /* DMA address of wr_rx_bufs */
 dma_addr_t        wr_rx_v2_dma_addr; /* DMA address of v2 rx buf*/
 u64               wr_rx_id;          /* seq # of last recv WR */
+u64               wr_rx_id_compl;    /* seq # of last completed WR */
 u32               wr_rx_cnt;         /* number of WR recv buffers */
 unsigned long     wr_rx_tstamp;      /* jiffies when last buf rx */
+wait_queue_head_t wr_rx_empty_wait;  /* wait for RQ empty */

 struct ib_reg_wr  wr_reg;            /* WR register memory region */
 wait_queue_head_t wr_reg_wait;       /* wait for wr_reg result */
net/smc/smc_wr.c  (+5)

··· 454 454

 for (i = 0; i < num; i++) {
     link = wc[i].qp->qp_context;
+    link->wr_rx_id_compl = wc[i].wr_id;
     if (wc[i].status == IB_WC_SUCCESS) {
         link->wr_rx_tstamp = jiffies;
         smc_wr_rx_demultiplex(&wc[i]);

··· 466 465
         case IB_WC_RNR_RETRY_EXC_ERR:
         case IB_WC_WR_FLUSH_ERR:
             smcr_link_down_cond_sched(link);
+            if (link->wr_rx_id_compl == link->wr_rx_id)
+                wake_up(&link->wr_rx_empty_wait);
             break;
         default:
             smc_wr_rx_post(link); /* refill WR RX */

··· 642 639
     return;
 ibdev = lnk->smcibdev->ibdev;

+smc_wr_drain_cq(lnk);
 smc_wr_wakeup_reg_wait(lnk);
 smc_wr_wakeup_tx_wait(lnk);

··· 893 889
 atomic_set(&lnk->wr_tx_refcnt, 0);
 init_waitqueue_head(&lnk->wr_reg_wait);
 atomic_set(&lnk->wr_reg_refcnt, 0);
+init_waitqueue_head(&lnk->wr_rx_empty_wait);
 return rc;

 dma_unmap:
net/smc/smc_wr.h  (+5)

··· 73 73
     wake_up_all(&link->wr_tx_wait);
 }

+static inline void smc_wr_drain_cq(struct smc_link *lnk)
+{
+    wait_event(lnk->wr_rx_empty_wait, lnk->wr_rx_id_compl == lnk->wr_rx_id);
+}
+
 static inline void smc_wr_wakeup_tx_wait(struct smc_link *lnk)
 {
     wake_up_all(&lnk->wr_tx_wait);
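smc_wr_drain_cq() is the kernel wait_event() pattern: sleep until the completion side has caught up with the producer side (wr_rx_id_compl == wr_rx_id). A rough userspace analogue using a pthread condition variable; the struct and function names here are illustrative only, not part of the SMC code:

```c
#include <pthread.h>
#include <stdint.h>

struct wr_ring {
    pthread_mutex_t lock;
    pthread_cond_t  empty;
    uint64_t posted_id;  /* id of the last work request posted */
    uint64_t compl_id;   /* id of the last work request completed */
};

/* Completion-handler side: record progress and wake any drainer
 * once everything posted has completed. */
static void ring_complete(struct wr_ring *r, uint64_t id)
{
    pthread_mutex_lock(&r->lock);
    r->compl_id = id;
    if (r->compl_id == r->posted_id)
        pthread_cond_broadcast(&r->empty);
    pthread_mutex_unlock(&r->lock);
}

/* Drain side: block until every posted request has completed,
 * mirroring wait_event() on wr_rx_empty_wait. */
static void ring_drain(struct wr_ring *r)
{
    pthread_mutex_lock(&r->lock);
    while (r->compl_id != r->posted_id)
        pthread_cond_wait(&r->empty, &r->lock);
    pthread_mutex_unlock(&r->lock);
}
```

The condition is re-checked under the lock after every wakeup, which is exactly what wait_event() does with its condition expression.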
net/tipc/monitor.c  (+1 -1)

··· 160 160

 static int map_get(u64 up_map, int i)
 {
-    return (up_map & (1 << i)) >> i;
+    return (up_map & (1ULL << i)) >> i;
 }

 static struct tipc_peer *peer_prev(struct tipc_peer *peer)
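The tipc change matters because the literal `1` is a 32-bit `int`: for a 64-entry `up_map`, shifting it by `i >= 32` is undefined behaviour and silently loses the high half of the map. A quick userspace check of the corrected form (`map_get64` is a hypothetical stand-in name):

```c
#include <stdint.h>

/* Buggy variant for reference:
 *     return (up_map & (1 << i)) >> i;
 * The int-typed 1 makes any i >= 32 undefined behaviour.
 */

/* Fixed: promote the mask to 64 bits before shifting. */
static int map_get64(uint64_t up_map, int i)
{
    return (up_map & (1ULL << i)) >> i;
}
```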
net/wireless/lib80211_crypt_ccmp.c  (+1 -1)

··· 136 136
 pos = (u8 *) hdr;
 aad[0] = pos[0] & 0x8f;
 aad[1] = pos[1] & 0xc7;
-memcpy(aad + 2, hdr->addr1, 3 * ETH_ALEN);
+memcpy(aad + 2, &hdr->addrs, 3 * ETH_ALEN);
 pos = (u8 *) & hdr->seq_ctrl;
 aad[20] = pos[0] & 0x0f;
 aad[21] = 0; /* all bits masked */
scripts/extract-ikconfig  (+1)

··· 62 62
 try_decompress '\135\0\0\0'      xxx unlzma
 try_decompress '\211\114\132'    xy  'lzop -d'
 try_decompress '\002\041\114\030' xyy 'lz4 -d -l'
+try_decompress '\050\265\057\375' xxx unzstd

 # Bail out:
 echo "$me: Cannot find kernel config." >&2
scripts/gcc-ld  (-30, file deleted)

-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-# run gcc with ld options
-# used as a wrapper to execute link time optimizations
-# yes virginia, this is not pretty
-
-ARGS="-nostdlib"
-
-while [ "$1" != "" ] ; do
-    case "$1" in
-    -save-temps|-m32|-m64) N="$1" ;;
-    -r) N="$1" ;;
-    -[Wg]*) N="$1" ;;
-    -[olv]|-[Ofd]*|-nostdlib) N="$1" ;;
-    --end-group|--start-group)
-        N="-Wl,$1" ;;
-    -[RTFGhIezcbyYu]*|\
-    --script|--defsym|-init|-Map|--oformat|-rpath|\
-    -rpath-link|--sort-section|--section-start|-Tbss|-Tdata|-Ttext|\
-    --version-script|--dynamic-list|--version-exports-symbol|--wrap|-m)
-        A="$1" ; shift ; N="-Wl,$A,$1" ;;
-    -[m]*) N="$1" ;;
-    -*) N="-Wl,$1" ;;
-    *) N="$1" ;;
-    esac
-    ARGS="$ARGS $N"
-    shift
-done
-
-exec $CC $ARGS
scripts/mksysmap  (+1 -1)

··· 41 41
 # so we just ignore them to let readprofile continue to work.
 # (At least sparc64 has __crc_ in the middle).

-$NM -n $1 | grep -v '\( [aNUw] \)\|\(__crc_\)\|\( \$[adt]\)\|\( \.L\)' > $2
+$NM -n $1 | grep -v '\( [aNUw] \)\|\(__crc_\)\|\( \$[adt]\)\|\( \.L\)\|\( L0\)' > $2
sound/core/memalloc.c  (+7 -2)

··· 543 543
 dmab->dev.need_sync = dma_need_sync(dmab->dev.dev,
                                     sg_dma_address(sgt->sgl));
 p = dma_vmap_noncontiguous(dmab->dev.dev, size, sgt);
-if (p)
+if (p) {
     dmab->private_data = sgt;
-else
+    /* store the first page address for convenience */
+    dmab->addr = snd_sgbuf_get_addr(dmab, 0);
+} else {
     dma_free_noncontiguous(dmab->dev.dev, size, sgt, dmab->dev.dir);
+}
 return p;
 }

··· 783 780
 if (!p)
     goto error;
 dmab->private_data = sgbuf;
+/* store the first page address for convenience */
+dmab->addr = snd_sgbuf_get_addr(dmab, 0);
 return p;

 error:
sound/core/oss/pcm_oss.c  (+3 -3)

··· 1672 1672
 runtime = substream->runtime;
 if (atomic_read(&substream->mmap_count))
     goto __direct;
-err = snd_pcm_oss_make_ready(substream);
-if (err < 0)
-    return err;
 atomic_inc(&runtime->oss.rw_ref);
 if (mutex_lock_interruptible(&runtime->oss.params_lock)) {
     atomic_dec(&runtime->oss.rw_ref);
     return -ERESTARTSYS;
 }
+err = snd_pcm_oss_make_ready_locked(substream);
+if (err < 0)
+    goto unlock;
 format = snd_pcm_oss_format_from(runtime->oss.format);
 width = snd_pcm_format_physical_width(format);
 if (runtime->oss.buffer_used > 0) {
sound/drivers/aloop.c  (+4 -3)

··· 605 605
     cable->streams[SNDRV_PCM_STREAM_PLAYBACK];
 struct loopback_pcm *dpcm_capt =
     cable->streams[SNDRV_PCM_STREAM_CAPTURE];
-unsigned long delta_play = 0, delta_capt = 0;
+unsigned long delta_play = 0, delta_capt = 0, cur_jiffies;
 unsigned int running, count1, count2;

+cur_jiffies = jiffies;
 running = cable->running ^ cable->pause;
 if (running & (1 << SNDRV_PCM_STREAM_PLAYBACK)) {
-    delta_play = jiffies - dpcm_play->last_jiffies;
+    delta_play = cur_jiffies - dpcm_play->last_jiffies;
     dpcm_play->last_jiffies += delta_play;
 }

 if (running & (1 << SNDRV_PCM_STREAM_CAPTURE)) {
-    delta_capt = jiffies - dpcm_capt->last_jiffies;
+    delta_capt = cur_jiffies - dpcm_capt->last_jiffies;
     dpcm_capt->last_jiffies += delta_capt;
 }

sound/pci/emu10k1/emupcm.c  (+1 -1)

··· 124 124
 epcm->voices[0]->epcm = epcm;
 if (voices > 1) {
     for (i = 1; i < voices; i++) {
-        epcm->voices[i] = &epcm->emu->voices[epcm->voices[0]->number + i];
+        epcm->voices[i] = &epcm->emu->voices[(epcm->voices[0]->number + i) % NUM_G];
         epcm->voices[i]->epcm = epcm;
     }
 }
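The emu10k1 fix wraps the voice index with `% NUM_G` so that an allocation whose first voice sits near the end of the voice array cannot index past it. The guard reduces to clamping `base + i` back into `[0, n)`; a minimal sketch (the `64` here stands in for the driver's `NUM_G` and is only an assumption for the demo):

```c
/* Stand-in for the driver's NUM_G voice-array size. */
#define NUM_VOICES 64

/* Wraps a voice index so base + i always lands inside the array,
 * instead of reading past voices[NUM_VOICES - 1]. */
static int voice_index(int base, int i)
{
    return (base + i) % NUM_VOICES;
}
```

Without the modulo, `base = 62, i = 3` would address element 65 of a 64-entry array; with it, the index wraps around to 1.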
sound/pci/hda/hda_intel.c  (+1 -1)

··· 1817 1817

 /* use the non-cached pages in non-snoop mode */
 if (!azx_snoop(chip))
-    azx_bus(chip)->dma_type = SNDRV_DMA_TYPE_DEV_WC;
+    azx_bus(chip)->dma_type = SNDRV_DMA_TYPE_DEV_WC_SG;

 if (chip->driver_type == AZX_DRIVER_NVIDIA) {
     dev_dbg(chip->card->dev, "Enable delay in RIRB handling\n");
sound/pci/hda/hda_tegra.c  (+2 -1)

··· 474 474
 static int hda_tegra_probe(struct platform_device *pdev)
 {
     const unsigned int driver_flags = AZX_DCAPS_CORBRP_SELF_CLEAR |
-                                      AZX_DCAPS_PM_RUNTIME;
+                                      AZX_DCAPS_PM_RUNTIME |
+                                      AZX_DCAPS_4K_BDLE_BOUNDARY;
     struct snd_card *card;
     struct azx *chip;
     struct hda_tegra *hda;
sound/pci/hda/patch_sigmatel.c  (+24)

··· 209 209

 /* beep widgets */
 hda_nid_t anabeep_nid;
+bool beep_power_on;

 /* SPDIF-out mux */
 const char * const *spdif_labels;

··· 4444 4443

     return 0;
 }
+
+static int stac_check_power_status(struct hda_codec *codec, hda_nid_t nid)
+{
+#ifdef CONFIG_SND_HDA_INPUT_BEEP
+    struct sigmatel_spec *spec = codec->spec;
+#endif
+    int ret = snd_hda_gen_check_power_status(codec, nid);
+
+#ifdef CONFIG_SND_HDA_INPUT_BEEP
+    if (nid == spec->gen.beep_nid && codec->beep) {
+        if (codec->beep->enabled != spec->beep_power_on) {
+            spec->beep_power_on = codec->beep->enabled;
+            if (spec->beep_power_on)
+                snd_hda_power_up_pm(codec);
+            else
+                snd_hda_power_down_pm(codec);
+        }
+        ret |= spec->beep_power_on;
+    }
+#endif
+    return ret;
+}
 #else
 #define stac_suspend NULL
 #endif /* CONFIG_PM */

··· 4478 4455
 .unsol_event = snd_hda_jack_unsol_event,
 #ifdef CONFIG_PM
 .suspend = stac_suspend,
+.check_power_status = stac_check_power_status,
 #endif
 };

sound/soc/codecs/cs42l42.c  (+7 -6)

··· 1624 1624
 unsigned int current_plug_status;
 unsigned int current_button_status;
 unsigned int i;
-int report = 0;

 mutex_lock(&cs42l42->irq_lock);
 if (cs42l42->suspended) {

··· 1717 1718

 if (current_button_status & CS42L42_M_DETECT_TF_MASK) {
     dev_dbg(cs42l42->dev, "Button released\n");
-    report = 0;
+    snd_soc_jack_report(cs42l42->jack, 0,
+                        SND_JACK_BTN_0 | SND_JACK_BTN_1 |
+                        SND_JACK_BTN_2 | SND_JACK_BTN_3);
 } else if (current_button_status & CS42L42_M_DETECT_FT_MASK) {
-    report = cs42l42_handle_button_press(cs42l42);
-
+    snd_soc_jack_report(cs42l42->jack,
+                        cs42l42_handle_button_press(cs42l42),
+                        SND_JACK_BTN_0 | SND_JACK_BTN_1 |
+                        SND_JACK_BTN_2 | SND_JACK_BTN_3);
 }
-snd_soc_jack_report(cs42l42->jack, report, SND_JACK_BTN_0 | SND_JACK_BTN_1 |
-                    SND_JACK_BTN_2 | SND_JACK_BTN_3);
 }
 }

+30 -12
sound/soc/codecs/nau8540.c
··· 357 357 {"AIFTX", NULL, "Digital CH4 Mux"}, 358 358 }; 359 359 360 - static int nau8540_clock_check(struct nau8540 *nau8540, int rate, int osr) 360 + static const struct nau8540_osr_attr * 361 + nau8540_get_osr(struct nau8540 *nau8540) 361 362 { 363 + unsigned int osr; 364 + 365 + regmap_read(nau8540->regmap, NAU8540_REG_ADC_SAMPLE_RATE, &osr); 366 + osr &= NAU8540_ADC_OSR_MASK; 362 367 if (osr >= ARRAY_SIZE(osr_adc_sel)) 368 + return NULL; 369 + return &osr_adc_sel[osr]; 370 + } 371 + 372 + static int nau8540_dai_startup(struct snd_pcm_substream *substream, 373 + struct snd_soc_dai *dai) 374 + { 375 + struct snd_soc_component *component = dai->component; 376 + struct nau8540 *nau8540 = snd_soc_component_get_drvdata(component); 377 + const struct nau8540_osr_attr *osr; 378 + 379 + osr = nau8540_get_osr(nau8540); 380 + if (!osr || !osr->osr) 363 381 return -EINVAL; 364 382 365 - if (rate * osr > CLK_ADC_MAX) { 366 - dev_err(nau8540->dev, "exceed the maximum frequency of CLK_ADC\n"); 367 - return -EINVAL; 368 - } 369 - 370 - return 0; 383 + return snd_pcm_hw_constraint_minmax(substream->runtime, 384 + SNDRV_PCM_HW_PARAM_RATE, 385 + 0, CLK_ADC_MAX / osr->osr); 371 386 } 372 387 373 388 static int nau8540_hw_params(struct snd_pcm_substream *substream, ··· 390 375 { 391 376 struct snd_soc_component *component = dai->component; 392 377 struct nau8540 *nau8540 = snd_soc_component_get_drvdata(component); 393 - unsigned int val_len = 0, osr; 378 + unsigned int val_len = 0; 379 + const struct nau8540_osr_attr *osr; 394 380 395 381 /* CLK_ADC = OSR * FS 396 382 * ADC clock frequency is defined as Over Sampling Rate (OSR) ··· 399 383 * values must be selected such that the maximum frequency is less 400 384 * than 6.144 MHz. 
401 385 */ 402 - regmap_read(nau8540->regmap, NAU8540_REG_ADC_SAMPLE_RATE, &osr); 403 - osr &= NAU8540_ADC_OSR_MASK; 404 - if (nau8540_clock_check(nau8540, params_rate(params), osr)) 386 + osr = nau8540_get_osr(nau8540); 387 + if (!osr || !osr->osr) 388 + return -EINVAL; 389 + if (params_rate(params) * osr->osr > CLK_ADC_MAX) 405 390 return -EINVAL; 406 391 regmap_update_bits(nau8540->regmap, NAU8540_REG_CLOCK_SRC, 407 392 NAU8540_CLK_ADC_SRC_MASK, 408 - osr_adc_sel[osr].clk_src << NAU8540_CLK_ADC_SRC_SFT); 393 + osr->clk_src << NAU8540_CLK_ADC_SRC_SFT); 409 394 410 395 switch (params_width(params)) { 411 396 case 16: ··· 532 515 533 516 534 517 static const struct snd_soc_dai_ops nau8540_dai_ops = { 518 + .startup = nau8540_dai_startup, 535 519 .hw_params = nau8540_hw_params, 536 520 .set_fmt = nau8540_set_fmt, 537 521 .set_tdm_slot = nau8540_set_tdm_slot,
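The nau8540 rework centralizes the CLK_ADC = OSR * FS rule: since the ADC clock must stay at or below 6.144 MHz, the highest usable sample rate for a given oversampling ratio is simply CLK_ADC_MAX / OSR, which the new `startup` callback installs as a runtime constraint. A sketch of that computation under assumed table values (the real driver's OSR table and register decode differ):

```c
#include <assert.h>
#include <stddef.h>

/* CLK_ADC = OSR * FS must not exceed CLK_ADC_MAX, so the maximum
 * rate for a given OSR table entry is CLK_ADC_MAX / OSR. */

#define CLK_ADC_MAX 6144000

struct osr_attr { unsigned int osr; };

/* Illustrative table; the driver's osr_adc_sel has more fields. */
static const struct osr_attr osr_adc_sel[] = {
	{ 32 }, { 64 }, { 128 }, { 256 },
};

/* Returns the maximum permitted rate for table index 'idx', or 0 when
 * the index is out of range (the driver would return -EINVAL). */
static unsigned int max_rate_for_osr(unsigned int idx)
{
	if (idx >= sizeof(osr_adc_sel) / sizeof(osr_adc_sel[0]))
		return 0;
	if (!osr_adc_sel[idx].osr)
		return 0;
	return CLK_ADC_MAX / osr_adc_sel[idx].osr;
}
```

For example, with OSR 64 the rate cap works out to 96 kHz, so a 192 kHz stream would be rejected before `hw_params` rather than failing mid-configuration.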
+36 -30
sound/soc/codecs/nau8821.c
··· 670 670 {"HPOR", NULL, "Class G"}, 671 671 }; 672 672 673 - static int nau8821_clock_check(struct nau8821 *nau8821, 674 - int stream, int rate, int osr) 673 + static const struct nau8821_osr_attr * 674 + nau8821_get_osr(struct nau8821 *nau8821, int stream) 675 675 { 676 - int osrate = 0; 676 + unsigned int osr; 677 677 678 678 if (stream == SNDRV_PCM_STREAM_PLAYBACK) { 679 + regmap_read(nau8821->regmap, NAU8821_R2C_DAC_CTRL1, &osr); 680 + osr &= NAU8821_DAC_OVERSAMPLE_MASK; 679 681 if (osr >= ARRAY_SIZE(osr_dac_sel)) 680 - return -EINVAL; 681 - osrate = osr_dac_sel[osr].osr; 682 + return NULL; 683 + return &osr_dac_sel[osr]; 682 684 } else { 685 + regmap_read(nau8821->regmap, NAU8821_R2B_ADC_RATE, &osr); 686 + osr &= NAU8821_ADC_SYNC_DOWN_MASK; 683 687 if (osr >= ARRAY_SIZE(osr_adc_sel)) 684 - return -EINVAL; 685 - osrate = osr_adc_sel[osr].osr; 688 + return NULL; 689 + return &osr_adc_sel[osr]; 686 690 } 691 + } 687 692 688 - if (!osrate || rate * osrate > CLK_DA_AD_MAX) { 689 - dev_err(nau8821->dev, 690 - "exceed the maximum frequency of CLK_ADC or CLK_DAC"); 693 + static int nau8821_dai_startup(struct snd_pcm_substream *substream, 694 + struct snd_soc_dai *dai) 695 + { 696 + struct snd_soc_component *component = dai->component; 697 + struct nau8821 *nau8821 = snd_soc_component_get_drvdata(component); 698 + const struct nau8821_osr_attr *osr; 699 + 700 + osr = nau8821_get_osr(nau8821, substream->stream); 701 + if (!osr || !osr->osr) 691 702 return -EINVAL; 692 - } 693 703 694 - return 0; 704 + return snd_pcm_hw_constraint_minmax(substream->runtime, 705 + SNDRV_PCM_HW_PARAM_RATE, 706 + 0, CLK_DA_AD_MAX / osr->osr); 695 707 } 696 708 697 709 static int nau8821_hw_params(struct snd_pcm_substream *substream, ··· 711 699 { 712 700 struct snd_soc_component *component = dai->component; 713 701 struct nau8821 *nau8821 = snd_soc_component_get_drvdata(component); 714 - unsigned int val_len = 0, osr, ctrl_val, bclk_fs, clk_div; 702 + unsigned int val_len = 0, ctrl_val, 
bclk_fs, clk_div; 703 + const struct nau8821_osr_attr *osr; 715 704 716 705 nau8821->fs = params_rate(params); 717 706 /* CLK_DAC or CLK_ADC = OSR * FS ··· 721 708 * values must be selected such that the maximum frequency is less 722 709 * than 6.144 MHz. 723 710 */ 724 - if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 725 - regmap_read(nau8821->regmap, NAU8821_R2C_DAC_CTRL1, &osr); 726 - osr &= NAU8821_DAC_OVERSAMPLE_MASK; 727 - if (nau8821_clock_check(nau8821, substream->stream, 728 - nau8821->fs, osr)) { 729 - return -EINVAL; 730 - } 711 + osr = nau8821_get_osr(nau8821, substream->stream); 712 + if (!osr || !osr->osr) 713 + return -EINVAL; 714 + if (nau8821->fs * osr->osr > CLK_DA_AD_MAX) 715 + return -EINVAL; 716 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) 731 717 regmap_update_bits(nau8821->regmap, NAU8821_R03_CLK_DIVIDER, 732 718 NAU8821_CLK_DAC_SRC_MASK, 733 - osr_dac_sel[osr].clk_src << NAU8821_CLK_DAC_SRC_SFT); 734 - } else { 735 - regmap_read(nau8821->regmap, NAU8821_R2B_ADC_RATE, &osr); 736 - osr &= NAU8821_ADC_SYNC_DOWN_MASK; 737 - if (nau8821_clock_check(nau8821, substream->stream, 738 - nau8821->fs, osr)) { 739 - return -EINVAL; 740 - } 719 + osr->clk_src << NAU8821_CLK_DAC_SRC_SFT); 720 + else 741 721 regmap_update_bits(nau8821->regmap, NAU8821_R03_CLK_DIVIDER, 742 722 NAU8821_CLK_ADC_SRC_MASK, 743 - osr_adc_sel[osr].clk_src << NAU8821_CLK_ADC_SRC_SFT); 744 - } 723 + osr->clk_src << NAU8821_CLK_ADC_SRC_SFT); 745 724 746 725 /* make BCLK and LRC divde configuration if the codec as master. */ 747 726 regmap_read(nau8821->regmap, NAU8821_R1D_I2S_PCM_CTRL2, &ctrl_val); ··· 848 843 } 849 844 850 845 static const struct snd_soc_dai_ops nau8821_dai_ops = { 846 + .startup = nau8821_dai_startup, 851 847 .hw_params = nau8821_hw_params, 852 848 .set_fmt = nau8821_set_dai_fmt, 853 849 .mute_stream = nau8821_digital_mute,
+46 -34
sound/soc/codecs/nau8824.c
··· 1014 1014 return IRQ_HANDLED; 1015 1015 } 1016 1016 1017 - static int nau8824_clock_check(struct nau8824 *nau8824, 1018 - int stream, int rate, int osr) 1017 + static const struct nau8824_osr_attr * 1018 + nau8824_get_osr(struct nau8824 *nau8824, int stream) 1019 1019 { 1020 - int osrate; 1020 + unsigned int osr; 1021 1021 1022 1022 if (stream == SNDRV_PCM_STREAM_PLAYBACK) { 1023 + regmap_read(nau8824->regmap, 1024 + NAU8824_REG_DAC_FILTER_CTRL_1, &osr); 1025 + osr &= NAU8824_DAC_OVERSAMPLE_MASK; 1023 1026 if (osr >= ARRAY_SIZE(osr_dac_sel)) 1024 - return -EINVAL; 1025 - osrate = osr_dac_sel[osr].osr; 1027 + return NULL; 1028 + return &osr_dac_sel[osr]; 1026 1029 } else { 1030 + regmap_read(nau8824->regmap, 1031 + NAU8824_REG_ADC_FILTER_CTRL, &osr); 1032 + osr &= NAU8824_ADC_SYNC_DOWN_MASK; 1027 1033 if (osr >= ARRAY_SIZE(osr_adc_sel)) 1028 - return -EINVAL; 1029 - osrate = osr_adc_sel[osr].osr; 1034 + return NULL; 1035 + return &osr_adc_sel[osr]; 1030 1036 } 1037 + } 1031 1038 1032 - if (!osrate || rate * osr > CLK_DA_AD_MAX) { 1033 - dev_err(nau8824->dev, "exceed the maximum frequency of CLK_ADC or CLK_DAC\n"); 1039 + static int nau8824_dai_startup(struct snd_pcm_substream *substream, 1040 + struct snd_soc_dai *dai) 1041 + { 1042 + struct snd_soc_component *component = dai->component; 1043 + struct nau8824 *nau8824 = snd_soc_component_get_drvdata(component); 1044 + const struct nau8824_osr_attr *osr; 1045 + 1046 + osr = nau8824_get_osr(nau8824, substream->stream); 1047 + if (!osr || !osr->osr) 1034 1048 return -EINVAL; 1035 - } 1036 1049 1037 - return 0; 1050 + return snd_pcm_hw_constraint_minmax(substream->runtime, 1051 + SNDRV_PCM_HW_PARAM_RATE, 1052 + 0, CLK_DA_AD_MAX / osr->osr); 1038 1053 } 1039 1054 1040 1055 static int nau8824_hw_params(struct snd_pcm_substream *substream, ··· 1057 1042 { 1058 1043 struct snd_soc_component *component = dai->component; 1059 1044 struct nau8824 *nau8824 = snd_soc_component_get_drvdata(component); 1060 - unsigned int 
val_len = 0, osr, ctrl_val, bclk_fs, bclk_div; 1045 + unsigned int val_len = 0, ctrl_val, bclk_fs, bclk_div; 1046 + const struct nau8824_osr_attr *osr; 1047 + int err = -EINVAL; 1061 1048 1062 1049 nau8824_sema_acquire(nau8824, HZ); 1063 1050 ··· 1070 1053 * than 6.144 MHz. 1071 1054 */ 1072 1055 nau8824->fs = params_rate(params); 1073 - if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 1074 - regmap_read(nau8824->regmap, 1075 - NAU8824_REG_DAC_FILTER_CTRL_1, &osr); 1076 - osr &= NAU8824_DAC_OVERSAMPLE_MASK; 1077 - if (nau8824_clock_check(nau8824, substream->stream, 1078 - nau8824->fs, osr)) 1079 - return -EINVAL; 1056 + osr = nau8824_get_osr(nau8824, substream->stream); 1057 + if (!osr || !osr->osr) 1058 + goto error; 1059 + if (nau8824->fs * osr->osr > CLK_DA_AD_MAX) 1060 + goto error; 1061 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) 1080 1062 regmap_update_bits(nau8824->regmap, NAU8824_REG_CLK_DIVIDER, 1081 1063 NAU8824_CLK_DAC_SRC_MASK, 1082 - osr_dac_sel[osr].clk_src << NAU8824_CLK_DAC_SRC_SFT); 1083 - } else { 1084 - regmap_read(nau8824->regmap, 1085 - NAU8824_REG_ADC_FILTER_CTRL, &osr); 1086 - osr &= NAU8824_ADC_SYNC_DOWN_MASK; 1087 - if (nau8824_clock_check(nau8824, substream->stream, 1088 - nau8824->fs, osr)) 1089 - return -EINVAL; 1064 + osr->clk_src << NAU8824_CLK_DAC_SRC_SFT); 1065 + else 1090 1066 regmap_update_bits(nau8824->regmap, NAU8824_REG_CLK_DIVIDER, 1091 1067 NAU8824_CLK_ADC_SRC_MASK, 1092 - osr_adc_sel[osr].clk_src << NAU8824_CLK_ADC_SRC_SFT); 1093 - } 1068 + osr->clk_src << NAU8824_CLK_ADC_SRC_SFT); 1094 1069 1095 1070 /* make BCLK and LRC divde configuration if the codec as master. 
*/ 1096 1071 regmap_read(nau8824->regmap, ··· 1099 1090 else if (bclk_fs <= 256) 1100 1091 bclk_div = 0; 1101 1092 else 1102 - return -EINVAL; 1093 + goto error; 1103 1094 regmap_update_bits(nau8824->regmap, 1104 1095 NAU8824_REG_PORT0_I2S_PCM_CTRL_2, 1105 1096 NAU8824_I2S_LRC_DIV_MASK | NAU8824_I2S_BLK_DIV_MASK, ··· 1120 1111 val_len |= NAU8824_I2S_DL_32; 1121 1112 break; 1122 1113 default: 1123 - return -EINVAL; 1114 + goto error; 1124 1115 } 1125 1116 1126 1117 regmap_update_bits(nau8824->regmap, NAU8824_REG_PORT0_I2S_PCM_CTRL_1, 1127 1118 NAU8824_I2S_DL_MASK, val_len); 1119 + err = 0; 1128 1120 1121 + error: 1129 1122 nau8824_sema_release(nau8824); 1130 1123 1131 - return 0; 1124 + return err; 1132 1125 } 1133 1126 1134 1127 static int nau8824_set_fmt(struct snd_soc_dai *dai, unsigned int fmt) ··· 1138 1127 struct snd_soc_component *component = dai->component; 1139 1128 struct nau8824 *nau8824 = snd_soc_component_get_drvdata(component); 1140 1129 unsigned int ctrl1_val = 0, ctrl2_val = 0; 1141 - 1142 - nau8824_sema_acquire(nau8824, HZ); 1143 1130 1144 1131 switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { 1145 1132 case SND_SOC_DAIFMT_CBM_CFM: ··· 1179 1170 default: 1180 1171 return -EINVAL; 1181 1172 } 1173 + 1174 + nau8824_sema_acquire(nau8824, HZ); 1182 1175 1183 1176 regmap_update_bits(nau8824->regmap, NAU8824_REG_PORT0_I2S_PCM_CTRL_1, 1184 1177 NAU8824_I2S_DF_MASK | NAU8824_I2S_BP_MASK | ··· 1558 1547 }; 1559 1548 1560 1549 static const struct snd_soc_dai_ops nau8824_dai_ops = { 1550 + .startup = nau8824_dai_startup, 1561 1551 .hw_params = nau8824_hw_params, 1562 1552 .set_fmt = nau8824_set_fmt, 1563 1553 .set_tdm_slot = nau8824_set_tdm_slot,
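The nau8824 `hw_params` rework funnels every failure through a single `error` label so the semaphore taken at entry is released on all paths, where the old code returned directly and could leave it held. A toy version of that single-exit discipline, with a counter standing in for the semaphore (all names illustrative):

```c
#include <assert.h>

/* Single-exit error handling: the lock acquired at entry is released
 * exactly once, at the 'error' label, no matter which check fails. */

static int sema_held;

static void sema_acquire(void) { sema_held++; }
static void sema_release(void) { sema_held--; }

static int configure(int rate, int osr, int max_clk)
{
	int err = -22;			/* -EINVAL */

	sema_acquire();

	if (!osr)
		goto error;		/* bad OSR: still releases */
	if (rate * osr > max_clk)
		goto error;		/* clock too fast: still releases */

	err = 0;			/* all checks passed */
error:
	sema_release();
	return err;
}
```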
+45 -38
sound/soc/codecs/nau8825.c
··· 1247 1247 {"HPOR", NULL, "Class G"}, 1248 1248 }; 1249 1249 1250 - static int nau8825_clock_check(struct nau8825 *nau8825, 1251 - int stream, int rate, int osr) 1250 + static const struct nau8825_osr_attr * 1251 + nau8825_get_osr(struct nau8825 *nau8825, int stream) 1252 1252 { 1253 - int osrate; 1253 + unsigned int osr; 1254 1254 1255 1255 if (stream == SNDRV_PCM_STREAM_PLAYBACK) { 1256 + regmap_read(nau8825->regmap, 1257 + NAU8825_REG_DAC_CTRL1, &osr); 1258 + osr &= NAU8825_DAC_OVERSAMPLE_MASK; 1256 1259 if (osr >= ARRAY_SIZE(osr_dac_sel)) 1257 - return -EINVAL; 1258 - osrate = osr_dac_sel[osr].osr; 1260 + return NULL; 1261 + return &osr_dac_sel[osr]; 1259 1262 } else { 1263 + regmap_read(nau8825->regmap, 1264 + NAU8825_REG_ADC_RATE, &osr); 1265 + osr &= NAU8825_ADC_SYNC_DOWN_MASK; 1260 1266 if (osr >= ARRAY_SIZE(osr_adc_sel)) 1261 - return -EINVAL; 1262 - osrate = osr_adc_sel[osr].osr; 1267 + return NULL; 1268 + return &osr_adc_sel[osr]; 1263 1269 } 1270 + } 1264 1271 1265 - if (!osrate || rate * osr > CLK_DA_AD_MAX) { 1266 - dev_err(nau8825->dev, "exceed the maximum frequency of CLK_ADC or CLK_DAC\n"); 1272 + static int nau8825_dai_startup(struct snd_pcm_substream *substream, 1273 + struct snd_soc_dai *dai) 1274 + { 1275 + struct snd_soc_component *component = dai->component; 1276 + struct nau8825 *nau8825 = snd_soc_component_get_drvdata(component); 1277 + const struct nau8825_osr_attr *osr; 1278 + 1279 + osr = nau8825_get_osr(nau8825, substream->stream); 1280 + if (!osr || !osr->osr) 1267 1281 return -EINVAL; 1268 - } 1269 1282 1270 - return 0; 1283 + return snd_pcm_hw_constraint_minmax(substream->runtime, 1284 + SNDRV_PCM_HW_PARAM_RATE, 1285 + 0, CLK_DA_AD_MAX / osr->osr); 1271 1286 } 1272 1287 1273 1288 static int nau8825_hw_params(struct snd_pcm_substream *substream, ··· 1291 1276 { 1292 1277 struct snd_soc_component *component = dai->component; 1293 1278 struct nau8825 *nau8825 = snd_soc_component_get_drvdata(component); 1294 - unsigned int val_len = 
0, osr, ctrl_val, bclk_fs, bclk_div; 1279 + unsigned int val_len = 0, ctrl_val, bclk_fs, bclk_div; 1280 + const struct nau8825_osr_attr *osr; 1281 + int err = -EINVAL; 1295 1282 1296 1283 nau8825_sema_acquire(nau8825, 3 * HZ); 1297 1284 ··· 1303 1286 * values must be selected such that the maximum frequency is less 1304 1287 * than 6.144 MHz. 1305 1288 */ 1306 - if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 1307 - regmap_read(nau8825->regmap, NAU8825_REG_DAC_CTRL1, &osr); 1308 - osr &= NAU8825_DAC_OVERSAMPLE_MASK; 1309 - if (nau8825_clock_check(nau8825, substream->stream, 1310 - params_rate(params), osr)) { 1311 - nau8825_sema_release(nau8825); 1312 - return -EINVAL; 1313 - } 1289 + osr = nau8825_get_osr(nau8825, substream->stream); 1290 + if (!osr || !osr->osr) 1291 + goto error; 1292 + if (params_rate(params) * osr->osr > CLK_DA_AD_MAX) 1293 + goto error; 1294 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) 1314 1295 regmap_update_bits(nau8825->regmap, NAU8825_REG_CLK_DIVIDER, 1315 1296 NAU8825_CLK_DAC_SRC_MASK, 1316 - osr_dac_sel[osr].clk_src << NAU8825_CLK_DAC_SRC_SFT); 1317 - } else { 1318 - regmap_read(nau8825->regmap, NAU8825_REG_ADC_RATE, &osr); 1319 - osr &= NAU8825_ADC_SYNC_DOWN_MASK; 1320 - if (nau8825_clock_check(nau8825, substream->stream, 1321 - params_rate(params), osr)) { 1322 - nau8825_sema_release(nau8825); 1323 - return -EINVAL; 1324 - } 1297 + osr->clk_src << NAU8825_CLK_DAC_SRC_SFT); 1298 + else 1325 1299 regmap_update_bits(nau8825->regmap, NAU8825_REG_CLK_DIVIDER, 1326 1300 NAU8825_CLK_ADC_SRC_MASK, 1327 - osr_adc_sel[osr].clk_src << NAU8825_CLK_ADC_SRC_SFT); 1328 - } 1301 + osr->clk_src << NAU8825_CLK_ADC_SRC_SFT); 1329 1302 1330 1303 /* make BCLK and LRC divde configuration if the codec as master. 
*/ 1331 1304 regmap_read(nau8825->regmap, NAU8825_REG_I2S_PCM_CTRL2, &ctrl_val); ··· 1328 1321 bclk_div = 1; 1329 1322 else if (bclk_fs <= 128) 1330 1323 bclk_div = 0; 1331 - else { 1332 - nau8825_sema_release(nau8825); 1333 - return -EINVAL; 1334 - } 1324 + else 1325 + goto error; 1335 1326 regmap_update_bits(nau8825->regmap, NAU8825_REG_I2S_PCM_CTRL2, 1336 1327 NAU8825_I2S_LRC_DIV_MASK | NAU8825_I2S_BLK_DIV_MASK, 1337 1328 ((bclk_div + 1) << NAU8825_I2S_LRC_DIV_SFT) | bclk_div); ··· 1349 1344 val_len |= NAU8825_I2S_DL_32; 1350 1345 break; 1351 1346 default: 1352 - nau8825_sema_release(nau8825); 1353 - return -EINVAL; 1347 + goto error; 1354 1348 } 1355 1349 1356 1350 regmap_update_bits(nau8825->regmap, NAU8825_REG_I2S_PCM_CTRL1, 1357 1351 NAU8825_I2S_DL_MASK, val_len); 1352 + err = 0; 1358 1353 1354 + error: 1359 1355 /* Release the semaphore. */ 1360 1356 nau8825_sema_release(nau8825); 1361 1357 1362 - return 0; 1358 + return err; 1363 1359 } 1364 1360 1365 1361 static int nau8825_set_dai_fmt(struct snd_soc_dai *codec_dai, unsigned int fmt) ··· 1426 1420 } 1427 1421 1428 1422 static const struct snd_soc_dai_ops nau8825_dai_ops = { 1423 + .startup = nau8825_dai_startup, 1429 1424 .hw_params = nau8825_hw_params, 1430 1425 .set_fmt = nau8825_set_dai_fmt, 1431 1426 };
+12 -4
sound/soc/fsl/fsl_aud2htx.c
··· 234 234 235 235 regcache_cache_only(aud2htx->regmap, true); 236 236 237 + /* 238 + * Register platform component before registering cpu dai for there 239 + * is not defer probe for platform component in snd_soc_add_pcm_runtime(). 240 + */ 241 + ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0); 242 + if (ret) { 243 + dev_err(&pdev->dev, "failed to pcm register\n"); 244 + pm_runtime_disable(&pdev->dev); 245 + return ret; 246 + } 247 + 237 248 ret = devm_snd_soc_register_component(&pdev->dev, 238 249 &fsl_aud2htx_component, 239 250 &fsl_aud2htx_dai, 1); 240 251 if (ret) { 241 252 dev_err(&pdev->dev, "failed to register ASoC DAI\n"); 253 + pm_runtime_disable(&pdev->dev); 242 254 return ret; 243 255 } 244 - 245 - ret = imx_pcm_dma_init(pdev); 246 - if (ret) 247 - dev_err(&pdev->dev, "failed to init imx pcm dma: %d\n", ret); 248 256 249 257 return ret; 250 258 }
+1 -1
sound/soc/fsl/fsl_mqs.c
··· 122 122 } 123 123 124 124 switch (fmt & SND_SOC_DAIFMT_CLOCK_PROVIDER_MASK) { 125 - case SND_SOC_DAIFMT_BP_FP: 125 + case SND_SOC_DAIFMT_CBC_CFC: 126 126 break; 127 127 default: 128 128 return -EINVAL;
-3
sound/soc/mediatek/mt8186/mt8186-dai-adda.c
··· 271 271 /* should delayed 1/fs(smallest is 8k) = 125us before afe off */ 272 272 usleep_range(125, 135); 273 273 mt8186_afe_gpio_request(afe->dev, false, MT8186_DAI_ADDA, 1); 274 - 275 - /* reset dmic */ 276 - afe_priv->mtkaif_dmic = 0; 277 274 break; 278 275 default: 279 276 break;
+1
sound/soc/qcom/sm8250.c
··· 270 270 if (!card) 271 271 return -ENOMEM; 272 272 273 + card->owner = THIS_MODULE; 273 274 /* Allocate the private data */ 274 275 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 275 276 if (!data)
+2
sound/soc/sof/Kconfig
··· 196 196 197 197 config SND_SOC_SOF_DEBUG_IPC_FLOOD_TEST 198 198 tristate "SOF enable IPC flood test" 199 + depends on SND_SOC_SOF 199 200 select SND_SOC_SOF_CLIENT 200 201 help 201 202 This option enables a separate client device for IPC flood test ··· 215 214 216 215 config SND_SOC_SOF_DEBUG_IPC_MSG_INJECTOR 217 216 tristate "SOF enable IPC message injector" 217 + depends on SND_SOC_SOF 218 218 select SND_SOC_SOF_CLIENT 219 219 help 220 220 This option enables the IPC message injector which can be used to send
+2 -2
sound/soc/sof/ipc4-topology.c
··· 771 771 goto err; 772 772 773 773 ret = sof_update_ipc_object(scomp, src, SOF_SRC_TOKENS, swidget->tuples, 774 - swidget->num_tuples, sizeof(src), 1); 774 + swidget->num_tuples, sizeof(*src), 1); 775 775 if (ret) { 776 776 dev_err(scomp->dev, "Parsing SRC tokens failed\n"); 777 777 goto err; ··· 1251 1251 if (blob->alh_cfg.count > 1) { 1252 1252 int group_id; 1253 1253 1254 - group_id = ida_alloc_max(&alh_group_ida, ALH_MULTI_GTW_COUNT, 1254 + group_id = ida_alloc_max(&alh_group_ida, ALH_MULTI_GTW_COUNT - 1, 1255 1255 GFP_KERNEL); 1256 1256 1257 1257 if (group_id < 0)
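Both hunks above fix classic C pitfalls: `sizeof(src)` on a pointer yields the pointer size rather than the pointed-to object's size (the call wanted `sizeof(*src)`), and an allocator whose `max` bound is inclusive hands out IDs in [0, max], so getting exactly N IDs requires passing N - 1. Both in miniature, with an invented stand-in struct and a simplified allocator:

```c
#include <assert.h>
#include <stddef.h>

struct ipc4_copier {		/* stand-in; the real struct is larger */
	int fields[16];
};

#define ALH_MULTI_GTW_COUNT 8

/* Inclusive-range ID allocator sketch: returns the next free ID in
 * [0, max], or -1 once the range is exhausted. */
static int ida_alloc_max_sim(int *next, int max)
{
	if (*next > max)
		return -1;
	return (*next)++;
}
```

Passing `ALH_MULTI_GTW_COUNT` as the inclusive max would allow one ID too many, which is exactly the off-by-one the second hunk removes.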
+1 -1
sound/usb/card.c
··· 699 699 if (delayed_register[i] && 700 700 sscanf(delayed_register[i], "%x:%x", &id, &inum) == 2 && 701 701 id == chip->usb_id) 702 - return inum != iface; 702 + return iface < inum; 703 703 } 704 704 705 705 return false;
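The `delayed_register` predicate above (and its twin in quirks.c below) decides whether card registration should wait for a later interface. Comparing with `!=` also delayed interfaces *after* the target one, so if the target interface never probed, the card was never registered; `<` delays only the earlier interfaces. A sketch of the corrected predicate, assuming that reading of the commit:

```c
#include <assert.h>
#include <stdbool.h>

/* Delay card registration only for interfaces probed *before* the
 * target; the target interface or any later one triggers it. */
static bool delay_register(int target_iface, int probed_iface)
{
	return probed_iface < target_iface;
}
```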
+11 -14
sound/usb/endpoint.c
··· 758 758 * The endpoint needs to be closed via snd_usb_endpoint_close() later. 759 759 * 760 760 * Note that this function doesn't configure the endpoint. The substream 761 - * needs to set it up later via snd_usb_endpoint_configure(). 761 + * needs to set it up later via snd_usb_endpoint_set_params() and 762 + * snd_usb_endpoint_prepare(). 762 763 */ 763 764 struct snd_usb_endpoint * 764 765 snd_usb_endpoint_open(struct snd_usb_audio *chip, ··· 925 924 endpoint_set_interface(chip, ep, false); 926 925 927 926 if (!--ep->opened) { 927 + if (ep->clock_ref && !atomic_read(&ep->clock_ref->locked)) 928 + ep->clock_ref->rate = 0; 928 929 ep->iface = 0; 929 930 ep->altsetting = 0; 930 931 ep->cur_audiofmt = NULL; ··· 1293 1290 /* 1294 1291 * snd_usb_endpoint_set_params: configure an snd_usb_endpoint 1295 1292 * 1293 + * It's called either from hw_params callback. 1296 1294 * Determine the number of URBs to be used on this endpoint. 1297 1295 * An endpoint must be configured before it can be started. 1298 1296 * An endpoint that is already running can not be reconfigured. 1299 1297 */ 1300 - static int snd_usb_endpoint_set_params(struct snd_usb_audio *chip, 1301 - struct snd_usb_endpoint *ep) 1298 + int snd_usb_endpoint_set_params(struct snd_usb_audio *chip, 1299 + struct snd_usb_endpoint *ep) 1302 1300 { 1303 1301 const struct audioformat *fmt = ep->cur_audiofmt; 1304 1302 int err; ··· 1382 1378 } 1383 1379 1384 1380 /* 1385 - * snd_usb_endpoint_configure: Configure the endpoint 1381 + * snd_usb_endpoint_prepare: Prepare the endpoint 1386 1382 * 1387 1383 * This function sets up the EP to be fully usable state. 1388 - * It's called either from hw_params or prepare callback. 1384 + * It's called either from prepare callback. 1389 1385 * The function checks need_setup flag, and performs nothing unless needed, 1390 1386 * so it's safe to call this multiple times. 
1391 1387 * 1392 1388 * This returns zero if unchanged, 1 if the configuration has changed, 1393 1389 * or a negative error code. 1394 1390 */ 1395 - int snd_usb_endpoint_configure(struct snd_usb_audio *chip, 1396 - struct snd_usb_endpoint *ep) 1391 + int snd_usb_endpoint_prepare(struct snd_usb_audio *chip, 1392 + struct snd_usb_endpoint *ep) 1397 1393 { 1398 1394 bool iface_first; 1399 1395 int err = 0; ··· 1414 1410 if (err < 0) 1415 1411 goto unlock; 1416 1412 } 1417 - err = snd_usb_endpoint_set_params(chip, ep); 1418 - if (err < 0) 1419 - goto unlock; 1420 1413 goto done; 1421 1414 } 1422 1415 ··· 1438 1437 goto unlock; 1439 1438 1440 1439 err = init_sample_rate(chip, ep); 1441 - if (err < 0) 1442 - goto unlock; 1443 - 1444 - err = snd_usb_endpoint_set_params(chip, ep); 1445 1440 if (err < 0) 1446 1441 goto unlock; 1447 1442
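The endpoint rework above splits one configure step into two phases: `snd_usb_endpoint_set_params()` at `hw_params` time and `snd_usb_endpoint_prepare()` at prepare time, the latter idempotent via a need-setup flag. A toy model of that contract, with invented state fields, showing why repeated prepare calls are safe:

```c
#include <assert.h>
#include <stdbool.h>

struct ep {
	bool params_set;
	bool need_setup;
	int prepared_count;
};

/* Phase 1: record the chosen parameters and mark setup pending. */
static int ep_set_params(struct ep *ep)
{
	ep->params_set = true;
	ep->need_setup = true;
	return 0;
}

/* Phase 2: returns 0 if unchanged, 1 if the endpoint was
 * (re)programmed, or a negative error when params were never set. */
static int ep_prepare(struct ep *ep)
{
	if (!ep->params_set)
		return -1;
	if (!ep->need_setup)
		return 0;
	ep->need_setup = false;
	ep->prepared_count++;
	return 1;
}
```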
+4 -2
sound/usb/endpoint.h
··· 17 17 bool is_sync_ep); 18 18 void snd_usb_endpoint_close(struct snd_usb_audio *chip, 19 19 struct snd_usb_endpoint *ep); 20 - int snd_usb_endpoint_configure(struct snd_usb_audio *chip, 21 - struct snd_usb_endpoint *ep); 20 + int snd_usb_endpoint_set_params(struct snd_usb_audio *chip, 21 + struct snd_usb_endpoint *ep); 22 + int snd_usb_endpoint_prepare(struct snd_usb_audio *chip, 23 + struct snd_usb_endpoint *ep); 22 24 int snd_usb_endpoint_get_clock_rate(struct snd_usb_audio *chip, int clock); 23 25 24 26 bool snd_usb_endpoint_compatible(struct snd_usb_audio *chip,
+10 -4
sound/usb/pcm.c
··· 443 443 if (stop_endpoints(subs, false)) 444 444 sync_pending_stops(subs); 445 445 if (subs->sync_endpoint) { 446 - err = snd_usb_endpoint_configure(chip, subs->sync_endpoint); 446 + err = snd_usb_endpoint_prepare(chip, subs->sync_endpoint); 447 447 if (err < 0) 448 448 return err; 449 449 } 450 - err = snd_usb_endpoint_configure(chip, subs->data_endpoint); 450 + err = snd_usb_endpoint_prepare(chip, subs->data_endpoint); 451 451 if (err < 0) 452 452 return err; 453 453 snd_usb_set_format_quirk(subs, subs->cur_audiofmt); 454 454 } else { 455 455 if (subs->sync_endpoint) { 456 - err = snd_usb_endpoint_configure(chip, subs->sync_endpoint); 456 + err = snd_usb_endpoint_prepare(chip, subs->sync_endpoint); 457 457 if (err < 0) 458 458 return err; 459 459 } ··· 551 551 subs->cur_audiofmt = fmt; 552 552 mutex_unlock(&chip->mutex); 553 553 554 - ret = configure_endpoints(chip, subs); 554 + if (subs->sync_endpoint) { 555 + ret = snd_usb_endpoint_set_params(chip, subs->sync_endpoint); 556 + if (ret < 0) 557 + goto unlock; 558 + } 559 + 560 + ret = snd_usb_endpoint_set_params(chip, subs->data_endpoint); 555 561 556 562 unlock: 557 563 if (ret < 0)
+1 -1
sound/usb/quirks.c
··· 1764 1764 1765 1765 for (q = registration_quirks; q->usb_id; q++) 1766 1766 if (chip->usb_id == q->usb_id) 1767 - return iface != q->interface; 1767 + return iface < q->interface; 1768 1768 1769 1769 /* Register as normal */ 1770 1770 return false;
+5 -4
sound/usb/stream.c
··· 495 495 return 0; 496 496 } 497 497 } 498 + 499 + if (chip->card->registered) 500 + chip->need_delayed_register = true; 501 + 498 502 /* look for an empty stream */ 499 503 list_for_each_entry(as, &chip->pcm_list, list) { 500 504 if (as->fmt_type != fp->fmt_type) ··· 506 502 subs = &as->substream[stream]; 507 503 if (subs->ep_num) 508 504 continue; 509 - if (snd_device_get_state(chip->card, as->pcm) != 510 - SNDRV_DEV_BUILD) 511 - chip->need_delayed_register = true; 512 505 err = snd_pcm_new_stream(as->pcm, stream, 1); 513 506 if (err < 0) 514 507 return err; ··· 1106 1105 * Dallas DS4201 workaround: It presents 5 altsettings, but the last 1107 1106 * one misses syncpipe, and does not produce any sound. 1108 1107 */ 1109 - if (chip->usb_id == USB_ID(0x04fa, 0x4201)) 1108 + if (chip->usb_id == USB_ID(0x04fa, 0x4201) && num >= 4) 1110 1109 num = 4; 1111 1110 1112 1111 for (i = 0; i < num; i++) {
+9
tools/debugging/kernel-chktaint
··· 187 187 echo " * auxiliary taint, defined for and used by distros (#16)" 188 188 189 189 fi 190 + 190 191 T=`expr $T / 2` 191 192 if [ `expr $T % 2` -eq 0 ]; then 192 193 addout " " 193 194 else 194 195 addout "T" 195 196 echo " * kernel was built with the struct randomization plugin (#17)" 197 + fi 198 + 199 + T=`expr $T / 2` 200 + if [ `expr $T % 2` -eq 0 ]; then 201 + addout " " 202 + else 203 + addout "N" 204 + echo " * an in-kernel test (such as a KUnit test) has been run (#18)" 196 205 fi 197 206 198 207 echo "For a more detailed explanation of the various taint flags see"
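The chktaint additions walk the taint value one bit at a time with `T=\`expr $T / 2\``, emitting a letter per set bit (`T` for randstruct at bit 17, the new `N` for in-kernel tests at bit 18). An equivalent C sketch, using the conventional kernel taint letters (abbreviated to the first 19 flags):

```c
#include <assert.h>
#include <string.h>

/* One letter per taint bit; bit 17 = 'T' (randstruct),
 * bit 18 = 'N' (in-kernel test has run). */
static const char taint_letters[] = "PFSRMBUDAWCIOELKXTN";

static void taint_string(unsigned long t, char *buf, size_t len)
{
	size_t i;

	for (i = 0; i + 1 < len && i < strlen(taint_letters); i++) {
		buf[i] = (t & 1) ? taint_letters[i] : ' ';
		t >>= 1;	/* same as T=`expr $T / 2` in the script */
	}
	buf[i] = '\0';
}
```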
+50
tools/lib/perf/evlist.c
··· 486 486 if (ops->idx) 487 487 ops->idx(evlist, evsel, mp, idx); 488 488 489 + pr_debug("idx %d: mmapping fd %d\n", idx, *output); 489 490 if (ops->mmap(map, mp, *output, evlist_cpu) < 0) 490 491 return -1; 491 492 ··· 495 494 if (!idx) 496 495 perf_evlist__set_mmap_first(evlist, map, overwrite); 497 496 } else { 497 + pr_debug("idx %d: set output fd %d -> %d\n", idx, fd, *output); 498 498 if (ioctl(fd, PERF_EVENT_IOC_SET_OUTPUT, *output) != 0) 499 499 return -1; 500 500 ··· 522 520 } 523 521 524 522 static int 523 + mmap_per_thread(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops, 524 + struct perf_mmap_param *mp) 525 + { 526 + int nr_threads = perf_thread_map__nr(evlist->threads); 527 + int nr_cpus = perf_cpu_map__nr(evlist->all_cpus); 528 + int cpu, thread, idx = 0; 529 + int nr_mmaps = 0; 530 + 531 + pr_debug("%s: nr cpu values (may include -1) %d nr threads %d\n", 532 + __func__, nr_cpus, nr_threads); 533 + 534 + /* per-thread mmaps */ 535 + for (thread = 0; thread < nr_threads; thread++, idx++) { 536 + int output = -1; 537 + int output_overwrite = -1; 538 + 539 + if (mmap_per_evsel(evlist, ops, idx, mp, 0, thread, &output, 540 + &output_overwrite, &nr_mmaps)) 541 + goto out_unmap; 542 + } 543 + 544 + /* system-wide mmaps i.e. 
per-cpu */ 545 + for (cpu = 1; cpu < nr_cpus; cpu++, idx++) { 546 + int output = -1; 547 + int output_overwrite = -1; 548 + 549 + if (mmap_per_evsel(evlist, ops, idx, mp, cpu, 0, &output, 550 + &output_overwrite, &nr_mmaps)) 551 + goto out_unmap; 552 + } 553 + 554 + if (nr_mmaps != evlist->nr_mmaps) 555 + pr_err("Miscounted nr_mmaps %d vs %d\n", nr_mmaps, evlist->nr_mmaps); 556 + 557 + return 0; 558 + 559 + out_unmap: 560 + perf_evlist__munmap(evlist); 561 + return -1; 562 + } 563 + 564 + static int 525 565 mmap_per_cpu(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops, 526 566 struct perf_mmap_param *mp) 527 567 { ··· 571 527 int nr_cpus = perf_cpu_map__nr(evlist->all_cpus); 572 528 int nr_mmaps = 0; 573 529 int cpu, thread; 530 + 531 + pr_debug("%s: nr cpu values %d nr threads %d\n", __func__, nr_cpus, nr_threads); 574 532 575 533 for (cpu = 0; cpu < nr_cpus; cpu++) { 576 534 int output = -1; ··· 615 569 struct perf_evlist_mmap_ops *ops, 616 570 struct perf_mmap_param *mp) 617 571 { 572 + const struct perf_cpu_map *cpus = evlist->all_cpus; 618 573 struct perf_evsel *evsel; 619 574 620 575 if (!ops || !ops->get || !ops->mmap) ··· 634 587 635 588 if (evlist->pollfd.entries == NULL && perf_evlist__alloc_pollfd(evlist) < 0) 636 589 return -ENOMEM; 590 + 591 + if (perf_cpu_map__empty(cpus)) 592 + return mmap_per_thread(evlist, ops, mp); 637 593 638 594 return mmap_per_cpu(evlist, ops, mp); 639 595 }
+12 -12
tools/perf/Makefile.perf
···
954 954 	$(call QUIET_INSTALL, bpf-headers) \
955 955 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf'; \
956 956 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf/linux'; \
957 - 		$(INSTALL) include/bpf/*.h -t '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf'; \
958 - 		$(INSTALL) include/bpf/linux/*.h -t '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf/linux'
957 + 		$(INSTALL) include/bpf/*.h -m 644 -t '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf'; \
958 + 		$(INSTALL) include/bpf/linux/*.h -m 644 -t '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf/linux'
959 959 	$(call QUIET_INSTALL, bpf-examples) \
960 960 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf'; \
961 - 		$(INSTALL) examples/bpf/*.c -t '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf'
961 + 		$(INSTALL) examples/bpf/*.c -m 644 -t '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf'
962 962 endif
963 963 	$(call QUIET_INSTALL, perf-archive) \
964 964 		$(INSTALL) $(OUTPUT)perf-archive -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)'
···
967 967 ifndef NO_LIBAUDIT
968 968 	$(call QUIET_INSTALL, strace/groups) \
969 969 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(STRACE_GROUPS_INSTDIR_SQ)'; \
970 - 		$(INSTALL) trace/strace/groups/* -t '$(DESTDIR_SQ)$(STRACE_GROUPS_INSTDIR_SQ)'
970 + 		$(INSTALL) trace/strace/groups/* -m 644 -t '$(DESTDIR_SQ)$(STRACE_GROUPS_INSTDIR_SQ)'
971 971 endif
972 972 ifndef NO_LIBPERL
973 973 	$(call QUIET_INSTALL, perl-scripts) \
974 974 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace'; \
975 - 		$(INSTALL) scripts/perl/Perf-Trace-Util/lib/Perf/Trace/* -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace'; \
976 - 		$(INSTALL) scripts/perl/*.pl -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl'; \
975 + 		$(INSTALL) scripts/perl/Perf-Trace-Util/lib/Perf/Trace/* -m 644 -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace'; \
976 + 		$(INSTALL) scripts/perl/*.pl -m 644 -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl'; \
977 977 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/bin'; \
978 978 		$(INSTALL) scripts/perl/bin/* -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/bin'
979 979 endif
···
990 990 	$(INSTALL) $(DLFILTERS) '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/dlfilters';
991 991 	$(call QUIET_INSTALL, perf_completion-script) \
992 992 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(sysconfdir_SQ)/bash_completion.d'; \
993 - 		$(INSTALL) perf-completion.sh '$(DESTDIR_SQ)$(sysconfdir_SQ)/bash_completion.d/perf'
993 + 		$(INSTALL) perf-completion.sh -m 644 '$(DESTDIR_SQ)$(sysconfdir_SQ)/bash_completion.d/perf'
994 994 	$(call QUIET_INSTALL, perf-tip) \
995 995 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(tip_instdir_SQ)'; \
996 - 		$(INSTALL) Documentation/tips.txt -t '$(DESTDIR_SQ)$(tip_instdir_SQ)'
996 + 		$(INSTALL) Documentation/tips.txt -m 644 -t '$(DESTDIR_SQ)$(tip_instdir_SQ)'
997 997 
998 998 install-tests: all install-gtk
999 999 	$(call QUIET_INSTALL, tests) \
1000 1000 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests'; \
1001 - 		$(INSTALL) tests/attr.py '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests'; \
1001 + 		$(INSTALL) tests/attr.py -m 644 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests'; \
1002 1002 		$(INSTALL) tests/pe-file.exe* '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests'; \
1003 1003 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/attr'; \
1004 - 		$(INSTALL) tests/attr/* '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/attr'; \
1004 + 		$(INSTALL) tests/attr/* -m 644 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/attr'; \
1005 1005 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell'; \
1006 1006 		$(INSTALL) tests/shell/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell'; \
1007 1007 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/lib'; \
1008 - 		$(INSTALL) tests/shell/lib/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/lib'; \
1009 - 		$(INSTALL) tests/shell/lib/*.py '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/lib'
1008 + 		$(INSTALL) tests/shell/lib/*.sh -m 644 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/lib'; \
1009 + 		$(INSTALL) tests/shell/lib/*.py -m 644 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/lib'
1010 1010 
1011 1011 install-bin: install-tools install-tests install-traceevent-plugins
1012 1012 
+9 -3
tools/perf/builtin-c2c.c
···
146 146 
147 147 	c2c_he->cpuset = bitmap_zalloc(c2c.cpus_cnt);
148 148 	if (!c2c_he->cpuset)
149 - 		return NULL;
149 + 		goto out_free;
150 150 
151 151 	c2c_he->nodeset = bitmap_zalloc(c2c.nodes_cnt);
152 152 	if (!c2c_he->nodeset)
153 - 		return NULL;
153 + 		goto out_free;
154 154 
155 155 	c2c_he->node_stats = zalloc(c2c.nodes_cnt * sizeof(*c2c_he->node_stats));
156 156 	if (!c2c_he->node_stats)
157 - 		return NULL;
157 + 		goto out_free;
158 158 
159 159 	init_stats(&c2c_he->cstats.lcl_hitm);
160 160 	init_stats(&c2c_he->cstats.rmt_hitm);
···
163 163 	init_stats(&c2c_he->cstats.load);
164 164 
165 165 	return &c2c_he->he;
166 + 
167 + out_free:
168 + 	free(c2c_he->nodeset);
169 + 	free(c2c_he->cpuset);
170 + 	free(c2c_he);
171 + 	return NULL;
166 172 }
167 173 
168 174 static void c2c_he_free(void *he)
+1 -2
tools/perf/builtin-lock.c
···
1874 1874 		NULL
1875 1875 	};
1876 1876 	const char *const lock_subcommands[] = { "record", "report", "script",
1877 - 						 "info", "contention",
1878 - 						 "contention", NULL };
1877 + 						 "info", "contention", NULL };
1879 1878 	const char *lock_usage[] = {
1880 1879 		NULL,
1881 1880 		NULL
+26 -8
tools/perf/builtin-record.c
···
1906 1906 
1907 1907 	err = perf_event__synthesize_bpf_events(session, process_synthesized_event,
1908 1908 						machine, opts);
1909 - 	if (err < 0)
1909 + 	if (err < 0) {
1910 1910 		pr_warning("Couldn't synthesize bpf events.\n");
1911 + 		err = 0;
1912 + 	}
1911 1913 
1912 1914 	if (rec->opts.synth & PERF_SYNTH_CGROUP) {
1913 1915 		err = perf_event__synthesize_cgroups(tool, process_synthesized_event,
1914 1916 						     machine);
1915 - 		if (err < 0)
1917 + 		if (err < 0) {
1916 1918 			pr_warning("Couldn't synthesize cgroup events.\n");
1919 + 			err = 0;
1920 + 		}
1917 1921 	}
1918 1922 
1919 1923 	if (rec->opts.nr_threads_synthesize > 1) {
···
3362 3358 
3363 3359 struct option *record_options = __record_options;
3364 3360 
3365 - static void record__mmap_cpu_mask_init(struct mmap_cpu_mask *mask, struct perf_cpu_map *cpus)
3361 + static int record__mmap_cpu_mask_init(struct mmap_cpu_mask *mask, struct perf_cpu_map *cpus)
3366 3362 {
3367 3363 	struct perf_cpu cpu;
3368 3364 	int idx;
3369 3365 
3370 3366 	if (cpu_map__is_dummy(cpus))
3371 - 		return;
3367 + 		return 0;
3372 3368 
3373 - 	perf_cpu_map__for_each_cpu(cpu, idx, cpus)
3369 + 	perf_cpu_map__for_each_cpu(cpu, idx, cpus) {
3370 + 		/* Return -ENODEV if the input cpu is greater than max cpu */
3371 + 		if ((unsigned long)cpu.cpu > mask->nbits)
3372 + 			return -ENODEV;
3374 3373 		set_bit(cpu.cpu, mask->bits);
3374 + 	}
3375 + 
3376 + 	return 0;
3375 3377 }
3376 3378 
3377 3379 static int record__mmap_cpu_mask_init_spec(struct mmap_cpu_mask *mask, const char *mask_spec)
···
3389 3379 		return -ENOMEM;
3390 3380 
3391 3381 	bitmap_zero(mask->bits, mask->nbits);
3392 - 	record__mmap_cpu_mask_init(mask, cpus);
3382 + 	if (record__mmap_cpu_mask_init(mask, cpus))
3383 + 		return -ENODEV;
3384 + 
3393 3385 	perf_cpu_map__put(cpus);
3394 3386 
3395 3387 	return 0;
···
3473 3461 		pr_err("Failed to allocate CPUs mask\n");
3474 3462 		return ret;
3475 3463 	}
3476 - 	record__mmap_cpu_mask_init(&cpus_mask, cpus);
3464 + 
3465 + 	ret = record__mmap_cpu_mask_init(&cpus_mask, cpus);
3466 + 	if (ret) {
3467 + 		pr_err("Failed to init cpu mask\n");
3468 + 		goto out_free_cpu_mask;
3469 + 	}
3477 3470 
3478 3471 	ret = record__thread_mask_alloc(&full_mask, cpu__max_cpu().cpu);
3479 3472 	if (ret) {
···
3719 3702 	if (ret)
3720 3703 		return ret;
3721 3704 
3722 - 	record__mmap_cpu_mask_init(&rec->thread_masks->maps, cpus);
3705 + 	if (record__mmap_cpu_mask_init(&rec->thread_masks->maps, cpus))
3706 + 		return -ENODEV;
3723 3707 
3724 3708 	rec->nr_threads = 1;
3725 3709 
+5
tools/perf/builtin-script.c
···
445 445 	struct perf_event_attr *attr = &evsel->core.attr;
446 446 	bool allow_user_set;
447 447 
448 + 	if (evsel__is_dummy_event(evsel))
449 + 		return 0;
450 + 
448 451 	if (perf_header__has_feat(&session->header, HEADER_STAT))
449 452 		return 0;
450 453 
···
569 566 	struct evsel *evsel;
570 567 
571 568 	evlist__for_each_entry(evlist, evsel) {
569 + 		if (evsel__is_dummy_event(evsel))
570 + 			continue;
572 571 		if (output_type(evsel->core.attr.type) == (int)type)
573 572 			return evsel;
574 573 	}
+3 -2
tools/perf/builtin-stat.c
···
1932 1932 		free(str);
1933 1933 	}
1934 1934 
1935 + 	if (!stat_config.topdown_level)
1936 + 		stat_config.topdown_level = TOPDOWN_MAX_LEVEL;
1937 + 
1935 1938 	if (!evsel_list->core.nr_entries) {
1936 1939 		if (target__has_cpu(&target))
1937 1940 			default_attrs0[0].config = PERF_COUNT_SW_CPU_CLOCK;
···
1951 1948 		}
1952 1949 		if (evlist__add_default_attrs(evsel_list, default_attrs1) < 0)
1953 1950 			return -1;
1954 - 
1955 - 		stat_config.topdown_level = TOPDOWN_MAX_LEVEL;
1956 1951 		/* Platform specific attrs */
1957 1952 		if (evlist__add_default_attrs(evsel_list, default_null_attrs) < 0)
1958 1953 			return -1;
+2 -2
tools/perf/dlfilters/dlfilter-show-cycles.c
···
98 98 static void print_vals(__u64 cycles, __u64 delta)
99 99 {
100 100 	if (delta)
101 - 		printf("%10llu %10llu ", cycles, delta);
101 + 		printf("%10llu %10llu ", (unsigned long long)cycles, (unsigned long long)delta);
102 102 	else
103 - 		printf("%10llu %10s ", cycles, "");
103 + 		printf("%10llu %10s ", (unsigned long long)cycles, "");
104 104 }
105 105 
106 106 int filter_event(void *data, const struct perf_dlfilter_sample *sample, void *ctx)
+7 -1
tools/perf/util/affinity.c
···
49 49 {
50 50 	int cpu_set_size = get_cpu_set_size();
51 51 
52 - 	if (cpu == -1)
52 + 	/*
53 + 	 * Return early:
54 + 	 * - if cpu is -1
55 + 	 * - to restrict out-of-bounds access to sched_cpus
56 + 	 */
57 + 	if (cpu == -1 || cpu >= cpu_set_size * 8)
53 58 		return;
59 + 
54 60 	a->changed = true;
55 61 	set_bit(cpu, a->sched_cpus);
56 62 	/*
+11 -9
tools/perf/util/genelf.c
···
30 30 
31 31 #define BUILD_ID_URANDOM	/* different uuid for each run */
32 32 
33 - // FIXME, remove this and fix the deprecation warnings before its removed and
34 - // We'll break for good here...
35 - #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
36 - 
37 33 #ifdef HAVE_LIBCRYPTO_SUPPORT
38 34 
39 35 #define BUILD_ID_MD5
···
41 45 #endif
42 46 
43 47 #ifdef BUILD_ID_MD5
48 + #include <openssl/evp.h>
44 49 #include <openssl/md5.h>
45 50 #endif
46 51 #endif
···
139 142 static void
140 143 gen_build_id(struct buildid_note *note, unsigned long load_addr, const void *code, size_t csize)
141 144 {
142 - 	MD5_CTX context;
145 + 	EVP_MD_CTX *mdctx;
143 146 
144 147 	if (sizeof(note->build_id) < 16)
145 148 		errx(1, "build_id too small for MD5");
146 149 
147 - 	MD5_Init(&context);
148 - 	MD5_Update(&context, &load_addr, sizeof(load_addr));
149 - 	MD5_Update(&context, code, csize);
150 - 	MD5_Final((unsigned char *)note->build_id, &context);
150 + 	mdctx = EVP_MD_CTX_new();
151 + 	if (!mdctx)
152 + 		errx(2, "failed to create EVP_MD_CTX");
153 + 
154 + 	EVP_DigestInit_ex(mdctx, EVP_md5(), NULL);
155 + 	EVP_DigestUpdate(mdctx, &load_addr, sizeof(load_addr));
156 + 	EVP_DigestUpdate(mdctx, code, csize);
157 + 	EVP_DigestFinal_ex(mdctx, (unsigned char *)note->build_id, NULL);
158 + 	EVP_MD_CTX_free(mdctx);
151 159 }
152 160 
153 161 
+3
tools/perf/util/metricgroup.c
···
1655 1655 	struct evlist *perf_evlist = *(struct evlist **)opt->value;
1656 1656 	const struct pmu_events_table *table = pmu_events_table__find();
1657 1657 
1658 + 	if (!table)
1659 + 		return -EINVAL;
1660 + 
1658 1661 	return parse_groups(perf_evlist, str, metric_no_group,
1659 1662 			    metric_no_merge, NULL, metric_events, table);
1660 1663 }
+26 -10
tools/testing/selftests/netfilter/nft_conntrack_helper.sh
···
102 102 
103 103 	ip netns exec ${netns} conntrack -L -f $family -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
104 104 	if [ $? -ne 0 ] ; then
105 - 		echo "FAIL: ${netns} did not show attached helper $message" 1>&2
106 - 		ret=1
105 + 		if [ $autoassign -eq 0 ] ;then
106 + 			echo "FAIL: ${netns} did not show attached helper $message" 1>&2
107 + 			ret=1
108 + 		else
109 + 			echo "PASS: ${netns} did not show attached helper $message" 1>&2
110 + 		fi
111 + 	else
112 + 		if [ $autoassign -eq 0 ] ;then
113 + 			echo "PASS: ${netns} connection on port $port has ftp helper attached" 1>&2
114 + 		else
115 + 			echo "FAIL: ${netns} connection on port $port has ftp helper attached" 1>&2
116 + 			ret=1
117 + 		fi
107 118 	fi
108 119 
109 - 	echo "PASS: ${netns} connection on port $port has ftp helper attached" 1>&2
110 120 	return 0
111 121 }
112 122 
113 123 test_helper()
114 124 {
115 125 	local port=$1
116 - 	local msg=$2
126 + 	local autoassign=$2
127 + 
128 + 	if [ $autoassign -eq 0 ] ;then
129 + 		msg="set via ruleset"
130 + 	else
131 + 		msg="auto-assign"
132 + 	fi
117 133 
118 134 	sleep 3 | ip netns exec ${ns2} nc -w 2 -l -p $port > /dev/null &
119 135 
120 136 	sleep 1 | ip netns exec ${ns1} nc -w 2 10.0.1.2 $port > /dev/null &
121 137 	sleep 1
122 138 
123 - 	check_for_helper "$ns1" "ip $msg" $port
124 - 	check_for_helper "$ns2" "ip $msg" $port
139 + 	check_for_helper "$ns1" "ip $msg" $port $autoassign
140 + 	check_for_helper "$ns2" "ip $msg" $port $autoassign
125 141 
126 142 	wait
127 143 
···
189 173 	fi
190 174 fi
191 175 
192 - test_helper 2121 "set via ruleset"
193 - ip netns exec ${ns1} sysctl -q 'net.netfilter.nf_conntrack_helper=1'
194 - ip netns exec ${ns2} sysctl -q 'net.netfilter.nf_conntrack_helper=1'
195 - test_helper 21 "auto-assign"
176 + test_helper 2121 0
177 + ip netns exec ${ns1} sysctl -qe 'net.netfilter.nf_conntrack_helper=1'
178 + ip netns exec ${ns2} sysctl -qe 'net.netfilter.nf_conntrack_helper=1'
179 + test_helper 21 1
196 180 
197 181 exit $ret