Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ASoC: renesas: rz-ssi: Cleanups

Merge series from Claudiu <claudiu.beznea@tuxon.dev>:

This series adds cleanups for the Renesas RZ SSI driver.

+3658 -2060
+2
.mailmap
···
 Daniel Borkmann <daniel@iogearbox.net> <dborkmann@redhat.com>
 Daniel Borkmann <daniel@iogearbox.net> <dborkman@redhat.com>
 Daniel Borkmann <daniel@iogearbox.net> <dxchgb@gmail.com>
+Daniel Thompson <danielt@kernel.org> <daniel.thompson@linaro.org>
 Danilo Krummrich <dakr@kernel.org> <dakr@redhat.com>
 David Brownell <david-b@pacbell.net>
 David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org>
···
 Sven Eckelmann <sven@narfation.org> <sven.eckelmann@openmesh.com>
 Sven Eckelmann <sven@narfation.org> <sven@open-mesh.com>
 Sven Peter <sven@kernel.org> <sven@svenpeter.dev>
+Szymon Wilczek <swilczek.lx@gmail.com> <szymonwilczek@gmx.com>
 Takashi YOSHII <takashi.yoshii.zj@renesas.com>
 Tamizh Chelvam Raja <quic_tamizhr@quicinc.com> <tamizhr@codeaurora.org>
 Taniya Das <quic_tdas@quicinc.com> <tdas@codeaurora.org>
+35
Documentation/admin-guide/kernel-parameters.txt
···
 			for Movable pages. "nn[KMGTPE]", "nn%", and "mirror"
 			are exclusive, so you cannot specify multiple forms.

+	kfence.burst=	[MM,KFENCE] The number of additional successive
+			allocations to be attempted through KFENCE for each
+			sample interval.
+			Format: <unsigned integer>
+			Default: 0
+
+	kfence.check_on_panic=
+			[MM,KFENCE] Whether to check all KFENCE-managed objects'
+			canaries on panic.
+			Format: <bool>
+			Default: false
+
+	kfence.deferrable=
+			[MM,KFENCE] Whether to use a deferrable timer to trigger
+			allocations. This avoids forcing CPU wake-ups if the
+			system is idle, at the risk of a less predictable
+			sample interval.
+			Format: <bool>
+			Default: CONFIG_KFENCE_DEFERRABLE
+
+	kfence.sample_interval=
+			[MM,KFENCE] KFENCE's sample interval in milliseconds.
+			Format: <unsigned integer>
+			0 - Disable KFENCE.
+			>0 - Enabled KFENCE with given sample interval.
+			Default: CONFIG_KFENCE_SAMPLE_INTERVAL
+
+	kfence.skip_covered_thresh=
+			[MM,KFENCE] If pool utilization reaches this threshold
+			(pool usage%), KFENCE limits currently covered
+			allocations of the same source from further filling
+			up the pool.
+			Format: <unsigned integer>
+			Default: 75
+
 	kgdbdbgp=	[KGDB,HW,EARLY] kgdb over EHCI usb debug port.
 			Format: <Controller#>[,poll interval]
 			The controller # is the number of the ehci usb debug
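The parameters above are ordinary boot-time module parameters, so they can be combined on the kernel command line. A hypothetical example (the values are illustrative, not recommendations):

```
kfence.sample_interval=500 kfence.burst=2 kfence.check_on_panic=1
```

Since KFENCE is built in, the same knobs are typically also visible under /sys/module/kfence/parameters/ at runtime.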
+8
Documentation/admin-guide/sysctl/net.rst
···
 Maximum number of packets, queued on the INPUT side, when the interface
 receives packets faster than kernel can process them.

+qdisc_max_burst
+------------------
+
+Maximum number of packets that can be temporarily stored before
+reaching qdisc.
+
+Default: 1000
+
 netdev_rss_key
 --------------
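For reference, a knob documented in this file can be overridden persistently via sysctl. The exact key name below is an assumption (the section sits alongside net.core entries such as netdev_max_backlog); verify it with `sysctl -a` before relying on it:

```
# /etc/sysctl.d/90-qdisc.conf -- assumed key name, illustrative value
net.core.qdisc_max_burst = 2000
```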
+9 -1
Documentation/devicetree/bindings/i2c/brcm,iproc-i2c.yaml
···
       - brcm,iproc-nic-i2c

   reg:
-    maxItems: 1
+    minItems: 1
+    maxItems: 2

   clock-frequency:
     enum: [ 100000, 400000 ]
···
       contains:
         const: brcm,iproc-nic-i2c
     then:
+      properties:
+        reg:
+          minItems: 2
       required:
         - brcm,ape-hsls-addr-mask
+    else:
+      properties:
+        reg:
+          maxItems: 1

 unevaluatedProperties: false
+2 -15
Documentation/devicetree/bindings/phy/qcom,sc8280xp-qmp-pcie-phy.yaml
···

   clocks:
     minItems: 5
-    maxItems: 7
+    maxItems: 6

   clock-names:
     minItems: 5
···
       - enum: [rchng, refgen]
       - const: pipe
       - const: pipediv2
-      - const: phy_aux

   power-domains:
     maxItems: 1
···
       contains:
         enum:
           - qcom,glymur-qmp-gen5x4-pcie-phy
+          - qcom,qcs8300-qmp-gen4x2-pcie-phy
           - qcom,sa8775p-qmp-gen4x2-pcie-phy
           - qcom,sa8775p-qmp-gen4x4-pcie-phy
           - qcom,sc8280xp-qmp-gen3x1-pcie-phy
···
           minItems: 6
         clock-names:
           minItems: 6
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,qcs8300-qmp-gen4x2-pcie-phy
-    then:
-      properties:
-        clocks:
-          minItems: 7
-        clock-names:
-          minItems: 7

   - if:
       properties:
+4
Documentation/devicetree/bindings/sound/everest,es8316.yaml
···
   items:
     - const: mclk

+  interrupts:
+    maxItems: 1
+    description: Headphone detect interrupt
+
   port:
     $ref: audio-graph-port.yaml#
     unevaluatedProperties: false
+11
Documentation/devicetree/bindings/sound/realtek,rt5640.yaml
···
   reg:
     maxItems: 1

+  clocks:
+    maxItems: 1
+
+  clock-names:
+    const: mclk
+
   interrupts:
     maxItems: 1
     description: The CODEC's interrupt output.
···
         - 4 # Use GPIO2 for jack-detect
         - 5 # Use GPIO3 for jack-detect
         - 6 # Use GPIO4 for jack-detect
+        - 7 # Use HDA header for jack-detect

   realtek,jack-detect-not-inverted:
     description:
···
         - 1 # Scale current by 0.75
         - 2 # Scale current by 1.0
         - 3 # Scale current by 1.5
+
+  port:
+    $ref: audio-graph-port.yaml#
+    unevaluatedProperties: false

 required:
   - compatible
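A minimal sketch of a node using the newly documented clocks/clock-names properties. The I2C address, clock phandle, and clock specifier are placeholders for illustration, not values from the binding:

```
codec@1c {
        compatible = "realtek,rt5640";
        reg = <0x1c>;
        clocks = <&clk_controller 0>;   /* placeholder phandle/specifier */
        clock-names = "mclk";
};
```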
+3
Documentation/devicetree/bindings/sound/rockchip-spdif.yaml
···
   "#sound-dai-cells":
     const: 0

+  port:
+    $ref: /schemas/graph.yaml#/properties/port
+
 required:
   - compatible
   - reg
+2 -2
Documentation/devicetree/bindings/ufs/ufs-common.yaml
···
     enum: [1, 2]
     default: 2
     description:
-      Number of lanes available per direction. Note that it is assume same
-      number of lanes is used both directions at once.
+      Number of lanes available per direction. Note that it is assumed that
+      the same number of lanes are used in both directions at once.

   vdd-hba-supply:
     description:
+2 -2
Documentation/devicetree/bindings/usb/qcom,dwc3.yaml
···
     compatible:
       contains:
         enum:
-          - qcom,ipq5018-dwc3
           - qcom,ipq6018-dwc3
           - qcom,ipq8074-dwc3
           - qcom,msm8953-dwc3
···
     compatible:
       contains:
         enum:
+          - qcom,msm8994-dwc3
           - qcom,msm8996-dwc3
           - qcom,qcs404-dwc3
           - qcom,sdm660-dwc3
···
     compatible:
       contains:
         enum:
+          - qcom,ipq5018-dwc3
           - qcom,ipq5332-dwc3
     then:
       properties:
···
         enum:
           - qcom,ipq4019-dwc3
           - qcom,ipq8064-dwc3
-          - qcom,msm8994-dwc3
           - qcom,qcs615-dwc3
           - qcom,qcs8300-dwc3
           - qcom,qdu1000-dwc3
+2 -2
Documentation/devicetree/bindings/usb/qcom,snps-dwc3.yaml
···
     compatible:
       contains:
         enum:
-          - qcom,ipq5018-dwc3
           - qcom,ipq6018-dwc3
           - qcom,ipq8074-dwc3
           - qcom,msm8953-dwc3
···
     compatible:
       contains:
         enum:
+          - qcom,msm8994-dwc3
           - qcom,msm8996-dwc3
           - qcom,qcs404-dwc3
           - qcom,sdm660-dwc3
···
     compatible:
       contains:
         enum:
+          - qcom,ipq5018-dwc3
           - qcom,ipq5332-dwc3
     then:
       properties:
···
           - qcom,ipq4019-dwc3
           - qcom,ipq8064-dwc3
           - qcom,kaanapali-dwc3
-          - qcom,msm8994-dwc3
           - qcom,qcs615-dwc3
           - qcom,qcs8300-dwc3
           - qcom,qdu1000-dwc3
+175
Documentation/netlink/specs/dev-energymodel.yaml
··· (new file)
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
#
# Copyright (c) 2025 Valve Corporation.
#
---
name: dev-energymodel

doc: |
  Energy model netlink interface to notify its changes.

protocol: genetlink

uapi-header: linux/dev_energymodel.h

definitions:
  -
    type: flags
    name: perf-state-flags
    entries:
      -
        name: perf-state-inefficient
        doc: >-
          The performance state is inefficient. There is in this perf-domain,
          another performance state with a higher frequency but a lower or
          equal power cost.
  -
    type: flags
    name: perf-domain-flags
    entries:
      -
        name: perf-domain-microwatts
        doc: >-
          The power values are in micro-Watts or some other scale.
      -
        name: perf-domain-skip-inefficiencies
        doc: >-
          Skip inefficient states when estimating energy consumption.
      -
        name: perf-domain-artificial
        doc: >-
          The power values are artificial and might be created by platform
          missing real power information.

attribute-sets:
  -
    name: perf-domain
    doc: >-
      Information on a single performance domains.
    attributes:
      -
        name: pad
        type: pad
      -
        name: perf-domain-id
        type: u32
        doc: >-
          A unique ID number for each performance domain.
      -
        name: flags
        type: u64
        doc: >-
          Bitmask of performance domain flags.
        enum: perf-domain-flags
      -
        name: cpus
        type: u64
        multi-attr: true
        doc: >-
          CPUs that belong to this performance domain.
  -
    name: perf-table
    doc: >-
      Performance states table.
    attributes:
      -
        name: perf-domain-id
        type: u32
        doc: >-
          A unique ID number for each performance domain.
      -
        name: perf-state
        type: nest
        nested-attributes: perf-state
        multi-attr: true
  -
    name: perf-state
    doc: >-
      Performance state of a performance domain.
    attributes:
      -
        name: pad
        type: pad
      -
        name: performance
        type: u64
        doc: >-
          CPU performance (capacity) at a given frequency.
      -
        name: frequency
        type: u64
        doc: >-
          The frequency in KHz, for consistency with CPUFreq.
      -
        name: power
        type: u64
        doc: >-
          The power consumed at this level (by 1 CPU or by a registered
          device). It can be a total power: static and dynamic.
      -
        name: cost
        type: u64
        doc: >-
          The cost coefficient associated with this level, used during energy
          calculation. Equal to: power * max_frequency / frequency.
      -
        name: flags
        type: u64
        doc: >-
          Bitmask of performance state flags.
        enum: perf-state-flags

operations:
  list:
    -
      name: get-perf-domains
      attribute-set: perf-domain
      doc: Get the list of information for all performance domains.
      do:
        request:
          attributes:
            - perf-domain-id
        reply:
          attributes: &perf-domain-attrs
            - pad
            - perf-domain-id
            - flags
            - cpus
      dump:
        reply:
          attributes: *perf-domain-attrs
    -
      name: get-perf-table
      attribute-set: perf-table
      doc: Get the energy model table of a performance domain.
      do:
        request:
          attributes:
            - perf-domain-id
        reply:
          attributes:
            - perf-domain-id
            - perf-state
    -
      name: perf-domain-created
      doc: A performance domain is created.
      notify: get-perf-table
      mcgrp: event
    -
      name: perf-domain-updated
      doc: A performance domain is updated.
      notify: get-perf-table
      mcgrp: event
    -
      name: perf-domain-deleted
      doc: A performance domain is deleted.
      attribute-set: perf-table
      event:
        attributes:
          - perf-domain-id
      mcgrp: event

mcast-groups:
  list:
    -
      name: event
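The perf-state attribute set above defines cost as power * max_frequency / frequency. A small Python sketch of that formula (`em_costs` is a hypothetical helper; the table values are made up for illustration):

```python
def em_costs(states):
    """Compute the energy-model cost coefficient per performance state.

    states: iterable of (frequency_khz, power) tuples, in any order.
    Returns {frequency_khz: cost} with cost = power * max_frequency / frequency
    (integer arithmetic), matching the 'cost' attribute in the spec above.
    """
    max_freq = max(f for f, _ in states)
    return {f: p * max_freq // f for f, p in states}

# Made-up illustrative table: (frequency_khz, power).
table = [(600_000, 40_000), (1_200_000, 120_000), (1_800_000, 300_000)]
print(em_costs(table))  # {600000: 120000, 1200000: 180000, 1800000: 300000}
```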
-113
Documentation/netlink/specs/em.yaml
··· (entire file removed; previous content follows)
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)

name: em

doc: |
  Energy model netlink interface to notify its changes.

protocol: genetlink

uapi-header: linux/energy_model.h

attribute-sets:
  -
    name: pds
    attributes:
      -
        name: pd
        type: nest
        nested-attributes: pd
        multi-attr: true
  -
    name: pd
    attributes:
      -
        name: pad
        type: pad
      -
        name: pd-id
        type: u32
      -
        name: flags
        type: u64
      -
        name: cpus
        type: string
  -
    name: pd-table
    attributes:
      -
        name: pd-id
        type: u32
      -
        name: ps
        type: nest
        nested-attributes: ps
        multi-attr: true
  -
    name: ps
    attributes:
      -
        name: pad
        type: pad
      -
        name: performance
        type: u64
      -
        name: frequency
        type: u64
      -
        name: power
        type: u64
      -
        name: cost
        type: u64
      -
        name: flags
        type: u64

operations:
  list:
    -
      name: get-pds
      attribute-set: pds
      doc: Get the list of information for all performance domains.
      do:
        reply:
          attributes:
            - pd
    -
      name: get-pd-table
      attribute-set: pd-table
      doc: Get the energy model table of a performance domain.
      do:
        request:
          attributes:
            - pd-id
        reply:
          attributes:
            - pd-id
            - ps
    -
      name: pd-created
      doc: A performance domain is created.
      notify: get-pd-table
      mcgrp: event
    -
      name: pd-updated
      doc: A performance domain is updated.
      notify: get-pd-table
      mcgrp: event
    -
      name: pd-deleted
      doc: A performance domain is deleted.
      attribute-set: pd-table
      event:
        attributes:
          - pd-id
      mcgrp: event

mcast-groups:
  list:
    -
      name: event
+1 -1
Documentation/userspace-api/media/v4l/metafmt-arm-mali-c55.rst
···
     struct v4l2_isp_params_buffer *params =
         (struct v4l2_isp_params_buffer *)buffer;

-    params->version = MALI_C55_PARAM_BUFFER_V1;
+    params->version = V4L2_ISP_PARAMS_VERSION_V1;
     params->data_size = 0;

     void *data = (void *)params->data;
+11 -5
MAINTAINERS
···
 R:	Shuai Xue <xueshuai@linux.alibaba.com>
 L:	linux-acpi@vger.kernel.org
 F:	drivers/acpi/apei/
+F:	drivers/firmware/efi/cper*

 ACPI COMPONENT ARCHITECTURE (ACPICA)
 M:	"Rafael J. Wysocki" <rafael@kernel.org>
···

 CONTROL GROUP - CPUSET
 M:	Waiman Long <longman@redhat.com>
+R:	Chen Ridong <chenridong@huaweicloud.com>
 L:	cgroups@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
···
 M:	"Rafael J. Wysocki" <rafael@kernel.org>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
-F:	kernel/power/energy_model.c
-F:	include/linux/energy_model.h
+F:	Documentation/netlink/specs/dev-energymodel.yaml
 F:	Documentation/power/energy-model.rst
-F:	Documentation/netlink/specs/em.yaml
-F:	include/uapi/linux/energy_model.h
+F:	include/linux/energy_model.h
+F:	include/uapi/linux/dev_energymodel.h
 F:	kernel/power/em_netlink*.*
+F:	kernel/power/energy_model.c

 EPAPR HYPERVISOR BYTE CHANNEL DEVICE DRIVER
 M:	Laurentiu Tudor <laurentiu.tudor@nxp.com>
···
 F:	arch/x86/platform/efi/
 F:	drivers/firmware/efi/
 F:	include/linux/efi*.h
+X:	drivers/firmware/efi/cper*

 EXTERNAL CONNECTOR SUBSYSTEM (EXTCON)
 M:	MyungJoo Ham <myungjoo.ham@samsung.com>
···
 M:	Sathya Prakash <sathya.prakash@broadcom.com>
 M:	Sreekanth Reddy <sreekanth.reddy@broadcom.com>
 M:	Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>
+M:	Ranjan Kumar <ranjan.kumar@broadcom.com>
 L:	MPT-FusionLinux.pdl@broadcom.com
 L:	linux-scsi@vger.kernel.org
 S:	Supported
···
 M:	Sabrina Dubroca <sd@queasysnail.net>
 L:	netdev@vger.kernel.org
 S:	Maintained
+F:	Documentation/networking/tls*
 F:	include/net/tls.h
 F:	include/uapi/linux/tls.h
-F:	net/tls/*
+F:	net/tls/
+F:	tools/testing/selftests/net/tls.c

 NETWORKING [SOCKETS]
 M:	Eric Dumazet <edumazet@google.com>
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc6
 NAME = Baby Opossum Posse

 # *DOCUMENTATION*
+3
arch/loongarch/boot/dts/loongson-2k0500.dtsi
···
 		reg-names = "main", "isr0";

 		interrupt-controller;
+		#address-cells = <0>;
 		#interrupt-cells = <2>;
 		interrupt-parent = <&cpuintc>;
 		interrupts = <2>;
···
 		reg-names = "main", "isr0";

 		interrupt-controller;
+		#address-cells = <0>;
 		#interrupt-cells = <2>;
 		interrupt-parent = <&cpuintc>;
 		interrupts = <4>;
···
 		compatible = "loongson,ls2k0500-eiointc";
 		reg = <0x0 0x1fe11600 0x0 0xea00>;
 		interrupt-controller;
+		#address-cells = <0>;
 		#interrupt-cells = <1>;
 		interrupt-parent = <&cpuintc>;
 		interrupts = <3>;
+13 -18
arch/loongarch/boot/dts/loongson-2k1000.dtsi
···
 	};

 	/* i2c of the dvi eeprom edid */
-	i2c-gpio-0 {
+	i2c-0 {
 		compatible = "i2c-gpio";
 		scl-gpios = <&gpio0 0 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
 		sda-gpios = <&gpio0 1 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
···
 	};

 	/* i2c of the eeprom edid */
-	i2c-gpio-1 {
+	i2c-1 {
 		compatible = "i2c-gpio";
 		scl-gpios = <&gpio0 33 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
 		sda-gpios = <&gpio0 32 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
···
 		      <0x0 0x1fe01140 0x0 0x8>;
 		reg-names = "main", "isr0", "isr1";
 		interrupt-controller;
+		#address-cells = <0>;
 		#interrupt-cells = <2>;
 		interrupt-parent = <&cpuintc>;
 		interrupts = <2>;
···
 		      <0x0 0x1fe01148 0x0 0x8>;
 		reg-names = "main", "isr0", "isr1";
 		interrupt-controller;
+		#address-cells = <0>;
 		#interrupt-cells = <2>;
 		interrupt-parent = <&cpuintc>;
 		interrupts = <3>;
···

 		gmac0: ethernet@3,0 {
 			reg = <0x1800 0x0 0x0 0x0 0x0>;
-			interrupt-parent = <&liointc0>;
-			interrupts = <12 IRQ_TYPE_LEVEL_HIGH>,
-				     <13 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&liointc0 12 IRQ_TYPE_LEVEL_HIGH>,
+					      <&liointc0 13 IRQ_TYPE_LEVEL_HIGH>;
 			interrupt-names = "macirq", "eth_lpi";
 			status = "disabled";
 		};

 		gmac1: ethernet@3,1 {
 			reg = <0x1900 0x0 0x0 0x0 0x0>;
-			interrupt-parent = <&liointc0>;
-			interrupts = <14 IRQ_TYPE_LEVEL_HIGH>,
-				     <15 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&liointc0 14 IRQ_TYPE_LEVEL_HIGH>,
+					      <&liointc0 15 IRQ_TYPE_LEVEL_HIGH>;
 			interrupt-names = "macirq", "eth_lpi";
 			status = "disabled";
 		};

 		ehci0: usb@4,1 {
 			reg = <0x2100 0x0 0x0 0x0 0x0>;
-			interrupt-parent = <&liointc1>;
-			interrupts = <18 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&liointc1 18 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};

 		ohci0: usb@4,2 {
 			reg = <0x2200 0x0 0x0 0x0 0x0>;
-			interrupt-parent = <&liointc1>;
-			interrupts = <19 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&liointc1 19 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};

 		display@6,0 {
 			reg = <0x3000 0x0 0x0 0x0 0x0>;
-			interrupt-parent = <&liointc0>;
-			interrupts = <28 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&liointc0 28 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};

 		hda@7,0 {
 			reg = <0x3800 0x0 0x0 0x0 0x0>;
-			interrupt-parent = <&liointc0>;
-			interrupts = <4 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&liointc0 4 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};

 		sata: sata@8,0 {
 			reg = <0x4000 0x0 0x0 0x0 0x0>;
-			interrupt-parent = <&liointc0>;
-			interrupts = <19 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&liointc0 19 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};
+15 -20
arch/loongarch/boot/dts/loongson-2k2000.dtsi
···
 		reg = <0x0 0x1fe01400 0x0 0x64>;

 		interrupt-controller;
+		#address-cells = <0>;
 		#interrupt-cells = <2>;
 		interrupt-parent = <&cpuintc>;
 		interrupts = <2>;
···
 		compatible = "loongson,ls2k2000-eiointc";
 		reg = <0x0 0x1fe01600 0x0 0xea00>;
 		interrupt-controller;
+		#address-cells = <0>;
 		#interrupt-cells = <1>;
 		interrupt-parent = <&cpuintc>;
 		interrupts = <3>;
···
 		compatible = "loongson,pch-pic-1.0";
 		reg = <0x0 0x10000000 0x0 0x400>;
 		interrupt-controller;
+		#address-cells = <0>;
 		#interrupt-cells = <2>;
 		loongson,pic-base-vec = <0>;
 		interrupt-parent = <&eiointc>;
···

 		gmac0: ethernet@3,0 {
 			reg = <0x1800 0x0 0x0 0x0 0x0>;
-			interrupts = <12 IRQ_TYPE_LEVEL_HIGH>,
-				     <13 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&pic 12 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pic 13 IRQ_TYPE_LEVEL_HIGH>;
 			interrupt-names = "macirq", "eth_lpi";
-			interrupt-parent = <&pic>;
 			status = "disabled";
 		};

 		gmac1: ethernet@3,1 {
 			reg = <0x1900 0x0 0x0 0x0 0x0>;
-			interrupts = <14 IRQ_TYPE_LEVEL_HIGH>,
-				     <15 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&pic 14 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pic 15 IRQ_TYPE_LEVEL_HIGH>;
 			interrupt-names = "macirq", "eth_lpi";
-			interrupt-parent = <&pic>;
 			status = "disabled";
 		};

 		gmac2: ethernet@3,2 {
 			reg = <0x1a00 0x0 0x0 0x0 0x0>;
-			interrupts = <17 IRQ_TYPE_LEVEL_HIGH>,
-				     <18 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&pic 17 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pic 18 IRQ_TYPE_LEVEL_HIGH>;
 			interrupt-names = "macirq", "eth_lpi";
-			interrupt-parent = <&pic>;
 			status = "disabled";
 		};

 		xhci0: usb@4,0 {
 			reg = <0x2000 0x0 0x0 0x0 0x0>;
-			interrupts = <48 IRQ_TYPE_LEVEL_HIGH>;
-			interrupt-parent = <&pic>;
+			interrupts-extended = <&pic 48 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};

 		xhci1: usb@19,0 {
 			reg = <0xc800 0x0 0x0 0x0 0x0>;
-			interrupts = <22 IRQ_TYPE_LEVEL_HIGH>;
-			interrupt-parent = <&pic>;
+			interrupts-extended = <&pic 22 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};

 		display@6,1 {
 			reg = <0x3100 0x0 0x0 0x0 0x0>;
-			interrupts = <28 IRQ_TYPE_LEVEL_HIGH>;
-			interrupt-parent = <&pic>;
+			interrupts-extended = <&pic 28 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};

 		i2s@7,0 {
 			reg = <0x3800 0x0 0x0 0x0 0x0>;
-			interrupts = <78 IRQ_TYPE_LEVEL_HIGH>,
-				     <79 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&pic 78 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pic 79 IRQ_TYPE_LEVEL_HIGH>;
 			interrupt-names = "tx", "rx";
-			interrupt-parent = <&pic>;
 			status = "disabled";
 		};

 		sata: sata@8,0 {
 			reg = <0x4000 0x0 0x0 0x0 0x0>;
-			interrupts = <16 IRQ_TYPE_LEVEL_HIGH>;
-			interrupt-parent = <&pic>;
+			interrupts-extended = <&pic 16 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};
-8
arch/loongarch/kernel/head.S
···
 	LONG_LI		t1, CSR_STFILL
 	csrxchg		t0, t1, LOONGARCH_CSR_IMPCTL1
 #endif
-	/* Enable PG */
-	li.w		t0, 0xb0		# PLV=0, IE=0, PG=1
-	csrwr		t0, LOONGARCH_CSR_CRMD
-	li.w		t0, 0x04		# PLV=0, PIE=1, PWE=0
-	csrwr		t0, LOONGARCH_CSR_PRMD
-	li.w		t0, 0x00		# FPE=0, SXE=0, ASXE=0, BTE=0
-	csrwr		t0, LOONGARCH_CSR_EUEN
-
 	la.pcrel	t0, cpuboot_data
 	ld.d		sp, t0, CPU_BOOT_STACK
 	ld.d		tp, t0, CPU_BOOT_TINFO
+18 -3
arch/loongarch/kernel/perf_event.c
···
 	return pev;
 }

+static inline bool loongarch_pmu_event_requires_counter(const struct perf_event *event)
+{
+	switch (event->attr.type) {
+	case PERF_TYPE_HARDWARE:
+	case PERF_TYPE_HW_CACHE:
+	case PERF_TYPE_RAW:
+		return true;
+	default:
+		return false;
+	}
+}
+
 static int validate_group(struct perf_event *event)
 {
 	struct cpu_hw_events fake_cpuc;
···

 	memset(&fake_cpuc, 0, sizeof(fake_cpuc));

-	if (loongarch_pmu_alloc_counter(&fake_cpuc, &leader->hw) < 0)
+	if (loongarch_pmu_event_requires_counter(leader) &&
+	    loongarch_pmu_alloc_counter(&fake_cpuc, &leader->hw) < 0)
 		return -EINVAL;

 	for_each_sibling_event(sibling, leader) {
-		if (loongarch_pmu_alloc_counter(&fake_cpuc, &sibling->hw) < 0)
+		if (loongarch_pmu_event_requires_counter(sibling) &&
+		    loongarch_pmu_alloc_counter(&fake_cpuc, &sibling->hw) < 0)
 			return -EINVAL;
 	}

-	if (loongarch_pmu_alloc_counter(&fake_cpuc, &event->hw) < 0)
+	if (loongarch_pmu_event_requires_counter(event) &&
+	    loongarch_pmu_alloc_counter(&fake_cpuc, &event->hw) < 0)
 		return -EINVAL;

 	return 0;
+1
arch/loongarch/kvm/intc/eiointc.c
···
 	kvm_io_bus_unregister_dev(kvm, KVM_IOCSR_BUS, &eiointc->device);
 	kvm_io_bus_unregister_dev(kvm, KVM_IOCSR_BUS, &eiointc->device_vext);
 	kfree(eiointc);
+	kfree(dev);
 }

 static struct kvm_device_ops kvm_eiointc_dev_ops = {
+1
arch/loongarch/kvm/intc/ipi.c
···
 	ipi = kvm->arch.ipi;
 	kvm_io_bus_unregister_dev(kvm, KVM_IOCSR_BUS, &ipi->device);
 	kfree(ipi);
+	kfree(dev);
 }

 static struct kvm_device_ops kvm_ipi_dev_ops = {
+1
arch/loongarch/kvm/intc/pch_pic.c
···
 	/* unregister pch pic device and free it's memory */
 	kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, &s->device);
 	kfree(s);
+	kfree(dev);
 }

 static struct kvm_device_ops kvm_pch_pic_dev_ops = {
+23
arch/mips/mm/init.c
···
 static struct kcore_list kcore_kseg0;
 #endif

+static inline void __init highmem_init(void)
+{
+#ifdef CONFIG_HIGHMEM
+	unsigned long tmp;
+
+	/*
+	 * If CPU cannot support HIGHMEM discard the memory above highstart_pfn
+	 */
+	if (cpu_has_dc_aliases) {
+		memblock_remove(PFN_PHYS(highstart_pfn), -1);
+		return;
+	}
+
+	for (tmp = highstart_pfn; tmp < highend_pfn; tmp++) {
+		struct page *page = pfn_to_page(tmp);
+
+		if (!memblock_is_memory(PFN_PHYS(tmp)))
+			SetPageReserved(page);
+	}
+#endif
+}
+
 void __init arch_mm_preinit(void)
 {
 	/*
···

 	maar_init();
 	setup_zero_pages();	/* Setup zeroed pages.  */
+	highmem_init();

 #ifdef CONFIG_64BIT
 	if ((unsigned long) &_text > (unsigned long) CKSEG0)
+10 -5
arch/powerpc/kernel/watchdog.c
···
 #include <linux/delay.h>
 #include <linux/processor.h>
 #include <linux/smp.h>
+#include <linux/sys_info.h>

 #include <asm/interrupt.h>
 #include <asm/paca.h>
···
 	pr_emerg("CPU %d TB:%lld, last SMP heartbeat TB:%lld (%lldms ago)\n",
 		 cpu, tb, last_reset, tb_to_ns(tb - last_reset) / 1000000);

-	if (!sysctl_hardlockup_all_cpu_backtrace) {
+	if (sysctl_hardlockup_all_cpu_backtrace ||
+	    (hardlockup_si_mask & SYS_INFO_ALL_BT)) {
+		trigger_allbutcpu_cpu_backtrace(cpu);
+		cpumask_clear(&wd_smp_cpus_ipi);
+	} else {
 		/*
 		 * Try to trigger the stuck CPUs, unless we are going to
 		 * get a backtrace on all of them anyway.
···
 			smp_send_nmi_ipi(c, wd_lockup_ipi, 1000000);
 			__cpumask_clear_cpu(c, &wd_smp_cpus_ipi);
 		}
-	} else {
-		trigger_allbutcpu_cpu_backtrace(cpu);
-		cpumask_clear(&wd_smp_cpus_ipi);
 	}

+	sys_info(hardlockup_si_mask & ~SYS_INFO_ALL_BT);
 	if (hardlockup_panic)
 		nmi_panic(NULL, "Hard LOCKUP");
···

 	xchg(&__wd_nmi_output, 1); // see wd_lockup_ipi

-	if (sysctl_hardlockup_all_cpu_backtrace)
+	if (sysctl_hardlockup_all_cpu_backtrace ||
+	    (hardlockup_si_mask & SYS_INFO_ALL_BT))
 		trigger_allbutcpu_cpu_backtrace(cpu);

+	sys_info(hardlockup_si_mask & ~SYS_INFO_ALL_BT);
 	if (hardlockup_panic)
 		nmi_panic(regs, "Hard LOCKUP");
+2 -4
arch/riscv/net/bpf_jit_comp64.c
···

 	store_args(nr_arg_slots, args_off, ctx);

-	/* skip to actual body of traced function */
-	if (flags & BPF_TRAMP_F_ORIG_STACK)
-		orig_call += RV_FENTRY_NINSNS * 4;
-
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
 		emit_imm(RV_REG_A0, ctx->insns ? (const s64)im : RV_MAX_COUNT_IMM, ctx);
 		ret = emit_call((const u64)__bpf_tramp_enter, true, ctx);
···
 	}

 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+		/* skip to actual body of traced function */
+		orig_call += RV_FENTRY_NINSNS * 4;
 		restore_args(min_t(int, nr_arg_slots, RV_MAX_REG_ARGS), args_off, ctx);
 		restore_stack_args(nr_arg_slots - RV_MAX_REG_ARGS, args_off, stk_arg_off, ctx);
 		ret = emit_call((const u64)orig_call, true, ctx);
+17 -4
arch/x86/kernel/cpu/resctrl/core.c
···

 	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
 		return __get_mem_config_intel(&hw_res->r_resctrl);
-	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+		 boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
 		return __rdt_get_mem_config_amd(&hw_res->r_resctrl);

 	return false;
···
 {
 	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
 		rdt_init_res_defs_intel();
-	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+		 boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
 		rdt_init_res_defs_amd();
 }
···
 		c->x86_cache_occ_scale = ebx;
 		c->x86_cache_mbm_width_offset = eax & 0xff;

-		if (c->x86_vendor == X86_VENDOR_AMD && !c->x86_cache_mbm_width_offset)
-			c->x86_cache_mbm_width_offset = MBM_CNTR_WIDTH_OFFSET_AMD;
+		if (!c->x86_cache_mbm_width_offset) {
+			switch (c->x86_vendor) {
+			case X86_VENDOR_AMD:
+				c->x86_cache_mbm_width_offset = MBM_CNTR_WIDTH_OFFSET_AMD;
+				break;
+			case X86_VENDOR_HYGON:
+				c->x86_cache_mbm_width_offset = MBM_CNTR_WIDTH_OFFSET_HYGON;
+				break;
+			default:
+				/* Leave c->x86_cache_mbm_width_offset as 0 */
+				break;
+			}
+		}
 	}
 }
+3
arch/x86/kernel/cpu/resctrl/internal.h
···

 #define MBM_CNTR_WIDTH_OFFSET_AMD	20

+/* Hygon MBM counter width as an offset from MBM_CNTR_WIDTH_BASE */
+#define MBM_CNTR_WIDTH_OFFSET_HYGON	8
+
 #define RMID_VAL_ERROR			BIT_ULL(63)

 #define RMID_VAL_UNAVAIL		BIT_ULL(62)
+29 -3
arch/x86/kernel/fpu/core.c
···
 #ifdef CONFIG_X86_64
 void fpu_update_guest_xfd(struct fpu_guest *guest_fpu, u64 xfd)
 {
+	struct fpstate *fpstate = guest_fpu->fpstate;
+
 	fpregs_lock();
-	guest_fpu->fpstate->xfd = xfd;
-	if (guest_fpu->fpstate->in_use)
-		xfd_update_state(guest_fpu->fpstate);
+
+	/*
+	 * KVM's guest ABI is that setting XFD[i]=1 *can* immediately revert the
+	 * save state to its initial configuration. Likewise, KVM_GET_XSAVE does
+	 * the same as XSAVE and returns XSTATE_BV[i]=0 whenever XFD[i]=1.
+	 *
+	 * If the guest's FPU state is in hardware, just update XFD: the XSAVE
+	 * in fpu_swap_kvm_fpstate will clear XSTATE_BV[i] whenever XFD[i]=1.
+	 *
+	 * If however the guest's FPU state is NOT resident in hardware, clear
+	 * disabled components in XSTATE_BV now, or a subsequent XRSTOR will
+	 * attempt to load disabled components and generate #NM _in the host_.
+	 */
+	if (xfd && test_thread_flag(TIF_NEED_FPU_LOAD))
+		fpstate->regs.xsave.header.xfeatures &= ~xfd;
+
+	fpstate->xfd = xfd;
+	if (fpstate->in_use)
+		xfd_update_state(fpstate);
+
 	fpregs_unlock();
 }
 EXPORT_SYMBOL_FOR_KVM(fpu_update_guest_xfd);
···
 	}

 	if (ustate->xsave.header.xfeatures & ~xcr0)
+		return -EINVAL;
+
+	/*
+	 * Disabled features must be in their initial state, otherwise XRSTOR
+	 * causes an exception.
+	 */
+	if (WARN_ON_ONCE(ustate->xsave.header.xfeatures & kstate->xfd))
 		return -EINVAL;

 	/*
+16 -3
arch/x86/kernel/kvm.c
··· 89 89 struct swait_queue_head wq; 90 90 u32 token; 91 91 int cpu; 92 + bool dummy; 92 93 }; 93 94 94 95 static struct kvm_task_sleep_head { ··· 121 120 raw_spin_lock(&b->lock); 122 121 e = _find_apf_task(b, token); 123 122 if (e) { 124 - /* dummy entry exist -> wake up was delivered ahead of PF */ 125 - hlist_del(&e->link); 123 + struct kvm_task_sleep_node *dummy = NULL; 124 + 125 + /* 126 + * The entry can either be a 'dummy' entry (which is put on the 127 + * list when wake-up happens ahead of APF handling completion) 128 + * or a token from another task which should not be touched. 129 + */ 130 + if (e->dummy) { 131 + hlist_del(&e->link); 132 + dummy = e; 133 + } 134 + 126 135 raw_spin_unlock(&b->lock); 127 - kfree(e); 136 + kfree(dummy); 128 137 return false; 129 138 } 130 139 131 140 n->token = token; 132 141 n->cpu = smp_processor_id(); 142 + n->dummy = false; 133 143 init_swait_queue_head(&n->wq); 134 144 hlist_add_head(&n->link, &b->list); 135 145 raw_spin_unlock(&b->lock); ··· 243 231 } 244 232 dummy->token = token; 245 233 dummy->cpu = smp_processor_id(); 234 + dummy->dummy = true; 246 235 init_swait_queue_head(&dummy->wq); 247 236 hlist_add_head(&dummy->link, &b->list); 248 237 dummy = NULL;
+9
arch/x86/kvm/x86.c
··· 5807 5807 static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu, 5808 5808 struct kvm_xsave *guest_xsave) 5809 5809 { 5810 + union fpregs_state *xstate = (union fpregs_state *)guest_xsave->region; 5811 + 5810 5812 if (fpstate_is_confidential(&vcpu->arch.guest_fpu)) 5811 5813 return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0; 5814 + 5815 + /* 5816 + * For backwards compatibility, do not expect disabled features to be in 5817 + * their initial state. XSTATE_BV[i] must still be cleared whenever 5818 + * XFD[i]=1, or XRSTOR would cause a #NM. 5819 + */ 5820 + xstate->xsave.header.xfeatures &= ~vcpu->arch.guest_fpu.fpstate->xfd; 5812 5821 5813 5822 return fpu_copy_uabi_to_guest_fpstate(&vcpu->arch.guest_fpu, 5814 5823 guest_xsave->region,
+5 -5
arch/x86/mm/kaslr.c
··· 115 115 116 116 /* 117 117 * Adapt physical memory region size based on available memory, 118 - * except when CONFIG_PCI_P2PDMA is enabled. P2PDMA exposes the 119 - * device BAR space assuming the direct map space is large enough 120 - * for creating a ZONE_DEVICE mapping in the direct map corresponding 121 - * to the physical BAR address. 118 + * except when CONFIG_ZONE_DEVICE is enabled. ZONE_DEVICE wants to map 119 + * any physical address into the direct-map. KASLR wants to reliably 120 + * steal some physical address bits. Those design choices are in direct 121 + * conflict. 122 122 */ 123 - if (!IS_ENABLED(CONFIG_PCI_P2PDMA) && (memory_tb < kaslr_regions[0].size_tb)) 123 + if (!IS_ENABLED(CONFIG_ZONE_DEVICE) && (memory_tb < kaslr_regions[0].size_tb)) 124 124 kaslr_regions[0].size_tb = memory_tb; 125 125 126 126 /*
+1 -1
block/bio-integrity-auto.c
··· 140 140 return true; 141 141 set_flags = false; 142 142 gfp |= __GFP_ZERO; 143 - } else if (bi->csum_type == BLK_INTEGRITY_CSUM_NONE) 143 + } else if (bi->metadata_size > bi->pi_tuple_size) 144 144 gfp |= __GFP_ZERO; 145 145 break; 146 146 default:
+7 -2
drivers/acpi/x86/s2idle.c
··· 28 28 module_param(sleep_no_lps0, bool, 0644); 29 29 MODULE_PARM_DESC(sleep_no_lps0, "Do not use the special LPS0 device interface"); 30 30 31 + static bool check_lps0_constraints __read_mostly; 32 + module_param(check_lps0_constraints, bool, 0644); 33 + MODULE_PARM_DESC(check_lps0_constraints, "Check LPS0 device constraints"); 34 + 31 35 static const struct acpi_device_id lps0_device_ids[] = { 32 36 {"PNP0D80", }, 33 37 {"", }, ··· 519 515 520 516 static int acpi_s2idle_begin_lps0(void) 521 517 { 522 - if (pm_debug_messages_on && !lpi_constraints_table) { 518 + if (lps0_device_handle && !sleep_no_lps0 && check_lps0_constraints && 519 + !lpi_constraints_table) { 523 520 if (acpi_s2idle_vendor_amd()) 524 521 lpi_device_get_constraints_amd(); 525 522 else ··· 544 539 if (!lps0_device_handle || sleep_no_lps0) 545 540 return 0; 546 541 547 - if (pm_debug_messages_on) 542 + if (check_lps0_constraints) 548 543 lpi_check_constraints(); 549 544 550 545 /* Screen off */
+11 -1
drivers/block/null_blk/main.c
··· 665 665 configfs_add_default_group(&dev->init_hctx_fault_config.group, &dev->group); 666 666 } 667 667 668 + static void nullb_del_fault_config(struct nullb_device *dev) 669 + { 670 + config_item_put(&dev->init_hctx_fault_config.group.cg_item); 671 + config_item_put(&dev->requeue_config.group.cg_item); 672 + config_item_put(&dev->timeout_config.group.cg_item); 673 + } 674 + 668 675 #else 669 676 670 677 static void nullb_add_fault_config(struct nullb_device *dev) 671 678 { 672 679 } 673 680 681 + static void nullb_del_fault_config(struct nullb_device *dev) 682 + { 683 + } 674 684 #endif 675 685 676 686 static struct ··· 712 702 null_del_dev(dev->nullb); 713 703 mutex_unlock(&lock); 714 704 } 715 - 705 + nullb_del_fault_config(dev); 716 706 config_item_put(item); 717 707 } 718 708
-1
drivers/block/rnbd/rnbd-clt.c
··· 1662 1662 /* To avoid deadlock firstly remove itself */ 1663 1663 sysfs_remove_file_self(&dev->kobj, sysfs_self); 1664 1664 kobject_del(&dev->kobj); 1665 - kobject_put(&dev->kobj); 1666 1665 } 1667 1666 } 1668 1667
+9 -2
drivers/cxl/acpi.c
··· 75 75 76 76 static u64 cxl_apply_xor_maps(struct cxl_root_decoder *cxlrd, u64 addr) 77 77 { 78 - struct cxl_cxims_data *cximsd = cxlrd->platform_data; 78 + int hbiw = cxlrd->cxlsd.nr_targets; 79 + struct cxl_cxims_data *cximsd; 79 80 80 - return cxl_do_xormap_calc(cximsd, addr, cxlrd->cxlsd.nr_targets); 81 + /* No xormaps for host bridge interleave ways of 1 or 3 */ 82 + if (hbiw == 1 || hbiw == 3) 83 + return addr; 84 + 85 + cximsd = cxlrd->platform_data; 86 + 87 + return cxl_do_xormap_calc(cximsd, addr, hbiw); 81 88 } 82 89 83 90 struct cxl_cxims_context {
+2 -2
drivers/cxl/core/hdm.c
··· 403 403 * is not set. 404 404 */ 405 405 if (cxled->part < 0) 406 - for (int i = 0; cxlds->nr_partitions; i++) 406 + for (int i = 0; i < cxlds->nr_partitions; i++) 407 407 if (resource_contains(&cxlds->part[i].res, res)) { 408 408 cxled->part = i; 409 409 break; ··· 530 530 531 531 resource_size_t cxl_dpa_resource_start(struct cxl_endpoint_decoder *cxled) 532 532 { 533 - resource_size_t base = -1; 533 + resource_size_t base = RESOURCE_SIZE_MAX; 534 534 535 535 lockdep_assert_held(&cxl_rwsem.dpa); 536 536 if (cxled->dpa_res)
+1 -1
drivers/cxl/core/port.c
··· 1590 1590 cxlsd->target[i] = dport; 1591 1591 dev_dbg(dev, "dport%d found in target list, index %d\n", 1592 1592 dport->port_id, i); 1593 - return 1; 1593 + return 0; 1594 1594 } 1595 1595 } 1596 1596
+27 -9
drivers/cxl/core/region.c
··· 759 759 ACQUIRE(rwsem_read_intr, rwsem)(&cxl_rwsem.region); 760 760 if ((rc = ACQUIRE_ERR(rwsem_read_intr, &rwsem))) 761 761 return rc; 762 - return sysfs_emit(buf, "%#llx\n", p->cache_size); 762 + return sysfs_emit(buf, "%pap\n", &p->cache_size); 763 763 } 764 764 static DEVICE_ATTR_RO(extended_linear_cache_size); 765 765 ··· 3118 3118 struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent); 3119 3119 struct cxl_region_params *p = &cxlr->params; 3120 3120 struct cxl_endpoint_decoder *cxled = NULL; 3121 - u64 dpa_offset, hpa_offset, hpa; 3121 + u64 base, dpa_offset, hpa_offset, hpa; 3122 3122 u16 eig = 0; 3123 3123 u8 eiw = 0; 3124 3124 int pos; ··· 3136 3136 ways_to_eiw(p->interleave_ways, &eiw); 3137 3137 granularity_to_eig(p->interleave_granularity, &eig); 3138 3138 3139 - dpa_offset = dpa - cxl_dpa_resource_start(cxled); 3139 + base = cxl_dpa_resource_start(cxled); 3140 + if (base == RESOURCE_SIZE_MAX) 3141 + return ULLONG_MAX; 3142 + 3143 + dpa_offset = dpa - base; 3140 3144 hpa_offset = cxl_calculate_hpa_offset(dpa_offset, pos, eiw, eig); 3145 + if (hpa_offset == ULLONG_MAX) 3146 + return ULLONG_MAX; 3141 3147 3142 3148 /* Apply the hpa_offset to the region base address */ 3143 3149 hpa = hpa_offset + p->res->start + p->cache_size; ··· 3151 3145 /* Root decoder translation overrides typical modulo decode */ 3152 3146 if (cxlrd->ops.hpa_to_spa) 3153 3147 hpa = cxlrd->ops.hpa_to_spa(cxlrd, hpa); 3148 + 3149 + if (hpa == ULLONG_MAX) 3150 + return ULLONG_MAX; 3154 3151 3155 3152 if (!cxl_resource_contains_addr(p->res, hpa)) { 3156 3153 dev_dbg(&cxlr->dev, ··· 3179 3170 struct cxl_region_params *p = &cxlr->params; 3180 3171 struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent); 3181 3172 struct cxl_endpoint_decoder *cxled; 3182 - u64 hpa, hpa_offset, dpa_offset; 3173 + u64 hpa_offset = offset; 3174 + u64 dpa, dpa_offset; 3183 3175 u16 eig = 0; 3184 3176 u8 eiw = 0; 3185 3177 int pos; ··· 3197 3187 * CXL HPA is assumed to equal
SPA. 3198 3188 */ 3199 3189 if (cxlrd->ops.spa_to_hpa) { 3200 - hpa = cxlrd->ops.spa_to_hpa(cxlrd, p->res->start + offset); 3201 - hpa_offset = hpa - p->res->start; 3202 - } else { 3203 - hpa_offset = offset; 3190 + hpa_offset = cxlrd->ops.spa_to_hpa(cxlrd, p->res->start + offset); 3191 + if (hpa_offset == ULLONG_MAX) { 3192 + dev_dbg(&cxlr->dev, "HPA not found for %pr offset %#llx\n", 3193 + p->res, offset); 3194 + return -ENXIO; 3195 + } 3196 + hpa_offset -= p->res->start; 3204 3197 } 3205 3198 3206 3199 pos = cxl_calculate_position(hpa_offset, eiw, eig); ··· 3220 3207 cxled = p->targets[i]; 3221 3208 if (cxled->pos != pos) 3222 3209 continue; 3210 + 3211 + dpa = cxl_dpa_resource_start(cxled); 3212 + if (dpa != RESOURCE_SIZE_MAX) 3213 + dpa += dpa_offset; 3214 + 3223 3215 result->cxlmd = cxled_to_memdev(cxled); 3224 - result->dpa = cxl_dpa_resource_start(cxled) + dpa_offset; 3216 + result->dpa = dpa; 3225 3217 3226 3218 return 0; 3227 3219 }
+6 -4
drivers/dax/dax-private.h
··· 67 67 /** 68 68 * struct dev_dax - instance data for a subdivision of a dax region, and 69 69 * data while the device is activated in the driver. 70 - * @region - parent region 71 - * @dax_dev - core dax functionality 70 + * @region: parent region 71 + * @dax_dev: core dax functionality 72 + * @align: alignment of this instance 72 73 * @target_node: effective numa node if dev_dax memory range is onlined 73 74 * @dyn_id: is this a dynamic or statically created instance 74 75 * @id: ida allocated id when the dax_region is not static 75 76 * @ida: mapping id allocator 76 - * @dev - device core 77 - * @pgmap - pgmap for memmap setup / lifetime (driver owned) 77 + * @dev: device core 78 + * @pgmap: pgmap for memmap setup / lifetime (driver owned) 79 + * @memmap_on_memory: allow kmem to put the memmap in the memory 78 80 * @nr_range: size of @ranges 79 81 * @ranges: range tuples of memory used 80 82 */
+1
drivers/dma/apple-admac.c
··· 936 936 } 937 937 938 938 static const struct of_device_id admac_of_match[] = { 939 + { .compatible = "apple,t8103-admac", }, 939 940 { .compatible = "apple,admac", }, 940 941 { } 941 942 };
+7 -2
drivers/dma/at_hdmac.c
··· 1765 1765 static void atc_free_chan_resources(struct dma_chan *chan) 1766 1766 { 1767 1767 struct at_dma_chan *atchan = to_at_dma_chan(chan); 1768 + struct at_dma_slave *atslave; 1768 1769 1769 1770 BUG_ON(atc_chan_is_enabled(atchan)); 1770 1771 ··· 1775 1774 /* 1776 1775 * Free atslave allocated in at_dma_xlate() 1777 1776 */ 1778 - kfree(chan->private); 1779 - chan->private = NULL; 1777 + atslave = chan->private; 1778 + if (atslave) { 1779 + put_device(atslave->dma_dev); 1780 + kfree(atslave); 1781 + chan->private = NULL; 1782 + } 1780 1783 1781 1784 dev_vdbg(chan2dev(chan), "free_chan_resources: done\n"); 1782 1785 }
+5 -1
drivers/dma/bcm-sba-raid.c
··· 1699 1699 /* Prealloc channel resource */ 1700 1700 ret = sba_prealloc_channel_resources(sba); 1701 1701 if (ret) 1702 - goto fail_free_mchan; 1702 + goto fail_put_mbox; 1703 1703 1704 1704 /* Check availability of debugfs */ 1705 1705 if (!debugfs_initialized()) ··· 1729 1729 fail_free_resources: 1730 1730 debugfs_remove_recursive(sba->root); 1731 1731 sba_freeup_channel_resources(sba); 1732 + fail_put_mbox: 1733 + put_device(sba->mbox_dev); 1732 1734 fail_free_mchan: 1733 1735 mbox_free_channel(sba->mchan); 1734 1736 return ret; ··· 1745 1743 debugfs_remove_recursive(sba->root); 1746 1744 1747 1745 sba_freeup_channel_resources(sba); 1746 + 1747 + put_device(sba->mbox_dev); 1748 1748 1749 1749 mbox_free_channel(sba->mchan); 1750 1750 }
+10 -7
drivers/dma/cv1800b-dmamux.c
··· 102 102 struct llist_node *node; 103 103 unsigned long flags; 104 104 unsigned int chid, devid, cpuid; 105 - int ret; 105 + int ret = -EINVAL; 106 106 107 107 if (dma_spec->args_count != DMAMUX_NCELLS) { 108 108 dev_err(&pdev->dev, "invalid number of dma mux args\n"); 109 - return ERR_PTR(-EINVAL); 109 + goto err_put_pdev; 110 110 } 111 111 112 112 devid = dma_spec->args[0]; ··· 115 115 116 116 if (devid > MAX_DMA_MAPPING_ID) { 117 117 dev_err(&pdev->dev, "invalid device id: %u\n", devid); 118 - return ERR_PTR(-EINVAL); 118 + goto err_put_pdev; 119 119 } 120 120 121 121 if (cpuid > MAX_DMA_CPU_ID) { 122 122 dev_err(&pdev->dev, "invalid cpu id: %u\n", cpuid); 123 - return ERR_PTR(-EINVAL); 123 + goto err_put_pdev; 124 124 } 125 125 126 126 dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0); 127 127 if (!dma_spec->np) { 128 128 dev_err(&pdev->dev, "can't get dma master\n"); 129 - return ERR_PTR(-EINVAL); 129 + goto err_put_pdev; 130 130 } 131 131 132 132 spin_lock_irqsave(&dmamux->lock, flags); ··· 136 136 if (map->peripheral == devid && map->cpu == cpuid) 137 137 goto found; 138 138 } 139 - 140 - ret = -EINVAL; 141 139 goto failed; 142 140 } else { 143 141 node = llist_del_first(&dmamux->free_maps); ··· 169 171 dev_dbg(&pdev->dev, "register channel %u for req %u (cpu %u)\n", 170 172 chid, devid, cpuid); 171 173 174 + put_device(&pdev->dev); 175 + 172 176 return map; 173 177 174 178 failed: 175 179 spin_unlock_irqrestore(&dmamux->lock, flags); 176 180 of_node_put(dma_spec->np); 177 181 dev_err(&pdev->dev, "errno %d\n", ret); 182 + err_put_pdev: 183 + put_device(&pdev->dev); 184 + 178 185 return ERR_PTR(ret); 179 186 } 180 187
+3 -1
drivers/dma/dw/rzn1-dmamux.c
··· 90 90 91 91 if (test_and_set_bit(map->req_idx, dmamux->used_chans)) { 92 92 ret = -EBUSY; 93 - goto free_map; 93 + goto put_dma_spec_np; 94 94 } 95 95 96 96 mask = BIT(map->req_idx); ··· 103 103 104 104 clear_bitmap: 105 105 clear_bit(map->req_idx, dmamux->used_chans); 106 + put_dma_spec_np: 107 + of_node_put(dma_spec->np); 106 108 free_map: 107 109 kfree(map); 108 110 put_device:
+1
drivers/dma/fsl-edma-common.c
··· 873 873 free_irq(fsl_chan->txirq, fsl_chan); 874 874 err_txirq: 875 875 dma_pool_destroy(fsl_chan->tcd_pool); 876 + clk_disable_unprepare(fsl_chan->clk); 876 877 877 878 return ret; 878 879 }
+19 -4
drivers/dma/idxd/compat.c
··· 20 20 int rc = -ENODEV; 21 21 22 22 dev = bus_find_device_by_name(bus, NULL, buf); 23 - if (dev && dev->driver) { 23 + if (!dev) 24 + return -ENODEV; 25 + 26 + if (dev->driver) { 24 27 device_driver_detach(dev); 25 28 rc = count; 26 29 } 30 + 31 + put_device(dev); 27 32 28 33 return rc; 29 34 } ··· 43 38 struct idxd_dev *idxd_dev; 44 39 45 40 dev = bus_find_device_by_name(bus, NULL, buf); 46 - if (!dev || dev->driver || drv != &dsa_drv.drv) 41 + if (!dev) 47 42 return -ENODEV; 43 + 44 + if (dev->driver || drv != &dsa_drv.drv) 45 + goto err_put_dev; 48 46 49 47 idxd_dev = confdev_to_idxd_dev(dev); 50 48 if (is_idxd_dev(idxd_dev)) { ··· 61 53 alt_drv = driver_find("user", bus); 62 54 } 63 55 if (!alt_drv) 64 - return -ENODEV; 56 + goto err_put_dev; 65 57 66 58 rc = device_driver_attach(alt_drv, dev); 67 59 if (rc < 0) 68 - return rc; 60 + goto err_put_dev; 61 + 62 + put_device(dev); 69 63 70 64 return count; 65 + 66 + err_put_dev: 67 + put_device(dev); 68 + 69 + return rc; 71 70 } 72 71 static DRIVER_ATTR_IGNORE_LOCKDEP(bind, 0200, NULL, bind_store); 73 72
+14 -5
drivers/dma/lpc18xx-dmamux.c
··· 57 57 struct lpc18xx_dmamux_data *dmamux = platform_get_drvdata(pdev); 58 58 unsigned long flags; 59 59 unsigned mux; 60 + int ret = -EINVAL; 60 61 61 62 if (dma_spec->args_count != 3) { 62 63 dev_err(&pdev->dev, "invalid number of dma mux args\n"); 63 - return ERR_PTR(-EINVAL); 64 + goto err_put_pdev; 64 65 } 65 66 66 67 mux = dma_spec->args[0]; 67 68 if (mux >= dmamux->dma_master_requests) { 68 69 dev_err(&pdev->dev, "invalid mux number: %d\n", 69 70 dma_spec->args[0]); 70 - return ERR_PTR(-EINVAL); 71 + goto err_put_pdev; 71 72 } 72 73 73 74 if (dma_spec->args[1] > LPC18XX_DMAMUX_MAX_VAL) { 74 75 dev_err(&pdev->dev, "invalid dma mux value: %d\n", 75 76 dma_spec->args[1]); 76 - return ERR_PTR(-EINVAL); 77 + goto err_put_pdev; 77 78 } 78 79 79 80 /* The of_node_put() will be done in the core for the node */ 80 81 dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0); 81 82 if (!dma_spec->np) { 82 83 dev_err(&pdev->dev, "can't get dma master\n"); 83 - return ERR_PTR(-EINVAL); 84 + goto err_put_pdev; 84 85 } 85 86 86 87 spin_lock_irqsave(&dmamux->lock, flags); ··· 90 89 dev_err(&pdev->dev, "dma request %u busy with %u.%u\n", 91 90 mux, mux, dmamux->muxes[mux].value); 92 91 of_node_put(dma_spec->np); 93 - return ERR_PTR(-EBUSY); 92 + ret = -EBUSY; 93 + goto err_put_pdev; 94 94 } 95 95 96 96 dmamux->muxes[mux].busy = true; ··· 108 106 dev_dbg(&pdev->dev, "mapping dmamux %u.%u to dma request %u\n", mux, 109 107 dmamux->muxes[mux].value, mux); 110 108 109 + put_device(&pdev->dev); 110 + 111 111 return &dmamux->muxes[mux]; 112 + 113 + err_put_pdev: 114 + put_device(&pdev->dev); 115 + 116 + return ERR_PTR(ret); 112 117 } 113 118 114 119 static int lpc18xx_dmamux_probe(struct platform_device *pdev)
+14 -5
drivers/dma/lpc32xx-dmamux.c
··· 95 95 struct lpc32xx_dmamux_data *dmamux = platform_get_drvdata(pdev); 96 96 unsigned long flags; 97 97 struct lpc32xx_dmamux *mux = NULL; 98 + int ret = -EINVAL; 98 99 int i; 99 100 100 101 if (dma_spec->args_count != 3) { 101 102 dev_err(&pdev->dev, "invalid number of dma mux args\n"); 102 - return ERR_PTR(-EINVAL); 103 + goto err_put_pdev; 103 104 } 104 105 105 106 for (i = 0; i < ARRAY_SIZE(lpc32xx_muxes); i++) { ··· 112 111 if (!mux) { 113 112 dev_err(&pdev->dev, "invalid mux request number: %d\n", 114 113 dma_spec->args[0]); 115 - return ERR_PTR(-EINVAL); 114 + goto err_put_pdev; 116 115 } 117 116 118 117 if (dma_spec->args[2] > 1) { 119 118 dev_err(&pdev->dev, "invalid dma mux value: %d\n", 120 119 dma_spec->args[1]); 121 - return ERR_PTR(-EINVAL); 120 + goto err_put_pdev; 122 121 } 123 122 124 123 /* The of_node_put() will be done in the core for the node */ 125 124 dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0); 126 125 if (!dma_spec->np) { 127 126 dev_err(&pdev->dev, "can't get dma master\n"); 128 - return ERR_PTR(-EINVAL); 127 + goto err_put_pdev; 129 128 } 130 129 131 130 spin_lock_irqsave(&dmamux->lock, flags); ··· 134 133 dev_err(dev, "dma request signal %d busy, routed to %s\n", 135 134 mux->signal, mux->muxval ? mux->name_sel1 : mux->name_sel1); 136 135 of_node_put(dma_spec->np); 137 - return ERR_PTR(-EBUSY); 136 + ret = -EBUSY; 137 + goto err_put_pdev; 138 138 } 139 139 140 140 mux->busy = true; ··· 150 148 dev_dbg(dev, "dma request signal %d routed to %s\n", 151 149 mux->signal, mux->muxval ? mux->name_sel1 : mux->name_sel1); 152 150 151 + put_device(&pdev->dev); 152 + 153 153 return mux; 154 + 155 + err_put_pdev: 156 + put_device(&pdev->dev); 157 + 158 + return ERR_PTR(ret); 154 159 } 155 160 156 161 static int lpc32xx_dmamux_probe(struct platform_device *pdev)
+14 -12
drivers/dma/mmp_pdma.c
··· 152 152 * 153 153 * Controller Configuration: 154 154 * @run_bits: Control bits in DCSR register for channel start/stop 155 - * @dma_mask: DMA addressing capability of controller. 0 to use OF/platform 156 - * settings, or explicit mask like DMA_BIT_MASK(32/64) 155 + * @dma_width: DMA addressing width in bits (32 or 64). Determines the 156 + * DMA mask capability of the controller hardware. 157 157 */ 158 158 struct mmp_pdma_ops { 159 159 /* Hardware Register Operations */ ··· 173 173 174 174 /* Controller Configuration */ 175 175 u32 run_bits; 176 - u64 dma_mask; 176 + u32 dma_width; 177 177 }; 178 178 179 179 struct mmp_pdma_device { ··· 928 928 { 929 929 struct mmp_pdma_desc_sw *sw; 930 930 struct mmp_pdma_device *pdev = to_mmp_pdma_dev(chan->chan.device); 931 + unsigned long flags; 931 932 u64 curr; 932 933 u32 residue = 0; 933 934 bool passed = false; ··· 945 944 curr = pdev->ops->read_dst_addr(chan->phy); 946 945 else 947 946 curr = pdev->ops->read_src_addr(chan->phy); 947 + 948 + spin_lock_irqsave(&chan->desc_lock, flags); 948 949 949 950 list_for_each_entry(sw, &chan->chain_running, node) { 950 951 u64 start, end; ··· 992 989 continue; 993 990 994 991 if (sw->async_tx.cookie == cookie) { 992 + spin_unlock_irqrestore(&chan->desc_lock, flags); 995 993 return residue; 996 994 } else { 997 995 residue = 0; 998 996 passed = false; 999 997 } 1000 998 } 999 + 1000 + spin_unlock_irqrestore(&chan->desc_lock, flags); 1001 1001 1002 1002 /* We should only get here in case of cyclic transactions */ 1003 1003 return residue; ··· 1178 1172 .get_desc_src_addr = get_desc_src_addr_32, 1179 1173 .get_desc_dst_addr = get_desc_dst_addr_32, 1180 1174 .run_bits = (DCSR_RUN), 1181 - .dma_mask = 0, /* let OF/platform set DMA mask */ 1175 + .dma_width = 32, 1182 1176 }; 1183 1177 1184 1178 static const struct mmp_pdma_ops spacemit_k1_pdma_ops = { ··· 1191 1185 .get_desc_src_addr = get_desc_src_addr_64, 1192 1186 .get_desc_dst_addr = get_desc_dst_addr_64, 1193 1187 .run_bits =
(DCSR_RUN | DCSR_LPAEEN), 1194 - .dma_mask = DMA_BIT_MASK(64), /* force 64-bit DMA addr capability */ 1188 + .dma_width = 64, 1195 1189 }; 1196 1190 1197 1191 static const struct of_device_id mmp_pdma_dt_ids[] = { ··· 1320 1314 pdev->device.directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM); 1321 1315 pdev->device.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR; 1322 1316 1323 - /* Set DMA mask based on ops->dma_mask, or OF/platform */ 1324 - if (pdev->ops->dma_mask) 1325 - dma_set_mask(pdev->dev, pdev->ops->dma_mask); 1326 - else if (pdev->dev->coherent_dma_mask) 1327 - dma_set_mask(pdev->dev, pdev->dev->coherent_dma_mask); 1328 - else 1329 - dma_set_mask(pdev->dev, DMA_BIT_MASK(64)); 1317 + /* Set DMA mask based on controller hardware capabilities */ 1318 + dma_set_mask_and_coherent(pdev->dev, 1319 + DMA_BIT_MASK(pdev->ops->dma_width)); 1330 1320 1331 1321 ret = dma_async_device_register(&pdev->device); 1332 1322 if (ret) {
+4 -2
drivers/dma/qcom/gpi.c
··· 1605 1605 gpi_peripheral_config(struct dma_chan *chan, struct dma_slave_config *config) 1606 1606 { 1607 1607 struct gchan *gchan = to_gchan(chan); 1608 + void *new_config; 1608 1609 1609 1610 if (!config->peripheral_config) 1610 1611 return -EINVAL; 1611 1612 1612 - gchan->config = krealloc(gchan->config, config->peripheral_size, GFP_NOWAIT); 1613 - if (!gchan->config) 1613 + new_config = krealloc(gchan->config, config->peripheral_size, GFP_NOWAIT); 1614 + if (!new_config) 1614 1615 return -ENOMEM; 1615 1616 1617 + gchan->config = new_config; 1616 1618 memcpy(gchan->config, config->peripheral_config, config->peripheral_size); 1617 1619 1618 1620 return 0;
+16 -2
drivers/dma/sh/rz-dmac.c
··· 557 557 static int rz_dmac_terminate_all(struct dma_chan *chan) 558 558 { 559 559 struct rz_dmac_chan *channel = to_rz_dmac_chan(chan); 560 + struct rz_lmdesc *lmdesc = channel->lmdesc.base; 560 561 unsigned long flags; 562 + unsigned int i; 561 563 LIST_HEAD(head); 562 564 563 565 rz_dmac_disable_hw(channel); 564 566 spin_lock_irqsave(&channel->vc.lock, flags); 567 + for (i = 0; i < DMAC_NR_LMDESC; i++) 568 + lmdesc[i].header = 0; 569 + 565 570 list_splice_tail_init(&channel->ld_active, &channel->ld_free); 566 571 list_splice_tail_init(&channel->ld_queue, &channel->ld_free); 567 572 vchan_get_all_descriptors(&channel->vc, &head); ··· 859 854 return 0; 860 855 } 861 856 857 + static void rz_dmac_put_device(void *_dev) 858 + { 859 + struct device *dev = _dev; 860 + 861 + put_device(dev); 862 + } 863 + 862 864 static int rz_dmac_parse_of_icu(struct device *dev, struct rz_dmac *dmac) 863 865 { 864 866 struct device_node *np = dev->of_node; ··· 887 875 dev_err(dev, "ICU device not found.\n"); 888 876 return -ENODEV; 889 877 } 878 + 879 + ret = devm_add_action_or_reset(dev, rz_dmac_put_device, &dmac->icu.pdev->dev); 880 + if (ret) 881 + return ret; 890 882 891 883 dmac_index = args.args[0]; 892 884 if (dmac_index > RZV2H_MAX_DMAC_INDEX) { ··· 1071 1055 reset_control_assert(dmac->rstc); 1072 1056 pm_runtime_put(&pdev->dev); 1073 1057 pm_runtime_disable(&pdev->dev); 1074 - 1075 - platform_device_put(dmac->icu.pdev); 1076 1058 } 1077 1059 1078 1060 static const struct of_device_id of_rz_dmac_match[] = {
+19 -12
drivers/dma/stm32/stm32-dmamux.c
··· 90 90 struct stm32_dmamux_data *dmamux = platform_get_drvdata(pdev); 91 91 struct stm32_dmamux *mux; 92 92 u32 i, min, max; 93 - int ret; 93 + int ret = -EINVAL; 94 94 unsigned long flags; 95 95 96 96 if (dma_spec->args_count != 3) { 97 97 dev_err(&pdev->dev, "invalid number of dma mux args\n"); 98 - return ERR_PTR(-EINVAL); 98 + goto err_put_pdev; 99 99 } 100 100 101 101 if (dma_spec->args[0] > dmamux->dmamux_requests) { 102 102 dev_err(&pdev->dev, "invalid mux request number: %d\n", 103 103 dma_spec->args[0]); 104 - return ERR_PTR(-EINVAL); 104 + goto err_put_pdev; 105 105 } 106 106 107 107 mux = kzalloc(sizeof(*mux), GFP_KERNEL); 108 - if (!mux) 109 - return ERR_PTR(-ENOMEM); 108 + if (!mux) { 109 + ret = -ENOMEM; 110 + goto err_put_pdev; 111 + } 110 112 111 113 spin_lock_irqsave(&dmamux->lock, flags); 112 114 mux->chan_id = find_first_zero_bit(dmamux->dma_inuse, ··· 118 116 spin_unlock_irqrestore(&dmamux->lock, flags); 119 117 dev_err(&pdev->dev, "Run out of free DMA requests\n"); 120 118 ret = -ENOMEM; 121 - goto error_chan_id; 119 + goto err_free_mux; 122 120 } 123 121 set_bit(mux->chan_id, dmamux->dma_inuse); 124 122 spin_unlock_irqrestore(&dmamux->lock, flags); ··· 135 133 dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", i - 1); 136 134 if (!dma_spec->np) { 137 135 dev_err(&pdev->dev, "can't get dma master\n"); 138 - ret = -EINVAL; 139 - goto error; 136 + goto err_clear_inuse; 140 137 } 141 138 142 139 /* Set dma request */ ··· 143 142 ret = pm_runtime_resume_and_get(&pdev->dev); 144 143 if (ret < 0) { 145 144 spin_unlock_irqrestore(&dmamux->lock, flags); 146 - goto error; 145 + goto err_put_dma_spec_np; 147 146 } 148 147 spin_unlock_irqrestore(&dmamux->lock, flags); ··· 161 160 dev_dbg(&pdev->dev, "Mapping DMAMUX(%u) to DMA%u(%u)\n", 162 161 mux->request, mux->master, mux->chan_id); 163 162 163 + put_device(&pdev->dev); 164 + 164 165 return mux; 165 166 166 - error: 167 + err_put_dma_spec_np: 168 + of_node_put(dma_spec->np); +
err_clear_inuse: 167 170 clear_bit(mux->chan_id, dmamux->dma_inuse); 168 - 169 - error_chan_id: 171 + err_free_mux: 170 172 kfree(mux); 173 + err_put_pdev: 174 + put_device(&pdev->dev); 175 + 171 176 return ERR_PTR(ret); 172 177 } 173 178
+9 -1
drivers/dma/tegra210-adma.c
··· 429 429 return; 430 430 } 431 431 432 - kfree(tdc->desc); 432 + vchan_terminate_vdesc(&tdc->desc->vd); 433 433 tdc->desc = NULL; 434 + } 435 + 436 + static void tegra_adma_synchronize(struct dma_chan *dc) 437 + { 438 + struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc); 439 + 440 + vchan_synchronize(&tdc->vc); 434 441 } 435 442 436 443 static void tegra_adma_start(struct tegra_adma_chan *tdc) ··· 1162 1155 tdma->dma_dev.device_config = tegra_adma_slave_config; 1163 1156 tdma->dma_dev.device_tx_status = tegra_adma_tx_status; 1164 1157 tdma->dma_dev.device_terminate_all = tegra_adma_terminate_all; 1158 + tdma->dma_dev.device_synchronize = tegra_adma_synchronize; 1165 1159 tdma->dma_dev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); 1166 1160 tdma->dma_dev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); 1167 1161 tdma->dma_dev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+20 -15
drivers/dma/ti/dma-crossbar.c
··· 79 79 { 80 80 struct platform_device *pdev = of_find_device_by_node(ofdma->of_node); 81 81 struct ti_am335x_xbar_data *xbar = platform_get_drvdata(pdev); 82 - struct ti_am335x_xbar_map *map; 82 + struct ti_am335x_xbar_map *map = ERR_PTR(-EINVAL); 83 83 84 84 if (dma_spec->args_count != 3) 85 - return ERR_PTR(-EINVAL); 85 + goto out_put_pdev; 86 86 87 87 if (dma_spec->args[2] >= xbar->xbar_events) { 88 88 dev_err(&pdev->dev, "Invalid XBAR event number: %d\n", 89 89 dma_spec->args[2]); 90 - return ERR_PTR(-EINVAL); 90 + goto out_put_pdev; 91 91 } 92 92 93 93 if (dma_spec->args[0] >= xbar->dma_requests) { 94 94 dev_err(&pdev->dev, "Invalid DMA request line number: %d\n", 95 95 dma_spec->args[0]); 96 - return ERR_PTR(-EINVAL); 96 + goto out_put_pdev; 97 97 } 98 98 99 99 /* The of_node_put() will be done in the core for the node */ 100 100 dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0); 101 101 if (!dma_spec->np) { 102 102 dev_err(&pdev->dev, "Can't get DMA master\n"); 103 - return ERR_PTR(-EINVAL); 103 + goto out_put_pdev; 104 104 } 105 105 106 106 map = kzalloc(sizeof(*map), GFP_KERNEL); 107 107 if (!map) { 108 108 of_node_put(dma_spec->np); 109 - return ERR_PTR(-ENOMEM); 109 + map = ERR_PTR(-ENOMEM); 110 + goto out_put_pdev; 110 111 } 111 112 112 113 map->dma_line = (u16)dma_spec->args[0]; ··· 120 119 map->mux_val, map->dma_line); 121 120 122 121 ti_am335x_xbar_write(xbar->iomem, map->dma_line, map->mux_val); 122 + 123 + out_put_pdev: 124 + put_device(&pdev->dev); 123 125 124 126 return map; 125 127 } ··· 245 241 { 246 242 struct platform_device *pdev = of_find_device_by_node(ofdma->of_node); 247 243 struct ti_dra7_xbar_data *xbar = platform_get_drvdata(pdev); 248 - struct ti_dra7_xbar_map *map; 244 + struct ti_dra7_xbar_map *map = ERR_PTR(-EINVAL); 249 245 250 246 if (dma_spec->args[0] >= xbar->xbar_requests) { 251 247 dev_err(&pdev->dev, "Invalid XBAR request number: %d\n", 252 248 dma_spec->args[0]); 253 - put_device(&pdev->dev); 254 -
return ERR_PTR(-EINVAL); 249 + goto out_put_pdev; 255 250 } 256 251 257 252 /* The of_node_put() will be done in the core for the node */ 258 253 dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0); 259 254 if (!dma_spec->np) { 260 255 dev_err(&pdev->dev, "Can't get DMA master\n"); 261 - put_device(&pdev->dev); 262 - return ERR_PTR(-EINVAL); 256 + goto out_put_pdev; 263 257 } 264 258 265 259 map = kzalloc(sizeof(*map), GFP_KERNEL); 266 260 if (!map) { 267 261 of_node_put(dma_spec->np); 268 - put_device(&pdev->dev); 269 - return ERR_PTR(-ENOMEM); 262 + map = ERR_PTR(-ENOMEM); 263 + goto out_put_pdev; 270 264 } 271 265 272 266 mutex_lock(&xbar->mutex); ··· 275 273 dev_err(&pdev->dev, "Run out of free DMA requests\n"); 276 274 kfree(map); 277 275 of_node_put(dma_spec->np); 278 - put_device(&pdev->dev); 279 - return ERR_PTR(-ENOMEM); 276 + map = ERR_PTR(-ENOMEM); 277 + goto out_put_pdev; 280 278 } 281 279 set_bit(map->xbar_out, xbar->dma_inuse); 282 280 mutex_unlock(&xbar->mutex); ··· 289 287 map->xbar_in, map->xbar_out); 290 288 291 289 ti_dra7_xbar_write(xbar->iomem, map->xbar_out, map->xbar_in); 290 + 291 + out_put_pdev: 292 + put_device(&pdev->dev); 292 293 293 294 return map; 294 295 }
+1 -1
drivers/dma/ti/k3-udma-private.c
··· 42 42 } 43 43 44 44 ud = platform_get_drvdata(pdev); 45 + put_device(&pdev->dev); 45 46 if (!ud) { 46 47 pr_debug("UDMA has not been probed\n"); 47 - put_device(&pdev->dev); 48 48 return ERR_PTR(-EPROBE_DEFER); 49 49 } 50 50
+4
drivers/dma/ti/omap-dma.c
··· 1808 1808 if (rc) { 1809 1809 pr_warn("OMAP-DMA: failed to register slave DMA engine device: %d\n", 1810 1810 rc); 1811 + if (od->ll123_supported) 1812 + dma_pool_destroy(od->desc_pool); 1811 1813 omap_dma_free(od); 1812 1814 return rc; 1813 1815 } ··· 1825 1823 if (rc) { 1826 1824 pr_warn("OMAP-DMA: failed to register DMA controller\n"); 1827 1825 dma_async_device_unregister(&od->ddev); 1826 + if (od->ll123_supported) 1827 + dma_pool_destroy(od->desc_pool); 1828 1828 omap_dma_free(od); 1829 1829 } 1830 1830 }
+1
drivers/dma/xilinx/xdma-regs.h
··· 9 9 10 10 /* The length of register space exposed to host */ 11 11 #define XDMA_REG_SPACE_LEN 65536 12 + #define XDMA_MAX_REG_OFFSET (XDMA_REG_SPACE_LEN - 4) 12 13 13 14 /* 14 15 * maximum number of DMA channels for each direction:
+1 -1
drivers/dma/xilinx/xdma.c
··· 38 38 .reg_bits = 32, 39 39 .val_bits = 32, 40 40 .reg_stride = 4, 41 - .max_register = XDMA_REG_SPACE_LEN, 41 + .max_register = XDMA_MAX_REG_OFFSET, 42 42 }; 43 43 44 44 /**
+5 -2
drivers/dma/xilinx/xilinx_dma.c
··· 131 131 #define XILINX_MCDMA_MAX_CHANS_PER_DEVICE 0x20 132 132 #define XILINX_DMA_MAX_CHANS_PER_DEVICE 0x2 133 133 #define XILINX_CDMA_MAX_CHANS_PER_DEVICE 0x1 134 + #define XILINX_DMA_DFAULT_ADDRWIDTH 0x20 134 135 135 136 #define XILINX_DMA_DMAXR_ALL_IRQ_MASK \ 136 137 (XILINX_DMA_DMASR_FRM_CNT_IRQ | \ ··· 3160 3159 struct device_node *node = pdev->dev.of_node; 3161 3160 struct xilinx_dma_device *xdev; 3162 3161 struct device_node *child, *np = pdev->dev.of_node; 3163 - u32 num_frames, addr_width, len_width; 3162 + u32 num_frames, addr_width = XILINX_DMA_DFAULT_ADDRWIDTH, len_width; 3164 3163 int i, err; 3165 3164 3166 3165 /* Allocate and initialize the DMA engine structure */ ··· 3236 3235 3237 3236 err = of_property_read_u32(node, "xlnx,addrwidth", &addr_width); 3238 3237 if (err < 0) 3239 - dev_warn(xdev->dev, "missing xlnx,addrwidth property\n"); 3238 + dev_warn(xdev->dev, 3239 + "missing xlnx,addrwidth property, using default value %d\n", 3240 + XILINX_DMA_DFAULT_ADDRWIDTH); 3240 3241 3241 3242 if (addr_width > 32) 3242 3243 xdev->ext_addr = true;
+6 -5
drivers/edac/i3200_edac.c
··· 358 358 layers[1].type = EDAC_MC_LAYER_CHANNEL; 359 359 layers[1].size = nr_channels; 360 360 layers[1].is_virt_csrow = false; 361 - mci = edac_mc_alloc(0, ARRAY_SIZE(layers), layers, 362 - sizeof(struct i3200_priv)); 361 + 362 + rc = -ENOMEM; 363 + mci = edac_mc_alloc(0, ARRAY_SIZE(layers), layers, sizeof(struct i3200_priv)); 363 364 if (!mci) 364 - return -ENOMEM; 365 + goto unmap; 365 366 366 367 edac_dbg(3, "MC: init mci\n"); 367 368 ··· 422 421 return 0; 423 422 424 423 fail: 424 + edac_mc_free(mci); 425 + unmap: 425 426 iounmap(window); 426 - if (mci) 427 - edac_mc_free(mci); 428 427 429 428 return rc; 430 429 }
+6 -3
drivers/edac/x38_edac.c
··· 341 341 layers[1].type = EDAC_MC_LAYER_CHANNEL; 342 342 layers[1].size = x38_channel_num; 343 343 layers[1].is_virt_csrow = false; 344 + 345 + 346 + rc = -ENOMEM; 344 347 mci = edac_mc_alloc(0, ARRAY_SIZE(layers), layers, 0); 345 348 if (!mci) 346 - return -ENOMEM; 349 + goto unmap; 347 350 348 351 edac_dbg(3, "MC: init mci\n"); 349 352 ··· 406 403 return 0; 407 404 408 405 fail: 406 + edac_mc_free(mci); 407 + unmap: 409 408 iounmap(window); 410 - if (mci) 411 - edac_mc_free(mci); 412 409 413 410 return rc; 414 411 }
+1 -1
drivers/firmware/efi/cper.c
··· 162 162 len -= size; 163 163 str += size; 164 164 } 165 - return len - buf_size; 165 + return buf_size - len; 166 166 } 167 167 EXPORT_SYMBOL_GPL(cper_bits_to_str); 168 168
+1
drivers/firmware/efi/efi.c
··· 819 819 if (tbl) { 820 820 phys_initrd_start = tbl->base; 821 821 phys_initrd_size = tbl->size; 822 + tbl->base = tbl->size = 0; 822 823 early_memunmap(tbl, sizeof(*tbl)); 823 824 } 824 825 }
+18
drivers/gpio/gpio-davinci.c
··· 6 6 * Copyright (c) 2007, MontaVista Software, Inc. <source@mvista.com> 7 7 */ 8 8 9 + #include <linux/cleanup.h> 9 10 #include <linux/gpio/driver.h> 10 11 #include <linux/errno.h> 11 12 #include <linux/kernel.h> ··· 110 109 return __davinci_direction(chip, offset, true, value); 111 110 } 112 111 112 + static int davinci_get_direction(struct gpio_chip *chip, unsigned int offset) 113 + { 114 + struct davinci_gpio_controller *d = gpiochip_get_data(chip); 115 + struct davinci_gpio_regs __iomem *g; 116 + u32 mask = __gpio_mask(offset), val; 117 + int bank = offset / 32; 118 + 119 + g = d->regs[bank]; 120 + 121 + guard(spinlock_irqsave)(&d->lock); 122 + 123 + val = readl_relaxed(&g->dir); 124 + 125 + return (val & mask) ? GPIO_LINE_DIRECTION_IN : GPIO_LINE_DIRECTION_OUT; 126 + } 127 + 113 128 /* 114 129 * Read the pin's value (works even if it's set up as output); 115 130 * returns zero/nonzero. ··· 220 203 chips->chip.get = davinci_gpio_get; 221 204 chips->chip.direction_output = davinci_direction_out; 222 205 chips->chip.set = davinci_gpio_set; 206 + chips->chip.get_direction = davinci_get_direction; 223 207 224 208 chips->chip.ngpio = ngpio; 225 209 chips->chip.base = -1;
-3
drivers/gpio/gpiolib.c
··· 468 468 test_bit(GPIOD_FLAG_IS_OUT, &flags)) 469 469 return 0; 470 470 471 - if (!guard.gc->get_direction) 472 - return -ENOTSUPP; 473 - 474 471 ret = gpiochip_get_direction(guard.gc, offset); 475 472 if (ret < 0) 476 473 return ret;
+2
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 274 274 extern int amdgpu_wbrf; 275 275 extern int amdgpu_user_queue; 276 276 277 + extern uint amdgpu_hdmi_hpd_debounce_delay_ms; 278 + 277 279 #define AMDGPU_VM_MAX_NUM_CTX 4096 278 280 #define AMDGPU_SG_THRESHOLD (256*1024*1024) 279 281 #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000
+8
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 5063 5063 5064 5064 amdgpu_ttm_set_buffer_funcs_status(adev, false); 5065 5065 5066 + /* 5067 + * device went through surprise hotplug; we need to destroy topology 5068 + * before ip_fini_early to prevent kfd locking refcount issues by calling 5069 + * amdgpu_amdkfd_suspend() 5070 + */ 5071 + if (drm_dev_is_unplugged(adev_to_drm(adev))) 5072 + amdgpu_amdkfd_device_fini_sw(adev); 5073 + 5066 5074 amdgpu_device_ip_fini_early(adev); 5067 5075 5068 5076 amdgpu_irq_fini_hw(adev);
+6 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 1880 1880 struct drm_scanout_buffer *sb) 1881 1881 { 1882 1882 struct amdgpu_bo *abo; 1883 - struct drm_framebuffer *fb = plane->state->fb; 1883 + struct drm_framebuffer *fb; 1884 + 1885 + if (drm_drv_uses_atomic_modeset(plane->dev)) 1886 + fb = plane->state->fb; 1887 + else 1888 + fb = plane->fb; 1884 1889 1885 1890 if (!fb) 1886 1891 return -EINVAL;
-12
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
··· 95 95 bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC) 96 96 attach->peer2peer = false; 97 97 98 - /* 99 - * Disable peer-to-peer access for DCC-enabled VRAM surfaces on GFX12+. 100 - * Such buffers cannot be safely accessed over P2P due to device-local 101 - * compression metadata. Fallback to system-memory path instead. 102 - * Device supports GFX12 (GC 12.x or newer) 103 - * BO was created with the AMDGPU_GEM_CREATE_GFX12_DCC flag 104 - * 105 - */ 106 - if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(12, 0, 0) && 107 - bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC) 108 - attach->peer2peer = false; 109 - 110 98 if (!amdgpu_dmabuf_is_xgmi_accessible(attach_adev, bo) && 111 99 pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0) 112 100 attach->peer2peer = false;
+11
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 247 247 int amdgpu_umsch_mm_fwlog; 248 248 int amdgpu_rebar = -1; /* auto */ 249 249 int amdgpu_user_queue = -1; 250 + uint amdgpu_hdmi_hpd_debounce_delay_ms; 250 251 251 252 DECLARE_DYNDBG_CLASSMAP(drm_debug_classes, DD_CLASS_TYPE_DISJOINT_BITS, 0, 252 253 "DRM_UT_CORE", ··· 1123 1122 */ 1124 1123 MODULE_PARM_DESC(user_queue, "Enable user queues (-1 = auto (default), 0 = disable, 1 = enable, 2 = enable UQs and disable KQs)"); 1125 1124 module_param_named(user_queue, amdgpu_user_queue, int, 0444); 1125 + 1126 + /* 1127 + * DOC: hdmi_hpd_debounce_delay_ms (uint) 1128 + * HDMI HPD disconnect debounce delay in milliseconds. 1129 + * 1130 + * Used to filter short disconnect->reconnect HPD toggles some HDMI sinks 1131 + * generate while entering/leaving power save. Set to 0 to disable by default. 1132 + */ 1133 + MODULE_PARM_DESC(hdmi_hpd_debounce_delay_ms, "HDMI HPD disconnect debounce delay in milliseconds (0 to disable (by default), 1500 is common)"); 1134 + module_param_named(hdmi_hpd_debounce_delay_ms, amdgpu_hdmi_hpd_debounce_delay_ms, uint, 0644); 1126 1135 1127 1136 /* These devices are not supported by amdgpu. 1128 1137 * They are supported by the mach64, r128, radeon drivers
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
··· 375 375 * @start_page: first page to map in the GART aperture 376 376 * @num_pages: number of pages to be mapped 377 377 * @flags: page table entry flags 378 - * @dst: CPU address of the GART table 378 + * @dst: valid CPU address of GART table, cannot be null 379 379 * 380 380 * Binds a BO that is allocated in VRAM to the GART page table 381 381 * (all ASICs). ··· 396 396 return; 397 397 398 398 for (i = 0; i < num_pages; ++i) { 399 - amdgpu_gmc_set_pte_pde(adev, adev->gart.ptr, 399 + amdgpu_gmc_set_pte_pde(adev, dst, 400 400 start_page + i, pa + AMDGPU_GPU_PAGE_SIZE * i, flags); 401 401 } 402 402
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
··· 732 732 return 0; 733 733 734 734 if (!adev->gmc.flush_pasid_uses_kiq || !ring->sched.ready) { 735 + 736 + if (!adev->gmc.gmc_funcs->flush_gpu_tlb_pasid) 737 + return 0; 738 + 735 739 if (adev->gmc.flush_tlb_needs_extra_type_2) 736 740 adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid, 737 741 2, all_hub,
+16
drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
··· 885 885 return 0; 886 886 } 887 887 888 + bool amdgpu_userq_enabled(struct drm_device *dev) 889 + { 890 + struct amdgpu_device *adev = drm_to_adev(dev); 891 + int i; 892 + 893 + for (i = 0; i < AMDGPU_HW_IP_NUM; i++) { 894 + if (adev->userq_funcs[i]) 895 + return true; 896 + } 897 + 898 + return false; 899 + } 900 + 888 901 int amdgpu_userq_ioctl(struct drm_device *dev, void *data, 889 902 struct drm_file *filp) 890 903 { 891 904 union drm_amdgpu_userq *args = data; 892 905 int r; 906 + 907 + if (!amdgpu_userq_enabled(dev)) 908 + return -ENOTSUPP; 893 909 894 910 if (amdgpu_userq_input_args_validate(dev, args, filp) < 0) 895 911 return -EINVAL;
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_userq.h
··· 141 141 struct drm_file *filp); 142 142 143 143 u32 amdgpu_userq_get_supported_ip_mask(struct amdgpu_device *adev); 144 + bool amdgpu_userq_enabled(struct drm_device *dev); 144 145 145 146 int amdgpu_userq_suspend(struct amdgpu_device *adev); 146 147 int amdgpu_userq_resume(struct amdgpu_device *adev);
+8
drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
··· 141 141 void 142 142 amdgpu_userq_fence_driver_free(struct amdgpu_usermode_queue *userq) 143 143 { 144 + dma_fence_put(userq->last_fence); 145 + 144 146 amdgpu_userq_walk_and_drop_fence_drv(&userq->fence_drv_xa); 145 147 xa_destroy(&userq->fence_drv_xa); 146 148 /* Drop the fence_drv reference held by user queue */ ··· 473 471 struct drm_exec exec; 474 472 u64 wptr; 475 473 474 + if (!amdgpu_userq_enabled(dev)) 475 + return -ENOTSUPP; 476 + 476 477 num_syncobj_handles = args->num_syncobj_handles; 477 478 syncobj_handles = memdup_user(u64_to_user_ptr(args->syncobj_handles), 478 479 size_mul(sizeof(u32), num_syncobj_handles)); ··· 657 652 u16 num_points, num_fences = 0; 658 653 int r, i, rentry, wentry, cnt; 659 654 struct drm_exec exec; 655 + 656 + if (!amdgpu_userq_enabled(dev)) 657 + return -ENOTSUPP; 660 658 661 659 num_read_bo_handles = wait_info->num_bo_read_handles; 662 660 bo_handles_read = memdup_user(u64_to_user_ptr(wait_info->bo_read_handles),
+1 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1069 1069 } 1070 1070 1071 1071 /* Prepare a TLB flush fence to be attached to PTs */ 1072 - if (!params->unlocked && 1073 - /* SI doesn't support pasid or KIQ/MES */ 1074 - params->adev->family > AMDGPU_FAMILY_SI) { 1072 + if (!params->unlocked) { 1075 1073 amdgpu_vm_tlb_fence_create(params->adev, vm, fence); 1076 1074 1077 1075 /* Makes sure no PD/PT is freed before the flush */
+4 -4
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
··· 1235 1235 *flags = AMDGPU_PTE_MTYPE_VG10(*flags, MTYPE_NC); 1236 1236 break; 1237 1237 case AMDGPU_VM_MTYPE_WC: 1238 - *flags |= AMDGPU_PTE_MTYPE_VG10(*flags, MTYPE_WC); 1238 + *flags = AMDGPU_PTE_MTYPE_VG10(*flags, MTYPE_WC); 1239 1239 break; 1240 1240 case AMDGPU_VM_MTYPE_RW: 1241 - *flags |= AMDGPU_PTE_MTYPE_VG10(*flags, MTYPE_RW); 1241 + *flags = AMDGPU_PTE_MTYPE_VG10(*flags, MTYPE_RW); 1242 1242 break; 1243 1243 case AMDGPU_VM_MTYPE_CC: 1244 - *flags |= AMDGPU_PTE_MTYPE_VG10(*flags, MTYPE_CC); 1244 + *flags = AMDGPU_PTE_MTYPE_VG10(*flags, MTYPE_CC); 1245 1245 break; 1246 1246 case AMDGPU_VM_MTYPE_UC: 1247 - *flags |= AMDGPU_PTE_MTYPE_VG10(*flags, MTYPE_UC); 1247 + *flags = AMDGPU_PTE_MTYPE_VG10(*flags, MTYPE_UC); 1248 1248 break; 1249 1249 } 1250 1250
+12 -19
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
··· 1209 1209 pr_debug_ratelimited("Evicting process pid %d queues\n", 1210 1210 pdd->process->lead_thread->pid); 1211 1211 1212 - if (dqm->dev->kfd->shared_resources.enable_mes) { 1212 + if (dqm->dev->kfd->shared_resources.enable_mes) 1213 1213 pdd->last_evict_timestamp = get_jiffies_64(); 1214 - retval = suspend_all_queues_mes(dqm); 1215 - if (retval) { 1216 - dev_err(dev, "Suspending all queues failed"); 1217 - goto out; 1218 - } 1219 - } 1220 1214 1221 1215 /* Mark all queues as evicted. Deactivate all active queues on 1222 1216 * the qpd. ··· 1240 1246 KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES : 1241 1247 KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0, 1242 1248 USE_DEFAULT_GRACE_PERIOD); 1243 - } else { 1244 - retval = resume_all_queues_mes(dqm); 1245 - if (retval) 1246 - dev_err(dev, "Resuming all queues failed"); 1247 1249 } 1248 1250 1249 1251 out: ··· 2909 2919 return retval; 2910 2920 } 2911 2921 2922 + static void deallocate_hiq_sdma_mqd(struct kfd_node *dev, 2923 + struct kfd_mem_obj *mqd) 2924 + { 2925 + WARN(!mqd, "No hiq sdma mqd trunk to free"); 2926 + 2927 + amdgpu_amdkfd_free_gtt_mem(dev->adev, &mqd->gtt_mem); 2928 + } 2929 + 2912 2930 struct device_queue_manager *device_queue_manager_init(struct kfd_node *dev) 2913 2931 { 2914 2932 struct device_queue_manager *dqm; ··· 3040 3042 return dqm; 3041 3043 } 3042 3044 3045 + if (!dev->kfd->shared_resources.enable_mes) 3046 + deallocate_hiq_sdma_mqd(dev, &dqm->hiq_sdma_mqd); 3047 + 3043 3048 out_free: 3044 3049 kfree(dqm); 3045 3050 return NULL; 3046 - } 3047 - 3048 - static void deallocate_hiq_sdma_mqd(struct kfd_node *dev, 3049 - struct kfd_mem_obj *mqd) 3050 - { 3051 - WARN(!mqd, "No hiq sdma mqd trunk to free"); 3052 - 3053 - amdgpu_amdkfd_free_gtt_mem(dev->adev, &mqd->gtt_mem); 3054 3051 } 3055 3052 3056 3053 void device_queue_manager_uninit(struct device_queue_manager *dqm)
+31 -5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 5266 5266 struct amdgpu_dm_backlight_caps *caps; 5267 5267 char bl_name[16]; 5268 5268 int min, max; 5269 + int real_brightness; 5270 + int init_brightness; 5269 5271 5270 5272 if (aconnector->bl_idx == -1) 5271 5273 return; ··· 5292 5290 } else 5293 5291 props.brightness = props.max_brightness = MAX_BACKLIGHT_LEVEL; 5294 5292 5293 + init_brightness = props.brightness; 5294 + 5295 5295 if (caps->data_points && !(amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE)) { 5296 5296 drm_info(drm, "Using custom brightness curve\n"); 5297 5297 props.scale = BACKLIGHT_SCALE_NON_LINEAR; ··· 5312 5308 if (IS_ERR(dm->backlight_dev[aconnector->bl_idx])) { 5313 5309 drm_err(drm, "DM: Backlight registration failed!\n"); 5314 5310 dm->backlight_dev[aconnector->bl_idx] = NULL; 5315 - } else 5311 + } else { 5312 + /* 5313 + * dm->brightness[x] can be inconsistent just after startup until 5314 + * ops.get_brightness is called. 5315 + */ 5316 + real_brightness = 5317 + amdgpu_dm_backlight_ops.get_brightness(dm->backlight_dev[aconnector->bl_idx]); 5318 + 5319 + if (real_brightness != init_brightness) { 5320 + dm->actual_brightness[aconnector->bl_idx] = real_brightness; 5321 + dm->brightness[aconnector->bl_idx] = real_brightness; 5322 + } 5316 5323 drm_dbg_driver(drm, "DM: Registered Backlight device: %s\n", bl_name); 5324 + } 5317 5325 } 5318 5326 5319 5327 static int initialize_plane(struct amdgpu_display_manager *dm, ··· 5642 5626 5643 5627 if (psr_feature_enabled) { 5644 5628 amdgpu_dm_set_psr_caps(link); 5645 - drm_info(adev_to_drm(adev), "PSR support %d, DC PSR ver %d, sink PSR ver %d DPCD caps 0x%x su_y_granularity %d\n", 5629 + drm_info(adev_to_drm(adev), "%s: PSR support %d, DC PSR ver %d, sink PSR ver %d DPCD caps 0x%x su_y_granularity %d\n", 5630 + aconnector->base.name, 5646 5631 link->psr_settings.psr_feature_enabled, 5647 5632 link->psr_settings.psr_version, 5648 5633 link->dpcd_caps.psr_info.psr_version, ··· 8947 8930 mutex_init(&aconnector->hpd_lock); 8948 8931 mutex_init(&aconnector->handle_mst_msg_ready); 8949 8932 8950 - aconnector->hdmi_hpd_debounce_delay_ms = AMDGPU_DM_HDMI_HPD_DEBOUNCE_MS; 8951 - INIT_DELAYED_WORK(&aconnector->hdmi_hpd_debounce_work, hdmi_hpd_debounce_work); 8952 - aconnector->hdmi_prev_sink = NULL; 8933 + /* 8934 + * If HDMI HPD debounce delay is set, use the minimum between selected 8935 + * value and AMDGPU_DM_MAX_HDMI_HPD_DEBOUNCE_MS 8936 + */ 8937 + if (amdgpu_hdmi_hpd_debounce_delay_ms) { 8938 + aconnector->hdmi_hpd_debounce_delay_ms = min(amdgpu_hdmi_hpd_debounce_delay_ms, 8939 + AMDGPU_DM_MAX_HDMI_HPD_DEBOUNCE_MS); 8940 + INIT_DELAYED_WORK(&aconnector->hdmi_hpd_debounce_work, hdmi_hpd_debounce_work); 8941 + aconnector->hdmi_prev_sink = NULL; 8942 + } else { 8943 + aconnector->hdmi_hpd_debounce_delay_ms = 0; 8944 + } 8953 8945 8954 8946 /* 8955 8947 * configure support HPD hot plug connector_>polled default value is 0
+4 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
··· 59 59 60 60 #define AMDGPU_HDR_MULT_DEFAULT (0x100000000LL) 61 61 62 - #define AMDGPU_DM_HDMI_HPD_DEBOUNCE_MS 1500 62 + /* 63 + * Maximum HDMI HPD debounce delay in milliseconds 64 + */ 65 + #define AMDGPU_DM_MAX_HDMI_HPD_DEBOUNCE_MS 5000 63 66 /* 64 67 #include "include/amdgpu_dal_power_if.h" 65 68 #include "amdgpu_dm_irq.h"
+1 -1
drivers/gpu/drm/amd/display/dc/dc_hdmi_types.h
··· 41 41 /* kHZ*/ 42 42 #define DP_ADAPTOR_DVI_MAX_TMDS_CLK 165000 43 43 /* kHZ*/ 44 - #define DP_ADAPTOR_HDMI_SAFE_MAX_TMDS_CLK 165000 44 + #define DP_ADAPTOR_HDMI_SAFE_MAX_TMDS_CLK 340000 45 45 46 46 struct dp_hdmi_dongle_signature_data { 47 47 int8_t id[15];/* "DP-HDMI ADAPTOR"*/
+2 -1
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 1702 1702 table_context->power_play_table; 1703 1703 PPTable_t *pptable = table_context->driver_pptable; 1704 1704 CustomSkuTable_t *skutable = &pptable->CustomSkuTable; 1705 - uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0; 1705 + int16_t od_percent_upper = 0, od_percent_lower = 0; 1706 1706 uint32_t msg_limit = pptable->SkuTable.MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC]; 1707 + uint32_t power_limit; 1707 1708 1708 1709 if (smu_v14_0_get_current_power_limit(smu, &power_limit)) 1709 1710 power_limit = smu->adev->pm.ac_power ?
+9
drivers/gpu/drm/bridge/synopsys/dw-hdmi-qp.c
··· 163 163 164 164 unsigned long ref_clk_rate; 165 165 struct regmap *regm; 166 + int main_irq; 166 167 167 168 unsigned long tmds_char_rate; 168 169 }; ··· 1272 1271 1273 1272 dw_hdmi_qp_init_hw(hdmi); 1274 1273 1274 + hdmi->main_irq = plat_data->main_irq; 1275 1275 ret = devm_request_threaded_irq(dev, plat_data->main_irq, 1276 1276 dw_hdmi_qp_main_hardirq, NULL, 1277 1277 IRQF_SHARED, dev_name(dev), hdmi); ··· 1333 1331 } 1334 1332 EXPORT_SYMBOL_GPL(dw_hdmi_qp_bind); 1335 1333 1334 + void dw_hdmi_qp_suspend(struct device *dev, struct dw_hdmi_qp *hdmi) 1335 + { 1336 + disable_irq(hdmi->main_irq); 1337 + } 1338 + EXPORT_SYMBOL_GPL(dw_hdmi_qp_suspend); 1339 + 1336 1340 void dw_hdmi_qp_resume(struct device *dev, struct dw_hdmi_qp *hdmi) 1337 1341 { 1338 1342 dw_hdmi_qp_init_hw(hdmi); 1343 + enable_irq(hdmi->main_irq); 1339 1344 } 1340 1345 EXPORT_SYMBOL_GPL(dw_hdmi_qp_resume); 1341 1346
+51 -24
drivers/gpu/drm/drm_gpuvm.c
··· 1602 1602 } 1603 1603 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_create); 1604 1604 1605 + /* 1606 + * drm_gpuvm_bo_destroy_not_in_lists() - final part of drm_gpuvm_bo cleanup 1607 + * @vm_bo: the &drm_gpuvm_bo to destroy 1608 + * 1609 + * It is illegal to call this method if the @vm_bo is present in the GEMs gpuva 1610 + * list, the extobj list, or the evicted list. 1611 + * 1612 + * Note that this puts a refcount on the GEM object, which may destroy the GEM 1613 + * object if the refcount reaches zero. It's illegal for this to happen if the 1614 + * caller holds the GEMs gpuva mutex because it would free the mutex. 1615 + */ 1605 1616 static void 1606 - drm_gpuvm_bo_destroy(struct kref *kref) 1617 + drm_gpuvm_bo_destroy_not_in_lists(struct drm_gpuvm_bo *vm_bo) 1607 1618 { 1608 - struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo, 1609 - kref); 1610 1619 struct drm_gpuvm *gpuvm = vm_bo->vm; 1611 1620 const struct drm_gpuvm_ops *ops = gpuvm->ops; 1612 1621 struct drm_gem_object *obj = vm_bo->obj; 1613 - bool lock = !drm_gpuvm_resv_protected(gpuvm); 1614 - 1615 - if (!lock) 1616 - drm_gpuvm_resv_assert_held(gpuvm); 1617 - 1618 - drm_gpuvm_bo_list_del(vm_bo, extobj, lock); 1619 - drm_gpuvm_bo_list_del(vm_bo, evict, lock); 1620 - 1621 - drm_gem_gpuva_assert_lock_held(gpuvm, obj); 1622 - list_del(&vm_bo->list.entry.gem); 1623 1622 1624 1623 if (ops && ops->vm_bo_free) 1625 1624 ops->vm_bo_free(vm_bo); ··· 1627 1628 1628 1629 drm_gpuvm_put(gpuvm); 1629 1630 drm_gem_object_put(obj); 1631 + } 1632 + 1633 + static void 1634 + drm_gpuvm_bo_destroy_not_in_lists_kref(struct kref *kref) 1635 + { 1636 + struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo, 1637 + kref); 1638 + 1639 + drm_gpuvm_bo_destroy_not_in_lists(vm_bo); 1640 + } 1641 + 1642 + static void 1643 + drm_gpuvm_bo_destroy(struct kref *kref) 1644 + { 1645 + struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo, 1646 + kref); 1647 + struct drm_gpuvm *gpuvm = vm_bo->vm; 1648 + bool lock = !drm_gpuvm_resv_protected(gpuvm); 1649 + 1650 + if (!lock) 1651 + drm_gpuvm_resv_assert_held(gpuvm); 1652 + 1653 + drm_gpuvm_bo_list_del(vm_bo, extobj, lock); 1654 + drm_gpuvm_bo_list_del(vm_bo, evict, lock); 1655 + 1656 + drm_gem_gpuva_assert_lock_held(gpuvm, vm_bo->obj); 1657 + list_del(&vm_bo->list.entry.gem); 1658 + 1659 + drm_gpuvm_bo_destroy_not_in_lists(vm_bo); 1630 1660 } 1631 1661 1632 1662 /** ··· 1773 1745 void 1774 1746 drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm) 1775 1747 { 1776 - const struct drm_gpuvm_ops *ops = gpuvm->ops; 1777 1748 struct drm_gpuvm_bo *vm_bo; 1778 - struct drm_gem_object *obj; 1779 1749 struct llist_node *bo_defer; 1780 1750 1781 1751 bo_defer = llist_del_all(&gpuvm->bo_defer); ··· 1792 1766 while (bo_defer) { 1793 1767 vm_bo = llist_entry(bo_defer, struct drm_gpuvm_bo, list.entry.bo_defer); 1794 1768 bo_defer = bo_defer->next; 1795 - obj = vm_bo->obj; 1796 - if (ops && ops->vm_bo_free) 1797 - ops->vm_bo_free(vm_bo); 1798 - else 1799 - kfree(vm_bo); 1800 - 1801 - drm_gpuvm_put(gpuvm); 1802 - drm_gem_object_put(obj); 1769 + drm_gpuvm_bo_destroy_not_in_lists(vm_bo); 1803 1770 } 1804 1771 } 1805 1772 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_deferred_cleanup); ··· 1880 1861 * count is decreased. If not found @__vm_bo is returned without further 1881 1862 * increase of the reference count. 1882 1863 * 1864 + * The provided @__vm_bo must not already be in the gpuva, evict, or extobj 1865 + * lists prior to calling this method. 1866 + * 1883 1867 * A new &drm_gpuvm_bo is added to the GEMs gpuva list. 1884 1868 * 1885 1869 * Returns: a pointer to the found &drm_gpuvm_bo or @__vm_bo if no existing ··· 1895 1873 struct drm_gem_object *obj = __vm_bo->obj; 1896 1874 struct drm_gpuvm_bo *vm_bo; 1897 1875 1876 + drm_WARN_ON(gpuvm->drm, !drm_gpuvm_immediate_mode(gpuvm)); 1877 + 1878 + mutex_lock(&obj->gpuva.lock); 1898 1879 vm_bo = drm_gpuvm_bo_find(gpuvm, obj); 1899 1880 if (vm_bo) { 1900 - drm_gpuvm_bo_put(__vm_bo); 1881 + mutex_unlock(&obj->gpuva.lock); 1882 + kref_put(&__vm_bo->kref, drm_gpuvm_bo_destroy_not_in_lists_kref); 1901 1883 return vm_bo; 1902 1884 } 1903 1885 1904 1886 drm_gem_gpuva_assert_lock_held(gpuvm, obj); 1905 1887 list_add_tail(&__vm_bo->list.entry.gem, &obj->gpuva.list); 1888 + mutex_unlock(&obj->gpuva.lock); 1906 1889 1907 1890 return __vm_bo; 1908 1891 }
+8 -12
drivers/gpu/drm/gud/gud_pipe.c
··· 457 457 struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state, plane); 458 458 struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, plane); 459 459 struct drm_crtc *crtc = new_plane_state->crtc; 460 - struct drm_crtc_state *crtc_state; 460 + struct drm_crtc_state *crtc_state = NULL; 461 461 const struct drm_display_mode *mode; 462 462 struct drm_framebuffer *old_fb = old_plane_state->fb; 463 463 struct drm_connector_state *connector_state = NULL; 464 464 struct drm_framebuffer *fb = new_plane_state->fb; 465 - const struct drm_format_info *format = fb->format; 465 + const struct drm_format_info *format; 466 466 struct drm_connector *connector; 467 467 unsigned int i, num_properties; 468 468 struct gud_state_req *req; 469 469 int idx, ret; 470 470 size_t len; 471 471 472 - if (drm_WARN_ON_ONCE(plane->dev, !fb)) 473 - return -EINVAL; 474 - 475 - if (drm_WARN_ON_ONCE(plane->dev, !crtc)) 476 - return -EINVAL; 477 - 478 - crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 479 - 480 - mode = &crtc_state->mode; 472 + if (crtc) 473 + crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 481 474 482 475 ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state, 483 476 DRM_PLANE_NO_SCALING, ··· 484 491 485 492 if (old_plane_state->rotation != new_plane_state->rotation) 486 493 crtc_state->mode_changed = true; 494 + 495 + mode = &crtc_state->mode; 496 + format = fb->format; 487 497 488 498 if (old_fb && old_fb->format != format) 489 499 crtc_state->mode_changed = true; ··· 594 598 struct drm_atomic_helper_damage_iter iter; 595 599 int ret, idx; 596 600 597 - if (crtc->state->mode_changed || !crtc->state->enable) { 601 + if (!crtc || crtc->state->mode_changed || !crtc->state->enable) { 598 602 cancel_work_sync(&gdrm->work); 599 603 mutex_lock(&gdrm->damage_lock); 600 604 if (gdrm->fb) {
+1 -1
drivers/gpu/drm/i915/i915_gpu_error.c
··· 686 686 } 687 687 688 688 /* This list includes registers that are useful in debugging GuC hangs. */ 689 - const struct { 689 + static const struct { 690 690 u32 start; 691 691 u32 count; 692 692 } guc_hw_reg_state[] = {
+1
drivers/gpu/drm/nouveau/dispnv50/curs507a.c
··· 84 84 asyh->curs.handle = handle; 85 85 asyh->curs.offset = offset; 86 86 asyh->set.curs = asyh->curs.visible; 87 + nv50_atom(asyh->state.state)->lock_core = true; 87 88 } 88 89 } 89 90
+5
drivers/gpu/drm/nouveau/dispnv50/head.c
··· 43 43 union nv50_head_atom_mask clr = { 44 44 .mask = asyh->clr.mask & ~(flush ? 0 : asyh->set.mask), 45 45 }; 46 + 47 + lockdep_assert_held(&head->disp->mutex); 48 + 46 49 if (clr.crc) nv50_crc_atomic_clr(head); 47 50 if (clr.olut) head->func->olut_clr(head); 48 51 if (clr.core) head->func->core_clr(head); ··· 68 65 void 69 66 nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh) 70 67 { 68 + lockdep_assert_held(&head->disp->mutex); 69 + 71 70 if (asyh->set.view ) head->func->view (head, asyh); 72 71 if (asyh->set.mode ) head->func->mode (head, asyh); 73 72 if (asyh->set.core ) head->func->core_set(head, asyh);
+55 -55
drivers/gpu/drm/panel/panel-simple.c
··· 623 623 if (IS_ERR(desc)) 624 624 return ERR_CAST(desc); 625 625 626 + connector_type = desc->connector_type; 627 + /* Catch common mistakes for panels. */ 628 + switch (connector_type) { 629 + case 0: 630 + dev_warn(dev, "Specify missing connector_type\n"); 631 + connector_type = DRM_MODE_CONNECTOR_DPI; 632 + break; 633 + case DRM_MODE_CONNECTOR_LVDS: 634 + WARN_ON(desc->bus_flags & 635 + ~(DRM_BUS_FLAG_DE_LOW | 636 + DRM_BUS_FLAG_DE_HIGH | 637 + DRM_BUS_FLAG_DATA_MSB_TO_LSB | 638 + DRM_BUS_FLAG_DATA_LSB_TO_MSB)); 639 + WARN_ON(desc->bus_format != MEDIA_BUS_FMT_RGB666_1X7X3_SPWG && 640 + desc->bus_format != MEDIA_BUS_FMT_RGB888_1X7X4_SPWG && 641 + desc->bus_format != MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA); 642 + WARN_ON(desc->bus_format == MEDIA_BUS_FMT_RGB666_1X7X3_SPWG && 643 + desc->bpc != 6); 644 + WARN_ON((desc->bus_format == MEDIA_BUS_FMT_RGB888_1X7X4_SPWG || 645 + desc->bus_format == MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA) && 646 + desc->bpc != 8); 647 + break; 648 + case DRM_MODE_CONNECTOR_eDP: 649 + dev_warn(dev, "eDP panels moved to panel-edp\n"); 650 + return ERR_PTR(-EINVAL); 651 + case DRM_MODE_CONNECTOR_DSI: 652 + if (desc->bpc != 6 && desc->bpc != 8) 653 + dev_warn(dev, "Expected bpc in {6,8} but got: %u\n", desc->bpc); 654 + break; 655 + case DRM_MODE_CONNECTOR_DPI: 656 + bus_flags = DRM_BUS_FLAG_DE_LOW | 657 + DRM_BUS_FLAG_DE_HIGH | 658 + DRM_BUS_FLAG_PIXDATA_SAMPLE_POSEDGE | 659 + DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE | 660 + DRM_BUS_FLAG_DATA_MSB_TO_LSB | 661 + DRM_BUS_FLAG_DATA_LSB_TO_MSB | 662 + DRM_BUS_FLAG_SYNC_SAMPLE_POSEDGE | 663 + DRM_BUS_FLAG_SYNC_SAMPLE_NEGEDGE; 664 + if (desc->bus_flags & ~bus_flags) 665 + dev_warn(dev, "Unexpected bus_flags(%d)\n", desc->bus_flags & ~bus_flags); 666 + if (!(desc->bus_flags & bus_flags)) 667 + dev_warn(dev, "Specify missing bus_flags\n"); 668 + if (desc->bus_format == 0) 669 + dev_warn(dev, "Specify missing bus_format\n"); 670 + if (desc->bpc != 6 && desc->bpc != 8) 671 + dev_warn(dev, "Expected bpc in {6,8} but got: %u\n", desc->bpc); 672 + break; 673 + default: 674 + dev_warn(dev, "Specify a valid connector_type: %d\n", desc->connector_type); 675 + connector_type = DRM_MODE_CONNECTOR_DPI; 676 + break; 677 + } 678 + 626 679 panel = devm_drm_panel_alloc(dev, struct panel_simple, base, 627 - &panel_simple_funcs, desc->connector_type); 680 + &panel_simple_funcs, connector_type); 628 681 if (IS_ERR(panel)) 629 682 return ERR_CAST(panel); ··· 717 664 err = panel_simple_override_nondefault_lvds_datamapping(dev, panel); 718 665 if (err) 719 666 goto free_ddc; 720 - } 721 - 722 - connector_type = desc->connector_type; 723 - /* Catch common mistakes for panels. */ 724 - switch (connector_type) { 725 - case 0: 726 - dev_warn(dev, "Specify missing connector_type\n"); 727 - connector_type = DRM_MODE_CONNECTOR_DPI; 728 - break; 729 - case DRM_MODE_CONNECTOR_LVDS: 730 - WARN_ON(desc->bus_flags & 731 - ~(DRM_BUS_FLAG_DE_LOW | 732 - DRM_BUS_FLAG_DE_HIGH | 733 - DRM_BUS_FLAG_DATA_MSB_TO_LSB | 734 - DRM_BUS_FLAG_DATA_LSB_TO_MSB)); 735 - WARN_ON(desc->bus_format != MEDIA_BUS_FMT_RGB666_1X7X3_SPWG && 736 - desc->bus_format != MEDIA_BUS_FMT_RGB888_1X7X4_SPWG && 737 - desc->bus_format != MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA); 738 - WARN_ON(desc->bus_format == MEDIA_BUS_FMT_RGB666_1X7X3_SPWG && 739 - desc->bpc != 6); 740 - WARN_ON((desc->bus_format == MEDIA_BUS_FMT_RGB888_1X7X4_SPWG || 741 - desc->bus_format == MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA) && 742 - desc->bpc != 8); 743 - break; 744 - case DRM_MODE_CONNECTOR_eDP: 745 - dev_warn(dev, "eDP panels moved to panel-edp\n"); 746 - err = -EINVAL; 747 - goto free_ddc; 748 - case DRM_MODE_CONNECTOR_DSI: 749 - if (desc->bpc != 6 && desc->bpc != 8) 750 - dev_warn(dev, "Expected bpc in {6,8} but got: %u\n", desc->bpc); 751 - break; 752 - case DRM_MODE_CONNECTOR_DPI: 753 - bus_flags = DRM_BUS_FLAG_DE_LOW | 754 - DRM_BUS_FLAG_DE_HIGH | 755 - DRM_BUS_FLAG_PIXDATA_SAMPLE_POSEDGE | 756 - DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE | 757 - DRM_BUS_FLAG_DATA_MSB_TO_LSB | 758 - DRM_BUS_FLAG_DATA_LSB_TO_MSB | 759 - DRM_BUS_FLAG_SYNC_SAMPLE_POSEDGE | 760 - DRM_BUS_FLAG_SYNC_SAMPLE_NEGEDGE; 761 - if (desc->bus_flags & ~bus_flags) 762 - dev_warn(dev, "Unexpected bus_flags(%d)\n", desc->bus_flags & ~bus_flags); 763 - if (!(desc->bus_flags & bus_flags)) 764 - dev_warn(dev, "Specify missing bus_flags\n"); 765 - if (desc->bus_format == 0) 766 - dev_warn(dev, "Specify missing bus_format\n"); 767 - if (desc->bpc != 6 && desc->bpc != 8) 768 - dev_warn(dev, "Expected bpc in {6,8} but got: %u\n", desc->bpc); 769 - break; 770 - default: 771 - dev_warn(dev, "Specify a valid connector_type: %d\n", desc->connector_type); 772 - connector_type = DRM_MODE_CONNECTOR_DPI; 773 - break; 774 667 } 775 668 776 669 dev_set_drvdata(dev, panel); ··· 1899 1900 }, 1900 1901 .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 1901 1902 .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE, 1903 + .connector_type = DRM_MODE_CONNECTOR_DPI, 1902 1904 }; 1903 1905 1904 1906 static const struct display_timing dlc_dlc0700yzg_1_timing = {
-10
drivers/gpu/drm/panthor/panthor_mmu.c
··· 1252 1252 goto err_cleanup; 1253 1253 } 1254 1254 1255 - /* drm_gpuvm_bo_obtain_prealloc() will call drm_gpuvm_bo_put() on our 1256 - * pre-allocated BO if the <BO,VM> association exists. Given we 1257 - * only have one ref on preallocated_vm_bo, drm_gpuvm_bo_destroy() will 1258 - * be called immediately, and we have to hold the VM resv lock when 1259 - * calling this function. 1260 - */ 1261 - dma_resv_lock(panthor_vm_resv(vm), NULL); 1262 - mutex_lock(&bo->base.base.gpuva.lock); 1263 1255 op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo); 1264 - mutex_unlock(&bo->base.base.gpuva.lock); 1265 - dma_resv_unlock(panthor_vm_resv(vm)); 1266 1256 1267 1257 op_ctx->map.bo_offset = offset; 1268 1258
+12 -2
drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c
··· 121 121 struct drm_crtc *crtc = encoder->crtc; 122 122 123 123 /* Unconditionally switch to TMDS as FRL is not yet supported */ 124 - gpiod_set_value(hdmi->frl_enable_gpio, 0); 124 + gpiod_set_value_cansleep(hdmi->frl_enable_gpio, 0); 125 125 126 126 if (!crtc || !crtc->state) 127 127 return; ··· 640 640 component_del(&pdev->dev, &dw_hdmi_qp_rockchip_ops); 641 641 } 642 642 643 + static int __maybe_unused dw_hdmi_qp_rockchip_suspend(struct device *dev) 644 + { 645 + struct rockchip_hdmi_qp *hdmi = dev_get_drvdata(dev); 646 + 647 + dw_hdmi_qp_suspend(dev, hdmi->hdmi); 648 + 649 + return 0; 650 + } 651 + 643 652 static int __maybe_unused dw_hdmi_qp_rockchip_resume(struct device *dev) 644 653 { 645 654 struct rockchip_hdmi_qp *hdmi = dev_get_drvdata(dev); ··· 664 655 } 665 656 666 657 static const struct dev_pm_ops dw_hdmi_qp_rockchip_pm = { 667 - SET_SYSTEM_SLEEP_PM_OPS(NULL, dw_hdmi_qp_rockchip_resume) 658 + SET_SYSTEM_SLEEP_PM_OPS(dw_hdmi_qp_rockchip_suspend, 659 + dw_hdmi_qp_rockchip_resume) 668 660 }; 669 661 670 662 struct platform_driver dw_hdmi_qp_rockchip_pltfm_driver = {
+13 -4
drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
··· 2104 2104 * Spin until the previous port_mux configuration is done. 2105 2105 */ 2106 2106 ret = readx_poll_timeout_atomic(rk3568_vop2_read_port_mux, vop2, port_mux_sel, 2107 - port_mux_sel == vop2->old_port_sel, 0, 50 * 1000); 2107 + port_mux_sel == vop2->old_port_sel, 10, 50 * 1000); 2108 2108 if (ret) 2109 2109 DRM_DEV_ERROR(vop2->dev, "wait port_mux done timeout: 0x%x--0x%x\n", 2110 2110 port_mux_sel, vop2->old_port_sel); ··· 2124 2124 * Spin until the previous layer configuration is done. 2125 2125 */ 2126 2126 ret = readx_poll_timeout_atomic(rk3568_vop2_read_layer_cfg, vop2, atv_layer_cfg, 2127 - atv_layer_cfg == cfg, 0, 50 * 1000); 2127 + atv_layer_cfg == cfg, 10, 50 * 1000); 2128 2128 if (ret) 2129 2129 DRM_DEV_ERROR(vop2->dev, "wait layer cfg done timeout: 0x%x--0x%x\n", 2130 2130 atv_layer_cfg, cfg); ··· 2144 2144 u8 layer_sel_id; 2145 2145 unsigned int ofs; 2146 2146 u32 ovl_ctrl; 2147 + u32 cfg_done; 2147 2148 int i; 2148 2149 struct vop2_video_port *vp0 = &vop2->vps[0]; 2149 2150 struct vop2_video_port *vp1 = &vop2->vps[1]; ··· 2299 2298 rk3568_vop2_wait_for_port_mux_done(vop2); 2300 2299 } 2301 2300 2302 - if (layer_sel != old_layer_sel && atv_layer_sel != old_layer_sel) 2303 - rk3568_vop2_wait_for_layer_cfg_done(vop2, vop2->old_layer_sel); 2301 + if (layer_sel != old_layer_sel && atv_layer_sel != old_layer_sel) { 2302 + cfg_done = vop2_readl(vop2, RK3568_REG_CFG_DONE); 2303 + cfg_done &= (BIT(vop2->data->nr_vps) - 1); 2304 + cfg_done &= ~BIT(vp->id); 2305 + /* 2306 + * Changes of other VPs' overlays have not taken effect 2307 + */ 2308 + if (cfg_done) 2309 + rk3568_vop2_wait_for_layer_cfg_done(vop2, vop2->old_layer_sel); 2310 + } 2304 2311 2305 2312 vop2_writel(vop2, RK3568_OVL_LAYER_SEL, layer_sel); 2306 2313 mutex_unlock(&vop2->ovl_lock);
-9
drivers/gpu/drm/sysfb/drm_sysfb_helper.h
··· 55 55 #endif 56 56 57 57 /* 58 - * Input parsing 59 - */ 60 - 61 - int drm_sysfb_get_validated_int(struct drm_device *dev, const char *name, 62 - u64 value, u32 max); 63 - int drm_sysfb_get_validated_int0(struct drm_device *dev, const char *name, 64 - u64 value, u32 max); 65 - 66 - /* 67 58 * Display modes 68 59 */ 69 60
+8 -14
drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
··· 32 32 33 33 #include <drm/ttm/ttm_placement.h> 34 34 35 - static void vmw_bo_release(struct vmw_bo *vbo) 35 + /** 36 + * vmw_bo_free - vmw_bo destructor 37 + * 38 + * @bo: Pointer to the embedded struct ttm_buffer_object 39 + */ 40 + static void vmw_bo_free(struct ttm_buffer_object *bo) 36 41 { 37 42 struct vmw_resource *res; 43 + struct vmw_bo *vbo = to_vmw_bo(&bo->base); 38 44 39 45 WARN_ON(kref_read(&vbo->tbo.base.refcount) != 0); 40 46 vmw_bo_unmap(vbo); ··· 68 62 } 69 63 vmw_surface_unreference(&vbo->dumb_surface); 70 64 } 71 - drm_gem_object_release(&vbo->tbo.base); 72 - } 73 - 74 - /** 75 - * vmw_bo_free - vmw_bo destructor 76 - * 77 - * @bo: Pointer to the embedded struct ttm_buffer_object 78 - */ 79 - static void vmw_bo_free(struct ttm_buffer_object *bo) 80 - { 81 - struct vmw_bo *vbo = to_vmw_bo(&bo->base); 82 - 83 65 WARN_ON(!RB_EMPTY_ROOT(&vbo->res_tree)); 84 - vmw_bo_release(vbo); 66 + drm_gem_object_release(&vbo->tbo.base); 85 67 WARN_ON(vbo->dirty); 86 68 kfree(vbo); 87 69 }
+5 -5
drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
··· 515 515 /** 516 516 * vmw_event_fence_action_seq_passed 517 517 * 518 - * @action: The struct vmw_fence_action embedded in a struct 519 - * vmw_event_fence_action. 518 + * @f: The struct dma_fence which provides timestamp for the action event 519 + * @cb: The struct dma_fence_cb callback for the action event. 520 520 * 521 - * This function is called when the seqno of the fence where @action is 522 - * attached has passed. It queues the event on the submitter's event list. 523 - * This function is always called from atomic context. 521 + * This function is called when the seqno of the fence has passed 522 + * and it is always called from atomic context. 523 + * It queues the event on the submitter's event list. 524 524 */ 525 525 static void vmw_event_fence_action_seq_passed(struct dma_fence *f, 526 526 struct dma_fence_cb *cb)
+8 -6
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 766 766 return ERR_PTR(ret); 767 767 } 768 768 769 - ttm_bo_reserve(&bo->tbo, false, false, NULL); 770 - ret = vmw_bo_dirty_add(bo); 771 - if (!ret && surface && surface->res.func->dirty_alloc) { 772 - surface->res.coherent = true; 773 - ret = surface->res.func->dirty_alloc(&surface->res); 769 + if (bo) { 770 + ttm_bo_reserve(&bo->tbo, false, false, NULL); 771 + ret = vmw_bo_dirty_add(bo); 772 + if (!ret && surface && surface->res.func->dirty_alloc) { 773 + surface->res.coherent = true; 774 + ret = surface->res.func->dirty_alloc(&surface->res); 775 + } 776 + ttm_bo_unreserve(&bo->tbo); 774 777 } 775 - ttm_bo_unreserve(&bo->tbo); 776 778 777 779 return &vfb->base; 778 780 }
+3 -1
drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
··· 923 923 ttm_bo_unreserve(&buf->tbo); 924 924 925 925 res = vmw_shader_alloc(dev_priv, buf, size, 0, shader_type); 926 - if (unlikely(ret != 0)) 926 + if (IS_ERR(res)) { 927 + ret = PTR_ERR(res); 927 928 goto no_reserve; 929 + } 928 930 929 931 ret = vmw_cmdbuf_res_add(man, vmw_cmdbuf_res_shader, 930 932 vmw_shader_key(user_key, shader_type),
+2
drivers/hv/mshv_common.c
··· 142 142 } 143 143 EXPORT_SYMBOL_GPL(hv_call_get_partition_property); 144 144 145 + #ifdef CONFIG_X86 145 146 /* 146 147 * Corresponding sleep states have to be initialized in order for a subsequent 147 148 * HVCALL_ENTER_SLEEP_STATE call to succeed. Currently only S5 state as per ··· 238 237 BUG(); 239 238 240 239 } 240 + #endif
+11 -9
drivers/hv/mshv_regions.c
··· 58 58 59 59 page_order = folio_order(page_folio(page)); 60 60 /* The hypervisor only supports 4K and 2M page sizes */ 61 - if (page_order && page_order != HPAGE_PMD_ORDER) 61 + if (page_order && page_order != PMD_ORDER) 62 62 return -EINVAL; 63 63 64 64 stride = 1 << page_order; ··· 494 494 unsigned long mstart, mend; 495 495 int ret = -EPERM; 496 496 497 - if (mmu_notifier_range_blockable(range)) 498 - mutex_lock(&region->mutex); 499 - else if (!mutex_trylock(&region->mutex)) 500 - goto out_fail; 501 - 502 - mmu_interval_set_seq(mni, cur_seq); 503 - 504 497 mstart = max(range->start, region->start_uaddr); 505 498 mend = min(range->end, region->start_uaddr + 506 499 (region->nr_pages << HV_HYP_PAGE_SHIFT)); ··· 501 508 page_offset = HVPFN_DOWN(mstart - region->start_uaddr); 502 509 page_count = HVPFN_DOWN(mend - mstart); 503 510 511 + if (mmu_notifier_range_blockable(range)) 512 + mutex_lock(&region->mutex); 513 + else if (!mutex_trylock(&region->mutex)) 514 + goto out_fail; 515 + 516 + mmu_interval_set_seq(mni, cur_seq); 517 + 504 518 ret = mshv_region_remap_pages(region, HV_MAP_GPA_NO_ACCESS, 505 519 page_offset, page_count); 506 520 if (ret) 507 - goto out_fail; 521 + goto out_unlock; 508 522 509 523 mshv_region_invalidate_pages(region, page_offset, page_count); 510 524 ··· 519 519 520 520 return true; 521 521 522 + out_unlock: 523 + mutex_unlock(&region->mutex); 522 524 out_fail: 523 525 WARN_ONCE(ret, 524 526 "Failed to invalidate region %#llx-%#llx (range %#lx-%#lx, event: %u, pages %#llx-%#llx, mm: %#llx): %d\n",
+7
drivers/i2c/busses/i2c-imx-lpi2c.c
··· 593 593 return false; 594 594 595 595 /* 596 + * A system-wide suspend or resume transition is in progress. LPI2C should use PIO to 597 + * transfer data to avoid issues caused by DMA hardware resources that are not ready. 598 + */ 599 + if (pm_suspend_in_progress()) 600 + return false; 601 + 602 + /* 596 603 * When the length of data is less than I2C_DMA_THRESHOLD, 597 604 * cpu mode is used directly to avoid low performance. 598 605 */
+7 -4
drivers/i2c/busses/i2c-qcom-geni.c
··· 116 116 dma_addr_t dma_addr; 117 117 struct dma_chan *tx_c; 118 118 struct dma_chan *rx_c; 119 + bool no_dma; 119 120 bool gpi_mode; 120 121 bool abort_done; 121 122 bool is_tx_multi_desc_xfer; ··· 448 447 size_t len = msg->len; 449 448 struct i2c_msg *cur; 450 449 451 - dma_buf = i2c_get_dma_safe_msg_buf(msg, 32); 450 + dma_buf = gi2c->no_dma ? NULL : i2c_get_dma_safe_msg_buf(msg, 32); 452 451 if (dma_buf) 453 452 geni_se_select_mode(se, GENI_SE_DMA); 454 453 else ··· 487 486 size_t len = msg->len; 488 487 struct i2c_msg *cur; 489 488 490 - dma_buf = i2c_get_dma_safe_msg_buf(msg, 32); 489 + dma_buf = gi2c->no_dma ? NULL : i2c_get_dma_safe_msg_buf(msg, 32); 491 490 if (dma_buf) 492 491 geni_se_select_mode(se, GENI_SE_DMA); 493 492 else ··· 1081 1080 goto err_resources; 1082 1081 } 1083 1082 1084 - if (desc && desc->no_dma_support) 1083 + if (desc && desc->no_dma_support) { 1085 1084 fifo_disable = false; 1086 - else 1085 + gi2c->no_dma = true; 1086 + } else { 1087 1087 fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE; 1088 + } 1088 1089 1089 1090 if (fifo_disable) { 1090 1091 /* FIFO is disabled, so we can only use GPI DMA */
+39 -7
drivers/i2c/busses/i2c-riic.c
··· 670 670 671 671 static int riic_i2c_suspend(struct device *dev) 672 672 { 673 - struct riic_dev *riic = dev_get_drvdata(dev); 674 - int ret; 673 + /* 674 + * Some I2C devices may need the I2C controller to remain active 675 + * during resume_noirq() or suspend_noirq(). If the controller is 676 + * autosuspended, there is no way to wake it up once runtime PM is 677 + * disabled (in suspend_late()). 678 + * 679 + * During system resume, the I2C controller will be available only 680 + * after runtime PM is re-enabled (in resume_early()). However, this 681 + * may be too late for some devices. 682 + * 683 + * Wake up the controller in the suspend() callback while runtime PM 684 + * is still enabled. The I2C controller will remain available until 685 + * the suspend_noirq() callback (pm_runtime_force_suspend()) is 686 + * called. During resume, the I2C controller can be restored by the 687 + * resume_noirq() callback (pm_runtime_force_resume()). 688 + * 689 + * Finally, the resume() callback re-enables autosuspend, ensuring 690 + * the I2C controller remains available until the system enters 691 + * suspend_noirq() and from resume_noirq(). 
692 + */ 693 + return pm_runtime_resume_and_get(dev); 694 + } 675 695 676 - ret = pm_runtime_resume_and_get(dev); 677 - if (ret) 678 - return ret; 696 + static int riic_i2c_resume(struct device *dev) 697 + { 698 + pm_runtime_put_autosuspend(dev); 699 + 700 + return 0; 701 + } 702 + 703 + static int riic_i2c_suspend_noirq(struct device *dev) 704 + { 705 + struct riic_dev *riic = dev_get_drvdata(dev); 679 706 680 707 i2c_mark_adapter_suspended(&riic->adapter); 681 708 ··· 710 683 riic_clear_set_bit(riic, ICCR1_ICE, 0, RIIC_ICCR1); 711 684 712 685 pm_runtime_mark_last_busy(dev); 713 - pm_runtime_put_sync(dev); 686 + pm_runtime_force_suspend(dev); 714 687 715 688 return reset_control_assert(riic->rstc); 716 689 } 717 690 718 - static int riic_i2c_resume(struct device *dev) 691 + static int riic_i2c_resume_noirq(struct device *dev) 719 692 { 720 693 struct riic_dev *riic = dev_get_drvdata(dev); 721 694 int ret; 722 695 723 696 ret = reset_control_deassert(riic->rstc); 697 + if (ret) 698 + return ret; 699 + 700 + ret = pm_runtime_force_resume(dev); 724 701 if (ret) 725 702 return ret; 726 703 ··· 745 714 } 746 715 747 716 static const struct dev_pm_ops riic_i2c_pm_ops = { 717 + NOIRQ_SYSTEM_SLEEP_PM_OPS(riic_i2c_suspend_noirq, riic_i2c_resume_noirq) 748 718 SYSTEM_SLEEP_PM_OPS(riic_i2c_suspend, riic_i2c_resume) 749 719 }; 750 720
+1
drivers/iommu/iommu-sva.c
··· 3 3 * Helpers for IOMMU drivers implementing SVA 4 4 */ 5 5 #include <linux/mmu_context.h> 6 + #include <linux/mmu_notifier.h> 6 7 #include <linux/mutex.h> 7 8 #include <linux/sched/mm.h> 8 9 #include <linux/iommu.h>
+2 -2
drivers/irqchip/irq-riscv-imsic-platform.c
··· 158 158 tmp_vec.local_id = new_vec->local_id; 159 159 160 160 /* Point device to the temporary vector */ 161 - imsic_msi_update_msg(d, &tmp_vec); 161 + imsic_msi_update_msg(irq_get_irq_data(d->irq), &tmp_vec); 162 162 } 163 163 164 164 /* Point device to the new vector */ 165 - imsic_msi_update_msg(d, new_vec); 165 + imsic_msi_update_msg(irq_get_irq_data(d->irq), new_vec); 166 166 167 167 /* Update irq descriptors with the new vector */ 168 168 d->chip_data = new_vec;
+11 -17
drivers/media/i2c/ov02c10.c
··· 165 165 {0x3809, 0x88}, 166 166 {0x380a, 0x04}, 167 167 {0x380b, 0x44}, 168 - {0x3810, 0x00}, 169 - {0x3811, 0x02}, 170 - {0x3812, 0x00}, 171 - {0x3813, 0x02}, 172 168 {0x3814, 0x01}, 173 169 {0x3815, 0x01}, 174 170 {0x3816, 0x01}, 175 171 {0x3817, 0x01}, 176 172 177 - {0x3820, 0xa0}, 173 + {0x3820, 0xa8}, 178 174 {0x3821, 0x00}, 179 175 {0x3822, 0x80}, 180 176 {0x3823, 0x08}, ··· 381 385 struct v4l2_ctrl *vblank; 382 386 struct v4l2_ctrl *hblank; 383 387 struct v4l2_ctrl *exposure; 384 - struct v4l2_ctrl *hflip; 385 - struct v4l2_ctrl *vflip; 386 388 387 389 struct clk *img_clk; 388 390 struct gpio_desc *reset; ··· 459 465 break; 460 466 461 467 case V4L2_CID_HFLIP: 468 + cci_write(ov02c10->regmap, OV02C10_ISP_X_WIN_CONTROL, 469 + ctrl->val ? 2 : 1, &ret); 462 470 cci_update_bits(ov02c10->regmap, OV02C10_ROTATE_CONTROL, 463 - BIT(3), ov02c10->hflip->val << 3, &ret); 471 + BIT(3), ctrl->val ? 0 : BIT(3), &ret); 464 472 break; 465 473 466 474 case V4L2_CID_VFLIP: 475 + cci_write(ov02c10->regmap, OV02C10_ISP_Y_WIN_CONTROL, 476 + ctrl->val ? 
2 : 1, &ret); 467 477 cci_update_bits(ov02c10->regmap, OV02C10_ROTATE_CONTROL, 468 - BIT(4), ov02c10->vflip->val << 4, &ret); 478 + BIT(4), ctrl->val << 4, &ret); 469 479 break; 470 480 471 481 default: ··· 547 549 OV02C10_EXPOSURE_STEP, 548 550 exposure_max); 549 551 550 - ov02c10->hflip = v4l2_ctrl_new_std(ctrl_hdlr, &ov02c10_ctrl_ops, 551 - V4L2_CID_HFLIP, 0, 1, 1, 0); 552 - if (ov02c10->hflip) 553 - ov02c10->hflip->flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT; 552 + v4l2_ctrl_new_std(ctrl_hdlr, &ov02c10_ctrl_ops, V4L2_CID_HFLIP, 553 + 0, 1, 1, 0); 554 554 555 - ov02c10->vflip = v4l2_ctrl_new_std(ctrl_hdlr, &ov02c10_ctrl_ops, 556 - V4L2_CID_VFLIP, 0, 1, 1, 0); 557 - if (ov02c10->vflip) 558 - ov02c10->vflip->flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT; 555 + v4l2_ctrl_new_std(ctrl_hdlr, &ov02c10_ctrl_ops, V4L2_CID_VFLIP, 556 + 0, 1, 1, 0); 559 557 560 558 v4l2_ctrl_new_std_menu_items(ctrl_hdlr, &ov02c10_ctrl_ops, 561 559 V4L2_CID_TEST_PATTERN,
+1 -1
drivers/media/pci/intel/Kconfig
··· 6 6 7 7 config IPU_BRIDGE 8 8 tristate "Intel IPU Bridge" 9 - depends on ACPI || COMPILE_TEST 9 + depends on ACPI 10 10 depends on I2C 11 11 help 12 12 The IPU bridge is a helper library for Intel IPU drivers to
+29
drivers/media/pci/intel/ipu-bridge.c
··· 5 5 #include <acpi/acpi_bus.h> 6 6 #include <linux/cleanup.h> 7 7 #include <linux/device.h> 8 + #include <linux/dmi.h> 8 9 #include <linux/i2c.h> 9 10 #include <linux/mei_cl_bus.h> 10 11 #include <linux/platform_device.h> ··· 97 96 IPU_SENSOR_CONFIG("SONY471A", 1, 200000000), 98 97 /* Toshiba T4KA3 */ 99 98 IPU_SENSOR_CONFIG("XMCC0003", 1, 321468000), 99 + }; 100 + 101 + /* 102 + * DMI matches for laptops which have their sensor mounted upside-down 103 + * without reporting a rotation of 180° in either the SSDB or the _PLD. 104 + */ 105 + static const struct dmi_system_id upside_down_sensor_dmi_ids[] = { 106 + { 107 + .matches = { 108 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 109 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS 13 9350"), 110 + }, 111 + .driver_data = "OVTI02C1", 112 + }, 113 + { 114 + .matches = { 115 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 116 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS 16 9640"), 117 + }, 118 + .driver_data = "OVTI02C1", 119 + }, 120 + {} /* Terminating entry */ 100 121 }; 101 122 102 123 static const struct ipu_property_names prop_names = { ··· 271 248 static u32 ipu_bridge_parse_rotation(struct acpi_device *adev, 272 249 struct ipu_sensor_ssdb *ssdb) 273 250 { 251 + const struct dmi_system_id *dmi_id; 252 + 253 + dmi_id = dmi_first_match(upside_down_sensor_dmi_ids); 254 + if (dmi_id && acpi_dev_hid_match(adev, dmi_id->driver_data)) 255 + return 180; 256 + 274 257 switch (ssdb->degree) { 275 258 case IPU_SENSOR_ROTATION_NORMAL: 276 259 return 0;
-7
drivers/media/platform/arm/mali-c55/mali-c55-params.c
··· 582 582 struct mali_c55 *mali_c55 = params->mali_c55; 583 583 int ret; 584 584 585 - if (config->version != MALI_C55_PARAM_BUFFER_V1) { 586 - dev_dbg(mali_c55->dev, 587 - "Unsupported extensible format version: %u\n", 588 - config->version); 589 - return -EINVAL; 590 - } 591 - 592 585 ret = v4l2_isp_params_validate_buffer_size(mali_c55->dev, vb, 593 586 v4l2_isp_params_buffer_size(MALI_C55_PARAMS_MAX_SIZE)); 594 587 if (ret)
+26 -15
drivers/media/platform/renesas/rzg2l-cru/rzg2l-csi2.c
··· 96 96 97 97 #define VSRSTS_RETRIES 20 98 98 99 - #define RZG2L_CSI2_MIN_WIDTH 320 100 - #define RZG2L_CSI2_MIN_HEIGHT 240 101 - #define RZG2L_CSI2_MAX_WIDTH 2800 102 - #define RZG2L_CSI2_MAX_HEIGHT 4095 103 - 104 - #define RZG2L_CSI2_DEFAULT_WIDTH RZG2L_CSI2_MIN_WIDTH 105 - #define RZG2L_CSI2_DEFAULT_HEIGHT RZG2L_CSI2_MIN_HEIGHT 106 99 #define RZG2L_CSI2_DEFAULT_FMT MEDIA_BUS_FMT_UYVY8_1X16 107 100 108 101 enum rzg2l_csi2_pads { ··· 130 137 int (*dphy_enable)(struct rzg2l_csi2 *csi2); 131 138 int (*dphy_disable)(struct rzg2l_csi2 *csi2); 132 139 bool has_system_clk; 140 + unsigned int min_width; 141 + unsigned int min_height; 142 + unsigned int max_width; 143 + unsigned int max_height; 133 144 }; 134 145 135 146 struct rzg2l_csi2_timings { ··· 415 418 .dphy_enable = rzg2l_csi2_dphy_enable, 416 419 .dphy_disable = rzg2l_csi2_dphy_disable, 417 420 .has_system_clk = true, 421 + .min_width = 320, 422 + .min_height = 240, 423 + .max_width = 2800, 424 + .max_height = 4095, 418 425 }; 419 426 420 427 static int rzg2l_csi2_dphy_setting(struct v4l2_subdev *sd, bool on) ··· 543 542 .dphy_enable = rzv2h_csi2_dphy_enable, 544 543 .dphy_disable = rzv2h_csi2_dphy_disable, 545 544 .has_system_clk = false, 545 + .min_width = 320, 546 + .min_height = 240, 547 + .max_width = 4096, 548 + .max_height = 4096, 546 549 }; 547 550 548 551 static int rzg2l_csi2_mipi_link_setting(struct v4l2_subdev *sd, bool on) ··· 636 631 struct v4l2_subdev_state *state, 637 632 struct v4l2_subdev_format *fmt) 638 633 { 634 + struct rzg2l_csi2 *csi2 = sd_to_csi2(sd); 639 635 struct v4l2_mbus_framefmt *src_format; 640 636 struct v4l2_mbus_framefmt *sink_format; 641 637 ··· 659 653 sink_format->ycbcr_enc = fmt->format.ycbcr_enc; 660 654 sink_format->quantization = fmt->format.quantization; 661 655 sink_format->width = clamp_t(u32, fmt->format.width, 662 - RZG2L_CSI2_MIN_WIDTH, RZG2L_CSI2_MAX_WIDTH); 656 + csi2->info->min_width, 657 + csi2->info->max_width); 663 658 sink_format->height = clamp_t(u32, 
fmt->format.height, 664 - RZG2L_CSI2_MIN_HEIGHT, RZG2L_CSI2_MAX_HEIGHT); 659 + csi2->info->min_height, 660 + csi2->info->max_height); 665 661 fmt->format = *sink_format; 666 662 667 663 /* propagate format to source pad */ ··· 676 668 struct v4l2_subdev_state *sd_state) 677 669 { 678 670 struct v4l2_subdev_format fmt = { .pad = RZG2L_CSI2_SINK, }; 671 + struct rzg2l_csi2 *csi2 = sd_to_csi2(sd); 679 672 680 - fmt.format.width = RZG2L_CSI2_DEFAULT_WIDTH; 681 - fmt.format.height = RZG2L_CSI2_DEFAULT_HEIGHT; 673 + fmt.format.width = csi2->info->min_width; 674 + fmt.format.height = csi2->info->min_height; 682 675 fmt.format.field = V4L2_FIELD_NONE; 683 676 fmt.format.code = RZG2L_CSI2_DEFAULT_FMT; 684 677 fmt.format.colorspace = V4L2_COLORSPACE_SRGB; ··· 706 697 struct v4l2_subdev_state *sd_state, 707 698 struct v4l2_subdev_frame_size_enum *fse) 708 699 { 700 + struct rzg2l_csi2 *csi2 = sd_to_csi2(sd); 701 + 709 702 if (fse->index != 0) 710 703 return -EINVAL; 711 704 712 705 if (!rzg2l_csi2_code_to_fmt(fse->code)) 713 706 return -EINVAL; 714 707 715 - fse->min_width = RZG2L_CSI2_MIN_WIDTH; 716 - fse->min_height = RZG2L_CSI2_MIN_HEIGHT; 717 - fse->max_width = RZG2L_CSI2_MAX_WIDTH; 718 - fse->max_height = RZG2L_CSI2_MAX_HEIGHT; 708 + fse->min_width = csi2->info->min_width; 709 + fse->min_height = csi2->info->min_height; 710 + fse->max_width = csi2->info->max_width; 711 + fse->max_height = csi2->info->max_height; 719 712 720 713 return 0; 721 714 }
+5 -2
drivers/net/can/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 3 menuconfig CAN_DEV 4 - bool "CAN Device Drivers" 4 + tristate "CAN Device Drivers" 5 5 default y 6 6 depends on CAN 7 7 help ··· 17 17 virtual ones. If you own such devices or plan to use the virtual CAN 18 18 interfaces to develop applications, say Y here. 19 19 20 - if CAN_DEV && CAN 20 + To compile as a module, choose M here: the module will be called 21 + can-dev. 22 + 23 + if CAN_DEV 21 24 22 25 config CAN_VCAN 23 26 tristate "Virtual Local CAN Interface (vcan)"
+1 -1
drivers/net/can/Makefile
··· 7 7 obj-$(CONFIG_CAN_VXCAN) += vxcan.o 8 8 obj-$(CONFIG_CAN_SLCAN) += slcan/ 9 9 10 - obj-$(CONFIG_CAN_DEV) += dev/ 10 + obj-y += dev/ 11 11 obj-y += esd/ 12 12 obj-y += rcar/ 13 13 obj-y += rockchip/
+1 -1
drivers/net/can/ctucanfd/ctucanfd_base.c
··· 310 310 } 311 311 312 312 ssp_cfg = FIELD_PREP(REG_TRV_DELAY_SSP_OFFSET, ssp_offset); 313 - ssp_cfg |= FIELD_PREP(REG_TRV_DELAY_SSP_SRC, 0x1); 313 + ssp_cfg |= FIELD_PREP(REG_TRV_DELAY_SSP_SRC, 0x0); 314 314 } 315 315 316 316 ctucan_write32(priv, CTUCANFD_TRV_DELAY, ssp_cfg);
+3 -2
drivers/net/can/dev/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - obj-$(CONFIG_CAN) += can-dev.o 3 + obj-$(CONFIG_CAN_DEV) += can-dev.o 4 4 5 - can-dev-$(CONFIG_CAN_DEV) += skb.o 5 + can-dev-y += skb.o 6 + 6 7 can-dev-$(CONFIG_CAN_CALC_BITTIMING) += calc_bittiming.o 7 8 can-dev-$(CONFIG_CAN_NETLINK) += bittiming.o 8 9 can-dev-$(CONFIG_CAN_NETLINK) += dev.o
+27
drivers/net/can/dev/dev.c
··· 375 375 } 376 376 } 377 377 378 + void can_set_cap_info(struct net_device *dev) 379 + { 380 + struct can_priv *priv = netdev_priv(dev); 381 + u32 can_cap; 382 + 383 + if (can_dev_in_xl_only_mode(priv)) { 384 + /* XL only mode => no CC/FD capability */ 385 + can_cap = CAN_CAP_XL; 386 + } else { 387 + /* mixed mode => CC + FD/XL capability */ 388 + can_cap = CAN_CAP_CC; 389 + 390 + if (priv->ctrlmode & CAN_CTRLMODE_FD) 391 + can_cap |= CAN_CAP_FD; 392 + 393 + if (priv->ctrlmode & CAN_CTRLMODE_XL) 394 + can_cap |= CAN_CAP_XL; 395 + } 396 + 397 + if (priv->ctrlmode & (CAN_CTRLMODE_LISTENONLY | 398 + CAN_CTRLMODE_RESTRICTED)) 399 + can_cap |= CAN_CAP_RO; 400 + 401 + can_set_cap(dev, can_cap); 402 + } 403 + 378 404 /* helper to define static CAN controller features at device creation time */ 379 405 int can_set_static_ctrlmode(struct net_device *dev, u32 static_mode) 380 406 { ··· 416 390 417 391 /* override MTU which was set by default in can_setup()? */ 418 392 can_set_default_mtu(dev); 393 + can_set_cap_info(dev); 419 394 420 395 return 0; 421 396 }
+1
drivers/net/can/dev/netlink.c
··· 377 377 } 378 378 379 379 can_set_default_mtu(dev); 380 + can_set_cap_info(dev); 380 381 381 382 return 0; 382 383 }
+1 -1
drivers/net/can/usb/etas_es58x/es58x_core.c
··· 1736 1736 dev_dbg(dev, "%s: Allocated %d rx URBs each of size %u\n", 1737 1737 __func__, i, rx_buf_len); 1738 1738 1739 - return ret; 1739 + return 0; 1740 1740 } 1741 1741 1742 1742 /**
+2
drivers/net/can/usb/gs_usb.c
··· 751 751 hf, parent->hf_size_rx, 752 752 gs_usb_receive_bulk_callback, parent); 753 753 754 + usb_anchor_urb(urb, &parent->rx_submitted); 755 + 754 756 rc = usb_submit_urb(urb, GFP_ATOMIC); 755 757 756 758 /* USB failure take down all interfaces */
+15
drivers/net/can/vcan.c
··· 130 130 return NETDEV_TX_OK; 131 131 } 132 132 133 + static void vcan_set_cap_info(struct net_device *dev) 134 + { 135 + u32 can_cap = CAN_CAP_CC; 136 + 137 + if (dev->mtu > CAN_MTU) 138 + can_cap |= CAN_CAP_FD; 139 + 140 + if (dev->mtu >= CANXL_MIN_MTU) 141 + can_cap |= CAN_CAP_XL; 142 + 143 + can_set_cap(dev, can_cap); 144 + } 145 + 133 146 static int vcan_change_mtu(struct net_device *dev, int new_mtu) 134 147 { 135 148 /* Do not allow changing the MTU while running */ ··· 154 141 return -EINVAL; 155 142 156 143 WRITE_ONCE(dev->mtu, new_mtu); 144 + vcan_set_cap_info(dev); 157 145 return 0; 158 146 } 159 147 ··· 176 162 dev->tx_queue_len = 0; 177 163 dev->flags = IFF_NOARP; 178 164 can_set_ml_priv(dev, netdev_priv(dev)); 165 + vcan_set_cap_info(dev); 179 166 180 167 /* set flags according to driver capabilities */ 181 168 if (echo)
+15
drivers/net/can/vxcan.c
··· 125 125 return iflink; 126 126 } 127 127 128 + static void vxcan_set_cap_info(struct net_device *dev) 129 + { 130 + u32 can_cap = CAN_CAP_CC; 131 + 132 + if (dev->mtu > CAN_MTU) 133 + can_cap |= CAN_CAP_FD; 134 + 135 + if (dev->mtu >= CANXL_MIN_MTU) 136 + can_cap |= CAN_CAP_XL; 137 + 138 + can_set_cap(dev, can_cap); 139 + } 140 + 128 141 static int vxcan_change_mtu(struct net_device *dev, int new_mtu) 129 142 { 130 143 /* Do not allow changing the MTU while running */ ··· 149 136 return -EINVAL; 150 137 151 138 WRITE_ONCE(dev->mtu, new_mtu); 139 + vxcan_set_cap_info(dev); 152 140 return 0; 153 141 } 154 142 ··· 181 167 182 168 can_ml = netdev_priv(dev) + ALIGN(sizeof(struct vxcan_priv), NETDEV_ALIGN); 183 169 can_set_ml_priv(dev, can_ml); 170 + vxcan_set_cap_info(dev); 184 171 } 185 172 186 173 /* forward declaration for rtnl_create_link() */
+1 -1
drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
··· 218 218 ioq_irq_err: 219 219 while (i) { 220 220 --i; 221 - free_irq(oct->msix_entries[i].vector, oct); 221 + free_irq(oct->msix_entries[i].vector, oct->ioq_vector[i]); 222 222 } 223 223 return -1; 224 224 }
+8 -5
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 962 962 }; 963 963 964 964 struct mlx5e_dev { 965 - struct mlx5e_priv *priv; 965 + struct net_device *netdev; 966 966 struct devlink_port dl_port; 967 967 }; 968 968 ··· 1242 1242 mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *profile); 1243 1243 int mlx5e_attach_netdev(struct mlx5e_priv *priv); 1244 1244 void mlx5e_detach_netdev(struct mlx5e_priv *priv); 1245 - void mlx5e_destroy_netdev(struct mlx5e_priv *priv); 1246 - int mlx5e_netdev_change_profile(struct mlx5e_priv *priv, 1247 - const struct mlx5e_profile *new_profile, void *new_ppriv); 1248 - void mlx5e_netdev_attach_nic_profile(struct mlx5e_priv *priv); 1245 + void mlx5e_destroy_netdev(struct net_device *netdev); 1246 + int mlx5e_netdev_change_profile(struct net_device *netdev, 1247 + struct mlx5_core_dev *mdev, 1248 + const struct mlx5e_profile *new_profile, 1249 + void *new_ppriv); 1250 + void mlx5e_netdev_attach_nic_profile(struct net_device *netdev, 1251 + struct mlx5_core_dev *mdev); 1249 1252 void mlx5e_set_netdev_mtu_boundaries(struct mlx5e_priv *priv); 1250 1253 void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16 mtu); 1251 1254
+56 -30
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 6325 6325 6326 6326 void mlx5e_priv_cleanup(struct mlx5e_priv *priv) 6327 6327 { 6328 + bool destroying = test_bit(MLX5E_STATE_DESTROYING, &priv->state); 6328 6329 int i; 6329 6330 6330 6331 /* bail if change profile failed and also rollback failed */ ··· 6353 6352 } 6354 6353 6355 6354 memset(priv, 0, sizeof(*priv)); 6355 + if (destroying) /* restore destroying bit, to allow unload */ 6356 + set_bit(MLX5E_STATE_DESTROYING, &priv->state); 6356 6357 } 6357 6358 6358 6359 static unsigned int mlx5e_get_max_num_txqs(struct mlx5_core_dev *mdev, ··· 6587 6584 return err; 6588 6585 } 6589 6586 6590 - int mlx5e_netdev_change_profile(struct mlx5e_priv *priv, 6591 - const struct mlx5e_profile *new_profile, void *new_ppriv) 6587 + int mlx5e_netdev_change_profile(struct net_device *netdev, 6588 + struct mlx5_core_dev *mdev, 6589 + const struct mlx5e_profile *new_profile, 6590 + void *new_ppriv) 6592 6591 { 6593 - const struct mlx5e_profile *orig_profile = priv->profile; 6594 - struct net_device *netdev = priv->netdev; 6595 - struct mlx5_core_dev *mdev = priv->mdev; 6596 - void *orig_ppriv = priv->ppriv; 6592 + struct mlx5e_priv *priv = netdev_priv(netdev); 6593 + const struct mlx5e_profile *orig_profile; 6597 6594 int err, rollback_err; 6595 + void *orig_ppriv; 6598 6596 6599 - /* cleanup old profile */ 6600 - mlx5e_detach_netdev(priv); 6601 - priv->profile->cleanup(priv); 6602 - mlx5e_priv_cleanup(priv); 6597 + orig_profile = priv->profile; 6598 + orig_ppriv = priv->ppriv; 6599 + 6600 + /* NULL could happen if previous change_profile failed to rollback */ 6601 + if (priv->profile) { 6602 + WARN_ON_ONCE(priv->mdev != mdev); 6603 + /* cleanup old profile */ 6604 + mlx5e_detach_netdev(priv); 6605 + priv->profile->cleanup(priv); 6606 + mlx5e_priv_cleanup(priv); 6607 + } 6608 + /* priv members are not valid from this point ... 
*/ 6603 6609 6604 6610 if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) { 6605 6611 mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv); ··· 6625 6613 return 0; 6626 6614 6627 6615 rollback: 6616 + if (!orig_profile) { 6617 + netdev_warn(netdev, "no original profile to rollback to\n"); 6618 + priv->profile = NULL; 6619 + return err; 6620 + } 6621 + 6628 6622 rollback_err = mlx5e_netdev_attach_profile(netdev, mdev, orig_profile, orig_ppriv); 6629 - if (rollback_err) 6630 - netdev_err(netdev, "%s: failed to rollback to orig profile, %d\n", 6631 - __func__, rollback_err); 6623 + if (rollback_err) { 6624 + netdev_err(netdev, "failed to rollback to orig profile, %d\n", 6625 + rollback_err); 6626 + priv->profile = NULL; 6627 + } 6632 6628 return err; 6633 6629 } 6634 6630 6635 - void mlx5e_netdev_attach_nic_profile(struct mlx5e_priv *priv) 6631 + void mlx5e_netdev_attach_nic_profile(struct net_device *netdev, 6632 + struct mlx5_core_dev *mdev) 6636 6633 { 6637 - mlx5e_netdev_change_profile(priv, &mlx5e_nic_profile, NULL); 6634 + mlx5e_netdev_change_profile(netdev, mdev, &mlx5e_nic_profile, NULL); 6638 6635 } 6639 6636 6640 - void mlx5e_destroy_netdev(struct mlx5e_priv *priv) 6637 + void mlx5e_destroy_netdev(struct net_device *netdev) 6641 6638 { 6642 - struct net_device *netdev = priv->netdev; 6639 + struct mlx5e_priv *priv = netdev_priv(netdev); 6643 6640 6644 - mlx5e_priv_cleanup(priv); 6641 + if (priv->profile) 6642 + mlx5e_priv_cleanup(priv); 6645 6643 free_netdev(netdev); 6646 6644 } 6647 6645 ··· 6659 6637 { 6660 6638 struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev); 6661 6639 struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev); 6662 - struct mlx5e_priv *priv = mlx5e_dev->priv; 6663 - struct net_device *netdev = priv->netdev; 6640 + struct mlx5e_priv *priv = netdev_priv(mlx5e_dev->netdev); 6641 + struct net_device *netdev = mlx5e_dev->netdev; 6664 6642 struct mlx5_core_dev *mdev = edev->mdev; 6665 6643 struct mlx5_core_dev 
*pos, *to; 6666 6644 int err, i; ··· 6706 6684 6707 6685 static int _mlx5e_suspend(struct auxiliary_device *adev, bool pre_netdev_reg) 6708 6686 { 6687 + struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev); 6709 6688 struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev); 6710 - struct mlx5e_priv *priv = mlx5e_dev->priv; 6711 - struct net_device *netdev = priv->netdev; 6712 - struct mlx5_core_dev *mdev = priv->mdev; 6689 + struct mlx5e_priv *priv = netdev_priv(mlx5e_dev->netdev); 6690 + struct net_device *netdev = mlx5e_dev->netdev; 6691 + struct mlx5_core_dev *mdev = edev->mdev; 6713 6692 struct mlx5_core_dev *pos; 6714 6693 int i; 6715 6694 ··· 6771 6748 goto err_devlink_port_unregister; 6772 6749 } 6773 6750 SET_NETDEV_DEVLINK_PORT(netdev, &mlx5e_dev->dl_port); 6751 + mlx5e_dev->netdev = netdev; 6774 6752 6775 6753 mlx5e_build_nic_netdev(netdev); 6776 6754 6777 6755 priv = netdev_priv(netdev); 6778 - mlx5e_dev->priv = priv; 6779 6756 6780 6757 priv->profile = profile; 6781 6758 priv->ppriv = NULL; ··· 6808 6785 err_profile_cleanup: 6809 6786 profile->cleanup(priv); 6810 6787 err_destroy_netdev: 6811 - mlx5e_destroy_netdev(priv); 6788 + mlx5e_destroy_netdev(netdev); 6812 6789 err_devlink_port_unregister: 6813 6790 mlx5e_devlink_port_unregister(mlx5e_dev); 6814 6791 err_devlink_unregister: ··· 6838 6815 { 6839 6816 struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev); 6840 6817 struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev); 6841 - struct mlx5e_priv *priv = mlx5e_dev->priv; 6818 + struct net_device *netdev = mlx5e_dev->netdev; 6819 + struct mlx5e_priv *priv = netdev_priv(netdev); 6842 6820 struct mlx5_core_dev *mdev = edev->mdev; 6843 6821 6844 6822 mlx5_core_uplink_netdev_set(mdev, NULL); 6845 - mlx5e_dcbnl_delete_app(priv); 6823 + 6824 + if (priv->profile) 6825 + mlx5e_dcbnl_delete_app(priv); 6846 6826 /* When unload driver, the netdev is in registered state 6847 6827 * if it's from legacy mode. 
If from switchdev mode, it 6848 6828 * is already unregistered before changing to NIC profile. 6849 6829 */ 6850 - if (priv->netdev->reg_state == NETREG_REGISTERED) { 6851 - unregister_netdev(priv->netdev); 6830 + if (netdev->reg_state == NETREG_REGISTERED) { 6831 + unregister_netdev(netdev); 6852 6832 _mlx5e_suspend(adev, false); 6853 6833 } else { 6854 6834 struct mlx5_core_dev *pos; ··· 6866 6840 /* Avoid cleanup if profile rollback failed. */ 6867 6841 if (priv->profile) 6868 6842 priv->profile->cleanup(priv); 6869 - mlx5e_destroy_netdev(priv); 6843 + mlx5e_destroy_netdev(netdev); 6870 6844 mlx5e_devlink_port_unregister(mlx5e_dev); 6871 6845 mlx5e_destroy_devlink(mlx5e_dev); 6872 6846 }
+7 -8
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 1508 1508 { 1509 1509 struct mlx5e_rep_priv *rpriv = mlx5e_rep_to_rep_priv(rep); 1510 1510 struct net_device *netdev; 1511 - struct mlx5e_priv *priv; 1512 1511 int err; 1513 1512 1514 1513 netdev = mlx5_uplink_netdev_get(dev); 1515 1514 if (!netdev) 1516 1515 return 0; 1517 1516 1518 - priv = netdev_priv(netdev); 1519 - rpriv->netdev = priv->netdev; 1520 - err = mlx5e_netdev_change_profile(priv, &mlx5e_uplink_rep_profile, 1521 - rpriv); 1517 + /* must not use netdev_priv(netdev), it might not be initialized yet */ 1518 + rpriv->netdev = netdev; 1519 + err = mlx5e_netdev_change_profile(netdev, dev, 1520 + &mlx5e_uplink_rep_profile, rpriv); 1522 1521 mlx5_uplink_netdev_put(dev, netdev); 1523 1522 return err; 1524 1523 } ··· 1545 1546 if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_SWITCH_LEGACY)) 1546 1547 unregister_netdev(netdev); 1547 1548 1548 - mlx5e_netdev_attach_nic_profile(priv); 1549 + mlx5e_netdev_attach_nic_profile(netdev, priv->mdev); 1549 1550 } 1550 1551 1551 1552 static int ··· 1611 1612 priv->profile->cleanup(priv); 1612 1613 1613 1614 err_destroy_netdev: 1614 - mlx5e_destroy_netdev(netdev_priv(netdev)); 1615 + mlx5e_destroy_netdev(netdev); 1615 1616 return err; 1616 1617 } 1617 1618 ··· 1666 1667 mlx5e_rep_vnic_reporter_destroy(priv); 1667 1668 mlx5e_detach_netdev(priv); 1668 1669 priv->profile->cleanup(priv); 1669 - mlx5e_destroy_netdev(priv); 1670 + mlx5e_destroy_netdev(netdev); 1670 1671 free_ppriv: 1671 1672 kvfree(ppriv); /* mlx5e_rep_priv */ 1672 1673 }
+3
drivers/net/hyperv/netvsc_drv.c
··· 1750 1750 rxfh->hfunc != ETH_RSS_HASH_TOP) 1751 1751 return -EOPNOTSUPP; 1752 1752 1753 + if (!ndc->rx_table_sz) 1754 + return -EOPNOTSUPP; 1755 + 1753 1756 rndis_dev = ndev->extension; 1754 1757 if (rxfh->indir) { 1755 1758 for (i = 0; i < ndc->rx_table_sz; i++)
+13 -7
drivers/net/macvlan.c
··· 59 59 60 60 struct macvlan_source_entry { 61 61 struct hlist_node hlist; 62 - struct macvlan_dev *vlan; 62 + struct macvlan_dev __rcu *vlan; 63 63 unsigned char addr[6+2] __aligned(sizeof(u16)); 64 64 struct rcu_head rcu; 65 65 }; ··· 146 146 147 147 hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) { 148 148 if (ether_addr_equal_64bits(entry->addr, addr) && 149 - entry->vlan == vlan) 149 + rcu_access_pointer(entry->vlan) == vlan) 150 150 return entry; 151 151 } 152 152 return NULL; ··· 168 168 return -ENOMEM; 169 169 170 170 ether_addr_copy(entry->addr, addr); 171 - entry->vlan = vlan; 171 + RCU_INIT_POINTER(entry->vlan, vlan); 172 172 h = &port->vlan_source_hash[macvlan_eth_hash(addr)]; 173 173 hlist_add_head_rcu(&entry->hlist, h); 174 174 vlan->macaddr_count++; ··· 187 187 188 188 static void macvlan_hash_del_source(struct macvlan_source_entry *entry) 189 189 { 190 + RCU_INIT_POINTER(entry->vlan, NULL); 190 191 hlist_del_rcu(&entry->hlist); 191 192 kfree_rcu(entry, rcu); 192 193 } ··· 391 390 int i; 392 391 393 392 hash_for_each_safe(port->vlan_source_hash, i, next, entry, hlist) 394 - if (entry->vlan == vlan) 393 + if (rcu_access_pointer(entry->vlan) == vlan) 395 394 macvlan_hash_del_source(entry); 396 395 397 396 vlan->macaddr_count = 0; ··· 434 433 435 434 hlist_for_each_entry_rcu(entry, h, hlist) { 436 435 if (ether_addr_equal_64bits(entry->addr, addr)) { 437 - if (entry->vlan->flags & MACVLAN_FLAG_NODST) 436 + struct macvlan_dev *vlan = rcu_dereference(entry->vlan); 437 + 438 + if (!vlan) 439 + continue; 440 + 441 + if (vlan->flags & MACVLAN_FLAG_NODST) 438 442 consume = true; 439 - macvlan_forward_source_one(skb, entry->vlan); 443 + macvlan_forward_source_one(skb, vlan); 440 444 } 441 445 } 442 446 ··· 1686 1680 struct macvlan_source_entry *entry; 1687 1681 1688 1682 hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) { 1689 - if (entry->vlan != vlan) 1683 + if (rcu_access_pointer(entry->vlan) != vlan) 1690 1684 continue; 
1691 1685 if (nla_put(skb, IFLA_MACVLAN_MACADDR, ETH_ALEN, entry->addr)) 1692 1686 return 1;
+2 -2
drivers/net/phy/motorcomm.c
··· 1741 1741 val |= YT8521_LED_1000_ON_EN; 1742 1742 1743 1743 if (test_bit(TRIGGER_NETDEV_FULL_DUPLEX, &rules)) 1744 - val |= YT8521_LED_HDX_ON_EN; 1744 + val |= YT8521_LED_FDX_ON_EN; 1745 1745 1746 1746 if (test_bit(TRIGGER_NETDEV_HALF_DUPLEX, &rules)) 1747 - val |= YT8521_LED_FDX_ON_EN; 1747 + val |= YT8521_LED_HDX_ON_EN; 1748 1748 1749 1749 if (test_bit(TRIGGER_NETDEV_TX, &rules) || 1750 1750 test_bit(TRIGGER_NETDEV_RX, &rules))
+43 -132
drivers/net/virtio_net.c
··· 425 425 u16 rss_indir_table_size; 426 426 u32 rss_hash_types_supported; 427 427 u32 rss_hash_types_saved; 428 - struct virtio_net_rss_config_hdr *rss_hdr; 429 - struct virtio_net_rss_config_trailer rss_trailer; 430 - u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE]; 431 428 432 429 /* Has control virtqueue */ 433 430 bool has_cvq; ··· 438 441 /* Packet virtio header size */ 439 442 u8 hdr_len; 440 443 441 - /* Work struct for delayed refilling if we run low on memory. */ 442 - struct delayed_work refill; 443 - 444 444 /* UDP tunnel support */ 445 445 bool tx_tnl; 446 446 447 447 bool rx_tnl; 448 448 449 449 bool rx_tnl_csum; 450 - 451 - /* Is delayed refill enabled? */ 452 - bool refill_enabled; 453 - 454 - /* The lock to synchronize the access to refill_enabled */ 455 - spinlock_t refill_lock; 456 450 457 451 /* Work struct for config space updates */ 458 452 struct work_struct config_work; ··· 481 493 struct failover *failover; 482 494 483 495 u64 device_stats_cap; 496 + 497 + struct virtio_net_rss_config_hdr *rss_hdr; 498 + 499 + /* Must be last as it ends in a flexible-array member. 
*/ 500 + TRAILING_OVERLAP(struct virtio_net_rss_config_trailer, rss_trailer, hash_key_data, 501 + u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE]; 502 + ); 484 503 }; 504 + static_assert(offsetof(struct virtnet_info, rss_trailer.hash_key_data) == 505 + offsetof(struct virtnet_info, rss_hash_key_data)); 485 506 486 507 struct padded_vnet_hdr { 487 508 struct virtio_net_hdr_v1_hash hdr; ··· 715 718 give_pages(rq, buf); 716 719 else 717 720 put_page(virt_to_head_page(buf)); 718 - } 719 - 720 - static void enable_delayed_refill(struct virtnet_info *vi) 721 - { 722 - spin_lock_bh(&vi->refill_lock); 723 - vi->refill_enabled = true; 724 - spin_unlock_bh(&vi->refill_lock); 725 - } 726 - 727 - static void disable_delayed_refill(struct virtnet_info *vi) 728 - { 729 - spin_lock_bh(&vi->refill_lock); 730 - vi->refill_enabled = false; 731 - spin_unlock_bh(&vi->refill_lock); 732 721 } 733 722 734 723 static void enable_rx_mode_work(struct virtnet_info *vi) ··· 2931 2948 napi_disable(napi); 2932 2949 } 2933 2950 2934 - static void refill_work(struct work_struct *work) 2935 - { 2936 - struct virtnet_info *vi = 2937 - container_of(work, struct virtnet_info, refill.work); 2938 - bool still_empty; 2939 - int i; 2940 - 2941 - for (i = 0; i < vi->curr_queue_pairs; i++) { 2942 - struct receive_queue *rq = &vi->rq[i]; 2943 - 2944 - /* 2945 - * When queue API support is added in the future and the call 2946 - * below becomes napi_disable_locked, this driver will need to 2947 - * be refactored. 
2948 - * 2949 - * One possible solution would be to: 2950 - * - cancel refill_work with cancel_delayed_work (note: 2951 - * non-sync) 2952 - * - cancel refill_work with cancel_delayed_work_sync in 2953 - * virtnet_remove after the netdev is unregistered 2954 - * - wrap all of the work in a lock (perhaps the netdev 2955 - * instance lock) 2956 - * - check netif_running() and return early to avoid a race 2957 - */ 2958 - napi_disable(&rq->napi); 2959 - still_empty = !try_fill_recv(vi, rq, GFP_KERNEL); 2960 - virtnet_napi_do_enable(rq->vq, &rq->napi); 2961 - 2962 - /* In theory, this can happen: if we don't get any buffers in 2963 - * we will *never* try to fill again. 2964 - */ 2965 - if (still_empty) 2966 - schedule_delayed_work(&vi->refill, HZ/2); 2967 - } 2968 - } 2969 - 2970 2951 static int virtnet_receive_xsk_bufs(struct virtnet_info *vi, 2971 2952 struct receive_queue *rq, 2972 2953 int budget, ··· 2993 3046 else 2994 3047 packets = virtnet_receive_packets(vi, rq, budget, xdp_xmit, &stats); 2995 3048 3049 + u64_stats_set(&stats.packets, packets); 2996 3050 if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) { 2997 - if (!try_fill_recv(vi, rq, GFP_ATOMIC)) { 2998 - spin_lock(&vi->refill_lock); 2999 - if (vi->refill_enabled) 3000 - schedule_delayed_work(&vi->refill, 0); 3001 - spin_unlock(&vi->refill_lock); 3002 - } 3051 + if (!try_fill_recv(vi, rq, GFP_ATOMIC)) 3052 + /* We need to retry refilling in the next NAPI poll so 3053 + * we must return budget to make sure the NAPI is 3054 + * repolled. 
3055 + */ 3056 + packets = budget; 3003 3057 } 3004 3058 3005 - u64_stats_set(&stats.packets, packets); 3006 3059 u64_stats_update_begin(&rq->stats.syncp); 3007 3060 for (i = 0; i < ARRAY_SIZE(virtnet_rq_stats_desc); i++) { 3008 3061 size_t offset = virtnet_rq_stats_desc[i].offset; ··· 3173 3226 struct virtnet_info *vi = netdev_priv(dev); 3174 3227 int i, err; 3175 3228 3176 - enable_delayed_refill(vi); 3177 - 3178 3229 for (i = 0; i < vi->max_queue_pairs; i++) { 3179 3230 if (i < vi->curr_queue_pairs) 3180 - /* Make sure we have some buffers: if oom use wq. */ 3181 - if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL)) 3182 - schedule_delayed_work(&vi->refill, 0); 3231 + /* Pre-fill rq agressively, to make sure we are ready to 3232 + * get packets immediately. 3233 + */ 3234 + try_fill_recv(vi, &vi->rq[i], GFP_KERNEL); 3183 3235 3184 3236 err = virtnet_enable_queue_pair(vi, i); 3185 3237 if (err < 0) ··· 3197 3251 return 0; 3198 3252 3199 3253 err_enable_qp: 3200 - disable_delayed_refill(vi); 3201 - cancel_delayed_work_sync(&vi->refill); 3202 - 3203 3254 for (i--; i >= 0; i--) { 3204 3255 virtnet_disable_queue_pair(vi, i); 3205 3256 virtnet_cancel_dim(vi, &vi->rq[i].dim); ··· 3375 3432 return NETDEV_TX_OK; 3376 3433 } 3377 3434 3378 - static void __virtnet_rx_pause(struct virtnet_info *vi, 3379 - struct receive_queue *rq) 3435 + static void virtnet_rx_pause(struct virtnet_info *vi, 3436 + struct receive_queue *rq) 3380 3437 { 3381 3438 bool running = netif_running(vi->dev); 3382 3439 ··· 3390 3447 { 3391 3448 int i; 3392 3449 3393 - /* 3394 - * Make sure refill_work does not run concurrently to 3395 - * avoid napi_disable race which leads to deadlock. 
3396 - */ 3397 - disable_delayed_refill(vi); 3398 - cancel_delayed_work_sync(&vi->refill); 3399 3450 for (i = 0; i < vi->max_queue_pairs; i++) 3400 - __virtnet_rx_pause(vi, &vi->rq[i]); 3451 + virtnet_rx_pause(vi, &vi->rq[i]); 3401 3452 } 3402 3453 3403 - static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq) 3454 + static void virtnet_rx_resume(struct virtnet_info *vi, 3455 + struct receive_queue *rq, 3456 + bool refill) 3404 3457 { 3405 - /* 3406 - * Make sure refill_work does not run concurrently to 3407 - * avoid napi_disable race which leads to deadlock. 3408 - */ 3409 - disable_delayed_refill(vi); 3410 - cancel_delayed_work_sync(&vi->refill); 3411 - __virtnet_rx_pause(vi, rq); 3412 - } 3458 + if (netif_running(vi->dev)) { 3459 + /* Pre-fill rq agressively, to make sure we are ready to get 3460 + * packets immediately. 3461 + */ 3462 + if (refill) 3463 + try_fill_recv(vi, rq, GFP_KERNEL); 3413 3464 3414 - static void __virtnet_rx_resume(struct virtnet_info *vi, 3415 - struct receive_queue *rq, 3416 - bool refill) 3417 - { 3418 - bool running = netif_running(vi->dev); 3419 - bool schedule_refill = false; 3420 - 3421 - if (refill && !try_fill_recv(vi, rq, GFP_KERNEL)) 3422 - schedule_refill = true; 3423 - if (running) 3424 3465 virtnet_napi_enable(rq); 3425 - 3426 - if (schedule_refill) 3427 - schedule_delayed_work(&vi->refill, 0); 3466 + } 3428 3467 } 3429 3468 3430 3469 static void virtnet_rx_resume_all(struct virtnet_info *vi) 3431 3470 { 3432 3471 int i; 3433 3472 3434 - enable_delayed_refill(vi); 3435 3473 for (i = 0; i < vi->max_queue_pairs; i++) { 3436 3474 if (i < vi->curr_queue_pairs) 3437 - __virtnet_rx_resume(vi, &vi->rq[i], true); 3475 + virtnet_rx_resume(vi, &vi->rq[i], true); 3438 3476 else 3439 - __virtnet_rx_resume(vi, &vi->rq[i], false); 3477 + virtnet_rx_resume(vi, &vi->rq[i], false); 3440 3478 } 3441 - } 3442 - 3443 - static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq) 3444 - { 3445 - 
enable_delayed_refill(vi); 3446 - __virtnet_rx_resume(vi, rq, true); 3447 3479 } 3448 3480 3449 3481 static int virtnet_rx_resize(struct virtnet_info *vi, ··· 3434 3516 if (err) 3435 3517 netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err); 3436 3518 3437 - virtnet_rx_resume(vi, rq); 3519 + virtnet_rx_resume(vi, rq, true); 3438 3520 return err; 3439 3521 } 3440 3522 ··· 3747 3829 } 3748 3830 succ: 3749 3831 vi->curr_queue_pairs = queue_pairs; 3750 - /* virtnet_open() will refill when device is going to up. */ 3751 - spin_lock_bh(&vi->refill_lock); 3752 - if (dev->flags & IFF_UP && vi->refill_enabled) 3753 - schedule_delayed_work(&vi->refill, 0); 3754 - spin_unlock_bh(&vi->refill_lock); 3832 + if (dev->flags & IFF_UP) { 3833 + local_bh_disable(); 3834 + for (int i = 0; i < vi->curr_queue_pairs; ++i) 3835 + virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq); 3836 + local_bh_enable(); 3837 + } 3755 3838 3756 3839 return 0; 3757 3840 } ··· 3762 3843 struct virtnet_info *vi = netdev_priv(dev); 3763 3844 int i; 3764 3845 3765 - /* Make sure NAPI doesn't schedule refill work */ 3766 - disable_delayed_refill(vi); 3767 - /* Make sure refill_work doesn't re-enable napi! 
*/ 3768 - cancel_delayed_work_sync(&vi->refill); 3769 3846 /* Prevent the config change callback from changing carrier 3770 3847 * after close 3771 3848 */ ··· 5717 5802 5718 5803 virtio_device_ready(vdev); 5719 5804 5720 - enable_delayed_refill(vi); 5721 5805 enable_rx_mode_work(vi); 5722 5806 5723 5807 if (netif_running(vi->dev)) { ··· 5806 5892 5807 5893 rq->xsk_pool = pool; 5808 5894 5809 - virtnet_rx_resume(vi, rq); 5895 + virtnet_rx_resume(vi, rq, true); 5810 5896 5811 5897 if (pool) 5812 5898 return 0; ··· 6473 6559 if (!vi->rq) 6474 6560 goto err_rq; 6475 6561 6476 - INIT_DELAYED_WORK(&vi->refill, refill_work); 6477 6562 for (i = 0; i < vi->max_queue_pairs; i++) { 6478 6563 vi->rq[i].pages = NULL; 6479 6564 netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll, ··· 6814 6901 6815 6902 INIT_WORK(&vi->config_work, virtnet_config_changed_work); 6816 6903 INIT_WORK(&vi->rx_mode_work, virtnet_rx_mode_work); 6817 - spin_lock_init(&vi->refill_lock); 6818 6904 6819 6905 if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) { 6820 6906 vi->mergeable_rx_bufs = true; ··· 7077 7165 net_failover_destroy(vi->failover); 7078 7166 free_vqs: 7079 7167 virtio_reset_device(vdev); 7080 - cancel_delayed_work_sync(&vi->refill); 7081 7168 free_receive_page_frags(vi); 7082 7169 virtnet_del_vqs(vi); 7083 7170 free:
+1
drivers/nvme/host/apple.c
··· 1704 1704 1705 1705 static const struct of_device_id apple_nvme_of_match[] = { 1706 1706 { .compatible = "apple,t8015-nvme-ans2", .data = &apple_nvme_t8015_hw }, 1707 + { .compatible = "apple,t8103-nvme-ans2", .data = &apple_nvme_t8103_hw }, 1707 1708 { .compatible = "apple,nvme-ans2", .data = &apple_nvme_t8103_hw }, 1708 1709 {}, 1709 1710 };
+2
drivers/nvme/host/fc.c
··· 3587 3587 3588 3588 ctrl->ctrl.opts = NULL; 3589 3589 3590 + if (ctrl->ctrl.admin_tagset) 3591 + nvme_remove_admin_tag_set(&ctrl->ctrl); 3590 3592 /* initiate nvme ctrl ref counting teardown */ 3591 3593 nvme_uninit_ctrl(&ctrl->ctrl); 3592 3594
+6 -1
drivers/nvme/host/pci.c
··· 1532 1532 } 1533 1533 1534 1534 writel(NVME_SUBSYS_RESET, dev->bar + NVME_REG_NSSR); 1535 - nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE); 1535 + 1536 + if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING) || 1537 + !nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) 1538 + goto unlock; 1536 1539 1537 1540 /* 1538 1541 * Read controller status to flush the previous write and trigger a ··· 4002 3999 .driver_data = NVME_QUIRK_NO_DEEPEST_PS, }, 4003 4000 { PCI_DEVICE(0x1e49, 0x0041), /* ZHITAI TiPro7000 NVMe SSD */ 4004 4001 .driver_data = NVME_QUIRK_NO_DEEPEST_PS, }, 4002 + { PCI_DEVICE(0x1fa0, 0x2283), /* Wodposit WPBSNM8-256GTP */ 4003 + .driver_data = NVME_QUIRK_NO_SECONDARY_TEMP_THRESH, }, 4005 4004 { PCI_DEVICE(0x025e, 0xf1ac), /* SOLIDIGM P44 pro SSDPFKKW020X7 */ 4006 4005 .driver_data = NVME_QUIRK_NO_DEEPEST_PS, }, 4007 4006 { PCI_DEVICE(0xc0a9, 0x540a), /* Crucial P2 */
+1 -1
drivers/nvme/target/passthru.c
··· 150 150 * code path with duplicate ctrl subsysnqn. In order to prevent that we 151 151 * mask the passthru-ctrl subsysnqn with the target ctrl subsysnqn. 152 152 */ 153 - memcpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn)); 153 + strscpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn)); 154 154 155 155 /* use fabric id-ctrl values */ 156 156 id->ioccsz = cpu_to_le32((sizeof(struct nvme_command) +
+16 -5
drivers/nvme/target/tcp.c
··· 982 982 pr_err("H2CData PDU len %u is invalid\n", cmd->pdu_len); 983 983 goto err_proto; 984 984 } 985 + /* 986 + * Ensure command data structures are initialized. We must check both 987 + * cmd->req.sg and cmd->iov because they can have different NULL states: 988 + * - Uninitialized commands: both NULL 989 + * - READ commands: cmd->req.sg allocated, cmd->iov NULL 990 + * - WRITE commands: both allocated 991 + */ 992 + if (unlikely(!cmd->req.sg || !cmd->iov)) { 993 + pr_err("queue %d: H2CData PDU received for invalid command state (ttag %u)\n", 994 + queue->idx, data->ttag); 995 + goto err_proto; 996 + } 985 997 cmd->pdu_recv = 0; 986 998 nvmet_tcp_build_pdu_iovec(cmd); 987 999 queue->cmd = cmd; ··· 2004 1992 2005 1993 trace_sk_data_ready(sk); 2006 1994 1995 + if (sk->sk_state != TCP_LISTEN) 1996 + return; 1997 + 2007 1998 read_lock_bh(&sk->sk_callback_lock); 2008 1999 port = sk->sk_user_data; 2009 - if (!port) 2010 - goto out; 2011 - 2012 - if (sk->sk_state == TCP_LISTEN) 2000 + if (port) 2013 2001 queue_work(nvmet_wq, &port->accept_work); 2014 - out: 2015 2002 read_unlock_bh(&sk->sk_callback_lock); 2016 2003 } 2017 2004
-6
drivers/pci/Kconfig
··· 225 225 P2P DMA transactions must be between devices behind the same root 226 226 port. 227 227 228 - Enabling this option will reduce the entropy of x86 KASLR memory 229 - regions. For example - on a 46 bit system, the entropy goes down 230 - from 16 bits to 15 bits. The actual reduction in entropy depends 231 - on the physical address bits, on processor features, kernel config 232 - (5 level page table) and physical memory present on the system. 233 - 234 228 If unsure, say N. 235 229 236 230 config PCI_LABEL
+1 -1
drivers/phy/broadcom/phy-bcm-ns-usb3.c
··· 203 203 usb3->dev = dev; 204 204 usb3->mdiodev = mdiodev; 205 205 206 - usb3->family = (enum bcm_ns_family)device_get_match_data(dev); 206 + usb3->family = (unsigned long)device_get_match_data(dev); 207 207 208 208 syscon_np = of_parse_phandle(dev->of_node, "usb3-dmp-syscon", 0); 209 209 err = of_address_to_resource(syscon_np, 0, &res);
+2 -1
drivers/phy/freescale/phy-fsl-imx8m-pcie.c
··· 89 89 writel(imx8_phy->tx_deemph_gen2, 90 90 imx8_phy->base + PCIE_PHY_TRSV_REG6); 91 91 break; 92 - case IMX8MP: /* Do nothing. */ 92 + case IMX8MP: 93 + reset_control_assert(imx8_phy->reset); 93 94 break; 94 95 } 95 96
+1 -14
drivers/phy/freescale/phy-fsl-imx8mq-usb.c
··· 126 126 static void tca_blk_orientation_set(struct tca_blk *tca, 127 127 enum typec_orientation orientation); 128 128 129 - #ifdef CONFIG_TYPEC 130 - 131 129 static int tca_blk_typec_switch_set(struct typec_switch_dev *sw, 132 130 enum typec_orientation orientation) 133 131 { ··· 172 174 { 173 175 typec_switch_unregister(sw); 174 176 } 175 - 176 - #else 177 - 178 - static struct typec_switch_dev *tca_blk_get_typec_switch(struct platform_device *pdev, 179 - struct imx8mq_usb_phy *imx_phy) 180 - { 181 - return NULL; 182 - } 183 - 184 - static void tca_blk_put_typec_switch(struct typec_switch_dev *sw) {} 185 - 186 - #endif /* CONFIG_TYPEC */ 187 177 188 178 static void tca_blk_orientation_set(struct tca_blk *tca, 189 179 enum typec_orientation orientation) ··· 490 504 491 505 if (imx_phy->pcs_tx_swing_full != PHY_TUNE_DEFAULT) { 492 506 value = readl(imx_phy->base + PHY_CTRL5); 507 + value &= ~PHY_CTRL5_PCS_TX_SWING_FULL_MASK; 493 508 value |= FIELD_PREP(PHY_CTRL5_PCS_TX_SWING_FULL_MASK, 494 509 imx_phy->pcs_tx_swing_full); 495 510 writel(value, imx_phy->base + PHY_CTRL5);
+1 -1
drivers/phy/microchip/Kconfig
··· 6 6 config PHY_SPARX5_SERDES 7 7 tristate "Microchip Sparx5 SerDes PHY driver" 8 8 select GENERIC_PHY 9 - depends on ARCH_SPARX5 || COMPILE_TEST 9 + depends on ARCH_SPARX5 || ARCH_LAN969X || COMPILE_TEST 10 10 depends on OF 11 11 depends on HAS_IOMEM 12 12 help
+8 -8
drivers/phy/qualcomm/phy-qcom-qusb2.c
··· 1093 1093 or->hsdisc_trim.override = true; 1094 1094 } 1095 1095 1096 - pm_runtime_set_active(dev); 1097 - pm_runtime_enable(dev); 1096 + dev_set_drvdata(dev, qphy); 1097 + 1098 1098 /* 1099 - * Prevent runtime pm from being ON by default. Users can enable 1100 - * it using power/control in sysfs. 1099 + * Enable runtime PM support, but forbid it by default. 1100 + * Users can allow it again via the power/control attribute in sysfs. 1101 1101 */ 1102 + pm_runtime_set_active(dev); 1102 1103 pm_runtime_forbid(dev); 1104 + ret = devm_pm_runtime_enable(dev); 1105 + if (ret) 1106 + return ret; 1103 1107 1104 1108 generic_phy = devm_phy_create(dev, NULL, &qusb2_phy_gen_ops); 1105 1109 if (IS_ERR(generic_phy)) { 1106 1110 ret = PTR_ERR(generic_phy); 1107 1111 dev_err(dev, "failed to create phy, %d\n", ret); 1108 - pm_runtime_disable(dev); 1109 1112 return ret; 1110 1113 } 1111 1114 qphy->phy = generic_phy; 1112 1115 1113 - dev_set_drvdata(dev, qphy); 1114 1116 phy_set_drvdata(generic_phy, qphy); 1115 1117 1116 1118 phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate); 1117 - if (IS_ERR(phy_provider)) 1118 - pm_runtime_disable(dev); 1119 1119 1120 1120 return PTR_ERR_OR_ZERO(phy_provider); 1121 1121 }
+9 -5
drivers/phy/rockchip/phy-rockchip-inno-usb2.c
··· 821 821 container_of(work, struct rockchip_usb2phy_port, chg_work.work); 822 822 struct rockchip_usb2phy *rphy = dev_get_drvdata(rport->phy->dev.parent); 823 823 struct regmap *base = get_reg_base(rphy); 824 - bool is_dcd, tmout, vout; 824 + bool is_dcd, tmout, vout, vbus_attach; 825 825 unsigned long delay; 826 + 827 + vbus_attach = property_enabled(rphy->grf, &rport->port_cfg->utmi_bvalid); 826 828 827 829 dev_dbg(&rport->phy->dev, "chg detection work state = %d\n", 828 830 rphy->chg_state); 829 831 switch (rphy->chg_state) { 830 832 case USB_CHG_STATE_UNDEFINED: 831 - if (!rport->suspended) 833 + if (!rport->suspended && !vbus_attach) 832 834 rockchip_usb2phy_power_off(rport->phy); 833 835 /* put the controller in non-driving mode */ 834 - property_enable(base, &rphy->phy_cfg->chg_det.opmode, false); 836 + if (!vbus_attach) 837 + property_enable(base, &rphy->phy_cfg->chg_det.opmode, false); 835 838 /* Start DCD processing stage 1 */ 836 839 rockchip_chg_enable_dcd(rphy, true); 837 840 rphy->chg_state = USB_CHG_STATE_WAIT_FOR_DCD; ··· 897 894 fallthrough; 898 895 case USB_CHG_STATE_DETECTED: 899 896 /* put the controller in normal mode */ 900 - property_enable(base, &rphy->phy_cfg->chg_det.opmode, true); 897 + if (!vbus_attach) 898 + property_enable(base, &rphy->phy_cfg->chg_det.opmode, true); 901 899 rockchip_usb2phy_otg_sm_work(&rport->otg_sm_work.work); 902 900 dev_dbg(&rport->phy->dev, "charger = %s\n", 903 901 chg_to_string(rphy->chg_type)); ··· 1495 1491 rphy); 1496 1492 if (ret) { 1497 1493 dev_err_probe(rphy->dev, ret, "failed to request usb2phy irq handle\n"); 1498 - goto put_child; 1494 + return ret; 1499 1495 } 1500 1496 } 1501 1497
+1 -1
drivers/phy/st/phy-stm32-usbphyc.c
··· 712 712 } 713 713 714 714 ret = of_property_read_u32(child, "reg", &index); 715 - if (ret || index > usbphyc->nphys) { 715 + if (ret || index >= usbphyc->nphys) { 716 716 dev_err(&phy->dev, "invalid reg property: %d\n", ret); 717 717 if (!ret) 718 718 ret = -EINVAL;
+3
drivers/phy/tegra/xusb-tegra186.c
··· 84 84 #define XUSB_PADCTL_USB2_BIAS_PAD_CTL0 0x284 85 85 #define BIAS_PAD_PD BIT(11) 86 86 #define HS_SQUELCH_LEVEL(x) (((x) & 0x7) << 0) 87 + #define HS_DISCON_LEVEL(x) (((x) & 0x7) << 3) 87 88 88 89 #define XUSB_PADCTL_USB2_BIAS_PAD_CTL1 0x288 89 90 #define USB2_TRK_START_TIMER(x) (((x) & 0x7f) << 12) ··· 624 623 value &= ~BIAS_PAD_PD; 625 624 value &= ~HS_SQUELCH_LEVEL(~0); 626 625 value |= HS_SQUELCH_LEVEL(priv->calib.hs_squelch); 626 + value &= ~HS_DISCON_LEVEL(~0); 627 + value |= HS_DISCON_LEVEL(0x7); 627 628 padctl_writel(padctl, value, XUSB_PADCTL_USB2_BIAS_PAD_CTL0); 628 629 629 630 udelay(1);
+4 -3
drivers/phy/ti/phy-da8xx-usb.c
··· 180 180 struct da8xx_usb_phy_platform_data *pdata = dev->platform_data; 181 181 struct device_node *node = dev->of_node; 182 182 struct da8xx_usb_phy *d_phy; 183 + int ret; 183 184 184 185 d_phy = devm_kzalloc(dev, sizeof(*d_phy), GFP_KERNEL); 185 186 if (!d_phy) ··· 234 233 return PTR_ERR(d_phy->phy_provider); 235 234 } 236 235 } else { 237 - int ret; 238 - 239 236 ret = phy_create_lookup(d_phy->usb11_phy, "usb-phy", 240 237 "ohci-da8xx"); 241 238 if (ret) ··· 248 249 PHY_INIT_BITS, PHY_INIT_BITS); 249 250 250 251 pm_runtime_set_active(dev); 251 - devm_pm_runtime_enable(dev); 252 + ret = devm_pm_runtime_enable(dev); 253 + if (ret) 254 + return ret; 252 255 /* 253 256 * Prevent runtime pm from being ON by default. Users can enable 254 257 * it using power/control in sysfs.
+1 -1
drivers/phy/ti/phy-gmii-sel.c
··· 512 512 return dev_err_probe(dev, PTR_ERR(base), 513 513 "failed to get base memory resource\n"); 514 514 515 - priv->regmap = regmap_init_mmio(dev, base, &phy_gmii_sel_regmap_cfg); 515 + priv->regmap = devm_regmap_init_mmio(dev, base, &phy_gmii_sel_regmap_cfg); 516 516 if (IS_ERR(priv->regmap)) 517 517 return dev_err_probe(dev, PTR_ERR(priv->regmap), 518 518 "Failed to get syscon\n");
+6 -3
drivers/resctrl/mpam_internal.h
··· 12 12 #include <linux/jump_label.h> 13 13 #include <linux/llist.h> 14 14 #include <linux/mutex.h> 15 - #include <linux/srcu.h> 16 15 #include <linux/spinlock.h> 17 16 #include <linux/srcu.h> 18 17 #include <linux/types.h> ··· 200 201 } PACKED_FOR_KUNIT; 201 202 202 203 #define mpam_has_feature(_feat, x) test_bit(_feat, (x)->features) 203 - #define mpam_set_feature(_feat, x) set_bit(_feat, (x)->features) 204 - #define mpam_clear_feature(_feat, x) clear_bit(_feat, (x)->features) 204 + /* 205 + * The non-atomic get/set operations are used because if struct mpam_props is 206 + * packed, the alignment requirements for atomics aren't met. 207 + */ 208 + #define mpam_set_feature(_feat, x) __set_bit(_feat, (x)->features) 209 + #define mpam_clear_feature(_feat, x) __clear_bit(_feat, (x)->features) 205 210 206 211 /* The values for MSMON_CFG_MBWU_FLT.RWBW */ 207 212 enum mon_filter_options {
+1 -1
drivers/scsi/bfa/bfa_fcs.c
··· 1169 1169 * This function should be used only if there is any requirement 1170 1170 * to check for FOS version below 6.3. 1171 1171 * To check if the attached fabric is a brocade fabric, use 1172 - * bfa_lps_is_brcd_fabric() which works for FOS versions 6.3 1172 + * fabric->lps->brcd_switch which works for FOS versions 6.3 1173 1173 * or above only. 1174 1174 */ 1175 1175
+24
drivers/scsi/scsi_error.c
··· 1063 1063 unsigned char *cmnd, int cmnd_size, unsigned sense_bytes) 1064 1064 { 1065 1065 struct scsi_device *sdev = scmd->device; 1066 + #ifdef CONFIG_BLK_INLINE_ENCRYPTION 1067 + struct request *rq = scsi_cmd_to_rq(scmd); 1068 + #endif 1066 1069 1067 1070 /* 1068 1071 * We need saved copies of a number of fields - this is because ··· 1118 1115 (sdev->lun << 5 & 0xe0); 1119 1116 1120 1117 /* 1118 + * Encryption must be disabled for the commands submitted by the error handler. 1119 + * Hence, clear the encryption context information. 1120 + */ 1121 + #ifdef CONFIG_BLK_INLINE_ENCRYPTION 1122 + ses->rq_crypt_keyslot = rq->crypt_keyslot; 1123 + ses->rq_crypt_ctx = rq->crypt_ctx; 1124 + 1125 + rq->crypt_keyslot = NULL; 1126 + rq->crypt_ctx = NULL; 1127 + #endif 1128 + 1129 + /* 1121 1130 * Zero the sense buffer. The scsi spec mandates that any 1122 1131 * untransferred sense data should be interpreted as being zero. 1123 1132 */ ··· 1146 1131 */ 1147 1132 void scsi_eh_restore_cmnd(struct scsi_cmnd* scmd, struct scsi_eh_save *ses) 1148 1133 { 1134 + #ifdef CONFIG_BLK_INLINE_ENCRYPTION 1135 + struct request *rq = scsi_cmd_to_rq(scmd); 1136 + #endif 1137 + 1149 1138 /* 1150 1139 * Restore original data 1151 1140 */ ··· 1162 1143 scmd->underflow = ses->underflow; 1163 1144 scmd->prot_op = ses->prot_op; 1164 1145 scmd->eh_eflags = ses->eh_eflags; 1146 + 1147 + #ifdef CONFIG_BLK_INLINE_ENCRYPTION 1148 + rq->crypt_keyslot = ses->rq_crypt_keyslot; 1149 + rq->crypt_ctx = ses->rq_crypt_ctx; 1150 + #endif 1165 1151 } 1166 1152 EXPORT_SYMBOL(scsi_eh_restore_cmnd); 1167 1153
+1 -1
drivers/scsi/scsi_lib.c
··· 2459 2459 * @retries: number of retries before failing 2460 2460 * @sshdr: output pointer for decoded sense information. 2461 2461 * 2462 - * Returns zero if unsuccessful or an error if TUR failed. For 2462 + * Returns zero if successful or an error if TUR failed. For 2463 2463 * removable media, UNIT_ATTENTION sets ->changed flag. 2464 2464 **/ 2465 2465 int
+1 -1
drivers/soundwire/bus_type.c
··· 105 105 if (ret) 106 106 return ret; 107 107 108 - ret = ida_alloc_max(&slave->bus->slave_ida, SDW_FW_MAX_DEVICES, GFP_KERNEL); 108 + ret = ida_alloc_max(&slave->bus->slave_ida, SDW_FW_MAX_DEVICES - 1, GFP_KERNEL); 109 109 if (ret < 0) { 110 110 dev_err(dev, "Failed to allocated ID: %d\n", ret); 111 111 return ret;
+1
drivers/soundwire/slave.c
··· 23 23 .release = sdw_slave_release, 24 24 .uevent = sdw_slave_uevent, 25 25 }; 26 + EXPORT_SYMBOL_GPL(sdw_slave_type); 26 27 27 28 int sdw_slave_add(struct sdw_bus *bus, 28 29 struct sdw_slave_id *id, struct fwnode_handle *fwnode)
+4 -3
drivers/ufs/core/ufshcd.c
··· 10736 10736 if (is_mcq_supported(hba)) { 10737 10737 ufshcd_mcq_enable(hba); 10738 10738 err = ufshcd_alloc_mcq(hba); 10739 - if (!err) { 10740 - ufshcd_config_mcq(hba); 10741 - } else { 10739 + if (err) { 10742 10740 /* Continue with SDB mode */ 10743 10741 ufshcd_mcq_disable(hba); 10744 10742 use_mcq_mode = false; ··· 11008 11010 err = ufshcd_link_startup(hba); 11009 11011 if (err) 11010 11012 goto out_disable; 11013 + 11014 + if (hba->mcq_enabled) 11015 + ufshcd_config_mcq(hba); 11011 11016 11012 11017 if (hba->quirks & UFSHCD_QUIRK_SKIP_PH_CONFIGURATION) 11013 11018 goto initialized;
+1 -1
drivers/ufs/host/ufs-mediatek.c
··· 1112 1112 unsigned long flags; 1113 1113 u32 ah_ms = 10; 1114 1114 u32 ah_scale, ah_timer; 1115 - u32 scale_us[] = {1, 10, 100, 1000, 10000, 100000}; 1115 + static const u32 scale_us[] = {1, 10, 100, 1000, 10000, 100000}; 1116 1116 1117 1117 if (ufshcd_is_clkgating_allowed(hba)) { 1118 1118 if (ufshcd_is_auto_hibern8_supported(hba) && hba->ahit) {
+5
drivers/usb/core/config.c
··· 1040 1040 __u8 cap_type; 1041 1041 int ret; 1042 1042 1043 + if (dev->quirks & USB_QUIRK_NO_BOS) { 1044 + dev_dbg(ddev, "skipping BOS descriptor\n"); 1045 + return -ENOMSG; 1046 + } 1047 + 1043 1048 bos = kzalloc(sizeof(*bos), GFP_KERNEL); 1044 1049 if (!bos) 1045 1050 return -ENOMEM;
+3
drivers/usb/core/quirks.c
··· 450 450 { USB_DEVICE(0x0c45, 0x7056), .driver_info = 451 451 USB_QUIRK_IGNORE_REMOTE_WAKEUP }, 452 452 453 + /* Elgato 4K X - BOS descriptor fetch hangs at SuperSpeed Plus */ 454 + { USB_DEVICE(0x0fd9, 0x009b), .driver_info = USB_QUIRK_NO_BOS }, 455 + 453 456 /* Sony Xperia XZ1 Compact (lilac) smartphone in fastboot mode */ 454 457 { USB_DEVICE(0x0fce, 0x0dde), .driver_info = USB_QUIRK_NO_LPM }, 455 458
+2
drivers/usb/dwc3/core.c
··· 993 993 994 994 reg = dwc3_readl(dwc->regs, DWC3_GSNPSID); 995 995 dwc->ip = DWC3_GSNPS_ID(reg); 996 + if (dwc->ip == DWC4_IP) 997 + dwc->ip = DWC32_IP; 996 998 997 999 /* This should read as U3 followed by revision number */ 998 1000 if (DWC3_IP_IS(DWC3)) {
+1
drivers/usb/dwc3/core.h
··· 1265 1265 #define DWC3_IP 0x5533 1266 1266 #define DWC31_IP 0x3331 1267 1267 #define DWC32_IP 0x3332 1268 + #define DWC4_IP 0x3430 1268 1269 1269 1270 u32 revision; 1270 1271
+49 -15
drivers/usb/dwc3/dwc3-apple.c
··· 218 218 return ret; 219 219 } 220 220 221 - static void dwc3_apple_phy_set_mode(struct dwc3_apple *appledwc, enum phy_mode mode) 222 - { 223 - lockdep_assert_held(&appledwc->lock); 224 - 225 - /* 226 - * This platform requires SUSPHY to be enabled here already in order to properly configure 227 - * the PHY and switch dwc3's PIPE interface to USB3 PHY. 228 - */ 229 - dwc3_enable_susphy(&appledwc->dwc, true); 230 - phy_set_mode(appledwc->dwc.usb2_generic_phy[0], mode); 231 - phy_set_mode(appledwc->dwc.usb3_generic_phy[0], mode); 232 - } 233 - 234 221 static int dwc3_apple_init(struct dwc3_apple *appledwc, enum dwc3_apple_state state) 235 222 { 236 223 int ret, ret_reset; 237 224 238 225 lockdep_assert_held(&appledwc->lock); 226 + 227 + /* 228 + * The USB2 PHY on this platform must be configured for host or device mode while it is 229 + * still powered off and before dwc3 tries to access it. Otherwise, the new configuration 230 + * will sometimes only take effect after the *next* time dwc3 is brought up, which causes 231 + * the connected device to just not work. 232 + * The USB3 PHY must be configured later, after dwc3 has already been initialized. 233 + */ 234 + switch (state) { 235 + case DWC3_APPLE_HOST: 236 + phy_set_mode(appledwc->dwc.usb2_generic_phy[0], PHY_MODE_USB_HOST); 237 + break; 238 + case DWC3_APPLE_DEVICE: 239 + phy_set_mode(appledwc->dwc.usb2_generic_phy[0], PHY_MODE_USB_DEVICE); 240 + break; 241 + default: 242 + /* Unreachable unless there's a bug in this driver */ 243 + return -EINVAL; 244 + } 239 245 240 246 ret = reset_control_deassert(appledwc->reset); 241 247 if (ret) { ··· 263 257 case DWC3_APPLE_HOST: 264 258 appledwc->dwc.dr_mode = USB_DR_MODE_HOST; 265 259 dwc3_apple_set_ptrcap(appledwc, DWC3_GCTL_PRTCAP_HOST); 266 - dwc3_apple_phy_set_mode(appledwc, PHY_MODE_USB_HOST); 260 + /* 261 + * This platform requires SUSPHY to be enabled here already in order to properly 262 + * configure the PHY and switch dwc3's PIPE interface to USB3 PHY. The USB2 PHY 263 + * has already been configured to the correct mode earlier. 264 + */ 265 + dwc3_enable_susphy(&appledwc->dwc, true); 266 + phy_set_mode(appledwc->dwc.usb3_generic_phy[0], PHY_MODE_USB_HOST); 267 267 ret = dwc3_host_init(&appledwc->dwc); 268 268 if (ret) { 269 269 dev_err(appledwc->dev, "Failed to initialize host, ret=%d\n", ret); ··· 280 268 case DWC3_APPLE_DEVICE: 281 269 appledwc->dwc.dr_mode = USB_DR_MODE_PERIPHERAL; 282 270 dwc3_apple_set_ptrcap(appledwc, DWC3_GCTL_PRTCAP_DEVICE); 283 - dwc3_apple_phy_set_mode(appledwc, PHY_MODE_USB_DEVICE); 271 + /* 272 + * This platform requires SUSPHY to be enabled here already in order to properly 273 + * configure the PHY and switch dwc3's PIPE interface to USB3 PHY. The USB2 PHY 274 + * has already been configured to the correct mode earlier. 275 + */ 276 + dwc3_enable_susphy(&appledwc->dwc, true); 277 + phy_set_mode(appledwc->dwc.usb3_generic_phy[0], PHY_MODE_USB_DEVICE); 284 278 ret = dwc3_gadget_init(&appledwc->dwc); 285 279 if (ret) { 286 280 dev_err(appledwc->dev, "Failed to initialize gadget, ret=%d\n", ret); ··· 356 338 int ret; 357 339 358 340 guard(mutex)(&appledwc->lock); 341 + 342 + /* 343 + * Skip role switches if appledwc is already in the desired state. The 344 + * USB-C port controller on M2 and M1/M2 Pro/Max/Ultra devices issues 345 + * additional interrupts which result in usb_role_switch_set_role() 346 + * calls with the current role. 347 + * Ignore those calls here to ensure the USB-C port controller and 348 + * appledwc are in a consistent state. 349 + * This matches the behaviour in __dwc3_set_mode(). 350 + * Do not handle USB_ROLE_NONE for DWC3_APPLE_NO_CABLE and 351 + * DWC3_APPLE_PROBE_PENDING since that is a no-op anyway. 352 + */ 353 + if (appledwc->state == DWC3_APPLE_HOST && role == USB_ROLE_HOST) 354 + return 0; 355 + if (appledwc->state == DWC3_APPLE_DEVICE && role == USB_ROLE_DEVICE) 356 + return 0; 359 357 360 358 /* 361 359 * We need to tear all of dwc3 down and re-initialize it every time a cable is
+4
drivers/usb/gadget/function/f_uvc.c
··· 362 362 return ret; 363 363 usb_ep_enable(uvc->video.ep); 364 364 365 + uvc->video.max_req_size = uvc->video.ep->maxpacket 366 + * max_t(unsigned int, uvc->video.ep->maxburst, 1) 367 + * (uvc->video.ep->mult); 368 + 365 369 memset(&v4l2_event, 0, sizeof(v4l2_event)); 366 370 v4l2_event.type = UVC_EVENT_STREAMON; 367 371 v4l2_event_queue(&uvc->vdev, &v4l2_event);
+2 -1
drivers/usb/gadget/function/uvc.h
··· 107 107 unsigned int width; 108 108 unsigned int height; 109 109 unsigned int imagesize; 110 - unsigned int interval; 110 + unsigned int interval; /* in 100ns units */ 111 111 struct mutex mutex; /* protects frame parameters */ 112 112 113 113 unsigned int uvc_num_requests; ··· 117 117 /* Requests */ 118 118 bool is_enabled; /* tracks whether video stream is enabled */ 119 119 unsigned int req_size; 120 + unsigned int max_req_size; 120 121 struct list_head ureqs; /* all uvc_requests allocated by uvc_video */ 121 122 122 123 /* USB requests that the video pump thread can encode into */
+19 -4
drivers/usb/gadget/function/uvc_queue.c
··· 86 86 buf->bytesused = 0; 87 87 } else { 88 88 buf->bytesused = vb2_get_plane_payload(vb, 0); 89 - buf->req_payload_size = 90 - DIV_ROUND_UP(buf->bytesused + 91 - (video->reqs_per_frame * UVCG_REQUEST_HEADER_LEN), 92 - video->reqs_per_frame); 89 + 90 + if (video->reqs_per_frame != 0) { 91 + buf->req_payload_size = 92 + DIV_ROUND_UP(buf->bytesused + 93 + (video->reqs_per_frame * UVCG_REQUEST_HEADER_LEN), 94 + video->reqs_per_frame); 95 + if (buf->req_payload_size > video->req_size) 96 + buf->req_payload_size = video->req_size; 97 + } else { 98 + buf->req_payload_size = video->max_req_size; 99 + } 93 100 } 94 101 95 102 return 0; ··· 182 175 { 183 176 int ret; 184 177 178 + retry: 185 179 ret = vb2_reqbufs(&queue->queue, rb); 180 + if (ret < 0 && queue->use_sg) { 181 + uvc_trace(UVC_TRACE_IOCTL, 182 + "failed to alloc buffer with sg enabled, try non-sg mode\n"); 183 + queue->use_sg = 0; 184 + queue->queue.mem_ops = &vb2_vmalloc_memops; 185 + goto retry; 186 + } 186 187 187 188 return ret ? ret : rb->count; 188 189 }
+7 -7
drivers/usb/gadget/function/uvc_video.c
··· 499 499 { 500 500 struct uvc_device *uvc = container_of(video, struct uvc_device, video); 501 501 struct usb_composite_dev *cdev = uvc->func.config->cdev; 502 - unsigned int interval_duration = video->ep->desc->bInterval * 1250; 502 + unsigned int interval_duration; 503 503 unsigned int max_req_size, req_size, header_size; 504 504 unsigned int nreq; 505 505 506 - max_req_size = video->ep->maxpacket 507 - * max_t(unsigned int, video->ep->maxburst, 1) 508 - * (video->ep->mult); 506 + max_req_size = video->max_req_size; 509 507 510 508 if (!usb_endpoint_xfer_isoc(video->ep->desc)) { 511 509 video->req_size = max_req_size; ··· 513 515 return; 514 516 } 515 517 518 + interval_duration = 2 << (video->ep->desc->bInterval - 1); 516 519 if (cdev->gadget->speed < USB_SPEED_HIGH) 517 - interval_duration = video->ep->desc->bInterval * 10000; 520 + interval_duration *= 10000; 521 + else 522 + interval_duration *= 1250; 518 523 519 524 nreq = DIV_ROUND_UP(video->interval, interval_duration); 520 525 ··· 838 837 video->interval = 666666; 839 838 840 839 /* Initialize the video buffers queue. */ 841 - uvcg_queue_init(&video->queue, uvc->v4l2_dev.dev->parent, 840 + return uvcg_queue_init(&video->queue, uvc->v4l2_dev.dev->parent, 842 841 V4L2_BUF_TYPE_VIDEO_OUTPUT, &video->mutex); 843 - return 0; 844 842 }
+1
drivers/usb/host/ohci-platform.c
··· 392 392 MODULE_AUTHOR("Hauke Mehrtens"); 393 393 MODULE_AUTHOR("Alan Stern"); 394 394 MODULE_LICENSE("GPL"); 395 + MODULE_SOFTDEP("pre: ehci_platform");
+1
drivers/usb/host/uhci-platform.c
··· 211 211 .of_match_table = platform_uhci_ids, 212 212 }, 213 213 }; 214 + MODULE_SOFTDEP("pre: ehci_platform");
-1
drivers/usb/host/xhci-sideband.c
··· 210 210 return -ENODEV; 211 211 212 212 __xhci_sideband_remove_endpoint(sb, ep); 213 - xhci_initialize_ring_info(ep->ring); 214 213 215 214 return 0; 216 215 }
+1 -1
drivers/usb/host/xhci-tegra.c
··· 1563 1563 for (i = 0; i < tegra->soc->max_num_wakes; i++) { 1564 1564 struct irq_data *data; 1565 1565 1566 - tegra->wake_irqs[i] = platform_get_irq(pdev, i + WAKE_IRQ_START_INDEX); 1566 + tegra->wake_irqs[i] = platform_get_irq_optional(pdev, i + WAKE_IRQ_START_INDEX); 1567 1567 if (tegra->wake_irqs[i] < 0) 1568 1568 break; 1569 1569
+12 -3
drivers/usb/host/xhci.c
··· 2898 2898 gfp_t gfp_flags) 2899 2899 { 2900 2900 struct xhci_command *command; 2901 + struct xhci_ep_ctx *ep_ctx; 2901 2902 unsigned long flags; 2902 - int ret; 2903 + int ret = -ENODEV; 2903 2904 2904 2905 command = xhci_alloc_command(xhci, true, gfp_flags); 2905 2906 if (!command) 2906 2907 return -ENOMEM; 2907 2908 2908 2909 spin_lock_irqsave(&xhci->lock, flags); 2909 - ret = xhci_queue_stop_endpoint(xhci, command, ep->vdev->slot_id, 2910 - ep->ep_index, suspend); 2910 + 2911 + /* make sure endpoint exists and is running before stopping it */ 2912 + if (ep->ring) { 2913 + ep_ctx = xhci_get_ep_ctx(xhci, ep->vdev->out_ctx, ep->ep_index); 2914 + if (GET_EP_CTX_STATE(ep_ctx) == EP_STATE_RUNNING) 2915 + ret = xhci_queue_stop_endpoint(xhci, command, 2916 + ep->vdev->slot_id, 2917 + ep->ep_index, suspend); 2918 + } 2919 + 2911 2920 if (ret < 0) { 2912 2921 spin_unlock_irqrestore(&xhci->lock, flags); 2913 2922 goto out;
+47 -30
drivers/usb/serial/f81232.c
··· 70 70 #define F81232_REGISTER_REQUEST 0xa0 71 71 #define F81232_GET_REGISTER 0xc0 72 72 #define F81232_SET_REGISTER 0x40 73 - #define F81534A_ACCESS_REG_RETRY 2 74 73 75 74 #define SERIAL_BASE_ADDRESS 0x0120 76 75 #define RECEIVE_BUFFER_REGISTER (0x00 + SERIAL_BASE_ADDRESS) ··· 823 824 static int f81534a_ctrl_set_register(struct usb_interface *intf, u16 reg, 824 825 u16 size, void *val) 825 826 { 826 - struct usb_device *dev = interface_to_usbdev(intf); 827 - int retry = F81534A_ACCESS_REG_RETRY; 828 - int status; 827 + return usb_control_msg_send(interface_to_usbdev(intf), 828 + 0, 829 + F81232_REGISTER_REQUEST, 830 + F81232_SET_REGISTER, 831 + reg, 832 + 0, 833 + val, 834 + size, 835 + USB_CTRL_SET_TIMEOUT, 836 + GFP_KERNEL); 837 + } 829 838 830 - while (retry--) { 831 - status = usb_control_msg_send(dev, 832 - 0, 833 - F81232_REGISTER_REQUEST, 834 - F81232_SET_REGISTER, 835 - reg, 836 - 0, 837 - val, 838 - size, 839 - USB_CTRL_SET_TIMEOUT, 840 - GFP_KERNEL); 841 - if (status) { 842 - status = usb_translate_errors(status); 843 - if (status == -EIO) 844 - continue; 845 - } 846 - 847 - break; 848 - } 849 - 850 - if (status) { 851 - dev_err(&intf->dev, "failed to set register 0x%x: %d\n", 852 - reg, status); 853 - } 854 - 855 - return status; 839 + static int f81534a_ctrl_get_register(struct usb_interface *intf, u16 reg, 840 + u16 size, void *val) 841 + { 842 + return usb_control_msg_recv(interface_to_usbdev(intf), 843 + 0, 844 + F81232_REGISTER_REQUEST, 845 + F81232_GET_REGISTER, 846 + reg, 847 + 0, 848 + val, 849 + size, 850 + USB_CTRL_GET_TIMEOUT, 851 + GFP_KERNEL); 856 852 } 857 853 858 854 static int f81534a_ctrl_enable_all_ports(struct usb_interface *intf, bool en) ··· 863 869 * bit 0~11 : Serial port enable bit. 
864 870 */ 865 871 if (en) { 872 + /* 873 + * The Fintek F81532A/534A/535/536 family relies on the 874 + * F81534A_CTRL_CMD_ENABLE_PORT (116h) register during 875 + * initialization to both determine serial port status and 876 + * control port creation. 877 + * 878 + * If the driver experiences fast load/unload cycles, the 879 + * device state may become unstable, resulting in the 880 + * incomplete generation of serial ports. 881 + * 882 + * Performing a dummy read operation on the register prior 883 + * to the initial write command resolves the issue. 884 + * 885 + * This clears the device's stale internal state. Subsequent 886 + * write operations will correctly generate all serial ports. 887 + */ 888 + status = f81534a_ctrl_get_register(intf, 889 + F81534A_CTRL_CMD_ENABLE_PORT, 890 + sizeof(enable), 891 + enable); 892 + if (status) 893 + return status; 894 + 866 895 enable[0] = 0xff; 867 896 enable[1] = 0x8f; 868 897 }
+1
drivers/usb/serial/ftdi_sio.c
··· 848 848 { USB_DEVICE_INTERFACE_NUMBER(FTDI_VID, LMI_LM3S_DEVEL_BOARD_PID, 1) }, 849 849 { USB_DEVICE_INTERFACE_NUMBER(FTDI_VID, LMI_LM3S_EVAL_BOARD_PID, 1) }, 850 850 { USB_DEVICE_INTERFACE_NUMBER(FTDI_VID, LMI_LM3S_ICDI_BOARD_PID, 1) }, 851 + { USB_DEVICE(FTDI_VID, FTDI_AXE027_PID) }, 851 852 { USB_DEVICE_INTERFACE_NUMBER(FTDI_VID, FTDI_TURTELIZER_PID, 1) }, 852 853 { USB_DEVICE(RATOC_VENDOR_ID, RATOC_PRODUCT_ID_USB60F) }, 853 854 { USB_DEVICE(RATOC_VENDOR_ID, RATOC_PRODUCT_ID_SCU18) },
+2
drivers/usb/serial/ftdi_sio_ids.h
··· 96 96 #define LMI_LM3S_EVAL_BOARD_PID 0xbcd9 97 97 #define LMI_LM3S_ICDI_BOARD_PID 0xbcda 98 98 99 + #define FTDI_AXE027_PID 0xBD90 /* PICAXE AXE027 USB download cable */ 100 + 99 101 #define FTDI_TURTELIZER_PID 0xBDC8 /* JTAG/RS-232 adapter by egnite GmbH */ 100 102 101 103 /* OpenDCC (www.opendcc.de) product id */
+1
drivers/usb/serial/option.c
··· 1505 1505 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1231, 0xff), /* Telit LE910Cx (RNDIS) */ 1506 1506 .driver_info = NCTRL(2) | RSVD(3) }, 1507 1507 { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x1250, 0xff, 0x00, 0x00) }, /* Telit LE910Cx (rmnet) */ 1508 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1252, 0xff) }, /* Telit LE910Cx (MBIM) */ 1508 1509 { USB_DEVICE(TELIT_VENDOR_ID, 0x1260), 1509 1510 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1510 1511 { USB_DEVICE(TELIT_VENDOR_ID, 0x1261),
+1 -1
drivers/usb/typec/tcpm/tcpm.c
··· 7890 7890 port->partner_desc.identity = &port->partner_ident; 7891 7891 7892 7892 port->role_sw = fwnode_usb_role_switch_get(tcpc->fwnode); 7893 - if (!port->role_sw) 7893 + if (IS_ERR_OR_NULL(port->role_sw)) 7894 7894 port->role_sw = usb_role_switch_get(port->dev); 7895 7895 if (IS_ERR(port->role_sw)) { 7896 7896 err = PTR_ERR(port->role_sw);
+5 -1
fs/btrfs/Kconfig
··· 115 115 116 116 - extent tree v2 - complex rework of extent tracking 117 117 118 - - large folio support 118 + - large folio and block size (> page size) support 119 + 120 + - shutdown ioctl and auto-degradation support 121 + 122 + - asynchronous checksum generation for data writes 119 123 120 124 If unsure, say N.
+9
fs/btrfs/inode.c
··· 4180 4180 4181 4181 return 0; 4182 4182 out: 4183 + /* 4184 + * We may have a read locked leaf and iget_failed() triggers inode 4185 + * eviction, which needs to release the delayed inode and that needs 4186 + * to lock the delayed inode's mutex. This can cause an ABBA deadlock 4187 + * with a task running delayed items, as that requires first locking 4188 + * the delayed inode's mutex and then modifying its subvolume btree. 4189 + * So release the path before iget_failed(). 4190 + */ 4191 + btrfs_release_path(path); 4183 4192 iget_failed(vfs_inode); 4184 4193 return ret; 4185 4194 }
+13 -10
fs/btrfs/reflink.c
··· 705 705 struct inode *src = file_inode(file_src); 706 706 struct btrfs_fs_info *fs_info = inode_to_fs_info(inode); 707 707 int ret; 708 - int wb_ret; 709 708 u64 len = olen; 710 709 u64 bs = fs_info->sectorsize; 711 710 u64 end; ··· 749 750 btrfs_lock_extent(&BTRFS_I(inode)->io_tree, destoff, end, &cached_state); 750 751 ret = btrfs_clone(src, inode, off, olen, len, destoff, 0); 751 752 btrfs_unlock_extent(&BTRFS_I(inode)->io_tree, destoff, end, &cached_state); 753 + if (ret < 0) 754 + return ret; 752 755 753 756 /* 754 757 * We may have copied an inline extent into a page of the destination 755 - * range, so wait for writeback to complete before truncating pages 758 + * range, so wait for writeback to complete before invalidating pages 756 759 * from the page cache. This is a rare case. 757 760 */ 758 - wb_ret = btrfs_wait_ordered_range(BTRFS_I(inode), destoff, len); 759 - ret = ret ? ret : wb_ret; 761 + ret = btrfs_wait_ordered_range(BTRFS_I(inode), destoff, len); 762 + if (ret < 0) 763 + return ret; 764 + 760 765 /* 761 - * Truncate page cache pages so that future reads will see the cloned 762 - * data immediately and not the previous data. 766 + * Invalidate page cache so that future reads will see the cloned data 767 + * immediately and not the previous data. 763 768 */ 764 - truncate_inode_pages_range(&inode->i_data, 765 - round_down(destoff, PAGE_SIZE), 766 - round_up(destoff + len, PAGE_SIZE) - 1); 769 + ret = filemap_invalidate_inode(inode, false, destoff, end); 770 + if (ret < 0) 771 + return ret; 767 772 768 773 btrfs_btree_balance_dirty(fs_info); 769 774 770 - return ret; 775 + return 0; 771 776 } 772 777 773 778 static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+2
fs/btrfs/send.c
··· 6383 6383 extent_end = btrfs_file_extent_end(path); 6384 6384 if (extent_end <= start) 6385 6385 goto next; 6386 + if (btrfs_file_extent_type(leaf, fi) == BTRFS_FILE_EXTENT_INLINE) 6387 + return 0; 6386 6388 if (btrfs_file_extent_disk_bytenr(leaf, fi) == 0) { 6387 6389 search_start = extent_end; 6388 6390 goto next;
+6 -2
fs/btrfs/space-info.c
··· 306 306 0); 307 307 308 308 if (ret) 309 - return ret; 309 + goto out_free; 310 310 } 311 311 312 312 ret = btrfs_sysfs_add_space_info_type(space_info); 313 313 if (ret) 314 - return ret; 314 + goto out_free; 315 315 316 316 list_add(&space_info->list, &info->space_info); 317 317 if (flags & BTRFS_BLOCK_GROUP_DATA) 318 318 info->data_sinfo = space_info; 319 319 320 + return ret; 321 + 322 + out_free: 323 + kfree(space_info); 320 324 return ret; 321 325 } 322 326
-52
fs/btrfs/sysfs.c
··· 26 26 #include "misc.h" 27 27 #include "fs.h" 28 28 #include "accessors.h" 29 - #include "zoned.h" 30 29 31 30 /* 32 31 * Structure name Path ··· 1188 1189 } 1189 1190 BTRFS_ATTR_RW(, commit_stats, btrfs_commit_stats_show, btrfs_commit_stats_store); 1190 1191 1191 - static ssize_t btrfs_zoned_stats_show(struct kobject *kobj, 1192 - struct kobj_attribute *a, char *buf) 1193 - { 1194 - struct btrfs_fs_info *fs_info = to_fs_info(kobj); 1195 - struct btrfs_block_group *bg; 1196 - size_t ret = 0; 1197 - 1198 - 1199 - if (!btrfs_is_zoned(fs_info)) 1200 - return ret; 1201 - 1202 - spin_lock(&fs_info->zone_active_bgs_lock); 1203 - ret += sysfs_emit_at(buf, ret, "active block-groups: %zu\n", 1204 - list_count_nodes(&fs_info->zone_active_bgs)); 1205 - spin_unlock(&fs_info->zone_active_bgs_lock); 1206 - 1207 - mutex_lock(&fs_info->reclaim_bgs_lock); 1208 - spin_lock(&fs_info->unused_bgs_lock); 1209 - ret += sysfs_emit_at(buf, ret, "\treclaimable: %zu\n", 1210 - list_count_nodes(&fs_info->reclaim_bgs)); 1211 - ret += sysfs_emit_at(buf, ret, "\tunused: %zu\n", 1212 - list_count_nodes(&fs_info->unused_bgs)); 1213 - spin_unlock(&fs_info->unused_bgs_lock); 1214 - mutex_unlock(&fs_info->reclaim_bgs_lock); 1215 - 1216 - ret += sysfs_emit_at(buf, ret, "\tneed reclaim: %s\n", 1217 - str_true_false(btrfs_zoned_should_reclaim(fs_info))); 1218 - 1219 - if (fs_info->data_reloc_bg) 1220 - ret += sysfs_emit_at(buf, ret, 1221 - "data relocation block-group: %llu\n", 1222 - fs_info->data_reloc_bg); 1223 - if (fs_info->treelog_bg) 1224 - ret += sysfs_emit_at(buf, ret, 1225 - "tree-log block-group: %llu\n", 1226 - fs_info->treelog_bg); 1227 - 1228 - spin_lock(&fs_info->zone_active_bgs_lock); 1229 - ret += sysfs_emit_at(buf, ret, "active zones:\n"); 1230 - list_for_each_entry(bg, &fs_info->zone_active_bgs, active_bg_list) { 1231 - ret += sysfs_emit_at(buf, ret, 1232 - "\tstart: %llu, wp: %llu used: %llu, reserved: %llu, unusable: %llu\n", 1233 - bg->start, bg->alloc_offset, bg->used, 1234 - 
bg->reserved, bg->zone_unusable); 1235 - } 1236 - spin_unlock(&fs_info->zone_active_bgs_lock); 1237 - return ret; 1238 - } 1239 - BTRFS_ATTR(, zoned_stats, btrfs_zoned_stats_show); 1240 - 1241 1192 static ssize_t btrfs_clone_alignment_show(struct kobject *kobj, 1242 1193 struct kobj_attribute *a, char *buf) 1243 1194 { ··· 1600 1651 BTRFS_ATTR_PTR(, bg_reclaim_threshold), 1601 1652 BTRFS_ATTR_PTR(, commit_stats), 1602 1653 BTRFS_ATTR_PTR(, temp_fsid), 1603 - BTRFS_ATTR_PTR(, zoned_stats), 1604 1654 #ifdef CONFIG_BTRFS_EXPERIMENTAL 1605 1655 BTRFS_ATTR_PTR(, offload_csum), 1606 1656 #endif
+3
fs/btrfs/tests/extent-map-tests.c
··· 1059 1059 1060 1060 if (out_stripe_len != BTRFS_STRIPE_LEN) { 1061 1061 test_err("calculated stripe length doesn't match"); 1062 + ret = -EINVAL; 1062 1063 goto out; 1063 1064 } 1064 1065 ··· 1067 1066 for (i = 0; i < out_ndaddrs; i++) 1068 1067 test_msg("mapped %llu", logical[i]); 1069 1068 test_err("unexpected number of mapped addresses: %d", out_ndaddrs); 1069 + ret = -EINVAL; 1070 1070 goto out; 1071 1071 } 1072 1072 1073 1073 for (i = 0; i < out_ndaddrs; i++) { 1074 1074 if (logical[i] != test->mapped_logical[i]) { 1075 1075 test_err("unexpected logical address mapped"); 1076 + ret = -EINVAL; 1076 1077 goto out; 1077 1078 } 1078 1079 }
+3 -3
fs/btrfs/tests/qgroup-tests.c
··· 517 517 tmp_root->root_key.objectid = BTRFS_FS_TREE_OBJECTID; 518 518 root->fs_info->fs_root = tmp_root; 519 519 ret = btrfs_insert_fs_root(root->fs_info, tmp_root); 520 + btrfs_put_root(tmp_root); 520 521 if (ret) { 521 522 test_err("couldn't insert fs root %d", ret); 522 523 goto out; 523 524 } 524 - btrfs_put_root(tmp_root); 525 525 526 526 tmp_root = btrfs_alloc_dummy_root(fs_info); 527 527 if (IS_ERR(tmp_root)) { ··· 532 532 533 533 tmp_root->root_key.objectid = BTRFS_FIRST_FREE_OBJECTID; 534 534 ret = btrfs_insert_fs_root(root->fs_info, tmp_root); 535 + btrfs_put_root(tmp_root); 535 536 if (ret) { 536 - test_err("couldn't insert fs root %d", ret); 537 + test_err("couldn't insert subvolume root %d", ret); 537 538 goto out; 538 539 } 539 - btrfs_put_root(tmp_root); 540 540 541 541 test_msg("running qgroup tests"); 542 542 ret = test_no_shared_qgroup(root, sectorsize, nodesize);
+2
fs/ext4/move_extent.c
··· 393 393 394 394 repair_branches: 395 395 ret2 = 0; 396 + ext4_double_down_write_data_sem(orig_inode, donor_inode); 396 397 r_len = ext4_swap_extents(handle, donor_inode, orig_inode, 397 398 mext->donor_lblk, orig_map->m_lblk, 398 399 *m_len, 0, &ret2); 400 + ext4_double_up_write_data_sem(orig_inode, donor_inode); 399 401 if (ret2 || r_len != *m_len) { 400 402 ext4_error_inode_block(orig_inode, (sector_t)(orig_map->m_lblk), 401 403 EIO, "Unable to copy data block, data will be lost!");
+1
fs/ext4/xattr.c
··· 1037 1037 ext4_error_inode(ea_inode, __func__, __LINE__, 0, 1038 1038 "EA inode %lu ref wraparound: ref_count=%lld ref_change=%d", 1039 1039 ea_inode->i_ino, ref_count, ref_change); 1040 + brelse(iloc.bh); 1040 1041 ret = -EFSCORRUPTED; 1041 1042 goto out; 1042 1043 }
+1 -1
fs/gfs2/lops.c
··· 484 484 new = bio_alloc(prev->bi_bdev, nr_iovecs, prev->bi_opf, GFP_NOIO); 485 485 bio_clone_blkg_association(new, prev); 486 486 new->bi_iter.bi_sector = bio_end_sector(prev); 487 - bio_chain(prev, new); 487 + bio_chain(new, prev); 488 488 submit_bio(prev); 489 489 return new; 490 490 }
+4 -2
fs/nfs/blocklayout/dev.c
··· 417 417 d->map = bl_map_simple; 418 418 d->pr_key = v->scsi.pr_key; 419 419 420 - if (d->len == 0) 421 - return -ENODEV; 420 + if (d->len == 0) { 421 + error = -ENODEV; 422 + goto out_blkdev_put; 423 + } 422 424 423 425 ops = bdev->bd_disk->fops->pr_ops; 424 426 if (!ops) {
+6 -1
fs/nfs/delegation.c
··· 149 149 int nfs4_have_delegation(struct inode *inode, fmode_t type, int flags) 150 150 { 151 151 if (S_ISDIR(inode->i_mode) && !directory_delegations) 152 - nfs_inode_evict_delegation(inode); 152 + nfs4_inode_set_return_delegation_on_close(inode); 153 153 return nfs4_do_check_delegation(inode, type, flags, true); 154 154 } 155 155 ··· 581 581 if (delegation == NULL) 582 582 return 0; 583 583 584 + /* Directory delegations don't require any state recovery */ 585 + if (!S_ISREG(inode->i_mode)) 586 + goto out_return; 587 + 584 588 if (!issync) 585 589 mode |= O_NONBLOCK; 586 590 /* Recall of any remaining application leases */ ··· 608 604 goto out; 609 605 } 610 606 607 + out_return: 611 608 err = nfs_do_return_delegation(inode, delegation, issync); 612 609 out: 613 610 /* Refcount matched in nfs_start_delegation_return_locked() */
+51 -27
fs/nfs/dir.c
··· 1440 1440 1441 1441 if (!dir || !nfs_verify_change_attribute(dir, verf)) 1442 1442 return; 1443 - if (inode && NFS_PROTO(inode)->have_delegation(inode, FMODE_READ, 0)) 1443 + if (NFS_PROTO(dir)->have_delegation(dir, FMODE_READ, 0) || 1444 + (inode && NFS_PROTO(inode)->have_delegation(inode, FMODE_READ, 0))) 1444 1445 nfs_set_verifier_delegated(&verf); 1445 1446 dentry->d_time = verf; 1446 1447 } ··· 1466 1465 EXPORT_SYMBOL_GPL(nfs_set_verifier); 1467 1466 1468 1467 #if IS_ENABLED(CONFIG_NFS_V4) 1468 + static void nfs_clear_verifier_file(struct inode *inode) 1469 + { 1470 + struct dentry *alias; 1471 + struct inode *dir; 1472 + 1473 + hlist_for_each_entry(alias, &inode->i_dentry, d_u.d_alias) { 1474 + spin_lock(&alias->d_lock); 1475 + dir = d_inode_rcu(alias->d_parent); 1476 + if (!dir || 1477 + !NFS_PROTO(dir)->have_delegation(dir, FMODE_READ, 0)) 1478 + nfs_unset_verifier_delegated(&alias->d_time); 1479 + spin_unlock(&alias->d_lock); 1480 + } 1481 + } 1482 + 1483 + static void nfs_clear_verifier_directory(struct inode *dir) 1484 + { 1485 + struct dentry *this_parent; 1486 + struct dentry *dentry; 1487 + struct inode *inode; 1488 + 1489 + if (hlist_empty(&dir->i_dentry)) 1490 + return; 1491 + this_parent = 1492 + hlist_entry(dir->i_dentry.first, struct dentry, d_u.d_alias); 1493 + 1494 + spin_lock(&this_parent->d_lock); 1495 + nfs_unset_verifier_delegated(&this_parent->d_time); 1496 + dentry = d_first_child(this_parent); 1497 + hlist_for_each_entry_from(dentry, d_sib) { 1498 + if (unlikely(dentry->d_flags & DCACHE_DENTRY_CURSOR)) 1499 + continue; 1500 + inode = d_inode_rcu(dentry); 1501 + if (inode && 1502 + NFS_PROTO(inode)->have_delegation(inode, FMODE_READ, 0)) 1503 + continue; 1504 + spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED); 1505 + nfs_unset_verifier_delegated(&dentry->d_time); 1506 + spin_unlock(&dentry->d_lock); 1507 + } 1508 + spin_unlock(&this_parent->d_lock); 1509 + } 1510 + 1469 1511 /** 1470 1512 * nfs_clear_verifier_delegated - clear 
the dir verifier delegation tag 1471 1513 * @inode: pointer to inode ··· 1521 1477 */ 1522 1478 void nfs_clear_verifier_delegated(struct inode *inode) 1523 1479 { 1524 - struct dentry *alias; 1525 - 1526 1480 if (!inode) 1527 1481 return; 1528 1482 spin_lock(&inode->i_lock); 1529 - hlist_for_each_entry(alias, &inode->i_dentry, d_u.d_alias) { 1530 - spin_lock(&alias->d_lock); 1531 - nfs_unset_verifier_delegated(&alias->d_time); 1532 - spin_unlock(&alias->d_lock); 1533 - } 1483 + if (S_ISREG(inode->i_mode)) 1484 + nfs_clear_verifier_file(inode); 1485 + else if (S_ISDIR(inode->i_mode)) 1486 + nfs_clear_verifier_directory(inode); 1534 1487 spin_unlock(&inode->i_lock); 1535 1488 } 1536 1489 EXPORT_SYMBOL_GPL(nfs_clear_verifier_delegated); ··· 1555 1514 if (NFS_SERVER(dir)->flags & NFS_MOUNT_LOOKUP_CACHE_NONE) 1556 1515 return 0; 1557 1516 if (!nfs_dentry_verify_change(dir, dentry)) 1558 - return 0; 1559 - 1560 - /* 1561 - * If we have a directory delegation then we don't need to revalidate 1562 - * the directory. The delegation will either get recalled or we will 1563 - * receive a notification when it changes. 
1564 - */ 1565 - if (nfs_have_directory_delegation(dir)) 1566 1517 return 0; 1567 1518 1568 1519 /* Revalidate nfsi->cache_change_attribute before we declare a match */ ··· 2250 2217 EXPORT_SYMBOL_GPL(nfs_atomic_open); 2251 2218 2252 2219 static int 2253 - nfs_lookup_revalidate_delegated_parent(struct inode *dir, struct dentry *dentry, 2254 - struct inode *inode) 2255 - { 2256 - return nfs_lookup_revalidate_done(dir, dentry, inode, 1); 2257 - } 2258 - 2259 - static int 2260 2220 nfs4_lookup_revalidate(struct inode *dir, const struct qstr *name, 2261 2221 struct dentry *dentry, unsigned int flags) 2262 2222 { ··· 2273 2247 if (inode == NULL) 2274 2248 goto full_reval; 2275 2249 2276 - if (nfs_verifier_is_delegated(dentry)) 2250 + if (nfs_verifier_is_delegated(dentry) || 2251 + nfs_have_directory_delegation(inode)) 2277 2252 return nfs_lookup_revalidate_delegated(dir, dentry, inode); 2278 - 2279 - if (nfs_have_directory_delegation(dir)) 2280 - return nfs_lookup_revalidate_delegated_parent(dir, dentry, inode); 2281 2253 2282 2254 /* NFS only supports OPEN on regular files */ 2283 2255 if (!S_ISREG(inode->i_mode))
+2 -1
fs/nfs/file.c
··· 511 511 if ((current_gfp_context(gfp) & GFP_KERNEL) != GFP_KERNEL || 512 512 current_is_kswapd() || current_is_kcompactd()) 513 513 return false; 514 - if (nfs_wb_folio(folio->mapping->host, folio) < 0) 514 + if (nfs_wb_folio_reclaim(folio->mapping->host, folio) < 0 || 515 + folio_test_private(folio)) 515 516 return false; 516 517 } 517 518 return nfs_fscache_release_folio(folio, gfp);
+1 -1
fs/nfs/flexfilelayout/flexfilelayoutdev.c
··· 103 103 sizeof(struct nfs4_ff_ds_version), 104 104 gfp_flags); 105 105 if (!ds_versions) 106 - goto out_scratch; 106 + goto out_err_drain_dsaddrs; 107 107 108 108 for (i = 0; i < version_count; i++) { 109 109 /* 20 = version(4) + minor_version(4) + rsize(4) + wsize(4) +
+6 -4
fs/nfs/inode.c
··· 716 716 { 717 717 struct inode *inode = d_inode(dentry); 718 718 struct nfs_fattr *fattr; 719 - loff_t oldsize = i_size_read(inode); 719 + loff_t oldsize; 720 720 int error = 0; 721 721 kuid_t task_uid = current_fsuid(); 722 722 kuid_t owner_uid = inode->i_uid; ··· 727 727 if (attr->ia_valid & (ATTR_KILL_SUID | ATTR_KILL_SGID)) 728 728 attr->ia_valid &= ~ATTR_MODE; 729 729 730 + if (S_ISREG(inode->i_mode)) 731 + nfs_file_block_o_direct(NFS_I(inode)); 732 + 733 + oldsize = i_size_read(inode); 730 734 if (attr->ia_valid & ATTR_SIZE) { 731 735 BUG_ON(!S_ISREG(inode->i_mode)); 732 736 ··· 778 774 trace_nfs_setattr_enter(inode); 779 775 780 776 /* Write all dirty data */ 781 - if (S_ISREG(inode->i_mode)) { 782 - nfs_file_block_o_direct(NFS_I(inode)); 777 + if (S_ISREG(inode->i_mode)) 783 778 nfs_sync_inode(inode); 784 - } 785 779 786 780 fattr = nfs_alloc_fattr_with_label(NFS_SERVER(inode)); 787 781 if (fattr == NULL) {
+2
fs/nfs/io.c
··· 84 84 nfs_file_block_o_direct(NFS_I(inode)); 85 85 return err; 86 86 } 87 + EXPORT_SYMBOL_GPL(nfs_start_io_write); 87 88 88 89 /** 89 90 * nfs_end_io_write - declare that the buffered write operation is done ··· 98 97 { 99 98 up_write(&inode->i_rwsem); 100 99 } 100 + EXPORT_SYMBOL_GPL(nfs_end_io_write); 101 101 102 102 /* Call with exclusively locked inode->i_rwsem */ 103 103 static void nfs_block_buffered(struct nfs_inode *nfsi, struct inode *inode)
+16 -16
fs/nfs/localio.c
··· 461 461 v = 0; 462 462 total = hdr->args.count; 463 463 base = hdr->args.pgbase; 464 + pagevec += base >> PAGE_SHIFT; 465 + base &= ~PAGE_MASK; 464 466 while (total && v < hdr->page_array.npages) { 465 467 len = min_t(size_t, total, PAGE_SIZE - base); 466 468 bvec_set_page(&iocb->bvec[v], *pagevec, len, base); ··· 620 618 struct nfs_local_kiocb *iocb = 621 619 container_of(work, struct nfs_local_kiocb, work); 622 620 struct file *filp = iocb->kiocb.ki_filp; 623 - bool force_done = false; 624 621 ssize_t status; 625 622 int n_iters; 626 623 ··· 638 637 scoped_with_creds(filp->f_cred) 639 638 status = filp->f_op->read_iter(&iocb->kiocb, &iocb->iters[i]); 640 639 641 - if (status != -EIOCBQUEUED) { 642 - if (unlikely(status >= 0 && status < iocb->iters[i].count)) 643 - force_done = true; /* Partial read */ 644 - if (nfs_local_pgio_done(iocb, status, force_done)) { 645 - nfs_local_read_iocb_done(iocb); 646 - break; 647 - } 640 + if (status == -EIOCBQUEUED) 641 + continue; 642 + /* Break on completion, errors, or short reads */ 643 + if (nfs_local_pgio_done(iocb, status, false) || status < 0 || 644 + (size_t)status < iov_iter_count(&iocb->iters[i])) { 645 + nfs_local_read_iocb_done(iocb); 646 + break; 648 647 } 649 648 } 650 649 } ··· 822 821 container_of(work, struct nfs_local_kiocb, work); 823 822 struct file *filp = iocb->kiocb.ki_filp; 824 823 unsigned long old_flags = current->flags; 825 - bool force_done = false; 826 824 ssize_t status; 827 825 int n_iters; 828 826 ··· 843 843 scoped_with_creds(filp->f_cred) 844 844 status = filp->f_op->write_iter(&iocb->kiocb, &iocb->iters[i]); 845 845 846 - if (status != -EIOCBQUEUED) { 847 - if (unlikely(status >= 0 && status < iocb->iters[i].count)) 848 - force_done = true; /* Partial write */ 849 - if (nfs_local_pgio_done(iocb, status, force_done)) { 850 - nfs_local_write_iocb_done(iocb); 851 - break; 852 - } 846 + if (status == -EIOCBQUEUED) 847 + continue; 848 + /* Break on completion, errors, or short writes */ 849 + 
if (nfs_local_pgio_done(iocb, status, false) || status < 0 || 850 + (size_t)status < iov_iter_count(&iocb->iters[i])) { 851 + nfs_local_write_iocb_done(iocb); 852 + break; 853 853 } 854 854 } 855 855 file_end_write(filp);
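The rewritten read and write loops above share one control-flow pattern: skip iterations whose I/O was queued asynchronously, and stop on an error or a short transfer. A user-space C sketch of just that shape (the `seg` struct and the queued-status constant are hypothetical stand-ins, not the NFS iov_iter or kiocb APIs):

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>

/* Hypothetical stand-in for one iov_iter segment and its I/O result. */
struct seg { size_t count; ssize_t result; };

#define SKETCH_EIOCBQUEUED (-529) /* plays the role of -EIOCBQUEUED only */

/* Returns the index of the segment that ended the loop, or n if every
 * segment transferred in full. Queued segments are skipped, matching
 * the "continue on -EIOCBQUEUED" branch in the patch; errors and short
 * transfers break out, matching the new early-completion path. */
static size_t run_segments(const struct seg *segs, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		ssize_t status = segs[i].result; /* would be ->read_iter()/->write_iter() */

		if (status == SKETCH_EIOCBQUEUED)
			continue;	/* completion handled asynchronously */
		if (status < 0)
			return i;	/* error: stop and finish the iocb */
		if ((size_t)status < segs[i].count)
			return i;	/* short transfer: also stop */
	}
	return n;
}
```

A full middle segment keeps the loop going; a short one at index 1 ends it there.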
+19 -10
fs/nfs/nfs42proc.c
··· 114 114 exception.inode = inode; 115 115 exception.state = lock->open_context->state; 116 116 117 - nfs_file_block_o_direct(NFS_I(inode)); 118 117 err = nfs_sync_inode(inode); 119 118 if (err) 120 119 goto out; ··· 137 138 .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ALLOCATE], 138 139 }; 139 140 struct inode *inode = file_inode(filep); 140 - loff_t oldsize = i_size_read(inode); 141 + loff_t oldsize; 141 142 int err; 142 143 143 144 if (!nfs_server_capable(inode, NFS_CAP_ALLOCATE)) 144 145 return -EOPNOTSUPP; 145 146 146 - inode_lock(inode); 147 + err = nfs_start_io_write(inode); 148 + if (err) 149 + return err; 150 + 151 + oldsize = i_size_read(inode); 147 152 148 153 err = nfs42_proc_fallocate(&msg, filep, offset, len); 149 154 ··· 158 155 NFS_SERVER(inode)->caps &= ~(NFS_CAP_ALLOCATE | 159 156 NFS_CAP_ZERO_RANGE); 160 157 161 - inode_unlock(inode); 158 + nfs_end_io_write(inode); 162 159 return err; 163 160 } 164 161 ··· 173 170 if (!nfs_server_capable(inode, NFS_CAP_DEALLOCATE)) 174 171 return -EOPNOTSUPP; 175 172 176 - inode_lock(inode); 173 + err = nfs_start_io_write(inode); 174 + if (err) 175 + return err; 177 176 178 177 err = nfs42_proc_fallocate(&msg, filep, offset, len); 179 178 if (err == 0) ··· 184 179 NFS_SERVER(inode)->caps &= ~(NFS_CAP_DEALLOCATE | 185 180 NFS_CAP_ZERO_RANGE); 186 181 187 - inode_unlock(inode); 182 + nfs_end_io_write(inode); 188 183 return err; 189 184 } 190 185 ··· 194 189 .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ZERO_RANGE], 195 190 }; 196 191 struct inode *inode = file_inode(filep); 197 - loff_t oldsize = i_size_read(inode); 192 + loff_t oldsize; 198 193 int err; 199 194 200 195 if (!nfs_server_capable(inode, NFS_CAP_ZERO_RANGE)) 201 196 return -EOPNOTSUPP; 202 197 203 - inode_lock(inode); 198 + err = nfs_start_io_write(inode); 199 + if (err) 200 + return err; 204 201 202 + oldsize = i_size_read(inode); 205 203 err = nfs42_proc_fallocate(&msg, filep, offset, len); 206 204 if (err == 0) { 207 205 
nfs_truncate_last_folio(inode->i_mapping, oldsize, ··· 213 205 } else if (err == -EOPNOTSUPP) 214 206 NFS_SERVER(inode)->caps &= ~NFS_CAP_ZERO_RANGE; 215 207 216 - inode_unlock(inode); 208 + nfs_end_io_write(inode); 217 209 return err; 218 210 } 219 211 ··· 424 416 struct nfs_server *src_server = NFS_SERVER(src_inode); 425 417 loff_t pos_src = args->src_pos; 426 418 loff_t pos_dst = args->dst_pos; 427 - loff_t oldsize_dst = i_size_read(dst_inode); 419 + loff_t oldsize_dst; 428 420 size_t count = args->count; 429 421 ssize_t status; 430 422 ··· 469 461 &src_lock->open_context->state->flags); 470 462 set_bit(NFS_CLNT_DST_SSC_COPY_STATE, 471 463 &dst_lock->open_context->state->flags); 464 + oldsize_dst = i_size_read(dst_inode); 472 465 473 466 status = nfs4_call_sync(dst_server->client, dst_server, &msg, 474 467 &args->seq_args, &res->seq_res, 0);
+42 -11
fs/nfs/nfs4proc.c
··· 3894 3894 calldata->res.seqid = calldata->arg.seqid; 3895 3895 calldata->res.server = server; 3896 3896 calldata->res.lr_ret = -NFS4ERR_NOMATCHING_LAYOUT; 3897 - calldata->lr.roc = pnfs_roc(state->inode, 3898 - &calldata->lr.arg, &calldata->lr.res, msg.rpc_cred); 3897 + calldata->lr.roc = pnfs_roc(state->inode, &calldata->lr.arg, 3898 + &calldata->lr.res, msg.rpc_cred, wait); 3899 3899 if (calldata->lr.roc) { 3900 3900 calldata->arg.lr_args = &calldata->lr.arg; 3901 3901 calldata->res.lr_res = &calldata->lr.res; ··· 4494 4494 } 4495 4495 #endif /* CONFIG_NFS_V4_1 */ 4496 4496 4497 + static void nfs4_call_getattr_prepare(struct rpc_task *task, void *calldata) 4498 + { 4499 + struct nfs4_call_sync_data *data = calldata; 4500 + nfs4_setup_sequence(data->seq_server->nfs_client, data->seq_args, 4501 + data->seq_res, task); 4502 + } 4503 + 4504 + static void nfs4_call_getattr_done(struct rpc_task *task, void *calldata) 4505 + { 4506 + struct nfs4_call_sync_data *data = calldata; 4507 + 4508 + nfs4_sequence_process(task, data->seq_res); 4509 + } 4510 + 4511 + static const struct rpc_call_ops nfs4_call_getattr_ops = { 4512 + .rpc_call_prepare = nfs4_call_getattr_prepare, 4513 + .rpc_call_done = nfs4_call_getattr_done, 4514 + }; 4515 + 4497 4516 static int _nfs4_proc_getattr(struct nfs_server *server, struct nfs_fh *fhandle, 4498 4517 struct nfs_fattr *fattr, struct inode *inode) 4499 4518 { ··· 4530 4511 .rpc_argp = &args, 4531 4512 .rpc_resp = &res, 4532 4513 }; 4514 + struct nfs4_call_sync_data data = { 4515 + .seq_server = server, 4516 + .seq_args = &args.seq_args, 4517 + .seq_res = &res.seq_res, 4518 + }; 4519 + struct rpc_task_setup task_setup = { 4520 + .rpc_client = server->client, 4521 + .rpc_message = &msg, 4522 + .callback_ops = &nfs4_call_getattr_ops, 4523 + .callback_data = &data, 4524 + }; 4533 4525 struct nfs4_gdd_res gdd_res; 4534 - unsigned short task_flags = 0; 4535 4526 int status; 4536 4527 4537 4528 if (nfs4_has_session(server->nfs_client)) 4538 - 
task_flags = RPC_TASK_MOVEABLE; 4529 + task_setup.flags = RPC_TASK_MOVEABLE; 4539 4530 4540 4531 /* Is this is an attribute revalidation, subject to softreval? */ 4541 4532 if (inode && (server->flags & NFS_MOUNT_SOFTREVAL)) 4542 - task_flags |= RPC_TASK_TIMEOUT; 4533 + task_setup.flags |= RPC_TASK_TIMEOUT; 4543 4534 4544 4535 args.get_dir_deleg = should_request_dir_deleg(inode); 4545 4536 if (args.get_dir_deleg) ··· 4559 4530 nfs_fattr_init(fattr); 4560 4531 nfs4_init_sequence(&args.seq_args, &res.seq_res, 0, 0); 4561 4532 4562 - status = nfs4_do_call_sync(server->client, server, &msg, 4563 - &args.seq_args, &res.seq_res, task_flags); 4533 + status = nfs4_call_sync_custom(&task_setup); 4534 + 4564 4535 if (args.get_dir_deleg) { 4565 4536 switch (status) { 4566 4537 case 0: 4567 4538 if (gdd_res.status != GDD4_OK) 4568 4539 break; 4569 - status = nfs_inode_set_delegation( 4570 - inode, current_cred(), FMODE_READ, 4571 - &gdd_res.deleg, 0, NFS4_OPEN_DELEGATE_READ); 4540 + nfs_inode_set_delegation(inode, current_cred(), 4541 + FMODE_READ, &gdd_res.deleg, 0, 4542 + NFS4_OPEN_DELEGATE_READ); 4572 4543 break; 4573 4544 case -ENOTSUPP: 4574 4545 case -EOPNOTSUPP: 4575 4546 server->caps &= ~NFS_CAP_DIR_DELEG; 4576 4547 } 4577 4548 } 4549 + 4550 + nfs4_sequence_free_slot(&res.seq_res); 4578 4551 return status; 4579 4552 } 4580 4553 ··· 7036 7005 data->inode = nfs_igrab_and_active(inode); 7037 7006 if (data->inode || issync) { 7038 7007 data->lr.roc = pnfs_roc(inode, &data->lr.arg, &data->lr.res, 7039 - cred); 7008 + cred, issync); 7040 7009 if (data->lr.roc) { 7041 7010 data->args.lr_args = &data->lr.arg; 7042 7011 data->res.lr_res = &data->lr.res;
+5 -1
fs/nfs/nfs4state.c
··· 1445 1445 struct nfs4_state *state; 1446 1446 bool found = false; 1447 1447 1448 + if (!S_ISREG(inode->i_mode)) 1449 + goto out; 1448 1450 rcu_read_lock(); 1449 1451 list_for_each_entry_rcu(ctx, &nfsi->open_files, list) { 1450 1452 state = ctx->state; ··· 1468 1466 found = true; 1469 1467 } 1470 1468 rcu_read_unlock(); 1471 - 1469 + out: 1472 1470 nfs_inode_find_delegation_state_and_recover(inode, stateid); 1473 1471 if (found) 1474 1472 nfs4_schedule_state_manager(clp); ··· 1480 1478 struct nfs_inode *nfsi = NFS_I(inode); 1481 1479 struct nfs_open_context *ctx; 1482 1480 1481 + if (!S_ISREG(inode->i_mode)) 1482 + return; 1483 1483 rcu_read_lock(); 1484 1484 list_for_each_entry_rcu(ctx, &nfsi->open_files, list) { 1485 1485 if (ctx->state != state)
+3
fs/nfs/nfstrace.h
··· 1062 1062 DEFINE_NFS_FOLIO_EVENT(nfs_aop_readpage); 1063 1063 DEFINE_NFS_FOLIO_EVENT_DONE(nfs_aop_readpage_done); 1064 1064 1065 + DEFINE_NFS_FOLIO_EVENT(nfs_writeback_folio_reclaim); 1066 + DEFINE_NFS_FOLIO_EVENT_DONE(nfs_writeback_folio_reclaim_done); 1067 + 1065 1068 DEFINE_NFS_FOLIO_EVENT(nfs_writeback_folio); 1066 1069 DEFINE_NFS_FOLIO_EVENT_DONE(nfs_writeback_folio_done); 1067 1070
+41 -17
fs/nfs/pnfs.c
··· 1533 1533 PNFS_FL_LAYOUTRETURN_PRIVILEGED); 1534 1534 } 1535 1535 1536 - bool pnfs_roc(struct inode *ino, 1537 - struct nfs4_layoutreturn_args *args, 1538 - struct nfs4_layoutreturn_res *res, 1539 - const struct cred *cred) 1536 + bool pnfs_roc(struct inode *ino, struct nfs4_layoutreturn_args *args, 1537 + struct nfs4_layoutreturn_res *res, const struct cred *cred, 1538 + bool sync) 1540 1539 { 1541 1540 struct nfs_inode *nfsi = NFS_I(ino); 1542 1541 struct nfs_open_context *ctx; ··· 1546 1547 nfs4_stateid stateid; 1547 1548 enum pnfs_iomode iomode = 0; 1548 1549 bool layoutreturn = false, roc = false; 1549 - bool skip_read = false; 1550 + bool skip_read; 1550 1551 1551 1552 if (!nfs_have_layout(ino)) 1552 1553 return false; ··· 1559 1560 lo = NULL; 1560 1561 goto out_noroc; 1561 1562 } 1562 - pnfs_get_layout_hdr(lo); 1563 - if (test_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags)) { 1564 - spin_unlock(&ino->i_lock); 1565 - rcu_read_unlock(); 1566 - wait_on_bit(&lo->plh_flags, NFS_LAYOUT_RETURN, 1567 - TASK_UNINTERRUPTIBLE); 1568 - pnfs_put_layout_hdr(lo); 1569 - goto retry; 1570 - } 1571 1563 1572 1564 /* no roc if we hold a delegation */ 1565 + skip_read = false; 1573 1566 if (nfs4_check_delegation(ino, FMODE_READ)) { 1574 - if (nfs4_check_delegation(ino, FMODE_WRITE)) 1567 + if (nfs4_check_delegation(ino, FMODE_WRITE)) { 1568 + lo = NULL; 1575 1569 goto out_noroc; 1570 + } 1576 1571 skip_read = true; 1577 1572 } 1578 1573 ··· 1575 1582 if (state == NULL) 1576 1583 continue; 1577 1584 /* Don't return layout if there is open file state */ 1578 - if (state->state & FMODE_WRITE) 1585 + if (state->state & FMODE_WRITE) { 1586 + lo = NULL; 1579 1587 goto out_noroc; 1588 + } 1580 1589 if (state->state & FMODE_READ) 1581 1590 skip_read = true; 1582 1591 } 1583 1592 1593 + if (skip_read) { 1594 + bool writes = false; 1595 + 1596 + list_for_each_entry(lseg, &lo->plh_segs, pls_list) { 1597 + if (lseg->pls_range.iomode != IOMODE_READ) { 1598 + writes = true; 1599 + break; 
1600 + } 1601 + } 1602 + if (!writes) { 1603 + lo = NULL; 1604 + goto out_noroc; 1605 + } 1606 + } 1607 + 1608 + pnfs_get_layout_hdr(lo); 1609 + if (test_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags)) { 1610 + if (!sync) { 1611 + pnfs_set_plh_return_info( 1612 + lo, skip_read ? IOMODE_RW : IOMODE_ANY, 0); 1613 + goto out_noroc; 1614 + } 1615 + spin_unlock(&ino->i_lock); 1616 + rcu_read_unlock(); 1617 + wait_on_bit(&lo->plh_flags, NFS_LAYOUT_RETURN, 1618 + TASK_UNINTERRUPTIBLE); 1619 + pnfs_put_layout_hdr(lo); 1620 + goto retry; 1621 + } 1584 1622 1585 1623 list_for_each_entry_safe(lseg, next, &lo->plh_segs, pls_list) { 1586 1624 if (skip_read && lseg->pls_range.iomode == IOMODE_READ) ··· 1651 1627 out_noroc: 1652 1628 spin_unlock(&ino->i_lock); 1653 1629 rcu_read_unlock(); 1654 - pnfs_layoutcommit_inode(ino, true); 1630 + pnfs_layoutcommit_inode(ino, sync); 1655 1631 if (roc) { 1656 1632 struct pnfs_layoutdriver_type *ld = NFS_SERVER(ino)->pnfs_curr_ld; 1657 1633 if (ld->prepare_layoutreturn)
+7 -10
fs/nfs/pnfs.h
··· 303 303 u32 seq); 304 304 int pnfs_mark_layout_stateid_invalid(struct pnfs_layout_hdr *lo, 305 305 struct list_head *lseg_list); 306 - bool pnfs_roc(struct inode *ino, 307 - struct nfs4_layoutreturn_args *args, 308 - struct nfs4_layoutreturn_res *res, 309 - const struct cred *cred); 306 + bool pnfs_roc(struct inode *ino, struct nfs4_layoutreturn_args *args, 307 + struct nfs4_layoutreturn_res *res, const struct cred *cred, 308 + bool sync); 310 309 int pnfs_roc_done(struct rpc_task *task, struct nfs4_layoutreturn_args **argpp, 311 310 struct nfs4_layoutreturn_res **respp, int *ret); 312 311 void pnfs_roc_release(struct nfs4_layoutreturn_args *args, ··· 772 773 return false; 773 774 } 774 775 775 - 776 - static inline bool 777 - pnfs_roc(struct inode *ino, 778 - struct nfs4_layoutreturn_args *args, 779 - struct nfs4_layoutreturn_res *res, 780 - const struct cred *cred) 776 + static inline bool pnfs_roc(struct inode *ino, 777 + struct nfs4_layoutreturn_args *args, 778 + struct nfs4_layoutreturn_res *res, 779 + const struct cred *cred, bool sync) 781 780 { 782 781 return false; 783 782 }
+33
fs/nfs/write.c
··· 2025 2025 } 2026 2026 2027 2027 /** 2028 + * nfs_wb_folio_reclaim - Write back all requests on one page 2029 + * @inode: pointer to inode 2030 + * @folio: pointer to folio 2031 + * 2032 + * Assumes that the folio has been locked by the caller 2033 + */ 2034 + int nfs_wb_folio_reclaim(struct inode *inode, struct folio *folio) 2035 + { 2036 + loff_t range_start = folio_pos(folio); 2037 + size_t len = folio_size(folio); 2038 + struct writeback_control wbc = { 2039 + .sync_mode = WB_SYNC_ALL, 2040 + .nr_to_write = 0, 2041 + .range_start = range_start, 2042 + .range_end = range_start + len - 1, 2043 + .for_sync = 1, 2044 + }; 2045 + int ret; 2046 + 2047 + if (folio_test_writeback(folio)) 2048 + return -EBUSY; 2049 + if (folio_clear_dirty_for_io(folio)) { 2050 + trace_nfs_writeback_folio_reclaim(inode, range_start, len); 2051 + ret = nfs_writepage_locked(folio, &wbc); 2052 + trace_nfs_writeback_folio_reclaim_done(inode, range_start, len, 2053 + ret); 2054 + return ret; 2055 + } 2056 + nfs_commit_inode(inode, 0); 2057 + return 0; 2058 + } 2059 + 2060 + /** 2028 2061 * nfs_wb_folio - Write back all requests on one page 2029 2062 * @inode: pointer to page 2030 2063 * @folio: pointer to folio
+6 -5
fs/xfs/libxfs/xfs_ialloc.c
··· 848 848 * invalid inode records, such as records that start at agbno 0 849 849 * or extend beyond the AG. 850 850 * 851 - * Set min agbno to the first aligned, non-zero agbno and max to 852 - * the last aligned agbno that is at least one full chunk from 853 - * the end of the AG. 851 + * Set min agbno to the first chunk aligned, non-zero agbno and 852 + * max to one less than the last chunk aligned agbno from the 853 + * end of the AG. We subtract 1 from max so that the cluster 854 + * allocation alignment takes over and allows allocation within 855 + * the last full inode chunk in the AG. 854 856 */ 855 857 args.min_agbno = args.mp->m_sb.sb_inoalignmt; 856 858 args.max_agbno = round_down(xfs_ag_block_count(args.mp, 857 859 pag_agno(pag)), 858 - args.mp->m_sb.sb_inoalignmt) - 859 - igeo->ialloc_blks; 860 + args.mp->m_sb.sb_inoalignmt) - 1; 860 861 861 862 error = xfs_alloc_vextent_near_bno(&args, 862 863 xfs_agbno_to_fsb(pag,
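The hunk above narrows `args.max_agbno` to `round_down(ag_blocks, align) - 1` instead of backing off a full `ialloc_blks`. The alignment arithmetic can be checked in isolation (the macro mirrors the kernel's power-of-two `round_down`; the block counts and alignment below are made-up geometry, not real AG sizes):

```c
#include <assert.h>

/* Power-of-two round_down, mirroring the kernel macro of the same name. */
#define round_down(x, y) ((x) & ~((__typeof__(x))(y) - 1))

/* Last allocatable agbno under the patched rule: one less than the last
 * chunk-aligned block number, so cluster-allocation alignment can still
 * place an allocation inside the final full inode chunk of the AG. */
static unsigned int max_agbno(unsigned int ag_blocks, unsigned int align)
{
	return round_down(ag_blocks, align) - 1;
}
```

For a 1000-block AG with 16-block alignment this yields 991 rather than 992 minus a whole chunk.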
+27 -26
fs/xfs/libxfs/xfs_rtgroup.c
··· 48 48 return 0; 49 49 } 50 50 51 + /* Compute the number of rt extents in this realtime group. */ 52 + static xfs_rtxnum_t 53 + __xfs_rtgroup_extents( 54 + struct xfs_mount *mp, 55 + xfs_rgnumber_t rgno, 56 + xfs_rgnumber_t rgcount, 57 + xfs_rtbxlen_t rextents) 58 + { 59 + ASSERT(rgno < rgcount); 60 + if (rgno == rgcount - 1) 61 + return rextents - ((xfs_rtxnum_t)rgno * mp->m_sb.sb_rgextents); 62 + 63 + ASSERT(xfs_has_rtgroups(mp)); 64 + return mp->m_sb.sb_rgextents; 65 + } 66 + 67 + xfs_rtxnum_t 68 + xfs_rtgroup_extents( 69 + struct xfs_mount *mp, 70 + xfs_rgnumber_t rgno) 71 + { 72 + return __xfs_rtgroup_extents(mp, rgno, mp->m_sb.sb_rgcount, 73 + mp->m_sb.sb_rextents); 74 + } 75 + 51 76 /* Precompute this group's geometry */ 52 77 void 53 78 xfs_rtgroup_calc_geometry( ··· 83 58 xfs_rtbxlen_t rextents) 84 59 { 85 60 rtg->rtg_extents = __xfs_rtgroup_extents(mp, rgno, rgcount, rextents); 86 - rtg_group(rtg)->xg_block_count = rtg->rtg_extents * mp->m_sb.sb_rextsize; 61 + rtg_group(rtg)->xg_block_count = 62 + rtg->rtg_extents * mp->m_sb.sb_rextsize; 87 63 rtg_group(rtg)->xg_min_gbno = xfs_rtgroup_min_block(mp, rgno); 88 64 } 89 65 ··· 160 134 out_unwind_new_rtgs: 161 135 xfs_free_rtgroups(mp, first_rgno, index); 162 136 return error; 163 - } 164 - 165 - /* Compute the number of rt extents in this realtime group. */ 166 - xfs_rtxnum_t 167 - __xfs_rtgroup_extents( 168 - struct xfs_mount *mp, 169 - xfs_rgnumber_t rgno, 170 - xfs_rgnumber_t rgcount, 171 - xfs_rtbxlen_t rextents) 172 - { 173 - ASSERT(rgno < rgcount); 174 - if (rgno == rgcount - 1) 175 - return rextents - ((xfs_rtxnum_t)rgno * mp->m_sb.sb_rgextents); 176 - 177 - ASSERT(xfs_has_rtgroups(mp)); 178 - return mp->m_sb.sb_rgextents; 179 - } 180 - 181 - xfs_rtxnum_t 182 - xfs_rtgroup_extents( 183 - struct xfs_mount *mp, 184 - xfs_rgnumber_t rgno) 185 - { 186 - return __xfs_rtgroup_extents(mp, rgno, mp->m_sb.sb_rgcount, 187 - mp->m_sb.sb_rextents); 188 137 } 189 138 190 139 /*
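The relocated `__xfs_rtgroup_extents()` gives every realtime group `sb_rgextents` extents except the last, which absorbs the remainder of `sb_rextents`. That split can be sketched on its own (plain C with illustrative fixed-width types and numbers, not the xfs typedefs):

```c
#include <assert.h>
#include <stdint.h>

/* Extents in realtime group rgno: full-size groups get rgextents each;
 * the final group (rgno == rgcount - 1) takes whatever of the total
 * rextents is left after the earlier groups. */
static uint64_t rtgroup_extents(uint32_t rgno, uint32_t rgcount,
				uint64_t rextents, uint64_t rgextents)
{
	if (rgno == rgcount - 1)
		return rextents - (uint64_t)rgno * rgextents;
	return rgextents;
}
```

With 1000 total extents split into 4 groups of 300, the last group holds the 100 left over.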
-2
fs/xfs/libxfs/xfs_rtgroup.h
··· 285 285 int xfs_initialize_rtgroups(struct xfs_mount *mp, xfs_rgnumber_t first_rgno, 286 286 xfs_rgnumber_t end_rgno, xfs_rtbxlen_t rextents); 287 287 288 - xfs_rtxnum_t __xfs_rtgroup_extents(struct xfs_mount *mp, xfs_rgnumber_t rgno, 289 - xfs_rgnumber_t rgcount, xfs_rtbxlen_t rextents); 290 288 xfs_rtxnum_t xfs_rtgroup_extents(struct xfs_mount *mp, xfs_rgnumber_t rgno); 291 289 void xfs_rtgroup_calc_geometry(struct xfs_mount *mp, struct xfs_rtgroup *rtg, 292 290 xfs_rgnumber_t rgno, xfs_rgnumber_t rgcount,
+5 -3
fs/xfs/xfs_log.c
··· 1180 1180 int error = 0; 1181 1181 bool need_covered; 1182 1182 1183 - ASSERT((xlog_cil_empty(mp->m_log) && xlog_iclogs_empty(mp->m_log) && 1184 - !xfs_ail_min_lsn(mp->m_log->l_ailp)) || 1185 - xlog_is_shutdown(mp->m_log)); 1183 + if (!xlog_is_shutdown(mp->m_log)) { 1184 + ASSERT(xlog_cil_empty(mp->m_log)); 1185 + ASSERT(xlog_iclogs_empty(mp->m_log)); 1186 + ASSERT(!xfs_ail_min_lsn(mp->m_log->l_ailp)); 1187 + } 1186 1188 1187 1189 if (!xfs_log_writable(mp)) 1188 1190 return 0;
+3 -3
fs/xfs/xfs_rtalloc.c
··· 126 126 error = 0; 127 127 out: 128 128 xfs_rtbuf_cache_relse(oargs); 129 - return 0; 129 + return error; 130 130 } 131 131 /* 132 132 * Mark an extent specified by start and len allocated. ··· 1265 1265 uint32_t rem; 1266 1266 1267 1267 if (rextsize != 1) 1268 - return -EINVAL; 1268 + goto out_inval; 1269 1269 div_u64_rem(nmp->m_sb.sb_rblocks, gblocks, &rem); 1270 1270 if (rem) { 1271 1271 xfs_warn(mp, ··· 1326 1326 return true; 1327 1327 if (mp->m_sb.sb_rgcount == 0) 1328 1328 return false; 1329 - return xfs_rtgroup_extents(mp, mp->m_sb.sb_rgcount - 1) <= 1329 + return xfs_rtgroup_extents(mp, mp->m_sb.sb_rgcount - 1) < 1330 1330 mp->m_sb.sb_rgextents; 1331 1331 } 1332 1332
+1
include/drm/bridge/dw_hdmi_qp.h
··· 34 34 struct dw_hdmi_qp *dw_hdmi_qp_bind(struct platform_device *pdev, 35 35 struct drm_encoder *encoder, 36 36 const struct dw_hdmi_qp_plat_data *plat_data); 37 + void dw_hdmi_qp_suspend(struct device *dev, struct dw_hdmi_qp *hdmi); 37 38 void dw_hdmi_qp_resume(struct device *dev, struct dw_hdmi_qp *hdmi); 38 39 #endif /* __DW_HDMI_QP__ */
+37 -20
include/drm/display/drm_dp_helper.h
··· 552 552 void *buffer, size_t size); 553 553 554 554 /** 555 + * drm_dp_dpcd_readb() - read a single byte from the DPCD 556 + * @aux: DisplayPort AUX channel 557 + * @offset: address of the register to read 558 + * @valuep: location where the value of the register will be stored 559 + * 560 + * Returns the number of bytes transferred (1) on success, or a negative 561 + * error code on failure. In most of the cases you should be using 562 + * drm_dp_dpcd_read_byte() instead. 563 + */ 564 + static inline ssize_t drm_dp_dpcd_readb(struct drm_dp_aux *aux, 565 + unsigned int offset, u8 *valuep) 566 + { 567 + return drm_dp_dpcd_read(aux, offset, valuep, 1); 568 + } 569 + 570 + /** 555 571 * drm_dp_dpcd_read_data() - read a series of bytes from the DPCD 556 572 * @aux: DisplayPort AUX channel (SST or MST) 557 573 * @offset: address of the (first) register to read ··· 586 570 void *buffer, size_t size) 587 571 { 588 572 int ret; 573 + size_t i; 574 + u8 *buf = buffer; 589 575 590 576 ret = drm_dp_dpcd_read(aux, offset, buffer, size); 591 - if (ret < 0) 592 - return ret; 593 - if (ret < size) 594 - return -EPROTO; 577 + if (ret >= 0) { 578 + if (ret < size) 579 + return -EPROTO; 580 + return 0; 581 + } 582 + 583 + /* 584 + * Workaround for USB-C hubs/adapters with buggy firmware that fail 585 + * multi-byte AUX reads but work with single-byte reads. 586 + * Known affected devices: 587 + * - Lenovo USB-C to VGA adapter (VIA VL817, idVendor=17ef, idProduct=7217) 588 + * - Dell DA310 USB-C hub (idVendor=413c, idProduct=c010) 589 + * Attempt byte-by-byte reading as a fallback. 
590 + */ 591 + for (i = 0; i < size; i++) { 592 + ret = drm_dp_dpcd_readb(aux, offset + i, &buf[i]); 593 + if (ret < 0) 594 + return ret; 595 + } 595 596 596 597 return 0; 597 598 } ··· 640 607 return -EPROTO; 641 608 642 609 return 0; 643 - } 644 - 645 - /** 646 - * drm_dp_dpcd_readb() - read a single byte from the DPCD 647 - * @aux: DisplayPort AUX channel 648 - * @offset: address of the register to read 649 - * @valuep: location where the value of the register will be stored 650 - * 651 - * Returns the number of bytes transferred (1) on success, or a negative 652 - * error code on failure. In most of the cases you should be using 653 - * drm_dp_dpcd_read_byte() instead. 654 - */ 655 - static inline ssize_t drm_dp_dpcd_readb(struct drm_dp_aux *aux, 656 - unsigned int offset, u8 *valuep) 657 - { 658 - return drm_dp_dpcd_read(aux, offset, valuep, 1); 659 610 } 660 611 661 612 /**
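The new fallback in `drm_dp_dpcd_read_data()` retries a failed multi-byte AUX read one byte at a time, for hubs whose firmware only handles single-byte reads. A hedged user-space sketch of that shape (the `bulk_read_fn`/`byte_read_fn` callbacks and the mocks are hypothetical stand-ins for the AUX transfer helpers; the kernel version also returns -EPROTO on short bulk reads, which this sketch omits):

```c
#include <assert.h>
#include <stddef.h>

typedef int (*bulk_read_fn)(unsigned int offset, unsigned char *buf, size_t size);
typedef int (*byte_read_fn)(unsigned int offset, unsigned char *valuep);

/* Try one bulk transfer; on failure fall back to per-byte reads, as
 * some USB-C hubs only complete single-byte AUX reads reliably. */
static int read_with_fallback(bulk_read_fn bulk, byte_read_fn byte,
			      unsigned int offset, unsigned char *buf, size_t size)
{
	if (bulk(offset, buf, size) >= 0)
		return 0;
	for (size_t i = 0; i < size; i++) {
		if (byte(offset + i, &buf[i]) < 0)
			return -1;
	}
	return 0;
}

/* Mock transport: bulk reads always fail, byte reads echo the offset. */
static int failing_bulk(unsigned int o, unsigned char *b, size_t s)
{ (void)o; (void)b; (void)s; return -1; }
static int ok_byte(unsigned int o, unsigned char *v)
{ *v = (unsigned char)o; return 0; }

/* Read 4 bytes from offset 0x10; returns buf[0] + buf[3] on success. */
static int demo(void)
{
	unsigned char buf[4];

	if (read_with_fallback(failing_bulk, ok_byte, 0x10, buf, 4))
		return -1;
	return buf[0] + buf[3];
}
```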
+5 -2
include/hyperv/hvgdk_mini.h
··· 578 578 struct hv_tlb_flush_ex { 579 579 u64 address_space; 580 580 u64 flags; 581 - struct hv_vpset hv_vp_set; 582 - u64 gva_list[]; 581 + __TRAILING_OVERLAP(struct hv_vpset, hv_vp_set, bank_contents, __packed, 582 + u64 gva_list[]; 583 + ); 583 584 } __packed; 585 + static_assert(offsetof(struct hv_tlb_flush_ex, hv_vp_set.bank_contents) == 586 + offsetof(struct hv_tlb_flush_ex, gva_list)); 584 587 585 588 struct ms_hyperv_tsc_page { /* HV_REFERENCE_TSC_PAGE */ 586 589 volatile u32 tsc_sequence;
+24
include/linux/can/can-ml.h
··· 46 46 #include <linux/list.h> 47 47 #include <linux/netdevice.h> 48 48 49 + /* exposed CAN device capabilities for network layer */ 50 + #define CAN_CAP_CC BIT(0) /* CAN CC aka Classical CAN */ 51 + #define CAN_CAP_FD BIT(1) /* CAN FD */ 52 + #define CAN_CAP_XL BIT(2) /* CAN XL */ 53 + #define CAN_CAP_RO BIT(3) /* read-only mode (LISTEN/RESTRICTED) */ 54 + 49 55 #define CAN_SFF_RCV_ARRAY_SZ (1 << CAN_SFF_ID_BITS) 50 56 #define CAN_EFF_RCV_HASH_BITS 10 51 57 #define CAN_EFF_RCV_ARRAY_SZ (1 << CAN_EFF_RCV_HASH_BITS) ··· 70 64 #ifdef CAN_J1939 71 65 struct j1939_priv *j1939_priv; 72 66 #endif 67 + u32 can_cap; 73 68 }; 74 69 75 70 static inline struct can_ml_priv *can_get_ml_priv(struct net_device *dev) ··· 82 75 struct can_ml_priv *ml_priv) 83 76 { 84 77 netdev_set_ml_priv(dev, ml_priv, ML_PRIV_CAN); 78 + } 79 + 80 + static inline bool can_cap_enabled(struct net_device *dev, u32 cap) 81 + { 82 + struct can_ml_priv *can_ml = can_get_ml_priv(dev); 83 + 84 + if (!can_ml) 85 + return false; 86 + 87 + return (can_ml->can_cap & cap); 88 + } 89 + 90 + static inline void can_set_cap(struct net_device *dev, u32 cap) 91 + { 92 + struct can_ml_priv *can_ml = can_get_ml_priv(dev); 93 + 94 + can_ml->can_cap = cap; 85 95 } 86 96 87 97 #endif /* CAN_ML_H */
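The new `CAN_CAP_*` values form a plain capability bitmask, queried with a masked AND as in the added `can_cap_enabled()`. A trivial stand-alone sketch of that test (bit values copied from the hunk; `struct ml` is a stand-in for `struct can_ml_priv`):

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

#define CAN_CAP_CC BIT(0)	/* CAN CC aka Classical CAN */
#define CAN_CAP_FD BIT(1)	/* CAN FD */
#define CAN_CAP_XL BIT(2)	/* CAN XL */
#define CAN_CAP_RO BIT(3)	/* read-only mode */

struct ml { uint32_t can_cap; }; /* stand-in for struct can_ml_priv */

/* Mirrors can_cap_enabled(): false for a missing priv, else bit test. */
static int cap_enabled(const struct ml *ml, uint32_t cap)
{
	return ml && (ml->can_cap & cap);
}

/* A CC+FD device reports FD but not XL. */
static int demo(void)
{
	struct ml ml = { .can_cap = CAN_CAP_CC | CAN_CAP_FD };

	return cap_enabled(&ml, CAN_CAP_FD) && !cap_enabled(&ml, CAN_CAP_XL);
}
```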
+1 -7
include/linux/can/dev.h
··· 111 111 void free_candev(struct net_device *dev); 112 112 113 113 /* a candev safe wrapper around netdev_priv */ 114 - #if IS_ENABLED(CONFIG_CAN_NETLINK) 115 114 struct can_priv *safe_candev_priv(struct net_device *dev); 116 - #else 117 - static inline struct can_priv *safe_candev_priv(struct net_device *dev) 118 - { 119 - return NULL; 120 - } 121 - #endif 122 115 123 116 int open_candev(struct net_device *dev); 124 117 void close_candev(struct net_device *dev); 125 118 void can_set_default_mtu(struct net_device *dev); 119 + void can_set_cap_info(struct net_device *dev); 126 120 int __must_check can_set_static_ctrlmode(struct net_device *dev, 127 121 u32 static_mode); 128 122 int can_hwtstamp_get(struct net_device *netdev,
+14 -11
include/linux/cgroup-defs.h
··· 626 626 #endif 627 627 628 628 /* All ancestors including self */ 629 - struct cgroup *ancestors[]; 629 + union { 630 + DECLARE_FLEX_ARRAY(struct cgroup *, ancestors); 631 + struct { 632 + struct cgroup *_root_ancestor; 633 + DECLARE_FLEX_ARRAY(struct cgroup *, _low_ancestors); 634 + }; 635 + }; 630 636 }; 631 637 632 638 /* ··· 653 647 struct list_head root_list; 654 648 struct rcu_head rcu; /* Must be near the top */ 655 649 656 - /* 657 - * The root cgroup. The containing cgroup_root will be destroyed on its 658 - * release. cgrp->ancestors[0] will be used overflowing into the 659 - * following field. cgrp_ancestor_storage must immediately follow. 660 - */ 661 - struct cgroup cgrp; 662 - 663 - /* must follow cgrp for cgrp->ancestors[0], see above */ 664 - struct cgroup *cgrp_ancestor_storage; 665 - 666 650 /* Number of cgroups in the hierarchy, used only for /proc/cgroups */ 667 651 atomic_t nr_cgrps; 668 652 ··· 664 668 665 669 /* The name for this hierarchy - may be empty */ 666 670 char name[MAX_CGROUP_ROOT_NAMELEN]; 671 + 672 + /* 673 + * The root cgroup. The containing cgroup_root will be destroyed on its 674 + * release. This must be embedded last due to flexible array at the end 675 + * of struct cgroup. 676 + */ 677 + struct cgroup cgrp; 667 678 }; 668 679 669 680 /*
+1 -1
include/linux/energy_model.h
··· 18 18 * @power: The power consumed at this level (by 1 CPU or by a registered 19 19 * device). It can be a total power: static and dynamic. 20 20 * @cost: The cost coefficient associated with this level, used during 21 - * energy calculation. Equal to: power * max_frequency / frequency 21 + * energy calculation. Equal to: 10 * power * max_frequency / frequency 22 22 * @flags: see "em_perf_state flags" description below. 23 23 */ 24 24 struct em_perf_state {
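The corrected kernel-doc above states the cost coefficient is `10 * power * max_frequency / frequency`. A quick check of that relation with illustrative numbers (this mirrors only the documented formula, not the EM core code):

```c
#include <assert.h>
#include <stdint.h>

/* Cost coefficient per the em_perf_state kernel-doc:
 * 10 * power * max_frequency / frequency. At equal power, a lower
 * frequency therefore gets a proportionally higher cost. */
static uint64_t em_cost(uint64_t power, uint64_t max_freq, uint64_t freq)
{
	return 10 * power * max_freq / freq;
}
```

Running at half the maximum frequency doubles the cost relative to running at full speed.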
+1
include/linux/kfence.h
··· 211 211 * __kfence_obj_info() - fill kmem_obj_info struct 212 212 * @kpp: kmem_obj_info to be filled 213 213 * @object: the object 214 + * @slab: the slab 214 215 * 215 216 * Return: 216 217 * * false - not a KFENCE object
+1
include/linux/nfs_fs.h
··· 637 637 extern int nfs_sync_inode(struct inode *inode); 638 638 extern int nfs_wb_all(struct inode *inode); 639 639 extern int nfs_wb_folio(struct inode *inode, struct folio *folio); 640 + extern int nfs_wb_folio_reclaim(struct inode *inode, struct folio *folio); 640 641 int nfs_wb_folio_cancel(struct inode *inode, struct folio *folio); 641 642 extern int nfs_commit_inode(struct inode *, int); 642 643 extern struct nfs_commit_data *nfs_commitdata_alloc(void);
+1
include/linux/nmi.h
··· 83 83 #if defined(CONFIG_HARDLOCKUP_DETECTOR) 84 84 extern void hardlockup_detector_disable(void); 85 85 extern unsigned int hardlockup_panic; 86 + extern unsigned long hardlockup_si_mask; 86 87 #else 87 88 static inline void hardlockup_detector_disable(void) {} 88 89 #endif
+4
include/linux/pci.h
··· 2210 2210 { 2211 2211 return -ENOSPC; 2212 2212 } 2213 + 2214 + static inline void pci_free_irq_vectors(struct pci_dev *dev) 2215 + { 2216 + } 2213 2217 #endif /* CONFIG_PCI */ 2214 2218 2215 2219 /* Include architecture-dependent settings and functions */
-1
include/linux/sched.h
··· 1874 1874 extern int can_nice(const struct task_struct *p, const int nice); 1875 1875 extern int task_curr(const struct task_struct *p); 1876 1876 extern int idle_cpu(int cpu); 1877 - extern int available_idle_cpu(int cpu); 1878 1877 extern int sched_setscheduler(struct task_struct *, int, const struct sched_param *); 1879 1878 extern int sched_setscheduler_nocheck(struct task_struct *, int, const struct sched_param *); 1880 1879 extern void sched_set_fifo(struct task_struct *p);
+1
include/linux/sched/mm.h
··· 325 325 326 326 /** 327 327 * memalloc_flags_save - Add a PF_* flag to current->flags, save old value 328 + * @flags: Flags to add. 328 329 * 329 330 * This allows PF_* flags to be conveniently added, irrespective of current 330 331 * value, and then the old version restored with memalloc_flags_restore().
+2 -2
include/linux/soc/airoha/airoha_offload.h
··· 52 52 { 53 53 } 54 54 55 - static inline int airoha_ppe_setup_tc_block_cb(struct airoha_ppe_dev *dev, 56 - void *type_data) 55 + static inline int airoha_ppe_dev_setup_tc_block_cb(struct airoha_ppe_dev *dev, 56 + void *type_data) 57 57 { 58 58 return -EOPNOTSUPP; 59 59 }
+1
include/linux/textsearch.h
··· 35 35 * @get_pattern: return head of pattern 36 36 * @get_pattern_len: return length of pattern 37 37 * @owner: module reference to algorithm 38 + * @list: list to search 38 39 */ 39 40 struct ts_ops 40 41 {
+3
include/linux/usb/quirks.h
··· 75 75 /* short SET_ADDRESS request timeout */ 76 76 #define USB_QUIRK_SHORT_SET_ADDRESS_REQ_TIMEOUT BIT(16) 77 77 78 + /* skip BOS descriptor request */ 79 + #define USB_QUIRK_NO_BOS BIT(17) 80 + 78 81 #endif /* __LINUX_USB_QUIRKS_H */
+6
include/net/dropreason-core.h
··· 67 67 FN(TC_EGRESS) \ 68 68 FN(SECURITY_HOOK) \ 69 69 FN(QDISC_DROP) \ 70 + FN(QDISC_BURST_DROP) \ 70 71 FN(QDISC_OVERLIMIT) \ 71 72 FN(QDISC_CONGESTED) \ 72 73 FN(CAKE_FLOOD) \ ··· 375 374 * failed to enqueue to current qdisc) 376 375 */ 377 376 SKB_DROP_REASON_QDISC_DROP, 377 + /** 378 + * @SKB_DROP_REASON_QDISC_BURST_DROP: dropped when net.core.qdisc_max_burst 379 + * limit is hit. 380 + */ 381 + SKB_DROP_REASON_QDISC_BURST_DROP, 378 382 /** 379 383 * @SKB_DROP_REASON_QDISC_OVERLIMIT: dropped by qdisc when a qdisc 380 384 * instance exceeds its total buffer size limit.
+1
include/net/hotdata.h
··· 42 42 int netdev_budget_usecs; 43 43 int tstamp_prequeue; 44 44 int max_backlog; 45 + int qdisc_max_burst; 45 46 int dev_tx_weight; 46 47 int dev_rx_weight; 47 48 int sysctl_max_skb_frags;
+12 -1
include/net/ip_tunnels.h
··· 19 19 #include <net/rtnetlink.h> 20 20 #include <net/lwtunnel.h> 21 21 #include <net/dst_cache.h> 22 + #include <net/netdev_lock.h> 22 23 23 24 #if IS_ENABLED(CONFIG_IPV6) 24 25 #include <net/ipv6.h> ··· 373 372 fl4->flowi4_flags = flow_flags; 374 373 } 375 374 376 - int ip_tunnel_init(struct net_device *dev); 375 + int __ip_tunnel_init(struct net_device *dev); 376 + #define ip_tunnel_init(DEV) \ 377 + ({ \ 378 + struct net_device *__dev = (DEV); \ 379 + int __res = __ip_tunnel_init(__dev); \ 380 + \ 381 + if (!__res) \ 382 + netdev_lockdep_set_classes(__dev);\ 383 + __res; \ 384 + }) 385 + 377 386 void ip_tunnel_uninit(struct net_device *dev); 378 387 void ip_tunnel_dellink(struct net_device *dev, struct list_head *head); 379 388 struct net *ip_tunnel_get_link_net(const struct net_device *dev);
+6
include/scsi/scsi_eh.h
··· 41 41 unsigned char cmnd[32]; 42 42 struct scsi_data_buffer sdb; 43 43 struct scatterlist sense_sgl; 44 + 45 + /* struct request fields */ 46 + #ifdef CONFIG_BLK_INLINE_ENCRYPTION 47 + struct bio_crypt_ctx *rq_crypt_ctx; 48 + struct blk_crypto_keyslot *rq_crypt_keyslot; 49 + #endif 44 50 }; 45 51 46 52 extern void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd,
+1 -1
include/sound/pcm.h
··· 1402 1402 #define snd_pcm_lib_mmap_iomem NULL 1403 1403 #endif 1404 1404 1405 - void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime); 1405 + int snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime); 1406 1406 1407 1407 /** 1408 1408 * snd_pcm_limit_isa_dma_size - Get the max size fitting with ISA DMA transfer
+82
include/uapi/linux/dev_energymodel.h
··· 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ 2 + /* Do not edit directly, auto-generated from: */ 3 + /* Documentation/netlink/specs/dev-energymodel.yaml */ 4 + /* YNL-GEN uapi header */ 5 + /* To regenerate run: tools/net/ynl/ynl-regen.sh */ 6 + 7 + #ifndef _UAPI_LINUX_DEV_ENERGYMODEL_H 8 + #define _UAPI_LINUX_DEV_ENERGYMODEL_H 9 + 10 + #define DEV_ENERGYMODEL_FAMILY_NAME "dev-energymodel" 11 + #define DEV_ENERGYMODEL_FAMILY_VERSION 1 12 + 13 + /** 14 + * enum dev_energymodel_perf_state_flags 15 + * @DEV_ENERGYMODEL_PERF_STATE_FLAGS_PERF_STATE_INEFFICIENT: The performance 16 + * state is inefficient. There is in this perf-domain, another performance 17 + * state with a higher frequency but a lower or equal power cost. 18 + */ 19 + enum dev_energymodel_perf_state_flags { 20 + DEV_ENERGYMODEL_PERF_STATE_FLAGS_PERF_STATE_INEFFICIENT = 1, 21 + }; 22 + 23 + /** 24 + * enum dev_energymodel_perf_domain_flags 25 + * @DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_MICROWATTS: The power values 26 + * are in micro-Watts or some other scale. 27 + * @DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_SKIP_INEFFICIENCIES: Skip 28 + * inefficient states when estimating energy consumption. 29 + * @DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_ARTIFICIAL: The power values 30 + * are artificial and might be created by platform missing real power 31 + * information. 
32 + */ 33 + enum dev_energymodel_perf_domain_flags { 34 + DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_MICROWATTS = 1, 35 + DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_SKIP_INEFFICIENCIES = 2, 36 + DEV_ENERGYMODEL_PERF_DOMAIN_FLAGS_PERF_DOMAIN_ARTIFICIAL = 4, 37 + }; 38 + 39 + enum { 40 + DEV_ENERGYMODEL_A_PERF_DOMAIN_PAD = 1, 41 + DEV_ENERGYMODEL_A_PERF_DOMAIN_PERF_DOMAIN_ID, 42 + DEV_ENERGYMODEL_A_PERF_DOMAIN_FLAGS, 43 + DEV_ENERGYMODEL_A_PERF_DOMAIN_CPUS, 44 + 45 + __DEV_ENERGYMODEL_A_PERF_DOMAIN_MAX, 46 + DEV_ENERGYMODEL_A_PERF_DOMAIN_MAX = (__DEV_ENERGYMODEL_A_PERF_DOMAIN_MAX - 1) 47 + }; 48 + 49 + enum { 50 + DEV_ENERGYMODEL_A_PERF_TABLE_PERF_DOMAIN_ID = 1, 51 + DEV_ENERGYMODEL_A_PERF_TABLE_PERF_STATE, 52 + 53 + __DEV_ENERGYMODEL_A_PERF_TABLE_MAX, 54 + DEV_ENERGYMODEL_A_PERF_TABLE_MAX = (__DEV_ENERGYMODEL_A_PERF_TABLE_MAX - 1) 55 + }; 56 + 57 + enum { 58 + DEV_ENERGYMODEL_A_PERF_STATE_PAD = 1, 59 + DEV_ENERGYMODEL_A_PERF_STATE_PERFORMANCE, 60 + DEV_ENERGYMODEL_A_PERF_STATE_FREQUENCY, 61 + DEV_ENERGYMODEL_A_PERF_STATE_POWER, 62 + DEV_ENERGYMODEL_A_PERF_STATE_COST, 63 + DEV_ENERGYMODEL_A_PERF_STATE_FLAGS, 64 + 65 + __DEV_ENERGYMODEL_A_PERF_STATE_MAX, 66 + DEV_ENERGYMODEL_A_PERF_STATE_MAX = (__DEV_ENERGYMODEL_A_PERF_STATE_MAX - 1) 67 + }; 68 + 69 + enum { 70 + DEV_ENERGYMODEL_CMD_GET_PERF_DOMAINS = 1, 71 + DEV_ENERGYMODEL_CMD_GET_PERF_TABLE, 72 + DEV_ENERGYMODEL_CMD_PERF_DOMAIN_CREATED, 73 + DEV_ENERGYMODEL_CMD_PERF_DOMAIN_UPDATED, 74 + DEV_ENERGYMODEL_CMD_PERF_DOMAIN_DELETED, 75 + 76 + __DEV_ENERGYMODEL_CMD_MAX, 77 + DEV_ENERGYMODEL_CMD_MAX = (__DEV_ENERGYMODEL_CMD_MAX - 1) 78 + }; 79 + 80 + #define DEV_ENERGYMODEL_MCGRP_EVENT "event" 81 + 82 + #endif /* _UAPI_LINUX_DEV_ENERGYMODEL_H */
-63
include/uapi/linux/energy_model.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ 2 - /* Do not edit directly, auto-generated from: */ 3 - /* Documentation/netlink/specs/em.yaml */ 4 - /* YNL-GEN uapi header */ 5 - /* To regenerate run: tools/net/ynl/ynl-regen.sh */ 6 - 7 - #ifndef _UAPI_LINUX_ENERGY_MODEL_H 8 - #define _UAPI_LINUX_ENERGY_MODEL_H 9 - 10 - #define EM_FAMILY_NAME "em" 11 - #define EM_FAMILY_VERSION 1 12 - 13 - enum { 14 - EM_A_PDS_PD = 1, 15 - 16 - __EM_A_PDS_MAX, 17 - EM_A_PDS_MAX = (__EM_A_PDS_MAX - 1) 18 - }; 19 - 20 - enum { 21 - EM_A_PD_PAD = 1, 22 - EM_A_PD_PD_ID, 23 - EM_A_PD_FLAGS, 24 - EM_A_PD_CPUS, 25 - 26 - __EM_A_PD_MAX, 27 - EM_A_PD_MAX = (__EM_A_PD_MAX - 1) 28 - }; 29 - 30 - enum { 31 - EM_A_PD_TABLE_PD_ID = 1, 32 - EM_A_PD_TABLE_PS, 33 - 34 - __EM_A_PD_TABLE_MAX, 35 - EM_A_PD_TABLE_MAX = (__EM_A_PD_TABLE_MAX - 1) 36 - }; 37 - 38 - enum { 39 - EM_A_PS_PAD = 1, 40 - EM_A_PS_PERFORMANCE, 41 - EM_A_PS_FREQUENCY, 42 - EM_A_PS_POWER, 43 - EM_A_PS_COST, 44 - EM_A_PS_FLAGS, 45 - 46 - __EM_A_PS_MAX, 47 - EM_A_PS_MAX = (__EM_A_PS_MAX - 1) 48 - }; 49 - 50 - enum { 51 - EM_CMD_GET_PDS = 1, 52 - EM_CMD_GET_PD_TABLE, 53 - EM_CMD_PD_CREATED, 54 - EM_CMD_PD_UPDATED, 55 - EM_CMD_PD_DELETED, 56 - 57 - __EM_CMD_MAX, 58 - EM_CMD_MAX = (__EM_CMD_MAX - 1) 59 - }; 60 - 61 - #define EM_MCGRP_EVENT "event" 62 - 63 - #endif /* _UAPI_LINUX_ENERGY_MODEL_H */
+1 -1
include/uapi/linux/ext4.h
··· 139 139 __u32 clear_feature_incompat_mask; 140 140 __u32 clear_feature_ro_compat_mask; 141 141 __u8 mount_opts[64]; 142 - __u8 pad[64]; 142 + __u8 pad[68]; 143 143 }; 144 144 145 145 #define EXT4_TUNE_FL_ERRORS_BEHAVIOR 0x00000001
+17 -20
include/uapi/linux/landlock.h
··· 216 216 * :manpage:`ftruncate(2)`, :manpage:`creat(2)`, or :manpage:`open(2)` with 217 217 * ``O_TRUNC``. This access right is available since the third version of the 218 218 * Landlock ABI. 219 + * - %LANDLOCK_ACCESS_FS_IOCTL_DEV: Invoke :manpage:`ioctl(2)` commands on an opened 220 + * character or block device. 221 + * 222 + * This access right applies to all `ioctl(2)` commands implemented by device 223 + * drivers. However, the following common IOCTL commands continue to be 224 + * invokable independent of the %LANDLOCK_ACCESS_FS_IOCTL_DEV right: 225 + * 226 + * * IOCTL commands targeting file descriptors (``FIOCLEX``, ``FIONCLEX``), 227 + * * IOCTL commands targeting file descriptions (``FIONBIO``, ``FIOASYNC``), 228 + * * IOCTL commands targeting file systems (``FIFREEZE``, ``FITHAW``, 229 + * ``FIGETBSZ``, ``FS_IOC_GETFSUUID``, ``FS_IOC_GETFSSYSFSPATH``) 230 + * * Some IOCTL commands which do not make sense when used with devices, but 231 + * whose implementations are safe and return the right error codes 232 + * (``FS_IOC_FIEMAP``, ``FICLONE``, ``FICLONERANGE``, ``FIDEDUPERANGE``) 233 + * 234 + * This access right is available since the fifth version of the Landlock 235 + * ABI. 219 236 * 220 237 * Whether an opened file can be truncated with :manpage:`ftruncate(2)` or used 221 238 * with `ioctl(2)` is determined during :manpage:`open(2)`, in the same way as ··· 291 274 * 292 275 * If multiple requirements are not met, the ``EACCES`` error code takes 293 276 * precedence over ``EXDEV``. 294 - * 295 - * The following access right applies both to files and directories: 296 - * 297 - * - %LANDLOCK_ACCESS_FS_IOCTL_DEV: Invoke :manpage:`ioctl(2)` commands on an opened 298 - * character or block device. 299 - * 300 - * This access right applies to all `ioctl(2)` commands implemented by device 301 - * drivers. 
However, the following common IOCTL commands continue to be 302 - * invokable independent of the %LANDLOCK_ACCESS_FS_IOCTL_DEV right: 303 - * 304 - * * IOCTL commands targeting file descriptors (``FIOCLEX``, ``FIONCLEX``), 305 - * * IOCTL commands targeting file descriptions (``FIONBIO``, ``FIOASYNC``), 306 - * * IOCTL commands targeting file systems (``FIFREEZE``, ``FITHAW``, 307 - * ``FIGETBSZ``, ``FS_IOC_GETFSUUID``, ``FS_IOC_GETFSSYSFSPATH``) 308 - * * Some IOCTL commands which do not make sense when used with devices, but 309 - * whose implementations are safe and return the right error codes 310 - * (``FS_IOC_FIEMAP``, ``FICLONE``, ``FICLONERANGE``, ``FIDEDUPERANGE``) 311 - * 312 - * This access right is available since the fifth version of the Landlock 313 - * ABI. 314 277 * 315 278 * .. warning:: 316 279 *
-9
include/uapi/linux/media/arm/mali-c55-config.h
··· 195 195 } __attribute__((packed)); 196 196 197 197 /** 198 - * enum mali_c55_param_buffer_version - Mali-C55 parameters block versioning 199 - * 200 - * @MALI_C55_PARAM_BUFFER_V1: First version of Mali-C55 parameters block 201 - */ 202 - enum mali_c55_param_buffer_version { 203 - MALI_C55_PARAM_BUFFER_V1, 204 - }; 205 - 206 - /** 207 198 * enum mali_c55_param_block_type - Enumeration of Mali-C55 parameter blocks 208 199 * 209 200 * This enumeration defines the types of Mali-C55 parameters block. Each block
+4 -4
io_uring/io_uring.c
··· 3003 3003 mutex_unlock(&ctx->uring_lock); 3004 3004 } 3005 3005 3006 - if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) 3007 - io_move_task_work_from_local(ctx); 3008 - 3009 3006 /* The SQPOLL thread never reaches this path */ 3010 - while (io_uring_try_cancel_requests(ctx, NULL, true, false)) 3007 + do { 3008 + if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) 3009 + io_move_task_work_from_local(ctx); 3011 3010 cond_resched(); 3011 + } while (io_uring_try_cancel_requests(ctx, NULL, true, false)); 3012 3012 3013 3013 if (ctx->sq_data) { 3014 3014 struct io_sq_data *sqd = ctx->sq_data;
+5
kernel/bpf/verifier.c
··· 9609 9609 if (reg->type != PTR_TO_MAP_VALUE) 9610 9610 return -EINVAL; 9611 9611 9612 + if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY) { 9613 + verbose(env, "R%d points to insn_array map which cannot be used as const string\n", regno); 9614 + return -EACCES; 9615 + } 9616 + 9612 9617 if (!bpf_map_is_rdonly(map)) { 9613 9618 verbose(env, "R%d does not point to a readonly map'\n", regno); 9614 9619 return -EACCES;
+2 -5
kernel/cgroup/cgroup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Generic process-grouping system. 3 4 * ··· 21 20 * 2003-10-22 Updates by Stephen Hemminger. 22 21 * 2004 May-July Rework by Paul Jackson. 23 22 * --------------------------------------------------- 24 - * 25 - * This file is subject to the terms and conditions of the GNU General Public 26 - * License. See the file COPYING in the main directory of the Linux 27 - * distribution for more details. 28 23 */ 29 24 30 25 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt ··· 5844 5847 int ret; 5845 5848 5846 5849 /* allocate the cgroup and its ID, 0 is reserved for the root */ 5847 - cgrp = kzalloc(struct_size(cgrp, ancestors, (level + 1)), GFP_KERNEL); 5850 + cgrp = kzalloc(struct_size(cgrp, _low_ancestors, level), GFP_KERNEL); 5848 5851 if (!cgrp) 5849 5852 return ERR_PTR(-ENOMEM); 5850 5853
+1 -4
kernel/cgroup/cpuset.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * kernel/cpuset.c 3 4 * ··· 17 16 * 2006 Rework by Paul Menage to use generic cgroups 18 17 * 2008 Rework of the scheduler domains and CPU hotplug handling 19 18 * by Max Krasnyansky 20 - * 21 - * This file is subject to the terms and conditions of the GNU General Public 22 - * License. See the file COPYING in the main directory of the Linux 23 - * distribution for more details. 24 19 */ 25 20 #include "cpuset-internal.h" 26 21
+1 -8
kernel/cgroup/legacy_freezer.c
··· 1 + // SPDX-License-Identifier: LGPL-2.1 1 2 /* 2 3 * cgroup_freezer.c - control group freezer subsystem 3 4 * 4 5 * Copyright IBM Corporation, 2007 5 6 * 6 7 * Author : Cedric Le Goater <clg@fr.ibm.com> 7 - * 8 - * This program is free software; you can redistribute it and/or modify it 9 - * under the terms of version 2.1 of the GNU Lesser General Public License 10 - * as published by the Free Software Foundation. 11 - * 12 - * This program is distributed in the hope that it would be useful, but 13 - * WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 15 8 */ 16 9 17 10 #include <linux/export.h>
+19 -18
kernel/liveupdate/kexec_handover.c
··· 460 460 } 461 461 } 462 462 463 - /* Return true if memory was deserizlied */ 464 - static bool __init kho_mem_deserialize(const void *fdt) 463 + /* Returns physical address of the preserved memory map from FDT */ 464 + static phys_addr_t __init kho_get_mem_map_phys(const void *fdt) 465 465 { 466 - struct khoser_mem_chunk *chunk; 467 466 const void *mem_ptr; 468 - u64 mem; 469 467 int len; 470 468 471 469 mem_ptr = fdt_getprop(fdt, 0, PROP_PRESERVED_MEMORY_MAP, &len); 472 470 if (!mem_ptr || len != sizeof(u64)) { 473 471 pr_err("failed to get preserved memory bitmaps\n"); 474 - return false; 472 + return 0; 475 473 } 476 474 477 - mem = get_unaligned((const u64 *)mem_ptr); 478 - chunk = mem ? phys_to_virt(mem) : NULL; 475 + return get_unaligned((const u64 *)mem_ptr); 476 + } 479 477 480 - /* No preserved physical pages were passed, no deserialization */ 481 - if (!chunk) 482 - return false; 483 - 478 + static void __init kho_mem_deserialize(struct khoser_mem_chunk *chunk) 479 + { 484 480 while (chunk) { 485 481 unsigned int i; 486 482 ··· 485 489 &chunk->bitmaps[i]); 486 490 chunk = KHOSER_LOAD_PTR(chunk->hdr.next); 487 491 } 488 - 489 - return true; 490 492 } 491 493 492 494 /* ··· 1247 1253 struct kho_in { 1248 1254 phys_addr_t fdt_phys; 1249 1255 phys_addr_t scratch_phys; 1256 + phys_addr_t mem_map_phys; 1250 1257 struct kho_debugfs dbg; 1251 1258 }; 1252 1259 ··· 1429 1434 1430 1435 void __init kho_memory_init(void) 1431 1436 { 1432 - if (kho_in.scratch_phys) { 1437 + if (kho_in.mem_map_phys) { 1433 1438 kho_scratch = phys_to_virt(kho_in.scratch_phys); 1434 1439 kho_release_scratch(); 1435 - 1436 - if (!kho_mem_deserialize(kho_get_fdt())) 1437 - kho_in.fdt_phys = 0; 1440 + kho_mem_deserialize(phys_to_virt(kho_in.mem_map_phys)); 1438 1441 } else { 1439 1442 kho_reserve_scratch(); 1440 1443 } ··· 1441 1448 void __init kho_populate(phys_addr_t fdt_phys, u64 fdt_len, 1442 1449 phys_addr_t scratch_phys, u64 scratch_len) 1443 1450 { 1444 - void *fdt = NULL; 1445 
1451 struct kho_scratch *scratch = NULL; 1452 + phys_addr_t mem_map_phys; 1453 + void *fdt = NULL; 1446 1454 int err = 0; 1447 1455 unsigned int scratch_cnt = scratch_len / sizeof(*kho_scratch); 1448 1456 ··· 1466 1472 pr_warn("setup: handover FDT (0x%llx) is incompatible with '%s': %d\n", 1467 1473 fdt_phys, KHO_FDT_COMPATIBLE, err); 1468 1474 err = -EINVAL; 1475 + goto out; 1476 + } 1477 + 1478 + mem_map_phys = kho_get_mem_map_phys(fdt); 1479 + if (!mem_map_phys) { 1480 + err = -ENOENT; 1469 1481 goto out; 1470 1482 } 1471 1483 ··· 1515 1515 1516 1516 kho_in.fdt_phys = fdt_phys; 1517 1517 kho_in.scratch_phys = scratch_phys; 1518 + kho_in.mem_map_phys = mem_map_phys; 1518 1519 kho_scratch_cnt = scratch_cnt; 1519 1520 pr_info("found kexec handover data.\n"); 1520 1521
+1
kernel/module/kmod.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * kmod - the kernel module loader 3 4 *
+140 -73
kernel/power/em_netlink.c
··· 12 12 #include <linux/energy_model.h> 13 13 #include <net/sock.h> 14 14 #include <net/genetlink.h> 15 - #include <uapi/linux/energy_model.h> 15 + #include <uapi/linux/dev_energymodel.h> 16 16 17 17 #include "em_netlink.h" 18 18 #include "em_netlink_autogen.h" 19 19 20 - #define EM_A_PD_CPUS_LEN 256 21 - 22 20 /*************************** Command encoding ********************************/ 21 + struct dump_ctx { 22 + int idx; 23 + int start; 24 + struct sk_buff *skb; 25 + struct netlink_callback *cb; 26 + }; 27 + 23 28 static int __em_nl_get_pd_size(struct em_perf_domain *pd, void *data) 24 29 { 25 - char cpus_buf[EM_A_PD_CPUS_LEN]; 30 + int nr_cpus, msg_sz, cpus_sz; 26 31 int *tot_msg_sz = data; 27 - int msg_sz, cpus_sz; 28 32 29 - cpus_sz = snprintf(cpus_buf, sizeof(cpus_buf), "%*pb", 30 - cpumask_pr_args(to_cpumask(pd->cpus))); 33 + nr_cpus = cpumask_weight(to_cpumask(pd->cpus)); 34 + cpus_sz = nla_total_size_64bit(sizeof(u64)) * nr_cpus; 31 35 32 - msg_sz = nla_total_size(0) + /* EM_A_PDS_PD */ 33 - nla_total_size(sizeof(u32)) + /* EM_A_PD_PD_ID */ 34 - nla_total_size_64bit(sizeof(u64)) + /* EM_A_PD_FLAGS */ 35 - nla_total_size(cpus_sz); /* EM_A_PD_CPUS */ 36 + msg_sz = nla_total_size(0) + 37 + /* DEV_ENERGYMODEL_A_PERF_DOMAINS_PERF_DOMAIN */ 38 + nla_total_size(sizeof(u32)) + 39 + /* DEV_ENERGYMODEL_A_PERF_DOMAIN_PERF_DOMAIN_ID */ 40 + nla_total_size_64bit(sizeof(u64)) + 41 + /* DEV_ENERGYMODEL_A_PERF_DOMAIN_FLAGS */ 42 + nla_total_size(cpus_sz); 43 + /* DEV_ENERGYMODEL_A_PERF_DOMAIN_CPUS */ 36 44 37 45 *tot_msg_sz += nlmsg_total_size(genlmsg_msg_size(msg_sz)); 38 46 return 0; ··· 48 40 49 41 static int __em_nl_get_pd(struct em_perf_domain *pd, void *data) 50 42 { 51 - char cpus_buf[EM_A_PD_CPUS_LEN]; 52 43 struct sk_buff *msg = data; 53 - struct nlattr *entry; 44 + struct cpumask *cpumask; 45 + int cpu; 54 46 55 - entry = nla_nest_start(msg, EM_A_PDS_PD); 56 - if (!entry) 47 + if (nla_put_u32(msg, DEV_ENERGYMODEL_A_PERF_DOMAIN_PERF_DOMAIN_ID, 48 + pd->id)) 
57 49 goto out_cancel_nest; 58 50 59 - if (nla_put_u32(msg, EM_A_PD_PD_ID, pd->id)) 51 + if (nla_put_u64_64bit(msg, DEV_ENERGYMODEL_A_PERF_DOMAIN_FLAGS, 52 + pd->flags, DEV_ENERGYMODEL_A_PERF_DOMAIN_PAD)) 60 53 goto out_cancel_nest; 61 54 62 - if (nla_put_u64_64bit(msg, EM_A_PD_FLAGS, pd->flags, EM_A_PD_PAD)) 63 - goto out_cancel_nest; 64 - 65 - snprintf(cpus_buf, sizeof(cpus_buf), "%*pb", 66 - cpumask_pr_args(to_cpumask(pd->cpus))); 67 - if (nla_put_string(msg, EM_A_PD_CPUS, cpus_buf)) 68 - goto out_cancel_nest; 69 - 70 - nla_nest_end(msg, entry); 55 + cpumask = to_cpumask(pd->cpus); 56 + for_each_cpu(cpu, cpumask) { 57 + if (nla_put_u64_64bit(msg, DEV_ENERGYMODEL_A_PERF_DOMAIN_CPUS, 58 + cpu, DEV_ENERGYMODEL_A_PERF_DOMAIN_PAD)) 59 + goto out_cancel_nest; 60 + } 71 61 72 62 return 0; 73 63 74 64 out_cancel_nest: 75 - nla_nest_cancel(msg, entry); 76 - 77 65 return -EMSGSIZE; 78 66 } 79 67 80 - int em_nl_get_pds_doit(struct sk_buff *skb, struct genl_info *info) 68 + static int __em_nl_get_pd_for_dump(struct em_perf_domain *pd, void *data) 81 69 { 70 + const struct genl_info *info; 71 + struct dump_ctx *ctx = data; 72 + void *hdr; 73 + int ret; 74 + 75 + if (ctx->idx++ < ctx->start) 76 + return 0; 77 + 78 + info = genl_info_dump(ctx->cb); 79 + hdr = genlmsg_iput(ctx->skb, info); 80 + if (!hdr) { 81 + genlmsg_cancel(ctx->skb, hdr); 82 + return -EMSGSIZE; 83 + } 84 + 85 + ret = __em_nl_get_pd(pd, ctx->skb); 86 + genlmsg_end(ctx->skb, hdr); 87 + return ret; 88 + } 89 + 90 + int dev_energymodel_nl_get_perf_domains_doit(struct sk_buff *skb, 91 + struct genl_info *info) 92 + { 93 + int id, ret = -EMSGSIZE, msg_sz = 0; 94 + int cmd = info->genlhdr->cmd; 95 + struct em_perf_domain *pd; 82 96 struct sk_buff *msg; 83 97 void *hdr; 84 - int cmd = info->genlhdr->cmd; 85 - int ret = -EMSGSIZE, msg_sz = 0; 86 98 87 - for_each_em_perf_domain(__em_nl_get_pd_size, &msg_sz); 99 + if (!info->attrs[DEV_ENERGYMODEL_A_PERF_DOMAIN_PERF_DOMAIN_ID]) 100 + return -EINVAL; 88 101 102 + id = 
nla_get_u32(info->attrs[DEV_ENERGYMODEL_A_PERF_DOMAIN_PERF_DOMAIN_ID]); 103 + pd = em_perf_domain_get_by_id(id); 104 + 105 + __em_nl_get_pd_size(pd, &msg_sz); 89 106 msg = genlmsg_new(msg_sz, GFP_KERNEL); 90 107 if (!msg) 91 108 return -ENOMEM; 92 109 93 - hdr = genlmsg_put_reply(msg, info, &em_nl_family, 0, cmd); 110 + hdr = genlmsg_put_reply(msg, info, &dev_energymodel_nl_family, 0, cmd); 94 111 if (!hdr) 95 112 goto out_free_msg; 96 113 97 - ret = for_each_em_perf_domain(__em_nl_get_pd, msg); 114 + ret = __em_nl_get_pd(pd, msg); 98 115 if (ret) 99 116 goto out_cancel_msg; 100 - 101 117 genlmsg_end(msg, hdr); 102 118 103 119 return genlmsg_reply(msg, info); ··· 130 98 genlmsg_cancel(msg, hdr); 131 99 out_free_msg: 132 100 nlmsg_free(msg); 133 - 134 101 return ret; 102 + } 103 + 104 + int dev_energymodel_nl_get_perf_domains_dumpit(struct sk_buff *skb, 105 + struct netlink_callback *cb) 106 + { 107 + struct dump_ctx ctx = { 108 + .idx = 0, 109 + .start = cb->args[0], 110 + .skb = skb, 111 + .cb = cb, 112 + }; 113 + 114 + return for_each_em_perf_domain(__em_nl_get_pd_for_dump, &ctx); 135 115 } 136 116 137 117 static struct em_perf_domain *__em_nl_get_pd_table_id(struct nlattr **attrs) ··· 151 107 struct em_perf_domain *pd; 152 108 int id; 153 109 154 - if (!attrs[EM_A_PD_TABLE_PD_ID]) 110 + if (!attrs[DEV_ENERGYMODEL_A_PERF_TABLE_PERF_DOMAIN_ID]) 155 111 return NULL; 156 112 157 - id = nla_get_u32(attrs[EM_A_PD_TABLE_PD_ID]); 113 + id = nla_get_u32(attrs[DEV_ENERGYMODEL_A_PERF_TABLE_PERF_DOMAIN_ID]); 158 114 pd = em_perf_domain_get_by_id(id); 159 115 return pd; 160 116 } ··· 163 119 { 164 120 int id_sz, ps_sz; 165 121 166 - id_sz = nla_total_size(sizeof(u32)); /* EM_A_PD_TABLE_PD_ID */ 167 - ps_sz = nla_total_size(0) + /* EM_A_PD_TABLE_PS */ 168 - nla_total_size_64bit(sizeof(u64)) + /* EM_A_PS_PERFORMANCE */ 169 - nla_total_size_64bit(sizeof(u64)) + /* EM_A_PS_FREQUENCY */ 170 - nla_total_size_64bit(sizeof(u64)) + /* EM_A_PS_POWER */ 171 - 
nla_total_size_64bit(sizeof(u64)) + /* EM_A_PS_COST */ 172 - nla_total_size_64bit(sizeof(u64)); /* EM_A_PS_FLAGS */ 122 + id_sz = nla_total_size(sizeof(u32)); 123 + /* DEV_ENERGYMODEL_A_PERF_TABLE_PERF_DOMAIN_ID */ 124 + ps_sz = nla_total_size(0) + 125 + /* DEV_ENERGYMODEL_A_PERF_TABLE_PERF_STATE */ 126 + nla_total_size_64bit(sizeof(u64)) + 127 + /* DEV_ENERGYMODEL_A_PERF_STATE_PERFORMANCE */ 128 + nla_total_size_64bit(sizeof(u64)) + 129 + /* DEV_ENERGYMODEL_A_PERF_STATE_FREQUENCY */ 130 + nla_total_size_64bit(sizeof(u64)) + 131 + /* DEV_ENERGYMODEL_A_PERF_STATE_POWER */ 132 + nla_total_size_64bit(sizeof(u64)) + 133 + /* DEV_ENERGYMODEL_A_PERF_STATE_COST */ 134 + nla_total_size_64bit(sizeof(u64)); 135 + /* DEV_ENERGYMODEL_A_PERF_STATE_FLAGS */ 173 136 ps_sz *= pd->nr_perf_states; 174 137 175 138 return nlmsg_total_size(genlmsg_msg_size(id_sz + ps_sz)); 176 139 } 177 140 178 - static int __em_nl_get_pd_table(struct sk_buff *msg, const struct em_perf_domain *pd) 141 + static 142 + int __em_nl_get_pd_table(struct sk_buff *msg, const struct em_perf_domain *pd) 179 143 { 180 144 struct em_perf_state *table, *ps; 181 145 struct nlattr *entry; 182 146 int i; 183 147 184 - if (nla_put_u32(msg, EM_A_PD_TABLE_PD_ID, pd->id)) 148 + if (nla_put_u32(msg, DEV_ENERGYMODEL_A_PERF_TABLE_PERF_DOMAIN_ID, 149 + pd->id)) 185 150 goto out_err; 186 151 187 152 rcu_read_lock(); ··· 199 146 for (i = 0; i < pd->nr_perf_states; i++) { 200 147 ps = &table[i]; 201 148 202 - entry = nla_nest_start(msg, EM_A_PD_TABLE_PS); 149 + entry = nla_nest_start(msg, 150 + DEV_ENERGYMODEL_A_PERF_TABLE_PERF_STATE); 203 151 if (!entry) 204 152 goto out_unlock_ps; 205 153 206 - if (nla_put_u64_64bit(msg, EM_A_PS_PERFORMANCE, 207 - ps->performance, EM_A_PS_PAD)) 154 + if (nla_put_u64_64bit(msg, 155 + DEV_ENERGYMODEL_A_PERF_STATE_PERFORMANCE, 156 + ps->performance, 157 + DEV_ENERGYMODEL_A_PERF_STATE_PAD)) 208 158 goto out_cancel_ps_nest; 209 - if (nla_put_u64_64bit(msg, EM_A_PS_FREQUENCY, 210 - ps->frequency, 
EM_A_PS_PAD)) 159 + if (nla_put_u64_64bit(msg, 160 + DEV_ENERGYMODEL_A_PERF_STATE_FREQUENCY, 161 + ps->frequency, 162 + DEV_ENERGYMODEL_A_PERF_STATE_PAD)) 211 163 goto out_cancel_ps_nest; 212 - if (nla_put_u64_64bit(msg, EM_A_PS_POWER, 213 - ps->power, EM_A_PS_PAD)) 164 + if (nla_put_u64_64bit(msg, 165 + DEV_ENERGYMODEL_A_PERF_STATE_POWER, 166 + ps->power, 167 + DEV_ENERGYMODEL_A_PERF_STATE_PAD)) 214 168 goto out_cancel_ps_nest; 215 - if (nla_put_u64_64bit(msg, EM_A_PS_COST, 216 - ps->cost, EM_A_PS_PAD)) 169 + if (nla_put_u64_64bit(msg, 170 + DEV_ENERGYMODEL_A_PERF_STATE_COST, 171 + ps->cost, 172 + DEV_ENERGYMODEL_A_PERF_STATE_PAD)) 217 173 goto out_cancel_ps_nest; 218 - if (nla_put_u64_64bit(msg, EM_A_PS_FLAGS, 219 - ps->flags, EM_A_PS_PAD)) 174 + if (nla_put_u64_64bit(msg, 175 + DEV_ENERGYMODEL_A_PERF_STATE_FLAGS, 176 + ps->flags, 177 + DEV_ENERGYMODEL_A_PERF_STATE_PAD)) 220 178 goto out_cancel_ps_nest; 221 179 222 180 nla_nest_end(msg, entry); ··· 243 179 return -EMSGSIZE; 244 180 } 245 181 246 - int em_nl_get_pd_table_doit(struct sk_buff *skb, struct genl_info *info) 182 + int dev_energymodel_nl_get_perf_table_doit(struct sk_buff *skb, 183 + struct genl_info *info) 247 184 { 248 185 int cmd = info->genlhdr->cmd; 249 186 int msg_sz, ret = -EMSGSIZE; ··· 262 197 if (!msg) 263 198 return -ENOMEM; 264 199 265 - hdr = genlmsg_put_reply(msg, info, &em_nl_family, 0, cmd); 200 + hdr = genlmsg_put_reply(msg, info, &dev_energymodel_nl_family, 0, cmd); 266 201 if (!hdr) 267 202 goto out_free_msg; 268 203 ··· 286 221 int msg_sz, ret = -EMSGSIZE; 287 222 void *hdr; 288 223 289 - if (!genl_has_listeners(&em_nl_family, &init_net, EM_NLGRP_EVENT)) 224 + if (!genl_has_listeners(&dev_energymodel_nl_family, &init_net, DEV_ENERGYMODEL_NLGRP_EVENT)) 290 225 return; 291 226 292 227 msg_sz = __em_nl_get_pd_table_size(pd); ··· 295 230 if (!msg) 296 231 return; 297 232 298 - hdr = genlmsg_put(msg, 0, 0, &em_nl_family, 0, ntf_type); 233 + hdr = genlmsg_put(msg, 0, 0, 
&dev_energymodel_nl_family, 0, ntf_type); 299 234 if (!hdr) 300 235 goto out_free_msg; 301 236 ··· 305 240 306 241 genlmsg_end(msg, hdr); 307 242 308 - genlmsg_multicast(&em_nl_family, msg, 0, EM_NLGRP_EVENT, GFP_KERNEL); 243 + genlmsg_multicast(&dev_energymodel_nl_family, msg, 0, 244 + DEV_ENERGYMODEL_NLGRP_EVENT, GFP_KERNEL); 309 245 310 246 return; 311 247 312 248 out_free_msg: 313 249 nlmsg_free(msg); 314 - return; 315 250 } 316 251 317 252 void em_notify_pd_created(const struct em_perf_domain *pd) 318 253 { 319 - __em_notify_pd_table(pd, EM_CMD_PD_CREATED); 254 + __em_notify_pd_table(pd, DEV_ENERGYMODEL_CMD_PERF_DOMAIN_CREATED); 320 255 } 321 256 322 257 void em_notify_pd_updated(const struct em_perf_domain *pd) 323 258 { 324 - __em_notify_pd_table(pd, EM_CMD_PD_UPDATED); 259 + __em_notify_pd_table(pd, DEV_ENERGYMODEL_CMD_PERF_DOMAIN_UPDATED); 325 260 } 326 261 327 262 static int __em_notify_pd_deleted_size(const struct em_perf_domain *pd) 328 263 { 329 - int id_sz = nla_total_size(sizeof(u32)); /* EM_A_PD_TABLE_PD_ID */ 264 + int id_sz = nla_total_size(sizeof(u32)); /* DEV_ENERGYMODEL_A_PERF_TABLE_PERF_DOMAIN_ID */ 330 265 331 266 return nlmsg_total_size(genlmsg_msg_size(id_sz)); 332 267 } ··· 337 272 void *hdr; 338 273 int msg_sz; 339 274 340 - if (!genl_has_listeners(&em_nl_family, &init_net, EM_NLGRP_EVENT)) 275 + if (!genl_has_listeners(&dev_energymodel_nl_family, &init_net, 276 + DEV_ENERGYMODEL_NLGRP_EVENT)) 341 277 return; 342 278 343 279 msg_sz = __em_notify_pd_deleted_size(pd); ··· 347 281 if (!msg) 348 282 return; 349 283 350 - hdr = genlmsg_put(msg, 0, 0, &em_nl_family, 0, EM_CMD_PD_DELETED); 284 + hdr = genlmsg_put(msg, 0, 0, &dev_energymodel_nl_family, 0, 285 + DEV_ENERGYMODEL_CMD_PERF_DOMAIN_DELETED); 351 286 if (!hdr) 352 287 goto out_free_msg; 353 288 354 - if (nla_put_u32(msg, EM_A_PD_TABLE_PD_ID, pd->id)) { 289 + if (nla_put_u32(msg, DEV_ENERGYMODEL_A_PERF_TABLE_PERF_DOMAIN_ID, 290 + pd->id)) 355 291 goto out_free_msg; 356 - } 357 292 358 
293 genlmsg_end(msg, hdr); 359 294 360 - genlmsg_multicast(&em_nl_family, msg, 0, EM_NLGRP_EVENT, GFP_KERNEL); 295 + genlmsg_multicast(&dev_energymodel_nl_family, msg, 0, 296 + DEV_ENERGYMODEL_NLGRP_EVENT, GFP_KERNEL); 361 297 362 298 return; 363 299 364 300 out_free_msg: 365 301 nlmsg_free(msg); 366 - return; 367 302 } 368 303 369 304 /**************************** Initialization *********************************/ 370 305 static int __init em_netlink_init(void) 371 306 { 372 - return genl_register_family(&em_nl_family); 307 + return genl_register_family(&dev_energymodel_nl_family); 373 308 } 374 309 postcore_initcall(em_netlink_init);
+4 -2
kernel/power/energy_model.c
··· 449 449 INIT_LIST_HEAD(&pd->node); 450 450 451 451 id = ida_alloc(&em_pd_ida, GFP_KERNEL); 452 - if (id < 0) 453 - return -ENOMEM; 452 + if (id < 0) { 453 + kfree(pd); 454 + return id; 455 + } 454 456 pd->id = id; 455 457 456 458 em_table = em_table_alloc(pd);
+18 -20
kernel/printk/nbcon.c
··· 1557 1557 ctxt->allow_unsafe_takeover = nbcon_allow_unsafe_takeover(); 1558 1558 1559 1559 while (nbcon_seq_read(con) < stop_seq) { 1560 - if (!nbcon_context_try_acquire(ctxt, false)) 1561 - return -EPERM; 1562 - 1563 1560 /* 1564 - * nbcon_emit_next_record() returns false when the console was 1565 - * handed over or taken over. In both cases the context is no 1566 - * longer valid. 1561 + * Atomic flushing does not use console driver synchronization 1562 + * (i.e. it does not hold the port lock for uart consoles). 1563 + * Therefore IRQs must be disabled to avoid being interrupted 1564 + * and then calling into a driver that will deadlock trying 1565 + * to acquire console ownership. 1567 1566 */ 1568 - if (!nbcon_emit_next_record(&wctxt, true)) 1569 - return -EAGAIN; 1567 + scoped_guard(irqsave) { 1568 + if (!nbcon_context_try_acquire(ctxt, false)) 1569 + return -EPERM; 1570 1570 1571 - nbcon_context_release(ctxt); 1571 + /* 1572 + * nbcon_emit_next_record() returns false when 1573 + * the console was handed over or taken over. 1574 + * In both cases the context is no longer valid. 1575 + */ 1576 + if (!nbcon_emit_next_record(&wctxt, true)) 1577 + return -EAGAIN; 1578 + 1579 + nbcon_context_release(ctxt); 1580 + } 1572 1581 1573 1582 if (!ctxt->backlog) { 1574 1583 /* Are there reserved but not yet finalized records? */ ··· 1604 1595 static void nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq) 1605 1596 { 1606 1597 struct console_flush_type ft; 1607 - unsigned long flags; 1608 1598 int err; 1609 1599 1610 1600 again: 1611 - /* 1612 - * Atomic flushing does not use console driver synchronization (i.e. 1613 - * it does not hold the port lock for uart consoles). Therefore IRQs 1614 - * must be disabled to avoid being interrupted and then calling into 1615 - * a driver that will deadlock trying to acquire console ownership. 
1616 - */ 1617 - local_irq_save(flags); 1618 - 1619 1601 err = __nbcon_atomic_flush_pending_con(con, stop_seq); 1620 - 1621 - local_irq_restore(flags); 1622 1602 1623 1603 /* 1624 1604 * If there was a new owner (-EPERM, -EAGAIN), that context is
+11 -7
kernel/sched/core.c
··· 4950 4950 return __splice_balance_callbacks(rq, true); 4951 4951 } 4952 4952 4953 - static void __balance_callbacks(struct rq *rq) 4953 + void __balance_callbacks(struct rq *rq, struct rq_flags *rf) 4954 4954 { 4955 + if (rf) 4956 + rq_unpin_lock(rq, rf); 4955 4957 do_balance_callbacks(rq, __splice_balance_callbacks(rq, false)); 4958 + if (rf) 4959 + rq_repin_lock(rq, rf); 4956 4960 } 4957 4961 4958 4962 void balance_callbacks(struct rq *rq, struct balance_callback *head) ··· 4995 4991 * prev into current: 4996 4992 */ 4997 4993 spin_acquire(&__rq_lockp(rq)->dep_map, 0, 0, _THIS_IP_); 4998 - __balance_callbacks(rq); 4994 + __balance_callbacks(rq, NULL); 4999 4995 raw_spin_rq_unlock_irq(rq); 5000 4996 } 5001 4997 ··· 6871 6867 proxy_tag_curr(rq, next); 6872 6868 6873 6869 rq_unpin_lock(rq, &rf); 6874 - __balance_callbacks(rq); 6870 + __balance_callbacks(rq, NULL); 6875 6871 raw_spin_rq_unlock_irq(rq); 6876 6872 } 6877 6873 trace_sched_exit_tp(is_switch); ··· 7320 7316 trace_sched_pi_setprio(p, pi_task); 7321 7317 oldprio = p->prio; 7322 7318 7323 - if (oldprio == prio) 7319 + if (oldprio == prio && !dl_prio(prio)) 7324 7320 queue_flag &= ~DEQUEUE_MOVE; 7325 7321 7326 7322 prev_class = p->sched_class; ··· 7366 7362 out_unlock: 7367 7363 /* Caller holds task_struct::pi_lock, IRQs are still disabled */ 7368 7364 7369 - rq_unpin_lock(rq, &rf); 7370 - __balance_callbacks(rq); 7371 - rq_repin_lock(rq, &rf); 7365 + __balance_callbacks(rq, &rf); 7372 7366 __task_rq_unlock(rq, p, &rf); 7373 7367 } 7374 7368 #endif /* CONFIG_RT_MUTEXES */ ··· 9126 9124 9127 9125 if (resched) 9128 9126 resched_curr(rq); 9127 + 9128 + __balance_callbacks(rq, &rq_guard.rf); 9129 9129 } 9130 9130 9131 9131 static struct cgroup_subsys_state *
+19 -17
kernel/sched/deadline.c
··· 752 752 struct dl_rq *dl_rq = dl_rq_of_se(dl_se); 753 753 struct rq *rq = rq_of_dl_rq(dl_rq); 754 754 755 - update_rq_clock(rq); 756 - 757 755 WARN_ON(is_dl_boosted(dl_se)); 758 756 WARN_ON(dl_time_before(rq_clock(rq), dl_se->deadline)); 759 757 ··· 1418 1420 1419 1421 static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64 delta_exec) 1420 1422 { 1421 - bool idle = rq->curr == rq->idle; 1423 + bool idle = idle_rq(rq); 1422 1424 s64 scaled_delta_exec; 1423 1425 1424 1426 if (unlikely(delta_exec <= 0)) { ··· 1601 1603 * | 8 | B:zero_laxity-wait | | | 1602 1604 * | | | <---+ | 1603 1605 * | +--------------------------------+ | 1604 - * | | ^ ^ 2 | 1605 - * | | 7 | 2 +--------------------+ 1606 + * | | ^ ^ 2 | 1607 + * | | 7 | 2, 1 +----------------+ 1606 1608 * | v | 1607 1609 * | +-------------+ | 1608 1610 * +-- | C:idle-wait | -+ ··· 1647 1649 * dl_defer_idle = 0 1648 1650 * 1649 1651 * 1650 - * [1] A->B, A->D 1652 + * [1] A->B, A->D, C->B 1651 1653 * dl_server_start() 1654 + * dl_defer_idle = 0; 1655 + * if (dl_server_active) 1656 + * return; // [B] 1652 1657 * dl_server_active = 1; 1653 1658 * enqueue_dl_entity() 1654 1659 * update_dl_entity(WAKEUP) ··· 1760 1759 * "B:zero_laxity-wait" -> "C:idle-wait" [label="7:dl_server_update_idle"] 1761 1760 * "B:zero_laxity-wait" -> "D:running" [label="3:dl_server_timer"] 1762 1761 * "C:idle-wait" -> "A:init" [label="8:dl_server_timer"] 1762 + * "C:idle-wait" -> "B:zero_laxity-wait" [label="1:dl_server_start"] 1763 1763 * "C:idle-wait" -> "B:zero_laxity-wait" [label="2:dl_server_update"] 1764 1764 * "C:idle-wait" -> "C:idle-wait" [label="7:dl_server_update_idle"] 1765 1765 * "D:running" -> "A:init" [label="4:pick_task_dl"] ··· 1786 1784 { 1787 1785 struct rq *rq = dl_se->rq; 1788 1786 1787 + dl_se->dl_defer_idle = 0; 1789 1788 if (!dl_server(dl_se) || dl_se->dl_server_active) 1790 1789 return; 1791 1790 ··· 1837 1834 rq = cpu_rq(cpu); 1838 1835 1839 1836 guard(rq_lock_irq)(rq); 1837 + 
update_rq_clock(rq); 1840 1838 1841 1839 dl_se = &rq->fair_server; 1842 1840 ··· 2214 2210 update_dl_entity(dl_se); 2215 2211 } else if (flags & ENQUEUE_REPLENISH) { 2216 2212 replenish_dl_entity(dl_se); 2217 - } else if ((flags & ENQUEUE_RESTORE) && 2213 + } else if ((flags & ENQUEUE_MOVE) && 2218 2214 !is_dl_boosted(dl_se) && 2219 2215 dl_time_before(dl_se->deadline, rq_clock(rq_of_dl_se(dl_se)))) { 2220 2216 setup_new_dl_entity(dl_se); ··· 3158 3154 struct rq *rq; 3159 3155 struct dl_bw *dl_b; 3160 3156 unsigned int cpu; 3161 - struct cpumask *msk = this_cpu_cpumask_var_ptr(local_cpu_mask_dl); 3157 + struct cpumask *msk; 3162 3158 3163 3159 raw_spin_lock_irqsave(&p->pi_lock, rf.flags); 3164 3160 if (!dl_task(p) || dl_entity_is_special(&p->dl)) { ··· 3166 3162 return; 3167 3163 } 3168 3164 3169 - /* 3170 - * Get an active rq, whose rq->rd traces the correct root 3171 - * domain. 3172 - * Ideally this would be under cpuset reader lock until rq->rd is 3173 - * fetched. However, sleepable locks cannot nest inside pi_lock, so we 3174 - * rely on the caller of dl_add_task_root_domain() holds 'cpuset_mutex' 3175 - * to guarantee the CPU stays in the cpuset. 3176 - */ 3165 + msk = this_cpu_cpumask_var_ptr(local_cpu_mask_dl); 3177 3166 dl_get_task_effective_cpus(p, msk); 3178 3167 cpu = cpumask_first_and(cpu_active_mask, msk); 3179 3168 BUG_ON(cpu >= nr_cpu_ids); 3180 3169 rq = cpu_rq(cpu); 3181 3170 dl_b = &rq->rd->dl_bw; 3182 - /* End of fetching rd */ 3183 3171 3184 3172 raw_spin_lock(&dl_b->lock); 3185 3173 __dl_add(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span)); ··· 3295 3299 3296 3300 static u64 get_prio_dl(struct rq *rq, struct task_struct *p) 3297 3301 { 3302 + /* 3303 + * Make sure to update current so we don't return a stale value. 3304 + */ 3305 + if (task_current_donor(rq, p)) 3306 + update_curr_dl(rq); 3307 + 3298 3308 return p->dl.deadline; 3299 3309 } 3300 3310
+1
kernel/sched/ext.c
··· 545 545 static void __scx_task_iter_rq_unlock(struct scx_task_iter *iter) 546 546 { 547 547 if (iter->locked_task) { 548 + __balance_callbacks(iter->rq, &iter->rf); 548 549 task_rq_unlock(iter->rq, iter->locked_task, &iter->rf); 549 550 iter->locked_task = NULL; 550 551 }
+26 -1
kernel/sched/sched.h
··· 1364 1364 #define cpu_curr(cpu) (cpu_rq(cpu)->curr) 1365 1365 #define raw_rq() raw_cpu_ptr(&runqueues) 1366 1366 1367 + static inline bool idle_rq(struct rq *rq) 1368 + { 1369 + return rq->curr == rq->idle && !rq->nr_running && !rq->ttwu_pending; 1370 + } 1371 + 1372 + /** 1373 + * available_idle_cpu - is a given CPU idle for enqueuing work. 1374 + * @cpu: the CPU in question. 1375 + * 1376 + * Return: 1 if the CPU is currently idle. 0 otherwise. 1377 + */ 1378 + static inline bool available_idle_cpu(int cpu) 1379 + { 1380 + if (!idle_rq(cpu_rq(cpu))) 1381 + return 0; 1382 + 1383 + if (vcpu_is_preempted(cpu)) 1384 + return 0; 1385 + 1386 + return 1; 1387 + } 1388 + 1367 1389 #ifdef CONFIG_SCHED_PROXY_EXEC 1368 1390 static inline void rq_set_donor(struct rq *rq, struct task_struct *t) 1369 1391 { ··· 2388 2366 * should preserve as much state as possible. 2389 2367 * 2390 2368 * MOVE - paired with SAVE/RESTORE, explicitly does not preserve the location 2391 - * in the runqueue. 2369 + * in the runqueue. IOW the priority is allowed to change. Callers 2370 + * must expect to deal with balance callbacks. 2392 2371 * 2393 2372 * NOCLOCK - skip the update_rq_clock() (avoids double updates) 2394 2373 * ··· 3970 3947 extern bool dequeue_task(struct rq *rq, struct task_struct *p, int flags); 3971 3948 3972 3949 extern struct balance_callback *splice_balance_callbacks(struct rq *rq); 3950 + 3951 + extern void __balance_callbacks(struct rq *rq, struct rq_flags *rf); 3973 3952 extern void balance_callbacks(struct rq *rq, struct balance_callback *head); 3974 3953 3975 3954 /*
+2 -30
kernel/sched/syscalls.c
··· 180 180 */ 181 181 int idle_cpu(int cpu) 182 182 { 183 - struct rq *rq = cpu_rq(cpu); 184 - 185 - if (rq->curr != rq->idle) 186 - return 0; 187 - 188 - if (rq->nr_running) 189 - return 0; 190 - 191 - if (rq->ttwu_pending) 192 - return 0; 193 - 194 - return 1; 195 - } 196 - 197 - /** 198 - * available_idle_cpu - is a given CPU idle for enqueuing work. 199 - * @cpu: the CPU in question. 200 - * 201 - * Return: 1 if the CPU is currently idle. 0 otherwise. 202 - */ 203 - int available_idle_cpu(int cpu) 204 - { 205 - if (!idle_cpu(cpu)) 206 - return 0; 207 - 208 - if (vcpu_is_preempted(cpu)) 209 - return 0; 210 - 211 - return 1; 183 + return idle_rq(cpu_rq(cpu)); 212 184 } 213 185 214 186 /** ··· 639 667 * itself. 640 668 */ 641 669 newprio = rt_effective_prio(p, newprio); 642 - if (newprio == oldprio) 670 + if (newprio == oldprio && !dl_prio(newprio)) 643 671 queue_flags &= ~DEQUEUE_MOVE; 644 672 } 645 673
+1 -1
kernel/time/hrtimer.c
··· 913 913 return true; 914 914 915 915 /* Extra check for softirq clock bases */ 916 - if (base->clockid < HRTIMER_BASE_MONOTONIC_SOFT) 916 + if (base->index < HRTIMER_BASE_MONOTONIC_SOFT) 917 917 continue; 918 918 if (cpu_base->softirq_activated) 919 919 continue;
+15 -14
kernel/trace/ftrace.c
··· 1148 1148 }; 1149 1149 1150 1150 #define ENTRY_SIZE sizeof(struct dyn_ftrace) 1151 - #define ENTRIES_PER_PAGE (PAGE_SIZE / ENTRY_SIZE) 1152 1151 1153 1152 static struct ftrace_page *ftrace_pages_start; 1154 1153 static struct ftrace_page *ftrace_pages; ··· 3833 3834 return 0; 3834 3835 } 3835 3836 3836 - static int ftrace_allocate_records(struct ftrace_page *pg, int count) 3837 + static int ftrace_allocate_records(struct ftrace_page *pg, int count, 3838 + unsigned long *num_pages) 3837 3839 { 3838 3840 int order; 3839 3841 int pages; ··· 3844 3844 return -EINVAL; 3845 3845 3846 3846 /* We want to fill as much as possible, with no empty pages */ 3847 - pages = DIV_ROUND_UP(count, ENTRIES_PER_PAGE); 3847 + pages = DIV_ROUND_UP(count * ENTRY_SIZE, PAGE_SIZE); 3848 3848 order = fls(pages) - 1; 3849 3849 3850 3850 again: ··· 3859 3859 } 3860 3860 3861 3861 ftrace_number_of_pages += 1 << order; 3862 + *num_pages += 1 << order; 3862 3863 ftrace_number_of_groups++; 3863 3864 3864 3865 cnt = (PAGE_SIZE << order) / ENTRY_SIZE; ··· 3888 3887 } 3889 3888 3890 3889 static struct ftrace_page * 3891 - ftrace_allocate_pages(unsigned long num_to_init) 3890 + ftrace_allocate_pages(unsigned long num_to_init, unsigned long *num_pages) 3892 3891 { 3893 3892 struct ftrace_page *start_pg; 3894 3893 struct ftrace_page *pg; 3895 3894 int cnt; 3895 + 3896 + *num_pages = 0; 3896 3897 3897 3898 if (!num_to_init) 3898 3899 return NULL; ··· 3909 3906 * waste as little space as possible.
3910 3907 */ 3911 3908 for (;;) { 3912 - cnt = ftrace_allocate_records(pg, num_to_init); 3909 + cnt = ftrace_allocate_records(pg, num_to_init, num_pages); 3913 3910 if (cnt < 0) 3914 3911 goto free_pages; 3915 3912 ··· 7195 7192 if (!count) 7196 7193 return 0; 7197 7194 7198 - pages = DIV_ROUND_UP(count, ENTRIES_PER_PAGE); 7199 - 7200 7195 /* 7201 7196 * Sorting mcount in vmlinux at build time depend on 7202 7197 * CONFIG_BUILDTIME_MCOUNT_SORT, while mcount loc in ··· 7207 7206 test_is_sorted(start, count); 7208 7207 } 7209 7208 7210 - start_pg = ftrace_allocate_pages(count); 7209 + start_pg = ftrace_allocate_pages(count, &pages); 7211 7210 if (!start_pg) 7212 7211 return -ENOMEM; 7213 7212 ··· 7306 7305 /* We should have used all pages unless we skipped some */ 7307 7306 if (pg_unuse) { 7308 7307 unsigned long pg_remaining, remaining = 0; 7309 - unsigned long skip; 7308 + long skip; 7310 7309 7311 7310 /* Count the number of entries unused and compare it to skipped. */ 7312 - pg_remaining = (ENTRIES_PER_PAGE << pg->order) - pg->index; 7311 + pg_remaining = (PAGE_SIZE << pg->order) / ENTRY_SIZE - pg->index; 7313 7312 7314 7313 if (!WARN(skipped < pg_remaining, "Extra allocated pages for ftrace")) { 7315 7314 7316 7315 skip = skipped - pg_remaining; 7317 7316 7318 - for (pg = pg_unuse; pg; pg = pg->next) 7317 + for (pg = pg_unuse; pg && skip > 0; pg = pg->next) { 7319 7318 remaining += 1 << pg->order; 7319 + skip -= (PAGE_SIZE << pg->order) / ENTRY_SIZE; 7320 + } 7320 7321 7321 7322 pages -= remaining; 7322 - 7323 - skip = DIV_ROUND_UP(skip, ENTRIES_PER_PAGE); 7324 7323 7325 7324 /* 7326 7325 * Check to see if the number of pages remaining would 7327 7326 * just fit the number of entries skipped.
7328 7327 */ 7329 - WARN(skip != remaining, "Extra allocated pages for ftrace: %lu with %lu skipped", 7328 + WARN(pg || skip > 0, "Extra allocated pages for ftrace: %lu with %lu skipped", 7330 7329 remaining, skipped); 7331 7330 } 7332 7331 /* Need to synchronize with ftrace_location_range() */
+1 -1
kernel/watchdog.c
··· 71 71 * hard lockup is detected, it could be task, memory, lock etc. 72 72 * Refer include/linux/sys_info.h for detailed bit definition. 73 73 */ 74 - static unsigned long hardlockup_si_mask; 74 + unsigned long hardlockup_si_mask; 75 75 76 76 #ifdef CONFIG_SYSFS 77 77
+20 -12
lib/buildid.c
··· 5 5 #include <linux/elf.h> 6 6 #include <linux/kernel.h> 7 7 #include <linux/pagemap.h> 8 + #include <linux/fs.h> 8 9 #include <linux/secretmem.h> 9 10 10 11 #define BUILD_ID 3 ··· 47 46 48 47 freader_put_folio(r); 49 48 50 - /* reject secretmem folios created with memfd_secret() */ 51 - if (secretmem_mapping(r->file->f_mapping)) 52 - return -EFAULT; 53 - 49 + /* only use page cache lookup - fail if not already cached */ 54 50 r->folio = filemap_get_folio(r->file->f_mapping, file_off >> PAGE_SHIFT); 55 - 56 - /* if sleeping is allowed, wait for the page, if necessary */ 57 - if (r->may_fault && (IS_ERR(r->folio) || !folio_test_uptodate(r->folio))) { 58 - filemap_invalidate_lock_shared(r->file->f_mapping); 59 - r->folio = read_cache_folio(r->file->f_mapping, file_off >> PAGE_SHIFT, 60 - NULL, r->file); 61 - filemap_invalidate_unlock_shared(r->file->f_mapping); 62 - } 63 51 64 52 if (IS_ERR(r->folio) || !folio_test_uptodate(r->folio)) { 65 53 if (!IS_ERR(r->folio)) ··· 85 95 return NULL; 86 96 } 87 97 return r->data + file_off; 98 + } 99 + 100 + /* reject secretmem folios created with memfd_secret() */ 101 + if (secretmem_mapping(r->file->f_mapping)) { 102 + r->err = -EFAULT; 103 + return NULL; 104 + } 105 + 106 + /* use __kernel_read() for sleepable context */ 107 + if (r->may_fault) { 108 + ssize_t ret; 109 + 110 + ret = __kernel_read(r->file, r->buf, sz, &file_off); 111 + if (ret != sz) { 112 + r->err = (ret < 0) ? ret : -EIO; 113 + return NULL; 114 + } 115 + return r->buf; 88 116 } 89 117 90 118 /* fetch or reuse folio for given file offset */
+7 -3
mm/Kconfig
··· 1220 1220 Device memory hotplug support allows for establishing pmem, 1221 1221 or other device driver discovered memory regions, in the 1222 1222 memmap. This allows pfn_to_page() lookups of otherwise 1223 - "device-physical" addresses which is needed for using a DAX 1224 - mapping in an O_DIRECT operation, among other things. 1223 + "device-physical" addresses which is needed for DAX, PCI_P2PDMA, and 1224 + DEVICE_PRIVATE features among others. 1225 1225 1226 - If FS_DAX is enabled, then say Y. 1226 + Enabling this option will reduce the entropy of x86 KASLR memory 1227 + regions. For example - on a 46 bit system, the entropy goes down 1228 + from 16 bits to 15 bits. The actual reduction in entropy depends 1229 + on the physical address bits, on processor features, kernel config 1230 + (5 level page table) and physical memory present on the system. 1227 1231 1228 1232 # 1229 1233 # Helpers to mirror range of the CPU page tables of a process into device page
+37 -4
mm/damon/core.c
··· 1431 1431 return running; 1432 1432 } 1433 1433 1434 + /* 1435 + * damon_call_handle_inactive_ctx() - handle DAMON call request that added to 1436 + * an inactive context. 1437 + * @ctx: The inactive DAMON context. 1438 + * @control: Control variable of the call request. 1439 + * 1440 + * This function is called in a case that @control is added to @ctx but @ctx is 1441 + * not running (inactive). See if @ctx handled @control or not, and cleanup 1442 + * @control if it was not handled. 1443 + * 1444 + * Returns 0 if @control was handled by @ctx, negative error code otherwise. 1445 + */ 1446 + static int damon_call_handle_inactive_ctx( 1447 + struct damon_ctx *ctx, struct damon_call_control *control) 1448 + { 1449 + struct damon_call_control *c; 1450 + 1451 + mutex_lock(&ctx->call_controls_lock); 1452 + list_for_each_entry(c, &ctx->call_controls, list) { 1453 + if (c == control) { 1454 + list_del(&control->list); 1455 + mutex_unlock(&ctx->call_controls_lock); 1456 + return -EINVAL; 1457 + } 1458 + } 1459 + mutex_unlock(&ctx->call_controls_lock); 1460 + return 0; 1461 + } 1462 + 1434 1463 /** 1435 1464 * damon_call() - Invoke a given function on DAMON worker thread (kdamond). 1436 1465 * @ctx: DAMON context to call the function for.
··· 1490 1461 list_add_tail(&control->list, &ctx->call_controls); 1491 1462 mutex_unlock(&ctx->call_controls_lock); 1492 1463 if (!damon_is_running(ctx)) 1493 - return -EINVAL; 1464 + return damon_call_handle_inactive_ctx(ctx, control); 1494 1465 if (control->repeat) 1495 1466 return 0; 1496 1467 wait_for_completion(&control->completion); ··· 2080 2051 2081 2052 rcu_read_lock(); 2082 2053 memcg = mem_cgroup_from_id(goal->memcg_id); 2083 - rcu_read_unlock(); 2084 - if (!memcg) { 2054 + if (!memcg || !mem_cgroup_tryget(memcg)) { 2055 + rcu_read_unlock(); 2085 2056 if (goal->metric == DAMOS_QUOTA_NODE_MEMCG_USED_BP) 2086 2057 return 0; 2087 2058 else /* DAMOS_QUOTA_NODE_MEMCG_FREE_BP */ 2088 2059 return 10000; 2089 2060 } 2061 + rcu_read_unlock(); 2062 + 2090 2063 mem_cgroup_flush_stats(memcg); 2091 2064 lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(goal->nid)); 2092 2065 used_pages = lruvec_page_state(lruvec, NR_ACTIVE_ANON); 2093 2066 used_pages += lruvec_page_state(lruvec, NR_INACTIVE_ANON); 2094 2067 used_pages += lruvec_page_state(lruvec, NR_ACTIVE_FILE); 2095 2068 used_pages += lruvec_page_state(lruvec, NR_INACTIVE_FILE); 2069 + 2070 + mem_cgroup_put(memcg); 2096 2071 2097 2072 si_meminfo_node(&i, goal->nid); 2098 2073 if (goal->metric == DAMOS_QUOTA_NODE_MEMCG_USED_BP) ··· 2784 2751 if (ctx->ops.cleanup) 2785 2752 ctx->ops.cleanup(ctx); 2786 2753 kfree(ctx->regions_score_histogram); 2754 + kdamond_call(ctx, true); 2787 2755 2788 2756 pr_debug("kdamond (%d) finishes\n", current->pid); 2789 2757 mutex_lock(&ctx->kdamond_lock); 2790 2758 ctx->kdamond = NULL; 2791 2759 mutex_unlock(&ctx->kdamond_lock); 2792 2760 2793 - kdamond_call(ctx, true); 2794 2761 damos_walk_cancel(ctx); 2795 2762 2796 2763 mutex_lock(&damon_lock);
+6 -4
mm/damon/sysfs-schemes.c
··· 2152 2152 return err; 2153 2153 err = damos_sysfs_set_dests(scheme); 2154 2154 if (err) 2155 - goto put_access_pattern_out; 2155 + goto rmdir_put_access_pattern_out; 2156 2156 err = damon_sysfs_scheme_set_quotas(scheme); 2157 2157 if (err) 2158 2158 goto put_dests_out; 2159 2159 err = damon_sysfs_scheme_set_watermarks(scheme); 2160 2160 if (err) 2161 - goto put_quotas_access_pattern_out; 2161 + goto rmdir_put_quotas_access_pattern_out; 2162 2162 err = damos_sysfs_set_filter_dirs(scheme); 2163 2163 if (err) 2164 2164 goto put_watermarks_quotas_access_pattern_out; ··· 2183 2183 put_watermarks_quotas_access_pattern_out: 2184 2184 kobject_put(&scheme->watermarks->kobj); 2185 2185 scheme->watermarks = NULL; 2186 - put_quotas_access_pattern_out: 2186 + rmdir_put_quotas_access_pattern_out: 2187 + damon_sysfs_quotas_rm_dirs(scheme->quotas); 2187 2188 kobject_put(&scheme->quotas->kobj); 2188 2189 scheme->quotas = NULL; 2189 2190 put_dests_out: 2190 2191 kobject_put(&scheme->dests->kobj); 2191 2192 scheme->dests = NULL; 2192 - put_access_pattern_out: 2193 + rmdir_put_access_pattern_out: 2194 + damon_sysfs_access_pattern_rm_dirs(scheme->access_pattern); 2193 2195 kobject_put(&scheme->access_pattern->kobj); 2194 2196 scheme->access_pattern = NULL; 2195 2197 return err;
+6 -3
mm/damon/sysfs.c
··· 792 792 nr_regions_range = damon_sysfs_ul_range_alloc(10, 1000); 793 793 if (!nr_regions_range) { 794 794 err = -ENOMEM; 795 - goto put_intervals_out; 795 + goto rmdir_put_intervals_out; 796 796 } 797 797 798 798 err = kobject_init_and_add(&nr_regions_range->kobj, ··· 806 806 put_nr_regions_intervals_out: 807 807 kobject_put(&nr_regions_range->kobj); 808 808 attrs->nr_regions_range = NULL; 809 + rmdir_put_intervals_out: 810 + damon_sysfs_intervals_rm_dirs(intervals); 809 811 put_intervals_out: 810 812 kobject_put(&intervals->kobj); 811 813 attrs->intervals = NULL; ··· 950 948 951 949 err = damon_sysfs_context_set_targets(context); 952 950 if (err) 953 - goto put_attrs_out; 951 + goto rmdir_put_attrs_out; 954 952 955 953 err = damon_sysfs_context_set_schemes(context); 956 954 if (err) ··· 960 958 put_targets_attrs_out: 961 959 kobject_put(&context->targets->kobj); 962 960 context->targets = NULL; 963 - put_attrs_out: 961 + rmdir_put_attrs_out: 962 + damon_sysfs_attrs_rm_dirs(context->attrs); 964 963 kobject_put(&context->attrs->kobj); 965 964 context->attrs = NULL; 966 965 return err;
+16
mm/hugetlb.c
··· 4286 4286 unsigned long tmp; 4287 4287 char *p = s; 4288 4288 4289 + if (!hugepages_supported()) { 4290 + pr_warn("HugeTLB: hugepages unsupported, ignoring hugepages=%s cmdline\n", s); 4291 + return 0; 4292 + } 4293 + 4289 4294 if (!parsed_valid_hugepagesz) { 4290 4295 pr_warn("HugeTLB: hugepages=%s does not follow a valid hugepagesz, ignoring\n", s); 4291 4296 parsed_valid_hugepagesz = true; ··· 4371 4366 unsigned long size; 4372 4367 struct hstate *h; 4373 4368 4369 + if (!hugepages_supported()) { 4370 + pr_warn("HugeTLB: hugepages unsupported, ignoring hugepagesz=%s cmdline\n", s); 4371 + return 0; 4372 + } 4373 + 4374 4374 parsed_valid_hugepagesz = false; 4375 4375 size = (unsigned long)memparse(s, NULL); 4376 4376 ··· 4423 4413 { 4424 4414 unsigned long size; 4425 4415 int i; 4416 + 4417 + if (!hugepages_supported()) { 4418 + pr_warn("HugeTLB: hugepages unsupported, ignoring default_hugepagesz=%s cmdline\n", 4419 + s); 4420 + return 0; 4421 + } 4426 4422 4427 4423 parsed_valid_hugepagesz = false; 4428 4424 if (parsed_default_hugepagesz) {
+1 -1
mm/kmsan/shadow.c
··· 207 207 if (!kmsan_enabled || kmsan_in_runtime()) 208 208 return; 209 209 kmsan_enter_runtime(); 210 - kmsan_internal_poison_memory(page_address(page), page_size(page), 210 + kmsan_internal_poison_memory(page_address(page), PAGE_SIZE << order, 211 211 GFP_KERNEL & ~(__GFP_RECLAIM), 212 212 KMSAN_POISON_CHECK | KMSAN_POISON_FREE); 213 213 kmsan_leave_runtime();
+2
mm/numa_memblks.c
··· 7 7 #include <linux/numa.h> 8 8 #include <linux/numa_memblks.h> 9 9 10 + #include <asm/numa.h> 11 + 10 12 int numa_distance_cnt; 11 13 static u8 *numa_distance; 12 14
+48 -9
mm/page_alloc.c
··· 167 167 pcp_trylock_finish(UP_flags); \ 168 168 }) 169 169 170 + /* 171 + * With the UP spinlock implementation, when we spin_lock(&pcp->lock) (for i.e. 172 + * a potentially remote cpu drain) and get interrupted by an operation that 173 + * attempts pcp_spin_trylock(), we can't rely on the trylock failure due to UP 174 + * spinlock assumptions making the trylock a no-op. So we have to turn that 175 + * spin_lock() to a spin_lock_irqsave(). This works because on UP there are no 176 + * remote cpu's so we can only be locking the only existing local one. 177 + */ 178 + #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT) 179 + static inline void __flags_noop(unsigned long *flags) { } 180 + #define pcp_spin_lock_maybe_irqsave(ptr, flags) \ 181 + ({ \ 182 + __flags_noop(&(flags)); \ 183 + spin_lock(&(ptr)->lock); \ 184 + }) 185 + #define pcp_spin_unlock_maybe_irqrestore(ptr, flags) \ 186 + ({ \ 187 + spin_unlock(&(ptr)->lock); \ 188 + __flags_noop(&(flags)); \ 189 + }) 190 + #else 191 + #define pcp_spin_lock_maybe_irqsave(ptr, flags) \ 192 + spin_lock_irqsave(&(ptr)->lock, flags) 193 + #define pcp_spin_unlock_maybe_irqrestore(ptr, flags) \ 194 + spin_unlock_irqrestore(&(ptr)->lock, flags) 195 + #endif 196 + 170 197 #ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID 171 198 DEFINE_PER_CPU(int, numa_node); 172 199 EXPORT_PER_CPU_SYMBOL(numa_node); ··· 2583 2556 bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp) 2584 2557 { 2585 2558 int high_min, to_drain, to_drain_batched, batch; 2559 + unsigned long UP_flags; 2586 2560 bool todo = false; 2587 2561 2588 2562 high_min = READ_ONCE(pcp->high_min); ··· 2603 2575 to_drain = pcp->count - pcp->high; 2604 2576 while (to_drain > 0) { 2605 2577 to_drain_batched = min(to_drain, batch); 2606 - spin_lock(&pcp->lock); 2578 + pcp_spin_lock_maybe_irqsave(pcp, UP_flags); 2607 2579 free_pcppages_bulk(zone, to_drain_batched, pcp, 0); 2608 - spin_unlock(&pcp->lock); 2580 + pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags); 2609 2581
todo = true; 2610 2582 2611 2583 to_drain -= to_drain_batched; ··· 2622 2594 */ 2623 2595 void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp) 2624 2596 { 2597 + unsigned long UP_flags; 2625 2598 int to_drain, batch; 2626 2599 2627 2600 batch = READ_ONCE(pcp->batch); 2628 2601 to_drain = min(pcp->count, batch); 2629 2602 if (to_drain > 0) { 2630 - spin_lock(&pcp->lock); 2603 + pcp_spin_lock_maybe_irqsave(pcp, UP_flags); 2631 2604 free_pcppages_bulk(zone, to_drain, pcp, 0); 2632 - spin_unlock(&pcp->lock); 2605 + pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags); 2633 2606 } 2634 2607 } 2635 2608 #endif ··· 2641 2612 static void drain_pages_zone(unsigned int cpu, struct zone *zone) 2642 2613 { 2643 2614 struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu); 2615 + unsigned long UP_flags; 2644 2616 int count; 2645 2617 2646 2618 do { 2647 - spin_lock(&pcp->lock); 2619 + pcp_spin_lock_maybe_irqsave(pcp, UP_flags); 2648 2620 count = pcp->count; 2649 2621 if (count) { 2650 2622 int to_drain = min(count, ··· 2654 2624 free_pcppages_bulk(zone, to_drain, pcp, 0); 2655 2625 count -= to_drain; 2656 2626 } 2657 - spin_unlock(&pcp->lock); 2627 + pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags); 2658 2628 } while (count); 2659 2629 } 2660 2630 ··· 6139 6109 { 6140 6110 struct per_cpu_pages *pcp; 6141 6111 struct cpu_cacheinfo *cci; 6112 + unsigned long UP_flags; 6142 6113 6143 6114 pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu); 6144 6115 cci = get_cpu_cacheinfo(cpu); ··· 6150 6119 * This can reduce zone lock contention without hurting 6151 6120 * cache-hot pages sharing.
6152 6121 */ 6153 - spin_lock(&pcp->lock); 6122 + pcp_spin_lock_maybe_irqsave(pcp, UP_flags); 6154 6123 if ((cci->per_cpu_data_slice_size >> PAGE_SHIFT) > 3 * pcp->batch) 6155 6124 pcp->flags |= PCPF_FREE_HIGH_BATCH; 6156 6125 else 6157 6126 pcp->flags &= ~PCPF_FREE_HIGH_BATCH; 6158 - spin_unlock(&pcp->lock); 6127 + pcp_spin_unlock_maybe_irqrestore(pcp, UP_flags); 6159 6128 } 6160 6129 6161 6130 void setup_pcp_cacheinfo(unsigned int cpu) ··· 6698 6667 int old_percpu_pagelist_high_fraction; 6699 6668 int ret; 6700 6669 6670 + /* 6671 + * Avoid using pcp_batch_high_lock for reads as the value is read 6672 + * atomically and a race with offlining is harmless. 6673 + */ 6674 + 6675 + if (!write) 6676 + return proc_dointvec_minmax(table, write, buffer, length, ppos); 6677 + 6701 6678 mutex_lock(&pcp_batch_high_lock); 6702 6679 old_percpu_pagelist_high_fraction = percpu_pagelist_high_fraction; 6703 6680 6704 6681 ret = proc_dointvec_minmax(table, write, buffer, length, ppos); 6705 - if (!write || ret < 0) 6682 + if (ret < 0) 6706 6683 goto out; 6707 6684 6708 6685 /* Sanity checking to avoid pcp imbalance */
+74 -37
mm/vma.c
··· 67 67 .state = VMA_MERGE_START, \ 68 68 } 69 69 70 - /* 71 - * If, at any point, the VMA had unCoW'd mappings from parents, it will maintain 72 - * more than one anon_vma_chain connecting it to more than one anon_vma. A merge 73 - * would mean a wider range of folios sharing the root anon_vma lock, and thus 74 - * potential lock contention, we do not wish to encourage merging such that this 75 - * scales to a problem. 76 - */ 77 - static bool vma_had_uncowed_parents(struct vm_area_struct *vma) 70 + /* Was this VMA ever forked from a parent, i.e. maybe contains CoW mappings? */ 71 + static bool vma_is_fork_child(struct vm_area_struct *vma) 78 72 { 79 73 /* 80 74 * The list_is_singular() test is to avoid merging VMA cloned from 81 - * parents. This can improve scalability caused by anon_vma lock. 75 + * parents. This can improve scalability caused by the anon_vma root 76 + * lock. 82 77 */ 83 78 return vma && vma->anon_vma && !list_is_singular(&vma->anon_vma_chain); 84 79 } ··· 110 115 VM_WARN_ON(src && src_anon != src->anon_vma); 111 116 112 117 /* Case 1 - we will dup_anon_vma() from src into tgt. */ 113 - if (!tgt_anon && src_anon) 114 - return !vma_had_uncowed_parents(src); 118 + if (!tgt_anon && src_anon) { 119 + struct vm_area_struct *copied_from = vmg->copied_from; 120 + 121 + if (vma_is_fork_child(src)) 122 + return false; 123 + if (vma_is_fork_child(copied_from)) 124 + return false; 125 + 126 + return true; 127 + } 115 128 /* Case 2 - we will simply use tgt's anon_vma. */ 116 129 if (tgt_anon && !src_anon) 117 - return !vma_had_uncowed_parents(tgt); 130 + return !vma_is_fork_child(tgt); 118 131 /* Case 3 - the anon_vma's are already shared. */ 119 132 return src_anon == tgt_anon; 120 133 } ··· 832 829 VM_WARN_ON_VMG(middle && 833 830 !(vma_iter_addr(vmg->vmi) >= middle->vm_start && 834 831 vma_iter_addr(vmg->vmi) < middle->vm_end), vmg); 832 + /* An existing merge can never be used by the mremap() logic.
*/ 833 + VM_WARN_ON_VMG(vmg->copied_from, vmg); 835 834 836 835 vmg->state = VMA_MERGE_NOMERGE; 837 836 ··· 1104 1099 } 1105 1100 1106 1101 /* 1102 + * vma_merge_copied_range - Attempt to merge a VMA that is being copied by 1103 + * mremap() 1104 + * 1105 + * @vmg: Describes the VMA we are adding, in the copied-to range @vmg->start to 1106 + * @vmg->end (exclusive), which we try to merge with any adjacent VMAs if 1107 + * possible. 1108 + * 1109 + * vmg->prev, next, start, end, pgoff should all be relative to the COPIED TO 1110 + * range, i.e. the target range for the VMA. 1111 + * 1112 + * Returns: In instances where no merge was possible, NULL. Otherwise, a pointer 1113 + * to the VMA we expanded. 1114 + * 1115 + * ASSUMPTIONS: Same as vma_merge_new_range(), except vmg->middle must contain 1116 + * the copied-from VMA. 1117 + */ 1118 + static struct vm_area_struct *vma_merge_copied_range(struct vma_merge_struct *vmg) 1119 + { 1120 + /* We must have a copied-from VMA. */ 1121 + VM_WARN_ON_VMG(!vmg->middle, vmg); 1122 + 1123 + vmg->copied_from = vmg->middle; 1124 + vmg->middle = NULL; 1125 + return vma_merge_new_range(vmg); 1126 + } 1127 + 1128 + /* 1107 1129 * vma_expand - Expand an existing VMA 1108 1130 * 1109 1131 * @vmg: Describes a VMA expansion operation.
··· 1149 1117 int vma_expand(struct vma_merge_struct *vmg) 1150 1118 { 1151 1119 struct vm_area_struct *anon_dup = NULL; 1152 - bool remove_next = false; 1153 1120 struct vm_area_struct *target = vmg->target; 1154 1121 struct vm_area_struct *next = vmg->next; 1122 + bool remove_next = false; 1155 1123 vm_flags_t sticky_flags; 1156 - 1157 - sticky_flags = vmg->vm_flags & VM_STICKY; 1158 - sticky_flags |= target->vm_flags & VM_STICKY; 1159 - 1160 - VM_WARN_ON_VMG(!target, vmg); 1124 + int ret = 0; 1161 1125 1162 1126 mmap_assert_write_locked(vmg->mm); 1163 - 1164 1127 vma_start_write(target); 1165 - if (next && (target != next) && (vmg->end == next->vm_end)) { 1166 - int ret; 1167 1128 1168 - sticky_flags |= next->vm_flags & VM_STICKY; 1129 + if (next && target != next && vmg->end == next->vm_end) 1169 1130 remove_next = true; 1170 - /* This should already have been checked by this point. */ 1171 - VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg); 1172 - vma_start_write(next); 1173 - /* 1174 - * In this case we don't report OOM, so vmg->give_up_on_mm is 1175 - * safe. 1176 - */ 1177 - ret = dup_anon_vma(target, next, &anon_dup); 1178 - if (ret) 1179 - return ret; 1180 - } 1181 1131 1132 + /* We must have a target. */ 1133 + VM_WARN_ON_VMG(!target, vmg); 1134 + /* This should have already been checked by this point. */ 1135 + VM_WARN_ON_VMG(remove_next && !can_merge_remove_vma(next), vmg); 1182 1136 /* Not merging but overwriting any part of next is not handled. */ 1183 1137 VM_WARN_ON_VMG(next && !remove_next && 1184 1138 next != target && vmg->end > next->vm_start, vmg); 1185 - /* Only handles expanding */ 1139 + /* Only handles expanding.
*/ 1186 1140 VM_WARN_ON_VMG(target->vm_start < vmg->start || 1187 1141 target->vm_end > vmg->end, vmg); 1188 1142 1143 + sticky_flags = vmg->vm_flags & VM_STICKY; 1144 + sticky_flags |= target->vm_flags & VM_STICKY; 1189 1145 if (remove_next) 1190 - vmg->__remove_next = true; 1146 + sticky_flags |= next->vm_flags & VM_STICKY; 1191 1147 1148 + /* 1149 + * If we are removing the next VMA or copying from a VMA 1150 + * (e.g. mremap()'ing), we must propagate anon_vma state. 1151 + * 1152 + * Note that, by convention, callers ignore OOM for this case, so 1153 + * we don't need to account for vmg->give_up_on_mm here. 1154 + */ 1155 + if (remove_next) 1156 + ret = dup_anon_vma(target, next, &anon_dup); 1157 + if (!ret && vmg->copied_from) 1158 + ret = dup_anon_vma(target, vmg->copied_from, &anon_dup); 1159 + if (ret) 1160 + return ret; 1161 + 1162 + if (remove_next) { 1163 + vma_start_write(next); 1164 + vmg->__remove_next = true; 1165 + } 1192 1166 if (commit_merge(vmg)) 1193 1167 goto nomem; 1194 1168 ··· 1866 1828 if (new_vma && new_vma->vm_start < addr + len) 1867 1829 return NULL; /* should never get here */ 1868 1830 1869 - vmg.middle = NULL; /* New VMA range. */ 1870 1831 vmg.pgoff = pgoff; 1871 1832 vmg.next = vma_iter_next_rewind(&vmi, NULL); 1872 - new_vma = vma_merge_new_range(&vmg); 1833 + new_vma = vma_merge_copied_range(&vmg); 1873 1834 1874 1835 if (new_vma) { 1875 1836 /*
+3
mm/vma.h
··· 106 106 struct anon_vma_name *anon_name; 107 107 enum vma_merge_state state; 108 108 109 + /* If copied from (i.e. mremap()'d) the VMA from which we are copying. */ 110 + struct vm_area_struct *copied_from; 111 + 109 112 /* Flags which callers can use to modify merge behaviour: */ 110 113 111 114 /*
+1 -1
mm/vmalloc.c
··· 4248 4248 EXPORT_SYMBOL(vzalloc_node_noprof); 4249 4249 4250 4250 /** 4251 - * vrealloc_node_align_noprof - reallocate virtually contiguous memory; contents 4251 + * vrealloc_node_align - reallocate virtually contiguous memory; contents 4252 4252 * remain unchanged 4253 4253 * @p: object to reallocate memory for 4254 4254 * @size: the size to reallocate
+1 -1
mm/zswap.c
··· 787 787 return 0; 788 788 789 789 fail: 790 - if (acomp) 790 + if (!IS_ERR_OR_NULL(acomp)) 791 791 crypto_free_acomp(acomp); 792 792 kfree(buffer); 793 793 return ret;
+1
net/bluetooth/hci_sync.c
··· 4420 4420 if (bis_capable(hdev)) { 4421 4421 events[1] |= 0x20; /* LE PA Report */ 4422 4422 events[1] |= 0x40; /* LE PA Sync Established */ 4423 + events[1] |= 0x80; /* LE PA Sync Lost */ 4423 4424 events[3] |= 0x04; /* LE Create BIG Complete */ 4424 4425 events[3] |= 0x08; /* LE Terminate BIG Complete */ 4425 4426 events[3] |= 0x10; /* LE BIG Sync Established */
+17 -8
net/bpf/test_run.c
··· 1294 1294 batch_size = NAPI_POLL_WEIGHT; 1295 1295 else if (batch_size > TEST_XDP_MAX_BATCH) 1296 1296 return -E2BIG; 1297 - 1298 - headroom += sizeof(struct xdp_page_head); 1299 1297 } else if (batch_size) { 1300 1298 return -EINVAL; 1301 1299 } ··· 1306 1308 /* There can't be user provided data before the meta data */ 1307 1309 if (ctx->data_meta || ctx->data_end > kattr->test.data_size_in || 1308 1310 ctx->data > ctx->data_end || 1309 - unlikely(xdp_metalen_invalid(ctx->data)) || 1310 1311 (do_live && (kattr->test.data_out || kattr->test.ctx_out))) 1311 1312 goto free_ctx; 1312 - /* Meta data is allocated from the headroom */ 1313 - headroom -= ctx->data; 1314 1313 1315 1314 meta_sz = ctx->data; 1315 + if (xdp_metalen_invalid(meta_sz) || meta_sz > headroom - sizeof(struct xdp_frame)) 1316 + goto free_ctx; 1317 + 1318 + /* Meta data is allocated from the headroom */ 1319 + headroom -= meta_sz; 1316 1320 linear_sz = ctx->data_end; 1317 1321 } 1322 + 1323 + /* The xdp_page_head structure takes up space in each page, limiting the 1324 + * size of the packet data; add the extra size to headroom here to make 1325 + * sure it's accounted in the length checks below, but not in the 1326 + * metadata size check above.
1327 + */ 1328 + if (do_live) 1329 + headroom += sizeof(struct xdp_page_head); 1318 1330 1319 1331 max_linear_sz = PAGE_SIZE - headroom - tailroom; 1320 1332 linear_sz = min_t(u32, linear_sz, max_linear_sz); ··· 1363 1355 1364 1356 if (sinfo->nr_frags == MAX_SKB_FRAGS) { 1365 1357 ret = -ENOMEM; 1366 - goto out; 1358 + goto out_put_dev; 1367 1359 } 1368 1360 1369 1361 page = alloc_page(GFP_KERNEL); 1370 1362 if (!page) { 1371 1363 ret = -ENOMEM; 1372 - goto out; 1364 + goto out_put_dev; 1373 1365 } 1374 1366 1375 1367 frag = &sinfo->frags[sinfo->nr_frags++]; ··· 1381 1373 if (copy_from_user(page_address(page), data_in + size, 1382 1374 data_len)) { 1383 1375 ret = -EFAULT; 1384 - goto out; 1376 + goto out_put_dev; 1385 1377 } 1386 1378 sinfo->xdp_frags_size += data_len; 1387 1379 size += data_len; ··· 1396 1388 ret = bpf_test_run_xdp_live(prog, &xdp, repeat, batch_size, &duration); 1397 1389 else 1398 1390 ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true); 1391 + out_put_dev: 1399 1392 /* We convert the xdp_buff back to an xdp_md before checking the return 1400 1393 * code so the reference count of any held netdevice will be decremented 1401 1394 * even if the test run failed.
+16 -12
net/bridge/br_fdb.c
··· 70 70 { 71 71 return !test_bit(BR_FDB_STATIC, &fdb->flags) && 72 72 !test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags) && 73 - time_before_eq(fdb->updated + hold_time(br), jiffies); 73 + time_before_eq(READ_ONCE(fdb->updated) + hold_time(br), jiffies); 74 74 } 75 75 76 76 static int fdb_to_nud(const struct net_bridge *br, ··· 126 126 if (nla_put_u32(skb, NDA_FLAGS_EXT, ext_flags)) 127 127 goto nla_put_failure; 128 128 129 - ci.ndm_used = jiffies_to_clock_t(now - fdb->used); 129 + ci.ndm_used = jiffies_to_clock_t(now - READ_ONCE(fdb->used)); 130 130 ci.ndm_confirmed = 0; 131 - ci.ndm_updated = jiffies_to_clock_t(now - fdb->updated); 131 + ci.ndm_updated = jiffies_to_clock_t(now - READ_ONCE(fdb->updated)); 132 132 ci.ndm_refcnt = 0; 133 133 if (nla_put(skb, NDA_CACHEINFO, sizeof(ci), &ci)) 134 134 goto nla_put_failure; ··· 551 551 */ 552 552 rcu_read_lock(); 553 553 hlist_for_each_entry_rcu(f, &br->fdb_list, fdb_node) { 554 - unsigned long this_timer = f->updated + delay; 554 + unsigned long this_timer = READ_ONCE(f->updated) + delay; 555 555 556 556 if (test_bit(BR_FDB_STATIC, &f->flags) || 557 557 test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &f->flags)) { ··· 924 924 { 925 925 struct net_bridge_fdb_entry *f; 926 926 struct __fdb_entry *fe = buf; 927 + unsigned long delta; 927 928 int num = 0; 928 929 929 930 memset(buf, 0, maxnum*sizeof(struct __fdb_entry)); ··· 954 953 fe->port_hi = f->dst->port_no >> 8; 955 954 956 955 fe->is_local = test_bit(BR_FDB_LOCAL, &f->flags); 957 - if (!test_bit(BR_FDB_STATIC, &f->flags)) 958 - fe->ageing_timer_value = jiffies_delta_to_clock_t(jiffies - f->updated); 956 + if (!test_bit(BR_FDB_STATIC, &f->flags)) { 957 + delta = jiffies - READ_ONCE(f->updated); 958 + fe->ageing_timer_value = 959 + jiffies_delta_to_clock_t(delta); 960 + } 959 961 ++fe; 960 962 ++num; 961 963 } ··· 1006 1002 unsigned long now = jiffies; 1007 1003 bool fdb_modified = false; 1008 1004 1009 - if (now != fdb->updated) { 1010 - fdb->updated = now; 1005 + if (now !=
READ_ONCE(fdb->updated)) { 1006 + WRITE_ONCE(fdb->updated, now); 1011 1007 fdb_modified = __fdb_mark_active(fdb); 1012 1008 } 1013 1009 ··· 1246 1242 if (fdb_handle_notify(fdb, notify)) 1247 1243 modified = true; 1248 1244 1249 - fdb->used = jiffies; 1245 + WRITE_ONCE(fdb->used, jiffies); 1250 1246 if (modified) { 1251 1247 if (refresh) 1252 - fdb->updated = jiffies; 1248 + WRITE_ONCE(fdb->updated, jiffies); 1253 1249 fdb_notify(br, fdb, RTM_NEWNEIGH, true); 1254 1250 } 1255 1251 ··· 1560 1556 goto err_unlock; 1561 1557 } 1562 1558 1563 - fdb->updated = jiffies; 1559 + WRITE_ONCE(fdb->updated, jiffies); 1564 1560 1565 1561 if (READ_ONCE(fdb->dst) != p) { 1566 1562 WRITE_ONCE(fdb->dst, p); ··· 1569 1565 1570 1566 if (test_and_set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) { 1571 1567 /* Refresh entry */ 1572 - fdb->used = jiffies; 1568 + WRITE_ONCE(fdb->used, jiffies); 1573 1569 } else { 1574 1570 modified = true; 1575 1571 }
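The br_fdb.c hunks above annotate every lockless access to `fdb->updated` and `fdb->used` with `READ_ONCE()`/`WRITE_ONCE()`, so the compiler can neither tear nor re-fetch a timestamp that another CPU may be updating concurrently. A minimal userspace sketch of the same annotation pattern (the macros are modeled with volatile casts, as the kernel does; `fdb_touch`/`fdb_expired` are illustrative names, not kernel functions):

```c
#include <assert.h>

/* Userspace stand-ins for the kernel macros: a volatile access stops
 * the compiler from tearing, fusing, or re-reading the load/store. */
#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

/* Wraparound-safe "a <= b", same idea as the kernel's time_before_eq(). */
static int time_before_eq(unsigned long a, unsigned long b)
{
	return (long)(a - b) <= 0;
}

struct fdb_entry {
	unsigned long updated;	/* written by the fast path, read locklessly */
};

static unsigned long jiffies_now;	/* stands in for the kernel's jiffies */

/* Fast path: refresh the timestamp only when it actually changed. */
static void fdb_touch(struct fdb_entry *fdb)
{
	if (jiffies_now != READ_ONCE(fdb->updated))
		WRITE_ONCE(fdb->updated, jiffies_now);
}

/* Ageing check: has hold_time elapsed since the last refresh? */
static int fdb_expired(const struct fdb_entry *fdb, unsigned long hold_time)
{
	return time_before_eq(READ_ONCE(fdb->updated) + hold_time, jiffies_now);
}
```

The conditional store in `fdb_touch()` mirrors the `if (now != READ_ONCE(fdb->updated))` test in `br_fdb_update()`: it avoids dirtying a shared cacheline on every forwarded packet.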
+2 -2
net/bridge/br_input.c
··· 221 221 if (test_bit(BR_FDB_LOCAL, &dst->flags)) 222 222 return br_pass_frame_up(skb, false); 223 223 224 - if (now != dst->used) 225 - dst->used = now; 224 + if (now != READ_ONCE(dst->used)) 225 + WRITE_ONCE(dst->used, now); 226 226 br_forward(dst->dst, skb, local_rcv, false); 227 227 } else { 228 228 if (!mcast_hit)
+9 -1
net/can/j1939/transport.c
··· 1695 1695 1696 1696 j1939_session_timers_cancel(session); 1697 1697 j1939_session_cancel(session, J1939_XTP_ABORT_BUSY); 1698 - if (session->transmission) 1698 + if (session->transmission) { 1699 1699 j1939_session_deactivate_activate_next(session); 1700 + } else if (session->state == J1939_SESSION_WAITING_ABORT) { 1701 + /* Force deactivation for the receiver. 1702 + * If we rely on the timer starting in j1939_session_cancel, 1703 + * a second RTS call here will cancel that timer and fail 1704 + * to restart it because the state is already WAITING_ABORT. 1705 + */ 1706 + j1939_session_deactivate_activate_next(session); 1707 + } 1700 1708 1701 1709 return -EBUSY; 1702 1710 }
+10 -41
net/can/raw.c
··· 49 49 #include <linux/if_arp.h> 50 50 #include <linux/skbuff.h> 51 51 #include <linux/can.h> 52 + #include <linux/can/can-ml.h> 52 53 #include <linux/can/core.h> 53 - #include <linux/can/dev.h> /* for can_is_canxl_dev_mtu() */ 54 54 #include <linux/can/skb.h> 55 55 #include <linux/can/raw.h> 56 56 #include <net/sock.h> ··· 892 892 } 893 893 } 894 894 895 - static inline bool raw_dev_cc_enabled(struct net_device *dev, 896 - struct can_priv *priv) 897 - { 898 - /* The CANXL-only mode disables error-signalling on the CAN bus 899 - * which is needed to send CAN CC/FD frames 900 - */ 901 - if (priv) 902 - return !can_dev_in_xl_only_mode(priv); 903 - 904 - /* virtual CAN interfaces always support CAN CC */ 905 - return true; 906 - } 907 - 908 - static inline bool raw_dev_fd_enabled(struct net_device *dev, 909 - struct can_priv *priv) 910 - { 911 - /* check FD ctrlmode on real CAN interfaces */ 912 - if (priv) 913 - return (priv->ctrlmode & CAN_CTRLMODE_FD); 914 - 915 - /* check MTU for virtual CAN FD interfaces */ 916 - return (READ_ONCE(dev->mtu) >= CANFD_MTU); 917 - } 918 - 919 - static inline bool raw_dev_xl_enabled(struct net_device *dev, 920 - struct can_priv *priv) 921 - { 922 - /* check XL ctrlmode on real CAN interfaces */ 923 - if (priv) 924 - return (priv->ctrlmode & CAN_CTRLMODE_XL); 925 - 926 - /* check MTU for virtual CAN XL interfaces */ 927 - return can_is_canxl_dev_mtu(READ_ONCE(dev->mtu)); 928 - } 929 - 930 895 static unsigned int raw_check_txframe(struct raw_sock *ro, struct sk_buff *skb, 931 896 struct net_device *dev) 932 897 { 933 - struct can_priv *priv = safe_candev_priv(dev); 934 - 935 898 /* Classical CAN */ 936 - if (can_is_can_skb(skb) && raw_dev_cc_enabled(dev, priv)) 899 + if (can_is_can_skb(skb) && can_cap_enabled(dev, CAN_CAP_CC)) 937 900 return CAN_MTU; 938 901 939 902 /* CAN FD */ 940 903 if (ro->fd_frames && can_is_canfd_skb(skb) && 941 - raw_dev_fd_enabled(dev, priv)) 904 + can_cap_enabled(dev, CAN_CAP_FD)) 942 905 return CANFD_MTU; 
943 906 944 907 /* CAN XL */ 945 908 if (ro->xl_frames && can_is_canxl_skb(skb) && 946 - raw_dev_xl_enabled(dev, priv)) 909 + can_cap_enabled(dev, CAN_CAP_XL)) 947 910 return CANXL_MTU; 948 911 949 912 return 0; ··· 944 981 dev = dev_get_by_index(sock_net(sk), ifindex); 945 982 if (!dev) 946 983 return -ENXIO; 984 + 985 + /* no sending on a CAN device in read-only mode */ 986 + if (can_cap_enabled(dev, CAN_CAP_RO)) { 987 + err = -EACCES; 988 + goto put_dev; 989 + } 947 990 948 991 skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv), 949 992 msg->msg_flags & MSG_DONTWAIT, &err);
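The three per-type `raw_dev_*_enabled()` helpers above are folded into a single `can_cap_enabled(dev, cap)` query. The helper itself is introduced elsewhere in the series, so its internals (ctrlmode checks for real devices, MTU checks for virtual ones) are not visible in this diff; the sketch below only shows the shape of such a capability predicate, with hypothetical flag values:

```c
#include <assert.h>

/* Hypothetical capability bits, mirroring CAN_CAP_CC/FD/XL/RO in spirit
 * only; the real values and the helper's internals live in the CAN core. */
enum can_cap {
	CAN_CAP_CC = 1 << 0,	/* Classical CAN */
	CAN_CAP_FD = 1 << 1,	/* CAN FD */
	CAN_CAP_XL = 1 << 2,	/* CAN XL */
	CAN_CAP_RO = 1 << 3,	/* read-only / listen-only device */
};

struct can_dev {
	unsigned int caps;	/* filled in by the driver / CAN core */
};

/* One predicate replaces the per-type raw_dev_*_enabled() helpers. */
static int can_cap_enabled(const struct can_dev *dev, enum can_cap cap)
{
	return (dev->caps & cap) != 0;
}

/* Frame admission in the spirit of raw_check_txframe(): a frame type is
 * only sendable when the device advertises the matching capability. */
static unsigned int check_tx_mtu(const struct can_dev *dev, enum can_cap type,
				 unsigned int mtu)
{
	return can_cap_enabled(dev, type) ? mtu : 0;
}
```

Note how the same predicate also carries the new read-only check in `raw_sendmsg()`: a device with `CAN_CAP_RO` set rejects transmission with `-EACCES` before any skb is allocated.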
+22 -9
net/core/dev.c
··· 478 478 ARPHRD_IEEE1394, ARPHRD_EUI64, ARPHRD_INFINIBAND, ARPHRD_SLIP, 479 479 ARPHRD_CSLIP, ARPHRD_SLIP6, ARPHRD_CSLIP6, ARPHRD_RSRVD, 480 480 ARPHRD_ADAPT, ARPHRD_ROSE, ARPHRD_X25, ARPHRD_HWX25, 481 + ARPHRD_CAN, ARPHRD_MCTP, 481 482 ARPHRD_PPP, ARPHRD_CISCO, ARPHRD_LAPB, ARPHRD_DDCMP, 482 - ARPHRD_RAWHDLC, ARPHRD_TUNNEL, ARPHRD_TUNNEL6, ARPHRD_FRAD, 483 + ARPHRD_RAWHDLC, ARPHRD_RAWIP, 484 + ARPHRD_TUNNEL, ARPHRD_TUNNEL6, ARPHRD_FRAD, 483 485 ARPHRD_SKIP, ARPHRD_LOOPBACK, ARPHRD_LOCALTLK, ARPHRD_FDDI, 484 486 ARPHRD_BIF, ARPHRD_SIT, ARPHRD_IPDDP, ARPHRD_IPGRE, 485 487 ARPHRD_PIMREG, ARPHRD_HIPPI, ARPHRD_ASH, ARPHRD_ECONET, 486 488 ARPHRD_IRDA, ARPHRD_FCPP, ARPHRD_FCAL, ARPHRD_FCPL, 487 489 ARPHRD_FCFABRIC, ARPHRD_IEEE80211, ARPHRD_IEEE80211_PRISM, 488 - ARPHRD_IEEE80211_RADIOTAP, ARPHRD_PHONET, ARPHRD_PHONET_PIPE, 489 - ARPHRD_IEEE802154, ARPHRD_VOID, ARPHRD_NONE}; 490 + ARPHRD_IEEE80211_RADIOTAP, 491 + ARPHRD_IEEE802154, ARPHRD_IEEE802154_MONITOR, 492 + ARPHRD_PHONET, ARPHRD_PHONET_PIPE, 493 + ARPHRD_CAIF, ARPHRD_IP6GRE, ARPHRD_NETLINK, ARPHRD_6LOWPAN, 494 + ARPHRD_VSOCKMON, 495 + ARPHRD_VOID, ARPHRD_NONE}; 490 496 491 497 static const char *const netdev_lock_name[] = { 492 498 "_xmit_NETROM", "_xmit_ETHER", "_xmit_EETHER", "_xmit_AX25", ··· 501 495 "_xmit_IEEE1394", "_xmit_EUI64", "_xmit_INFINIBAND", "_xmit_SLIP", 502 496 "_xmit_CSLIP", "_xmit_SLIP6", "_xmit_CSLIP6", "_xmit_RSRVD", 503 497 "_xmit_ADAPT", "_xmit_ROSE", "_xmit_X25", "_xmit_HWX25", 498 + "_xmit_CAN", "_xmit_MCTP", 504 499 "_xmit_PPP", "_xmit_CISCO", "_xmit_LAPB", "_xmit_DDCMP", 505 - "_xmit_RAWHDLC", "_xmit_TUNNEL", "_xmit_TUNNEL6", "_xmit_FRAD", 500 + "_xmit_RAWHDLC", "_xmit_RAWIP", 501 + "_xmit_TUNNEL", "_xmit_TUNNEL6", "_xmit_FRAD", 506 502 "_xmit_SKIP", "_xmit_LOOPBACK", "_xmit_LOCALTLK", "_xmit_FDDI", 507 503 "_xmit_BIF", "_xmit_SIT", "_xmit_IPDDP", "_xmit_IPGRE", 508 504 "_xmit_PIMREG", "_xmit_HIPPI", "_xmit_ASH", "_xmit_ECONET", 509 505 "_xmit_IRDA", "_xmit_FCPP", "_xmit_FCAL", 
"_xmit_FCPL", 510 506 "_xmit_FCFABRIC", "_xmit_IEEE80211", "_xmit_IEEE80211_PRISM", 511 - "_xmit_IEEE80211_RADIOTAP", "_xmit_PHONET", "_xmit_PHONET_PIPE", 512 - "_xmit_IEEE802154", "_xmit_VOID", "_xmit_NONE"}; 507 + "_xmit_IEEE80211_RADIOTAP", 508 + "_xmit_IEEE802154", "_xmit_IEEE802154_MONITOR", 509 + "_xmit_PHONET", "_xmit_PHONET_PIPE", 510 + "_xmit_CAIF", "_xmit_IP6GRE", "_xmit_NETLINK", "_xmit_6LOWPAN", 511 + "_xmit_VSOCKMON", 512 + "_xmit_VOID", "_xmit_NONE"}; 513 513 514 514 static struct lock_class_key netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)]; 515 515 static struct lock_class_key netdev_addr_lock_key[ARRAY_SIZE(netdev_lock_type)]; ··· 528 516 if (netdev_lock_type[i] == dev_type) 529 517 return i; 530 518 /* the last key is used by default */ 519 + WARN_ONCE(1, "netdev_lock_pos() could not find dev_type=%u\n", dev_type); 531 520 return ARRAY_SIZE(netdev_lock_type) - 1; 532 521 } 533 522 ··· 4203 4190 do { 4204 4191 if (first_n && !defer_count) { 4205 4192 defer_count = atomic_long_inc_return(&q->defer_count); 4206 - if (unlikely(defer_count > READ_ONCE(q->limit))) { 4207 - kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_DROP); 4193 + if (unlikely(defer_count > READ_ONCE(net_hotdata.qdisc_max_burst))) { 4194 + kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_BURST_DROP); 4208 4195 return NET_XMIT_DROP; 4209 4196 } 4210 4197 } ··· 4222 4209 ll_list = llist_del_all(&q->defer_list); 4223 4210 /* There is a small race because we clear defer_count not atomically 4224 4211 * with the prior llist_del_all(). This means defer_list could grow 4225 - * over q->limit. 4212 + * over qdisc_max_burst. 4226 4213 */ 4227 4214 atomic_long_set(&q->defer_count, 0); 4228 4215
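Besides completing the ARPHRD lock-class tables, the dev.c hunks cap the deferred-free path by the new `net.core.qdisc_max_burst` sysctl (default 1000) instead of the qdisc limit, with a dedicated `QDISC_BURST_DROP` reason. A loose userspace sketch of that admission check, with plain counters standing in for the atomic and llist machinery (the kernel additionally increments only on the first enqueue attempt, which is simplified here):

```c
#include <assert.h>

static long qdisc_max_burst = 1000;	/* mirrors net_hotdata.qdisc_max_burst */

struct defer_queue {
	long defer_count;	/* atomic_long_t in the kernel */
};

/* Returns 0 if the packet may be deferred, -1 if it must be dropped. */
static int defer_admit(struct defer_queue *q, int first_n)
{
	if (first_n) {
		long c = ++q->defer_count;	/* atomic_long_inc_return() */

		if (c > qdisc_max_burst)
			return -1;	/* SKB_DROP_REASON_QDISC_BURST_DROP */
	}
	return 0;
}

/* Flush: clearing the counter after draining the list is not atomic with
 * the list removal, so the list may briefly exceed the burst limit, as
 * the comment in the patch itself notes. */
static void defer_flush(struct defer_queue *q)
{
	q->defer_count = 0;
}
```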
+1
net/core/dst.c
··· 68 68 dst->lwtstate = NULL; 69 69 rcuref_init(&dst->__rcuref, 1); 70 70 INIT_LIST_HEAD(&dst->rt_uncached); 71 + dst->rt_uncached_list = NULL; 71 72 dst->__use = 0; 72 73 dst->lastuse = jiffies; 73 74 dst->flags = flags;
+1
net/core/hotdata.c
··· 17 17 18 18 .tstamp_prequeue = 1, 19 19 .max_backlog = 1000, 20 + .qdisc_max_burst = 1000, 20 21 .dev_tx_weight = 64, 21 22 .dev_rx_weight = 64, 22 23 .sysctl_max_skb_frags = MAX_SKB_FRAGS,
+7
net/core/sysctl_net_core.c
··· 430 430 .proc_handler = proc_dointvec 431 431 }, 432 432 { 433 + .procname = "qdisc_max_burst", 434 + .data = &net_hotdata.qdisc_max_burst, 435 + .maxlen = sizeof(int), 436 + .mode = 0644, 437 + .proc_handler = proc_dointvec 438 + }, 439 + { 433 440 .procname = "netdev_rss_key", 434 441 .data = &netdev_rss_key, 435 442 .maxlen = sizeof(int),
+2 -2
net/ipv4/esp4_offload.c
··· 122 122 struct sk_buff *skb, 123 123 netdev_features_t features) 124 124 { 125 - const struct xfrm_mode *inner_mode = xfrm_ip2inner_mode(x, 126 - XFRM_MODE_SKB_CB(skb)->protocol); 125 + struct xfrm_offload *xo = xfrm_offload(skb); 126 + const struct xfrm_mode *inner_mode = xfrm_ip2inner_mode(x, xo->proto); 127 127 __be16 type = inner_mode->family == AF_INET6 ? htons(ETH_P_IPV6) 128 128 : htons(ETH_P_IP); 129 129
+9 -2
net/ipv4/ip_gre.c
··· 891 891 const void *daddr, const void *saddr, unsigned int len) 892 892 { 893 893 struct ip_tunnel *t = netdev_priv(dev); 894 - struct iphdr *iph; 895 894 struct gre_base_hdr *greh; 895 + struct iphdr *iph; 896 + int needed; 896 897 897 - iph = skb_push(skb, t->hlen + sizeof(*iph)); 898 + needed = t->hlen + sizeof(*iph); 899 + if (skb_headroom(skb) < needed && 900 + pskb_expand_head(skb, HH_DATA_ALIGN(needed - skb_headroom(skb)), 901 + 0, GFP_ATOMIC)) 902 + return -needed; 903 + 904 + iph = skb_push(skb, needed); 898 905 greh = (struct gre_base_hdr *)(iph+1); 899 906 greh->flags = gre_tnl_flags_to_gre_flags(t->parms.o_flags); 900 907 greh->protocol = htons(type);
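The ipgre_header() fix stops pushing `t->hlen + sizeof(*iph)` bytes unconditionally: when headroom is short it first grows the head (via `pskb_expand_head()` in the kernel) and fails with a negative length otherwise. A toy buffer illustrating the check-then-push pattern (`struct buf`, `expand_head` and `push_header` are illustrative, not kernel APIs):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy buffer with reserved headroom, loosely modeling an sk_buff. */
struct buf {
	unsigned char *head, *data;
	size_t cap;
};

static size_t headroom(const struct buf *b)
{
	return (size_t)(b->data - b->head);
}

/* Grow the reserved headroom by nhead bytes, like pskb_expand_head(). */
static int expand_head(struct buf *b, size_t nhead)
{
	size_t hr = headroom(b);
	size_t used = b->cap - hr;
	unsigned char *n = malloc(b->cap + nhead);

	if (!n)
		return -1;
	memcpy(n + hr + nhead, b->data, used);	/* payload keeps its offset */
	free(b->head);
	b->head = n;
	b->data = n + hr + nhead;
	b->cap += nhead;
	return 0;
}

/* Push 'needed' header bytes, expanding first when headroom is short,
 * mirroring the fixed ipgre_header(). Returns NULL on failure. */
static unsigned char *push_header(struct buf *b, size_t needed)
{
	if (headroom(b) < needed && expand_head(b, needed - headroom(b)))
		return NULL;
	b->data -= needed;
	return b->data;
}
```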
+2 -3
net/ipv4/ip_tunnel.c
··· 1281 1281 } 1282 1282 EXPORT_SYMBOL_GPL(ip_tunnel_changelink); 1283 1283 1284 - int ip_tunnel_init(struct net_device *dev) 1284 + int __ip_tunnel_init(struct net_device *dev) 1285 1285 { 1286 1286 struct ip_tunnel *tunnel = netdev_priv(dev); 1287 1287 struct iphdr *iph = &tunnel->parms.iph; ··· 1308 1308 1309 1309 if (tunnel->collect_md) 1310 1310 netif_keep_dst(dev); 1311 - netdev_lockdep_set_classes(dev); 1312 1311 return 0; 1313 1312 } 1314 - EXPORT_SYMBOL_GPL(ip_tunnel_init); 1313 + EXPORT_SYMBOL_GPL(__ip_tunnel_init); 1315 1314 1316 1315 void ip_tunnel_uninit(struct net_device *dev) 1317 1316 {
+2 -2
net/ipv4/route.c
··· 1537 1537 1538 1538 void rt_del_uncached_list(struct rtable *rt) 1539 1539 { 1540 - if (!list_empty(&rt->dst.rt_uncached)) { 1541 - struct uncached_list *ul = rt->dst.rt_uncached_list; 1540 + struct uncached_list *ul = rt->dst.rt_uncached_list; 1542 1541 1542 + if (ul) { 1543 1543 spin_lock_bh(&ul->lock); 1544 1544 list_del_init(&rt->dst.rt_uncached); 1545 1545 spin_unlock_bh(&ul->lock);
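Both `rt_del_uncached_list()` here and `rt6_uncached_list_del()` in net/ipv6/route.c switch from a `list_empty()` test to checking the owner pointer `dst->rt_uncached_list`, which the dst.c hunk now initializes to NULL. A self-contained sketch of the idiom, using the kernel's `list_head` style:

```c
#include <assert.h>

/* Minimal doubly linked list, following the kernel's list_head idiom. */
struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h->prev = h;
}

static void list_add(struct list_head *n, struct list_head *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

static void list_del_init(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	INIT_LIST_HEAD(n);
}

static int list_empty(const struct list_head *h)
{
	return h->next == h;
}

struct uncached_list {
	struct list_head head;	/* plus a spinlock in the kernel */
};

struct dst {
	struct list_head rt_uncached;
	struct uncached_list *rt_uncached_list;	/* NULL until first add */
};

/* Adding records the owning list, as rt_add_uncached_list() does. */
static void add_uncached(struct dst *d, struct uncached_list *ul)
{
	d->rt_uncached_list = ul;
	list_add(&d->rt_uncached, &ul->head);
}

/* Removal is keyed off the owner pointer, not list_empty(), so a dst
 * that was never added (pointer still NULL) is skipped safely. */
static void del_uncached(struct dst *d)
{
	struct uncached_list *ul = d->rt_uncached_list;

	if (ul) {
		/* spin_lock_bh(&ul->lock) around this in the kernel */
		list_del_init(&d->rt_uncached);
	}
}
```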
+2 -2
net/ipv6/addrconf.c
··· 3112 3112 in6_ifa_hold(ifp); 3113 3113 read_unlock_bh(&idev->lock); 3114 3114 3115 - ipv6_del_addr(ifp); 3116 - 3117 3115 if (!(ifp->flags & IFA_F_TEMPORARY) && 3118 3116 (ifp->flags & IFA_F_MANAGETEMPADDR)) 3119 3117 delete_tempaddrs(idev, ifp); 3118 + 3119 + ipv6_del_addr(ifp); 3120 3120 3121 3121 addrconf_verify_rtnl(net); 3122 3122 if (ipv6_addr_is_multicast(pfx)) {
+2 -2
net/ipv6/esp6_offload.c
··· 158 158 struct sk_buff *skb, 159 159 netdev_features_t features) 160 160 { 161 - const struct xfrm_mode *inner_mode = xfrm_ip2inner_mode(x, 162 - XFRM_MODE_SKB_CB(skb)->protocol); 161 + struct xfrm_offload *xo = xfrm_offload(skb); 162 + const struct xfrm_mode *inner_mode = xfrm_ip2inner_mode(x, xo->proto); 163 163 __be16 type = inner_mode->family == AF_INET ? htons(ETH_P_IP) 164 164 : htons(ETH_P_IPV6); 165 165
+1 -1
net/ipv6/ip6_tunnel.c
··· 844 844 845 845 skb_reset_network_header(skb); 846 846 847 - if (!pskb_inet_may_pull(skb)) { 847 + if (skb_vlan_inet_prepare(skb, true)) { 848 848 DEV_STATS_INC(tunnel->dev, rx_length_errors); 849 849 DEV_STATS_INC(tunnel->dev, rx_errors); 850 850 goto drop;
+2 -2
net/ipv6/route.c
··· 148 148 149 149 void rt6_uncached_list_del(struct rt6_info *rt) 150 150 { 151 - if (!list_empty(&rt->dst.rt_uncached)) { 152 - struct uncached_list *ul = rt->dst.rt_uncached_list; 151 + struct uncached_list *ul = rt->dst.rt_uncached_list; 153 152 153 + if (ul) { 154 154 spin_lock_bh(&ul->lock); 155 155 list_del_init(&rt->dst.rt_uncached); 156 156 spin_unlock_bh(&ul->lock);
+4 -2
net/sched/sch_qfq.c
··· 529 529 return 0; 530 530 531 531 destroy_class: 532 - qdisc_put(cl->qdisc); 533 - kfree(cl); 532 + if (!existing) { 533 + qdisc_put(cl->qdisc); 534 + kfree(cl); 535 + } 534 536 return err; 535 537 } 536 538
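The qfq fix guards the error path with `!existing`: previously a failure while *changing* a class freed `cl->qdisc` and `cl` even though the caller still owned them. A small sketch of the "free only what this call allocated" ownership rule (`change_class` is a hypothetical stand-in for `qfq_change_class()`):

```c
#include <assert.h>
#include <stdlib.h>

struct qclass {
	int configured;
};

/* On error, resources are released only when they were allocated by
 * this very call (!existing); an existing class stays untouched. */
static int change_class(struct qclass **slot, int fail)
{
	struct qclass *cl = *slot;
	int existing = (cl != NULL);

	if (!existing) {
		cl = calloc(1, sizeof(*cl));
		if (!cl)
			return -1;
	}

	if (fail)
		goto destroy_class;

	cl->configured = 1;
	*slot = cl;
	return 0;

destroy_class:
	if (!existing)		/* never free a class the caller still owns */
		free(cl);
	return -1;
}
```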
+1
net/xfrm/xfrm_state.c
··· 3151 3151 int err; 3152 3152 3153 3153 if (family == AF_INET && 3154 + (!x->dir || x->dir == XFRM_SA_DIR_OUT) && 3154 3155 READ_ONCE(xs_net(x)->ipv4.sysctl_ip_no_pmtu_disc)) 3155 3156 x->props.flags |= XFRM_STATE_NOPMTUDISC; 3156 3157
+42
rust/helpers/bitops.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 3 #include <linux/bitops.h> 4 + #include <linux/find.h> 4 5 5 6 void rust_helper___set_bit(unsigned long nr, unsigned long *addr) 6 7 { ··· 22 21 { 23 22 clear_bit(nr, addr); 24 23 } 24 + 25 + /* 26 + * The rust_helper_ prefix is intentionally omitted below so that the 27 + * declarations in include/linux/find.h are compatible with these helpers. 28 + * 29 + * Note that the below #ifdefs mean that the helper is only created if C does 30 + * not provide a definition. 31 + */ 32 + #ifdef find_first_zero_bit 33 + __rust_helper 34 + unsigned long _find_first_zero_bit(const unsigned long *p, unsigned long size) 35 + { 36 + return find_first_zero_bit(p, size); 37 + } 38 + #endif /* find_first_zero_bit */ 39 + 40 + #ifdef find_next_zero_bit 41 + __rust_helper 42 + unsigned long _find_next_zero_bit(const unsigned long *addr, 43 + unsigned long size, unsigned long offset) 44 + { 45 + return find_next_zero_bit(addr, size, offset); 46 + } 47 + #endif /* find_next_zero_bit */ 48 + 49 + #ifdef find_first_bit 50 + __rust_helper 51 + unsigned long _find_first_bit(const unsigned long *addr, unsigned long size) 52 + { 53 + return find_first_bit(addr, size); 54 + } 55 + #endif /* find_first_bit */ 56 + 57 + #ifdef find_next_bit 58 + __rust_helper 59 + unsigned long _find_next_bit(const unsigned long *addr, unsigned long size, 60 + unsigned long offset) 61 + { 62 + return find_next_bit(addr, size, offset); 63 + } 64 + #endif /* find_next_bit */
+1 -1
security/landlock/audit.c
··· 191 191 long youngest_layer = -1; 192 192 193 193 for_each_set_bit(access_bit, &access_req, layer_masks_size) { 194 - const access_mask_t mask = (*layer_masks)[access_bit]; 194 + const layer_mask_t mask = (*layer_masks)[access_bit]; 195 195 long layer; 196 196 197 197 if (!mask)
+1 -1
security/landlock/domain.h
··· 97 97 */ 98 98 atomic64_t num_denials; 99 99 /** 100 - * @id: Landlock domain ID, sets once at domain creation time. 100 + * @id: Landlock domain ID, set once at domain creation time. 101 101 */ 102 102 u64 id; 103 103 /**
+1 -1
security/landlock/errata/abi-6.h
··· 9 9 * This fix addresses an issue where signal scoping was overly restrictive, 10 10 * preventing sandboxed threads from signaling other threads within the same 11 11 * process if they belonged to different domains. Because threads are not 12 - * security boundaries, user space might assume that any thread within the same 12 + * security boundaries, user space might assume that all threads within the same 13 13 * process can send signals between themselves (see :manpage:`nptl(7)` and 14 14 * :manpage:`libpsx(3)`). Consistent with :manpage:`ptrace(2)` behavior, direct 15 15 * interaction between threads of the same process should always be allowed.
+11 -3
security/landlock/fs.c
··· 939 939 } 940 940 path_put(&walker_path); 941 941 942 - if (!allowed_parent1) { 942 + /* 943 + * Check CONFIG_AUDIT to enable elision of log_request_parent* and 944 + * associated caller's stack variables thanks to dead code elimination. 945 + */ 946 + #ifdef CONFIG_AUDIT 947 + if (!allowed_parent1 && log_request_parent1) { 943 948 log_request_parent1->type = LANDLOCK_REQUEST_FS_ACCESS; 944 949 log_request_parent1->audit.type = LSM_AUDIT_DATA_PATH; 945 950 log_request_parent1->audit.u.path = *path; ··· 954 949 ARRAY_SIZE(*layer_masks_parent1); 955 950 } 956 951 957 - if (!allowed_parent2) { 952 + if (!allowed_parent2 && log_request_parent2) { 958 953 log_request_parent2->type = LANDLOCK_REQUEST_FS_ACCESS; 959 954 log_request_parent2->audit.type = LSM_AUDIT_DATA_PATH; 960 955 log_request_parent2->audit.u.path = *path; ··· 963 958 log_request_parent2->layer_masks_size = 964 959 ARRAY_SIZE(*layer_masks_parent2); 965 960 } 961 + #endif /* CONFIG_AUDIT */ 962 + 966 963 return allowed_parent1 && allowed_parent2; 967 964 } 968 965 ··· 1321 1314 * second call to iput() for the same Landlock object. Also 1322 1315 * checks I_NEW because such inode cannot be tied to an object. 1323 1316 */ 1324 - if (inode_state_read(inode) & (I_FREEING | I_WILL_FREE | I_NEW)) { 1317 + if (inode_state_read(inode) & 1318 + (I_FREEING | I_WILL_FREE | I_NEW)) { 1325 1319 spin_unlock(&inode->i_lock); 1326 1320 continue; 1327 1321 }
+67 -51
security/landlock/net.c
··· 71 71 72 72 switch (address->sa_family) { 73 73 case AF_UNSPEC: 74 + if (access_request == LANDLOCK_ACCESS_NET_CONNECT_TCP) { 75 + /* 76 + * Connecting to an address with AF_UNSPEC dissolves 77 + * the TCP association, which have the same effect as 78 + * closing the connection while retaining the socket 79 + * object (i.e., the file descriptor). As for dropping 80 + * privileges, closing connections is always allowed. 81 + * 82 + * For a TCP access control system, this request is 83 + * legitimate. Let the network stack handle potential 84 + * inconsistencies and return -EINVAL if needed. 85 + */ 86 + return 0; 87 + } else if (access_request == LANDLOCK_ACCESS_NET_BIND_TCP) { 88 + /* 89 + * Binding to an AF_UNSPEC address is treated 90 + * differently by IPv4 and IPv6 sockets. The socket's 91 + * family may change under our feet due to 92 + * setsockopt(IPV6_ADDRFORM), but that's ok: we either 93 + * reject entirely or require 94 + * %LANDLOCK_ACCESS_NET_BIND_TCP for the given port, so 95 + * it cannot be used to bypass the policy. 96 + * 97 + * IPv4 sockets map AF_UNSPEC to AF_INET for 98 + * retrocompatibility for bind accesses, only if the 99 + * address is INADDR_ANY (cf. __inet_bind). IPv6 100 + * sockets always reject it. 101 + * 102 + * Checking the address is required to not wrongfully 103 + * return -EACCES instead of -EAFNOSUPPORT or -EINVAL. 104 + * We could return 0 and let the network stack handle 105 + * these checks, but it is safer to return a proper 106 + * error and test consistency thanks to kselftest. 
107 + */ 108 + if (sock->sk->__sk_common.skc_family == AF_INET) { 109 + const struct sockaddr_in *const sockaddr = 110 + (struct sockaddr_in *)address; 111 + 112 + if (addrlen < sizeof(struct sockaddr_in)) 113 + return -EINVAL; 114 + 115 + if (sockaddr->sin_addr.s_addr != 116 + htonl(INADDR_ANY)) 117 + return -EAFNOSUPPORT; 118 + } else { 119 + if (addrlen < SIN6_LEN_RFC2133) 120 + return -EINVAL; 121 + else 122 + return -EAFNOSUPPORT; 123 + } 124 + } else { 125 + WARN_ON_ONCE(1); 126 + } 127 + /* Only for bind(AF_UNSPEC+INADDR_ANY) on IPv4 socket. */ 128 + fallthrough; 74 129 case AF_INET: { 75 130 const struct sockaddr_in *addr4; 76 131 ··· 174 119 return 0; 175 120 } 176 121 177 - /* Specific AF_UNSPEC handling. */ 178 - if (address->sa_family == AF_UNSPEC) { 179 - /* 180 - * Connecting to an address with AF_UNSPEC dissolves the TCP 181 - * association, which have the same effect as closing the 182 - * connection while retaining the socket object (i.e., the file 183 - * descriptor). As for dropping privileges, closing 184 - * connections is always allowed. 185 - * 186 - * For a TCP access control system, this request is legitimate. 187 - * Let the network stack handle potential inconsistencies and 188 - * return -EINVAL if needed. 189 - */ 190 - if (access_request == LANDLOCK_ACCESS_NET_CONNECT_TCP) 191 - return 0; 192 - 193 - /* 194 - * For compatibility reason, accept AF_UNSPEC for bind 195 - * accesses (mapped to AF_INET) only if the address is 196 - * INADDR_ANY (cf. __inet_bind). Checking the address is 197 - * required to not wrongfully return -EACCES instead of 198 - * -EAFNOSUPPORT. 199 - * 200 - * We could return 0 and let the network stack handle these 201 - * checks, but it is safer to return a proper error and test 202 - * consistency thanks to kselftest. 203 - */ 204 - if (access_request == LANDLOCK_ACCESS_NET_BIND_TCP) { 205 - /* addrlen has already been checked for AF_UNSPEC. 
*/ 206 - const struct sockaddr_in *const sockaddr = 207 - (struct sockaddr_in *)address; 208 - 209 - if (sock->sk->__sk_common.skc_family != AF_INET) 210 - return -EINVAL; 211 - 212 - if (sockaddr->sin_addr.s_addr != htonl(INADDR_ANY)) 213 - return -EAFNOSUPPORT; 214 - } 215 - } else { 216 - /* 217 - * Checks sa_family consistency to not wrongfully return 218 - * -EACCES instead of -EINVAL. Valid sa_family changes are 219 - * only (from AF_INET or AF_INET6) to AF_UNSPEC. 220 - * 221 - * We could return 0 and let the network stack handle this 222 - * check, but it is safer to return a proper error and test 223 - * consistency thanks to kselftest. 224 - */ 225 - if (address->sa_family != sock->sk->__sk_common.skc_family) 226 - return -EINVAL; 227 - } 122 + /* 123 + * Checks sa_family consistency to not wrongfully return 124 + * -EACCES instead of -EINVAL. Valid sa_family changes are 125 + * only (from AF_INET or AF_INET6) to AF_UNSPEC. 126 + * 127 + * We could return 0 and let the network stack handle this 128 + * check, but it is safer to return a proper error and test 129 + * consistency thanks to kselftest. 130 + */ 131 + if (address->sa_family != sock->sk->__sk_common.skc_family && 132 + address->sa_family != AF_UNSPEC) 133 + return -EINVAL; 228 134 229 135 id.key.data = (__force uintptr_t)port; 230 136 BUILD_BUG_ON(sizeof(port) > sizeof(id.key.data));
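The rewritten Landlock AF_UNSPEC handling distinguishes three outcomes for bind: fall through to the regular AF_INET port check (IPv4 socket, `INADDR_ANY`), `-EAFNOSUPPORT` (any other address, or an IPv6 socket), or `-EINVAL` (short address). A userspace sketch of that validation decision; error values are local stand-ins, and the length threshold differs slightly from the kernel, which compares the IPv6 length against `SIN6_LEN_RFC2133` rather than `sizeof(struct sockaddr_in6)`:

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define ERR_INVAL	22	/* stands in for EINVAL */
#define ERR_AFNOSUPPORT	97	/* stands in for EAFNOSUPPORT */

/* Returns 0 when the AF_UNSPEC bind maps onto AF_INET and should fall
 * through to the regular port check, a negative error otherwise. */
static int check_unspec_bind(int sk_family, const struct sockaddr *addr,
			     socklen_t addrlen)
{
	if (sk_family == AF_INET) {
		const struct sockaddr_in *in4 =
			(const struct sockaddr_in *)addr;

		if (addrlen < sizeof(*in4))
			return -ERR_INVAL;
		/* Only bind(AF_UNSPEC + INADDR_ANY) maps onto AF_INET. */
		if (in4->sin_addr.s_addr != htonl(INADDR_ANY))
			return -ERR_AFNOSUPPORT;
		return 0;
	}
	/* IPv6 sockets always reject an AF_UNSPEC bind. */
	if (addrlen < sizeof(struct sockaddr_in6))
		return -ERR_INVAL;
	return -ERR_AFNOSUPPORT;
}
```

Returning a precise error here, instead of 0, keeps `-EACCES` from masking what the network stack would have reported anyway, which is exactly the rationale the comment in the patch gives.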
-1
security/landlock/ruleset.c
··· 23 23 #include <linux/workqueue.h> 24 24 25 25 #include "access.h" 26 - #include "audit.h" 27 26 #include "domain.h" 28 27 #include "limits.h" 29 28 #include "object.h"
+6 -6
security/landlock/task.c
··· 86 86 const unsigned int mode) 87 87 { 88 88 const struct landlock_cred_security *parent_subject; 89 - const struct landlock_ruleset *child_dom; 90 89 int err; 91 90 92 91 /* Quick return for non-landlocked tasks. */ ··· 95 96 96 97 scoped_guard(rcu) 97 98 { 98 - child_dom = landlock_get_task_domain(child); 99 + const struct landlock_ruleset *const child_dom = 100 + landlock_get_task_domain(child); 99 101 err = domain_ptrace(parent_subject->domain, child_dom); 100 102 } 101 103 ··· 166 166 } 167 167 168 168 /** 169 - * domain_is_scoped - Checks if the client domain is scoped in the same 170 - * domain as the server. 169 + * domain_is_scoped - Check if an interaction from a client/sender to a 170 + * server/receiver should be restricted based on scope controls. 171 171 * 172 172 * @client: IPC sender domain. 173 173 * @server: IPC receiver domain. 174 174 * @scope: The scope restriction criteria. 175 175 * 176 - * Returns: True if the @client domain is scoped to access the @server, 177 - * unless the @server is also scoped in the same domain as @client. 176 + * Returns: True if @server is in a different domain from @client, and @client 177 + * is scoped to access @server (i.e. access should be denied). 178 178 */ 179 179 static bool domain_is_scoped(const struct landlock_ruleset *const client, 180 180 const struct landlock_ruleset *const server,
+3 -1
sound/core/oss/pcm_oss.c
··· 1074 1074 runtime->oss.params = 0; 1075 1075 runtime->oss.prepare = 1; 1076 1076 runtime->oss.buffer_used = 0; 1077 - snd_pcm_runtime_buffer_set_silence(runtime); 1077 + err = snd_pcm_runtime_buffer_set_silence(runtime); 1078 + if (err < 0) 1079 + goto failure; 1078 1080 1079 1081 runtime->oss.period_frames = snd_pcm_alsa_frames(substream, oss_period_size); 1080 1082
+7 -2
sound/core/pcm_native.c
··· 730 730 } 731 731 732 732 /* fill the PCM buffer with the current silence format; called from pcm_oss.c */ 733 - void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime) 733 + int snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime) 734 734 { 735 - snd_pcm_buffer_access_lock(runtime); 735 + int err; 736 + 737 + err = snd_pcm_buffer_access_lock(runtime); 738 + if (err < 0) 739 + return err; 736 740 if (runtime->dma_area) 737 741 snd_pcm_format_set_silence(runtime->format, runtime->dma_area, 738 742 bytes_to_samples(runtime, runtime->dma_bytes)); 739 743 snd_pcm_buffer_access_unlock(runtime); 744 + return 0; 740 745 } 741 746 EXPORT_SYMBOL_GPL(snd_pcm_runtime_buffer_set_silence); 742 747
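`snd_pcm_runtime_buffer_set_silence()` changes from `void` to `int` so that a failed `snd_pcm_buffer_access_lock()` propagates to the OSS caller instead of being silently swallowed. The pattern in miniature (`struct runtime` and the function names are illustrative, not the ALSA API):

```c
#include <assert.h>

/* Toy runtime: access_ok models the buffer-access lock succeeding. */
struct runtime {
	int access_ok;
	int silenced;
};

static int buffer_access_lock(struct runtime *r)
{
	return r->access_ok ? 0 : -16;	/* -EBUSY */
}

static void buffer_access_unlock(struct runtime *r)
{
	(void)r;
}

/* Formerly void: a failed lock silently skipped the silencing. Now the
 * error is returned so callers (e.g. the OSS layer) can bail out. */
static int buffer_set_silence(struct runtime *r)
{
	int err = buffer_access_lock(r);

	if (err < 0)
		return err;
	r->silenced = 1;
	buffer_access_unlock(r);
	return 0;
}
```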
+2
sound/hda/codecs/realtek/alc269.c
··· 6613 6613 SND_PCI_QUIRK(0x103c, 0x8a2e, "HP Envy 16", ALC287_FIXUP_CS35L41_I2C_2), 6614 6614 SND_PCI_QUIRK(0x103c, 0x8a30, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2), 6615 6615 SND_PCI_QUIRK(0x103c, 0x8a31, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2), 6616 + SND_PCI_QUIRK(0x103c, 0x8a34, "HP Pavilion x360 2-in-1 Laptop 14-ek0xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 6616 6617 SND_PCI_QUIRK(0x103c, 0x8a4f, "HP Victus 15-fa0xxx (MB 8A4F)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 6617 6618 SND_PCI_QUIRK(0x103c, 0x8a6e, "HP EDNA 360", ALC287_FIXUP_CS35L41_I2C_4), 6618 6619 SND_PCI_QUIRK(0x103c, 0x8a74, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED), ··· 6818 6817 SND_PCI_QUIRK(0x103c, 0x8f42, "HP ZBook 8 G2a 14W", ALC245_FIXUP_HP_TAS2781_I2C_MUTE_LED), 6819 6818 SND_PCI_QUIRK(0x103c, 0x8f57, "HP Trekker G7JC", ALC287_FIXUP_CS35L41_I2C_2), 6820 6819 SND_PCI_QUIRK(0x103c, 0x8f62, "HP ZBook 8 G2a 16W", ALC245_FIXUP_HP_TAS2781_I2C_MUTE_LED), 6820 + SND_PCI_QUIRK(0x1043, 0x1024, "ASUS Zephyrus G14 2025", ALC285_FIXUP_ASUS_GA403U_HEADSET_MIC), 6821 6821 SND_PCI_QUIRK(0x1043, 0x1032, "ASUS VivoBook X513EA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 6822 6822 SND_PCI_QUIRK(0x1043, 0x1034, "ASUS GU605C", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), 6823 6823 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+2 -1
sound/hda/codecs/side-codecs/cirrus_scodec_test.c
··· 103 103 104 104 /* GPIO core modifies our struct gpio_chip so use a copy */ 105 105 gpio_priv->chip = cirrus_scodec_test_gpio_chip; 106 + gpio_priv->chip.parent = &pdev->dev; 106 107 ret = devm_gpiochip_add_data(&pdev->dev, &gpio_priv->chip, gpio_priv); 107 108 if (ret) 108 109 return dev_err_probe(&pdev->dev, ret, "Failed to add gpiochip\n"); ··· 320 319 }; 321 320 322 321 static struct kunit_suite cirrus_scodec_test_suite = { 323 - .name = "snd-hda-scodec-cs35l56-test", 322 + .name = "snd-hda-cirrus-scodec-test", 324 323 .init = cirrus_scodec_test_case_init, 325 324 .test_cases = cirrus_scodec_test_cases, 326 325 };
+16 -2
sound/hda/codecs/side-codecs/tas2781_hda_i2c.c
··· 2 2 // 3 3 // TAS2781 HDA I2C driver 4 4 // 5 - // Copyright 2023 - 2025 Texas Instruments, Inc. 5 + // Copyright 2023 - 2026 Texas Instruments, Inc. 6 6 // 7 7 // Author: Shenghao Ding <shenghao-ding@ti.com> 8 8 // Current maintainer: Baojun Xu <baojun.xu@ti.com> ··· 60 60 int (*save_calibration)(struct tas2781_hda *h); 61 61 62 62 int hda_chip_id; 63 + bool skip_calibration; 63 64 }; 64 65 65 66 static int tas2781_get_i2c_res(struct acpi_resource *ares, void *data) ··· 492 491 /* If calibrated data occurs error, dsp will still works with default 493 492 * calibrated data inside algo. 494 493 */ 495 - hda_priv->save_calibration(tas_hda); 494 + if (!hda_priv->skip_calibration) 495 + hda_priv->save_calibration(tas_hda); 496 496 } 497 497 498 498 static void tasdev_fw_ready(const struct firmware *fmw, void *context) ··· 550 548 void *master_data) 551 549 { 552 550 struct tas2781_hda *tas_hda = dev_get_drvdata(dev); 551 + struct tas2781_hda_i2c_priv *hda_priv = tas_hda->hda_priv; 553 552 struct hda_component_parent *parent = master_data; 554 553 struct hda_component *comp; 555 554 struct hda_codec *codec; ··· 571 568 case 0x1028: 572 569 tas_hda->catlog_id = DELL; 573 570 break; 571 + case 0x103C: 572 + tas_hda->catlog_id = HP; 573 + break; 574 574 default: 575 575 tas_hda->catlog_id = LENOVO; 576 576 break; 577 577 } 578 + 579 + /* 580 + * Using ASUS ROG Xbox Ally X (RC73XA) UEFI calibration data 581 + * causes audio dropouts during playback, use fallback data 582 + * from DSP firmware as a workaround. 583 + */ 584 + if (codec->core.subsystem_id == 0x10431384) 585 + hda_priv->skip_calibration = true; 578 586 579 587 pm_runtime_get_sync(dev); 580 588
+7
sound/soc/amd/yc/acp6x-mach.c
··· 420 420 .driver_data = &acp6x_card, 421 421 .matches = { 422 422 DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 423 + DMI_MATCH(DMI_PRODUCT_NAME, "M6500RE"), 424 + } 425 + }, 426 + { 427 + .driver_data = &acp6x_card, 428 + .matches = { 429 + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 423 430 DMI_MATCH(DMI_PRODUCT_NAME, "M6501RM"), 424 431 } 425 432 },
+9
sound/soc/codecs/wsa881x.c
··· 678 678 */ 679 679 unsigned int sd_n_val; 680 680 int active_ports; 681 + bool hw_init; 681 682 bool port_prepared[WSA881X_MAX_SWR_PORTS]; 682 683 bool port_enable[WSA881X_MAX_SWR_PORTS]; 683 684 }; ··· 687 686 { 688 687 struct regmap *rm = wsa881x->regmap; 689 688 unsigned int val = 0; 689 + 690 + if (wsa881x->hw_init) 691 + return; 690 692 691 693 regmap_register_patch(wsa881x->regmap, wsa881x_rev_2_0, 692 694 ARRAY_SIZE(wsa881x_rev_2_0)); ··· 728 724 regmap_update_bits(rm, WSA881X_OTP_REG_28, 0x3F, 0x3A); 729 725 regmap_update_bits(rm, WSA881X_BONGO_RESRV_REG1, 0xFF, 0xB2); 730 726 regmap_update_bits(rm, WSA881X_BONGO_RESRV_REG2, 0xFF, 0x05); 727 + 728 + wsa881x->hw_init = true; 731 729 } 732 730 733 731 static int wsa881x_component_probe(struct snd_soc_component *comp) ··· 1072 1066 enum sdw_slave_status status) 1073 1067 { 1074 1068 struct wsa881x_priv *wsa881x = dev_get_drvdata(&slave->dev); 1069 + 1070 + if (status == SDW_SLAVE_UNATTACHED) 1071 + wsa881x->hw_init = false; 1075 1072 1076 1073 if (status == SDW_SLAVE_ATTACHED && slave->dev_num > 0) 1077 1074 wsa881x_init(wsa881x);
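The wsa881x, wsa883x and wsa884x changes all apply the same SoundWire idiom: run the register-init sequence once per attachment, guard it with an `hw_init` flag, and clear the flag on `SDW_SLAVE_UNATTACHED` so a re-enumerated device gets fully reprogrammed. A sketch of that state handling (names are illustrative, not the SoundWire API):

```c
#include <assert.h>

enum sdw_status { SDW_UNATTACHED, SDW_ATTACHED };

struct codec {
	int hw_init;	/* registers programmed for current attachment */
	int init_calls;	/* for illustration: how often init really ran */
};

/* Init is idempotent per attachment: a repeated call is a no-op. */
static void codec_init(struct codec *c)
{
	if (c->hw_init)
		return;
	/* ... regmap patches and OTP-dependent setup go here ... */
	c->init_calls++;
	c->hw_init = 1;
}

/* update_status() callback: losing the bus invalidates the register
 * state, so the next attach must run the full init again. */
static void update_status(struct codec *c, enum sdw_status s)
{
	if (s == SDW_UNATTACHED)
		c->hw_init = 0;
	if (s == SDW_ATTACHED)
		codec_init(c);
}
```

Note that wsa884x had the flag's polarity inverted (set to `false` after init, `true` at probe); its hunk is a straight bug fix, while the other two drivers gain the flag for the first time.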
+18 -8
sound/soc/codecs/wsa883x.c
··· 475 475 int active_ports; 476 476 int dev_mode; 477 477 int comp_offset; 478 + bool hw_init; 478 479 /* 479 480 * Protects temperature reading code (related to speaker protection) and 480 481 * fields: temperature and pa_on. ··· 1044 1043 struct regmap *regmap = wsa883x->regmap; 1045 1044 int variant, version, ret; 1046 1045 1046 + if (wsa883x->hw_init) 1047 + return 0; 1048 + 1047 1049 ret = regmap_read(regmap, WSA883X_OTP_REG_0, &variant); 1048 1050 if (ret) 1049 1051 return ret; ··· 1058 1054 1059 1055 switch (variant) { 1060 1056 case WSA8830: 1061 - dev_info(wsa883x->dev, "WSA883X Version 1_%d, Variant: WSA8830\n", 1062 - version); 1057 + dev_dbg(wsa883x->dev, "WSA883X Version 1_%d, Variant: WSA8830\n", 1058 + version); 1063 1059 break; 1064 1060 case WSA8835: 1065 - dev_info(wsa883x->dev, "WSA883X Version 1_%d, Variant: WSA8835\n", 1066 - version); 1061 + dev_dbg(wsa883x->dev, "WSA883X Version 1_%d, Variant: WSA8835\n", 1062 + version); 1067 1063 break; 1068 1064 case WSA8832: 1069 - dev_info(wsa883x->dev, "WSA883X Version 1_%d, Variant: WSA8832\n", 1070 - version); 1065 + dev_dbg(wsa883x->dev, "WSA883X Version 1_%d, Variant: WSA8832\n", 1066 + version); 1071 1067 break; 1072 1068 case WSA8835_V2: 1073 - dev_info(wsa883x->dev, "WSA883X Version 1_%d, Variant: WSA8835_V2\n", 1074 - version); 1069 + dev_dbg(wsa883x->dev, "WSA883X Version 1_%d, Variant: WSA8835_V2\n", 1070 + version); 1075 1071 break; 1076 1072 default: 1073 + dev_warn(wsa883x->dev, "unknown variant: %d\n", variant); 1077 1074 break; 1078 1075 } 1079 1076 ··· 1090 1085 wsa883x->comp_offset); 1091 1086 } 1092 1087 1088 + wsa883x->hw_init = true; 1089 + 1093 1090 return 0; 1094 1091 } 1095 1092 ··· 1099 1092 enum sdw_slave_status status) 1100 1093 { 1101 1094 struct wsa883x_priv *wsa883x = dev_get_drvdata(&slave->dev); 1095 + 1096 + if (status == SDW_SLAVE_UNATTACHED) 1097 + wsa883x->hw_init = false; 1102 1098 1103 1099 if (status == SDW_SLAVE_ATTACHED && slave->dev_num > 0) 1104 1100 return 
wsa883x_init(wsa883x);
+1 -2
sound/soc/codecs/wsa884x.c
··· 1534 1534 1535 1535 wsa884x_set_gain_parameters(wsa884x); 1536 1536 1537 - wsa884x->hw_init = false; 1537 + wsa884x->hw_init = true; 1538 1538 } 1539 1539 1540 1540 static int wsa884x_update_status(struct sdw_slave *slave, ··· 2109 2109 2110 2110 /* Start in cache-only until device is enumerated */ 2111 2111 regcache_cache_only(wsa884x->regmap, true); 2112 - wsa884x->hw_init = true; 2113 2112 2114 2113 if (IS_REACHABLE(CONFIG_HWMON)) { 2115 2114 struct device *hwmon;
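The wsa883x/wsa884x changes above share one pattern: an `hw_init` flag that makes hardware init run once per enumeration, set only after init completes and cleared again on `SDW_SLAVE_UNATTACHED`. A minimal sketch of that guard-flag pattern (names are illustrative, not the driver's API):

```c
#include <assert.h>
#include <stdbool.h>

struct dev_state {
	bool hw_init;
	int init_count;   /* counts real init work, for demonstration */
};

static int dev_hw_init(struct dev_state *d)
{
	if (d->hw_init)
		return 0;          /* already initialized: skip the work */
	d->init_count++;           /* stands in for register setup */
	d->hw_init = true;         /* mark done only after success */
	return 0;
}

static void dev_unattached(struct dev_state *d)
{
	d->hw_init = false;        /* force re-init on the next attach */
}
```

Setting the flag *after* the work (rather than before, as the old wsa884x code effectively did) is what makes a failed or repeated attach safe.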
+2 -2
sound/soc/generic/simple-card-utils.c
··· 1179 1179 bool is_playback_only = of_property_read_bool(np, "playback-only"); 1180 1180 bool is_capture_only = of_property_read_bool(np, "capture-only"); 1181 1181 1182 - if (playback_only) 1182 + if (np && playback_only) 1183 1183 *playback_only = is_playback_only; 1184 - if (capture_only) 1184 + if (np && capture_only) 1185 1185 *capture_only = is_capture_only; 1186 1186 } 1187 1187 EXPORT_SYMBOL_GPL(graph_util_parse_link_direction);
+8
sound/soc/intel/boards/sof_sdw.c
··· 767 767 { 768 768 .callback = sof_sdw_quirk_cb, 769 769 .matches = { 770 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"), 771 + DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0DD6") 772 + }, 773 + .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS), 774 + }, 775 + { 776 + .callback = sof_sdw_quirk_cb, 777 + .matches = { 770 778 DMI_MATCH(DMI_PRODUCT_FAMILY, "Intel_ptlrvp"), 771 779 }, 772 780 .driver_data = (void *)(SOC_SDW_PCH_DMIC),
+8 -16
sound/soc/renesas/rz-ssi.c
··· 180 180 static inline struct rz_ssi_stream * 181 181 rz_ssi_stream_get(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream) 182 182 { 183 - struct rz_ssi_stream *stream = &ssi->playback; 184 - 185 - if (substream->stream != SNDRV_PCM_STREAM_PLAYBACK) 186 - stream = &ssi->capture; 187 - 188 - return stream; 183 + return (ssi->playback.substream == substream) ? &ssi->playback : &ssi->capture; 189 184 } 190 185 191 186 static inline bool rz_ssi_is_dma_enabled(struct rz_ssi_priv *ssi) ··· 604 609 return IRQ_HANDLED; /* Left over TX/RX interrupt */ 605 610 606 611 if (irq == ssi->irq_int) { /* error or idle */ 607 - bool is_stopped = false; 612 + bool is_stopped = !!(ssisr & (SSISR_RUIRQ | SSISR_ROIRQ | 613 + SSISR_TUIRQ | SSISR_TOIRQ)); 608 614 int i, count; 609 615 610 616 if (rz_ssi_is_dma_enabled(ssi)) 611 617 count = 4; 612 618 else 613 619 count = 1; 614 - 615 - if (ssisr & (SSISR_RUIRQ | SSISR_ROIRQ | SSISR_TUIRQ | SSISR_TOIRQ)) 616 - is_stopped = true; 617 620 618 621 if (ssi->capture.substream && is_stopped) { 619 622 if (ssisr & SSISR_RUIRQ) ··· 881 888 for (i = 0; i < num_transfer; i++) { 882 889 ret = strm->transfer(ssi, strm); 883 890 if (ret) 884 - goto done; 891 + return ret; 885 892 } 886 893 887 894 ret = rz_ssi_start(ssi, strm); ··· 897 904 break; 898 905 } 899 906 900 - done: 901 907 return ret; 902 908 } 903 909 ··· 1187 1195 goto err_release_dma_chs; 1188 1196 } 1189 1197 1190 - ret = devm_request_irq(dev, ssi->irq_int, &rz_ssi_interrupt, 1198 + ret = devm_request_irq(dev, ssi->irq_int, rz_ssi_interrupt, 1191 1199 0, dev_name(dev), ssi); 1192 1200 if (ret < 0) { 1193 1201 dev_err_probe(dev, ret, "irq request error (int_req)\n"); ··· 1204 1212 return ssi->irq_rt; 1205 1213 1206 1214 ret = devm_request_irq(dev, ssi->irq_rt, 1207 - &rz_ssi_interrupt, 0, 1215 + rz_ssi_interrupt, 0, 1208 1216 dev_name(dev), ssi); 1209 1217 if (ret < 0) 1210 1218 return dev_err_probe(dev, ret, ··· 1217 1225 return ssi->irq_rx; 1218 1226 1219 1227 ret = 
devm_request_irq(dev, ssi->irq_tx, 1220 - &rz_ssi_interrupt, 0, 1228 + rz_ssi_interrupt, 0, 1221 1229 dev_name(dev), ssi); 1222 1230 if (ret < 0) 1223 1231 return dev_err_probe(dev, ret, 1224 1232 "irq request error (dma_tx)\n"); 1225 1233 1226 1234 ret = devm_request_irq(dev, ssi->irq_rx, 1227 - &rz_ssi_interrupt, 0, 1235 + rz_ssi_interrupt, 0, 1228 1236 dev_name(dev), ssi); 1229 1237 if (ret < 0) 1230 1238 return dev_err_probe(dev, ret,
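The `rz_ssi_stream_get()` cleanup above replaces a branch on the stream direction with a match on the substream pointer recorded at open time. A minimal sketch of the idea, with illustrative stand-in types:

```c
#include <assert.h>

struct stream { const void *substream; };
struct ssi_priv { struct stream playback, capture; };

/* Match by the substream pointer stored when the stream was opened,
 * instead of inferring playback vs capture from a direction field. */
static struct stream *stream_get(struct ssi_priv *ssi, const void *sub)
{
	return (ssi->playback.substream == sub) ? &ssi->playback
						: &ssi->capture;
}
```

Matching the stored pointer keeps the lookup correct even if the caller's notion of direction and the stored substream ever disagree.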
+1 -1
sound/soc/sdw_utils/soc_sdw_cs42l43.c
··· 44 44 static struct snd_soc_jack_pin soc_jack_pins[] = { 45 45 { 46 46 .pin = "Headphone", 47 - .mask = SND_JACK_HEADPHONE, 47 + .mask = SND_JACK_HEADPHONE | SND_JACK_LINEOUT, 48 48 }, 49 49 { 50 50 .pin = "Headset Mic",
+42 -1
sound/soc/sdw_utils/soc_sdw_utils.c
··· 855 855 } 856 856 EXPORT_SYMBOL_NS(asoc_sdw_find_codec_info_part, "SND_SOC_SDW_UTILS"); 857 857 858 + static struct asoc_sdw_codec_info *asoc_sdw_find_codec_info_sdw_id(const struct sdw_slave_id *id) 859 + { 860 + int i; 861 + 862 + for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) 863 + if (id->part_id == codec_info_list[i].part_id && 864 + (!codec_info_list[i].version_id || 865 + id->sdw_version == codec_info_list[i].version_id)) 866 + return &codec_info_list[i]; 867 + 868 + return NULL; 869 + } 870 + 858 871 struct asoc_sdw_codec_info *asoc_sdw_find_codec_info_acpi(const u8 *acpi_id) 859 872 { 860 873 int i; ··· 900 887 } 901 888 EXPORT_SYMBOL_NS(asoc_sdw_find_codec_info_dai, "SND_SOC_SDW_UTILS"); 902 889 890 + static int asoc_sdw_find_codec_info_dai_index(const struct asoc_sdw_codec_info *codec_info, 891 + const char *dai_name) 892 + { 893 + int i; 894 + 895 + for (i = 0; i < codec_info->dai_num; i++) { 896 + if (!strcmp(codec_info->dais[i].dai_name, dai_name)) 897 + return i; 898 + } 899 + 900 + return -ENOENT; 901 + } 902 + 903 903 int asoc_sdw_rtd_init(struct snd_soc_pcm_runtime *rtd) 904 904 { 905 905 struct snd_soc_card *card = rtd->card; 906 906 struct snd_soc_dapm_context *dapm = snd_soc_card_to_dapm(card); 907 907 struct asoc_sdw_codec_info *codec_info; 908 908 struct snd_soc_dai *dai; 909 + struct sdw_slave *sdw_peripheral; 909 910 const char *spk_components=""; 910 911 int dai_index; 911 912 int ret; 912 913 int i; 913 914 914 915 for_each_rtd_codec_dais(rtd, i, dai) { 915 - codec_info = asoc_sdw_find_codec_info_dai(dai->name, &dai_index); 916 + if (is_sdw_slave(dai->component->dev)) 917 + sdw_peripheral = dev_to_sdw_dev(dai->component->dev); 918 + else if (dai->component->dev->parent && is_sdw_slave(dai->component->dev->parent)) 919 + sdw_peripheral = dev_to_sdw_dev(dai->component->dev->parent); 920 + else 921 + continue; 922 + 923 + codec_info = asoc_sdw_find_codec_info_sdw_id(&sdw_peripheral->id); 916 924 if (!codec_info) 917 925 return 
-EINVAL; 926 + 927 + dai_index = asoc_sdw_find_codec_info_dai_index(codec_info, dai->name); 928 + WARN_ON(dai_index < 0); 918 929 919 930 /* 920 931 * A codec dai can be connected to different dai links for capture and playback, ··· 948 911 */ 949 912 if (codec_info->dais[dai_index].rtd_init_done) 950 913 continue; 914 + 915 + dev_dbg(card->dev, "%#x/%s initializing for %s/%s\n", 916 + codec_info->part_id, codec_info->dais[dai_index].dai_name, 917 + dai->component->name, dai->name); 951 918 952 919 /* 953 920 * Add card controls and dapm widgets for the first codec dai.
+2 -2
sound/soc/soc-ops.c
··· 543 543 ucontrol->value.bytes.data[0] &= ~params->mask; 544 544 break; 545 545 case 2: 546 - ((u16 *)(&ucontrol->value.bytes.data))[0] 546 + ((__be16 *)(&ucontrol->value.bytes.data))[0] 547 547 &= cpu_to_be16(~params->mask); 548 548 break; 549 549 case 4: 550 - ((u32 *)(&ucontrol->value.bytes.data))[0] 550 + ((__be32 *)(&ucontrol->value.bytes.data))[0] 551 551 &= cpu_to_be32(~params->mask); 552 552 break; 553 553 default:
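The soc-ops.c change casts the buffer to `__be16`/`__be32` because the bytes-ext data is big-endian on the wire, so the *mask* must be byte-swapped before it is ANDed in, not the data. A userspace sketch of the same operation, using `htons()`/`ntohs()` as stand-ins for the kernel's `cpu_to_be16()`/`be16_to_cpu()`:

```c
#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Clear the masked bits of a big-endian 16-bit field stored in a byte
 * buffer. The mask is given in host order and converted, so the code
 * works identically on little- and big-endian hosts. */
static uint16_t mask_be16(const unsigned char *buf, uint16_t mask)
{
	uint16_t v;

	memcpy(&v, buf, sizeof(v));        /* v holds big-endian bytes */
	v &= htons((uint16_t)~mask);       /* convert the mask, not the data */
	return ntohs(v);                   /* back to host order for checks */
}
```

In the kernel, annotating the pointer as `__be16 *` additionally lets sparse flag any accidental host-order arithmetic on the field.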
+3 -3
sound/soc/tegra/tegra210_ahub.c
··· 2077 2077 .val_bits = 32, 2078 2078 .reg_stride = 4, 2079 2079 .max_register = TEGRA210_MAX_REGISTER_ADDR, 2080 - .cache_type = REGCACHE_FLAT_S, 2080 + .cache_type = REGCACHE_FLAT, 2081 2081 }; 2082 2082 2083 2083 static const struct regmap_config tegra186_ahub_regmap_config = { ··· 2085 2085 .val_bits = 32, 2086 2086 .reg_stride = 4, 2087 2087 .max_register = TEGRA186_MAX_REGISTER_ADDR, 2088 - .cache_type = REGCACHE_FLAT_S, 2088 + .cache_type = REGCACHE_FLAT, 2089 2089 }; 2090 2090 2091 2091 static const struct regmap_config tegra264_ahub_regmap_config = { ··· 2094 2094 .reg_stride = 4, 2095 2095 .writeable_reg = tegra264_ahub_wr_reg, 2096 2096 .max_register = TEGRA264_MAX_REGISTER_ADDR, 2097 - .cache_type = REGCACHE_FLAT_S, 2097 + .cache_type = REGCACHE_FLAT, 2098 2098 }; 2099 2099 2100 2100 static const struct tegra_ahub_soc_data soc_data_tegra210 = {
+31 -8
sound/soc/ti/davinci-evm.c
··· 194 194 return -EINVAL; 195 195 196 196 dai->cpus->of_node = of_parse_phandle(np, "ti,mcasp-controller", 0); 197 - if (!dai->cpus->of_node) 198 - return -EINVAL; 197 + if (!dai->cpus->of_node) { 198 + ret = -EINVAL; 199 + goto err_put; 200 + } 199 201 200 202 dai->platforms->of_node = dai->cpus->of_node; 201 203 202 204 evm_soc_card.dev = &pdev->dev; 203 205 ret = snd_soc_of_parse_card_name(&evm_soc_card, "ti,model"); 204 206 if (ret) 205 - return ret; 207 + goto err_put; 206 208 207 209 mclk = devm_clk_get(&pdev->dev, "mclk"); 208 210 if (PTR_ERR(mclk) == -EPROBE_DEFER) { 209 - return -EPROBE_DEFER; 211 + ret = -EPROBE_DEFER; 212 + goto err_put; 210 213 } else if (IS_ERR(mclk)) { 211 214 dev_dbg(&pdev->dev, "mclk not found.\n"); 212 215 mclk = NULL; 213 216 } 214 217 215 218 drvdata = devm_kzalloc(&pdev->dev, sizeof(*drvdata), GFP_KERNEL); 216 - if (!drvdata) 217 - return -ENOMEM; 219 + if (!drvdata) { 220 + ret = -ENOMEM; 221 + goto err_put; 222 + } 218 223 219 224 drvdata->mclk = mclk; 220 225 ··· 229 224 if (!drvdata->mclk) { 230 225 dev_err(&pdev->dev, 231 226 "No clock or clock rate defined.\n"); 232 - return -EINVAL; 227 + ret = -EINVAL; 228 + goto err_put; 233 229 } 234 230 drvdata->sysclk = clk_get_rate(drvdata->mclk); 235 231 } else if (drvdata->mclk) { ··· 246 240 snd_soc_card_set_drvdata(&evm_soc_card, drvdata); 247 241 ret = devm_snd_soc_register_card(&pdev->dev, &evm_soc_card); 248 242 249 - if (ret) 243 + if (ret) { 250 244 dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", ret); 245 + goto err_put; 246 + } 247 + 248 + return ret; 249 + 250 + err_put: 251 + dai->platforms->of_node = NULL; 252 + 253 + if (dai->cpus->of_node) { 254 + of_node_put(dai->cpus->of_node); 255 + dai->cpus->of_node = NULL; 256 + } 257 + 258 + if (dai->codecs->of_node) { 259 + of_node_put(dai->codecs->of_node); 260 + dai->codecs->of_node = NULL; 261 + } 251 262 252 263 return ret; 253 264 }
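The davinci-evm change converts early `return`s into `goto err_put` so the reference taken by `of_parse_phandle()` is dropped exactly once on every failure path. A minimal sketch of that goto-unwind pattern, with a counter standing in for the OF node refcount (names are illustrative):

```c
#include <assert.h>

/* probe() takes a reference up front; on any later failure it must be
 * released through a single unwind label before returning the error. */
static int probe(int fail_late, int *refcount)
{
	int ret;

	(*refcount)++;            /* of_parse_phandle() took a reference */
	if (fail_late) {
		ret = -22;        /* e.g. -EINVAL from card registration */
		goto err_put;
	}
	return 0;                 /* success: the bound card keeps the ref */

err_put:
	(*refcount)--;            /* of_node_put() exactly once */
	return ret;
}
```

Centralizing the release in one label is what keeps later error paths (like the new `devm_snd_soc_register_card()` failure case) from leaking the node.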
+1 -1
sound/usb/pcm.c
··· 1553 1553 1554 1554 for (i = 0; i < ctx->packets; i++) { 1555 1555 counts = snd_usb_endpoint_next_packet_size(ep, ctx, i, avail); 1556 - if (counts < 0) 1556 + if (counts < 0 || frames + counts >= ep->max_urb_frames) 1557 1557 break; 1558 1558 /* set up descriptor */ 1559 1559 urb->iso_frame_desc[i].offset = frames * stride;
+6 -3
tools/net/ynl/pyynl/lib/doc_generator.py
··· 166 166 continue 167 167 lines.append(self.fmt.rst_paragraph(self.fmt.bold(key), level + 1)) 168 168 if key in ['request', 'reply']: 169 - lines.append(self.parse_do_attributes(do_dict[key], level + 1) + "\n") 169 + lines.append(self.parse_op_attributes(do_dict[key], level + 1) + "\n") 170 170 else: 171 171 lines.append(self.fmt.headroom(level + 2) + do_dict[key] + "\n") 172 172 173 173 return "\n".join(lines) 174 174 175 - def parse_do_attributes(self, attrs: Dict[str, Any], level: int = 0) -> str: 175 + def parse_op_attributes(self, attrs: Dict[str, Any], level: int = 0) -> str: 176 176 """Parse 'attributes' section""" 177 177 if "attributes" not in attrs: 178 178 return "" ··· 184 184 185 185 def parse_operations(self, operations: List[Dict[str, Any]], namespace: str) -> str: 186 186 """Parse operations block""" 187 - preprocessed = ["name", "doc", "title", "do", "dump", "flags"] 187 + preprocessed = ["name", "doc", "title", "do", "dump", "flags", "event"] 188 188 linkable = ["fixed-header", "attribute-set"] 189 189 lines = [] 190 190 ··· 217 217 if "dump" in operation: 218 218 lines.append(self.fmt.rst_paragraph(":dump:", 0)) 219 219 lines.append(self.parse_do(operation["dump"], 0)) 220 + if "event" in operation: 221 + lines.append(self.fmt.rst_paragraph(":event:", 0)) 222 + lines.append(self.parse_op_attributes(operation["event"], 0)) 220 223 221 224 # New line after fields 222 225 lines.append("\n")
+14 -10
tools/objtool/Makefile
··· 72 72 73 73 # 74 74 # To support disassembly, objtool needs libopcodes which is provided 75 - # with libbdf (binutils-dev or binutils-devel package). 75 + # with libbfd (binutils-dev or binutils-devel package). 76 76 # 77 - FEATURE_USER = .objtool 78 - FEATURE_TESTS = libbfd disassembler-init-styled 79 - FEATURE_DISPLAY = 80 - include $(srctree)/tools/build/Makefile.feature 77 + # We check using HOSTCC directly rather than the shared feature framework 78 + # because objtool is a host tool that links against host libraries. 79 + # 80 + HAVE_LIBOPCODES := $(shell echo 'int main(void) { return 0; }' | \ 81 + $(HOSTCC) -xc - -o /dev/null -lopcodes 2>/dev/null && echo y) 81 82 82 - ifeq ($(feature-disassembler-init-styled), 1) 83 - OBJTOOL_CFLAGS += -DDISASM_INIT_STYLED 84 - endif 83 + # Styled disassembler support requires binutils >= 2.39 84 + HAVE_DISASM_STYLED := $(shell echo '$(pound)include <dis-asm.h>' | \ 85 + $(HOSTCC) -E -xc - 2>/dev/null | grep -q disassembler_style && echo y) 85 86 86 87 BUILD_DISAS := n 87 88 88 - ifeq ($(feature-libbfd),1) 89 + ifeq ($(HAVE_LIBOPCODES),y) 89 90 BUILD_DISAS := y 90 - OBJTOOL_CFLAGS += -DDISAS -DPACKAGE="objtool" 91 + OBJTOOL_CFLAGS += -DDISAS -DPACKAGE='"objtool"' 91 92 OBJTOOL_LDFLAGS += -lopcodes 93 + ifeq ($(HAVE_DISASM_STYLED),y) 94 + OBJTOOL_CFLAGS += -DDISASM_INIT_STYLED 95 + endif 92 96 endif 93 97 94 98 export BUILD_DISAS
+2 -2
tools/objtool/include/objtool/warn.h
··· 152 152 if (unlikely(insn->sym && insn->sym->pfunc && \ 153 153 insn->sym->pfunc->debug_checksum)) { \ 154 154 char *insn_off = offstr(insn->sec, insn->offset); \ 155 - __dbg("checksum: %s %s %016lx", \ 156 - func->name, insn_off, checksum); \ 155 + __dbg("checksum: %s %s %016llx", \ 156 + func->name, insn_off, (unsigned long long)checksum);\ 157 157 free(insn_off); \ 158 158 } \ 159 159 })
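The objtool warn.h fix addresses the classic portability bug: printing a 64-bit value with `%016lx` is wrong where `long` is 32-bit, so the value is cast to `unsigned long long` for `%llx`. In userspace the equivalent alternatives are the cast or `PRIx64` from `<inttypes.h>`; a sketch of the latter:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Format a 64-bit checksum as 16 zero-padded hex digits, portably
 * across 32- and 64-bit targets. */
static int fmt_checksum(char *out, size_t n, uint64_t sum)
{
	return snprintf(out, n, "%016" PRIx64, sum);
}
```

(The kernel itself has no `PRIx64`, which is why the patch uses the `unsigned long long` cast with `%llx` instead.)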
+16 -10
tools/testing/cxl/test/cxl_translate.c
··· 68 68 69 69 /* Calculate base HPA offset from DPA and position */ 70 70 hpa_offset = cxl_calculate_hpa_offset(dpa_offset, pos, r_eiw, r_eig); 71 + if (hpa_offset == ULLONG_MAX) 72 + return ULLONG_MAX; 71 73 72 74 if (math == XOR_MATH) { 73 75 cximsd->nr_maps = hbiw_to_nr_maps[hb_ways]; ··· 260 258 pos = get_random_u32() % ways; 261 259 dpa = get_random_u64() >> 12; 262 260 261 + reverse_dpa = ULLONG_MAX; 262 + reverse_pos = -1; 263 + 263 264 hpa = cxl_calculate_hpa_offset(dpa, pos, eiw, eig); 264 - reverse_dpa = cxl_calculate_dpa_offset(hpa, eiw, eig); 265 - reverse_pos = cxl_calculate_position(hpa, eiw, eig); 265 + if (hpa != ULLONG_MAX) { 266 + reverse_dpa = cxl_calculate_dpa_offset(hpa, eiw, eig); 267 + reverse_pos = cxl_calculate_position(hpa, eiw, eig); 268 + if (reverse_dpa == dpa && reverse_pos == pos) 269 + continue; 270 + } 266 271 267 - if (reverse_dpa != dpa || reverse_pos != pos) { 268 - pr_err("test random iter %d FAIL hpa=%llu, dpa=%llu reverse_dpa=%llu, pos=%d reverse_pos=%d eiw=%u eig=%u\n", 269 - i, hpa, dpa, reverse_dpa, pos, reverse_pos, eiw, 270 - eig); 272 + pr_err("test random iter %d FAIL hpa=%llu, dpa=%llu reverse_dpa=%llu, pos=%d reverse_pos=%d eiw=%u eig=%u\n", 273 + i, hpa, dpa, reverse_dpa, pos, reverse_pos, eiw, eig); 271 274 272 - if (failures++ > 10) { 273 - pr_err("test random too many failures, stop\n"); 274 - break; 275 - } 275 + if (failures++ > 10) { 276 + pr_err("test random too many failures, stop\n"); 277 + break; 276 278 } 277 279 } 278 280 pr_info("..... test random: PASS %d FAIL %d\n", i - failures, failures);
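The cxl_translate change checks the `ULLONG_MAX` sentinel before feeding the result into the next translation stage. A minimal sketch of that sentinel-propagation pattern (the translation math here is a stand-in, not the CXL decoder arithmetic):

```c
#include <assert.h>
#include <limits.h>

/* Returns ULLONG_MAX on failure, a translated offset otherwise. */
static unsigned long long translate(unsigned long long off, int valid)
{
	return valid ? off * 2 : ULLONG_MAX;
}

/* Chain two translations: the sentinel must be checked between stages,
 * never computed on, or a garbage offset propagates silently. */
static unsigned long long round_trip(unsigned long long off, int valid)
{
	unsigned long long mid = translate(off, valid);

	if (mid == ULLONG_MAX)
		return ULLONG_MAX;
	return mid / 2;
}
```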
+11 -3
tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
··· 47 47 struct test_xdp_context_test_run *skel = NULL; 48 48 char data[sizeof(pkt_v4) + sizeof(__u32)]; 49 49 char bad_ctx[sizeof(struct xdp_md) + 1]; 50 + char large_data[256]; 50 51 struct xdp_md ctx_in, ctx_out; 51 52 DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, 52 53 .data_in = &data, ··· 95 94 test_xdp_context_error(prog_fd, opts, 4, sizeof(__u32), sizeof(data), 96 95 0, 0, 0); 97 96 98 - /* Meta data must be 255 bytes or smaller */ 99 - test_xdp_context_error(prog_fd, opts, 0, 256, sizeof(data), 0, 0, 0); 100 - 101 97 /* Total size of data must be data_end - data_meta or larger */ 102 98 test_xdp_context_error(prog_fd, opts, 0, sizeof(__u32), 103 99 sizeof(data) + 1, 0, 0, 0); ··· 113 115 /* The egress cannot be specified */ 114 116 test_xdp_context_error(prog_fd, opts, 0, sizeof(__u32), sizeof(data), 115 117 0, 0, 1); 118 + 119 + /* Meta data must be 216 bytes or smaller (256 - sizeof(struct 120 + * xdp_frame)). Test both nearest invalid size and nearest invalid 121 + * 4-byte-aligned size, and make sure data_in is large enough that we 122 + * actually hit the check on metadata length 123 + */ 124 + opts.data_in = large_data; 125 + opts.data_size_in = sizeof(large_data); 126 + test_xdp_context_error(prog_fd, opts, 0, 217, sizeof(large_data), 0, 0, 0); 127 + test_xdp_context_error(prog_fd, opts, 0, 220, sizeof(large_data), 0, 0, 0); 116 128 117 129 test_xdp_context_test_run__destroy(skel); 118 130 }
+2 -2
tools/testing/selftests/drivers/net/hw/toeplitz.c
··· 485 485 486 486 bitmap = strtoul(arg, NULL, 0); 487 487 488 - if (bitmap & ~(RPS_MAX_CPUS - 1)) 489 - error(1, 0, "rps bitmap 0x%lx out of bounds 0..%lu", 488 + if (bitmap & ~((1UL << RPS_MAX_CPUS) - 1)) 489 + error(1, 0, "rps bitmap 0x%lx out of bounds, max cpu %lu", 490 490 bitmap, RPS_MAX_CPUS - 1); 491 491 492 492 for (i = 0; i < RPS_MAX_CPUS; i++)
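The toeplitz.c fix corrects an off-by-a-shift: with `N` CPUs the valid bitmap mask is `(1UL << N) - 1`, whereas the old code tested against `N - 1`, which only covers bitmaps up to the CPU *count*, not the full bit range. A sketch of the corrected check (assumes the CPU count is below the width of `unsigned long`, since shifting by the full width is undefined):

```c
#include <assert.h>

/* Reject any bitmap with bits set at or above max_cpus. */
static int rps_mask_ok(unsigned long bitmap, unsigned int max_cpus)
{
	return !(bitmap & ~((1UL << max_cpus) - 1));
}
```

With 4 CPUs the old check (`bitmap & ~3`) would wrongly reject `0xF`, which selects all four CPUs; the shifted mask accepts it and rejects `0x10`.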
+4 -2
tools/testing/selftests/drivers/net/hw/toeplitz.py
··· 94 94 mask = 0 95 95 for cpu in rps_cpus: 96 96 mask |= (1 << cpu) 97 - mask = hex(mask)[2:] 97 + 98 + mask = hex(mask) 98 99 99 100 # Set RPS bitmap for all rx queues 100 101 for rps_file in glob.glob(f"/sys/class/net/{cfg.ifname}/queues/rx-*/rps_cpus"): 101 102 with open(rps_file, "w", encoding="utf-8") as fp: 102 - fp.write(mask) 103 + # sysfs expects hex without '0x' prefix, toeplitz.c needs the prefix 104 + fp.write(mask[2:]) 103 105 104 106 return mask 105 107
+85 -59
tools/testing/selftests/kvm/x86/amx_test.c
··· 69 69 : : "a"(tile), "d"(0)); 70 70 } 71 71 72 + static inline int tileloadd_safe(void *tile) 73 + { 74 + return kvm_asm_safe(".byte 0xc4,0xe2,0x7b,0x4b,0x04,0x10", 75 + "a"(tile), "d"(0)); 76 + } 77 + 72 78 static inline void __tilerelease(void) 73 79 { 74 80 asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0" ::); ··· 130 124 } 131 125 } 132 126 127 + enum { 128 + /* Retrieve TMM0 from guest, stash it for TEST_RESTORE_TILEDATA */ 129 + TEST_SAVE_TILEDATA = 1, 130 + 131 + /* Check TMM0 against tiledata */ 132 + TEST_COMPARE_TILEDATA = 2, 133 + 134 + /* Restore TMM0 from earlier save */ 135 + TEST_RESTORE_TILEDATA = 4, 136 + 137 + /* Full VM save/restore */ 138 + TEST_SAVE_RESTORE = 8, 139 + }; 140 + 133 141 static void __attribute__((__flatten__)) guest_code(struct tile_config *amx_cfg, 134 142 struct tile_data *tiledata, 135 143 struct xstate *xstate) 136 144 { 145 + int vector; 146 + 137 147 GUEST_ASSERT(this_cpu_has(X86_FEATURE_XSAVE) && 138 148 this_cpu_has(X86_FEATURE_OSXSAVE)); 139 149 check_xtile_info(); 140 - GUEST_SYNC(1); 150 + GUEST_SYNC(TEST_SAVE_RESTORE); 141 151 142 152 /* xfd=0, enable amx */ 143 153 wrmsr(MSR_IA32_XFD, 0); 144 - GUEST_SYNC(2); 154 + GUEST_SYNC(TEST_SAVE_RESTORE); 145 155 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == 0); 146 156 set_tilecfg(amx_cfg); 147 157 __ldtilecfg(amx_cfg); 148 - GUEST_SYNC(3); 158 + GUEST_SYNC(TEST_SAVE_RESTORE); 149 159 /* Check save/restore when trap to userspace */ 150 160 __tileloadd(tiledata); 151 - GUEST_SYNC(4); 161 + GUEST_SYNC(TEST_SAVE_TILEDATA | TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE); 162 + 163 + /* xfd=0x40000, disable amx tiledata */ 164 + wrmsr(MSR_IA32_XFD, XFEATURE_MASK_XTILE_DATA); 165 + 166 + /* host tries setting tiledata while guest XFD is set */ 167 + GUEST_SYNC(TEST_RESTORE_TILEDATA); 168 + GUEST_SYNC(TEST_SAVE_RESTORE); 169 + 170 + wrmsr(MSR_IA32_XFD, 0); 152 171 __tilerelease(); 153 - GUEST_SYNC(5); 172 + GUEST_SYNC(TEST_SAVE_RESTORE); 154 173 /* 155 174 * After XSAVEC, XTILEDATA is 
cleared in the xstate_bv but is set in 156 175 * the xcomp_bv. ··· 184 153 __xsavec(xstate, XFEATURE_MASK_XTILE_DATA); 185 154 GUEST_ASSERT(!(xstate->header.xstate_bv & XFEATURE_MASK_XTILE_DATA)); 186 155 GUEST_ASSERT(xstate->header.xcomp_bv & XFEATURE_MASK_XTILE_DATA); 156 + 157 + /* #NM test */ 187 158 188 159 /* xfd=0x40000, disable amx tiledata */ 189 160 wrmsr(MSR_IA32_XFD, XFEATURE_MASK_XTILE_DATA); ··· 199 166 GUEST_ASSERT(!(xstate->header.xstate_bv & XFEATURE_MASK_XTILE_DATA)); 200 167 GUEST_ASSERT((xstate->header.xcomp_bv & XFEATURE_MASK_XTILE_DATA)); 201 168 202 - GUEST_SYNC(6); 169 + GUEST_SYNC(TEST_SAVE_RESTORE); 203 170 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA); 204 171 set_tilecfg(amx_cfg); 205 172 __ldtilecfg(amx_cfg); 173 + 206 174 /* Trigger #NM exception */ 207 - __tileloadd(tiledata); 208 - GUEST_SYNC(10); 175 + vector = tileloadd_safe(tiledata); 176 + __GUEST_ASSERT(vector == NM_VECTOR, 177 + "Wanted #NM on tileloadd with XFD[18]=1, got %s", 178 + ex_str(vector)); 209 179 210 - GUEST_DONE(); 211 - } 212 - 213 - void guest_nm_handler(struct ex_regs *regs) 214 - { 215 - /* Check if #NM is triggered by XFEATURE_MASK_XTILE_DATA */ 216 - GUEST_SYNC(7); 217 180 GUEST_ASSERT(!(get_cr0() & X86_CR0_TS)); 218 181 GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILE_DATA); 219 182 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA); 220 - GUEST_SYNC(8); 183 + GUEST_SYNC(TEST_SAVE_RESTORE); 221 184 GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILE_DATA); 222 185 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA); 223 186 /* Clear xfd_err */ 224 187 wrmsr(MSR_IA32_XFD_ERR, 0); 225 188 /* xfd=0, enable amx */ 226 189 wrmsr(MSR_IA32_XFD, 0); 227 - GUEST_SYNC(9); 190 + GUEST_SYNC(TEST_SAVE_RESTORE); 191 + 192 + __tileloadd(tiledata); 193 + GUEST_SYNC(TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE); 194 + 195 + GUEST_DONE(); 228 196 } 229 197 230 198 int main(int argc, char *argv[]) ··· 234 200 struct kvm_vcpu 
*vcpu; 235 201 struct kvm_vm *vm; 236 202 struct kvm_x86_state *state; 203 + struct kvm_x86_state *tile_state = NULL; 237 204 int xsave_restore_size; 238 205 vm_vaddr_t amx_cfg, tiledata, xstate; 239 206 struct ucall uc; 240 - u32 amx_offset; 241 207 int ret; 242 208 243 209 /* ··· 262 228 263 229 vcpu_regs_get(vcpu, &regs1); 264 230 265 - /* Register #NM handler */ 266 - vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler); 267 - 268 231 /* amx cfg for guest_code */ 269 232 amx_cfg = vm_vaddr_alloc_page(vm); 270 233 memset(addr_gva2hva(vm, amx_cfg), 0x0, getpagesize()); ··· 275 244 memset(addr_gva2hva(vm, xstate), 0, PAGE_SIZE * DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE)); 276 245 vcpu_args_set(vcpu, 3, amx_cfg, tiledata, xstate); 277 246 247 + int iter = 0; 278 248 for (;;) { 279 249 vcpu_run(vcpu); 280 250 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO); ··· 285 253 REPORT_GUEST_ASSERT(uc); 286 254 /* NOT REACHED */ 287 255 case UCALL_SYNC: 288 - switch (uc.args[1]) { 289 - case 1: 290 - case 2: 291 - case 3: 292 - case 5: 293 - case 6: 294 - case 7: 295 - case 8: 296 - fprintf(stderr, "GUEST_SYNC(%ld)\n", uc.args[1]); 297 - break; 298 - case 4: 299 - case 10: 300 - fprintf(stderr, 301 - "GUEST_SYNC(%ld), check save/restore status\n", uc.args[1]); 256 + ++iter; 257 + if (uc.args[1] & TEST_SAVE_TILEDATA) { 258 + fprintf(stderr, "GUEST_SYNC #%d, save tiledata\n", iter); 259 + tile_state = vcpu_save_state(vcpu); 260 + } 261 + if (uc.args[1] & TEST_COMPARE_TILEDATA) { 262 + fprintf(stderr, "GUEST_SYNC #%d, check TMM0 contents\n", iter); 302 263 303 264 /* Compacted mode, get amx offset by xsave area 304 265 * size subtract 8K amx size. 
305 266 */ 306 - amx_offset = xsave_restore_size - NUM_TILES*TILE_SIZE; 307 - state = vcpu_save_state(vcpu); 308 - void *amx_start = (void *)state->xsave + amx_offset; 267 + u32 amx_offset = xsave_restore_size - NUM_TILES*TILE_SIZE; 268 + void *amx_start = (void *)tile_state->xsave + amx_offset; 309 269 void *tiles_data = (void *)addr_gva2hva(vm, tiledata); 310 270 /* Only check TMM0 register, 1 tile */ 311 271 ret = memcmp(amx_start, tiles_data, TILE_SIZE); 312 272 TEST_ASSERT(ret == 0, "memcmp failed, ret=%d", ret); 273 + } 274 + if (uc.args[1] & TEST_RESTORE_TILEDATA) { 275 + fprintf(stderr, "GUEST_SYNC #%d, before KVM_SET_XSAVE\n", iter); 276 + vcpu_xsave_set(vcpu, tile_state->xsave); 277 + fprintf(stderr, "GUEST_SYNC #%d, after KVM_SET_XSAVE\n", iter); 278 + } 279 + if (uc.args[1] & TEST_SAVE_RESTORE) { 280 + fprintf(stderr, "GUEST_SYNC #%d, save/restore VM state\n", iter); 281 + state = vcpu_save_state(vcpu); 282 + memset(&regs1, 0, sizeof(regs1)); 283 + vcpu_regs_get(vcpu, &regs1); 284 + 285 + kvm_vm_release(vm); 286 + 287 + /* Restore state in a new VM. */ 288 + vcpu = vm_recreate_with_one_vcpu(vm); 289 + vcpu_load_state(vcpu, state); 313 290 kvm_x86_state_cleanup(state); 314 - break; 315 - case 9: 316 - fprintf(stderr, 317 - "GUEST_SYNC(%ld), #NM exception and enable amx\n", uc.args[1]); 318 - break; 291 + 292 + memset(&regs2, 0, sizeof(regs2)); 293 + vcpu_regs_get(vcpu, &regs2); 294 + TEST_ASSERT(!memcmp(&regs1, &regs2, sizeof(regs2)), 295 + "Unexpected register values after vcpu_load_state; rdi: %lx rsi: %lx", 296 + (ulong) regs2.rdi, (ulong) regs2.rsi); 319 297 } 320 298 break; 321 299 case UCALL_DONE: ··· 335 293 TEST_FAIL("Unknown ucall %lu", uc.cmd); 336 294 } 337 295 338 - state = vcpu_save_state(vcpu); 339 - memset(&regs1, 0, sizeof(regs1)); 340 - vcpu_regs_get(vcpu, &regs1); 341 - 342 - kvm_vm_release(vm); 343 - 344 - /* Restore state in a new VM. 
*/ 345 - vcpu = vm_recreate_with_one_vcpu(vm); 346 - vcpu_load_state(vcpu, state); 347 - kvm_x86_state_cleanup(state); 348 - 349 - memset(&regs2, 0, sizeof(regs2)); 350 - vcpu_regs_get(vcpu, &regs2); 351 - TEST_ASSERT(!memcmp(&regs1, &regs2, sizeof(regs2)), 352 - "Unexpected register values after vcpu_load_state; rdi: %lx rsi: %lx", 353 - (ulong) regs2.rdi, (ulong) regs2.rsi); 354 296 } 355 297 done: 356 298 kvm_vm_free(vm);
+1
tools/testing/selftests/landlock/common.h
··· 237 237 struct sockaddr_un unix_addr; 238 238 socklen_t unix_addr_len; 239 239 }; 240 + struct sockaddr_storage _largest; 240 241 }; 241 242 }; 242 243
+15 -19
tools/testing/selftests/landlock/fs_test.c
··· 4362 4362 { 4363 4363 const char *const path = file1_s1d1; 4364 4364 int srv_fd, cli_fd, ruleset_fd; 4365 - socklen_t size; 4366 - struct sockaddr_un srv_un, cli_un; 4365 + struct sockaddr_un srv_un = { 4366 + .sun_family = AF_UNIX, 4367 + }; 4368 + struct sockaddr_un cli_un = { 4369 + .sun_family = AF_UNIX, 4370 + }; 4367 4371 const struct landlock_ruleset_attr attr = { 4368 4372 .handled_access_fs = LANDLOCK_ACCESS_FS_IOCTL_DEV, 4369 4373 }; 4370 4374 4371 4375 /* Sets up a server */ 4372 - srv_un.sun_family = AF_UNIX; 4373 - strncpy(srv_un.sun_path, path, sizeof(srv_un.sun_path)); 4374 - 4375 4376 ASSERT_EQ(0, unlink(path)); 4376 4377 srv_fd = socket(AF_UNIX, SOCK_STREAM, 0); 4377 4378 ASSERT_LE(0, srv_fd); 4378 4379 4379 - size = offsetof(struct sockaddr_un, sun_path) + strlen(srv_un.sun_path); 4380 - ASSERT_EQ(0, bind(srv_fd, (struct sockaddr *)&srv_un, size)); 4380 + strncpy(srv_un.sun_path, path, sizeof(srv_un.sun_path)); 4381 + ASSERT_EQ(0, bind(srv_fd, (struct sockaddr *)&srv_un, sizeof(srv_un))); 4382 + 4381 4383 ASSERT_EQ(0, listen(srv_fd, 10 /* qlen */)); 4382 4384 4383 4385 /* Enables Landlock. */ ··· 4389 4387 ASSERT_EQ(0, close(ruleset_fd)); 4390 4388 4391 4389 /* Sets up a client connection to it */ 4392 - cli_un.sun_family = AF_UNIX; 4393 4390 cli_fd = socket(AF_UNIX, SOCK_STREAM, 0); 4394 4391 ASSERT_LE(0, cli_fd); 4395 4392 4396 - size = offsetof(struct sockaddr_un, sun_path) + strlen(cli_un.sun_path); 4397 - ASSERT_EQ(0, bind(cli_fd, (struct sockaddr *)&cli_un, size)); 4398 - 4399 - bzero(&cli_un, sizeof(cli_un)); 4400 - cli_un.sun_family = AF_UNIX; 4401 4393 strncpy(cli_un.sun_path, path, sizeof(cli_un.sun_path)); 4402 - size = offsetof(struct sockaddr_un, sun_path) + strlen(cli_un.sun_path); 4403 - 4404 - ASSERT_EQ(0, connect(cli_fd, (struct sockaddr *)&cli_un, size)); 4394 + ASSERT_EQ(0, 4395 + connect(cli_fd, (struct sockaddr *)&cli_un, sizeof(cli_un))); 4405 4396 4406 4397 /* FIONREAD and other IOCTLs should not be forbidden. 
*/ 4407 4398 EXPECT_EQ(0, test_fionread_ioctl(cli_fd)); 4408 4399 4409 - ASSERT_EQ(0, close(cli_fd)); 4400 + EXPECT_EQ(0, close(cli_fd)); 4401 + EXPECT_EQ(0, close(srv_fd)); 4410 4402 } 4411 4403 4412 4404 /* clang-format off */ ··· 7070 7074 return -E2BIG; 7071 7075 7072 7076 /* 7073 - * It is assume that absolute_path does not contain control characters nor 7074 - * spaces, see audit_string_contains_control(). 7077 + * It is assumed that absolute_path does not contain control 7078 + * characters nor spaces, see audit_string_contains_control(). 7075 7079 */ 7076 7080 absolute_path = realpath(path, NULL); 7077 7081 if (!absolute_path)
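The landlock fs_test cleanup relies on two properties of `struct sockaddr_un`: a designated initializer zeroes the whole struct (so `sun_path` is NUL-padded), and passing `sizeof(srv_un)` as the addrlen is valid whenever the path fits, which removes the manual `offsetof()` length arithmetic. A sketch of the simplified setup (helper name is illustrative):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Zero-initialize an AF_UNIX address and copy in the path; returns the
 * addrlen to pass to bind()/connect(), or 0 if the path doesn't fit. */
static socklen_t unix_addr_init(struct sockaddr_un *un, const char *path)
{
	*un = (struct sockaddr_un){ .sun_family = AF_UNIX };
	if (strlen(path) >= sizeof(un->sun_path))
		return 0;
	strncpy(un->sun_path, path, sizeof(un->sun_path) - 1);
	return sizeof(*un);
}
```

The `offsetof(...) + strlen(...)` form is still needed for abstract-namespace sockets, where the name may contain embedded NULs; for ordinary pathname sockets the full-struct size is simpler and equivalent.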
+28 -2
tools/testing/selftests/landlock/net_test.c
··· 121 121 { 122 122 switch (srv->protocol.domain) { 123 123 case AF_UNSPEC: 124 + if (minimal) 125 + return sizeof(sa_family_t); 126 + return sizeof(struct sockaddr_storage); 127 + 124 128 case AF_INET: 125 129 return sizeof(srv->ipv4_addr); 126 130 ··· 762 758 bind_fd = socket_variant(&self->srv0); 763 759 ASSERT_LE(0, bind_fd); 764 760 761 + /* Tries to bind with too small addrlen. */ 762 + EXPECT_EQ(-EINVAL, bind_variant_addrlen( 763 + bind_fd, &self->unspec_any0, 764 + get_addrlen(&self->unspec_any0, true) - 1)); 765 + 765 766 /* Allowed bind on AF_UNSPEC/INADDR_ANY. */ 766 767 ret = bind_variant(bind_fd, &self->unspec_any0); 767 768 if (variant->prot.domain == AF_INET) { ··· 775 766 TH_LOG("Failed to bind to unspec/any socket: %s", 776 767 strerror(errno)); 777 768 } 769 + } else if (variant->prot.domain == AF_INET6) { 770 + EXPECT_EQ(-EAFNOSUPPORT, ret); 778 771 } else { 779 772 EXPECT_EQ(-EINVAL, ret); 780 773 } ··· 803 792 } else { 804 793 EXPECT_EQ(0, ret); 805 794 } 795 + } else if (variant->prot.domain == AF_INET6) { 796 + EXPECT_EQ(-EAFNOSUPPORT, ret); 806 797 } else { 807 798 EXPECT_EQ(-EINVAL, ret); 808 799 } ··· 814 801 bind_fd = socket_variant(&self->srv0); 815 802 ASSERT_LE(0, bind_fd); 816 803 ret = bind_variant(bind_fd, &self->unspec_srv0); 817 - if (variant->prot.domain == AF_INET) { 804 + if (variant->prot.domain == AF_INET || 805 + variant->prot.domain == AF_INET6) { 818 806 EXPECT_EQ(-EAFNOSUPPORT, ret); 819 807 } else { 820 808 EXPECT_EQ(-EINVAL, ret) ··· 906 892 EXPECT_EQ(0, close(ruleset_fd)); 907 893 } 908 894 909 - ret = connect_variant(connect_fd, &self->unspec_any0); 895 + /* Try to re-disconnect with a truncated address struct. */ 896 + EXPECT_EQ(-EINVAL, 897 + connect_variant_addrlen( 898 + connect_fd, &self->unspec_any0, 899 + get_addrlen(&self->unspec_any0, true) - 1)); 900 + 901 + /* 902 + * Re-disconnect, with a minimal sockaddr struct (just a 903 + * bare af_family=AF_UNSPEC field). 
904 + */ 905 + ret = connect_variant_addrlen(connect_fd, &self->unspec_any0, 906 + get_addrlen(&self->unspec_any0, 907 + true)); 910 908 if (self->srv0.protocol.domain == AF_UNIX && 911 909 self->srv0.protocol.type == SOCK_STREAM) { 912 910 EXPECT_EQ(-EINVAL, ret);
+5 -149
tools/testing/selftests/landlock/ptrace_test.c
··· 86 86 } 87 87 88 88 /* clang-format off */ 89 - FIXTURE(hierarchy) {}; 89 + FIXTURE(scoped_domains) {}; 90 90 /* clang-format on */ 91 - 92 - FIXTURE_VARIANT(hierarchy) 93 - { 94 - const bool domain_both; 95 - const bool domain_parent; 96 - const bool domain_child; 97 - }; 98 91 99 92 /* 100 93 * Test multiple tracing combinations between a parent process P1 and a child ··· 97 104 * restriction is enforced in addition to any Landlock check, which means that 98 105 * all P2 requests to trace P1 would be denied. 99 106 */ 107 + #include "scoped_base_variants.h" 100 108 101 - /* 102 - * No domain 103 - * 104 - * P1-. P1 -> P2 : allow 105 - * \ P2 -> P1 : allow 106 - * 'P2 107 - */ 108 - /* clang-format off */ 109 - FIXTURE_VARIANT_ADD(hierarchy, allow_without_domain) { 110 - /* clang-format on */ 111 - .domain_both = false, 112 - .domain_parent = false, 113 - .domain_child = false, 114 - }; 115 - 116 - /* 117 - * Child domain 118 - * 119 - * P1--. P1 -> P2 : allow 120 - * \ P2 -> P1 : deny 121 - * .'-----. 122 - * | P2 | 123 - * '------' 124 - */ 125 - /* clang-format off */ 126 - FIXTURE_VARIANT_ADD(hierarchy, allow_with_one_domain) { 127 - /* clang-format on */ 128 - .domain_both = false, 129 - .domain_parent = false, 130 - .domain_child = true, 131 - }; 132 - 133 - /* 134 - * Parent domain 135 - * .------. 136 - * | P1 --. P1 -> P2 : deny 137 - * '------' \ P2 -> P1 : allow 138 - * ' 139 - * P2 140 - */ 141 - /* clang-format off */ 142 - FIXTURE_VARIANT_ADD(hierarchy, deny_with_parent_domain) { 143 - /* clang-format on */ 144 - .domain_both = false, 145 - .domain_parent = true, 146 - .domain_child = false, 147 - }; 148 - 149 - /* 150 - * Parent + child domain (siblings) 151 - * .------. 152 - * | P1 ---. P1 -> P2 : deny 153 - * '------' \ P2 -> P1 : deny 154 - * .---'--. 
155 - * | P2 | 156 - * '------' 157 - */ 158 - /* clang-format off */ 159 - FIXTURE_VARIANT_ADD(hierarchy, deny_with_sibling_domain) { 160 - /* clang-format on */ 161 - .domain_both = false, 162 - .domain_parent = true, 163 - .domain_child = true, 164 - }; 165 - 166 - /* 167 - * Same domain (inherited) 168 - * .-------------. 169 - * | P1----. | P1 -> P2 : allow 170 - * | \ | P2 -> P1 : allow 171 - * | ' | 172 - * | P2 | 173 - * '-------------' 174 - */ 175 - /* clang-format off */ 176 - FIXTURE_VARIANT_ADD(hierarchy, allow_sibling_domain) { 177 - /* clang-format on */ 178 - .domain_both = true, 179 - .domain_parent = false, 180 - .domain_child = false, 181 - }; 182 - 183 - /* 184 - * Inherited + child domain 185 - * .-----------------. 186 - * | P1----. | P1 -> P2 : allow 187 - * | \ | P2 -> P1 : deny 188 - * | .-'----. | 189 - * | | P2 | | 190 - * | '------' | 191 - * '-----------------' 192 - */ 193 - /* clang-format off */ 194 - FIXTURE_VARIANT_ADD(hierarchy, allow_with_nested_domain) { 195 - /* clang-format on */ 196 - .domain_both = true, 197 - .domain_parent = false, 198 - .domain_child = true, 199 - }; 200 - 201 - /* 202 - * Inherited + parent domain 203 - * .-----------------. 204 - * |.------. | P1 -> P2 : deny 205 - * || P1 ----. | P2 -> P1 : allow 206 - * |'------' \ | 207 - * | ' | 208 - * | P2 | 209 - * '-----------------' 210 - */ 211 - /* clang-format off */ 212 - FIXTURE_VARIANT_ADD(hierarchy, deny_with_nested_and_parent_domain) { 213 - /* clang-format on */ 214 - .domain_both = true, 215 - .domain_parent = true, 216 - .domain_child = false, 217 - }; 218 - 219 - /* 220 - * Inherited + parent and child domain (siblings) 221 - * .-----------------. 222 - * | .------. | P1 -> P2 : deny 223 - * | | P1 . | P2 -> P1 : deny 224 - * | '------'\ | 225 - * | \ | 226 - * | .--'---. 
| 227 - * | | P2 | | 228 - * | '------' | 229 - * '-----------------' 230 - */ 231 - /* clang-format off */ 232 - FIXTURE_VARIANT_ADD(hierarchy, deny_with_forked_domain) { 233 - /* clang-format on */ 234 - .domain_both = true, 235 - .domain_parent = true, 236 - .domain_child = true, 237 - }; 238 - 239 - FIXTURE_SETUP(hierarchy) 109 + FIXTURE_SETUP(scoped_domains) 240 110 { 241 111 } 242 112 243 - FIXTURE_TEARDOWN(hierarchy) 113 + FIXTURE_TEARDOWN(scoped_domains) 244 114 { 245 115 } 246 116 247 117 /* Test PTRACE_TRACEME and PTRACE_ATTACH for parent and child. */ 248 - TEST_F(hierarchy, trace) 118 + TEST_F(scoped_domains, trace) 249 119 { 250 120 pid_t child, parent; 251 121 int status, err_proc_read;
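The eight removed "hierarchy" variant diagrams all follow one rule: a trace request is denied exactly when the tracer carries a Landlock domain that the tracee is not part of, while a shared (inherited) domain changes nothing. A minimal sketch of those expectations, using hypothetical names (`domain_layout`, `parent_may_trace_child`) that are not the harness's own:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative encoding of the removed "hierarchy" variants, now shared
 * via scoped_base_variants.h. Field and function names are hypothetical. */
struct domain_layout {
	bool domain_both;   /* common inherited domain (no effect on its own) */
	bool domain_parent; /* parent-only domain */
	bool domain_child;  /* child-only domain */
};

/* P1 -> P2 is denied exactly when the parent has a domain the child does
 * not share; symmetrically for P2 -> P1. */
static bool parent_may_trace_child(const struct domain_layout *l)
{
	return !l->domain_parent;
}

static bool child_may_trace_parent(const struct domain_layout *l)
{
	return !l->domain_child;
}
```

This matches each diagram: for instance "deny_with_sibling_domain" (parent and child each in their own domain) denies both directions, while "allow_sibling_domain" (one inherited domain) allows both.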
+10 -13
tools/testing/selftests/landlock/scoped_abstract_unix_test.c
··· 543 543 544 544 ASSERT_EQ(1, write(pipe_child[1], ".", 1)); 545 545 ASSERT_EQ(grand_child, waitpid(grand_child, &status, 0)); 546 - EXPECT_EQ(0, close(stream_server_child)) 546 + EXPECT_EQ(0, close(stream_server_child)); 547 547 EXPECT_EQ(0, close(dgram_server_child)); 548 548 return; 549 549 } ··· 779 779 780 780 TEST_F(various_address_sockets, scoped_pathname_sockets) 781 781 { 782 - socklen_t size_stream, size_dgram; 783 782 pid_t child; 784 783 int status; 785 784 char buf_child, buf_parent; ··· 797 798 /* Pathname address. */ 798 799 snprintf(stream_pathname_addr.sun_path, 799 800 sizeof(stream_pathname_addr.sun_path), "%s", stream_path); 800 - size_stream = offsetof(struct sockaddr_un, sun_path) + 801 - strlen(stream_pathname_addr.sun_path); 802 801 snprintf(dgram_pathname_addr.sun_path, 803 802 sizeof(dgram_pathname_addr.sun_path), "%s", dgram_path); 804 - size_dgram = offsetof(struct sockaddr_un, sun_path) + 805 - strlen(dgram_pathname_addr.sun_path); 806 803 807 804 /* Abstract address. */ 808 805 memset(&stream_abstract_addr, 0, sizeof(stream_abstract_addr)); ··· 836 841 /* Connects with pathname sockets. */ 837 842 stream_pathname_socket = socket(AF_UNIX, SOCK_STREAM, 0); 838 843 ASSERT_LE(0, stream_pathname_socket); 839 - ASSERT_EQ(0, connect(stream_pathname_socket, 840 - &stream_pathname_addr, size_stream)); 844 + ASSERT_EQ(0, 845 + connect(stream_pathname_socket, &stream_pathname_addr, 846 + sizeof(stream_pathname_addr))); 841 847 ASSERT_EQ(1, write(stream_pathname_socket, "b", 1)); 842 848 EXPECT_EQ(0, close(stream_pathname_socket)); 843 849 ··· 846 850 dgram_pathname_socket = socket(AF_UNIX, SOCK_DGRAM, 0); 847 851 ASSERT_LE(0, dgram_pathname_socket); 848 852 err = sendto(dgram_pathname_socket, "c", 1, 0, 849 - &dgram_pathname_addr, size_dgram); 853 + &dgram_pathname_addr, sizeof(dgram_pathname_addr)); 850 854 EXPECT_EQ(1, err); 851 855 852 856 /* Sends with connection. 
*/ 853 - ASSERT_EQ(0, connect(dgram_pathname_socket, 854 - &dgram_pathname_addr, size_dgram)); 857 + ASSERT_EQ(0, 858 + connect(dgram_pathname_socket, &dgram_pathname_addr, 859 + sizeof(dgram_pathname_addr))); 855 860 ASSERT_EQ(1, write(dgram_pathname_socket, "d", 1)); 856 861 EXPECT_EQ(0, close(dgram_pathname_socket)); 857 862 ··· 907 910 stream_pathname_socket = socket(AF_UNIX, SOCK_STREAM, 0); 908 911 ASSERT_LE(0, stream_pathname_socket); 909 912 ASSERT_EQ(0, bind(stream_pathname_socket, &stream_pathname_addr, 910 - size_stream)); 913 + sizeof(stream_pathname_addr))); 911 914 ASSERT_EQ(0, listen(stream_pathname_socket, backlog)); 912 915 913 916 dgram_pathname_socket = socket(AF_UNIX, SOCK_DGRAM, 0); 914 917 ASSERT_LE(0, dgram_pathname_socket); 915 918 ASSERT_EQ(0, bind(dgram_pathname_socket, &dgram_pathname_addr, 916 - size_dgram)); 919 + sizeof(dgram_pathname_addr))); 917 920 918 921 /* Sets up abstract servers. */ 919 922 stream_abstract_socket = socket(AF_UNIX, SOCK_STREAM, 0);
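The hunk above drops the hand-computed pathname address lengths (`offsetof(struct sockaddr_un, sun_path) + strlen(...)`) in favour of `sizeof(addr)`. For a NUL-terminated pathname address both lengths are accepted by `bind()`/`connect()`; `sizeof` is simply the upper bound. A small sketch of the two sizings (helper names are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Exact length: family header plus the pathname bytes actually used. */
static socklen_t exact_unix_len(const struct sockaddr_un *addr)
{
	return offsetof(struct sockaddr_un, sun_path) +
	       strlen(addr->sun_path);
}

/* Fill a pathname address the way the test does, leaving sun_path
 * NUL-terminated so passing sizeof(addr) to connect()/bind() is valid. */
static struct sockaddr_un make_pathname_addr(const char *path)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };

	snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);
	return addr;
}
```

Using `sizeof` removes the two `size_stream`/`size_dgram` locals without changing behaviour; only abstract addresses (leading NUL in `sun_path`) still need an exact length, which the surrounding test keeps computing separately.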
+7 -2
tools/testing/selftests/landlock/scoped_base_variants.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 - * Landlock scoped_domains variants 3 + * Landlock scoped_domains test variant definition. 4 4 * 5 - * See the hierarchy variants from ptrace_test.c 5 + * This file defines a fixture variant "scoped_domains" that has all 6 + * permutations of the parent and child processes being in separate or 7 + * shared Landlock domains, or not being in a Landlock domain at all. 8 + * 9 + * Scoped access tests can include this file to avoid repeating these 10 + * combinations. 6 11 * 7 12 * Copyright © 2017-2020 Mickaël Salaün <mic@digikod.net> 8 13 * Copyright © 2019-2020 ANSSI
+1 -1
tools/testing/selftests/mm/gup_longterm.c
··· 179 179 if (rw && shared && fs_is_unknown(fs_type)) { 180 180 ksft_print_msg("Unknown filesystem\n"); 181 181 result = KSFT_SKIP; 182 - return; 182 + break; 183 183 } 184 184 /* 185 185 * R/O pinning or pinning in a private mapping is always
+357 -27
tools/testing/selftests/mm/merge.c
··· 22 22 struct procmap_fd procmap; 23 23 }; 24 24 25 + static char *map_carveout(unsigned int page_size) 26 + { 27 + return mmap(NULL, 30 * page_size, PROT_NONE, 28 + MAP_ANON | MAP_PRIVATE, -1, 0); 29 + } 30 + 31 + static pid_t do_fork(struct procmap_fd *procmap) 32 + { 33 + pid_t pid = fork(); 34 + 35 + if (pid == -1) 36 + return -1; 37 + if (pid != 0) { 38 + wait(NULL); 39 + return pid; 40 + } 41 + 42 + /* Reopen for child. */ 43 + if (close_procmap(procmap)) 44 + return -1; 45 + if (open_self_procmap(procmap)) 46 + return -1; 47 + 48 + return 0; 49 + } 50 + 25 51 FIXTURE_SETUP(merge) 26 52 { 27 53 self->page_size = psize(); 28 54 /* Carve out PROT_NONE region to map over. */ 29 - self->carveout = mmap(NULL, 30 * self->page_size, PROT_NONE, 30 - MAP_ANON | MAP_PRIVATE, -1, 0); 55 + self->carveout = map_carveout(self->page_size); 31 56 ASSERT_NE(self->carveout, MAP_FAILED); 32 57 /* Setup PROCMAP_QUERY interface. */ 33 58 ASSERT_EQ(open_self_procmap(&self->procmap), 0); ··· 61 36 FIXTURE_TEARDOWN(merge) 62 37 { 63 38 ASSERT_EQ(munmap(self->carveout, 30 * self->page_size), 0); 64 - ASSERT_EQ(close_procmap(&self->procmap), 0); 39 + /* May fail for parent of forked process. */ 40 + close_procmap(&self->procmap); 65 41 /* 66 42 * Clear unconditionally, as some tests set this. It is no issue if this 67 43 * fails (KSM may be disabled for instance). 
68 44 */ 45 + prctl(PR_SET_MEMORY_MERGE, 0, 0, 0, 0); 46 + } 47 + 48 + FIXTURE(merge_with_fork) 49 + { 50 + unsigned int page_size; 51 + char *carveout; 52 + struct procmap_fd procmap; 53 + }; 54 + 55 + FIXTURE_VARIANT(merge_with_fork) 56 + { 57 + bool forked; 58 + }; 59 + 60 + FIXTURE_VARIANT_ADD(merge_with_fork, forked) 61 + { 62 + .forked = true, 63 + }; 64 + 65 + FIXTURE_VARIANT_ADD(merge_with_fork, unforked) 66 + { 67 + .forked = false, 68 + }; 69 + 70 + FIXTURE_SETUP(merge_with_fork) 71 + { 72 + self->page_size = psize(); 73 + self->carveout = map_carveout(self->page_size); 74 + ASSERT_NE(self->carveout, MAP_FAILED); 75 + ASSERT_EQ(open_self_procmap(&self->procmap), 0); 76 + } 77 + 78 + FIXTURE_TEARDOWN(merge_with_fork) 79 + { 80 + ASSERT_EQ(munmap(self->carveout, 30 * self->page_size), 0); 81 + ASSERT_EQ(close_procmap(&self->procmap), 0); 82 + /* See above. */ 69 83 prctl(PR_SET_MEMORY_MERGE, 0, 0, 0, 0); 70 84 } 71 85 ··· 386 322 unsigned int page_size = self->page_size; 387 323 char *carveout = self->carveout; 388 324 struct procmap_fd *procmap = &self->procmap; 389 - pid_t pid; 390 325 char *ptr, *ptr2; 326 + pid_t pid; 391 327 int i; 392 328 393 329 /* ··· 408 344 */ 409 345 ptr[0] = 'x'; 410 346 411 - pid = fork(); 347 + pid = do_fork(&self->procmap); 412 348 ASSERT_NE(pid, -1); 413 - 414 - if (pid != 0) { 415 - wait(NULL); 349 + if (pid != 0) 416 350 return; 417 - } 418 - 419 - /* Child process below: */ 420 - 421 - /* Reopen for child. */ 422 - ASSERT_EQ(close_procmap(&self->procmap), 0); 423 - ASSERT_EQ(open_self_procmap(&self->procmap), 0); 424 351 425 352 /* unCOWing everything does not cause the AVC to go away. 
*/ 426 353 for (i = 0; i < 5 * page_size; i += page_size) ··· 441 386 unsigned int page_size = self->page_size; 442 387 char *carveout = self->carveout; 443 388 struct procmap_fd *procmap = &self->procmap; 444 - pid_t pid; 445 389 char *ptr, *ptr2; 390 + pid_t pid; 446 391 int i; 447 392 448 393 /* ··· 463 408 */ 464 409 ptr[0] = 'x'; 465 410 466 - pid = fork(); 411 + pid = do_fork(&self->procmap); 467 412 ASSERT_NE(pid, -1); 468 - 469 - if (pid != 0) { 470 - wait(NULL); 413 + if (pid != 0) 471 414 return; 472 - } 473 - 474 - /* Child process below: */ 475 - 476 - /* Reopen for child. */ 477 - ASSERT_EQ(close_procmap(&self->procmap), 0); 478 - ASSERT_EQ(open_self_procmap(&self->procmap), 0); 479 415 480 416 /* unCOWing everything does not cause the AVC to go away. */ 481 417 for (i = 0; i < 5 * page_size; i += page_size) ··· 1215 1169 ASSERT_TRUE(find_vma_procmap(procmap, ptr)); 1216 1170 ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr); 1217 1171 ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 15 * page_size); 1172 + } 1173 + 1174 + TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_prev) 1175 + { 1176 + struct procmap_fd *procmap = &self->procmap; 1177 + unsigned int page_size = self->page_size; 1178 + unsigned long offset; 1179 + char *ptr_a, *ptr_b; 1180 + 1181 + /* 1182 + * mremap() such that A and B merge: 1183 + * 1184 + * |------------| 1185 + * | \ | 1186 + * |-----------| | / |---------| 1187 + * | unfaulted | v \ | faulted | 1188 + * |-----------| / |---------| 1189 + * B \ A 1190 + */ 1191 + 1192 + /* Map VMA A into place. */ 1193 + ptr_a = mmap(&self->carveout[page_size + 3 * page_size], 1194 + 3 * page_size, 1195 + PROT_READ | PROT_WRITE, 1196 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 1197 + ASSERT_NE(ptr_a, MAP_FAILED); 1198 + /* Fault it in. 
*/ 1199 + ptr_a[0] = 'x'; 1200 + 1201 + if (variant->forked) { 1202 + pid_t pid = do_fork(&self->procmap); 1203 + 1204 + ASSERT_NE(pid, -1); 1205 + if (pid != 0) 1206 + return; 1207 + } 1208 + 1209 + /* 1210 + * Now move it out of the way so we can place VMA B in position, 1211 + * unfaulted. 1212 + */ 1213 + ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size, 1214 + MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]); 1215 + ASSERT_NE(ptr_a, MAP_FAILED); 1216 + 1217 + /* Map VMA B into place. */ 1218 + ptr_b = mmap(&self->carveout[page_size], 3 * page_size, 1219 + PROT_READ | PROT_WRITE, 1220 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 1221 + ASSERT_NE(ptr_b, MAP_FAILED); 1222 + 1223 + /* 1224 + * Now move VMA A into position with MREMAP_DONTUNMAP to catch incorrect 1225 + * anon_vma propagation. 1226 + */ 1227 + ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size, 1228 + MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 1229 + &self->carveout[page_size + 3 * page_size]); 1230 + ASSERT_NE(ptr_a, MAP_FAILED); 1231 + 1232 + /* The VMAs should have merged, if not forked. */ 1233 + ASSERT_TRUE(find_vma_procmap(procmap, ptr_b)); 1234 + ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_b); 1235 + 1236 + offset = variant->forked ? 3 * page_size : 6 * page_size; 1237 + ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + offset); 1238 + } 1239 + 1240 + TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_next) 1241 + { 1242 + struct procmap_fd *procmap = &self->procmap; 1243 + unsigned int page_size = self->page_size; 1244 + unsigned long offset; 1245 + char *ptr_a, *ptr_b; 1246 + 1247 + /* 1248 + * mremap() such that A and B merge: 1249 + * 1250 + * |---------------------------| 1251 + * | \ | 1252 + * | |-----------| / |---------| 1253 + * v | unfaulted | \ | faulted | 1254 + * |-----------| / |---------| 1255 + * B \ A 1256 + * 1257 + * Then unmap VMA A to trigger the bug. 1258 + */ 1259 + 1260 + /* Map VMA A into place. 
*/ 1261 + ptr_a = mmap(&self->carveout[page_size], 3 * page_size, 1262 + PROT_READ | PROT_WRITE, 1263 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 1264 + ASSERT_NE(ptr_a, MAP_FAILED); 1265 + /* Fault it in. */ 1266 + ptr_a[0] = 'x'; 1267 + 1268 + if (variant->forked) { 1269 + pid_t pid = do_fork(&self->procmap); 1270 + 1271 + ASSERT_NE(pid, -1); 1272 + if (pid != 0) 1273 + return; 1274 + } 1275 + 1276 + /* 1277 + * Now move it out of the way so we can place VMA B in position, 1278 + * unfaulted. 1279 + */ 1280 + ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size, 1281 + MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]); 1282 + ASSERT_NE(ptr_a, MAP_FAILED); 1283 + 1284 + /* Map VMA B into place. */ 1285 + ptr_b = mmap(&self->carveout[page_size + 3 * page_size], 3 * page_size, 1286 + PROT_READ | PROT_WRITE, 1287 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 1288 + ASSERT_NE(ptr_b, MAP_FAILED); 1289 + 1290 + /* 1291 + * Now move VMA A into position with MREMAP_DONTUNMAP to catch incorrect 1292 + * anon_vma propagation. 1293 + */ 1294 + ptr_a = mremap(ptr_a, 3 * page_size, 3 * page_size, 1295 + MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 1296 + &self->carveout[page_size]); 1297 + ASSERT_NE(ptr_a, MAP_FAILED); 1298 + 1299 + /* The VMAs should have merged, if not forked. */ 1300 + ASSERT_TRUE(find_vma_procmap(procmap, ptr_a)); 1301 + ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a); 1302 + offset = variant->forked ? 
3 * page_size : 6 * page_size; 1303 + ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + offset); 1304 + } 1305 + 1306 + TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_prev_unfaulted_next) 1307 + { 1308 + struct procmap_fd *procmap = &self->procmap; 1309 + unsigned int page_size = self->page_size; 1310 + unsigned long offset; 1311 + char *ptr_a, *ptr_b, *ptr_c; 1312 + 1313 + /* 1314 + * mremap() with MREMAP_DONTUNMAP such that A, B and C merge: 1315 + * 1316 + * |---------------------------| 1317 + * | \ | 1318 + * |-----------| | |-----------| / |---------| 1319 + * | unfaulted | v | unfaulted | \ | faulted | 1320 + * |-----------| |-----------| / |---------| 1321 + * A C \ B 1322 + */ 1323 + 1324 + /* Map VMA B into place. */ 1325 + ptr_b = mmap(&self->carveout[page_size + 3 * page_size], 3 * page_size, 1326 + PROT_READ | PROT_WRITE, 1327 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 1328 + ASSERT_NE(ptr_b, MAP_FAILED); 1329 + /* Fault it in. */ 1330 + ptr_b[0] = 'x'; 1331 + 1332 + if (variant->forked) { 1333 + pid_t pid = do_fork(&self->procmap); 1334 + 1335 + ASSERT_NE(pid, -1); 1336 + if (pid != 0) 1337 + return; 1338 + } 1339 + 1340 + /* 1341 + * Now move it out of the way so we can place VMAs A, C in position, 1342 + * unfaulted. 1343 + */ 1344 + ptr_b = mremap(ptr_b, 3 * page_size, 3 * page_size, 1345 + MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]); 1346 + ASSERT_NE(ptr_b, MAP_FAILED); 1347 + 1348 + /* Map VMA A into place. */ 1349 + 1350 + ptr_a = mmap(&self->carveout[page_size], 3 * page_size, 1351 + PROT_READ | PROT_WRITE, 1352 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 1353 + ASSERT_NE(ptr_a, MAP_FAILED); 1354 + 1355 + /* Map VMA C into place. 
*/ 1356 + ptr_c = mmap(&self->carveout[page_size + 3 * page_size + 3 * page_size], 1357 + 3 * page_size, PROT_READ | PROT_WRITE, 1358 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 1359 + ASSERT_NE(ptr_c, MAP_FAILED); 1360 + 1361 + /* 1362 + * Now move VMA B into position with MREMAP_DONTUNMAP to catch incorrect 1363 + * anon_vma propagation. 1364 + */ 1365 + ptr_b = mremap(ptr_b, 3 * page_size, 3 * page_size, 1366 + MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 1367 + &self->carveout[page_size + 3 * page_size]); 1368 + ASSERT_NE(ptr_b, MAP_FAILED); 1369 + 1370 + /* The VMAs should have merged, if not forked. */ 1371 + ASSERT_TRUE(find_vma_procmap(procmap, ptr_a)); 1372 + ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a); 1373 + offset = variant->forked ? 3 * page_size : 9 * page_size; 1374 + ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + offset); 1375 + 1376 + /* If forked, B and C should also not have merged. */ 1377 + if (variant->forked) { 1378 + ASSERT_TRUE(find_vma_procmap(procmap, ptr_b)); 1379 + ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_b); 1380 + ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + 3 * page_size); 1381 + } 1382 + } 1383 + 1384 + TEST_F(merge_with_fork, mremap_faulted_to_unfaulted_prev_faulted_next) 1385 + { 1386 + struct procmap_fd *procmap = &self->procmap; 1387 + unsigned int page_size = self->page_size; 1388 + char *ptr_a, *ptr_b, *ptr_bc; 1389 + 1390 + /* 1391 + * mremap() with MREMAP_DONTUNMAP such that A, B and C merge: 1392 + * 1393 + * |---------------------------| 1394 + * | \ | 1395 + * |-----------| | |-----------| / |---------| 1396 + * | unfaulted | v | faulted | \ | faulted | 1397 + * |-----------| |-----------| / |---------| 1398 + * A C \ B 1399 + */ 1400 + 1401 + /* 1402 + * Map VMA B and C into place. We have to map them together so their 1403 + * anon_vma is the same and the vma->vm_pgoff's are correctly aligned. 
1404 + */ 1405 + ptr_bc = mmap(&self->carveout[page_size + 3 * page_size], 1406 + 3 * page_size + 3 * page_size, 1407 + PROT_READ | PROT_WRITE, 1408 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 1409 + ASSERT_NE(ptr_bc, MAP_FAILED); 1410 + 1411 + /* Fault it in. */ 1412 + ptr_bc[0] = 'x'; 1413 + 1414 + if (variant->forked) { 1415 + pid_t pid = do_fork(&self->procmap); 1416 + 1417 + ASSERT_NE(pid, -1); 1418 + if (pid != 0) 1419 + return; 1420 + } 1421 + 1422 + /* 1423 + * Now move VMA B out the way (splitting VMA BC) so we can place VMA A 1424 + * in position, unfaulted, and leave the remainder of the VMA we just 1425 + * moved in place, faulted, as VMA C. 1426 + */ 1427 + ptr_b = mremap(ptr_bc, 3 * page_size, 3 * page_size, 1428 + MREMAP_FIXED | MREMAP_MAYMOVE, &self->carveout[20 * page_size]); 1429 + ASSERT_NE(ptr_b, MAP_FAILED); 1430 + 1431 + /* Map VMA A into place. */ 1432 + ptr_a = mmap(&self->carveout[page_size], 3 * page_size, 1433 + PROT_READ | PROT_WRITE, 1434 + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); 1435 + ASSERT_NE(ptr_a, MAP_FAILED); 1436 + 1437 + /* 1438 + * Now move VMA B into position with MREMAP_DONTUNMAP to catch incorrect 1439 + * anon_vma propagation. 1440 + */ 1441 + ptr_b = mremap(ptr_b, 3 * page_size, 3 * page_size, 1442 + MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 1443 + &self->carveout[page_size + 3 * page_size]); 1444 + ASSERT_NE(ptr_b, MAP_FAILED); 1445 + 1446 + /* The VMAs should have merged. A,B,C if unforked, B, C if forked. */ 1447 + if (variant->forked) { 1448 + ASSERT_TRUE(find_vma_procmap(procmap, ptr_b)); 1449 + ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_b); 1450 + ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_b + 6 * page_size); 1451 + } else { 1452 + ASSERT_TRUE(find_vma_procmap(procmap, ptr_a)); 1453 + ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr_a); 1454 + ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr_a + 9 * page_size); 1455 + } 1218 1456 } 1219 1457 1220 1458 TEST_HARNESS_MAIN
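The new `merge_with_fork` tests all share one pattern: carve out a `PROT_NONE` region, overlay a private anonymous VMA with `MAP_FIXED`, fault it in so an `anon_vma` exists, then relocate it with `mremap(MREMAP_FIXED | MREMAP_MAYMOVE)` (the tests additionally pass `MREMAP_DONTUNMAP` on the final move to expose incorrect `anon_vma` propagation). A minimal runnable sketch of the carveout-and-move step, without the `MREMAP_DONTUNMAP` part:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch of the test scaffolding above: reserve an address range, place
 * a faulted anonymous VMA inside it, and move the VMA elsewhere in the
 * range. Returns the moved VMA, or NULL/MAP_FAILED on error. */
static char *overlay_and_move(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char *carveout, *vma;

	/* Carve out a PROT_NONE region to map over. */
	carveout = mmap(NULL, 30 * page, PROT_NONE,
			MAP_ANON | MAP_PRIVATE, -1, 0);
	if (carveout == MAP_FAILED)
		return NULL;

	/* Overlay a writable anonymous VMA at a fixed offset. */
	vma = mmap(&carveout[page], 3 * page, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
	if (vma == MAP_FAILED)
		return NULL;
	vma[0] = 'x'; /* Fault it in so an anon_vma is attached. */

	/* Relocate it within the carveout, as the tests do before placing
	 * the unfaulted neighbour VMAs. */
	return mremap(vma, 3 * page, 3 * page,
		      MREMAP_FIXED | MREMAP_MAYMOVE, &carveout[20 * page]);
}
```

The fork variants exercise the same moves after `fork()`, where the child's `anon_vma` chains differ from the parent's, so the final VMAs must not merge in the forked case; hence the `variant->forked ? 3 * page_size : ...` expected-size checks.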
+1
tools/testing/selftests/x86/Makefile
··· 36 36 BINARIES_64 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_64)) 37 37 38 38 CFLAGS := -O2 -g -std=gnu99 -pthread -Wall $(KHDR_INCLUDES) 39 + CFLAGS += -I $(top_srcdir)/tools/testing/selftests/ 39 40 40 41 # call32_from_64 in thunks.S uses absolute addresses. 41 42 ifeq ($(CAN_BUILD_WITH_NOPIE),1)
+12
tools/testing/vsock/util.c
··· 511 511 512 512 printf("ok\n"); 513 513 } 514 + 515 + printf("All tests have been executed. Waiting for the other peer..."); 516 + fflush(stdout); 517 + 518 + /* 519 + * Final full barrier, to ensure that all tests have been run and 520 + * that even the last one has been successful on both sides. 521 + */ 522 + control_writeln("COMPLETED"); 523 + control_expectln("COMPLETED"); 524 + 525 + printf("ok\n"); 514 526 } 515 527 516 528 void list_tests(const struct test_case *test_cases)
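The final "COMPLETED" exchange added above is a full barrier: each peer writes the marker line and then blocks until it reads the same line back, so neither side can exit while the other is still running its last test. The real helpers are `control_writeln()`/`control_expectln()` over the vsock control channel; a self-contained sketch over a `socketpair()` standing in for that channel:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send a newline-terminated control message (stand-in for
 * control_writeln()). */
static void barrier_writeln(int fd, const char *msg)
{
	dprintf(fd, "%s\n", msg);
}

/* Block until a line arrives and check it matches (stand-in for
 * control_expectln()). Returns 0 on match, -1 otherwise. */
static int barrier_expectln(int fd, const char *msg)
{
	char buf[64] = { 0 };
	ssize_t n = read(fd, buf, sizeof(buf) - 1);

	if (n <= 0)
		return -1;
	buf[strcspn(buf, "\n")] = '\0';
	return strcmp(buf, msg) == 0 ? 0 : -1;
}
```

Because each side writes before it reads, the exchange cannot deadlock; it only completes once both peers have reached the end of their test lists.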